Posts Tagged ‘System Management’

Upgrading RHN Satellite 5.5 to 5.6

Saturday, October 12th, 2013

Red Hat released version 5.6 of the Red Hat Satellite. Time to have a closer look at it and at how to upgrade from version 5.5.

New features

  • Finally, PostgreSQL support is mature enough for enterprise usage. No need for a closed-source database anymore. This also brings new capabilities such as online backups, which were previously only available with an external Oracle database and thus required a DBA.

    PostgreSQL also brings some performance benefits over the embedded Oracle database delivered with 5.5 and earlier. Disclaimer: I did not run any benchmarks, but it “feels” much faster.

  • If you are using the multi-org feature, you may be happy about the enhancements to Inter-Satellite Sync (ISS). Now you can define access rights to different software channels for different organizations.
  • It is not a new feature, but it is now supported: cobbler buildiso. It is a handy solution if you cannot use PXE boot in your environment. cobbler buildiso generates a small boot image which lets you select the installation of a system from a boot menu (see the sketch after this feature list).
  • Integrated System Asset Manager (SAM), which is based on Candlepin and allows you to assess your system landscape for subscription compliance.
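
    A minimal sketch of how cobbler buildiso can be used; the profile name and the output path are placeholders, check cobbler buildiso --help for the options available in your Cobbler version:

    [root@rhnsat ~]# cobbler buildiso --iso=/tmp/cobbler-boot.iso --profiles="rhel6-x86_64-base"

    Attach the resulting ISO to a machine that cannot PXE boot and pick the desired entry from the boot menu.
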
Upgrading from RHN Satellite 5.5

    The first thing you will probably ask: Is it possible and supported to migrate from the embedded Oracle database to PostgreSQL? Is it hassle-free and bullet-proof? Yes, it is.

    Keep in mind

  • As always: Have a look at the product documentation before doing anything on a production Satellite.
  • Create a new RHN Satellite Certificate at access.redhat.com
  • Download the ISO image for 5.6
  • Ensure you have a recent database backup
  • Ensure you have a recent backup of your /etc/rhn directory as well as of /var/lib/cobbler
  • Update your existing Satellite 5.5 with the latest available patches
  • Delete unnecessary software channels from the Satellite for faster DB migration
  • Delete old Snapshots to minimize database data to be migrated
  • Make enough storage available to migrate from the embedded Oracle database to PostgreSQL. The data takes roughly the same amount of storage; the PostgreSQL database stores its data in /var/lib/pgsql (a quick check is sketched after this list).
  • Install the latest available package rhn-upgrade: yum install rhn-upgrade
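
    As a quick check for the storage point above (assuming the embedded Oracle data lives in /rhnsat, as in a default installation), compare the size of the Oracle data with the free space on the filesystem holding /var/lib/pgsql:

    [root@rhnsat ~]# du -sh /rhnsat
    [root@rhnsat ~]# df -h /var/lib/pgsql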

    Let's do it: Preparation work

    First of all, create a database backup of your embedded Oracle Database:

    [root@rhnsat ~]# rhn-satellite stop
    [root@rhnsat ~]# su - oracle -c "db-control backup /path/to/your/backup/directory"
    [root@rhnsat ~]# su - oracle -c "db-control verify /path/to/your/backup/directory"
    [root@rhnsat ~]# rhn-satellite start
    

    Backup the rest of your Satellite:

    [root@rhnsat ~]# cp -rp /etc/rhn/ /etc/rhn-$(date +"%F")
    [root@rhnsat ~]# cp -rp /var/lib/cobbler /var/lib/cobbler-$(date +"%F")
    [root@rhnsat ~]# cp -rp /etc/cobbler /etc/cobbler-$(date +"%F")
    

    Update your RHN Satellite 5.5 with the latest available patches and reboot:

    [root@rhnsat ~]# yum -y update && reboot
    

    Ensure the latest schema updates have been applied. The output should read as follows:

    [root@rhnsat ~]# spacewalk-schema-upgrade 
    
    You are about to perform upgrade of your satellite-schema.
    
    For general instructions on Red Hat Satellite schema upgrade, please consult
    the following article:
    
    https://access.redhat.com/knowledge/articles/273633
    
    Hit Enter to continue or Ctrl+C to interrupt: 
    Schema upgrade: [satellite-schema-5.6.0.10-1.el6sat] -> [satellite-schema-5.6.0.10-1.el6sat]
    Your database schema already matches the schema package version [satellite-schema-5.6.0.10-1.el6sat].
    [root@rhnsat ~]#
    

    It is always a good idea to restart the software and check that everything is working as expected *before* doing an upgrade, so you can pinpoint problems more easily if any show up.

    [root@rhnsat ~]# rhn-satellite restart
    

    Review your list of software channels and delete unused ones. This example will delete the channel rhel-i386-rhev-agent-6-server:

    [root@rhnsat ~]# spacewalk-remove-channel -c rhel-i386-rhev-agent-6-server
    Deleting package metadata (20):
                      ________________________________________
    Removing:         ######################################## - complete
    [root@rhnsat ~]#  
    

    Delete old system snapshots that are no longer needed. The following example deletes all snapshots older than one month:

    [root@rhnsat ~]# sw-system-snapshot --delete --all --start-date 200001010000 --end-date $(date -d "-1 months" "+%Y%m%d0000")
    

    Update the rhn-upgrade package to the latest available version:

    yum install rhn-upgrade
    

    After installing the rhn-upgrade package, the SQL scripts needed for the DB migration are installed, as well as some documentation you should read. They are located in /etc/sysconfig/rhn/satellite-upgrade/doc.

    Upgrade Procedure

    Mount the downloaded ISO image:

    [root@rhnsat ~]# mount satellite-5.6.0-20130927-rhel-6-x86_64.iso /mnt -o loop && cd /mnt
    [root@rhnsat mnt]# 
    

    If you operate your Satellite behind a proxy, you need to upgrade it in disconnected mode; if not, omit the --disconnected parameter.

    [root@rhnsat mnt]# ./install.pl --disconnected --upgrade
    * Starting the Spacewalk installer.
    * Performing pre-install checks.
    * Pre-install checks complete.  Beginning installation.
    * RHN Registration.
    ** Registration: Disconnected mode.  Not registering with RHN.
    * Upgrade flag passed.  Stopping necessary services.
    * Purging conflicting packages.
    * Checking for uninstalled prerequisites.
    ** Checking if yum is available ...
    There are some packages from Red Hat Enterprise Linux that are not part
    of the @base group that Satellite will require to be installed on this
    system. The installer will try resolve the dependencies automatically.
    However, you may want to install these prerequisites manually.
    Do you want the installer to resolve dependencies [y/N]? y
    * Installing RHN packages.
    * Now running spacewalk-setup.
    * Setting up Selinux..
    ** Database: Setting up database connection for PostgreSQL backend.
    ** Database: Installing the database:
    ** Database: This is a long process that is logged in:
    ** Database:   /var/log/rhn/install_db.log
    *** Progress: #
    ** Database: Installation complete.
    ** Database: Populating database.
    *** Progress: ###################################
    * Database: Starting Oracle to PostgreSQL database migration.
    ** Database: Starting embedded Oracle database.
    ** Database: Trying to connect to Oracle database: succeded.
    ** Database: Migrating data.
    *** Database: Migration process logged at: /var/log/rhn/rhn_db_migration.log
    ** Database: Data migration successfully completed.
    ** Database: Stoping embedded Oracle database.
    * Setting up users and groups.
    ** GPG: Initializing GPG and importing key.
    * Performing initial configuration.
    * Activating Red Hat Satellite.
    ** Certificate not activated.
    ** Upgrade process requires the certificate to be activated after the schema is upgraded.
    * Enabling Monitoring.
    * Configuring apache SSL virtual host.
    Should setup configure apache's default ssl server for you (saves original ssl.conf) [Y]? y
    * Configuring tomcat.
    ** /etc/sysconfig//tomcat6 has been backed up to tomcat6-swsave
    ** /etc/tomcat6//tomcat6.conf has been backed up to tomcat6.conf-swsave
    Reversed (or previously applied) patch detected!  Skipping patch.
    1 out of 1 hunk ignored -- saving rejects to file web.xml.rej
    * Configuring jabberd.
    * Creating SSL certificates.
    ** Skipping SSL certificate generation.
    * Deploying configuration files.
    * Update configuration in database.
    * Setting up Cobbler..
    cobblerd does not appear to be running/accessible
    Cobbler requires tftp and xinetd services be turned on for PXE provisioning functionality. Enable these services [Y]? 
    This portion of the Red Hat Satellite upgrade process has successfully completed.
    Please refer to appropriate upgrade document in /etc/sysconfig/rhn/satellite-upgrade
    for any remaining steps in the process.
    [root@rhnsat mnt]# 
    

    Depending on the size of your database and the speed of your disks, the upgrade procedure can take many hours.

    The next step is to have a look at diff /etc/rhn/rhn.conf /etc/rhn-$(date +"%F")/rhn.conf
    and edit /etc/rhn/rhn.conf accordingly. You will probably see missing settings such as proxy, server.satellite.rhn_parent etc. Also change the setting disconnected to 0.
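
    A minimal sketch of that step; the proxy value is a placeholder and the exact set of keys you need to carry over depends on your old configuration:

    [root@rhnsat ~]# diff /etc/rhn/rhn.conf /etc/rhn-$(date +"%F")/rhn.conf
    [root@rhnsat ~]# vi /etc/rhn/rhn.conf
    # re-add what is missing, for example:
    #   server.satellite.http_proxy = proxy.example.com:3128
    #   server.satellite.rhn_parent = satellite.rhn.redhat.com
    # and make sure the Satellite is no longer in disconnected mode:
    #   disconnected = 0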

    After checking and correcting the config file you can activate the Satellite:

    [root@rhnsat ~]# rhn-satellite-activate --rhn-cert=/root/rhns-cert56.cert --ignore-version-mismatch
    

    After the activation, the system is subscribed to the software channel “redhat-rhn-satellite-5.6-server-x86_64-6”. Now bring the Satellite to the latest available patch level:

    [root@rhnsat ~]# yum -y update 
    

    Stop and disable Oracle
    Before doing any database-related actions, it is better to stop the old Oracle database, to be sure everything is now running on PostgreSQL.

    [root@rhnsat ~]# service oracle stop
    Shutting down Oracle Net Listener ...                      [  OK  ]
    Shutting down Oracle DB instance "rhnsat" ...              [  OK  ]
    [root@rhnsat ~]# chkconfig oracle off
    [root@rhnsat ~]# rhn-satellite restart
    

    Aftermath

    Check if your database schema is up-to-date:

    root@rhnsat ~]# spacewalk-schema-upgrade 
    
    You are about to perform upgrade of your satellite-schema.
    
    For general instructions on Red Hat Satellite schema upgrade, please consult
    the following article:
    
    https://access.redhat.com/knowledge/articles/273633
    
    Hit Enter to continue or Ctrl+C to interrupt: 
    Schema upgrade: [satellite-schema-5.6.0.10-1.el6sat] -> [satellite-schema-5.6.0.10-1.el6sat]
    Your database schema already matches the schema package version [satellite-schema-5.6.0.10-1.el6sat].
    [root@rhnsat ~]# 
    

    Rebuild the search index:

    [root@rhnsat ~]# service rhn-search cleanindex
    Stopping rhn-search...
    Stopped rhn-search.
    Starting rhn-search...
    [root@rhnsat ~]# 
    

    Recreate the software channel meta data:

    [root@rhnsat doc]# /etc/sysconfig/rhn/satellite-upgrade/scripts/regenerate-repodata -a
    Scheduling repodata creation for 'rhel-x86_64-server-supplementary-6'
    Scheduling repodata creation for 'rhel-x86_64-server-6'
    Scheduling repodata creation for 'rhn-tools-rhel-x86_64-server-6'
    [root@rhnsat doc]# 
    

    Check functionality
    Before removing the Oracle database, run your tests to validate the Satellite's functionality. Please proceed as stated in /etc/sysconfig/rhn/satellite-upgrade/doc/verification.txt

    This is an important step, as we are getting rid of the Oracle database later on. To be sure everything works as expected, run a complete functionality test of the important features.

    To be on the safe side, let the Satellite run for a few days with Oracle still installed.

    Getting rid of Oracle

    Please read /etc/sysconfig/rhn/satellite-upgrade/doc/satellite-upgrade-postgresql.txt first!

    [root@rhnsat ~]# yum remove *oracle*
    

    Getting rid of the last Oracle bits:

    [root@rhnsat ~]# rm -rf /rhnsat /opt/apps/oracle /usr/lib/oracle/
    

    Result:
    Having fun with a faster Satellite with an open source database :-)

    Disclaimer
    I take no responsibility for damaged Satellites, lost data etc. If in doubt, stick to the official product documentation at http://access.redhat.com

Host based access control with IPA

Saturday, March 2nd, 2013

Host based access control is easy with IPA/FreeIPA, very easy.

Let's assume you want to have a host group called rhel-prod and a user group called prod-admins, and you want to let those users access the servers in the rhel-prod group via ssh from any host that can reach the servers. Let's call the HBAC rule prod-admins.

You can either use the web GUI or the command line interface.

Let's create the user group:

[root@ipa1 ~]# ipa group-add prod-admins --desc="Production System Admins"
-------------------------
Added group "prod-admins"
-------------------------
  Group name: prod-admins
  Description: Production System Admins
  GID: 1222000004
[root@ipa1 ~]# 

Add some users to the user group:

[root@ipa1 ~]# ipa group-add-member prod-admins --users=luc,htester
  Group name: prod-admins
  Description: Production System Admins
  GID: 1222000004
  Member users: luc, htester
-------------------------
Number of members added 2
-------------------------
[root@ipa1 ~]# 

And the host group:

[root@ipa1 ~]# ipa hostgroup-add rhel-prod --desc "Production Servers"
---------------------------
Added hostgroup "rhel-prod"
---------------------------
  Host-group: rhel-prod
  Description: Production Servers
[root@ipa1 ~]#

Add some servers as members of the host group

[root@ipa1 ~]# ipa hostgroup-add-member rhel-prod --hosts=ipaclient1.example.com,ipaclient2.example.com
  Host-group: rhel-prod
  Description: Production Servers
  Member hosts: ipaclient1.example.com, ipaclient2.example.com
-------------------------
Number of members added 2
-------------------------
[root@ipa1 ~]#

Note: the servers are comma separated, without a space after the comma

Let's define the HBAC rule:

[root@ipa1 ~]# ipa hbacrule-add --srchostcat=all prod-admins
-----------------------------
Added HBAC rule "prod-admins"
-----------------------------
  Rule name: prod-admins
  Source host category: all
  Enabled: TRUE
[root@ipa1 ~]#

Add the user group to the rule:

[root@ipa1 ~]# ipa hbacrule-add-user --groups prod-admins prod-admins
  Rule name: prod-admins
  Source host category: all
  Enabled: TRUE
  User Groups: prod-admins
-------------------------
Number of members added 1
-------------------------
[root@ipa1 ~]#

Add the service to the rule:

[root@ipa1 ~]# ipa hbacrule-add-service --hbacsvcs sshd prod-admins
  Rule name: prod-admins
  Source host category: all
  Enabled: TRUE
  User Groups: prod-admins
  Services: sshd
-------------------------
Number of members added 1
-------------------------
[root@ipa1 ~]#

And finally add the host group to the rule

[root@ipa1 ~]# ipa hbacrule-add-host --hostgroups rhel-prod prod-admins
  Rule name: prod-admins
  Source host category: all
  Enabled: TRUE
  User Groups: prod-admins
  Host Groups: rhel-prod
  Services: sshd
-------------------------
Number of members added 1
-------------------------
[root@ipa1 ~]#

Of course you can enhance the rule by adding other services, restricting the access to come only from particular source hosts, and so on.
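
As an illustration, and only as a sketch (the service must exist as an HBAC service in your IPA installation), you could allow sudo through the same rule and then verify the result with hbactest:

[root@ipa1 ~]# ipa hbacrule-add-service --hbacsvcs sudo prod-admins
[root@ipa1 ~]# ipa hbactest --user=luc --host=ipaclient1.example.com --service=sshd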

Have fun :-)

RHEV 3.1 – an overview about the new features

Sunday, December 9th, 2012

Recently Red Hat announced the public availability of RHEV 3.1.

Finally, no more Windows needed for the whole software stack :-)

In 3.0, the new webadmin interface was already included as a tech preview and had its problems. Now with 3.1 it is working great and looks neat. In contrast to 3.0, it now listens on the standard ports 80 and 443. This will probably help users in organizations with strict proxy policies and settings.

So what else is new?

The supported number of virtual CPUs in a guest is now a ridiculous 160, and RAM per guest is at a ridiculous two terabytes. But these are the least important updates.

Especially on the storage side, a lot of effort has been made and long-missing features have been integrated.

From my point of view, the most important new feature is the possibility to attach disks from more than one storage domain to a virtual machine. This allows you to install the operating system on cheap SATA storage while data disks reside on super-fast SSDs.

There is also support for live snapshots, but snapshots are (as on other platforms) somewhat problematic because they are COW (copy-on-write). This can lead to I/O performance problems. Snapshots are a cool feature for e.g. taking a snapshot before updating software. Be sure you remove the snapshot afterwards if you want to keep good I/O performance.

You can now use DirectLUN directly from the GUI without the use of hooks. DirectLUN allows you to attach FibreChannel and iSCSI LUNs directly to a virtual machine. This is great when you want to use shared filesystems such as GFS.

Another nice feature is Live Storage Migration, which is a technology preview, meaning: unsupported for the moment. It will probably be supported in a later version. Storage live migration is a nice feature when you need to free up some space on a storage domain and cannot shut down a VM. Be sure to power-cycle the VM in question as soon as your SLA allows it, to get rid of the snapshot (COW here again).

If you want to script stuff or you are too lazy to open a browser, there is now a CLI available. Have a look at the documentation.

If you want to integrate RHEV deeper into your existing infrastructure, such as RHN Satellite, Cobbler, your-super-duper-CMDB or an IaaS/PaaS broker, there are two different APIs available. For the XML lovers, there is the previously known REST API, which has seen some performance improvements. For the XML haters, there is now a native Python API which allows you to access RHEV entities directly as objects in your Python code. For both APIs, have a look at the documentation.
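
As a rough, unofficial sketch of a REST API call (hostname and credentials are placeholders; on RHEV 3.1 the API typically answers under /api):

# list all virtual machines via the REST API
curl -k -u 'admin@internal:somepassword' -H 'Accept: application/xml' \
     https://rhevm.example.com/api/vms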

I personally like the Python API, because a lot of other Red Hat infrastructure products come with Python APIs. So it is very easy to integrate those software pieces.

Under the hood, it is now powered by JBoss EAP6 instead of version 5. To be able to connect to standard ports 80 and 443, there is an Apache httpd with mod_proxy_ajp.

Have fun :-)

Upgrading RHN Satellite 5.4.1 to 5.5

Sunday, September 23rd, 2012

Red Hat has released RHN Satellite version 5.5. It is mainly a bug-fix release, but it has some interesting new features as well. Here comes a brief guide on how to upgrade your RHN Satellite to the latest version. It is not an official guide, so if you trash your Satellite, it is not my fault…

Preparation
As always, before you upgrade the RHN Satellite, you need to order a new certificate. Open a support case at Red Hat and tell them you need a new certificate for version 5.5.

You also need to download the ISO file for the upgrade, as the packages are only available in the software channel after the upgrade and activation. You can download the ISO at Red Hat's download site. Of course you need to choose the architecture that matches your environment. Note that there is only one ISO available for each architecture, not two as before. The ISO comes with the embedded database. If you need to use an external database, use the --external-db parameter with install.pl

Ensure you have a working backup of your database before starting with the upgrade. Do this as follows:

su - oracle
db-control backup /your/back/up/directory
db-control verify /your/back/up/directory

A backup of your /etc/rhn directory is also a good idea, just for the case something is going wrong: cp -rp /etc/rhn /etc/rhn-$(date +"%F")

Ensure your database has enough free table space left. For DATA_TBS and UNDO_TBS it should be at least 1 GByte, better 2 GByte. The following output shows an example:

[root@rhns ~]# su - oracle
-bash-4.1$ db-control report
Tablespace                  Size    Used   Avail   Use%
DATA_TBS                   16.1G   12.6G    3.5G    78%
SYSAUX                      500M  182.6M  317.3M    37%
SYSTEM                      400M  254.1M  145.8M    64%
TEMP_TBS                   1000M      0B   1000M     0%
UNDO_TBS                    3.9G  474.7M    3.4G    12%
USERS                       128M     64K  127.9M     0%
-bash-4.1$ 

You can grow the table spaces if needed by running db-control extend UNDO_TBS.

It is also very important to have enough free space in the /rhnsat filesystem, since db-control gather-stats needs some extra space. Plan for at least 2 GByte to be on the safe side.

Having a look at the official upgrade guide is strongly recommended.

First you need to loop-back mount the ISO image and cd into the mountpoint:

[root@rhns ~]# mount satellite-5.5.0-20120911-rhel-6-x86_64.iso /mnt -o loop
[root@rhns ~]# cd /mnt
[root@rhns mnt]# 

Next step is to install the rhn-upgrade package.

[root@rhns mnt]# yum -y install rhn-upgrade
Loaded plugins: product-id, rhnplugin, subscription-manager
Updating certificate-based repositories.
Unable to read consumer identity
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package rhn-upgrade.noarch 0:5.5.0.16-1.el6sat will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=====================================================================================================
 Package         Arch       Version               Repository                                    Size
=====================================================================================================
Installing:
 rhn-upgrade     noarch     5.5.0.16-1.el6sat     redhat-rhn-satellite-5.4-server-x86_64-6      38 k

Transaction Summary
=====================================================================================================
Install       1 Package(s)

Total download size: 38 k
Installed size: 0  
Downloading Packages:
rhn-upgrade-5.5.0.16-1.el6sat.noarch.rpm                                      |  38 kB     00:00     
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : rhn-upgrade-5.5.0.16-1.el6sat.noarch                                              1/1 
Installed products updated.
  Verifying  : rhn-upgrade-5.5.0.16-1.el6sat.noarch                                              1/1 

Installed:
  rhn-upgrade.noarch 0:5.5.0.16-1.el6sat                                                             

Complete!
[root@rhns mnt]# 

The package contains documents and scripts to help you with the upgrade. They are located in the directory /etc/sysconfig/rhn/satellite-upgrade. Read those documents carefully before proceeding with the upgrade.

Upgrading
Let's do it… run the installer script with the --upgrade parameter; the answers given at the interactive prompts are shown inline.

[root@rhns mnt]# ./install.pl --upgrade
* Starting the Red Hat Network Satellite installer.
* Performing pre-install checks.
* Pre-install checks complete.  Beginning installation.
* RHN Registration.
** Registration: System is already registered with RHN.  Not re-registering.
* Upgrade flag passed.  Stopping necessary services.
* Purging conflicting packages.
* Checking for uninstalled prerequisites.
** Checking if yum is available ...
There are some packages from Red Hat Enterprise Linux that are not part
of the @base group that Satellite will require to be installed on this
system. The installer will try resolve the dependencies automatically.
However, you may want to install these prerequisites manually.
Do you want the installer to resolve dependencies [y/N]? y
* Applying updates.
* Installing RHN packages.
Warning: yum did not install the following packages:
	geronimo-specs-compat
* Now running spacewalk-setup.
* Setting up Oracle environment.
* Setting up database.
** Database: Upgrading the database server to latest Oracle 10g:
** Database: This is a long process that is logged in:
** Database: /var/log/rhn/upgrade_db.log
*** Progress: ##############################################################
** Database: Setting up database connection for Oracle backend.
** Database: Testing database connection.
** Database: Populating database.
** Database: Skipping database population.
* Setting up users and groups.
** GPG: Initializing GPG and importing key.
* Performing initial configuration.
* Activating RHN Satellite.
** Certificate not activated.
** Upgrade process requires the certificate to be activated after the schema is upgraded.
* Enabling Monitoring.
* Configuring apache SSL virtual host.
Should setup configure apache's default ssl server for you (saves original ssl.conf) [Y]? y
* Configuring tomcat.
** /etc/tomcat6/tomcat6.conf has been backed up to tomcat6.conf-swsave
** /etc/tomcat6/server.xml has been backed up to server.xml-swsave
Reversed (or previously applied) patch detected!  Skipping patch.
1 out of 1 hunk ignored -- saving rejects to file web.xml.rej
* Configuring jabberd.
* Creating SSL certificates.
** Skipping SSL certificate generation.
* Deploying configuration files.
* Update configuration in database.
* Setting up Cobbler..
cobblerd does not appear to be running/accessible
Cobbler requires tftp and xinetd services be turned on for PXE provisioning functionality. Enable these services [Y]? y
cobblerd does not appear to be running/accessible
This portion of the RHN Satellite upgrade process has successfully completed.
Please refer to appropriate upgrade document in /etc/sysconfig/rhn/satellite-upgrade
for any remaining steps in the process.
[root@rhns mnt]# 

Now some database actions are needed. Make sure your Satellite is stopped and only the database is running:

rhn-satellite stop
service oracle start

You need to create schema statistics:

su - oracle
-bash-4.1$ db-control gather-stats
Gathering statistics...
WARNING: this may be a very slow process.
done.
-bash-4.1$ 

Now it is time to upgrade the database schema

[root@rhns mnt]# spacewalk-schema-upgrade
Schema upgrade: [satellite-schema-5.4.0.19-1.el6sat] -> [satellite-schema-5.5.0.13-1.el6sat]
Searching for upgrade path: [satellite-schema-5.4.0.19-1] -> [satellite-schema-5.5.0.13-1]
Searching for upgrade path: [satellite-schema-5.4.0.19] -> [satellite-schema-5.5.0.13]
Searching for upgrade path: [satellite-schema-5.4.0] -> [satellite-schema-5.5.0]
Searching for upgrade path: [satellite-schema-5.4] -> [satellite-schema-5.5]
The path: [satellite-schema-5.4] -> [satellite-schema-5.5]
Planning to run spacewalk-sql with [/var/log/spacewalk/schema-upgrade/20120922-132500-script.sql]
Hit Enter to continue or Ctrl+C to interrupt: Enter
Executing spacewalk-sql, the log is in [/var/log/spacewalk/schema-upgrade/20120922-132500-to-satellite-schema-5.5.log].
The database schema was upgraded to version [satellite-schema-5.5.0.13-1.el6sat].
[root@rhns mnt]# 

Now it is time to activate your RHN Satellite, so it can receive updates for the Satellite itself and you can run satellite-sync:

[root@rhns ~]# rhn-satellite-activate --ignore-version-mismatch --rhn-cert=/root/rhns-cert55.cert 
RHN_PARENT: satellite.rhn.redhat.com
[root@rhns ~]# 

To rebuild the search index please run service rhn-search cleanindex

[root@rhns ~]# service rhn-search cleanindex
Stopping rhn-search...
rhn-search was not running.
Starting rhn-search...
[root@rhns ~]# 

Before restarting the RHN Satellite, check if any updates are available for it.
yum -y update

Afterwards, please check if there is another database schema update available. If the output looks like the following, you are safe.

[root@rhns ~]# spacewalk-schema-upgrade
Schema upgrade: [satellite-schema-5.5.0.13-1.el6sat] -> [satellite-schema-5.5.0.13-1.el6sat]
Your database schema already matches the schema package version [satellite-schema-5.5.0.13-1.el6sat].
[root@rhns ~]# 

As a verification that the upgrade went fine, run a satellite-sync to sync some new content and update a registered server. If you have more than one Satellite, run an ISS (Inter-Satellite Sync) to prove its functionality.
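
A minimal sketch of such a verification sync; the channel label is just an example, pick one that exists on your Satellite:

[root@rhns ~]# satellite-sync -c rhel-x86_64-server-6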

Troubleshooting
If something goes wrong with the database update, before reverting to a backup, first check the Oracle alert file /rhnsat/admin/rhnsat/bdump/alert_rhnsat.log to figure out what went wrong. Another good place to look are the trace files located in /rhnsat/admin/rhnsat/udump

PAM and IPA authentication for RHN Satellite

Sunday, August 12th, 2012

If you have a larger installation on your site, you may wish to have a single source of credentials not only for common system services, but for your RHN Satellite too.

This post will show you how to configure your RHN Satellite server to use PAM with SSSD. SSSD, the System Security Services Daemon, is a common framework to provide authentication services. Needless to say, IPA is supported as well.

Assumptions:

  • You have a RHN Satellite running on RHEL6
  • You have an IPA infrastructure running (at least on RHEL 6.2)

Preparations
First you need to install the ipa-client on your satellite:

yum -y install ipa-client

And then join the server to your IPA environment:

ipa-client-install -p admin

Configuring PAM as follows:

cat << EOF > /etc/pam.d/rhn-satellite
auth        required      pam_env.so
auth        sufficient    pam_sss.so 
auth        required      pam_deny.so
account     sufficient    pam_sss.so
account     required      pam_deny.so
EOF

Configure the RHN Satellite
Your Satellite now needs to be made aware that users can be authenticated with PAM against IPA.

echo "pam_auth_service = rhn-satellite" >> /etc/rhn/rhn.conf

If you have users in your IPA domain with usernames shorter than five characters, you will need to add one more line to be able to create the users in RHN Satellite:

echo "web.min_user_len = 3" >>   /etc/rhn/rhn.conf

After this change, restart your RHN Satellite

rhn-satellite restart

Configuring users
Now you can log in to your RHN Satellite with your already configured admin user and select the checkbox “Pluggable Authentication Modules (PAM)” on existing users and/or new users.

Things to be considered
It is strongly recommended to have at least one user per organization (usually an “Organization Administrator”) plus the “RHN Satellite Administrator” not using PAM authentication. Despite the easy implementation of redundancy with IPA, this is important as a fallback for when your IPA environment has service interruptions due to maintenance or failure.

SSSD caches users' credentials on the RHN Satellite system, but this is only true for users who have logged in at least once. The default value for offline_credentials_expiration is 0, which means no cache time limit. However, depending on your organization's security policy, this value can vary. Please check the PAM section in /etc/sssd/sssd.conf
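
As a sketch, limiting the cache to one week would look like this in the [pam] section of /etc/sssd/sssd.conf (the value is in days and just an example):

[pam]
# allow cached credentials to be used for at most 7 days while offline
offline_credentials_expiration = 7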


Identity Management with IPA Part II – Kerberized NFS service

Sunday, December 25th, 2011

In part one I wrote about how to set up an IPA server for basic user authentication.

One reason NFSv4 is not that widespread yet is that it needs Kerberos for proper operation. Of course this is now much easier thanks to IPA.

Goals for this part of the guide

  • Configure IPA to serve the NFS principal
  • Configure NFS to use IPA
  • Configure some IPA clients to use Kerberos for the NFS service

Requirements

  • A running IPA service as discussed in Part I of this guide.
  • An NFS server based on RHEL 6.2
  • One or more IPA-Client

Let's do it
First you need to add the NFS server and its service principal to the IPA server. On ipa1.example.com run:

[root@ipa1 ~]# ipa host-add nfs.example.com
[root@ipa1 ~]# ipa service-add nfs/nfs.example.com

Next, log on to your NFS server, let's call it nfs.example.com, and install the needed additional software packages:

[root@nfs ~]# yum -y install ipa-client nfs-utils

You need to enroll your NFS server in the IPA domain. Run the following on nfs.example.com:

[root@nfs ~]# ipa-client-install -p admin

The next step is to get a Kerberos ticket and fetch the entries needed to be added in the krb5.keytab

[root@nfs ~]# kinit admin
[root@nfs ~]# ipa-getkeytab -s ipa1.example.com -p nfs/nfs.example.com -k /etc/krb5.keytab

Before you proceed to your clients, you need to enable secure NFS, create an export and restart NFS:

[root@nfs ~]# perl -npe 's/#SECURE_NFS="yes"/SECURE_NFS="yes"/g' -i /etc/sysconfig/nfs
[root@nfs ~]# echo "/home  *(rw,sec=sys:krb5:krb5i:krb5p)" >> /etc/exports
[root@nfs ~]# mkdir /home/tester1 && cp /etc/skel/.bash* /home/tester1 && chmod 700 /home/tester1 && chown -R tester1:ipausers /home/tester1
[root@nfs ~]# service nfs restart

Assuming you have already set up one or more IPA clients, it is straightforward to enable kerberized NFS on your systems. Log in to a client and run the following:

[root@ipaclient1 ~]# yum -y install nfs-utils
[root@ipaclient1 ~]# perl -npe 's/#SECURE_NFS="yes"/SECURE_NFS="yes"/g' -i /etc/sysconfig/nfs
[root@ipaclient1 ~]# 

Let's have a look if you have been successful. First look up the user's UID.

[root@ipaclient1 ~]# getent passwd tester1
tester1:*:1037700500:1037700500:Hans Tester:/home/tester1:/bin/bash
[root@ipaclient1 ~]# 

Let's mount that user's home directory manually on a client:

mount -t nfs4 nfs.example.com:/home/tester1 /home/tester1

To check if it is working as expected, issue:

[root@ipaclient1 ~]# su - tester1

Run ls -lan and see if the UID matches the UID you got from getent. If you see UID 4294967294, then something went wrong; this is the UID for the user “nobody” when using NFSv4 on 64-bit machines.
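
Once the manual mount works, a persistent entry in /etc/fstab could look like the following sketch; sec=krb5 has to match one of the security flavours exported on the NFS server:

nfs.example.com:/home/tester1  /home/tester1  nfs4  sec=krb5  0 0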

What's next?
You will figure out when I post part III of this guide :-)

Have fun!

Identity Management with IPA Part I

Saturday, December 17th, 2011

Red Hat released RHEL 6.2 on December 6th. From my point of view, the greatest news in the release is that IPA (now called Identity Management) is fully supported and available in the RHEL 6 base channel without additional subscription costs.

The upstream project is FreeIPA, which is available through the default Fedora repos.

About central Identity Management
IPA stands for Identity, Policy, Audit. The focus in this article is on the identity management of users.

In the past, there have been a lot of solutions available to centrally manage users and their access to services. Just to name a few: LDAP, Kerberos, PAM, MS Active Directory, Novell Directory Server and countless others. All of those solutions have one thing in common: they are very powerful and very complex to set up and maintain. Because they are so complex, a lot of system administrators just do not use them and distribute SSH keys, user credentials etc. by script, without real central management, which is the nightmare of every security officer.

What is IPA?
The missing solution was a glue of LDAP and Kerberos which is easy to install and maintain, redundant, and scalable from small office environments up to large enterprise installations. Here it comes: IPA, which makes system administrators and security managers friends again.

IPA comes with a powerful CLI and a web interface for people that are afraid of a shell.

One of the cool things in IPA is its multi-master replication feature and automatic failover facility. The clients are able to look up IPA servers via SRV DNS records, which are, of course, handled by IPA.
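
A quick way to see this in action once the setup below is complete is to query those SRV records directly, for example with dig:

dig +short SRV _ldap._tcp.example.com
dig +short SRV _kerberos._udp.example.com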

Let's do some stuff
One thing is just writing about how cool IPA is, but let's actually set up a highly available, centrally managed identity management system. This guide is written for RHEL 6.2 IPA servers and clients but should also work with FreeIPA on Fedora 15 and later (let me know if you run into issues).

Requirements
Requirements are straightforward:

  • 1Gbyte of RAM
  • approx. 6Gbyte of disk (including operating system)
  • NTP
  • DNS entries for all IPA servers (including PTR records)
  • Fully updated RHEL 6.2 GA
  • Firefox on the IPA servers if you want to use the web interface

NTP is very important since Kerberos is quite picky about synchronized system time. Ensure it is configured and running on all involved servers.

Assumptions

  • IP network is 192.168.100.0/24
  • Domain is example.com
  • Kerberos realm is EXAMPLE.COM
  • IPA-Server 1 is ipa1.example.com
  • IPA-Server 2 is ipa2.example.com
  • IPA-Client 1 is ipa-client1.example.com
  • IPA-Client 2 is ipa-client2.example.com
  • All passwords used are “somepassword” (needless to tell you to choose your own passwords)
  • Main DNS is at 192.168.100.1
  • IPA-Clients are using ipa1.example.com and ipa2.example.com as their DNS servers.

Installation of the first IPA Server

yum -y install ipa-server bind-dyndb-ldap firefox xorg-x11-xauth

You are now ready to set up IPA. There are just a couple of questions; the non-default answers for this example are shown inline.

[root@ipa1 ~]# ipa-server-install --setup-dns --forwarder=192.168.100.1
The log file for this installation can be found in /var/log/ipaserver-install.log
==============================================================================
This program will set up the IPA Server.

This includes:
  * Configure a stand-alone CA (dogtag) for certificate management
  * Configure the Network Time Daemon (ntpd)
  * Create and configure an instance of Directory Server
  * Create and configure a Kerberos Key Distribution Center (KDC)
  * Configure Apache (httpd)
  * Configure DNS (bind)

To accept the default shown in brackets, press the Enter key.

Existing BIND configuration detected, overwrite? [no]: yes
Enter the fully qualified domain name of the computer
on which you're setting up server software. Using the form
.
Example: master.example.com.


Server host name [ipa1.example.com]:

Warning: skipping DNS resolution of host ipa1.example.com
The domain name has been calculated based on the host name.

Please confirm the domain name [example.com]:

The IPA Master Server will be configured with
Hostname:    ipa1.example.com
IP address:  192.168.100.227
Domain name: example.com

The kerberos protocol requires a Realm name to be defined.
This is typically the domain name converted to uppercase.

Please provide a realm name [EXAMPLE.COM]:
Certain directory server operations require an administrative user.
This user is referred to as the Directory Manager and has full access
to the Directory for system management tasks and will be added to the
instance of directory server created for IPA.
The password must be at least 8 characters long.

Directory Manager password: somepassword
Password (confirm): somepassword

The IPA server requires an administrative user, named 'admin'.
This user is a regular system account used for IPA server administration.

IPA admin password: somepassword
Password (confirm): somepassword

Do you want to configure the reverse zone? [yes]:
Please specify the reverse zone name [100.168.192.in-addr.arpa.]:
Using reverse zone 100.168.192.in-addr.arpa.

The following operations may take some minutes to complete.
Please wait until the prompt is returned.
Configuring ntpd
  [1/4]: stopping ntpd
  [2/4]: writing configuration
  [3/4]: configuring ntpd to start on boot
  [4/4]: starting ntpd
done configuring ntpd.
Configuring directory server for the CA: Estimated time 30 seconds
  [1/3]: creating directory server user
  [2/3]: creating directory server instance
  [3/3]: restarting directory server
done configuring pkids.

Lot of output omitted

Configuring named:
  [1/9]: adding DNS container
  [2/9]: setting up our zone
  [3/9]: setting up reverse zone
  [4/9]: setting up our own record
  [5/9]: setting up kerberos principal
  [6/9]: setting up named.conf
  [7/9]: restarting named
  [8/9]: configuring named to start on boot
  [9/9]: changing resolv.conf to point to ourselves
done configuring named.
==============================================================================
Setup complete

Next steps:
        1. You must make sure these network ports are open:
                TCP Ports:
                  * 80, 443: HTTP/HTTPS
                  * 389, 636: LDAP/LDAPS
                  * 88, 464: kerberos
                  * 53: bind
                UDP Ports:
                  * 88, 464: kerberos
                  * 53: bind
                  * 123: ntp

        2. You can now obtain a kerberos ticket using the command: 'kinit admin'
           This ticket will allow you to use the IPA tools (e.g., ipa user-add)
           and the web user interface.

Be sure to back up the CA certificate stored in /root/cacert.p12
This file is required to create replicas. The password for this
file is the Directory Manager password
[root@ipa1 ~]#

You now need to get a Kerberos ticket:

[root@ipa1 ~]# kinit admin
Password for admin@EXAMPLE.COM:
[root@ipa1 ~]#

Fire up firefox and point it to https://ipa1.example.com and follow the link provided in the error message. You will see the instructions needed to use Kerberos as authentication method. When importing the cert into Firefox, REALLY check all three boxes!

Afterwards you are automatically logged in, provided you obtained your Kerberos ticket before (kinit admin).

Setting up a Replica
For now, we have one IPA server. If it fails, no one can log in to any system anymore. This is of course unacceptable and needs to be changed. So let's set up a replica to add high availability to our central identity management system.

Log in to ipa1.example.com and fire up ipa-replica-prepare to collect the data needed for the replica.

Non-default answers are shown inline.

[root@ipa1 ~]# ipa-replica-prepare ipa2.example.com

Directory Manager (existing master) password: somepassword

Preparing replica for ipa2.example.com from ipa1.example.com
Creating SSL certificate for the Directory Server
Creating SSL certificate for the dogtag Directory Server
Creating SSL certificate for the Web Server
Exporting RA certificate
Copying additional files
Finalizing configuration
Packaging replica information into /var/lib/ipa/replica-info-ipa2.example.com.gpg
[root@ipa1 ~]#

/var/lib/ipa/replica-info-ipa2.example.com.gpg contains all the information needed to set up the replica. You need to copy it, e.g. with scp, to ipa2.example.com.

Now log in to ipa2.example.com and fire up ipa-replica-install

[root@ipa2 ~]# ipa-replica-install --setup-dns --forwarder=192.168.100.1 replica-info-ipa2.example.com.gpg

Directory Manager (existing master) password: somepassword

Run connection check to master
Check connection from replica to remote master 'ipa1.example.com':
   Directory Service: Unsecure port (389): OK
   Directory Service: Secure port (636): OK
   Kerberos KDC: TCP (88): OK
   Kerberos KDC: UDP (88): OK
   Kerberos Kpasswd: TCP (464): OK
   Kerberos Kpasswd: UDP (464): OK
   HTTP Server: port 80 (80): OK
   HTTP Server: port 443(https) (443): OK

Connection from replica to master is OK.
Start listening on required ports for remote master check
Get credentials to log in to remote master
admin@EXAMPLE.COM password:

Execute check on remote master
Check connection from master to remote replica 'ipa2.example.com':
   Directory Service: Unsecure port (389): OK
   Directory Service: Secure port (636): OK
   Kerberos KDC: TCP (88): OK
   Kerberos KDC: UDP (88): OK
   Kerberos Kpasswd: TCP (464): OK
   Kerberos Kpasswd: UDP (464): OK
   HTTP Server: port 80 (80): OK
   HTTP Server: port 443(https) (443): OK

Connection from master to replica is OK.

Connection check OK
Configuring ntpd
  [1/4]: stopping ntpd
  [2/4]: writing configuration
  [3/4]: configuring ntpd to start on boot
  [4/4]: starting ntpd
done configuring ntpd.
Configuring directory server: Estimated time 1 minute

Lot of output omitted

Using reverse zone 100.168.192.in-addr.arpa.
Configuring named:
  [1/8]: adding NS record to the zone
  [2/8]: setting up reverse zone
  [3/8]: setting up our own record
  [4/8]: setting up kerberos principal
  [5/8]: setting up named.conf
  [6/8]: restarting named
  [7/8]: configuring named to start on boot
  [8/8]: changing resolv.conf to point to ourselves
done configuring named.
[root@ipa2 ~]#

On ipa2, you need a Kerberos Ticket as well:

root@ipa2 ~]# kinit admin

Some adjustment
Unfortunately the default shell for new users is /bin/sh, which should probably be changed.

ipa config-mod --defaultshell=/bin/bash

Testing the replication
Log in to ipa1.example.com and add a new user:

ipa user-add tester1
ipa passwd tester1

You can now check if the user is really available on both servers by firing an ldapsearch command:

ldapsearch -x -b "dc=example, dc=com" uid=tester1

Compare the results from both servers. If they are the same, you have successfully set up your two-node replicated highly available IPA server.

What if ipa1.example.com is not available when I need to add a new user?
Simple answer: There is one way to find out….

Shut down ipa1.example.com
Log in to ipa2.example.com and add a new user:

root@ipa2 ~]# ipa user-add tester2

Start up ipa1.example.com again and run the ldapsearch again:

ldapsearch -x -b "dc=example, dc=com" uid=tester2

Set up an IPA client
What's a centrally managed identity management server worth without a client? Nada! Let's set up a RHEL 6.2 server as a client:

[root@ipaclient1 ~]# yum  install ipa-client

After the installation, the setup program needs to be fired up. Non-default answers are shown inline.

[root@ipaclient1 ~]# ipa-client-install -p admin
Discovery was successful!
Hostname: ipaclient1.example.com
Realm: EXAMPLE.COM
DNS Domain: example.com
IPA Server: ipa1.example.com
BaseDN: dc=example,dc=com


Continue to configure the system with these values? [no]: yes
Synchronizing time with KDC...
Password for admin@EXAMPLE.COM: somepassword

Enrolled in IPA realm EXAMPLE.COM
Created /etc/ipa/default.conf
Configured /etc/sssd/sssd.conf
Configured /etc/krb5.conf for IPA realm EXAMPLE.COM
Warning: Hostname (ipaclient1.example.com) not found in DNS
DNS server record set to: ipaclient1.example.com -> 192.168.100.253
SSSD enabled
NTP enabled
Client configuration complete.
[root@ipaclient1 ~]# 

Testing the login
Log in to your client, you will need to change your password first:

[luc@bond ~]$ ssh 192.168.100.253 -l tester1
tester1@192.168.100.253's password: 
Password expired. Change your password now.
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for user tester1.
Current Password: 
New password: 
Retype new password: 
passwd: all authentication tokens updated successfully.
Connection to 192.168.100.253 closed.
[luc@bond ~]$ ssh 192.168.100.253 -l tester1
tester1@192.168.100.253's password: 
Last login: Sat Dec 17 19:40:10 2011 from bond.home.delouw.ch
Could not chdir to home directory /home/tester1: No such file or directory
-bash-4.1$ 

In this case we do not have a home directory for the user tester1. NFS automount of home directories will be discussed in Part II or III of this guide.

Now log out of ipaclient1.example.com and shut down ipa1.example.com to check if it is still working when one IPA server has failed. Needless to say, it is working… (okay, there is a delay of a few seconds)

Drawbacks
IPA is not as powerful as MS Active Directory or Novell Directory. There is no support (and most probably never will be) for multiple and/or custom LDAP schemata, in order to keep it simple and easily maintainable; this actually turns the drawback into a feature. If you need features like custom LDAP schemata, you may have a look at RHDS.

Conclusion
Never in the history of information technology was it easier to set up and maintain a centrally managed identity management system. In just a few minutes of work you will have a basic setup of a highly available, fault-tolerant and scalable identity management server.

Outlook to Part II of this guide
IPA does not only allow users to be authenticated, it also lets you restrict them to using particular services only on particular systems. Thanks to Kerberos, it also provides single-sign-on capabilities without re-entering a password.

As soon as I get some time I’ll write about the following topics:

  • Passwordless (and key-less) SSH logins
  • Kerberized web applications
  • Centralized sudo management

Having fun?
Yes, definitely. I have fun with IPA, and as a Linux consultant I expect a lot of work waiting for me.

Cross distribution system management with Spacewalk

Tuesday, May 24th, 2011

In a perfect world, all systems in a data centre run the same Linux operating system, a homogeneous system landscape. In real life, things work differently. Windows systems are out of scope in this post; let's concentrate on Linux systems.

Most companies with a large Linux base are either RHEL shops or using SLES. A lot of RHEL users have some SLES systems running, and SLES users likewise run some RHEL systems. Some companies have additional systems running Debian.

How do you handle those heterogeneous system landscapes, those real-world scenarios? Let's assume a company runs 500 RHEL systems, 20 SLES systems and some 10 Debian systems.

At the moment, such Linux users are spending a lot of money on the base software management subscriptions for RHN Satellite and SUSE Manager. Additionally there are per-system costs for management, provisioning and other modules. The Debian systems are handled manually. A lot of additional costs for a few out-of-strategy systems.

The solution is Spacewalk, the upstream project of RHN Satellite, which is at the same time the upstream of the recently released SUSE Manager. While SUSE offers support for RHEL systems, Red Hat does not (yet) offer support for SLES systems in RHN Satellite.

Spacewalk version 1.4 includes code contributions from SUSE, and a student at Brno University of Technology contributed Debian support for Spacewalk as part of his master's thesis.

While the support for SUSE is already quite stable, the Debian-related code still has some rough edges. No wonder: SUSE uses RPM for its packaging, while Debian has its own packaging system. This makes it much easier for SUSE to get Spacewalk ready for its distribution.

At the moment, one can still call the Debian support experimental, but the goal for the Spacewalk project is to have it fully functional in future releases.
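
Just to give an idea, and purely as a sketch: assuming the Spacewalk client tools (rhn-client-tools and friends) are packaged and installed on the Debian box and an activation key with a Debian child channel exists, registration would look much like it does on RHEL:

# register the Debian client against the Spacewalk server
rhnreg_ks --serverUrl=https://spacewalk.example.com/XMLRPC --activationkey=1-debian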

The goal should be that the management systems from both major enterprise Linux vendors, Red Hat and SUSE, support each other's distribution in their Spacewalk-based products. Debian is a niche player in the enterprise Linux environment and should also be supported by both products, RHN Satellite and SUSE Manager. Nobody expects to get system support for those distributions from the competing vendor, but having support for managing them would be welcome.

Further readings:
Registering Clients
Deb support in Spacewalk

Have fun!

Implement a high available Cobbler provisioning system

Sunday, April 24th, 2011

A not so well known feature of Cobbler is its replication facility. It allows you to create a highly available system provisioning setup. The whole setup is straightforward.

Background
Today people tend NOT to back up systems; only data is backed up. In the case of a system failure, they just re-provision the system and automatically configure it with configuration management tools such as Puppet, CFengine or RHN Satellite.

Not only do your configuration management system(s) need to be redundant, you also need to replicate your Cobbler provisioning systems.

It even works in a set up where you have multiple datacenters for disaster recovery, having one or two Cobbler servers at each site.

Luckily, Cobbler comes with that feature out-of-the-box.

Assumptions

  • Both Cobbler servers are in the same network
  • Your master cobbler has a DNS-entry cobbler-master, your slave is cobbler-slave

Lets do it

Unfortunately, Cobbler replication, at the moment, does not work when SELinux is running in enforcing mode. You need to turn it off on both Cobbler servers.

setenforce 0

Hopefully this will change soon.

For an initial replication, run the following on the slave server:

cobbler replicate --master=cobbler-master --sync-all

It is recommended to run this command nightly via a cron job to ensure you have the latest RPM packages of your distros in sync. Feel free to run this command manually at any time if you think an immediate sync of the whole Cobbler system is needed.

Update on-the-fly
You can use a Cobbler trigger script that automatically connects to the slave and triggers a replication with the master whenever a system is added or deleted.

I’m using the following script to do so:

#!/bin/bash
#
# This is a Cobbler trigger script that triggers a replication on the slave
# server. You need to create a private/public key-pair on the master and 
# add the public key to ~/.ssh/authorized_keys on the slave

SLAVE=cobbler-slave

ssh $SLAVE "cobbler replicate --master=cobbler-master --systems=* --prune"
ssh $SLAVE "cobbler sync"

Put this script into /var/lib/cobbler/triggers/add/system/post and create a symlink to it in /var/lib/cobbler/triggers/delete/system/post on your master server.
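
A sketch of that step; the script name replicate-to-slave.sh is just a placeholder:

install -m 755 replicate-to-slave.sh /var/lib/cobbler/triggers/add/system/post/
ln -s /var/lib/cobbler/triggers/add/system/post/replicate-to-slave.sh \
      /var/lib/cobbler/triggers/delete/system/post/replicate-to-slave.sh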

Security
Cobbler handles the file /etc/rsyncd.conf. So far so good. You need to be aware that EVERY host can rsync your distros, kickstarts, snippets etc. unless you take some security measures with a firewall and/or other restrictions.

Possibly the simplest way is configuring xinetd.

A sample /etc/xinetd.d/rsync, where 192.0.2.2 is the IP of your slave, would be:

# default: off
# description: The rsync server is a good addition to an ftp server, as it \
#       allows crc checksumming etc.
service rsync
{
        disable         = no
        only_from       = 192.0.2.2
        flags           = IPv6
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/bin/rsync
        server_args     = --daemon
        log_on_failure  += USERID
}

DHCP redundancy
You also need to operate a DHCP server on both Cobbler servers. If you have both Cobbler servers on the same subnet, you may want to look at how to configure ISC dhcpd failover. A good guide (untested) is at http://www.madboa.com/geek/dhcp-failover/. The lazy ones just turn off dhcpd on the slave and activate it as needed.
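
For reference, a heavily abbreviated sketch of an ISC dhcpd failover declaration on the primary; the secondary gets a matching block declaring "secondary", all values are examples, and the guide linked above covers a complete, tuned configuration:

failover peer "cobbler-dhcp" {
        primary;
        address cobbler-master;
        port 647;
        peer address cobbler-slave;
        peer port 647;
        max-response-delay 60;
        max-unacked-updates 10;
        mclt 3600;
        split 128;
}
# reference it inside your pool declarations with: failover peer "cobbler-dhcp";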

Conclusion
As I wrote before, Cobbler is just great. Not only are “batteries included”; high availability, disaster safety and other cool stuff are included as well and can be implemented quickly.

Having fun? I have rarely needed to re-provision systems because they went foobar, and hopefully I will never experience a disaster such as a burned-down, exploded or otherwise lost datacenter. Cobbler does not protect you from such disasters, but it helps you to recover.

SUSE Manager based on Fedora Spacewalk

Thursday, March 3rd, 2011

SUSE announced the availability of SUSE Manager. Having a closer look at it, one recognizes it is based on Fedora Spacewalk. It is a clone of the Red Hat Satellite.

A few weeks ago I was puzzled to see a post on the spacewalk-devel mailing list: SUSE was contributing some code. What the heck? Now it is clear; they are using Spacewalk as the source for their own product. Spacewalk is no longer just the upstream of RHN Satellite, but also a major tool for managing SLES systems.

The open source way
It is good practice to share knowledge and code between different distributions. SUSE profits from the work Red Hat has done before, and Red Hat profits from the contributions of SUSE. IMHO this is the way open source software should work.

The price tag
SUSE claims “SUSE Manager allows you to save up to 50 percent for Linux support”. Really?

Let's have a look at “How to buy”. The price is exactly the same as for RHN Satellite: USD 13,500. Really the same price tag? Let's dig deeper into the features and click on “Database support”. One would read:

"SUSE Manager provides a built-in Oracle XE database, but can also leverage existing 
Oracle 10g or 11g databases, to locally store all data related to the 
managed Linux servers."  

This means: with the free Oracle XE database delivered with SUSE Manager you can manage just a few systems. If you want to manage more systems, you need to buy a very expensive Oracle license, which at least doubles the price tag of SUSE Manager.

And Debian? There is some work going on; maybe I will soon write about Spacewalk and what it can do for and with Debian.

Conclusion
Because SUSE was not in a hurry to release its new product, I cannot understand why SUSE did not help the Spacewalk project get PostgreSQL production-ready before releasing it. This would have provided its customers (and the Spacewalk community) a real benefit.

I hope that SUSE will contribute code to Spacewalk in a sustainable way; it is now in the interest of users of both distributions.

Have fun!