Archive for the ‘Red Hat’ Category

2FA with (Free) IPA. The good, the bad and the ugly

Thursday, April 9th, 2015

Two-factor authentication (2FA) is becoming more and more common, which is good for security. Since the release of FreeIPA 4, 2FA is included.

Over time I ran a lot of experiments and gathered experience I want to share with you. It is easy to set up and maintain as long as you use it only for system authentication. If you use it for such things as webmail, it fails. This post shows you the capabilities as they are today. Almost all of the bad issues apply not only to FreeIPA but to 2FA in general.

The good
All your systems are Fedora 21, RHEL 7.1 or Ubuntu 14.02 all is working fine as the included SSSD is new enough to handle 2FA. All kerberized services can be used with 2FA w/o logging in again during the validity of your Kerberos ticket. Very convenient, very secure.

Third-party applications can use LDAP authentication (depending on the use case).

The bad
Systems with older distributions such as RHEL 6.6 come with an SSSD version that is too outdated to handle kerberized 2FA at all. This will probably change soon. Workarounds:


  • Use LDAP authentication (see below)
  • Use a Jump host with a recent Linux distribution

If you log in to your workstation with a local user, you cannot obtain a Kerberos ticket with kinit and use this ticket further on (i.e. for ssh logins on remote servers, mail etc.). Workarounds:


  • Switch to an IPA-managed user if your workstation's distribution is recent enough.
  • Use a Jump host with a recent Linux distribution
  • Wait until krb5-PAKE is in place; the software is being developed.

The ugly

Most mobile applications, such as the IMAP client in Android, do not prompt for the password; they expect it to be configured. Needless to say, you cannot reconfigure the password each time you want to check your emails on your phone. Workarounds:


  • Use a 3rd-party email app, one that prompts for the password when needed.
  • Configure IPA to accept both password and 2FA, which lets the user choose to use either the password only or 2FA. Needless to say, this makes 2FA less useful, as people tend to be lazy.
  • Turn off 2FA in IPA and use a Yubikey with a static password (split password). This is not real 2FA; it is a single password split in two. Password changes are a horror.

Accessing webmail clients (I tested Roundcube) causes headaches as well. They authenticate the user via IMAP and use these credentials to access the mail storage. As the second factor is a one-time password (OTP), this results in a failure to retrieve mails after logging in.

Workaround: the same as for mobile applications. I would appreciate it if someone could point me to a webmail software that can handle this.
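For the "password and 2FA" workaround mentioned above, the accepted authentication types can be set per user; a sketch with a hypothetical user tester (a global default can be set with ipa config-mod instead):

[root@ipa1 ~]# ipa user-mod tester --user-auth-type=password --user-auth-type=otp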

Offline usage

One sentence: Offline usage does not work because it cannot work.


  • Create a local user and configure a Yubikey with a static password (split password). This is not real 2FA; it is a single password split in two. Password changes are a horror.
  • Install an IPA server on your notebook 😉 This will scale up to 18 notebooks (plus two replicas in the datacenter) but introduces a lot of other problems, so: not seriously to be considered.

LDAP Authentication as a Workaround
Configure PAM/SSSD to use LDAP authentication for your users. IPA comes with a very nice feature called ipa-advise.

        [root@ipa1 ~]# ipa-advise config-redhat-nss-pam-ldapd
        # ----------------------------------------------------------------------
        # Instructions for configuring a system with nss-pam-ldapd as a IPA
        # client. This set of instructions is targeted for platforms that
        # include the authconfig utility, which are all Red Hat based platforms.
        # ----------------------------------------------------------------------
        # Schema Compatibility plugin has not been configured on this server. To
        # configure it, run "ipa-adtrust-install --enable-compat"
        # Install required packages via yum
        yum install -y wget openssl nss-pam-ldapd pam_ldap authconfig
        # NOTE: IPA certificate uses the SHA-256 hash function. SHA-256 was
        # introduced in RHEL5.2. Therefore, clients older than RHEL5.2 will not
        # be able to interoperate with IPA server 3.x.
        # Please note that this script assumes /etc/openldap/cacerts as the
        # default CA certificate location. If this value is different on your
        # system the script needs to be modified accordingly.
        # Download the CA certificate of the IPA server
        mkdir -p -m 755 /etc/openldap/cacerts
        wget -O /etc/openldap/cacerts/ipa.crt
        # Generate hashes for the openldap library
        command -v cacertdir_rehash
        if [ $? -ne 0 ] ; then
         wget "" -O cacertdir_rehash ;
         chmod 755 ./cacertdir_rehash ;
         ./cacertdir_rehash /etc/openldap/cacerts/ ;
        else
         cacertdir_rehash /etc/openldap/cacerts/ ;
        fi
        # Use the authconfig to configure nsswitch.conf and the PAM stack
        authconfig --updateall --enableldap --enableldapauth --ldapserver=ldap:// --ldapbasedn=cn=compat,dc=example,dc=com
        [root@ipa1 ~]#

The output actually reflects your environment (the placeholders are replaced with your domain); it is copy-paste ready. I love this feature :-) For other Linux systems, run ipa-advise without parameters to see which advice modules are available.

2FA works well, conveniently and securely, in a datacenter and office environment. Notebooks are fine as well, as long as a network connection is available. The mobile world (smartphones and tablets) is not yet ready for 2FA. Some issues can be worked around (with some drawbacks) while others render 2FA not usable at all (offline usage).

Hopefully there will be some smart solutions available for mobile usage soon, as mobile usage causes most of the security headaches.

Migrating legacy servers to FreeIPA authentication using ID-views

Monday, April 6th, 2015

ID-Views are a new feature of FreeIPA 4 which allows you to map UIDs/GIDs and user/group names to different ones. This is a very handy solution when migrating legacy servers.

There are legacy servers in the field with a lot of history; they have been migrated from one operating system to another over the last decade(s). On those legacy servers it is unfortunately not uncommon to find software with hardcoded UIDs/GIDs and/or user/group names. Along with an unknown number of scripts installed on such servers, it is always problematic to migrate their users and authentication. Another issue is that in the early years it was very common for regular users to have UIDs >= 500, while today the convention is >= 1000.

Unfortunately, almost nobody has the time to clean up the mess. Here is a solution: ID-Views. ID-Views can be applied to single hosts or groups of hosts.

At the moment ID-Views only work with newer SSSD versions, such as the one available with RHEL 7.1.

Creating a view

[root@ipa1 ~]# ipa idview-add --desc "Old servers with legacy users" oldservers
Added ID View "oldservers"
  ID View Name: oldservers
  Description: Old servers with legacy users
[root@ipa1 ~]# 

Override a group

[root@ipa1 ~]# ipa idoverridegroup-add --desc "Old group" --gid=500 --group-name=users oldservers users
Added Group ID override "users"
  Anchor to override: users
  Description: Old group
  Group name: users
  GID: 500
[root@ipa1 ~]#

Override a user
If you omit the --login parameter (or any other) then the value in question is not overridden. Usually you just override the numeric UID and/or GID.

[root@ipa1 ~]# ipa idoverrideuser-add --desc="John Doe is actually Hans Tester" --login=jdoe --uid=500 --gidnumber=500 --homedir=/home/jdoe --shell=/bin/csh oldservers tester
Added User ID override "tester"
  Anchor to override: tester
  Description: John Doe is actually Hans Tester
  User login: jdoe
  UID: 500
  GID: 500
  Home directory: /home/jdoe
  Login shell: /bin/csh
[root@ipa1 ~]# 

Apply the ID-View to a server

[root@ipa1 ~]# ipa idview-apply oldservers
Applied ID View "oldservers"
Number of hosts the ID View was applied to: 1
[root@ipa1 ~]# 

To enable the view on the client side, clean the SSSD cache and restart the sssd service on the legacy server:

[root@legacy ~]# sss_cache -E
[root@legacy ~]# systemctl restart sssd

You also need to change the PAM configuration to accept logins with UIDs <1000.
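A sketch of what this means on a stock RHEL 7 system (exact file and control values may differ on your system): the limit is typically enforced by a pam_succeed_if guard in front of pam_sss.

# /etc/pam.d/system-auth (and password-auth) contain a guard like:
auth        [default=1 ignore=ignore success=ok] pam_succeed_if.so uid >= 1000 quiet
# lowering the boundary lets the legacy UID 500 accounts log in via SSSD:
auth        [default=1 ignore=ignore success=ok] pam_succeed_if.so uid >= 500 quiet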

Now do some tests. Both users, “jdoe” and “tester” have UID 500.

[root@legacy ~]# getent passwd jdoe
jdoe:*:500:500:Hans Tester:/home/jdoe:/bin/csh
[root@legacy ~]# getent passwd tester
jdoe:*:500:500:Hans Tester:/home/jdoe:/bin/csh
[root@legacy ~]# 

On other servers, the “jdoe” login is unknown, and “tester” has the normal UID assigned by IPA:

[root@ipa1 ~]# getent passwd jdoe
[root@ipa1 ~]# echo $?
[root@ipa1 ~]# getent passwd tester
tester:*:1225800004:1225800004:Hans Tester:/home/tester:/bin/bash
[root@ipa1 ~]# 

Please keep in mind that not cleaning up a messy system is just a workaround :-)

Building a virtual CEPH storage cluster

Friday, April 3rd, 2015

This post will guide you through the procedure to build up a testbed on RHEL7 for a complete CEPH cluster. At the end you will have an admin server, one monitoring node and three storage nodes. CEPH is an object and block storage mostly used for virtual machine images and bulk BLOBs such as video and other media. It is not intended to be used as a file storage (yet).

Machine set up
I’ve set up five virtual machines: one admin server, one monitoring server and three OSD servers.


Each of them has a 10GB disk for the OS; the OSD servers have three additional 10GB disks each for the storage, 90GB in total. Each virtual machine got 1GB RAM assigned, which is barely good enough for some first tests.

Configure your network
While it is recommended to have two separate networks, one public and one for cluster interconnect (heartbeat, replication etc.), only one network is used for this testbed.

While it is recommended practice to configure your servers with the fully qualified hostname (FQHN), you must also configure the short hostname for CEPH.

Check if this is working as needed:

[root@ceph-admin ~]# hostname
[root@ceph-admin ~]# hostname -s
[root@ceph-admin ~]# 

To be able to resolve the short hostname, edit your /etc/resolv.conf and enter a domain search path

[root@ceph-admin ~]# cat /etc/resolv.conf 
[root@ceph-admin ~]# 

Note: My network is fully IPv6-enabled, and I first tried to set CEPH up with IPv6 only. I was unable to get it working properly with IPv6! Disable IPv6 before you start. Disclaimer: maybe I made some mistakes.

You also need to keep time in sync. Using NTP or chrony is best practice anyway.
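A minimal chrony setup, to be repeated on every node (a sketch; package and service names as on RHEL7):

[root@ceph-admin ~]# yum -y install chrony
[root@ceph-admin ~]# systemctl enable chronyd
[root@ceph-admin ~]# systemctl start chronyd
[root@ceph-admin ~]# chronyc sources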

Register and subscribe the machines and attach the repositories needed

This procedure needs to be repeated on every node, including the admin server and the monitoring node(s).

[root@ceph-admin ~]# subscription-manager register
[root@ceph-admin ~]# subscription-manager list --available > pools

Search the pools file for the Ceph subscription and attach the pool in question.

[root@ceph-admin ~]# subscription-manager attach --pool=<the-pool-id>

Disable all repositories and enable the needed ones

[root@ceph-admin ~]# subscription-manager repos --disable="*"
[root@ceph-admin ~]# subscription-manager repos --enable=rhel-7-server-rpms \
--enable=rhel-7-server-rhceph-1.2-calamari-rpms \
--enable=rhel-7-server-rhceph-1.2-installer-rpms \
--enable=rhel-7-server-rhceph-1.2-mon-rpms \
--enable=rhel-7-server-rhceph-1.2-osd-rpms

Set up a CEPH user
Of course, you should set a secure password instead of this example 😉

[root@ceph-admin ~]# useradd -d /home/ceph -m -p $(openssl passwd -1 <super-secret-password>) ceph

Creating the sudoers rule for the ceph user

[root@ceph-admin ~]# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
[root@ceph-admin ~]# chmod 0440 /etc/sudoers.d/ceph

Setting up passwordless SSH logins. First create an ssh key for root. Do not set a passphrase!

[root@ceph-admin ~]# ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa

And add the key to ~/.ssh/authorized_keys of the ceph user on the other nodes.

[root@ceph-admin ~]# ssh-copy-id ceph@ceph-mon01
[root@ceph-admin ~]# ssh-copy-id ceph@ceph-osd01
[root@ceph-admin ~]# ssh-copy-id ceph@ceph-osd02
[root@ceph-admin ~]# ssh-copy-id ceph@ceph-osd03

Configure your ssh client.

To make your life easier (so you do not have to provide --username ceph when you run ceph-deploy), set up the ssh client config file. This can be done for the user root in ~/.ssh/config or in /etc/ssh/ssh_config.

Host ceph-mon01
     Hostname ceph-mon01
     User ceph

Host ceph-osd01
     Hostname ceph-osd01
     User ceph

Host ceph-osd02
     Hostname ceph-osd02
     User ceph

Host ceph-osd03
     Hostname ceph-osd03
     User ceph

Set up the admin server

Go to and download the ISO image. Copy the image to your admin server and mount it loop-back.

[root@ceph-admin ~]# mount rhceph-1.2.3-rhel-7-x86_64.iso /mnt -o loop

Copy the required product certificates to /etc/pki/product:

[root@ceph-admin ~]# cp /mnt/RHCeph-Calamari-1.2-x86_64-c1e8ca3b6c57-285.pem /etc/pki/product/285.pem
[root@ceph-admin ~]# cp /mnt/RHCeph-Installer-1.2-x86_64-8ad6befe003d-281.pem /etc/pki/product/281.pem
[root@ceph-admin ~]# cp /mnt/RHCeph-MON-1.2-x86_64-d8afd76a547b-286.pem /etc/pki/product/286.pem
[root@ceph-admin ~]# cp /mnt/RHCeph-OSD-1.2-x86_64-25019bf09fe9-288.pem /etc/pki/product/288.pem

Install the setup files

[root@ceph-admin ~]# yum install /mnt/ice_setup-*.rpm

Set up a config directory:

[root@ceph-admin ~]# mkdir ~/ceph-config
[root@ceph-admin ~]# cd ~/ceph-config

and run the installer

[root@ceph-admin ~]# ice_setup -d /mnt

To initialize, run calamari-ctl:

[root@ceph-admin ceph-config]# calamari-ctl initialize
[INFO] Loading configuration..
[INFO] Starting/enabling salt...
[INFO] Starting/enabling postgres...
[INFO] Initializing database...
[INFO] Initializing web interface...
[INFO] You will now be prompted for login details for the administrative user account.  This is the account you will use to log into the web interface once setup is complete.
Username (leave blank to use 'root'): 
Email address:
Password (again): 
Superuser created successfully.
[INFO] Starting/enabling services...
[INFO] Restarting services...
[INFO] Complete.
[root@ceph-admin ceph-config]#

Create the cluster

Ensure you are running the following command in the config directory! In this example it is ~/ceph-config.

[root@ceph-admin ceph-config]# ceph-deploy new ceph-mon01

Edit some settings in ceph.conf

osd_journal_size = 1000
osd_pool_default_size = 3
osd_pool_default_min_size = 2
osd_pool_default_pg_num = 128
osd_pool_default_pgp_num = 128

In production, the first value should be bigger, at least 10G. The number of placement groups depends on the number of your cluster members, the OSD servers. For small clusters of up to 5 nodes, 128 PGs are fine.
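As a rough cross-check, a commonly cited rule of thumb (an assumption on my part, not part of this article): total PGs ≈ (number of OSDs × 100) / replica count, divided among the pools and rounded up to the next power of two. For this 9-OSD, 3-replica testbed with three pools it lands exactly at the 128 used above:

```shell
# Rule-of-thumb PG sizing for this testbed.
osds=9
replicas=3
pools=3
total=$(( osds * 100 / replicas ))   # total PGs for the whole cluster
per_pool=$(( total / pools ))        # PGs per pool
pg_num=1
while [ "$pg_num" -lt "$per_pool" ]; do
  pg_num=$(( pg_num * 2 ))           # round up to the next power of two
done
echo "pg_num=$pg_num"
```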

Install the CEPH software on the nodes.

[root@ceph-admin ceph-config]# ceph-deploy install ceph-admin ceph-mon01 ceph-osd01 ceph-osd02 ceph-osd03

Adding the initial monitor server

[root@ceph-admin ceph-config]# ceph-deploy mon create-initial

Connect all nodes to calamari:

[root@ceph-admin ceph-config]# ceph-deploy calamari connect ceph-mon01 ceph-osd01 ceph-osd02 ceph-osd03 ceph-admin

Make your admin server an actual admin node:

[root@ceph-admin ceph-config]# yum -y install ceph ceph-common
[root@ceph-admin ceph-config]# ceph-deploy admin ceph-mon01 ceph-osd01 ceph-osd02 ceph-osd03 ceph-admin

Purge and add your data disks:

[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd01:vdb
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd01:vdc
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd01:vdd
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd02:vdb
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd02:vdc
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd02:vdd
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd03:vdb
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd03:vdc
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd03:vdd

[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd01:vdb
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd01:vdc
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd01:vdd
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd02:vdb
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd02:vdc
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd02:vdd
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd03:vdb
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd03:vdc
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd03:vdd

You can now check the health of your cluster:

[root@ceph-admin ceph-config]# ceph health
[root@ceph-admin ceph-config]# 

Or with some more information:

[root@ceph-admin ceph-config]# ceph status
    cluster 117bf1bc-04fd-4ae1-8360-8982dd38d6f2
     health HEALTH_OK
     monmap e1: 1 mons at {ceph-mon01=}, election epoch 2, quorum 0 ceph-mon01
     osdmap e42: 9 osds: 9 up, 9 in
      pgmap v73: 192 pgs, 3 pools, 0 bytes data, 0 objects
            318 MB used, 82742 MB / 83060 MB avail
                 192 active+clean
[root@ceph-admin ceph-config]# 

What’s next?
A storage is worthless if not used. A follow-up post will guide you through how to use CEPH as storage for libvirt.

Further reading

Using IPA to provide automount maps for NFSv4 home directories

Saturday, March 14th, 2015

Since the invention of NFSv4, automounting NFS home directories is secure. Since the invention of IPA, it is easier to set up and maintain. This article guides you through the steps needed to set it up. The procedures have been tested on RHEL 7.1 for the IPA servers, and RHEL 6.6 and 7.1 as clients, but should work on Fedora and CentOS. Unfortunately it seems not to work (yet) for Debian Sid and Ubuntu. [Update] Works in Ubuntu 14.04 [/Update]


  • Your Domain is
  • Your Kerberos Realm is EXAMPLE.COM
  • The NFS server is
  • The exported home directories are on /exports/home
  • The client is

A few words about security and kerberized NFS
There are basically three different modes: krb5, krb5i and krb5p.

    • krb5 means that the server and client authenticate each other, traffic can be intercepted.
    • krb5i the same as krb5 but providing integrity. It verifies that the data has not been tampered with, but traffic still can be intercepted.
    • krb5p like the two above, plus privacy protection, all traffic is encrypted.

    Depending on the sensitivity of the data to be transferred, krb5i or krb5p should be used. Also keep in mind: the higher the security, the lower the throughput.

    Work to do on one of the IPA replicas

    Add the NFS service principal for the server and client to Kerberos.

    [root@ipa1 ~]# ipa service-add nfs/
    [root@ipa1 ~]# ipa service-add nfs/

    Assuming you are using only one location, you can use the default one.

    Add the auto.home map

    [root@ipa1 ~]# ipa automountmap-add default auto.home
    Added automount map "auto.home"
      Map: auto.home
    [root@ipa1 ~]# 

    And add the auto.home map to auto.master

    [root@ipa1 ~]# ipa automountkey-add default --key "/home" --info auto.home auto.master
    Added automount key "/home"
      Key: /home
      Mount information: auto.home
    [root@ipa1 ~]# 

    Finally add the key to the auto.home map

    [root@ipa1 ~]# ipa automountkey-add default --key "*" --info "-fstype=nfs4,rw,sec=krb5i,soft,rsize=8192,wsize=8192" auto.home
    Added automount key "*"
      Key: *
      Mount information: -fstype=nfs4,rw,sec=krb5i,soft,rsize=8192,wsize=8192
    [root@ipa1 ~]# 

    Configure the NFS server
    Create a Kerberos Keytab for your NFS server

    [root@nfs ~]# kinit admin
    [root@nfs ~]# ipa-getkeytab -s -p nfs/ -k /etc/krb5.keytab

    Enable secure NFS in /etc/sysconfig/nfs:

    [root@nfs ~]# perl -npe 's/#SECURE_NFS="yes"/SECURE_NFS="yes"/g' -i /etc/sysconfig/nfs

    Create your NFS share and start the NFS server

    [root@nfs ~]# mkdir /exports/home
    [root@nfs ~]# echo "/exports/home  *(rw,sec=sys:krb5:krb5i:krb5p)" >> /etc/exports
    [root@nfs ~]# service nfs start
    [root@nfs ~]# chkconfig nfs on

    Configure your clients

    Get the Kerberos keytab

    [root@ipaclient1 ~]# ipa-getkeytab -s -p nfs/ -k /etc/krb5.keytab

    Finally you need to configure your client systems to make use of the automount maps provided by IPA:

    [root@login ~]# ipa-client-automount --location=default
    Searching for IPA server...
    IPA server: DNS discovery
    Location: default
    Continue to configure the system with these values? [no]: yes
    Configured /etc/nsswitch.conf
    Configured /etc/sysconfig/nfs
    Configured /etc/idmapd.conf
    Started rpcidmapd
    Started rpcgssd
    Restarting sssd, waiting for it to become available.
    Started autofs
    [root@login ~]# 
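    A quick verification (a sketch with placeholder names; nfs.example.com stands for the NFS server above): logging in as an IPA user should trigger the automount of the home directory.

    [root@ipaclient1 ~]# su - tester
    [tester@ipaclient1 ~]$ pwd
    /home/tester
    [tester@ipaclient1 ~]$ mount | grep /home/tester
    (shows an nfs4 mount from nfs.example.com:/exports/home with the configured sec= option)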

    Strange problems you can run into

    If you run into trouble, enable debugging in the related daemons. In /etc/sysconfig/autofs, add a line LOGGING=debug. In /etc/sssd/sssd.conf, add debug_level = 9 in the [autofs] stanza.

    If you have something like this in /var/log/messages

    lookup(file): failed to read included master map auto.master

    Then probably your nsswitch.conf does not point to sss. Ensure you have

    automount:  files sss

    in your nsswitch.conf. This should actually be configured by ipa-client-automount, but it seems that it is not 100% reliable in doing so.

    If you have something like this in /var/log/messages:

    Mar 14 20:02:37 ipaclient nfsidmap[3039]: nss_getpwnam: name '' does not map into domain 'localdomain'

    Then check your /etc/hosts file to see if everything is correct. Also ensure that the short hostname does not come before the FQHN. Another mistake can trigger the same error: DNS. Ensure you have a working DNS setup with both A (and/or AAAA) and PTR records.

    Read further
    There are plenty of docs available; here is a choice:

    Have fun! :-)

Upgrading RHN Satellite 5.6 to 5.7

Sunday, February 8th, 2015

This post guides you through the upgrade procedure for a Satellite 5.6 with the embedded database on RHEL6 x86_64. Further, it guides you through setting up Kerberos authentication of Satellite users with IPA.

Recently, Red Hat released Satellite Server 5.7. Although Satellite 5.x will be phased out in the next few years, there are plenty of new features. The most significant ones are:

  • Upgraded PostgreSQL to 9.2
  • Authentication via IPA/SSSD/Kerberos
  • IPMI support
  • Renewed WebUI
  • Read-only API users

And finally… drum roll… formal support for spacecmd :-)

As always when you plan to upgrade your Satellite server to the latest version, you need to do some preparations first.

Download the ISO
As usual, visit the download site and make sure you select 5.7 and the architecture fitting your system (x86_64 or S390).

Get a new Satellite Certificate
Satellite 5.7 needs a new certificate to get activated. You can create it on your own at the Subscription Management Application site; ensure you attach enough subscriptions to your Satellite server(s). Alternatively, open a support case.

Usually an upgrade runs smoothly, but just in case, it is recommended practice to have a recent backup ready. If your Satellite is running on a virtual machine: power off, snapshot and power on to have a consistent backup ready. For physical systems, db-control and your backup software of choice need to be visited.

Back up the rest of your Satellite:

Create a copy of your rhn configuration directory as we need some information from the old files after the upgrade.

[root@rhnsat ~]# cp -rp /etc/rhn/ /etc/rhn-$(date +"%F")

Update your OS and Satellite
The first step is to update the operating system and Satellite 5.6, and to apply the latest database schema updates as well.

yum -y update && reboot

To update the database schema, run the following command. Ideally it looks as follows:

[root@rhnsat ~]# spacewalk-schema-upgrade 

You are about to perform upgrade of your satellite-schema.

For general instructions on Red Hat Satellite schema upgrade, please consult
the following article:

Hit Enter to continue or Ctrl+C to interrupt: 
Schema upgrade: [satellite-schema-] -> [satellite-schema-]
Your database schema already matches the schema package version [satellite-schema-].
[root@rhnsat ~]# 

Functionality Check
It is recommended to restart and check the software’s functionality before upgrading, to be able to pinpoint problems if there are any.

[root@rhnsat ~]# rhn-satellite restart

It’s a good idea to review the software channels in use and delete unused channels, as this can free up quite some disk space and reduces the size of the database significantly.

[root@rhnsat ~]# spacewalk-remove-channel -c rhel-i386-rhev-agent-6-server
Deleting package metadata (20):
Removing:         ######################################## - complete
[root@rhnsat ~]#

Delete old system snapshots not used anymore. The following example deletes all snapshots which are older than one month:

[root@rhnsat ~]# sw-system-snapshot --delete --all --start-date 200001010000 --end-date $(date -d "-1 months" "+%Y%m%d0000")

Remove spacecmd from EPEL
Most Satellite users have spacecmd installed from EPEL. It’s a good idea to remove it to avoid conflicts. It is also important to disable the EPEL repositories on Satellite servers, as a simple yum update can get your Satellite server into trouble.

If not done yet, install the rhn-upgrade package, which contains instructions on how to proceed.

yum -y install rhn-upgrade

The package contains not only SQL and other useful scripts needed for the upgrade but also important documents to read. They are located in /etc/sysconfig/rhn/satellite-upgrade/doc.

For most users, the document satellite-upgrade-postgresql.txt applies.

Do not forget to read the updated product documentation as well:

Changing your file system layout
As an updated PostgreSQL version is needed, which is part of the Software Collections and not installable from the base channel, you need to add a new file system at /opt/rh.
The new database is about the same size as before; check your used disk space at /var/lib/pgsql.

[root@rhnsat ~]# lvcreate /dev/vg_data -n lv_opt_rh -L 17G 
[root@rhnsat ~]# mkfs.ext4 /dev/vg_data/lv_opt_rh
[root@rhnsat ~]# tune2fs -c0 -i0  /dev/vg_data/lv_opt_rh

Edit your /etc/fstab accordingly and mount the file system with mount -a to check whether it works as expected.
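The matching /etc/fstab entry could look like this (device name taken from the lvcreate example above; adjust to your volume group):

/dev/vg_data/lv_opt_rh  /opt/rh  ext4  defaults  1 2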

Let’s do it
Mount the ISO image and run the installer.

[root@rhnsat ~]# mount satellite-5.7.0-20150108-rhel-6-x86_64.iso /mnt -o loop
[root@rhnsat ~]# cd /mnt
[root@rhnsat mnt]# 

If you are using a proxy to sync your satellite, provide the --disconnected flag.

[root@rhnsat mnt]# ./ --upgrade --disconnected
* Starting Red Hat Satellite installer.
* Performing pre-install checks.
* Pre-install checks complete.  Beginning installation.
* RHN Registration.
** Registration: Disconnected mode.  Not registering with RHN.
* Upgrade flag passed.  Stopping necessary services.
* Purging conflicting packages.
* Checking for uninstalled prerequisites.
** Checking if yum is available ...
There are some packages from Red Hat Enterprise Linux that are not part
of the @base group that Satellite will require to be installed on this
system. The installer will try resolve the dependencies automatically.
However, you may want to install these prerequisites manually.
Do you want the installer to resolve dependencies [y/N]? y
* Installing RHN packages.
* Now running spacewalk-setup.
* Setting up SELinux..
** Database: Setting up database connection for PostgreSQL backend.
*** Upgrading embedded database.
** Database: Populating database.
** Database: Skipping database population.
* Setting up users and groups.
** GPG: Initializing GPG and importing key.
* Performing initial configuration.
* Activating Red Hat Satellite.
** Certificate not activated.
** Upgrade process requires the certificate to be activated after the schema is upgraded.
* Enabling Monitoring.
* Configuring apache SSL virtual host.
Should setup configure apache's default ssl server for you (saves original ssl.conf) [Y]? y
* Configuring tomcat.
* Configuring jabberd.
* Creating SSL certificates.
** Skipping SSL certificate generation.
* Deploying configuration files.
* Update configuration in database.
* Setting up Cobbler..
task started: 2015-02-08_154708_sync
task started (id=Sync, time=Sun Feb  8 15:47:08 2015)
running pre-sync triggers
cleaning trees
removing: /var/www/cobbler/images/ks-rhel-x86_64-es-4-u6
running shell triggers from /var/lib/cobbler/triggers/change/*
Cobbler requires tftp and xinetd services be turned on for PXE provisioning functionality. Enable these services [Y]? y
This portion of the Red Hat Satellite upgrade process has successfully completed.
Please refer to appropriate upgrade document in /etc/sysconfig/rhn/satellite-upgrade
for any remaining steps in the process.
[root@rhnsat mnt]#

The next step is to have a look at diff /etc/rhn/rhn.conf /etc/rhn-$(date +"%F")/rhn.conf
and edit /etc/rhn/rhn.conf accordingly. You will probably see missing settings such as proxy, server.satellite.rhn_parent etc. Also change the setting disconnected to 0.

Activate the updated Satellite server
To subscribe the Satellite server to the appropriate software channels, it must be activated. Since it was activated before, the --ignore-version-mismatch parameter must be provided.

[root@rhnsat ~]# rhn-satellite-activate --rhn-cert=rhn-satellite57-2015-02-08.xml --ignore-version-mismatch

Initial Update of Software and database schema
There is a good chance that updates are available for the Satellite Server, as the ISO image is not updated that often.

[root@rhnsat ~]# yum -y update

Even if no update was installed, there is a schema update available:

[root@rhnsat ~]# spacewalk-schema-upgrade 
Schema upgrade: [satellite-schema-] -> [satellite-schema-]
Searching for upgrade path: [satellite-schema-] -> [satellite-schema-]
Searching for upgrade path: [satellite-schema-] -> [satellite-schema-]
Searching for upgrade path: [satellite-schema-5.6.0] -> [satellite-schema-5.7.0]
Searching for upgrade path: [satellite-schema-5.6] -> [satellite-schema-5.7]
The path: [satellite-schema-5.6] -> [satellite-schema-5.7]
Planning to run spacewalk-sql with [/var/log/spacewalk/schema-upgrade/20150208-155657-script.sql]

Plase make sure you have a valid backup of your database before continuing.

Hit Enter to continue or Ctrl+C to interrupt: 
Executing spacewalk-sql, the log is in [/var/log/spacewalk/schema-upgrade/20150208-155657-to-satellite-schema-5.7.log].
The database schema was upgraded to version [satellite-schema-].
[root@rhnsat ~]# 

After restarting the Satellite Server, the package metadata should be recreated automatically. If not, run
/etc/sysconfig/rhn/satellite-upgrade/scripts/regenerate-repodata manually.

Rebuild the search index:

[root@rhnsat ~]# service rhn-search cleanindex

You don’t need to remove the old PostgreSQL version, this is done automatically.

Using IPA and Kerberos for authentication
Before configuring the Satellite Server to use IPA, make sure it is enrolled and the HTTP service principal exists. If not, add it with the following command:

[root@ipa1 ~]# ipa service-add HTTP/

Next, get a Kerberos ticket for a user allowed to create keytabs. In this example it is the user admin.

[root@rhnsat ~]# kinit admin
Password for admin@EXAMPLE.COM: 
[root@rhnsat ~]# 

Afterwards, run the setup script:

[root@rhnsat ~]# spacewalk-setup-ipa-authentication
Enabling authentication against [].
Retrieving HTTP/ service keytab into [/etc/httpd/conf/http.keytab] ...
Keytab successfully retrieved and stored in: /etc/httpd/conf/http.keytab
changed ownership of `/etc/httpd/conf/http.keytab' to apache
Configuring PAM service [spacewalk].
Will install additional packages ...
Loaded plugins: product-id, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package mod_auth_kerb.x86_64 0:5.4-13.el6 will be installed
---> Package mod_authnz_pam.x86_64 0:0.9.2-1.el6 will be installed
---> Package mod_intercept_form_submit.x86_64 0:0.9.7-1.el6 will be installed
---> Package mod_lookup_identity.x86_64 0:0.9.2-1.el6 will be installed
---> Package sssd-dbus.x86_64 0:1.11.6-30.el6_6.3 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

 Package                                      Arch                      Version                               Repository                               Size
 mod_auth_kerb                                x86_64                    5.4-13.el6                            rhel-x86_64-server-6                     30 k
 mod_authnz_pam                               x86_64                    0.9.2-1.el6                           rhel-x86_64-server-6                     13 k
 mod_intercept_form_submit                    x86_64                    0.9.7-1.el6                           rhel-x86_64-server-6                     17 k
 mod_lookup_identity                          x86_64                    0.9.2-1.el6                           rhel-x86_64-server-6                     19 k
 sssd-dbus                                    x86_64                    1.11.6-30.el6_6.3                     rhel-x86_64-server-6                    122 k

Transaction Summary
Install       5 Package(s)

Total download size: 201 k
Installed size: 0  
Downloading Packages:
(1/5): mod_auth_kerb-5.4-13.el6.x86_64.rpm                                                                                           |  30 kB     00:00     
(2/5): mod_authnz_pam-0.9.2-1.el6.x86_64.rpm                                                                                         |  13 kB     00:00     
(3/5): mod_intercept_form_submit-0.9.7-1.el6.x86_64.rpm                                                                              |  17 kB     00:00     
(4/5): mod_lookup_identity-0.9.2-1.el6.x86_64.rpm                                                                                    |  19 kB     00:00     
(5/5): sssd-dbus-1.11.6-30.el6_6.3.x86_64.rpm                                                                                        | 122 kB     00:00     
Total                                                                                                                        41 kB/s | 201 kB     00:04     
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : mod_authnz_pam-0.9.2-1.el6.x86_64                                                                                                        1/5 
  Installing : mod_intercept_form_submit-0.9.7-1.el6.x86_64                                                                                             2/5 
  Installing : mod_auth_kerb-5.4-13.el6.x86_64                                                                                                          3/5 
  Installing : mod_lookup_identity-0.9.2-1.el6.x86_64                                                                                                   4/5 
  Installing : sssd-dbus-1.11.6-30.el6_6.3.x86_64                                                                                                       5/5 
  Verifying  : mod_intercept_form_submit-0.9.7-1.el6.x86_64                                                                                             1/5 
  Verifying  : sssd-dbus-1.11.6-30.el6_6.3.x86_64                                                                                                       2/5 
  Verifying  : mod_lookup_identity-0.9.2-1.el6.x86_64                                                                                                   3/5 
  Verifying  : mod_authnz_pam-0.9.2-1.el6.x86_64                                                                                                        4/5 
  Verifying  : mod_auth_kerb-5.4-13.el6.x86_64                                                                                                          5/5 

  mod_auth_kerb.x86_64 0:5.4-13.el6                  mod_authnz_pam.x86_64 0:0.9.2-1.el6            mod_intercept_form_submit.x86_64 0:0.9.7-1.el6          
  mod_lookup_identity.x86_64 0:0.9.2-1.el6           sssd-dbus.x86_64 0:1.11.6-30.el6_6.3          

** /etc/sssd/sssd.conf has been backed up to sssd.conf-swsave
Updated sssd configuration.
Turning SELinux boolean [httpd_dbus_sssd] on ...
        ... done.
Turning SELinux boolean [allow_httpd_mod_auth_pam] on ...
        ... done.
Configuring Apache modules.
** /etc/tomcat6/server.xml has been backed up to server.xml-swsave.ipa
Stopping sssd:                                             [  OK  ]
Starting sssd:                                             [  OK  ]
Stopping tomcat6:                                          [  OK  ]
Starting tomcat6:                                          [  OK  ]
Stopping httpd:                                            [  OK  ]
Starting httpd:                                            [  OK  ]
Waiting for tomcat to be ready ...
Authentication against [] sucessfully enabled.
As admin, at Admin > Users > External Authentication, select
          Default organization to autopopulate new users into.
[root@rhnsat ~]# 
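To verify that the keytab was retrieved correctly, you can list its entries and request a service ticket. This sketch assumes the keytab path shown in the script output above:

```shell
# List the principals stored in the retrieved keytab
klist -kt /etc/httpd/conf/http.keytab

# Request a service ticket for this host's HTTP service
kvno HTTP/$(hostname -f)
```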

Next, point your browser to your Satellite server to finalize the setup.

Configure your browser for Kerberos
If you have not yet configured your browser to use Kerberos authentication, do so now. Assuming you are using an IPA environment, follow the instructions provided on the IPA servers.
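For Firefox, the relevant settings are the trusted URIs for Negotiate authentication. A minimal sketch, assuming your IPA domain is `example.com` (set via about:config or a user.js file):

```
// ".example.com" is a placeholder for your IPA domain
user_pref("network.negotiate-auth.trusted-uris", ".example.com");
user_pref("network.negotiate-auth.delegation-uris", ".example.com");
```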

I take no responsibility for damaged Satellites, loss of data, etc. If in doubt, stick to the official product documentation at