Upgrading RHN Satellite 5.5 to 5.6

October 12th, 2013

Red Hat released version 5.6 of Red Hat Satellite. Time to have a closer look at it and at how to upgrade from version 5.5.

New features

  • Finally, PostgreSQL support is mature enough for enterprise usage. There is no need for a closed-source database anymore. This also brings a lot of new capabilities such as online backups, which were previously only available with an external Oracle database, which in turn needs the availability of a DBA.

    PostgreSQL also brings some performance benefits over the embedded Oracle database delivered with 5.5 and earlier. Disclaimer: I did not run any benchmarks, but it “feels” much faster.

  • If you are using the multi-org feature, you may be happy about the enhancements to Inter-Satellite-Sync (ISS). You can now define access rights to different software channels for different organizations.
  • It is not a new feature, but it is now supported: cobbler buildiso. It is a handy solution if you cannot use PXE boot in your environment. cobbler buildiso generates a small boot image which lets you select the installation of a system from a boot menu.
  • Integrated Subscription Asset Manager (SAM), which is based on Candlepin and allows you to assess your system landscape for subscription compliance.
    Upgrading from RHN Satellite 5.5

    The first question you will probably ask: Is it possible and supported to migrate from the embedded Oracle database to PostgreSQL? Is it hassle-free and bullet-proof? Yes, it is.

    Keep in mind

  • As always: have a look at the product documentation before doing anything on a production Satellite.
  • Create a new RHN Satellite Certificate at access.redhat.com
  • Download the ISO image for 5.6
  • Ensure you have a recent database backup
  • Ensure you have a recent backup of your /etc/rhn directory as well as /var/lib/cobbler
  • Update your existing Satellite 5.5 with the latest available patches
  • Delete unnecessary software channels from the Satellite for faster DB migration
  • Delete old Snapshots to minimize database data to be migrated
  • Make enough storage available for the migration from embedded Oracle to PostgreSQL. The migrated data takes roughly the same amount of storage. The PostgreSQL database stores its data in /var/lib/pgsql.
  • Install the latest available package rhn-upgrade: yum install rhn-upgrade

    Let's do it: preparation work

    First of all, create a database backup of your embedded Oracle Database:

    [root@rhnsat ~]# rhn-satellite stop
    [root@rhnsat ~]# su - oracle -c "db-control backup /path/to/your/backup/directory"
    [root@rhnsat ~]# su - oracle -c "db-control verify /path/to/your/backup/directory"
    [root@rhnsat ~]# rhn-satellite start

    Backup the rest of your Satellite:

    [root@rhnsat ~]# cp -rp /etc/rhn/ /etc/rhn-$(date +"%F")
    [root@rhnsat ~]# cp -rp /var/lib/cobbler /var/lib/cobbler-$(date +"%F")
    [root@rhnsat ~]# cp -rp /etc/cobbler /etc/cobbler-$(date +"%F")
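The pattern above — a date-stamped copy followed by a verification pass — can be sketched in a few lines. This is my own illustration against a scratch directory, not the real Satellite paths:

```shell
# Sketch: date-stamped backup of a config directory, verified with diff.
# Uses a scratch directory instead of the real /etc/rhn; the file content
# is a made-up placeholder.
src=$(mktemp -d)
echo "server.satellite.rhn_parent = satellite.example.com" > "$src/rhn.conf"

backup="${src}-$(date +%F)"   # e.g. /tmp/tmp.XXXXXX-2013-10-12
cp -rp "$src" "$backup"

# verify the copy matches the original before touching anything else
diff -r "$src" "$backup" && echo "backup verified"
```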

    Update your RHN Satellite 5.5 with the latest available patches and reboot:

    [root@rhnsat ~]# yum -y update && reboot

    Ensure the latest schema updates have been applied. The output should read as follows:

    [root@rhnsat ~]# spacewalk-schema-upgrade 
    You are about to perform upgrade of your satellite-schema.
    For general instructions on Red Hat Satellite schema upgrade, please consult
    the following article:
    Hit Enter to continue or Ctrl+C to interrupt: 
    Schema upgrade: [satellite-schema-] -> [satellite-schema-]
    Your database schema already matches the schema package version [satellite-schema-].
    [root@rhnsat ~]#

    It is always a good idea to restart the software and check that everything works as expected *before* doing an upgrade, so you can pinpoint problems more easily if any show up.

    [root@rhnsat ~]# rhn-satellite restart

    Review your list of software channels and delete unused ones. This example will delete the channel rhel-i386-rhev-agent-6-server:

    [root@rhnsat ~]# spacewalk-remove-channel -c rhel-i386-rhev-agent-6-server
    Deleting package metadata (20):
    Removing:         ######################################## - complete
    [root@rhnsat ~]#  

    Delete old system snapshots that are not used anymore. The following example deletes all snapshots older than one month:

    [root@rhnsat ~]# sw-system-snapshot --delete --all --start-date 200001010000 --end-date $(date -d "-1 months" "+%Y%m%d0000")
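The timestamps sw-system-snapshot expects are in YYYYMMDDHHMM form; the end date above is produced by GNU date arithmetic. A quick sketch of just that computation:

```shell
# --start-date/--end-date take YYYYMMDDHHMM stamps (12 digits).
start=200001010000
end=$(date -d "-1 months" "+%Y%m%d0000")   # one month ago, at 00:00
echo "pruning snapshots from $start to $end"
```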

    Update the rhn-upgrade package to the latest available version:

    yum install rhn-upgrade

    After installing the rhn-upgrade package, the SQL scripts needed for the DB migration are available, as well as some documentation you should read. It is located in /etc/sysconfig/rhn/satellite-upgrade/doc.

    Upgrade Procedure

    Mount the downloaded ISO image:

    [root@rhnsat ~]# mount satellite-5.6.0-20130927-rhel-6-x86_64.iso /mnt -o loop && cd /mnt
    [root@rhnsat mnt]# 

    If you operate your Satellite behind a proxy, you need to upgrade it in disconnected mode; if not, omit the --disconnected parameter.

    [root@rhnsat mnt]# ./install.pl --disconnected --upgrade
    * Starting the Spacewalk installer.
    * Performing pre-install checks.
    * Pre-install checks complete.  Beginning installation.
    * RHN Registration.
    ** Registration: Disconnected mode.  Not registering with RHN.
    * Upgrade flag passed.  Stopping necessary services.
    * Purging conflicting packages.
    * Checking for uninstalled prerequisites.
    ** Checking if yum is available ...
    There are some packages from Red Hat Enterprise Linux that are not part
    of the @base group that Satellite will require to be installed on this
    system. The installer will try resolve the dependencies automatically.
    However, you may want to install these prerequisites manually.
    Do you want the installer to resolve dependencies [y/N]? y
    * Installing RHN packages.
    * Now running spacewalk-setup.
    * Setting up Selinux..
    ** Database: Setting up database connection for PostgreSQL backend.
    ** Database: Installing the database:
    ** Database: This is a long process that is logged in:
    ** Database:   /var/log/rhn/install_db.log
    *** Progress: #
    ** Database: Installation complete.
    ** Database: Populating database.
    *** Progress: ###################################
    * Database: Starting Oracle to PostgreSQL database migration.
    ** Database: Starting embedded Oracle database.
    ** Database: Trying to connect to Oracle database: succeded.
    ** Database: Migrating data.
    *** Database: Migration process logged at: /var/log/rhn/rhn_db_migration.log
    ** Database: Data migration successfully completed.
    ** Database: Stoping embedded Oracle database.
    * Setting up users and groups.
    ** GPG: Initializing GPG and importing key.
    * Performing initial configuration.
    * Activating Red Hat Satellite.
    ** Certificate not activated.
    ** Upgrade process requires the certificate to be activated after the schema is upgraded.
    * Enabling Monitoring.
    * Configuring apache SSL virtual host.
    Should setup configure apache's default ssl server for you (saves original ssl.conf) [Y]? y
    * Configuring tomcat.
    ** /etc/sysconfig//tomcat6 has been backed up to tomcat6-swsave
    ** /etc/tomcat6//tomcat6.conf has been backed up to tomcat6.conf-swsave
    Reversed (or previously applied) patch detected!  Skipping patch.
    1 out of 1 hunk ignored -- saving rejects to file web.xml.rej
    * Configuring jabberd.
    * Creating SSL certificates.
    ** Skipping SSL certificate generation.
    * Deploying configuration files.
    * Update configuration in database.
    * Setting up Cobbler..
    cobblerd does not appear to be running/accessible
    Cobbler requires tftp and xinetd services be turned on for PXE provisioning functionality. Enable these services [Y]? 
    This portion of the Red Hat Satellite upgrade process has successfully completed.
    Please refer to appropriate upgrade document in /etc/sysconfig/rhn/satellite-upgrade
    for any remaining steps in the process.
    [root@rhnsat mnt]# 

    Depending on the size of your database and the speed of your disks, the upgrade procedure can take many hours.

    The next step is having a look at diff /etc/rhn/rhn.conf /etc/rhn-$(date +"%F")/rhn.conf
    and editing /etc/rhn/rhn.conf accordingly. You will probably notice missing settings such as proxy, server.satellite.rhn_parent etc. Also change the setting disconnected to 0.
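Flipping the disconnected flag is a one-line sed. The sketch below runs against a scratch copy of rhn.conf with made-up content, not the real /etc/rhn/rhn.conf:

```shell
# Sketch: set "disconnected = 0" in a scratch copy of rhn.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
disconnected = 1
server.satellite.rhn_parent = satellite.rhn.redhat.com
EOF

sed -i 's/^disconnected *= *1/disconnected = 0/' "$conf"
grep '^disconnected' "$conf"   # disconnected = 0
```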

    After checking and correcting the config file you can activate the Satellite:

    [root@rhnsat ~]# rhn-satellite-activate --rhn-cert=/root/rhns-cert56.cert --ignore-version-mismatch

    After the activation the system is subscribed to the software channel “redhat-rhn-satellite-5.6-server-x86_64-6”. Now bring the Satellite to the latest available patch level:

    [root@rhnsat ~]# yum -y update 

    Stop and disable Oracle
    Before doing any database-related actions, it is better to stop the old Oracle database, to be sure everything now runs on PostgreSQL.

    [root@rhnsat ~]# service oracle stop
    Shutting down Oracle Net Listener ...                      [  OK  ]
    Shutting down Oracle DB instance "rhnsat" ...              [  OK  ]
    [root@rhnsat ~]# chkconfig oracle off
    [root@rhnsat ~]# rhn-satellite restart


    Check if your database schema is up-to-date:

    [root@rhnsat ~]# spacewalk-schema-upgrade 
    You are about to perform upgrade of your satellite-schema.
    For general instructions on Red Hat Satellite schema upgrade, please consult
    the following article:
    Hit Enter to continue or Ctrl+C to interrupt: 
    Schema upgrade: [satellite-schema-] -> [satellite-schema-]
    Your database schema already matches the schema package version [satellite-schema-].
    [root@rhnsat ~]# 

    Rebuild the search index:

    [root@rhnsat ~]# service rhn-search cleanindex
    Stopping rhn-search...
    Stopped rhn-search.
    Starting rhn-search...
    [root@rhnsat ~]# 

    Recreate the software channel metadata:

    [root@rhnsat doc]# /etc/sysconfig/rhn/satellite-upgrade/scripts/regenerate-repodata -a
    Scheduling repodata creation for 'rhel-x86_64-server-supplementary-6'
    Scheduling repodata creation for 'rhel-x86_64-server-6'
    Scheduling repodata creation for 'rhn-tools-rhel-x86_64-server-6'
    [root@rhnsat doc]# 

    Check functionality
    Before removing the Oracle database, run your tests to validate the Satellite's functionality. Please proceed as stated in /etc/sysconfig/rhn/satellite-upgrade/doc/verification.txt

    This is an important point, as we are getting rid of the Oracle database later on. To be sure everything works as expected, do a complete functionality test of the important things.

    To be on the safe side, let the Satellite run for a few days with Oracle still installed.

    Getting rid of Oracle

    Please read /etc/sysconfig/rhn/satellite-upgrade/doc/satellite-upgrade-postgresql.txt first!

    [root@rhnsat ~]# yum remove *oracle*

    Getting rid of the last Oracle bits:

    [root@rhnsat ~]# rm -rf /rhnsat /opt/apps/oracle /usr/lib/oracle/

    Have fun with a faster Satellite and an open source database :-)

    I take no responsibility for damaged Satellites, lost data etc. If in doubt, stick to the official product documentation at http://access.redhat.com

Intercepting proxies and spacewalk-repo-sync

September 14th, 2013

More and more companies are using intercepting proxies to scan for malware. Those malware scanners can be problematic due to added latency.

If you are using spacewalk-repo-sync to synchronize external yum repositories to your custom software channels and experience the famous message [Errno 256] No more mirrors to try in your log files, then you need to configure spacewalk-repo-sync.

Unfortunately the documentation for this is somewhat hidden in the man page. You need to create a directory and a configuration file.

mkdir /etc/rhn/spacewalk-repo-sync/

Create the configuration item:

echo "[main]" >> /etc/rhn/spacewalk-repo-sync/yum.conf
echo timeout=300 >> /etc/rhn/spacewalk-repo-sync/yum.conf

You may need to experiment a bit with the value of the timeout setting; five minutes should be good enough for most environments.

/etc/rhn/spacewalk-repo-sync/yum.conf takes the same options as yum.conf; have a look at the man page for more information.
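The two echo commands above can also be written as a single heredoc. This sketch writes to a scratch directory instead of the real /etc/rhn/spacewalk-repo-sync/:

```shell
# Same config as above, produced with one heredoc into a scratch directory.
dir=$(mktemp -d)
cat > "$dir/yum.conf" <<'EOF'
[main]
timeout=300
EOF
cat "$dir/yum.conf"
```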

Have fun :-)

Centrally manage sudoers rules with IPA Part I – Preparation

July 25th, 2013

One of the features of IPA is its facility to centrally manage sudoers rules. These rules can be based on users, group memberships etc. and be constrained to one or more servers.

One of the benefits you get: you are able to define stricter sudoers rules without annoying the users. In the end your systems are more secure and more convenient for the users.

Let's start.

Unfortunately, sudoers via LDAP does not just work out of the box; some configuration needs to be done on the clients. It can be identical on all hosts and distributed via configuration management such as Puppet or RHN Satellite.

IPA has a user called “sudo”. We first need to set a password for it:

[root@ipa1 ~]# ldappasswd -x -S -W -h ipa1.example.com -ZZ -D "cn=Directory Manager" uid=sudo,cn=sysaccounts,cn=etc,dc=example,dc=com
New password: 
Re-enter new password: 
Enter LDAP Password: 
[root@ipa1 ~]# 

We need to set this password later on as the bind password in the LDAP configuration.

Next we need to edit the /etc/nsswitch.conf file:

[root@ipaclient1 ~]# echo sudoers:  files ldap >> /etc/nsswitch.conf
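If you roll this change out via configuration management, an idempotent guard avoids appending the line twice. A sketch of that, demonstrated on a scratch copy of nsswitch.conf:

```shell
# Idempotent version of the append above, run against a scratch file.
nss=$(mktemp)
echo "passwd:   files sss" > "$nss"

add_sudoers_line() {
  # only append if no sudoers: entry exists yet
  grep -q '^sudoers:' "$nss" || echo 'sudoers:  files ldap' >> "$nss"
}
add_sudoers_line
add_sudoers_line   # second run is a no-op
grep -c '^sudoers:' "$nss"   # 1
```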

Let's configure the sudo-ldap.conf file:

[root@ipaclient1 ~]# cat << EOF > /etc/sudo-ldap.conf
binddn uid=sudo,cn=sysaccounts,cn=etc,dc=example,dc=com
bindpw redhat
ssl start_tls
tls_cacertfile /etc/ipa/ca.crt
tls_checkpeer yes
uri ldap://ipa1.example.com ldap://ipa2.example.com
sudoers_base ou=SUDOers,dc=example,dc=com
EOF
[root@ipaclient1 ~]#

The bindpw (in this example “redhat”) is the one you previously set with ldappasswd; change it accordingly. The parameter “uri” should contain two IPA servers (as FQDNs, no IP addresses or short names) for redundancy. The “binddn” and “sudoers_base” of course should match your environment.
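A quick sanity check — my own sketch, not part of the sudo documentation — is to verify that the keys discussed above are all present. Here against a scratch copy of the file; point conf at /etc/sudo-ldap.conf on a real client:

```shell
# Sketch: check that sudo-ldap.conf contains the required keys.
conf=$(mktemp)
cat > "$conf" <<'EOF'
binddn uid=sudo,cn=sysaccounts,cn=etc,dc=example,dc=com
bindpw redhat
uri ldap://ipa1.example.com ldap://ipa2.example.com
sudoers_base ou=SUDOers,dc=example,dc=com
EOF

for key in binddn bindpw uri sudoers_base; do
  grep -q "^$key " "$conf" || echo "missing: $key"
done
echo "check done"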

Remember netgroups? Old-school stuff from the time when NIS was used. I thought I would never get in touch with NIS again. Unfortunately sudo uses netgroups, so we need to set a proper NIS domain name.

cat << EOF >> /etc/rc.d/rc.local
nisdomainname example.com
EOF

The following files need to be configured on each host using IPA for sudoers rules:

  • /etc/nsswitch.conf
  • /etc/sudo-ldap.conf
  • /etc/rc.d/rc.local

Expect part two of this in the next few days.

Have fun :-)

Why journalctl is cool and syslog will survive for another decade

July 24th, 2013

There was a recent discussion going on about whether Fedora 20 should drop rsyslog and just use the systemd journal. A lot of people are afraid of systemd and its journal; this is a pity.

Well, there are pros and cons to this kind of logging. For a system administrator's daily use, journalctl is a powerful tool that simplifies the hunt for log file entries.

On the other hand, there are AFAIK no monitoring tools (yet) that can work with journalctl. Those first need to be developed. A Nagios plug-in could probably be implemented quite quickly.

Why does journalctl make life easier?
Instead of grepping through thousands of lines in /var/log/messages you can simply filter the messages and work on them.

journalctl has auto-completion (just hit the tab key) showing you the options to use. E.g.

fedora:~# journalctl  < TAB > 
_AUDIT_SESSION=              _PID=
_BOOT_ID=                    PRIORITY=
_CMDLINE=                    __REALTIME_TIMESTAMP=
CODE_FILE=                   _SELINUX_CONTEXT=
CODE_LINE=                   SYSLOG_FACILITY=
_COMM=                       SYSLOG_IDENTIFIER=
COREDUMP_EXE=                SYSLOG_PID=
__CURSOR=                    _SYSTEMD_CGROUP=
ERRNO=                       _SYSTEMD_OWNER_UID=
_EXE=                        _SYSTEMD_SESSION=
_GID=                        _SYSTEMD_UNIT=
_HOSTNAME=                   _TRANSPORT=
_MACHINE_ID=                 _UDEV_SYSNAME=
MESSAGE=                     _UID=
fedora:~# journalctl 

Quite a few filtering options are available here. Most of these options are self-explanatory.

If you just want to see the entries made by a particular command, type journalctl _COMM= and hit the TAB key.

fedora:~# journalctl _COMM=
abrtd            dnsmasq          mtp-probe        sh               tgtd
anacron          gnome-keyring-d  network          smartd           udisksd
avahi-daemon     hddtemp          polkit-agent-he  smbd             umount
bash             journal2gelf     polkitd          sshd             userhelper
blueman-mechani  kdumpctl         pulseaudio       sssd_be          yum
chronyd          krb5_child       qemu-system-x86  su               
colord           libvirtd         sealert          sudo             
crond            logger           sendmail         systemd          
dbus-daemon      mcelog           setroubleshootd  systemd-journal  
fedora:~# journalctl _COMM=

If you enter journalctl _COMM=sshd you will just see the messages created by sshd.

fedora:~# journalctl _COMM=sshd 
-- Logs begin at Tue 2013-07-23 08:46:28 CEST, end at Wed 2013-07-24 11:10:01 CEST. --
Jul 23 09:48:45 fedora.example.com sshd[2172]: Server listening on port 22.
Jul 23 09:48:45 fedora.example.com sshd[2172]: Server listening on :: port 22.

Usually one is just interested in messages within a particular time range.

fedora:~# journalctl _COMM=crond --since "10:00" --until "11:00"
-- Logs begin at Tue 2013-07-23 08:46:28 CEST, end at Wed 2013-07-24 11:23:25 CEST. --
Jul 24 10:20:01 fedora.example.com CROND[28305]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Jul 24 10:50:01 fedora.example.com CROND[28684]: (root) CMD (/usr/lib64/sa/sa1 1 1)

And why will rsyslog stay another decade or even longer?

There are a lot of tools and scripts which have been in place for a long time, some of them even from a time before Linux was born.

Most of those scripts would need to be rewritten or at least change their behaviour, e.g. taking input from STDIN instead of a log file, so those tools can digest the output of journalctl | your-super-duper-script.pl
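The STDIN point is easy to illustrate: a filter that reads from a pipe works the same whether the feed comes from journalctl or from a classic log file. A minimal sketch (the log lines are made up):

```shell
# Sketch: a digest step that reads STDIN, so it can sit behind
# "journalctl _COMM=sshd --output=cat | ..." as well as "< /var/log/secure".
count=$(printf 'sshd: Accepted password for bob\nsshd: Failed password for root\nsshd: Failed password for root\n' \
        | grep -c 'Failed password')
echo "failed logins: $count"
```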

Log-digesting tools that need to stay compatible across different Unix and Linux systems will probably never be changed. In this case syslogd will survive until the last of those systems is decommissioned.


Creating and managing iSCSI targets

July 7th, 2013

If you want to create and manage iSCSI targets with Fedora or RHEL, you will stumble upon tgtd and tgtadm. These tools are easy to use but have some obstacles to take care of. This is a quick guide on how to use tgtd and tgtadm.

iSCSI terminology
In the iSCSI world, we are not talking about servers and clients, but about iSCSI targets, which are the servers, and iSCSI initiators, which are the clients.

Install the tool set
It is just one package to install; afterwards enable and start the service:

target:~# yum install scsi-target-utils
target:~# chkconfig tgtd on
target:~# service tgtd start

Or Systemd style:

target:~# systemctl start tgtd.service
target:~# systemctl enable tgtd.service

Online configuration vs. configuration file
There are basically two ways of configuring iSCSI targets:

  • Online configuration with tgtadm: changes become available instantly, but are not persistent across reboots
  • Configuration files: changes are persistent, but not instantly active

Well, there is the dump parameter for tgtadm, but passwords, for example, are replaced with “PLEASE_CORRECT_THE_PASSWORD”, which makes the dump useless if you are using CHAP authentication.

If you do not use CHAP authentication and use IP-based ACLs instead, tgtadm can help you: just dump the config to /etc/tgt/conf.d

Usage of tgtadm

After you have created the storage such as a logical volume (used in this example), a partition or even a file, you can add the first target:

target:~# tgtadm --lld iscsi --op new --mode target --tid 1 --targetname iqn.2013-07.com.example.storage.ssd1
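As mentioned above, a plain file can serve as backing store too, which is handy for testing. A sketch of creating a sparse 1 GiB image for that purpose (the size and the use of a temp file are arbitrary choices of mine):

```shell
# Sketch: create a sparse 1 GiB image file usable as a --backing-store.
img=$(mktemp)
# classic sparse-file idiom: seek to 1 GiB, write nothing
dd if=/dev/zero of="$img" bs=1 count=0 seek=1G 2>/dev/null
stat -c %s "$img"   # reports 1 GiB, but almost no real disk is used yet
```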

Then you can add a LUN (logical unit) to the target:

target:~# tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --backing-store /dev/vg_storage_ssd/lv_storage_ssd

It is always a good idea to restrict access to your iSCSI targets. There are two ways to do so: IP-based and user-based (CHAP authentication) ACLs.

In this example we first add two addresses and later on remove one of them again, just as a demo:

target:~# tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address=
target:~# tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address=

Go to both initiator hosts and check if the targets are visible:

iscsiadm --mode discovery --type sendtargets --portal

Let's remove the ACL for the IP address:

target:~# tgtadm --lld iscsi --mode target --op unbind --tid 1 --initiator-address=

Test if the target is still visible on the host with that IP address; it is not anymore.

If you want to use CHAP authentication, please be aware that tgtadm --dump does not save passwords, so initiators will not be able to log in after a restart of tgtd.

To add a new user:

target:~# tgtadm --lld iscsi --op new --mode account --user iscsi-user --password secret

And add the ACL to the target:

target:~# tgtadm --lld iscsi --op bind --mode account --tid 2 --user iscsi-user

To remove an account for the target:

target:~# tgtadm --lld iscsi --op unbind --mode account --tid 2 --user iscsi-user

As I wrote further above, configurations done with tgtadm are not persistent across reboots or restarts of tgtd. For basic configurations as described above, the dump parameter works fine. As configuration files in /etc/tgt/conf.d/ are automatically included, you can just dump the config into a separate file:

target:~# tgt-admin --dump |grep -v default-driver > /etc/tgt/conf.d/my-targets.conf
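The dumped file contains one block per target in the targets.conf syntax, roughly like the following (the initiator address here is illustrative; the dump will contain your actual ACLs and LUNs):

```
<target iqn.2013-07.com.example.storage.ssd1>
    backing-store /dev/vg_storage_ssd/lv_storage_ssd
    initiator-address 192.168.0.10
</target>
```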

The other way round
If you are using a more sophisticated configuration, you probably want to manage your iSCSI configuration the other way round.

You can edit your configuration file(s) in /etc/tgt/conf.d and invoke tgt-admin with the respective parameters to update the config instantly.

tgt-admin (not to be mistaken for tgtadm) is a Perl script which basically parses /etc/tgt/targets.conf and updates the targets by invoking tgtadm.

To update your Target(s) issue:

tgt-admin --update ALL --force

for all your targets, including active ones (--force), or

tgt-admin --update --tid=1 --force

for updating target ID 1.

SIGKILL is nasty but sometimes needed
tgtd cannot be stopped like usual daemons; you need to reach for the sledgehammer and send kill -9 to the process, followed by a service tgtd start command.

How the start-up and stop process can be handled in a proper workaround fashion is demonstrated by systemd: have a look at /usr/lib/systemd/system/tgtd.service, which on stop does not actually kill tgtd but just removes the targets.

tgtadm can be helpful and sometimes harmful. Carefully consider which is the better way for you: creating config files with tgtadm's dump, or updating the configuration files and activating them with tgt-admin.