New features in Satellite 5.8

Red Hat Satellite 5.8 has been released, based on Spacewalk 2.5. It will probably be the last upgrade available; support ends in January 2019.

New features and enhancements

  • The major new feature is the introduction of CDN support for both Satellite activation and content sync. The key benefit is massively improved content sync performance. The tool is now called cdn-sync instead of satellite-sync, so be aware that custom scripts and cron jobs must be updated accordingly. This change also introduces the use of Satellite Manifests instead of the old certificates.
  • Introduction of the new CLI tool taskotop, which allows you to watch the activity of the Taskomatic daemon.
  • PostgreSQL is upgraded to 9.5, which brings some performance improvements compared to 9.2.
  • More Perl bits have been rewritten in Java.
  • The Java JRE is now IBM's version 1.8.
  • A few new commands in the spacecmd CLI.
  • Lots of bug fixes and small enhancements.

Removed features

There are some features that have been dropped with this release.

  • Support for patch management of Solaris systems. Who was using that? I cannot remember ever having seen a company use that feature.
  • Monitoring is gone as well; I know only one organization that has used that feature. Most companies are using Icinga or Nagios.

Usage of cdn-sync

Populate Repository Metadata

Listing the available channels works offline. To see how many packages are assigned to which channel, you need to download the repository metadata first.
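
To just display the channel listing itself, cdn-sync provides a list option; the exact flag below is an assumption carried over from the old satellite-sync, so check cdn-sync --help on your system:

[root@sat58 ~]# cdn-sync --list-channels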

[root@sat58 ~]# cdn-sync --count-packages
14:05:25 Number of channels: 1271
14:05:25 Number of repositories: 1456
Downloading repomd:   |##################################################| 100.0% 
Comparing repomd:     |##################################################| 100.0% 
Downloading metadata: |##################################################| 100.0% 
Counting packages:    |##################################################| 100.0% 
14:42:21 Total time: 0:36:56
[root@sat58 ~]# 

Be aware that this will take a while, depending on how many entitlements are defined in the Satellite Manifest.

To keep this data up to date, add a cron job like the one below; the perl one-liner adds a random delay of up to 9000 seconds before the download starts.

[root@sat58 ~]# echo '0 1 * * * root perl -le "sleep rand 9000" && /usr/bin/cdn-sync --count-packages' >> /etc/cron.d/cdn-sync-populate-metadata

Initial content sync

Similar to the old satellite-sync, you provide the -c parameter repeatedly, once for each repository to be synced.

[root@sat58 ~]# cdn-sync -c rhel-x86_64-server-7              
11:16:20 ======================================
11:16:20 | Channel: rhel-x86_64-server-7
11:16:20 ======================================
11:16:20 Sync of channel started.
11:16:20 Repo URL: https://cdn.redhat.com/content/dist/rhel/server/7/7Server/x86_64/os
11:16:28 Packages in repo:             14275
11:16:41 Packages already synced:          0
11:16:41 Packages to sync:             14275
11:16:42 New packages to download:     14275
11:16:43 1/14275 : 389-ds-base-1.3.5.10-20.el7_3.x86_64.rpm
11:16:43 2/14275 : 389-ds-base-1.3.4.0-26.el7_2.x86_64.rpm
11:16:43 3/14275 : 389-ds-base-1.3.5.10-11.el7.x86_64.rpm
11:16:43 4/14275 : 389-ds-base-1.3.3.1-16.el7_1.x86_64.rpm
[.. output omitted ..]
11:57:03 14275/14275 : zsh-5.0.2-14.el7_2.2.x86_64.rpm
Importing packages:     |##################################################| 100.0% 
13:10:05 Linking packages to channel.
13:10:19 Repo https://cdn.redhat.com/content/dist/rhel/server/7/7Server/x86_64/os has comps file 730c62cc7600c7518e4920f800cb9af6b73d75ba-comps.xml.
13:10:20 Repo https://cdn.redhat.com/content/dist/rhel/server/7/7Server/x86_64/os has 1885 errata.
13:10:50 Kickstartable tree not detected (no valid treeinfo file)
13:10:50 Repo URL: https://cdn.redhat.com/content/dist/rhel/server/7/7.3/x86_64/kickstart
13:10:55 Packages in repo:              4751
13:12:32 No new packages to sync.
13:12:32 Linking packages to channel.
13:12:44 Repo https://cdn.redhat.com/content/dist/rhel/server/7/7.3/x86_64/kickstart has comps file c542e4cf37dd210de68877b53f41d92dc7686c6e1b35ca4b1852f2e62fca2c72-comps-Server.x86_64.xml.gz.
13:12:44 Repo https://cdn.redhat.com/content/dist/rhel/server/7/7.3/x86_64/kickstart has 0 errata.
13:12:44 Added new kickstartable tree ks-rhel-x86_64-server-7-7.3. Downloading content...
13:12:44 Gathering all files in kickstart repository...
Downloading kickstarts: |##################################################| 100.0%
[.. output omitted ..]
13:24:53 Sync of channel completed in 2:08:33.
13:24:54 Total time: 2:08:33
[root@sat58 ~]# 

A subsequent run of cdn-sync without any parameters behaves like satellite-sync did: it syncs all previously synced channels.

You probably want to schedule a cron job to sync new content daily:

[root@sat58b ~]# echo '0 1 * * * root perl -le "sleep rand 9000" && /usr/bin/cdn-sync' >> /etc/cron.d/cdn-sync

The output of the sync actions is logged to /var/log/rhn/cdnsync.log.
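
To follow a long-running sync, you can simply tail that log file in a second terminal:

[root@sat58 ~]# tail -f /var/log/rhn/cdnsync.log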

Clearing the cache

Remember rm -rf /var/cache/rhn/satsync/* when something went wrong? That’s gone :-). You just use cdn-sync --clear-cache.

cdn-sync --clear-cache

Upgrading from Satellite 5.7

I have not yet found the time to test the upgrade; I’ll let you know about my experiences and thoughts in a few days.

Conclusion

After approximately 15 years, the old-school Red Hat Satellite 5 will finally be replaced by Satellite 6, which is built on completely different technologies such as The Foreman, Pulp, and Katello.

Satellite 5.8 is a very mature release; no major bugs are known.

Satellite users are encouraged to start exploring Satellite 6 now, to be ready for the transition in 2020.

Have fun 🙂

Using Ansible to automate oVirt and RHV environments

Bored of clicking in the WebUI of RHV or oVirt? Automate it with Ansible! Set up a complete virtualization environment within a few minutes.

Some time ago, Ansible gained modules for orchestrating oVirt and RHV environments. They allow you to automate the initial setup of such an environment as well as daily operational tasks.
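
You can check which oVirt-related modules your Ansible installation ships with by querying the built-in documentation tool (module names may differ slightly between Ansible versions):

[user@ansible playbooks]$ ansible-doc -l | grep ovirt
[user@ansible playbooks]$ ansible-doc ovirt_auth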

Preparation

Of course, Ansible cannot automate every task; you need to set up a few things manually. Let’s assume you want your oVirt engine or RHV manager running outside of the RHV environment, which has some benefits when it comes to system management.

  • Setup of at least two hypervisor machines with the latest RHEL 7
  • Setup of the RHV-M machine with the latest RHEL 7
  • Having the appropriate Red Hat subscriptions
  • A machine with Ansible 2.3 installed

Set up the inventory file

Ensure you have an inventory file like the following in place, e.g. in /etc/ansible/hosts:

[rhv]
        rhv-m.example.com

[hypervisors]
        hv1.example.com
        hv2.example.com
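
A quick way to verify that Ansible can reach all the machines in the inventory is the ping module; -k prompts for the SSH password, just like the playbook runs shown later:

[user@ansible playbooks]$ ansible all -m ping -k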

Helper files

ovirt-engine-vars.yml

engine_url: https://rhv-m.example.com/ovirt-engine/api
username: admin@internal
password: redhat
engine_cafile: /etc/pki/ovirt-engine/ca.pem
datacenter: Default
cluster: Default

rhsm_user: user@example.com
rhsm_pass: secret
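
Since this file contains passwords in clear text, you may want to encrypt it with Ansible Vault and add --ask-vault-pass to the ansible-playbook calls shown later:

[user@ansible playbooks]$ ansible-vault encrypt ovirt-engine-vars.yml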

Please adjust the following example answer file for your environment.

rhv-setup.conf

# action=setup                                                                                                        
[environment:default]                                                                                                 
OVESETUP_DIALOG/confirmSettings=bool:True                                                                                            
OVESETUP_CONFIG/applicationMode=str:both                                                                                             
OVESETUP_CONFIG/remoteEngineSetupStyle=none:None                                                                                     
OVESETUP_CONFIG/sanWipeAfterDelete=bool:False                                                                                        
OVESETUP_CONFIG/storageIsLocal=bool:False                                                                                            
OVESETUP_CONFIG/firewallManager=none:None                                                                                            
OVESETUP_CONFIG/remoteEngineHostRootPassword=none:None                                                                               
OVESETUP_CONFIG/firewallChangesReview=none:None                                                                                      
OVESETUP_CONFIG/updateFirewall=bool:False                                                                                            
OVESETUP_CONFIG/remoteEngineHostSshPort=none:None                                                                                    
OVESETUP_CONFIG/fqdn=str:rhv-m.example.com                                                                                        
OVESETUP_CONFIG/storageType=none:None                                                                                                        
OSETUP_RPMDISTRO/requireRollback=none:None                                                                                                   
OSETUP_RPMDISTRO/enableUpgrade=none:None                                                                                                     
OVESETUP_PROVISIONING/postgresProvisioningEnabled=bool:True                                                                                  
OVESETUP_APACHE/configureRootRedirection=bool:True                                                                                           
OVESETUP_APACHE/configureSsl=bool:True                                                                                                         
OVESETUP_DB/secured=bool:False
OVESETUP_DB/fixDbConfiguration=none:None
OVESETUP_DB/user=str:engine
OVESETUP_DB/dumper=str:pg_custom
OVESETUP_DB/database=str:engine
OVESETUP_DB/fixDbViolations=none:None
OVESETUP_DB/engineVacuumFull=none:None
OVESETUP_DB/host=str:localhost
OVESETUP_DB/port=int:5432
OVESETUP_DB/filter=none:None
OVESETUP_DB/restoreJobs=int:2
OVESETUP_DB/securedHostValidation=bool:False
OVESETUP_ENGINE_CORE/enable=bool:True
OVESETUP_CORE/engineStop=none:None
OVESETUP_SYSTEM/memCheckEnabled=bool:True
OVESETUP_SYSTEM/nfsConfigEnabled=bool:False
OVESETUP_PKI/organization=str:example.com
OVESETUP_PKI/renew=none:None
OVESETUP_CONFIG/isoDomainName=none:None
OVESETUP_CONFIG/engineHeapMax=str:1955M
OVESETUP_CONFIG/ignoreVdsgroupInNotifier=none:None
OVESETUP_CONFIG/adminPassword=str:redhat
OVESETUP_CONFIG/isoDomainACL=none:None
OVESETUP_CONFIG/isoDomainMountPoint=none:None
OVESETUP_CONFIG/engineDbBackupDir=str:/var/lib/ovirt-engine/backups
OVESETUP_CONFIG/engineHeapMin=str:1955M
OVESETUP_DWH_CORE/enable=bool:True
OVESETUP_DWH_CONFIG/scale=str:1
OVESETUP_DWH_CONFIG/dwhDbBackupDir=str:/var/lib/ovirt-engine-dwh/backups
OVESETUP_DWH_DB/secured=bool:False
OVESETUP_DWH_DB/restoreBackupLate=bool:True
OVESETUP_DWH_DB/disconnectExistingDwh=none:None
OVESETUP_DWH_DB/host=str:localhost
OVESETUP_DWH_DB/user=str:ovirt_engine_history
OVESETUP_DWH_DB/dumper=str:pg_custom
OVESETUP_DWH_DB/database=str:ovirt_engine_history
OVESETUP_DWH_DB/performBackup=none:None
OVESETUP_DWH_DB/port=int:5432
OVESETUP_DWH_DB/filter=none:None
OVESETUP_DWH_DB/restoreJobs=int:2
OVESETUP_DWH_DB/securedHostValidation=bool:False
OVESETUP_DWH_PROVISIONING/postgresProvisioningEnabled=bool:True
OVESETUP_CONFIG/imageioProxyConfig=bool:True
OVESETUP_RHEVM_DIALOG/confirmUpgrade=bool:True
OVESETUP_VMCONSOLE_PROXY_CONFIG/vmconsoleProxyConfig=bool:True
OVESETUP_CONFIG/websocketProxyConfig=bool:True

Prepare your machines

The first playbook ensures that your machines are subscribed via RHSM and that the needed repositories are made available.

install_rhv.yml

---
- hosts: rhv,hypervisors
  vars_files:
    - ovirt-engine-vars.yml
  
  tasks:
  - name: Register the machines to RHSM
    redhat_subscription:
      state: present
      username: "{{ rhsm_user }}"
      password: "{{ rhsm_pass }}"
      pool: '^(Red Hat Enterprise Server|Red Hat Virtualization)$'

  - name: Disable all repos
    command: subscription-manager repos --disable=*

- hosts: hypervisors
  tasks:
    - name: Enable required repositories
      command: subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-rhv-4-mgmt-agent-rpms
 
- hosts: rhv
  tasks:

    - name: Enable required repositories
      command: subscription-manager repos --enable=jb-eap-7-for-rhel-7-server-rpms --enable=rhel-7-server-rhv-4-tools-rpms --enable=rhel-7-server-rhv-4.1-rpms --enable=rhel-7-server-supplementary-rpms --enable=rhel-7-server-rpms

    - name: Copy Answer File
      copy:
        src: rhv-setup.conf
        dest: /tmp/rhv-setup.conf

    - name: Run RHV setup
      shell: |
        engine-setup --config-append=/tmp/rhv-setup.conf
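
Before touching the machines, it does not hurt to let Ansible validate the playbook syntax first:

[user@ansible playbooks]$ ansible-playbook --syntax-check install_rhv.yml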

Run the playbook

[user@ansible playbooks]$ ansible-playbook -k install_rhv.yml 
SSH password: 

PLAY [rhv,hypervisors] ************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [rhv-m.example.com]
ok: [hv1.example.com]
ok: [hv2.example.com]

TASK [Register the machines to RHSM] **********************************************************************************************************
ok: [hv1.example.com]
ok: [hv2.example.com]
ok: [rhv-m.example.com]

TASK [Disable all repos] **********************************************************************************************************************
changed: [rhv-m.example.com]
changed: [hv2.example.com]
changed: [hv1.example.com]

PLAY [hypervisors] ****************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [hv1.example.com]
ok: [hv2.example.com]

TASK [Enable required repositories] ***********************************************************************************************************
changed: [hv1.example.com]
changed: [hv2.example.com]

PLAY [rhv] ************************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [rhv-m.example.com]

TASK [Enable required repositories] ***********************************************************************************************************
changed: [rhv-m.example.com]

TASK [Copy Answer File] ***********************************************************************************************************************
ok: [rhv-m.example.com]

TASK [Run RHV setup] **************************************************************************************************************************
changed: [rhv-m.example.com]

PLAY RECAP ************************************************************************************************************************************
hv1.example.com : ok=5    changed=2    unreachable=0    failed=0   
hv2.example.com : ok=5    changed=2    unreachable=0    failed=0   
rhv-m.example.com       : ok=7    changed=3    unreachable=0    failed=0   

[user@ansible playbooks]$ 

Deploy your environment

Your environment is now ready, so you can set up all the required objects such as data centers, clusters, networks, storage, etc.

rhv-deploy.yml

---
- name: Deploy RHV environment
  hosts: rhv

  vars_files: 
    - ovirt-engine-vars.yml

  pre_tasks:
  - name: Log in
    ovirt_auth:
      url: "{{ engine_url }}"
      username: "{{ username }}"
      password: "{{ password }}"
      ca_file: "{{ engine_cafile }}"
    tags:
      - always

  tasks:

  - name: ensure Datacenter "{{ datacenter }}" is existing
    ovirt_datacenters:
      auth: "{{ ovirt_auth }}"
      name: "{{ datacenter }}"
      comment: "Our primary DC"
      compatibility_version: 4.1
      quota_mode: enabled
      local: False

  - name: Ensure Cluster "{{ cluster }}" is existing
    ovirt_clusters:
      auth: "{{ ovirt_auth }}"
      name: "{{ cluster }}"
      data_center: "{{ datacenter }}"
      description: "Default Cluster 1"
      cpu_type: "Intel Haswell-noTSX Family"
      switch_type: legacy
      compatibility_version: 4.1
      gluster: false
      ballooning: false
      ha_reservation: true
      memory_policy: server
      rng_sources:
        - random

  - name: Ensure logical network VLAN101 exists
    ovirt_networks:
      auth: "{{ ovirt_auth }}"
      data_center: "{{ datacenter }}"
      name: vlan101
      vlan_tag: 101
      clusters:
        - name: "{{ cluster }}"
          assigned: True
          required: False

  - name: ensure host hv1 is joined
    ovirt_hosts:
      auth: "{{ ovirt_auth }}"
      cluster: "{{ cluster }}"
      name: hv1
      address: 192.168.100.112
      password: redhat

  - name: ensure host hv2 is joined
    ovirt_hosts:
      auth: "{{ ovirt_auth }}"
      cluster: "{{ cluster }}"
      name: hv2
      address: 192.168.100.20
      password: redhat

  - name: Assign Networks to host 
    ovirt_host_networks:
      auth: "{{ ovirt_auth }}"
      state: present
      name: "{{ item }}"
      interface: eth1
      save: True
      networks: 
        - name: vlan101
    with_items:
      - hv1
      - hv2


  - name: Enable Power Management for host1    
    ovirt_host_pm:
      auth: "{{ ovirt_auth }}"
      name: hv1
      address: 10.10.10.10
      options:
        lanplus: 1
      username: admin
      password: secret
      type: ipmilan

  - name: Enable Power Management for host2
    ovirt_host_pm:
      auth: "{{ ovirt_auth }}"
      name: hv2
      address: 10.10.10.11
      options:
        lanplus: 1
      username: admin
      password: secret
      type: ipmilan

  - name: Create VM datastore
    ovirt_storage_domains:
      auth: "{{ ovirt_auth }}"
      name: vms
      host: "hv2"
      data_center: "{{ datacenter }}"
      nfs:
        address: nfs.example.com
        path: /exports/rhv/vms

  - name: Create export NFS storage domain
    ovirt_storage_domains:
      auth: "{{ ovirt_auth }}"
      name: export
      host: "hv2"
      domain_function: export
      data_center: "{{ datacenter }}"
      nfs:
        address: nfs.example.com
        path: /exports/rhv/export

  - name: Create ISO NFS storage domain
    ovirt_storage_domains:
      auth: "{{ ovirt_auth }}"
      name: iso
      host: "hv2"
      domain_function: iso
      data_center: "{{ datacenter }}"
      nfs:
        address: nfs.example.com
        path: /exports/rhv/iso

Run the playbook

[user@ansible playbooks]$ ansible-playbook -k rhv-deploy.yml
SSH password: 

PLAY [Deploy RHV environment] *****************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [rhv-m.example.com]

TASK [Log in] *********************************************************************************************************************************
ok: [rhv-m.example.com]

TASK [ensure Datacenter "Default" is existing] ************************************************************************************************
changed: [rhv-m.example.com]

TASK [Ensure Cluster "Default" is existing] ***************************************************************************************************
changed: [rhv-m.example.com]

TASK [Ensure logical network VLAN101 exists] **************************************************************************************************
changed: [rhv-m.example.com]

TASK [ensure host hv1 is joined] ****************************************************************************************************
changed: [rhv-m.example.com]

TASK [ensure host hv2 is joined] ****************************************************************************************************
changed: [rhv-m.example.com]

TASK [Assign Networks to host] ****************************************************************************************************************
ok: [rhv-m.example.com] => (item=hv1)
ok: [rhv-m.example.com] => (item=hv2)

TASK [Enable Power Management for host1] ******************************************************************************************************
changed: [rhv-m.example.com]

TASK [Enable Power Management for host2] ******************************************************************************************************
changed: [rhv-m.example.com]

TASK [Create VM datastore] ********************************************************************************************************************
changed: [rhv-m.example.com]

TASK [Create export NFS storage domain] *******************************************************************************************************
changed: [rhv-m.example.com]

TASK [Create ISO NFS storage domain] **********************************************************************************************************
changed: [rhv-m.example.com]

PLAY RECAP ************************************************************************************************************************************
rhv-m.example.com       : ok=13   changed=10   unreachable=0    failed=0   

[user@ansible playbooks]$ 

Conclusion

With the help of Ansible you can automate a lot of boring tasks in a convenient way. You may even merge the two playbooks into one, but be aware that the RHV-M setup will fail if it is already set up.

Have fun 🙂

Upgrading RHN Satellite 5.5 to 5.6

Red Hat released version 5.6 of Red Hat Satellite. Time to have a closer look at it and at how to upgrade from version 5.5.

New features

  • Finally, PostgreSQL support is mature enough for enterprise usage. No need for a closed-source database anymore. This also brings new capabilities such as online backups, which previously were only available with an external Oracle database and required a DBA.

    PostgreSQL also brings some performance benefits over the embedded Oracle database delivered with 5.5 and earlier. Disclaimer: I did not run any benchmarks, but it “feels” much faster.

  • If you are using the multi-org feature, you may be happy about the enhancements to Inter-Satellite-Sync (ISS). You can now define access rights to different software channels for different organizations.
  • It is not a new feature, but now it is supported: cobbler buildiso. It is a handy solution if you cannot use PXE boot in your environment. cobbler buildiso generates a small boot image which allows you to select the installation of a system from a boot menu.
  • Integration of Subscription Asset Manager (SAM), which is based on Candlepin and allows you to assess your system landscape for subscription compliance.

Upgrading from RHN Satellite 5.5

The first thing you will probably ask: Is it possible and supported to migrate from the embedded Oracle database to PostgreSQL? Is it hassle-free and bullet-proof? Yes, it is.

Keep in mind

  • As always: have a look at the product documentation before doing anything on a production Satellite.
  • Create a new RHN Satellite certificate at access.redhat.com.
  • Download the ISO image for 5.6.
  • Ensure you have a recent database backup.
  • Ensure you have a recent backup of your /etc/rhn directory as well as of /var/lib/cobbler.
  • Update your existing Satellite 5.5 with the latest available patches.
  • Delete unnecessary software channels from the Satellite for a faster DB migration.
  • Delete old snapshots to minimize the amount of database data to be migrated.
  • Make enough storage available for the migration from embedded Oracle to PostgreSQL; the data needs roughly the same amount of storage again. The PostgreSQL database stores its data in /var/lib/pgsql. A quick check is shown right after this list.
  • Install the latest available rhn-upgrade package: yum install rhn-upgrade
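
    A minimal sketch of such a storage check; /rhnsat holds the embedded Oracle data on a default installation (it is removed in the cleanup step at the end of this article):

    [root@rhnsat ~]# du -sh /rhnsat   # current size of the embedded Oracle data
    [root@rhnsat ~]# df -h /var/lib   # free space on the filesystem that will hold /var/lib/pgsql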

    Let’s do it: preparation work

    First of all, create a database backup of your embedded Oracle Database:

    [root@rhnsat ~]# rhn-satellite stop
    [root@rhnsat ~]# su - oracle -c "db-control backup /path/to/your/backup/directory"
    [root@rhnsat ~]# su - oracle -c "db-control verify /path/to/your/backup/directory"
    [root@rhnsat ~]# rhn-satellite start
    

    Back up the rest of your Satellite:

    [root@rhnsat ~]# cp -rp /etc/rhn/ /etc/rhn-$(date +"%F")
    [root@rhnsat ~]# cp -rp /var/lib/cobbler /var/lib/cobbler-$(date +"%F")
    [root@rhnsat ~]# cp -rp /etc/cobbler /etc/cobbler-$(date +"%F")
    

    Update your RHN Satellite 5.5 with the latest available patches and reboot:

    [root@rhnsat ~]# yum -y update && reboot
    

    Ensure the latest schema updates have been applied. The output should read as follows:

    [root@rhnsat ~]# spacewalk-schema-upgrade 
    
    You are about to perform upgrade of your satellite-schema.
    
    For general instructions on Red Hat Satellite schema upgrade, please consult
    the following article:
    
        https://access.redhat.com/knowledge/articles/273633
    
    Hit Enter to continue or Ctrl+C to interrupt: 
    Schema upgrade: [satellite-schema-5.6.0.10-1.el6sat] -> [satellite-schema-5.6.0.10-1.el6sat]
    Your database schema already matches the schema package version [satellite-schema-5.6.0.10-1.el6sat].
    [root@rhnsat ~]#
    

    It is always a good idea to restart the software and check that everything is working as expected *before* doing an upgrade, so you can pinpoint problems more easily if any show up.

    [root@rhnsat ~]# rhn-satellite restart
    

    Review your list of software channels and delete unused ones. This example will delete the channel rhel-i386-rhev-agent-6-server:

    [root@rhnsat ~]# spacewalk-remove-channel -c rhel-i386-rhev-agent-6-server
    Deleting package metadata (20):
                      ________________________________________
    Removing:         ######################################## - complete
    [root@rhnsat ~]#  
    

    Delete old system snapshots that are no longer needed. The following example deletes all snapshots older than one month:

    [root@rhnsat ~]# sw-system-snapshot --delete --all --start-date 200001010000 --end-date $(date -d "-1 months" "+%Y%m%d0000")
    

    Update the rhn-upgrade package to the latest available version:

    yum install rhn-upgrade
    

    After installing the rhn-upgrade package, the SQL scripts needed for the DB migration are available, as well as some documentation you should read; it is located in /etc/sysconfig/rhn/satellite-upgrade/doc.
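
    To get an overview of the documents provided there:

    [root@rhnsat ~]# ls /etc/sysconfig/rhn/satellite-upgrade/doc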

    Upgrade Procedure

    Mount the downloaded ISO image:

    [root@rhnsat ~]# mount satellite-5.6.0-20130927-rhel-6-x86_64.iso /mnt -o loop && cd /mnt
    [root@rhnsat mnt]# 
    

    If you operate your Satellite behind a proxy, you need to upgrade it in disconnected mode; if not, omit the --disconnected parameter.

    [root@rhnsat mnt]# ./install.pl --disconnected --upgrade
    * Starting the Spacewalk installer.
    * Performing pre-install checks.
    * Pre-install checks complete.  Beginning installation.
    * RHN Registration.
    ** Registration: Disconnected mode.  Not registering with RHN.
    * Upgrade flag passed.  Stopping necessary services.
    * Purging conflicting packages.
    * Checking for uninstalled prerequisites.
    ** Checking if yum is available ...
    There are some packages from Red Hat Enterprise Linux that are not part
    of the @base group that Satellite will require to be installed on this
    system. The installer will try resolve the dependencies automatically.
    However, you may want to install these prerequisites manually.
    Do you want the installer to resolve dependencies [y/N]? y
    * Installing RHN packages.
    * Now running spacewalk-setup.
    * Setting up Selinux..
    ** Database: Setting up database connection for PostgreSQL backend.
    ** Database: Installing the database:
    ** Database: This is a long process that is logged in:
    ** Database:   /var/log/rhn/install_db.log
    *** Progress: #
    ** Database: Installation complete.
    ** Database: Populating database.
    *** Progress: ###################################
    * Database: Starting Oracle to PostgreSQL database migration.
    ** Database: Starting embedded Oracle database.
    ** Database: Trying to connect to Oracle database: succeded.
    ** Database: Migrating data.
    *** Database: Migration process logged at: /var/log/rhn/rhn_db_migration.log
    ** Database: Data migration successfully completed.
    ** Database: Stoping embedded Oracle database.
    * Setting up users and groups.
    ** GPG: Initializing GPG and importing key.
    * Performing initial configuration.
    * Activating Red Hat Satellite.
    ** Certificate not activated.
    ** Upgrade process requires the certificate to be activated after the schema is upgraded.
    * Enabling Monitoring.
    * Configuring apache SSL virtual host.
    Should setup configure apache's default ssl server for you (saves original ssl.conf) [Y]? y
    * Configuring tomcat.
    ** /etc/sysconfig//tomcat6 has been backed up to tomcat6-swsave
    ** /etc/tomcat6//tomcat6.conf has been backed up to tomcat6.conf-swsave
    Reversed (or previously applied) patch detected!  Skipping patch.
    1 out of 1 hunk ignored -- saving rejects to file web.xml.rej
    * Configuring jabberd.
    * Creating SSL certificates.
    ** Skipping SSL certificate generation.
    * Deploying configuration files.
    * Update configuration in database.
    * Setting up Cobbler..
    cobblerd does not appear to be running/accessible
    Cobbler requires tftp and xinetd services be turned on for PXE provisioning functionality. Enable these services [Y]? 
    This portion of the Red Hat Satellite upgrade process has successfully completed.
    Please refer to appropriate upgrade document in /etc/sysconfig/rhn/satellite-upgrade
    for any remaining steps in the process.
    [root@rhnsat mnt]# 
    

    Depending on the size of your database and the speed of your disks, the upgrade procedure can take many hours.

    The next step is to compare the new /etc/rhn/rhn.conf with the backup taken at the beginning and to edit /etc/rhn/rhn.conf accordingly. You will probably notice missing settings such as proxy, server.satellite.rhn_parent, etc. Also change the setting disconnected to 0.
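
    The comparison boils down to a simple diff against the backup directory created earlier:

    [root@rhnsat ~]# diff /etc/rhn/rhn.conf /etc/rhn-$(date +"%F")/rhn.conf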

    After checking and correcting the config file you can activate the Satellite:

    [root@rhnsat ~]# rhn-satellite-activate --rhn-cert=/root/rhns-cert56.cert --ignore-version-mismatch
    

    After the activation, the system is subscribed to the software channel "redhat-rhn-satellite-5.6-server-x86_64-6". Now bring the Satellite to the latest available patch level:

    [root@rhnsat ~]# yum -y update 
    

    Stop and disable Oracle
    Before doing any further database-related actions, it is better to stop the old Oracle database to be sure everything now runs on PostgreSQL.

    [root@rhnsat ~]# service oracle stop
    Shutting down Oracle Net Listener ...                      [  OK  ]
    Shutting down Oracle DB instance "rhnsat" ...              [  OK  ]
    [root@rhnsat ~]# chkconfig oracle off
    [root@rhnsat ~]# rhn-satellite restart
    

    Aftermath

    Check if your database schema is up-to-date:

    [root@rhnsat ~]# spacewalk-schema-upgrade 
    
    You are about to perform upgrade of your satellite-schema.
    
    For general instructions on Red Hat Satellite schema upgrade, please consult
    the following article:
    
        https://access.redhat.com/knowledge/articles/273633
    
    Hit Enter to continue or Ctrl+C to interrupt: 
    Schema upgrade: [satellite-schema-5.6.0.10-1.el6sat] -> [satellite-schema-5.6.0.10-1.el6sat]
    Your database schema already matches the schema package version [satellite-schema-5.6.0.10-1.el6sat].
    [root@rhnsat ~]# 
    

    Rebuild the search index:

    [root@rhnsat ~]# service rhn-search cleanindex
    Stopping rhn-search...
    Stopped rhn-search.
    Starting rhn-search...
    [root@rhnsat ~]# 
    

    Recreate the software channel metadata:

    [root@rhnsat doc]# /etc/sysconfig/rhn/satellite-upgrade/scripts/regenerate-repodata -a
    Scheduling repodata creation for 'rhel-x86_64-server-supplementary-6'
    Scheduling repodata creation for 'rhel-x86_64-server-6'
    Scheduling repodata creation for 'rhn-tools-rhel-x86_64-server-6'
    [root@rhnsat doc]# 
    

    Check functionality
    Before removing the Oracle database, run your tests to validate the Satellite’s functionality. Please proceed as stated in /etc/sysconfig/rhn/satellite-upgrade/doc/verification.txt.

    This is an important point, as we are getting rid of the Oracle database later on. To be sure all is working as expected, do a complete functionality test for the important things.

    To be on the safe side, let the Satellite run for a few days with Oracle still installed.

    Getting rid of Oracle

    Please read /etc/sysconfig/rhn/satellite-upgrade/doc/satellite-upgrade-postgresql.txt first!

    [root@rhnsat ~]# yum remove *oracle*
    

    Getting rid of the last Oracle bits:

    [root@rhnsat ~]# rm -rf /rhnsat /opt/apps/oracle /usr/lib/oracle/
    

    Result:
    Have fun with a faster Satellite running on an open-source database 🙂

    Disclaimer
    I take no responsibility for damaged Satellites, lost data, etc. When in doubt, stick to the official product documentation at http://access.redhat.com.

Host based access control with IPA

Host based access control is easy with IPA/FreeIPA, very easy.

Let’s assume you want a host group called rhel-prod and a user group called prod-admins, and you want to let those users access the servers in the rhel-prod group via SSH from any host that can reach them. Let’s call the HBAC rule prod-admins.

You can either use the web GUI or the command line interface.

Let’s create the user group:

[root@ipa1 ~]# ipa group-add prod-admins --desc="Production System Admins"
-------------------------
Added group "prod-admins"
-------------------------
  Group name: prod-admins
  Description: Production System Admins
  GID: 1222000004
[root@ipa1 ~]# 

Add some users to the user group:

[root@ipa1 ~]# ipa group-add-member prod-admins --users=luc,htester
  Group name: prod-admins
  Description: Production System Admins
  GID: 1222000004
  Member users: luc, htester
-------------------------
Number of members added 2
-------------------------
[root@ipa1 ~]# 

And the host group:

[root@ipa1 ~]# ipa hostgroup-add rhel-prod --desc "Production Servers"
---------------------------
Added hostgroup "rhel-prod"
---------------------------
  Host-group: rhel-prod
  Description: Production Servers
[root@ipa1 ~]#

Add some servers as members of the host group:

[root@ipa1 ~]# ipa hostgroup-add-member rhel-prod --hosts=ipaclient1.example.com,ipaclient2.example.com
  Host-group: rhel-prod
  Description: Production Servers
  Member hosts: ipaclient1.example.com, ipaclient2.example.com
-------------------------
Number of members added 2
-------------------------
[root@ipa1 ~]#

Note: the servers are comma-separated, without a space after the comma.

Let’s define the HBAC rule:

[root@ipa1 ~]# ipa hbacrule-add --srchostcat=all prod-admins
-----------------------------
Added HBAC rule "prod-admins"
-----------------------------
  Rule name: prod-admins
  Source host category: all
  Enabled: TRUE
[root@ipa1 ~]#

Add the user group to the rule:

[root@ipa1 ~]# ipa hbacrule-add-user --groups prod-admins prod-admins
  Rule name: prod-admins
  Source host category: all
  Enabled: TRUE
  User Groups: prod-admins
-------------------------
Number of members added 1
-------------------------
[root@ipa1 ~]#

Add the service to the rule:

[root@ipa1 ~]# ipa hbacrule-add-service --hbacsvcs sshd prod-admins
  Rule name: prod-admins
  Source host category: all
  Enabled: TRUE
  User Groups: prod-admins
  Services: sshd
-------------------------
Number of members added 1
-------------------------
[root@ipa1 ~]#

And finally, add the host group to the rule:

[root@ipa1 ~]# ipa hbacrule-add-host --hostgroups rhel-prod prod-admins
  Rule name: prod-admins
  Source host category: all
  Enabled: TRUE
  User Groups: prod-admins
  Host Groups: rhel-prod
  Services: sshd
-------------------------
Number of members added 1
-------------------------
[root@ipa1 ~]#

Of course, you can enhance the rule by adding other services, restricting access from particular hosts, and so on.
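
For example, you can also allow sudo through the same rule and verify the result with the built-in hbactest tool. Note that the default allow_all rule, which is enabled on a fresh IPA installation, must be disabled before custom HBAC rules have any restricting effect:

[root@ipa1 ~]# ipa hbacrule-add-service --hbacsvcs sudo prod-admins
[root@ipa1 ~]# ipa hbacrule-disable allow_all
[root@ipa1 ~]# ipa hbactest --user luc --host ipaclient1.example.com --service sshd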

Have fun 🙂