Providing SRV and TXT records for Kerberos and LDAP with dnsmasq

March 26th, 2014

What if you have an application such as oVirt/RHEV-M that relies on DNS service records, but you don't have the possibility to add them to the DNS servers because the DNS admins do not like to do their job?

Fake them! dnsmasq is your friend :-) Install dnsmasq on the server in question and configure /etc/resolv.conf to query dnsmasq on localhost first.

yum -y install dnsmasq
chkconfig dnsmasq on

Assuming your domain is called example.com and your LDAP and Kerberos providers are ipa1.example.com and ipa2.example.com, configure dnsmasq as follows:

cat << EOF >> /etc/dnsmasq.conf
srv-host =_kerberos._udp.example.com,ipa1.example.com,88
srv-host =_kerberos._udp.example.com,ipa2.example.com,88
srv-host =_kerberos._tcp.example.com,ipa1.example.com,88
srv-host =_kerberos._tcp.example.com,ipa2.example.com,88
srv-host =_kerberos-master._tcp.example.com,ipa1.example.com,88
srv-host =_kerberos-master._tcp.example.com,ipa2.example.com,88
srv-host =_kerberos-master._udp.example.com,ipa1.example.com,88
srv-host =_kerberos-master._udp.example.com,ipa2.example.com,88
srv-host =_kpasswd._tcp.example.com,ipa1.example.com,88
srv-host =_kpasswd._tcp.example.com,ipa2.example.com,88
srv-host =_kpasswd._udp.example.com,ipa1.example.com,88
srv-host =_kpasswd._udp.example.com,ipa2.example.com,88
srv-host =_ldap._tcp.example.com,ipa1.example.com,389
srv-host =_ldap._tcp.example.com,ipa2.example.com,389
txt-record=_kerberos.example.com,"EXAMPLE.COM"
EOF

Add the following line to /etc/resolv.conf and make sure 127.0.0.1 is the first DNS server to be queried.

nameserver 127.0.0.1
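
The ordering requirement can be sanity-checked with a few lines of shell. This is just a sketch run against a throwaway sample file; on the real host, point it at /etc/resolv.conf.

```shell
# Sketch: confirm the first "nameserver" entry is 127.0.0.1.
# Uses a sample file; on the real host, read /etc/resolv.conf instead.
resolv=$(mktemp)
printf 'nameserver 127.0.0.1\nnameserver 192.0.2.53\n' > "$resolv"
first=$(awk '/^nameserver/ {print $2; exit}' "$resolv")
echo "first nameserver: $first"
rm -f "$resolv"
```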

Start dnsmasq and have fun :-)

service dnsmasq start

    Upgrading RHN Satellite 5.5 to 5.6

    October 12th, 2013

    Red Hat released version 5.6 of the Red Hat Satellite. Time to have a closer look at it and at how to upgrade from version 5.5.

    New features

    • Finally, PostgreSQL support is mature enough for enterprise usage. No need for a closed-source database anymore. This also brings a lot of new capabilities such as online backups, which were previously only available with an external Oracle database and thus required a DBA.

      PostgreSQL also brings some performance benefits over the embedded Oracle database delivered with 5.5 and earlier. Disclaimer: I did not run any benchmarks, but it “feels” much faster.

    • If you are using the multi-org feature, you may be happy about the enhancements to Inter-Satellite Sync (ISS). Now you can define access rights to different software channels for different organizations.
    • It is not a new feature, but it is now supported: cobbler buildiso. It is a handy solution if you cannot use PXE boot in your environment. cobbler buildiso generates a small boot image which lets you select the installation of a system from a boot menu.
    • Integrated System Asset Manager (SAM), which is based on Candlepin and allows you to assess your system landscape for subscription compliance.
    Upgrading from RHN Satellite 5.5

      The first thing you will probably ask: is it possible and supported to migrate from the embedded Oracle database to PostgreSQL? Is it hassle-free and bullet-proof? Yes, it is.

      Keep in mind

    • As always: have a look at the product documentation before doing anything on a production Satellite.
    • Create a new RHN Satellite certificate at access.redhat.com
    • Download the ISO image for 5.6
    • Ensure you have a recent database backup
    • Ensure you have a recent backup of your /etc/rhn directory as well as of /var/lib/cobbler
    • Update your existing Satellite 5.5 with the latest available patches
    • Delete unnecessary software channels from the Satellite for a faster DB migration
    • Delete old snapshots to minimize the database data to be migrated
    • Make enough storage available to migrate from the embedded Oracle database to PostgreSQL. It takes roughly the same amount of storage for the data. The PostgreSQL database stores its data in /var/lib/pgsql.
    • Install the latest available rhn-upgrade package: yum install rhn-upgrade
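
      For the storage point above, a quick pre-flight check could look like the following sketch. df is standard; which mount point to inspect depends on your filesystem layout.

```shell
# Sketch: report free space on the filesystem that will hold
# /var/lib/pgsql (checking /var here; adjust to your layout).
free_kb=$(df -Pk /var | awk 'NR==2 {print $4}')
echo "free on /var: ${free_kb} kB"
```

      Compare this against the size of your current Oracle data before starting the migration.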

      Let's do it: preparation work

      First of all, create a database backup of your embedded Oracle Database:

      [root@rhnsat ~]# rhn-satellite stop
      [root@rhnsat ~]# su - oracle -c "db-control backup /path/to/your/backup/directory"
      [root@rhnsat ~]# su - oracle -c "db-control verify /path/to/your/backup/directory"
      [root@rhnsat ~]# rhn-satellite start
      

      Backup the rest of your Satellite:

      [root@rhnsat ~]# cp -rp /etc/rhn/ /etc/rhn-$(date +"%F")
      [root@rhnsat ~]# cp -rp /var/lib/cobbler /var/lib/cobbler-$(date +"%F")
      [root@rhnsat ~]# cp -rp /etc/cobbler /etc/cobbler-$(date +"%F")
      

      Update your RHN Satellite 5.5 with the latest available patches and reboot:

      [root@rhnsat ~]# yum -y update && reboot
      

      Ensure the latest schema updates have been applied. The output should read as follows:

      [root@rhnsat ~]# spacewalk-schema-upgrade 
      
      You are about to perform upgrade of your satellite-schema.
      
      For general instructions on Red Hat Satellite schema upgrade, please consult
      the following article:
      
      https://access.redhat.com/knowledge/articles/273633
      
      Hit Enter to continue or Ctrl+C to interrupt: 
      Schema upgrade: [satellite-schema-5.6.0.10-1.el6sat] -> [satellite-schema-5.6.0.10-1.el6sat]
      Your database schema already matches the schema package version [satellite-schema-5.6.0.10-1.el6sat].
      [root@rhnsat ~]#
      

      It is always a good idea to restart the software and check that everything works as expected *before* doing an upgrade, so you can pinpoint problems more easily if any appear.

      [root@rhnsat ~]# rhn-satellite restart
      

      Review your list of software channels and delete unused ones. This example will delete the channel rhel-i386-rhev-agent-6-server:

      [root@rhnsat ~]# spacewalk-remove-channel -c rhel-i386-rhev-agent-6-server
      Deleting package metadata (20):
                        ________________________________________
      Removing:         ######################################## - complete
      [root@rhnsat ~]#  
      

      Delete old system snapshots that are no longer used. The following example deletes all snapshots older than one month:

      [root@rhnsat ~]# sw-system-snapshot --delete --all --start-date 200001010000 --end-date $(date -d "-1 months" "+%Y%m%d0000")
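
      The --end-date argument above is computed by GNU date; this sketch just shows the expression on its own, so you can see the resulting timestamp format (YYYYMMDDhhmm).

```shell
# Same GNU date expression as in the sw-system-snapshot command:
# prints a 12-digit timestamp, e.g. one month ago at midnight.
cutoff=$(date -d "-1 months" "+%Y%m%d0000")
echo "$cutoff"
```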
      

      Update the rhn-upgrade package to the latest available version:

      yum install rhn-upgrade
      

      After installing the rhn-upgrade package, the SQL scripts needed for the DB migration are installed, as well as some documentation you should read. They are located in /etc/sysconfig/rhn/satellite-upgrade/doc.

      Upgrade Procedure

      Mount the downloaded ISO image:

      [root@rhnsat ~]# mount satellite-5.6.0-20130927-rhel-6-x86_64.iso /mnt -o loop && cd /mnt
      [root@rhnsat mnt]# 
      

      If you operate your Satellite behind a proxy, you need to upgrade it in disconnected mode; if not, omit the --disconnected parameter.

      [root@rhnsat mnt]# ./install.pl --disconnected --upgrade
      * Starting the Spacewalk installer.
      * Performing pre-install checks.
      * Pre-install checks complete.  Beginning installation.
      * RHN Registration.
      ** Registration: Disconnected mode.  Not registering with RHN.
      * Upgrade flag passed.  Stopping necessary services.
      * Purging conflicting packages.
      * Checking for uninstalled prerequisites.
      ** Checking if yum is available ...
      There are some packages from Red Hat Enterprise Linux that are not part
      of the @base group that Satellite will require to be installed on this
      system. The installer will try resolve the dependencies automatically.
      However, you may want to install these prerequisites manually.
      Do you want the installer to resolve dependencies [y/N]? y
      * Installing RHN packages.
      * Now running spacewalk-setup.
      * Setting up Selinux..
      ** Database: Setting up database connection for PostgreSQL backend.
      ** Database: Installing the database:
      ** Database: This is a long process that is logged in:
      ** Database:   /var/log/rhn/install_db.log
      *** Progress: #
      ** Database: Installation complete.
      ** Database: Populating database.
      *** Progress: ###################################
      * Database: Starting Oracle to PostgreSQL database migration.
      ** Database: Starting embedded Oracle database.
      ** Database: Trying to connect to Oracle database: succeded.
      ** Database: Migrating data.
      *** Database: Migration process logged at: /var/log/rhn/rhn_db_migration.log
      ** Database: Data migration successfully completed.
      ** Database: Stoping embedded Oracle database.
      * Setting up users and groups.
      ** GPG: Initializing GPG and importing key.
      * Performing initial configuration.
      * Activating Red Hat Satellite.
      ** Certificate not activated.
      ** Upgrade process requires the certificate to be activated after the schema is upgraded.
      * Enabling Monitoring.
      * Configuring apache SSL virtual host.
      Should setup configure apache's default ssl server for you (saves original ssl.conf) [Y]? y
      * Configuring tomcat.
      ** /etc/sysconfig//tomcat6 has been backed up to tomcat6-swsave
      ** /etc/tomcat6//tomcat6.conf has been backed up to tomcat6.conf-swsave
      Reversed (or previously applied) patch detected!  Skipping patch.
      1 out of 1 hunk ignored -- saving rejects to file web.xml.rej
      * Configuring jabberd.
      * Creating SSL certificates.
      ** Skipping SSL certificate generation.
      * Deploying configuration files.
      * Update configuration in database.
      * Setting up Cobbler..
      cobblerd does not appear to be running/accessible
      Cobbler requires tftp and xinetd services be turned on for PXE provisioning functionality. Enable these services [Y]? 
      This portion of the Red Hat Satellite upgrade process has successfully completed.
      Please refer to appropriate upgrade document in /etc/sysconfig/rhn/satellite-upgrade
      for any remaining steps in the process.
      [root@rhnsat mnt]# 
      

      Depending on the size of your database and the speed of your disks, the upgrade procedure can take many hours.

      The next step is to have a look at diff /etc/rhn/rhn.conf /etc/rhn-$(date +"%F")/rhn.conf
      and edit /etc/rhn/rhn.conf accordingly. You will probably see missing settings such as proxy, server.satellite.rhn_parent, etc. Also change the setting disconnected to 0.
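
      Flipping the disconnected flag is a one-liner. The sketch below works on a temporary sample copy so it can be tried safely; on the real Satellite, edit /etc/rhn/rhn.conf.

```shell
# Sketch: set disconnected=0 in a sample copy of rhn.conf.
conf=$(mktemp)
echo "disconnected=1" > "$conf"
sed -i 's/^disconnected=1$/disconnected=0/' "$conf"
result=$(grep '^disconnected' "$conf")
echo "$result"
rm -f "$conf"
```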

      After checking and correcting the config file you can activate the Satellite:

      [root@rhnsat ~]# rhn-satellite-activate --rhn-cert=/root/rhns-cert56.cert --ignore-version-mismatch
      

      After the activation the system is subscribed to the software channel “redhat-rhn-satellite-5.6-server-x86_64-6”; now bring the Satellite to the latest available patch level:

      [root@rhnsat ~]# yum -y update 
      

      Stop and disable Oracle
      Before doing any database-related actions, it is better to stop the old Oracle database to be sure everything is now running on PostgreSQL.

      [root@rhnsat ~]# service oracle stop
      Shutting down Oracle Net Listener ...                      [  OK  ]
      Shutting down Oracle DB instance "rhnsat" ...              [  OK  ]
      [root@rhnsat ~]# chkconfig oracle off
      [root@rhnsat ~]# rhn-satellite restart
      

      Aftermath

      Check if your database schema is up-to-date:

      [root@rhnsat ~]# spacewalk-schema-upgrade 
      
      You are about to perform upgrade of your satellite-schema.
      
      For general instructions on Red Hat Satellite schema upgrade, please consult
      the following article:
      
      https://access.redhat.com/knowledge/articles/273633
      
      Hit Enter to continue or Ctrl+C to interrupt: 
      Schema upgrade: [satellite-schema-5.6.0.10-1.el6sat] -> [satellite-schema-5.6.0.10-1.el6sat]
      Your database schema already matches the schema package version [satellite-schema-5.6.0.10-1.el6sat].
      [root@rhnsat ~]# 
      

      Rebuild the search index:

      [root@rhnsat ~]# service rhn-search cleanindex
      Stopping rhn-search...
      Stopped rhn-search.
      Starting rhn-search...
      [root@rhnsat ~]# 
      

      Recreate the software channel metadata:

      [root@rhnsat doc]# /etc/sysconfig/rhn/satellite-upgrade/scripts/regenerate-repodata -a
      Scheduling repodata creation for 'rhel-x86_64-server-supplementary-6'
      Scheduling repodata creation for 'rhel-x86_64-server-6'
      Scheduling repodata creation for 'rhn-tools-rhel-x86_64-server-6'
      [root@rhnsat doc]# 
      

      Check functionality
      Before removing the Oracle database, run your tests to validate the Satellite's functionality. Please proceed as stated in /etc/sysconfig/rhn/satellite-upgrade/doc/verification.txt

      This is an important point, as we are getting rid of the Oracle database later on. To be sure everything is working as expected, do a complete functionality test of the important things.

      To be on the safe side, let the Satellite run for a few days with Oracle still installed.

      Getting rid of Oracle

      Please read /etc/sysconfig/rhn/satellite-upgrade/doc/satellite-upgrade-postgresql.txt first!

      [root@rhnsat ~]# yum remove *oracle*
      

      Getting rid of the last Oracle bits:

      [root@rhnsat ~]# rm -rf /rhnsat /opt/apps/oracle /usr/lib/oracle/
      

      Result:
      Having fun with a faster Satellite with an open source database :-)

      Disclaimer
      I take no responsibility for damaged Satellites, lost data, etc. If in doubt, stick to the official product documentation at http://access.redhat.com

      Intercepting proxies and spacewalk-repo-sync

      September 14th, 2013

      More and more companies are using intercepting proxies to scan for malware. Those malware scanners can be problematic due to added latency.

      If you are using spacewalk-repo-sync to synchronize external yum repositories to your custom software channels and experience the famous message [Errno 256] No more mirrors to try in your log files, then you need to configure spacewalk-repo-sync.

      Unfortunately, the documentation for this is a bit hidden in the man page. You need to create a directory and a configuration file.

      mkdir /etc/rhn/spacewalk-repo-sync/
      

      Create the configuration item:

      echo "[main]" >> /etc/rhn/spacewalk-repo-sync/yum.conf
      echo timeout=300 >> /etc/rhn/spacewalk-repo-sync/yum.conf
      

      You may need to experiment a bit with the value of the timeout setting; five minutes should be good enough for most environments.

      /etc/rhn/spacewalk-repo-sync/yum.conf has the same options as yum.conf; have a look at the man page for more information.
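
      To double-check what value will be picked up, you can simply read it back from the file. The following sketch runs on a sample file; point it at /etc/rhn/spacewalk-repo-sync/yum.conf on the real system.

```shell
# Sketch: extract the timeout value from the ini-style config file.
cfg=$(mktemp)
printf '[main]\ntimeout=300\n' > "$cfg"
timeout=$(awk -F= '$1 == "timeout" {print $2}' "$cfg")
echo "timeout=$timeout"
rm -f "$cfg"
```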

      Have fun :-)

        Centrally manage sudoers rules with IPA Part I – Preparation

        July 25th, 2013

        One of the features of IPA is its facility to centrally manage sudoers rules. These rules can be based on users, group memberships, etc., and can be constrained to one or more servers.

        One of the benefits you get: you are able to define stricter sudoers rules without annoying the users. In the end, your systems are more secure and more convenient for the users.

        Let's start.

        Preparation
        Unfortunately, sudoers via LDAP does not just work out of the box; some configuration needs to be done on the clients. It can be identical on all hosts and distributed via configuration management such as Puppet or RHN Satellite.

        IPA has a user called “sudo”. We first need to set a password for it:

        [root@ipa1 ~]# ldappasswd -x -S -W -h ipa1.example.com -ZZ -D "cn=Directory Manager" uid=sudo,cn=sysaccounts,cn=etc,dc=example,dc=com
        New password: 
        Re-enter new password: 
        Enter LDAP Password: 
        [root@ipa1 ~]# 
        

        We need to set this password later on as the bind password in the LDAP configuration.

        Next we need to edit the /etc/nsswitch.conf file:

        [root@ipaclient1 ~]# echo sudoers:  files ldap >> /etc/nsswitch.conf
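
        The order matters here: with “files ldap”, the local /etc/sudoers is consulted before LDAP. A quick check, sketched on a sample file (on the client, read /etc/nsswitch.conf):

```shell
# Sketch: verify a sudoers line listing "files" before "ldap" exists.
nss=$(mktemp)
echo "sudoers:  files ldap" >> "$nss"
n=$(grep -c '^sudoers:[[:space:]]*files[[:space:]]*ldap' "$nss")
echo "$n"
rm -f "$nss"
```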
        

        Let's configure the sudo-ldap file:

        [root@ipaclient1 ~]# cat << EOF > /etc/sudo-ldap.conf
        binddn uid=sudo,cn=sysaccounts,cn=etc,dc=example,dc=com
        bindpw redhat
        ssl start_tls
        tls_cacertfile /etc/ipa/ca.crt
        tls_checkpeer yes
        uri ldap://ipa1.example.com ldap://ipa2.example.com
        sudoers_base ou=SUDOers,dc=example,dc=com
        EOF
        [root@ipaclient1 ~]#
        

        The bindpw (in this example “redhat”) is the one you previously set with ldappasswd; change it accordingly. The parameter “uri” should contain two IPA servers (as FQDNs, no IP addresses or short names) for redundancy. The “binddn” and “sudoers_base” should of course match your environment.

        Remember netgroups? Old-school stuff from the time when NIS was used. I thought I would never get in touch with NIS anymore. Unfortunately sudo uses netgroups, so we need to set a proper NIS domain name.

        cat << EOF >> /etc/rc.d/rc.local
        nisdomainname example.com
        EOF
        

        Summary
        The following files need to be configured on each host using IPA for sudoers rules:

        • /etc/nsswitch.conf
        • /etc/sudo-ldap.conf
        • /etc/rc.d/rc.local

        Expect part two of this in the next few days.

        Have fun :-)

          Why journalctl is cool and syslog will survive for another decade

          July 24th, 2013

          There was a recent discussion going on about whether Fedora 20 should drop rsyslog and just use the systemd journal. A lot of people are afraid of systemd and its journal, which is a pity.

          Well, there are pros and cons to this kind of logging. For a system administrator's daily use, journalctl is a powerful tool that simplifies the hunt for log file entries.

          On the other hand, there are AFAIK no monitoring tools (yet) that can work with the journal. Those first need to be developed. A Nagios plug-in should be quick to implement.

          Why does journalctl make life easier?
          Instead of grepping through thousands of lines in /var/log/messages, you can simply filter the messages and work on them.

          journalctl has auto-completion (just hit the Tab key) showing you the options to use, e.g.:

          fedora:~# journalctl  < TAB > 
          _AUDIT_LOGINUID=             __MONOTONIC_TIMESTAMP=
          _AUDIT_SESSION=              _PID=
          _BOOT_ID=                    PRIORITY=
          _CMDLINE=                    __REALTIME_TIMESTAMP=
          CODE_FILE=                   _SELINUX_CONTEXT=
          CODE_FUNC=                   _SOURCE_REALTIME_TIMESTAMP=
          CODE_LINE=                   SYSLOG_FACILITY=
          _COMM=                       SYSLOG_IDENTIFIER=
          COREDUMP_EXE=                SYSLOG_PID=
          __CURSOR=                    _SYSTEMD_CGROUP=
          ERRNO=                       _SYSTEMD_OWNER_UID=
          _EXE=                        _SYSTEMD_SESSION=
          _GID=                        _SYSTEMD_UNIT=
          _HOSTNAME=                   _TRANSPORT=
          _KERNEL_DEVICE=              _UDEV_DEVLINK=
          _KERNEL_SUBSYSTEM=           _UDEV_DEVNODE=
          _MACHINE_ID=                 _UDEV_SYSNAME=
          MESSAGE=                     _UID=
          MESSAGE_ID= 
          fedora:~# journalctl 
          

          Quite a few filtering options are available here. Most of these options are self-explanatory.

          If you just want to see the entries made by a particular command, type journalctl _COMM= and hit the Tab key.

          fedora:~# journalctl _COMM=
          abrtd            dnsmasq          mtp-probe        sh               tgtd
          anacron          gnome-keyring-d  network          smartd           udisksd
          avahi-daemon     hddtemp          polkit-agent-he  smbd             umount
          bash             journal2gelf     polkitd          sshd             userhelper
          blueman-mechani  kdumpctl         pulseaudio       sssd_be          yum
          chronyd          krb5_child       qemu-system-x86  su               
          colord           libvirtd         sealert          sudo             
          crond            logger           sendmail         systemd          
          dbus-daemon      mcelog           setroubleshootd  systemd-journal  
          fedora:~# journalctl _COMM=
          

          If you enter journalctl _COMM=sshd you will just see the messages created by sshd.

          fedora:~# journalctl _COMM=sshd 
          -- Logs begin at Tue 2013-07-23 08:46:28 CEST, end at Wed 2013-07-24 11:10:01 CEST. --
          Jul 23 09:48:45 fedora.example.com sshd[2172]: Server listening on 0.0.0.0 port 22.
          Jul 23 09:48:45 fedora.example.com sshd[2172]: Server listening on :: port 22.
          fedora:~#
          

          Usually one is just interested in messages within a particular time range.

          fedora:~# journalctl _COMM=crond --since "10:00" --until "11:00"
          -- Logs begin at Tue 2013-07-23 08:46:28 CEST, end at Wed 2013-07-24 11:23:25 CEST. --
          Jul 24 10:20:01 fedora.example.com CROND[28305]: (root) CMD (/usr/lib64/sa/sa1 1 1)
          Jul 24 10:50:01 fedora.example.com CROND[28684]: (root) CMD (/usr/lib64/sa/sa1 1 1)
          fedora:~#   
          

          And why will rsyslog stay for another decade or even longer?

          There are a lot of tools and scripts which have been in place for a long time; some of them even come from a time before Linux was born.

          Most of those scripts would have to be rewritten, or at least change their behaviour, i.e. take input from STDIN instead of a log file, so those tools can digest the output of journalctl | your-super-duper-script.pl

          Log-digesting tools that need to stay compatible across different Unix and Linux systems will probably never be changed. In this case syslogd will survive until the last of those systems is decommissioned.
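
          Such an STDIN-based filter can be as small as an awk one-liner. The sketch below feeds it a sample line instead of live output; on a real system you would pipe journalctl _COMM=sshd into it.

```shell
# Sketch: count sshd messages arriving on STDIN; fed from a sample
# line here instead of "journalctl _COMM=sshd".
count=$(printf 'Jul 23 09:48:45 fedora sshd[2172]: Server listening on 0.0.0.0 port 22.\n' \
  | awk '/sshd\[/ {n++} END {print n+0}')
echo "$count"
```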


            Creating and managing iSCSI targets

            July 7th, 2013

            If you want to create and manage iSCSI targets with Fedora or RHEL, you stumble upon tgtd and tgtadm. These tools are easy to use but have some obstacles to take care of. This is a quick guide on how to use tgtd and tgtadm.

            iSCSI terminology
            In the iSCSI world, we are not talking about servers and clients, but about iSCSI targets, which are the servers, and iSCSI initiators, which are the clients.

            Install the tool set
            It is just one package to install; afterwards, enable the service:

            target:~# yum install scsi-target-utils
            target:~# chkconfig tgtd on
            target:~# service tgtd start
            

            Or, systemd style:

            target:~# systemctl start tgtd.service
            target:~# systemctl enable tgtd.service
            

            Online configuration vs. configuration file
            There are basically two ways of configuring iSCSI targets:

            • Online configuration with tgtadm: changes become available instantly, but are not persistent across reboots
            • Configuration files: changes are persistent, but not instantly available

            Well, there is the dump parameter for tgtadm, but passwords, for example, are replaced with “PLEASE_CORRECT_THE_PASSWORD”, which makes the dump completely useless if you are using CHAP authentication.

            If you do not use CHAP authentication and use IP-based ACLs instead, tgtadm can help you; just dump the config to /etc/tgt/conf.d

            Usage of tgtadm

            After you have created the storage such as a logical volume (used in this example), a partition or even a file, you can add the first target:

            target:~# tgtadm --lld iscsi --op new --mode target --tid 1 --targetname iqn.2013-07.com.example.storage.ssd1
            

            Then you can add a LUN (Logical Unit) to the target:

            target:~# tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --backing-store /dev/vg_storage_ssd/lv_storage_ssd
            

            It is always a good idea to restrict access to your iSCSI targets. There are two ways to do so: IP based and user (CHAP Authentication) based ACL.

            In this example we first add two addresses, and later on remove one of them again, just as a demo:

            target:~# tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address=192.168.0.106
            target:~# tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address=192.168.0.107
            

            Go to both initiators with those IP addresses and check whether the targets are visible:

            iscsiadm --mode discovery --type sendtargets --portal 192.168.0.1
            

            Let's remove the ACL for the IP address 192.168.0.107:

            target:~# tgtadm --lld iscsi --mode target --op unbind --tid 1 --initiator-address=192.168.0.107
            

            Test whether the target is still visible on the host with IP address 192.168.0.107; it is not anymore.

            If you want to use CHAP authentication, please be aware that tgtadm --dump does not save passwords, so initiators will not be able to log in after a restart of tgtd.

            To add a new user:

            target:~# tgtadm --lld iscsi --op new --mode account --user iscsi-user --password secret
            

            And add the ACL to the target:

            target:~# tgtadm --lld iscsi --op bind --mode account --tid 2 --user iscsi-user
            

            To remove an account for the target:

            target:~# tgtadm --lld iscsi --op unbind --mode account --tid 2 --user iscsi-user
            

            As I wrote further above, configurations done with tgtadm are not persistent across a reboot or a restart of tgtd. For basic configurations as described above, the dump parameter works fine. As configuration files in /etc/tgt/conf.d/ are automatically included, you can just dump the config into a separate file.

            target:~# tgt-admin --dump |grep -v default-driver > /etc/tgt/conf.d/my-targets.conf
            

            The other way round
            If you are using a more sophisticated configuration, you probably want to manage your iSCSI configuration the other way round.

            You can edit your configuration file(s) in /etc/tgt/conf.d and invoke tgt-admin with the respective parameters to update the config instantly.
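
            As a sketch (the IQN, device path and addresses are taken from the examples above; the file name is an assumption), a target definition in /etc/tgt/conf.d/my-targets.conf could look like this; check the targets.conf man page for the exact syntax supported by your version:

```
<target iqn.2013-07.com.example.storage.ssd1>
    backing-store /dev/vg_storage_ssd/lv_storage_ssd
    initiator-address 192.168.0.106
</target>
```

            After editing, activate the changes with tgt-admin as described below.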

            tgt-admin (not to be mistaken for tgtadm) is a Perl script which basically parses /etc/tgt/targets.conf and updates the targets by invoking tgtadm.

            To update your Target(s) issue:

            tgt-admin --update ALL --force
            

            for all your targets, including active ones (--force), or

            tgt-admin --update --tid=1 --force
            

            for updating target ID 1.

            SIGKILL is nasty but sometimes needed
            tgtd cannot be stopped like a usual daemon; you need a sledgehammer operation and must send the process a kill -9, followed by a service tgtd start command.

            How the start-up and stop process can be handled in a proper workaround way is demonstrated by systemd: have a look at /usr/lib/systemd/system/tgtd.service, which does not actually stop tgtd but just removes the targets.

            Conclusion
            tgtadm can be helpful and sometimes harmful. Carefully consider which is the better way for you: creating config files with tgtadm, or updating the configuration files and activating them with tgt-admin.

              Creating a PHP application on Openshift

              June 8th, 2013

              What is OpenShift? It is a cloud, and it is from Red Hat. More precisely: a PaaS (Platform as a Service).

              It has been available for quite some time now, and I finally found the time to test it. Conclusion: it is very simple to use. This guide shows you how to create a PHP application which just prints “this is a test”. More to come in future postings.

              The following steps are needed:

              • Create an account
              • Installing the CLI and setting up your environment
              • Create an application
              • Initialize a git repository
              • Put some content into your git repository
              • Push/publish your application
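
              The content step can be sketched as follows; the app name “myapp” is an assumption, and the php/ subdirectory is where the PHP cartridge of that era expected the document root (check the layout of your generated repository):

```shell
# Sketch: create the page served by the PHP cartridge. In a real
# checkout this runs inside the git repository created by rhc.
mkdir -p myapp/php
cat > myapp/php/index.php <<'EOF'
<?php echo "this is a test"; ?>
EOF
page=$(cat myapp/php/index.php)
echo "$page"
rm -rf myapp
```

              Afterwards, git add, git commit and git push publish the page, as per the steps above.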

              It is a good idea to start by reading https://www.openshift.com/get-started.

              Create an account
              Simply head to https://openshift.redhat.com/app/account/new and fill in the form. The captcha can be a hassle; you may need several tries to read it correctly.

              Setting up your environment
              Before being able to use your account, you need to install and set up some software on your developer workstation. Of course you can also go the “wimp way” and use the web UI, but real men use the CLI for higher productivity.

              The following are the steps I used on my Fedora 18 box:

              f18:~# yum install rubygems git
              

              Next, install the CLI tool. The simplest way to do so is using gem.

              f18:~# gem install rhc
              Fetching: net-ssh-2.6.7.gem (100%)
              Fetching: archive-tar-minitar-0.5.2.gem (100%)
              Fetching: highline-1.6.19.gem (100%)
              Fetching: commander-4.1.3.gem (100%)
              Fetching: httpclient-2.3.3.gem (100%)
              Fetching: open4-1.3.0.gem (100%)
              Fetching: rhc-1.9.6.gem (100%)
              ===========================================================================
              
              If this is your first time installing the RHC tools, please run 'rhc setup'
              
              ===========================================================================
              Successfully installed net-ssh-2.6.7
              Successfully installed archive-tar-minitar-0.5.2
              Successfully installed highline-1.6.19
              Successfully installed commander-4.1.3
              Successfully installed httpclient-2.3.3
              Successfully installed open4-1.3.0
              Successfully installed rhc-1.9.6
              7 gems installed
              Installing ri documentation for net-ssh-2.6.7...
              Installing ri documentation for archive-tar-minitar-0.5.2...
              Installing ri documentation for highline-1.6.19...
              Installing ri documentation for commander-4.1.3...
              Installing ri documentation for httpclient-2.3.3...
              Installing ri documentation for open4-1.3.0...
              Installing ri documentation for rhc-1.9.6...
              Installing RDoc documentation for net-ssh-2.6.7...
              Installing RDoc documentation for archive-tar-minitar-0.5.2...
              Installing RDoc documentation for highline-1.6.19...
              Installing RDoc documentation for commander-4.1.3...
              Installing RDoc documentation for httpclient-2.3.3...
              Installing RDoc documentation for open4-1.3.0...
              Installing RDoc documentation for rhc-1.9.6...
              

              Just to be sure there are no updates available:

              f18:~# gem update rhc
              Updating installed gems
              Nothing to update
              

              Next on the list is setting up your credentials and environment. It is wizard-style and will guide you through the process.

              [luc@f18 ~]$ rhc setup
              OpenShift Client Tools (RHC) Setup Wizard
              
              This wizard will help you upload your SSH keys, set your application namespace, and check that other programs like Git are properly
              installed.
              
              Login to openshift.redhat.com: your-account@example.com
              Password: **********
              
              
              OpenShift can create and store a token on disk which allows to you to access the server without using your password. The key is stored
              in your home directory and should be kept secret.  You can delete the key at any time by running 'rhc logout'.
              Generate a token now? (yes|no) yes
              Generating an authorization token for this client ... lasts about 1 day
              
              Saving configuration to /home/luc/.openshift/express.conf ... done
              
              Your public SSH key must be uploaded to the OpenShift server to access code.  Upload now? (yes|no) yes
              
              Since you do not have any keys associated with your OpenShift account, your new key will be uploaded as the 'default' key.
              
              Uploading key 'default' ... done
              
              Checking for git ... found git version 1.8.1.4
              
              Checking common problems .. done
              
              Checking your namespace ... none
              
              Your namespace is unique to your account and is the suffix of the public URLs we assign to your applications. You may configure your
              namespace here or leave it blank and use 'rhc create-domain' to create a namespace later.  You will not be able to create applications
              without first creating a namespace.
              
              Please enter a namespace (letters and numbers only) ||: ldelouw
              Your domain name 'ldelouw' has been successfully created
              
              Checking for applications ... none
              
              Run 'rhc create-app' to create your first application.
              [..]
              Your client tools are now configured.
              

              Create an application
Now that your environment is nearly set up, you can create your application instance on OpenShift.

              [luc@f18 ~]$ rhc create-app test zend-5.6
              Application Options
              -------------------
                Namespace:  ldelouw
                Cartridges: zend-5.6
                Gear Size:  default
                Scaling:    no
              
              Creating application 'test' ... done
              
              Waiting for your DNS name to be available ... done
              
              Downloading the application Git repository ...
              Cloning into 'test'...
              The authenticity of host 'test-ldelouw.rhcloud.com ()' can't be established.
              RSA key fingerprint is a-finger-print.
              Are you sure you want to continue connecting (yes/no)? yes
              Warning: Permanently added 'test-ldelouw.rhcloud.com' (RSA) to the list of known hosts.
              
              Your application code is now in 'test'
              
              test @ http://test-ldelouw.rhcloud.com/ (uuid: a-uuid)
              ------------------------------------------------------------------------
                Created: 5:22 PM
                Gears:   1 (defaults to small)
                Git URL: ssh://a-uuid@test-ldelouw.rhcloud.com/~/git/test.git/
                SSH:     a-uuid@test-ldelouw.rhcloud.com
              
                zend-5.6 (Zend Server 5.6)
                --------------------------
                  Gears: 1 small
              
              RESULT:
              Application test was created.
              Note: You should set password for the Zend Server Console at: https://test-ldelouw.rhcloud.com/ZendServer
              Zend Server 5.6 started successfully
              

As mentioned in the output, you should proceed to https://yourapp-yourdomain.rhcloud.com/ZendServer

              Initialize a git repository

This is not made very clear in Red Hat's documentation. When creating an application on OpenShift, a git repository is created for you. In order to push your app, you need to clone that repository locally or add it as a remote to an existing repository. Let's clone it locally for now:

              [luc@f18 ~]$ cd ~/your-project-directory
              
              [luc@f18 your-project-directory]$ git clone ssh://a-uuid@test-ldelouw.rhcloud.com/~/git/test.git/
              Cloning into 'test'...
              remote: Counting objects: 26, done.
              remote: Compressing objects: 100% (20/20), done.
              remote: Total 26 (delta 2), reused 20 (delta 0)
              Receiving objects: 100% (26/26), 6.99 KiB, done.
              Resolving deltas: 100% (2/2), done.
              

              Put some content into your git repository
What are a git repository and an application instance without some content? Not much, so let's change that.

              [luc@f18 your-project-directory]$ cat <<EOF>test/php/test.php
              <?php
              print "this is a test";
              ?>
              EOF
              

Change into the cloned repository and add your project file:

[luc@f18 your-project-directory]$ cd test
[luc@f18 test]$ git add php/test.php
              

              Commit it:

git commit -m "Add a test page"
              

              And push it:

[luc@f18 test]$ git push
              Counting objects: 6, done.
              Delta compression using up to 8 threads.
              Compressing objects: 100% (3/3), done.
              Writing objects: 100% (4/4), 398 bytes, done.
              Total 4 (delta 1), reused 0 (delta 0)
              remote: CLIENT_MESSAGE: Stopping Zend Server Console
              remote: Stopping Zend Server GUI [Lighttpd] [OK]
              remote: CLIENT_MESSAGE: Stopping Zend Server JobQueue daemon
              remote: Stopping JobQueue [OK]
              remote: CLIENT_MESSAGE: Stopping Apache
              remote: CLIENT_MESSAGE: Stopping Zend Server Monitor node
              remote: Stopping Zend Server Monitor node [OK]
              remote: CLIENT_MESSAGE: Stopping Zend Server Deployment daemon
              remote: Stopping Deployment [OK]
              remote: CLIENT_RESULT: Zend Server 5.6 stopped successfully
              remote: TODO
              remote: CLIENT_MESSAGE: Starting Zend Server Deployment daemon
              remote: Starting Deployment [OK]
              remote: [08.06.2013 11:36:30 SYSTEM] watchdog for zdd is running. 
              remote: [08.06.2013 11:36:30 SYSTEM] zdd is running. 
              remote: CLIENT_MESSAGE: Starting Zend Server Monitor node
              remote: Starting Zend Server Monitor node [OK]
              remote: [08.06.2013 11:36:31 SYSTEM] watchdog for monitor is running. 
              remote: [08.06.2013 11:36:31 SYSTEM] monitor is running. 
              remote: CLIENT_MESSAGE: Starting Apache
              remote: CLIENT_MESSAGE: Starting Zend Server JobQueue daemon
              remote: Starting JobQueue [OK]
              remote: [08.06.2013 11:36:34 SYSTEM] watchdog for jqd is running. 
              remote: [08.06.2013 11:36:34 SYSTEM] jqd is running. 
              remote: CLIENT_MESSAGE: Starting Zend Server Console
              remote: spawn-fcgi: child spawned successfully: PID: 1433
              remote: Starting Zend Server GUI [Lighttpd] [OK]
              remote: [08.06.2013 11:36:36 SYSTEM] watchdog for lighttpd is running. 
              remote: [08.06.2013 11:36:36 SYSTEM] lighttpd is running. 
              remote: CLIENT_RESULT: Zend Server 5.6 started successfully
              To ssh://a-uuid@test-ldelouw.rhcloud.com/~/git/test.git/
                 xxxxx..yyyy  master -> master
              [luc@f18 your-project-directory]$
              

Did it all work?

Let's try…

              [luc@bond test]$ wget --quiet http://test-ldelouw.rhcloud.com/test.php -O -|grep test
              this is a test
              [luc@bond test]$ 
              

              Yes!

                Host based access control with IPA

                March 2nd, 2013

                Host based access control is easy with IPA/FreeIPA, very easy.

Let's assume you want to have a host group called rhel-prod and a user group called prod-admins, and you want to let those users access the servers in the rhel-prod group via SSH from any host that can reach them. Let's call the HBAC rule prod-admins.

You can either use the web GUI or the command line interface.

Let's create the user group:

                [root@ipa1 ~]# ipa group-add prod-admins --desc="Production System Admins"
                -------------------------
                Added group "prod-admins"
                -------------------------
                  Group name: prod-admins
                  Description: Production System Admins
                  GID: 1222000004
                [root@ipa1 ~]# 
                

                Add some users to the user group:

                [root@ipa1 ~]# ipa group-add-member prod-admins --users=luc,htester
                  Group name: prod-admins
                  Description: Production System Admins
                  GID: 1222000004
                  Member users: luc, htester
                -------------------------
                Number of members added 2
                -------------------------
                [root@ipa1 ~]# 
                

And the host group:

                [root@ipa1 ~]# ipa hostgroup-add rhel-prod --desc "Production Servers"
                ---------------------------
                Added hostgroup "rhel-prod"
                ---------------------------
                  Host-group: rhel-prod
                  Description: Production Servers
                [root@ipa1 ~]#
                

Add some servers as members of the host group:

                [root@ipa1 ~]# ipa hostgroup-add-member rhel-prod --hosts=ipaclient1.example.com,ipaclient2.example.com
                  Host-group: rhel-prod
                  Description: Production Servers
                  Member hosts: ipaclient1.example.com, ipaclient2.example.com
                -------------------------
                Number of members added 2
                -------------------------
                [root@ipa1 ~]#
                

Note: the servers are comma-separated, without a space after the comma.

Let's define the HBAC rule:

                [root@ipa1 ~]# ipa hbacrule-add --srchostcat=all prod-admins
                -----------------------------
                Added HBAC rule "prod-admins"
                -----------------------------
                  Rule name: prod-admins
                  Source host category: all
                  Enabled: TRUE
                [root@ipa1 ~]#
                

                Add the user group to the rule:

                [root@ipa1 ~]# ipa hbacrule-add-user --groups prod-admins prod-admins
                  Rule name: prod-admins
                  Source host category: all
                  Enabled: TRUE
                  User Groups: prod-admins
                -------------------------
                Number of members added 1
                -------------------------
                [root@ipa1 ~]#
                

                Add the service to the rule:

                [root@ipa1 ~]# ipa hbacrule-add-service --hbacsvcs sshd prod-admins
                  Rule name: prod-admins
                  Source host category: all
                  Enabled: TRUE
                  User Groups: prod-admins
                  Services: sshd
                -------------------------
                Number of members added 1
                -------------------------
                [root@ipa1 ~]#
                

And finally, add the host group to the rule:

                [root@ipa1 ~]# ipa hbacrule-add-host --hostgroups rhel-prod prod-admins
                  Rule name: prod-admins
                  Source host category: all
                  Enabled: TRUE
                  User Groups: prod-admins
                  Host Groups: rhel-prod
                  Services: sshd
                -------------------------
                Number of members added 1
                -------------------------
                [root@ipa1 ~]#
                

Of course you can enhance the rule, for example by adding other services or by restricting access to particular source hosts.
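For illustration, the rule could be extended to also cover the sudo HBAC service, and then verified with IPA's built-in test tool (the user, host, and service names below are the ones from this example):

```shell
# Allow the sudo service through the same rule
ipa hbacrule-add-service --hbacsvcs=sudo prod-admins

# Simulate an access request to verify the rule matches
ipa hbactest --user=luc --host=ipaclient1.example.com --service=sshd
```

ipa hbactest prints the matched rules and whether access would be granted, which is handy to check before relying on a rule in production.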

                Have fun :-)

                  Automated disk partitioning on virtual machines with Cobbler

                  December 15th, 2012

The default Cobbler snippets only do simple auto-partitioning. For a more sophisticated partition layout, you need to know what kind of VM you are going to install: KVM and RHEV virtual machines name their first disk /dev/vda, Xen uses /dev/xvda, and ESX /dev/sda.

Luckily this can be figured out automatically: each virtualization vendor uses its own MAC address prefix. So we can add two nice small Cobbler snippets to do the job. In this example, I call them hw-detect and partitioning.

                  hw-detect

                  #set $mac = $getVar('$mac_address_eth0')
                  #if $mac
                  #set $mac_prefix = $mac[0:8]
                  #if $mac_prefix == "00:1a:4a"
                  # This is a RHEV virtual machine
                  #set global $machinetype = 'kvm'
                  
                  #else if $mac_prefix == "52:54:00"
                  # This is a KVM/Qemu virtual machine
                  #set global $machinetype='kvm'
                  
                  #else if $mac_prefix == "00:16:3e"
                  # This is a XEN virtual machine
                  #set global $machinetype='xen'
                  #
                  #else if $mac_prefix == "00:50:56"
                  # This is a ESX virtual machine
                  #set global $machinetype = 'esx'
                  
                  #else
# This is a physical machine
                  #set global $machinetype = 'physical'
                  #end if
                  #end if
                  

                  partitioning

                  #if $machinetype == 'kvm'
                  #set $disk='vda'
                  #else if $machinetype == 'xen'
                  #set $disk = 'xvda'
                  #else
                  #set $disk = 'sda'
                  #end if
                  # Lets install the system on /dev/$disk
                  part /boot      --fstype ext2 --size=250 --ondisk=$disk
                  part pv.0       --size=1 --grow --ondisk=$disk
                  
                  volgroup vg_${name} pv.0
                  
                  logvol /        --fstype ext4 --name=lv_root    --vgname=vg_${name} --size=4096
                  logvol /home    --fstype ext4 --name=lv_home    --vgname=vg_${name} --size=512 --fsoption=nosuid,nodev,noexec
                  logvol /tmp     --fstype ext4 --name=lv_tmp    --vgname=vg_${name} --size=1024 --fsoption=nosuid,nodev,noexec
                  logvol /var     --fstype ext4 --name=lv_var    --vgname=vg_${name} --size=2048 --fsoption=nosuid,nodev,noexec
                  logvol swap     --fstype swap --name=lv_swap    --vgname=vg_${name} --size=2048
                  

An additional “feature” of the partitioning snippet: it sets up the volume group name according to your system's name, which has been the unofficial standard for quite some time. It also sets some more secure mount options. Review them carefully to see whether they make sense for you, and edit them as needed.

                  The next step is to configure your kickstart template.

                  Standalone Cobbler
On a standalone Cobbler server, edit /var/lib/cobbler/kickstarts/your-kick-start-template.ks

                  # Detect the used hardware type
                  $SNIPPET('hw-detect')
                  # Set up default partitioning
                  $SNIPPET('partitioning')
                  

                  Bundled Cobbler
When using Cobbler bundled with Spacewalk or Red Hat Satellite, you need to edit the Kickstart profile in the WebUI.


                  Navigate to Systems -> Kickstart -> Profile. Select the Kickstart profile to be modified -> System Details -> Partitioning.

Copy the two snippets to /var/lib/cobbler/spacewalk/1, where 1 represents your OrgId.

                  Alternatively you can edit them in the WebUI as well.

To check that all is working as expected, add a system to Cobbler using the command line interface and have a look at the rendered kickstart file. This can easily be done with cobbler system getks --name=blah.
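Such a check could look like this (the profile name rhel6-vm and the MAC address are placeholders; the 00:1a:4a prefix makes hw-detect treat the system as a RHEV guest):

```shell
# Register a throw-away test system with a RHEV-style MAC address
cobbler system add --name=kstest --profile=rhel6-vm \
    --interface=eth0 --mac=00:1a:4a:00:00:01

# Render the kickstart and check which disk the snippets selected
cobbler system getks --name=kstest | grep ondisk

# Remove the test system again
cobbler system remove --name=kstest
```

If hw-detect works as intended, the grep should show all partitions placed on vda.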

Happy system installing…

                  Have fun :-)

                    RHEV 3.1 – an overview about the new features

                    December 9th, 2012
RHEV-M

                    Recently Red Hat announced the public availability of RHEV 3.1.

                    Finally, no more Windows needed for the whole software stack :-)

In 3.0, the new webadmin interface was already included as a tech preview and had its problems. Now with 3.1 it works great and looks neat. In contrast to 3.0, it now listens on the standard ports 80 and 443. This will probably help users in organizations with strict proxy policies and settings.

                    So what else is new?

The supported number of virtual CPUs per guest is now a ridiculous 160, and RAM per guest goes up to an equally ridiculous two terabytes. But these are the least important updates.

A lot of effort has gone into the storage side in particular, and long-missing features have been integrated.

From my point of view, the most important new feature is the possibility to attach disks from more than one storage domain to a virtual machine. This allows you to install the operating system on cheap SATA storage while putting the data disks on super-fast SSDs.

There is also support for live snapshots, but snapshots are (as on other platforms) kind of problematic because they are COW (copy-on-write), which can lead to I/O performance problems. Snapshots are a cool feature, e.g. for taking a snapshot before updating software. Be sure to remove the snapshot afterwards if you want to keep good I/O performance.

You can now use DirectLUN directly from the GUI without the use of hooks. DirectLUN allows you to attach FibreChannel and iSCSI LUNs directly to a virtual machine. This is great when you want to use shared filesystems such as GFS.

Another nice feature is live storage migration, which is a technology preview, meaning: unsupported for the moment. It will probably be supported in a later version. Live storage migration is handy when you need to free up some space on a storage domain and cannot shut down a VM. Be sure to power-cycle the VM in question as soon as your SLA allows it, to get rid of the snapshot (COW here again).

If you want to script things or you are too lazy to open a browser, there is now a CLI available. Have a look at the documentation.

If you want to integrate RHEV more deeply into your existing infrastructure, such as RHN Satellite, Cobbler, your-super-duper-CMDB or an IaaS/PaaS broker, there are two different APIs available. For the XML lovers, there is the previously known RestAPI, which has gained some performance improvements. For the XML haters, there is now a native Python API which allows you to access RHEV entities directly as objects in your Python code. For both APIs, have a look at the documentation.
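As a quick illustration of the RestAPI (the hostname and credentials are placeholders; -k skips certificate verification and should only be used for testing):

```shell
# Fetch the API entry point to discover the available collections
curl -k -u admin@internal:password https://rhevm.example.com/api

# List all virtual machines as XML
curl -k -u admin@internal:password https://rhevm.example.com/api/vms
```

The same calls can be made from any language with an HTTP client, which is the whole point of the RestAPI.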

                    I personally like the Python API, because a lot of other Red Hat infrastructure products come with Python APIs. So it is very easy to integrate those software pieces.

Under the hood, it is now powered by JBoss EAP 6 instead of version 5. To make it reachable on the standard ports 80 and 443, an Apache httpd with mod_proxy_ajp sits in front.
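A minimal sketch of such a reverse-proxy setup looks like this (port 8009 is the JBoss AJP default; the configuration actually shipped with RHEV-M may differ):

```apache
# Load mod_proxy and its AJP backend
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so

# Forward incoming httpd requests to JBoss EAP via AJP
ProxyPass / ajp://localhost:8009/
ProxyPassReverse / ajp://localhost:8009/
```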

                    Have fun :-)