Archive for the ‘Infrastructure’ Category

Using OTP Tokens and 2FA with FreeIPA 4.0

Sunday, July 13th, 2014

On 2014-07-08 FreeIPA 4.0 was released. One of the most interesting new features is support for two-factor authentication (2FA). I was curious about how to set it up and get it running. Unfortunately, the documentation does not say much about the OTP setup.

What is OTP and 2FA? An overview
OTP stands for One Time Password and 2FA for two-factor authentication. OTP has been around for a long time, in the beginning usually as a list of passwords printed on paper. It improved security, but was an operational nightmare.

RSA then came up with hardware tokens somewhere in the 1990s, which made OTP much more usable, and 2FA was introduced. The two factors are ownership (or possession) and knowledge: one needs to own a piece of hardware (a hardware token or a smartphone with a software token) and to know the password.

Meanwhile a lot of competing tokens are on the market, as well as so-called soft tokens. Most (or all?) of the hardware tokens are proprietary, making system configuration a nightmare (RSA PAM modules and the like). On the other hand, every proprietary solution comes with RADIUS support, and there is a fairly new approach of using a RADIUS proxy to hook those tokens up to Kerberos and thus to IPA.

However, hardware tokens and RADIUS proxies have been out of scope for my initial test. Let's go for the simpler soft-token way.

Installing FreeIPA 4.0
FreeIPA 4.0 is planned to be included in Fedora 21, which will be released later this year. For testing you can either use Fedora Rawhide (21) or Fedora 20 with an external Yum repository. I chose the latter:

wget https://copr.fedoraproject.org/coprs/pviktori/freeipa/repo/fedora-20-i386/pviktori-freeipa-fedora-20-i386.repo -O /etc/yum.repos.d/pviktori-freeipa-fedora-20-i386.repo

The rest of the installation is the same as with (Free)IPA 2 and (Free)IPA 3. Please have a look at my earlier post.

Enabling OTP
You can enable OTP either globally or per user. At the moment I recommend doing it on a per-user basis.

ipa user-mod username --user-auth-type=otp

If you want to allow users to authenticate with more than one method, use --user-auth-type={otp,password}.
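
Since the braces are just shell brace expansion, this is equivalent to passing the option twice:

ipa user-mod username --user-auth-type=otp --user-auth-type=password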

Adding a new user with OTP already enabled will probably be possible in the future. There seems to be a bug at the moment; according to ipa user-add --help it is supposed to work:

ipa user-add hwurst --first="Hans" --last="Wurst" --user-auth-type=otp

Adding a token
The best way for a user to add a token is probably the web interface; let's call it self-service. The user first authenticates with the username and the initial password set by the admin, and sets a new password. The OTP field can be ignored for the moment.

After authentication, the user can navigate to "OTP Tokens" in the top navigation bar and add a new token. This looks as follows:

The ID needs to be unique. This can cause problems when users add tokens themselves, as people tend to pick simple IDs. When no ID is provided, one is generated. IMHO the Unique ID field should not be available to ordinary users.

After adding the token, login via password only is not possible anymore (unless explicitly enabled with the user-auth-type).

After hitting "Add", a QR code is shown. This allows users to scan the code with a smartphone app such as FreeOTP or Google Authenticator.
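
Tokens can presumably also be added from the CLI with ipa otptoken-add. A minimal sketch, assuming the --type, --owner and --desc option names (check ipa otptoken-add --help for the exact syntax):

ipa otptoken-add --type=totp --owner=hwurst --desc="soft token for Hans Wurst"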

The next step users need to take is to sync the token. This can be done by returning to the login screen and clicking "Sync OTP Token" to the left of the Login button.

With a generated Unique ID (= Token ID) it is quite annoying to enter that ID. However, this usually only needs to be done once :-)

Limitations

The release notes mention concerns about scalability when using HOTP, while TOTP has a known issue that tokens can be reused, though only within a short timeframe.

I see another issue, which is a kind of chicken-and-egg problem: after a user has been added, that user can log in with the password alone until a token has been added. This ability is needed to log in to the IPA WebUI and add the token in the first place. However, password-only access should be limited to the token-add facility.

Conclusion

I'm pretty amazed how well it works, considering this is a brand new feature for FreeIPA. The engineers involved did a brilliant job! I'm looking forward to seeing this feature in Red Hat IPA/IdM sometime in the future, as 2FA is an often requested killer feature in enterprise environments.

Read more

Have fun! :-)

Intercepting proxies and spacewalk-repo-sync

Saturday, September 14th, 2013

More and more companies are using intercepting proxies to scan for malware. Those malware scanners can be problematic due to added latency.

If you are using spacewalk-repo-sync to synchronize external yum repositories to your custom software channels and you experience the famous message [Errno 256] No more mirrors to try in your log files, then you need to configure spacewalk-repo-sync.

Unfortunately, the documentation for this is a bit hidden in the man page. You need to create a directory and a configuration file.

mkdir /etc/rhn/spacewalk-repo-sync/

Create the configuration item:

echo "[main]" >> /etc/rhn/spacewalk-repo-sync/yum.conf
echo timeout=300 >> /etc/rhn/spacewalk-repo-sync/yum.conf

You need to experiment a bit with the value of the timeout setting; 5 minutes should be good enough for most environments.

/etc/rhn/spacewalk-repo-sync/yum.conf takes the same options as yum.conf; have a look at its man page for more information.
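
Since the same options as in yum.conf are accepted, you can also point spacewalk-repo-sync explicitly at your intercepting proxy if needed. A sketch, assuming a hypothetical proxy at proxy.example.com:3128:

[main]
timeout=300
proxy=http://proxy.example.com:3128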

Have fun :-)

PAM and IPA authentication for RHN Satellite

Sunday, August 12th, 2012

If you have a larger installation on your site, you may wish to have a single source of credentials not only for common system services, but for your RHN Satellite too.

This will show you how to configure your RHN Satellite Server to use PAM with SSSD. SSSD, the System Security Services Daemon is a common framework to provide authentication services. Needless to say that IPA is supported as well.

Assumptions:

  • You have a RHN Satellite running on RHEL6
  • You have an IPA infrastructure running (at least on RHEL 6.2)

Preparations
First you need to install the ipa-client on your satellite:

yum -y install ipa-client

And then join the server to your IPA environment:

ipa-client-install -p admin

Configure PAM as follows:

cat << EOF > /etc/pam.d/rhn-satellite
auth        required      pam_env.so
auth        sufficient    pam_sss.so 
auth        required      pam_deny.so
account     sufficient    pam_sss.so
account     required      pam_deny.so
EOF

Configure the RHN Satellite
Your Satellite now needs to know that users can be authenticated with PAM against IPA.

echo "pam_auth_service = rhn-satellite" >> /etc/rhn/rhn.conf

If you have users in your IPA domain with usernames shorter than five characters, you will need to add one more line to be able to create the users in RHN Satellite:

echo "web.min_user_len = 3" >> /etc/rhn/rhn.conf

After this change, restart your RHN Satellite

rhn-satellite restart
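
If you want to verify the PAM service definition before switching users over, a tool such as pamtester (not part of the base install; available from EPEL, for example) can be used. A sketch, with "someuser" being a placeholder for a real IPA user:

pamtester rhn-satellite someuser authenticate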

Configuring users
Now you can log in to your RHN Satellite with your already configured admin user and select the checkbox “Pluggable Authentication Modules (PAM)” on existing users and/or new users.

Things to be considered
It is strongly recommended to have at least one user per organization (usually an "Organization Administrator") plus the "RHN Satellite Administrator" without PAM authentication enabled. Despite the easy implementation of redundancy with IPA, this is important as a fallback for when your IPA environment has service interruptions due to maintenance or failure.

SSSD caches users' credentials on the RHN Satellite system, but only for users who have logged in at least once. The default value for offline_credentials_expiration is 0, which means no cache time limit. However, depending on your organization's security policy this value can vary. Please check the [pam] section in /etc/sssd/sssd.conf.
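
For example, to expire cached credentials after two days, the [pam] section could look like the following sketch (the value is in days; adjust it to your policy):

[pam]
offline_credentials_expiration = 2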

Further documents to read

Identity Management with IPA Part II – Kerberized NFS service

Sunday, December 25th, 2011

In part one I wrote about how to set up an IPA server for basic user authentication.

One reason NFSv4 is not that widespread yet is that it needs Kerberos for proper operation. Of course this is now much easier thanks to IPA.

Goals for this part of the guide

  • Configure IPA to serve the NFS principal
  • Configure NFS to use IPA
  • Configure some IPA clients to use Kerberos for the NFS service

Requirements

  • A running IPA service as discussed in Part I of this guide
  • An NFS server based on RHEL 6.2
  • One or more IPA clients

Let's do it
First you need to add the NFS server and its service principal to the IPA server. On ipa1.example.com run:

[root@ipa1 ~]# ipa host-add nfs.example.com
[root@ipa1 ~]# ipa service-add nfs/nfs.example.com

Next, log on to your NFS server, let's call it nfs.example.com, and install the needed additional software packages:

[root@nfs ~]# yum -y install ipa-client nfs-utils

You need to enroll your NFS server in the IPA domain. Run the following on nfs.example.com:

[root@nfs ~]# ipa-client-install -p admin

The next step is to get a Kerberos ticket and fetch the entries that need to be added to /etc/krb5.keytab:

[root@nfs ~]# kinit admin
[root@nfs ~]# ipa-getkeytab -s ipa1.example.com -p nfs/nfs.example.com -k /etc/krb5.keytab
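
You can verify that the principal really ended up in the keytab:

[root@nfs ~]# klist -kt /etc/krb5.keytab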

Before you proceed to your clients, you need to enable secure NFS, create an export and restart NFS:

[root@nfs ~]# perl -npe 's/#SECURE_NFS="yes"/SECURE_NFS="yes"/g' -i /etc/sysconfig/nfs
[root@nfs ~]# echo "/home  *(rw,sec=sys:krb5:krb5i:krb5p)" >> /etc/exports
[root@nfs ~]# mkdir /home/tester1 && cp /etc/skel/.bash* /home/tester1 && chmod 700 /home/tester1 && chown -R tester1:ipausers /home/tester1
[root@nfs ~]# service nfs restart

Assuming you have already set up one or more IPA clients, it is straightforward to enable Kerberized NFS on those systems. Log in to a client and run the following:

[root@ipaclient1 ~]# yum -y install nfs-utils
[root@ipaclient1 ~]# perl -npe 's/#SECURE_NFS="yes"/SECURE_NFS="yes"/g' -i /etc/sysconfig/nfs
[root@ipaclient1 ~]# 
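
Depending on your setup, the client may also need the GSS daemon running before it can perform a Kerberized mount. On RHEL 6 this would be something like:

[root@ipaclient1 ~]# service rpcgssd restart
[root@ipaclient1 ~]# chkconfig rpcgssd on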

Let's have a look at whether you have been successful. First look up the user's UID.

[root@ipaclient1 ~]# getent passwd tester1
tester1:*:1037700500:1037700500:Hans Tester:/home/tester1:/bin/bash
[root@ipaclient1 ~]# 

Let's mount that user's home directory manually on a client:

mount -t nfs4 nfs.example.com:/home/tester1 /home/tester1
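
If you want this mount to survive a reboot, a minimal /etc/fstab entry could look like the following sketch (pick the sec= flavour that matches your export: krb5, krb5i or krb5p):

nfs.example.com:/home/tester1  /home/tester1  nfs4  sec=krb5,rw  0 0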

To check if it is working as expected, issue:

[root@ipaclient1 ~]# su - tester1

Run ls -lan and see if the UID matches the UID you got from getent. If you see UID 4294967294, something went wrong; this is the UID of the user "nobody" when using NFSv4 on 64-bit machines.

What's next?
You will figure out when I post part III of this guide :-)

Have fun!

Identity Management with IPA Part I

Saturday, December 17th, 2011

Red Hat released RHEL 6.2 on December 6th. From my point of view, the greatest news in the release is that IPA (or now called Identity Management) is now fully supported and available in the RHEL 6 base channel without additional subscription costs.

The upstream project is freeIPA, which is available through the default Fedora repos.

About central Identity Management
IPA stands for Identity, Policy, Audit. The focus of this article is on the identification of users.

In the past, a lot of solutions have been available to centrally manage users and their access to services. Just to name a few: LDAP, Kerberos, PAM, MS Active Directory, Novell Directory Server and countless others. All of those solutions have one thing in common: they are very powerful and very complex to set up and maintain. Because they are so complex, a lot of system administrators simply do not use them and distribute SSH keys, user credentials etc. by script, without real central management: the nightmare of every security officer.

What is IPA?
The missing piece was a solution that glues LDAP and Kerberos together and is easy to install and maintain, redundant, and scalable from small office environments up to large enterprise installations. Here it comes: IPA, which makes system administrators and security managers friends again.

IPA comes with a powerful CLI and a web interface for people that are afraid of a shell.

One of the cool things about IPA is its multi-master replication feature and automatic failover facility. Clients are able to look up IPA servers with DNS SRV records, which are, of course, handled by IPA.

Let's do some stuff
Writing about how cool IPA is, is one thing; let's actually set up a highly available, centrally managed identity management system. This guide is written for RHEL 6.2 IPA servers and clients but should also work with freeIPA on Fedora 15 and later (let me know if you have any issues).

Requirements
Requirements are straightforward:

  • 1Gbyte of RAM
  • approx. 6Gbyte of disk (including operating system)
  • NTP
  • DNS entries for all IPA servers (including PTR records)
  • Fully updated RHEL 6.2 GA
  • Firefox on the IPA servers if you want to use the web interface

NTP is very important since Kerberos is quite picky about synchronized system time. Ensure it is configured and running on all involved servers.

Assumptions

  • IP network is 192.168.100.0/24
  • Domain is example.com
  • Kerberos realm is EXAMPLE.COM
  • IPA-Server 1 is ipa1.example.com
  • IPA-Server 2 is ipa2.example.com
  • IPA-Client 1 is ipa-client1.example.com
  • IPA-Client 2 is ipa-client2.example.com
  • All passwords used are "somepassword" (needless to say, choose your own passwords)
  • Main DNS is at 192.168.100.1
  • IPA clients are using ipa1.example.com and ipa2.example.com as their DNS servers.

Installation of the first IPA Server

yum -y install ipa-server bind-dyndb-ldap firefox xorg-x11-xauth

You are now ready to set up IPA. There are just a couple of questions, the non-default answers for this example are in red.

[root@ipa1 ~]# ipa-server-install --setup-dns --forwarder=192.168.100.1
The log file for this installation can be found in /var/log/ipaserver-install.log
==============================================================================
This program will set up the IPA Server.

This includes:
  * Configure a stand-alone CA (dogtag) for certificate management
  * Configure the Network Time Daemon (ntpd)
  * Create and configure an instance of Directory Server
  * Create and configure a Kerberos Key Distribution Center (KDC)
  * Configure Apache (httpd)
  * Configure DNS (bind)

To accept the default shown in brackets, press the Enter key.

Existing BIND configuration detected, overwrite? [no]: yes
Enter the fully qualified domain name of the computer
on which you're setting up server software. Using the form
<hostname>.<domainname>
Example: master.example.com.


Server host name [ipa1.example.com]:

Warning: skipping DNS resolution of host ipa1.example.com
The domain name has been calculated based on the host name.

Please confirm the domain name [example.com]:

The IPA Master Server will be configured with
Hostname:    ipa1.example.com
IP address:  192.168.100.227
Domain name: example.com

The kerberos protocol requires a Realm name to be defined.
This is typically the domain name converted to uppercase.

Please provide a realm name [EXAMPLE.COM]:
Certain directory server operations require an administrative user.
This user is referred to as the Directory Manager and has full access
to the Directory for system management tasks and will be added to the
instance of directory server created for IPA.
The password must be at least 8 characters long.

Directory Manager password: somepassword
Password (confirm): somepassword

The IPA server requires an administrative user, named 'admin'.
This user is a regular system account used for IPA server administration.

IPA admin password: somepassword
Password (confirm): somepassword

Do you want to configure the reverse zone? [yes]:
Please specify the reverse zone name [100.168.192.in-addr.arpa.]:
Using reverse zone 100.168.192.in-addr.arpa.

The following operations may take some minutes to complete.
Please wait until the prompt is returned.
Configuring ntpd
  [1/4]: stopping ntpd
  [2/4]: writing configuration
  [3/4]: configuring ntpd to start on boot
  [4/4]: starting ntpd
done configuring ntpd.
Configuring directory server for the CA: Estimated time 30 seconds
  [1/3]: creating directory server user
  [2/3]: creating directory server instance
  [3/3]: restarting directory server
done configuring pkids.

Lot of output omitted

Configuring named:
  [1/9]: adding DNS container
  [2/9]: setting up our zone
  [3/9]: setting up reverse zone
  [4/9]: setting up our own record
  [5/9]: setting up kerberos principal
  [6/9]: setting up named.conf
  [7/9]: restarting named
  [8/9]: configuring named to start on boot
  [9/9]: changing resolv.conf to point to ourselves
done configuring named.
==============================================================================
Setup complete

Next steps:
        1. You must make sure these network ports are open:
                TCP Ports:
                  * 80, 443: HTTP/HTTPS
                  * 389, 636: LDAP/LDAPS
                  * 88, 464: kerberos
                  * 53: bind
                UDP Ports:
                  * 88, 464: kerberos
                  * 53: bind
                  * 123: ntp

        2. You can now obtain a kerberos ticket using the command: 'kinit admin'
           This ticket will allow you to use the IPA tools (e.g., ipa user-add)
           and the web user interface.

Be sure to back up the CA certificate stored in /root/cacert.p12
This file is required to create replicas. The password for this
file is the Directory Manager password
[root@ipa1 ~]#

You now need to get a Kerberos ticket:

[root@ipa1 ~]# kinit admin
Password for admin@EXAMPLE.COM:
[root@ipa1 ~]#

Fire up Firefox, point it to https://ipa1.example.com and follow the link provided in the error message. You will see the instructions needed to use Kerberos as the authentication method. When importing the certificate into Firefox, REALLY check all three boxes!

Afterwards you are automatically logged in, provided you got your Kerberos ticket before (kinit admin).
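
The instructions essentially boil down to allowing SPNEGO negotiation for your domain. This can also be checked by hand in about:config; a sketch of the usual setting (assumed here, not quoted from the IPA page):

network.negotiate-auth.trusted-uris = .example.com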

Setting up a Replica
For now, we have one IPA server. If it fails, no one can log in to any system anymore. This is of course unacceptable and needs to be changed. So let's set up a replica to add high availability to our central identity management system.

Log in to ipa1.example.com and fire up ipa-replica-prepare to collect the data needed for the replica.

Non-default answers are coloured red

[root@ipa1 ~]# ipa-replica-prepare ipa2.example.com

Directory Manager (existing master) password: somepassword

Preparing replica for ipa2.example.com from ipa1.example.com
Creating SSL certificate for the Directory Server
Creating SSL certificate for the dogtag Directory Server
Creating SSL certificate for the Web Server
Exporting RA certificate
Copying additional files
Finalizing configuration
Packaging replica information into /var/lib/ipa/replica-info-ipa2.example.com.gpg
[root@ipa1 ~]#

/var/lib/ipa/replica-info-ipa2.example.com.gpg contains all the information needed to set up the replica. You need to copy it, e.g. with scp, to ipa2.example.com.
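
A sketch of that copy, assuming root SSH access to the replica and that you run ipa-replica-install from /root later on:

[root@ipa1 ~]# scp /var/lib/ipa/replica-info-ipa2.example.com.gpg root@ipa2.example.com:/root/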

Now log in to ipa2.example.com and fire up ipa-replica-install

[root@ipa2 ~]# ipa-replica-install --setup-dns --forwarder=192.168.100.1 replica-info-ipa2.example.com.gpg

Directory Manager (existing master) password: somepassword

Run connection check to master
Check connection from replica to remote master 'ipa1.example.com':
   Directory Service: Unsecure port (389): OK
   Directory Service: Secure port (636): OK
   Kerberos KDC: TCP (88): OK
   Kerberos KDC: UDP (88): OK
   Kerberos Kpasswd: TCP (464): OK
   Kerberos Kpasswd: UDP (464): OK
   HTTP Server: port 80 (80): OK
   HTTP Server: port 443(https) (443): OK

Connection from replica to master is OK.
Start listening on required ports for remote master check
Get credentials to log in to remote master
admin@EXAMPLE.COM password:

Execute check on remote master
Check connection from master to remote replica 'ipa2.example.com':
   Directory Service: Unsecure port (389): OK
   Directory Service: Secure port (636): OK
   Kerberos KDC: TCP (88): OK
   Kerberos KDC: UDP (88): OK
   Kerberos Kpasswd: TCP (464): OK
   Kerberos Kpasswd: UDP (464): OK
   HTTP Server: port 80 (80): OK
   HTTP Server: port 443(https) (443): OK

Connection from master to replica is OK.

Connection check OK
Configuring ntpd
  [1/4]: stopping ntpd
  [2/4]: writing configuration
  [3/4]: configuring ntpd to start on boot
  [4/4]: starting ntpd
done configuring ntpd.
Configuring directory server: Estimated time 1 minute

Lot of output omitted

Using reverse zone 100.168.192.in-addr.arpa.
Configuring named:
  [1/8]: adding NS record to the zone
  [2/8]: setting up reverse zone
  [3/8]: setting up our own record
  [4/8]: setting up kerberos principal
  [5/8]: setting up named.conf
  [6/8]: restarting named
  [7/8]: configuring named to start on boot
  [8/8]: changing resolv.conf to point to ourselves
done configuring named.
[root@ipa2 ~]#

On ipa2, you need a Kerberos Ticket as well:

[root@ipa2 ~]# kinit admin

Some adjustment
Unfortunately the default shell for new users is /bin/sh, which should probably be changed.

ipa config-mod --defaultshell=/bin/bash

Testing the replication
Log in to ipa1.example.com and add a new user:

ipa user-add tester1
ipa passwd tester1

You can now check whether the user is really available on both servers by firing an ldapsearch command:

ldapsearch -x -b "dc=example, dc=com" uid=tester1

Compare the results from both servers. If they are the same, you have successfully set up your two-node, replicated, highly available IPA server.
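
To be sure you really query each server, and not just whatever host your resolver picks, you can point ldapsearch at a specific server; the -h option is assumed to be available in the OpenLDAP client shipped with RHEL 6:

ldapsearch -x -h ipa1.example.com -b "dc=example, dc=com" uid=tester1
ldapsearch -x -h ipa2.example.com -b "dc=example, dc=com" uid=tester1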

What if ipa1.example.com is not available when I need to add a new user?
Simple answer: There is one way to find out….

Shut down ipa1.example.com
Log in to ipa2.example.com and add a new user:

[root@ipa2 ~]# ipa user-add tester2

Start ipa1.example.com up again and run ldapsearch once more:

ldapsearch -x -b "dc=example, dc=com" uid=tester2

Set up an IPA client
What's a centrally managed identity management server worth without a client? Nada! Let's set up a RHEL 6.2 server as a client:

[root@ipaclient1 ~]# yum install ipa-client

After installation the setup program needs to be fired up. Non-default answers are coloured red

[root@ipaclient1 ~]# ipa-client-install -p admin
Discovery was successful!
Hostname: ipaclient1.example.com
Realm: EXAMPLE.COM
DNS Domain: example.com
IPA Server: ipa1.example.com
BaseDN: dc=example,dc=com


Continue to configure the system with these values? [no]: yes
Synchronizing time with KDC...
Password for admin@EXAMPLE.COM: somepassword

Enrolled in IPA realm EXAMPLE.COM
Created /etc/ipa/default.conf
Configured /etc/sssd/sssd.conf
Configured /etc/krb5.conf for IPA realm EXAMPLE.COM
Warning: Hostname (ipaclient1.example.com) not found in DNS
DNS server record set to: ipaclient1.example.com -> 192.168.100.253
SSSD enabled
NTP enabled
Client configuration complete.
[root@ipaclient1 ~]# 

Testing the login
Log in to your client, you will need to change your password first:

[luc@bond ~]$ ssh 192.168.100.253 -l tester1
tester1@192.168.100.253's password: 
Password expired. Change your password now.
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for user tester1.
Current Password: 
New password: 
Retype new password: 
passwd: all authentication tokens updated successfully.
Connection to 192.168.100.253 closed.
[luc@bond ~]$ ssh 192.168.100.253 -l tester1
tester1@192.168.100.253's password: 
Last login: Sat Dec 17 19:40:10 2011 from bond.home.delouw.ch
Could not chdir to home directory /home/tester1: No such file or directory
-bash-4.1$ 

In this case we do not have a home directory for the user tester1. NFS automount of home directories will be discussed in Part II or III of this guide.

Now log out of ipaclient1.example.com and shut down ipa1.example.com to check whether things still work when one IPA server has failed. Needless to say, it works… (okay, there is a delay of a few seconds).

Drawbacks
IPA is not as powerful as MS Active Directory or Novell Directory. There is no support (and most probably never will be) for multiple and/or custom LDAP schemata, in order to keep it simple and easily maintainable; this actually turns the drawback into a feature. If you need features such as custom LDAP schemata, you may have a look at RHDS.

Conclusion
Never in the history of information technology was it easier to set up and maintain a centrally managed identity management system. With just a few minutes of work you will have a basic setup of a highly available, fault-tolerant and scalable identity management server.

Outlook to Part II of this guide
IPA does not only allow users to be authenticated, it can also restrict them to using particular services only on particular systems. Thanks to Kerberos, it also provides single-sign-on capabilities without re-entering a password.

As soon as I get some time I’ll write about the following topics:

  • Passwordless (and key-less) SSH logins
  • Kerberized web applications
  • Centralized sudo management

Having fun?
Yes, definitely. I have fun with IPA, and as a Linux consultant I expect a lot of work waiting for me.

Cross distribution system management with Spacewalk

Tuesday, May 24th, 2011

In a perfect world, all systems in a data centre run the same Linux operating system: a homogeneous system landscape. In real life, things work differently. Windows systems are out of focus in this post; let's concentrate on Linux systems.

Most companies with a large Linux base are either RHEL shops or SLES shops. A lot of RHEL users have some SLES systems running, and SLES users likewise run some RHEL systems. Some companies run additional Debian systems.

How do you handle those heterogeneous system landscapes, those real-world scenarios? Let's assume a company runs 500 RHEL systems, 20 SLES systems and some 10 Debian systems.

At the moment, such Linux users are spending a lot of money on RHN Satellite and SUSE Manager for the base software management subscriptions. Additionally, there are per-system costs for management, provisioning and other modules. The Debian systems are handled manually: a lot of additional cost for a few out-of-strategy systems.

The solution is Spacewalk, the upstream project of RHN Satellite and at the same time the upstream of the recently released SUSE Manager. While SUSE offers support for RHEL systems, Red Hat does not (yet) offer support for SLES systems in RHN Satellite.

Spacewalk version 1.4 includes code contributions from SUSE, and a student at Brno University of Technology contributed Debian support for Spacewalk as part of his master's thesis.

While the SUSE support is already quite stable, the Debian-related code still has some rough edges. No wonder: SUSE uses RPM for its packaging, while Debian has its own packaging system, which makes it much easier for SUSE to get Spacewalk ready for its distribution.

At the moment the Debian support can still be called experimental, but the goal of the Spacewalk project is to have it fully functional in future releases.

The goal should be that both management systems from the major enterprise Linux vendors, Red Hat and SUSE, support each other's distribution in their Spacewalk-based products. Debian is a niche player in the enterprise Linux environment and should also be supported by both products, RHN Satellite and SUSE Manager. Nobody expects to get system support for those distributions from the competing vendor, just support for managing them.

Further reading:
Registering Clients
Deb support in Spacewalk

Have fun!

Implement a highly available Cobbler provisioning system

Sunday, April 24th, 2011

A not so well known feature of Cobbler is its replication facility. It allows you to create a highly available system provisioning setup. The whole setup is straightforward.

Background
Today people tend NOT to back up systems; only data is backed up. In the case of a system failure, they just re-provision the system and automatically configure it with configuration management tools such as Puppet, CFEngine or RHN Satellite.

Not only do your configuration management systems need to be redundant, you also need to replicate your Cobbler provisioning systems.

It even works in a setup where you have multiple datacenters for disaster recovery, with one or two Cobbler servers at each site.

Luckily, Cobbler comes with that feature out-of-the-box.

Assumptions

  • Both Cobbler servers are in the same network
  • Your master Cobbler server has a DNS entry cobbler-master, your slave is cobbler-slave

Let's do it

Unfortunately, Cobbler replication does not currently work when SELinux is running in enforcing mode. You need to turn it off on both Cobbler servers:

setenforce 0

Hopefully this will change soon.
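
Note that setenforce 0 only lasts until the next reboot. If you need SELinux to stay out of enforcing mode permanently, you would also adjust /etc/selinux/config, for example:

sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config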

For an initial replication, just run the following on the slave server:

cobbler replicate --master=cobbler-master --sync-all

It is recommended to run this command nightly from a cron job to ensure the latest RPM packages of your distros stay in sync. Feel free to run the command manually at any time if you need to sync the whole Cobbler system immediately.
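
A minimal cron entry for the nightly sync could look like this (a sketch, assuming a file /etc/cron.d/cobbler-replicate on the slave):

# /etc/cron.d/cobbler-replicate
0 2 * * * root cobbler replicate --master=cobbler-master --sync-all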

Update on-the-fly
You can use a Cobbler trigger script that automatically connects to the slave and triggers a replication with the master whenever a system is added or deleted.

I’m using the following script to do so:

#!/bin/bash
#
# This is a Cobbler trigger script that triggers a replication on the slave
# server. You need to create a private/public key-pair on the master and 
# add the public key to ~/.ssh/authorized_keys on the slave

SLAVE=cobbler-slave

ssh $SLAVE "cobbler replicate --master=cobbler-master --systems=* --prune"
ssh $SLAVE "cobbler sync"

Put this script in /var/lib/cobbler/triggers/add/system/post and create a symlink in /var/lib/cobbler/triggers/delete/system/post on your master server.
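
A sketch of those steps, assuming the script was saved as replicate.sh (the file name is arbitrary):

chmod +x /var/lib/cobbler/triggers/add/system/post/replicate.sh
ln -s /var/lib/cobbler/triggers/add/system/post/replicate.sh /var/lib/cobbler/triggers/delete/system/post/replicate.sh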

Security
Cobbler manages the file /etc/rsyncd.conf. So far so good, but be aware that EVERY host can rsync your distros, kickstarts, snippets etc. unless you take some security measures with a firewall and/or other restrictions.

Possibly the simplest way is configuring xinetd.

A sample /etc/xinetd.d/rsync, where 192.0.2.2 is the IP of your slave, would be:

# default: off
# description: The rsync server is a good addition to an ftp server, as it \
#       allows crc checksumming etc.
service rsync
{
        disable         = no
        only_from       = 192.0.2.2
        flags           = IPv6
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/bin/rsync
        server_args     = --daemon
        log_on_failure  += USERID
}

DHCP redundancy
You also need to operate a DHCP server on both Cobbler servers. If you have both Cobbler servers on the same subnet, you may have a look at how to configure ISC dhcpd failover. A good guide (untested) is at http://www.madboa.com/geek/dhcp-failover/. The lazy ones just turn off dhcpd on the slave and activate it as needed.

Conclusion
As I wrote before, Cobbler is just great. Not only are "batteries included"; high availability, disaster safety and other cool stuff are included as well and can be implemented quickly.

Having fun? I rarely needed to re-provision systems because they went foobar, and hopefully I will never experience a disaster such as a burned-down, exploded or otherwise lost datacenter. Cobbler does not protect you from such disasters, but it helps you recover from them.

SUSE Manager based on Fedora Spacewalk

Thursday, March 3rd, 2011

SUSE announced the availability of SUSE Manager. Having a closer look at it, one recognizes that it is based on Fedora Spacewalk. It is a clone of the Red Hat Satellite.

A few weeks ago I was puzzled to see a post on the spacewalk-devel mailing list: SUSE was contributing some code. What the heck? Now it is clear: they are using Spacewalk as the source for their own product. Spacewalk is no longer just the upstream of RHN Satellite, but also a major tool for managing SLES systems.

The open source way
It is good practice to share knowledge and code between different distributions. SUSE profits from the work Red Hat has done before, and Red Hat profits from the contributions of SUSE. IMHO this is the way open source software should work.

The price tag
SUSE claims “SUSE Manager allows you to save up to 50 percent for Linux support”. Really?

Let's have a look at "How to buy". The price is exactly the same as for RHN Satellite: USD 13,500. Really the same price tag? Let's dig deeper into the features and click on "Database support". One would read:

"SUSE Manager provides a built-in Oracle XE database, but can also leverage existing 
Oracle 10g or 11g databases, to locally store all data related to the 
managed Linux servers."  

Means: with the free Oracle XE database delivered with SUSE Manager you can manage just a few systems. If you want to manage more systems, you need to buy a very expensive Oracle license which, at the very least, doubles the price tag of SUSE Manager.

And Debian? There is some work going on; maybe I will soon write about Spacewalk and what it can do for and with Debian.

Conclusion
Since SUSE was not in a hurry to release its new product, I cannot understand why it did not help the Spacewalk project get PostgreSQL support production-ready before the release. This would have provided its customers (and the Spacewalk community) a real benefit.

I hope that SUSE will keep contributing code to Spacewalk; it is now in the interest of users of both distributions.

Have fun!

Updating a distro in cobbler

Monday, February 28th, 2011

A few weeks ago RHEL 5.6 was released, and the installation media were updated as well. So it is time to get the new media into Cobbler to deploy the latest dot release when provisioning new systems.

Let's assume your profile name is rhel5-x86_64, you have an existing distro named rhel55-x86_64, and you want to replace it with rhel56-x86_64.

Let's start with importing the new distro:

# Mount the ISO as loop back
mount /some/where/rhel-server-5.6-x86_64-dvd.iso /mnt/rhel56iso -o loop

# Import the Install Media
cobbler import --path=/mnt/rhel56iso --name rhel56-x86_64

Cobbler creates a profile consisting of the name provided at import plus the architecture. We do not want that profile, so let's delete it:

cobbler profile remove --name=rhel56-x86_64

The next step is to change the distribution used by the profile

cobbler profile edit --name=rhel5-x86_64 --distro=rhel56-x86_64
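
Depending on your setup it may be a good idea to regenerate the provisioning metadata afterwards:

cobbler sync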

If you want to delete the old distro, first check whether a profile is still using it; otherwise its child objects would be deleted as well.

cobbler profile find --distro=rhel55-x86_64

If the result is null, it is safe to remove the old distribution. If you want to keep it for a fallback scenario, just omit this step.

cobbler distro remove --name=rhel55-x86_64

Cobbler is just cool stuff :-) Want to know more? Visit https://fedorahosted.org/cobbler/wiki.

Have fun!

A review of RHEV

Sunday, February 27th, 2011

In the past few weeks I had the chance to have a closer look at the current release, 2.2. The reason is that I'm working on a project using RHEL 6 clients as virtual desktops. For a proof of concept I set up a test environment in the lab. Due to lack of time I was not able to test every single feature.

After reading some docs, it was amazingly easy and quick to install.

Test environment
The tests have been made on the following hardware:

  • 2 Xeon servers with a total of 8 cores and 24Gbyte on each host. OS is RHEL5.6. Those hosts are called RHEV-H (H like Hypervisor)
  • 1 Windows 2008R2 server (64bit) with 4Gbyte RAM as RHEV-M (M like management)
  • 1 dedicated NetApp filer serving storage over NFS
  • 1 RHEL 5.6 Server providing “cheap” NFS storage
  • 3 different networks connected to 3 NICs
  • 1 RHEL 5.6 “thin” client with spice-xpi as client
  • 1 Windows 2008R2 Server with spice-activex as client (actually on the management server)

VM’s (All 64bit):

  • 20 virtual desktops with RHEL 6.0 clients using one core and 2Gbyte of RAM each
  • 2 Virtual servers with RHEL 6.0 server using 2 cores and 2Gbyte of RAM each
  • 2 Virtual servers with RHEL 5.6 server using 2 cores and 2Gbyte of RAM each

Management Portal
The Management interface is the central point of administration. It is not always as intuitive as expected. If you are new to RHEV, you can get confused. A big plus is the search functionality. One can search for storage, VMs, hosts and any other item in the environment.

User Portal
The user portal is very simple, lean and clean. Users see the machines assigned to them, can power the machines on and off, and can connect to powered-on VMs; the client then appears on the desktop.

The user portal resides on the management server and is just the connection broker. As soon as a user connects to its desktop or server, the connection will be made directly to the spiced qemu-kvm running on one of the hypervisors.

Client
At the moment there are two different clients supported:

  • spice-activex with Internet Explorer 7+ on Windows XP, Vista and 7
  • spice-xpi with Firefox for RHEL5 and RHEL6

If you do not fear the work, you probably can get the spice-xpi client also running on Fedora 13 and other Linux distributions.

The client communicates with the VM over SPICE (Simplified Protocol for Independent Computing Environments). It can forward USB devices as well as sound and other multimedia. For more information about the protocol, please have a look at http://www.spice-space.org/home.html.

Storage
The first step to set up a RHEV environment is to define and set up the storage.

There are 3 types of storage:

  • ISO, where you upload your install-media for your virtual machines.
  • Data, where the VM images are stored.
  • Export, where I have no clue (yet) what it is good for.

Usually the storage is a NFS or iSCSI server, FC storage is also supported. The storage server creates snapshots of volumes for backing up the stuff.

Hosts (hypervisors)
You can either use RHEV-H as an “embedded” hypervisor which comes as a stripped-down RHEL5 distribution (the ISO is just about 75Mbyte IIRC) or you can set up RHEL5 and use it as a hypervisor.

First I tried RHEV-H; installing it is manual work. You need to burn a CD-ROM, put it in your server and enter some settings such as the network configuration. In the short time available I did not try to provision RHEV-H with Cobbler. According to the documentation one can pass boot parameters to partially automate the installation.

Because I have a Cobbler server handy, I decided to use RHEL 5 as the hypervisor. Just be sure you subscribe the system to the channels "rhel-x86_64-server-vt-5" and "rhel-x86_64-rhev-mgmt-agent-5". You also need to allow root to log in via SSH; you can change this later, after the host is registered on the RHEV-M server.

After the setup of the RHEL5 servers you just need to tell RHEV-M that there is a new host available. Tell RHEV-M about the root password and all the needed additional software gets installed automatically.

That’s a pretty lean procedure to install and set up hosts. Hopefully there will soon be a way to fully automate this, w/o using MS Powershell.

The technologies on the hosts are the well-known, stable and mature KVM and qemu-kvm. Parts of oVirt are used as well.

Networking
I do not know how many networks you can add, but I think enough even for large environments. The network configuration part in RHEV-M shows a list of available Ethernet ports on each host. Just assign them a static network and IP address and you’re done. Make sure you define the network you want to add on all hosts in a particular datacenter.

Ensure that firewalls between the users and VM’s networks are configured accordingly to avoid connection problems.

Sizing and performance

  • Memory overcommitment
    Thanks to KSM (Kernel Samepage Merging) one can overcommit memory quite a bit. Ensure you have a LOT of swap on your hosts. Why? It takes some time until KSM kicks in and frees memory pages. While those pages are being collected, swap space is needed to avoid a visit from the OOM killer.

    I once faced a complete crash when putting one host into maintenance mode and all VMs were migrated to the second host. That's exactly why you should have LOTS of swap space when using memory overcommitment. While KSM is searching for identical pages, performance is degraded.

    Under normal circumstances, this will never happen since the virtual hosts are load balanced. This means VM’s are distributed to have the optimal performance.

  • CPU overcommitment
    This depends very much on your users' computing needs. For normal desktops and servers you can overcommit CPUs by at least 200% and the performance is still fine. For CPU-intensive workloads it is not recommended to overcommit CPUs.
  • Number of hosts
    It's recommended to start with at least three hosts to avoid temporary performance penalties if you need to put a host into maintenance mode (see also "Memory overcommitment").

    The performance is surprisingly good, the user experience is nearly the same as on physical desktops. For servers, KVM is already known for its good performance.

Drawbacks
Unfortunately you cannot hot-add or hot-remove virtual CPUs, NICs or storage. Despite this being supported by the underlying technologies, your VM must be powered off to add or remove resources.

Another drawback is the lack of different storage classes; this actually makes it impossible to use the product in large enterprise environments. Since it makes no sense to back up swap drives, backing them up is a waste of backup space.
It is also not possible to have different storage for different needs, let's say a bulk RAID5 consisting of cheap 2Tbyte SATA disks for operating systems and a RAID10 consisting of fast 15k SAS or FC disks for data and swap. It is also not possible to live-migrate VMs from one storage to another.

As of today, SPICE is only useful on LAN networks. Depending on the workload it can demand a lot of bandwidth. For usual office workloads, 1 Mbps per connection is fine. A WAN-optimized version is under development and should be released "later this year".

The two supported clients (spice-xpi and spice-activex) can be problematic when the clients are not in an environment controlled by you. Why? Lots of enterprises prevent their users from installing ActiveX applications and Firefox add-ons for security reasons. It would be better to have some kind of "fat client" which does not need to be installed, or a Java applet which does the job.

At the moment it is still required to have a Windows 2008 R2 server installed and to use MS technologies such as MSSQL, .NET, PowerShell and Active Directory. Because you cannot manage RHEV-M with a Satellite, updates need to be downloaded and installed manually.

RHEV-M is not only the management server, it is also the connection broker for incoming client connections, and it is a single point of failure. Maybe one could build an MS cluster with it, no clue. It is also not possible to self-host RHEV-M on the RHEV-H hosts. Already connected clients should not be disconnected in the case of RHEV-M failing (to be tested).

Red Hat is working on a Linux-only RHEV-M, as you can read in a comment from a Red Hat employee on an earlier post.

The same comment says the WAN problems should be solved in the near future.

Conclusion
Frankly, from my point of view RHEV is not yet "enterprise ready", because of the drawbacks mentioned above, especially the storage shortcomings and RHEV-M being a SPoF.

For smaller environments of up to, let's say, ~50 servers or ~200 desktops it is good enough.

Nevertheless, I think RHEV has huge potential for the future. Compared to VMware ESX, KVM's technology is much better and scales better. At the moment VMware's USP is its management software, not the hypervisor. As soon as Red Hat irons out the major drawbacks, I expect a boost for RHEV.

Red Hat is keen to improve the product; I guess a lot of improvements will be announced in 2011.

Have fun!