Posts Tagged ‘Red Hat’

Host based access control with IPA

Saturday, March 2nd, 2013

Host based access control is easy with IPA/FreeIPA, very easy.

Let’s assume you want to have a host group called rhel-prod and a user group called prod-admins, and you want to let those users access the servers in the rhel-prod group by SSH from any host that can reach the servers. Let’s call the HBAC rule prod-admins.

You can either use the web GUI or the command line interface.

Let’s create the user group:

[root@ipa1 ~]# ipa group-add prod-admins --desc="Production System Admins"
-------------------------
Added group "prod-admins"
-------------------------
  Group name: prod-admins
  Description: Production System Admins
  GID: 1222000004
[root@ipa1 ~]# 

Add some users to the user group:

[root@ipa1 ~]# ipa group-add-member prod-admins --users=luc,htester
  Group name: prod-admins
  Description: Production System Admins
  GID: 1222000004
  Member users: luc, htester
-------------------------
Number of members added 2
-------------------------
[root@ipa1 ~]# 

And the host group:

[root@ipa1 ~]# ipa hostgroup-add rhel-prod --desc "Production Servers"
---------------------------
Added hostgroup "rhel-prod"
---------------------------
  Host-group: rhel-prod
  Description: Production Servers
[root@ipa1 ~]#

Add some servers as members of the host group:

[root@ipa1 ~]# ipa hostgroup-add-member rhel-prod --hosts=ipaclient1.example.com,ipaclient2.example.com
  Host-group: rhel-prod
  Description: Production Servers
  Member hosts: ipaclient1.example.com, ipaclient2.example.com
-------------------------
Number of members added 2
-------------------------
[root@ipa1 ~]#

Note: the servers are comma-separated, without a space after the comma.

Let’s define the HBAC rule:

[root@ipa1 ~]# ipa hbacrule-add --srchostcat=all prod-admins
-----------------------------
Added HBAC rule "prod-admins"
-----------------------------
  Rule name: prod-admins
  Source host category: all
  Enabled: TRUE
[root@ipa1 ~]#

Add the user group to the rule:

[root@ipa1 ~]# ipa hbacrule-add-user --groups prod-admins prod-admins
  Rule name: prod-admins
  Source host category: all
  Enabled: TRUE
  User Groups: prod-admins
-------------------------
Number of members added 1
-------------------------
[root@ipa1 ~]#

Add the service to the rule:

[root@ipa1 ~]# ipa hbacrule-add-service --hbacsvcs sshd prod-admins
  Rule name: prod-admins
  Source host category: all
  Enabled: TRUE
  User Groups: prod-admins
  Services: sshd
-------------------------
Number of members added 1
-------------------------
[root@ipa1 ~]#

And finally add the host group to the rule:

[root@ipa1 ~]# ipa hbacrule-add-host --hostgroups rhel-prod prod-admins
  Rule name: prod-admins
  Source host category: all
  Enabled: TRUE
  User Groups: prod-admins
  Host Groups: rhel-prod
  Services: sshd
-------------------------
Number of members added 1
-------------------------
[root@ipa1 ~]#

Of course you can enhance the rule by adding other services, restricting the access from particular source hosts, and so on.
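
If you want to verify the rule before relying on it, IPA ships an HBAC test tool. A quick check could look like this (ipa hbactest is available in the IPA version shipped with RHEL 6.2 and later; the exact options may vary slightly with your version):

[root@ipa1 ~]# ipa hbactest --user=luc --host=ipaclient1.example.com --service=sshd

The output tells you whether access would be granted and which rules matched.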

Have fun :-)

One year in Berlin, one year at Red Hat

Sunday, July 1st, 2012

In March 2011, I signed my contract with Red Hat and moved from Zurich to Berlin, as posted here in April 2011.

After one year it is time for a review of my “new life”. All at once, a lot of things changed in my life: new country, new city, new apartment, new job. Quite a lot of stuff.

At my former job, I had a notice period of three months, which gave me some time for planning the move. A lot of bureaucracy was waiting for me, both in Switzerland and in Germany.

Getting an apartment
The first challenge was to get an apartment in Berlin. I went to Linux Tag 2011 in May to have a look at quite a few apartments. It was not that easy, as different people had warned me. Gentrification is not only a problem in Zurich, but also in Berlin.

The chicken-and-egg problem: in order to get an apartment, you need a “Schufa-Auszug”, a paper that “certifies” your creditworthiness. Usually it is only possible to get this paper when you are already a resident of Germany. How do you become a resident without an apartment, when you need a Schufa-Auszug to get an apartment and need residency in Germany to get a Schufa-Auszug?

So I went to a Schufa shop, and it took me 30 minutes of explaining to the clerk that the processes at real estate brokers are completely idiotic, but that I needed the paper. I finally got the Schufa-Auszug with my old address in Zurich.

Finally I was able to sign a contract with a landlord. The apartment and its location are very nice and very close to the excellent public transport (although Berliners grumble about the S-Bahn trains, it is excellent compared to, for example, Munich).
As you can see in the picture, it is close to Alexanderplatz, the new city center of Berlin, just two underground stations away to the west. Two underground stations to the east, and I find myself in the party neighborhood (Kiez in Berlin-speak) at Simon-Dach-Strasse. Walking south and crossing the Spree river, I find myself in the vibrant Berlin club scene.

A special feature of the apartment is the rooftop terrace where neighbors meet for partying. Quite uncommon for Germany: there are washing machines available, so I don’t need to buy one. Also quite uncommon in Germany: the apartment has a kitchen, so no hassle of buying that stuff.

Preparing the move
The usual stuff like getting rid of old things and putting the rest into moving boxes is straightforward, as is finding the movers. More complex is coordinating the due dates for all of it.

Paperwork part one
Since Switzerland is not in the customs union of the EU, this adds more complexity. I needed two papers: the stamped registration form from Berlin, and the stamped leaving form from Zurich.

Getting the first form is straightforward: just make an online reservation at the registration office (Meldeamt at the Bezirksamt), go there, and walk out after 10 minutes. Myth busted: German bureaucracy is always complex.

The latter one cost a shitload of money. You get it from the Zurich tax office, but only if you pay the guesstimated taxes upfront, in cash! Of course this means you need to fill out a lot of forms upfront, what an annoyance. Myth busted: Swiss bureaucracy is always easy.

The next task was to get health insurance. Since a lot of Germans are living in Switzerland, I got some good advice upfront, easy stuff. Then it was time to cancel all contracts such as Internet access, mobile phone and insurances, and to get new contracts in Berlin.

Emigration
I had an early start at Red Hat, so I left Switzerland on the 26th of June, went to a training in Farnborough, UK, spent the weekend in London, went straight on to Munich for another training and finally arrived in Berlin on the 5th of July 2011. In fact I was homeless for 1.5 weeks, sleeping in hotels. For the first two days my furniture had not yet arrived, so I slept on the floor in a sleeping bag.

Paperwork part two
Soon after registering in Berlin, I got my taxpayer ID number. I also needed to fill out a form with a rather complex title, “Antrag auf Bescheinigung für den Lohnsteuerabzug” (something like “application for a certificate for the income tax deduction”). I needed to show up at the Finanzamt (tax office) and, unlike the form’s title suggests, it was painless.

Another important task was the application to exchange my Swiss driver’s license for a German one. The pitfall is that one needs to apply within the first six months after immigration or jeopardize the whole license. Still, I had to wait more than two months to get the license exchanged.

Assimilation
Left-wing politicians do not like the word. From my point of view, foreigners should assimilate to a reasonable degree. For me that was very easy, since Switzerland and Germany have a lot in common: the same political and cultural values and – for northern Swiss people – the same language (well, kind of). Of course I needed to adapt my German, getting rid of typical Helvetisms which are not understood in Germany, or are understood in the wrong way, which can annoy some Germans.

In the meantime I got assimilated even better: I watch soccer matches ;-)

The foreigner
Everyone is a foreigner, nearly everywhere (unknown quote). So yes, I’m living as a foreigner now.

Almost everyone welcomed me in Berlin and the other German cities where I was working, and I quickly made new friends. The average German is generally more open-minded and cosmopolitan than the average Swiss (especially when comparing Berlin with Zurich).

When I look back at Switzerland and see how some people treat Germans: it’s a shame! I wish that this mindset will change in Switzerland and that Germans will be treated in the same friendly way as I’m treated in Germany.

Living in Berlin
The crazy thing about my working contract with Red Hat is: I was offered a choice of four locations where Red Hat has offices: Munich, Stuttgart, Frankfurt and Berlin. I had already visited the first three cities multiple times, but I had never been to Berlin before, I had just heard it is a nice city. Well, Munich is beautiful but expensive and the airport is only reachable by air. Stuttgart is a bit boring, Frankfurt hmm… So I took the risk and chose to move to Berlin without much knowledge about the city.

Well, I’m now living in Friedrichshain, just north of Kreuzberg.

Berlin is cool! I mean: really cool! I guess you cannot find any other European metropolis which offers a greater diversity of culture, food and of course people. Going to clubs in Berlin on weekends is a delight. You can find clubs for almost every style of music.

Culinary: well, the Currywurst and the Döner Kebap were invented in Berlin, but these are not the real highlights. In the Simon-Dach-Kiez as well as near Alexanderplatz one will find restaurants with food from all over the planet. Thai, Vietnamese, Bulgarian, Chinese, Russian, Korean, Japanese, Italian… you will find them all. There are even Swiss restaurants, but I have not made it there yet.

Public transport: awesome! An S-Bahn train every two to five minutes, and the same applies to the underground trains. During the weekends, S- and U-Bahn operate the whole night, without any idiotic night surcharge, and of course there is a train approximately every 15 minutes. From my point of view, the public transport in Zurich looks like a really bad (but expensive) joke.

Long-distance high-speed ICE trains are also awesome: Berlin-Hamburg (approx. 300km) in 1:39h, Zurich-Geneva (approx. 300km) in 2:43h.

Homesickness
The first few weeks were very hard for me. Yes, I was homesick. I left all my friends in Switzerland and I miss the beautiful old towns of Zurich and Winterthur as well as the mountains. What I really miss is the “third dimension”: it is all flat here, the highest elevation in Berlin is the Müggelberge (Berg means mountain, what a joke) at 114.7m above sea level. Before I left Switzerland I was not aware of how beautiful the Alps are, it was just a matter of course to always have them in sight.

In the last 12 months I have visited Switzerland three times. I enjoyed those trips, visiting my old friends, having a BBQ in the countryside and strolling through the old towns of Winterthur and Zurich.

What’s the better country for living? Germany or Switzerland?
This is a question I hear all the time. My answer is always the same: neither of them is better, those countries are just different, but not that much.

My job as Senior Linux Consultant at Red Hat
When Red Hat approached me, I was surprised at first, then I got a contract, and I got it very fast :-)

It is a very interesting and challenging job. As a consultant I’m visiting a lot of customers to help them with particular technologies in their projects. Every customer has its own processes and infrastructure, so I need to adapt very fast. I also travel a lot; customers are usually located in central Europe, mostly in Germany. Sometimes it happens that I can travel a bit further, for example, my customer engagement in Kuala Lumpur, Malaysia was an impressive experience.

Travelling means seeing a lot of different locations, which makes it even more interesting. The drawback is being at home only on weekends.

At the end of the day, Red Hat was the best thing that could happen to me, an open source guy. Lots of nice, very competent and open-minded colleagues in an international team, and the possibility to always be in touch with the latest and greatest technology in the open source world.

Having fun? Yes, sure…

Identity Management with IPA Part I

Saturday, December 17th, 2011

Red Hat released RHEL 6.2 on December 6th. From my point of view, the greatest news in this release is that IPA (now called Identity Management) is fully supported and available in the RHEL 6 base channel without additional subscription costs.

The upstream project is FreeIPA, which is available through the default Fedora repos.

About central Identity Management
IPA stands for Identity, Policy, Audit. The focus of this article is on the identification of users.

In the past, there have been a lot of solutions available to centrally manage users and their access to services. Just to name a few: LDAP, Kerberos, PAM, MS Active Directory, Novell Directory Server and countless others. All of those solutions have one thing in common: they are very powerful and very complex to set up and maintain. Because they are so complex, a lot of system administrators just do not use them and distribute SSH keys, user credentials etc. by script without real central management, the nightmare of every security officer.

What is IPA?
The missing solution was a glue of LDAP and Kerberos that is easy to install and maintain, redundant, and scalable from small office environments up to large enterprise installations. Here it comes: IPA, which makes system administrators and security managers friends again.

IPA comes with a powerful CLI and a web interface for people who are afraid of a shell.

One of the cool things about IPA is its multi-master replication feature and automatic failover facility. The clients are able to look up IPA servers with DNS SRV records, which are – of course – handled by IPA.
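
If you are curious how that lookup works, you can query the SRV records yourself once the servers from the example below are installed. A quick check with dig (assuming the example.com domain used in this guide):

dig +short -t SRV _ldap._tcp.example.com
dig +short -t SRV _kerberos._udp.example.com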

Let’s do some stuff
It is one thing just to write about how cool IPA is, so let’s set up a highly available, centrally managed identity management system. This guide is written for RHEL 6.2 IPA servers and clients, but it should also work with FreeIPA on Fedora 15 and later (let me know if you run into issues).

Requirements
Requirements are straightforward:

  • 1Gbyte of RAM
  • approx. 6Gbyte of disk (including operating system)
  • NTP
  • DNS entries for all IPA servers (including PTR records)
  • Fully updated RHEL 6.2 GA
  • Firefox on the IPA servers if you want to use the web interface

NTP is very important since Kerberos is quite picky about synchronized system time. Ensure it is configured and running on all involved servers.
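
On a stock RHEL 6 system, making sure ntpd is in place before starting the IPA installation could look like this (a minimal sketch; point /etc/ntp.conf to the NTP servers of your environment):

yum -y install ntp
chkconfig ntpd on
service ntpd start
ntpq -p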

Assumptions

  • IP network is 192.168.100.0/24
  • Domain is example.com
  • Kerberos realm is EXAMPLE.COM
  • IPA-Server 1 is ipa1.example.com
  • IPA-Server 2 is ipa2.example.com
  • IPA-Client 1 is ipa-client1.example.com
  • IPA-Client 2 is ipa-client2.example.com
  • All passwords used are “somepassword” (needless to say, you should choose your own passwords)
  • Main DNS is at 192.168.100.1
  • IPA-Clients are using ipa1.example.com and ipa2.example.com as their DNS servers.
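
The last assumption simply means that /etc/resolv.conf on the clients points to the two IPA servers. A sketch could look like the following; the IP of ipa1 is taken from the installer output further below, while the IP used here for ipa2 is just a made-up example:

search example.com
nameserver 192.168.100.227
nameserver 192.168.100.228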

Installation of the first IPA Server

yum -y install ipa-server bind-dyndb-ldap firefox xorg-x11-xauth

You are now ready to set up IPA. There are just a couple of questions, the non-default answers for this example are in red.

[root@ipa1 ~]# ipa-server-install --setup-dns --forwarder=192.168.100.1
The log file for this installation can be found in /var/log/ipaserver-install.log
==============================================================================
This program will set up the IPA Server.

This includes:
  * Configure a stand-alone CA (dogtag) for certificate management
  * Configure the Network Time Daemon (ntpd)
  * Create and configure an instance of Directory Server
  * Create and configure a Kerberos Key Distribution Center (KDC)
  * Configure Apache (httpd)
  * Configure DNS (bind)

To accept the default shown in brackets, press the Enter key.

Existing BIND configuration detected, overwrite? [no]: yes
Enter the fully qualified domain name of the computer
on which you're setting up server software. Using the form
.
Example: master.example.com.


Server host name [ipa1.example.com]:

Warning: skipping DNS resolution of host ipa1.example.com
The domain name has been calculated based on the host name.

Please confirm the domain name [example.com]:

The IPA Master Server will be configured with
Hostname:    ipa1.example.com
IP address:  192.168.100.227
Domain name: example.com

The kerberos protocol requires a Realm name to be defined.
This is typically the domain name converted to uppercase.

Please provide a realm name [EXAMPLE.COM]:
Certain directory server operations require an administrative user.
This user is referred to as the Directory Manager and has full access
to the Directory for system management tasks and will be added to the
instance of directory server created for IPA.
The password must be at least 8 characters long.

Directory Manager password: somepassword
Password (confirm): somepassword

The IPA server requires an administrative user, named 'admin'.
This user is a regular system account used for IPA server administration.

IPA admin password: somepassword
Password (confirm): somepassword

Do you want to configure the reverse zone? [yes]:
Please specify the reverse zone name [100.168.192.in-addr.arpa.]:
Using reverse zone 100.168.192.in-addr.arpa.

The following operations may take some minutes to complete.
Please wait until the prompt is returned.
Configuring ntpd
  [1/4]: stopping ntpd
  [2/4]: writing configuration
  [3/4]: configuring ntpd to start on boot
  [4/4]: starting ntpd
done configuring ntpd.
Configuring directory server for the CA: Estimated time 30 seconds
  [1/3]: creating directory server user
  [2/3]: creating directory server instance
  [3/3]: restarting directory server
done configuring pkids.

Lot of output omitted

Configuring named:
  [1/9]: adding DNS container
  [2/9]: setting up our zone
  [3/9]: setting up reverse zone
  [4/9]: setting up our own record
  [5/9]: setting up kerberos principal
  [6/9]: setting up named.conf
  [7/9]: restarting named
  [8/9]: configuring named to start on boot
  [9/9]: changing resolv.conf to point to ourselves
done configuring named.
==============================================================================
Setup complete

Next steps:
        1. You must make sure these network ports are open:
                TCP Ports:
                  * 80, 443: HTTP/HTTPS
                  * 389, 636: LDAP/LDAPS
                  * 88, 464: kerberos
                  * 53: bind
                UDP Ports:
                  * 88, 464: kerberos
                  * 53: bind
                  * 123: ntp

        2. You can now obtain a kerberos ticket using the command: 'kinit admin'
           This ticket will allow you to use the IPA tools (e.g., ipa user-add)
           and the web user interface.

Be sure to back up the CA certificate stored in /root/cacert.p12
This file is required to create replicas. The password for this
file is the Directory Manager password
[root@ipa1 ~]#

You now need to get a Kerberos ticket:

[root@ipa1 ~]# kinit admin
Password for admin@EXAMPLE.COM:
[root@ipa1 ~]#

Fire up Firefox and point it to https://ipa1.example.com, then follow the link provided in the error message. You will see the instructions needed to use Kerberos as the authentication method. When importing the cert into Firefox, REALLY check all three boxes!

Afterwards you are automatically logged in, provided you got your Kerberos ticket before (kinit admin).

Setting up a Replica
For now, we have one IPA server. If it fails, no one can log in to any system anymore. This is of course unacceptable and needs to be changed. So let’s set up a replica to add high availability to our central identity management system.

Log in to ipa1.example.com and fire up ipa-replica-prepare to collect the data needed for the replica.

Non-default answers are coloured red

[root@ipa1 ~]# ipa-replica-prepare ipa2.example.com

Directory Manager (existing master) password: somepassword

Preparing replica for ipa2.example.com from ipa1.example.com
Creating SSL certificate for the Directory Server
Creating SSL certificate for the dogtag Directory Server
Creating SSL certificate for the Web Server
Exporting RA certificate
Copying additional files
Finalizing configuration
Packaging replica information into /var/lib/ipa/replica-info-ipa2.example.com.gpg
[root@ipa1 ~]#

/var/lib/ipa/replica-info-ipa2.example.com.gpg contains all the information needed to set up the replica. You need to copy it, e.g. with scp, to ipa2.example.com.
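
A simple way to do that (assuming root SSH logins to ipa2 are allowed) is:

[root@ipa1 ~]# scp /var/lib/ipa/replica-info-ipa2.example.com.gpg root@ipa2.example.com:

This puts the file into root’s home directory on ipa2, which matches the relative path used in the next step.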

Now log in to ipa2.example.com and fire up ipa-replica-install:

[root@ipa2 ~]# ipa-replica-install --setup-dns --forwarder=192.168.100.1 replica-info-ipa2.example.com.gpg

Directory Manager (existing master) password: somepassword

Run connection check to master
Check connection from replica to remote master 'ipa1.example.com':
   Directory Service: Unsecure port (389): OK
   Directory Service: Secure port (636): OK
   Kerberos KDC: TCP (88): OK
   Kerberos KDC: UDP (88): OK
   Kerberos Kpasswd: TCP (464): OK
   Kerberos Kpasswd: UDP (464): OK
   HTTP Server: port 80 (80): OK
   HTTP Server: port 443(https) (443): OK

Connection from replica to master is OK.
Start listening on required ports for remote master check
Get credentials to log in to remote master
admin@EXAMPLE.COM password:

Execute check on remote master
Check connection from master to remote replica 'ipa2.example.com':
   Directory Service: Unsecure port (389): OK
   Directory Service: Secure port (636): OK
   Kerberos KDC: TCP (88): OK
   Kerberos KDC: UDP (88): OK
   Kerberos Kpasswd: TCP (464): OK
   Kerberos Kpasswd: UDP (464): OK
   HTTP Server: port 80 (80): OK
   HTTP Server: port 443(https) (443): OK

Connection from master to replica is OK.

Connection check OK
Configuring ntpd
  [1/4]: stopping ntpd
  [2/4]: writing configuration
  [3/4]: configuring ntpd to start on boot
  [4/4]: starting ntpd
done configuring ntpd.
Configuring directory server: Estimated time 1 minute

Lot of output omitted

Using reverse zone 100.168.192.in-addr.arpa.
Configuring named:
  [1/8]: adding NS record to the zone
  [2/8]: setting up reverse zone
  [3/8]: setting up our own record
  [4/8]: setting up kerberos principal
  [5/8]: setting up named.conf
  [6/8]: restarting named
  [7/8]: configuring named to start on boot
  [8/8]: changing resolv.conf to point to ourselves
done configuring named.
[root@ipa2 ~]#

On ipa2, you need a Kerberos Ticket as well:

[root@ipa2 ~]# kinit admin

Some adjustments
Unfortunately the default shell for new users is /bin/sh, which should probably be changed:

ipa config-mod --defaultshell=/bin/bash
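
To double-check that the change took effect, ipa config-show prints the current server configuration; something like this should now report /bin/bash as the default shell:

ipa config-show | grep -i shell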

Testing the replication
Log in to ipa1.example.com and add a new user:

ipa user-add tester1
ipa passwd tester1

You can now check whether the user is really available on both servers by firing up an ldapsearch command:

ldapsearch -x -b "dc=example, dc=com" uid=tester1

Compare the results of both servers. If they are the same, you have successfully set up your two-node replicated, highly available IPA server.
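
To query each server explicitly instead of whatever your resolv.conf points to, you can pass the host to ldapsearch (the -h option of the RHEL 6 OpenLDAP clients):

ldapsearch -x -h ipa1.example.com -b "dc=example, dc=com" uid=tester1
ldapsearch -x -h ipa2.example.com -b "dc=example, dc=com" uid=tester1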

What if ipa1.example.com is not available when I need to add a new user?
Simple answer: There is one way to find out….

Shut down ipa1.example.com.
Log in to ipa2.example.com and add a new user:

[root@ipa2 ~]# ipa user-add tester2

Start ipa1.example.com again and run the ldapsearch again:

ldapsearch -x -b "dc=example, dc=com" uid=tester2

Set up an IPA client
What’s a centrally managed identity management server worth without a client? Nada! Let’s set up a RHEL 6.2 server as a client:

[root@ipaclient1 ~]# yum install ipa-client

After the installation, the setup program needs to be fired up. Non-default answers are coloured red:

[root@ipaclient1 ~]# ipa-client-install -p admin
Discovery was successful!
Hostname: ipaclient1.example.com
Realm: EXAMPLE.COM
DNS Domain: example.com
IPA Server: ipa1.example.com
BaseDN: dc=example,dc=com


Continue to configure the system with these values? [no]: yes
Synchronizing time with KDC...
Password for admin@EXAMPLE.COM: somepassword

Enrolled in IPA realm EXAMPLE.COM
Created /etc/ipa/default.conf
Configured /etc/sssd/sssd.conf
Configured /etc/krb5.conf for IPA realm EXAMPLE.COM
Warning: Hostname (ipaclient1.example.com) not found in DNS
DNS server record set to: ipaclient1.example.com -> 192.168.100.253
SSSD enabled
NTP enabled
Client configuration complete.
[root@ipaclient1 ~]# 
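
Before testing an interactive login, a quick sanity check on the client shows whether SSSD can resolve users from IPA (using the tester1 user created earlier):

[root@ipaclient1 ~]# getent passwd tester1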

Testing the login
Log in to your client; you will need to change your password first:

[luc@bond ~]$ ssh 192.168.100.253 -l tester1
tester1@192.168.100.253's password: 
Password expired. Change your password now.
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for user tester1.
Current Password: 
New password: 
Retype new password: 
passwd: all authentication tokens updated successfully.
Connection to 192.168.100.253 closed.
[luc@bond ~]$ ssh 192.168.100.253 -l tester1
tester1@192.168.100.253's password: 
Last login: Sat Dec 17 19:40:10 2011 from bond.home.delouw.ch
Could not chdir to home directory /home/tester1: No such file or directory
-bash-4.1$ 

In this case we do not have a home directory for the user tester1. NFS automount of home directories will be discussed in Part II or III of this guide.

Now log out of ipaclient1.example.com and shut down ipa1.example.com to check whether logins still work when one IPA server has failed. Needless to say, it works… (okay, there is a delay of a few seconds).

Drawbacks
IPA is not as powerful as MS Active Directory or Novell Directory. There is no support (and most probably there never will be) for multiple or custom LDAP schemata, in order to keep it simple and easily maintainable; this actually turns the drawback into a feature. If you need features like custom LDAP schemata, you may have a look at RHDS.

Conclusion
Never in the history of information technology has it been easier to set up and maintain a centrally managed identity management system. With just a few minutes of work you will have a basic setup of a highly available, fault-tolerant and scalable identity management server.

Outlook to Part II of this guide
IPA does not only authenticate users, it can also restrict them to using particular services on particular systems only. Thanks to Kerberos, it also provides single sign-on capabilities without prompting for a password again.

As soon as I get some time I’ll write about the following topics:

  • Passwordless (and key-less) SSH logins
  • Kerberized web applications
  • Centralized sudo management

Having fun?
Yes, definitely. I have fun with IPA, and as a Linux consultant I expect a lot of work waiting for me.

I got employed by Red Hat

Thursday, April 21st, 2011

This is pretty cool: at the end of March I signed a contract with Red Hat as a senior Linux consultant. It is not just “another new job”, and it is cool for (at least) two reasons. The first reason is that Red Hat is not “just another company”; it is not really comparable to other employers, it is THE Linux and open source company, which is perfect for me as an open source guy. The second reason is: I’m moving from Zurich in Switzerland to Berlin in Germany.

So, two major changes in my life at the same time. I’m looking forward to the challenges that are waiting for me.

I’ll continue to work at Siemens IT Solutions and Services AG until approximately mid-June and will start working at Red Hat on the 1st of July.

From May 9th to May 15th I’ll be in Berlin for the first time to have a look at the city and its different suburbs. I’ll also be there to organize some stuff required to settle in Berlin. At the same time, Europe’s biggest Linux conference, the “Linux Tag”, will be held in Berlin. I guess I’ll have a lot of fun, and maybe meet some of my future workmates.

It is hard for me to leave my country, I have a lot of friends here. On the other hand, Berlin is just about 1.5 hours away by plane. As a consultant, I’m travelling a lot. Because of that, it would not be that easy to build up a social network (I mean real-life stuff, not Facebook) in Berlin.

It is also not easy for me to leave Siemens. I’m involved in a very cool project with the Swiss government (all systems will be RHEL6) and I also have friends and nice workmates there whom I’m going to leave.

I already know quite a few people at Red Hat, they are all nice and I guess some of them will become good friends over time.

Having fun?

Absolutely guaranteed!

A review of RHEV

Sunday, February 27th, 2011

In the past few weeks I had the chance to have a closer look at the current RHEV release, 2.2. The reason is that I’m working on a project using RHEL6 clients as virtual desktops. For a proof of concept I’ve set up a test environment in the lab. Due to lack of time I was not able to test every single feature.

After reading some docs, it was amazingly easy and quick to install.

Test environment
The tests have been made on the following hardware:

  • 2 Xeon servers with a total of 8 cores and 24Gbyte of RAM on each host. OS is RHEL5.6. Those hosts are called RHEV-H (H like Hypervisor)
  • 1 Windows 2008R2 server (64bit) with 4Gbyte RAM as RHEV-M (M like management)
  • 1 dedicated NetApp filer serving storage over NFS
  • 1 RHEL 5.6 Server providing “cheap” NFS storage
  • 3 different networks connected to 3 NICs
  • 1 RHEL 5.6 “thin” client with spice-xpi as client
  • 1 Windows 2008R2 Server with spice-activex as client (actually on the management server)

VMs (all 64bit):

  • 20 Virtual desktops with RHEL 6.0 clients using one core and 2Gbyte of RAM each
  • 2 Virtual servers with RHEL 6.0 server using 2 cores and 2Gbyte of RAM each
  • 2 Virtual servers with RHEL 5.6 server using 2 cores and 2Gbyte of RAM each

Management Portal
The management interface is the central point of administration. It is not always as intuitive as expected; if you are new to RHEV, you can get confused. A big plus is the search functionality: one can search for storage, VMs, hosts and any other item in the environment.

User Portal
The user portal is very simple, lean and clean. Users see the machines that are assigned to them, they can power the machines on and off, and when they connect to a powered-on VM, the client appears on their desktop.

The user portal resides on the management server and is just the connection broker. As soon as a user connects to their desktop or server, the connection is made directly to the spiced qemu-kvm process running on one of the hypervisors.

Client
At the moment there are two different clients supported:

  • spice-activex with Internet Explorer 7+ on Windows XP, Vista and 7
  • spice-xpi with Firefox for RHEL5 and RHEL6

If you do not fear the work, you can probably also get the spice-xpi client running on Fedora 13 and other Linux distributions.

The client communicates with the VM over SPICE (Simple Protocol for Independent Computing Environments). It can forward USB devices as well as sound and other multimedia stuff. For more information about the protocol, please have a look at http://www.spice-space.org/home.html.

Storage
The first step to set up a RHEV environment is to define and set up the storage.

There are 3 types of storage:

  • ISO, where you upload your install-media for your virtual machines.
  • Data, where the VM images are stored.
  • Export, where I have no clue (yet) what it is good for.

Usually the storage is an NFS or iSCSI server; FC storage is also supported. The storage server creates snapshots of the volumes for backing up the stuff.

Hosts (hypervisors)
You can either use RHEV-H as an “embedded” hypervisor, which comes as a stripped-down RHEL5 distribution (the ISO is just about 75Mbyte IIRC), or you can set up RHEL5 and use it as a hypervisor.

First I tried RHEV-H; it is manual work to install it. You need to burn a CD-ROM, put it in your server and enter some stuff like the network config, etc. In the short time available I did not try to provision RHEV-H with cobbler. According to the documentation, one can pass boot parameters to partially automate the installation.

Because I have a cobbler server handy, I decided to use RHEL5 for the hypervisors. Just be sure you subscribe the system to the channels “rhel-x86_64-server-vt-5” and “rhel-x86_64-rhev-mgmt-agent-5”. You also need to allow root to log in via SSH; you can change this later, after the host is registered on the RHEV-M server.

After the setup of the RHEL5 servers you just need to tell RHEV-M that there is a new host available. Give RHEV-M the root password and all the needed additional software gets installed automatically.

That’s a pretty lean procedure to install and set up hosts. Hopefully there will soon be a way to fully automate this, w/o using MS Powershell.

The technologies on the hosts are the well-known, stable and mature KVM and qemu-kvm. There are also parts of oVirt being used.

Networking
I do not know how many networks you can add, but I think enough even for large environments. The network configuration part in RHEV-M shows a list of available Ethernet ports on each host. Just assign them a static network and IP address and you’re done. Make sure you define the network you want to add on all hosts in a particular datacenter.

Ensure that firewalls between the users’ and the VMs’ networks are configured accordingly to avoid connection problems.

Sizing and performance

  • Memory overcommitment
    Thanks to KSM (Kernel Samepage Merging) one can overcommit memory quite a bit. Ensure you have a LOT of swap on your hosts (a quick check is sketched right after this list). Why? It takes some time until KSM kicks in and frees memory pages. During the collection of those pages, swap space is needed to prevent a visit from the OOM killer.

    I once faced a complete crash when putting one host into maintenance mode and all VMs were migrated to the second host. That’s exactly why you should have LOTS of swap space to survive when using memory overcommitment. During the period when KSM is searching for identical pages, performance is degraded.

    Under normal circumstances this will never happen, since the virtual hosts are load balanced. This means VMs are distributed for optimal performance.

  • CPU overcommitment
    This depends very much on your users’ computing needs. For normal desktops and servers you can overcommit the CPUs by at least 200% and the performance is still fine. For CPU-intensive workloads it is not recommended to overcommit CPUs.
  • Number of hosts
    It’s recommended to start with at least three hosts to avoid temporary performance penalties if you need to put a host into maintenance mode (see also “Memory overcommitment”).

    The performance is surprisingly good, the user experience is nearly the same as on physical desktops. For servers, KVM is already known for its good performance.
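
A quick way to check how much swap a hypervisor actually has available before you overcommit memory (a trivial sketch, to be run on each RHEL 5 host):

free -m
swapon -s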

Drawbacks
Unfortunately you cannot hot-add or hot-remove virtual CPUs, NICs or storage. Despite this being supported by the underlying technologies, your VM must be powered off to add or remove resources.

Another drawback is the lack of different storage classes; this actually makes it impossible to use the product in large enterprise environments. Since it makes no sense to back up swap drives, it is a waste of backup space.
It is also not possible to have different storage for different needs, let’s say a bulk RAID5 consisting of cheap 2Tbyte SATA disks for operating systems and a RAID10 consisting of fast 15k SAS or FC disks for data and swap. It is also not possible to live-migrate VMs from one storage to another.

As of today, SPICE is only useful on LAN networks. Depending on the workload it can demand a lot of bandwidth. For the usual office workload, 1 Mbps per connection is fine. A WAN-optimized version is under development and should be released “later this year”.

The two supported clients (spice-xpi and spice-activex) can be problematic when the clients are not in an environment controlled by you. Why? Lots of enterprises prevent their users from installing ActiveX applications and Firefox add-ons for security reasons. It would be better to have some kind of “fat client” which does not need to be installed, or a Java applet which does the job.

At the moment, it is still required to have a Windows 2008R2 server installed and to use MS technologies such as MSSQL, DotNET, Powershell and Active Directory. Because you cannot manage RHEV-M with a Satellite, one needs to download and install updates manually.

RHEV-M is not only the management server, it is also the connection broker for connecting clients, so RHEV-M is a single point of failure. Maybe one can build an MS cluster with it, no clue. It is also not possible to self-host RHEV-M on the RHEV-H hosts. Already connected clients should not be disconnected in case RHEV-M fails (to be tested).

Red Hat is working on a Linux-only RHEV-M, as you can read in a comment from a Red Hat employee on an earlier post.

The same comment states that the WAN problems are to be solved in the near future.

Conclusion
Frankly: from my point of view, RHEV is not yet “enterprise ready”, because of the drawbacks mentioned above, especially the storage shortcomings and RHEV-M being a SPoF.

For smaller environments, up to let’s say ~50 servers or ~200 desktops, it is good enough.

Nevertheless: I think RHEV has huge potential for the future. Compared to VMware ESX, KVM’s technology is much better and scales better. At the moment, VMware’s USP is its management software, not the hypervisor used. As soon as Red Hat has ironed out the major drawbacks, I expect a boost for RHEV.

Red Hat is keen to improve the product; I guess a lot of improvements will be announced in 2011.

Have fun!

Spice and RHEV, a RHCE goes MCSE

Tuesday, January 11th, 2011

I’m currently working in a project which includes some virtual Linux desktops. The desktop of choice is RHEL6.

How to bring a Linux desktop via WAN to a thin client? VNC -> are you nuts? Remote X11 over SSH -> WAN = no go. NX -> another vendor involved. SPICE -> Spicy! But: Spice over WAN? To be tested…

SPICE is the protocol used by RHEV (Red Hat Enterprise Virtualization). Some time ago I had the chance to test this stuff at Red Hat in Munich. The experience was nice, it is comparable to vSphere, but it only works with MS Internet Explorer due to the ActiveX and .Net stuff.

The management software needs to be installed on a Windows 2008R2 server. The database to be used is – you guessed it – MS SQL. Users are authenticated either by Active Directory (not generic LDAP!) or by local Windows users. Holy cow!

At first, when Red Hat presented this product to me, I was LOLing. Now it seems that I need to refresh my Windows knowledge, because it seems to be the only product capable of providing enterprise-ready Linux desktop virtualization. I’m crying :-(

At least the hypervisor used is not MS Hyper-V; it is KVM based on RHEL5, to be replaced with RHEL6 in the future.

There is some light at the end of the tunnel: Red Hat is working on a replacement for the Windows-bound stuff. It will be replaced with some JBoss and Java stuff, and the database will probably be PostgreSQL. It will take some time to develop before it is ready for production.

Since Red Hat is open-sourcing all (or most) of its products, it would be great to get in touch with the upstream project (release early, release often).

In the meantime I need to build up knowledge about Windows Server 2008R2, Active Directory, MS SQL Server and DotNet.

Having fun?

Important RHN Satellite 5.4 bugs have been fixed

Wednesday, December 15th, 2010

Red Hat recently released some bugfixes for RHN Satellite version 5.4. They needed approximately one month to develop a fix for those serious bugs.

If you upgraded to sat540 before those bugfixes were released, you will have a crippled database. The errata provide a way to fix it. It takes some time, but it works perfectly. For “my” satellites it took about 48 hours in total: about 12 hours for the master and 36 hours for the slave satellite.

This time, Red Hat’s QA also did a good job; it is now working as expected. The developers had a hard time too; according to the git log, they worked on weekends as well.

If you are new to sat540 or are upgrading to it, please ensure that you do NOT take any action before applying the errata!

Have fun! (This time REALLY for sure)

3rd employer within six months

Wednesday, December 15th, 2010

Up to the 30th of June this year, I was working at “Siemens Switzerland AG”. In July 2010 the IT shop of Siemens was “carved out” into a company called “Siemens IT Solutions and Services AG” (SIS).

As soon as this was announced, it was clear that SIS would be sold, as Siemens had done before with Infineon, BenQ, SEN, Gigaset etc. I was pretty sure that it would take at least one year to get SIS ready to be sold.

This morning my workmates greeted me with “Bonjour, ça va bien?”. I was puzzled… The reason is the announcement that SIS has been sold to a company called “Atos Origin”, a French company.

Who is this company? I was aware of the name, but nobody can tell us how this company behaves. Is this company using Red Hat products (I really like Red Hat products), or is it a Windows shop? Does it use Novell/Attachmate products (SLES)?… Questions and more questions…

What shall I do now? If I get a job offer from a Red Hat related company or from Red Hat itself, I’ll check it out for sure. If not, I’ll give my unknown new employer a chance.

Having fun? No clue…

Spacewalk 1.2 released -> PostgreSQL Support quite ready -> First analysis

Saturday, November 20th, 2010

Today, Spacewalk – the upstream project of the RHN Satellite – released version 1.2. One of the promises the developers made was better support for PostgreSQL, and it seems that a lot of stuff is now working. As promised, I’m going to examine what is working and what is not. I’ll file every single bug I find; please do the same, in a polite manner.

First impression
Installation and the first sync of yum channels work as if PostgreSQL support had been there from the first second. Nevertheless, there is still a lot to test.

How to install Spacewalk with PostgreSQL?
It is straightforward:

  • Set up a PostgreSQL database as described here
  • Follow the installation instructions and skip the steps that mention Oracle
  • Go for the instructions about PostgreSQL instead.

And enjoy your newly installed Spacewalk server w/o Oracle!

What I have proven to be working so far:

  • Installing with PostgreSQL went smoothly and much faster than with an Oracle setup
  • Creating a CentOS5 Channel
  • Add a yum repository (e.g. mirror.switch.ch)
  • Linking the yum repo to a channel

Conclusion so far

  • Spacewalk feels (not measured) MUCH (very much) faster with PostgreSQL. (It feels like more than triple the speed.)
  • PostgreSQL support seems to be almost ready for production (the tested stuff)
  • As RHN Satellite 5.4 is out now and the ISS bug is fixed (in spacewalk-nightly, not yet with an erratum), Red Hat should and can now focus on the complete replacement of the Oracle embedded DB.
  • RHN Satellite 6.0 can and should be released w/o being bound to Oracle

More things to test

Since syncing repos is a time-consuming task (it seems to be much less time consuming with PostgreSQL), some tests are still pending. There is not a single system subscribed yet, no deployment tests etc. I’ll test them later and let you know.

Some more words to say

The RHN Satellite and Spacewalk developer crew (once again) did an outstandingly good job (I wish I could say the same about QA). At FUDCon 2010 in Zurich, Miroslav stated that nobody was willing to test the PostgreSQL support. No wonder, it was not yet ready to be tested at that time. Now PostgreSQL-enabled Spacewalk is ready to be tested by the broad public, so do it as I do!

Having fun? Yes sure, I’m going to do some more intensive tests on the PostgreSQL support.

Cheers,

Luc

Upgrading RHN Satellite from 5.3 to 5.4, experiences and hints

Monday, November 8th, 2010

As I wrote in my previous post, I’ll let you know about my experiences. The most important message is: it is easy to upgrade your Satellite from 5.3 to 5.4 if you keep an eye on certain things. Everyone who plans to use RHEL6 and manage it with a Satellite server needs this upgrade, due to the fact that RHEL6 comes with SHA-256 checksums on its RPM packages.

Who needs this upgrade?
Any company that plans to use RHEL6, which will be released, quoting Red Hat, “later this year”. Any company with more than 50 managed systems which is annoyed by BZ #629543. Every company with more than 500 managed systems, to be able to manage 501+ systems because of the same bug.

The odd thing
RHN Satellite 5.4 was released at the end of October 2010. There was no press release, no announcement, nothing (yet). This upgrade is probably one of the most important ones because of RHEL6. @Red Hat: please explain.

Before you begin
Obtain the new Satellite certificate from Red Hat by opening a support case at rhn.redhat.com; it takes about two days. Provide them with the version of the new Satellite to avoid further loss of time.

Do I need to say that you need to download the ISO-image from rhn.redhat.com?

If your Satellite has SELinux enabled and it is in enforcing mode, you need to put it into permissive mode by issuing setenforce 0. At the moment there are two open bugs regarding SELinux: BZ #646863 and BZ #646862.

The protocol for syncing has changed from version 5.3 to 5.4. You need to keep this in mind when using ISS (Inter-Satellite Sync). At the moment a 5.3 Satellite cannot be the master of a 5.4 Satellite; the other way round it works perfectly. A bug was filed, read BZ #644239 for more information; a fix will be released quite soon.

It is important to back up /etc/rhn/rhn.conf since this file will get overwritten. A Bugzilla issue has been filed (BZ #650987).

If you have not backed up your database, do it now!

Another thing you need to know is that Red Hat’s recommendation on extending the Oracle embedded DB’s tablespaces is not enough. Please ensure you have at least 2Gbyte free on DATA_TBS and 1Gbyte on UNDO_TBS. To check, fire up db-control report as user “oracle”. If one of the tablespaces does not have the expected free space, fire up db-control extend you-name-it-TBS.
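
In practice that boils down to something like the following, using the tablespace names mentioned above (run it on the Satellite itself):

su - oracle -c "db-control report"
su - oracle -c "db-control extend DATA_TBS"
su - oracle -c "db-control extend UNDO_TBS"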

Furthermore, you need to have a closer look at the file system size – depending on your file system layout – for /var/cache/rhn. I made the odd observation that the space needed has exploded from about 2Gbyte to about 6Gbyte. So ensure you have some free space on /var resp. /var/cache/rhn.
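
A quick check of the available space before you start:

df -h /var /var/cache/rhn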

If you are using cobbler and kickstarting for provisioning, or monitoring, please install the package rhn-upgrade and have a look at the files installed in /etc/sysconfig/rhn/satellite-upgrade/ for more information about the procedure.

First you need to delete some stuff…
Before upgrading your Satellite you need to clean the caches located in /var/cache/rhn. They will be rebuilt later.
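
I did it the blunt way; assuming nothing else lives under that directory, something like this does the job (double-check the path before you hit enter):

rm -rf /var/cache/rhn/*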

… and rebuild it from scratch later
The different tasks you need to carry out will take, depending on how many custom channels you have, between four and 14 hours. There is more to read further below, after the upgrade process is finished.

Let’s start

  • Mount the ISO-image downloaded: mount satellite-embedded-oracle-5.4.0-20101025-rhel-5-x86_64.iso /mnt -o loop && cd /mnt.
  • Fire up the installer: ./install.pl --upgrade. It will ask you some questions; answer them with “Y” for yes.
  • Fire up su - oracle -c "db-control gather-stats"
  • Update your database schema with spacewalk-schema-upgrade and, depending on the hour, take a break for a long lunch/dinner or go home and continue the following day.
  • When finished, check whether the schema was applied successfully by issuing rhn-schema-version. It should read as 5.4.0.8-1el5sat.
  • Activate your freshly upgraded Satellite: rhn-satellite-activate --rhn-cert /path/to/the/cert/you/got/from/redhat.
  • You will also want to rebuild the search indexes: service rhn-search cleanindex
  • Check /etc/rhn/rhn.conf and compare it to the previously backed-up version. Change it accordingly.
  • Restart your Satellite: rhn-satellite restart and have another break.

Rebuild your satsync cache
As written further up, you need to rebuild some more metadata caches: /var/cache/rhn/repodata will be rebuilt when restarting the Satellite, and /var/cache/rhn/satsync will be rebuilt on the first satellite-sync. Keep in mind that satellite-sync still does not remember previously synced custom channels. Please vote on this bug.

Cool stuff to do with RHN Satellite 5.4
Who is using EPEL? IUS? Someone? I think a lot of people do. Until now, the best method was probably wget -m and then rhn-push for all RPMs in the output directory. This was time consuming and created some extra traffic on the net. Now you can add yum repositories to your Satellite and link them to a custom channel. Log in and go to Channels -> Manage Software Channels -> Manage Repositories -> create new repository.

Add a label, e.g. “epel5-x86_64”, and add the repository URL, e.g. http://download.fedora.redhat.com/pub/epel/5/x86_64/. Save, then go back to “Manage Software Channels” and select a channel, or create a new one. The base channel is mostly a Red Hat channel. Go to “Repositories” and select the formerly created repository. Click on “Update Repositories”, click on “Sync”, confirm by clicking the “Sync” button, and you’re done.

What else? Staging content sounds nice. Unfortunately this only works with the upcoming RHEL 5.6 (the beta was announced today) and RHEL 6.1 (why not RHEL 6.0?). It means that, if enabled, every enabled system downloads the to-be-updated packages before the actual maintenance window. This greatly helps to keep downtimes short.

At the end of the day…
Companies using RHN Satellite are strongly encouraged to upgrade, not only because of the support for RHEL6: there have also been a lot of bugfixes, performance improvements and enhancements. I can encourage every Satellite user to upgrade. Red Hat (and in particular the Spacewalk developer team) did a great job, thanks a lot!

Having fun?
Sure, Red Hat Satellite 5.4 makes my daily work more efficient :-)