Archive for February, 2011

Updating a distro in cobbler

Monday, February 28th, 2011

A few weeks ago RHEL 5.6 was released, and the installation media was updated along with it. So it is time to get it into cobbler in order to deploy the latest dot release when provisioning new systems.

Let's assume your profile name is rhel5-x86_64, you have an existing distro named rhel55-x86_64 and you want to replace it with rhel56-x86_64.

Let's start with importing the new distro:

# Mount the ISO as a loopback device
mkdir -p /mnt/rhel56iso
mount /some/where/rhel-server-5.6-x86_64-dvd.iso /mnt/rhel56iso -o loop

# Import the install media
cobbler import --path=/mnt/rhel56iso --name=rhel56-x86_64

Cobbler creates a profile named after the name provided at import plus the architecture. We do not want that profile, so let's delete it:

cobbler profile remove --name=rhel56-x86_64

The next step is to change the distribution used by the profile:

cobbler profile edit --name=rhel5-x86_64 --distro=rhel56-x86_64
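
All systems attached to this profile pick up the new distro automatically. To double-check the change and to regenerate the PXE configuration, something like the following should do (a quick sketch, using the profile name from above):

# Verify that the profile now points to the new distro
cobbler profile report --name=rhel5-x86_64 | grep -i distro

# Regenerate the PXE menus and configuration files
cobbler sync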

If you want to delete the old distro, first check whether a profile is still using it; otherwise its child objects would be deleted along with it.

cobbler profile find --distro=rhel55-x86_64

If the result is empty, it is safe to remove the old distribution. If you want to keep it as a fallback, just skip this step.

cobbler distro remove --name=rhel55-x86_64

Cobbler is just cool stuff :-) Want to know more? Visit https://fedorahosted.org/cobbler/wiki.

Have fun!

A review of RHEV

Sunday, February 27th, 2011

In the past few weeks I had the chance to take a closer look at the current release, RHEV 2.2. The reason is that I’m working on a project using RHEL6 clients as virtual desktops. For a proof of concept I’ve set up a test environment in the lab. Due to a lack of time I was not able to test every single feature.

After reading some docs, the installation was amazingly easy and quick.

Test environment
The tests have been made on the following hardware:

  • 2 Xeon servers with a total of 8 cores and 24 GByte of RAM on each host, running RHEL 5.6. Those hosts are called RHEV-H (H for hypervisor)
  • 1 Windows 2008R2 server (64-bit) with 4 GByte of RAM as RHEV-M (M for management)
  • 1 dedicated NetApp filer serving storage over NFS
  • 1 RHEL 5.6 server providing “cheap” NFS storage
  • 3 different networks connected to 3 NICs
  • 1 RHEL 5.6 “thin” client with spice-xpi as client
  • 1 Windows 2008R2 server with spice-activex as client (actually running on the management server)

VMs (all 64-bit):

  • 20 virtual desktops with RHEL 6.0 clients using one core and 2 GByte of RAM each
  • 2 virtual servers with RHEL 6.0 using 2 cores and 2 GByte of RAM each
  • 2 virtual servers with RHEL 5.6 using 2 cores and 2 GByte of RAM each

Management Portal
The Management interface is the central point of administration. It is not always as intuitive as expected. If you are new to RHEV, you can get confused. A big plus is the search functionality. One can search for storage, VMs, hosts and any other item in the environment.

User Portal
The user portal is very simple, lean and clean. Users see the machines that are assigned to them, they can power the machines on and off, and they can connect to powered-on VMs; the client then appears on the desktop.

The user portal resides on the management server and is just the connection broker. As soon as a user connects to their desktop or server, the connection is made directly to the SPICE-enabled qemu-kvm running on one of the hypervisors.

Client
At the moment there are two different clients supported:

  • spice-activex with Internet Explorer 7+ on Windows XP, Vista and 7
  • spice-xpi with Firefox for RHEL5 and RHEL6

If you do not fear some extra work, you can probably get the spice-xpi client running on Fedora 13 and other Linux distributions as well.

The client communicates with the VM over SPICE (Simple Protocol for Independent Computing Environments). It can forward USB devices as well as sound and other multimedia. For more information about the protocol, please have a look at http://www.spice-space.org/home.html.

Storage
The first step to set up a RHEV environment is to define and set up the storage.

There are 3 types of storage:

  • ISO, where you upload your install-media for your virtual machines.
  • Data, where the VM images are stored.
  • Export, where I have no clue (yet) what it is good for.

Usually the storage is an NFS or iSCSI server; FC storage is also supported. Backups are done by creating snapshots of the volumes on the storage server.
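
As a rough sketch, an NFS export for a data domain on the RHEL 5.6 storage server could look like this (paths are made up; RHEV accesses the export as vdsm:kvm, UID/GID 36:36):

# /etc/exports -- hypothetical export for the RHEV data domain
/exports/rhev/data  *(rw,sync,no_root_squash)

# The directory must be writable by vdsm:kvm (36:36)
mkdir -p /exports/rhev/data
chown 36:36 /exports/rhev/data
exportfs -ra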

Hosts (hypervisors)
You can either use RHEV-H as an “embedded” hypervisor, which comes as a stripped-down RHEL5 distribution (the ISO is just about 75 MByte IIRC), or you can set up RHEL5 and use it as a hypervisor.

First I tried RHEV-H; installing it is manual work. You need to burn a CD-ROM, put it into your server and enter some settings like the network configuration. In the short time available I did not try to provision RHEV-H with cobbler. According to the documentation, one can pass boot parameters to partially automate the installation.

Because I have a cobbler server handy, I decided to use RHEL5 hosts as hypervisors. Just be sure you subscribe the system to the channels “rhel-x86_64-server-vt-5” and “rhel-x86_64-rhev-mgmt-agent-5”. You also need to allow root to log in via SSH; you can change this back later, after the host is registered on the RHEV-M server.
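
A rough sketch of those two preparation steps on an already registered RHEL5 host (the RHN credentials are placeholders):

# Subscribe the host to the required child channels
rhn-channel --add --channel=rhel-x86_64-server-vt-5 --user=myrhnuser --password=secret
rhn-channel --add --channel=rhel-x86_64-rhev-mgmt-agent-5 --user=myrhnuser --password=secret

# Allow root logins via SSH (can be reverted after the host is registered in RHEV-M)
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
service sshd restart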

After setting up the RHEL5 servers, you just need to tell RHEV-M that there is a new host available. Give RHEV-M the root password, and all the needed additional software gets installed automatically.

That’s a pretty lean procedure to install and set up hosts. Hopefully there will soon be a way to fully automate this without using MS PowerShell.

The technologies on the hosts are the well-known, stable and mature KVM and qemu-kvm. Parts of oVirt are also used.

Networking
I do not know how many networks you can add, but I think enough even for large environments. The network configuration part of RHEV-M shows a list of the available Ethernet ports on each host. Just assign them a network and a static IP address and you’re done. Make sure you define each network you want to add on all hosts in a particular datacenter.
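
Behind the scenes, each logical network ends up as an ordinary Linux bridge on every RHEL5 host, with the assigned NIC enslaved to it. Just as an illustration (device names and addresses are examples; the files are written for you by RHEV-M/VDSM), the result looks roughly like this:

# /etc/sysconfig/network-scripts/ifcfg-eth1 -- NIC enslaved to the bridge
DEVICE=eth1
ONBOOT=yes
BRIDGE=vmdata

# /etc/sysconfig/network-scripts/ifcfg-vmdata -- the logical network "vmdata"
DEVICE=vmdata
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.10.11
NETMASK=255.255.255.0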

Ensure that firewalls between the users’ network and the VMs’ networks are configured accordingly to avoid connection problems.

Sizing and performance

  • Memory overcommitment
    Thanks to KSM (Kernel Samepage Merging) one can overcommit memory quite a bit. Ensure you have a LOT of swap on your hosts. Why? It takes some time until KSM kicks in and frees memory pages; while those pages are being collected, swap space is needed to prevent a visit from the OOM killer. (See the sketch after this list for how to check KSM on a host.)

    I once faced a complete crash when putting one host into maintenance mode and all VMs were migrated to the second host. That is exactly why you should have LOTS of swap space to survive memory overcommitment. While KSM is still searching for identical pages, performance is degraded.

    Under normal circumstances this should never happen, since the hosts are load balanced; VMs are distributed for optimal performance.

  • CPU overcommitment
    This depends very much on your users’ computing needs. For normal desktops and servers you can overcommit the CPUs by at least 200% and performance is still fine. For CPU-intensive workloads, overcommitting CPUs is not recommended.
  • Number of hosts
    It is recommended to start with at least three hosts to avoid temporary performance penalties if you need to put a host into maintenance mode (see also “Memory overcommitment”).

    The performance is surprisingly good; the user experience is nearly the same as on physical desktops. For servers, KVM is already known for its good performance.
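
To check whether KSM is actually running and merging pages on a hypervisor (as mentioned in the memory overcommitment item above), the sysfs counters can be inspected directly; a quick sketch:

# 1 means KSM is active on this host
cat /sys/kernel/mm/ksm/run

# How many shared pages exist and how many VM pages are mapped onto them
cat /sys/kernel/mm/ksm/pages_shared
cat /sys/kernel/mm/ksm/pages_sharing

# Keep an eye on memory and swap while KSM is still scanning
free -m
swapon -s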

Drawbacks
Unfortunately you cannot hot-add or hot-remove virtual CPUs, NICs and storage. Despite being supported by the underlying technologies, your VM must be powered off to add or remove resources.

Another drawback is the lack of different storage classes, which actually makes it impossible to use the product in large enterprise environments. Since it makes no sense to back up swap drives, backup space is wasted.
It is also not possible to have different storage for different needs; let’s say a bulk RAID5 consisting of cheap 2 TByte SATA disks for operating systems and a RAID10 consisting of fast 15k SAS or FC disks for data and swap. It is also not possible to live-migrate VMs from one storage to another.

As of today, SPICE is only useful on LAN networks. Depending on the workload it can demand a lot of bandwidth. For the usual office workload, 1 Mbps per connection is fine. A WAN-optimized version is under development and should be released “later this year”.

The two supported clients (spice-xpi and spice-activex) can be problematic when the clients are not in an environment controlled by you. Why? Lots of enterprises prevent their users from installing ActiveX applications and Firefox add-ons for security reasons. It would be better to have some kind of “fat client” which does not need to be installed, or a Java applet which does the job.

At the moment, it is still required to have a Windows 2008R2 server installed and to use MS technologies such as MSSQL, .NET, PowerShell and Active Directory. Because you cannot manage RHEV-M with a Satellite, one needs to download and install updates manually.

RHEV-M is not only the management server, it is also the connection broker for connecting clients. RHEV-M is a single point of failure. Maybe one can build an MS cluster with it, no clue. It is also not possible to self-host RHEV-M on the RHEV-H hosts. Already connected clients should not be disconnected in case RHEV-M fails (still to be tested).

Red Hat is working on a Linux-only RHEV-M, as you can read in a comment from a Red Hat employee on an earlier post.

The same comment mentions that the WAN problems should be solved in the near future.

Conclusion
Frankly: from my point of view, RHEV is not yet “enterprise ready”, because of the drawbacks mentioned above, especially the storage shortcomings and RHEV-M being a SPoF.

For smaller environments of up to, let’s say, ~50 servers or ~200 desktops, it is good enough.

Nevertheless: I think RHEV has huge potential for the future. Compared to VMware ESX, KVM’s technology is much better and scales better. At the moment VMware’s USP is its management software, not the hypervisor. As soon as Red Hat has ironed out the major drawbacks, I expect a boost for RHEV.

Red Hat is keen to improve the product; I guess a lot of improvements will be announced in 2011.

Have fun!

Updated my Nexus One to Gingerbread

Sunday, February 27th, 2011

Google has finally released Gingerbread (Android 2.3.3) for the Nexus One mobile phone. It will take a few weeks until the rollout via OTA (over the air) is completed.

I was not willing to wait for such a long time.

So, I just downloaded the image from Google and updated my phone manually.

Steps needed:

  • Download the image from here
  • Rename the image file to update.zip and copy it to the root of your phone’s SD card
  • Shut down your phone
  • Press the trackball while powering on the phone
  • Select bootloader (use the volume keys to navigate) and press the power button again
  • When the exclamation mark shows up on the screen, hold down the power key and then press volume up
  • Navigate to Apply sdcard:update.zip and confirm the action by pressing the trackball
  • Reboot after the update has been applied successfully; your Android is now running 2.3.3

Benefits of Gingerbread for the Nexus one

  • The overall speed has been improved; it feels much snappier now
  • Re-worked user interface. The UI is now much darker than before and has some nice effects like the “glowing” when reaching the top or bottom of a list. Cool eye candy appears when locking the screen.
  • Improved virtual keyboard. It is more comfortable than the old version

Google’s definition of “soon”
Google announced the availability of the Android 2.3 SDK in early December last year. On December 20th, Google promised to release Gingerbread for the Nexus One in the “coming weeks”. Later on, rumours that it would be released during the “Mobile World Congress” in Barcelona, Spain proved wrong.

From the announcement of Gingerbread until the OTA roll-out to the Nexus One, we had to wait almost three months. What kept Google from releasing it earlier?

Hopefully we will not have to wait that long for “Ice Cream” (Android version 4?).

Conclusion
It was worth buying a Google phone. This is already the second major version that has hit my phone. Other phone manufacturers either do not release any updates at all or roll them out with a delay of a few months.

Have fun!

Updated my blog from WordPress 3.x to 3.1

Thursday, February 24th, 2011

One sentence: It is working as expected :-)

RHEL6.1 and Red Hat is changing its subscription methods

Wednesday, February 16th, 2011

I just got an email with the subject “Opportunity for Red Hat Certified Professionals to test new Red Hat software”.

Quoting the email:

" The new subscription management tools provide a very different user
   experience than today’s Red Hat Network (RHN). We would like to get
   your feedback on the software so that we can improve the tooling before
   RHEL 6.1 is released. As part of this Beta Program, we will be offering
   you a beta version Red Hat Enterprise Linux Personal Subscription. This
   subscription will allow you to access the tooling that will be provided
   as part of the RHEL 6.1 minor release."

In the same email, Red Hat offers up to 10 registered systems for free:

  "Under  the Personal Subscription provided via the Beta Program, users are able
   to deploy the software on up to 10 personal systems. The Red Hat
   Personal Subscriptions entitle you to access software and software
   updates"

That is actually great news for us Red Hat certified professionals. But it also opens new questions about the future of RHN, the RHN Satellite and Red Hat’s subscription model in general.

According to the documentation (you need to be at least an RHCE and provide your RHCE number to get access to it), registering systems to RHN will change completely with RHEL 6.1. No more rhn_register; it is now done with the CLI command subscription-manager.

The most important change is that the RHN username and password need to be transmitted just once; afterwards you are identified by an X.509 client certificate.

The only drawback I’ve found is that the command to register a consumer needs the password in clear text on the command line.
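
A quick sketch of how a registration with the new tooling looks (username and password are placeholders):

# Register the system against the new entitlement service
subscription-manager register --username=myrhnuser --password=secret

# Afterwards the system identifies itself with the X.509 consumer certificate
ls /etc/pki/consumer/
# cert.pem  key.pem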

And for Satellite users?
As far as I can see, nothing changes: Satellite users can still provision their systems with activation keys, and it is still channel-based, not product-based.
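
In other words, registering against a Satellite still looks roughly like this (hostname and activation key are placeholders):

# Classic, channel-based registration against an RHN Satellite
rhnreg_ks --serverUrl=https://satellite.example.com/XMLRPC \
          --activationkey=1-my-rhel6-key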

For enterprise users nothing changes for the time being; the new entitlement and subscription method only applies to users NOT using a Satellite server, at least for now and for RHEL 6.1.

The readme also mentions RHN Satellite 5.5, which is not (yet) released. It is quite unclear what awaits us with Satellite 5.5.

Reading some Bugzilla entries, it is clear that it will still take some time until RHN Satellite 5.5 hits the road.

Please: A public Beta for RHN Satellite 5.5
Please, Red Hat, provide a public, or at least a semi-public, beta release (like for RHEL 6.1) to give your enterprise customers a chance to do the QA that was missing for the release of RHN Satellite 5.4.

Having fun? I actually do not care much about RHN, I’m a Satellite user. Personally, I’m having fun with my 10 free personal RHEL 6.1 subscriptions; they allow me to do lots of tests before putting RHEL 6.1 into production.

RHEL 4.9 released

Wednesday, February 16th, 2011

Today, Red Hat released its “service pack” or “maintenance release” of RHEL4. According to Red Hat’s life cycle policy, this ends production phase two.

That means: in the future, only bugs with high severity will be fixed. The “normal” life cycle of RHEL4 will end in approximately one year.

This means that everyone running RHEL4 systems should think about a migration scenario to RHEL6. Unfortunately, Red Hat does not support OS upgrades; you need to install the systems from scratch. Since RHEL4 was released in February 2005, most of these systems have reached the end of their life cycle anyway.

It’s time to migrate your RHEL4 systems.

For the release notes, please see http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/4/html/4.9_Release_Notes/index.html

Have fun with migrating….

CentOS6 to be released in the next few weeks

Wednesday, February 16th, 2011

According to an interview with Karanbir Singh – a major contributor to the project – it is just a matter of a few weeks until we can expect CentOS6 to be released.

CentOS is extremely important for the RHEL community; it is a playground for trying out new stuff before entering an engineering phase with the Red Hat supported RHEL.

Let's have fun with it…