Archive for the ‘Uncategorized’ Category

Why journalctl is cool and syslog will survive for another decade

Wednesday, July 24th, 2013

There was a recent discussion about whether Fedora 20 should drop rsyslog and just use the systemd journal. A lot of people are afraid of systemd and its journal, which is a pity.

Well, there are pros and cons to this kind of logging. For a system administrator's daily use, journalctl is a powerful tool that simplifies the hunt for log entries.

On the other hand, there are AFAIK no monitoring tools (yet) that can work with the journal; those first need to be developed. A Nagios plug-in, for example, could be implemented quite quickly.
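
Just to illustrate the point, here is a minimal sketch of what such a plug-in could look like (check_journal_errors is a made-up name and the thresholds are arbitrary):

#!/bin/bash
# check_journal_errors - hypothetical Nagios-style plug-in sketch, not an existing plug-in
# Counts journal messages of priority "err" or worse since the last boot.
WARN=${1:-1}     # warning threshold, arbitrary default
CRIT=${2:-10}    # critical threshold, arbitrary default

COUNT=$(journalctl -b -p err -q --no-pager | wc -l)

if [ "$COUNT" -ge "$CRIT" ]; then
    echo "JOURNAL CRITICAL - $COUNT error messages since boot"
    exit 2
elif [ "$COUNT" -ge "$WARN" ]; then
    echo "JOURNAL WARNING - $COUNT error messages since boot"
    exit 1
else
    echo "JOURNAL OK - $COUNT error messages since boot"
    exit 0
fi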

Why does journalctl make life easier?
Instead of grepping through thousands of lines in /var/log/messages, you can simply filter the messages and work on them.

journalctl has auto-completion (just hit the Tab key) showing you the options to use, e.g.:

fedora:~# journalctl  < TAB > 
_AUDIT_LOGINUID=             __MONOTONIC_TIMESTAMP=
_AUDIT_SESSION=              _PID=
_BOOT_ID=                    PRIORITY=
_CMDLINE=                    __REALTIME_TIMESTAMP=
CODE_FILE=                   _SELINUX_CONTEXT=
CODE_FUNC=                   _SOURCE_REALTIME_TIMESTAMP=
CODE_LINE=                   SYSLOG_FACILITY=
_COMM=                       SYSLOG_IDENTIFIER=
COREDUMP_EXE=                SYSLOG_PID=
__CURSOR=                    _SYSTEMD_CGROUP=
ERRNO=                       _SYSTEMD_OWNER_UID=
_EXE=                        _SYSTEMD_SESSION=
_GID=                        _SYSTEMD_UNIT=
_HOSTNAME=                   _TRANSPORT=
_KERNEL_DEVICE=              _UDEV_DEVLINK=
_KERNEL_SUBSYSTEM=           _UDEV_DEVNODE=
_MACHINE_ID=                 _UDEV_SYSNAME=
MESSAGE=                     _UID=
MESSAGE_ID= 
fedora:~# journalctl 

Quite a few filtering options are available here. Most of these options are self-explanatory.

If you just want to see the entries made by a particular command, type journalctl _COMM= and hit the Tab key.

fedora:~# journalctl _COMM=
abrtd            dnsmasq          mtp-probe        sh               tgtd
anacron          gnome-keyring-d  network          smartd           udisksd
avahi-daemon     hddtemp          polkit-agent-he  smbd             umount
bash             journal2gelf     polkitd          sshd             userhelper
blueman-mechani  kdumpctl         pulseaudio       sssd_be          yum
chronyd          krb5_child       qemu-system-x86  su               
colord           libvirtd         sealert          sudo             
crond            logger           sendmail         systemd          
dbus-daemon      mcelog           setroubleshootd  systemd-journal  
fedora:~# journalctl _COMM=

If you enter journalctl _COMM=sshd you will just see the messages created by sshd.

fedora:~# journalctl _COMM=sshd 
-- Logs begin at Tue 2013-07-23 08:46:28 CEST, end at Wed 2013-07-24 11:10:01 CEST. --
Jul 23 09:48:45 fedora.example.com sshd[2172]: Server listening on 0.0.0.0 port 22.
Jul 23 09:48:45 fedora.example.com sshd[2172]: Server listening on :: port 22.
fedora:~#

Usually one is just interested in messages within a particular time range.

fedora:~# journalctl _COMM=crond --since "10:00" --until "11:00"
-- Logs begin at Tue 2013-07-23 08:46:28 CEST, end at Wed 2013-07-24 11:23:25 CEST. --
Jul 24 10:20:01 fedora.example.com CROND[28305]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Jul 24 10:50:01 fedora.example.com CROND[28684]: (root) CMD (/usr/lib64/sa/sa1 1 1)
fedora:~#   
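
The field filters can also be combined with the more convenient shortcut options. A few examples I find handy (all of them are documented in journalctl(1)):

# messages from the sshd unit, priority "err" or worse, since yesterday
journalctl -u sshd.service -p err --since yesterday

# follow new messages as they arrive, much like tail -f
journalctl -f

# show only messages from the current boot
journalctl -b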

And why will rsyslog stay another decade or even longer?

There are a lot of tools and scripts that have been in place for a long time; some of them even date from before Linux was born.

Most of those scripts would need to be rewritten, or at least change their behaviour, e.g. by taking input from STDIN instead of a log file, so they can digest the output of journalctl | your-super-duper-script.pl.
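
For instance, a script that used to parse /var/log/messages can read the same lines from STDIN and be fed by journalctl (your-super-duper-script.pl stands for whatever tool you already have in place):

# old style: the script opens the log file itself
# ./your-super-duper-script.pl /var/log/messages

# journal style: feed the script via STDIN
journalctl --no-pager --since today | ./your-super-duper-script.pl

# or keep feeding it continuously, like a tail -f
journalctl -f | ./your-super-duper-script.pl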

Log-digesting tools that need to stay compatible across different Unix and Linux systems will probably not be changed. In that case, syslogd will survive until the last of those systems is decommissioned.


Creating a PHP application on Openshift

Saturday, June 8th, 2013

What is OpenShift? It is a cloud, it is from Red Hat. More precisely: A PaaS (Platform As A Service).

It has been available for quite some time now, and I finally found the time to test it. Conclusion: it is very simple to use. This post will guide you through creating a PHP application that just prints “this is a test”. More to come in future postings.

The following steps are needed:

  • Create an account
  • Installing the CLI and setting up your environment
  • Create an application
  • Initialize a git repository
  • Put some content into your git repository
  • Push/publish your application

It is a good idea to start by reading https://www.openshift.com/get-started.

Create an account
Simply head to https://openshift.redhat.com/app/account/new and fill in the form. The captcha can be a hassle; you may need several attempts to read it correctly.

Setting up your environment
Before being able to use your account, you need to install and set up some software on your developer workstation. Of course you can also go for the “wimp way” and use the web UI, but real men use the CLI for higher productivity.

These are the steps I used on my Fedora 18 box:

f18:~# yum install rubygems git

Next, install the CLI tool. The simplest way to do so is using gem.

f18:~# gem install rhc
Fetching: net-ssh-2.6.7.gem (100%)
Fetching: archive-tar-minitar-0.5.2.gem (100%)
Fetching: highline-1.6.19.gem (100%)
Fetching: commander-4.1.3.gem (100%)
Fetching: httpclient-2.3.3.gem (100%)
Fetching: open4-1.3.0.gem (100%)
Fetching: rhc-1.9.6.gem (100%)
===========================================================================

If this is your first time installing the RHC tools, please run 'rhc setup'

===========================================================================
Successfully installed net-ssh-2.6.7
Successfully installed archive-tar-minitar-0.5.2
Successfully installed highline-1.6.19
Successfully installed commander-4.1.3
Successfully installed httpclient-2.3.3
Successfully installed open4-1.3.0
Successfully installed rhc-1.9.6
7 gems installed
Installing ri documentation for net-ssh-2.6.7...
Installing ri documentation for archive-tar-minitar-0.5.2...
Installing ri documentation for highline-1.6.19...
Installing ri documentation for commander-4.1.3...
Installing ri documentation for httpclient-2.3.3...
Installing ri documentation for open4-1.3.0...
Installing ri documentation for rhc-1.9.6...
Installing RDoc documentation for net-ssh-2.6.7...
Installing RDoc documentation for archive-tar-minitar-0.5.2...
Installing RDoc documentation for highline-1.6.19...
Installing RDoc documentation for commander-4.1.3...
Installing RDoc documentation for httpclient-2.3.3...
Installing RDoc documentation for open4-1.3.0...
Installing RDoc documentation for rhc-1.9.6...

Just to be sure there are no updates available:

f18:~# gem update rhc
Updating installed gems
Nothing to update

Next on the list is setting up your credentials and environment. It is wizard-style and will guide you through the process.

[luc@f18 ~]$ rhc setup
OpenShift Client Tools (RHC) Setup Wizard

This wizard will help you upload your SSH keys, set your application namespace, and check that other programs like Git are properly
installed.

Login to openshift.redhat.com: your-account@example.com
Password: **********


OpenShift can create and store a token on disk which allows to you to access the server without using your password. The key is stored
in your home directory and should be kept secret.  You can delete the key at any time by running 'rhc logout'.
Generate a token now? (yes|no) yes
Generating an authorization token for this client ... lasts about 1 day

Saving configuration to /home/luc/.openshift/express.conf ... done

Your public SSH key must be uploaded to the OpenShift server to access code.  Upload now? (yes|no) yes

Since you do not have any keys associated with your OpenShift account, your new key will be uploaded as the 'default' key.

Uploading key 'default' ... done

Checking for git ... found git version 1.8.1.4

Checking common problems .. done

Checking your namespace ... none

Your namespace is unique to your account and is the suffix of the public URLs we assign to your applications. You may configure your
namespace here or leave it blank and use 'rhc create-domain' to create a namespace later.  You will not be able to create applications
without first creating a namespace.

Please enter a namespace (letters and numbers only) ||: ldelouw
Your domain name 'ldelouw' has been successfully created

Checking for applications ... none

Run 'rhc create-app' to create your first application.
[..]
Your client tools are now configured.

Create an application
Now that your environment is nearly set up, you can create your application instance on OpenShift.

[luc@f18 ~]$ rhc create-app test zend-5.6
Application Options
-------------------
  Namespace:  ldelouw
  Cartridges: zend-5.6
  Gear Size:  default
  Scaling:    no

Creating application 'test' ... done

Waiting for your DNS name to be available ... done

Downloading the application Git repository ...
Cloning into 'test'...
The authenticity of host 'test-ldelouw.rhcloud.com ()' can't be established.
RSA key fingerprint is a-finger-print.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'test-ldelouw.rhcloud.com' (RSA) to the list of known hosts.

Your application code is now in 'test'

test @ http://test-ldelouw.rhcloud.com/ (uuid: a-uuid)
------------------------------------------------------------------------
  Created: 5:22 PM
  Gears:   1 (defaults to small)
  Git URL: ssh://a-uuid@test-ldelouw.rhcloud.com/~/git/test.git/
  SSH:     a-uuid@test-ldelouw.rhcloud.com

  zend-5.6 (Zend Server 5.6)
  --------------------------
    Gears: 1 small

RESULT:
Application test was created.
Note: You should set password for the Zend Server Console at: https://test-ldelouw.rhcloud.com/ZendServer
Zend Server 5.6 started successfully

As mentioned in the output, you should proceed to https://yourapp-yourdomain.rhcloud.com/ZendServer and set a password for the Zend Server Console.

Initialize a git repository

This is not very clear in Red Hat's documentation. When you create an application on OpenShift, a git repository is created for you. In order to push your app, you need to clone that repository locally or add it as a remote to an existing repository. Let's clone it for now:

[luc@f18 ~]$ cd ~/your-project-directory

[luc@f18 your-project-directory]$ git clone ssh://a-uuid@test-ldelouw.rhcloud.com/~/git/test.git/
Cloning into 'test'...
remote: Counting objects: 26, done.
remote: Compressing objects: 100% (20/20), done.
remote: Total 26 (delta 2), reused 20 (delta 0)
Receiving objects: 100% (26/26), 6.99 KiB, done.
Resolving deltas: 100% (2/2), done.
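
Alternatively, if you already have a local repository you want to publish, you can add the OpenShift repository as an additional remote instead of cloning. A rough sketch, using the Git URL reported by rhc create-app above:

# inside your existing local repository
git remote add openshift ssh://a-uuid@test-ldelouw.rhcloud.com/~/git/test.git/
# merge the template commits OpenShift created, then push your code
git pull openshift master
git push openshift master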

Put some content into your git repository
What is a git repository and an application instance without some content? Nothing, so let's change that.

[luc@f18 your-project-directory]$ cat <<EOF>test/php/test.php
<?php
print "this is a test";
?>
EOF

Change into the cloned repository and add your project file:

cd test
git add php/test.php

Commit it:

git commit -m "Add test.php"

And push it:

[luc@f18 your-project-directory]$ git push
Counting objects: 6, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (4/4), 398 bytes, done.
Total 4 (delta 1), reused 0 (delta 0)
remote: CLIENT_MESSAGE: Stopping Zend Server Console
remote: Stopping Zend Server GUI [Lighttpd] [OK]
remote: CLIENT_MESSAGE: Stopping Zend Server JobQueue daemon
remote: Stopping JobQueue [OK]
remote: CLIENT_MESSAGE: Stopping Apache
remote: CLIENT_MESSAGE: Stopping Zend Server Monitor node
remote: Stopping Zend Server Monitor node [OK]
remote: CLIENT_MESSAGE: Stopping Zend Server Deployment daemon
remote: Stopping Deployment [OK]
remote: CLIENT_RESULT: Zend Server 5.6 stopped successfully
remote: TODO
remote: CLIENT_MESSAGE: Starting Zend Server Deployment daemon
remote: Starting Deployment [OK]
remote: [08.06.2013 11:36:30 SYSTEM] watchdog for zdd is running. 
remote: [08.06.2013 11:36:30 SYSTEM] zdd is running. 
remote: CLIENT_MESSAGE: Starting Zend Server Monitor node
remote: Starting Zend Server Monitor node [OK]
remote: [08.06.2013 11:36:31 SYSTEM] watchdog for monitor is running. 
remote: [08.06.2013 11:36:31 SYSTEM] monitor is running. 
remote: CLIENT_MESSAGE: Starting Apache
remote: CLIENT_MESSAGE: Starting Zend Server JobQueue daemon
remote: Starting JobQueue [OK]
remote: [08.06.2013 11:36:34 SYSTEM] watchdog for jqd is running. 
remote: [08.06.2013 11:36:34 SYSTEM] jqd is running. 
remote: CLIENT_MESSAGE: Starting Zend Server Console
remote: spawn-fcgi: child spawned successfully: PID: 1433
remote: Starting Zend Server GUI [Lighttpd] [OK]
remote: [08.06.2013 11:36:36 SYSTEM] watchdog for lighttpd is running. 
remote: [08.06.2013 11:36:36 SYSTEM] lighttpd is running. 
remote: CLIENT_RESULT: Zend Server 5.6 started successfully
To ssh://a-uuid@test-ldelouw.rhcloud.com/~/git/test.git/
   xxxxx..yyyy  master -> master
[luc@f18 your-project-directory]$

Did it all work?

Let's try…

[luc@bond test]$ wget --quiet http://test-ldelouw.rhcloud.com/test.php -O -|grep test
this is a test
[luc@bond test]$ 

Yes!

RHEV 3.1 – an overview about the new features

Sunday, December 9th, 2012
RHEV-M

Recently Red Hat announced the public availability of RHEV 3.1.

Finally, no more Windows needed for the whole software stack :-)

In 3.0, the new webadmin interface was already included as a tech preview and had its problems. Now, with 3.1, it is working great and looks neat. In contrast to 3.0, it now listens on the standard ports 80 and 443. This will probably help users in organizations with strict proxy policies and settings.

So what else is new?

The supported number of virtual CPUs in a guest is now a ridiculous 160, and RAM per guest is at a ridiculous two terabytes. But these are the least important updates.

Especially on the storage side, a lot of effort has been made and long-missing features have been integrated.

From my point of view, the most important new feature is the possibility to have disks from more than one storage domain attached to a virtual machine. This allows you to install the operating system on cheap SATA storage while the data disks live on super-fast SSDs.

There is also support for live snapshots, but snapshots are (as on other platforms) somewhat problematic because they are COW (copy-on-write), which can lead to I/O performance problems. Snapshots are a cool feature, e.g. for taking a snapshot before updating software. Be sure to remove the snapshot afterwards if you want to keep good I/O performance.

You can now use DirectLUN directly from the GUI without the use of hooks. DirectLUN allows you to attach Fibre Channel and iSCSI LUNs directly to a virtual machine. This is great when you want to use shared filesystems such as GFS.

Another nice feature is live storage migration, which is a technical preview, meaning it is unsupported for the moment; it will probably be supported in a later version. Live storage migration is handy when you need to free up space on a storage domain and cannot shut down a VM. Be sure to power-cycle the VM in question as soon as your SLA allows it, to get rid of the snapshot (COW here again).

If you want to script things or you are too lazy to open a browser, there is now a CLI available. Have a look at the documentation.

If you want to integrate RHEV more deeply into your existing infrastructure, such as RHN Satellite, Cobbler, your-super-duper-CMDB or an IaaS/PaaS broker, there are two different APIs available. For the XML lovers, there is the previously known REST API, which has received some performance improvements. For the XML haters, there is now a native Python API which allows you to access RHEV entities directly as objects in your Python code. For both APIs, have a look at the documentation.
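
To give an idea of how simple the REST API is to use, here is a sketch that lists all virtual machines with nothing but curl (hostname and credentials are made up; /api is the documented entry point):

# list all virtual machines known to RHEV-M (hypothetical host and credentials)
curl -k -u 'admin@internal:password' \
     -H 'Accept: application/xml' \
     https://rhevm.example.com/api/vms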

I personally like the Python API, because a lot of other Red Hat infrastructure products come with Python APIs. So it is very easy to integrate those software pieces.

Under the hood, it is now powered by JBoss EAP 6 instead of version 5. To be able to serve the standard ports 80 and 443, an Apache httpd with mod_proxy_ajp sits in front of it.

Have fun :-)

How to recover from a lost Kerberos password for admin

Saturday, December 8th, 2012

Ever lost your password for the admin principal on your Linux Kerberos server? It is quite easy to recover by just setting a new one.

You just need to log in to your KDC and proceed as follows:

[root@ipa1 ~]# kadmin.local
Authenticating as principal admin/admin@EXAMPLE.COM with password.
kadmin.local:  change_password admin@EXAMPLE.COM
Enter password for principal "admin@EXAMPLE.COM": 
Re-enter password for principal "admin@EXAMPLE.COM": 
Password for "admin@EXAMPLE.COM" changed.
kadmin.local: q
[root@ipa1 ~]#

Now run kinit to get a Kerberos ticket and verify that the new password works.
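
For example, using the admin principal changed above; klist then shows the freshly obtained ticket:

kinit admin
klist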

Have fun :-)

Migrating from CentOS6 to RHEL6

Saturday, December 8th, 2012

There are various tutorials on the net about migrating from RHEL to CentOS, but almost no information about the other way round. It is quite simple, and at the end of the day you have only Red Hat packages installed.

You need to copy the following packages from a Red Hat medium and install them:

yum localinstall \
rhn-check-1.0.0-87.el6.noarch.rpm \
rhn-client-tools-1.0.0-87.el6.noarch.rpm \
rhnlib-2.5.22-12.el6.noarch.rpm \
rhnsd-4.9.3-2.el6.x86_64.rpm \
rhn-setup-1.0.0-87.el6.noarch.rpm \
yum-3.2.29-30.el6.noarch.rpm \
yum-metadata-parser-1.1.2-16.el6.x86_64.rpm \
yum-rhn-plugin-0.9.1-40.el6.noarch.rpm \
yum-utils-1.1.30-14.el6.noarch.rpm \
sos-2.2-29.el6.noarch.rpm

Then you need to remove the centos release package and install the Red Hat release package:

rpm -e centos-release-6-3.el6.centos.9.x86_64 --nodeps
yum localinstall redhat-release-server-6Server-6.3.0.3.el6.x86_64.rpm

Now it is time to register your system with RHN by running rhn_register.

After the successful registration you need to replace all CentOS packages by the RPMs provided by Red Hat:

yum reinstall "*"

To be sure there are no new configuration files to take care of, run the following:

yum install mlocate.x86_64
updatedb
locate rpmnew

Go through the list and check whether there is some configuration work to do.

Update your machine to the latest and greatest package versions and reboot it:

yum -y update && reboot

Query the RPM database for leftovers from CentOS:

rpm -qa --queryformat "%{NAME} %{VENDOR}\n" | grep -i centos | cut -d' ' -f1

There are some problematic packages which carry “centos” in their version or release string, e.g. yum and dhcp; these have to be replaced manually:

rpm -e yum --nodeps
rpm -ihv yum-3.2.29-30.el6.noarch.rpm
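
If the list is longer, a small loop can take care of the rest. A rough sketch, assuming the corresponding Red Hat packages are all available from the channels the system is subscribed to:

# reinstall every package that still reports CentOS as its vendor
for pkg in $(rpm -qa --queryformat "%{NAME} %{VENDOR}\n" | grep -i centos | cut -d' ' -f1); do
    yum -y reinstall "$pkg"
done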

At the end, you still have the previously installed CentOS kernel packages left. Keep them as a backup; they will be removed automatically after two more kernel updates.

Is the procedure supported by Red Hat? No, it is not.

Will the converted machine be supported after this procedure? Well, officially it is not supported, but if there are no traces of CentOS on the machine…

Have fun :-)

How to get a RTL2832U based DVB-T stick working on Fedora 17

Sunday, September 16th, 2012

This week I bought a no-name DVB-T stick, at the risk of not getting it working with Linux. The device contains an RTL2832U chip, which seems to be quite common according to this list. The price tag was just €14, so I took the risk.

First experiments showed that it does not work out of the box on Fedora 17. After digging deeper, I figured out that someone wrote a driver and published it on GitHub.

Later on, I figured out that a driver is also available in the upstream 3.6-rc kernel. Unfortunately, the kernel shipped with Fedora 17 does not support the device yet.

Steps to do

Ensure you have installed the kernel headers and kernel development packages matching your running kernel version; if not, run yum -y install kernel-headers kernel-devel. The dvb-apps package will help you to set up the channels later on; install it with yum -y install dvb-apps.

Getting and compiling the kernel module

git clone https://github.com/tmair/DVB-Realtek-RTL2832U-2.2.2-10tuner-mod_kernel-3.0.0.git
cd DVB-Realtek-RTL2832U-2.2.2-10tuner-mod_kernel-3.0.0/RTL2832-2.2.2_kernel-3.0.0/
make && make install

Afterwards you need to scan for stations with your DVB-T stick and put them into mplayer's channels file. In /usr/share/dvb/dvb-t/ you will find the right settings for the region you live in. For me, de-Berlin is the right one.

scandvb /usr/share/dvb/dvb-t/de-Berlin -o zap >> ~/.mplayer/channels.conf
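
A quick check that the scan actually found something; the channels file should now contain one line per station:

wc -l ~/.mplayer/channels.conf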

Now you are ready to watch digital terrestrial TV on your Fedora box. mplayer "dvb://Das Erste" does the job.

A more comfortable player is Kaffeine, which has features like an EPG (Electronic Program Guide), recording facilities and so on. It comes with KDE.

Have fun!

Identity Management with IPA Part II – Kerberized NFS service

Sunday, December 25th, 2011

In part one I wrote about how to set up an IPA server for basic user authentication.

One reason NFSv4 is not that widespread yet is that it needs Kerberos for proper operation. Of course, this is now much easier thanks to IPA.

Goals for this part of the guide

  • Configure IPA to serve the NFS principal
  • Configure NFS to use IPA
  • Configure some IPA clients to use Kerberos for the NFS service

Requirements

  • A running IPA service as discussed in Part I of this guide
  • An NFS server based on RHEL 6.2
  • One or more IPA clients

Let's do it
First you need to add the NFS server and its service principal to the IPA server. On ipa1.example.com run:

[root@ipa1 ~]# ipa host-add nfs.example.com
[root@ipa1 ~]# ipa service-add nfs/nfs.example.com

Next, log on to your NFS server, let's call it nfs.example.com, and install the needed additional software packages:

[root@nfs ~]# yum -y install ipa-client nfs-utils

You need to enroll your NFS server in the IPA domain. Run the following on nfs.example.com:

[root@nfs ~]# ipa-client-install -p admin

The next step is to get a Kerberos ticket and fetch the entries that need to be added to /etc/krb5.keytab:

[root@nfs ~]# kinit admin
[root@nfs ~]# ipa-getkeytab -s ipa1.example.com -p nfs/nfs.example.com -k /etc/krb5.keytab

Before you proceed to your clients, you need to enable secure NFS, create an export and restart NFS:

[root@nfs ~]# perl -npe 's/#SECURE_NFS="yes"/SECURE_NFS="yes"/g' -i /etc/sysconfig/nfs
[root@nfs ~]# echo "/home  *(rw,sec=sys:krb5:krb5i:krb5p)" >> /etc/exports
[root@nfs ~]# mkdir /home/tester1 && cp /etc/skel/.bash* /home/tester1 && chmod 700 /home/tester1 && chown -R tester1:ipausers /home/tester1
[root@nfs ~]# service nfs restart
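
Before moving on to the clients, a quick sanity check that the keytab entry and the export are really in place (klist -k lists the keytab entries, exportfs -v shows the currently active exports):

klist -k /etc/krb5.keytab   # should show nfs/nfs.example.com@EXAMPLE.COM entries
exportfs -v                 # should list /home with the sec= options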

Assuming you have already set up one or more IPA clients, it is straightforward to enable Kerberized NFS on your systems. Log in to a client and run the following:

[root@ipaclient1 ~]# yum -y install nfs-utils
[root@ipaclient1 ~]# perl -npe 's/#SECURE_NFS="yes"/SECURE_NFS="yes"/g' -i /etc/sysconfig/nfs
[root@ipaclient1 ~]# 
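
Depending on the setup, the client-side GSS daemon also has to run for a Kerberos mount to work. As far as I remember, the init script shipped with nfs-utils on RHEL 6 is called rpcgssd, so something along these lines should do (double-check the service name on your system):

# start the client-side RPCSEC_GSS daemon and enable it at boot time
service rpcgssd restart
chkconfig rpcgssd on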

Let's have a look at whether you have been successful. First, look up the user's UID.

[root@ipaclient1 ~]# getent passwd tester1
tester1:*:1037700500:1037700500:Hans Tester:/home/tester1:/bin/bash
[root@ipaclient1 ~]# 

Let's mount that user's home directory manually on a client:

mount -t nfs4 nfs.example.com:/home/tester1 /home/tester1
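
The export above also allows sec=sys, so if you want to be sure the mount really uses Kerberos, you can request the security flavour explicitly; a sketch:

mount -t nfs4 -o sec=krb5 nfs.example.com:/home/tester1 /home/tester1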

To check if it is working as expected, issue:

[root@ipaclient1 ~]# su - tester1

Run ls -lan and see whether the UID matches the one you got from getent. If you see UID 4294967294, something went wrong; this is the UID mapped to the user “nobody” when using NFSv4 on 64-bit machines.

What's next?
You will find out when I post part III of this guide :-)

Have fun!

I got employed by Red Hat

Thursday, April 21st, 2011

This is pretty cool: at the end of March I signed a contract with Red Hat as a senior Linux consultant. It is not just “another new job”. It is cool for (at least) two reasons. The first is that Red Hat is not “just another company”; it is hardly comparable to other employers, it is THE Linux and open source company, and for me as an open source guy this is perfect. The second reason is that I'm moving from Zurich in Switzerland to Berlin in Germany.

So, two major changes in my life at the same time. I’m looking forward to the challenges that are waiting for me.

I'll continue to work at Siemens IT Solutions and Services AG until approximately mid-June and start working at Red Hat on the 1st of July.

From May 9th to May 15th, I'll be in Berlin for the first time to have a look at the city and its different districts. I'll also be there to organize some of the things required to settle in Berlin. At the same time, Europe's biggest Linux conference, the “Linux Tag”, will be held in Berlin. I guess I'll have a lot of fun, and maybe meet some of my future workmates.

It is hard for me to leave my country; I have a lot of friends here. On the other hand, Berlin is just about 1.5 hours away by plane. As a consultant, I'm travelling a lot; because of that, it would not be that easy to build up a social network (I mean real-life stuff, not Facebook) in Berlin.

It is also not easy for me to leave Siemens; I'm involved in a very cool project with the Swiss government (all systems will be RHEL 6), and I have friends and nice workmates there whom I'm going to leave behind.

I already know quite a few people at Red Hat; they are all nice, and I guess some of them will become good friends over time.

Having fun?

Absolutely guaranteed!

I voted for beefy miracle

Thursday, April 7th, 2011

Beefy Miracle

There is an open poll to vote on a name for Fedora 16, and I gave my vote to Beefy Miracle. Why did I vote for Beefy Miracle? Because it is cool, geeky and freaky, I love hot dogs, and it is something new.

The Fedora distribution is geeky, freaky and open to new stuff.

Having fun? Of course!

Pulp, what is it about it?

Thursday, December 2nd, 2010

Thanks to Máirín's posting, I became aware of the Pulp project.

What is it? I had a brief look at it; it is a Red Hat sponsored project with functionality similar to Spacewalk and RHN Satellite.

This brings me to the question: is Pulp intended to be a replacement for Spacewalk? It could make sense; it is written in Python, as Cobbler is, and Cobbler and Spacewalk do not really play nicely together. Spacewalk uses Java, Perl and Python.

Anyway, Pulp seems to be in its early childhood, but it looks like a really interesting project. What are the plans for the future? And what are the plans for Spacewalk and thus RHN Satellite?

Having fun? As soon as I get the time to install it and give Pulp a closer look…