Using Data Deduplication and Compression with VDO on RHEL 7 and 8

Storage deduplication technology has been on the market for quite some time now. Unfortunately, all of these implementations have been vendor-specific, proprietary software. With VDO, there is now an open source, Linux-native solution available.

Red Hat introduced VDO (Virtual Data Optimizer) in RHEL 7.5, a storage deduplication technology it acquired together with Permabit in 2017. It has since been open sourced.

In contrast to ZFS, which provides the same functionality at the file system level, VDO is an inline data reduction technology that operates at the block device level and is therefore file system agnostic.

Use cases

There are basically two major use cases: VM storage and object storage backends.

VM Storage

The main use case is storage for virtual machines, where a lot of data is redundant, e.g. the base operating system of the VMs. This allows the data on disk to be deduplicated on a large scale: think of 100 VMs whose operating systems take about 5 GByte each being reduced to approximately 5 GByte instead of 500 GByte.

Typically, VM storage can be overcommitted by a factor of 10.

Object and block storage backends

As a backend for Ceph and GlusterFS, it is recommended not to overcommit by more than a factor of 3. The reason for the lower overcommitment is that the storage administrator usually does not know what kind of data will be stored on it.

Availability

VDO has been available since RHEL 7.5 and is included in the base subscription. It is not (yet) available for Fedora.

The source code is available on GitHub.

At the moment the kernel code is not yet in the upstream mainline kernel; work to get it merged is ongoing.

Typical setup

Physical disk -> VDO -> Volume group -> Logical volume -> File system

The block device can be a physical disk (or a partition on it), a multipath device, a LUKS device, or a software RAID device (md or LVM RAID).

Restrictions

You cannot use LVM cache, LVM snapshots, or thin provisioned logical volumes on top of VDO. Theoretically you can use LUKS on top of VDO, but it makes no sense because encrypted data cannot be deduplicated. Needless to say, VDO on top of another VDO device does not make any sense either. Also be aware that you cannot use partitioning or (LVM) RAID on top of VDO devices; all of that should be done in the layers underneath VDO.

When using a SAN, check whether your storage array already does deduplication. In that case, VDO is of no use for you.

Installation

It is straightforward:

[root@vdotest ~]# yum -y install vdo kmod-kvdo

Create the VDO volume

In this test case, I attached a 110 GByte disk, created a 100 GByte partition on it, and will overcommit it by a factor of 10.

Warning! As of writing this article, never use a whole physical disk; use a partition instead and leave some spare space on the disk to avoid data loss! (see further below)

[root@vdotest ~]# vdo create --name=vdo1 --device=/dev/vdb1 --vdoLogicalSize=1T
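Before layering anything on top, it is worth checking that the new volume exists and is started. vdo list prints the names of all started VDO volumes, and vdostats shows the current physical usage:

[root@vdotest ~]# vdo list
[root@vdotest ~]# vdostats --human-readable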

Creating a volume group, a logical volume and a file system on top of the VDO volume

[root@vdotest ~]# pvcreate /dev/mapper/vdo1
[root@vdotest ~]# vgcreate vg_vdo /dev/mapper/vdo1
[root@vdotest ~]# lvcreate -n lv_vdo vg_vdo -L 900G
[root@vdotest ~]# mkfs.xfs -K /dev/vg_vdo/lv_vdo
[root@vdotest ~]# echo "/dev/mapper/vg_vdo-lv_vdo       /mnt    xfs     defaults,x-systemd.requires=vdo.service 0 0" >> /etc/fstab
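A short note on two of the options used above: the -K switch tells mkfs.xfs not to discard blocks at mkfs time, which would otherwise take a long time on the large, thinly provisioned VDO volume, and the x-systemd.requires=vdo.service mount option makes sure the VDO volume is started before systemd tries to mount the file system. Afterwards the file system can be mounted as usual:

[root@vdotest ~]# mount /mnt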

Display the whole stack

[root@vdotest ~]# lsblk /dev/vdb
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vdb                 252:16   0  110G  0 disk 
└─vdb1              252:17   0  100G  0 part 
  └─vdo1            253:7    0    1T  0 vdo  
    └─vg_vdo-lv_vdo 253:8    0  900G  0 lvm  /mnt
[root@vdotest ~]#

Populate the disk with data

The ideal test for VDO is to put some real-life VM images on the file system on top of it. In this case, I scp'ed three IPA servers and some other instances to that file system. These systems are all quite similar, so the disk space saved is tremendous. The total size of the VM images is 105 GByte.

Let's have a look:

[root@vdotest ~]# df -h /mnt
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/vg_vdo-lv_vdo  900G  105G  800G  12% /mnt
[root@vdotest ~]# 
[root@vdotest ~]# ll -h /mnt
total 101G
-rw-------. 1 root root 21G Dec 17 09:41 ipa1.lab.delouw.ch.qcow2
-rw-------. 1 root root 21G Dec 17 09:47 ipa1.ldelouw.ch
-rw-------. 1 root root 21G Dec 17 09:54 ipa2.ldelouw.ch
-rw-r--r--. 1 root root 21G Dec 17 09:58 ipaclient-rhel6.home.delouw.ch
-rw-------. 1 root root 21G Dec 17 10:03 ipatest.delouw.ch.qcow2
[root@vdotest ~]#

Let's use the vdostats utility to display the storage actually used on disk:

[root@vdotest ~]# vdostats --si
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo1        107.4G     15.2G     92.2G  14%           89%
[root@vdotest ~]#

Performance Tuning

There are a lot of parameters that can be changed. Unfortunately, the documentation available at the moment is rudimentary, so tuning is more guesswork than science.

  • Number of worker threads of different kinds
  • Enabling or disabling compression

On machines with a lot of CPUs, using more threads than the defaults can dramatically boost performance. man 8 vdo gives a glimpse of the different thread-related parameters.
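As a sketch, thread counts can be specified when a volume is created; the values below are example numbers only, not tuned recommendations, and the exact option names should be verified against man 8 vdo on your system:

[root@vdotest ~]# vdo create --name=vdo1 --device=/dev/vdb1 --vdoLogicalSize=1T \
    --vdoCpuThreads=4 --vdoAckThreads=2 --vdoBioThreads=8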

Compression is quite an expensive operation. On top of that, depending on the kind of data you are storing, it may not make much sense to enable compression (deduplication is, after all, a kind of compression as well).
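Compression can be turned off and on per volume without recreating it:

[root@vdotest ~]# vdo disableCompression --name=vdo1
[root@vdotest ~]# vdo enableCompression --name=vdo1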

Pitfalls

Be aware! Every storage deduplication solution comes with a big pitfall: the logical volume on top of VDO shows free disk space while the actual space on the physical disk can be (almost) exhausted. You need to carefully monitor the actual disk usage.
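A minimal monitoring sketch, assuming the volume is named vdo1 and that 80% is a sensible warning threshold for your environment, could parse the vdostats output and alert via mail:

#!/bin/bash
# Warn when the physical usage of the VDO volume crosses a threshold.
THRESHOLD=80
USED=$(vdostats /dev/mapper/vdo1 | awk 'NR==2 {gsub("%","",$5); print $5}')
if [ "$USED" -ge "$THRESHOLD" ]; then
    echo "VDO volume vdo1 is ${USED}% full" | mail -s "VDO space warning" root
fi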

The fill grade can change rapidly if the data to be stored contains a lot of data that cannot be deduplicated and/or compressed. A good example is virtual machine images containing a LUKS-encrypted disk; in such a case, use LUKS on the storage level, not inside the VM.

Even if you just update one virtual machine, the delta to the other machine images grows and less physical space is available.

VDO comes with a few Nagios plugins which are very useful for alerting administrators in case the available physical disk space is filling up. They are located in /usr/share/doc/vdo/examples/nagios.

According to df -h, there are still 800 GByte available on my test system. What happens if I store my 700 GByte Satellite 6 image? The data is mostly RPMs, which are already compressed quite well. Let's see…

After a transfer of approximately 155 GByte, the physical disk filled up and the file system became inaccessible. I hit the worst case that can happen: complete and unrecoverable data loss.

The df command still shows some 241 GByte free:

[root@vdotest ~]# df -h |grep mnt
/dev/mapper/vg_vdo-lv_vdo                900G  241G  660G  27% /mnt
[root@vdotest ~]#

The vdostats command tells a different story, as expected:

[root@vdotest ~]# vdostats --si
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo1        107.4G    107.4G      0.0B 100%           59%
[root@vdotest ~]# 

When attempting to access the data, there is an I/O error:

[root@vdotest ~]# ll -h /mnt
ls: cannot access /mnt: Input/output error
[root@vdotest ~]# 

That's bad. I mean really bad. The device is not accessible anymore.

xfs_repair does not work. Do not attempt to use the -L option! Your file system will be gone.

Recovering from a full physical disk

Let's resize the partition instead. First, unmount the file system:

[root@vdotest ~]# umount /mnt

Delete and recreate the partition using fdisk so that it can grow into the spare space at the end of the disk:

[root@vdotest ~]# fdisk /dev/vdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): d
Selected partition 1
Partition 1 is deleted

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): 
Using default response p
Partition number (1-4, default 1): 
First sector (2048-41943039, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): 
Using default value 41943039
Partition 1 of type Linux and of size 20 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[root@vdotest ~]# partprobe
[root@vdotest ~]#
[root@vdotest ~]# vdo growPhysical -n vdo1

Run a file system check.
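For the XFS file system used in this example, a possible sequence looks like this (verify the device path on your system):

[root@vdotest ~]# xfs_repair /dev/mapper/vg_vdo-lv_vdo
[root@vdotest ~]# mount /mnt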

Now you are able to mount the file system again and your data is available once more.

Documentation

Red Hat maintains good documentation about storage administration; VDO is covered in its own chapter: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/storage_administration_guide/#vdo

Conclusion

The technology is very interesting and will kick some ass. Storage deduplication will become more and more important, and with VDO there is now a Linux-native solution for it.

At the moment it is quite dangerous to use VDO in production. Filling up a physical disk without spare space is an unrecoverable error resulting in complete data loss. That means: always create the VDO device on top of a partition that does not use the whole disk, or on another device that can grow in size, to prevent data loss.

If you plan to use VDO in production, make sure you have proper monitoring in place that alerts well ahead of time so you can take corrective action.

Nevertheless: it's cool stuff and I'm sure the current situation will be fixed soon.

Using Ansible to automate oVirt and RHV environments

Bored of clicking in the WebUI of RHV or oVirt? Automate it with Ansible! Set up a complete virtualization environment within a few minutes.

Some time ago, Ansible gained modules for orchestrating oVirt and RHV environments. They allow you to automate the setup of such an environment as well as daily tasks.

Preparation

Of course, Ansible cannot automate all tasks; you need to set up a few things manually. Let's assume you want your oVirt engine or RHV manager running outside of the RHV environment, which has some benefits when it comes to system management.

  • Set up at least two hypervisor machines with the latest RHEL 7
  • Set up the RHV-M machine with the latest RHEL 7
  • Have the appropriate Red Hat subscriptions
  • A machine with Ansible 2.3 installed (see the preparation sketch below)
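A minimal sketch for preparing the Ansible control node: besides Ansible itself, the ovirt_* modules need the oVirt Python SDK. The package name below is an assumption, so check what your repositories actually provide:

[user@ansible ~]$ sudo yum -y install ansible python-ovirt-engine-sdk4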

Set up the inventory file

Ensure you have an inventory file like the following in place, e.g. in /etc/ansible/hosts:

[rhv]
        rhv-m.example.com

[hypervisors]
        hv1.example.com
        hv2.example.com

Helper files

ovirt-engine-vars.yml

engine_url: https://rhv-m.example.com/ovirt-engine/api
username: admin@internal
password: redhat
engine_cafile: /etc/pki/ovirt-engine/ca.pem
datacenter: Default
cluster: Default

rhsm_user: user@example.com
rhsm_pass: secret

Please adjust the following example answer file to your environment.

rhv-setup.conf

# action=setup                                                                                                        
[environment:default]                                                                                                 
OVESETUP_DIALOG/confirmSettings=bool:True                                                                                            
OVESETUP_CONFIG/applicationMode=str:both                                                                                             
OVESETUP_CONFIG/remoteEngineSetupStyle=none:None                                                                                     
OVESETUP_CONFIG/sanWipeAfterDelete=bool:False                                                                                        
OVESETUP_CONFIG/storageIsLocal=bool:False                                                                                            
OVESETUP_CONFIG/firewallManager=none:None                                                                                            
OVESETUP_CONFIG/remoteEngineHostRootPassword=none:None                                                                               
OVESETUP_CONFIG/firewallChangesReview=none:None                                                                                      
OVESETUP_CONFIG/updateFirewall=bool:False                                                                                            
OVESETUP_CONFIG/remoteEngineHostSshPort=none:None                                                                                    
OVESETUP_CONFIG/fqdn=str:rhv-m.example.com                                                                                        
OVESETUP_CONFIG/storageType=none:None                                                                                                        
OSETUP_RPMDISTRO/requireRollback=none:None                                                                                                   
OSETUP_RPMDISTRO/enableUpgrade=none:None                                                                                                     
OVESETUP_PROVISIONING/postgresProvisioningEnabled=bool:True                                                                                  
OVESETUP_APACHE/configureRootRedirection=bool:True                                                                                           
OVESETUP_APACHE/configureSsl=bool:True                                                                                                         
OVESETUP_DB/secured=bool:False
OVESETUP_DB/fixDbConfiguration=none:None
OVESETUP_DB/user=str:engine
OVESETUP_DB/dumper=str:pg_custom
OVESETUP_DB/database=str:engine
OVESETUP_DB/fixDbViolations=none:None
OVESETUP_DB/engineVacuumFull=none:None
OVESETUP_DB/host=str:localhost
OVESETUP_DB/port=int:5432
OVESETUP_DB/filter=none:None
OVESETUP_DB/restoreJobs=int:2
OVESETUP_DB/securedHostValidation=bool:False
OVESETUP_ENGINE_CORE/enable=bool:True
OVESETUP_CORE/engineStop=none:None
OVESETUP_SYSTEM/memCheckEnabled=bool:True
OVESETUP_SYSTEM/nfsConfigEnabled=bool:False
OVESETUP_PKI/organization=str:example.com
OVESETUP_PKI/renew=none:None
OVESETUP_CONFIG/isoDomainName=none:None
OVESETUP_CONFIG/engineHeapMax=str:1955M
OVESETUP_CONFIG/ignoreVdsgroupInNotifier=none:None
OVESETUP_CONFIG/adminPassword=str:redhat
OVESETUP_CONFIG/isoDomainACL=none:None
OVESETUP_CONFIG/isoDomainMountPoint=none:None
OVESETUP_CONFIG/engineDbBackupDir=str:/var/lib/ovirt-engine/backups
OVESETUP_CONFIG/engineHeapMin=str:1955M
OVESETUP_DWH_CORE/enable=bool:True
OVESETUP_DWH_CONFIG/scale=str:1
OVESETUP_DWH_CONFIG/dwhDbBackupDir=str:/var/lib/ovirt-engine-dwh/backups
OVESETUP_DWH_DB/secured=bool:False
OVESETUP_DWH_DB/restoreBackupLate=bool:True
OVESETUP_DWH_DB/disconnectExistingDwh=none:None
OVESETUP_DWH_DB/host=str:localhost
OVESETUP_DWH_DB/user=str:ovirt_engine_history
OVESETUP_DWH_DB/dumper=str:pg_custom
OVESETUP_DWH_DB/database=str:ovirt_engine_history
OVESETUP_DWH_DB/performBackup=none:None
OVESETUP_DWH_DB/port=int:5432
OVESETUP_DWH_DB/filter=none:None
OVESETUP_DWH_DB/restoreJobs=int:2
OVESETUP_DWH_DB/securedHostValidation=bool:False
OVESETUP_DWH_PROVISIONING/postgresProvisioningEnabled=bool:True
OVESETUP_CONFIG/imageioProxyConfig=bool:True
OVESETUP_RHEVM_DIALOG/confirmUpgrade=bool:True
OVESETUP_VMCONSOLE_PROXY_CONFIG/vmconsoleProxyConfig=bool:True
OVESETUP_CONFIG/websocketProxyConfig=bool:True

Prepare your machines

The first playbook ensures your machines are subscribed via RHSM and that the needed repositories are made available.

install_rhv.yml

---
- hosts: rhv,hypervisors
  vars_files:
    - ovirt-engine-vars.yml
  
  tasks:
  - name: Register the machines to RHSM
    redhat_subscription:
      state: present
      username: "{{ rhsm_user }}"
      password: "{{ rhsm_pass }}"
      pool: '^(Red Hat Enterprise Server|Red Hat Virtualization)$'

  - name: Disable all repos
    command: subscription-manager repos --disable=*

- hosts: hypervisors
  tasks:
    - name: Enable required repositories
      command: subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-rhv-4-mgmt-agent-rpms
 
- hosts: rhv
  tasks:

    - name: Enable required repositories
      command: subscription-manager repos --enable=jb-eap-7-for-rhel-7-server-rpms --enable=rhel-7-server-rhv-4-tools-rpms --enable=rhel-7-server-rhv-4.1-rpms --enable=rhel-7-server-supplementary-rpms --enable=rhel-7-server-rpms

    - name: Copy Answer File
      copy:
        src: rhv-setup.conf
        dest: /tmp/rhv-setup.conf

    - name: Run RHV setup
      shell: |
        engine-setup --config-append=/tmp/rhv-setup.conf

Run the playbook

[user@ansible playbooks]$ ansible-playbook -k install_rhv.yml 
SSH password: 

PLAY [rhv,hypervisors] ************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [rhv-m.example.com]
ok: [hv1.example.com]
ok: [hv2.example.com]

TASK [Register the machines to RHSM] **********************************************************************************************************
ok: [hv1.example.com]
ok: [hv2.example.com]
ok: [rhv-m.example.com]

TASK [Disable all repos] **********************************************************************************************************************
changed: [rhv-m.example.com]
changed: [hv2.example.com]
changed: [hv1.example.com]

PLAY [hypervisors] ****************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [hv1.example.com]
ok: [hv2.example.com]

TASK [Enable required repositories] ***********************************************************************************************************
changed: [hv1.example.com]
changed: [hv2.example.com]

PLAY [rhv] ************************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [rhv-m.example.com]

TASK [Enable required repositories] ***********************************************************************************************************
changed: [rhv-m.example.com]

TASK [Copy Answer File] ***********************************************************************************************************************
ok: [rhv-m.example.com]

TASK [Run RHV setup] **************************************************************************************************************************
changed: [rhv-m.example.com]

PLAY RECAP ************************************************************************************************************************************
hv1.example.com : ok=5    changed=2    unreachable=0    failed=0   
hv2.example.com : ok=5    changed=2    unreachable=0    failed=0   
rhv-m.example.com       : ok=7    changed=3    unreachable=0    failed=0   

[user@ansible playbooks]$ 

Deploy your environment

Your environment is now ready, so you can set up all the required components such as data centers, clusters, networks, storage, etc.

rhv-deploy.yml

---
- name: Deploy RHV environment
  hosts: rhv

  vars_files: 
    - ovirt-engine-vars.yml

  pre_tasks:
  - name: Log in
    ovirt_auth:
      url: "{{ engine_url }}"
      username: "{{ username }}"
      password: "{{ password }}"
      ca_file: "{{ engine_cafile }}"
    tags:
      - always

  tasks:

  - name: ensure Datacenter "{{ datacenter }}" is existing
    ovirt_datacenters:
      auth: "{{ ovirt_auth }}"
      name: "{{ datacenter }}"
      comment: "Our primary DC"
      compatibility_version: 4.1
      quota_mode: enabled
      local: False

  - name: Ensure Cluster "{{ cluster }}" is existing
    ovirt_clusters:
      auth: "{{ ovirt_auth }}"
      name: "{{ cluster }}"
      data_center: "{{ datacenter }}"
      description: "Default Cluster 1"
      cpu_type: "Intel Haswell-noTSX Family"
      switch_type: legacy
      compatibility_version: 4.1
      gluster: false
      ballooning: false
      ha_reservation: true
      memory_policy: server
      rng_sources:
        - random

  - name: Ensure logical network VLAN101 exists
    ovirt_networks:
      auth: "{{ ovirt_auth }}"
      data_center: "{{ datacenter }}"
      name: vlan101
      vlan_tag: 101
      clusters:
        - name: "{{ cluster }}"
          assigned: True
          required: False

  - name: ensure host hv1 is joined
    ovirt_hosts:
      auth: "{{ ovirt_auth }}"
      cluster: "{{ cluster }}"
      name: hv1
      address: 192.168.100.112
      password: redhat

  - name: ensure host hv2 is joined
    ovirt_hosts:
      auth: "{{ ovirt_auth }}"
      cluster: "{{ cluster }}"
      name: hv2
      address: 192.168.100.20
      password: redhat

  - name: Assign Networks to host 
    ovirt_host_networks:
      auth: "{{ ovirt_auth }}"
      state: present
      name: "{{ item }}"
      interface: eth1
      save: True
      networks: 
        - name: vlan101
    with_items:
      - hv1
      - hv2


  - name: Enable Power Management for host1    
    ovirt_host_pm:
      auth: "{{ ovirt_auth }}"
      name: hv1
      address: 10.10.10.10
      options:
        lanplus: true
      username: admin
      password: secret
      type: ipmilan

  - name: Enable Power Management for host2
    ovirt_host_pm:
      auth: "{{ ovirt_auth }}"
      name: hv2
      address: 10.10.10.11
      options:
        lanplus: true
      username: admin
      password: secret
      type: ipmilan

  - name: Create VM datastore
    ovirt_storage_domains:
      auth: "{{ ovirt_auth }}"
      name: vms
      host: "hv2"
      data_center: "{{ datacenter }}"
      nfs:
        address: nfs.example.com
        path: /exports/rhv/vms

  - name: Create export NFS storage domain
    ovirt_storage_domains:
      auth: "{{ ovirt_auth }}"
      name: export
      host: "hv2"
      domain_function: export
      data_center: "{{ datacenter }}"
      nfs:
        address: nfs.example.com
        path: /exports/rhv/export

  - name: Create ISO NFS storage domain
    ovirt_storage_domains:
      auth: "{{ ovirt_auth }}"
      name: iso
      host: "hv2"
      domain_function: iso
      data_center: "{{ datacenter }}"
      nfs:
        address: nfs.example.com
        path: /exports/rhv/iso

Run the playbook

[user@ansible playbooks]$ ansible-playbook -k rhv-deploy.yml
SSH password: 

PLAY [Deploy RHV environment] *****************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [rhv-m.example.com]

TASK [Log in] *********************************************************************************************************************************
ok: [rhv-m.example.com]

TASK [ensure Datacenter "Default" is existing] ************************************************************************************************
changed: [rhv-m.example.com]

TASK [Ensure Cluster "Default" is existing] ***************************************************************************************************
changed: [rhv-m.example.com]

TASK [Ensure logical network VLAN101 exists] **************************************************************************************************
changed: [rhv-m.example.com]

TASK [ensure host hv1 is joined] ****************************************************************************************************
changed: [rhv-m.example.com]

TASK [ensure host hv2 is joined] ****************************************************************************************************
changed: [rhv-m.example.com]

TASK [Assign Networks to host] ****************************************************************************************************************
ok: [rhv-m.example.com] => (item=hv1)
ok: [rhv-m.example.com] => (item=hv2)

TASK [Enable Power Management for host1] ******************************************************************************************************
changed: [rhv-m.example.com]

TASK [Enable Power Management for host2] ******************************************************************************************************
changed: [rhv-m.example.com]

TASK [Create VM datastore] ********************************************************************************************************************
changed: [rhv-m.example.com]

TASK [Create export NFS storage domain] *******************************************************************************************************
changed: [rhv-m.example.com]

TASK [Create ISO NFS storage domain] **********************************************************************************************************
changed: [rhv-m.example.com]

PLAY RECAP ************************************************************************************************************************************
rhv-m.example.com       : ok=13   changed=10   unreachable=0    failed=0   

[user@ansible playbooks]$ 

Conclusion

With the help of Ansible you can automate a lot of boring tasks in a convenient way. You may even merge the two playbooks into one, but be aware that the RHV-M setup will fail if it is already set up (see the sketch below).
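One way to make a combined playbook re-runnable is to guard the engine-setup task with the creates argument of the shell module. The marker file used below is an assumption; pick any file that only exists after a successful setup:

    - name: Run RHV setup
      shell: |
        engine-setup --config-append=/tmp/rhv-setup.conf
      args:
        creates: /etc/ovirt-engine/engine.conf.d/10-setup-database.conf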

Have fun 🙂

PXE boot a virtual machine with NAT connection to the host

If you have a notebook and you want to quickly deploy new virtual machines for testing, PXE boot is your friend.

On notebooks, people usually do not use a bridged network but NAT instead. The DHCP server on the host, which is managed by libvirt, needs to be configured with the TFTP server and the boot file.

On my “mobile lab”, I have installed a virtual machine with a Red Hat Satellite 5 from which the other VMs get their content. The PXE boot files are managed by the bundled Cobbler server.

To do so, edit the XML file of the default network (or any other NAT network):

notebook-hv:~# virsh net-edit default

Add the <bootp> line shown below inside the <dhcp> element. Replace 192.168.122.122 with the actual IP address of your PXE/TFTP server.

<network>
  <name>default</name>
  <uuid>d54b7049-254b-46b0-b434-db2a1481cbd3</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:96:21:b5'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
      <bootp file='pxelinux.0' server='192.168.122.122'/>
    </dhcp>
  </ip>
</network>

And save it. Note that net-edit changes the persistent network definition; if the running network does not pick up the change immediately, restart it as shown below. Happy PXE-booting 🙂
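Restarting the default network makes libvirt regenerate the dnsmasq configuration with the new bootp entry:

notebook-hv:~# virsh net-destroy default
notebook-hv:~# virsh net-start default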

Automated disk partitioning on virtual machines with Cobbler

The default Cobbler snippets just do simple auto partitioning. For a more sophisticated partition layout you need to know what kind of VM you are going to install: KVM and RHEV present the disk as /dev/vda, Xen uses /dev/xvda, and ESX /dev/sda.

Luckily this can be figured out automatically, since the different virtualization vendors use their own MAC prefixes. So we can add two nice small Cobbler snippets to do the job. In this example, I call them hw-detect and partitioning.

hw-detect

#set $mac = $getVar('$mac_address_eth0')
#if $mac
#set $mac_prefix = $mac[0:8]
#if $mac_prefix == "00:1a:4a"
# This is a RHEV virtual machine
#set global $machinetype = 'kvm'

#else if $mac_prefix == "52:54:00"
# This is a KVM/Qemu virtual machine
#set global $machinetype='kvm'

#else if $mac_prefix == "00:16:3e"
# This is a XEN virtual machine
#set global $machinetype='xen'
#
#else if $mac_prefix == "00:50:56"
# This is a ESX virtual machine
#set global $machinetype = 'esx'

#else
# This is a physical machine
#set global $machinetype = 'physical'
#end if
#end if

partitioning

#if $machinetype == 'kvm'
#set $disk='vda'
#else if $machinetype == 'xen'
#set $disk = 'xvda'
#else
#set $disk = 'sda'
#end if
# Lets install the system on /dev/$disk
part /boot      --fstype ext2 --size=250 --ondisk=$disk
part pv.0       --size=1 --grow --ondisk=$disk

volgroup vg_${name} pv.0

logvol /        --fstype ext4 --name=lv_root    --vgname=vg_${name} --size=4096
logvol /home    --fstype ext4 --name=lv_home    --vgname=vg_${name} --size=512 --fsoption=nosuid,nodev,noexec
logvol /tmp     --fstype ext4 --name=lv_tmp    --vgname=vg_${name} --size=1024 --fsoption=nosuid,nodev,noexec
logvol /var     --fstype ext4 --name=lv_var    --vgname=vg_${name} --size=2048 --fsoption=nosuid,nodev,noexec
logvol swap     --fstype swap --name=lv_swap    --vgname=vg_${name} --size=2048

An additional “feature” of the partitioning snippet is that it sets the volume group name according to the system's name, which has been the unofficial standard for quite some time. It also sets some more secure mount options. Review them carefully to see whether they make sense for you and edit them as needed.

The next step is to configure your kickstart template.

Standalone Cobbler
On a standalone Cobbler server, edit /var/lib/cobbler/kickstarts/your-kick-start-template.ks

# Detect the used hardware type
$SNIPPET('hw-detect')
# Set up default partitioning
$SNIPPET('partitioning')

Bundled Cobbler
When using Cobbler bundled with Spacewalk or Red Hat Satellite, you need to edit the Kickstart profile in the WebUI.


Navigate to Systems -> Kickstart -> Profile. Select the Kickstart profile to be modified -> System Details -> Partitioning.

Copy the two snippets to /var/lib/cobbler/spacewalk/1, where 1 represents your OrgId.
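For example, using the snippet file names from above:

cp hw-detect partitioning /var/lib/cobbler/spacewalk/1/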

Alternatively you can edit them in the WebUI as well.

To check that everything works as expected, add a system to Cobbler using the command line interface and have a look at the rendered kickstart file. This can easily be done with cobbler system getks --name=blah.
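A quick sketch; the profile name and MAC address below are just placeholders for your environment:

cobbler system add --name=blah --profile=rhel7-x86_64 --interface=eth0 --mac=52:54:00:12:34:56
cobbler system getks --name=blah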

Happy system installing…

Have fun 🙂