New features in Satellite 5.8

Red Hat Satellite 5.8 has been released; it is based on Spacewalk 2.5. It will probably be the last upgrade available, as support ends in January 2019.

New features and enhancements

  • The major new feature is the introduction of CDN support for both Satellite activation and content sync. The key benefit is massively improved content-sync performance. The tool is now called cdn-sync instead of satellite-sync, so be aware that custom scripts and cron jobs must be updated as well (see the example after this list). This change also introduces the use of Satellite Manifests instead of the old certificates.
  • Introduction of the new CLI tool taskotop, which lets you watch the activity of the Taskomatic daemon.
  • PostgreSQL has been upgraded to 9.5, which also brings some performance improvements compared to 9.2.
  • More Perl bits have been rewritten in Java.
  • The Java JRE is now IBM's version 1.8.
  • A few new commands in the spacecmd CLI.
  • Lots of bugfixes and small enhancements.
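
To find custom scripts and cron jobs that still call the old tool, a quick search such as the following can help (the paths searched are just examples):

[root@sat58 ~]# grep -rn 'satellite-sync' /etc/cron.d /etc/cron.daily /usr/local/bin 2>/dev/null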

Removed features

There are some features that have been dropped with this release.

  • Support for patch management of Solaris systems. Who was using that? I cannot remember ever having seen a company use that feature.
  • Monitoring is gone as well; I know of only one organization that used that feature. Most companies are using Icinga or Nagios.

Usage of cdn-sync

Populate Repository Metadata

Listing the available channels works offline. To see how many packages are assigned to each channel, you first need to download the repository metadata.
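
If you only want to list the channels, no sync is required. A quick look, assuming the --list-channels option carried over from satellite-sync:

[root@sat58 ~]# cdn-sync --list-channels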

[root@sat58 ~]# cdn-sync --count-packages
14:05:25 Number of channels: 1271
14:05:25 Number of repositories: 1456
Downloading repomd:   |##################################################| 100.0% 
Comparing repomd:     |##################################################| 100.0% 
Downloading metadata: |##################################################| 100.0% 
Counting packages:    |##################################################| 100.0% 
14:42:21 Total time: 0:36:56
[root@sat58 ~]# 

Be aware that this will take a while, depending on how many entitlements are defined in the Satellite Manifest.

To keep that data up to date, you should add a cron job. The random sleep below delays execution by up to 9000 seconds to spread the load on the CDN:

[root@sat58 ~]# echo '0 1 * * * root perl -le "sleep rand 9000" && /usr/bin/cdn-sync --count-packages' >> /etc/cron.d/cdn-sync-populate-metadata

Initial content sync

Similar to the old satellite-sync, you provide the -c parameter repeatedly for all channels to be synced.

[root@sat58 ~]# cdn-sync -c rhel-x86_64-server-7              
11:16:20 ======================================
11:16:20 | Channel: rhel-x86_64-server-7
11:16:20 ======================================
11:16:20 Sync of channel started.
11:16:20 Repo URL: https://cdn.redhat.com/content/dist/rhel/server/7/7Server/x86_64/os
11:16:28 Packages in repo:             14275
11:16:41 Packages already synced:          0
11:16:41 Packages to sync:             14275
11:16:42 New packages to download:     14275
11:16:43 1/14275 : 389-ds-base-1.3.5.10-20.el7_3.x86_64.rpm
11:16:43 2/14275 : 389-ds-base-1.3.4.0-26.el7_2.x86_64.rpm
11:16:43 3/14275 : 389-ds-base-1.3.5.10-11.el7.x86_64.rpm
11:16:43 4/14275 : 389-ds-base-1.3.3.1-16.el7_1.x86_64.rpm
[.. output omitted ..]
11:57:03 14275/14275 : zsh-5.0.2-14.el7_2.2.x86_64.rpm
Importing packages:     |##################################################| 100.0% 
13:10:05 Linking packages to channel.
13:10:19 Repo https://cdn.redhat.com/content/dist/rhel/server/7/7Server/x86_64/os has comps file 730c62cc7600c7518e4920f800cb9af6b73d75ba-comps.xml.
13:10:20 Repo https://cdn.redhat.com/content/dist/rhel/server/7/7Server/x86_64/os has 1885 errata.
13:10:50 Kickstartable tree not detected (no valid treeinfo file)
13:10:50 Repo URL: https://cdn.redhat.com/content/dist/rhel/server/7/7.3/x86_64/kickstart
13:10:55 Packages in repo:              4751
13:12:32 No new packages to sync.
13:12:32 Linking packages to channel.
13:12:44 Repo https://cdn.redhat.com/content/dist/rhel/server/7/7.3/x86_64/kickstart has comps file c542e4cf37dd210de68877b53f41d92dc7686c6e1b35ca4b1852f2e62fca2c72-comps-Server.x86_64.xml.gz.
13:12:44 Repo https://cdn.redhat.com/content/dist/rhel/server/7/7.3/x86_64/kickstart has 0 errata.
13:12:44 Added new kickstartable tree ks-rhel-x86_64-server-7-7.3. Downloading content...
13:12:44 Gathering all files in kickstart repository...
Downloading kickstarts: |##################################################| 100.0%
[.. output omitted ..]
13:24:53 Sync of channel completed in 2:08:33.
13:24:54 Total time: 2:08:33
[root@sat58 ~]# 

A subsequent run of cdn-sync without any parameters behaves like satellite-sync: it syncs all previously synced channels.

You probably want to schedule a cron job for syncing new content daily (again with a random delay to spread the load):

[root@sat58 ~]# echo '0 1 * * * root perl -le "sleep rand 9000" && /usr/bin/cdn-sync' >> /etc/cron.d/cdn-sync

The output of the sync actions is logged to /var/log/rhn/cdnsync.log.
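
To follow a sync while it is running, just watch that log:

[root@sat58 ~]# tail -f /var/log/rhn/cdnsync.log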

Clearing the cache

Remember running rm -rf /var/cache/rhn/satsync/* when something went wrong? That's gone :-). You just use cdn-sync --clear-cache:

cdn-sync --clear-cache

Upgrading from Satellite 5.7

I have not yet found the time to test the upgrade; I’ll let you know about my experience and thoughts in a few days.

Conclusion

After approximately 15 years, old-school Red Hat Satellite 5 will finally be replaced by Satellite 6, which is built on completely different technologies such as The Foreman, Pulp, and Katello.

Satellite 5.8 is a very mature release; no major bugs are known.

Satellite users are encouraged to explore Satellite 6 now, in order to be ready for the transition in 2020.

Have fun 🙂

Using Ansible to automate oVirt and RHV environments

Bored of clicking in the WebUI of RHV or oVirt? Automate it with Ansible! Set up a complete virtualization environment within a few minutes.

For some time now, Ansible has included modules for orchestrating RHV environments. They allow you to automate the initial setup of such an environment as well as daily tasks.

Preparation

Of course, Ansible cannot automate every task; you need to set up a few things manually. Let's assume you want your oVirt engine or RHV manager running outside of the RHV environment, which has some benefits when it comes to systems management.

  • Set up at least two hypervisor machines with the latest RHEL 7
  • Set up the RHV-M machine with the latest RHEL 7
  • Have the appropriate Red Hat subscriptions available
  • Have a machine with Ansible 2.3 installed

Set up the inventory file

Ensure you have an inventory file like the following in place, e.g. in /etc/ansible/hosts:

[rhv]
        rhv-m.example.com

[hypervisors]
        hv1.example.com
        hv2.example.com
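
You can quickly verify that all machines are reachable with an ad-hoc ping (-k prompts for the SSH password, as in the playbook runs below):

[user@ansible playbooks]$ ansible 'rhv:hypervisors' -m ping -k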

Helper files

ovirt-engine-vars.yml

engine_url: https://rhv-m.example.com/ovirt-engine/api
username: admin@internal
password: redhat
engine_cafile: /etc/pki/ovirt-engine/ca.pem
datacenter: Default
cluster: Default

rhsm_user: user@example.com
rhsm_pass: secret
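
Keeping the RHSM and admin passwords in a plain-text file is fine for a lab; for anything else, consider encrypting the file with Ansible Vault:

[user@ansible playbooks]$ ansible-vault encrypt ovirt-engine-vars.yml
[user@ansible playbooks]$ ansible-playbook -k --ask-vault-pass install_rhv.yml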

Please adjust the following example answer file for your environment.

rhv-setup.conf

# action=setup
[environment:default]
OVESETUP_DIALOG/confirmSettings=bool:True
OVESETUP_CONFIG/applicationMode=str:both
OVESETUP_CONFIG/remoteEngineSetupStyle=none:None
OVESETUP_CONFIG/sanWipeAfterDelete=bool:False
OVESETUP_CONFIG/storageIsLocal=bool:False
OVESETUP_CONFIG/firewallManager=none:None
OVESETUP_CONFIG/remoteEngineHostRootPassword=none:None
OVESETUP_CONFIG/firewallChangesReview=none:None
OVESETUP_CONFIG/updateFirewall=bool:False
OVESETUP_CONFIG/remoteEngineHostSshPort=none:None
OVESETUP_CONFIG/fqdn=str:rhv-m.example.com
OVESETUP_CONFIG/storageType=none:None
OSETUP_RPMDISTRO/requireRollback=none:None
OSETUP_RPMDISTRO/enableUpgrade=none:None
OVESETUP_PROVISIONING/postgresProvisioningEnabled=bool:True
OVESETUP_APACHE/configureRootRedirection=bool:True
OVESETUP_APACHE/configureSsl=bool:True
OVESETUP_DB/secured=bool:False
OVESETUP_DB/fixDbConfiguration=none:None
OVESETUP_DB/user=str:engine
OVESETUP_DB/dumper=str:pg_custom
OVESETUP_DB/database=str:engine
OVESETUP_DB/fixDbViolations=none:None
OVESETUP_DB/engineVacuumFull=none:None
OVESETUP_DB/host=str:localhost
OVESETUP_DB/port=int:5432
OVESETUP_DB/filter=none:None
OVESETUP_DB/restoreJobs=int:2
OVESETUP_DB/securedHostValidation=bool:False
OVESETUP_ENGINE_CORE/enable=bool:True
OVESETUP_CORE/engineStop=none:None
OVESETUP_SYSTEM/memCheckEnabled=bool:True
OVESETUP_SYSTEM/nfsConfigEnabled=bool:False
OVESETUP_PKI/organization=str:example.com
OVESETUP_PKI/renew=none:None
OVESETUP_CONFIG/isoDomainName=none:None
OVESETUP_CONFIG/engineHeapMax=str:1955M
OVESETUP_CONFIG/ignoreVdsgroupInNotifier=none:None
OVESETUP_CONFIG/adminPassword=str:redhat
OVESETUP_CONFIG/isoDomainACL=none:None
OVESETUP_CONFIG/isoDomainMountPoint=none:None
OVESETUP_CONFIG/engineDbBackupDir=str:/var/lib/ovirt-engine/backups
OVESETUP_CONFIG/engineHeapMin=str:1955M
OVESETUP_DWH_CORE/enable=bool:True
OVESETUP_DWH_CONFIG/scale=str:1
OVESETUP_DWH_CONFIG/dwhDbBackupDir=str:/var/lib/ovirt-engine-dwh/backups
OVESETUP_DWH_DB/secured=bool:False
OVESETUP_DWH_DB/restoreBackupLate=bool:True
OVESETUP_DWH_DB/disconnectExistingDwh=none:None
OVESETUP_DWH_DB/host=str:localhost
OVESETUP_DWH_DB/user=str:ovirt_engine_history
OVESETUP_DWH_DB/dumper=str:pg_custom
OVESETUP_DWH_DB/database=str:ovirt_engine_history
OVESETUP_DWH_DB/performBackup=none:None
OVESETUP_DWH_DB/port=int:5432
OVESETUP_DWH_DB/filter=none:None
OVESETUP_DWH_DB/restoreJobs=int:2
OVESETUP_DWH_DB/securedHostValidation=bool:False
OVESETUP_DWH_PROVISIONING/postgresProvisioningEnabled=bool:True
OVESETUP_CONFIG/imageioProxyConfig=bool:True
OVESETUP_RHEVM_DIALOG/confirmUpgrade=bool:True
OVESETUP_VMCONSOLE_PROXY_CONFIG/vmconsoleProxyConfig=bool:True
OVESETUP_CONFIG/websocketProxyConfig=bool:True

Prepare your machines

The first playbook ensures that your machines are subscribed to RHSM and that the needed repositories are made available.

install_rhv.yml

---
- hosts: rhv,hypervisors
  vars_files:
    - ovirt-engine-vars.yml
  
  tasks:
  - name: Register the machines to RHSM
    redhat_subscription:
      state: present
      username: "{{ rhsm_user }}"
      password: "{{ rhsm_pass }}"
      pool: '^(Red Hat Enterprise Server|Red Hat Virtualization)$'

  - name: Disable all repos
    command: subscription-manager repos --disable=*

- hosts: hypervisors
  tasks:
    - name: Enable required repositories
      command: subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-rhv-4-mgmt-agent-rpms
 
- hosts: rhv
  tasks:

    - name: Enable required repositories
      command: subscription-manager repos --enable=jb-eap-7-for-rhel-7-server-rpms --enable=rhel-7-server-rhv-4-tools-rpms --enable=rhel-7-server-rhv-4.1-rpms --enable=rhel-7-server-supplementary-rpms --enable=rhel-7-server-rpms

    - name: Copy Answer File
      copy:
        src: rhv-setup.conf
        dest: /tmp/rhv-setup.conf

    - name: Run RHV setup
      shell: |
        engine-setup --config-append=/tmp/rhv-setup.conf

Run the playbook

[user@ansible playbooks]$ ansible-playbook -k install_rhv.yml 
SSH password: 

PLAY [rhv,hypervisors] ************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [rhv-m.example.com]
ok: [hv1.example.com]
ok: [hv2.example.com]

TASK [Register the machines to RHSM] **********************************************************************************************************
ok: [hv1.example.com]
ok: [hv2.example.com]
ok: [rhv-m.example.com]

TASK [Disable all repos] **********************************************************************************************************************
changed: [rhv-m.example.com]
changed: [hv2.example.com]
changed: [hv1.example.com]

PLAY [hypervisors] ****************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [hv1.example.com]
ok: [hv2.example.com]

TASK [Enable required repositories] ***********************************************************************************************************
changed: [hv1.example.com]
changed: [hv2.example.com]

PLAY [rhv] ************************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [rhv-m.example.com]

TASK [Enable required repositories] ***********************************************************************************************************
changed: [rhv-m.example.com]

TASK [Copy Answer File] ***********************************************************************************************************************
ok: [rhv-m.example.com]

TASK [Run RHV setup] **************************************************************************************************************************
changed: [rhv-m.example.com]

PLAY RECAP ************************************************************************************************************************************
hv1.example.com         : ok=5    changed=2    unreachable=0    failed=0   
hv2.example.com         : ok=5    changed=2    unreachable=0    failed=0   
rhv-m.example.com       : ok=7    changed=3    unreachable=0    failed=0   

[user@ansible playbooks]$ 

Deploy your environment

Your environment is now ready for setting up all the required components such as data centers, clusters, networks, and storage.

rhv-deploy.yml

---
- name: Deploy RHV environment
  hosts: rhv

  vars_files: 
    - ovirt-engine-vars.yml

  pre_tasks:
  - name: Log in
    ovirt_auth:
      url: "{{ engine_url }}"
      username: "{{ username }}"
      password: "{{ password }}"
      ca_file: "{{ engine_cafile }}"
    tags:
      - always

  tasks:

  - name: Ensure datacenter "{{ datacenter }}" exists
    ovirt_datacenters:
      auth: "{{ ovirt_auth }}"
      name: "{{ datacenter }}"
      comment: "Our primary DC"
      compatibility_version: 4.1
      quota_mode: enabled
      local: False

  - name: Ensure cluster "{{ cluster }}" exists
    ovirt_clusters:
      auth: "{{ ovirt_auth }}"
      name: "{{ cluster }}"
      data_center: "{{ datacenter }}"
      description: "Default Cluster 1"
      cpu_type: "Intel Haswell-noTSX Family"
      switch_type: legacy
      compatibility_version: 4.1
      gluster: false
      ballooning: false
      ha_reservation: true
      memory_policy: server
      rng_sources:
        - random

  - name: Ensure logical network VLAN101 exists
    ovirt_networks:
      auth: "{{ ovirt_auth }}"
      data_center: "{{ datacenter }}"
      name: vlan101
      vlan_tag: 101
      clusters:
        - name: "{{ cluster }}"
          assigned: True
          required: False

  - name: Ensure host hv1 is joined
    ovirt_hosts:
      auth: "{{ ovirt_auth }}"
      cluster: "{{ cluster }}"
      name: hv1
      address: 192.168.100.112
      password: redhat

  - name: Ensure host hv2 is joined
    ovirt_hosts:
      auth: "{{ ovirt_auth }}"
      cluster: "{{ cluster }}"
      name: hv2
      address: 192.168.100.20
      password: redhat

  - name: Assign Networks to host 
    ovirt_host_networks:
      auth: "{{ ovirt_auth }}"
      state: present
      name: "{{ item }}"
      interface: eth1
      save: True
      networks: 
        - name: vlan101
    with_items:
      - hv1
      - hv2


  - name: Enable Power Management for hv1
    ovirt_host_pm:
      auth: "{{ ovirt_auth }}"
      name: hv1
      address: 10.10.10.10
      options:
        lanplus: true
      username: admin
      password: secret
      type: ipmilan

  - name: Enable Power Management for hv2
    ovirt_host_pm:
      auth: "{{ ovirt_auth }}"
      name: hv2
      address: 10.10.10.11
      options:
        lanplus: true
      username: admin
      password: secret
      type: ipmilan

  - name: Create VM datastore
    ovirt_storage_domains:
      auth: "{{ ovirt_auth }}"
      name: vms
      host: "hv2"
      data_center: "{{ datacenter }}"
      nfs:
        address: nfs.example.com
        path: /exports/rhv/vms

  - name: Create export NFS storage domain
    ovirt_storage_domains:
      auth: "{{ ovirt_auth }}"
      name: export
      host: "hv2"
      domain_function: export
      data_center: "{{ datacenter }}"
      nfs:
        address: nfs.example.com
        path: /exports/rhv/export

  - name: Create ISO NFS storage domain
    ovirt_storage_domains:
      auth: "{{ ovirt_auth }}"
      name: iso
      host: "hv2"
      domain_function: iso
      data_center: "{{ datacenter }}"
      nfs:
        address: nfs.example.com
        path: /exports/rhv/iso
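
Before running it against the engine, a syntax check of the playbook costs nothing:

[user@ansible playbooks]$ ansible-playbook --syntax-check rhv-deploy.yml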

Run the playbook

[user@ansible playbooks]$ ansible-playbook -k rhv-deploy.yml
SSH password: 

PLAY [Deploy RHV environment] *****************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [rhv-m.example.com]

TASK [Log in] *********************************************************************************************************************************
ok: [rhv-m.example.com]

TASK [Ensure datacenter "Default" exists] *****************************************************************************************************
changed: [rhv-m.example.com]

TASK [Ensure cluster "Default" exists] ********************************************************************************************************
changed: [rhv-m.example.com]

TASK [Ensure logical network VLAN101 exists] **************************************************************************************************
changed: [rhv-m.example.com]

TASK [Ensure host hv1 is joined] **************************************************************************************************************
changed: [rhv-m.example.com]

TASK [Ensure host hv2 is joined] **************************************************************************************************************
changed: [rhv-m.example.com]

TASK [Assign Networks to host] ****************************************************************************************************************
ok: [rhv-m.example.com] => (item=hv1)
ok: [rhv-m.example.com] => (item=hv2)

TASK [Enable Power Management for hv1] ********************************************************************************************************
changed: [rhv-m.example.com]

TASK [Enable Power Management for hv2] ********************************************************************************************************
changed: [rhv-m.example.com]

TASK [Create VM datastore] ********************************************************************************************************************
changed: [rhv-m.example.com]

TASK [Create export NFS storage domain] *******************************************************************************************************
changed: [rhv-m.example.com]

TASK [Create ISO NFS storage domain] **********************************************************************************************************
changed: [rhv-m.example.com]

PLAY RECAP ************************************************************************************************************************************
rhv-m.example.com       : ok=13   changed=10   unreachable=0    failed=0   

[user@ansible playbooks]$ 

Conclusion

With the help of Ansible you can automate a lot of boring tasks in a convenient way. You may even merge the two playbooks into one, but be aware that the RHV-M setup will fail if it is already set up; see the sketch below for one way around that.
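
One way to keep a merged playbook re-runnable is to guard the setup step so it is skipped once the engine is configured. A minimal sketch for the shell task, assuming engine-setup writes /etc/ovirt-engine/engine.conf.d/10-setup-database.conf on success (check the marker file on your version):

# Hypothetical guard: only run engine-setup if the engine is not yet configured
test -e /etc/ovirt-engine/engine.conf.d/10-setup-database.conf || \
    engine-setup --config-append=/tmp/rhv-setup.conf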

Have fun 🙂

PXE boot a virtual machine with NAT connection to the host

If you have a notebook and you want to quickly deploy new virtual machines for testing, PXE boot is your friend.

On notebooks, people usually do not use a bridged network but NAT instead. The DHCP server on the host, which is managed by libvirt, needs to be configured with the TFTP server and the boot file.

On my “mobile lab”, I’ve installed a virtual machine with a Red Hat Satellite 5 from which the other VMs get their content. The PXE boot files are managed by the bundled Cobbler server.

To do so, edit the XML file of the default network (or any other NAT network):

notebook-hv:~# virsh net-edit default

Add the <bootp> line shown below to the file, replacing 192.168.122.122 with the actual IP address of your PXE/TFTP server.

<network>
  <name>default</name>
  <uuid>d54b7049-254b-46b0-b434-db2a1481cbd3</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:96:21:b5'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
      <bootp file='pxelinux.0' server='192.168.122.122'/>
    </dhcp>
  </ip>
</network>

And save it. The changes take effect immediately.
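
You can double-check the running network definition afterwards:

notebook-hv:~# virsh net-dumpxml default

Happy PXE-booting 🙂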

Signing Linux kernel modules and enforcing that only signed modules are loaded

Introduction

By enforcing that only signed Linux kernel modules can be loaded, you can greatly enhance the security of your systems.

There are basically two methods of enforcement: UEFI Secure Boot, and a kernel parameter set in GRUB. When using Secure Boot, you can sign your own (or third-party) kernel modules yourself and add your public key as a MOK (Machine Owner Key) in UEFI. When not using Secure Boot, you cannot load self-signed modules because there is no facility for storing MOKs, but you can at least prevent unsigned modules from being loaded.

Unfortunately, I was unable to test Secure Boot with a KVM virtual machine; the MOK was not added. It also does not seem to work on all hardware: I failed with my Lenovo T450s notebook, but finally succeeded on my workstation with a Gigabyte Z97-D3H motherboard running Fedora 25. If someone has a solution for virtual machines, please let me know.

About Secure boot

Basically, it is a chain of trust built on X.509 certificates: UEFI firmware -> shim first-stage bootloader -> GRUB second-stage bootloader -> kernel -> modules.

This adds complexity: if something goes wrong, it is not always easy to figure out where and why.

Secure Boot is not without controversy; it is dominated by Microsoft, as only Microsoft can sign bootloaders. Yes, the shim bootloader is signed by Microsoft. If Microsoft decided to no longer sign shim (or any non-MS loader), the whole Linux landscape would no longer be able to use Secure Boot. As of today, most UEFI firmware lets users turn Secure Boot off, but will that still be the case in the future?

Creating a dummy Kernel Module

First you need to build an unsigned kernel module. A “Hello World” module is good enough.

Install the required RPMs

yum -y install kernel-devel.x86_64 gcc keyutils mokutil.x86_64

hello.c

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Luc de Louw");
MODULE_DESCRIPTION("Hello World Linux Kernel Module");

static int __init hello_init(void)
{
    printk(KERN_INFO "Hello world!\n");
    return 0;
}

static void __exit hello_exit(void)
{
    printk(KERN_INFO "Unloading Hello world.\n");
}

module_init(hello_init);
module_exit(hello_exit);

Makefile

obj-m += hello.o

all:
        make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
        make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

install:
        cp hello.ko  /lib/modules/$(shell uname -r)/extra
        depmod

Building

make && make install

Testing

modprobe hello
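
The module's printk() output goes to the kernel ring buffer, so you can verify that it actually ran:

dmesg | tail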

To remove the module afterwards run

rmmod hello

Set up the enforcement of loading only signed modules

This is only needed on machines without Secure Boot; when booted with Secure Boot enabled, the kernel enforces module signatures anyway.

Add module.sig_enforce=1 to GRUB_CMDLINE_LINUX in /etc/default/grub

/etc/default/grub

GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=vg_rhel7test/lv_root rd.lvm.lv=vg_rhel7test/lv_swap module.sig_enforce=1"
GRUB_DISABLE_RECOVERY="true"

The next step is to update the GRUB configuration. Please check if you are using UEFI or BIOS on your system first.

On systems with UEFI

[root@rhel7uefi ~]# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

On BIOS systems

[root@rhel7test ~]# grub2-mkconfig -o /boot/grub2/grub.cfg

Reboot your system.
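
After the reboot, you can verify that the parameter is active on the kernel command line:

cat /proc/cmdline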

Testing

modprobe hello

The module will not load; you will see an error message instead:

modprobe: ERROR: could not insert 'hello': Required key not available

Signing the module

Needless to say, this must be done on a protected system and not on production servers.

First you need to create an OpenSSL config file like this:

x509.conf

cat >/tmp/x509.conf <<EOF
[ req ]
default_bits = 4096
distinguished_name = req_distinguished_name
prompt = no
string_mask = utf8only
x509_extensions = extensions

[ req_distinguished_name ]
O = Example, Inc.
CN = Example, Inc. Kernel signing key
emailAddress = jdoe@example.com

[ extensions ]
basicConstraints=critical,CA:FALSE
keyUsage=digitalSignature
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid
EOF

Generating the Keypair

[root@rhel7uefi ~]# openssl req -x509 -new -nodes -utf8 -sha256 -days 99999 -batch -config /tmp/x509.conf -outform DER -out pubkey.der -keyout priv.key 
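
The private key is everything an attacker needs to sign malicious modules, so restrict its permissions right away:

[root@rhel7uefi ~]# chmod 400 priv.key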

Adding the Public key as a MOK (Machine Owner Key)

Note: this only works on systems with UEFI; on BIOS machines you will get an error.

[root@rhel7uefi ~]# mokutil --import pubkey.der

You will be prompted for a password that will be used for the second part of the MOK enrollment. Reboot your machine and the shim UEFI key manager will appear. Choose “Enroll MOK” there and confirm with the password you set before; if you wait 10 seconds without pressing a key, the system simply continues to boot normally and the key is not enrolled.

You can list the enrolled keys with

[root@rhel7uefi ~]# mokutil --list-enrolled

Signing the Module

After successfully enrolling the MOK, you can sign and test the kernel module.

First, let's have a look at the module:

[root@rhel7uefi ~]# modinfo hello
filename:       /lib/modules/3.10.0-514.16.1.el7.x86_64/extra/hello.ko
description:    Hello World Linux Kernel Module
author:         Luc de Louw
license:        GPL
rhelversion:    7.3
srcversion:     4A5235839200E8580493A17
depends:        
vermagic:       3.10.0-514.16.1.el7.x86_64 SMP mod_unload modversions 
[root@rhel7uefi ~]# 

Sign it.

[root@rhel7uefi ~]# /usr/src/kernels/$(uname -r)/scripts/sign-file sha256 priv.key pubkey.der /lib/modules/$(uname -r)/extra/hello.ko

Let's have a look at the module again:

[root@rhel7uefi ~]# modinfo hello.ko
filename:       /root/hello.ko
description:    Hello World Linux Kernel Module
author:         Luc de Louw
license:        GPL
rhelversion:    7.3
srcversion:     4A5235839200E8580493A17
depends:
vermagic:       3.10.0-514.16.1.el7.x86_64 SMP mod_unload modversions
signer:         Example, Inc. Kernel signing key
sig_key:        71:F7:AA:48:60:A0:B5:D9:D8:A8:1D:A4:6F:92:30:DF:87:35:81:19
sig_hashalgo:   sha256
[root@rhel7uefi ~]#

Now you should be able to load your module.

[root@rhel7uefi ~]# modprobe hello

If something went wrong, you will see an error message such as

modprobe: ERROR: could not insert 'hello': Required key not available

Syslog and Journald are more verbose:

Request for unknown module key 'Example, Inc. Kernel signing key: 22e37ef0c0784c7a2c1e2690dc8b27c75533b29d' err -11
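
These messages end up in the kernel ring buffer, so you can search for them with journalctl:

[root@rhel7uefi ~]# journalctl -k | grep 'unknown module key'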

Conclusion

If your hardware works with Secure Boot, you can easily enhance security while keeping the flexibility to load third-party kernel modules by signing them.

On virtual machines you can make use of signature enforcement, which prevents loading any unsigned third-party module. This may or may not be a problem.

A major drawback I see is scalability. It may be okay to manually enroll keys on a few workstations or notebooks, but on a larger enterprise scale I see problems. For really large environments, you can probably talk to the hardware vendor about having your MOK (Machine Owner Key) preinstalled at the factory.

Have fun 🙂