Install and configure DKIM with Postfix on RHEL7

Signed Email


DKIM (DomainKeys Identified Mail) is a measure against email spoofing, phishing, and spam. It's easy to implement, as you will learn in this article.

DKIM signs emails on the outgoing SMTP server; the receiving SMTP server can verify the signature by looking up the mail._domainkey TXT DNS record of the respective domain to check whether the email really originates from that domain or is forged.
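The record a verifier queries is simply the selector joined with `._domainkey.` and the signing domain. A small sketch of how that name is built (example.com is a placeholder for your own domain):

```python
# Build the DNS name a DKIM verifier queries for the public key:
# <selector>._domainkey.<domain>
def dkim_record_name(selector: str, domain: str) -> str:
    return f"{selector}._domainkey.{domain}"

# With the selector "mail" used in this article:
print(dkim_record_name("mail", "example.com"))  # mail._domainkey.example.com
```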

This howto can be used to implement DKIM on an SMTP server responsible for both incoming and outgoing mail.

DKIM was standardized in 2007 as the successor of DomainKeys, which Yahoo introduced in 2004. The latest revision of the standard is defined in RFC 6376.

Requirements

  • A running Postfix SMTP server
  • Access to the RHEL 7 Optional Software Channel/Repo (rhel-x86_64-server-optional-7)
  • EPEL repository available

Installing the Software

The dependencies will be installed automatically:

mail:~# yum -y install opendkim

Enable OpenDKIM on system startup:

mail:~# systemctl enable opendkim.service

Configure OpenDKIM

Add/Uncomment the following lines in /etc/opendkim.conf

Socket inet:12341@localhost # Choose any free port number
Mode    sv
KeyTable        /etc/opendkim/KeyTable
SigningTable    refile:/etc/opendkim/SigningTable
InternalHosts   refile:/etc/opendkim/TrustedHosts
SignatureAlgorithm      rsa-sha256


In /etc/opendkim/TrustedHosts you configure a whitelist of domains and/or IP addresses that are considered trusted. This is usually just localhost.
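A minimal TrustedHosts file, assuming you only sign mail submitted from the local machine:

```
127.0.0.1
::1
localhost
```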


In /etc/opendkim/KeyTable the definition of your private key is set up.
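A KeyTable entry maps a key name to domain:selector:keyfile. An example line, assuming the domain example.com and the key generated below:

```
mail._domainkey.example.com example.com:mail:/etc/opendkim/keys/mail.private
```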


In /etc/opendkim/SigningTable you define email address patterns that map to the keys in the KeyTable.
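A SigningTable entry maps a sender address pattern to a key name from the KeyTable. For example, to sign everything sent from example.com (a placeholder domain):

```
*@example.com mail._domainkey.example.com
```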


Create the keypair

mail:~# mkdir /etc/opendkim/keys/
mail:~# cd /etc/opendkim/keys/
mail:~# opendkim-genkey -s mail -d
mail:~# chown opendkim:opendkim mail.private

The file /etc/opendkim/keys/mail.txt contains the public key, which must be added to a DNS server authoritative for the domain. It looks like the following:

mail._domainkey IN      TXT     ( "v=DKIM1; k=rsa; "
          "p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC9grq0kphBEtp9biB09/X0rS42s87yHbxq4DsR0SYBNGTdendDzsFaGZeQMu0bGkY488Jm2OjmT4vXBy7FvTdqFIUKvKWXl0uKbH6nn0NcJe/Q71YnmNsGI1/EFa+YXIHqdbUjCVoQOzXQ1UiB+jZiw/G0Hhs45FW9sR8LFwaj6QIDAQAB" )  ; ----- DKIM key mail for

If you are running (Free)IPA or Red Hat Identity Management as your DNS server, add the record as follows:

[root@ipa1 ~]# ipa dnsrecord-add --txt-rec="p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC9grq0kphBEtp9biB09/X0rS42s87yHbxq4DsR0SYBNGTdendDzsFaGZeQMu0bGkY488Jm2OjmT4vXBy7FvTdqFIUKvKWXl0uKbH6nn0NcJe/Q71YnmNsGI1/EFa+YXIHqdbUjCVoQOzXQ1UiB+jZiw/G0Hhs45FW9sR8LFwaj6QIDAQAB" mail._domainkey

Configure Postfix

Thanks to Postfix's milter implementation, configuring Postfix is a no-brainer:

mail:~# postconf milter_protocol=2
mail:~# postconf milter_default_action=accept
mail:~# postconf smtpd_milters=inet:localhost:12341
mail:~# postconf non_smtpd_milters=inet:localhost:12341

Restart the Services

mail:~# systemctl restart opendkim.service
mail:~# systemctl restart postfix.service


Send a test email to a DKIM reflector (an auto-responder that checks your signature) to test your setup. A few seconds later you will get an automated response which shows the results.

Do not get confused by DomainKeys check: neutral in the test results; that refers to the legacy Yahoo DomainKeys. The important part is DKIM.

You can also send yourself an email and check its source; it will look similar to this:

Return-Path: <>
Received: from (unknown [])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	(Authenticated sender:
	by (Postfix) with ESMTPSA id 3D1CFA34
	for <>; Sun, 19 Feb 2017 17:20:37 +0100 (CET)
DKIM-Filter: OpenDKIM Filter v2.11.0 3D1CFA34
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;; s=mail;
	t=1487521237; bh=asdfasdfasasdfasfasdfsadfsdaf=;
From: Joe Doe <>
Subject: test
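The DKIM-Signature header is a list of tag=value pairs; d= and s= tell the verifier which DNS record to fetch. A quick way to pick them apart (the header values here are made up, not taken from a real signed mail):

```python
import email

# A stripped-down message with an illustrative (not real) DKIM-Signature header
raw = (
    "DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; "
    "d=example.com; s=mail\n"
    "From: Joe Doe <joe@example.com>\n"
    "Subject: test\n"
    "\n"
    "body\n"
)
msg = email.message_from_string(raw)

# Split the header into its tag=value pairs
tags = dict(
    field.split("=", 1)
    for field in msg["DKIM-Signature"].replace(" ", "").split(";")
)
print(tags["s"], tags["d"])  # selector and domain for the DNS lookup
```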

Read further

Have fun! 🙂

Migrating from CentOS7 to RHEL7

There are various reasons to migrate from CentOS to RHEL: quicker access to bugfixes and new minor releases, as well as having a fully commercially supported system.

There are different tutorials on the net on how to migrate from RHEL to CentOS, but almost no information about the other way round. It is quite simple, and at the end of the day you will have only Red Hat packages installed.

In 2012 I wrote an article about migrating from CentOS 6 to RHEL 6. Now it's time for an update.


Some of the procedures can be destructive for your system and/or your data. I'm not taking any responsibility for any damage caused. Take a full backup of your system before even thinking about trying this procedure!

Also important to note: such a procedure is not supported by Red Hat.


There are only two things you need:

  • A valid RHEL subscription obtained from Red Hat's online store
  • An RHEL 7 ISO image that corresponds to your current CentOS minor release (or newer), which can be downloaded at Red Hat downloads


Be sure you activated your subscription.

Mount the ISO image on your CentOS7 machine:

[root@centos7 ~]# mount /dev/cdrom /mnt -o loop

Go to /mnt/Packages and install the packages we need:

[root@centos7 Packages]# yum -y localinstall subscription-manager-1.15.9-15.el7.x86_64.rpm

(Re)Move your CentOS repos
To avoid conflicts between the CentOS and Red Hat repositories, you need to get rid of the CentOS ones. Remove them or just keep a copy.

[root@centos7 Packages]# mkdir /etc/yum.repos.d.centos
[root@centos7 Packages]# mv /etc/yum.repos.d/CentOS-* /etc/yum.repos.d.centos

Force-remove the yum and centos-release RPMs and install their RHEL counterparts:

[root@centos7 Packages]# rpm -e yum --nodeps
[root@centos7 Packages]# rpm -ihv yum-3.4.3-132.el7.noarch.rpm
[root@centos7 Packages]# rpm -e centos-release --nodeps
[root@centos7 Packages]# yum localinstall redhat-release-server-7.2-9.el7.x86_64.rpm

Register your system

To get access to RHEL repositories, you need to register your system. Use your own Red Hat account username. The ID is a randomly generated UUID.

[root@centos7 ~]# subscription-manager register
Registering to:
The system has been registered with ID: e61bd536-854c-4f32-a1fa-7f75c37046a5  
[root@centos7 ~]# 

Attach the system to a subscription

Usually it is good enough to auto-attach the needed subscription.

[root@centos7 ~]# subscription-manager attach --auto

Installed Product Current Status:
Product Name: Red Hat Enterprise Linux Server
Status:       Subscribed

[root@centos7 ~]#

Review enabled repositories

Sometimes you don't want to use all the repos provided. The simplest way is to disable all of them and re-enable those you need.

[root@centos7 ~]# subscription-manager repos --list
[root@centos7 ~]# subscription-manager repos --disable "*"
[root@centos7 ~]# subscription-manager repos --enable rhel-7-server-rpms --enable rhel-7-server-optional-rpms --enable whatever-else-you-need
[root@centos7 ~]# yum clean all

Changing the Distribution

Now that all requirements are met, let's reinstall the packages.

[root@centos7 ~]# yum reinstall "*" --exclude=filesystem
[omitted output]
 zlib                     x86_64 1.2.7-15.el7           rhel-7-server-rpms  90 k
Not available:
 dhclient                 x86_64 12:4.2.5-42.el7.centos -                  0.0  
 plymouth                 x86_64 0.8.9-0.24.20140113.el7.centos
                                                        -                  0.0  
 curl                     x86_64 7.29.0-25.el7.centos   -                  0.0  
 grub2-tools              x86_64 1:2.02-0.29.el7.centos -                  0.0  
 basesystem               noarch 10.0-7.el7.centos      -                  0.0  
 plymouth-core-libs       x86_64 0.8.9-0.24.20140113.el7.centos
                                                        -                  0.0  
 mariadb-libs             x86_64 1:5.5.44-2.el7.centos  -                  0.0  
 libcurl                  x86_64 7.29.0-25.el7.centos   -                  0.0  
 dhcp-libs                x86_64 12:4.2.5-42.el7.centos -                  0.0  
 plymouth-scripts         x86_64 0.8.9-0.24.20140113.el7.centos
                                                        -                  0.0  
 dhcp-common              x86_64 12:4.2.5-42.el7.centos -                  0.0  
 grub2                    x86_64 1:2.02-0.29.el7.centos -                  0.0  
 centos-logos             noarch 70.0.6-3.el7.centos    -                  0.0  

Transaction Summary
Reinstall      291 Packages
Not available   13 Packages

Total download size: 154 M
Installed size: 577 M
Is this ok [y/d/N]:

Here you can see the CentOS-specific packages; we need to take care of them later. Proceed and acknowledge with Y.


Now we need to manually clean up the CentOS-specific packages, which are named [package-name-and-version].el7.centos.

[root@centos7 ~]# rpm -qa --queryformat "%{NAME} %{VENDOR}\n" | grep -i centos | cut -d' ' -f1
[root@centos7 ~]#

With some of the packages you need to proceed very carefully; the filesystem package, for example, is nasty. If you remove it, you will have to reinstall your system.

Luckily there is the rpm parameter --justdb, which only makes changes to the RPM database, not to the actual file system.

Some more critical packages need to be replaced as well.

[root@centos7 Packages]# rpm -e centos-logos plymouth plymouth-scripts plymouth-core-libs grub2 grub2-tools dhcp-common dhclient dhcp-libs curl libcurl --nodeps
[root@centos7 Packages]# rpm -i curl-7.29.0-25.el7.x86_64.rpm libcurl-7.29.0-25.el7.x86_64.rpm
[root@centos7 Packages]#  yum -y install plymouth plymouth-scripts plymouth-core-libs grub2 grub2-tools dhcp-common dhclient dhcp-libs
[root@centos7 ~]# yum remove basesystem
[root@centos7 ~]# yum -y install basesystem

Dirty hardcore hack: please be careful and use the --justdb parameter.

[root@centos7 Packages]# rpm -e filesystem --nodeps --justdb
[root@centos7 Packages]# cp filesystem-3.2-20.el7.x86_64.rpm /root/
[root@centos7 Packages]# cd
[root@centos7 ~]# umount /mnt
[root@centos7 ~]# rpm -ihv filesystem-3.2-20.el7.x86_64.rpm 


Now update your system, reboot, and check if everything is working as expected. There may be more cleanup work to do.

[root@centos7 ~]# umount /mnt
[root@centos7 ~]# yum -y update && reboot


Check if there are still RPMs of vendor "CentOS" installed:

[root@centos7 ~]# rpm -qa --queryformat "%{NAME} %{VENDOR}\n" | grep -i centos | cut -d' ' -f1

This should return nothing; almost everything is now RHEL 7. The only traces left are the previously installed kernels. They will get deleted over time as new kernels are installed during updates.

In my case I just used a CentOS 7 minimal installation. The CentOS distribution comes with a total of 231 CentOS-specific packages which need to be manually replaced if installed. If you plan to go down this road, please clone the system first and test the procedure on the clone before migrating the actual system.

Support by Red Hat

Will the converted machine be supported after this procedure? Well, officially it is not, but if there are no traces of CentOS left on the machine…

Better install RHEL in the first place 🙂

Building a virtual CEPH storage cluster

This post will guide you through the procedure of building a testbed on RHEL 7 for a complete CEPH cluster. At the end you will have an admin server, one monitoring node, and three storage nodes. CEPH is an object and block storage system mostly used for virtual machine images and bulk BLOBs such as video and other media. It is not intended to be used as file storage (yet).

Machine setup
I've set up five virtual machines: one admin server, one monitoring server, and three OSD servers.


Each of them has a 10GB disk for the OS; the OSD servers have an additional 3×10GB disks for the storage, 90GB in total. Each virtual machine got 1GB RAM assigned, which is barely good enough for some first tests.

Configure your network
It is recommended to have two separate networks: one public and one for cluster interconnect (heartbeat, replication, etc.). However, for this testbed only one network is used.

While it is recommended practice to configure your servers with the fully qualified host name (FQHN), you must also configure the short hostname for CEPH.

Check if this is working as needed:

[root@ceph-admin ~]# hostname
[root@ceph-admin ~]# hostname -s
[root@ceph-admin ~]# 

To be able to resolve the short hostname, edit your /etc/resolv.conf and enter a domain search path:

[root@ceph-admin ~]# cat /etc/resolv.conf 
[root@ceph-admin ~]# 
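As an illustration, a minimal /etc/resolv.conf (the search domain and nameserver address are placeholders for your own values):

```
search example.com
nameserver 192.168.1.1
```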

Note: my network is fully IPv6-enabled and I first tried to set CEPH up with IPv6 only. I was unable to get it working properly with IPv6! Disable IPv6 before you start. Disclaimer: maybe I made some mistakes.

You also need to keep time in sync. Using NTP or chrony is best practice anyway.

Register and subscribe the machines and attach the repositories needed

This procedure needs to be repeated on every node, including the admin server and the monitoring node(s).

[root@ceph-admin ~]# subscription-manager register
[root@ceph-admin ~]# subscription-manager list --available > pools

Search the pools file for the Ceph subscription and attach the pool in question.

[root@ceph-admin ~]# subscription-manager attach --pool=<the-pool-id>

Disable all repositories and enable the needed ones

[root@ceph-admin ~]# subscription-manager repos --disable="*"
[root@ceph-admin ~]# subscription-manager repos --enable=rhel-7-server-rpms \
--enable=rhel-7-server-rhceph-1.2-calamari-rpms \
--enable=rhel-7-server-rhceph-1.2-installer-rpms \
--enable=rhel-7-server-rhceph-1.2-mon-rpms

Set up a CEPH user
Of course, you should set a secure password instead of this example 😉

[root@ceph-admin ~]# useradd -d /home/ceph -m -p $(openssl passwd -1 <super-secret-password>) ceph

Creating the sudoers rule for the ceph user

[root@ceph-admin ~]# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
[root@ceph-admin ~]# chmod 0440 /etc/sudoers.d/ceph

Set up passwordless SSH logins. First create an SSH key for root. Do not set a passphrase!

[root@ceph-admin ~]# ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa

And add the key to ~/.ssh/authorized_keys of the ceph user on the other nodes.

[root@ceph-admin ~]# ssh-copy-id ceph@ceph-mon01
[root@ceph-admin ~]# ssh-copy-id ceph@ceph-osd01
[root@ceph-admin ~]# ssh-copy-id ceph@ceph-osd02
[root@ceph-admin ~]# ssh-copy-id ceph@ceph-osd03

Configure your SSH client.

To make your life easier (not having to provide --username ceph when you run ceph-deploy), set up the SSH client config file. This can be done for the user root in ~/.ssh/config or in /etc/ssh/ssh_config.

Host ceph-mon01
     Hostname ceph-mon01
     User ceph

Host ceph-osd01
     Hostname ceph-osd01
     User ceph

Host ceph-osd02
     Hostname ceph-osd02
     User ceph

Host ceph-osd03
     Hostname ceph-osd03
     User ceph

Set up the admin server

Download the ISO image, copy it to your admin server, and mount it loop-back.

[root@ceph-admin ~]# mount rhceph-1.2.3-rhel-7-x86_64.iso /mnt -o loop

Copy the required product certificates to /etc/pki/product:

[root@ceph-admin ~]# cp /mnt/RHCeph-Calamari-1.2-x86_64-c1e8ca3b6c57-285.pem /etc/pki/product/285.pem
[root@ceph-admin ~]# cp /mnt/RHCeph-Installer-1.2-x86_64-8ad6befe003d-281.pem /etc/pki/product/281.pem
[root@ceph-admin ~]# cp /mnt/RHCeph-MON-1.2-x86_64-d8afd76a547b-286.pem /etc/pki/product/286.pem
[root@ceph-admin ~]# cp /mnt/RHCeph-OSD-1.2-x86_64-25019bf09fe9-288.pem /etc/pki/product/288.pem

Install the setup files

[root@ceph-admin ~]# yum install /mnt/ice_setup-*.rpm

Set up a config directory:

[root@ceph-admin ~]# mkdir ~/ceph-config
[root@ceph-admin ~]# cd ~/ceph-config

and run the installer

[root@ceph-admin ~]# ice_setup -d /mnt

To initialize, run calamari-ctl:

[root@ceph-admin ceph-config]# calamari-ctl initialize
[INFO] Loading configuration..
[INFO] Starting/enabling salt...
[INFO] Starting/enabling postgres...
[INFO] Initializing database...
[INFO] Initializing web interface...
[INFO] You will now be prompted for login details for the administrative user account.  This is the account you will use to log into the web interface once setup is complete.
Username (leave blank to use 'root'): 
Email address:
Password (again): 
Superuser created successfully.
[INFO] Starting/enabling services...
[INFO] Restarting services...
[INFO] Complete.
[root@ceph-admin ceph-config]#

Create the cluster

Ensure you are running the following command in the config directory! In this example it is ~/ceph-config.

[root@ceph-admin ceph-config]# ceph-deploy new ceph-mon01

Edit some settings in ceph.conf

osd_journal_size = 1000
osd_pool_default_size = 3
osd_pool_default_min_size = 2
osd_pool_default_pg_num = 128
osd_pool_default_pgp_num = 128

In production, the first value should be bigger, at least 10G. The number of placement groups depends on the number of your cluster members, the OSD servers. For small clusters of up to 5 nodes, 128 PGs are fine.
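The commonly cited rule of thumb from the Ceph documentation is roughly (number of OSDs × 100) / replica count, rounded up to the next power of two; for very small test clusters a lower value such as the 128 used above is also fine. A sketch of that heuristic (the rounding-up choice is an assumption; some people round to the nearest power of two instead):

```python
# Rule-of-thumb placement-group count: (OSDs * 100) / replicas,
# rounded up to the next power of two.
def pg_count(num_osds: int, replicas: int) -> int:
    target = num_osds * 100 / replicas
    power = 1
    while power < target:
        power *= 2
    return power

# Nine OSDs (3 servers x 3 disks) with 3 replicas:
print(pg_count(9, 3))  # 512
```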

Install the CEPH software on the nodes.

[root@ceph-admin ceph-config]# ceph-deploy install ceph-admin ceph-mon01 ceph-osd01 ceph-osd02 ceph-osd03

Add the initial monitor server:

[root@ceph-admin ceph-config]# ceph-deploy mon create-initial

Connect all nodes to Calamari:

[root@ceph-admin ceph-config]# ceph-deploy calamari connect ceph-mon01 ceph-osd01 ceph-osd02 ceph-osd03 ceph-admin

Turn your admin server into a Ceph admin node:

[root@ceph-admin ceph-config]# yum -y install ceph ceph-common
[root@ceph-admin ceph-config]# ceph-deploy admin ceph-mon01 ceph-osd01 ceph-osd02 ceph-osd03 ceph-admin

Purge and add your data disks:

[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd01:vdb
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd01:vdc
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd01:vdd
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd02:vdb
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd02:vdc
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd02:vdd
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd03:vdb
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd03:vdc
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd03:vdd

[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd01:vdb
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd01:vdc
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd01:vdd
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd02:vdb
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd02:vdc
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd02:vdd
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd03:vdb
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd03:vdc
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd03:vdd

You now can check the health of your cluster:

[root@ceph-admin ceph-config]# ceph health
[root@ceph-admin ceph-config]# 

Or with some more information:

[root@ceph-admin ceph-config]# ceph status
    cluster 117bf1bc-04fd-4ae1-8360-8982dd38d6f2
     health HEALTH_OK
     monmap e1: 1 mons at {ceph-mon01=}, election epoch 2, quorum 0 ceph-mon01
     osdmap e42: 9 osds: 9 up, 9 in
      pgmap v73: 192 pgs, 3 pools, 0 bytes data, 0 objects
            318 MB used, 82742 MB / 83060 MB avail
                 192 active+clean
[root@ceph-admin ceph-config]# 

What's next?
Storage is worthless if not used. A follow-up post will guide you through using CEPH as storage for libvirt.

Further reading