This post will guide you through the procedure to build a testbed for a complete CEPH cluster on RHEL7. At the end you will have an admin server, one monitoring node and three storage nodes. CEPH is an object and block storage system mostly used for virtual machine images and bulk BLOBs such as video and other media. It is not (yet) intended to be used as file storage.
Machine set up
I’ve set up five virtual machines: one admin server, one monitoring server and three OSD servers.
- ceph-admin.example.com
- ceph-mon01.example.com
- ceph-osd01.example.com
- ceph-osd02.example.com
- ceph-osd03.example.com
Each of them has a 10GB disk for the OS; the OSD servers have three additional 10GB disks each for the storage, 90GB in total. Each virtual machine got 1GB RAM assigned, which is barely enough for some first tests.
Configure your network
It is recommended to have two separate networks, one public and one for the cluster interconnect (heartbeat, replication etc.). However, for this testbed only one network is used.
While it is recommended practice to configure your servers with the fully qualified domain name (FQDN), you must also configure the short hostname for CEPH.
Check if this is working as needed:
[root@ceph-admin ~]# hostname
ceph-admin.example.com
[root@ceph-admin ~]# hostname -s
ceph-admin
[root@ceph-admin ~]#
To be able to resolve the short hostnames, edit your /etc/resolv.conf and enter a domain search path:
[root@ceph-admin ~]# cat /etc/resolv.conf
search example.com
nameserver 192.168.100.148
[root@ceph-admin ~]#
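If you prefer not to rely on DNS for the short names, a static /etc/hosts file on every node works as well. The sketch below uses placeholder addresses in the testbed's 192.168.100.0/24 network; only the monitor's address (192.168.100.150) appears in the cluster output later in this post, the others are hypothetical.
192.168.100.149   ceph-admin.example.com   ceph-admin
192.168.100.150   ceph-mon01.example.com   ceph-mon01
192.168.100.151   ceph-osd01.example.com   ceph-osd01
192.168.100.152   ceph-osd02.example.com   ceph-osd02
192.168.100.153   ceph-osd03.example.com   ceph-osd03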
Note: My network is fully IPv6 enabled and I first tried to set up CEPH with IPv6 only. I was unable to get it working properly with IPv6! Disable IPv6 before you start. Disclaimer: maybe I made some mistakes.
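One way to disable IPv6 on RHEL7 is via sysctl; treat this as a generic sketch, not the exact steps I used:
[root@ceph-admin ~]# cat <<EOF > /etc/sysctl.d/90-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOF
[root@ceph-admin ~]# sysctl --system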
You also need to keep time in sync. Using NTP or chrony is best practice anyway.
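On RHEL7, chrony is the default time service; a minimal setup looks roughly like this (repeat on every node):
[root@ceph-admin ~]# yum -y install chrony
[root@ceph-admin ~]# systemctl enable chronyd
[root@ceph-admin ~]# systemctl start chronyd
[root@ceph-admin ~]# chronyc sources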
Register and subscribe the machines and attach the repositories needed
This procedure needs to be repeated on every node, including the admin server and the monitoring node(s).
[root@ceph-admin ~]# subscription-manager register
[root@ceph-admin ~]# subscription-manager list --available > pools
Search the pools file for the Ceph subscription and attach the pool in question.
[root@ceph-admin ~]# subscription-manager attach --pool=<the-pool-id>
Disable all repositories and enable the needed ones
[root@ceph-admin ~]# subscription-manager repos --disable="*"
[root@ceph-admin ~]# subscription-manager repos --enable=rhel-7-server-rpms \
  --enable=rhel-7-server-rhceph-1.2-calamari-rpms \
  --enable=rhel-7-server-rhceph-1.2-installer-rpms \
  --enable=rhel-7-server-rhceph-1.2-mon-rpms \
  --enable=rhel-7-server-rhceph-1.2-osd-rpms
Set up a CEPH user
Of course, you should set a secure password instead of this example 😉
[root@ceph-admin ~]# useradd -d /home/ceph -m -p $(openssl passwd -1 <super-secret-password>) ceph
Create the sudoers rule for the ceph user:
[root@ceph-admin ~]# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
[root@ceph-admin ~]# chmod 0440 /etc/sudoers.d/ceph
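The ceph user and the sudoers rule are needed on every node. If you can still log in as root on the other machines with a password, a small loop like the following sketch saves some typing; the password placeholder is the same as above and the exact commands are just one way of doing it:
for node in ceph-mon01 ceph-osd01 ceph-osd02 ceph-osd03; do
  ssh root@${node} "useradd -d /home/ceph -m -p '$(openssl passwd -1 <super-secret-password>)' ceph"
  ssh root@${node} "echo 'ceph ALL = (root) NOPASSWD:ALL' > /etc/sudoers.d/ceph"
  ssh root@${node} "chmod 0440 /etc/sudoers.d/ceph"
done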
Set up passwordless SSH logins. First create an SSH key for root. Do not set a passphrase!
[root@ceph-admin ~]# ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
And add the key to ~/.ssh/authorized_keys of the ceph user on the other nodes.
[root@ceph-admin ~]# ssh-copy-id ceph@ceph-mon01
[root@ceph-admin ~]# ssh-copy-id ceph@ceph-osd01
[root@ceph-admin ~]# ssh-copy-id ceph@ceph-osd02
[root@ceph-admin ~]# ssh-copy-id ceph@ceph-osd03
Configure your SSH client.
To make your life easier (not having to provide --username ceph when you run ceph-deploy), set up the SSH client config file. This can be done for the user root in ~/.ssh/config or in /etc/ssh/ssh_config.
Host ceph-mon01
    Hostname ceph-mon01
    User ceph
Host ceph-osd01
    Hostname ceph-osd01
    User ceph
Host ceph-osd02
    Hostname ceph-osd02
    User ceph
Host ceph-osd03
    Hostname ceph-osd03
    User ceph
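A quick way to verify both the keys and the client configuration is to check that a plain ssh to each short hostname drops you into the ceph account without a password prompt:
[root@ceph-admin ~]# for node in ceph-mon01 ceph-osd01 ceph-osd02 ceph-osd03; do ssh ${node} whoami; done
Each of the four lines printed should read ceph; a password prompt at this point means either the key copy or the Host entry for that node is missing.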
Set up the admin server
Go to https://access.redhat.com and download the ISO image. Copy the image to your admin server and mount it loop-back.
[root@ceph-admin ~]# mount rhceph-1.2.3-rhel-7-x86_64.iso /mnt -o loop
Copy the required product certificates to /etc/pki/product:
[root@ceph-admin ~]# cp /mnt/RHCeph-Calamari-1.2-x86_64-c1e8ca3b6c57-285.pem /etc/pki/product/285.pem
[root@ceph-admin ~]# cp /mnt/RHCeph-Installer-1.2-x86_64-8ad6befe003d-281.pem /etc/pki/product/281.pem
[root@ceph-admin ~]# cp /mnt/RHCeph-MON-1.2-x86_64-d8afd76a547b-286.pem /etc/pki/product/286.pem
[root@ceph-admin ~]# cp /mnt/RHCeph-OSD-1.2-x86_64-25019bf09fe9-288.pem /etc/pki/product/288.pem
Install the setup files
[root@ceph-admin ~]# yum install /mnt/ice_setup-*.rpm
Set up a config directory:
[root@ceph-admin ~]# mkdir ~/ceph-config
[root@ceph-admin ~]# cd ~/ceph-config
and run the installer
[root@ceph-admin ~]# ice_setup -d /mnt
To initialize, run calamari-ctl:
[root@ceph-admin ceph-config]# calamari-ctl initialize
[INFO] Loading configuration..
[INFO] Starting/enabling salt...
[INFO] Starting/enabling postgres...
[INFO] Initializing database...
[INFO] Initializing web interface...
[INFO] You will now be prompted for login details for the administrative user account. This is the account you will use to log into the web interface once setup is complete.
Username (leave blank to use 'root'):
Email address: luc@example.com
Password:
Password (again):
Superuser created successfully.
[INFO] Starting/enabling services...
[INFO] Restarting services...
[INFO] Complete.
[root@ceph-admin ceph-config]#
Create the cluster
Ensure you are running the following command in the config directory! In this example it is ~/ceph-config.
[root@ceph-admin ceph-config]# ceph-deploy new ceph-mon01
Edit some settings in ceph.conf
osd_journal_size = 1000
osd_pool_default_size = 3
osd_pool_default_min_size = 2
osd_pool_default_pg_num = 128
osd_pool_default_pgp_num = 128
In production, the journal size should be bigger, at least 10G. The number of placement groups depends on the number of your cluster members, the OSD servers. For small clusters of up to five, 128 PGs are fine.
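As a side note, the rule of thumb from the upstream Ceph documentation is roughly 100 placement groups per OSD divided by the replica count, rounded up to the next power of two. The snippet below is only a back-of-the-envelope check for this testbed, not a command you need to run:
# rule of thumb: total PGs ~= (number of OSDs * 100) / replica count, rounded up to a power of two
osds=9; replicas=3
echo $(( osds * 100 / replicas ))   # 300, which would round up to 512
For a tiny, RAM-constrained testbed the smaller value of 128 keeps the per-OSD overhead low; revisit the PG count if you grow the cluster.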
Install the CEPH software on the nodes.
[root@ceph-admin ceph-config]# ceph-deploy install ceph-admin ceph-mon01 ceph-osd01 ceph-osd02 ceph-osd03
Add the initial monitor server:
[root@ceph-admin ceph-config]# ceph-deploy mon create-initial
Connect all nodes to Calamari:
[root@ceph-admin ceph-config]# ceph-deploy calamari connect ceph-mon01 ceph-osd01 ceph-osd02 ceph-osd03 ceph-admin
Turn your admin server into a CEPH admin node:
[root@ceph-admin ceph-config]# yum -y install ceph ceph-common
[root@ceph-admin ceph-config]# ceph-deploy admin ceph-mon01 ceph-osd01 ceph-osd02 ceph-osd03 ceph-admin
Purge and add your data disks:
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd01:vdb
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd01:vdc
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd01:vdd
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd02:vdb
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd02:vdc
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd02:vdd
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd03:vdb
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd03:vdc
[root@ceph-admin ceph-config]# ceph-deploy disk zap ceph-osd03:vdd
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd01:vdb
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd01:vdc
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd01:vdd
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd02:vdb
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd02:vdc
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd02:vdd
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd03:vdb
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd03:vdc
[root@ceph-admin ceph-config]# ceph-deploy osd create ceph-osd03:vdd
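Since the pattern is identical for every OSD server and disk, a small loop is less error-prone than typing all 18 commands by hand; this sketch assumes the same host and disk names as above:
for host in ceph-osd01 ceph-osd02 ceph-osd03; do
  for disk in vdb vdc vdd; do
    ceph-deploy disk zap ${host}:${disk}       # wipe partition table and content
    ceph-deploy osd create ${host}:${disk}     # create and activate the OSD
  done
done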
You now can check the health of your cluster:
[root@ceph-admin ceph-config]# ceph health
HEALTH_OK
[root@ceph-admin ceph-config]#
Or with some more information:
[root@ceph-admin ceph-config]# ceph status
    cluster 117bf1bc-04fd-4ae1-8360-8982dd38d6f2
     health HEALTH_OK
     monmap e1: 1 mons at {ceph-mon01=192.168.100.150:6789/0}, election epoch 2, quorum 0 ceph-mon01
     osdmap e42: 9 osds: 9 up, 9 in
      pgmap v73: 192 pgs, 3 pools, 0 bytes data, 0 objects
            318 MB used, 82742 MB / 83060 MB avail
                 192 active+clean
[root@ceph-admin ceph-config]#
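Two more commands worth knowing for a quick look at the cluster are ceph osd tree, which shows how the nine OSDs are distributed over the three hosts, and ceph df, which shows pool usage. The exact output depends on your setup, so it is omitted here.
[root@ceph-admin ceph-config]# ceph osd tree
[root@ceph-admin ceph-config]# ceph df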
What’s next?
Storage is worthless if it is not used. A follow-up post will guide you through using CEPH as storage for libvirt.
Further reading
- Install and Configure Inktank Ceph Storage, an article on access.redhat.com
- Official Red Hat product documentation
- http://ceph.com/ the official project website