The oVirt project introduced support for Ceph storage via OpenStack Cinder in version 3.6.1.
A few years later that integration was deprecated following the introduction of cinderlib support in 4.3.0.
What’s the status of the Ceph support? Can we use it as storage for the whole datacenter?
The answer is yes.
Self-Hosted Engine on Ceph
There is no direct support for running a Self-Hosted Engine on managed block storage, but that is not a big problem.
To use Ceph for a Self-Hosted Engine you need to create an image and expose it as an iSCSI target from the Ceph cluster.
Prerequisites:
- You need a Ceph instance up and running.
- In my environment I deployed a 3-node Ceph Pacific cluster on CentOS Stream 8, following
https://docs.ceph.com/en/latest/install/manual-deployment/.
Please be sure your hardware meets at least the minimum requirements (see
https://docs.ceph.com/en/latest/start/hardware-recommendations/).
I used 3 hosts with 8 cores and 16 GB of RAM each, but this is just a test environment.
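Once the cluster is deployed, it is worth verifying that it is healthy before moving on; the status should report HEALTH_OK (or at least nothing blocking):
# ceph -s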
Tips:
- This video helped a lot in getting the system up and running: https://www.youtube.com/watch?v=mgC488kLFuk
- I found the `terminator` terminal very useful for sending commands to all the ceph nodes at once.
You can find it in the EPEL 8 repository.
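If you want to try it, on CentOS Stream 8 the installation is roughly (assuming EPEL is not enabled yet):
# dnf install epel-release
# dnf install terminator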
A few steps that are not covered in the above documentation:
- You need to open ceph firewall ports:
# firewall-cmd --zone=public --add-service=ceph-mon
# firewall-cmd --zone=public --add-service=ceph-mon --permanent
# firewall-cmd --zone=public --add-service=ceph
# firewall-cmd --zone=public --add-service=ceph --permanent
- At the end of the installation, if ceph reports:
health: HEALTH_WARN mons are allowing insecure global_id reclaim
the following command will fix it:
# ceph config set mon auth_allow_insecure_global_id_reclaim false
- I also installed the dashboard:
# dnf install ceph-mgr-dashboard
# ceph mgr module enable dashboard
# firewall-cmd --zone=public --add-port=8080/tcp
# firewall-cmd --zone=public --add-port=8443/tcp
# firewall-cmd --zone=public --add-port=8443/tcp --permanent
# firewall-cmd --zone=public --add-port=8080/tcp --permanent
Procedure:
Enabling iSCSI support in Ceph:
- Enable the ceph-iscsi repository by saving https://download.ceph.com/ceph-iscsi/3/rpm/el8/ceph-iscsi.repo in
/etc/yum.repos.d
- Add to the repo file you just downloaded:
[tcmu-runner]
name=tcmu-runner packages
baseurl=https://4.chacra.ceph.com/r/tcmu-runner/master/06d64ab78c2898c032fe5be93f9ae6f64b199d5b/centos/8/flavors/default/x86_64/
enabled=1
gpgcheck=0
- Open the ports:
# firewall-cmd --zone=public --add-port=5000/tcp --permanent
# firewall-cmd --zone=public --add-port=5000/tcp
# firewall-cmd --zone=public --add-service=iscsi-target --permanent
# firewall-cmd --zone=public --add-service=iscsi-target
- Configure /etc/ceph/iscsi-gateway.cfg following https://github.com/ceph/ceph-iscsi/blob/master/iscsi-gateway.cfg_sample
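You also need to install the gateway packages and enable the daemons on each gateway node; this part is covered by the ceph-iscsi documentation, and on my CentOS Stream 8 nodes it boiled down to roughly (a sketch, assuming the repositories above are enabled):
# dnf install ceph-iscsi tcmu-runner
# systemctl daemon-reload
# systemctl enable --now tcmu-runner
# systemctl enable --now rbd-target-gw
# systemctl enable --now rbd-target-api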
You can follow the documentation at https://docs.ceph.com/en/latest/rbd/iscsi-overview/ to expose a target using the gwcli command.
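For reference, the gwcli session to create the target, gateways, disk and client ACLs looks roughly like this; this is a sketch based on that documentation, with the target IQN, gateway names/IPs and initiator IQNs taken from my lab (shown in the output below):
# gwcli
/> cd /iscsi-targets
/iscsi-targets> create iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
/iscsi-targets> cd iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/gateways
/iscsi-target...-igw/gateways> create ceph0.lab 10.46.8.125
/iscsi-target...-igw/gateways> create ceph1.lab 10.46.8.161
/iscsi-target...-igw/gateways> create ceph2.lab 10.46.8.176
/iscsi-target...-igw/gateways> cd /disks
/disks> create pool=iscsi-images image=he_vol size=80G
/disks> cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/hosts
/iscsi-target...csi-igw/hosts> create iqn.1994-05.com.redhat:b1c6ad206685
/iscsi-target...:b1c6ad206685> disk add iscsi-images/he_vol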
In my case the result is:
o- / ........................................................................... [...]
  o- cluster ..................................................................... [Clusters: 1]
  | o- ceph ...................................................................... [HEALTH_OK]
  |   o- pools ................................................................... [Pools: 4]
  |   | o- device_health_metrics ............. [(x3), Commit: 0.00Y/90277768K (0%), Used: 0.00Y]
  |   | o- iscsi-images .................. [(x3), Commit: 80G/90277768K (92%), Used: 25884174156b]
  |   | o- managed ............................ [(x3), Commit: 0.00Y/90277768K (0%), Used: 45459b]
  |   | o- my_ovirt_pool ...................... [(x3), Commit: 0.00Y/90277768K (0%), Used: 44079b]
  |   o- topology ........................................................ [OSDs: 3,MONs: 3]
  o- disks ............................................................... [80G, Disks: 1]
  | o- iscsi-images ............................................... [iscsi-images (80G)]
  |   o- he_vol ...................................... [iscsi-images/he_vol (Online, 80G)]
  o- iscsi-targets ..................................... [DiscoveryAuth: None, Targets: 1]
    o- iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw ............. [Auth: None, Gateways: 3]
      o- disks ......................................................... [Disks: 1]
      | o- iscsi-images/he_vol ........................... [Owner: ceph0.lab, Lun: 0]
      o- gateways ......................................... [Up: 3/3, Portals: 3]
      | o- ceph0.lab ........................................ [10.46.8.125 (UP)]
      | o- ceph1.lab ........................................ [10.46.8.161 (UP)]
      | o- ceph2.lab ........................................ [10.46.8.176 (UP)]
      o- host-groups ........................................... [Groups : 0]
      o- hosts ................................... [Auth: ACL_ENABLED, Hosts: 2]
        o- iqn.1994-05.com.redhat:b1c6ad206685 ... [LOGGED-IN, Auth: None, Disks: 1(80G)]
        | o- lun 0 ..................... [iscsi-images/he_vol(80G), Owner: ceph0.lab]
        o- iqn.1994-05.com.redhat:11c86d03c0ca ... [LOGGED-IN, Auth: None, Disks: 1(80G)]
          o- lun 0 ..................... [iscsi-images/he_vol(80G), Owner: ceph0.lab]
Here the iscsi-images pool is used for storing the he_vol disk, which is exposed as LUN 0 to the hosts that will run the Hosted Engine.
You can then deploy a Self-Hosted Engine pointing to this iSCSI target and attaching to the volume you exposed there.
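Before starting the deployment, you can double-check from the oVirt host that the target is actually reachable; a quick sanity check (10.46.8.125 is one of my gateway portals, adjust to yours):
# iscsiadm -m discovery -t sendtargets -p 10.46.8.125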
Attaching a Ceph pool as Managed Block Storage Domain
Now that you have the engine up and running, you can attach a Ceph pool to be used as a Managed Block Storage Domain.
Procedure:
- On the Ceph side, create a pool to be used by ovirt-engine and be sure its application is set to rbd.
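For example, assuming a pool called my_ovirt_pool (the name is just an illustration, taken from the gwcli output above; use whatever fits your environment):
# ceph osd pool create my_ovirt_pool
# ceph osd pool application enable my_ovirt_pool rbd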
- Follow the oVirt documentation for setting up Cinderlib to prepare the hosts and the engine with the needed packages. I used RDO Victoria and Ceph Pacific instead of RDO Ussuri and Ceph Nautilus.
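As an illustration of what that documentation boils down to with RDO Victoria (treat the package list as a sketch and double-check it against the release you use):
On the engine machine:
# dnf install centos-release-openstack-victoria
# dnf install openstack-cinder python3-cinderlib
On each host:
# dnf install centos-release-openstack-victoria
# dnf install python3-os-brick ceph-common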
- Reconfigure the engine to use cinderlib if not done before:
# engine-setup --reconfigure-optional-components
- Enable managed block domain support if using a cluster level older than 4.6:
# engine-config -s ManagedBlockDomainSupported=true
This step is not needed if you are using cluster level 4.6 since the Managed Block Domain support is enabled by default starting with this version.
- Copy the /etc/ceph directory from your Ceph node to the ovirt-engine host.
- Change the ownership of the files in /etc/ceph on the ovirt-engine host, making them readable by the engine process:
# chown ovirt /etc/ceph/*
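For the copy step above, something like this from the engine host is enough (assuming root SSH access to one of the Ceph nodes; ceph0.lab is mine):
# scp -r root@ceph0.lab:/etc/ceph /etc/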
- Within the storage domain creation dialog, configure the managed block device for Ceph by manually adding the following key/value pairs:
rbd_ceph_conf - /etc/ceph/ceph.conf
rbd_pool - <the name of the pool you created above>
rbd_user - admin
use_multipath_for_image_xfer - true
volume_driver - cinder.volume.drivers.rbd.RBDDriver
rbd_keyring_conf - /etc/ceph/ceph.client.admin.keyring
- You can now create volumes for your VMs from the Storage section of your ovirt-engine.
- When you attach the disk to your VM, you will need to select the Managed Block tab to choose the disk.
- On the Ceph side, the volume will be visible in the Block -> Images section.
If you have questions or comments, please send them to the oVirt users mailing list.