.. important:: To use RBD with OpenStack, you must have a running Ceph cluster.

There are two parts of OpenStack that integrate with RBD: images and
volumes. In OpenStack, images are templates for VM images, and are
managed by Glance, the OpenStack image service. Volumes are block
devices that can be used to run VMs. They are managed by nova-volume
(prior to Folsom) or Cinder (Folsom and later).

RBD is integrated into each of these components, so you can store
images in RBD through Glance, and then boot from copy-on-write
clones of them created by Cinder or nova-volume (since OpenStack
Folsom and Ceph 0.52).

The instructions below detail the setup for both Glance and Nova,
although they do not have to be used together. You could store images
in RBD while running VMs off of local disk, or vice versa.

Create Pools
============

By default, RBD uses the ``rbd`` pool. You may use any available pool.
We recommend creating a pool for Nova/Cinder and a pool for Glance.
Ensure your Ceph cluster is running, then create the pools. ::

    ceph osd pool create volumes
    ceph osd pool create images

See `Create a Pool`_ for detail on specifying the number of placement groups
for your pools, and `Placement Groups`_ for details on the number of placement
groups you should set for your pools.
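
For example, you could pass the number of placement groups explicitly
when creating each pool (128 here is only an illustrative value; see
`Placement Groups`_ for how to choose one)::

    ceph osd pool create volumes 128
    ceph osd pool create images 128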

If you have `cephx authentication`_ enabled, create new users for
Nova/Cinder and for Glance::

    ceph auth get-or-create client.volumes mon 'allow r' osd 'allow rwx pool=volumes, allow rx pool=images'
    ceph auth get-or-create client.images mon 'allow r' osd 'allow rwx pool=images'
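
If you want to double-check the users and the capabilities they were
granted, you can list them (the exact output format varies between
Ceph versions)::

    ceph auth list
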
.. _Create a Pool: ../../cluster-ops/pools#createpool
.. _Placement Groups: ../../cluster-ops/placement-groups
.. _cephx authentication: ../../cluster-ops/authentication

Configure OpenStack Ceph Clients
================================

The hosts running glance-api, nova-compute, and
nova-volume/cinder-volume act as Ceph clients. Each requires
the ``ceph.conf`` file::

    ssh your-openstack-server sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf

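For reference, a client's ``ceph.conf`` might look something like the
following minimal sketch (the hostname, address, and auth setting are
assumptions; use the values from your own cluster's configuration)::

    [global]
        auth supported = cephx

    [mon.a]
        host = {mon-hostname}
        mon addr = {mon-ip}:6789
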
Install Ceph Client Packages
----------------------------

On the glance-api host, you'll need the Python bindings for librbd::

    sudo apt-get install python-ceph
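
As a quick sanity check, you can make sure the bindings import cleanly
(this assumes nothing beyond the package installed above)::

    python -c 'import rados, rbd'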

On the nova-volume or cinder-volume host, the client command line
tools are used::

    sudo apt-get install ceph-common

Set Up Ceph Client Authentication
---------------------------------

If you're using cephx authentication, add the keyrings for
``client.volumes`` and ``client.images`` to the appropriate hosts and
change their ownership::

    ceph auth get-or-create client.images | ssh your-glance-api-server sudo tee /etc/ceph/ceph.client.images.keyring
    ssh your-glance-api-server sudo chown glance:glance /etc/ceph/ceph.client.images.keyring
    ceph auth get-or-create client.volumes | ssh your-volume-server sudo tee /etc/ceph/ceph.client.volumes.keyring
    ssh your-volume-server sudo chown cinder:cinder /etc/ceph/ceph.client.volumes.keyring
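
To verify that a host can reach the cluster as its new user, you can
run a simple read-only command as that user (assuming the keyring is
in the default location used above; the pool will simply be empty at
this point)::

    rbd --id volumes -p volumes ls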

Hosts running nova-compute do not need the keyring. Instead, they
store the secret key in libvirt. Create a temporary copy of the secret
key on the hosts running nova-compute::

    ceph auth get-key client.volumes | ssh your-compute-host tee client.volumes.key

Then, on the compute hosts, add the secret key to libvirt and remove
the temporary copy of the key::

    cat > secret.xml <<EOF
    <secret ephemeral='no' private='no'>
      <usage type='ceph'>
        <name>client.volumes secret</name>
      </usage>
    </secret>
    EOF
    sudo virsh secret-define --file secret.xml
    <uuid of secret is output here>
    sudo virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.volumes.key) && rm client.volumes.key secret.xml
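
If you need to recover the uuid later, you can list the secrets that
libvirt knows about::

    sudo virsh secret-list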

Save the uuid of the secret for configuring nova-compute later.

Finally, on each host running cinder-volume or nova-volume, add
``CEPH_ARGS="--id volumes"`` to the init script or other place that
starts it.
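
For example, on an Ubuntu system using upstart, one way to do this
(the file path assumes the stock packaging; adjust for nova-volume if
you use it) is to add an ``env`` line near the top of
``/etc/init/cinder-volume.conf``::

    env CEPH_ARGS="--id volumes"
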
Configure OpenStack to Use RBD
==============================

Configuring Glance
------------------

Glance can use multiple backends to store images. To use RBD by
default, edit ``/etc/glance/glance-api.conf`` and add::

    default_store=rbd
    rbd_store_user=images
    rbd_store_pool=images
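
If your cluster configuration is not at the default path, your Glance
version may also let you point the RBD store at the right file (this
option name is an assumption; check your Glance release's RBD store
settings)::

    rbd_store_ceph_conf=/etc/ceph/ceph.conf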

If you're using Folsom and want to enable copy-on-write cloning of
images into volumes, also add::

    show_image_direct_url=True

Note that this exposes the backend location via Glance's API, so the
endpoint with this option enabled should not be publicly accessible.

Configuring Cinder/nova-volume
------------------------------

OpenStack requires a driver to interact with RADOS block devices. You
must also specify the pool name for the block devices. On the host
running cinder-volume, edit ``/etc/cinder/cinder.conf`` with sudo
privileges and add the following lines::

    volume_driver=cinder.volume.driver.RBDDriver
    rbd_pool=volumes

If you're not using Cinder, edit ``/etc/nova/nova.conf`` instead and
use the corresponding Nova driver, as shown below.
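
That is, for nova-volume the equivalent settings are::

    volume_driver=nova.volume.driver.RBDDriver
    rbd_pool=volumes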

If you're using `cephx authentication`_, also configure the user and
uuid of the secret you added to libvirt earlier::

    rbd_user=volumes
    rbd_secret_uuid={uuid of secret}

Restart OpenStack
=================

Restart the OpenStack services you reconfigured above so the changes
take effect. Once OpenStack is up and running, you should be able to
create a volume with OpenStack on a Ceph RADOS block device.
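
On an Ubuntu installation with the stock service names (an assumption;
adjust for your distribution, and for nova-volume if you use it), that
might look like::

    sudo service glance-api restart
    sudo service nova-compute restart
    sudo service cinder-volume restart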

Booting from RBD
================

If you're using OpenStack Folsom or later, you can create a volume
from an image using the cinder command line tool::

    cinder create --image-id {id of image} --display-name {name of volume} {size of volume}

Before Ceph 0.52, this is a full copy of the data. With Ceph 0.52 and
later, when Glance and Cinder are both using RBD, the volume is a
copy-on-write clone of the image, so volume creation is very fast.
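
One way to confirm that a clone was created is to inspect the new
image with ``rbd info``; a clone shows a ``parent`` field. This
assumes the user and pool names above, and that Cinder names the
backing image ``volume-`` followed by the volume's uuid (typical, but
version-dependent)::

    rbd --id volumes -p volumes info volume-{uuid of volume}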

In the OpenStack dashboard, you can then boot from that volume by
launching a new instance, choosing the image that you created the
volume from, and selecting 'boot from volume' and the volume you
created.
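
If you prefer the command line, you can do the same thing with ``nova
boot``. The ``--block_device_mapping`` syntax shown is the Folsom-era
form (``device=id:type:size:delete_on_terminate``), and the flavor and
names are placeholders; adapt them to your deployment::

    nova boot --flavor {flavor} --image {id of image} --block_device_mapping vda={id of volume}:::0 {name of instance}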