From: Josh Durgin
Date: Mon, 1 Oct 2012 18:39:54 +0000 (-0700)
Subject: doc: first draft of full OpenStack integration
X-Git-Tag: v0.54~59^2~4
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=60a5d78e577fed156da52a0fbf97f9d234d020a8;p=ceph.git

doc: first draft of full OpenStack integration

Includes glance, cinder, and nova config with cloning.

Signed-off-by: Josh Durgin
---

diff --git a/doc/rbd/rbd-openstack.rst b/doc/rbd/rbd-openstack.rst
index 7d782572db25..5732f3002075 100644
--- a/doc/rbd/rbd-openstack.rst
+++ b/doc/rbd/rbd-openstack.rst
@@ -31,47 +31,140 @@ processor. The following diagram depicts the OpenStack/Ceph technology stack.
 
 .. important:: To use RBD with OpenStack, you must have a running Ceph cluster.
 
+There are two parts of OpenStack integrated with RBD: images and
+volumes. In OpenStack, images are templates used to create new VMs,
+and are managed by Glance, the OpenStack image service. Volumes are
+block devices that can be used to run VMs. They are managed by
+nova-volume (prior to Folsom) or Cinder (Folsom and later).
+
+RBD is integrated into each of these components, so you can store
+images in RBD through Glance, and then boot from copy-on-write
+clones of those images created by Cinder or nova-volume (with
+OpenStack Folsom and Ceph 0.52 or later).
+
+The instructions below detail the setup for both Glance and Nova,
+although they do not have to be used together. You could store images
+in RBD while running VMs off of local disk, or vice versa.
+
 Create a Pool
 =============
 
-By default, RBD uses the ``data`` pool. You may use any available RBD pool.
-We recommend creating a pool for Nova. Ensure your Ceph cluster is running,
-then create a pool. ::
+By default, RBD uses the ``rbd`` pool. You may use any available pool.
+We recommend creating a pool for Nova/Cinder and a pool for Glance.
+Ensure your Ceph cluster is running, then create the pools. ::
 
-    ceph osd pool create nova
+    ceph osd pool create volumes
+    ceph osd pool create images
 
 See `Create a Pool`_ for detail on specifying the number of placement groups
-for your pool, and `Placement Groups`_ for details on the number of placement
-groups you should set for your pool.
+for your pools, and `Placement Groups`_ for details on the number of placement
+groups you should set for your pools.
+
+If you have `cephx authentication`_ enabled, create a new user
+for Nova/Cinder and Glance::
+
+    ceph auth get-or-create client.volumes mon 'allow r' osd 'allow rwx pool=volumes, allow rx pool=images'
+    ceph auth get-or-create client.images mon 'allow r' osd 'allow rwx pool=images'
 
 .. _Create a Pool: ../../cluster-ops/pools#createpool
 .. _Placement Groups: ../../cluster-ops/placement-groups
+.. _cephx authentication: ../../cluster-ops/authentication
+
+Configure OpenStack Ceph clients
+================================
+
+The hosts running glance-api, nova-compute, and
+nova-volume/cinder-volume act as Ceph clients. Each requires
+the ``ceph.conf`` file::
+
+    ssh your-openstack-server sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
+
+If you're using `cephx authentication`_, copy the key for the
+``client.volumes`` user to the hosts running nova-compute::
+
+    ceph auth get-key client.volumes | ssh your-openstack-server tee client.volumes.key
+
+Then add the key to libvirt as a secret on those hosts::
+
+    cat > secret.xml <<EOF
+    <secret ephemeral='no' private='no'>
+      <usage type='ceph'>
+        <name>client.volumes secret</name>
+      </usage>
+    </secret>
+    EOF
+    sudo virsh secret-define --file secret.xml
+
+    sudo virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.volumes.key) && rm client.volumes.key secret.xml
 
-Add the RBD Driver and the Pool Name to ``nova.conf``
-=====================================================
+Save the uuid of the secret for configuring nova-compute later.
+
+Finally, on each host running cinder-volume or nova-volume, add
+``CEPH_ARGS="--id volumes"`` to the init script or other environment
+that starts the service, so it connects as the ``client.volumes`` user.
+
+Configure OpenStack to use RBD
+==============================
+
+Configuring Glance
+------------------
+Glance can use multiple backends to store images. To use RBD by
+default, edit ``/etc/glance/glance-api.conf`` and add::
+
+    default_store=rbd
+    rbd_store_user=images
+    rbd_store_pool=images
+
+If you're using Folsom and want to enable copy-on-write cloning of
+images into volumes, also add::
+
+    show_image_direct_url=True
+
+Note that this exposes the backend location via Glance's API, so the
+endpoint with this option enabled should not be publicly accessible.
+
+Configuring Cinder/nova-volume
+------------------------------
 
 OpenStack requires a driver to interact with RADOS block devices. You must also
 specify the pool name for the block device. On your OpenStack host, navigate to
-the ``/etc/nova`` directory. Open the ``nova.conf`` file in a text editor using
+the ``/etc/cinder`` directory. Open the ``cinder.conf`` file in a text editor using
 sudo privileges and add the following lines to the file::
 
-    volume_driver=nova.volume.driver.RBDDriver
-    rbd_pool=nova
+    volume_driver=cinder.volume.driver.RBDDriver
+    rbd_pool=volumes
 
+If you're using nova-volume rather than Cinder, make these changes in
+``/etc/nova/nova.conf`` instead, replacing ``cinder`` with ``nova`` in
+the driver name.
+
+If you're using `cephx authentication`_, also configure the user and
+uuid of the secret you added to libvirt earlier::
+
+    rbd_user=volumes
+    rbd_secret_uuid={uuid of secret}
+
 Restart OpenStack
 =================
@@ -88,3 +181,20 @@ If you have OpenStack configured as a service, you can also execute::
 
 Once OpenStack is up and running, you should be able to create a volume
 with OpenStack on a Ceph RADOS block device.
+
+Booting from RBD
+================
+
+If you're using OpenStack Folsom or later, you can create a volume
+from an image using the cinder command line tool::
+
+    cinder create --image-id {id of image} --display-name {name of volume} {size of volume}
+
+Before Ceph 0.52 this will be a full copy of the data, but in 0.52 and
+later, when Glance and Cinder are both using RBD, this is a
+copy-on-write clone, so volume creation is very fast.
+
+In the OpenStack dashboard you can then boot from that volume by
+launching a new instance, choosing the image the volume was created
+from, and selecting 'boot from volume' with the volume you created.
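
As a quick check (a sketch only, assuming the ``volumes`` pool, the
``client.volumes`` cephx user, and the ``volume-{id}`` naming used by the RBD
volume driver in the examples above; adjust the names for your deployment),
you can confirm that a volume created from an image is a copy-on-write clone
rather than a full copy by inspecting it with the ``rbd`` tool::

    # list the RBD images backing your volumes, as the same cephx user
    # that cinder-volume/nova-volume runs with
    rbd --id volumes -p volumes ls

    # inspect one volume; a "parent:" line pointing at the images pool
    # indicates a copy-on-write clone of a Glance image
    rbd --id volumes -p volumes info volume-{id of volume}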