From: John Wilkins
Date: Thu, 31 Oct 2013 00:21:14 +0000 (-0700)
Subject: doc: Removed nova-volume, early Ceph references and Folsom references.
X-Git-Tag: v0.73~52^2~2
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=8193bad873fcb4732ba3023f60f7c9d805c10ef7;p=ceph.git

doc: Removed nova-volume, early Ceph references and Folsom references.

fixes: 5006

Signed-off-by: John Wilkins
---

diff --git a/doc/rbd/rbd-openstack.rst b/doc/rbd/rbd-openstack.rst
index ba9df072d16d..da0a2313bbd9 100644
--- a/doc/rbd/rbd-openstack.rst
+++ b/doc/rbd/rbd-openstack.rst
@@ -10,7 +10,7 @@
 objects across the cluster, which means that large Ceph Block Device images
 have better performance than a standalone server!

 To use Ceph Block Devices with OpenStack, you must install QEMU, ``libvirt``,
-and OpenStack first. We recommend using a separate physical host for your
+and OpenStack first. We recommend using a separate physical node for your
 OpenStack installation. OpenStack recommends a minimum of 8GB of RAM and a
 quad-core processor. The following diagram depicts the OpenStack/Ceph
 technology stack.
@@ -44,16 +44,13 @@ Two parts of OpenStack integrate with Ceph's block devices:
   downloads them accordingly.

 - **Volumes**: Volumes are block devices. OpenStack uses volumes
-  to boot VMs, or to attach volumes to running VMs. OpenStack
-  manages volumes using ``nova-volume`` prior to the Folsom
-  release. OpenStack manages volumes using Cinder services
-  beginning with the Folsom release.
+  to boot VMs, or to attach volumes to running VMs. OpenStack manages
+  volumes using Cinder services.

-Beginning with OpenStack Folsom and Ceph 0.52, you can use OpenStack Glance to
-store images in a Ceph Block Device, and you can use Cinder or ``nova-volume``
-to boot a VM using a copy-on-write clone of an image.
+You can use OpenStack Glance to store images in a Ceph Block Device, and you
+can use Cinder to boot a VM using a copy-on-write clone of an image.

-The instructions below detail the setup for Glance and Nova/Cinder, although
+The instructions below detail the setup for Glance and Cinder, although
 they do not have to be used together. You may store images in Ceph block
 devices while running VMs using a local disk, or vice versa.

@@ -63,7 +60,7 @@
Create a Pool
 =============

 By default, Ceph block devices use the ``rbd`` pool. You may use any available
-pool. We recommend creating a pool for Nova/Cinder and a pool for Glance. Ensure
+pool. We recommend creating a pool for Cinder and a pool for Glance. Ensure
 your Ceph cluster is running, then create the pools. ::

	ceph osd pool create volumes 128

@@ -80,56 +77,50 @@ groups you should set for your pools.

 Configure OpenStack Ceph Clients
 ================================

-The hosts running ``glance-api``, ``nova-compute``, and ``nova-volume`` or
-``cinder-volume`` act as Ceph clients. Each requires the ``ceph.conf`` file::
+The nodes running ``glance-api`` and ``cinder-volume`` act as Ceph clients. Each
+requires the ``ceph.conf`` file::

	ssh {your-openstack-server} sudo tee /etc/ceph/ceph.conf
secret.xml <
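The patch's pool-creation hunk hard-codes 128 placement groups per pool. As a rough sanity check on that number, a commonly cited rule of thumb (an assumption here, not part of this patch) is about (OSDs × 100) / replica count, rounded up to the next power of two. A minimal sketch with hypothetical cluster values:

```shell
#!/bin/sh
# Rule-of-thumb pg_num estimate: (osds * 100) / replicas, rounded up to a
# power of two. The osds/replicas values below are hypothetical examples;
# substitute your own cluster's numbers.
osds=3
replicas=3

raw=$(( (osds * 100) / replicas ))   # 100 for 3 OSDs with 3 replicas

pg=1
while [ "$pg" -lt "$raw" ]; do
  pg=$(( pg * 2 ))                   # round up to the next power of two
done

echo "$pg"                           # -> 128, matching the doc's example
```

The same estimate would then feed the ``ceph osd pool create volumes 128`` command shown in the hunk; for larger clusters the computed value grows, which is why the surrounding text points readers at the placement-group sizing guidance rather than treating 128 as universal.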