From: John Wilkins
Date: Thu, 10 Jul 2014 00:18:03 +0000 (-0700)
Subject: doc: Clean up formatting, usage and removed duplicate section.
X-Git-Tag: v0.84~129
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=bd6ba100428c1a67ccd1bce851e7778cfe21ebb1;p=ceph.git

doc: Clean up formatting, usage and removed duplicate section.

Signed-off-by: John Wilkins
---

diff --git a/doc/rbd/rbd-openstack.rst b/doc/rbd/rbd-openstack.rst
index 9cc004bbd207..bb573658402b 100644
--- a/doc/rbd/rbd-openstack.rst
+++ b/doc/rbd/rbd-openstack.rst
@@ -39,26 +39,23 @@ technology stack.
 
 Three parts of OpenStack integrate with Ceph's block devices:
 
-- **Images**: OpenStack Glance manages images for VMs. Images
-  are immutable. OpenStack treats images as binary blobs and
-  downloads them accordingly.
-
-- **Volumes**: Volumes are block devices. OpenStack uses volumes
-  to boot VMs, or to attach volumes to running VMs. OpenStack manages
-  volumes using Cinder services.
-
-- **Guest Disks**: Guest disks are guest operating system disks.
-  By default, when you boot a virtual machine,
-  its disk appears as a file on the filesystem of the hypervisor
-  (usually under ``/var/lib/nova/instances/<uuid>/``). Prior OpenStack
-  Havana, the only way to boot a VM in Ceph was to use the boot from volume
-  functionality from Cinder. However, now it is possible to
-  directly boot every virtual machine inside Ceph without using Cinder.
-  This is really handy because it allows us to easily perform
-  maintenance operation with the live-migration process.
-  On the other hand, if your hypervisor dies it is also really convenient
-  to trigger ``nova evacuate`` and almost seamlessly run the virtual machine
-  somewhere else.
+- **Images**: OpenStack Glance manages images for VMs. Images are immutable.
+  OpenStack treats images as binary blobs and downloads them accordingly.
+
+- **Volumes**: Volumes are block devices. OpenStack uses volumes to boot VMs,
+  or to attach volumes to running VMs. OpenStack manages volumes using
+  Cinder services.
+
+- **Guest Disks**: Guest disks are guest operating system disks. By default,
+  when you boot a virtual machine, its disk appears as a file on the filesystem
+  of the hypervisor (usually under ``/var/lib/nova/instances/<uuid>/``). Prior
+  to OpenStack Havana, the only way to boot a VM in Ceph was to use the
+  boot-from-volume functionality of Cinder. However, now it is possible to boot
+  every virtual machine inside Ceph directly without using Cinder, which is
+  advantageous because it allows you to perform maintenance operations easily
+  with the live-migration process. Additionally, if your hypervisor dies it is
+  also convenient to trigger ``nova evacuate`` and run the virtual machine
+  elsewhere almost seamlessly.
 
 You can use OpenStack Glance to store images in a Ceph Block Device, and you
 can use Cinder to boot a VM using a copy-on-write clone of an image.
@@ -67,8 +64,9 @@ The instructions below detail the setup for Glance, Cinder
 and Nova, although they do not have to be used together. You may store images
 in Ceph block devices while running VMs using a local disk, or vice versa.
 
-.. important:: Ceph doesn’t support QCOW2 for hosting virtual machine disk. Thus if you want
-   to boot virtual machines in Ceph (ephemeral backend or boot from volume), Glance image format must be RAW.
+.. important:: Ceph doesn’t support QCOW2 for hosting a virtual machine disk.
+   Thus if you want to boot virtual machines in Ceph (ephemeral backend or boot
+   from volume), the Glance image format must be ``RAW``.
 
 .. tip:: This document describes using Ceph Block Devices with OpenStack
    Havana. For earlier versions of OpenStack see
@@ -99,11 +97,12 @@ groups you should set for your pools.
 
 Configure OpenStack Ceph Clients
 ================================
 
-The nodes running ``glance-api``, ``cinder-volume``, ``nova-compute`` and ``cinder-backup`` act as Ceph clients. Each
-requires the ``ceph.conf`` file::
+The nodes running ``glance-api``, ``cinder-volume``, ``nova-compute`` and
+``cinder-backup`` act as Ceph clients. Each requires the ``ceph.conf`` file::
 
   ssh {your-openstack-server} sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
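The final snippet in the patch copies the cluster's ``ceph.conf`` to each
OpenStack client node. For context, a minimal ``ceph.conf`` sketch is shown
below; it is not part of the patch, and every value (fsid, hostname, monitor
address) is a placeholder assumption to be replaced with your cluster's own
settings:

```ini
[global]
# Placeholder values only -- substitute your cluster's real fsid,
# monitor hostnames, and monitor addresses.
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon_initial_members = node1
mon_host = 192.168.0.1

# cephx authentication, as the Glance/Cinder/Nova client setup assumes.
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
```

After copying this file to each node running ``glance-api``, ``cinder-volume``,
``nova-compute``, or ``cinder-backup``, those services can locate and
authenticate to the Ceph cluster.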