From: Jason Dillaman
Date: Tue, 8 Aug 2017 15:53:42 +0000 (-0400)
Subject: doc: re-ordered rbd table of contents
X-Git-Tag: v12.1.3~22^2~2
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=d1a97d633b761a91a98a414bebc52f7b69c4279c;p=ceph.git

doc: re-ordered rbd table of contents

Signed-off-by: Jason Dillaman
(cherry picked from commit 34ff1ddca1d228bb785ec04f3aef6ccfdccdc5de)
---

diff --git a/doc/index.rst b/doc/index.rst
index d070c4726183f..253e2a4f54911 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -82,7 +82,7 @@ about Ceph, see our `Architecture`_ section.
 .. _Ceph Object Store: radosgw
-.. _Ceph Block Device: rbd/rbd
+.. _Ceph Block Device: rbd
 .. _Ceph Filesystem: cephfs
 .. _Getting Started: start
 .. _Architecture: architecture
@@ -96,7 +96,7 @@ about Ceph, see our `Architecture`_ section.
    install/index
    rados/index
    cephfs/index
-   rbd/rbd
+   rbd/index
    radosgw/index
    mgr/index
    api/index
diff --git a/doc/rados/index.rst b/doc/rados/index.rst
index 9ff756c97d36b..929bb7efacbcf 100644
--- a/doc/rados/index.rst
+++ b/doc/rados/index.rst
@@ -70,7 +70,7 @@ the Ceph Storage Cluster.
-.. _Ceph Block Devices: ../rbd/rbd
+.. _Ceph Block Devices: ../rbd/
 .. _Ceph Filesystem: ../cephfs/
 .. _Ceph Object Storage: ../radosgw/
 .. _Deployment: ../rados/deployment/
diff --git a/doc/rbd/api/index.rst b/doc/rbd/api/index.rst
new file mode 100644
index 0000000000000..71f680933b7fa
--- /dev/null
+++ b/doc/rbd/api/index.rst
@@ -0,0 +1,8 @@
+========================
+ Ceph Block Device APIs
+========================
+
+.. toctree::
+   :maxdepth: 2
+
+   librbd (Python) <librbdpy>
diff --git a/doc/rbd/api/librbdpy.rst b/doc/rbd/api/librbdpy.rst
new file mode 100644
index 0000000000000..fa903312557fe
--- /dev/null
+++ b/doc/rbd/api/librbdpy.rst
@@ -0,0 +1,82 @@
+================
+ Librbd (Python)
+================
+
+.. highlight:: python
+
+The `rbd` Python module provides file-like access to RBD images.
+
+
+Example: Creating and writing to an image
+=========================================
+
+To use `rbd`, you must first connect to RADOS and open an IO
+context::
+
+    cluster = rados.Rados(conffile='my_ceph.conf')
+    cluster.connect()
+    ioctx = cluster.open_ioctx('mypool')
+
+Then you instantiate an :class:`rbd.RBD` object, which you use to create the
+image::
+
+    rbd_inst = rbd.RBD()
+    size = 4 * 1024**3  # 4 GiB
+    rbd_inst.create(ioctx, 'myimage', size)
+
+To perform I/O on the image, you instantiate an :class:`rbd.Image` object::
+
+    image = rbd.Image(ioctx, 'myimage')
+    data = 'foo' * 200
+    image.write(data, 0)
+
+This writes 'foo' to the first 600 bytes of the image. Note that data
+cannot be :class:`unicode` - `Librbd` does not know how to deal with
+characters wider than a :c:type:`char`.
+
+In the end, you will want to close the image, the IO context, and the
+connection to RADOS::
+
+    image.close()
+    ioctx.close()
+    cluster.shutdown()
+
+To be safe, each of these calls needs to be in a separate ``finally``
+block::
+
+    cluster = rados.Rados(conffile='my_ceph.conf')
+    try:
+        ioctx = cluster.open_ioctx('mypool')
+        try:
+            rbd_inst = rbd.RBD()
+            size = 4 * 1024**3  # 4 GiB
+            rbd_inst.create(ioctx, 'myimage', size)
+            image = rbd.Image(ioctx, 'myimage')
+            try:
+                data = 'foo' * 200
+                image.write(data, 0)
+            finally:
+                image.close()
+        finally:
+            ioctx.close()
+    finally:
+        cluster.shutdown()
+
+This can be cumbersome, so the :class:`Rados`, :class:`Ioctx`, and
+:class:`Image` classes can be used as context managers that close/shutdown
+automatically (see :pep:`343`). Using them as context managers, the
+above example becomes::
+
+    with rados.Rados(conffile='my_ceph.conf') as cluster:
+        with cluster.open_ioctx('mypool') as ioctx:
+            rbd_inst = rbd.RBD()
+            size = 4 * 1024**3  # 4 GiB
+            rbd_inst.create(ioctx, 'myimage', size)
+            with rbd.Image(ioctx, 'myimage') as image:
+                data = 'foo' * 200
+                image.write(data, 0)
+
+API Reference
+=============
+
+.. automodule:: rbd
+    :members: RBD, Image, SnapIterator
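The walkthrough in the new ``librbdpy.rst`` page above stops after writing data. Purely as a companion sketch (not part of this patch), reading those bytes back and removing the image uses the ``Image.read()`` and ``RBD.remove()`` calls from the same ``rbd`` module; the pool and image names are carried over from the documented example::

    import rados
    import rbd

    # Read back the 600 bytes written in the example above, then delete
    # the image.  The context managers close the image and the IO context.
    with rados.Rados(conffile='my_ceph.conf') as cluster:
        with cluster.open_ioctx('mypool') as ioctx:
            with rbd.Image(ioctx, 'myimage') as image:
                data = image.read(0, 600)       # offset, length in bytes
                print(len(data))                # -> 600
            rbd.RBD().remove(ioctx, 'myimage')  # image must be closed and unused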
diff --git a/doc/rbd/index.rst b/doc/rbd/index.rst
new file mode 100644
index 0000000000000..5d9d433ce1bba
--- /dev/null
+++ b/doc/rbd/index.rst
@@ -0,0 +1,74 @@
+===================
+ Ceph Block Device
+===================
+
+.. index:: Ceph Block Device; introduction
+
+A block is a sequence of bytes (for example, a 512-byte block of data).
+Block-based storage interfaces are the most common way to store data with
+rotating media such as hard disks, CDs, floppy disks, and even traditional
+9-track tape. The ubiquity of block device interfaces makes a virtual block
+device an ideal candidate to interact with a mass data storage system like Ceph.
+
+Ceph block devices are thin-provisioned, resizable, and store data striped over
+multiple OSDs in a Ceph cluster. Ceph block devices leverage
+:abbr:`RADOS (Reliable Autonomic Distributed Object Store)` capabilities
+such as snapshotting, replication and consistency. Ceph's
+:abbr:`RADOS (Reliable Autonomic Distributed Object Store)` Block Devices (RBD)
+interact with OSDs using kernel modules or the ``librbd`` library.
+
+.. ditaa::  +------------------------+ +------------------------+
+            |     Kernel Module      | |        librbd          |
+            +------------------------+-+------------------------+
+            |                  RADOS Protocol                   |
+            +------------------------+-+------------------------+
+            |          OSDs          | |        Monitors        |
+            +------------------------+ +------------------------+
+
+.. note:: Kernel modules can use Linux page caching. For ``librbd``-based
+   applications, Ceph supports `RBD Caching`_.
+
+Ceph's block devices deliver high performance with infinite scalability to
+`kernel modules`_, or to :abbr:`KVMs (kernel virtual machines)` such as `QEMU`_, and
+cloud-based computing systems like `OpenStack`_ and `CloudStack`_ that rely on
+libvirt and QEMU to integrate with Ceph block devices. You can use the same cluster
+to operate the `Ceph RADOS Gateway`_, the `Ceph FS filesystem`_, and Ceph block
+devices simultaneously.
+
+.. important:: To use Ceph Block Devices, you must have access to a running
+   Ceph cluster.
+
+.. toctree::
+   :maxdepth: 1
+
+   Commands <rados-rbd-cmds>
+   Kernel Modules <rbd-ko>
+   Snapshots <rbd-snapshot>
+   Mirroring <rbd-mirroring>
+   QEMU <qemu-rbd>
+   libvirt <libvirt>
+   Cache Settings <rbd-config-ref>
+   OpenStack <rbd-openstack>
+   CloudStack <rbd-cloudstack>
+   RBD Replay <rbd-replay>
+
+.. toctree::
+   :maxdepth: 2
+
+   Manpages <man/index>
+
+.. toctree::
+   :maxdepth: 2
+
+   APIs <api/index>
+
+
+
+
+.. _RBD Caching: ../rbd-config-ref/
+.. _kernel modules: ../rbd-ko/
+.. _QEMU: ../qemu-rbd/
+.. _OpenStack: ../rbd-openstack
+.. _CloudStack: ../rbd-cloudstack
+.. _Ceph RADOS Gateway: ../../radosgw/
+.. _Ceph FS filesystem: ../../cephfs/
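The new index page describes Ceph block devices as thin-provisioned and resizable; a minimal sketch of what that looks like through the Python bindings listed under the new APIs section is given below (not part of the patch). The pool name ``rbd`` and image name ``myimage`` are placeholders, and ``RBD.list()``, ``Image.resize()``, and ``Image.size()`` are the standard calls from the ``rbd`` module::

    import rados
    import rbd

    # A new image consumes almost no space until written to, and it can be
    # grown in place later - the "thin-provisioned, resizable" behaviour
    # described by the index page.
    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        with cluster.open_ioctx('rbd') as ioctx:
            print(rbd.RBD().list(ioctx))        # image names in the pool
            with rbd.Image(ioctx, 'myimage') as image:
                image.resize(8 * 1024**3)       # grow to 8 GiB
                print(image.size())             # new size in bytes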
diff --git a/doc/rbd/librbdpy.rst b/doc/rbd/librbdpy.rst
deleted file mode 100644
index fa903312557fe..0000000000000
--- a/doc/rbd/librbdpy.rst
+++ /dev/null
@@ -1,82 +0,0 @@
-================
- Librbd (Python)
-================
-
-.. highlight:: python
-
-The `rbd` Python module provides file-like access to RBD images.
-
-
-Example: Creating and writing to an image
-=========================================
-
-To use `rbd`, you must first connect to RADOS and open an IO
-context::
-
-    cluster = rados.Rados(conffile='my_ceph.conf')
-    cluster.connect()
-    ioctx = cluster.open_ioctx('mypool')
-
-Then you instantiate an :class:`rbd.RBD` object, which you use to create the
-image::
-
-    rbd_inst = rbd.RBD()
-    size = 4 * 1024**3  # 4 GiB
-    rbd_inst.create(ioctx, 'myimage', size)
-
-To perform I/O on the image, you instantiate an :class:`rbd.Image` object::
-
-    image = rbd.Image(ioctx, 'myimage')
-    data = 'foo' * 200
-    image.write(data, 0)
-
-This writes 'foo' to the first 600 bytes of the image. Note that data
-cannot be :class:`unicode` - `Librbd` does not know how to deal with
-characters wider than a :c:type:`char`.
-
-In the end, you will want to close the image, the IO context, and the
-connection to RADOS::
-
-    image.close()
-    ioctx.close()
-    cluster.shutdown()
-
-To be safe, each of these calls needs to be in a separate ``finally``
-block::
-
-    cluster = rados.Rados(conffile='my_ceph.conf')
-    try:
-        ioctx = cluster.open_ioctx('mypool')
-        try:
-            rbd_inst = rbd.RBD()
-            size = 4 * 1024**3  # 4 GiB
-            rbd_inst.create(ioctx, 'myimage', size)
-            image = rbd.Image(ioctx, 'myimage')
-            try:
-                data = 'foo' * 200
-                image.write(data, 0)
-            finally:
-                image.close()
-        finally:
-            ioctx.close()
-    finally:
-        cluster.shutdown()
-
-This can be cumbersome, so the :class:`Rados`, :class:`Ioctx`, and
-:class:`Image` classes can be used as context managers that close/shutdown
-automatically (see :pep:`343`). Using them as context managers, the
-above example becomes::
-
-    with rados.Rados(conffile='my_ceph.conf') as cluster:
-        with cluster.open_ioctx('mypool') as ioctx:
-            rbd_inst = rbd.RBD()
-            size = 4 * 1024**3  # 4 GiB
-            rbd_inst.create(ioctx, 'myimage', size)
-            with rbd.Image(ioctx, 'myimage') as image:
-                data = 'foo' * 200
-                image.write(data, 0)
-
-API Reference
-=============
-
-.. automodule:: rbd
-    :members: RBD, Image, SnapIterator
diff --git a/doc/rbd/man/index.rst b/doc/rbd/man/index.rst
new file mode 100644
index 0000000000000..33a192a7722c6
--- /dev/null
+++ b/doc/rbd/man/index.rst
@@ -0,0 +1,16 @@
+============================
+ Ceph Block Device Manpages
+============================
+
+.. toctree::
+   :maxdepth: 1
+
+   rbd <../../man/8/rbd>
+   rbd-fuse <../../man/8/rbd-fuse>
+   rbd-nbd <../../man/8/rbd-nbd>
+   rbd-ggate <../../man/8/rbd-ggate>
+   ceph-rbdnamer <../../man/8/ceph-rbdnamer>
+   rbd-replay-prep <../../man/8/rbd-replay-prep>
+   rbd-replay <../../man/8/rbd-replay>
+   rbd-replay-many <../../man/8/rbd-replay-many>
+   rbdmap <../../man/8/rbdmap>
diff --git a/doc/rbd/rbd-config-ref.rst b/doc/rbd/rbd-config-ref.rst
index 6ce2fdc2c2c26..db942f88c786b 100644
--- a/doc/rbd/rbd-config-ref.rst
+++ b/doc/rbd/rbd-config-ref.rst
@@ -98,7 +98,7 @@ section of your configuration file. The settings include:
 :Required: No
 :Default: ``true``
 
-.. _Block Device: ../../rbd/rbd/
+.. _Block Device: ../../rbd
 
 
 Read-ahead Settings
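The ``rbd-config-ref.rst`` hunk above only retargets a link on the cache-settings page, but for readers following along, here is a hedged sketch of supplying those client-side cache options per connection (rather than in ``ceph.conf``) through the Python bindings. ``rbd_cache`` and ``rbd_cache_size`` are the librbd options that page documents; the 64 MiB value, pool, and image name are illustrative only::

    import rados
    import rbd

    # librbd normally reads its cache settings from the [client] section of
    # ceph.conf, but they can also be passed per connection via a conf dict.
    settings = {
        'rbd_cache': 'true',                   # enable the librbd cache
        'rbd_cache_size': str(64 * 1024**2),   # 64 MiB instead of the default
    }
    with rados.Rados(conffile='/etc/ceph/ceph.conf', conf=settings) as cluster:
        with cluster.open_ioctx('rbd') as ioctx:
            with rbd.Image(ioctx, 'myimage') as image:
                image.write(b'cached write', 0)
                image.flush()                  # push dirty cache data to the OSDs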
The ubiquity of block device interfaces makes a virtual block -device an ideal candidate to interact with a mass data storage system like Ceph. - -Ceph block devices are thin-provisioned, resizable and store data striped over -multiple OSDs in a Ceph cluster. Ceph block devices leverage -:abbr:`RADOS (Reliable Autonomic Distributed Object Store)` capabilities -such as snapshotting, replication and consistency. Ceph's -:abbr:`RADOS (Reliable Autonomic Distributed Object Store)` Block Devices (RBD) -interact with OSDs using kernel modules or the ``librbd`` library. - -.. ditaa:: +------------------------+ +------------------------+ - | Kernel Module | | librbd | - +------------------------+-+------------------------+ - | RADOS Protocol | - +------------------------+-+------------------------+ - | OSDs | | Monitors | - +------------------------+ +------------------------+ - -.. note:: Kernel modules can use Linux page caching. For ``librbd``-based - applications, Ceph supports `RBD Caching`_. - -Ceph's block devices deliver high performance with infinite scalability to -`kernel modules`_, or to :abbr:`KVMs (kernel virtual machines)` such as `QEMU`_, and -cloud-based computing systems like `OpenStack`_ and `CloudStack`_ that rely on -libvirt and QEMU to integrate with Ceph block devices. You can use the same cluster -to operate the `Ceph RADOS Gateway`_, the `Ceph FS filesystem`_, and Ceph block -devices simultaneously. - -.. important:: To use Ceph Block Devices, you must have access to a running - Ceph cluster. - -.. toctree:: - :maxdepth: 1 - - Commands - Kernel Modules - Snapshots - Mirroring - QEMU - libvirt - Cache Settings - OpenStack - CloudStack - Manpage rbd <../../man/8/rbd> - Manpage rbd-fuse <../../man/8/rbd-fuse> - Manpage rbd-nbd <../../man/8/rbd-nbd> - Manpage ceph-rbdnamer <../../man/8/ceph-rbdnamer> - RBD Replay - Manpage rbd-replay-prep <../../man/8/rbd-replay-prep> - Manpage rbd-replay <../../man/8/rbd-replay> - Manpage rbd-replay-many <../../man/8/rbd-replay-many> - Manpage rbdmap <../../man/8/rbdmap> - librbd - - - -.. _RBD Caching: ../rbd-config-ref/ -.. _kernel modules: ../rbd-ko/ -.. _QEMU: ../qemu-rbd/ -.. _OpenStack: ../rbd-openstack -.. _CloudStack: ../rbd-cloudstack -.. _Ceph RADOS Gateway: ../../radosgw/ -.. _Ceph FS filesystem: ../../cephfs/ diff --git a/doc/start/quick-rbd.rst b/doc/start/quick-rbd.rst index debc0fc206f87..5534fa9e572e9 100644 --- a/doc/start/quick-rbd.rst +++ b/doc/start/quick-rbd.rst @@ -89,7 +89,7 @@ See `block devices`_ for additional details. .. _Storage Cluster Quick Start: ../quick-ceph-deploy .. _create a pool: ../../rados/operations/pools/#create-a-pool -.. _block devices: ../../rbd/rbd +.. _block devices: ../../rbd .. _FAQ: http://wiki.ceph.com/How_Can_I_Give_Ceph_a_Try .. _OS Recommendations: ../os-recommendations .. _rbdmap manpage: ../../man/8/rbdmap