.. _Ceph Object Store: radosgw
-.. _Ceph Block Device: rbd/rbd
+.. _Ceph Block Device: rbd
.. _Ceph Filesystem: cephfs
.. _Getting Started: start
.. _Architecture: architecture
install/index
rados/index
cephfs/index
- rbd/rbd
+ rbd/index
radosgw/index
mgr/index
api/index
</td></tr></tbody></table>
-.. _Ceph Block Devices: ../rbd/rbd
+.. _Ceph Block Devices: ../rbd/
.. _Ceph Filesystem: ../cephfs/
.. _Ceph Object Storage: ../radosgw/
.. _Deployment: ../rados/deployment/
--- /dev/null
+========================
+ Ceph Block Device APIs
+========================
+
+.. toctree::
+ :maxdepth: 2
+
+ librbd (Python) <librbdpy>
--- /dev/null
+================
+ Librbd (Python)
+================
+
+.. highlight:: python
+
+The `rbd` Python module provides file-like access to RBD images.
+
+
+Example: Creating and writing to an image
+=========================================
+
+To use `rbd`, import the `rados` and `rbd` modules, connect to RADOS, and
+open an IO context::
+
+    import rados
+    import rbd
+
+    cluster = rados.Rados(conffile='my_ceph.conf')
+    cluster.connect()
+    ioctx = cluster.open_ioctx('mypool')
+
+Then you instantiate an :class:`rbd.RBD` object, which you use to create the
+image::
+
+    rbd_inst = rbd.RBD()
+    size = 4 * 1024**3  # 4 GiB
+    rbd_inst.create(ioctx, 'myimage', size)
+
+To perform I/O on the image, you instantiate an :class:`rbd.Image` object::
+
+    image = rbd.Image(ioctx, 'myimage')
+    data = 'foo' * 200
+    image.write(data, 0)
+
+This writes 'foo' to the first 600 bytes of the image. Note that the data
+cannot be ``unicode``; `Librbd` does not know how to deal with characters
+wider than a :c:type:`char`.
+
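+If you want to verify the write, the same region can be read back with the
+``read()`` method of :class:`rbd.Image`. A minimal sketch, reusing the
+``image`` and ``data`` objects from the example above::
+
+    # read 600 bytes starting at offset 0; this returns the data written above
+    buf = image.read(0, 600)
+    assert buf == data
+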
+When you are done, close the image, the IO context, and the connection to
+RADOS::
+
+    image.close()
+    ioctx.close()
+    cluster.shutdown()
+
+To be safe, each of these calls would need to be in a separate ``finally``
+block::
+
+    cluster = rados.Rados(conffile='my_ceph.conf')
+    try:
+        ioctx = cluster.open_ioctx('mypool')
+        try:
+            rbd_inst = rbd.RBD()
+            size = 4 * 1024**3  # 4 GiB
+            rbd_inst.create(ioctx, 'myimage', size)
+            image = rbd.Image(ioctx, 'myimage')
+            try:
+                data = 'foo' * 200
+                image.write(data, 0)
+            finally:
+                image.close()
+        finally:
+            ioctx.close()
+    finally:
+        cluster.shutdown()
+
+This can be cumbersome, so the :class:`Rados`, :class:`Ioctx`, and
+:class:`Image` classes can be used as context managers that close/shutdown
+automatically (see :pep:`343`). Using them as context managers, the
+above example becomes::
+
+    with rados.Rados(conffile='my_ceph.conf') as cluster:
+        with cluster.open_ioctx('mypool') as ioctx:
+            rbd_inst = rbd.RBD()
+            size = 4 * 1024**3  # 4 GiB
+            rbd_inst.create(ioctx, 'myimage', size)
+            with rbd.Image(ioctx, 'myimage') as image:
+                data = 'foo' * 200
+                image.write(data, 0)
+
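+The :class:`RBD` instance can also be used to list the images in a pool and
+to remove them. A minimal sketch, assuming the pool and image created in the
+examples above still exist::
+
+    with rados.Rados(conffile='my_ceph.conf') as cluster:
+        with cluster.open_ioctx('mypool') as ioctx:
+            rbd_inst = rbd.RBD()
+            print(rbd_inst.list(ioctx))        # names of all images in the pool
+            rbd_inst.remove(ioctx, 'myimage')  # delete the image
+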
+API Reference
+=============
+
+.. automodule:: rbd
+ :members: RBD, Image, SnapIterator
--- /dev/null
+===================
+ Ceph Block Device
+===================
+
+.. index:: Ceph Block Device; introduction
+
+A block is a sequence of bytes (for example, a 512-byte block of data).
+Block-based storage interfaces are the most common way to store data with
+rotating media such as hard disks, CDs, floppy disks, and even traditional
+9-track tape. The ubiquity of block device interfaces makes a virtual block
+device an ideal candidate to interact with a mass data storage system like Ceph.
+
+Ceph block devices are thin-provisioned, resizable and store data striped over
+multiple OSDs in a Ceph cluster. Ceph block devices leverage
+:abbr:`RADOS (Reliable Autonomic Distributed Object Store)` capabilities
+such as snapshotting, replication and consistency. Ceph's
+:abbr:`RADOS (Reliable Autonomic Distributed Object Store)` Block Devices (RBD)
+interact with OSDs using kernel modules or the ``librbd`` library.
+
+.. ditaa::  +------------------------+ +------------------------+
+            |     Kernel Module      | |        librbd          |
+            +------------------------+-+------------------------+
+            |                   RADOS Protocol                  |
+            +------------------------+-+------------------------+
+            |          OSDs          | |        Monitors        |
+            +------------------------+ +------------------------+
+
+.. note:: Kernel modules can use Linux page caching. For ``librbd``-based
+ applications, Ceph supports `RBD Caching`_.
+
+Ceph block devices deliver high performance and massive scalability to
+`kernel modules`_, to :abbr:`KVMs (kernel virtual machines)` such as `QEMU`_,
+and to cloud-based computing systems like `OpenStack`_ and `CloudStack`_ that
+rely on libvirt and QEMU to integrate with Ceph block devices. You can use the
+same cluster to operate the `Ceph RADOS Gateway`_, the `Ceph FS filesystem`_,
+and Ceph block devices simultaneously.
+
+.. important:: To use Ceph Block Devices, you must have access to a running
+ Ceph cluster.
+
+.. toctree::
+ :maxdepth: 1
+
+ Commands <rados-rbd-cmds>
+ Kernel Modules <rbd-ko>
+ Snapshots <rbd-snapshot>
+ Mirroring <rbd-mirroring>
+ QEMU <qemu-rbd>
+ libvirt <libvirt>
+ Cache Settings <rbd-config-ref>
+ OpenStack <rbd-openstack>
+ CloudStack <rbd-cloudstack>
+ RBD Replay <rbd-replay>
+
+.. toctree::
+ :maxdepth: 2
+
+ Manpages <man/index>
+
+.. toctree::
+ :maxdepth: 2
+
+ APIs <api/index>
+
+
+
+
+.. _RBD Caching: ../rbd-config-ref/
+.. _kernel modules: ../rbd-ko/
+.. _QEMU: ../qemu-rbd/
+.. _OpenStack: ../rbd-openstack/
+.. _CloudStack: ../rbd-cloudstack/
+.. _Ceph RADOS Gateway: ../../radosgw/
+.. _Ceph FS filesystem: ../../cephfs/
+++ /dev/null
-================
- Librbd (Python)
-================
-
-.. highlight:: python
-
-The `rbd` python module provides file-like access to RBD images.
-
-
-Example: Creating and writing to an image
-=========================================
-
-To use `rbd`, you must first connect to RADOS and open an IO
-context::
-
- cluster = rados.Rados(conffile='my_ceph.conf')
- cluster.connect()
- ioctx = cluster.open_ioctx('mypool')
-
-Then you instantiate an :class:rbd.RBD object, which you use to create the
-image::
-
- rbd_inst = rbd.RBD()
- size = 4 * 1024**3 # 4 GiB
- rbd_inst.create(ioctx, 'myimage', size)
-
-To perform I/O on the image, you instantiate an :class:rbd.Image object::
-
- image = rbd.Image(ioctx, 'myimage')
- data = 'foo' * 200
- image.write(data, 0)
-
-This writes 'foo' to the first 600 bytes of the image. Note that data
-cannot be :type:unicode - `Librbd` does not know how to deal with
-characters wider than a :c:type:char.
-
-In the end, you will want to close the image, the IO context and the connection to RADOS::
-
- image.close()
- ioctx.close()
- cluster.shutdown()
-
-To be safe, each of these calls would need to be in a separate :finally
-block::
-
-    cluster = rados.Rados(conffile='my_ceph_conf')
-    try:
-        ioctx = cluster.open_ioctx('my_pool')
-        try:
-            rbd_inst = rbd.RBD()
-            size = 4 * 1024**3 # 4 GiB
-            rbd_inst.create(ioctx, 'myimage', size)
-            image = rbd.Image(ioctx, 'myimage')
-            try:
-                data = 'foo' * 200
-                image.write(data, 0)
-            finally:
-                image.close()
-        finally:
-            ioctx.close()
-    finally:
-        cluster.shutdown()
-
-This can be cumbersome, so the :class:`Rados`, :class:`Ioctx`, and
-:class:`Image` classes can be used as context managers that close/shutdown
-automatically (see :pep:`343`). Using them as context managers, the
-above example becomes::
-
-    with rados.Rados(conffile='my_ceph.conf') as cluster:
-        with cluster.open_ioctx('mypool') as ioctx:
-            rbd_inst = rbd.RBD()
-            size = 4 * 1024**3 # 4 GiB
-            rbd_inst.create(ioctx, 'myimage', size)
-            with rbd.Image(ioctx, 'myimage') as image:
-                data = 'foo' * 200
-                image.write(data, 0)
-
-API Reference
-=============
-
-.. automodule:: rbd
- :members: RBD, Image, SnapIterator
--- /dev/null
+============================
+ Ceph Block Device Manpages
+============================
+
+.. toctree::
+ :maxdepth: 1
+
+ rbd <../../man/8/rbd>
+ rbd-fuse <../../man/8/rbd-fuse>
+ rbd-nbd <../../man/8/rbd-nbd>
+ rbd-ggate <../../man/8/rbd-ggate>
+ ceph-rbdnamer <../../man/8/ceph-rbdnamer>
+ rbd-replay-prep <../../man/8/rbd-replay-prep>
+ rbd-replay <../../man/8/rbd-replay>
+ rbd-replay-many <../../man/8/rbd-replay-many>
+ rbdmap <../../man/8/rbdmap>
:Required: No
:Default: ``true``
-.. _Block Device: ../../rbd/rbd/
+.. _Block Device: ../../rbd
Read-ahead Settings
+++ /dev/null
-===================
- Ceph Block Device
-===================
-
-.. index:: Ceph Block Device; introduction
-
-A block is a sequence of bytes (for example, a 512-byte block of data).
-Block-based storage interfaces are the most common way to store data with
-rotating media such as hard disks, CDs, floppy disks, and even traditional
-9-track tape. The ubiquity of block device interfaces makes a virtual block
-device an ideal candidate to interact with a mass data storage system like Ceph.
-
-Ceph block devices are thin-provisioned, resizable and store data striped over
-multiple OSDs in a Ceph cluster. Ceph block devices leverage
-:abbr:`RADOS (Reliable Autonomic Distributed Object Store)` capabilities
-such as snapshotting, replication and consistency. Ceph's
-:abbr:`RADOS (Reliable Autonomic Distributed Object Store)` Block Devices (RBD)
-interact with OSDs using kernel modules or the ``librbd`` library.
-
-.. ditaa::  +------------------------+ +------------------------+
-            |     Kernel Module      | |        librbd          |
-            +------------------------+-+------------------------+
-            |                   RADOS Protocol                  |
-            +------------------------+-+------------------------+
-            |          OSDs          | |        Monitors        |
-            +------------------------+ +------------------------+
-
-.. note:: Kernel modules can use Linux page caching. For ``librbd``-based
- applications, Ceph supports `RBD Caching`_.
-
-Ceph's block devices deliver high performance with infinite scalability to
-`kernel modules`_, or to :abbr:`KVMs (kernel virtual machines)` such as `QEMU`_, and
-cloud-based computing systems like `OpenStack`_ and `CloudStack`_ that rely on
-libvirt and QEMU to integrate with Ceph block devices. You can use the same cluster
-to operate the `Ceph RADOS Gateway`_, the `Ceph FS filesystem`_, and Ceph block
-devices simultaneously.
-
-.. important:: To use Ceph Block Devices, you must have access to a running
- Ceph cluster.
-
-.. toctree::
- :maxdepth: 1
-
- Commands <rados-rbd-cmds>
- Kernel Modules <rbd-ko>
- Snapshots<rbd-snapshot>
- Mirroring <rbd-mirroring>
- QEMU <qemu-rbd>
- libvirt <libvirt>
- Cache Settings <rbd-config-ref/>
- OpenStack <rbd-openstack>
- CloudStack <rbd-cloudstack>
- Manpage rbd <../../man/8/rbd>
- Manpage rbd-fuse <../../man/8/rbd-fuse>
- Manpage rbd-nbd <../../man/8/rbd-nbd>
- Manpage ceph-rbdnamer <../../man/8/ceph-rbdnamer>
- RBD Replay <rbd-replay>
- Manpage rbd-replay-prep <../../man/8/rbd-replay-prep>
- Manpage rbd-replay <../../man/8/rbd-replay>
- Manpage rbd-replay-many <../../man/8/rbd-replay-many>
- Manpage rbdmap <../../man/8/rbdmap>
- librbd <librbdpy>
-
-
-
-.. _RBD Caching: ../rbd-config-ref/
-.. _kernel modules: ../rbd-ko/
-.. _QEMU: ../qemu-rbd/
-.. _OpenStack: ../rbd-openstack
-.. _CloudStack: ../rbd-cloudstack
-.. _Ceph RADOS Gateway: ../../radosgw/
-.. _Ceph FS filesystem: ../../cephfs/
.. _Storage Cluster Quick Start: ../quick-ceph-deploy
.. _create a pool: ../../rados/operations/pools/#create-a-pool
-.. _block devices: ../../rbd/rbd
+.. _block devices: ../../rbd
.. _FAQ: http://wiki.ceph.com/How_Can_I_Give_Ceph_a_Try
.. _OS Recommendations: ../os-recommendations
.. _rbdmap manpage: ../../man/8/rbdmap