=======================
- librbd Settings
+ Config Settings
=======================
See `Block Device`_ for additional details.
.. sidebar:: Kernel Caching
- The kernel driver for Ceph block devices can use the Linux page cache to
+ The kernel driver for Ceph block devices can use the Linux page cache to
improve performance.
The user space implementation of the Ceph block device (i.e., ``librbd``) cannot
take advantage of the Linux page cache, so it includes its own in-memory
caching, called "RBD caching." RBD caching behaves just like well-behaved hard
disk caching: when the OS sends a barrier or a flush request, all dirty data is
written to the OSDs. This means that using write-back caching is just as safe as
using a well-behaved physical hard disk with a VM that properly sends flushes
(i.e. Linux kernel >= 2.6.32). The cache uses a Least Recently Used (LRU)
-algorithm, and in write-back mode it can coalesce contiguous requests for
+algorithm, and in write-back mode it can coalesce contiguous requests for
better throughput.
-.. versionadded:: 0.46
+The librbd cache is enabled by default and supports three different cache
+policies: write-around, write-back, and write-through. Under both the
+write-around and write-back policies, writes return immediately unless more
+than ``rbd cache max dirty`` bytes have yet to be written to the storage
+cluster. The write-around policy differs from the write-back policy in that
+it does not attempt to service read requests from the cache, which makes it
+faster for high-performance write workloads. Under the write-through policy,
+writes return only when the data is on disk on all replicas, but reads may
+come from the cache.
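+
+As a minimal sketch, the policy is selected in the ``[client]`` section of
+``ceph.conf`` (``writeback`` here is one of the three values documented
+below):
+
+.. code-block:: ini
+
+    [client]
+    rbd cache = true
+    rbd cache policy = writeback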
-Ceph supports write-back caching for RBD. To enable it, add ``rbd cache =
-true`` to the ``[client]`` section of your ``ceph.conf`` file. By default
-``librbd`` does not perform any caching. Writes and reads go directly to the
-storage cluster, and writes return only when the data is on disk on all
-replicas. With caching enabled, writes return immediately, unless there are more
-than ``rbd cache max dirty`` unflushed bytes. In this case, the write triggers
-writeback and blocks until enough bytes are flushed.
+Prior to receiving a flush request, the cache behaves like a write-through
+cache. This ensures safe operation for older operating systems that do not
+send the flushes needed for crash-consistent behavior.
-.. versionadded:: 0.47
+If the librbd cache is disabled, writes and
+reads go directly to the storage cluster, and writes return only when the data
+is on disk on all replicas.
+
+.. note::
+   The cache is in memory on the client, and each RBD image maintains its
+   own cache. Since the cache is local to the client, there is no coherency
+   when other clients access the same image. Running GFS or OCFS on top of
+   RBD will not work with caching enabled.
-Ceph supports write-through caching for RBD. You can set the size of
-the cache, and you can set targets and limits to switch from
-write-back caching to write through caching. To enable write-through
-mode, set ``rbd cache max dirty`` to 0. This means writes return only
-when the data is on disk on all replicas, but reads may come from the
-cache. The cache is in memory on the client, and each RBD image has
-its own. Since the cache is local to the client, there's no coherency
-if there are others accessing the image. Running GFS or OCFS on top of
-RBD will not work with caching enabled.
The ``ceph.conf`` file settings for RBD should be set in the ``[client]``
-section of your configuration file. The settings include:
+section of your configuration file. The settings include:
``rbd cache``
:Default: ``true``
+``rbd cache policy``
+
+:Description: Select the caching policy for librbd.
+:Type: Enum
+:Required: No
+:Default: ``writearound``
+:Values: ``writearound``, ``writeback``, ``writethrough``
+
+
+``rbd cache writethrough until flush``
+
+:Description: Start out in write-through mode, and switch to write-back after the first flush request is received. Enabling this is a conservative but safe setting in case VMs running on rbd are too old to send flushes, like the virtio driver in Linux before 2.6.32.
+:Type: Boolean
+:Required: No
+:Default: ``true``
+
+
``rbd cache size``
:Description: The RBD cache size in bytes.
:Type: 64-bit Integer
:Required: No
:Default: ``32 MiB``
+:Policies: write-back and write-through
``rbd cache max dirty``
:Required: No
:Constraint: Must be less than ``rbd cache size``.
:Default: ``24 MiB``
+:Policies: write-around and write-back
``rbd cache target dirty``
:Required: No
:Constraint: Must be less than ``rbd cache max dirty``.
:Default: ``16 MiB``
+:Policies: write-back
``rbd cache max dirty age``
:Type: Float
:Required: No
:Default: ``1.0``
+:Policies: write-back
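+
+An illustrative (not prescriptive) example combining the settings above in
+``ceph.conf``, using a larger cache with the write-back policy; the sizes
+are examples only and must respect the constraints listed above (target
+dirty < max dirty < cache size):
+
+.. code-block:: ini
+
+    [client]
+    rbd cache = true
+    rbd cache policy = writeback
+    rbd cache writethrough until flush = true
+    # 64 MiB cache, 48 MiB max dirty, 32 MiB target dirty
+    rbd cache size = 67108864
+    rbd cache max dirty = 50331648
+    rbd cache target dirty = 33554432
+    rbd cache max dirty age = 2.0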
-.. versionadded:: 0.60
-
-``rbd cache writethrough until flush``
-
-:Description: Start out in write-through mode, and switch to write-back after the first flush request is received. Enabling this is a conservative but safe setting in case VMs running on rbd are too old to send flushes, like the virtio driver in Linux before 2.6.32.
-:Type: Boolean
-:Required: No
-:Default: ``true``
.. _Block Device: ../../rbd
Read-ahead Settings
=======================
-.. versionadded:: 0.86
-
-RBD supports read-ahead/prefetching to optimize small, sequential reads.
+librbd supports read-ahead/prefetching to optimize small, sequential reads.
This should normally be handled by the guest OS in the case of a VM,
-but boot loaders may not issue efficient reads.
-Read-ahead is automatically disabled if caching is disabled.
+but boot loaders may not issue efficient reads. Read-ahead is automatically
+disabled if caching is disabled or if the policy is write-around.
``rbd readahead trigger requests``
:Default: ``10``
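+
+An illustrative read-ahead configuration. ``rbd readahead max bytes`` and
+``rbd readahead disable after bytes`` are companion librbd options not shown
+in the excerpt above; the values are examples, not recommendations:
+
+.. code-block:: ini
+
+    [client]
+    # trigger read-ahead after this many sequential read requests
+    rbd readahead trigger requests = 10
+    # cap each read-ahead request at 512 KiB
+    rbd readahead max bytes = 524288
+    # hand read-ahead over to the guest OS after 50 MiB has been read
+    rbd readahead disable after bytes = 52428800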
-RBD Features
-============
+Image Features
+==============
RBD supports advanced features, which can be specified via the command line
when creating images. Alternatively, default features can be specified in the
Ceph config file via ``rbd_default_features = <sum of feature numeric values>``
or ``rbd_default_features = <comma-delimited list of CLI values>`` (see the
example below).
:KRBD support: no
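+
+For example, the two forms below are equivalent, enabling layering (numeric
+value 1) and exclusive-lock (numeric value 4):
+
+.. code-block:: ini
+
+    [client]
+    # sum of feature numeric values: layering (1) + exclusive-lock (4)
+    rbd default features = 5
+    # equivalent comma-delimited list of CLI values:
+    # rbd default features = layering,exclusive-lock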
-RBD QOS Settings
-================
+QOS Settings
+============
-RBD supports limiting per image IO, controlled by the following
+librbd supports limiting per-image IO, controlled by the following
settings.
``rbd qos iops limit``
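+
+A minimal sketch of capping an image's IOPS in ``ceph.conf``; the value is
+illustrative, and this assumes the usual librbd convention that ``0`` leaves
+the limit disabled:
+
+.. code-block:: ini
+
+    [client]
+    rbd qos iops limit = 100
+
+The same option can also be applied to a single image at runtime with, for
+example, ``rbd config image set rbd/myimage rbd_qos_iops_limit 100`` (the
+pool and image names here are hypothetical).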