From 996c0197ca214d94287e9be9bf99dc080e8bf9f1 Mon Sep 17 00:00:00 2001
From: Jason Dillaman
Date: Fri, 19 Jul 2019 10:24:36 -0400
Subject: [PATCH] doc/rbd: tweak config reference

Clean up the caching sections to match the current implementation and
clarify which sections apply only to librbd.

Signed-off-by: Jason Dillaman
---
 doc/rbd/rbd-config-ref.rst | 94 ++++++++++++++++++++++----------------
 1 file changed, 54 insertions(+), 40 deletions(-)

diff --git a/doc/rbd/rbd-config-ref.rst b/doc/rbd/rbd-config-ref.rst
index 8fdc8a322fc..ccd2a3e835e 100644
--- a/doc/rbd/rbd-config-ref.rst
+++ b/doc/rbd/rbd-config-ref.rst
@@ -1,5 +1,5 @@
 =======================
- librbd Settings
+ Config Settings
 =======================
 
 See `Block Device`_ for additional details.
@@ -9,7 +9,7 @@ Cache Settings
 
 .. sidebar:: Kernel Caching
 
-   The kernel driver for Ceph block devices can use the Linux page cache to 
+   The kernel driver for Ceph block devices can use the Linux page cache to
    improve performance.
 
 The user space implementation of the Ceph block device (i.e., ``librbd``) cannot
@@ -19,33 +19,36 @@ disk caching. When the OS sends a barrier or a flush request, all dirty data is
 written to the OSDs. This means that using write-back caching is just as safe as
 using a well-behaved physical hard disk with a VM that properly sends flushes
 (i.e. Linux kernel >= 2.6.32). The cache uses a Least Recently Used (LRU)
-algorithm, and in write-back mode it can coalesce contiguous requests for 
+algorithm, and in write-back mode it can coalesce contiguous requests for
 better throughput.
 
-.. versionadded:: 0.46
+The librbd cache is enabled by default and supports three different cache
+policies: write-around, write-back, and write-through. Writes return
+immediately under both the write-around and write-back policies, unless there
+are more than ``rbd cache max dirty`` bytes not yet written to the storage
+cluster. The write-around policy differs from the write-back policy in that
+it does not attempt to service read requests from the cache, which makes it
+faster for high-performance write workloads. Under the write-through policy,
+writes return only when the data is on disk on all replicas, but reads may
+come from the cache.
 
-Ceph supports write-back caching for RBD. To enable it, add ``rbd cache =
-true`` to the ``[client]`` section of your ``ceph.conf`` file. By default
-``librbd`` does not perform any caching. Writes and reads go directly to the
-storage cluster, and writes return only when the data is on disk on all
-replicas. With caching enabled, writes return immediately, unless there are more
-than ``rbd cache max dirty`` unflushed bytes. In this case, the write triggers
-writeback and blocks until enough bytes are flushed.
+Prior to receiving a flush request, the cache behaves like a write-through
+cache, which guarantees safe operation for older operating systems that do
+not send the flushes required for crash-consistent behavior.
 
-.. versionadded:: 0.47
+If the librbd cache is disabled, writes and reads go directly to the
+storage cluster, and writes return only when the data is on disk on all
+replicas.
+
+.. note::
+   The cache is in memory on the client, and each RBD image has its own.
+   Since the cache is local to the client, there is no coherency if other
+   clients access the image. Running GFS or OCFS on top of RBD will not
+   work with caching enabled.
 
-Ceph supports write-through caching for RBD. You can set the size of
-the cache, and you can set targets and limits to switch from
-write-back caching to write through caching. To enable write-through
-mode, set ``rbd cache max dirty`` to 0. This means writes return only
-when the data is on disk on all replicas, but reads may come from the
-cache. The cache is in memory on the client, and each RBD image has
-its own. Since the cache is local to the client, there's no coherency
-if there are others accessing the image. Running GFS or OCFS on top of
-RBD will not work with caching enabled.
 
 The ``ceph.conf`` file settings for RBD should be set in the ``[client]``
-section of your configuration file.  The settings include:
+section of your configuration file. The settings include:
 
 ``rbd cache``
 
@@ -56,12 +59,30 @@
 :Description: Enable caching for RADOS Block Device (RBD).
 :Type: Boolean
 :Required: No
 :Default: ``true``
 
 
+``rbd cache policy``
+
+:Description: Select the caching policy for librbd.
+:Type: Enum
+:Required: No
+:Default: ``writearound``
+:Values: ``writearound``, ``writeback``, ``writethrough``
+
+
+``rbd cache writethrough until flush``
+
+:Description: Start out in write-through mode, and switch to write-back after the first flush request is received. Enabling this is a conservative but safe setting in case VMs running on rbd are too old to send flushes, like the virtio driver in Linux before 2.6.32.
+:Type: Boolean
+:Required: No
+:Default: ``true``
+
+
 ``rbd cache size``
 
 :Description: The RBD cache size in bytes.
 :Type: 64-bit Integer
 :Required: No
 :Default: ``32 MiB``
+:Policies: write-back and write-through
 
 
 ``rbd cache max dirty``
 
@@ -71,6 +92,7 @@
 :Description: The ``dirty`` limit in bytes at which the cache triggers write-back. If ``0``, uses write-through caching.
 :Type: 64-bit Integer
 :Required: No
 :Constraint: Must be less than ``rbd cache size``.
 :Default: ``24 MiB``
+:Policies: write-around and write-back
 
 
 ``rbd cache target dirty``
 
@@ -80,6 +102,7 @@
 :Description: The ``dirty target`` before the cache begins writing data to the data storage. Does not block writes to the cache.
 :Type: 64-bit Integer
 :Required: No
 :Constraint: Must be less than ``rbd cache max dirty``.
 :Default: ``16 MiB``
+:Policies: write-back
 
 
 ``rbd cache max dirty age``
 
@@ -88,15 +111,8 @@
 :Description: The number of seconds dirty data is in the cache before writeback starts.
 :Type: Float
 :Required: No
 :Default: ``1.0``
+:Policies: write-back
 
-.. versionadded:: 0.60
-
-``rbd cache writethrough until flush``
-
-:Description: Start out in write-through mode, and switch to write-back after the first flush request is received. Enabling this is a conservative but safe setting in case VMs running on rbd are too old to send flushes, like the virtio driver in Linux before 2.6.32.
-:Type: Boolean
-:Required: No
-:Default: ``true``
 
 .. _Block Device: ../../rbd
 
@@ -104,12 +120,10 @@
 Read-ahead Settings
 =======================
 
-.. versionadded:: 0.86
-
-RBD supports read-ahead/prefetching to optimize small, sequential reads.
+librbd supports read-ahead/prefetching to optimize small, sequential reads.
 This should normally be handled by the guest OS in the case of a VM,
-but boot loaders may not issue efficient reads.
-Read-ahead is automatically disabled if caching is disabled.
+but boot loaders may not issue efficient reads. Read-ahead is automatically
+disabled if caching is disabled or if the policy is write-around.
 
 ``rbd readahead trigger requests``
 
@@ -136,8 +150,8 @@
 :Type: 64-bit Integer
 :Required: No
 :Default: ``50 MiB``
 
-RBD Features
-============
+Image Features
+==============
 
 RBD supports advanced features which can be specified via the command line when creating images or the default features can be specified via Ceph config file via 'rbd_default_features = <sum of feature numeric values>' or 'rbd_default_features = <comma-delimited list of CLI values>'
 
@@ -233,10 +247,10 @@
 :KRBD support: no
 
 
-RBD QOS Settings
-================
+QOS Settings
+============
 
-RBD supports limiting per image IO, controlled by the following
+librbd supports limiting per image IO, controlled by the following
 settings.
 
 ``rbd qos iops limit``

-- 
2.39.5
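
For readers trying out the options documented by this patch, here is a minimal
``ceph.conf`` sketch. The option names, policy values, and defaults are the
ones described in the diff above; the specific byte values are illustrative
assumptions only (a write-back setup with a cache twice the default size)::

    [client]
    # librbd cache is on by default; shown here for clarity
    rbd cache = true
    # one of: writearound (default), writeback, writethrough
    rbd cache policy = writeback
    # stay in write-through mode until the guest sends its first flush
    rbd cache writethrough until flush = true
    # sizing must satisfy: target dirty < max dirty < cache size
    rbd cache size = 67108864
    rbd cache max dirty = 50331648
    rbd cache target dirty = 33554432
    rbd cache max dirty age = 1.0

Note how the constraints quoted in the diff (``rbd cache target dirty`` <
``rbd cache max dirty`` < ``rbd cache size``) hold for the chosen values
(32 MiB < 48 MiB < 64 MiB).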
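
The ``rbd_default_features`` value mentioned under "Image Features" is a sum
of per-feature numeric values. As a worked example, assuming the standard
librbd feature bits (layering = 1, exclusive-lock = 4, object-map = 8,
fast-diff = 16, deep-flatten = 32; these values come from the rbd feature set
rather than from this patch), the common default works out to::

    [client]
    # 1 + 4 + 8 + 16 + 32 = 61
    rbd default features = 61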
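
For the QOS section, only ``rbd qos iops limit`` is visible in this diff. A
hedged sketch of capping a client at 1024 IOPS follows; the value is
arbitrary, and the assumption that ``0`` means unlimited reflects the usual
librbd default rather than anything stated above::

    [client]
    rbd qos iops limit = 1024

The same limit can typically be applied to a single image instead, assuming
the ``rbd config image set`` command from the rbd CLI of the same era, with a
hypothetical pool/image name::

    rbd config image set rbd/myimage rbd_qos_iops_limit 1024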