From be07a5b407fb6ffc2ee933e58e2489e5e7aa0c70 Mon Sep 17 00:00:00 2001
From: Kefu Chai
Date: Thu, 22 Apr 2021 11:05:40 +0800
Subject: [PATCH] doc/rados/configuration: add missing options and link to them when appropriate

Signed-off-by: Kefu Chai
---
 .../configuration/bluestore-config-ref.rst    |  9 +++++++++
 doc/rados/configuration/mclock-config-ref.rst | 10 +++++-----
 doc/rados/configuration/osd-config-ref.rst    | 17 ++++++++++-------
 3 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/doc/rados/configuration/bluestore-config-ref.rst b/doc/rados/configuration/bluestore-config-ref.rst
index 0dcb00fb5f6b8..e4757c8bc6792 100644
--- a/doc/rados/configuration/bluestore-config-ref.rst
+++ b/doc/rados/configuration/bluestore-config-ref.rst
@@ -324,6 +324,15 @@ To enable sharding and apply the Pacific defaults, stop an OSD and run
 .. confval:: bluestore_rocksdb_cf
 .. confval:: bluestore_rocksdb_cfs
 
+Throttling
+==========
+
+.. confval:: bluestore_throttle_bytes
+.. confval:: bluestore_throttle_deferred_bytes
+.. confval:: bluestore_throttle_cost_per_io
+.. confval:: bluestore_throttle_cost_per_io_hdd
+.. confval:: bluestore_throttle_cost_per_io_ssd
+
 SPDK Usage
 ==================
 
diff --git a/doc/rados/configuration/mclock-config-ref.rst b/doc/rados/configuration/mclock-config-ref.rst
index 663c032aa42e9..d797c8ea722c3 100644
--- a/doc/rados/configuration/mclock-config-ref.rst
+++ b/doc/rados/configuration/mclock-config-ref.rst
@@ -167,7 +167,7 @@ maximize the impact of the mclock scheduler.
 
 :Bluestore Throttle Parameters:
   We recommend using the default values as defined by
-  ``bluestore_throttle_bytes`` and ``bluestore_throttle_deferred_bytes``. But
+  :confval:`bluestore_throttle_bytes` and :confval:`bluestore_throttle_deferred_bytes`. But
   these parameters may also be determined during the benchmarking phase as
   described below.
 
@@ -183,7 +183,7 @@ correct bluestore throttle values.
 2. Install cbt and all the dependencies mentioned on the cbt github page.
 3. Construct the Ceph configuration file and the cbt yaml file.
 4. Ensure that the bluestore throttle options ( i.e.
-   ``bluestore_throttle_bytes`` and ``bluestore_throttle_deferred_bytes``) are
+   :confval:`bluestore_throttle_bytes` and :confval:`bluestore_throttle_deferred_bytes`) are
    set to the default values.
 5. Ensure that the test is performed on similar device types to get reliable
    OSD capacity data.
@@ -195,8 +195,8 @@ correct bluestore throttle values.
    value is the baseline throughput(IOPS) when the default bluestore
    throttle options are in effect.
 9. If the intent is to determine the bluestore throttle values for your
-   environment, then set the two options, ``bluestore_throttle_bytes`` and
-   ``bluestore_throttle_deferred_bytes`` to 32 KiB(32768 Bytes) each to begin
+   environment, then set the two options, :confval:`bluestore_throttle_bytes` and
+   :confval:`bluestore_throttle_deferred_bytes` to 32 KiB(32768 Bytes) each to begin
    with. Otherwise, you may skip to the next section.
 10. Run the 4KiB random write workload as before on the OSD(s) for 300 secs.
 11. Note the overall throughput from the cbt log files and compare the value
@@ -253,7 +253,7 @@ The other values for the built-in profiles include *balanced* and
 *high_recovery_ops*.
 
 If there is a requirement to change the default profile, then the option
-``osd_mclock_profile`` may be set in the **[global]** or **[osd]** section of
+:confval:`osd_mclock_profile` may be set in the **[global]** or **[osd]** section of
 your Ceph configuration file before bringing up your cluster.
 
 Alternatively, to change the profile during runtime, use the following command:
diff --git a/doc/rados/configuration/osd-config-ref.rst b/doc/rados/configuration/osd-config-ref.rst
index de39c1e69b19a..65fe1205edfff 100644
--- a/doc/rados/configuration/osd-config-ref.rst
+++ b/doc/rados/configuration/osd-config-ref.rst
@@ -179,6 +179,9 @@ scrubbing operations.
 Operations
 ==========
 
+.. confval:: osd_op_num_shards
+.. confval:: osd_op_num_shards_hdd
+.. confval:: osd_op_num_shards_ssd
 .. confval:: osd_op_queue
 .. confval:: osd_op_queue_cut_off
 .. confval:: osd_client_op_priority
@@ -285,8 +288,8 @@ queues within Ceph. First, requests to an OSD are sharded by their
 placement group identifier. Each shard has its own mClock queue and
 these queues neither interact nor share information among them. The
 number of shards can be controlled with the configuration options
-``osd_op_num_shards``, ``osd_op_num_shards_hdd``, and
-``osd_op_num_shards_ssd``. A lower number of shards will increase the
+:confval:`osd_op_num_shards`, :confval:`osd_op_num_shards_hdd`, and
+:confval:`osd_op_num_shards_ssd`. A lower number of shards will increase the
 impact of the mClock queues, but may have other deleterious effects.
 
 Second, requests are transferred from the operation queue to the
@@ -303,11 +306,11 @@ the impact of mClock, we want to keep as few operations in the
 operation sequencer as possible. So we have an inherent tension.
 
 The configuration options that influence the number of operations in
-the operation sequencer are ``bluestore_throttle_bytes``,
-``bluestore_throttle_deferred_bytes``,
-``bluestore_throttle_cost_per_io``,
-``bluestore_throttle_cost_per_io_hdd``, and
-``bluestore_throttle_cost_per_io_ssd``.
+the operation sequencer are :confval:`bluestore_throttle_bytes`,
+:confval:`bluestore_throttle_deferred_bytes`,
+:confval:`bluestore_throttle_cost_per_io`,
+:confval:`bluestore_throttle_cost_per_io_hdd`, and
+:confval:`bluestore_throttle_cost_per_io_ssd`.
 
 A third factor that affects the impact of the mClock algorithm is
 that we're using a distributed system, where requests are made to multiple
-- 
2.39.5
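Aside, not part of the patch above: the options this patch documents can all be adjusted at runtime through the `ceph config` interface, which may help when following the benchmarking steps in mclock-config-ref.rst. A minimal sketch, assuming a running cluster with the `ceph` CLI available; the target `osd.0`, the chosen profile, and the 32768-byte value (from step 9 of the procedure) are illustrative placeholders, not values mandated by the patch:

    # Inspect the mClock profile currently in effect for one OSD
    # (osd.0 is a placeholder id).
    ceph config get osd.0 osd_mclock_profile

    # Switch all OSDs to another built-in profile; *balanced* and
    # *high_recovery_ops* are the alternatives named in the patch.
    ceph config set osd osd_mclock_profile high_recovery_ops

    # Step 9 of the benchmarking procedure: start both bluestore throttles
    # at 32 KiB (32768 bytes) before re-running the 4 KiB random-write test.
    ceph config set osd bluestore_throttle_bytes 32768
    ceph config set osd bluestore_throttle_deferred_bytes 32768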