.. confval:: bluestore_rocksdb_cf
.. confval:: bluestore_rocksdb_cfs
+Throttling
+==========
+
+.. confval:: bluestore_throttle_bytes
+.. confval:: bluestore_throttle_deferred_bytes
+.. confval:: bluestore_throttle_cost_per_io
+.. confval:: bluestore_throttle_cost_per_io_hdd
+.. confval:: bluestore_throttle_cost_per_io_ssd
+
SPDK Usage
==================
:Bluestore Throttle Parameters:
We recommend using the default values as defined by
- ``bluestore_throttle_bytes`` and ``bluestore_throttle_deferred_bytes``. But
+ :confval:`bluestore_throttle_bytes` and :confval:`bluestore_throttle_deferred_bytes`. But
these parameters may also be determined during the benchmarking phase as
described below.
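  The values in effect can be confirmed, and later overridden where the
  procedure calls for it, through the centralized configuration database.
  The snippet below is only a sketch: ``osd.0`` is a placeholder for the OSD
  under test, and 32768 bytes corresponds to the 32 KiB starting point used
  in step 9 of the procedure.

  .. code-block:: bash

     # Confirm the throttle values in effect on the OSD under test
     # (osd.0 is a placeholder id).
     ceph config show osd.0 bluestore_throttle_bytes
     ceph config show osd.0 bluestore_throttle_deferred_bytes

     # Later in the procedure (step 9), the same options can be overridden
     # with the 32 KiB starting values:
     ceph config set osd bluestore_throttle_bytes 32768
     ceph config set osd bluestore_throttle_deferred_bytes 32768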
2. Install cbt and all the dependencies mentioned on the cbt GitHub page.
3. Construct the Ceph configuration file and the cbt YAML file.
4. Ensure that the bluestore throttle options (i.e.
- ``bluestore_throttle_bytes`` and ``bluestore_throttle_deferred_bytes``) are
+ :confval:`bluestore_throttle_bytes` and :confval:`bluestore_throttle_deferred_bytes`) are
set to the default values.
5. Ensure that the test is performed on similar device types to get reliable
OSD capacity data.
   value is the baseline throughput (IOPS) when the default bluestore
throttle options are in effect.
9. If the intent is to determine the bluestore throttle values for your
- environment, then set the two options, ``bluestore_throttle_bytes`` and
- ``bluestore_throttle_deferred_bytes`` to 32 KiB(32768 Bytes) each to begin
+ environment, then set the two options, :confval:`bluestore_throttle_bytes` and
+ :confval:`bluestore_throttle_deferred_bytes`, to 32 KiB (32768 bytes) each to begin
with. Otherwise, you may skip to the next section.
10. Run the 4KiB random write workload as before on the OSD(s) for 300 seconds.
11. Note the overall throughput from the cbt log files and compare the value
*high_recovery_ops*.
If there is a requirement to change the default profile, then the option
-``osd_mclock_profile`` may be set in the **[global]** or **[osd]** section of
+:confval:`osd_mclock_profile` may be set in the **[global]** or **[osd]** section of
your Ceph configuration file before bringing up your cluster.
Alternatively, to change the profile during runtime, use the following command:
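For example (``osd.0`` is a placeholder id; *high_recovery_ops* is one of the
built-in profiles), a sketch of the runtime change and a quick check of the
result:

.. code-block:: bash

   # Switch the mClock profile of a single OSD at runtime; use "osd" as
   # the target instead to apply the change to all OSDs.
   ceph config set osd.0 osd_mclock_profile high_recovery_ops

   # Confirm the profile now in effect.
   ceph config show osd.0 osd_mclock_profile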
Operations
==========
+.. confval:: osd_op_num_shards
+.. confval:: osd_op_num_shards_hdd
+.. confval:: osd_op_num_shards_ssd
.. confval:: osd_op_queue
.. confval:: osd_op_queue_cut_off
.. confval:: osd_client_op_priority
placement group identifier. Each shard has its own mClock queue and
these queues neither interact nor share information with one another. The
number of shards can be controlled with the configuration options
-``osd_op_num_shards``, ``osd_op_num_shards_hdd``, and
-``osd_op_num_shards_ssd``. A lower number of shards will increase the
+:confval:`osd_op_num_shards`, :confval:`osd_op_num_shards_hdd`, and
+:confval:`osd_op_num_shards_ssd`. A lower number of shards will increase the
impact of the mClock queues, but may have other deleterious effects.
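As a sketch only (the value 16 is illustrative, not a recommendation), the
shard count for SSD-backed OSDs could be changed through the centralized
configuration database; because the sharded queues are created when an OSD
starts, the OSDs generally need to be restarted before the new value takes
effect:

.. code-block:: bash

   # Illustrative: store a non-default shard count for SSD-backed OSDs.
   ceph config set osd osd_op_num_shards_ssd 16

   # Check the stored value; restart the OSDs for it to take effect.
   ceph config get osd osd_op_num_shards_ssd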
Second, requests are transferred from the operation queue to the
operation sequencer as quickly as possible. So we have an inherent tension.
The configuration options that influence the number of operations in
-the operation sequencer are ``bluestore_throttle_bytes``,
-``bluestore_throttle_deferred_bytes``,
-``bluestore_throttle_cost_per_io``,
-``bluestore_throttle_cost_per_io_hdd``, and
-``bluestore_throttle_cost_per_io_ssd``.
+the operation sequencer are :confval:`bluestore_throttle_bytes`,
+:confval:`bluestore_throttle_deferred_bytes`,
+:confval:`bluestore_throttle_cost_per_io`,
+:confval:`bluestore_throttle_cost_per_io_hdd`, and
+:confval:`bluestore_throttle_cost_per_io_ssd`.
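A possible way to inspect these settings on a running OSD is sketched below
(``osd.0`` is a placeholder id). Roughly speaking, smaller byte throttles
leave more requests waiting in the mClock queues, where the scheduler can
differentiate between them, rather than in the operation sequencer.

.. code-block:: bash

   # Sketch: read the throttle settings in effect on one OSD
   # (osd.0 is a placeholder id).
   ceph config show osd.0 bluestore_throttle_bytes
   ceph config show osd.0 bluestore_throttle_deferred_bytes
   ceph config show osd.0 bluestore_throttle_cost_per_io_hdd
   ceph config show osd.0 bluestore_throttle_cost_per_io_ssd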
A third factor that affects the impact of the mClock algorithm is that
we're using a distributed system, where requests are made to multiple