per bucket you would consider decreasing :confval:`rgw_lc_max_wp_worker` from the default value of 3.
.. note:: When looking to tune either of these specific values, please validate
   current cluster performance and Ceph Object Gateway utilization before
   increasing them.
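
For example, on clusters that manage Ceph Object Gateway options through the
centralized configuration database, the value might be lowered as follows (an
illustrative sketch; the ``client.rgw`` target assumes a standard gateway
section name and should be adapted to your deployment)::

   # Reduce lifecycle worker threads from the default of 3
   # (example value; validate cluster load first).
   ceph config set client.rgw rgw_lc_max_wp_worker 1

   # Verify the value currently in effect.
   ceph config get client.rgw rgw_lc_max_wp_worker
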
Garbage Collection Settings
===========================

The list of objects awaiting garbage collection can be displayed with the
following command::

   radosgw-admin gc list

.. note:: Specify ``--include-all`` to list all entries, including unexpired
   Garbage Collection objects.
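
For example, to list every entry, including those that have not yet expired::

   radosgw-admin gc list --include-all
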
Garbage collection is a background activity that may
execute continuously or during times of low loads, depending upon how the
administrator configures the Ceph Object Gateway. By default, the Ceph Object
Gateway conducts garbage collection operations continuously.
:Tuning Garbage Collection for Delete Heavy Workloads:

As an initial step towards tuning Ceph Garbage Collection to be more
aggressive, consider increasing the following options from their default
configuration values::

   rgw_gc_max_concurrent_io = 20
   rgw_gc_max_trim_chunk = 64
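
These options can be applied at runtime, for example through the centralized
configuration database (an illustrative sketch; the ``client.rgw`` target
assumes a standard gateway section name and should be adapted to your
deployment)::

   # Raise concurrent GC I/O and trim chunk size from their defaults.
   ceph config set client.rgw rgw_gc_max_concurrent_io 20
   ceph config set client.rgw rgw_gc_max_trim_chunk 64
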

QoS Settings
============

Currently the scheduler defaults to a throttler which throttles the active
connections to a configured limit. QoS based on mClock is currently in an
*experimental* phase and is not recommended for production yet. The current
implementation of the *dmclock_client* op queue divides RGW ops among admin,
auth (swift auth, sts), metadata, and data requests.
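
As an illustrative sketch only (the feature is experimental and option names
should be verified against your Ceph release), the mClock-based scheduler
might be selected for a gateway like so::

   # Switch from the default throttler to the experimental dmclock scheduler
   # (assumes the rgw_scheduler_type option accepts the value "dmclock").
   ceph config set client.rgw rgw_scheduler_type dmclock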