From cbb9ab7716ae98ab80e485a6a4e3149e49be88aa Mon Sep 17 00:00:00 2001 From: Ville Ojamo <14869000+bluikko@users.noreply.github.com> Date: Wed, 7 May 2025 16:48:24 +0700 Subject: [PATCH] doc/radosgw: Cosmetic improvements in dynamicresharding.rst Make reference to config section a hyperlink. Capitalization consistency: use title case in section titles, fix two invalid capitalizations in text. Promptify CLI example commands. A JSON key-value pair is a "property" and not an "object". Use an ordered list instead of inline code with hardcoded list numbers. Use the American "canceled" (majority of occurrences in doc/) instead of "cancelled". Use admonitions instead of spelling out "Note:". Clarify language on sharding cleanup for multisite. Format JSON keys as inline code. Indent example JSON output from radosgw-admin correctly (same as real output) with 4 spaces. Use colon instead of full stop at the end of text that describes the following example command. Move admonition to after such example command. Signed-off-by: Ville Ojamo <14869000+bluikko@users.noreply.github.com> --- doc/radosgw/dynamicresharding.rst | 186 ++++++++++++++++-------------- 1 file changed, 97 insertions(+), 89 deletions(-) diff --git a/doc/radosgw/dynamicresharding.rst b/doc/radosgw/dynamicresharding.rst index 4c67c56c6f4ea..a1abfb207327a 100644 --- a/doc/radosgw/dynamicresharding.rst +++ b/doc/radosgw/dynamicresharding.rst @@ -23,7 +23,7 @@ resharding process, but reads are not. By default dynamic bucket index resharding can only increase the number of bucket index shards to 1999, although this upper-bound is a -configuration parameter (see Configuration below). When +configuration parameter (see `Configuration`_ below). When possible, the process chooses a prime number of shards in order to spread the number of entries across the bucket index shards more evenly. @@ -43,9 +43,9 @@ buckets that fluctuate in numbers of objects. 
Multisite ========= -With Ceph releases Prior to Reef, the Ceph Object Gateway (RGW) does not support +With Ceph releases prior to Reef, the Ceph Object Gateway (RGW) does not support dynamic resharding in a -multisite environment. For information on dynamic resharding, see +multisite deployment. For information on dynamic resharding, see :ref:`Resharding ` in the RGW multisite documentation. Configuration ============= @@ -62,98 +62,106 @@ Configuration .. confval:: rgw_reshard_progress_judge_interval .. confval:: rgw_reshard_progress_judge_ratio -Admin commands +Admin Commands ============== -Add a bucket to the resharding queue +Add a Bucket to the Resharding Queue ------------------------------------ -:: +.. prompt:: bash # - # radosgw-admin reshard add --bucket --num-shards + radosgw-admin reshard add --bucket --num-shards -List resharding queue +List Resharding Queue --------------------- -:: +.. prompt:: bash # - # radosgw-admin reshard list + radosgw-admin reshard list -Process tasks on the resharding queue +Process Tasks on the Resharding Queue ------------------------------------- -:: +.. prompt:: bash # - # radosgw-admin reshard process + radosgw-admin reshard process -Bucket resharding status +Bucket Resharding Status ------------------------ -:: +.. prompt:: bash # - # radosgw-admin reshard status --bucket + radosgw-admin reshard status --bucket -The output is a JSON array of 3 objects (reshard_status, new_bucket_instance_id, num_shards) per shard. +The output is a JSON array with one object per shard, each containing 3 properties (``reshard_status``, ``new_bucket_instance_id``, ``num_shards``). For example, the output at each dynamic resharding stage is shown below: -``1. Before resharding occurred:`` -:: - - [ - { - "reshard_status": "not-resharding", - "new_bucket_instance_id": "", - "num_shards": -1 - } - ] - -``2. 
During resharding:`` -:: - - [ - { - "reshard_status": "in-progress", - "new_bucket_instance_id": "1179f470-2ebf-4630-8ec3-c9922da887fd.8652.1", - "num_shards": 2 - }, - { - "reshard_status": "in-progress", - "new_bucket_instance_id": "1179f470-2ebf-4630-8ec3-c9922da887fd.8652.1", - "num_shards": 2 - } - ] - -``3. After resharding completed:`` -:: - - [ - { - "reshard_status": "not-resharding", - "new_bucket_instance_id": "", - "num_shards": -1 - }, - { - "reshard_status": "not-resharding", - "new_bucket_instance_id": "", - "num_shards": -1 - } - ] - - -Cancel pending bucket resharding +#. Before resharding occurred: + + :: + + [ + { + "reshard_status": "not-resharding", + "new_bucket_instance_id": "", + "num_shards": -1 + } + ] + +#. During resharding: + + :: + + [ + { + "reshard_status": "in-progress", + "new_bucket_instance_id": "1179f470-2ebf-4630-8ec3-c9922da887fd.8652.1", + "num_shards": 2 + }, + { + "reshard_status": "in-progress", + "new_bucket_instance_id": "1179f470-2ebf-4630-8ec3-c9922da887fd.8652.1", + "num_shards": 2 + } + ] + +#. After resharding completed: + + :: + + [ + { + "reshard_status": "not-resharding", + "new_bucket_instance_id": "", + "num_shards": -1 + }, + { + "reshard_status": "not-resharding", + "new_bucket_instance_id": "", + "num_shards": -1 + } + ] + + +Cancel Pending Bucket Resharding -------------------------------- -Note: Bucket resharding tasks cannot be cancelled once they start executing. :: +.. note:: - # radosgw-admin reshard cancel --bucket + Bucket resharding tasks cannot be canceled once they transition to + the ``in-progress`` state from the initial ``not-resharding`` state. -Manual immediate bucket resharding +.. prompt:: bash # + + radosgw-admin reshard cancel --bucket + +Manual Immediate Bucket Resharding ---------------------------------- -:: +.. 
prompt:: bash # - # radosgw-admin bucket reshard --bucket --num-shards + radosgw-admin bucket reshard --bucket --num-shards When choosing a number of shards, the administrator must anticipate each bucket's peak number of objects. Ideally one should aim for no @@ -166,12 +174,12 @@ since the former is prime. A variety of web sites have lists of prime numbers; search for "list of prime numbers" with your favorite search engine to locate some web sites. -Setting a bucket's minimum number of shards +Setting a Bucket's Minimum Number of Shards ------------------------------------------- -:: +.. prompt:: bash # - # radosgw-admin bucket set-min-shards --bucket --num-shards + radosgw-admin bucket set-min-shards --bucket --num-shards Since dynamic resharding can now reduce the number of shards, administrators may want to prevent the number of shards from becoming @@ -185,27 +193,28 @@ Troubleshooting Clusters prior to Luminous 12.2.11 and Mimic 13.2.5 left behind stale bucket instance entries, which were not automatically cleaned up. This issue also affected -LifeCycle policies, which were no longer applied to resharded buckets. Both of -these issues could be worked around by running ``radosgw-admin`` commands. +lifecycle policies, which were no longer applied to resharded buckets. Both of +these issues can be remediated by running ``radosgw-admin`` commands. -Stale instance management +Stale Instance Management ------------------------- -List the stale instances in a cluster that are ready to be cleaned up. +List the stale instances in a cluster that may be cleaned up: -:: +.. prompt:: bash # - # radosgw-admin reshard stale-instances list + radosgw-admin reshard stale-instances list -Clean up the stale instances in a cluster. Note: cleanup of these -instances should only be done on a single-site cluster. +Clean up the stale instances in a cluster: -:: +.. prompt:: bash # - # radosgw-admin reshard stale-instances delete + radosgw-admin reshard stale-instances delete +.. 
note:: Cleanup of stale instances should not be done in a multisite deployment. -Lifecycle fixes + +Lifecycle Fixes --------------- For clusters with resharded instances, it is highly likely that the old @@ -217,15 +226,15 @@ resharding must be fixed manually. The command to do so is: -:: +.. prompt:: bash # - # radosgw-admin lc reshard fix --bucket {bucketname} + radosgw-admin lc reshard fix --bucket {bucketname} If the ``--bucket`` argument is not provided, this command will try to fix lifecycle policies for all the buckets in the cluster. -Object Expirer fixes +Object Expirer Fixes -------------------- Objects subject to Swift object expiration on older clusters may have @@ -237,17 +246,17 @@ objects, ``radosgw-admin`` provides two subcommands. Listing: -:: +.. prompt:: bash # - # radosgw-admin objects expire-stale list --bucket {bucketname} + radosgw-admin objects expire-stale list --bucket {bucketname} Displays a list of object names and expiration times in JSON format. Deleting: -:: - - # radosgw-admin objects expire-stale rm --bucket {bucketname} +.. prompt:: bash # + + radosgw-admin objects expire-stale rm --bucket {bucketname} Initiates deletion of such objects, displaying a list of object names, expiration times, and deletion status in JSON format. -- 2.39.5
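Beyond the markup changes, the ``reshard status`` examples in this patch document a stable JSON shape: one object per shard, each carrying the ``reshard_status``, ``new_bucket_instance_id``, and ``num_shards`` properties. A minimal Python sketch of consuming that output, assuming exactly the shape shown in the patched examples; ``resharding_in_progress`` is a hypothetical helper for illustration, not part of ``radosgw-admin``:

```python
import json

# Sample output copied verbatim from the "During resharding" example in the
# patched documentation; in practice this string would come from running:
#   radosgw-admin reshard status --bucket <bucket>
status_json = """
[
    {
        "reshard_status": "in-progress",
        "new_bucket_instance_id": "1179f470-2ebf-4630-8ec3-c9922da887fd.8652.1",
        "num_shards": 2
    },
    {
        "reshard_status": "in-progress",
        "new_bucket_instance_id": "1179f470-2ebf-4630-8ec3-c9922da887fd.8652.1",
        "num_shards": 2
    }
]
"""


def resharding_in_progress(output: str) -> bool:
    """Return True if any per-shard entry reports an active reshard."""
    return any(shard["reshard_status"] == "in-progress"
               for shard in json.loads(output))


print(resharding_in_progress(status_json))  # True for the sample above
```

Before and after a reshard, every entry reports ``not-resharding`` (with ``num_shards`` of ``-1``), so the helper returns ``False``.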