From 49af82c1a85721f34c5872250c22092418caa4f5 Mon Sep 17 00:00:00 2001
From: Ville Ojamo <14869000+bluikko@users.noreply.github.com>
Date: Fri, 16 Jan 2026 13:47:44 +0700
Subject: [PATCH] doc/rados: use ref for links and improve links in operations

Add labels for doc top and CRUSH MSR in crush-map.rst.
Add a see more link to crush-map-edits.rst from crush-map.rst.
Use ref for linking if labels were added or existed already in a few
related files.

Signed-off-by: Ville Ojamo <14869000+bluikko@users.noreply.github.com>
---
 doc/rados/operations/add-or-rm-osds.rst   | 6 ++----
 doc/rados/operations/cache-tiering.rst    | 9 +++------
 doc/rados/operations/crush-map-edits.rst  | 7 ++++---
 doc/rados/operations/crush-map.rst        | 5 +++++
 doc/rados/operations/data-placement.rst   | 7 ++-----
 doc/rados/operations/placement-groups.rst | 3 +--
 6 files changed, 17 insertions(+), 20 deletions(-)

diff --git a/doc/rados/operations/add-or-rm-osds.rst b/doc/rados/operations/add-or-rm-osds.rst
index 461d238e1950e..63dc4e66a36e3 100644
--- a/doc/rados/operations/add-or-rm-osds.rst
+++ b/doc/rados/operations/add-or-rm-osds.rst
@@ -151,7 +151,7 @@ cluster have and therefore might have greater weight as well.
    the CRUSH map, add the OSD to the device list, add the host as a bucket (if
    it is not already in the CRUSH map), add the device as an item in the host,
    assign the device a weight, recompile the CRUSH map, and set the CRUSH map.
-   For details, see `Add/Move an OSD`_. This is rarely necessary with recent
+   For details, see :ref:`addosd`. This is rarely necessary with recent
    releases (this sentence was written the month that Reef was released).
 
 
@@ -247,7 +247,6 @@ The PG states will first change from ``active+clean`` to ``active, some
 degraded objects`` and then return to ``active+clean`` when migration
 completes. When you are finished observing, press Ctrl-C to exit.
 
-.. _Add/Move an OSD: ../crush-map#addosd
 .. _ceph: ../monitoring
 
 
@@ -383,7 +382,7 @@ If your Ceph cluster is older than Luminous, you will be unable to use the
 ``ceph osd purge`` command. Instead, carry out the following procedure:
 
 #. Remove the OSD from the CRUSH map so that it no longer receives data (for
-   more details, see `Remove an OSD`_):
+   more details, see :ref:`removeosd`):
 
    .. prompt:: bash $
 
@@ -414,4 +413,3 @@ If your Ceph cluster is older than Luminous, you will be unable to use the
 
       ceph osd rm 1
 
-.. _Remove an OSD: ../crush-map#removeosd
diff --git a/doc/rados/operations/cache-tiering.rst b/doc/rados/operations/cache-tiering.rst
index ae46c3bcd5066..36076360b5c59 100644
--- a/doc/rados/operations/cache-tiering.rst
+++ b/doc/rados/operations/cache-tiering.rst
@@ -181,12 +181,12 @@ Setting up a backing storage pool typically involves one of two scenarios:
 In the standard storage scenario, you can setup a CRUSH rule to establish
 the failure domain (e.g., osd, host, chassis, rack, row, etc.). Ceph OSD
 Daemons perform optimally when all storage drives in the rule are of the
-same size, speed (both RPMs and throughput) and type. See `CRUSH Maps`_
+same size, speed (both RPMs and throughput) and type. See :ref:`rados-crush-map`
 for details on creating a rule. Once you have created a rule, create
 a backing storage pool.
 
 In the erasure coding scenario, the pool creation arguments will generate the
-appropriate rule automatically. See `Create a Pool`_ for details.
+appropriate rule automatically. See :ref:`createpool` for details.
 
 In subsequent examples, we will refer to the backing storage pool as
 ``cold-storage``.
@@ -207,7 +207,7 @@ In subsequent examples, we will refer to the cache pool as ``hot-storage`` and
 the backing pool as ``cold-storage``.
 
 For cache tier configuration and default values, see
-`Pools - Set Pool Values`_.
+:ref:`setpoolvalues`.
 
 
 Creating a Cache Tier
@@ -610,8 +610,5 @@ See `Tracker Issue #44286 `_ for the
 history of this issue.
 
 
-.. _Create a Pool: ../pools#create-a-pool
-.. _Pools - Set Pool Values: ../pools#set-pool-values
 .. _Bloom Filter: https://en.wikipedia.org/wiki/Bloom_filter
-.. _CRUSH Maps: ../crush-map
 .. _Absolute Sizing: #absolute-sizing
diff --git a/doc/rados/operations/crush-map-edits.rst b/doc/rados/operations/crush-map-edits.rst
index 3b358f007b757..8dd18e6ce0dfd 100644
--- a/doc/rados/operations/crush-map-edits.rst
+++ b/doc/rados/operations/crush-map-edits.rst
@@ -1,3 +1,5 @@
+.. _rados-crush-map-edits:
+
 Manually editing the CRUSH Map
 ==============================
 
@@ -17,8 +19,8 @@ To edit an existing CRUSH map, carry out the following procedure:
 #. `Recompile`_ the CRUSH map.
 #. `Set the CRUSH map`_.
 
-For details on setting the CRUSH map rule for a specific pool, see `Set Pool
-Values`_.
+For details on setting the CRUSH map rule for a specific pool,
+see :ref:`setpoolvalues`.
 
 .. _Get the CRUSH map: #getcrushmap
 .. _Decompile: #decompilecrushmap
@@ -27,7 +29,6 @@ Values`_.
 .. _Rules: #crushmaprules
 .. _Recompile: #compilecrushmap
 .. _Set the CRUSH map: #setcrushmap
-.. _Set Pool Values: ../pools#setpoolvalues
 
 .. _getcrushmap:
 
diff --git a/doc/rados/operations/crush-map.rst b/doc/rados/operations/crush-map.rst
index 1f74de5124129..b0020b9ef7bf7 100644
--- a/doc/rados/operations/crush-map.rst
+++ b/doc/rados/operations/crush-map.rst
@@ -1,3 +1,5 @@
+.. _rados-crush-map:
+
 ============
  CRUSH Maps
 ============
@@ -217,6 +219,7 @@ CRUSH rules can be created via the command-line by specifying the *pool type*
 that they will govern (replicated or erasure coded), the *failure domain*, and
 optionally a *device class*. In rare cases, CRUSH rules must be created by
 manually editing the CRUSH map.
+For more information, see :ref:`rados-crush-map-edits`.
 
 To see the rules that are defined for the cluster, run the following command:
 
@@ -752,6 +755,8 @@ The relevant erasure-code profile properties are as follows:
    argument is omitted, then Ceph will create the CRUSH rule automatically.
 
 
+.. _rados-crush-msr-rules:
+
 CRUSH MSR Rules
 ---------------
 
diff --git a/doc/rados/operations/data-placement.rst b/doc/rados/operations/data-placement.rst
index 3d3be65ec0874..3e94af8fb8bf9 100644
--- a/doc/rados/operations/data-placement.rst
+++ b/doc/rados/operations/data-placement.rst
@@ -12,7 +12,7 @@ include:
   storing objects. Pools manage the number of placement groups, the number of
   replicas, and the CRUSH rule for the pool. To store data in a pool, it is
   necessary to be an authenticated user with permissions for the pool. Ceph is
-  able to make snapshots of pools. For additional details, see `Pools`_.
+  able to make snapshots of pools. For additional details, see :ref:`rados_pools`.
 
 - **Placement Groups:** Ceph maps objects to placement groups. Placement
   groups (PGs) are shards or fragments of a logical object pool that place
@@ -28,7 +28,7 @@ include:
   topology of the cluster to the CRUSH algorithm, so that it can determine both
   (1) where the data for an object and its replicas should be stored and (2)
   how to store that data across failure domains so as to improve data safety.
-  For additional details, see `CRUSH Maps`_.
+  For additional details, see :ref:`rados-crush-map`.
 
 - **Balancer:** The balancer is a feature that automatically optimizes the
   distribution of placement groups across devices in order to achieve a
@@ -42,6 +42,3 @@ when planning a large Ceph cluster, values should be customized for
 data-placement operations with reference to the different roles played by
 pools, placement groups, and CRUSH.
 
-.. _Pools: ../pools
-.. _CRUSH Maps: ../crush-map
-.. _Balancer: ../balancer
diff --git a/doc/rados/operations/placement-groups.rst b/doc/rados/operations/placement-groups.rst
index c7a62beb1e12a..5468ca31a8711 100644
--- a/doc/rados/operations/placement-groups.rst
+++ b/doc/rados/operations/placement-groups.rst
@@ -687,7 +687,7 @@ Setting the Number of PGs
 :ref:`Placement Group Link `
 
 Setting the initial number of PGs in a pool is done implicitly or explicitly
-at the time a pool is created. See `Create a Pool`_ for details.
+at the time a pool is created. See :ref:`createpool` for details.
 
 However, after a pool is created, if the ``pg_autoscaler`` is not being used
 to manage ``pg_num`` values, you can change the number of PGs by running a
@@ -975,5 +975,4 @@ about it entirely (if it is too new to have a previous version). To mark the
    pg-concepts
 
 
-.. _Create a Pool: ../pools#createpool
 .. _Mapping PGs to OSDs: ../../../architecture#mapping-pgs-to-osds
-- 
2.47.3
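
Note for reviewers: the patch replaces relative-URL hyperlink targets (for
example ``../crush-map#addosd``) with Sphinx ``:ref:`` cross-references, which
keep working if the target files are moved or renamed. Below is a minimal
sketch of the pattern, combining the ``rados-crush-map`` label added in
crush-map.rst with the kind of reference used in data-placement.rst; the
explicit link-text form on the last line is an illustration only and is not
part of this patch:

    .. Define the label directly above the title it anchors (crush-map.rst).
    .. _rados-crush-map:

    ============
     CRUSH Maps
    ============

    .. Reference it from any other document (e.g. data-placement.rst);
    .. Sphinx resolves the target by label, without a relative path.
    For additional details, see :ref:`rados-crush-map`.

    .. Optionally override the link text instead of using the section title.
    See the :ref:`CRUSH Maps documentation <rados-crush-map>`.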