Add labels for doc top and CRUSH MSR in crush-map.rst.
Add a "see more" link from crush-map.rst to crush-map-edits.rst.
Use :ref: for linking in a few related files where labels were added or
already existed.
Signed-off-by: Ville Ojamo <14869000+bluikko@users.noreply.github.com>
the CRUSH map, add the OSD to the device list, add the host as a bucket (if
it is not already in the CRUSH map), add the device as an item in the host,
assign the device a weight, recompile the CRUSH map, and set the CRUSH map.
- For details, see `Add/Move an OSD`_. This is rarely necessary with recent
+ For details, see :ref:`addosd`. This is rarely necessary with recent
releases (this sentence was written the month that Reef was released).
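
A minimal runtime sketch that covers the bucket/item/weight steps without
recompiling the map (the OSD name, host bucket, and weight below are
illustrative, not taken from this procedure):

.. prompt:: bash $

   ceph osd crush add osd.1 1.0 host=node1
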
degraded objects`` and then return to ``active+clean`` when migration
completes. When you are finished observing, press Ctrl-C to exit.
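
The watch command referred to above is presumably the plain ``ceph`` tool in
watch mode, for example:

.. prompt:: bash $

   ceph -w
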
-.. _Add/Move an OSD: ../crush-map#addosd
.. _ceph: ../monitoring
``ceph osd purge`` command. Instead, carry out the following procedure:
#. Remove the OSD from the CRUSH map so that it no longer receives data (for
- more details, see `Remove an OSD`_):
+ more details, see :ref:`removeosd`):
.. prompt:: bash $
ceph osd rm 1
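
For the CRUSH map step itself, a runtime sketch (the OSD name ``osd.1`` is
illustrative) would be:

.. prompt:: bash $

   ceph osd crush remove osd.1
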
-.. _Remove an OSD: ../crush-map#removeosd
In the standard storage scenario, you can set up a CRUSH rule to establish
the failure domain (e.g., osd, host, chassis, rack, row, etc.). Ceph OSD
Daemons perform optimally when all storage drives in the rule are of the
-same size, speed (both RPMs and throughput) and type. See `CRUSH Maps`_
+same size, speed (both RPMs and throughput) and type. See :ref:`rados-crush-map`
for details on creating a rule. Once you have created a rule, create
a backing storage pool.
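
As a sketch, a replicated rule and its backing pool can be created at runtime
(the rule name, root, failure domain, and PG counts below are illustrative):

.. prompt:: bash $

   ceph osd crush rule create-replicated cold-storage-rule default host
   ceph osd pool create cold-storage 128 128 replicated cold-storage-rule
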
In the erasure coding scenario, the pool creation arguments will generate the
-appropriate rule automatically. See `Create a Pool`_ for details.
+appropriate rule automatically. See :ref:`createpool` for details.
In subsequent examples, we will refer to the backing storage pool
as ``cold-storage``.
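
For example, assuming the default erasure-code profile (the PG counts are
illustrative):

.. prompt:: bash $

   ceph osd pool create cold-storage 128 128 erasure
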
the backing pool as ``cold-storage``.
For cache tier configuration and default values, see
-`Pools - Set Pool Values`_.
+:ref:`setpoolvalues`.
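
For orientation, a single cache-tier value is adjusted with the usual
pool-value syntax; a sketch assuming a cache pool named ``hot-storage``:

.. prompt:: bash $

   ceph osd pool set hot-storage hit_set_type bloom
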
Creating a Cache Tier
history of this issue.
-.. _Create a Pool: ../pools#create-a-pool
-.. _Pools - Set Pool Values: ../pools#set-pool-values
.. _Bloom Filter: https://en.wikipedia.org/wiki/Bloom_filter
-.. _CRUSH Maps: ../crush-map
.. _Absolute Sizing: #absolute-sizing
+.. _rados-crush-map-edits:
+
Manually editing the CRUSH Map
==============================
#. `Recompile`_ the CRUSH map.
#. `Set the CRUSH map`_.
-For details on setting the CRUSH map rule for a specific pool, see `Set Pool
-Values`_.
+For details on setting the CRUSH map rule for a specific pool,
+see :ref:`setpoolvalues`.
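
In outline, the edit round trip and the per-pool rule assignment look like
this (the file names and the ``{pool-name}``/``{rule-name}`` placeholders are
illustrative):

.. prompt:: bash $

   ceph osd getcrushmap -o crushmap.bin
   crushtool -d crushmap.bin -o crushmap.txt
   crushtool -c crushmap.txt -o crushmap-new.bin
   ceph osd setcrushmap -i crushmap-new.bin
   ceph osd pool set {pool-name} crush_rule {rule-name}
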
.. _Get the CRUSH map: #getcrushmap
.. _Decompile: #decompilecrushmap
.. _Rules: #crushmaprules
.. _Recompile: #compilecrushmap
.. _Set the CRUSH map: #setcrushmap
-.. _Set Pool Values: ../pools#setpoolvalues
.. _getcrushmap:
+.. _rados-crush-map:
+
============
CRUSH Maps
============
that they will govern (replicated or erasure coded), the *failure domain*, and
optionally a *device class*. In rare cases, CRUSH rules must be created by
manually editing the CRUSH map.
+For more information, see :ref:`rados-crush-map-edits`.
To see the rules that are defined for the cluster, run the following command:
argument is omitted, then Ceph will create the CRUSH rule automatically.
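
For reference, existing rules can be listed and inspected with the following
commands (shown here as a sketch, not as the exact text of this page):

.. prompt:: bash $

   ceph osd crush rule ls
   ceph osd crush rule dump
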
+.. _rados-crush-msr-rules:
+
CRUSH MSR Rules
---------------
storing objects. Pools manage the number of placement groups, the number of
replicas, and the CRUSH rule for the pool. To store data in a pool, it is
necessary to be an authenticated user with permissions for the pool. Ceph is
- able to make snapshots of pools. For additional details, see `Pools`_.
+ able to make snapshots of pools. For additional details, see :ref:`rados_pools`.
- **Placement Groups:** Ceph maps objects to placement groups. Placement
groups (PGs) are shards or fragments of a logical object pool that place
topology of the cluster to the CRUSH algorithm, so that it can determine both
(1) where the data for an object and its replicas should be stored and (2)
how to store that data across failure domains so as to improve data safety.
- For additional details, see `CRUSH Maps`_.
+ For additional details, see :ref:`rados-crush-map`.
- **Balancer:** The balancer is a feature that automatically optimizes the
distribution of placement groups across devices in order to achieve a
data-placement operations with reference to the different roles played by
pools, placement groups, and CRUSH.
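
For orientation, each of these components can be queried directly on a running
cluster:

.. prompt:: bash $

   ceph osd lspools
   ceph pg stat
   ceph osd crush rule ls
   ceph balancer status
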
-.. _Pools: ../pools
-.. _CRUSH Maps: ../crush-map
-.. _Balancer: ../balancer
:ref:`Placement Group Link <pgcalc>`
Setting the initial number of PGs in a pool is done implicitly or explicitly
-at the time a pool is created. See `Create a Pool`_ for details.
+at the time a pool is created. See :ref:`createpool` for details.
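
For example, to create a pool with an explicit initial PG count and confirm it
(the pool name and count are illustrative):

.. prompt:: bash $

   ceph osd pool create mypool 128
   ceph osd pool get mypool pg_num
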
However, after a pool is created, if the ``pg_autoscaler`` is not being
used to manage ``pg_num`` values, you can change the number of PGs by running a
pg-concepts
-.. _Create a Pool: ../pools#createpool
.. _Mapping PGs to OSDs: ../../../architecture#mapping-pgs-to-osds