From 84e2808afbf8ea1199e66dd803149b34e263a0cd Mon Sep 17 00:00:00 2001
From: Kefu Chai
Date: Sat, 29 Aug 2020 23:24:22 +0800
Subject: [PATCH] doc: fix broken hyperlink

this link was broken in 1427905c473e352e7cac1a9ac209cddb82544b57

Signed-off-by: Kefu Chai
---
 doc/rados/operations/cache-tiering.rst   | 3 +--
 doc/rados/operations/crush-map-edits.rst | 2 ++
 doc/start/hardware-recommendations.rst   | 2 +-
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/doc/rados/operations/cache-tiering.rst b/doc/rados/operations/cache-tiering.rst
index 237b6e3c9f376..8dec0ced65443 100644
--- a/doc/rados/operations/cache-tiering.rst
+++ b/doc/rados/operations/cache-tiering.rst
@@ -185,7 +185,7 @@ scenario, but with this difference: the drives for the cache tier are typically
 high performance drives that reside in their own servers and have their own
 CRUSH rule. When setting up such a rule, it should take account of the hosts
 that have the high performance drives while omitting the hosts that don't. See
-`Placing Different Pools on Different OSDs`_ for details.
+:ref:`CRUSH Device Class <crush-map-device-class>` for details.
 
 
 In subsequent examples, we will refer to the cache pool as ``hot-storage`` and
@@ -469,7 +469,6 @@ disable and remove it.
 
 .. _Create a Pool: ../pools#create-a-pool
 .. _Pools - Set Pool Values: ../pools#set-pool-values
-.. _Placing Different Pools on Different OSDs: ../crush-map-edits/#placing-different-pools-on-different-osds
 .. _Bloom Filter: https://en.wikipedia.org/wiki/Bloom_filter
 .. _CRUSH Maps: ../crush-map
 .. _Absolute Sizing: #absolute-sizing
diff --git a/doc/rados/operations/crush-map-edits.rst b/doc/rados/operations/crush-map-edits.rst
index 6b75cd24507a4..452140a77b6c2 100644
--- a/doc/rados/operations/crush-map-edits.rst
+++ b/doc/rados/operations/crush-map-edits.rst
@@ -113,6 +113,8 @@ will normally have one defined here for each OSD daemon in your cluster.
 Devices are identified by an id (a non-negative integer) and a name, normally
 ``osd.N`` where ``N`` is the device id.
 
+.. _crush-map-device-class:
+
 Devices may also have a *device class* associated with them (e.g., ``hdd`` or
 ``ssd``), allowing them to be conveniently targeted by a crush rule.
 
diff --git a/doc/start/hardware-recommendations.rst b/doc/start/hardware-recommendations.rst
index 42a0e1e575245..aa6c2b71f6944 100644
--- a/doc/start/hardware-recommendations.rst
+++ b/doc/start/hardware-recommendations.rst
@@ -208,7 +208,7 @@ storage of CephFS metadata from the storage of the CephFS file contents. Ceph
 provides a default ``metadata`` pool for CephFS metadata. You will never have
 to create a pool for CephFS metadata, but you can create a CRUSH map hierarchy
 for your CephFS metadata pool that points only to a host's SSD storage media. See
-`Mapping Pools to Different Types of OSDs`_ for details.
+:ref:`CRUSH Device Class <crush-map-device-class>` for details.
 
 
 Controllers
-- 
2.39.5
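
For context, the mechanism this patch switches to is Sphinx's explicit
cross-reference label: a ``.. _label-name:`` line becomes a stable anchor
that any page in the project can point at with the ``:ref:`` role, and
Sphinx emits an "undefined label" warning at build time if the target
vanishes, unlike the hard-coded relative URL being removed here, which
broke silently in 1427905c473e352e7cac1a9ac209cddb82544b57. A minimal
sketch of the two forms of the role follows; the section name and the
``example-section`` label are illustrative, and only
``crush-map-device-class`` comes from this patch:

    .. _example-section:

    Example Section
    ===============

A label placed directly before a section title may be referenced bare,
in which case Sphinx supplies the section title as the link text:

    See :ref:`example-section` for details.

A label placed before anything other than a section title must be given
an explicit link title, which is why the hunks above use the
``text <label>`` form: ``crush-map-device-class`` precedes a paragraph,
not a heading:

    See :ref:`CRUSH Device Class <crush-map-device-class>` for details.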