From: Kefu Chai
Date: Sat, 29 Aug 2020 15:24:22 +0000 (+0800)
Subject: doc: fix broken hyperlink
X-Git-Tag: v16.1.0~1255^2~4
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=84e2808afbf8ea1199e66dd803149b34e263a0cd;p=ceph.git

doc: fix broken hyperlink

This link was broken in commit 1427905c473e352e7cac1a9ac209cddb82544b57.

Signed-off-by: Kefu Chai
---

diff --git a/doc/rados/operations/cache-tiering.rst b/doc/rados/operations/cache-tiering.rst
index 237b6e3c9f37..8dec0ced6544 100644
--- a/doc/rados/operations/cache-tiering.rst
+++ b/doc/rados/operations/cache-tiering.rst
@@ -185,7 +185,7 @@ scenario, but with this difference: the drives for the cache tier are typically
 high performance drives that reside in their own servers and have their own
 CRUSH rule. When setting up such a rule, it should take account of the hosts
 that have the high performance drives while omitting the hosts that don't. See
-`Placing Different Pools on Different OSDs`_ for details.
+:ref:`CRUSH Device Class <crush-map-device-class>` for details.
 
 
 In subsequent examples, we will refer to the cache pool as ``hot-storage`` and
@@ -469,7 +469,6 @@ disable and remove it.
 
 .. _Create a Pool: ../pools#create-a-pool
 .. _Pools - Set Pool Values: ../pools#set-pool-values
-.. _Placing Different Pools on Different OSDs: ../crush-map-edits/#placing-different-pools-on-different-osds
 .. _Bloom Filter: https://en.wikipedia.org/wiki/Bloom_filter
 .. _CRUSH Maps: ../crush-map
 .. _Absolute Sizing: #absolute-sizing
diff --git a/doc/rados/operations/crush-map-edits.rst b/doc/rados/operations/crush-map-edits.rst
index 6b75cd24507a..452140a77b6c 100644
--- a/doc/rados/operations/crush-map-edits.rst
+++ b/doc/rados/operations/crush-map-edits.rst
@@ -113,6 +113,8 @@ will normally have one defined here for each OSD daemon in your cluster.
 Devices are identified by an id (a non-negative integer) and a name, normally
 ``osd.N`` where ``N`` is the device id.
 
+.. _crush-map-device-class:
+
 Devices may also have a *device class* associated with them (e.g., ``hdd`` or
 ``ssd``), allowing them to be conveniently targeted by a crush rule.
 
diff --git a/doc/start/hardware-recommendations.rst b/doc/start/hardware-recommendations.rst
index 42a0e1e57524..aa6c2b71f694 100644
--- a/doc/start/hardware-recommendations.rst
+++ b/doc/start/hardware-recommendations.rst
@@ -208,7 +208,7 @@ storage of CephFS metadata from the storage of the CephFS file contents.
 Ceph provides a default ``metadata`` pool for CephFS metadata. You will never
 have to create a pool for CephFS metadata, but you can create a CRUSH map
 hierarchy for your CephFS metadata pool that points only to a host's SSD
 storage media. See
-`Mapping Pools to Different Types of OSDs`_ for details.
+:ref:`CRUSH Device Class <crush-map-device-class>` for details.
 
 Controllers
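
The fix replaces relative-URL hyperlink targets, which break whenever the
target section is renamed or moved, with a Sphinx label-based cross-reference
that is resolved by name from anywhere in the documentation tree. A minimal
sketch of the mechanism, using only the label and role introduced by the
patch above:

    .. In doc/rados/operations/crush-map-edits.rst, declare the label:

    .. _crush-map-device-class:

    Devices may also have a *device class* associated with them (e.g.,
    ``hdd`` or ``ssd``), allowing them to be conveniently targeted by a
    crush rule.

    .. In any other .rst file, reference the label with explicit link text:

    See :ref:`CRUSH Device Class <crush-map-device-class>` for details.

Because the label does not immediately precede a section title, Sphinx cannot
derive link text from it automatically, so the ``:ref:`` role supplies its
own text in angle brackets.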