high-performance drives that reside in their own servers and have their own
CRUSH rule. Such a rule should include the hosts that have the
high-performance drives while omitting the hosts that don't. See
-`Placing Different Pools on Different OSDs`_ for details.
+:ref:`CRUSH Device Class<crush-map-device-class>` for details.
In subsequent examples, we will refer to the cache pool as ``hot-storage`` and
the backing pool as ``cold-storage``.
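As a sketch, such a rule can be created by targeting a device class and then
assigned to the cache pool; the rule name ``hot-storage-rule`` and the ``ssd``
class below are illustrative assumptions, not values from this document:

.. code-block:: console

   $ # Create a replicated rule that selects only OSDs of class "ssd",
   $ # using "host" as the failure domain (rule name is hypothetical).
   $ ceph osd crush rule create-replicated hot-storage-rule default host ssd
   $ # Point the cache pool at that rule.
   $ ceph osd pool set hot-storage crush_rule hot-storage-rule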
.. _Create a Pool: ../pools#create-a-pool
.. _Pools - Set Pool Values: ../pools#set-pool-values
-.. _Placing Different Pools on Different OSDs: ../crush-map-edits/#placing-different-pools-on-different-osds
.. _Bloom Filter: https://en.wikipedia.org/wiki/Bloom_filter
.. _CRUSH Maps: ../crush-map
.. _Absolute Sizing: #absolute-sizing
cluster. Devices are identified by an id (a non-negative integer) and
a name, normally ``osd.N`` where ``N`` is the device id.
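For illustration, the devices section of a decompiled CRUSH map lists one line
per device in this form (the ids shown here are hypothetical)::

   # devices
   device 0 osd.0
   device 1 osd.1
   device 2 osd.2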
+.. _crush-map-device-class:
+
Devices may also have a *device class* associated with them (e.g.,
``hdd`` or ``ssd``), allowing them to be conveniently targeted by a
CRUSH rule.
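A minimal sketch of managing a device class from the command line; ``osd.0``
and the ``ssd`` class are illustrative:

.. code-block:: console

   $ # If the OSD already has a class, remove it before setting a new one.
   $ ceph osd crush rm-device-class osd.0
   $ # Tag the OSD as an SSD so that class-aware rules can target it.
   $ ceph osd crush set-device-class ssd osd.0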
provides a default ``metadata`` pool for CephFS metadata. You will never have to
create a pool for CephFS metadata, but you can create a CRUSH map hierarchy for
your CephFS metadata pool that points only to a host's SSD storage media. See
-`Mapping Pools to Different Types of OSDs`_ for details.
+:ref:`CRUSH Device Class<crush-map-device-class>` for details.
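For example, a rule restricted to SSD-class OSDs could be created and assigned
to the metadata pool; the rule name ``cephfs-metadata-ssd`` and the pool name
``cephfs_metadata`` are assumptions, not values from this document:

.. code-block:: console

   $ # Create a replicated rule that selects only SSD-class OSDs
   $ # (rule name is hypothetical).
   $ ceph osd crush rule create-replicated cephfs-metadata-ssd default host ssd
   $ # Assign the rule to the CephFS metadata pool.
   $ ceph osd pool set cephfs_metadata crush_rule cephfs-metadata-ssd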
Controllers