method of storing and retrieving data, Ceph avoids a single point of failure, a
performance bottleneck, and a physical limit to its scalability.
CRUSH requires a map of your cluster, and uses the CRUSH map to pseudo-randomly
store and retrieve data in OSDs with a uniform distribution of data across the
cluster. For a detailed discussion of CRUSH, see
`CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data`_
CRUSH maps contain a list of :abbr:`OSDs (Object Storage Devices)`, a list of
'buckets' for aggregating the devices into physical locations, and a list of
rules that tell CRUSH how it should replicate data in a Ceph cluster's pools.
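If you want to examine the map itself, the compiled CRUSH map can be extracted
from the cluster and decompiled into text. A minimal sketch (the file names
``crushmap.bin`` and ``crushmap.txt`` are arbitrary)::

    # Extract the compiled (binary) CRUSH map from the cluster
    ceph osd getcrushmap -o crushmap.bin
    # Decompile it into a human-readable text file
    crushtool -d crushmap.bin -o crushmap.txt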
#. Note that the order of the keys does not matter.
#. The key name (left of ``=``) must be a valid CRUSH ``type``. By default
   these include root, datacenter, room, row, pod, pdu, rack, chassis and host,
   but those types can be customized to be anything appropriate by modifying
the CRUSH map.
#. Not all keys need to be specified. For example, by default, Ceph
   automatically sets a ``ceph-osd`` daemon's location to be
   ``root=default host=HOSTNAME`` (based on the output of ``hostname -s``);
   an explicit declaration is sketched in the example just after this list.
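Assuming the ``crush location`` configuration option available in recent
releases, a daemon's location can be declared explicitly in ``ceph.conf``;
the bucket names below are illustrative, not defaults::

    [osd.0]
    # Override the automatic root=default host=HOSTNAME placement
    # with an explicit chain of buckets.
    crush location = root=default datacenter=dc1 room=room1 row=foo rack=bar host=foo-bar-1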
                     +-----------------+
                     | {o}root default |
                     +--------+--------+
                              |
              +---------------+---------------+
              |                               |
      +-------+-------+               +-------+-------+
      | {o}host foo   |               | {o}host bar   |
      +-------+-------+               +-------+-------+
              |                               |
      +-------+-------+               +-------+-------+
      |               |               |               |
+-----+-----+   +-----+-----+   +-----+-----+   +-----+-----+
| {o}osd.0  |   | {o}osd.1  |   | {o}osd.2  |   | {o}osd.3  |
+-----+-----+   +-----+-----+   +-----+-----+   +-----+-----+
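A small hierarchy like the one pictured above could be assembled by hand with
the CRUSH CLI. This is only a sketch: the bucket names come from the diagram
and the weight of ``1.0`` is a placeholder::

    # Create the host buckets and attach them to the default root
    ceph osd crush add-bucket foo host
    ceph osd crush add-bucket bar host
    ceph osd crush move foo root=default
    ceph osd crush move bar root=default
    # Place the OSDs under their hosts
    ceph osd crush add osd.0 1.0 host=foo
    ceph osd crush add osd.1 1.0 host=foo
    ceph osd crush add osd.2 1.0 host=bar
    ceph osd crush add osd.3 1.0 host=bar
    # Verify the resulting hierarchy
    ceph osd crush tree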
CRUSH rules can normally be created via the CLI by specifying the *pool type*
they will be used for (replicated or erasure coded), the *failure domain*, and
optionally a *device class*. In rare cases rules must be written by hand by
directly editing the CRUSH map.
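For example, a replicated rule that selects one OSD per host and is restricted
to a device class can be created in one command. The rule name ``fast-ssd``,
the root ``default``, and the device class ``ssd`` here are illustrative::

    # create-replicated <rule-name> <root> <failure-domain-type> [<device-class>]
    ceph osd crush rule create-replicated fast-ssd default host ssd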
You can see what rules are defined for your cluster with::

    ceph osd crush rule ls
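To inspect the compiled steps of a single rule (using the hypothetical
``fast-ssd`` rule from the sketch above), dump it as JSON::

    ceph osd crush rule dump fast-ssd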
``name``
:Description: The full name of the OSD.
:Type: String
:Required: Yes
:Example: ``osd.0``
``bucket-type``
:Description: You may specify the OSD's location in the CRUSH hierarchy.
:Type: Key/value pairs.
:Required: No
:Example: ``datacenter=dc1 room=room1 row=foo rack=bar host=foo-bar-1``
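The command these fields belong to is not shown above; assuming a placement
command such as ``ceph osd crush set``, which also takes a weight, a call
might look like this (all bucket names are illustrative and must already
exist in the map)::

    ceph osd crush set osd.0 1.0 root=default datacenter=dc1 room=room1 row=foo rack=bar host=foo-bar-1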
``name``
:Description: The full name of the OSD.
:Type: String
:Required: Yes
:Example: ``osd.0``
``weight``
:Description: The CRUSH weight for the OSD.
:Type: Double
:Required: Yes
:Example: ``2.0``
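A name/weight pair like this is what a weight-adjusting command such as
``ceph osd crush reweight`` expects; the value ``2.0`` below is illustrative::

    # Set the CRUSH weight of osd.0 to 2.0
    ceph osd crush reweight osd.0 2.0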
``name``
:Description: The full name of the OSD.
:Type: String
:Required: Yes
:Example: ``osd.0``
``bucket-type``
:Description: You may specify the bucket's location in the CRUSH hierarchy.
:Type: Key/value pairs.
:Required: No
:Example: ``datacenter=dc1 room=room1 row=foo rack=bar host=foo-bar-1``
ceph osd crush rule rm {rule-name}
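For example, to delete the hypothetical ``fast-ssd`` rule used in the sketches
above (a rule that is still referenced by a pool cannot be removed)::

    ceph osd crush rule rm fast-ssd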
.. _crush-map-tunables:

Tunables
========
CRUSH is sometimes unable to find a mapping. The optimal value (in
terms of computational cost and correctness) is 1.
Migration impact:
* For existing clusters that have lots of existing data, changing
  from 0 to 1 will cause a lot of data to move; a value of 4 or 5
  will allow CRUSH to still find a valid mapping but will result in
  less data movement.
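As a sketch of how tunables are inspected and adjusted (changing profiles on a
cluster with data can trigger significant data movement, so try this on a test
cluster first)::

    # Show the tunable values currently in effect
    ceph osd crush show-tunables
    # Switch the cluster to the optimal tunable profile
    ceph osd crush tunables optimal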