their descriptions, types, default and currently set values. These may be edited as well.
* **Pools**: List Ceph pools and their details (e.g. applications,
pg-autoscaling, placement groups, replication size, EC profile, CRUSH
- rulesets, quotas etc.)
+ rules, quotas etc.)
* **OSDs**: List OSDs, their status and usage statistics as well as
detailed information like attributes (OSD map), metadata, performance
counters and usage histograms for read/write operations. Mark OSDs
For example, imagine you have an existing rule like::
- rule replicated_ruleset {
+ rule replicated_rule {
id 0
type replicated
min_size 1
If you reclassify the root ``default`` as class ``hdd``, the rule will
become::
- rule replicated_ruleset {
+ rule replicated_rule {
id 0
type replicated
min_size 1
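This reclassification is performed offline with ``crushtool``; a sketch of the workflow, assuming the file names are illustrative:

```shell
# Extract the cluster's current CRUSH map to a local file
ceph osd getcrushmap -o original.bin

# Reclassify the root "default" to device class "hdd"; this only
# rewrites the local file, the cluster is not modified
crushtool -i original.bin --reclassify \
          --reclassify-root default hdd \
          -o adjusted.bin

# Check how many PG mappings would change before injecting the new map
crushtool -i original.bin --compare adjusted.bin
```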
$ ceph osd crush rule dump erasurepool
{ "rule_id": 1,
"rule_name": "erasurepool",
- "ruleset": 1,
"type": 3,
"min_size": 3,
"max_size": 20,
modify the Ceph cluster and only work on local files::
$ ceph osd crush rule dump erasurepool
- { "rule_name": "erasurepool",
- "ruleset": 1,
+ { "rule_id": 1,
+ "rule_name": "erasurepool",
"type": 3,
"min_size": 3,
"max_size": 20,
bad mapping rule 8 x 173 num_rep 9 result [0,4,6,8,2,1,3,7,2147483647]
Where ``--num-rep`` is the number of OSDs the erasure code CRUSH
-rule needs, ``--rule`` is the value of the ``ruleset`` field
+rule needs, ``--rule`` is the value of the ``rule_id`` field
displayed by ``ceph osd crush rule dump``. The test will try mapping
one million values (i.e. the range defined by ``[--min-x,--max-x]``)
and must display at least one bad mapping. If it outputs nothing it
like::
rule erasurepool {
- ruleset 1
+ id 1
type erasure
min_size 3
max_size 20
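Putting the pieces together, a ``crushtool --test`` invocation for the rule above (``rule_id`` 1, nine OSDs per placement) might look like this; the map file name is illustrative:

```shell
# Map one million values (x in [1, 1048576]) through rule 1 with
# num_rep 9; only bad mappings are printed
crushtool -i crush.map --test --show-bad-mappings \
          --rule 1 --num-rep 9 \
          --min-x 1 --max-x 1048576
```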