From: Sage Weil
Date: Sat, 26 Jun 2021 15:23:38 +0000 (-0400)
Subject: doc: scrub 'ruleset' from docs
X-Git-Tag: v17.1.0~1398^2~15
X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=8976fcf476f2567dcbe83517946b77517712739a;p=ceph-ci.git

doc: scrub 'ruleset' from docs

Signed-off-by: Sage Weil
---

diff --git a/doc/mgr/dashboard.rst b/doc/mgr/dashboard.rst
index 32d02fd4a6b..e7e528d392d 100644
--- a/doc/mgr/dashboard.rst
+++ b/doc/mgr/dashboard.rst
@@ -83,7 +83,7 @@ The Ceph Dashboard offers the following monitoring and management capabilities:
   their descriptions, types, default and currently set values. These may be
   edited as well.
 * **Pools**: List Ceph pools and their details (e.g. applications,
   pg-autoscaling, placement groups, replication size, EC profile, CRUSH
-  rulesets, quotas etc.)
+  rules, quotas etc.)
 * **OSDs**: List OSDs, their status and usage statistics as well as
   detailed information like attributes (OSD map), metadata, performance
   counters and usage histograms for read/write operations. Mark OSDs
diff --git a/doc/rados/operations/crush-map-edits.rst b/doc/rados/operations/crush-map-edits.rst
index aea48ae7d5b..cb8771ac2c7 100644
--- a/doc/rados/operations/crush-map-edits.rst
+++ b/doc/rados/operations/crush-map-edits.rst
@@ -552,7 +552,7 @@ There are three types of transformations possible:
 
    For example, imagine you have an existing rule like::
 
-     rule replicated_ruleset {
+     rule replicated_rule {
        id 0
        type replicated
        min_size 1
@@ -565,7 +565,7 @@ There are three types of transformations possible:
    If you reclassify the root `default` as class `hdd`, the rule will
    become::
 
-     rule replicated_ruleset {
+     rule replicated_rule {
        id 0
        type replicated
        min_size 1
diff --git a/doc/rados/troubleshooting/troubleshooting-pg.rst b/doc/rados/troubleshooting/troubleshooting-pg.rst
index 60796b4a8a4..15b0c6894c4 100644
--- a/doc/rados/troubleshooting/troubleshooting-pg.rst
+++ b/doc/rados/troubleshooting/troubleshooting-pg.rst
@@ -522,7 +522,6 @@ the rule::
     $ ceph osd crush rule dump erasurepool
     { "rule_id": 1,
       "rule_name": "erasurepool",
-      "ruleset": 1,
       "type": 3,
       "min_size": 3,
       "max_size": 20,
@@ -566,8 +565,8 @@ extracting the crushmap from the cluster so your experiments do not
 modify the Ceph cluster and only work on a local files::
 
     $ ceph osd crush rule dump erasurepool
-    { "rule_name": "erasurepool",
-      "ruleset": 1,
+    { "rule_id": 1,
+      "rule_name": "erasurepool",
       "type": 3,
       "min_size": 3,
       "max_size": 20,
@@ -590,7 +589,7 @@ modify the Ceph cluster and only work on a local files::
     bad mapping rule 8 x 173 num_rep 9 result [0,4,6,8,2,1,3,7,2147483647]
 
 Where ``--num-rep`` is the number of OSDs the erasure code CRUSH
-rule needs, ``--rule`` is the value of the ``ruleset`` field
+rule needs, ``--rule`` is the value of the ``rule_id`` field
 displayed by ``ceph osd crush rule dump``. The test will try
 mapping one million values (i.e. the range defined by
 ``[--min-x,--max-x]``) and must display at least one bad mapping. If it outputs nothing it
@@ -609,7 +608,7 @@ The relevant part of the ``crush.txt`` file should look something like::
 
     rule erasurepool {
-            ruleset 1
+            id 1
             type erasure
             min_size 3
             max_size 20
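
For context, the rules renamed in the crush-map-edits.rst hunks belong to the
reclassify discussion on that page. A minimal sketch of the workflow it
describes, assuming the compiled map lives in original.map and the entire
`default` root is assigned to the `hdd` class (file names here are
illustrative, not from the commit):

    # Rewrite the map so the root 'default' is treated as device class
    # 'hdd', per the reclassify section the hunks above touch.
    $ crushtool -i original.map --reclassify \
          --reclassify-root default hdd \
          -o adjusted.map

    # Verify the old and new maps produce equivalent mappings.
    $ crushtool -i original.map --compare adjusted.map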
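
Likewise, the troubleshooting-pg.rst hunks now point ``--rule`` at the
``rule_id`` field. The local test loop that page walks through might look
like the following, a sketch assuming rule_id 1 and an erasure profile
needing 9 OSDs, as in the dumped rule above; the file names are
illustrative:

    # Grab the cluster's CRUSH map and decompile it to text, so the
    # experiments touch only local files, never the live cluster.
    $ ceph osd getcrushmap -o crush.map
    $ crushtool -d crush.map -o crush.txt

    # After editing crush.txt, recompile it and test the rule by its
    # rule_id, printing any bad (incomplete) mappings over one million
    # input values.
    $ crushtool -c crush.txt -o crush.map
    $ crushtool -i crush.map --test --show-bad-mappings \
          --rule 1 --num-rep 9 --min-x 1 --max-x 1000000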