From b78bbfb1023d617bed5c1d5518c67ac2e28f00d7 Mon Sep 17 00:00:00 2001
From: Zac Dover
Date: Sat, 13 Aug 2022 07:53:21 +1000
Subject: [PATCH] doc/rados: add prompts to pools.rst

This commit adds ".. prompt:: bash $"-style prompts to pools.rst. This
brings this file up to the standard established in 2020, when Kefu
added support for the ".. prompt::" directive.

This commit is part of an initiative to modernize the presentation of
all BASH commands in the RADOS documentation. The progress of this
project can be tracked here: https://tracker.ceph.com/issues/57108

Signed-off-by: Zac Dover
(cherry picked from commit 1bd64192568242b141d8e30fef6758bf162ec350)
---
 doc/rados/operations/pools.rst | 118 ++++++++++++++++++++++-----------
 1 file changed, 80 insertions(+), 38 deletions(-)

diff --git a/doc/rados/operations/pools.rst b/doc/rados/operations/pools.rst
index 3b6d227aad74..58a3432b9f5f 100644
--- a/doc/rados/operations/pools.rst
+++ b/doc/rados/operations/pools.rst
@@ -45,10 +45,11 @@ operations. Please do not create or manipulate pools with these names.
 
 List Pools
 ==========
 
-To list your cluster's pools, execute::
+To list your cluster's pools, execute:
 
-    ceph osd lspools
+.. prompt:: bash $
 
+   ceph osd lspools
 
 .. _createpool:
@@ -64,12 +65,16 @@ For details on placement group numbers refer to `setting the number of placement
 application using the pool. See `Associate Pool to Application`_ below for
 more information.
 
-For example::
+For example:
+
+.. code-block:: ini
 
     osd_pool_default_pg_num = 128
     osd_pool_default_pgp_num = 128
 
-To create a pool, execute::
+To create a pool, execute:
+
+.. prompt:: bash $
 
     ceph osd pool create {pool-name} [{pg-num} [{pgp-num}]] [replicated] \
              [crush-rule-name] [expected-num-objects]
@@ -179,9 +184,11 @@ initialized using the ``rbd`` tool (see `Block Device Commands`_ for more
 information).
 
 For other cases, you can manually associate a free-form application name to
-a pool.::
+a pool:
+
+.. prompt:: bash $
 
-    ceph osd pool application enable {pool-name} {application-name}
+   ceph osd pool application enable {pool-name} {application-name}
 
 .. note:: CephFS uses the application name ``cephfs``, RBD uses the
    application name ``rbd``, and RGW uses the application name ``rgw``.
@@ -190,13 +197,17 @@
 Set Pool Quotas
 ===============
 
 You can set pool quotas for the maximum number of bytes and/or the maximum
-number of objects per pool. ::
+number of objects per pool:
 
-    ceph osd pool set-quota {pool-name} [max_objects {obj-count}] [max_bytes {bytes}]
+.. prompt:: bash $
 
-For example::
+   ceph osd pool set-quota {pool-name} [max_objects {obj-count}] [max_bytes {bytes}]
 
-    ceph osd pool set-quota data max_objects 10000
+For example:
+
+.. prompt:: bash $
+
+   ceph osd pool set-quota data max_objects 10000
 
 To remove a quota, set its value to ``0``.
@@ -204,9 +215,11 @@
 Delete a Pool
 =============
 
-To delete a pool, execute::
+To delete a pool, execute:
+
+.. prompt:: bash $
 
-    ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]
+   ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]
 
 To remove a pool the mon_allow_pool_delete flag must be set to true in the Monitor's
 configuration. Otherwise they will refuse to remove a pool.
@@ -217,11 +230,15 @@ See `Monitor Configuration`_ for more information.
 ..
 _Monitor Configuration: ../../configuration/mon-config-ref
 
 If you created your own rules for a pool you created, you should consider
-removing them when you no longer need your pool::
+removing them when you no longer need your pool:
+
+.. prompt:: bash $
 
-    ceph osd pool get {pool-name} crush_rule
+   ceph osd pool get {pool-name} crush_rule
 
-If the rule was "123", for example, you can check the other pools like so::
+If the rule was "123", for example, you can check the other pools like so:
+
+.. prompt:: bash $
 
     ceph osd dump | grep "^pool" | grep "crush_rule 123"
 
@@ -229,7 +246,10 @@ If no other pools use that custom rule, then it's safe to delete that rule from
 the cluster.
 
 If you created users with permissions strictly for a pool that no longer
-exists, you should consider deleting those users too::
+exists, you should consider deleting those users too:
+
+
+.. prompt:: bash $
 
     ceph auth ls | grep -C 5 {pool-name}
     ceph auth del {user}
@@ -238,9 +258,11 @@
 Rename a Pool
 =============
 
-To rename a pool, execute::
+To rename a pool, execute:
+
+.. prompt:: bash $
 
-    ceph osd pool rename {current-pool-name} {new-pool-name}
+   ceph osd pool rename {current-pool-name} {new-pool-name}
 
 If you rename a pool and you have per-pool capabilities for an authenticated
 user, you must update the user's capabilities (i.e., caps) with the new pool
@@ -249,28 +271,36 @@ name.
 Show Pool Statistics
 ====================
 
-To show a pool's utilization statistics, execute::
+To show a pool's utilization statistics, execute:
+
+.. prompt:: bash $
 
-    rados df
+   rados df
 
-Additionally, to obtain I/O information for a specific pool or all, execute::
+To obtain I/O information for a specific pool, or for all pools, execute:
+
+.. prompt:: bash $
 
-    ceph osd pool stats [{pool-name}]
+   ceph osd pool stats [{pool-name}]
 
 
 Make a Snapshot of a Pool
 =========================
 
-To make a snapshot of a pool, execute::
+To make a snapshot of a pool, execute:
+
+.. prompt:: bash $
 
-    ceph osd pool mksnap {pool-name} {snap-name}
+   ceph osd pool mksnap {pool-name} {snap-name}
 
 Remove a Snapshot of a Pool
 ===========================
 
-To remove a snapshot of a pool, execute::
+To remove a snapshot of a pool, execute:
+
+.. prompt:: bash $
 
-    ceph osd pool rmsnap {pool-name} {snap-name}
+   ceph osd pool rmsnap {pool-name} {snap-name}
 
 
 .. _setpoolvalues:
@@ -278,9 +308,11 @@
 Set Pool Values
 ===============
 
-To set a value to a pool, execute the following::
+To set a value for a pool, execute the following:
+
+.. prompt:: bash $
 
-    ceph osd pool set {pool-name} {key} {value}
+   ceph osd pool set {pool-name} {key} {value}
 
 You may set values for the following keys:
 
@@ -662,9 +694,11 @@
 Get Pool Values
 ===============
 
-To get a value from a pool, execute the following::
+To get a value from a pool, execute the following:
+
+.. prompt:: bash $
 
-    ceph osd pool get {pool-name} {key}
+   ceph osd pool get {pool-name} {key}
 
 You may get values for the following keys:
 
@@ -830,24 +864,30 @@
 Set the Number of Object Replicas
 =================================
 
-To set the number of object replicas on a replicated pool, execute the following::
+To set the number of object replicas on a replicated pool, execute the following:
+
+.. prompt:: bash $
 
-    ceph osd pool set {poolname} size {num-replicas}
+   ceph osd pool set {poolname} size {num-replicas}
 
 .. important:: The ``{num-replicas}`` includes the object itself.
    If you want the object and two copies of the object for a total
    of three instances of the object, specify ``3``.
 
-For example::
+For example:
+
+.. prompt:: bash $
 
-    ceph osd pool set data size 3
+   ceph osd pool set data size 3
 
 You may execute this command for each pool. **Note:** An object might accept
 I/Os in degraded mode with fewer than ``pool size`` replicas. To set a minimum
 number of required replicas for I/O, you should use the ``min_size`` setting.
-For example::
+For example:
+
+.. prompt:: bash $
 
-    ceph osd pool set data min_size 2
+   ceph osd pool set data min_size 2
 
 This ensures that no object in the data pool will receive I/O with fewer than
 ``min_size`` replicas.
@@ -856,9 +896,11 @@
 Get the Number of Object Replicas
 =================================
 
-To get the number of object replicas, execute the following::
+To get the number of object replicas, execute the following:
+
+.. prompt:: bash $
 
-    ceph osd dump | grep 'replicated size'
+   ceph osd dump | grep 'replicated size'
 
 Ceph will list the pools, with the ``replicated size`` attribute highlighted.
 By default, ceph creates two replicas of an object (a total of three copies, or
-- 
2.47.3
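
The three notes below are editorial sketches added alongside this patch; they
are not part of the diff and are not needed to apply it. This first one walks
the pool-lifecycle commands reformatted above through a throwaway pool. The
pool name ``test-pool`` and the PG count ``128`` are illustrative values, and
the final delete only succeeds on clusters where ``mon_allow_pool_delete`` is
set to ``true``.

.. prompt:: bash $

   ceph osd pool create test-pool 128 128 replicated
   ceph osd lspools
   ceph osd pool set-quota test-pool max_objects 10000
   ceph osd pool set-quota test-pool max_objects 0
   ceph osd pool delete test-pool test-pool --yes-i-really-really-mean-it

Setting the quota back to ``0`` removes it, as the quota hunk above notes.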
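
The CRUSH-rule cleanup advice in the middle hunks can be sketched the same
way. The rule ID ``123`` is the hypothetical value used by the documentation
itself: the first command reports which rule a pool uses, and the second greps
the OSD map for any pool still referencing that rule, so only an empty result
means the rule is safe to delete. The snapshot name is likewise illustrative.

.. prompt:: bash $

   ceph osd pool get test-pool crush_rule
   ceph osd dump | grep "^pool" | grep "crush_rule 123"
   ceph osd pool mksnap test-pool snap-before-cleanup
   ceph osd pool rmsnap test-pool snap-before-cleanup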
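
For the replica-count hunks, the common 3/2 arrangement makes the ``size``
versus ``min_size`` distinction concrete: ``size 3`` keeps three instances of
every object, while ``min_size 2`` lets I/O continue with one replica down but
blocks it once fewer than two replicas remain. Again, ``test-pool`` stands in
for a real pool name.

.. prompt:: bash $

   ceph osd pool set test-pool size 3
   ceph osd pool set test-pool min_size 2
   ceph osd dump | grep 'replicated size'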