From: Zac Dover
Date: Tue, 6 Dec 2022 06:56:02 +0000 (+1000)
Subject: doc/rados: add prompts to placement-groups.rst
X-Git-Tag: v16.2.11~89^2
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=refs%2Fpull%2F49272%2Fhead;p=ceph.git

doc/rados: add prompts to placement-groups.rst

Add unselectable prompts to doc/rados/operations/placement-groups.rst.

https://tracker.ceph.com/issues/57108

Signed-off-by: Zac Dover
(cherry picked from commit ec38804d5a9007bbccb3d841f4e882d7c7a5951b)
---

diff --git a/doc/rados/operations/placement-groups.rst b/doc/rados/operations/placement-groups.rst
index 33b45d5dced8..4c5e35f16751 100644
--- a/doc/rados/operations/placement-groups.rst
+++ b/doc/rados/operations/placement-groups.rst
@@ -18,40 +18,54 @@ Each pool in the system has a ``pg_autoscale_mode`` property that can be set to
 
 * ``on``: Enable automated adjustments of the PG count for the given pool.
 * ``warn``: Raise health alerts when the PG count should be adjusted.
 
-To set the autoscaling mode for existing pools,::
-
-   ceph osd pool set <pool-name> pg_autoscale_mode <mode>
+To set the autoscaling mode for an existing pool:
+
+.. prompt:: bash #
+
+   ceph osd pool set <pool-name> pg_autoscale_mode <mode>
 
-For example to enable autoscaling on pool ``foo``,::
-
-   ceph osd pool set foo pg_autoscale_mode on
+For example, to enable autoscaling on pool ``foo``:
+
+.. prompt:: bash #
+
+   ceph osd pool set foo pg_autoscale_mode on
 
 You can also configure the default ``pg_autoscale_mode`` that is
-applied to any pools that are created in the future with::
-
-   ceph config set global osd_pool_default_pg_autoscale_mode <mode>
+set on any pools that are subsequently created:
+
+.. prompt:: bash #
+
+   ceph config set global osd_pool_default_pg_autoscale_mode <mode>
 
 You can disable or enable the autoscaler for all pools with
 the ``noautoscale`` flag. By default this flag is set to ``off``,
-but you can turn it ``on`` by using the command::
-
-   ceph osd pool set noautoscale
+but you can turn it ``on`` by using the command:
+
+.. prompt:: bash #
+
+   ceph osd pool set noautoscale
 
-You can turn it ``off`` using the command::
-
-   ceph osd pool unset noautoscale
+You can turn it ``off`` using the command:
+
+.. prompt:: bash #
+
+   ceph osd pool unset noautoscale
 
-To ``get`` the value of the flag use the command::
-
-   ceph osd pool get noautoscale
+To ``get`` the value of the flag, use the command:
+
+.. prompt:: bash #
+
+   ceph osd pool get noautoscale
 
 Viewing PG scaling recommendations
 ----------------------------------
 
 You can view each pool, its relative utilization, and any suggested changes to
-the PG count with this command::
-
-   ceph osd pool autoscale-status
+the PG count with this command:
+
+.. prompt:: bash #
+
+   ceph osd pool autoscale-status
 
 Output will be something like::
 
@@ -100,7 +114,12 @@ number of PGs that the pool is working towards, if a ``pg_num`` change is in
 progress).  **NEW PG_NUM**, if present, is what the system believes the pool's
 ``pg_num`` should be changed to.  It is always a power of 2, and will
 only be present if the "ideal" value
-varies from the current value by more than a factor of 3.
+varies from the current value by more than a factor of 3 by default.
+This factor can be adjusted with:
+
+.. prompt:: bash #
+
+   ceph osd pool set threshold 2.0
 
 **AUTOSCALE** is the pool ``pg_autoscale_mode``
 and will be either ``on``, ``off``, or ``warn``.
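
As a quick sketch of how the autoscaler commands documented above fit together (the pool name ``testpool`` is a hypothetical example, not a pool assumed to exist):

.. prompt:: bash #

   ceph osd pool set testpool pg_autoscale_mode on   # enable autoscaling for one pool; "testpool" is illustrative
   ceph osd pool autoscale-status                    # then review the autoscaler's view of all pools
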
@@ -126,9 +145,11 @@ than 3 times off from what it thinks it should be.
 
 The target number of PGs per OSD is based on the
 ``mon_target_pg_per_osd`` configurable (default: 100), which can be
-adjusted with::
-
-   ceph config set global mon_target_pg_per_osd 100
+adjusted with:
+
+.. prompt:: bash #
+
+   ceph config set global mon_target_pg_per_osd 100
 
 The autoscaler analyzes pools and adjusts on a per-subtree basis.
 Because each pool may map to a different CRUSH rule, and each rule may
@@ -153,17 +174,23 @@ scales down when the usage ratio across the pool is not even.
 
 However, if the pool doesn't have the `bulk` flag, the pool will
 start out with minimal PGs and scale up only when there is more usage
 in the pool.
 
-To create pool with `bulk` flag::
-
-   ceph osd pool create <pool-name> --bulk
+To create a pool with the `bulk` flag:
+
+.. prompt:: bash #
+
+   ceph osd pool create <pool-name> --bulk
 
-To set/unset `bulk` flag of existing pool::
-
-   ceph osd pool set <pool-name> bulk <true/false/1/0>
+To set or unset the `bulk` flag of an existing pool:
+
+.. prompt:: bash #
+
+   ceph osd pool set <pool-name> bulk <true/false/1/0>
 
-To get `bulk` flag of existing pool::
-
-   ceph osd pool get <pool-name> bulk
+To get the `bulk` flag of an existing pool:
+
+.. prompt:: bash #
+
+   ceph osd pool get <pool-name> bulk
 
 .. _specifying_pool_target_size:
 
@@ -184,14 +211,18 @@ The *target size* of a pool can be specified in two ways: either in terms of the
 absolute size of the pool (i.e., bytes), or as a weight relative to
 other pools with a ``target_size_ratio`` set.
 
-For example,::
-
-   ceph osd pool set mypool target_size_bytes 100T
+For example:
+
+.. prompt:: bash #
+
+   ceph osd pool set mypool target_size_bytes 100T
 
 will tell the system that `mypool` is expected to consume 100 TiB of
-space.  Alternatively,::
-
-   ceph osd pool set mypool target_size_ratio 1.0
+space.  Alternatively:
+
+.. prompt:: bash #
+
+   ceph osd pool set mypool target_size_ratio 1.0
 
 will tell the system that `mypool` is expected to consume 1.0 relative
 to the other pools with ``target_size_ratio`` set.  If `mypool` is the
@@ -218,10 +249,12 @@ parallelism clients will see when doing IO, even when a pool is mostly
 empty.  Setting the lower bound prevents Ceph from reducing (or
 recommending you reduce) the PG number below the configured number.
 
-You can set the minimum or maximum number of PGs for a pool with::
-
-   ceph osd pool set <pool-name> pg_num_min <num>
-   ceph osd pool set <pool-name> pg_num_max <num>
+You can set the minimum or maximum number of PGs for a pool with:
+
+.. prompt:: bash #
+
+   ceph osd pool set <pool-name> pg_num_min <num>
+   ceph osd pool set <pool-name> pg_num_max <num>
 
 You can also specify the minimum or maximum PG count at pool creation
 time with the optional ``--pg-num-min <num>`` or ``--pg-num-max
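
As an illustrative sketch combining the `bulk` flag with a target size (the pool name ``images`` and the ratio value ``0.5`` are assumed example values, not defaults):

.. prompt:: bash #

   ceph osd pool create images --bulk              # "images" is a hypothetical pool expected to grow large
   ceph osd pool set images target_size_ratio 0.5  # expected share relative to other ratio-set pools
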
@@ -232,9 +265,11 @@ time with the optional ``--pg-num-min <num>`` or ``--pg-num-max
 
 A preselection of pg_num
 ========================
 
-When creating a new pool with::
-
-   ceph osd pool create {pool-name} [pg_num]
+When creating a new pool with:
+
+.. prompt:: bash #
+
+   ceph osd pool create {pool-name} [pg_num]
 
 it is optional to choose the value of ``pg_num``.  If you do not
 specify ``pg_num``, the cluster can (by default) automatically tune it
 for you based on how much data is stored in the pool (see above, :ref:`pg-autoscaler`).
 
 Alternatively, ``pg_num`` can be explicitly provided.  However,
 whether you specify a ``pg_num`` value or not does not affect whether
 the value is automatically tuned by the cluster after the fact.  To
-enable or disable auto-tuning,::
-
-   ceph osd pool set {pool-name} pg_autoscale_mode (on|off|warn)
+enable or disable auto-tuning:
+
+.. prompt:: bash #
+
+   ceph osd pool set {pool-name} pg_autoscale_mode (on|off|warn)
 
 The "rule of thumb" for PGs per OSD has traditionally been 100.
 With the addition of the balancer (which is also enabled by default), a