From: Zac Dover
Date: Tue, 6 Dec 2022 06:56:02 +0000 (+1000)
Subject: doc/rados: add prompts to placement-groups.rst
X-Git-Tag: v17.2.6~304^2
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=becfb545a7c7a04595181fa1cd70a5c5c8846581;p=ceph.git

doc/rados: add prompts to placement-groups.rst

Add unselectable prompts to doc/rados/operations/placement-groups.rst.

https://tracker.ceph.com/issues/57108

Signed-off-by: Zac Dover
(cherry picked from commit ec38804d5a9007bbccb3d841f4e882d7c7a5951b)
---

diff --git a/doc/rados/operations/placement-groups.rst b/doc/rados/operations/placement-groups.rst
index c471ff8bcd94..f823913f44f7 100644
--- a/doc/rados/operations/placement-groups.rst
+++ b/doc/rados/operations/placement-groups.rst
@@ -20,40 +20,54 @@ Each pool has a ``pg_autoscale_mode`` property that can be set to ``off``, ``on`
 * ``on``: Enable automated adjustments of the PG count for the given pool.
 * ``warn``: Raise health alerts when the PG count should be adjusted
 
-To set the autoscaling mode for an existing pool::
+To set the autoscaling mode for an existing pool:
 
-  ceph osd pool set <pool-name> pg_autoscale_mode <mode>
+.. prompt:: bash #
 
-For example to enable autoscaling on pool ``foo``::
+   ceph osd pool set <pool-name> pg_autoscale_mode <mode>
 
-  ceph osd pool set foo pg_autoscale_mode on
+For example, to enable autoscaling on pool ``foo``:
+
+.. prompt:: bash #
+
+   ceph osd pool set foo pg_autoscale_mode on
 
 You can also configure the default ``pg_autoscale_mode`` that is
-set on any pools that are subsequently created::
+set on any pools that are subsequently created:
+
+.. prompt:: bash #
 
-  ceph config set global osd_pool_default_pg_autoscale_mode <mode>
+   ceph config set global osd_pool_default_pg_autoscale_mode <mode>
 
 You can disable or enable the autoscaler for all pools with
 the ``noautoscale`` flag. By default this flag is set to be ``off``,
-but you can turn it ``on`` by using the command::
+but you can turn it ``on`` by using the command:
+
+.. prompt:: bash #
+
+   ceph osd pool set noautoscale
 
-  ceph osd pool set noautoscale
+You can turn it ``off`` using the command:
 
-You can turn it ``off`` using the command::
+.. prompt:: bash #
 
-  ceph osd pool unset noautoscale
+   ceph osd pool unset noautoscale
 
-To ``get`` the value of the flag use the command::
+To ``get`` the value of the flag use the command:
 
-  ceph osd pool get noautoscale
+.. prompt:: bash #
+
+   ceph osd pool get noautoscale
 
 Viewing PG scaling recommendations
 ----------------------------------
 
 You can view each pool, its relative utilization, and any suggested changes to
-the PG count with this command::
+the PG count with this command:
+
+.. prompt:: bash #
 
-  ceph osd pool autoscale-status
+   ceph osd pool autoscale-status
 
 Output will be something like::
 
@@ -103,7 +117,9 @@ change is in progress).
 **NEW PG_NUM**, if present, is what the system believes the pool's
 ``pg_num`` should be changed to.  It is always a power of 2, and will
 only be present if the "ideal" value varies from the current value by
 more than a factor of 3 by default.
-This factor can be be adjusted with::
+This factor can be adjusted with:
+
+.. prompt:: bash #
 
    ceph osd pool set threshold 2.0
 
@@ -131,9 +147,11 @@ than a factor of 3 off from what it thinks it should be.
 
 The target number of PGs per OSD is based on the
 ``mon_target_pg_per_osd`` configurable (default: 100), which can be
-adjusted with::
+adjusted with:
+
+.. prompt:: bash #
 
-  ceph config set global mon_target_pg_per_osd 100
+   ceph config set global mon_target_pg_per_osd 100
 
 The autoscaler analyzes pools and adjusts on a per-subtree basis.
 Because each pool may map to a different CRUSH rule, and each rule may
@@ -158,17 +176,23 @@ scales down when the usage ratio across the pool is not even.
 
 However, if the pool doesn't have the `bulk` flag, the pool will
 start out with minimal PGs and only when there is more usage in the pool.
 
-To create pool with `bulk` flag::
+To create a pool with the `bulk` flag:
 
-  ceph osd pool create <pool-name> --bulk
+.. prompt:: bash #
 
-To set/unset `bulk` flag of existing pool::
+   ceph osd pool create <pool-name> --bulk
 
-  ceph osd pool set <pool-name> bulk <true/false/1/0>
+To set or unset the `bulk` flag of an existing pool:
 
-To get `bulk` flag of existing pool::
+.. prompt:: bash #
 
-  ceph osd pool get <pool-name> bulk
+   ceph osd pool set <pool-name> bulk <true/false/1/0>
+
+To get the `bulk` flag of an existing pool:
+
+.. prompt:: bash #
+
+   ceph osd pool get <pool-name> bulk
 
 .. _specifying_pool_target_size:
 
@@ -189,14 +213,18 @@ The *target size* of a pool can be specified in two ways: either in terms of
 the absolute size of the pool (i.e., bytes), or as a weight relative to
 other pools with a ``target_size_ratio`` set.
 
-For example::
+For example:
 
-  ceph osd pool set mypool target_size_bytes 100T
+.. prompt:: bash #
+
+   ceph osd pool set mypool target_size_bytes 100T
 
 will tell the system that `mypool` is expected to consume 100 TiB of
-space. Alternatively::
+space. Alternatively:
+
+.. prompt:: bash #
 
-  ceph osd pool set mypool target_size_ratio 1.0
+   ceph osd pool set mypool target_size_ratio 1.0
 
 will tell the system that `mypool` is expected to consume 1.0 relative
 to the other pools with ``target_size_ratio`` set. If `mypool` is the
@@ -223,10 +251,12 @@ parallelism client will see when doing IO, even when a pool is mostly
 empty. Setting the lower bound prevents Ceph from reducing (or
 recommending you reduce) the PG number below the configured number.
 
-You can set the minimum or maximum number of PGs for a pool with::
+You can set the minimum or maximum number of PGs for a pool with:
 
-  ceph osd pool set <pool-name> pg_num_min <num>
-  ceph osd pool set <pool-name> pg_num_max <num>
+.. prompt:: bash #
+
+   ceph osd pool set <pool-name> pg_num_min <num>
+   ceph osd pool set <pool-name> pg_num_max <num>
 
 You can also specify the minimum or maximum PG count at pool creation
 time with the optional ``--pg-num-min <num>`` or ``--pg-num-max
@@ -237,9 +267,11 @@ time with the optional ``--pg-num-min <num>`` or ``--pg-num-max
 A preselection of pg_num
 ========================
 
-When creating a new pool with::
+When creating a new pool with:
+
+.. prompt:: bash #
 
-  ceph osd pool create {pool-name} [pg_num]
+   ceph osd pool create {pool-name} [pg_num]
 
 it is optional to choose the value of ``pg_num``.  If you do not
 specify ``pg_num``, the cluster can (by default) automatically tune it
@@ -248,7 +280,7 @@ for you based on how much data is stored in the pool (see above, :ref:`pg-autosc
 
 Alternatively, ``pg_num`` can be explicitly provided.  However,
 whether you specify a ``pg_num`` value or not does not affect whether
 the value is automatically tuned by the cluster after the fact.  To
-enable or disable auto-tuning::
+enable or disable auto-tuning:
 
   ceph osd pool set {pool-name} pg_autoscale_mode (on|off|warn)
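
Taken together, the commands documented in the patched file form a small administration workflow. The sketch below strings them together; it assumes a running Ceph cluster reachable by an admin client, and the pool name ``foo`` and the specific values shown are illustrative, not part of the patch:

```shell
# Create a pool that is expected to hold a large share of cluster data;
# --bulk starts it with a full complement of PGs instead of the minimum.
ceph osd pool create foo --bulk

# Let the autoscaler manage this pool's pg_num.
ceph osd pool set foo pg_autoscale_mode on

# Pre-size the pool: declare that it is expected to consume about 100 TiB,
# so PGs are allocated up front rather than scaled reactively.
ceph osd pool set foo target_size_bytes 100T

# Never let the autoscaler take (or recommend taking) pg_num below 32.
ceph osd pool set foo pg_num_min 32

# Inspect utilization and any suggested PG-count changes.
ceph osd pool autoscale-status
```

Each invocation above appears (modulo the pool name and values) in the patched document; the commands act on a live cluster, so no sample output is shown.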