* ``on``: Enable automated adjustments of the PG count for the given pool.
* ``warn``: Raise health alerts when the PG count should be adjusted.
-To set the autoscaling mode for existing pools,::
- ceph osd pool set <pool-name> pg_autoscale_mode <mode>
+To set the autoscaling mode for an existing pool:
+
+.. prompt:: bash #
+
+ ceph osd pool set <pool-name> pg_autoscale_mode <mode>
-For example to enable autoscaling on pool ``foo``,::
- ceph osd pool set foo pg_autoscale_mode on
+For example, to enable autoscaling on pool ``foo``:
+
+.. prompt:: bash #
+
+ ceph osd pool set foo pg_autoscale_mode on
You can also configure the default ``pg_autoscale_mode`` that is
-applied to any pools that are created in the future with::
- ceph config set global osd_pool_default_pg_autoscale_mode <mode>
+set on any pools that are subsequently created:
+
+.. prompt:: bash #
+
+ ceph config set global osd_pool_default_pg_autoscale_mode <mode>
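+
+For example, to make all subsequently created pools default to the
+``warn`` mode described above:
+
+.. prompt:: bash #
+
+ ceph config set global osd_pool_default_pg_autoscale_mode warn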
You can disable or enable the autoscaler for all pools with
the ``noautoscale`` flag. By default this flag is set to be ``off``,
-but you can turn it ``on`` by using the command::
- ceph osd pool set noautoscale
+but you can turn it ``on`` by using the command:
+
+.. prompt:: bash #
+
+ ceph osd pool set noautoscale
-You can turn it ``off`` using the command::
- ceph osd pool unset noautoscale
+You can turn it ``off`` using the command:
+
+.. prompt:: bash #
+
+ ceph osd pool unset noautoscale
-To ``get`` the value of the flag use the command::
- ceph osd pool get noautoscale
+To ``get`` the value of the flag, use the command:
+
+.. prompt:: bash #
+
+ ceph osd pool get noautoscale
Viewing PG scaling recommendations
----------------------------------
You can view each pool, its relative utilization, and any suggested changes to
-the PG count with this command::
- ceph osd pool autoscale-status
+the PG count with this command:
+
+.. prompt:: bash #
+
+ ceph osd pool autoscale-status
Output will be something like::
change is in progress). **NEW PG_NUM**, if present, is what the
system believes the pool's ``pg_num`` should be changed to. It is
always a power of 2, and will only be present if the "ideal" value
-varies from the current value by more than a factor of 3.
+varies from the current value by more than the default factor of 3.
+This factor can be adjusted with:
+
+.. prompt:: bash #
+
+ ceph osd pool set threshold 2.0
**AUTOSCALE** is the pool's ``pg_autoscale_mode``
and will be either ``on``, ``off``, or ``warn``.
The target number of PGs per OSD is based on the
``mon_target_pg_per_osd`` configurable (default: 100), which can be
-adjusted with::
- ceph config set global mon_target_pg_per_osd 100
+adjusted with:
+
+.. prompt:: bash #
+
+ ceph config set global mon_target_pg_per_osd 100
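+
+For example, to target roughly 200 PGs per OSD instead of the default
+100 (200 is only an illustrative value):
+
+.. prompt:: bash #
+
+ ceph config set global mon_target_pg_per_osd 200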
The autoscaler analyzes pools and adjusts on a per-subtree basis.
Because each pool may map to a different CRUSH rule, and each rule may
However, if the pool doesn't have the `bulk` flag, the pool will
start out with minimal PGs and scale up only when there is more usage
in the pool.
-To create pool with `bulk` flag::
- ceph osd pool create <pool-name> --bulk
+To create a pool with the `bulk` flag:
+
+.. prompt:: bash #
+
+ ceph osd pool create <pool-name> --bulk
-To set/unset `bulk` flag of existing pool::
- ceph osd pool set <pool-name> bulk <true/false/1/0>
+To set/unset the `bulk` flag of an existing pool:
+
+.. prompt:: bash #
+
+ ceph osd pool set <pool-name> bulk <true/false/1/0>
-To get `bulk` flag of existing pool::
- ceph osd pool get <pool-name> bulk
+To get the `bulk` flag of an existing pool:
+
+.. prompt:: bash #
+
+ ceph osd pool get <pool-name> bulk
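+
+For example, to mark an existing pool named ``testpool`` (an
+illustrative name) as bulk:
+
+.. prompt:: bash #
+
+ ceph osd pool set testpool bulk true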
.. _specifying_pool_target_size:
terms of the absolute size of the pool (i.e., bytes), or as a weight
relative to other pools with a ``target_size_ratio`` set.
-For example,::
- ceph osd pool set mypool target_size_bytes 100T
+For example:
+
+.. prompt:: bash #
+
+ ceph osd pool set mypool target_size_bytes 100T
will tell the system that `mypool` is expected to consume 100 TiB of
-space. Alternatively,::
- ceph osd pool set mypool target_size_ratio 1.0
+space. Alternatively:
+
+.. prompt:: bash #
+
+ ceph osd pool set mypool target_size_ratio 1.0
will tell the system that `mypool` is expected to consume 1.0 relative
to the other pools with ``target_size_ratio`` set. If `mypool` is the
empty. Setting the lower bound prevents Ceph from reducing (or
recommending you reduce) the PG number below the configured number.
-You can set the minimum or maximum number of PGs for a pool with::
- ceph osd pool set <pool-name> pg_num_min <num>
- ceph osd pool set <pool-name> pg_num_max <num>
+You can set the minimum or maximum number of PGs for a pool with:
+
+.. prompt:: bash #
+
+ ceph osd pool set <pool-name> pg_num_min <num>
+ ceph osd pool set <pool-name> pg_num_max <num>
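+
+For example, to prevent the autoscaler from reducing (or recommending
+a reduction of) ``mypool`` below 64 PGs (64 is an illustrative value):
+
+.. prompt:: bash #
+
+ ceph osd pool set mypool pg_num_min 64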
You can also specify the minimum or maximum PG count at pool creation
time with the optional ``--pg-num-min <num>`` or ``--pg-num-max
A preselection of pg_num
========================
-When creating a new pool with::
- ceph osd pool create {pool-name} [pg_num]
+When creating a new pool with:
+
+.. prompt:: bash #
+
+ ceph osd pool create {pool-name} [pg_num]
it is optional to choose the value of ``pg_num``. If you do not
specify ``pg_num``, the cluster can (by default) automatically tune it
Alternatively, ``pg_num`` can be explicitly provided. However,
whether you specify a ``pg_num`` value or not does not affect whether
the value is automatically tuned by the cluster after the fact. To
-enable or disable auto-tuning,::
- ceph osd pool set {pool-name} pg_autoscale_mode (on|off|warn)
+enable or disable auto-tuning:
+
+.. prompt:: bash #
+
+ ceph osd pool set {pool-name} pg_autoscale_mode (on|off|warn)
The "rule of thumb" for PGs per OSD has traditionally be 100. With
the additional of the balancer (which is also enabled by default), a