From 5da8bfadb086cb01042935a544b4b090b03a6688 Mon Sep 17 00:00:00 2001
From: Zac Dover
Date: Fri, 2 Feb 2024 11:53:45 +1000
Subject: [PATCH] doc/rados: update config for autoscaler

Update doc/rados/configuration/pool-pg-config-ref.rst to account for
the behavior of the autoscaler. Previously, this file was last
meaningfully altered in 2013, prior to the invention of the autoscaler.

A recent confusion was brought to my attention on the Ceph Slack: a user
attempted to alter the default values of a Quincy cluster, as suggested
in this documentation. That alteration caused Ceph to throw the error
"Error ERANGE: 'pgp_num' must be greater than 0 and lower or equal than
'pg_num', which in this case is one" and a related "rgw_init_ioctx
ERROR" reading in part "Numerical result out of range". The user removed
the "osd_pool_default_pgp_num" configuration line from ceph.conf and the
cluster worked as expected. I presume that this is because the removal
of this configuration line allowed the autoscaler to work as intended.

Fixes: https://tracker.ceph.com/issues/64259
Co-authored-by: David Orman
Signed-off-by: Zac Dover
(cherry picked from commit 4dc12092be584da44baca14e31ca33231164235f)
---
 .../configuration/pool-pg-config-ref.rst | 41 +++++++++++++++++++++++++++++++++++------
 1 file changed, 35 insertions(+), 6 deletions(-)

diff --git a/doc/rados/configuration/pool-pg-config-ref.rst b/doc/rados/configuration/pool-pg-config-ref.rst
index 3f9c149b6fe50..3a1ff9b68f01e 100644
--- a/doc/rados/configuration/pool-pg-config-ref.rst
+++ b/doc/rados/configuration/pool-pg-config-ref.rst
@@ -4,12 +4,41 @@
 
 .. index:: pools; configuration
 
-Ceph uses default values to determine how many placement groups (PGs) will be
-assigned to each pool. We recommend overriding some of the defaults.
-Specifically, we recommend setting a pool's replica size and overriding the
-default number of placement groups. You can set these values when running
-`pool`_ commands. You can also override the defaults by adding new ones in the
-``[global]`` section of your Ceph configuration file.
+The number of placement groups that the CRUSH algorithm assigns to each pool is
+determined by the values of variables in the centralized configuration database
+in the monitor cluster.
+
+Both containerized deployments of Ceph (deployments made using ``cephadm`` or
+Rook) and non-containerized deployments of Ceph rely on the values in the
+central configuration database in the monitor cluster to assign placement
+groups to pools.
+
+Example Commands
+----------------
+
+To see the value of the variable that governs the number of placement groups in a given pool, run a command of the following form:
+
+.. prompt:: bash
+
+   ceph config get osd osd_pool_default_pg_num
+
+To set the value of the variable that governs the number of placement groups in a given pool, run a command of the following form:
+
+.. prompt:: bash
+
+   ceph config set osd osd_pool_default_pg_num 128
+
+Manual Tuning
+-------------
+In some cases, it might be advisable to override some of the defaults. For
+example, you might determine that it is wise to set a pool's replica size and
+to override the default number of placement groups in the pool. You can set
+these values when running `pool`_ commands.
+
+See Also
+--------
+
+See :ref:`pg-autoscaler`.
 
 .. literalinclude:: pool-pg.conf
 
-- 
2.39.5
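A note on the "Manual Tuning" section added by this patch: before the autoscaler existed, the usual rule of thumb for hand-picking a pg_num value was roughly 100 PGs per OSD, divided by the pool's replica count and rounded up to a power of two. A minimal sketch of that heuristic follows; the function name, parameters, and defaults here are illustrative and not part of the patch or of Ceph itself:

```python
def suggested_pg_count(num_osds: int, pool_size: int = 3,
                       target_pgs_per_osd: int = 100) -> int:
    """Classic pre-autoscaler rule of thumb for a pool's pg_num.

    (OSD count * target PGs per OSD) / replica count, rounded up to
    the nearest power of two.
    """
    raw = num_osds * target_pgs_per_osd / pool_size
    power = 1
    while power < raw:
        power *= 2
    return power

# 10 OSDs, 3-way replication: 10 * 100 / 3 = 333.3 -> next power of two
print(suggested_pg_count(10))  # -> 512
```

With the autoscaler enabled (see :ref:`pg-autoscaler` in the patched page), this arithmetic is normally unnecessary; the cluster adjusts pg_num itself, which is exactly why the patch steers readers away from blanket ceph.conf overrides.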