From a99eb3d6e7851b407983995fbde5d45887f52cff Mon Sep 17 00:00:00 2001
From: Zac Dover
Date: Mon, 3 Feb 2025 23:37:34 +1000
Subject: [PATCH] doc/rados: improve pg_num/pgp_num info

Improve the guidance around setting pg_num, and clear up confusion
around whether pgp_num should be set manually or, indeed, if it even
can be set manually.

This PR was raised in response to Mark Schouten's email here:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/CBDJTLTTIEZVG7GVZBX37UAWGYNSSMPD/

Co-authored-by: Anthony D'Atri
Signed-off-by: Zac Dover
(cherry picked from commit c43e7337212fe38e8db63d00345fa9858b3cb10a)
---
 doc/rados/operations/placement-groups.rst | 28 ++++++++++++++---------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/doc/rados/operations/placement-groups.rst b/doc/rados/operations/placement-groups.rst
index 93ab1f0c02433..c9eb41792d27d 100644
--- a/doc/rados/operations/placement-groups.rst
+++ b/doc/rados/operations/placement-groups.rst
@@ -659,22 +659,28 @@ command of the following form:
 
    ceph osd pool set {pool-name} pg_num {pg_num}
 
-If you increase the number of PGs, your cluster will not rebalance until you
-increase the number of PGs for placement (``pgp_num``). The ``pgp_num``
-parameter specifies the number of PGs that are to be considered for placement
-by the CRUSH algorithm. Increasing ``pg_num`` splits the PGs in your cluster,
-but data will not be migrated to the newer PGs until ``pgp_num`` is increased.
-The ``pgp_num`` parameter should be equal to the ``pg_num`` parameter. To
-increase the number of PGs for placement, run a command of the following form:
+Since the Nautilus release, Ceph automatically steps ``pgp_num`` for a pool
+whenever ``pg_num`` is changed, either by the PG autoscaler or manually. Admins
+generally do not need to touch ``pgp_num`` directly, but can monitor progress
+with ``watch ceph osd pool ls detail``. When ``pg_num`` is changed, the value
+of ``pgp_num`` is stepped slowly so that the cost of splitting or merging PGs
+is amortized over time to minimize performance impact.
+
+Increasing ``pg_num`` splits the PGs in your cluster, but data will not be
+migrated to the newer PGs until ``pgp_num`` is increased.
+
+It is possible to manually set the ``pgp_num`` parameter. The ``pgp_num``
+parameter should be equal to the ``pg_num`` parameter. To increase the number
+of PGs for placement, run a command of the following form:
 
 .. prompt:: bash #
 
    ceph osd pool set {pool-name} pgp_num {pgp_num}
 
-If you decrease the number of PGs, then ``pgp_num`` is adjusted automatically.
-In releases of Ceph that are Nautilus and later (inclusive), when the
-``pg_autoscaler`` is not used, ``pgp_num`` is automatically stepped to match
-``pg_num``. This process manifests as periods of remapping of PGs and of
+If you decrease or increase the number of PGs, then ``pgp_num`` is adjusted
+automatically. In releases of Ceph that are Nautilus and later (inclusive),
+when the ``pg_autoscaler`` is not used, ``pgp_num`` is automatically stepped to
+match ``pg_num``. This process manifests as periods of remapping of PGs and of
 backfill, and is expected behavior and normal.
 
 .. _rados_ops_pgs_get_pg_num:
-- 
2.39.5
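A note on the stepping behavior the patch describes: the idea that ``pgp_num``
is "stepped slowly" toward ``pg_num`` so the remapping cost is amortized can be
sketched as a toy model. This is not Ceph's actual implementation; the function
name ``step_pgp_num`` and the fixed ``max_step`` increment are invented here
for illustration only (Ceph's real pacing depends on cluster state and
configuration).

```python
def step_pgp_num(pg_num: int, pgp_num: int, max_step: int) -> int:
    """Move pgp_num toward pg_num by at most max_step per call.

    Toy model only: illustrates amortizing a pg_num change over
    several small pgp_num adjustments rather than one large jump.
    """
    if pgp_num < pg_num:
        return min(pg_num, pgp_num + max_step)
    if pgp_num > pg_num:
        return max(pg_num, pgp_num - max_step)
    return pgp_num

# Example: pg_num raised from 32 to 128; pgp_num follows in steps,
# so only a fraction of PGs are remapped (and backfilled) at a time.
history = []
pgp = 32
while pgp != 128:
    pgp = step_pgp_num(128, pgp, max_step=16)
    history.append(pgp)
# history -> [48, 64, 80, 96, 112, 128]
```

Each intermediate value corresponds to a period of remapping and backfill,
which is why the documentation calls the resulting background activity expected
and normal.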