From: Anthony D'Atri
Date: Mon, 2 Jun 2025 18:35:23 +0000 (-0400)
Subject: doc/rados/operations: Additional improvements to placement-groups.rst
X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=refs%2Fpull%2F63650%2Fhead;p=ceph.git

doc/rados/operations: Additional improvements to placement-groups.rst

Signed-off-by: Anthony D'Atri
(cherry picked from commit 201e34119ab8628f5c99763212613678ec29dde3)
---

diff --git a/doc/rados/operations/placement-groups.rst b/doc/rados/operations/placement-groups.rst
index b00d563adc451..0f3f7e9f4b509 100644
--- a/doc/rados/operations/placement-groups.rst
+++ b/doc/rados/operations/placement-groups.rst
@@ -391,7 +391,7 @@ The autoscaler attempts to satisfy the following conditions:
 - The number of PG replicas per OSD should be proportional to the amount of
   data in the pool.
 
-- There should by default 50-100 PGs per pool, taking into account the replication
+- There should by default be 50-100 PGs per pool, taking into account the replication
   overhead or erasure-coding fan-out of each PG's replicas across OSDs.
 
 Use of Placement Groups
@@ -610,7 +610,7 @@ Memory, CPU and network usage
 Every PG in the cluster imposes memory, network, and CPU demands upon OSDs and
 Monitors. These needs must be met at all times and are increased during recovery.
 Indeed, one of the main reasons PGs were developed was to decrease this overhead
-by aggregating RADOS objects into a sets of a manageable size.
+by aggregating RADOS objects into sets of a manageable size.
 
 For this reason, limiting the number of PGs saves significant resources.
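
As context for the autoscaler guidance touched by the first hunk, per-pool PG
counts and the autoscaler's recommendations can be checked on a live cluster
with standard Ceph CLI commands. This is only an illustrative sketch, not part
of the patch; the pool name "mypool" is a placeholder:

    # Show per-pool PG targets and the autoscaler's recommendations
    ceph osd pool autoscale-status

    # Inspect the current PG count for one pool
    ceph osd pool get mypool pg_num

    # Let the autoscaler manage pg_num for that pool (modes: on, warn, off)
    ceph osd pool set mypool pg_autoscale_mode on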