From 027672b777402381f6736e517ed287b38bb17abb Mon Sep 17 00:00:00 2001
From: Sage Weil
Date: Thu, 14 Sep 2017 16:01:14 -0400
Subject: [PATCH] doc/rados/operations/health-checks: fix TOO_MANY_PGS
 discussion

Fiddling with pgp_num doesn't help with TOO_MANY_PGS.

Signed-off-by: Sage Weil
---
 doc/rados/operations/health-checks.rst | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/doc/rados/operations/health-checks.rst b/doc/rados/operations/health-checks.rst
index 34b12aff2fab9..40b886f93f556 100644
--- a/doc/rados/operations/health-checks.rst
+++ b/doc/rados/operations/health-checks.rst
@@ -335,17 +335,20 @@ TOO_MANY_PGS
 ____________
 
 The number of PGs in use in the cluster is above the configurable
-threshold of ``mon_pg_warn_max_per_osd`` PGs per OSD. This can lead
+threshold of ``mon_max_pg_per_osd`` PGs per OSD. If this threshold is
+exceeded, the cluster will not allow new pools to be created, pool
+``pg_num`` to be increased, or pool replication to be increased (any of
+which would lead to more PGs in the cluster). A large number of PGs can lead
 to higher memory utilization for OSD daemons, slower peering after
 cluster state changes (like OSD restarts, additions, or removals),
 and higher load on the Manager and Monitor daemons.
 
-The ``pg_num`` value for existing pools cannot currently be reduced.
-However, the ``pgp_num`` value can, which effectively collocates some
-PGs on the same sets of OSDs, mitigating some of the negative impacts
-described above. The ``pgp_num`` value can be adjusted with::
+The simplest way to mitigate the problem is to increase the number of
+OSDs in the cluster by adding more hardware. Note that the OSD count
+used for the purposes of this health check is the number of "in" OSDs,
+so marking "out" OSDs "in" (if there are any) can also help::
 
-  ceph osd pool set <pool> pgp_num <value>
+  ceph osd in <osd-id(s)>
 
 Please refer to :ref:`choosing-number-of-placement-groups` for more
 information.
@@ -366,7 +369,6 @@ triggering the data migration, with::
 
   ceph osd pool set <pool> pgp_num <value>
 
-
 MANY_OBJECTS_PER_PG
 ___________________
 
-- 
2.39.5
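
Illustrative follow-up (not part of the patch): a minimal session sketching the
mitigation the new text recommends, assuming at least one OSD is currently
marked "out". The OSD id used here is a placeholder:

  ceph health detail   # lists active health checks, including TOO_MANY_PGS
  ceph osd df          # the PGS column shows how many PGs each OSD holds
  ceph osd in 7        # mark OSD 7 "in" again so it counts toward the per-OSD ratio

Adding OSDs by installing more hardware achieves the same effect by raising the
number of "in" OSDs that the PGs-per-OSD ratio is computed against.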