From: Zac Dover
Date: Fri, 27 Oct 2023 05:22:34 +0000 (+1000)
Subject: doc/rados: edit troubleshooting-pg.rst
X-Git-Tag: v19.0.0~238^2
X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=97512a3e3c28d9aae7e61c30bbe66987298960a9;p=ceph.git

doc/rados: edit troubleshooting-pg.rst

s/placement group/pool/ in a sentence that, prior to this change, was
confusing.

Suitable for backports prior to Reef.

Signed-off-by: Zac Dover
---

diff --git a/doc/rados/troubleshooting/troubleshooting-pg.rst b/doc/rados/troubleshooting/troubleshooting-pg.rst
index 4af2cf347efb4..74d04bd9ffe39 100644
--- a/doc/rados/troubleshooting/troubleshooting-pg.rst
+++ b/doc/rados/troubleshooting/troubleshooting-pg.rst
@@ -405,11 +405,11 @@ Can't Write Data
 ================
 
 If the cluster is up, but some OSDs are down and you cannot write data, make
-sure that you have the minimum number of OSDs running for the placement group.
-If you don't have the minimum number of OSDs running, Ceph will not allow you
-to write data because there is no guarantee that Ceph can replicate your data.
-See ``osd_pool_default_min_size`` in the :ref:`Pool, PG, and CRUSH Config
-Reference ` for details.
+sure that you have the minimum number of OSDs running in the pool. If you don't
+have the minimum number of OSDs running in the pool, Ceph will not allow you to
+write data to it because there is no guarantee that Ceph can replicate your
+data. See ``osd_pool_default_min_size`` in the :ref:`Pool, PG, and CRUSH
+Config Reference ` for details.
 
 PGs Inconsistent
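
For context on the behavior the edited paragraph describes: a pool's effective
``min_size`` (which defaults to ``osd_pool_default_min_size``) is what gates
writes when OSDs are down. A minimal sketch of how an operator might check
this from the CLI, assuming a hypothetical pool named ``mypool`` (not part of
this patch):

    $ ceph osd stat                                  # how many OSDs are up/in
    $ ceph osd pool get mypool min_size              # effective min_size for this pool
    $ ceph config get osd osd_pool_default_min_size  # cluster-wide default for new pools
    $ ceph osd pool set mypool min_size 2            # adjust per-pool min_size (use with care)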