from the cache, a state that normally leads to very high latencies and
poor performance.
-The cache pool target size can be adjusted with::
+The cache pool target size can be adjusted with:
- ceph osd pool set <cache-pool-name> target_max_bytes <bytes>
- ceph osd pool set <cache-pool-name> target_max_objects <objects>
+.. prompt:: bash $
+
+ ceph osd pool set <cache-pool-name> target_max_bytes <bytes>
+ ceph osd pool set <cache-pool-name> target_max_objects <objects>
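+
+For reference, the current targets for a cache pool can be checked with, for example:
+
+.. prompt:: bash $
+
+ ceph osd pool get <cache-pool-name> target_max_bytes
+ ceph osd pool get <cache-pool-name> target_max_objects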
Normal cache flush and evict activity may also be throttled due to reduced
availability or performance of the base tier, or overall cluster load.
much data as others.
This is easily corrected by setting the ``pg_num`` value for the
-affected pool(s) to a nearby power of two::
+affected pool(s) to a nearby power of two:
+
- ceph osd pool set <pool-name> pg_num <value>
+.. prompt:: bash $
+
+ ceph osd pool set <pool-name> pg_num <value>
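+
+A pool's current ``pg_num`` can be checked with, for example:
+
+.. prompt:: bash $
+
+ ceph osd pool get <pool-name> pg_num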
-This health warning can be disabled with::
+This health warning can be disabled with:
+
- ceph config set global mon_warn_on_pool_pg_num_not_power_of_two false
+.. prompt:: bash $
+
+ ceph config set global mon_warn_on_pool_pg_num_not_power_of_two false
POOL_TOO_FEW_PGS
________________
``warn``.
To disable the warning, you can disable auto-scaling of PGs for the
-pool entirely with::
+pool entirely with:
+
- ceph osd pool set <pool-name> pg_autoscale_mode off
+.. prompt:: bash $
+
+ ceph osd pool set <pool-name> pg_autoscale_mode off
-To allow the cluster to automatically adjust the number of PGs,::
+To allow the cluster to automatically adjust the number of PGs:
+
- ceph osd pool set <pool-name> pg_autoscale_mode on
+.. prompt:: bash $
+
+ ceph osd pool set <pool-name> pg_autoscale_mode on
You can also manually set the number of PGs for the pool to the
-recommended amount with::
+recommended amount with:
+
- ceph osd pool set <pool-name> pg_num <new-pg-num>
+.. prompt:: bash $
+
+ ceph osd pool set <pool-name> pg_num <new-pg-num>
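+
+The autoscaler's view of each pool, including its current and recommended PG counts, can be reviewed with, for example:
+
+.. prompt:: bash $
+
+ ceph osd pool autoscale-status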
Please refer to :ref:`choosing-number-of-placement-groups` and
:ref:`pg-autoscaler` for more information.
The simplest way to mitigate the problem is to increase the number of
OSDs in the cluster by adding more hardware. Note that the OSD count
used for the purposes of this health check is the number of "in" OSDs,
-so marking "out" OSDs "in" (if there are any) can also help::
+so marking "out" OSDs "in" (if there are any) can also help:
- ceph osd in <osd id(s)>
+.. prompt:: bash $
+
+ ceph osd in <osd id(s)>
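+
+Which OSDs are currently "out" can be seen in the OSD tree, for example:
+
+.. prompt:: bash $
+
+ ceph osd tree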
Please refer to :ref:`choosing-number-of-placement-groups` for more
information.
``pg_autoscale_mode`` property on the pool is set to ``warn``.
To disable the warning, you can disable auto-scaling of PGs for the
-pool entirely with::
+pool entirely with:
+
- ceph osd pool set <pool-name> pg_autoscale_mode off
+.. prompt:: bash $
+
+ ceph osd pool set <pool-name> pg_autoscale_mode off
-To allow the cluster to automatically adjust the number of PGs,::
+To allow the cluster to automatically adjust the number of PGs:
+
- ceph osd pool set <pool-name> pg_autoscale_mode on
+.. prompt:: bash $
+
+ ceph osd pool set <pool-name> pg_autoscale_mode on
You can also manually set the number of PGs for the pool to the
-recommended amount with::
+recommended amount with:
- ceph osd pool set <pool-name> pg_num <new-pg-num>
+.. prompt:: bash $
+
+ ceph osd pool set <pool-name> pg_num <new-pg-num>
Please refer to :ref:`choosing-number-of-placement-groups` and
:ref:`pg-autoscaler` for more information.
themselves or in combination with other pools' actual usage).
This is usually an indication that the ``target_size_bytes`` value for
-the pool is too large and should be reduced or set to zero with::
+the pool is too large and should be reduced or set to zero with:
+
- ceph osd pool set <pool-name> target_size_bytes 0
+.. prompt:: bash $
+
+ ceph osd pool set <pool-name> target_size_bytes 0
For more information, see :ref:`specifying_pool_target_size`.
``target_size_ratio`` takes precedence and ``target_size_bytes`` is
ignored.
-To reset ``target_size_bytes`` to zero::
+To reset ``target_size_bytes`` to zero:
- ceph osd pool set <pool-name> target_size_bytes 0
+.. prompt:: bash $
+
+ ceph osd pool set <pool-name> target_size_bytes 0
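+
+Alternatively, if the size-based target is the one that should be kept, the ratio can be cleared instead by setting it to zero, for example:
+
+.. prompt:: bash $
+
+ ceph osd pool set <pool-name> target_size_ratio 0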
For more information, see :ref:`specifying_pool_target_size`.
when ``pgp_num`` is changed.
This is normally resolved by setting ``pgp_num`` to match ``pg_num``,
-triggering the data migration, with::
+triggering the data migration, with:
+
- ceph osd pool set <pool> pgp_num <pg-num-value>
+.. prompt:: bash $
+
+ ceph osd pool set <pool> pgp_num <pg-num-value>
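+
+The current values of ``pg_num`` and ``pgp_num`` for a pool can be compared with, for example:
+
+.. prompt:: bash $
+
+ ceph osd pool get <pool> pg_num
+ ceph osd pool get <pool> pgp_num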
MANY_OBJECTS_PER_PG
___________________
tagged for use by a particular application.
Resolve this warning by labeling the pool for use by an application. For
-example, if the pool is used by RBD,::
+example, if the pool is used by RBD:
+
- rbd pool init <poolname>
+.. prompt:: bash $
+
+ rbd pool init <poolname>
If the pool is being used by a custom application 'foo', you can also label
-via the low-level command::
+via the low-level command:
- ceph osd pool application enable foo
+.. prompt:: bash $
+
+ ceph osd pool application enable <poolname> foo
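+
+The applications currently enabled on a pool can be listed with, for example:
+
+.. prompt:: bash $
+
+ ceph osd pool application get <poolname>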
For more information, see :ref:`associate-pool-to-application`.
quota. The threshold to trigger this error condition is controlled by
the ``mon_pool_quota_crit_threshold`` configuration option.
-Pool quotas can be adjusted up or down (or removed) with::
+Pool quotas can be adjusted up or down (or removed) with:
+
- ceph osd pool set-quota <pool> max_bytes <bytes>
- ceph osd pool set-quota <pool> max_objects <objects>
+.. prompt:: bash $
+
+ ceph osd pool set-quota <pool> max_bytes <bytes>
+ ceph osd pool set-quota <pool> max_objects <objects>
Setting the quota value to 0 will disable the quota.
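+
+The quotas currently set on a pool can be viewed with, for example:
+
+.. prompt:: bash $
+
+ ceph osd pool get-quota <pool>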
One threshold that can trigger this warning condition is the
``mon_pool_quota_warn_threshold`` configuration option.
-Pool quotas can be adjusted up or down (or removed) with::
+Pool quotas can be adjusted up or down (or removed) with:
+
- ceph osd pool set-quota <pool> max_bytes <bytes>
- ceph osd pool set-quota <pool> max_objects <objects>
+.. prompt:: bash $
+
+ ceph osd pool set-quota <pool> max_bytes <bytes>
+ ceph osd pool set-quota <pool> max_objects <objects>
Setting the quota value to 0 will disable the quota.
Ideally, a down OSD can be brought back online that has the more
recent copy of the unfound object. Candidate OSDs can be identified from the
-peering state for the PG(s) responsible for the unfound object::
+peering state for the PG(s) responsible for the unfound object:
- ceph tell <pgid> query
+.. prompt:: bash $
+
+ ceph tell <pgid> query
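+
+The affected PGs, and the unfound objects themselves, can be listed with, for example:
+
+.. prompt:: bash $
+
+ ceph health detail
+ ceph pg <pgid> list_unfound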
If the latest copy of the object is not available, the cluster can be
told to roll back to a previous version of the object. See
bug.
The request queue for the daemon in question can be queried with the
-following command, executed from the daemon's host::
+following command, executed from the daemon's host:
+
- ceph daemon osd.<id> ops
+.. prompt:: bash $
+
+ ceph daemon osd.<id> ops
+
-A summary of the slowest recent requests can be seen with::
+A summary of the slowest recent requests can be seen with:
+
- ceph daemon osd.<id> dump_historic_ops
+.. prompt:: bash $
+
+ ceph daemon osd.<id> dump_historic_ops
-The location of an OSD can be found with::
+The location of an OSD can be found with:
+
- ceph osd find osd.<id>
+.. prompt:: bash $
+
+ ceph osd find osd.<id>
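+
+On the daemon's host, requests that have been blocked for longer than the complaint threshold can also be dumped with, for example:
+
+.. prompt:: bash $
+
+ ceph daemon osd.<id> dump_blocked_ops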
PG_NOT_SCRUBBED
_______________