OSD_DOWN
________
- One or more OSDs are marked down
+ One or more OSDs are marked down. The ceph-osd daemon may have been
+ stopped, or peer OSDs may be unable to reach the OSD over the network.
+ Common causes include a stopped or crashed daemon, a down host, or a
+ network outage.
+
+ Verify that the host is healthy, the daemon is started, and the network is
+ functioning. If the daemon has crashed, the daemon log file
+ (``/var/log/ceph/ceph-osd.*``) may contain debugging information.
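+
+ For example, ``ceph health detail`` lists the specific OSDs that are
+ down; on a host whose daemons are managed by systemd, the daemon can
+ then be checked and restarted (the ``ceph-osd@<id>`` unit name is the
+ common convention, but may differ in your deployment)::
+
+   ceph health detail
+   systemctl status ceph-osd@<id>
+   systemctl start ceph-osd@<id>
+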
OSD_<crush type>_DOWN
_____________________
(e.g. OSD_HOST_DOWN, OSD_ROOT_DOWN)
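+
+ Inspecting the CRUSH hierarchy can help identify the affected subtree;
+ ``ceph osd tree`` shows each OSD's up/down status beneath its host,
+ rack, or root::
+
+   ceph osd tree
+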
MANY_OBJECTS_PER_PG
___________________
+ One or more pools have an average number of objects per PG that is
+ significantly higher than the overall cluster average. The specific
+ threshold is controlled by the ``mon_pg_warn_max_object_skew``
+ configuration value.
+
+ This is usually an indication that the pool(s) containing most of the
+ data in the cluster have too few PGs, and/or that other pools that do
+ not contain as much data have too many PGs. See the discussion of
+ *TOO_MANY_PGS* above.
+
+ The threshold can be raised to silence the health warning by adjusting
+ the ``mon_pg_warn_max_object_skew`` config option on the monitors.
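+
+ For example, to raise the threshold to 20 on releases that include the
+ ``ceph config`` subsystem (older releases can set the option in
+ ``ceph.conf`` on the monitors instead)::
+
+   ceph config set mon mon_pg_warn_max_object_skew 20
+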
+POOL_APP_NOT_ENABLED
+____________________
+
+A pool exists that contains one or more objects but has not been
+tagged for use by a particular application.
+
+Resolve this warning by labeling the pool for use by an application. For
+example, if the pool is used by RBD::
+
+ rbd pool init <poolname>
+
+If the pool is being used by a custom application 'foo', you can also label
+the pool via the low-level command::
+
+ ceph osd pool application enable <poolname> foo
+
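+A quick way to confirm the label took effect is the companion ``get``
+subcommand (assuming a release that supports pool application metadata,
+i.e. Luminous or later)::
+
+ ceph osd pool application get <poolname>
+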
+For more information, see :ref:`associate-pool-to-application`.
+
POOL_FULL
_________