This page lists the health checks that are raised by the monitor and manager
daemons. In addition to these, you may also see health checks that originate
-from MDS daemons (see :doc:`/cephfs/health-messages`), and health checks
+from MDS daemons (see :ref:`cephfs-health-messages`), and health checks
that are defined by ceph-mgr Python modules.
Definitions
===========
Either add new storage to the cluster by deploying more OSDs, or delete
existing data in order to free up space.
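As a first step, current utilization can be checked and, on recent
releases, the full threshold raised slightly as a short-term workaround
to restore write availability; for example::

    # check overall and per-pool utilization
    ceph df

    # temporarily raise the full threshold (substitute an actual ratio)
    ceph osd set-full-ratio <ratio>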
-
+
OSD_BACKFILLFULL
________________
ceph osd set <flag>
ceph osd unset <flag>
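For example, ``noout`` (one of the standard flags) is commonly set
before planned maintenance so that down OSDs are not marked *out* and
do not trigger recovery::

    ceph osd set noout
    # ... perform maintenance ...
    ceph osd unset noout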
-
+
OSD_FLAGS
_________
oldest tunables that can be used (i.e., the oldest client version that
can connect to the cluster) without triggering this health warning is
determined by the ``mon_crush_min_required_version`` config option.
-See :doc:`/rados/operations/crush-map/#tunables` for more information.
+See :ref:`crush-map-tunables` for more information.
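For example, the tunables can be moved to the newest profile with the
command below, with the caveat that doing so may trigger data migration
and may prevent older clients from connecting::

    ceph osd crush tunables optimal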
OLD_CRUSH_STRAW_CALC_VERSION
____________________________
The CRUSH map should be updated to use the newer method
(``straw_calc_version=1``). See
-:doc:`/rados/operations/crush-map/#tunables` for more information.
+:ref:`crush-map-tunables` for more information.
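For example, the newer method can typically be enabled with::

    ceph osd crush set-tunable straw_calc_version 1

Changing this tunable does not itself move data, but subsequent
adjustments to straw buckets may then trigger a small amount of data
movement.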
CACHE_POOL_NO_HIT_SET
_____________________
ceph osd pool set <poolname> hit_set_type <type>
ceph osd pool set <poolname> hit_set_period <period-in-seconds>
ceph osd pool set <poolname> hit_set_count <number-of-hitsets>
- ceph osd pool set <poolname> hit_set_fpp <target-false-positive-rate>
+ ceph osd pool set <poolname> hit_set_fpp <target-false-positive-rate>
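For example, assuming a hypothetical cache pool named ``hot-storage``,
a bloom-filter hit set (``bloom`` is the standard hit set type) could
be configured as follows::

    ceph osd pool set hot-storage hit_set_type bloom
    ceph osd pool set hot-storage hit_set_period 3600
    ceph osd pool set hot-storage hit_set_count 4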
OSD_NO_SORTBITWISE
__________________
ceph osd pool set-quota <pool> max_bytes <bytes>
ceph osd pool set-quota <pool> max_objects <objects>
-Setting the quota value to 0 will disable the quota.
+Setting the quota value to 0 will disable the quota.
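For example, a hypothetical pool named ``data`` could be capped at
10 GiB and the limit later removed::

    # limit the pool to 10 GiB
    ceph osd pool set-quota data max_bytes 10737418240

    # remove the limit again (0 disables the quota)
    ceph osd pool set-quota data max_bytes 0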
POOL_NEAR_FULL
______________