consensus about the cluster map (e.g., 1; 2 out of 3; 3 out of 5; 4 out of 6;
etc.).
-``mon force quorum join``
-
-:Description: Force monitor to join quorum even if it has been previously removed from the map
-:Type: Boolean
-:Default: ``False``
+.. confval:: mon_force_quorum_join
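+
+For example, to allow a monitor that was previously removed from the map to
+rejoin the quorum, the option could be set in the ``[mon]`` section of
+``ceph.conf`` (illustrative only; this option is normally left at its default
+of ``false``):
+
+.. code-block:: ini
+
+    [mon]
+    mon_force_quorum_join = true
+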
.. index:: Ceph Monitor; consistency
so the value may not appear in a configuration file. The ``fsid`` makes it
possible to run daemons for multiple clusters on the same hardware.
-``fsid``
-
-:Description: The cluster ID. One per cluster.
-:Type: UUID
-:Required: Yes.
-:Default: N/A. May be generated by a deployment tool if not specified.
-
-.. note:: Do not set this value if you use a deployment tool that does
- it for you.
-
+.. confval:: fsid
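+
+For clusters that are configured by hand rather than by a deployment tool, the
+cluster ID can be recorded in the ``[global]`` section of ``ceph.conf`` (the
+UUID shown is illustrative only):
+
+.. code-block:: ini
+
+    [global]
+    fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
+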
.. index:: Ceph Monitor; initial members
mon_initial_members = a,b,c
-``mon_initial_members``
-
-:Description: The IDs of initial monitors in a cluster during startup. If
- specified, Ceph requires an odd number of monitors to form an
- initial quorum (e.g., 3).
-
-:Type: String
-:Default: None
-
-.. note:: A *majority* of monitors in your cluster must be able to reach
- each other in order to establish a quorum. You can decrease the initial
- number of monitors to establish a quorum with this setting.
+.. confval:: mon_initial_members
.. index:: Ceph Monitor; data path
by setting it in the ``[mon]`` section of the configuration file.
-``mon_data``
-
-:Description: The monitor's data location.
-:Type: String
-:Default: ``/var/lib/ceph/mon/$cluster-$id``
-
-
-``mon_data_size_warn``
-
-:Description: Raise ``HEALTH_WARN`` status when a monitor's data
- store grows to be larger than this size, 15GB by default.
-
-:Type: Integer
-:Default: ``15*1024*1024*1024``
-
-
-``mon_data_avail_warn``
-
-:Description: Raise ``HEALTH_WARN`` status when the filesystem that houses a
- monitor's data store reports that its available capacity is
- less than or equal to this percentage .
-
-:Type: Integer
-:Default: ``30``
-
-
-``mon_data_avail_crit``
-
-:Description: Raise ``HEALTH_ERR`` status when the filesystem that houses a
- monitor's data store reports that its available capacity is
- less than or equal to this percentage.
-
-:Type: Integer
-:Default: ``5``
-
-``mon_warn_on_cache_pools_without_hit_sets``
-
-:Description: Raise ``HEALTH_WARN`` when a cache pool does not
- have the ``hit_set_type`` value configured.
- See :ref:`hit_set_type <hit_set_type>` for more
- details.
-
-:Type: Boolean
-:Default: ``True``
-
-``mon_warn_on_crush_straw_calc_version_zero``
-
-:Description: Raise ``HEALTH_WARN`` when the CRUSH
- ``straw_calc_version`` is zero. See
- :ref:`CRUSH map tunables <crush-map-tunables>` for
- details.
-
-:Type: Boolean
-:Default: ``True``
-
-
-``mon_warn_on_legacy_crush_tunables``
-
-:Description: Raise ``HEALTH_WARN`` when
- CRUSH tunables are too old (older than ``mon_min_crush_required_version``)
-
-:Type: Boolean
-:Default: ``True``
-
-
-``mon_crush_min_required_version``
-
-:Description: The minimum tunable profile required by the cluster.
- See
- :ref:`CRUSH map tunables <crush-map-tunables>` for
- details.
-
-:Type: String
-:Default: ``hammer``
-
-
-``mon_warn_on_osd_down_out_interval_zero``
-
-:Description: Raise ``HEALTH_WARN`` when
- ``mon_osd_down_out_interval`` is zero. Having this option set to
- zero on the leader acts much like the ``noout`` flag. It's hard
- to figure out what's going wrong with clusters without the
- ``noout`` flag set but acting like that just the same, so we
- report a warning in this case.
-
-:Type: Boolean
-:Default: ``True``
-
-
-``mon_warn_on_slow_ping_ratio``
-
-:Description: Raise ``HEALTH_WARN`` when any heartbeat
- between OSDs exceeds ``mon_warn_on_slow_ping_ratio``
- of ``osd_heartbeat_grace``. The default is 5%.
-:Type: Float
-:Default: ``0.05``
-
-
-``mon_warn_on_slow_ping_time``
-
-:Description: Override ``mon_warn_on_slow_ping_ratio`` with a specific value.
- Raise ``HEALTH_WARN`` if any heartbeat
- between OSDs exceeds ``mon_warn_on_slow_ping_time``
- milliseconds. The default is 0 (disabled).
-:Type: Integer
-:Default: ``0``
-
-
-``mon_warn_on_pool_no_redundancy``
-
-:Description: Raise ``HEALTH_WARN`` if any pool is
- configured with no replicas.
-:Type: Boolean
-:Default: ``True``
-
-
-``mon_cache_target_full_warn_ratio``
-
-:Description: Position between pool's ``cache_target_full`` and
- ``target_max_object`` where we start warning
-
-:Type: Float
-:Default: ``0.66``
-
-
-``mon_health_to_clog``
-
-:Description: Enable sending a health summary to the cluster log periodically.
-:Type: Boolean
-:Default: ``True``
-
-
-``mon_health_to_clog_tick_interval``
-
-:Description: How often (in seconds) the monitor sends a health summary to the cluster
- log (a non-positive number disables). If current health summary
- is empty or identical to the last time, monitor will not send it
- to cluster log.
-
-:Type: Float
-:Default: ``60.0``
-
-
-``mon_health_to_clog_interval``
-
-:Description: How often (in seconds) the monitor sends a health summary to the cluster
- log (a non-positive number disables). Monitors will always
- send a summary to the cluster log whether or not it differs from
- the previous summary.
-
-:Type: Integer
-:Default: ``3600``
-
-
+.. confval:: mon_data
+.. confval:: mon_data_size_warn
+.. confval:: mon_data_avail_warn
+.. confval:: mon_data_avail_crit
+.. confval:: mon_warn_on_cache_pools_without_hit_sets
+.. confval:: mon_warn_on_crush_straw_calc_version_zero
+.. confval:: mon_warn_on_legacy_crush_tunables
+.. confval:: mon_crush_min_required_version
+.. confval:: mon_warn_on_osd_down_out_interval_zero
+.. confval:: mon_warn_on_slow_ping_ratio
+.. confval:: mon_warn_on_slow_ping_time
+.. confval:: mon_warn_on_pool_no_redundancy
+.. confval:: mon_cache_target_full_warn_ratio
+.. confval:: mon_health_to_clog
+.. confval:: mon_health_to_clog_tick_interval
+.. confval:: mon_health_to_clog_interval
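+
+As an illustration only (the values below are arbitrary, not recommendations),
+several of these thresholds can be adjusted in the ``[mon]`` section of
+``ceph.conf``:
+
+.. code-block:: ini
+
+    [mon]
+    mon_data_size_warn = 20G
+    mon_data_avail_warn = 20
+    mon_data_avail_crit = 10
+    mon_warn_on_pool_no_redundancy = true
+
+For a sense of scale, assuming ``osd_heartbeat_grace`` is at its default of 20
+seconds, the default ``mon_warn_on_slow_ping_ratio`` of ``0.05`` warns when an
+OSD heartbeat exceeds roughly one second (1000 ms).
+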
.. index:: Ceph Storage Cluster; capacity planning, Ceph Monitor; capacity planning
type: uuid
level: basic
desc: cluster fsid (uuid)
+ fmt_desc: The cluster ID. One per cluster.
+ May be generated by a deployment tool if not specified.
+ note: Do not set this value if you use a deployment tool that does
+ it for you.
tags:
- service
services:
type: str
level: advanced
desc: path to mon database
+ fmt_desc: The monitor's data location.
default: /var/lib/ceph/mon/$cluster-$id
services:
- mon
- name: mon_initial_members
type: str
level: advanced
+ fmt_desc: The IDs of initial monitors in a cluster during startup. If
+ specified, Ceph requires an odd number of monitors to form an
+ initial quorum (e.g., 3).
+ note: A *majority* of monitors in your cluster must be able to reach
+ each other in order to establish a quorum. You can decrease the initial
+ number of monitors to establish a quorum with this setting.
services:
- mon
flags:
level: advanced
desc: issue CACHE_POOL_NEAR_FULL health warning when cache pool utilization exceeds
this ratio of usable space
+ fmt_desc: The position between a pool's ``cache_target_full`` and
+   ``target_max_object`` values at which Ceph begins issuing warnings.
default: 0.66
services:
- mgr
type: bool
level: advanced
desc: issue OLD_CRUSH_TUNABLES health warning if CRUSH tunables are older than mon_crush_min_required_version
+ fmt_desc: Raise ``HEALTH_WARN`` when CRUSH tunables are too old (older than ``mon_crush_min_required_version``).
default: true
services:
- mgr
type: str
level: advanced
desc: minimum ceph release to use for mon_warn_on_legacy_crush_tunables
+ fmt_desc: The minimum tunable profile required by the cluster. See
+ :ref:`CRUSH map tunables <crush-map-tunables>` for details.
default: hammer
services:
- mgr
level: advanced
desc: issue OLD_CRUSH_STRAW_CALC_VERSION health warning if the CRUSH map's straw_calc_version
is zero
+ fmt_desc: Raise ``HEALTH_WARN`` when the CRUSH ``straw_calc_version`` is zero. See
+ :ref:`CRUSH map tunables <crush-map-tunables>` for details.
default: true
services:
- mgr
long_desc: Having mon_osd_down_out_interval set to 0 means that down OSDs are not
marked out automatically and the cluster does not heal itself without administrator
intervention.
+ fmt_desc: Raise ``HEALTH_WARN`` when ``mon_osd_down_out_interval`` is zero. Setting this
+   option to zero on the leader acts much like the ``noout`` flag: the cluster behaves as
+   though ``noout`` were set even though the flag is absent, which makes problems hard to
+   diagnose, so a warning is reported in this case.
default: true
services:
- mgr
level: advanced
desc: issue CACHE_POOL_NO_HIT_SET health warning for cache pools that do not have
hit sets configured
+ fmt_desc: Raise ``HEALTH_WARN`` when a cache pool does not have the ``hit_set_type``
+ value configured. See :ref:`hit_set_type <hit_set_type>` for more details.
default: true
services:
- mgr
type: bool
level: advanced
desc: Issue a health warning if any pool is configured with no replicas
+ fmt_desc: Raise ``HEALTH_WARN`` if any pool is configured with no replicas.
default: true
services:
- mon
type: float
level: advanced
desc: Override mon_warn_on_slow_ping_ratio with specified threshold in milliseconds
+ fmt_desc: Override ``mon_warn_on_slow_ping_ratio`` with a specific value.
+ Raise ``HEALTH_WARN`` if any heartbeat between OSDs exceeds
+ ``mon_warn_on_slow_ping_time`` milliseconds. The default is 0 (disabled).
default: 0
services:
- mgr
type: float
level: advanced
desc: Issue a health warning if heartbeat ping longer than percentage of osd_heartbeat_grace
+ fmt_desc: Raise ``HEALTH_WARN`` when any heartbeat between OSDs exceeds
+ ``mon_warn_on_slow_ping_ratio`` of ``osd_heartbeat_grace``.
default: 0.05
services:
- mgr
type: bool
level: advanced
desc: log monitor health to cluster log
+ fmt_desc: Enable sending a health summary to the cluster log periodically.
default: true
services:
- mon
type: int
level: advanced
desc: frequency to log monitor health to cluster log
+ fmt_desc: How often (in seconds) the monitor sends a health summary to the cluster
+   log (a non-positive number disables this). Monitors always send a summary to the
+   cluster log whether or not it differs from the previous summary.
default: 10_min
services:
- mon
- name: mon_health_to_clog_tick_interval
type: float
level: dev
+ fmt_desc: How often (in seconds) the monitor sends a health summary to the cluster
+   log (a non-positive number disables this). If the current health summary is empty
+   or identical to the previous one, the monitor does not send it to the cluster log.
default: 1_min
services:
- mon
type: int
level: advanced
desc: issue MON_DISK_CRIT health error when mon available space below this percentage
+ fmt_desc: Raise ``HEALTH_ERR`` status when the filesystem that houses a
+ monitor's data store reports that its available capacity is
+ less than or equal to this percentage.
default: 5
services:
- mon
type: int
level: advanced
desc: issue MON_DISK_LOW health warning when mon available space below this percentage
+ fmt_desc: Raise ``HEALTH_WARN`` status when the filesystem that houses a
+   monitor's data store reports that its available capacity is
+   less than or equal to this percentage.
default: 30
services:
- mon
type: size
level: advanced
desc: issue MON_DISK_BIG health warning when mon database is above this size
+ fmt_desc: Raise ``HEALTH_WARN`` status when a monitor's data
+ store grows to be larger than this size, 15GB by default.
default: 15_G
services:
- mon
type: bool
level: advanced
desc: force mon to rejoin quorum even though it was just removed
+ fmt_desc: Force the monitor to join the quorum even if it has previously been removed from the map.
default: false
services:
- mon