From: Wido den Hollander
Date: Wed, 6 Jun 2018 11:04:58 +0000 (+0200)
Subject: common/config: Add description to (near)full ratio settings
X-Git-Tag: v14.0.1~673^2
X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=1213f0f9f152790b45bebe3044bcffc0f4b8bb07;p=ceph.git

common/config: Add description to (near)full ratio settings

For many users it is not clear that these settings apply only to the
initial creation of the cluster and that afterwards these commands need
to be used:

ceph osd set-nearfull-ratio
ceph osd set-full-ratio

Signed-off-by: Wido den Hollander
---

diff --git a/doc/rados/configuration/mon-config-ref.rst b/doc/rados/configuration/mon-config-ref.rst
index 4cdd7889cfff2..e0e990c19cae4 100644
--- a/doc/rados/configuration/mon-config-ref.rst
+++ b/doc/rados/configuration/mon-config-ref.rst
@@ -520,6 +520,9 @@ you expect to fail to arrive at a reasonable full ratio. Repeat the foregoing
 process with a higher number of OSD failures (e.g., a rack of OSDs) to arrive
 at a reasonable number for a near full ratio.
 
+The following settings apply only at cluster creation and are then stored in
+the OSDMap.
+
 .. code-block:: ini
 
     [global]
@@ -559,6 +562,10 @@ a reasonable number for a near full ratio.
 .. tip:: If some OSDs are nearfull, but others have plenty of capacity, you
          may have a problem with the CRUSH weight for the nearfull OSDs.
 
+.. tip:: These settings only apply during cluster creation. Afterwards they
+         need to be changed in the OSDMap using ``ceph osd set-nearfull-ratio``
+         and ``ceph osd set-full-ratio``.
+
 .. index:: heartbeat
 
 Heartbeat
diff --git a/doc/rados/troubleshooting/troubleshooting-osd.rst b/doc/rados/troubleshooting/troubleshooting-osd.rst
index 2ca5fdbe8b701..d917e97d7ccfb 100644
--- a/doc/rados/troubleshooting/troubleshooting-osd.rst
+++ b/doc/rados/troubleshooting/troubleshooting-osd.rst
@@ -208,16 +208,29 @@ is getting near its full ratio.
 The ``mon osd full ratio`` defaults to ``0.95``, or 95% of capacity
 before it stops clients from writing data. The ``mon osd backfillfull
 ratio`` defaults to ``0.90``, or 90 % of capacity when it blocks backfills
 from starting. The
-``mon osd nearfull ratio`` defaults to ``0.85``, or 85% of capacity
+OSD nearfull ratio defaults to ``0.85``, or 85% of capacity
 when it generates a health warning.
 
+It can be changed at runtime with:
+
+::
+
+    ceph osd set-nearfull-ratio <float[0.0-1.0]>
+
+
 Full cluster issues usually arise when testing how Ceph handles an OSD
 failure on a small cluster. When one node has a high percentage of the
 cluster's data, the cluster can easily eclipse its nearfull and full ratio
 immediately. If you are testing how Ceph reacts to OSD failures on a small
 cluster, you should leave ample free disk space and consider temporarily
-lowering the ``mon osd full ratio``, ``mon osd backfillfull ratio`` and
-``mon osd nearfull ratio``.
+lowering the OSD ``full ratio``, OSD ``backfillfull ratio``, and
+OSD ``nearfull ratio`` using these commands:
+
+::
+
+    ceph osd set-nearfull-ratio <float[0.0-1.0]>
+    ceph osd set-full-ratio <float[0.0-1.0]>
+    ceph osd set-backfillfull-ratio <float[0.0-1.0]>
 
 Full ``ceph-osds`` will be reported by ``ceph health``::
diff --git a/src/common/options.cc b/src/common/options.cc
index 6ab65b1a8f697..d0052e3fbe6ec 100644
--- a/src/common/options.cc
+++ b/src/common/options.cc
@@ -1389,7 +1389,7 @@ std::vector<Option> get_global_options() {
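
A minimal runtime sketch of the workflow the updated docs describe. The ratio
values below are the documented defaults and are used purely as an
illustration; pick values that suit your cluster:

    # Adjust the ratios stored in the OSDMap at runtime
    ceph osd set-nearfull-ratio 0.85
    ceph osd set-backfillfull-ratio 0.90
    ceph osd set-full-ratio 0.95

    # Verify the ratios now recorded in the OSDMap
    ceph osd dump | grep ratio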