From: Paul Reece
Date: Thu, 6 May 2021 18:09:34 +0000 (-0400)
Subject: doc: added documentation on additional throttling options for the PG balancer module
X-Git-Tag: v17.1.0~2017^2
X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=eafa979b7972be72f7bde1dd29f40b403f29a496;p=ceph-ci.git

doc: added documentation on additional throttling options for the PG balancer module

Signed-off-by: Paul Reece
---

diff --git a/doc/rados/operations/balancer.rst b/doc/rados/operations/balancer.rst
index 5146c23c270..3183011c9b5 100644
--- a/doc/rados/operations/balancer.rst
+++ b/doc/rados/operations/balancer.rst
@@ -44,6 +44,34 @@ be moved) is below a threshold of (by default) 5%. The
 
     ceph config set mgr mgr/balancer/max_misplaced .07 # 7%
 
+Set the number of seconds to sleep between runs of the automatic balancer::
+
+    ceph config set mgr mgr/balancer/sleep_interval 60
+
+Set the time of day at which automatic balancing begins, in HHMM format::
+
+    ceph config set mgr mgr/balancer/begin_time 0000
+
+Set the time of day at which automatic balancing ends, in HHMM format::
+
+    ceph config set mgr mgr/balancer/end_time 2400
+
+Restrict automatic balancing to this day of the week or later.
+Uses the same conventions as crontab: 0 or 7 is Sunday, 1 is Monday, and so on::
+
+    ceph config set mgr mgr/balancer/begin_weekday 0
+
+Restrict automatic balancing to this day of the week or earlier.
+Uses the same conventions as crontab: 0 or 7 is Sunday, 1 is Monday, and so on::
+
+    ceph config set mgr mgr/balancer/end_weekday 7
+
+Pool IDs to which the automatic balancing will be limited.
+The default is an empty string, meaning all pools will be balanced.
+The numeric pool IDs can be obtained with the :command:`ceph osd pool ls detail` command::
+
+    ceph config set mgr mgr/balancer/pool_ids 1,2,3
+
 Modes
 -----
 
@@ -136,3 +164,4 @@ The quality of the distribution that would result after executing a plan can be
 Assuming the plan is expected to improve the distribution (i.e., it has a lower score than the current cluster state), the user can execute that plan with::
 
     ceph balancer execute <plan-name>
+
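
As a usage sketch (not part of the patch above), the new options can be combined to confine automatic balancing to a maintenance window. The times, weekdays, and pool IDs below are hypothetical examples chosen to mirror the descriptions in the patch, and ``ceph config dump`` is used only to review the resulting settings::

    # Hypothetical window: begin balancing at 01:00 and end at 06:00,
    # beginning on weekday 1 (Monday) or later and ending on weekday 5
    # (Friday) or earlier, per the descriptions above; limit to pools 1 and 2.
    ceph config set mgr mgr/balancer/begin_time 0100
    ceph config set mgr mgr/balancer/end_time 0600
    ceph config set mgr mgr/balancer/begin_weekday 1
    ceph config set mgr mgr/balancer/end_weekday 5
    ceph config set mgr mgr/balancer/pool_ids 1,2

    # Review the balancer-related settings that are now in effect.
    ceph config dump | grep mgr/balancer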