From: Loic Dachary
Date: Fri, 19 Sep 2014 15:19:41 +0000 (+0200)
Subject: documentation: explain ceph osd reweight vs crush weight
X-Git-Tag: v0.88~168^2
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=refs%2Fpull%2F2536%2Fhead;p=ceph.git

documentation: explain ceph osd reweight vs crush weight

Using the wording from Gregory Farnum at
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-June/040961.html

Signed-off-by: Loic Dachary
---

diff --git a/doc/rados/operations/control.rst b/doc/rados/operations/control.rst
index f3d7b7c886c8..0e7237e8fc7b 100644
--- a/doc/rados/operations/control.rst
+++ b/doc/rados/operations/control.rst
@@ -203,9 +203,16 @@ resending pending requests. ::
 
 	ceph osd pause
 	ceph osd unpause
 
-Set the weight of ``{osd-num}`` to ``{weight}``. Two OSDs with the same weight will receive
-roughly the same number of I/O requests and store approximately the
-same amount of data. ::
+Set the weight of ``{osd-num}`` to ``{weight}``. Two OSDs with the
+same weight will receive roughly the same number of I/O requests and
+store approximately the same amount of data. ``ceph osd reweight``
+sets an override weight on the OSD. This value is in the range 0 to 1,
+and forces CRUSH to re-place (1-weight) of the data that would
+otherwise live on this drive. It does not change the weights assigned
+to the buckets above the OSD in the crush map, and is a corrective
+measure in case the normal CRUSH distribution isn't working out quite
+right. For instance, if one of your OSDs is at 90% and the others are
+at 50%, you could reduce this weight to try and compensate for it. ::
 
 	ceph osd reweight {osd-num} {weight}
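
As a hedged illustration of the behaviour the new text describes (the OSD id 7
and the weight 0.8 below are hypothetical examples, not taken from the commit):

	# osd.7 is assumed to be fuller than its peers; an override weight of 0.8
	# asks CRUSH to re-place roughly (1 - 0.8) = 20% of the data that would
	# otherwise map to it, without touching the crush map bucket weights.
	ceph osd reweight 7 0.8

	# once the distribution evens out, restore the default override weight
	ceph osd reweight 7 1.0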