From 639c9818fe3b646c0f9bd446683c84d46c102462 Mon Sep 17 00:00:00 2001
From: Loic Dachary <loic@dachary.org>
Date: Fri, 19 Sep 2014 17:19:41 +0200
Subject: [PATCH] documentation: explain ceph osd reweight vs crush weight

Using the wording from Gregory Farnum at
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-June/040961.html

Signed-off-by: Loic Dachary <loic@dachary.org>
---
 doc/rados/operations/control.rst | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/doc/rados/operations/control.rst b/doc/rados/operations/control.rst
index f3d7b7c886c8..0e7237e8fc7b 100644
--- a/doc/rados/operations/control.rst
+++ b/doc/rados/operations/control.rst
@@ -203,9 +203,16 @@ resending pending requests. ::
 
 	ceph osd pause
 	ceph osd unpause
 
-Set the weight of ``{osd-num}`` to ``{weight}``. Two OSDs with the same weight will receive
-roughly the same number of I/O requests and store approximately the
-same amount of data. ::
+Set the weight of ``{osd-num}`` to ``{weight}``. Two OSDs with the
+same weight will receive roughly the same number of I/O requests and
+store approximately the same amount of data. ``ceph osd reweight``
+sets an override weight on the OSD. This value is in the range 0 to 1,
+and forces CRUSH to re-place (1 - weight) of the data that would
+otherwise live on this drive. It does not change the weights assigned
+to the buckets above the OSD in the CRUSH map, and is a corrective
+measure in case the normal CRUSH distribution isn't working out quite
+right. For instance, if one of your OSDs is at 90% and the others are
+at 50%, you could reduce this weight to try to compensate for it. ::
 
 	ceph osd reweight {osd-num} {weight}
-- 
2.47.3
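
For illustration, a hypothetical session (the OSD number and utilization
figures are assumed, not from the patch): per the semantics described
above, an override weight of 0.85 asks CRUSH to re-place roughly 15% of
the data that would otherwise land on that OSD, and resetting the weight
to 1 clears the override. ::

	# osd.3 sits near 90% utilization while its peers are near 50% (assumed)
	ceph osd reweight 3 0.85
	# once utilization evens out, remove the override
	ceph osd reweight 3 1.0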