From: David Galloway
Date: Fri, 8 Sep 2017 14:46:21 +0000 (-0400)
Subject: doc/rados/operations/bluestore-migration: Add instruction for evacuating host
X-Git-Tag: v13.0.1~990^2
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=a9314f0a172b9607180a9212fbe405654365d56a;p=ceph.git

doc/rados/operations/bluestore-migration: Add instruction for evacuating host

Signed-off-by: David Galloway
---

diff --git a/doc/rados/operations/bluestore-migration.rst b/doc/rados/operations/bluestore-migration.rst
index ac9b44beccfc..0bf05cec9b83 100644
--- a/doc/rados/operations/bluestore-migration.rst
+++ b/doc/rados/operations/bluestore-migration.rst
@@ -126,6 +126,20 @@ the data migrating only once.
 
      ceph osd crush add-bucket $NEWHOST host
 
+   If you would like to use an existing host that is already part of the
+   cluster, and there is sufficient free space on that host so that all of
+   its data can be migrated off, then you can instead do::
+
+     ceph osd crush unlink $NEWHOST default
+
+   where "default" is the immediate ancestor of $NEWHOST in the CRUSH map.
+   (For smaller clusters with unmodified configurations this will normally
+   be "default", but it might also be a rack name.) This will move the host
+   out of the CRUSH hierarchy and cause all data to be migrated off. Once
+   the host is completely empty of data, you can proceed::
+
+     while ! ceph osd safe-to-destroy $(ceph osd ls-tree $NEWHOST); do sleep 60 ; done
+
 #. Provision new BlueStore OSDs for all devices::
 
      ceph-disk prepare --bluestore /dev/$DEVICE
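
Note for reviewers: a minimal sketch of the evacuation flow this patch
documents, assuming a hypothetical host bucket named "node-x" that sits
directly under the "default" root (both names stand in for real cluster
values):

    # Detach the host bucket from its CRUSH ancestor ("default" here);
    # CRUSH stops mapping PGs to its OSDs and data begins migrating off.
    ceph osd crush unlink node-x default

    # Poll once a minute until every OSD under the detached bucket is
    # safe to destroy, i.e. all of its PGs are fully replicated elsewhere.
    while ! ceph osd safe-to-destroy $(ceph osd ls-tree node-x); do
        sleep 60
    done

The loop works because "ceph osd safe-to-destroy" exits nonzero while any
of the listed OSDs still holds data that is not yet replicated elsewhere,
so the shell keeps sleeping until the host is truly empty.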