From a9314f0a172b9607180a9212fbe405654365d56a Mon Sep 17 00:00:00 2001
From: David Galloway
Date: Fri, 8 Sep 2017 10:46:21 -0400
Subject: [PATCH] doc/rados/operations/bluestore-migration: Add instruction for
 evacuating host

Signed-off-by: David Galloway
---
 doc/rados/operations/bluestore-migration.rst | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/doc/rados/operations/bluestore-migration.rst b/doc/rados/operations/bluestore-migration.rst
index ac9b44beccfc..0bf05cec9b83 100644
--- a/doc/rados/operations/bluestore-migration.rst
+++ b/doc/rados/operations/bluestore-migration.rst
@@ -126,6 +126,20 @@ the data migrating only once.
 
      ceph osd crush add-bucket $NEWHOST host
 
+   If you would like to use an existing host that is already part of the
+   cluster, and there is sufficient free space on that host so that all of
+   its data can be migrated off, then you can instead do::
+
+     ceph osd crush unlink $NEWHOST default
+
+   where "default" is the immediate ancestor in the CRUSH map.  (For smaller
+   clusters with unmodified configurations this will normally be "default",
+   but it might also be a rack name.)  This will move the host out of the
+   CRUSH hierarchy and cause all data to be migrated off.  Once it is
+   completely empty of data, you can proceed::
+
+     while ! ceph osd safe-to-destroy $(ceph osd ls-tree $NEWHOST); do sleep 60 ; done
+
 #. Provision new BlueStore OSDs for all devices::
 
      ceph-disk prepare --bluestore /dev/$DEVICE
-- 
2.47.3
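The wait loop added by this patch polls `ceph osd safe-to-destroy` on every OSD under the host (as listed by `ceph osd ls-tree`) until all of them are safe. Its control flow can be sketched in plain shell; here a hypothetical stub function shadows the real `ceph` CLI so the loop can be followed without a live cluster (the stub's OSD ids and its "safe after three polls" countdown are assumptions for illustration only):

```shell
#!/bin/sh
# Hypothetical stub standing in for the real `ceph` CLI: it reports a
# fixed set of OSD ids for the host and pretends they become safe to
# destroy on the third poll.
polls=0
ceph() {
  case "$1 $2" in
    "osd ls-tree")
      echo "2 5 7"                  # OSD ids under $NEWHOST (made up)
      ;;
    "osd safe-to-destroy")
      polls=$((polls + 1))
      [ "$polls" -ge 3 ]            # unsafe on polls 1-2, safe on poll 3
      ;;
  esac
}

NEWHOST=oldhost01                   # hypothetical host being evacuated
# Same shape as the documented loop; the sleep is shortened for the demo.
while ! ceph osd safe-to-destroy $(ceph osd ls-tree $NEWHOST); do
  sleep 1
done
echo "OSDs on $NEWHOST are safe to destroy after $polls checks"
```

The inner `$(ceph osd ls-tree $NEWHOST)` expands to the host's OSD ids, so the outer call checks all of them at once, and the loop only exits when `safe-to-destroy` succeeds for the whole set.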