umount /var/lib/ceph/osd/ceph-$ID
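   Note the device that was mounted at this path: it is needed again for the
   zap and reprovision steps below. One way to capture it before the
   ``umount`` above (a sketch, assuming util-linux ``findmnt`` is available)::

     DEVICE=$(findmnt -n -o SOURCE /var/lib/ceph/osd/ceph-$ID)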
-#. Destroy the OSD data. Be *EXTREMELY CAREFUL* as this will destroy
+#. Destroy the OSD data. Be *EXTREMELY CAREFUL* as this will destroy
the contents of the device; be certain the data on the device is
not needed (i.e., that the cluster is healthy) before proceeding. ::
- ceph-disk zap $DEVICE
+ ceph-volume lvm zap $DEVICE
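   Before running the zap above, it can be worth double-checking that the
   cluster can really do without this OSD's data; a minimal check, assuming
   Luminous or later::

     ceph osd safe-to-destroy $ID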
#. Tell the cluster the OSD has been destroyed (and a new OSD can be
reprovisioned with the same ID)::
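     # marks the OSD destroyed so that its ID can be reused
     ceph osd destroy $ID --yes-i-really-mean-it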
#. Provision a BlueStore OSD in its place with the same OSD ID. This
   requires that you identify which device to wipe based on what you saw
   mounted above. BE CAREFUL! ::
- ceph-disk prepare --bluestore $DEVICE --osd-id $ID
+     ceph-volume lvm create --bluestore --data $DEVICE --osd-id $ID
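   ``ceph-volume lvm create`` both prepares and activates the OSD, so the
   reprovisioned OSD should start and rejoin the cluster on its own. A quick
   way to confirm, assuming a systemd-based host::

     systemctl status ceph-osd@$ID
     ceph osd tree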
#. Repeat.
If you're using a new host, start at step #1. For an existing host,
jump to step #5 below.
-
+
#. Provision new BlueStore OSDs for all devices::
- ceph-disk prepare --bluestore /dev/$DEVICE
+ ceph-volume lvm create --bluestore --data /dev/$DEVICE
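   The device name above is a placeholder; a sketch of provisioning several
   data devices in one go (substitute your own device list)::

     for DEVICE in sdb sdc sdd; do
         ceph-volume lvm create --bluestore --data /dev/$DEVICE
     done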
#. Verify OSDs join the cluster with::
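     # e.g., check that the new OSDs appear and are marked "up"
     ceph osd tree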
#. Wipe the old OSD devices. This requires that you identify which
   devices are to be wiped manually (BE CAREFUL!). For each device::
- ceph-disk zap $DEVICE
+ ceph-volume lvm zap $DEVICE
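   One way to find candidates is to look at what is (or was) mounted under
   ``/var/lib/ceph/osd`` on the old host; a sketch, assuming the old
   FileStore mounts are still present::

     mount | grep /var/lib/ceph/osd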
#. Use the now-empty host as the new host, and repeat::
Caveats:
* This strategy requires that a blank BlueStore OSD be prepared
- without allocating a new OSD ID, something that the ``ceph-disk``
+ without allocating a new OSD ID, something that the ``ceph-volume``
tool doesn't support. More importantly, the setup of *dmcrypt* is
closely tied to the OSD identity, which means that this approach
does not work with encrypted OSDs.