Automatically zapping the devices will likely break something one day or
another. If ceph-disk complains about a disk, just run the
purge-cluster.yml playbook first, as it will wipe all the devices.
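
For illustration, assuming the purge playbook still sits at the repository
root and the inventory file is named "hosts" (both assumptions, adjust to
your setup), a purge run looks like:

    ansible-playbook -i hosts purge-cluster.yml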
Signed-off-by: Sébastien Han <seb@redhat.com>
## SCENARIO 1: JOURNAL AND OSD_DATA ON THE SAME DEVICE
- include: ../check_devices.yml
-- include: ../zap_devices.yml
# NOTE (leseb): the prepare process must be parallelized somehow...
# if you have 64 disks with 4TB each, this will take a while
## SCENARIO 3: N JOURNAL DEVICES FOR N OSDS
- include: ../check_devices.yml
-- include: ../zap_devices.yml
# NOTE (leseb): the prepare process must be parallelized somehow...
# if you have 64 disks with 4TB each, this will take a while
+++ /dev/null
----
-# NOTE (leseb): some devices might miss a partition label, which will result
-# in ceph-disk failing to prepare the OSD. Thus zapping them prior to preparing
-# the OSD ensures that the device will get successfully prepared.
-- name: erasing partitions and labels from osd disk(s)
-  command: ceph-disk zap {{ item.2 }}
-  changed_when: false
-  with_together:
-    - parted.results
-    - ispartition.results
-    - devices
-  when:
-    item.0.rc != 0 and
-    item.1.rc != 0 and
-    zap_devices and
-    (journal_collocation or raw_multi_journal)
-
-- name: erasing partitions and labels from the journal device(s)
-  command: ceph-disk zap {{ item.2 }}
-  changed_when: false
-  with_together:
-    - parted.results
-    - ispartition.results
-    - raw_journal_devices
-  when:
-    item.0.rc != 0 and
-    item.1.rc != 0 and
-    zap_devices and
-    raw_multi_journal
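
For reference, the removed tasks boil down to running ceph-disk by hand
against every device that lacks a partition label; the device path below
is only a placeholder:

    ceph-disk zap /dev/sdb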