ceph.restart now marks the osds down, so the objects are actually being
created while the slowest of the osds boots. That causes a ton of 1-byte
objects to be created in a degraded state and makes the cleanup take a
long time. Also, reduce the length of the bench, since it is only being
used to ensure that the osds came up correctly.
Signed-off-by: Samuel Just <sjust@redhat.com>
 - ceph.restart: [mon.a, mon.b, mon.c, osd.0, osd.1, osd.2]
 - ceph_manager.wait_for_clean: null
 - ceph.restart: [osd.0, osd.1, osd.2]
+- ceph_manager.wait_for_clean: null
 - exec:
     osd.0:
       - sudo grep -c 'unable to peek at' /var/log/ceph/ceph-osd.0.log
 - radosbench:
     clients: [client.0]
-    time: 30
+    time: 5
     size: 1
-- ceph_manager.wait_for_clean: null
 - ceph.restart: [osd.0, osd.1, osd.2]
 - ceph_manager.wait_for_clean: null
 - print: '**** done verifying final upgrade'
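
With the hunk applied, the relevant part of the final-upgrade workload reads roughly as below: the cluster must report clean after the restart before the (now much shorter) benchmark writes its 1-byte objects. This is an abridged sketch assembled from the diff above, not the full suite file; the exec/grep step is omitted and the indentation under the suite's enclosing task list may differ in the actual file.

- ceph.restart: [osd.0, osd.1, osd.2]
- ceph_manager.wait_for_clean: null
- radosbench:
    clients: [client.0]
    time: 5
    size: 1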