ceph.restart should mark restarted osds down in order to avoid a
race condition with ceph_manager.wait_for_clean
Fixes: http://tracker.ceph.com/issues/15778
Signed-off-by: Warren Usui <wusui@redhat.com>
(manual cherry pick of 1b7552c9cb331978cb0bfd4d7dc4dcde4186c176)
Conflicts:
	qa/tasks/ceph.py (original commit was in ceph/ceph-qa-suite.git)
     if config.get('wait-for-osds-up', False):
         for cluster in clusters:
             wait_for_osds_up(ctx=ctx, config=dict(cluster=cluster))
+    manager = ctx.managers['ceph']
+    for dmon in daemons:
+        if '.' in dmon:
+            dm_parts = dmon.split('.')
+            if dm_parts[1].isdigit():
+                if dm_parts[0] == 'osd':
+                    manager.mark_down_osd(int(dm_parts[1]))
     yield
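
For reference, a minimal standalone sketch (not part of the commit) of the daemon-name filtering the added hunk performs, assuming a CephManager-like object whose mark_down_osd(id) marks the given OSD down (in the qa suite this is done by issuing 'ceph osd down <id>'):

    # Illustrative sketch only: pick out the OSD daemons from a list of
    # restarted daemon names such as 'osd.3' or 'mon.a' and mark them down,
    # so a later wait_for_clean does not race against stale "up" state.
    def mark_restarted_osds_down(manager, daemons):
        for dmon in daemons:
            if '.' not in dmon:
                continue
            role, _, ident = dmon.partition('.')
            if role == 'osd' and ident.isdigit():
                # assumed CephManager-style helper
                manager.mark_down_osd(int(ident))

    # Example: only osd.0 and osd.2 would be marked down.
    # mark_restarted_osds_down(manager, ['mon.a', 'osd.0', 'mds.a', 'osd.2'])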