Thrashing the osdmap wreaks havoc on our QA: it marks osds up and down, and
immediately afterward we try to scrub while some osds are still down.
Adjust the CLI test to wait for all OSDs to come back up after thrashing.
Signed-off-by: Sage Weil <sage@inktank.com>
    break
  fi
done
+
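+# thrash the osdmap: osds will be marked down and back up along the way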
+ceph osd thrash 10
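+# wait (up to 100 * 10s) for any osd still reported 'down in' to come back up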
+for ((i=0; i < 100; i++)); do
+  if ceph osd dump | grep 'down in'; then
+    echo "waiting for osd(s) to come back up"
+    sleep 10
+  else
+    break
+  fi
+done
+
ceph osd dump | grep 'osd.0 up'
ceph osd find 1
ceph osd metadata 1 | grep 'distro'
ceph osd pool get rbd crush_ruleset | grep 'crush_ruleset: 0'
-ceph osd thrash 10
-
set +e
r = expect('osd/pool/get.json?pool=rbd&var=crush_ruleset', 'GET', 200, 'json')
assert(r.myjson['output']['crush_ruleset'] == 0)
- expect('osd/thrash?num_epochs=10', 'PUT', 200, '')
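+# do not thrash here: it marks osds down, and later checks may run while
+# some osds are still down (thrash is exercised by the cephtool cli test,
+# which waits for the osds to recover)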
print 'OK'