Other parts of this script leave OSDs reweighted, which can make this test
fail to go fully clean.
For example, ceph osd tree shows the leftover reweights:

  ID  CLASS  WEIGHT   NAME   STATUS  REWEIGHT  PRI-AFF
   0  ssd    0.08789  osd.0  up      0.63213   1.00000
   1  ssd    0.08789  osd.1  up      0.63213   1.00000
   2  ssd    0.08789  osd.2  up      1.00000   1.00000

and the erasure-coded pool's PG cannot map a complete up set:

  35.0 raw ([2,1,2147483647], p2) up ([2,1,2147483647], p2) acting ([2,1,2], p2)
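Here 2147483647 is CRUSH_ITEM_NONE: with two OSDs reweighted down, CRUSH
cannot find a third distinct OSD for the three-shard EC PG. A minimal sketch
of reproducing the stuck mapping by hand (assumes a fresh 3-OSD test cluster;
the weights and the PG id are copied from the output above):

  # Shrink two OSDs so CRUSH struggles to place the third EC shard.
  ceph osd reweight 0 0.63213
  ceph osd reweight 1 0.63213
  # The mapping may now show 2147483647 (CRUSH_ITEM_NONE) in the up set.
  ceph pg map 35.0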
Fix by just deleting this pool when we're done.
Fixes: https://tracker.ceph.com/issues/44067
Signed-off-by: Sage Weil <sage@redhat.com>
The fix is a one-line addition to the test script:

 check_response 'not change the size'
 set -e
 ceph osd pool get pool_erasure erasure_code_profile
+ceph osd pool rm pool_erasure pool_erasure --yes-i-really-really-mean-it
 for flag in nodelete nopgchange nosizechange write_fadvise_dontneed noscrub nodeep-scrub; do
     ceph osd pool set $TEST_POOL_GETSET $flag false
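A quick way to verify the cleanup in a local run (a hypothetical check, not
part of the patch):

  # The pool should be gone, and the cluster should go fully clean again.
  ceph osd pool ls | grep -q '^pool_erasure$' || echo "pool_erasure deleted"
  ceph pg stat    # expect all PGs active+clean once peering settles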