From: Ramana Raja
Date: Wed, 29 Nov 2023 16:25:30 +0000 (-0500)
Subject: qa/workunits/rbd/cli_generic.sh: narrow race window
X-Git-Tag: v19.0.0~14^2
X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=ea033fe8607c2b31892536afc3f08f3009b24139;p=ceph-ci.git

qa/workunits/rbd/cli_generic.sh: narrow race window

... when checking whether an rbd_support module command fails after
blocklisting the module's client.

In tests that check the recovery of the rbd_support module after its
client is blocklisted, the module's client is blocklisted using the
`osd blocklist add` command. Next, the `osd blocklist ls` command is
issued to confirm that the client is blocklisted. An rbd_support module
command is then issued and expected to fail, in order to verify that
the blocklisting has affected the rbd_support module's operations.
Occasionally, before this rbd_support module command reached the
ceph-mgr, the rbd_support module had already detected the blocklisting
and recovered from it, and was able to serve the command, so the
expected failure did not occur. To narrow the race window when
verifying that the rbd_support module's operation is affected by
client blocklisting, get rid of the `osd blocklist ls` command.
Fixes: https://tracker.ceph.com/issues/63673
Signed-off-by: Ramana Raja
---

diff --git a/qa/workunits/rbd/cli_generic.sh b/qa/workunits/rbd/cli_generic.sh
index 57279d26dce..c35bbe8f83e 100755
--- a/qa/workunits/rbd/cli_generic.sh
+++ b/qa/workunits/rbd/cli_generic.sh
@@ -1261,7 +1261,6 @@ test_trash_purge_schedule_recovery() {
         jq 'select(.name == "rbd_support")' |
         jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add')
     ceph osd blocklist add $CLIENT_ADDR
-    ceph osd blocklist ls | grep $CLIENT_ADDR
 
     # Check that you can add a trash purge schedule after a few retries
     expect_fail rbd trash purge schedule add -p rbd3 10m
@@ -1420,7 +1419,6 @@ test_mirror_snapshot_schedule_recovery() {
         jq 'select(.name == "rbd_support")' |
         jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add')
     ceph osd blocklist add $CLIENT_ADDR
-    ceph osd blocklist ls | grep $CLIENT_ADDR
 
     # Check that you can add a mirror snapshot schedule after a few retries
     expect_fail rbd mirror snapshot schedule add -p rbd3/ns1 --image test1 2m
@@ -1529,7 +1527,6 @@ test_perf_image_iostat_recovery() {
         jq 'select(.name == "rbd_support")' |
         jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add')
     ceph osd blocklist add $CLIENT_ADDR
-    ceph osd blocklist ls | grep $CLIENT_ADDR
 
     expect_fail rbd perf image iostat --format json rbd3/ns
     sleep 10
@@ -1661,7 +1658,6 @@ test_tasks_recovery() {
         jq 'select(.name == "rbd_support")' |
         jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add')
     ceph osd blocklist add $CLIENT_ADDR
-    ceph osd blocklist ls | grep $CLIENT_ADDR
 
     expect_fail ceph rbd task add flatten rbd2/clone1
     sleep 10