This test deletes the CephFS already present on the cluster at the very
beginning and unmounts the first client beforehand. But it leaves the
second client mounted on this now-deleted CephFS for the rest of the
test. At the very end, during tearDown(), it attempts to remount the
second client, which hangs and causes the test runner to crash.
Unmount the second client beforehand as well to prevent the bug, and
delete the mount_b object so future readers are not left wondering
whether the second mountpoint still exists.
Fixes: https://tracker.ceph.com/issues/66077
Signed-off-by: Rishabh Dave <ridave@redhat.com>
(cherry picked from commit 2130ec8ebc377364a11be7448ed2773b46b464c0)
characters
"""
self.mount_a.umount_wait(require_clean=True)
+ # let's unmount both clients before deleting the FS
+ self.mount_b.umount_wait(require_clean=True)
self.mds_cluster.delete_all_filesystems()
fs_name = "cephfs-_."
self.fs = self.mds_cluster.newfs(name=fs_name)
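The hunk above only shows the unmount; the rest of the fix, as described in the
commit message, also drops the mount_b attribute. Below is a minimal,
self-contained sketch of that pattern, not the actual Ceph test framework: the
FakeMount/FakeTest classes are hypothetical stand-ins, and it assumes the
attribute is removed with a plain `del` so tearDown() has nothing left to
remount against the deleted filesystem.

```python
class FakeMount:
    """Hypothetical stand-in for a CephFS client mount."""
    def __init__(self, name):
        self.name = name
        self.mounted = True

    def umount_wait(self, require_clean=False):
        print(f"unmounting {self.name} (require_clean={require_clean})")
        self.mounted = False


class FakeTest:
    def setUp(self):
        self.mount_a = FakeMount("client.a")
        self.mount_b = FakeMount("client.b")

    def test_fs_recreate(self):
        self.mount_a.umount_wait(require_clean=True)
        # Unmount the second client too, then delete the attribute so
        # tearDown() cannot try to remount it against a deleted FS.
        self.mount_b.umount_wait(require_clean=True)
        del self.mount_b  # assumption: the attribute is removed outright

    def tearDown(self):
        # Only remount clients that still exist; without the `del` above,
        # this is where the original test would hang on mount_b.
        for attr in ("mount_a", "mount_b"):
            mount = getattr(self, attr, None)
            if mount is not None and not mount.mounted:
                print(f"remounting {mount.name}")


t = FakeTest()
t.setUp()
t.test_fs_recreate()
t.tearDown()  # remounts only client.a; client.b was unmounted and deleted
```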