From: Anand Jain
Date: Wed, 12 Feb 2020 09:35:09 +0000 (+0800)
Subject: btrfs/179: call sync qgroup counts
X-Git-Tag: v2022.05.01~862
X-Git-Url: http://git.apps.os.sepia.ceph.com/?p=xfstests-dev.git;a=commitdiff_plain;h=26e6cda8aaf8f00822252aff171d83095fb6dc8f

btrfs/179: call sync qgroup counts

On some systems btrfs/179 fails because the post-test check finds a
difference in the qgroup counts.

Since the intention of this test case is to exercise hang-like situations
during heavy snapshot create/delete operations with quota enabled, make
sure the qgroup counts are consistent at the end of the test so that the
check passes.

Signed-off-by: Anand Jain
Reviewed-by: Qu Wenruo
Signed-off-by: Eryu Guan
---

diff --git a/tests/btrfs/179 b/tests/btrfs/179
index 4a24ea41..8795d59c 100755
--- a/tests/btrfs/179
+++ b/tests/btrfs/179
@@ -109,6 +109,15 @@ wait $snapshot_pid
 kill $delete_pid
 wait $delete_pid
 
+# Because qgroup tree scan and subvolume delete are asynchronous, the
+# qgroup counts at the time of umount might not be up to date; if they
+# aren't, the check reports a difference in the counts. The counts are
+# updated on the following mount anyway, so this is not a real issue
+# and not what this test case tries to verify. So make sure the qgroup
+# counts are in sync before the unmount happens.
+
+$BTRFS_UTIL_PROG subvolume sync $SCRATCH_MNT >> $seqres.full
+
 # success, all done
 echo "Silence is golden"
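As a standalone illustration of the fix, the sketch below shows the pattern the patch adds: blocking on `btrfs subvolume sync` so that pending subvolume cleanups (and hence the qgroup counts) settle before the filesystem is unmounted and checked. `BTRFS_UTIL_PROG` and `SCRATCH_MNT` mirror the xfstests variables but are stubbed here with hypothetical defaults so the sketch is self-contained; this is only an assumed shape, not part of the patch itself.

```shell
#!/bin/sh
# Hypothetical stand-ins for the xfstests environment variables;
# real runs would have these set by the harness.
BTRFS_UTIL_PROG=${BTRFS_UTIL_PROG:-btrfs}
SCRATCH_MNT=${SCRATCH_MNT:-/mnt/scratch}

# "subvolume sync" waits until deleted subvolumes are fully cleaned
# up, letting the asynchronous qgroup accounting catch up before the
# post-test fsck runs. Here we only build and print the command line
# rather than running it, since no btrfs mount is assumed.
sync_cmd="$BTRFS_UTIL_PROG subvolume sync $SCRATCH_MNT"
echo "$sync_cmd"
```

In the actual test, the command's output is appended to `$seqres.full` so it is kept with the test's full log rather than polluting the golden output.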