From: Qu Wenruo
Date: Tue, 9 Dec 2025 08:52:13 +0000 (+1030)
Subject: fstests: btrfs/301: use correct blocksize to fill the fs
X-Git-Tag: v2026.01.05~4
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=80259c855ad782c66133e5ed06cf569857dd1f97;p=xfstests-dev.git

fstests: btrfs/301: use correct blocksize to fill the fs

[FAILURE]
When running the test with 8K fs block size (tried both 4K page size and
64K page size), the test case btrfs/301 always fails like this:

  FSTYP         -- btrfs
  PLATFORM      -- Linux/x86_64 btrfs-vm 6.18.0-custom+ #323 SMP PREEMPT_DYNAMIC Mon Dec  8 07:38:30 ACDT 2025
  MKFS_OPTIONS  -- -s 8k /dev/mapper/test-scratch1
  MOUNT_OPTIONS -- /dev/mapper/test-scratch1 /mnt/scratch

  btrfs/301 42s ... - output mismatch (see /home/adam/xfstests/results//btrfs/301.out.bad)
      --- tests/btrfs/301.out	2024-01-02 14:44:11.140000000 +1030
      +++ /home/adam/xfstests/results//btrfs/301.out.bad	2025-12-09 19:14:32.057824678 +1030
      @@ -1,18 +1,71 @@
       QA output created by 301
       basic accounting
      +subvol 256 mismatched usage 41099264 vs 33964032 (expected data 33554432 expected meta 409600 diff 7135232)
      +subvol 256 mismatched usage 175316992 vs 168181760 (expected data 167772160 expected meta 409600 diff 7135232)
      +subvol 256 mismatched usage 41099264 vs 33964032 (expected data 33554432 expected meta 409600 diff 7135232)
      +subvol 256 mismatched usage 41099264 vs 33964032 (expected data 33554432 expected meta 409600 diff 7135232)
      +fallocate: Disk quota exceeded
      ...
      (Run 'diff -u /home/adam/xfstests/tests/btrfs/301.out /home/adam/xfstests/results//btrfs/301.out.bad' to see the entire diff)

[CAUSE]
Although the subvolume usage doesn't match the expectation, "btrfs check"
doesn't report any qgroup number mismatch. This means the qgroup numbers
are correct, but our expectation is not.

Upon inspection of the on-disk file extents, there are a lot of file
extents that are partially overwritten.
This means that during the fio random writes, there are fs blocks that
are partially written, written back to the storage, then written again.
This is a symptom of a too small IO block size. The default fio
blocksize is only 4K, which results in the above overwrites of the same
fs block for 8K fs block size.

[FIX]
Add a blocksize option to the fio config, so that we won't have the
above over-write behavior which boosts the qgroup numbers.

Signed-off-by: Qu Wenruo
Reviewed-by: David Sterba
Signed-off-by: Zorro Lang
---

diff --git a/tests/btrfs/301 b/tests/btrfs/301
index f1f33cd9..1f72a97b 100755
--- a/tests/btrfs/301
+++ b/tests/btrfs/301
@@ -32,6 +32,9 @@ fill_sz=$((64 * 1024))
 total_fill=$(($nr_fill * $fill_sz))
 nodesize=$($BTRFS_UTIL_PROG inspect-internal dump-super $SCRATCH_DEV | \
 	grep nodesize | $AWK_PROG '{print $2}')
+blocksize=$($BTRFS_UTIL_PROG inspect-internal dump-super $SCRATCH_DEV |\
+	grep sectorsize | $AWK_PROG '{print $2}')
+echo "blocksize=$blocksize" >> $seqres.full
 ext_sz=$((128 * 1024 * 1024))
 limit_nr=8
 limit=$(($ext_sz * $limit_nr))
@@ -45,6 +48,7 @@ directory=${subv}
 rw=randwrite
 nrfiles=${nr_fill}
 filesize=${fill_sz}
+blocksize=${blocksize}
 EOF

 _require_fio $fio_config
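For reference, the sectorsize extraction the patch adds can be checked
in isolation. Below is a minimal sketch (not part of the patch) that
runs the same grep/awk pipeline against canned `btrfs inspect-internal
dump-super` output, so it can be tried without a scratch device; the
sample field values are illustrative assumptions, not taken from the
failing machine.

```shell
# Canned excerpt standing in for:
#   $BTRFS_UTIL_PROG inspect-internal dump-super $SCRATCH_DEV
# (field values are illustrative only)
dump_super_sample="csum_type		0 (crc32c)
csum_size		4
sectorsize		8192
nodesize		16384"

# Same pipeline as the patch: pick the sectorsize line, print column 2.
blocksize=$(echo "$dump_super_sample" | grep sectorsize | awk '{print $2}')
echo "blocksize=$blocksize"
```

With an 8K filesystem this yields `blocksize=8192`, which fio then uses
so every random write covers whole fs blocks and no block is partially
rewritten.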