generic/371: Fix the test to be compatible with block sizes up to 64k
When this test was run on btrfs with a 64k sector/block size, it
failed with:
QA output created by 371
Silence is golden
+fallocate: No space left on device
+pwrite: No space left on device
+fallocate: No space left on device
+pwrite: No space left on device
+pwrite: No space left on device
...
Here is what is going on. Consider the following sequence of operations:
--- With 4k sector size ---
$ mkfs.btrfs -f -b 256m -s 4k -n 4k /dev/loop0
$ mount /dev/loop0 /mnt1/scratch/
$ df -h /dev/loop0
Filesystem Size Used Avail Use% Mounted on
/dev/loop0 256M 1.5M 175M 1% /mnt1/scratch
$ xfs_io -f -c "pwrite 0 80M" /mnt1/scratch/t1
wrote 83886080/83886080 bytes at offset 0
80 MiB, 20480 ops; 0.4378 sec (182.693 MiB/sec and 46769.3095 ops/sec)
$ df -h /dev/loop0
Filesystem Size Used Avail Use% Mounted on
/dev/loop0 256M 1.5M 175M 1% /mnt1/scratch
$ sync
$ df -h /dev/loop0
Filesystem Size Used Avail Use% Mounted on
/dev/loop0 256M 82M 95M 47% /mnt1/scratch
$ xfs_io -f -c "pwrite 0 80M" /mnt1/scratch/t2
wrote 83886080/83886080 bytes at offset 0
80 MiB, 20480 ops; 0:00:01.25 (63.881 MiB/sec and 16353.4648 ops/sec)
$ df -h /dev/loop0
Filesystem Size Used Avail Use% Mounted on
/dev/loop0 256M 137M 40M 78% /mnt1/scratch
$ sync
$ df -h /dev/loop0
Filesystem Size Used Avail Use% Mounted on
/dev/loop0 256M 162M 15M 92% /mnt1/scratch
Now let us repeat this with a 64k sector size:
--- With 64k sector size ---
$ mkfs.btrfs -f -b 256m -s 64k -n 64k /dev/loop0
$ mount /dev/loop0 /mnt1/scratch/
$ df -h /dev/loop0
Filesystem Size Used Avail Use% Mounted on
/dev/loop0 256M 24M 175M 12% /mnt1/scratch
$ xfs_io -f -c "pwrite 0 80M" /mnt1/scratch/t1
wrote 83886080/83886080 bytes at offset 0
80 MiB, 20480 ops; 0.8460 sec (94.553 MiB/sec and 24205.4914 ops/sec)
$
$ df -h /dev/loop0
Filesystem Size Used Avail Use% Mounted on
/dev/loop0 256M 24M 175M 12% /mnt1/scratch
$ sync
$ df -h /dev/loop0
Filesystem Size Used Avail Use% Mounted on
/dev/loop0 256M 104M 95M 53% /mnt1/scratch
$ xfs_io -f -c "pwrite 0 80M" /mnt1/scratch/t2
pwrite: No space left on device
We can see that with a 64k node size, 256M is not sufficient to hold
two 80M files. We can also see that the initial space usage on a fresh
filesystem is 24M with 64k nodes versus 1.5M with 4k nodes, i.e. the
larger node size consumes considerably more metadata space. This test
requires the filesystem to be large enough to hold two 80M files.
Fix this by increasing the fs size from 256M to 330M.
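The rough space accounting behind the 330M choice can be sketched as
below (illustrative only; the 24M figure is the initial usage observed
in the df output above, and real btrfs metadata overhead varies with
node size, chunk allocation, and profile):

```shell
# Rough headroom check for the new filesystem size.
# Figures are taken from the df output in this commit message,
# not from exact btrfs accounting.
fs_size_m=330                        # new fs size
data_m=$((2 * 80))                   # two 80M files written by the test
initial_meta_m=24                    # initial usage seen with 64k nodes
headroom_m=$((fs_size_m - data_m - initial_meta_m))
echo "headroom with ${fs_size_m}M fs: ${headroom_m}M"
```

The 256M size left almost no headroom once the 64k-node metadata
overhead was accounted for, while 330M leaves a comfortable margin.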
Reported-by: Disha Goel <disgoel@linux.ibm.com>
Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
Reviewed-by: Zorro Lang <zlang@redhat.com>
Signed-off-by: Zorro Lang <zlang@kernel.org>