# SPDX-License-Identifier: GPL-2.0
# Copyright (C) 2014 SUSE Linux Products GmbH. All Rights Reserved.
#
# This test was motivated by btrfs issues, but it's generic enough as it
# doesn't use any btrfs-specific features.
#
# Stress btrfs' block group allocation and deallocation while running fstrim
# in parallel. Part of the goal is also to get data block groups deallocated
# so that new metadata block groups, using the same physical device space
# ranges, get allocated while fstrim is running. This caused several issues
# ranging from invalid memory accesses and kernel crashes to metadata or data
# corruption, free space cache inconsistencies, free space leaks and memory
# leaks.
#
# These issues were fixed by the following btrfs Linux kernel patches:
#
#   Btrfs: fix invalid block group rbtree access after bg is removed
#   Btrfs: fix crash caused by block group removal
#   Btrfs: fix freeing used extents after removing empty block group
#   Btrfs: fix race between fs trimming and block group remove/allocation
#   Btrfs: fix race between writing free space cache and trimming
#   Btrfs: make btrfs_abort_transaction consider existence of new block groups
#   Btrfs: fix memory leak after block remove + trimming
#   Btrfs: fix fs mapping extent map leak
#   Btrfs: fix unprotected deletion from pending_chunks list
#
# The issues were found on a qemu/kvm guest with 4 virtual CPUs, 4 GiB of RAM
# and scsi-hd devices with discard support enabled (which means hole punching
# in the disk's image file is performed by the host).
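#
# For illustration, a guest along those lines could be started like this
# (the image path and device ids are hypothetical; discard=unmap is what
# lets the host punch holes in the image file when the guest discards):
#
#   qemu-system-x86_64 -enable-kvm -smp 4 -m 4096 \
#       -drive file=scratch.img,if=none,format=raw,id=drv0,discard=unmap \
#       -device virtio-scsi-pci -device scsi-hd,drive=drv0
#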

seq=`basename $0`
seqres=$RESULT_DIR/$seq
echo "QA output created by $seq"

tmp=/tmp/$$
status=1	# failure is the default!
trap "_cleanup; exit \$status" 0 1 2 3 15

_cleanup()
{
	cd /
	rm -f $tmp.*
}

# get standard environment, filters and checks
. ./common/rc
. ./common/filter

# real QA test starts here
_require_scratch
_require_xfs_io_command "falloc"

# Keep allocating and deallocating 1G of data space with the goal of creating
# and deleting 1 block group constantly (1G is the typical size of a btrfs
# data block group on large enough devices). The intention is to race with
# the fstrim loop below.
fallocate_loop()
{
	# Wait for the running subcommand to finish before exiting, so that
	# the mountpoint is not busy when we try to unmount it.
	trap "wait; exit" SIGTERM

	local name=$1

	while true; do
		$XFS_IO_PROG -f -c "falloc -k 0 1G" \
			$SCRATCH_MNT/$name &> /dev/null
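		# Note: "falloc -k" preallocates the range with KEEP_SIZE, so
		# the file size stays at 0 while 1G of data space is reserved.
		# The truncate below releases that space again, leaving an
		# empty block group behind for the cleaner kthread to remove.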
		$XFS_IO_PROG -c "truncate 0" \
			$SCRATCH_MNT/$name &> /dev/null
	done
}

trim_loop()
{
	# Wait for the running subcommand to finish before exiting, so that
	# the mountpoint is not busy when we try to unmount it.
	trap "wait; exit" SIGTERM

	while true; do
		$FSTRIM_PROG $SCRATCH_MNT
	done
}
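
# Each fstrim pass discards all the free space in the filesystem, walking
# every block group, so running several of these loops in parallel with the
# constant block group creation and removal above maximizes the chance of
# hitting the races this test was written for.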

# Create a bunch of small files that get their single extent inlined in the
# btree, so that we consume a lot of metadata space and get a chance of a
# data block group getting deleted and reused for metadata later. Sometimes
# the creation of all these files succeeds, and other times we get ENOSPC
# failures at some point - this depends on how fast the btrfs cleaner kthread
# is notified about empty block groups, how fast it deletes them and how fast
# the fallocate calls happen. So we don't really care if they all succeed or
# not; the goal is just to keep metadata space usage growing while data block
# groups are deleted.
#
# Creating 200,000 files sequentially is really slow, so speed it up a bit
# by doing it concurrently with 4 threads in 4 separate directories.
nr_files=$((50000 * LOAD_FACTOR))
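# (With the default LOAD_FACTOR of 1 that is 50000 files per directory, i.e.
# 200,000 files in total across the 4 directories created below.)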

create_files()
{
	local prefix=$1

	for ((n = 0; n < 4; n++)); do
		mkdir $SCRATCH_MNT/$n
		(
			trap "wait; exit" SIGTERM

			for ((i = 1; i <= $nr_files; i++)); do
				$XFS_IO_PROG -f -c "pwrite -S 0xaa 0 3900" \
					$SCRATCH_MNT/$n/"${prefix}_$i" &> /dev/null
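				# Writes of 3900 bytes are small enough for the
				# file data to be stored inline in the metadata
				# btree (subject to the max_inline mount option),
				# which is what makes metadata usage grow fast.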
				if [ $? -ne 0 ]; then
					echo "Failed creating file $n/${prefix}_$i" >>$seqres.full
					break
				fi
			done
		) &
		create_pids[$n]=$!
	done

	wait ${create_pids[@]}
}

_scratch_mkfs >>$seqres.full 2>&1
_scratch_mount
_require_fs_space $SCRATCH_MNT $((10 * 1024 * 1024))
_require_batched_discard $SCRATCH_MNT
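
# (_require_batched_discard checks that the scratch filesystem supports the
# FITRIM ioctl, which is what the fstrim invocations below rely on.)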

for ((i = 0; i < $((4 * $LOAD_FACTOR)); i++)); do
	trim_loop &
	trim_pids[$i]=$!
done

for ((i = 0; i < $((1 * $LOAD_FACTOR)); i++)); do
	fallocate_loop "falloc_file_$i" &
	fallocate_pids[$i]=$!
done

create_files "foobar"

kill ${fallocate_pids[@]}
kill ${trim_pids[@]}
wait

# The fstests framework will now check for fs consistency with fsck.
# The trimming was racy and caused some btree nodes to get full of zeroes on
# disk, which obviously caused fs metadata corruption. The race often led to
# missing free space entries in a block group's free space cache too.

echo "Silence is golden"

status=0
exit