_______________________
BUILDING THE FSQA SUITE
_______________________
1. Make sure the package list is up-to-date and install all necessary packages:

   $ sudo apt-get install acl attr automake bc dbench dump e2fsprogs fio gawk \
       gcc git indent libacl1-dev libaio-dev libcap-dev libgdbm-dev libtool \
       libtool-bin liburing-dev libuuid1 lvm2 make psmisc python3 quota sed \
       uuid-dev uuid-runtime xfsprogs linux-headers-$(uname -r) sqlite3
2. Install packages for the filesystem(s) being tested:

   $ sudo apt-get install exfatprogs f2fs-tools ocfs2-tools udftools xfsdump \

   For OverlayFS install:
   - see https://github.com/hisilicon/overlayfs-progs
1. Install all necessary packages from the standard repository:

   $ sudo yum install acl attr automake bc dbench dump e2fsprogs fio gawk gcc \
       gdbm-devel git indent kernel-devel libacl-devel libaio-devel \
       libcap-devel libtool liburing-devel libuuid-devel lvm2 make psmisc \
       python3 quota sed sqlite udftools xfsprogs
2. Install packages for the filesystem(s) being tested:

   $ sudo yum install btrfs-progs exfatprogs f2fs-tools ocfs2-tools xfsdump \

   For OverlayFS build and install:
   - see https://github.com/hisilicon/overlayfs-progs
1. Enable the EPEL repository:
   - see https://docs.fedoraproject.org/en-US/epel/#How_can_I_use_these_extra_packages.3F
2. Install all necessary packages which are available from the standard
   repository:

   $ sudo yum install acl attr automake bc dbench dump e2fsprogs fio gawk gcc \
       gdbm-devel git indent kernel-devel libacl-devel libaio-devel \
       libcap-devel libtool libuuid-devel lvm2 make psmisc python3 quota sed \
       sqlite udftools xfsprogs

   Alternatively, the EPEL packages can be built from source; see:
   - https://dbench.samba.org/web/download.html
   - https://www.gnu.org/software/indent/
3. Build and install 'liburing':
   - see https://github.com/axboe/liburing

4. Install packages for the filesystem(s) being tested:

   $ sudo yum install xfsdump xfsprogs-devel

   $ sudo yum install exfatprogs

   For f2fs build and install:
   - see https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs-tools.git/about/

   For ocfs2 build and install:
   - see https://github.com/markfasheh/ocfs2-tools

   For OverlayFS build and install:
   - see https://github.com/hisilicon/overlayfs-progs
Build and install tests, libs and utils
---------------------------------------
$ git clone git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
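After cloning, the suite is built and installed with the standard make flow (a sketch; run from the directory where you cloned, and note that the install step requires root):

```shell
# Build the tests, libraries and utilities, then install them.
cd xfstests-dev
make
sudo make install
```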
1. Compile XFS/EXT4/BTRFS/etc. into your kernel or build them as modules. For
   example, for XFS, enable XFS_FS in your kernel configuration, or compile it
   as a module and load it with 'sudo modprobe xfs'. Most distributions already
   ship these filesystems in the kernel or as modules.
2. Create TEST device:
   - format as the filesystem type you wish to test.
   - should be at least 10GB in size.
   - optionally populate with destroyable data.
   - device contents may be destroyed.
3. (optional) Create SCRATCH device:
   - many tests depend on the SCRATCH device existing.
   - does not need to be formatted.
   - should be at least 10GB in size.
   - must be different from the TEST device.
   - device contents will be destroyed.
4. (optional) Create SCRATCH device pool:
   - needed for BTRFS testing.
   - specify 3 or more independent SCRATCH devices via the SCRATCH_DEV_POOL
     variable, e.g. SCRATCH_DEV_POOL="/dev/sda /dev/sdb /dev/sdc"
   - device contents will be destroyed.
   - the SCRATCH device (SCRATCH_DEV) should be left unset; it will be
     overridden by the SCRATCH_DEV_POOL implementation.
5. Copy local.config.example to local.config and edit as needed. TEST_DEV
   and TEST_DIR are required.
6. (optional) Create fsgqa test users and groups:

   $ sudo useradd -m fsgqa
   $ sudo useradd 123456-fsgqa
   $ sudo useradd fsgqa2
   $ sudo groupadd fsgqa

   The "123456-fsgqa" user creation step can be safely skipped if your system
   doesn't support user names starting with digits; only a handful of tests
   require this user.
7. (optional) If you wish to run the udf components of the suite, install
   mkudffs. Also download and build the Philips UDF Verification Software from
   https://www.lscdweb.com/registered/udf_verifier.html, then copy the udf_test
   binary to xfstests/src/.
For example, to run the tests with loopback partitions:

   # xfs_io -f -c "falloc 0 10g" test.img
   # xfs_io -f -c "falloc 0 10g" scratch.img
   # mkfs.xfs test.img
   # losetup /dev/loop0 ./test.img
   # losetup /dev/loop1 ./scratch.img
   # mkdir -p /mnt/test && mount /dev/loop0 /mnt/test
   # mkdir -p /mnt/scratch
The config for the setup above is:

   export TEST_DEV=/dev/loop0
   export TEST_DIR=/mnt/test
   export SCRATCH_DEV=/dev/loop1
   export SCRATCH_MNT=/mnt/scratch
From this point you can run some basic tests, see 'USING THE FSQA SUITE' below.

Some tests require additional configuration in your local.config. Add these
variables to a local.config and keep that file in your workarea. Or add a case
to the switch in common/config assigning these variables based on the hostname
of your test machine. Or use 'setenv' to set them.
Extra TEST device specifications:
 - Set TEST_LOGDEV to "device for test-fs external log"
 - Set TEST_RTDEV to "device for test-fs realtime data"
 - If TEST_LOGDEV and/or TEST_RTDEV are set, they will always be used.
 - Set FSTYP to "the filesystem you want to test". The filesystem type is
   normally derived from the TEST_DEV device, but you may want to override
   it; if unset, the default is 'xfs'.
Extra SCRATCH device specifications:
 - Set SCRATCH_LOGDEV to "device for scratch-fs external log"
 - Set SCRATCH_RTDEV to "device for scratch-fs realtime data"
 - If SCRATCH_LOGDEV and/or SCRATCH_RTDEV are set, setting the USE_EXTERNAL
   environment variable to "yes" will enable their use.

Tape device specification for xfsdump testing:
 - Set TAPE_DEV to "tape device for testing xfsdump".
 - Set RMT_TAPE_DEV to "remote tape device for testing xfsdump"
 - Note that if testing xfsdump, make sure the tape devices have a tape which
   can be overwritten.
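For example, a local.config fragment using the external-device options above might look like this (the device paths are illustrative, not recommendations):

```shell
# Enable use of an external scratch log device (paths are examples only).
export USE_EXTERNAL=yes
export SCRATCH_LOGDEV=/dev/sdc1   # illustrative device path
export TAPE_DEV=/dev/st0          # illustrative tape device for xfsdump tests
```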
Extra XFS specification:
 - Set TEST_XFS_REPAIR_REBUILD=1 to have _check_xfs_filesystem run
   xfs_repair -n to check the filesystem; xfs_repair to rebuild metadata
   indexes; and xfs_repair -n (a third time) to check the results of the
   rebuild.
 - Set FORCE_XFS_CHECK_PROG=yes to have _check_xfs_filesystem run xfs_check
   to check the filesystem. As of August 2021, xfs_repair finds all
   filesystem corruptions found by xfs_check, and more, which means that
   xfs_check is no longer run by default.
 - xfs_scrub, if present, will always check the test and scratch
   filesystems if they are still online at the end of the test. It is no
   longer necessary to set TEST_XFS_SCRUB.
 - Set DUMP_CORRUPT_FS=1 to record metadata dumps of XFS, ext* or
   btrfs filesystems if a filesystem check fails.
 - Set DUMP_COMPRESSOR to a compression program to compress metadumps of
   filesystems. This program must accept '-f' and the name of a file to
   compress; and it must accept '-d -f -k' and the name of a file to
   decompress. In other words, it must emulate gzip.
 - Set KEEP_DMESG=yes to keep the dmesg log after each test.
 - Set USE_KMEMLEAK=yes to scan for memory leaks in the kernel after every
   test, if the kernel supports kmemleak.
 - Set FSSTRESS_AVOID and/or FSX_AVOID, which contain options added to
   the end of fsstress and fsx invocations, respectively, in case you wish
   to exclude certain operational modes from these tests.
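As a sanity check, gzip itself satisfies the DUMP_COMPRESSOR calling convention described above; the round trip below (on a throwaway file) exercises both directions:

```shell
# Compress with '-f FILE', then decompress with '-d -f -k FILE',
# exactly the interface DUMP_COMPRESSOR must provide.
tmp=$(mktemp -d)
echo "metadump contents" > "$tmp/dump"
gzip -f "$tmp/dump"             # compress: produces dump.gz
gzip -d -f -k "$tmp/dump.gz"    # decompress, keeping the .gz
result=$(cat "$tmp/dump")
echo "$result"
rm -rf "$tmp"
```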
Kernel/Modules related configuration:
 - Set TEST_FS_MODULE_RELOAD=1 to unload the module and reload it between
   test invocations. This assumes that the name of the module is the same
   as the FSTYP.
 - Set MODPROBE_PATIENT_RM_TIMEOUT_SECONDS to specify the amount of time we
   should try a patient module remove. The default is 50 seconds. Set this
   to "forever" and we'll wait forever until the module is gone.
 - Set KCONFIG_PATH to specify your preferred location of the kernel config
   file. The config is used by tests to check whether a kernel feature is
   enabled.
 - If you wish to disable the UDF verification test, set the environment
   variable DISABLE_UDF_TEST to 1.
 - Set LOGWRITES_DEV to a block device to use for power fail testing.
 - Set PERF_CONFIGNAME to an arbitrary string to be used for identifying
   the test setup for running perf tests. This should be different for
   each type of performance test you wish to run so that relevant results
   are compared. For example, 'spinningrust' for configurations that use
   spinning disks and 'nvme' for tests using nvme drives.
 - Set MIN_FSSIZE to specify the minimal size (bytes) of a filesystem we
   can create. Setting this parameter will skip the tests creating a
   filesystem smaller than MIN_FSSIZE.
 - Set DIFF_LENGTH to "number of diff lines to print from a failed test";
   the default is 10; set to 0 to print the full diff.
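Put together, a local.config fragment exercising several of the knobs above might look like this (the values are examples, not recommendations):

```shell
# Example local.config fragment (illustrative values).
export KEEP_DMESG=yes
export USE_KMEMLEAK=yes
export DUMP_CORRUPT_FS=1
export DUMP_COMPRESSOR=gzip
export DIFF_LENGTH=20
export KCONFIG_PATH=/boot/config-$(uname -r)   # common location; adjust for your distro
```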
______________________
USING THE FSQA SUITE
______________________
Running tests:

 - By default the test suite will run all the tests in the auto group. These
   are the tests that are expected to function correctly as regression tests,
   and it excludes tests that exercise conditions known to cause machine
   failures (i.e. the "dangerous" tests).
 - To run specific tests: ./check '*/001' '*/002' '*/003'
 - Groups of tests may be run with: ./check -g [group(s)]
   See the tests/*/group.list files after building xfstests to learn about
   each test's group memberships.
 - If you want to run all tests regardless of what group they are in
   (including dangerous tests), use the "all" group: ./check -g all
 - To randomize test order: ./check -r [test(s)]
 - You can explicitly specify NFS/CIFS/OVERLAY; otherwise
   the filesystem type will be autodetected from $TEST_DEV:
   - for running nfs tests: ./check -nfs [test(s)]
   - for running cifs/smb3 tests: ./check -cifs [test(s)]
   - for overlay tests: ./check -overlay [test(s)]
     The TEST and SCRATCH partitions should be pre-formatted
     with another base fs, where the overlay dirs will be created
The check script tests the return value of each script, and
compares the output against the expected output. If the output
is not as expected, a diff will be output and an .out.bad file
will be produced for the failing test.

Unexpected console messages, crashes and hangs may be considered
to be failures but are not necessarily detected by the QA system.
__________________________
ADDING TO THE FSQA SUITE
__________________________
Creating new test scripts:

Use the "new" script.

Test script environment:

When developing a new test script keep the following things in
mind. All of the environment variables and shell procedures are
available to the script once the "common/preamble" file has been
sourced and the "_begin_fstest" function has been called.
1. The tests are run from an arbitrary directory. If you want to
   do operations on an XFS filesystem (good idea, eh?), then do
   one of the following:

   (a) Create directories and files at will in the directory
       $TEST_DIR ... this is within an XFS filesystem and world
       writeable. You should clean up when your test is done,
       e.g. use a _cleanup shell procedure in the trap ... see
       001 for an example. If you need to know, the $TEST_DIR
       directory is within the filesystem on the block device
       $TEST_DEV.
   (b) mkfs a new XFS filesystem on $SCRATCH_DEV, and mount this
       on $SCRATCH_MNT. Call the _require_scratch function
       on startup if you require use of the scratch partition.
       _require_scratch does some checks on $SCRATCH_DEV &
       $SCRATCH_MNT and makes sure they're unmounted. You should
       clean up when your test is done, and in particular unmount
       $SCRATCH_MNT.

       Tests can make use of $SCRATCH_LOGDEV and $SCRATCH_RTDEV
       for testing external log and realtime volumes - however,
       these tests need to simply "pass" (e.g. cat $seq.out; exit
       - or default to an internal log) in the common case where
       these variables are not set.
2. You can safely create temporary files that are not part of the
   filesystem tests (e.g. to catch output, prepare lists of things
   to do, etc.) in files named $tmp.<anything>. The standard test
   script framework created by "new" will initialize $tmp and
   clean up on exit.
3. By default, tests are run as the same uid as the person
   executing the control script "check" that runs the test scripts.
4. Some other useful shell procedures:

   _get_fqdn - echo the host's fully qualified
               domain name

   _get_pids_by_name - one argument is a process name, and
                       return all of the matching pids on
                       standard output

   _within_tolerance - fancy numerical "close enough is good
                       enough" filter for deterministic
                       output ... see comments in
                       common/filter for an explanation

   _filter_date - turn ctime(3) format dates into the
                  string DATE for deterministic
                  output

   _cat_passwd, - dump the content of the password
   _cat_group     or group file (both the local file
                  and the content of the NIS database
                  if it is likely to be present)
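To illustrate the kind of filtering a helper like _filter_date performs (this sed one-liner is only an illustration, not the real implementation in common/filter):

```shell
# Replace a ctime(3)-style date with the literal string DATE so the
# output becomes deterministic.
filtered=$(echo "created at Mon Jan  2 15:04:05 2006" |
    sed 's/[A-Z][a-z][a-z] [A-Z][a-z][a-z] [ 0-9][0-9] [0-9][0-9]:[0-9][0-9]:[0-9][0-9] [0-9][0-9][0-9][0-9]/DATE/')
echo "$filtered"
```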
5. General recommendations, usage conventions, etc.:

   - When the content of the password or group file is
     required, get it using the _cat_passwd and _cat_group
     functions, to ensure NIS information is included if NIS
     is active.

   - When calling getfacl in a test, pass the "-n" argument so
     that numeric rather than symbolic identifiers are used in
     the output.

   - When creating a new test, it is possible to enter a custom name
     for the file. Filenames are of the form NNN-custom-name, where NNN
     is automatically added by the ./new script as a unique ID,
     and "custom-name" is the optional string entered at a prompt
     in the ./new script. It can contain only alphanumeric characters
     and dashes. Note the "NNN-" part is added automatically.
6. Test group membership: Each test can be associated with any number
   of groups for convenient selection of subsets of tests. Group names
   can be any sequence of non-whitespace characters. Test authors
   associate a test with groups by passing the names of those groups as
   arguments to the _begin_fstest function. For example, the code:

   _begin_fstest auto quick subvol snapshot

   associates the current test with the "auto", "quick", "subvol", and
   "snapshot" groups. It is not necessary to specify the "all" group
   in the list because that group is computed at run time.

   The build process scans test files for _begin_fstest invocations and
   compiles the group list from that information. In other words, test
   files must call _begin_fstest or they will not be run.
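The scan the build performs can be approximated with a one-line sed; here it runs against a throwaway file standing in for a test script (the real build does something equivalent, not necessarily this):

```shell
# A test's group list is just the argument list of its _begin_fstest call.
cat > /tmp/xfstests-example-test <<'EOF'
. ./common/preamble
_begin_fstest auto quick subvol snapshot
EOF
groups=$(sed -n 's/^_begin_fstest //p' /tmp/xfstests-example-test)
echo "$groups"
rm -f /tmp/xfstests-example-test
```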
Verified output:

Each test script has a name, e.g. 007, and an associated
verified output, e.g. 007.out.

It is important that the verified output is deterministic, and
part of the job of the test script is to filter the output to
make this so. Examples of the sort of things that need filtering:

 - variable directory contents
 - imprecise numbers, especially sizes and times
Pass/failure:

The script "check" may be used to run one or more tests.

Test number $seq is deemed to "pass" when:
   (a) no "core" file is created,
   (b) the file $seq.notrun is not created,
   (c) the exit status is 0, and
   (d) the output matches the verified output.
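Criterion (d) boils down to a plain diff of the actual output against the verified .out file; a toy version of that comparison (file names are stand-ins, not the harness's real paths):

```shell
# A test "passes" the output comparison when its actual output is
# byte-for-byte identical to the verified output.
tmp=$(mktemp -d)
printf 'hello world\n' > "$tmp/007.out"     # verified output
printf 'hello world\n' > "$tmp/out.actual"  # what the test printed this run
if diff -u "$tmp/007.out" "$tmp/out.actual" > /dev/null; then
    verdict=PASS
else
    verdict=FAIL
fi
echo "$verdict"
rm -rf "$tmp"
```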
In the "not run" case (b), the $seq.notrun file should contain a
short one-line summary of why the test was not run. The standard
output is not checked, so this can be used for a more verbose
explanation and to provide feedback when the QA test is run
interactively.
To force a non-zero exit status use:

   status=1
   exit

Note that a bare 'exit 1' won't have the desired effect because of the way
the exit trap works.
The recent pass/fail history is maintained in the file "check.log".
The elapsed time for the most recent pass for each test is kept
in "check.time".

The compare-failures script in tools/ may be used to compare failures
across multiple runs, given files containing stdout from those runs.
Send patches to the fstests mailing list at fstests@vger.kernel.org.