BUILDING THE FSQA SUITE
_______________________
-Building Linux:
- - cd into the xfstests directory
- - install prerequisite packages
- For example, for Ubuntu:
- "sudo apt-get install xfslibs-dev uuid-dev libtool-bin e2fsprogs
- automake gcc libuuid1 quota attr libattr1-dev make
- libacl1-dev libaio-dev xfsprogs libgdbm-dev gawk fio dbench"
- - run make
- - run make install
- - create fsgqa test user ("sudo useradd fsgqa")
- - create 123456-fsgqa test user ("sudo useradd 123456-fsgqa")
-
-Building IRIX:
- - cd into the xfstests directory
- - set the ROOT and TOOLROOT env variables for IRIX appropriately
- - run ./make_irix
+Ubuntu or Debian
+----------------
+
+1. Make sure that the package list is up to date and install all necessary packages:
+
+ $ sudo apt-get update
+ $ sudo apt-get install acl attr automake bc dbench dump e2fsprogs fio gawk \
+ gcc git indent libacl1-dev libaio-dev libcap-dev libgdbm-dev libtool \
+ libtool-bin liburing-dev libuuid1 lvm2 make psmisc python3 quota sed \
+ uuid-dev uuid-runtime xfsprogs linux-headers-$(uname -r) sqlite3
+
+2. Install packages for the filesystem(s) being tested:
+
+ $ sudo apt-get install exfatprogs f2fs-tools ocfs2-tools udftools xfsdump \
+ xfslibs-dev
+
+ For OverlayFS install:
+ - see https://github.com/hisilicon/overlayfs-progs
+
+Fedora
+------
+
+1. Install all necessary packages from standard repository:
+
+ $ sudo yum install acl attr automake bc dbench dump e2fsprogs fio gawk gcc \
+ gdbm-devel git indent kernel-devel libacl-devel libaio-devel \
+ libcap-devel libtool liburing-devel libuuid-devel lvm2 make psmisc \
+ python3 quota sed sqlite udftools xfsprogs
+
+2. Install packages for the filesystem(s) being tested:
+
+ $ sudo yum install btrfs-progs exfatprogs f2fs-tools ocfs2-tools xfsdump \
+ xfsprogs-devel
+
+ For OverlayFS build and install:
+ - see https://github.com/hisilicon/overlayfs-progs
+
+RHEL or CentOS
+--------------
+
+1. Enable EPEL repository:
+ - see https://docs.fedoraproject.org/en-US/epel/#How_can_I_use_these_extra_packages.3F
+
+2. Install all necessary packages which are available from standard repository
+ and EPEL:
+
+ $ sudo yum install acl attr automake bc dbench dump e2fsprogs fio gawk gcc \
+ gdbm-devel git indent kernel-devel libacl-devel libaio-devel \
+ libcap-devel libtool libuuid-devel lvm2 make psmisc python3 quota sed \
+ sqlite udftools xfsprogs
+
+   Or, the EPEL packages can be built from source; see:
+ - https://dbench.samba.org/web/download.html
+ - https://www.gnu.org/software/indent/
+
+3. Build and install 'liburing':
+ - see https://github.com/axboe/liburing.
+
+4. Install packages for the filesystem(s) being tested:
+
+ For XFS install:
+ $ sudo yum install xfsdump xfsprogs-devel
+
+ For exfat install:
+ $ sudo yum install exfatprogs
+
+ For f2fs build and install:
+ - see https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs-tools.git/about/
+
+ For ocfs2 build and install:
+ - see https://github.com/markfasheh/ocfs2-tools
+
+ For OverlayFS build and install:
+ - see https://github.com/hisilicon/overlayfs-progs
+
+Build and install tests, libs and utils
+---------------------------------------
+
+$ git clone git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
+$ cd xfstests-dev
+$ make
+$ sudo make install
+
+Setup Environment
+-----------------
+
+1. Compile XFS/EXT4/BTRFS/etc. into your kernel or load them as modules. For
+   example, for XFS, enable XFS_FS in your kernel configuration, or compile it
+   as a module and load it with 'sudo modprobe xfs'. Most distributions already
+   ship these filesystems built into the kernel or as modules.
+
+2. Create TEST device:
+ - format as the filesystem type you wish to test.
+ - should be at least 10GB in size.
+ - optionally populate with destroyable data.
+ - device contents may be destroyed.
+
+3. (optional) Create SCRATCH device.
+   - many tests depend on the SCRATCH device existing.
+   - does not need to be formatted.
+   - should be at least 10GB in size.
+   - must be different from the TEST device.
+   - device contents will be destroyed.
+
+4. (optional) Create SCRATCH device pool.
+   - needed for BTRFS testing.
+   - specify 3 or more independent SCRATCH devices via the SCRATCH_DEV_POOL
+     variable, e.g. SCRATCH_DEV_POOL="/dev/sda /dev/sdb /dev/sdc".
+   - device contents will be destroyed.
+   - SCRATCH_DEV should be left unset; it will be overridden
+     by the SCRATCH_DEV_POOL implementation.
+
+5. Copy local.config.example to local.config and edit as needed. TEST_DEV
+   and TEST_DIR are required.
+
+6. (optional) Create fsgqa test users and groups:
+
+ $ sudo useradd -m fsgqa
+ $ sudo useradd 123456-fsgqa
+ $ sudo useradd fsgqa2
+ $ sudo groupadd fsgqa
+
+   The "123456-fsgqa" user creation step can be safely skipped if your system
+   doesn't support names starting with digits; only a handful of tests require
+   it.
+
+7. (optional) If you wish to run the udf components of the suite, install
+   mkudffs. Also download and build the Philips UDF Verification Software from
+   https://www.lscdweb.com/registered/udf_verifier.html, then copy the udf_test
+   binary to xfstests/src/.
+
+
+For example, to run the tests with loopback partitions:
+
+ # xfs_io -f -c "falloc 0 10g" test.img
+ # xfs_io -f -c "falloc 0 10g" scratch.img
+ # mkfs.xfs test.img
+ # losetup /dev/loop0 ./test.img
+ # losetup /dev/loop1 ./scratch.img
+ # mkdir -p /mnt/test && mount /dev/loop0 /mnt/test
+ # mkdir -p /mnt/scratch
+
+The config for the setup above is:
+
+ $ cat local.config
+ export TEST_DEV=/dev/loop0
+ export TEST_DIR=/mnt/test
+ export SCRATCH_DEV=/dev/loop1
+ export SCRATCH_MNT=/mnt/scratch
+
+From this point you can run some basic tests, see 'USING THE FSQA SUITE' below.
+
+Additional Setup
+----------------
+
+Some tests require additional configuration in your local.config. Add these
+variables to a local.config and keep that file in your workarea, or add a case
+to the switch in common/config assigning these variables based on the hostname
+of your test machine, or simply export them in your environment.
+
+Extra TEST device specifications:
+ - Set TEST_LOGDEV to "device for test-fs external log"
+ - Set TEST_RTDEV to "device for test-fs realtime data"
+ - If TEST_LOGDEV and/or TEST_RTDEV are set, they will always be used.
+ - Set FSTYP to "the filesystem you want to test". The filesystem type is
+   normally derived from the TEST_DEV device, but you may want to override
+   it; if unset, the default is 'xfs'.
+
+Extra SCRATCH device specifications:
+ - Set SCRATCH_LOGDEV to "device for scratch-fs external log"
+ - Set SCRATCH_RTDEV to "device for scratch-fs realtime data"
+ - If SCRATCH_LOGDEV and/or SCRATCH_RTDEV are set, setting the USE_EXTERNAL
+   environment variable to "yes" will enable their use.
+
+Tape device specification for xfsdump testing:
+ - Set TAPE_DEV to "tape device for testing xfsdump".
+ - Set RMT_TAPE_DEV to "remote tape device for testing xfsdump".
+ - If testing xfsdump, make sure the tape devices have a tape which
+   can be overwritten.
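For example, external log devices for both filesystems might be configured in
local.config like this (the device paths here are purely illustrative):

```shell
# Illustrative local.config fragment: hypothetical device paths.
export USE_EXTERNAL=yes
export TEST_LOGDEV=/dev/sdc1
export SCRATCH_LOGDEV=/dev/sdc2
```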
+
+Extra XFS specification:
+ - Set TEST_XFS_REPAIR_REBUILD=1 to have _check_xfs_filesystem run
+ xfs_repair -n to check the filesystem; xfs_repair to rebuild metadata
+ indexes; and xfs_repair -n (a third time) to check the results of the
+ rebuilding.
+ - Set FORCE_XFS_CHECK_PROG=yes to have _check_xfs_filesystem run xfs_check
+ to check the filesystem. As of August 2021, xfs_repair finds all
+ filesystem corruptions found by xfs_check, and more, which means that
+ xfs_check is no longer run by default.
+ - xfs_scrub, if present, will always check the test and scratch
+ filesystems if they are still online at the end of the test. It is no
+ longer necessary to set TEST_XFS_SCRUB.
+
+Tools specification:
+ - dump:
+ - Set DUMP_CORRUPT_FS=1 to record metadata dumps of XFS, ext* or
+ btrfs filesystems if a filesystem check fails.
+ - Set DUMP_COMPRESSOR to a compression program to compress metadumps of
+ filesystems. This program must accept '-f' and the name of a file to
+ compress; and it must accept '-d -f -k' and the name of a file to
+ decompress. In other words, it must emulate gzip.
+ - dmesg:
+    - Set KEEP_DMESG=yes to keep the dmesg log after each test.
+ - kmemleak:
+ - Set USE_KMEMLEAK=yes to scan for memory leaks in the kernel after every
+ test, if the kernel supports kmemleak.
+ - fsstress:
+    - Set FSSTRESS_AVOID and/or FSX_AVOID, which contain options added to
+      the end of fsstress and fsx invocations, respectively, in case you wish
+      to exclude certain operational modes from these tests.
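As a sketch of the compressor interface described above, the following checks
that a candidate (plain gzip here) round-trips a file using exactly the flags
xfstests passes; note that gzip's '-k' flag requires gzip 1.6 or newer:

```shell
# Verify a DUMP_COMPRESSOR candidate emulates the gzip interface:
# "-f FILE" must compress, and "-d -f -k FILE" must decompress
# while keeping the compressed file around.
compressor=gzip                         # candidate DUMP_COMPRESSOR value

echo "metadump test data" > /tmp/md.img
"$compressor" -f /tmp/md.img            # compress: leaves /tmp/md.img.gz
"$compressor" -d -f -k /tmp/md.img.gz   # decompress, keep the .gz around

cat /tmp/md.img                         # prints: metadump test data
```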
+
+Kernel/Modules related configuration:
+ - Set TEST_FS_MODULE_RELOAD=1 to unload the module and reload it between
+ test invocations. This assumes that the name of the module is the same
+ as FSTYP.
+ - Set MODPROBE_PATIENT_RM_TIMEOUT_SECONDS to specify how long a patient
+   module remove should be attempted. The default is 50 seconds. Set this
+   to "forever" to wait indefinitely until the module is gone.
+ - Set KCONFIG_PATH to specify your preferred location of the kernel config
+   file. The config file is used by tests to check whether a kernel feature
+   is enabled.
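Putting these together, a local.config fragment for module-reload testing might
look like this (the values are illustrative, not recommendations):

```shell
# Illustrative local.config fragment for module/kernel-config options.
export TEST_FS_MODULE_RELOAD=1
export MODPROBE_PATIENT_RM_TIMEOUT_SECONDS=120
export KCONFIG_PATH=/boot/config-$(uname -r)
```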
+
+Misc:
+ - If you wish to disable the UDF verification test, set the environment
+   variable DISABLE_UDF_TEST to 1.
+ - Set LOGWRITES_DEV to a block device to use for power fail testing.
+ - Set PERF_CONFIGNAME to an arbitrary string to be used for identifying
+ the test setup for running perf tests. This should be different for
+ each type of performance test you wish to run so that relevant results
+ are compared. For example 'spinningrust' for configurations that use
+ spinning disks and 'nvme' for tests using nvme drives.
+ - Set MIN_FSSIZE to specify the minimal size (in bytes) of a filesystem we
+   can create. Setting this parameter will cause tests that create a
+   filesystem smaller than MIN_FSSIZE to be skipped.
+ - Set DIFF_LENGTH to "number of diff lines to print from a failed test";
+   the default is 10, set to 0 to print the full diff.
+ - Set IDMAPPED_MOUNTS=true to run all tests on top of idmapped mounts. While
+   this option is supported for all filesystems, currently only -overlay is
+   expected to run without issues. For other filesystems additional patches
+   and fixes to the test suite might be needed.
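A local.config fragment combining several of these Misc settings might look
like this (device names and sizes are hypothetical examples):

```shell
# Illustrative local.config fragment: hypothetical devices and values.
export LOGWRITES_DEV=/dev/sdd              # spare device for power-fail tests
export PERF_CONFIGNAME=nvme                # label for perf test results
export MIN_FSSIZE=$((256 * 1024 * 1024))   # skip tests making an fs < 256MB
export DIFF_LENGTH=0                       # always print the full diff
```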
______________________
USING THE FSQA SUITE
______________________
-Preparing system for tests (IRIX and Linux):
-
- - compile XFS into your kernel or load XFS modules
- - install user tools including mkfs.xfs, xfs_db & xfs_bmap
- - If you wish to run the udf components of the suite install
- mkfs_udf and udf_db for IRIX and mkudffs for Linux. Also download and
- build the Philips UDF Verification Software from
- http://www.extra.research.philips.com/udf/, then copy the udf_test
- binary to xfstests/src/. If you wish to disable UDF verification test
- set the environment variable DISABLE_UDF_TEST to 1.
-
-
- - create one or two partitions to use for testing
- - one TEST partition
- - format as XFS, mount & optionally populate with
- NON-IMPORTANT stuff
- - one SCRATCH partition (optional)
- - leave empty and expect this partition to be clobbered
- by some tests. If this is not provided, many tests will
- not be run.
- (SCRATCH and TEST must be two DIFFERENT partitions)
- OR
- - for btrfs only: some btrfs test cases will need 3 or more independent
- SCRATCH disks which should be set using SCRATCH_DEV_POOL (for eg:
- SCRATCH_DEV_POOL="/dev/sda /dev/sdb /dev/sdc") with which
- SCRATCH_DEV should be unused by the tester, and for the legacy
- support SCRATCH_DEV will be set to the first disk of the
- SCRATCH_DEV_POOL by xfstests script.
-
- - setup your environment
- - setenv TEST_DEV "device containing TEST PARTITION"
- - setenv TEST_DIR "mount point of TEST PARTITION"
- - optionally:
- - setenv SCRATCH_DEV "device containing SCRATCH PARTITION" OR
- (btrfs only) setenv SCRATCH_DEV_POOL "to 3 or more SCRATCH disks for
- testing btrfs raid concepts"
- - setenv SCRATCH_MNT "mount point for SCRATCH PARTITION"
- - setenv TAPE_DEV "tape device for testing xfsdump"
- - setenv RMT_TAPE_DEV "remote tape device for testing xfsdump"
- - setenv RMT_IRIXTAPE_DEV "remote IRIX tape device for testing xfsdump"
- - setenv SCRATCH_LOGDEV "device for scratch-fs external log"
- - setenv SCRATCH_RTDEV "device for scratch-fs realtime data"
- - setenv TEST_LOGDEV "device for test-fs external log"
- - setenv TEST_RTDEV "device for test-fs realtime data"
- - if TEST_LOGDEV and/or TEST_RTDEV, these will always be used.
- - if SCRATCH_LOGDEV and/or SCRATCH_RTDEV, the USE_EXTERNAL
- environment variable set to "yes" will enable their use.
- - setenv DIFF_LENGTH "number of diff lines to print from a failed test",
- by default 10, set to 0 to print the full diff
- - setenv FSTYP "the filesystem you want to test", the filesystem
- type is devised from the TEST_DEV device, but you may want to
- override it; if unset, the default is 'xfs'
- - setenv FSSTRESS_AVOID and/or FSX_AVOID, which contain options
- added to the end of fsstresss and fsx invocations, respectively,
- in case you wish to exclude certain operational modes from these
- tests.
-
- - or add a case to the switch in common/config assigning
- these variables based on the hostname of your test
- machine
- - or add these variables to a file called local.config and keep that
- file in your workarea.
-
- - if testing xfsdump, make sure the tape devices have a
- tape which can be overwritten.
-
- - make sure $TEST_DEV is a mounted XFS partition
- - make sure that $SCRATCH_DEV or $SCRATCH_DEV_POOL contains nothing useful
-
Running tests:
- cd xfstests
- - By default the tests suite will run xfs tests:
+   - By default the test suite will run all the tests in the auto group. These
+ are the tests that are expected to function correctly as regression tests,
+ and it excludes tests that exercise conditions known to cause machine
+ failures (i.e. the "dangerous" tests).
- ./check '*/001' '*/002' '*/003'
- ./check '*/06?'
- - You can explicitly specify NFS/CIFS/UDF, otherwise the filesystem type will
- be autodetected from $TEST_DEV:
- ./check -nfs [test(s)]
- Groups of tests maybe ran by: ./check -g [group(s)]
- See the 'group' file for details on groups
- - for udf tests: ./check -udf [test(s)]
- Running all the udf tests: ./check -udf -g udf
- - for running nfs tests: ./check -nfs [test(s)]
- - for running cifs/smb3 tests: ./check -cifs [test(s)]
+ See the tests/*/group.list files after building xfstests to learn about
+ each test's group memberships.
+ - If you want to run all tests regardless of what group they are in
+ (including dangerous tests), use the "all" group: ./check -g all
- To randomize test order: ./check -r [test(s)]
+ - You can explicitly specify NFS/CIFS/OVERLAY, otherwise
+ the filesystem type will be autodetected from $TEST_DEV:
+ - for running nfs tests: ./check -nfs [test(s)]
+ - for running cifs/smb3 tests: ./check -cifs [test(s)]
+ - for overlay tests: ./check -overlay [test(s)]
+     The TEST and SCRATCH partitions should be pre-formatted
+     with another base fs, where the overlay dirs will be created.
+
-
The check script tests the return value of each script, and
compares the output against the expected output. If the output
is not as expected, a diff will be output and an .out.bad file
will be produced for the failing test.
-
+
Unexpected console messages, crashes and hangs may be considered
to be failures but are not necessarily detected by the QA system.
-__________________________
+__________________________
ADDING TO THE FSQA SUITE
__________________________
When developing a new test script keep the following things in
mind. All of the environment variables and shell procedures are
- available to the script once the "common/rc" file has been
- sourced.
+ available to the script once the "common/preamble" file has been
+ sourced and the "_begin_fstest" function has been called.
1. The tests are run from an arbitrary directory. If you want to
do operations on an XFS filesystem (good idea, eh?), then do
$TEST_DEV.
(b) mkfs a new XFS filesystem on $SCRATCH_DEV, and mount this
- on $SCRATCH_MNT. Call the the _require_scratch function
+        on $SCRATCH_MNT. Call the _require_scratch function
on startup if you require use of the scratch partition.
- _require_scratch does some checks on $SCRATCH_DEV &
- $SCRATCH_MNT and makes sure they're unmounted. You should
- cleanup when your test is done, and in particular unmount
+ _require_scratch does some checks on $SCRATCH_DEV &
+ $SCRATCH_MNT and makes sure they're unmounted. You should
+ cleanup when your test is done, and in particular unmount
$SCRATCH_MNT.
Tests can make use of $SCRATCH_LOGDEV and $SCRATCH_RTDEV
for testing external log and realtime volumes - however,
in the ./new script. It can contain only alphanumeric characters
and dash. Note the "NNN-" part is added automatically.
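A minimal new test following these conventions might look like the sketch
below. The boilerplate is normally generated by the ./new script, and helpers
such as _require_scratch, _scratch_mkfs and _scratch_mount come from the
common/ infrastructure; this is an illustration, not a complete test:

```shell
#! /bin/bash
# SPDX-License-Identifier: GPL-2.0
#
# FS QA Test NNN: sketch of a minimal scratch-device test.
#
. ./common/preamble
_begin_fstest auto quick

# This test needs the scratch device, so check it up front.
_require_scratch

# Make a fresh filesystem on the scratch device and mount it.
_scratch_mkfs > $seqres.full 2>&1
_scratch_mount

# ... exercise the filesystem mounted at $SCRATCH_MNT here ...

echo "Silence is golden"

# Clean up: unmount the scratch filesystem before exiting.
_scratch_unmount
status=0
exit
```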
+ 6. Test group membership: Each test can be associated with any number
+ of groups for convenient selection of subsets of tests. Group names
+ must be human readable using only characters in the set [:alnum:_-].
+
+      Test authors associate a test with groups by passing the names of those
+      groups as arguments to the _begin_fstest function. While _begin_fstest
+      is a shell function that must be called at the start of a test to
+      initialise the test environment correctly, the build infrastructure
+      also scans the test files for _begin_fstest invocations. It does this
+      to compile the group lists that are used to determine which tests to run
+      when `check` is executed. In other words, test files must call
+      _begin_fstest with their intended groups or they will not be run.
+
+      However, because the build infrastructure also uses _begin_fstest as
+      a defined keyword, additional restrictions are placed on how it must be
+      formatted:
+
+ (a) It must be a single line with no multi-line continuations.
+
+      (b) Group names should be separated by spaces, not by other whitespace.
+
+ (c) A '#' placed anywhere in the list, even in the middle of a group
+ name, will cause everything from the # to the end of the line to be
+ ignored.
+
+ For example, the code:
+
+ _begin_fstest auto quick subvol snapshot # metadata
+
+ associates the current test with the "auto", "quick", "subvol", and
+ "snapshot" groups. Because "metadata" is after the "#" comment
+ delimiter, it is ignored by the build infrastructure and so it will not
+ be associated with that group.
+
+ It is not necessary to specify the "all" group in the list because that
+ group is always computed at run time from the group lists.
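The scanning step can be pictured with a small sketch. This is not the actual
build code, only an illustration of the parsing rules above: drop everything
from '#' onward, strip the _begin_fstest keyword, and what remains is the
group list.

```shell
# Illustration only: extract group names from a _begin_fstest line.
line='_begin_fstest auto quick subvol snapshot # metadata'

# Remove the '#' comment and the leading keyword, then squeeze whitespace.
groups=$(printf '%s\n' "$line" | sed -e 's/#.*//' -e 's/^_begin_fstest//')
groups=$(echo $groups)

echo "$groups"    # prints: auto quick subvol snapshot
```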
+
+
Verified output:
Each test script has a name, e.g. 007, and an associated
The recent pass/fail history is maintained in the file "check.log".
The elapsed time for the most recent pass for each test is kept
in "check.time".
+
+ The compare-failures script in tools/ may be used to compare failures
+ across multiple runs, given files containing stdout from those runs.
+
+__________________
+SUBMITTING PATCHES
+__________________
+
+Send patches to the fstests mailing list at fstests@vger.kernel.org.