From 60f3da6dc9895352d9e93a4943bab75d5cedacc6 Mon Sep 17 00:00:00 2001
From: Dave Chinner
Date: Thu, 15 May 2008 16:36:40 +0000
Subject: [PATCH] With the recent change for reliability with 64k page size
 made to test 008, the file sizes got much larger.

It appears that randholes actually reads the entire file, so this has
slowed the test down by a factor of ten (all file sizes were increased
by 10x). This means the test is now taking about 18 minutes to run on
a UML session, and all the time is spent reading the files.

Instead, scale the file size based on the page size. We know how many
holes we are trying to produce and the I/O size being used to produce
them, so the size of the files can be finely tuned. Assuming a decent
random distribution, if the number of blocks in the file is 4x the
number of writes and the I/O size is page sized, almost every I/O
should generate a new hole and we'll only get a small number of
adjacent extents.

This has passed over 10 times on ia64 w/ 64k pages and another 15 times
on UML with 4k pages. UML runtime is down from ~1000s to 5s; ia64
runtime is down from ~30s to 7s.

Merge of master-melb:xfs-cmds:31168a by kenmcd.

Greatly reduce runtime by reducing filesizes down to sane minimum.
---
 008     | 22 ++++++++++++++++------
 008.out | 10 +++++-----
 2 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/008 b/008
index 384c7ca9..6e1b6c85 100755
--- a/008
+++ b/008
@@ -28,7 +28,8 @@ _cleanup()
 
 _filter()
 {
-    sed -e "s/-b $pgsize/-b PGSIZE/g"
+    sed -e "s/-b $pgsize/-b PGSIZE/g" \
+        -e "s/-l .* -c/-l FSIZE -c/g"
 }
 
 # get standard environment, filters and checks
@@ -79,13 +80,22 @@ _setup_testdir
 
 rm -f $here/$seq.out.full
 
-_do_test 1 50 "-l 50000000 -c 50 -b $pgsize"
-_do_test 2 100 "-l 100000000 -c 100 -b $pgsize"
-_do_test 3 100 "-l 100000000 -c 100 -b 512"	# test partial pages
+# Note on special numbers here.
+#
+# We are trying to create roughly 50 or 100 holes in a file
+# using random writes. Assuming a good distribution of 50 writes
+# in a file, the file only needs to be 3-4x the size of the write
+# size multiplied by the number of writes. Hence we use 200 * pgsize
+# for files we want 50 holes in and 400 * pgsize for files we want
+# 100 holes in. This keeps the runtime down as low as possible.
+#
+_do_test 1 50 "-l `expr 200 \* $pgsize` -c 50 -b $pgsize"
+_do_test 2 100 "-l `expr 400 \* $pgsize` -c 100 -b $pgsize"
+_do_test 3 100 "-l `expr 400 \* $pgsize` -c 100 -b 512"	# test partial pages
 
 # rinse, lather, repeat for direct IO
-_do_test 4 50 "-d -l 50000000 -c 50 -b $pgsize"
-_do_test 5 100 "-d -l 100000000 -c 100 -b $pgsize"
+_do_test 4 50 "-d -l `expr 200 \* $pgsize` -c 50 -b $pgsize"
+_do_test 5 100 "-d -l `expr 400 \* $pgsize` -c 100 -b $pgsize"
 
 # note: direct IO requires page aligned IO
 # todo: realtime.
diff --git a/008.out b/008.out
index ccd23d69..5e3ae8e3 100644
--- a/008.out
+++ b/008.out
@@ -1,21 +1,21 @@
 QA output created by 008
-randholes.1 : -l 50000000 -c 50 -b PGSIZE
+randholes.1 : -l FSIZE -c 50 -b PGSIZE
 ------------------------------------------
 holes is in range
 
-randholes.2 : -l 100000000 -c 100 -b PGSIZE
+randholes.2 : -l FSIZE -c 100 -b PGSIZE
 ------------------------------------------
 holes is in range
 
-randholes.3 : -l 100000000 -c 100 -b 512
+randholes.3 : -l FSIZE -c 100 -b 512
 ------------------------------------------
 holes is in range
 
-randholes.4 : -d -l 50000000 -c 50 -b PGSIZE
+randholes.4 : -d -l FSIZE -c 50 -b PGSIZE
 ------------------------------------------
 holes is in range
 
-randholes.5 : -d -l 100000000 -c 100 -b PGSIZE
+randholes.5 : -d -l FSIZE -c 100 -b PGSIZE
 ------------------------------------------
 holes is in range
 
-- 
2.39.5
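
For reference, the new sizing rule works out to four page-sized blocks
per intended hole. A minimal standalone sketch of the arithmetic,
assuming a POSIX shell and getconf(1) for the page size (the test
itself derives $pgsize by its own means):

    #!/bin/sh
    # Four page-sized blocks per intended hole: sparse enough that
    # nearly every random write lands in its own hole.
    pgsize=`getconf PAGE_SIZE`
    size50=`expr 200 \* $pgsize`     # 50-hole files:  200 pages
    size100=`expr 400 \* $pgsize`    # 100-hole files: 400 pages
    echo "50 holes:  -l $size50"
    echo "100 holes: -l $size100"

On a 4k-page machine this requests 819200- and 1638400-byte files
instead of the old fixed 50/100 MB, which is where the runtime saving
comes from.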
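The _filter change is what keeps the golden output stable now that the
-l value varies with the page size. A sketch of the two sed expressions
applied to an illustrative banner line (the 4096 value is an assumption
for the example, not taken from the test):

    pgsize=4096
    len=`expr 200 \* $pgsize`
    echo "randholes.1 : -l $len -c 50 -b $pgsize" | \
        sed -e "s/-b $pgsize/-b PGSIZE/g" \
            -e "s/-l .* -c/-l FSIZE -c/g"
    # prints: randholes.1 : -l FSIZE -c 50 -b PGSIZE

Both the file length and the block size collapse to fixed tokens,
matching the FSIZE/PGSIZE placeholders in the updated 008.out above.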