From: Zorro Lang
Date: Wed, 12 Sep 2018 10:15:47 +0000 (+0800)
Subject: shared/010: avoid dedupe testing blocked on large fs
X-Git-Tag: v2022.05.01~1425
X-Git-Url: http://git.apps.os.sepia.ceph.com/?p=xfstests-dev.git;a=commitdiff_plain;h=29b2f63b3ee705e821d18158734be6b2666f1431

shared/010: avoid dedupe testing blocked on large fs

When testing on a large fs (--large-fs), xfstests first preallocates a
large file in $SCRATCH_MNT/. Duperemove takes too long to process that
large file (many days on a 500T XFS), so move the working directory to
a sub-directory under $SCRATCH_MNT/.

Signed-off-by: Zorro Lang
Reviewed-by: Eric Sandeen
Signed-off-by: Eryu Guan
---

diff --git a/tests/shared/010 b/tests/shared/010
index 1817081b..04f55890 100755
--- a/tests/shared/010
+++ b/tests/shared/010
@@ -65,15 +65,17 @@ function end_test()
 sleep_time=$((50 * TIME_FACTOR))
 
 # Start fsstress
+testdir="$SCRATCH_MNT/dir"
+mkdir $testdir
 fsstress_opts="-r -n 1000 -p $((5 * LOAD_FACTOR))"
-$FSSTRESS_PROG $fsstress_opts -d $SCRATCH_MNT -l 0 >> $seqres.full 2>&1 &
+$FSSTRESS_PROG $fsstress_opts -d $testdir -l 0 >> $seqres.full 2>&1 &
 dedup_pids=""
 dupe_run=$TEST_DIR/${seq}-running
 # Start several dedupe processes on same directory
 touch $dupe_run
 for ((i = 0; i < $((2 * LOAD_FACTOR)); i++)); do
 	while [ -e $dupe_run ]; do
-		$DUPEREMOVE_PROG -dr --dedupe-options=same $SCRATCH_MNT/ \
+		$DUPEREMOVE_PROG -dr --dedupe-options=same $testdir \
 			>>$seqres.full 2>&1
 	done &
 	dedup_pids="$! $dedup_pids"
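
For readers outside the xfstests harness, here is a minimal standalone
sketch of the pattern the patch adopts. The mount point, process count,
and loop count are illustrative assumptions, not values from the test:
the real test resolves $FSSTRESS_PROG and $DUPEREMOVE_PROG through the
harness and scales its loops by LOAD_FACTOR.

	#!/bin/bash
	# Assumed scratch mount; on --large-fs runs xfstests leaves a huge
	# preallocated file directly under this mount point.
	SCRATCH_MNT=/mnt/scratch
	testdir="$SCRATCH_MNT/dir"	# confine all work to a sub-directory
	mkdir -p "$testdir"

	# Stress the sub-directory only; one finite loop (-l 1) instead of
	# the test's endless -l 0 so the sketch terminates on its own.
	fsstress -r -n 1000 -p 5 -d "$testdir" -l 1

	# Dedupe the sub-directory only. Pointing duperemove at
	# $SCRATCH_MNT/ would make it hash everything under the mount,
	# including the large preallocated file.
	duperemove -dr --dedupe-options=same "$testdir"

Both workloads still operate on the same files, which is the race the
test exercises; only the top-level preallocated file drops out of the
dedupe scan.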