We're removing from XFS the ability to perform no-allocation file
creation. This was added years ago because some customer of SGI
demanded that we still be able to create (empty?) files with zero
free blocks remaining so long as there were free inodes and space in
existing directory blocks. This came at an unacceptable risk of
hitting ENOSPC midway through a transaction and shutting down the
filesystem, so, having changed our minds 20 years later, we're
removing it for the create case.
However, some tests fail as a result, so fix them to tolerate a
directory or file creation failing with ENOSPC instead of treating
it as a test failure.
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Eryu Guan <eguan@redhat.com>
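The hunks below pipe command output through a `filter_enospc` helper so an
out-of-space error does not perturb the test's golden output. A minimal
sketch of what such a filter might look like (the real helper in the tests
may differ):

```shell
#!/bin/bash
# Hypothetical sketch of an ENOSPC-tolerant output filter: drop
# "No space left on device" messages so a failure caused purely by
# running out of space produces no unexpected output; everything
# else passes through and still trips the test.
filter_enospc() {
	sed -e '/[Nn]o space left on device/d'
}

# A failed copy that hit ENOSPC is silenced entirely:
echo "cp: cannot create regular file: No space left on device" | filter_enospc
# Any other error still shows up:
echo "cp: some other error" | filter_enospc
```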
for i in $(seq 1 $LOOPS)
do
# hard link the content of the current directory to the next
- cp -Rl $SCRATCH_MNT/dir$i $SCRATCH_MNT/dir$((i+1)) 2>&1 | \
- filter_enospc
+ while ! test -d $SCRATCH_MNT/dir$((i+1)); do
+ cp -Rl $SCRATCH_MNT/dir$i $SCRATCH_MNT/dir$((i+1)) 2>&1 | \
+ filter_enospc
+ done
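The retry loop above keeps re-attempting the hard-link copy until the
target directory exists, on the assumption that space eventually frees up.
The same pattern in isolation, with made-up temporary paths standing in
for the scratch directories:

```shell
#!/bin/bash
# Sketch: retry an operation that can fail transiently (e.g. ENOSPC
# that clears as space is freed elsewhere) until its observable
# result, the destination directory, exists. In this standalone
# sketch the copy succeeds on the first attempt.
src=$(mktemp -d)
dst="${src}.copy"
touch "$src/a" "$src/b"

while ! test -d "$dst"; do
	# cp -Rl hard-links the tree instead of copying file data,
	# so it consumes inodes/metadata but no data blocks
	cp -Rl "$src" "$dst" 2>/dev/null
done
ls "$dst"
```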
# do a random replacement of files in the new directory
_rand_replace $SCRATCH_MNT/dir$((i+1)) $COUNT
# consume 1/2 of the current preallocation across the set of 4 writers
write_size=$((TOTAL_PREALLOC / 2 / 4))
+ for i in $(seq 0 3); do
+ touch $dir/file.$i
+ done
for i in $(seq 0 3); do
$XFS_IO_PROG -f -c "pwrite 0 $write_size" $dir/file.$i \
>> $seqres.full &
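Pre-creating the files before the parallel writers start means any
creation-time ENOSPC surfaces in the foreground `touch` rather than
inside a background `pwrite`. A standalone sketch of the same split,
with `DIR`, a made-up `TOTAL_PREALLOC`, and `dd` standing in for
`xfs_io`'s pwrite:

```shell
#!/bin/bash
# Sketch: create files up front, then write to them in the
# background, so creation failures happen synchronously and
# visibly instead of racing inside backgrounded writers.
DIR=$(mktemp -d)              # stand-in for $dir in the test
TOTAL_PREALLOC=$((8 * 1024))  # made-up preallocation size in bytes
# consume half the preallocation, split across 4 writers
write_size=$((TOTAL_PREALLOC / 2 / 4))

for i in 0 1 2 3; do
	touch "$DIR/file.$i" || echo "create of file.$i failed (ENOSPC?)"
done
for i in 0 1 2 3; do
	# each writer writes write_size bytes in the background
	dd if=/dev/zero of="$DIR/file.$i" bs=$write_size count=1 2>/dev/null &
done
wait
```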
# -w ensures that the only ops are ones which cause write I/O
FSSTRESS_ARGS=`_scale_fsstress_args -d $SCRATCH_MNT -w -p $procs \
-n $nops $FSSTRESS_AVOID`
- $FSSTRESS_PROG $FSSTRESS_ARGS >> $seqres.full &
+ $FSSTRESS_PROG $FSSTRESS_ARGS >> $seqres.full 2>&1 &
}
# real QA test starts here
while [ $j -lt 100 ]; do
$XFS_IO_PROG -f -c 'pwrite -b 64k 0 16m' $file \
>/dev/null 2>&1
- rm $file
+ test -e $file && rm $file
let j=$j+1
done
} &
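The guarded removal above (`test -e $file && rm $file`) tolerates the
preceding write failing to create the file at all, which is now possible
when creation itself hits ENOSPC. A minimal sketch of the same
write-then-remove loop, with `FILE` as a stand-in path:

```shell
#!/bin/bash
# Sketch: write-then-remove loop that tolerates the write step
# failing to create the file (e.g. under ENOSPC). Only remove the
# file if the write actually created it.
FILE=$(mktemp -u)   # a path that does not exist yet
j=0
while [ $j -lt 3 ]; do
	# the write may fail; errors are swallowed as in the test
	echo data > "$FILE" 2>/dev/null
	# guard the removal so a failed create doesn't error out rm
	test -e "$FILE" && rm "$FILE"
	j=$((j+1))
done
```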