ltp/fsstress always fails in io_uring_queue_init(), which returns
ENOMEM. io_uring accounts the memory it needs against the locked
memory rlimit (RLIMIT_MEMLOCK), which can be quite low on some setups,
especially on 64K page size machines. root isn't under this
restriction, but regular users are, so only g/233 and g/270, which use
$qa_user to run fsstress, fail.
To avoid this failure, set the max locked memory to unlimited before
running fsstress, then restore it after the test is done.
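The effect of the workaround can be sketched as a standalone shell
snippet. This is illustrative only, not fstests code; the variable
names are hypothetical, and raising the limit is hedged because it
normally requires root (or CAP_SYS_RESOURCE):

```shell
#!/bin/bash
# io_uring charges its ring buffers against RLIMIT_MEMLOCK, so a low
# "max locked memory" value makes io_uring_queue_init() return -ENOMEM
# for regular users.

before=$(ulimit -l)
echo "max locked memory before: $before"

# Raising the limit to unlimited needs privilege; tolerate failure on
# unprivileged runs so the sketch stays runnable.
ulimit -l unlimited 2>/dev/null || echo "need root to raise RLIMIT_MEMLOCK"

echo "max locked memory now: $(ulimit -l)"
```

Because `ulimit` only affects the current shell and its children, a
test that sets it in its own shell does not leak the change to the
rest of the test run.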
Signed-off-by: Zorro Lang <zlang@redhat.com>
Reviewed-by: Eryu Guan <guaneryu@gmail.com>
Signed-off-by: Eryu Guan <guaneryu@gmail.com>
-f rename=10 -f fsync=2 -f write=15 -f dwrite=15 \
-n $count -d $out -p 7`
-f rename=10 -f fsync=2 -f write=15 -f dwrite=15 \
-n $count -d $out -p 7`
+ # io_uring accounts the memory it needs against the RLIMIT_MEMLOCK
+ # rlimit, which can be quite low on some setups (especially with a 64K
+ # page size). root isn't under this restriction, but regular users are.
+ # To avoid io_uring_queue_init() failing with ENOMEM, set the max
+ # locked memory to unlimited temporarily.
+ ulimit -l unlimited
echo "fsstress $args" >> $seqres.full
if ! su $qa_user -c "$FSSTRESS_PROG $args" | tee -a $seqres.full | _filter_num
then
echo "fsstress $args" >> $seqres.full
if ! su $qa_user -c "$FSSTRESS_PROG $args" | tee -a $seqres.full | _filter_num
then
cp $FSSTRESS_PROG $tmp.fsstress.bin
$SETCAP_PROG cap_chown=epi $tmp.fsstress.bin
cp $FSSTRESS_PROG $tmp.fsstress.bin
$SETCAP_PROG cap_chown=epi $tmp.fsstress.bin
+ # io_uring accounts the memory it needs against the RLIMIT_MEMLOCK
+ # rlimit, which can be quite low on some setups (especially with a 64K
+ # page size). root isn't under this restriction, but regular users are.
+ # To avoid io_uring_queue_init() failing with ENOMEM, set the max
+ # locked memory to unlimited temporarily.
+ ulimit -l unlimited
(su $qa_user -c "$tmp.fsstress.bin $args" &) > /dev/null 2>&1
echo "Run dd writers in parallel"
(su $qa_user -c "$tmp.fsstress.bin $args" &) > /dev/null 2>&1
echo "Run dd writers in parallel"