From e37418a588634d4289c73cb1cafa47d8348b0dc3 Mon Sep 17 00:00:00 2001
From: Zorro Lang
Date: Mon, 4 Feb 2019 23:48:09 +0800
Subject: [PATCH] fsstress: avoid splice_f generating too large sparse file

Thanks to Darrick J. Wong for finding this issue!

Currently splice_f generates the file offset as below:

  lr = ((int64_t)random() << 32) + random();
  off2 = (off64_t)(lr % maxfsize);

This generates a pseudorandom 64-bit candidate offset for the
destination file where we'll land the spliced data, then caps the
offset at maxfsize (which is 2^63 - 1 on x86_64). That effectively
means the data lands at a very high file offset, which creates large
(sparse) files very quickly. That's not what we want, and some tests,
like shared/009, will take forever to run md5sum on lots of huge
files. Fix this by clamping the destination offset to at most 1024
blocks past the current EOF of the destination file.

Signed-off-by: Zorro Lang
Reviewed-by: Eryu Guan
Signed-off-by: Eryu Guan
---
 ltp/fsstress.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/ltp/fsstress.c b/ltp/fsstress.c
index c04feb78..25e0c3e2 100644
--- a/ltp/fsstress.c
+++ b/ltp/fsstress.c
@@ -2862,10 +2862,11 @@ splice_f(int opno, long r)
 
 	/*
 	 * splice can overlap write, so the offset of the target file can be
-	 * any number (< maxfsize)
+	 * any number. But to avoid too large an offset, clamp it to at most
+	 * 1024 blocks past the current dest file EOF
 	 */
 	lr = ((int64_t)random() << 32) + random();
-	off2 = (off64_t)(lr % maxfsize);
+	off2 = (off64_t)(lr % MIN(stat2.st_size + (1024ULL * stat2.st_blksize), MAXFSIZE));
 
 	/*
 	 * Due to len, off1 and off2 will be changed later, so record the
-- 
2.47.3
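
For reference, here is a minimal standalone C sketch of the before/after offset math described in the commit message. It is not fsstress code: the literal MAXFSIZE value, the st_size/st_blksize numbers, and the random seed are illustrative assumptions, chosen only to show how the old modulus permits near-2^63 offsets while the new one keeps off2 near the destination EOF.

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* Assumed stand-in; fsstress derives maxfsize from the filesystem. */
  #define MAXFSIZE INT64_MAX	/* 2^63 - 1, as on x86_64 */
  #define MIN(a, b) ((a) < (b) ? (a) : (b))

  int main(void)
  {
  	/* hypothetical destination file: 1 MiB long, 4 KiB block size */
  	int64_t st_size = 1 << 20;
  	int64_t st_blksize = 4096;
  	int64_t lr, off_old, off_new;

  	srandom(42);	/* arbitrary seed for a repeatable demo */
  	lr = ((int64_t)random() << 32) + random();	/* up to ~2^63 */

  	/* old: modulo 2^63 - 1 barely constrains lr -> huge sparse file */
  	off_old = lr % MAXFSIZE;

  	/* new: clamp to at most 1024 blocks past the current dest EOF */
  	off_new = lr % MIN(st_size + 1024 * st_blksize, MAXFSIZE);

  	printf("old off2 = %" PRId64 "\n", off_old);
  	printf("new off2 = %" PRId64 "\n", off_new);
  	return 0;
  }

With these assumed values the new modulus is st_size + 1024 * st_blksize = 5 MiB, so off2 always lands within a few megabytes of the destination EOF instead of anywhere below 2^63, which is why tests like shared/009 no longer have to checksum enormous sparse files.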