Currently, the donor file size is hardcoded to 250M. fsstress can generate
more data than this limit, which causes the `e4compact` helper to crash.
Specifically, `e4compact.c:do_defrag_range()` contains the following
assertion:
    assert(donor->length >= len);
If the donor file is not large enough to accommodate the data being
compacted, this assertion fails and the test aborts unexpectedly.
Additionally, the previous 'usage' calculation used `du -sch`, which prints
human-readable sizes that may contain a decimal point (e.g. "1.5M").
xfs_io's falloc command does not accept decimal values in its length
argument, so passing such a size causes a syntax error during file
allocation.
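The size-format mismatch can be reproduced outside the test. The sketch
below (paths and file sizes are made up for illustration; the test itself
uses `du -sch`, which additionally prints a "total" line, but the size
format is the same) shows how `du -sh` emits a decimal size while
`du -sm` emits a whole number of megabytes, rounded up:

```shell
#!/bin/sh
# Create a 1.5 MiB file so the human-readable size contains a decimal.
f=/tmp/decimal_demo
dd if=/dev/zero of="$f" bs=1K count=1536 status=none

# Human-readable size: "1.5M" - not a valid xfs_io falloc length.
h=$(du -sh "$f" | awk '{print $1}')
# Integer megabytes, rounded up: "2" - safe to use as "${m}M".
m=$(du -sm "$f" | awk '{print $1}')

echo "du -sh: $h"
echo "du -sm: $m"
```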
Fix this by:
1. Using `du -sm` to calculate the required size in integer MB (rounding up),
avoiding decimal issues.
2. Allocating the donor file using this calculated `usage` size instead of
the fixed 250M limit, ensuring it is always large enough for the operation.
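The two steps above can be sketched roughly as follows (directory and
file names are hypothetical, not taken from the test; the real test
operates on its scratch directory and invokes `$E4COMPACT_PROG`):

```shell
#!/bin/sh
# Stand-in for the test's data directory.
testdir=/tmp/e4compact_demo
mkdir -p "$testdir"
dd if=/dev/zero of="$testdir/data" bs=1M count=3 status=none

# Step 1: du -sm reports whole megabytes, rounded up, so "usage" is
# always an integer and always at least as large as the data.
usage=$(du -sm "$testdir" | awk '{print $1}')

# Step 2: size the donor file from "usage" instead of a fixed 250M,
# so it can always hold the data being compacted.
echo "would run: xfs_io -f -c \"falloc 0 ${usage}M\" $testdir/donor"
```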
Signed-off-by: caokewu <caokewu1@uniontech.com>
Reviewed-by: Zorro Lang <zlang@redhat.com>
Signed-off-by: Zorro Lang <zlang@kernel.org>