os/bluestore/compression: Fix Estimator::split_and_compress
Fixed calculation of the effective blob size.
When fully incompressible data was passed,
a few bytes at the end could be lost.
Example:
-107> 2025-05-17T20:40:50.468+0000
7f267a42f640 15 bluestore(/var/lib/ceph/osd/ceph-4) _do_write_v2_compressed 200000~78002 -> 200000~78002
-106> 2025-05-17T20:40:50.468+0000
7f267a42f640 20 blobs to put: 200000~f000(4d61) 20f000~f000(b51) 21e000~f000(b51) 22d000~f000(b51) 23c000~f000(b51) 24b000~f000(b51) 25a000~f000(b51) 269000~f000(b51)
As a result, we split 0x78002 into 8 * 0xf000, losing 0x2 in the process.
Calculations for the original code:
>>> size = 0x78002
>>> blobs = (size + 0xffff) // 0x10000
>>> blob_size = size // blobs
>>> print(hex(size), blobs, hex(blob_size))
0x78002 8 0xf000 <- this means the roundup is 0xf000
Calculations for the fixed code:
>>> size = 0x78002
>>> blobs = (size + 0xffff) // 0x10000
>>> blob_size = (size + blobs - 1) // blobs
>>> print(hex(size), blobs, hex(blob_size))
0x78002 8 0xf001 <- this means the roundup is 0x10000
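The two formulas above can be contrasted in a minimal sketch (this is illustrative Python, not the actual C++ in Estimator::split_and_compress; the helper names and the 0x10000 max blob size are assumptions for the example):

```python
MAX_BLOB = 0x10000  # assumed maximum blob size for this example

def blob_size_old(size, max_blob=MAX_BLOB):
    """Buggy variant: floor division drops the remainder bytes."""
    blobs = (size + max_blob - 1) // max_blob  # ceil(size / max_blob)
    return size // blobs                       # floor -> can lose bytes

def blob_size_new(size, max_blob=MAX_BLOB):
    """Fixed variant: ceiling division so blobs * blob_size >= size."""
    blobs = (size + max_blob - 1) // max_blob
    return (size + blobs - 1) // blobs         # ceil(size / blobs)

size = 0x78002
blobs = (size + MAX_BLOB - 1) // MAX_BLOB      # 8 blobs
# old: 8 * 0xf000 = 0x78000 < 0x78002 -> the last 0x2 bytes are lost
# new: 8 * 0xf001 = 0x78008 >= 0x78002 -> every byte is covered
```

The key invariant of the fix is that blobs * blob_size is never smaller than the input size, so no tail bytes are dropped when the write is split.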
Fixes: https://tracker.ceph.com/issues/71531
Signed-off-by: Adam Kupczyk <akupczyk@ibm.com>
(cherry picked from commit 80b7d6840ca989d04a86e90ab946b464bd8d5982)