If we are doing a small overwrite of a blob with a large chunk size
(say, due to a large csum order), we are better off writing into a new
allocation than doing a large read/modify/write. If the read
amplification would exceed min_alloc_size, skip the read entirely and
write into a new blob.
Signed-off-by: Sage Weil <sage@redhat.com>
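
The heuristic can be sketched in isolation as follows. This is a hypothetical
standalone version for illustration only; the real check lives inline in
BlueStore's write path and uses the ROUND_UP_TO macro and blob state shown in
the diff below. The helper names (`round_up_to`, `round_down_to`, `prefer_rmw`)
are assumptions, not Ceph APIs.

```cpp
#include <cstdint>

// Align helpers, mirroring what ROUND_UP_TO does in the real code.
static uint64_t round_up_to(uint64_t v, uint64_t align) {
  return (v + align - 1) / align * align;
}
static uint64_t round_down_to(uint64_t v, uint64_t align) {
  return v / align * align;
}

// Hypothetical sketch of the new condition: read/modify/write is only
// worthwhile when the head + tail bytes we would have to read back
// (the read amplification) stay under min_alloc_size; otherwise we are
// better off writing the data into a freshly allocated blob.
static bool prefer_rmw(uint64_t b_off, uint64_t b_len,
                       uint64_t chunk_size, uint64_t min_alloc_size) {
  uint64_t head_read = b_off - round_down_to(b_off, chunk_size);
  uint64_t tail_read =
    round_up_to(b_off + b_len, chunk_size) - (b_off + b_len);
  return head_read + tail_read < min_alloc_size;
}
```

For example, a 0x1000-byte write at offset 0x1000 into 0x10000-byte chunks
needs 0x1000 head bytes plus 0xe000 tail bytes read back; whether that is
acceptable depends entirely on min_alloc_size.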
   uint64_t tail_read =
     ROUND_UP_TO(b_off + b_len, chunk_size) - (b_off + b_len);
   if ((head_read || tail_read) &&
-      (b->blob.get_ondisk_length() >= b_off + b_len + tail_read)) {
+      (b->blob.get_ondisk_length() >= b_off + b_len + tail_read) &&
+      head_read + tail_read < min_alloc_size) {
     dout(20) << __func__ << " reading head 0x" << std::hex << head_read
 	     << " and tail 0x" << tail_read << std::dec << dendl;
     if (head_read) {