The existing logic that handles scrub preemptions halves both the
'max' and the 'min' chunk-size values.
This isn't optimal: the 'min' value (used mainly to guarantee a
minimal number of objects fetched from the backend in a single
operation) can and should also serve as a fixed lower bound, limiting
the effect of preemptions on the progress of the scrub.
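
To illustrate, here is a minimal standalone sketch of the patched
computation (the variable names mirror the snippet further below; the
free-standing function and the sample values are hypothetical, not
the actual scrubber code):

  #include <algorithm>

  struct chunk_sizes_t {
    int min_sz;
    int max_sz;
  };

  // 'divisor' is assumed to start at 1 and double on every preemption,
  // halving the effective 'max' chunk size each time; after the change,
  // the 'min' is no longer divided by it.
  chunk_sizes_t effective_chunk_sizes(
      int min_from_conf, int max_from_conf, int divisor)
  {
    // the guaranteed minimum no longer depends on the preemption count
    const int min_chunk_sz = std::max(3, min_from_conf);
    // only the maximum shrinks, and never below the guaranteed minimum
    const int max_chunk_sz = std::max(min_chunk_sz, max_from_conf / divisor);
    return {min_chunk_sz, max_chunk_sz};
  }

With sample values min=50, max=100 and two preemptions (divisor 4),
the previous code would yield a chunk range of 12..25 objects; the
patched code keeps it at exactly 50.
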
Fixes: https://tracker.ceph.com/issues/73410
Signed-off-by: Ronen Friedman <rfriedma@redhat.com>
type: int
level: advanced
desc: Minimum number of objects to deep-scrub in a single chunk
- fmt_desc: The minimal number of object store chunks to scrub during single operation.
- Ceph blocks writes to single chunk during scrub.
+ fmt_desc: The minimum number of objects to scrub during a single operation. Also
+ serves as the minimal chunk size even after scrubbing is preempted by client
+ operations and the effective chunk size is halved.
default: 5
see_also:
- osd_scrub_chunk_max
type: int
level: advanced
default: 512
- fmt_desc: The maximum number of objects per backfill scan.p
+ fmt_desc: The maximum number of objects per backfill scan.
with_legacy: true
- name: osd_extblkdev_plugins
type: str
const int max_from_conf = static_cast<int>(size_from_conf(
m_is_deep, conf, osd_scrub_chunk_max, osd_shallow_scrub_chunk_max));
+ const int min_chunk_sz = std::max(3, min_from_conf);
const int divisor = static_cast<int>(preemption_data.chunk_divisor());
- const int min_chunk_sz = std::max(3, min_from_conf / divisor);
const int max_chunk_sz = std::max(min_chunk_sz, max_from_conf / divisor);
dout(10) << fmt::format(
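
For context, a sketch of what chunk_divisor() is assumed to return
(the tracker class here is hypothetical; only the halving-per-preemption
behavior is taken from the option description above):

  // Hypothetical illustration of the preemption divisor used above; the
  // real preemption_data object in the scrubber carries more state.
  class preemption_tracker_t {
    int m_divisor{1};

  public:
    void on_preempt() { m_divisor *= 2; }  // assumed: doubles per preemption
    int chunk_divisor() const { return m_divisor; }
  };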