Some disks have discard performance that is too low to keep up with
write workloads. Using async discard in this case can cause the OSD to
run out of capacity, since the backlog of outstanding discards prevents
allocations from being freed. While sync discard could be used in this
case to provide backpressure, it might have unacceptable performance
implications.
For the most part, as long as enough discards are getting through to a
device, then it will stay trimmed enough to maintain acceptable
performance. Thus, we can introduce a cap on the pending discard count,
ensuring that the queue of allocations to be freed doesn't get too long
while also issuing sufficient discards to disk. The default value of
1000000 has ample room for discard spikes (e.g. from snaptrim); it could
result in multiple minutes of discards being queued up, but at least
it's not unbounded (though if a user really wants unbounded behaviour,
they can choose it by setting the new configuration option to 0).
Fixes: https://tracker.ceph.com/issues/69604
Signed-off-by: Joshua Baergen <jbaergen@digitalocean.com>
(cherry picked from commit 1dee8837959075687ea8a81c4eec2e1c6625e486)
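The capping behaviour described above can be sketched as a small standalone structure. This is a hedged illustration only: `BoundedDiscardQueue` and `try_queue` are assumed names, not Ceph's, and the real implementation keeps an `interval_set` guarded by `discard_lock` and wakes the discard threads via a condition variable.

```cpp
#include <cstdint>
#include <deque>
#include <utility>

// Illustrative sketch of a capped pending-discard queue (names are
// hypothetical, not Ceph's).
struct BoundedDiscardQueue {
  uint64_t max_pending;                               // 0 means unlimited
  std::deque<std::pair<uint64_t, uint64_t>> pending;  // (offset, length)

  explicit BoundedDiscardQueue(uint64_t cap) : max_pending(cap) {}

  // Mirrors the new _queue_discard contract: returns true only if the
  // interval was queued; on false the caller frees the blocks back to
  // the allocator immediately instead of waiting on the discard threads.
  bool try_queue(uint64_t offset, uint64_t length) {
    if (max_pending > 0 && pending.size() >= max_pending)
      return false;
    pending.emplace_back(offset, length);
    return true;
  }
};
```

The key design point is that a full queue is not an error: the caller simply skips the trim for those extents, trading a little device-level garbage collection efficiency for bounded memory and timely frees.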
// this is private; the caller is expected to check that discard
// threads are running via _discard_started()
-void KernelDevice::_queue_discard(interval_set<uint64_t> &to_release)
+bool KernelDevice::_queue_discard(interval_set<uint64_t> &to_release)
{
if (to_release.empty())
- return;
+ return false;
+
+ auto max_pending = cct->_conf->bdev_async_discard_max_pending;
std::lock_guard l(discard_lock);
+
+ if (max_pending > 0 && discard_queued.num_intervals() >= max_pending)
+ return false;
+
discard_queued.insert(to_release);
discard_cond.notify_one();
+ return true;
}
// return true only if discard was queued, so caller won't have to do
// alloc->release, otherwise return false
bool KernelDevice::try_discard(interval_set<uint64_t> &to_release, bool async)
{
if (!support_discard || !cct->_conf->bdev_enable_discard)
return false;
if (async && _discard_started()) {
- _queue_discard(to_release);
- return true;
+ return _queue_discard(to_release);
} else {
for (auto p = to_release.begin(); p != to_release.end(); ++p) {
_discard(p.get_start(), p.get_len());
void _aio_thread();
void _discard_thread(uint64_t tid);
- void _queue_discard(interval_set<uint64_t> &to_release);
+ bool _queue_discard(interval_set<uint64_t> &to_release);
bool try_discard(interval_set<uint64_t> &to_release, bool async = true) override;
int _aio_start();
- runtime
see_also:
- bdev_enable_discard
+ - bdev_async_discard_max_pending
+- name: bdev_async_discard_max_pending
+ desc: maximum number of pending discards
+ long_desc: The maximum number of pending async discards that can be queued and not claimed by an
+ async discard thread. Discards will not be issued once the queue is full and blocks will be
+ freed back to the allocator immediately instead. This is useful if you have a device with slow
+ discard performance that can't keep up with a consistently high write workload. 0 means
+ 'unlimited'.
+ type: uint
+ level: advanced
+ default: 1000000
+ min: 0
+ with_legacy: true
+ flags:
+ - runtime
+ see_also:
+ - bdev_async_discard_threads
- name: bdev_flock_retry_interval
type: float
level: advanced
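For operators who do want the old unbounded queueing, the new option can be set to 0. A minimal `ceph.conf` fragment (a sketch; place it in whichever section suits your deployment):

```ini
[osd]
# 0 = unlimited pending discards (the pre-change behaviour)
bdev_async_discard_max_pending = 0
```

Since the option carries the `runtime` flag, it can also be changed on a live cluster, e.g. with `ceph config set osd bdev_async_discard_max_pending 0`.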