If we choose a primary that does not belong to the current up set,
and all up peers are still recoverable, we might also end up excluding
some up peer from the acting_recovery_backfill set due to the
"want size <= pool size" constraint (introduced by
https://github.com/ceph/ceph/pull/24035), with the result that not all
up peers get recovered in one go.
Fix by falling through any oversized want set to async recovery, which
should be able to handle it nicely.
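For illustration only, here is a minimal, self-contained sketch of the
fallback described above: if the want set is still oversized after async
recovery has taken peers out, evict peers from the back until it fits the
pool size. The names `want` and `pool.info.size` come from PG.cc; the
sample OSD ids, `pool_size`, and the plain std::cout logging are made up
for this sketch and are not the actual Ceph code.

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
  std::vector<int> want = {4, 1, 7, 2};  // candidate acting set (osd ids)
  std::size_t pool_size = 3;             // stand-in for pool.info.size

  // If async recovery could not shrink the set enough, evict the last
  // peer(s); they will be recovered synchronously later.
  while (want.size() > pool_size) {
    std::cout << "evicting osd." << want.back()
              << " from oversized want" << std::endl;
    want.pop_back();
  }

  for (int osd : want)
    std::cout << "keep osd." << osd << std::endl;
  return 0;
}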
Fixes: https://tracker.ceph.com/issues/42577
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
(cherry picked from commit 22c8cdad8ee1d7376c7d200bdb6ec94ed6d3b5e6)
Conflicts:
	src/osd/PeeringState.cc
	- file does not exist in mimic; made the change manually in
	  src/osd/PG.cc
       choose_async_recovery_replicated(all_info, auth_log_shard->second, &want, &want_async_recovery);
     }
   }
+  while (want.size() > pool.info.size) {
+    // async recovery should have taken out as many osds as it can.
+    // if not, then always evict the last peer
+    // (will get synchronously recovered later)
+    dout(10) << __func__ << " evicting osd." << want.back()
+             << " from oversized want " << want << dendl;
+    want.pop_back();
+  }
   if (want != acting) {
     dout(10) << __func__ << " want " << want << " != acting " << acting
              << ", requesting pg_temp change" << dendl;