Theoretically, peers with a longer list of objects to recover will
take longer to recover and therefore have a bigger chance of blocking
client ops.
Also, to minimize the risk of data loss, we want to bring such broken
(inconsistent) peers back to normal as soon as possible. Putting them
into the async_recovery_targets queue, however, did quite the opposite.
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
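The idea behind the patch below can be sketched in isolation: candidates are kept in a set ordered by recovery cost (ascending), so iterating in reverse picks the costliest shards first for async recovery, down to the pool's min_size. This is a minimal sketch, not the actual Ceph code; `pick_async_targets` and its parameters are hypothetical names that mirror the diff.

```cpp
#include <cassert>
#include <cstddef>
#include <set>
#include <utility>
#include <vector>

// Hypothetical sketch of the selection loop: candidates_by_cost is
// ordered ascending by cost, so reverse iteration visits the costliest
// (slowest-to-recover) shards first, which are the ones most likely to
// block client ops if kept in the acting set.
std::vector<int> pick_async_targets(
    const std::set<std::pair<unsigned, int>>& candidates_by_cost,
    std::size_t want_size, std::size_t min_size) {
  std::vector<int> targets;
  for (auto rit = candidates_by_cost.rbegin();
       rit != candidates_by_cost.rend(); ++rit) {
    // never shrink the acting set below the pool's min_size
    if (want_size <= min_size)
      break;
    targets.push_back(rit->second);  // second = osd id of the shard
    --want_size;
  }
  return targets;
}
```

For example, with candidates {cost 10 → osd 1, cost 50 → osd 2, cost 90 → osd 3}, `want_size = 3`, and `min_size = 1`, the loop selects osds 3 and 2 (the two costliest) and stops once only min_size shards would remain.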
dout(20) << __func__ << " candidates by cost are: " << candidates_by_cost
<< dendl;
// take out as many osds as we can for async recovery, in order of cost
- for (auto weighted_shard : candidates_by_cost) {
+ for (auto rit = candidates_by_cost.rbegin();
+ rit != candidates_by_cost.rend(); ++rit) {
if (want->size() <= pool.info.min_size) {
break;
}
- pg_shard_t cur_shard = weighted_shard.second;
+ pg_shard_t cur_shard = rit->second;
vector<int> candidate_want(*want);
for (auto it = candidate_want.begin(); it != candidate_want.end(); ++it) {
if (*it == cur_shard.osd) {