The cluster topology may change over time (e.g., when new racks,
hosts, or disks are added), and we want existing OSDs to become aware
of newly added OSDs, guaranteeing that OSDs always spread their
heartbeats as widely as possible (e.g., across all racks, hosts, etc.).
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
{
assert(osd_lock.is_locked());
- if (is_waiting_for_healthy()) {
+ if (is_waiting_for_healthy() || is_active()) {
utime_t now = ceph_clock_now();
    if (last_heartbeat_resample == utime_t()) {
      last_heartbeat_resample = now;
      heartbeat_set_peers_need_update();
    } else if (!heartbeat_peers_need_update()) {
      utime_t dur = now - last_heartbeat_resample;
      if (dur > cct->_conf->osd_heartbeat_grace) {
        dout(10) << "maybe_update_heartbeat_peers forcing update after " << dur << " seconds" << dendl;
        heartbeat_set_peers_need_update();
        last_heartbeat_resample = now;
-       reset_heartbeat_peers();   // we want *new* peers!
+       if (is_waiting_for_healthy()) {
+         reset_heartbeat_peers();   // we want *new* peers!
+       }
      }
    }
  }
}
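
In isolation, the resampling behavior this diff implements can be sketched as follows. This is a minimal illustration, not Ceph's actual code: `HeartbeatSampler`, the plain `double` timestamps, and the `grace` parameter are stand-ins for the OSD's `utime_t` clock and `osd_heartbeat_grace` config.

```cpp
#include <cassert>
#include <set>

// Illustrative sketch of the periodic heartbeat-peer resampling logic.
struct HeartbeatSampler {
  double last_resample = 0.0;     // 0.0 means "never sampled yet"
  bool peers_need_update = false; // stand-in for heartbeat_set_peers_need_update()
  bool waiting_for_healthy = false;
  std::set<int> peers;            // stand-in for the OSD's heartbeat peer set

  // Called periodically; 'now' is the current time in seconds and
  // 'grace' is how long the current peer sample is kept before a
  // fresh selection is forced.
  void maybe_update(double now, double grace) {
    if (last_resample == 0.0) {
      last_resample = now;        // first call: record the time and
      peers_need_update = true;   // request an initial peer selection
    } else if (!peers_need_update && now - last_resample > grace) {
      peers_need_update = true;   // force a fresh peer selection
      last_resample = now;
      if (waiting_for_healthy)
        peers.clear();            // we want *new* peers while unhealthy
    }
  }
};
```

The key point of the change is visible in the last branch: an active OSD keeps its current peers while still requesting a wider resample, whereas an OSD waiting to become healthy drops its peer set entirely so it heartbeats with genuinely new peers.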