A PG may pass through an inactive state (such as peering) briefly during
a state transition; users usually want an alert only when a PG has been
stuck inactive for long enough.
Such monitoring depends on PG.last_active > cutoff, but we do not update
PG.last_active if there is neither a state change nor IO on the PG. As a
result, PG.last_active may lag far behind on an idle cluster/pool.
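
For illustration, a minimal sketch of the kind of cutoff check such
monitoring performs; the struct, the flag value, and the threshold
parameter below are hypothetical stand-ins, not Ceph's actual types:

    #include <ctime>
    #include <cstdint>

    // Hypothetical stand-ins for the relevant pg_stat_t fields.
    struct pg_stat {
      uint64_t state;        // bitmask of PG_STATE_* flags
      time_t   last_active;  // last time the PG was known to be active
    };

    const uint64_t PG_STATE_ACTIVE = 1ull << 0;  // placeholder bit value

    // A PG is reported as stuck only when it is currently inactive and
    // its last_active timestamp has fallen behind the cutoff.
    bool is_stuck_inactive(const pg_stat &st, time_t now, time_t threshold) {
      time_t cutoff = now - threshold;
      return !(st.state & PG_STATE_ACTIVE) && st.last_active < cutoff;
    }

With a stale last_active on an idle cluster, a check like this fires even
though the PG only just became inactive; that is the false alert this
patch avoids.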
This patch updates the last_active field to now() when we first find the
PG inactive, as an optimistic estimate, to solve the problem.
Signed-off-by: Xiaoxi Chen <xiaoxchen@ebay.com>
utime_t now = ceph_clock_now();
if (info.stats.state != state) {
info.stats.last_change = now;
+ // Optimistic estimation: if we just found the PG to be inactive,
+ // assume it was active until now.
+ if (!(state & PG_STATE_ACTIVE) &&
+ (info.stats.state & PG_STATE_ACTIVE))
+ info.stats.last_active = now;
+
if ((state & PG_STATE_ACTIVE) &&
!(info.stats.state & PG_STATE_ACTIVE))
info.stats.last_became_active = now;