This fixes a specific case of double-queuing seen in #4832:
- client goes stale, inode marked NEEDSRECOVER
- eval does sync; inode is queued for recovery -> RECOVERING
- client resumes
- client goes stale (again), inode marked NEEDSRECOVER
- eval_gather queues *again*
Note that a cursory look at the recovery code makes me think this needs
a much more serious overhaul. In particular, I don't think we should
be triggering recovery when transitioning *from* a stable state, but
explicitly when we are flagged, or when gathering. We should probably
also hold a wrlock over the recovery period and remove the force_wrlock
kludge from the final size check. Opened ticket #5268.
Signed-off-by: Sage Weil <sage@inktank.com>
if (lock->get_sm() == &sm_filelock) {
assert(in);
- if (in->state_test(CInode::STATE_NEEDSRECOVER)) {
+ if (in->state_test(CInode::STATE_RECOVERING)) {
+ dout(7) << "eval_gather finished gather, but still recovering" << dendl;
+ } else if (in->state_test(CInode::STATE_NEEDSRECOVER)) {
dout(7) << "eval_gather finished gather, but need to recover" << dendl;
mds->mdcache->queue_file_recover(in);
mds->mdcache->do_file_recover();
}
- if (in->state_test(CInode::STATE_RECOVERING)) {
- dout(7) << "eval_gather finished gather, but still recovering" << dendl;
- return;
- }
+ return;
}
if (!lock->get_parent()->is_auth()) {