Sage Weil [Fri, 14 Jan 2011 06:08:40 +0000 (22:08 -0800)]
mds: tolerate (with warning) replayed op with bad prealloc_inos
This comes up when an ESession close is followed by an EMetaBlob that
uses a prealloc_ino. That isn't supposed to happen (it's probably a corner
case with session timeout vs a request waiting on locks that didn't
get killed/canceled?). But tolerate it during replay just the same.
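As a sketch of what "tolerate with warning" can look like in the replay path (the session member and logging shown here are assumptions, not the actual diff):

    // EMetaBlob replay, sketch only: consume the prealloc ino if the session
    // actually has it; otherwise warn instead of asserting and keep replaying
    if (session->prealloc_inos.contains(ino)) {
      session->prealloc_inos.erase(ino);
    } else {
      dout(0) << "WARNING: replay used ino " << ino
              << " not in session prealloc_inos" << dendl;
    }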
Sage Weil [Thu, 13 Jan 2011 21:14:24 +0000 (13:14 -0800)]
filejournal: rewrite completion handling, fix ordering on full->notfull
Rewriting the completion handling to be simpler, clearer, so that it is
easier to maintain a strict completion ordering invariant.
This also fixes an ordering bug: when restarting the journal, we defer
initially until we get a committed_thru from the previous commit and then
do all those completions. That same logic needs to also apply to new items
submitted during that commit interval. This was broken before, but the
simpler structure fixes it. Fixes #666.
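A minimal sketch of the structure this describes (names and types are mine, not the FileJournal code): every completion is queued with its seq, and completions fire strictly in order once the journal learns the committed-thru point; items submitted while a previous commit is in flight sit in the same queue, so they obey the same rule.

    // sketch: strict, seq-ordered completion handling
    #include <cstdint>
    #include <deque>
    #include <functional>

    struct PendingCompletion {
      uint64_t seq;
      std::function<void()> fin;
    };
    std::deque<PendingCompletion> pending;

    void queue_completion(uint64_t seq, std::function<void()> fin) {
      pending.push_back(PendingCompletion{seq, fin});
    }

    void committed_thru(uint64_t seq) {
      // everything with seq <= the committed point is durable; fire in order
      while (!pending.empty() && pending.front().seq <= seq) {
        pending.front().fin();
        pending.pop_front();
      }
    }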
Tested-by: Jim Schutt <jaschut@sandia.gov>
Signed-off-by: Sage Weil <sage@newdream.net>
Samuel Just [Thu, 13 Jan 2011 20:18:17 +0000 (12:18 -0800)]
PG: activate should not enqueue snap_trimmer on a replica
Previously, activate would queue_snap_trim() for replicas if snap_trimq
ended up non-empty, guaranteeing a crash for any replica starting up
while purged_snaps lagged behind pool->cached_removed_snaps.
This should fix #702.
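The shape of the fix, as a sketch (not the literal diff): only the primary should kick off trimming from activate().

    // PG::activate(), sketch: replicas must never queue the snap trimmer
    if (is_primary() && !snap_trimq.empty())
      queue_snap_trim();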
Signed-off-by: Samuel Just <samuelj@hq.newdream.net>
Samuel Just [Wed, 12 Jan 2011 23:09:51 +0000 (15:09 -0800)]
ReplicatedPG: Fix oi.size bug in _rollback_to
_rollback_to calls _delete_head before cloning the clone into place.
_delete_head sets the object info size to 0. _rollback_to now resets
the size to match the rolled back object. Previously, this bug
manifested as a failed assert in scrub when checking the object sizes.
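Roughly what the fix amounts to, as a sketch (the member names around the rollback target are assumptions):

    // _rollback_to, sketch
    _delete_head(ctx);                      // zeroes the head's oi.size
    // ... clone the rollback target's contents back into place ...
    ctx->obs->oi.size = rollback_to->obs.oi.size;   // restore the real size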
Signed-off-by: Samuel Just <samuelj@hq.newdream.net>
Samuel Just [Wed, 12 Jan 2011 21:51:55 +0000 (13:51 -0800)]
ReplicatedPG: register_object_context and register_snapset_context cleanup
Previously, get_object_context and get_snapset_context did not register
the resulting objects. In some cases these objects therefore never got
registered, and multiple copies ended up being created. This caused a bug
in find_object_context where get_snapset_context could return an object
distinct from the one referenced by the object returned from
get_object_context.
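The cleanup moves to the usual lookup-or-register pattern, sketched here with assumed member names (the point being that every caller gets the one registered instance):

    // sketch: always return the registered context, creating and registering
    // it on first use so there is never more than one copy per object
    ObjectContext *ReplicatedPG::get_object_context(const sobject_t& soid)
    {
      map<sobject_t, ObjectContext*>::iterator p = object_contexts.find(soid);
      if (p != object_contexts.end())
        return p->second;                    // already registered: share it
      ObjectContext *obc = new ObjectContext;
      obc->obs.oi.soid = soid;
      register_object_context(obc);          // later lookups see this one
      return obc;
    }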
Signed-off-by: Samuel Just <samuelj@hq.newdream.net>
Samuel Just [Wed, 12 Jan 2011 20:07:44 +0000 (12:07 -0800)]
ReplicatedPG: snap_trimmer workaround
Currently, an OSD bug is causing snap_trimq to contain some snaps
already in purged_snaps. This workaround should let kvmtest
come back up. A real fix is still needed.
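The workaround, sketched (its placement inside the snap_trimmer loop is my assumption): snaps that already appear in purged_snaps are dropped from snap_trimq rather than trimmed again.

    // snap_trimmer loop, sketch -- WORKAROUND, real fix still needed
    snapid_t sn = snap_trimq.range_start();
    if (info.purged_snaps.contains(sn)) {
      snap_trimq.erase(sn);    // already purged; just forget about it
      continue;
    }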
Signed-off-by: Samuel Just <samuelj@hq.newdream.net>
Samuel Just [Mon, 10 Jan 2011 22:45:06 +0000 (14:45 -0800)]
ReplicatedPG: Fix bug in rollback
Previously, _rollback_to assumed that the rollback was a no-op if
ctx->clone_obc was set and its prior version matched head's version.
However, this broke in sequences like:
Write "snap1 contents" to oid "blah"
create snapshot "snap1"
Write "snap2 contents" to oid "blah"
create snapshot "snap2"
rollback oid "blah" to snapshot "snap1"
In this case, make_writeable would have just cloned head to the snap2
clone, but the relevant clone is actually "snap1". _rollback_to now
verifies that the most recent clone is the correct one before assuming
that head is already correct.
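Sketching that extra check (the exact condition and the helper mapping a snap to its clone are assumptions): the no-op shortcut only applies when the newest clone really is the clone for the snap being rolled back to.

    // sketch: head is "already correct" only if the latest clone is the
    // clone that belongs to the rollback target snap
    bool rollback_is_noop =
      ctx->clone_obc &&
      ctx->clone_obc->obs.oi.prior_version == head_version &&
      !snapset.clones.empty() &&
      clone_for_snap(snapset, snapid) == snapset.clones.back();  // helper assumed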
Signed-off-by: Samuel Just <samuelj@hq.newdream.net>
Samuel Just [Thu, 6 Jan 2011 23:48:13 +0000 (15:48 -0800)]
ReplicatedPG: clone_overlap should contain one entry per clone
Previously, writefull and _delete_head would remove the last
entry from snapset.clone_overlap. Now, the last entry becomes
an empty interval_set. clone_overlap should contain one entry
per clone.
The missing entries previously caused a bug in _rollback_to where
iter would be clone_overlap.end().
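In sketch form (variable usage assumed), the change is to keep an entry for the newest clone instead of erasing it:

    // before: the newest clone's overlap entry was simply erased
    // after:  one entry per clone; the newest just becomes an empty set
    snapset.clone_overlap[snapset.clones.back()] = interval_set<uint64_t>();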
Signed-off-by: Samuel Just <samuelj@hq.newdream.net>
Sage Weil [Fri, 24 Dec 2010 16:36:05 +0000 (08:36 -0800)]
osd: generate backlog if needed to get last_complete >= log.tail || backlog
If primary or a replica has a mistrimmed pg log, we need to generate the
backlog during peering. This sucks, because the PG won't go active for
a long time, but it's what happens when there's a bug in the code that
mis-trims the PG log!
Sage Weil [Mon, 3 Jan 2011 22:32:48 +0000 (14:32 -0800)]
mds: load root inode on replay if auth
If we are auth for the root inode, load its initial value off of disk. We
may not see it in the log if it has not been modified. If it has, this
is useless but fast/harmless. This only occurs for brand-new filesystems
where the mds is immediately restarted.
It seems that we have not been zeroing
PG::Info::History::last_epoch_clean when the History structure is
created. This led to some very interesting log output (and bugs!)
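Presumably the fix is just zero-initializing the field when the struct is constructed; a sketch (other members elided):

    // PG::Info::History constructor, sketch
    History()
      : epoch_created(0),
        last_epoch_started(0),
        last_epoch_clean(0) {}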
Signed-off-by: Colin McCabe <colinm@hq.newdream.net>
Sage Weil [Mon, 20 Dec 2010 21:22:49 +0000 (13:22 -0800)]
osd: compensate for replicas with tail > last_complete
Normally we shouldn't ever have a last_complete < log.tail (&& !backlog).
But maybe we do (old bugs, whatever; see #590). In that case, the primary
can compensate by sending more log info to the replica.
Sage Weil [Sat, 18 Dec 2010 05:02:58 +0000 (21:02 -0800)]
mds: make nested scatterlock state change check more robust
The predirty_journal_parents() calls wrlock_start() with nowait=true
because it has a journal entry open and we don't want to trigger a nested
scatterlock change that needs to journal something again (either
via scatter_writebehind or scatter_start). (MDLog can only handle a single
log entry open at once because building multiple at once would require very
very very careful ordering of predirty() calls and versions.)
We were already checking for the simple_lock() case (which may call
writebehind); fix up the check to also cover the scatter_mix() (which may
call scatter_start) case.
Sage Weil [Fri, 17 Dec 2010 23:12:17 +0000 (15:12 -0800)]
filestore: make OpSequencer::flush() work for writeahead journaling items
It was only waiting for items in the op_queue to complete. The goal is
to wait for anything we've called queue_transactions(&osr,...) on. If we
do writeahead journaling, though, there might be new ops that are still
journaling but not yet submitted to the fs, and those were being missed.
This adds a journal queue to the OpSequencer, and uses it in the writeahead
case only.
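A simplified sketch of the idea (standard-library locking here rather than Ceph's own types): flush() has to drain a journal queue as well as the op queue, and the journal queue is only fed in the writeahead case.

    // OpSequencer, sketch: track items still in the write-ahead journal
    #include <condition_variable>
    #include <list>
    #include <mutex>

    class OpSequencerSketch {
      std::mutex lock;
      std::condition_variable cond;
      std::list<uint64_t> jq;   // seqs handed to the journal, not yet queued
      std::list<uint64_t> q;    // seqs queued for / applying to the filestore
    public:
      void queue_journal(uint64_t seq) {          // writeahead case only
        std::lock_guard<std::mutex> l(lock);
        jq.push_back(seq);
      }
      void journal_done(uint64_t seq) {           // move journal -> op queue
        std::lock_guard<std::mutex> l(lock);
        jq.remove(seq);
        q.push_back(seq);
      }
      void op_done(uint64_t seq) {
        std::lock_guard<std::mutex> l(lock);
        q.remove(seq);
        cond.notify_all();
      }
      void flush() {                              // wait for *everything*
        std::unique_lock<std::mutex> l(lock);
        cond.wait(l, [this] { return jq.empty() && q.empty(); });
      }
    };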
Sage Weil [Fri, 17 Dec 2010 20:54:38 +0000 (12:54 -0800)]
osd: flush pg writes to disk before starting scrub scan
This avoids two races:
- we just completed recovery by pushing objects to the replica, and the
replica starts scanning before those writes reach the fs.
- we just trimmed to something after last_update_applied.
Sage Weil [Tue, 14 Dec 2010 17:26:12 +0000 (09:26 -0800)]
mon: trim pgmap less aggressively
This will make observer crashes due to missed states (#648) much harder to
hit. Eventually the pgmap state trim problem will go away when the
monitor/paxos code is restructured (#647).
Sage Weil [Sun, 12 Dec 2010 22:39:48 +0000 (14:39 -0800)]
mds: fix replay/resent vs completed request check
If it is a _replayed_ request, we should always send a simple ack if it is
completed, because the client does not care about any additional caps.
If it is a _resent_ request, then we want to return useful caps on open or
create requests, even if any modification side-effects have already been
committed. The additional checks for completed already exist in the
create and open handlers.
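In rough code terms (a sketch; the flags and helpers here are assumptions about the handler, not its actual shape):

    // sketch of the replay-vs-resend distinction for a completed request
    if (already_completed) {
      if (req_is_replay) {
        send_simple_ack();        // replayed op: client wants no extra caps
        return;
      }
      // resent open/create: fall through so the normal path can still hand
      // back useful caps even though the side effects are committed
    }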
Vangelis Koukis [Thu, 9 Dec 2010 18:53:22 +0000 (20:53 +0200)]
Fix overflow in FileJournal::_open_file()
Running the unstable branch, mkcephfs fails when trying to create
a 3GB journal file on the OSDs.
Relevant messages from the osd logfile:
2010-12-09 19:03:54.419737 7fdde4d51720 journal _open_file: unable to extend journal to 18446744072560312320 bytes
2010-12-09 19:03:54.419789 7fdde4d51720 filestore(/osd) mkjournal error creating journal on /osd/journal
The problem is that the calculation of the journal size in bytes
overflows, in FileJournal::_open_file().
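The overflow is the usual 32-bit-intermediate problem: the configured journal size (in megabytes, by my reading) is shifted into bytes in int before being widened, so a few-GB journal wraps negative and then sign-extends into the huge value above. A sketch of the fix:

    // broken: the shift happens in 32-bit int, overflows for >= 2 GB,
    // then sign-extends into an enormous unsigned value
    //   uint64_t newsize = g_conf.osd_journal_size << 20;

    // fixed (sketch): widen first, then scale megabytes to bytes
    int64_t newsize = (int64_t)g_conf.osd_journal_size << 20;
    if (::ftruncate(fd, newsize) < 0)
      return -errno;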
Signed-off-by: Vangelis Koukis <vkoukis@cslab.ece.ntua.gr>
Signed-off-by: Sage Weil <sage@newdream.net>
Sage Weil [Wed, 8 Dec 2010 23:53:13 +0000 (15:53 -0800)]
filejournal: reset last_committed_seq if we find the journal to be invalid
If we read an event that's later than our expected entry, we set read_pos
to -1 and discard the journal. If that happens we also need to reset
last_committed_seq to avoid a crash like
Sage Weil [Tue, 7 Dec 2010 21:31:01 +0000 (13:31 -0800)]
mds: sync->mix replica state is sync->mix(2)
When auth first moves to sync->mix,
- auth sends AC_MIX to replicas
- replicas go to sync->mix
- replicas finish gather, send AC_SYNCACK, move to sync->mix(2)
- auth gets all acks, sends AC_MIX again
- replica moves to MIX
So any new replica should just get sync->mix(2), so that it is not confused
by the second AC_MIX.
Sage Weil [Tue, 7 Dec 2010 19:15:56 +0000 (11:15 -0800)]
mds: open undef dirfrags during rejoin
Any invented dirfrags have a version of 0. This will cause problems later
if we pre_dirty() anything in that dir because the dir version won't be
in sync (it'll be way too small). Also, we can do that at any point,
e.g. when flushing dirty caps, and aren't allowed to delay, so we need to
load those dirfrags now.
In theory we could read only the fnode and not all the dentries, but we
may as well. We should be more careful about memory than this patch is,
though.
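Roughly, during rejoin (sketch; the callback class name is made up):

    // rejoin, sketch: an invented dirfrag has version 0, so read it off disk
    // now, before anything in it can be pre_dirty()ed
    if (dir->is_auth() && dir->get_version() == 0)
      dir->fetch(new C_MDC_RejoinOpenUndefDir(this, dir));  // callback name assumed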
Sage Weil [Tue, 7 Dec 2010 17:06:47 +0000 (09:06 -0800)]
mds: send LOCKFLUSHED to trigger finish_flush on replicas
Since f741766a we have triggered start_flush and finish_flush on replicas.
The problem is that the finish_flush didn't always happen for the mix->lock
case: we would start_flush when we sent the AC_LOCKACK, but could only
finish_flush if/when we got another SYNC or MIX. If the primary stayed in
the LOCK state, we would keep our flushing flag. That in turn causes
problems later when we try to eval_gather() (esp if we are auth at that
point?).
Fix this by sending an explicit AC_LOCKFLUSHED message to replicas after
we do a scatter_writebehind. The replica will only set flushing if it
flushed dirty data, which forces scatter_writebehind, so we will always
get the LOCKFLUSHED to match. Replicas that didn't flush will also get
it, but oh well. We'd need to keep track of which ones sent dirty data to
do that properly, though.
TODO: still need to verify that this is correct for rejoin.
Sage Weil [Tue, 7 Dec 2010 15:58:01 +0000 (07:58 -0800)]
mds: clear EXPORTINGCAPS on export_reverse
We need to reverse the effects of encode_export_inode_caps(), which is just
the pin and state bit.
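Which, in sketch form, is just the following (state and pin names as I understand them, so treat them as assumptions):

    // export_reverse(), sketch: undo encode_export_inode_caps()
    in->state_clear(CInode::STATE_EXPORTINGCAPS);
    in->put(CInode::PIN_EXPORTINGCAPS);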
The original problem can be reproduced with
- ceph tell mds 0 injectargs '--mds-kill-import-at 5'
- restart mds
- recovery completes successfully
- wait for the subtree to be reexported
- fail with bad EXPORTINGCAPS get in encode_export_inode_caps
Sage Weil [Mon, 6 Dec 2010 22:01:28 +0000 (14:01 -0800)]
osd: drop not-quite-copy constructor for object_info_t
Making a copy-like constructor that doesn't actually copy is confusing
and error prone. In this case, we initialized a clone's object_info with
the head's snapid, causing problems with what info was encoded and crashing
later in the snap_trimmer. Here the one caller already called
copy_user_bits(); let's move the lost copy there.
Tune Debian packaging for the upcoming v0.24 release.
This includes switching the OpenSSL dependency to Crypto++, since that is
what is actually used; removing radosacl, since it is no longer compiled; and
pristine-cleaning the source. Explicitly note that this uses the 1.0 package
format.