We've changed quite a lot of the restart behavior, as well as one
of the message encodings. This is cheaper and easier than using feature bits,
and CephFS is still a tech preview or whatever, so let's cover the changes this way.
Yan, Zheng [Sun, 31 Mar 2013 06:19:17 +0000 (14:19 +0800)]
mds: don't roll back prepared table updates
When the table server is recovering, it re-sends 'agree' messages for
prepared table updates. It is possible for the table client to receive an
'agree' message before it commits the corresponding update. Don't send a
'rollback' message back to the server in this case.
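A minimal C++ sketch of the client-side rule, using hypothetical standalone
types rather than the real MDSTableClient code:

  #include <cstdint>
  #include <map>

  enum class UpdateState { PREPARED, COMMITTING };

  struct TableClient {
    std::map<uint64_t, UpdateState> pending;  // reqid -> local update state

    // Invoked when the recovering table server re-sends an 'agree'.
    void handle_resent_agree(uint64_t reqid) {
      if (pending.count(reqid)) {
        // We still hold this prepared update and will commit it ourselves;
        // answering with 'rollback' here would discard a valid update.
        return;
      }
      send_rollback(reqid);  // genuinely unknown update: safe to roll back
    }

    void send_rollback(uint64_t reqid) { /* network send elided */ (void)reqid; }
  };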
Yan, Zheng [Thu, 14 Mar 2013 04:24:54 +0000 (12:24 +0800)]
mds: fix export cancel notification
The comment says that if the importer is dead, bystanders think the
exporter is the only auth, as per mdcache->handle_mds_failure(). But
there is no such code in MDCache::handle_mds_failure().
Yan, Zheng [Tue, 12 Mar 2013 08:51:53 +0000 (16:51 +0800)]
mds: send lock action message when auth MDS is in proper state.
For a rejoining object, don't send the lock ACK message, because lock
states are still uncertain; the ACK may confuse the object's auth MDS and
trigger an assertion.
If the object's auth MDS is not active, just skip sending the NUDGE,
REQRDLOCK and REQSCATTER messages; MDCache::handle_mds_recovery() will
take care of them.
Also defer the caps release message until clientreplay or active.
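A sketch of the gating, with hypothetical stand-ins for the object and
mdsmap state (the real checks live in the Locker/MDCache paths):

  #include <set>

  struct MDSStateView {
    std::set<int> active;                     // ranks currently active
    bool is_active(int rank) const { return active.count(rank) > 0; }
  };

  struct ObjectView {
    int auth_rank;
    bool rejoining;                           // lock state still uncertain
  };

  // Should we send NUDGE/REQRDLOCK/REQSCATTER (or a lock ACK) right now?
  bool can_send_lock_message(const ObjectView& o, const MDSStateView& mds) {
    if (o.rejoining)
      return false;  // an ACK now could hit uncertain state and assert
    if (!mds.is_active(o.auth_rank))
      return false;  // skip; handle_mds_recovery() will retrigger later
    return true;
  }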
Yan, Zheng [Tue, 12 Mar 2013 08:27:22 +0000 (16:27 +0800)]
mds: share inode max size after MDS recovers
The MDS may crash after journaling the new max size, but before sending
the new max size to the client. Later, when the MDS recovers, the client
re-requests the new max size, but the MDS finds the max size unchanged, so
the client waits for the new max size forever. This issue can be avoided
by checking the client cap's last_sent and sharing the inode max size when
it is zero (a reconnected cap's last_sent is zero).
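The check is small enough to sketch; the type and field below are
illustrative stand-ins for the real Capability:

  #include <cstdint>

  struct CapView {
    uint64_t last_sent = 0;  // 0 after a client reconnect
  };

  // After recovery, decide whether to proactively share the inode max size.
  bool should_share_max_size(const CapView& cap) {
    // A reconnected cap has last_sent == 0, so the client may have missed a
    // max_size update that was journaled but never delivered before the crash.
    return cap.last_sent == 0;
  }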
Yan, Zheng [Thu, 14 Mar 2013 12:06:27 +0000 (20:06 +0800)]
mds: handle linkage mismatch during cache rejoin
In an MDS cluster, not all file system namespace operations that involve
multiple MDSes use two-phase commit. Some operations use dentry link/unlink
messages to update replica dentries' linkage after they are committed by
the master MDS. It's possible that the master MDS crashes after journaling an
operation, but before sending the dentry link/unlink messages. Later, when
the MDS recovers and receives cache rejoin messages from the surviving
MDSes, it will find linkage mismatches.
The original cache rejoin code does not properly handle the case where
dentry unlink messages were missing: unlinked inodes end up linked to stray
dentries. So the cache rejoin ack message needs to push replicas of these
stray dentries to the surviving MDSes.
This patch also adds code that handles cache expiration in the middle of
cache rejoin.
Yan, Zheng [Wed, 13 Mar 2013 12:58:26 +0000 (20:58 +0800)]
mds: encode dirfrag base in cache rejoin ack
The cache rejoin ack message already encodes the inode base; make it also
encode the dirfrag base. This allows the message to replicate stray dentries
like the MDentryUnlink message does. The function will be used by a later
patch.
Yan, Zheng [Wed, 13 Mar 2013 12:47:11 +0000 (20:47 +0800)]
mds: include replica nonce in MMDSCacheRejoin::inode_strong
So the recovering MDS can properly handle cache expire messages.
Also increase the nonce value when sending the cache rejoin acks.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Reviewed-by: Greg Farnum <greg@inktank.com>
Also update the MMDSCacheRejoin encoding to the new format.
Signed-off-by: Greg Farnum <greg@inktank.com>
Yan, Zheng [Wed, 13 Mar 2013 11:23:18 +0000 (19:23 +0800)]
mds: remove MDCache::rejoin_fetch_dirfrags()
In commit 77946dcdae (mds: fetch missing inodes from disk), I introduced
MDCache::rejoin_fetch_dirfrags(). But it basically duplicates the function
of MDCache::open_undef_dirfrags(), so just remove rejoin_fetch_dirfrags()
and make open_undef_dirfrags() also handle undefined inodes.
In an MDS cluster, a rename operation may involve multiple MDSes. If the
rename source's auth MDS crashes after some witness MDSes have prepared
the rename, but before the rename is committed, then when the MDS later
recovers, its subtree map and linkages differ from the prepared
MDSes'. This causes problems for both subtree resolve and cache rejoin.
The solution is: if the rename source's auth MDS fails, the prepared
witness MDSes query the master MDS whether the operation is committing. If
it's not, they roll back the rename, then send the resolve message to the
recovering MDS.
Another similar case is when a prepared witness MDS crashes while the
rename source's auth MDS has prepared or is preparing the operation.
When the witness recovers, the master just delays sending the resolve
ack message until it commits the operation.
This patch also updates Server::handle_client_rename() to make preparing
the rename source's auth MDS the final step before committing the
rename.
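A sketch of the witness-side decision, with hypothetical stubs in place of
the real resolve-phase plumbing:

  #include <cstdint>

  enum class MasterAnswer { COMMITTING, ABORTED };

  // Stubs standing in for the real network/journal machinery.
  MasterAnswer query_master_commit_state(uint64_t) { return MasterAnswer::ABORTED; }
  void rollback_rename(uint64_t) {}
  void send_resolve(int) {}

  // The rename source's auth MDS failed while this witness holds a prepared
  // rename: ask the master whether the operation is committing, roll back if
  // not, and only then send our resolve to the recovering MDS.
  void witness_handle_auth_failure(uint64_t reqid, int recovering_rank) {
    if (query_master_commit_state(reqid) == MasterAnswer::ABORTED)
      rollback_rename(reqid);
    send_resolve(recovering_rank);
  }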
Yan, Zheng [Fri, 15 Mar 2013 02:34:09 +0000 (10:34 +0800)]
mds: don't send MDentry{Link,Unlink} before receiving cache rejoin
The active MDS calls MDCache::rejoin_scour_survivor_replicas() when it
receives the cache rejoin message. The function will remove the objects
replicated by MDentry{Link,Unlink} from the replica map.
Yan, Zheng [Thu, 14 Mar 2013 16:08:39 +0000 (00:08 +0800)]
mds: set resolve/rejoin gather MDS set in advance
An active MDS may receive resolve/rejoin messages before receiving the
mdsmap message that claims the MDS cluster is in the resolving/rejoining
state. So instead of setting the gather MDS set when receiving the mdsmap,
set it in advance when detecting an MDS failure.
Yan, Zheng [Thu, 14 Mar 2013 04:27:51 +0000 (12:27 +0800)]
mds: don't send resolve message between active MDS
When the MDS cluster is resolving, the current behavior is to send the
subtree resolve message to all other MDSes and wait for all other MDSes'
resolve messages. The problem is that active MDSes can have different
subtree maps due to renames. Besides, gathering active MDSes' resolve
messages is also racy. The only function of these messages is to
disambiguate other MDSes' imports; we can replace that with an import
finish notification.
Yan, Zheng [Wed, 13 Mar 2013 02:28:58 +0000 (10:28 +0800)]
mds: unify slave request waiting
When requesting a remote xlock or remote wrlock, the master request is
put into the lock object's REMOTEXLOCK waiting queue. The problem is that
a remote wrlock's target can be different from the lock's auth MDS. When
the lock's auth MDS recovers, MDCache::handle_mds_recovery() may wake the
wrong request. So just unify slave request waiting: dispatch the master
request when receiving the slave request reply.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Reviewed-by: Sage Weil <sage@inktank.com>
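A sketch of the unified scheme, with hypothetical types in place of the
real MDRequest plumbing: keyed by request id rather than by lock, a
recovery event can never wake the wrong waiter:

  #include <cstdint>
  #include <map>

  struct MasterRequest { uint64_t reqid; };

  // Requests blocked on a slave reply, independent of any lock's wait queue.
  struct SlaveWait {
    std::map<uint64_t, MasterRequest*> waiting;

    void wait_for_slave(MasterRequest* mdr) { waiting[mdr->reqid] = mdr; }

    // On slave reply, hand back exactly the request that was waiting on it.
    MasterRequest* handle_slave_reply(uint64_t reqid) {
      auto it = waiting.find(reqid);
      if (it == waiting.end())
        return nullptr;
      MasterRequest* mdr = it->second;
      waiting.erase(it);
      return mdr;  // caller re-dispatches the master request
    }
  };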
Yan, Zheng [Tue, 12 Mar 2013 12:24:52 +0000 (20:24 +0800)]
mds: defer eval gather locks when removing replica
Locks' states should not change between composing the cache rejoin ack
messages and sending the messages. If Locker::eval_gather() is called
in MDCache::{inode,dentry}_remove_replica(), it may wake requests and
change locks' states.
Yan, Zheng [Sat, 16 Mar 2013 00:02:18 +0000 (08:02 +0800)]
mds: make sure table request id unique
When an MDS becomes active, the table server re-sends 'agree' messages
for old prepared requests. If the recovered MDS starts a new table request
at the same time, the new request's ID can happen to be the same as an old
prepared request's ID, because the current table client code assigns request
IDs from zero after the MDS restarts.
This patch makes the table server send 'ready' messages when table clients
become active or when it itself becomes active. The 'ready' message updates
the table client's last_reqid to avoid request ID collisions. The message
also replaces the roles of the finish_recovery() and handle_mds_recovery()
callbacks for the table client.
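A sketch of the client-side effect of 'ready', with illustrative names:

  #include <algorithm>
  #include <cstdint>

  // Hypothetical client-side state; the real code lives in MDSTableClient.
  struct TableClientState {
    uint64_t last_reqid = 0;

    // On the server's 'ready' message, bump last_reqid past every reqid the
    // server still has prepared, so fresh requests can never collide.
    void handle_ready(uint64_t server_max_prepared_reqid) {
      last_reqid = std::max(last_reqid, server_max_prepared_reqid);
    }

    uint64_t next_reqid() { return ++last_reqid; }
  };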
Sage Weil [Mon, 1 Apr 2013 16:12:44 +0000 (09:12 -0700)]
client: always remove cond from list after waiting
The signal method removes Conds from the list after it signals them. That's
not okay if a cond triggers for some other reason; an invalid Cond*
will remain on the list and get signaled later.
Make the wait_on_list() helper remove it; use that in several callers;
explicitly do the removal in the remaining callers.
Change signal_cond_list() to not clear the list; rely on the signalees to
do that. Audit all users and make sure they are either using the
wait_on_list() helper (which removes its Cond) or do the removal explicitly.
Backport some form of this: bobtail
Signed-off-by: Sage Weil <sage@inktank.com>
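A sketch of the pattern using std::condition_variable in place of Ceph's
Cond: the waiter owns its list entry, so no wakeup path can leave a
dangling pointer behind.

  #include <condition_variable>
  #include <list>
  #include <mutex>

  std::mutex client_lock;
  std::list<std::condition_variable*> waiters;

  // Caller holds client_lock via `l`. Whatever woke the cond up, the waiter
  // removes its own entry before returning.
  void wait_on_list(std::unique_lock<std::mutex>& l) {
    std::condition_variable cond;
    waiters.push_back(&cond);
    cond.wait(l);            // may wake spuriously or via another path
    waiters.remove(&cond);   // always clean up our own entry
  }

  // Signal without clearing; each signalee removes its own entry.
  void signal_cond_list() {
    for (auto* c : waiters)
      c->notify_all();
  }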
Josh Durgin [Sun, 31 Mar 2013 00:25:18 +0000 (17:25 -0700)]
librbd: change diff_iterate interface to be more C-friendly
Use an int instead of a bool for the callback, and make it represent
whether the data exists, rather than the opposite, since callers
are likely to test for whether it's data instead of whether it's zeroes.
Change the return value to 0 on success, since an int64_t would wrap around
for large reads, and there's no value in reporting the length
read when it will always be the length requested, clipped to the
size of the image.
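A sketch of a callback under the new convention (parameter order and names
are illustrative, not the exact librbd header):

  #include <cstddef>
  #include <cstdint>
  #include <cstdio>

  // Return 0 to continue iterating; a nonzero return aborts the iteration.
  // The iterate call itself returns 0 on success rather than a byte count.
  int print_extent(uint64_t ofs, size_t len, int exists, void* /*arg*/) {
    std::printf("%llu~%zu: %s\n",
                static_cast<unsigned long long>(ofs), len,
                exists ? "data" : "zeroes");
    return 0;
  }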
Josh Durgin [Fri, 29 Mar 2013 23:48:02 +0000 (16:48 -0700)]
rbd: fail import-diff if we reach the end of the stream sooner than expected
safe_read() just protects against EINTR, and may return less data than
requested if it reaches the end of the file. Use safe_read_exact() to
make sure we get the right amount of data.
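Hypothetical standalone versions of the two helpers, to show the contract
difference: safe_read() retries EINTR but may legally return short at end
of file; safe_read_exact() turns a short read into a hard error.

  #include <cerrno>
  #include <unistd.h>

  ssize_t safe_read(int fd, void* buf, size_t count) {
    ssize_t r;
    do {
      r = ::read(fd, buf, count);
    } while (r < 0 && errno == EINTR);
    return r < 0 ? -errno : r;  // may be short at end of file
  }

  int safe_read_exact(int fd, void* buf, size_t count) {
    char* p = static_cast<char*>(buf);
    while (count > 0) {
      ssize_t r = safe_read(fd, p, count);
      if (r < 0)
        return static_cast<int>(r);
      if (r == 0)
        return -EDOM;  // premature end of stream: fail the import
      p += r;
      count -= static_cast<size_t>(r);
    }
    return 0;
  }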
We were using the internal CEPH_NOSNAP and CEPH_SNAPDIR constants, and
defining a clone_info_t::HEAD (with a different value). The docs were
referring to the internal constant names.
Instead, define librados constants (C and C++) with the same values as the
internal types.
Note that this changes the clone_info_t::HEAD value from -1 to -2 so that
it now matches the internal type.
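Illustrative definitions showing the value alignment (the real public names
live in the librados headers):

  #include <cstdint>

  constexpr uint64_t SNAP_HEAD = static_cast<uint64_t>(-2); // == internal CEPH_NOSNAP
  constexpr uint64_t SNAP_DIR  = static_cast<uint64_t>(-1); // == internal CEPH_SNAPDIR
  // clone_info_t::HEAD moves from -1 to -2, i.e. to the SNAP_HEAD value.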
Sage Weil [Thu, 28 Mar 2013 21:13:03 +0000 (14:13 -0700)]
librbd: fix diff_iterate arithmetic for non-standard striping
This code is confusing because we are moving back and forth between
image offsets, "buffer" offsets (image offsets relative to off), and
object offsets. Fix the math.
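For reference, the three coordinate spaces, sketched for the simple
non-striped case (hypothetical helper; real striping only changes the
object-to-image mapping):

  #include <cstdint>

  struct BufferExtent { uint64_t off; uint64_t len; };

  // Object `objno` covers image bytes [objno * object_size,
  // (objno + 1) * object_size); the "buffer" offset is the image offset
  // made relative to the start of the requested diff range.
  BufferExtent object_to_buffer(uint64_t objno, uint64_t obj_off, uint64_t len,
                                uint64_t object_size, uint64_t diff_off) {
    uint64_t image_off = objno * object_size + obj_off;  // object -> image
    return { image_off - diff_off, len };                // image -> buffer
  }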
Sage Weil [Thu, 28 Mar 2013 16:26:11 +0000 (09:26 -0700)]
librbd: handle diff from clone
If we have a parent image, and the reference is from snap 0 (the beginning
of time), we need to look at the diff on the parent from the beginning of
time and report that when we get an ENOENT.
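A sketch of the fallback, with hypothetical stubs for the two underlying
queries:

  #include <cerrno>
  #include <cstdint>

  int list_child_object_diff(uint64_t, uint64_t) { return -ENOENT; }
  int list_parent_diff_from_start(uint64_t, uint64_t) { return 0; }

  // Diffing from snap 0 on a cloned image: ENOENT on the child means the
  // extent's history lives in the parent, so report the parent's diff from
  // the beginning of time instead.
  int diff_extent(uint64_t off, uint64_t len, uint64_t from_snap, bool has_parent) {
    int r = list_child_object_diff(off, len);
    if (r == -ENOENT && from_snap == 0 && has_parent)
      return list_parent_diff_from_start(off, len);
    return r;
  }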
Sage Weil [Wed, 27 Mar 2013 06:16:54 +0000 (23:16 -0700)]
qa: add rbd/diff_continuous.sh stress test
Stress test that does io on an image while we are mirroring a diff from
earlier snaps to a second copy. At the end, verify that all snaps have
matching content.
Sage Weil [Mon, 25 Mar 2013 21:14:50 +0000 (14:14 -0700)]
librbd: implement diff_iterate
Implement a diff_iterate() method that will iterate over an image and
report which extents vary between two snapshots (or a snapshot and the
head). The callback gets an extent and a flag indicating whether it is
full of data or is known to be zero in the ending snapshot.
Sage Weil [Tue, 26 Mar 2013 15:53:00 +0000 (08:53 -0700)]
osd: fix clone snap list for list-snaps
We need to return the list of snaps that each clone is defined for, not
the list of snaps we know may or may not exist globally over a similar
interval. This requires looking at the clone's obc, unfortunately.
Sage Weil [Tue, 26 Mar 2013 17:31:19 +0000 (10:31 -0700)]
osd: direct reads on SNAPDIR to either head or snapdir
The list_snaps operation needs to look at the SnapSet, and is logically
querying all revisions of the object. Make requests to SNAPDIR be
read-only, and grab the head or snapdir obc transparently (whichever one
exists). This allows us to list snaps when, say, the head does not
exist, but there are in fact snaps.
Sage Weil [Tue, 26 Mar 2013 04:18:47 +0000 (21:18 -0700)]
osd: do not include snaps with head on list_snaps()
If there is a sequence of snaps 1, 2, 3, 4, 5, and we have a clone
2 with [1,2], and the head reflects content at snap times [3,4,5], then
the snap_list should return
  clone 2 snaps [1,2]
  head snaps []
  seq 2
because it never saw a write after snap 2, and therefore has the same
content now as it did at snaps 3, 4, 5. If the SnapSet on the
object lists snaps 3,4,5, and the head exists, it actually means the
object was deleted between 2 and 3, and was recreated after 5:
  clone 2 snaps [1,2]
  head snaps []
  seq 5
The key to telling the two situations apart is the seq number on the
SnapSet (now included in the list_snaps reply), which tells us when the
last update was.
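The distinguishing rule is mechanical enough to sketch (types are
illustrative, not the actual reply structures):

  #include <cstdint>
  #include <vector>

  struct HeadEntry {
    std::vector<uint64_t> snaps;  // snaps explicitly listed for the head
    uint64_t seq;                 // last write recorded in the SnapSet
  };

  // The head's current content also stands for any snap taken after the
  // last write: with seq 2 the head covers snaps 3, 4, 5, while with seq 5
  // (and an empty snap list) it covers none of them.
  bool head_covers_snap(const HeadEntry& head, uint64_t snapid) {
    return snapid > head.seq;
  }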
Sage Weil [Mon, 1 Apr 2013 04:47:38 +0000 (21:47 -0700)]
rgw: fix warning
On a 64-bit arch, we still want to make sure it's a 32-bit value. Gcc is
too smart for us to just cast; it will still warn on a 32-bit arch that the
comparison is always true.
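One warning-free way to express such a check (illustrative, not the exact
rgw change): widen explicitly before comparing, so even when the value is
already a 32-bit type gcc compares two 64-bit values and stays quiet.

  #include <cstdint>
  #include <limits>
  #include <type_traits>

  template <typename T>
  bool fits_in_32_bits(T v) {
    static_assert(std::is_unsigned<T>::value, "sketch assumes unsigned input");
    return static_cast<uint64_t>(v) <= std::numeric_limits<uint32_t>::max();
  }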