Sage Weil [Tue, 30 Oct 2012 20:19:30 +0000 (13:19 -0700)]
msg/SimpleMessenger: start accepter in ready()
Start the accepter thread when the first dispatcher is ready. This ensures
that there will be someone around to verify authorizers for incoming
connections, and means we have a bit less failure noise on the monitors
as a result.
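A minimal sketch of the idea, with assumed names (Accepter and the
lazy-start flag are illustrative, not the real SimpleMessenger members):
the listening thread starts only on the first ready() call, so an incoming
connection always finds a dispatcher able to verify its authorizer.

    #include <mutex>
    #include <thread>

    struct Accepter {
      std::thread listener;
      void start() { listener = std::thread([] { /* accept() loop */ }); }
      ~Accepter() { if (listener.joinable()) listener.join(); }
    };

    struct SimpleMessengerSketch {
      std::mutex lock;
      bool accepter_started = false;
      Accepter accepter;

      void ready() {  // called as each dispatcher becomes ready
        std::lock_guard<std::mutex> l(lock);
        if (!accepter_started) {
          accepter.start();  // only now do we begin accepting connections
          accepter_started = true;
        }
      }
    };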
Sage Weil [Tue, 30 Oct 2012 17:00:42 +0000 (10:00 -0700)]
msg/Pipe: only randomize start seq #'s if MSG_AUTH feature is present
The kernel client expects seq #'s to start at 1 or else it is unhappy.
So, only randomize these values if the MSG_AUTH feature is present--that is
the only time it matters anyway.
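A hedged sketch of the gating; the feature-bit value below is an
assumption for illustration, not the real CEPH_FEATURE_MSG_AUTH
definition from include/ceph_features.h:

    #include <cstdint>
    #include <random>

    // illustrative bit; the real CEPH_FEATURE_MSG_AUTH value differs
    constexpr uint64_t MSG_AUTH_FEATURE = 1ULL << 35;

    uint64_t initial_out_seq(uint64_t peer_features) {
      if (peer_features & MSG_AUTH_FEATURE) {
        std::mt19937_64 rng{std::random_device{}()};
        return rng() & 0x7fffffffULL;  // random, positive starting seq
      }
      return 0;  // first message goes out as seq 1, as the kernel expects
    }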
Sam Lang [Mon, 29 Oct 2012 15:30:01 +0000 (10:30 -0500)]
client: Fix ref counting double free with hardlink
Performing a hard link through the libcephfs interface causes
a double free on shutdown, because the Client::link call decrements
the reference on the parent (of the target) directory's inode. This fix
removes the put_inode(dir) call, matching the behavior of Client::ll_link.
Signed-off-by: Sam Lang <sam.lang@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
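The shape of the fix, as a hedged sketch with simplified refcounting
(not the real Client code):

    struct Inode { int ref = 1; };

    void put_inode(Inode *in) {
      if (--in->ref == 0)
        delete in;
    }

    void link_sketch(Inode *in, Inode *dir) {
      // ... send the MClientRequest for the link ...
      put_inode(in);
      // put_inode(dir);  // removed: no matching get was taken on dir,
      //                  // so this put dropped the directory ref twice
    }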
Dan Mick [Tue, 23 Oct 2012 04:15:51 +0000 (21:15 -0700)]
librbd: clip requests past end-of-image.
Rename check_io to clip_io, which can modify the passed-in length
to clamp it to the device size. This is expected behavior for
block-device emulation.
Call clip_io() in rbd_write(); we need to return the clipped length
there, even though aio_write() calls clip_io() as well (for the
direct path).
Signed-off-by: Dan Mick <dan.mick@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
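A sketch of the clamp under assumed signatures (the real clip_io takes an
ImageCtx; this stand-alone version just shows the arithmetic):

    #include <algorithm>
    #include <cstdint>

    int clip_io(uint64_t image_size, uint64_t off, uint64_t *len) {
      if (off >= image_size) {  // starts past the end: nothing to do
        *len = 0;
        return 0;
      }
      *len = std::min(*len, image_size - off);  // clamp to device size
      return 0;
    }

rbd_write() then returns the possibly shortened *len to its caller, which
is why it must call clip_io() itself rather than relying on the aio path.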
Jim Schutt [Thu, 27 Sep 2012 21:56:15 +0000 (15:56 -0600)]
PG: Do not discard op data too early
Under a sustained cephfs write load where the offered load is higher
than the storage cluster write throughput, a backlog of replication ops
that arrive via the cluster messenger builds up. The client message
policy throttler, which should be limiting the total write workload
accepted by the storage cluster, is unable to prevent it, for any
value of osd_client_message_size_cap, under such an overload condition.
The root cause is that op data is released too early, in op_applied().
If instead the op data is released at op deletion, then the limit
imposed by the client policy throttler applies over the entire
lifetime of the op, including commits of replication ops. That
makes the policy throttler an effective means for an OSD to
protect itself from a sustained high offered load, because it can
effectively limit the total, cluster-wide resources needed to process
in-progress write ops.
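A minimal sketch of the lifetime change (Throttle and OpRequest here are
simplified stand-ins): the bytes charged against the client policy
throttler are returned in the op's destructor rather than in op_applied().

    #include <cstdint>

    struct Throttle {
      uint64_t current = 0;
      void put(uint64_t bytes) { current -= bytes; }  // free budget
    };

    struct OpRequest {
      Throttle *throttle;
      uint64_t data_len;
      ~OpRequest() {
        throttle->put(data_len);  // was released early, in op_applied()
      }
    };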
Sage Weil [Wed, 24 Oct 2012 21:41:38 +0000 (14:41 -0700)]
osdc/ObjectCacher: set complete flag when we observe ENOENT
If we observe an ENOENT on a read, set the complete flag. Any dirty
buffers we have will still be in memory, even if the writes are in flight,
because the TX state remains pinned until the writes commit. Writes cannot
proceed faster than reads, even though reads may proceed faster than
writes.
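A sketch of where the flag gets set (names are illustrative):

    #include <cerrno>

    struct ObjectSketch { bool complete = false; };

    void handle_read_reply(ObjectSketch *ob, int r) {
      if (r == -ENOENT)
        ob->complete = true;  // the cache now fully describes the object;
                              // dirty/TX buffers for it remain pinned
      // ... apply the read result to the buffer heads ...
    }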
Sage Weil [Wed, 24 Oct 2012 19:48:02 +0000 (12:48 -0700)]
osdc/ObjectCacher: refresh iterator in read apply loop
The p iterator points to the next bh, but try_merge_bh() at the end of the
loop might merge that into our result and invalidate the iterator. Fix
this by repeating the lookup on each pass through the loop.
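The pattern, sketched with std::map standing in for the bh map (the real
code iterates ObjectCacher buffer heads):

    #include <cstdint>
    #include <map>

    struct BufferHead { uint64_t len = 0; };

    void apply_loop(std::map<uint64_t, BufferHead> &bh_map, uint64_t off,
                    uint64_t end) {
      while (off < end) {
        // refresh: redo the lookup instead of carrying `p` across a pass,
        // since try_merge_bh() may erase the element it points to
        auto p = bh_map.lower_bound(off);
        if (p == bh_map.end() || p->first >= end)
          break;
        // ... copy data into this bh, then try_merge_bh(p) ...
        off = p->first + p->second.len;  // bh lengths are nonzero, so the
                                         // loop always advances
      }
    }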
Sage Weil [Wed, 24 Oct 2012 19:44:25 +0000 (12:44 -0700)]
osdc/ObjectCacher: do read completions after assimilating read result
Wait until we have applied the entire read result to the cache before we
trigger any read completion events. This is a cleaner and safer approach
since we can be sure that the callback won't get blocked again on data we
have but haven't applied yet. It also fixes a crash I just observed where
the completion did a read, called trim(), and invalidated/destroyed the
iterator/bh p was referencing.
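A sketch of the two-phase structure (std::function stands in for the
cache's Context callbacks):

    #include <functional>
    #include <vector>

    void apply_read_result(std::vector<std::function<void()>> &completions) {
      // phase 1: assimilate the whole read result into the cache,
      //          collecting each finished waiter into `completions`
      //          instead of calling it on the spot
      // phase 2: only now run the callbacks; if one re-enters the cache
      //          (reads, trims), every bh it can touch is consistent
      for (auto &cb : completions)
        cb();
    }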
Sage Weil [Tue, 23 Oct 2012 16:18:04 +0000 (09:18 -0700)]
osdc/ObjectCacher: check lru_is_expireable() in can_close()
We assert that if can_close(), the Object isn't pinned in the LRU. This
assumes we did our get/put refcounting properly, such that the pins are
at least as restrictive as can_close().
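The invariant, as a small illustrative sketch:

    #include <cassert>

    struct ObjectSketch {
      int pins = 0;                // get()/put() refcount
      bool has_dirty_bh = false;
      bool can_close() const { return pins == 0 && !has_dirty_bh; }
      bool lru_is_expireable() const { return pins == 0; }
    };

    void maybe_close(const ObjectSketch &ob) {
      if (ob.can_close())
        assert(ob.lru_is_expireable());  // pins at least as strict
    }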
Sage Weil [Fri, 26 Oct 2012 18:30:06 +0000 (11:30 -0700)]
librbd: fix race in AioCompletions that are still being built
When caching is enabled, it is possible for the io completion to happen
faster than we call ->finish_adding_requests() (e.g., on cache read).
When that happens, the final read request completion doesn't see a
pending_count == 0 and thus doesn't do all the final buffer construction
that is necessary to return correct data. In particular, users will see
zeroed buffers. test_librbd_fsx is turning this up consistently after
several thousand ops with an image size of ~100MB and cloning disabled.
This was introduced with the extra logic added for striping.
Fix this by making a separate flag to indicate the completion is under
construction, and make sure we call complete() when both pending_count==0
and building==false.
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
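A hedged sketch of the flag (simplified locking; the real AioCompletion
carries much more state):

    #include <mutex>

    struct AioCompletionSketch {
      std::mutex lock;
      int pending_count = 0;
      bool building = true;  // set while requests are still being added

      void add_request() {
        std::lock_guard<std::mutex> l(lock);
        ++pending_count;
      }

      void finish_adding_requests() {
        std::lock_guard<std::mutex> l(lock);
        building = false;
        if (pending_count == 0)
          complete();  // every io already finished before we got here
      }

      void complete_request() {
        std::lock_guard<std::mutex> l(lock);
        if (--pending_count == 0 && !building)
          complete();  // last io, and the caller is done adding
      }

      void complete() { /* assemble buffers, invoke the user callback */ }
    };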
Noah Watkins [Thu, 25 Oct 2012 19:04:00 +0000 (12:04 -0700)]
client: double mount returns -EISCONN
Change error code from -EDOM to -EISCONN when mounting an already
mounted ceph_mount_info instance. The current convention is to return
-ENOTCONN when using the libcephfs interface in an unmounted state.
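Sketched against an assumed, simplified mount-state struct:

    #include <cerrno>

    struct MountStateSketch {
      bool mounted = false;
      int mount() {
        if (mounted)
          return -EISCONN;  // previously -EDOM; pairs with the -ENOTCONN
                            // returned for calls on an unmounted instance
        mounted = true;
        return 0;
      }
    };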
Sage Weil [Sun, 21 Oct 2012 21:54:23 +0000 (14:54 -0700)]
client: do not reset session state on reopened sessions
We can have a sequence on the MDS like:
- queue REQUEST_CLOSE to journal
- force_open, queue open to journal
- request_close acked, do nothing
- force_open acked, send OPEN
In this case, the MDS never actually closed the session, and all of the
state remained valid. The client, however, gets a spurious OPEN
message and resets the session state.
Fix this by not resetting that state.
A nicer fix might be to not send the second OPEN at all, but that would
require a REOPENING state on the MDS which is more complicated; this is
good enough. Also, that approach might not give the client an
appropriate opportunity to say "um, no..." and resend the
REQUEST_CLOSE.
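A sketch of the client-side behavior (state names are illustrative):

    enum class SessionState { OPENING, OPEN, CLOSING };

    struct MetaSessionSketch {
      SessionState state = SessionState::OPENING;
      // caps, sequence numbers, etc. remain valid across a spurious OPEN
    };

    void handle_session_open(MetaSessionSketch *s) {
      if (s->state == SessionState::OPEN)
        return;  // spurious OPEN from the force_open path: keep our state
      s->state = SessionState::OPEN;
      // ... set up state for a genuinely new session ...
    }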
Sage Weil [Sun, 21 Oct 2012 21:22:51 +0000 (14:22 -0700)]
mds: fix handling of cache_expire export
During export, between the warning stage and the final notify, we may
get cache expire messages because the replicas are sending to both us
and the new auth. This check should look for >= WARNING so that it
includes the EXPORTING states as well as the portion of WARNING after
we heard from that replica. This aligns the conditional with the
following assert such that they are properly mutually exclusive.
Fixes: #1527
Signed-off-by: Sage Weil <sage@inktank.com>
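The corrected conditional, sketched with illustrative state names:

    enum ExportState { DISCOVERING, FREEZING, WARNING, EXPORTING,
                       NOTIFYING };

    bool expire_expected_from_replica(ExportState s) {
      // >= WARNING covers EXPORTING as well as the tail end of WARNING,
      // once a replica has acked and may already talk to the new auth
      return s >= WARNING;
    }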
Sage Weil [Thu, 25 Oct 2012 00:00:01 +0000 (17:00 -0700)]
osd: fix populate_obc_watchers() assert
There is one case where populate_obc_watchers gets called when the object
is missing: during a revert. And in that case we *should* do the populate,
since all that is getting reverted is the object version.
Fixes: #3405
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Sam Just <sam.just@inktank.com>
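A hedged sketch of the relaxed assertion (flags simplified):

    #include <cassert>

    struct ObcSketch {
      bool missing = false;    // object not present locally
      bool reverting = false;  // a revert is rewriting the object version
    };

    void populate_obc_watchers(const ObcSketch &obc) {
      assert(!obc.missing || obc.reverting);  // missing OK on revert only
      // ... rebuild watcher state from object_info ...
    }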
Sam Lang [Tue, 23 Oct 2012 21:21:02 +0000 (16:21 -0500)]
vstart.sh: Use ./init-ceph instead of CEPH_BIN
This effectively reverts faddb80c4230acad2b4a17aa6cbf0c30ae8d24a9,
which prevented vstart.sh from being used in an environment where
CEPH_BIN pointed to a make install target.
Sage Weil [Tue, 23 Oct 2012 00:57:08 +0000 (17:57 -0700)]
librbd: use assert_exists() to simplify copyup check
Previously we would explicitly STAT the object to see if it exists before
sending the write to the OSD. Instead, send the write optimistically, and
assert that the object already exists. This avoids an extra round trip in
the optimistic/common case; the tradeoff is that the existence check in
the initial first-write case becomes more expensive, because we send the
data payload along with the guarded write.
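A usage sketch against the public librados C++ API (the surrounding
copyup logic is assumed and simplified):

    #include <cerrno>
    #include <rados/librados.hpp>

    int guarded_write(librados::IoCtx &io, const std::string &oid,
                      librados::bufferlist &bl, uint64_t off) {
      librados::ObjectWriteOperation op;
      op.assert_exists();  // guard: fail instead of creating the object
      op.write(off, bl);
      int r = io.operate(oid, &op);
      if (r == -ENOENT) {
        // first write to this object: do the copyup, then retry the write
      }
      return r;
    }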
Sage Weil [Tue, 23 Oct 2012 00:51:11 +0000 (17:51 -0700)]
librados: add assert_exists guard operation
Add a guard operation for writes that asserts that the object already
exists. To avoid requiring new functionality on the OSD side, implement
this by including a STAT operation, and discard the results on the
client side.
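A sketch of the composition idea (the op encoding is illustrative): the
guard is just a STAT prepended to the write transaction; if the object is
absent the STAT fails, failing the whole operation, and otherwise the
client discards the STAT's output.

    #include <vector>

    enum OpCode { OP_STAT, OP_WRITE /* , ... */ };

    struct OSDOpSketch { OpCode code; /* offset, length, payload ... */ };

    void assert_exists(std::vector<OSDOpSketch> &ops) {
      ops.push_back({OP_STAT});  // result ignored client-side
    }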
Sage Weil [Mon, 22 Oct 2012 21:14:09 +0000 (14:14 -0700)]
msg/Pipe: fix tight reconnect loop on connect failure
The fault() call in connect should not set onread=true since connect is
effectively a write path. This was forcing the writer() into a tight
loop that repeatedly would call connect(); not very polite.
In addition to changing that, we avoid treating this as a normal fault
(with the failure callback) and instead back off before retrying.
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Greg Farnum <greg@inktank.com>
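A sketch of the writer-side consequence (assumed structure; the real Pipe
state machine is more involved):

    #include <algorithm>
    #include <chrono>
    #include <thread>

    void on_connect_fault(bool &need_readable, int &backoff_ms) {
      need_readable = false;  // connect is a write path: don't set onread,
                              // which spun writer() straight back into
                              // connect()
      std::this_thread::sleep_for(std::chrono::milliseconds(backoff_ms));
      backoff_ms = std::min(backoff_ms * 2, 5000);  // back off rather than
                                                    // fault normally
    }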