Sage Weil [Fri, 28 Dec 2012 21:07:18 +0000 (13:07 -0800)]
log: broadcast cond signals
We were using a single cond, and only signalling one waiter. That means
that if the flusher and several logging threads are waiting, and we hit
a limit, the logger could signal another logger instead of the flusher,
and we could deadlock.
Similarly, if the flusher empties the queue, it might signal only a single
logger, and that logger could re-signal the flusher, and the other logger
could wait forever.
Instead, break the single cond into two: one for loggers, and one for the
flusher. Always signal the (one) flusher, and always broadcast to all
loggers.
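A minimal sketch of the resulting pattern, assuming a bounded queue (names
here are illustrative, not the actual common/log code):

    #include <condition_variable>
    #include <deque>
    #include <mutex>
    #include <string>

    class Log {
      std::mutex m;
      std::condition_variable queue_cond;  // loggers wait here when the queue is full
      std::condition_variable flush_cond;  // the single flusher waits here
      std::deque<std::string> q;
      bool stop = false;
      static const size_t max_entries = 100;

    public:
      void submit(std::string entry) {       // called by many logging threads
        std::unique_lock<std::mutex> l(m);
        while (q.size() >= max_entries)
          queue_cond.wait(l);                // wait for the flusher to drain
        q.push_back(std::move(entry));
        flush_cond.notify_one();             // always signal the (one) flusher
      }

      void flush_loop() {                    // the single flusher thread
        std::unique_lock<std::mutex> l(m);
        while (!stop) {
          while (q.empty() && !stop)
            flush_cond.wait(l);
          std::deque<std::string> batch;
          batch.swap(q);                     // drain the queue
          queue_cond.notify_all();           // always broadcast to all loggers
          l.unlock();
          // ... write batch to the log file without holding the lock ...
          l.lock();
        }
      }
    };

Because loggers and the flusher never share a condition variable, a signal
can no longer be "stolen" by the wrong kind of waiter.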
Backport: bobtail, argonaut
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Dan Mick <dan.mick@inktank.com>
Sam Lang [Fri, 28 Dec 2012 19:58:39 +0000 (13:58 -0600)]
ceph-fuse: Avoid doing handle cleanup in dtor
The CephFuse::Handle class needs the client
pointer to be valid for finalizing, so don't finalize
in the destructor (which doesn't get called until the
fuse handle leaves scope); instead, use a finalize method that
gets called explicitly before the client pointer is freed.
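The shape of the fix, as a generic sketch (names are illustrative, not the
actual CephFuse::Handle interface):

    struct Client {};  // stands in for the ceph client object

    class Handle {
      Client *client;  // borrowed; must still be valid during teardown
    public:
      explicit Handle(Client *c) : client(c) {}

      // Teardown that needs the client happens here, called explicitly...
      void finalize() {
        // ... unregister callbacks, release state via client ...
        client = nullptr;
      }

      // ...not in the destructor, which may only run after client is freed.
      ~Handle() {}
    };

    int main() {
      Client *client = new Client;
      {
        Handle h(client);
        // ... run ...
        h.finalize();   // explicit, while the client pointer is still valid
      }                 // ~Handle() is now harmless
      delete client;
      return 0;
    }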
Sam Lang [Thu, 6 Dec 2012 05:21:12 +0000 (23:21 -0600)]
ceph-fuse: Split main into init/main/finalize
With the invalidate callback enabled for fuse, the Client::unmount
call requires that the fuse channel and session objects remain valid
for performing the invalidate callbacks. This patch splits the
ceph_fuse_ll_main call into init, main, and finalize functions, so
finalization of the channel and session objects can be done after the
unmount completes.
The patch includes cleanup for the code in fuse_ll.cc to make it more
idiomatic C++ and to use the pimpl idiom to hide the fuse structures
within the CephFuse::Handle class.
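A sketch of the pimpl split (a hypothetical simplification, not the real
fuse_ll.cc interface):

    #include <memory>

    // Public face: no fuse types leak out of the .cc file.
    class CephFuse {
    public:
      CephFuse();
      ~CephFuse();
      int  init(int argc, const char **argv); // build fuse channel + session
      int  start();                           // run the fuse loop
      void finalize();                        // destroy channel + session
    private:
      class Handle;                           // pimpl: fuse structs live here
      std::unique_ptr<Handle> handle;
    };

    // In the .cc file, the pimpl holds the real fuse state:
    class CephFuse::Handle {
    public:
      // struct fuse_chan *ch; struct fuse_session *se;  (hidden from callers)
      int  init(int, const char **) { return 0; }
      int  start()                  { return 0; }
      void finalize()               {}
    };

    CephFuse::CephFuse() : handle(new Handle) {}
    CephFuse::~CephFuse() {}
    int  CephFuse::init(int argc, const char **argv) { return handle->init(argc, argv); }
    int  CephFuse::start()    { return handle->start(); }
    void CephFuse::finalize() { handle->finalize(); }

    int main(int argc, const char **argv) {
      CephFuse cfuse;
      cfuse.init(argc, argv); // channel + session created
      cfuse.start();          // serve requests
      // client unmount would run here and may still use channel/session
      cfuse.finalize();       // tear down only after the unmount completes
    }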
Signed-off-by: Sam Lang <sam.lang@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Sage Weil [Thu, 27 Dec 2012 19:12:33 +0000 (11:12 -0800)]
osd: drop 'osd recovery max active' back to previous default (5)
Having this too large means that queues get too deep on the OSDs during
backfill and latency is very high. In my tests, it also meant we generated
a lot of slow recovery messages just from the recovery ops themselves (no
client io).
Keeping this at the old default means we are no worse in this respect than
argonaut, which is a safe position to start from.
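For reference, the corresponding ceph.conf setting (5 is the restored
default, so this only needs to be written out when overriding it):

    [osd]
    osd recovery max active = 5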
Sage Weil [Sun, 16 Dec 2012 20:26:06 +0000 (12:26 -0800)]
mds: replace closed sessions on connect
If a connection comes in and there is a closed session attached, remove it.
This is probably a failure of an old session to get cleaned up properly,
and in certain cases it may even be from a different client (if the addr
nonce is reused). In that case this prevents further damage, although
a complete solution would also clean up the closed connection state if
there is a fault. See #3630.
This fixes a hang that is reproduced by running the libcephfs
Caps.ReadZero test in a loop; eventually the client addr is reused and
we are linked to an ancient Session with a different client id.
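The shape of the check, as a generic sketch (hypothetical types; not the
actual mds session-handling code):

    enum class SessionState { OPEN, CLOSED };

    struct Session {
      SessionState state;
      int client_id;
    };

    // Hypothetical connection context holding an attached session.
    struct Connection {
      Session *session = nullptr;
    };

    // On an incoming connection: a stale CLOSED session may still be
    // attached, possibly even belonging to a different client whose
    // addr nonce was reused.
    void handle_connect(Connection &con, int client_id) {
      if (con.session && con.session->state == SessionState::CLOSED) {
        delete con.session;     // drop the stale session...
        con.session = nullptr;  // ...so a fresh one is created below
      }
      if (!con.session)
        con.session = new Session{SessionState::OPEN, client_id};
    }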
Backport: bobtail
Signed-off-by: Sage Weil <sage@inktank.com>
Sage Weil [Sat, 3 Mar 2012 23:39:06 +0000 (15:39 -0800)]
mds: don't force in->first == dn->first
The fullbit sets it now. For multiversion inodes, its "first" can be in
the future, since this dentry may not have changed when the inode was
cowed in place. (OTOH, the dentry cannot have changed without the inode
also having changed.)
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
Yan, Zheng [Sat, 8 Dec 2012 14:43:32 +0000 (22:43 +0800)]
mds: fix race between send_dentry_link() and cache expire
An MDentryLink message can race with cache expire: when it arrives at
the target MDS, it's possible there is no corresponding dentry in
the cache. If this race happens, we should expire the replica inode
encoded in the MDentryLink message. But to expire an inode, the MDS
needs to know which subtree the inode belongs to, so modify the
MDentryLink message to include this information.
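Schematically, the message gains a field identifying the subtree
(an illustrative struct, not the real MDentryLink wire format):

    #include <cstdint>
    #include <string>

    // Stand-in for Ceph's dirfrag_t: identifies a directory fragment.
    struct dirfrag_t {
      uint64_t ino;
      uint32_t frag;
    };

    struct MDentryLink {
      dirfrag_t subtree;  // NEW: subtree root the replica inode belongs to,
                          // so the target MDS can expire the inode if the
                          // dentry is already gone from its cache
      dirfrag_t dirfrag;  // fragment containing the linked dentry
      std::string dn;     // dentry name
      bool is_primary;
      // payload: the encoded replica inode follows
    };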
Yan, Zheng [Mon, 10 Dec 2012 07:43:44 +0000 (15:43 +0800)]
mds: fix file-exists check in Server::handle_client_openc()
Creating a new file needs to be handled by the directory fragment's
auth MDS, while opening an existing file in write mode needs to be
handled by the corresponding inode's auth MDS. If a file is a remote
link, its parent directory fragment's auth MDS can be different from
the corresponding inode's auth MDS. So which MDS should handle a
create request depends on whether the file already exists.
handle_client_openc() calls rdlock_path_xlock_dentry() at the very
beginning, which always assumes the request needs to be handled by the
directory fragment's auth MDS. When handling a create request, if the
file already exists and is remotely linked to a non-auth inode,
handle_client_openc() falls back to handle_client_open(), and
handle_client_open() forwards the request because the MDS is not the
inode's auth MDS. Then, when the request arrives at the inode's auth
MDS, rdlock_path_xlock_dentry() is called and forwards the request
back.
Yan, Zheng [Sun, 9 Dec 2012 05:03:41 +0000 (13:03 +0800)]
mds: don't retry readdir request after issuing caps
If a remote linkage without an inode is encountered after some caps
have been issued, Server::handle_client_readdir() should send the
reply to the client immediately instead of retrying the request after
opening the remote dentry, because the MDS may want to revoke those
caps before it succeeds in opening the remote dentry.
Yan, Zheng [Sat, 8 Dec 2012 16:53:28 +0000 (00:53 +0800)]
mds: take export lock set before sending MExportDirDiscover
Migrator::export_dir() only checks whether it can lock the export
lock set, but does not actually take the locks. So someone else can
change the path to the exporting dir and confuse
Migrator::handle_export_discover().
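This is the classic check-then-act race. In generic terms (std::mutex
stands in for the MDS lock set; not the actual Migrator code):

    #include <mutex>
    #include <vector>

    // Buggy shape: probe the locks, then release them before acting.
    bool can_lock_all(std::vector<std::mutex*> &locks) {
      for (auto *m : locks) {
        if (!m->try_lock())
          return false;
        m->unlock();  // released immediately: the path can change after this
      }
      return true;    // "can lock" told us nothing durable
    }

    // Fixed shape: actually take and hold the locks across the operation.
    void lock_all(std::vector<std::mutex*> &locks) {
      for (auto *m : locks)
        m->lock();    // held until the export discover completes
    }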
Yan, Zheng [Sun, 9 Dec 2012 05:06:33 +0000 (13:06 +0800)]
mds: re-issue caps after importing caps
The imported caps may prevent unstable locks from entering stable
states, so we should call Locker::eval_gather() with the "first"
parameter set to true after caps are imported.
Yan, Zheng [Sat, 8 Dec 2012 07:07:53 +0000 (15:07 +0800)]
mds: fix error handling in MDCache::handle_discover_reply()
The error handling code in MDCache::handle_discover_reply() has two
main issues. First, MDCache::handle_discover_reply() does not wake
waiters if the dir_auth_hint in the reply message is equal to the
MDS's own nodeid; this can happen if a discover races with subtree
importing. Second, it checks the existence of the cached directory
fragment to decide whether it should take waiters from the inode or
from the directory fragment. That check is unreliable because subtree
importing can add directory fragments to the cache.
Yan, Zheng [Sat, 8 Dec 2012 05:59:38 +0000 (13:59 +0800)]
mds: set want_base_dir to false for MDCache::discover_ino()
When a frozen inode is encountered, MDCache::handle_discover() sends
the reply immediately if the reply message is not empty. When handling
"discover ino" requests, the reply message always contains the base
directory fragment. But the requestor already has the base directory
fragment, so the only effect of the reply is to wake the requestor and
make it send the same "discover ino" request again. The requestor thus
keeps sending "discover ino" requests but can't make any progress.
The fix is to set want_base_dir to false for MDCache::discover_ino().
After setting want_base_dir to false, we also need to update the code
that handles "discover ino" errors.
This patch also removes unused error handling code for flag_error_dn.
Yan, Zheng [Fri, 7 Dec 2012 07:59:56 +0000 (15:59 +0800)]
mds: no bloom filter for replica dir
We should delete a dir fragment's bloom filter after exporting the
fragment to another MDS. Otherwise the residual bloom filter may cause
problems if the MDS imports the fragment again later.
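Schematically (illustrative types, not the actual CDir interface):

    #include <memory>

    struct BloomFilter {};  // summarizes which dentries exist in the fragment

    struct DirFragment {
      std::unique_ptr<BloomFilter> bloom;  // only meaningful while we are auth
    };

    void finish_export(DirFragment &dir) {
      // We are no longer auth for this fragment; a stale filter would give
      // wrong answers if we later import the fragment back.
      dir.bloom.reset();
    }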
Yan, Zheng [Thu, 6 Dec 2012 01:28:46 +0000 (09:28 +0800)]
mds: properly mark dirfrag dirty
If predirty_journal_parents() does not propagate changes in a dir's
fragstat into the corresponding inode's dirstat, it should mark the
inode as dirfrag dirty. This happens when we modify dir fragments
that are auth subtree roots.
Yan, Zheng [Fri, 30 Nov 2012 00:53:33 +0000 (08:53 +0800)]
mds: allow handle_client_readdir() to fetch a freezing dir
At that point, the request already auth pins and locks some objects.
So CDir::fetch() should ignore the can_auth_pin check and continue
to fetch the freezing dir.
Sage Weil [Mon, 24 Dec 2012 03:59:04 +0000 (19:59 -0800)]
Merge branch 'wip-create-layout'
Reviewed-by: Greg Farnum <greg@inktank.com>
The functional tests for the create operations should add and specify
non-default pools, but we don't yet have a set of library methods (for
interacting with the monitor) to do that.
Samuel Just [Fri, 21 Dec 2012 23:39:50 +0000 (15:39 -0800)]
PG: Handle repair once in scrub_finish
We don't want to change missing sets during a chunky
scrub since it would cause !is_clean() and derail
the rest of the scrub. Instead, move the missing,
inconsistent, and authoritative sets into the scrubber
and add to them during scrub_compare_maps(). Then
handle repairing objects all at once in scrub_finish().
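The shape of the change, under hypothetical types (the real state lives
in the PG scrubber, with different names):

    #include <set>
    #include <string>

    struct Scrubber {
      // accumulated across all scrub chunks instead of acted on per chunk
      std::set<std::string> missing;
      std::set<std::string> inconsistent;
      std::set<std::string> authoritative;
    };

    // Per chunk: only record what the map comparison found.
    void scrub_compare_maps(Scrubber &s) {
      (void)s;  // ... compare replica scrub maps; append to s.missing,
                // s.inconsistent and s.authoritative, repairing nothing yet ...
    }

    // Once, at the end: repair everything in a single pass, so the missing
    // sets never change mid-scrub and !is_clean() can't derail later chunks.
    void scrub_finish(Scrubber &s, bool repair) {
      if (!repair)
        return;
      for (const std::string &obj : s.authoritative) {
        (void)obj;  // ... recover the object from the authoritative copy ...
      }
      s.missing.clear();
      s.inconsistent.clear();
    }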
Dan Mick [Fri, 21 Dec 2012 03:53:07 +0000 (19:53 -0800)]
import_export.sh: sparse import export
Add tests for:
- sparse import makes expected sparse images
- sparse export makes expected sparse files
- sparse import from stdin also creates sparse images
- import from partially-sparse file leads to partially-sparse image
- import from stdin with zeros leads to sparse
- export from zeros-image to file leads to sparse file
Signed-off-by: Dan Mick <dan.mick@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Dan Mick [Sat, 8 Dec 2012 06:57:06 +0000 (22:57 -0800)]
rbd: harder-working sparse import from stdin
Try to accumulate image-sized blocks when importing from stdin, even if
each read is shorter than requested; if we get a full block and it's
all zeroes, we can seek and make a sparse output file.
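The core loop, as a self-contained POSIX sketch (block size and output
filename are made up; this is not the actual rbd import code):

    #include <unistd.h>
    #include <fcntl.h>
    #include <algorithm>
    #include <vector>

    // Keep reading until we have a full block or hit EOF, since reads
    // from stdin/pipes may return fewer bytes than requested.
    static ssize_t read_full(int fd, char *buf, size_t len) {
      size_t got = 0;
      while (got < len) {
        ssize_t r = read(fd, buf + got, len - got);
        if (r < 0) return r;  // error
        if (r == 0) break;    // EOF
        got += r;
      }
      return got;
    }

    static bool all_zero(const char *buf, size_t len) {
      return std::all_of(buf, buf + len, [](char c) { return c == 0; });
    }

    int main() {
      const size_t block = 4 << 20;  // accumulate image-sized blocks
      std::vector<char> buf(block);
      int in = 0;                    // stdin
      int out = open("image.out", O_WRONLY | O_CREAT | O_TRUNC, 0644);

      ssize_t n;
      while ((n = read_full(in, buf.data(), block)) > 0) {
        if ((size_t)n == block && all_zero(buf.data(), n))
          lseek(out, n, SEEK_CUR);    // full zero block: seek -> hole
        else
          write(out, buf.data(), n);  // data (or a short tail): write it
      }
      close(out);
      // Note: a trailing hole needs a final ftruncate() to set the size.
      return 0;
    }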
Signed-off-by: Dan Mick <dan.mick@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Sage Weil [Sat, 22 Dec 2012 00:47:50 +0000 (16:47 -0800)]
osd: fix pg stat msgs vs timeout
We can get a pattern like so:
- new mon session
- after say 120 seconds, we decide to send a stats msg
- outstanding_pg_stats is finally true, we immediately time out (30 second
grace), and reconnect to a new mon
-> repeat
The problem is that we don't reset the last_sent timestamp when we
send, and that we do this check after sending instead of before. Fix
both.
This should resolve issue #3661, where osds that don't have pgs
updating are not sending stats messages to the mon to check in, and
are eventually getting marked down as a result.
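The fixed ordering, schematically (a steady clock stands in for the
osd's timekeeping; names are illustrative):

    #include <chrono>

    using Clock = std::chrono::steady_clock;

    struct MonClientState {
      Clock::time_point last_sent;
      bool outstanding_pg_stats = false;
    };

    void maybe_send_pg_stats(MonClientState &s, Clock::duration grace) {
      auto now = Clock::now();

      // Check for a stats timeout *before* sending, not after...
      if (s.outstanding_pg_stats && now - s.last_sent > grace) {
        // ... timed out: reopen the mon session ...
        s.outstanding_pg_stats = false;
      }

      // ...and reset last_sent every time we actually send.
      s.last_sent = now;
      s.outstanding_pg_stats = true;
      // ... build and send the stats message ...
    }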
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Samuel Just <sam.just@inktank.com>
John Wilkins [Sat, 22 Dec 2012 00:07:27 +0000 (16:07 -0800)]
doc: Updated the Configuration File section.
- Replaced ceph.conf with Ceph configuration to clarify
when running multiple clusters on the same hardware.
- Added a [client] entry so people know it can be set too.
- Updated existing auth example.
- Added an authentication section with a link to the cephx guide.
- Added section for running multiple clusters. Per Tommi.
Signed-off-by: John Wilkins <john.wilkins@inktank.com>
Sage Weil [Fri, 21 Dec 2012 21:44:19 +0000 (13:44 -0800)]
monc: only warn about missing keyring if we fail to authenticate
This avoids the situation where a librados or other user with the
default of 'cephx,none' and no keyring is authenticating against a
cluster with a required auth of 'none', and an annoying warning is
generated every time. Now we only print a helpful message if we
actually failed to authenticate.
Sage Weil [Fri, 21 Dec 2012 06:01:34 +0000 (22:01 -0800)]
osd: clear scrub state if queued scrub doesn't start
We set SCRUBBING when we queue a pg for scrub. If we dequeue and
call scrub() but abort for some reason (!active, degraded, etc.), clear
that state bit.
The bug is easily reproduced with 'ceph osd scrub N' during cluster
startup while PGs are peering; some PGs can get left in the scrubbing
state.
Add ceph osd ls to help; make the help for ceph osd tell N bench look
more like injectargs, which says <osd-id or *> to make it clear you
can benchmark all osds simultaneously.