Yan, Zheng [Tue, 28 Jan 2014 04:00:44 +0000 (12:00 +0800)]
mds: wake up dentry waiters when handling cache rejoin ack
The cache rejoin ack message may fragment dirfrags. In this case we should set the
'replay' parameter of adjust_dir_fragments() to false, so that CDir::merge/split
wake up any dentry waiters in the fragmented dirfrags.
Yan, Zheng [Mon, 20 Jan 2014 02:05:39 +0000 (10:05 +0800)]
mds: fix negative rstat assertion
When gathering rstat for a directory inode that is fragmented into
several dirfrags, the inode's rstat may temporarily become negative.
This is because, when splitting a dirfrag, the delta rstat is always
added to the first new dirfrag.
Yan, Zheng [Sun, 19 Jan 2014 10:37:22 +0000 (18:37 +0800)]
mds: avoid race between cache expire and pushing replicas
MDentryLink and MMDSFragmentNotify push replica inodes/dirfrags
to other MDSes. Both are racy because, by the time the target MDS
receives them, it may have expired the replicated inode/dirfrag's
ancestor. The race creates unconnected replica inodes/dirfrags.
Unconnected replicas are problematic for subtree migration
because the migrator sends MExportDirNotify according to the subtree
dirfrag's replica list; an MDS that contains unconnected replicas
may not receive the MExportDirNotify.
The fix is, for MDentryLink and MMDSFragmentNotify messages
that may be received later, to avoid trimming their parent
replica objects. If a null replica dentry is not readable, we
may receive an MDentryLink message later. If a replica inode's
dirfragtreelock is not readable, it's likely that some dirfrags
of the inode are being fragmented and we may receive an
MMDSFragmentNotify message later.
Yan, Zheng [Sun, 19 Jan 2014 04:13:06 +0000 (12:13 +0800)]
mds: fix scattered wrlock rejoin
If an unstable scatter lock is encountered when handling a weak cache
rejoin, don't remove the recovering MDS from the scatter lock's
gather list. The reason is that the recovering MDS may hold a rejoined
wrlock on the scatter lock. (Rejoined wrlocks were created when
handling strong cache rejoins from survivor MDSes.)
When composing the cache rejoin ack, if the recovering MDS is in the lock's
gather list, set the lock state for the recovering MDS to a compatible
unstable state.
Yan, Zheng [Sun, 19 Jan 2014 02:36:45 +0000 (10:36 +0800)]
mds: fixes for thrash fragment
This patch contains 3 changes:
- limit the number of in-progress fragment operations.
- reduce the probability of splitting small dirfrags.
- process the merge_queue when thrash_fragments is enabled.
Yan, Zheng [Sat, 18 Jan 2014 13:34:47 +0000 (21:34 +0800)]
mds: fix 'force dirfrags' during journal replay
For a rename operation, the null dentry is replayed first, which detaches
the inode from the FS hierarchy. Then the primary dentry is replayed,
which updates the inode and re-attaches it to the FS hierarchy.
We may call CInode::force_dirfrag() when updating the inode, but
CInode::force_dirfrag() doesn't work well while the inode is detached
from the FS hierarchy, because adjusting fragments may also adjust
the subtree map. The fix is to not detach the inode when replaying the
null dentry.
Yan, Zheng [Sat, 18 Jan 2014 12:31:36 +0000 (20:31 +0800)]
mds: journal dirfragtree change
Introduce a new flag, DIRTYDFT, to CDir and EMetaBlob::dirlump. The flag
indicates that the dirfrag is newly fragmented and the corresponding
dirfragtree change hasn't been propagated to the directory inode.
After fragmenting subtree dirfrags, make sure the DIRTYDFT flag is set
on the EMetaBlob::dirlumps that correspond to the resulting dirfrags.
The journal replay code uses the DIRTYDFT flag to decide whether the
dirfragtree is scattered dirty.
Yan, Zheng [Sat, 18 Jan 2014 00:27:02 +0000 (08:27 +0800)]
mds: allow fragmenting subtree dirfrags
We can't wait until an object becomes auth pinnable after freezing a
dirfrag/subtree, because doing so can cause deadlock. The current
dirfrag fragmenting code checks if the directory inode is auth pinnable,
then calls Locker::acquire_locks(). This avoids deadlock, but also forbids
fragmenting subtree dirfrags. We can get rid of the limitation by
using the 'nonblocking auth pin' mode of Locker::acquire_locks().
Yan, Zheng [Thu, 16 Jan 2014 00:15:08 +0000 (08:15 +0800)]
mds: freeze dir deadlock detection
Freezing a dir and freezing a tree have the same deadlock cases.
This patch adds freeze dir deadlock detection, which imitates
commit ab93aa59 (mds: freeze tree deadlock detection).
Yan, Zheng [Wed, 15 Jan 2014 08:49:23 +0000 (16:49 +0800)]
mds: improve freeze tree deadlock detection
The current code uses the start time of the freezing tree to detect deadlock.
It is better to check how long the auth pin count of the freezing tree
has stayed unchanged to decide whether there is a potential deadlock.
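As a rough standalone illustration (the struct and names below are hypothetical, not the actual CDir/MDCache code), the detection amounts to resetting a timer whenever the auth pin count changes and only flagging a potential deadlock once the count has stayed flat for the full timeout:

    #include <chrono>

    // Hypothetical sketch of the idea: a freeze is considered potentially
    // deadlocked only if its auth pin count has not moved for 'timeout',
    // regardless of how long ago freezing started.
    struct FreezeProgress {
      int last_auth_pins = -1;                            // last observed count
      std::chrono::steady_clock::time_point last_change;  // when it last changed

      // Returns true if the freeze looks deadlocked.
      bool check(int current_auth_pins, std::chrono::seconds timeout) {
        auto now = std::chrono::steady_clock::now();
        if (current_auth_pins != last_auth_pins) {
          last_auth_pins = current_auth_pins;  // progress: reset the clock
          last_change = now;
          return false;
        }
        return now - last_change >= timeout;   // no progress for too long
      }
    };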
Yan, Zheng [Wed, 15 Jan 2014 07:31:25 +0000 (15:31 +0800)]
mds: handle frag mismatch for cache expire
When sending MMDSFragmentNotify to peers, also replicate the new
dirfrags. This guarantees that peers get new replica nonces for the
new dirfrags, so it's safe to ignore mismatched/old dirfrags in
the cache expire message.
Yan, Zheng [Wed, 15 Jan 2014 04:13:52 +0000 (12:13 +0800)]
mds: fix open undef dirfrags
An undef inode may contain an undef dirfrag (*). When the undef inode is
opened, we should force fragment the undef dirfrag (*), because
we may open other dirfrags later and the undef dirfrag (*) would
overlap with them.
Yan, Zheng [Wed, 15 Jan 2014 03:12:43 +0000 (11:12 +0800)]
mds: properly set COMPLETE flag when merging dirfrags
Don't keep the COMPLETE flag when merging dirfrags during journal
replay, because it's inconvenient to check whether all the dirfrags
under the 'basefrag' are in the cache and complete. One special case
is when a newly created dirfrag gets fragmented and the fragment
operation then gets rolled back. The COMPLETE flag should be preserved in
this case because the dirfrag still doesn't exist in the object store.
Yan, Zheng [Tue, 14 Jan 2014 02:08:40 +0000 (10:08 +0800)]
mds: fix MDCache::adjust_subtree_after_rename()
Process subtree dirfrags first, then process nested dirfrags, because
the code that processes nested dirfrags treats unprocessed subtree
dirfrags as child directories' dirfrags.
Yan, Zheng [Tue, 14 Jan 2014 01:07:16 +0000 (09:07 +0800)]
mds: fix MDCache::get_force_dirfrag_bound_set()
Don't force dir fragments according to the subtree bounds in the resolve
message. The resolve message was not sent by the auth MDS of these
subtree bound dirfrags.
Yan, Zheng [Mon, 13 Jan 2014 06:52:20 +0000 (14:52 +0800)]
mds: handle frag mismatch for discover
When handling a discover dirfrag message, choose an approximate frag if
the requested dirfrag doesn't exist. When handling a discover dirfrag
reply, wake up the appropriate waiters if the reply is for a different
dirfrag than the requested one.
Yan, Zheng [Mon, 13 Jan 2014 03:32:38 +0000 (11:32 +0800)]
mds: use discover_path to open remote inode
MDCache::discover_ino() doesn't work well for directories that are
fragmented into several dirfrags, because MDCache::handle_discover()
doesn't know which dirfrag the inode lives in when the sender has
outdated frag information.
This patch replaces all uses of MDCache::discover_ino() with
MDCache::discover_path().
The current discover dirfrag code only allows discovering one dirfrag
at a time. This can cause deadlock if there are directories that are
fragmented into several dirfrags. For example:
mds.0                                   mds.1
-----------------------------------------------------------------
                                        freeze subtree (1.*) with bound (2.1*)
discover (2.0*) ->
                                        handle discover (2.0*), frozen tree, wait
                                        <- export subtree (1.*) to mds.0 with bound (2.1*)
discover (2.1*), wait
Commit 15a5d37a (mds: fix race between scatter gather and dirfrag export)
is incomplete; it doesn't handle the race where no fragstat/neststat is
gathered. The previous commit prevents scatter gathers while a dir is being
exported, which eliminates races of this type.
Yan, Zheng [Sun, 12 Jan 2014 10:51:18 +0000 (18:51 +0800)]
mds: acquire scatter locks when exporting dir
If the auth MDS of the subtree root inode is neither the exporter MDS
nor the importer MDS, and it gathers the subtree root's fragstat/neststat
while the subtree is being exported, it's possible that the exporter MDS
and the importer MDS are both auth for the subtree root, or that neither
is, at the time they receive the lock messages. So the auth MDS of the
subtree root inode may get no fragstat/neststat for the subtree root
dirfrag, or duplicates.
The fix is that, while a subtree is being exported, both the exporter MDS
and the importer MDS hold locks on the scatter locks of the subtree root
inode. The importer MDS tries to acquire the scatter locks when handling
the MExportDirPrep message. If it fails to acquire all the locks, it sends
a NACK to the exporter MDS, and the exporter MDS cancels the export when
it receives the NACK.
Yan, Zheng [Sun, 12 Jan 2014 09:31:57 +0000 (17:31 +0800)]
mds: acquire locks required by exporting dir
Start an internal MDS request to acquire the locks required for exporting
a dir. This is more reliable than using Locker::rdlock_take_set(), and it
also allows acquiring locks other than rdlocks.
Only use Locker::acquire_locks() in the first stage of exporting the dir
(before freezing the subtree). After the subtree is frozen, to minimize
the time the tree stays frozen, still use 'try lock' to re-acquire the
locks.
Yan, Zheng [Sun, 12 Jan 2014 07:13:42 +0000 (15:13 +0800)]
mds: introduce nonblocking auth pin
Add a parameter to Locker::acquire_locks() to enable nonblocking
auth pinning. If nonblocking mode is enabled and an object that can't
be auth pinned is encountered, Locker::acquire_locks() aborts the
MDRequest instead of waiting.
The nonblocking mode is required for cases where we want to acquire
locks after auth pinning a directory or freezing a dirfrag/subtree.
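As a sketch of the idea only (hypothetical types, not the real Locker::acquire_locks() signature): in nonblocking mode the acquire path gives up instead of waiting when an object cannot be auth pinned right now.

    #include <vector>

    // Purely illustrative: the shape of an acquire routine with a
    // nonblocking auth pin mode.
    struct Object {
      bool can_auth_pin() const { return pinnable; }
      void auth_pin()           { ++pins; }
      bool pinnable = true;
      int  pins = 0;
    };

    enum class AcquireResult { ok, would_block, aborted };

    AcquireResult acquire_auth_pins(std::vector<Object*>& objs, bool nonblocking) {
      for (Object* o : objs) {
        if (!o->can_auth_pin()) {
          if (nonblocking)
            return AcquireResult::aborted;     // caller drops/requeues the request
          return AcquireResult::would_block;   // blocking mode: caller waits on the object
        }
      }
      for (Object* o : objs)                   // everything pinnable: take the pins
        o->auth_pin();
      return AcquireResult::ok;
    }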
Yan, Zheng [Fri, 10 Jan 2014 02:58:41 +0000 (10:58 +0800)]
mds: allow acquiring wrlock and remote wrlock at the same time
A rename may move a file from one dirfrag to another dirfrag of the same
directory inode. If the two dirfrags belong to different auth MDSes, both
MDSes should hold wrlocks on the filelock/nestlock of the directory inode.
If a lock is in both the wrlocks list and the remote_wrlocks list, the
current Locker::acquire_locks() only acquires the local wrlock. The auth
MDS of the source dirfrag then doesn't hold the wrlock, so the slave
request of the operation may modify the dirfrag after its fragstat/neststat
have already been gathered, which corrupts the dirstat/neststat accounting.
Yan, Zheng [Sun, 16 Feb 2014 14:14:50 +0000 (22:14 +0800)]
ReplicatedPG: return no data if read size is trimmed to zero
The OSD should return no data if the read size is trimmed to zero by the
truncate_seq/truncate_size check. We can't rely on ObjectStore::read()
to do that, because it reads the entire object when the 'len' parameter
is zero.
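A simplified sketch of the trimming rule (illustrative only; the helper and its fields are hypothetical, not the ReplicatedPG code):

    #include <algorithm>
    #include <cstdint>

    // Clamp a read against a pending truncate and detect the "trimmed to
    // zero" case, which must return no data instead of being handed to a
    // store read that treats len == 0 as "read the whole object".
    struct ReadTrim {
      uint64_t offset;
      uint64_t length;
      bool return_empty;   // true -> reply with zero bytes, skip the store read
    };

    ReadTrim trim_read(uint64_t off, uint64_t len,
                       uint32_t op_truncate_seq, uint64_t op_truncate_size,
                       uint32_t obj_truncate_seq) {
      // Only trim if the client has seen a truncate the object hasn't applied yet.
      if (op_truncate_seq > obj_truncate_seq) {
        if (off >= op_truncate_size)
          return {off, 0, true};                      // entirely past the truncate point
        len = std::min(len, op_truncate_size - off);  // clamp to the truncate point
      }
      if (len == 0)
        return {off, 0, true};                        // trimmed to zero: return no data
      return {off, len, false};
    }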
Sage Weil [Sun, 16 Feb 2014 01:22:30 +0000 (17:22 -0800)]
osd: set client incarnation for Objecter instance
Each ceph-osd process's Objecter instance has a sequence
of tids that start at 1. To ensure these are unique
across all time, set the client incarnation to the
OSDMap epoch in which we booted.
Note that the MDS does something similar (except the
incarnation is actually the restart count for the MDS
rank, since the MDSMap tracks that explicitly).
Backport: emperor
Signed-off-by: Sage Weil <sage@inktank.com>
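A minimal sketch of the uniqueness argument (the type below is illustrative): a bare tid repeats after a restart, but the (incarnation, tid) pair does not, as long as the incarnation, assumed here to be the boot epoch, only increases.

    #include <cstdint>
    #include <tuple>

    // Illustration only: identify a request by (incarnation, tid). tids
    // restart at 1 after every daemon restart, but pairing them with a
    // monotonically increasing incarnation keeps the pair unique.
    struct RequestId {
      uint64_t incarnation;   // assumption: OSDMap epoch at boot
      uint64_t tid;           // per-process counter starting at 1
    };

    inline bool operator==(const RequestId& a, const RequestId& b) {
      return std::tie(a.incarnation, a.tid) == std::tie(b.incarnation, b.tid);
    }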
Sage Weil [Wed, 12 Feb 2014 20:39:25 +0000 (12:39 -0800)]
osd: schedule agent from a priority queue
We need to focus agent attention on those PGs that most need it. For
starters, full PGs need immediate attention so that we can unblock IO.
More generally, fuller ones will give us the best payoff in terms of
evicted data vs effort expended finding candidate objects.
Restructure the agent queue with priorities. Quantize evict_effort so that
PGs do not jump between priorities too frequently.
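As a toy example of the quantization (the helper below is hypothetical, not the agent code):

    #include <cstdio>

    // Map a continuous evict effort in [0,1] onto coarse priority steps so
    // that a PG whose effort wobbles slightly does not keep hopping between
    // queue priorities.
    unsigned quantize_effort(double effort, unsigned steps = 10) {
      if (effort < 0.0) effort = 0.0;
      if (effort > 1.0) effort = 1.0;
      unsigned bucket = static_cast<unsigned>(effort * steps);
      if (bucket == steps) bucket = steps - 1;   // effort == 1.0 lands in the top bucket
      return bucket;
    }

    int main() {
      // 0.61 and 0.63 land in the same bucket, so the PG keeps its priority.
      std::printf("%u %u\n", quantize_effort(0.61), quantize_effort(0.63));  // prints "6 6"
    }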
Sage Weil [Wed, 12 Feb 2014 00:25:51 +0000 (16:25 -0800)]
osd/ReplicatedPG: block requests to cache PGs when they are full
If we are full and get a write request to a new object, put the op on a
wait list. Wake up when the agent frees up some space.
Note that we do not block writes to existing objects. That would be a
more aggressive strategy, but it is difficult to know up front whether we
will increase the size of the object or not, so we just leave it be. I
suspect this strategy is "good enough".
Also note that we do not yet prioritize agent attention to PGs that most
need eviction (e.g., those that are full).
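A minimal sketch of the wait-list pattern (names and types are hypothetical, not the ReplicatedPG implementation):

    #include <deque>
    #include <functional>

    // Ops that would create a new object while the cache pool is full are
    // parked on a wait list; when the agent frees space, they are requeued.
    struct FullWaitList {
      std::deque<std::function<void()>> waiting;

      // Returns true if the op was parked and should not be processed now.
      bool maybe_block(bool pool_is_full, bool object_exists,
                       std::function<void()> requeue_op) {
        if (pool_is_full && !object_exists) {   // only writes to *new* objects block
          waiting.push_back(std::move(requeue_op));
          return true;
        }
        return false;
      }

      // Called after the agent evicts something and space is available again.
      void on_space_freed() {
        while (!waiting.empty()) {
          waiting.front()();                    // requeue the parked op
          waiting.pop_front();
        }
      }
    };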
Sage Weil [Tue, 11 Feb 2014 22:01:10 +0000 (14:01 -0800)]
osd/ReplicatedPG: redirect reads instead of promoting when full
If the cache pool is full, we are processing a read op, and we would
otherwise promote, redirect instead. This lets us continue to process the
op without blocking or making the cache pool any more full than it is.
Sage Weil [Sat, 8 Feb 2014 02:05:04 +0000 (18:05 -0800)]
osd/ReplicatedPG: do not flush omap objects to an EC base pool
The EC pool does not support omap content. If the caching/tiering agent
encounters such an object, just skip it. Use the OMAP object_info_t flag
for this.
Although legacy pools will have objects with omap that do not have this
flag set, no *cache* pools yet exist, so we do not need to worry about the
agent running across legacy content.
Sage Weil [Sat, 8 Feb 2014 02:03:19 +0000 (18:03 -0800)]
osd: add OMAP flag to object_info_t
Set a flag if we ever set or update OMAP content on an object. This gives
us an easy indicator for the cache agent (without actually querying the
ObjectStore) so that we can avoid trying to flush omap to EC pools.
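A rough sketch tying this and the previous change together (the struct and flag name are illustrative, not the real object_info_t):

    #include <cstdint>

    // Record "this object has ever had omap data" as a flag bit, and have
    // the tiering agent consult it before flushing to an erasure-coded base
    // pool, which cannot store omap.
    struct ObjectInfoSketch {
      static constexpr uint64_t FLAG_OMAP = 1ull << 0;
      uint64_t flags = 0;

      void note_omap_write() { flags |= FLAG_OMAP; }   // set on any omap set/update
      bool has_omap() const  { return flags & FLAG_OMAP; }
    };

    // Agent-side check: skip objects we cannot flush to the base pool.
    bool can_flush_to_base(const ObjectInfoSketch& oi, bool base_pool_is_ec) {
      if (base_pool_is_ec && oi.has_omap())
        return false;   // EC pools do not support omap; just skip this object
      return true;
    }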
Sage Weil [Mon, 3 Feb 2014 01:11:23 +0000 (17:11 -0800)]
osd/ReplicatedPG: do not choke on op-less flush OpContexts (from flush)
The agent initiates flush ops that don't have an OpRequest associated
with them. Make reply_ctx skip the actual reply message instead of
crashing if the flush request gets canceled (e.g., due to a race with
a write).
Sage Weil [Tue, 28 Jan 2014 01:57:53 +0000 (17:57 -0800)]
osd/ReplicatedPG: add slop to agent mode selection
We want to avoid a situation where the agent clicks on and off when the
system hovers around a utilization threshold. Particularly for trim,
the system can expend a lot of energy doing a minimal amount of work when
the effort level is low. To avoid this, enable when we are some amount
above the threshold, and do not turn off until we are the same amount below
the target.
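The idea is plain hysteresis; a minimal sketch (hypothetical helper, not the agent code):

    // Hysteresis ("slop") around a utilization threshold so the agent does
    // not flap on/off while utilization hovers near the target.
    struct AgentToggle {
      bool active = false;

      // 'ratio' is current utilization, 'target' the configured threshold,
      // 'slop' how far past the threshold we must move before changing state.
      bool update(double ratio, double target, double slop) {
        if (!active && ratio >= target + slop)
          active = true;                 // only start once clearly above the target
        else if (active && ratio <= target - slop)
          active = false;                // only stop once clearly below the target
        return active;
      }
    };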
Sage Weil [Tue, 28 Jan 2014 00:26:19 +0000 (16:26 -0800)]
osd/ReplicatedPG: initial agent to random hash position inside pg
When the agent starts, start at a random offset to ensure we get a more
uniform distribution of attention to all objects in the PG. Otherwise, we
will disproportionately examine objects at the "beginning" of the PG if we
are interrupted by peering or restarts or some other activity.
Note that if the agent_state is preserved, we do not forget our position,
which is also nice.
We *could* persist this position in the pg_info_t somewhere, but I am not
sure it is worth the effort.
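A tiny sketch of picking the starting offset (illustrative only; the remembered position would live alongside the agent state):

    #include <cstdint>
    #include <random>

    // Start each agent pass at a random position in the object hash space so
    // interrupted passes do not keep re-examining the same "front" of the PG.
    uint32_t random_agent_start() {
      std::mt19937 gen{std::random_device{}()};
      std::uniform_int_distribution<uint32_t> dist(0, UINT32_MAX);
      return dist(gen);
    }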
Sage Weil [Fri, 24 Jan 2014 22:35:41 +0000 (14:35 -0800)]
osd/ReplicatedPG: basic flush and evict agent functionality
This is very basic flush and evict functionality for the tiering agent.
The flush policy is very simple: if we are above the threshold and the
object is dirty, and not super young, flush it. This is not too braindead
of a policy (although we could clearly do something smarter).
The evict policy is pretty simple: evict the object if it is clean and
we are over our full threshold. If we are in the middle mode, try to
estimate how cold the object is based on an accumulated histogram of
objects we have examined so far, and decide to evict based on our
position in that histogram relative to our "effort" level.
Caveats:
* the histograms are not refreshed
* we aren't taking temperature into consideration yet, although some of
the infrastructure is there.
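A simplified sketch of the histogram-based decision (bucket semantics and names here are assumptions, not the agent's actual data structures):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Decide whether to evict an object by comparing its position in a
    // histogram of recently examined objects against the current effort
    // level (higher bucket index == colder object).
    struct ColdnessHistogram {
      std::vector<uint64_t> buckets;   // counts of examined objects per coldness bucket
      uint64_t total = 0;

      explicit ColdnessHistogram(std::size_t nbuckets) : buckets(nbuckets, 0) {}

      void note(std::size_t bucket) {  // record an examined object
        buckets.at(bucket)++;
        total++;
      }

      // Fraction of examined objects at least as cold as 'bucket'.
      double colder_or_equal_fraction(std::size_t bucket) const {
        if (total == 0) return 1.0;    // no data yet: treat everything as cold
        uint64_t n = 0;
        for (std::size_t i = bucket; i < buckets.size(); ++i)
          n += buckets[i];
        return static_cast<double>(n) / total;
      }
    };

    // evict_effort in [0,1]: 1.0 means evict nearly everything examined.
    bool should_evict(bool is_dirty, bool over_full_threshold,
                      const ColdnessHistogram& hist, std::size_t bucket,
                      double evict_effort) {
      if (is_dirty) return false;               // must be flushed first
      if (over_full_threshold) return true;     // full: evict any clean object
      // Middle mode: evict only objects in the coldest 'evict_effort' fraction.
      return hist.colder_or_equal_fraction(bucket) <= evict_effort;
    }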
Sage Weil [Tue, 4 Feb 2014 06:10:07 +0000 (22:10 -0800)]
osd/ReplicatedPG: add on_finish to OpContext
Add a callback hook for whenever an OpContext completes or cancels. We
are pretty sloppy here about the return values because our initial user
will not care, and it is unclear if future users will.
Sage Weil [Mon, 20 Jan 2014 18:28:43 +0000 (10:28 -0800)]
osd: add pg_pool_t::get_pg_num_divisor
A PG is not always an equally sized fraction of the total pool size due to
the use of ceph_stable_mod. Add a helper to return the fraction
(denominator) of a given pg based on the current pg_num value.
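The arithmetic can be illustrated with ceph_stable_mod (as defined in ceph_fs.h); the divisor helper below is a hedged sketch of the idea, not the pg_pool_t implementation:

    #include <cassert>
    #include <cstdio>

    // ceph_stable_mod: map a hash value x onto one of b PGs, where bmask is
    // the next power of two minus one.
    static inline int ceph_stable_mod(int x, int b, int bmask) {
      if ((x & bmask) < b)
        return x & bmask;
      else
        return x & (bmask >> 1);
    }

    // Sketch of the divisor: a PG that also receives a residue folded down
    // from the unsplit half of the hash space covers 2/(bmask+1) of the
    // pool; the others cover 1/(bmask+1). Return the denominator.
    unsigned pg_num_divisor(unsigned ps, unsigned pg_num, unsigned pg_num_mask) {
      if (pg_num == pg_num_mask + 1)
        return pg_num;                      // power of two: all PGs are equal
      unsigned half = (pg_num_mask + 1) / 2;
      // ps gets a second residue (ps + half) iff that residue exists and
      // was folded down because it is >= pg_num.
      if (ps < half && ps + half >= pg_num)
        return half;                        // "double share" PG
      return pg_num_mask + 1;               // "single share" PG
    }

    int main() {
      // pg_num = 12, mask = 15: count the hash residues landing in each PG
      // and check the count matches (mask+1)/divisor.
      const unsigned pg_num = 12, mask = 15;
      unsigned counts[12] = {0};
      for (int x = 0; x <= static_cast<int>(mask); ++x)
        counts[ceph_stable_mod(x, pg_num, mask)]++;
      for (unsigned ps = 0; ps < pg_num; ++ps)
        assert(counts[ps] == (mask + 1) / pg_num_divisor(ps, pg_num, mask));
      std::printf("divisor check passed\n");
    }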
By default, disallow adjustment of primary affinity unless the user has
opted in by adjusting their monitor config. This will avoid some user
pain because inadvertently setting the affinity will prevent older clients
from connecting to and using the cluster.