Sage Weil [Wed, 24 Feb 2010 18:48:31 +0000 (10:48 -0800)]
filer: remove -> purge_range, and scale to large ranges
Redefine the remove interface to operate over a range of object
numbers, not a byte range, since we are removing objects. It is
the caller's responsibility to ensure they have the proper
range (by mapping from the ceph_file_layout).
Behave sanely when the range is large by allowing only a few
in-flight remove requests at once.
Eventually the objecter probably needs a more generalized request
throttling mechanism, but this will do for now.
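The in-flight cap described above can be sketched roughly like this. This is a minimal stand-alone model, not the actual Filer/Objecter code; `RangePurger`, `kick()`, and `on_remove_done()` are illustrative names.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch: purge object numbers [first, last) while keeping at most
// max_in_flight remove ops outstanding at any time.
struct RangePurger {
  uint64_t first, last;            // object number range [first, last)
  uint64_t next;                   // next object number to submit
  int in_flight = 0;               // currently outstanding removes
  int max_in_flight;
  std::vector<uint64_t> submitted; // record of issued removes (for the sketch)

  RangePurger(uint64_t f, uint64_t l, int max)
    : first(f), last(l), next(f), max_in_flight(max) {}

  // Issue removes until we hit the in-flight cap or run out of objects.
  void kick() {
    while (next < last && in_flight < max_in_flight) {
      submitted.push_back(next++);
      in_flight++;
    }
  }

  // Completion callback for one remove: drop the count and refill.
  void on_remove_done() {
    in_flight--;
    kick();
  }

  bool done() const { return next == last && in_flight == 0; }
};
```

Each completion triggers the next submission, so the cap holds without any global throttle.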
Sage Weil [Wed, 24 Feb 2010 05:08:56 +0000 (21:08 -0800)]
mds: make scatter_nudge actually nudge when replica asks
If we're not replicated, there is no need to twiddle the
lock state; we can just write out any dirty data, as when we
have delayed rstat propagation. If we are replicated, though,
and a replica asks us to nudge the lock, we had better nudge the
lock state!
Sage Weil [Tue, 23 Feb 2010 23:51:39 +0000 (15:51 -0800)]
mds: fix file purge race
Handle the case where a new inode ref appears while we are
purging an inode. If so, we just truncate it to 0, so that next
time we go through purge_stray() we don't have to do the work
over again.
This can happen if a client goes snooping in the stray dir (or
who knows what else!).
Sage Weil [Thu, 18 Feb 2010 23:05:33 +0000 (15:05 -0800)]
objectstore: simpler transaction encoding
Just concatenate operations to a bufferlist as we go. No
distinct decoding step is needed; we parse the transaction as it
is replayed/applied. This avoids the old decoded intermediate
representation overhead.
Since we still decode the old version, that code is still there,
but not used for anything new.
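The encoding scheme can be sketched like this. This is a simplified stand-in, not ObjectStore's real wire format: each operation is appended to a flat byte buffer as the transaction is built, and replay parses and applies in a single pass, with no decoded intermediate representation.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

enum Op : uint8_t { OP_WRITE = 1, OP_REMOVE = 2 };

struct Transaction {
  std::vector<uint8_t> bl;  // the concatenated encoding ("bufferlist")

  void append_u8(uint8_t v) { bl.push_back(v); }
  void append_u32(uint32_t v) {
    for (int i = 0; i < 4; i++) bl.push_back((v >> (8 * i)) & 0xff);
  }

  // Each call just appends an opcode and its arguments.
  void write(uint32_t obj, uint32_t len) {
    append_u8(OP_WRITE); append_u32(obj); append_u32(len);
  }
  void remove(uint32_t obj) {
    append_u8(OP_REMOVE); append_u32(obj);
  }
};

// Replay: parse and apply in one pass over the buffer.
struct Replayer {
  size_t pos = 0;
  uint32_t read_u32(const std::vector<uint8_t>& bl) {
    uint32_t v = 0;
    for (int i = 0; i < 4; i++) v |= uint32_t(bl[pos++]) << (8 * i);
    return v;
  }
  // Returns a log of applied ops, standing in for real side effects.
  std::vector<std::string> apply(const Transaction& t) {
    std::vector<std::string> log;
    while (pos < t.bl.size()) {
      uint8_t op = t.bl[pos++];
      if (op == OP_WRITE) {
        uint32_t obj = read_u32(t.bl), len = read_u32(t.bl);
        log.push_back("write " + std::to_string(obj) +
                      " len " + std::to_string(len));
      } else if (op == OP_REMOVE) {
        uint32_t obj = read_u32(t.bl);
        log.push_back("remove " + std::to_string(obj));
      }
    }
    return log;
  }
};
```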
Sage Weil [Tue, 16 Feb 2010 23:07:35 +0000 (15:07 -0800)]
osd: pool cleanups
missed this before:
- no need to initialize in create_pending(), constructor does that
- int32_t, not int
- pool_max while we're at it
- initialize pool_max in OSDMap constructor
Greg Farnum [Fri, 12 Feb 2010 21:21:22 +0000 (13:21 -0800)]
osd: Deal with pools being removed from OSDMap.
This potentially has issues, since pools are not removed from the map
until after all the PGs are removed (which is threaded, not inline with
map delivery). But Sage thinks it's okay, and the system keeps working
even if you delete a pool while benchmarking against it with rados.
Greg Farnum [Fri, 12 Feb 2010 00:57:23 +0000 (16:57 -0800)]
OSDMap: get_pg_pool now returns a pointer
This lets us return NULL if the pool isn't in the map, which is
needed functionality for pool deletion. Meanwhile, code which
expects the pool to exist will continue to cause a crash if it doesn't.
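The interface change can be sketched like this (a simplified stand-in; `pg_pool_t` and the map layout here are not the real OSDMap types):

```cpp
#include <cassert>
#include <cstdint>
#include <map>

struct pg_pool_t { int32_t size = 0; };

struct OSDMapLike {
  std::map<int32_t, pg_pool_t> pools;

  // New style: return a pointer, so a lookup on a deleted pool
  // yields NULL instead of asserting or fabricating an entry.
  const pg_pool_t* get_pg_pool(int32_t p) const {
    auto it = pools.find(p);
    return it == pools.end() ? nullptr : &it->second;
  }
};
```

Callers that assume the pool exists dereference unconditionally and still crash on a missing pool, which is the intended behavior described above.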
Sage Weil [Mon, 15 Feb 2010 21:47:41 +0000 (13:47 -0800)]
mds: infer 'follows' in journal_dirty_inode on non-head inodes
There are lots of callers to journal_dirty_inode that may
unwittingly be dealing with a non-head inode (e.g.
check_file_max). If the provided inode is snapped, infer an
appropriate follows value so as not to cow_inode() again.
Sage Weil [Fri, 12 Feb 2010 22:45:02 +0000 (14:45 -0800)]
osd: fix recovery requeue race
If a recovery op finished just as another recovery op was
being started, we could get into start_recovery_ops(), get
max = 0, and not start anything. Since the PG wasn't being
requeued for later, it would never recover. So, requeue if we
race and get max == 0.
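The fix can be modeled with a small sketch (illustrative names, not the real OSD work-queue code): when a PG draws zero recovery-op slots, it goes back on the queue instead of being forgotten.

```cpp
#include <cassert>
#include <deque>

// Simplified model: recovery_wq holds PGs waiting for recovery slots.
struct RecoveryModel {
  std::deque<int> recovery_wq;  // queued pg ids
  int started = 0;

  // Returns the number of ops started. With the fix, a pg that races
  // and gets max == 0 is requeued rather than silently dropped.
  int start_recovery_ops(int pg, int max) {
    if (max == 0) {
      recovery_wq.push_back(pg);  // requeue: try again later
      return 0;
    }
    started += max;  // stand-in for actually starting recovery ops
    return max;
  }
};
```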
that look a bit like multiple procs were racing into
join_reader(). Add an assert to catch that if it happens again,
and also wrap thread starts in pipe_lock to ensure we keep the
_running flags in sync with reality. Add in a few other
sanity checks too.
Sage Weil [Fri, 12 Feb 2010 21:35:57 +0000 (13:35 -0800)]
mon: note mds beacon times more carefully
We need to update the beacon timestamp even when we are updating
the mds state. Otherwise we can get caught in a busy loop
between marking an mds laggy and !laggy because the beacon stamp
never updates.
So even if we are updating and the reply will be slow, update
our timestamp, so we don't wrongly mark the mds laggy.
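The ordering matters: the stamp is recorded before the (possibly slow) state update. A minimal sketch, assuming simplified names (`BeaconTracker`, `handle_beacon`, `is_laggy` are not the real monitor API):

```cpp
#include <cassert>
#include <map>

struct BeaconTracker {
  std::map<int, double> last_beacon;  // mds rank -> last beacon stamp

  void handle_beacon(int rank, double stamp, bool state_change) {
    last_beacon[rank] = stamp;  // always note the time first
    if (state_change) {
      // ...propose the mdsmap update; the reply may be slow, but the
      // stamp above already prevents the laggy/!laggy busy loop.
    }
  }

  bool is_laggy(int rank, double now, double grace) const {
    auto it = last_beacon.find(rank);
    return it == last_beacon.end() || now - it->second > grace;
  }
};
```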
Sage Weil [Fri, 12 Feb 2010 21:27:49 +0000 (13:27 -0800)]
osd: bail out of interval loop completely
We're going backwards, so once this test fails, it always fails,
and we can break instead of continue. Any skipped intervals will
be pruned shortly anyway.
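The loop shape is roughly this (a sketch with made-up data, not the actual interval-walking code): scanning newest to oldest, the predicate is monotonic, so once it fails it fails for every older interval and `break` is safe.

```cpp
#include <cassert>
#include <vector>

// interval_ends is ordered oldest to newest; we walk it backwards.
int count_relevant(const std::vector<int>& interval_ends, int first_epoch) {
  int n = 0;
  for (auto it = interval_ends.rbegin(); it != interval_ends.rend(); ++it) {
    if (*it < first_epoch)
      break;  // was `continue`; all older intervals fail this test too
    n++;
  }
  return n;
}
```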
Sage Weil [Fri, 12 Feb 2010 21:26:19 +0000 (13:26 -0800)]
osd: always update up_thru if pg changes before going active
We already required this if prior PG members were down, so this
affected the 'failure' case. We now also require it for
non-failure PG changes (expansion, migration).
This fixes our maybe_went_rw calculation for prior PG intervals,
which is based on up_thru. If maybe_went_rw is false when the
pg actually went rw, we can lose (and have lost) data. But it is
not practical to calculate maybe_went_rw without up_thru being
consistently updated: determining whether a pg could have gone
active depends on knowing last_epoch_started at a previous point
in time, which determines how many prior intervals are considered,
which in turn determines whether up_thru would have been updated,
and so on. It is much simpler to update it all the time.
This should not impose a significantly greater cost, since we
already need it for the failure case. And in general the
migration/expansion/whatever case is no more common nor critical
than the failure case.
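The core test can be sketched as follows. This is a hedged simplification of the real PG code: a past interval could have gone read-write only if its primary's recorded up_thru reached that interval, which is why up_thru must be updated on every pg change for the calculation to be trustworthy.

```cpp
#include <cassert>

// Sketch: did this prior interval possibly serve writes? Only if the
// primary's up_thru was advanced to (or past) the interval's first epoch.
bool maybe_went_rw(int primary_up_thru, int interval_first) {
  return primary_up_thru >= interval_first;
}
```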
Sage Weil [Tue, 9 Feb 2010 18:27:08 +0000 (10:27 -0800)]
init-ceph: Required-start: $remote_fs
This ensures /usr is mounted before ceph daemons start. It seems like
this may be problematic for hosts that act as both servers and clients,
but nfs-kernel-server does the same, so whatev!
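The change amounts to adding $remote_fs to the init script's LSB header, something like the fragment below. Only the Required-Start/Required-Stop value comes from the commit; the other fields are typical LSB header boilerplate and may not match init-ceph exactly.

```shell
### BEGIN INIT INFO
# Provides:          ceph
# Required-Start:    $remote_fs
# Required-Stop:     $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start Ceph daemons
### END INIT INFO
```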