Casey Bodley [Wed, 21 Dec 2016 19:32:04 +0000 (14:32 -0500)]
rgw: RGWMetaSyncShardCR drops stack refs on destruction
If the coroutine is canceled before collect_children() can clean up
all of its child stacks, those stack refs will leak. Store these
stacks as boost::intrusive_ptr so the refs are dropped automatically on
destruction.
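
A rough illustration of the pattern (RefCountedStack and MetaSyncShard below are invented stand-ins, not the real RGW coroutine types): holding each child stack through a boost::intrusive_ptr means the reference is released by the holder's destructor even when collect_children() never runs.

    #include <boost/intrusive_ptr.hpp>
    #include <atomic>
    #include <vector>

    // Hypothetical stand-in for the real RGW coroutine stack type.
    struct RefCountedStack {
      std::atomic<int> nref{0};
    };

    // intrusive_ptr hooks: take and drop a reference on the stack.
    inline void intrusive_ptr_add_ref(RefCountedStack* s) { ++s->nref; }
    inline void intrusive_ptr_release(RefCountedStack* s) {
      if (--s->nref == 0) delete s;
    }

    struct MetaSyncShard {
      // Each entry drops its ref automatically when the vector (and hence
      // the enclosing object) is destroyed, even if cleanup never ran.
      std::vector<boost::intrusive_ptr<RefCountedStack>> child_stacks;

      void spawn_child() {
        child_stacks.emplace_back(new RefCountedStack());  // takes a ref
      }
    };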
Matt Benjamin [Fri, 6 Jan 2017 17:30:42 +0000 (12:30 -0500)]
rgw_rados: add guard assert in add_io()
Use the iterator-returning insert operation on std::map and assert
that the insertion actually took place. As a side effect, this makes
the use of the inserted object record clearer.
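
A minimal sketch of the guard using invented names (pending_io, IoRecord) rather than the real rgw_rados types: std::map::insert() returns a pair whose second member reports whether a new element was inserted, and that is what the assert checks.

    #include <cassert>
    #include <map>
    #include <string>

    struct IoRecord { int state = 0; };   // hypothetical IO bookkeeping record

    std::map<std::string, IoRecord> pending_io;

    void add_io(const std::string& key, const IoRecord& rec) {
      // insert() returns {iterator, inserted}; the assert guards against a
      // key that is unexpectedly already present.
      auto result = pending_io.insert({key, rec});
      assert(result.second);
      // Working through the returned iterator makes later use of the
      // inserted record explicit.
      IoRecord& stored = result.first->second;
      stored.state = 1;   // e.g. mark the new IO as in flight
    }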
Samuel Just [Thu, 12 Jan 2017 20:44:44 +0000 (12:44 -0800)]
Objecter: resend pg commands on interval change
mark_lost_unfound* are now async since the rework, so we need
the Objecter to be able to resend on interval change. This
is preferable to somehow requeueing the Commands, because they
don't use the normal op queue.
Fixes: http://tracker.ceph.com/issues/18358
Signed-off-by: Samuel Just <sjust@redhat.com>
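
A hedged sketch of the mechanism with invented types (PgCommand, MiniObjecter), not the actual Objecter code: on each osdmap update, any in-flight pg command whose interval no longer matches the one recorded at send time is simply resent.

    #include <list>

    // Hypothetical bookkeeping for one in-flight pg command; not the real
    // Objecter::CommandOp.
    struct PgCommand {
      int sent_interval = 0;   // interval recorded when the command was sent
    };

    struct MiniObjecter {
      int current_interval = 0;     // bumped when a new osdmap arrives
      std::list<PgCommand*> inflight;

      void send(PgCommand* c) {
        c->sent_interval = current_interval;
        // ... encode and send to the pg's current primary ...
      }

      // Called after each osdmap update: pg commands don't sit in the normal
      // op queue, so resend any whose interval changed instead of requeueing.
      void handle_osdmap(int new_interval) {
        current_interval = new_interval;
        for (PgCommand* c : inflight) {
          if (c->sent_interval != current_interval)
            send(c);
        }
      }
    };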
Jason Dillaman [Tue, 3 Jan 2017 19:51:14 +0000 (14:51 -0500)]
librbd: add new lock_get_owners / lock_break_lock API methods
If the client application supports failover, let the application
force-break the current lock and blacklist the owner. This is
needed in case the current lock owner is still alive from the point of
view of librbd, but failover was required for a higher-level reason.
Fixes: http://tracker.ceph.com/issues/18327
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
(cherry picked from commit 9a5a8c75a025143cee6f92f3dbc3a12f2b6a9ad7)
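
A sketch of how a failover-capable application might use these methods; the method names and signatures below follow the commit message and recent librbd.hpp, so check the header of your release for the exact API.

    #include <rbd/librbd.hpp>
    #include <cerrno>
    #include <iostream>
    #include <list>
    #include <string>

    // Force-take over an image whose current lock owner may only *appear*
    // alive from librbd's point of view.  Sketch only.
    int force_acquire(librbd::Image& image) {
      rbd_lock_mode_t mode;
      std::list<std::string> owners;
      int r = image.lock_get_owners(&mode, &owners);
      if (r < 0 && r != -ENOENT)
        return r;
      for (const auto& owner : owners) {
        std::cout << "breaking lock held by " << owner << std::endl;
        r = image.lock_break(mode, owner);   // also blacklists the old owner
        if (r < 0)
          return r;
      }
      return image.lock_acquire(RBD_LOCK_MODE_EXCLUSIVE);
    }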
Jason Dillaman [Thu, 22 Dec 2016 20:00:23 +0000 (15:00 -0500)]
librbd: separate break lock logic into standalone state machine
The current lockers are now queried before the lock is attempted to
prevent any possible race conditions when one or more clients attempt
to break the lock of a dead client.
Sage Weil [Fri, 30 Dec 2016 17:22:42 +0000 (12:22 -0500)]
os/bluestore/BlueFS: fix reclaim_blocks
We need to return all extents to the caller. The current code
fails to assign *offset, so the result appears to be a single extent
starting at the beginning of the device, which is very wrong.
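
A simplified illustration of the intended contract, with invented names rather than the real BlueFS signature: every reclaimed extent must carry its actual offset as well as its length.

    #include <cstdint>
    #include <vector>

    struct Extent {
      uint64_t offset;   // real device offset; must not be left unset
      uint64_t length;
    };

    // Toy stand-in for the allocator the real code reclaims space from.
    std::vector<Extent> allocator_take(uint64_t want) {
      return { Extent{4ull * 1024 * 1024, want} };  // one extent at 4 MiB
    }

    // Return *all* reclaimed extents, offsets included.  The bug was
    // equivalent to emitting {0, e.length}: with the offset never assigned,
    // the caller saw what looked like one extent at the start of the device.
    std::vector<Extent> reclaim_blocks(uint64_t want) {
      std::vector<Extent> out;
      for (const Extent& e : allocator_take(want))
        out.push_back({e.offset, e.length});
      return out;
    }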
Samuel Just [Tue, 3 Jan 2017 18:50:22 +0000 (10:50 -0800)]
PrimaryLogPG: don't update digests for objects with mismatched names
I've only seen this on one cluster, but let's not issue repops during
scrub on objects where the object_info_t::soid value is not correct.
The cluster in question has been through many different non-release
kernels and osd versions, so the objects presumably came about due to an
old xfs or filestore bug. These mismatches recently became fatal when we
made filestore crash on ENOENT for setattrs; in the past, the cluster just
silently tolerated them.
http://tracker.ceph.com/issues/18409 is a larger feature to detect these
better and repair them automatically.
Related: http://tracker.ceph.com/issues/18409
Signed-off-by: Samuel Just <sjust@redhat.com>
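
A hedged sketch of the guard with simplified types (ObjectInfo and may_update_digest are invented, not the actual PrimaryLogPG code): before issuing a digest-update repop during scrub, compare the name stored in the object's metadata with the name it was scrubbed under and skip the update on mismatch.

    #include <iostream>
    #include <string>

    // Greatly simplified stand-in for object_info_t: only the name matters.
    struct ObjectInfo {
      std::string soid;   // the name recorded in the object's own metadata
    };

    // Only allow a digest-update repop if the recorded name matches the name
    // the object was scrubbed under.
    bool may_update_digest(const std::string& scrubbed_name,
                           const ObjectInfo& oi) {
      if (oi.soid != scrubbed_name) {
        std::cerr << "skipping digest update: oi.soid '" << oi.soid
                  << "' does not match '" << scrubbed_name << "'" << std::endl;
        return false;   // damaged object (e.g. from an old xfs/filestore bug)
      }
      return true;
    }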
huanwen ren [Tue, 27 Dec 2016 10:54:45 +0000 (10:54 +0000)]
mon/OSDMonitor: fixup sortbitwise flag warning
"ceph -s" does not report warning when using
command "ceph osd unset sortbitwise" to drop
sortbitwise flag.
we should use "osdmap.get_up_osd_features() &
CEPH_FEATURE_OSD_BITWISE_HOBJ_SORT"
instead of "(osdmap.get_features(CEPH_ENTITY_TYPE_OSD, NULL) &
CEPH_FEATURE_OSD_BITWISE_HOBJ_SORT)",
because osdmap.get_features only get local "features"
Sage Weil [Thu, 22 Dec 2016 18:05:22 +0000 (13:05 -0500)]
qa/tasks/workunit: clear clone dir before retrying checkout
If we check out ceph-ci.git and don't find a branch,
we'll try again from ceph.git. But the checkout will
already exist and the clone will fail, so we'll still
fail to find the branch.
The same can happen if a previous workunit task already
checked out the repo.
Fix by removing the repo before checkout (the first and
second times). Note that this may break if there are
multiple workunit tasks running in parallel on the same
role. That is already racy, so if it's happening, we'll
want to switch to using a truly unique clonedir for each
instantiation.
Fixes: http://tracker.ceph.com/issues/18336
Signed-off-by: Sage Weil <sage@redhat.com>
It so happens that it's not safe to assume the monmap will be in an
empty state upon decoding.
It turns out the MonClient will reuse the MonMap instance when decoding
the just-received map from the monitors. Should the monitors be on an
older version that does not support 'mon_info', this field will not be
decoded (after all, there's no field to decode from); but by this time,
the MonClient would already have built a monmap, which could have
populated 'mon_info' with temporary mon names from 'mon initial
members'.
Given the existing entries in 'mon_info', and the conflicting entries in
'mon_addr', we would end up asserting in 'sanitize_mons()'. This becomes
a non-issue if 'mon_info' is empty, as was unfortunately presumed.
Fixes: http://tracker.ceph.com/issues/18265
Signed-off-by: Joao Eduardo Luis <joao@suse.de>
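
A hedged sketch of the idea behind the fix, using a greatly simplified MonMap with only the fields relevant here: decoding a legacy map must reset per-instance state such as mon_info before rebuilding it, so a reused instance cannot carry stale entries that conflict with mon_addr.

    #include <map>
    #include <string>
    #include <vector>

    struct MiniMonMap {
      std::map<std::string, std::string> mon_info;  // name -> addr (new field)
      std::vector<std::string> mon_addr;            // legacy address list

      // Decoding a map from pre-'mon_info' monitors: the instance may be
      // reused by MonClient, with temporary names from 'mon initial members'
      // already in mon_info, so reset it instead of assuming it is empty.
      void decode_legacy(const std::vector<std::string>& addrs) {
        mon_info.clear();
        mon_addr = addrs;
        int i = 0;
        for (const auto& a : addrs)
          mon_info["mon" + std::to_string(i++)] = a;  // rebuild from mon_addr
      }
    };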
Samuel Just [Tue, 20 Dec 2016 17:47:41 +0000 (09:47 -0800)]
osd/: treat PINGs as RWORDERED
89fd030bf9436dc4e37cc3a0f935ec077455d9d5 switched them to show up
as reads to avoid logging them, but we still pipeline them with
reconnects. Thus, also force them to be rwordered.
Fixes: http://tracker.ceph.com/issues/18310
Signed-off-by: Samuel Just <sjust@redhat.com>
Sage Weil [Mon, 19 Dec 2016 22:04:26 +0000 (17:04 -0500)]
os/bluestore: preserve source collection cache during split
OSD split transactions look something like
mkcoll new
split old
...
omap_rmkey_range old
omap_setkeys old
omap_setkeys new
The last part splits the log into two pieces. The
problem is that the rmkey_range needs to wait on old
omap transactions to flush, and those are linked to the
old onode, and split clears the cache. The result is
that we don't wait, rmkeyrange leaves some recent pg log
keys behind, and on OSD restart we get an error because
the object doesn't belong to the (old) collection.
Fix this by preserving objects in the old collection and
only clearing out objects that are moving to the newly
split collections. The preserved objects include the pgmeta
object that we care about.
(Note that we are one step closer to preserving the
cache contents across the split, but not quite there
yet: at this point we don't have all of the destination
collections. A change in the ObjectStore interface is
probably needed to make that not be extremely awkward.)
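
A simplified sketch of the cache handling (OnodeCache and belongs_to_child are invented helpers, not the real BlueStore code): rather than clearing the whole source cache on split, keep the onodes that stay in the old collection and evict only those moving to the child.

    #include <map>
    #include <string>

    struct Onode { /* cached object metadata */ };

    // Hypothetical per-collection cache: object name -> cached onode.
    using OnodeCache = std::map<std::string, Onode>;

    // Toy placement rule standing in for the real child-PG test.
    bool belongs_to_child(const std::string& name) {
      return name.rfind("child/", 0) == 0;
    }

    // On split, evict only the objects moving to the new child collection and
    // keep the rest (including the pgmeta object) so that pending omap
    // flushes can still find their onodes in the old collection's cache.
    void split_cache(OnodeCache& old_coll_cache) {
      for (auto it = old_coll_cache.begin(); it != old_coll_cache.end(); ) {
        if (belongs_to_child(it->first))
          it = old_coll_cache.erase(it);
        else
          ++it;
      }
    }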
Sage Weil [Mon, 19 Dec 2016 15:03:44 +0000 (10:03 -0500)]
os/bluestore: include modified objects in flush list even if onode unchanged
We use the onode flush list so that we can ->flush() as
a barrier before doing any read/modify/write. For
example, omap_rmkeyrange will flush before reading to
see what keys to erase in order to ensure that any
previous inserts are applied to the db and we see them
and remove them.
However, some omap operations don't update the onode
itself, which means write_onode() doesn't get called and
we aren't put on this list.
Add a note_modified_object() helper that can be called
instead of write_onode() for those cases. That way we
get on the list and flush() works as expected.
We could have resolved this by just putting ourselves on
the dirty onode list, but in practice every OSD op is
writing omap keys to the pgmeta object and there is no
need to touch the onode key in this case, so doing so
would be a big regression.
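
A hedged sketch of the distinction, with simplified types rather than the real BlueStore TransContext: note_modified_object() only puts the onode on the flush list, while write_onode() additionally marks the onode key dirty; omap-only writes need the former but not the latter.

    #include <set>

    struct Onode { bool key_dirty = false; };

    // Simplified transaction context, not the real BlueStore code.
    struct MiniTransContext {
      std::set<Onode*> flush_list;     // onodes a later flush() must wait on
      std::set<Onode*> dirty_onodes;   // onodes whose own key must be rewritten

      // Omap-only modification: flush() must see this txn, but the onode key
      // itself is unchanged, so don't pay for rewriting it.
      void note_modified_object(Onode* o) {
        flush_list.insert(o);
      }

      // Onode metadata changed: rewrite its key *and* track it for flush().
      void write_onode(Onode* o) {
        o->key_dirty = true;
        dirty_onodes.insert(o);
        note_modified_object(o);
      }
    };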