Loic Dachary [Fri, 9 Jan 2015 00:28:11 +0000 (01:28 +0100)]
Merge pull request #3220 from ceph/wip-mon-backports.firefly
mon: backports for #9987 against firefly
Reviewed-by: Joao Eduardo Luis <joao@redhat.com>
Reviewed-by: Sage Weil <sage@redhat.com>
Reviewed-by: Samuel Just <sjust@redhat.com>
Reviewed-by: Loic Dachary <ldachary@redhat.com>
Sage Weil [Tue, 23 Dec 2014 23:49:26 +0000 (15:49 -0800)]
osdc/Objecter: handle reply race with pool deletion
We need to handle this scenario:
- send request in epoch X
- osd replies
- pool is deleted in epoch X+1
- client gets map X+1, sends a map check
- client handles reply
-> asserts that no map checks are in flight
This isn't the best solution. We could infer that a map check isn't needed
since the pool existed earlier and doesn't now. But this is firefly and
the fix is no more expensive than the old assert.
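A minimal sketch of the tolerant path; the names here (check_latest_map_ops, Op) stand in for the Objecter's bookkeeping and are assumptions, not the exact Ceph code:

```cpp
#include <cstdint>
#include <map>

struct Op { std::uint64_t tid; };

// In-flight "check for the latest map" requests, keyed by op tid.
std::map<std::uint64_t, Op*> check_latest_map_ops;

void handle_reply(Op* op) {
  // Old firefly behavior: assert no map check is in flight for this op.
  // That assert fires in the race above, because the pool deletion in
  // epoch X+1 queued a map check before the valid reply arrived.
  auto p = check_latest_map_ops.find(op->tid);
  if (p != check_latest_map_ops.end())
    check_latest_map_ops.erase(p);  // tolerate the race: drop the moot check
  // ... continue normal reply processing ...
}
```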
Fixes: #10372
Signed-off-by: Sage Weil <sage@redhat.com>
Sage Weil [Mon, 24 Nov 2014 02:50:51 +0000 (18:50 -0800)]
crush/CrushWrapper: fix create_or_move_item when name exists but item does not
We were using item_exists(), which simply checks if we have a name defined
for the item. Instead, use _search_item_exists(), which looks for an
instance of the item somewhere in the hierarchy. This matches what
get_item_weightf() is doing, which ensures we get a non-negative weight
that converts properly to floating point.
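A sketch of the hierarchy search this relies on, under the simplifying assumption that a bucket is just a list of member item ids:

```cpp
#include <vector>

// Simplified CRUSH bucket: each bucket lists its member item ids.
struct Bucket { std::vector<int> items; };

// Sketch of _search_item_exists: the item truly exists only if some
// bucket in the hierarchy references it, regardless of whether a name
// is defined for it.
bool search_item_exists(const std::vector<Bucket>& buckets, int item) {
  for (const Bucket& b : buckets)
    for (int i : b.items)
      if (i == item)
        return true;
  return false;
}
```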
Backport: giant, firefly
Fixes: #9998
Reported-by: Pawel Sadowski <ceph@sadziu.pl>
Signed-off-by: Sage Weil <sage@redhat.com>
(cherry picked from commit 9902383c690dca9ed5ba667800413daa8332157e)
Sage Weil [Sat, 22 Nov 2014 01:47:56 +0000 (17:47 -0800)]
crush/builder: prevent bucket weight underflow on item removal
It is possible to set a bucket weight that is not the sum of the item
weights if you manually modify/build the CRUSH map. Protect against any
underflow on the bucket weight when removing items.
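A sketch of the clamped subtraction (simplified; CRUSH stores 16.16 fixed-point weights, reduced here to a bare unsigned):

```cpp
#include <cstdint>

// Clamp instead of subtracting blindly, so a hand-built map whose bucket
// weight is smaller than an item's weight cannot underflow the unsigned.
std::uint32_t bucket_weight_after_removal(std::uint32_t bucket_weight,
                                          std::uint32_t item_weight) {
  return bucket_weight < item_weight ? 0 : bucket_weight - item_weight;
}
```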
Dan Mick [Wed, 10 Dec 2014 21:19:16 +0000 (13:19 -0800)]
rados.py: remove Rados.__del__(); it just causes problems
Recent versions of Python contain a change to thread shutdown that
causes ceph to hang on exit; see http://bugs.python.org/issue21963.
As it turns out, this is relatively easy to avoid by not spawning
threads on exit, which is exactly what Rados.__del__() does by calling
shutdown(). I suspect, but haven't proven, that the problem is
that shutdown() tries to start() a threading.Thread() that never
makes it all the way back to signal start().
Also add a PendingReleaseNote and extra doc comments to clarify.
Loic Dachary [Fri, 14 Nov 2014 00:16:10 +0000 (01:16 +0100)]
common: do not omit shard when ghobject NO_GEN is set
Do not silence the display of shard_id when the generation is NO_GEN.
The JSON representation of erasure coded objects used by
ceph_objectstore_tool needs the shard_id to find the file containing
the chunk.
Minimal testing is added to ceph_objectstore_tool.py
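A sketch of the display rule, with NO_GEN/NO_SHARD as assumed sentinels rather than Ceph's exact definitions:

```cpp
#include <cstdint>
#include <iostream>

const std::uint64_t NO_GEN = UINT64_MAX;  // assumed sentinel values
const int NO_SHARD = -1;

struct ghobject { std::uint64_t gen; int shard; };

// Key the shard display on the shard itself, not on the generation, so
// EC objects with gen == NO_GEN still carry their shard_id.
std::ostream& operator<<(std::ostream& out, const ghobject& o) {
  if (o.gen != NO_GEN)
    out << "/" << o.gen;
  if (o.shard != NO_SHARD)
    out << "/" << o.shard;  // previously omitted whenever gen == NO_GEN
  return out;
}
```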
Samuel Just [Tue, 23 Sep 2014 22:52:08 +0000 (15:52 -0700)]
ReplicatedPG: don't move on to the next snap immediately
If we have a bunch of trimmed snaps for which we have no
objects, we'll spin for a long time. Instead, requeue.
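A sketch of the requeue idea, with the trim queue reduced to a plain deque:

```cpp
#include <cstdint>
#include <deque>

using snapid_t = std::uint64_t;

// Handle one trimmed snap per pass and requeue, instead of looping
// across many object-less snaps while holding the PG lock.
bool trim_next(std::deque<snapid_t>& snap_trimq, bool& requeue) {
  if (snap_trimq.empty())
    return false;
  snap_trimq.pop_front();          // this snap had no objects left
  requeue = !snap_trimq.empty();   // come back through the work queue
  return true;
}
```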
Fixes: #9487
Backport: dumpling, firefly, giant
Reviewed-by: Sage Weil <sage@redhat.com>
Signed-off-by: Samuel Just <sam.just@inktank.com>
(cherry picked from commit c17ac03a50da523f250eb6394c89cc7e93cb4659)
Sage Weil [Tue, 23 Sep 2014 23:21:33 +0000 (16:21 -0700)]
osd: initialize purged_snap on backfill start; restart backfill if change
If we backfill a PG to a new OSD, we currently neglect to initialize
purged_snaps. As a result, the first time the snaptrimmer runs it has to
churn through every deleted snap for all time, and to make matters worse
does so in one go with the PG lock held. This leads to badness on any
cluster with a significant number of removed snaps that experiences
backfill.
Resolve this by initializing purged_snaps when we finish backfill. The
backfill itself will clear out any stray snaps and ensure the object set
is in sync with purged_snaps. Note that purged_snaps on the primary
that is driving backfill will not change during this period as the
snaptrimmer is not scheduled unless the PG is clean (which it won't be
during backfill).
If by chance we interrupt backfill, go clean with other OSDs,
purge snaps, and then let this OSD rejoin, we will either restart
backfill (non-contiguous log) or the log will include the result of
the snap trim (the events that remove the trimmed snap).
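A sketch of the initialization, with pg_info_t pared down to the one field that matters here (Ceph's real purged_snaps is an interval set):

```cpp
#include <cstdint>
#include <set>

using snapid_t = std::uint64_t;
struct pg_info_t { std::set<snapid_t> purged_snaps; };

// On backfill completion, seed the local purged_snaps from the primary's
// info instead of leaving it empty, so the first snaptrimmer pass does
// not churn through every snap ever deleted.
void on_backfill_complete(pg_info_t& local, const pg_info_t& primary) {
  local.purged_snaps = primary.purged_snaps;
}
```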
Jason Dillaman [Tue, 18 Nov 2014 02:49:26 +0000 (21:49 -0500)]
librbd: protect list_children from invalid child pool IoCtxs
While listing child images, don't ignore error codes returned
from librados when creating an IoCtx. This will prevent seg
faults from occurring when an invalid IoCtx is used.
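A sketch of the check, with a stub standing in for librados' ioctx creation (which follows the convention of 0 on success, negative errno on failure):

```cpp
#include <cstdio>
#include <string>

// Stub for librados ioctx creation: 0 on success, negative errno on
// failure (the convention the real call follows).
int ioctx_create(const std::string& pool) {
  return pool == "deleted-pool" ? -2 /* -ENOENT */ : 0;
}

// Check the return code before touching the IoCtx; the old code ignored
// it and could dereference an invalid context while listing children.
int list_children_in_pool(const std::string& pool) {
  int r = ioctx_create(pool);
  if (r < 0) {
    std::fprintf(stderr, "error opening pool %s: %d\n", pool.c_str(), r);
    return r;  // propagate instead of segfaulting later
  }
  // ... list child images registered in this pool ...
  return 0;
}
```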
Fixes: #10123
Backport: giant, firefly, dumpling
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
(cherry picked from commit 0d350b6817d7905908a4e432cd359ca1d36bab50)
Loic Dachary [Thu, 9 Oct 2014 16:52:17 +0000 (18:52 +0200)]
ceph-disk: run partprobe after zap
Not running partprobe after zapping a device can lead to the following:
* ceph-disk prepare /dev/loop2
* links are created in /dev/disk/by-partuuid
* ceph-disk zap /dev/loop2
* links are not removed from /dev/disk/by-partuuid
* ceph-disk prepare /dev/loop2
* some links are not created in /dev/disk/by-partuuid
This is assuming there is a bug in the way udev events are handled by
the operating system.
Loic Dachary [Fri, 10 Oct 2014 08:23:34 +0000 (10:23 +0200)]
ceph-disk: encapsulate partprobe / partx calls
Add the update_partition function to reduce code duplication.
The action is made an argument: although it is always -a for now, it
will be -d when deleting a partition.
Use the update_partition function in prepare_journal_dev
Introduce ceph_erasure_code_non_regression to check and compare how an
erasure code plugin encodes and decodes content with a given set of
parameters. For instance, running it with --create will create an
encoded object and store it into a directory
along with the chunks, one chunk per file. The directory name is derived
from the parameters. The content of the object is a random pattern of 31
bytes repeated to fill the object size specified with --stripe-width.
The check function (--check) reads the object back from the file,
encodes it and compares the result with the content of the chunks read
from the files. It also attempts to recover from one or two erasures.
Chunks encoded by a given version of Ceph are expected to be encoded
exactly in the same way by all Ceph versions going forward.
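A sketch of the --check pass; the toy XOR encoder below is only a stand-in so the comparison is concrete, not the plugin's real algorithm:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy stand-in encoder: k data chunks plus one XOR parity chunk. Real
// plugins are far more elaborate; this only makes the check concrete.
std::vector<std::string> encode(const std::string& object, int k) {
  const size_t chunk = object.size() / k;
  std::vector<std::string> out;
  for (int i = 0; i < k; ++i)
    out.push_back(object.substr(i * chunk, chunk));
  std::string parity(chunk, '\0');
  for (int i = 0; i < k; ++i)
    for (size_t j = 0; j < chunk; ++j)
      parity[j] ^= out[i][j];
  out.push_back(parity);
  return out;
}

// The --check pass in spirit: re-encode the stored object and demand
// byte-identical chunks, so a future version can never silently change
// the encoding.
int main() {
  const std::string object(31 * 4, 'x');  // stand-in for the random pattern
  const auto stored = encode(object, 4);  // what --create wrote to files
  const auto fresh = encode(object, 4);   // what --check recomputes
  assert(fresh == stored);
}
```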
Sage Weil [Mon, 20 Oct 2014 20:55:33 +0000 (13:55 -0700)]
osd: discard rank > 0 ops on erasure pools
Erasure pools do not support read from replica, so we should drop
any rank > 0 requests.
This fixes a bug where an erasure pool maps to [1,2,3], temporarily maps
to [-1,2,3], sends a request to osd.2, and then remaps back to [1,2,3].
Because the 0 shard never appears on osd.2, the request sits in the
waiting_for_pg map indefinitely and causes slow request warnings.
This problem does not come up on replicated pools because all instances of
the PG are created equal.
Fix by only considering role == 0 for erasure pools as a correct mapping.
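A sketch of the drop decision; names are assumed, not the exact OSD code:

```cpp
// Drop a misdirected op on an EC pool instead of parking it in
// waiting_for_pg forever.
struct PoolInfo { bool erasure; };

bool should_discard(const PoolInfo& pool, int role) {
  // On an EC pool, shard/role 0 will never appear on this OSD, so
  // requeueing the op would leave it waiting indefinitely.
  return pool.erasure && role != 0;
}
```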
Fixes: #9835
Signed-off-by: Sage Weil <sage@redhat.com>
Sage Weil [Thu, 13 Nov 2014 01:04:35 +0000 (17:04 -0800)]
osd/OSDMap: add osd_is_valid_op_target()
Helper to check whether an osd is a given op target for a pg. This
assumes that for EC we always send ops to the primary, while for
replicated we may target any replica.
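A sketch of the helper's shape, simplified to the two cases described above:

```cpp
// For EC pools only the primary is a valid target; replicated pools may
// serve from any mapped replica.
struct pg_pool_t {
  bool erasure;
  bool is_erasure() const { return erasure; }
};

bool osd_is_valid_op_target(const pg_pool_t& pool, bool is_primary) {
  if (pool.is_erasure())
    return is_primary;  // EC ops always target the primary
  return true;          // any mapped replica will do for replicated pools
}
```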
Josh Durgin [Wed, 12 Nov 2014 02:16:02 +0000 (18:16 -0800)]
qa: allow small allocation diffs for exported rbds
The local filesystem may behave slightly differently. This isn't
foolproof, but seems to be reliable enough on rhel7 rootfs, where
exact comparison was failing.
Sage Weil [Sun, 25 May 2014 15:38:38 +0000 (08:38 -0700)]
osd: fix map advance limit to handle map gaps
The recent change in cf25bdf6b0090379903981fe8cee5ea75efd7ba0 would stop
advancing after some number of epochs, but did not take into consideration
the possibility that there are missing maps. In that case, it is impossible
to advance past the gap.
Fix this by increasing the max epoch as we go so that we can always get
beyond the gap.
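A sketch of the idea, assuming a set of cached map epochs: stretch the limit whenever the next available epoch sits past a gap:

```cpp
#include <algorithm>
#include <cstdint>
#include <set>

using epoch_t = std::uint32_t;

// Advance through the epochs we have maps for, at most `cap` at a time,
// but stretch the limit whenever the next stored map sits past a gap so
// the gap itself can never pin us in place.
epoch_t advance(const std::set<epoch_t>& have, epoch_t cur, epoch_t cap) {
  epoch_t max_epoch = cur + cap;
  while (cur < max_epoch) {
    auto next = have.upper_bound(cur);
    if (next == have.end())
      break;                                        // nothing newer yet
    if (*next > cur + 1)
      max_epoch = std::max(max_epoch, *next + 1);   // jump past the gap
    cur = *next;
  }
  return cur;
}
```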
John Spray [Fri, 7 Nov 2014 11:34:43 +0000 (11:34 +0000)]
tools: fix MDS journal import
Previously it only worked on fresh filesystems which
hadn't been trimmed yet, and resulted in an invalid
trimmed_pos when expire_pos wasn't on an object
boundary.
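A sketch of the boundary fix, assuming trimmed_pos must be object-aligned:

```cpp
#include <cstdint>

// Round the expiry position down to an object boundary; using a
// mid-object expire_pos directly is what produced the invalid
// trimmed_pos on journals that had already been trimmed.
std::uint64_t trimmed_pos_for(std::uint64_t expire_pos,
                              std::uint64_t object_size) {
  return expire_pos - (expire_pos % object_size);
}
```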
Sage Weil [Mon, 15 Sep 2014 22:29:08 +0000 (15:29 -0700)]
ceph-disk: mount xfs with inode64 by default
We did this forever ago with mkcephfs, but ceph-disk didn't. Note that for
modern XFS this option is obsolete, but for older kernels it was not the
default.
Yehuda Sadeh [Thu, 9 Oct 2014 17:20:27 +0000 (10:20 -0700)]
rgw: set length for keystone token validation request
Fixes: #7796
Backport: giant, firefly
Need to set content length to this request, as the server might not
handle a chunked request (even though we don't send anything).
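A sketch with libcurl (assumed here as the transport, with placeholder arguments): an explicit body size keeps the POST from being sent chunked:

```cpp
#include <curl/curl.h>

// Give the validation request an explicit body size so it is never sent
// chunked; the server may reject chunked encoding even for an empty body.
CURLcode validate_token(CURL* curl, const char* url,
                        const char* body, long body_len) {
  curl_easy_setopt(curl, CURLOPT_URL, url);
  curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
  curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, body_len);  // explicit length
  return curl_easy_perform(curl);
}
```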
Tested-by: Mark Kirkwood <mark.kirkwood@catalyst.net.nz>
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
(cherry picked from commit 3dd4ccad7fe97fc16a3ee4130549b48600bc485c)
Yehuda Sadeh [Tue, 19 Aug 2014 20:15:46 +0000 (13:15 -0700)]
rgw: subuser creation fixes
Fixes: #8587
There were a couple of issues: first, when trying to identify whether
the swift user exists, we weren't using the correct swift id. The second
problem is that we relied on the gen_access flag in the swift case,
where it doesn't really need to apply.
Greg Farnum [Thu, 23 Oct 2014 00:16:31 +0000 (17:16 -0700)]
client: cast m->get_client_tid() to compare to 16-bit Inode::flushing_cap_tid
m->get_client_tid() is 64 bits (as it should be), but Inode::flushing_cap_tid
is only 16 bits. 16 bits should be plenty to let the cap flush updates
pipeline appropriately, but we need to cast in the proper direction when
comparing these differently-sized versions. So downcast the 64-bit one
to 16 bits.
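A sketch of the comparison; without the downcast, integer promotion widens the 16-bit side and the compare fails once the tid exceeds 16 bits:

```cpp
#include <cstdint>

// flushing_cap_tid is deliberately narrow: 16 bits are plenty for the
// cap flushes in flight. Downcast the 64-bit message tid so the low 16
// bits are compared, instead of widening the 16-bit field.
bool tid_matches(std::uint64_t client_tid, std::uint16_t flushing_cap_tid) {
  return static_cast<std::uint16_t>(client_tid) == flushing_cap_tid;
}
```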
Samuel Just [Mon, 29 Sep 2014 22:01:25 +0000 (15:01 -0700)]
PG: release backfill reservations if a backfill peer rejects
Also, the full peer will wait until the rejection from the primary
to do a state transition.
Fixes: #9626
Backport: giant, firefly, dumpling
Signed-off-by: Samuel Just <sam.just@inktank.com>
(cherry picked from commit 624aaf2a4ea9950153a89ff921e2adce683a6f51)
Samuel Just [Mon, 20 Oct 2014 21:10:58 +0000 (14:10 -0700)]
PG: reset_interval_flush in set_last_peering_reset
If we have a change in the prior set, but not in the up/acting set, we go back
through Reset in order to reset peering state. Previously, we would reset
last_peering_reset in the Reset constructor. This did not, however, reset the
flush_interval, which caused the eventual flush event to be ignored and the
peering messages to not be sent.
Instead, we will always reset_interval_flush if we are actually changing the
last_peering_reset value.
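A sketch of the fix's shape (method bodies assumed):

```cpp
// Only when last_peering_reset actually changes do we also re-arm the
// interval flush; going through Reset without this left the flush event
// ignored and the peering messages unsent.
struct PG {
  unsigned last_peering_reset = 0;
  void reset_interval_flush() { /* re-arm the flush for this interval */ }
  void set_last_peering_reset(unsigned epoch) {
    if (last_peering_reset != epoch) {
      last_peering_reset = epoch;
      reset_interval_flush();
    }
  }
};
```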
Xiaoxi Chen [Wed, 20 Aug 2014 07:35:44 +0000 (15:35 +0800)]
CrushWrapper: pick a ruleset same as rule_id
Originally, in the add_simple_ruleset function, the ruleset_id
is not reused but the rule_id is. So after some adds/removes
of rules, a newly created rule is likely to have
ruleset != rule_id.
We don't want this to happen because we are trying to hold the
constraint that ruleset == rule_id.
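A sketch of the selection rule, with the rule table reduced to a rule_id -> ruleset map:

```cpp
#include <map>

// Rule ids are reused after removals; derive the ruleset from the same
// free slot so ruleset == rule_id always holds.
int pick_rule_slot(const std::map<int, int>& rules /* rule_id -> ruleset */) {
  int rule_id = 0;
  while (rules.count(rule_id))
    ++rule_id;      // first free rule id
  return rule_id;   // use it for both rule_id and ruleset
}
```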
Ma Jianpeng [Thu, 21 Aug 2014 07:10:46 +0000 (15:10 +0800)]
os/FileJournal: For journal-aio-mode, don't use aio when closing journal.
For journal-aio-mode, when closing the journal, write_finish_thread_entry
may exit before write_thread_entry, so no one waits for the last aios to
complete. On some platforms the journal header is then left corrupted.
To avoid this, don't use aio when closing the journal.
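A sketch of the shape of the fix (flag name assumed):

```cpp
// Force synchronous I/O while tearing the journal down, so the final
// header write cannot be an aio that nobody waits for once the
// completion thread has already exited.
struct FileJournal {
  bool aio = true;
  void stop_writer() {
    aio = false;  // final writes go out synchronously
    // ... drain the queue, write the header, join the writer thread ...
  }
};
```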
Fixes: #9073
Reported-by: Mark Kirkwood <mark.kirkwood@catalyst.net.nz>
Tested-by: Mark Kirkwood <mark.kirkwood@catalyst.net.nz>
Signed-off-by: Ma Jianpeng <jianpeng.ma@intel.com>
(cherry picked from commit e870fd09ce846e5642db268c33bbe8e2e17ffef2)
Ma Jianpeng [Wed, 23 Jul 2014 17:10:38 +0000 (10:10 -0700)]
os/FileJournal: Update the journal header when closing journal
When closing the journal, check must_write_header and update the
journal header if must_write_header is already set.
This avoids needless journal replay after restarting the osd.
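A sketch of the close path (names from the message, structure assumed):

```cpp
// Flush a pending header before the journal goes away, so the next
// start does not replay entries that were already committed.
struct Journal {
  bool must_write_header = false;
  void write_header() { /* persist start/seq pointers */ }
  void close() {
    if (must_write_header) {
      write_header();         // previously skipped, forcing a replay
      must_write_header = false;
    }
    // ... release the fd, free buffers ...
  }
};
```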
Signed-off-by: Ma Jianpeng <jianpeng.ma@intel.com>
Reviewed-by: Sage Weil <sage@redhat.com>
(cherry picked from commit 5bf472aefb7360a1fe17601b42e551df120badfb)
Sage Weil [Sun, 21 Sep 2014 22:56:18 +0000 (15:56 -0700)]
osd/ReplicatedPG: do not clone or preserve snapdir on cache_evict
If we cache_evict a head in a cache pool, we need to prevent
make_writeable() from cloning the head and finish_ctx() from
preserving the snapdir object.
Otherwise statfs may fail if mkfs hasn't been run yet or if the monitor
data directory does not exist. There are checks to account for the mon
data dir not existing and we should wait for them to clear before we go
ahead and check the fs stats.
Sage Weil [Thu, 18 Sep 2014 21:23:36 +0000 (14:23 -0700)]
mon: re-bootstrap if we get probed by a mon that is way ahead
During bootstrap we verify that our paxos commits overlap with the other
mons we will form a quorum with. If they do not, we do a sync.
However, it is possible we pass those checks, then fail to join a quorum
before the quorum moves ahead in time such that we no longer overlap.
Currently nothing kicks us back into a probing state to discover we need
to sync... we will just keep trying to call or join an election instead.
Fix this by jumping back to bootstrap if we get a probe that is ahead of
us. Only do this from states other than probing and syncing, where such
probes are common; it is only the active and electing states that matter
(and probably just electing!).
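A sketch of the trigger condition (names assumed): if the prober's oldest retained paxos version is already past our newest, no overlap is possible:

```cpp
#include <cstdint>

using version_t = std::uint64_t;

// If the prober has already trimmed past everything we have, our commits
// can no longer overlap a quorum: go back to bootstrap and sync.
bool needs_rebootstrap(version_t peer_first_committed,
                       version_t our_last_committed) {
  return peer_first_committed > our_last_committed + 1;
}
```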
Sage Weil [Wed, 13 Aug 2014 23:17:02 +0000 (16:17 -0700)]
mon/Paxos: share state and verify contiguity early in collect phase
We verify peons are contiguous and share new paxos states to catch peons
up at the end of the round. Do this each time we (potentially) get new
states via a collect message. This will allow peons to be pulled forward
and remain contiguous when they otherwise would not have been able to.
For example, if we got mon.1's collect first and then mon.2's second,
we would store the new txns
and then boot mon.1 out at the end because 15..25 is not contiguous with
28..40. However, with this change, we share 26..30 to mon.1 when we get
the collect, and then 31..40 when we get mon.2's collect, pulling them
both into the final quorum.
It also breaks the 'catch-up' work into smaller pieces, which ought to
smooth out latency a bit.
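A sketch of the per-collect sharing, with share_state left as a comment since its real signature is not shown here:

```cpp
#include <cstdint>

using version_t = std::uint64_t;
struct Peon { version_t last_committed; };

// Share missing states as each collect arrives instead of only at the
// end of the round, pulling a lagging peon (mon.1 above) forward in
// steps: 26..30 on its own collect, 31..40 after mon.2's.
void on_collect(Peon& peon, version_t our_last_committed) {
  if (peon.last_committed < our_last_committed) {
    // share_state(peon, peon.last_committed + 1, our_last_committed);
    peon.last_committed = our_last_committed;  // peon stays contiguous
  }
}
```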