In https://github.com/ceph/ceph/pull/23958 an OSD now pings the monitor
periodically if it is stuck at __wait_for_healthy__. But in the
above case the OSDs still consider themselves __active__ and
hence miss that fix.
Since these OSDs may still be able to contact the monitors (
otherwise there would be no way for them to be marked up again) and send
beacons continuously, we can simply get them out of the trap by
sharing some new maps with them.
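The map-sharing idea can be sketched as follows. This is a minimal illustration, not the actual Ceph code; `handle_stale_beacon` and `send_incremental` are hypothetical names:

```python
def handle_stale_beacon(beacon_epoch, latest_epoch, send_incremental):
    """If an up OSD's beacon carries an older map epoch than the monitor's
    latest, share the missing maps so the OSD can escape its stale map.

    Hypothetical sketch; names and signature are illustrative only."""
    if beacon_epoch < latest_epoch:
        # share every epoch the OSD is missing
        send_incremental(beacon_epoch + 1, latest_epoch)
        return True
    return False
```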
Sage Weil [Sat, 8 Sep 2018 00:34:00 +0000 (19:34 -0500)]
Merge PR #23449 into master
* refs/pull/23449/head:
osd/OSDMap: cleanup: s/tmpmap/nextmap/
qa/standalone/osd/osd-backfill-stats: fixes
osd/OSDMap: clean out pg_temp mappings that exceed pool size
mon/OSDMonitor: clean temps and upmaps in encode_pending, efficiently
osd/OSDMapMapping: do not crash if acting > pool size
Sage Weil [Fri, 31 Aug 2018 15:52:04 +0000 (10:52 -0500)]
qa/standalone/osd/osd-backfill-stats: fixes
Grep from the primary's log, not every osd's log.
For the backfill_remapped task in particular, after the pg_temp change it
just so happens that the primary changes across the pool size change and
thus two different primaries do (some) backfill. Fix that test to pass
the correct primary.
Other tests are unaffected as they do not (happen to) trigger a primary
change and already satisfied the (removed) check that only one OSD does
backfill.
Sage Weil [Mon, 6 Aug 2018 18:12:33 +0000 (13:12 -0500)]
osd/OSDMap: clean out pg_temp mappings that exceed pool size
If the pool size is reduced, we can end up with pg_temp mappings that are
too big. This can trigger bad behavior elsewhere (e.g., OSDMapMapping,
which assumes that acting and up are always <= pool size).
Fixes: http://tracker.ceph.com/issues/26866
Signed-off-by: Sage Weil <sage@redhat.com>
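The cleanup described above can be sketched roughly like this; an illustrative model only, not the OSDMap code:

```python
def clean_pg_temp(pg_temp, pool_size):
    """Drop pg_temp mappings whose acting set exceeds the (reduced) pool
    size, so downstream code that assumes acting <= pool size stays safe.

    Illustrative sketch; the real cleanup operates on OSDMap structures."""
    return {pgid: osds for pgid, osds in pg_temp.items()
            if len(osds) <= pool_size}
```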
Sage Weil [Mon, 6 Aug 2018 17:54:55 +0000 (12:54 -0500)]
mon/OSDMonitor: clean temps and upmaps in encode_pending, efficiently
- do not rebuild the next map when we already have it
- do this work in encode_pending, not create_pending, so we catch bad
values before they are published.
mon: test if gid exists in pending for prepare_beacon
If it does not, send a null map. Bug introduced by
624efc64323f99b2e843f376879c1080276e036f, which made preprocess_beacon
look only at the current fsmap (correctly); prepare_beacon relied on
preprocess_beacon doing that check on pending.
Running:
while sleep 0.5; do bin/ceph mds fail 0; done
is sufficient to reproduce this bug. You will see:
```
2018-09-07 15:33:30.350 7fffe36a8700 5 mon.a@0(leader).mds e69 preprocess_beacon mdsbeacon(24412/a up:reconnect seq 2 v69) v7 from mds.0 127.0.0.1:6813/2891525302 compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
2018-09-07 15:33:30.350 7fffe36a8700 10 mon.a@0(leader).mds e69 preprocess_beacon: GID exists in map: 24412
2018-09-07 15:33:30.350 7fffe36a8700 5 mon.a@0(leader).mds e69 _note_beacon mdsbeacon(24412/a up:reconnect seq 2 v69) v7 noting time
2018-09-07 15:33:30.350 7fffe36a8700 7 mon.a@0(leader).mds e69 prepare_update mdsbeacon(24412/a up:reconnect seq 2 v69) v7
2018-09-07 15:33:30.350 7fffe36a8700 12 mon.a@0(leader).mds e69 prepare_beacon mdsbeacon(24412/a up:reconnect seq 2 v69) v7 from mds.0 127.0.0.1:6813/2891525302
2018-09-07 15:33:30.350 7fffe36a8700 15 mon.a@0(leader).mds e69 prepare_beacon got health from gid 24412 with 0 metrics.
2018-09-07 15:33:30.350 7fffe36a8700 5 mon.a@0(leader).mds e69 mds_beacon mdsbeacon(24412/a up:reconnect seq 2 v69) v7 is not in fsmap (state up:reconnect)
```
in the mon leader log. The last line indicates the problem was safely handled.
Fixes: http://tracker.ceph.com/issues/35848
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
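The fix can be modelled as a simple membership check; names here are illustrative, and the real code operates on FSMap/MDSMap objects rather than a plain set:

```python
def prepare_beacon_reply(gid, pending_gids):
    """Sketch of the fix: consult the *pending* fsmap in prepare_beacon and
    reply with a null map when the gid is already gone (e.g. after
    `ceph mds fail`), instead of assuming preprocess_beacon checked it."""
    if gid not in pending_gids:
        return "null_map"  # daemon was removed from the pending fsmap
    return "ack"
```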
Sage Weil [Fri, 7 Sep 2018 20:55:21 +0000 (15:55 -0500)]
Merge PR #20469 into master
* refs/pull/20469/head:
osd/PG: remove warn on delete+merge race
osd: base project_pg_history on is_new_interval
osd: make project_pg_history handle concurrent osdmap publish
osd: handle pg delete vs merge race
osd/PG: do not purge strays in premerge state
doc/rados/operations/placement-groups: a few minor corrections
doc/man/8/ceph: drop enumeration of pg states
doc/dev/placement-groups: drop old 'splitting' reference
osd: wait for laggy pgs without osd_lock in handle_osd_map
osd: drain peering wq in start_boot, not _committed_maps
osd: kick split children
osd: no osd_lock for finish_splits
osd/osd_types: remove is_split assert
ceph-objectstore-tool: prevent import of pg that has since merged
qa/suites: test pg merging
qa/tasks/thrashosds: support merging pgs too
mon/OSDMonitor: mon_inject_pg_merge_bounce_probability
doc/rados/operations/placement-groups: update to describe pg_num reductions too
doc/rados/operations: remove reference to lpgs
osd: implement pg merge
osd/PG: implement merge_from
osdc/Objecter: resend ops on pg merge
osd: collect and record pg_num changes by pool
osd: make load_pgs remove message more accurate
osd/osd_types: pg_t: add is_merge_target()
osd/osd_types: pg_t::is_merge -> is_merge_source
osd/osd_types: adding or subtracting invalid stats -> invalid stats
osd/PG: clear_ready_to_merge on_shutdown (or final merge source prep)
osd: debug pending_creates_from_osd cleanup, don't use cbegin
ceph-objectstore-tool: debug intervals update
mgr/ClusterState: discard pg updates for pgs >= pg_num
mon/OSDMonitor: fix long line
mon/OSDMonitor: move pool created check into caller
mon/OSDMonitor: adjust pgp_num_target down along with pg_num_target as needed
mon/OSDMonitor: add mon_osd_max_initial_pgs to cap initial pool pgs
osd/OSDMap: set pg[p]_num_target in build_simple*() methods
mon/PGMap: adjust SMALLER_PGP_NUM warning to use *_target values
mon/OSDMonitor: set CREATING flag for force-create-pg
mon/OSDMonitor: start sending new-style pg_create2 messages
mon/OSDMonitor: set last_force_resend_prenautilus for pg_num_pending changes
osd: ignore pg creates when pool FLAG_CREATING is not set
mgr: do not adjust pg_num until FLAG_CREATING removed from pool
mon/OSDMonitor: add FLAG_CREATING on upgrade if pools still creating
mon/OSDMonitor: prevent FLAG_CREATING from getting set pre-nautilus
mon/OSDMonitor: disallow pg_num changes while CREATING flag is set
mon/OSDMonitor: set POOL_CREATING flag until initial pool pgs are created
osd/osd_types: add pg_pool_t FLAG_POOL_CREATING
osd/osd_types: introduce last_force_resend_prenautilus
osd/PGLog: merge_from helper
osd: no cache agent or snap trimming during premerge
osd: notify mon when pending PGs are ready to merge
mgr: add simple controller to adjust pg[p]_num_actual
mon/OSDMonitor: MOSDPGReadyToMerge to complete a pg_num change
mon/OSDMonitor: allow pg_num to adjusted up or down via pg[p]_num_target
osd/osd_types: make pg merge an interval boundary
osd/osd_types: add pg_t::is_merge() method
osd/osd_types: add pg_num_pending to pg_pool_t
osd: allow multiple threads to block on wait_min_pg_epoch
osd: restructure advance_pg() call mechanism
mon/PGMap: prune merged pgs
mon/PGMap: track pgs by state for each pool
osd/SnapMapper: allow split_bits to decrease (merge)
os/bluestore: fix osr_drain before merge
os/bluestore: allow reuse of osr from existing collection
os/filestore: (re)implement merge
os/filestore: add _merge_collections post-check
os: implement merge_collection
os/ObjectStore: add merge_collection operation to Transaction
Sage Weil [Tue, 14 Aug 2018 17:15:52 +0000 (12:15 -0500)]
osd: handle pg delete vs merge race
Deletion involves an awkward dance between the pg lock and shard locks,
while the merge prep and tracking is "shard down". If the delete has
finished its work we may find that a merge has since been prepped.
Unwinding the merge tracking is nontrivial, especially because it might
involve a second PG, possibly even a fabricated placeholder one. Instead,
if we delete and find that a merge is coming, undo our deletion and let
things play out in the future map epoch.
Sage Weil [Fri, 10 Aug 2018 13:50:42 +0000 (08:50 -0500)]
osd/PG: do not purge strays in premerge state
The point of premerge is to ensure that the constituent parts of the
target PG are fully clean. If there is an intervening PG migration and
one of the halves finishes migrating before the other, one half could
get removed and the final merge could result in an incomplete PG. In the
worst case, the two halves (let's call them A and B) could have started
out together on say [0,1,2], A moves to [3,4,5] and gets deleted from
[0,1,2], and then the final merge happens such that *all* copies of the PG
are incomplete.
We could construct a clever check that does allow removal of strays when
the sibling PG is also ready to go, but it would be complicated. Do the
simple thing. In reality, this would be an extremely hard case to hit
because the premerge window is generally very short.
Sage Weil [Fri, 3 Aug 2018 15:45:51 +0000 (10:45 -0500)]
osd: wait for laggy pgs without osd_lock in handle_osd_map
We can't hold osd_lock while blocking because other objectstore completions
need to take osd_lock (e.g., _committed_osd_maps), and those objectstore
completions need to complete in order to finish_splits. Move the blocking
to the top before we establish any local state in this stack frame since
both the public and cluster dispatchers may race in handle_osd_map and
we are dropping and retaking osd_lock.
Sage Weil [Wed, 1 Aug 2018 21:33:22 +0000 (16:33 -0500)]
osd: drain peering wq in start_boot, not _committed_maps
We can't safely block in _committed_osd_maps because we are being run
by the store's finisher threads, and we may have to wait for a PG to split
and then merge via that same queue and deadlock.
Do not hold osd_lock while waiting as this can interfere with *other*
objectstore completions that take osd_lock.
Sage Weil [Mon, 30 Jul 2018 14:40:35 +0000 (09:40 -0500)]
osd: kick split children
Ensure that we bring split children up to date to the latest map even in
the absence of new OSDMaps feeding in NullEvts. This is important when
the handle_osd_map (or boot) thread is blocked waiting for pgs to catch
up, but we also need a newly-split child to catch up (perhaps so that it
can merge).
Sage Weil [Tue, 31 Jul 2018 21:54:26 +0000 (16:54 -0500)]
osd: no osd_lock for finish_splits
This probably used to protect the pg registration; there is no need for
it now.
More importantly, having it here can cause a deadlock when we are holding
osd_lock and blocking on wait_min_pg_epoch(), because a PG may need to
finish splitting to advance and then merge with a peer. (The wait won't
block on *this* PG since it isn't registered in the shard yet, but it
will block on the merge peer.)
Sage Weil [Wed, 18 Apr 2018 13:01:19 +0000 (08:01 -0500)]
osd/osd_types: remove is_split assert
The problem is:
- the osd is at epoch 80
- we import pg 1.a as of e57
- 1.a and 1.1a merged in epoch 60-something
- we set up a merge now, but in should_restart_peering via advance_pg we
hit the is_split assert that the ps is < old_pg_num
We can meaningfully return false (this is not a split) for a pg that is
beyond pg_num.
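The fix can be modelled roughly like this. This is a simplified sketch of the split test, not the exact bitwise check Ceph uses:

```python
def is_split(old_pg_num, new_pg_num, ps):
    """Return whether the pg with placement seed `ps` splits when pg_num
    grows from old_pg_num to new_pg_num.

    Simplified sketch: instead of asserting ps < old_pg_num, return False
    for a pg beyond the old pg_num -- it cannot meaningfully be a split."""
    if ps >= old_pg_num:
        return False  # previously an assert; see the commit message above
    # a split happens if at least one child seed lands below new_pg_num
    return ps + old_pg_num < new_pg_num
```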
Sage Weil [Fri, 15 Jun 2018 15:53:51 +0000 (10:53 -0500)]
ceph-objectstore-tool: prevent import of pg that has since merged
We currently import a portion of the PG if it has split. Merge is more
complicated, though, mainly because COT is operating in a mode where it
fast-forwards the PG to the latest OSDMap epoch, which means it has to
implement any transformations to the PG (split/merge) independently.
Avoid doing this for merge.
Sage Weil [Fri, 6 Apr 2018 15:26:52 +0000 (10:26 -0500)]
osd: implement pg merge
- Revamps the split tracking infrastructure, and adds new tracking for
upcoming merges in consume_map. These are now unified into the same
identify_ method. These consume the new pg_num change tracking
infrastructure we just added in the prior commit.
- PGs that are about to merge have a new wait infrastructure, since all
sources and the target have to reach the target epoch before the merge
can happen.
- If one of the sources for a merge does not exist, we create an empty
dummy PG to merge with. This implies that the resulting merged PG will
be incomplete (and mostly useless), but it unifies the code paths.
- The actual merge (PG::merge_from) happens in advance_pg().
Fixes: http://tracker.ceph.com/issues/85
Signed-off-by: Sage Weil <sage@redhat.com>
Sage Weil [Fri, 27 Jul 2018 13:58:24 +0000 (08:58 -0500)]
osd/PG: implement merge_from
This is the building block that smooshes multiple PGs back into one. The
resulting combination PG will have no PG log. That means the sources
need to be clean and quiesced or else the result will end up being
marked incomplete.
Sage Weil [Sat, 7 Apr 2018 19:35:36 +0000 (14:35 -0500)]
mon/OSDMonitor: set CREATING flag for force-create-pg
In order to recreate a lost PG, we need to set the CREATING flag for the
pool. This prevents pg_num from changing in future OSDMap epochs until
*after* the PG has successfully been instantiated.
Note that a pg_num change in *this* epoch is fine; the recreated PG will
instantiate in *this* epoch, which is /after/ the split a pg_num in this
epoch would describe.
The new sharded wq implementation cannot handle a resent mon create
message and a split child already existing. This is a side effect of the
new pg create path instantiating the PG at the pool create epoch osdmap
and letting it roll forward through splits; the mon may be resending a
create for a pg that was already created elsewhere and split elsewhere,
such that one of those split children has peered back onto this same OSD.
When we roll forward our re-created empty parent it may split and find the
child already exists, crashing.
This is no longer a concern because the mgr-based controller for pg_num
will not split PGs until after the initial PGs are all created. (We
know this because the pool has the CREATED flag set.)
The old-style path had its own problem,
http://tracker.ceph.com/issues/22165. We would build the history and
instantiate the pg in the latest osdmap epoch, ignoring any split children
that should have been created between the pool create epoch and the
current epoch. Since we're now taking the new path, that is no longer
a problem.
Fixes: http://tracker.ceph.com/issues/22165
Signed-off-by: Sage Weil <sage@redhat.com>
Sage Weil [Fri, 6 Apr 2018 16:26:26 +0000 (11:26 -0500)]
mon/OSDMonitor: set last_force_resend_prenautilus for pg_num_pending changes
This will force pre-nautilus clients to resend ops when we are adjusting
pg_num_pending. This is a big hammer: for nautilus+ clients, we only have
an interval change for the affected PGs (the two PGs that are about to
merge), whereas this compat hack will do an op resend for the whole pool.
However, it is better than requiring all clients be upgraded to nautilus in
order to do PG merges.
Note that we already did the same thing for pre-luminous clients for
splits, so we've already inflicted similar pain in the past (and, to my
knowledge, have not seen any negative feedback or fallout from that).
Sage Weil [Sat, 7 Apr 2018 02:53:35 +0000 (21:53 -0500)]
mgr: do not adjust pg_num until FLAG_CREATING removed from pool
This is more reliable than looking at PG states because the PG may have
gone active and sent a notification to the mon (pg created!) and mgr
(new state!) but the mon may not have persisted that information yet.
Sage Weil [Sat, 7 Apr 2018 02:39:14 +0000 (21:39 -0500)]
mon/OSDMonitor: set POOL_CREATING flag until initial pool pgs are created
Set the flag when the pool is created, and clear it when the initial set
of PGs have been created by the mon. Move the update_creating_pgs()
block so that we can process the pgid removal from the creating list and
the pool flag removal in the same epoch; otherwise we might remove the
pgid but have no cluster activity to roll over another osdmap epoch to
allow the pool flag to be removed.
Previously, we renamed the old last_force_resend to
last_force_resend_preluminous and created a new last_force_resend for
luminous+. This allowed us to force preluminous clients to resend ops
(because they didn't understand the new pg split => new interval rule)
without affecting luminous clients.
Do the same rename again, adding a last_force_resend_prenautilus (luminous
or mimic).
Adjust the OSD code accordingly so it matches the behavior we'll see from
a luminous client.
Sage Weil [Fri, 13 Apr 2018 22:16:41 +0000 (17:16 -0500)]
osd/PGLog: merge_from helper
When merging two logs, we throw out all of the actual log entries.
However, we need to convert them to dup ops as appropriate, and merge
those together. Reuse the trim code to do this.
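The entries-to-dups conversion described above can be sketched like this. The entry layout and the trim policy here are illustrative assumptions, not the PGLog data structures:

```python
def merge_log_dups(entries_a, entries_b, max_dups):
    """Sketch of the merge_from idea: full log entries are thrown away, but
    each is first converted to a dup record (reqid -> version) so client op
    replay detection still works; the merged dups are then trimmed to a
    cap, much as the trim code would do."""
    dups = {}
    for entry in entries_a + entries_b:
        # keep the newest version seen for each request id
        reqid, version = entry["reqid"], entry["version"]
        if reqid not in dups or version > dups[reqid]:
            dups[reqid] = version
    # retain only the newest max_dups records; older ones are trimmed
    kept = sorted(dups.items(), key=lambda kv: kv[1])[-max_dups:]
    return dict(kept)
```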
Sage Weil [Sat, 17 Feb 2018 17:38:57 +0000 (11:38 -0600)]
osd: notify mon when pending PGs are ready to merge
When a PG is in the pending merge state it is >= pg_num_pending and <
pg_num. When this happens quiesce IO, peer, wait for activate to commit,
and then notify the mon that we are idle and safe to merge.
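The pending-merge window described above reduces to a simple predicate; a sketch, with `ps` standing for the pg's placement seed:

```python
def is_pending_merge(ps, pg_num_pending, pg_num):
    """A pg is in the pending-merge window when its placement seed is
    >= pg_num_pending but still < pg_num, i.e. it is slated to be merged
    away once IO is quiesced and activate has committed."""
    return pg_num_pending <= ps < pg_num
```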
Sage Weil [Fri, 6 Apr 2018 15:26:10 +0000 (10:26 -0500)]
mgr: add simple controller to adjust pg[p]_num_actual
This is a pretty trivial controller. It adds some constraints that were
obviously not there before when the user could set these values to anything
they wanted, but does not implement all of the "nice" stepping that we'll
eventually want. That can come later.
Splits:
- throttle pg_num increases, currently using the same config option
(mon_osd_max_creating_pgs) that we used to throttle pg creation
- do not increase pg_num until the initial pg creation has completed.
Merges:
- wait until the source and target pgs for merge are active and clean
before doing a merge.
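A single controller step under those constraints might look like the following. The step sizes and the one-pg-at-a-time merge are assumptions for illustration, not the exact mgr policy:

```python
def step_pg_num(current, target, max_step, initial_creation_done, merge_ready):
    """Sketch of the mgr controller's throttled convergence of pg_num
    toward pg_num_target, per the constraints listed above."""
    if target > current:
        if not initial_creation_done:
            return current  # do not split until initial pg creation finished
        return min(target, current + max_step)  # throttled increase
    if target < current and merge_ready:
        return current - 1  # merge only when source and target pgs are clean
    return current
```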
Sage Weil [Fri, 16 Feb 2018 03:25:32 +0000 (21:25 -0600)]
mon/OSDMonitor: allow pg_num to adjusted up or down via pg[p]_num_target
The CLI now sets the *_target values, imposing only the subset of constraints that
the user needs to be concerned with.
New "pg_num_actual" and "pgp_num_actual" properties/commands are added that allow
the underlying raw values to be adjusted. For the merge case, this sets
pg_num_pending instead of pg_num so that the OSDs can go through the
merge prep process.
A controller (in a future commit) will make pg[p]_num converge to pg[p]_num_target.
Sage Weil [Mon, 9 Jul 2018 22:22:58 +0000 (17:22 -0500)]
os/bluestore: fix osr_drain before merge
We need to make sure the deferred writes on the source collection finish
before the merge so that ops ordered via the final target sequencer will
occur after those writes.
Sage Weil [Sun, 8 Jul 2018 19:24:49 +0000 (14:24 -0500)]
os/bluestore: allow reuse of osr from existing collection
We try to attach an old osr at prepare_new_collection time, but that
happens before a transaction is submitted, and we might have a
transaction that removes and then recreates a collection.
Move the logic to _osr_attach and extend it to include reusing an osr
in use by a collection already in coll_map. Also adjust the
_osr_register_zombie method to behave if the osr is already there, which
can happen with a remove, create, remove+create transaction sequence.
Fixes: https://tracker.ceph.com/issues/25180
Signed-off-by: Sage Weil <sage@redhat.com>
Sage Weil [Sat, 4 Aug 2018 18:51:05 +0000 (13:51 -0500)]
os/filestore: (re)implement merge
Merging is a bit different than splitting, because the two collections
may already be hashed at different levels. Since lookup etc rely on the
idea that the object is always at the deepest level of hashing, if you
merge collections with different levels that share some common bit prefix
then some objects will end up higher up the hierarchy even though deeper
hashed directories exist.
osd/OSD: ping monitor if we are stuck at __waiting_for_healthy__
One of our clusters has encountered some network issues several days
ago and we've observed some OSDs were stuck at __waiting_for_healthy__
with an obsolete OSDMap(683) in hand(By contrast, the newest OSDMap
from the monitor side has been successfully bumped up to 1589):
```
2018-08-28 15:26:54.858892 7faa3869c700 1 osd.31 683 is_healthy false -- only 1/5 up peers (less than 33%)
2018-08-28 15:26:54.858909 7faa3869c700 1 osd.31 683 not healthy; waiting to boot
2018-08-28 15:26:55.859007 7faa3869c700 1 osd.31 683 is_healthy false -- only 1/5 up peers (less than 33%)
2018-08-28 15:26:55.859023 7faa3869c700 1 osd.31 683 not healthy; waiting to boot
2018-08-28 15:26:56.859122 7faa3869c700 1 osd.31 683 is_healthy false -- only 1/5 up peers (less than 33%)
2018-08-28 15:26:56.859151 7faa3869c700 1 osd.31 683 not healthy; waiting to boot
```
Since most heartbeat_peers of osd.31 were actually offline and osd.31 itself
was stuck at __waiting_for_healthy__, it was unable to refresh its osdmap
(which was required for contacting new up heartbeat_peers) and hence could
be stuck at __waiting_for_healthy__ forever.
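The health gate visible in the log above can be modelled as a ratio check; this just mirrors the logged condition ("only 1/5 up peers (less than 33%)"), and the exact threshold handling in Ceph may differ:

```python
def is_healthy(up_peers, total_peers, min_ratio=1.0 / 3):
    """Sketch of the heartbeat health gate: the OSD refuses to boot while
    fewer than min_ratio of its heartbeat peers are up."""
    if total_peers == 0:
        return True  # no peers to judge by
    return up_peers / total_peers >= min_ratio
```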
qa/tasks/cram: tasks now must live in the repository
Commit 0d8887652d53 ("qa/tasks/cram: use suite_repo repository for all
cram jobs") removed hardcoded git.ceph.com links, but as it turned out
it is still used for nightlies. There is no good way to accommodate
the different URL schemes, so let's get rid of URLs altogether.
ceph-volume lvm.batch use 'ceph' as the cluster name with filestore
Custom cluster names are currently broken on ceph-volume; this should get
addressed with http://tracker.ceph.com/issues/27210, which is out of
scope for these changes.