The seastar submodule's .gitmodules links to `../dpdk`, which is no longer present now that the dpdk submodule has been removed from ceph.git's .gitmodules.
```
<dwfreed> the ceph/seastar repo uses awful URLs for the submodules
<dwfreed> and those awful URLs are the real reason it's failing
<dwfreed> dgalloway: ^^^
<dwfreed> seastar's .gitmodules references repos in the parent directory, so that when it's checked out as a submodule of ceph, you don't download the repos twice (and git will probably also use references instead of duplicating the local .git); however, ceph doesn't have a submodule for dpdk anymore
<dwfreed> so seastar's referencing a dpdk repo that doesn't exist
<dgalloway> i think i follow. so you're suggesting revert https://github.com/ceph/ceph/commit/cb8087dfac31b8490fefdfca28d389b7b9901ef8 ?
<dwfreed> yep
<dwfreed> that'd be one way to fix it
...
<joshd> dgalloway: I'd suggest revert for now, and let the crimson folks figure out the longer term fix when they're back
```
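For reference, the relative stanza in question looks roughly like this in seastar's .gitmodules (a sketch based on the `../dpdk` URL quoted above, not the verbatim file):
```
[submodule "dpdk"]
	path = dpdk
	url = ../dpdk
```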
Signed-off-by: David Galloway <dgallowa@redhat.com>
Sage Weil [Thu, 30 Jan 2020 16:22:49 +0000 (10:22 -0600)]
qa/tasks/ceph: only re-request scrub on unscrubbed pgs
If we haven't scrubbed everything, we occasionally re-request scrub in case
the request was missed by the OSD (this can happen). But we were
re-requesting scrub on ALL pgs, and if they are scrubbed in a
semi-deterministic order and are slow, we may never get to the final
ones.
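A minimal sketch of the fix's shape; `get_pg_stats` and `request_scrub` are hypothetical stand-ins for the teuthology manager helpers, not the exact qa/tasks/ceph API:
```
# Sketch only: re-request scrub just for the PGs whose last_scrub_stamp has
# not advanced past the time we first asked, instead of nudging every PG.
def rerequest_unscrubbed(manager, asked_at):
    for pg in manager.get_pg_stats():          # hypothetical helper
        if pg['last_scrub_stamp'] < asked_at:  # still unscrubbed
            manager.request_scrub(pg['pgid'])  # hypothetical helper
```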
Sage Weil [Thu, 30 Jan 2020 15:28:38 +0000 (09:28 -0600)]
Merge PR #32878 into master
* refs/pull/32878/head:
cephadm: share code between 'pull' and 'inspect-image'
mgr/cephadm: upgrade: pull image after upgrade start, and for each host
cephadm: add inspect-image command
Patrick Donnelly [Thu, 30 Jan 2020 15:06:21 +0000 (07:06 -0800)]
Merge PR #32397 into master
* refs/pull/32397/head:
mds: Move StrayManager initializations to its header
mds: Remove extra spaces in StrayManager header.
mds: Reorganize structure members in StrayManager header
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Lenz Grimmer [Thu, 30 Jan 2020 13:43:46 +0000 (13:43 +0000)]
mgr/dashboard: Change project name to "Ceph Dashboard" (#32959)
mgr/dashboard: Change project name to "Ceph Dashboard"
Reviewed-by: Laura Paduano <lpaduano@suse.com>
Reviewed-by: Patrick Seidensal <pnawracay@suse.com>
Reviewed-by: Stephan Müller <smueller@suse.com>
Reviewed-by: Tatjana Dehler <tdehler@suse.com>
Reviewed-by: Volker Theile <vtheile@suse.com>
Sage Weil [Thu, 30 Jan 2020 13:01:47 +0000 (07:01 -0600)]
Merge PR #32972 into master
* refs/pull/32972/head:
python-common/ceph/deployment/translate: use 'prepare' instead of 'batch' for trivial case
qa/tasks/cephadm: pass short dev name to osd prepare
mgr/cephadm: fix detection of just-created OSDs
mgr/cephadm: properly indent raise conditions
mgr/cephadm: add warning to other orchestrators
mgr/cephadm: separate acceptance criterias for Devices
mgr/cephadm: fix typos
mgr/cephadm: move utils in test/utils.py
mgr/ssh: increase disk size to 20G
drivegroups: add support for drivegroups + tests
mgr/orch_cli: allow multiple drivegroups
drivegroups: translate disk spec to ceph-volume call
Reviewed-by: Jan Fajerski <jfajerski@suse.com>
Reviewed-by: Joshua Schmid <jschmid@suse.de>
Greg Farnum [Thu, 30 Jan 2020 12:43:13 +0000 (04:43 -0800)]
mon: elector: return after triggering a new election
When receiving an old propose, we were correctly triggering a new election
but not then returning out of receive_propose(), so we still processed the
"should I defer" logic and perhaps sent out a deferral (in the current epoch!).
Gregory Farnum [Thu, 30 Jan 2020 10:27:06 +0000 (11:27 +0100)]
doc: remove the CephFS-Hadoop instructions
These have not aged gracefully, and in particular include instructions
for setting pool size 1 to let Hadoop control the replication, but I've
heard reports of users setting up multiple size-1 pools and then wondering
where their data went when an OSD died.
Michal Skalski [Wed, 29 Jan 2020 00:29:58 +0000 (01:29 +0100)]
OSD: Allow 64-char hostname to be added as the "host" in CRUSH
On Linux systems it is possible to set a 64-character hostname when
HOST_NAME_MAX is set to 64. This means that when we call the gethostname
function we should expect HOST_NAME_MAX characters plus one for the null
character terminating the hostname string, as described here:
http://man7.org/linux/man-pages/man2/sethostname.2.html
With the current code, an OSD on a host with a 64-character hostname
updates the CRUSH map with host=unknown_host on startup.
Signed-off-by: Michal Skalski <mskalski@juniper.net>
Aleksei Zakharov [Fri, 20 Dec 2019 15:05:05 +0000 (18:05 +0300)]
mgr/prometheus: pg count by pool
If we have all other stats by pool, it's better to have the total PG
count by pool too. We can always sum() the per-pool counts to get a
cluster-wide total, but a cluster-wide total can't be broken back down
by pool.
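A hedged sketch of the idea, using prometheus_client's Gauge for illustration; the mgr/prometheus module has its own metric plumbing, and the metric and field names here are assumptions:
```
from prometheus_client import Gauge

# Illustrative only: export the PG count as a per-pool gauge so queries can
# group by pool directly instead of deriving it from other series.
pg_per_pool = Gauge('ceph_pool_pg_total', 'PG count by pool', ['pool_id'])

def export_pg_counts(pg_stats):
    counts = {}
    for pg in pg_stats:
        pool_id = pg['pgid'].split('.')[0]  # pgid looks like "<pool>.<seq>"
        counts[pool_id] = counts.get(pool_id, 0) + 1
    for pool_id, n in counts.items():
        pg_per_pool.labels(pool_id=pool_id).set(n)
```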
Signed-off-by: Aleksei Zakharov <zakharov.a.g@yandex.ru>
Kefu Chai [Wed, 29 Jan 2020 12:46:45 +0000 (20:46 +0800)]
mgr/cephadm: init attrs created by setattr()
to address the test failure reported by mypy:
```
mypy run-test: commands[0] | mypy --config-file=../../mypy.ini ansible/module.py cephadm/module.py mgr_module.py mgr_util.py orchestrator.py orchestrator_cli/module.py rook/module.py test_orchestrator/module.py
cephadm/module.py: note: In member "_check_for_strays" of class "CephadmOrchestrator":
cephadm/module.py:596: error: "CephadmOrchestrator" has no attribute "warn_on_stray_hosts"
cephadm/module.py:596: error: "CephadmOrchestrator" has no attribute "warn_on_stray_services"
cephadm/module.py:599: error: "CephadmOrchestrator" has no attribute "warn_on_stray_services"
Found 3 errors in 1 file (checked 8 source files)
```
see also https://github.com/python/mypy/issues/5719
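A minimal illustration of the failure mode and the fix; the class body below is a sketch, not the actual CephadmOrchestrator code:
```
class CephadmOrchestrator:
    def __init__(self) -> None:
        # Attributes created only dynamically via setattr() (e.g. when
        # loading module options) are invisible to mypy, which then rejects
        # every use of them. Initializing them here gives mypy (and readers)
        # a declared type.
        self.warn_on_stray_hosts = True
        self.warn_on_stray_services = True

    def _check_for_strays(self) -> None:
        if self.warn_on_stray_hosts or self.warn_on_stray_services:
            pass  # emit the health warnings
```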
Sébastien Han [Fri, 24 Jan 2020 15:29:54 +0000 (16:29 +0100)]
ceph-volume: add db and wal support to raw mode
Using the raw mode, the OSD preparation can now be called with extra
arguments to specify a block device for either rocksdb db or wal.
The code expects the device to be available and no validation will be
performed. Also, the entire block device will be consumed.
The goal is to run this on Kubernetes with Rook, where OSDs run on PVCs.
Users request a PVC of a certain size; the request is acknowledged, a
block device is created and attached to the machine, and it is later
consumed by Rook's preparation job.
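A hedged sketch of the resulting invocation shape; the `--block.db`/`--block.wal` flag names mirror the existing lvm-mode flags and are an assumption here, as is the helper itself:
```
# Illustrative only: build the argument list for a raw-mode prepare where
# db/wal devices are passed through as-is, with no validation of size or
# availability (the whole device is consumed).
def raw_prepare_args(data_dev, db_dev=None, wal_dev=None):
    args = ['raw', 'prepare', '--bluestore', '--data', data_dev]
    if db_dev:
        args += ['--block.db', db_dev]    # whole device used for rocksdb db
    if wal_dev:
        args += ['--block.wal', wal_dev]  # whole device used for the wal
    return args
```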
Lenz Grimmer [Wed, 29 Jan 2020 10:04:28 +0000 (11:04 +0100)]
mgr/dashboard: Change project name to "Ceph Dashboard"
The Dashboard's "About" page still referred to the
application as "Ceph Manager Dashboard".
Changed the `projectName` constant to
"Ceph Dashboard" to resolve this.
Sage Weil [Tue, 28 Jan 2020 19:33:49 +0000 (13:33 -0600)]
osd: dispatch_context and queue split finish on early bail-out
If we bail out of advance_pg early because there is an upcoming merge, we
still need to dispatch_context() on rctx before we drop the PG lock. And
the rctx that we submit needs to include the on_applied finisher commit
to call _finish_splits.
This is noticeable (at least) when there is a split and merge that are
both known. When we process the split, the new child is added to new_pgs.
When we get to the merge epoch, we stop early and take the bail-out
path.
Fix by adding a dispatch_context call for this path. And further make sure
that both dispatch_context callers in this function queue up the
new_pgs event.
Fixes: https://tracker.ceph.com/issues/43825
Signed-off-by: Sage Weil <sage@redhat.com>