cephadm: fix osd adoption with custom cluster name
When adopting Ceph OSD containers from a Ceph cluster with a custom name, the
adoption fails because the custom name isn't propagated to unit.run.
The idea here is to change the LVM metadata and enforce 'ceph.cluster_name=ceph',
given that cephadm doesn't support custom cluster names anyway.
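A minimal sketch of that tag rewrite, assuming LVM's `lvchange --deltag`/`--addtag` flags; the helper name and arguments here are hypothetical, and cephadm's real adoption code path differs:

```python
import subprocess

def reset_cluster_name_tag(lv_path: str, old_name: str) -> None:
    # Hypothetical helper: drop the custom cluster-name tag from the
    # adopted OSD's logical volume, then pin it to the default 'ceph'.
    subprocess.run(
        ['lvchange', '--deltag', f'ceph.cluster_name={old_name}', lv_path],
        check=True,
    )
    subprocess.run(
        ['lvchange', '--addtag', 'ceph.cluster_name=ceph', lv_path],
        check=True,
    )
```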
Fixes: https://tracker.ceph.com/issues/55654
Signed-off-by: Adam King <adking@redhat.com>
Co-authored-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit e720a658d6a1582c0497bdf709ef4bd26bb5bb73)
ceph-mixin: refactor the structure of _config and utils
Before this refactor we couldn't override the config externally. Now the
_config is correctly propagated instead of being taken only from the
config.libsonnet file.
ceph-mixin: fix linting issue and add cluster template support
Fix most of the issues reported by dashboards-linter:
- Add matcher/template for job (and also cluster)
- Use $__rate_interval everywhere
This also changes all the irate functions to rate, as most of the irate
calls were not actually used correctly. When using irate on a graph, for
instance, you can easily miss some metric values: irate only takes the two
last samples, and the query step can be quite large if you want a graph
spanning a few hours, a day, or more.
Fixes: https://tracker.ceph.com/issues/55003
Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@cern.ch>
ceph-mixin: add config with matchers and tags
Zac Dover [Wed, 1 Jun 2022 14:44:02 +0000 (00:44 +1000)]
doc: (squash) adding pacific.rst to toctree
Signed-off-by: Zac Dover <zac.dover@gmail.com>
doc/start: update "memory" in hardware-recs.rst
This PR corrects some usage errors in the "Memory" section
of the hardware-recommendations.rst file. It also closes
some parentheses that were opened but never closed.
This adds the security/ directory from the main branch.
This is done so that all references in the pacific.rst
file find destinations. This means that Sphinx will
recognize the document as coherent and will permit it
to build.
Signed-off-by: Zac Dover <zac.dover@gmail.com>
doc: (squash) add security/index.rst to toctree
Signed-off-by: Zac Dover <zac.dover@gmail.com>
doc: remove :confvals: from bluestore-config-ref
Signed-off-by: Zac Dover <zac.dover@gmail.com>
doc: (squash) linking to s3-feature-table
Signed-off-by: Zac Dover <zac.dover@gmail.com>
doc: (squash) repair refs to cephfs-top
Signed-off-by: Zac Dover <zac.dover@gmail.com>
doc: (squash) fix link to snap-schedule
Signed-off-by: Zac Dover <zac.dover@gmail.com>
doc: (squash) fix link to ceph-dokan
Signed-off-by: Zac Dover <zac.dover@gmail.com>
doc: (squash) fixing active-releases link
Signed-off-by: Zac Dover <zac.dover@gmail.com>
doc: testing security/index toctree link
afd8be7eac5e996c3bd07656601a4534053e2516 broke it.
That commit dropped `block_wal` and `block_db` from
`ceph_volume.devices.raw.activate.activate_bluestore`, but
`activate.main.Activate.main` still passes those arguments when
calling `RAWActivate([]).activate()`.
Note that we're making this change because (1) it allows db/wal devices and
(2) there are no known users of 'raw activate'. The only known user
is via 'ceph-volume activate', and we've fixed that caller in this commit.
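A generic sketch of this bug class and the caller-side fix, with invented names standing in for the real ceph-volume signatures:

```python
def activate_bluestore(devices):
    # New signature: the block_wal/block_db parameters were dropped;
    # db/wal devices are discovered from bluestore metadata instead.
    print(f'activating {devices}')

devices = ['/dev/sdb']
# Old caller, now broken: it still passed the removed keyword arguments.
# activate_bluestore(devices, block_wal='/dev/sdc', block_db='/dev/sdd')  # TypeError
# Fixed caller matches the reduced signature:
activate_bluestore(devices)
```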
Jos Collin [Wed, 4 May 2022 13:03:12 +0000 (18:33 +0530)]
qa: fix is_addr_blocklisted() to get blocklisted clients from 'osd dump'
With the introduction of the range blocklist, the 'blocklist ls' command outputs
two lists. It's also straightforward to get the blocklisted clients directly
from 'osd dump' to avoid regression.
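A sketch of the 'osd dump' approach, assuming the JSON dump carries a 'blocklist' map keyed by address (and, with this series, a 'range_blocklist' map as well); the helper name is hypothetical:

```python
import json
import subprocess

def blocklisted_addrs():
    # Read blocklisted entries straight from the OSD map dump rather
    # than parsing the two lists printed by 'blocklist ls'.
    dump = json.loads(
        subprocess.check_output(['ceph', 'osd', 'dump', '--format', 'json'])
    )
    return list(dump.get('blocklist', {})) + list(dump.get('range_blocklist', {}))
```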
Fixes: https://tracker.ceph.com/issues/55516
Signed-off-by: Jos Collin <jcollin@redhat.com>
(cherry picked from commit 47de5d79b8190458847072aae1c29db7d6a9b66b)
Zac Dover [Wed, 1 Jun 2022 14:54:40 +0000 (00:54 +1000)]
doc/rados: update bluestore-config-ref.rst
This PR updates bluestore-config-ref.rst so that
other PRs that refer to material in it can be
backported.
In order to ensure the coherence of this document,
all :confval: declarations have been removed. The
module that interprets those is called ceph_confval
and is available only in Quincy.
Greg Farnum [Tue, 30 Nov 2021 18:29:46 +0000 (18:29 +0000)]
osd: Check range_blocklist in is_blocklisted(): we actually blocklist ranges
Carry a parallel map from cidr addresses to a new
range_bits class (stored entirely as ephemeral state) so that we
don't need to re-compute masks and bit mappings too often, and to
separate out the unpleasant ipv6 bit mapping logic. Then check
against those with range_bits::matches() the same way we check
for equality on specific-entity matches. Nice and simple loops!
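The matching idea, rendered as a Python sketch under the assumption that blocklist entries split into exact addresses and CIDR ranges (the real range_bits class is C++ and caches the computed masks and bit mappings):

```python
import ipaddress

def is_blocklisted(addr, exact_entries, cidr_ranges):
    # Exact match first, mirroring the specific-entity equality check.
    if addr in exact_entries:
        return True
    # Then test the address against each blocklisted range.
    ip = ipaddress.ip_address(addr)
    return any(ip in ipaddress.ip_network(cidr) for cidr in cidr_ranges)

print(is_blocklisted('192.168.0.5', set(), ['192.168.0.0/24']))  # True
```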
Greg Farnum [Tue, 2 Nov 2021 00:38:50 +0000 (00:38 +0000)]
osdmap: convert get_blocklist() to provide the entity/IP and range blocklists
Providing a non-range-aware blocklist accessor would just be
asking for trouble, so don't.
The ugly part of this is how the Objecter is currently just
throwing the range blocklist on the end of its own list. The in-tree
callers are okay with this, and I'd like to look at removing the
blocklist events API from librados entirely -- it exposes "OSD-only"
state to clients and, as evidenced by this patch series, is not
particularly stable.
Greg Farnum [Wed, 8 Dec 2021 21:32:58 +0000 (21:32 +0000)]
mon: take blocklist ranges as a subcommand, not implicitly from address format
I discovered in testing with CephFS that this tends to interpret client IPs
(which don't have ports, but do have nonces) as invalid ranges. So give it
a separate input keyword that has to be applied first.
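A toy illustration of the ambiguity: entity addresses print as IP:port/nonce while ranges print as IP/prefix, so a naive parse of a client address treats its nonce as a bogus prefix length. The explicit subcommand (e.g. 'ceph osd blocklist range add ...') removes the guesswork:

```python
# A CephFS client address: the trailing /12345 is a nonce, not a prefix.
addr = '1.2.3.4:0/12345'
ip_part, _, suffix = addr.partition('/')
# Read as a CIDR, '/12345' would be an invalid (far too large) prefix
# length, which is why these addresses showed up as invalid ranges.
print(f'suffix {suffix!r} is not a valid IPv4 prefix length')
```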
Greg Farnum [Mon, 25 Oct 2021 19:53:04 +0000 (19:53 +0000)]
msg: common: allow entity_addr_t to store a CIDR address range
This required very little change to the existing code. Use with care, because
existing code expects an IP address instead of a range, but it saves on
writing a new parser.
Greg Farnum [Tue, 2 Nov 2021 00:34:34 +0000 (00:34 +0000)]
mds: Server: Simplify apply_blocklist and usage of the OSDMap's blocklist
This previously re-implemented a bunch of the OSDMap::is_blocklisted()
function and wasn't actually any faster to run -- the list of new blocklists
may be smaller than the full set, but OSDMap::blocklist is an unordered_map
with constant lookup time, so it shouldn't slow things down. More importantly,
this is much simpler, less likely to be buggy from duplicated code, and lets
the MDS off the hook for dealing with range blocklisting.
Greg Farnum [Mon, 1 Nov 2021 23:52:53 +0000 (23:52 +0000)]
client: Simplify blocklist tracking and interface
I'm not sure if the blocklist events tracking in Client.cc was ever
the simplest way to track that state, but it definitely isn't now. We
can just hand our addr_vec to the OSDMap and ask it -- it handles
version compatibility issues and, happily, means the Client doesn't
need to learn to deal with ranges directly.
mgr/dashboard: introduce memory and cpu usage for daemons
Introducing 2 new columns in the Cluster->Host->Daemons table for Memory and CPU usage.
Fixes: https://tracker.ceph.com/issues/55218
Signed-off-by: Avan Thakkar <athakkar@redhat.com>
Co-authored-by: Aashish Sharma <aasharma@redhat.com>
Conflicts:
src/pybind/mgr/cephadm/module.py
- _process_ls_output() doesn't exist in pacific, as the agent isn't yet
backported, so similar changes need to be made in serve.py instead.
In `master` the milestone step exits and causes the remaining tasks not to run. I previously tried the `continue-on-error` flag, but it didn't work, so let's try putting those steps at the end.
Prashant D [Thu, 17 Mar 2022 14:29:40 +0000 (14:29 +0000)]
mgr, mgr/prometheus: Fix regression with prometheus metrics
The ceph daemons on a host inherit their ceph version from the host.
This introduces a wrong interpretation in prometheus metrics as well
as in dump_server. Each ceph daemon should report its own
ceph version, based on the ceph binary in use for that daemon.
Consider a situation where a partial upgrade is done on a host: the daemons
which have been restarted should carry the upgraded version as their
ceph version tag and the rest should carry the older ceph version,
but presently all of them inherit the host version. In a containerized
environment, all daemons report the ceph version of the last daemon
registered as a service on the host.
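A hedged sketch of the intended outcome, with an invented metric name and sample data; the point is only that each daemon's series carries the version of its own binary rather than the host's:

```python
# Metadata as each daemon would report it to the mgr (sample data).
daemons = {
    'osd.0': 'ceph version 16.2.9 pacific',  # restarted after upgrade
    'osd.1': 'ceph version 16.2.7 pacific',  # not yet restarted
}

# Emit one series per daemon with its own version label, instead of
# stamping every daemon with the version inherited from the host.
for name, version in daemons.items():
    print(f'ceph_daemon_metadata{{ceph_daemon="{name}",ceph_version="{version}"}} 1')
```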
Fixes: https://tracker.ceph.com/issues/54611
Signed-off-by: Prashant D <pdhange@redhat.com>
(cherry picked from commit aeca2e41ef560cf51c1ad935cfb6470e782aa8d5)
Prashant D [Mon, 28 Mar 2022 13:02:08 +0000 (14:02 +0100)]
mgr, mon: Keep up-to-date metadata with mgr for MONs
The mgr updates mon metadata through handle_mon_map, which
gets triggered when MONs are removed from or added to the cluster, or
when the active mgr restarts or fails over.
We could have handled the metadata update through MMgrOpen or
early MMgrReport messages, but these are sent before the monitor
election completes and the lead monitor updates the pending metadata
in the monstore. Instead of relying on fetching mon metadata with the
'ceph mon metadata <id>' command, explicitly send a metadata
update request with the mon metadata to the mgr.
Fixes: https://tracker.ceph.com/issues/55088
Signed-off-by: Prashant D <pdhange@redhat.com>
(cherry picked from commit 1a065043b964f8c014ebb5bc890a243c398ff07c)
Laura Flores [Mon, 16 May 2022 22:59:42 +0000 (17:59 -0500)]
qa/suites/rados/thrash-erasure-code-big/thrashers: add `osd max backfills` setting to mapgap and pggrow
All `rados/thrash-erasure-code-big` tests that die due to the “wait_for_recovery” timeout have one thing in common: They contain either `thrashers/pggrow` or `thrashers/mapgap`.
The difference between pggrow and mapgap vs. all other non-offending thrashers (default, careful, fastread, and morepggrow) is that they lack an override setting for `osd max backfills`. `osd max backfills` is the max number of backfill operations allowed to/from an OSD. The higher the number, the quicker the recovery. By default, this value is 1. On all of the non-offending thrashers (default, careful, fastread, and morepggrow), the default 1 value gets overridden in their .yaml files with a value > 1. This is not the case for pggrow and mapgap, however, as they lack an `osd max backfills` override setting.
The mclock op scheduler is known to override `osd max backfills` with a high value, but all of the thrash-erasure-code-big thrashers have their op queue set to “debug_random”, which chooses randomly between op queues (the debug_random op queue is set to override the default mclock_scheduler in qa/config/rados.yaml). So, coupled with the “debug_random” op queue, the low `osd max backfills` setting is causing some tests to time out in recovery.
WITHOUT `osd max backfills`, as they are now, “mapgap” and “pggrow” tests die due to timed-out recovery about 17/100 times, as seen here with a pggrow test: http://pulpito.front.sepia.ceph.com/lflores-2022-05-18_14:24:29-rados:thrash-erasure-code-big-master-distro-default-smithi/
WITH `osd max backfills` specified, as I have suggested in this PR, 99/100 tests passed, with one test failing for a different reason:
http://pulpito.front.sepia.ceph.com/lflores-2022-05-17_22:40:27-rados:thrash-erasure-code-big-master-distro-default-smithi/
I also scheduled 145 tests WITH `osd max backfills` that are a mix of pggrow and mapgap thrashers. 144/145 tests passed, with one test failing for a different reason. http://pulpito.front.sepia.ceph.com/lflores-2022-05-17_15:27:54-rados:thrash-erasure-code-big-master-distro-default-smithi/
Fixes: https://tracker.ceph.com/issues/51076
Signed-off-by: Laura Flores <lflores@redhat.com>
(cherry picked from commit 40062676c2ceed49b9fa147127ffa83ba6118e2a)
Tim Serong [Wed, 30 Mar 2022 05:25:30 +0000 (16:25 +1100)]
ceph.spec.in: remove build directory in %clean, not %install
Removing the build directory at the end of %install is too soon,
and means we get rid of a bunch of stuff needed to correctly
create debuginfo/debugsource packages, which happens automatically
right after %install. So, let's put it where it really belongs, in
the %clean section.
Fixes: aa18cb12003e3526c8e8f23dc2335a483fbfa68e
Fixes: https://tracker.ceph.com/issues/55079
Signed-off-by: Tim Serong <tserong@suse.com>
(cherry picked from commit 94ad178bdcbae56a8eafc65a3a276e25d7a51a5e)
Tim Serong [Mon, 28 Mar 2022 04:12:10 +0000 (15:12 +1100)]
ceph.spec.in: remove build directory at end of %install
By the time we get to the end of the %install section, all the
built binaries have been installed in the build root, so we can
delete the build directory from the source tree. This frees up
about 17GB of disk space on build hosts, which is helpful in
case other processes later in the RPM build need more disk space.
Fixes: https://tracker.ceph.com/issues/55079
Signed-off-by: Tim Serong <tserong@suse.com>
(cherry picked from commit aa18cb12003e3526c8e8f23dc2335a483fbfa68e)
Conflicts:
ceph.spec.in
- pacific uses "build", not "%{_vpath_builddir}"