When the class `Device` is instantiated with a path instead of a
block device, it fails as follows:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 130, in __init__
    self._parse()
  File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 233, in _parse
    self.ceph_device = disk.has_bluestore_label(self.path)
  File "/usr/lib/python3.6/site-packages/ceph_volume/util/disk.py", line 906, in has_bluestore_label
    with open(device_path, "rb") as fd:
IsADirectoryError: [Errno 21] Is a directory: '/var/lib/ceph/osd/ceph-0/'
```
Passing a path instead of a block device is valid; `simple scan` needs it.
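A minimal sketch of the kind of guard that avoids the IsADirectoryError
above, written as a standalone re-implementation rather than the actual
fix (the 22-byte signature check mirrors what the traceback's disk.py
does; treat the details as illustrative):
```
import os

def has_bluestore_label(device_path):
    # A directory such as /var/lib/ceph/osd/ceph-0/ can never carry a
    # bluestore on-disk label, so don't even try to open() it.
    if os.path.isdir(device_path):
        return False
    try:
        with open(device_path, "rb") as fd:
            # bluestore devices start with a fixed ASCII signature
            return fd.read(22) == b"bluestore block device"
    except OSError:
        return False
```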
ceph-volume unit tests shouldn't actually create contents on the
filesystem from where they run (even though they are written in a tmp
dir); let's use pyfakefs.
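A minimal pyfakefs sketch, assuming pytest: the `fs` fixture provided by
pyfakefs swaps the real filesystem for an in-memory fake, so nothing the
test writes ever touches the directory the tests run from:
```
def test_osd_dir_stays_in_memory(fs):  # `fs` is pyfakefs' pytest fixture
    fs.create_dir('/var/lib/ceph/osd/ceph-0')
    with open('/var/lib/ceph/osd/ceph-0/fsid', 'w') as f:
        f.write('some-fsid')
    # the path exists only in the fake filesystem backing this test
    with open('/var/lib/ceph/osd/ceph-0/fsid') as f:
        assert f.read() == 'some-fsid'
```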
ceph-volume shouldn't report devices `/dev/sdy` and `/dev/sdz`: they will never be
available in such a scenario, and on a host with a bunch of mpath devices
they would pollute the inventory output.
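One hedged way to express that filter (not the actual patch): treat a
device as unusable when it is held by a device-mapper node whose uuid
carries the `mpath-` prefix, which is how multipath members appear in
sysfs:
```
import os

def is_mpath_member(dev_name):
    holders = '/sys/block/{}/holders'.format(dev_name)
    if not os.path.isdir(holders):
        return False
    for holder in os.listdir(holders):
        try:
            with open('/sys/block/{}/dm/uuid'.format(holder)) as f:
                # multipath maps have dm uuids like "mpath-<wwid>"
                if f.read().startswith('mpath-'):
                    return True
        except OSError:
            continue
    return False

usable = [d for d in os.listdir('/sys/block') if not is_mpath_member(d)]
```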
Zack Cerza [Tue, 21 Jun 2022 17:28:30 +0000 (11:28 -0600)]
ceph-volume: Rename env var; add warning
So that we can have a nice big warning that fires once per invocation, I
think using a callable class with a class attribute seems like a decent
approach. A closure could work too.
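A sketch of that callable-class shape; the env var name below is an
assumption for illustration, not necessarily the one the commit settles
on:
```
import logging
import os

logger = logging.getLogger(__name__)

class AllowLoopDevices:
    warned = False  # class attribute: shared state, so we warn only once

    def __call__(self):
        allowed = os.environ.get('CEPH_VOLUME_ALLOW_LOOP_DEVICES',
                                 'false').lower() in ('true', 'yes', '1')
        if allowed and not AllowLoopDevices.warned:
            logger.warning('loop devices are enabled: this is unsupported '
                           'and intended for testing purposes only')
            AllowLoopDevices.warned = True
        return allowed

allow_loop_devices = AllowLoopDevices()  # call as allow_loop_devices()
```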
Zack Cerza [Tue, 17 May 2022 17:29:02 +0000 (11:29 -0600)]
ceph-volume: Optionally consume loop devices
A similar proposal was rejected in #24765; I understand the logic
behind the rejection, but this will allow us to run Ceph clusters on
machines that lack disk resources for testing purposes. We just need to
make it impossible to accidentally enable, and make it clear it is
unsupported.
This check was added in commit ecd3778a6f9a ("rbd-mirror: ensure that
the last non-primary snapshot cannot be pruned") as an additional
safeguard against pruning an incomplete non-primary snapshot in case
there is no predecessor mirror snapshot. However, it still fires if the
predecessor is there but happens to be a primary demotion snapshot.
A bogus "incomplete local non-primary snapshot" error is reported and
the replayer gets stuck.
Remove completed_non_primary_snapshots_exist tracking as the presence
of the predecessor in the incomplete non-primary snapshot pruning arm
is already ensured by the "m_local_snap_id_start > 0" condition.
Incomplete non-primary snapshot handling is bifurcated depending
on whether any data objects have been copied. If no data objects
have been copied, an incomplete non-primary snapshot is assumed to
be malformed and gets pruned; the sync is restarted from scratch.
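Put as illustrative pseudocode (the real logic is C++ inside rbd-mirror's
snapshot replayer; only `m_local_snap_id_start` comes from the text
above, the function and return values are assumptions):
```
def handle_incomplete_non_primary_snapshot(m_local_snap_id_start):
    if m_local_snap_id_start == 0:
        # no data objects copied yet: assume the snapshot is malformed,
        # prune it and restart the sync from scratch
        return 'prune and resync from scratch'
    # m_local_snap_id_start > 0 already guarantees a predecessor mirror
    # snapshot exists, so no extra tracking flag is needed before pruning
    return 'prune and continue from predecessor'
```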
It doesn't really thrash anything, just repeatedly restarts the
workload on top of a dirty cache file. rbd_pwl_cache_recovery is
more on point and gets covered by existing CODEOWNERS.
Yin Congmin [Fri, 7 Jan 2022 07:03:44 +0000 (15:03 +0800)]
qa/tasks: add thrash test for persistent write log cache
Add a thrash test for the persistent write log cache: run rbd bench
on the persistent write log cache, thrash rbd bench, and test the
recovery function of the persistent write log cache.
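The shape of such a thrash loop, as a hedged standalone sketch rather
than the actual qa/tasks code:
```
import random
import subprocess
import time

def thrash_pwl_cache(image, rounds=10):
    for _ in range(rounds):
        bench = subprocess.Popen(
            ['rbd', 'bench', '--io-type', 'write', image])
        time.sleep(random.uniform(5, 60))  # let the PWL cache get dirty
        bench.kill()                       # simulate a crash mid-write
        bench.wait()
        # the next round reopens the image, exercising cache recovery
```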
rbd: don't default empty pool name unless namespace is specified
Commit 96f05a7956b3 ("rbd: delay determination of default pool name")
broke "rbd perf image iostat" and "rbd perf image iotop" GLOBAL_POOL_KEY
support (the ability to blend all rbd pools together into a single
view).
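The defaulting rule the fix restores, as an illustrative sketch (the
empty-string sentinel standing in for GLOBAL_POOL_KEY and the helper
name are assumptions):
```
def effective_pool(pool_name, namespace, default_pool='rbd'):
    if not pool_name and namespace:
        # a namespace without a pool needs a concrete default pool
        return default_pool
    # otherwise leave an empty pool name alone so it can keep meaning
    # "all pools" for `rbd perf image iostat` / `rbd perf image iotop`
    return pool_name
```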
Ilya Dryomov [Fri, 20 May 2022 12:05:03 +0000 (14:05 +0200)]
qa/suites/rbd: disable workunit timeout for dynamic_features_no_cache
The I/O workload in this test is xfstests (qa/run_xfstests_qemu.sh)
which isn't subjected to any timeout other than global max_job_time
limit in any other subsuite (e.g. qemu/workloads/qemu_xfstests.yaml).
But here, there is a parallel "op" workload defined as a workunit.
The workunit task has a default timeout of 3 hours which is effectively
imposed on the entire job. In the "rbd cache = false" configuration,
it's sometimes exceeded.
librbd: bail from schedule_request_lock() if already lock owner
A race condition may be hit if there are multiple pending locks for the
same image and pending callbacks. Abort the exclusive lock process if
already the exclusive lock owner.
Fixes: https://tracker.ceph.com/issues/56549 Signed-off-by: Christopher Hoffman <choffman@redhat.com>
(cherry picked from commit 3527d2c764626c09c5ede80ae844551fd8845756)
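Illustrative pseudocode for the guard (the change itself is in librbd's
C++; the class and attribute names here are assumptions):
```
class ExclusiveLockPolicy:
    def __init__(self):
        self.is_lock_owner = False

    def schedule_request_lock(self):
        if self.is_lock_owner:
            # a stale queued request fired after we already acquired
            # the lock: bail instead of re-entering the acquisition path
            return
        self.request_lock()

    def request_lock(self):
        self.is_lock_owner = True  # stand-in for the real acquisition
```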
When the --gpg-url option is specified, the name used for the gpg filename is missing, which throws an exception.
This adds the string "manual" to the gpg key filename: /etc/apt/trusted.gpg.d/ceph.manual.gpg
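A tiny sketch of the resulting naming rule (the helper function is
illustrative; only the path comes from the commit message):
```
def gpg_key_path(repo_name=None):
    # with --gpg-url there is no repo-derived name; fall back to "manual"
    return '/etc/apt/trusted.gpg.d/ceph.{}.gpg'.format(repo_name or 'manual')

assert gpg_key_path() == '/etc/apt/trusted.gpg.d/ceph.manual.gpg'
```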
Adam King [Tue, 26 Jul 2022 13:55:05 +0000 (09:55 -0400)]
mgr/cephadm: clear error message when resuming upgrade
The message field in the output of "ceph orch upgrade status"
will first take the value of the error field of the UpgradeState,
and only if that is blank/None, display an info string we periodically
update throughout the upgrade with useful info, such as that
we're upgrading a daemon of a particular type or pulling an image
on a certain host. When an upgrade fails, we set the error field
of the UpgradeState, pause the upgrade and raise a health warning.
Sometimes the user is able to resolve the issue and simply resume
the upgrade. The issue here is that, in that case, the error field of
the UpgradeState is still set, so instead of seeing the useful info
messages, it will continue to display an error message that may
no longer be relevant. By emptying the error field of the UpgradeState
when upgrades are resumed, we return to the normal behavior of
displaying the info string, and will only show another error message
if another error actually occurs.
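A minimal sketch of that behavior, assuming an UpgradeState object with
the fields described above (names are illustrative):
```
def resume_upgrade(upgrade_state):
    upgrade_state.error = None    # drop the stale error message
    upgrade_state.paused = False
    # with error cleared, "ceph orch upgrade status" falls through to
    # the periodically refreshed info string again
```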
ceph-volume: fix fast device alloc size on multiple devices
The size computed by get_physical_fast_allocs() was wrong when the
function had multiple devices to handle.
For instance, with 4 OSDs and 2 fast devices of 10G each, allocating
2 slots per fast device: the behavior before was that each slot would
be 2.5G, meaning both fast devices would be half full. The behavior
now is that each slot takes 5G, so that the fast devices are full.
Fixes: https://tracker.ceph.com/issues/56031 Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@cern.ch>
(cherry picked from commit d0f9e93914e2b7feac41a634311d74c146c8868b)
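The arithmetic from the example above, spelled out (the numbers come
from the commit message; the buggy formula is one plausible reading of
the bug):
```
dev_size_gb = 10        # each fast device is 10G
slots_per_device = 2    # 2 slots per fast device
num_fast_devices = 2    # serving 4 OSDs in total

# buggy: one device's size spread over the slot count of all devices,
# 10 / 4 = 2.5G per slot, leaving each device half full
buggy_slot_gb = dev_size_gb / (slots_per_device * num_fast_devices)
# fixed: each device only serves its own slots, 10 / 2 = 5G, device full
fixed_slot_gb = dev_size_gb / slots_per_device
```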
ceph-volume: do not call get_device_vgs() per device
Let's call `ceph_volume.api.lvm.get_all_devices_vgs` only once instead,
so we avoid a bunch of subprocess calls that slow down the
`ceph-volume inventory` command.
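The shape of the change, sketched (the matching predicate inside the
loop is an illustrative assumption, not the patched code):
```
from ceph_volume.api import lvm

def vgs_by_device(devices):
    all_vgs = lvm.get_all_devices_vgs()  # one batched lvm call up front
    # match each device against the cached list instead of running one
    # subprocess per device
    return {dev: [vg for vg in all_vgs if dev in getattr(vg, 'pv_name', '')]
            for dev in devices}
```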
`lsblk_all()` should return an empty dict `{}` if nothing was found.
If we raise `RuntimeError()` then the loop in `scan.Scan.main` will stop
and make ceph-volume fail because we don't try to catch this exception.
`scan.Scan.main()` has its own logic to detect whether the given path
is a ceph-disk created OSD anyway.
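A standalone sketch of the contract (not the actual ceph-volume helper):
return `{}` on failure instead of raising, and let callers apply their
own detection logic:
```
import subprocess

def lsblk_all(device=None):
    cmd = ['lsblk', '--paths', '--output', 'NAME']
    if device:
        cmd.append(device)
    try:
        out = subprocess.check_output(cmd, universal_newlines=True)
    except (OSError, subprocess.CalledProcessError):
        return {}                    # was: raise RuntimeError(...)
    names = out.splitlines()[1:]     # drop the NAME header row
    return {name: {} for name in names}
```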
libcephsqlite: ceph-mgr crashes when compiled with gcc12
A regex in libcephsqlite, when compiled with GCC 12, treats '-' as a
range operator, resulting in the following error:
"Invalid start of '[x-x]' range in regular expression"
Deferred writes can sometimes update regions that are no longer mapped to any object.
This cannot happen while BlueStore is running, as blobs are being held
and allocations are not released until the deferred op is executed.
However, after a restart, the allocations the deferred op targets have already been freed.
Deferred replay is done on BlueStore bootup, before any new object can be allocated,
so no collision with an object is possible.
But BlueFS can allocate space from the block device while deferred ops are still pending.
Fixes: https://tracker.ceph.com/issues/54547 Signed-off-by: Adam Kupczyk <akupczyk@redhat.com>
(cherry picked from commit 2afb951a490aac9ca4d01614867396cbd06b793d)
Conflicts: src/os/bluestore/BlueStore.cc
Modified non-critical BlueFS call foreach_block_extents to get_block_extents.
Kotresh HR [Fri, 4 Feb 2022 09:58:39 +0000 (15:28 +0530)]
qa: validate subvolume discover on upgrade
Validate subvolume discover on upgrade from
legacy subvolume to v1. The handcrafted
".meta" file at the legacy subvolume root should
not be used for any subvolume APIs like getpath
and authorize.
Kotresh HR [Fri, 4 Feb 2022 09:25:03 +0000 (14:55 +0530)]
mgr/volumes: Fix subvolume discover during upgrade
Fixes subvolume discover to use the correct
metadata file after an upgrade from a legacy subvolume
to v1. The fix makes sure it doesn't use the
handcrafted metadata file placed in the root of the
legacy subvolume.
Co-authored-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@cern.ch> Co-authored-by: Dan van der Ster <daniel.vanderster@cern.ch> Co-authored-by: Ramana Raja <rraja@redhat.com> Signed-off-by: Kotresh HR <khiremat@redhat.com>
(cherry picked from commit 7eba9cab6cfb9a13a84062177d7a0fa228311e13)
(cherry picked from commit 50f4ac7b75185c89ede210b2c975672e8bdd73aa)
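A deliberately simple sketch of the rule the two commits above describe
(layout details and names are assumptions, not the mgr/volumes
implementation):
```
import os

def metadata_file(subvol_root, version):
    if version == 'legacy':
        # legacy subvolumes own no metadata file; a handcrafted ".meta"
        # at their root must be ignored by getpath/authorize and friends
        return None
    # v1 subvolumes legitimately keep their ".meta" at the root
    return os.path.join(subvol_root, '.meta')
```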
Casey Bodley [Wed, 11 May 2022 18:49:49 +0000 (14:49 -0400)]
qa/rgw: use 'with-sse-s3' override for s3tests
don't rely on the ceph manager task to parse a config file. each rgw
could be using a different config. instead, revert to an s3tests
override called 'with-sse-s3'. this way, the only job that enables
sse-s3, vault_transit.yaml, contains both the 'rgw crypt sse s3'
configurables and the flag to enable the associated test cases.
Adam C. Emerson [Tue, 8 Feb 2022 18:47:49 +0000 (13:47 -0500)]
rgw: Fix data race in ChangeStatus
Fixes: https://tracker.ceph.com/issues/54208 Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
(cherry picked from commit 27f5ba9e5f649d8767c8ab44d56404e0186f6fc1) Fixes: https://tracker.ceph.com/issues/54208 Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
Adam C. Emerson [Fri, 8 Jul 2022 18:58:16 +0000 (14:58 -0400)]
rgw: Guard against malformed bucket URLs
Misplaced colons can result in radosgw thinking it has a bucket URL
but with no bucket name, leading to a crash later on.
Fixes: https://tracker.ceph.com/issues/55765 Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
(cherry picked from commit 3ee9a3b41a289a926fed8b8927ca2a93b4f120a6) Fixes: https://tracker.ceph.com/issues/56585 Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
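Illustrative pseudocode for such a guard (radosgw is C++; the parsing
helper below is an assumption, not the patched code):
```
def parse_bucket_url(path):
    bucket, _, key = path.lstrip('/').partition('/')
    if not bucket:
        # a misplaced colon can leave a bucket-style URL whose bucket
        # name is empty; reject it instead of crashing later on
        raise ValueError('malformed bucket URL: empty bucket name')
    return bucket, key
```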
Tatjana Dehler [Thu, 7 Jul 2022 15:21:14 +0000 (17:21 +0200)]
mgr/dashboard: prevent alert redirect
Prevent Alertmanager alerts from being redirected to the active mgr
dashboard instance. There are two reasons for it:
1. It doesn't bring any additional benefit. The Alertmanager config
includes all available mgr instances - active and passive ones. In
case of an alert, it will be sent to all of them. It ensures that
the active mgr dashboard will receive the alert in any case.
2. The redirect URL includes the mgr IP and NOT the FQDN. This leads
to issues in environments where an SSL certificate is configured and
matches only the FQDNs.
Fixes: https://tracker.ceph.com/issues/56401 Signed-off-by: Tatjana Dehler <tdehler@suse.com>
(cherry picked from commit 965005e0789e566ccadce7a326b0e197ab8d7f5f)