Adam Kupczyk [Wed, 2 Feb 2022 19:28:14 +0000 (20:28 +0100)]
os/bluestore/bluefs: Make volume selector operations atomic
Make all RocksDBBlueFSVolumeSelector file/extent/size tracking atomic.
This tracking used to be synchronized by the BlueFS global lock; now
that BlueFS uses fine-grained locking, atomic updates are necessary to
prevent corruption.
Fixes: https://tracker.ceph.com/issues/53906
Signed-off-by: Adam Kupczyk <akupczyk@redhat.com>
(cherry picked from commit 372bda350966624d5081635e659f7c46947980c2)
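A minimal sketch of the pattern, assuming illustrative member names (not the actual RocksDBBlueFSVolumeSelector fields): per-volume statistics become std::atomic counters so concurrent BlueFS threads can update them without the global lock.

    #include <atomic>
    #include <cstdint>

    // Illustrative sketch only: usage counters made atomic so that
    // concurrent BlueFS threads can update them safely without
    // holding a global lock.
    struct VolumeUsage {
      std::atomic<uint64_t> files{0};
      std::atomic<uint64_t> extents{0};
      std::atomic<uint64_t> bytes{0};

      void add_extent(uint64_t len) {
        // Relaxed ordering suffices for statistics counters; no other
        // data is published through these values.
        extents.fetch_add(1, std::memory_order_relaxed);
        bytes.fetch_add(len, std::memory_order_relaxed);
      }

      void sub_extent(uint64_t len) {
        extents.fetch_sub(1, std::memory_order_relaxed);
        bytes.fetch_sub(len, std::memory_order_relaxed);
      }
    };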
Adam Kupczyk [Thu, 20 Jan 2022 12:44:35 +0000 (13:44 +0100)]
os/bluestore/bluefs: Code for volume selector check
Adds the ability to verify that the volume selector properly tracks disk usage.
Creates two options:
- bluefs_check_volume_selector_on_umount
- bluefs_check_volume_selector_often
that can be used to validate that the vselector does not diverge from
the values it should hold.
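A rough sketch of what such a check could look like (hypothetical types and names, not the actual vselector code): recompute per-device usage from the authoritative file list and compare it against the incrementally tracked values.

    #include <cstddef>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Hypothetical record type for illustration only.
    struct FileRecord { std::size_t device; uint64_t length; };

    // Recompute per-device usage from scratch and compare it with the
    // values the volume selector has been tracking incrementally.
    bool check_vselector(const std::vector<FileRecord>& files,
                         const std::vector<uint64_t>& tracked_bytes) {
      std::vector<uint64_t> recomputed(tracked_bytes.size(), 0);
      for (const auto& f : files) {
        recomputed[f.device] += f.length;
      }
      bool ok = true;
      for (std::size_t d = 0; d < tracked_bytes.size(); ++d) {
        if (recomputed[d] != tracked_bytes[d]) {
          std::cerr << "vselector divergence on device " << d
                    << ": tracked=" << tracked_bytes[d]
                    << " actual=" << recomputed[d] << "\n";
          ok = false;
        }
      }
      return ok;
    }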
Tobias Urdin [Mon, 7 Aug 2023 20:34:43 +0000 (20:34 +0000)]
rgw/auth: handle HTTP OPTIONS with v4 auth
This adds code to properly verify the signature for HTTP OPTIONS
calls that are CORS preflight requests, which pass the expected
method in the Access-Control-Request-Method header.
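A hedged sketch of the idea (not the actual RGW code): a presigned request is signed for the method the client intends to use, so verifying a CORS preflight means substituting the method from Access-Control-Request-Method when rebuilding the canonical request.

    #include <optional>
    #include <string>

    // Illustrative only: choose the method that SigV4 verification
    // should use when canonicalizing the request. For a CORS
    // preflight, the signature was computed for the method the
    // browser intends to send, which arrives in the
    // Access-Control-Request-Method header.
    std::string method_for_v4_canonical_request(
        const std::string& http_method,
        const std::optional<std::string>& acrm_header) {
      if (http_method == "OPTIONS" && acrm_header) {
        return *acrm_header;  // e.g. "PUT" for a preflighted upload
      }
      return http_method;
    }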
Rishabh Dave [Mon, 11 Sep 2023 09:55:46 +0000 (15:25 +0530)]
doc/cephfs: write cephfs commands fully in docs
We write CephFS commands incompletely in docs. For example, "ceph tell
mds.a help" is simply written as "tell mds.a help". This might confuse
the reader, and it does no harm to write the command in full.
Fixes: https://tracker.ceph.com/issues/62791
Signed-off-by: Rishabh Dave <ridave@redhat.com>
(cherry picked from commit e63b573d3edc272d83ee1b5eb3dace037f762d87)
* refs/pull/51045/head:
qa: Add test for per-module finisher thread
qa: allow check_counter to look at nested keys
qa: allow specifying min for check-counter
mgr: Add one finisher thread per module
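A generic sketch of the per-module finisher idea (plain std::thread, not Ceph's actual Finisher class): each module drains its own callback queue on a dedicated thread, so one slow module cannot stall the others.

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>

    // Generic sketch: one finisher thread per module, each draining
    // its own queue so a slow module cannot stall the others.
    class ModuleFinisher {
      std::queue<std::function<void()>> q;
      std::mutex m;
      std::condition_variable cv;
      bool stopping = false;
      std::thread worker;

    public:
      ModuleFinisher() : worker([this] { run(); }) {}
      ~ModuleFinisher() {
        { std::lock_guard<std::mutex> l(m); stopping = true; }
        cv.notify_one();
        worker.join();
      }
      void queue_callback(std::function<void()> fn) {
        { std::lock_guard<std::mutex> l(m); q.push(std::move(fn)); }
        cv.notify_one();
      }

    private:
      void run() {
        std::unique_lock<std::mutex> l(m);
        while (true) {
          cv.wait(l, [this] { return stopping || !q.empty(); });
          if (q.empty()) break;  // stopping and fully drained
          auto fn = std::move(q.front());
          q.pop();
          l.unlock();
          fn();  // run the callback without holding the queue lock
          l.lock();
        }
      }
    };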
Patrick Donnelly [Mon, 17 Jul 2023 20:10:59 +0000 (16:10 -0400)]
mds: drop locks and retry when lock set changes
An optimization was added to avoid an unnecessary gather on the inode
filelock when the client can safely get the file size without also
getting issued the requested caps. However, if a retry of getattr
is necessary, this conditional inclusion of the inode filelock
can cause lock-order violations resulting in deadlock.
So, if we've already acquired some of the inode's locks, then we must
drop locks and retry.
Fixes: https://tracker.ceph.com/issues/62052
Fixes: c822b3e2573578c288d170d1031672b74e02dced
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
(cherry picked from commit b5719ac32fe6431131842d62ffaf7101c03e9bac)
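A generic sketch of the drop-and-retry pattern described above, with std::mutex standing in for the MDS lock machinery: if the lock set changes while some locks are already held, taking more locks in place risks violating the global lock order, so everything is released and acquisition restarts.

    #include <algorithm>
    #include <functional>
    #include <mutex>
    #include <vector>

    // Generic drop-and-retry sketch: acquire locks in a global order
    // (here, by address); if the required set changes mid-acquisition,
    // drop every lock we hold and start over rather than risk a
    // lock-order violation and deadlock.
    void acquire_locks(std::vector<std::mutex*> needed,
                       const std::function<bool()>& lock_set_changed) {
      std::sort(needed.begin(), needed.end());
      for (;;) {
        std::vector<std::mutex*> held;
        bool restart = false;
        for (auto* mtx : needed) {
          mtx->lock();
          held.push_back(mtx);
          if (lock_set_changed()) {            // set changed under us
            for (auto* h : held) h->unlock();  // drop everything
            restart = true;
            break;
          }
        }
        if (!restart) return;  // caller now holds every needed lock
      }
    }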
Ilya Dryomov [Sun, 27 Aug 2023 17:09:15 +0000 (19:09 +0200)]
qa/suites/upgrade/pacific-p2p: skip TestClsRbd.mirror_snapshot test
The behavior of the class method changed in reef; the change was
backported to pacific and quincy. An older pacific binary used against
newer pacific OSDs produces an expected failure:
[ RUN ] TestClsRbd.mirror_snapshot
.../ceph-16.2.7/src/test/cls_rbd/test_cls_rbd.cc:2278: Failure
Expected equality of these values:
-85
mirror_image_snapshot_unlink_peer(&ioctx, oid, 1, "peer2")
Which is: 0
[ FAILED ] TestClsRbd.mirror_snapshot (30 ms)
The TestClsRbd.snapshots_namespaces test was removed in commit 4ad9d565a15c
("librbd: simplified retrieving snapshots from image header") many years
ago.
It's a no-no to acquire locks in these "fast" messenger methods. This
can lead to messenger slowdowns in the best case, as it blocks reads
off the wire. In the worst case, the messenger may deadlock with other
threads, preventing any further message reads off the wire.
It's not obvious that this method is "fast", so I've added a comment
noting this.
Fixes: https://tracker.ceph.com/issues/61874
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
(cherry picked from commit 69980823e62f67d502c4045e15c41c5c44cd5127)
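A hedged sketch of the usual workaround (generic code, not the actual messenger): a fast-dispatch handler must not block, so work that needs locks is captured and queued for a worker thread instead of being done inline on the read path.

    #include <functional>
    #include <mutex>
    #include <queue>

    // Generic sketch: fast dispatch runs on the messenger's read path
    // and must never block for long. Anything that needs real locking
    // is deferred to a worker (e.g. a finisher thread).
    class FastDispatcher {
      std::mutex queue_lock;  // held only for a push, never across work
      std::queue<std::function<void()>> deferred;

    public:
      void handle_message_fast(int msg_type) {
        // No lock-then-work here: capture what we need and defer.
        std::lock_guard<std::mutex> l(queue_lock);
        deferred.push([msg_type] {
          (void)msg_type;  // placeholder for the real work
          // ... the actual (possibly blocking) processing happens
          // later, off the messenger's read thread ...
        });
      }
    };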
python-common: drive_selection: fix KeyError when osdspec_affinity is not set
When osdspec_affinity is not set, the drive selection code will fail.
This can happen when a device has multiple LVs, some of which are used
by Ceph while at least one LV isn't.
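The module itself is Python, where an unguarded spec["osdspec_affinity"] lookup raises KeyError; a minimal C++ analogue of the defensive-lookup fix pattern (illustrative only):

    #include <map>
    #include <string>

    // C++ analogue of the Python fix: look the key up defensively and
    // fall back to a default instead of failing when it is absent.
    std::string get_osdspec_affinity(
        const std::map<std::string, std::string>& spec) {
      auto it = spec.find("osdspec_affinity");
      return it != spec.end() ? it->second : "";  // default when unset
    }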
Ilya Dryomov [Mon, 14 Aug 2023 11:16:59 +0000 (13:16 +0200)]
qa/suites/upgrade/octopus-x: skip TestClsRbd.mirror_snapshot test
The behavior of the class method changed in reef; the change was
backported to pacific and quincy. An octopus test binary used against
pacific OSDs produces an expected failure:
[ RUN ] TestClsRbd.mirror_snapshot
.../ceph-15.2.17/src/test/cls_rbd/test_cls_rbd.cc:2279: Failure
Expected equality of these values:
-85
mirror_image_snapshot_unlink_peer(&ioctx, oid, 1, "peer2")
Which is: 0
[ FAILED ] TestClsRbd.mirror_snapshot (6 ms)
liu shi [Fri, 14 May 2021 07:51:01 +0000 (03:51 -0400)]
cpu_profiler: fix asok command crash
Fixes: https://tracker.ceph.com/issues/50814
Signed-off-by: liu shi <liu.shi@navercorp.com>
(cherry picked from commit be7303aafe34ae470d2fd74440c3a8d51fcfa3ff)
Patrick Donnelly [Fri, 21 Jul 2023 15:56:49 +0000 (11:56 -0400)]
mds: adjust cap acquisition throttles
For production workloads, these defaults rarely help. Adjust
accordingly. For a steady-state "find" workload, these new throttles
will prevent acquiring more than ~2300 caps/second, which is quite
manageable with typical recall rates:
-ln(0.5) / 30 * 100k = 2310
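The arithmetic behind that figure, under my reading of the formula: the throttle is an exponentially decaying counter with a 30 s half-life capped at 100k, and at steady state acquisitions exactly balance decay.

    \lambda = \frac{\ln 2}{t_{1/2}} = \frac{-\ln(0.5)}{30\,\mathrm{s}} \approx 0.0231\,\mathrm{s}^{-1}

    \frac{dV}{dt} = r - \lambda V = 0 \;\Rightarrow\; r = \lambda V_{\max} \approx 0.0231 \times 10^{5} \approx 2310\ \mathrm{caps/s}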
Fixes: https://tracker.ceph.com/issues/62114
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
(cherry picked from commit f290ef9d0d2d09fb978d56c46be704c6efd45c43)
Venky Shankar [Wed, 9 Aug 2023 05:43:01 +0000 (11:13 +0530)]
qa: avoid explicit set to client mountpoint as "/"
This causes self.cephfs_mntpt to be set to "/" by default, which
overrides the config in ceph.conf. `test_client_cache_size`
updates ceph.conf with:
client mountpoint = /subdir
However, the ceph-fuse mount command has --client_mountpoint explicitly
set to "/", thereby causing the root of the file system to get mounted,
which confuses the test.
Conflicts:
qa/tasks/cephfs/fuse_mount.py
- merge conflicts due to updated upstream code
- removed the offending line; host_mntpt was appended to the mount
command later in the code. This issue was introduced by manual
conflict resolution during the backporting process.
qa/tasks/cephfs/kernel_mount.py
qa/tasks/cephfs/mount.py
- fixed conflicts between 'main' and 'pacific' branches
Conflicts:
src/pybind/mgr/rbd_support/mirror_snapshot_schedule.py
- The above conflict was due to commit e4a16e2
("mgr/rbd_support: add type annotation") not being in pacific
Conflicts:
src/pybind/mgr/rbd_support/module.py
- The above conflict was due to commit dcb51b0
("mgr/rbd_support: define commands using CLICommand") not being in pacific
Ramana Raja [Wed, 10 May 2023 18:37:44 +0000 (14:37 -0400)]
rbd_support: recover from "double blocklisting"
Recover from being blocklisted while recovering from blocklisting.
When the rbd_support module is being set up to recover from client
blocklisting, the module's new rados client connection can also get
blocklisted. Currently, this will cause the recovery to fail and
the module will remain inoperable. Instead, retry module recovery
when the new client gets blocklisted during the module setup in the
recovery thread.
Conflicts:
src/pybind/mgr/rbd_support/mirror_snapshot_schedule.py
src/pybind/mgr/rbd_support/module.py
src/pybind/mgr/rbd_support/perf.py
src/pybind/mgr/rbd_support/task.py
src/pybind/mgr/rbd_support/trash_purge_schedule.py
- The above conflicts were due to commit e4a16e2
("mgr/rbd_support: add type annotation") not being in pacific
- The above conflicts were due to commit dcb51b0
("mgr/rbd_support: define commands using CLICommand") not being in pacific
Ramana Raja [Wed, 15 Feb 2023 15:12:54 +0000 (10:12 -0500)]
mgr/rbd_support: recover from rados client blocklisting
In certain scenarios the OSDs were slow to process RBD requests.
This led to the rbd_support module's RBD client not being able to
gracefully hand over an RBD exclusive lock to another RBD client.
After the condition persisted for some time, the other RBD client
forcefully acquired the lock by blocklisting the rbd_support module's
RBD client, and consequently blocklisted the module's RADOS client. The
rbd_support module stopped working. To recover the module, the entire
mgr service had to be restarted, which reloaded the other mgr modules.
Instead of recovering the rbd_support module from client blocklisting
in a way that disrupts other mgr modules, recover the module
automatically without restarting the mgr service. When the client gets
blocklisted, shut down the module's handlers and the blocklisted client,
create a new rados client for the module, and start new handlers.
Conflicts:
src/pybind/mgr/rbd_support/mirror_snapshot_schedule.py
src/pybind/mgr/rbd_support/module.py
src/pybind/mgr/rbd_support/perf.py
src/pybind/mgr/rbd_support/task.py
src/pybind/mgr/rbd_support/trash_purge_schedule.py
- The above conflicts were due to commit e4a16e2
("mgr/rbd_support: add type annotation") not being in pacific
- The above conflicts were due to commit dcb51b0
("mgr/rbd_support: define commands using CLICommand") not being in pacific