The prototyped approach adds a copy of the shard name (which is
guaranteed to be a small string) to rgw::sal::LCEntry. It's not
expected to be represented in the underlying store types.
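A minimal sketch of the shape of the change (member names are hypothetical; the real rgw::sal::LCEntry interface may differ):

```cpp
// Hypothetical, simplified sketch of rgw::sal::LCEntry carrying a copy of
// the lc shard name; real member and accessor names may differ.
#include <cstdint>
#include <string>
#include <string_view>

namespace rgw::sal {

class LCEntry {
  std::string bucket;      // bucket name
  std::string oid;         // lc shard name: a small string, copied per entry,
                           // not persisted in the underlying store types
  uint64_t start_time = 0;
  uint32_t status = 0;

public:
  LCEntry(std::string_view b, std::string_view o, uint64_t t, uint32_t s)
    : bucket(b), oid(o), start_time(t), status(s) {}

  const std::string& get_oid() const { return oid; }
};

} // namespace rgw::sal
```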
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
Matt Benjamin [Thu, 17 Feb 2022 15:55:14 +0000 (10:55 -0500)]
rgwlc: remove bucket_lc_prepare, add backoff
Remove the now-unused RGWLC::bucket_lc_prepare. Wrap serializer calls
in RGWLC::process(int index...) with a simple backoff, limited to 5
retries.
In RGWLC::process(int index...), the behavior of
RGWLC::bucket_lc_prepare(...) is also open-coded, as sharing the lock
between these methods is error prone. For now, that method remains so
that it can be called from the single-bucket process path.
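A sketch of the bounded-backoff idea under stated assumptions (the serializer type and its try_lock API are stand-ins, not the actual rgw code):

```cpp
#include <chrono>
#include <thread>

// stand-in for rgw::sal::LCSerializer; the real locking API differs
struct LCSerializer {
  int try_lock(std::chrono::seconds) { return 0; } // 0 == success (stub)
};

constexpr int max_lock_retries = 5; // retries are limited to 5

bool lock_shard_with_backoff(LCSerializer* lock, std::chrono::seconds duration) {
  auto delay = std::chrono::milliseconds(100); // hypothetical initial delay
  for (int retry = 0; retry < max_lock_retries; ++retry) {
    if (lock->try_lock(duration) == 0) {
      return true;
    }
    std::this_thread::sleep_for(delay); // back off before the next attempt
    delay *= 2;
  }
  return false; // give up; the shard is retried on a later schedule
}
```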
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
Matt Benjamin [Mon, 14 Feb 2022 21:39:27 +0000 (16:39 -0500)]
rgwlc: remove explicit lc shard resets at start-of-run
This is an alternative solution to the (newly exposed) lifecycle
shard starvation problem reported by Jeegen Chen.
There was always a starvation condition implied by the reset of the
lc shard head at the start of processing. The introduction of "stale
sessions" in the parallel lifecycle changes made it more visible, in
particular when rgw_lc_debug_interval was set to a small value and
many buckets had lifecycle policies.
My hypothesis in this change is that lifecycle processing for each
lc shard should /always/ continue through the full set of eligible
buckets for the shard, regardless of how many processing cycles might
be required to do so. In general, restarting at the first eligible
bucket on each reschedule invites starvation when processing "gets
behind", so just avoid it.
Fixes: https://tracker.ceph.com/issues/49446
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
(cherry picked from commit 6e2ae13adced6b3dbb2fe16b547a30e9d68dfa06)
rgwlc: add a wraparound to continued shard processing
If the full set of buckets for a given lc shard couldn't be
processed in the prior cycle, processing will start with a
non-empty marker. Note the initial marker position; then, when the
end of the shard is reached, allow processing to wrap around to the
logical beginning of the shard and proceed up to the initial marker.
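A rough sketch of the wraparound logic, assuming a simplified in-memory view of a shard's entries (the real code iterates the lc shard via its marker API):

```cpp
#include <string>
#include <vector>

// process all entries of a shard, starting at a possibly non-empty marker,
// wrapping around to the logical beginning, and stopping at the start point
void process_shard(const std::vector<std::string>& entries, // sorted buckets
                   std::string& marker)                     // persisted position
{
  const std::string initial_marker = marker; // note where this cycle begins
  size_t start = 0;
  while (start < entries.size() && entries[start] < initial_marker) {
    ++start; // first entry at or after the initial marker
  }
  // first pass: from the marker to the end of the shard
  for (size_t i = start; i < entries.size(); ++i) {
    // ... process entries[i] ...
    marker = entries[i];
  }
  // wraparound: from the beginning back up to the initial marker
  for (size_t i = 0; i < start; ++i) {
    // ... process entries[i] ...
    marker = entries[i];
  }
}
```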
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
(cherry picked from commit 0b8f683d3cf444cc68fd30c3f179b9aa0ea08e7c)
don't report clearing incorrectly
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
Matt Benjamin [Mon, 14 Feb 2022 23:26:22 +0000 (18:26 -0500)]
rgwlc: permit disabling of (default) auto-clearing of stale sessions
Provide an option to disable the automatic clearing of stale
sessions, which, unless disabled, happens after 2 lifecycle
scheduling cycles.
The default behavior is most likely not desired when debugging or
testing lifecycle processing with rgw_lc_debug_interval set, since
re-entering a still-running session after 2 scheduling cycles is then
fairly likely.
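The gating logic might look roughly like this; the option name and plumbing are hypothetical, not the actual rgw configuration:

```cpp
// Hypothetical predicate: clear a stale session only if auto-clearing is
// enabled (the default) and 2 lifecycle scheduling cycles have elapsed.
bool should_clear_stale_session(bool auto_clear_enabled, // new rgw_lc_* option
                                int sched_cycles_elapsed) {
  return auto_clear_enabled && sched_cycles_elapsed >= 2;
}
```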
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
Add the snaptrim duration to the json-formatted output of the pg dump
stats. Define methods for a PG to set the snaptrim begin time and then
to calculate the total time spent trimming all the objects for the
snaps in the snap_trimq for the PG.
Tests:
- Librados C and C++ API tests to verify the time spent for a snaptrim
operation on a PG. These tests use the self-managed snaps APIs.
- Standalone tests to verify snaptrim duration using rados pool snaps.
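A rough sketch of the begin/accumulate pattern described above (member names are approximations; the real methods live on the PG, with the stat in pg_stat_t):

```cpp
#include <chrono>

struct SnapTrimTimer {
  using clock = std::chrono::steady_clock;
  clock::time_point begin;
  double snaptrim_duration = 0.0; // seconds, surfaced in pg dump json

  // called when the PG starts trimming
  void set_snaptrim_begin() { begin = clock::now(); }

  // called when trimming finishes; accumulates total time spent trimming
  // all objects for the snaps in the PG's snap_trimq
  void add_snaptrim_duration() {
    std::chrono::duration<double> d = clock::now() - begin;
    snaptrim_duration += d.count();
  }
};
```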
Add a new column, OBJECTS_TRIMMED, to the pg dump stats that shows the
number of objects trimmed when a snap is removed.
When a pg splits, the stats from the parent pg are copied to the child
pg. In such a case, reset objects_trimmed to 0 for the child pg
(see PeeringState::split_into()). Otherwise, incorrect stats would be
shown for the child pg after the split operation.
Tests:
- Librados C and C++ API tests to verify the number of objects trimmed
during snaptrim operation. These tests use the self-managed snaps APIs.
- Standalone tests to verify objects trimmed using rados pool snaps.
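The reset on split, as described for PeeringState::split_into(), amounts to something like this (field names are approximations of the real pg_stat_t members):

```cpp
struct PGStats {
  long objects_trimmed = 0;
  // ... other per-pg stats copied from parent to child on split ...
};

void on_split(const PGStats& parent, PGStats& child) {
  child = parent;            // child starts from a copy of the parent's stats
  child.objects_trimmed = 0; // but the child itself has trimmed nothing yet
}
```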
Ramana Raja [Tue, 25 Jan 2022 01:06:11 +0000 (20:06 -0500)]
mgr/nfs: allow dynamic update of cephfs nfs export
mgr/nfs module's apply_export() method is used to update an existing
CephFS NFS export. The method always restarted the ganesha service (
ganesha server cluster) after updating the export object and notifying
the ganesha servers to reload their exports. The restart temporarily
affected the client connections of all the exports served by the
ganesha servers.
It is not always necessary to restart the ganesha servers. Only
updating the export ID, path, or FSAL block of a CephFS NFS export
requires a restart. So modify apply_export() to only restart the
ganesha servers for such export updates.
The mgr/nfs module creates a FSAL ceph user with read-only or
read-write path restricted MDS caps for each export. To change the
access type of the CephFS NFS export, the MDS caps of the export's FSAL
ceph user must also be changed. Ganesha can dynamically enforce an
export's access type changes, but Ceph server daemons can't dynamically
enforce changes in caps of the Ceph clients. To allow dynamic updates
of CephFS NFS exports, always create a FSAL Ceph user with read-write
path restricted MDS caps per export. Rely on the ganesha servers to
enforce the export access type changes for the NFS clients.
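The mgr/nfs module itself is Python; this language-agnostic sketch only captures the restart decision described above, with hypothetical field names:

```cpp
struct ExportUpdate {
  bool export_id_changed;
  bool path_changed;
  bool fsal_block_changed;
  bool access_type_changed; // enforced dynamically by ganesha, no restart
};

// only changes to the export ID, path, or FSAL block require restarting the
// ganesha servers; other updates are applied via an export reload
bool needs_ganesha_restart(const ExportUpdate& u) {
  return u.export_id_changed || u.path_changed || u.fsal_block_changed;
}
```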
Fixes: https://tracker.ceph.com/issues/54025
Signed-off-by: Ramana Raja <rraja@redhat.com>
Xuehan Xu [Fri, 28 Jan 2022 05:04:03 +0000 (13:04 +0800)]
crimson/os/seastore: extract fixed kv btree implementation out of lba manager
Basically, this PR moves the current LBABtree and lba_range_pin out of the lba manager
and renames LBABtree to FixedKVBtree. This is preparation for implementing backrefs.
Joseph Sawaya [Fri, 11 Mar 2022 20:45:16 +0000 (15:45 -0500)]
doc: Add note to osds_per_device description about dual-actuator devices
This commit adds information about using dual-actuator devices with the
osds_per_device drive group option, letting users know they can create
an OSD for each actuator by setting this value to 2 in the drive group
they're using to apply OSDs to the device.
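For illustration, a drive group spec along these lines (service id and device path are examples only):

```yaml
service_type: osd
service_id: dual_actuator_osds   # example name
placement:
  host_pattern: '*'
spec:
  data_devices:
    paths:
      - /dev/sda                 # example dual-actuator device
  osds_per_device: 2             # one OSD per actuator
```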
Kefu Chai [Wed, 2 Mar 2022 16:41:19 +0000 (00:41 +0800)]
librbd: s/boost::variant/std::variant/
boost::variant explicitly prevents itself from being compared with
other types by marking the return type of, for instance,
operator==(const T&) as "void" if T is not identical to the variant
type. To address this, let's use std::variant<> instead.
It is more standard-compliant and simpler this way.
Because of https://cplusplus.github.io/LWG/issue3052, libstdc++
only specializes std::visit() for std::variant<> itself, while libc++
also allows derived types of std::variant<> to be "visited"; hence
we add an adaptor for SnapshotNamespace.
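A minimal illustration of the issue and the adaptor (types simplified; the real cls::rbd::SnapshotNamespace has more alternatives):

```cpp
#include <iostream>
#include <string>
#include <type_traits>
#include <utility>
#include <variant>

struct UserSnapshotNamespace { std::string name; };
struct TrashSnapshotNamespace {};

// a type derived from std::variant, as cls::rbd::SnapshotNamespace is
struct SnapshotNamespace
  : std::variant<UserSnapshotNamespace, TrashSnapshotNamespace> {
  using variant::variant;
};

// adaptor: upcast to the underlying std::variant before visiting, so that
// libstdc++'s std::visit (specified only for std::variant, per LWG 3052)
// accepts it just as libc++ does
template <typename Visitor>
decltype(auto) visit_ns(Visitor&& v, const SnapshotNamespace& ns) {
  using base = std::variant<UserSnapshotNamespace, TrashSnapshotNamespace>;
  return std::visit(std::forward<Visitor>(v), static_cast<const base&>(ns));
}

int main() {
  SnapshotNamespace ns = UserSnapshotNamespace{"snap1"};
  visit_ns([](const auto& n) {
    if constexpr (std::is_same_v<std::decay_t<decltype(n)>,
                                 UserSnapshotNamespace>) {
      std::cout << "user snapshot: " << n.name << "\n";
    } else {
      std::cout << "trash snapshot\n";
    }
  }, ns);
}
```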
Addition of a SCRUB_DURATION field that shows how long the scrub/deep-scrub of a pg took.
This field will be displayed in the output of the "ceph pg dump --format=json" and "ceph pg ls-by-pool --format=json" commands.
Bucket head OPs should have quota in the output. However, we were only
fetching quota on OPs that also had an object. The object itself is not
necessary for quota (although a bucket is). Change it so that we get
quota on bucket OPs as well.
Fixes: https://tracker.ceph.com/issues/54488
Signed-off-by: Daniel Gryniewicz <dang@redhat.com>
Neha Ojha [Wed, 9 Mar 2022 23:19:35 +0000 (15:19 -0800)]
Merge pull request #45305 from Thingee/update-foundation-mem-202203
docs: Updating Foundation member list for 202203
Reviewed-by: Dan van der Ster <daniel.vanderster@cern.ch>
Reviewed-by: Anthony D'Atri <anthony.datri@gmail.com>
Reviewed-by: Neha Ojha <nojha@redhat.com>
Kefu Chai [Mon, 7 Mar 2022 16:00:28 +0000 (00:00 +0800)]
cls/rbd: define SnapshotNamespace's ctor using its parent
Simpler this way. Also, this prevents the compiler from trying to
convert a random value into SnapshotNamespace, just because the latter
has a templated constructor, when it looks up a candidate for
operator<<(ostream&, Random value).
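The pattern in question, sketched with simplified types:

```cpp
#include <variant>

struct UserSnapshotNamespace {};
struct GroupSnapshotNamespace {};

struct SnapshotNamespace
  : std::variant<UserSnapshotNamespace, GroupSnapshotNamespace> {
  // inherit the parent's constructors: only the exact alternative types can
  // construct a SnapshotNamespace, unlike a greedy templated constructor
  // (template <typename T> SnapshotNamespace(T&&)) that would also match
  // unrelated values during operator<< overload resolution
  using variant::variant;
};
```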
crimson/osd: fix missing ENOENT when removing an already-removed object
This patch deals with the following problem:
```
[rzarzynski@o06 build]$ RBD_FEATURES="21" ./bin/ceph_test_cls_rbd --gtest_filter=TestClsRbd.create
Running main() from gmock_main.cc
Note: Google Test filter = TestClsRbd.create
[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from TestClsRbd
[ RUN ] TestClsRbd.create
../src/test/cls_rbd/test_cls_rbd.cc:467: Failure
Expected equality of these values:
-2
ioctx.remove(oid)
Which is: 0
[ FAILED ] TestClsRbd.create (10 ms)
[----------] 1 test from TestClsRbd (10 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test suite ran. (2805 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] TestClsRbd.create
```
Ilya Dryomov [Tue, 8 Mar 2022 12:56:15 +0000 (13:56 +0100)]
test/librbd/test_notify.py: effect post object map rebuild assert
Instead of just optionally skipping update_features test, commit 9c0b239d70cd ("qa/upgrade: conditionally disable update_features
tests") moved it after rebuild_object_map test. This isn't right
because update_features test invalidates the object map as a side
effect and rebuild_object_map test is what makes it valid again:
crimson, cls: fix the inability to print logs from plugins.
`cls_log()`, part of the interface between an OSD and a plugin
(a Ceph Class), is implemented on top of the `dout` macros
and shared between crimson and the classical OSD.
Unfortunately, when a plugin is hosted inside crimson,
this makes it impossible to send any in-plugin-generated
message to the `seastar::logger` for the `objclass` subsystem.
This patch differentiates the implementation of `cls_log()`,
and thus allows the `seastar::logger`-based implementation
of `dout` to be used.
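A hedged sketch of the idea: a crimson-specific cls_log() that forwards to a seastar::logger rather than the classical dout machinery. The real crimson logging shim's details may differ:

```cpp
#include <cstdarg>
#include <cstdio>
#include <seastar/util/log.hh>

static seastar::logger objclass_logger("objclass");

// crimson-side implementation of the objclass interface's cls_log();
// in-plugin messages are routed to the seastar logger for `objclass`
void cls_log(int level, const char* format, ...) {
  char buf[4096];
  va_list ap;
  va_start(ap, format);
  vsnprintf(buf, sizeof(buf), format, ap);
  va_end(ap);
  objclass_logger.info("<{}> {}", level, buf);
}
```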