Feng Hualong [Wed, 16 Feb 2022 06:01:17 +0000 (14:01 +0800)]
common/compressor: fix the issue that compression cannot be processed concurrently
Currently, one session cannot support concurrent use, which leads to
a crash. So multiple sessions are now used; at the same time, this
also improves performance.
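As an illustrative sketch of the approach (names hypothetical; not the
actual compressor code), a small pool can hand out sessions so that no
single session is ever used by two threads at once:

    #include <memory>
    #include <mutex>
    #include <vector>

    // Hypothetical sketch of a session pool; the real compressor code
    // may differ. Each caller borrows its own session, so no session
    // is ever shared between threads.
    struct Session { /* wraps one (de)compression context */ };

    class SessionPool {
      std::mutex lock;
      std::vector<std::unique_ptr<Session>> free_list;
    public:
      std::unique_ptr<Session> get() {
        std::lock_guard<std::mutex> l(lock);
        if (free_list.empty())
          return std::make_unique<Session>();  // grow on demand
        auto s = std::move(free_list.back());
        free_list.pop_back();
        return s;
      }
      void put(std::unique_ptr<Session> s) {
        std::lock_guard<std::mutex> l(lock);
        free_list.push_back(std::move(s));
      }
    };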
Ilya Dryomov [Sat, 19 Mar 2022 13:04:52 +0000 (14:04 +0100)]
qa/workunits/rbd/cli_generic.sh: relax trash purge schedule status assert
Commit 08df6e0fd006 ("qa/workunits/rbd: expand LevelSpec parsing
coverage") didn't account for images with a separate data pool. This
was missed because of small-cache-pool.yaml breakage.
This change is part of an ongoing upgrade of Seastar, which will
be completed in a follow-up PR after another change is merged
into Seastar's upstream.
iovecs have an unsigned length (size_t), and before this patch the
total length was computed by adding each iovec's length to a signed
length variable (ssize_t). While the code checked whether the resulting
length was negative on overflow, the case where the length is positive
after overflow was not checked. This patch fixes the overflow check
by changing the length to an unsigned size_t.
Additionally, this patch fixes the case where some iovecs have been
added to the bufferlist and the aio completion has been blocked, but
adding an additional iovec fails because of overflow. This leads to
the UserBufferDeleter trying to unblock the completion on destruction
of the bufferlist but asserting because the completion was never
armed. We avoid this by first computing the total length, checking
for overflow, and validating iovcnt before adding the iovecs to the
bufferlist.
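A minimal sketch of the fixed pattern (simplified; not the actual
bufferlist code): compute the total as an unsigned size_t and detect
wraparound before any iovec is appended:

    #include <sys/uio.h>
    #include <cstddef>

    // Sketch only: validate iovcnt and sum the lengths up front, so
    // nothing is appended (and no completion is blocked) if the total
    // would overflow.
    bool total_iov_length(const struct iovec* iov, int iovcnt,
                          size_t* total_out) {
      if (iovcnt < 0)
        return false;
      size_t total = 0;
      for (int i = 0; i < iovcnt; i++) {
        size_t len = iov[i].iov_len;
        if (total + len < total)   // unsigned wraparound => overflow
          return false;
        total += len;
      }
      *total_out = total;
      return true;
    }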
The prototyped approach adds a copy of the shard name (which is
guaranteed to be a small string) to rgw::sal::LCEntry. It's not
expected to be represented in underlying store types.
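Schematically (field names hypothetical), the entry just carries the
shard name alongside its existing fields:

    #include <cstdint>
    #include <string>

    // Hypothetical sketch of the shape of the change; the real
    // rgw::sal::LCEntry has more fields and accessors.
    struct LCEntrySketch {
      std::string bucket;
      std::uint64_t start_time = 0;
      std::uint32_t status = 0;
      std::string oid;  // copy of the (small) lc shard name; not
                        // persisted in underlying store types
    };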
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
Matt Benjamin [Thu, 17 Feb 2022 15:55:14 +0000 (10:55 -0500)]
rgwlc: remove bucket_lc_prepare, add backoff
Remove now-unused RGWLC::bucket_lc_prepare. Wrap serializer calls
in RGWLC::process(int index...) with simple backoff, limited to 5
retries.
In RGWLC::process(int index...), also open-coded the behavior of
RGWLC::bucket_lc_prepare(...), as the lock sharing between these
methods is error-prone. For now, that method still exists so that it
can be called from the single-bucket process.
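A sketch of the backoff shape (constants and names illustrative; not
the exact RGWLC code):

    #include <chrono>
    #include <thread>

    // Illustrative retry loop: try to take the serializer lock up to
    // 5 times, sleeping a bit longer after each failed attempt.
    template <typename TryLock>
    bool lock_with_backoff(TryLock&& try_lock /* returns 0 on success */) {
      constexpr int max_retries = 5;
      for (int attempt = 0; attempt < max_retries; attempt++) {
        if (try_lock() == 0)
          return true;
        std::this_thread::sleep_for(
            std::chrono::milliseconds(100 * (attempt + 1)));
      }
      return false;  // give up; the shard is retried on a later cycle
    }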
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
Matt Benjamin [Mon, 14 Feb 2022 21:39:27 +0000 (16:39 -0500)]
rgwlc: remove explicit lc shard resets at start-of-run
This is an alternative solution to the (newly exposed) lifecycle
shard starvation problem reported by Jeegen Chen.
There was always a starvation condition implied by the
reset of the lc shard head at the start of processing. The introduction
of "stale sessions" in the parallel lifecycle changes made it more
visible, in particular when rgw_lc_debug_interval was set to a small
value and many buckets had a lifecycle policy.
My hypothesis in this change is that lifecycle processing for each
lc shard should /always/ continue through the full set of eligible
buckets for the shard, regardless of how many processing cycles might
be required to do so. In general, restarting at the first eligible
bucket on each reschedule invites starvation when processing "gets
behind", so just avoid it.
Fixes: https://tracker.ceph.com/issues/49446
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
(cherry picked from commit 6e2ae13adced6b3dbb2fe16b547a30e9d68dfa06)
rgwlc: add a wraparound to continued shard processing
If the full set of buckets for a given lc shard couldn't be
processed in the prior cycle, processing will start with a
non-empty marker. Note the initial marker position; then, when
the end of the shard is reached, allow processing to wrap
around to the logical beginning of the shard and proceed
through to the initial marker.
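A simplified sketch of the wraparound: remember where this run
started, scan to the end of the shard, then continue from the logical
beginning up through the starting point:

    #include <map>
    #include <string>

    // Illustrative wraparound scan over a shard's bucket entries.
    template <typename Fn>
    void scan_with_wraparound(const std::map<std::string, int>& entries,
                              const std::string& initial_marker,
                              Fn process) {
      // First leg: from the initial marker to the end of the shard.
      for (auto it = entries.upper_bound(initial_marker);
           it != entries.end(); ++it)
        process(it->first);
      // Second leg: wrap to the logical beginning and proceed up
      // through the initial marker.
      for (auto it = entries.begin();
           it != entries.end() && it->first <= initial_marker; ++it)
        process(it->first);
    }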
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
(cherry picked from commit 0b8f683d3cf444cc68fd30c3f179b9aa0ea08e7c)
don't report clearing incorrectly
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
Matt Benjamin [Mon, 14 Feb 2022 23:26:22 +0000 (18:26 -0500)]
rgwlc: permit disabling of (default) auto-clearing of stale sessions
Provide an option to disable the automatic clearing of stale sessions,
which, unless disabled, happens after 2 lifecycle scheduling cycles.
The default behavior is most likely not desired when debugging or
testing lifecycle processing with rgw_lc_debug_interval set, since in
that case re-entering a still-running session after 2 scheduling
cycles is fairly likely.
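Schematically (the option name and shape here are placeholders, not
necessarily the real config):

    // Sketch: a stale session is cleared only when the (hypothetical)
    // auto-clear option is enabled and the session has been idle for
    // at least two scheduling cycles.
    bool should_clear_stale_session(bool auto_clear_enabled,
                                    int idle_cycles) {
      constexpr int stale_after_cycles = 2;
      return auto_clear_enabled && idle_cycles >= stale_after_cycles;
    }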
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
The bucket chown command needs to touch every object, and if it fails
partway through, it can take a long time to catch up when restarted.
Allow it to accept the --marker option so you can tell it where to
pick up.
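Usage would then look something like this (bucket, owner, and marker
values are illustrative):

    # Resume a previously interrupted chown from a known marker.
    radosgw-admin bucket chown --bucket=mybucket --uid=newowner \
        --marker="<last-processed-object>"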
Fixes: https://tracker.ceph.com/issues/53614
Signed-off-by: Daniel Gryniewicz <dang@redhat.com>
Kefu Chai [Mon, 14 Mar 2022 15:35:13 +0000 (23:35 +0800)]
cls/fifo: use friend instead of member operators
to address the following error when compiling with the C++20 standard:
../src/rgw/cls_fifo_legacy.cc:2217:22: error: ISO C++20 considers use of overloaded operator '==' (with operand types 'rados::cls::fifo::journal_entry' and 'rados::cls::fifo::journal_entry') to be ambiguous despite there being a unique best viable function [-Werror,-Wambiguous-reversed-operator]
!(jiter->second == e)) {
~~~~~~~~~~~~~ ^ ~
../src/cls/fifo/cls_fifo_types.h:148:8: note: ambiguity is between a regular call to this operator and a call with the argument order reversed
bool operator ==(const journal_entry& e) {
^
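The general pattern of the fix, as a sketch (actual fields differ):
replace the non-const member operator with a const-correct friend, so
C++20's reversed-candidate lookup no longer produces an ambiguity:

    // Before: a non-const member operator==; under C++20, a == b also
    // considers the reversed candidate b == a, and the const mismatch
    // between the two candidates makes the call ambiguous:
    //   bool operator==(const journal_entry& e) { ... }
    // After: one symmetric friend, const on both sides.
    struct journal_entry_sketch {
      int op = 0;
      unsigned part_num = 0;
      friend bool operator==(const journal_entry_sketch& lhs,
                             const journal_entry_sketch& rhs) {
        return lhs.op == rhs.op && lhs.part_num == rhs.part_num;
      }
    };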
Casey Bodley [Mon, 14 Feb 2022 22:56:33 +0000 (17:56 -0500)]
neorados: assoc_delete() uses allocator_traits
the std::allocator<T> member function destroy() was deprecated in
c++17 and removed in c++20 (deallocate() remains, but the traits
interface covers both). call the static functions on
std::allocator_traits<T> instead
resolves the c++20 compilation error with clang13:
In file included from ceph/src/test/cls_fifo/bench_cls_fifo.cc:38:
ceph/src/neorados/cls/fifo.h:684:7: error: no member named 'destroy' in 'std::allocator<neorados::cls::fifo::detail::JournalProcessor<ceph::async::allocator_binder<boost::asio::executor_binder<(lambda at ceph/src/neorados/cls/fifo.h:1012:4), boost::asio::executor>, std::allocator<void>>>>'
a.destroy(t);
~ ^
ceph/src/neorados/cls/fifo.h:1728:11: note: in instantiation of function template specialization 'neorados::cls::fifo::FIFO::assoc_delete<ceph::async::allocator_binder<boost::asio::executor_binder<(lambda at ceph/src/neorados/cls/fifo.h:1012:4), boost::asio::executor>, std::allocator<void>>, neorados::cls::fifo::detail::JournalProcessor<ceph::async::allocator_binder<boost::asio::executor_binder<(lambda at ceph/src/neorados/cls/fifo.h:1012:4), boost::asio::executor>, std::allocator<void>>>>' requested here
FIFO::assoc_delete(h, this);
^
ceph/src/neorados/cls/fifo.h:1605:6: note: in instantiation of member function 'neorados::cls::fifo::detail::JournalProcessor<ceph::async::allocator_binder<boost::asio::executor_binder<(lambda at ceph/src/neorados/cls/fifo.h:1012:4), boost::asio::executor>, std::allocator<void>>>::handle' requested here
handle(errc::inconsistency);
^
ceph/src/neorados/cls/fifo.h:857:8: note: in instantiation of member function 'neorados::cls::fifo::detail::JournalProcessor<ceph::async::allocator_binder<boost::asio::executor_binder<(lambda at ceph/src/neorados/cls/fifo.h:1012:4), boost::asio::executor>, std::allocator<void>>>::process' requested here
p->process();
^
/usr/include/boost/asio/bind_executor.hpp:407:12: note: in instantiation of member function 'neorados::cls::fifo::FIFO::NewPartPreparer<ceph::async::allocator_binder<boost::asio::executor_binder<(lambda at ceph/src/neorados/cls/fifo.h:1012:4), boost::asio::executor>, std::allocator<void>>>::operator()' requested here
return this->target_(BOOST_ASIO_MOVE_CAST(Args)(args)...);
^
ceph/src/common/async/bind_allocator.h:179:12: note: in instantiation of function template specialization 'boost::asio::executor_binder<neorados::cls::fifo::FIFO::NewPartPreparer<ceph::async::allocator_binder<boost::asio::executor_binder<(lambda at ceph/src/neorados/cls/fifo.h:1012:4), boost::asio::executor>, std::allocator<void>>>, boost::asio::executor>::operator()<boost::system::error_code &, bool>' requested here
return this->target(std::forward<Args>(args)...);
^
ceph/src/neorados/cls/fifo.h:939:5: note: in instantiation of function template specialization 'neorados::cls::fifo::FIFO::_update_meta<ceph::async::allocator_binder<boost::asio::executor_binder<neorados::cls::fifo::FIFO::NewPartPreparer<ceph::async::allocator_binder<boost::asio::executor_binder<(lambda at ceph/src/neorados/cls/fifo.h:1012:4), boost::asio::executor>, std::allocator<void>>>, boost::asio::executor>, std::allocator<void>>>' requested here
_update_meta(fifo::update{}.journal_entries_add(jentries),
^
ceph/src/neorados/cls/fifo.h:1008:7: note: in instantiation of function template specialization 'neorados::cls::fifo::FIFO::_prepare_new_part<ceph::async::allocator_binder<boost::asio::executor_binder<(lambda at ceph/src/neorados/cls/fifo.h:1012:4), boost::asio::executor>, std::allocator<void>>>' requested here
_prepare_new_part(
^
ceph/src/neorados/cls/fifo.h:524:7: note: in instantiation of function template specialization 'neorados::cls::fifo::FIFO::_prepare_new_head<ceph::async::allocator_binder<boost::asio::executor_binder<neorados::cls::fifo::FIFO::Pusher<spawn::detail::coro_handler<boost::asio::executor_binder<void (*)(), boost::asio::executor>, void>>, boost::asio::executor>, std::allocator<void>>>' requested here
_prepare_new_head(std::move(p));
^
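The shape of the fix, in a minimal sketch (not the actual fifo.h code):

    #include <memory>

    // std::allocator<T>::destroy() is gone in C++20; going through
    // std::allocator_traits works for any allocator, old or new.
    template <typename T, typename Allocator = std::allocator<T>>
    void destroy_and_deallocate(Allocator& a, T* t) {
      std::allocator_traits<Allocator>::destroy(a, t);
      std::allocator_traits<Allocator>::deallocate(a, t, 1);
    }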
Casey Bodley [Fri, 11 Feb 2022 23:56:30 +0000 (18:56 -0500)]
xlist: use friend instead of member operators
resolves a c++20 compilation error with clang 13:
In file included from ceph/src/client/Client.cc:55:
In file included from ceph/src/messages/MClientCaps.h:19:
In file included from ceph/src/mds/mdstypes.h:22:
ceph/src/include/xlist.h:212:27: warning: ISO C++20 considers use of overloaded operator '!=' (with operand types 'xlist<Dentry *>::const_iterator' and 'xlist<Dentry *>::const_iterator') to be ambiguous despite there being a unique best viable function with non-reversed arguments [-Wambiguous-reversed-operator]
for (const auto &item : list) {
^
ceph/src/client/Client.cc:3299:63: note: in instantiation of member function 'operator<<' requested here
ldout(cct, 20) << "link inode " << in << " parents now " << in->dentries << dendl;
^
ceph/src/include/xlist.h:202:10: note: candidate function with non-reversed arguments
bool operator!=(const_iterator& rhs) const {
^
ceph/src/include/xlist.h:199:10: note: ambiguous candidate function with reversed arguments
bool operator==(const_iterator& rhs) const {
^
Add the snaptrim duration to the JSON-formatted output of the pg dump
stats. Define methods for a PG to set the snaptrim begin time and then
to calculate the total time spent trimming all the objects for the
snaps in the snap_trimq for the PG.
Tests:
- Librados C and C++ API tests to verify the time spent for a snaptrim
operation on a PG. These tests use the self-managed snaps APIs.
- Standalone tests to verify snaptrim duration using rados pool snaps.
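A sketch of the duration bookkeeping (names illustrative; not the
actual PG code): record a begin time when trimming starts and
accumulate the elapsed time when it finishes:

    #include <chrono>

    // Illustrative duration accounting for one snaptrim pass; the
    // accumulated total is what would appear in the pg dump stats.
    class SnaptrimStatsSketch {
      std::chrono::steady_clock::time_point begin;
      double total_seconds = 0.0;
    public:
      void set_snaptrim_begin() {
        begin = std::chrono::steady_clock::now();
      }
      void add_snaptrim_duration() {
        std::chrono::duration<double> d =
            std::chrono::steady_clock::now() - begin;
        total_seconds += d.count();
      }
      double snaptrim_duration() const { return total_seconds; }
    };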
Add a new column, OBJECTS_TRIMMED, to the pg dump stats that shows the
number of objects trimmed when a snap is removed.
When a pg splits, the stats from the parent pg are copied to the child
pg. In such a case, reset objects_trimmed to 0 for the child pg
(see PeeringState::split_into()). Otherwise, incorrect stats will be
shown for the child pg after the split operation.
Tests:
- Librados C and C++ API tests to verify the number of objects trimmed
during snaptrim operation. These tests use the self-managed snaps APIs.
- Standalone tests to verify objects trimmed using rados pool snaps.
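Schematically (simplified), the split path copies the parent's stats
and then zeroes the per-trim counter:

    // Illustrative: on split the child inherits the parent's stats,
    // but objects_trimmed must restart from zero for the child.
    struct PGStatsSketch {
      long objects_trimmed = 0;
      /* ...many other counters, copied as-is... */
    };

    void split_into_sketch(const PGStatsSketch& parent,
                           PGStatsSketch* child) {
      *child = parent;             // stats copied wholesale
      child->objects_trimmed = 0;  // avoid double-counting in the child
    }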
Ramana Raja [Tue, 25 Jan 2022 01:06:11 +0000 (20:06 -0500)]
mgr/nfs: allow dynamic update of cephfs nfs export
mgr/nfs module's apply_export() method is used to update an existing
CephFS NFS export. The method always restarted the ganesha service
(the ganesha server cluster) after updating the export object and
notifying the ganesha servers to reload their exports. The restart
temporarily affected the client connections of all the exports served
by the ganesha servers.
It is not always necessary to restart the ganesha servers. Only
updating the export ID, path, or FSAL block of a CephFS NFS export
requires a restart. So modify apply_export() to only restart the
ganesha servers for such export updates.
The mgr/nfs module creates a FSAL ceph user with read-only or
read-write path restricted MDS caps for each export. To change the
access type of the CephFS NFS export, the MDS caps of the export's FSAL
ceph user must also be changed. Ganesha can dynamically enforce an
export's access type changes, but Ceph server daemons can't dynamically
enforce changes in caps of the Ceph clients. To allow dynamic updates
of CephFS NFS exports, always create a FSAL Ceph user with read-write
path restricted MDS caps per export. Rely on the ganesha servers to
enforce the export access type changes for the NFS clients.
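The decision logic, sketched in C++ for illustration (the actual
mgr/nfs module is Python; names here are hypothetical):

    #include <string>

    // Only export-ID, path, or FSAL-block changes force a restart of
    // the ganesha servers; other changes (e.g. access type) are
    // applied by notifying the servers to reload the export.
    struct ExportSketch {
      unsigned export_id = 0;
      std::string path;
      std::string fsal_block;   // serialized FSAL section
      std::string access_type;  // "RW" or "RO"; reload is enough
    };

    bool needs_restart(const ExportSketch& old_ex,
                       const ExportSketch& new_ex) {
      return old_ex.export_id != new_ex.export_id ||
             old_ex.path != new_ex.path ||
             old_ex.fsal_block != new_ex.fsal_block;
    }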
Fixes: https://tracker.ceph.com/issues/54025
Signed-off-by: Ramana Raja <rraja@redhat.com>