BuildBoost.cmake (used when we're building the submodule) doesn't
provide parity with FindBoost.cmake (used with system Boost).
Specifically, it doesn't set the _FOUND variables for the various
components, making it hard to depend on finding those features.
Set Boost_<component>_FOUND for all the components we're building in
BuildBoost.cmake to make using these variables possible.
Signed-off-by: Daniel Gryniewicz <dang@redhat.com>
(cherry picked from commit 0f4cb207bb4a9905619894286edd41a89379a747)
Signed-off-by: Mauricio Faria de Oliveira <mfo@canonical.com>
Signed-off-by: Daniel Gryniewicz <dang@redhat.com>
(cherry picked from commit 4ca4201b7fe3e0ca172548204b4b888a0908d162)
Signed-off-by: Mauricio Faria de Oliveira <mfo@canonical.com>
Conflicts:
  src/cls/CMakeLists.txt
  src/test/rgw/CMakeLists.txt
  - Add the spawn headers to the include paths to fix the two build
    errors below. No linking is needed since the files don't use
    'spawn::' at all.
  In file included from /git/ceph/src/rgw/rgw_common.h:31:0,
                   from /git/ceph/src/cls/otp/cls_otp_client.cc:25:
  /git/ceph/src/common/async/yield_context.h:31:10: fatal error: spawn/spawn.hpp: No such file or directory
   #include <spawn/spawn.hpp>
            ^~~~~~~~~~~~~~~~~
  compilation terminated.
  src/cls/CMakeFiles/cls_otp_client.dir/build.make:62: recipe for target 'src/cls/CMakeFiles/cls_otp_client.dir/otp/cls_otp_client.cc.o' failed

  In file included from /git/ceph/src/rgw/rgw_dmclock_scheduler.h:21:0,
                   from /git/ceph/src/rgw/rgw_dmclock_sync_scheduler.h:18,
                   from /git/ceph/src/test/rgw/test_rgw_dmclock_scheduler.cc:17:
  /git/ceph/src/common/async/yield_context.h:31:10: fatal error: spawn/spawn.hpp: No such file or directory
   #include <spawn/spawn.hpp>
            ^~~~~~~~~~~~~~~~~
  compilation terminated.
  src/test/rgw/CMakeFiles/unittest_rgw_dmclock_scheduler.dir/build.make:62: recipe for target 'src/test/rgw/CMakeFiles/unittest_rgw_dmclock_scheduler.dir/test_rgw_dmclock_scheduler.cc.o' failed
Casey Bodley [Wed, 6 Nov 2019 20:57:01 +0000 (15:57 -0500)]
rgw: use new spawn() implementation
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 769841a08c3e79985d9634f06c9ff4d62647dcda)
Signed-off-by: Mauricio Faria de Oliveira <mfo@canonical.com>
Conflicts:
  src/rgw/CMakeLists.txt
  - Remove changes for the 'rgw_schedulers' cmake target, which is not
    in Nautilus.
  - Link 'radosgw_a' against 'spawn': the transitivity from
    'rgw_schedulers' (which is public) is lost, and 'rgw_a'/'rgw_libs'
    (which are private to 'radosgw_a') aren't enough to build
    'rgw_main.cc' (error below).
  src/rgw/rgw_aio.cc
  - This file doesn't exist in Nautilus; similar changes are made in
    other files.
  src/rgw/rgw_aio_throttle.h
  - No changes required; the code these changes build on (e.g., the
    class and variables) is not in Nautilus.
  src/rgw/rgw_asio_frontend.cc
  - Similarly, fewer changes required; commit dd4350b is not in
    Nautilus.
Build error:
  In file included from /git/ceph/src/rgw/rgw_common.h:31:0,
                   from /git/ceph/src/rgw/rgw_main.cc:15:
  /git/ceph/src/common/async/yield_context.h:31:10: fatal error: spawn/spawn.hpp: No such file or directory
   #include <spawn/spawn.hpp>
            ^~~~~~~~~~~~~~~~~
  compilation terminated.
  src/rgw/CMakeFiles/radosgw.dir/build.make:62: recipe for target 'src/rgw/CMakeFiles/radosgw.dir/rgw_main.cc.o' failed
Igor Fedotov [Fri, 26 Feb 2021 14:16:11 +0000 (17:16 +0300)]
os/bluestore: go beyond pinned onodes while trimming the cache.
Cache trimming can stall when a bunch of pinned entries sits at the top
of the onode cache's LRU list. If these entries stay pinned for a long
time, the cache may start using too much memory, pushing the OSD past
the osd_memory_target limit. The pinned state tends to happen to osdmap
onodes.
The proposed patch preserves the last trim position in the LRU list (if
it pointed to a pinned entry) and resumes trimming from that position
if it hasn't been invalidated. The LRU nature of the list makes this
safe: no new entries appear above a previously present entry while that
entry isn't touched.
Fixes: https://tracker.ceph.com/issues/48729
Signed-off-by: Igor Fedotov <ifedotov@suse.com>
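The real implementation is C++ inside BlueStore's onode cache; the
following is only a minimal Python sketch of the resume-past-the-pinned-run
idea (all names are illustrative, and the invalidation of the saved
position on touch/unpin is reduced to a comment):

    class Node:
        def __init__(self, key, pinned=False):
            self.key = key
            self.pinned = pinned
            self.newer = None  # neighbour toward the MRU end
            self.older = None  # neighbour toward the LRU end

    class OnodeLRU:
        """Trim scans from the LRU end toward the MRU end, skipping pinned
        nodes; the last pinned node examined is remembered so the next trim
        does not rescan a long pinned run at the top of the list."""

        def __init__(self):
            self.mru = None     # newest entry
            self.lru = None     # oldest entry, where eviction starts
            self.resume = None  # pinned node where the previous trim stopped

        def add(self, node):  # insert at the MRU end
            node.older = self.mru
            if self.mru:
                self.mru.newer = node
            self.mru = node
            if not self.lru:
                self.lru = node

        def trim(self, max_evict):
            evicted = []
            # Resume just past the saved pinned run, if any; in real code,
            # touching or unpinning an entry invalidates self.resume.
            node = self.resume.newer if self.resume else self.lru
            while node and len(evicted) < max_evict:
                nxt = node.newer
                if node.pinned:
                    self.resume = node  # remember how far we scanned
                else:
                    self._unlink(node)
                    evicted.append(node.key)
                node = nxt
            return evicted

        def _unlink(self, node):
            if node.older:
                node.older.newer = node.newer
            else:
                self.lru = node.newer
            if node.newer:
                node.newer.older = node.older
            else:
                self.mru = node.older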
Adam Kupczyk [Sat, 30 Jan 2021 11:57:05 +0000 (12:57 +0100)]
os/bluestore: Add option to check BlueFS reads
Add option "bluefs_check_for_zeros" to check if there are any zero-filled page.
If so, reread data. It is known that sometimes BlueStore gets such pages.
See "bluestore_retry_disk_reads".
- The docstring added to describe the link to mgr/prometheus conflicted
  with the const fmt definition for the message; resolved by adding the
  doc under the const definition.
Paul Cuzner [Thu, 8 Oct 2020 03:30:56 +0000 (16:30 +1300)]
mgr/prometheus: Add healthcheck metric for SLOW_OPS
SLOW_OPS is triggered by the op tracker and generates a health alert,
but health checks do not create metrics for Prometheus to use as alert
triggers. This change adds a SLOW_OPS metric and provides a simple
means of extending to other relevant health checks in the future.
If extracting the value from the health check message fails, we log an
error and remove the metric from the metric set. In addition, the
metric description has been changed to better reflect the scenarios in
which SLOW_OPS can be triggered.
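mgr/prometheus is a Python module, so a sketch fits the register, but
the helper below is illustrative rather than the module's actual code
(the metric name, message format, and data shapes are assumptions). It
shows the behaviour described above: parse the op count out of the
health check summary, and on a parse failure log an error and drop the
metric from the set.

    import logging
    import re

    log = logging.getLogger(__name__)

    def export_slow_ops(metrics: dict, health_checks: dict) -> None:
        """Turn the SLOW_OPS health check into a numeric metric value."""
        check = health_checks.get('SLOW_OPS')
        if check is None:
            metrics['healthcheck_slow_ops'] = 0   # healthy: no slow ops
            return
        summary = check.get('summary', {}).get('message', '')
        m = re.match(r'(\d+) slow ops', summary)  # e.g. "5 slow ops, oldest one blocked for ..."
        if m:
            metrics['healthcheck_slow_ops'] = int(m.group(1))
        else:
            # Extraction failed: log and remove the metric, as described.
            log.error("failed to parse SLOW_OPS message: %r", summary)
            metrics.pop('healthcheck_slow_ops', None)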
Nathan Cutler [Thu, 25 Feb 2021 20:50:20 +0000 (21:50 +0100)]
common/mempool: include standard thread library
Attempt to address FTBFS:
  /home/jenkins-build/build/workspace/ceph-pull-requests/src/test/test_mempool.cc:399:11: error: request for member 'clear' in 'workers', which is of non-class type 'int'
    399 |   workers.clear();
        |           ^~~~~
Igor Fedotov [Fri, 5 Feb 2021 11:03:48 +0000 (14:03 +0300)]
os/bluestore: fix huge (>4GB) writes from RocksDB to BlueFS.
Fixes: https://tracker.ceph.com/issues/49168
Signed-off-by: Igor Fedotov <ifedotov@suse.com>
(cherry picked from commit 5f94883ec8d64c02b2bb499caad8eaf91dd715f7)
Conflicts:
  (lack of the bufferlist refactor from https://github.com/ceph/ceph/pull/36754)
  (lack of single allocator support from https://github.com/ceph/ceph/pull/30838)
  src/os/bluestore/BlueFS.h
  src/test/objectstore/test_bluefs.cc
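The actual fix is in BlueFS's C++ write path; purely as a
language-neutral illustration of the failure mode, a write path limited
to 32-bit lengths cannot take a single >4GB write, so oversized writes
must be split. A hedged Python sketch (dev_write and the chunk limit
are assumptions, not BlueFS code):

    MAX_IO = (1 << 32) - (1 << 12)  # stay safely below the 4GB length limit

    def write_all(dev_write, offset: int, data: bytes) -> None:
        """Split a huge (>4GB) write into chunks the lower layer accepts.

        dev_write(offset, chunk) stands in for the underlying write call.
        """
        for pos in range(0, len(data), MAX_IO):
            dev_write(offset + pos, data[pos:pos + MAX_IO])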
Jianpeng Ma [Mon, 10 Aug 2020 07:56:13 +0000 (15:56 +0800)]
os/bluestore/BlueRocksEnv: Avoid flushing too much data at once.
In the _flush function we already check the length: if the amount of
dirty data is less than bluefs_min_flush_size, we skip the flush. But
in fact we found that RocksDB can call Append() many times and only
then call Flush(), which makes the flushed data much larger than
bluefs_min_flush_size.
In my tests, this patch reduces the 99.99% latency from 145.753ms to
20.474ms for 4k randwrite with bluefs_buffered_io=true.
Because BlueFS::flush acquires a lock, we add a new API, try_flush, to
avoid lock contention.
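BlueFS is C++; the Python sketch below is only illustrative (the file
class, sink, and threshold constant are assumptions) but shows the
shape of the change: flush opportunistically once the accumulated dirty
bytes cross the threshold, and make that path non-blocking so appenders
don't pile up on the flush lock.

    import threading

    MIN_FLUSH_SIZE = 512 * 1024  # stands in for bluefs_min_flush_size

    class WritableFile:
        def __init__(self, sink):
            self.sink = sink              # callable that takes bytes
            self.buffer = bytearray()
            self.lock = threading.Lock()  # stands in for the flush lock

        def append(self, data: bytes) -> None:
            self.buffer += data
            # Flush once enough dirty data accumulates, instead of letting
            # many Append() calls pile up before an explicit Flush().
            if len(self.buffer) >= MIN_FLUSH_SIZE:
                self.try_flush()

        def try_flush(self) -> None:
            # Non-blocking: if another thread holds the lock, skip this
            # round rather than stalling the writer (avoids contention).
            if self.lock.acquire(blocking=False):
                try:
                    self._flush_locked()
                finally:
                    self.lock.release()

        def flush(self) -> None:
            with self.lock:
                self._flush_locked()

        def _flush_locked(self) -> None:
            if self.buffer:
                self.sink(bytes(self.buffer))
                self.buffer.clear()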
Kotresh HR [Fri, 19 Feb 2021 11:27:23 +0000 (16:57 +0530)]
mgr/volumes: Bump up AuthMetadataManager's version
With ceph_volume_client and mgr/volumes co-existing for some time, both
need to be at the same version. ceph_volume_client versions <= 5 can't
decode the 'subvolumes' key in the auth-metadata file. Hence, to handle
the version incompatibility, the ceph_volume_client version is bumped
to 6 and the same needs to be done in mgr/volumes' AuthMetadataManager.
Kotresh HR [Fri, 19 Feb 2021 11:12:33 +0000 (16:42 +0530)]
pybind/ceph_volume_client: Bump up the version and compat_version to 6
With the 'volumes' key updated to 'subvolumes', ceph_volume_client
versions <= 5 can't decode the auth-metadata file. Hence, bump the
ceph_volume_client version and compat_version to 6.
Kotresh HR [Mon, 15 Feb 2021 16:26:51 +0000 (21:56 +0530)]
pybind/ceph_volume_client: Update the 'volumes' key to 'subvolumes' in auth metadata file
Older auth metadata files, from before the Nautilus release, store the
authorized subvolumes under the 'volumes' key. With the notion of
'subvolumes' introduced by mgr/volumes, it makes sense to use a
'subvolumes' key. This patch transparently updates the 'volumes' key to
'subvolumes', and newer auth metadata files store entries under the
'subvolumes' key.
Also, deauthorize now fails if the auth-id doesn't exist.
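ceph_volume_client and mgr/volumes are Python; the sketch below is a
simplified illustration (the file layout and 'auths' key are
assumptions, not the real on-disk schema) of the behaviours described
above: transparently migrating the pre-Nautilus 'volumes' key on load,
gating decode on the bumped version, and failing deauthorize for an
unknown auth-id.

    import json

    VERSION = 6         # version and compat_version both bumped to 6
    COMPAT_VERSION = 6  # readers older than this can't decode 'subvolumes'

    def load_auth_metadata(path: str) -> dict:
        with open(path) as f:
            data = json.load(f)
        if data.get('compat_version', 1) > VERSION:
            raise RuntimeError("auth metadata too new for this client")
        # Transparently migrate the pre-Nautilus 'volumes' key.
        if 'volumes' in data:
            data['subvolumes'] = data.pop('volumes')
        return data

    def deauthorize(auth_meta: dict, auth_id: str) -> None:
        # Fail if the auth-id was never authorized, as described above.
        if auth_id not in auth_meta.get('auths', {}):
            raise RuntimeError("auth ID: %s doesn't exist" % auth_id)
        del auth_meta['auths'][auth_id]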
Matthew Vernon [Thu, 4 Feb 2021 11:41:14 +0000 (11:41 +0000)]
rgw/radosgw-admin clarify error when email address already in use
The error message shown when you try to create an S3 user with an
email address that is already associated with another S3 account is
very confusing; this patch makes it much clearer.
To reproduce:
  radosgw-admin user create --uid=foo --display-name="Foo test" --email=bar@domain.invalid
  radosgw-admin user create --uid=test --display-name="AN test" --email=bar@domain.invalid
  could not create user: unable to parse parameters, user id mismatch, operation id: foo does not match: test
With this patch:
  radosgw-admin user create --uid=test --display-name="AN test" --email=bar@domain.invalid
  could not create user: unable to create user test because user id foo already exists with email bar@domain.invalid
Fixes: https://tracker.ceph.com/issues/49137
Fixes: https://tracker.ceph.com/issues/19411
Signed-off-by: Matthew Vernon <mv3@sanger.ac.uk>
(cherry picked from commit 05318d6f71e45a42a46518a0ef17047dfab83990)
It appears that commit 6eb8f30a238 broke the test utility and
its failure was masked by the test case that expected a failure
due to a timeout force-killing the app.
Fixes: https://tracker.ceph.com/issues/49117
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
(cherry picked from commit 8643b046fb4d5b05b4c75b83f16cd8ccc6a8b0a0)
Conflicts:
  qa/workunits/rbd/rbd_mirror_helpers.sh
  - no show_diff function in nautilus
qa/tasks/ceph_manager: s/BytesIO/StringIO/ in stdout for ceph-objectstore-tool
With respect to master, we have moved to using run_ceph_objectstore_tool,
which uses StringIO for stdout and stderr; to keep the changes
compatible with Nautilus, replace the use of BytesIO with StringIO.
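qa/tasks is Python; as a minimal, stand-alone illustration of the
difference (using subprocess rather than teuthology's runner, so the
helper below is an assumption, not qa code): BytesIO captures raw
bytes, while StringIO holds decoded text, so helpers that treat the
captured output as str need the StringIO flavour.

    import subprocess
    from io import StringIO

    def run_tool(args, stdout: StringIO) -> None:
        """Capture a command's output into a StringIO as decoded text."""
        out = subprocess.run(args, capture_output=True).stdout  # raw bytes
        stdout.write(out.decode())  # text API; a BytesIO would keep bytes

    buf = StringIO()
    run_tool(["echo", "pg list"], stdout=buf)
    print(buf.getvalue().strip())  # plain str, no b'...' prefix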
These two vxattrs exist only on the local client side; with them we can
easily tell which mountpoint a file belongs to, and they also help
locate the debugfs path quickly.
Conflicts:
  src/client/Client.cc
  - add the .hidden member because we still need it in nautilus
  src/client/Client.h
  - drop the mirror.info xattr-related code because nautilus does not
    introduce it
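The vxattrs themselves are served by the C++ CephFS client; on the
consuming side they read like any other extended attribute. A tiny
Python illustration (the mount path and vxattr name are hypothetical
placeholders, since the commit text doesn't name the two vxattrs):

    import os

    # Hypothetical placeholders: substitute the two client-side vxattrs
    # this commit introduces; os.getxattr works the same way for any of
    # them on a Linux CephFS mount.
    path = "/mnt/cephfs/somefile"
    value = os.getxattr(path, "ceph.example.vxattr")  # returns bytes
    print(value.decode())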