Conflicts:
qa/workunits/cephtool/test.sh
- drop unrelated "# test elector" comment (elector test not backported)
- no "test_mon_priority_and_weight" function in nautilus
Conflicts:
PendingReleaseNotes - add release notes about this feature
qa/tasks/mgr/test_progress.py - replace helper functions that are needed
for dealing with more than one type of event
- remove `period` in wait_until_equal()
src/pybind/mgr/progress/module.py - remove code that deals with mypy in master
qa/suites/rados/singleton/all/pg-autoscaler-progress-off.yaml - remove log-ignorelist
The ceph-volume lvm batch --auto mode introduced by [1] breaks backward
compatibility when using only non-rotational devices (SSD and/or NVMe).
Those devices are reassigned as bluestore db or filestore journal
devices while we want them as data devices.
Kefu Chai [Thu, 25 Jun 2020 02:41:30 +0000 (10:41 +0800)]
mgr: avoid false alarm of MGR_MODULE_ERROR
mgr sends a health report periodically; the report includes
information on whether the always-on modules are loaded or not. but the
modules are loaded in two steps:
1. load the options and commands exposed by modules. the options and
commands are registered using static methods of the subclasses of
MgrModule.
2. create an instance of the subclass of MgrModule. this is performed
in the background by a Finisher thread. upon finishing the construction
of the instance, ActivePyModules::start_one() adds the module which
successfully created its instance to `modules`.
but there is a chance that when mgr sends the health report, an
always-on module is still creating its instance of the MgrModule
subclass, or that task is still pending in the finisher thread. in that
case, mgr would add a false error message like
```
4 mgr modules have failed (MGR_MODULE_ERROR)
```
in the health report.
in this change, the number of modules in the pending state is tracked,
and mgr will not take the missing always-on modules into account unless
the number of pending modules is 0.
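a minimal standalone sketch of that gating logic, with illustrative
names rather than the actual mgr code:
```
#include <iostream>
#include <set>
#include <string>

// Sketch of the fix: do not report always-on modules as failed while
// any module construction is still pending in the finisher thread.
struct ModuleRegistry {
  std::set<std::string> always_on;  // modules that must be running
  std::set<std::string> running;    // instances created so far
  int num_pending = 0;              // constructions queued in the finisher

  bool should_raise_module_error() const {
    if (num_pending > 0) {
      return false;  // a "missing" module may just not have started yet
    }
    for (const auto& name : always_on) {
      if (!running.count(name)) {
        return true;  // genuinely failed: nothing pending, still missing
      }
    }
    return false;
  }
};

int main() {
  ModuleRegistry r;
  r.always_on = {"balancer", "crash"};
  r.running = {"crash"};
  r.num_pending = 1;  // balancer still being constructed
  std::cout << r.should_raise_module_error() << "\n";  // 0: no false alarm
  r.num_pending = 0;
  std::cout << r.should_raise_module_error() << "\n";  // 1: real failure
}
```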
Dimitri Savineau [Mon, 26 Oct 2020 19:12:59 +0000 (15:12 -0400)]
ceph-volume: consume mount opt in simple activate
When running the ceph-volume simple activate command on a Filestore
OSD, the data device is mounted without any specific options, so the
ones from the ceph configuration file are ignored.
When deploying Filestore with the lvm subcommand everything is fine
because the filestore_activate method uses mount_osd, which relies
on the mount options defined in the ceph configuration file (if any).
Ilya Dryomov [Fri, 16 Oct 2020 10:57:50 +0000 (12:57 +0200)]
mon/MonClient: bring back CEPHX_V2 authorizer challenges
Commit c58c5754dfd2 ("msg/async/ProtocolV1: use AuthServer and
AuthClient") introduced a backwards compatibility issue into msgr1.
To fix it, commit 321548010578 ("mon/MonClient: skip CEPHX_V2
challenge if client doesn't support it") set out to skip authorizer
challenges for peers that don't support CEPHX_V2. However, it
made it so that authorizer challenges are skipped for all peers in
both msgr1 and msgr2 cases, effectively disabling the protection
against replay attacks that was put in place in commit f80b848d3f83
("auth/cephx: add authorizer challenge", CVE-2018-1128).
This is because con->get_features() always returns 0 at that
point. In the msgr1 case, the peer shares its features along with the
authorizer, but while they are available in connect_msg.features they
aren't assigned to con until ProtocolV1::open(). In the msgr2 case, the
peer doesn't share its features until much later (in the CLIENT_IDENT
frame, i.e. after the authentication phase). The result is that the
!CEPHX_V2 branch is taken in all cases and replay attack protection
is lost.
Only clusters with cephx_service_require_version set to 2 on the
service daemons would not be silently downgraded. But, since the
default is 1 and there are no reports of looping on BADAUTHORIZER
faults, I'm pretty sure that no one has ever done that. Note that
cephx_require_version set to 2 would have no effect even though it
is supposed to be stronger than cephx_service_require_version
because MonClient::handle_auth_request() didn't check it.
To fix:
- for msgr1, check connect_msg.features (as was done before commit c58c5754dfd2) and challenge if CEPHX_V2 is supported. Together
with two preceding patches that resurrect proper cephx_* option
handling in msgr1, this covers both "I want old clients to work"
and "I wish to require better authentication" use cases.
- for msgr2, don't check anything and always challenge. CEPHX_V2
predates msgr2, anyone speaking msgr2 must support it.
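a minimal standalone sketch of the resulting decision, with a
placeholder feature bit rather than the real msgr constants:
```
#include <cstdint>
#include <cstdio>

constexpr uint64_t CEPHX_V2 = 1ULL << 61;  // placeholder feature bit

// msgr2 always challenges; msgr1 consults the features that arrived
// with the authorizer (connect_msg.features).
bool should_challenge(bool is_msgr2, uint64_t peer_features) {
  if (is_msgr2) {
    return true;  // msgr2 postdates CEPHX_V2, so every peer supports it
  }
  return (peer_features & CEPHX_V2) != 0;
}

int main() {
  printf("%d\n", should_challenge(true, 0));          // 1: msgr2 always
  printf("%d\n", should_challenge(false, CEPHX_V2));  // 1: msgr1, v2 peer
  printf("%d\n", should_challenge(false, 0));         // 0: legacy msgr1 peer
}
```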
Conflicts:
src/msg/async/ProtocolV1.cc [ commit c58c5754dfd2
("msg/async/ProtocolV1: use AuthServer and AuthClient") not
in nautilus. This means that only msgr2 is affected, so drop
ProtocolV1.cc hunk. As a result, skip_authorizer_challenge is
never set, but this is fine because msgr1 still uses old ms_*
auth methods and tests CEPHX_V2 appropriately. ]
This was added in commit 9bcbc2a3621f ("mon,msg: implement
cephx_*_require_version options") and inadvertently dropped in
commit e6f043f7d2dc ("msgr/async: huge refactoring of protocol V1").
As a result, service daemons don't enforce cephx_require_version
and cephx_cluster_require_version options and connections without
CEPH_FEATURE_CEPHX_V2 are allowed through.
(cephx_service_require_version enforcement was brought back a
year later in commit 321548010578 ("mon/MonClient: skip CEPHX_V2
challenge if client doesn't support it"), although the peer gets
TAG_BADAUTHORIZER instead of TAG_FEATURES.)
Resurrect the original behaviour: all cephx_*require_version
options are enforced and the peer gets TAG_FEATURES, signifying
that it is missing a required feature.
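a minimal sketch of that enforcement, with placeholder tag values
standing in for the real protocol constants:
```
#include <cstdio>

enum Tag { TAG_OK, TAG_BADAUTHORIZER, TAG_FEATURES };

// When a peer lacks CEPHX_V2 but a cephx_*require_version option
// demands version 2, reply with TAG_FEATURES so the peer learns it is
// missing a required feature instead of an opaque TAG_BADAUTHORIZER.
Tag check_cephx_version(int required_version, bool peer_has_cephx_v2) {
  if (required_version >= 2 && !peer_has_cephx_v2) {
    return TAG_FEATURES;  // missing feature, not a bad authorizer
  }
  return TAG_OK;
}

int main() {
  printf("%d\n", check_cephx_version(2, false));  // 2: TAG_FEATURES
  printf("%d\n", check_cephx_version(1, false));  // 0: allowed through
}
```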
Ilya Dryomov [Fri, 16 Oct 2020 09:33:32 +0000 (11:33 +0200)]
msg/async/ProtocolV1: resurrect "include MGR as service when applying cephx settings"
This was added in commit 0ec7d6bbc4af ("msg/async,simple: include MGR
as service when applying cephx settings") and inadvertently dropped in
commit e6f043f7d2dc ("msgr/async: huge refactoring of protocol V1").
As a result, mgr daemons are miscategorized as clients when enforcing
cephx_*require_signatures options.
Conflicts:
src/test/mon/MonMap.cc
- do not attempt to introduce boost::intrusive_ptr into nautilus
- monmap.build_initial takes bare cct in nautilus (master: cct.get())
blk/kernel: retry forever if bdev_flock_retry is 0
retry forever if cct->_conf->bdev_flock_retry is 0.
systemd-udevd is most likely the reason why ceph-osd fails to
acquire the flock during "mkfs": systemd-udevd probes all block
devices with libblkid whenever a device changes in the system, and
when it starts looking at a device it takes a `LOCK_SH|LOCK_NB` lock,
releasing it as soon as it is done. so normally the lock is only held
for a jiffy, see
https://github.com/systemd/systemd/blob/ee0b9e721a368742ac6fa9c3d9a33e45dc3203a2/src/shared/lockfile-util.c#L18
so we just need to retry a couple of times before acquiring the
lock.
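a minimal standalone sketch of this retry policy, with illustrative
names (not the actual KernelDevice code):
```
#include <cerrno>
#include <cstdio>
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

// A limit of 0 means retry forever, since systemd-udevd holds its
// LOCK_SH for only an instant while probing the device.
int lock_with_retry(int fd, int max_retries, useconds_t interval_us) {
  for (int attempt = 0; ; ++attempt) {
    if (::flock(fd, LOCK_EX | LOCK_NB) == 0) {
      return 0;
    }
    if (errno != EAGAIN) {
      return -errno;  // a real error, not a transient lock holder
    }
    if (max_retries > 0 && attempt + 1 >= max_retries) {
      return -EAGAIN;  // give up only when a finite limit is set
    }
    ::usleep(interval_us);
  }
}

int main() {
  int fd = ::open("/tmp/flock-demo", O_CREAT | O_RDWR, 0600);
  printf("lock_with_retry: %d\n",
         lock_with_retry(fd, 0 /* retry forever */, 100 * 1000));
  ::close(fd);
}
```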
blk/kernel: use open file description lock if available
* use an OFD lock if available. OFD locks are Linux specific, and only
available on kernel 3.15 and up. an OFD lock is able to synchronize
both threads and processes, and has simpler semantics. this is just a
cleanup, as we don't create threads for acquiring the flock.
* use BSD flock(2) as a fallback
* return the errno right away, without printing logging messages,
for two reasons:
- writing logging messages would reset the errno.
- the caller of _lock() also prints logging messages along
with strerror(errno)
also drop bdev_flock_retry and bdev_flock_retry_interval from
legacy_config_opts.h, as `KernelDevice::_lock()` is not in the critical
path, there is no need to access these settings via member variables --
get_val<> would just suffice.
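a minimal sketch of that strategy, with illustrative names rather than
the actual KernelDevice::_lock() code:
```
#include <cerrno>
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

// Prefer an OFD lock, fall back to BSD flock(2) where OFD locks are
// unavailable, and return -errno without logging so the caller's errno
// is preserved.
int lock_device(int fd) {
#ifdef F_OFD_SETLK
  struct flock fl;
  memset(&fl, 0, sizeof(fl));
  fl.l_type = F_WRLCK;     // exclusive lock
  fl.l_whence = SEEK_SET;  // l_start = l_len = 0: the whole file
  if (::fcntl(fd, F_OFD_SETLK, &fl) == 0) {
    return 0;
  }
  if (errno != EINVAL) {   // EINVAL: kernel too old for OFD locks
    return -errno;         // no logging here: it would clobber errno
  }
#endif
  if (::flock(fd, LOCK_EX | LOCK_NB) == 0) {
    return 0;
  }
  return -errno;
}

int main() {
  int fd = ::open("/tmp/ofd-demo", O_CREAT | O_RDWR, 0600);
  printf("lock_device: %d\n", lock_device(fd));
  ::close(fd);
}
```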
Mykola Golub [Mon, 13 Jan 2020 08:36:39 +0000 (08:36 +0000)]
mgr: fix race between module load and notify
When starting a module, there was a time window between registering
the module in the module list and loading it in the finisher thread,
during which a notify could be delivered to a not fully initialized
module.
We can avoid this by delaying registering the module in the module
list until it is successfully initialized.
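a minimal standalone sketch of the reordering, with illustrative types
(not the actual ActivePyModules code):
```
#include <map>
#include <memory>
#include <mutex>
#include <string>

struct Module {
  explicit Module(std::string n) : name(std::move(n)) {}
  void notify(const std::string& event) { last_event = event; }
  std::string name;
  std::string last_event;
};

std::mutex lock;
std::map<std::string, std::shared_ptr<Module>> modules;

void start_one(const std::string& name) {
  auto m = std::make_shared<Module>(name);
  // ... module initialization runs to completion here, outside the map ...
  std::lock_guard<std::mutex> l(lock);
  modules[name] = m;  // publish only once fully initialized
}

void notify_all(const std::string& event) {
  std::lock_guard<std::mutex> l(lock);
  for (auto& kv : modules) {
    kv.second->notify(event);  // never sees a half-constructed module
  }
}

int main() {
  start_one("progress");
  notify_all("pg_summary");
}
```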
When the block device changes, systemd-udevd will open the device,
read some information, and close it. The flock can then fail here,
so we need to retry.
Igor Fedotov [Fri, 29 May 2020 19:28:15 +0000 (22:28 +0300)]
test/objectstore/store_test: kill ExcessiveFragmentation test case.
This test case was introduced by https://github.com/ceph/ceph/pull/18494
to verify allocation failure handling while gifting during bluefs
rebalance. Now it looks outdated, as there is no periodic gifting any
more.
Fixes: https://tracker.ceph.com/issues/45788
Signed-off-by: Igor Fedotov <ifedotov@suse.com>
(cherry picked from commit b852703dd01a66028c0d123ac3579e1611393afe)
Jason Dillaman [Mon, 21 Sep 2020 16:53:37 +0000 (12:53 -0400)]
osdc/ObjectCacher: overwrite might cause stray read request callbacks
In librbd, if readahead is active, there might be a pending read request
for the cache which is then (partially) overwritten by a write request.
This overwrite will cause bh splits and merges which can cause the
bh read callback to fail to invoke the pending read callbacks.
Fixes: https://tracker.ceph.com/issues/46822
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
(cherry picked from commit 94d43165ed7319d163640f38d154f8f80408eb14)
qa/workunits/rbd: yet another attempt to improve rbd-nbd unmap
Previously it could still race: unmap_device returned success because
the device was no longer found in `rbd-nbd list-mapped` output (the
nbd device had been removed), but the test failed because the process
was still found in the ps table.
KeyCount should return object count + common prefix count.
see S3 example: https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html#API_ListObjectsV2_Example_5
rgw: radosgw-admin should paginate internally when listing bucket
Currently `radosgw-admin bucket list ...`, when listing a bucket, asks
for the value of "--max-entries" internally. To list a large bucket
entirely the user would have to set "--max-entries" to a large value
(e.g., 10000000). Internally this doesn't paginate, so it will try to
produce the entire list at once. This can consume a lot of memory, and
there are known cases where this induces an out-of-memory crash.
So now we'll set a maximum pagination size of 10,000. Even with
large values of "--max-entries" it will still be able to produce the
full listing without stressing memory, because it will ask for at most
10,000 entries at a time.
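a minimal standalone sketch of the pagination loop, with illustrative
names (the stand-in list_page() plays the role of one bucket-index
listing request):
```
#include <algorithm>
#include <cstdio>

constexpr long long MAX_PAGE = 10000;  // internal pagination cap

// Stand-in for one paginated bucket-index request: returns how many
// entries a bucket of bucket_size yields starting at offset `start`.
long long list_page(long long start, long long want, long long bucket_size) {
  return std::min(want, std::max(0LL, bucket_size - start));
}

int main() {
  const long long bucket_size = 25000;     // objects in the bucket
  const long long max_entries = 10000000;  // user-supplied --max-entries
  long long listed = 0;
  while (listed < max_entries) {
    long long want = std::min(MAX_PAGE, max_entries - listed);
    long long got = list_page(listed, want, bucket_size);
    if (got == 0) break;  // bucket exhausted
    listed += got;
    printf("page of %lld entries (total %lld)\n", got, listed);
  }
}
```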
Conflicts:
src/rgw/rgw_admin.cc
- RGWRados::Bucket::List.list_objects() does not take null_yield
argument in nautilus
- formatter does not have get() method in nautilus
Or Friedmann [Thu, 23 Jul 2020 15:36:07 +0000 (18:36 +0300)]
rgw: fix expiration header returned even if there is only one tag in the object the same as the rule
Expiration header returned even if there is only one tag in the object the same as the rule
Signed-off-by: Or Friedmann <ofriedma@redhat.com>
Reported-by: Avi Mor <avmor@redhat.com>
Fixes: https://tracker.ceph.com/issues/46614
(cherry picked from commit bf7c7e59f390afb53cb1e30a440ab26bb093c11c)
Conflicts:
src/rgw/rgw_lc.cc
- whitespace (effectively no conflict)
luo rixin [Tue, 1 Sep 2020 09:06:40 +0000 (17:06 +0800)]
rgw/rgw_file: Fix the incorrect lru object eviction
In lookup_fh, when an RGWFileHandle is not found in fh_cache, we need
to recycle an object and create a new RGWFileHandle. When multiple
threads use lookup_fh to find and create RGWFileHandles concurrently,
we must make sure the lru object is evicted from the same partition of
fh_cache that the new RGWFileHandle will be inserted into.
Fixes: https://tracker.ceph.com/issues/47235
Signed-off-by: luo rixin <luorixin@huawei.com>
(cherry picked from commit f2097338722d7f2526bb815da47695f2da17fcce)
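a minimal standalone sketch of the invariant, with illustrative types
rather than the actual fh_cache code: the hash that picks the
insertion partition must also pick the eviction partition:
```
#include <cstdio>
#include <functional>
#include <list>
#include <string>
#include <vector>

struct Partition {
  std::list<std::string> lru;  // front = most recently used
};

struct PartitionedCache {
  std::vector<Partition> parts;
  explicit PartitionedCache(size_t n) : parts(n) {}

  Partition& partition_of(const std::string& key) {
    return parts[std::hash<std::string>{}(key) % parts.size()];
  }

  void insert(const std::string& key, size_t max_per_partition) {
    Partition& p = partition_of(key);  // pick the target partition...
    if (p.lru.size() >= max_per_partition) {
      p.lru.pop_back();                // ...and evict from that same one
    }
    p.lru.push_front(key);
  }
};

int main() {
  PartitionedCache cache(8);
  for (const char* k : {"a", "b", "c", "d"}) {
    cache.insert(k, 2);
  }
  printf("%zu partitions\n", cache.parts.size());
}
```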
rgw: rgw-orphan-list should use "plain" formatted `rados ls` output
The previous version that used "json-pretty" output for `rados ls`
added complications due to json's escaping of special characters. So
this version returns to the "plain" output for `rados ls` but deals
with entries (oids) that might have namespaces and/or locators as
well.
rgw: allow rgw-orphan-list to note when rados objects are in namespace
Currently namespaces and locators are ignored when `rados ls` is run
by rgw-orphan-list to record RADOS's known objects.
However there have been cases where RADOS objects have a locator, and
when one is included in the listing, the script does not handle it
correctly. Now when objects have locators, we will prevent their
output from entering the .intermediate file.
Additionally we do not expect RGW data objects to be in RADOS
namespaces, so when a namespaced object is detected, we'll error out
with a message.
rgw: fix setting of namespace in ordered and unordered bucket listing
The namespace is not always set correctly during bucket listing. This
can, for example, cause the listing of incomplete multipart uploads,
which are in the _multipart_ namespace, to not paginate correctly, and
cause entries to be re-listed.
J. Eric Ivancich [Fri, 31 Jan 2020 20:01:40 +0000 (15:01 -0500)]
rgw: fix bug with (un)ordered bucket listing and marker w/ namespace
When listing without specifying a namespace, the returned entries
could be in one or more namespaces. The marker used to continue the
listing may therefore contain a namespace, and that needs to be
preserved. This fixes a bug in both ordered and unordered listings
where it was not preserved.