Mistakenly removed in commit d79f2a81541c ("docs: warning and remove
few docs section for Filestore Update docs after filestore removal.").
The kernel client, however new, will continue to be able to talk to
FileStore OSDs for as long as they exist.
Line-edit doc/rados/user-management.rst (2 of x). Some internal
references had to be removed, but these will be repaired when the next
part of this file is updated in a future PR.
Co-authored-by: Anthony D'Atri <anthony.datri@gmail.com>
Signed-off-by: Zac Dover <zac.dover@proton.me>
librbd: always refresh after creating snapshot in CreatePrimaryRequest
Up until now this was conditioned on whether the caller expressed
interest in the ID of the created snapshot, and it happened to work
only because CreatePrimaryRequest wasn't actually consulting any mirror
snapshot metadata. This has just changed: unlink_peer() now needs to
see an up-to-date complete flag, which is set in SetImageStateRequest
following the write out of the image state object(s).
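A Python-flavoured sketch of the control-flow change (the actual code
is C++ in librbd's CreatePrimaryRequest; all names below are
illustrative stand-ins):

    # Illustrative only -- not the actual librbd C++ code.
    def create_mirror_snapshot(image):
        image['snaps'].append({'id': len(image['snaps']), 'complete': False})
        return image['snaps'][-1]['id']

    def refresh(image):
        pass  # re-read image metadata, including mirror snapshot flags

    def unlink_peer(image):
        pass  # needs the up-to-date 'complete' flag to behave correctly

    def create_primary(image, caller_wants_snap_id):
        snap_id = create_mirror_snapshot(image)
        # Before: refresh happened only if caller_wants_snap_id was true.
        # After: always refresh, so unlink_peer() sees the complete flag
        # written by SetImageStateRequest.
        refresh(image)
        unlink_peer(image)
        return snap_id if caller_wants_snap_id else None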
librbd: remove previous incomplete primary snapshot after successfully creating a new one
Problem:
--------
At a high level, creating a primary snapshot consists of three steps:
1. actually creating a snapshot in the mirror namespace
2. generating a set of image state objects with additional metadata for
the snapshot
3. marking the snapshot as complete after the image state objects are
written out
Depending on the circumstances, a request to create a primary snapshot
can be forwarded to the rbd-mirror daemon. If that happens and the
rbd-mirror daemon is killed for some practical reason after completing
steps (1) and/or (2) but before completing step (3), we are left with a
permanently incomplete primary snapshot, because upon retrying the
primary snapshot creation request, librbd notices that such a snapshot
already exists. It does not check whether this "pre-existing" snapshot
is complete.
Solution:
---------
As part of the next mirror snapshot create (say, one triggered by the
scheduler), unlink_peer() is called. It now checks for any incomplete
primary snapshots and deletes them accordingly.
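A minimal sketch of the cleanup idea, assuming each snapshot entry
carries is_primary/complete flags (the real implementation is C++
inside librbd; names are illustrative):

    # Illustrative only -- not the actual librbd C++ code.
    def unlink_peer_cleanup(image):
        # Remove primary mirror snapshots that never reached step (3),
        # e.g. because rbd-mirror was killed between steps (1)/(2) and (3).
        for snap in list(image['snaps']):
            if snap.get('is_primary') and not snap.get('complete'):
                image['snaps'].remove(snap)

    image = {'snaps': [{'id': 0, 'is_primary': True, 'complete': False}]}
    unlink_peer_cleanup(image)
    assert image['snaps'] == []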
* refs/pull/50089/head:
doc: add a note for minimum compatible python version and supported distros
tools/cephfs/top/CMakeLists.txt: check the minimum compatible python version for cephfs-top
Rishabh Dave [Tue, 18 Apr 2023 14:55:01 +0000 (20:25 +0530)]
qa/cephfs/cap_tester: simplify CapTester and its instantiation
Class CapTester contains two distinct, immiscible groups of methods:
one that tests MON caps and another that tests MDS caps. When CapTester
is used for the former purpose, the instantiation needs neither the
mount object nor the path where test files will be created, and it
doesn't need to run the method that creates files for testing rw
permissions. When the class is used for the latter purpose, the case is
the exact opposite.
Create two separate classes, one for each of these purposes, plus a
class that inherits from both, so that instantiation becomes as simple
as possible.
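A sketch of the resulting layout (method bodies elided; exact names
and signatures may differ from the patch):

    class MonCapTester:
        """Tests MON caps; needs no mount object or test-file path."""

        def run_mon_cap_tests(self, fs, client_id):
            pass  # verify MON caps, e.g. via "ceph fs ls --id ..."

    class MdsCapTester:
        """Tests MDS caps; needs a mount and a path for test files."""

        def __init__(self, mount=None, path=''):
            self.mount, self.path = mount, path

        def run_mds_cap_tests(self, perm):
            pass  # verify rw/ro permissions using the created test files

    class CapTester(MonCapTester, MdsCapTester):
        """Instantiate only when both MON and MDS caps are tested."""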
Rishabh Dave [Thu, 6 Apr 2023 09:42:14 +0000 (15:12 +0530)]
qa/cephfs: move a few methods so that they can be reused
Move get_mon_cap_from_keyring() and get_fsnmes_from_moncap() from the
class CapTester to the main namespace of caps_helper.py so that they
can be imported freely and reused by tests.
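After the move, the helpers are plain module-level functions in
caps_helper.py that tests can import directly. A sketch with
illustrative bodies:

    def get_mon_cap_from_keyring(keyring_text):
        # pull the 'caps mon = "..."' line out of a keyring blob
        for line in keyring_text.splitlines():
            if 'caps mon' in line:
                return line.split('=', 1)[1].strip().strip('"')
        raise RuntimeError('no MON cap found in keyring')

    def get_fsnmes_from_moncap(moncap):
        # extract the fsname=... values from a MON cap string
        return [token.split('=', 1)[1] for token in moncap.split()
                if token.startswith('fsname=')]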
This method checks whether the output of the command "ceph fs ls" for
the client ID it receives is the same as the output printed for
client.admin. Don't do so; limit the test to checking only that "ceph
fs ls --id client.x -k keyring_file" prints the fs names for which
client.x has permissions.
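A sketch of the reduced check, assuming a helper that runs a command
and returns its output (illustrative, not the exact test code):

    def check_fs_ls_with_keyring(run_cmd, client_id, keyring_path, fsname):
        # Only verify that the fs the client has caps for is listed;
        # don't diff the full output against client.admin's output.
        output = run_cmd(f'ceph fs ls --id {client_id} -k {keyring_path}')
        assert fsname in output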
Rishabh Dave [Fri, 31 Mar 2023 19:14:52 +0000 (00:44 +0530)]
qa/cephfs: improve caps_helper.CapTester
Improvement #1:
CapTester.write_test_files() not only creates the test file but also
does the following for every mount object it receives as a parameter -
* carefully produces the path for the test file as per the parameters
received
* generates unique data for each test file on a CephFS mount
* creates a data structure -- a list of lists -- that holds all this
information, along with the mount object itself, for each mount so
that tests can be conducted at a later point
Untangle this mess of code by splitting this method into 3 separate
methods, sketched below -
1. To produce the path for the test file (as per the user's need).
2. To generate the data that will be written into the test file.
3. To actually create the test file on CephFS.
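A sketch of the three-way split (method and attribute names are
illustrative and may not match the patch exactly):

    import os

    class MdsCapTester:

        def _gen_test_file_path(self, mountpoint, filename='testfile'):
            # 1. produce the path for the test file as per the user's need
            return os.path.join(mountpoint, filename)

        def _gen_test_file_data(self, client_id):
            # 2. generate unique data for the test file on a CephFS mount
            return f'data written by client.{client_id}\n'

        def write_test_file(self, mount):
            # 3. actually create the test file on CephFS
            self.path = self._gen_test_file_path(mount.hostfs_mntpt)
            self.data = self._gen_test_file_data(mount.client_id)
            mount.write_file(self.path, self.data)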
Improvement #2:
Remove the internal data structure used for testing -- self.test_set --
and use separate class attributes to store all the data required for
testing instead of a tuple. This serves two purposes -
One, it makes it easy to manipulate all this data from helper methods
and during a debugging session, especially a PDB session.
And two, it makes it impossible to have multiple mounts/multiple "test
sets" within the same CapTester instance, for the sake of simplicity.
Users can instead create two CapTester instances if needed.
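Sketched in terms of the class above (illustrative):

    # Before: self.test_set.append((mount, path, data)) for every mount.
    # After: plain attributes, at most one mount per instance.
    class MdsCapTester:
        def __init__(self, mount, path=''):
            self.mount = mount
            self.path = path    # filled in by _gen_test_file_path()
            self.data = None    # filled in by _gen_test_file_data()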
Rishabh Dave [Thu, 13 Apr 2023 19:08:33 +0000 (00:38 +0530)]
qa/cephfs: don't inherit CephFSTestCase in CapTester
Inheriting CephFSTestCase in CapTester just for the methods
assertEqual() and assertIn() from class unittest.TestCase is odd and
heavyweight. Don't inherit CephFSTestCase; use plain assert statements
instead.
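A sketch of the change (variable names are hypothetical):

    def check_fsnames(fsnames_from_moncap, fsnames_from_fsls):
        # before: self.assertEqual(fsnames_from_moncap, fsnames_from_fsls),
        # which required unittest.TestCase via CephFSTestCase
        # after: a plain assert, no inheritance needed
        assert fsnames_from_moncap == fsnames_from_fsls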
We're currently installing cython with pip when using Ubuntu
to cross-compile Ceph for Windows. This can fail with recent
Python versions if attempting to use the globally managed environment:
error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
python3-xyz, where xyz is the package you are trying to
install.
Cython isn't really needed by the Windows build, so we can go
ahead and drop it. We were hoping to use the Python bindings
on Windows; however, Python extensions can't be cross-compiled.
We're no longer using pip either, so we're dropping that dependency.
g++ was getting installed as a pip dependency, so we'll have to
include that instead. Note that g++ is used when building the boost
b2 tool.
While at it, we'll also ensure that git is installed.
Merge pull request #51055 from ceph/wip-yuriw-release-16.2.12-main
doc: 16.2.12 Release Notes
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
Reviewed-by: Laura Flores <lflores@redhat.com>
Reviewed-by: Guillaume Abrioux <gabrioux@redhat.com>
Reviewed-by: Adam King <adking@redhat.com>
Reviewed-by: Anthony D'Atri <anthony.datri@gmail.com>
librbd: on notify_quiesce() show attempts in a better format
notify_quiesce() currently shows the number of attempts in descending
order, which can be a bit confusing to read.
Example: on the very first attempt,
2023-04-04T19:45:56.096+0530 7ff8ba7fc640 10 librbd::ImageWatcher:
0x7ff898008b30 notify_quiesce: async_request_id=[4151,140705343226832,23] attempts=10
I initially misread the above to mean that 10 attempts had been made.
This commit picks the format that is used by
ImageWatcher<I>::handle_payload() and ImageWatcher<I>::notify_async_progress().
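A hypothetical illustration of the two renderings (the real code is
C++ in librbd's ImageWatcher; the exact new format may differ):

    MAX_NOTIFY_ATTEMPTS = 10

    attempts_left = 10  # value logged on the very first attempt

    # old: counts down, easy to misread as "10 attempts already done"
    print(f"notify_quiesce: attempts={attempts_left}")

    # new: "current/total" style, as in notify_async_progress()
    current = MAX_NOTIFY_ATTEMPTS - attempts_left + 1
    print(f"notify_quiesce: attempt {current}/{MAX_NOTIFY_ATTEMPTS}")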
common/tracer: fix decoding when jaeger tracing is disabled
We aren't currently using jaeger tracing on Windows. The issue is
that Windows hosts (or any other host that doesn't use jaeger)
are experiencing message decoding failures after a recent change [1].
This change updates the tracer encoding so that messages from
non-jaeger hosts may be decoded by services that use jaeger.
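A minimal sketch of the compatibility idea, assuming the trace context
travels as a length-prefixed blob (the real code is C++ in
src/common/tracer; this is not the actual wire format):

    def encode_trace(buf: bytearray, trace_blob: bytes, jaeger_enabled: bool):
        # A host without jaeger still writes the field -- just empty --
        # so a jaeger-enabled peer can decode the message.
        blob = trace_blob if jaeger_enabled else b''
        buf += len(blob).to_bytes(4, 'little') + blob

    def decode_trace(buf: bytes, offset: int):
        n = int.from_bytes(buf[offset:offset + 4], 'little')
        return buf[offset + 4:offset + 4 + n], offset + 4 + n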
qa/suites/rbd: install qemu-utils in addition to qemu-block-extra on Ubuntu
qemu-utils is usually pre-installed but, due to what appears to be
an Ubuntu packaging bug, it's not upgraded when qemu-block-extra is
installed:
The following NEW packages will be installed:
qemu-block-extra
The following packages will be upgraded:
qemu-system-common qemu-system-data qemu-system-gui qemu-system-x86
However, the version of the block driver must exactly match the version
of the qemu-img tool, so the above leads to:
$ qemu-img convert -f qcow2 -O raw /home/ubuntu/cephtest/qemu/base.client.0.0.qcow2 rbd:rbd/client.0.0
Failed to initialize module: /usr/lib/x86_64-linux-gnu/qemu/block-rbd.so
Note: only modules from the same build can be loaded.
qemu: module block-block-rbd not found, do you want to install qemu-block-extra package?
qemu-img: Unknown protocol 'rbd'
Aashish Sharma [Wed, 21 Dec 2022 11:53:37 +0000 (17:23 +0530)]
mgr/dashboard: fix rbd mirror snapshot creation
There are two types of snapshots that can be created on a
snapshot-based mirroring image: a normal snapshot (the same as with
journal-based mirroring) and a mirror image snapshot. Until now the
Dashboard allowed only the mirror image snapshot; this PR enables both
types.
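A sketch of the distinction in terms of the rbd Python bindings (the
dashboard change itself lives in the mgr/dashboard REST layer; the
boolean flag below is illustrative):

    import rbd

    def create_snapshot(ioctx, image_name, snap_name, mirror_image_snapshot):
        with rbd.Image(ioctx, image_name) as image:
            if mirror_image_snapshot:
                # mirror image snapshot: created in the mirror namespace
                image.mirror_image_create_snapshot()
            else:
                # normal snapshot (same as with journal-based mirroring)
                image.create_snap(snap_name)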
Samuel Just [Wed, 8 Mar 2023 01:21:24 +0000 (17:21 -0800)]
osd/: remove PL::reschedule_scrub, notify scrubber on config/pool change directly
As with on_info_history_change(), we don't need to deal with scrub
scheduling during peering. Once we've gone active, the scrubber itself
would be the origin of any stat changes that could affect scrub
scheduling. The other possible change vectors would be OSD config
changes or pool config changes.
PG::reschedule_scrub becomes PG::on_scrub_schedule_input_change, which
should be called in all cases where an input to scrub scheduling
changes.
OSD::resched_all_scrubs() now calls PG::on_scrub_schedule_input_change
unconditionally to deal with changes to osd_scrub_(min|max)_interval.
PG::set_last_[deep_]scrub_stamp now invokes
PG::on_scrub_schedule_input_change directly.
PG::handle_activate_map() now calls PG::on_scrub_schedule_input_change
directly to deal with changes to scrub-related pool options.
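An illustrative Python-flavoured outline of the new call paths (the
actual code is C++ under src/osd/):

    class PG:
        def __init__(self, scrubber):
            self.scrubber = scrubber
            self.last_scrub_stamp = None

        def on_scrub_schedule_input_change(self):
            # single entry point: recompute the scrub schedule from the
            # current stats, OSD config and pool options
            self.scrubber.update_schedule(self)

        def set_last_scrub_stamp(self, stamp):
            self.last_scrub_stamp = stamp
            self.on_scrub_schedule_input_change()  # a scheduling input changed

        def handle_activate_map(self):
            self.on_scrub_schedule_input_change()  # pool options may have changed

    class OSD:
        def __init__(self, pgs):
            self.pgs = pgs

        def resched_all_scrubs(self):
            # osd_scrub_(min|max)_interval changed: notify every PG
            for pg in self.pgs:
                pg.on_scrub_schedule_input_change()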
The only usage of this method was to notify scrub that the pg history
has been updated during split or peering. That shouldn't be necessary.
Scrub does not schedule itself prior to activation, and we necessarily
must have an authoritative history by that point.