Rishabh Dave [Fri, 9 Jun 2023 18:54:12 +0000 (00:24 +0530)]
MDSAuthCaps: print a special error message for wrong permissions
Permissions in MDS caps can begin with "r" or "rw", or can be "*" or
"all", but they cannot start with or be just "w" or anything else. This
is confusing for some CephFS users, since MON caps can be just "w".
Command "ceph fs authorize" complains about this to the user. But other
commands (specifically, "ceph auth add", "ceph auth caps",
"ceph auth get-or-create" and "ceph auth get-or-create-key") don't. Make
these commands too print a helpful message, the way "ceph fs authorize"
command does.
Fixes: https://tracker.ceph.com/issues/61666
Signed-off-by: Rishabh Dave <ridave@redhat.com>
(cherry picked from commit f163dd3ef1fd9f05f05fa50eda9993225770d524)
Conflicts:
src/mds/MDSAuthCaps.cc
"std::string" was replace "string" in main but that's not the case
in Reef.
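For illustration, the shape of the check being added is roughly the following (a minimal Python sketch only; the real validation is C++ in src/mds/MDSAuthCaps.cc, and the exact message text here is an assumption):

    # Minimal sketch of the permission-prefix rule; illustrative only.
    # The real parser is C++ (src/mds/MDSAuthCaps.cc) and its error
    # text may differ from the message assumed below.
    def validate_mds_cap_spec(spec: str) -> None:
        if spec in ("*", "all") or spec.startswith("r"):
            return
        raise ValueError(
            "MDS cap permissions must start with 'r' or be '*'/'all'; "
            f"'{spec}' is not allowed (unlike MON caps, which may be just 'w')")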
Rishabh Dave [Sat, 24 Jun 2023 17:11:07 +0000 (22:41 +0530)]
qa/ceph_test_case: add a method to negative test Ceph commands
Also, add comments explaining to users the arguments accepted by the
run_ceph_cmd(), get_ceph_cmd_result(), get_ceph_cmd_stdout() and
negtest_ceph_cmd() methods of class RunCephCmd.
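Roughly, such a negative-test helper has the following shape (a simplified sketch using subprocess; the real negtest_ceph_cmd() runs through the teuthology remote runner, so the names and arguments here are assumptions):

    import subprocess

    # Simplified sketch of the idea behind negtest_ceph_cmd(); the real
    # helper in qa/tasks/ceph_test_case.py drives the command through
    # the teuthology remote runner, not subprocess.
    def negtest_ceph_cmd(args, retval=None, errmsgs=None):
        proc = subprocess.run(["ceph"] + args, capture_output=True, text=True)
        assert proc.returncode != 0, "command was expected to fail but succeeded"
        if retval is not None:
            assert proc.returncode == retval
        if errmsgs is not None:
            stderr = proc.stderr.lower()
            assert any(m.lower() in stderr for m in errmsgs), stderr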
Rishabh Dave [Wed, 9 Aug 2023 12:40:32 +0000 (18:10 +0530)]
qa: inherit RunCephCmd in CephTestCase instead of CephFSTestCase
MgrTestCase also needs RunCephCmd. If RunCephCmd is inherited by
CephTestCase instead of CephFSTestCase, MgrTestCase will automatically
inherit RunCephCmd because it inherits CephTestCase.
Fixes: https://tracker.ceph.com/issues/62084
Signed-off-by: Rishabh Dave <ridave@redhat.com>
(cherry picked from commit 4b369cf18ed1391a426ab4ae86da834e9c074f81)
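The resulting class relationship is roughly as follows (a simplified sketch; the real classes in qa/tasks/ carry much more state and behaviour):

    # Simplified sketch of the inheritance change; the real classes live
    # in qa/tasks/ceph_test_case.py, qa/tasks/cephfs/cephfs_test_case.py
    # and qa/tasks/mgr/mgr_test_case.py.
    class RunCephCmd:
        def run_ceph_cmd(self, *args, **kwargs): ...
        def get_ceph_cmd_result(self, *args, **kwargs): ...
        def get_ceph_cmd_stdout(self, *args, **kwargs): ...
        def negtest_ceph_cmd(self, *args, **kwargs): ...

    class CephTestCase(RunCephCmd):      # mixin now sits on the common base
        pass

    class CephFSTestCase(CephTestCase):  # keeps the helpers, as before
        pass

    class MgrTestCase(CephTestCase):     # now inherits the helpers automatically
        pass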
Rishabh Dave [Mon, 27 Mar 2023 06:21:16 +0000 (11:51 +0530)]
qa/cephfs: use run_ceph_cmd() when cmd output is not needed
In filesystem.py and wherever instances of class Filesystem are used,
use run_ceph_cmd() instead of get_ceph_cmd_stdout() when the output of
the Ceph command is not required.
Rishabh Dave [Mon, 27 Mar 2023 06:09:11 +0000 (11:39 +0530)]
qa/cephfs: add helper methods to filesystem.py
Add run_ceph_cmd(), get_ceph_cmd_stdout() and get_ceph_cmd_result() to
class Filesystem so that running Ceph commands is easier. This affects
not only methods inside class Filesystem but also methods elsewhere
that use an instance of class Filesystem to run Ceph commands.
Instead of writing "self.fs.mon_manager.raw_cluster_cmd()", writing
"self.fs.run_ceph_cmd()" will suffice.
Conflicts:
qa/tasks/cephfs/test_mirroring.py
- Commit e4dd0e41a3a0 was not present on main but is now
present on main as well as on Reef, which leads to a conflict.
- The line located right before one of the patches in this
commit was modified in the latest Reef branch, thus creating a
conflict when the PR branch was rebased on latest Reef.
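The helpers are essentially thin delegations to the existing CephManager methods, roughly as in this simplified sketch (the real methods in qa/tasks/cephfs/filesystem.py do more argument handling, so the exact signatures below are assumptions):

    # Simplified sketch of the delegating helpers added to class Filesystem;
    # the real implementation handles argument unpacking and keyword options.
    class Filesystem:
        def __init__(self, mon_manager):
            self.mon_manager = mon_manager

        def run_ceph_cmd(self, *args, **kwargs):
            # Run a Ceph command when the caller does not need its output.
            return self.mon_manager.run_cluster_cmd(args=args, **kwargs)

        def get_ceph_cmd_stdout(self, *args, **kwargs):
            # Run a Ceph command and return its stdout.
            return self.mon_manager.raw_cluster_cmd(*args, **kwargs)

        def get_ceph_cmd_result(self, *args, **kwargs):
            # Run a Ceph command and return only its exit status.
            return self.mon_manager.raw_cluster_cmd_result(*args, **kwargs)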
Rishabh Dave [Thu, 16 Mar 2023 10:02:39 +0000 (15:32 +0530)]
qa/cephfs: add and use get_ceph_cmd_stdout()
Add method get_ceph_cmd_stdout() to class CephFSTestCase so that one
doesn't have to type something as long as
"self.mds_cluster.mon_manager.raw_cluster_cmd()" to execute a
command and get its output. Also, delete CephFSTestCase.run_cluster_cmd()
and replace its uses with this new method.
Conflicts:
qa/tasks/cephfs/caps_helper.py
- This file is very different in Reef.
qa/tasks/cephfs/test_mirroring.py
- Commit e4dd0e41a3a0 was not present on main but it is now
present on main as well as on Reef, which leads to conflict.
- On the Reef branch, the line before that patch in this commit was
modified, thus creating a conflict when the PR branch for this commit
series was rebased on latest Reef.
Rishabh Dave [Thu, 16 Mar 2023 09:41:08 +0000 (15:11 +0530)]
qa/cephfs: add and use run_ceph_cmd()
Instead of writing something as long as
"self.mds_cluster.mon_manager.run_cluster_cmd()" to execute a command,
let's add a helper method to class CephFSTestCase and use it instead.
With this, running a command becomes simple - "self.run_ceph_cmd()".
Conflicts:
qa/tasks/cephfs/test_damage.py
This file is slightly different because commit
c8f8324ee2fae48e8d3c2bbdbf45cc9ffe46fd4c was merged on main and
backported after the commit being cherry-picked here was merged
in main.
Rishabh Dave [Tue, 14 Mar 2023 19:43:56 +0000 (01:13 +0530)]
qa/cephfs: add and use get_ceph_cmd_result()
To run a command and get its return value, instead of typing something
as long as "self.mds_cluster.mon_manager.raw_cluster_cmd_result()", add
a helper method in CephFSTestCase and use it. This makes the task very
simple: "self.get_ceph_cmd_result()".
Also, remove method CephFSTestCase.run_cluster_cmd_result() in favour of
this new method.
Rishabh Dave [Mon, 13 Mar 2023 13:05:50 +0000 (18:35 +0530)]
qa/cephfs: create CephManager instance in CephFSTestCase
To run a Ceph command conveniently, run_cluster_cmd(), raw_cluster_cmd()
or raw_cluster_cmd_result() must be called. These methods are available
in class CephManager which in turn is available only if an instance of
Filesystem, MDSCluster, CephCluster or MgrCluster is initialized. Having
an instance of CephManager in CephFSTestCase will provide easy access to
these methods.
For example, in CephFS tests, writing "self.mon_manager.raw_cluster_cmd()"
instead of "self.mds_cluster.mon_manager.raw_cluster_cmd()" will
suffice.
This commit provides a basis for upcoming commits in this patch series.
With the next patches, running a Ceph command will be further
simplified: just writing self.run_ceph_cmd() will suffice for running a
CephFS command.
Aashish Sharma [Wed, 4 Oct 2023 09:07:42 +0000 (14:37 +0530)]
mgr/dashboard: upgrade from old 'graph' type panels to the new
'timeseries' panel
The 'graph' panel type is deprecated and is no longer offered after
Grafana v9.1 (the current version is 10.0), to prevent more panels of
the old type from being created. Existing panels should be migrated to
the 'timeseries' panel type to avoid potential problems with future
Grafana versions.
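Conceptually the migration is a mechanical rewrite of the panel type in each dashboard definition, roughly as in this illustrative sketch (the actual change edits the Grafana dashboard sources shipped with the dashboard module rather than rewriting JSON at runtime):

    import json

    # Illustrative sketch: rewrite deprecated 'graph' panels as 'timeseries'
    # panels in a Grafana dashboard JSON file.
    def migrate_panels(dashboard_path: str) -> None:
        with open(dashboard_path) as f:
            dashboard = json.load(f)
        for panel in dashboard.get("panels", []):
            if panel.get("type") == "graph":
                panel["type"] = "timeseries"
        with open(dashboard_path, "w") as f:
            json.dump(dashboard, f, indent=2)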
* refs/pull/56479/head:
pybind/mgr/devicehealth: skip legacy objects that cannot be loaded
qa: test devicehealth legacy load of deleted snap obj
qa: allow failing whatever the active mgr is
qa: add unit tests for MgrMap down flag
mon/MgrMonitor: add "down" setting to simplify testing
Niklas Hambüchen [Sat, 30 Mar 2024 16:42:48 +0000 (17:42 +0100)]
doc/rados/operations: Improve crush_location docs
* Fix incorrect syntax
* Use underscores for config options, like other ceph docs did
* Fix incorrect statement that crush_location_hook adds fields; it replaces them
* Explain `root=default host=HOSTNAME` is not set if `crush_location` is given
* Remove duplication across sections
* Point out that `root=default` is important
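For illustration, a crush_location setting in ceph.conf takes the key=value form discussed above (HOSTNAME is a placeholder; this is an example, not the exact wording of the docs):

    [osd]
    # When crush_location is set explicitly, the implicit
    # "root=default host=HOSTNAME" location is not applied, so
    # keeping root=default in the value matters.
    crush_location = root=default host=HOSTNAME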
Afreen [Fri, 1 Mar 2024 07:26:25 +0000 (12:56 +0530)]
mgr/dashboard: Locking improvements in bucket create form
Fixes https://tracker.ceph.com/issues/64658
- Addition of help texts
- Addition of info/warnings related to modes and versioning
- Change of Locking section layout
- Rename locking to 'Object Locking'
- Change the default retention period to 10
- Edit bucket form shows lock settings only when locking is enabled
Patrick Donnelly [Wed, 27 Mar 2024 13:02:43 +0000 (09:02 -0400)]
Merge PR #54468 into reef
* refs/pull/54468/head:
mds,client: update the oldest_client_tid via the renew caps
mds: add trim_completed_request_list() helper
client: return false if cannot link all the way to mountpoint
client: use the fs' full path instead of from mountpoint's root
qa/tasks/cephfs/test_admin: run root_squash tests only for FUSE client
qa/tasks/cephfs: Add reproducer for https://tracker.ceph.com/issues/56067
qa: add test for checking access in client side of root_squash
qa: add sudo paramter for read_file()
test/libcephfs: remove reduntant test for acccess
mds/Server: disallow clients that have root_squash
mds/Locker: remove session check access when doing cap updates
client: check the cephx mds auth access for open
client: always set the caller_uid/gid to -1
mds: add CEPHFS_FEATURE_MDS_AUTH_CAPS_CHECK feature bit
client: check the cephx mds auth access for setattr
client: save the cap_auths in client when session being opened
client: add make_path_string() helpers support
client: add _get_root_ino() helper support
test/libcephfs: add a tag for each test unique directory
client: rename MAY_* to CLIENT_MAY_* to avoid conflicts
mds: send the cap_auths to clients when openning the sessions
mds: add cap_auths in MClientSession
mds: add MDSCapAuth support
mds: encode/decode the MDSCapMatch
mds: add assign operator support for MDSCapMatch
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Xiubo Li [Thu, 19 Oct 2023 02:20:55 +0000 (10:20 +0800)]
client: use the fs' full path instead of from mountpoint's root
The mountpoint's root ino# may not be the full CephFS filesystem
root; it may just be the mountpoint of this particular client.
So prepend the mountpoint path to build the full path.
Introduced-by: c1bf8d88e9d client: check the cephx mds auth access for setattr
Introduced-by: ce216595c03 client: check the cephx mds auth access for open
Fixes: https://github.com/ceph/ceph/pull/48027#issuecomment-1741019086
Signed-off-by: Xiubo Li <xiubli@redhat.com>
(cherry picked from commit e46dc20cdfb157f94781032451057d1e138535cc)
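In a simplified Python sketch (the actual change is in the C++ client; the function name here is illustrative), the idea is:

    import posixpath

    # Illustrative sketch of the fix: the path used for the MDS auth check
    # must be relative to the CephFS filesystem root, so the client's
    # mountpoint path is prepended to the locally resolved path.
    def full_fs_path(mountpoint_path: str, path_from_mount_root: str) -> str:
        return posixpath.join(mountpoint_path, path_from_mount_root.lstrip("/"))

    # A client mounted at /volumes/grp/vol checking access to "dir/file"
    # should test its caps against "/volumes/grp/vol/dir/file", not "/dir/file".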
Ramana Raja [Mon, 8 Aug 2022 18:33:06 +0000 (14:33 -0400)]
qa/tasks/cephfs: Add reproducer for https://tracker.ceph.com/issues/56067
A kernel CephFS client with MDS root_squash caps is able to write to a
file as a non-root user. However, the data written is lost after
clearing the kernel client cache or re-mounting the client. This issue
is not observed with a FUSE CephFS client.
Xiubo Li [Wed, 2 Nov 2022 01:12:16 +0000 (09:12 +0800)]
qa: add test for checking access in client side of root_squash
Test 'chown' and 'truncate', which will call setattr, and 'cat', which
will open the files. Before each test, open the file as a non-root user
and keep it open to make sure the Fxw caps are issued, and then use
'sudo' to do the tests, which will set the uid/gid to 0/0.
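In outline, each check follows this shape (a simplified sketch; the real test drives the commands through the CephFS mount object in qa/, so the helper names used here are assumptions):

    # Simplified sketch of the test pattern described above; helper names
    # (open_background, run_shell, kill_background) are assumptions here.
    def check_root_squash_client_side(mount, path):
        # Open the file as a non-root user first and keep it open, so the
        # client holds the Fxw caps when the privileged operation runs.
        p = mount.open_background(path)
        try:
            # 'sudo' sets uid/gid to 0/0, which root_squash must squash:
            # chown/truncate exercise setattr, cat exercises open.
            mount.run_shell(['sudo', 'chown', 'root', path], check_status=False)
            mount.run_shell(['sudo', 'truncate', '-s', '0', path], check_status=False)
            mount.run_shell(['sudo', 'cat', path], check_status=False)
        finally:
            mount.kill_background(p)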
Ramana Raja [Tue, 15 Nov 2022 19:00:24 +0000 (14:00 -0500)]
mds/Server: disallow clients that have root_squash
... MDS auth caps but don't have CEPHFS_FEATURE_MDS_AUTH_CAPS_CHECK
feature bit (i.e., can't check the auth caps sent back to it by the
MDS) from establishing a session. Do this in
Server::handle_client_session() and Server::handle_client_reconnect(),
where old clients try to reconnect to MDS servers after an upgrade.
If the client doesn't have the ability to authorize session access
based on the MDS auth caps sent back to it by the MDS, then the
client may buffer changes locally during open and setattr operations
when it's not supposed to, e.g., when enforcing root_squash MDS auth
caps.
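The gate itself is simple; a rough Python rendering of the logic (the real check is C++ in Server::handle_client_session() and Server::handle_client_reconnect()):

    # Rough rendering of the session gate described above; the actual
    # implementation is C++ in the MDS Server code.
    def may_open_session(caps_have_root_squash: bool,
                         client_has_auth_caps_check_feature: bool) -> bool:
        # A client whose MDS auth caps include root_squash must be able
        # to enforce those caps itself (CEPHFS_FEATURE_MDS_AUTH_CAPS_CHECK);
        # otherwise it may buffer writes the MDS would reject, so refuse
        # the session.
        if caps_have_root_squash and not client_has_auth_caps_check_feature:
            return False
        return True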
Xiubo Li [Fri, 9 Sep 2022 04:17:06 +0000 (12:17 +0800)]
client: always set the caller_uid/gid to -1
Since setattr will check the cephx MDS auth access before buffering
the changes, it makes no sense any more to let the cap update check
the access in the MDS again.
Xiubo Li [Tue, 25 Apr 2023 09:31:25 +0000 (17:31 +0800)]
client: add make_path_string() helpers support
This will be used to get the path string for the MDS auth check. It
may fail when there is no dentry in the local cache, which could happen
if the last dentry was just unlinked while the inode is kept open and
an attempt is then made to change the mode.
Patrick Donnelly [Thu, 21 Dec 2023 13:48:33 +0000 (08:48 -0500)]
pybind/mgr/devicehealth: skip legacy objects that cannot be loaded
The log looks like this after the test:
2023-12-21T16:09:28.804+0000 7fbe7fd86700 0 [devicehealth DEBUG root] loading object ABC_DEADB33F_FA
2023-12-21T16:09:28.805+0000 7fbe7fd86700 0 [devicehealth DEBUG root] object rados.Object(ioctx=<rados.Ioctx object at 0x7fbeee0c4668>,key=ABC_DEADB33F_FA,nspace=--default--,locator=None) does not exist because it is deleted in HEAD
2023-12-21T16:09:28.805+0000 7fbe7fd86700 0 [devicehealth DEBUG root] finished reading legacy pool, complete = True
Credit to Greg Farnum for postulating the cause.
Fixes: https://tracker.ceph.com/issues/63882
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
(cherry picked from commit 5e6fc0bf5f52732966d5cf2987e679abee8a384d)
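The fix amounts to tolerating ObjectNotFound when reading such a legacy object, roughly as in this sketch (only the shape of the change; building the read op and the surrounding module code are elided):

    import rados

    # Sketch of the shape of the fix in devicehealth's _load_legacy_object():
    # an object deleted in HEAD (surviving only in a snapshot) raises
    # ObjectNotFound on read and is skipped instead of crashing the module's
    # serve thread.
    def load_legacy_object(log, ioctx, op, oid) -> bool:
        try:
            ioctx.operate_read_op(op, oid)
        except rados.ObjectNotFound:
            # Matches the DEBUG line shown in the log excerpt above.
            log.debug(f"object {oid} does not exist because it is deleted in HEAD")
            return False
        return True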
Patrick Donnelly [Thu, 21 Dec 2023 15:39:03 +0000 (10:39 -0500)]
qa: test devicehealth legacy load of deleted snap obj
The failure without the fix looks like this:
2023-12-21T16:05:55.737+0000 7fbe585b0700 0 [devicehealth DEBUG root] loading object ABC_DEADB33F_FA
2023-12-21T16:05:55.737+0000 7fbe585b0700 -1 log_channel(cluster) log [ERR] : Unhandled exception from module 'devicehealth' while running on mgr.x: [errno 2] RADOS object not found (Failed to operate read op for oid ABC_DEADB33F_FA)
2023-12-21T16:05:55.737+0000 7fbe585b0700 -1 devicehealth.serve:
2023-12-21T16:05:55.737+0000 7fbe585b0700 -1 Traceback (most recent call last):
File "/home/pdonnell/ceph/src/pybind/mgr/devicehealth/module.py", line 394, in serve
self._do_serve()
File "/home/pdonnell/ceph/src/pybind/mgr/mgr_module.py", line 524, in check
return func(self, *args, **kwargs)
File "/home/pdonnell/ceph/src/pybind/mgr/devicehealth/module.py", line 354, in _do_serve
finished_loading_legacy = self.check_legacy_pool()
File "/home/pdonnell/ceph/src/pybind/mgr/devicehealth/module.py", line 326, in check_legacy_pool
if self._load_legacy_object(ioctx, obj.key):
File "/home/pdonnell/ceph/src/pybind/mgr/devicehealth/module.py", line 300, in _load_legacy_object
ioctx.operate_read_op(op, oid)
File "rados.pyx", line 3723, in rados.Ioctx.operate_read_op
rados.ObjectNotFound: [errno 2] RADOS object not found (Failed to operate read op for oid ABC_DEADB33F_FA)