Zac Dover [Tue, 21 Jun 2022 14:09:05 +0000 (00:09 +1000)]
doc/dev: add context note to dev guide config
This PR adds a note directing first-time cloners of
their Ceph git forks to make sure to cd into the ceph/
directory before trying to run the "git config" commands.
Kotresh HR [Fri, 18 Mar 2022 06:43:53 +0000 (12:13 +0530)]
mgr/volumes: Disable quota for mgr libcephfs connection
This is done to give the 'mgr' libcephfs connection the right to
bypass quota. The mgr/volumes plugin maintains configuration files
within the directory on which the user has enforced a quota, so
when the quota is reached, certain mgr/volumes APIs don't work as
intended. For example, when a subvolumegroup quota is reached, the
group's subvolume removal with '--retain-snapshots' fails.
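A minimal sketch of how such a bypass can look on the plugin's own
libcephfs connection, assuming the python 'cephfs' bindings and a
'client_quota' client option (the option name and call sequence here
are illustrative, not the actual mgr/volumes code)::

    # Sketch only: open a libcephfs connection that ignores quota limits,
    # so the plugin can keep writing its config files even when the
    # user-facing quota is already full.
    import cephfs

    fs = cephfs.LibCephFS()
    fs.conf_read_file()                    # read the default ceph.conf
    # Assumed option: disables client-side quota enforcement for this
    # connection only; user clients keep enforcing quota as usual.
    fs.conf_set('client_quota', 'false')
    fs.mount()
    # ... mgr/volumes maintenance I/O would go here ...
    fs.shutdown()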
Conflicts:
src/pybind/mgr/volumes/fs/operations/group.py
- Updates in definition of create_groups
src/pybind/mgr/volumes/fs/volume.py
- Added set_group_attrs in import list and split long line
Xiubo Li [Wed, 8 Jun 2022 05:00:20 +0000 (13:00 +0800)]
qa: wait for rank 0 to become up:active before mounting fuse client
When setting the EC pool in the layout, the filesystem may not be
ready yet, so mounting a fuse client will fail. To fix this we need
to wait for at least rank 0 to be in the up:active state.
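A rough illustration of such a wait, polling 'ceph fs dump' until rank 0
reports up:active (a generic sketch, not the actual qa helper; the JSON
field names are those of a typical fs dump and the timeout is arbitrary)::

    import json
    import subprocess
    import time

    def wait_for_rank0_active(fs_name, timeout=120):
        end = time.time() + timeout
        while time.time() < end:
            dump = json.loads(subprocess.check_output(
                ['ceph', 'fs', 'dump', '--format=json']))
            for fs in dump.get('filesystems', []):
                mdsmap = fs['mdsmap']
                if mdsmap['fs_name'] != fs_name:
                    continue
                for info in mdsmap['info'].values():
                    if info.get('rank') == 0 and info.get('state') == 'up:active':
                        return
            time.sleep(2)
        raise RuntimeError('rank 0 did not become up:active in time')

    # Only mount the fuse client after this returns.
    wait_for_rank0_active('cephfs')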
Xiubo Li [Fri, 27 May 2022 05:11:44 +0000 (13:11 +0800)]
client: choose auth MDS for getxattr with the Xs caps
If any 'x' caps are issued we can just choose the auth MDS instead
of a random replica MDS, because only when the Locker is in the
LOCK_EXEC state can the loner client get the 'x' caps. If we send
the getattr request to a replica MDS instead, it must auth-pin and
try to rdlock from the auth MDS, and then the auth MDS needs to
transition the Locker state to LOCK_SYNC, after which the lock state
changes back again.
This Locker state transition is costly and usually requires revoking
caps from clients.
For the 'Xs' caps needed by getxattr we also choose the auth MDS,
because the MDS-side code is buggy: setxattr doesn't notify the
replica MDSes of the increased xattr_version when the values change,
so a replica MDS will return the old xattr_version value. The client
will then just drop the xattr values since it sees that the
xattr_version hasn't changed.
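The resulting targeting rule can be summed up in a small sketch
(illustrative Python pseudocode, not the actual C++ client code)::

    AUTH_MDS = 'auth'
    RANDOM_REPLICA = 'replica'

    def choose_target_mds(issued_caps, is_getxattr):
        # Any exclusive ('x') cap implies the Locker is in an exclusive
        # state; asking a replica would force a costly lock-state
        # transition and cap revocation, so go straight to the auth MDS.
        if 'x' in issued_caps:
            return AUTH_MDS
        # getxattr needs an up-to-date xattr_version and replicas may
        # report a stale one, so prefer the auth MDS for it as well.
        if is_getxattr:
            return AUTH_MDS
        return RANDOM_REPLICA

    assert choose_target_mds({'p', 'x'}, False) == AUTH_MDS
    assert choose_target_mds({'p'}, True) == AUTH_MDS
    assert choose_target_mds({'p'}, False) == RANDOM_REPLICA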
Xiubo Li [Wed, 16 Mar 2022 09:15:57 +0000 (17:15 +0800)]
mds, client: only send the metrics supported by MDSes
For old ceph clusters the clients won't send any metrics by default
unless the clusters have backported this commit, but the option
'client_collect_and_send_global_metrics' can still be used to enable
it manually.
This fixes the crash that happens when upgrading from old ceph
clusters, whose MDSes crash once they receive unknown metrics.
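Conceptually the client now intersects what it can collect with what the
MDS advertises before sending anything, roughly as in this sketch (Python
illustration only; the metric names are made up for the example)::

    CLIENT_METRICS = {'cap_hit', 'read_latency', 'write_latency',
                      'opened_files', 'pinned_icaps'}

    def metrics_to_send(mds_supported, collect_and_send_enabled=True):
        if not collect_and_send_enabled:
            return set()
        # An old MDS advertises nothing, so nothing is sent by default
        # and it never receives a metric type it cannot decode.
        return CLIENT_METRICS & set(mds_supported)

    assert metrics_to_send([]) == set()                   # old cluster
    assert metrics_to_send(['cap_hit']) == {'cap_hit'}    # common subset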
mgr/cephadm: adding logic to close ports when removing a daemon
Fixes: https://tracker.ceph.com/issues/52906
Signed-off-by: Redouane Kachach <rkachach@redhat.com>
(cherry picked from commit 4deb546ffd67ac8f05d2788150764a26b5671b87)
Redouane Kachach [Tue, 31 May 2022 10:11:03 +0000 (12:11 +0200)]
mgr/cephadm: check if a service exists before trying to restart it
Fixes: https://tracker.ceph.com/issues/55800
Signed-off-by: Redouane Kachach <rkachach@redhat.com>
(cherry picked from commit 6b76753c3cabf9663fa1daa47c7bcb7df110a94c)
Redouane Kachach [Tue, 31 May 2022 10:59:26 +0000 (12:59 +0200)]
mgr/cephadm: capture exception when not able to list upgrade tags
Fixes: https://tracker.ceph.com/issues/55801
Signed-off-by: Redouane Kachach <rkachach@redhat.com>
(cherry picked from commit 0e7a4366c0c1edd74d52acad5ed4dc3df0ef7679)
qa: set, get, list and remove custom metadata for snapshot
The following tests are added (a minimal sketch of one check follows
this list):
1. Set custom metadata for a subvolume snapshot.
2. Set custom metadata for a subvolume snapshot (idempotency).
3. Get custom metadata for a specified key.
4. Get custom metadata if the specified key does not exist (expecting error ENOENT).
5. Get custom metadata if no key-value pair has been added, i.e. the section does not exist (expecting error ENOENT).
6. Update the value for an existing key in custom metadata.
7. List custom metadata of a subvolume snapshot.
8. List custom metadata of a subvolume snapshot if no key-value pair has been added (expecting an empty json/dictionary).
9. Remove custom metadata for a specified key.
10. Remove custom metadata if the specified key does not exist (expecting error ENOENT).
11. Remove custom metadata if no key-value pair has been added, i.e. the section does not exist (expecting error ENOENT).
12. Remove custom metadata with the --force option.
13. Remove custom metadata with the --force option if the specified key does not exist (expecting the command to succeed because of the '--force' option).
14. Remove the subvolume snapshot and verify whether the metadata for the snapshot is removed or not.
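A minimal sketch of one such check, driving the documented CLI from
Python (volume, subvolume and snapshot names are placeholders; the real
qa tests use the CephFSTestCase helpers rather than subprocess)::

    import subprocess

    def snapshot_metadata_get(vol, subvol, snap, key):
        return subprocess.run(
            ['ceph', 'fs', 'subvolume', 'snapshot', 'metadata', 'get',
             vol, subvol, snap, key],
            capture_output=True, text=True)

    # Test 4: getting a key that was never set must fail (ENOENT).
    result = snapshot_metadata_get('vol1', 'subvol1', 'snap1', 'no_such_key')
    assert result.returncode != 0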
docs: set, get, list and remove custom metadata for snapshot
Set custom metadata on the snapshot as a key-value pair using::
$ ceph fs subvolume snapshot metadata set <vol_name> <subvol_name> <snap_name> <key_name> <value> [--group_name <subvol_group_name>]
note: If the key_name already exists then the old value will get replaced by the new value.
note: The key_name and value should be a string of ASCII characters (as specified in python's string.printable). The key_name is case-insensitive and always stored in lower case.
note: Custom metadata on a snapshot is not preserved when snapshotting the subvolume, and hence is also not preserved when cloning the subvolume snapshot.
Get custom metadata set on the snapshot using the metadata key::
$ ceph fs subvolume snapshot metadata get <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>]
List custom metadata (key-value pairs) set on the snapshot using::
$ ceph fs subvolume snapshot metadata ls <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
Remove custom metadata set on the snapshot using the metadata key::
$ ceph fs subvolume snapshot metadata rm <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>] [--force]
Using the '--force' flag allows the command to succeed even when it would otherwise fail because the metadata key does not exist.
mgr/volumes: set, get, list and remove custom metadata for snapshot
If CephFS in ODF is configured in external mode, users would like to
use subvolume snapshot metadata to store some Openshift-specific
information, such as the PVC/PV/namespace the subvolumes/snapshots
are coming from. For RBD volumes, it's possible to add metadata
information to the images using the 'rbd image-meta' command.
However, this feature is not available for CephFS volumes.
We'd like to request this capability.
Zac Dover [Tue, 14 Jun 2022 22:15:33 +0000 (08:15 +1000)]
doc/dev: s/master/main/ in basic workflow
This PR changes "master" to "main" in the
basic_workflow.rst file. I have even changed
"master" to "main" in some terminal output from
several years ago. This isn't historically accurate,
of course, but my hope is that this change will
prevent someone in the future from being confused
about why an antiquated branch name is referred to.
Or Friedmann [Tue, 19 Apr 2022 12:00:28 +0000 (12:00 +0000)]
rgw: RGWCoroutine::set_sleeping() checks for null stack
Users of the RGWOmapAppend coroutine don't manage the lifetime of its
underlying coroutine stack, so they end up making calls on RGWOmapAppend
after its stack goes away. This null check is a band-aid, and there are
still several other calls in RGWCoroutine that don't check for a null
stack.
Fixes: https://tracker.ceph.com/issues/49302
Signed-off-by: Or Friedmann <ofriedma@redhat.com>
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 3f0f831d66c7d43c9872f5de2aceb68aef4004d8)
mon/OSDMonitor: Ensure kvmon() is writeable before handling "osd new" cmd
Before proceeding to handle the "osd new" mon command as part of
OSDMonitor::prepare_command_impl(), a check is made to verify that the
authmon is writeable. Later on, prepare_command_osd_new() invokes
KVMonitor::do_osd_new() to create pending dmcrypt keys and calls
propose_pending(). The propose could fail (with an assertion failure)
if there was a prior mon command that resulted in the kvmon invoking
propose_pending().
In order to avoid such a situation, introduce a check to verify that
kvmon is also writeable in OSDMonitor::prepare_command_impl(). If it
is not writeable, the op is pushed into the wait_for_active context
queue to be retried later.
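The control flow amounts to the following sketch (illustrative Python
pseudocode of the monitor logic described above; the names are simplified
and not the actual C++ interfaces)::

    def handle_osd_new(op, authmon, kvmon, wait_for_active_queue):
        # If either monitor cannot accept a proposal right now, park the
        # op and retry later instead of asserting in propose_pending().
        if not authmon.is_writeable() or not kvmon.is_writeable():
            wait_for_active_queue.append(op)
            return None
        # Safe to create the pending dmcrypt key and propose it.
        kvmon.do_osd_new(op)
        kvmon.propose_pending()
        return 'proposed'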
When we are activating we may receive several service map updates
initiated by the previous active mgr. Treat them all as the initial map.
The code also adds a "pending_service_map_dirty == 0" assert, which we
expect to be true when receiving an initial map -- otherwise we can't
just initialize pending_service_map with the received map.
Xiubo Li [Wed, 19 Jan 2022 09:27:08 +0000 (17:27 +0800)]
mds: clear the recover and check queues in front of identify_files_to_recover()
If the monitor sends the rejoin mdsmap twice and the second arrives
before the first has finished being processed, it may run
identify_files_to_recover() twice. Since rejoin_recover_q and
rejoin_check_q are vectors, this could leave duplicated inodes in them.
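The effect of the fix can be seen in a tiny sketch: clearing both queues
before repopulating them keeps a repeated rejoin from enqueuing the same
inodes twice (Python illustration; the real queues are C++ vectors in the
MDS)::

    rejoin_recover_q = []
    rejoin_check_q = []

    def identify_files_to_recover(to_recover, to_check):
        # The fix: clear both queues up front so a repeated rejoin mdsmap
        # does not leave duplicated entries behind.
        rejoin_recover_q.clear()
        rejoin_check_q.clear()
        rejoin_recover_q.extend(to_recover)
        rejoin_check_q.extend(to_check)

    identify_files_to_recover([0x1001, 0x1002], [0x2001])
    identify_files_to_recover([0x1001, 0x1002], [0x2001])  # second rejoin
    assert rejoin_recover_q == [0x1001, 0x1002]            # no duplicates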
Xiubo Li [Mon, 11 Apr 2022 02:34:16 +0000 (10:34 +0800)]
client: skip sync statx when only AT_STATX_DONT_SYNC flag is set
According to POSIX and the comments of the commit that initially added
statx support, AT_STATX_DONT_SYNC is a lightweight stat flag and
AT_STATX_FORCE_SYNC is a heavyweight one. Checking all other current
usage of these two flags shows they all behave the same way: only when
AT_STATX_FORCE_SYNC is not set and AT_STATX_DONT_SYNC is set do they
skip synchronously retrieving the attributes from storage.
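In short, the sync with the MDS is skipped only when AT_STATX_DONT_SYNC
is set and AT_STATX_FORCE_SYNC is not. A small sketch of that rule (the
flag values are the Linux uapi constants; the helper name is illustrative)::

    AT_STATX_SYNC_AS_STAT = 0x0000   # default behaviour
    AT_STATX_FORCE_SYNC   = 0x2000   # always sync attributes with the server
    AT_STATX_DONT_SYNC    = 0x4000   # accept cached/possibly stale attributes

    def should_sync_with_mds(flags):
        # FORCE_SYNC always wins if both flags are set.
        if flags & AT_STATX_FORCE_SYNC:
            return True
        return not (flags & AT_STATX_DONT_SYNC)

    assert should_sync_with_mds(AT_STATX_FORCE_SYNC)
    assert should_sync_with_mds(AT_STATX_FORCE_SYNC | AT_STATX_DONT_SYNC)
    assert not should_sync_with_mds(AT_STATX_DONT_SYNC)
    assert should_sync_with_mds(AT_STATX_SYNC_AS_STAT)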