Melissa Li [Thu, 26 May 2022 18:07:30 +0000 (14:07 -0400)]
mgr/dashboard: add rbd status endpoint
Show "No RBD pools available" error page when accessing block/rbd if there are no rbd pools.
Add a "button_name" and "button_route" property to `ModuleStatusGuardService` config to customize the button on the error page.
Modify `ModuleStatusGuardService` to execute API calls to `/ui-api/<uiApiPath>/status` which uses the `UIRouter`.
Fixes: https://tracker.ceph.com/issues/42109
Signed-off-by: Melissa Li <melissali@redhat.com>
(cherry picked from commit 6ac9b3cfe171a8902454ea907b3ba37d83eda3dc)
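For context, a status endpoint of this shape, on the dashboard's Python backend, might look roughly like the sketch below. The decorator and helper names (UIRouter, Endpoint, ReadPermission, CephService) follow the dashboard's controller conventions, but the exact imports and signatures here are assumptions, not the literal patch:

    # Hedged sketch of a /ui-api/block/rbd/status endpoint.
    from . import BaseController, Endpoint, ReadPermission, UIRouter
    from ..services.ceph_service import CephService

    @UIRouter('/block/rbd')
    class RbdStatus(BaseController):
        @Endpoint()
        @ReadPermission
        def status(self):
            # Report availability so the frontend guard can show the
            # "No RBD pools available" error page with a custom button.
            status = {'available': True, 'message': None}
            if not CephService.get_pool_list('rbd'):
                status['available'] = False
                status['message'] = 'No RBD pools in the cluster. Please create a ' \
                                    'pool with the "rbd" application label.'
            return status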
Nizamudeen A [Mon, 6 Jun 2022 05:51:29 +0000 (11:21 +0530)]
mgr/dashboard: configure rbd mirroring
One-click button, in the case of an orch cluster, for configuring rbd-mirroring when it is not properly set up. The button creates an rbd-mirror service and an rbd-labelled pool (replicated, size 3) if they do not already exist.
Fixes: https://tracker.ceph.com/issues/55646
Signed-off-by: Nizamudeen A <nia@redhat.com>
Conflicts:
src/pybind/mgr/dashboard/frontend/src/app/core/error/error.component.html
src/pybind/mgr/dashboard/frontend/src/app/shared/services/module-status-guard.service.ts
src/pybind/mgr/dashboard/tests/test_rbd_mirroring.py
The error component and module-status-guard.service.ts each had one minor conflict, which was resolved by accepting the incoming changes (adding the missing logic).
test_rbd_mirroring had an import conflict, which was resolved by accepting the incoming changes, which contained the same imports plus new ones.
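Outside the dashboard, the end state the one-click button aims for can be approximated from the CLI. The sketch below is illustrative only (the pool name is a placeholder), wrapping standard ceph/rbd commands in Python:

    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # Deploy an rbd-mirror daemon via the orchestrator.
    run('ceph', 'orch', 'apply', 'rbd-mirror')

    # Create a replicated, size-3 pool labelled for rbd
    # (the dashboard only does this when no rbd pool exists yet).
    pool = 'rbd-mirror-pool'  # hypothetical name
    run('ceph', 'osd', 'pool', 'create', pool, 'replicated')
    run('ceph', 'osd', 'pool', 'set', pool, 'size', '3')
    run('ceph', 'osd', 'pool', 'application', 'enable', pool, 'rbd')
    run('rbd', 'pool', 'init', pool)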
Pere Diaz Bou [Mon, 30 May 2022 14:10:11 +0000 (16:10 +0200)]
mgr/dashboard: move replaying images to Syncing tab
Images in the 'Replaying' state will be displayed in the Syncing tab. syncTmpl was removed, as it is unnecessary when the state is provided by the backend.
Replaying images, in contrast to Syncing images, don't have a progress percentage; nevertheless, we have an approximation of how much time is left until the image is fully synced. Therefore, we can use seconds_until_synced to represent the progress.
Signed-off-by: Pere Diaz Bou <pdiazbou@redhat.com>
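As a rough illustration of the display logic (a hypothetical helper, not the actual frontend code, which is TypeScript), the idea is to route 'Replaying' images to the Syncing tab and to derive a progress hint from seconds_until_synced when no percentage is available:

    def syncing_row(image):
        """Hypothetical sketch: build a Syncing-tab row from backend mirror data."""
        state = image.get('state')
        if state not in ('Syncing', 'Replaying'):
            return None
        if state == 'Syncing':
            progress = f"{image.get('progress', 0)}%"
        else:
            # Replaying images expose no percentage; fall back to the
            # estimated time remaining until the image is fully synced.
            progress = f"~{image.get('seconds_until_synced', '?')}s until synced"
        return {'name': image['name'], 'state': state, 'progress': progress}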
Pere Diaz Bou [Fri, 13 May 2022 15:15:33 +0000 (17:15 +0200)]
mgr/dashboard: snapshot mirroring from dashboard
Enable snapshot mirroring from Pools -> Images.
Also show the mirror snapshot in images where snapshot mirroring is enabled.
When parsing images, if an image has snapshot mode enabled, the dashboard would try to run commands that don't work with that mode. The solution, for now, is to not run those commands and to append the mode in the get call.
Fixes: https://tracker.ceph.com/issues/55648
Signed-off-by: Pere Diaz Bou <pdiazbou@redhat.com>
Signed-off-by: Nizamudeen A <nia@redhat.com>
Signed-off-by: Aashish Sharma <aasharma@redhat.com>
Signed-off-by: Avan Thakkar <athakkar@redhat.com>
(cherry picked from commit 489a385a95d6ffa5dbd4c5f9c53c1f80ea179142)
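For reference, enabling snapshot-based mirroring on an image boils down to the standard rbd commands below; the Python wrapper and the pool/image names are just placeholders:

    import subprocess

    pool, image = 'rbd', 'image1'  # hypothetical names

    # Mirroring in 'image' mode lets individual images opt in.
    subprocess.run(['rbd', 'mirror', 'pool', 'enable', pool, 'image'], check=True)

    # Enable snapshot-based (rather than journal-based) mirroring on the image.
    subprocess.run(['rbd', 'mirror', 'image', 'enable', f'{pool}/{image}', 'snapshot'], check=True)

    # Take a mirror snapshot so the peer can sync the current state.
    subprocess.run(['rbd', 'mirror', 'image', 'snapshot', f'{pool}/{image}'], check=True)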
Zac Dover [Tue, 21 Jun 2022 14:09:05 +0000 (00:09 +1000)]
doc/dev: add context note to dev guide config
This PR adds a note directing first-time cloners of
their Ceph git forks to make sure to cd into the ceph/
directory before trying to run the "git config" commands.
Kotresh HR [Fri, 18 Mar 2022 06:43:53 +0000 (12:13 +0530)]
mgr/volumes: Disable quota for mgr libcephfs connection
This is done to give the 'mgr' libcephfs connection the right to bypass quota. The mgr/volumes plugin maintains configuration files within the directory on which the user has enforced quota, so when the quota is reached, certain mgr/volumes APIs don't work as intended. E.g., when the subvolumegroup quota is reached, removal of the group's subvolume with '--retain-snapshots' fails.
Conflicts:
src/pybind/mgr/volumes/fs/operations/group.py
- Updates in the definition of create_groups
src/pybind/mgr/volumes/fs/volume.py
- Added set_group_attrs in import list and split long line
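Conceptually, the change amounts to turning off client-side quota enforcement on the connection the mgr opens. A minimal standalone sketch with the cephfs Python binding (the option name is taken from the commit description; the real change lives in the mgr's shared connection helper):

    import cephfs

    # Minimal sketch: open a libcephfs connection that bypasses quota enforcement.
    fs = cephfs.LibCephFS()
    fs.conf_read_file()                   # read ceph.conf for mon addresses, keyring, etc.
    fs.conf_set('client_quota', 'false')  # do not enforce quotas on this connection
    fs.mount()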
Xiubo Li [Wed, 8 Jun 2022 05:00:20 +0000 (13:00 +0800)]
qa: wait for rank 0 to become up:active before mounting the fuse client
When setting the EC pool in the layout, the filesystem may not be ready yet, so mounting a fuse client will fail. To fix this we need to wait for at least rank 0 to be in the up:active state.
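Outside the teuthology helpers the qa suite actually uses, the waiting logic can be illustrated with a simple poll of 'ceph mds stat' (illustrative sketch only; the real test goes through the CephFS test framework):

    import subprocess
    import time

    def wait_for_rank0_active(timeout=180):
        """Poll until rank 0 reports up:active, or give up after `timeout` seconds."""
        end = time.time() + timeout
        while time.time() < end:
            out = subprocess.run(['ceph', 'mds', 'stat'],
                                 capture_output=True, text=True).stdout
            # e.g. "cephfs:1 {0=mds.a=up:active}" once rank 0 is active
            if '0=' in out and 'up:active' in out:
                return True
            time.sleep(5)
        raise RuntimeError('rank 0 did not become up:active in time')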
Xiubo Li [Fri, 27 May 2022 05:11:44 +0000 (13:11 +0800)]
client: choose auth MDS for getxattr with the Xs caps
If any 'x' cap is issued we can just choose the auth MDS instead of a random replica MDS, because only when the Locker is in the LOCK_EXEC state can the loner client get the 'x' caps. If we send the getattr request to a replica MDS, it must auth-pin and try to rdlock from the auth MDS, and then the auth MDS needs to transition the Locker state to LOCK_SYNC; after that the lock state changes back.
This costs a lot when doing the Locker state transition and usually requires revoking caps from clients.
And for the 'Xs' caps, getxattr will also choose the auth MDS, because the MDS-side code is buggy: setxattr won't notify the replica MDSes of the increased xattr_version when the values change, so a replica MDS will return the old xattr_version value. The client will then just drop the xattr values, since it sees that the xattr_version hasn't changed.
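The selection rule itself is simple; a purely hypothetical Python illustration of it (the real logic lives in the C++ client, Client.cc):

    import random

    def choose_getxattr_target(auth_mds, replica_mdses,
                               has_exclusive_caps, has_shared_xattr_cap):
        """Hypothetical illustration of the selection rule described above."""
        # Going through a replica would force the auth MDS through a LOCK_SYNC
        # transition (and usually a cap revoke), so prefer the auth MDS whenever
        # exclusive caps or the shared xattr cap ('Xs') are held.
        if has_exclusive_caps or has_shared_xattr_cap or not replica_mdses:
            return auth_mds
        return random.choice(replica_mdses)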
Xiubo Li [Wed, 16 Mar 2022 09:15:57 +0000 (17:15 +0800)]
mds, client: only send the metrics supported by MDSes
For old Ceph clusters the clients won't send any metrics to them by default unless this commit has been backported, but the option 'client_collect_and_send_global_metrics' can still be used to enable it manually.
This will fix the crash bug seen when upgrading from old Ceph clusters, whose MDSes would crash once they received unknown metrics.
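The idea, illustrated with a deliberately simplified, hypothetical helper (the real negotiation happens in the C++ client/MDS code), is to send only the metric types the MDS advertises support for:

    def metrics_to_send(collected: dict, mds_supported: set) -> dict:
        """Hypothetical sketch: drop metrics the connected MDS does not understand,
        so an old MDS never receives (and crashes on) unknown metric types."""
        return {name: value for name, value in collected.items() if name in mds_supported}

    # e.g. metrics_to_send({'cap_hit': 0.97, 'opened_files': 12}, {'cap_hit'})
    # -> {'cap_hit': 0.97}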
mgr/cephadm: adding logic to close ports when removing a daemon
Fixes: https://tracker.ceph.com/issues/52906
Signed-off-by: Redouane Kachach <rkachach@redhat.com>
(cherry picked from commit 4deb546ffd67ac8f05d2788150764a26b5671b87)
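cephadm manages host firewall rules, so removing a daemon should also drop the ports it had opened. A hedged sketch with firewalld (firewall-cmd), independent of the actual cephadm implementation:

    import subprocess

    def close_daemon_ports(ports, zone='public'):
        """Illustrative only: remove a removed daemon's TCP ports from firewalld."""
        for port in ports:
            for extra in ([], ['--permanent']):
                subprocess.run(['firewall-cmd', f'--zone={zone}',
                                f'--remove-port={port}/tcp', *extra],
                               check=False)  # ignore "port not present" errors

    # e.g. close_daemon_ports([8443, 9095]) after removing dashboard/prometheus daemons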
Redouane Kachach [Tue, 31 May 2022 10:11:03 +0000 (12:11 +0200)]
mgr/cephadm: check if a service exists before trying to restart it
Fixes: https://tracker.ceph.com/issues/55800
Signed-off-by: Redouane Kachach <rkachach@redhat.com>
(cherry picked from commit 6b76753c3cabf9663fa1daa47c7bcb7df110a94c)
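A standalone approximation of that check (the real change is inside the cephadm mgr module; this sketch just uses the public orchestrator CLI):

    import json
    import subprocess

    def restart_service_if_exists(service_name: str) -> bool:
        """Only issue 'ceph orch restart' if the service is actually deployed."""
        out = subprocess.run(['ceph', 'orch', 'ls', '--format', 'json'],
                             capture_output=True, text=True, check=True).stdout
        deployed = {svc.get('service_name') for svc in json.loads(out)}
        if service_name not in deployed:
            return False
        subprocess.run(['ceph', 'orch', 'restart', service_name], check=True)
        return True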
Redouane Kachach [Tue, 31 May 2022 10:59:26 +0000 (12:59 +0200)]
mgr/cephadm: capture exception when not able to list upgrade tags
Fixes: https://tracker.ceph.com/issues/55801
Signed-off-by: Redouane Kachach <rkachach@redhat.com>
(cherry picked from commit 0e7a4366c0c1edd74d52acad5ed4dc3df0ef7679)
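The shape of the fix is defensive error handling around the registry query; a generic, hypothetical illustration:

    def list_upgrade_tags(fetch_tags):
        """Hypothetical sketch: `fetch_tags` is whatever callable queries the
        container registry; failures become a clean error instead of a traceback."""
        try:
            return {'ok': True, 'tags': fetch_tags()}
        except Exception as e:  # registry unreachable, bad credentials, ...
            return {'ok': False, 'error': f'unable to list upgrade tags: {e}'}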
qa: set, get, list and remove custom metadata for snapshot
The following tests are added (a rough standalone sketch of the CLI involved follows the list):
1. Set custom metadata for a subvolume snapshot.
2. Set custom metadata for a subvolume snapshot (idempotency).
3. Get custom metadata for a specified key.
4. Get custom metadata if the specified key does not exist (expecting error ENOENT).
5. Get custom metadata if no key-value pair has been added, i.e. the section does not exist (expecting error ENOENT).
6. Update the value of an existing key in the custom metadata.
7. List custom metadata of a subvolume snapshot.
8. List custom metadata of a subvolume snapshot if no key-value pair has been added (expecting an empty json/dictionary).
9. Remove custom metadata for a specified key.
10. Remove custom metadata if the specified key does not exist (expecting error ENOENT).
11. Remove custom metadata if no key-value pair has been added, i.e. the section does not exist (expecting error ENOENT).
12. Remove custom metadata with the --force option.
13. Remove custom metadata with the --force option if the specified key does not exist (expecting the command to succeed because of the '--force' option).
14. Remove a subvolume snapshot and verify whether the metadata for the snapshot is removed.
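A rough standalone sketch of the CLI these tests exercise (the real tests use the qa framework's helpers; the volume, subvolume, snapshot, and key names below are made up):

    import subprocess

    def fs_cmd(*args):
        # Thin wrapper around the ceph CLI, for illustration only.
        return subprocess.run(['ceph', 'fs', *args], capture_output=True, text=True)

    # Set a key, read it back, list, then remove it.
    fs_cmd('subvolume', 'snapshot', 'metadata', 'set',
           'vol1', 'subvol1', 'snap1', 'pvc', 'csi-pvc-0001')
    res = fs_cmd('subvolume', 'snapshot', 'metadata', 'get',
                 'vol1', 'subvol1', 'snap1', 'pvc')
    assert res.stdout.strip() == 'csi-pvc-0001'
    print(fs_cmd('subvolume', 'snapshot', 'metadata', 'ls',
                 'vol1', 'subvol1', 'snap1').stdout)
    fs_cmd('subvolume', 'snapshot', 'metadata', 'rm',
           'vol1', 'subvol1', 'snap1', 'pvc')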
docs: set, get, list and remove custom metadata for snapshot
Set custom metadata on the snapshot as a key-value pair using::
$ ceph fs subvolume snapshot metadata set <vol_name> <subvol_name> <snap_name> <key_name> <value> [--group_name <subvol_group_name>]
note: If the key_name already exists then the old value will be replaced by the new value.
note: The key_name and value should be strings of ASCII characters (as specified in Python's string.printable). The key_name is case-insensitive and is always stored in lower case.
note: Custom metadata on a snapshot is not preserved when snapshotting the subvolume, and hence is also not preserved when cloning the subvolume snapshot.
Get custom metadata set on the snapshot using the metadata key::
$ ceph fs subvolume snapshot metadata get <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>]
List custom metadata (key-value pairs) set on the snapshot using::
$ ceph fs subvolume snapshot metadata ls <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
Remove custom metadata set on the snapshot using the metadata key::
$ ceph fs subvolume snapshot metadata rm <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>] [--force]
Using the '--force' flag allows the command to succeed even if the metadata key does not exist (it would otherwise fail).
mgr/volumes: set, get, list and remove custom metadata for snapshot
If CephFS in ODF is configured in external mode, users would like to use subvolume snapshot metadata to store some OpenShift-specific information, such as the PVC/PV/namespace the subvolumes/snapshots come from. For RBD volumes, it is possible to add metadata to images using the 'rbd image-meta' command; however, this feature was not available for CephFS volumes. We'd like to request this capability.
Zac Dover [Tue, 14 Jun 2022 22:15:33 +0000 (08:15 +1000)]
doc/dev: s/master/main/ in basic workflow
This PR changes "master" to "main" in the
basic_workflow.rst file. I have even changed
"master" to "main" in some terminal output from
several years ago. This isn't historically ac-
curate, of course, but my hope is that this change
will prevent someone in the future from being con-
fused about why an antiquated branch name is ref-
erred to.