Conflicts:
src/rgw/rgw_sal.h
- `unique_ptr` overload of empty
Fixes: https://tracker.ceph.com/issues/56585
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
(cherry picked from commit e40ce4a3511e669e761da5f39d81b14e6cdbdeba)
Resolves: rhbz#118423
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
Adam King [Mon, 23 May 2022 19:57:14 +0000 (15:57 -0400)]
mgr/cephadm: store device info separately from rest of host cache
Device info tends to take up the most space out of
everything in the host cache, so the hope is that by giving
it its own location in the config-key store we can avoid
hitting issues where the host cache value we attempt to
place in the config-key store exceeds the size limit.
When we are activating we may receive several service map updates
initiated by the previous active mgr. Treat them all as the initial map.
The code also adds a "pending_service_map_dirty == 0" assert, which we
expect to hold when receiving an initial map -- otherwise we can't
simply initialize pending_service_map with the received map.
Fixes: https://tracker.ceph.com/issues/53210
Signed-off-by: Avan Thakkar <athakkar@redhat.com>
Service instances were displaying only the Ceph services, but with these
changes instances of cephadm services will be displayed as well.
Signed-off-by: Pedro Gonzalez Gomez <pegonzal@redhat.com>
Signed-off-by: Pere Diaz Bou <pdiazbou@redhat.com>
(cherry picked from commit 1a17bd27a08d87ef24457788542f064f4e917d54)
Add a one-click button, in the case of an orch cluster, for configuring
rbd-mirroring when it is not properly set up. This button will create an
rbd-mirror service and also an rbd-labelled pool (replicated, size 3) if
they do not already exist.
Fixes: https://tracker.ceph.com/issues/55646
Signed-off-by: Nizamudeen A <nia@redhat.com>
Conflicts:
src/pybind/mgr/dashboard/frontend/src/app/core/error/error.component.html
src/pybind/mgr/dashboard/frontend/src/app/shared/services/module-status-guard.service.ts
This commit had minor conflicts: the addition of dashboardButton had to
be applied explicitly, and the state of api/orch/status in
module-status-guard had to be updated.
Show "No RBD pools available" error page when accessing block/rbd if there are no rbd pools.
Add a "button_name" and "button_route" property to `ModuleStatusGuardService` config to customize the button on the error page.
Modify `ModuleStatusGuardService` to execute API calls to `/ui-api/<uiApiPath>/status` which uses the `UIRouter`.
Images in the 'Replaying' state will be displayed in the Syncing tab. syncTmpl
was removed as it is unnecessary when the state is provided by the backend.
Replaying images, in contrast to Syncing images, don't have a progress
percentage; nevertheless, we have an approximation of how much time is left
until the image is fully synced. Therefore, we can use seconds_until_synced to represent the progress.
Also show the mirror-snapshot in images where snapshot mode is enabled.
When parsing images, if an image has the snapshot mode enabled, it will
try to run commands that don't work with that mode. The solution was to
not run those for now and to append the mode in the get call.
Fixes: https://tracker.ceph.com/issues/55648
Signed-off-by: Pere Diaz Bou <pdiazbou@redhat.com>
Signed-off-by: Nizamudeen A <nia@redhat.com>
Signed-off-by: Aashish Sharma <aasharma@redhat.com>
Signed-off-by: Avan Thakkar <athakkar@redhat.com>
(cherry picked from commit 489a385a95d6ffa5dbd4c5f9c53c1f80ea179142)
(cherry picked from commit 3ca9ca7e215562912daf00ca3cca40dbc5d560b5)
mgr/dashboard: Get "Different Storage Class" metrics in Prometheus dashboard
Get metrics for the different "HDDRule" and "MixedUse" classes of the "Raw Storage" for Ceph VMs, so that Prometheus can scrape the data and display it in Grafana.
Ilya Dryomov [Fri, 12 Aug 2022 09:10:45 +0000 (11:10 +0200)]
rbd: remove incorrect use of std::includes()
- std::includes() requires sorted ranges, but command specs aren't
  sorted
- std::includes()'s purpose is to check whether the second range is
  a subsequence of the first range, but here the size of the second
  range is always equal to the size of the first range, which means
  that, had the ranges been sorted, std::includes() would have checked
  straight equality (see the illustration below)
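A self-contained illustration of both points (plain STL, unrelated to the
rbd code itself): std::includes() is only meaningful on sorted ranges, and
for sorted ranges of equal length it degenerates into an equality test.

    #include <algorithm>
    #include <cassert>
    #include <string>
    #include <vector>

    int main() {
      std::vector<std::string> spec{"trash", "purge", "schedule"};  // not sorted
      std::vector<std::string> args{"trash", "purge", "schedule"};

      // Command specs are not sorted, so calling std::includes() on them
      // violates its precondition in the first place.
      assert(!std::is_sorted(spec.begin(), spec.end()));

      // Once both ranges are sorted and have the same size, includes() can
      // only return true when the ranges are element-wise equal -- i.e. it
      // is just a roundabout operator==.
      std::sort(spec.begin(), spec.end());
      std::sort(args.begin(), args.end());
      bool inc = std::includes(spec.begin(), spec.end(),
                               args.begin(), args.end());
      bool eq = (spec == args);
      assert(inc == eq);
      return 0;
    }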
Ilya Dryomov [Fri, 12 Aug 2022 09:10:45 +0000 (11:10 +0200)]
rbd: find_action() should sort actions first
The order in which objects with static storage duration in
different TUs are initialized is undefined. If the compiler
chooses to initialize Shell::Action objects in action/Trash.cc
before Shell::Action objects in action/TrashPurgeSchedule.cc,
all "rbd trash purge schedule ..." commands get shadowed by
the "rbd trash purge" command:
$ rbd trash purge schedule list
rbd: too many arguments
The confusing error arises because "rbd trash purge" takes a single
positional argument. "schedule" gets interpreted as <pool-spec> and
"list" generates an error.
osd: Handle oncommits and wait for future work items from mClock queue
When a worker thread with the smallest thread index waits for future work
items from the mClock queue, oncommit callbacks are called. But after the
callbacks, the thread has to continue waiting instead of returning to
the ShardedThreadPool::shardedthreadpool_worker() loop. Returning results
in the threads with the smallest index across all shards busy-looping,
which causes very high CPU utilization.
The fix involves reacquiring the shard_lock and waiting on sdata_cond
until notified or until the time period elapses. After this, the thread
with the smallest index repopulates the oncommit queue from the
context_queue if there were any additions.
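The waiting pattern described above, sketched with standard C++ primitives
standing in for the OSD's shard_lock/sdata_cond/context_queue (names kept
for readability; this is not the actual sharded-queue code):

    #include <chrono>
    #include <condition_variable>
    #include <deque>
    #include <functional>
    #include <mutex>

    struct Shard {
      std::mutex shard_lock;
      std::condition_variable sdata_cond;
      std::deque<std::function<void()>> context_queue;  // oncommit callbacks
      bool work_available = false;
    };

    void wait_for_work(Shard& s) {
      std::unique_lock l{s.shard_lock};
      while (!s.work_available) {
        // Wait until notified or until the time period elapses ...
        s.sdata_cond.wait_for(l, std::chrono::milliseconds(100));

        // ... run any oncommit contexts queued in the meantime ...
        auto oncommits = std::move(s.context_queue);
        s.context_queue.clear();
        l.unlock();
        for (auto& cb : oncommits) {
          cb();
        }
        // ... then reacquire the lock and keep waiting, instead of returning
        // to the worker loop and busy-looping.
        l.lock();
      }
    }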
libcephsqlite: ceph-mgr crashes when compiled with gcc12
A regex in libcephsqlite, when compiled with GCC 12, treats '-' as a range
operator, resulting in the following error:
"Invalid start of '[x-x]' range in regular expression"
Kotresh HR [Fri, 4 Feb 2022 09:58:39 +0000 (15:28 +0530)]
qa: validate subvolume discover on upgrade
Validate subvolume discover on upgrade from
a legacy subvolume to v1. The handcrafted
".meta" file at the legacy subvolume root should
not be used for any subvolume APIs such as getpath
and authorize.
Kotresh HR [Fri, 4 Feb 2022 09:25:03 +0000 (14:55 +0530)]
mgr/volumes: Fix subvolume discover during upgrade
Fixes subvolume discover to use the correct
metadata file after an upgrade from a legacy subvolume
to v1. The fix makes sure it doesn't use the
handcrafted metadata file placed at the subvolume
root of a legacy subvolume.
Co-authored-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@cern.ch>
Co-authored-by: Dan van der Ster <daniel.vanderster@cern.ch>
Co-authored-by: Ramana Raja <rraja@redhat.com>
Signed-off-by: Kotresh HR <khiremat@redhat.com>
(cherry picked from commit 7eba9cab6cfb9a13a84062177d7a0fa228311e13)
(cherry picked from commit 50f4ac7b75185c89ede210b2c975672e8bdd73aa)
We really want to have the ability to know how many
entries `PGLog::IndexedLog::dups` has inside.
The current ways are either invasive (stopping an OSD)
or indirect (examination of `dump_mempools`).
Although the chunking in off-line `dups` trimming (via COT) seems
fine, the `ceph-objectstore-tool` is a client of `trim()` of
`PGLog::IndexedLog`, which means that a partial revert is not
possible without extensive changes.
The backport ticket is: https://tracker.ceph.com/issues/55981
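A hedged sketch of the direction only (pg_log_dup_t and the dump method
below are simplified placeholders, not the real PGLog interface): surface
dups.size() through a dump so it can be inspected without stopping the
OSD or reading mempool statistics.

    #include <list>
    #include <ostream>

    struct pg_log_dup_t { /* version, reqid, return code, ... */ };

    struct IndexedLogStub {
      std::list<pg_log_dup_t> dups;

      // e.g. hooked into an admin-socket / formatter dump in the real OSD
      void dump_dups_count(std::ostream& out) const {
        out << "dups_size: " << dups.size() << "\n";
      }
    };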
Revert "osd/PGLog.cc: Trim duplicates by number of entries"
This reverts commit 3ff0df6a28a1d9e197bdba40be7126fed8a14ae9
which is the in-OSD part of the fix for accumulation of `dup`
entries in a PG Log. Brainstorming has raised questions about
the OSD's behaviour during an upgrade if there are tons of
dups in the log. What must be double-checked before bringing
it back is ensuring we chunk the deletions properly so as not to
cause OOMs / stalls in, for example, RocksDB.
The backport ticket is: https://tracker.ceph.com/issues/55981
qa: set, get, list and remove custom metadata for snapshot
The following tests are added:
1. Set custom metadata for a subvolume snapshot.
2. Set custom metadata for a subvolume snapshot (idempotency).
3. Get custom metadata for a specified key.
4. Get custom metadata if the specified key does not exist (expecting error ENOENT).
5. Get custom metadata if no key-value pair has been added, i.e. the section does not exist (expecting error ENOENT).
6. Update the value of an existing key in custom metadata.
7. List custom metadata of a subvolume snapshot.
8. List custom metadata of a subvolume snapshot if no key-value pair has been added (expecting an empty JSON dictionary).
9. Remove custom metadata for a specified key.
10. Remove custom metadata if the specified key does not exist (expecting error ENOENT).
11. Remove custom metadata if no key-value pair has been added, i.e. the section does not exist (expecting error ENOENT).
12. Remove custom metadata with the --force option.
13. Remove custom metadata with the --force option if the specified key does not exist (expecting the command to succeed because of the '--force' option).
14. Remove a subvolume snapshot and verify whether the metadata for the snapshot is removed or not.
docs: set, get, list and remove custom metadata for snapshot
Set custom metadata on the snapshot as a key-value pair using::
$ ceph fs subvolume snapshot metadata set <vol_name> <subvol_name> <snap_name> <key_name> <value> [--group_name <subvol_group_name>]
note: If the key_name already exists then the old value will get replaced by the new value.
note: The key_name and value should be a string of ASCII characters (as specified in Python's string.printable). The key_name is case-insensitive and is always stored in lower case.
note: Custom metadata on a snapshot is not preserved when snapshotting the subvolume, and hence is also not preserved when cloning the subvolume snapshot.
Get custom metadata set on the snapshot using the metadata key::
$ ceph fs subvolume snapshot metadata get <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>]
List custom metadata (key-value pairs) set on the snapshot using::
$ ceph fs subvolume snapshot metadata ls <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
Remove custom metadata set on the snapshot using the metadata key::
$ ceph fs subvolume snapshot metadata rm <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>] [--force]
Using the '--force' flag allows the command to succeed when it would otherwise fail because the metadata key does not exist.
mgr/volumes: set, get, list and remove custom metadata for snapshot
If CephFS in ODF is configured in external mode, users would like to use
subvolume snapshot metadata to store some OpenShift-specific
information, such as the PVC/PV/namespace the subvolumes/snapshots
are coming from. For RBD volumes, it is possible to add metadata
to the images using the 'rbd image-meta' command.
However, this feature is not available for CephFS volumes.
We'd like to request this capability.
Xiubo Li [Tue, 19 Apr 2022 06:21:49 +0000 (14:21 +0800)]
mds: trigger to flush the mdlog in handle_find_ino()
If the CInode was just created using openc in the current
auth MDS, but the client sends a getattr request to another
replica MDS, then the auth MDS will build a path of '#INODE-NUMBER'
only, because the CInode hasn't been linked yet, and the replica
MDS will keep retrying until the auth MDS flushes the mdlog and
C_MDS_openc_finish and link_primary_inode are called, up to
5 seconds later.
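A much-simplified sketch of the behaviour described above (CInode, MDLog
and the reply callback are stand-ins, not the real MDS types): when the
inode found by handle_find_ino() has no primary dentry yet, kick an mdlog
flush so the pending openc finishes promptly instead of on the next tick.

    #include <functional>

    struct CInode { bool linked = false; };
    struct MDLog  { void flush() { /* ask the journaler to flush */ } };

    void handle_find_ino(CInode* in, MDLog& mdlog,
                         const std::function<void(const char*)>& reply) {
      if (in && !in->linked) {
        // The auth MDS created the inode via openc but hasn't linked it yet;
        // flushing the mdlog lets C_MDS_openc_finish / link_primary_inode run
        // soon, so the replica's retry can resolve a full path.
        mdlog.flush();
        reply("#INODE-NUMBER");  // for now, still reply with the ino-only path
        return;
      }
      reply("full/path");
    }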
Xiubo Li [Tue, 12 Apr 2022 11:40:02 +0000 (19:40 +0800)]
qa: add file sync stuck test support
This will test file sync on a directory, which may be stuck for
up to 5 seconds. This is because the related code waits for
all the unsafe requests to get a safe reply from the MDSes, but the MDSes
consider it unnecessary to flush the mdlog immediately
after the early reply, so the mdlog is only flushed every 5 seconds
in the tick thread.
This should have been fixed in kclient and libcephfs by triggering
an mdlog flush before waiting for the requests' safe replies.
Xiubo Li [Tue, 12 Apr 2022 04:37:13 +0000 (12:37 +0800)]
qa: add filesystem sync stuck test support
This will test sync of the whole filesystem, which may be stuck for
up to 5 seconds. This is because the related code waits
for all the unsafe requests to get a safe reply from the MDSes, but the
MDSes consider it unnecessary to flush the mdlog immediately
after the early reply, so the mdlog is only flushed every 5 seconds
in the tick thread.
This should have been fixed in kclient and libcephfs by triggering
an mdlog flush before waiting for the requests' safe replies.
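A hedged sketch of the client-side behaviour these tests exercise (the
types and names below are illustrative, not the actual libcephfs code):
before blocking on outstanding unsafe requests, ask every MDS session to
flush its mdlog so the safe replies arrive right away rather than on the
next ~5 s tick.

    #include <vector>

    struct MetaRequest { bool got_safe_reply = false; };
    struct MetaSession { void flush_mdlog() { /* send a flush-mdlog request */ } };

    void wait_unsafe_requests(std::vector<MetaSession*>& sessions,
                              std::vector<MetaRequest*>& unsafe_requests) {
      // Kick the journals first ...
      for (auto* s : sessions) {
        s->flush_mdlog();
      }
      // ... then wait for every unsafe request to become safe.
      for (auto* req : unsafe_requests) {
        if (!req->got_safe_reply) {
          // in the real client: wait on the request's condition variable
        }
      }
    }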
Ronen Friedman [Tue, 17 May 2022 16:13:59 +0000 (16:13 +0000)]
osd/scrub: restart snap trimming after a failed scrub
A follow-up to PR#45640.
In PR#45640 snap trimming was restarted (if blocked) after all
successful scrubs, and after most scrub failures. Still, a few
failure scenarios did not handle snaptrim restart correctly.
The current PR cleans up and fixes the interaction between
scrub initiation/termination (for whatever cause) and snap
trimming.
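A hedged, heavily simplified sketch of the invariant being enforced (not
the actual scrubber state machine): every path that ends a scrub, whether
it succeeded, was aborted or failed for any other reason, must give snap
trimming a chance to resume if the scrub had blocked it.

    enum class ScrubOutcome { success, aborted, failed };

    struct PGStub {
      bool snaptrim_blocked_by_scrub = false;

      void queue_snaptrim() { /* requeue the snap-trim state machine */ }

      void on_scrub_finished(ScrubOutcome outcome) {
        (void)outcome;  // the outcome no longer matters for this part
        if (snaptrim_blocked_by_scrub) {
          snaptrim_blocked_by_scrub = false;
          queue_snaptrim();
        }
      }
    };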
Jos Collin [Wed, 4 May 2022 13:03:12 +0000 (18:33 +0530)]
qa: fix is_addr_blocklisted() to get blocklisted clients from 'osd dump'
With the introduction of the range blocklist, the 'blocklist ls' command outputs
two lists. It's also straightforward to get the blocklisted clients directly
from 'osd dump' to avoid regression.
Fixes: https://tracker.ceph.com/issues/55516
Signed-off-by: Jos Collin <jcollin@redhat.com>
(cherry picked from commit 47de5d79b8190458847072aae1c29db7d6a9b66b)
Xiubo Li [Wed, 16 Mar 2022 09:15:57 +0000 (17:15 +0800)]
mds, client: only send the metrics supported by MDSes
For old Ceph clusters the clients won't send any metrics to
them by default unless this commit has been backported there, but the
option 'client_collect_and_send_global_metrics' could still
be used to enable it manually.
This will fix the crash seen when upgrading from old Ceph clusters,
where the MDSes crash once they receive unknown metrics.
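A hedged sketch of the negotiation (metric names and types below are
illustrative, not the wire format): the client only includes metric types
the MDS has advertised, while the client_collect_and_send_global_metrics
option can still force sending on clusters that advertise nothing.

    #include <cstdint>
    #include <initializer_list>
    #include <set>
    #include <vector>

    enum class ClientMetricType : uint32_t { CAP_INFO, READ_LATENCY, WRITE_LATENCY };

    struct MetricsPayload { std::vector<ClientMetricType> metrics; };

    MetricsPayload build_metrics(const std::set<ClientMetricType>& mds_supported,
                                 bool client_collect_and_send_global_metrics) {
      MetricsPayload p;
      if (mds_supported.empty() && !client_collect_and_send_global_metrics) {
        return p;  // old cluster, nothing advertised: send nothing by default
      }
      for (auto t : {ClientMetricType::CAP_INFO,
                     ClientMetricType::READ_LATENCY,
                     ClientMetricType::WRITE_LATENCY}) {
        if (mds_supported.count(t) || client_collect_and_send_global_metrics) {
          p.metrics.push_back(t);
        }
      }
      return p;
    }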