qa: set, get, list and remove custom metadata for snapshot
The following tests are added (a minimal sketch of the error-path checks appears after the list):
1. Set custom metadata for subvolume snapshot.
2. Set custom metadata for subvolume snapshot (idempotency).
3. Get custom metadata for a specified key.
4. Get custom metadata if the specified key does not exist (expecting error ENOENT).
5. Get custom metadata if no key-value pair has been added, i.e. the metadata section does not exist (expecting error ENOENT).
6. Update the value for an existing key in custom metadata.
7. List custom metadata of a subvolume snapshot.
8. List custom metadata of a subvolume snapshot if no key-value pair has been added (expect an empty JSON/dictionary).
9. Remove custom metadata for a specified key.
10. Remove custom metadata if the specified key does not exist (expecting error ENOENT).
11. Remove custom metadata if no key-value pair has been added, i.e. the metadata section does not exist (expecting error ENOENT).
12. Remove custom metadata with the --force option.
13. Remove custom metadata with the --force option if the specified key does not exist (expecting the command to succeed because of the '--force' option).
14. Remove the subvolume snapshot and verify whether the metadata for the snapshot is removed or not.
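A minimal sketch of the error-path checks above, assuming a running cluster with the `ceph` CLI on the path; the volume `cephfs`, subvolume `subvol0`, and snapshot `snap0` are placeholder names, and the real qa tests use the CephFS test framework helpers rather than raw subprocess calls:
```
import subprocess

def fs_cmd(*args):
    """Run a 'ceph fs ...' command and return its exit code."""
    return subprocess.run(["ceph", "fs", *args],
                          capture_output=True, text=True).returncode

# Getting or removing a key that was never set should fail (ENOENT),
# while 'metadata rm' with '--force' should succeed anyway.
rc = fs_cmd("subvolume", "snapshot", "metadata", "get",
            "cephfs", "subvol0", "snap0", "no_such_key")
assert rc != 0   # expected to fail with ENOENT

rc = fs_cmd("subvolume", "snapshot", "metadata", "rm",
            "cephfs", "subvol0", "snap0", "no_such_key", "--force")
assert rc == 0   # '--force' masks the missing key
```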
docs: set, get, list and remove custom metadata for snapshot
Set custom metadata on the snapshot as a key-value pair using::
$ ceph fs subvolume snapshot metadata set <vol_name> <subvol_name> <snap_name> <key_name> <value> [--group_name <subvol_group_name>]
note: If the key_name already exists then the old value will get replaced by the new value.
note: The key_name and value should be strings of ASCII characters (as specified in python's string.printable). The key_name is case-insensitive and always stored in lower case.
note: Custom metadata on a snapshot is not preserved when snapshotting the subvolume, and hence is also not preserved when cloning the subvolume snapshot.
Get custom metadata set on the snapshot using the metadata key::
$ ceph fs subvolume snapshot metadata get <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>]
List custom metadata (key-value pairs) set on the snapshot using::
$ ceph fs subvolume snapshot metadata ls <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
Remove custom metadata set on the snapshot using the metadata key::
$ ceph fs subvolume snapshot metadata rm <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>] [--force]
Using the '--force' flag allows the command to succeed when it would otherwise fail because the metadata key does not exist.
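A short end-to-end illustration of the commands above, hedged: `cephfs`, `subvol0`, `snap0`, and the `cluster` key are placeholder names, and the sketch assumes a healthy cluster with the `ceph` CLI available:
```
import json
import subprocess

def ceph(*args):
    """Run a ceph CLI command and return its stdout."""
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

meta = ["fs", "subvolume", "snapshot", "metadata"]

ceph(*meta, "set", "cephfs", "subvol0", "snap0", "cluster", "openshift-prod")
print(ceph(*meta, "get", "cephfs", "subvol0", "snap0", "cluster").strip())
print(json.loads(ceph(*meta, "ls", "cephfs", "subvol0", "snap0")))  # key-value dict
ceph(*meta, "rm", "cephfs", "subvol0", "snap0", "cluster")
```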
mgr/volumes: set, get, list and remove custom metadata for snapshot
When CephFS in ODF is configured in external mode, users would like to use
subvolume snapshot metadata to store some OpenShift-specific
information, such as the PVC/PV/namespace the subvolumes/snapshots
come from. For RBD volumes, it's possible to add metadata
information to the images using the 'rbd image-meta' command.
However, this feature is not available for CephFS volumes.
We'd like to request this capability.
Xiubo Li [Tue, 19 Apr 2022 06:21:49 +0000 (14:21 +0800)]
mds: trigger to flush the mdlog in handle_find_ino()
If the CInode was just created via openc on the current
auth MDS, but the client sends a getattr request to another,
replica MDS, then the path built here will be just '#INODE-NUMBER'
because the CInode hasn't been linked yet, and the replica
MDS will keep retrying until the auth MDS flushes the mdlog and
C_MDS_openc_finish and link_primary_inode are called, up to
5 seconds later.
Xiubo Li [Tue, 12 Apr 2022 11:40:02 +0000 (19:40 +0800)]
qa: add file sync stuck test support
This will test the file sync of a directory, which may be stuck for
up to 5 seconds. This is because the related code waits for
all the unsafe requests to get a safe reply from the MDSes, but the MDSes
consider it unnecessary to flush the mdlog immediately
after the early reply, so the mdlog is only flushed every 5 seconds
by the tick thread.
This should have been fixed in kclient and libcephfs by triggering
an mdlog flush before waiting for the requests' safe replies.
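A rough sketch of the idea behind such a test, not the actual qa code: create an entry on a CephFS mount (the mount path and the 4-second threshold are arbitrary assumptions) and time an fsync of the directory, which used to stall close to the 5-second mdlog tick:
```
import os
import time

mount_dir = "/mnt/cephfs/testdir"            # placeholder CephFS mount path
os.makedirs(mount_dir, exist_ok=True)
with open(os.path.join(mount_dir, "file"), "w") as f:
    f.write("data")                          # the create leaves an unsafe MDS request in flight

dir_fd = os.open(mount_dir, os.O_RDONLY)
start = time.monotonic()
os.fsync(dir_fd)                             # fsync the directory itself
elapsed = time.monotonic() - start
os.close(dir_fd)
assert elapsed < 4, f"directory fsync stalled for {elapsed:.1f}s"
```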
Xiubo Li [Tue, 12 Apr 2022 04:37:13 +0000 (12:37 +0800)]
qa: add filesystem sync stuck test support
This will test the sync of the filesystem, which may be stuck for
up to 5 seconds. This is because the related code waits
for all the unsafe requests to get a safe reply from the MDSes, but the
MDSes consider it unnecessary to flush the mdlog immediately
after the early reply, so the mdlog is only flushed every 5 seconds
by the tick thread.
This should have been fixed in kclient and libcephfs by triggering
an mdlog flush before waiting for the requests' safe replies.
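Similarly, a rough sketch of the filesystem-wide variant, again with a placeholder mount path and an arbitrary threshold, using coreutils `sync --file-system` to issue syncfs(2) on the mount:
```
import os
import subprocess
import time

mount_dir = "/mnt/cephfs"                    # placeholder CephFS mount path
open(os.path.join(mount_dir, "sync_test_file"), "w").close()   # dirty some metadata

start = time.monotonic()
subprocess.run(["sync", "--file-system", mount_dir], check=True)  # syncfs(2) on the mount
elapsed = time.monotonic() - start
assert elapsed < 4, f"filesystem sync stalled for {elapsed:.1f}s"
```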
Ronen Friedman [Tue, 17 May 2022 16:13:59 +0000 (16:13 +0000)]
osd/scrub: restart snap trimming after a failed scrub
A followup to PR#45640.
In PR#45640 snap trimming was restarted (if blocked) after all
successful scrubs, and after most scrub failures. Still, a few
failure scenarios did not handle snaptrim restart correctly.
The current PR cleans up and fixes the interaction between
scrub initiation/termination (for whatever cause) and snap
trimming.
Jos Collin [Wed, 4 May 2022 13:03:12 +0000 (18:33 +0530)]
qa: fix is_addr_blocklisted() to get blocklisted clients from 'osd dump'
With the introduction of the range blocklist, the 'blocklist ls' command outputs
two lists. It's also straightforward to get the blocklisted clients directly
from 'osd dump' to avoid regression.
Fixes: https://tracker.ceph.com/issues/55516
Signed-off-by: Jos Collin <jcollin@redhat.com>
(cherry picked from commit 47de5d79b8190458847072aae1c29db7d6a9b66b)
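A hedged sketch of reading the blocklist straight from 'osd dump'; the field names assume the JSON dump carries a 'blocklist' map and, with range blocklisting, a 'range_blocklist' map:
```
import json
import subprocess

dump = json.loads(subprocess.run(
    ["ceph", "osd", "dump", "--format=json"],
    check=True, capture_output=True, text=True).stdout)

# address -> expiration time maps; treat missing keys as empty on old clusters
blocklisted = dump.get("blocklist", {})
ranges = dump.get("range_blocklist", {})
print(sorted(blocklisted), sorted(ranges))
```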
Xiubo Li [Wed, 16 Mar 2022 09:15:57 +0000 (17:15 +0800)]
mds, client: only send the metrics supported by MDSes
For old ceph clusters the clients won't send any metrics to
them by default unless this commit has been backported, but the
option 'client_collect_and_send_global_metrics' can still
be used to enable it manually.
This fixes the crash bug seen when upgrading from old ceph clusters,
whose MDSes would crash once they received unknown metrics.
Greg Farnum [Tue, 30 Nov 2021 18:29:46 +0000 (18:29 +0000)]
osd: Check range_blocklist in is_blocklisted(): we actually blocklist ranges
Carry a parallel map from cidr addresses to a new
range_bits class (stored entirely as ephemeral state) so that we
don't need to re-compute masks and bit mappings too often, and to
separate out the unpleasant ipv6 bit mapping logic. Then check
against those with range_bits::matches() the same way we check
for equality on specific-entity matches. Nice and simple loops!
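As a conceptual illustration only (not the range_bits C++ code), matching an entity address against a blocklisted CIDR range comes down to masking and comparing the network bits, which Python's ipaddress module can show:
```
import ipaddress

def matches(range_cidr, addr):
    """Conceptual stand-in for range_bits::matches(): true when addr
    falls inside the blocklisted CIDR range."""
    return ipaddress.ip_address(addr) in ipaddress.ip_network(range_cidr, strict=False)

assert matches("192.168.0.0/24", "192.168.0.77")
assert not matches("192.168.0.0/24", "192.169.0.77")
assert matches("2001:db8::/32", "2001:db8::1")
```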
Greg Farnum [Tue, 2 Nov 2021 00:38:50 +0000 (00:38 +0000)]
osdmap: convert get_blocklist() to provide the entity/IP and range blocklists
Providing a non-range-aware blocklist accessor would just be
asking for trouble, so don't.
The ugly part of this is how the Objecter is currently just
throwing the range blocklist on the end of its own list. The in-tree
callers are okay with this, and I'd like to look at removing the
blocklist events API from librados entirely -- it exposes "OSD-only"
state to clients and, as evidenced by this patch series, is not
particularly stable.
Greg Farnum [Wed, 8 Dec 2021 21:32:58 +0000 (21:32 +0000)]
mon: take blocklist ranges as a subcommand, not implicitly from address format
I discovered in testing with CephFS that this tends to interpret client IPs
(which don't have ports, but do have nonces) as invalid ranges. So give it
a separate input keyword that has to be applied first.
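For reference, a hedged sketch of the resulting CLI shape, assuming the subcommand form is 'ceph osd blocklist range add|rm' and using an example range:
```
import subprocess

# ranges go through an explicit 'range' keyword, so plain client
# addresses are never parsed as ranges
subprocess.run(["ceph", "osd", "blocklist", "range", "add", "192.168.10.0/24"],
               check=True)
subprocess.run(["ceph", "osd", "blocklist", "range", "rm", "192.168.10.0/24"],
               check=True)
```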
Greg Farnum [Mon, 25 Oct 2021 19:53:04 +0000 (19:53 +0000)]
msg: common: allow entity_addr_t to store a CIDR address range
This required very little change to the existing code. Use with care, because
existing code expects an IP address instead of a range, but it saves on
writing a new parser.
Greg Farnum [Tue, 2 Nov 2021 00:34:34 +0000 (00:34 +0000)]
mds: Server: Simplify apply_blocklist and usage of the OSDMap's blocklist
This previously re-implemented a bunch of the OSDMap::is_blocklisted()
function, and wasn't actually any faster to run -- the list of new blocklists
may be smaller than the full set, but OSDMap::blocklist is an unordered_map
of constant lookup time so it shouldn't slow things down. More importantly,
this is much simpler, less likely to be buggy from duplicate code, and lets
the MDS off the hook for dealing with range blocklisting.
Greg Farnum [Mon, 1 Nov 2021 23:52:53 +0000 (23:52 +0000)]
client: Simplify blocklist tracking and interface
I'm not sure if the blocklist events tracking in Client.cc was ever
the simplest way to track that state, but it definitely isn't now. We
can just hand our addr_vec to the OSDMap and ask it -- it handles
version compatibility issues and, happily, means the Client doesn't
need to learn to deal with ranges directly.
Laura Flores [Tue, 24 May 2022 17:21:46 +0000 (17:21 +0000)]
qa/config: override `bluestore_zero_block_detection` default for rados suite tests
By default, `bluestore_zero_block_detection` is false since it interacts
negatively with some RBD and CephFS features. We still want this enabled
for rados teuthology tests though so we can use it for large-scale testing.
Adding this setting to the rados teuthology config file will override the global
config setting and enable `bluestore_zero_block_detection` only for tests in
the teuthology rados suite.
Laura Flores [Fri, 6 May 2022 17:14:10 +0000 (17:14 +0000)]
os/bluestore: turn bluestore zero block detection off by default
This commit adds a new global config option called `bluestore_zero_block_detection`.
This option is used at a dev level to skip zero bufferlist writes in large-scale
testing scenarios.
Store tests originally written for the BlueStore zero block detection feature
now account for the global config option; these tests will only run if
`bluestore_zero_block_detection` is set to "true".
Fixes: https://tracker.ceph.com/issues/55521
Signed-off-by: Laura Flores <lflores@redhat.com>
(cherry picked from commit 4a8180952c292aa2f8ee656685fcdddc3f0f0a4f)
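As a conceptual illustration only (the real check lives in BlueStore's C++ write path), zero block detection amounts to recognising an all-zero buffer so its write can be skipped:
```
def is_zero_block(buf):
    """Conceptual stand-in for the check: an all-zero buffer can be
    skipped rather than written out."""
    return buf.count(0) == len(buf)

assert is_zero_block(bytes(4096))
assert not is_zero_block(b"\x00" * 4095 + b"\x01")
```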
Nizamudeen A [Sun, 8 May 2022 14:27:34 +0000 (19:57 +0530)]
mgr/dashboard: smart data for devices with scsi protocol
In the dashboard, we've been showing SMART data only for HDD devices with the ATA
protocol. Otherwise we show a "No SMART data found" error, which is
clearly misleading since SMART data is returned even in the API call.
So this PR shows the SMART data for HDD devices
that use the SCSI protocol too.
Nizamudeen A [Fri, 6 May 2022 15:19:18 +0000 (20:49 +0530)]
mgr/dashboard: fix smart data error
The error in the log was this:
```
"/usr/share/ceph/mgr/dashboard/services/ceph_service.py", line 253, in _get_smart_data_by_device
May 06 07:38:39 occldlr750-1.occl208.lab conmon[2142938]: svc_type, svc_id = daemon.split('.')
May 06 07:38:39 occldlr750-1.occl208.lab conmon[2142938]: ValueError: too many values to unpack (expected 2)
```
On the cluster, in the output of `ceph device ls-by-host`,
the first device is a mon and its daemon name is mon.occldlr750-1.occl208.lab.
In our dashboard code, when fetching the smart data we have a line like
this:
`svc_type, svc_id = daemon.split('.')`
For that mon, `daemon.split('.')` returns ['mon', 'occldlr750-1', 'occl208', 'lab'], so the svc_id gets split into several pieces. I am changing that so that we split only on the first occurrence of the dot and consider everything after it as the svc_id of the device.
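A minimal sketch of the resulting behaviour; only the split call changes, the surrounding dashboard code is omitted:
```
daemon = 'mon.occldlr750-1.occl208.lab'

# split on the first dot only, so the dotted hostname stays intact
svc_type, svc_id = daemon.split('.', 1)
assert (svc_type, svc_id) == ('mon', 'occldlr750-1.occl208.lab')
```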
Nizamudeen A [Thu, 5 May 2022 17:43:38 +0000 (23:13 +0530)]
mgr/dashboard: devices with same UID causes multiselection
In the Physical Disks page, the UIDs for multiple devices come in
as the same value, which causes the selection to go haywire and select multiple
rows with the same UID. The UID is generated in the frontend service call
itself; I added some more parameters to it in order to make it more
unique.
The second issue is the selected count getting multiplied
each time the table is updated or refreshed: we push the previously
selected rows again, which causes the number of selections to multiply.
Zac Dover [Mon, 30 May 2022 13:32:06 +0000 (23:32 +1000)]
doc/start: update "memory" in hardware-recs.rst
This PR corrects some usage errors in the "Memory" section
of the hardware-recommendations.rst file. It also closes
some parentheses that were opened but never closed.
In `master` the milestone step exits and causes the remaining tasks not to be run. I previously tried the `continue-on-error` flag, but it didn't work, so let's try putting that step at the end.