Nizamudeen A [Wed, 1 Jun 2022 07:40:14 +0000 (13:10 +0530)]
mgr/dashboard: fix drain e2e failure
Cypress sometimes fails to register the click, which causes the
deselect/select to not happen properly. Deselecting the row immediately
after performing the action makes the test pass in Cypress.
Fixes: https://tracker.ceph.com/issues/55741
Signed-off-by: Nizamudeen A <nia@redhat.com>
Zac Dover [Mon, 30 May 2022 13:32:06 +0000 (23:32 +1000)]
doc/start: update "memory" in hardware-recs.rst
This PR corrects some usage errors in the "Memory" section
of the hardware-recommendations.rst file. It also closes
some parentheses that were opened but never closed.
Ronen Friedman [Tue, 17 May 2022 16:13:59 +0000 (16:13 +0000)]
osd/scrub: restart snap trimming after a failed scrub
A followup to PR#45640.
In PR#45640 snap trimming was restarted (if blocked) after all
successful scrubs, and after most scrub failures. Still, a few
failure scenarios did not handle snaptrim restart correctly.
The current PR cleans up and fixes the interaction between
scrub initiation/termination (for whatever cause) and snap
trimming.
Xuehan Xu [Fri, 20 May 2022 09:23:03 +0000 (17:23 +0800)]
crimson/os/seastore/segment_cleaner: add dedicated backref trimming process
Space reclamation needs to merge backrefs up to the point where the latest
release of extents within the scope of the reclamation process happened.
When the journal size is large, that merge may generate a transaction
record whose size exceeds the max record size threshold. So we need a
dedicated backref trimming process that merges most of the backrefs before
space reclamation happens.
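The batching idea, sketched in Python purely for illustration (the actual
code is C++ in crimson/os/seastore, and these names are made up):

    def trim_backrefs(pending, record_size, max_record_bytes):
        """Merge backrefs in bounded batches so that no single
        transaction record exceeds the max record size threshold."""
        batch, size = [], 0
        for ref in pending:
            if batch and size + record_size(ref) > max_record_bytes:
                yield batch              # flush one bounded transaction
                batch, size = [], 0
            batch.append(ref)
            size += record_size(ref)
        if batch:
            yield batch                  # final partial batch

    # e.g. 10 refs of 40 bytes each under a 128-byte cap come out as
    # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
    print(list(trim_backrefs(range(10), lambda r: 40, 128)))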
This commit also fixes https://tracker.ceph.com/issues/55692 by
retrying the in-flight backref trimming transaction when it is
invalidated by other transactions on the ROOT block.
qa: set, get, list and remove custom metadata for snapshot
The following tests are added (a CLI-driven sketch of the basic pattern follows the list):
1. Set custom metadata for a subvolume snapshot.
2. Set custom metadata for a subvolume snapshot (idempotency).
3. Get custom metadata for a specified key.
4. Get custom metadata when the specified key does not exist (expecting error ENOENT).
5. Get custom metadata when no key-value pair has been added, i.e. the metadata section does not exist (expecting error ENOENT).
6. Update the value for an existing key in custom metadata.
7. List custom metadata of a subvolume snapshot.
8. List custom metadata of a subvolume snapshot when no key-value pair has been added (expecting an empty JSON dictionary).
9. Remove custom metadata for a specified key.
10. Remove custom metadata when the specified key does not exist (expecting error ENOENT).
11. Remove custom metadata when no key-value pair has been added, i.e. the metadata section does not exist (expecting error ENOENT).
12. Remove custom metadata with the --force option.
13. Remove custom metadata with the --force option when the specified key does not exist (expecting the command to succeed because of the '--force' option).
14. Remove a subvolume snapshot and verify whether the metadata for the snapshot is removed or not.
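A minimal sketch of the set/get/rm pattern these tests exercise,
driven through the CLI via subprocess rather than the qa test harness
(hypothetical volume/subvolume/snapshot/key names, and assuming a
running cluster with the ceph CLI on PATH):

    import json
    import subprocess

    def fs_cmd(*args) -> str:
        return subprocess.check_output(('ceph', 'fs') + args, text=True)

    VOL, SUBVOL, SNAP = 'vol1', 'subvol1', 'snap1'   # hypothetical names

    # set, then set again with the same pair: the second call is a no-op
    fs_cmd('subvolume', 'snapshot', 'metadata', 'set', VOL, SUBVOL, SNAP,
           'pvc_name', 'csi-pvc-0001')
    fs_cmd('subvolume', 'snapshot', 'metadata', 'set', VOL, SUBVOL, SNAP,
           'pvc_name', 'csi-pvc-0001')

    # get returns the stored value; ls returns all pairs as a JSON dict
    assert fs_cmd('subvolume', 'snapshot', 'metadata', 'get', VOL, SUBVOL,
                  SNAP, 'pvc_name').strip() == 'csi-pvc-0001'
    assert json.loads(fs_cmd('subvolume', 'snapshot', 'metadata', 'ls', VOL,
                             SUBVOL, SNAP)) == {'pvc_name': 'csi-pvc-0001'}

    # rm removes the key; --force makes removing a missing key succeed
    fs_cmd('subvolume', 'snapshot', 'metadata', 'rm', VOL, SUBVOL, SNAP,
           'pvc_name')
    fs_cmd('subvolume', 'snapshot', 'metadata', 'rm', VOL, SUBVOL, SNAP,
           'no_such_key', '--force')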
docs: set, get, list and remove custom metadata for snapshot
Set custom metadata on the snapshot as a key-value pair using::
$ ceph fs subvolume snapshot metadata set <vol_name> <subvol_name> <snap_name> <key_name> <value> [--group_name <subvol_group_name>]
note: If the key_name already exists then the old value will get replaced by the new value.
note: The key_name and value should be a string of ASCII characters (as specified in python's string.printable). The key_name is case-insensitive and always stored in lower case.
note: Custom metadata on a snapshot is not preserved when snapshotting the subvolume, and hence is also not preserved when cloning the subvolume snapshot.
Get custom metadata set on the snapshot using the metadata key::
$ ceph fs subvolume snapshot metadata get <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>]
List custom metadata (key-value pairs) set on the snapshot using::
$ ceph fs subvolume snapshot metadata ls <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
Remove custom metadata set on the snapshot using the metadata key::
$ ceph fs subvolume snapshot metadata rm <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>] [--force]
Using the '--force' flag allows the command to succeed even if the metadata key does not exist (the command would otherwise fail).
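As an illustration (hypothetical volume, subvolume, snapshot, and key names; the ls output shown is indicative)::
$ ceph fs subvolume snapshot metadata set vol1 subvol1 snap1 pvc_name csi-pvc-0001
$ ceph fs subvolume snapshot metadata get vol1 subvol1 snap1 pvc_name
csi-pvc-0001
$ ceph fs subvolume snapshot metadata ls vol1 subvol1 snap1
{"pvc_name": "csi-pvc-0001"}
$ ceph fs subvolume snapshot metadata rm vol1 subvol1 snap1 pvc_name --force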
mgr/volumes: set, get, list and remove custom metadata for snapshot
If CephFS in ODF is configured in external mode, users would like to use
subvolume snapshot metadata to store some OpenShift-specific
information, such as the PVC/PV/namespace that the subvolumes/snapshots
are coming from. For RBD volumes, it is possible to add metadata
to the images using the 'rbd image-meta' command.
However, this feature is not available for CephFS volumes.
We'd like to request this capability.
crimson/osd: reindent the trace-related fragment of main()
The issue is that we're mixing tabs and spaces, which hurts readability
when the code is read on GitHub during a review (as happened recently
with PR #46296).
Xiubo Li [Thu, 31 Mar 2022 07:16:49 +0000 (15:16 +0800)]
client: stop retrying the request when exceeding 256 times
The type of 'retry_attempt' in 'MetaRequest' is 'int', while in
'ceph_mds_request_head' the type of 'num_retry' is '__u8'. So in
case a request retries more than 256 times, the MDS will receive
an incorrect retry seq.
In this case it's usually a bug in the MDS, and continuing to retry the
request makes no sense. For now let's limit it to 256. In the future
this could be fixed in the ceph code, so that the hardcoded limit here
can be avoided.
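The truncation, illustrated in plain Python (the client code itself is
C++; this just shows the arithmetic):

    # 'num_retry' is a __u8 on the wire, so only the low 8 bits survive:
    for attempt in (255, 256, 257):
        print(attempt, "->", attempt & 0xFF)
    # 255 -> 255
    # 256 -> 0    (what looks like a fresh first attempt to the MDS)
    # 257 -> 1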
Fixes: https://tracker.ceph.com/issues/55144
Signed-off-by: Xiubo Li <xiubli@redhat.com>
As per https://www.json.org/json-en.html, JSON encodes a bool as
true or false, without quotes. Before this change, the quotes
were always added when encoding boolean values.
This change, however, is not backward compatible.
encode_json()'s bool overload is used by RGW, which uses the JSONObj
defined in common/ceph_json.h to decode JSON-encoded structs.
The decoder does not differentiate bool from str when decoding a boolean
value, even though it could have checked the "quoted" member variable
of JSONObj to validate the type of the value. So we should be fine.
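The distinction, illustrated with Python's json module rather than the
C++ encoder:

    import json

    # a conforming encoder emits the bare literals true/false ...
    print(json.dumps({"enabled": True}))    # {"enabled": true}
    # ... whereas the old behavior was equivalent to encoding a string:
    print(json.dumps({"enabled": "true"}))  # {"enabled": "true"}

    # a tolerant decoder (like RGW's, per the message above) can accept
    # either form by normalizing strings on the way in:
    def as_bool(v):
        return v if isinstance(v, bool) else str(v).lower() == "true"

    for raw in ('{"enabled": true}', '{"enabled": "true"}'):
        print(as_bool(json.loads(raw)["enabled"]))  # True, True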
But gcc-toolset-8-annobin provides this file, and upgrading to
gcc-toolset-11 does not help; see https://centos.pkgs.org/8-stream/centos-appstream-x86_64/gcc-toolset-11-annobin-plugin-gcc-10.23-1.el8.x86_64.rpm.html
So the intermediate solution is to disable the plugin if
we want to use gcc-toolset to build RPM packages.
In this change, _annotated_build is undefined to prevent the compiler
from adding the extra information to the binary. In general this change
should be safe, though without this information it would be hard to tell
whether the binary is hardened or what ABI version it expects. See
also https://fedoraproject.org/wiki/Changes/Annobin
Rishabh Dave [Thu, 19 May 2022 18:29:25 +0000 (23:59 +0530)]
qa/cephfs: remove temporary files
These temporary files don't matter for test execution with teuthology,
but they do matter for execution with vstart_runner.py, since a test
fails if these files already exist. And tests are often run repeatedly
with vstart_runner.py, unlike with teuthology.
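The general rerun-safe pattern, as a generic Python sketch (not the
actual qa code; the path and contents are made up):

    import os
    import tempfile

    path = os.path.join(tempfile.gettempdir(), "cephfs_test_scratch")

    # remove any leftover from a previous run first, so the test can be
    # run repeatedly under vstart_runner.py without failing on setup
    if os.path.exists(path):
        os.remove(path)

    with open(path, "w") as f:
        f.write("scratch data")
    # ... exercise the code under test against `path` ...
    os.remove(path)  # and clean up afterwards as well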
Fixes: https://tracker.ceph.com/issues/55719
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Laura Flores [Mon, 16 May 2022 22:59:42 +0000 (17:59 -0500)]
qa/suites/rados/thrash-erasure-code-big/thrashers: add `osd max backfills` setting to mapgap and pggrow
All `rados/thrash-erasure-code-big` tests that die due to the “wait_for_recovery” timeout have one thing in common: they contain either `thrashers/pggrow` or `thrashers/mapgap`.
The difference between pggrow and mapgap and all the other, non-offending thrashers (default, careful, fastread, and morepggrow) is that the former two lack an override setting for `osd max backfills`. `osd max backfills` is the maximum number of backfill operations allowed to or from an OSD; the higher the number, the quicker the recovery. By default, this value is 1. On all of the non-offending thrashers, the default value of 1 gets overridden in their .yaml files with a value > 1. This is not the case for pggrow and mapgap, which lack an `osd max backfills` override setting.
The mclock op scheduler is known to override `osd max backfills` with a high value, but all of the thrash-erasure-code-big thrashers have their op queue set to “debug_random”, which chooses randomly between op queues (the debug_random op queue is set to override the default mclock_scheduler in qa/config/rados.yaml). So, coupled with the “debug_random” op queue, the low `osd max backfills` setting is causing some tests to time out in recovery.
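For reference, this is the kind of override the passing thrashers carry,
shown as the Python dict that the teuthology yaml fragment parses into
(the value is illustrative; each thrasher file picks its own):

    # overrides -> ceph -> conf -> osd -> 'osd max backfills'
    override = {
        'overrides': {
            'ceph': {
                'conf': {
                    'osd': {
                        'osd max backfills': 6,  # illustrative; default is 1
                    },
                },
            },
        },
    }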
WITHOUT `osd max backfills`, as they are now, “mapgap” and “pggrow” tests die due to timed-out recovery about 17/100 times, as seen here with a pggrow test: http://pulpito.front.sepia.ceph.com/lflores-2022-05-18_14:24:29-rados:thrash-erasure-code-big-master-distro-default-smithi/
WITH `osd max backfills` specified, as I have suggested in this PR, 99/100 tests passed, with one test failing for a different reason:
http://pulpito.front.sepia.ceph.com/lflores-2022-05-17_22:40:27-rados:thrash-erasure-code-big-master-distro-default-smithi/
I also scheduled 145 tests WITH `osd max backfills` that are a mix of pggrow and mapgap thrashers. 144/145 tests passed, with one test failing for a different reason. http://pulpito.front.sepia.ceph.com/lflores-2022-05-17_15:27:54-rados:thrash-erasure-code-big-master-distro-default-smithi/
Fixes: https://tracker.ceph.com/issues/51076
Signed-off-by: Laura Flores <lflores@redhat.com>
Adam King [Fri, 1 Apr 2022 12:20:28 +0000 (08:20 -0400)]
mgr/cephadm: make UpgradeState from_json a bit safer
This way, for downgrades to the version this lands in or any
later one, having added new parameters to UpgradeState
shouldn't break anything. We can't do much about downgrades
from this version to older ones, but this should help in
the future.
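One way to get that safety, sketched here with illustrative field names
(not necessarily the exact cephadm code), is to filter out unknown keys
before handing the dict to the constructor:

    import inspect
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class UpgradeState:
        target_name: str                  # field names are illustrative
        progress_id: str
        error: Optional[str] = None

        @classmethod
        def from_json(cls, data: dict) -> 'UpgradeState':
            # drop keys this (possibly older) version doesn't know
            # about, instead of passing them all to __init__ and
            # raising TypeError on the first unknown parameter
            known = inspect.signature(cls.__init__).parameters
            return cls(**{k: v for k, v in data.items() if k in known})

    # a blob written by a newer version with an extra field still loads:
    print(UpgradeState.from_json({'target_name': 'quay.io/ceph/ceph:v17',
                                  'progress_id': 'x', 'new_field': True}))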
Adam King [Mon, 28 Mar 2022 16:10:15 +0000 (12:10 -0400)]
mgr/cephadm: split _do_upgrade into sub functions
This function was around 500 lines and difficult to work
with. Splitting it into sub functions should hopefully make
it a bit easier to understand and make changes to.
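The general shape of such a refactor (hypothetical helper names and
stubbed bodies; the real sub-functions in mgr/cephadm differ):

    class UpgradeDriver:
        """A thin driver that delegates each phase to a named helper,
        so each piece can be read, tested, and changed in isolation."""

        def do_upgrade(self) -> None:
            target = self._check_target_version()
            for daemon in self._compute_upgrade_order(target):
                self._upgrade_daemon(daemon, target)
            self._finalize_upgrade(target)

        def _check_target_version(self) -> str:
            return "v17.2.0"                    # stubbed for the sketch

        def _compute_upgrade_order(self, target: str) -> list:
            return ["mgr.a", "mon.a", "osd.0"]  # stubbed for the sketch

        def _upgrade_daemon(self, daemon: str, target: str) -> None:
            print(f"upgrading {daemon} to {target}")

        def _finalize_upgrade(self, target: str) -> None:
            print(f"cluster now at {target}")

    UpgradeDriver().do_upgrade()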
Rishabh Dave [Thu, 19 May 2022 15:33:54 +0000 (21:03 +0530)]
cephfs-shell: check version before importing Cmd2ArgparseError
Cmd2ArgparseError is available only from cmd2 version 1.0.1 onwards;
before that, SystemExit(2) is raised instead. This commit creates an
empty Cmd2ArgparseError class for earlier versions so that a similar
error won't creep up again.
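A sketch of the shim the commit describes (the exact placement and
version check inside cephfs-shell may differ):

    import cmd2

    def _version_tuple(v: str) -> tuple:
        return tuple(int(p) for p in v.split('.')[:3] if p.isdigit())

    if _version_tuple(cmd2.__version__) >= (1, 0, 1):
        from cmd2.exceptions import Cmd2ArgparseError
    else:
        class Cmd2ArgparseError(Exception):
            """Empty placeholder so the name exists on old cmd2,
            which raises SystemExit(2) instead of this exception."""
            pass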
Fixes: https://tracker.ceph.com/issues/55716
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Soumya Koduri [Fri, 6 May 2022 17:10:12 +0000 (22:40 +0530)]
rgw/qa: Run tests on multiple cloudtier config
Run cloudtier tests with parameter 'retain_head_object'
set to true and false.
However, having multiple cloudtier storage classes in the same task
increases the transition time and results in spurious failures.
Hence, until there is a consistent way of running the tests without
having to depend on lc_debug_interval, one of the configs is disabled
for now.