Ville Ojamo [Thu, 10 Apr 2025 10:34:57 +0000 (17:34 +0700)]
doc/radosgw: Promptify CLI, cosmetic fixes
Use the more modern prompt block for CLI commands
and use the right one ($ vs #).
Fix indentation on JSON example outputs and
some CLI command switches.
Add some arguably missing commas in JSON example output.
Add a full stop at the end of a one-sentence paragraph.
Remove extra comma mid-sentence in another.
Fix missing backslashes or typo at end of multiline commands.
Make underlines below section headings as long as the heading text.
Fix hyperlinks. Fix list items prefixed with - instead of *.
Format configuration syntax in the middle of text as code.
Fix typo "PI" to "API" and remove extra space.
Remove colons at the end of section headers in a few places.
Use Title Case in section titles consistently with short words lowercase.
Possibly controversial: don't add whitespace before and
after main title section header text.
Possibly controversial: don't indent line continuation
backslashes, leave only 1 space before them.
test/librbd/test_notify.py: force line-buffered output
"master" and "slave" invocations are intended to run in parallel and
coordinate between themselves. Ensure that their respective output is
properly timestamped and ordered in the teuthology.log file.
mgr/dashboard: Fix empty ceph version in GET api/hosts
Fixes: https://tracker.ceph.com/issues/70821
Due to pagination, the host list is now fetched from the orchestrator, which caused a regression: in the orchestrator list the ceph version is always marked empty.
Caused by https://github.com/ceph/ceph/pull/52154
Also fixed the tests, as adding the new version field caused the whole mocked JSON object to fail in the tests.
test/librbd/test_notify.py: conditionally ignore some errors
In 2020, commit 01ff1530544c ("librbd: make all maintenance op
notifications async") introduced a backwards compatibility issue where
if exclusive lock is held by an older (octopus and below) client and
a maintenance op is proxied to it from a newer client, the newer client
interprets the notification for the in-place completion of the op as
the notification for the acceptance of an async request and expects
another notification for the completion of the op which never comes.
In 2021, this bug was discovered and test_notify.py was amended to
ignore it in commit 9c0b239d70cd ("qa/upgrade: conditionally disable
update_features tests").
However the two update_features tests that started hanging and got
disabled weren't the only ones to misbehave. Rename, create_snap and
remove_snap tests were affected too but didn't hang or fail because
librbd also filtered certain error codes like EEXIST and EINVAL.
Taking rename as an example:
1. a rename request is sent from a newer client (N) to an octopus
client (O)
2. O successfully renames the image and sends a completion notification
with result = 0
3. N mistakes it for async request acceptance
4. after a timeout, N resends the rename request to O
5. O sees that an image already has that name (after step 2) and sends
a completion notification with result = EEXIST
6. N interprets it as async request denial and bubbles up EEXIST,
however right before returning control from Operations::rename()
EEXIST is filtered and 0 is returned to the user
So back then rename, create_snap and remove_snap tests continued to
pass but started taking 30+ seconds instead of completing immediately.
In 2025 we did away with filtering error codes in commit 66508cdaa190
("librbd: stop filtering async request error codes") and these tests
started to fail. Following the approach taken in commit 9c0b239d70cd
("qa/upgrade: conditionally disable update_features tests"), let's
ignore these failures based on the same environment variable.
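For illustration, a minimal sketch of how such conditional ignoring might look; the environment-variable name and helper are assumptions for this sketch, not the actual patch:
```python
import errno
import os

# Assumption: the gating environment variable set by the upgrade suite when the
# peer holding the exclusive lock may be octopus or older; the real name comes
# from the earlier "conditionally disable update_features tests" change.
IGNORE_OLD_PEER_ERRORS = os.environ.get('RBD_DISABLE_UPDATE_FEATURES', 'false') == 'true'


def run_ignoring_old_peer_errors(op, ignorable_errnos):
    """Run a maintenance op; if the exclusive lock may be held by an old peer,
    tolerate the spurious errno described above (e.g. EEXIST for rename,
    EEXIST/EINVAL for the snap ops)."""
    try:
        op()
    except OSError as e:
        if IGNORE_OLD_PEER_ERRORS and e.errno in ignorable_errnos:
            return  # the op actually succeeded on the old peer (steps 2-6)
        raise


# Hypothetical usage with a callable that performs the rename:
# run_ignoring_old_peer_errors(lambda: rename_image('new-name'), {errno.EEXIST})
```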
Ronen Friedman [Thu, 30 Jan 2025 09:27:58 +0000 (03:27 -0600)]
osd/scrub: discard repair_oinfo_oid()
repair_oinfo_oid(), called on every scrub, has a very specific
functionality: fix the object ID specified in the Object Info
attribute, if different from the ID of the owning object.
This fix was added in 2017, as a response to a unique failure
scenario that was observed in Sepia - probably following a
filesystem bug. See https://tracker.ceph.com/issues/18409 &
https://tracker.ceph.com/issues/20471.
The limited functionality of repair_oinfo_oid() -
only repairing this one specific issue, and only if the OI_ATTR
exists and is decodable - does not justify the overhead of
running it every scrub.
Kefu Chai [Sun, 30 Mar 2025 03:59:12 +0000 (11:59 +0800)]
cephfs-top: Removes unused `global` statements
Recent flake8 runs were failing with:
```
py3: flake8==7.2.0,mccabe==0.7.0,pip==25.0.1,pycodestyle==2.13.0,pyflakes==3.3.0,setuptools==75.8.0,wheel==0.45.1
py3: commands[0] /home/jenkins-build/build/workspace/ceph-pull-requests/src/tools/cephfs/top> flake8 --ignore=W503 --max-line-length=100 cephfs-top
cephfs-top:344:9: F824 `global fs_list` is unused: name is never assigned in scope
cephfs-top:466:13: F824 `global current_states` is unused: name is never assigned in scope
cephfs-top:872:9: F824 `global metrics_dict` is unused: name is never assigned in scope
cephfs-top:872:9: F824 `global current_states` is unused: name is never assigned in scope
cephfs-top:911:9: F824 `global fs_list` is unused: name is never assigned in scope
cephfs-top:981:9: F824 `global current_states` is unused: name is never assigned in scope
cephfs-top:1126:13: F824 `global current_states` is unused: name is never assigned in scope
py3: exit 1 (0.77 seconds) /home/jenkins-build/build/workspace/ceph-pull-requests/src/tools/cephfs/top> flake8 --ignore=W503 --max-line-length=100 cephfs-top pid=2309605
py3: FAIL code 1 (8.15=setup[7.38]+cmd[0.77] seconds)
evaluation failed :( (8.24 seconds)
```
Since these variables are only being referenced and not assigned within
their scopes, the `global` declarations are unnecessary and can be
safely removed. This change:
- Removes all flagged `global` statements
- Fixes the failing flake8 checks in the CI pipeline
- Maintains the original code behavior as variable references still work without the `global` keyword
The `global` keyword is only needed when assigning to global variables
within a function scope, not when simply referencing them.
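A minimal illustration of why the removal is safe (illustrative names, not the actual cephfs-top code):
```python
fs_list = ['cephfs_a', 'cephfs_b']   # module-level (global) variable


def show_filesystems():
    # No `global fs_list` needed: a plain read of the name falls through to
    # module scope anyway, which is exactly what flake8 F824 points out.
    for fs in fs_list:
        print(fs)


show_filesystems()
```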
Kefu Chai [Sun, 30 Mar 2025 03:48:28 +0000 (11:48 +0800)]
qa: Remove unnecessary global statements in tests
Removes unused `global` statements from Python test files to fix flake8
F824 errors.
Recent flake8 runs were failing with:
```
./tasks/radosgw_admin.py:330:5: F824 `global log` is unused: name is never assigned in scope
./workunits/dencoder/test_readable.py:99:5: F824 `global incompat_paths` is unused: name is never assigned in scope
./workunits/dencoder/test_readable.py:164:5: F824 `global backward_compat` is unused: name is never assigned in scope
./workunits/dencoder/test_readable.py:165:5: F824 `global fast_shouldnt_skip` is unused: name is never assigned in scope
```
Since these variables are only being referenced and not assigned within
their scopes, the `global` declarations are unnecessary and can be
safely removed. This change:
- Removes all flagged `global` statements
- Fixes the failing flake8 checks in the CI pipeline
- Maintains the original code behavior as variable references still work
without the `global` keyword
The `global` keyword is only needed when assigning to global variables
within a function scope, not when simply referencing them.
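For contrast, a short sketch of the one case where `global` is genuinely required, namely rebinding the name (again purely illustrative):
```python
backward_compat = {'legacy': True}   # module-level variable


def reset_backward_compat():
    global backward_compat           # required: the next line rebinds the name;
    backward_compat = {}             # without `global` it would create a local


reset_backward_compat()
print(backward_compat)               # prints {}
```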
ceph-volume: allow zapping partitions on multipath devices
ceph-volume refuses to zap a device if it is a partition on a multipath
device due to an overly strict condition. This change ensures that only
full mapper devices (excluding partitions) are blocked from being zapped,
allowing partitions on multipath devices to be processed correctly.
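A hedged sketch of the kind of check described; the `Device` type and its attribute names are assumptions for illustration, not the actual ceph-volume API:
```python
from dataclasses import dataclass


@dataclass
class Device:
    """Illustrative stand-in for a ceph-volume device object."""
    is_mapper: bool      # a /dev/mapper/* (e.g. multipath) device
    is_partition: bool   # a partition rather than a whole device


def refuse_zap(device: Device) -> bool:
    # Only a full mapper device is blocked; a partition that sits on a
    # multipath device is now allowed through, per the change above.
    return device.is_mapper and not device.is_partition


assert refuse_zap(Device(is_mapper=True, is_partition=False))      # e.g. /dev/mapper/mpatha
assert not refuse_zap(Device(is_mapper=True, is_partition=True))   # e.g. /dev/mapper/mpatha1
```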
Zac Dover [Mon, 24 Mar 2025 12:26:11 +0000 (22:26 +1000)]
src/common: add guidance for deep-scrubbing ratio warning
Add an explanation of how to set the value of
"mon_warn_pg_not_deep_scrubbed_ratio" to the confval definition of that
variable. Although this variable contains the string "mon", it is set on
the Manager. I have added a note to direct users to set this value on
the Manager.
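For example, something along these lines should apply it on the Manager (the 0.75 value is only illustrative):
```
ceph config set mgr mon_warn_pg_not_deep_scrubbed_ratio 0.75
```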
This issue was pointed out by Petr Tlapa on Slack in late March of 2025.
Nitzan Mordechai [Thu, 20 Feb 2025 07:37:45 +0000 (07:37 +0000)]
LogMonitor: set no_reply for forward MLog commands
On stretch mode clusters we can see slow ops when
removing and adding OSDs with --zap --force, when the OSDs are
connected to a peon monitor that forwards the MLog to the leader.
no_reply was set only when we are connected to the leader;
this fix also covers the other case, so no_reply is set either way.
When extending the log, the sequence was left in a bad state: it would first create a transaction to update with the current seq number, but leave the "real" transaction with the same sequence number, which should be `extend_log_transaction.seq + 1`.
This commit fixes the documentation about the many-to-many topic relationship for notifications: the current sentence states the same fact twice instead of clarifying the relationship.
John Mulligan [Tue, 18 Mar 2025 19:56:25 +0000 (15:56 -0400)]
reef: mgr/diskprediction_local: avoid more mypy errors
Similar to c4111033172db28c4737e8438f27901811919ce4 this patch
suppresses mypy errors in the diskprediction_local mgr module.
I probably put the magic comment on more lines than needed, but
mypy does not have a block-comment method to suppress checking
for just a region of code today.
This patch is not a backport as the issue is only impacting
reef CI jobs and so it is applied directly to the reef branch.
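For reference, the kind of per-line suppression being described looks roughly like this (illustrative code, not the actual module):
```python
def predict(dev_smart_data):  # type: ignore
    # The trailing comment silences mypy for this line only; mypy offers no
    # block-level comment to cover a whole region, hence many such comments.
    return dev_smart_data.get("remaining_life")
```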
Signed-off-by: John Mulligan <phlogistonjohn@asynchrono.us>
Samuel Just [Thu, 13 Feb 2025 04:16:47 +0000 (04:16 +0000)]
dmclock/.../dmclock_server: do not clean clients with requests
PriorityQueueBase::do_clean() shouldn't remove ClientRec instances which
still have queued requests. Otherwise, very low priority clients might
end up having requests actually lost, which shouldn't be possible.
In the OSD, this resulted in PGRecovery items being lost if queued with
background_best_effort while expanding a cluster. Such items can
legitimately sit in the queue for a long period of time as they
represent background data migration which is allowed to be starved by an
aggressive client workload. Dropping the items broke an assumption in
the OSD that all items enqueued would eventually be dequeued, resulting
in resources being leaked.
Samuel Just [Thu, 13 Feb 2025 03:54:28 +0000 (03:54 +0000)]
test/osd/TestMClockScheduler: create_item should pass prio < cutoff
Cutoff is set to 12, so let's pass something < 12 rather than 12.
Comments in some tests suggest that the intent is for create_item
to create things in the mclock queue rather than the high_queue.
Samuel Just [Thu, 13 Feb 2025 02:55:27 +0000 (02:55 +0000)]
test/osd/TestMClockScheduler: add test for very slow dequeue
Related: https://tracker.ceph.com/issues/61594
Signed-off-by: Samuel Just <sjust@redhat.com>
(cherry picked from commit b35589f7eb39e6bfabe7df1c55281f41925eca61)
John Mulligan [Thu, 13 Mar 2025 11:59:42 +0000 (07:59 -0400)]
script: ensure curl is always available in build containers
Ensure that curl is installed in all build containers regardless of
ceph's dependencies or other factors. This allows us to use curl in
any subsequent build steps/scripts.
Fixes: https://tracker.ceph.com/issues/70451
Signed-off-by: John Mulligan <jmulligan@redhat.com>
(cherry picked from commit b4e11f75bfa76036b9109485aa1cb4f9d633c8a2)