Jos Collin [Fri, 19 Sep 2025 04:40:05 +0000 (10:10 +0530)]
Merge PR #65364 into wip-jcollin-testing-20250919.043955-reef
* refs/pull/65364/head:
test/libcephfs: use more entries to reproduce snapdiff fragmentation
mds: rollback the snapdiff fragment entries with the same name if needed.
test/libcephfs: Polishing SnapdiffDeletionRecreation case
Test failure: LibCephFS.SnapdiffDeletionRecreation
Jos Collin [Fri, 19 Sep 2025 04:40:01 +0000 (10:10 +0530)]
Merge PR #65430 into wip-jcollin-testing-20250919.043955-reef
* refs/pull/65430/head:
qa: Disable a test for kernel mount
src/test/mds: Fix TestMDSAuthCaps
client: Fix the multifs auth caps check
mds: Fix multifs auth caps check
qa: Fix validation of client_version
qa: Test cross fs access by single client in multifs
qa: Run test_admin with the reef client
Igor Fedotov [Thu, 21 Aug 2025 10:42:54 +0000 (13:42 +0300)]
test/libcephfs: use more entries to reproduce snapdiff fragmentation issue.
Snapdiff listing fragments have different boundaries in Reef and Squid+
releases, hence the original reproducer (made for Reef) doesn't work properly
in Squid+ releases. This patch fixes that at the cost of longer execution.
This might be redundant/unnecessary when backporting to Reef.
Related-to: https://tracker.ceph.com/issues/72518
Signed-off-by: Igor Fedotov <igor.fedotov@croit.io>
(cherry picked from commit 23397d32607fc307359d63cd651df3c83ada3a7f)
Igor Fedotov [Tue, 12 Aug 2025 13:17:49 +0000 (16:17 +0300)]
mds: rollback the snapdiff fragment entries with the same name if needed.
This is required when more entries with the same name don't fit into the
fragment. With the existing means for fragment offset specification, such
splitting is prohibited.
Fixes: https://tracker.ceph.com/issues/72518
Signed-off-by: Igor Fedotov <igor.fedotov@croit.io>
(cherry picked from commit 24955e66f4826f8623d2bec1dbfc580f0e4c39ae)
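As a rough illustration of the rollback described above (not the actual MDS C++ code), a small Python sketch with hypothetical (name, snapid) entries and a simple fragment size limit:

    # Illustrative only: hypothetical stand-ins for the MDS snapdiff structures.
    from itertools import groupby
    from typing import List, Tuple

    Entry = Tuple[str, int]  # (name, snapid)

    def build_fragment(entries: List[Entry], max_entries: int) -> Tuple[List[Entry], List[Entry]]:
        """Pack entries into one fragment of at most max_entries items.

        Entries sharing a name must land in the same fragment because the
        fragment offset is expressed by name; if a same-named group does not
        fit entirely, roll it back and start the next fragment with it.
        """
        frag: List[Entry] = []
        consumed = 0
        for _, group_iter in groupby(entries, key=lambda e: e[0]):
            group = list(group_iter)
            if frag and len(frag) + len(group) > max_entries:
                break  # roll back: the whole group goes to the next fragment
            frag.extend(group)
            consumed += len(group)
        return frag, entries[consumed:]

    print(build_fragment([("a", 1), ("b", 1), ("b", 2), ("b", 3)], max_entries=3))
    # ([('a', 1)], [('b', 1), ('b', 2), ('b', 3)])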
Kotresh HR [Wed, 13 Aug 2025 11:03:29 +0000 (11:03 +0000)]
qa: Disable a test for kernel mount
The kclient fix hasn't landed in the kernel yet, hence
the test 'test_multifs_single_client_cross_access_r_caps_end'
would fail for kernel mounts. So disable the failing validation
in the test for kclient.
Kotresh HR [Thu, 26 Jun 2025 07:44:00 +0000 (07:44 +0000)]
qa: Fix validation of client_version
The multifs auth caps bug has a fix in both the client and the mds.
If the client is old and not patched, we expect that the fs
granted 'rw' would end up having only 'r' caps with the multifs
auth caps used in the test
'test_multifs_single_client_cross_access_r_caps_end'.
This patch adds a conditional to validate exactly that.
This is required to test features whose fixes span both the
client and the mds, and to make sure older clients are not
broken by the fix. Version 18.2.6 is reef without the
client fix.
The test suite sets up the cluster with reef 18.2.6
and upgrades only the ceph cluster node, leaving the
client node untouched.
NOTE: The version is changed to 18.2.6 because this is a
reef backport, whereas it's 19.2.2 in higher releases.
Please check commit a4f97c0aa92.
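A minimal sketch of the kind of version-gated expectation described above; the helper name, the exact version comparison, and the caps strings are assumptions for illustration, not the real qa test code:

    # Hypothetical sketch; the real test reads the live client version.
    def expected_caps(client_version: str, requested: str = "rw") -> str:
        """Assume clients <= 18.2.6 (reef without the client fix) fall back
        to 'r' caps on the cross fs even though 'rw' was requested."""
        version = tuple(int(x) for x in client_version.split("."))
        unpatched = version <= (18, 2, 6)
        return "r" if unpatched and requested == "rw" else requested

    assert expected_caps("18.2.6") == "r"   # old reef client, no fix
    assert expected_caps("18.2.7") == "rw"  # client carrying the fix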
Problem:
The readdir wouldn't list all the entries in a directory
when the osd is full and rstats are enabled.
Cause:
The issue happens only in a multi-mds cephfs cluster. If rstats
are enabled, the readdir requests the 'Fa' cap on every dentry,
basically to fetch the size of the directories. Note that 'Fa' is
CEPH_CAP_GWREXTEND, which maps to CEPH_CAP_FILE_WREXTEND and is
used by CEPH_STAT_RSTAT.
The request for the cap is a getattr call and it need not go to
the auth mds. If rstats are enabled, the getattr goes with
the mask CEPH_STAT_RSTAT, which mandates the auth mds in
'handle_client_getattr', so that the request gets forwarded to
the auth mds if the current mds is not the auth. But if the osd is
full, the inode is fetched in 'dispatch_client_request' before
calling the handler function of the respective op, to check
FULL cap access for certain metadata write operations. If the inode
doesn't exist, ESTALE is returned. This is wrong for operations
like getattr, where the inode might not be in memory on the non-auth
mds; returning ESTALE is confusing and the client wouldn't retry. This
was introduced by commit 6db81d8479b539d, which fixes subvolume
deletion when the osd is full.
Fix:
Fetch the inode required for the FULL cap access check for the
relevant operations in the osd full scenario. This makes sense because
those operations are mostly preceded by a lookup that loads the
inode into memory, or they handle ESTALE gracefully.
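The fix itself lives in the MDS C++ request dispatch; the Python below is only a hedged sketch of the decision being described, with a hypothetical set of metadata-write ops:

    # Illustrative sketch only; op names and the helper are hypothetical,
    # the real code is in the MDS dispatch_client_request path.
    OPS_NEEDING_FULL_CHECK = {"unlink", "rmdir", "setattr", "rename"}

    def dispatch_client_request(op: str, inode_in_memory: bool, osd_full: bool) -> str:
        if osd_full and op in OPS_NEEDING_FULL_CHECK:
            if not inode_in_memory:
                # Fetch the inode just for the FULL cap access check; these ops
                # are normally preceded by a lookup anyway.
                inode_in_memory = True  # stands in for an actual fetch
            return f"{op}: FULL cap access checked"
        # Ops like getattr fall through to their handlers, which can forward
        # the request to the auth mds instead of returning ESTALE.
        return f"{op}: handled by its own handler"

    print(dispatch_client_request("getattr", inode_in_memory=False, osd_full=True))
    print(dispatch_client_request("unlink", inode_in_memory=False, osd_full=True))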
Update the "Disconnected+Remounted FS" section in
doc/cephfs/troubleshooting.rst, as suggested by Venky Shankar in https://github.com/ceph/ceph/pull/65129/files#r2312903062
mgr/dashboard: show non default realm sync status in rgw overview page
Currently, we show only the sync status of the default realm on the rgw
overview page. This PR shows the sync status of non-default realms
as well. Multisite sync status can be viewed for any of the active daemons
running in a default or non-default realm.
Dan Mick [Tue, 26 Aug 2025 00:45:21 +0000 (17:45 -0700)]
Remove git clean -fdx
Either
1) a source tarball is supplied, in which case the local dir is
irrelevant, or
2) make-debs calls make-dist, which doesn't care about a dirty cwd,
so the clean just punishes the unaware by removing things that they may
have wanted to keep.
Dan Mick [Sat, 23 Aug 2025 00:43:24 +0000 (17:43 -0700)]
make-debs.sh: invoke tar with --no-same-owner
When running as a normal user, tar does not attempt to preserve the
owners recorded for files in the archive. When running as root, it does.
Containerized builds run as root. Stop make-debs.sh from
trying to set other owners on extracted files and leaving files on the
host system with mapped UIDs other than that of the user running the container
(which causes jenkins to be unable to clear the workspace).
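For illustration, roughly the shape of the extraction this gives (shown here as a Python wrapper around GNU tar; the paths are placeholders):

    # Sketch: extract a tarball without restoring the owners recorded in it,
    # so files stay owned by whoever runs the build, even if that is root
    # inside a container.
    import subprocess

    def extract(tarball: str, destdir: str) -> None:
        subprocess.run(
            ["tar", "--no-same-owner", "-C", destdir, "-xzf", tarball],
            check=True,
        )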
Dan Mick [Thu, 21 Aug 2025 20:00:43 +0000 (13:00 -0700)]
make-debs.sh: make "skip debug packages" conditional
Now that we're using make-debs.sh as a builder inside containers,
the default should be to build all the packages, including debug.
(Also, fix a typo.)
doc/rados/configuration: Mention show-with-defaults and ceph-conf
A small improvement based on
"Why is it still so difficult to just dump all config and where it comes from?"
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/EZSLRYBYEWDA6YIARQVMUKQUWHAE3PGR/
`show-with-defaults` is very useful, and `ceph-conf` is mentioned
so that it's clear that it is legacy, and the user doesn't have to
wonder whether it's actually useful but was forgotten in the list.
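For example, dumping one daemon's full effective configuration could look like this (a Python sketch; "osd.0" is a placeholder daemon name):

    # Sketch: show a daemon's runtime config including values left at default.
    import subprocess

    def dump_config(daemon: str = "osd.0") -> str:
        result = subprocess.run(
            ["ceph", "config", "show-with-defaults", daemon],
            check=True, capture_output=True, text=True,
        )
        return result.stdout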
Zac Dover [Fri, 22 Aug 2025 08:39:29 +0000 (18:39 +1000)]
doc/cephfs: edit troubleshooting.rst (Slow MDS)
Move the "Slow requests (MDS)" section immediately after the first
section in this document ("Slow/Stuck Operations"), because the first
procedure on the page directs the reader to undertake the operation in
"Slow requests (MDS)" before trying anything else.
Improve source rpm detection by adding a new detection method that
executes an rpm command in a container to get exactly the version of
the source rpm that the ceph.spec file would have generated. For
backwards compatibility, and because I don't entirely trust myself to
have tested this fully, the old methods are still available.
The old `--rpm-no-match-sha` is now an alias for `--srpm-match=any` to
cause it to build any (unique) ceph srpm it finds.
`--srpm-match=versionglob` retains the previous default behavior of
using a glob match on the git id or ceph version value. The new
default of `--srpm-match=auto` implements the rpm-command-based
behavior described above.
All of this is wrapped in a new step `find-rpm` but that's mostly an
implementation detail and for testing.
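A minimal argparse sketch of how the option and the alias might be wired; the choice names come from the text above, everything else is hypothetical rather than the real build-with-container.py code:

    # Hypothetical sketch of the option wiring.
    import argparse

    def parse_args(argv=None):
        parser = argparse.ArgumentParser()
        parser.add_argument(
            "--srpm-match",
            choices=["auto", "versionglob", "any"],
            default="auto",
            help="auto: ask rpm in a container for the exact srpm name; "
                 "versionglob: glob on git id / ceph version; "
                 "any: build any unique ceph srpm found",
        )
        # Backwards-compatible alias for the old flag.
        parser.add_argument(
            "--rpm-no-match-sha",
            dest="srpm_match",
            action="store_const",
            const="any",
            help="alias for --srpm-match=any",
        )
        return parser.parse_args(argv)

    print(parse_args(["--rpm-no-match-sha"]).srpm_match)  # -> any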
Dan Mick [Wed, 13 Aug 2025 19:16:45 +0000 (12:16 -0700)]
pybind/mgr/dashboard/frontend: add NPM_CACHEDIR envvar, use in bwc
Add an optional NPM_CACHEDIR environment variable to serve as the
cache parameter for npm in the dashboard frontend build. The idea
is to allow it to persist across builds so that we decrease the load
on registry.npmjs.org, which has been throttling our requests when
using build-with-container.py, and also hopefully improve the time
of the frontend npm operations.
build-with-container.py also grows a --npm-cache-path option to allow
setting it for container builds and passing the envvar to the build.
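A hedged sketch of how the cache directory could be threaded through to npm (Python for illustration; only the NPM_CACHEDIR name is taken from the commit, the rest is assumed):

    # Sketch: reuse a persistent npm cache across builds when NPM_CACHEDIR is set.
    import os
    import subprocess

    def npm_ci(frontend_dir: str) -> None:
        cmd = ["npm", "ci"]
        cachedir = os.environ.get("NPM_CACHEDIR")
        if cachedir:
            # A shared cache cuts down on requests to registry.npmjs.org.
            cmd += ["--cache", cachedir]
        subprocess.run(cmd, cwd=frontend_dir, check=True)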
John Mulligan [Wed, 21 May 2025 21:46:40 +0000 (17:46 -0400)]
dashboard: fix the workaround for unpacking node sources
My previous workaround in the dashboard for unpacking the non-root
owned tarball as the fake root of a container did not work because of the
strange quoting/escaping behavior of cmake (it tried to run `id -u` as a
single command, not a command and an argument).
Use a single-quoted string and old-school backticks to work around this issue.
Fixes: 24dbfb5da4813c6588f9cd199b9f527bb67f1e88
Signed-off-by: John Mulligan <jmulligan@redhat.com>
(cherry picked from commit 3a36180a373d91adcf9726660204f0cc1dcecba3)
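The quoting pitfall is easy to reproduce outside cmake; a small Python demonstration of running `id -u` as one command name versus a command plus an argument (illustration only, not the cmake fix itself):

    # Demonstration of the quoting problem described above.
    import subprocess

    # Works: "id" is the command, "-u" its argument.
    print(subprocess.run(["id", "-u"], capture_output=True, text=True).stdout.strip())

    # Fails: there is no executable literally named "id -u", which is what the
    # mis-quoted cmake invocation effectively tried to run.
    try:
        subprocess.run(["id -u"])
    except FileNotFoundError as exc:
        print("no such command:", exc.filename)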
John Mulligan [Fri, 2 May 2025 15:17:53 +0000 (11:17 -0400)]
dashboard: ensure nodeenv downloaded content is owned by current user
When testing ceph builds in a container we discovered that certain files
could not be deleted by jenkins after a build. This was due to the way
the container maps IDs - files owned by the root user in the container
become owned by the "real" user/jenkins user on the "host".
However, the node tarball that is fetched and unpacked by nodeenv has
a different owner name/uid that is preserved in the tree and this id
gets mapped to something that can be managed by the "fake root" of the
container but not by the "regular" user outside the container.
The simplest workaround I can think of is to chown the tree back
to the current user and avoid leaving files on disk with uncleanly
mapped uids.
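A minimal sketch of that workaround (Python; the tree path is a placeholder and the real change lives in the dashboard build scripts):

    # Sketch: chown an unpacked tree back to the current user so no files are
    # left behind with uids mapped from the archive's recorded owner.
    import os

    def chown_tree_to_current_user(root: str) -> None:
        uid, gid = os.getuid(), os.getgid()
        os.chown(root, uid, gid)
        for dirpath, dirnames, filenames in os.walk(root):
            for name in dirnames + filenames:
                os.chown(os.path.join(dirpath, name), uid, gid)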
John Mulligan [Fri, 20 Jun 2025 23:34:45 +0000 (19:34 -0400)]
Dockerfile.build: make WITH_CRIMSON a build arg
We've chosen to enable crimson by default to match the CI, but that
is not always something a developer may want, so make WITH_CRIMSON
a build argument that can be toggled off if necessary.
John Mulligan [Thu, 29 May 2025 17:41:45 +0000 (13:41 -0400)]
mgr/dashboard: add a cobertura xml file workaround variable
Add an environment variable REWRITE_COVERAGE_ROOTDIR that
changes the "hardcoded" path in the cobertura-coverage.xml file.
This can be used to map the paths used in a container build to
the paths known to a jenkins job (or whatever else you want to
do with the file).
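A hedged sketch of what such a rewrite could look like, assuming the cobertura file records the root under <sources><source>; only the REWRITE_COVERAGE_ROOTDIR name comes from the commit:

    # Sketch: rewrite the source root recorded in cobertura-coverage.xml so that
    # paths from a container build match what the jenkins job expects.
    import os
    import xml.etree.ElementTree as ET

    def rewrite_coverage_rootdir(xml_path: str) -> None:
        new_root = os.environ.get("REWRITE_COVERAGE_ROOTDIR")
        if not new_root:
            return
        tree = ET.parse(xml_path)
        for source in tree.getroot().iter("source"):
            source.text = new_root
        tree.write(xml_path)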
Nitzan Mordechai [Thu, 19 Jun 2025 08:54:43 +0000 (08:54 +0000)]
monitor: Enhance historic ops command output and error handling
Dumping monitor historic operations currently yields no results
and incorrectly issues an error message indicating that
"mon_enable_op_tracker" is not enabled, even when it is.
This commit addresses these issues by:
- Adding previously missing commands for historic operations.
- Correcting the dump operations check to only issue an error when
"mon_enable_op_tracker" is genuinely not enabled.
- Tracking "mon_enable_op_tracker" changes
- Refactoring and organizing the historic operations dump command code.
- Improving the appearance and clarity of error messages.
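Assuming the added mon commands mirror the OSD op tracker's admin socket commands (an assumption; the commit does not list the exact names), querying them might look like:

    # Assumption: the mon admin socket gains dump_historic_ops like the OSDs.
    # "mon.a" is a placeholder daemon name.
    import json
    import subprocess

    def mon_historic_ops(mon: str = "mon.a") -> dict:
        result = subprocess.run(
            ["ceph", "daemon", mon, "dump_historic_ops"],
            check=True, capture_output=True, text=True,
        )
        return json.loads(result.stdout)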
John Mulligan [Fri, 20 Jun 2025 23:46:16 +0000 (19:46 -0400)]
script/build-with-container: support --build-arg arguments
Allow passing --build-arg arguments to build-with-container.py;
these are passed directly to the container build command.
This allows a developer to toggle certain features of the build
container; however, this should not be used in CI.
Remove the unused build arg for JENKINS_HOME. This was
once used to try and create build images like the CI jobs. However,
the env var is now unconditionally set in the build script and must
be passed (or not) explicitly by the user.
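A hypothetical sketch of a repeatable --build-arg option forwarded verbatim to the container build command (the real build-with-container.py plumbing differs; WITH_CRIMSON from the earlier commit is used as the usage example):

    # Hypothetical sketch: collect repeatable --build-arg options and forward
    # them unchanged to the container build command.
    import argparse

    def build_command(argv=None) -> list:
        parser = argparse.ArgumentParser()
        parser.add_argument("--build-arg", dest="build_args", action="append",
                            default=[], metavar="KEY=VALUE")
        args = parser.parse_args(argv)
        cmd = ["podman", "build", "-f", "Dockerfile.build"]
        for build_arg in args.build_args:
            cmd += ["--build-arg", build_arg]
        cmd.append(".")
        return cmd

    print(build_command(["--build-arg", "WITH_CRIMSON=OFF"]))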