Alex Ainscow [Tue, 1 Jul 2025 14:51:58 +0000 (15:51 +0100)]
osd: Relax assertion that all recoveries require a read.
If multiple objects are being read as part of the same recovery (this happens
when recovering some snapshots) and a read fails, then some reads from other
shards will be necessary. However, some objects may not need a read at all. In this
case it is only important that at least one read message is sent, rather than
one read message per object.
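As a rough illustration of the relaxed check (hypothetical types and names, not the actual EC recovery code), the assertion moves from per-object to per-batch:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

struct ObjectRecovery {
  std::string oid;
  std::vector<int> shard_reads;  // shards this object still needs to read
};

void check_reread_plan(const std::vector<ObjectRecovery>& batch) {
  std::size_t reads_sent = 0;
  for (const auto& obj : batch) {
    // Old, over-strict behaviour: assert(!obj.shard_reads.empty());
    reads_sent += obj.shard_reads.size();
  }
  // Relaxed: individual objects may need nothing further, but the batch as a
  // whole must have produced at least one read message.
  assert(reads_sent > 0);
}

int main() {
  check_reread_plan({{"snapA", {2, 3}}, {"snapB", {}}});  // ok after the fix
}
```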
Alex Ainscow [Sun, 29 Jun 2025 21:54:51 +0000 (22:54 +0100)]
osd: Truncate coding shards to minimal size
Scrub detected a bug where if an object was truncated to a size where the first
shard is smaller than the chunk size (only possible for >4k chunk sizes), then
the coding shards were being aligned to the chunk size, rather than to 4k.
This fix changes how the write plan is calculated so that the correct size is written.
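A small worked example of the size calculation (hypothetical helper with made-up sizes: a 16k chunk and a 5k first shard): the coding shards should be padded to the next 4k boundary, not to the full chunk.

```cpp
#include <cstdint>
#include <iostream>

// Round v up to a multiple of a (a is a power of two in practice).
constexpr uint64_t align_up(uint64_t v, uint64_t a) {
  return (v + a - 1) / a * a;
}

int main() {
  const uint64_t chunk_size = 16 * 1024;
  const uint64_t shard0_size = 5 * 1024;  // first shard after the truncate
  std::cout << "old coding shard size: " << align_up(shard0_size, chunk_size)  // 16k (bug)
            << "\nnew coding shard size: " << align_up(shard0_size, 4096)      // 8k (fix)
            << "\n";
}
```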
Fix some unusual scenarios where peering was incorrectly
declaring that objects were missing on stray shards. When
proc_master_log rolls forward partial writes it needs to
update pwlc exactly the same way as if the write had been
completed. This ensures that stray shards that were not
updated because of partial writes do not cause objects
to be incorrectly marked as missing.
The fix also means that some code in GetMissing, which was trying
to do a similar thing for shards that were acting,
recovering or backfilling (but not stray), can be deleted.
Bill Scales [Wed, 25 Jun 2025 09:26:17 +0000 (10:26 +0100)]
osdc: Optimized EC pools routing bug
Fix a bug with routing to an acting set like [None,Y,X,X]p(X)
for a 3+1 optimized pool where osd X is representing more
than one shard. For an optimized EC pool we want it to
choose shard 3 because shard 2 is a non-primary. If we
just search the acting set for the first OSD that matches
X this will pick shard 2, so we have to convert the ordering
to primaries-first, find the matching OSD, and then
convert back to the normal ordering to get shard 3.
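A sketch of that lookup, with a made-up primaries-first permutation (the real ordering comes from the pool's shard metadata): search in the converted order, then map the hit back to the normal shard index.

```cpp
#include <cassert>
#include <optional>
#include <vector>

// to_primary_first[i] is the shard visited at position i of the
// primaries-first ordering; the permutation itself is assumed here.
int find_shard_for_osd(const std::vector<std::optional<int>>& acting,
                       const std::vector<int>& to_primary_first,
                       int osd)
{
  for (int shard : to_primary_first) {         // converted ordering
    if (acting[shard] && *acting[shard] == osd)
      return shard;                            // back in normal ordering
  }
  return -1;
}

int main() {
  // Acting set [None, Y, X, X] for a 3+1 pool; X is osd 1, Y is osd 2.
  std::vector<std::optional<int>> acting = {std::nullopt, 2, 1, 1};
  // Hypothetical primaries-first order that visits shard 3 before shard 2.
  std::vector<int> order = {0, 1, 3, 2};
  assert(find_shard_for_osd(acting, order, 1) == 3);  // not the non-primary shard 2
}
```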
Bill Scales [Mon, 23 Jun 2025 10:36:37 +0000 (11:36 +0100)]
mon: Optimized EC clean_temps needs to permit primary change
Optimized EC pools were blocking clean_temps from clearing pg_temp
when up == acting but up_primary != acting_primary because optimized
pools sometimes use pg_temp to force a change of primary shard.
However, this can block merges, which require the two PGs being
merged to have the same primary. Relax clean_temps to permit
pg_temp to be cleared so long as the new primary is not a
non-primary shard.
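A minimal sketch of the relaxed condition, assuming hypothetical structures rather than the real OSDMonitor code:

```cpp
#include <cassert>
#include <vector>

struct PgMapping {
  std::vector<int> up, acting;
  int up_primary = -1, acting_primary = -1;
};

bool can_clear_pg_temp(const PgMapping& pg, bool new_primary_is_nonprimary_shard) {
  if (pg.up != pg.acting)
    return false;
  // Old rule additionally required pg.up_primary == pg.acting_primary.
  // Relaxed rule: a primary change is acceptable as long as the new primary
  // would not be a non-primary shard.
  return !new_primary_is_nonprimary_shard;
}

int main() {
  PgMapping pg{{3, 5, 7, 9}, {3, 5, 7, 9}, /*up_primary=*/3, /*acting_primary=*/5};
  assert(can_clear_pg_temp(pg, /*new_primary_is_nonprimary_shard=*/false));
}
```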
Bill Scales [Mon, 23 Jun 2025 09:24:17 +0000 (10:24 +0100)]
osd: Optimized EC pools - fix overaggressive assert in read_log_and_missing
Non-primary shards may not be updated because of partial writes. This means
that the OI version for an object on these shards may be stale. An assert
in read_log_and_missing was checking that the OI version matched the have
version in a missing entry. The missing entry calculates the have version
using the prior_version from a log entry; this does not take into account
partial writes, so it can be ahead of the stale OI version.
Relax the assert for optimized pools to require have >= oi.version.
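Roughly (with a hypothetical stand-in for the version type, not the actual read_log_and_missing code), the check becomes:

```cpp
#include <cassert>
#include <cstdint>

struct Version {                 // stand-in for the OSD's on-disk version type
  uint64_t epoch = 0, v = 0;
  friend bool operator==(const Version& a, const Version& b) {
    return a.epoch == b.epoch && a.v == b.v;
  }
  friend bool operator>=(const Version& a, const Version& b) {
    return a.epoch != b.epoch ? a.epoch > b.epoch : a.v >= b.v;
  }
};

void check_missing_entry(const Version& have, const Version& oi_version,
                         bool optimized_pool) {
  if (optimized_pool)
    assert(have >= oi_version);  // relaxed: partial writes may leave OI stale
  else
    assert(have == oi_version);  // original strict check
}

int main() {
  check_missing_entry({10, 7}, {10, 5}, /*optimized_pool=*/true);
}
```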
Bill Scales [Mon, 23 Jun 2025 09:12:10 +0000 (10:12 +0100)]
osd: rewind_divergent_log needs to dirty log if crt or rollback_info_trimmed_to changes
PGLog::rewind_divergent_log was only causing the log to be marked
dirty and checkpointed if there were divergent entries. However,
after a PG split it is possible for the log to be rewound,
modifying crt and/or rollback_info_trimmed_to, without creating
divergent entries, because the entries being rolled back were
all split into the other PG.
Failing to checkpoint the log creates a window where, if the OSD
is reset, you can end up with crt (and rollback_info_trimmed_to) > head.
One consequence of this is asserts like
ceph_assert(rollback_info_trimmed_to == head); firing.
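In outline (hypothetical names, not the actual PGLog code), the dirty condition becomes:

```cpp
struct RewindResult {
  bool divergent_entries = false;
  bool crt_changed = false;
  bool rollback_info_trimmed_to_changed = false;
};

// Old behaviour only checked divergent_entries, which after a PG split could
// leave a moved crt/rollback_info_trimmed_to un-checkpointed across a reset.
bool must_dirty_and_checkpoint(const RewindResult& r) {
  return r.divergent_entries || r.crt_changed ||
         r.rollback_info_trimmed_to_changed;
}

int main() {
  RewindResult split_case{false, true, true};   // no divergent entries
  return must_dirty_and_checkpoint(split_case) ? 0 : 1;
}
```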
Fixes: https://tracker.ceph.com/issues/55141
Signed-off-by: Bill Scales <bill_scales@uk.ibm.com>
(cherry picked from commit d8f78adf85f8cb11deeae3683a28db92046779b5)
Alex Ainscow [Fri, 20 Jun 2025 20:47:32 +0000 (21:47 +0100)]
osd: Correct truncate logic for new EC
The clone logic in the truncate was only cloning from the truncate
point to the end of the pre-truncate object. If the next shard was being
truncated to a shorter length (which is common), then this shard
has a larger clone.
The rollback, however, can only be given a single range, so it was
given a range which covers all clones. The problem is that if shard
0 is rolled back, then some empty space from the clone was copied
to shard 0.
The fix is simple: calculate the full clone length and apply it to all shards, so that it matches the rollback.
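A numeric sketch of that fix (made-up lengths, hypothetical helper): every shard clones the same, longest, length so the single rollback range is valid for all of them.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
  // Clone lengths the old code would have used per shard after a truncate.
  std::vector<uint64_t> per_shard_clone = {4096, 8192, 8192, 8192};
  // Fix: use the full (maximum) clone length for every shard, matching the
  // single range handed to the rollback.
  uint64_t full_clone = *std::max_element(per_shard_clone.begin(),
                                          per_shard_clone.end());
  for (std::size_t s = 0; s < per_shard_clone.size(); ++s)
    std::cout << "shard " << s << " clones " << full_clone << " bytes\n";
}
```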
Bill Scales [Thu, 19 Jun 2025 13:26:04 +0000 (14:26 +0100)]
osd: Do not apply log entry if shard not written
This was a failed test where the primary concluded that all objects were present
despite one missing object on a non-primary shard.
The problem was caused because log entries are sent to unwritten shards if that
shard is missing the object, in order to update the version number of the missing
object. However, the log entry should not actually be added to the log.
Further testing showed there are other scenarios where log entries are sent to
unwritten shards (for example a clone + partial_write in the same transaction);
these scenarios should not add the log entry either.
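Sketched as C++ (hypothetical types; the real logic sits in the EC log handling), the rule is: an entry for an unwritten shard may bump the version a missing object needs, but is never appended to that shard's log.

```cpp
#include <cstdint>
#include <map>
#include <set>
#include <string>
#include <vector>

struct LogEntry {
  std::string oid;
  uint64_t version;
  std::set<int> written_shards;  // shards actually touched by this write
};

void receive_entry(int my_shard, const LogEntry& e,
                   std::vector<LogEntry>& log,
                   std::map<std::string, uint64_t>& missing) {
  if (auto it = missing.find(e.oid); it != missing.end())
    it->second = e.version;           // keep the missing object's version current
  if (e.written_shards.count(my_shard))
    log.push_back(e);                 // only shards the write touched grow their log
}

int main() {
  std::vector<LogEntry> log;
  std::map<std::string, uint64_t> missing{{"obj", 3}};
  receive_entry(/*my_shard=*/2, {"obj", 4, {0, 1}}, log, missing);
  return (missing["obj"] == 4 && log.empty()) ? 0 : 1;
}
```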
Bill Scales [Thu, 19 Jun 2025 12:41:17 +0000 (13:41 +0100)]
osd: EC Optimizations proc_replica_log needs to apply pwlc
PeeringState::proc_replica_log needs to apply pwlc before
calling PGLog so that any partial writes that have occurred
are taken into account when working out where a replica/stray
has diverged from the primary.
Alex Ainscow [Wed, 18 Jun 2025 19:46:49 +0000 (20:46 +0100)]
osd: Multiple Decode fixes.
These are multiple fixes that affected the same code. To simplify review
and understanding of the changes, they have been merged into a single
commit.
Fix 1:
What happened in the defect is (k,m = 6,4):
1. State is: fast_reads = true, shards 0,4,5,6,7,8 are available. Shard 1 is missing this object.
2. Shard 5 only needs zeros, so its read is dropped. The other sub-read messages are sent.
3. The object on shard 1 completes recovery (so becomes not-missing).
4. The read completes; the completion code notices that it only has 5 reads, so it calculates what it needs to re-read.
5. It calculates that it needs 0,1,4,5,6,7 - and so wants to read shard 1.
6. The code assumes that enough reads should already have been performed, so it refuses to do another read and instead generates an EIO.
The problem here is some "lazy" code in step (4). What it should be doing is working out that it
can use the zero buffers and not calling get_remaining_reads(). Instead, what it attempts to do is
call get_remaining_reads(), and if there is no work to do, it assumes it has everything
already and completes the read with success. This assumption mostly works - but the
combination of fast_reads picking fewer than k shards to read from AND an object completing
recovery in parallel causes an issue.
The solution is to wait for all reads to complete and then assume that any remaining zero buffers
count as completed reads. This should then cause the plugin to declare "success".
Fix 2:
There are decodes in new EC which can occur when fewer than k
shards have been read. These reads are in the last stripe, where,
for decoding purposes, the data past the end of the shard can
be considered zeros. EC does not read these, but instead relies
on the decode function inventing the zero buffers.
This was working correctly when fast reads were turned off, but
if such an IO was encountered with fast reads turned on, the
logic was disabled and the IO returned an EIO.
This commit fixes that logic, so that if all reads have completed
and send_all_remaining_reads conveys that no new reads were
requested, then decode will still be possible.
Fix 3:
When reading the end of an object whose shards are unequally sized,
we pad out the end of the decode with zeros, to provide
the correct data to the plugin.
Previously, the code decided not to add the zeros to "needed"
shards. This caused a problem where for some parity-only
decodes, an incomplete set of zeros was generated, fooling the
_decode function into thinking that the entire shard was zeros.
In the fix, we need to cope with the case where the only data
needed from the shard is the padding itself.
The comments around the new code describe the logic behind
the change.
This makes the encode-specific use case of padding out the
to-be-decoded shards unnecessary, as this is performed by the
pad_on_shards function below.
Also fixed some logic where the need_set being passed
to the decode function did not contain the extra shards needed
for the decode. This need_set is actually ignored by all the
plugins as far as I know, but being wrong does not seem
helpful if it's needed in the future.
Fix 4: Extend reads when recovering parity
Here is an example use case which was going wrong:
1. Start with 3+2 EC; shards 0,3,4 are 8k, shards 1,2 are 4k.
2. Perform a recovery, where we recover 2 and 4. 2 is missing, 4 can be copied from another OSD.
3. Recovery works out that it can do the whole recovery with shards 0,1,3. (note not 4)
4. So the "need" set is 0,1,3, the "want" set is 2,4 and the "have" set is 0,1,3,4,5
5. The logic in get_all_avail_shards then tries to work out the extents it needs - it only looks at 2, because we "have" 4.
6. The result is that we end up reading 4k on 0,1,3, then attempt to recover 8k on shard 4 from this... which clearly does not work (see the sketch below).
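A numeric sketch of that calculation (sets and sizes adapted from the example above; not the actual get_all_avail_shards code): the read extent has to cover every shard being recovered, not just the ones with no copy anywhere.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <set>
#include <vector>

int main() {
  std::vector<uint64_t> shard_size = {8192, 4096, 4096, 8192, 8192};  // 3+2
  std::set<int> want = {2, 4};        // shards being recovered
  std::set<int> have = {0, 1, 3, 4};  // shards with a copy available somewhere

  uint64_t buggy = 0, fixed = 0;
  for (int s : want) {
    if (!have.count(s))
      buggy = std::max(buggy, shard_size[s]);  // old: only shard 2 -> 4k
    fixed = std::max(fixed, shard_size[s]);    // fix: covers shard 4 -> 8k
  }
  std::cout << "read extent: buggy=" << buggy << " fixed=" << fixed << "\n";
}
```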
Fix 5: Round up padding to 4k alignment in EC
pad_on_shards was not aligning to 4k, but the decode/encode functions were. This meant that
we assigned a new buffer and then added another one after it; padding to 4k up front avoids this and should be faster.
Fix 6: Do not invent encode buffers before doing decode.
In this bug, during recovery, we could potentially be creating
unwanted encode buffers and using them to decode data buffers.
This fix simply removes the bad code, as there is new code above
which is already doing the correct action.
Fix 7: Fix miscompare with missing decodes.
In this case, two OSDs failed at once. One was replaced and the other was not.
This caused us to attempt to encode a missing shard while another shard was missing, which
caused a miscompare because the recovery failed to do the decode properly before doing an encode.
Bill Scales [Wed, 18 Jun 2025 11:11:51 +0000 (12:11 +0100)]
osd: Optimized EC pools bug fix when repeating GetLog
When the primary shard of an optimized EC pool does not have
a copy of the log it may need to repeat the GetLog peering
step twice, the first time to get a full copy of a log from
a shard that sees all log entries and then a second time
to get the "best" log from a non-primary shard which may
have a partial log due to partial writes.
A side effect of repeating GetLog is that the missing
list is collected for both the "best" shard and the
shard that provides a full copy of the log. This latter
missing list confuses later steps in the peering
process and may cause this shard to complete writes
and end up diverging from the primary. Discarding
this missing list causes Peering to behave the same as if
the GetLog step did not need to be repeated.
Alex Ainscow [Wed, 11 Jun 2025 15:30:40 +0000 (16:30 +0100)]
osd: Fix attribute recovery in rare recovery scenario
When recovering attributes, we read them from the first potential primary, then
if that read fails, attempt to read from another potential primary.
The problem is that the code which calculates which shards to read for a recovery
only takes into account *data* and not where the attributes are. As such, if the
second read only required a non-primary, then the attribute read fails and the
OSD panics.
The fix is to detect this scenario and perform an empty read to that shard, which
the attribute-read code can use for attribute reads.
The code was incorrectly interpreting a failed attribute read on recovery as
meaning a "fast_read". Also, no attribute recovery would occur in this case.
Bill Scales [Wed, 11 Jun 2025 14:53:48 +0000 (15:53 +0100)]
osd: EC Optimizations fix bugs in applying pwlc to update info and log
1. Refactor the code that applies pwlc to update info and log so that there
is one function rather than multiple copies of the code.
2. pwlc information is used to track shards that have not been updated by
partial writes. It is used to advance last_complete (and last_update and
the log head) to account for log entries that the shard missed. It was
only being applied if last_complete matched the range of partial writes
recorded in pwlc. When a shard has missing objects, last_complete is
deliberately held before the oldest need; this stops pwlc being applied.
This is wrong: pwlc can still try to update last_update and the log
head even if it cannot update last_complete.
3. When the primary receives info (and pwlc) information from OSD x(y)
it uses the pwlc information to update x(y)'s info. During backfill
there may be other shards z(y) which should also be updated using the
pwlc information.
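Very roughly (hypothetical structures; the real function lives in the peering code), the relaxed application looks like this, with last_update and the log head advanced independently of last_complete:

```cpp
#include <algorithm>
#include <cstdint>

struct Ver {
  uint64_t epoch = 0, v = 0;
  bool operator<(const Ver& o) const {
    return epoch != o.epoch ? epoch < o.epoch : v < o.v;
  }
};

struct ShardInfo { Ver last_update, last_complete, log_head; };
struct Pwlc { Ver start, end; };  // span of partial writes the shard missed

void apply_pwlc(const Pwlc& pwlc, ShardInfo& info) {
  // last_update and the log head advance whenever they have reached the
  // start of the pwlc range...
  if (!(info.last_update < pwlc.start))
    info.last_update = std::max(info.last_update, pwlc.end);
  if (!(info.log_head < pwlc.start))
    info.log_head = std::max(info.log_head, pwlc.end);
  // ...while last_complete may stay held back behind a missing object; that
  // no longer blocks the two updates above (the old code skipped everything).
  if (!(info.last_complete < pwlc.start))
    info.last_complete = std::max(info.last_complete, pwlc.end);
}

int main() {
  ShardInfo x{{5, 10}, {5, 2}, {5, 10}};   // last_complete held back at 5'2
  apply_pwlc({{5, 10}, {5, 15}}, x);       // pwlc covers 5'10..5'15
  return (x.last_update.v == 15 && x.last_complete.v == 2) ? 0 : 1;
}
```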
Alex Ainscow [Fri, 27 Jun 2025 15:00:56 +0000 (16:00 +0100)]
osd: Deduplicate zeros in EC slice iterator
Signed-off-by: Alex Ainscow <aainscow@uk.ibm.com>
(cherry picked from commit 06658fdac16dde95d20a8907511afb7fde7313da)
Signed-off-by: Alex Ainscow <aainscow@uk.ibm.com>
Update the "Disconnected+Remounted FS" section in
doc/cephfs/troubleshooting.rst, as suggested by Venky Shankar in https://github.com/ceph/ceph/pull/65129/files#r2312903062
According to `dpkg-buildflags`, Ubuntu 24 raised this value to
`-D_FORTIFY_SOURCE=3`, which causes `error: "_FORTIFY_SOURCE" redefined`
compilation failures because Ceph itself adds `-D_FORTIFY_SOURCE=2`.
`_FORTIFY_SOURCE` is a hardening option. Both our rpm and debian builds
already specify it via environment variables, so Ceph's cmake should
leave it alone.
Dan Mick [Tue, 26 Aug 2025 00:45:21 +0000 (17:45 -0700)]
Remove git clean -fdx
Either
1) a source tarball is supplied, in which case the local dir is
irrelevant, or
2) make-debs calls make-dist, which doesn't care about a dirty cwd
so it just punishes the unaware by removing things that they may
have wanted to keep.
Dan Mick [Sat, 23 Aug 2025 00:43:24 +0000 (17:43 -0700)]
make-debs.sh: invoke tar with --no-same-owner
When running as a normal user, tar does not attempt to preserve
owners set on the tar content files. When running as root, it does.
Containerized builds are running as root. Stop make-debs.sh from
trying to set other owners for files, and from leaving files in the
host system with mapped UIDs other than that of the user running the container
(which causes jenkins to be unable to clear the workspace).
Dan Mick [Thu, 21 Aug 2025 20:00:43 +0000 (13:00 -0700)]
make-debs.sh: make "skip debug packages" conditional
Now that we're using make-debs.sh as a builder inside containers,
the default should be to build all the packages, including debug.
(Also, fix a typo.)
Afreen Misbah [Wed, 13 Aug 2025 06:49:02 +0000 (12:19 +0530)]
mgr/dashboard: Add /health/snapshot api
Fixes https://tracker.ceph.com/issues/72609
- The current minimal API relies on fetching data from osdmap and pgmap.
- These commands produce large, detailed payloads that become a performance bottleneck and impact scalability, especially in large clusters.
- To address this, we propose switching to the ceph snapshot API using ceph status command, which retrieves essential information directly from the cluster map.
- ceph status is significantly more lightweight compared to osdmap/pgmap, reducing payload sizes and processing overhead.
- This change ensures faster response times, improves system efficiency in large deployments, and minimizes unnecessary data transfer.
- update tests
- clipboard icon not displaying, breaking several places
- clipboard icon on click gets filled with the primary green color, losing the visibility of the icon. The icon now remains visible on click
- the clipboard button for path and copy in tables shows the `cursor` pointer instead of `hand` on mouseover, which was not ideal from a usability standpoint. This behavior has been updated to use the hand cursor, making the interaction semantically correct and more intuitive for users.
Afreen Misbah [Mon, 11 Aug 2025 09:03:32 +0000 (14:33 +0530)]
mgr/dashboard: Replace capacity threshold data with prometheus metrics
- Fixes https://tracker.ceph.com/issues/72519
- the osd dump metrics are used in /api/osd/settings
- these metrics create a perf bottleneck when there are thousands of OSDs
- replacing with similar prometheus metrics
- minor refactors - including renaming, comments.
Afreen Misbah [Wed, 6 Aug 2025 07:37:16 +0000 (13:07 +0530)]
mgr/dashboard: Stop rules api being polled on every page
- /rules are polled every 5 seconds on every page
- it is only required for the alerts page, where the full rules list is shown in the `Alerts` tab
- also added an observable for getting rules instead of a plain array
Niklas Hambüchen [Sat, 21 Jun 2025 17:46:13 +0000 (19:46 +0200)]
doc/rados/configuration: Mention show-with-defaults and ceph-conf
A small improvement based on
"Why is it still so difficult to just dump all config and where it comes from?"
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/EZSLRYBYEWDA6YIARQVMUKQUWHAE3PGR/
`show-with-defaults` is very useful, and `ceph-conf` is mentioned
so that it's clear that it's legacy, and the user doesn't have to
wonder if it's actually useful but was forgotten in the list.
Zac Dover [Fri, 22 Aug 2025 08:39:29 +0000 (18:39 +1000)]
doc/cephfs: edit troubleshooting.rst (Slow MDS)
Move the "Slow requests (MDS)" section immediately after the first
section in this document ("Slow/Stuck Operations"), because the first
procedure on the page directs the reader to undertake the operation in
"Slow requests (MDS)" before trying anything else.
Ilya Dryomov [Thu, 21 Aug 2025 19:39:29 +0000 (21:39 +0200)]
mon/MonClient: post version request completions outside of monc_lock
dispatch() is allowed to invoke the completion object in the current
thread, before control returns from dispatch(). This isn't desirable
when it comes to discarding version requests in MonClient::shutdown()
and MonClient::_reopen_session() because completion objects could then
be invoked under monc_lock. In case of MonClient::_reopen_session() in
particular, this leads to an attempt to acquire monc_lock once again in
MonClient::get_version() on a retry due to monc_errc::session_reset
that is converted to errc::resource_unavailable_try_again:
MonClient::ms_handle_reset
< takes monc_lock >
MonClient::_reopen_session
< invokes the completion object via dispatch() with ec == monc_errc::session_reset >
Objecter::CB_Objecter_GetVersion::operator() [ ec == errc::resource_unavailable_try_again ]
Objecter::_wait_for_latest_osdmap
MonClient::get_version
< attempts to take monc_lock in the body of the lambda >
The end result is either a lockup or some form of undefined behavior.
The best possible outcome here is an exception (std::system_error with
"Resource deadlock avoided" error) and a successive call to
std::terminate().
This is a regression introduced in commit e81d4eae4e76 ("common/async:
Update `use_blocked` for newer asio"). Revert to posting version
request completions for the error cases in a way that is uniform with
the success case in MonClient::handle_get_version_reply().
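The pattern, sketched with plain Boost.Asio (hypothetical function and completion shape, not the actual MonClient code): hand the completion to the executor instead of invoking it while monc_lock is held.

```cpp
#include <boost/asio.hpp>
#include <functional>
#include <iostream>
#include <mutex>
#include <system_error>

std::mutex monc_lock;
boost::asio::io_context ioc;

void discard_version_request(std::function<void(std::error_code)> completion) {
  std::scoped_lock l(monc_lock);
  // Calling completion(...) directly here can re-enter get_version() and try
  // to take monc_lock again. Posting defers it until after the lock is
  // released, mirroring how the success path delivers its result.
  boost::asio::post(ioc, [completion = std::move(completion)] {
    completion(std::make_error_code(std::errc::resource_unavailable_try_again));
  });
}

int main() {
  discard_version_request([](std::error_code ec) {
    std::cout << "completion ran outside the lock: " << ec.message() << "\n";
  });
  ioc.run();
}
```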
Fixes: https://tracker.ceph.com/issues/72692
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
(cherry picked from commit bd449f0ac823413a55069e3df9e163a4b4adbebd)
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
Improve source rpm detection by adding a new detection method that
executes an rpm command in a container to get exactly the version of
the source rpm that the ceph.spec file would have generated. For
backwards compatibility, and because I don't entirely trust myself to have
fully tested this, the old methods are still available.
The old `--rpm-no-match-sha` is now an alias for `--srpm-match=any` to
cause it to build any (unique) ceph srpm it finds.
`--srpm-match=versionglob` retains the previous default behavior of
using a glob matching on the git id or ceph version value. The new
default of `--srpm-match=auto` implements the rpm command based behavior
described above.
All of this is wrapped in a new step `find-rpm` but that's mostly an
implementation detail and for testing.
Dan Mick [Wed, 13 Aug 2025 19:16:45 +0000 (12:16 -0700)]
pybind/mgr/dashboard/frontend: add NPM_CACHEDIR envvar, use in bwc
Add an optional NPM_CACHEDIR environment variable to serve as the
cache parameter for npm in the dashboard frontend build. The idea
is to allow it to persist across builds so that we decrease the load
on registry.npmjs.org, which has been throttling our requests when
using build-with-container.py, and also hopefully improve the time
of the frontend npm operations.
build-with-container.py also grows a --npm-cache-path option to allow
setting it for container builds and passing the envvar to the build.
John Mulligan [Wed, 21 May 2025 21:46:40 +0000 (17:46 -0400)]
dashboard: fix the workaround for unpacking node sources
My previous workaround in the dashboard for unpacking the non-root-owned
tarball as the fake root of a container did not work because of the
strange quoting/escaping behavior of cmake (it tried to run `id -u` as a
single command, not a command and an argument).
Use a single-quoted string and old-school backticks to work around this issue.
Fixes: 24dbfb5da4813c6588f9cd199b9f527bb67f1e88
Signed-off-by: John Mulligan <jmulligan@redhat.com>
(cherry picked from commit 3a36180a373d91adcf9726660204f0cc1dcecba3)
John Mulligan [Fri, 2 May 2025 15:17:53 +0000 (11:17 -0400)]
dashboard: ensure nodeenv downloaded content is owned by current user
When testing ceph builds in a container we discovered that certain files
could not be deleted by jenkins after a build. This was due to the way
the container maps IDs - files owned by the root user in the container
become owned by the "real" user/jenkins user on the "host".
However, the node tarball that is fetched and unpacked by nodeenv has
a different owner name/uid that is preserved in the tree and this id
gets mapped to something that can be managed by the "fake root" of the
container but not by the "regular" user outside the container.
The simplest workaround I can think of is to chown the tree back
to the current user and avoid leaving files on disk with uncleanly
mapped uids.