Update queries to use `rate()` for more accurate graphs. The use of `irate()` leads
to misleading graphs because it only looks at the last two samples within
the selected step interval. Also use `$__rate_interval`
consistently so that queries scale across short and long time ranges.
* Replace `irate()` with `rate()` to avoid sample bias.
* Use `$__rate_interval` consistently.
* Update auto_count/min to provide higher-detail graphs.
1. Fixes the PromQL expression used to calculate "In" OSDs in
ceph-cluster-advanced.json.
2. Fixes the color coding for the single-stat panels used in the OSDs
Grafana panel, like "In", "Out", etc.
In file included from /home/pdonnell/ceph/src/mds/FSMap.h:31,
from /home/pdonnell/ceph/src/mon/PaxosFSMap.h:20,
from /home/pdonnell/ceph/src/mon/MDSMonitor.h:26,
from /home/pdonnell/ceph/src/mon/FSCommands.cc:17:
/home/pdonnell/ceph/src/mds/MDSMap.h: In member function ‘int FileSystemCommandHandler::set_val(Monitor*, FSMap&, MonOpRequestRef, const cmdmap_t&, std::ostream&, FileSystemCommandHandler::fs_or_fscid, std::string, std::string)’:
/home/pdonnell/ceph/src/mds/MDSMap.h:223:40: warning: ‘fsp’ may be used uninitialized in this function [-Wmaybe-uninitialized]
223 | bool test_flag(int f) const { return flags & f; }
| ^~~~~
/home/pdonnell/ceph/src/mon/FSCommands.cc:417:21: note: ‘fsp’ was declared here
417 | const Filesystem* fsp;
| ^~~
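A minimal sketch of the kind of change that silences this warning, assuming the fix is simply to initialize the pointer before the conditional paths that assign it (the surrounding FSCommands.cc logic is not reproduced here):
```
// Illustrative only; not the actual FSCommands.cc code. A pointer that is
// assigned on some branches and dereferenced later trips -Wmaybe-uninitialized
// unless it starts from a known value.
#include <iostream>

struct Filesystem { int flags = 0; };

int set_val(bool have_fs) {
  const Filesystem* fsp = nullptr;   // initialize to avoid the warning
  Filesystem fs;
  if (have_fs) {
    fsp = &fs;                       // only some paths assign fsp
  }
  if (!fsp) {
    return -1;                       // bail out instead of dereferencing
  }
  return fsp->flags & 0x1;           // safe: fsp is known to be set here
}

int main() {
  std::cout << set_val(true) << " " << set_val(false) << "\n";  // 0 -1
}
```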
Adam King [Mon, 22 Sep 2025 21:05:07 +0000 (17:05 -0400)]
pybind/mgr: pin cheroot version in requirements-required.txt
With Python 3.10 (this didn't seem to happen with Python 3.12) the
pybind/mgr/cephadm/tests/test_node_proxy.py test times out.
This appears to be related to a new release of the cheroot
package, and a GitHub issue describing the same problem
we're seeing has been opened by another user:
https://github.com/cherrypy/cheroot/issues/769
It is worth noting that the workaround described in that
issue does also work for us. If you add
John Mulligan [Fri, 12 Sep 2025 17:52:25 +0000 (13:52 -0400)]
build-with-container: add argument groups to organize options
Use the argparse add_argument_group feature to organize the mass of
arguments into more sensible categories. Hopefully, someone reading
over the `--help` output can now more easily see options that
are useful rather than being overwhelmed by a wall of text.
mgr/dashboard: fix zone update API forcing STANDARD storage class
The zone update REST API (`edit_zone`) always attempted to configure a
placement target for the `STANDARD` storage class, even when the request
was intended for a different storage class name.
This caused failures in deployments where `STANDARD` is not defined.
Changes:
Combine the add-placement-target and add-storage-class methods into a single
add_placement_targets_storage_class_zone method that takes the storage
class as a parameter alongside the rest of the placement parameters.
cephfs-journal-tool: Don't reset the journal trim position
If the fs had to go through journal recovery and reset,
cephfs-journal-tool resets the journal trim position,
which leaves the old unused journal objects in the
metadata pool forever. This patch fixes the issue.
Now the old stale journal objects are trimmed during the
regular trimming cycle, helping to recover space in the
metadata pool.
Validates that the cephfs-journal-tool reset
doesn't reset the trim position, so that the
regular journal trim takes care of the older
unused journal objects, helping to recover
space in the metadata pool.
Adam King [Wed, 7 May 2025 20:02:56 +0000 (16:02 -0400)]
mgr/cephadm: don't mark nvmeof daemons without pool and group in name as stray
Cephadm's naming of these daemons always includes the pool and
group name associated with the nvmeof service. Nvmeof has recently
started to register with the cluster using names that
don't include that, resulting in warnings like
```
[WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
stray daemon nvmeof.vm-01.hwwhfc on host vm-01 not managed by cephadm
```
where cephadm knew that nvmeof daemon as
```
[ceph: root@vm-00 /]# ceph orch ps --daemon-type nvmeof
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID
nvmeof.foo.group1.vm-01.hwwhfc vm-01 *:5500,4420,8009,10008 stopped 5m ago 25m - - <unknown> <unknown>
```
Adam C. Emerson [Mon, 8 Sep 2025 18:19:20 +0000 (14:19 -0400)]
rgw: Record the `service_unique_id`, if present, in the ServiceMap
For consistency and to ease associating the two.
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
(cherry picked from commit 3a94a7b2ed02d20b2bc839b283e60cf4778f69e4)
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
Adam C. Emerson [Fri, 5 Sep 2025 15:31:40 +0000 (11:31 -0400)]
common: Allow PerfCounters to return a provided service ID
Dashboard has asked for a unique identifier that can be associated
with services. This commit provides a component of that
functionality. Enforcing uniqueness is beyond the scope of this PR and
is the responsibility of cluster setup and orchestration. The scope of
uniqueness is a matter of policy and up to the design of cluster setup
and orchestration software.
We provide the `--service_unique_id` argument that can be passed on
the command line when executing a Ceph service that uses
`global_init`. If non-empty, a `service_unique_id` section is added to
the PerfCounters dump for that service. This section has a single
entry whose name is set to the argument of `service_unique_id` and
whose value is arbitrary. If unspecified or empty, no
`service_unique_id` section is added.
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
(cherry picked from commit 6dc322421f7a3758251fe29e3f35934231358011)
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
Jon Bailey [Wed, 20 Aug 2025 10:11:09 +0000 (11:11 +0100)]
osd: Reduce the number of stats invalidations when rolling shards forwards during peering
Currently, stats invalidations happen during peering when rolling shards forward.
We can reduce this so that we only invalidate the stats when no other shard has stats at the version we want to roll them forward to.
Where a shard does have stats at the correct version, we use those stats instead of invalidating.
If no shard has stats at the correct version, we invalidate as before.
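A rough sketch of that rule, with hypothetical names rather than the actual OSD code: reuse stats from any shard already at the target version, and only invalidate when none exists.
```
// Hypothetical model: each shard reports the version its stats correspond to.
// If any shard is already at the version we are rolling forward to, take its
// stats; otherwise fall back to invalidating, as the code did before.
#include <iostream>
#include <map>
#include <optional>
#include <utility>

struct Stats { long num_bytes = 0; };

std::optional<Stats> stats_at(const std::map<int, std::pair<int, Stats>>& shard_stats,
                              int target_version) {
  for (const auto& [shard, vs] : shard_stats) {
    if (vs.first == target_version)
      return vs.second;         // another shard already has stats at this version
  }
  return std::nullopt;          // nobody does: caller invalidates as before
}

int main() {
  std::map<int, std::pair<int, Stats>> shard_stats = {
    {0, {42, Stats{1000}}}, {1, {40, Stats{900}}}, {2, {42, Stats{1000}}},
  };
  auto s = stats_at(shard_stats, 42);
  std::cout << (s ? s->num_bytes : -1) << "\n";  // 1000: reuse, no invalidation
}
```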
* Current primary shard has been absent so has missed the latest few writes
* All the recent writes are partial writes that have not updated shard X
* All the recent writes have completed
The authoritative shard is chosen from the set of primary-capable shards
that have the highest last epoch started; these all have log entries
for the recent writes.
The get-log shard is chosen from the set of shards that have the highest
last epoch started; this chooses shard X because it is furthest behind.
The primary shard's last update is not less than the get-log shard's last
update, so the if statement decides that it has a good enough log.
We then proceed through peering using the primary log and the
log from shard X. Neither has details about the recent writes,
which are then incorrectly rolled back.
The if statement should be looking at last_update for the
authoritative shard rather than the get_log_shard; the code
would then realize that it needs to get the log from the
authoritative shard first and then make a second pass
where it gets the log from the get-log shard.
Peering would then have information about the partial writes
(obtained from the authoritative shard's log) and could correctly
roll these writes forward by deducing that the get_log_shard
didn't have these log entries because they were partial writes.
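A minimal sketch of the comparison being described, with hypothetical names rather than the actual PeeringState code: deciding whether the primary must fetch a fuller log should use the authoritative shard's last_update, not the get-log shard's.
```
// Illustrative model only. The primary missed the recent partial writes; the
// authoritative shard has them; shard X (the get-log shard) does not.
#include <iostream>

struct ShardInfo { int last_update; };

// Hypothetical check: does the primary need to fetch a fuller log first?
bool primary_needs_full_log(const ShardInfo& primary, const ShardInfo& reference) {
  return primary.last_update < reference.last_update;
}

int main() {
  ShardInfo primary{10};        // absent for the latest few writes
  ShardInfo auth_shard{13};     // has log entries for the recent partial writes
  ShardInfo get_log_shard{10};  // shard X, furthest behind

  // Buggy comparison: against the get-log shard the primary's log looks fine,
  // so the partial writes end up being incorrectly rolled back.
  std::cout << primary_needs_full_log(primary, get_log_shard) << "\n";  // 0
  // Fixed comparison: against the authoritative shard the primary realizes it
  // must fetch the full log first, then do a second pass for the get-log shard.
  std::cout << primary_needs_full_log(primary, auth_shard) << "\n";     // 1
}
```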
Alex Ainscow [Fri, 8 Aug 2025 09:25:53 +0000 (10:25 +0100)]
osd: Fix segfault in EC debug string
The old debug_string implementation was potentially reading up to 3
bytes off the end of an array. It was also doing lots of unnecessary
bufferlist reconstructs. This refactor of the function fixes both
issues.
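A small sketch of the bounds issue, assuming the over-read came from consuming the buffer in fixed 4-byte steps (the real EC debug_string and bufferlist handling are not reproduced here):
```
// Hypothetical illustration: clamp each chunk to the bytes actually remaining
// so the last chunk never reads past the end of the array.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

std::string debug_string(const std::vector<uint8_t>& buf) {
  std::string out;
  for (size_t off = 0; off < buf.size(); off += 4) {
    size_t len = std::min<size_t>(4, buf.size() - off);  // clamp, don't overrun
    for (size_t i = 0; i < len; ++i)
      out += buf[off + i] ? 'X' : '.';                    // mark non-zero bytes
    out += ' ';
  }
  return out;
}

int main() {
  std::cout << debug_string({1, 0, 0, 0, 1, 0}) << "\n";  // "X... X. "
}
```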
Bill Scales [Fri, 8 Aug 2025 08:58:14 +0000 (09:58 +0100)]
osd: Optimized EC backfill interval has wrong versions
Bug in the optimized EC code creating the backfill
interval on the primary. It is creating a map with
the object version for each backfilling shard. When
there are multiple backfill targets, the code was
overwriting oi.version with the version
for a shard that has had partial writes, which
can result in the object not being backfilled.
This can manifest as a data integrity issue, scrub
error or snapshot corruption.
Bill Scales [Mon, 4 Aug 2025 15:24:41 +0000 (16:24 +0100)]
osd: Optimized EC choose_acting needs to use best primary shard
There have been a couple of corner-case bugs with choose_acting
with optimized EC pools in the scenario where a new primary
with no existing log is chosen and find_best_info selects
a non-primary shard as the authoritative shard.
Non-primary shards don't have a full log, so in this scenario
we need to get the log from a shard that does have a complete
log first (so our log is ahead of or equivalent to the authoritative
shard) and then repeat the get log for the authoritative shard.
Problems arise if we make different decisions about the acting
set and backfill/recovery based on these two different shards.
In one bug we oscillated between two different primaries
because one primary used one shard to make peering decisions
and the other primary used the other shard, resulting in
looping flip/flop changes to the acting_set.
In another bug we used one shard to decide that we could do
async recovery but then tried to get the log from another
shard and asserted because we didn't have enough history in
the log to do recovery and should have chosen to do a backfill.
This change makes optimized EC pools always choose the
best !non_primary shard when making decisions about peering
(irrespective of whether the primary has a full log or not).
The best overall shard is now only used for get log when
deciding how far to roll back the log.
It also sets repeat_getlog to false if peering fails because
the PG is incomplete, to avoid looping forever trying to get
the log.
Alex Ainscow [Fri, 1 Aug 2025 14:09:58 +0000 (15:09 +0100)]
osd: Do not send PDWs if read count > k
The main point of PDW (as currently implemented) is to reduce the amount
of reading performed by the primary when preparing for a read-modify-write (RMW).
It was making the assumption that if any recovery was required by a
conventional RMW, then a PDW is always better. This was an incorrect assumption,
as a conventional RMW performs at most k reads for any plugin which
supports PDW. As such, we tweak this logic to perform a conventional RMW
if the PDW is going to read k or more shards.
This should improve performance in some minor areas.
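A minimal sketch of the tweaked decision, using hypothetical names: prefer a parity-delta-write only when it would read fewer shards than the at-most-k reads of a conventional RMW.
```
// Illustrative only; not the actual EC backend logic.
#include <iostream>

bool use_pdw(unsigned pdw_reads, unsigned k) {
  // A conventional RMW reads at most k shards, so a PDW that would read
  // k or more shards cannot win; fall back to the conventional path.
  return pdw_reads < k;
}

int main() {
  std::cout << use_pdw(2, 4) << "\n";  // 1: PDW reads fewer shards, use it
  std::cout << use_pdw(4, 4) << "\n";  // 0: would read k shards, do a normal RMW
}
```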
Alex Ainscow [Wed, 18 Jun 2025 19:46:49 +0000 (20:46 +0100)]
osd: Fix decode for some extent cache reads.
The extent cache in EC can cause the backend to perform some surprising reads. Some
patterns discovered in testing caused the decode to attempt to
decode more data than was anticipated during read planning, leading to an
assert. This simple fix reduces the scope of the decode to the minimum.
Bill Scales [Fri, 1 Aug 2025 10:48:18 +0000 (11:48 +0100)]
osd: Optimized EC calculate_maxles_and_minlua needs to use exclude_nonprimary_shards
When an optimized EC pool is searching for the best shard that
isn't a non-primary shard, the calculation for maxles and
minlua needs to exclude non-primary shards.
This bug was seen in a test run where activating a PG was
interrupted by a new epoch and only a couple of non-primary
shards became active and updated les. In the next epoch
a new primary (without log) failed to find a shard that
wasn't non-primary with the latest les. The les of
non-primary shards should be ignored when looking for
an appropriate shard to get the full log from.
This is safe because an epoch cannot start I/O without
at least K shards that have updated les, and there
are always K-1 non-primary shards. If I/O has started
then we will find the latest les even if we skip
non-primary shards. If I/O has not started then the
latest les ignoring non-primary shards is the
last epoch in which I/O was started and has a good
enough log+missing list.
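A rough sketch of the exclusion, with hypothetical names standing in for the real PeeringState fields: when searching for a shard that can supply the full log, les values that were only updated on non-primary shards are skipped.
```
// Illustrative model: compute the maximum last_epoch_started, optionally
// ignoring non-primary shards (which never hold a full log in optimized EC).
#include <algorithm>
#include <iostream>
#include <vector>

struct ShardInfo {
  int les = 0;              // last_epoch_started reported by this shard
  bool nonprimary = false;  // true for the K-1 non-primary shards
};

int calc_maxles(const std::vector<ShardInfo>& shards, bool exclude_nonprimary) {
  int maxles = 0;
  for (const auto& s : shards) {
    if (exclude_nonprimary && s.nonprimary)
      continue;             // les updated only on non-primary shards is ignored
    maxles = std::max(maxles, s.les);
  }
  return maxles;
}

int main() {
  // Activation was interrupted: only non-primary shards reached epoch 8.
  std::vector<ShardInfo> shards = {{7, false}, {8, true}, {8, true}};
  std::cout << calc_maxles(shards, false) << "\n";  // 8: misleads the search
  std::cout << calc_maxles(shards, true) << "\n";   // 7: epoch with a usable log
}
```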
Bill Scales [Fri, 1 Aug 2025 09:39:16 +0000 (10:39 +0100)]
osd: Optimized EC choose_async_recovery_ec must use auth_shard
Optimized EC pools modify how GetLog and choose_acting work. If
the auth_shard is a non-primary shard and the (new) primary
is behind the auth_shard then we cannot just get the log from
the non-primary shard because it will be missing entries for
partial writes. Instead we need to get the log from a shard
that has the full log first and then repeat GetLog to get
the log from the auth_shard.
choose_acting was modifying auth_shard in the case where
we need to get the log from another shard first. This is
wrong - the remainder of the logic in choose_acting and
in particular choose_async_recovery_ec needs to use the
auth_shard to calculate what the acting set will be.
Using a different shard can occasionally cause a
different acting set to be selected (because of
thresholds on how many log entries behind
a shard needs to be to perform async recovery), and
this can lead to two shards flip/flopping with
different opinions about what the acting set should be.
The fix is to separate out which shard will be returned
to GetLog from the auth_shard, which will be used
for acting set calculations.
Bill Scales [Fri, 1 Aug 2025 09:22:47 +0000 (10:22 +0100)]
osd: Optimized EC don't try to trim past crt
If there is an exceptionally long sequence of partial writes
that did not update a shard, followed by a full write,
then it is possible that the log trim point is ahead of the
previous write to the shard (and hence crt). We cannot trim
beyond crt. In this scenario it's fine to limit the trim to crt,
because the shard doesn't have any of the log entries for the
partial writes, so there is nothing more to trim.
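A tiny sketch of the rule, with hypothetical types: the trim point a shard actually applies is clamped to crt.
```
// Illustrative only: clamp the requested trim point to crt (can_rollback_to).
#include <algorithm>
#include <iostream>

struct Version { unsigned epoch; unsigned v; };

bool operator<(const Version& a, const Version& b) {
  return a.epoch != b.epoch ? a.epoch < b.epoch : a.v < b.v;
}

Version clamp_trim_to(Version requested, Version crt) {
  return std::min(requested, crt);  // never trim past crt
}

int main() {
  Version requested{5, 120};  // full write after a long run of partial writes
  Version crt{5, 80};         // this shard's last entry, hence its crt
  Version t = clamp_trim_to(requested, crt);
  std::cout << t.epoch << "," << t.v << "\n";  // 5,80: nothing more to trim here
}
```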
Bill Scales [Fri, 1 Aug 2025 08:56:23 +0000 (09:56 +0100)]
osd: Optimized EC missing call to apply_pwlc after updating pwlc
update_peer_info was updating pwlc with a newer version received
from another shard, but failed to update the peer_info's to
reflect the new pwlc by calling apply_pwlc.
The scenario was the primary receiving an update from shard X which had
newer information about shard Y. The code was calling apply_pwlc
for shard X but not for shard Y.
The fix simplifies the logic in update_peer_info: if we are
the primary, update all peer_info's that have pwlc. If we
are a non-primary and there is pwlc, then update info.
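A rough sketch of the simplified rule, with hypothetical names and a much-reduced notion of pwlc: after merging newer pwlc received from a peer, apply it to every tracked peer_info if we are the primary, or to our own info otherwise.
```
// Illustrative model only; pwlc here is just a per-shard version watermark.
#include <iostream>
#include <map>

struct PeerInfo { int last_update = 0; };

struct PG {
  bool is_primary = false;
  std::map<int, int> pwlc;            // shard -> version covered by pwlc
  std::map<int, PeerInfo> peer_info;  // shards tracked by the primary
  PeerInfo info;                      // our own info (non-primary case)
  int own_shard = 0;

  // Hypothetical apply_pwlc: roll one info forward to the pwlc watermark.
  void apply_pwlc(int shard, PeerInfo& pi) {
    auto it = pwlc.find(shard);
    if (it != pwlc.end() && it->second > pi.last_update)
      pi.last_update = it->second;
  }

  void update_peer_info() {
    if (is_primary) {
      for (auto& [shard, pi] : peer_info)  // every shard with pwlc, not just
        apply_pwlc(shard, pi);             // the one the update came from
    } else if (!pwlc.empty()) {
      apply_pwlc(own_shard, info);
    }
  }
};

int main() {
  PG pg;
  pg.is_primary = true;
  pg.peer_info = {{1, PeerInfo{90}}, {2, PeerInfo{80}}};
  pg.pwlc = {{1, 100}, {2, 100}};  // update from shard 1 also covers shard 2
  pg.update_peer_info();
  std::cout << pg.peer_info[2].last_update << "\n";  // 100: shard 2 also applied
}
```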
Bill Scales [Wed, 30 Jul 2025 11:44:10 +0000 (12:44 +0100)]
osd: Optimized EC don't apply pwlc for divergent writes
Split pwlc epoch into a separate variable so that we
can use epoch and version number when comparing if
last_update is within a pwlc range. This ensures that
pwlc is not applied to a shard that has a divergent
write, but still tracks the most recent update of pwlc.
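A hypothetical illustration of the comparison (not the real pwlc bookkeeping): checking last_update against the pwlc range as an (epoch, version) pair means a divergent write from an older epoch no longer falls inside the range.
```
// Illustrative only: compare (epoch, version) lexicographically.
#include <iostream>
#include <tuple>

struct eversion { unsigned epoch; unsigned version; };

bool within_pwlc_range(eversion lu, eversion lo, eversion hi) {
  auto key = [](eversion e) { return std::make_tuple(e.epoch, e.version); };
  return key(lo) <= key(lu) && key(lu) <= key(hi);
}

int main() {
  eversion lo{6, 100}, hi{6, 110};
  std::cout << within_pwlc_range({6, 105}, lo, hi) << "\n";  // 1: apply pwlc
  std::cout << within_pwlc_range({5, 105}, lo, hi) << "\n";  // 0: divergent, older epoch
}
```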
Bill Scales [Wed, 30 Jul 2025 11:41:34 +0000 (12:41 +0100)]
osd: Optimized EC present_shards no longer needed
present_shards is no longer needed in the PG log entry, this has been
replaced with code in proc_master_log that calculates which shards were
in the last epoch started and are still present.
Bill Scales [Mon, 28 Jul 2025 08:26:36 +0000 (09:26 +0100)]
osd: Optimized EC proc_master_log fix roll-forward logic when shard is absent
Fix bug in optimized EC code where proc_master_log incorrectly did not
roll forward a write if one of the written shards is missing in the current
epoch and there is a stray version of that shard that did not receive the
write.
As long as the currently present shards that participated in les and were
updated by the write have the update, the write should be rolled forward.
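A minimal sketch of that roll-forward condition, with hypothetical set-based bookkeeping in place of the real log/missing machinery:
```
// Illustrative model: a write is rolled forward if every currently present
// shard that participated in les and was targeted by the write has it; absent
// shards and stray copies that missed the write do not veto the roll-forward.
#include <iostream>
#include <set>

bool should_roll_forward(const std::set<int>& written_shards,
                         const std::set<int>& present_shards,
                         const std::set<int>& les_shards,
                         const std::set<int>& shards_with_update) {
  for (int shard : written_shards) {
    bool relevant = present_shards.count(shard) && les_shards.count(shard);
    if (relevant && !shards_with_update.count(shard))
      return false;  // a present, participating shard missed the write
  }
  return true;
}

int main() {
  // Shard 3 is absent this epoch and its stray copy missed the write.
  std::cout << should_roll_forward({0, 1, 3}, {0, 1, 2}, {0, 1, 2, 3}, {0, 1})
            << "\n";  // 1: roll the write forward
}
```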
Bill Scales [Mon, 28 Jul 2025 08:21:54 +0000 (09:21 +0100)]
osd: Refactor find_best_info and choose_acting
Refactor find_best_info to have a separate function to calculate
maxles and minlua. The refactor makes history_les_bound
optional and tidies up the choose_acting interface, removing this
argument where it is not used.
Bill Scales [Thu, 17 Jul 2025 18:17:27 +0000 (19:17 +0100)]
osd: EC Optimizations proc_master_log boundary case bug fixes
Fix a couple of bugs in proc_master_log for optimized EC
pools dealing with boundary conditions such as an empty
log and merging two logs that diverge from the very first
entry.
Refactor the code to handle the boundary conditions and
neaten up the code.
Predicate the code block with if (pool.info.allows_ecoptimizations())
to make it clear this code path is only for optimized EC pools.
Jon Bailey [Fri, 25 Jul 2025 13:16:35 +0000 (14:16 +0100)]
osd: Invalidate stats during peering if we are rolling a shard forwards.
This change will mean we always recalculate stats upon rolling stats forwards. This prevent the situation where we end up with incorrect statistics due to where we always take the stats of the oldest shard during peering; causing outdated pg stats being applied for cases where the oldest shards are shards that don't see partial writes where num_bytes has changed on other places after that point on that shard.