mon/MonClient: wipe secrets and invalidate tickets on auth epoch change
* This causes service daemons to drop all known service tickets and request new
ones from the auth server.
* This causes the clients (and service daemons) to request new tickets from the
auth server which will include tickets signed with the new service keys.
This will be used to indicate to clients and service daemons that the auth
service keys have been rotated. Clients and service daemons are expected to
invalidate their tickets and re-authenticate. Service daemons should wipe
their service keys.
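A minimal sketch of that reaction, with hypothetical type and method names rather than the real MonClient code: when a new auth epoch is reported, the client drops its tickets (and, for a service daemon, its rotating service keys) and re-authenticates.

    #include <cstdint>
    #include <map>
    #include <string>

    // Hypothetical illustration only; the real MonClient/ticket types differ.
    struct TicketHandler { std::string ticket; };

    class AuthEpochClientSketch {
      uint64_t auth_epoch = 0;
      std::map<uint32_t, TicketHandler> tickets;     // per-service tickets
      std::map<uint32_t, std::string> service_keys;  // service daemons only

    public:
      void handle_auth_epoch(uint64_t reported, bool is_service_daemon) {
        if (reported == auth_epoch)
          return;                     // keys not rotated; keep current tickets
        auth_epoch = reported;
        tickets.clear();              // invalidate all known tickets
        if (is_service_daemon)
          service_keys.clear();       // wipe rotating service keys too
        reauthenticate();             // fetch tickets signed with the new keys
      }

    private:
      void reauthenticate() { /* request new tickets from the auth server */ }
    };

    int main() {
      AuthEpochClientSketch c;
      c.handle_auth_epoch(2, /*is_service_daemon=*/true);
    }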
Patrick Donnelly [Wed, 26 Mar 2025 01:59:34 +0000 (21:59 -0400)]
mon/AuthMonitor: add dump-keys and wipe-rotating-service-keys
`auth dump-keys` allows examining the key types for each entity and also the
rotating session keys. This lets us confirm key upgrades are done as expected.
`wipe-rotating-service-keys` clears out existing non-auth service keys so that we do not
need to wait for the rotating key expiration. It is not disruptive so long as clients
renew their tickets when prompted by the auth epoch change.
Matan Breizman [Mon, 9 Jun 2025 12:07:49 +0000 (12:07 +0000)]
include/common_fwd: Include Crypto classes
CryptoManager::cct is now used in the CephContext constructor. To provide
this definition, any target that builds ceph_context.cc must also include
Crypto.cc.
The crimson-alien-common library, which previously only had ceph_context.cc,
must now also include Crypto.cc.
However, crimson-common also includes Crypto.cc, which would cause multiple
definitions of the Crypto classes' methods.
To resolve this, wrap all Crypto classes in TOPNSPC::common so that they are
forwarded using the common_fwd logic.
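A rough sketch of the forwarding pattern (illustrative names, not the actual include/common_fwd.h contents): the Crypto classes are declared once under TOPNSPC::common and consumers only see the forwarded aliases, so each binary links exactly one definition regardless of which library carries Crypto.cc.

    // Illustrative common_fwd-style forwarding; the real header differs.
    // Assumption: TOPNSPC expands to the classic "ceph" namespace here.
    #define TOPNSPC ceph

    namespace TOPNSPC::common {
    class CryptoManager;   // defined once, in Crypto.cc
    class CryptoHandler;
    }

    namespace TOPNSPC {
    // Consumers refer to the forwarded names only; the concrete definitions
    // are provided by whichever library compiles Crypto.cc.
    using common::CryptoManager;
    using common::CryptoHandler;
    }

    int main() {
      ceph::CryptoManager* mgr = nullptr;  // a forward declaration suffices here
      (void)mgr;
    }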
Yehuda Sadeh [Wed, 28 May 2025 19:51:19 +0000 (15:51 -0400)]
cephx: sign messages using hmac_sha256
If the key type is newer than the original AES, calculate the message
hash using HMAC-SHA256.
We cannot use plain aes256k the way we do with the aes key because
of the confounder. The other option would be to not inject a
confounder, but that would weaken the cipher.
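For illustration, a standalone sketch of signing a payload with HMAC-SHA256 via OpenSSL; the key and message are placeholders and this is not the cephx wire format.

    #include <openssl/evp.h>
    #include <openssl/hmac.h>
    #include <cstdio>
    #include <cstring>
    #include <string>
    #include <vector>

    // Compute HMAC-SHA256 over a message with a session key.
    static std::vector<unsigned char>
    hmac_sha256(const std::string& key, const unsigned char* data, size_t len) {
      std::vector<unsigned char> out(EVP_MAX_MD_SIZE);
      unsigned int outlen = 0;
      HMAC(EVP_sha256(), key.data(), static_cast<int>(key.size()),
           data, len, out.data(), &outlen);
      out.resize(outlen);
      return out;
    }

    int main() {
      std::string session_key = "placeholder-session-key";   // not a real key
      const char msg[] = "message payload";
      auto sig = hmac_sha256(session_key,
                             reinterpret_cast<const unsigned char*>(msg),
                             std::strlen(msg));
      // The MAC is deterministic: both peers compute the same value, unlike an
      // encryption that prepends a random confounder.
      std::printf("signature bytes: %zu\n", sig.size());
    }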
Yehuda Sadeh [Fri, 7 Mar 2025 18:20:58 +0000 (13:20 -0500)]
auth: add a configurable to control rotating keys cipher type
auth_service_cipher: a mon configurable that determines which type of cipher
the rotating keys use. The configurable can be changed at runtime. Note
that the change does not invalidate existing keys; those expire
based on their ttl.
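A tiny sketch of that rotation behaviour with assumed option values and helper names (not the real option schema): the configured cipher only affects keys generated from now on, while keys already issued age out on their own ttl.

    #include <cstdint>
    #include <deque>
    #include <stdexcept>
    #include <string>

    // Hypothetical key-rotation sketch; the value strings are illustrative.
    enum class KeyType { AES128, AES256_KRB5 };

    static KeyType parse_cipher(const std::string& auth_service_cipher) {
      if (auth_service_cipher == "aes")     return KeyType::AES128;
      if (auth_service_cipher == "aes256k") return KeyType::AES256_KRB5;
      throw std::invalid_argument("unknown cipher: " + auth_service_cipher);
    }

    struct RotatingKey { KeyType type; uint64_t expires; };

    void rotate(std::deque<RotatingKey>& keys, const std::string& cipher,
                uint64_t now, uint64_t ttl) {
      // Existing keys are left untouched; they expire based on their ttl.
      while (!keys.empty() && keys.front().expires <= now)
        keys.pop_front();
      // Only the newly generated key picks up the currently configured cipher.
      keys.push_back(RotatingKey{parse_cipher(cipher), now + ttl});
    }

    int main() {
      std::deque<RotatingKey> keys;
      rotate(keys, "aes", 0, 3600);        // old-style key
      rotate(keys, "aes256k", 100, 3600);  // configurable changed at runtime
    }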
Yehuda Sadeh [Thu, 27 Feb 2025 21:14:06 +0000 (16:14 -0500)]
auth/cephx: modify client + server challenges hashing
This applies when using ciphers other than the original
AES-128 one; use the HMAC-SHA256 hash now. With AES256-KRB5
the original method of encrypting the combined challenges
doesn't work, as the confounder randomizes the result.
Yehuda Sadeh [Thu, 27 Feb 2025 16:55:37 +0000 (11:55 -0500)]
ceph-authtool: support --key-type param
Also move the encryption handlers out of the ceph_context.
Handlers are now returned as a shared_ptr, to support the
creation of new handlers with different params (such as
the usage param).
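A sketch of that handler-lifetime change with invented type and function names: callers get a fresh shared_ptr per request, so handlers built with different parameters (such as a usage parameter) can coexist instead of one handler being cached on the ceph_context.

    #include <memory>
    #include <string>

    // Hypothetical types; the real crypto handler interface differs.
    struct CryptoHandlerSketch {
      int key_type;
      std::string usage;
      CryptoHandlerSketch(int kt, std::string u)
        : key_type(kt), usage(std::move(u)) {}
    };

    std::shared_ptr<CryptoHandlerSketch>
    make_crypto_handler(int key_type, const std::string& usage) {
      // Each call builds a handler with its own parameters; shared ownership
      // lets every caller keep it alive for as long as needed.
      return std::make_shared<CryptoHandlerSketch>(key_type, usage);
    }

    int main() {
      auto ticket_h  = make_crypto_handler(2, "ticket");
      auto message_h = make_crypto_handler(2, "message");  // different usage param
      (void)ticket_h; (void)message_h;
    }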
Jon Bailey [Wed, 20 Aug 2025 10:11:09 +0000 (11:11 +0100)]
osd: Reduce the amount of stats invalidations when rolling shards forwards during peering
Currently, stats invalidations happen during peering when rolling shards forward.
We can reduce this so we only invalidate the stats when no other shard has stats at the version we want to roll forwards to.
If a shard does have stats at the correct version, we use those stats instead of invalidating.
If no shard has the correct version of the stats, we invalidate as before.
* Current primary shard has been absent so has missed the latest few writes
* All the recent writes are partial writes that have not updated shard X
* All the recent writes have completed
The authoritative shard is chosen from the set of primary-capable shards
that have the highest last epoch started; these all have log entries
for the recent writes.
The get-log shard is chosen from the set of shards that have the highest
last epoch started; this chooses shard X because it is furthest behind.
The primary shard's last update is not less than the get-log shard's last
update, so this if statement decides that it has a good enough log.
We then proceed through peering using the primary log and the
log from shard X. Neither has details about the recent writes,
which are then incorrectly rolled back.
The if statement should be looking at last_update for the
authoritative shard rather than the get_log_shard; the code
would then realize that it needs to get the log from the
authoritative shard first and then have a second pass
where it gets the log from the get-log shard.
Peering would then have information about the partial writes
(obtained from the authoritative shard's log) and could correctly
roll these writes forward by deducing that the get_log_shard
didn't have these log entries because they were partial writes.
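A simplified sketch of the corrected check (stand-in types, not the PeeringState code): the primary's log only counts as good enough when it is not behind the authoritative shard; comparing against the get-log shard, which is deliberately the furthest behind, hides the partial writes.

    #include <cstdio>

    // Stand-in for eversion_t: (epoch, version) compared lexicographically.
    struct Ver {
      unsigned epoch = 0, version = 0;
      bool operator<(const Ver& o) const {
        return epoch != o.epoch ? epoch < o.epoch : version < o.version;
      }
    };

    // Buggy shape: comparing against the get-log shard can look "good enough"
    // because that shard is behind too, so the partial writes get rolled back.
    bool log_sufficient_buggy(Ver primary, Ver get_log_shard) {
      return !(primary < get_log_shard);
    }

    // Fixed shape: compare against the authoritative shard; if the primary is
    // behind it, fetch that full log first, then repeat GetLog for the
    // get-log shard.
    bool log_sufficient_fixed(Ver primary, Ver auth_shard) {
      return !(primary < auth_shard);
    }

    int main() {
      Ver primary{10, 5}, get_log_shard{10, 5}, auth_shard{10, 8};
      std::printf("buggy=%d fixed=%d\n",
                  log_sufficient_buggy(primary, get_log_shard),
                  log_sufficient_fixed(primary, auth_shard));
    }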
Alex Ainscow [Fri, 8 Aug 2025 09:25:53 +0000 (10:25 +0100)]
osd: Fix segfault in EC debug string
The old debug_string implementation was potentially reading up to 3
bytes off the end of an array. It was also doing lots of unnecessary
bufferlist reconstructions. This refactor fixes both
issues.
Bill Scales [Fri, 8 Aug 2025 08:58:14 +0000 (09:58 +0100)]
osd: Optimized EC backfill interval has wrong versions
Bug in the optimized EC code that creates the backfill
interval on the primary. It creates a map with
the object version for each backfilling shard. When
there are multiple backfill targets the code was
overwriting oi.version with the version
for a shard that has had partial writes, which
can result in the object not being backfilled.
This can manifest as a data integrity issue, scrub
error or snapshot corruption.
Bill Scales [Mon, 4 Aug 2025 15:24:41 +0000 (16:24 +0100)]
osd: Optimized EC choose_acting needs to use best primary shard
There have been a couple of corner-case bugs with choose_acting
with optimized EC pools in the scenario where a new primary
with no existing log is chosen and find_best_info selects
a non-primary shard as the authoritative shard.
Non-primary shards don't have a full log, so in this scenario
we need to get the log from a shard that does have a complete
log first (so our log is ahead of or equivalent to the authoritative shard)
and then repeat the get log for the authoritative shard.
Problems arise if we make different decisions about the acting
set and backfill/recovery based on these two different shards.
In one bug we oscillated between two different primaries
because one primary used one shard to make peering decisions
and the other primary used the other shard, resulting in
looping flip/flop changes to the acting set.
In another bug we used one shard to decide that we could do
async recovery but then tried to get the log from another
shard and asserted because we didn't have enough history in
the log to do recovery and should have chosen to do a backfill.
This change makes optimized EC pools always choose the
best !non_primary shard when making decisions about peering
(irrespective of whether the primary has a full log or not).
The best overall shard is now only used by get log when
deciding how far to roll back the log.
It also sets repeat_getlog to false if peering fails because
the PG is incomplete, to avoid looping forever trying to get
the log.
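Roughly, the selection rule now looks like the sketch below (simplified containers and a hypothetical helper): peering and acting-set decisions always use the best shard found with non-primary shards excluded, while the unrestricted best shard only influences how far the log may be rolled back during GetLog.

    #include <optional>
    #include <vector>

    // Hypothetical, simplified per-shard peering info.
    struct ShardInfo {
      int shard_id;
      bool non_primary;            // partial-log shard in an optimized EC pool
      unsigned last_epoch_started;
      unsigned last_update;
    };

    std::optional<ShardInfo>
    find_best(const std::vector<ShardInfo>& shards, bool exclude_nonprimary) {
      std::optional<ShardInfo> best;
      for (const auto& s : shards) {
        if (exclude_nonprimary && s.non_primary)
          continue;
        if (!best || s.last_epoch_started > best->last_epoch_started ||
            (s.last_epoch_started == best->last_epoch_started &&
             s.last_update > best->last_update))
          best = s;
      }
      return best;
    }

    int main() {
      std::vector<ShardInfo> shards = {
        {0, false, 40, 100}, {1, true, 41, 103}, {2, false, 40, 101}};
      // Drives the acting set and recovery/backfill decisions:
      auto decision_shard = find_best(shards, /*exclude_nonprimary=*/true);
      // Consulted only when deciding how far to roll back the log in GetLog:
      auto rollback_shard = find_best(shards, /*exclude_nonprimary=*/false);
      (void)decision_shard; (void)rollback_shard;
    }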
Alex Ainscow [Fri, 1 Aug 2025 14:09:58 +0000 (15:09 +0100)]
osd: Do not send PDWs if read count > k
The main point of PDW (as currently implemented) is to reduce the amount
of reading performed by the primary when preparing for a read-modify-write (RMW).
It was making the assumption that if any recovery was required by a
conventional RMW, then a PDW is always better. This was an incorrect
assumption, as a conventional RMW performs at most k reads for any plugin
which supports PDW. As such, we tweak this logic to perform a conventional
RMW if the PDW is going to read k or more shards.
This should improve performance in some minor areas.
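The tweak boils down to a comparison like this sketch (hypothetical helper name): fall back to a conventional RMW whenever the PDW plan would read k or more shards, since a conventional RMW never needs more than k reads.

    #include <cstdio>

    // Hypothetical decision helper: pick between a PDW and a conventional
    // read-modify-write based on planned shard reads.
    bool use_pdw(unsigned pdw_reads, unsigned k) {
      // A conventional RMW reads at most k shards for plugins that support
      // PDW, so a PDW only pays off when it reads strictly fewer than k.
      return pdw_reads < k;
    }

    int main() {
      std::printf("%d %d\n",
                  use_pdw(2, 4),   // PDW wins: fewer reads
                  use_pdw(5, 4));  // conventional RMW instead
    }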
Alex Ainscow [Wed, 18 Jun 2025 19:46:49 +0000 (20:46 +0100)]
osd: Fix decode for some extent cache reads.
The extent cache in EC can cause the backend to perform some surprising reads. Some
of the patterns discovered in testing caused the decode to attempt to
decode more data than was anticipated during read planning, leading to an
assert. This simple fix reduces the scope of the decode to the minimum.
Bill Scales [Fri, 1 Aug 2025 10:48:18 +0000 (11:48 +0100)]
osd: Optimized EC calculate_maxles_and_minlua needs to use exclude_nonprimary_shards
When an optimized EC pool is searching for the best shard that
isn't a non-primary shard, the calculation of maxles and
minlua needs to exclude non-primary shards.
This bug was seen in a test run where activating a PG was
interrupted by a new epoch and only a couple of non-primary
shards became active and updated les. In the next epoch
a new primary (without log) failed to find a shard that
wasn't non-primary with the latest les. The les of
non-primary shards should be ignored when looking for
an appropriate shard to get the full log from.
This is safe because an epoch cannot start I/O without
at least K shards that have updated les, and there
are always K-1 non-primary shards. If I/O has started
then we will find the latest les even if we skip
non-primary shards. If I/O has not started then the
latest les ignoring non-primary shards is the
last epoch in which I/O was started and has a good
enough log+missing list.
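A condensed sketch of the corrected calculation (simplified types, not the real calculate_maxles_and_minlua signature): when the caller excludes non-primary shards, their les and last_update never feed the result, so a shard that only bumped les during the interrupted activation cannot win.

    #include <vector>

    // Hypothetical, simplified per-shard peering info.
    struct ShardPeerInfo {
      bool non_primary;        // partial-log shard in an optimized EC pool
      unsigned les;            // last_epoch_started
      unsigned last_update;
    };

    struct LesLua { unsigned maxles = 0; unsigned minlua = ~0u; };

    LesLua calculate_maxles_and_minlua(const std::vector<ShardPeerInfo>& shards,
                                       bool exclude_nonprimary_shards) {
      LesLua r;
      for (const auto& s : shards) {      // pass 1: highest les
        if (exclude_nonprimary_shards && s.non_primary)
          continue;
        if (s.les > r.maxles)
          r.maxles = s.les;
      }
      for (const auto& s : shards) {      // pass 2: min acceptable last_update
        if (exclude_nonprimary_shards && s.non_primary)
          continue;
        if (s.les == r.maxles && s.last_update < r.minlua)
          r.minlua = s.last_update;
      }
      return r;
    }

    int main() {
      // The non-primary shard bumped les in an interrupted activation; it is
      // skipped, so maxles stays at the last epoch that really started I/O.
      std::vector<ShardPeerInfo> shards = {{false, 40, 90}, {true, 41, 95}};
      auto r = calculate_maxles_and_minlua(shards, true);
      (void)r;
    }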
Bill Scales [Fri, 1 Aug 2025 09:39:16 +0000 (10:39 +0100)]
osd: Optimized EC choose_async_recovery_ec must use auth_shard
Optimized EC pools modify how GetLog and choose_acting work:
if the auth_shard is a non-primary shard and the (new) primary
is behind the auth_shard, then we cannot just get the log from
the non-primary shard because it will be missing entries for
partial writes. Instead we need to get the log from a shard
that has the full log first and then repeat GetLog to get
the log from the auth_shard.
choose_acting was modifying auth_shard in the case where
we need to get the log from another shard first. This is
wrong - the remainder of the logic in choose_acting, and
in particular choose_async_recovery_ec, needs to use the
auth_shard to calculate what the acting set will be.
Using a different shard can occasionally cause a
different acting set to be selected (because of
thresholds on how many log entries behind
a shard needs to be to perform async recovery), and
this can lead to two shards flip/flopping with
different opinions about what the acting set should be.
The fix is to separate the shard that will be returned
to GetLog from the auth_shard, which will be used
for acting set calculations.
Bill Scales [Fri, 1 Aug 2025 09:22:47 +0000 (10:22 +0100)]
osd: Optimized EC don't try to trim past crt
If an exceptionally long sequence of partial writes
that did not update a shard is followed by a full write,
then it is possible that the log trim point is ahead of the
previous write to that shard (and hence ahead of crt). We cannot trim
beyond crt. In this scenario it is fine to limit the trim to crt,
because the shard doesn't have any of the log entries for the
partial writes, so there is nothing more to trim.
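In sketch form (stand-in version type), the guard is just a clamp of the requested trim point to crt:

    // Stand-in for eversion_t.
    struct Ver {
      unsigned epoch = 0, version = 0;
      bool operator<(const Ver& o) const {
        return epoch != o.epoch ? epoch < o.epoch : version < o.version;
      }
    };

    // Never trim past crt: if a long run of partial writes left crt older than
    // the requested trim point, trimming to crt is enough because the shard
    // has no log entries for those partial writes anyway.
    Ver clamp_trim_to(Ver requested, Ver crt) {
      return crt < requested ? crt : requested;   // min(requested, crt)
    }

    int main() {
      Ver crt{10, 5}, requested{10, 9};
      Ver actual = clamp_trim_to(requested, crt);  // clamps to crt
      (void)actual;
    }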
Bill Scales [Fri, 1 Aug 2025 08:56:23 +0000 (09:56 +0100)]
osd: Optimized EC missing call to apply_pwlc after updating pwlc
update_peer_info was updating pwlc with a newer version received
from another shard, but failed to update the peer_infos to
reflect the new pwlc by calling apply_pwlc.
The scenario was the primary receiving an update from shard X which had
newer information about shard Y. The code was calling apply_pwlc
for shard X but not for shard Y.
The fix simplifies the logic in update_peer_info: if we are
the primary, update all peer_infos that have pwlc; if we
are a non-primary and there is pwlc, then update our own info.
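Sketching the simplified rule (hypothetical containers and shapes, not the actual update_peer_info): after merging newer pwlc received from any shard, a primary re-applies pwlc to every peer_info that has it, and a non-primary applies it to its own info.

    #include <map>

    // Hypothetical, heavily simplified shapes.
    using shard_id_t = int;
    struct PwlcRange { unsigned from = 0, to = 0; };
    struct PeerInfo { unsigned last_update = 0; };

    struct PeeringSketch {
      bool is_primary = false;
      shard_id_t own_shard = 0;                  // illustrative
      std::map<shard_id_t, PwlcRange> pwlc;      // per-shard partial-write info
      std::map<shard_id_t, PeerInfo> peer_infos;
      PeerInfo my_info;

      // Stand-in for apply_pwlc: roll a shard's last_update forward through
      // its partial-write range.
      void apply_pwlc(PeerInfo& info, const PwlcRange& r) {
        if (info.last_update >= r.from && info.last_update < r.to)
          info.last_update = r.to;
      }

      void update_peer_info(const std::map<shard_id_t, PwlcRange>& newer_pwlc) {
        for (const auto& [shard, range] : newer_pwlc)
          pwlc[shard] = range;                   // merge (version checks omitted)
        if (is_primary) {
          // Re-apply to every peer that has pwlc, not just the sender.
          for (auto& [shard, info] : peer_infos)
            if (auto it = pwlc.find(shard); it != pwlc.end())
              apply_pwlc(info, it->second);
        } else if (auto it = pwlc.find(own_shard); it != pwlc.end()) {
          apply_pwlc(my_info, it->second);       // non-primary: update own info
        }
      }
    };

    int main() {
      PeeringSketch p;
      p.update_peer_info({{1, {5, 9}}});
    }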
Bill Scales [Wed, 30 Jul 2025 11:44:10 +0000 (12:44 +0100)]
osd: Optimized EC don't apply pwlc for divergent writes
Split the pwlc epoch into a separate variable so that we
can use the epoch and version number when checking whether
last_update is within a pwlc range. This ensures that
pwlc is not applied to a shard that has a divergent
write, while still tracking the most recent update of pwlc.
Bill Scales [Wed, 30 Jul 2025 11:41:34 +0000 (12:41 +0100)]
osd: Optimized EC present_shards no longer needed
present_shards is no longer needed in the PG log entry; it has been
replaced with code in proc_master_log that calculates which shards were
in the last epoch started and are still present.
Bill Scales [Mon, 28 Jul 2025 08:26:36 +0000 (09:26 +0100)]
osd: Optimized EC proc_master_log fix roll-forward logic when shard is absent
Fix a bug in the optimized EC code where proc_master_log incorrectly did not
roll forward a write if one of the written shards is missing in the current
epoch and there is a stray version of that shard that did not receive the
write.
As long as the currently present shards that participated in les and were
updated by the write actually have the update, the write should be rolled forward.
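As a sketch of the corrected condition (simplified sets, invented helper name): a write is rolled forward when every shard that is present now, participated in les and was updated by the write actually has it; absent shards and stray copies that missed the write no longer block the roll-forward.

    #include <set>

    using shard_id_t = int;

    // Hypothetical helper capturing the roll-forward rule described above.
    bool should_roll_forward(const std::set<shard_id_t>& written_shards,
                             const std::set<shard_id_t>& present_shards,
                             const std::set<shard_id_t>& les_shards,
                             const std::set<shard_id_t>& shards_with_write) {
      for (shard_id_t s : written_shards) {
        bool relevant = present_shards.count(s) && les_shards.count(s);
        if (relevant && !shards_with_write.count(s))
          return false;   // a shard that should have the write is missing it
      }
      return true;        // absent/stray shards do not veto the roll-forward
    }

    int main() {
      // Shard 2 was written but is absent in this epoch: roll forward anyway.
      return should_roll_forward({0, 1, 2}, {0, 1, 3}, {0, 1, 2}, {0, 1}) ? 0 : 1;
    }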
Bill Scales [Mon, 28 Jul 2025 08:21:54 +0000 (09:21 +0100)]
osd: Refactor find_best_info and choose_acting
Refactor find_best_info to use a separate function to calculate
maxles and minlua. The refactor makes history_les_bound
optional and tidies up the choose_acting interface, removing it
where it is not used.
Bill Scales [Thu, 17 Jul 2025 18:17:27 +0000 (19:17 +0100)]
osd: EC Optimizations proc_master_log boundary case bug fixes
Fix a couple of bugs in proc_master_log for optimized EC
pools dealing with boundary conditions such as an empty
log and merging two logs that diverge from the very first
entry.
Refactor the code to handle the boundary conditions and neaten it up.
Guard the code block with if (pool.info.allows_ecoptimizations())
to make it clear this code path is only for optimized EC pools.
Jon Bailey [Fri, 25 Jul 2025 13:16:35 +0000 (14:16 +0100)]
osd: Invalidate stats during peering if we are rolling a shard forwards.
This change means we always recalculate stats when rolling stats forward. This prevents
the situation where we end up with incorrect statistics because we always take the stats
of the oldest shard during peering, which can apply outdated PG stats when the oldest
shards are shards that have not seen the partial writes that changed num_bytes elsewhere
after that point on that shard.