osd/scrub: allow auto-repair on operator-initiated scrubs
Previously, operator-initiated scrubs would not auto-repair, regardless
of the value of the 'osd_scrub_auto_repair' config option. This was
less confusing to the operator than it could have been, because most
operator commands would in fact initiate a regular periodic scrub.
That quirk has since been fixed: operator commands now trigger genuine
'op-initiated' scrubs, hence the need for this patch.
The original bug was fixed in https://github.com/ceph/ceph/pull/54615,
but was unfortunately re-introduced later on.
Fixes: https://tracker.ceph.com/issues/72178
Signed-off-by: Ronen Friedman <rfriedma@redhat.com>
(cherry picked from commit 97de817ad1c253ee1c7c9c9302981ad2435301b9)
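For illustration only, a minimal sketch of the intended decision (the names below are hypothetical and omit other conditions such as deep-scrub checks; they are not the actual Ceph scrub code):

    // Hypothetical sketch: auto-repair should honour osd_scrub_auto_repair
    // for operator-initiated scrubs as well as periodic ones.
    struct scrub_request_t {
      bool periodic;             // scheduled by the OSD
      bool operator_initiated;   // e.g. requested via an operator command
    };

    bool should_auto_repair(const scrub_request_t& req, bool osd_scrub_auto_repair)
    {
      if (!osd_scrub_auto_repair) {
        return false;
      }
      // Before this patch, operator-initiated scrubs were excluded here;
      // with the fix both kinds of scrub honour the config option.
      return req.periodic || req.operator_initiated;
    }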
Zac Dover [Wed, 11 Jun 2025 12:44:32 +0000 (22:44 +1000)]
doc/rados/ops: edit cache-tiering.rst
Add material to doc/rados/operations/cache-tiering.rst, as suggested by
Anthony D'Atri in
https://github.com/ceph/ceph/pull/63745#discussion_r2127887785.
Ville Ojamo [Wed, 30 Apr 2025 18:17:14 +0000 (01:17 +0700)]
doc/radosgw: Improve rgw-cache.rst
Try to improve the language by completely rewriting some sentences.
Attempt to format the document more like the rest of the docs.
Fix several errors in punctuation, capitalization, spacing, etc.
Use blocks with bash prompts for CLI commands instead of hardcoded
prompts.
Fix section hierarchy and section title underline lengths.
Use an admonition.
Soumya Koduri [Fri, 23 May 2025 21:39:50 +0000 (03:09 +0530)]
rgw/restore: Use strtoull to read size till 2^64
Reviewed-by: Adam Emerson <aemerson@redhat.com>
Reviewed-by: Matt Benjamin <mbenjamin@redhat.com>
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
(cherry picked from commit b3c867a121a7315b5a9e2d30d0af44c08676f8ca)
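For context, strtoull parses into an unsigned long long (at least 64 bits), so sizes up to 2^64 - 1 can be read back correctly. A minimal, hedged sketch; the helper name and attribute format are illustrative, not the actual RGW code:

    #include <cstdlib>
    #include <cstdint>
    #include <string>

    // Parse an object size stored as a decimal string. std::strtoull covers the
    // full 64-bit range, unlike strtol/strtoul on platforms with a 32-bit long.
    uint64_t parse_size_attr(const std::string& val)
    {
      return static_cast<uint64_t>(std::strtoull(val.c_str(), nullptr, 10));
    }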
Soumya Koduri [Fri, 23 May 2025 20:25:30 +0000 (01:55 +0530)]
rgw/cloud-restore: Handle failure with adding restore entry
If adding the restore entry to the FIFO fails, set the `restore_status`
of that object to "RestoreFailed" so that the restore can be
retried by the end S3 user.
Reviewed-by: Adam Emerson <aemerson@redhat.com>
Reviewed-by: Jiffin Tony Thottan <thottanjiffin@gmail.com>
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
(cherry picked from commit 9974f51eb61603b8117d7b50e6b0b4614fcce721)
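A minimal sketch of the intended error path. All types below are hypothetical, simplified stand-ins, not the actual RGW/SAL classes:

    #include <deque>
    #include <string>
    #include <utility>

    enum class RestoreStatus { None, RestoreFailed };

    struct RestoreEntry { std::string bucket, object; };
    struct ObjectState  { RestoreStatus restore_status = RestoreStatus::None; };

    struct RestoreQueue {                 // stands in for the FIFO
      std::deque<RestoreEntry> q;
      int push(RestoreEntry e) { q.push_back(std::move(e)); return 0; }
    };

    // If queueing the async restore entry fails, mark the object as
    // RestoreFailed so the end S3 user can retry the restore request.
    int queue_restore(RestoreQueue& fifo, RestoreEntry entry, ObjectState& obj)
    {
      int r = fifo.push(std::move(entry));
      if (r < 0) {
        obj.restore_status = RestoreStatus::RestoreFailed;
      }
      return r;
    }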
rgw/cloud-restore: Support restoration of objects transitioned to Glacier/Tape endpoint
Restoration of objects from certain cloud services (like Glacier/Tape) can
take a significant amount of time (even days). Hence, store the state of such
restore requests and process them periodically.
Brief summary of changes:
* Refactored the existing restore code to consolidate and move all restore processing into the rgw_restore* file/class.
* Defined an RGWRestore class to manage the restoration of objects.
* Lastly, for SAL_RADOS, FIFO is used to store and read restore entries.
Currently, this PR handles storing the state of restore requests sent to the cloud-glacier tier type, which need async processing.
The changes were tested against AWS Glacier Flexible Retrieval with tier_type Expedited and Standard.
Reviewed-by: Matt Benjamin <mbenjamin@redhat.com>
Reviewed-by: Adam Emerson <aemerson@redhat.com>
Reviewed-by: Jiffin Tony Thottan <thottanjiffin@gmail.com>
Reviewed-by: Daniel Gryniewicz <dang@redhat.com>
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
(cherry picked from commit ef96bb0d6137bacf45b9ee2f99ad5bcd8b3b6add)
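A rough sketch of the periodic processing described above, using hypothetical names rather than the real RGWRestore/SAL interfaces:

    #include <string>
    #include <vector>

    struct PendingRestore { std::string object; bool done = false; };

    struct CloudTier {                        // e.g. a Glacier/Tape endpoint
      bool restore_complete(const PendingRestore&) const { return false; }
    };

    // Periodic worker: restore requests that cannot complete synchronously are
    // persisted (FIFO for SAL_RADOS) and re-checked until the cloud endpoint
    // reports the object as restored.
    void process_pending_restores(std::vector<PendingRestore>& pending,
                                  const CloudTier& tier)
    {
      for (auto& entry : pending) {
        if (!entry.done && tier.restore_complete(entry)) {
          entry.done = true;    // finish the restore and update its status
        }
      }
    }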
qa/standalone/scrub: fix "scrubbed in 0ms" in osd-scrub-test.sh
The specific test looks for a 'last scrub duration' higher than
0 as a sign that the scrub actually ran. Previous code fixes
guaranteed that even a scrub duration as low as 1ms would be
reported as "1" (1s). However, none of the 15 objects created
in this test were designated for the tested PG, which remained
empty. As a result, the scrub duration was reported as "0".
The fix is to create a large enough number of objects so that
at least one of them is mapped to the tested PG.
Alex Ainscow [Fri, 6 Jun 2025 11:09:04 +0000 (12:09 +0100)]
osd: Optimised EC should avoid decodes off the end of objects.
This was an edge case in which the need to perform both an encode and a decode as part
of recovery caused EC to attempt a decode off the end of the shard, even though
doing so was unnecessary.
Alex Ainscow [Fri, 23 May 2025 08:59:31 +0000 (09:59 +0100)]
osd: During recovery, pass "for_recovery" when attempting re-reads
get_all_remaining_reads() is used for two purposes:
1. If a read unexpectedly fails, recover from other shards.
2. If a shard is missing, but is allowed to be missing (typically
due to unequal shard sizes), we rely on this function to return
no new reads, without an error.
The test failure we saw was case (2), but I think case (1) is also important.
Most of the time we probably would not notice, but if insufficient redundancy
exists without for_recovery being set, then this will result in
recovery failing.
Alex Ainscow [Fri, 2 May 2025 09:11:45 +0000 (10:11 +0100)]
osd: Improve backfill in new EC.
In old EC, the full stripe was always read and written. In new EC, we only attempt
to recover the shards that were missing. If an old OSD is available, the read can
be directed there.
Alex Ainscow [Thu, 8 May 2025 15:14:03 +0000 (16:14 +0100)]
osd: Cope with empty reads from an OSD without panic.
If a ReadOp from EC contains two objects, where one object reads from only a single shard
but other objects require other shards, then this bug can be hit. The fix should make it
clear what the issue is.
Alex Ainscow [Thu, 8 May 2025 13:22:36 +0000 (14:22 +0100)]
osd: Recover non-primary shards with the correct version.
Scrub revealed a bug whereby the non-primary shards were being given a
version number in the OI which did not match the expected version in
the authoritative OI.
A secondary issue is that all attributes were being pushed to the
non-primary shards, whereas only the OI is actually needed.
Alex Ainscow [Wed, 7 May 2025 09:33:24 +0000 (10:33 +0100)]
osd: Do not do a read-modify-write if op.delete_first is set
Some client OPs are able to generate transactions which delete an
object and then write it again. This is used by the copy-from ops.
If such a write is not 4k aligned, then the new EC code was incorrectly performing
a read-modify-write on the misaligned 4k. This causes some
garbage to be written to the backend OSD, off the end of the
object. This is only a problem if the object is later extended
without the end being written.
The problematic sequence is:
1. Create two objects (A and B) of size X and Y where:
X > Y, (Y % 4096) != 0
2. copy_from OP B -> A
3. Extend B without writing offset Y+1
This will result in a corrupt data buffer at Y+1 without this fix.
Alex Ainscow [Thu, 1 May 2025 09:09:15 +0000 (10:09 +0100)]
osd: Fix off-by-one error in shard_extent_map.
Inserting the first parity buffer was causing the ro-range within the SEM to be incorrectly calculated.
This is a simple fix; I have added some unit tests to defend against this error in the future.
Alex Ainscow [Fri, 25 Apr 2025 18:15:34 +0000 (19:15 +0100)]
osd: Minor performance improvement in ECUtil.cc
Code changes to prevent creating and then erasing an empty shard.
Signed-off-by: Alex Ainscow <aainscow@uk.ibm.com>
(cherry picked from commit c2d6414f659b123fa7060442bff7a90a7ceeb7c0)
Alex Ainscow [Thu, 24 Apr 2025 13:02:41 +0000 (14:02 +0100)]
osd: clone + delete ops should invalidate source new EC extent cache.
The op.is_delete() function only returns true if the op is not ALSO
doing something else (e.g. a clone). This causes issues with clearing
the new EC extent cache.
Alex Ainscow [Wed, 23 Apr 2025 14:41:11 +0000 (15:41 +0100)]
osd: Fix parity updates in truncates.
Previously in optimised EC, when truncating to a partial
stripe, the parity was not being updated. This fix reads
the non-truncated data from the final stripe and calculates
parity updates, which are written to the parity shards.
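For intuition only: with a simple XOR parity (the real plugin may use Reed-Solomon), recomputing the parity chunk of the final, partially truncated stripe looks roughly like this; data beyond the truncation point is treated as zero-filled:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Illustrative XOR parity recompute for one stripe after a partial truncate.
    std::vector<uint8_t> recompute_parity(const std::vector<std::vector<uint8_t>>& data_chunks,
                                          size_t chunk_size)
    {
      std::vector<uint8_t> parity(chunk_size, 0);
      for (const auto& chunk : data_chunks) {
        for (size_t i = 0; i < chunk.size() && i < chunk_size; ++i) {
          parity[i] ^= chunk[i];
        }
      }
      return parity;
    }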
Alex Ainscow [Tue, 22 Apr 2025 12:41:19 +0000 (13:41 +0100)]
osd: Fix EC cache invalidation bug
With optimised EC, there were two bugs with cache invalidation:
1. If two invalidates were in the queue, it is possible that the second
invalidate might be cleared by the first.
2. Reads were being requested if size was being reduced.
Also, added a few debug improvements and some new asserts.
Alex Ainscow [Wed, 9 Apr 2025 12:49:49 +0000 (13:49 +0100)]
osd: Make EC alignment independent of page size.
Code which manipulates full pages is often faster. To exploit this,
optimised EC was written to deal with 4k alignment wherever possible.
When inputs are not aligned, they are quickly aligned to 4k.
Not all architectures use 4k page sizes. Some POWER architectures, for
example, have a 64k page size. In such situations, it is unlikely that
using 64k page alignment will provide any performance boost; indeed it
is likely to hurt performance significantly. As such, EC has been
changed to maintain its own internal alignment (4k), which can be configured.
This has the added advantage that we can potentially tweak this
value in the future.
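A hedged sketch of the idea: keep the EC internal alignment as its own (configurable) constant instead of deriving it from the CPU page size. The constant and helper names are illustrative, not the actual Ceph code:

    #include <cstdint>

    // Illustrative only: EC rounds offsets/lengths to its own internal
    // alignment (default 4 KiB), independent of the platform page size
    // (which may be 64 KiB, e.g. on some POWER systems).
    constexpr uint64_t EC_INTERNAL_ALIGNMENT = 4096;

    constexpr uint64_t align_down(uint64_t v, uint64_t a) { return v - (v % a); }
    constexpr uint64_t align_up(uint64_t v, uint64_t a)   { return align_down(v + a - 1, a); }

    static_assert(align_up(5000, EC_INTERNAL_ALIGNMENT) == 8192, "example");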
Alex Ainscow [Wed, 16 Apr 2025 09:41:48 +0000 (10:41 +0100)]
osd: Fix Truncates in Optimised EC
The previous truncate code attempted to perform a non-aligned truncate by
creating a zero buffer at the end of the object, which was written.
The new code initially truncates to the exact size of the user object before
growing the object to the required 4k alignment. This simpler arrangement
also simplifies the rollback.
Bill Scales [Fri, 6 Jun 2025 12:28:14 +0000 (13:28 +0100)]
osd: EC optimizations fix bug when recovering only partial write objects
PGLog::reset_complete_to does not handle the scenario where all the
missing objects have, as their most recent update, a partial write that
excludes the shard being recovered. In this scenario the oldest need
is newer than the newest log entry. Setting last_complete to the head of the
log confuses the code and makes it think that recovery has completed.
The fix is to hold last_complete one entry behind the head of the log
until all missing objects have been recovered.
PGLog::recover_got already does this when an object is recovered and the
remaining objects to recover match this scenario, so this fix just makes
reset_complete_to behave the same way as recover_got.
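A minimal sketch of the invariant being restored, with version numbers simplified to integers and 'prior_to_head' standing for the entry just before the log head; this is illustrative, not the actual PGLog code:

    #include <algorithm>
    #include <cstdint>

    // Illustrative: while any object is still missing, never let last_complete
    // reach the head of the log, even if every missing object's "need" version
    // is newer than the newest log entry (possible with EC partial writes).
    uint64_t reset_complete_to(uint64_t computed_last_complete,
                               uint64_t prior_to_head,
                               bool have_missing_objects)
    {
      if (have_missing_objects) {
        return std::min(computed_last_complete, prior_to_head);
      }
      return computed_last_complete;
    }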
Bill Scales [Thu, 5 Jun 2025 10:17:06 +0000 (11:17 +0100)]
osd: EC optimizations correct pwlc after PG split
When a PG splits the log entries are divided between the two PGs,
this can result in PWLC refering to log entries in the other PG.
Rollback PWLC after the split so it is not further advanced that
the most recently completed log entry.
Non-primary shards can be missing log entries and may rollback
PWLC too far because of this, however this does not matter
because a split occurs at the start of a peering cycle and these
shards will be updated with the correct PWLC from the primary
shard later in the peering cycle when they are activated.
Bill Scales [Mon, 26 May 2025 13:33:12 +0000 (14:33 +0100)]
osd: EC optimizations overaggressive check for missing objects
Relax an assert in read_log_and_missing for optimized EC
pools. Because the log may not have entries for partial
writes but the missing list is calculated from the full
log, the need version for a missing item may be newer than
the latest log entry for that object.
ceph_objectstore_tool needs care because we don't want to add
extra dependencies. To minimise the dependencies, we always
relax the asserts when using this tool.
Signed-off-by: Bill Scales <bill_scales@uk.ibm.com>
Signed-off-by: Alex Ainscow <aainscow@uk.ibm.com>
(cherry picked from commit 20e883fedaf19293e939c4cac44de196bd6c9c19)
Bill Scales [Wed, 21 May 2025 17:16:50 +0000 (18:16 +0100)]
osd: EC Optimizations fix proc_master_log handling of splits
For optimized EC pools proc_master_log needs to deal with
the other log being merged being behind the local log because
it is missing partial writes. This is done by finding the
point where the logs diverge and then checking whether local
log entries have been committed on all the shards.
A bug in this code meant that after a PG split (where there
may be gaps in the log due to entries moving to the other PG)
the divergence point was not found, and committed
partial writes ended up being discarded, creating
unfound objects.
Bill Scales [Tue, 20 May 2025 10:37:05 +0000 (11:37 +0100)]
osd: EC Optimizations fix missing call to partial_write
When a shard is backfilling and it receives a log entry where the
transaction is not applied it can skip the roll forward by
immediately advancing crt. However it is still necessary to
call partial_write in this scenario to keep the pwlc information
up to date.
Bill Scales [Wed, 14 May 2025 07:39:40 +0000 (08:39 +0100)]
osd: EC Optimizations bug fix for flip/flop acting set
EC optimizations pools have a set of non-primary shards which
cannot become the primary because they do not have all the
metadata updates. If one of these shards is chosen as the
primary it will set the acting set to force another shard to
be chosen.
It is important that the selected acting set is the same
acting set that will be chosen by the next primary (assuming
nothing else changes); otherwise a PG can get into a state where
the acting set flip/flops between two different states, causing
the PG to get stuck in peering and I/O to hang.
A bug in update_peer_info meant that non-primary shards did not
present the same info to choose_acting_set as primary shards
because they were not updating their pg_info_t based on pwlc
information from other shards.
Alex Ainscow [Tue, 13 May 2025 11:55:14 +0000 (12:55 +0100)]
osd: Refuse to commit/rollforward beyond end of log.
In optimised EC, if a transaction is applied to all shards, followed by a
partial transaction, AND these two transactions overlap, then it is
possible for the non-primary shards to commit a version which is after
the end of the log.
This commit changes apply_log such that the commit version will be
changed to the head of the log in such situations.
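In effect this clamps the commit point to the log head. A minimal illustration with simplified version numbers (not the actual apply_log signature):

    #include <algorithm>
    #include <cstdint>

    // Illustrative: a non-primary shard must never acknowledge a commit for a
    // version beyond the newest entry in its own log.
    uint64_t clamp_commit_version(uint64_t requested_commit_to, uint64_t log_head)
    {
      return std::min(requested_commit_to, log_head);
    }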
Alex Ainscow [Tue, 29 Apr 2025 11:02:07 +0000 (12:02 +0100)]
osd: Refactor partial_write to address multiple issues.
We fix a number of issues with partial_write here.
Fix an issue where it is unclear whether an empty PWLC state is
newer or older than a populated PWLC on another shard, by always
updating the pwlc with an empty range rather than leaving it blank.
This is an unfortunate small increase in metadata, so we should
come back to this in a later commit (or possibly a later PR).
Normally a PG log consists of a set of log entries, with each
log entry having a version number one greater than the previous
entry. When a PG splits, the PG log is split so that each of the
new PGs only has log entries for objects in that PG, which
means there can be gaps between version numbers.
PGBackend::partial_write tries to keep track of adjacent
log updates that do not update a particular shard, storing
these as a range in partial_writes_last_complete. To do this
it must compare with the version number of the previous log
entry rather than testing for a version number increment of one.
Also simplify partial_writes to make it more readable.
Signed-off-by: Bill Scales <bill_scales@uk.ibm.com>
Signed-off-by: Alex Ainscow <aainscow@uk.ibm.com>
(cherry picked from commit 2467406f3e22c0746ce20cd04b838dccedadf055)
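A simplified sketch of the range-tracking rule described above; the types are hypothetical and the real state lives in partial_writes_last_complete:

    #include <cstdint>
    #include <utility>

    // Illustrative: pwlc tracks, per shard, a [first, last] range of versions
    // whose updates skipped that shard. The range is extended when the new
    // entry's *previous* version matches the current end of the range, not
    // when the version is exactly "last + 1", since PG splits leave gaps.
    using version_range = std::pair<uint64_t, uint64_t>;   // [first, last]

    void note_partial_write(version_range& pwlc, uint64_t prev_version, uint64_t new_version)
    {
      if (pwlc.second == prev_version) {
        pwlc.second = new_version;              // adjacent in the log: extend the range
      } else {
        pwlc = {new_version, new_version};      // otherwise start a new range
      }
    }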
Alex Ainscow [Tue, 29 Apr 2025 10:59:24 +0000 (11:59 +0100)]
osd: nonprimary shards are permitted to have a crt newer than head
Non-primary shards do not get updates for some transactions. It is possible,
however, for other transactions to increase can_rollback_to to a later
version. This causes an assert to fire for some operations.
Bill Scales [Fri, 25 Apr 2025 14:03:02 +0000 (15:03 +0100)]
osd: overaggressive assert in read_log_and_missing with optimized EC pool
read_log_and_missing is called during OSD initialization to sanity check
the PG log. One of its checks is too aggressive for an optimized EC pool,
where because of a partial write there can be a log entry but no update
to the object on this shard (other shards will have been updated). The
fix is to skip the checks when the log entry indicates this shard was
not updated.
This only affects pools with the allow_ec_optimizations flag on.
Bill Scales [Thu, 29 May 2025 11:53:27 +0000 (12:53 +0100)]
osd: EC optimizations rework for pg_temp
Bug fixes for how pg_temp is used with optimized EC pools. For these
pools pg_temp is re-ordered with non-primary shards last. The acting
set was undoing this re-ordering in PeeringState, but this is too
late and results in code getting the shard id wrong. One consequence
of this was an OSD refusing to create a PG because of an incorrect
shard id.
This commit moves the re-ordering earlier, into OSDMap::_get_temp_osds;
some changes are then required to OSDMap::clean_temps.
Bill Scales [Fri, 23 May 2025 09:45:46 +0000 (10:45 +0100)]
osd: EC Optimizations OSDMap::clean_temps preventing change of primary
clean_temps clears pg_temp if the acting set will be the same
as the up set. For optimized EC pools this is overaggressive, because
there are scenarios where the acting set is deliberately set to be the
same as the up set in order to force an alternative shard to be chosen
as primary; this happens because the acting set is transformed to place
non-primary shards at the end of the pg_temp vector.
Detect this scenario and stop clean_temps from undoing the acting
set chosen by PeeringState::choose_acting.
Bill Scales [Thu, 22 May 2025 12:12:57 +0000 (13:12 +0100)]
osd: EC optimizations bug in OSDMap::clean_temps
OSDMap::clean_temps clears pg_temp for a PG when the up set
matches the acting set. For optimized EC pools the pg_temp
is reordered to place primary shards first, but this function
was not calling pgtemp_undo_primaryfirst to revert the
reordering.
This meant that a 2+1 EC PG with up set [1,2,3] and
a desired acting set [1,3,2] re-ordered the acting
set to produce pg_temp as [1,2,3] and then deleted this
because it equals the up set.
Calling pgtemp_undo_primaryfirst makes this code work
as intended.
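For illustration, a reorder/undo pair with hypothetical names (not the real OSDMap code) showing why the undo step matters for the [1,2,3] / [1,3,2] example above; which shard is non-primary is an assumption made for the example:

    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Reorder a per-shard OSD vector so that non-primary shards come last.
    std::vector<int> primaryfirst(const std::vector<int>& osds,
                                  const std::vector<bool>& nonprimary)
    {
      std::vector<int> out;
      for (size_t i = 0; i < osds.size(); ++i)
        if (!nonprimary[i]) out.push_back(osds[i]);
      for (size_t i = 0; i < osds.size(); ++i)
        if (nonprimary[i]) out.push_back(osds[i]);
      return out;
    }

    // Invert the reordering, restoring per-shard order.
    std::vector<int> undo_primaryfirst(const std::vector<int>& reordered,
                                       const std::vector<bool>& nonprimary)
    {
      std::vector<int> out(reordered.size());
      size_t next = 0;
      for (size_t i = 0; i < reordered.size(); ++i)
        if (!nonprimary[i]) out[i] = reordered[next++];
      for (size_t i = 0; i < reordered.size(); ++i)
        if (nonprimary[i]) out[i] = reordered[next++];
      return out;
    }

    int main()
    {
      std::vector<int>  acting{1, 3, 2};                   // desired acting set
      std::vector<bool> nonprimary{false, true, false};    // shard 1 non-primary (example)
      auto reordered = primaryfirst(acting, nonprimary);   // {1, 2, 3}: equals the up set
      // Comparing 'reordered' against the up set without undoing the reorder is
      // the bug; the comparison must use undo_primaryfirst first.
      assert(undo_primaryfirst(reordered, nonprimary) == acting);
      return 0;
    }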