GitHub allows adding an instructions file to each repo
(.github/copilot-instructions.md) to improve the behavior
of Copilot code review and the Copilot coding agent.
These instructions can also be customized per path, filetype, etc.:
https://docs.github.com/en/copilot/how-tos/configure-custom-instructions/add-repository-instructions
This commit was authored through a Github Agent session: https://github.com/ceph/ceph/tasks/edeca07b-eabd-477c-917a-a18e72a0e2c2
mgr/DaemonServer: Re-order OSDs in crush bucket to maximize OSDs for upgrade
DaemonServer::_maximize_ok_to_upgrade_set() attempts to determine which OSDs
from the initial set found by _populate_crush_bucket_osds() can be
upgraded in the initial phase. If the initial set results in failure,
the convergence logic trims the 'to_upgrade' vector from the end until a safe
set is found.
It is therefore advantageous to sort the OSDs in ascending order of the
number of PGs they host. By placing OSDs with the fewest (or no) PGs at the
beginning of the vector, the trim logic along with _check_offlines_pgs() has
the best chance of finding OSDs to upgrade as it converges on a grouping
of OSDs that host the fewest or no PGs.
To achieve the above, a temporary vector of struct pgs_per_osd is created and
sorted for a given crush bucket. The sorted OSDs are pushed to the main
crush_bucket_osds that is eventually used to run the _check_offlines_pgs()
logic to find a safe set of OSDs to upgrade.
The pgmap is passed to _populate_crush_bucket_osds() so that
get_num_pg_by_osd() can be used to obtain the PG counts needed for the above
logic.
Implement a new Mgr command called 'ok-to-upgrade' that returns a set of OSDs
within the provided CRUSH bucket that are safe to upgrade without reducing
immediate data availability.
The command accepts the following as input:
- CRUSH bucket name (required)
- The CRUSH bucket type is limited to 'rack', 'chassis', 'host' and 'osd'.
This is to prevent users from specifying a bucket type higher up the tree
which could result in performance issues if the number of OSDs in the
bucket is very high.
- The new Ceph version to check against. The format accepted is the short
form of the Ceph version, e.g. 20.3.0-3803-g63ca1ffb5a2. (required)
- The maximum number of OSDs to consider. (optional)
Implementation Details:
After sanity checks on the provided parameters, the following steps are
performed:
1. The set of OSDs within the CRUSH bucket is first determined.
2. From the main set of OSDs, a filtered set of OSDs not yet running the new
Ceph version is created.
- For this purpose, the OSD's 'ceph_version_short' string is read from
the metadata via a new method, DaemonServer::get_osd_metadata(). The
information is obtained from the DaemonStatePtr maintained within the
DaemonServer.
3. If all OSDs are already running the new Ceph version, a success report is
generated and returned.
4. If OSDs are not running the new Ceph version, a new set (to_upgrade) is
created.
5. If an OSD's current version cannot be determined, an error is logged and
the output report is generated with the 'bad_no_version' field populated
with the OSD in question.
6. On the new set (to_upgrade), the existing logic in _check_offlines_pgs()
is executed to see if stopping any or all OSDs in the set as part of the
upgrade would reduce immediate data availability.
- If data availability is impacted, then the number of OSDs in the filtered
set is reduced by a factor defined by a new config option called
'mgr_osd_upgrade_check_convergence_factor' which is set to 0.8 by default.
- The logic in _check_offlines_pgs() is repeated for the new set.
- The above is repeated until a safe subset of OSDs that can be stopped for
upgrade is found. Each iteration reduces the number of OSDs to check by
the convergence factor mentioned above.
7. Note that the default value of
'mgr_osd_upgrade_check_convergence_factor' is intentionally on the higher
side in order to help determine an optimal set of OSDs to upgrade. In other
words, a higher convergence factor helps maximize the number of OSDs to
upgrade; in that case, the number of iterations, and therefore the time
taken to determine the OSDs to upgrade, is proportional to the number of
OSDs in the CRUSH bucket. The converse is true if a lower convergence
factor is used.
8. If the number of OSDs determined is lower than the 'max' specified, then an
additional loop is executed to determine if other children of the CRUSH
bucket can be added to the existing set.
9. Once a viable set is determined, an output report summarizing the safe
set is generated.
A standalone test is introduced that exercises the logic for both
replicated and erasure-coded pools by manipulating the min_size of a pool
and checking for upgradability. The test also performs other basic sanity
checks and exercises error conditions.
The sample output was captured on a cluster running on a single node with
10 OSDs and a replicated pool configuration.
mgr/DaemonServer: Modify offline_pg_report to handle set or vector types
The offline_pg_report structure to be used by both the 'ok-to-stop' and
'ok-to-upgrade' commands is modified to handle either std::set or std::vector
type containers. This is necessitated by differences in how the two
commands work. For the 'ok-to-upgrade' command logic to work optimally,
the items in the specified crush bucket, including items found in the
subtree, must be strictly ordered. The earlier std::set container
re-orders the items upon insertion by sorting them, which causes the
offline PG check to report sub-optimal results.
Therefore, the offline_pg_report struct is modified to use
std::variant<std::vector<int>, std::set<int>> as a ContainerType and handled
accordingly in dump() using std::visit(). This ensures backward compatibility
with the existing 'ok-to-stop' command while catering to the requirements of
the new 'ok-to-upgrade' command.
Problem:
In TEST_backfill_test_sametarget, the test fails to keep 1 PG in
backfill_toofull and 1 PG in active+clean because not enough time is
allowed between one backfill and the next.
In TEST_ec_backfill_multi, similar to TEST_backfill_test_sametarget above,
not enough time is allowed between one backfill and the next.
Solution:
Allow a larger time gap between one backfill request and the next using
the bash sleep <duration> command.
Kotresh HR [Sat, 7 Feb 2026 14:26:36 +0000 (19:56 +0530)]
qa: Add retry logic to remove most sleeps in mirroring tests
The mirroring tests contain a lot of sleeps, adding up to ~1hr.
This patch adds retry logic and removes most of them. This is
cleaner and saves considerable time in the mirroring test runs.
Matan Breizman [Mon, 9 Feb 2026 14:46:12 +0000 (14:46 +0000)]
common/options/crimson.yaml.in: allow seastar to select available backend
* If no backend is selected, let seastar choose the default
reactor backend (io_uring, if available).
* If a backend is selected, don't implicitly fall back to a different one.
Updated the conf option and moved it to the "Reactor options" section.
Matan Breizman [Mon, 2 Feb 2026 14:42:39 +0000 (16:42 +0200)]
CMakeLists.txt: find aio for CRIMSON or BLUESTORE
aio is used to set WITH_LIBURING; instead of finding it twice, try to
find aio if either flag is set.
Does not change existing behavior for BlueStore.
Patrick Donnelly [Wed, 11 Feb 2026 19:05:07 +0000 (14:05 -0500)]
Merge PR #67011 into main
* refs/pull/67011/head:
qa/multisite: use boto3's ClientError in place of assert_raises from tools.py.
qa/multisite: test fixes
qa/multisite: boto3 in tests.py
qa/multisite: zone files use boto3 resource api
qa/multisite: switch to boto3 in multisite test libraries
Igor Fedotov [Wed, 11 Feb 2026 16:36:20 +0000 (19:36 +0300)]
test/test_bluefs: reproduce volume selector inconsistency after
recovering WAL in envelope mode.
Reproduces: https://tracker.ceph.com/issues/74765
Exact bug reproduction (which occurs on file removal) requires
'bluefs_check_volume_selector_on_mount' to be manually set to false
in the 'wal_v2_simulate_crash' test case.
That's not the case for the default configuration, which fails volume
selector validation at an earlier stage, during BlueFS mount.
This seems a more general approach that provides better test coverage.
Signed-off-by: Igor Fedotov <igor.fedotov@croit.io>