ceph-ci.git/log
47 hours agoMerge PR #66572 into wip-vshankar-testing-20260224.100235
Venky Shankar [Tue, 24 Feb 2026 10:03:01 +0000 (15:33 +0530)]
Merge PR #66572 into wip-vshankar-testing-20260224.100235

* refs/pull/66572/head:

47 hours agoMerge PR #66578 into wip-vshankar-testing-20260224.100235
Venky Shankar [Tue, 24 Feb 2026 10:02:55 +0000 (15:32 +0530)]
Merge PR #66578 into wip-vshankar-testing-20260224.100235

* refs/pull/66578/head:

47 hours agoMerge PR #67476 into wip-vshankar-testing-20260224.100235
Venky Shankar [Tue, 24 Feb 2026 10:02:50 +0000 (15:32 +0530)]
Merge PR #67476 into wip-vshankar-testing-20260224.100235

* refs/pull/67476/head:

Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Rishabh Dave <ridave@redhat.com>
47 hours agoMerge PR #67477 into wip-vshankar-testing-20260224.100235
Venky Shankar [Tue, 24 Feb 2026 10:02:46 +0000 (15:32 +0530)]
Merge PR #67477 into wip-vshankar-testing-20260224.100235

* refs/pull/67477/head:

47 hours agoMerge pull request #67474 from afreen23/health-card-hardware-tab
Afreen Misbah [Tue, 24 Feb 2026 09:59:06 +0000 (15:29 +0530)]
Merge pull request #67474 from afreen23/health-card-hardware-tab

mgr/dashboard: Health card hardware tab

Reviewed-by: Nizamudeen A <nia@redhat.com>
Reviewed-by: pujaoshahu <pshahu@redhat.com>
47 hours agoMerge pull request #67159 from rhcs-dashboard/subsystem-host-page
Afreen Misbah [Tue, 24 Feb 2026 09:51:42 +0000 (15:21 +0530)]
Merge pull request #67159 from rhcs-dashboard/subsystem-host-page

mgr/dashboard: NVMe – Fix host, listeners, namespace list display on Subsystem resource page

Reviewed-by: Afreen Misbah <afreen@ibm.com>
Reviewed-by: Naman Munet <nmunet@redhat.com>
2 days agoMerge pull request #67284 from knrt10/crimson-rgw-cls-get-config
Kautilya Tripathi [Tue, 24 Feb 2026 08:56:34 +0000 (14:26 +0530)]
Merge pull request #67284 from knrt10/crimson-rgw-cls-get-config

cls/rgw_gc: read config via cls_get_config

2 days agoqa: fix TypeError in delay
Jos Collin [Tue, 24 Feb 2026 02:03:13 +0000 (07:33 +0530)]
qa: fix TypeError in delay

random.randrange() raises TypeError for arguments of type 'float'.
So use random.uniform() to fix this.

Fixes: https://tracker.ceph.com/issues/75090
Signed-off-by: Jos Collin <jcollin@redhat.com>
2 days agoMerge pull request #67467 from gbregman/main
Gil Bregman [Tue, 24 Feb 2026 06:40:07 +0000 (08:40 +0200)]
Merge pull request #67467 from gbregman/main

nvmeof: Change the NVMEOF image version to 1.7

2 days agoMerge pull request #66857 from rhcs-dashboard/cephfs-mirroring-entity
Dnyaneshwari Talwekar [Tue, 24 Feb 2026 06:05:56 +0000 (11:35 +0530)]
Merge pull request #66857 from rhcs-dashboard/cephfs-mirroring-entity

mgr/dashboard: Cephfs Mirroring - Entity

Reviewed-by: Dnyaneshwari talwekar <dtalweka@redhat.com>
Reviewed-by: Naman Munet <nmunet@redhat.com>
Reviewed-by: Pedro Gonzalez Gomez <pegonzal@redhat.com>
Reviewed-by: Ankush Behl <cloudbehl@gmail.com>
2 days agomgr/dashboard: Add hardware tab to health card
Afreen Misbah [Mon, 23 Feb 2026 23:51:58 +0000 (05:21 +0530)]
mgr/dashboard: Add hardware tab to health card

Fixes https://tracker.ceph.com/issues/75120

Signed-off-by: Afreen Misbah <afreen@ibm.com>
2 days agomgr/dashboard: Added variations of alerts card sub total layout
Afreen Misbah [Mon, 23 Feb 2026 20:13:43 +0000 (01:43 +0530)]
mgr/dashboard: Added variations of alerts card sub total layout

- when the health card's tab is closed, the layout is compact
- when the health card's tab is open, the layout takes more space

Signed-off-by: Afreen Misbah <afreen@ibm.com>
2 days agomgr/dashboard: Css fixes for health card and alerts card
Afreen Misbah [Mon, 23 Feb 2026 19:33:15 +0000 (01:03 +0530)]
mgr/dashboard: Css fixes for health card and alerts card

Signed-off-by: Afreen Misbah <afreen@ibm.com>
2 days agoMerge pull request #67453 from baum/crimson-ceph-context-leak
baum [Mon, 23 Feb 2026 19:24:02 +0000 (21:24 +0200)]
Merge pull request #67453 from baum/crimson-ceph-context-leak

common: fix uninitialized nref in crimson CephContext

2 days agoMerge pull request #67379 from zdover23/wip-doc-2026-02-18-rados-config-mon-lookup-dns
Ilya Dryomov [Mon, 23 Feb 2026 19:18:02 +0000 (20:18 +0100)]
Merge pull request #67379 from zdover23/wip-doc-2026-02-18-rados-config-mon-lookup-dns

doc: update broken reference

Reviewed-by: Anthony D'Atri <anthony.datri@gmail.com>
Reviewed-by: Patrick Donnelly <pdonnell@ibm.com>
2 days agoMerge pull request #67295 from kamoltat/wip-ksirivad-fix-74524
Radoslaw Zarzynski [Mon, 23 Feb 2026 18:46:28 +0000 (19:46 +0100)]
Merge pull request #67295 from kamoltat/wip-ksirivad-fix-74524

qa/standalone: improve reliability of osd-backfill tests

Reviewed-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
2 days agoMerge pull request #66466 from smanjara/wip-fix-datasync-init
Shilpa Jagannath [Mon, 23 Feb 2026 18:24:34 +0000 (10:24 -0800)]
Merge pull request #66466 from smanjara/wip-fix-datasync-init

rgw/multisite: fix segfault during multisite startup

2 days agofix for quorum in API
Afreen Misbah [Mon, 23 Feb 2026 10:23:13 +0000 (15:53 +0530)]
fix for quorum in API

Signed-off-by: Afreen Misbah <afreen@ibm.com>
2 days agomgr/dashboard: Add systems tab to health card
Afreen Misbah [Fri, 13 Feb 2026 23:14:46 +0000 (04:44 +0530)]
mgr/dashboard: Add systems tab to health card

Fixes https://tracker.ceph.com/issues/75065

Signed-off-by: Afreen Misbah <afreen@ibm.com>
2 days agoMerge pull request #67460 from afreen23/alerts-card
Afreen Misbah [Mon, 23 Feb 2026 15:52:41 +0000 (21:22 +0530)]
Merge pull request #67460 from afreen23/alerts-card

mgr/dashboard: Add alerts card

Reviewed-by: Devika Babrekar <devika.babrekar@ibm.com>
2 days agoMerge PR #67135 into main
Patrick Donnelly [Mon, 23 Feb 2026 15:29:13 +0000 (10:29 -0500)]
Merge PR #67135 into main

* refs/pull/67135/head:
pybind: remove compile_time_env parameter from setup.py files
pybind/rados,rgw: replace Tempita errno checks with C preprocessor
pybind/cephfs: replace deprecated IF with C preprocessor macro

Reviewed-by: Ilya Dryomov <idryomov@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@ibm.com>
2 days agoMerge pull request #65947 from kchheda3/wipi-fix-lc-dm-delete
Krunal Chheda [Mon, 23 Feb 2026 15:21:21 +0000 (20:51 +0530)]
Merge pull request #65947 from kchheda3/wipi-fix-lc-dm-delete

rgw/lc: Do not delete DM if it's at the end of the pagination list.

Reviewed-by: Matt Benjamin <mbenjamin@redhat.com>
2 days agoMerge pull request #65607 from kchheda3/wip-lc-skip-bucket
Krunal Chheda [Mon, 23 Feb 2026 15:20:33 +0000 (20:50 +0530)]
Merge pull request #65607 from kchheda3/wip-lc-skip-bucket

rgw/lc: Increase the timeout value while fetching the lc shard lock and update the logic on expired session

Reviewed-by: Daniel Gryniewicz <dang@redhat.com>
Reviewed-by: Matt Benjamin <mbenjamin@redhat.com>
2 days agomgr/dashboard: NVMe – Fix host, listeners, namespace list display on Subsystem resource...
pujaoshahu [Mon, 2 Feb 2026 08:46:20 +0000 (14:16 +0530)]
mgr/dashboard: NVMe – Fix host, listeners, namespace list display on Subsystem resource page

Fixes: https://tracker.ceph.com/issues/74697
Signed-off-by: pujaoshahu <pshahu@redhat.com>
 Conflicts:
src/pybind/mgr/dashboard/frontend/src/app/ceph/block/block.module.ts

Signed-off-by: pujaoshahu <pshahu@redhat.com>
2 days agoMerge pull request #67312 from ifed01/wip-ifed-fix-vselector-in-envmode_index_file
Igor Fedotov [Mon, 23 Feb 2026 14:40:20 +0000 (17:40 +0300)]
Merge pull request #67312 from ifed01/wip-ifed-fix-vselector-in-envmode_index_file

os/bluestore: fix vselector update after enveloped WAL recovery

Reviewed-by: Adam Kupczyk <akupczyk@ibm.com>
2 days agoMerge pull request #67445 from cbodley/wip-mailmap-bluikko
Casey Bodley [Mon, 23 Feb 2026 14:05:55 +0000 (09:05 -0500)]
Merge pull request #67445 from cbodley/wip-mailmap-bluikko

mailmap: update email address for Ville Ojamo

Reviewed-by: Patrick Donnelly <pdonnell@ibm.com>
Reviewed-by: Ville Ojamo <git2233+ceph@ojamo.eu>
2 days agoMerge pull request #67332 from anthonyeleven/docfix
Anthony D'Atri [Mon, 23 Feb 2026 14:03:40 +0000 (09:03 -0500)]
Merge pull request #67332 from anthonyeleven/docfix

doc/rados/operations: Improve formatting in crush-map.rst

2 days agoMerge pull request #67432 from kshtsk/wip-test-lua-ignore-tz
kyr [Mon, 23 Feb 2026 13:23:46 +0000 (14:23 +0100)]
Merge pull request #67432 from kshtsk/wip-test-lua-ignore-tz

test/rgw/lua: ignore hours for zero mtime

2 days agoMerge pull request #66108 from sseshasa/wip-rfe-implement-ok-to-upgrade-command
Sridhar Seshasayee [Mon, 23 Feb 2026 12:51:00 +0000 (18:21 +0530)]
Merge pull request #66108 from sseshasa/wip-rfe-implement-ok-to-upgrade-command

mgr/DaemonServer: Implement ok-to-upgrade command

Reviewed-by: Kamoltat Sirivadhna <ksirivad@redhat.com>
Reviewed-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
Reviewed-by: Nitzan Mordechai <nmordech@redhat.com>
Reviewed-by: Anthony D Atri <anthony.datri@gmail.com>
2 days agoMerge pull request #66316 from kchheda3/wip-fix-parse-url-crash
Yuval Lifshitz [Mon, 23 Feb 2026 12:37:07 +0000 (14:37 +0200)]
Merge pull request #66316 from kchheda3/wip-fix-parse-url-crash

rgw/notification: Fix the crash in parse_url while initializing the regex

2 days agoMerge pull request #66065 from mertsunacoglu/wip-lua-abort
Yuval Lifshitz [Mon, 23 Feb 2026 12:35:46 +0000 (14:35 +0200)]
Merge pull request #66065 from mertsunacoglu/wip-lua-abort

rgw: Add Lua functionality for blocking requests

2 days agocls/rgw_gc/cls_rgw_gc: read config via cls_get_config
Kautilya Tripathi [Tue, 10 Feb 2026 05:31:26 +0000 (11:01 +0530)]
cls/rgw_gc/cls_rgw_gc: read config via cls_get_config

Commit https://github.com/ceph/ceph/commit/3877c1e37f2fa4e1574b57f05132288f210835a7
added a new way to let CLS gain access to the global configuration (`g_ceph_context`).

The `cls_rgw_gc_queue_init` method does not use the new CLS call `cls_get_config`,
but instead uses `g_ceph_context` directly.

The Crimson OSD implementation does **not** support `g_ceph_context`, which results in a
(SIGSEGV) crash due to null access. Switching to `cls_get_config`, as is done in `cls_rgw.cc`,
allows both OSD implementations to access the conf safely.

The above approach is well-defined due to the two orthogonal implementations of objclass.cc:
the classical OSD uses `src/osd/objclass.cc`, while the Crimson OSD uses `src/crimson/osd/objclass.cc`.

Fixes: https://tracker.ceph.com/issues/74844
Signed-off-by: Kautilya Tripathi <kautilya.tripathi@ibm.com>
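
For illustration, a minimal compilable sketch of the pattern this commit describes, reading configuration through the call context rather than a process global. The `Context`/`Config` types and the `get_config()` helper below are hypothetical stand-ins for `cls_method_context_t`, the config proxy and `cls_get_config()`; this is not the actual objclass API:

```
// Schematic sketch only -- the types below are hypothetical stand-ins,
// not Ceph's objclass API.
#include <cstdint>
#include <iostream>

struct Config {
  uint64_t rgw_gc_max_queue_size = 1024;  // example option
};

struct Context {
  Config conf;  // per-call config, valid on both OSD implementations
};

// Hypothetical stand-in for cls_get_config(hctx).
const Config& get_config(Context* hctx) { return hctx->conf; }

// Before: dereferencing a process-global (g_ceph_context) that the
// crimson OSD leaves null crashes with SIGSEGV.
// After: pull the config from the call context, as cls_rgw.cc does.
int queue_init(Context* hctx) {
  const auto& conf = get_config(hctx);
  std::cout << "max queue size: " << conf.rgw_gc_max_queue_size << "\n";
  return 0;
}

int main() {
  Context ctx;
  return queue_init(&ctx);
}
```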
2 days agonvmeof: Change the NVMEOF image version to 1.7
Gil Bregman [Mon, 23 Feb 2026 10:56:54 +0000 (12:56 +0200)]
nvmeof: Change the NVMEOF image version to 1.7

Fixes: https://tracker.ceph.com/issues/75097
Signed-off-by: Gil Bregman <gbregman@il.ibm.com>
2 days agoMerge PR #65467 into main
Venky Shankar [Mon, 23 Feb 2026 10:24:10 +0000 (15:54 +0530)]
Merge PR #65467 into main

* refs/pull/65467/head:

Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Kotresh Hiremath Ravishankar <khiremat@redhat.com>
2 days agoMerge PR #66475 into main
Venky Shankar [Mon, 23 Feb 2026 10:22:58 +0000 (15:52 +0530)]
Merge PR #66475 into main

* refs/pull/66475/head:

Reviewed-by: Matan Breizman <mbreizma@redhat.com>
2 days agoMerge pull request #67442 from imran-imtiaz/wip-dashboard-schedule-level
Imran Imtiaz [Mon, 23 Feb 2026 10:22:31 +0000 (10:22 +0000)]
Merge pull request #67442 from imran-imtiaz/wip-dashboard-schedule-level

mgr/dashboard: add schedule_level to image API for pool/cluster snapshot schedule

2 days agoqa: temporarily disable some mirroring tests (see: https://tracker.ceph.com/issues...
Venky Shankar [Mon, 23 Feb 2026 10:11:03 +0000 (15:41 +0530)]
qa: temporarily disable some mirroring tests (see: https://tracker.ceph.com/issues/74984)

To be tested with https://github.com/ceph/ceph/pull/66572

Signed-off-by: Venky Shankar <vshankar@redhat.com>
3 days agomgr/dashboard: Add alerts card
Afreen Misbah [Sun, 22 Feb 2026 10:24:41 +0000 (15:54 +0530)]
mgr/dashboard: Add alerts card

Fixes https://tracker.ceph.com/issues/75066

Signed-off-by: Afreen Misbah <afreen@ibm.com>
3 days agomgr/dashboard: Cephfs mirroring - Entity
Dnyaneshwari Talwekar [Fri, 9 Jan 2026 09:59:50 +0000 (15:29 +0530)]
mgr/dashboard: Cephfs mirroring - Entity

Fixes: https://tracker.ceph.com/issues/74366
Signed-off-by: Dnyaneshwari Talwekar <dtalweka@redhat.com>
3 days agoMerge pull request #66981 from rhcs-dashboard/namespace-list-delete
Afreen Misbah [Mon, 23 Feb 2026 07:16:23 +0000 (12:46 +0530)]
Merge pull request #66981 from rhcs-dashboard/namespace-list-delete

mgr/dashboard: Add nvmeof namespace list and delete modal

Reviewed-by: Afreen Misbah <afreen@ibm.com>
Reviewed-by: Naman Munet <nmunet@redhat.com>
3 days agoMerge pull request #67360 from rhcs-dashboard/revamp-onboarding-screen
Afreen Misbah [Mon, 23 Feb 2026 07:14:53 +0000 (12:44 +0530)]
Merge pull request #67360 from rhcs-dashboard/revamp-onboarding-screen

mgr/dashboard: revamp on-boarding screen

Reviewed-by: Afreen Misbah <afreen@ibm.com>
Reviewed-by: Pedro Gonzalez Gomez <pegonzal@redhat.com>
3 days agomgr/DaemonServer: Re-order OSDs in crush bucket to maximize OSDs for upgrade
Sridhar Seshasayee [Thu, 12 Feb 2026 20:03:25 +0000 (01:33 +0530)]
mgr/DaemonServer: Re-order OSDs in crush bucket to maximize OSDs for upgrade

DaemonServer::_maximize_ok_to_upgrade_set() attempts to find which OSDs
from the initial set found as part of _populate_crush_bucket_osds() can be
upgraded as part of the initial phase. If the initial set results in failure,
the convergence logic trims the 'to_upgrade' vector from the end until a safe
set is found.

Therefore, it is advantageous to sort the OSDs by the ascending number
of PGs they host. By placing OSDs with the smallest (or no) PG counts at the
beginning of the vector, the trim logic along with _check_offline_pgs() will
have the best chance of finding OSDs to upgrade as it approaches a grouping
of OSDs that host the fewest or no PGs.

To achieve the above, a temporary vector of struct pgs_per_osd is created and
sorted for a given crush bucket. The sorted OSDs are pushed to the main
crush_bucket_osds vector that is eventually used to run the _check_offline_pgs()
logic to find a safe set of OSDs to upgrade.

pgmap is passed to _populate_crush_bucket_osds() to utilize get_num_pg_by_osd()
for the above logic to work.

Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
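
As a rough sketch of the ordering step (the pgs_per_osd field names below are assumptions made for the example, not the DaemonServer definition):

```
// Illustrative only: order candidate OSDs by ascending PG count so
// that trimming from the end of the vector drops the busiest OSDs
// first. Field names of pgs_per_osd are assumed for the sketch.
#include <algorithm>
#include <iostream>
#include <vector>

struct pgs_per_osd {
  int osd;
  unsigned num_pgs;
};

int main() {
  std::vector<pgs_per_osd> osds = {
    {0, 42}, {1, 0}, {2, 7}, {3, 19}, {4, 3},
  };

  // Ascending by hosted PGs: empty or lightly loaded OSDs come first.
  std::sort(osds.begin(), osds.end(),
            [](const pgs_per_osd& a, const pgs_per_osd& b) {
              return a.num_pgs < b.num_pgs;
            });

  for (const auto& o : osds) {
    std::cout << "osd." << o.osd << " pgs=" << o.num_pgs << "\n";
  }
  // Trimming from the back now removes osd.0 (42 PGs) first.
}
```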
3 days agomgr/DaemonServer: Implement ok-to-upgrade command
Sridhar Seshasayee [Mon, 27 Oct 2025 16:34:54 +0000 (22:04 +0530)]
mgr/DaemonServer: Implement ok-to-upgrade command

Implement a new Mgr command called 'ok-to-upgrade' that returns a set of OSDs
within the provided CRUSH bucket that are safe to upgrade without reducing
immediate data availability.

The command accepts the following as input:
 - CRUSH bucket name (required)
   - The CRUSH bucket type is limited to 'rack', 'chassis', 'host' and 'osd'.
     This is to prevent users from specifying a bucket type higher up the tree
     which could result in performance issues if the number of OSDs in the
     bucket is very high.
 - The new Ceph version to check against. The format accepted is the short
   form of the Ceph version, e.g. 20.3.0-3803-g63ca1ffb5a2. (required)
 - The maximum number of OSDs to consider. (optional)

Implementation Details:

After sanity checks on the provided parameters, the following steps are
performed:

1. The set of OSDs within the CRUSH bucket is first determined.
2. From the main set of OSDs, a filtered set of OSDs not yet running the new
   Ceph version is created.
   - For this purpose, the OSD's 'ceph_version_short' string is read from
     the metadata via a new method called DaemonServer::get_osd_metadata().
     The information is determined from the DaemonStatePtr maintained
     within the DaemonServer.
3. If all OSDs are already running the new Ceph version, a success report is
   generated and returned.
4. If OSDs are not running the new Ceph version, a new set (to_upgrade) is
   created.
5. If the current version cannot be determined, an error is logged and the
   output report with 'bad_no_version' field populated with the OSD in question
   is generated.
6. On the new set (to_upgrade), the existing logic in _check_offline_pgs() is
   executed to see if stopping any or all OSDs in the set as part of the upgrade
   can reduce immediate data availability.
   - If data availability is impacted, then the number of OSDs in the filtered
     set is reduced by a factor defined by a new config option called
     'mgr_osd_upgrade_check_convergence_factor' which is set to 0.8 by default.
   - The logic in _check_offline_pgs() is repeated for the new set.
   - The above is repeated until a safe subset of OSDs that can be stopped for
     upgrade is found. Each iteration reduces the number of OSDs to check by
     the convergence factor mentioned above.
7. It must be noted that the default value of
   'mgr_osd_upgrade_check_convergence_factor' is on the higher side in order to
   help determine an optimal set of OSDs to upgrade. In other words, a higher
   convergence factor would help maximize the number of OSDs to upgrade. In this
   case, the number of iterations and therefore the time taken to determine the
   OSDs to upgrade is proportional to the number of OSDs in the CRUSH bucket.
   The converse is true if a lower convergence factor is used.
8. If the number of OSDs determined is lower than the 'max' specified, then an
   additional loop is executed to determine if other children of the CRUSH
   bucket can be added to the existing set.
9. Once a viable set is determined, an output report similar to the following is
   generated:

A standalone test is introduced that exercises the logic for both replicated
and erasure-coded pools by manipulating the min_size for a pool and checking
for upgradability. The tests also perform other basic sanity checks and
exercise error conditions.

The output shown below is for a cluster running on a single node with 10 OSDs
and with replicated pool configuration:

$ ceph osd ok-to-upgrade incerta06 01.00.00-gversion-test --format=json
{"ok_to_upgrade":true,"all_osds_upgraded":false,\
 "osds_in_crush_bucket":[0,1,2,3,4,5,6,7,8,9],\
 "osds_ok_to_upgrade":[0],"osds_upgraded":[],"bad_no_version":[]}

The following report is shown if all OSDs are running the desired Ceph version:

$ ceph osd ok-to-upgrade --crush_bucket  localrack \
  --ceph_version 20.3.0-3803-g63ca1ffb5a2
{"ok_to_upgrade":false,"all_osds_upgraded":true,\
 "osds_in_crush_bucket":[0,1,2,3,4,5,6,7,8,9],"osds_ok_to_upgrade":[],\
"osds_upgraded":[0,1,2,3,4,5,6,7,8,9],"bad_no_version":[]}'

Fixes: https://tracker.ceph.com/issues/73031
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
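
A toy model of the convergence loop from steps 6-7 (the safety check below is a stand-in for _check_offline_pgs(), and the factor is hard-coded to the 0.8 default of mgr_osd_upgrade_check_convergence_factor):

```
#include <cstddef>
#include <iostream>
#include <vector>

// Stand-in for _check_offline_pgs(): pretend stopping more than
// three OSDs at once would reduce data availability.
bool check_offline_pgs_ok(const std::vector<int>& osds) {
  return osds.size() <= 3;
}

int main() {
  std::vector<int> to_upgrade = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
  const double convergence_factor = 0.8;  // default of the new option

  while (!to_upgrade.empty() && !check_offline_pgs_ok(to_upgrade)) {
    // Shrink the candidate set by the convergence factor and retry.
    auto next = static_cast<std::size_t>(to_upgrade.size() * convergence_factor);
    if (next == to_upgrade.size()) --next;  // guarantee progress
    to_upgrade.resize(next);
  }

  std::cout << "safe to upgrade " << to_upgrade.size() << " OSD(s)\n";
}
```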
3 days agomgr/DaemonServer: Modify offline_pg_report to handle set or vector types
Sridhar Seshasayee [Mon, 2 Feb 2026 08:44:15 +0000 (14:14 +0530)]
mgr/DaemonServer: Modify offline_pg_report to handle set or vector types

The offline_pg_report structure to be used by both the 'ok-to-stop' and
'ok-to-upgrade' commands is modified to handle either std::set or std::vector
type containers. This is necessitated due to the differences in the way
both commands work. For the 'ok-to-upgrade' command logic to work optimally,
the items in the specified crush bucket, including items found in the subtree,
must be strictly ordered. The earlier std::set container re-orders the items
upon insertion by sorting them, which results in the offline pg check
reporting sub-optimal results.

Therefore, the offline_pg_report struct is modified to use
std::variant<std::vector<int>, std::set<int>> as a ContainerType and handled
accordingly in dump() using std::visit(). This ensures backward compatibility
with the existing 'ok-to-stop' command while catering to the requirements of
the new 'ok-to-upgrade' command.

Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
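
A minimal self-contained sketch of the std::variant/std::visit arrangement described above (simplified; the real struct carries more fields):

```
#include <iostream>
#include <set>
#include <variant>
#include <vector>

struct offline_pg_report_sketch {
  using ContainerType = std::variant<std::vector<int>, std::set<int>>;
  ContainerType osds;

  void dump() const {
    // std::visit handles either alternative with one generic lambda.
    std::visit([](const auto& c) {
      for (int osd : c) std::cout << osd << ' ';
      std::cout << '\n';
    }, osds);
  }
};

int main() {
  // ok-to-stop keeps the sorted-set behavior: prints "1 2 3".
  offline_pg_report_sketch stop{std::set<int>{3, 1, 2}};
  // ok-to-upgrade preserves insertion order: prints "3 1 2".
  offline_pg_report_sketch upgrade{std::vector<int>{3, 1, 2}};
  stop.dump();
  upgrade.dump();
}
```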
3 days agoMerge pull request #67386 from afreen23/health-checks
Afreen Misbah [Mon, 23 Feb 2026 07:09:40 +0000 (12:39 +0530)]
Merge pull request #67386 from afreen23/health-checks

mgr/dashboard: Add health check panel

Reviewed-by: Devika Babrekar <devika.babrekar@ibm.com>
3 days agoMerge pull request #67330 from gadididi/nvmeof/add_rados_ns
Gadi [Mon, 23 Feb 2026 06:54:28 +0000 (08:54 +0200)]
Merge pull request #67330 from gadididi/nvmeof/add_rados_ns

mgr/dashboard: Adding RADOS namespace option into add_ns_req

3 days agomgr/dashboard: Fix nvmeof namespace list and delete modal
pujaoshahu [Tue, 20 Jan 2026 06:14:44 +0000 (11:44 +0530)]
mgr/dashboard: Fix nvmeof namespace list and delete modal

Fixes: https://tracker.ceph.com/issues/74451
Signed-off-by: pujaoshahu <pshahu@redhat.com>
 Conflicts:
src/pybind/mgr/dashboard/frontend/src/app/shared/api/nvmeof.service.ts

Signed-off-by: pujaoshahu <pshahu@redhat.com>
3 days agoMerge pull request #66575 from Tom-Sollers/ceph-pg-repeer-test
SrinivasaBharathKanta [Mon, 23 Feb 2026 02:04:43 +0000 (07:34 +0530)]
Merge pull request #66575 from Tom-Sollers/ceph-pg-repeer-test

qa/standalone: Add a test for running repeer on simple ec and rep pools

3 days agoMerge pull request #53457 from NitzanMordhai/wip-nitzan-crush-rule-delete
SrinivasaBharathKanta [Mon, 23 Feb 2026 01:54:03 +0000 (07:24 +0530)]
Merge pull request #53457 from NitzanMordhai/wip-nitzan-crush-rule-delete

mon/OSDMonitor: remove unused crush rules after erasure code pools deleted

3 days agotools/cephfs_mirror: Fix lock order issue
Kotresh HR [Sun, 15 Feb 2026 18:41:51 +0000 (00:11 +0530)]
tools/cephfs_mirror: Fix lock order issue

Lock order 1:
InstanceWatcher::m_lock ----> FSMirror::m_lock
Lock order 2:
FSMirror::m_lock -----> InstanceWatcher::m_lock

Lock order 1 is the one that aborts, and it happens
during blocklisting. InstanceWatcher::handle_rewatch_complete()
acquires InstanceWatcher::m_lock and calls
m_elistener.set_blocklisted_ts(), which tries to acquire
FSMirror::m_lock.

Lock order 2 exists in the mirror peer status command.
FSMirror::mirror_status(Formatter *f) takes FSMirror::m_lock
and calls is_blocklisted(), which takes InstanceWatcher::m_lock.

Fix:
FSMirror::m_blocklisted_ts and FSMirror::m_failed_ts are converted
to std::atomic, and the scope of m_lock is also fixed in
InstanceWatcher::handle_rewatch_complete() and
MirrorWatcher::handle_rewatch_complete()

Look at the tracker for traceback and further details.

Fixes: https://tracker.ceph.com/issues/74953
Signed-off-by: Kotresh HR <khiremat@redhat.com>
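
In outline, the fix looks like the following sketch (member names follow the commit; everything else is simplified): making the timestamp atomic removes the need for FSMirror::m_lock in the watcher callback, so the two locks are never taken in conflicting orders.

```
#include <atomic>
#include <chrono>
#include <mutex>

struct FSMirror {
  std::mutex m_lock;  // still guards other FSMirror state
  // Previously guarded by m_lock; now lock-free, which breaks the
  // InstanceWatcher::m_lock -> FSMirror::m_lock ordering (order 1).
  std::atomic<std::chrono::system_clock::time_point> m_blocklisted_ts{};

  void set_blocklisted_ts() {
    // Called while the watcher holds its own lock; no second lock needed.
    m_blocklisted_ts.store(std::chrono::system_clock::now());
  }
};

int main() {
  FSMirror mirror;
  mirror.set_blocklisted_ts();
}
```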
3 days agodoc: Document cephfs mirroring multi-thread
Kotresh HR [Wed, 4 Feb 2026 10:07:30 +0000 (15:37 +0530)]
doc: Document cephfs mirroring multi-thread

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
3 days agodoc/PendingReleaseNotes: cephfs mirroring multi-thread
Kotresh HR [Sat, 21 Feb 2026 20:14:37 +0000 (01:44 +0530)]
doc/PendingReleaseNotes: cephfs mirroring multi-thread

Also mentions that blockdiff is no longer used
for small files and documents the new configuration
options introduced.

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
3 days agodoc: Document configs introduced with multi-threaded cephfs-mirror
Kotresh HR [Wed, 4 Feb 2026 09:17:12 +0000 (14:47 +0530)]
doc: Document configs introduced with multi-threaded cephfs-mirror

Following configs are introduced:
  - cephfs_mirror_max_datasync_threads
  - cephfs_mirror_blockdiff_min_file_size

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
3 days agotools/cephfs_mirror: Do remote fs sync once instead of fsync on each fd
Kotresh HR [Sat, 21 Feb 2026 15:55:30 +0000 (21:25 +0530)]
tools/cephfs_mirror: Do remote fs sync once instead of fsync on each fd

Do remote fs sync once just before taking the snapshot,
as it's faster than doing fsync on each fd after
file copy.

Moreover, all the datasync threads use the same single libcephfs
connection, and doing ceph_fsync concurrently on different fds on
a single libcephfs connection could cause a hang, as observed in
testing (see below). This issue is tracked at
https://tracker.ceph.com/issues/75070

-----
Thread 2 (Thread 0xffff644cc400 (LWP 74020) "d_replayer-0"):
#0  0x0000ffff8e82656c in __futex_abstimed_wait_cancelable64 () from /lib64/libc.so.6
#1  0x0000ffff8e828ff0 [PAC] in pthread_cond_wait@@GLIBC_2.17 () from /lib64/libc.so.6
#2  0x0000ffff8fc90fd4 [PAC] in ceph::condition_variable_debug::wait ...
#3  0x0000ffff9080fc9c in ceph::condition_variable_debug::wait<Client::wait_on_context_list ...
#4  Client::wait_on_context_list ... at /lsandbox/upstream/ceph/src/client/Client.cc:4540
#5  0x0000ffff9083fae8 in Client::_fsync ... at /lsandbox/upstream/ceph/src/client/Client.cc:13299
#6  0x0000ffff90840278 in Client::_fsync ...
#7  0x0000ffff90840514 in Client::fsync ... at /lsandbox/upstream/ceph/src/client/Client.cc:13042
#8  0x0000ffff907f06e0 in ceph_fsync ... at /lsandbox/upstream/ceph/src/libcephfs.cc:316
#9  0x0000aaaaad5b2f88 in cephfs::mirror::PeerReplayer::copy_to_remote ...
----

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
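
Against the public libcephfs API, the change amounts to something like this sketch (assumes a configured, reachable cluster and default config paths; error handling trimmed):

```
#include <cephfs/libcephfs.h>

int flush_before_snapshot(struct ceph_mount_info *cmount,
                          const int *fds, int nfds) {
  // Before: one ceph_fsync() per copied file; concurrent fsyncs on a
  // single libcephfs connection were observed to hang (issue 75070).
  //
  //   for (int i = 0; i < nfds; ++i)
  //     ceph_fsync(cmount, fds[i], 0);

  // After: a single filesystem-wide sync just before the snapshot.
  (void)fds; (void)nfds;
  return ceph_sync_fs(cmount);
}

int main() {
  struct ceph_mount_info *cmount = nullptr;
  if (ceph_create(&cmount, nullptr) != 0) return 1;
  ceph_conf_read_file(cmount, nullptr);  // default ceph.conf search path
  if (ceph_mount(cmount, "/") != 0) return 1;
  int r = flush_before_snapshot(cmount, nullptr, 0);
  ceph_unmount(cmount);
  ceph_release(cmount);
  return r == 0 ? 0 : 1;
}
```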
3 days agotools/cephfs_mirror: Don't use blockdiff on smaller files
Kotresh HR [Sun, 15 Feb 2026 09:37:09 +0000 (15:07 +0530)]
tools/cephfs_mirror: Don't use blockdiff on smaller files

Introduce a new configuration option,
'cephfs_mirror_blockdiff_min_file_size', to control the minimum file
size above which block-level diff is used during CephFS mirroring.

Files smaller than the configured threshold are synchronized using
full file copy, while larger files attempt block-level delta sync.
This provides better flexibility across environments with varying
file size distributions and performance constraints.

The default value is set to 16_M (16 MiB). The value is read once
at the beginning of every snapshot sync.

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
3 days agotools/cephfs_mirror: Handle shutdown/blocklist/cancel at syncm dataq wait
Kotresh HR [Sat, 21 Feb 2026 15:40:08 +0000 (21:10 +0530)]
tools/cephfs_mirror: Handle shutdown/blocklist/cancel at syncm dataq wait

1. Add is_stopping() predicate at sdq_cv wait
2. Use the existing should_backoff() routine to validate
   shutdown/blocklist/cancel errors and set corresponding errors.
3. Handle notify logic at the end
4. In shutdown(), notify all syncm's sdq_cv wait

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
3 days agotools/cephfs_mirror: Handle shutdown/blocklist at syncm_q wait
Kotresh HR [Sun, 22 Feb 2026 18:10:32 +0000 (23:40 +0530)]
tools/cephfs_mirror: Handle shutdown/blocklist at syncm_q wait

1. Convert smq_cv.wait to a timed wait, as blocklisting has no
   predicate to evaluate. Evaluate is_shutdown() as the predicate.
   When either of the two is true, set the corresponding error and
   backoff flag in all the syncm objects. The last data sync thread
   wakes up all the crawler threads. This is necessary to wake up
   the crawler threads whose data queue is not picked up by any
   datasync threads.
2. In shutdown(), change the join order: join datasync threads
   first. The idea is to stop datasync threads before crawler
   threads, as datasync threads are an extension of the crawler
   threads and the reverse order might cause issues. Also wake up
   the smq_cv wait on shutdown.

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
3 days agotools/cephfs_mirror: Monitor num of active datasync threads
Kotresh HR [Sat, 21 Feb 2026 14:06:39 +0000 (19:36 +0530)]
tools/cephfs_mirror: Monitor num of active datasync threads

Introduce an atomic counter in PeerReplayer to track the number of
active SnapshotDataSyncThread instances.

The counter is incremented when a datasync thread enters its entry()
function and decremented automatically on exit via a small RAII guard
(DataSyncThreadGuard). This ensures accurate accounting even in the
presence of early returns or future refactoring.

This change helps in handling of shutdown and blocklist scenarios.
At the time of shutdown or blocklisting, datasync threads may still
be processing multiple jobs across different SyncMechanism instances.
It is therefore essential that only the final exiting datasync thread
performs the notifications for all relevant waiters, including the
syncm data queue, syncm queue, and m_cond.

This approach ensures orderly teardown by keeping crawler threads
active until all datasync threads have completed execution.
Terminating crawler threads prematurely—before datasync threads have
exited—can lead to inconsistencies, as crawler threads deregister the
mirroring directory while datasync threads may still be accessing it.

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
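
The guard itself can be as small as the following sketch (names follow the commit; the thread body and the notification are invented for illustration):

```
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

std::atomic<int> active_datasync_threads{0};

struct DataSyncThreadGuard {
  DataSyncThreadGuard()  { active_datasync_threads.fetch_add(1); }
  ~DataSyncThreadGuard() {
    // Only the final exiting thread (count drops to zero) performs
    // the wakeups for all relevant waiters.
    if (active_datasync_threads.fetch_sub(1) == 1) {
      std::cout << "last datasync thread out: notify all waiters\n";
    }
  }
};

void datasync_entry() {
  DataSyncThreadGuard guard;  // counted in; counted out on any return path
  // ... process jobs ...
}

int main() {
  std::vector<std::thread> pool;
  for (int i = 0; i < 4; ++i) pool.emplace_back(datasync_entry);
  for (auto& t : pool) t.join();
}
```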
3 days agotools/cephfs_mirror: Store a reference of PeerReplayer object in SyncMechanism
Kotresh HR [Sat, 21 Feb 2026 14:03:39 +0000 (19:33 +0530)]
tools/cephfs_mirror: Store a reference of PeerReplayer object in SyncMechanism

Store a reference to the PeerReplayer object in SyncMechanism.
This allows the SyncMechanism object to call functions of PeerReplayer.
This is required in multiple places, like handling
shutdown/blocklist/cancel, where should_backoff() needs to be
called by the syncm object while data sync threads pop the dataq.

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
3 days agotools/cephfs_mirror: Make PeerReplayer::m_stopping atomic
Kotresh HR [Sun, 15 Feb 2026 03:09:54 +0000 (08:39 +0530)]
tools/cephfs_mirror: Make PeerReplayer::m_stopping atomic

Make PeerReplayer::m_stopping a std::atomic and make it
independent of m_lock. This allows 'm_stopping' to be used
as a predicate in any conditional wait that doesn't use
m_lock.

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
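
A minimal sketch of the resulting pattern (the queue mutex and the stop()/wait pair are illustrative, not the PeerReplayer code): an atomic flag can be read without any lock, so it works as a predicate under whichever mutex a given wait happens to use.

```
#include <atomic>
#include <condition_variable>
#include <mutex>

std::atomic<bool> m_stopping{false};
std::mutex queue_lock;  // some wait that does not use m_lock
std::condition_variable queue_cv;

void wait_for_work() {
  std::unique_lock l(queue_lock);
  // Reading the atomic needs no lock, so it is safe as a predicate here.
  queue_cv.wait(l, [] { return m_stopping.load(); });
}

void stop() {
  m_stopping.store(true);
  queue_cv.notify_all();  // wake waiters so they re-check the predicate
}

int main() {
  stop();
  wait_for_work();  // returns immediately: predicate is already true
}
```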
3 days agotools/cephfs_mirror: Fix assert while opening handles
Kotresh HR [Sat, 21 Feb 2026 13:51:02 +0000 (19:21 +0530)]
tools/cephfs_mirror: Fix assert while opening handles

Issue:
When the crawler or a datasync thread encountered an error,
it's possible that the crawler gets notified by a datasync
thread and bails out, resulting in the unregistering of the
particular dir_root. The other datasync threads might
still hold the same syncm object and try to open the
handles, at which point the following assert is hit.

ceph_assert(it != m_registered.end());

Cause:
This happens because the in_flight counter in the syncm object
only tracked whether it was processing an actual job from the data
queue.

Fix:
Make the in_flight counter in the syncm object track the active
syncm object, i.e. increment as soon as a datasync thread
gets a reference to it and decrement when it goes out of
reference.

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
3 days agotools/cephfs_mirror: Fix dequeue of syncm on error
Kotresh HR [Sat, 21 Feb 2026 10:36:31 +0000 (16:06 +0530)]
tools/cephfs_mirror: Fix dequeue of syncm on error

On an error encountered in a crawler thread or datasync
thread while processing a syncm object, it's possible
that multiple datasync threads attempt to dequeue the
syncm object. Though this is safe, add a condition to avoid
it.

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
3 days agotools/cephfs_mirror: Handle errors in crawler thread
Kotresh HR [Sat, 21 Feb 2026 10:27:42 +0000 (15:57 +0530)]
tools/cephfs_mirror: Handle errors in crawler thread

Any error encountered in crawler threads should be
communicated to the data sync threads by marking the
crawl error in the corresponding syncm object. The
data sync threads then finish pending jobs, dequeue
the syncm object and notify the crawler to bail out.

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
3 days agotools/cephfs_mirror: Handle error in datasync thread
Kotresh HR [Sat, 21 Feb 2026 10:18:56 +0000 (15:48 +0530)]
tools/cephfs_mirror: Handle error in datasync thread

On any error encountered in datasync threads while syncing
a particular syncm dataq, mark the datasync error and
communicate the error to the corresponding syncm's crawler,
which is waiting to take a snapshot. The crawler will log
the error and bail out.

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
3 days agotools/cephfs_mirror: Add debug to capture file sync time
Kotresh HR [Thu, 8 Jan 2026 09:18:01 +0000 (14:48 +0530)]
tools/cephfs_mirror: Add debug to capture file sync time

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
3 days agotools/cephfs_mirror: Efficient use of data sync threads
Kotresh HR [Sat, 21 Feb 2026 08:34:44 +0000 (14:04 +0530)]
tools/cephfs_mirror: Efficient use of data sync threads

The job queue is something like below for data sync threads.

  |syncm1|---------|syncm2|------...---|syncmn|
     |                |                   |
   |m_sync_dataq|   |m_sync_dataq|    |m_sync_dataq|

There is a global queue of SyncMechanism objects (syncm). Each syncm
object represents a single snapshot being synced, and each syncm
object owns m_sync_dataq, representing the list of files in the snapshot
to be synced.

The data sync threads should consume the next syncm job
if the present syncm has no pending work. This can evidently
happen if the last file being synced in the present syncm
job is a large file from its syncm_dataq. In this case, one
data sync thread is busy syncing the large file, and the rest of
the data sync threads just wait for it to finish to avoid a busy loop.
Instead, the idle data sync threads could start consuming the next
syncm job.

This brings in a change to the data structure:
 - syncm_q has to be std::deque instead of std::queue, as a syncm in the
   middle can finish syncing first and needs to be removed before
   the front one

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
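
Structurally, the layout in the diagram above boils down to something like this sketch (types reduced to the bare minimum): a std::deque of shared SyncMechanism jobs, each owning its own data queue, so an idle thread can move on to the next job and a finished middle element can be erased in place.

```
#include <deque>
#include <iterator>
#include <memory>
#include <string>

struct SyncMechanism {
  std::deque<std::string> m_sync_dataq;  // files of one snapshot
  bool done() const { return m_sync_dataq.empty(); }
};

int main() {
  std::deque<std::shared_ptr<SyncMechanism>> syncm_q;
  syncm_q.push_back(std::make_shared<SyncMechanism>());
  syncm_q.push_back(std::make_shared<SyncMechanism>());

  // A middle syncm may finish first; std::deque (unlike std::queue)
  // lets it be erased without disturbing the front element.
  for (auto it = syncm_q.begin(); it != syncm_q.end();) {
    it = (*it)->done() ? syncm_q.erase(it) : std::next(it);
  }
}
```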
3 days agotools/cephfs_mirror: Make max datasync threads configurable
Kotresh HR [Wed, 14 Jan 2026 12:50:26 +0000 (18:20 +0530)]
tools/cephfs_mirror: Make max datasync threads configurable

Add a config to configure the number of data sync threads.

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
3 days agoMerge pull request #66876 from tchaikov/wip-librbd-pwl-fix-leaks
Ilya Dryomov [Sun, 22 Feb 2026 15:58:12 +0000 (16:58 +0100)]
Merge pull request #66876 from tchaikov/wip-librbd-pwl-fix-leaks

librbd/pwl: fix memory leaks in discard operations

Reviewed-by: Ilya Dryomov <idryomov@gmail.com>
3 days agomgr/dashboard: Adding rados ns option into add_ns_req
gadi-didi [Thu, 12 Feb 2026 14:17:38 +0000 (16:17 +0200)]
mgr/dashboard: Adding rados ns option into add_ns_req

Add a RADOS namespace name option to the NVMe add-namespace command.

Signed-off-by: gadi-didi <gadi.didi@ibm.com>
3 days agocommon: fix uninitialized nref in crimson CephContext
Alexander Indenbaum [Sat, 21 Feb 2026 19:13:50 +0000 (21:13 +0200)]
common: fix uninitialized nref in crimson CephContext

Initialize nref(1) in the constructor so put() correctly releases
the context. LeakSanitizer reports a leak.

Signed-off-by: Alexander Indenbaum <aindenba@redhat.com>
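
Generically, the bug class looks like this sketch (an intrusive refcount reduced to its essentials; not the CephContext code):

```
#include <atomic>

struct RefCountedSketch {
  std::atomic<int> nref;

  RefCountedSketch() : nref(1) {}  // the fix: start with one reference

  void get() { nref.fetch_add(1); }
  void put() {
    if (nref.fetch_sub(1) == 1) {
      delete this;  // count reached zero: release, nothing leaks
    }
  }
};

int main() {
  auto* ctx = new RefCountedSketch();
  ctx->put();  // with nref(1) this frees ctx; an uninitialized nref leaked it
}
```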
3 days agoMerge pull request #67410 from VallariAg/wip-nvmeof-submodule-1.6.6
Vallari Agrawal [Sun, 22 Feb 2026 10:09:27 +0000 (15:39 +0530)]
Merge pull request #67410 from VallariAg/wip-nvmeof-submodule-1.6.6

mgr/dashboard: bump nvmeof submodule to 1.6.7

3 days agomgr/dashboard: Add health check panel
Afreen Misbah [Mon, 16 Feb 2026 13:57:24 +0000 (19:27 +0530)]
mgr/dashboard: Add health check panel

Fixes https://tracker.ceph.com/issues/74958

- adds health check panel in overview dashboard
- updates tests
- refactors component as per modern Angular convention
- using onPush CDS in Overview component
- using view model pattern to aggregate data for rendering

Signed-off-by: Afreen Misbah <afreen@ibm.com>
3 days agomgr/dashboard: Add health card
Afreen Misbah [Fri, 13 Feb 2026 23:14:46 +0000 (04:44 +0530)]
mgr/dashboard: Add health card

Fixes https://tracker.ceph.com/issues/74958

Signed-off-by: Afreen Misbah <afreen@ibm.com>
4 days agoMerge pull request #64500 from tchaikov/wip-os-silence-Wsign-compare
Kefu Chai [Sun, 22 Feb 2026 02:56:02 +0000 (10:56 +0800)]
Merge pull request #64500 from tchaikov/wip-os-silence-Wsign-compare

os,common:change osd_target_transaction_size to uint

Reviewed-by: Matan Breizman <mbreizma@redhat.com>
4 days agotools/cephfs_mirror: Synchronize taking snapshot
Kotresh HR [Sat, 21 Feb 2026 08:28:47 +0000 (13:58 +0530)]
tools/cephfs_mirror: Synchronize taking snapshot

The crawler/entry creation thread needs to wait until
all the data is synced by datasync threads to take
the snapshot. This patch adds the necessary conditions
for the same.

It is important for the conditional flag to be part
of SyncMechanism and not part of PeerReplayer class.
The following bug would be hit if it were part of
PeerReplayer class.

When multiple directories are configured for mirroring as below
/d0                /d1              /d2
Crawler1         Crawler2          Crawler3
DoneEntryOps     DoneEntryOps      DoneEntryOps
WaitForSafeSnap  WaitForSafeSnap   WaitForSafeSnap

When all crawler threads are waiting as above, the data sync threads
that are done processing /d1 would notify, waking up all the crawlers
and causing spurious/unwanted wakeups and half-baked snapshots.

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
4 days agotools/cephfs_mirror: Add SnapDiff entries to dataq
Kotresh HR [Wed, 14 Jan 2026 12:17:47 +0000 (17:47 +0530)]
tools/cephfs_mirror: Add SnapDiff entries to dataq

Add SnapDiff entries to dataq and process the same
in datasync threads similar to RemoteSync entries.

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
4 days agotools/cephfs_mirror: Process entries from dataq
Kotresh HR [Sat, 21 Feb 2026 08:20:56 +0000 (13:50 +0530)]
tools/cephfs_mirror: Process entries from dataq

Consume entries from syncm's data queue and sync
them to the remote.

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
4 days agotools/cephfs_mirror: Move dir_root to SyncMechanism
Kotresh HR [Sat, 21 Feb 2026 08:19:19 +0000 (13:49 +0530)]
tools/cephfs_mirror: Move dir_root to SyncMechanism

Store m_dir_root in the parent (SyncMechanism) to make
it accessible in the data sync threads to sync
files.

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
4 days agotools/cephfs_mirror: Fix data sync threads completion logic
Kotresh HR [Sat, 21 Feb 2026 08:18:15 +0000 (13:48 +0530)]
tools/cephfs_mirror: Fix data sync threads completion logic

We need to know exactly when all data sync threads complete
the processing of a syncm. If a few threads finish their
jobs, they all need to wait for the in-progress threads
of that syncm to complete. Otherwise the finished threads
would busy-loop until the in-progress threads finish.

Only after all threads finish processing can the crawler
thread be notified to take the snapshot.

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
4 days agotools/cephfs_mirror: Populate dataq for RemoteSync
Kotresh HR [Tue, 9 Dec 2025 10:49:57 +0000 (16:19 +0530)]
tools/cephfs_mirror: Populate dataq for RemoteSync

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
4 days agotools/cephfs_mirror: Move remote_mkdir to SyncMechanism
Kotresh HR [Wed, 14 Jan 2026 11:11:50 +0000 (16:41 +0530)]
tools/cephfs_mirror: Move remote_mkdir to SyncMechanism

This is required as SyncMechanism::get_entry would sync
directories during crawl.

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
4 days agotools/cephfs_mirror: Mark crawl finished
Kotresh HR [Tue, 9 Dec 2025 10:05:08 +0000 (15:35 +0530)]
tools/cephfs_mirror: Mark crawl finished

After entry operations are synced and the stack is empty,
mark the crawl as finished so the data sync threads'
wait logic works correctly and doesn't wait indefinitely.

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
4 days agotools/cephfs_mirror: Add m_sync_data queue
Kotresh HR [Wed, 14 Jan 2026 09:56:25 +0000 (15:26 +0530)]
tools/cephfs_mirror: Add m_sync_data queue

Add data sync queue for each SyncMechanism.

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
4 days agotools/cephfs_mirror: Add SyncMechanism Queue
Kotresh HR [Wed, 14 Jan 2026 08:47:07 +0000 (14:17 +0530)]
tools/cephfs_mirror: Add SyncMechanism Queue

Add a queue of shared_ptrs of type SyncMechanism.
Since it holds shared_ptrs, the queue can hold
both RemoteSync and SnapDiffSync objects.
Each SyncMechanism holds the queue of SyncEntry
items to be synced using the data sync threads.

The SyncMechanism queue needs to hold shared_ptrs because
all the data sync threads need to access the SyncMechanism
object to process the SyncEntry queue.

This patch sets up the building blocks for the same.

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
4 days agotools/cephfs_mirror: Join datasync threads on shutdown
Kotresh HR [Tue, 25 Nov 2025 10:25:05 +0000 (15:55 +0530)]
tools/cephfs_mirror: Join datasync threads on shutdown

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
4 days agotools/cephfs_mirror: Use the existing m_lock and m_cond
Kotresh HR [Wed, 14 Jan 2026 08:27:34 +0000 (13:57 +0530)]
tools/cephfs_mirror: Use the existing m_lock and m_cond

The entire snapshot is synced outside the lock.
The m_lock and m_cond pair is used by the data sync
threads along with the crawler threads so that they work
well with all terminal conditions, like shutdown, and with
the existing data structures.

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
4 days agotools/cephfs_mirror: Add a pool of datasync threads
Kotresh HR [Mon, 24 Nov 2025 14:43:04 +0000 (20:13 +0530)]
tools/cephfs_mirror: Add a pool of datasync threads

Fixes: https://tracker.ceph.com/issues/73452
Signed-off-by: Kotresh HR <khiremat@redhat.com>
4 days agotest/rgw/lua: ignore hours for zero mtime
Kyr Shatskyy [Thu, 19 Feb 2026 17:01:44 +0000 (18:01 +0100)]
test/rgw/lua: ignore hours for zero mtime

For a zero mtime, check only the date part of the timestamp:
it corresponds to 1970-01-01 for UTC and time zones ahead of
UTC, and to 1969-12-31 for time zones behind UTC.

Fixes: https://tracker.ceph.com/issues/75039
Signed-off-by: Kyr Shatskyy <kyrylo.shatskyy@clyso.com>
4 days agoMerge pull request #66735 from ajarr/wip-fix-schedule-start-time
Ilya Dryomov [Sat, 21 Feb 2026 16:12:24 +0000 (17:12 +0100)]
Merge pull request #66735 from ajarr/wip-fix-schedule-start-time

mgr/rbd_support: Fix "start-time" arg behavior

Reviewed-by: Mykola Golub <mykola.golub@clyso.com>
Reviewed-by: Ilya Dryomov <idryomov@gmail.com>
4 days agomgr/rbd_support: Fix "start-time" arg behavior
Ramana Raja [Wed, 24 Dec 2025 10:24:50 +0000 (05:24 -0500)]
mgr/rbd_support: Fix "start-time" arg behavior

The "start-time" argument, optionally passed when adding or removing an
mirror image snapshot schedule or a trash purge schedule, does not
behave as intended. It is meant to schedule an initial operation at a
specific time of day in a given time zone. Instead, it offsets the
schedule’s anchor time. By default, the scheduler uses the UNIX epoch as
the anchor to calculate recurring schedule times, and "start-time"
simply shifts this anchor away from UTC, which can confuse users. For
example:

```
$ # current time
$ date --universal
Wed Dec 10 05:55:21 PM UTC 2025
$ rbd mirror snapshot schedule add -p data --image img1 1h 19:00Z
$ rbd mirror snapshot schedule ls -p data --image img1
every 15m starting at 19:00:00+00:00
```

A user might assume that the scheduler will run the first snapshot each
day at 19:00 UTC and then run snapshots every 15 minutes. Instead, the
scheduler runs the first snapshot at 18:00 UTC and then continues at the
configured interval:

```
$ rbd mirror snapshot schedule status -p data --image img1
SCHEDULE TIME        IMAGE
2025-12-10 18:00:00  data/img1
```

Additionally, the "start-time" argument accepts a full ISO 8601
timestamp but silently ignores everything except hour, minute, and time
zone. Even time zone handling is incorrect: specifying "23:00-01:00"
with an interval of "1d" results in a snapshot taken once per day at
22:00 UTC rather than 00:00 UTC, because only utcoffset.seconds is used
while utcoffset.days is ignored.

Fix:
Similar to the handling of the "start" argument in the FS snap-schedule
manager module, require "start-time" to use an ISO 8601 date-time format
with a mandatory date component. Time and time zone are optional and
default to 00:00 and UTC respectively.

The "start-time" now defines the anchor time used to compute recurring
schedule times. The default anchor remains the UNIX epoch. Existing
on-disk schedules with legacy-format "start-time" values are updated to
include the date Jan 1, 1970.

The `snap schedule ls` output now displays "start-time" with date and
time in the format "%Y-%m-%d %H:%M:00". The display time is in UTC.

Fixes: https://tracker.ceph.com/issues/74192
Signed-off-by: Ramana Raja <rraja@redhat.com>
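
The anchor arithmetic itself is simple; an illustrative sketch (not the rbd_support module code) of how recurring times derive from an anchor:

```
#include <chrono>
#include <iostream>

using namespace std::chrono;

// Recurring times are anchor + k * interval; moving the anchor shifts
// every occurrence, which is why "start-time" now defines the anchor.
system_clock::time_point next_run(system_clock::time_point anchor,
                                  seconds interval,
                                  system_clock::time_point now) {
  auto elapsed = duration_cast<seconds>(now - anchor);
  auto k = (elapsed.count() + interval.count() - 1) / interval.count();
  return anchor + seconds(k * interval.count());
}

int main() {
  auto anchor = system_clock::time_point{};  // default: the UNIX epoch
  auto now = system_clock::now();
  auto t = next_run(anchor, hours(1), now);
  std::cout << duration_cast<seconds>(t - now).count()
            << "s until the next 1h tick from the epoch anchor\n";
}
```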
4 days agocommon/options: change osd_target_transaction_size from int to uint
Kefu Chai [Tue, 15 Jul 2025 06:40:09 +0000 (14:40 +0800)]
common/options: change osd_target_transaction_size from int to uint

Change osd_target_transaction_size from signed int to unsigned int to
match the return type of Transaction::get_num_opts() (ceph_le64).

This change:
- Eliminates compiler warnings when comparing signed/unsigned values
- Enables automatic size conversion (e.g., "4_K" → 4096) via y2c.py
  for improved administrator usability
- Maintains type consistency throughout the codebase

Signed-off-by: Kefu Chai <tchaikov@gmail.com>
5 days agolibrbd/pwl: fix memory leaks in discard operations
Kefu Chai [Tue, 17 Feb 2026 11:41:32 +0000 (19:41 +0800)]
librbd/pwl: fix memory leaks in discard operations

Fix memory leak in librbd persistent write log (PWL) cache discard
operations by properly completing request objects.

ASan reported the following leaks in unittest_librbd:

  Direct leak of 240 byte(s) in 1 object(s) allocated from:
    #0 operator new(unsigned long)
    #1 librbd::cache::pwl::AbstractWriteLog<librbd::MockImageCtx>::discard(...)
       /ceph/src/librbd/cache/pwl/AbstractWriteLog.cc:935:5
    #2 TestMockCacheReplicatedWriteLog_discard_Test::TestBody()
       /ceph/src/test/librbd/cache/pwl/test_mock_ReplicatedWriteLog.cc:534:7

  Plus multiple indirect leaks totaling 2,076 bytes through the
  shared_ptr reference chain.

Root cause:

C_DiscardRequest objects were never deleted because their complete()
method was never called. The on_write_persist callback released the
BlockGuard cell but didn't call complete() to trigger self-deletion.

Write requests use WriteLogOperationSet which takes the request as
its on_finish callback, ensuring complete() is eventually called.
Discard requests don't use WriteLogOperationSet and must explicitly
call complete() in their on_write_persist callback.

Solution:

Call discard_req->complete(r) in the on_write_persist callback and
move cell release into finish_req() -- mirroring how C_WriteRequest
handles it. The complete() -> finish() -> finish_req() chain ensures
the cell is released after the user request is completed, preserving
the same ordering as write requests.

Test results:
- Before: 2,316 bytes leaked in 15 allocations
- After: 0 bytes leaked
- unittest_librbd discard tests pass successfully with ASan

Fixes: https://tracker.ceph.com/issues/74972
Signed-off-by: Kefu Chai <k.chai@proxmox.com>
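
The completion contract being restored can be reduced to a sketch like the following (names are illustrative, not the PWL classes):

```
#include <iostream>

struct RequestSketch {
  void complete(int r) {
    finish(r);
    delete this;  // without this, the request leaks, as ASan showed
  }

  void finish(int r) {
    std::cout << "user request completed, r=" << r << "\n";
    finish_req(r);
  }

  void finish_req(int /*r*/) {
    // Cell released only after the user request has completed,
    // matching the ordering of write requests.
    release_cell();
  }

  void release_cell() { std::cout << "block guard cell released\n"; }
};

int main() {
  auto* req = new RequestSketch();
  // The on_write_persist callback must end in complete(), not just a
  // bare cell release -- that omission was the leak.
  req->complete(0);
}
```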
5 days agoos/Transaction: change get_num_ops() return type to uint64_t
Kefu Chai [Mon, 14 Jul 2025 10:50:48 +0000 (18:50 +0800)]
os/Transaction: change get_num_ops() return type to uint64_t

Change Transaction::get_num_ops() to return uint64_t instead of int
to match the underlying data.ops type (ceph_le<__u64>) and eliminate
compiler warnings about signed/unsigned comparison.

Fixes warning in ECTransaction.cc:

```
/home/kefu/dev/ceph/src/osd/ECTransaction.cc: In constructor ‘ECTransaction::Generate::Generate(PGTransaction&, ceph::ErasureCodeInterfaceRef&, pg_t&, const ECUtil::stripe_info_t&, const std::map<hobject_t, ECUtil::shard_extent_map_t>&, std::map<hobject_t, ECUtil::shard_extent_map_t>*, shard_id_map<ceph::os::Transaction>&, const OSDMapRef&, const hobject_t&, PGTransaction::ObjectOperation&, ECTransaction::WritePlanObj&, DoutPrefixProvider*, pg_log_entry_t*)’:
/home/kefu/dev/ceph/src/osd/ECTransaction.cc:589:25: warning: comparison of integer expressions of different signedness: ‘int’ and ‘__gnu_cxx::__alloc_traits<std::allocator<unsigned int>, unsigned int>::value_type’ {aka ‘unsigned int’} [-Wsign-compare]
  589 |     if (t.get_num_ops() > old_transaction_counts[int(shard)] &&
```

Signed-off-by: Kefu Chai <tchaikov@gmail.com>
5 days agoMerge pull request #66557 from phlogistonjohn/jjm-smb-exo-cluster
John Mulligan [Fri, 20 Feb 2026 19:53:21 +0000 (14:53 -0500)]
Merge pull request #66557 from phlogistonjohn/jjm-smb-exo-cluster

smb: allow smb clusters to use cephfs from a different ceph cluster

Reviewed-by: Avan Thakkar <athakkar@redhat.com>
Reviewed-by: Xavi Hernandez <xhernandez@gmail.com>
Reviewed-by: Adam King <adking@redhat.com>
5 days agoMerge pull request #67256 from afreen23/storage-card
Afreen Misbah [Fri, 20 Feb 2026 16:32:36 +0000 (22:02 +0530)]
Merge pull request #67256 from afreen23/storage-card

mgr/dashboard: Add storage card to overview page

Reviewed-by: Aashish Sharma <aasharma@redhat.com>
Reviewed-by: Ernesto Puerta <epuertat@redhat.com>
5 days agoMerge pull request #67423 from cbodley/wip-74573
Adam Emerson [Fri, 20 Feb 2026 16:20:39 +0000 (11:20 -0500)]
Merge pull request #67423 from cbodley/wip-74573

qa/rgw: bucket notifications use pynose

Reviewed-by: Adam C. Emerson <aemerson@redhat.com>
5 days agomgr/dashboard: add schedule_level to image API for pool/cluster snapshot schedule
Imran Imtiaz [Fri, 20 Feb 2026 10:57:15 +0000 (10:57 +0000)]
mgr/dashboard: add schedule_level to image API for pool/cluster snapshot schedule

Add optional schedule_level param (image|pool|cluster) to
PUT /api/block/image/{image_spec}. Removes more-specific schedules
before setting at the chosen level. Backward compatible when omitted.

Fixes: https://tracker.ceph.com/issues/75043
Assisted-by: Cursor AI
Signed-off-by: Imran Imtiaz <imran.imtiaz@uk.ibm.com>
5 days agoMerge pull request #67397 from cbodley/wip-74047
Casey Bodley [Fri, 20 Feb 2026 14:12:03 +0000 (09:12 -0500)]
Merge pull request #67397 from cbodley/wip-74047

doc/radosgw: document account-root for PUT and POST /admin/user

Reviewed-by: Ville Ojamo <git2233+ceph@ojamo.eu>
5 days agomailmap: update email address for Ville Ojamo
Casey Bodley [Fri, 20 Feb 2026 14:04:58 +0000 (09:04 -0500)]
mailmap: update email address for Ville Ojamo

add correct email address to .mailmap as the preferred address, and move
reference to github id `bluikko` to .githubmap

Signed-off-by: Casey Bodley <cbodley@redhat.com>
5 days agoMerge pull request #67368 from idryomov/wip-write-log-operation-set-cell
Ilya Dryomov [Fri, 20 Feb 2026 11:26:00 +0000 (12:26 +0100)]
Merge pull request #67368 from idryomov/wip-write-log-operation-set-cell

librbd/cache/pwl: WriteLogOperationSet::cell can be garbage

Reviewed-by: Miki Patel <miki.patel132@gmail.com>