git.apps.os.sepia.ceph.com Git - ceph-ci.git/log
7 days ago  osd: Invalidate stats during peering if we are rolling a shard forwards.
Jon Bailey [Fri, 25 Jul 2025 13:16:35 +0000 (14:16 +0100)]
osd: Invalidate stats during peering if we are rolling a shard forwards.

This change means we always recalculate stats upon rolling stats forwards. This prevents the situation where we end up with incorrect statistics because we always take the stats of the oldest shard during peering: outdated pg stats could be applied when the oldest shard is one that did not see partial writes, even though num_bytes had changed on other shards after that point.

Signed-off-by: Jon Bailey <jonathan.bailey1@ibm.com>
(cherry picked from commit b178ce476f4a5b2bb0743e36d78f3a6e23ad5506)
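
A minimal C++ stand-in sketch of the idea, assuming simplified names (PGStats, on_roll_forward and stats_invalid are inventions for illustration, not the actual Ceph peering code):

```
#include <iostream>

// Simplified stand-in for the PG stats carried around during peering.
struct PGStats {
  bool stats_invalid = false;   // when set, the stats are recalculated later
};

// Rolling a shard forward means the stats inherited from the oldest shard may
// predate partial writes applied elsewhere, so force a recalculation instead
// of trusting the carried num_bytes.
void on_roll_forward(PGStats& stats) {
  stats.stats_invalid = true;
}

int main() {
  PGStats stats;
  on_roll_forward(stats);
  std::cout << std::boolalpha << stats.stats_invalid << '\n';  // prints "true"
  return 0;
}
```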

7 days ago  osd: ECTransaction.h includes OSDMap.h
Radoslaw Zarzynski [Wed, 21 May 2025 16:33:15 +0000 (16:33 +0000)]
osd: ECTransaction.h includes OSDMap.h

Needed for crimson.

Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
(cherry picked from commit 6dd393e37f6afb9063c4bed3e573557bd0efb6bd)

7 days ago  osd: bypass messenger for local EC reads
Radoslaw Zarzynski [Mon, 21 Apr 2025 08:49:55 +0000 (08:49 +0000)]
osd: bypass messenger for local EC reads

Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
(cherry picked from commit b07d1f67625c8b621b2ebf5a7f744c588cae99d3)

7 days ago  osd: fix buildability after get_write_plan() shuffling
Radoslaw Zarzynski [Fri, 18 Jul 2025 10:35:09 +0000 (10:35 +0000)]
osd: fix buildability after get_write_plan() shuffling

Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
(cherry picked from commit 7f4cb19251345849736e83bd0c7cc15ccdcdf48b)

7 days ago  osd: just shuffle get_write_plan() from ECBackend to ECCommon
Radoslaw Zarzynski [Sun, 11 May 2025 10:40:55 +0000 (10:40 +0000)]
osd: just shuffle get_write_plan() from ECBackend to ECCommon

Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
(cherry picked from commit 9d5bf623537b8ee29e000504d752ace1c05964d7)

7 days ago  osd: prepare get_write_plan() for moving from ECBackend to ECCommon
Radoslaw Zarzynski [Sun, 11 May 2025 09:20:29 +0000 (09:20 +0000)]
osd: prepare get_write_plan() for moving from ECBackend to ECCommon

For the sake of sharing with crimson.

Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
(cherry picked from commit dc5b0910a500363b62cfda8be44b4bed634f9cd6)

7 days ago  osd: separate producing EC's WritePlan out into a dedicated method
Radoslaw Zarzynski [Sun, 11 May 2025 06:51:23 +0000 (06:51 +0000)]
osd: separate producing EC's WritePlan out into a dedicated method

For the sake of sharing with crimson in the next commits.

Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
(cherry picked from commit e06c0c6dd08fd6d2418a189532171553d63a9deb)

7 days ago  osd: fix unused variable warning in ClientReadCompleter
Radoslaw Zarzynski [Wed, 23 Apr 2025 11:42:00 +0000 (11:42 +0000)]
osd: fix unused variable warning in ClientReadCompleter

Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
(cherry picked from commit eb3a3bb3a70e6674f6e23a88dd1b2b86551efda2)

7 days ago  osd: shuffle ECCommon::RecoveryBackend from ECBackend.cc to ECCommon.cc
Radoslaw Zarzynski [Thu, 9 May 2024 21:00:05 +0000 (21:00 +0000)]
osd: shuffle ECCommon::RecoveryBackend from ECBackend.cc to ECCommon.cc

It's just code movement; there are no changes apart from that.

Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
(cherry picked from commit ef644c9d29b8adaef228a20fc96830724d1fc3f5)

7 days ago  osd: drop junky `#if 1` in recovery backend
Radoslaw Zarzynski [Thu, 9 May 2024 20:32:32 +0000 (20:32 +0000)]
osd: drop junky `#if 1` in recovery backend

Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
(cherry picked from commit d43bded4a02532c4612d53fc4418db8e4e829c3f)

7 days ago  osd: move ECCommon::RecoveryBackend from ECBackend.cc to ECCommon.cc
Radoslaw Zarzynski [Thu, 9 May 2024 19:11:14 +0000 (19:11 +0000)]
osd: move ECCommon::RecoveryBackend from ECBackend.cc to ECCommon.cc

Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
(cherry picked from commit debd035a650768ead64e0707028bb862f4767bef)

7 days ago  osd: replace get_obc() with maybe_load_obc() in EC recovery
Radoslaw Zarzynski [Thu, 9 May 2024 19:09:50 +0000 (19:09 +0000)]
osd: replace get_obc() with maybe_load_obc() in EC recovery

Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
(cherry picked from commit 266773625f997ff6a1fda82b201e023948a5c081)

7 days ago  osd: abstract sending MOSDPGPush during EC recovery
Radoslaw Zarzynski [Thu, 9 May 2024 19:07:32 +0000 (19:07 +0000)]
osd: abstract sending MOSDPGPush during EC recovery

Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
(cherry picked from commit 1d54eaff41ec8d880bcf9149e4c71114e0ffdc09)

7 days ago  osd: prepare ECCommon::RecoveryBackend for shuffling to ECCommon.cc
Radoslaw Zarzynski [Tue, 26 Mar 2024 14:28:16 +0000 (14:28 +0000)]
osd: prepare ECCommon::RecoveryBackend for shuffling to ECCommon.cc

Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
(cherry picked from commit e3ade5167d3671524eb372a028157f2a46e7a219)

7 days ago  osd: squeeze RecoveryHandle out of ECCommon::RecoveryBackend
Radoslaw Zarzynski [Tue, 26 Mar 2024 14:20:56 +0000 (14:20 +0000)]
osd: squeeze RecoveryHandle out of ECCommon::RecoveryBackend

Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
(cherry picked from commit 1e0feb73a4b91bd8b7b3ecc164d28fe005b97ed1)

7 days ago  osd: just shuffle RecoveryMessages to ECCommon.h
Radosław Zarzyński [Wed, 27 Sep 2023 12:17:06 +0000 (14:17 +0200)]
osd: just shuffle RecoveryMessages to ECCommon.h

Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
(cherry picked from commit bc28c16a9a83b0f12d3d6463eaeacbab40b0890b)

7 days ago  osd: prepare RecoveryMessages for shuffling to ECCommon.h
Radoslaw Zarzynski [Tue, 26 Mar 2024 11:59:42 +0000 (11:59 +0000)]
osd: prepare RecoveryMessages for shuffling to ECCommon.h

Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
(cherry picked from commit 0581926035113b1a9cb38f76233242d6b32a7dc6)

7 days ago  osd: ECCommon::RecoveryBackend doesn't depend on ECBackend anymore
Radoslaw Zarzynski [Mon, 25 Mar 2024 13:02:07 +0000 (13:02 +0000)]
osd: ECCommon::RecoveryBackend doesn't depend on ECBackend anymore

Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
(cherry picked from commit 6ead960b23a95211847250d90e3d2945c6254345)

7 days ago  osd: fix buildability after the RecoveryBackend shuffling
Radoslaw Zarzynski [Fri, 18 Apr 2025 08:42:18 +0000 (08:42 +0000)]
osd: fix buildability after the RecoveryBackend shuffling

Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
(cherry picked from commit c9d18cf3024e5ba681bed5dc315f70527e99b3f1)

7 days ago  osd: just shuffle RecoveryBackend from ECBackend.h to ECCommon.h
Radoslaw Zarzynski [Mon, 25 Mar 2024 11:08:23 +0000 (11:08 +0000)]
osd: just shuffle RecoveryBackend from ECBackend.h to ECCommon.h

Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
(cherry picked from commit 98480d2f75a7b99aa72562a6a6daa5f39db3d425)

8 days ago  Merge pull request #65411 from aainscow/wip-72561-tentacle
Laura Flores [Tue, 16 Sep 2025 22:09:30 +0000 (17:09 -0500)]
Merge pull request #65411 from aainscow/wip-72561-tentacle

tentacle: Optimized Erasure Coding - Fixpack 2

8 days ago  Merge pull request #64030 from VallariAg/wip-71724-tentacle
Vallari Agrawal [Tue, 16 Sep 2025 19:37:29 +0000 (01:07 +0530)]
Merge pull request #64030 from VallariAg/wip-71724-tentacle

tentacle: qa: reduce nvmeof thrasher fio to 32 devices from 200

8 days ago  Merge pull request #65429 from nbalacha/wip-72905-tentacle
Yuri Weinstein [Tue, 16 Sep 2025 19:36:53 +0000 (12:36 -0700)]
Merge pull request #65429 from nbalacha/wip-72905-tentacle

tentacle: rgw/logging: fixes data loss during rollover

Reviewed-by: Adam Emerson <aemerson@redhat.com>
Reviewed-by: Yuval Lifshitz <ylifshit@redhat.com>
8 days ago  Merge pull request #65271 from smanjara/wip-72570-tentacle
Yuri Weinstein [Tue, 16 Sep 2025 19:36:03 +0000 (12:36 -0700)]
Merge pull request #65271 from smanjara/wip-72570-tentacle

tentacle: rgw/multisite: url-encode list_bucket query param 'key-marker'

Reviewed-by: Adam Emerson <aemerson@redhat.com>
8 days ago  Merge pull request #64862 from adamemerson/wip-71066-tentacle
Yuri Weinstein [Tue, 16 Sep 2025 19:19:39 +0000 (12:19 -0700)]
Merge pull request #64862 from adamemerson/wip-71066-tentacle

tentacle: rgw/multisite: Fix lifetime issues

Reviewed-by: Casey Bodley <cbodley@redhat.com>
8 days ago  Merge pull request #65546 from stackhpc/doc-balancer-tentacle
Anthony D'Atri [Tue, 16 Sep 2025 18:36:58 +0000 (13:36 -0500)]
Merge pull request #65546 from stackhpc/doc-balancer-tentacle

tentacle: doc: Fixes a typo in balancer operations

8 days ago  doc: Fixes a typo in balancer operations
Tyler Brekke [Tue, 24 Jun 2025 19:12:33 +0000 (12:12 -0700)]
doc: Fixes a typo in balancer operations

Signed-off-by: Tyler Brekke <tbrekke@digitalocean.com>
(cherry picked from commit b038b8093d01a5e676ffa419607489a79261ef29)

9 days ago  Merge pull request #65160 from adk3798/wip-72668-tentacle
Adam King [Mon, 15 Sep 2025 15:30:56 +0000 (11:30 -0400)]
Merge pull request #65160 from adk3798/wip-72668-tentacle

tentacle: cephadm/cephadmlib: Eliminate false warnings about old sysctl conf files

Reviewed-by: Gil Bregman <gbregman@il.ibm.com>
9 days ago  Merge pull request #65068 from adk3798/tentacle-smb-remotectl
Adam King [Mon, 15 Sep 2025 15:30:26 +0000 (11:30 -0400)]
Merge pull request #65068 from adk3798/tentacle-smb-remotectl

tentacle: smb: add remote control server

Reviewed-by: John Mulligan <jmulligan@redhat.com>
9 days ago  Merge pull request #64724 from adk3798/wip-72268-tentacle
Adam King [Mon, 15 Sep 2025 15:27:59 +0000 (11:27 -0400)]
Merge pull request #64724 from adk3798/wip-72268-tentacle

tentacle: mgr/cephadm: updating maintenance health status in the serve…

Reviewed-by: Guillaume Abrioux <gabrioux@ibm.com>
Reviewed-by: Kushal Deb <Kushal.Deb@ibm.com>
9 days ago  Merge pull request #64723 from adk3798/wip-72264-tentacle
Adam King [Mon, 15 Sep 2025 15:26:56 +0000 (11:26 -0400)]
Merge pull request #64723 from adk3798/wip-72264-tentacle

tentacle: mgr/cephadm: Provide appropriate exit codes for orch operations

Reviewed-by: Kushal Deb <Kushal.Deb@ibm.com>
Reviewed-by: Guillaume Abrioux <gabrioux@ibm.com>
9 days ago  Merge pull request #64722 from adk3798/tentacle-cephadm-nvmeof-add-force-tls-flag
Adam King [Mon, 15 Sep 2025 15:26:13 +0000 (11:26 -0400)]
Merge pull request #64722 from adk3798/tentacle-cephadm-nvmeof-add-force-tls-flag

tentacle: mgr/cephadm/nvmeof: Add "force TLS" flag to NVMeOF spec file.

Reviewed-by: Gil Bregman <gbregman@il.ibm.com>
9 days ago  Merge pull request #64721 from adk3798/tentacle-cephadm-nvmeof-increase-default-max...
Adam King [Mon, 15 Sep 2025 15:25:18 +0000 (11:25 -0400)]
Merge pull request #64721 from adk3798/tentacle-cephadm-nvmeof-increase-default-max-namespaces

tentacle: mgr/cephadm/nvmeof: Increase the default limit of max_namespaces

Reviewed-by: Afreen Misbah <afreen@ibm.com>
Reviewed-by: Gil Bregman <gbregman@il.ibm.com>
9 days ago  Merge pull request #64691 from adk3798/wip-72135-tentacle
Adam King [Mon, 15 Sep 2025 15:22:31 +0000 (11:22 -0400)]
Merge pull request #64691 from adk3798/wip-72135-tentacle

tentacle: mgr/cephadm: disallow changing OSD service type to non-OSD types

Reviewed-by: Guillaume Abrioux <gabrioux@ibm.com>
9 days ago  Merge pull request #64674 from adk3798/tentacle-cephadm-undefined-variable-haproxy...
Adam King [Mon, 15 Sep 2025 15:21:33 +0000 (11:21 -0400)]
Merge pull request #64674 from adk3798/tentacle-cephadm-undefined-variable-haproxy-config

tentacle: mgr/cephadm: handle possibly undefined template variable in haproxy.cfg.j2

Reviewed-by: Guillaume Abrioux <gabrioux@ibm.com>
9 days ago  Merge pull request #64673 from adk3798/wip-72138-tentacle
Adam King [Mon, 15 Sep 2025 15:19:44 +0000 (11:19 -0400)]
Merge pull request #64673 from adk3798/wip-72138-tentacle

tentacle: mgr/rgw: don't fail realm bootstrap if system user exists already

Reviewed-by: Guillaume Abrioux <gabrioux@ibm.com>
9 days ago  Merge pull request #64672 from adk3798/tentacle-teuth-add-cephadm-file-path
Adam King [Mon, 15 Sep 2025 15:19:10 +0000 (11:19 -0400)]
Merge pull request #64672 from adk3798/tentacle-teuth-add-cephadm-file-path

tentacle: qa/tasks/cephadm: allow to select from 'cephadm' and 'cephadm.py'

Reviewed-by: Guillaume Abrioux <gabrioux@ibm.com>
9 days ago  Merge pull request #65466 from xhernandez/wip-72952-tentacle
Venky Shankar [Mon, 15 Sep 2025 11:51:38 +0000 (17:21 +0530)]
Merge pull request #65466 from xhernandez/wip-72952-tentacle

tentacle: libcephfs_proxy: fix backward compatibility issue

Reviewed-by: Anoop C S <anoopcs@cryptolab.net>
9 days ago  Merge pull request #65447 from aaSharma14/wip-71903-tentacle
afreen23 [Mon, 15 Sep 2025 06:46:12 +0000 (12:16 +0530)]
Merge pull request #65447 from aaSharma14/wip-71903-tentacle

tentacle: mgr/dashboard: Enable rgw module automatically in the primary and secondary cluster if not enabled during multi-site automation

Reviewed-by: Afreen Misbah <afreen@ibm.com>
9 days ago  Merge pull request #65454 from rhcs-dashboard/wip-72928-tentacle
afreen23 [Mon, 15 Sep 2025 06:15:06 +0000 (11:45 +0530)]
Merge pull request #65454 from rhcs-dashboard/wip-72928-tentacle

tentacle: mgr/dashboard:RGW- Storage Class ACL Mapping

Reviewed-by: Pedro Gonzalez Gomez <pegonzal@redhat.com>
12 days ago  Merge pull request #64891 from aainscow/wip-72372-tentacle
Yuri Weinstein [Fri, 12 Sep 2025 16:00:17 +0000 (09:00 -0700)]
Merge pull request #64891 from aainscow/wip-72372-tentacle

tentacle: osd: Reduce reads when rebalancing healthy Erasure Coded PGs

Reviewed-by: Bill Scales <bill_scales@uk.ibm.com>
13 days ago  Merge pull request #65477 from cbodley/wip-tentacle-rgw-reshard-release-note
Casey Bodley [Thu, 11 Sep 2025 13:51:13 +0000 (09:51 -0400)]
Merge pull request #65477 from cbodley/wip-tentacle-rgw-reshard-release-note

tentacle: doc/rgw: release note for bucket reshard optimization

Reviewed-by: Anthony D'Atri <anthony.datri@gmail.com>
13 days ago  Merge pull request #65476 from rhcs-dashboard/wip-72966-tentacle
Nizamudeen A [Thu, 11 Sep 2025 06:59:55 +0000 (12:29 +0530)]
Merge pull request #65476 from rhcs-dashboard/wip-72966-tentacle

tentacle: monitoring: add user-agent headers to the urllib

2 weeks ago  doc/rgw: release note for bucket reshard optimization
Casey Bodley [Wed, 10 Sep 2025 16:29:28 +0000 (12:29 -0400)]
doc/rgw: release note for bucket reshard optimization

(note that this applies directly to tentacle, not main)

https://github.com/ceph/ceph/pull/56597 merged last year but was not
mentioned in the release notes. This optimization is worth highlighting for
rgw.

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2 weeks ago  monitoring: add user-agent headers to the urllib
Nizamudeen A [Wed, 10 Sep 2025 13:00:36 +0000 (18:30 +0530)]
monitoring: add user-agent headers to the urllib

The documentation site suddenly started returning 403, so add User-Agent
headers to the request.

Signed-off-by: Nizamudeen A <nia@redhat.com>
(cherry picked from commit b8fe487010483681bbc8ddb8dfe18b40ebfd346b)

2 weeks ago  Merge pull request #64694 from adk3798/wip-72262-tentacle
Adam King [Wed, 10 Sep 2025 14:11:08 +0000 (10:11 -0400)]
Merge pull request #64694 from adk3798/wip-72262-tentacle

tentacle: cephadm: Bind mount /var/lib/samba with 0755

Reviewed-by: John Mulligan <jmulligan@redhat.com>
2 weeks ago  Merge pull request #65038 from guits/backport-seastore-cv
Guillaume Abrioux [Wed, 10 Sep 2025 13:00:15 +0000 (15:00 +0200)]
Merge pull request #65038 from guits/backport-seastore-cv

(tentacle): ceph-volume: add seastore OSDs support

2 weeks ago  mgr/dashboard: Enable rgw module automatically in the primary and secondary cluster...
Aashish Sharma [Wed, 23 Apr 2025 09:23:24 +0000 (14:53 +0530)]
mgr/dashboard: Enable rgw module automatically in the primary and secondary cluster if not enabled during multi-site automation

1. Enable rgw module automatically in the primary and secondary cluster if not enabled during multi-site automation
2. Improve progress bar descriptions and add sub-descriptions for steps

Fixes: https://tracker.ceph.com/issues/71033
Signed-off-by: Aashish Sharma <aasharma@redhat.com>
(cherry picked from commit d6c657b7e6bb85f6d246b389c54941b777364f49)

Conflicts:
src/pybind/mgr/dashboard/frontend/src/app/ceph/rgw/rgw-overview-dashboard/rgw-overview-dashboard.component.spec.ts

2 weeks ago  libcephfs_proxy: fix userperm pointer decoding for older protocols
Xavi Hernandez [Mon, 1 Sep 2025 12:43:26 +0000 (14:43 +0200)]
libcephfs_proxy: fix userperm pointer decoding for older protocols

The random data used to decode pointers coming from the old protocol was
taken from the client instead of using the global_random data, which is
the correct one.

Fixes: https://tracker.ceph.com/issues/72800
Signed-off-by: Xavi Hernandez <xhernandez@gmail.com>
(cherry picked from commit 674bd44001d6feb919a331d4a4586cc0d97847f8)

2 weeks ago  libcephfs_proxy: remove unnecessary protocol references in daemon
Xavi Hernandez [Mon, 1 Sep 2025 09:58:30 +0000 (11:58 +0200)]
libcephfs_proxy: remove unnecessary protocol references in daemon

With the new protocol structure definitions, it's not necessary to
explicitly access each field inside its version substructure (v0, for
example). Now all fields of the latest version are declared inside an
anonymous substructure that can be accessed without a prefix.

Fixes: https://tracker.ceph.com/issues/72800
Signed-off-by: Xavi Hernandez <xhernandez@gmail.com>
(cherry picked from commit b5df01a605d0adfa332e08665faea00bf7b0fbd0)

2 weeks ago  libcephfs_proxy: remove unnecessary protocol references in client
Xavi Hernandez [Mon, 1 Sep 2025 09:41:10 +0000 (11:41 +0200)]
libcephfs_proxy: remove unnecessary protocol references in client

With the new protocol structure definitions, it's not necessary to
explicitly access each field inside its version substructure (v0, for
example). Now all fields of the latest version are declared inside an
anonymous substructure that can be accessed without a prefix.

Fixes: https://tracker.ceph.com/issues/72800
Signed-off-by: Xavi Hernandez <xhernandez@gmail.com>
(cherry picked from commit 8cd9f8c808a09f8c2f1da5b15233a14effa41296)

2 weeks ago  libcephfs_proxy: fix protocol structures for backward compatibility
Xavi Hernandez [Mon, 1 Sep 2025 09:22:05 +0000 (11:22 +0200)]
libcephfs_proxy: fix protocol structures for backward compatibility

The structures used for transferring data between the proxy client and
the proxy daemon had been reworked in a recent change to be able to
expand the protocol. This caused an inconsistency in the size of the
data transferred when communicating with a peer using the older version.
The result was that the peer receiving the data with an unexpected size
was closing the connection, causing unexpected errors.

The discrepancy in size is the result of how compilers pad structures
combined with the change in the structure layout introduced when
extending the protocol. With these changes, the computation of the size
of each version of the structures was not done correctly.

This change makes the layout equal to the older version, so that
computing the size of the structures becomes easier and doesn't depend
on unexpected paddings.

Fixes: https://tracker.ceph.com/issues/72800
Signed-off-by: Xavi Hernandez <xhernandez@gmail.com>
(cherry picked from commit 62e917148496bce299f4cd48342765b73b9950a8)
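
A small, self-contained C++ illustration of the padding problem described above; the struct names and fields are hypothetical, not the actual libcephfs_proxy wire definitions:

```
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Old protocol reply: the compiler pads this to 16 bytes on typical 64-bit ABIs.
struct reply_v0 {
  uint64_t result;
  uint32_t flags;
};

// Extended protocol, flattened: the new 32-bit field lands where the old
// trailing padding used to be, so "where v0 ends" no longer matches
// sizeof(reply_v0).
struct reply_v1 {
  uint64_t result;
  uint32_t flags;
  uint32_t new_flags;
  uint64_t new_field;
};

int main() {
  std::printf("old peer's idea of a v0 reply:          %zu bytes\n",
              sizeof(reply_v0));                         // typically 16
  std::printf("v0 portion computed from the new layout: %zu bytes\n",
              offsetof(reply_v1, new_flags));            // typically 12
  // A peer that receives 12 bytes while expecting 16 (or vice versa) sees an
  // unexpected size and drops the connection, which is the failure mode the
  // commit describes. Keeping the old layout intact avoids depending on padding.
  return 0;
}
```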

2 weeks ago  Merge pull request #64606 from cbodley/wip-72191-tentacle
Yuri Weinstein [Tue, 9 Sep 2025 16:45:59 +0000 (09:45 -0700)]
Merge pull request #64606 from cbodley/wip-72191-tentacle

tentacle: deb/cephadm: add explicit --home for cephadm user

Reviewed-by: Adam Emerson <aemerson@redhat.com>
2 weeks ago  Merge pull request #64935 from pritha-srivastava/wip-72465-tentacle
Yuri Weinstein [Tue, 9 Sep 2025 16:12:59 +0000 (09:12 -0700)]
Merge pull request #64935 from pritha-srivastava/wip-72465-tentacle

tentacle: rgw: check all JWKS for STS

Reviewed-by: Adam Emerson <aemerson@redhat.com>
2 weeks ago  mgr/dashboard:RGW- Storage Class ACL Mapping
Dnyaneshwari [Fri, 1 Aug 2025 04:30:18 +0000 (10:00 +0530)]
mgr/dashboard:RGW- Storage Class ACL Mapping

Fixes: https://tracker.ceph.com/issues/72362
Signed-off-by: Dnyaneshwari Talwekar <dtalwekar@redhat.com>
(cherry picked from commit 38b237c8cb12cb77fd29a560c10c7e2225786955)

2 weeks ago  Merge pull request #65438 from ljflores/wip-72913-tentacle
Laura Flores [Tue, 9 Sep 2025 15:40:01 +0000 (10:40 -0500)]
Merge pull request #65438 from ljflores/wip-72913-tentacle

tentacle: doc/rados/operations: add kernel client procedure to read balancer documentation

2 weeks ago  Merge pull request #65400 from ceph/tentacle-release
Yuri Weinstein [Tue, 9 Sep 2025 15:33:08 +0000 (08:33 -0700)]
Merge pull request #65400 from ceph/tentacle-release

v20.1.0

Reviewed-by: Ilya Dryomov <idryomov@redhat.com>
Reviewed-by: David Galloway <dgallowa@redhat.com>
Reviewed-by: Yuri Weinstein <yweinste@redhat.com>
2 weeks ago  ceph-volume: add seastore OSDs support
Guillaume Abrioux [Tue, 1 Jul 2025 07:15:18 +0000 (07:15 +0000)]
ceph-volume: add seastore OSDs support

This adds the seastore OSD objectstore support to ceph-volume.

Fixes: https://tracker.ceph.com/issues/71414
Signed-off-by: Guillaume Abrioux <gabrioux@ibm.com>
(cherry picked from commit 1b83235849fd875aac1f46b714a63f2c1e8ce836)

2 weeks ago  ceph-volume: refactor LvmBlueStore.setup_device()
Guillaume Abrioux [Tue, 18 Mar 2025 14:20:51 +0000 (14:20 +0000)]
ceph-volume: refactor LvmBlueStore.setup_device()

This refactors redundant device setup calls in the LvmBlueStore class:
calling the same function twice with different arguments for WAL
and DB devices was inefficient and unnecessary.
The new implementation simplifies the logic by directly accessing
`self.args`, removing the need to pass arguments manually.

Signed-off-by: Guillaume Abrioux <gabrioux@ibm.com>
(cherry picked from commit 7626c12e5fd4800187963f3b7f8691eb2847c119)

2 weeks ago  Merge pull request #65301 from guits/wip-72782-tentacle
Guillaume Abrioux [Tue, 9 Sep 2025 11:36:04 +0000 (13:36 +0200)]
Merge pull request #65301 from guits/wip-72782-tentacle

tentacle: ceph-volume: drop udevadm subprocess calls

2 weeks ago  Merge pull request #65446 from aaSharma14/wip-72910-tentacle
Aashish Sharma [Tue, 9 Sep 2025 10:26:32 +0000 (15:56 +0530)]
Merge pull request #65446 from aaSharma14/wip-72910-tentacle

tentacle: mgr/dashboard: fix RGW Bucket Notification Dashboard units

Reviewed-by: Abhishek Desai <abhishek.desai1@ibm.com>
2 weeks ago  mgr/dashboard: fix RGW Bucket Notification Dashboard units
Aashish Sharma [Thu, 4 Sep 2025 08:10:00 +0000 (13:40 +0530)]
mgr/dashboard: fix RGW Bucket Notification Dashboard units

Fixes: https://tracker.ceph.com/issues/72868
Signed-off-by: Aashish Sharma <aasharma@redhat.com>
(cherry picked from commit 2c2f1f83ecad42a4de0d4df3f4a44c9c2b69390f)

2 weeks ago  neorados/cls/fifo/detail/fifo: include strtol.h
Matan Breizman [Mon, 11 Aug 2025 12:28:47 +0000 (12:28 +0000)]
neorados/cls/fifo/detail/fifo: include strtol.h

https://github.com/ceph/ceph/commit/a2d26647c011274b61805f8ac17c3422e9b9b63c

ftbfs:
```
/home/jenkins-build/build/workspace/ceph-pull-requests/src/neorados/cls/fifo/detail/fifo.h:630:14: error: no member named 'parse' in namespace 'ceph'; did you mean 'pause'?
  630 |     auto n = ceph::parse<decltype(m.num)>(num);
      |              ^~~~~~~~~~~
```

Signed-off-by: Matan Breizman <mbreizma@redhat.com>
(cherry picked from commit 3df1133c17b174c27c250cf7ac018199cc40b15b)
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
2 weeks ago  rgw/datalog: Manage and shutdown tasks properly
Adam C. Emerson [Mon, 30 Jun 2025 20:54:46 +0000 (16:54 -0400)]
rgw/datalog: Manage and shutdown tasks properly

This is slightly ugly but good enough for now. Make sure we can block
when shutting down background tasks.

Remove a few `driver` parameters that are unused. This lets us
simplify the IAM Policy and Lua tests and not construct stores we
never use. (Which is good since we aren't running them under a cluster.)

Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
(cherry picked from commit 3def99eec5df353a0fb34be79d9e98a08eb05985)

Conflicts:
src/rgw/driver/rados/rgw_service.cc
src/rgw/rgw_sal.h
src/rgw/rgw_sal.cc
 - `#ifdef` changes
src/test/rgw/test_rgw_iam_policy.cc
src/test/rgw/test_rgw_lua.cc
 - SAL renaming

Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
2 weeks ago  Merge pull request #65323 from rhcs-dashboard/wip-72798-tentacle
Nizamudeen A [Tue, 9 Sep 2025 02:37:45 +0000 (08:07 +0530)]
Merge pull request #65323 from rhcs-dashboard/wip-72798-tentacle

tentacle: mgr/dashboard: expose image summary API

2 weeks ago  neorados/fifo: Rewrite as proper I/O object
Adam C. Emerson [Fri, 11 Jul 2025 18:57:02 +0000 (14:57 -0400)]
neorados/fifo: Rewrite as proper I/O object

Split nominal handle object and reference-counted
implementation. While we're at it, add lazy-open functionality.

Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
(cherry picked from commit 3097297dd39432d172d69454419fa83a908075f6)
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
2 weeks ago  {neorados,osdc}: Support subsystem cancellation
Adam C. Emerson [Thu, 26 Jun 2025 17:58:57 +0000 (13:58 -0400)]
{neorados,osdc}: Support subsystem cancellation

Tag operations with a subsystem so we can cancel them all in one go.

Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
(cherry picked from commit 2526eb573b789b33b7d9ebf1169491f13e2318bb)
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
2 weeks ago  rgw/multi: Give tasks a reference to RGWDataChangesLog
Adam C. Emerson [Fri, 25 Apr 2025 21:40:05 +0000 (17:40 -0400)]
rgw/multi: Give tasks a reference to RGWDataChangesLog

Also run them in strands. Also `datalog_rados` is a `shared_ptr`,
now. Probably make it intrusive later.

Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
(cherry picked from commit 3c2b587ead6b0cb5acfd84788958dd957d020875)

Conflicts:
src/rgw/driver/rados/rgw_service.cc
src/rgw/rgw_sal.cc
 - `#ifdef`s for standalone Rados
src/rgw/driver/rados/rgw_datalog.cc
 - Periodic re-run of recovery removed in main and pending backport

Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
2 weeks ago  neorados: Hold reference to implementation across operations
Adam C. Emerson [Fri, 30 May 2025 20:54:45 +0000 (16:54 -0400)]
neorados: Hold reference to implementation across operations

Asynchrony combined with cancellations keeps leading to occasional
lifetime issues, so follow the best practices of Asio I/O objects by
having completions keep a reference live.

The original NeoRados backing implements Asio's two-phase shutdown
properly.

The RadosClient backing does not, because it shares an Objecter with
completions that do not belong to it. In practice I don't think this
will matter since librados and neorados get shut down around the same
time.

Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
(cherry picked from commit 57c9723928b4d2b2148ca0dd4d505acdc071f8eb)
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
2 weeks ago  tentacle: update formatting to match across tentacle branches
John Mulligan [Mon, 8 Sep 2025 18:13:59 +0000 (14:13 -0400)]
tentacle: update formatting to match across tentacle branches

I managed to create a bit of a mess with formatting changes after
a fix was cherry picked to `tentacle-release`. This change makes
the formatting on `tentacle-release` match that of `tentacle`.

Signed-off-by: John Mulligan <jmulligan@redhat.com>
2 weeks ago  doc/rados/operations: add kernel client procedure to read balancer documentation
Laura Flores [Fri, 5 Sep 2025 21:46:20 +0000 (16:46 -0500)]
doc/rados/operations: add kernel client procedure to read balancer documentation

As of now, the kernel client does not support `pg-upmap-primary`. I have
added some troubleshooting steps to help users who are unable to
mount images and filesystems with the kernel client while using `pg-upmap-primary`.

Once the feature is supported by the kernel client, users will be able
to perform mounts along with `pg-upmap-primary`.

Fixes: https://tracker.ceph.com/issues/72897
Signed-off-by: Laura Flores <lflores@ibm.com>
(cherry picked from commit 546d523147873c8fc6cf4208b4fa71eb2703e9c3)

2 weeks ago  Merge pull request #65306 from rhcs-dashboard/wip-72768-tentacle
afreen23 [Mon, 8 Sep 2025 18:43:46 +0000 (00:13 +0530)]
Merge pull request #65306 from rhcs-dashboard/wip-72768-tentacle

tentacle: mgr/dashboard: About panel showing other icons in background while open

Reviewed-by: Aashish Sharma <aasharma@redhat.com>
2 weeks ago  Merge pull request #65307 from aaSharma14/wip-72767-tentacle
afreen23 [Mon, 8 Sep 2025 18:35:23 +0000 (00:05 +0530)]
Merge pull request #65307 from aaSharma14/wip-72767-tentacle

tentacle: mgr/dashboard: Fix duplicate selection on multi-select in table component

Reviewed-by: Afreen Misbah <afreen@ibm.com>
2 weeks ago  Merge pull request #65381 from aaSharma14/wip-72841-tentacle
afreen23 [Mon, 8 Sep 2025 18:32:42 +0000 (00:02 +0530)]
Merge pull request #65381 from aaSharma14/wip-72841-tentacle

tentacle: mgr/dashboard: Allow the user to re-use existing realm/zg/zone and setup replication

Reviewed-by: Afreen Misbah <afreen@ibm.com>
2 weeks ago  Merge pull request #65383 from aaSharma14/wip-72867-tentacle
afreen23 [Mon, 8 Sep 2025 18:31:34 +0000 (00:01 +0530)]
Merge pull request #65383 from aaSharma14/wip-72867-tentacle

tentacle: mgr/dashboard: Adding RGW Bucket Notification Dashboard for Grafana

Reviewed-by: Afreen Misbah <afreen@ibm.com>
2 weeks ago  rgw/logging: fixes data loss during rollover
N Balachandran [Thu, 28 Aug 2025 06:22:23 +0000 (11:52 +0530)]
rgw/logging: fixes data loss during rollover

Multiple threads attempting to roll over the same log object can result
in the creation of numerous orphan tail objects, each with a single record.
This occurs when a NULL RGWObjVersionTracker is used during the creation of
a new logging object. These records are inaccessible, leading to data loss,
which is particularly critical in Journal mode.
Furthermore, valid log tail objects may be added to the Garbage Collection (GC)
list, exacerbating data loss.

Fixes: https://tracker.ceph.com/issues/72740
Signed-off-by: N Balachandran <nithya.balachandran@ibm.com>
(cherry picked from commit eea6525c031ae93f4ae846b06d55831e658faa2c)

2 weeks ago  osd: Optimized EC invalid pwlc for shards doing backfill/async
Bill Scales [Wed, 16 Jul 2025 14:55:40 +0000 (15:55 +0100)]
osd: Optimized EC invalid pwlc for shards doing backfill/async

Shards performing backfill or async recovery receive log entries
(but not transactions) for updates to missing/yet to be backfilled
objects. These log entries get applied and completed immediately
because there is nothing that can be rolled back. This causes
pwlc to advance too early and causes problems if other shards
do not complete the update and end up rolling it backwards.

This fix sets pwlc to be invalid when such a log entry is
applied and completed and it then remains invalid until the
next interval when peering runs again. Other shards will
continue to update pwlc and any complete subset of shards
in a future interval will include at least one shard that
has continued to update pwlc.

Signed-off-by: Bill Scales <bill_scales@uk.ibm.com>
(cherry picked from commit 534fc76d40a86a49bfabab247d3a703cbb575e27)

2 weeks ago  osd: Optimized EC add_log_entry should not skip partial writes
Bill Scales [Wed, 16 Jul 2025 14:05:16 +0000 (15:05 +0100)]
osd: Optimized EC add_log_entry should not skip partial writes

Undo a previous attempt at a fix that made add_log_entry skip adding partial
writes to the log if the write did not update this shard. The only case where
this code path executed is when a partial write was to an object that needs
backfilling or async recovery. For async recovery we need to keep the
log entry because it is needed to update the missing list. For backfill it
does no harm to keep the log entry.

Signed-off-by: Bill Scales <bill_scales@uk.ibm.com>
(cherry picked from commit 9f0e883b710a06e3371bc7e0681e034727447f27)

2 weeks ago  osd: Optimized EC apply_pwlc needs to be careful about advancing last_complete
Bill Scales [Wed, 16 Jul 2025 13:55:49 +0000 (14:55 +0100)]
osd: Optimized EC apply_pwlc needs to be careful about advancing last_complete

Fix bug in apply_pwlc where the primary was advancing last_complete for a
shard doing async recovery so that last_complete became equal to last_update
and it then thought that recovery had completed. It is only valid to advance
last_complete if it is equal to last_update.

Tidy up the logging in this function as consecutive calls to this function
often logged that it could advance on the 1st call and then that it could not
on the 2nd call. We only want one log message.

Signed-off-by: Bill Scales <bill_scales@uk.ibm.com>
(cherry picked from commit 0b19ed49ff76d7470bfcbd7f26ea0c7e5a2bc358)

2 weeks ago  osd: Use std::cmp_greater to avoid signedness warnings.
Alex Ainscow [Tue, 15 Jul 2025 10:50:20 +0000 (11:50 +0100)]
osd: Use std::cmp_greater to avoid signedness warnings.

Signed-off-by: Alex Ainscow <aainscow@uk.ibm.com>
(cherry picked from commit 846879e6c2ec4ab5a65040981a617f4b603c379a)
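
A small self-contained C++20 example of the pattern this addresses (the variable names are made up): a mixed signed/unsigned comparison converts the signed operand and triggers -Wsign-compare, while std::cmp_greater compares the mathematical values:

```
#include <cstdint>
#include <iostream>
#include <utility>

int main() {
  int64_t  delta      = -1;     // hypothetical signed offset
  uint64_t shard_size = 4096;   // hypothetical unsigned size

  // Writing delta > shard_size would convert -1 to a huge unsigned value
  // (so the comparison is true) and the compiler would warn about it.
  bool wraps   = static_cast<uint64_t>(delta) > shard_size;   // true, surprising
  bool correct = std::cmp_greater(delta, shard_size);         // false, no warning

  std::cout << std::boolalpha << wraps << ' ' << correct << '\n';  // "true false"
  return 0;
}
```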

2 weeks ago  osd: Always send EC messages to all shards following error.
Alex Ainscow [Mon, 14 Jul 2025 22:57:49 +0000 (23:57 +0100)]
osd: Always send EC messages to all shards following error.

Explanation of bug which is being fixed:

Log entry 204'784 is an error - "_rollback_to deleting head on smithi17019940-573 because
got ENOENT|whiteout on find_object_context" so this log entry is generated outside of EC by
PrimaryLogPG. It should be applied to all shards, however osd 13(2) was a little slow and
the update got interrupted by a new epoch so it didn't apply it. All the other shards
marked it as applied and completed (there isn't the usual interlock that EC has of making
sure all shards apply the update before any complete it).

We then processed 4 partial writes applying and completing them (they didn't update osd
13(2)), then we have a new epoch and go through peering. Peering says osd 13(2) didn't see
update 204'784 (it didn't) and therefore the error log entry and the 4 partial writes need
to be rolled back. The other shards had completed those 4 partial writes so we end up with
4 missing objects on all the shards which become unfound objects.

I think the underlying bug means that log entry 204'784 isn't really complete and may
"disappear" from the log in a subsequent peering cycle. Trying to forcefully rollback a
logged error doesn't generate a missing object or a miscompare, so the consequences of the
bug are hidden. It is however tripping up the new EC code where proc_master_log is being
much stricter about what a completed write means.

Fix:
After generating a logged error we could force the next write to EC to update metadata on
all shards even if its a partial write. This means this write won't complete unless all
shards see the logged error. This will make new EC behave the same as old EC. There is
already an interlock with EC (call_write_ordered) which is called just before generating
the log error that ensures that any in-flight writes complete before submitting the log
error. We could set a boolean flag here (at the point call_write_ordered is called is fine,
don't need to wait for the callback) to say the next write has to be to all shards. The
flag can be cleared if we generate the transactions for the next write, or we get an
on_change notification (peering will then clear up the mess)

Signed-off-by: Alex Ainscow <aainscow@uk.ibm.com>
(cherry picked from commit 4948f74331c13cd93086b057e0f25a59573e3167)
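
A minimal C++ sketch of the flag described in the fix, with hypothetical names; the real state would live in the EC backend and be driven by call_write_ordered, transaction generation and on_change:

```
#include <iostream>

// Hypothetical sketch: force the write that follows a logged error to touch
// every shard, so it cannot complete unless all shards have seen the error.
struct ECErrorInterlock {
  bool force_all_shards = false;

  // Set when the logged error is generated (at the point call_write_ordered
  // is invoked; no need to wait for its callback).
  void on_logged_error() { force_all_shards = true; }

  // Cleared once the transactions for the next write are generated...
  bool consume_for_next_write() {
    bool all = force_all_shards;
    force_all_shards = false;
    return all;   // caller sends the write to all shards if true
  }

  // ...or when a new interval arrives and peering cleans up instead.
  void on_change() { force_all_shards = false; }
};

int main() {
  ECErrorInterlock gate;
  gate.on_logged_error();
  std::cout << std::boolalpha << gate.consume_for_next_write() << '\n';  // true
  std::cout << gate.consume_for_next_write() << '\n';                    // false
  return 0;
}
```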

2 weeks ago  osd: Attribute re-reads in optimised EC
Alex Ainscow [Mon, 14 Jul 2025 15:40:22 +0000 (16:40 +0100)]
osd: Attribute re-reads in optimised EC

There were some bugs in attribute reads during recovery in optimised
EC where the attribute read failed. There were two scenarios:

1. It was not necessary to do any further reads to recover the data. This
can happen during recovery of many shards.
2. The re-read could be honoured from non-primary shards. There are
sometimes multiple copies of the shard whcih can be used, so a failed read
on one OSD can be replaced by a read from another.

Signed-off-by: Alex Ainscow <aainscow@uk.ibm.com>
(cherry picked from commit 417fb71c9b5628726d3217909ba1b6d3e7bf251a)

2 weeks ago  osd: EC optimizations keep log entries on all shards
Bill Scales [Fri, 11 Jul 2025 11:59:40 +0000 (12:59 +0100)]
osd: EC optimizations keep log entries on all shards

When a shard is backfilling it gets given log entries
for partial writes even if they do not apply to the
shard. The code was updating the missing list but
discarding the log entry. This is wrong because the
update can be rolled backwards and the log entry is
required to revert the update to the missing list.
Keeping the log entry has a small but insignificant
performance impact.

Signed-off-by: Bill Scales <bill_scales@uk.ibm.com>
(cherry picked from commit 1fa5302092cbbb37357142d01ca008cae29d4f5e)

2 weeks ago  osd: Remove some extraneous references to hinfo.
Alex Ainscow [Fri, 11 Jul 2025 09:58:32 +0000 (09:58 +0000)]
osd: Remove some extraneous references to hinfo.

Signed-off-by: Alex Ainscow <aainscow@uk.ibm.com>
(cherry picked from commit 1c2476560f1c4b2eec1c074a6eead5520b5474eb)

2 weeks ago  mon: Optimized EC pools preprocess_pgtemp incorrectly rejecting pgtemp as nop
Bill Scales [Mon, 7 Jul 2025 20:13:59 +0000 (21:13 +0100)]
mon: Optimized EC pools preprocess_pgtemp incorrectly rejecting pgtemp as nop

Optimized EC pools store pgtemp with primary shards first; this was not
being taken into account by OSDMonitor::preprocess_pgtemp which meant
that the change of pgtemp from [None,2,4] to [None,4,2] for a 2+1 pool
was being rejected as a nop because the primary-first encoded version
of [None,2,4] is [None,4,2].

Signed-off-by: Bill Scales <bill_scales@uk.ibm.com>
(cherry picked from commit 00aa1933d3457c377d9483072e663442a4ff8ffd)

2 weeks ago  osd: EC Optimizations proc_master_log bug fixes
Bill Scales [Fri, 4 Jul 2025 10:51:05 +0000 (11:51 +0100)]
osd: EC Optimizations proc_master_log bug fixes

1. proc_master_log can roll forward full-writes that have
been applied to all shards but not yet completed. Add a
new function consider_adjusting_pwlc to roll-forward
pwlc. Later partial_write can be called to process the
same writes again. This can result in pwlc being rolled
backwards. Modify partial_write so it does not undo pwlc.

2. At the end of proc_master_log we want the new
authoritative view of pwlc to persist - this may be
better or worse than the stale view of pwlc held by
other shards. consider_rollback_pwlc sometimes
updated the epoch in the toversion (second value of the
range fromversion-toversion). We now always do this.
Updating toversion.epoch causes problems because this
version sometimes gets copied to last_update and
last_complete - using the wrong epoch here messes
everything up in later peering cycles. Instead we
now update fromversion.epoch. This requires changes
to apply_pwlc and an assert in Stray::react(const MInfoRec&)

3. Calling apply_pwlc at the end of proc_master_log is
too early - updating last_update and last_complete here
breaks GetMissing. We need to do this later when activating
(change to search_missing and activate)

4. proc_master_log is calling partial_write with the
wrong previous version - this causes problems after a
split when the log is sparsely populated.

5. merging PGs is not setting up pwlc correctly which
can cause issues in future peering cycles. The
pwlc can simply be reset, we need to update the epoch
to make sure this view of pwlc persists vs stale
pwlc from other shards.

Signed-off-by: Bill Scales <bill_scales@uk.ibm.com>
(cherry picked from commit 0b8593a0112e31705acb581ac388a4ef1df31b4b)

2 weeks ago  osd: Fix issue where not all shards are receiving setattr when it's sent to an object...
Jon Bailey [Thu, 3 Jul 2025 13:24:41 +0000 (14:24 +0100)]
osd: Fix issue where not all shards are receiving setattr when it's sent to an object with the whiteout flag set.

Signed-off-by: Jon Bailey <jonathan.bailey1@ibm.com>
(cherry picked from commit 89fef784aa46e74dd05ef8f1bff16f357016dfc3)

2 weeks ago  osd: Relax assertion that all recoveries require a read.
Alex Ainscow [Tue, 1 Jul 2025 14:51:58 +0000 (15:51 +0100)]
osd: Relax assertion that all recoveries require a read.

If multiple objects are being read as part of the same recovery (this happens
when recovering some snapshots) and a read fails, then some reads from other
shards will be necessary. However, some objects may not need to be read. In this
case it is only important that at least one read message is sent, rather than
one read message per object.

Signed-off-by: Alex Ainscow <aainscow@uk.ibm.com>
(cherry picked from commit 9f9ea6ddd38ebf6ae7159855267e61858bb2b7fc)

2 weeks ago  osd: Recovery of zero length reads when we add a new OSD without an interval.
Alex Ainscow [Tue, 1 Jul 2025 14:49:20 +0000 (15:49 +0100)]
osd: Recovery of zero length reads when we add a new OSD without an interval.

Signed-off-by: Alex Ainscow <aainscow@uk.ibm.com>
(cherry picked from commit 3493d13d733454bb75616c628e25b2fa94dcb400)

2 weeks ago  osd: Relax PGLog assert when ec optimisations are enabled on a pool.
Alex Ainscow [Mon, 30 Jun 2025 13:31:21 +0000 (14:31 +0100)]
osd: Relax PGLog assert when ec optimisations are enabled on a pool.

The versions on partial shards are permitted to be behind, so we need
to relax several asserts; this is another example.

Signed-off-by: Alex Ainscow <aainscow@uk.ibm.com>
(cherry picked from commit 0c89e7ef2ab48199ee3f7296cf1cb44c9aeec667)

2 weeks ago  osd: Truncate coding shards to minimal size
Alex Ainscow [Sun, 29 Jun 2025 21:54:51 +0000 (22:54 +0100)]
osd: Truncate coding shards to minimal size

Scrub detected a bug where if an object was truncated to a size where the first
shard is smaller than the chunk size (only possible for >4k chunk sizes), then
the coding shards were being aligned to the chunk size, rather than to 4k.

This fix changes how the write plan is calculated so that the correct size is written.

Signed-off-by: Alex Ainscow <aainscow@uk.ibm.com>
(cherry picked from commit a39a309631482b0caa071d586f192cd19a7ae470)
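
A tiny C++ illustration of the alignment change, with made-up constants; the real computation happens when the optimized-EC write plan is built:

```
#include <cstdint>
#include <cstdio>

// Round v up to the next multiple of a.
constexpr uint64_t align_up(uint64_t v, uint64_t a) { return (v + a - 1) / a * a; }

int main() {
  const uint64_t chunk_size = 16 * 1024;  // a >4k chunk size, where the bug appears
  const uint64_t shard0_len = 3 * 1024;   // object truncated so the first shard < chunk

  // Before the fix the coding shards were padded out to the chunk size;
  // after the fix they only need to be aligned to 4k.
  std::printf("chunk-aligned coding shard: %llu bytes\n",
              (unsigned long long)align_up(shard0_len, chunk_size));  // 16384
  std::printf("4k-aligned coding shard:    %llu bytes\n",
              (unsigned long long)align_up(shard0_len, 4096));        // 4096
  return 0;
}
```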

2 weeks ago  osd: EC Optimizations fix peering bug causing unfound objects
Bill Scales [Fri, 27 Jun 2025 12:35:58 +0000 (13:35 +0100)]
osd: EC Optimizations fix peering bug causing unfound objects

Fix some unusual scenarios where peering was incorrectly
declaring that objects were missing on stray shards. When
proc_master_log rolls forward partial writes it needs to
update pwlc exactly the same way as if the write had been
completed. This ensures that stray shards that were not
updated because of partial writes do not cause objects
to be incorrectly marked as missing.

The fix also means some code in GetMissing which was trying
to do a similar thing for shards that were acting,
recovering or backfilling (but not stray) can be deleted.

Signed-off-by: Bill Scales <bill_scales@uk.ibm.com>
(cherry picked from commit 83a9c0a9f8e9ed4f514adb32f1ae2df1602c3f88)

2 weeks ago  osdc: Optimized EC pools routing bug
Bill Scales [Wed, 25 Jun 2025 09:26:17 +0000 (10:26 +0100)]
osdc: Optimized EC pools routing bug

Fix bug with routing to an acting set like [None,Y,X,X]p(X)
for a 3+1 optimized pool where osd X is representing more
than one shard. For an optimized EC pool we want it to
choose shard 3 because shard 2 is a non-primary. If we
just search the acting set for the first OSD that matches
X this will pick shard 2, so we have to convert the order
to primary's first, then find the matching OSD and then
convert this back to the normal ordering to get shard 3.

Signed-off-by: Bill Scales <bill_scales@uk.ibm.com>
(cherry picked from commit 3310f97859109090706b84824cac2f8a6cfe6928)

2 weeks ago  mon: Optimized EC clean_temps needs to permit primary change
Bill Scales [Mon, 23 Jun 2025 10:36:37 +0000 (11:36 +0100)]
mon: Optimized EC clean_temps needs to permit primary change

Optimized EC pools were blocking clean_temps from clearing pg_temp
when up == acting but up_primary != acting_primary because optimized
pools sometimes use pg_temp to force a change of primary shard.

However this can block merges which require the two PGs being
merged to have the same primary. Relax clean_temps to permit
pg_temp to be cleared so long as the new primary is not a
non-primary shard.

Signed-off-by: Bill Scales <bill_scales@uk.ibm.com>
(cherry picked from commit ce53276191e60375486f75d93508690f780bee21)

2 weeks ago  osd: Optimized EC pools - fix overaggressive assert in read_log_and_missing
Bill Scales [Mon, 23 Jun 2025 09:24:17 +0000 (10:24 +0100)]
osd: Optimized EC pools - fix overaggressive assert in read_log_and_missing

Non-primary shards may not be updated because of partial writes. This means
that the OI version for an object on these shards may be stale. An assert
in read_log_and_missing was checking that the OI version matched the have
version in a missing entry. The missing entry calculates the have version
using the prior_version from a log entry; this does not take into account
partial writes so can be ahead of the stale OI version.

Relax the assert for optimized pools to require have >= oi.version

Signed-off-by: Bill Scales <bill_scales@uk.ibm.com>
(cherry picked from commit 74e138a7c1f8b7e375568c6811a60f6bdad181b3)
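
A minimal C++ sketch of the relaxed invariant, using simplified stand-ins for eversion_t and the assert macro; the real check lives in read_log_and_missing:

```
#include <cassert>
#include <cstdint>

using version_t = uint64_t;   // stand-in for eversion_t

// On an optimized EC pool a non-primary shard's object_info can lag behind
// the version reconstructed from the log ("have"), because partial writes
// may have skipped that shard. Classic pools still require an exact match.
void check_missing_have(bool ec_optimized, version_t have, version_t oi_version) {
  if (ec_optimized) {
    assert(have >= oi_version);   // a stale OI is allowed to be behind
  } else {
    assert(have == oi_version);
  }
}

int main() {
  check_missing_have(true, 12, 10);   // fine: partial writes left the OI stale
  check_missing_have(false, 10, 10);  // classic pool: exact match required
  return 0;
}
```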

2 weeks ago  osd: rewind_divergent_log needs to dirty log if crt changes or ...
Bill Scales [Mon, 23 Jun 2025 09:12:10 +0000 (10:12 +0100)]
osd: rewind_divergent_log needs to dirty log if crt changes or ...
rollback_info_trimmed_to changes

PGLog::rewind_divergent_log was only causing the log to be marked
dirty and checkpointed if there were divergent entries. However
after a PG split it is possible that the log can be rewound
modifying crt and/or rollback_info_trimmed_to without creating
divergent entries because the entries being rolled back were
all split into the other PG.

Failing to checkpoint the log generates a window where if the OSD
is reset you can end up with crt (and rollback_info_trimmed_to) > head.
One consequence of this is asserts like
ceph_assert(rollback_info_trimmed_to == head); firing.

Fixes: https://tracker.ceph.com/issues/55141
Signed-off-by: Bill Scales <bill_scales@uk.ibm.com>
(cherry picked from commit d8f78adf85f8cb11deeae3683a28db92046779b5)
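
A simplified C++ sketch of the dirtying condition, with stand-in types (the real logic is in PGLog::rewind_divergent_log): mark the log dirty whenever crt or rollback_info_trimmed_to actually moved, not only when divergent entries were produced:

```
#include <iostream>

// Stand-in version type; eversion_t in the real code.
struct version_t {
  unsigned epoch = 0, v = 0;
  bool operator==(const version_t&) const = default;
};

struct RewindResult {
  version_t old_crt, new_crt;
  version_t old_rbitt, new_rbitt;   // rollback_info_trimmed_to before/after
  bool divergent_entries = false;
};

// After a PG split the rewind can move crt/rollback_info_trimmed_to even though
// every rolled-back entry went to the sibling PG, so "no divergent entries" is
// not a safe reason to skip dirtying (and checkpointing) the log.
bool must_dirty_log(const RewindResult& r) {
  return r.divergent_entries ||
         !(r.old_crt == r.new_crt) ||
         !(r.old_rbitt == r.new_rbitt);
}

int main() {
  RewindResult r;
  r.old_crt = {10, 50};
  r.new_crt = {10, 42};   // crt moved back, but no divergent entries
  std::cout << std::boolalpha << must_dirty_log(r) << '\n';  // true
  return 0;
}
```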

2 weeks ago  osd: Correct truncate logic for new EC
Alex Ainscow [Fri, 20 Jun 2025 20:47:32 +0000 (21:47 +0100)]
osd: Correct truncate logic for new EC

The clone logic in the truncate was only cloning from the truncate
to the end of the pre-truncate object. If the next shard was being
truncated to a shorter length (which is common), then this shard
has a larger clone.

The rollback, however, can only be given a single range, so it was
given a range which covers all clones.  The problem is that if shard
0 is rolled back, then some empty space from the clone was copied
to shard 0.

The fix is easy: calculate the full clone length and apply it to all shards, so it matches the rollback.

Signed-off-by: Alex Ainscow <aainscow@uk.ibm.com>
(cherry picked from commit 5d7588c051b31098c9970877ab6a784967ff94c8)

2 weeks ago  osd: Fix incorrect invalidate_crc during slice iterate
Alex Ainscow [Fri, 20 Jun 2025 10:48:59 +0000 (11:48 +0100)]
osd: Fix incorrect invalidate_crc during slice iterate

The CRCs were being invalidated at the wrong point, so the last CRC was
not being invalidated.

Signed-off-by: Alex Ainscow <aainscow@uk.ibm.com>
(cherry picked from commit 564b53c446201ed33b5345c936c3c4b5d32bdaab)

2 weeks ago  osd: Do not apply log entry if shard not written
Bill Scales [Thu, 19 Jun 2025 13:26:04 +0000 (14:26 +0100)]
osd: Do not apply log entry if shard not written

This was a failed test, where the primary concluded that all objects were present
despite one missing object on the non-primary shard.

The problem was caused because the log entries are sent to the unwritten shards if that
shard is missing in order to update the version number in the missing object. However,
the log entry should not actually be added to the log.

Further testing showed there are other scenarios where log entries are sent to
unwritten shards (for example a clone + partial_write in the same transaction),
these scenarios do not want to add the log entry either.

Signed-off-by: Bill Scales <bill_scales@uk.ibm.com>
(cherry picked from commit 24cd772f2099aa5f7dfeb7609522f770d0ae1115)
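
A heavily simplified C++ sketch of the behaviour described above, with hypothetical names: a log entry sent to an unwritten shard should update the missing-list version but must not be appended to that shard's log:

```
#include <cstdint>
#include <iostream>
#include <map>
#include <vector>

using version_t = uint64_t;                 // stand-in for eversion_t
struct LogEntry { int object_id; version_t version; };

struct ShardLog {
  std::vector<LogEntry> log;
  std::map<int, version_t> missing;         // object -> version we still need

  // Entries arrive even for writes that did not touch this shard, purely so
  // the missing list tracks the right version; only written entries are logged.
  void handle_entry(const LogEntry& e, bool shard_written) {
    if (missing.count(e.object_id))
      missing[e.object_id] = e.version;     // keep the missing version current
    if (!shard_written)
      return;                               // the fix: do not add to the log
    log.push_back(e);
  }
};

int main() {
  ShardLog s;
  s.missing[7] = 3;
  s.handle_entry({7, 5}, /*shard_written=*/false);
  std::cout << "log size: " << s.log.size()            // 0
            << ", missing version: " << s.missing[7]   // 5
            << '\n';
  return 0;
}
```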

2 weeks ago  osd: EC Optimizations proc_replica_log needs to apply pwlc
Bill Scales [Thu, 19 Jun 2025 12:41:17 +0000 (13:41 +0100)]
osd: EC Optimizations proc_replica_log needs to apply pwlc

PeeringState::proc_replica_log needs to apply pwlc before
calling PGLog so that any partial writes that have occurred
are taken into account when working out where a replica/stray
has diverged from the primary.

Signed-off-by: Bill Scales <bill_scales@uk.ibm.com>
(cherry picked from commit 6c3c0a88b68e2548df670dbe9797d54f89259398)