git-server-git.apps.pok.os.sepia.ceph.com Git - ceph-ci.git/log
Abhishek Desai [Tue, 9 Sep 2025 18:53:05 +0000 (00:23 +0530)]
mgr/dashboard: Hide suppressed alert on landing page
Fixes: https://tracker.ceph.com/issues/72944
Resolves: rhbz#2319055
Signed-off-by: Abhishek Desai <abhishek.desai1@ibm.com>
(cherry picked from commit 280d8f66bf811bf6ca05da4703c4fdadcd89504a)
(cherry picked from commit caed5ffca7207cd7887c1d749d19ae1c7bddc2a5)
Abhishek Desai [Thu, 7 Aug 2025 07:50:38 +0000 (13:20 +0530)]
mgr/dashboard: Fixed mirrored image usage info bar
Fixes: https://tracker.ceph.com/issues/72431
Resolves: rhbz#2383217
Signed-off-by: Abhishek Desai <abhishek.desai1@ibm.com>
(cherry picked from commit 3a192b7c38e3f1669f3deee31702ba802d7411fd)
(cherry picked from commit d35b384d030114165b1d6fe261f78326c5e945b0)
Abhishek Desai [Fri, 29 Aug 2025 14:29:09 +0000 (19:59 +0530)]
mgr/dashboard: Group similar alerts
Fixes: https://tracker.ceph.com/issues/72788
Resolves: rhbz#2400414
Signed-off-by: Abhishek Desai <abhishek.desai1@ibm.com>
(cherry picked from commit cdd74a35103ecea7f8031aed494868fbd618d45b)
Conflicts:
    src/pybind/mgr/dashboard/frontend/src/app/ceph/dashboard-v3/dashboard/dashboard-v3.component.scss
Accept the incoming changes
Conflicts:
    src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/prometheus/active-alert-list/active-alert-list.component.html
    src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/prometheus/active-alert-list/active-alert-list.component.ts
(cherry picked from commit 4701b11e1bc65591dcdbf60b5ab5d3be9b99dc1a)
Aashish Sharma [Wed, 2 Jul 2025 11:05:14 +0000 (16:35 +0530)]
monitoring: fix MTU Mismatch alert rule and expr
Fixes: https://tracker.ceph.com/issues/73290
Resolves: rhbz#2374279
Signed-off-by: Aashish Sharma <aasharma@redhat.com>
(cherry picked from commit bee24dec441b9e6b263e4498c2ab333b0a60a52d)
Conflicts:
    src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/prometheus/active-alert-list/active-alert-list.component.html
    src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/prometheus/active-alert-list/active-alert-list.component.ts
(cherry picked from commit 2e75347958d499465a586581f4bf14cd66ef12d5)
Naman Munet [Wed, 24 Sep 2025 07:23:40 +0000 (12:53 +0530)]
mgr/dashboard: Blank entry for Storage Capacity in dashboard under Cluster > Expand Cluster > Review
Fixes: https://tracker.ceph.com/issues/73220
Resolves: rhbz#2314624
Signed-off-by: Naman Munet <naman.munet@ibm.com>
(cherry picked from commit a01909e7588c7ff757079475e3ea6f1dc3054db7)
(cherry picked from commit 5e5b0fd9d169eb0e7c108c8ad23668f576e158ac)
Dhairya Parmar [Wed, 3 Sep 2025 11:40:34 +0000 (17:10 +0530)]
test/libcephfs: add test cases for Client::mksnap and Client::rmsnap
They exist as libcephfs APIs but aren't being called by the FUSE/pybind
clients; make sure they work as intended.
Fixes: https://tracker.ceph.com/issues/71740
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
(cherry picked from commit 91319f4e7964403600e9656b00cb5007d110acda)
Resolves: https://jsw.ibm.com/browse/ISCE-1465
Dhairya Parmar [Wed, 27 Aug 2025 10:58:31 +0000 (16:28 +0530)]
mgr/volumes: disable client_respect_subvolume_snapshot_visibility in CephfsConnectionPool::connect
This prevents any type of config change (via mons, args, env vars, or
the tell command) on the ceph-mgr from breaking the volume plugin.
Conflicts:
    fscrypt changes exist downstream (01a4d2a0356e5f66b7260dad7de70a5fa9cc3aa7) but not upstream, hence the conflict; kept both changes.
Fixes: https://tracker.ceph.com/issues/71740
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
(cherry picked from commit f8abcf3c3a0141a5c518382556dee42339879396)
Resolves: https://jsw.ibm.com/browse/ISCE-1465
Dhairya Parmar [Sun, 3 Aug 2025 23:56:05 +0000 (05:26 +0530)]
doc/cephfs/fs-volumes: add documentation for controlling snapshot visibility
for subvolume based paths
Fixes: https://tracker.ceph.com/issues/71740
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
(cherry picked from commit 0d50f9205a82b401548e2046a4290d88c75f69b6)
Resolves: https://jsw.ibm.com/browse/ISCE-1465
Dhairya Parmar [Sun, 3 Aug 2025 21:52:11 +0000 (03:22 +0530)]
qa/tasks/cephfs: add test cases for subvolume config snapshot_visibility
Conflicts:
    Functions added upstream in cc09354a49a396d85459f965de0f1cc4d3d16773 aren't backported downstream, so it led to a conflict; removed these functions.
Fixes: https://tracker.ceph.com/issues/71740
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
(cherry picked from commit 3611bb2a54e0cdd56b1f5b3d923d2b059303085c)
Resolves: https://jsw.ibm.com/browse/ISCE-1465
Dhairya Parmar [Fri, 25 Jul 2025 14:45:06 +0000 (20:15 +0530)]
pybind/mgr/volumes: add getter and setter APIs for snapdir_visibility
Conflicts:
    fscrypt changes exist downstream (01a4d2a0356e5f66b7260dad7de70a5fa9cc3aa7) but not upstream, so it led to a conflict; kept both changes in the branch.
Fixes: https://tracker.ceph.com/issues/71740
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
(cherry picked from commit 7a33e22c7d572ec8f48cd6973aa9da4737b6582b)
Resolves: https://jsw.ibm.com/browse/ISCE-1465
Dhairya Parmar [Tue, 8 Jul 2025 21:25:36 +0000 (02:55 +0530)]
client: check client config and snaprealm flag before snapdir lookup
This commit adds a new client config, client_respect_subvolume_snapshot_visibility,
which acts as a knob for per-client control over snapshot visibility;
it is checked along with the snaprealm flag while looking up a
subvolume inode.
Conflicts:
    The conflict arose because upstream 7ab995b715968a4d03cf91aa7c6f44e25757a45e is not part of the ceph-9.0-rhel-patches branch; relevant code removed.
Fixes: https://tracker.ceph.com/issues/71740
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
(cherry picked from commit 1d20ac737af627667e31a09e2291d6c0e0b40ede)
Resolves: https://jsw.ibm.com/browse/ISCE-1465
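The lookup gate this knob describes can be sketched in a few lines; a hedged Python sketch, where the flag bit and field names are illustrative stand-ins, not the actual C++ client code:

```python
def should_hide_snapdir(client_config: dict, snaprealm_flags: int) -> bool:
    """Hide the snapdir for a subvolume inode only when the per-client
    knob is set AND the realm's visibility flag says snaps are hidden."""
    SNAPDIR_VISIBLE = 0x1  # illustrative flag bit, not the real sr_t value
    respects_visibility = client_config.get(
        "client_respect_subvolume_snapshot_visibility", False)
    snaps_visible = bool(snaprealm_flags & SNAPDIR_VISIBLE)
    return respects_visibility and not snaps_visible
```

Both conditions have to agree before the lookup is suppressed, which is what makes the control per-client rather than cluster-wide.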
Dhairya Parmar [Wed, 6 Aug 2025 21:32:05 +0000 (03:02 +0530)]
common,mds: transmit SNAPDIR_VISIBILITY flag via SnapRealmInfoNew
at the time of building snap trace
Conflicts:
    Upstream ed6b71246137f9793f2d56b4d050b271a3da29fd changed generate_test_instances(), which is not present downstream in ceph-9.0-rhel-patches, so adjusted accordingly.
Fixes: https://tracker.ceph.com/issues/71740
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
(cherry picked from commit 9540f543011a9aa08886a14159323cc68722faa4)
Resolves: https://jsw.ibm.com/browse/ISCE-1465
Dhairya Parmar [Wed, 23 Jul 2025 13:12:47 +0000 (18:42 +0530)]
mds: rebuild snaprealm cache if last_modified or change_attr changed
For the server-side snapdir visibility changes to be transported to the
client, the SnapRealm cache needs to be rebuilt; otherwise the same
metadata would be sent via send_snap_update() in
C_MDS_inode_update_finish() while setting the
`ceph.dir.subvolume.snaps.visible` vxattr.
The condition used to check `seq` and `last_destroyed` against their
cached values, but for the vxattr change it is rather infeasible heavy
lifting to update `seq`, which involves a set of steps to prepare the
op, commit the op, journal the changes, and update the snap
server/client(s) just for a mere flag update (and updating
last_destroyed doesn't make sense for this case anyway). So, compare
last_modified and change_attr with their cached values to check whether
the SnapRealm cache should be rebuilt. These values are updated in
Server::handle_client_setvxattr while toggling the snapshot visibility
xattr, and this enforces a cache rebuild.
Fixes: https://tracker.ceph.com/issues/71740
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
(cherry picked from commit b47f88732fc31027305b069e4a9dba3ebed2f080)
Resolves: https://jsw.ibm.com/browse/ISCE-1465
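The widened rebuild condition described above boils down to comparing two extra fields; a hedged Python sketch (field names mirror the commit message, not the exact C++ structures):

```python
def should_rebuild_snaprealm_cache(cached: dict, current: dict) -> bool:
    """Rebuild when anything the old check covered changed (seq,
    last_destroyed) OR when a vxattr toggle bumped last_modified /
    change_attr without touching the snap sequence."""
    return any(cached[k] != current[k]
               for k in ("seq", "last_destroyed",
                         "last_modified", "change_attr"))
```

A plain flag toggle never touches `seq`, so without the two extra fields the cache comparison would see no change and keep serving stale metadata.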
Dhairya Parmar [Thu, 3 Jul 2025 11:52:38 +0000 (17:22 +0530)]
mds: add ceph.dir.subvolume.snaps.visible vxattr
using SNAPDIR_VISIBILITY flag added in sr_t
Conflicts:
    Upstream ed6b71246137f9793f2d56b4d050b271a3da29fd changed generate_test_instances(), which is not present downstream in ceph-9.0-rhel-patches, so adjusted accordingly.
Fixes: https://tracker.ceph.com/issues/71740
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
(cherry picked from commit 98452c28315b53f828a1e833e288d54b9b1e29c6)
Resolves: https://jsw.ibm.com/browse/ISCE-1465
Miki Patel [Thu, 18 Sep 2025 11:20:44 +0000 (16:50 +0530)]
qa/workunits/rbd: add test_resync_after_relocate_and_force_promote
Validates the status of the group snapshot in a resync scenario, where
the primary group is first demoted and then (force) promoted.
The test case validates the fix proposed in commit b162d7df0c06
("rbd-mirror: do not prune primary snapshots as part of the secondary
daemon").
Signed-off-by: Miki Patel <miki.patel132@gmail.com>
Resolves: rhbz#2399618
Prasanna Kumar Kalever [Thu, 25 Sep 2025 14:31:06 +0000 (20:01 +0530)]
librbd: fix segfault when removing non-existent group
Removing a non-existent group triggers a segfault in
librbd::mirror::GroupGetInfoRequest::send(). The issue is caused by a missing
return after finish(), which allows execution to fall through into
GroupGetInfoRequest::get_id() and access invalid memory.
Also, make sure to ignore ENOENT throughout Group::remove(), except at
cls_client::dir_get_id().
Credits to Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Resolves: rhbz#2399618
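The missing-return pattern behind this segfault is easy to reproduce in miniature; a hedged Python sketch of the control flow (illustrative toy class, not the actual librbd code, with AttributeError standing in for the invalid memory access):

```python
class GroupGetInfoRequest:
    """Toy state machine mimicking the fall-through bug: finish() tears
    the request down, and without an early return the next state still
    runs against the torn-down object."""

    def __init__(self, fixed: bool):
        self.fixed = fixed
        self.group_id = None

    def finish(self, r: int) -> None:
        self.group_id = None  # resources released; further use is invalid

    def send(self) -> str:
        r = -2  # pretend the lookup failed with -ENOENT (no such group)
        if r < 0:
            self.finish(r)
            if self.fixed:
                return "finished"  # the fix: return right after finish()
        return self.get_id()  # without the return, execution falls through

    def get_id(self) -> str:
        # stands in for dereferencing freed memory in the C++ version
        return self.group_id.upper()
```

With `fixed=True` the request ends cleanly; with `fixed=False` execution falls through into `get_id()` on a torn-down object, the Python analogue of the crash.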
Ramana Raja [Mon, 8 Sep 2025 02:50:51 +0000 (22:50 -0400)]
qa/workunits: add scenario to "test_force_promote_delete_group"
... in rbd_mirror_group_simple test suite.
After the group and its images are removed from the secondary, the test
can run in one of two scenarios. In Scenario 1, the test confirms that
the group is completely synced from the primary to the secondary. In
Scenario 2, the test disables and re-enables the primary, and then
confirms the group syncs from the primary to the secondary. Currently,
both scenarios occasionally fail when trying to confirm that the group
is completely synced from the primary to the secondary.
Signed-off-by: Ramana Raja <rraja@redhat.com>
Resolves: rhbz#2399618
Prasanna Kumar Kalever [Mon, 22 Sep 2025 14:36:40 +0000 (20:06 +0530)]
rbd-mirror: skip validation of primary demote snapshots
Problem:
When a primary demotion is in progress, the demote snapshot is in an incomplete
state. However, the group replayer incorrectly attempts to validate this
snapshot using validate_local_group_snapshots(), treating the cluster as if it
were secondary. This results in the group status being incorrectly set to
up+replaying instead of up+unknown.
Solution:
Avoid validating snapshots that are in the process of being demoted on the
primary. This ensures the group replayer does not mistakenly assign an
incorrect role or state during transition.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Resolves: rhbz#2399618
Mark Kogan [Wed, 30 Jul 2025 12:54:19 +0000 (12:54 +0000)]
rgw: add rate limit for LIST & DELETE ops, hotfix syncup
Add rate limiting specific to LIST and DELETE ops, similar to the
existing rate limiting
(https://docs.ceph.com/en/latest/radosgw/admin/#rate-limit-management)
Example usage:
```
./bin/radosgw-admin ratelimit set --ratelimit-scope=user --uid=<UID> --max_list_ops=2
./bin/radosgw-admin ratelimit set --ratelimit-scope=user --uid=<UID> --max_delete_ops=2
./bin/radosgw-admin ratelimit enable --ratelimit-scope=user --uid=<UID>
./bin/radosgw-admin ratelimit get --ratelimit-scope=user --uid=<UID>
{
"user_ratelimit": {
"max_read_ops": 0,
"max_write_ops": 0,
"max_list_ops": 2,
"max_delete_ops": 2,
"max_read_bytes": 0,
"max_write_bytes": 0,
"enabled": true
}
}
pkill -9 radosgw
./bin/radosgw -c ./ceph.conf ...
aws --endpoint-url 'http://0:8000' s3 mb s3://bkt
aws --endpoint-url 'http://0:8000' s3 cp ./ceph.conf s3://bkt
aws --endpoint-url http://0:8000 s3api list-objects-v2 --bucket bkt --prefix 'ceph.conf' --delimiter '/'
{
"Contents": [
{
"Key": "ceph.conf",
"LastModified": "2025-07-30T13:59:38+00:00",
"ETag": "\"13d11d431ae290134562c019d9e40c0e\"",
"Size": 32346,
"StorageClass": "STANDARD"
}
],
"RequestCharged": null
}
aws --endpoint-url http://0:8000 s3api list-objects-v2 --bucket bkt --prefix 'ceph.conf' --delimiter '/'
{
"Contents": [
{
"Key": "ceph.conf",
"LastModified": "2025-07-30T13:59:38+00:00",
"ETag": "\"13d11d431ae290134562c019d9e40c0e\"",
"Size": 32346,
"StorageClass": "STANDARD"
}
],
"RequestCharged": null
}
aws --endpoint-url http://0:8000 s3api list-objects-v2 --bucket bkt --prefix 'ceph.conf' --delimiter '/'
argument of type 'NoneType' is not iterable
tail -F ./out/radosgw.8000.log | grep beast
...
beast: 0x7fffbbe09780: [30/Jul/2025:15:44:50.359 +0000] " GET /bkt?list-type=2&delimiter=%2F&prefix=ceph.conf&encoding-type=url HTTP/1.1" 200 535 - "aws-cli/2.15.31 Python/3.9.21 Linux/5.14.0-570.28.1.el9_6.x86_64 source/x86_64.rhel.9 prompt/off command/s3api.list-objects-v2" - latency=0.000999995s
beast: 0x7fffbbe09780: [30/Jul/2025:15:44:53.904 +0000] " GET /bkt?list-type=2&delimiter=%2F&prefix=ceph.conf&encoding-type=url HTTP/1.1" 200 535 - "aws-cli/2.15.31 Python/3.9.21 Linux/5.14.0-570.28.1.el9_6.x86_64 source/x86_64.rhel.9 prompt/off command/s3api.list-objects-v2" - latency=0.000999995s
vvv
beast: 0x7fffbbe09780: [30/Jul/2025:15:44:58.192 +0000] " GET /bkt?list-type=2&delimiter=%2F&prefix=ceph.conf&encoding-type=url HTTP/1.1" 503 228 - "aws-cli/2.15.31 Python/3.9.21 Linux/5.14.0-570.28.1.el9_6.x86_64 source/x86_64.rhel.9 prompt/off command/s3api.list-objects-v2" - latency=0.000000000s
beast: 0x7fffbbe09780: [30/Jul/2025:15:44:58.798 +0000] " GET /bkt?list-type=2&delimiter=%2F&prefix=ceph.conf&encoding-type=url HTTP/1.1" 503 228 - "aws-cli/2.15.31 Python/3.9.21 Linux/5.14.0-570.28.1.el9_6.x86_64 source/x86_64.rhel.9 prompt/off command/s3api.list-objects-v2" - latency=0.000999994s
beast: 0x7fffbbe09780: [30/Jul/2025:15:44:59.807 +0000] " GET /bkt?list-type=2&delimiter=%2F&prefix=ceph.conf&encoding-type=url HTTP/1.1" 503 228 - "aws-cli/2.15.31 Python/3.9.21 Linux/5.14.0-570.28.1.el9_6.x86_64 source/x86_64.rhel.9 prompt/off command/s3api.list-objects-v2" - latency=0.000000000s
s3cmd put ./ceph.conf s3://bkt/1
s3cmd put ./ceph.conf s3://bkt/2
s3cmd put ./ceph.conf s3://bkt/3
s3cmd rm s3://bkt/1
s3cmd rm s3://bkt/2
s3cmd rm s3://bkt/3
delete: 's3://bkt/1'
delete: 's3://bkt/2'
WARNING: Retrying failed request: /3 (503 (SlowDown))
WARNING: Waiting 3 sec...
WARNING: Retrying failed request: /3 (503 (SlowDown))
^^^
```
Resolves: rhbz#2393774
Resolves: rhbz#2393477
Resolves: rhbz#2395642
Resolves: rhbz#2391529
Resolves: rhbz#2389280
Signed-off-by: Mark Kogan <mkogan@ibm.com>
Update PendingReleaseNotes
Co-authored-by: Yuval Lifshitz <yuvalif@yahoo.com>
Signed-off-by: Mark Kogan <31659604+mkogan1@users.noreply.github.com>
pujashahu [Thu, 11 Sep 2025 13:40:27 +0000 (19:10 +0530)]
mgr/dashboard: Form retains old data when switching from edit to create mode
Fixes: https://tracker.ceph.com/issues/72989
Resolves: rhbz#2161889
Signed-off-by: pujashahu <pshahu@redhat.com>
(cherry picked from commit 918dff407d912b3a5ac068e0050467396668163c)
(cherry picked from commit da174509298da14857479515cf8e7c7ea9e541de)
Shweta Bhosale [Tue, 23 Sep 2025 16:33:05 +0000 (22:03 +0530)]
mgr/cephadm: Fixed stats frontend to always enable health url
Fixes: https://tracker.ceph.com/issues/71707
Signed-off-by: Shweta Bhosale <Shweta.Bhosale1@ibm.com>
(cherry picked from commit 4cf55bdb9e541a62c01286729125bf1e0a6178b2)
Resolves: rhbz#2353516
Shweta Bhosale [Wed, 3 Sep 2025 14:37:53 +0000 (20:07 +0530)]
mgr/cephadm: Allow the Ingress service to expose metrics via HTTPS; also add spec fields to accept monitor IPs/networks
Fixes: https://tracker.ceph.com/issues/71707
Signed-off-by: Shweta Bhosale <Shweta.Bhosale1@ibm.com>
(cherry picked from commit da5d3ab0f32dc9e19da73ae01a0dbb95b2d1f8b6)
Conflicts:
    src/pybind/mgr/cephadm/services/ingress.py
    src/pybind/mgr/cephadm/services/service_discovery.py
    src/pybind/mgr/cephadm/templates/services/ingress/haproxy.cfg.j2
    src/pybind/mgr/cephadm/tests/test_services.py
    src/python-common/ceph/deployment/service_spec.py
Resolves: rhbz#2353516
Shweta Bhosale [Wed, 3 Sep 2025 14:34:54 +0000 (20:04 +0530)]
mgr/cephadm: Add generic cert/key name support for getting certificates
Fixes: https://tracker.ceph.com/issues/71707
Signed-off-by: Shweta Bhosale <Shweta.Bhosale1@ibm.com>
(cherry picked from commit 14b1dd781de7088683433dde394ddbdbafedaa0e)
Resolves: rhbz#2353516
Adam King [Thu, 25 Sep 2025 20:13:18 +0000 (16:13 -0400)]
mgr/cephadm: split host cache entries if they exceed max mon store entry size
If the JSON blob we attempt to store for a host entry exceeds the max
mon store entry size, we become unable to keep storing that host's
information in the config-key store. This means that each time the mgr
fails over, we only have the information from the last time the JSON
blob was under the size limit, resulting in a number of stray
host/daemon warnings being generated and very outdated information
being reported by `ceph orch ps` and `ceph orch ls` around the time of
the failover.
Signed-off-by: Adam King <adking@redhat.com>
(cherry picked from commit ffe61afc2b5e6c2f4db1001e5288dcd8f995e570)
Resolves: rhbz#2345474
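The splitting idea can be sketched in a few lines of Python; the key names, the marker scheme, and the size limit here are illustrative, not cephadm's actual layout:

```python
import json

MAX_ENTRY_SIZE = 64 * 1024  # illustrative per-entry size limit

def store_host_entry(kv: dict, host: str, info: dict) -> None:
    """Store a host's JSON blob, splitting it across numbered keys
    when it exceeds the per-entry size limit."""
    blob = json.dumps(info)
    if len(blob) <= MAX_ENTRY_SIZE:
        kv[f"host.{host}"] = blob
        return
    chunks = [blob[i:i + MAX_ENTRY_SIZE]
              for i in range(0, len(blob), MAX_ENTRY_SIZE)]
    for i, chunk in enumerate(chunks):
        kv[f"host.{host}.{i}"] = chunk
    # the main key becomes a marker recording how many chunks exist
    kv[f"host.{host}"] = json.dumps({"chunked": len(chunks)})

def load_host_entry(kv: dict, host: str) -> dict:
    entry = json.loads(kv[f"host.{host}"])
    if isinstance(entry, dict) and set(entry) == {"chunked"}:
        blob = "".join(kv[f"host.{host}.{i}"]
                       for i in range(entry["chunked"]))
        return json.loads(blob)
    return entry
```

Small entries round-trip through a single key as before; oversized ones are reassembled transparently on load, so the caller never sees the split.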
Tomer Haskalovitch [Wed, 10 Sep 2025 09:02:03 +0000 (12:02 +0300)]
mgr/dashboard: add nsid param to ns add command
Resolves: rhbz#2398049
Signed-off-by: Tomer Haskalovitch <tomer.haska@ibm.com>
(cherry picked from commit ee37978e7341ad3c29f986f316d89cb76b26efb5)
Tomer Haskalovitch [Fri, 12 Sep 2025 00:58:44 +0000 (03:58 +0300)]
mgr/dashboard: --no-group-append default value to False, aligned with old CLI
Resolves: rhbz#2391725
Signed-off-by: Tomer Haskalovitch <tomer.haska@ibm.com>
(cherry picked from commit 46b74faa763e7894e62558f14f786c870d740b29)
Igor Fedotov [Thu, 21 Aug 2025 10:42:54 +0000 (13:42 +0300)]
test/libcephfs: use more entries to reproduce the snapdiff fragmentation issue
Snapdiff listing fragments have different boundaries in Reef and Squid+
releases; hence the original reproducer (made for Reef) doesn't work
properly in Squid+ releases. This patch fixes that at the cost of
longer execution. This might be redundant when backporting to Reef.
Resolves: rhbz#2390060
Related-to: https://tracker.ceph.com/issues/72518
Signed-off-by: Igor Fedotov <igor.fedotov@croit.io>
(cherry picked from commit 23397d32607fc307359d63cd651df3c83ada3a7f)
(cherry picked from commit 04c34b62a7ebad9296593edfcc8132b1c7351513)
Igor Fedotov [Tue, 12 Aug 2025 13:17:49 +0000 (16:17 +0300)]
mds: rollback the snapdiff fragment entries with the same name if needed.
This is required when multiple entries with the same name don't fit
into a fragment; with the existing means of specifying fragment
offsets, such splitting has to be prohibited.
Resolves: rhbz#2390060
Fixes: https://tracker.ceph.com/issues/72518
Signed-off-by: Igor Fedotov <igor.fedotov@croit.io>
(cherry picked from commit 24955e66f4826f8623d2bec1dbfc580f0e4c39ae)
(cherry picked from commit 0a9e33733c50aacac223bd409813b0a711b7b181)
Igor Fedotov [Tue, 12 Aug 2025 13:07:43 +0000 (16:07 +0300)]
test/libcephfs: Polishing the SnapdiffDeletionRecreation case
Resolves: rhbz#2390060
Signed-off-by: Igor Fedotov <igor.fedotov@croit.io>
(cherry picked from commit daf3350621cfafa383cd9deea81b60b775a53093)
(cherry picked from commit a16e0395b9d8e1303617fda956c1d89c58c6f0fd)
sajibreadd [Mon, 11 Aug 2025 08:46:39 +0000 (10:46 +0200)]
Test failure: LibCephFS.SnapdiffDeletionRecreation
Resolves: rhbz#2390060
Reproduces: https://tracker.ceph.com/issues/72518
Signed-off-by: Md Mahamudur Rahaman Sajib <mahamudur.sajib@croit.io>
(cherry picked from commit 4ff71386ac1529dc1f7c2640511f509bd6842862)
(cherry picked from commit 48f5a5d04fb2cef52c5e4a3daf452ccf988666d2)
(cherry picked from commit c79cf1af29e786161916ae0aa411bdd891fa8d29)
Venky Shankar [Thu, 25 Sep 2025 07:19:57 +0000 (12:49 +0530)]
mds: remove duplicate `CLIENT_METRIC_TYPE_STDEV_METADATA_LATENCY` from feature bit
Resolves: ISCE-2037
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Reported-by: Sachin Punadikar <sachin.punadikar@ibm.com>
Aashish Sharma [Fri, 25 Apr 2025 06:28:37 +0000 (11:58 +0530)]
mgr/dashboard: Migrate from promtail to grafana alloy
Since Promtail is now deprecated, we need to start using Grafana Alloy
for the centralized logging setup.
Fixes: https://tracker.ceph.com/issues/71072
Resolves: rhbz#2398027
Signed-off-by: Aashish Sharma <aasharma@redhat.com>
(cherry picked from commit f6021bd4829fe2bf3fbc63900a8f69143f7dd444)
Conflicts:
    src/pybind/mgr/cephadm/migrations.py
    src/pybind/mgr/cephadm/module.py
    src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/services/service-form/service-form.component.ts
Dnyaneshwari [Fri, 5 Sep 2025 10:17:11 +0000 (15:47 +0530)]
mgr/dashboard: Local storage class creation via dashboard doesn't handle creation of pool.
Fixes: https://tracker.ceph.com/issues/72569
Resolves: rhbz#2398058
Signed-off-by: Dnyaneshwari <dtalweka@redhat.com>
mgr/dashboard: handle creation of new pool
Commit includes:
1) Provide a link to create a new pool
2) Refactored validation on ACL mapping; removed the required validator as default
3) Fixed a runtime console error caused by the ACL length, which prevented the details section from opening
4) Used RxJS operators to make the API calls and mark the form ready once all data is available, fixing the form patch issues
5) Refactored some parts of the code to improve performance
6) Added zone and pool information in the details section for the local storage class
Fixes: https://tracker.ceph.com/issues/72569
Signed-off-by: Naman Munet <naman.munet@ibm.com>
(cherry picked from commit 2d0e71c845643a26d4425ddac8ee0ff30153eff2)
Conflicts:
    src/pybind/mgr/dashboard/frontend/src/app/ceph/rgw/rgw-storage-class-form/rgw-storage-class-form.component.ts
    src/pybind/mgr/dashboard/frontend/src/app/ceph/rgw/rgw.module.ts
    src/pybind/mgr/dashboard/services/rgw_client.py
(cherry picked from commit a6431bfe8de5c5e32cb62ff5b072aef00239c6e9)
Dnyaneshwari [Fri, 19 Sep 2025 11:01:43 +0000 (16:31 +0530)]
mgr/dashboard: FS - Attach Command showing undefined for MountData
Fixes: https://tracker.ceph.com/issues/73137
Resolves: rhbz#2398048
Signed-off-by: Dnyaneshwari Talwekar <dtalwekar@redhat.com>
(cherry picked from commit 50ef955207e7095578dc09820885a3dd0d6b3d52)
(cherry picked from commit f79cc7bfecf795c359f6cb9c06833e69d24c9ff3)
Dnyaneshwari [Wed, 20 Aug 2025 04:46:21 +0000 (10:16 +0530)]
mgr/dashboard: Tiering form - Placement Target in Advanced Section
Fixes: https://tracker.ceph.com/issues/72545
Resolves: rhbz#2398033
Signed-off-by: Dnyaneshwari Talwekar <dtalweka@redhat.com>
(cherry picked from commit aa3bb8adddac675ea3c6dcd0bd4e9143743124b8)
(cherry picked from commit 4a46b495c0c51776bfb8e9d2d12c813c45e96223)
Kotresh HR [Thu, 24 Jul 2025 17:31:12 +0000 (17:31 +0000)]
qa: Add test for subvolume_ls on osd full
Resolves: rhbz#2313740
Fixes: https://tracker.ceph.com/issues/72260
Signed-off-by: Kotresh HR <khiremat@redhat.com>
(cherry picked from commit 8547e57ebc4022ca6750149f49b68599a8af712e)
(cherry picked from commit a08fb44e28ce7c37b91774909f89356ca3ab95dd)
Kotresh HR [Thu, 24 Jul 2025 09:33:06 +0000 (09:33 +0000)]
mds: Fix readdir when osd is full.
Problem:
readdir wouldn't list all the entries in a directory when the OSD is
full and rstats is enabled.
Cause:
The issue happens only in a multi-MDS CephFS cluster. If rstats is
enabled, readdir requests the 'Fa' cap on every dentry, basically to
fetch the size of the directories. Note that 'Fa' is
CEPH_CAP_GWREXTEND, which maps to CEPH_CAP_FILE_WREXTEND and is used by
CEPH_STAT_RSTAT.
The request for the cap is a getattr call and need not go to the auth
MDS. If rstats is enabled, the getattr goes with the mask
CEPH_STAT_RSTAT, which mandates the auth MDS in
'handle_client_getattr', so that the request gets forwarded to the auth
MDS if the current one isn't it. But if the OSD is full, the inode is
fetched in 'dispatch_client_request' before calling the handler
function of the respective op, to check the FULL cap access for certain
metadata write operations. If the inode doesn't exist, ESTALE is
returned. This is wrong for operations like getattr, where the inode
might not be in memory on the non-auth MDS; returning ESTALE is
confusing and the client wouldn't retry. This was introduced by commit
6db81d8479b539d, which fixes subvolume deletion when the OSD is full.
Fix:
Fetch the inode required for the FULL cap access check only for the
relevant operations in the OSD-full scenario. This makes sense because
all such operations would mostly be preceded by a lookup that loads the
inode into memory, or they would handle ESTALE gracefully.
Resolves: rhbz#2313740
Fixes: https://tracker.ceph.com/issues/72260
Introduced-by: 6db81d8479b539d3ca6b98dc244c525e71a36437
Signed-off-by: Kotresh HR <khiremat@redhat.com>
(cherry picked from commit 1ca8f334f944ff78ba12894f385ffb8c1932901c)
(cherry picked from commit aa339674471e2e26488252fc3c4bd458d37ccd42)
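The shape of the fix can be sketched as a dispatch gate; a hedged Python sketch where the op set and return values are illustrative, not the actual Server::dispatch_client_request code:

```python
ESTALE = 116  # errno value returned to the client on Linux

# Ops that need the pool-FULL cap-access check up front; read-only ops
# such as getattr are deliberately left out (illustrative subset).
FULL_CHECK_OPS = {"setattr", "unlink", "rmdir", "rename"}

def dispatch_client_request(op: str, inode_cache: dict, ino: int):
    """Fetch the inode early only for ops that actually need the FULL
    cap-access check in the OSD-full scenario."""
    if op in FULL_CHECK_OPS:
        inode = inode_cache.get(ino)
        if inode is None:
            return -ESTALE  # write op on an unknown inode: fail early
        # ... FULL cap access would be checked against `inode` here ...
    # Read ops (e.g. getattr on a non-auth MDS) skip the early fetch and
    # reach their handler, which can forward to the auth MDS instead.
    return "dispatched"
```

A getattr on a non-auth MDS no longer trips the early inode fetch, so it reaches its handler and can be forwarded instead of failing with ESTALE.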
Igor Golikov [Tue, 16 Sep 2025 14:12:58 +0000 (14:12 +0000)]
mds: use std::variant instead of boost::variant
Resolves: ISCE-2037
Signed-off-by: Igor Golikov <igolikov@redhat.com>
(cherry picked from commit 479eae30dfab1563bb0163f3fff9eb671a09ff08)
Venky Shankar [Wed, 24 Sep 2025 09:15:31 +0000 (14:45 +0530)]
include/cephfs: fix build error
Resolves: ISCE-2037
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Pedro Gonzalez Gomez [Tue, 26 Aug 2025 12:05:45 +0000 (14:05 +0200)]
mgr/dashboard: fix SMB custom DNS button and linked_to_cluster col
- The 'add custom DNS' button in the SMB cluster form should only appear for Active Directory, where it is relevant.
- The linked_to_cluster column data is missing for standalone SMB.
- Some refactoring to remove magic strings and use FormControl for the publicAddrs field.
Resolves: rhbz#2361570
Resolves: rhbz#2366226
Fixes: https://tracker.ceph.com/issues/73096
Signed-off-by: Pedro Gonzalez Gomez <pegonzal@ibm.com>
(cherry picked from commit 9ce943e21558d17b3a214840b39bb57eab0cbd85)
(cherry picked from commit 766dbab7320949f7917ef60e26b1e5029ed59d9f)
Pedro Gonzalez Gomez [Wed, 27 Aug 2025 14:41:41 +0000 (16:41 +0200)]
mgr/dashboard: add multiple ceph users deletion
Resolves: rhbz#2272916
Fixes: https://tracker.ceph.com/issues/72752
Signed-off-by: Pedro Gonzalez Gomez <pegonzal@ibm.com>
(cherry picked from commit 14ca16576d16de49c07725fb4b0feb112c8a1a43)
(cherry picked from commit 16cbb7588b378a802c0f4eebb29fc174e024cb61)
Dnyaneshwari [Wed, 6 Aug 2025 09:42:43 +0000 (15:12 +0530)]
mgr/dashboard: [NFS] add Subvolume Groups and Subvolumes in the "Edit NFS Export" form
Resolves: rhbz#2354020
Fixes: https://tracker.ceph.com/issues/72435
Signed-off-by: Dnyaneshwari Talwekar <dtalweka@redhat.com>
(cherry picked from commit 3862c76be6d1ba9bb0e9d8d3bf67c8a3791c2349)
Igor Golikov [Tue, 16 Sep 2025 14:14:18 +0000 (14:14 +0000)]
mds: remove unused enum fields for subvolume metrics
Resolves: ISCE-2037
Signed-off-by: Igor Golikov <igolikov@redhat.com>
Fixes: https://tracker.ceph.com/issues/73053
(cherry picked from commit 836cc49ca1809b755c1fd366e5bd6a231ad568ca)
Igor Golikov [Sun, 13 Jul 2025 11:14:21 +0000 (11:14 +0000)]
doc: update documentation
Resolves: ISCE-2037
Fixes: https://tracker.ceph.com/issues/68931
Signed-off-by: Igor Golikov <igolikov@ibm.com>
(cherry picked from commit e6a21442ab814f426ca9f304dbfd96dfd6269509)
(cherry picked from commit 354ba41ee2f63717bf520a86a77f1d408c20e29c)
Igor Golikov [Thu, 7 Aug 2025 16:35:47 +0000 (16:35 +0000)]
test: add subvolume metrics sanity test
Resolves: ISCE-2037
Signed-off-by: Igor Golikov <igolikov@ibm.com>
Fixes: https://tracker.ceph.com/issues/68929
(cherry picked from commit 1327fc8897c8779a8f31ea7f5e8759de2e048515)
(cherry picked from commit 067962fdde079c32c26850e526211de967d60d42)
Venky Shankar [Mon, 8 Sep 2025 06:40:35 +0000 (06:40 +0000)]
qa/cephfs: run selective test classes from basic volumes test
Resolves: ISCE-2037
Signed-off-by: Venky Shankar <vshankar@redhat.com>
(cherry picked from commit 13d9baf38ea1d7df50a31da026f430110c940bf0)
(cherry picked from commit 0cd54294b78ff6b66b5bcf197a8874392e367067)
Venky Shankar [Fri, 29 Aug 2025 07:15:09 +0000 (07:15 +0000)]
qa/cephfs: use fuse mount for volumes/subvolume tests
Using the kernel client is (a) not really required by the existing
volume/subvolume tests, and (b) per-subvolume metrics are only
supported by the user-space client library.
Resolves: ISCE-2037
Signed-off-by: Venky Shankar <vshankar@redhat.com>
(cherry picked from commit 17c35a97ac0e580545688f71b8facbdf14fc2376)
Venky Shankar [Fri, 29 Aug 2025 17:59:05 +0000 (17:59 +0000)]
mds, messages: include subvolume metric count in log dumps and message exchanges
Resolves: ISCE-2037
Signed-off-by: Venky Shankar <vshankar@redhat.com>
(cherry picked from commit bbc141e8dcb209450e63581e815339aba1f71929)
Venky Shankar [Fri, 29 Aug 2025 17:56:21 +0000 (17:56 +0000)]
mds: remove unneeded SubvolumeMetric field from `struct Metric`
Resolves: ISCE-2037
Signed-off-by: Venky Shankar <vshankar@redhat.com>
(cherry picked from commit 4d2ad699fb59d63143b89faecc7046f1db2dfe12)
(cherry picked from commit 6a1a10e3d94ac78e1444b38408ade8a20d420420)
Conflicts:
    src/mds/MDSPerfMetricTypes.h
Adjusted back to decode version 5, since this commit removes the
unneeded subvolume_metrics from the Metric structure.
Venky Shankar [Fri, 29 Aug 2025 17:55:03 +0000 (17:55 +0000)]
mds: add metric debug log in refresh_subvolume_metrics_for_rank()
Resolves: ISCE-2037
Signed-off-by: Venky Shankar <vshankar@redhat.com>
(cherry picked from commit d2360189a08b7a3b1d5547cbe79302c9cb16d1bc)
(cherry picked from commit 688e8c361de73306e58e15795b9bec6f75c128cc)
Igor Golikov [Thu, 10 Jul 2025 10:21:56 +0000 (10:21 +0000)]
mgr,stats: integrate subvolume metrics
mgr and stats support for the new subvolume metrics via the existing
perf queries mechanism.
Resolves: ISCE-2037
Fixes: https://tracker.ceph.com/issues/68932
Signed-off-by: Igor Golikov <igolikov@ibm.com>
(cherry picked from commit 1a733b01489335471b20df39a99c2000b0ef7729)
(cherry picked from commit 7fbbec6138548c754655b7a4f2042a9cdb32baf8)
Igor Golikov [Thu, 10 Jul 2025 10:18:57 +0000 (10:18 +0000)]
mds: aggregate and expose subvolume metrics
Rank 0 periodically receives subvolume metrics from the other MDS
instances and aggregates them using a sliding window. The
MetricsAggregator exposes PerfCounters and PerfQueries for these
metrics.
Resolves: ISCE-2037
Fixes: https://tracker.ceph.com/issues/68931
Signed-off-by: Igor Golikov <igolikov@ibm.com>
(cherry picked from commit a49ba9d27ab93fe3466822d8865c6112eea86c09)
(cherry picked from commit bb49fe739f5c9b83439f02449c8f3df3e26bd9cc)
Conflicts:
    src/mds/MDSPerfMetricTypes.h
Adjusted the decoding version since CopyIoSizesMetric is present in the branch.
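The sliding-window aggregation mentioned above can be sketched with a bounded deque per subvolume; a hedged Python sketch (class and window size are illustrative, not the MetricsAggregator implementation):

```python
from collections import deque

class SlidingWindowAggregator:
    """Keep the last `window` samples per subvolume and expose a
    simple aggregate (here, the mean) over them."""

    def __init__(self, window: int = 10):
        self.window = window
        self.samples = {}  # subvolume path -> deque of recent samples

    def ingest(self, subvolume: str, value: float) -> None:
        q = self.samples.setdefault(subvolume,
                                    deque(maxlen=self.window))
        q.append(value)  # the oldest sample drops out automatically

    def aggregate(self, subvolume: str) -> float:
        q = self.samples.get(subvolume)
        return sum(q) / len(q) if q else 0.0
```

The bounded deque is what makes the window "slide": each new sample from an MDS rank evicts the oldest one, so the aggregate always reflects recent activity.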
Igor Golikov [Thu, 10 Jul 2025 10:17:36 +0000 (10:17 +0000)]
client,mds: add support for subvolume level metrics
Add support for client-side metrics collection using the SimpleIOMetric
struct and aggregation using the AggregatedIOMetrics struct. The client
holds a SimpleIOMetrics vector per subvolume it recognizes (via
caps/metadata messages), aggregates them into the AggregatedIOMetric
struct, and periodically sends them to the MDS along with the regular
client metrics.
The MDS holds a map of subvolume_path -> vector<AggregatedIOMetrics>
and periodically sends it to rank 0 for further aggregation and
exposure.
Resolves: ISCE-2037
Fixes: https://tracker.ceph.com/issues/68929, https://tracker.ceph.com/issues/68930
Signed-off-by: Igor Golikov <igolikov@ibm.com>
(cherry picked from commit c376635d5d5572d64fa76a82a7461ce917186047)
(cherry picked from commit 9004492bc93676c5dbb1b864d7f07b14d52e8387)
Conflicts:
    src/client/Client.cc
    src/client/Client.h
    src/include/cephfs/metrics/Types.h
    src/mds/MetricsHandler.cc
    src/mds/MetricsHandler.h
    src/mds/cephfs_features.h
Resolved conflicts with fscrypt-related changes on the client side and
the CopyIoSizesPayload metrics type on the MDS side.
Yaarit Hatuka [Fri, 16 May 2025 22:08:10 +0000 (18:08 -0400)]
mgr/call_home: change ECuRep endpoint
The current ECuRep default endpoint for file uploads in production
(https://www.secure.ecurep.ibm.com) will sunset in Q3.
We need to change the default from:
https://www.secure.ecurep.ibm.com
to:
https://www.ecurep.ibm.com
Uploading files to ECuRep with the new endpoint was tested and works well.
Resolves: rhbz#
2366947
Signed-off-by: Yaarit Hatuka <yhatuka@ibm.com>
(cherry picked from commit
f6020f35831c501ff4ca96a6e6ee855ffe805475 )
Yaarit Hatuka [Mon, 21 Apr 2025 19:54:16 +0000 (15:54 -0400)]
mgr/call_home: add country code to the inventory report
The country code was missing from the inventory report; add it.
Resolves: rhbz#
2361502
Signed-off-by: Yaarit Hatuka <yhatuka@ibm.com>
(cherry picked from commit
22b67301e27362e6f855c4d05b1003b7dee1a247 )
Yaarit Hatuka [Thu, 16 May 2024 02:08:32 +0000 (22:08 -0400)]
mgr/call_home: refactor agent
This refactor improves versatility and ease of maintenance,
and introduces logic to handle cooldown and stale log upload
requests.
Signed-off-by: Yaarit Hatuka <yhatuka@ibm.com>
(cherry picked from commit
0b9327bb5b4e75f26715966890b8239fa684985c )
When cherry-picking to the 9.0 branch a conflict occurred:
Conflicts:
src/pybind/mgr/call_home_agent/module.py
We resolved it by keeping the changes in this patch and deleting the
original version.
Yaarit Hatuka [Mon, 21 Oct 2024 20:35:31 +0000 (16:35 -0400)]
mgr/callhome: persist operations between mgr restarts
Currently the operations dictionary is only kept in memory. It is lost
when the mgr restarts, and this can cause the module to handle upload
requests which were already processed and registered in the operations
dictionary. To prevent that, we write the operations to the db, and load
them when the module starts.
Resolves: rhbz#
2320831
https://bugzilla.redhat.com/show_bug.cgi?id=
2320831
Signed-off-by: Yaarit Hatuka <yhatuka@ibm.com>
(cherry picked from commit
9a28b7c97467ede85afdb8d71a19b0d7be124280 )
(cherry picked from commit
c5a8a8b89b8f9a548ffae072c5bbe85d6bfe77b2 )
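The persistence approach this commit describes can be sketched as follows. This is a hedged illustration: `set_store`/`get_store` mirror the ceph-mgr module KV-store calls, but they are mocked here with a plain dict so the sketch is self-contained, and the `"operations"` key name is an assumption:

```python
import json

class FakeKVStore:
    """Stands in for the ceph-mgr module KV store (MgrModule.set_store/get_store)."""
    def __init__(self):
        self._db = {}

    def set_store(self, key, value):
        self._db[key] = value

    def get_store(self, key):
        return self._db.get(key)

OPERATIONS_KEY = "operations"  # assumed key name, for illustration

def save_operations(store, operations):
    # write the in-memory operations dict to the db on every update
    store.set_store(OPERATIONS_KEY, json.dumps(operations))

def load_operations(store):
    # load persisted operations when the module starts, so upload
    # requests already processed before a mgr restart are not re-handled
    raw = store.get_store(OPERATIONS_KEY)
    return json.loads(raw) if raw else {}

store = FakeKVStore()
save_operations(store, {"op-1": {"status": "done"}})
restored = load_operations(store)  # survives a simulated module restart
```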
Yaarit Hatuka [Mon, 30 Sep 2024 22:42:42 +0000 (18:42 -0400)]
mgr/callhome: change last_contact frequency to 30 minutes
It is currently set to 5 minutes, and we were asked to change it to 30
minutes.
Resolves: rhbz#
2315797
https://bugzilla.redhat.com/show_bug.cgi?id=
2315797
Signed-off-by: Yaarit Hatuka <yhatuka@ibm.com>
(cherry picked from commit
41ab0f25b7271ec9d3212181b297ca9aae8b820b )
Juan Miguel Olmo Martínez [Thu, 19 Sep 2024 07:22:24 +0000 (09:22 +0200)]
mgr/callhome: management of diagnostic upload requests (#78)
Call Home stores diagnostic upload requests for 10 days.
Call Home does not process operations sent repeatedly by the IBM Call Home mesh.
Call Home is able to repeat level 1 operations after 5 minutes.
Call Home is able to repeat level 2 (and higher) operations after 1 hour.
Resolves: rhbz#
2313070
https://bugzilla.redhat.com/show_bug.cgi?id=
2313070
Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@ibm.com>
(cherry picked from commit
6e464b2ef44b4463d82407d157ede2aa16117b7d )
(cherry picked from commit
c593965c2762c333560f13fe20e35e4412204105 )
Juan Miguel Olmo Martínez [Wed, 18 Sep 2024 07:39:19 +0000 (09:39 +0200)]
mgr/callhome: make unique the event_id in log upload progress status events (#75)
We now use the last_contact event_id (what we have) plus a counter
Resolves: rhbz#
2303848
https://bugzilla.redhat.com/show_bug.cgi?id=
2303848
Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@ibm.com>
(cherry picked from commit
255cd59afb9fab68783a9ae5786f7e97ed73f6bf )
Juan Miguel Olmo Martínez [Wed, 21 Aug 2024 08:34:44 +0000 (10:34 +0200)]
mgr/callhome: Add the zstandard module to manager modules requirements
Call Home uses zstandard to compress the performance report content
Resolves: rhbz#
2306021
https://bugzilla.redhat.com/show_bug.cgi?id=
2306021
Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@ibm.com>
(cherry picked from commit
e93e8135a57cbf72c4bc55b7f32ffc9cce0833bb )
Juan Miguel Olmo Martínez [Mon, 22 Jul 2024 07:49:09 +0000 (09:49 +0200)]
mgr/ccha: Complete flag is set to false only when an operation is in progress
Resolves: rhbz#
2299176
https://bugzilla.redhat.com/show_bug.cgi?id=
2299176
Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@ibm.com>
(cherry picked from commit
bba0da72404c9dfb75287a2c8518af90395f5878 )
(cherry picked from commit
6709c2eb7b446112f8c0d448ccd78b1b2cdfc1eb )
Juan Miguel Olmo Martínez [Wed, 3 Jul 2024 10:23:21 +0000 (12:23 +0200)]
mgr/callhome: ISCE-740 - Call Home Performance report
Resolves: rhbz#
2303388
Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@ibm.com>
(cherry picked from commit
41147e6fc3508ca4c5d2570ea54b491e82410ea3 )
Juan Miguel Olmo Martínez [Mon, 24 Jun 2024 10:51:33 +0000 (12:51 +0200)]
mgr/callhome: ISCE-739 Support UI - Call Home info
Resolves: rhbz#
2303389
Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@ibm.com>
(cherry picked from commit
4d457b9f8314478ee12ace9944d4dce1b51ca6fe )
Juan Miguel Olmo Martínez [Thu, 29 Jun 2023 08:31:46 +0000 (10:31 +0200)]
mgr/call_home_agent: IBM Call Home Agent module
This is a combination of 18 commits to ease maintenance.
Signed-off-by: Yaarit Hatuka <yhatuka@ibm.com>
Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@ibm.com>
(cherry picked from commit
c9deac16f75e174e66ecde453cc8e71c936b3981 )
Resolves: rhbz#
2235256
(cherry picked from commit
eb3b298b7bc5bb049b55b6918b194754fc01a478 )
(cherry picked from commit
de6cbfbde53c64877941751d2ef5f8198ae5dccc )
Conflicts:
ceph.spec.in
A new line was missing between the block of "%files node-proxy"
and that of "%files mgr-callhome".
Please note that the changes in
de6cbfbde53c64877941751d2ef5f8198ae5dccc
to src/cephadm/cephadm.py were reset in this commit, since they were
extracted and cherry-picked to a separate call-home-cephadm branch.
pybind/mgr: add call_home_agent to CMakeLists.txt and tox.ini
This fixes the previous commit
(
c9deac16f75e174e66ecde453cc8e71c936b3981 ) and could be squashed down in
future rebases.
Related: rhbz#
2235256
Signed-off-by: Yaarit Hatuka <yhatuka@ibm.com>
(cherry picked from commit
e74a8c4ab2de5a1ab20fc69b53c842249f63e6e1 )
(cherry picked from commit
a58aff2cfafbdbf0c2b49683e26ef7d0d5755ee5 )
mgr/call_home_agent: fix reports frequency
Inventory reports frequency should be daily (60*60*24).
Status reports frequency should be every 30 minutes (60*30).
Move the content of options.py into module.py; there is no reason for separate files.
Resolves: rhbz#
2241825
Signed-off-by: Yaarit Hatuka <yhatuka@ibm.com>
(cherry picked from commit
27da98d9968a67a7a8f7d5426cd20565d689f481 )
(cherry picked from commit
0bd94e22ab5acc49759ddcb056baf75ad416d53d )
mgr/call_home_agent: add "request_time" to all events payload
Storage Insights requires that a "request_time" key be included in
the "payload" section of all events. Its value is a Unix timestamp in
milliseconds.
Resolves: rhbz#
2248640
https://bugzilla.redhat.com/show_bug.cgi?id=
2248640
Signed-off-by: Yaarit Hatuka <yhatuka@ibm.com>
(cherry picked from commit
981b0ea2f793f8eacfd499fa7205cf89b84a9a6e )
(cherry picked from commit
0b7945474044ef7305d03fdabef1288986890ef4 )
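The "request_time" requirement above can be sketched as a tiny helper — an illustration only; the event layout beyond the "payload"/"request_time" keys is an assumption:

```python
import time

def add_request_time(event):
    # Storage Insights expects "request_time" inside the "payload"
    # section of every event, as a Unix timestamp in milliseconds
    payload = event.setdefault("payload", {})
    payload["request_time"] = int(time.time() * 1000)
    return event

event = add_request_time({"type": "status"})
```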
mgr/ccha: Fix decoding issue
The encoded JWT token inside the password must be managed as a raw string
Resolves: rhbz#
2231489
https://bugzilla.redhat.com/show_bug.cgi?id=
2231489
Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@ibm.com>
(cherry picked from commit
e7e618e7f064a75434e3822c4ef9499ddab43fd1 )
(cherry picked from commit
9acaa16bfde2c9f1ab4d34cf356205794a4a4eb3 )
mgr/ccha: Fix help for ceph callhome show command
Resolves: rhbz#
2243795
https://bugzilla.redhat.com/show_bug.cgi?id=
2243795
Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@ibm.com>
(cherry picked from commit
cd4356588a615d83d9ae3cd7806008bd8b4cdae1 )
(cherry picked from commit
14bb3e9a9efdeedb80344ea02a90e28f96e0ae35 )
mgr/ccha: Fix ceph callhome get user output
Call Home manager module option names and the fields shown in the
callhome get user command are now the same
Resolves: rhbz#
2243796
https://bugzilla.redhat.com/show_bug.cgi?id=
2243796
Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@ibm.com>
(cherry picked from commit
d99ca03f04dac20fa7efd22fba6a548a3c7d10df )
(cherry picked from commit
ccf42acecc9e7ec19c8994e4d2ca0180b612ad1e )
mgr/callhome: Add hardware status to inventory reports (#48)
Hardware status will be fetched daily from Node Proxy, and will be added to the
inventory reports.
Signed-off-by: Yaarit Hatuka <yhatuka@ibm.com>
(cherry picked from commit
9be771d7e63e21b185a65f1d0fecb4c959a3c058 )
Resolves: rhbz#
2264434
https://bugzilla.redhat.com/show_bug.cgi?id=
2264434
(cherry picked from commit
209527a8e087c916fadd0e395e3619a89cf1c3a6 )
mgr/callhome: Send alerts to Call Home (#47)
Add the functionality of sending Prometheus alerts to IBM Call Home.
Signed-off-by: Yaarit Hatuka <yhatuka@ibm.com>
(cherry picked from commit
aed81302e05451fb2c0d271d199a099623b205d9 )
Resolves: rhbz#
2264432
https://bugzilla.redhat.com/show_bug.cgi?id=
2264432
(cherry picked from commit
e5a14624719fb52ee1b1a93acb1277db40337be0 )
mgr/callhome: Upload diagnostics
Implementation of the upload diagnostics functionality in Call Home Agent
Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@ibm.com>
(cherry picked from commit
2830d82ef60f93ee156e196f7fe8959071deabfa )
Resolves: rhbz#
2264444
https://bugzilla.redhat.com/show_bug.cgi?id=
2264444
(cherry picked from commit
6a3d65e6a1b99f1793f8e5849e20f62cf3e31af5 )
mgr/ccha: Fix sos report file corrupted in EcuRep
The sos report file uploaded to ECuRep cannot be unpacked
This commit can be safely squashed along with:
6a3d65e6a1b99f1793f8e5849e20f62cf3e31af5
mgr/callhome: Upload diagnostics
Resolves: rhbz#
2266236
https://bugzilla.redhat.com/show_bug.cgi?id=
2266236
Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@ibm.com>
(cherry picked from commit
a56fe4624845d248fe534f87f145abc984f281fb )
(cherry picked from commit
89a4ea154d7ed38760fa0c569d5fa1b569650513 )
mgr/ccha: Fix inventory report (#54)
The inventory report cannot be generated
This commit can be safely squashed along with:
209527a8e087c916fadd0e395e3619a89cf1c3a6
mgr/callhome: Add hardware status to inventory reports
in future releases
Resolves: rhbz#
2264434
https://bugzilla.redhat.com/show_bug.cgi?id=
2264434
Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@ibm.com>
(cherry picked from commit
0320c857e0a7151f4846f2187c48c1fc2a3c0c98 )
(cherry picked from commit
6e4b152899158240293d82ce3a4d45a400e3ce56 )
mgr/ccha: Upload diagnostics level 1 report error (#55)
Resolves: rhbz#
2268399
https://bugzilla.redhat.com/show_bug.cgi?id=
2268399
Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@ibm.com>
(cherry picked from commit
bcfcabf3a2f965c84db868f75f2b9fbdbfd2c2d7 )
(cherry picked from commit
efde76bda5d495a376365113c2faa3b1e4137591 )
cephadm: make cephadm sos cmd more robust (#56)
This commit is extracted from
bac75bb1b9052982fd9b4ebc3ff5116b67081c54
in order to include only the call home module changes.
See:
https://gitlab.cee.redhat.com/ceph/ceph/-/commit/
bac75bb1b9052982fd9b4ebc3ff5116b67081c54 ?merge_request_iid=520
This commit was created with:
$ git format-patch -1
bac75bb1b9052982fd9b4ebc3ff5116b67081c54 -- src/pybind/mgr/call_home_agent/module.py
0001-mgr-ccha-make-cephadm-sos-cmd-more-robust-56.patch
$
$ git apply -3 0001-mgr-ccha-make-cephadm-sos-cmd-more-robust-56.patch
Applied patch to 'src/cephadm/cephadm.py' cleanly.
Signed-off-by: Yaarit Hatuka <yhatuka@ibm.com>
mgr/ccha: Remove jti error message when no credentials (#61)
Avoid the annoying error message when no credentials are present
Fix an error when registry credentials are set using ceph cephadm registry-reg_credentials
Change the default regex for registry URLs
Resolves: rhbz#
2231489
https://bugzilla.redhat.com/show_bug.cgi?id=
2231489
Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@ibm.com>
(cherry picked from commit
aac9d3a359af145ec6cae92836e387d65978602e )
(cherry picked from commit
9dd5647d374c4d8647198da4e4a7f7f645477e89 )
mgr/callhome: use static Transfer ID
ECuRep requires Transfer ID credentials (user ID and password). In this fix we
are adding the option to load them from the encrypted keys file instead of
asking the user to populate them. The keys from the files are the default. As a
workaround, we are leaving the option to manually populate the module options,
in case we ever need it.
Resolves: rhbz#
2271537
Signed-off-by: Yaarit Hatuka <yhatuka@ibm.com>
(cherry picked from commit
27eddcb76bd2f67eee13b7539bb006d286ba560d )
(cherry picked from commit
55e45c7a27c95ac387edfc6b4b51c0a778b4635c )
mgr/ccha: Increase default value for cooling window
Increased upload_snap_cooling_window_seconds option value to 1 day
Resolves: rhbz#
2273565
https://bugzilla.redhat.com/show_bug.cgi?id=
2273565
Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@ibm.com>
(cherry picked from commit
e6787dbdca28e2a007d405775f145dbad94a68c9 )
(cherry picked from commit
7181f16fd50f2311f42fc4a56721f2b1355ff049 )
BF-
2271537 : mgr/callhome: pick up SI event ID (#65)
The Storage Insights event ID was not picked up correctly, which prevented Ceph
from listening to SI-triggered requests, and thus from fulfilling them and
updating their status.
The status for operations was updated to match SI expectations.
Resolves: rhbz#
2271537
Signed-off-by: Yaarit Hatuka <yhatuka@ibm.com>
Co-authored-by: Yaarit Hatuka <yhatuka@ibm.com>
(cherry picked from commit
7d87f5cd50a21ff16675c69ff6dac55235cdbc22 )
(cherry picked from commit
5290a81d189b81ab463c73601c44f77a99f4107e )
(cherry picked from commit
9786c32e2f98840a249405f64639e43606d09b87 )
When cherry-picking to the 9.0 branch, this conflict occurred:
Conflicts:
src/pybind/mgr/tox.ini
[testenv:test]
setenv = {[testenv]setenv}
deps = {[testenv]deps}
commands = {[testenv]commands}
<<<<<<< HEAD
=======
[testenv:fix]
basepython = python3
deps =
autopep8
modules =
alerts \
balancer \
call_home_agent \
cephadm \
cli_api \
crash \
devicehealth \
diskprediction_local \
insights \
iostat \
nfs \
orchestrator \
prometheus \
rgw \
status \
telemetry
commands =
python --version
autopep8 {[autopep8]addopts} \
{posargs:{[testenv:fix]modules}}
>>>>>>>
9786c32e2f9 (mgr/call_home_agent: IBM Call Home Agent module)
[testenv:pylint]
We resolved it by removing the original changes.
Marcus Watts [Thu, 18 Sep 2025 04:59:53 +0000 (00:59 -0400)]
copy object encryption fixes - complete_multipart_upload w/ sse-c
complete_multipart_upload: the spec requires that the client
provide the same values for sse-c as were used to initiate
the upload. Verify the required parameters exist and match.
XXX fixup merge w/ previous
Resolves: rhbz#
2394511
Fixes: https://tracker.ceph.com/issues/23264
Signed-off-by: Marcus Watts <mwatts@redhat.com>
(cherry picked from commit
a5b3ac39619d3c15f5fbed3cd8c564a1e2beaa59 )
Marcus Watts [Wed, 17 Sep 2025 21:11:33 +0000 (17:11 -0400)]
copy object encryption fixes - copy_part_enc with sse-c; use correct copysource values
copy_part w/ sse-c: use the correct copysource attributes for sse-c
XXX fixup merge w/ previous
Resolves: rhbz#
2394511
Fixes: https://tracker.ceph.com/issues/23264
Signed-off-by: Marcus Watts <mwatts@redhat.com>
(cherry picked from commit
29871b4c88a60c98062d7acac64b07b21199cf24 )
Marcus Watts [Wed, 17 Sep 2025 18:49:33 +0000 (14:49 -0400)]
copy object encryption fixes - copy_part_enc encryption attributes in result
copy_part w/ encrypted parameters; dump destination encryption
parameters on each part.
XXX fixup merge w/ previous
Resolves: rhbz#
2394511
Fixes: https://tracker.ceph.com/issues/23264
Signed-off-by: Marcus Watts <mwatts@redhat.com>
(cherry picked from commit
3452fd1235c055d773252902bfffb1e28da6af51 )
Shilpa Jagannath [Thu, 11 Sep 2025 15:26:50 +0000 (11:26 -0400)]
rgw/multisite: reset RGW_ATTR_OBJ_REPLICATION_TRACE during object attr changes.
Otherwise, if a zone receives any S3 object API request such as PutObjectAcl or PutObjectTagging, and this zone
was originally the source zone for the object's put request, then such subsequent sync ops will fail. This is because the
zone id was added to the replication trace to ensure that we don't sync the object back to it,
for example in a put/delete race during full sync (https://tracker.ceph.com/issues/58911).
So, if the same zone ever becomes the destination for subsequent sync requests on the same object, we compare this zone as
the destination zone against the zone entries in the replication trace, and because its entry is already present in the trace,
the sync operation returns -ERR_NOT_MODIFIED.
Signed-off-by: Shilpa Jagannath <smanjara@redhat.com>
(cherry picked from commit
e1ac09ec912ced1c7316c8a18dfad891423be30e )
Yuval Lifshitz [Tue, 9 Sep 2025 17:51:29 +0000 (17:51 +0000)]
rgw/logging: rollover objects when conf changes
and return the name of the flushed object to the client
Fixes: https://tracker.ceph.com/issues/72940
Resolves: rhbz#
2393440
Signed-off-by: Yuval Lifshitz <ylifshit@ibm.com>
(cherry picked from commit
60a7f72bb16ae193e1eb19062bc915da7f46f9ac )
Yuval Lifshitz [Thu, 11 Sep 2025 15:22:57 +0000 (15:22 +0000)]
rgw/logging: add error message when log_record fails
when log_record fails in journal mode due to issues in the target
bucket, the result code that the client gets will be confusing, since
there is no indication that the issue is with the target bucket and not
the source bucket on which the client was operating.
the HTTP error message will be used to convey this information.
Fixes: https://tracker.ceph.com/issues/72543
Resolves: rhbz#
2395210
Signed-off-by: Yuval Lifshitz <ylifshit@ibm.com>
(cherry picked from commit
263f13f27da61f8323a466769c46d81ea5237460 )
Conflicts:
src/rgw/rgw_bucket_logging.cc
Yuval Lifshitz [Thu, 4 Sep 2025 10:53:07 +0000 (10:53 +0000)]
rgw/logging: allow committing empty objects
Fixes: https://tracker.ceph.com/issues/72542
Resolves: rhbz#
2394062
Signed-off-by: Yuval Lifshitz <ylifshit@ibm.com>
(cherry picked from commit
62fed9946937cbdda4b6e100a50fc05e9d94ab47 )
Conflicts:
src/rgw/rgw_rest_bucket_logging.cc
Yuval Lifshitz [Wed, 2 Jul 2025 14:28:27 +0000 (14:28 +0000)]
rgw/logging: verify http method exists
Resolves: rhbz#
2372311
Signed-off-by: Yuval Lifshitz <ylifshit@ibm.com>
(cherry picked from commit
56c753742cb4b86bc8726e0dfeebd65e9d7fc982 )
Yuval Lifshitz [Thu, 12 Jun 2025 12:21:07 +0000 (12:21 +0000)]
rgw/logging: fix/remove/add bucket logging op names
Fixes: https://tracker.ceph.com/issues/71638
Resolves: rhbz#
2372311
Signed-off-by: Yuval Lifshitz <ylifshit@ibm.com>
(cherry picked from commit
326eef3501ce834f7067dfcc44963e3ef4c571df )
Yuval Lifshitz [Thu, 12 Jun 2025 10:05:03 +0000 (10:05 +0000)]
rgw/logging: refactor canonical_name()
The function is moved up in the inheritance hierarchy
where possible.
Resolves: rhbz#
2372311
Signed-off-by: Yuval Lifshitz <ylifshit@ibm.com>
(cherry picked from commit
72482a796670f4fcf03baf3a74501c86ada217ae )
Yuval Lifshitz [Wed, 11 Jun 2025 14:16:31 +0000 (14:16 +0000)]
rgw/logging: fix canonical names
Fixes: https://tracker.ceph.com/issues/71638
Resolves: rhbz#
2372311
Signed-off-by: Yuval Lifshitz <ylifshit@ibm.com>
(cherry picked from commit
ad3f8f8105600faa350d7263374ae01ace70bbd8 )
Soumya Koduri [Fri, 12 Sep 2025 07:29:22 +0000 (12:59 +0530)]
rgw/restore: Mark the restore entry status as `None` first time
While adding the restore entry to the FIFO, mark its status as `None`
so that the restore thread knows that the entry is being processed for
the first time. In case the restore is still in progress and the entry
needs to be re-added to the queue, its status will then be marked
`InProgress`.
Resolves: rhbz#
2312933
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
Soumya Koduri [Thu, 28 Aug 2025 14:13:45 +0000 (19:43 +0530)]
qa/rgw: Include rgw_restore_processor_period in s3tests
Resolves: rhbz#
2312933
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
Soumya Koduri [Sun, 10 Aug 2025 12:13:11 +0000 (17:43 +0530)]
rgw/restore: Persistently store the restore state for cloud-s3 tier
In order to resume IN_PROGRESS restore operations post RGW service
restarts, store the entries of the objects being restored from `cloud-s3`
tier persistently. This is already being done for `cloud-s3-glacier`
tier and now the same will be applied to `cloud-s3` tier too.
With this change, when `restore-object` is performed on any object,
it will be marked RESTORE_ALREADY_IN_PROGRESS and added to a restore FIFO queue.
This queue is later processed by Restore worker thread which will try to
fetch the objects from Cloud or Glacier/Tape S3 services. Hence all the
restore operations are now handled asynchronously (for both `cloud-s3`,
`cloud-s3-glacier` tiers).
Resolves: rhbz#
2312933
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
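The asynchronous restore flow described above can be sketched as follows. This is a simplified Python illustration: the state names mirror the commit message, but the in-memory queue and single worker step stand in for RGW's persistent restore FIFO and Restore worker thread:

```python
import queue

RESTORE_ALREADY_IN_PROGRESS = "RestoreAlreadyInProgress"

restore_queue = queue.Queue()   # stands in for the persistent restore FIFO
object_state = {}

def restore_object(obj):
    # restore-object returns immediately: mark the object as in
    # progress and enqueue it for the restore worker
    object_state[obj] = RESTORE_ALREADY_IN_PROGRESS
    restore_queue.put(obj)

def restore_worker_step():
    # the Restore worker thread drains the FIFO and fetches objects
    # from the cloud-s3 / cloud-s3-glacier tier
    obj = restore_queue.get_nowait()
    object_state[obj] = "Restored"

restore_object("bucket/obj1")
restore_worker_step()
```

Because entries are stored persistently, a real worker can resume IN_PROGRESS restores after an RGW service restart, which an in-memory queue like this one cannot.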
Matt Benjamin [Thu, 11 Sep 2025 20:42:03 +0000 (16:42 -0400)]
rgw_cksum: return ChecksumAlgorithm and ChecksumType in ListParts
An uncompleted multipart upload's checksum algorithm and type can
be deduced from the upload object. Also the ChecksumType element
was being omitted in the completed case.
Fixes: https://tracker.ceph.com/issues/72998
Resolves: rhbz#
2324147
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
(cherry picked from commit
803d1dbdbcd31c46cda350cfe9dce7793762510e )
Shilpa Jagannath [Thu, 4 Sep 2025 20:58:23 +0000 (16:58 -0400)]
rgw/multisite: update api_name during a zonegroup rename
Signed-off-by: Shilpa Jagannath <smanjara@redhat.com>
(cherry picked from commit
212ac92168de3aaa3d2a73a224deef5b3246f1c9 )
Resolves: rhbz#
2366182
Jiffin Tony Thottan [Tue, 15 Apr 2025 06:22:26 +0000 (11:52 +0530)]
rgw/cloud-restore: admin CLI for restore list and status
Also added stats as part of radosgw-admin bucket stats command
Resolves: rhbz#
2345487
Signed-off-by: Jiffin Tony Thottan <thottanjiffin@gmail.com>
(cherry picked from commit
644402fbf18ba3fe2acc39afdf399a098548ea12 )
Soumya Koduri [Thu, 31 Jul 2025 19:19:44 +0000 (00:49 +0530)]
rgw/restore: Update expiry-date of restored copies
As per AWS spec (https://docs.aws.amazon.com/AmazonS3/latest/API/API_RestoreObject.html),
if a `restore-object` request is re-issued on an already restored copy, the server needs to
update the restoration period relative to the current time. These changes handle that.
Note: this applies only to temporarily restored copies
Resolves: rhbz#
2360695
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
(cherry picked from commit
9fa3433a99a3463b2f71040c4bd6d3341f779813 )
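The expiry renewal described above can be sketched as follows — a hedged illustration; the restoration period is a caller-supplied value here, not the actual RGW configuration:

```python
from datetime import datetime, timedelta, timezone

def renew_expiry(restore_days, now=None):
    # per the AWS RestoreObject spec, re-issuing restore-object on an
    # already restored temporary copy recomputes the expiry relative
    # to the current time, not the original restore time
    now = now or datetime.now(timezone.utc)
    return now + timedelta(days=restore_days)

t0 = datetime(2025, 1, 1, tzinfo=timezone.utc)
expiry = renew_expiry(7, now=t0)
```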
Marcus Watts [Sun, 7 Sep 2025 07:42:06 +0000 (03:42 -0400)]
copy object encryption fixes - complete multipart upload attributes
complete multipart upload should return encryption attributes in its results.
XXX fixup merge w/ copy object encryption fixes
Fixes: https://tracker.ceph.com/issues/23264
Signed-off-by: Marcus Watts <mwatts@redhat.com>
(cherry picked from commit
656214697d323638377dfb9375219a145efa7933 )
Marcus Watts [Sat, 6 Sep 2025 22:45:36 +0000 (18:45 -0400)]
copy object encryption fixes - copy object result attributes
Copy object should return encryption attributes in its results.
XXX fixup merge w/ copy object encryption fixes
Fixes: https://tracker.ceph.com/issues/23264
Signed-off-by: Marcus Watts <mwatts@redhat.com>
(cherry picked from commit
de5e988b9e9ab34472b3bcb343caa4c472ba5b7c )
Jiffin Tony Thottan [Tue, 24 Jun 2025 06:19:41 +0000 (11:49 +0530)]
cloud restore: add None type for cloud-s3-glacier
AWS supports various Glacier conf options, such as Standard and Expedited,
to restore an object within a time period. These options may not be supported by
other S3 servers, so the NoTier option is introduced so that other vendors can be supported.
Resolves: rhbz#
2365095
Signed-off-by: Jiffin Tony Thottan <thottanjiffin@gmail.com>
(cherry picked from commit
a6e199398e6886806037467ae16bdef55f77b6c8 )
Adam C. Emerson [Mon, 8 Sep 2025 22:38:36 +0000 (18:38 -0400)]
rgw: Fix LMDB finding and test building
Resolves: rhbz#
2036531
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
(cherry picked from commit
aa37b8e2746091ede0cfaf50024bcabf36bbad99 )
Harsimran Singh [Mon, 8 Sep 2025 13:38:49 +0000 (19:08 +0530)]
Fixing CephContext fwd declaration issue and headers issue
Resolves: rhbz#
2036531
Signed-off-by: Harsimran Singh <hsthukral51@gmail.com>
(cherry picked from commit
db163ce9a496fbfa3926ddc38a183563c7e4c3fc )
Adam C. Emerson [Thu, 4 Sep 2025 19:09:46 +0000 (15:09 -0400)]
rgw/usage: Fix CephContext forward declaration
Use `common_fwd.h` to avoid clashes with Crimson namespace hackery.
Resolves: rhbz#
2036531
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
(cherry picked from commit
65cf34258a8dd02295239ded4d9adfb062e12dc6 )
Harsimran Singh [Wed, 3 Sep 2025 14:09:59 +0000 (19:39 +0530)]
Addressing Review Comments about Object Count and extra Placeholders
Resolves: rhbz#
2036531
Signed-off-by: Harsimran Singh <hsthukral51@gmail.com>
(cherry picked from commit
1fa2992815f547b8013bc6c32162b9a7b04a0835 )
Harsimran Singh [Tue, 2 Sep 2025 14:24:57 +0000 (19:54 +0530)]
rgw/usage: Quota tracking integration and testing
This squashes:
- Quota Tracking Changes
- Fixing issues in integration and Testing
Resolves: rhbz#
2036531
Signed-off-by: Harsimran Singh <hsthukral51@gmail.com>
(cherry picked from commit
b588fd05c7d82b52fc8fa3742976a9a45c3755b4 )
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
Jane Zhu [Wed, 20 Aug 2025 18:38:23 +0000 (18:38 +0000)]
rgw: discard olh_ attributes when copying object from a versioning-suspended bucket to a versioning-disabled bucket
Resolves: rhbz#
2390658
Signed-off-by: Jane Zhu <jzhu116@bloomberg.net>
(cherry picked from commit
3fed58f43c3cb3977130926a2d1bca551deefade )
Matt Benjamin [Mon, 8 Sep 2025 20:26:26 +0000 (16:26 -0400)]
rgw: fix policy enforcement for GetObjectAttributes
Per https://docs.aws.amazon.com/cli/latest/reference/s3api/get-object-attributes.html:
"If the bucket is not versioned, you need the s3:GetObject and s3:GetObjectAttributes permissions."
Fixes: https://tracker.ceph.com/issues/72915
Resolves: rhbz#
2313820
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
(cherry picked from commit
16ab79dacbf7d8e94e70d28192c945cd79c5934c )
Casey Bodley [Wed, 3 Sep 2025 13:27:18 +0000 (09:27 -0400)]
rgw/admin: allow listing account's root users
`radosgw-admin user list`, when given `--account-id` or
`--account-name`, lists only the users from that account
add support for the `--account-root` option to list only that account's
root users
Fixes: https://tracker.ceph.com/issues/72847
Resolves: rhbz#
2360695
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit
772fbbbafcdd1d26ff95ef005211f2200b724741 )
Ali Masarwa [Thu, 24 Aug 2023 15:40:22 +0000 (18:40 +0300)]
RGW: When using Keystone auth for RGW, include the Keystone user in ops log
Resolves: rhbz#
1769182
Signed-off-by: Ali Masarwa <ali.saed.masarwa@gmail.com>
Signed-off-by: Ali Masarwa <amasarwa@redhat.com>
(cherry picked from commit
47166556c5bbcf1f26621bf24cf04221b65af366 )
Oguzhan Ozmen [Thu, 31 Jul 2025 22:15:24 +0000 (22:15 +0000)]
RGW: multi object delete op; skip olh update for all deletes but the last one
Fixes: https://tracker.ceph.com/issues/72375
Resolves: rhbz#
2387764
Signed-off-by: Oguzhan Ozmen <oozmen@bloomberg.net>
(cherry picked from commit
9bb170104446bfea0ad87b34244f3a3d47962fcc )
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
Mark Kogan [Wed, 30 Jul 2025 12:54:19 +0000 (12:54 +0000)]
rgw: add rate limit for LIST & DELETE ops
Add rate limiting specific to LIST ops,
similar to the current rate-limiting
(https://docs.ceph.com/en/latest/radosgw/admin/#rate-limit-management)
Example usage:
```
./bin/radosgw-admin ratelimit set --ratelimit-scope=user --uid=<UID> --max_list_ops=2
./bin/radosgw-admin ratelimit set --ratelimit-scope=user --uid=<UID> --max_delete_ops=2
./bin/radosgw-admin ratelimit enable --ratelimit-scope=user --uid=<UID>
./bin/radosgw-admin ratelimit get --ratelimit-scope=user --uid=<UID>
{
"user_ratelimit": {
"max_read_ops": 0,
"max_write_ops": 0,
"max_list_ops": 2,
"max_delete_ops": 2,
"max_read_bytes": 0,
"max_write_bytes": 0,
"enabled": true
}
}
pkill -9 radosgw
./bin/radosgw -c ./ceph.conf ...
aws --endpoint-url 'http://0:8000' s3 mb s3://bkt
aws --endpoint-url 'http://0:8000' s3 cp ./ceph.conf s3://bkt
aws --endpoint-url http://0:8000 s3api list-objects-v2 --bucket bkt --prefix 'ceph.conf' --delimiter '/'
{
"Contents": [
{
"Key": "ceph.conf",
"LastModified": "2025-07-30T13:59:38+00:00",
"ETag": "\"
13d11d431ae290134562c019d9e40c0e \"",
"Size": 32346,
"StorageClass": "STANDARD"
}
],
"RequestCharged": null
}
aws --endpoint-url http://0:8000 s3api list-objects-v2 --bucket bkt --prefix 'ceph.conf' --delimiter '/'
{
"Contents": [
{
"Key": "ceph.conf",
"LastModified": "2025-07-30T13:59:38+00:00",
"ETag": "\"
13d11d431ae290134562c019d9e40c0e \"",
"Size": 32346,
"StorageClass": "STANDARD"
}
],
"RequestCharged": null
}
aws --endpoint-url http://0:8000 s3api list-objects-v2 --bucket bkt --prefix 'ceph.conf' --delimiter '/'
argument of type 'NoneType' is not iterable
tail -F ./out/radosgw.8000.log | grep beast
...
beast: 0x7fffbbe09780: [30/Jul/2025:15:44:50.359 +0000] " GET /bkt?list-type=2&delimiter=%2F&prefix=ceph.conf&encoding-type=url HTTP/1.1" 200 535 - "aws-cli/2.15.31 Python/3.9.21 Linux/5.14.0-570.28.1.el9_6.x86_64 source/x86_64.rhel.9 prompt/off command/s3api.list-objects-v2" - latency=0.000999995s
beast: 0x7fffbbe09780: [30/Jul/2025:15:44:53.904 +0000] " GET /bkt?list-type=2&delimiter=%2F&prefix=ceph.conf&encoding-type=url HTTP/1.1" 200 535 - "aws-cli/2.15.31 Python/3.9.21 Linux/5.14.0-570.28.1.el9_6.x86_64 source/x86_64.rhel.9 prompt/off command/s3api.list-objects-v2" - latency=0.000999995s
vvv
beast: 0x7fffbbe09780: [30/Jul/2025:15:44:58.192 +0000] " GET /bkt?list-type=2&delimiter=%2F&prefix=ceph.conf&encoding-type=url HTTP/1.1" 503 228 - "aws-cli/2.15.31 Python/3.9.21 Linux/5.14.0-570.28.1.el9_6.x86_64 source/x86_64.rhel.9 prompt/off command/s3api.list-objects-v2" - latency=0.000000000s
beast: 0x7fffbbe09780: [30/Jul/2025:15:44:58.798 +0000] " GET /bkt?list-type=2&delimiter=%2F&prefix=ceph.conf&encoding-type=url HTTP/1.1" 503 228 - "aws-cli/2.15.31 Python/3.9.21 Linux/5.14.0-570.28.1.el9_6.x86_64 source/x86_64.rhel.9 prompt/off command/s3api.list-objects-v2" - latency=0.000999994s
beast: 0x7fffbbe09780: [30/Jul/2025:15:44:59.807 +0000] " GET /bkt?list-type=2&delimiter=%2F&prefix=ceph.conf&encoding-type=url HTTP/1.1" 503 228 - "aws-cli/2.15.31 Python/3.9.21 Linux/5.14.0-570.28.1.el9_6.x86_64 source/x86_64.rhel.9 prompt/off command/s3api.list-objects-v2" - latency=0.000000000s
s3cmd put ./ceph.conf s3://bkt/1
s3cmd put ./ceph.conf s3://bkt/2
s3cmd put ./ceph.conf s3://bkt/3
s3cmd rm s3://bkt/1
s3cmd rm s3://bkt/2
s3cmd rm s3://bkt/3
delete: 's3://bkt/1'
delete: 's3://bkt/2'
WARNING: Retrying failed request: /3 (503 (SlowDown))
WARNING: Waiting 3 sec...
WARNING: Retrying failed request: /3 (503 (SlowDown))
^^^
```
Signed-off-by: Mark Kogan <mkogan@ibm.com>
Update PendingReleaseNotes
Co-authored-by: Yuval Lifshitz <yuvalif@yahoo.com>
Signed-off-by: Mark Kogan <31659604+mkogan1@users.noreply.github.com>
Update PendingReleaseNotes
Resolves: rhbz#
2391529
Co-authored-by: Yuval Lifshitz <yuvalif@yahoo.com>
Signed-off-by: Mark Kogan <31659604+mkogan1@users.noreply.github.com>
(cherry picked from commit
965eda7a45b12c9ccd78f230076002043f7df65c )
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
Marcus Watts [Sat, 22 Jun 2024 02:02:00 +0000 (22:02 -0400)]
rgw: trivial cleanup from former fix attribute handling for swift bucket post and put
Trivial "free" cleanup: this commit removes an unused variable "battrs".
This is a remnant of a much larger patch that now has a different
fix upstream.
Signed-off-by: Marcus Watts <mwatts@redhat.com>
Conflicts:
src/rgw/rgw_op.cc
(cherry picked from commit
340d10bf63c8ae53021dd26c7ea7fbd35db5d4b8 )
Marcus Watts [Tue, 25 Feb 2025 22:00:06 +0000 (17:00 -0500)]
copy object encryption fixes - fixups
Minor fixup on byte ranges.
Other updates to match Ceph main.
Fixes: https://tracker.ceph.com/issues/23264
Signed-off-by: Marcus Watts <mwatts@redhat.com>
(cherry picked from commit
2292920e188987f37b848cfa1789c02d31173b39 )
Soumya Koduri [Tue, 29 Oct 2024 08:44:11 +0000 (14:14 +0530)]
rgw/copy-object: Fix overflow with bufferlist copy
This fixes the issue with bufferlist copy overflow in the `copy-object`
Op path.
Resolves: rhbz#
2321269
Reviewed-by: Marcus Watts <mwatts@redhat.com>
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
(cherry picked from commit
95ac4e63be73790474c03d3cd314fec7983f12e9 )