Marcus Watts [Wed, 28 Aug 2024 21:21:13 +0000 (17:21 -0400)]
rgw/storage class. Don't inherit storage class for copy object.
When an object is copied, the storage class should be determined
solely by data in the request; if none is specified, it should
default to 'STANDARD'. In radosgw, this means the storage class is
another attribute (similar to encryption) that must not be merged
from the source object.
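The intended behavior can be sketched as follows. This is an illustrative Python model of the attribute-merge rule, not the radosgw C++ code; the attribute names and `copy_attrs` helper are hypothetical:

```python
# Sketch of the copy-object rule described above, using a plain dict
# of attributes. Names are illustrative, not the radosgw API.
EXCLUDED_FROM_MERGE = {"x-amz-storage-class", "x-amz-server-side-encryption"}

def copy_attrs(src_attrs, request_attrs):
    """Merge source attributes into the copy, but never inherit the
    storage class (or encryption); take it from the request instead."""
    attrs = {k: v for k, v in src_attrs.items() if k not in EXCLUDED_FROM_MERGE}
    attrs.update(request_attrs)
    # Storage class comes only from the request, defaulting to STANDARD.
    attrs.setdefault("x-amz-storage-class", "STANDARD")
    return attrs
```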
Fixes: https://tracker.ceph.com/issues/67787
Signed-off-by: Marcus Watts <mwatts@redhat.com>
Marcus Watts [Wed, 28 Aug 2024 15:42:05 +0000 (11:42 -0400)]
rgw/storage class: don't store/report STANDARD storage class.
While 'STANDARD' is a valid storage class, it should never be
returned when fetching an object. This change suppresses storing
'STANDARD' as the attribute value, so that objects explicitly
created with 'STANDARD' are indistinguishable from those where it
was implicitly set.
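The suppression rule can be modeled in a few lines. This is a hedged sketch with a hypothetical `store_storage_class` helper and attribute name, not the actual radosgw code:

```python
# Illustrative sketch: never persist the default storage class, so an
# explicit STANDARD and an implicit one read back identically.
DEFAULT_STORAGE_CLASS = "STANDARD"

def store_storage_class(attrs, storage_class):
    """Only record a storage-class attribute when it is non-default."""
    if storage_class and storage_class != DEFAULT_STORAGE_CLASS:
        attrs["x-amz-storage-class"] = storage_class
    return attrs
```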
Fixes: https://tracker.ceph.com/issues/67786
Signed-off-by: Marcus Watts <mwatts@redhat.com>
Marcus Watts [Sat, 25 May 2024 03:45:14 +0000 (23:45 -0400)]
Fix lifecycle transition of encrypted multipart objects.
Lifecycle transition can copy objects to a different storage tier.
When this happens, the object is repacked and the original
manifest is invalidated. It is necessary to store a special
"parts_len" attribute to fix this. PutObj already had code to
handle this, but it was only used for multisite replication and is
not reached from the lifecycle transition code. This fix adds
similar logic to the lifecycle transition code path so the same
thing happens there.
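The idea can be sketched as follows. This is a simplified, hypothetical model of what the transition path records; the helper name and attribute encoding are illustrative, not radosgw's actual representation:

```python
# Hedged sketch of the transition-time fix: when a multipart object is
# repacked, record the original part lengths so decryption can still
# find part boundaries after the manifest has changed.
def transition_attrs(part_sizes):
    """part_sizes: list of part sizes of the source multipart object.
    Returns the extra attrs to set on the repacked copy."""
    attrs = {}
    if len(part_sizes) > 1:  # multipart: preserve the part layout
        attrs["parts_len"] = ",".join(str(p) for p in part_sizes)
    return attrs
```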
Fixes: https://tracker.ceph.com/issues/23264
Signed-off-by: Marcus Watts <mwatts@redhat.com>
Marcus Watts [Fri, 14 Apr 2023 09:19:59 +0000 (05:19 -0400)]
copy object encryption fixes
This contains code to allow CopyObject to copy encrypted objects.
It adds data paths to carry information from the REST layer down
to the SAL layer for decrypting objects, including logic to reuse
the get and put filter chains that handle encryption and
compression.
There are several hacks to deal with quirks of the filter chains.
The "get" path has to propagate flushes around the chain, because
a flush is not guaranteed to propagate through it. Also, the "get"
and "put" chains have conflicting uses of the buffer list logic,
so the buffer list has to be copied to keep them from stepping on
each other's toes.
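The two quirks can be illustrated with a toy filter chain. This is a simplified Python model, not the radosgw filter classes; the `Filter` class and its methods are hypothetical stand-ins:

```python
# Toy model of the get-path filter chain described above: each filter
# forwards flush() explicitly (it is not guaranteed to propagate on
# its own), and buffers are copied before being handed downstream so
# the two chains cannot mutate each other's state.
class Filter:
    def __init__(self, next_filter=None):
        self.next = next_filter
        self.pending = []

    def process(self, buf):
        # Keep our own copy of the buffer; pass a separate copy along
        # so downstream filters can consume or splice it freely.
        self.pending.append(bytes(buf))
        if self.next:
            self.next.process(bytes(buf))

    def flush(self):
        # Explicitly propagate the flush around the chain.
        out = b"".join(self.pending)
        self.pending.clear()
        if self.next:
            self.next.flush()
        return out
```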
Fixes: https://tracker.ceph.com/issues/23264
Signed-off-by: Marcus Watts <mwatts@redhat.com>
Marcus Watts [Tue, 16 Jul 2024 21:16:10 +0000 (17:16 -0400)]
rgw/compression antibug check
If another bug tells the compression filter to decompress more
data than is actually present, an "end_of_buffer" exception is
thrown. The exception unwinds the stack, including a completion
that is still pending. The resulting core dump points at a failure
in that completion rather than at the end-of-buffer exception,
which is misleading and not useful.
With this change, radosgw does not abort; instead it logs a
somewhat useful message before returning an "unknown" error to the
client.
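The pattern is to catch the exception at the filter boundary instead of letting it unwind. A minimal sketch, assuming a hypothetical `decompress_chunk` wrapper and using `EOFError` as a stand-in for the end-of-buffer exception:

```python
# Sketch of the antibug check: trap the decompressor's end-of-buffer
# exception at the boundary, log it, and return an error code rather
# than letting the exception unwind past a pending completion.
import logging

ERR_UNKNOWN = -1

def decompress_chunk(decompress, data, log=logging.getLogger("rgw")):
    try:
        return 0, decompress(data)
    except EOFError as e:  # stand-in for the end_of_buffer exception
        log.error("compression filter asked to read past end of buffer: %s", e)
        return ERR_UNKNOWN, b""
```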
Fixes: https://tracker.ceph.com/issues/23264
Signed-off-by: Marcus Watts <mwatts@redhat.com>
Patrick Donnelly [Tue, 23 Sep 2025 12:02:22 +0000 (08:02 -0400)]
Merge PR #65555 into main
* refs/pull/65555/head:
script/redmine-upkeep: include each issue changed in summary
.github/workflows/redmine-upkeep: use smaller limit for testing push
mgr/dashboard: Local storage class creation via dashboard doesn't handle creation of pool.
Fixes: https://tracker.ceph.com/issues/72569
Signed-off-by: Dnyaneshwari <dtalweka@redhat.com>
mgr/dashboard: handle creation of new pool
Commit includes:
1) Provided a link to create a new pool
2) Refactored validation on ACL mapping; removed the required validator as the default
3) Fixed a runtime console error caused by the ACL length, which prevented the details section from opening
4) Used rxjs operators to make API calls and to make the form ready once all data is available, fixing the form patch issues
5) Refactored parts of the code to improve performance
6) Added zone and pool information to the details section for the local storage class
Fixes: https://tracker.ceph.com/issues/72569
Signed-off-by: Naman Munet <naman.munet@ibm.com>
Adam King [Mon, 22 Sep 2025 21:05:07 +0000 (17:05 -0400)]
pybind/mgr: pin cheroot version in requirements-required.txt
With Python 3.10 (it didn't seem to happen with Python 3.12), the
pybind/mgr/cephadm/tests/test_node_proxy.py test times out. This
appears to be related to a new release of the cheroot package; a
GitHub issue describing the same problem we're seeing has been
opened by another user:
https://github.com/cherrypy/cheroot/issues/769
It is worth noting that the workaround described in that issue
also works for us. If you add
rgw/restore: Mark the restore entry status as `None` first time
While adding the restore entry to the FIFO, mark its status as
`None` so that the restore thread knows the entry is being
processed for the first time. In case the restore is still in
progress and the entry needs to be re-added to the queue, its
status will then be marked `InProgress`.
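The state distinction can be sketched as follows. This is an illustrative Python model; the `add_restore_entry` helper and entry layout are hypothetical, not the radosgw restore code:

```python
# Toy model of the queueing rule above: a first-time entry carries
# status None; a re-added entry (restore still running) is marked
# "InProgress" so the worker can distinguish the two cases.
def add_restore_entry(fifo, obj, in_progress=False):
    entry = {"obj": obj, "status": "InProgress" if in_progress else None}
    fifo.append(entry)
    return entry
```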
Soumya Koduri [Sun, 10 Aug 2025 12:13:11 +0000 (17:43 +0530)]
rgw/restore: Persistently store the restore state for cloud-s3 tier
In order to resume IN_PROGRESS restore operations post RGW service
restarts, store the entries of the objects being restored from `cloud-s3`
tier persistently. This is already being done for `cloud-s3-glacier`
tier and now the same will be applied to `cloud-s3` tier too.
With this change, when `restore-object` is performed on any object,
it will be marked RESTORE_ALREADY_IN_PROGRESS and added to a restore FIFO queue.
This queue is later processed by Restore worker thread which will try to
fetch the objects from Cloud or Glacier/Tape S3 services. Hence all the
restore operations are now handled asynchronously (for both `cloud-s3`,
`cloud-s3-glacier` tiers).
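The asynchronous flow described above can be sketched in miniature. Function names and the state strings' handling here are illustrative assumptions, not the radosgw API; only the RESTORE_ALREADY_IN_PROGRESS marker and the FIFO/worker split come from the message:

```python
# Minimal sketch of the async restore flow: restore-object marks the
# object in progress and enqueues it; a worker thread later drains the
# (persisted, in the real code) FIFO and fetches from the cloud tier.
from collections import deque

def restore_object(obj, states, fifo):
    if states.get(obj) == "RESTORE_ALREADY_IN_PROGRESS":
        return states[obj]            # already queued; nothing to do
    states[obj] = "RESTORE_ALREADY_IN_PROGRESS"
    fifo.append(obj)
    return states[obj]

def restore_worker(states, fifo, fetch):
    while fifo:
        obj = fifo.popleft()
        fetch(obj)                    # pull from cloud/glacier/tape S3
        states[obj] = "RESTORED"
```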
Nizamudeen A [Thu, 17 Jul 2025 11:13:08 +0000 (16:43 +0530)]
mgr/dashboard: support inline edit for datatable
Adds a new cell transformation that turns the cell into a form
control when you click the edit button inside the cell.
Added a new CellTemplate `editing`; if set on a cell, it adds the
edit button. You can also add validators to the control via the
`customTemplateConfig`, like
```
customTemplateConfig: {
  validators: [Validators.required]
}
```
Also uses an `EditState` to keep track of the different cells that
can be edited simultaneously.
Fixes: https://tracker.ceph.com/issues/72171
Signed-off-by: Nizamudeen A <nia@redhat.com>
mgr/dashboard: fix zone update API forcing STANDARD storage class
The zone update REST API (`edit_zone`) always attempted to configure a
placement target for the `STANDARD` storage class, even when the request
was intended for a different storage class name.
This caused failures in deployments where `STANDARD` is not defined.
Changes:
Merge the add-placement-target and add-storage-class methods into
a single add_placement_targets_storage_class_zone method, which
takes the storage class as a parameter alongside the rest of the
placement parameters.
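The shape of the combined helper can be sketched as follows. The signature and the zone/placement dict layout are illustrative assumptions; only the method name comes from the commit message:

```python
# Hedged sketch of the merged helper: one call adds (or extends) a
# placement target and attaches the *requested* storage class, instead
# of always writing to a hard-coded STANDARD entry.
def add_placement_targets_storage_class_zone(zone, placement_id,
                                             storage_class, data_pool):
    target = zone.setdefault(placement_id, {"storage_classes": {}})
    # Use the storage class from the request, never an implicit STANDARD.
    target["storage_classes"][storage_class] = {"data_pool": data_pool}
    return zone
```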
cephfs-journal-tool: Don't reset the journal trim position
If the fs had to go through journal recovery and reset,
cephfs-journal-tool resets the journal trim position, because of
which the old unused journal objects just stay forever in the
metadata pool. This patch fixes the issue: the old stale journal
objects are now trimmed during the regular trimming cycle, helping
to recover space in the metadata pool.
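The fix can be modeled in a few lines. This is a speculative sketch with hypothetical field names, not the cephfs-journal-tool code:

```python
# Toy model of the fix: a journal reset re-points the read/write
# positions at a fresh offset but must leave trim_pos untouched, so
# the regular trimmer still reclaims the old journal objects.
def reset_journal(header, new_start):
    header["write_pos"] = new_start
    header["read_pos"] = new_start
    # The bug was also setting header["trim_pos"] = new_start, which
    # orphaned the old journal objects in the metadata pool forever.
    return header
```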
Validates that a cephfs-journal-tool reset does not reset the trim
position, so that journal trimming takes care of removing the
older unused journal objects and recovering space in the metadata
pool.
1. Fixes the PromQL expression used to calculate "In" OSDs in
ceph-cluster-advanced.json.
2. Fixes the color coding for the single-stat panels used in the
OSDs Grafana panel, like "In", "Out", etc.