Marcus Watts [Wed, 28 Aug 2024 21:21:13 +0000 (17:21 -0400)]
rgw/storage class. Don't inherit storage class for copy object.
When an object is copied, the storage class should be determined
only from data in the request; if none is specified, it should
default to 'STANDARD'. In radosgw, this means that this is another
attribute (similar to encryption) that must not be merged from the
source object.
Fixes: https://tracker.ceph.com/issues/67787
Signed-off-by: Marcus Watts <mwatts@redhat.com>
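A minimal sketch of the intended behavior (hypothetical helper and header names, not radosgw's actual API): the copy path takes the storage class only from the request, defaulting to 'STANDARD', and deliberately ignores the source object's attributes.

```python
def storage_class_for_copy(request_headers, source_attrs):
    """Determine the storage class for a copied object.

    Only the request may set it; the source object's storage class
    (like its encryption attributes) is deliberately not inherited.
    """
    sc = request_headers.get("x-amz-storage-class")
    return sc if sc else "STANDARD"

# The source's class is not merged in:
src = {"storage_class": "COLD"}
assert storage_class_for_copy({}, src) == "STANDARD"
assert storage_class_for_copy({"x-amz-storage-class": "COLD"}, src) == "COLD"
```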
Marcus Watts [Wed, 28 Aug 2024 15:42:05 +0000 (11:42 -0400)]
rgw/storage class: don't store/report STANDARD storage class.
While 'STANDARD' is a valid storage class, it should never be
returned when fetching an object. This change suppresses storing
'STANDARD' as the attribute value, so that objects explicitly
created with 'STANDARD' are indistinguishable from those where it
was set implicitly.
Fixes: https://tracker.ceph.com/issues/67786
Signed-off-by: Marcus Watts <mwatts@redhat.com>
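A sketch of the normalization this describes (hypothetical function names, not the radosgw code): 'STANDARD' is the implicit default, so it is never persisted as an attribute, and a missing attribute reads back as 'STANDARD'.

```python
def normalize_storage_class_attr(storage_class):
    """Return the attribute value to persist, or None to store nothing.

    'STANDARD' is never written, so objects explicitly created with it
    look identical to objects where the class was never set.
    """
    if storage_class in (None, "", "STANDARD"):
        return None
    return storage_class

def effective_storage_class(stored_attr):
    """On read, a missing attribute means 'STANDARD'."""
    return stored_attr if stored_attr else "STANDARD"

assert normalize_storage_class_attr("STANDARD") is None
assert normalize_storage_class_attr("COLD") == "COLD"
assert effective_storage_class(None) == "STANDARD"
```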
Marcus Watts [Sat, 25 May 2024 03:45:14 +0000 (23:45 -0400)]
Fix lifecycle transition of encrypted multipart objects.
Lifecycle transition can copy objects to a different storage tier.
When this happens, the object is repacked and the original
manifest is invalidated. It is necessary to store a special
"parts_len" attribute to fix this. There was code in PutObj
to handle this, but it was only used for multisite replication,
not by the lifecycle transition code. This fix adds similar logic
to the lifecycle transition code path to make the same thing happen.
Fixes: https://tracker.ceph.com/issues/23264
Signed-off-by: Marcus Watts <mwatts@redhat.com>
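A rough illustration of why part lengths must be preserved (hypothetical helpers, assuming encryption is applied per part so decryption must find the original part boundaries even after the object is repacked under a new manifest):

```python
def record_parts_len(part_sizes):
    """Before repacking a multipart object, remember the original part
    lengths (a stand-in for the "parts_len" attribute)."""
    return list(part_sizes)

def part_offsets(parts_len):
    """Recover the starting offset of each original part, so per-part
    decryption still knows where each part began."""
    offsets, pos = [], 0
    for n in parts_len:
        offsets.append(pos)
        pos += n
    return offsets

assert part_offsets(record_parts_len([5, 3, 4])) == [0, 5, 8]
```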
Marcus Watts [Fri, 14 Apr 2023 09:19:59 +0000 (05:19 -0400)]
copy object encryption fixes
This contains code to allow CopyObject to copy encrypted objects.
It includes additional data paths to communicate data from the
REST layer down to the SAL layer to handle decrypting
objects. The data paths include logic to use filter chains
from get and put that process encryption and compression.
There are several hacks to deal with quirks of the filter chains.
The "get" path has to propagate flushes around the chain,
because a flush isn't guaranteed to propagate through it.
Also the "get" and "put" chains have conflicting uses of the
buffer list logic, so the buffer list has to be copied so that
they don't step on each other's toes.
Fixes: https://tracker.ceph.com/issues/23264
Signed-off-by: Marcus Watts <mwatts@redhat.com>
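A toy sketch of the buffer-copy workaround (hypothetical, not the rgw filter API): each stage in a chain gets its own copy of the buffer, so the "get" and "put" chains cannot mutate each other's data.

```python
def run_chain(filters, data):
    """Apply a chain of filters in order. Each filter receives its own
    copy of the buffer, mimicking the fix where the buffer list is
    copied so the two chains don't step on each other's toes."""
    for f in filters:
        data = f(bytes(data))  # copy before handing the buffer off
    return data

# e.g. a "decrypt" stage followed by a "decompress" stage:
assert run_chain([lambda b: b.upper(), lambda b: b[:3]], b"abcdef") == b"ABC"
```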
Marcus Watts [Tue, 16 Jul 2024 21:16:10 +0000 (17:16 -0400)]
rgw/compression antibug check
If another bug tells the compression filter to decompress more
data than is actually present, an "end_of_buffer" exception is
thrown. The thrown exception unwinds the stack, including a
completion that is still pending. The resulting core dump
indicates a failure with that completion rather than the
end-of-buffer exception, which is misleading and not useful.
With this change, radosgw does not abort, and instead logs
a somewhat useful message before returning an "unknown" error
to the client.
Fixes: https://tracker.ceph.com/issues/23264
Signed-off-by: Marcus Watts <mwatts@redhat.com>
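The shape of the change, as a hedged sketch (hypothetical names; IndexError stands in for the C++ end_of_buffer exception): catch the exception at the filter boundary, log something useful, and return an error instead of letting the unwind crash an unrelated pending completion.

```python
def safe_decompress(decompress, buf):
    """Instead of letting an out-of-range decompress unwind the stack
    (and dump core in an unrelated pending completion), catch the
    exception, log a somewhat useful message, and return an error."""
    try:
        return 0, decompress(buf)
    except IndexError as e:  # stand-in for the end_of_buffer exception
        print(f"decompression failed: {e}; returning unknown error to client")
        return -1, b""

def bad(buf):
    raise IndexError("end_of_buffer")

rc, out = safe_decompress(bad, b"x")
assert rc == -1 and out == b""
```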
Nizamudeen A [Thu, 16 Oct 2025 05:35:32 +0000 (11:05 +0530)]
mgr/dashboard: fix generic form submit validator for inline edit
currently the validation error is being applied generically to the
parent formgroup, which sets the whole form into an error state when
one of the inline edits fails validation. So just changing that to
set the error on the single control.
Fixes: https://tracker.ceph.com/issues/73558
Signed-off-by: Nizamudeen A <nia@redhat.com>
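A language-agnostic sketch of the idea (the real fix is in the dashboard's Angular form code; names here are hypothetical): mark only the failing inline-edit control as invalid, leaving its siblings, and therefore the parent group, untouched.

```python
def set_inline_error(controls, name, error):
    """Attach the validation error to the one failing control instead
    of the whole parent group."""
    controls[name]["errors"] = {error: True}

controls = {"name": {"errors": None}, "size": {"errors": None}}
set_inline_error(controls, "name", "duplicate")
assert controls["name"]["errors"] == {"duplicate": True}
assert controls["size"]["errors"] is None  # rest of the form unaffected
```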
Casey Bodley [Wed, 15 Oct 2025 21:08:48 +0000 (17:08 -0400)]
cmake: BuildArrow.cmake uses bundled thrift if system version < 0.17
the bump to arrow 17.0.0 broke the ubuntu jammy builds with:
In file included from /usr/include/thrift/transport/TTransport.h:25,
from /usr/include/thrift/protocol/TProtocol.h:28,
from /usr/include/thrift/TBase.h:24,
from /build/ceph-20.3.0-3599-g3d863d32/src/arrow/cpp/src/generated/parquet_types.h:14,
from /build/ceph-20.3.0-3599-g3d863d32/src/arrow/cpp/src/generated/parquet_constants.h:10,
from /build/ceph-20.3.0-3599-g3d863d32/src/arrow/cpp/src/generated/parquet_constants.cpp:7:
/usr/include/thrift/transport/TTransportException.h:23:10: fatal error: boost/numeric/conversion/cast.hpp: No such file or directory
23 | #include <boost/numeric/conversion/cast.hpp>
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
when comparing the gcc command line with arrow-15.0.0, the following argument
is no longer present:
> -isystem /build/ceph-20.3.0-3402-gb2db4947/obj-x86_64-linux-gnu/boost/include
arrow 17.0.0 seems to assume that thrift doesn't depend on boost anymore. a
comment in https://github.com/apache/arrow/issues/32266 claims that
> we don't need Boost with system Thrift 0.17.0 or later
but our jammy builds are stuck with libthrift-0.16.0. to reenable jammy builds,
instruct Arrow's cmake to use its bundled thrift dependency if our system thrift
version is < 0.17.0
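The version gate can be sketched like this (hypothetical helper; the real change is in BuildArrow.cmake, setting Arrow's Thrift_SOURCE):

```python
def thrift_source(system_version):
    """Choose Arrow's Thrift_SOURCE: use the system thrift only when it
    is >= 0.17.0 (per the arrow issue, the first release that no longer
    needs Boost); otherwise fall back to Arrow's bundled copy."""
    parts = tuple(int(x) for x in system_version.split("."))
    return "SYSTEM" if parts >= (0, 17, 0) else "BUNDLED"

assert thrift_source("0.16.0") == "BUNDLED"  # ubuntu jammy's libthrift
assert thrift_source("0.17.0") == "SYSTEM"
```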
Kefu Chai [Wed, 15 Oct 2025 07:46:26 +0000 (15:46 +0800)]
debian/control: Add libxsimd-dev build dependency for vendored Arrow
In commit e8460cbd, we introduced the "pkg.ceph.arrow" build profile to
support building with system Arrow packages. However, neither Debian nor
Ubuntu currently ships Arrow packages.
Since WITH_RADOSGW_SELECT_PARQUET is always enabled in debian/rules,
Arrow support is required for all builds. When the pkg.ceph.arrow profile
is not selected, the build uses vendored Arrow. With the recent change to
use AUTO mode for xsimd detection, Arrow will attempt to find system xsimd
>= 9.0.1. Adding libxsimd-dev as a build dependency ensures it's available
for Arrow to detect and use, reducing build time on supported distributions.
On distributions with insufficient xsimd versions (< 9.0.1), Arrow will
automatically fall back to its bundled version.
Kefu Chai [Wed, 15 Oct 2025 07:46:22 +0000 (15:46 +0800)]
cmake/BuildArrow: Use AUTO mode for xsimd dependency detection
Arrow requires xsimd >= 9.0.1 according to arrow/cpp/thirdparty/versions.txt.
Previously, we unconditionally set -Dxsimd_SOURCE=BUNDLED, forcing the use
of Arrow's vendored xsimd regardless of system package availability.
This commit changes to -Dxsimd_SOURCE=AUTO, which allows Arrow's
resolve_dependency mechanism to automatically:
1. Try to find system xsimd package
2. Check if version >= 9.0.1
3. Use system version if found and sufficient
4. Fall back to bundled version otherwise
This reduces build time and dependencies on systems with sufficient xsimd,
while maintaining compatibility with older distributions.
Distribution availability:
- Ubuntu Noble (24.04): libxsimd-dev 12.1.1 (✓ will use system)
- Ubuntu Jammy (22.04): libxsimd-dev 7.6.0 (✗ will use bundled)
- Debian Trixie (13): libxsimd-dev 13.2.0 (✓ will use system)
- CentOS Stream 9: xsimd-devel 7.4.9 (✗ will use bundled)
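The resolution steps above can be sketched as (a hypothetical model of Arrow's resolve_dependency behavior, not its actual code):

```python
def resolve_xsimd(system_version):
    """Mimic Arrow's resolve_dependency in AUTO mode: use the system
    xsimd if present and >= 9.0.1, else fall back to the bundled copy."""
    minimum = (9, 0, 1)
    if system_version is None:          # no system package found
        return "bundled"
    v = tuple(int(x) for x in system_version.split("."))
    return "system" if v >= minimum else "bundled"

assert resolve_xsimd("12.1.1") == "system"   # Ubuntu Noble
assert resolve_xsimd("7.6.0") == "bundled"   # Ubuntu Jammy
assert resolve_xsimd(None) == "bundled"
```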
Kefu Chai [Tue, 14 Oct 2025 13:26:06 +0000 (21:26 +0800)]
cephadm/build: Fix _has_python_pip() function check
The _has_python_pip() function was incorrectly checking for the venv
module instead of pip, causing it to always return the wrong result.
This would prevent proper detection of whether pip is available during
the cephadm build process.
Fix by changing the module check from 'venv' to 'pip'.
Allow the user to control the content of the build image with a
high-level `--image-variant=` switch. Currently the supported values are
`default` (the same maximal image we have been generating) and
`packages` a slimmer image that avoids installing certain test-only
dependencies.
Signed-off-by: John Mulligan <jmulligan@redhat.com>
John Mulligan [Mon, 13 Oct 2025 20:23:10 +0000 (16:23 -0400)]
install-deps.sh: let FOR_MAKE_CHECK variable take precedence
Previously, the FOR_MAKE_CHECK variable could only enable installing
extra (test) dependencies when install-deps.sh was used, and it was
ignored if `tty -s` exited true. This change allows FOR_MAKE_CHECK to
take precedence over the tty check and to specify one of true, 1, or
yes to enable the extra "for make check" deps, or false, 0, or no to
explicitly disable them.
Based-on-work-by: Dan Mick <dan.mick@redhat.com>
Signed-off-by: John Mulligan <jmulligan@redhat.com>
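The precedence rule can be sketched as (a hypothetical Python rendering of the shell logic; the tty fallback is the legacy behavior):

```python
import os
import sys

def for_make_check(env=os.environ):
    """FOR_MAKE_CHECK takes precedence when set to a recognized value;
    only otherwise fall back to the old tty-based heuristic."""
    val = env.get("FOR_MAKE_CHECK", "").lower()
    if val in ("true", "1", "yes"):
        return True
    if val in ("false", "0", "no"):
        return False
    return not sys.stdin.isatty()  # legacy `tty -s` check (sketch)

assert for_make_check({"FOR_MAKE_CHECK": "yes"}) is True
assert for_make_check({"FOR_MAKE_CHECK": "0"}) is False
```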
Martin Koch [Tue, 30 Sep 2025 12:58:52 +0000 (14:58 +0200)]
doc/mgr/dashboard: add note that only RSA keys are supported for TLS
The dashboard module fails to start when configured with ECDSA/EC
private keys due to pyOpenSSL limitations ("key type unsupported").
Add a note to the SSL/TLS documentation advising users to use
RSA keys until ECDSA is supported.
Kefu Chai [Fri, 10 Oct 2025 03:04:50 +0000 (11:04 +0800)]
crimson/seastore: use DMA alignment for block size instead of stat
Before this fix, BlockSegmentManager used stat.block_size (typically 512
bytes from file_stat()) as the alignment requirement for writes. However,
Seastar's DMA engine requires alignment to disk_write_dma_alignment()
(typically 4096 bytes, the logical block size) for performant writes.
This mismatch caused failures in BlockSegmentManager::segment_write():
1. Crimson believed block_size was 512 bytes (from file_stat())
2. Crimson prepared 512-byte aligned buffers for writing
3. Seastar's internal::sanitize_iovecs() trimmed the unaligned portions
based on the actual 4096-byte DMA alignment requirement
4. This left an empty buffer (512 bytes trimmed from 512-byte buffer)
5. The write operation returned 0 bytes written
6. The assertion 'written != len' failed in do_writev()
The fix queries the actual DMA alignment requirement from Seastar via
file.disk_write_dma_alignment() and uses that as block_size throughout
the segment manager. This ensures all writes are properly aligned for
Seastar's DMA engine.
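The failure arithmetic above can be checked directly (values taken from the description; `align_up` is a hypothetical helper):

```python
def align_up(n, alignment):
    """Round n up to the next multiple of alignment."""
    return (n + alignment - 1) // alignment * alignment

stat_block_size = 512   # what file_stat() reported
dma_alignment = 4096    # disk_write_dma_alignment()
buf_len = 512           # a buffer aligned only to stat.block_size

# Trimming the unaligned portion of a 512-byte buffer against a
# 4096-byte DMA alignment leaves nothing: 0 bytes written.
trimmed = buf_len // dma_alignment * dma_alignment
assert trimmed == 0

# The fix: size/align buffers to the DMA alignment instead.
assert align_up(buf_len, dma_alignment) == 4096
```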