-* RGW: OpenSSL engine support deprecated in favor of provider support.
- - Removed `openssl_engine_opts` configuration option. OpenSSL engine configurations in string format are no longer supported.
- - Added `openssl_conf` configuration option for loading specified providers as default providers.
+* RGW: OpenSSL engine support is deprecated in favor of provider support.
+ - Removed the `openssl_engine_opts` configuration option. OpenSSL engine configuration in string format is no longer supported.
+ - Added the `openssl_conf` configuration option for loading specified providers as default providers.
Configuration file syntax follows the OpenSSL standard (see https://github.com/openssl/openssl/blob/master/doc/man5/config.pod).
- If the default provider is still required when using custom providers,
+ If the default provider is required when also using custom providers,
it must be explicitly loaded in the configuration file or code (see https://github.com/openssl/openssl/blob/master/README-PROVIDERS.md).
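  As an illustration only, a minimal `openssl_conf` file in the standard OpenSSL
  syntax might activate a custom provider alongside the default provider like
  this (the provider and section names are placeholders, not values RGW requires):

      # openssl.cnf -- the file referenced by the rgw `openssl_conf` option
      openssl_conf = openssl_init

      [openssl_init]
      providers = provider_sect

      [provider_sect]
      # keep the default provider loaded in addition to a custom one
      default = default_sect
      custom = custom_sect

      [default_sect]
      activate = 1

      [custom_sect]
      # module = /path/to/custom.so   (only if not installed in the module dir)
      activate = 1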
>=20.0.0
-* RADOS: lead Monitor and stretch mode status are now included in the `ceph status` output.
+* RADOS: The lead Monitor and stretch mode status are now displayed by `ceph status`.
Related Tracker: https://tracker.ceph.com/issues/70406
* RGW: The User Account feature introduced in Squid provides first-class support for
- IAM APIs and policy. Our preliminary STS support was instead based on tenants, and
+ IAM APIs and policy. Our preliminary STS support was based on tenants, and
exposed some IAM APIs to admins only. This tenant-level IAM functionality is now
deprecated in favor of accounts. While we'll continue to support the tenant feature
itself for namespace isolation, the following features will be removed no sooner
than the V release:
- * tenant-level IAM APIs like CreateRole, PutRolePolicy and PutUserPolicy,
- * use of tenant names instead of accounts in IAM policy documents,
- * interpretation of IAM policy without cross-account policy evaluation,
+ * Tenant-level IAM APIs including CreateRole, PutRolePolicy and PutUserPolicy,
+ * Use of tenant names instead of accounts in IAM policy documents,
+ * Interpretation of IAM policy without cross-account policy evaluation,
* S3 API support for cross-tenant names such as `Bucket='tenant:bucketname'`
* RGW: Lua scripts will not run against health checks.
* RGW: For compatibility with AWS S3, LastModified timestamps are now truncated
- to the second. Note that during upgrade, users may observe these timestamps
+ to the second. Note that during an upgrade, users may observe these timestamps
moving backwards as a result.
-* RGW: IAM policy evaluation now supports conditions ArnEquals and ArnLike, along
+* RGW: IAM policy evaluation now supports the conditions ArnEquals and ArnLike, along
with their Not and IfExists variants.
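  As an illustrative sketch (the account ID, bucket and condition key below are
  placeholders; which condition keys carry an ARN depends on the request being
  evaluated), a policy statement using one of the new operators follows standard
  AWS syntax:

      {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Principal": {"AWS": ["arn:aws:iam::123456789012:root"]},
          "Action": "s3:PutObject",
          "Resource": "arn:aws:s3:::example-bucket/*",
          "Condition": {
            "ArnLike": {"aws:SourceArn": "arn:aws:sns:*:123456789012:*"}
          }
        }]
      }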
-* RGW: Adding missing quotes to the ETag values returned by S3 CopyPart,
+* RGW: Add missing quotes to ETag values in S3 CopyPart,
PostObject and CompleteMultipartUpload responses.
-* RGW: Added support for S3 GetObjectAttributes.
-* RGW: Added BEAST frontend option 'so_reuseport' which facilitates running multiple
- RGW instances on the same host by sharing a single TCP port.
-
+* RGW: Add support for S3 GetObjectAttributes.
+* RGW: Add the Beast frontend option 'so_reuseport', which facilitates running multiple
+ RGW instances on the same host that share a single TCP port.
* RBD: All Python APIs that produce timestamps now return "aware" `datetime`
objects instead of "naive" ones (i.e. those including time zone information
  rather than excluding it). All timestamps remain in UTC, but
  including `timezone.utc` makes this explicit and avoids the potential for the
  returned timestamp to be misinterpreted -- in Python 3, many `datetime`
methods treat "naive" `datetime` objects as local times.
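  A minimal sketch of the new behavior (the pool and image names, and the
  conffile path, are illustrative):

      import rados
      import rbd

      # connect to the cluster and open an image
      with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
          with cluster.open_ioctx('rbd') as ioctx:
              with rbd.Image(ioctx, 'myimage') as image:
                  ts = image.create_timestamp()
                  # the returned datetime is now "aware": tzinfo is set (UTC)
                  assert ts.tzinfo is not None
                  print(ts.isoformat())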
-* RBD: `rbd group info` and `rbd group snap info` commands are introduced to
+* RBD: The `rbd group info` and `rbd group snap info` commands are introduced to
show information about a group and a group snapshot respectively.
-* RBD: `rbd group snap ls` output now includes the group snapshot IDs. The header
- of the column showing the state of a group snapshot in the unformatted CLI
- output is changed from 'STATUS' to 'STATE'. The state of a group snapshot
- that was shown as 'ok' is now shown as 'complete', which is more descriptive.
+* RBD: The output of `rbd group snap ls` now includes the group snapshot IDs. The
+ heading for group snapshot status in the unformatted CLI
+ output is changed from `STATUS` to `STATE`. A group snapshot
+ that was previously shown as `ok` is now shown as `complete`, which is more descriptive.
* CephFS: Directories may now be configured with case-insensitive or
- normalized directory entry names. This is an inheritable configuration making
- it apply to an entire directory tree. For more information, see
+ normalized directory entry names. This is inheritable and
+ applies to an entire directory tree. For more information, see
https://docs.ceph.com/en/latest/cephfs/charmap/
* Based on tests performed at scale on an HDD based Ceph cluster, it was found
that scheduling with mClock was not optimal with multiple OSD shards. For
- example, in the test cluster with multiple OSD node failures, the client
- throughput was found to be inconsistent across test runs coupled with multiple
- reported slow requests. However, the same test with a single OSD shard and
+ example, when multiple OSD nodes failed, client
+ throughput was found to be inconsistent across test runs and multiple
+ slow requests were reported. However, the same test with a single OSD shard and
with multiple worker threads yielded significantly better results in terms of
- consistency of client and recovery throughput across multiple test runs.
- Therefore, as an interim measure until the issue with multiple OSD shards
+ consistent client and recovery throughput across multiple test runs.
+ As an interim measure until the issue with multiple OSD shards
(or multiple mClock queues per OSD) is investigated and fixed, the following
- changes to the default option values have been made:
+ changes to option value defaults have been made:
- osd_op_num_shards_hdd = 1 (was 5)
- osd_op_num_threads_per_shard_hdd = 5 (was 1)
For more details see https://tracker.ceph.com/issues/66289.
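  For example, the effective values can be inspected or pinned with the usual
  config commands (shown here as an assumption of typical usage; these options
  generally take effect only after an OSD restart):

      ceph config get osd osd_op_num_shards_hdd
      ceph config set osd osd_op_num_shards_hdd 1
      ceph config set osd osd_op_num_threads_per_shard_hdd 5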
* MGR: The Ceph Manager's always-on modules/plugins can now be force-disabled.
- This can be necessary in cases where we wish to prevent the manager from being
+ This can be necessary when we wish to prevent the Manager from being
flooded by module commands when Ceph services are down or degraded.
* CephFS: It is now possible to pause the threads that asynchronously purge
the subvolume snapshots by using the config option
"mgr/volumes/pause_cloning".
-* CephFS: Modifying the setting "max_mds" when a cluster is
+* CephFS: Modifying the setting `max_mds` when a cluster is
unhealthy now requires users to pass the confirmation flag
- (--yes-i-really-mean-it). This has been added as a precaution to tell the
- users that modifying "max_mds" may not help with troubleshooting or recovery
- effort. Instead, it might further destabilize the cluster.
+  (`--yes-i-really-mean-it`). This has been added as a precaution to inform
+ admins that modifying `max_mds` may not help with troubleshooting or recovery
+ efforts. Instead, it might further destabilize the cluster.
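  For example, on an unhealthy cluster the change must now be confirmed
  explicitly (the file system name is illustrative):

      ceph fs set cephfs max_mds 2 --yes-i-really-mean-it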
* RADOS: Added convenience function `librados::AioCompletion::cancel()` with
the same behavior as `librados::IoCtx::aio_cancel()`.
* mgr/restful, mgr/zabbix: both modules, already deprecated since 2020, have been
- finally removed. They have not been actively maintenance in the last years,
- and started suffering from vulnerabilities in their dependency chain (e.g.:
+ finally removed. They have not been actively maintained and
+ suffer from vulnerabilities in their dependency chain (e.g.:
CVE-2023-46136). As alternatives, for the `restful` module, the `dashboard` module
provides a richer and better maintained RESTful API. Regarding the `zabbix` module,
- there are alternative monitoring solutions, like `prometheus`, which is the most
- widely adopted among the Ceph user community.
+ there are alternative monitoring solutions, notably the Prometheus Alertmanager,
+ which scales more readily and is widely adopted within the Ceph community.
* CephFS: EOPNOTSUPP (Operation not supported) is now returned by the CephFS
fuse client for `fallocate` for the default case (i.e. mode == 0) since
* RGW: PutObjectLockConfiguration can now be used to enable S3 Object Lock on an
existing versioning-enabled bucket that was not created with Object Lock enabled.
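  For example, with the AWS CLI pointed at an RGW endpoint (the endpoint URL and
  bucket name are illustrative):

      # the bucket must already be versioning-enabled
      aws --endpoint-url http://rgw.example.com:8080 s3api \
          put-object-lock-configuration --bucket mybucket \
          --object-lock-configuration '{"ObjectLockEnabled": "Enabled"}'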
-* RADOS: The ceph df command reports incorrect MAX AVAIL for stretch mode pools when
+* RADOS: The `ceph df` command reports incorrect `MAX AVAIL` values for stretch mode pools when
CRUSH rules use multiple take steps for datacenters. PGMap::get_rule_avail
incorrectly calculates available space from only one datacenter.
- As a workaround, define CRUSH rules with take default and choose firstn 0 type
- datacenter. See https://tracker.ceph.com/issues/56650#note-6 for details.
- Upgrading a cluster configured with a crush rule with multiple take steps
- can lead to data shuffling, as the new crush changes may necessitate data
- redistribution. In contrast, a stretch rule with a single-take configuration
- will not cause any data movement during the upgrade process.
+ As a workaround, define CRUSH rules with `take default` and `choose firstn 0 type
+ datacenter`. See https://tracker.ceph.com/issues/56650#note-6 for details.
+ Upgrading a cluster configured with a CRUSH rule that includes multiple `take` steps
+ can lead to data shuffling, as CRUSH changes may necessitate data
+ redistribution. In contrast, a stretch rule with a single `take` step
+ will not cause data movement during the upgrade process.
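  A sketch of such a single-take rule (the bucket names and replica counts are
  illustrative and must match your CRUSH hierarchy):

      rule stretch_rule {
          id 2
          type replicated
          step take default
          step choose firstn 0 type datacenter
          step chooseleaf firstn 2 type host
          step emit
      }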
* RGW: The `x-amz-confirm-remove-self-bucket-access` header is now supported by
`PutBucketPolicy`. Additionally, the root user will always have access to modify
is not passed or if either is a non-empty pool, the command will abort.
* RADOS: A new command, `ceph osd rm-pg-upmap-primary-all`, has been added that allows
- users to clear all pg-upmap-primary mappings in the osdmap when desired.
+ admins to clear all pg-upmap-primary mappings in the osdmap when desired.
Related trackers:
- https://tracker.ceph.com/issues/67179
- https://tracker.ceph.com/issues/66867
* RADOS: The default plugin for erasure coded pools has been changed
- from Jerasure to ISA-L. Clusters created on T or later releases will
- use ISA-L as the default plugin when creating a new pool. Clusters that upgrade
- to the T release will continue to use their existing default values.
+ from Jerasure to ISA-L. Pools created on Tentacle or later releases will
+ use ISA-L as the default plugin. Pools within clusters upgraded
+ to Tentacle or Umbrella will continue to use their existing plugin.
  The default values can be overridden by creating a new erasure code profile and
- selecting it when creating a new pool.
+ selecting it when creating a pool.
ISA-L is recommended for new pools because the Jerasure library is
no longer maintained.
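  For example, a cluster that still wants the previous plugin can create a
  profile that selects it explicitly and reference that profile when creating a
  pool (the profile name, k/m values and failure domain are illustrative):

      ceph osd erasure-code-profile set jerasure-4-2 \
          plugin=jerasure k=4 m=2 crush-failure-domain=host
      ceph osd pool create ecpool erasure jerasure-4-2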
-* CephFS: Format of name of pool namespace for CephFS volumes has been changed
+* CephFS: The format of pool namespaces for CephFS volumes has been changed
from `fsvolumens__<subvol-name>` to `fsvolumens__<subvol-grp-name>_<subvol-name>`
to avoid namespace collision when two subvolumes located in different
subvolume groups have the same name. Even with namespace collision, there were
no security issues since the MDS auth cap is restricted to the subvolume path.
- Now, with this change, the namespaces are completely isolated.
+ Now, with this change, namespaces are completely isolated.
-* RGW: Added support for the `RestrictPublicBuckets` property of the S3 `PublicAccessBlock`
+* RGW: Add support for the `RestrictPublicBuckets` property of the S3 `PublicAccessBlock`
configuration.
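  A hedged example with the AWS CLI (the endpoint URL and bucket name are
  placeholders):

      aws --endpoint-url http://rgw.example.com:8080 s3api \
          put-public-access-block --bucket mybucket \
          --public-access-block-configuration RestrictPublicBuckets=true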
* RBD: Moving an image that is a member of a group to trash is no longer
Replication of tags is controlled by the `s3:GetObject(Version)Tagging` permission.
* RADOS: A new command, ``ceph osd pool availability-status``, has been added that allows
- users to view the availability score for each pool in a cluster. A pool is considered
- unavailable if any PG in the pool is not in active state or if there are unfound
- objects. Otherwise the pool is considered available. The score is updated every
- 5 seconds. The feature is on by default. A new config option ``enable_availability_tracking``
+ users to view the availability score for each pool. A pool is considered
+  `unavailable` if any PG in the pool is not active or if there are unfound
+  objects; otherwise the pool is considered `available`. The score is updated every
+ five seconds and the feature is enabled by default. A new config option ``enable_availability_tracking``
can be used to turn off the feature if required. Another command is added to clear the
availability status for a specific pool, ``ceph osd pool clear-availability-status <pool-name>``.
This feature is in tech preview.
>=19.2.1
-* CephFS: Command `fs subvolume create` now allows tagging subvolumes through option
+* CephFS: The `fs subvolume create` command now allows tagging subvolumes through the option
`--earmark` with a unique identifier needed for NFS or SMB services. The earmark
string for a subvolume is empty by default. To remove an already present earmark,
an empty string can be assigned to it. Additionally, commands
- nvme-gw delete
Relevant tracker: https://tracker.ceph.com/issues/64777
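  A minimal example of tagging a subvolume at creation time (the volume,
  subvolume and earmark values are illustrative):

      ceph fs subvolume create myvol mysubvol --earmark nfs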
-* MDS now uses host errors, as defined in errno.cc, for current platform.
-errorcode32_t is converting, internally, the error code from host to ceph, when encoding, and vice versa,
-when decoding, resulting having LINUX codes on the wire, and HOST code on the receiver.
-All CEPHFS_E* defines have been removed across Ceph (including the python binding).
+* MDS now uses host error numbers, as defined in errno.cc, for the current platform.
+`errorcode32_t` internally converts error codes from host to Ceph when encoding, and vice versa
+when decoding, resulting in Linux codes on the wire and host codes on the receiver.
+All CEPHFS_E* defines have been removed across Ceph (including the Python binding).
Relevant tracker: https://tracker.ceph.com/issues/64611