buffers are larger. The previous behavior of comparing up to the size of
the compare buffer was prone to subtle breakage upon straddling a stripe
unit boundary.
+* RGW: The Parquet implementation provides access to columnar objects (Parquet
+  format) through a Parquet reader (Apache Arrow), which can save a significant
+  amount of IOPS. The s3select-engine (an RGW submodule) contains this Parquet
+  reader. A Parquet object is identified by its name (*.parquet) and by the
+  magic number present in the object.
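+
+  For illustration, a minimal sketch of an s3select query against a Parquet
+  object using boto3 (the endpoint, credentials, bucket, and object name are
+  placeholders, and it assumes the RGW endpoint has Parquet support enabled)::
+
+    import boto3
+
+    s3 = boto3.client(
+        's3',
+        endpoint_url='http://rgw.example.com:8000',  # placeholder RGW endpoint
+        aws_access_key_id='ACCESS_KEY',              # placeholder credentials
+        aws_secret_access_key='SECRET_KEY',
+    )
+
+    # Run an S3 Select (s3select) query over a Parquet object.
+    resp = s3.select_object_content(
+        Bucket='mybucket',
+        Key='data.parquet',
+        ExpressionType='SQL',
+        Expression='SELECT * FROM s3object LIMIT 10',
+        InputSerialization={'Parquet': {}},
+        OutputSerialization={'CSV': {}},
+    )
+    for event in resp['Payload']:
+        if 'Records' in event:
+            print(event['Records']['Payload'].decode())
+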
* RBD: compare-and-write operation is no longer limited to 512-byte sectors.
Assuming proper alignment, it now allows operating on stripe units (4M by
default).
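
  For illustration, a rough sketch of validating such a request with the Python
  rados/rbd bindings (the pool and image names are placeholders, and the
  boundary check is one reading of the alignment requirement, not the
  authoritative librbd logic)::

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        with cluster.open_ioctx('rbd') as ioctx:          # placeholder pool
            with rbd.Image(ioctx, 'myimage') as image:    # placeholder image
                su = image.stripe_unit()  # stripe unit in bytes, 4M by default
                offset, length = 8 * su, 1024 * 1024
                # The compare-and-write extent may span up to a stripe unit,
                # provided it does not straddle a stripe unit boundary.
                ok = offset // su == (offset + length - 1) // su
                print(f"stripe unit: {su}, fits in one stripe unit: {ok}")
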
* The 'AT_NO_ATTR_SYNC' macro is deprecated; please use the standard
  'AT_STATX_DONT_SYNC' macro instead. The 'AT_NO_ATTR_SYNC' macro will be
  removed in the future.
-
+
>=17.2.1
* The "BlueStore zero block detection" feature (first introduced to Quincy in
See a sample report with `ceph telemetry preview`.
Opt-in with `ceph telemetry on`.
-* Filestore has been deprecated in Quincy, considering that BlueStore has been
- the default objectstore for quite some time.
-
-* Critical bug in OMAP format upgrade is fixed. This could cause data corruption
- (improperly formatted OMAP keys) after pre-Pacific cluster upgrade if
- bluestore-quick-fix-on-mount parameter is set to true or ceph-bluestore-tool's
- quick-fix/repair commands are invoked.
- Relevant tracker: https://tracker.ceph.com/issues/53062
-
-* `ceph-mgr-modules-core` debian package does not recommend `ceph-mgr-rook`
- anymore. As the latter depends on `python3-numpy` which cannot be imported in
- different Python sub-interpreters multi-times if the version of
- `python3-numpy` is older than 1.19. Since `apt-get` installs the `Recommends`
- packages by default, `ceph-mgr-rook` was always installed along with
- `ceph-mgr` debian package as an indirect dependency. If your workflow depends
- on this behavior, you might want to install `ceph-mgr-rook` separately.
-
-* the "kvs" Ceph object class is not packaged anymore. "kvs" Ceph object class
- offers a distributed flat b-tree key-value store implemented on top of librados
- objects omap. Because we don't have existing internal users of this object
- class, it is not packaged anymore.
-
-* A new library is available, libcephsqlite. It provides a SQLite Virtual File
- System (VFS) on top of RADOS. The database and journals are striped over
- RADOS across multiple objects for virtually unlimited scaling and throughput
- only limited by the SQLite client. Applications using SQLite may change to
- the Ceph VFS with minimal changes, usually just by specifying the alternate
- VFS. We expect the library to be most impactful and useful for applications
- that were storing state in RADOS omap, especially without striping which
- limits scalability.
-
-* The ``device_health_metrics`` pool has been renamed ``.mgr``. It is now
- used as a common store for all ``ceph-mgr`` modules.
-
-* fs: A file system can be created with a specific ID ("fscid"). This is useful
- in certain recovery scenarios, e.g., monitor database lost and rebuilt, and
- the restored file system is expected to have the same ID as before.
-
-* fs: A file system can be renamed using the `fs rename` command. Any cephx
- credentials authorized for the old file system name will need to be
- reauthorized to the new file system name. Since the operations of the clients
- using these re-authorized IDs may be disrupted, this command requires the
- "--yes-i-really-mean-it" flag. Also, mirroring is expected to be disabled
- on the file system.
-* MDS upgrades no longer require stopping all standby MDS daemons before
- upgrading the sole active MDS for a file system.
-
-* Parquet implementation is about accessing columnar objects(Parquet format)
- using Parquet reader(apache arrow) that will eventually save a lot of IOPS.
- The s3select-engine(RGW submodule) contains that Parquet-reader.
- The Parquet object is identified by its name(*.parquet) and by the magic-number exists
- in objects.
-
* RGW: RGW now supports rate limiting by user and/or by bucket. With this
  feature it is possible to limit, per user and/or per bucket, the total
  number of operations and/or bytes per minute that can be delivered.
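
  For illustration, a minimal sketch of configuring and enabling a per-user
  limit by invoking radosgw-admin from Python (the uid is a placeholder, and
  the flag names follow the rate limit documentation; verify them against the
  installed version)::

    import subprocess

    uid = 'testuser'  # placeholder user id

    # Set per-minute read/write operation and byte limits for one user.
    subprocess.run([
        'radosgw-admin', 'ratelimit', 'set',
        '--ratelimit-scope=user', f'--uid={uid}',
        '--max-read-ops=1024', '--max-write-ops=256',
        '--max-read-bytes=1073741824', '--max-write-bytes=268435456',
    ], check=True)

    # Limits are configured and enabled as separate steps.
    subprocess.run([
        'radosgw-admin', 'ratelimit', 'enable',
        '--ratelimit-scope=user', f'--uid={uid}',
    ], check=True)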