This topic has been discussed many times; recently at the Dev
Summit of Cephalocon 2023.
This commit is the minimal version of the work, contained entirely
within `doc`. However, it will likely be expanded, as there were
further ideas, e.g. adding cache tiering back to the experimental
feature list (Sam) to warn users when deploying a new cluster.
Signed-off-by: Radosław Zarzyński <rzarzyns@redhat.com>
`ceph config set mgr mgr/telemetry/leaderboard_description 'Cluster description'`.
* CEPHFS: After recovering a Ceph File System by following the disaster recovery
  procedure, the recovered files under the `lost+found` directory can now be deleted.
+* core: cache-tiering is now deprecated.
>=17.2.1
===============
Cache Tiering
===============
+.. warning:: Cache tiering has been deprecated in the Reef release as it
+   has lacked a maintainer for a very long time. This does not mean
+   it will certainly be removed, but we may choose to remove it
+   without much further notice.
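+
+Operators who want to move off the feature ahead of any removal can detach an
+existing writeback cache tier with the usual tier commands. A rough sketch,
+assuming a base pool ``cold-storage`` and a cache pool ``hot-storage``::
+
+    # stop admitting new writes into the cache and proxy them to the base pool
+    ceph osd tier cache-mode hot-storage proxy
+    # flush and evict everything still held in the cache pool
+    rados -p hot-storage cache-flush-evict-all
+    # stop redirecting client traffic through the cache, then unlink the tier
+    ceph osd tier remove-overlay cold-storage
+    ceph osd tier remove cold-storage hot-storage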
A cache tier provides Ceph Clients with better I/O performance for a subset of
the data stored in a backing storage tier. Cache tiering involves creating a