From: Rishabh Dave
Date: Tue, 26 Nov 2024 06:26:38 +0000 (+0530)
Subject: Merge pull request #59897 from avanthakkar/note-cephfs-earmark
X-Git-Tag: testing/wip-mchangir-testing-PR59936-main-debug~6
X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=98d50099ca5baa82b270e7a6e1056e717a4bb123;p=ceph-ci.git

Merge pull request #59897 from avanthakkar/note-cephfs-earmark

doc: add pendingreleasenotes for cephfs subvolume earmarking feature

Reviewed-by: Rishabh Dave
Reviewed-by: Neeraj Pratap Singh
---

98d50099ca5baa82b270e7a6e1056e717a4bb123
diff --cc PendingReleaseNotes
index 97a326aa719,ee842519030..146cab64d6f
--- a/PendingReleaseNotes
+++ b/PendingReleaseNotes
@@@ -12,42 -12,16 +12,51 @@@
   of the column showing the state of a group snapshot in the unformatted CLI
   output is changed from 'STATUS' to 'STATE'. The state of a group snapshot
   that was shown as 'ok' is now shown as 'complete', which is more descriptive.
 +* Based on tests performed at scale on an HDD-based Ceph cluster, it was found
 +  that scheduling with mClock was not optimal with multiple OSD shards. For
 +  example, in the test cluster with multiple OSD node failures, the client
 +  throughput was found to be inconsistent across test runs, coupled with
 +  multiple reported slow requests. However, the same test with a single OSD
 +  shard and with multiple worker threads yielded significantly better results
 +  in terms of consistency of client and recovery throughput across multiple
 +  test runs. Therefore, as an interim measure until the issue with multiple
 +  OSD shards (or multiple mClock queues per OSD) is investigated and fixed,
 +  the following changes to the default option values have been made:
 +  - osd_op_num_shards_hdd = 1 (was 5)
 +  - osd_op_num_threads_per_shard_hdd = 5 (was 1)
 +  For more details see https://tracker.ceph.com/issues/66289. (An illustrative
 +  query sketch follows these notes.)
 +* MGR: The Ceph Manager's always-on modules/plugins can now be force-disabled.
 +  This can be necessary in cases where we wish to prevent the manager from
 +  being flooded by module commands when Ceph services are down or degraded.
 +
 +* CephFS: Modifying the setting "max_mds" when a cluster is
 +  unhealthy now requires users to pass the confirmation flag
 +  (--yes-i-really-mean-it). This has been added as a precaution to tell
 +  users that modifying "max_mds" may not help with troubleshooting or recovery
 +  efforts. Instead, it might further destabilize the cluster.
 +
 +* mgr/restful, mgr/zabbix: both modules, already deprecated since 2020, have
 +  finally been removed. They have not been actively maintained in recent years
 +  and started suffering from vulnerabilities in their dependency chain (e.g.,
 +  CVE-2023-46136). As alternatives, for the `restful` module, the `dashboard`
 +  module provides a richer and better maintained RESTful API. Regarding the
 +  `zabbix` module, there are alternative monitoring solutions, like
 +  `prometheus`, which is the most widely adopted among the Ceph user community.
 +
 +* CephFS: EOPNOTSUPP (Operation not supported) is now returned by the CephFS
 +  fuse client for `fallocate` for the default case (i.e. mode == 0), since
 +  CephFS does not support disk space reservation. The only flags supported are
 +  `FALLOC_FL_KEEP_SIZE` and `FALLOC_FL_PUNCH_HOLE`.
 +
+ >=19.2.1
+ 
+ * CephFS: The command `fs subvolume create` now allows tagging subvolumes
+   through the option `--earmark` with a unique identifier needed for NFS or
+   SMB services. The earmark string for a subvolume is empty by default. To
+   remove an already present earmark, an empty string can be assigned to it.
+   Additionally, the commands `ceph fs subvolume earmark set`,
+   `ceph fs subvolume earmark get` and `ceph fs subvolume earmark rm` have been
+   added to set, get and remove the earmark from a given subvolume.
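
The mClock note above changes two OSD defaults, and the "max_mds" note adds a
confirmation flag. The following shell sketch only illustrates how those could
be checked or exercised on a cluster running this release; the daemon id
`osd.0` and the file system name `cephfs` are placeholders, and the expected
values in the comments assume the new defaults have not been overridden.

  # Inspect the new HDD shard defaults (values assume no local override)
  ceph config get osd osd_op_num_shards_hdd              # expected: 1
  ceph config get osd osd_op_num_threads_per_shard_hdd   # expected: 5

  # Values in effect on a specific running OSD; osd.0 is a placeholder id
  ceph config show osd.0 osd_op_num_shards_hdd

  # Changing max_mds on an unhealthy cluster now requires the confirmation
  # flag mentioned above; "cephfs" is a placeholder file system name
  ceph fs set cephfs max_mds 2 --yes-i-really-mean-it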
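
For the `fallocate` note above, one way to observe the new behaviour from a
shell is the util-linux `fallocate(1)` utility on a ceph-fuse mount. This is a
hedged sketch: the mount point `/mnt/cephfs` and the file name are
placeholders, and the exact error text printed by the utility may vary.

  cd /mnt/cephfs                              # placeholder ceph-fuse mount point
  dd if=/dev/zero of=testfile bs=1M count=4   # create some data to work on

  # Default allocation (mode == 0) is not supported by CephFS and is now
  # expected to fail with EOPNOTSUPP ("Operation not supported")
  fallocate -l 16M testfile

  # Punching a hole (FALLOC_FL_PUNCH_HOLE, which implies FALLOC_FL_KEEP_SIZE)
  # remains supported
  fallocate --punch-hole --offset 0 --length 1M testfile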
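
The earmarking note above introduces the `--earmark` option and three new
`earmark` subcommands. The sketch below is illustrative only: the volume name
`cephfs`, the subvolume name `subvol1` and the earmark value `smb` are
placeholders, and the assumption that `earmark set` accepts the value via
`--earmark` (rather than positionally) should be verified with
`ceph fs subvolume earmark set -h`.

  # Create a subvolume that is earmarked for SMB at creation time
  ceph fs subvolume create cephfs subvol1 --earmark smb

  # Set, read back and remove the earmark on an existing subvolume
  ceph fs subvolume earmark set cephfs subvol1 --earmark smb
  ceph fs subvolume earmark get cephfs subvol1
  ceph fs subvolume earmark rm cephfs subvol1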
+ 
  
  >=19.0.0
  
  * cephx: key rotation is now possible using `ceph auth rotate`. Previously,