From 4c33cb0d6acc29bf5efb681e10f8dc5182e5eb8a Mon Sep 17 00:00:00 2001
From: Ernesto Puerta <37327689+epuertat@users.noreply.github.com>
Date: Wed, 1 Dec 2021 21:32:17 +0100
Subject: [PATCH] doc: 16.2.7 Release Notes (dashboard)

Signed-off-by: Ernesto Puerta
---
 doc/releases/pacific.rst | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/doc/releases/pacific.rst b/doc/releases/pacific.rst
index e91363080889..079418ff9e0e 100644
--- a/doc/releases/pacific.rst
+++ b/doc/releases/pacific.rst
@@ -50,6 +50,23 @@ Notable Changes
   until we implement a limit on the number of PGs default pools should consume,
   in combination with the 'scale-down' profile.
 
+* Cephadm & Ceph Dashboard: NFS management has been completely reworked to
+  ensure that NFS exports are managed consistently across the different Ceph
+  components. Prior to this, there were three incompatible implementations for
+  configuring NFS exports: Ceph-Ansible/OpenStack Manila, the Ceph Dashboard,
+  and the 'mgr/nfs' module. With this release, 'mgr/nfs' becomes the official
+  interface, and the remaining components (Cephadm and Ceph Dashboard) adhere
+  to it. While this might require manually migrating from the deprecated
+  implementations, it will simplify the user experience for those heavily
+  relying on NFS exports.
+
+* Dashboard: "Cluster Expansion Wizard". After the 'cephadm bootstrap' step,
+  users who log in to the Ceph Dashboard are presented with a welcome screen.
+  If they choose to follow the installation wizard, they are guided through a
+  set of steps that help them configure their Ceph cluster: expanding the
+  cluster by adding more hosts, detecting and defining their storage devices,
+  and finally deploying and configuring the different Ceph services.
+
 Changelog
 ---------
-- 
2.47.3
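
For illustration, the consolidated 'mgr/nfs' interface described in the first
bullet can be exercised from the CLI roughly as follows. This is a minimal
sketch: the cluster name 'mynfs', the placement spec, the CephFS file system
'myfs', and the pseudo-path are placeholders, and the exact flags of
'ceph nfs export create' vary between releases (consult 'ceph nfs --help' on
the running version)::

    # Create an NFS-Ganesha cluster; cephadm schedules the daemons
    # on the hosts given in the placement spec.
    ceph nfs cluster create mynfs "2 host-a,host-b"

    # Export a CephFS path through that cluster (16.2.7-style named
    # arguments; earlier Pacific releases used positional arguments).
    ceph nfs export create cephfs --cluster-id mynfs \
        --pseudo-path /exports/myfs --fsname myfs

    # List the exports; the same exports are now visible and editable
    # in the Ceph Dashboard.
    ceph nfs export ls mynfs --detailed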
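
Likewise, the steps the Cluster Expansion Wizard walks through in the second
bullet map onto the cephadm/orchestrator CLI roughly as sketched below. Host
names and addresses are placeholders, and deploying OSDs on all available
devices is just one illustrative choice::

    # Bootstrap a minimal one-node cluster; this prints the Dashboard
    # URL and the initial login credentials.
    cephadm bootstrap --mon-ip 192.168.0.10

    # Expand the cluster by adding more hosts.
    ceph orch host add host-b 192.168.0.11
    ceph orch host add host-c 192.168.0.12

    # Detect the storage devices available on the cluster hosts.
    ceph orch device ls

    # Deploy services, e.g. OSDs on every eligible device.
    ceph orch apply osd --all-available-devices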