From c1ff3c7a4089193e95aa8724f9976bfad3ec0f5a Mon Sep 17 00:00:00 2001
From: dparmar18
Date: Tue, 14 Jun 2022 15:46:37 +0530
Subject: [PATCH] doc/cephadm/upgrade: Add doc for MDS upgrade without reducing
 max_mds to 1.

Signed-off-by: Dhairya Parmar
---
 doc/cephadm/upgrade.rst | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/doc/cephadm/upgrade.rst b/doc/cephadm/upgrade.rst
index 221f212449f..8e62af61e44 100644
--- a/doc/cephadm/upgrade.rst
+++ b/doc/cephadm/upgrade.rst
@@ -48,6 +48,31 @@ The automated upgrade process follows Ceph best practices. For example:
 Starting the upgrade
 ====================
 
+.. note::
+   .. note::
+      `Staggered Upgrade`_ of the mons/mgrs may be necessary to have access
+      to this new feature.
+
+   By default, cephadm reduces `max_mds` to `1` during an upgrade. This can
+   be disruptive for large-scale CephFS deployments, because the cluster
+   cannot quickly reduce the number of active MDS daemons to `1`, and a
+   single active MDS cannot easily handle the load of all clients even for
+   a short time. Therefore, to upgrade the MDS daemons without reducing
+   `max_mds`, set the `fail_fs` option to `true` (the default is `false`)
+   before starting the upgrade:
+
+   .. prompt:: bash #
+
+      ceph config set mgr mgr/orchestrator/fail_fs true
+
+   This will:
+
+   #. Fail the CephFS filesystems, bringing the active MDS daemon(s) to
+      the `up:standby` state.
+   #. Upgrade the MDS daemons safely.
+   #. Bring the CephFS filesystems back up, returning the active MDS
+      daemon(s) from `up:standby` to `up:active`.
+
 Before you use cephadm to upgrade Ceph, verify that all hosts are currently online and that your cluster is healthy by running the following command:
 
 .. prompt:: bash #
-- 
2.39.5
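
A minimal usage sketch to accompany the documentation added above, assuming a
cephadm-managed cluster on a release that ships the `fail_fs` option; the
`ceph config get` check and the `<version>` placeholder are illustrative and
not part of the patch itself:

.. prompt:: bash #

   # confirm the option took effect before starting the upgrade
   ceph config get mgr mgr/orchestrator/fail_fs
   # begin the cephadm upgrade as usual
   ceph orch upgrade start --ceph-version <version>
   # watch overall progress and MDS state transitions
   ceph orch upgrade status
   ceph fs status

With `fail_fs` set to `true`, the orchestrator fails the filesystems, upgrades
the MDS daemons, and then brings the filesystems back to `up:active`, instead
of first reducing `max_mds` to `1`.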