From: dparmar18
Date: Tue, 14 Jun 2022 10:16:37 +0000 (+0530)
Subject: doc/cephadm/upgrade: Add doc for mds upgrade without reducing max_mds to 1.
X-Git-Tag: v18.1.0~1131^2~3
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=cdcdc84de8144cb1b5c12ef3be49d41d2aa12a77;p=ceph.git

doc/cephadm/upgrade: Add doc for mds upgrade without reducing max_mds to 1.

Signed-off-by: Dhairya Parmar
---

diff --git a/doc/cephadm/upgrade.rst b/doc/cephadm/upgrade.rst
index 221f212449f7..8e62af61e440 100644
--- a/doc/cephadm/upgrade.rst
+++ b/doc/cephadm/upgrade.rst
@@ -48,6 +48,32 @@ The automated upgrade process follows Ceph best practices. For example:
 Starting the upgrade
 ====================
 
+.. note::
+   .. note::
+      A `Staggered Upgrade`_ of the mons/mgrs may be necessary to gain access
+      to this new feature.
+
+   By default, cephadm reduces `max_mds` to `1` during an upgrade. This can be
+   disruptive for large-scale CephFS deployments because the cluster cannot
+   quickly reduce the number of active MDS daemons to `1`, and a single active
+   MDS cannot easily handle the load of all clients even for a short time.
+   Therefore, to upgrade MDS daemons without reducing `max_mds`, set the `fail_fs`
+   option to `true` (the default value is `false`) prior to initiating the upgrade:
+
+   .. prompt:: bash #
+
+      ceph config set mgr mgr/orchestrator/fail_fs true
+
+   With this option set, the upgrade will:
+
+   #. Fail the CephFS filesystems, bringing the active MDS daemon(s) to the
+      `up:standby` state.
+
+   #. Upgrade the MDS daemons safely.
+
+   #. Bring the CephFS filesystems back up, restoring the active MDS daemon(s)
+      from `up:standby` to `up:active`.
+
 Before you use cephadm to upgrade Ceph, verify that all hosts are currently online and that your cluster is healthy by running the following command:
 
 .. prompt:: bash #
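
For context, a minimal sketch of how this setting would typically be used during a cephadm upgrade follows. The image tag shown is a placeholder; apart from the `fail_fs` setting introduced by the patch above, the commands are the standard `ceph orch upgrade` workflow documented elsewhere in upgrade.rst:

    # Confirm that all hosts are online and the cluster is healthy.
    ceph -s

    # Keep max_mds untouched during the upgrade (requires the orchestrator
    # support described above; the default value is false).
    ceph config set mgr mgr/orchestrator/fail_fs true

    # Verify that the setting took effect.
    ceph config get mgr mgr/orchestrator/fail_fs

    # Start the upgrade to the target release image (placeholder tag).
    ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.3

    # Watch progress; MDS daemons are failed, upgraded, and reactivated
    # without max_mds being reduced to 1.
    ceph orch upgrade status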