From b3846f6f6a4ff3daf7c26eb087992bd230ee9a47 Mon Sep 17 00:00:00 2001
From: Sebastian Wagner
Date: Thu, 18 Feb 2021 15:39:59 +0100
Subject: [PATCH] doc/cephadm: group MON sections into one chapter

Signed-off-by: Sebastian Wagner
---
 doc/cephadm/index.rst           |   1 +
 doc/cephadm/install.rst         | 160 +------------------------------
 doc/cephadm/mon.rst             | 165 ++++++++++++++++++++++++++++++++
 doc/cephadm/troubleshooting.rst |   8 +-
 doc/mgr/orchestrator.rst        |  33 -------
 5 files changed, 173 insertions(+), 194 deletions(-)
 create mode 100644 doc/cephadm/mon.rst

diff --git a/doc/cephadm/index.rst b/doc/cephadm/index.rst
index 3cdaf48ba2e..502e3fba0d7 100644
--- a/doc/cephadm/index.rst
+++ b/doc/cephadm/index.rst
@@ -31,6 +31,7 @@ versions of Ceph.
     install
     adoption
     host-management
+    mon
     osd
     rgw
     custom-container
diff --git a/doc/cephadm/install.rst b/doc/cephadm/install.rst
index bbb96d28102..701351332cc 100644
--- a/doc/cephadm/install.rst
+++ b/doc/cephadm/install.rst
@@ -214,168 +214,14 @@ Adding Hosts

 Next, add all hosts to the cluster by following :ref:`cephadm-adding-hosts`.

-
-.. _deploy_additional_monitors:
-
-Deploy additional monitors (optional)
-=====================================
+Adding additional MONs
+======================

 A typical Ceph cluster has three or five monitor daemons spread
 across different hosts. We recommend deploying five
 monitors if there are five or more nodes in your cluster.

-.. _CIDR: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation
-
-When Ceph knows what IP subnet the monitors should use it can automatically
-deploy and scale monitors as the cluster grows (or contracts). By default,
-Ceph assumes that other monitors should use the same subnet as the first
-monitor's IP.
-
-If your Ceph monitors (or the entire cluster) live on a single subnet,
-then by default cephadm automatically adds up to 5 monitors as you add new
-hosts to the cluster. No further steps are necessary.
-
-* If there is a specific IP subnet that should be used by monitors, you
-  can configure that in `CIDR`_ format (e.g., ``10.1.2.0/24``) with:
-
-  .. prompt:: bash #
-
-     ceph config set mon public_network *<mon-cidr-network>*
-
-  For example:
-
-  .. prompt:: bash #
-
-     ceph config set mon public_network 10.1.2.0/24
-
-  Cephadm deploys new monitor daemons only on hosts that have IPs
-  configured in the configured subnet.
-
-* If you want to adjust the default of 5 monitors, run this command:
-
-  .. prompt:: bash #
-
-     ceph orch apply mon *<number>*
-
-* To deploy monitors on a specific set of hosts, run this command:
-
-  .. prompt:: bash #
-
-     ceph orch apply mon *<host1,host2,host3,...>*
-
-  Be sure to include the first (bootstrap) host in this list.
-
-* You can control which hosts the monitors run on by making use of
-  host labels. To set the ``mon`` label to the appropriate
-  hosts, run this command:
-
-  .. prompt:: bash #
-
-     ceph orch host label add *<hostname>* mon
-
-  To view the current hosts and labels, run this command:
-
-  .. prompt:: bash #
-
-     ceph orch host ls
-
-  For example:
-
-  .. prompt:: bash #
-
-     ceph orch host label add host1 mon
-     ceph orch host label add host2 mon
-     ceph orch host label add host3 mon
-     ceph orch host ls
-
-  .. code-block:: bash
-
-     HOST   ADDR  LABELS  STATUS
-     host1        mon
-     host2        mon
-     host3        mon
-     host4
-     host5
-
-  Tell cephadm to deploy monitors based on the label by running this command:
-
-  .. prompt:: bash #
-
-     ceph orch apply mon label:mon
-
-* You can explicitly specify the IP address or CIDR network for each monitor
-  and control where it is placed. To disable automated monitor deployment, run
-  this command:
-
-  .. prompt:: bash #
-
-     ceph orch apply mon --unmanaged
-
-  To deploy each additional monitor:
-
-  .. prompt:: bash #
-
-     ceph orch daemon add mon *<host1:ip-or-network1> [<host1:ip-or-network2>...]*
-
-  For example, to deploy a second monitor on ``newhost1`` using an IP
-  address ``10.1.2.123`` and a third monitor on ``newhost2`` in
-  network ``10.1.2.0/24``, run the following commands:
-
-  .. prompt:: bash #
-
-     ceph orch apply mon --unmanaged
-     ceph orch daemon add mon newhost1:10.1.2.123
-     ceph orch daemon add mon newhost2:10.1.2.0/24
-
-  .. note::
-     The **apply** command can be confusing. For this reason, we recommend using
-     YAML specifications.
-
-     Each ``ceph orch apply mon`` command supersedes the one before it.
-     This means that you must use the proper comma-separated list-based
-     syntax when you want to apply monitors to more than one host.
-     If you do not use the proper syntax, you will clobber your work
-     as you go.
-
-     For example:
-
-     .. prompt:: bash #
-
-        ceph orch apply mon host1
-        ceph orch apply mon host2
-        ceph orch apply mon host3
-
-     This results in only one host having a monitor applied to it: host 3.
-
-     (The first command creates a monitor on host1. Then the second command
-     clobbers the monitor on host1 and creates a monitor on host2. Then the
-     third command clobbers the monitor on host2 and creates a monitor on
-     host3. In this scenario, at this point, there is a monitor ONLY on
-     host3.)
-
-     To make certain that a monitor is applied to each of these three hosts,
-     run a command like this:
-
-     .. prompt:: bash #
-
-        ceph orch apply mon "host1,host2,host3"
-
-     There is another way to apply monitors to multiple hosts: a ``yaml`` file
-     can be used. Instead of using the "ceph orch apply mon" commands, run a
-     command of this form:
-
-     .. prompt:: bash #
-
-        ceph orch apply -i file.yaml
-
-     Here is a sample **file.yaml** file::
-
-       service_type: mon
-       placement:
-         hosts:
-           - host1
-           - host2
-           - host3
+Please follow :ref:`deploy_additional_monitors` to deploy additional MONs.

 Adding Storage
 ==============
diff --git a/doc/cephadm/mon.rst b/doc/cephadm/mon.rst
new file mode 100644
index 00000000000..7bbb1f72ce2
--- /dev/null
+++ b/doc/cephadm/mon.rst
@@ -0,0 +1,165 @@
+===========
+MON Service
+===========
+
+.. _deploy_additional_monitors:
+
+Deploy additional monitors
+==========================
+
+A typical Ceph cluster has three or five monitor daemons spread
+across different hosts. We recommend deploying five
+monitors if there are five or more nodes in your cluster.
+
+.. _CIDR: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation
+
+When Ceph knows what IP subnet the monitors should use it can automatically
+deploy and scale monitors as the cluster grows (or contracts). By default,
+Ceph assumes that other monitors should use the same subnet as the first
+monitor's IP.
+
+If your Ceph monitors (or the entire cluster) live on a single subnet,
+then by default cephadm automatically adds up to 5 monitors as you add new
+hosts to the cluster. No further steps are necessary.
+
+* If there is a specific IP subnet that should be used by monitors, you
+  can configure that in `CIDR`_ format (e.g., ``10.1.2.0/24``) with:
+
+  .. prompt:: bash #
+
+     ceph config set mon public_network *<mon-cidr-network>*
+
+  For example:
+
+  .. prompt:: bash #
+
+     ceph config set mon public_network 10.1.2.0/24
+
+  Cephadm deploys new monitor daemons only on hosts that have IPs
+  in the configured subnet.
+
+* If you want to adjust the default of 5 monitors, run this command:
+
+  .. prompt:: bash #
+
+     ceph orch apply mon *<number>*
+
+* To deploy monitors on a specific set of hosts, run this command:
+
+  .. prompt:: bash #
+
+     ceph orch apply mon *<host1,host2,host3,...>*
+
+  Be sure to include the first (bootstrap) host in this list.
+
+* You can control which hosts the monitors run on by making use of
+  host labels. To set the ``mon`` label to the appropriate
+  hosts, run this command:
+
+  .. prompt:: bash #
+
+     ceph orch host label add *<hostname>* mon
+
+  To view the current hosts and labels, run this command:
+
+  .. prompt:: bash #
+
+     ceph orch host ls
+
+  For example:
+
+  .. prompt:: bash #
+
+     ceph orch host label add host1 mon
+     ceph orch host label add host2 mon
+     ceph orch host label add host3 mon
+     ceph orch host ls
+
+  .. code-block:: bash
+
+     HOST   ADDR  LABELS  STATUS
+     host1        mon
+     host2        mon
+     host3        mon
+     host4
+     host5
+
+  Tell cephadm to deploy monitors based on the label by running this command:
+
+  .. prompt:: bash #
+
+     ceph orch apply mon label:mon
+
+* You can explicitly specify the IP address or CIDR network for each monitor
+  and control where it is placed. To disable automated monitor deployment, run
+  this command:
+
+  .. prompt:: bash #
+
+     ceph orch apply mon --unmanaged
+
+  To deploy each additional monitor:
+
+  .. prompt:: bash #
+
+     ceph orch daemon add mon *<host1:ip-or-network1> [<host1:ip-or-network2>...]*
+
+  For example, to deploy a second monitor on ``newhost1`` using an IP
+  address ``10.1.2.123`` and a third monitor on ``newhost2`` in
+  network ``10.1.2.0/24``, run the following commands:
+
+  .. prompt:: bash #
+
+     ceph orch apply mon --unmanaged
+     ceph orch daemon add mon newhost1:10.1.2.123
+     ceph orch daemon add mon newhost2:10.1.2.0/24
+
+  .. note::
+     The **apply** command can be confusing. For this reason, we recommend using
+     YAML specifications.
+
+     Each ``ceph orch apply mon`` command supersedes the one before it.
+     This means that you must use the proper comma-separated list-based
+     syntax when you want to apply monitors to more than one host.
+     If you do not use the proper syntax, you will clobber your work
+     as you go.
+
+     For example:
+
+     .. prompt:: bash #
+
+        ceph orch apply mon host1
+        ceph orch apply mon host2
+        ceph orch apply mon host3
+
+     This results in only one host having a monitor applied to it: host 3.
+
+     (The first command creates a monitor on host1. Then the second command
+     clobbers the monitor on host1 and creates a monitor on host2. Then the
+     third command clobbers the monitor on host2 and creates a monitor on
+     host3. In this scenario, at this point, there is a monitor ONLY on
+     host3.)
+
+     To make certain that a monitor is applied to each of these three hosts,
+     run a command like this:
+
+     .. prompt:: bash #
+
+        ceph orch apply mon "host1,host2,host3"
+
+     There is another way to apply monitors to multiple hosts: a ``yaml`` file
+     can be used. Instead of using the "ceph orch apply mon" commands, run a
+     command of this form:
+
+     .. prompt:: bash #
+
+        ceph orch apply -i file.yaml
+
+     Here is a sample **file.yaml** file::
+
+       service_type: mon
+       placement:
+         hosts:
+           - host1
+           - host2
+           - host3
diff --git a/doc/cephadm/troubleshooting.rst b/doc/cephadm/troubleshooting.rst
index 889d526bac2..5f0c82bbf59 100644
--- a/doc/cephadm/troubleshooting.rst
+++ b/doc/cephadm/troubleshooting.rst
@@ -229,7 +229,7 @@ Restoring the MON quorum
 ------------------------

 In case the Ceph MONs cannot form a quorum, cephadm is not able
-to manage the cluster, until the quorum is restored.
+to manage the cluster until the quorum is restored.

 In order to restore the MON quorum, remove unhealthy MONs
 from the monmap by following these steps:
@@ -242,14 +242,14 @@ from the monmap by following these steps:

 2. Identify a surviving monitor and log in to that host::

-     ssh {mon-host}
+    ssh {mon-host}
      cephadm enter --name mon.`hostname`

-3. Follow the steps in :ref:`rados-mon-remove-from-unhealthy`
+3. Follow the steps in :ref:`rados-mon-remove-from-unhealthy`
+
 Manually deploying a MGR daemon
 -------------------------------
-
 cephadm requires a MGR daemon in order to manage the cluster. In case the last MGR
 of a cluster was removed, follow these steps in order to deploy a MGR
 ``mgr.hostname.smfvfd`` on a random host of your cluster manually.
diff --git a/doc/mgr/orchestrator.rst b/doc/mgr/orchestrator.rst
index cd5e36b25be..1af9bd3ba08 100644
--- a/doc/mgr/orchestrator.rst
+++ b/doc/mgr/orchestrator.rst
@@ -106,39 +106,6 @@ To remove a label, run::

 where ``journal`` is the filestore journal device, ``wal`` is the bluestore
 write ahead log device, and ``all`` stands for all devices associated with the OSD

-
-Monitor and manager management
-==============================
-
-Creates or removes MONs or MGRs from the cluster. Orchestrator may return an
-error if it doesn't know how to do this transition.
-
-Update the number of monitor hosts::
-
-    ceph orch apply mon --placement=<placement> [--dry-run]
-
-Where ``placement`` is a :ref:`orchestrator-cli-placement-spec`.
-
-Each host can optionally specify a network for the monitor to listen on.
-
-Update the number of manager hosts::
-
-    ceph orch apply mgr --placement=<placement> [--dry-run]
-
-Where ``placement`` is a :ref:`orchestrator-cli-placement-spec`.
-
-..
-    .. note::
-
-        The host lists are the new full list of mon/mgr hosts
-
-    .. note::
-
-        specifying hosts is optional for some orchestrator modules
-        and mandatory for others (e.g. Ansible).
-
-
 .. _orchestrator-cli-cephfs:

 Deploying CephFS
-- 
2.39.5
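
The new ``mon.rst`` chapter above closes with a host-list based service
specification. For completeness, the same placement can also be expressed with
the ``mon`` label that the chapter already uses for
``ceph orch apply mon label:mon``. The snippet below is an illustrative sketch
only, not part of the patch: the file name ``mon.yaml`` and the ``count: 5``
cap are assumptions chosen to match the five-monitor recommendation::

    service_type: mon
    placement:
      count: 5
      label: mon

Applying this file with ``ceph orch apply -i mon.yaml`` places up to five
monitors on hosts carrying the ``mon`` label, and ``ceph orch ls mon --export``
should print the currently applied specification back as YAML for verification.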