From 511b547d30a28b4cae73cb7c9e409c36e1bf439c Mon Sep 17 00:00:00 2001
From: Sebastian Wagner
Date: Fri, 23 Jul 2021 09:54:14 +0200
Subject: [PATCH] doc/cephadm: Move some sections from mon.rst to
 service-management.rst

Avoid duplication and instead only reference the corresponding sections.

Signed-off-by: Sebastian Wagner
(cherry picked from commit 24b753f2c8111305cf73de9b40a393c0c1a9bae7)
---
 doc/cephadm/mon.rst                | 111 +++--------------------------
 doc/cephadm/service-management.rst |  86 +++++++++++++++++++++-
 2 files changed, 91 insertions(+), 106 deletions(-)

diff --git a/doc/cephadm/mon.rst b/doc/cephadm/mon.rst
index 38ae493f5ff2e..2121c28a7c7a3 100644
--- a/doc/cephadm/mon.rst
+++ b/doc/cephadm/mon.rst
@@ -26,7 +26,11 @@ default to that subnet unless cephadm is instructed to do otherwise.
 If all of the ceph monitor daemons in your cluster are in the same subnet,
 manual administration of the ceph monitor daemons is not necessary.
 ``cephadm`` will automatically add up to five monitors to the subnet, as
-needed, as new hosts are added to the cluster.
+needed, as new hosts are added to the cluster.
+
+By default, cephadm will deploy 5 daemons on arbitrary hosts. See
+:ref:`orchestrator-cli-placement-spec` for details of specifying
+the placement of daemons.
 
 Designating a Particular Subnet for Monitors
 --------------------------------------------
@@ -48,67 +52,18 @@ format (e.g., ``10.1.2.0/24``):
 Cephadm deploys new monitor daemons only on hosts that have IP addresses in
 the designated subnet.
 
-Changing the number of monitors from the default
-------------------------------------------------
-
-If you want to adjust the default of 5 monitors, run this command:
-
-  .. prompt:: bash #
-
-    ceph orch apply mon *<number-of-monitors>*
-
-Deploying monitors only to specific hosts
------------------------------------------
-
-To deploy monitors on a specific set of hosts, run this command:
-
-  .. prompt:: bash #
-
-    ceph orch apply mon *<host1,host2,host3,...>*
-
-  Be sure to include the first (bootstrap) host in this list.
-
-Using Host Labels
------------------
+You can also specify two public networks by using a list of networks:
 
-You can control which hosts the monitors run on by making use of host labels.
-To set the ``mon`` label to the appropriate hosts, run this command:
+  .. prompt:: bash #
 
-    ceph orch host label add *<hostname>* mon
-
-  To view the current hosts and labels, run this command:
-
-  .. prompt:: bash #
-
-    ceph orch host ls
+     ceph config set mon public_network *<mon-cidr-network1>,<mon-cidr-network2>*
 
   For example:
 
   .. prompt:: bash #
 
-    ceph orch host label add host1 mon
-    ceph orch host label add host2 mon
-    ceph orch host label add host3 mon
-    ceph orch host ls
-
-  .. code-block:: bash
-
-    HOST    ADDR    LABELS  STATUS
-    host1           mon
-    host2           mon
-    host3           mon
-    host4
-    host5
+     ceph config set mon public_network 10.1.2.0/24,192.168.0.1/24
 
-  Tell cephadm to deploy monitors based on the label by running this command:
-
-  .. prompt:: bash #
-
-    ceph orch apply mon label:mon
-
-See also :ref:`host labels `.
 
 Deploying Monitors on a Particular Network
 ------------------------------------------
@@ -136,53 +91,3 @@ run this command:
     ceph orch apply mon --unmanaged
     ceph orch daemon add mon newhost1:10.1.2.123
     ceph orch daemon add mon newhost2:10.1.2.0/24
-
-  .. note::
-     The **apply** command can be confusing. For this reason, we recommend using
-     YAML specifications.
-
-     Each ``ceph orch apply mon`` command supersedes the one before it.
-     This means that you must use the proper comma-separated list-based
-     syntax when you want to apply monitors to more than one host.
-     If you do not use the proper syntax, you will clobber your work
-     as you go.
-
-     For example:
-
-      .. prompt:: bash #
-
-        ceph orch apply mon host1
-        ceph orch apply mon host2
-        ceph orch apply mon host3
-
-     This results in only one host having a monitor applied to it: host 3.
-
-     (The first command creates a monitor on host1. Then the second command
-     clobbers the monitor on host1 and creates a monitor on host2. Then the
-     third command clobbers the monitor on host2 and creates a monitor on
-     host3. In this scenario, at this point, there is a monitor ONLY on
-     host3.)
-
-     To make certain that a monitor is applied to each of these three hosts,
-     run a command like this:
-
-      .. prompt:: bash #
-
-        ceph orch apply mon "host1,host2,host3"
-
-     There is another way to apply monitors to multiple hosts: a ``yaml`` file
-     can be used. Instead of using the "ceph orch apply mon" commands, run a
-     command of this form:
-
-      .. prompt:: bash #
-
-        ceph orch apply -i file.yaml
-
-     Here is a sample **file.yaml** file::
-
-        service_type: mon
-        placement:
-          hosts:
-           - host1
-           - host2
-           - host3

diff --git a/doc/cephadm/service-management.rst b/doc/cephadm/service-management.rst
index 9f819998a8619..c8f47d9edd7df 100644
--- a/doc/cephadm/service-management.rst
+++ b/doc/cephadm/service-management.rst
@@ -158,6 +158,54 @@ or in a YAML files.
 cephadm will not deploy daemons on hosts with the ``_no_schedule`` label; see
 :ref:`cephadm-special-host-labels`.
 
+  .. note::
+    The **apply** command can be confusing. For this reason, we recommend using
+    YAML specifications.
+
+    Each ``ceph orch apply <service-name>`` command supersedes the one before it.
+    If you do not use the proper syntax, you will clobber your work
+    as you go.
+
+    For example:
+
+      .. prompt:: bash #
+
+        ceph orch apply mon host1
+        ceph orch apply mon host2
+        ceph orch apply mon host3
+
+    This results in only one host having a monitor applied to it: host3.
+
+    (The first command creates a monitor on host1. Then the second command
+    clobbers the monitor on host1 and creates a monitor on host2. Then the
+    third command clobbers the monitor on host2 and creates a monitor on
+    host3. In this scenario, at this point, there is a monitor ONLY on
+    host3.)
+
+    To make certain that a monitor is applied to each of these three hosts,
+    run a command like this:
+
+      .. prompt:: bash #
+
+        ceph orch apply mon "host1,host2,host3"
+
+    There is another way to apply monitors to multiple hosts: a ``yaml`` file
+    can be used. Instead of using the "ceph orch apply mon" commands, run a
+    command of this form:
+
+      .. prompt:: bash #
+
+        ceph orch apply -i file.yaml
+
+    Here is a sample **file.yaml** file::
+
+        service_type: mon
+        placement:
+          hosts:
+           - host1
+           - host2
+           - host3
+
 
 Explicit placements
 -------------------
@@ -192,7 +240,39 @@ and ``=name`` specifies the name of the new monitor.
 Placement by labels
 -------------------
 
-Daemons can be explicitly placed on hosts that match a specific label:
+Daemon placement can be limited to hosts that match a specific label. To add
+the label ``mylabel`` to the appropriate hosts, run this command:
+
+  .. prompt:: bash #
+
+    ceph orch host label add *<hostname>* mylabel
+
+  To view the current hosts and labels, run this command:
+
+  .. prompt:: bash #
+
+    ceph orch host ls
+
+  For example:
+
+  .. prompt:: bash #
+
+    ceph orch host label add host1 mylabel
+    ceph orch host label add host2 mylabel
+    ceph orch host label add host3 mylabel
+    ceph orch host ls
+
+  .. code-block:: bash
+
+    HOST    ADDR    LABELS   STATUS
+    host1           mylabel
+    host2           mylabel
+    host3           mylabel
+    host4
+    host5
+
+Now, tell cephadm to deploy daemons based on the label by running
+this command:
 
   .. prompt:: bash #
 
@@ -240,8 +320,8 @@ Or in YAML:
 
     host_pattern: "*"
 
-Setting a limit
----------------
+Changing the number of monitors
+-------------------------------
 
 By specifying ``count``, only the number of daemons specified will be
 created:
-- 
2.39.5
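
Supplementary illustration (not part of the patch): a minimal sketch of how the pieces relocated here -- label-based placement, ``count``, and a YAML service specification -- combine in one spec. The label ``mylabel`` and the file name ``mon.yaml`` are illustrative assumptions, not values taken from the patch.

.. code-block:: yaml

    # mon.yaml -- hypothetical spec combining label placement with a count
    service_type: mon
    placement:
      label: mylabel   # only hosts carrying the "mylabel" label are candidates
      count: 3         # deploy at most three monitor daemons on those hosts

Such a file would be applied with ``ceph orch apply -i mon.yaml``, as described in the note that this patch adds to service-management.rst.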