From 88e8e91f7289b4c47056ecae1d2d198c7e7259a8 Mon Sep 17 00:00:00 2001
From: Sebastian Wagner
Date: Thu, 18 Feb 2021 13:43:09 +0100
Subject: [PATCH] doc/cephadm: group host mgmt sections into one chapter

Signed-off-by: Sebastian Wagner
---
 doc/cephadm/host-management.rst | 133 ++++++++++++++++++++++++++++++++
 doc/cephadm/index.rst           |   2 +
 doc/cephadm/install.rst         |  35 +--------
 doc/cephadm/operations.rst      |  52 +------------
 doc/mgr/orchestrator.rst        |  48 ------------
 5 files changed, 140 insertions(+), 130 deletions(-)
 create mode 100644 doc/cephadm/host-management.rst

diff --git a/doc/cephadm/host-management.rst b/doc/cephadm/host-management.rst
new file mode 100644
index 00000000000..f003d375365
--- /dev/null
+++ b/doc/cephadm/host-management.rst
@@ -0,0 +1,133 @@
+.. _orchestrator-cli-host-management:
+
+===============
+Host Management
+===============
+
+To list hosts associated with the cluster:
+
+.. prompt:: bash #
+
+  ceph orch host ls [--format yaml]
+
+.. _cephadm-adding-hosts:
+
+Adding Hosts
+============
+
+To add each new host to the cluster, perform two steps:
+
+#. Install the cluster's public SSH key in the new host's root user's ``authorized_keys`` file:
+
+   .. prompt:: bash #
+
+     ssh-copy-id -f -i /etc/ceph/ceph.pub root@*<new-host>*
+
+   For example:
+
+   .. prompt:: bash #
+
+     ssh-copy-id -f -i /etc/ceph/ceph.pub root@host2
+     ssh-copy-id -f -i /etc/ceph/ceph.pub root@host3
+
+#. Tell Ceph that the new node is part of the cluster:
+
+   .. prompt:: bash #
+
+     ceph orch host add *newhost*
+
+   For example:
+
+   .. prompt:: bash #
+
+     ceph orch host add host2
+     ceph orch host add host3
+
+.. _cephadm-removing-hosts:
+
+Removing Hosts
+==============
+
+If the node that you want to remove is running OSDs, make sure to remove the OSDs from the node first.
+
+To remove a host from a cluster, do the following:
+
+For all Ceph service types except ``node-exporter`` and ``crash``, remove
+the host from the placement specification file (for example, cluster.yml).
+For example, if you are removing the host named host2, remove all occurrences of
+``- host2`` from all ``placement:`` sections.
+
+Update:
+
+.. code-block:: yaml
+
+  service_type: rgw
+  placement:
+    hosts:
+    - host1
+    - host2
+
+To:
+
+.. code-block:: yaml
+
+  service_type: rgw
+  placement:
+    hosts:
+    - host1
+
+Remove the host from cephadm's environment:
+
+.. prompt:: bash #
+
+  ceph orch host rm host2
+
+If the host is running ``node-exporter`` and ``crash`` services, remove them by
+running the following command on the host:
+
+.. prompt:: bash #
+
+  cephadm rm-daemon --fsid CLUSTER_ID --name SERVICE_NAME
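+
+For example, assuming a hypothetical cluster FSID and a host named ``host2``
+(cephadm typically names these daemons ``node-exporter.<hostname>`` and
+``crash.<hostname>``; ``ceph fsid`` prints the cluster FSID), the calls might
+look like this::
+
+    # example values only -- substitute your own FSID and daemon names
+    cephadm rm-daemon --fsid 2d2fd136-6df1-11ea-ae74-002590e526e8 --name node-exporter.host2
+    cephadm rm-daemon --fsid 2d2fd136-6df1-11ea-ae74-002590e526e8 --name crash.host2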
+
+Maintenance Mode
+================
+
+Place a host in and out of maintenance mode (stops all Ceph daemons on host)::
+
+    ceph orch host maintenance enter <hostname> [--force]
+    ceph orch host maintenance exit <hostname>
+
+The ``--force`` flag allows the user to bypass warnings (but not alerts) when
+entering maintenance mode.
+
+See also :ref:`cephadm-fqdn`.
+
+Host Specification
+==================
+
+Many hosts can be added at once using
+``ceph orch apply -i`` by submitting a multi-document YAML file::
+
+    ---
+    service_type: host
+    addr: node-00
+    hostname: node-00
+    labels:
+    - example1
+    - example2
+    ---
+    service_type: host
+    addr: node-01
+    hostname: node-01
+    labels:
+    - grafana
+    ---
+    service_type: host
+    addr: node-02
+    hostname: node-02
+
+This can be combined with service specifications (below) to create a cluster spec
+file to deploy a whole cluster in one command. See ``cephadm bootstrap --apply-spec``
+to do this during bootstrap. Cluster SSH keys must be copied to hosts prior to
+adding them.
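+
+As a minimal sketch, assuming the YAML above is saved to a hypothetical file
+named ``hosts.yaml``, it could be applied to a running cluster with::
+
+    # hosts.yaml is an example file name, not a required path
+    ceph orch apply -i hosts.yaml
+
+or passed to ``cephadm bootstrap --apply-spec hosts.yaml`` at bootstrap time.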
diff --git a/doc/cephadm/index.rst b/doc/cephadm/index.rst
index 19634d16fa4..ceeb9487759 100644
--- a/doc/cephadm/index.rst
+++ b/doc/cephadm/index.rst
@@ -30,6 +30,8 @@ versions of Ceph.
     stability
     install
     adoption
+    host-management
+
     upgrade
     Cephadm operations <operations>
     Cephadm monitoring <monitoring>
diff --git a/doc/cephadm/install.rst b/doc/cephadm/install.rst
index 3835da317da..904e15ece56 100644
--- a/doc/cephadm/install.rst
+++ b/doc/cephadm/install.rst
@@ -209,37 +209,10 @@ its status with:
 
   ceph status
 
+Adding Hosts
+============
 
-Add hosts to the cluster
-========================
-
-To add each new host to the cluster, perform two steps:
-
-#. Install the cluster's public SSH key in the new host's root user's ``authorized_keys`` file:
-
-   .. prompt:: bash #
-
-     ssh-copy-id -f -i /etc/ceph/ceph.pub root@*<new-host>*
-
-   For example:
-
-   .. prompt:: bash #
-
-     ssh-copy-id -f -i /etc/ceph/ceph.pub root@host2
-     ssh-copy-id -f -i /etc/ceph/ceph.pub root@host3
-
-#. Tell Ceph that the new node is part of the cluster:
-
-   .. prompt:: bash #
-
-     ceph orch host add *newhost*
-
-   For example:
-
-   .. prompt:: bash #
-
-     ceph orch host add host2
-     ceph orch host add host3
+Next, add all hosts to the cluster by following :ref:`cephadm-adding-hosts`.
 
 
 .. _deploy_additional_monitors:
@@ -553,4 +526,4 @@ Deploying custom containers
 ===========================
 It is also possible to choose different containers than the default containers to deploy Ceph. See
 :ref:`containers` for information about your options in this regard.
-.. _cluster network: ../rados/configuration/network-config-ref#cluster-network
\ No newline at end of file
+.. _cluster network: ../rados/configuration/network-config-ref#cluster-network
diff --git a/doc/cephadm/operations.rst b/doc/cephadm/operations.rst
index ec7fea6b2b4..75702555bb3 100644
--- a/doc/cephadm/operations.rst
+++ b/doc/cephadm/operations.rst
@@ -307,56 +307,6 @@ Then, run bootstrap referencing this file::
 
   cephadm bootstrap -c /root/ceph.conf ...
 
-.. _cephadm-removing-hosts:
-
-Removing Hosts
-==============
-
-If the node that want you to remove is running OSDs, make sure you remove the OSDs from the node.
-
-To remove a host from a cluster, do the following:
-
-For all Ceph service types, except for ``node-exporter`` and ``crash``, remove
-the host from the placement specification file (for example, cluster.yml).
-For example, if you are removing the host named host2, remove all occurrences of
-``- host2`` from all ``placement:`` sections.
-
-Update:
-
-.. code-block:: yaml
-
-  service_type: rgw
-  placement:
-    hosts:
-    - host1
-    - host2
-
-To:
-
-.. code-block:: yaml
-
-
-  service_type: rgw
-  placement:
-    hosts:
-    - host1
-
-Remove the host from cephadm's environment:
-
-.. code-block:: bash
-
-  ceph orch host rm host2
-
-See also :ref:`orchestrator-cli-host-management`.
-
-If the host is running ``node-exporter`` and crash services, remove them by running
-the following command on the host:
-
-.. code-block:: bash
-
-  cephadm rm-daemon --fsid CLUSTER_ID --name SERVICE_NAME
-
-
 .. _cephadm-spec-unmanaged:
 
 Disable automatic deployment of daemons
@@ -416,4 +366,4 @@ For example
   deploy a new daemon a few seconds later.
 
 * See :ref:`orchestrator-cli-create-osds` for special handling of unmanaged OSDs.
-* See also :ref:`cephadm-pause`
\ No newline at end of file
+* See also :ref:`cephadm-pause`
diff --git a/doc/mgr/orchestrator.rst b/doc/mgr/orchestrator.rst
index 8240076c491..22b1d9b56e2 100644
--- a/doc/mgr/orchestrator.rst
+++ b/doc/mgr/orchestrator.rst
@@ -58,54 +58,6 @@ Status
 Show current orchestrator mode and high-level status (whether the orchestrator
 plugin is available and operational)
 
-.. _orchestrator-cli-host-management:
-
-Host Management
-===============
-
-List hosts associated with the cluster::
-
-    ceph orch host ls
-
-Add and remove hosts::
-
-    ceph orch host add <hostname> [<addr>] [<labels>...]
-    ceph orch host rm <hostname>
-
-Place a host in and out of maintenance mode (stops all Ceph daemons on host)::
-
-    ceph orch host maintenance enter <hostname> [--force]
-    ceph orch host maintenace exit <hostname>
-
-Where the force flag when entering maintenance allows the user to bypass warnings (but not alerts)
-
-For cephadm, see also :ref:`cephadm-fqdn` and :ref:`cephadm-removing-hosts`.
-
-Host Specification
-------------------
-
-Many hosts can be added at once using
-``ceph orch apply -i`` by submitting a multi-document YAML file::
-
-    ---
-    service_type: host
-    addr: node-00
-    hostname: node-00
-    labels:
-    - example1
-    - example2
-    ---
-    service_type: host
-    addr: node-01
-    hostname: node-01
-    labels:
-    - grafana
-    ---
-    service_type: host
-    addr: node-02
-    hostname: node-02
-
-This can be combined with service specifications (below) to create a cluster spec file to deploy a whole cluster in one command. see ``cephadm bootstrap --apply-spec`` also to do this during bootstrap. Cluster SSH Keys must be copied to hosts prior to adding them.
 
 .. _orchestrator-host-labels:
 
-- 
2.39.5