.. prompt:: bash #
- ceph orch host ls [--format yaml] [--host-pattern <name>] [--label <label>] [--host-status <status>]
+ ceph orch host ls [--format yaml] [--host-pattern <name>] [--label <label>] [--host-status <status>]
where the optional arguments ``host-pattern``, ``label``, and ``host-status`` are used for filtering.
``host-pattern`` is a regex that matches against hostnames and returns only the matching hosts.
.. prompt:: bash #
- ssh-copy-id -f -i /etc/ceph/ceph.pub root@*<new-host>*
+ ssh-copy-id -f -i /etc/ceph/ceph.pub root@*<new-host>*
For example:
.. prompt:: bash #
- ceph orch host add *<newhost>* [*<ip>*] [*<label1> ...*]
+ ceph orch host add *<newhost>* [*<ip>*] [*<label1> ...*]
For example:
.. prompt:: bash #
- ceph orch host add host4 10.10.0.104 --labels _admin
+ ceph orch host add host4 10.10.0.104 --labels _admin
.. _cephadm-removing-hosts:
.. prompt:: bash #
- ceph orch host drain *<host>*
+ ceph orch host drain *<host>*
The ``_no_schedule`` label will be applied to the host. See :ref:`cephadm-special-host-labels`.
.. prompt:: bash #
- ceph orch osd rm status
+ ceph orch osd rm status
See :ref:`cephadm-osd-removal` for more details about OSD removal.
.. prompt:: bash #
- ceph orch ps <host>
+ ceph orch ps <host>
Once all daemons are removed, you can remove the host with the following command:
.. prompt:: bash #
- ceph orch host rm <host>
+ ceph orch host rm <host>
Offline host removal
--------------------
.. prompt:: bash #
- ceph orch host rm <host> --offline --force
+ ceph orch host rm <host> --offline --force
This can potentially cause data loss, as OSDs will be forcefully purged from the cluster by calling ``osd purge-actual`` for each OSD.
Service specs that still contain this host should be manually updated.
can have multiple labels. They can be used to specify placement
of daemons. See :ref:`orch-placement-by-labels`.
-Labels can be added when adding a host with the ``--labels`` flag::
+Labels can be added when adding a host with the ``--labels`` flag:
- ceph orch host add my_hostname --labels=my_label1
- ceph orch host add my_hostname --labels=my_label1,my_label2
+.. prompt:: bash #
+
+ ceph orch host add my_hostname --labels=my_label1
+ ceph orch host add my_hostname --labels=my_label1,my_label2
+
+To add a label to an existing host, run:
+
+.. prompt:: bash #
-To add a label a existing host, run::
+ ceph orch host label add my_hostname my_label
- ceph orch host label add my_hostname my_label
+To remove a label, run:
-To remove a label, run::
+.. prompt:: bash #
- ceph orch host label rm my_hostname my_label
+ ceph orch host label rm my_hostname my_label
.. _cephadm-special-host-labels:
Maintenance Mode
================
-Place a host in and out of maintenance mode (stops all Ceph daemons on host)::
+Place a host into or out of maintenance mode (this stops all Ceph daemons on the host):
+
+.. prompt:: bash #
- ceph orch host maintenance enter <hostname> [--force]
- ceph orch host maintenance exit <hostname>
+ ceph orch host maintenance enter <hostname> [--force]
+ ceph orch host maintenance exit <hostname>
The ``--force`` flag, used when entering maintenance mode, allows the user to bypass warnings (but not alerts).
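+
+For example, to take ``host3`` out for servicing and then return it to the
+cluster (the hostname is illustrative):
+
+.. prompt:: bash #
+
+   ceph orch host maintenance enter host3 --force
+   ceph orch host maintenance exit host3
+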
Some servers and external enclosures may not register device removal or insertion with the
kernel. In these scenarios, you'll need to perform a host rescan. A rescan is typically
-non-disruptive, and can be performed with the following CLI command.::
+non-disruptive and can be performed with the following CLI command:
- ceph orch host rescan <hostname> [--with-summary]
+.. prompt:: bash #
+
+ ceph orch host rescan <hostname> [--with-summary]
The ``--with-summary`` flag provides a breakdown of the number of HBAs found and scanned, together
-with any that failed.::
+with any that failed:
+
+.. prompt:: bash [ceph:root@rh9-ceph1/]#
+
+ ceph orch host rescan rh9-ceph1 --with-summary
+
+::
- [ceph: root@rh9-ceph1 /]# ceph orch host rescan rh9-ceph1 --with-summary
- Ok. 2 adapters detected: 2 rescanned, 0 skipped, 0 failed (0.32s)
+ Ok. 2 adapters detected: 2 rescanned, 0 skipped, 0 failed (0.32s)
Creating many hosts at once
===========================
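+Many hosts can be added in a single step by applying a multi-document YAML
+host specification (a minimal sketch; the hostnames, addresses, and labels
+below are illustrative)::
+
+   service_type: host
+   hostname: node-00
+   addr: 192.168.0.10
+   labels:
+   - example1
+   ---
+   service_type: host
+   hostname: node-01
+   addr: 192.168.0.11
+
+The specification file is then submitted with:
+
+.. prompt:: bash #
+
+   ceph orch apply -i hosts.yaml
+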
key is generated automatically and no additional configuration
is necessary.
-A *new* SSH key can be generated with::
+A *new* SSH key can be generated with:
- ceph cephadm generate-key
+.. prompt:: bash #
-The public portion of the SSH key can be retrieved with::
+ ceph cephadm generate-key
- ceph cephadm get-pub-key
+The public portion of the SSH key can be retrieved with:
-The currently stored SSH key can be deleted with::
+.. prompt:: bash #
- ceph cephadm clear-key
+ ceph cephadm get-pub-key
-You can make use of an existing key by directly importing it with::
+The currently stored SSH key can be deleted with:
- ceph config-key set mgr/cephadm/ssh_identity_key -i <key>
- ceph config-key set mgr/cephadm/ssh_identity_pub -i <pub>
+.. prompt:: bash #
-You will then need to restart the mgr daemon to reload the configuration with::
+ ceph cephadm clear-key
- ceph mgr fail
+You can make use of an existing key by directly importing it with:
+
+.. prompt:: bash #
+
+ ceph config-key set mgr/cephadm/ssh_identity_key -i <key>
+ ceph config-key set mgr/cephadm/ssh_identity_pub -i <pub>
+
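+For example, a fresh key pair suitable for importing can be created with
+standard OpenSSH tooling (the file name ``ceph_key`` is illustrative):
+
+.. prompt:: bash #
+
+   ssh-keygen -t ed25519 -f ceph_key -N ""
+
+and then imported as shown above, using ``ceph_key`` and ``ceph_key.pub``.
+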
+You will then need to restart the mgr daemon to reload the configuration with:
+
+.. prompt:: bash #
+
+ ceph mgr fail
.. _cephadm-ssh-user:
and execute commands without prompting for a password. If you do not want
to use the "root" user (default option in cephadm), you must provide
cephadm the name of the user that is going to be used to perform all the
-cephadm operations. Use the command::
+cephadm operations. Use the command:
- ceph cephadm set-user <user>
+.. prompt:: bash #
+
+ ceph cephadm set-user <user>
Prior to running this, the cluster SSH key needs to be added to this user's
``authorized_keys`` file, and non-root users must have passwordless sudo access.
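+
+For example, for a non-root user named ``deploy`` (the name is illustrative),
+the cluster key can be installed with:
+
+.. prompt:: bash #
+
+   ssh-copy-id -f -i /etc/ceph/ceph.pub deploy@<host>
+
+and passwordless sudo can be granted by placing an entry such as
+``deploy ALL=(ALL) NOPASSWD: ALL`` in ``/etc/sudoers.d/deploy`` on each host.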
There are two ways to customize this configuration for your environment:
#. Import a customized configuration file that will be stored
- by the monitor with::
+ by the monitor with:
+
+ .. prompt:: bash #
- ceph cephadm set-ssh-config -i <ssh_config_file>
+ ceph cephadm set-ssh-config -i <ssh_config_file>
- To remove a customized SSH config and revert back to the default behavior::
+   To remove a customized SSH config and revert to the default behavior:
- ceph cephadm clear-ssh-config
+ .. prompt:: bash #
-#. You can configure a file location for the SSH configuration file with::
+ ceph cephadm clear-ssh-config
+
+#. You can configure a file location for the SSH configuration file with:
+
+ .. prompt:: bash #
- ceph config set mgr mgr/cephadm/ssh_config_file <path>
+ ceph config set mgr mgr/cephadm/ssh_config_file <path>
We do *not recommend* this approach. The path name must be
visible to *any* mgr daemon, and cephadm runs all daemons as
..
TODO: This chapter needs to provide way for users to configure
- Grafana in the dashboard, as this is right no very hard to do.
+ Grafana in the dashboard, as this is right now very hard to do.