ceph orch host add host2 10.10.0.102
ceph orch host add host3 10.10.0.103
It is best to explicitly provide the host IP address. If an address is
not provided, then the host name will be immediately resolved via
DNS and the result will be used.
One or more labels can also be included to immediately label the
new host. For example, by default the ``_admin`` label will make
cephadm maintain a copy of the ``ceph.conf`` file and a
``client.admin`` keyring file in ``/etc/ceph`` on that host.
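
For example, the following adds a host and applies the ``_admin`` label in one
step (the host name and address are illustrative):

.. prompt:: bash #

   ceph orch host add host4 10.10.0.104 --labels _admin
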
The ``_no_schedule`` and ``_no_conf_keyring`` labels will be applied to the
host. See :ref:`cephadm-special-host-labels`.
If you want to drain daemons but leave managed ``ceph.conf`` and keyring
files on the host, you may pass the ``--keep-conf-keyring`` flag to the
drain command.
This will apply the ``_no_schedule`` label to the host but not the
``_no_conf_keyring`` label.
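
For example (``<host>`` is a placeholder for the host's name):

.. prompt:: bash #

   ceph orch host drain <host> --keep-conf-keyring
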
All OSDs on the host will be scheduled to be removed. You can check the
progress of the OSD removal operation with the following command:
.. prompt:: bash #
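
   ceph orch osd rm status
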
Offline host removal
--------------------
If a host is offline and cannot be recovered, it can be removed from the
cluster by running a command of the following form:
.. prompt:: bash #
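
   ceph orch host rm <host> --offline --force
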
Rescanning Host Devices
=======================
Some servers and external enclosures may not register device removal or insertion with the
kernel. In these scenarios, you'll need to perform a device rescan on the appropriate host.
A rescan is typically non-disruptive, and can be performed with the following CLI command:
.. prompt:: bash #
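
   ceph orch host rescan <host>
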
The ``location`` attribute will only affect the initial CRUSH location. Subsequent
changes to the ``location`` property will be ignored. Also, removing a host will not remove
an associated CRUSH bucket unless the ``--rm-crush-entry`` flag is provided to the ``orch host rm`` command.
See also :ref:`crush_map_default_types`.
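
For reference, the ``location`` attribute is set in the host spec; a minimal
example follows (the host name, address, and bucket values are illustrative):

.. code-block:: yaml

   service_type: host
   hostname: node-00
   addr: 192.168.0.10
   location:
     rack: rack1
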
Removing a host from the CRUSH map
==================================
The ``ceph orch host rm`` command supports removing the associated host bucket
from the CRUSH map. This is done by providing the ``--rm-crush-entry`` flag.
.. prompt:: bash [ceph:root@host1/]#
ceph orch host rm host1 --rm-crush-entry
When this flag is specified, cephadm will attempt to remove the host bucket
from the CRUSH map as part of the host removal process. Note that if
it fails to do so, cephadm will report the failure and the host will remain under
cephadm control.
.. note::
Removal from the CRUSH map will fail if there are OSDs deployed on the
host. If you would like to remove all the host's OSDs as well, please start
by using the ``ceph orch host drain`` command to do so. Once the OSDs
have been removed, then you may direct cephadm to remove the CRUSH bucket
along with the host using the ``--rm-crush-entry`` flag.
OS Tuning Profiles
==================
Cephadm can be used to manage operating system tuning profiles that apply
``sysctl`` settings to sets of hosts.
To do so, create a YAML spec file in the following format:
.. code-block:: yaml
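
   # Illustrative example: the profile name, placement hosts, and sysctl
   # values below are placeholders; adjust them for your environment.
   profile_name: 23-mon-host-profile
   placement:
     hosts:
       - mon-host-01
       - mon-host-02
   settings:
     fs.file-max: 1000000
     vm.swappiness: '13'

Apply the tuned profile with a command of the following form:

.. prompt:: bash #
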
   ceph orch tuned-profile apply -i <tuned-profile-file-name>
This profile is written to a file under ``/etc/sysctl.d/`` on each host
specified in the ``placement`` block, then ``sysctl --system`` is
run on the host.
.. note::
The exact filename that the profile is written to within ``/etc/sysctl.d/``
is ``<profile-name>-cephadm-tuned-profile.conf``, where ``<profile-name>`` is
the ``profile_name`` setting that you specify in the YAML spec. We suggest
naming these profiles following the usual ``sysctl.d`` ``NN-xxxxx`` convention. Because
sysctl settings are applied in lexicographical order (sorted by the filename
in which the setting is specified), you may want to carefully choose
the ``profile_name`` in your spec so that it is applied before or after other
conf files. Careful selection ensures that values supplied here override or
do not override those in other ``sysctl.d`` files as desired.
.. note::
Applying tuning profiles is idempotent when the ``--no-overwrite`` option is
passed. Moreover, if the ``--no-overwrite`` option is passed, existing
profiles with the same name are not overwritten.
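
For example, a command of the following form applies a profile without
overwriting an existing profile of the same name (the file name is a placeholder):

.. prompt:: bash #

   ceph orch tuned-profile apply -i <tuned-profile-file-name> --no-overwrite
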
We do *not recommend* this approach. The path name must be
visible to *any* mgr daemon, and cephadm runs all daemons as
containers. That means that the file must either be placed
inside a customized container image for your deployment, or
manually distributed to the mgr data directory
(``/var/lib/ceph/<cluster-fsid>/mgr.<id>`` on the host, visible at
host name:
The FQDN (Fully Qualified Domain Name) of the system is the
name that the resolver(3) returns for the host name, for example
``ursula.example.com``. It is usually the short hostname followed by the DNS
domain name (the part after the first dot). You can check the FQDN
using ``hostname --fqdn`` or the domain name using ``dnsdomainname``.
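
For example, assuming the host's FQDN is ``ursula.example.com``, you can verify
what the host reports with:

.. prompt:: bash #

   hostname --fqdn
   dnsdomainname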