#. Make sure that the ``cephadm`` command line tool is available on each host
in the existing cluster. See :ref:`get-cephadm` to learn how.
-#. Prepare each host for use by ``cephadm`` by running this command:
+#. Prepare each host for use by ``cephadm`` by running this command on that host:
.. prompt:: bash #
cephadm prepare-host
#. Choose a version of Ceph to use for the conversion. This procedure will work
- with any release of Ceph that is Octopus (15.2.z) or later, inclusive. The
+ with any release of Ceph that is Octopus (15.2.z) or later. The
latest stable release of Ceph is the default. You might be upgrading from an
earlier Ceph release at the same time that you're performing this
- conversion; if you are upgrading from an earlier release, make sure to
+ conversion. If you are upgrading from an earlier release, make sure to
follow any upgrade-related instructions for that release.
- Pass the image to cephadm with the following command:
+ Pass the Ceph container image to cephadm with the following command:
.. prompt:: bash #
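+   # Illustrative form only: the global --image flag selects the Ceph
+   # container image that subsequent cephadm subcommands will use.
+   cephadm --image <image-name> <subcommand>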
cephadm ls
- Before starting the conversion process, ``cephadm ls`` shows all existing
- daemons to have a style of ``legacy``. As the adoption process progresses,
- adopted daemons will appear with a style of ``cephadm:v1``.
+ Before starting the conversion process, ``cephadm ls`` reports all existing
+ daemons with the style ``legacy``. As the adoption process progresses,
+ adopted daemons will appear with the style ``cephadm:v1``.
Adoption process
----------------
-#. Make sure that the ceph configuration has been migrated to use the cluster
- config database. If the ``/etc/ceph/ceph.conf`` is identical on each host,
- then the following command can be run on one single host and will affect all
- hosts:
+#. Make sure that the ceph configuration has been migrated to use the cluster's
+ central config database. If ``/etc/ceph/ceph.conf`` is identical on all
+ hosts, then the following command can be run on one host and will take
+ effect for all hosts:
.. prompt:: bash #
ceph config assimilate-conf -i /etc/ceph/ceph.conf
If there are configuration variations between hosts, you will need to repeat
- this command on each host. During this adoption process, view the cluster's
+ this command on each host. Note that if an option is set to conflicting
+ values on different hosts, the value from the last host processed will take
+ effect. During this adoption process, view the cluster's central
configuration to confirm that it is complete by running the following
command:
ceph config dump
-#. Adopt each monitor:
+#. Adopt each Monitor:
.. prompt:: bash #
cephadm adopt --style legacy --name mon.<hostname>
- Each legacy monitor should stop, quickly restart as a cephadm
+ Each legacy Monitor will stop, quickly restart as a cephadm
container, and rejoin the quorum.
-#. Adopt each manager:
+#. Adopt each Manager:
.. prompt:: bash #
cephadm adopt --style legacy --name mgr.<hostname>
-#. Enable cephadm:
+#. Enable cephadm orchestration:
.. prompt:: bash #
ceph mgr module enable cephadm
ceph orch set backend cephadm
-#. Generate an SSH key:
+#. Generate an SSH key for cephadm:
.. prompt:: bash #
ceph cephadm generate-key
ceph cephadm get-pub-key > ~/ceph.pub
-#. Install the cluster SSH key on each host in the cluster:
+#. Install the cephadm SSH key on each host in the cluster:
.. prompt:: bash #
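+   # Typical key installation (assuming root SSH access; adjust the user if
+   # cephadm has been configured with a non-root SSH user):
+   ssh-copy-id -f -i ~/ceph.pub root@<host>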
.. note::
- It is also possible to have cephadm use a non-root user to SSH
+ It is also possible to arrange for cephadm to use a non-root user to SSH
into cluster hosts. This user needs to have passwordless sudo access.
- Use ``ceph cephadm set-user <user>`` and copy the SSH key to that user.
+ Use ``ceph cephadm set-user <user>`` and copy the SSH key to that user's
+ home directory on each host.
See :ref:`cephadm-ssh-user`
#. Tell cephadm which hosts to manage:
ceph orch host add <hostname> [ip-address]
- This will perform a ``cephadm check-host`` on each host before adding it;
- this check ensures that the host is functioning properly. The IP address
- argument is recommended; if not provided, then the host name will be resolved
- via DNS.
+ This will run ``cephadm check-host`` on each host before adding it.
+ This check ensures that the host is functioning properly. The IP address
+ argument is recommended. If the address is not provided, then the host name
+ will be resolved via DNS.
#. Verify that the adopted Monitor and Manager daemons are visible:
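+   For example, ``ceph -s`` should show all Monitors in quorum and the
+   Managers active or standby, and ``cephadm ls`` on each host should now
+   report these daemons with the style ``cephadm:v1``:
+
+   .. prompt:: bash #
+
+      ceph -s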
cephadm adopt --style legacy --name osd.1
cephadm adopt --style legacy --name osd.2
-#. Redeploy MDS daemons by telling cephadm how many daemons to run for
- each file system. List file systems by name with the command ``ceph fs
+#. Redeploy CephFS MDS daemons (if deployed) by telling cephadm how many daemons to run for
+ each file system. List CephFS file systems by name with the command ``ceph fs
ls``. After cephadm has deployed the new MDS daemons, stop and remove the
legacy MDS daemons:
systemctl stop ceph-mds.target
rm -rf /var/lib/ceph/mds/ceph-*
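+   The step of telling cephadm how many MDS daemons to run is expressed
+   through the orchestrator; a sketch of the command (the file system name
+   and the daemon count are illustrative):
+
+   .. prompt:: bash #
+
+      ceph orch apply mds <fs-name> --placement="<num-daemons>"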
-#. Redeploy RGW daemons. Cephadm manages RGW daemons by zone. For each
- zone, deploy new RGW daemons with cephadm:
+#. Redeploy Ceph Object Gateway RGW daemons if deployed. Cephadm manages RGW
+ daemons by zone. For each zone, deploy new RGW daemons with cephadm:
.. prompt:: bash #
ceph orch apply rgw <svc_id> [--realm=<realm>] [--zone=<zone>] [--port=<port>] [--ssl] [--placement=<placement>]
where *<placement>* can be a simple daemon count, or a list of
- specific hosts (see :ref:`orchestrator-cli-placement-spec`), and the
+ specific hosts (see :ref:`orchestrator-cli-placement-spec`). The
zone and realm arguments are needed only for a multisite setup.
After the daemons have started and you have confirmed that they are
- functioning, stop and remove the old, legacy daemons:
+ functioning, stop and remove the legacy daemons:
.. prompt:: bash #
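+   # Assumed cleanup commands, mirroring the MDS step above: stop the
+   # legacy RGW systemd units and remove their on-disk state.
+   systemctl stop ceph-radosgw.target
+   rm -rf /var/lib/ceph/radosgw/ceph-*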
=======================
Basic Ceph Client Setup
=======================
-Client machines require some basic configuration to interact with
-Ceph clusters. This section describes how to configure a client machine
-so that it can interact with a Ceph cluster.
+Client hosts require basic configuration to interact with
+Ceph clusters. This section describes how to perform this configuration.
.. note::
- Most client machines need to install only the `ceph-common` package
- and its dependencies. Such a setup supplies the basic `ceph` and
- `rados` commands, as well as other commands including `mount.ceph`
- and `rbd`.
+ Most client hosts need to install only the ``ceph-common`` package
+ and its dependencies. Such an installation supplies the basic ``ceph`` and
+ ``rados`` commands, as well as other commands including ``mount.ceph``
+ and ``rbd``.
Config File Setup
=================
-Client machines usually require smaller configuration files (here
-sometimes called "config files") than do full-fledged cluster members.
+Client hosts usually require smaller configuration files (here
+sometimes called "config files") than do back-end cluster hosts.
To generate a minimal config file, log into a host that has been
-configured as a client or that is running a cluster daemon, and then run the following command:
+configured as a client or that is running a cluster daemon, then
+run the following command:
.. prompt:: bash #
ceph config generate-minimal-conf
This command generates a minimal config file that tells the client how
-to reach the Ceph monitors. The contents of this file should usually
-be installed in ``/etc/ceph/ceph.conf``.
+to reach the Ceph Monitors. This file should usually
+be copied to ``/etc/ceph/ceph.conf`` on each client host.
Keyring Setup
=============
Most Ceph clusters run with authentication enabled. This means that
-the client needs keys in order to communicate with the machines in the
-cluster. To generate a keyring file with credentials for `client.fs`,
+the client needs keys in order to communicate with Ceph daemons.
+To generate a keyring file with credentials for ``client.fs``,
log into a running cluster member and run the following command:
.. prompt:: bash $
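+   # Illustrative: create (or fetch) credentials for the client.fs identity.
+   ceph auth get-or-create client.fs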
The resulting output is directed into a keyring file, typically
``/etc/ceph/ceph.keyring``.
-To gain a broader understanding of client keyring distribution and administration, you should read :ref:`client_keyrings_and_configs`.
+To gain a broader understanding of client keyring distribution and administration,
+you should read :ref:`client_keyrings_and_configs`.
-To see an example that explains how to distribute ``ceph.conf`` configuration files to hosts that are tagged with the ``bare_config`` label, you should read the section called "Distributing ceph.conf to hosts tagged with bare_config" in the section called :ref:`etc_ceph_conf_distribution`.
+To see an example that explains how to distribute ``ceph.conf`` configuration
+files to hosts that are tagged with the ``bare_config`` label, you should read
+the subsection named "Distributing ceph.conf to hosts tagged with bare_config"
+under the heading :ref:`etc_ceph_conf_distribution`.
.. note::
- While not all podman versions have been actively tested against
- all Ceph versions, there are no known issues with using podman
+ While not all Podman versions have been actively tested against
+ all Ceph versions, there are no known issues with using Podman
version 3.0 or greater with Ceph Quincy and later releases.
Deploying a new Ceph cluster
============================
-Cephadm creates a new Ceph cluster by "bootstrapping" on a single
+Cephadm creates a new Ceph cluster by bootstrapping a single
host, expanding the cluster to encompass any additional hosts, and
then deploying the needed services.
- Python 3
- Systemd
- Podman or Docker for running containers
-- Time synchronization (such as chrony or NTP)
+- Time synchronization (such as Chrony or the legacy ``ntpd``)
- LVM2 for provisioning storage devices
Any modern Linux distribution should be sufficient. Dependencies
#. a :ref:`curl-based installation<cephadm_install_curl>` method
#. :ref:`distribution-specific installation methods<cephadm_install_distros>`
+.. note:: Recent versions of cephadm are distributed as an executable compiled
+ from source code. Unlike with earlier versions of Ceph, it is no longer
+ sufficient to copy a single script from Ceph's git tree and run it.
-.. _cephadm_install_curl:
+.. _cephadm_install_distros:
-curl-based installation
------------------------
+distribution-specific installations
+-----------------------------------
+
+.. important:: The methods of installing ``cephadm`` in this section are distinct from the curl-based method above. Use either the curl-based method above or one of the methods in this section, but not both the curl-based method and one of these.
+
+Some Linux distributions may already include up-to-date Ceph packages. In
+that case, you can install cephadm directly. For example:
-* Use ``curl`` to fetch the most recent version of the
- standalone script.
+ In Ubuntu:
.. prompt:: bash #
- :substitutions:
- curl --silent --remote-name --location https://github.com/ceph/ceph/raw/|stable-release|/src/cephadm/cephadm
+ apt install -y cephadm
- Make the ``cephadm`` script executable:
+ In CentOS Stream:
.. prompt:: bash #
+ :substitutions:
- chmod +x cephadm
+ dnf search release-ceph
+ dnf install --assumeyes centos-release-ceph-|stable-release|
+ dnf install --assumeyes cephadm
- This script can be run directly from the current directory:
+ In Fedora:
.. prompt:: bash #
- ./cephadm <arguments...>
+ dnf -y install cephadm
-* Although the standalone script is sufficient to get a cluster started, it is
- convenient to have the ``cephadm`` command installed on the host. To install
- the packages that provide the ``cephadm`` command, run the following
- commands:
+ In SUSE:
.. prompt:: bash #
- :substitutions:
- ./cephadm add-repo --release |stable-release|
- ./cephadm install
+ zypper install -y cephadm
- Confirm that ``cephadm`` is now in your PATH by running ``which``:
- .. prompt:: bash #
+.. _cephadm_install_curl:
- which cephadm
+curl-based installation
+-----------------------
- A successful ``which cephadm`` command will return this:
+* First, determine what version of Ceph you wish to install. You can use the releases
+ page to find the `latest active releases <https://docs.ceph.com/en/latest/releases/#active-releases>`_.
+ For example, we might find that ``18.2.1`` is the latest
+ active release.
- .. code-block:: bash
+* Use ``curl`` to fetch a build of cephadm for that release.
- /usr/sbin/cephadm
+ .. prompt:: bash #
+ :substitutions:
-.. _cephadm_install_distros:
+      CEPH_RELEASE=18.2.1 # replace this with the active release
+ curl --silent --remote-name --location https://download.ceph.com/rpm-${CEPH_RELEASE}/el9/noarch/cephadm
-distribution-specific installations
------------------------------------
+ Ensure the ``cephadm`` file is executable:
-.. important:: The methods of installing ``cephadm`` in this section are distinct from the curl-based method above. Use either the curl-based method above or one of the methods in this section, but not both the curl-based method and one of these.
+ .. prompt:: bash #
-Some Linux distributions may already include up-to-date Ceph packages. In
-that case, you can install cephadm directly. For example:
+ chmod +x cephadm
- In Ubuntu:
+ This file can be run directly from the current directory:
.. prompt:: bash #
- apt install -y cephadm
+ ./cephadm <arguments...>
- In CentOS Stream:
+* If you encounter any issues with running cephadm due to errors including
+ the message ``bad interpreter``, then you may not have Python or
+ the correct version of Python installed. The cephadm tool requires Python 3.6
+ or later. You can manually run cephadm with a particular version of Python by
+ prefixing the command with your installed Python version. For example:
.. prompt:: bash #
:substitutions:
- dnf search release-ceph
- dnf install --assumeyes centos-release-ceph-|stable-release|
- dnf install --assumeyes cephadm
+ python3.8 ./cephadm <arguments...>
- In Fedora:
+* Although the standalone cephadm is sufficient to bootstrap a cluster, it is
+ best to have the ``cephadm`` command installed on the host. To install
+ the packages that provide the ``cephadm`` command, run the following
+ commands:
.. prompt:: bash #
+ :substitutions:
- dnf -y install cephadm
+ ./cephadm add-repo --release |stable-release|
+ ./cephadm install
- In SUSE:
+ Confirm that ``cephadm`` is now in your PATH by running ``which``:
.. prompt:: bash #
- zypper install -y cephadm
+ which cephadm
+ A successful ``which cephadm`` command will return this:
+ .. code-block:: bash
+
+ /usr/sbin/cephadm
Bootstrap a new cluster
=======================
The first step in creating a new Ceph cluster is running the ``cephadm
bootstrap`` command on the Ceph cluster's first host. The act of running the
``cephadm bootstrap`` command on the Ceph cluster's first host creates the Ceph
-cluster's first "monitor daemon", and that monitor daemon needs an IP address.
+cluster's first Monitor daemon.
You must pass the IP address of the Ceph cluster's first host to the ``ceph
bootstrap`` command, so you'll need to know the IP address of that host.
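+In its simplest form, the bootstrap command looks like this (substitute the
+first host's IP address):
+
+.. prompt:: bash #
+
+   cephadm bootstrap --mon-ip <mon-ip>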
This command will:
-* Create a monitor and manager daemon for the new cluster on the local
+* Create a Monitor and a Manager daemon for the new cluster on the local
host.
* Generate a new SSH key for the Ceph cluster and add it to the root
user's ``/root/.ssh/authorized_keys`` file.
* Write a copy of the public key to ``/etc/ceph/ceph.pub``.
* Write a minimal configuration file to ``/etc/ceph/ceph.conf``. This
- file is needed to communicate with the new cluster.
+ file is needed to communicate with Ceph daemons.
* Write a copy of the ``client.admin`` administrative (privileged!)
secret key to ``/etc/ceph/ceph.client.admin.keyring``.
* Add the ``_admin`` label to the bootstrap host. By default, any host
Further information about cephadm bootstrap
-------------------------------------------
-The default bootstrap behavior will work for most users. But if you'd like
+The default bootstrap process will work for most users. But if you'd like
immediately to know more about ``cephadm bootstrap``, read the list below.
Also, you can run ``cephadm bootstrap -h`` to see all of ``cephadm``'s
journald. If you want Ceph to write traditional log files to ``/var/log/ceph/$fsid``,
use the ``--log-to-file`` option during bootstrap.
-* Larger Ceph clusters perform better when (external to the Ceph cluster)
+* Larger Ceph clusters perform best when (external to the Ceph cluster)
public network traffic is separated from (internal to the Ceph cluster)
cluster traffic. The internal cluster traffic handles replication, recovery,
and heartbeats between OSD daemons. You can define the :ref:`cluster
network<cluster-network>` by supplying the ``--cluster-network`` option to the ``bootstrap``
- subcommand. This parameter must define a subnet in CIDR notation (for example
+ subcommand. This parameter must be a subnet in CIDR notation (for example
``10.90.90.0/24`` or ``fe80::/64``).
-* ``cephadm bootstrap`` writes to ``/etc/ceph`` the files needed to access
+* ``cephadm bootstrap`` writes to ``/etc/ceph`` files needed to access
the new cluster. This central location makes it possible for Ceph
packages installed on the host (e.g., packages that give access to the
cephadm command line interface) to find these files.
EOF
$ ./cephadm bootstrap --config initial-ceph.conf ...
-* The ``--ssh-user *<user>*`` option makes it possible to choose which SSH
+* The ``--ssh-user *<user>*`` option makes it possible to designate which SSH
user cephadm will use to connect to hosts. The associated SSH key will be
added to ``/home/*<user>*/.ssh/authorized_keys``. The user that you
designate with this option must have passwordless sudo access.
-* If you are using a container on an authenticated registry that requires
+* If you are using a container image from a registry that requires
login, you may add the argument:
* ``--registry-json <path to json file>``
Cephadm will attempt to log in to this registry so it can pull your container
and then store the login info in its config database. Other hosts added to
- the cluster will then also be able to make use of the authenticated registry.
+ the cluster will then also be able to make use of the authenticated container registry.
* See :ref:`cephadm-deployment-scenarios` for additional examples for using ``cephadm bootstrap``.
By default, a ``ceph.conf`` file and a copy of the ``client.admin`` keyring are
maintained in ``/etc/ceph`` on all hosts that have the ``_admin`` label. This
-label is initially applied only to the bootstrap host. We usually recommend
+label is initially applied only to the bootstrap host. We recommend
that one or more other hosts be given the ``_admin`` label so that the Ceph CLI
(for example, via ``cephadm shell``) is easily accessible on multiple hosts. To add
the ``_admin`` label to additional host(s), run a command of the following form:
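+For example (``host2`` and ``host3`` are illustrative host names):
+
+.. prompt:: bash #
+
+   ceph orch host label add host2 _admin
+   ceph orch host label add host3 _admin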
Adding additional MONs
======================
-A typical Ceph cluster has three or five monitor daemons spread
+A typical Ceph cluster has three or five Monitor daemons spread
across different hosts. We recommend deploying five
-monitors if there are five or more nodes in your cluster.
+Monitors if there are five or more nodes in your cluster. Most clusters do not
+benefit from seven or more Monitors.
Please follow :ref:`deploy_additional_monitors` to deploy additional MONs.
To deploy hyperconverged Ceph with TripleO, please refer to the TripleO documentation: `Scenario: Deploy Hyperconverged Ceph <https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/cephadm.html#scenario-deploy-hyperconverged-ceph>`_
-In other cases where the cluster hardware is not exclusively used by Ceph (hyperconverged),
+In other cases where the cluster hardware is not exclusively used by Ceph (converged infrastructure),
reduce the memory consumption of Ceph like so:
.. prompt:: bash #
- # hyperconverged only:
+ # converged only:
ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2
Then enable memory autotuning:
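+Autotuning is toggled with the ``osd_memory_target_autotune`` option:
+
+.. prompt:: bash #
+
+   ceph config set osd osd_memory_target_autotune true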
Single host
-----------
-To configure a Ceph cluster to run on a single host, use the
-``--single-host-defaults`` flag when bootstrapping. For use cases of this, see
-:ref:`one-node-cluster`.
+To deploy a Ceph cluster running on a single host, use the
+``--single-host-defaults`` flag when bootstrapping. For use cases, see
+:ref:`one-node-cluster`. Such clusters are generally not suitable for
+production.
+
The ``--single-host-defaults`` flag sets the following configuration options::
-------------------------------------
You might need to install cephadm in an environment that is not connected
-directly to the internet (such an environment is also called an "isolated
-environment"). This can be done if a custom container registry is used. Either
+directly to the Internet (an "isolated" or "airgapped"
+environment). This requires the use of a custom container registry. Either
of two kinds of custom container registry can be used in this scenario: (1) a
Podman-based or Docker-based insecure registry, or (2) a secure registry.
Note that this setup does not require installing the corresponding public key
from the private key passed to bootstrap on other nodes. In fact, cephadm will
reject the ``--ssh-public-key`` argument when passed along with ``--ssh-signed-cert``.
-Not because having the public key breaks anything, but because it is not at all needed
-for this setup and it helps bootstrap differentiate if the user wants the CA signed
-keys setup or standard pubkey encryption. What this means is, SSH key rotation
+This is not because the public key would break anything, but because it is not
+needed for this setup, and its absence lets the bootstrap command determine whether
+the user wants CA-signed keys or standard public-key authentication. This means that SSH key rotation
would simply be a matter of getting another key signed by the same CA and providing
cephadm with the new private key and signed cert. No additional distribution of
keys to cluster nodes is needed after the initial setup of the CA key as a trusted key,
Cluster Configuration Checks
----------------------------
-Cephadm periodically scans each of the hosts in the cluster in order
-to understand the state of the OS, disks, NICs etc. These facts can
-then be analysed for consistency across the hosts in the cluster to
+Cephadm periodically scans each host in the cluster in order
+to understand the state of the OS, disks, network interfaces, etc. This information can
+then be analyzed for consistency across the hosts in the cluster to
identify any configuration anomalies.
Enabling Cluster Configuration Checks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The configuration checks are an **optional** feature, and are enabled
+These configuration checks are an **optional** feature, and are enabled
by running the following command:
.. prompt:: bash #
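+   # The mgr/cephadm option below enables the periodic checks (option name
+   # as documented for the cephadm mgr module):
+   ceph config set mgr mgr/cephadm/config_checks_enabled true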
States Returned by Cluster Configuration Checks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The configuration checks are triggered after each host scan (1m). The
+Configuration checks are triggered after each host scan. The
cephadm log entries will show the current state and outcome of the
configuration checks as follows:
# ceph cephadm config-check ls
NAME HEALTHCHECK STATUS DESCRIPTION
- kernel_security CEPHADM_CHECK_KERNEL_LSM enabled checks SELINUX/Apparmor profiles are consistent across cluster hosts
- os_subscription CEPHADM_CHECK_SUBSCRIPTION enabled checks subscription states are consistent for all cluster hosts
- public_network CEPHADM_CHECK_PUBLIC_MEMBERSHIP enabled check that all hosts have a NIC on the Ceph public_network
+ kernel_security CEPHADM_CHECK_KERNEL_LSM enabled check that SELINUX/Apparmor profiles are consistent across cluster hosts
+ os_subscription CEPHADM_CHECK_SUBSCRIPTION enabled check that subscription states are consistent for all cluster hosts
+ public_network CEPHADM_CHECK_PUBLIC_MEMBERSHIP enabled check that all hosts have a network interface on the Ceph public_network
osd_mtu_size CEPHADM_CHECK_MTU enabled check that OSD hosts share a common MTU setting
- osd_linkspeed CEPHADM_CHECK_LINKSPEED enabled check that OSD hosts share a common linkspeed
- network_missing CEPHADM_CHECK_NETWORK_MISSING enabled checks that the cluster/public networks defined exist on the Ceph hosts
- ceph_release CEPHADM_CHECK_CEPH_RELEASE enabled check for Ceph version consistency - ceph daemons should be on the same release (unless upgrade is active)
- kernel_version CEPHADM_CHECK_KERNEL_VERSION enabled checks that the MAJ.MIN of the kernel on Ceph hosts is consistent
+ osd_linkspeed CEPHADM_CHECK_LINKSPEED enabled check that OSD hosts share a common network link speed
+ network_missing CEPHADM_CHECK_NETWORK_MISSING enabled check that the cluster/public networks as defined exist on the Ceph hosts
+ ceph_release CEPHADM_CHECK_CEPH_RELEASE enabled check for Ceph version consistency: all Ceph daemons should be the same release unless upgrade is in progress
+ kernel_version     CEPHADM_CHECK_KERNEL_VERSION     enabled  check that the maj.min version of the kernel is consistent across Ceph hosts
The name of each configuration check can be used to enable or disable a specific check by running a command of the following form:
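+For example, to disable the kernel LSM check listed above:
+
+.. prompt:: bash #
+
+   ceph cephadm config-check disable kernel_security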
CEPHADM_CHECK_SUBSCRIPTION
~~~~~~~~~~~~~~~~~~~~~~~~~~
-This check relates to the status of vendor subscription. This check is
-performed only for hosts using RHEL, but helps to confirm that all hosts are
+This check relates to the status of OS vendor subscription. This check is
+performed only for hosts using RHEL and helps to confirm that all hosts are
covered by an active subscription, which ensures that patches and updates are
available.
CEPHADM_CHECK_PUBLIC_MEMBERSHIP
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-All members of the cluster should have NICs configured on at least one of the
+All members of the cluster should have a network interface configured on at least one of the
public network subnets. Hosts that are not on the public network will rely on
routing, which may affect performance.
CEPHADM_CHECK_MTU
~~~~~~~~~~~~~~~~~
-The MTU of the NICs on OSDs can be a key factor in consistent performance. This
+The MTU of the network interfaces on OSD hosts can be a key factor in consistent performance. This
check examines hosts that are running OSD services to ensure that the MTU is
-configured consistently within the cluster. This is determined by establishing
+configured consistently within the cluster. This is determined by identifying
the MTU setting that the majority of hosts is using. Any anomalies result in a
-Ceph health check.
+health check.
CEPHADM_CHECK_LINKSPEED
~~~~~~~~~~~~~~~~~~~~~~~
-This check is similar to the MTU check. Linkspeed consistency is a factor in
-consistent cluster performance, just as the MTU of the NICs on the OSDs is.
-This check determines the linkspeed shared by the majority of OSD hosts, and a
-health check is run for any hosts that are set at a lower linkspeed rate.
+This check is similar to the MTU check. Link speed consistency is a factor in
+consistent cluster performance, as is the MTU of the OSD node network interfaces.
+This check determines the link speed shared by the majority of OSD hosts, and a
+health check is run for any hosts that are set at a lower link speed rate.
CEPHADM_CHECK_NETWORK_MISSING
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CEPHADM_CHECK_CEPH_RELEASE
~~~~~~~~~~~~~~~~~~~~~~~~~~
-Under normal operations, the Ceph cluster runs daemons under the same ceph
-release (that is, the Ceph cluster runs all daemons under (for example)
-Octopus). This check determines the active release for each daemon, and
+Under normal operations, the Ceph cluster runs daemons that are of the same Ceph
+release (for example, Reef). This check determines the active release for each daemon, and
reports any anomalies as a healthcheck. *This check is bypassed if an upgrade
-process is active within the cluster.*
+is in process.*
CEPHADM_CHECK_KERNEL_VERSION
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The OS kernel version (maj.min) is checked for consistency across the hosts.
+The OS kernel version (maj.min) is checked for consistency across hosts.
The kernel version of the majority of the hosts is used as the basis for
identifying anomalies.
ceph orch set backend ''
ceph mgr module disable cephadm
-These commands disable all of the ``ceph orch ...`` CLI commands. All
+These commands disable all ``ceph orch ...`` CLI commands. All
previously deployed daemon containers continue to run and will start just as
they did before you ran these commands.
ceph orch ls --service_name=<service-name> --format yaml
-This will return something in the following form:
+This will return information in the following form:
.. code-block:: yaml
Accessing the Admin Socket
--------------------------
-Each Ceph daemon provides an admin socket that bypasses the MONs (See
-:ref:`rados-monitoring-using-admin-socket`).
+Each Ceph daemon provides an admin socket through which runtime options can be set and statistics read. See
+:ref:`rados-monitoring-using-admin-socket`.
#. To access the admin socket, enter the daemon container on the host::
[root@mon1 ~]# cephadm enter --name <daemon-name>
-#. Run a command of the following form to see the admin socket's configuration::
+#. Run commands of the following forms to see the admin socket's configuration and other available actions::
[ceph: root@mon1 /]# ceph --admin-daemon /var/run/ceph/ceph-<daemon-name>.asok config show
+ [ceph: root@mon1 /]# ceph --admin-daemon /var/run/ceph/ceph-<daemon-name>.asok help
Running Various Ceph Tools
--------------------------------
Upgrading Ceph
==============
-Cephadm can safely upgrade Ceph from one bugfix release to the next. For
+Cephadm can safely upgrade Ceph from one point release to the next. For
example, you can upgrade from v15.2.0 (the first Octopus release) to the next
point release, v15.2.1.
----------------------
This alert (``UPGRADE_NO_STANDBY_MGR``) means that Ceph does not detect an
-active standby manager daemon. In order to proceed with the upgrade, Ceph
-requires an active standby manager daemon (which you can think of in this
+active standby Manager daemon. In order to proceed with the upgrade, Ceph
+requires an active standby Manager daemon (which you can think of in this
context as "a second Manager").
-You can ensure that Cephadm is configured to run 2 (or more) managers by
+You can ensure that Cephadm is configured to run two (or more) Managers by
running the following command:
.. prompt:: bash #
ceph orch apply mgr 2 # or more
-You can check the status of existing mgr daemons by running the following
+You can check the status of existing Manager daemons by running the following
command:
.. prompt:: bash #
ceph orch ps --daemon-type mgr
-If an existing mgr daemon has stopped, you can try to restart it by running the
+If an existing Manager daemon has stopped, you can try to restart it by running the
following command:
.. prompt:: bash #
=================================
For most users, upgrading requires nothing more complicated than specifying the
-Ceph version number to upgrade to. In such cases, cephadm locates the specific
+Ceph version to which to upgrade. In such cases, cephadm locates the specific
Ceph container image to use by combining the ``container_image_base``
configuration option (default: ``docker.io/ceph/ceph``) with a tag of
``vX.Y.Z``.