From 31b15b5c4215b0286f24a745e36fc83cacea8ec2 Mon Sep 17 00:00:00 2001 From: Anthony D'Atri Date: Mon, 11 Mar 2024 03:04:47 -0400 Subject: [PATCH] doc/cephadm: Improve multiple files Signed-off-by: Anthony D'Atri (cherry picked from commit 800dd29e60fcd2bcd27db56d3fe45c58ddf10c8a) --- doc/cephadm/adoption.rst | 63 +++++++++++++++-------------- doc/cephadm/client-setup.rst | 36 +++++++++-------- doc/cephadm/compatibility.rst | 4 +- doc/cephadm/install.rst | 71 ++++++++++++++++++++------------- doc/cephadm/operations.rst | 53 ++++++++++++------------ doc/cephadm/troubleshooting.rst | 11 ++--- doc/cephadm/upgrade.rst | 14 +++---- 7 files changed, 137 insertions(+), 115 deletions(-) diff --git a/doc/cephadm/adoption.rst b/doc/cephadm/adoption.rst index 86254a16cd4..2ebce606c4f 100644 --- a/doc/cephadm/adoption.rst +++ b/doc/cephadm/adoption.rst @@ -22,20 +22,20 @@ Preparation #. Make sure that the ``cephadm`` command line tool is available on each host in the existing cluster. See :ref:`get-cephadm` to learn how. -#. Prepare each host for use by ``cephadm`` by running this command: +#. Prepare each host for use by ``cephadm`` by running this command on that host: .. prompt:: bash # cephadm prepare-host #. Choose a version of Ceph to use for the conversion. This procedure will work - with any release of Ceph that is Octopus (15.2.z) or later, inclusive. The + with any release of Ceph that is Octopus (15.2.z) or later. The latest stable release of Ceph is the default. You might be upgrading from an earlier Ceph release at the same time that you're performing this - conversion; if you are upgrading from an earlier release, make sure to + conversion. If you are upgrading from an earlier release, make sure to follow any upgrade-related instructions for that release. - Pass the image to cephadm with the following command: + Pass the Ceph container image to cephadm with the following command: .. prompt:: bash # @@ -50,25 +50,27 @@ Preparation cephadm ls - Before starting the conversion process, ``cephadm ls`` shows all existing - daemons to have a style of ``legacy``. As the adoption process progresses, - adopted daemons will appear with a style of ``cephadm:v1``. + Before starting the conversion process, ``cephadm ls`` reports all existing + daemons with the style ``legacy``. As the adoption process progresses, + adopted daemons will appear with the style ``cephadm:v1``. Adoption process ---------------- -#. Make sure that the ceph configuration has been migrated to use the cluster - config database. If the ``/etc/ceph/ceph.conf`` is identical on each host, - then the following command can be run on one single host and will affect all - hosts: +#. Make sure that the ceph configuration has been migrated to use the cluster's + central config database. If ``/etc/ceph/ceph.conf`` is identical on all + hosts, then the following command can be run on one host and will take + effect for all hosts: .. prompt:: bash # ceph config assimilate-conf -i /etc/ceph/ceph.conf If there are configuration variations between hosts, you will need to repeat - this command on each host. During this adoption process, view the cluster's + this command on each host, taking care that if there are conflicting option + settings across hosts, the values from the last host will be used. During this + adoption process, view the cluster's central configuration to confirm that it is complete by running the following command: @@ -76,36 +78,36 @@ Adoption process ceph config dump -#. Adopt each monitor: +#. Adopt each Monitor: .. 
prompt:: bash # cephadm adopt --style legacy --name mon.<hostname> - Each legacy monitor should stop, quickly restart as a cephadm + Each legacy Monitor will stop, quickly restart as a cephadm container, and rejoin the quorum. -#. Adopt each manager: +#. Adopt each Manager: .. prompt:: bash # cephadm adopt --style legacy --name mgr.<hostname> -#. Enable cephadm: +#. Enable cephadm orchestration: .. prompt:: bash # ceph mgr module enable cephadm ceph orch set backend cephadm -#. Generate an SSH key: +#. Generate an SSH key for cephadm: .. prompt:: bash # ceph cephadm generate-key ceph cephadm get-pub-key > ~/ceph.pub -#. Install the cluster SSH key on each host in the cluster: +#. Install the cephadm SSH key on each host in the cluster: .. prompt:: bash # @@ -118,9 +120,10 @@ Adoption process SSH keys. .. note:: - It is also possible to have cephadm use a non-root user to SSH + It is also possible to arrange for cephadm to use a non-root user to SSH into cluster hosts. This user needs to have passwordless sudo access. - Use ``ceph cephadm set-user <user>`` and copy the SSH key to that user. + Use ``ceph cephadm set-user <user>`` and copy the SSH key to that user's + home directory on each host. See :ref:`cephadm-ssh-user` #. Tell cephadm which hosts to manage: @@ -129,10 +132,10 @@ Adoption process ceph orch host add <hostname> [ip-address] - This will perform a ``cephadm check-host`` on each host before adding it; - this check ensures that the host is functioning properly. The IP address - argument is recommended; if not provided, then the host name will be resolved - via DNS. + This will run ``cephadm check-host`` on each host before adding it. + This check ensures that the host is functioning properly. The IP address + argument is recommended. If the address is not provided, then the host name + will be resolved via DNS. #. Verify that the adopted monitor and manager daemons are visible: @@ -153,8 +156,8 @@ Adoption process cephadm adopt --style legacy --name osd.1 cephadm adopt --style legacy --name osd.2 -#. Redeploy MDS daemons by telling cephadm how many daemons to run for - each file system. List file systems by name with the command ``ceph fs +#. Redeploy CephFS MDS daemons (if deployed) by telling cephadm how many daemons to run for + each file system. List CephFS file systems by name with the command ``ceph fs ls``. Run the following command on the master nodes to redeploy the MDS daemons: @@ -189,19 +192,19 @@ Adoption process systemctl stop ceph-mds.target rm -rf /var/lib/ceph/mds/ceph-* -#. Redeploy RGW daemons. Cephadm manages RGW daemons by zone. For each - zone, deploy new RGW daemons with cephadm: +#. Redeploy Ceph Object Gateway RGW daemons if deployed. Cephadm manages RGW + daemons by zone. For each zone, deploy new RGW daemons with cephadm: .. prompt:: bash # ceph orch apply rgw <svc_id> [--realm=<realm>] [--zone=<zone>] [--port=<port>] [--ssl] [--placement=<placement>] where *<placement>* can be a simple daemon count, or a list of - specific hosts (see :ref:`orchestrator-cli-placement-spec`), and the + specific hosts (see :ref:`orchestrator-cli-placement-spec`). The zone and realm arguments are needed only for a multisite setup. After the daemons have started and you have confirmed that they are - functioning, stop and remove the old, legacy daemons: + functioning, stop and remove the legacy daemons: ..
prompt:: bash # diff --git a/doc/cephadm/client-setup.rst b/doc/cephadm/client-setup.rst index f98ba798b5f..0f38773b12b 100644 --- a/doc/cephadm/client-setup.rst +++ b/doc/cephadm/client-setup.rst @@ -1,36 +1,36 @@ ======================= Basic Ceph Client Setup ======================= -Client machines require some basic configuration to interact with -Ceph clusters. This section describes how to configure a client machine -so that it can interact with a Ceph cluster. +Client hosts require basic configuration to interact with +Ceph clusters. This section describes how to perform this configuration. .. note:: - Most client machines need to install only the `ceph-common` package - and its dependencies. Such a setup supplies the basic `ceph` and - `rados` commands, as well as other commands including `mount.ceph` - and `rbd`. + Most client hosts need to install only the ``ceph-common`` package + and its dependencies. Such an installation supplies the basic ``ceph`` and + ``rados`` commands, as well as other commands including ``mount.ceph`` + and ``rbd``. Config File Setup ================= -Client machines usually require smaller configuration files (here -sometimes called "config files") than do full-fledged cluster members. +Client hosts usually require smaller configuration files (here +sometimes called "config files") than do back-end cluster hosts. To generate a minimal config file, log into a host that has been -configured as a client or that is running a cluster daemon, and then run the following command: +configured as a client or that is running a cluster daemon, then +run the following command: .. prompt:: bash # ceph config generate-minimal-conf This command generates a minimal config file that tells the client how -to reach the Ceph monitors. The contents of this file should usually -be installed in ``/etc/ceph/ceph.conf``. +to reach the Ceph Monitors. This file should usually +be copied to ``/etc/ceph/ceph.conf`` on each client host. Keyring Setup ============= Most Ceph clusters run with authentication enabled. This means that -the client needs keys in order to communicate with the machines in the -cluster. To generate a keyring file with credentials for `client.fs`, +the client needs keys in order to communicate with Ceph daemons. +To generate a keyring file with credentials for ``client.fs``, log into a running cluster member and run the following command: .. prompt:: bash $ @@ -40,6 +40,10 @@ log into a running cluster member and run the following command: The resulting output is directed into a keyring file, typically ``/etc/ceph/ceph.keyring``. -To gain a broader understanding of client keyring distribution and administration, you should read :ref:`client_keyrings_and_configs`. +To gain a broader understanding of client keyring distribution and administration, +you should read :ref:`client_keyrings_and_configs`. -To see an example that explains how to distribute ``ceph.conf`` configuration files to hosts that are tagged with the ``bare_config`` label, you should read the section called "Distributing ceph.conf to hosts tagged with bare_config" in the section called :ref:`etc_ceph_conf_distribution`. +To see an example that explains how to distribute ``ceph.conf`` configuration +files to hosts that are tagged with the ``bare_config`` label, you should read +the subsection named "Distributing ceph.conf to hosts tagged with bare_config" +under the heading :ref:`etc_ceph_conf_distribution`.
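To make the two steps above concrete: pulling both files from a cluster host over SSH might look like this (a sketch only; the host name ``cluster-host`` is hypothetical, and the ``client.fs`` entity must already exist with suitable caps):

.. prompt:: bash #

   ssh root@cluster-host ceph config generate-minimal-conf > /etc/ceph/ceph.conf
   ssh root@cluster-host ceph auth get-or-create client.fs > /etc/ceph/ceph.keyring
   # sanity check: confirm that the client can reach the Monitors
   ceph -n client.fs -s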
diff --git a/doc/cephadm/compatibility.rst b/doc/cephadm/compatibility.rst index 46ab62a6272..8dd301f1a22 100644 --- a/doc/cephadm/compatibility.rst +++ b/doc/cephadm/compatibility.rst @@ -30,8 +30,8 @@ This table shows which version pairs are expected to work or not work together: .. note:: - While not all podman versions have been actively tested against - all Ceph versions, there are no known issues with using podman + While not all Podman versions have been actively tested against + all Ceph versions, there are no known issues with using Podman version 3.0 or greater with Ceph Quincy and later releases. .. warning:: diff --git a/doc/cephadm/install.rst b/doc/cephadm/install.rst index b1aa736e225..0ab4531ffbd 100644 --- a/doc/cephadm/install.rst +++ b/doc/cephadm/install.rst @@ -4,7 +4,7 @@ Deploying a new Ceph cluster ============================ -Cephadm creates a new Ceph cluster by "bootstrapping" on a single +Cephadm creates a new Ceph cluster by bootstrapping a single host, expanding the cluster to encompass any additional hosts, and then deploying the needed services. @@ -18,7 +18,7 @@ Requirements - Python 3 - Systemd - Podman or Docker for running containers -- Time synchronization (such as chrony or NTP) +- Time synchronization (such as Chrony or the legacy ``ntpd``) - LVM2 for provisioning storage devices Any modern Linux distribution should be sufficient. Dependencies @@ -45,6 +45,13 @@ There are two ways to install ``cephadm``: Choose either the distribution-specific method or the curl-based method. Do not attempt to use both these methods on one system. +.. note:: Recent versions of cephadm are distributed as an executable compiled + from source code. Unlike for earlier versions of Ceph it is no longer + sufficient to copy a single script from Ceph's git tree and run it. If you + wish to run cephadm using a development version you should create your own + build of cephadm. See :ref:`compiling-cephadm` for details on how to create + your own standalone cephadm executable. + .. _cephadm_install_distros: distribution-specific installations @@ -85,9 +92,9 @@ that case, you can install cephadm directly. For example: curl-based installation ----------------------- -* First, determine what version of Ceph you will need. You can use the releases +* First, determine what version of Ceph you wish to install. You can use the releases page to find the `latest active releases `_. - For example, we might look at that page and find that ``18.2.0`` is the latest + For example, we might find that ``18.2.1`` is the latest active release. * Use ``curl`` to fetch a build of cephadm for that release. @@ -113,7 +120,7 @@ curl-based installation * If you encounter any issues with running cephadm due to errors including the message ``bad interpreter``, then you may not have Python or the correct version of Python installed. The cephadm tool requires Python 3.6 - and above. You can manually run cephadm with a particular version of Python by + or later. You can manually run cephadm with a particular version of Python by prefixing the command with your installed Python version. For example: .. prompt:: bash # @@ -121,6 +128,11 @@ curl-based installation python3.8 ./cephadm +* Although the standalone cephadm is sufficient to bootstrap a cluster, it is + best to have the ``cephadm`` command installed on the host. To install + the packages that provide the ``cephadm`` command, run the following + commands: + .. 
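For example, on a host where the standalone ``cephadm`` binary was fetched with ``curl`` as shown above, the package installation typically looks like this (a sketch; ``reef`` is an illustrative release name, so substitute your own):

.. prompt:: bash #

   ./cephadm add-repo --release reef
   ./cephadm install

Afterward, ``which cephadm`` should show the packaged location (for example ``/usr/sbin/cephadm``).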
_cephadm_update: update cephadm @@ -166,7 +178,7 @@ What to know before you bootstrap The first step in creating a new Ceph cluster is running the ``cephadm bootstrap`` command on the Ceph cluster's first host. The act of running the ``cephadm bootstrap`` command on the Ceph cluster's first host creates the Ceph -cluster's first "monitor daemon", and that monitor daemon needs an IP address. +cluster's first Monitor daemon. You must pass the IP address of the Ceph cluster's first host to the ``cephadm bootstrap`` command, so you'll need to know the IP address of that host. @@ -187,13 +199,13 @@ Run the ``cephadm bootstrap`` command: This command will: -* Create a monitor and manager daemon for the new cluster on the local +* Create a Monitor and a Manager daemon for the new cluster on the local host. * Generate a new SSH key for the Ceph cluster and add it to the root user's ``/root/.ssh/authorized_keys`` file. * Write a copy of the public key to ``/etc/ceph/ceph.pub``. * Write a minimal configuration file to ``/etc/ceph/ceph.conf``. This - file is needed to communicate with the new cluster. + file is needed to communicate with Ceph daemons. * Write a copy of the ``client.admin`` administrative (privileged!) secret key to ``/etc/ceph/ceph.client.admin.keyring``. * Add the ``_admin`` label to the bootstrap host. By default, any host @@ -205,7 +217,7 @@ This command will: Further information about cephadm bootstrap ------------------------------------------- -The default bootstrap behavior will work for most users. But if you'd like +The default bootstrap process will work for most users. But if you'd like immediately to know more about ``cephadm bootstrap``, read the list below. Also, you can run ``cephadm bootstrap -h`` to see all of ``cephadm``'s available options. @@ -216,15 +228,15 @@ available options. journald. If you want Ceph to write traditional log files to ``/var/log/ceph/$fsid``, use the ``--log-to-file`` option during bootstrap. -* Larger Ceph clusters perform better when (external to the Ceph cluster) +* Larger Ceph clusters perform best when (external to the Ceph cluster) public network traffic is separated from (internal to the Ceph cluster) cluster traffic. The internal cluster traffic handles replication, recovery, and heartbeats between OSD daemons. You can define the :ref:`cluster network` by supplying the ``--cluster-network`` option to the ``bootstrap`` - subcommand. This parameter must define a subnet in CIDR notation (for example + subcommand. This parameter must be a subnet in CIDR notation (for example ``10.90.90.0/24`` or ``fe80::/64``). -* ``cephadm bootstrap`` writes to ``/etc/ceph`` the files needed to access +* ``cephadm bootstrap`` writes to ``/etc/ceph`` files needed to access the new cluster. This central location makes it possible for Ceph packages installed on the host (e.g., packages that give access to the cephadm command line interface) to find these files. @@ -245,12 +257,12 @@ available options. EOF $ ./cephadm bootstrap --config initial-ceph.conf ... -* The ``--ssh-user *<user>*`` option makes it possible to choose which SSH +* The ``--ssh-user *<user>*`` option makes it possible to designate which SSH user cephadm will use to connect to hosts. The associated SSH key will be added to ``/home/*<user>*/.ssh/authorized_keys``. The user that you designate with this option must have passwordless sudo access.
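As an example, a bootstrap run that designates a non-root SSH user might look like this (a sketch; the user name ``cephuser`` and the Monitor IP address are hypothetical):

.. prompt:: bash #

   cephadm bootstrap --mon-ip 10.0.0.11 --ssh-user cephuser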
-* If you are using a container on an authenticated registry that requires +* If you are using a container image from a registry that requires login, you may add the argument: * ``--registry-json `` @@ -261,7 +273,7 @@ available options. Cephadm will attempt to log in to this registry so it can pull your container and then store the login info in its config database. Other hosts added to - the cluster will then also be able to make use of the authenticated registry. + the cluster will then also be able to make use of the authenticated container registry. * See :ref:`cephadm-deployment-scenarios` for additional examples for using ``cephadm bootstrap``. @@ -326,7 +338,7 @@ Add all hosts to the cluster by following the instructions in By default, a ``ceph.conf`` file and a copy of the ``client.admin`` keyring are maintained in ``/etc/ceph`` on all hosts that have the ``_admin`` label. This -label is initially applied only to the bootstrap host. We usually recommend +label is initially applied only to the bootstrap host. We recommend that one or more other hosts be given the ``_admin`` label so that the Ceph CLI (for example, via ``cephadm shell``) is easily accessible on multiple hosts. To add the ``_admin`` label to additional host(s), run a command of the following form: @@ -339,9 +351,10 @@ the ``_admin`` label to additional host(s), run a command of the following form: Adding additional MONs ====================== -A typical Ceph cluster has three or five monitor daemons spread +A typical Ceph cluster has three or five Monitor daemons spread across different hosts. We recommend deploying five -monitors if there are five or more nodes in your cluster. +Monitors if there are five or more nodes in your cluster. Most clusters do not +benefit from seven or more Monitors. Please follow :ref:`deploy_additional_monitors` to deploy additional MONs. @@ -366,12 +379,12 @@ See :ref:`osd_autotune`. To deploy hyperconverged Ceph with TripleO, please refer to the TripleO documentation: `Scenario: Deploy Hyperconverged Ceph `_ -In other cases where the cluster hardware is not exclusively used by Ceph (hyperconverged), +In other cases where the cluster hardware is not exclusively used by Ceph (converged infrastructure), reduce the memory consumption of Ceph like so: .. prompt:: bash # - # hyperconverged only: + # converged only: ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2 Then enable memory autotuning: @@ -400,9 +413,11 @@ Different deployment scenarios Single host ----------- -To configure a Ceph cluster to run on a single host, use the -``--single-host-defaults`` flag when bootstrapping. For use cases of this, see -:ref:`one-node-cluster`. +To deploy a Ceph cluster running on a single host, use the +``--single-host-defaults`` flag when bootstrapping. For use cases, see +:ref:`one-node-cluster`. Such clusters are generally not suitable for +production. + The ``--single-host-defaults`` flag sets the following configuration options:: @@ -419,8 +434,8 @@ Deployment in an isolated environment ------------------------------------- You might need to install cephadm in an environment that is not connected -directly to the internet (such an environment is also called an "isolated -environment"). This can be done if a custom container registry is used. Either +directly to the Internet (an "isolated" or "airgapped" +environment). This requires the use of a custom container registry. 
Either of two kinds of custom container registry can be used in this scenario: (1) a Podman-based or Docker-based insecure registry, or (2) a secure registry. @@ -569,9 +584,9 @@ in order to have cephadm use them for SSHing between cluster hosts Note that this setup does not require installing the corresponding public key from the private key passed to bootstrap on other nodes. In fact, cephadm will reject the ``--ssh-public-key`` argument when passed along with ``--ssh-signed-cert``. -Not because having the public key breaks anything, but because it is not at all needed -for this setup and it helps bootstrap differentiate if the user wants the CA signed -keys setup or standard pubkey encryption. What this means is, SSH key rotation +This is not because having the public key breaks anything, but rather because it is not at all needed +and helps the bootstrap command differentiate whether the user wants the CA signed +keys setup or standard pubkey encryption. What this means is that SSH key rotation would simply be a matter of getting another key signed by the same CA and providing cephadm with the new private key and signed cert. No additional distribution of keys to cluster nodes is needed after the initial setup of the CA key as a trusted key, diff --git a/doc/cephadm/operations.rst b/doc/cephadm/operations.rst index 5d8fdaca840..623cf1635a6 100644 --- a/doc/cephadm/operations.rst +++ b/doc/cephadm/operations.rst @@ -328,15 +328,15 @@ You can disable this health warning by running the following command: Cluster Configuration Checks ---------------------------- -Cephadm periodically scans each of the hosts in the cluster in order -to understand the state of the OS, disks, NICs etc. These facts can -then be analysed for consistency across the hosts in the cluster to +Cephadm periodically scans each host in the cluster in order +to understand the state of the OS, disks, network interfaces etc. This information can +then be analyzed for consistency across the hosts in the cluster to identify any configuration anomalies. Enabling Cluster Configuration Checks ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -The configuration checks are an **optional** feature, and are enabled +These configuration checks are an **optional** feature, and are enabled by running the following command: .. prompt:: bash # @@ -346,7 +346,7 @@ by running the following command: States Returned by Cluster Configuration Checks ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -The configuration checks are triggered after each host scan (1m). The +Configuration checks are triggered after each host scan.
The cephadm log entries will show the current state and outcome of the configuration checks as follows: @@ -383,14 +383,14 @@ To list all the configuration checks and their current states, run the following # ceph cephadm config-check ls NAME HEALTHCHECK STATUS DESCRIPTION - kernel_security CEPHADM_CHECK_KERNEL_LSM enabled checks SELINUX/Apparmor profiles are consistent across cluster hosts - os_subscription CEPHADM_CHECK_SUBSCRIPTION enabled checks subscription states are consistent for all cluster hosts - public_network CEPHADM_CHECK_PUBLIC_MEMBERSHIP enabled check that all hosts have a NIC on the Ceph public_network + kernel_security CEPHADM_CHECK_KERNEL_LSM enabled check that SELINUX/Apparmor profiles are consistent across cluster hosts + os_subscription CEPHADM_CHECK_SUBSCRIPTION enabled check that subscription states are consistent for all cluster hosts + public_network CEPHADM_CHECK_PUBLIC_MEMBERSHIP enabled check that all hosts have a network interface on the Ceph public_network osd_mtu_size CEPHADM_CHECK_MTU enabled check that OSD hosts share a common MTU setting - osd_linkspeed CEPHADM_CHECK_LINKSPEED enabled check that OSD hosts share a common linkspeed - network_missing CEPHADM_CHECK_NETWORK_MISSING enabled checks that the cluster/public networks defined exist on the Ceph hosts - ceph_release CEPHADM_CHECK_CEPH_RELEASE enabled check for Ceph version consistency - ceph daemons should be on the same release (unless upgrade is active) - kernel_version CEPHADM_CHECK_KERNEL_VERSION enabled checks that the MAJ.MIN of the kernel on Ceph hosts is consistent + osd_linkspeed CEPHADM_CHECK_LINKSPEED enabled check that OSD hosts share a common network link speed + network_missing CEPHADM_CHECK_NETWORK_MISSING enabled check that the cluster/public networks as defined exist on the Ceph hosts + ceph_release CEPHADM_CHECK_CEPH_RELEASE enabled check for Ceph version consistency: all Ceph daemons should be the same release unless upgrade is in progress + kernel_version CEPHADM_CHECK_KERNEL_VERSION enabled checks that the maj.min version of the kernel is consistent across Ceph hosts The name of each configuration check can be used to enable or disable a specific check by running a command of the following form: : @@ -414,31 +414,31 @@ flagged as an anomaly and a healthcheck (WARNING) state raised. CEPHADM_CHECK_SUBSCRIPTION ~~~~~~~~~~~~~~~~~~~~~~~~~~ -This check relates to the status of vendor subscription. This check is -performed only for hosts using RHEL, but helps to confirm that all hosts are +This check relates to the status of OS vendor subscription. This check is +performed only for hosts using RHEL and helps to confirm that all hosts are covered by an active subscription, which ensures that patches and updates are available. CEPHADM_CHECK_PUBLIC_MEMBERSHIP ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -All members of the cluster should have NICs configured on at least one of the +All members of the cluster should have a network interface configured on at least one of the public network subnets. Hosts that are not on the public network will rely on routing, which may affect performance. CEPHADM_CHECK_MTU ~~~~~~~~~~~~~~~~~ -The MTU of the NICs on OSDs can be a key factor in consistent performance. This +The MTU of the network interfaces on OSD hosts can be a key factor in consistent performance. This check examines hosts that are running OSD services to ensure that the MTU is -configured consistently within the cluster. This is determined by establishing +configured consistently within the cluster. 
This is determined by identifying the MTU setting that the majority of hosts is using. Any anomalies result in a -Ceph health check. +health check. CEPHADM_CHECK_LINKSPEED ~~~~~~~~~~~~~~~~~~~~~~~ -This check is similar to the MTU check. Linkspeed consistency is a factor in -consistent cluster performance, just as the MTU of the NICs on the OSDs is. -This check determines the linkspeed shared by the majority of OSD hosts, and a -health check is run for any hosts that are set at a lower linkspeed rate. +This check is similar to the MTU check. Link speed consistency is a factor in +consistent cluster performance, as is the MTU of the OSD node network interfaces. +This check determines the link speed shared by the majority of OSD hosts, and a +health check is run for any hosts that are set at a lower link speed rate. CEPHADM_CHECK_NETWORK_MISSING ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -448,15 +448,14 @@ a health check is raised. CEPHADM_CHECK_CEPH_RELEASE ~~~~~~~~~~~~~~~~~~~~~~~~~~ -Under normal operations, the Ceph cluster runs daemons under the same ceph -release (that is, the Ceph cluster runs all daemons under (for example) -Octopus). This check determines the active release for each daemon, and +Under normal operations, the Ceph cluster runs daemons that are of the same Ceph +release (for example, Reef). This check determines the active release for each daemon, and reports any anomalies as a healthcheck. *This check is bypassed if an upgrade -process is active within the cluster.* +is in progress.* CEPHADM_CHECK_KERNEL_VERSION ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -The OS kernel version (maj.min) is checked for consistency across the hosts. +The OS kernel version (maj.min) is checked for consistency across hosts. The kernel version of the majority of the hosts is used as the basis for identifying anomalies. diff --git a/doc/cephadm/troubleshooting.rst b/doc/cephadm/troubleshooting.rst index c0bf6f9a00a..a7afaa108c8 100644 --- a/doc/cephadm/troubleshooting.rst +++ b/doc/cephadm/troubleshooting.rst @@ -32,7 +32,7 @@ completely by running the following commands: ceph orch set backend '' ceph mgr module disable cephadm -These commands disable all of the ``ceph orch ...`` CLI commands. All +These commands disable all ``ceph orch ...`` CLI commands. All previously deployed daemon containers continue to run and will start just as they were before you ran these commands. @@ -56,7 +56,7 @@ following form: ceph orch ls --service_name=<service-name> --format yaml -This will return something in the following form: +This will return information in the following form: .. code-block:: yaml @@ -252,16 +252,17 @@ For more detail on operations of this kind, see Accessing the Admin Socket -------------------------- -Each Ceph daemon provides an admin socket that bypasses the MONs (See -:ref:`rados-monitoring-using-admin-socket`). +Each Ceph daemon provides an admin socket that allows runtime option setting and statistic reading. See +:ref:`rados-monitoring-using-admin-socket`. #. To access the admin socket, enter the daemon container on the host:: [root@mon1 ~]# cephadm enter --name <daemon-name> -#. Run a command of the following form to see the admin socket's configuration:: +#.
Run commands of the following forms to see the admin socket's configuration and other available actions:: [ceph: root@mon1 /]# ceph --admin-daemon /var/run/ceph/ceph-<daemon-name>.asok config show + [ceph: root@mon1 /]# ceph --admin-daemon /var/run/ceph/ceph-<daemon-name>.asok help Running Various Ceph Tools -------------------------------- diff --git a/doc/cephadm/upgrade.rst b/doc/cephadm/upgrade.rst index e0a9f610ae2..9bb1a6b4da4 100644 --- a/doc/cephadm/upgrade.rst +++ b/doc/cephadm/upgrade.rst @@ -2,7 +2,7 @@ Upgrading Ceph ============== -Cephadm can safely upgrade Ceph from one bugfix release to the next. For +Cephadm can safely upgrade Ceph from one point release to the next. For example, you can upgrade from v15.2.0 (the first Octopus release) to the next point release, v15.2.1. @@ -137,25 +137,25 @@ UPGRADE_NO_STANDBY_MGR ---------------------- This alert (``UPGRADE_NO_STANDBY_MGR``) means that Ceph does not detect an -active standby manager daemon. In order to proceed with the upgrade, Ceph -requires an active standby manager daemon (which you can think of in this +active standby Manager daemon. In order to proceed with the upgrade, Ceph +requires an active standby Manager daemon (which you can think of in this context as "a second manager"). -You can ensure that Cephadm is configured to run 2 (or more) managers by +You can ensure that Cephadm is configured to run two (or more) Managers by running the following command: .. prompt:: bash # ceph orch apply mgr 2 # or more -You can check the status of existing mgr daemons by running the following +You can check the status of existing Manager daemons by running the following command: .. prompt:: bash # ceph orch ps --daemon-type mgr -If an existing mgr daemon has stopped, you can try to restart it by running the +If an existing Manager daemon has stopped, you can try to restart it by running the following command: .. prompt:: bash # @@ -183,7 +183,7 @@ Using customized container images ================================= For most users, upgrading requires nothing more complicated than specifying the -Ceph version number to upgrade to. In such cases, cephadm locates the specific +Ceph version to which to upgrade. In such cases, cephadm locates the specific Ceph container image to use by combining the ``container_image_base`` configuration option (default: ``docker.io/ceph/ceph``) with a tag of ``vX.Y.Z``. -- 2.39.5
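To illustrate the default behavior described above, an upgrade to a specific release can be started and then monitored with commands like these (the version shown is an example only):

.. prompt:: bash #

   ceph orch upgrade start --ceph-version 18.2.1
   ceph orch upgrade status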