Most Ceph clusters run with authentication enabled. This means that
the client needs keys in order to communicate with Ceph daemons.
To generate a keyring file with credentials for ``client.fs``,
-log into an running cluster member and run the following command:
+log into a running cluster member and run the following command:
.. prompt:: bash #
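
   ceph auth get-or-create client.fs mon 'allow r' mds 'allow rw' osd 'allow rw' -o ceph.client.fs.keyring

The capabilities shown above are illustrative only; grant ``client.fs`` just
the capabilities that your deployment actually requires.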
against name, label and status simultaneously, or to filter against any
proper subset of name, label and status.
-The ``detail`` parameter provides more host related information for cephadm-based
+The ``detail`` parameter provides more host-related information for cephadm-based
clusters. For example:
.. prompt:: bash #
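
   ceph orch host ls --detail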
Offline Host Removal
--------------------
-If a host is offline and can not be recovered, it can be removed from the
+If a host is offline and cannot be recovered, it can be removed from the
cluster by running a command of the following form:
.. prompt:: bash #
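
   ceph orch host rm <host> --offline --force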
===========
The orchestrator supports assigning labels to hosts. Labels
-are free-form and have no particular meaning by itself and each host
+are free-form and have no particular meaning by themselves. Each host
can have multiple labels. They can be used to specify the placement
of daemons. For more information, see :ref:`orch-placement-by-labels`.
ceph orch host add my_hostname --labels=my_label1
ceph orch host add my_hostname --labels=my_label1,my_label2
-To add a label a existing host, run:
+To add a label to an existing host, run:
.. prompt:: bash #
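
   ceph orch host label add my_hostname my_label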
bootstrap was originally run), and the ``client.admin`` key is set to be distributed
to that host via the ``ceph orch client-keyring ...`` function. Adding this label
to additional hosts will normally cause cephadm to deploy configuration and keyring files
- in ``/etc/ceph``. Starting from versions 16.2.10 (Pacific) and 17.2.1 (Quincy) in
+ in ``/etc/ceph``. Starting from versions 16.2.10 (Pacific) and 17.2.1 (Quincy), in
addition to the default location ``/etc/ceph/``, cephadm also stores config and keyring
files in the ``/var/lib/ceph/<fsid>/config`` directory.
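For example, to distribute the admin configuration and keyring to an
additional host, run:

.. prompt:: bash #

   ceph orch host label add my_hostname _admin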
Removal from the CRUSH map will fail if there are OSDs deployed on the
host. If you would like to remove all the host's OSDs as well, please start
- by using the ``ceph orch host drain`` command to do so. Once the OSDs
+ by using the ``ceph orch host drain`` command to do so. Once the OSDs
have been removed, you may direct cephadm to remove the CRUSH bucket
along with the host using the ``--rm-crush-entry`` flag.
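For example:

.. prompt:: bash #

   ceph orch host drain my_hostname
   ceph orch host rm my_hostname --rm-crush-entry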
ceph orch daemon reconfig <name>
-Rotating a Daemon's Authenticate Key
-------------------------------------
+Rotating a Daemon's Authentication Key
+--------------------------------------
All Ceph and gateway daemons in the cluster have a secret key that is used to connect
to and authenticate with the cluster. This key can be rotated (i.e., replaced with a
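new key). To rotate a daemon's key, run a command of the following form:

.. prompt:: bash #

   ceph orch daemon rotate-key <name>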
``CEPHADM_STRAY_DAEMON``
~~~~~~~~~~~~~~~~~~~~~~~~
-One or more Ceph daemons are running but not are not managed by
+One or more Ceph daemons are running but are not managed by
*cephadm*. This may be because they were deployed using a different
tool, or because they were started manually. Those
services cannot currently be managed by cephadm (e.g., restarted,
----------------------------
Cephadm periodically scans each host in the cluster in order
-to understand the state of the OS, disks, network interfacess etc. This information can
+to understand the state of the OS, disks, network interfaces, etc. This information can
then be analyzed for consistency across the hosts in the cluster to
identify any configuration anomalies.
exist. Use the ``dirs`` property to create them if necessary.
* ``init_containers``
A list of "init container" definitions. An init container exists to
- run prepratory steps before the primary container starts. Init containers
- are optional. One or more container can be defined.
+ run preparatory steps before the primary container starts. Init containers
+ are optional. One or more containers can be defined.
Each definition can contain the following fields:
* ``image``
* ``envs``
A list of environment variables.
* ``privileged``
- A boolean indicate if the container should run with privileges or not. If
+A boolean indicating whether the container should run with privileges. If
left unspecified, the init container will inherit the primary container's
value.
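The following is a minimal sketch of a custom container spec that defines a
single init container. The image name, entrypoint, and environment variable
are illustrative placeholders, and the ``entrypoint`` field is assumed to be
accepted in init container definitions just as it is for the primary
container:

.. code-block:: yaml

    service_type: container
    service_id: example
    placement:
        hosts:
            - host1
    spec:
        image: quay.io/example/app:latest
        init_containers:
            - image: quay.io/example/app:latest
              entrypoint: /usr/local/bin/prepare.sh
              envs:
                  - MODE=setup
              privileged: false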
#. Prometheus: basic authentication is required to access the web portal and TLS is enabled for secure communication.
#. Alertmanager: basic authentication is required to access the web portal and TLS is enabled for secure communication.
#. Node Exporter: TLS is enabled for secure communication.
-#. Grafana: TLS is enabled and authentication is requiered to access the datasource information.
+#. Grafana: TLS is enabled and authentication is required to access the datasource information.
#. Cephadm service discovery endpoint: basic authentication is required to access service discovery information, and TLS is enabled for secure communication.
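Assuming the ``mgr/cephadm/secure_monitoring_stack`` setting (available in
recent releases), this secure mode can be enabled with:

.. prompt:: bash #

   ceph config set mgr mgr/cephadm/secure_monitoring_stack true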
In this secure setup, users will need to set up authentication
SMB support is under active development and many features may be
missing or immature. A Ceph MGR module, named smb, is available to help
- organize and manage SMB related featues. Unless the smb module
+ organize and manage SMB-related features. Unless the smb module
has been determined to be unsuitable for your needs, we recommend using that
module over directly using the smb service spec.
High-Availability or "transparent state migration") the feature flag
``clustered`` is needed. If this flag is not specified, cephadm may deploy
multiple smb servers, but they will lack the coordination needed for an actual
- Highly-Avaiable cluster. When the ``clustered`` flag is specified cephadm
+ Highly-Available cluster. When the ``clustered`` flag is specified, cephadm
will deploy additional containers that manage this coordination.
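A minimal sketch of a spec that enables clustering follows; the
``service_id``, placement count, and RADOS URIs are illustrative placeholders:

.. code-block:: yaml

    service_type: smb
    service_id: tango
    placement:
        count: 3
    spec:
        cluster_id: tango
        features:
            - clustered
        config_uri: rados://.smb/tango/scc.toml
        cluster_meta_uri: rados://.smb/tango/cluster.meta.json
        cluster_lock_uri: rados://.smb/tango/cluster.lock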
Additionally, the cluster_meta_uri and cluster_lock_uri values must be
specified. The former is used by cephadm to describe the smb cluster layout
`documentation for Samba <https://wiki.samba.org/index.php/Main_Page>`_ as
well as the `samba server container
<https://github.com/samba-in-kubernetes/samba-container/blob/master/docs/server.md>`_
-and the `configuation file
+and the `configuration file
<https://github.com/samba-in-kubernetes/sambacc/blob/master/docs/configuration.md>`_
it accepts.
cephadm logs --name <name-of-daemon>
-.. Note:: This works only when run on the same host that is running the daemon.
+.. note:: This works only when run on the same host that is running the daemon.
To get the logs of a daemon that is running on a different host, add the
``--fsid`` option to the command, as in the following example:
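
.. prompt:: bash #

   cephadm logs --fsid <fsid> --name <name-of-daemon>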
not be able to manage the cluster until quorum is restored.
In order to restore the quorum, remove unhealthy monitors
-form the monmap by following these steps:
+from the monmap by following these steps:
#. Stop all Monitors. Use ``ssh`` to connect to each Monitor's host, and then
while connected to the Monitor's host, use ``cephadm`` to stop the Monitor
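daemon:

   .. prompt:: bash #

      cephadm unit --name mon.<hostname> stop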
would use to create a shell. Modify the script by removing the ``--init``
argument and replacing it with the argument that joins the namespace used for
a running container. For example, assume we want to debug the Manager
-and have determnined that the Manager is running in a container named
+and have determined that the Manager is running in a container named
``ceph-bc615290-685b-11ee-84a6-525400220000-mgr-ceph0-sluwsk``. In this case,
the argument
``--pid=container:ceph-bc615290-685b-11ee-84a6-525400220000-mgr-ceph0-sluwsk``
large-scale CephFS deployments because the cluster cannot quickly reduce active MDS(s)
to ``1`` and a single active MDS cannot easily handle the load of all clients
even for a short time. Therefore, to upgrade MDS(s) without reducing ``max_mds``,
- the ``fail_fs`` option can to be set to ``true`` (default value is ``false``) prior
+ the ``fail_fs`` option can be set to ``true`` (default value is ``false``) prior
to initiating the upgrade:
.. prompt:: bash #
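
   ceph config set mgr mgr/orchestrator/fail_fs true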