.. _cephadm_deploying_new_cluster:
==========================================
-Using cephadm to Deploy a New Ceph Cluster
+Using Cephadm to Deploy a New Ceph Cluster
==========================================
Cephadm creates a new Ceph cluster by bootstrapping a single
host, expanding the cluster to encompass any additional hosts, and
then deploying the needed services.
-.. highlight:: console
.. _cephadm-host-requirements:
Ceph.
-
.. _get-cephadm:
-Install cephadm
+Install Cephadm
===============
When installing cephadm there are two key steps: first you need to acquire
cephadm. See :ref:`compiling-cephadm` for details on how to create your own
standalone cephadm executable.
+
.. _cephadm_install_distros:
-distribution-specific installations
+Distribution-specific Installations
-----------------------------------
Some Linux distributions may already include up-to-date Ceph packages. In
zypper install -y cephadm
+
.. _cephadm_install_curl:
-Using curl to install cephadm
+Using Curl to Install Cephadm
-----------------------------
#. Determine which version of Ceph you will install. Use the releases page to
- find the `latest active releases
- <https://docs.ceph.com/en/latest/releases/#active-releases>`_. For example,
+ find the :ref:`active-releases`. For example,
you might find that ``18.2.1`` is the latest active release.
#. Use ``curl`` to fetch a build of cephadm for that release.
.. prompt:: bash #
- :substitutions:
-CEPH_RELEASE=18.2.0 # replace this with the active release
+CEPH_RELEASE=18.2.1 # replace this with the active release
curl --silent --remote-name --location https://download.ceph.com/rpm-${CEPH_RELEASE}/el9/noarch/cephadm
./cephadm <arguments...>
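+
+For example, to check which version you fetched (``version`` is a standard
+cephadm subcommand):
+
+.. prompt:: bash #
+
+   ./cephadm version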
-cephadm Requires Python 3.6 or Later
+
+Cephadm Requires Python 3.6 or Later
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-* ``cephadm`` requires Python 3.6 or later. If you encounter difficulties
- running ``cephadm``, then you may not have Python or the correct version of
- Python installed. This includes any errors that include the message ``bad
- interpreter``.
-
- You can manually run cephadm with a particular version of Python by prefixing
- the command with your installed Python version. For example:
+``cephadm`` requires Python 3.6 or later. If you encounter difficulties
+running ``cephadm``, you may not have Python or the correct version of
+Python installed. Errors that contain the message ``bad interpreter`` are a
+common symptom of this.
- .. prompt:: bash #
+You can manually run cephadm with a particular version of Python by prefixing
+the command with your installed Python version. For example:
+
+.. prompt:: bash #
+
+ python3.8 ./cephadm <arguments...>
- python3.8 ./cephadm <arguments...>
-Installing cephadm on the Host
+Installing Cephadm on the Host
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Although the standalone ``cephadm`` is sufficient to bootstrap a cluster, it is
/usr/sbin/cephadm
-Bootstrap a new cluster
+
+Bootstrap a New Cluster
=======================
-What to know before you bootstrap
+What to Know Before You Bootstrap
---------------------------------
The first step in creating a new Ceph cluster is running the ``cephadm
.. note:: If there are multiple networks and interfaces, be sure to choose one
that will be accessible by any host accessing the Ceph cluster.
-Running the bootstrap command
+
+Running the Bootstrap Command
-----------------------------
-Run the ``ceph bootstrap`` command:
+Run the ``cephadm bootstrap`` command:
.. prompt:: bash #
- cephadm bootstrap --mon-ip *<mon-ip>*
+ cephadm bootstrap --mon-ip <mon-ip>
This command will:
with this label will (also) get a copy of ``/etc/ceph/ceph.conf`` and
``/etc/ceph/ceph.client.admin.keyring``.
+
.. _cephadm-bootstrap-further-info:
-Further information about cephadm bootstrap
+Further Information about Cephadm Bootstrap
-------------------------------------------
The default bootstrap process will work for most users. But if you'd like
available options.
* By default, Ceph daemons send their log output to stdout/stderr, which is picked
- up by the container runtime (docker or podman) and (on most systems) sent to
+ up by the container runtime (Docker or Podman) and (on most systems) sent to
journald. If you want Ceph to write traditional log files to ``/var/log/ceph/$fsid``,
use the ``--log-to-file`` option during bootstrap.
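+
+  For example:
+
+  .. prompt:: bash #
+
+     cephadm bootstrap --mon-ip <mon-ip> --log-to-file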
cephadm command line interface) to find these files.
Daemon containers deployed with cephadm, however, do not need
- ``/etc/ceph`` at all. Use the ``--output-dir *<directory>*`` option
+ ``/etc/ceph`` at all. Use the ``--output-dir <directory>`` option
to put them in a different directory (for example, ``.``). This may help
avoid conflicts with an existing Ceph configuration (cephadm or
otherwise) on the same host.
* You can pass any initial Ceph configuration options to the new
cluster by putting them in a standard ini-style configuration file
- and using the ``--config *<config-file>*`` option. For example::
+ and using the ``--config <config-file>`` option. For example:
+
+ .. prompt:: bash # auto
- $ cat <<EOF > initial-ceph.conf
+ # cat <<EOF > initial-ceph.conf
[global]
- osd crush chooseleaf type = 0
+ osd_crush_chooseleaf_type = 0
EOF
- $ ./cephadm bootstrap --config initial-ceph.conf ...
+ # ./cephadm bootstrap --config initial-ceph.conf ...
-* The ``--ssh-user *<user>*`` option makes it possible to designate which SSH
+* The ``--ssh-user <user>`` option makes it possible to designate which SSH
user cephadm will use to connect to hosts. The associated SSH key will be
- added to ``~*<user>*/.ssh/authorized_keys``. The user that you
+ added to ``~<user>/.ssh/authorized_keys``. The user that you
designate with this option must have passwordless sudo access.
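+
+  For example, to bootstrap with a dedicated non-root SSH user (``cephadmin``
+  is a hypothetical name; the user must already exist on the host and have
+  passwordless sudo):
+
+  .. prompt:: bash #
+
+     cephadm bootstrap --mon-ip <mon-ip> --ssh-user cephadmin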
* If you are using a container image from a registry that requires
- login, you may add the argument:
+ login, you may add the argument ``--registry-json <path to JSON file>``.
- * ``--registry-json <path to json file>``
-
- example contents of JSON file with login info::
+ Example contents of a JSON file with login info::
{"url":"REGISTRY_URL", "username":"REGISTRY_USERNAME", "password":"REGISTRY_PASSWORD"}
* See :ref:`cephadm-deployment-scenarios` for additional examples for using ``cephadm bootstrap``.
+
.. _cephadm-enable-cli:
Enable Ceph CLI
with all of the Ceph packages installed. By default, if
configuration and keyring files are found in ``/etc/ceph`` on the
host, they are passed into the container environment so that the
- shell is fully functional. Note that when executed on a MON host,
- ``cephadm shell`` will infer the ``config`` from the MON container
+ shell is fully functional. Note that when executed on a Monitor host,
+ ``cephadm shell`` will infer the ``config`` from the Monitor container
instead of using the default configuration. If ``--mount <path>``
is given, then the host ``<path>`` (file or directory) will appear
under ``/mnt`` inside the container:
cephadm shell -- ceph -s
* You can install the ``ceph-common`` package, which contains all of the
- ceph commands, including ``ceph``, ``rbd``, ``mount.ceph`` (for mounting
+ Ceph tools, including ``ceph``, ``rbd``, ``mount.ceph`` (for mounting
CephFS file systems), etc.:
.. prompt:: bash #
ceph status
+
Adding Hosts
============
(for example, via ``cephadm shell``) is easily accessible on multiple hosts. To add
the ``_admin`` label to additional host(s), run a command of the following form:
- .. prompt:: bash #
+.. prompt:: bash #
- ceph orch host label add *<host>* _admin
+ ceph orch host label add <host> _admin
-Adding additional MONs
-======================
+Adding Additional Monitors
+==========================
A typical Ceph cluster has three or five Monitor daemons spread
across different hosts. We recommend deploying five
Monitors if there are five or more nodes in your cluster. Most clusters do not
benefit from seven or more Monitors.
-Please follow :ref:`deploy_additional_monitors` to deploy additional MONs.
+Please follow :ref:`deploy_additional_monitors` to deploy additional Monitors.
+
Adding Storage
==============
To add storage to the cluster, you can tell Ceph to consume any
available and unused device(s):
- .. prompt:: bash #
+.. prompt:: bash #
- ceph orch apply osd --all-available-devices
+ ceph orch apply osd --all-available-devices
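+
+Alternatively, to create a single OSD from a specific device on a specific
+host, run a command of the following form:
+
+.. prompt:: bash #
+
+   ceph orch daemon add osd <host>:<device-path>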
See :ref:`cephadm-deploy-osds` for more detailed instructions.
-Enabling OSD memory autotuning
+
+Enabling OSD Memory Autotuning
------------------------------
-.. warning:: By default, cephadm enables ``osd_memory_target_autotune`` on bootstrap, with ``mgr/cephadm/autotune_memory_target_ratio`` set to ``.7`` of total host memory.
+.. warning:: By default, cephadm enables :confval:`osd_memory_target_autotune` on bootstrap, with ``mgr/cephadm/autotune_memory_target_ratio`` set to ``.7`` of total host memory.
See :ref:`osd_autotune`.
In other cases where the cluster hardware is not exclusively used by Ceph (converged infrastructure),
reduce the memory consumption of Ceph like so:
- .. prompt:: bash #
+.. prompt:: bash #
- # converged only:
- ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2
+ # converged only:
+ ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2
Then enable memory autotuning:
- .. prompt:: bash #
+.. prompt:: bash #
- ceph config set osd osd_memory_target_autotune true
+ ceph config set osd osd_memory_target_autotune true
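+
+You can confirm the resulting values with ``ceph config get``. For example:
+
+.. prompt:: bash #
+
+   ceph config get osd osd_memory_target_autotune
+   ceph config get mgr mgr/cephadm/autotune_memory_target_ratio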
Using Ceph
To use the *Ceph Object Gateway*, follow :ref:`cephadm-deploy-rgw`.
-To use *NFS*, follow :ref:`deploy-cephadm-nfs-ganesha`
+To use *NFS*, follow :ref:`deploy-cephadm-nfs-ganesha`.
+
+To use *iSCSI*, follow :ref:`cephadm-iscsi`.
-To use *iSCSI*, follow :ref:`cephadm-iscsi`
.. _cephadm-deployment-scenarios:
-Different deployment scenarios
+Different Deployment Scenarios
==============================
-Single host
+Single Host
-----------
To deploy a Ceph cluster running on a single host, use the
mgr/mgr_standby_modules = False
For more information on these options, see :ref:`one-node-cluster` and
-``mgr_standby_modules`` in :ref:`mgr-administrator-guide`.
+:confval:`mgr_standby_modules` in :ref:`mgr-administrator-guide`.
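+
+A minimal single-host bootstrap sketch (the ``--single-host-defaults`` flag
+applies the options discussed above):
+
+.. prompt:: bash #
+
+   cephadm bootstrap --mon-ip <mon-ip> --single-host-defaults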
+
.. _cephadm-airgap:
-Deployment in an isolated environment
+Deployment in an Isolated Environment
-------------------------------------
You might need to install cephadm in an environment that is not connected
* Ceph container image. See :ref:`containers`.
* Prometheus container image
- * Node exporter container image
+ * Node Exporter container image
* Grafana container image
 * Alertmanager container image
#. Create a temporary configuration file to store the names of the monitoring
images. (See :ref:`cephadm_monitoring-images`):
::
[mgr]
- mgr/cephadm/container_image_prometheus = *<hostname>*:5000/prometheus
- mgr/cephadm/container_image_node_exporter = *<hostname>*:5000/node_exporter
- mgr/cephadm/container_image_grafana = *<hostname>*:5000/grafana
- mgr/cephadm/container_image_alertmanager = *<hostname>*:5000/alertmanger
+ mgr/cephadm/container_image_prometheus = <hostname>:5000/prometheus
+ mgr/cephadm/container_image_node_exporter = <hostname>:5000/node_exporter
+ mgr/cephadm/container_image_grafana = <hostname>:5000/grafana
+ mgr/cephadm/container_image_alertmanager = <hostname>:5000/alertmanager
-#. Run bootstrap using the ``--image`` flag and pass the name of your
-   container image as the argument of the image flag. For example:
+#. Run bootstrap using the ``--image`` flag, passing the name of your
+   container image as its argument. For example:
.. prompt:: bash #
- cephadm --image *<hostname>*:5000/ceph/ceph bootstrap --mon-ip *<mon-ip>*
+ cephadm --image <hostname>:5000/ceph/ceph bootstrap --mon-ip <mon-ip>
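+
+If you also created the temporary monitoring-image configuration file above
+(here assumed to be saved as ``initial-ceph.conf``), pass it to the same
+invocation with ``--config``:
+
+.. prompt:: bash #
+
+   cephadm --image <hostname>:5000/ceph/ceph bootstrap --mon-ip <mon-ip> --config initial-ceph.conf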
-.. _cluster network: ../rados/configuration/network-config-ref#cluster-network
.. _cephadm-bootstrap-custom-ssh-keys:
-Deployment with custom SSH keys
+Deployment with Custom SSH Keys
-------------------------------
Bootstrap allows users to create their own private/public SSH key pair
-This setup allows users to use a key that has already been distributed to hosts
-the user wants in the cluster before bootstrap.
+This setup allows users to reuse a key that has already been distributed, before
+bootstrap, to the hosts they intend to add to the cluster.
-.. note:: In order for cephadm to connect to other hosts you'd like to add
+.. note:: In order for cephadm to connect to other hosts you would like to add
to the cluster, make sure the public key of the key pair provided is set up
as an authorized key for the ssh user being used, typically root. If you'd
- like more info on using a non-root user as the ssh user, see :ref:`cephadm-bootstrap-further-info`
+ like more info on using a non-root user as the ssh user, see :ref:`cephadm-bootstrap-further-info`.
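+
+A minimal sketch, assuming a pre-generated key pair named
+``cephadm-custom-key`` (a hypothetical name) whose public key has already been
+authorized on the hosts:
+
+.. prompt:: bash #
+
+   cephadm bootstrap --mon-ip <mon-ip> --ssh-private-key cephadm-custom-key --ssh-public-key cephadm-custom-key.pub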
.. _cephadm-bootstrap-ca-signed-keys:
-Deployment with CA signed SSH keys
+Deployment with CA-signed SSH Keys
----------------------------------
As an alternative to standard public key authentication, cephadm also supports
-deployment using CA signed keys. Before bootstrapping it's recommended to set up
-the CA public key as a trusted CA key on hosts you'd like to eventually add to
+deployment using CA-signed keys. Before bootstrapping, it is recommended to set up
+the CA public key as a trusted CA key on hosts you would like to eventually add to
the cluster. For example:
-.. prompt:: bash
-
- # we will act as our own CA, therefore we'll need to make a CA key
- [root@host1 ~]# ssh-keygen -t rsa -f ca-key -N ""
-
- # make the ca key trusted on the host we've generated it on
- # this requires adding in a line in our /etc/sshd_config
- # to mark this key as trusted
- [root@host1 ~]# cp ca-key.pub /etc/ssh
- [root@host1 ~]# vi /etc/ssh/sshd_config
- [root@host1 ~]# cat /etc/ssh/sshd_config | grep ca-key
- TrustedUserCAKeys /etc/ssh/ca-key.pub
- # now restart sshd so it picks up the config change
- [root@host1 ~]# systemctl restart sshd
-
- # now, on all other hosts we want in the cluster, also install the CA key
- [root@host1 ~]# scp /etc/ssh/ca-key.pub host2:/etc/ssh/
-
- # on other hosts, make the same changes to the sshd_config
- [root@host2 ~]# vi /etc/ssh/sshd_config
- [root@host2 ~]# cat /etc/ssh/sshd_config | grep ca-key
- TrustedUserCAKeys /etc/ssh/ca-key.pub
- # and restart sshd so it picks up the config change
- [root@host2 ~]# systemctl restart sshd
+.. prompt::
+ :language: bash
+ :prompts: [root@host1 ~]#,[root@host2 ~]#
+ :modifiers: auto
+
+ # we will act as our own CA, therefore we'll need to make a CA key
+ [root@host1 ~]# ssh-keygen -t rsa -f ca-key -N ""
+
+ # make the ca key trusted on the host we've generated it on
+ # this requires adding a line to our /etc/ssh/sshd_config
+ # to mark this key as trusted
+ [root@host1 ~]# cp ca-key.pub /etc/ssh
+ [root@host1 ~]# vi /etc/ssh/sshd_config
+ [root@host1 ~]# cat /etc/ssh/sshd_config | grep ca-key
+ TrustedUserCAKeys /etc/ssh/ca-key.pub
+ # now restart sshd so it picks up the config change
+ [root@host1 ~]# systemctl restart sshd
+
+ # now, on all other hosts we want in the cluster, also install the CA key
+ [root@host1 ~]# scp /etc/ssh/ca-key.pub host2:/etc/ssh/
+
+ # on other hosts, make the same changes to the sshd_config
+ [root@host2 ~]# vi /etc/ssh/sshd_config
+ [root@host2 ~]# cat /etc/ssh/sshd_config | grep ca-key
+ TrustedUserCAKeys /etc/ssh/ca-key.pub
+ # and restart sshd so it picks up the config change
+ [root@host2 ~]# systemctl restart sshd
Once the CA key has been installed and marked as a trusted key, you are ready
-to use a private key/CA signed cert combination for SSH. Continuing with our
-current example, we will create a new key-pair for for host access and then
-sign it with our CA key
-
-.. prompt:: bash
-
- # make a new key pair
- [root@host1 ~]# ssh-keygen -t rsa -f cephadm-ssh-key -N ""
- # sign the private key. This will create a new cephadm-ssh-key-cert.pub
- # note here we're using user "root". If you'd like to use a non-root
- # user the arguments to the -I and -n params would need to be adjusted
- # Additionally, note the -V param indicates how long until the cert
- # this creates will expire
- [root@host1 ~]# ssh-keygen -s ca-key -I user_root -n root -V +52w cephadm-ssh-key
- [root@host1 ~]# ls
- ca-key ca-key.pub cephadm-ssh-key cephadm-ssh-key-cert.pub cephadm-ssh-key.pub
-
- # verify our signed key is working. To do this, make sure the generated private
- # key ("cephadm-ssh-key" in our example) and the newly signed cert are stored
- # in the same directory. Then try to ssh using the private key
- [root@host1 ~]# ssh -i cephadm-ssh-key host2
-
-Once you have your private key and corresponding CA signed cert and have tested
+to use a private key/CA-signed cert combination for SSH. Continuing with our
+current example, we will create a new key pair for host access and then
+sign it with our CA key:
+
+.. prompt::
+ :language: bash
+ :prompts: [root@host1 ~]#,[root@host2 ~]#
+ :modifiers: auto
+
+ # make a new key pair
+ [root@host1 ~]# ssh-keygen -t rsa -f cephadm-ssh-key -N ""
+ # sign the public key. This will create a new cephadm-ssh-key-cert.pub
+ # note here we're using user "root". If you'd like to use a non-root
+ # user the arguments to the -I and -n params would need to be adjusted
+ # Additionally, note the -V param indicates how long until the cert
+ # this creates will expire
+ [root@host1 ~]# ssh-keygen -s ca-key -I user_root -n root -V +52w cephadm-ssh-key.pub
+ [root@host1 ~]# ls
+ ca-key ca-key.pub cephadm-ssh-key cephadm-ssh-key-cert.pub cephadm-ssh-key.pub
+
+ # verify our signed key is working. To do this, make sure the generated private
+ # key ("cephadm-ssh-key" in our example) and the newly signed cert are stored
+ # in the same directory. Then try to ssh using the private key
+ [root@host1 ~]# ssh -i cephadm-ssh-key host2
+
+Once you have your private key and the corresponding CA-signed cert, and have
-SSH authentication using that key works, you can pass those keys to bootstrap
+verified that SSH authentication with that key works, you can pass those keys
-in order to have cephadm use them for SSHing between cluster hosts
+to bootstrap in order to have cephadm use them for SSHing between cluster hosts:
-.. prompt:: bash
+.. prompt::
+ :language: bash
+ :prompts: [root@host1 ~]#,[root@host2 ~]#
+ :modifiers: auto
- [root@host1 ~]# cephadm bootstrap --mon-ip <ip-addr> --ssh-private-key cephadm-ssh-key --ssh-signed-cert cephadm-ssh-key-cert.pub
+ [root@host1 ~]# cephadm bootstrap --mon-ip <ip-addr> --ssh-private-key cephadm-ssh-key --ssh-signed-cert cephadm-ssh-key-cert.pub
Note that this setup does not require installing the corresponding public key
from the private key passed to bootstrap on other nodes. In fact, cephadm will
reject the ``--ssh-public-key`` argument when passed along with ``--ssh-signed-cert``.
This is not because having the public key breaks anything, but rather because it is not at all needed
-and helps the bootstrap command differentiate if the user wants the CA signed
+and helps the bootstrap command determine whether the user wants the CA-signed
-keys setup or standard pubkey encryption. What this means is that SSH key rotation
+key setup or standard pubkey authentication. What this means is that SSH key rotation
would simply be a matter of getting another key signed by the same CA and providing
-cephadm with the new private key and signed cert. No additional distribution of
+cephadm with the new private key and the new signed certificate. No additional distribution of
keys to cluster nodes is needed after the initial setup of the CA key as a trusted key,
no matter how many new private key/signed cert pairs are rotated in.
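+
+For example, a rotation sketch that reuses the ``ca-key`` from the earlier
+example:
+
+.. prompt::
+   :language: bash
+   :prompts: [root@host1 ~]#
+   :modifiers: auto
+
+   # generate a replacement key pair and sign it with the same trusted CA
+   [root@host1 ~]# ssh-keygen -t rsa -f cephadm-ssh-key2 -N ""
+   [root@host1 ~]# ssh-keygen -s ca-key -I user_root -n root -V +52w cephadm-ssh-key2.pub
+
+The resulting ``cephadm-ssh-key2``/``cephadm-ssh-key2-cert.pub`` pair can then
+be given to cephadm in place of the old key and certificate.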