install
adoption
host-management
+ mon
osd
rgw
custom-container
Next, add all hosts to the cluster by following :ref:`cephadm-adding-hosts`.
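+
+For example, a host is typically added with a command of this form (a sketch;
+the hostname and IP below are placeholders, see the linked section for details):
+
+.. prompt:: bash #
+
+   ceph orch host add *<newhost>* [*<ip>*]
+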
-
-.. _deploy_additional_monitors:
-
-Deploy additional monitors (optional)
-=====================================
+Adding additional MONs
+======================
A typical Ceph cluster has three or five monitor daemons spread
across different hosts. We recommend deploying five
monitors if there are five or more nodes in your cluster.
-.. _CIDR: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation
-
-When Ceph knows what IP subnet the monitors should use it can automatically
-deploy and scale monitors as the cluster grows (or contracts). By default,
-Ceph assumes that other monitors should use the same subnet as the first
-monitor's IP.
-
-If your Ceph monitors (or the entire cluster) live on a single subnet,
-then by default cephadm automatically adds up to 5 monitors as you add new
-hosts to the cluster. No further steps are necessary.
-
-* If there is a specific IP subnet that should be used by monitors, you
- can configure that in `CIDR`_ format (e.g., ``10.1.2.0/24``) with:
-
- .. prompt:: bash #
-
- ceph config set mon public_network *<mon-cidr-network>*
-
- For example:
-
- .. prompt:: bash #
-
- ceph config set mon public_network 10.1.2.0/24
-
- Cephadm deploys new monitor daemons only on hosts that have IPs
- configured in the configured subnet.
-
-* If you want to adjust the default of 5 monitors, run this command:
-
- .. prompt:: bash #
-
- ceph orch apply mon *<number-of-monitors>*
-
-* To deploy monitors on a specific set of hosts, run this command:
-
- .. prompt:: bash #
-
- ceph orch apply mon *<host1,host2,host3,...>*
-
- Be sure to include the first (bootstrap) host in this list.
-
-* You can control which hosts the monitors run on by making use of
- host labels. To set the ``mon`` label to the appropriate
- hosts, run this command:
-
- .. prompt:: bash #
-
- ceph orch host label add *<hostname>* mon
-
- To view the current hosts and labels, run this command:
-
- .. prompt:: bash #
-
- ceph orch host ls
-
- For example:
-
- .. prompt:: bash #
-
- ceph orch host label add host1 mon
- ceph orch host label add host2 mon
- ceph orch host label add host3 mon
- ceph orch host ls
-
- .. code-block:: bash
-
- HOST ADDR LABELS STATUS
- host1 mon
- host2 mon
- host3 mon
- host4
- host5
-
- Tell cephadm to deploy monitors based on the label by running this command:
-
- .. prompt:: bash #
-
- ceph orch apply mon label:mon
-
-* You can explicitly specify the IP address or CIDR network for each monitor
- and control where it is placed. To disable automated monitor deployment, run
- this command:
-
- .. prompt:: bash #
-
- ceph orch apply mon --unmanaged
-
- To deploy each additional monitor:
-
- .. prompt:: bash #
-
- ceph orch daemon add mon *<host1:ip-or-network1> [<host1:ip-or-network-2>...]*
-
- For example, to deploy a second monitor on ``newhost1`` using an IP
- address ``10.1.2.123`` and a third monitor on ``newhost2`` in
- network ``10.1.2.0/24``, run the following commands:
-
- .. prompt:: bash #
-
- ceph orch apply mon --unmanaged
- ceph orch daemon add mon newhost1:10.1.2.123
- ceph orch daemon add mon newhost2:10.1.2.0/24
-
- .. note::
- The **apply** command can be confusing. For this reason, we recommend using
- YAML specifications.
-
- Each ``ceph orch apply mon`` command supersedes the one before it.
- This means that you must use the proper comma-separated list-based
- syntax when you want to apply monitors to more than one host.
- If you do not use the proper syntax, you will clobber your work
- as you go.
-
- For example:
-
- .. prompt:: bash #
-
- ceph orch apply mon host1
- ceph orch apply mon host2
- ceph orch apply mon host3
-
- This results in only one host having a monitor applied to it: host 3.
-
- (The first command creates a monitor on host1. Then the second command
- clobbers the monitor on host1 and creates a monitor on host2. Then the
- third command clobbers the monitor on host2 and creates a monitor on
- host3. In this scenario, at this point, there is a monitor ONLY on
- host3.)
-
- To make certain that a monitor is applied to each of these three hosts,
- run a command like this:
-
- .. prompt:: bash #
-
- ceph orch apply mon "host1,host2,host3"
-
- There is another way to apply monitors to multiple hosts: a ``yaml`` file
- can be used. Instead of using the "ceph orch apply mon" commands, run a
- command of this form:
-
- .. prompt:: bash #
-
- ceph orch apply -i file.yaml
-
- Here is a sample **file.yaml** file::
-
- service_type: mon
- placement:
- hosts:
- - host1
- - host2
- - host3
+Please follow :ref:`deploy_additional_monitors` to deploy additional MONs.
Adding Storage
==============
--- /dev/null
+===========
+MON Service
+===========
+
+.. _deploy_additional_monitors:
+
+Deploy additional monitors
+==========================
+
+A typical Ceph cluster has three or five monitor daemons spread
+across different hosts. We recommend deploying five
+monitors if there are five or more nodes in your cluster.
+
+.. _CIDR: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation
+
+When Ceph knows what IP subnet the monitors should use, it can automatically
+deploy and scale monitors as the cluster grows (or contracts). By default,
+Ceph assumes that other monitors should use the same subnet as the first
+monitor's IP.
+
+If your Ceph monitors (or the entire cluster) live on a single subnet,
+then by default cephadm automatically adds up to 5 monitors as you add new
+hosts to the cluster. No further steps are necessary.
+
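+To see which monitor daemons cephadm is currently running (for example, to
+confirm that monitors have been scheduled after new hosts were added), you
+can list them:
+
+.. prompt:: bash #
+
+   ceph orch ps --daemon-type mon
+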
+* If there is a specific IP subnet that should be used by monitors, you
+ can configure that in `CIDR`_ format (e.g., ``10.1.2.0/24``) with:
+
+ .. prompt:: bash #
+
+ ceph config set mon public_network *<mon-cidr-network>*
+
+ For example:
+
+ .. prompt:: bash #
+
+ ceph config set mon public_network 10.1.2.0/24
+
+ Cephadm deploys new monitor daemons only on hosts that have IP
+ addresses in the configured subnet.
+
+* If you want to adjust the default of 5 monitors, run this command:
+
+ .. prompt:: bash #
+
+ ceph orch apply mon *<number-of-monitors>*
+
+* To deploy monitors on a specific set of hosts, run this command:
+
+ .. prompt:: bash #
+
+ ceph orch apply mon *<host1,host2,host3,...>*
+
+ Be sure to include the first (bootstrap) host in this list.
+
+* You can control which hosts the monitors run on by making use of
+ host labels. To set the ``mon`` label to the appropriate
+ hosts, run this command:
+
+ .. prompt:: bash #
+
+ ceph orch host label add *<hostname>* mon
+
+ To view the current hosts and labels, run this command:
+
+ .. prompt:: bash #
+
+ ceph orch host ls
+
+ For example:
+
+ .. prompt:: bash #
+
+ ceph orch host label add host1 mon
+ ceph orch host label add host2 mon
+ ceph orch host label add host3 mon
+ ceph orch host ls
+
+ .. code-block:: bash
+
+ HOST ADDR LABELS STATUS
+ host1 mon
+ host2 mon
+ host3 mon
+ host4
+ host5
+
+ Tell cephadm to deploy monitors based on the label by running this command:
+
+ .. prompt:: bash #
+
+ ceph orch apply mon label:mon
+
+* You can explicitly specify the IP address or CIDR network for each monitor
+ and control where it is placed. To disable automated monitor deployment, run
+ this command:
+
+ .. prompt:: bash #
+
+ ceph orch apply mon --unmanaged
+
+ To deploy each additional monitor:
+
+ .. prompt:: bash #
+
+ ceph orch daemon add mon *<host1:ip-or-network1> [<host1:ip-or-network-2>...]*
+
+ For example, to deploy a second monitor on ``newhost1`` using an IP
+ address ``10.1.2.123`` and a third monitor on ``newhost2`` in
+ network ``10.1.2.0/24``, run the following commands:
+
+ .. prompt:: bash #
+
+ ceph orch apply mon --unmanaged
+ ceph orch daemon add mon newhost1:10.1.2.123
+ ceph orch daemon add mon newhost2:10.1.2.0/24
+
+ .. note::
+ The **apply** command can be confusing. For this reason, we recommend using
+ YAML specifications.
+
+ Each ``ceph orch apply mon`` command supersedes the one before it.
+ This means that you must use the proper comma-separated list-based
+ syntax when you want to apply monitors to more than one host.
+ If you do not use the proper syntax, you will clobber your work
+ as you go.
+
+ For example:
+
+ .. prompt:: bash #
+
+ ceph orch apply mon host1
+ ceph orch apply mon host2
+ ceph orch apply mon host3
+
+ This results in only one host having a monitor applied to it: host3.
+
+ (The first command creates a monitor on host1. Then the second command
+ clobbers the monitor on host1 and creates a monitor on host2. Then the
+ third command clobbers the monitor on host2 and creates a monitor on
+ host3. In this scenario, at this point, there is a monitor ONLY on
+ host3.)
+
+ To make certain that a monitor is applied to each of these three hosts,
+ run a command like this:
+
+ .. prompt:: bash #
+
+ ceph orch apply mon "host1,host2,host3"
+
+ There is another way to apply monitors to multiple hosts: a YAML file
+ can be used. Instead of using the ``ceph orch apply mon`` commands, run a
+ command of this form:
+
+ .. prompt:: bash #
+
+ ceph orch apply -i file.yaml
+
+ Here is a sample **file.yaml** file::
+
+ service_type: mon
+ placement:
+ hosts:
+ - host1
+ - host2
+ - host3
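+
+The label-based placement shown above can also be expressed in a service
+specification; a minimal sketch, assuming the ``mon`` label has already been
+added to the desired hosts::
+
+    service_type: mon
+    placement:
+      label: mon
+
+After applying a specification with ``ceph orch apply -i``, the stored service
+definition can be reviewed with:
+
+.. prompt:: bash #
+
+   ceph orch ls mon --export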
------------------------
In case the Ceph MONs cannot form a quorum, cephadm is not able
-to manage the cluster, until the quorum is restored.
+to manage the cluster until the quorum is restored.
In order to restore the MON quorum, remove unhealthy MONs
from the monmap by following these steps:
2. Identify a surviving monitor and log in to that host::
- ssh {mon-host}
+ ssh {mon-host}
cephadm enter --name mon.`hostname`
-3. Follow the steps in :ref:`rados-mon-remove-from-unhealthy`
+3. Follow the steps in :ref:`rados-mon-remove-from-unhealthy`
+
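+In brief, that procedure stops the monitor daemons, extracts the monmap from
+a surviving monitor, removes the unhealthy monitors from it, and injects the
+edited map back before restarting the daemon. A condensed sketch (placeholders
+in braces are examples; see the linked reference for the full procedure)::
+
+    ceph-mon -i {mon-id} --extract-monmap /tmp/monmap
+    monmaptool /tmp/monmap --rm {unhealthy-mon-id}
+    ceph-mon -i {mon-id} --inject-monmap /tmp/monmap
+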
Manually deploying a MGR daemon
-------------------------------
-
cephadm requires an MGR daemon in order to manage the cluster. If the last
MGR of a cluster was removed, follow these steps to manually deploy an MGR
daemon ``mgr.hostname.smfvfd`` on a random host of your cluster.
where ``journal`` is the filestore journal device, ``wal`` is the bluestore
write ahead log device, and ``all`` stands for all devices associated with the OSD
-
-Monitor and manager management
-==============================
-
-Creates or removes MONs or MGRs from the cluster. Orchestrator may return an
-error if it doesn't know how to do this transition.
-
-Update the number of monitor hosts::
-
- ceph orch apply mon --placement=<placement> [--dry-run]
-
-Where ``placement`` is a :ref:`orchestrator-cli-placement-spec`.
-
-Each host can optionally specify a network for the monitor to listen on.
-
-Update the number of manager hosts::
-
- ceph orch apply mgr --placement=<placement> [--dry-run]
-
-Where ``placement`` is a :ref:`orchestrator-cli-placement-spec`.
-
-..
- .. note::
-
- The host lists are the new full list of mon/mgr hosts
-
- .. note::
-
- specifying hosts is optional for some orchestrator modules
- and mandatory for others (e.g. Ansible).
-
-
-
.. _orchestrator-cli-cephfs:
Deploying CephFS