Client machines can generally get away with a smaller config file than
a full-fledged cluster member. To generate a minimal config file, log
into a host that is already configured as a client or running a cluster
-daemon, and then run::
+daemon, and then run
+
+.. code-block:: bash
ceph config generate-minimal-conf
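+
+The output is a minimal ``ceph.conf`` that typically contains little more than
+the cluster fsid and the monitor addresses; the values below are illustrative:
+
+.. code-block:: none
+
+   # minimal ceph.conf for 3fabcdef-1234-5678-9abc-def012345678
+   [global]
+           fsid = 3fabcdef-1234-5678-9abc-def012345678
+           mon_host = [v2:192.168.0.1:3300/0,v1:192.168.0.1:6789/0]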
Most Ceph clusters are run with authentication enabled, and the client will
need keys in order to communicate with cluster machines. To generate a
keyring file with credentials for ``client.fs``, log into an extant cluster
-member and run::
+member and run
+
+.. code-block:: bash
ceph auth get-or-create client.fs
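+
+The output is a keyring entry for ``client.fs`` that can be saved to a keyring
+file on the client; the key value below is illustrative:
+
+.. code-block:: none
+
+   [client.fs]
+           key = AQBXDItfAAAAABAAxOCh0nPUDPVLsKHzNSpr6Q==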
domain name (the part after the first dot). You can check the FQDN
using ``hostname --fqdn`` or the domain name using ``dnsdomainname``.
- ::
+ .. code-block:: none
You cannot change the FQDN with hostname or dnsdomainname.
However, there is a special case that cephadm needs to consider.
In case there are fewer hosts selected by the placement specification than
-demanded by ``count``, cephadm will only deploy on selected hosts.
\ No newline at end of file
+demanded by ``count``, cephadm will only deploy on selected hosts.
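+
+As a sketch, with a placement like the following (service type and host names
+are illustrative), only three daemons are deployed even though ``count: 5``
+asks for five, because only three hosts are selected:
+
+.. code-block:: yaml
+
+    service_type: mon
+    placement:
+      count: 5
+      hosts:
+        - host1
+        - host2
+        - host3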
It gives the user an abstract way to tell Ceph which disks should turn into an OSD
with which configuration without knowing the specifics of device names and paths.
-Instead of doing this::
+Instead of doing this
- [monitor 1] # ceph orch daemon add osd *<host>*:*<path-to-device>*
+.. prompt:: bash [monitor.1]#
+
+ ceph orch daemon add osd *<host>*:*<path-to-device>*
for each device and each host, we can define a YAML or JSON file that allows us to describe
the layout. Here's the most basic example.
the glob pattern '*'. (The glob pattern matches against the registered hosts from ``host ls``)
There will be a more detailed section on host_pattern down below.
-and pass it to `osd create` like so::
+and pass it to ``ceph orch apply osd`` like so
+
+.. prompt:: bash [monitor.1]#
- [monitor 1] # ceph orch apply osd -i /path/to/osd_spec.yml
+ ceph orch apply osd -i /path/to/osd_spec.yml
This will go out on all the matching hosts and deploy these OSDs.
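+
+For reference, a minimal spec of this shape might look like the following (the
+``service_id`` is illustrative); it matches every host and turns all available
+devices into OSDs:
+
+.. code-block:: yaml
+
+    service_type: osd
+    service_id: default_drive_group
+    placement:
+      host_pattern: '*'
+    data_devices:
+      all: true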
Also, there is a ``--dry-run`` flag that can be passed to the ``apply osd`` command, which gives you a synopsis
of the proposed layout.
-Example::
+Example
- [monitor 1] # ceph orch apply osd -i /path/to/osd_spec.yml --dry-run
+.. prompt:: bash [monitor.1]#
+
+ ceph orch apply osd -i /path/to/osd_spec.yml --dry-run
You can assign disks to certain groups by their attributes using filters.
The attributes are based on ceph-volume's disk query. You can retrieve the information
-with::
+with
+
+.. code-block:: bash
ceph-volume inventory </path/to/disk>
Concrete examples:
-Includes disks of an exact size::
+Includes disks of an exact size
+
+.. code-block:: yaml
size: '10G'
-Includes disks which size is within the range::
+Includes disks whose size is within the range
+
+.. code-block:: yaml
size: '10G:40G'
-Includes disks less than or equal to 10G in size::
+Includes disks less than or equal to 10G in size
+
+.. code-block:: yaml
size: ':10G'
-Includes disks equal to or greater than 40G in size::
+Includes disks equal to or greater than 40G in size
+
+.. code-block:: yaml
size: '40G:'
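+
+Within a full spec, such a size filter goes under the device selection, for
+example (the ``service_id`` is illustrative):
+
+.. code-block:: yaml
+
+    service_type: osd
+    service_id: osds_on_large_disks
+    placement:
+      host_pattern: '*'
+    data_devices:
+      size: '40G:'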
The simple case
---------------
-All nodes with the same setup::
+All nodes with the same setup
+
+.. code-block:: none
20 HDDs
Vendor: VendorA
The advanced case
-----------------
-Here we have two distinct setups::
+Here we have two distinct setups
+
+.. code-block:: none
20 HDDs
Vendor: VendorA
The examples above assumed that all nodes have the same drives. However, that's not always the case.
-Node1-5::
+Node1-5
+
+.. code-block:: none
20 HDDs
Vendor: Intel
Model: MC-55-44-ZX
Size: 512GB
-Node6-10::
+Node6-10
+
+.. code-block:: none
5 NVMEs
Vendor: Intel
All previous cases co-located the WALs with the DBs.
However, it's possible to deploy the WAL on a dedicated device as well, if it makes sense.
-::
+.. code-block:: none
20 HDDs
Vendor: VendorA
converted an existing cluster to cephadm management, you can set up
monitoring by following the steps below.
-#. Enable the prometheus module in the ceph-mgr daemon. This exposes the internal Ceph metrics so that prometheus can scrape them.::
+#. Enable the prometheus module in the ceph-mgr daemon. This exposes the internal Ceph metrics so that prometheus can scrape them.
+
+ .. code-block:: bash
ceph mgr module enable prometheus
-#. Deploy a node-exporter service on every node of the cluster. The node-exporter provides host-level metrics like CPU and memory utilization.::
+#. Deploy a node-exporter service on every node of the cluster. The node-exporter provides host-level metrics like CPU and memory utilization.
+
+ .. code-block:: bash
ceph orch apply node-exporter '*'
-#. Deploy alertmanager::
+#. Deploy alertmanager
+
+ .. code-block:: bash
ceph orch apply alertmanager 1
#. Deploy prometheus. A single prometheus instance is sufficient, but
- for HA you may want to deploy two.::
+ for HA you may want to deploy two.
+
+ .. code-block:: bash
ceph orch apply prometheus 1 # or 2
-#. Deploy grafana::
+#. Deploy grafana
+
+ .. code-block:: bash
ceph orch apply grafana 1
configurations automatically.
It may take a minute or two for services to be deployed. Once
-completed, you should see something like this from ``ceph orch ls``::
+completed, you should see something like this from ``ceph orch ls``
+
+.. code-block:: console
$ ceph orch ls
NAME RUNNING REFRESHED IMAGE NAME IMAGE ID SPEC
- ``container_image_alertmanager``
- ``container_image_node_exporter``
-Custom images can be set with the ``ceph config`` command::
+Custom images can be set with the ``ceph config`` command
+
+.. code-block:: bash
ceph config set mgr mgr/cephadm/<option_name> <value>
-For example::
+For example
+
+.. code-block:: bash
ceph config set mgr mgr/cephadm/container_image_prometheus prom/prometheus:v1.4.1
If you choose to go with the recommendations instead, you can reset the
custom image you have set before. After that, the default value will be
- used again. Use ``ceph config rm`` to reset the configuration option::
+ used again. Use ``ceph config rm`` to reset the configuration option
+
+ .. code-block:: bash
ceph config rm mgr mgr/cephadm/<option_name>
- For example::
+ For example
+
+ .. code-block:: bash
ceph config rm mgr mgr/cephadm/container_image_prometheus
--------------------
If you have deployed monitoring and would like to remove it, you can do
-so with::
+so with
+
+.. code-block:: bash
ceph orch rm grafana
ceph orch rm prometheus --force # this will delete metrics data collected so far
to manage it yourself, you need to configure it to integrate with your Ceph
cluster.
-* Enable the prometheus module in the ceph-mgr daemon::
+* Enable the prometheus module in the ceph-mgr daemon
+
+ .. code-block:: bash
ceph mgr module enable prometheus