Configure ceph-csi
==================

Ceph Client Authentication Setup
--------------------------------

Create a new user for Nomad and `ceph-csi`. Execute the following command and
record the generated key:

.. code-block:: console
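
   # A typical command; the user name "client.nomad" and the pool name
   # "nomad" below are assumptions, so adjust both to match your deployment:
   $ ceph auth get-or-create client.nomad mon 'profile rbd' osd 'profile rbd pool=nomad'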

Create ceph-csi controller and plugin nodes
===========================================

The `ceph-csi`_ plugin requires two components:

- **Controller plugin**: communicates with the provider's API.
- **Node plugin**: executes tasks on the client.

.. note::
   We'll set the ceph-csi version in those files. See `ceph-csi release`_
   for information about ceph-csi's compatibility with other versions.

Configure controller plugin
---------------------------
The controller plugin requires the Ceph monitor addresses of the Ceph
cluster. Collect both the Ceph cluster's unique `fsid` and the monitor
addresses:

.. code-block:: console

   $ ceph mon dump
   <...>
   fsid b9127830-b0cc-4e34-aa47-9d1a2e9949a8
   <...>
   0: [v2:192.168.1.1:3300/0,v1:192.168.1.1:6789/0] mon.a
   1: [v2:192.168.1.2:3300/0,v1:192.168.1.2:6789/0] mon.b
   2: [v2:192.168.1.3:3300/0,v1:192.168.1.3:6789/0] mon.c
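
If you need only the `fsid`, note that the ``ceph fsid`` command prints it on
its own:

.. prompt:: bash $

   ceph fsid
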
Generate a ``ceph-csi-plugin-controller.nomad`` file similar to the example
below. Substitute the `fsid` for "clusterID" and the monitor addresses for
"monitors"::

   job "ceph-csi-plugin-controller" {
      <...>
   }

Configure plugin node
---------------------
Generate a ``ceph-csi-plugin-nodes.nomad`` file similar to the example below.
Substitute the `fsid` for "clusterID" and the monitor addresses for
"monitors"::

   job "ceph-csi-plugin-nodes" {
      <...>
   }

Start plugin controller and node
--------------------------------

To start the plugin controller and the Nomad node, run the following commands:

.. prompt:: bash $

   nomad job run ceph-csi-plugin-controller.nomad
   nomad job run ceph-csi-plugin-nodes.nomad
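
To confirm that both jobs were accepted by the scheduler, list the registered
jobs:

.. prompt:: bash $

   nomad job status
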

The `ceph-csi`_ image will be downloaded.

Check the plugin status after a few minutes:

.. code-block:: console

   $ nomad plugin status ceph-csi
   ID                   = ceph-csi
   Provider             = rbd.csi.ceph.com
   Version              = 3.3.1
   Controllers Healthy  = 1
   Controllers Expected = 1
   Nodes Healthy        = 1
   Nodes Expected       = 1

   Allocations
   ID        Node ID   Task Group  Version  Desired  Status   Created    Modified
   23b4db0c  a61ef171  nodes       4        run      running  3h26m ago  3h25m ago
   fee74115  a61ef171  controller  6        run      running  3h26m ago  3h25m ago

Using Ceph Block Devices
========================

Create rbd image
----------------
``ceph-csi`` requires the cephx credentials for communicating with the Ceph
cluster. Generate a ``ceph-volume.hcl`` file similar to the example below,
using the newly created ``nomad`` user id and cephx key::

   id = "ceph-mysql"
   <...>
   imageFeatures = "layering"
   }

After the ``ceph-volume.hcl`` file has been generated, create the volume:

.. prompt:: bash $

   nomad volume create ceph-volume.hcl

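
To verify that the volume has been registered, query it by the ``id`` set in
``ceph-volume.hcl``:

.. prompt:: bash $

   nomad volume status ceph-mysql
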
Use rbd image with a container
------------------------------

As an exercise in using an rbd image with a container, modify the Hashicorp
`nomad stateful`_ example.

Generate a ``mysql.nomad`` file similar to the example below::

   job "mysql-server" {
     datacenters = ["dc1"]
     <...>
     }
   }

Start the job:

.. prompt:: bash $

   nomad job run mysql.nomad

Check the status of the job:

.. code-block:: console

   $ nomad job status mysql-server
   ...
   Status = running
   ...
   Allocations
   ID        Node ID   Task Group    Version  Desired  Status   Created  Modified
   38070da7  9ad01c63  mysql-server  0        run      running  6s ago   3s ago

To check that the data are persistent, modify the database, purge the job,
and then recreate it using the same file. Because the same RBD image is
reused, the modified data will still be present.

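
The purge-and-recreate sequence described above can be run with the following
commands; the recreated job attaches to the same RBD-backed volume:

.. prompt:: bash $

   nomad job stop -purge mysql-server
   nomad job run mysql.nomad
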
.. _ceph-csi: https://github.com/ceph/ceph-csi/
.. _csi: https://www.nomadproject.io/docs/internals/plugins/csi
.. _Placement Groups: ../../rados/operations/placement-groups
.. _CRUSH tunables: ../../rados/operations/crush-map/#tunables
.. _RBD image features: ../rbd-config-ref/#image-features
.. _nomad stateful: https://learn.hashicorp.com/tutorials/nomad/stateful-workloads-csi-volumes?in=nomad/stateful-workloads#create-the-job-file
.. _ceph-csi release: https://github.com/ceph/ceph-csi#ceph-csi-container-images-and-release-compatibility