From 967104103c6c611b122e3e37227a04486cd6d2b1 Mon Sep 17 00:00:00 2001
From: Kefu Chai
Date: Sat, 12 Sep 2020 07:04:14 +0800
Subject: [PATCH] doc/cephadm: use appropriate directive for formatting
 codeblocks

Signed-off-by: Kefu Chai
---
 doc/cephadm/client-setup.rst |  8 ++++--
 doc/cephadm/concepts.rst     |  4 +--
 doc/cephadm/drivegroups.rst  | 56 +++++++++++++++++++++++++-----------
 doc/cephadm/monitoring.rst   | 48 +++++++++++++++++++++++--------
 4 files changed, 84 insertions(+), 32 deletions(-)

diff --git a/doc/cephadm/client-setup.rst b/doc/cephadm/client-setup.rst
index dd0bc32856732..3efc1cc11ffa7 100644
--- a/doc/cephadm/client-setup.rst
+++ b/doc/cephadm/client-setup.rst
@@ -15,7 +15,9 @@ Config File Setup
 Client machines can generally get away with a smaller config file than
 a full-fledged cluster member. To generate a minimal config file, log into
 a host that is already configured as a client or running a cluster
-daemon, and then run::
+daemon, and then run
+
+.. code-block:: bash
 
   ceph config generate-minimal-conf
 
@@ -28,7 +30,9 @@ Keyring Setup
 Most Ceph clusters are run with authentication enabled, and the client will
 need keys in order to communicate with cluster machines. To generate a
 keyring file with credentials for `client.fs`, log into an extant cluster
-member and run::
+member and run
+
+.. code-block:: bash
 
   ceph auth get-or-create client.fs
 
diff --git a/doc/cephadm/concepts.rst b/doc/cephadm/concepts.rst
index 8b1743799b4a0..7d11d22dbe146 100644
--- a/doc/cephadm/concepts.rst
+++ b/doc/cephadm/concepts.rst
@@ -49,7 +49,7 @@ host name:
   domain name (the part after the first dot). You can check the FQDN
   using ``hostname --fqdn`` or the domain name using ``dnsdomainname``.
 
-  ::
+  .. code-block:: none
 
     You cannot change the FQDN with hostname or dnsdomainname.
 
@@ -117,4 +117,4 @@ candidate hosts.
 However, there is a special cases that cephadm needs to consider.
 
 In case the are fewer hosts selected by the placement specification than
-demanded by ``count``, cephadm will only deploy on selected hosts.
\ No newline at end of file
+demanded by ``count``, cephadm will only deploy on selected hosts.
diff --git a/doc/cephadm/drivegroups.rst b/doc/cephadm/drivegroups.rst
index f1dd523e22204..a1397af01ece1 100644
--- a/doc/cephadm/drivegroups.rst
+++ b/doc/cephadm/drivegroups.rst
@@ -8,9 +8,11 @@ OSD Service Specification
 It gives the user an abstract way tell ceph which disks should turn into an OSD
 with which configuration without knowing the specifics of device names and paths.
 
-Instead of doing this::
+Instead of doing this
 
-  [monitor 1] # ceph orch daemon add osd **:**
+.. prompt:: bash [monitor.1]#
+
+  ceph orch daemon add osd **:**
 
 for each device and each host, we can define a yaml|json file that allows us to describe
 the layout. Here's the most basic example.
@@ -32,9 +34,11 @@ Turn any available(ceph-volume decides what 'available' is) into an OSD on all h
 the glob pattern '*'. (The glob pattern matches against the registered hosts from `host ls`)
 There will be a more detailed section on host_pattern down below.
 
-and pass it to `osd create` like so::
+and pass it to `osd create` like so
+
+.. prompt:: bash [monitor.1]#
 
-  [monitor 1] # ceph orch apply osd -i /path/to/osd_spec.yml
+  ceph orch apply osd -i /path/to/osd_spec.yml
 
 This will go out on all the matching hosts and deploy these OSDs.
 
@@ -43,9 +47,11 @@ Since we want to have more complex setups, there are more filters than just the
 Also, there is a `--dry-run` flag that can be passed to the `apply osd` command, which gives you
 a synopsis of the proposed layout.
 
-Example::
+Example
+
+.. prompt:: bash [monitor.1]#
 
-  [monitor 1] # ceph orch apply osd -i /path/to/osd_spec.yml --dry-run
+  ceph orch apply osd -i /path/to/osd_spec.yml --dry-run
 
@@ -64,7 +70,9 @@ Filters
 You can assign disks to certain groups by their attributes using filters.
 
 The attributes are based off of ceph-volume's disk query. You can retrieve the information
-with::
+with
+
+.. code-block:: bash
 
   ceph-volume inventory
 
@@ -105,20 +113,28 @@ Size specification of format can be of form:
 
 Concrete examples:
 
-Includes disks of an exact size::
+Includes disks of an exact size
+
+.. code-block:: yaml
 
   size: '10G'
 
-Includes disks which size is within the range::
+Includes disks which size is within the range
+
+.. code-block:: yaml
 
   size: '10G:40G'
 
-Includes disks less than or equal to 10G in size::
+Includes disks less than or equal to 10G in size
+
+.. code-block:: yaml
 
   size: ':10G'
 
-Includes disks equal to or greater than 40G in size::
+Includes disks equal to or greater than 40G in size
+
+.. code-block:: yaml
 
   size: '40G:'
 
@@ -206,7 +222,9 @@ Examples
 The simple case
 ---------------
 
-All nodes with the same setup::
+All nodes with the same setup
+
+.. code-block:: none
 
   20 HDDs
   Vendor: VendorA
@@ -265,7 +283,9 @@ Note: All of the above DriveGroups are equally valid. Which of those you want to
 The advanced case
 -----------------
 
-Here we have two distinct setups::
+Here we have two distinct setups
+
+.. code-block:: none
 
   20 HDDs
   Vendor: VendorA
@@ -317,7 +337,9 @@ The advanced case (with non-uniform nodes)
 The examples above assumed that all nodes have the same drives. That's however not always
 the case.
 
-Node1-5::
+Node1-5
+
+.. code-block:: none
 
   20 HDDs
   Vendor: Intel
@@ -328,7 +350,9 @@ Node1-5::
   Model: MC-55-44-ZX
   Size: 512GB
 
-Node6-10::
+Node6-10
+
+.. code-block:: none
 
   5 NVMEs
   Vendor: Intel
@@ -371,7 +395,7 @@ Dedicated wal + db
 All previous cases co-located the WALs with the DBs. It's however possible to deploy
 the WAL on a dedicated device as well, if it makes sense.
 
-::
+.. code-block:: none
 
   20 HDDs
   Vendor: VendorA
diff --git a/doc/cephadm/monitoring.rst b/doc/cephadm/monitoring.rst
index 9faaafbe25ebd..6d4f21da1aee7 100644
--- a/doc/cephadm/monitoring.rst
+++ b/doc/cephadm/monitoring.rst
@@ -46,24 +46,34 @@ did not do this (by passing ``--skip-monitoring-stack``, or if you converted an
 existing cluster to cephadm management, you can set up monitoring by
 following the steps below.
 
-#. Enable the prometheus module in the ceph-mgr daemon. This exposes the internal Ceph metrics so that prometheus can scrape them.::
+#. Enable the prometheus module in the ceph-mgr daemon. This exposes the internal Ceph metrics so that prometheus can scrape them.
+
+   .. code-block:: bash
 
      ceph mgr module enable prometheus
 
-#. Deploy a node-exporter service on every node of the cluster. The node-exporter provides host-level metrics like CPU and memory utilization.::
+#. Deploy a node-exporter service on every node of the cluster. The node-exporter provides host-level metrics like CPU and memory utilization.
+
+   .. code-block:: bash
 
     ceph orch apply node-exporter '*'
 
-#. Deploy alertmanager::
+#. Deploy alertmanager
+
+   .. code-block:: bash
 
     ceph orch apply alertmanager 1
 
 #. Deploy prometheus. A single prometheus instance is sufficient, but
-   for HA you may want to deploy two.::
+   for HA you may want to deploy two.
+
+   .. code-block:: bash
 
     ceph orch apply prometheus 1 # or 2
 
-#. Deploy grafana::
+#. Deploy grafana
+
+   .. code-block:: bash
 
    ceph orch apply grafana 1
 
@@ -71,7 +81,9 @@ Cephadm handles the prometheus, grafana, and alertmanager
 configurations automatically.
 
 It may take a minute or two for services to be deployed. Once
-completed, you should see something like this from ``ceph orch ls``::
+completed, you should see something like this from ``ceph orch ls``
+
+.. code-block:: console
 
   $ ceph orch ls
   NAME            RUNNING  REFRESHED  IMAGE NAME            IMAGE ID       SPEC
@@ -93,11 +105,15 @@ configuration first. The following configuration options are available.
 - ``container_image_alertmanager``
 - ``container_image_node_exporter``
 
-Custom images can be set with the ``ceph config`` command::
+Custom images can be set with the ``ceph config`` command
+
+.. code-block:: bash
 
   ceph config set mgr mgr/cephadm/
 
-For example::
+For example
+
+.. code-block:: bash
 
   ceph config set mgr mgr/cephadm/container_image_prometheus prom/prometheus:v1.4.1
 
@@ -112,11 +128,15 @@ For example::
 
    If you choose to go with the recommendations instead, you can reset the
    custom image you have set before. After that, the default value will be
-   used again. Use ``ceph config rm`` to reset the configuration option::
+   used again. Use ``ceph config rm`` to reset the configuration option
+
+   .. code-block:: bash
 
      ceph config rm mgr mgr/cephadm/
 
-   For example::
+   For example
+
+   .. code-block:: bash
 
      ceph config rm mgr mgr/cephadm/container_image_prometheus
 
@@ -124,7 +144,9 @@ Disabling monitoring
 --------------------
 
 If you have deployed monitoring and would like to remove it, you can do
-so with::
+so with
+
+.. code-block:: bash
 
   ceph orch rm grafana
   ceph orch rm prometheus --force   # this will delete metrics data collected so far
@@ -140,7 +162,9 @@ If you have an existing prometheus monitoring infrastructure, or would like to
 manage it yourself, you need to configure it to integrate with your Ceph
 cluster.
 
-* Enable the prometheus module in the ceph-mgr daemon::
+* Enable the prometheus module in the ceph-mgr daemon
+
+  .. code-block:: bash
 
     ceph mgr module enable prometheus
 
-- 
2.39.5
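Postscript for reviewers (illustration only, not part of the diff above): every
hunk in this patch applies the same conversion, sketched here in case the same
cleanup is wanted in other cephadm docs. The directive names are the ones the
patch itself uses; ``.. prompt::`` is not a stock Sphinx directive but comes
from an extension (sphinx-prompt), so this sketch assumes that extension is
already enabled in the Ceph doc build.

    Before: implicit literal block (``::``), which falls back to the build's
    default highlighting

        and then run::

          ceph config generate-minimal-conf

    After: an explicit directive that names the language to highlight

        and then run

        .. code-block:: bash

          ceph config generate-minimal-conf

    After, for interactive commands: let the directive render the prompt
    instead of hard-coding ``[monitor 1] #`` into the command text

        .. prompt:: bash [monitor.1]#

          ceph orch apply osd -i /path/to/osd_spec.yml

The rendered output is a highlighted block either way; the gain is an explicit
language (``bash``, ``yaml``, ``console``, ``none``) instead of the default
guess, and a prompt that is drawn by the tooling rather than baked into the
copyable command line.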