From 98596f5fcb4f453d0a7dd53949062dc76abc856a Mon Sep 17 00:00:00 2001
From: Ville Ojamo <14869000+bluikko@users.noreply.github.com>
Date: Fri, 3 Nov 2023 12:44:00 +0700
Subject: [PATCH] doc/cephadm/services: remove excess rendered indentation in
 osd.rst

Start bash command blocks at the left margin, removing excessive
padding/indentation that would otherwise render the block too far to
the right.

At the same time, indent the source consistently:
- Two spaces for command blocks and output blocks.
- Four spaces for notes, code blocks.

There seems to be no uniform style for this; sometimes commands are
indented with three spaces, but two spaces seems to be the most common.
In the end it all renders the same, I guess.

Signed-off-by: Ville Ojamo <14869000+bluikko@users.noreply.github.com>
(cherry picked from commit 329df4959d08e9bc90d6e1d83f99bd344a13dc1e)
---
 doc/cephadm/services/osd.rst | 81 ++++++++++++++++++------------------
 1 file changed, 40 insertions(+), 41 deletions(-)

diff --git a/doc/cephadm/services/osd.rst b/doc/cephadm/services/osd.rst
index 00e414c1b2bcf..f62b0f83116ec 100644
--- a/doc/cephadm/services/osd.rst
+++ b/doc/cephadm/services/osd.rst
@@ -15,10 +15,9 @@ To print a list of devices discovered by ``cephadm``, run this command:

 .. prompt:: bash #

-   ceph orch device ls [--hostname=...] [--wide] [--refresh]
+  ceph orch device ls [--hostname=...] [--wide] [--refresh]

-Example
-::
+Example::

   Hostname  Path      Type  Serial        Size   Health   Ident  Fault  Available
   srv-01    /dev/sdb  hdd   15P0A0YFFRD6  300G   Unknown  N/A    N/A    No
@@ -44,7 +43,7 @@ enable cephadm's "enhanced device scan" option as follows;

 .. prompt:: bash #

-   ceph config set mgr mgr/cephadm/device_enhanced_scan true
+  ceph config set mgr mgr/cephadm/device_enhanced_scan true

 .. warning::
   Although the libstoragemgmt library performs standard SCSI inquiry calls,
@@ -173,16 +172,16 @@ will happen without actually creating the OSDs.

 For example:

-   .. prompt:: bash #
+.. prompt:: bash #

-     ceph orch apply osd --all-available-devices --dry-run
+  ceph orch apply osd --all-available-devices --dry-run

-   ::
+::

-     NAME                  HOST  DATA      DB  WAL
-     all-available-devices node1 /dev/vdb  -   -
-     all-available-devices node2 /dev/vdc  -   -
-     all-available-devices node3 /dev/vdd  -   -
+  NAME                  HOST  DATA      DB  WAL
+  all-available-devices node1 /dev/vdb  -   -
+  all-available-devices node2 /dev/vdc  -   -
+  all-available-devices node3 /dev/vdd  -   -

 .. _cephadm-osd-declarative:

@@ -197,9 +196,9 @@ command completes will be automatically found and added to the cluster.

 We will examine the effects of the following command:

-   .. prompt:: bash #
+.. prompt:: bash #

-     ceph orch apply osd --all-available-devices
+  ceph orch apply osd --all-available-devices

 After running the above command:

@@ -212,17 +211,17 @@ If you want to avoid this behavior (disable automatic creation of OSD on availab

 .. prompt:: bash #

-   ceph orch apply osd --all-available-devices --unmanaged=true
+  ceph orch apply osd --all-available-devices --unmanaged=true

 .. note::

-   Keep these three facts in mind:
+    Keep these three facts in mind:

-   - The default behavior of ``ceph orch apply`` causes cephadm constantly to reconcile. This means that cephadm creates OSDs as soon as new drives are detected.
+    - The default behavior of ``ceph orch apply`` causes cephadm constantly to reconcile. This means that cephadm creates OSDs as soon as new drives are detected.

-   - Setting ``unmanaged: True`` disables the creation of OSDs. If ``unmanaged: True`` is set, nothing will happen even if you apply a new OSD service.
+    - Setting ``unmanaged: True`` disables the creation of OSDs. If ``unmanaged: True`` is set, nothing will happen even if you apply a new OSD service.

-   - ``ceph orch daemon add`` creates OSDs, but does not add an OSD service.
+    - ``ceph orch daemon add`` creates OSDs, but does not add an OSD service.

 * For cephadm, see also :ref:`cephadm-spec-unmanaged`.

@@ -250,7 +249,7 @@ Example:

 Expected output::

-   Scheduled OSD(s) for removal
+  Scheduled OSD(s) for removal

 OSDs that are not safe to destroy will be rejected.

@@ -273,14 +272,14 @@ You can query the state of OSD operation with the following command:

 .. prompt:: bash #

-   ceph orch osd rm status
+  ceph orch osd rm status

 Expected output::

-   OSD_ID  HOST         STATE                     PG_COUNT  REPLACE  FORCE  STARTED_AT
-   2       cephadm-dev  done, waiting for purge   0         True     False  2020-07-17 13:01:43.147684
-   3       cephadm-dev  draining                  17        False    True   2020-07-17 13:01:45.162158
-   4       cephadm-dev  started                   42        False    True   2020-07-17 13:01:45.162158
+  OSD_ID  HOST         STATE                     PG_COUNT  REPLACE  FORCE  STARTED_AT
+  2       cephadm-dev  done, waiting for purge   0         True     False  2020-07-17 13:01:43.147684
+  3       cephadm-dev  draining                  17        False    True   2020-07-17 13:01:45.162158
+  4       cephadm-dev  started                   42        False    True   2020-07-17 13:01:45.162158

 When no PGs are left on the OSD, it will be decommissioned and removed from the
 cluster.

@@ -302,11 +301,11 @@ Example:

 .. prompt:: bash #

-   ceph orch osd rm stop 4
+  ceph orch osd rm stop 4

 Expected output::

-   Stopped OSD(s) removal
+  Stopped OSD(s) removal

 This resets the initial state of the OSD and takes it off the removal queue.

@@ -327,7 +326,7 @@ Example:

 Expected output::

-   Scheduled OSD(s) for replacement
+  Scheduled OSD(s) for replacement

 This follows the same procedure as the procedure in the "Remove OSD" section, with
 one exception: the OSD is not permanently removed from the CRUSH hierarchy, but is
@@ -434,10 +433,10 @@ the ``ceph orch ps`` output in the ``MEM LIMIT`` column::
 To exclude an OSD from memory autotuning, disable the autotune option
 for that OSD and also set a specific memory target. For example,

-   .. prompt:: bash #
+.. prompt:: bash #

-     ceph config set osd.123 osd_memory_target_autotune false
-     ceph config set osd.123 osd_memory_target 16G
+  ceph config set osd.123 osd_memory_target_autotune false
+  ceph config set osd.123 osd_memory_target 16G

 .. _drivegroups:

@@ -500,7 +499,7 @@ Example

 .. prompt:: bash [monitor.1]#

-   ceph orch apply -i /path/to/osd_spec.yml --dry-run
+  ceph orch apply -i /path/to/osd_spec.yml --dry-run



@@ -510,9 +509,9 @@ Filters
 -------

 .. note::
-   Filters are applied using an `AND` gate by default. This means that a drive
-   must fulfill all filter criteria in order to get selected. This behavior can
-   be adjusted by setting ``filter_logic: OR`` in the OSD specification.
+    Filters are applied using an `AND` gate by default. This means that a drive
+    must fulfill all filter criteria in order to get selected. This behavior can
+    be adjusted by setting ``filter_logic: OR`` in the OSD specification.

 Filters are used to assign disks to groups, using their attributes to group
 them. You can get the
@@ -522,7 +521,7 @@ information about the attributes with this command:

 .. code-block:: bash

-   ceph-volume inventory
+  ceph-volume inventory

 Vendor or Model
 ^^^^^^^^^^^^^^^
@@ -631,9 +630,9 @@ but want to use only the first two, you could use `limit`:

 .. code-block:: yaml

-  data_devices:
-    vendor: VendorA
-    limit: 2
+    data_devices:
+      vendor: VendorA
+      limit: 2

 .. note:: `limit` is a last resort and shouldn't be used if it can be avoided.

@@ -856,8 +855,8 @@ See :ref:`orchestrator-cli-placement-spec`

 .. note::

-   Assuming each host has a unique disk layout, each OSD
-   spec needs to have a different service id
+    Assuming each host has a unique disk layout, each OSD
+    spec needs to have a different service id


 Dedicated wal + db
@@ -987,7 +986,7 @@ activates all existing OSDs on a host.

 .. prompt:: bash #

-   ceph cephadm osd activate ...
+  ceph cephadm osd activate ...

 This will scan all existing disks for OSDs and deploy
 corresponding daemons.
-- 
2.39.5
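
For reference, a minimal sketch of the indentation convention described in the commit message, reusing the ``ceph orch device ls`` block that the first hunk touches (illustrative only, not part of the patch itself): the directive starts at the left margin, command and output bodies are indented with two spaces, and note or code-block bodies with four.

.. prompt:: bash #

  ceph orch device ls --wide

.. note::

    Note and code-block bodies use four spaces of indentation,
    while command and output blocks use two.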