From: Ville Ojamo <14869000+bluikko@users.noreply.github.com> Date: Thu, 18 Dec 2025 07:05:29 +0000 (+0700) Subject: doc/cephadm: Fix formatting errors and improvements in osd.rst X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=bbf51cf73eba503fed44c0d700b483890a4b5700;p=ceph.git doc/cephadm: Fix formatting errors and improvements in osd.rst Use console code-block instead of literal text to avoid highlighting. Promptify few last CLI commands that were missing prompts. Remove lone single-item unordered list formatting. Add missing full stops. Linkify mentions of other sections. Use title case consistently in section titles. Use :confval: consistently instead of literal text. Use double backticks consistently. Use "YAML" and "JSON" instead of literal .yaml and .json. Fix two occurrences of double backticks missing one backtick that lead to broken rendering. Use *italic* RST syntax instead of rendered underscores. Try to improve language in simple sentences. Remove unnecessary * characters that were rendered in CLI example commands. Signed-off-by: Ville Ojamo <14869000+bluikko@users.noreply.github.com> --- diff --git a/doc/cephadm/services/osd.rst b/doc/cephadm/services/osd.rst index 59ff19eeb134..71c3686b097d 100644 --- a/doc/cephadm/services/osd.rst +++ b/doc/cephadm/services/osd.rst @@ -17,7 +17,9 @@ To print a list of devices discovered by ``cephadm``, run this command: ceph orch device ls [--hostname=...] [--wide] [--refresh] -Example:: +Example: + +.. code-block:: console Hostname Path Type Serial Size Health Ident Fault Available srv-01 /dev/sdb hdd 15P0A0YFFRD6 300G Unknown N/A N/A No @@ -47,7 +49,9 @@ release. The ``--wide`` option shows device details, including any reasons that the device might not be eligible for use as an OSD. -Example (Reef):: +Example (Reef): + +.. code-block:: console HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS davidsthubbins /dev/sdc hdd SEAGATE_ST20000NM002D_ZVTBJNGC17010W339UW25 18.1T No 22m ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected @@ -66,7 +70,7 @@ Example (Reef):: lsmcli ldl``. If your hardware is supported you should see something like this: - :: + .. code-block:: console Path | SCSI VPD 0x83 | Link Type | Serial Number | Health Status ---------------------------------------------------------------------------- @@ -77,9 +81,12 @@ Example (Reef):: After enabling ``libstoragemgmt`` support, the output will look something like this: -:: +.. prompt:: bash # + + ceph orch device ls + +.. code-block:: console - # ceph orch device ls Hostname Path Type Serial Size Health Ident Fault Available srv-01 /dev/sdb hdd 15P0A0YFFRD6 300G Good Off Off No srv-01 /dev/sdc hdd 15R0A08WFRD6 300G Good Off Off No @@ -90,7 +97,7 @@ interact with the identification and fault LEDs on the drive enclosures. For fur information about interacting with these LEDs, refer to :ref:`devices`. .. note:: - The current release of `libstoragemgmt`` (1.8.8) supports SCSI, SAS, and SATA based + The current release of ``libstoragemgmt`` (1.8.8) supports SCSI, SAS, and SATA based local drives only. There is no official support for NVMe devices (PCIe), SAN LUNs, or exotic/complex metadevices. @@ -116,7 +123,7 @@ general form of the command above: cephadm shell ceph-volume inventory /dev/sdc --format json | jq .sys_api.human_readable_size -:: +.. code-block:: console "3.64 TB" @@ -171,7 +178,7 @@ There are multiple ways to create new OSDs: .. 
prompt:: bash # - ceph orch daemon add osd **:** + ceph orch daemon add osd : For example: @@ -189,7 +196,7 @@ There are multiple ways to create new OSDs: .. prompt:: bash # - ceph orch daemon add osd **:** + ceph orch daemon add osd : For example: @@ -234,7 +241,7 @@ For example: ceph orch apply osd --all-available-devices --dry-run -:: +.. code-block:: console NAME HOST DATA DB WAL all-available-devices node1 /dev/vdb - - @@ -281,7 +288,7 @@ If you want to avoid this behavior (disable automatic creation of OSD on availab - ``ceph orch daemon add`` creates OSDs, but does not add an OSD service. -* For more on ``cephadm``, see also :ref:`cephadm-spec-unmanaged`. +For more on ``cephadm``, see also :ref:`cephadm-spec-unmanaged`. .. _cephadm-osd-removal: @@ -306,7 +313,9 @@ Example: ceph orch osd rm 0 ceph orch osd rm 1138 --zap -Expected output:: +Expected output: + +.. code-block:: console Scheduled OSD(s) for removal @@ -324,7 +333,7 @@ OSD's drives, leaving it a blank slate for redeployment or other reuse. or modify it in some way that it no longer matches the drives used for the OSDs you wish to remove. Then re-apply the spec. For more info on drivegroup specs see :ref:`drivegroups`. For more info on the declarative nature of - ``cephadm`` in reference to deploying OSDs, see :ref:`cephadm-osd-declarative` + ``cephadm`` in reference to deploying OSDs, see :ref:`cephadm-osd-declarative`. Monitoring OSD State During OSD Removal --------------------------------------- @@ -336,7 +345,9 @@ by running the following command: ceph orch osd rm status -Expected output:: +Expected output: + +.. code-block:: console OSD_ID HOST STATE PG_COUNT REPLACE FORCE STARTED_AT 2 cephadm-dev done, waiting for purge 0 True False 2020-07-17 13:01:43.147684 @@ -365,7 +376,9 @@ Example: ceph orch osd rm stop 4 -Expected output:: +Expected output: + +.. code-block:: console Stopped OSD(s) removal @@ -386,11 +399,13 @@ Example: ceph orch osd rm 4 --replace -Expected output:: +Expected output: + +.. code-block:: console Scheduled OSD(s) for replacement -This follows the same procedure as the procedure in the "Remove OSD" section, with +This follows the same procedure as the procedure in the :ref:`cephadm-osd-removal` section, with one exception: the OSD is not permanently removed from the CRUSH hierarchy, but is instead assigned the ``destroyed`` flag. @@ -403,7 +418,7 @@ instead assigned the ``destroyed`` flag. The ``destroyed`` flag is used to determine which OSD IDs will be reused in the next OSD deployment. -If you use OSDSpecs for OSD deployment, your newly added drives will be assigned +If you use :ref:`OSDSpecs ` for OSD deployment, your newly added drives will be assigned the OSD IDs of their replaced counterparts. This assumes that the new drives still match the OSDSpecs. @@ -415,7 +430,7 @@ without the ``--dry-run`` flag. .. tip:: - The name of your OSDSpec can be retrieved with the command ``ceph orch ls`` + The name of your OSDSpec can be retrieved with the command ``ceph orch ls``. Alternatively, you can use an OSDSpec file: @@ -423,7 +438,9 @@ Alternatively, you can use an OSDSpec file: ceph orch apply -i --dry-run -Expected output:: +Expected output: + +.. code-block:: console NAME HOST DATA DB WAL node1 /dev/vdb - - @@ -459,7 +476,7 @@ Example command: .. 
_osd_autotune: -Automatically tuning OSD memory +Automatically Tuning OSD Memory =============================== OSD daemons will adjust their memory consumption based on the @@ -470,22 +487,28 @@ memory consumption target based on the total amount of RAM and the number of dep OSDs. This allows the full use of available memory, and adapts when OSDs or RAM are added or removed. -.. warning:: Cephadm sets ``osd_memory_target_autotune`` to ``true`` by default which is usually not appropriate for converged architectures, where a given node is used for both Ceph and compute purposes. +.. warning:: Cephadm sets :confval:`osd_memory_target_autotune` to ``true`` by + default which is usually not appropriate for converged architectures, where + a given node is used for both Ceph and compute purposes. ``Cephadm`` will use a fraction :confval:`mgr/cephadm/autotune_memory_target_ratio` of available memory, subtracting memory consumed by non-autotuned daemons (non-OSDs and OSDs for which -``osd_memory_target_autotune`` is false), and then divide the balance by the number -of OSDs. +:confval:`osd_memory_target_autotune` is ``false``), and then divide the +balance by the number of OSDs. -The final targets are reflected in the config database with options like the below:: +The final targets are reflected in the config database with options like the below: + +.. code-block:: console WHO MASK LEVEL OPTION VALUE osd host:foo basic osd_memory_target 126092301926 osd host:bar basic osd_memory_target 6442450944 Both the limits and the current memory consumed by each daemon are visible from -the ``ceph orch ps`` output in the ``MEM LIMIT`` column:: +the ``ceph orch ps`` output in the ``MEM LIMIT`` column: + +.. code-block:: console NAME HOST PORTS STATUS REFRESHED AGE MEM USED MEM LIMIT VERSION IMAGE ID CONTAINER ID osd.1 dael running (3h) 10s ago 3h 72857k 117.4G 17.0.0-3781-gafaed750 7015fda3cd67 9e183363d39c @@ -514,8 +537,8 @@ and which configurations to apply to those OSDs. for transformation into OSDs even when the Ceph cluster operator does not know the specific device names and paths associated with those disks. -:ref:`orchestrator-cli-service-spec`\s make it possible to define a ``.yaml`` -or ``.json`` file that can be used to reduce the amount of manual work involved +:ref:`orchestrator-cli-service-spec`\s make it possible to define a YAML +or JSON file that can be used to reduce the amount of manual work involved in creating OSDs. .. note:: @@ -532,7 +555,7 @@ For example, instead of running the following command: .. prompt:: bash [monitor.1]# - ceph orch daemon add osd **:** + ceph orch daemon add osd : for each device and each host, we can create a ``.yaml`` or ``.json`` file that allows us to describe the layout. Here is the most basic example: @@ -551,9 +574,9 @@ Create a file called (for example) ``osd_spec.yml``: This means : -#. Turn any available device (``ceph-volume` decides which are _available_) into an +#. Turn any available device (``ceph-volume`` decides which are *available*) into an OSD on all hosts that match the glob pattern '*'. The glob pattern matches - registered hosts from `ceph orch host ls`. See + registered hosts from ``ceph orch host ls``. See :ref:`cephadm-services-placement-by-pattern-matching` for more on using ``host_pattern`` matching to use devices for OSDs. @@ -593,7 +616,7 @@ Filters are used to select sets of drives for OSD data or WAL+DB offload based on various attributes. These attributes are gathered by ``ceph-volume``'s drive inventory. 
Retrieve these attributes with this command: -.. code-block:: bash +.. prompt:: bash # ceph-volume inventory @@ -662,8 +685,8 @@ To include drives equal to or greater than 666 GB in size: size: '666G:' -The supported units of size are Megabyte(M), Gigabyte(G) and Terabyte(T). -The ``B`` (_byte_) suffix for units is also acceptable: ``MB``, ``GB``, ``TB``. +The supported units of size are Megabyte (``M``), Gigabyte (``G``) and Terabyte (``T``). +The ``B`` (*byte*) suffix for units is also acceptable: ``MB``, ``GB``, ``TB``. Rotational @@ -697,9 +720,9 @@ Spec file syntax: rotational: 0 | 1 -`1` to match all drives that the kernel indicates are rotational +``1`` to match all drives that the kernel indicates are rotational -`0` to match all drives that are non-rotational (SATA, SATA, NVMe SSDs, SAN LUNs, etc) +``0`` to match all drives that are non-rotational (SATA, SATA, NVMe SSDs, SAN LUNs, etc) All @@ -763,7 +786,7 @@ are ready. encrypted: true Ceph Squid onwards support TPM2 token enrollment for LUKS2 devices. -Add the `tpm2` attribute to the OSD spec: +Add the ``tpm2`` attribute to the OSD spec: .. code-block:: yaml @@ -777,7 +800,7 @@ Add the `tpm2` attribute to the OSD spec: encrypted: true tpm2: true -See a full list in the DriveGroupSpecs +A full list of supported attributes: .. py:currentmodule:: ceph.deployment.drive_group @@ -789,7 +812,7 @@ See a full list in the DriveGroupSpecs Examples ======== -The simple case +The Simple Case --------------- When all cluster nodes have identical drives and we wish to use @@ -859,7 +882,7 @@ filter by size: .. note:: All of the above OSD specs are equally valid. Which you use depends on taste and on how much you expect your node layout to change. -Multiple OSD specs for a single host +Multiple OSD Specs for a Single Host ------------------------------------ Here we specify two distinct strategies for deploying OSDs across multiple @@ -920,7 +943,7 @@ The remaining ten SAS/SATA SSDs will be used as OSD data devices, with ``VendorC`` NVMEs SSDs assigned as dedicated DB/WAL devices, each serving two SAS/SATA OSDs. We call these _hybrid OSDs. -Multiple hosts with the same disk layout +Multiple Hosts with the same Disk Layout ---------------------------------------- When a cluster comprises hosts with different drive layouts, or a complex @@ -988,7 +1011,7 @@ You can specify a ``placement`` to target only certain nodes. This applies different OSD specs to different hosts that match hosts tagged with ``ceph orch`` labels via the ``placement`` filter. -See :ref:`orchestrator-cli-placement-spec` +For more information, see :ref:`orchestrator-cli-placement-spec`. .. note:: @@ -1117,7 +1140,7 @@ via the ``paths`` keyword with the following syntax: .. _cephadm-osd-activate: -Activate existing OSDs +Activate Existing OSDs ====================== If a host's operating system has been reinstalled, existing OSDs
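
For context, the drive group specs that this patch reformats all follow the same general shape. The sketch below is illustrative only; the ``service_id``, the ``osd`` placement label, and the filter values are assumptions and are not taken from osd.rst:

.. code-block:: yaml

    service_type: osd
    service_id: example_hybrid_osds    # illustrative name, not from the patch
    placement:
      label: osd                       # assumes hosts were labelled beforehand
    spec:
      data_devices:
        rotational: 1                  # rotational drives become OSD data devices
      db_devices:
        rotational: 0                  # non-rotational drives carry the DB/WAL offload
      encrypted: true                  # optional LUKS encryption, as described above

A spec like this can be previewed with ``ceph orch apply -i <spec-file> --dry-run`` before any OSDs are actually created.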