From 7b5f73f4c17acc7cd0aab87db528e5232c8f05e1 Mon Sep 17 00:00:00 2001
From: Anthony D'Atri
Date: Thu, 20 Mar 2025 23:29:57 -0400
Subject: [PATCH] doc/cephadm/services: Correct indentation in osd.rst

Signed-off-by: Anthony D'Atri
---
 doc/cephadm/services/osd.rst | 39 +++++++++++++++++++++++--------------
 1 file changed, 25 insertions(+), 14 deletions(-)

diff --git a/doc/cephadm/services/osd.rst b/doc/cephadm/services/osd.rst
index bb461478e56..bc72a1903d5 100644
--- a/doc/cephadm/services/osd.rst
+++ b/doc/cephadm/services/osd.rst
@@ -7,8 +7,9 @@ List Devices
 ============
 
 ``ceph-volume`` scans each host in the cluster periodically in order
-to determine which devices are present and whether they are eligible to be
-used as OSDs.
+to determine which devices are present and responsive. It also determines
+whether each device is eligible to be used by a new OSD in a block, DB,
+or WAL role.
 
 To print a list of devices discovered by ``cephadm``, run this command:
 
@@ -31,10 +32,7 @@ Example::
   srv-03  /dev/sdc  hdd  15R0A0P7FRD6  300G  Unknown  N/A    N/A    No
   srv-03  /dev/sdd  hdd  15R0A0O7FRD6  300G  Unknown  N/A    N/A    No
 
-The ``--wide`` option shows device details,
-including any reasons that the device might not be eligible for use as an OSD.
-
-In the above example you can see fields named ``Health``, ``Ident``, and ``Fault``.
+In the above examples you can see fields named ``Health``, ``Ident``, and ``Fault``.
 This information is provided by integration with `libstoragemgmt`_. By default,
 this integration is disabled because `libstoragemgmt`_ may not be 100%
 compatible with your hardware. To direct Ceph to include these fields,
@@ -44,8 +42,19 @@ enable ``cephadm``'s "enhanced device scan" option as follows:
 
    ceph config set mgr mgr/cephadm/device_enhanced_scan true
 
+Note that the columns reported by ``ceph orch device ls`` may vary from
+release to release.
+
+The ``--wide`` option shows device details, including any reasons that the
+device might not be eligible for use as an OSD.
+Example (Reef)::
+
+  HOST            PATH      TYPE  DEVICE ID                                    SIZE   AVAILABLE  REFRESHED  REJECT REASONS
+  davidsthubbins  /dev/sdc  hdd   SEAGATE_ST20000NM002D_ZVTBJNGC17010W339UW25  18.1T  No         22m ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
+  nigeltufnel     /dev/sdd  hdd   SEAGATE_ST20000NM002D_ZVTBJNGC17010C3442787  18.1T  No         22m ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
+
 .. warning::
-    Although the ``libstoragemgmt`` library performs standard SCSI inquiry calls,
+    Although the ``libstoragemgmt`` library issues standard SCSI (SES) inquiry calls,
     there is no guarantee that your hardware and firmware properly implement these standards.
     This can lead to erratic behaviour and even bus resets on some older hardware. It is
     therefore recommended that, before enabling this feature,
@@ -732,8 +741,10 @@ There are multiple optional settings that specify the way OSDs are deployed.
 Add these options to an OSD spec for them to take effect.
 
 This example deploys encrypted OSDs on all unused drives. Note that if Linux
-MD mirroring is used for the boot, `/var/log`, or other volumes this spec _may_
+MD mirroring is used for the boot, ``/var/log``, or other volumes this spec *may*
 grab replacement or added drives before you can employ them for non-OSD purposes.
+The ``unmanaged`` attribute may be set to pause automatic deployment until you
+are ready.
 
 .. code-block:: yaml
 
@@ -884,19 +895,19 @@ This can be specificed with two service specs in the same file:
       db_devices:
         model: MC-55-44-XZ      # Select only this model for WAL+DB offload
         limit: 2                # Select at most two for this purpose
-      db_slots: 5               # Back five slower HDD data devices with each
-
+      db_slots: 5               # Chop the DB device into this many slices and
+                                # use one for each of this many HDD OSDs
     ---
     service_type: osd
     service_id: osd_spec_ssd    # Unique so it doesn't overwrite the above
     placement:
       host_pattern: '*'
-    spec:
+    spec:                       # This scenario is uncommon
      data_devices:
         model: MC-55-44-XZ      # Select drives of this model for OSD data
-      db_devices:
-        vendor: VendorC         # Select drives of this brand for WAL+DB
-      db_slots: 2               # Back two slower SAS/SATA SSD data devices with each
+      db_devices:               # Select drives of this brand for WAL+DB. Since the
+        vendor: VendorC         # data devices are SAS/SATA SSDs, NVMe SSDs make sense here
+      db_slots: 2               # Back two slower SAS/SATA SSD data devices with each NVMe slice
 
 This would create the desired layout by using all HDDs as data devices with two
 SATA/SAS SSDs assigned as dedicated DB/WAL devices, each backing five HDD OSDs.
-- 
2.39.5
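
A minimal sketch of the ``unmanaged`` pause/resume flow that the new sentence
in the hunk at +741 refers to; the ``service_id`` and the all-devices filter
below are illustrative and not part of this patch:

.. code-block:: yaml

    service_type: osd
    service_id: encrypted_all_unused    # illustrative name, not from the patch
    placement:
      host_pattern: '*'
    unmanaged: true                     # pause automatic OSD deployment on matching drives
    spec:
      data_devices:
        all: true                       # claim all unused, eligible devices
      encrypted: true

Re-applying the spec with ``unmanaged: false`` (or with the attribute removed)
resumes automatic deployment on eligible devices.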