ceph orch device ls [--hostname=...] [--wide] [--refresh]
-Example::
+Example:
+
+.. code-block:: console
Hostname Path Type Serial Size Health Ident Fault Available
srv-01 /dev/sdb hdd 15P0A0YFFRD6 300G Unknown N/A N/A No
The ``--wide`` option shows device details,
including any reasons that the device might not be eligible for use as an OSD.
-Example (Reef)::
+Example (Reef):
+
+.. code-block:: console
HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS
davidsthubbins /dev/sdc hdd SEAGATE_ST20000NM002D_ZVTBJNGC17010W339UW25 18.1T No 22m ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
lsmcli ldl``. If your hardware is supported you should see something like
this:
- ::
+ .. code-block:: console
Path | SCSI VPD 0x83 | Link Type | Serial Number | Health Status
----------------------------------------------------------------------------
After enabling ``libstoragemgmt`` support, the output will look something
like this:
-::
+.. prompt:: bash #
+
+ ceph orch device ls
+
+.. code-block:: console
- # ceph orch device ls
Hostname Path Type Serial Size Health Ident Fault Available
srv-01 /dev/sdb hdd 15P0A0YFFRD6 300G Good Off Off No
srv-01 /dev/sdc hdd 15R0A08WFRD6 300G Good Off Off No
information about interacting with these LEDs, refer to :ref:`devices`.
.. note::
- The current release of `libstoragemgmt`` (1.8.8) supports SCSI, SAS, and SATA based
+ The current release of ``libstoragemgmt`` (1.8.8) supports SCSI, SAS, and SATA-based
local drives only. There is no official support for NVMe devices (PCIe), SAN LUNs,
or exotic/complex metadevices.
cephadm shell ceph-volume inventory /dev/sdc --format json | jq .sys_api.human_readable_size
-::
+.. code-block:: console
"3.64 TB"
.. prompt:: bash #
- ceph orch daemon add osd *<host>*:*<device-path>*
+ ceph orch daemon add osd <host>:<device-path>
For example:
.. prompt:: bash #
- ceph orch daemon add osd *<host>*:*<lvm-path>*
+ ceph orch daemon add osd <host>:<lvm-path>
For example:
ceph orch apply osd --all-available-devices --dry-run
-::
+.. code-block:: console
NAME HOST DATA DB WAL
all-available-devices node1 /dev/vdb - -
- ``ceph orch daemon add`` creates OSDs, but does not add an OSD service.
-* For more on ``cephadm``, see also :ref:`cephadm-spec-unmanaged`.
+For more on ``cephadm``, see also :ref:`cephadm-spec-unmanaged`.
.. _cephadm-osd-removal:
ceph orch osd rm 0
ceph orch osd rm 1138 --zap
-Expected output::
+Expected output:
+
+.. code-block:: console
Scheduled OSD(s) for removal
or modify it in some way that it no longer matches the drives used for the
OSDs you wish to remove. Then re-apply the spec. For more info on drivegroup
specs see :ref:`drivegroups`. For more info on the declarative nature of
- ``cephadm`` in reference to deploying OSDs, see :ref:`cephadm-osd-declarative`
+ ``cephadm`` in reference to deploying OSDs, see :ref:`cephadm-osd-declarative`.
Monitoring OSD State During OSD Removal
---------------------------------------
ceph orch osd rm status
-Expected output::
+Expected output:
+
+.. code-block:: console
OSD_ID HOST STATE PG_COUNT REPLACE FORCE STARTED_AT
2 cephadm-dev done, waiting for purge 0 True False 2020-07-17 13:01:43.147684
ceph orch osd rm stop 4
-Expected output::
+Expected output:
+
+.. code-block:: console
Stopped OSD(s) removal
ceph orch osd rm 4 --replace
-Expected output::
+Expected output:
+
+.. code-block:: console
Scheduled OSD(s) for replacement
-This follows the same procedure as the procedure in the "Remove OSD" section, with
+This follows the same procedure as the one described in the :ref:`cephadm-osd-removal` section, with
one exception: the OSD is not permanently removed from the CRUSH hierarchy, but is
instead assigned the ``destroyed`` flag.
The ``destroyed`` flag is used to determine which OSD IDs will be reused in the
next OSD deployment.
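+
+As a quick check (an illustrative aside, not part of the original procedure),
+a destroyed OSD remains in the CRUSH hierarchy and ``ceph osd tree`` reports
+its status as ``destroyed``:
+
+.. prompt:: bash #
+
+   ceph osd tree
+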
-If you use OSDSpecs for OSD deployment, your newly added drives will be assigned
+If you use :ref:`OSDSpecs <drivegroups>` for OSD deployment, your newly added drives will be assigned
the OSD IDs of their replaced counterparts. This assumes that the new drives
still match the OSDSpecs.
.. tip::
- The name of your OSDSpec can be retrieved with the command ``ceph orch ls``
+ The name of your OSDSpec can be retrieved with the command ``ceph orch ls``.
Alternatively, you can use an OSDSpec file:
ceph orch apply -i <osd_spec_file> --dry-run
-Expected output::
+Expected output:
+
+.. code-block:: console
NAME HOST DATA DB WAL
<name_of_osd_spec> node1 /dev/vdb - -
.. _osd_autotune:
-Automatically tuning OSD memory
+Automatically Tuning OSD Memory
===============================
OSD daemons will adjust their memory consumption based on the
OSDs. This allows the full use of available memory, and adapts when OSDs or
RAM are added or removed.
-.. warning:: Cephadm sets ``osd_memory_target_autotune`` to ``true`` by default which is usually not appropriate for converged architectures, where a given node is used for both Ceph and compute purposes.
+.. warning:: Cephadm sets :confval:`osd_memory_target_autotune` to ``true`` by
+ default, which is usually not appropriate for converged architectures, where
+ a given node is used for both Ceph and compute purposes.
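+
+For a converged deployment, the autotune ratio can be lowered so that Ceph
+leaves memory for other workloads. A minimal sketch (the value ``0.2`` is an
+arbitrary illustration, not a recommendation from the original text):
+
+.. prompt:: bash #
+
+   ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2
+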
``Cephadm`` will use a fraction
:confval:`mgr/cephadm/autotune_memory_target_ratio` of available memory,
subtracting memory consumed by non-autotuned daemons (non-OSDs and OSDs for which
-``osd_memory_target_autotune`` is false), and then divide the balance by the number
-of OSDs.
+:confval:`osd_memory_target_autotune` is ``false``), and then divide the
+balance by the number of OSDs.
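+
+As a rough worked example (numbers invented for illustration): on a host with
+192 GiB of RAM, a ratio of 0.7, about 6 GiB consumed by non-autotuned daemons,
+and 16 OSDs, each OSD would receive a target of approximately
+(0.7 * 192 GiB - 6 GiB) / 16, or roughly 8 GiB.
+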
-The final targets are reflected in the config database with options like the below::
+The final targets are reflected in the config database with options like the following:
+
+.. code-block:: console
WHO MASK LEVEL OPTION VALUE
osd host:foo basic osd_memory_target 126092301926
osd host:bar basic osd_memory_target 6442450944
Both the limits and the current memory consumed by each daemon are visible from
-the ``ceph orch ps`` output in the ``MEM LIMIT`` column::
+the ``ceph orch ps`` output in the ``MEM LIMIT`` column:
+
+.. code-block:: console
NAME HOST PORTS STATUS REFRESHED AGE MEM USED MEM LIMIT VERSION IMAGE ID CONTAINER ID
osd.1 dael running (3h) 10s ago 3h 72857k 117.4G 17.0.0-3781-gafaed750 7015fda3cd67 9e183363d39c
for transformation into OSDs even when the Ceph cluster operator does not know
the specific device names and paths associated with those disks.
-:ref:`orchestrator-cli-service-spec`\s make it possible to define a ``.yaml``
-or ``.json`` file that can be used to reduce the amount of manual work involved
+:ref:`orchestrator-cli-service-spec`\s make it possible to define a YAML
+or JSON file that can be used to reduce the amount of manual work involved
in creating OSDs.
.. note::
.. prompt:: bash [monitor.1]#
- ceph orch daemon add osd *<host>*:*<path-to-device>*
+ ceph orch daemon add osd <host>:<path-to-device>
-for each device and each host, we can create a ``.yaml`` or ``.json`` file that
+for each device and each host, we can create a YAML or JSON file that
allows us to describe the layout. Here is the most basic example:
-This means :
+This means:
-#. Turn any available device (``ceph-volume` decides which are _available_) into an
+#. Turn any available device (``ceph-volume`` decides which are *available*) into an
OSD on all hosts that match the glob pattern '*'. The glob pattern matches
- registered hosts from `ceph orch host ls`. See
+ registered hosts from ``ceph orch host ls``. See
:ref:`cephadm-services-placement-by-pattern-matching` for more on using
``host_pattern`` matching to use devices for OSDs.
on various attributes. These attributes are gathered by ``ceph-volume``'s drive
inventory. Retrieve these attributes with this command:
-.. code-block:: bash
+.. prompt:: bash #
ceph-volume inventory </path/to/drive>
size: '666G:'
-The supported units of size are Megabyte(M), Gigabyte(G) and Terabyte(T).
-The ``B`` (_byte_) suffix for units is also acceptable: ``MB``, ``GB``, ``TB``.
+The supported units of size are Megabyte (``M``), Gigabyte (``G``), and Terabyte (``T``).
+The ``B`` (*byte*) suffix for units is also acceptable: ``MB``, ``GB``, ``TB``.
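+
+The size filter also accepts explicit ranges. As a brief illustration (the
+bounds here are arbitrary), drives between 10 GB and 40 TB would match:
+
+.. code-block:: yaml
+
+   size: '10G:40T'
+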
Rotational
rotational: 0 | 1
-`1` to match all drives that the kernel indicates are rotational
+``1`` to match all drives that the kernel indicates are rotational
-`0` to match all drives that are non-rotational (SATA, SATA, NVMe SSDs, SAN LUNs, etc)
+``0`` to match all drives that are non-rotational (SAS/SATA SSDs, NVMe SSDs, SAN LUNs, etc.)
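+
+As an illustration of how this filter is typically used (a sketch, not taken
+from the surrounding text), rotational and non-rotational drives can be split
+into data and DB devices within a single spec:
+
+.. code-block:: yaml
+
+   data_devices:
+     rotational: 1
+   db_devices:
+     rotational: 0
+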
All
encrypted: true
-Ceph Squid onwards support TPM2 token enrollment for LUKS2 devices.
+Ceph Squid and later releases support TPM2 token enrollment for LUKS2 devices.
-Add the `tpm2` attribute to the OSD spec:
+Add the ``tpm2`` attribute to the OSD spec:
.. code-block:: yaml
encrypted: true
tpm2: true
-See a full list in the DriveGroupSpecs
+A full list of supported attributes:
.. py:currentmodule:: ceph.deployment.drive_group
Examples
========
-The simple case
+The Simple Case
---------------
When all cluster nodes have identical drives and we wish to use
.. note:: All of the above OSD specs are equally valid. Which you use depends on taste and on how much you expect your node layout to change.
-Multiple OSD specs for a single host
+Multiple OSD Specs for a Single Host
------------------------------------
Here we specify two distinct strategies for deploying OSDs across multiple
-used as OSD data devices, with ``VendorC`` NVMEs SSDs assigned as
-dedicated DB/WAL devices, each serving two SAS/SATA OSDs. We call these _hybrid OSDs.
+used as OSD data devices, with ``VendorC`` NVMe SSDs assigned as
+dedicated DB/WAL devices, each serving two SAS/SATA OSDs. We call these *hybrid* OSDs.
-Multiple hosts with the same disk layout
+Multiple Hosts with the Same Disk Layout
----------------------------------------
When a cluster comprises hosts with different drive layouts, or a complex
-This applies different OSD specs to different hosts that match hosts
-tagged with ``ceph orch`` labels via the ``placement`` filter.
+This applies different OSD specs to different hosts by matching hosts
+tagged with ``ceph orch`` labels via the ``placement`` filter.
-See :ref:`orchestrator-cli-placement-spec`
+For more information, see :ref:`orchestrator-cli-placement-spec`.
.. note::
.. _cephadm-osd-activate:
-Activate existing OSDs
+Activate Existing OSDs
======================
If a host's operating system has been reinstalled, existing OSDs