From: Sebastian Wagner
Date: Tue, 26 Oct 2021 09:31:14 +0000 (+0200)
Subject: doc/cephadm: osd.rst: s/DriveGroup/OSD spec/
X-Git-Tag: v17.1.0~580^2~1
X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=61fe2a21b7d3869a156b1994696b38fbb6587be9;p=ceph.git

doc/cephadm: osd.rst: s/DriveGroup/OSD spec/

Signed-off-by: Sebastian Wagner
---

diff --git a/doc/cephadm/services/osd.rst b/doc/cephadm/services/osd.rst
index aff13a2cef5ea..8739ad11c5f20 100644
--- a/doc/cephadm/services/osd.rst
+++ b/doc/cephadm/services/osd.rst
@@ -358,7 +358,7 @@ Example command:
 
 .. note::
   If the unmanaged flag is unset, cephadm automatically deploys drives that
-  match the DriveGroup in your OSDSpec. For example, if you use the
+  match the OSDSpec. For example, if you use the
   ``all-available-devices`` option when creating OSDs, when you ``zap`` a
   device the cephadm orchestrator automatically creates a new OSD in the
   device. To disable this behavior, see :ref:`cephadm-osd-declarative`.
@@ -612,7 +612,7 @@ Additional Options
 ------------------
 
 There are multiple optional settings you can use to change the way OSDs are deployed.
-You can add these options to the base level of a DriveGroup for it to take effect.
+You can add these options to the base level of an OSD spec for it to take effect.
 
 This example would deploy all OSDs with encryption enabled.
 
@@ -699,7 +699,7 @@ If you know that drives with more than 2TB will always be the slower data device
       db_devices:
         size: ':2TB'
 
-Note: All of the above DriveGroups are equally valid. Which of those you want to use depends on taste and on how much you expect your node layout to change.
+Note: All of the above OSD specs are equally valid. Which of those you want to use depends on taste and on how much you expect your node layout to change.
 
 Multiple OSD specs for a single host
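
For context on the wording change in the second hunk: options such as ``encrypted`` sit at the base level of the ``spec`` section of an OSD spec, alongside the device filters. A minimal sketch of such a spec, assuming an illustrative service id (``example_osd_spec``) and a catch-all placement, neither of which appears in this patch:

    service_type: osd
    service_id: example_osd_spec   # illustrative name, not from this patch
    placement:
      host_pattern: '*'            # apply to all hosts; narrow as needed
    spec:
      data_devices:
        all: true                  # consume every available data device
      encrypted: true              # the "base level" option the hunk refers to

A spec like this could be applied with ``ceph orch apply -i <file>``; the same base level would also hold filters such as the ``db_devices: size: ':2TB'`` rule shown in the third hunk.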