From 61fe2a21b7d3869a156b1994696b38fbb6587be9 Mon Sep 17 00:00:00 2001
From: Sebastian Wagner
Date: Tue, 26 Oct 2021 11:31:14 +0200
Subject: [PATCH] doc/cephadm: osd.rst: s/DriveGroup/OSD spec/

Signed-off-by: Sebastian Wagner
---
 doc/cephadm/services/osd.rst | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/doc/cephadm/services/osd.rst b/doc/cephadm/services/osd.rst
index aff13a2cef5..8739ad11c5f 100644
--- a/doc/cephadm/services/osd.rst
+++ b/doc/cephadm/services/osd.rst
@@ -358,7 +358,7 @@ Example command:
 
 .. note::
    If the unmanaged flag is unset, cephadm automatically deploys drives that
-   match the DriveGroup in your OSDSpec. For example, if you use the
+   match the OSDSpec. For example, if you use the
    ``all-available-devices`` option when creating OSDs, when you ``zap`` a
    device the cephadm orchestrator automatically creates a new OSD in the
    device. To disable this behavior, see :ref:`cephadm-osd-declarative`.
@@ -612,7 +612,7 @@ Additional Options
 ------------------
 
 There are multiple optional settings you can use to change the way OSDs are deployed.
-You can add these options to the base level of a DriveGroup for it to take effect.
+You can add these options to the base level of an OSD spec for them to take effect.
 
 This example would deploy all OSDs with encryption enabled.
 
@@ -699,7 +699,7 @@ If you know that drives with more than 2TB will always be the slower data device
       db_devices:
         size: ':2TB'
 
-Note: All of the above DriveGroups are equally valid. Which of those you want to use depends on taste and on how much you expect your node layout to change.
+Note: All of the above OSD specs are equally valid. Which of those you want to use depends on taste and on how much you expect your node layout to change.
 
 
 Multiple OSD specs for a single host
-- 
2.39.5
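
For reviewers who want the context the renamed term refers to: the "base level of an OSD spec" is the top level of the service spec's ``spec`` section. A minimal sketch combining the two options the touched hunks mention (``encrypted`` and the size-filtered ``db_devices``) could look like the following; the ``service_id`` and the ``host_pattern`` placement are illustrative values, not taken from the patch:

```yaml
# Illustrative OSD spec, not part of the patch.
# 'encrypted' sits at the base level of 'spec', which is what the
# reworded sentence in hunk 2 describes; the size filters mirror
# the values shown in hunk 3.
service_type: osd
service_id: example_osd_spec   # hypothetical name
placement:
  host_pattern: '*'            # assumption: match all hosts
spec:
  data_devices:
    size: '2TB:'               # drives larger than 2TB become data devices
  db_devices:
    size: ':2TB'               # drives up to 2TB hold the DB
  encrypted: true              # base-level option: deploy OSDs encrypted
```

Such a spec would typically be applied with ``ceph orch apply -i <file>``.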