From: Adam King
Date: Tue, 12 Jul 2022 20:54:19 +0000 (-0400)
Subject: doc/cephadm: add note about OSDs being recreated to OSD removal section
X-Git-Tag: v16.2.11~434^2
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=refs%2Fpull%2F47103%2Fhead;p=ceph.git

doc/cephadm: add note about OSDs being recreated to OSD removal section

Signed-off-by: Adam King
(cherry picked from commit d4a39cd046b93cb7bb5b7ce0311139d9f6552802)
---

diff --git a/doc/cephadm/services/osd.rst b/doc/cephadm/services/osd.rst
index 8e2da1a1cd8a..ab62924a258f 100644
--- a/doc/cephadm/services/osd.rst
+++ b/doc/cephadm/services/osd.rst
@@ -245,6 +245,18 @@ Expected output::
 
 OSDs that are not safe to destroy will be rejected.
 
+.. note::
+   After removing OSDs, if the drives the OSDs were deployed on once again
+   become available, cephadm may automatically try to deploy more OSDs
+   on these drives if they match an existing drivegroup spec. If you deployed
+   the OSDs you are removing with a spec and don't want any new OSDs deployed on
+   the drives after removal, it's best to modify the drivegroup spec before removal.
+   Either set ``unmanaged: true`` to stop it from picking up new drives at all,
+   or modify it in some way so that it no longer matches the drives used for the
+   OSDs you wish to remove. Then re-apply the spec. For more info on drivegroup
+   specs, see :ref:`drivegroups`. For more info on the declarative nature of
+   cephadm with regard to deploying OSDs, see :ref:`cephadm-osd-declarative`.
+
 Monitoring OSD State
 --------------------
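
The workflow described in the added note can be sketched with a minimal OSD service spec; this is an illustration only, and the service id ``osd_spec_default`` and the ``host_pattern`` value are hypothetical placeholders, not part of the commit:

```yaml
# Hypothetical drivegroup spec: with "unmanaged: true", cephadm keeps
# managing existing OSDs from this spec but stops automatically creating
# new OSDs on drives that match it.
service_type: osd
service_id: osd_spec_default   # placeholder service id
placement:
  host_pattern: '*'            # placeholder placement
unmanaged: true
spec:
  data_devices:
    all: true
```

Re-applying the modified spec (for example with ``ceph orch apply -i <spec-file>``) before removing the OSDs prevents cephadm from redeploying OSDs on the freed drives.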