* When you add new drives to the cluster, they will automatically be used to
create new OSDs.
-* When ou remove an OSD and clean the LVM physical volume, a new OSD will be
+* When you remove an OSD and clean the LVM physical volume, a new OSD will be
created automatically.
-If you want to avoid this behavior (disable automatic creation of OSD on available devices), use the ``unmanaged`` parameter:
+If you want to avoid this behavior (that is, to disable the automatic creation of OSDs on available devices), use the ``unmanaged`` parameter:
.. note::
If the ``unmanaged`` flag is not set, ``cephadm`` automatically deploys drives that
- match the OSDSpec. For example, if you specifythe
+ match the OSDSpec. For example, if you specify the
``all-available-devices`` option when creating OSDs, when you ``zap`` a
device the ``cephadm`` orchestrator automatically creates a new OSD on the
device. To disable this behavior, see :ref:`cephadm-osd-declarative`.
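+
+As a rough sketch, an OSDSpec that matches all available devices but leaves
+them unmanaged could look like the following (the ``service_id`` and the
+``all: true`` device filter are placeholders; only the top-level
+``unmanaged: true`` field disables automatic OSD creation):
+
+.. code-block:: yaml
+
+    service_type: osd
+    service_id: example_all_available   # illustrative name
+    placement:
+      host_pattern: '*'
+    unmanaged: true                     # cephadm will not create OSDs on matching devices
+    spec:
+      data_devices:
+        all: true
+
+Removing the ``unmanaged`` flag (or setting it to ``false``) and re-applying
+the specification should return the matching devices to automatic management.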
size: ':1701G'
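+
+The ``size`` filter can also combine a lower and an upper bound in a single
+expression; the values below are only illustrative:
+
+.. code-block:: yaml
+
+    size: '666G:1701G'   # only drives between 666 GB and 1701 GB
+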
-To include drives equal to or greater than 666 GB in siz:
+To include drives equal to or greater than 666 GB in size:
.. code-block:: yaml
for WAL+DB offload.
If you know that drives larger than 2 TB should always be used as data devices,
-and drives smaller than 2 TB should always be used as WAL/DB devices, you can
+and drives smaller than 2 TB should always be used as WAL/DB devices, you can
filter by size:
.. code-block:: yaml
data_devices:
rotational: 1 # All drives identified as HDDs
db_devices:
- rotational: 0 # All drivves identified as SSDs
+ rotational: 0 # All drives identified as SSDs
---
service_type: osd
service_id: disk_layout_b
Linux or an HBA may enumerate devices differently across boots, or when
drives are added or replaced.
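+
+Because of this, it may be safer to reference devices through a persistent
+name when using the ``paths`` filter. A sketch, assuming the host exposes
+stable ``/dev/disk/by-path`` links (the host name, ``service_id``, and the
+path itself are illustrative):
+
+.. code-block:: yaml
+
+    service_type: osd
+    service_id: example_by_path         # illustrative name
+    placement:
+      hosts:
+        - node01                        # illustrative host
+    spec:
+      data_devices:
+        paths:
+          - /dev/disk/by-path/pci-0000:05:00.0-scsi-0:0:0:0   # illustrative persistent path
+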
-It iss possible to specify a ``crush_device_class`` parameter
+It is possible to specify a ``crush_device_class`` parameter
to be applied to OSDs created on devices matched by the ``paths`` filter:
.. code-block:: yaml