Low-hanging spelling, punctuation, and capitalization errors.
Ignore style and other more complex issues.
Use angle brackets consistently for value placeholders.
Signed-off-by: Ville Ojamo <git2233+ceph@ojamo.eu>
``drive-group``
===============
The drive-group subcommand allows for passing :ref:`drivegroups` specifications
-straight to ceph-volume as json. ceph-volume will then attempt to deploy this
+straight to ceph-volume as JSON. ceph-volume will then attempt to deploy these
drive groups via the batch subcommand.
The specification can be passed via a file, string argument or on stdin.
ceph-volume
===========
-Deploy OSDs with different device technologies like lvm or physical disks using
+Deploy OSDs with different device technologies like LVM or physical disks using
pluggable tools (:doc:`lvm/index` itself is treated like a plugin) and trying to
-follow a predictable, and robust way of preparing, activating, and starting OSDs.
+follow a predictable and robust way of preparing, activating, and starting OSDs.
:ref:`Overview <ceph-volume-overview>` |
-:ref:`Plugin Guide <ceph-volume-plugins>` |
+:ref:`Plugin Guide <ceph-volume-plugins>`
**Command Line Subcommands**
**Node inventory**
The :ref:`ceph-volume-inventory` subcommand provides information and metadata
-about a nodes physical disk inventory.
+about a node's physical disk inventory.
Migrating
New deployments
^^^^^^^^^^^^^^^
-For new deployments, :ref:`ceph-volume-lvm` is recommended, it can use any
+For new deployments, :ref:`ceph-volume-lvm` is recommended. It can use any
-logical volume as input for data OSDs, or it can setup a minimal/naive logical
+logical volume as input for data OSDs, or it can set up a minimal/naive logical
volume from a device.
-It deviates from ``ceph-disk`` by not interacting or relying on the udev rules
+It deviates from ``ceph-disk`` by not interacting with or relying on the udev rules
that come installed for Ceph. These rules allow automatic detection of
-previously setup devices that are in turn fed into ``ceph-disk`` to activate
+previously set up devices that are in turn fed into ``ceph-disk`` to activate
them.
.. _ceph-disk-replaced:
Replacing ``ceph-disk``
-----------------------
The ``ceph-disk`` tool was created at a time when the project was required to
-support many different types of init systems (upstart, sysvinit, etc...) while
+support many different types of init systems (upstart, sysvinit, etc.) while
being able to discover devices. This caused the tool to concentrate initially
-(and exclusively afterwards) on GPT partitions. Specifically on GPT GUIDs,
+(and exclusively afterwards) on GPT partitions, specifically on GPT GUIDs,
which were used to label devices in a unique way to answer questions like:
-It was hard to debug, or even replicate these problems given the asynchronous
+It was hard to debug or even replicate these problems, given the asynchronous
-behavior of ``UDEV``.
+behavior of ``udev``.
-Since the world-view of ``ceph-disk`` had to be GPT partitions exclusively, it meant
+Since the worldview of ``ceph-disk`` had to be GPT partitions exclusively, it meant
that it couldn't work with other technologies like LVM, or similar device
mapper devices. It was ultimately decided to create something modular, starting
with LVM support, and the ability to expand on other technologies as needed.
there are going to be lots of ways that people provision the hardware devices
that we need to consider. There are already two: legacy ceph-disk devices that
are still in use and have GPT partitions (handled by :ref:`ceph-volume-simple`),
-and lvm. SPDK devices where we manage NVMe devices directly from userspace are
+and LVM. SPDK devices where we manage NVMe devices directly from userspace are
-on the immediate horizon, where LVM won't work there since the kernel isn't
+on the immediate horizon, where LVM won't work since the kernel isn't
involved at all.
``ceph-volume lvm``
-------------------
-By making use of :term:`LVM tags`, the :ref:`ceph-volume-lvm` sub-command is
+By making use of :term:`LVM tags`, the :ref:`ceph-volume-lvm` subcommand is
able to store and later re-discover and query devices associated with OSDs so
that they can later be activated.
``inventory``
=============
-The ``inventory`` subcommand queries a host's disc inventory and provides
+The ``inventory`` subcommand queries a host's disk inventory and provides
hardware information and metadata on every physical device.
-By default the command returns a short, human-readable report of all physical disks.
+By default, the command returns a short, human-readable report of all physical disks.
-For programmatic consumption of this report pass ``--format json`` to generate a
+For programmatic consumption of this report, pass ``--format json`` to generate a
JSON formatted report. This report includes extensive information on the
physical drives such as disk metadata (like model and size), logical volumes
-and whether they are used by ceph, and if the disk is usable by ceph and
+and whether they are used by Ceph, and if the disk is usable by Ceph and
reasons why not.
A device path can be specified to report extensive information on a device in
-both plain and json format.
+both plain and JSON format.
New OSDs
--------
-To activate newly prepared OSDs both the :term:`OSD id` and :term:`OSD uuid`
+To activate newly prepared OSDs, both the :term:`OSD ID` and :term:`OSD UUID`
need to be supplied. For example::
ceph-volume lvm activate --bluestore 0 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8
will report them in the command output and skip them, making it safe to rerun
(idempotent).
-requiring uuids
+Requiring UUIDs
^^^^^^^^^^^^^^^
-The :term:`OSD uuid` is being required as an extra step to ensure that the
+The :term:`OSD UUID` is required as an extra step to ensure that the
right OSD is being activated. It is entirely possible that a previous OSD with
-the same id exists and would end up activating the incorrect one.
+the same ID exists and would end up activating the incorrect one.
dmcrypt
With OSDs previously created by ``ceph-volume``, a *discovery* process is
performed using :term:`LVM tags` to enable the systemd units.
-The systemd unit will capture the :term:`OSD id` and :term:`OSD uuid` and
+The systemd unit will capture the :term:`OSD ID` and :term:`OSD UUID` and
persist it. Internally, the activation will enable it like::
- systemctl enable ceph-volume@lvm-$id-$uuid
+ systemctl enable ceph-volume@lvm-<id>-<uuid>
For example::
systemctl enable ceph-volume@lvm-0-8715BEB4-15C5-49DE-BA6F-401086EC7B41
-Would start the discovery process for the OSD with an id of ``0`` and a UUID of
+Would start the discovery process for the OSD with an ID of ``0`` and a UUID of
``8715BEB4-15C5-49DE-BA6F-401086EC7B41``.
-.. note:: for more details on the systemd workflow see :ref:`ceph-volume-lvm-systemd`
+.. note:: For more details on the systemd workflow, see :ref:`ceph-volume-lvm-systemd`.
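The enablement convention above can be shown as a minimal Python sketch; the unit-name template is taken from the text, while the helper name itself is hypothetical:

```python
def lvm_unit_name(osd_id: int, osd_uuid: str) -> str:
    # Template described in the text: ceph-volume@lvm-<id>-<uuid>
    return f"ceph-volume@lvm-{osd_id}-{osd_uuid}"

print(lvm_unit_name(0, "8715BEB4-15C5-49DE-BA6F-401086EC7B41"))
# ceph-volume@lvm-0-8715BEB4-15C5-49DE-BA6F-401086EC7B41
```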
The systemd unit will look for the matching OSD device, and by looking at its
:term:`LVM tags` will proceed to:
Existing OSDs
-------------
-For existing OSDs that have been deployed with ``ceph-disk``, they need to be
+Existing OSDs that have been deployed with ``ceph-disk`` need to be
-scanned and activated :ref:`using the simple sub-command <ceph-volume-simple>`.
+scanned and activated :ref:`using the simple subcommand <ceph-volume-simple>`.
If a different tool was used then the only way to port them over to the new
mechanism is to prepare them again (losing data). See
:ref:`ceph-volume-lvm-existing-osds` for details on how to proceed.
Summary
-------
-To recap the ``activate`` process for :term:`bluestore`:
+To recap the ``activate`` process for :term:`BlueStore`:
-#. Require both :term:`OSD id` and :term:`OSD uuid`
-#. Enable the system unit with matching id and uuid
+#. Require both :term:`OSD ID` and :term:`OSD UUID`
+#. Enable the systemd unit with matching ID and UUID
#. Create the ``tmpfs`` mount at the OSD directory in
- ``/var/lib/ceph/osd/$cluster-$id/``
+ ``/var/lib/ceph/osd/<cluster>-<id>/``
#. Recreate all the files needed with ``ceph-bluestore-tool prime-osd-dir`` by
pointing it to the OSD ``block`` device.
#. The systemd unit will ensure all devices are ready and linked
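The mount-point convention from the summary can be sketched as a small, hypothetical helper (only the ``/var/lib/ceph/osd/<cluster>-<id>/`` pattern comes from the text):

```python
def osd_mount_point(cluster: str, osd_id: int) -> str:
    # Convention from the summary: /var/lib/ceph/osd/<cluster>-<id>/
    return f"/var/lib/ceph/osd/{cluster}-{osd_id}"

print(osd_mount_point("ceph", 0))  # /var/lib/ceph/osd/ceph-0
```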
drive-groups. One individual drive group specification translates to a single
``batch`` invocation.
-The subcommand is based to :ref:`ceph-volume-lvm-create`, and will use the very
+The subcommand is based on :ref:`ceph-volume-lvm-create`, and will use the very
same code path. All ``batch`` does is to calculate the appropriate sizes of all
volumes and skip over already created volumes.
All the features that ``ceph-volume lvm create`` supports, like ``dmcrypt``,
-avoiding ``systemd`` units from starting, defining bluestore,
-is supported.
+preventing ``systemd`` units from starting, and defining BlueStore,
+are supported.
.. _ceph-volume-lvm-batch_auto:
Automatic sorting of disks
--------------------------
If ``batch`` receives only a single list of data devices and other options are
-passed , ``ceph-volume`` will auto-sort disks by its rotational
+passed, ``ceph-volume`` will auto-sort disks by their rotational
property and use non-rotating disks for ``block.db`` or ``journal`` depending
on the objectstore used. If all devices are to be used for standalone OSDs,
no matter if rotating or solid state, pass ``--no-auto``.
-For example assuming :term:`bluestore` is used and ``--no-auto`` is not passed,
+For example, assuming :term:`BlueStore` is used and ``--no-auto`` is not passed,
the deprecated behavior would deploy the following, depending on the devices
passed:
-#. Devices are all spinning HDDs: 1 OSD is created per device
+#. Devices are all HDDs: 1 OSD is created per device
#. Devices are all SSDs: 2 OSDs are created per device
#. Devices are a mix of HDDs and SSDs: data is placed on the spinning device,
the ``block.db`` is created on the SSD, as large as possible.
.. note:: Although operations in ``ceph-volume lvm create`` allow usage of
-``block.wal`` it isn't supported with the ``auto`` behavior.
+``block.wal``, it isn't supported with the ``auto`` behavior.
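The deprecated auto-sorting behavior amounts to splitting the device list by rotational property. A rough sketch, assuming hypothetical device records (the ``rotational`` field name mirrors the property the text describes, not ceph-volume's internal data structures):

```python
def auto_sort(devices):
    # Rotational devices carry OSD data; non-rotational ones back
    # block.db or journal, per the deprecated default described above.
    hdds = [d for d in devices if d["rotational"]]
    ssds = [d for d in devices if not d["rotational"]]
    return hdds, ssds

devices = [
    {"path": "/dev/sdb", "rotational": True},
    {"path": "/dev/sdc", "rotational": True},
    {"path": "/dev/nvme0n1", "rotational": False},
]
data_devs, db_devs = auto_sort(devices)
# data_devs -> the two HDDs; db_devs -> the NVMe device
```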
-This default auto-sorting behavior is now DEPRECATED and will be changed in future releases.
-Instead devices are not automatically sorted unless the ``--auto`` option is passed
+This default auto-sorting behavior is now **deprecated** and will be changed in future releases.
+Instead, devices are not automatically sorted unless the ``--auto`` option is passed.
-It is recommended to make use of the explicit device lists for ``block.db``,
- ``block.wal`` and ``journal``.
+It is recommended to make use of the explicit device lists
+for ``block.db``, ``block.wal`` and ``journal``.
.. _ceph-volume-lvm-batch_bluestore:
$ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1
This will deploy three OSDs with external ``db`` and ``wal`` volumes on
-an NVME device.
+an NVMe device.
Pretty reporting
----------------
``create``
===========
-This subcommand wraps the two-step process to provision a new osd (calling
+This subcommand wraps the two-step process to provision a new OSD (calling
``prepare`` first and then ``activate``) into a single
one. The reason to prefer ``prepare`` and then ``activate`` is to gradually
-introduce new OSDs into a cluster, and avoiding large amounts of data being
+introduce new OSDs into a cluster, avoiding large amounts of data being
rebalanced.
The single-call process unifies exactly what :ref:`ceph-volume-lvm-prepare` and
Logical volumes can be encrypted using ``dmcrypt`` by specifying the
``--dmcrypt`` flag when creating OSDs. When using LVM, logical volumes can be
encrypted in different ways. ``ceph-volume`` does not offer as many options as
-LVM does, but it encrypts logical volumes in a way that is consistent and
+LVM does, but it encrypts logical volumes in a way that is consistent and
robust.
In this case, ``ceph-volume lvm`` follows this constraint:
Ceph.
.. note:: Version 1 of LUKS is referred to in this documentation as "LUKS".
- Version 2 is of LUKS is referred to in this documentation as "LUKS2".
+ Version 2 of LUKS is referred to in this documentation as "LUKS2".
LUKS on LVM
naming convention *although it does not make sense for the new encryption
workflow*.
-After the common steps of setting up the OSD during the "prepare stage" (
-with :term:`bluestore`), the logical volume is left ready
+After the common steps of setting up the OSD during the "prepare stage"
+(with :term:`BlueStore`), the logical volume is left ready
to be activated, regardless of the state of the device (encrypted or
decrypted).
**Internal functionality**
There are other aspects of the ``lvm`` subcommand that are internal and not
-exposed to the user, these sections explain how these pieces work together,
+exposed to the user. These sections explain how these pieces work together,
clarifying the workflows of the tool.
:ref:`Systemd Units <ceph-volume-lvm-systemd>` |
means that all devices and logical volumes found in the system will be
displayed.
-Full ``pretty`` reporting for two OSDs, one with a lv as a journal, and another
+Full ``pretty`` reporting for two OSDs, one with an LV as a journal, and another
one with a physical device may look similar to:
.. prompt:: bash #
``pretty``, but an array when using ``json``.
.. note:: Tags are displayed in a readable format. The ``osd id`` key is stored
- as a ``ceph.osd_id`` tag. For more information on lvm tag conventions
- see :ref:`ceph-volume-lvm-tag-api`
+ as a ``ceph.osd_id`` tag. For more information on LVM tag conventions,
+ see :ref:`ceph-volume-lvm-tag-api`.
Single Reporting
----------------
.. note:: Tags are displayed in a readable format. The ``osd id`` key is stored
- as a ``ceph.osd_id`` tag. For more information on lvm tag conventions
- see :ref:`ceph-volume-lvm-tag-api`
+ as a ``ceph.osd_id`` tag. For more information on LVM tag conventions,
+ see :ref:`ceph-volume-lvm-tag-api`.
-For plain disks, the full path to the device is required. For example, for
-devices like ``/dev/sda1`` could change to ``/dev/sdb1``.
+For plain disks, the full path to the device is required. For example, a
+device name like ``/dev/sda1`` could change to ``/dev/sdb1``.
The detection is possible because the ``PARTUUID`` is stored as part of the
-metadata in the logical volume for the data lv. Even in the case of a journal
+metadata in the logical volume for the data LV. Even in the case of a journal
that is a physical device, this information is still stored on the data logical
volume associated with it.
``migrate``
===========
-Moves BlueFS data from source volume(s) to the target one, source volumes
+Moves BlueFS data from source volume(s) to the target one. Source volumes
(except the main, i.e. data or block one) are removed on success.
-LVM volumes are permitted for Target only, both already attached or new one.
+Only LVM volumes are permitted as the target, either already attached or new.
In the latter case it is attached to the OSD replacing one of the source
devices.
-Following replacement rules apply (in the order of precedence, stop
+The following replacement rules apply (in order of precedence; stop
on the first match):
- if source list has DB volume - target device replaces it.
.. prompt:: bash #
- lvextend -l ${size} ${lv}/${db} ${ssd_dev}
+ lvextend -l <size> <lv>/<db> <ssd_dev>
#. Stop the OSD:
.. prompt:: bash #
- cephadm unit --fsid $cid --name osd.${osd} stop
+ cephadm unit --fsid <cid> --name osd.<osd> stop
#. Run the ``bluefs-bdev-expand`` command:
.. prompt:: bash #
- cephadm shell --fsid $cid --name osd.${osd} -- ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-${osd}
+ cephadm shell --fsid <cid> --name osd.<osd> -- ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-<osd>
#. Run the ``bluefs-bdev-migrate`` command:
.. prompt:: bash #
- cephadm shell --fsid $cid --name osd.${osd} -- ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-${osd} --devs-source /var/lib/ceph/osd/ceph-${osd}/block --dev-target /var/lib/ceph/osd/ceph-${osd}/block.db
+ cephadm shell --fsid <cid> --name osd.<osd> -- ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-<osd> --devs-source /var/lib/ceph/osd/ceph-<osd>/block --dev-target /var/lib/ceph/osd/ceph-<osd>/block.db
#. Restart the OSD:
.. prompt:: bash #
- cephadm unit --fsid $cid --name osd.${osd} start
+ cephadm unit --fsid <cid> --name osd.<osd> start
-.. note:: *The above procedure was developed by Chris Dunlop on the [ceph-users] mailing list, and can be seen in its original context here:* `[ceph-users] Re: Fixing BlueFS spillover (pacific 16.2.14) <https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/POPUFSZGXR3P2RPYPJ4WJ4HGHZ3QESF6/>`_
+.. note:: *The above procedure was developed by Chris Dunlop on the [ceph-users] mailing list, and can be seen in its original context here:* `[ceph-users] Re: Fixing BlueFS spillover (pacific 16.2.14) <https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/POPUFSZGXR3P2RPYPJ4WJ4HGHZ3QESF6/>`_.
``bluestore``
-------------
-:term:`Bluestore<bluestore>` is the default backend for new OSDs. Bluestore
+:term:`BlueStore<bluestore>` is the default backend for new OSDs. BlueStore
supports the following configurations:
* a block device, a block.wal device, and a block.db device
If a ``block.db`` device or a ``block.wal`` device is needed, it can be
specified with ``--block.db`` or ``--block.wal``. These can be physical
devices, partitions, or logical volumes. ``block.db`` and ``block.wal`` are
-optional for bluestore.
+optional for BlueStore.
For both ``block.db`` and ``block.wal``, partitions can be used as-is, and
therefore are not made into logical volumes.
In the above case, a device was used for ``block``, so ``ceph-volume`` created
a volume group and a logical volume using the following conventions:
-* volume group name: ``ceph-{cluster fsid}`` (or if the volume group already
- exists: ``ceph-{random uuid}``)
+* volume group name: ``ceph-<cluster fsid>`` (or if the volume group already
+ exists: ``ceph-<random uuid>``)
-* logical volume name: ``osd-block-{osd_fsid}``
+* logical volume name: ``osd-block-<osd_fsid>``
.. _ceph-volume-lvm-prepare_filestore:
-------------
.. warning:: Filestore has been deprecated in the Reef release and is no longer supported.
-``Filestore<filestore>`` is the OSD backend that prepares logical volumes for a
-`filestore`-backed object-store OSD.
+Filestore is the OSD backend that prepares logical volumes for a
+filestore-backed object-store OSD.
-``Filestore<filestore>`` uses a logical volume to store OSD data and it uses
+Filestore uses a logical volume to store OSD data and it uses
physical devices, partitions, or logical volumes to store the journal. If a
physical device is used to create a filestore backend, a logical volume will be
created on that physical device. If the provided volume group's name begins
/var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o \
/var/lib/ceph/osd/<cluster name>-<osd id>/activate.monmap
-To populate the OSD directory (which has already been mounted), use this ``ceph-osd`` command:
+To populate the OSD directory (which has already been mounted), use this ``ceph-osd`` command:
.. prompt:: bash #
Partition Table: gpt
Disk Flags:
-Now lets create a single partition, and verify later if ``blkid`` can find
+Now let's create a single partition, and verify later if ``blkid`` can find
a ``PARTUUID`` that is needed by ``ceph-volume``:
.. prompt:: bash #
Existing OSDs
-------------
For existing clusters that want to use this new system and have OSDs that are
-already running there are a few things to take into account:
+already running, there are a few things to take into account:
-.. warning:: this process will forcefully format the data device, destroying
+.. warning:: This process will forcefully format the data device, destroying
existing data, if any.
* OSD paths should follow this convention::
ceph-volume lvm prepare --filestore --osd-id 0 --osd-fsid E3D291C1-E7BF-4984-9794-B60D9FA139CB
-The command line tool will not contact the monitor to generate an OSD ID and
+The command line tool will not contact the Monitor to generate an OSD ID and
will format the LVM device in addition to storing the metadata on it so that it
can be started later (for detailed metadata description see
:ref:`ceph-volume-lvm-tags`).
* ``osd_id``
* ``crush_device_class``
-For :term:`bluestore` these tags will be added:
+For :term:`BlueStore`, these tags will be added:
* ``block_device``
* ``block_uuid``
Summary
-------
-To recap the ``prepare`` process for :term:`bluestore`:
+To recap the ``prepare`` process for :term:`BlueStore`:
#. Accepts raw physical devices, partitions on physical devices or logical volumes as arguments.
#. Creates logical volumes on any raw physical devices.
#. Generate a UUID for the OSD
-#. Ask the monitor get an OSD ID reusing the generated UUID
+#. Ask the Monitor for an OSD ID, reusing the generated UUID
#. OSD data directory is created on a tmpfs mount.
#. ``block``, ``block.wal``, and ``block.db`` are symlinked if defined.
#. monmap is fetched for activation
scan
====
-This sub-command will allow to discover Ceph volumes previously setup by the
+This subcommand discovers Ceph volumes previously set up by the
tool by looking into the system's logical volumes and their tags.
As part of the :ref:`ceph-volume-lvm-prepare` process, the logical volumes are assigned
a few tags with important pieces of information.
-.. note:: This sub-command is not yet implemented
+.. note:: This subcommand is not yet implemented.
=======
Upon startup, it will identify the logical volume using :term:`LVM tags`,
finding a matching ID and later ensuring it is the right one with
-the :term:`OSD uuid`.
+the :term:`OSD UUID`.
-After identifying the correct volume it will then proceed to mount it by using
+After identifying the correct volume, it will then proceed to mount it by using
the OSD destination conventions, that is::
/var/lib/ceph/osd/<cluster name>-<osd id>
-For our example OSD with an id of ``0``, that means the identified device will
+For our example OSD with an ID of ``0``, that means the identified device will
be mounted at::
systemctl start ceph-osd@0
The systemd portion of this process is handled by the ``ceph-volume lvm
-trigger`` sub-command, which is only in charge of parsing metadata coming from
+trigger`` subcommand, which is only in charge of parsing metadata coming from
systemd and startup, and then dispatching to ``ceph-volume lvm activate`` which
would proceed with activation.
``zap``
=======
-This subcommand is used to zap lvs, partitions or raw devices that have been used
-by ceph OSDs so that they may be reused. If given a path to a logical
-volume it must be in the format of vg/lv. Any file systems present
-on the given lv or partition will be removed and all data will be purged.
+This subcommand is used to zap LVs, partitions, or raw devices that have been used
+by Ceph OSDs so that they may be reused. If given a path to a logical
+volume, it must be in the ``vg/lv`` format. Any file systems present
+on the given LV or partition will be removed and all data will be purged.
-.. note:: The lv or partition will be kept intact.
+.. note:: The LV or partition will be kept intact.
-.. note:: If the logical volume, raw device or partition is being used for any ceph related
- mount points they will be unmounted.
+.. note:: If the logical volume, raw device, or partition is being used for any Ceph-related
+ mount points, they will be unmounted.
Zapping a logical volume::
- ceph-volume lvm zap {vg name/lv name}
+ ceph-volume lvm zap <vg name/lv name>
Zapping a partition::
Removing Devices
----------------
-When zapping, and looking for full removal of the device (lv, vg, or partition)
+When zapping and looking for full removal of the device (LV, VG, or partition),
use the ``--destroy`` flag. A common use case is to simply deploy OSDs using
a whole raw device. If you do so and then wish to reuse that device for another
-OSD you must use the ``--destroy`` flag when zapping so that the vgs and lvs
+OSD, you must use the ``--destroy`` flag when zapping so that the VGs and LVs
that ceph-volume created on the raw device will be removed.
-.. note:: Multiple devices can be accepted at once, to zap them all
+.. note:: Multiple devices can be specified at once to zap them all.
-Zapping a raw device and destroying any vgs or lvs present::
+Zapping a raw device and destroying any VGs or LVs present::
ceph-volume lvm zap /dev/sdc --destroy
-This action can be performed on partitions, and logical volumes as well::
+This action can be performed on partitions and logical volumes::
ceph-volume lvm zap /dev/sdc1 --destroy
ceph-volume lvm zap osd-vg/data-lv --destroy
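The ``vg/lv`` argument format expected by ``zap`` can be checked with a hypothetical validator; only the "exactly one slash, both sides non-empty" rule is implied by the text:

```python
def is_vg_lv(path: str) -> bool:
    # A logical-volume argument must look like "vg/lv": exactly one
    # slash, with a non-empty volume group and logical volume name.
    parts = path.split("/")
    return len(parts) == 2 and all(parts)

assert is_vg_lv("osd-vg/data-lv")
assert not is_vg_lv("/dev/sdc1")  # a partition path, not vg/lv
```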
``activate``
============
-Once :ref:`ceph-volume-simple-scan` has been completed, and all the metadata
-captured for an OSD has been persisted to ``/etc/ceph/osd/{id}-{uuid}.json``
+Once :ref:`ceph-volume-simple-scan` has been completed and all the metadata
+captured for an OSD has been persisted to ``/etc/ceph/osd/<id>-<uuid>.json``,
the OSD is now ready to get "activated".
This activation process **disables** all ``ceph-disk`` systemd units by masking
-them, to prevent the UDEV/ceph-disk interaction that will attempt to start them
+them, to prevent the udev/ceph-disk interaction that will attempt to start them
up at boot time.
The disabling of ``ceph-disk`` units is done only when calling ``ceph-volume
simple activate`` directly, but is avoided when being called by systemd when
the system is booting up.
-The activation process requires using both the :term:`OSD id` and :term:`OSD uuid`
+The activation process requires using both the :term:`OSD ID` and :term:`OSD UUID`.
To activate parsed OSDs::
ceph-volume simple activate 0 6cc43680-4f6e-4feb-92ff-9c7ba204120e
ceph-volume simple activate --file /etc/ceph/osd/0-6cc43680-4f6e-4feb-92ff-9c7ba204120e.json
-requiring uuids
+Requiring UUIDs
^^^^^^^^^^^^^^^
-The :term:`OSD uuid` is being required as an extra step to ensure that the
+The :term:`OSD UUID` is required as an extra step to ensure that the
right OSD is being activated. It is entirely possible that a previous OSD with
-the same id exists and would end up activating the incorrect one.
+the same ID exists and would end up activating the incorrect one.
Discovery
This discovery process ensures that devices can be correctly detected even if
they are repurposed into another system or if their name changes (as in the
-case of non-persisting names like ``/dev/sda1``)
+case of non-persisting names like ``/dev/sda1``).
The JSON configuration file used to map what devices go to what OSD will then
coordinate the mounting and symlinking as part of activation.
To ensure that the symlinks are always correct, if they exist in the OSD
directory, the symlinks will be re-done.
-A systemd unit will capture the :term:`OSD id` and :term:`OSD uuid` and
+A systemd unit will capture the :term:`OSD ID` and :term:`OSD UUID` and
persist it. Internally, the activation will enable it like::
- systemctl enable ceph-volume@simple-$id-$uuid
+ systemctl enable ceph-volume@simple-<id>-<uuid>
For example::
systemctl enable ceph-volume@simple-0-8715BEB4-15C5-49DE-BA6F-401086EC7B41
-Would start the discovery process for the OSD with an id of ``0`` and a UUID of
+Would start the discovery process for the OSD with an ID of ``0`` and a UUID of
``8715BEB4-15C5-49DE-BA6F-401086EC7B41``.
The systemd process will call out to activate passing the information needed to
identify the OSD and its devices, and it will proceed to:
-# mount the device in the corresponding location (by convention this is
- ``/var/lib/ceph/osd/<cluster name>-<osd id>/``)
+#. mount the device in the corresponding location (by convention this is
+ ``/var/lib/ceph/osd/<cluster name>-<osd id>/``)
+#. ensure that all required devices are ready for that OSD and properly linked
-# ensure that all required devices are ready for that OSD and properly linked.
-The symbolic link will **always** be re-done to ensure that the correct device is linked.
-
-# start the ``ceph-osd@0`` systemd unit
+ The symbolic link will **always** be re-done to ensure that the correct device is linked.
+#. start the ``ceph-osd@0`` systemd unit
so that when activation is needed, the OSD can start normally without getting
interference from ``ceph-disk``.
-As part of the activation process the systemd units for ``ceph-disk`` in charge
-of reacting to ``udev`` events, are linked to ``/dev/null`` so that they are
+As part of the activation process, the systemd units for ``ceph-disk`` in charge
+of reacting to ``udev`` events are linked to ``/dev/null`` so that they are
fully inactive.
workflows or tools (like ``udev`` or ``ceph-disk``). Encryption with LUKS or
PLAIN formats is fully supported.
-The command has the ability to inspect a running OSD, by inspecting the
+The command has the ability to inspect a running OSD by inspecting the
directory where the OSD data is stored, or by consuming the data partition.
The command can also scan all running OSDs if no path or device is provided.
-Once scanned, information will (by default) persist the metadata as JSON in
+Once scanned, the metadata will (by default) be persisted as JSON in
a file in ``/etc/ceph/osd``. This ``JSON`` file will use the naming convention
-of: ``{OSD ID}-{OSD FSID}.json``. An OSD with an id of 1, and an FSID like
-``86ebd829-1405-43d3-8fd6-4cbc9b6ecf96`` the absolute path of the file would
+of ``<OSD ID>-<OSD FSID>.json``. For an OSD with an ID of 1 and an FSID like
+``86ebd829-1405-43d3-8fd6-4cbc9b6ecf96``, the absolute path of the file would
be::
/etc/ceph/osd/1-86ebd829-1405-43d3-8fd6-4cbc9b6ecf96.json
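The naming convention above can be expressed as a small sketch; the helper name is hypothetical, while the ``/etc/ceph/osd/<id>-<fsid>.json`` pattern comes from the text:

```python
def scan_metadata_path(osd_id: int, osd_fsid: str) -> str:
    # Convention from the text: /etc/ceph/osd/<OSD ID>-<OSD FSID>.json
    return f"/etc/ceph/osd/{osd_id}-{osd_fsid}.json"

print(scan_metadata_path(1, "86ebd829-1405-43d3-8fd6-4cbc9b6ecf96"))
# /etc/ceph/osd/1-86ebd829-1405-43d3-8fd6-4cbc9b6ecf96.json
```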
The ``scan`` subcommand will refuse to write to this file if it already exists.
If overwriting the contents is needed, the ``--force`` flag must be used::
- ceph-volume simple scan --force {path}
+ ceph-volume simple scan --force <path>
If there is no need to persist the ``JSON`` metadata, there is support to send
the contents to ``stdout`` (no file will be written)::
- ceph-volume simple scan --stdout {path}
+ ceph-volume simple scan --stdout <path>
.. _ceph-volume-simple-scan-directory:
-----------------
Using this command without providing an OSD directory or device will scan the
directories of any currently running OSDs. If a running OSD was not created
-by ceph-disk it will be ignored and not scanned.
+by ceph-disk, it will be ignored and not scanned.
To scan all running ceph-disk OSDs, the command would look like::
For a directory like ``/var/lib/ceph/osd/ceph-1``, the command could look
like::
- ceph-volume simple scan /var/lib/ceph/osd/ceph1
+ ceph-volume simple scan /var/lib/ceph/osd/ceph-1
.. _ceph-volume-simple-scan-device:
**must be** the data partition of the OSD.
As long as the data partition of the OSD is being passed in as an argument, the
-sub-command can scan its contents.
+subcommand can scan its contents.
In the case where the device is already mounted, the tool can detect this
scenario and capture file contents from that directory.
-The contents of the JSON object is very simple. The scan not only will persist
+The contents of the JSON object are very simple. The scan will not only persist
information from the special OSD files and their contents, but will also
validate paths and device UUIDs. Unlike what ``ceph-disk`` would do, by storing
-them in ``{device type}_uuid`` files, the tool will persist them as part of the
+them in ``<device type>_uuid`` files, the tool will persist them as part of the
device type key.
For example, a ``block.db`` device would look something like::
This duplication is in place because the tool is trying to ensure the
following:
-# Support OSDs that may not have ceph-disk special files
-# Check the most up-to-date information on the device, by querying against LVM
-and ``blkid``
-# Support both logical volumes and GPT devices
+#. Support OSDs that may not have ceph-disk special files
+#. Check the most up-to-date information on the device, by querying against LVM
+ and ``blkid``
+#. Support both logical volumes and GPT devices
-This is a sample ``JSON`` metadata, from an OSD that is using ``bluestore``::
+This is sample ``JSON`` metadata from an OSD that is using BlueStore::
{
"active": "ok",
systemd
=======
Upon startup, it will identify the logical volume by loading the JSON file in
-``/etc/ceph/osd/{id}-{uuid}.json`` corresponding to the instance name of the
+``/etc/ceph/osd/<id>-<uuid>.json`` corresponding to the instance name of the
systemd unit.
-After identifying the correct volume it will then proceed to mount it by using
+After identifying the correct volume, it will then proceed to mount it by using
the OSD destination conventions, that is::
- /var/lib/ceph/osd/{cluster name}-{osd id}
+ /var/lib/ceph/osd/<cluster name>-<osd id>
-For our example OSD with an id of ``0``, that means the identified device will
+For our example OSD with an ID of ``0``, that means the identified device will
be mounted at::
systemctl start ceph-osd@0
The systemd portion of this process is handled by the ``ceph-volume simple
-trigger`` sub-command, which is only in charge of parsing metadata coming from
+trigger`` subcommand, which is only in charge of parsing metadata coming from
systemd and startup, and then dispatching to ``ceph-volume simple activate`` which
would proceed with activation.
=======
As part of the activation process (either with :ref:`ceph-volume-lvm-activate`
or :ref:`ceph-volume-simple-activate`), systemd units will get enabled that
-will use the OSD id and uuid as part of their name. These units will be run
+will use the OSD ID and UUID as part of their name. These units will be run
when the system boots, and will proceed to activate their corresponding
-volumes via their sub-command implementation.
+volumes via their subcommand implementation.
-The API for activation is a bit loose, it only requires two parts: the
+The API for activation is a bit loose. It only requires two parts: the
subcommand to use and any extra meta information separated by a dash. This
convention makes the units look like::
- ceph-volume@{command}-{extra metadata}
+ ceph-volume@<command>-<extra metadata>
The *extra metadata* can be anything needed that the subcommand implementing
the processing might need. In the case of :ref:`ceph-volume-lvm` and
-:ref:`ceph-volume-simple`, both look to consume the :term:`OSD id` and :term:`OSD uuid`,
-but this is not a hard requirement, it is just how the sub-commands are
+:ref:`ceph-volume-simple`, both look to consume the :term:`OSD ID` and :term:`OSD UUID`,
+but this is not a hard requirement; it is just how the subcommands are
implemented.
-Both the command and extra metadata gets persisted by systemd as part of the
+Both the command and extra metadata get persisted by systemd as part of the
-*"instance name"* of the unit. For example an OSD with an ID of 0, for the
+*"instance name"* of the unit. For example, an OSD with an ID of 0 for the
-``lvm`` sub-command would look like::
+``lvm`` subcommand would look like::
systemctl enable ceph-volume@lvm-0-0A3E1ED2-DA8A-4F0E-AA95-61DEC71768D6
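The loose ``ceph-volume@<command>-<extra metadata>`` convention can be parsed with a sketch like the one below; the function name is hypothetical, and the split-on-first-dash rule follows the two-part description in the text:

```python
def parse_instance(instance: str):
    # First part is the subcommand; everything after the first dash is
    # the extra metadata that subcommand consumes (e.g. "<id>-<uuid>").
    command, _, extra = instance.partition("-")
    return command, extra

cmd, extra = parse_instance("lvm-0-0A3E1ED2-DA8A-4F0E-AA95-61DEC71768D6")
# cmd == "lvm"; extra carries the OSD ID and UUID
```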
Implements the functionality needed to deploy OSDs from the ``zfs`` subcommand:
``ceph-volume zfs``
-The current implementation only works for ZFS on FreeBSD
+The current implementation only works for ZFS on FreeBSD.
**Command Line Subcommands**
**Internal functionality**
There are other aspects of the ``zfs`` subcommand that are internal and not
-exposed to the user, these sections explain how these pieces work together,
+exposed to the user. These sections explain how these pieces work together,
clarifying the workflows of the tool.
:ref:`zfs <ceph-volume-zfs-api>`
``inventory``
=============
-The ``inventory`` subcommand queries a host's disc inventory through GEOM and provides
+The ``inventory`` subcommand queries a host's disk inventory through GEOM and provides
hardware information and metadata on every physical device.
This only works on a FreeBSD platform.
-By default the command returns a short, human-readable report of all physical disks.
+By default, the command returns a short, human-readable report of all physical disks.
-For programmatic consumption of this report pass ``--format json`` to generate a
+For programmatic consumption of this report, pass ``--format json`` to generate a
JSON formatted report. This report includes extensive information on the
physical drives such as disk metadata (like model and size), logical volumes
-and whether they are used by ceph, and if the disk is usable by ceph and
+and whether they are used by Ceph, and if the disk is usable by Ceph and
reasons why not.
A device path can be specified to report extensive information on a device in
-both plain and json format.
+both plain and JSON format.