From 53032041c324e08fe02e46b2d42acce7ee509c7d Mon Sep 17 00:00:00 2001 From: Ville Ojamo <14869000+bluikko@users.noreply.github.com> Date: Tue, 31 Mar 2026 13:51:15 +0700 Subject: [PATCH] doc/ceph-volume: Fix spelling etc errors Low-hanging spelling, punctuation, and capitalization errors. Ignore style and other more complex issues. Use angle brackets consistently for value placeholders. Signed-off-by: Ville Ojamo --- doc/ceph-volume/drive-group.rst | 2 +- doc/ceph-volume/index.rst | 10 ++++----- doc/ceph-volume/intro.rst | 10 ++++----- doc/ceph-volume/inventory.rst | 8 +++---- doc/ceph-volume/lvm/activate.rst | 26 +++++++++++------------ doc/ceph-volume/lvm/batch.rst | 22 +++++++++---------- doc/ceph-volume/lvm/create.rst | 4 ++-- doc/ceph-volume/lvm/encryption.rst | 8 +++---- doc/ceph-volume/lvm/index.rst | 2 +- doc/ceph-volume/lvm/list.rst | 12 +++++------ doc/ceph-volume/lvm/migrate.rst | 6 +++--- doc/ceph-volume/lvm/newdb.rst | 12 +++++------ doc/ceph-volume/lvm/prepare.rst | 32 ++++++++++++++-------------- doc/ceph-volume/lvm/scan.rst | 4 ++-- doc/ceph-volume/lvm/systemd.rst | 8 +++---- doc/ceph-volume/lvm/zap.rst | 26 +++++++++++------------ doc/ceph-volume/simple/activate.rst | 33 ++++++++++++++--------------- doc/ceph-volume/simple/index.rst | 4 ++-- doc/ceph-volume/simple/scan.rst | 28 ++++++++++++------------ doc/ceph-volume/simple/systemd.rst | 8 +++---- doc/ceph-volume/systemd.rst | 16 +++++++------- doc/ceph-volume/zfs/index.rst | 4 ++-- doc/ceph-volume/zfs/inventory.rst | 10 ++++----- 23 files changed, 147 insertions(+), 148 deletions(-) diff --git a/doc/ceph-volume/drive-group.rst b/doc/ceph-volume/drive-group.rst index 4296ba7e9fca..3c3441d8b7a3 100644 --- a/doc/ceph-volume/drive-group.rst +++ b/doc/ceph-volume/drive-group.rst @@ -3,7 +3,7 @@ ``drive-group`` =============== The drive-group subcommand allows for passing :ref:`drivegroups` specifications -straight to ceph-volume as json. ceph-volume will then attempt to deploy this +straight to ceph-volume as JSON. ceph-volume will then attempt to deploy this drive groups via the batch subcommand. The specification can be passed via a file, string argument or on stdin. diff --git a/doc/ceph-volume/index.rst b/doc/ceph-volume/index.rst index 9271bc2a0e96..722fceabd1fc 100644 --- a/doc/ceph-volume/index.rst +++ b/doc/ceph-volume/index.rst @@ -2,12 +2,12 @@ ceph-volume =========== -Deploy OSDs with different device technologies like lvm or physical disks using +Deploy OSDs with different device technologies like LVM or physical disks using pluggable tools (:doc:`lvm/index` itself is treated like a plugin) and trying to -follow a predictable, and robust way of preparing, activating, and starting OSDs. +follow a predictable and robust way of preparing, activating, and starting OSDs. :ref:`Overview ` | -:ref:`Plugin Guide ` | +:ref:`Plugin Guide ` **Command Line Subcommands** @@ -24,7 +24,7 @@ that may have been deployed with ``ceph-disk``. **Node inventory** The :ref:`ceph-volume-inventory` subcommand provides information and metadata -about a nodes physical disk inventory. +about a node's physical disk inventory. Migrating @@ -45,7 +45,7 @@ ceph-disk replaced? ` section. New deployments ^^^^^^^^^^^^^^^ -For new deployments, :ref:`ceph-volume-lvm` is recommended, it can use any +For new deployments, :ref:`ceph-volume-lvm` is recommended. It can use any logical volume as input for data OSDs, or it can setup a minimal/naive logical volume from a device. 
diff --git a/doc/ceph-volume/intro.rst b/doc/ceph-volume/intro.rst index c36f12a776dd..8416e0c9d8d0 100644 --- a/doc/ceph-volume/intro.rst +++ b/doc/ceph-volume/intro.rst @@ -8,7 +8,7 @@ preparing, activating, and creating OSDs. It deviates from ``ceph-disk`` by not interacting or relying on the udev rules that come installed for Ceph. These rules allow automatic detection of -previously setup devices that are in turn fed into ``ceph-disk`` to activate +previously set up devices that are in turn fed into ``ceph-disk`` to activate them. .. _ceph-disk-replaced: @@ -16,7 +16,7 @@ them. Replacing ``ceph-disk`` ----------------------- The ``ceph-disk`` tool was created at a time when the project was required to -support many different types of init systems (upstart, sysvinit, etc...) while +support many different types of init systems (upstart, sysvinit, etc.) while being able to discover devices. This caused the tool to concentrate initially (and exclusively afterwards) on GPT partitions. Specifically on GPT GUIDs, which were used to label devices in a unique way to answer questions like: @@ -35,7 +35,7 @@ a node. It was hard to debug, or even replicate these problems given the asynchronous behavior of ``UDEV``. -Since the world-view of ``ceph-disk`` had to be GPT partitions exclusively, it meant +Since the worldview of ``ceph-disk`` had to be GPT partitions exclusively, it meant that it couldn't work with other technologies like LVM, or similar device mapper devices. It was ultimately decided to create something modular, starting with LVM support, and the ability to expand on other technologies as needed. @@ -66,13 +66,13 @@ Modularity there are going to be lots of ways that people provision the hardware devices that we need to consider. There are already two: legacy ceph-disk devices that are still in use and have GPT partitions (handled by :ref:`ceph-volume-simple`), -and lvm. SPDK devices where we manage NVMe devices directly from userspace are +and LVM. SPDK devices where we manage NVMe devices directly from userspace are on the immediate horizon, where LVM won't work there since the kernel isn't involved at all. ``ceph-volume lvm`` ------------------- -By making use of :term:`LVM tags`, the :ref:`ceph-volume-lvm` sub-command is +By making use of :term:`LVM tags`, the :ref:`ceph-volume-lvm` subcommand is able to store and later re-discover and query devices associated with OSDs so that they can later be activated. diff --git a/doc/ceph-volume/inventory.rst b/doc/ceph-volume/inventory.rst index edb1fd20501f..071bbb82ff83 100644 --- a/doc/ceph-volume/inventory.rst +++ b/doc/ceph-volume/inventory.rst @@ -2,16 +2,16 @@ ``inventory`` ============= -The ``inventory`` subcommand queries a host's disc inventory and provides +The ``inventory`` subcommand queries a host's disk inventory and provides hardware information and metadata on every physical device. By default the command returns a short, human-readable report of all physical disks. -For programmatic consumption of this report pass ``--format json`` to generate a +For programmatic consumption of this report, pass ``--format json`` to generate a JSON formatted report. This report includes extensive information on the physical drives such as disk metadata (like model and size), logical volumes -and whether they are used by ceph, and if the disk is usable by ceph and +and whether they are used by Ceph, and if the disk is usable by Ceph and reasons why not. 
A device path can be specified to report extensive information on a device in -both plain and json format. +both plain and JSON format. diff --git a/doc/ceph-volume/lvm/activate.rst b/doc/ceph-volume/lvm/activate.rst index fe34ecb713a9..78be5eab3d82 100644 --- a/doc/ceph-volume/lvm/activate.rst +++ b/doc/ceph-volume/lvm/activate.rst @@ -20,7 +20,7 @@ For information about OSDs deployed by cephadm, refer to New OSDs -------- -To activate newly prepared OSDs both the :term:`OSD id` and :term:`OSD uuid` +To activate newly prepared OSDs, both the :term:`OSD ID` and :term:`OSD UUID` need to be supplied. For example:: ceph-volume lvm activate --bluestore 0 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8 @@ -44,11 +44,11 @@ and will activate them one by one. If any of the OSDs are already running, it will report them in the command output and skip them, making it safe to rerun (idempotent). -requiring uuids +Requiring UUIDs ^^^^^^^^^^^^^^^ -The :term:`OSD uuid` is being required as an extra step to ensure that the +The :term:`OSD UUID` is being required as an extra step to ensure that the right OSD is being activated. It is entirely possible that a previous OSD with -the same id exists and would end up activating the incorrect one. +the same ID exists and would end up activating the incorrect one. dmcrypt @@ -63,19 +63,19 @@ Discovery With OSDs previously created by ``ceph-volume``, a *discovery* process is performed using :term:`LVM tags` to enable the systemd units. -The systemd unit will capture the :term:`OSD id` and :term:`OSD uuid` and +The systemd unit will capture the :term:`OSD ID` and :term:`OSD UUID` and persist it. Internally, the activation will enable it like:: - systemctl enable ceph-volume@lvm-$id-$uuid + systemctl enable ceph-volume@lvm-<id>-<uuid> For example:: systemctl enable ceph-volume@lvm-0-8715BEB4-15C5-49DE-BA6F-401086EC7B41 -Would start the discovery process for the OSD with an id of ``0`` and a UUID of +Would start the discovery process for the OSD with an ID of ``0`` and a UUID of ``8715BEB4-15C5-49DE-BA6F-401086EC7B41``. -.. note:: for more details on the systemd workflow see :ref:`ceph-volume-lvm-systemd` +.. note:: For more details on the systemd workflow, see :ref:`ceph-volume-lvm-systemd`. The systemd unit will look for the matching OSD device, and by looking at its :term:`LVM tags` will proceed to: @@ -93,19 +93,19 @@ The systemd unit will look for the matching OSD device, and by looking at its Existing OSDs ------------- For existing OSDs that have been deployed with ``ceph-disk``, they need to be -scanned and activated :ref:`using the simple sub-command <ceph-volume-simple>`. +scanned and activated :ref:`using the simple subcommand <ceph-volume-simple>`. If a different tool was used then the only way to port them over to the new mechanism is to prepare them again (losing data). See :ref:`ceph-volume-lvm-existing-osds` for details on how to proceed. Summary ------- -To recap the ``activate`` process for :term:`bluestore`: -#. Require both :term:`OSD id` and :term:`OSD uuid` -#. Enable the system unit with matching id and uuid +To recap the ``activate`` process for :term:`BlueStore`: +#. Require both :term:`OSD ID` and :term:`OSD UUID` +#. Enable the system unit with matching ID and UUID #. Create the ``tmpfs`` mount at the OSD directory in - ``/var/lib/ceph/osd/$cluster-$id/`` + ``/var/lib/ceph/osd/<cluster>-<id>/`` #. Recreate all the files needed with ``ceph-bluestore-tool prime-osd-dir`` by pointing it to the OSD ``block`` device. #.
The systemd unit will ensure all devices are ready and linked diff --git a/doc/ceph-volume/lvm/batch.rst b/doc/ceph-volume/lvm/batch.rst index 2114518bf56f..624da513690a 100644 --- a/doc/ceph-volume/lvm/batch.rst +++ b/doc/ceph-volume/lvm/batch.rst @@ -7,13 +7,13 @@ an input of devices. The ``batch`` subcommand is closely related to drive-groups. One individual drive group specification translates to a single ``batch`` invocation. -The subcommand is based to :ref:`ceph-volume-lvm-create`, and will use the very +The subcommand is based on :ref:`ceph-volume-lvm-create`, and will use the very same code path. All ``batch`` does is to calculate the appropriate sizes of all volumes and skip over already created volumes. All the features that ``ceph-volume lvm create`` supports, like ``dmcrypt``, -avoiding ``systemd`` units from starting, defining bluestore, -is supported. +avoiding ``systemd`` units from starting, and defining BlueStore, +are supported. .. _ceph-volume-lvm-batch_auto: @@ -21,15 +21,15 @@ is supported. Automatic sorting of disks -------------------------- If ``batch`` receives only a single list of data devices and other options are -passed , ``ceph-volume`` will auto-sort disks by its rotational +passed, ``ceph-volume`` will auto-sort disks by its rotational property and use non-rotating disks for ``block.db`` or ``journal`` depending on the objectstore used. If all devices are to be used for standalone OSDs, no matter if rotating or solid state, pass ``--no-auto``. -For example assuming :term:`bluestore` is used and ``--no-auto`` is not passed, +For example assuming :term:`BlueStore` is used and ``--no-auto`` is not passed, the deprecated behavior would deploy the following, depending on the devices passed: -#. Devices are all spinning HDDs: 1 OSD is created per device +#. Devices are all HDDs: 1 OSD is created per device #. Devices are all SSDs: 2 OSDs are created per device #. Devices are a mix of HDDs and SSDs: data is placed on the spinning device, the ``block.db`` is created on the SSD, as large as possible. @@ -37,11 +37,11 @@ passed: .. note:: Although operations in ``ceph-volume lvm create`` allow usage of ``block.wal`` it isn't supported with the ``auto`` behavior. -This default auto-sorting behavior is now DEPRECATED and will be changed in future releases. -Instead devices are not automatically sorted unless the ``--auto`` option is passed +This default auto-sorting behavior is now **deprecated** and will be changed in future releases. +Instead, devices are not automatically sorted unless the ``--auto`` option is passed. -It is recommended to make use of the explicit device lists for ``block.db``, - ``block.wal`` and ``journal``. +It is recommended to make use of the explicit device lists +for ``block.db``, ``block.wal`` and ``journal``. .. _ceph-volume-lvm-batch_bluestore: @@ -58,7 +58,7 @@ Consider the following invocation:: $ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1 This will deploy three OSDs with external ``db`` and ``wal`` volumes on -an NVME device. +an NVMe device. 
Pretty reporting ---------------- diff --git a/doc/ceph-volume/lvm/create.rst b/doc/ceph-volume/lvm/create.rst index 17fe9fa5ab2a..ea915c817e28 100644 --- a/doc/ceph-volume/lvm/create.rst +++ b/doc/ceph-volume/lvm/create.rst @@ -2,10 +2,10 @@ ``create`` =========== -This subcommand wraps the two-step process to provision a new osd (calling +This subcommand wraps the two-step process to provision a new OSD (calling ``prepare`` first and then ``activate``) into a single one. The reason to prefer ``prepare`` and then ``activate`` is to gradually -introduce new OSDs into a cluster, and avoiding large amounts of data being +introduce new OSDs into a cluster, avoiding large amounts of data being rebalanced. The single-call process unifies exactly what :ref:`ceph-volume-lvm-prepare` and diff --git a/doc/ceph-volume/lvm/encryption.rst b/doc/ceph-volume/lvm/encryption.rst index 4564a7ffed84..0f069ac02022 100644 --- a/doc/ceph-volume/lvm/encryption.rst +++ b/doc/ceph-volume/lvm/encryption.rst @@ -6,7 +6,7 @@ Encryption Logical volumes can be encrypted using ``dmcrypt`` by specifying the ``--dmcrypt`` flag when creating OSDs. When using LVM, logical volumes can be encrypted in different ways. ``ceph-volume`` does not offer as many options as -LVM does, but it encrypts logical volumes in a way that is consistent and +LVM does, but it encrypts logical volumes in a way that is consistent and robust. In this case, ``ceph-volume lvm`` follows this constraint: @@ -21,7 +21,7 @@ implement but not widely available in all Linux distributions supported by Ceph. .. note:: Version 1 of LUKS is referred to in this documentation as "LUKS". - Version 2 is of LUKS is referred to in this documentation as "LUKS2". + Version 2 of LUKS is referred to in this documentation as "LUKS2". LUKS on LVM @@ -62,8 +62,8 @@ compatibility and prevent ceph-disk from breaking, ceph-volume uses the same naming convention *although it does not make sense for the new encryption workflow*. -After the common steps of setting up the OSD during the "prepare stage" ( -with :term:`bluestore`), the logical volume is left ready +After the common steps of setting up the OSD during the "prepare stage" +(with :term:`BlueStore`), the logical volume is left ready to be activated, regardless of the state of the device (encrypted or decrypted). diff --git a/doc/ceph-volume/lvm/index.rst b/doc/ceph-volume/lvm/index.rst index 962e51a51c68..c951841ef43f 100644 --- a/doc/ceph-volume/lvm/index.rst +++ b/doc/ceph-volume/lvm/index.rst @@ -27,7 +27,7 @@ Implements the functionality needed to deploy OSDs from the ``lvm`` subcommand: **Internal functionality** There are other aspects of the ``lvm`` subcommand that are internal and not -exposed to the user, these sections explain how these pieces work together, +exposed to the user. These sections explain how these pieces work together, clarifying the workflows of the tool. :ref:`Systemd Units ` | diff --git a/doc/ceph-volume/lvm/list.rst b/doc/ceph-volume/lvm/list.rst index 6fb8fe84d27b..55a7ccfc2608 100644 --- a/doc/ceph-volume/lvm/list.rst +++ b/doc/ceph-volume/lvm/list.rst @@ -21,7 +21,7 @@ When no positional arguments are used, a full reporting will be presented. This means that all devices and logical volumes found in the system will be displayed. -Full ``pretty`` reporting for two OSDs, one with a lv as a journal, and another +Full ``pretty`` reporting for two OSDs, one with a LV as a journal, and another one with a physical device may look similar to: .. 
prompt:: bash # @@ -81,8 +81,8 @@ to be part of a logical volume, the value will be comma separated when using ``pretty``, but an array when using ``json``. .. note:: Tags are displayed in a readable format. The ``osd id`` key is stored - as a ``ceph.osd_id`` tag. For more information on lvm tag conventions - see :ref:`ceph-volume-lvm-tag-api` + as a ``ceph.osd_id`` tag. For more information on LVM tag conventions, + see :ref:`ceph-volume-lvm-tag-api`. Single Reporting ---------------- @@ -115,8 +115,8 @@ can be listed in the following way: .. note:: Tags are displayed in a readable format. The ``osd id`` key is stored - as a ``ceph.osd_id`` tag. For more information on lvm tag conventions - see :ref:`ceph-volume-lvm-tag-api` + as a ``ceph.osd_id`` tag. For more information on LVM tag conventions, + see :ref:`ceph-volume-lvm-tag-api`. For plain disks, the full path to the device is required. For example, for @@ -188,7 +188,7 @@ that may be in use haven't changed naming. It is possible that non-persistent devices like ``/dev/sda1`` could change to ``/dev/sdb1``. The detection is possible because the ``PARTUUID`` is stored as part of the -metadata in the logical volume for the data lv. Even in the case of a journal +metadata in the logical volume for the data LV. Even in the case of a journal that is a physical device, this information is still stored on the data logical volume associated with it. diff --git a/doc/ceph-volume/lvm/migrate.rst b/doc/ceph-volume/lvm/migrate.rst index 983d2e79716b..37c6f5039cf1 100644 --- a/doc/ceph-volume/lvm/migrate.rst +++ b/doc/ceph-volume/lvm/migrate.rst @@ -3,15 +3,15 @@ ``migrate`` =========== -Moves BlueFS data from source volume(s) to the target one, source volumes +Moves BlueFS data from source volume(s) to the target one. Source volumes (except the main, i.e. data or block one) are removed on success. -LVM volumes are permitted for Target only, both already attached or new one. +LVM volumes are permitted for target only, both already attached or new one. In the latter case it is attached to the OSD replacing one of the source devices. -Following replacement rules apply (in the order of precedence, stop +Following replacement rules apply (in the order of precedence; stop on the first match): - if source list has DB volume - target device replaces it. diff --git a/doc/ceph-volume/lvm/newdb.rst b/doc/ceph-volume/lvm/newdb.rst index a8136c9886bb..1682f0341e06 100644 --- a/doc/ceph-volume/lvm/newdb.rst +++ b/doc/ceph-volume/lvm/newdb.rst @@ -27,30 +27,30 @@ following steps: .. prompt:: bash # - lvextend -l ${size} ${lv}/${db} ${ssd_dev} + lvextend -l <size> <lv>/<db> <ssd_dev> #. Stop the OSD: .. prompt:: bash # - cephadm unit --fsid $cid --name osd.${osd} stop + cephadm unit --fsid <cid> --name osd.<osd> stop #. Run the ``bluefs-bdev-expand`` command: .. prompt:: bash # - cephadm shell --fsid $cid --name osd.${osd} -- ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-${osd} + cephadm shell --fsid <cid> --name osd.<osd> -- ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-<osd> #. Run the ``bluefs-bdev-migrate`` command: .. prompt:: bash # - cephadm shell --fsid $cid --name osd.${osd} -- ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-${osd} --devs-source /var/lib/ceph/osd/ceph-${osd}/block --dev-target /var/lib/ceph/osd/ceph-${osd}/block.db + cephadm shell --fsid <cid> --name osd.<osd> -- ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-<osd> --devs-source /var/lib/ceph/osd/ceph-<osd>/block --dev-target /var/lib/ceph/osd/ceph-<osd>/block.db #.
Restart the OSD: .. prompt:: bash # - cephadm unit --fsid $cid --name osd.${osd} stop + cephadm unit --fsid <cid> --name osd.<osd> start -.. note:: *The above procedure was developed by Chris Dunlop on the [ceph-users] mailing list, and can be seen in its original context here:* `[ceph-users] Re: Fixing BlueFS spillover (pacific 16.2.14) `_ +.. note:: *The above procedure was developed by Chris Dunlop on the [ceph-users] mailing list, and can be seen in its original context here:* `[ceph-users] Re: Fixing BlueFS spillover (pacific 16.2.14) `_. diff --git a/doc/ceph-volume/lvm/prepare.rst b/doc/ceph-volume/lvm/prepare.rst index 5fc3662611fa..7682bed6cd38 100644 --- a/doc/ceph-volume/lvm/prepare.rst +++ b/doc/ceph-volume/lvm/prepare.rst @@ -25,7 +25,7 @@ the backend, which can be done by using the following flags and arguments: ``bluestore`` ------------- -:term:`Bluestore` is the default backend for new OSDs. Bluestore +:term:`BlueStore` is the default backend for new OSDs. BlueStore supports the following configurations: * a block device, a block.wal device, and a block.db device @@ -70,7 +70,7 @@ Starting with Ceph Squid, you can opt for TPM2 token enrollment for the created If a ``block.db`` device or a ``block.wal`` device is needed, it can be specified with ``--block.db`` or ``--block.wal``. These can be physical devices, partitions, or logical volumes. ``block.db`` and ``block.wal`` are -optional for bluestore. +optional for BlueStore. For both ``block.db`` and ``block.wal``, partitions can be used as-is, and therefore are not made into logical volumes. @@ -102,10 +102,10 @@ directory looks like this: In the above case, a device was used for ``block``, so ``ceph-volume`` created a volume group and a logical volume using the following conventions: -* volume group name: ``ceph-{cluster fsid}`` (or if the volume group already - exists: ``ceph-{random uuid}``) +* volume group name: ``ceph-<cluster fsid>`` (or if the volume group already + exists: ``ceph-<random uuid>``) -* logical volume name: ``osd-block-{osd_fsid}`` +* logical volume name: ``osd-block-<osd_fsid>`` .. _ceph-volume-lvm-prepare_filestore: @@ -114,11 +114,11 @@ a volume group and a logical volume using the following conventions: ------------- .. warning:: Filestore has been deprecated in the Reef release and is no longer supported. -``Filestore`` is the OSD backend that prepares logical volumes for a -`filestore`-backed object-store OSD. +Filestore is the OSD backend that prepares logical volumes for a +filestore-backed object-store OSD. -``Filestore`` uses a logical volume to store OSD data and it uses +Filestore uses a logical volume to store OSD data and it uses physical devices, partitions, or logical volumes to store the journal. If a physical device is used to create a filestore backend, a logical volume will be created on that physical device. If the provided volume group's name begins @@ -196,7 +196,7 @@ To fetch the monmap by using the bootstrap key from the OSD, use this command: /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o \ /var/lib/ceph/osd/-/activate.monmap -To populate the OSD directory (which has already been mounted), use this ``ceph-osd`` command: +To populate the OSD directory (which has already been mounted), use this ``ceph-osd`` command: .. prompt:: bash # @@ -252,7 +252,7 @@ print``: Partition Table: gpt Disk Flags: -Now lets create a single partition, and verify later if ``blkid`` can find +Now let's create a single partition, and verify later if ``blkid`` can find a ``PARTUUID`` that is needed by ``ceph-volume``: ..
prompt:: bash # @@ -270,9 +270,9 @@ a ``PARTUUID`` that is needed by ``ceph-volume``: Existing OSDs ------------- For existing clusters that want to use this new system and have OSDs that are -already running there are a few things to take into account: +already running, there are a few things to take into account: -.. warning:: this process will forcefully format the data device, destroying +.. warning:: This process will forcefully format the data device, destroying existing data, if any. * OSD paths should follow this convention:: @@ -290,7 +290,7 @@ any data** in the OSD): ceph-volume lvm prepare --filestore --osd-id 0 --osd-fsid E3D291C1-E7BF-4984-9794-B60D9FA139CB -The command line tool will not contact the monitor to generate an OSD ID and +The command line tool will not contact the Monitor to generate an OSD ID and will format the LVM device in addition to storing the metadata on it so that it can be started later (for detailed metadata description see :ref:`ceph-volume-lvm-tags`). @@ -336,7 +336,7 @@ regardless of the type of volume (journal or data) or OSD objectstore: * ``osd_id`` * ``crush_device_class`` -For :term:`bluestore` these tags will be added: +For :term:`BlueStore`, these tags will be added: * ``block_device`` * ``block_uuid`` @@ -350,12 +350,12 @@ For :term:`bluestore` these tags will be added: Summary ------- -To recap the ``prepare`` process for :term:`bluestore`: +To recap the ``prepare`` process for :term:`BlueStore`: #. Accepts raw physical devices, partitions on physical devices or logical volumes as arguments. #. Creates logical volumes on any raw physical devices. #. Generate a UUID for the OSD -#. Ask the monitor get an OSD ID reusing the generated UUID +#. Ask the Monitor for an OSD ID reusing the generated UUID #. OSD data directory is created on a tmpfs mount. #. ``block``, ``block.wal``, and ``block.db`` are symlinked if defined. #. monmap is fetched for activation diff --git a/doc/ceph-volume/lvm/scan.rst b/doc/ceph-volume/lvm/scan.rst index aa9990f7199d..b5e2ce93408e 100644 --- a/doc/ceph-volume/lvm/scan.rst +++ b/doc/ceph-volume/lvm/scan.rst @@ -1,9 +1,9 @@ scan ==== -This sub-command will allow to discover Ceph volumes previously setup by the +This subcommand will allow to discover Ceph volumes previously set up by the tool by looking into the system's logical volumes and their tags. As part of the :ref:`ceph-volume-lvm-prepare` process, the logical volumes are assigned a few tags with important pieces of information. -.. note:: This sub-command is not yet implemented +.. note:: This subcommand is not yet implemented. diff --git a/doc/ceph-volume/lvm/systemd.rst b/doc/ceph-volume/lvm/systemd.rst index 30260de7e882..33a6c4378bdc 100644 --- a/doc/ceph-volume/lvm/systemd.rst +++ b/doc/ceph-volume/lvm/systemd.rst @@ -4,14 +4,14 @@ systemd ======= Upon startup, it will identify the logical volume using :term:`LVM tags`, finding a matching ID and later ensuring it is the right one with -the :term:`OSD uuid`. +the :term:`OSD UUID`. 
-After identifying the correct volume it will then proceed to mount it by using +After identifying the correct volume, it will then proceed to mount it by using the OSD destination conventions, that is:: /var/lib/ceph/osd/<cluster name>-<osd id> -For our example OSD with an id of ``0``, that means the identified device will +For our example OSD with an ID of ``0``, that means the identified device will be mounted at:: @@ -23,6 +23,6 @@ Once that process is complete, a call will be made to start the OSD:: systemctl start ceph-osd@0 The systemd portion of this process is handled by the ``ceph-volume lvm -trigger`` sub-command, which is only in charge of parsing metadata coming from +trigger`` subcommand, which is only in charge of parsing metadata coming from systemd and startup, and then dispatching to ``ceph-volume lvm activate`` which would proceed with activation. diff --git a/doc/ceph-volume/lvm/zap.rst b/doc/ceph-volume/lvm/zap.rst index e737fc38685a..bc70d9e0b673 100644 --- a/doc/ceph-volume/lvm/zap.rst +++ b/doc/ceph-volume/lvm/zap.rst @@ -3,19 +3,19 @@ ``zap`` ======= -This subcommand is used to zap lvs, partitions or raw devices that have been used -by ceph OSDs so that they may be reused. If given a path to a logical -volume it must be in the format of vg/lv. Any file systems present -on the given lv or partition will be removed and all data will be purged. +This subcommand is used to zap LVs, partitions, or raw devices that have been used +by Ceph OSDs so that they may be reused. If given a path to a logical +volume, it must be in the format of vg/lv. Any file systems present +on the given LV or partition will be removed and all data will be purged. -.. note:: The lv or partition will be kept intact. +.. note:: The LV or partition will be kept intact. -.. note:: If the logical volume, raw device or partition is being used for any ceph related - mount points they will be unmounted. +.. note:: If the logical volume, raw device, or partition is being used for any Ceph-related - mount points, they will be unmounted. +.. note:: If the logical volume, raw device, or partition is being used for any Ceph-related + mount points, they will be unmounted. Zapping a logical volume:: - ceph-volume lvm zap {vg name/lv name} + ceph-volume lvm zap <vg name/lv name> Zapping a partition:: @@ -23,20 +23,20 @@ Zapping a partition:: Removing Devices ---------------- -When zapping, and looking for full removal of the device (lv, vg, or partition) +When zapping and looking for full removal of the device (LV, VG, or partition), use the ``--destroy`` flag. A common use case is to simply deploy OSDs using a whole raw device. If you do so and then wish to reuse that device for another -OSD you must use the ``--destroy`` flag when zapping so that the vgs and lvs +OSD, you must use the ``--destroy`` flag when zapping so that the VGs and LVs that ceph-volume created on the raw device will be removed. -.. note:: Multiple devices can be accepted at once, to zap them all +.. note:: Multiple devices can be specified at once to zap them all.
-Zapping a raw device and destroying any vgs or lvs present:: +Zapping a raw device and destroying any VGs or LVs present:: ceph-volume lvm zap /dev/sdc --destroy -This action can be performed on partitions, and logical volumes as well:: +This action can be performed on partitions and logical volumes:: ceph-volume lvm zap /dev/sdc1 --destroy ceph-volume lvm zap osd-vg/data-lv --destroy diff --git a/doc/ceph-volume/simple/activate.rst b/doc/ceph-volume/simple/activate.rst index 8c7737162e8f..325036549145 100644 --- a/doc/ceph-volume/simple/activate.rst +++ b/doc/ceph-volume/simple/activate.rst @@ -2,19 +2,19 @@ ``activate`` ============ -Once :ref:`ceph-volume-simple-scan` has been completed, and all the metadata -captured for an OSD has been persisted to ``/etc/ceph/osd/{id}-{uuid}.json`` +Once :ref:`ceph-volume-simple-scan` has been completed and all the metadata +captured for an OSD has been persisted to ``/etc/ceph/osd/<id>-<uuid>.json``, the OSD is now ready to get "activated". This activation process **disables** all ``ceph-disk`` systemd units by masking -them, to prevent the UDEV/ceph-disk interaction that will attempt to start them +them, to prevent the udev/ceph-disk interaction that will attempt to start them up at boot time. The disabling of ``ceph-disk`` units is done only when calling ``ceph-volume simple activate`` directly, but is avoided when being called by systemd when the system is booting up. -The activation process requires using both the :term:`OSD id` and :term:`OSD uuid` +The activation process requires using both the :term:`OSD ID` and :term:`OSD UUID`. To activate parsed OSDs:: ceph-volume simple activate 0 6cc43680-4f6e-4feb-92ff-9c7ba204120e @@ -27,11 +27,11 @@ Alternatively, using a path to a JSON file directly is also possible:: ceph-volume simple activate --file /etc/ceph/osd/0-6cc43680-4f6e-4feb-92ff-9c7ba204120e.json -requiring uuids +Requiring UUIDs ^^^^^^^^^^^^^^^ -The :term:`OSD uuid` is being required as an extra step to ensure that the +The :term:`OSD UUID` is being required as an extra step to ensure that the right OSD is being activated. It is entirely possible that a previous OSD with -the same id exists and would end up activating the incorrect one. +the same ID exists and would end up activating the incorrect one. Discovery @@ -46,7 +46,7 @@ queried against ``lvs`` (the LVM tool to list logical volumes). This discovery process ensures that devices can be correctly detected even if they are repurposed into another system or if their name changes (as in the -case of non-persisting names like ``/dev/sda1``) +case of non-persisting names like ``/dev/sda1``). The JSON configuration file used to map what devices go to what OSD will then coordinate the mounting and symlinking as part of activation. @@ -54,26 +54,25 @@ coordinate the mounting and symlinking as part of activation. To ensure that the symlinks are always correct, if they exist in the OSD directory, the symlinks will be re-done. -A systemd unit will capture the :term:`OSD id` and :term:`OSD uuid` and +A systemd unit will capture the :term:`OSD ID` and :term:`OSD UUID` and persist it.
Internally, the activation will enable it like:: - systemctl enable ceph-volume@simple-$id-$uuid + systemctl enable ceph-volume@simple-<id>-<uuid> For example:: systemctl enable ceph-volume@simple-0-8715BEB4-15C5-49DE-BA6F-401086EC7B41 -Would start the discovery process for the OSD with an id of ``0`` and a UUID of +Would start the discovery process for the OSD with an ID of ``0`` and a UUID of ``8715BEB4-15C5-49DE-BA6F-401086EC7B41``. The systemd process will call out to activate passing the information needed to identify the OSD and its devices, and it will proceed to: -# mount the device in the corresponding location (by convention this is - ``/var/lib/ceph/osd/<cluster name>-<osd id>/``) +#. mount the device in the corresponding location (by convention this is + ``/var/lib/ceph/osd/<cluster name>-<osd id>/``) +#. ensure that all required devices are ready for that OSD and properly linked -# ensure that all required devices are ready for that OSD and properly linked. -The symbolic link will **always** be re-done to ensure that the correct device is linked. - -# start the ``ceph-osd@0`` systemd unit + The symbolic link will **always** be re-done to ensure that the correct device is linked. +#. start the ``ceph-osd@0`` systemd unit diff --git a/doc/ceph-volume/simple/index.rst b/doc/ceph-volume/simple/index.rst index 315dea99a106..8fb1c9bb4273 100644 --- a/doc/ceph-volume/simple/index.rst +++ b/doc/ceph-volume/simple/index.rst @@ -27,6 +27,6 @@ The scanning will infer everything that ``ceph-volume`` needs to start the OSD, so that when activation is needed, the OSD can start normally without getting interference from ``ceph-disk``. -As part of the activation process the systemd units for ``ceph-disk`` in charge -of reacting to ``udev`` events, are linked to ``/dev/null`` so that they are +As part of the activation process, the systemd units for ``ceph-disk`` in charge +of reacting to ``udev`` events are linked to ``/dev/null`` so that they are fully inactive. diff --git a/doc/ceph-volume/simple/scan.rst b/doc/ceph-volume/simple/scan.rst index 2749b14b64ac..172c65fb0d0a 100644 --- a/doc/ceph-volume/simple/scan.rst +++ b/doc/ceph-volume/simple/scan.rst @@ -7,14 +7,14 @@ so that ``ceph-volume`` can manage it without the need of any other startup workflows or tools (like ``udev`` or ``ceph-disk``). Encryption with LUKS or PLAIN formats is fully supported. -The command has the ability to inspect a running OSD, by inspecting the +The command has the ability to inspect a running OSD by inspecting the directory where the OSD data is stored, or by consuming the data partition. The command can also scan all running OSDs if no path or device is provided. Once scanned, information will (by default) persist the metadata as JSON in a file in ``/etc/ceph/osd``. This ``JSON`` file will use the naming convention -of: ``{OSD ID}-{OSD FSID}.json``. An OSD with an id of 1, and an FSID like -``86ebd829-1405-43d3-8fd6-4cbc9b6ecf96`` the absolute path of the file would +of: ``<OSD ID>-<OSD FSID>.json``. For an OSD with an ID of 1 and an FSID like +``86ebd829-1405-43d3-8fd6-4cbc9b6ecf96``, the absolute path of the file would be:: /etc/ceph/osd/1-86ebd829-1405-43d3-8fd6-4cbc9b6ecf96.json @@ -22,12 +22,12 @@ be:: The ``scan`` subcommand will refuse to write to this file if it already exists.
If overwriting the contents is needed, the ``--force`` flag must be used:: - ceph-volume simple scan --force {path} + ceph-volume simple scan --force <path> If there is no need to persist the ``JSON`` metadata, there is support to send the contents to ``stdout`` (no file will be written):: - ceph-volume simple scan --stdout {path} + ceph-volume simple scan --stdout <path> .. _ceph-volume-simple-scan-directory: @@ -36,7 +36,7 @@ Running OSDs scan ----------------- Using this command without providing an OSD directory or device will scan the directories of any currently running OSDs. If a running OSD was not created -by ceph-disk it will be ignored and not scanned. +by ceph-disk, it will be ignored and not scanned. To scan all running ceph-disk OSDs, the command would look like:: ceph-volume simple scan @@ -85,7 +85,7 @@ Would get stored as:: For a directory like ``/var/lib/ceph/osd/ceph-1``, the command could look like:: - ceph-volume simple scan /var/lib/ceph/osd/ceph1 + ceph-volume simple scan /var/lib/ceph/osd/ceph-1 .. _ceph-volume-simple-scan-device: @@ -99,7 +99,7 @@ still require a few files present. This means that the device to be scanned **must be** the data partition of the OSD. As long as the data partition of the OSD is being passed in as an argument, the -sub-command can scan its contents. +subcommand can scan its contents. In the case where the device is already mounted, the tool can detect this scenario and capture file contents from that directory. @@ -121,7 +121,7 @@ could look like:: The contents of the JSON object is very simple. The scan not only will persist information from the special OSD files and their contents, but will also validate paths and device UUIDs. Unlike what ``ceph-disk`` would do, by storing -them in ``{device type}_uuid`` files, the tool will persist them as part of the +them in ``<device type>_uuid`` files, the tool will persist them as part of the device type key. For example, a ``block.db`` device would look something like:: @@ -138,12 +138,12 @@ But it will also persist the ``ceph-disk`` special file generated, like so:: This duplication is in place because the tool is trying to ensure the following: -# Support OSDs that may not have ceph-disk special files -# Check the most up-to-date information on the device, by querying against LVM -and ``blkid`` -# Support both logical volumes and GPT devices +#. Support OSDs that may not have ceph-disk special files +#. Check the most up-to-date information on the device, by querying against LVM + and ``blkid`` +#. Support both logical volumes and GPT devices -This is a sample ``JSON`` metadata, from an OSD that is using ``bluestore``:: +This is a sample ``JSON`` metadata, from an OSD that is using BlueStore:: { "active": "ok", diff --git a/doc/ceph-volume/simple/systemd.rst b/doc/ceph-volume/simple/systemd.rst index aa5bebffe71c..32fbc486bd95 100644 --- a/doc/ceph-volume/simple/systemd.rst +++ b/doc/ceph-volume/simple/systemd.rst @@ -3,13 +3,13 @@ systemd ======= Upon startup, it will identify the logical volume by loading the JSON file in -``/etc/ceph/osd/{id}-{uuid}.json`` corresponding to the instance name of the +``/etc/ceph/osd/<id>-<uuid>.json`` corresponding to the instance name of the systemd unit.
-After identifying the correct volume it will then proceed to mount it by using +After identifying the correct volume, it will then proceed to mount it by using the OSD destination conventions, that is:: - /var/lib/ceph/osd/{cluster name}-{osd id} + /var/lib/ceph/osd/<cluster name>-<osd id> For our example OSD with an id of ``0``, that means the identified device will be mounted at:: @@ -23,6 +23,6 @@ Once that process is complete, a call will be made to start the OSD:: systemctl start ceph-osd@0 The systemd portion of this process is handled by the ``ceph-volume simple -trigger`` sub-command, which is only in charge of parsing metadata coming from +trigger`` subcommand, which is only in charge of parsing metadata coming from systemd and startup, and then dispatching to ``ceph-volume simple activate`` which would proceed with activation. diff --git a/doc/ceph-volume/systemd.rst b/doc/ceph-volume/systemd.rst index 5b5273c9cfd8..fd6b552be1ed 100644 --- a/doc/ceph-volume/systemd.rst +++ b/doc/ceph-volume/systemd.rst @@ -4,25 +4,25 @@ systemd ======= As part of the activation process (either with :ref:`ceph-volume-lvm-activate` or :ref:`ceph-volume-simple-activate`), systemd units will get enabled that -will use the OSD id and uuid as part of their name. These units will be run +will use the OSD ID and UUID as part of their name. These units will be run when the system boots, and will proceed to activate their corresponding -volumes via their sub-command implementation. +volumes via their subcommand implementation. -The API for activation is a bit loose, it only requires two parts: the +The API for activation is a bit loose. It only requires two parts: the subcommand to use and any extra meta information separated by a dash. This convention makes the units look like:: - ceph-volume@{command}-{extra metadata} + ceph-volume@<command>-<extra metadata> The *extra metadata* can be anything needed that the subcommand implementing the processing might need. In the case of :ref:`ceph-volume-lvm` and -:ref:`ceph-volume-simple`, both look to consume the :term:`OSD id` and :term:`OSD uuid`, +:ref:`ceph-volume-simple`, both look to consume the :term:`OSD ID` and :term:`OSD UUID`, -but this is not a hard requirement, it is just how the sub-commands are +but this is not a hard requirement, it is just how the subcommands are implemented. -Both the command and extra metadata gets persisted by systemd as part of the +Both the command and extra metadata get persisted by systemd as part of the *"instance name"* of the unit. For example an OSD with an ID of 0, for the -``lvm`` sub-command would look like:: +``lvm`` subcommand would look like:: systemctl enable ceph-volume@lvm-0-0A3E1ED2-DA8A-4F0E-AA95-61DEC71768D6 diff --git a/doc/ceph-volume/zfs/index.rst b/doc/ceph-volume/zfs/index.rst index c06228de91dc..5bf2217a5ae8 100644 --- a/doc/ceph-volume/zfs/index.rst +++ b/doc/ceph-volume/zfs/index.rst @@ -5,7 +5,7 @@ Implements the functionality needed to deploy OSDs from the ``zfs`` subcommand: ``ceph-volume zfs`` -The current implementation only works for ZFS on FreeBSD +The current implementation only works for ZFS on FreeBSD. **Command Line Subcommands** @@ -25,7 +25,7 @@ The current implementation only works for ZFS on FreeBSD **Internal functionality** There are other aspects of the ``zfs`` subcommand that are internal and not -exposed to the user, these sections explain how these pieces work together, +exposed to the user. These sections explain how these pieces work together, clarifying the workflows of the tool.
:ref:`zfs ` diff --git a/doc/ceph-volume/zfs/inventory.rst b/doc/ceph-volume/zfs/inventory.rst index fd00325b6a88..3266b226c42a 100644 --- a/doc/ceph-volume/zfs/inventory.rst +++ b/doc/ceph-volume/zfs/inventory.rst @@ -2,18 +2,18 @@ ``inventory`` ============= -The ``inventory`` subcommand queries a host's disc inventory through GEOM and provides +The ``inventory`` subcommand queries a host's disk inventory through GEOM and provides hardware information and metadata on every physical device. This only works on a FreeBSD platform. -By default the command returns a short, human-readable report of all physical disks. +By default, the command returns a short, human-readable report of all physical disks. -For programmatic consumption of this report pass ``--format json`` to generate a +For programmatic consumption of this report, pass ``--format json`` to generate a JSON formatted report. This report includes extensive information on the physical drives such as disk metadata (like model and size), logical volumes -and whether they are used by ceph, and if the disk is usable by ceph and +and whether they are used by Ceph, and if the disk is usable by Ceph and reasons why not. A device path can be specified to report extensive information on a device in -both plain and json format. +both plain and JSON format. -- 2.47.3