From f07f697786b528b50871e203e2d5c460b138bb0e Mon Sep 17 00:00:00 2001
From: Ville Ojamo <14869000+bluikko@users.noreply.github.com>
Date: Thu, 10 Apr 2025 15:09:11 +0700
Subject: [PATCH] doc/ceph-volume: Promptify commands and fix formatting

Use the more modern prompt block for CLI commands, fix missing newline
and messed up line breaks.
Also change existing prompts to all indent with same amount of spaces.

Signed-off-by: Ville Ojamo <14869000+bluikko@users.noreply.github.com>
---
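
Note, kept below the ``---`` separator so that ``git am`` drops it:
drive-group.rst, patched below, documents that the drive group
specification reaches ceph-volume as JSON via a file, a string argument
or stdin. A minimal sketch of such an invocation follows; the JSON keys
(``data_devices``, ``db_devices``, ``rotational``) are borrowed from the
drive group specification documentation, and the bare-stdin form is an
assumption based on that prose rather than verified ceph-volume
behaviour:

    # Illustrative sketch only. The spec keys and the stdin invocation
    # are assumptions from the docs, not tested ceph-volume output.
    cat <<'EOF' | ceph-volume drive-group
    {
        "data_devices": {"rotational": 1},
        "db_devices": {"rotational": 0}
    }
    EOF
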
 doc/ceph-volume/drive-group.rst |  6 ++-
 doc/ceph-volume/lvm/list.rst    | 29 ++++++++++----
 doc/ceph-volume/lvm/prepare.rst | 71 ++++++++++++++++++++++-----------
 3 files changed, 73 insertions(+), 33 deletions(-)

diff --git a/doc/ceph-volume/drive-group.rst b/doc/ceph-volume/drive-group.rst
index f9d1cf3c381c4..4296ba7e9fcad 100644
--- a/doc/ceph-volume/drive-group.rst
+++ b/doc/ceph-volume/drive-group.rst
@@ -7,6 +7,8 @@ straight to ceph-volume as json. ceph-volume will then attempt to deploy this
 drive groups via the batch subcommand.
 
 The specification can be passed via a file, string argument or on stdin.
-See the subcommand help for further details::
+See the subcommand help for further details:
 
-   # ceph-volume drive-group --help
+.. prompt:: bash #
+
+   ceph-volume drive-group --help
diff --git a/doc/ceph-volume/lvm/list.rst b/doc/ceph-volume/lvm/list.rst
index 718154b102194..6fb8fe84d27ba 100644
--- a/doc/ceph-volume/lvm/list.rst
+++ b/doc/ceph-volume/lvm/list.rst
@@ -22,10 +22,13 @@ means that all devices and logical volumes found in the system will be
 displayed.
 
 Full ``pretty`` reporting for two OSDs, one with a lv as a journal, and another
-one with a physical device may look similar to::
+one with a physical device may look similar to:
 
-   # ceph-volume lvm list
+.. prompt:: bash #
 
+   ceph-volume lvm list
+
+::
 
     ====== osd.1 =======
 
@@ -88,10 +91,13 @@ Single reporting can consume both devices and logical volumes as input
 name as well as the logical volume name.
 
 For example the ``data-lv2`` logical volume, in the ``test_group`` volume group
-can be listed in the following way::
+can be listed in the following way:
+
+.. prompt:: bash #
 
-   # ceph-volume lvm list test_group/data-lv2
+   ceph-volume lvm list test_group/data-lv2
 
+::
 
     ====== osd.1 =======
 
@@ -114,11 +120,13 @@ can be listed in the following way::
 
 
 For plain disks, the full path to the device is required. For example, for
-a device like ``/dev/sdd1`` it can look like::
+a device like ``/dev/sdd1`` it can look like:
 
+.. prompt:: bash #
 
-   # ceph-volume lvm list /dev/sdd1
+   ceph-volume lvm list /dev/sdd1
 
+::
 
     ====== osd.0 =======
 
@@ -138,9 +146,14 @@ information is presented as-is. Full output as well as single devices can be
 listed.
 
 For brevity, this is how a single logical volume would look with ``json``
-output (note how tags aren't modified)::
+output (note how tags aren't modified):
+
+.. prompt:: bash #
+
+   ceph-volume lvm list --format=json test_group/data-lv1
+
+::
 
-   # ceph-volume lvm list --format=json test_group/data-lv1
     {
         "0": [
             {
diff --git a/doc/ceph-volume/lvm/prepare.rst b/doc/ceph-volume/lvm/prepare.rst
index c7dae83d06272..5fc3662611fab 100644
--- a/doc/ceph-volume/lvm/prepare.rst
+++ b/doc/ceph-volume/lvm/prepare.rst
@@ -47,25 +47,25 @@ case it looks like:
 
 .. prompt:: bash #
 
-    ceph-volume lvm prepare --bluestore --data vg/lv
+   ceph-volume lvm prepare --bluestore --data vg/lv
 
 A raw device can be specified in the same way:
 
 .. prompt:: bash #
 
-    ceph-volume lvm prepare --bluestore --data /path/to/device
+   ceph-volume lvm prepare --bluestore --data /path/to/device
 
 For enabling :ref:`encryption <ceph-volume-lvm-encryption>`, the ``--dmcrypt`` flag is required:
 
 .. prompt:: bash #
 
-    ceph-volume lvm prepare --bluestore --dmcrypt --data vg/lv
+   ceph-volume lvm prepare --bluestore --dmcrypt --data vg/lv
 
 Starting with Ceph Squid, you can opt for TPM2 token enrollment for the created LUKS2 devices with the ``--with-tpm`` flag:
 
 .. prompt:: bash #
 
-    ceph-volume lvm prepare --bluestore --dmcrypt --with-tpm --data vg/lv
+   ceph-volume lvm prepare --bluestore --dmcrypt --with-tpm --data vg/lv
 
 If a ``block.db`` device or a ``block.wal`` device is needed, it can be
 specified with ``--block.db`` or ``--block.wal``. These can be physical
@@ -81,9 +81,14 @@ and are ephemeral. A symlink is created for the ``block`` device, and is
 optional for ``block.db`` and ``block.wal``. For a cluster with a default name
 and an OSD ID of 0, the
-directory looks like this::
+directory looks like this:
+
+.. prompt:: bash #
+
+   ls -l /var/lib/ceph/osd/ceph-0
+
+::
 
-   # ls -l /var/lib/ceph/osd/ceph-0
     lrwxrwxrwx. 1 ceph ceph 93 Oct 20 13:05 block -> /dev/ceph-be2b6fbd-bcf2-4c51-b35d-a35a162a02f0/osd-block-25cf0a05-2bc6-44ef-9137-79d65bd7ad62
     lrwxrwxrwx. 1 ceph ceph 93 Oct 20 13:05 block.db -> /dev/sda1
     lrwxrwxrwx. 1 ceph ceph 93 Oct 20 13:05 block.wal -> /dev/ceph/osd-wal-0
 
@@ -187,18 +192,19 @@ To fetch the monmap by using the bootstrap key from the OSD, use this command:
 
 .. prompt:: bash #
 
-   /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring
-   /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o
+   /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring \
+   /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o \
    /var/lib/ceph/osd/<cluster name>-<osd id>/activate.monmap
 
 To populate the OSD directory (which has already been mounted), use this ``ceph-osd`` command:
+
 .. prompt:: bash #
 
-   ceph-osd --cluster ceph --mkfs --mkkey -i <osd id> \ --monmap
+   ceph-osd --cluster ceph --mkfs --mkkey -i <osd id> --monmap \
    /var/lib/ceph/osd/<cluster name>-<osd id>/activate.monmap --osd-data \
-   /var/lib/ceph/osd/<cluster name>-<osd id> --osd-journal
-   /var/lib/ceph/osd/<cluster name>-<osd id>/journal \ --osd-uuid <osd uuid>
-   --keyring /var/lib/ceph/osd/<cluster name>-<osd id>/keyring \ --setuser ceph
+   /var/lib/ceph/osd/<cluster name>-<osd id> --osd-journal \
+   /var/lib/ceph/osd/<cluster name>-<osd id>/journal --osd-uuid <osd uuid> \
+   --keyring /var/lib/ceph/osd/<cluster name>-<osd id>/keyring --setuser ceph \
    --setgroup ceph
 
 All of the information from the previous steps is used in the above command.
@@ -216,9 +222,14 @@ If using device partitions the only requirement is that they contain the
 
 For example, using a new, unformatted drive (``/dev/sdd`` in this case) we
 can use ``parted`` to create a new partition. First we list the device
-information::
+information:
+
+.. prompt:: bash #
+
+   parted --script /dev/sdd print
+
+::
 
-   $ parted --script /dev/sdd print
     Model: VBOX HARDDISK (scsi)
     Disk /dev/sdd: 11.5GB
     Sector size (logical/physical): 512B/512B
@@ -226,10 +237,15 @@ information::
 This device is not even labeled yet, so we can use ``parted`` to create a
 ``gpt`` label before we create a partition, and verify again with ``parted
-print``::
+print``:
+
+.. prompt:: bash #
+
+   parted --script /dev/sdd mklabel gpt
+   parted --script /dev/sdd print
+
+::
 
-   $ parted --script /dev/sdd mklabel gpt
-   $ parted --script /dev/sdd print
     Model: VBOX HARDDISK (scsi)
     Disk /dev/sdd: 11.5GB
     Sector size (logical/physical): 512B/512B
     Disk Flags:
@@ -237,10 +253,15 @@ print``::
 
 
 Now lets create a single partition, and verify later if ``blkid`` can find
-a ``PARTUUID`` that is needed by ``ceph-volume``::
+a ``PARTUUID`` that is needed by ``ceph-volume``:
+
+.. prompt:: bash #
+
+   parted --script /dev/sdd mkpart primary 1 100%
+   blkid /dev/sdd1
+
+::
 
-   $ parted --script /dev/sdd mkpart primary 1 100%
-   $ blkid /dev/sdd1
     /dev/sdd1: PARTLABEL="primary" PARTUUID="16399d72-1e1f-467d-96ee-6fe371a7d0d4"
 
 
@@ -263,9 +284,11 @@ already running there are a few things to take into account:
 
 The one time process for an existing OSD, with an ID of 0 and using a
 ``"ceph"`` cluster name would look like (the following command will **destroy
-any data** in the OSD)::
+any data** in the OSD):
+
+.. prompt:: bash #
 
-    ceph-volume lvm prepare --filestore --osd-id 0 --osd-fsid E3D291C1-E7BF-4984-9794-B60D9FA139CB
+   ceph-volume lvm prepare --filestore --osd-id 0 --osd-fsid E3D291C1-E7BF-4984-9794-B60D9FA139CB
 
 The command line tool will not contact the monitor to generate an OSD ID
 and will format the LVM device in addition to storing the metadata on it so that it
@@ -278,7 +301,9 @@ Crush device class
 To set the crush device class for the OSD, use the ``--crush-device-class``
 flag.
 
-    ceph-volume lvm prepare --bluestore --data vg/lv --crush-device-class foo
+.. prompt:: bash #
+
+   ceph-volume lvm prepare --bluestore --data vg/lv --crush-device-class foo
 
 .. _ceph-volume-lvm-multipath:
-- 
2.39.5