From 9c62b41565afefb89bb03c8c261b5e76a196b38c Mon Sep 17 00:00:00 2001
From: Zac Dover
Date: Thu, 11 May 2023 00:52:50 +1000
Subject: [PATCH] doc/cephfs: fix prompts in fs-volumes.rst

Fixed a regression introduced in e5355e3d66e1438d51de6b57eae79fab47cd0184
that broke the unselectable prompts in the RST.

Signed-off-by: Zac Dover
(cherry picked from commit e019948783adf41207d70e8cd2540d335e07b80b)
---
 doc/cephfs/fs-volumes.rst | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/doc/cephfs/fs-volumes.rst b/doc/cephfs/fs-volumes.rst
index ed6daa071fa2a..492c1ae60d554 100644
--- a/doc/cephfs/fs-volumes.rst
+++ b/doc/cephfs/fs-volumes.rst
@@ -42,7 +42,7 @@ Requirements
 FS Volumes
 ----------
 
-Create a volume using::
+Create a volume using:
 
     $ ceph fs volume create <vol_name> [<placement>]
 
@@ -68,18 +68,18 @@ nodes ``host1`` and ``host2`` (for a total of four MDS daemons in the cluster):
 For more details on placement specification refer to the
 :ref:`orchestrator-cli-service-spec`, but keep in mind that specifying
 placement via a YAML file is not supported.
 
-To remove a volume, run the following command::
+To remove a volume, run the following command:
 
     $ ceph fs volume rm <vol_name> [--yes-i-really-mean-it]
 
 This removes a file system and its data and metadata pools. It also tries to
 remove MDS daemons using the enabled ceph-mgr orchestrator module.
 
-List volumes using::
+List volumes using:
 
     $ ceph fs volume ls
 
-Rename a volume using::
+Rename a volume using:
 
     $ ceph fs volume rename <vol_name> <new_vol_name> [--yes-i-really-mean-it]
@@ -97,7 +97,7 @@ The CephX IDs authorized for <vol_name> need to be reauthorized for <new_vol_name>
 
-.. note:: Subvolume group snapshot feature is no longer supported in mainline CephFS (existing group
-   snapshots can still be listed and deleted)
+.. note:: Subvolume group snapshot feature is no longer supported in mainline
+   CephFS (existing group snapshots can still be listed and deleted)
 
 Fetch the metadata of a subvolume group using::
 
@@ -199,7 +199,7 @@ Check the presence of any subvolume group using::
 
     $ ceph fs subvolumegroup exist <vol_name>
 
-The 'exist' command outputs:
+The 'exist' command outputs::
 
 * "subvolumegroup exists": if any subvolumegroup is present
 * "no subvolumegroup exists": if no subvolumegroup is present
@@ -384,7 +384,6 @@ Create a snapshot of a subvolume using::
 
     $ ceph fs subvolume snapshot create <vol_name> <subvol_name> <snap_name> [--group_name <group_name>]
 
-
 Remove a snapshot of a subvolume using::
 
     $ ceph fs subvolume snapshot rm <vol_name> <subvol_name> <snap_name> [--group_name <group_name>] [--force]
@@ -479,9 +478,12 @@ Protecting snapshots prior to cloning was a prerequisite in the Nautilus release
 snapshots were introduced for this purpose. This prerequisite, and hence the commands
 to protect/unprotect, is being deprecated and may be removed from a future release.
 
-The commands being deprecated are::
-    $ ceph fs subvolume snapshot protect <vol_name> <subvol_name> <snap_name> [--group_name <group_name>]
-    $ ceph fs subvolume snapshot unprotect <vol_name> <subvol_name> <snap_name> [--group_name <group_name>]
+The commands being deprecated are:
+
+.. prompt:: bash #
+
+    ceph fs subvolume snapshot protect <vol_name> <subvol_name> <snap_name> [--group_name <group_name>]
+    ceph fs subvolume snapshot unprotect <vol_name> <subvol_name> <snap_name> [--group_name <group_name>]
 
 .. note:: Using the above commands will not result in an error,
    but they have no useful purpose.
@@ -612,7 +614,6 @@ On successful cancellation, the cloned subvolume is moved to the ``canceled`` st
 
 Pinning Subvolumes and Subvolume Groups
 ---------------------------------------
-
 Subvolumes and subvolume groups may be automatically pinned to ranks according
 to policies. This can distribute load across MDS ranks in predictable and
 stable ways. Review :ref:`cephfs-pinning` and :ref:`cephfs-ephemeral-pinning`
-- 
2.39.5