From 412432510f2af039ac0662bf9cf454ee80f14ebe Mon Sep 17 00:00:00 2001
From: Zac Dover
Date: Thu, 11 May 2023 00:52:50 +1000
Subject: [PATCH] doc/cephfs: fix prompts in fs-volumes.rst

Fixed a regression introduced in e5355e3d66e1438d51de6b57eae79fab47cd0184
that broke the unselectable prompts in the RST.

Signed-off-by: Zac Dover
(cherry picked from commit e019948783adf41207d70e8cd2540d335e07b80b)
---
 doc/cephfs/fs-volumes.rst | 103 +++++++++++++++++++-------------------
 1 file changed, 52 insertions(+), 51 deletions(-)

diff --git a/doc/cephfs/fs-volumes.rst b/doc/cephfs/fs-volumes.rst
index be2fa830c1f6..33e6b1c8bd3c 100644
--- a/doc/cephfs/fs-volumes.rst
+++ b/doc/cephfs/fs-volumes.rst
@@ -42,7 +42,7 @@ Requirements
 FS Volumes
 ----------
 
-Create a volume using::
+Create a volume using:
 
    $ ceph fs volume create <vol_name> [<placement>]
 
@@ -68,18 +68,18 @@ nodes ``host1`` and ``host2`` (for a total of four MDS daemons in the cluster):
 For more details on placement specification refer to the
 :ref:`orchestrator-cli-service-spec`, but keep in mind that specifying
 placement via a YAML file is not supported.
 
-To remove a volume, run the following command::
+To remove a volume, run the following command:
 
    $ ceph fs volume rm <vol_name> [--yes-i-really-mean-it]
 
 This removes a file system and its data and metadata pools. It also tries to
 remove MDS daemons using the enabled ceph-mgr orchestrator module.
 
-List volumes using::
+List volumes using:
 
    $ ceph fs volume ls
 
-Rename a volume using::
+Rename a volume using:
 
    $ ceph fs volume rename <vol_name> <new_vol_name> [--yes-i-really-mean-it]
@@ -97,7 +97,7 @@ The CephX IDs authorized for <vol_name> need to be reauthorized for <new_vol_name>
    [--force]
@@ -165,18 +165,18 @@ The removal of a subvolume group fails if it is not empty or non-existent.
 '--force' flag allows the non-existent subvolume group remove command to
 succeed.
 
-Fetch the absolute path of a subvolume group using::
+Fetch the absolute path of a subvolume group using:
 
    $ ceph fs subvolumegroup getpath <vol_name> <group_name>
 
-List subvolume groups using::
+List subvolume groups using:
 
    $ ceph fs subvolumegroup ls <vol_name>
 
 .. note:: Subvolume group snapshot feature is no longer supported in mainline CephFS (existing group snapshots can still be listed and deleted)
 
-Fetch the metadata of a subvolume group using::
+Fetch the metadata of a subvolume group using:
 
    $ ceph fs subvolumegroup info <vol_name> <group_name>
@@ -195,7 +195,7 @@ The output format is JSON and contains fields as follows:
 * ``created_at``: creation time of the subvolume group in the format "YYYY-MM-DD HH:MM:SS"
 * ``data_pool``: data pool to which the subvolume group belongs
 
-Check the presence of any subvolume group using::
+Check the presence of any subvolume group using:
 
    $ ceph fs subvolumegroup exist <vol_name>
@@ -206,7 +206,7 @@ The 'exist' command outputs:
 
 .. note:: This command checks for the presence of custom groups and not presence of the default one. To validate the emptiness of the volume, a subvolumegroup existence check alone is not sufficient. Subvolume existence also needs to be checked as there might be subvolumes in the default group.
 
-Resize a subvolume group using::
+Resize a subvolume group using:
 
    $ ceph fs subvolumegroup resize <vol_name> <group_name> <new_size> [--no_shrink]
@@ -217,14 +217,14 @@ size.
 The subvolume group may be resized to an infinite size by passing ``inf`` or
 ``infinite`` as the ``new_size``.
 
-Remove a snapshot of a subvolume group using::
+Remove a snapshot of a subvolume group using:
 
    $ ceph fs subvolumegroup snapshot rm <vol_name> <group_name> <snap_name> [--force]
 
 Supplying the ``--force`` flag allows the command to succeed when it would
 otherwise fail due to the snapshot not existing.
 
-List snapshots of a subvolume group using::
+List snapshots of a subvolume group using:
 
    $ ceph fs subvolumegroup snapshot ls <vol_name> <group_name>
@@ -232,7 +232,7 @@ List snapshots of a subvolume group using::
 FS Subvolumes
 -------------
 
-Create a subvolume using::
+Create a subvolume using:
 
    $ ceph fs subvolume create <vol_name> <subvol_name> [--size <size_in_bytes>] [--group_name <subvol_group_name>] [--pool_layout <data_pool_name>] [--uid <uid>] [--gid <gid>] [--mode <octal_mode>] [--namespace-isolated]
@@ -247,7 +247,7 @@ default a subvolume is created within the default subvolume group, and with an o
 mode '755', uid of its subvolume group, gid of its subvolume group, data pool
 layout of its parent directory and no size limit.
 
-Remove a subvolume using::
+Remove a subvolume using:
 
    $ ceph fs subvolume rm <vol_name> <subvol_name> [--group_name <subvol_group_name>] [--force] [--retain-snapshots]
@@ -267,7 +267,7 @@ empty for all operations not involving the retained snapshots.
 
 .. note:: Retained snapshots can be used as a clone source to recreate the subvolume, or clone to a newer subvolume.
 
-Resize a subvolume using::
+Resize a subvolume using:
 
    $ ceph fs subvolume resize <vol_name> <subvol_name> <new_size> [--group_name <subvol_group_name>] [--no_shrink]
@@ -276,29 +276,29 @@ The `--no_shrink`` flag prevents the subvolume from shrinking below the current
 The subvolume can be resized to an unlimited (but sparse) logical size by
 passing ``inf`` or ``infinite`` as `` new_size``.
 
-Authorize cephx auth IDs, the read/read-write access to fs subvolumes::
+Authorize cephx auth IDs, the read/read-write access to fs subvolumes:
 
    $ ceph fs subvolume authorize <vol_name> <sub_name> <auth_id> [--group_name=<group_name>] [--access_level=<access_level>]
 
 The 'access_level' takes 'r' or 'rw' as value.
 
-Deauthorize cephx auth IDs, the read/read-write access to fs subvolumes::
+Deauthorize cephx auth IDs, the read/read-write access to fs subvolumes:
 
    $ ceph fs subvolume deauthorize <vol_name> <sub_name> <auth_id> [--group_name=<group_name>]
 
-List cephx auth IDs authorized to access fs subvolume::
+List cephx auth IDs authorized to access fs subvolume:
 
    $ ceph fs subvolume authorized_list <vol_name> <sub_name> [--group_name=<group_name>]
 
-Evict fs clients based on auth ID and subvolume mounted::
+Evict fs clients based on auth ID and subvolume mounted:
 
    $ ceph fs subvolume evict <vol_name> <sub_name> <auth_id> [--group_name=<group_name>]
 
-Fetch the absolute path of a subvolume using::
+Fetch the absolute path of a subvolume using:
 
    $ ceph fs subvolume getpath <vol_name> <subvol_name> [--group_name <subvol_group_name>]
 
-Fetch the information of a subvolume using::
+Fetch the information of a subvolume using:
 
    $ ceph fs subvolume info <vol_name> <subvol_name> [--group_name <subvol_group_name>]
@@ -340,13 +340,13 @@ A subvolume's ``state`` is based on the current state of the subvolume and conta
 * ``complete``: subvolume is ready for all operations
 * ``snapshot-retained``: subvolume is removed but its snapshots are retained
 
-List subvolumes using::
+List subvolumes using:
 
    $ ceph fs subvolume ls <vol_name> [--group_name <subvol_group_name>]
 
 .. note:: subvolumes that are removed but have snapshots retained, are also listed.
 
-Check the presence of any subvolume using::
+Check the presence of any subvolume using:
 
    $ ceph fs subvolume exist <vol_name> [--group_name <subvol_group_name>]
@@ -355,7 +355,7 @@ These are the possible results of the ``exist`` command:
 * ``subvolume exists``: if any subvolume of given group_name is present
 * ``no subvolume exists``: if no subvolume of given group_name is present
 
-Set custom metadata on the subvolume as a key-value pair using::
+Set custom metadata on the subvolume as a key-value pair using:
 
    $ ceph fs subvolume metadata set <vol_name> <subvol_name> <key_name> <value> [--group_name <subvol_group_name>]
@@ -365,27 +365,26 @@ Set custom metadata on the subvolume as a key-value pair using::
 
 .. note:: Custom metadata on a subvolume is not preserved when snapshotting the subvolume, and hence, is also not preserved when cloning the subvolume snapshot.
 
-Get custom metadata set on the subvolume using the metadata key::
+Get custom metadata set on the subvolume using the metadata key:
 
    $ ceph fs subvolume metadata get <vol_name> <subvol_name> <key_name> [--group_name <subvol_group_name>]
 
-List custom metadata (key-value pairs) set on the subvolume using::
+List custom metadata (key-value pairs) set on the subvolume using:
 
    $ ceph fs subvolume metadata ls <vol_name> <subvol_name> [--group_name <subvol_group_name>]
 
-Remove custom metadata set on the subvolume using the metadata key::
+Remove custom metadata set on the subvolume using the metadata key:
 
    $ ceph fs subvolume metadata rm <vol_name> <subvol_name> <key_name> [--group_name <subvol_group_name>] [--force]
 
 Using the ``--force`` flag allows the command to succeed that would otherwise
 fail if the metadata key did not exist.
 
-Create a snapshot of a subvolume using::
+Create a snapshot of a subvolume using:
 
    $ ceph fs subvolume snapshot create <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
 
-
-Remove a snapshot of a subvolume using::
+Remove a snapshot of a subvolume using:
 
    $ ceph fs subvolume snapshot rm <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>] [--force]
@@ -394,11 +393,11 @@ fail if the snapshot did not exist.
 
 .. note:: if the last snapshot within a snapshot retained subvolume is removed, the subvolume is also removed
 
-List snapshots of a subvolume using::
+List snapshots of a subvolume using:
 
    $ ceph fs subvolume snapshot ls <vol_name> <subvol_name> [--group_name <subvol_group_name>]
 
-Fetch the information of a snapshot using::
+Fetch the information of a snapshot using:
 
    $ ceph fs subvolume snapshot info <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
@@ -441,7 +440,7 @@ Sample output when no snapshot clone is in progress or pending::
         "has_pending_clones": "no"
     }
 
-Set custom key-value metadata on the snapshot by running::
+Set custom key-value metadata on the snapshot by running:
 
    $ ceph fs subvolume snapshot metadata set <vol_name> <subvol_name> <snap_name> <key_name> <value> [--group_name <subvol_group_name>]
@@ -451,15 +450,15 @@ Set custom key-value metadata on the snapshot by running::
 
 .. note:: Custom metadata on a snapshot is not preserved when snapshotting the subvolume, and hence is also not preserved when cloning the subvolume snapshot.
 
-Get custom metadata set on the snapshot using the metadata key::
+Get custom metadata set on the snapshot using the metadata key:
 
    $ ceph fs subvolume snapshot metadata get <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>]
 
-List custom metadata (key-value pairs) set on the snapshot using::
+List custom metadata (key-value pairs) set on the snapshot using:
 
    $ ceph fs subvolume snapshot metadata ls <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
 
-Remove custom metadata set on the snapshot using the metadata key::
+Remove custom metadata set on the snapshot using the metadata key:
 
    $ ceph fs subvolume snapshot metadata rm <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>] [--force]
@@ -479,35 +478,38 @@ Protecting snapshots prior to cloning was a prerequisite in the Nautilus release
 snapshots were introduced for this purpose. This prerequisite, and hence the commands
 to protect/unprotect, is being deprecated and may be removed from a future release.
 
-The commands being deprecated are::
-  $ ceph fs subvolume snapshot protect <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
-  $ ceph fs subvolume snapshot unprotect <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
+The commands being deprecated are:
+
+.. prompt:: bash #
+
+   ceph fs subvolume snapshot protect <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
+   ceph fs subvolume snapshot unprotect <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
 
 .. note:: Using the above commands will not result in an error, but they have no useful purpose.
 
 .. note:: Use the ``subvolume info`` command to fetch subvolume metadata regarding supported ``features`` to help decide if protect/unprotect of snapshots is required, based on the availability of the ``snapshot-autoprotect`` feature.
 
-To initiate a clone operation use::
+To initiate a clone operation use:
 
    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name>
 
-If a snapshot (source subvolume) is a part of non-default group, the group name needs to be specified::
+If a snapshot (source subvolume) is a part of non-default group, the group name needs to be specified:
 
    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --group_name <subvol_group_name>
 
-Cloned subvolumes can be a part of a different group than the source snapshot (by default, cloned subvolumes are created in default group). To clone to a particular group use::
+Cloned subvolumes can be a part of a different group than the source snapshot (by default, cloned subvolumes are created in default group). To clone to a particular group use:
 
    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --target_group_name <subvol_group_name>
 
-Similar to specifying a pool layout when creating a subvolume, pool layout can be specified when creating a cloned subvolume. To create a cloned subvolume with a specific pool layout use::
+Similar to specifying a pool layout when creating a subvolume, pool layout can be specified when creating a cloned subvolume. To create a cloned subvolume with a specific pool layout use:
 
    $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --pool_layout <pool_layout>
 
-Configure the maximum number of concurrent clones. The default is 4::
+Configure the maximum number of concurrent clones. The default is 4:
 
    $ ceph config set mgr mgr/volumes/max_concurrent_clones <value>
 
-To check the status of a clone operation use::
+To check the status of a clone operation use:
 
    $ ceph fs clone status <vol_name> <clone_name> [--group_name <group_name>]
@@ -584,7 +586,7 @@ To delete a partial clone use::
 
 .. note:: Cloning synchronizes only directories, regular files and symbolic links. Inode timestamps (access and modification times) are synchronized up to seconds granularity.
 
-An ``in-progress`` or a ``pending`` clone operation may be canceled. To cancel a clone operation use the ``clone cancel`` command::
+An ``in-progress`` or a ``pending`` clone operation may be canceled. To cancel a clone operation use the ``clone cancel`` command:
 
    $ ceph fs clone cancel <vol_name> <clone_name> [--group_name <group_name>]
@@ -612,17 +614,16 @@ On successful cancellation, the cloned subvolume is moved to the ``canceled`` st
 Pinning Subvolumes and Subvolume Groups
 ---------------------------------------
-
 
 Subvolumes and subvolume groups may be automatically pinned to ranks according
 to policies. This can distribute load across MDS ranks in predictable and
 stable ways. Review :ref:`cephfs-pinning` and :ref:`cephfs-ephemeral-pinning`
 for details on how pinning works.
 
-Pinning is configured by::
+Pinning is configured by:
 
    $ ceph fs subvolumegroup pin <vol_name> <group_name> <pin_type> <pin_setting>
 
-or for subvolumes::
+or for subvolumes:
 
    $ ceph fs subvolume pin <vol_name> <subvol_name> <pin_type> <pin_setting>
 
@@ -631,7 +632,7 @@ one of ``export``, ``distributed``, or ``random``. The ``pin_setting``
 corresponds to the extended attributed "value" as in the pinning documentation
 referenced above.
 
-So, for example, setting a distributed pinning strategy on a subvolume group::
+So, for example, setting a distributed pinning strategy on a subvolume group:
 
    $ ceph fs subvolumegroup pin cephfilesystem-a csi distributed 1
-- 
2.47.3
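
Reviewer note: only the hunk at old line 479 shows the prompt style the commit
message says this patch restores, so a minimal RST sketch of the two forms is
included below for context. It assumes the ``sphinx-prompt`` extension that the
Ceph documentation build uses for ``.. prompt::`` directives; the
``ceph fs volume create`` command and the ``<vol_name>``/``<subvol_name>``/
``<snap_name>`` placeholders are illustrative stand-ins, not text quoted from
this patch::

    .. A plain literal block renders the "$" as ordinary selectable text, so a
    .. reader's copy-paste picks up the prompt character along with the command.

    Create a volume using::

        $ ceph fs volume create <vol_name>

    .. The prompt directive renders an unselectable prompt instead; only the
    .. command itself can be selected, which is the behaviour the commit
    .. message describes as "unselectable prompts".

    The commands being deprecated are:

    .. prompt:: bash #

       ceph fs subvolume snapshot protect <vol_name> <subvol_name> <snap_name>

Building the docs locally (for example with ``admin/build-doc``) and checking
which blocks end up selectable is the quickest way to confirm the rendering.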