From 52584b4deb5c81903f5a04fb124eec3df9955d4e Mon Sep 17 00:00:00 2001
From: Zac Dover
Date: Sat, 13 May 2023 01:49:14 +1000
Subject: [PATCH] doc/cephfs: edit fs-volumes.rst (1 of x)

Edit the syntax of the English language in the file
doc/cephfs/fs-volumes.rst up to (but not including) the section called
"FS Subvolumes".

Signed-off-by: Zac Dover
(cherry picked from commit a1184070a1a3d2f6c1462c62f88fe70df5626c36)
---
 doc/cephfs/fs-volumes.rst | 139 +++++++++++++++++++++-----------------
 1 file changed, 76 insertions(+), 63 deletions(-)

diff --git a/doc/cephfs/fs-volumes.rst b/doc/cephfs/fs-volumes.rst
index d64986ead650c..cffca11cc3d4c 100644
--- a/doc/cephfs/fs-volumes.rst
+++ b/doc/cephfs/fs-volumes.rst
@@ -3,14 +3,13 @@
 FS volumes and subvolumes
 =========================
 
-The volumes
-module of the :term:`Ceph Manager` daemon (ceph-mgr) provides a single
-source of truth for CephFS exports. The OpenStack shared
-file system service (manila_) and Ceph Container Storage Interface (CSI_)
-storage administrators among others can use the common CLI provided by the
-ceph-mgr volumes module to manage CephFS exports.
-
-The ceph-mgr volumes module implements the following file system export
+The volumes module of the :term:`Ceph Manager` daemon (ceph-mgr) provides a
+single source of truth for CephFS exports. The OpenStack shared file system
+service (manila_) and the Ceph Container Storage Interface (CSI_) storage
+administrators use the common CLI provided by the ceph-mgr ``volumes`` module
+to manage CephFS exports.
+
+The ceph-mgr ``volumes`` module implements the following file system export
 abstractions:
 
 * FS volumes, an abstraction for CephFS file systems
 
@@ -18,8 +17,8 @@ abstractions:
 * FS subvolumes, an abstraction for independent CephFS directory trees
 
 * FS subvolume groups, an abstraction for a directory level higher than FS
-  subvolumes to effect policies (e.g., :doc:`/cephfs/file-layouts`) across a
-  set of subvolumes
+  subvolumes. Used to effect policies (e.g., :doc:`/cephfs/file-layouts`)
+  across a set of subvolumes
 
 Some possible use-cases for the export abstractions:
 
@@ -41,31 +40,30 @@ Requirements
 FS Volumes
 ----------
 
-Create a volume using:
+Create a volume by running the following command:
 
     $ ceph fs volume create <vol_name> [<placement>]
 
 This creates a CephFS file system and its data and metadata pools. It can also
-deploy MDS daemons for the filesystem using a ceph-mgr orchestrator
-module (see :doc:`/mgr/orchestrator`), for example Rook.
-
-<vol_name> is the volume name (an arbitrary string), and
-<placement> is an optional string that designates the hosts that should have
-an MDS running on them and, optionally, the total number of MDS daemons the cluster
-should have. For example, the
-following placement string means "deploy MDS on nodes ``host1`` and ``host2`` (one
-MDS per host):
+deploy MDS daemons for the filesystem using a ceph-mgr orchestrator module (for
+example Rook). See :doc:`/mgr/orchestrator`.
+
+``<vol_name>`` is the volume name (an arbitrary string). ``<placement>`` is an
+optional string that specifies the hosts that should have an MDS running on
+them and, optionally, the total number of MDS daemons that the cluster should
+have. For example, the following placement string means "deploy MDS on nodes
+``host1`` and ``host2`` (one MDS per host)"::
 
     "host1,host2"
 
-and this placement specification says to deploy two MDS daemons on each of
-nodes ``host1`` and ``host2`` (for a total of four MDS daemons in the cluster):
+The following placement specification means "deploy two MDS daemons on each of
+nodes ``host1`` and ``host2`` (for a total of four MDS daemons in the
+cluster)"::
 
     "4 host1,host2"
 
-For more details on placement specification refer to the :ref:`orchestrator-cli-service-spec`,
-but keep in mind that specifying placement via a YAML file is not supported.
+See :ref:`orchestrator-cli-service-spec` for more on placement specification.
+Specifying placement via a YAML file is not supported.
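+
+For example, the following hypothetical commands create one volume with the
+default MDS placement, and a second volume whose four MDS daemons are spread
+across hosts ``host1`` and ``host2``. The names ``vol01``, ``vol02``,
+``host1``, and ``host2`` are illustrative only::
+
+    # assumes hosts named host1 and host2 exist in the cluster
+    $ ceph fs volume create vol01
+    $ ceph fs volume create vol02 "4 host1,host2"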
 
 To remove a volume, run the following command:
 
@@ -74,29 +72,29 @@ To remove a volume, run the following command:
 This removes a file system and its data and metadata pools. It also tries to
 remove MDS daemons using the enabled ceph-mgr orchestrator module.
 
-List volumes using:
+List volumes by running the following command:
 
     $ ceph fs volume ls
 
-Rename a volume using:
+Rename a volume by running the following command:
 
     $ ceph fs volume rename <vol_name> <new_vol_name> [--yes-i-really-mean-it]
 
 Renaming a volume can be an expensive operation that requires the following:
 
-- Rename the orchestrator-managed MDS service to match the <new_vol_name>.
-  This involves launching a MDS service with <new_vol_name> and bringing down
-  the MDS service with <vol_name>.
-- Rename the file system matching <vol_name> to <new_vol_name>
-- Change the application tags on the data and metadata pools of the file system
-  to <new_vol_name>
-- Rename the metadata and data pools of the file system.
+- Renaming the orchestrator-managed MDS service to match the <new_vol_name>.
+  This involves launching an MDS service with ``<new_vol_name>`` and bringing
+  down the MDS service with ``<vol_name>``.
+- Renaming the file system matching ``<vol_name>`` to ``<new_vol_name>``.
+- Changing the application tags on the data and metadata pools of the file system
+  to ``<new_vol_name>``.
+- Renaming the metadata and data pools of the file system.
 
-The CephX IDs authorized for <vol_name> need to be reauthorized for <new_vol_name>. Any
-on-going operations of the clients using these IDs may be disrupted. Mirroring is
-expected to be disabled on the volume.
+The CephX IDs that are authorized for ``<vol_name>`` must be reauthorized for
+``<new_vol_name>``. Any ongoing operations of the clients using these IDs may
+be disrupted. Ensure that mirroring is disabled on the volume.
 
-To fetch the information of a CephFS volume, run:
+To fetch the information of a CephFS volume, run the following command:
 
     $ ceph fs volume info vol_name [--human_readable]
 
@@ -142,7 +140,7 @@ Sample output of the ``volume info`` command::
 FS Subvolume groups
 -------------------
 
-Create a subvolume group using::
+Create a subvolume group by running the following command:
 
     $ ceph fs subvolumegroup create <vol_name> <group_name> [--size <size_in_bytes>] [--pool_layout <data_pool_name>] [--uid <uid>] [--gid <gid>] [--mode <octal_mode>]
 
@@ -155,28 +153,31 @@ a quota on it (see :doc:`/cephfs/quota`).
 By default, the subvolume group is created with octal file mode ``755``, uid
 ``0``, gid ``0`` and the data pool layout of its parent directory.
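+
+For example, the following hypothetical command creates a group with a 10 GiB
+quota and mode ``777`` instead of the defaults. The names ``vol01`` and
+``grp01`` are illustrative only::
+
+    # 10737418240 bytes = 10 GiB
+    $ ceph fs subvolumegroup create vol01 grp01 --size 10737418240 --mode 777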
 
-Remove a subvolume group using:
+Remove a subvolume group by running a command of the following form:
 
     $ ceph fs subvolumegroup rm <vol_name> <group_name> [--force]
 
-The removal of a subvolume group fails if it is not empty or non-existent.
-'--force' flag allows the non-existent subvolume group remove command to succeed.
-
+The removal of a subvolume group fails if the subvolume group is not empty or
+is non-existent. The ``--force`` flag allows the remove command to succeed
+even if the subvolume group does not exist.
 
-Fetch the absolute path of a subvolume group using::
+Fetch the absolute path of a subvolume group by running a command of the
+following form:
 
     $ ceph fs subvolumegroup getpath <vol_name> <group_name>
 
-List subvolume groups using::
+List subvolume groups by running a command of the following form:
 
     $ ceph fs subvolumegroup ls <vol_name>
 
 .. note:: Subvolume group snapshot feature is no longer supported in mainline
    CephFS (existing group snapshots can still be listed and deleted)
 
-Fetch the metadata of a subvolume group using::
+Fetch the metadata of a subvolume group by running a command of the following form:
+
+.. prompt:: bash #
 
-    $ ceph fs subvolumegroup info <vol_name> <group_name>
+   ceph fs subvolumegroup info <vol_name> <group_name>
 
 The output format is JSON and contains fields as follows:
 
@@ -193,38 +194,50 @@ The output format is JSON and contains fields as follows:
 * ``created_at``: creation time of the subvolume group in the format "YYYY-MM-DD HH:MM:SS"
 * ``data_pool``: data pool to which the subvolume group belongs
 
-Check the presence of any subvolume group using::
+Check the presence of any subvolume group by running a command of the following
+form:
 
-    $ ceph fs subvolumegroup exist <vol_name>
+.. prompt:: bash $
 
-The 'exist' command outputs::
+   ceph fs subvolumegroup exist <vol_name>
+
+The ``exist`` command outputs:
 
 * "subvolumegroup exists": if any subvolumegroup is present
 * "no subvolumegroup exists": if no subvolumegroup is present
 
-.. note:: This command checks for the presence of custom groups and not presence of the default one. To validate the emptiness of the volume, a subvolumegroup existence check alone is not sufficient. Subvolume existence also needs to be checked as there might be subvolumes in the default group.
+.. note:: This command checks for the presence of custom groups and not
+   presence of the default one. To validate the emptiness of the volume, a
+   subvolumegroup existence check alone is not sufficient. Subvolume existence
+   also needs to be checked as there might be subvolumes in the default group.
 
-Resize a subvolume group using::
-
-    $ ceph fs subvolumegroup resize <vol_name> <group_name> <new_size> [--no_shrink]
-
-The command resizes the subvolume group quota using the size specified by ``new_size``.
-The ``--no_shrink`` flag prevents the subvolume group from shrinking below the current used
-size.
-
-The subvolume group may be resized to an infinite size by passing ``inf`` or ``infinite``
-as the ``new_size``.
+Resize a subvolume group by running a command of the following form:
+
+.. prompt:: bash $
+
+   ceph fs subvolumegroup resize <vol_name> <group_name> <new_size> [--no_shrink]
+
+The command resizes the subvolume group quota, using the size specified by
+``new_size``. The ``--no_shrink`` flag prevents the subvolume group from
+shrinking below the current used size.
+
+The subvolume group may be resized to an infinite size by passing ``inf`` or
+``infinite`` as the ``new_size``.
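+
+For example, the following hypothetical commands raise a group's quota to
+20 GiB and then remove the size limit entirely. The names ``vol01`` and
+``grp01`` are illustrative only:
+
+.. prompt:: bash $
+
+   # 21474836480 bytes = 20 GiB
+   ceph fs subvolumegroup resize vol01 grp01 21474836480
+   ceph fs subvolumegroup resize vol01 grp01 inf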
 
-Remove a snapshot of a subvolume group using::
-
-    $ ceph fs subvolumegroup snapshot rm <vol_name> <group_name> <snap_name> [--force]
-
-Supplying the ``--force`` flag allows the command to succeed when it would otherwise
-fail due to the snapshot not existing.
-
-List snapshots of a subvolume group using::
-
-    $ ceph fs subvolumegroup snapshot ls <vol_name> <group_name>
+Remove a snapshot of a subvolume group by running a command of the following form:
+
+.. prompt:: bash $
+
+   ceph fs subvolumegroup snapshot rm <vol_name> <group_name> <snap_name> [--force]
+
+Supplying the ``--force`` flag allows the command to succeed when it would
+otherwise fail due to the nonexistence of the snapshot.
+
+List snapshots of a subvolume group by running a command of the following form:
+
+.. prompt:: bash $
+
+   ceph fs subvolumegroup snapshot ls <vol_name> <group_name>
 
 FS Subvolumes
-- 
2.39.5