From: Zac Dover
Date: Mon, 20 May 2024 11:55:16 +0000 (+1000)
Subject: doc/cephfs: edit "Cloning Snapshots" in fs-volumes.rst
X-Git-Tag: v17.2.8~346^2
X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=refs%2Fpull%2F57667%2Fhead;p=ceph.git

doc/cephfs: edit "Cloning Snapshots" in fs-volumes.rst

Edit the "Cloning Snapshots" section in doc/cephfs/fs-volumes.rst. This
commit represents only a grammar pass. A future commit (and future PR)
will separate this section into subsections by command.

Signed-off-by: Zac Dover
(cherry picked from commit 69941180e9b1ff5f7f0eddfba670028aae93e333)
---

diff --git a/doc/cephfs/fs-volumes.rst b/doc/cephfs/fs-volumes.rst
index 29bc258d200c4..38e7a55e5df1d 100644
--- a/doc/cephfs/fs-volumes.rst
+++ b/doc/cephfs/fs-volumes.rst
@@ -563,14 +563,17 @@ fail (if the metadata key did not exist).
 Cloning Snapshots
 -----------------
 
-Subvolumes can be created by cloning subvolume snapshots. Cloning is an asynchronous operation that copies
-data from a snapshot to a subvolume. Due to this bulk copying, cloning is inefficient for very large
-data sets.
+Subvolumes can be created by cloning subvolume snapshots. Cloning is an
+asynchronous operation that copies data from a snapshot to a subvolume.
+Because cloning involves bulk copying, it is inefficient for very large
+data sets.
 
-.. note:: Removing a snapshot (source subvolume) would fail if there are pending or in progress clone operations.
+.. note:: Removing a snapshot (source subvolume) fails when there are
+   pending or in-progress clone operations.
 
-Protecting snapshots prior to cloning was a prerequisite in the Nautilus release, and the commands to protect/unprotect
-snapshots were introduced for this purpose. This prerequisite, and hence the commands to protect/unprotect, is being
+Protecting snapshots prior to cloning was a prerequisite in the Nautilus
+release. Commands to protect and unprotect snapshots were introduced for
+this purpose. This prerequisite is being
 deprecated and may be removed from a future release.
 
 The commands being deprecated are:
@@ -584,29 +587,46 @@ The commands being deprecated are:
 
 .. note:: Use the ``subvolume info`` command to fetch subvolume metadata regarding supported ``features`` to help decide if protect/unprotect of snapshots is required, based on the availability of the ``snapshot-autoprotect`` feature.
 
-To initiate a clone operation use:
+Run a command of the following form to initiate a clone operation:
 
-  $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name>
+.. prompt:: bash #
+
+   ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name>
+
+Run a command of the following form when a snapshot (source subvolume) is part
+of a non-default group. Note that the group name needs to be specified:
 
-If a snapshot (source subvolume) is a part of non-default group, the group name needs to be specified:
+.. prompt:: bash #
 
-  $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --group_name <subvol_group_name>
+   ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --group_name <subvol_group_name>
 
-Cloned subvolumes can be a part of a different group than the source snapshot (by default, cloned subvolumes are created in default group). To clone to a particular group use:
+Cloned subvolumes can be a part of a different group than the source snapshot
+(by default, cloned subvolumes are created in the default group). Run a command
+of the following form to clone to a particular group:
 
-  $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --target_group_name <subvol_group_name>
+.. prompt:: bash #
 
-Similar to specifying a pool layout when creating a subvolume, pool layout can be specified when creating a cloned subvolume. To create a cloned subvolume with a specific pool layout use:
+   ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --target_group_name <subvol_group_name>
 
-  $ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --pool_layout <pool_layout>
+Pool layout can be specified when creating a cloned subvolume in a way that is
+similar to specifying a pool layout when creating a subvolume. Run a command of
+the following form to create a cloned subvolume with a specific pool layout:
+
+.. prompt:: bash #
+
+   ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --pool_layout <pool_layout>
 
 Configure the maximum number of concurrent clones. The default is 4:
 
-  $ ceph config set mgr mgr/volumes/max_concurrent_clones <value>
+.. prompt:: bash #
 
-To check the status of a clone operation use:
+   ceph config set mgr mgr/volumes/max_concurrent_clones <value>
 
-  $ ceph fs clone status <vol_name> <clone_name> [--group_name <group_name>]
+Run a command of the following form to check the status of a clone operation:
+
+.. prompt:: bash #
+
+   ceph fs clone status <vol_name> <clone_name> [--group_name <group_name>]
 
 A clone can be in one of the following states:
@@ -658,11 +678,14 @@ Here is an example of a ``failed`` clone::
     }
   }
 
-(NOTE: since ``subvol1`` is in the default group, the ``source`` object's ``clone status`` does not include the group name)
+.. note:: Because ``subvol1`` is in the default group, the ``source`` object's
+   ``clone status`` does not include the group name.
 
-.. note:: Cloned subvolumes are accessible only after the clone operation has successfully completed.
+.. note:: Cloned subvolumes are accessible only after the clone operation has
+   successfully completed.
 
-After a successful clone operation, ``clone status`` will look like the below::
+After a successful clone operation, ``clone status`` will look like the
+following::
 
   $ ceph fs clone status cephfs clone1
   {
@@ -673,35 +696,79 @@ After a successful clone operation, ``clone status`` will look like the below::
 
 If a clone operation is unsuccessful, the ``state`` value will be ``failed``.
 
-To retry a failed clone operation, the incomplete clone must be deleted and the clone operation must be issued again.
-To delete a partial clone use::
+To retry a failed clone operation, the incomplete clone must be deleted and the
+clone operation must be issued again.
 
-  $ ceph fs subvolume rm <vol_name> <clone_name> [--group_name <group_name>] --force
+Run a command of the following form to delete a partial clone:
 
-.. note:: Cloning synchronizes only directories, regular files and symbolic links. Inode timestamps (access and
-          modification times) are synchronized up to seconds granularity.
+.. prompt:: bash #
 
-An ``in-progress`` or a ``pending`` clone operation may be canceled. To cancel a clone operation use the ``clone cancel`` command:
+   ceph fs subvolume rm <vol_name> <clone_name> [--group_name <group_name>] --force
 
-  $ ceph fs clone cancel <vol_name> <clone_name> [--group_name <group_name>]
+.. note:: Cloning synchronizes only directories, regular files, and symbolic
+   links. Inode timestamps (access and modification times) are synchronized up
+   to one-second granularity.
 
-On successful cancellation, the cloned subvolume is moved to the ``canceled`` state::
+An ``in-progress`` or a ``pending`` clone operation may be canceled. To cancel
+a clone operation, use the ``clone cancel`` command:
 
-  $ ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1
-  $ ceph fs clone cancel cephfs clone1
-  $ ceph fs clone status cephfs clone1
-  {
-    "status": {
-      "state": "canceled",
-      "source": {
-        "volume": "cephfs",
-        "subvolume": "subvol1",
-        "snapshot": "snap1"
-      }
+.. prompt:: bash #
+
+   ceph fs clone cancel <vol_name> <clone_name> [--group_name <group_name>]
+
+On successful cancellation, the cloned subvolume is moved to the ``canceled``
+state:
+
+.. prompt:: bash #
+
+   ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1
+   ceph fs clone cancel cephfs clone1
+   ceph fs clone status cephfs clone1
+
+::
+
+  {
+    "status": {
+      "state": "canceled",
+      "source": {
+        "volume": "cephfs",
+        "subvolume": "subvol1",
+        "snapshot": "snap1"
+      }
     }
   }
 
-.. note:: The canceled cloned may be deleted by supplying the ``--force`` option to the `fs subvolume rm` command.
+.. note:: Delete the canceled clone by supplying the ``--force`` option to the
+   ``fs subvolume rm`` command.
+
+Configurables
+~~~~~~~~~~~~~
+
+Configure the maximum number of concurrent clone operations. The default is 4:
+
+.. prompt:: bash #
+
+   ceph config set mgr mgr/volumes/max_concurrent_clones <value>
+
+Configure the ``snapshot_clone_no_wait`` option:
+
+The ``snapshot_clone_no_wait`` config option is used to reject clone-creation
+requests when cloner threads (which can be configured by using the
+``max_concurrent_clones`` option above) are not available. It is enabled by
+default. This means that the value is set to ``True``, but it can be configured
+by using the following command:
+
+.. prompt:: bash #
+
+   ceph config set mgr mgr/volumes/snapshot_clone_no_wait <bool>
+
+The current value of ``snapshot_clone_no_wait`` can be fetched by running the
+following command:
+
+.. prompt:: bash #
+
+   ceph config get mgr mgr/volumes/snapshot_clone_no_wait
 
 .. _subvol-pinning:
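
A quick usage sketch of the workflow documented above (not part of the patch
itself). It assumes a volume named cephfs already exists and reuses the
illustrative names subvol1, snap1, and clone1 from the examples in the section:

  # Create a subvolume and take a snapshot of it.
  ceph fs subvolume create cephfs subvol1
  ceph fs subvolume snapshot create cephfs subvol1 snap1

  # Start an asynchronous clone of the snapshot, then poll its status
  # until "state" reports "complete".
  ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1
  ceph fs clone status cephfs clone1

  # Optionally raise the number of cloner threads, and check whether new
  # clone requests are rejected when no cloner thread is free.
  ceph config set mgr mgr/volumes/max_concurrent_clones 8
  ceph config get mgr mgr/volumes/snapshot_clone_no_wait

  # Once the clone is complete, the cloned subvolume can be used like any other.
  ceph fs subvolume getpath cephfs clone1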