FS Volumes
----------
Create a volume using:

.. prompt:: bash #

   ceph fs volume create <vol_name> [<placement>]
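For example, assuming a hypothetical volume name ``cephfs``, the following creates a volume with the default placement:

.. prompt:: bash #

   ceph fs volume create cephfs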
For more details on placement specification refer to the :ref:`orchestrator-cli-service-spec`,
but keep in mind that specifying placement via a YAML file is not supported.
To remove a volume, run the following command:

.. prompt:: bash #

   ceph fs volume rm <vol_name> [--yes-i-really-mean-it]
This removes a file system and its data and metadata pools. It also tries to
remove MDS daemons using the enabled ceph-mgr orchestrator module.
List volumes using:

.. prompt:: bash #

   ceph fs volume ls
Rename a volume using:

.. prompt:: bash #

   ceph fs volume rename <vol_name> <new_vol_name> [--yes-i-really-mean-it]
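As an example, renaming a hypothetical volume ``cephfs`` to ``cephfs_new``, with the confirmation flag supplied:

.. prompt:: bash #

   ceph fs volume rename cephfs cephfs_new --yes-i-really-mean-it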
The auth IDs that were authorized for the old volume name need to be reauthorized for the new name;
ongoing operations of the clients using these IDs may be disrupted. Mirroring is
expected to be disabled on the volume.
To fetch the information of a CephFS volume, run:

.. prompt:: bash #

   ceph fs volume info vol_name [--human_readable]
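For example, to display the details of a hypothetical volume ``cephfs`` with sizes in human readable form:

.. prompt:: bash #

   ceph fs volume info cephfs --human_readable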
FS Subvolume Groups
-------------------

By default, a subvolume group is created with octal file mode '755', uid '0',
gid '0', and the data pool layout of its parent directory.
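Subvolume groups are created with the ``ceph fs subvolumegroup create`` command; a minimal sketch, using a hypothetical volume ``cephfs`` and group name ``csi``:

.. prompt:: bash #

   ceph fs subvolumegroup create cephfs csi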
Remove a subvolume group using:

.. prompt:: bash #

   ceph fs subvolumegroup rm <vol_name> <group_name> [--force]
Supplying the ``--force`` flag allows the command to succeed even if the subvolume group does not exist.
Fetch the absolute path of a subvolume group using:

.. prompt:: bash #

   ceph fs subvolumegroup getpath <vol_name> <group_name>
List subvolume groups using:

.. prompt:: bash #

   ceph fs subvolumegroup ls <vol_name>
.. note:: The subvolume group snapshot feature is no longer supported in mainline CephFS (existing group
   snapshots can still be listed and deleted).
Fetch the metadata of a subvolume group using:

.. prompt:: bash #

   ceph fs subvolumegroup info <vol_name> <group_name>

The output includes fields such as:
* ``created_at``: creation time of the subvolume group in the format "YYYY-MM-DD HH:MM:SS"
* ``data_pool``: data pool to which the subvolume group belongs
Check the presence of any subvolume group using:

.. prompt:: bash #

   ceph fs subvolumegroup exist <vol_name>
.. note:: This command checks for the presence of custom groups, not the presence of the default one.
   To validate the emptiness of the volume, a subvolume group existence check alone is not sufficient.
   Subvolume existence must also be checked, as there might be subvolumes in the default group.
Resize a subvolume group using:

.. prompt:: bash #

   ceph fs subvolumegroup resize <vol_name> <group_name> <new_size> [--no_shrink]
The subvolume group may be resized to an infinite size by passing ``inf`` or ``infinite``
as the ``new_size``.
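For instance, assuming a hypothetical volume ``cephfs`` and group ``csi``, the group could be resized to 10 GiB (expressed in bytes) or have its size limit removed entirely:

.. prompt:: bash #

   ceph fs subvolumegroup resize cephfs csi 10737418240 --no_shrink
   ceph fs subvolumegroup resize cephfs csi inf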
Remove a snapshot of a subvolume group using:

.. prompt:: bash #

   ceph fs subvolumegroup snapshot rm <vol_name> <group_name> <snap_name> [--force]
Supplying the ``--force`` flag allows the command to succeed when it would otherwise
fail due to the snapshot not existing.
List snapshots of a subvolume group using:

.. prompt:: bash #

   ceph fs subvolumegroup snapshot ls <vol_name> <group_name>
FS Subvolumes
-------------
Create a subvolume using:

.. prompt:: bash #

   ceph fs subvolume create <vol_name> <subvol_name> [--size <size_in_bytes>] [--group_name <subvol_group_name>] [--pool_layout <data_pool_name>] [--uid <uid>] [--gid <gid>] [--mode <octal_mode>] [--namespace-isolated]
By default, a subvolume is created within the default subvolume group with an octal file
mode '755', the uid of its subvolume group, the gid of its subvolume group, the data pool layout of
its parent directory, and no size limit.
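As an illustration, a subvolume with a 5 GiB size limit, mode '700', and an isolated RADOS namespace could be created in a hypothetical volume ``cephfs`` (the subvolume name ``sub0`` is arbitrary):

.. prompt:: bash #

   ceph fs subvolume create cephfs sub0 --size 5368709120 --mode 700 --namespace-isolated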
Remove a subvolume using:

.. prompt:: bash #

   ceph fs subvolume rm <vol_name> <subvol_name> [--group_name <subvol_group_name>] [--force] [--retain-snapshots]
.. note:: Retained snapshots can be used as a clone source to recreate the subvolume or to clone to a newer subvolume.
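For example, to remove the hypothetical subvolume ``sub0`` while keeping its snapshots available for later cloning:

.. prompt:: bash #

   ceph fs subvolume rm cephfs sub0 --retain-snapshots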
Resize a subvolume using:

.. prompt:: bash #

   ceph fs subvolume resize <vol_name> <subvol_name> <new_size> [--group_name <subvol_group_name>] [--no_shrink]
The subvolume can be resized to an unlimited (but sparse) logical size by passing ``inf`` or ``infinite`` as ``new_size``.
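For example, to remove the size limit of the hypothetical subvolume ``sub0``:

.. prompt:: bash #

   ceph fs subvolume resize cephfs sub0 inf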
Authorize cephx auth IDs with read or read-write access to fs subvolumes:

.. prompt:: bash #

   ceph fs subvolume authorize <vol_name> <sub_name> <auth_id> [--group_name=<group_name>] [--access_level=<access_level>]
The ``access_level`` option takes ``r`` or ``rw`` as a value.
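For example, to grant a hypothetical auth ID ``alice`` read-only access to the subvolume ``sub0``:

.. prompt:: bash #

   ceph fs subvolume authorize cephfs sub0 alice --access_level=r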
Deauthorize cephx auth IDs with read or read-write access to fs subvolumes:

.. prompt:: bash #

   ceph fs subvolume deauthorize <vol_name> <sub_name> <auth_id> [--group_name=<group_name>]
List cephx auth IDs authorized to access an fs subvolume:

.. prompt:: bash #

   ceph fs subvolume authorized_list <vol_name> <sub_name> [--group_name=<group_name>]
Evict fs clients based on the auth ID and the subvolume mounted:

.. prompt:: bash #

   ceph fs subvolume evict <vol_name> <sub_name> <auth_id> [--group_name=<group_name>]
Fetch the absolute path of a subvolume using:

.. prompt:: bash #

   ceph fs subvolume getpath <vol_name> <subvol_name> [--group_name <subvol_group_name>]
Fetch the information of a subvolume using:

.. prompt:: bash #

   ceph fs subvolume info <vol_name> <subvol_name> [--group_name <subvol_group_name>]

The ``state`` field of the output takes one of the following values:
* ``complete``: subvolume is ready for all operations
* ``snapshot-retained``: subvolume is removed but its snapshots are retained
List subvolumes using:

.. prompt:: bash #

   ceph fs subvolume ls <vol_name> [--group_name <subvol_group_name>]
.. note:: Subvolumes that have been removed but have retained snapshots are also listed.
Check the presence of any subvolume using:

.. prompt:: bash #

   ceph fs subvolume exist <vol_name> [--group_name <subvol_group_name>]

The output is one of the following:
* ``subvolume exists``: if any subvolume of the given group_name is present
* ``no subvolume exists``: if no subvolume of the given group_name is present
Set custom metadata on the subvolume as a key-value pair using:

.. prompt:: bash #

   ceph fs subvolume metadata set <vol_name> <subvol_name> <key_name> <value> [--group_name <subvol_group_name>]
.. note:: Custom metadata on a subvolume is not preserved when snapshotting the subvolume, and hence, is also not preserved when cloning the subvolume snapshot.
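For example, to tag the hypothetical subvolume ``sub0`` with an arbitrary ``owner`` key:

.. prompt:: bash #

   ceph fs subvolume metadata set cephfs sub0 owner alice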
Get custom metadata set on the subvolume using the metadata key:

.. prompt:: bash #

   ceph fs subvolume metadata get <vol_name> <subvol_name> <key_name> [--group_name <subvol_group_name>]
List custom metadata (key-value pairs) set on the subvolume using:

.. prompt:: bash #

   ceph fs subvolume metadata ls <vol_name> <subvol_name> [--group_name <subvol_group_name>]
Remove custom metadata set on the subvolume using the metadata key:

.. prompt:: bash #

   ceph fs subvolume metadata rm <vol_name> <subvol_name> <key_name> [--group_name <subvol_group_name>] [--force]
Using the ``--force`` flag allows the command to succeed when it would otherwise
fail because the metadata key does not exist.
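For example, to remove the hypothetical ``owner`` key without failing if it has already been removed:

.. prompt:: bash #

   ceph fs subvolume metadata rm cephfs sub0 owner --force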
Create a snapshot of a subvolume using:

.. prompt:: bash #

   ceph fs subvolume snapshot create <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
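For example, to snapshot the hypothetical subvolume ``sub0``:

.. prompt:: bash #

   ceph fs subvolume snapshot create cephfs sub0 snap0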
Remove a snapshot of a subvolume using:

.. prompt:: bash #

   ceph fs subvolume snapshot rm <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>] [--force]
.. note:: If the last snapshot within a snapshot-retained subvolume is removed, the subvolume is also removed.
List snapshots of a subvolume using:

.. prompt:: bash #

   ceph fs subvolume snapshot ls <vol_name> <subvol_name> [--group_name <subvol_group_name>]
Fetch the information of a snapshot using:

.. prompt:: bash #

   ceph fs subvolume snapshot info <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]

The output is in JSON format; among other fields, ``has_pending_clones`` reports ``yes`` if a clone of
the snapshot is pending or in progress, and ``no`` otherwise.
Set custom key-value metadata on the snapshot by running:

.. prompt:: bash #

   ceph fs subvolume snapshot metadata set <vol_name> <subvol_name> <snap_name> <key_name> <value> [--group_name <subvol_group_name>]
.. note:: Custom metadata on a snapshot is not preserved when snapshotting the subvolume, and hence is also not preserved when cloning the subvolume snapshot.
Get custom metadata set on the snapshot using the metadata key:

.. prompt:: bash #

   ceph fs subvolume snapshot metadata get <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>]
List custom metadata (key-value pairs) set on the snapshot using:

.. prompt:: bash #

   ceph fs subvolume snapshot metadata ls <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
Remove custom metadata set on the snapshot using the metadata key:

.. prompt:: bash #

   ceph fs subvolume snapshot metadata rm <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>] [--force]
Cloning Snapshots
-----------------

Subvolumes can be created by cloning subvolume snapshots. Protecting snapshots prior to cloning was a
prerequisite in earlier releases, and the commands to protect/unprotect snapshots were introduced for
this purpose. This prerequisite, and hence the commands to protect/unprotect, is being
deprecated and may be removed from a future release.
The commands being deprecated are:

.. prompt:: bash #

   ceph fs subvolume snapshot protect <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
   ceph fs subvolume snapshot unprotect <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
.. note:: Using the above commands will not result in an error, but they have no useful purpose.
.. note:: Use the ``subvolume info`` command to fetch subvolume metadata regarding supported ``features`` to help decide if protect/unprotect of snapshots is required, based on the availability of the ``snapshot-autoprotect`` feature.
To initiate a clone operation use:

.. prompt:: bash #

   ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name>
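For example, to clone the hypothetical snapshot ``snap0`` of subvolume ``sub0`` into a new subvolume ``clone0``:

.. prompt:: bash #

   ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0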
If a snapshot (source subvolume) is a part of a non-default group, the group name needs to be specified:

.. prompt:: bash #

   ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --group_name <subvol_group_name>
Cloned subvolumes can be a part of a different group than the source snapshot (by default, cloned subvolumes are created in the default group). To clone to a particular group use:

.. prompt:: bash #

   ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --target_group_name <subvol_group_name>
Similar to specifying a pool layout when creating a subvolume, a pool layout can be specified when creating a cloned subvolume. To create a cloned subvolume with a specific pool layout use:

.. prompt:: bash #

   ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <target_subvol_name> --pool_layout <pool_layout>
Configure the maximum number of concurrent clones. The default is 4:

.. prompt:: bash #

   ceph config set mgr mgr/volumes/max_concurrent_clones <value>
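For instance, to allow up to eight concurrent clone operations (an arbitrary value):

.. prompt:: bash #

   ceph config set mgr mgr/volumes/max_concurrent_clones 8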
To check the status of a clone operation use:

.. prompt:: bash #

   ceph fs clone status <vol_name> <clone_name> [--group_name <group_name>]
.. note:: Cloning synchronizes only directories, regular files and symbolic links. Inode timestamps (access and
modification times) are synchronized up to seconds granularity.
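For example, to check the progress of the hypothetical clone ``clone0`` created earlier:

.. prompt:: bash #

   ceph fs clone status cephfs clone0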
An ``in-progress`` or a ``pending`` clone operation may be canceled. To cancel a clone operation use the ``clone cancel`` command:

.. prompt:: bash #

   ceph fs clone cancel <vol_name> <clone_name> [--group_name <group_name>]
Pinning Subvolumes and Subvolume Groups
---------------------------------------
Subvolumes and subvolume groups may be automatically pinned to ranks according
to policies. This can distribute load across MDS ranks in predictable and
stable ways. Review :ref:`cephfs-pinning` and :ref:`cephfs-ephemeral-pinning`
for details on how pinning works.
Pinning is configured by:

.. prompt:: bash #

   ceph fs subvolumegroup pin <vol_name> <group_name> <pin_type> <pin_setting>
or for subvolumes:

.. prompt:: bash #

   ceph fs subvolume pin <vol_name> <subvol_name> <pin_type> <pin_setting>
The ``pin_type`` may be ``export``, ``distributed``, or ``random``. The ``pin_setting``
corresponds to the extended attribute "value" as in the pinning documentation
referenced above.
So, for example, setting a distributed pinning strategy on a subvolume group:

.. prompt:: bash #

   ceph fs subvolumegroup pin cephfilesystem-a csi distributed 1
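Similarly, a subvolume can be pinned; for example, an export pin that places a hypothetical subvolume ``sub0`` on MDS rank 0:

.. prompt:: bash #

   ceph fs subvolume pin cephfilesystem-a sub0 export 0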