From aebf75c9dfa52be23c447b775698b954f842de0d Mon Sep 17 00:00:00 2001
From: Ramana Raja
Date: Fri, 13 Sep 2019 16:38:41 +0530
Subject: [PATCH] doc: add ceph fs volumes and subvolumes documentation

Fixes: https://tracker.ceph.com/issues/40689
Signed-off-by: Ramana Raja
---
 doc/cephfs/fs-volumes.rst | 155 ++++++++++++++++++++++++++++++++++++++
 doc/cephfs/index.rst      |   1 +
 2 files changed, 156 insertions(+)
 create mode 100644 doc/cephfs/fs-volumes.rst

diff --git a/doc/cephfs/fs-volumes.rst b/doc/cephfs/fs-volumes.rst
new file mode 100644
index 00000000000..6b17b949041
--- /dev/null
+++ b/doc/cephfs/fs-volumes.rst
@@ -0,0 +1,155 @@
+.. _fs-volumes-and-subvolumes:
+
+FS volumes and subvolumes
+=========================
+
+A single source of truth for CephFS exports is implemented in the volumes
+module of the :term:`Ceph Manager` daemon (ceph-mgr). The OpenStack shared
+file system service (manila_), the Ceph Container Storage Interface (CSI_),
+and storage administrators, among others, can use the common CLI provided by
+the ceph-mgr volumes module to manage CephFS exports.
+
+The ceph-mgr volumes module implements the following file system export
+abstractions:
+
+* FS volumes, an abstraction for CephFS file systems
+
+* FS subvolumes, an abstraction for independent CephFS directory trees
+
+* FS subvolume groups, an abstraction for a directory level higher than FS
+  subvolumes to effect policies (e.g., :doc:`/cephfs/file-layouts`) across a
+  set of subvolumes
+
+Some possible use-cases for the export abstractions:
+
+* FS subvolumes used as manila shares or CSI volumes
+
+* FS subvolume groups used as manila share groups
+
+Requirements
+------------
+
+* Nautilus (14.2.x) or a later version of Ceph
+
+* Cephx client user (see :doc:`/rados/operations/user-management`) with
+  the following minimum capabilities::
+
+    mon 'allow r'
+    mgr 'allow rw'
+
+
+FS Volumes
+----------
+
+Create a volume using::
+
+    $ ceph fs volume create <vol_name>
+
+This creates a CephFS file system and its data and metadata pools. It also
+tries to create MDSes for the file system using the enabled ceph-mgr
+orchestrator module (see :doc:`/mgr/orchestrator_cli`), e.g., rook.
+
+Remove a volume using::
+
+    $ ceph fs volume rm <vol_name>
+
+This removes a file system and its data and metadata pools. It also tries to
+remove MDSes using the enabled ceph-mgr orchestrator module.
+
+List volumes using::
+
+    $ ceph fs volume ls
+
+FS Subvolume groups
+-------------------
+
+Create a subvolume group using::
+
+    $ ceph fs subvolumegroup create <vol_name> <group_name> [--mode <octal_mode> --pool_layout <data_pool_name>]
+
+The command succeeds even if the subvolume group already exists.
+
+When creating a subvolume group you can specify its data pool layout (see
+:doc:`/cephfs/file-layouts`) and file mode in octal numerals. By default, the
+subvolume group is created with an octal file mode '755' and the data pool
+layout of its parent directory.
+
+
+Remove a subvolume group using::
+
+    $ ceph fs subvolumegroup rm <vol_name> <group_name> [--force]
+
+The removal of a subvolume group fails if it is not empty (e.g., contains
+subvolumes or snapshots) or is non-existent. Using the '--force' flag allows
+the command to succeed even if the subvolume group is non-existent.
+
+
+Fetch the absolute path of a subvolume group using::
+
+    $ ceph fs subvolumegroup getpath <vol_name> <group_name>
+
+Create a snapshot (see :doc:`/cephfs/experimental-features`) of a
+subvolume group using::
+
+    $ ceph fs subvolumegroup snapshot create <vol_name> <group_name> <snap_name>
+
+This implicitly snapshots all the subvolumes under the subvolume group.
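+
+For example, assuming an existing volume 'vol1' (all names here are only
+illustrative), a subvolume group can be created, located, and snapshotted as
+follows; any subvolumes present under 'group1' at that time are included in
+'snap1'::
+
+    $ ceph fs subvolumegroup create vol1 group1            # create the group on volume 'vol1'
+    $ ceph fs subvolumegroup getpath vol1 group1           # print the group's absolute path
+    $ ceph fs subvolumegroup snapshot create vol1 group1 snap1   # snapshot the whole group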
+
+Remove a snapshot of a subvolume group using::
+
+    $ ceph fs subvolumegroup snapshot rm <vol_name> <group_name> <snap_name> [--force]
+
+Using the '--force' flag allows the command to succeed even if the snapshot
+does not exist.
+
+
+FS Subvolumes
+-------------
+
+Create a subvolume using::
+
+    $ ceph fs subvolume create <vol_name> <subvol_name> [--group_name <subvol_group_name> --mode <octal_mode> --pool_layout <data_pool_name> --size <size_in_bytes>]
+
+
+The command succeeds even if the subvolume already exists.
+
+When creating a subvolume you can specify its subvolume group, data pool
+layout, file mode in octal numerals, and size in bytes. The size of the
+subvolume is specified by setting a quota on it (see :doc:`/cephfs/quota`).
+By default, a subvolume is created within the default subvolume group, with
+an octal file mode '755', the data pool layout of its parent directory, and
+no size limit.
+
+
+Remove a subvolume using::
+
+    $ ceph fs subvolume rm <vol_name> <subvol_name> [--group_name <subvol_group_name> --force]
+
+
+The command removes the subvolume and its contents. It does this in two steps.
+First, it moves the subvolume to a trash folder, and then asynchronously
+purges its contents.
+
+The removal of a subvolume fails if it has snapshots or is non-existent.
+Using the '--force' flag allows the command to succeed even if the subvolume
+is non-existent.
+
+
+Fetch the absolute path of a subvolume using::
+
+    $ ceph fs subvolume getpath <vol_name> <subvol_name> [--group_name <subvol_group_name>]
+
+
+Create a snapshot of a subvolume using::
+
+    $ ceph fs subvolume snapshot create <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]
+
+
+Remove a snapshot of a subvolume using::
+
+    $ ceph fs subvolume snapshot rm <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name> --force]
+
+Using the '--force' flag allows the command to succeed even if the snapshot
+does not exist.
+
+.. _manila: https://github.com/openstack/manila
+.. _CSI: https://github.com/ceph/ceph-csi
diff --git a/doc/cephfs/index.rst b/doc/cephfs/index.rst
index 7e70d492855..aed20343afa 100644
--- a/doc/cephfs/index.rst
+++ b/doc/cephfs/index.rst
@@ -120,6 +120,7 @@ authentication keyring.
     Scrub
     LazyIO
     Distributed Metadata Cache
+    FS volume and subvolumes <fs-volumes>
 
 .. toctree::
    :hidden:
-- 
2.39.5