From 39765c30962963de179a58f882f9827fc5d20fdd Mon Sep 17 00:00:00 2001
From: Zac Dover
Date: Sun, 12 May 2024 11:39:34 +1000
Subject: [PATCH] doc/cephfs: edit fs-volumes.rst (1 of x) followup

Include the suggestions for improving doc/cephfs/fs-volumes.rst made by
Anthony D'Atri here
https://github.com/ceph/ceph/pull/57415#discussion_r1597362110

Co-authored-by: Anthony D'Atri
Signed-off-by: Zac Dover
(cherry picked from commit cb700d804b4390fd9f55444dcfc04dfebac3a1bf)
---
 doc/cephfs/fs-volumes.rst | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/doc/cephfs/fs-volumes.rst b/doc/cephfs/fs-volumes.rst
index 6df2b3893558..552ec596f740 100644
--- a/doc/cephfs/fs-volumes.rst
+++ b/doc/cephfs/fs-volumes.rst
@@ -46,9 +46,9 @@ Create a volume by running the following command:
 
     ceph fs volume create <vol_name> [placement]
 
-This creates a CephFS file system and its data and metadata pools. It can also
-deploy MDS daemons for the filesystem using a ceph-mgr orchestrator module (for
-example Rook). See :doc:`/mgr/orchestrator`.
+This creates a CephFS file system and its data and metadata pools. This command
+can also deploy MDS daemons for the filesystem using a Ceph Manager orchestrator
+module (for example Rook). See :doc:`/mgr/orchestrator`.
 
 ``<vol_name>`` is the volume name (an arbitrary string). ``[placement]`` is an
 optional string that specifies the :ref:`orchestrator-cli-placement-spec` for
@@ -62,13 +62,13 @@ To remove a volume, run the following command:
 
     $ ceph fs volume rm <vol_name> [--yes-i-really-mean-it]
 
-This removes a file system and its data and metadata pools. It also tries to
-remove MDS daemons using the enabled ceph-mgr orchestrator module.
+This command removes a file system and its data and metadata pools. It also
+tries to remove MDS daemons using the enabled Ceph Manager orchestrator module.
 
-.. note:: After volume deletion, it is recommended to restart `ceph-mgr`
-   if a new file system is created on the same cluster and subvolume interface
-   is being used. Please see https://tracker.ceph.com/issues/49605#note-5
-   for more details.
+.. note:: After volume deletion, we recommend restarting `ceph-mgr` if a new
+   file system is created on the same cluster and the subvolume interface is
+   being used. See https://tracker.ceph.com/issues/49605#note-5 for more
+   details.
 
 List volumes by running the following command:
 
-- 
2.47.3