helping to avoid conflicts and ensuring that each subvolume is associated
with the intended service or use case.
+Valid Earmarks
+~~~~~~~~~~~~~~~~~~~~
+
+- **For NFS:**
+
+  - The valid earmark format is the top-level scope: ``'nfs'``.
+
+- **For SMB:**
+
+  - The valid earmark formats are:
+
+    - The top-level scope: ``'smb'``.
+    - The top-level scope with an intra-module level scope:
+      ``'smb.cluster.{cluster_id}'``, where ``cluster_id`` is a short string
+      uniquely identifying the cluster.
+    - Example without intra-module scope: ``smb``
+    - Example with intra-module scope: ``smb.cluster.cluster_1``
+
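+For example, a subvolume can be tagged for a particular service with the
+``earmark`` subcommands (a sketch: whether the earmark value is passed as
+``--earmark``, as shown here, or positionally may vary by release)::
+
+    $ ceph fs subvolume earmark set cephfs subvol1 --earmark nfs
+    $ ceph fs subvolume earmark get cephfs subvol1      # prints the current earmark
+    $ ceph fs subvolume earmark set cephfs subvol2 --earmark smb.cluster.cluster_1
+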
+.. note:: If you are changing an earmark from one scope to another (e.g., from
+   nfs to smb or vice versa), be aware that user permissions and ACLs
+   associated with the previous scope might still apply. Ensure that any
+   necessary permissions are updated as needed to maintain proper access
+   control.
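+
+If you do change the scope, one possible sequence is to clear the old earmark,
+set the new one, and then review any CephX credentials that were created for
+the previous service. A sketch, using the same (assumed) earmark subcommands as
+above; ``client.nfs-client-1`` is a purely illustrative entity name::
+
+    $ ceph fs subvolume earmark rm cephfs subvol1
+    $ ceph fs subvolume earmark set cephfs subvol1 --earmark smb
+    $ ceph auth get client.nfs-client-1    # check caps granted for the old scope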
+
+
Removing a subvolume
~~~~~~~~~~~~~~~~~~~~
::
{
- "status": {
- "state": "in-progress",
- "source": {
- "volume": "cephfs",
- "subvolume": "subvol1",
- "snapshot": "snap1"
- }
+ "status": {
+ "state": "in-progress",
+ "source": {
+ "volume": "cephfs",
+ "subvolume": "subvol1",
+ "snapshot": "snap1"
+ },
+ "progress_report": {
+ "percentage cloned": "12.24%",
+ "amount cloned": "376M/3.0G",
+ "files cloned": "4/6"
}
+ }
}
+A progress report is also printed in the output when the clone is
+``in-progress``. Here, progress is reported only for the specific clone. The
+collective progress made by all ongoing clones is shown as a progress bar at
+the bottom of the output of the ``ceph status`` command::
+
+ progress:
+ 3 ongoing clones - average progress is 47.569% (10s)
+ [=============...............] (remaining: 11s)
+
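+The per-clone ``progress_report`` shown above can also be consumed from
+scripts, since ``ceph fs clone status`` emits JSON. For example (a sketch
+assuming ``jq`` is installed and a clone named ``clone1``)::
+
+    $ ceph fs clone status cephfs clone1 | jq -r '.status.progress_report."percentage cloned"'
+    12.24%
+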
+If the number of clone jobs exceeds the number of cloner threads, two progress
+bars are printed: one for ongoing clones (same as above) and another for all
+(ongoing and pending) clones::
+
+ progress:
+ 4 ongoing clones - average progress is 27.669% (15s)
+ [=======.....................] (remaining: 41s)
+ Total 5 clones - average progress is 41.667% (3s)
+ [===========.................] (remaining: 4s)
+
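+The number of cloner threads is itself configurable. A sketch, assuming the
+``mgr/volumes/max_concurrent_clones`` option (and its default of 4) applies to
+your release::
+
+    $ ceph config get mgr mgr/volumes/max_concurrent_clones
+    4
+    $ ceph config set mgr mgr/volumes/max_concurrent_clones 8
+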
.. note:: The ``failure`` section will be shown only if the clone's state is ``failed`` or ``cancelled``
Here is an example of a ``failed`` clone:
your user to just ``caps osd = "allow rw"`` and delete ``tag cephfs
data=....``
-Disabling Volumes Plugin
-========================
-In certain scenarios, volumes plugin might be needed to disabled to prevent compromise
-for rest of the Ceph cluster. For details see: :ref:`disabling-volumes-plugin`
+Disabling the Volumes Plugin
+============================
+In certain scenarios, the Volumes plugin may need to be disabled to prevent
+compromise of the rest of the Ceph cluster. For details, see
+:ref:`disabling-volumes-plugin`.
Reporting Issues
================