- src/pybind/mgr/dashboard/tests/test_ganesha.py
- qa/tasks/cephfs/test_nfs.py
- qa/tasks/mgr/dashboard/test_ganesha.py
- - doc/cephfs/fs-nfs-exports.rst
+ - doc/mgr/nfs.rst
- doc/cephfs/nfs.rst
- doc/cephadm/nfs.rst
- doc/radosgw/nfs.rst
.. note:: Only the NFSv4 protocol is supported.
The simplest way to manage NFS is via the ``ceph nfs cluster ...``
-commands; see :ref:`cephfs-nfs`. This document covers how to manage the
+commands; see :ref:`mgr-nfs`. This document covers how to manage the
cephadm services directly, which should only be necessary for unusual NFS
configurations.
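For context on the ``ceph nfs cluster ...`` workflow the hunk above points to, here is a minimal sketch assuming the ``ceph nfs`` syntax current around the Pacific release; argument names and order have changed between releases, so check ``ceph nfs cluster create -h`` on your version. The cluster name ``mynfs``, file system name ``myfs``, and pseudo path ``/cephfs`` are placeholders, not names from this change::

    # Deploy an NFS-Ganesha cluster named "mynfs" via the orchestrator
    ceph nfs cluster create mynfs

    # List clusters and show where the daemons were placed
    ceph nfs cluster ls
    ceph nfs cluster info mynfs

    # Export the CephFS file system "myfs" under the pseudo path /cephfs
    ceph nfs export create cephfs myfs mynfs /cephfs

    # List the exports of the cluster
    ceph nfs export ls mynfs

These commands are handled by the mgr ``nfs`` module, which is why the documentation and the ``cephfs-nfs`` label move under ``doc/mgr`` in this change.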
MDS Configuration Settings <mds-config-ref>
Manual: ceph-mds <../../man/8/ceph-mds>
Export over NFS <nfs>
- Export over NFS with volume nfs interface <fs-nfs-exports>
Application best practices <app-best-practices>
FS volume and subvolumes <fs-volumes>
CephFS Quotas <quota>
Orchestrator module <orchestrator>
Rook module <rook>
MDS Autoscaler module <mds_autoscaler>
+ NFS module <nfs>
-.. _cephfs-nfs:
+.. _mgr-nfs:
=======================
* First class NFS gateway support in Ceph is here! It's now possible to create
scale-out ("active-active") NFS gateway clusters that export CephFS using
a few commands. The gateways are deployed via cephadm (or Rook, in the future).
- For more information, see :ref:`cephfs-nfs`.
+ For more information, see :ref:`mgr-nfs`.
* Multiple active MDS file system scrub is now stable. It is no longer necessary
to set ``max_mds`` to 1 and wait for non-zero ranks to stop. Scrub commands