From: Sebastian Wagner
Date: Thu, 18 Feb 2021 16:36:11 +0000 (+0100)
Subject: doc/cephadm: group NFS sections into one chapter
X-Git-Tag: v17.1.0~2825^2~4
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=8d07e35b2f10f8b6b6bb039e6c066ff3d1056d86;p=ceph.git

doc/cephadm: group NFS sections into one chapter

Signed-off-by: Sebastian Wagner
---

diff --git a/doc/cephadm/index.rst b/doc/cephadm/index.rst
index d4d619d88b46..e5f7ae4811e5 100644
--- a/doc/cephadm/index.rst
+++ b/doc/cephadm/index.rst
@@ -34,6 +34,7 @@ versions of Ceph.
     mon
     osd
     rgw
+    nfs
     custom-container
     monitoring
     service-management
diff --git a/doc/cephadm/install.rst b/doc/cephadm/install.rst
index 701351332cc5..5c9a0720e551 100644
--- a/doc/cephadm/install.rst
+++ b/doc/cephadm/install.rst
@@ -256,32 +256,7 @@ MDS daemons.
 
 To use the *Ceph Object Gateway*, follow :ref:`cephadm-deploy-rgw`.
 
-
-.. _deploy-cephadm-nfs-ganesha:
-
-Deploying NFS ganesha
-=====================
-
-Cephadm deploys NFS Ganesha using a pre-defined RADOS *pool*
-and optional *namespace*
-
-To deploy a NFS Ganesha gateway, run the following command:
-
-.. prompt:: bash #
-
-    ceph orch apply nfs *<svc_id>* *<pool>* *<namespace>* --placement="*<num-daemons>* [*<host1>* ...]"
-
-For example, to deploy NFS with a service id of *foo*, that will use the RADOS
-pool *nfs-ganesha* and namespace *nfs-ns*:
-
-.. prompt:: bash #
-
-    ceph orch apply nfs foo nfs-ganesha nfs-ns
-
-.. note::
-    Create the *nfs-ganesha* pool first if it doesn't exist.
-
-See :ref:`orchestrator-cli-placement-spec` for details of the placement specification.
+To use *NFS*, follow :ref:`deploy-cephadm-nfs-ganesha`
 
 .. _cluster network: ../rados/configuration/network-config-ref#cluster-network
 
diff --git a/doc/cephadm/nfs.rst b/doc/cephadm/nfs.rst
new file mode 100644
index 000000000000..ee664363cfe4
--- /dev/null
+++ b/doc/cephadm/nfs.rst
@@ -0,0 +1,59 @@
+===========
+NFS Service
+===========
+
+.. _deploy-cephadm-nfs-ganesha:
+
+Deploying NFS ganesha
+=====================
+
+Cephadm deploys NFS Ganesha using a pre-defined RADOS *pool*
+and optional *namespace*
+
+To deploy a NFS Ganesha gateway, run the following command:
+
+.. prompt:: bash #
+
+    ceph orch apply nfs *<svc_id>* *<pool>* *<namespace>* --placement="*<num-daemons>* [*<host1>* ...]"
+
+For example, to deploy NFS with a service id of *foo*, that will use the RADOS
+pool *nfs-ganesha* and namespace *nfs-ns*:
+
+.. prompt:: bash #
+
+    ceph orch apply nfs foo nfs-ganesha nfs-ns
+
+.. note::
+    Create the *nfs-ganesha* pool first if it doesn't exist.
+
+See :ref:`orchestrator-cli-placement-spec` for details of the placement specification.
+
+Service Specification
+=====================
+
+Alternatively, an NFS service can also be applied using a YAML specification.
+
+A service of type ``nfs`` requires a pool name and may contain
+an optional namespace:
+
+.. code-block:: yaml
+
+    service_type: nfs
+    service_id: mynfs
+    placement:
+      hosts:
+        - host1
+        - host2
+    spec:
+      pool: mypool
+      namespace: mynamespace
+
+where ``pool`` is a RADOS pool where NFS client recovery data is stored
+and ``namespace`` is a RADOS namespace where NFS client recovery
+data is stored in the pool.
+
+The specification can then be applied using:
+
+.. prompt:: bash #
+
+    ceph orch apply -i nfs.yaml
diff --git a/doc/cephadm/service-management.rst b/doc/cephadm/service-management.rst
index 3584c2db70f9..10d0e91170aa 100644
--- a/doc/cephadm/service-management.rst
+++ b/doc/cephadm/service-management.rst
@@ -84,25 +84,6 @@ Each service type can have additional service specific properties.
 Service specifications of type ``mon``, ``mgr``, and the
 monitoring types do not require a ``service_id``.
 
-A service of type ``nfs`` requires a pool name and may contain
-an optional namespace:
-
-.. code-block:: yaml
-
-    service_type: nfs
-    service_id: mynfs
-    placement:
-      hosts:
-        - host1
-        - host2
-    spec:
-      pool: mypool
-      namespace: mynamespace
-
-where ``pool`` is a RADOS pool where NFS client recovery data is stored
-and ``namespace`` is a RADOS namespace where NFS client recovery
-data is stored in the pool.
-
 A service of type ``osd`` is described in :ref:`drivegroups`
 
 Many service specifications can be applied at once using
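
Note for reviewers (not part of the patch): a minimal sketch of how the relocated
instructions fit together in practice. The pool name nfs-ganesha, namespace nfs-ns,
service id foo, and local file name nfs.yaml are simply the example values used in
the documentation above, not required names.

    # Create the recovery pool first if it does not already exist.
    ceph osd pool create nfs-ganesha

    # Deploy using the CLI form documented in nfs.rst ...
    ceph orch apply nfs foo nfs-ganesha nfs-ns

    # ... or apply the YAML service specification saved locally as nfs.yaml.
    ceph orch apply -i nfs.yaml

    # List orchestrator services to confirm the nfs service was scheduled.
    ceph orch ls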