+.. _deploy-cephadm-nfs-ganesha:
+
===========
NFS Service
===========
.. note:: Only the NFSv4 protocol is supported.
-.. _deploy-cephadm-nfs-ganesha:
+The simplest way to manage NFS is via the ``ceph nfs cluster ...``
+commands; see :ref:`cephfs-nfs`. This document covers how to manage the
+cephadm services directly, which should only be necessary for unusual NFS
+configurations.
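+
+For example, a small NFS cluster can usually be created with a single
+command along these lines (the cluster name and placement are
+illustrative, and exact arguments vary somewhat between releases):
+
+.. prompt:: bash #
+
+   ceph nfs cluster create mynfs "host1,host2"
+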
Deploying NFS ganesha
=====================
-Cephadm deploys NFS Ganesha using a pre-defined RADOS *pool*
-and optional *namespace*.
+Cephadm deploys an NFS Ganesha daemon (or a set of daemons). The
+configuration for NFS is stored in the ``nfs-ganesha`` pool, and exports are
+managed via the ``ceph nfs export ...`` commands and via the dashboard.
To deploy an NFS Ganesha gateway, run the following command:
.. prompt:: bash #
ceph orch apply -i nfs.yaml
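+
+Here ``nfs.yaml`` contains an *nfs* service specification. A minimal example
+might look something like this (the service id, hosts, and port are
+illustrative):
+
+.. code-block:: yaml
+
+    service_type: nfs
+    service_id: mynfs
+    placement:
+      hosts:
+        - host1
+        - host2
+    spec:
+      # a non-default port, so it does not conflict with an ingress service
+      port: 12345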
+
+
+High-availability NFS
+=====================
+
+Deploying an *ingress* service for an existing *nfs* service will provide:
+
+* a stable, virtual IP that can be used to access the NFS server
+* fail-over between hosts if there is a host failure
+* load distribution across multiple NFS gateways (although this is rarely necessary)
+
+Ingress for NFS can be deployed for an existing NFS service
+(``nfs.mynfs`` in this example) with the following specification:
+
+.. code-block:: yaml
+
+ service_type: ingress
+ service_id: nfs.mynfs
+ placement:
+ count: 2
+ spec:
+ backend_service: nfs.mynfs
+ frontend_port: 2049
+ monitor_port: 9000
+ virtual_ip: 10.0.0.123/24
+
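+The spec can then be applied in the usual way (the file name here is
+illustrative):
+
+.. prompt:: bash #
+
+   ceph orch apply -i nfs-ingress.yaml
+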
+A few notes:
+
+ * The *virtual_ip* must include a CIDR prefix length, as in the
+ example above. The virtual IP will normally be configured on the
+ first identified network interface that has an existing IP in the
+ same subnet. You can also specify a *virtual_interface_networks*
+ property to match against IPs in other networks; see
+ :ref:`ingress-virtual-ip` for more information.
+ * The *monitor_port* is used to access the haproxy load status
+ page. The user is ``admin`` by default, but can be modified
+ via an *admin* property in the spec. If a password is not
+ specified via a *password* property in the spec, the auto-generated password
+ can be found with:
+
+ .. prompt:: bash #
+
+ ceph config-key get mgr/cephadm/ingress.*{svc_id}*/monitor_password
+
+ For example:
+
+ .. prompt:: bash #
+
+ ceph config-key get mgr/cephadm/ingress.nfs.mynfs/monitor_password
+
+ * The backend service (``nfs.mynfs`` in this example) should include
+ a *port* property that is not 2049 to avoid conflicting with the
+ ingress service, which could be placed on the same host(s).
The active haproxy acts as a load balancer, distributing all RGW requests
among the available RGW daemons.
-**Prerequisites:**
+Prerequisites
+-------------
* An existing RGW service, without SSL. (If you want SSL service, the certificate
should be configured on the ingress service, not the RGW service.)
-**Deploy of the high availability service for RGW**
+Deploying
+---------
Use the command::
ceph orch apply -i <ingress_spec_file>
-**Service specification file:**
+Service specification
+---------------------
It is a yaml format file with the following properties:
The ``ssl_cert`` property holds the SSL certificate, if SSL is to be enabled.
It must contain both the certificate and private key blocks in .pem format.
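+
+For example, an ingress specification for an RGW backend might look
+something like this (the service names, addresses, and ports are
+illustrative, and the certificate content is elided):
+
+.. code-block:: yaml
+
+    service_type: ingress
+    service_id: rgw.myrgw
+    placement:
+      count: 2
+    spec:
+      backend_service: rgw.myrgw
+      virtual_ip: 192.168.0.80/24
+      frontend_port: 8080
+      monitor_port: 1967
+      ssl_cert: |
+        -----BEGIN CERTIFICATE-----
+        ...
+        -----END CERTIFICATE-----
+        -----BEGIN PRIVATE KEY-----
+        ...
+        -----END PRIVATE KEY-----
+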
-**Selecting ethernet interfaces for the virtual IP:**
+.. _ingress-virtual-ip:
+
+Selecting ethernet interfaces for the virtual IP
+------------------------------------------------
You cannot simply provide the name of the network interface on which
to configure the virtual IP because interface names tend to vary across
hosts (and reboots). Instead, cephadm selects the interface based on IP
addresses that are already configured on the host. If the interface you
want to use has no existing IP in a suitable subnet, you can configure a
"dummy" IP address in an unroutable network on that interface
and reference that dummy network in the networks list (see above).
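+
+For example (a sketch; the service names and addresses are illustrative), a
+spec that places the virtual IP on whichever interface already has an IP in
+the internal 10.10.0.0/16 network might include:
+
+.. code-block:: yaml
+
+    service_type: ingress
+    service_id: rgw.myrgw
+    spec:
+      backend_service: rgw.myrgw
+      virtual_ip: 192.168.0.80/24
+      frontend_port: 8080
+      monitor_port: 1967
+      virtual_interface_networks:
+        - 10.10.0.0/16
+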
-**Useful hints for ingress:**
+Useful hints for ingress
+------------------------
-* Good to have at least 3 RGW daemons
-* Use at least 3 hosts for the ingress
+* It is good to have at least 3 RGW daemons.
+* We recommend at least 3 hosts for the ingress service.