In particular, the Ceph Object Gateway can now be configured to
provide file-based access when embedded in the NFS-Ganesha NFS server.
-The simplest and preferred way of managing nfs-ganesha clusters and rgw exports
+The simplest and preferred way of managing NFS-Ganesha clusters and RGW exports
is using ``ceph nfs ...`` commands. See :doc:`/mgr/nfs` for more details.
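For example, a minimal sketch (the cluster ID, pseudo path, and bucket name
below are placeholders, and exact options may vary by release)::

    # create an NFS-Ganesha cluster managed by the orchestrator
    ceph nfs cluster create mynfs

    # export an existing RGW bucket under an NFS pseudo path
    ceph nfs export create rgw --cluster-id mynfs --pseudo-path /mybucket --bucket mybucket
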
librgw
=====================
The implementation conforms to Amazon Web Services (AWS) hierarchical
-namespace conventions which map UNIX-style path names onto S3 buckets
+namespace conventions which map Unix-style path names onto S3 buckets
and objects.
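As an illustration (the names here are hypothetical), a path seen over NFS
decomposes into a bucket and an object key::

    /mybucket/photos/2023/beach.jpg  ->  bucket "mybucket", key "photos/2023/beach.jpg"
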
The top level of the attached namespace consists of S3 buckets,
* additional RGW authentication types such as Keystone are not currently supported
-Manually configuring an NFS-Ganesha Instance
+Manually Configuring an NFS-Ganesha Instance
============================================
Each NFS RGW instance is an NFS-Ganesha server instance *embedding*
``ceph_conf`` gives a path to a non-default ceph.conf file to use
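A minimal sketch of a ganesha.conf using the RGW FSAL might look like the
following; the user, keys, instance ``name``, and paths are placeholders to
adapt to your deployment::

    EXPORT
    {
        Export_ID = 1;
        Path = "/";
        Pseudo = "/";
        Access_Type = RW;
        SecType = "sys";
        NFS_Protocols = 3,4;         # list 3 only if NFSv3 access is required
        Transport_Protocols = TCP;

        FSAL {
            Name = RGW;
            User_Id = "nfstester";
            Access_Key_Id = "<access key>";
            Secret_Access_Key = "<secret key>";
        }
    }

    RGW {
        ceph_conf = "/etc/ceph/ceph.conf";
        name = "client.rgw.myhost";
        cluster = "ceph";
    }
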
-Other useful NFS-Ganesha configuration:
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Other Useful NFS-Ganesha Configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Any EXPORT block which should support NFSv3 should include version 3
in the NFS_Protocols setting. Additionally, NFSv3 is the last major
LOG {
- Components {
- MEMLEAKS = FATAL;
- FSAL = FATAL;
- NFSPROTO = FATAL;
- NFS_V4 = FATAL;
- EXPORT = FATAL;
- FILEHANDLE = FATAL;
- DISPATCH = FATAL;
- CACHE_INODE = FATAL;
- CACHE_INODE_LRU = FATAL;
- HASHTABLE = FATAL;
- HASHTABLE_CACHE = FATAL;
- DUPREQ = FATAL;
- INIT = DEBUG;
- MAIN = DEBUG;
- IDMAPPER = FATAL;
- NFS_READDIR = FATAL;
- NFS_V4_LOCK = FATAL;
- CONFIG = FATAL;
- CLIENTID = FATAL;
- SESSIONS = FATAL;
- PNFS = FATAL;
- RW_LOCK = FATAL;
- NLM = FATAL;
- RPC = FATAL;
- NFS_CB = FATAL;
- THREAD = FATAL;
- NFS_V4_ACL = FATAL;
- STATE = FATAL;
- FSAL_UP = FATAL;
- DBUS = FATAL;
- }
- # optional: redirect log output
- # Facility {
- # name = FILE;
- # destination = "/tmp/ganesha-rgw.log";
- # enable = active;
- }
- }
+ Components {
+ MEMLEAKS = FATAL;
+ FSAL = FATAL;
+ NFSPROTO = FATAL;
+ NFS_V4 = FATAL;
+ EXPORT = FATAL;
+ FILEHANDLE = FATAL;
+ DISPATCH = FATAL;
+ CACHE_INODE = FATAL;
+ CACHE_INODE_LRU = FATAL;
+ HASHTABLE = FATAL;
+ HASHTABLE_CACHE = FATAL;
+ DUPREQ = FATAL;
+ INIT = DEBUG;
+ MAIN = DEBUG;
+ IDMAPPER = FATAL;
+ NFS_READDIR = FATAL;
+ NFS_V4_LOCK = FATAL;
+ CONFIG = FATAL;
+ CLIENTID = FATAL;
+ SESSIONS = FATAL;
+ PNFS = FATAL;
+ RW_LOCK = FATAL;
+ NLM = FATAL;
+ RPC = FATAL;
+ NFS_CB = FATAL;
+ THREAD = FATAL;
+ NFS_V4_ACL = FATAL;
+ STATE = FATAL;
+ FSAL_UP = FATAL;
+ DBUS = FATAL;
+ }
+ # optional: redirect log output
+ # Facility {
+ # name = FILE;
+ # destination = "/tmp/ganesha-rgw.log";
+ # enable = active;
+ # }
+ }
Running Multiple NFS Gateways
=============================
bucket name and will be rejected unless ``rgw_relaxed_s3_bucket_names``
is set to true.
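If relaxed bucket naming is desired, the option can be enabled on the RGW
instance backing the exports, for example in ceph.conf (the client section
name below is illustrative)::

    [client.rgw.myhost]
        rgw_relaxed_s3_bucket_names = true
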
-Configuring NFSv4 clients
+Configuring NFSv4 Clients
=========================
To access the namespace, mount the configured NFS-Ganesha export(s)
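For example (the host name and mount point are placeholders; NFSv4.1 is
assumed here, and additional mount options may be appropriate for your
environment)::

    mount -t nfs -o nfsvers=4.1,proto=tcp ganesha.example.com:/ /mnt/rgw
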
are built atop the multisite framework, which allows for forwarding data and
metadata to a different external tier. A sync module allows for a set of actions
to be performed whenever a change in data occurs (metadata operations such as bucket or
-user creation etc. are also regarded as changes in data). As the rgw multisite
+user creation are also regarded as changes in data). Because RGW multisite
changes are eventually consistent at remote sites, changes are propagated
asynchronously. This enables use cases such as backing up the
object storage to an external cloud cluster or a custom backup solution using
A sync module configuration is local to a zone. The sync module determines
whether the zone exports data or can only consume data that was modified in
-another zone. As of luminous the supported sync plugins are `elasticsearch`_,
+another zone. As of Luminous, the supported sync plugins are `elasticsearch`_;
``rgw``, which is the default sync plugin that synchronizes data between the
zones; and ``log``, which is a trivial sync plugin that logs the metadata
operations that happen in the remote zones. The following docs are written with
the example of a zone using the `elasticsearch sync module`_; the process would be similar
-for configuring any sync plugin
+for configuring any sync plugin.
.. toctree::
:maxdepth: 1
Let us assume a simple multisite configuration, as described in the
:ref:`multisite` docs, with two zones ``us-east`` and ``us-west``. Let's add a
third zone, ``us-east-es``, which only processes metadata from the other
-sites. This zone can be in the same or a different ceph cluster as ``us-east``.
+sites. This zone can be in the same or a different Ceph cluster as ``us-east``.
This zone will only consume metadata from other zones, and RGWs in this zone
will not serve any end-user requests directly.
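For example, such a zone could be added as follows (the zonegroup name, keys,
and endpoint are placeholders)::

    radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-es \
        --access-key={system-key} --secret={secret} --endpoints=http://rgw-es:80

The zone is then switched to the desired sync plugin with ``radosgw-admin zone
modify --tier-type=elasticsearch`` and a ``--tier-config`` such as the one
shown below.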
--tier-config=endpoint=http://localhost:9200,num_shards=10,num_replicas=1
-For the various supported tier-config options refer to the `elasticsearch sync module`_ docs
+For the various supported tier-config options refer to the `elasticsearch sync module`_ docs.
Finally, update the period.
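Committing the period makes the new zone configuration take effect, e.g.::

    radosgw-admin period update --commit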