From: Ville Ojamo <14869000+bluikko@users.noreply.github.com>
Date: Fri, 23 May 2025 09:48:07 +0000 (+0700)
Subject: doc/radosgw: Fix capitalization, tab use, punctuation in two files
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=refs%2Fpull%2F63450%2Fhead;p=ceph.git

doc/radosgw: Fix capitalization, tab use, punctuation in two files

Use title case consistently in section titles.
Capitalize first letter for Ceph, Unix, Luminous.
Capitalize RGW and NFS-Ganesha consistently.
Remove a colon from end of a section title in nfs.rst.
Add full stops at end of two sentences in sync-modules.rst.
Change tabs into four spaces in nfs.rst.

Also use comments more sensibly in logging example in nfs.rst:
- Indent the comments consistently, fixing a leading space in the
  beginning of the rendered preformatted block.
- Also comment out the closing brace.

Signed-off-by: Ville Ojamo <14869000+bluikko@users.noreply.github.com>
---

diff --git a/doc/radosgw/nfs.rst b/doc/radosgw/nfs.rst
index 373765e1005d..cb7dbe53f063 100644
--- a/doc/radosgw/nfs.rst
+++ b/doc/radosgw/nfs.rst
@@ -13,7 +13,7 @@ protocols (S3 and Swift). In particular, the Ceph Object Gateway can now
 be configured to provide file-based access when embedded in the NFS-Ganesha
 NFS server.

-The simplest and preferred way of managing nfs-ganesha clusters and rgw exports
+The simplest and preferred way of managing NFS-Ganesha clusters and RGW exports
 is using ``ceph nfs ...`` commands. See :doc:`/mgr/nfs` for more details.

 librgw
@@ -34,7 +34,7 @@ Namespace Conventions
 =====================

 The implementation conforms to Amazon Web Services (AWS) hierarchical
-namespace conventions which map UNIX-style path names onto S3 buckets
+namespace conventions which map Unix-style path names onto S3 buckets
 and objects.

 The top level of the attached namespace consists of S3 buckets,
@@ -103,7 +103,7 @@ following characteristics:
 * additional RGW authentication types such as Keystone are not
   currently supported

-Manually configuring an NFS-Ganesha Instance
+Manually Configuring an NFS-Ganesha Instance
 ============================================

 Each NFS RGW instance is an NFS-Ganesha server instance *embedding*
@@ -191,8 +191,8 @@ variables in the RGW config section::

 ``ceph_conf`` gives a path to a non-default ceph.conf file to use

-Other useful NFS-Ganesha configuration:
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Other Useful NFS-Ganesha Configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Any EXPORT block which should support NFSv3 should include version 3
 in the NFS_Protocols setting. Additionally, NFSv3 is the last major
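
(For context, not part of the patch: a minimal sketch of what the doc text
above describes, an EXPORT block that also enables NFSv3 through the
NFS_Protocols setting. The export ID, paths, user and credentials are
illustrative placeholders.)

    EXPORT
    {
        Export_ID = 1;
        Path = "/";
        Pseudo = "/";
        Access_Type = RW;
        # include version 3 so NFSv3 clients can also mount the export
        NFS_Protocols = 3,4;
        Transport_Protocols = TCP;

        FSAL {
            # the RGW FSAL embedded via librgw
            Name = RGW;
            User_Id = "nfsuser";
            Access_Key_Id = "<access-key>";
            Secret_Access_Key = "<secret-key>";
        }
    }
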
@@ -239,45 +239,45 @@ Example::

     LOG {
-	Components {
-		MEMLEAKS = FATAL;
-		FSAL = FATAL;
-		NFSPROTO = FATAL;
-		NFS_V4 = FATAL;
-		EXPORT = FATAL;
-		FILEHANDLE = FATAL;
-		DISPATCH = FATAL;
-		CACHE_INODE = FATAL;
-		CACHE_INODE_LRU = FATAL;
-		HASHTABLE = FATAL;
-		HASHTABLE_CACHE = FATAL;
-		DUPREQ = FATAL;
-		INIT = DEBUG;
-		MAIN = DEBUG;
-		IDMAPPER = FATAL;
-		NFS_READDIR = FATAL;
-		NFS_V4_LOCK = FATAL;
-		CONFIG = FATAL;
-		CLIENTID = FATAL;
-		SESSIONS = FATAL;
-		PNFS = FATAL;
-		RW_LOCK = FATAL;
-		NLM = FATAL;
-		RPC = FATAL;
-		NFS_CB = FATAL;
-		THREAD = FATAL;
-		NFS_V4_ACL = FATAL;
-		STATE = FATAL;
-		FSAL_UP = FATAL;
-		DBUS = FATAL;
-	}
- # optional: redirect log output
- # Facility {
- #	name = FILE;
- #	destination = "/tmp/ganesha-rgw.log";
- #	enable = active;
-	}
-    }
+        Components {
+            MEMLEAKS = FATAL;
+            FSAL = FATAL;
+            NFSPROTO = FATAL;
+            NFS_V4 = FATAL;
+            EXPORT = FATAL;
+            FILEHANDLE = FATAL;
+            DISPATCH = FATAL;
+            CACHE_INODE = FATAL;
+            CACHE_INODE_LRU = FATAL;
+            HASHTABLE = FATAL;
+            HASHTABLE_CACHE = FATAL;
+            DUPREQ = FATAL;
+            INIT = DEBUG;
+            MAIN = DEBUG;
+            IDMAPPER = FATAL;
+            NFS_READDIR = FATAL;
+            NFS_V4_LOCK = FATAL;
+            CONFIG = FATAL;
+            CLIENTID = FATAL;
+            SESSIONS = FATAL;
+            PNFS = FATAL;
+            RW_LOCK = FATAL;
+            NLM = FATAL;
+            RPC = FATAL;
+            NFS_CB = FATAL;
+            THREAD = FATAL;
+            NFS_V4_ACL = FATAL;
+            STATE = FATAL;
+            FSAL_UP = FATAL;
+            DBUS = FATAL;
+        }
+        # optional: redirect log output
+        # Facility {
+        #     name = FILE;
+        #     destination = "/tmp/ganesha-rgw.log";
+        #     enable = active;
+        # }
+    }

 Running Multiple NFS Gateways
 =============================

@@ -315,7 +315,7 @@ if a Swift container name contains underscores, it is not a valid S3
 bucket name and will be rejected unless ``rgw_relaxed_s3_bucket_names``
 is set to true.

-Configuring NFSv4 clients
+Configuring NFSv4 Clients
 =========================

 To access the namespace, mount the configured NFS-Ganesha export(s)
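
(For context, not part of the patch: mounting such an export from a Linux
client typically looks like the sketch below; the hostname and mount point
are placeholders.)

    mount -t nfs -o nfsvers=4.1,proto=tcp ganesha.example.com:/ /mnt/rgw
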
diff --git a/doc/radosgw/sync-modules.rst b/doc/radosgw/sync-modules.rst
index 53f16c70cb3a..2dc8330bfb6b 100644
--- a/doc/radosgw/sync-modules.rst
+++ b/doc/radosgw/sync-modules.rst
@@ -9,7 +9,7 @@ create multiple zones and mirror data and metadata between them. ``Sync Modules``
 are built atop of the multisite framework that allows for forwarding data and
 metadata to a different external tier. A sync module allows for a set of actions
 to be performed whenever a change in data occurs (metadata ops like bucket or
-user creation etc. are also regarded as changes in data). As the rgw multisite
+user creation etc. are also regarded as changes in data). As the RGW multisite
 changes are eventually consistent at remote sites, changes are propagated
 asynchronously. This would allow for unlocking use cases such as backing up the
 object storage to an external cloud cluster or a custom backup solution using
@@ -17,12 +17,12 @@ tape drives, indexing metadata in ElasticSearch etc.

 A sync module configuration is local to a zone. The sync module determines
 whether the zone exports data or can only consume data that was modified in
-another zone. As of luminous the supported sync plugins are `elasticsearch`_,
+another zone. As of Luminous the supported sync plugins are `elasticsearch`_,
 ``rgw``, which is the default sync plugin that synchronizes data between the
 zones and ``log`` which is a trivial sync plugin that logs the metadata
 operation that happens in the remote zones. The following docs are written
 with the example of a zone using `elasticsearch sync module`_, the process would be similar
-for configuring any sync plugin
+for configuring any sync plugin.

 .. toctree::
    :maxdepth: 1
@@ -40,7 +40,7 @@ Requirements and Assumptions

 Let us assume a simple multisite configuration as described in the
 :ref:`multisite` docs, of 2 zones ``us-east`` and ``us-west``, let's add a
 third zone ``us-east-es`` which is a zone that only processes metadata from the other
-sites. This zone can be in the same or a different ceph cluster as ``us-east``.
+sites. This zone can be in the same or a different Ceph cluster as ``us-east``.
 This zone would only consume metadata from other zones and RGWs in this zone
 will not serve any end user requests directly.
@@ -71,7 +71,7 @@ For example in the ``elasticsearch`` sync module

     --tier-config=endpoint=http://localhost:9200,num_shards=10,num_replicas=1

-For the various supported tier-config options refer to the `elasticsearch sync module`_ docs
+For the various supported tier-config options refer to the `elasticsearch sync module`_ docs.

 Finally update the period
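
(For context, not part of the patch: the tier configuration and the period
update that the doc text refers to are typically applied with commands along
these lines. The zone name follows the ``us-east-es`` example above and the
endpoint is illustrative.)

    radosgw-admin zone modify --rgw-zone=us-east-es \
        --tier-config=endpoint=http://localhost:9200,num_shards=10,num_replicas=1
    radosgw-admin period update --commit
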