From: Ville Ojamo <14869000+bluikko@users.noreply.github.com> Date: Fri, 13 Mar 2026 08:48:01 +0000 (+0700) Subject: doc/radosgw: Fix spelling errors X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=81ba57a3c600085b08b4518eadcb3d874015fde0;p=ceph.git doc/radosgw: Fix spelling errors Signed-off-by: Ville Ojamo --- diff --git a/doc/radosgw/STS.rst b/doc/radosgw/STS.rst index bbaa1cfd5e0..916cb905d6a 100644 --- a/doc/radosgw/STS.rst +++ b/doc/radosgw/STS.rst @@ -40,7 +40,7 @@ The following STS REST APIs have been implemented in Ceph Object Gateway: trust policy of the role being assumed requires MFA. #. AssumeRoleWithWebIdentity: Returns a set of temporary credentials for users that - have been authenticated by a web/mobile app by an OpenID Connect /OAuth2.0 Identity Provider. + have been authenticated by a web/mobile app by an OpenID Connect/OAuth 2.0 Identity Provider. Currently Keycloak has been tested and integrated with RGW. Parameters: @@ -55,19 +55,19 @@ The following STS REST APIs have been implemented in Ceph Object Gateway: Its default value is 3600. **ProviderId** (String/ Optional): Fully qualified host component of the domain name - of the IDP. Valid only for OAuth2.0 tokens (not for OpenID Connect tokens). + of the IDP. Valid only for OAuth 2.0 tokens (not for OpenID Connect tokens). - **WebIdentityToken** (String/ Required): The OpenID Connect/ OAuth2.0 token, which the + **WebIdentityToken** (String/ Required): The OpenID Connect/OAuth 2.0 token, which the application gets in return after authenticating its user with an IDP. #. GetCallerIdentity: Returns details about the IAM user or role whose credentials are used to call the operation. Response: - **Account** (The account ID that owns or contains the calling entity. + **Account** The account ID that owns or contains the calling entity. **Arn** The ARN associated with the calling entity. - **UserId** The unique identifier of the calling entity(user or assumed role). 
+ **UserId** The unique identifier of the calling entity (user or assumed role). .. note:: No permissions are required to perform GetCallerIdentity. @@ -175,7 +175,7 @@ Examples resp = s3client.list_buckets() #. The following is an example of AssumeRoleWithWebIdentity API call, where an external app that has users authenticated with - an OpenID Connect/ OAuth2 IDP (Keycloak in this example), assumes a role to get back temporary credentials and access S3 resources + an OpenID Connect/OAuth 2.0 IDP (Keycloak in this example), assumes a role to get back temporary credentials and access S3 resources according to permission policy of the role. .. code-block:: python @@ -190,7 +190,7 @@ Examples ) oidc_response = iam_client.create_open_id_connect_provider( - Url=, ClientIDList=[ ], @@ -221,7 +221,7 @@ Examples region_name='', ) - response = client.assume_role_with_web_identity( + response = sts_client.assume_role_with_web_identity( RoleArn=role_response['Role']['Arn'], RoleSessionName='Bob', DurationSeconds=3600, diff --git a/doc/radosgw/STSLite.rst b/doc/radosgw/STSLite.rst index 437be717532..ca59a12024a 100644 --- a/doc/radosgw/STSLite.rst +++ b/doc/radosgw/STSLite.rst @@ -29,7 +29,7 @@ credentials will have the same permission as that of the AWS credentials. Parameters: **DurationSeconds** (Integer/ Optional): The duration in seconds for which the credentials should remain valid. Its default value is 3600. Its default max - value is 43200 which is can be configured using rgw sts max session duration. + value is 43200 which can be configured using rgw sts max session duration. **SerialNumber** (String/ Optional): The Id number of the MFA device associated with the user making the GetSessionToken call. 
@@ -73,7 +73,7 @@ configurable options will be:: rgw keystone api version = {keystone api version} rgw keystone implicit tenants = {true for private tenant for each new user} rgw keystone admin password = {keystone service tenant user name} - rgw keystone admin user = keystone service tenant user password} + rgw keystone admin user = {keystone service tenant user password} rgw keystone accepted roles = {accepted user roles} rgw keystone token cache size = {number of tokens to cache} rgw s3 auth use keystone = true @@ -95,7 +95,7 @@ The complete set of configurables to use STS Lite with LDAP are:: rgw_ldap_dnattr = {attribute being used in the constructed search filter to match a username} rgw_ldap_searchfilter = {search filter} -The details of the integrating ldap with Ceph Object Gateway can be found here: +The details of integrating LDAP with Ceph Object Gateway can be found here: :doc:`ldap-auth` Note: By default, STS and S3 APIs co-exist in the same namespace, and both S3 @@ -126,7 +126,7 @@ Keystone. | user_id | 40a7140e424f493d8165abc652dc731c | +------------+--------------------------------------------------------+ -2. Use the credentials created in the step 1. to get back a set of temporary +2. Use the credentials created in step 1 to get back a set of temporary credentials using GetSessionToken API. .. code-block:: python @@ -147,7 +147,7 @@ Keystone. DurationSeconds=43200 ) -3. The temporary credentials obtained in step 2. can be used for making S3 calls: +3. The temporary credentials obtained in step 2 can be used for making S3 calls: .. code-block:: python diff --git a/doc/radosgw/account.rst b/doc/radosgw/account.rst index f0da3d7d08e..97c1ac0e960 100644 --- a/doc/radosgw/account.rst +++ b/doc/radosgw/account.rst @@ -65,7 +65,7 @@ policy documents. For example, ``arn:aws:iam::RGW33567154695143645:user/A`` refers to an IAM user named A in that account. The Ceph Object Gateway also supports tenant names in that position. 
-Accounts IDs can also be used in ACLs for a ``Grantee`` of type ``CanonicalUser``. +Account IDs can also be used in ACLs for a ``Grantee`` of type ``CanonicalUser``. User IDs are also supported here. IAM Policy @@ -201,7 +201,7 @@ Account topics are supported only when the ``notification_v2`` feature is enable as described in :ref:`radosgw-notifications` and :ref:`Supported Zone Features `. #. **Migration Impact:** When a non-account user is migrated to an account, the - the existing notification topics remain accessible through the RADOS Gateway admin API, + existing notification topics remain accessible through the RADOS Gateway admin API, but the user loses access to them via the SNS Topic API. Despite this, the topics remain functional, and bucket notifications will continue to be delivered as expected. diff --git a/doc/radosgw/admin.rst b/doc/radosgw/admin.rst index c04ce3b5a50..2115845a72b 100644 --- a/doc/radosgw/admin.rst +++ b/doc/radosgw/admin.rst @@ -38,8 +38,8 @@ There are two types of user: +-----+ Subuser | +-----------+ -Users and subusers can be created, modified, viewed, suspended and removed. -you may add a Display names and an email addresses can be added to user +Users and subusers can be created, modified, viewed, suspended, and removed. +Display names and email addresses can be added to user profiles. Keys and secrets can either be specified or generated automatically. When generating or specifying keys, remember that user IDs correspond to S3 key types and subuser IDs correspond to Swift key types. diff --git a/doc/radosgw/adminops.rst b/doc/radosgw/adminops.rst index bb5c89f0a99..674b7b4a728 100644 --- a/doc/radosgw/adminops.rst +++ b/doc/radosgw/adminops.rst @@ -252,7 +252,7 @@ Request Parameters ``end`` -:Description: Date and (optional) time that specifies the end time of the requested data (none inclusive). +:Description: Date and (optional) time that specifies the end time of the requested data (non-inclusive). 
:Type: String :Example: ``2012-09-25 16:00:00`` :Required: No @@ -1596,7 +1596,7 @@ Request Parameters ``uid`` -:Description: The user ID under which a subuser is to be created. +:Description: The user ID under which a subuser is to be created. :Type: String :Example: ``foo_user`` :Required: Yes @@ -2273,7 +2273,7 @@ Request Parameters ``purge-objects`` -:Description: Remove a buckets objects before deletion. +:Description: Remove a bucket's objects before deletion. :Type: Boolean :Example: True [False] :Required: No @@ -2813,7 +2813,7 @@ permission. :: Rate Limit ========== -The Admin Operations API enables you to set and get ratelimit configurations on users and on bucket and global rate limit configurations. See `Rate Limit Management`_ for additional details. +The Admin Operations API enables you to set and get ratelimit configurations on users and on buckets and global rate limit configurations. See `Rate Limit Management`_ for additional details. Rate Limit includes the maximum number of operations and/or bytes per accumulation interval, separated by read and/or write (Additionally list and get operations), to a bucket and/or by a user and the maximum storage size in megabytes. diff --git a/doc/radosgw/api.rst b/doc/radosgw/api.rst index cb31284e042..ddd9ec01c5e 100644 --- a/doc/radosgw/api.rst +++ b/doc/radosgw/api.rst @@ -6,7 +6,7 @@ librgw (Python) .. highlight:: python -The `rgw` python module provides file-like access to rgw. +The `rgw` Python module provides file-like access to RGW. API Reference ============= diff --git a/doc/radosgw/bucket_logging.rst b/doc/radosgw/bucket_logging.rst index 4d58d328459..a0e0ae948ce 100644 --- a/doc/radosgw/bucket_logging.rst +++ b/doc/radosgw/bucket_logging.rst @@ -2,7 +2,7 @@ Bucket Logging ============== -.. versionadded:: T +.. versionadded:: Tentacle .. 
contents:: @@ -230,7 +230,7 @@ possible formats: Journal ``````` -The "Journal" record format uses minimum amount of data for journaling +The "Journal" record format uses a minimum amount of data for journaling bucket changes (this is a Ceph extension). - bucket owner (or dash if empty) @@ -252,7 +252,7 @@ For example: Standard ```````` -The "Standard" record format is based on `AWS Logging Record Format`_. +The "Standard" record format is based on the `AWS Logging Record Format`_. - bucket owner (or dash if empty) - bucket name (or dash if empty) in the format: ``[tenant:]`` diff --git a/doc/radosgw/bucketpolicy.rst b/doc/radosgw/bucketpolicy.rst index 35f575619c3..29e9aa062b1 100644 --- a/doc/radosgw/bucketpolicy.rst +++ b/doc/radosgw/bucketpolicy.rst @@ -136,7 +136,7 @@ Bucket Related Operations +-----------------------+----------------------+----------------+ | | s3:x-amz-acl | | | | s3:x-amz-grant-| | -|s3:createBucket | where perm is one of | | +|s3:CreateBucket | where perm is one of | | | | read/write/read-acp | | | | write-acp/ | | | | full-control | | @@ -183,7 +183,7 @@ Object Related Operations | |s3:RequestObjectTag/ | | | | | | +-----------------------------+---------------------------------------------------+-------------------+ -|s3:PutObjectAcl |s3:x-amz-acl & s3-amz-grant- | | +|s3:PutObjectAcl |s3:x-amz-acl & s3:x-amz-grant- | | |s3:PutObjectVersionAcl | | | | +---------------------------------------------------+-------------------+ | |s3:ExistingObjectTag/ | | diff --git a/doc/radosgw/cloud-restore.rst b/doc/radosgw/cloud-restore.rst index 266904c55a4..c2647901e5f 100644 --- a/doc/radosgw/cloud-restore.rst +++ b/doc/radosgw/cloud-restore.rst @@ -10,7 +10,7 @@ of those transitioned objects from the remote S3 endpoints into the local RGW deployment. This feature currently enables the restoration of objects transitioned to -S3-compatible cloud services. In order to faciliate this, +S3-compatible cloud services. 
In order to facilitate this, the ``retain_head_object`` option should be set to ``true`` in the ``tier-config`` when configuring the storage class. @@ -32,7 +32,7 @@ objects as well:: { "access_key": , - "secret": ,` + "secret": , "endpoint": , "region": , "host_style": , diff --git a/doc/radosgw/cloud-sync-module.rst b/doc/radosgw/cloud-sync-module.rst index 78480521da7..8592e05b891 100644 --- a/doc/radosgw/cloud-sync-module.rst +++ b/doc/radosgw/cloud-sync-module.rst @@ -51,7 +51,7 @@ Non Trivial Configuration "access_key": , "secret": , "endpoint": , - "host_style" , + "host_style": , }, "acls": [ { @@ -67,7 +67,7 @@ Non Trivial Configuration "access_key": , "secret": , "endpoint": , - "host_style" , # optional + "host_style": , # optional } ... ], "acl_profiles": [ { diff --git a/doc/radosgw/cloud-transition.rst b/doc/radosgw/cloud-transition.rst index 665d4c1bb3f..238996442ae 100644 --- a/doc/radosgw/cloud-transition.rst +++ b/doc/radosgw/cloud-transition.rst @@ -202,7 +202,7 @@ For example "access_key": "", "secret": "", "host_style": "path", - "location_constraint": ""; + "location_constraint": "", "target_storage_class": "", "target_path": "", "acl_mappings": [], diff --git a/doc/radosgw/config-ref.rst b/doc/radosgw/config-ref.rst index 9ca252e6e82..246bd43b604 100644 --- a/doc/radosgw/config-ref.rst +++ b/doc/radosgw/config-ref.rst @@ -4,7 +4,7 @@ Ceph Object Gateway Config Reference ====================================== -The following settings may added to the Ceph configuration file (i.e., usually +The following settings may be added to the Ceph configuration file (i.e., usually ``ceph.conf``) under the ``[client.radosgw.{instance-name}]`` section. The settings may contain default values. If you do not specify each setting in the Ceph configuration file, the default value will be set automatically. 
diff --git a/doc/radosgw/d3n_datacache.rst b/doc/radosgw/d3n_datacache.rst index d02395129fe..f9c7b04f147 100644 --- a/doc/radosgw/d3n_datacache.rst +++ b/doc/radosgw/d3n_datacache.rst @@ -24,7 +24,7 @@ Architecture D3N improves the performance of big-data jobs by speeding up repeatedly accessed dataset reads from the data lake. Cache servers are located in the datacenter on the access side of potential network and storage bottlenecks. -D3Ns two-layer logical cache forms a traditional caching hierarchy :sup:`*` +D3N's two-layer logical cache forms a traditional caching hierarchy :sup:`*` where caches nearer the client have the lowest access latency and overhead, while caches in higher levels in the hierarchy are slower (requiring multiple hops to access). The layer 1 cache server nearest to the client handles object requests by breaking them into blocks, @@ -108,13 +108,13 @@ to each RADOS Gateway without a balancer in order to avoid cached data duplicati Logs ---- -- D3N related log lines in ``radosgw.*.log`` contain the string ``d3n`` (case insensitive). -- Low level D3N logs can be enabled by the ``debug_rgw_datacache`` subsystem (up to ``debug_rgw_datacache=30``). +- D3N-related log lines in ``radosgw.*.log`` contain the string ``d3n`` (case insensitive). +- Low-level D3N logs can be enabled by the ``debug_rgw_datacache`` subsystem (up to ``debug_rgw_datacache=30``). Config Reference ================ -The following D3N related settings can be added to the Ceph configuration file +The following D3N-related settings can be added to the Ceph configuration file (i.e., usually ``ceph.conf``) under the ``[client.rgw.{instance-name}]`` section. .. confval:: rgw_d3n_l1_local_datacache_enabled diff --git a/doc/radosgw/dynamicresharding.rst b/doc/radosgw/dynamicresharding.rst index a1abfb20732..1c7ea3387c5 100644 --- a/doc/radosgw/dynamicresharding.rst +++ b/doc/radosgw/dynamicresharding.rst @@ -31,7 +31,7 @@ shards more evenly. 
Detection of resharding opportunities runs as a background process that periodically scans all buckets. A bucket that requires resharding is added to a queue. A thread runs in the background and processes the -queueued resharding tasks one at a time. +queued resharding tasks one at a time. Starting with Tentacle, dynamic resharding has the ability to reduce the number of shards. Once the condition allowing reduction is noted, @@ -183,7 +183,7 @@ Setting a Bucket's Minimum Number of Shards Since dynamic resharding can now reduce the number of shards, administrators may want to prevent the number of shards from becoming -too low, for example if the expect the number of objects to increase +too low, for example if they expect the number of objects to increase in the future. This command allows administrators to set a per-bucket minimum. This does not, however, prevent administrators from manually resharding to a lower number of shards. diff --git a/doc/radosgw/elastic-sync-module.rst b/doc/radosgw/elastic-sync-module.rst index b902e7372d2..29d0aa6a9ad 100644 --- a/doc/radosgw/elastic-sync-module.rst +++ b/doc/radosgw/elastic-sync-module.rst @@ -1,16 +1,16 @@ .. _radosgw-elastic-sync-module: ========================= -ElasticSearch Sync Module +Elasticsearch Sync Module ========================= .. versionadded:: Kraken .. note:: - As of 31 May 2020, only Elasticsearch 6 and lower are supported. ElasticSearch 7 is not supported. + As of 31 May 2020, only Elasticsearch 6 and lower are supported. Elasticsearch 7 is not supported. -This sync module writes the metadata from other zones to `ElasticSearch`_. As of -luminous this is a json of data fields we currently store in ElasticSearch. +This sync module writes the metadata from other zones to `Elasticsearch`_. As of +Luminous this is a JSON of data fields we currently store in Elasticsearch. :: @@ -41,7 +41,7 @@ luminous this is a json of data fields we currently store in ElasticSearch. 
-ElasticSearch tier type configurables +Elasticsearch tier type configurables ------------------------------------- * ``endpoint`` @@ -80,7 +80,7 @@ be indexed. Suffixes and prefixes can also be provided. * ``override_index_path`` (string) -if not empty, this string will be used as the elasticsearch index +if not empty, this string will be used as the Elasticsearch index path. Otherwise the index path will be determined and generated on sync initialization. @@ -90,24 +90,24 @@ End user metadata queries .. versionadded:: Luminous -Since the ElasticSearch cluster now stores object metadata, it is important that -the ElasticSearch endpoint is not exposed to the public and only accessible to +Since the Elasticsearch cluster now stores object metadata, it is important that +the Elasticsearch endpoint is not exposed to the public and only accessible to the cluster administrators. For exposing metadata queries to the end user itself this poses a problem since we'd want the user to only query their metadata and -not of any other users, this would require the ElasticSearch cluster to +not of any other users, this would require the Elasticsearch cluster to authenticate users in a way similar to RGW does which poses a problem. As of Luminous RGW in the metadata master zone can now service end user -requests. This allows for not exposing the elasticsearch endpoint in public and +requests. This allows for not exposing the Elasticsearch endpoint in public and also solves the authentication and authorization problem since RGW itself can authenticate the end user requests. For this purpose RGW introduces a new query -in the bucket APIs that can service elasticsearch requests. All these requests +in the bucket APIs that can service Elasticsearch requests. All these requests must be sent to the metadata master zone. 
Syntax ~~~~~~ -Get an elasticsearch query +Get an Elasticsearch query `````````````````````````` :: diff --git a/doc/radosgw/frontends.rst b/doc/radosgw/frontends.rst index f1f9c264a5d..ac38d427135 100644 --- a/doc/radosgw/frontends.rst +++ b/doc/radosgw/frontends.rst @@ -34,7 +34,7 @@ Options :Description: Sets the listening address in the form ``address[:port]``, where the address is an IPv4 address string in dotted decimal form, or an IPv6 address in hexadecimal notation surrounded by square - brackets. Specifying a IPv6 endpoint would listen to IPv6 only. The + brackets. Specifying an IPv6 endpoint would listen to IPv6 only. The optional port defaults to 80 for ``endpoint`` and 443 for ``ssl_endpoint``. Can be specified multiple times as in ``endpoint=[::1] endpoint=192.168.0.100:8000``. diff --git a/doc/radosgw/keystone.rst b/doc/radosgw/keystone.rst index 0a6717c55ad..741b2d23bd8 100644 --- a/doc/radosgw/keystone.rst +++ b/doc/radosgw/keystone.rst @@ -7,7 +7,7 @@ It is possible to integrate the Ceph Object Gateway with Keystone, the OpenStack identity service. This sets up the gateway to accept Keystone as the users authority. A user that Keystone authorizes to access the gateway will also be -automatically created on the Ceph Object Gateway (if didn't exist beforehand). A +automatically created on the Ceph Object Gateway (if it didn't exist beforehand). A token that Keystone validates will be considered as valid by the gateway. The following configuration options are available for Keystone integration:: @@ -28,11 +28,11 @@ shared secret ``rgw keystone admin token`` in the configuration file, which is recommended to be disabled in production environments. The service tenant credentials should have admin privileges, for more details refer the `OpenStack Keystone documentation`_, which explains the process in detail. 
The requisite -configuration options for are:: +configuration options are:: rgw keystone admin user = {keystone service tenant user name} rgw keystone admin password = {keystone service tenant user password} - rgw keystone admin password = {keystone service tenant user password path} # preferred + rgw keystone admin password path = {keystone service tenant user password path} # preferred rgw keystone admin tenant = {keystone service tenant name} @@ -131,18 +131,18 @@ object-storage endpoint: The Keystone URL is the Keystone admin RESTful API URL. The admin token is the token that is configured internally in Keystone for admin requests. -OpenStack Keystone may be terminated with a self signed ssl certificate, in +OpenStack Keystone may be terminated with a self-signed SSL certificate, in order for radosgw to interact with Keystone in such a case, you could either -install Keystone's ssl certificate in the node running radosgw. Alternatively -radosgw could be made to not verify the ssl certificate at all (similar to +install Keystone's SSL certificate in the node running radosgw. Alternatively +radosgw could be made to not verify the SSL certificate at all (similar to OpenStack clients with a ``--insecure`` switch) by setting the value of the configurable ``rgw keystone verify ssl`` to false. .. _OpenStack Keystone documentation: http://docs.openstack.org/developer/keystone/configuringservices.html#setting-up-projects-users-and-roles -Cross Project(Tenant) Access ----------------------------- +Cross-Project (Tenant) Access +----------------------------- In order to let a project (earlier called a 'tenant') access buckets belonging to a different project, the following config option needs to be enabled:: diff --git a/doc/radosgw/layout.rst b/doc/radosgw/layout.rst index 9548f920b4b..45d6a46fb1b 100644 --- a/doc/radosgw/layout.rst +++ b/doc/radosgw/layout.rst @@ -118,7 +118,7 @@ causes no ambiguity. For the same reason, slashes are permitted in object names (keys). 
It is possible to create multiple data pools and make it so that -different users\` buckets will be created in different RADOS pools by default, +different users' buckets will be created in different RADOS pools by default, thus providing the necessary scaling. The layout and naming of these pools is controlled by a 'policy' setting.[3] @@ -126,7 +126,7 @@ An RGW object may comprise multiple RADOS objects, the first of which is the ``HEAD`` that contains metadata including manifest, ACLs, content type, ETag, and user-defined metadata. The metadata is stored in xattrs. The ``HEAD`` object may also inline up to :confval:`rgw_max_chunk_size` of object data, for efficiency -and atomicity. This enables a convenenient tiering strategy: index pools +and atomicity. This enables a convenient tiering strategy: index pools are necessarily replicated (cannot be EC) and should be placed on fast SSD OSDs. With a mix of small/hot RGW objects and larger, warm/cold RGW objects like video files, the larger objects will automatically be placed diff --git a/doc/radosgw/ldap-auth.rst b/doc/radosgw/ldap-auth.rst index 486d0c6236d..a90a7c8a231 100644 --- a/doc/radosgw/ldap-auth.rst +++ b/doc/radosgw/ldap-auth.rst @@ -9,7 +9,7 @@ You can delegate the Ceph Object Gateway authentication to an LDAP server. How it works ============ -The Ceph Object Gateway extracts the users LDAP credentials from a token. A +The Ceph Object Gateway extracts the user's LDAP credentials from a token. A search filter is constructed with the user name. The Ceph Object Gateway uses the configured service account to search the directory for a matching entry. 
If an entry is found, the Ceph Object Gateway attempts to bind to the found @@ -81,7 +81,7 @@ authentication: Using a custom search filter to limit user access ================================================= -There are two ways to use the ``rgw_search_filter`` parameter: +There are two ways to use the ``rgw_ldap_searchfilter`` parameter: Specifying a partial filter to further limit the constructed search filter -------------------------------------------------------------------------- @@ -94,7 +94,7 @@ An example for a partial filter: The Ceph Object Gateway will generate the search filter as usual with the user name from the token and the value of ``rgw_ldap_dnattr``. The constructed -filter is then combined with the partial filter from the ``rgw_search_filter`` +filter is then combined with the partial filter from the ``rgw_ldap_searchfilter`` attribute. Depending on the user name and the settings the final search filter might become: @@ -118,8 +118,8 @@ to a specific group, use the following filter: "(&(uid=@USERNAME@)(memberOf=cn=ceph-users,ou=groups,dc=mycompany,dc=com))" -.. note:: Using the ``memberOf`` attribute in LDAP searches requires server side - support from you specific LDAP server implementation. +.. note:: Using the ``memberOf`` attribute in LDAP searches requires server-side + support from your specific LDAP server implementation. Generating an access token for LDAP authentication ================================================== diff --git a/doc/radosgw/lua-scripting.rst b/doc/radosgw/lua-scripting.rst index 62bf004e4b7..a0e49a7f38d 100644 --- a/doc/radosgw/lua-scripting.rst +++ b/doc/radosgw/lua-scripting.rst @@ -8,11 +8,11 @@ Lua Scripting This feature allows users to assign execution context to Lua scripts. 
The supported contexts are: - - ``prerequest`` which will execute a script before each operation is performed - - ``postrequest`` which will execute after each operation is performed - - ``background`` which will execute within a specified time interval - - ``getdata`` which will execute on objects' data when objects are downloaded - - ``putdata`` which will execute on objects' data when objects are uploaded +- ``prerequest`` which will execute a script before each operation is performed +- ``postrequest`` which will execute after each operation is performed +- ``background`` which will execute within a specified time interval +- ``getdata`` which will execute on objects' data when objects are downloaded +- ``putdata`` which will execute on objects' data when objects are uploaded A request (pre or post) or data (get or put) context script may be constrained to operations belonging to a specific tenant's users. The request context script can also access fields in the request and modify certain fields, as well as the `Global RGW Table`_. @@ -28,13 +28,13 @@ By default, the execution of a Lua script is limited to a maximum runtime of 100 By default, all Lua standard libraries are available in the script, however, in order to allow for additional Lua modules to be used in the script, we support adding packages to an allowlist: - - Make sure that the ``luarocks`` package manager is installed on the host. - - Adding a Lua package to the allowlist, or removing a packge from it does not install or remove it. For the changes to take affect a "reload" command should be called. - - In addition all packages in the allowlist are being re-installed using the luarocks package manager on radosgw restart. - - To add a package that contains C source code that needs to be compiled, use the ``--allow-compilation`` flag. In this case a C compiler needs to be available on the host - - Lua packages are installed in, and used from, a directory local to the radosgw. 
Meaning that Lua packages in the allowlist are separated from any Lua packages available on the host. - By default, this directory would be ``/tmp/luarocks/``. Its prefix part (``/tmp/luarocks/``) could be set to a different location via the ``rgw_luarocks_location`` configuration parameter. - Note that this parameter should not be set to one of the default locations where luarocks install packages (e.g. ``$HOME/.luarocks``, ``/usr/lib64/lua``, ``/usr/share/lua``). +- Make sure that the ``luarocks`` package manager is installed on the host. +- Adding a Lua package to the allowlist, or removing a package from it does not install or remove it. For the changes to take effect a "reload" command should be called. +- In addition all packages in the allowlist are being re-installed using the luarocks package manager on radosgw restart. +- To add a package that contains C source code that needs to be compiled, use the ``--allow-compilation`` flag. In this case a C compiler needs to be available on the host +- Lua packages are installed in, and used from, a directory local to the radosgw. Meaning that Lua packages in the allowlist are separated from any Lua packages available on the host. + By default, this directory would be ``/tmp/luarocks/``. Its prefix part (``/tmp/luarocks/``) could be set to a different location via the ``rgw_luarocks_location`` configuration parameter. + Note that this parameter should not be set to one of the default locations where luarocks install packages (e.g. ``$HOME/.luarocks``, ``/usr/lib64/lua``, ``/usr/share/lua``). .. toctree:: @@ -130,6 +130,7 @@ To apply changes from the allowlist to all RGWs: Context Free Functions ---------------------- + Debug Log ~~~~~~~~~ The ``RGWDebugLog()`` function accepts a string and prints it to the debug log with priority 20. 
@@ -150,7 +151,7 @@ Request Fields +----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+ -| Field | Type | Description | Iterable | Writeable | Optional | +| Field | Type | Description | Iterable | Writable | Optional | +====================================================+==========+==============================================================+==========+===========+==========+ | ``Request.RGWOp`` | string | radosgw operation | no | no | no | +----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+ @@ -176,7 +177,7 @@ Request Fields +----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+ | ``Request.Bucket.Tenant`` | string | tenant of the bucket | no | no | yes | +----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+ -| ``Request.Bucket.Name`` | string | bucket name (writeable only in ``prerequest`` context) | no | yes | no | +| ``Request.Bucket.Name`` | string | bucket name (writable only in ``prerequest`` context) | no | yes | no | +----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+ | ``Request.Bucket.Marker`` | string | bucket marker (initial id) | no | no | yes | +----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+ @@ -194,7 +195,7 @@ Request Fields +----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+ | 
``Request.Bucket.Quota.MaxObjects`` | integer | bucket quota max number of objects | no | no | no | +----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+ -| ``Reques.Bucket.Quota.Enabled`` | boolean | bucket quota is enabled | no | no | no | +| ``Request.Bucket.Quota.Enabled`` | boolean | bucket quota is enabled | no | no | no | +----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+ | ``Request.Bucket.Quota.Rounded`` | boolean | bucket quota is rounded to 4K | no | no | no | +----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+ @@ -316,7 +317,8 @@ Request Fields +----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+ Request Functions --------------------- +----------------- + Operations Log ~~~~~~~~~~~~~~ The ``Request.Log()`` function prints the requests into the operations log. This function has no parameters. It returns 0 for success and an error code if it fails. @@ -336,6 +338,7 @@ Tracing functions can be used only in the ``postrequest`` context. Request Blocking and Error Handling ----------------------------------- + Script Execution Errors ~~~~~~~~~~~~~~~~~~~~~~~ If the Lua script fails with a syntax or runtime error, RGW will log the error. The request that triggered the script will still go through. @@ -351,14 +354,17 @@ Return Value Context The Lua script’s return value is evaluated only during the prerequest context and is ignored in any other RGW request-processing context. The HTTP response status code is 403 (Forbidden) by default when a request is blocked by Lua. 
The response code can be changed using ``Request.Response.HTTPStatusCode`` and ``Request.Response.HTTPStatus``. If a request is aborted this way, the ``data`` and ``postrequest`` context will also be aborted. + Background Context --------------------- +------------------ The ``background`` context may be used for purposes that include analytics, monitoring, caching data for other context executions. + - Background script execution default interval is 5 seconds. Data Context --------------------- +------------ Both ``getdata`` and ``putdata`` contexts have the following fields: + - ``Data`` which is read-only and iterable (byte by byte). In case that an object is uploaded or retrieved in multiple chunks, the ``Data`` field will hold data of one chunk at a time. - ``Offset`` which is holding the offset of the chunk within the entire object. - The ``Request`` fields and the background ``RGW`` table are also available in these contexts. @@ -366,7 +372,8 @@ Global RGW Table -------------------- The ``RGW`` Lua table is accessible from all contexts and saves data written to it -during execution so that it may be read and used later during other executions, from the same context of a different one. +during execution so that it may be read and used later during other executions, from the same context or a different one. + - Each RGW instance has its own private and ephemeral ``RGW`` Lua table that is lost when the daemon restarts. Note that ``background`` context scripts will run on every instance. - The maximum number of entries in the table is 100,000. Each entry has a string key and a value with a combined length of no more than 1KB. A Lua script will abort with an error if the number of entries or entry size exceeds these limits.
@@ -558,9 +565,9 @@ in ``postrequest`` context, we can add attributes and events to the request's tr Request.Trace.AddEvent("second event", event_attrs) - The entropy value of an object could be used to detect whether the object is encrypted. - The following script calculates the entropy and size of uploaded objects and print to debug log + The following script calculates the entropy and size of uploaded objects and prints them to the debug log. -in the ``putdata`` context, add the following script +Add the following script in the ``putdata`` context: .. code-block:: lua diff --git a/doc/radosgw/metrics.rst index d639ed58877..4b8c2dad4ed 100644 --- a/doc/radosgw/metrics.rst +++ b/doc/radosgw/metrics.rst @@ -4,7 +4,7 @@ Metrics The Ceph Object Gateway uses :ref:`Perf Counters` to track metrics. The counters can be labeled (:ref:`Labeled Perf Counters`). When counters are labeled, they are stored in the Ceph Object Gateway specific caches. -These metrics can be sent to the time series database Prometheus to visualize a cluster wide view of usage data (ex: number of S3 put operations on a specific bucket) over time. +These metrics can be sent to the time series database Prometheus to visualize a cluster-wide view of usage data (for example, number of S3 put operations on a specific bucket) over time. .. contents:: @@ -187,7 +187,7 @@ Cache sizing can depend on a number of factors. These factors include: #. Number of users in the cluster #. Number of buckets in the cluster #. Memory usage of the Ceph Object Gateway -#. Disk and memory usage of Promtheus. +#. Disk and memory usage of Prometheus. To help calculate the Ceph Object Gateway's memory usage of a cache, it should be noted that each cache entry, encompassing all of the op metrics, is 1360 bytes. This is an estimate and subject to change if metrics are added or removed from the op metrics list.
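The entropy calculation that the Lua ``putdata`` script above performs on uploaded chunks can be sketched in standalone Python; this is only an illustration of the Shannon-entropy idea (near 8 bits/byte suggests encrypted or compressed data), not the radosgw code:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte of a buffer."""
    if not data:
        return 0.0
    total = len(data)
    counts = Counter(data)
    # Subtract from 0.0 so the degenerate single-symbol case yields +0.0.
    return 0.0 - sum((n / total) * math.log2(n / total) for n in counts.values())

# A constant buffer has zero entropy; a uniform byte distribution reaches 8.
print(shannon_entropy(b"\x00" * 1024))          # 0.0
print(shannon_entropy(bytes(range(256)) * 4))   # 8.0
```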
diff --git a/doc/radosgw/mfa.rst b/doc/radosgw/mfa.rst index 416f23af106..54522abeeff 100644 --- a/doc/radosgw/mfa.rst +++ b/doc/radosgw/mfa.rst @@ -10,7 +10,7 @@ The S3 multifactor authentication (MFA) feature allows users to require the use of one-time password when removing objects on certain buckets. The buckets need to be configured with versioning and MFA enabled which can be done through -the S3 api. +the S3 API. Time-based one time password tokens can be assigned to a user through radosgw-admin. Each token has a secret seed, and a serial @@ -22,7 +22,7 @@ Multisite While the MFA IDs are set on the user's metadata, the actual MFA one time password configuration resides in the local zone's -osds. Therefore, in a multi-site environment it is advisable to use +OSDs. Therefore, in a multi-site environment it is advisable to use different tokens for different zones. @@ -97,6 +97,6 @@ In order to re-sync the TOTP token (in case of time skew). This requires feeding two consecutive pins: the previous pin, and the current pin. :: # radosgw-admin mfa resync --uid= --totp-serial= \ - --totp-pin= --totp=pin= + --totp-pin= --totp-pin= diff --git a/doc/radosgw/multisite.rst b/doc/radosgw/multisite.rst index 18369976206..58bd1b15ccf 100644 --- a/doc/radosgw/multisite.rst +++ b/doc/radosgw/multisite.rst @@ -86,7 +86,7 @@ At the top of this diagram, we see two applications (also known as "clients"). The application on the right is both writing and reading data from the Ceph Cluster, by means of the RADOS Gateway (RGW). The application on the left is only *reading* data from the Ceph Cluster, by means of an instance of RADOS -Gateway. In both cases (read-and-write and read-only), the transmssion of +Gateway. In both cases (read-and-write and read-only), the transmission of data is handled RESTfully. 
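The one-time pins fed to the ``mfa resync`` command in the MFA section above are standard RFC 6238 TOTP values. A minimal standalone sketch of the pin computation (illustrative only, not the radosgw implementation):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time counter, dynamically truncated."""
    counter = struct.pack(">Q", timestamp // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test secret; at t=59 the 6-digit pin is 287082.
print(totp(b"12345678901234567890", 59))
```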
In the middle of this diagram, we see two zones, each of which contains an @@ -108,8 +108,8 @@ Beginning with Kraken, each Ceph Object Gateway can be configured to work in an active-active zone mode. This makes it possible to write to non-master zones. The multi-site configuration is stored within a container called a "realm". The -realm stores zonegroups, zones, and a time "period" with multiple epochs (which -(the epochs) are used for tracking changes to the configuration). +realm stores zonegroups, zones, and a time "period" with multiple epochs. +The epochs are used for tracking changes to the configuration. Beginning with Kraken, the ``ceph-radosgw`` daemons handle the synchronization of data across zones, which eliminates the need for a separate synchronization @@ -129,8 +129,8 @@ geographically separate locations; however, the configuration can work on the same site. This guide also assumes two Ceph Object Gateway servers named ``rgw1`` and ``rgw2``. -.. important:: Running a single geographically-distributed Ceph storage cluster - is NOT recommended unless you have low latency WAN connections. +.. important:: Running a single geographically distributed Ceph storage cluster + is NOT recommended unless you have low-latency WAN connections. A multi-site configuration requires a master zonegroup and a master zone. Each zonegroup requires a master zone. Zonegroups may have one or more secondary diff --git a/doc/radosgw/multitenancy.rst b/doc/radosgw/multitenancy.rst index 09f5071c110..95c1d68fa46 100644 --- a/doc/radosgw/multitenancy.rst +++ b/doc/radosgw/multitenancy.rst @@ -25,8 +25,8 @@ Administering Users With Explicit Tenants Tenants as such do not have any operations on them. They appear and disappear as needed, when users are administered. In order to create, modify, and remove users with explicit tenants, either an additional -option --tenant is supplied, or a syntax '$' is used -in the parameters of the radosgw-admin command. 
+option ``--tenant`` is supplied, or a syntax ``$`` is used +in the parameters of the ``radosgw-admin`` command. Examples -------- @@ -142,11 +142,11 @@ configuration option:: Once you enable this option, any newly connecting user (whether they are using the Swift API, or Keystone-authenticated S3) will prompt -radosgw to create a user named ``$$``, where ```` is a Keystone tenant (project) UUID --- for example, ``7188e165c0ae4424ac68ae2e89a05c50$7188e165c0ae4424ac68ae2e89a05c50``. -Whenever that user then creates an Swift container, radosgw internally +Whenever that user then creates a Swift container, radosgw internally translates the given container name into ``/``, such as ``7188e165c0ae4424ac68ae2e89a05c50/foo``. This ensures that if there @@ -155,11 +155,11 @@ are two or more different tenants all creating a container named prefix. It is also possible to limit the effects of implicit tenants -to only apply to swift or s3, by setting ``rgw keystone implicit tenants`` +to only apply to Swift or S3, by setting ``rgw keystone implicit tenants`` to either ``s3`` or ``swift``. This will likely primarily be of use to users who had previously used implicit tenants with older versions of ceph, where implicit tenants -only applied to the swift protocol. +only applied to the Swift protocol. 
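The implicit-tenant mapping described above (a user named after the Keystone project UUID, and Swift containers stored internally as ``<tenant>/<container>``) can be sketched with two hypothetical helper functions; the helpers and their names are illustrative, and the UUID is the one from the example above:

```python
def split_tenant_user(spec: str):
    """Split a 'tenant$user' identifier; a bare user has an empty tenant."""
    tenant, sep, user = spec.partition("$")
    return (tenant, user) if sep else ("", spec)

def qualified_container(tenant: str, container: str) -> str:
    """Internally radosgw maps a Swift container name to '<tenant>/<container>'."""
    return f"{tenant}/{container}"

uuid = "7188e165c0ae4424ac68ae2e89a05c50"
print(split_tenant_user(f"{uuid}${uuid}"))
print(qualified_container(uuid, "foo"))  # 7188e165c0ae4424ac68ae2e89a05c50/foo
```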
Notes and known issues ---------------------- diff --git a/doc/radosgw/nfs.rst b/doc/radosgw/nfs.rst index f4795849d83..35225f89cf9 100644 --- a/doc/radosgw/nfs.rst +++ b/doc/radosgw/nfs.rst @@ -90,7 +90,7 @@ following characteristics: - NFS protocol security is provided by the NFS-Ganesha server, as negotiated by the NFS server and clients - + e.g., clients can by trusted (AUTH_SYS), or required to present Kerberos user credentials (RPCSEC_GSS) + + e.g., clients can be trusted (AUTH_SYS), or required to present Kerberos user credentials (RPCSEC_GSS) + RPCSEC_GSS wire security can be integrity only (krb5i) or integrity and privacy (encryption, krb5p) + various NFS-specific security and permission rules are available @@ -127,7 +127,7 @@ optional. A small number of config variables (e.g., ``rgw_nfs_namespace_expire_secs``) are unique to RGW NFS. -In particular, front-end selection is handled specially by the librgw.so runtime. By default, only the +In particular, frontend selection is handled specially by the librgw.so runtime. By default, only the ``rgw-nfs`` frontend is started. Additional frontends (e.g., ``beast``) are enabled via the ``rgw nfs frontends`` config option. Its syntax is identical to the ordinary ``rgw frontends`` option. Default options for non-default frontends are specified via ``rgw frontend defaults`` as normal. @@ -298,7 +298,7 @@ RGW vs RGW NFS ============== Exporting an NFS namespace and other RGW namespaces (e.g., S3 or Swift -via the Civetweb HTTP front-end) from the same program instance is +via the Civetweb HTTP frontend) from the same program instance is currently not supported. When adding objects and buckets outside of NFS, those objects will diff --git a/doc/radosgw/notifications.rst b/doc/radosgw/notifications.rst index 35e088c53e4..2fa90913483 100644 --- a/doc/radosgw/notifications.rst +++ b/doc/radosgw/notifications.rst @@ -85,7 +85,7 @@ which tells the client that it may retry later. 
control the retry frequency with ``retry_sleep_duration``. .. tip:: To minimize the latency added by asynchronous notification, we - recommended placing the "log" pool on fast media. + recommend placing the "log" pool on fast media. Persistent bucket notifications are managed by the following central configuration options: @@ -346,9 +346,9 @@ Request parameters: - ``broker``: Messages are considered "delivered" if acked by the broker. (This is the default.) - - ``kafka-brokers``: A command-separated list of ``host:port`` of Kafka brokers: + - ``kafka-brokers``: A comma-separated list of ``host:port`` of Kafka brokers: these brokers (may contain a broker which is defined in Kafka URI) will be - added to Kafka URI to support sending notifcations to a Kafka cluster. + added to Kafka URI to support sending notifications to a Kafka cluster. .. note:: diff --git a/doc/radosgw/opa.rst b/doc/radosgw/opa.rst index f1b76b5ef78..88eacca93bb 100644 --- a/doc/radosgw/opa.rst +++ b/doc/radosgw/opa.rst @@ -15,10 +15,11 @@ Configure OPA To configure OPA, load custom policies into OPA that control the resources users are allowed to access. Relevant data or context can also be loaded into OPA to make decisions. -Policies and data can be loaded into OPA in the following ways:: - * OPA's RESTful APIs - * OPA's *bundle* feature that downloads policies and data from remote HTTP servers - * Filesystem +Policies and data can be loaded into OPA in the following ways: + +* OPA's RESTful APIs +* OPA's *bundle* feature that downloads policies and data from remote HTTP servers +* Filesystem Configure the Ceph Object Gateway ================================= @@ -66,7 +67,7 @@ Response:: {"result": true} The above is a sample request sent to OPA which contains information about the -user, resource and the action to be performed on the resource. Based on the polices +user, resource and the action to be performed on the resource. 
Based on the policies and data loaded into OPA, it will verify whether the request should be allowed or denied. In the sample request, RGW makes a POST request to the endpoint */v1/data/ceph/authz*, where *ceph* is the package name and *authz* is the rule name. diff --git a/doc/radosgw/orphans.rst b/doc/radosgw/orphans.rst index dd2399c811f..2b94a043b46 100644 --- a/doc/radosgw/orphans.rst +++ b/doc/radosgw/orphans.rst @@ -2,7 +2,7 @@ Orphan List and Associated Tooling ================================== -.. version added:: Luminous +.. versionadded:: Luminous .. contents:: @@ -51,7 +51,7 @@ available pools and prompt the user to enter the name of the data pool. At that point the tool will, perhaps after an extended period of time, produce a local file containing the RADOS objects from the designated pool that appear to be orphans. The administrator is free -to examine this file and the decide on a course of action, perhaps +to examine this file and then decide on a course of action, perhaps removing those RADOS objects from the designated pool. All intermediate results are stored on the local file system rather diff --git a/doc/radosgw/qat-accel.rst b/doc/radosgw/qat-accel.rst index 3d6a8e81db0..598b2d2a7be 100644 --- a/doc/radosgw/qat-accel.rst +++ b/doc/radosgw/qat-accel.rst @@ -33,7 +33,7 @@ QAT Environment Setup encryption and compression services. And QAT driver in kernel space have to be loaded to drive the hardware. -The out-of-tree QAT driver package can be downloaded from `Intel Quickassist +The out-of-tree QAT driver package can be downloaded from `Intel QuickAssist Technology`_. The QATlib can be downloaded from `qatlib`_, which is used for the in-tree QAT @@ -43,7 +43,7 @@ driver. The out-of-tree QAT driver is gradually being migrated to an in-tree driver+QATlib. 2. The implementation of QAT-based encryption is directly based on the QAT API, - which is included the driver package. 
However, QAT support for compression + which is included in the driver package. However, QAT support for compression depends on the QATzip project, which is a userspace library that builds on top of the QAT API. At the time of writing (July 2024), QATzip speeds up gzip compression and decompression. @@ -56,7 +56,7 @@ Implementation `OpenSSL support for RGW encryption`_ has been merged into Ceph, and Intel also provides one `QAT Engine`_ for OpenSSL. Theoretically, QAT-based encryption in -Ceph can be directly supported through the OpenSSl+QAT Engine. +Ceph can be directly supported through the OpenSSL+QAT Engine. However, the QAT Engine for OpenSSL currently supports only chained operations, which means that Ceph will not be able to utilize QAT hardware features for @@ -144,7 +144,7 @@ Configuration cd ceph ./do_cmake.sh -DWITH_QATDRV=ON cd build - ininja + ninja .. note:: The section name in QAT configuration files must be ``CEPH``, because the section name is set to ``CEPH`` in the Ceph crypto source code. diff --git a/doc/radosgw/rgw-cache.rst b/doc/radosgw/rgw-cache.rst index 9cfe8dd3a73..1b8da2b76af 100644 --- a/doc/radosgw/rgw-cache.rst +++ b/doc/radosgw/rgw-cache.rst @@ -12,7 +12,7 @@ When data is already cached, it need not be fetched from RGW. A permission check This feature is based on the Nginx modules ``ngx_http_auth_request_module`` and `nginx-aws-auth-module `_, and OpenResty for Lua capabilities. Currently this feature will cache only AWSv4 requests (only S3 requests), caching-in the output of the first GET request -and caching-out on subsequent GET requests, passing through transparently PUT,POST,HEAD,DELETE and COPY requests. +and caching-out on subsequent GET requests, passing through transparently PUT, POST, HEAD, DELETE and COPY requests. The feature introduces 2 new APIs: Auth and Cache. @@ -25,7 +25,7 @@ New APIs There are 2 new APIs for this feature: - **Auth API:** The cache uses this to validate that a user can access the cached data. 
-- **Cache API:** Adds the ability to override securely ``Range`` header so that Nginx can use its own `smart cache `_ on top of S3. +- **Cache API:** Adds the ability to override the ``Range`` header securely so that Nginx can use its own `smart cache `_ on top of S3. Using this API gives the ability to read ahead objects when client is asking a specific range from the object. On subsequent accesses to the cached object, Nginx will satisfy requests for already-cached ranges from the cache. Uncached ranges will be read from RGW (and cached). diff --git a/doc/radosgw/role.rst b/doc/radosgw/role.rst index 2d283bea28a..16ab27bbc2e 100644 --- a/doc/radosgw/role.rst +++ b/doc/radosgw/role.rst @@ -19,7 +19,7 @@ Create a Role To create a role, run a command of the following form:: - radosgw-admin role create --role-name={role-name} [--path=="{path to the role}"] [--assume-role-policy-doc={trust-policy-document}] + radosgw-admin role create --role-name={role-name} [--path="{path to the role}"] [--assume-role-policy-doc={trust-policy-document}] Request Parameters ~~~~~~~~~~~~~~~~~~ @@ -31,7 +31,7 @@ Request Parameters ``path`` -:Description: Path to the role. The default value is a slash(``/``). +:Description: Path to the role. The default value is a slash (``/``). 
:Type: String ``assume-role-policy-doc`` diff --git a/doc/radosgw/s3-notification-compatibility.rst b/doc/radosgw/s3-notification-compatibility.rst index efa746ba4d5..cb0e7cc3872 100644 --- a/doc/radosgw/s3-notification-compatibility.rst +++ b/doc/radosgw/s3-notification-compatibility.rst @@ -29,7 +29,7 @@ Notification Configuration XML Following tags (and the tags inside them) are not supported: +-----------------------------------+----------------------------------------------+ -| Tag | Remaks | +| Tag | Remarks | +===================================+==============================================+ | ```` | not needed, we treat all destinations as SNS | +-----------------------------------+----------------------------------------------+ diff --git a/doc/radosgw/s3/authentication.rst b/doc/radosgw/s3/authentication.rst index 52bb7710d6e..dc7d8c73aa1 100644 --- a/doc/radosgw/s3/authentication.rst +++ b/doc/radosgw/s3/authentication.rst @@ -127,7 +127,7 @@ Internally, S3 operations are mapped to ACL permissions thus: +---------------------------------------+---------------+ | ``s3:DeleteObject`` | ``WRITE`` | +---------------------------------------+---------------+ -| ``s3:s3DeleteObjectVersion`` | ``WRITE`` | +| ``s3:DeleteObjectVersion`` | ``WRITE`` | +---------------------------------------+---------------+ | ``s3:PutObject`` | ``WRITE`` | +---------------------------------------+---------------+ @@ -195,7 +195,7 @@ Internally, S3 operations are mapped to ACL permissions thus: +---------------------------------------+---------------+ | ``s3:PutBucketTagging`` | ``WRITE_ACP`` | +---------------------------------------+---------------+ -| ``s3:PutPutBucketVersioning`` | ``WRITE_ACP`` | +| ``s3:PutBucketVersioning`` | ``WRITE_ACP`` | +---------------------------------------+---------------+ | ``s3:PutBucketWebsite`` | ``WRITE_ACP`` | +---------------------------------------+---------------+ diff --git a/doc/radosgw/s3/bucketops.rst b/doc/radosgw/s3/bucketops.rst 
index 664b46ee694..4321347e6b6 100644 --- a/doc/radosgw/s3/bucketops.rst +++ b/doc/radosgw/s3/bucketops.rst @@ -57,7 +57,7 @@ Request Entities +===============================+===========+=================================================================+ | ``CreateBucketConfiguration`` | Container | A container for the bucket configuration. | +-------------------------------+-----------+-----------------------------------------------------------------+ -| ``LocationConstraint`` | String | A zonegroup api name, with optional :ref:`s3_bucket_placement`. | +| ``LocationConstraint`` | String | A zonegroup API name, with optional :ref:`s3_bucket_placement`. | +-------------------------------+-----------+-----------------------------------------------------------------+ @@ -561,7 +561,7 @@ Parameters are XML encoded in the body of the request, in the following format: +-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+ | ``S3Tags`` | Container | Holding a list of ``FilterRule`` entities, for filtering based on object tags. | No | | | | All filter rules in the list must match the tags defined on the object. However, | | -| | | the object still match it it has other tags not listed in the filter. | | +| | | the object still matches if it has other tags not listed in the filter. | | +-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+ | ``S3Key.FilterRule`` | Container | Holding ``Name`` and ``Value`` entities. ``Name`` would be: ``prefix``, ``suffix`` | Yes | | | | or ``regex``. The ``Value`` would hold the key prefix, key suffix or a regular | | diff --git a/doc/radosgw/s3/cpp.rst index 089c9c53a30..862a3e6641e 100644 --- a/doc/radosgw/s3/cpp.rst +++ b/doc/radosgw/s3/cpp.rst @@ -73,7 +73,7 @@ This creates a connection so that you can interact with the server.
Listing Owned Buckets --------------------- -This gets a list of Buckets that you own. +This gets a list of buckets that you own. This also prints out the bucket name, owner ID, and display name for each bucket. @@ -180,7 +180,7 @@ Deleting a Bucket .. note:: - The Bucket must be empty! Otherwise it won't work! + The bucket must be empty! Otherwise it won't work! .. code-block:: cpp diff --git a/doc/radosgw/s3/csharp.rst b/doc/radosgw/s3/csharp.rst index af1c6e4b57c..7370e31e54d 100644 --- a/doc/radosgw/s3/csharp.rst +++ b/doc/radosgw/s3/csharp.rst @@ -31,7 +31,7 @@ This creates a connection so that you can interact with the server. Listing Owned Buckets --------------------- -This gets a list of Buckets that you own. +This gets a list of buckets that you own. This also prints out the bucket name and creation date of each bucket. .. code-block:: csharp @@ -87,7 +87,7 @@ Deleting a Bucket .. note:: - The Bucket must be empty! Otherwise it won't work! + The bucket must be empty! Otherwise it won't work! .. code-block:: csharp diff --git a/doc/radosgw/s3/java.rst b/doc/radosgw/s3/java.rst index 64cc976da5d..eaf037f1f71 100644 --- a/doc/radosgw/s3/java.rst +++ b/doc/radosgw/s3/java.rst @@ -3,8 +3,8 @@ Java S3 Examples ================ -Pre-requisites --------------- +Prerequisites +------------- All examples are written against AWS Java SDK 2.17.42. You may need to change some code when using another client. @@ -12,7 +12,7 @@ to change some code when using another client. Setup ----- -The following examples may require some or all of the following java +The following examples may require some or all of the following Java classes to be imported: .. code-block:: java @@ -129,7 +129,7 @@ Deleting a Bucket ----------------- .. note:: - The Bucket must be empty! Otherwise it won't work! + The bucket must be empty! Otherwise it won't work! .. code-block:: java @@ -215,7 +215,7 @@ period even if the object is private (when the time period is up, the URL will stop working). .. 
note:: - The java library does not have a method for generating unsigned + The Java library does not have a method for generating unsigned URLs, so the example below just generates a signed URL. .. code-block:: java diff --git a/doc/radosgw/s3/objectops.rst b/doc/radosgw/s3/objectops.rst index 5af82eb5a42..2cbb979deef 100644 --- a/doc/radosgw/s3/objectops.rst +++ b/doc/radosgw/s3/objectops.rst @@ -73,7 +73,7 @@ Response Entities +------------------------+-------------+-----------------------------------------------+ | **LastModified** | Date | The last modified date of the source object. | +------------------------+-------------+-----------------------------------------------+ -| **Etag** | String | The ETag of the new object. | +| **ETag** | String | The ETag of the new object. | +------------------------+-------------+-----------------------------------------------+ Remove Object diff --git a/doc/radosgw/s3/perl.rst b/doc/radosgw/s3/perl.rst index f12e5c6987f..f5b3b7abac9 100644 --- a/doc/radosgw/s3/perl.rst +++ b/doc/radosgw/s3/perl.rst @@ -77,7 +77,7 @@ Deleting a Bucket ----------------- .. note:: - The Bucket must be empty! Otherwise it won't work! + The bucket must be empty! Otherwise it won't work! .. code-block:: perl @@ -89,7 +89,7 @@ Forced Delete for Non-empty Buckets .. attention:: - not available in the `Amazon::S3`_ perl module + not available in the `Amazon::S3`_ Perl module Creating an Object @@ -148,7 +148,7 @@ Generate Object Download URLs (signed and unsigned) --------------------------------------------------- This generates an unsigned download URL for ``hello.txt``. This works because we made ``hello.txt`` public by setting the ACL above. -Then this generates a signed download URL for ``secret_plans.txt`` that +It then generates a signed download URL for ``secret_plans.txt`` that will work for 1 hour. Signed download URLs will work for the time period even if the object is private (when the time period is up, the URL will stop working). 
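The signed download URLs discussed in these SDK examples can also be produced by hand: under the older AWS signature-v2 query-string scheme, the signature is just an HMAC-SHA1 over a short string-to-sign. A standalone sketch (the endpoint and names are reused from the examples; this is not any SDK's actual code):

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote

def presign_v2(access_key: str, secret_key: str, bucket: str,
               key: str, expires_in: int = 3600) -> str:
    """Sketch of an AWS signature-v2 query-string-authenticated GET URL."""
    expires = int(time.time()) + expires_in
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = quote(base64.b64encode(digest).decode(), safe="")
    return (f"http://objects.dreamhost.com/{bucket}/{key}"
            f"?AWSAccessKeyId={access_key}&Expires={expires}"
            f"&Signature={signature}")

print(presign_v2("ACCESS-KEY", "SECRET-KEY", "my-new-bucket", "secret_plans.txt"))
```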
@@ -158,7 +158,7 @@ URL will stop working). URLs, so we are going to be using another module instead. Unfortunately, most modules for generating these URLs assume that you are using Amazon, so we have had to go with using a more obscure module, `Muck::FS::S3`_. This - should be the same as Amazon's sample S3 perl module, but this sample + should be the same as Amazon's sample S3 Perl module, but this sample module is not in CPAN. So, you can either use CPAN to install `Muck::FS::S3`_, or install Amazon's sample S3 module manually. If you go the manual route, you can remove ``Muck::FS::`` from the example below. diff --git a/doc/radosgw/s3/php.rst b/doc/radosgw/s3/php.rst index 4878a34898f..580e580a71c 100644 --- a/doc/radosgw/s3/php.rst +++ b/doc/radosgw/s3/php.rst @@ -33,7 +33,7 @@ This creates a connection so that you can interact with the server. define('AWS_SECRET_KEY', 'place secret key here'); $ENDPOINT = 'http://objects.dreamhost.com'; - // require the amazon sdk from your composer vendor dir + // require the Amazon SDK from your composer vendor dir require __DIR__.'/vendor/autoload.php'; // Instantiate the S3 class and point it at the desired host @@ -119,7 +119,7 @@ This deletes the bucket called ``my-old-bucket`` and returns a .. note:: - The Bucket must be empty! Otherwise it won't work! + The bucket must be empty! Otherwise it won't work! .. code-block:: php diff --git a/doc/radosgw/s3/python.rst b/doc/radosgw/s3/python.rst index 35f6828932e..09146b695a0 100644 --- a/doc/radosgw/s3/python.rst +++ b/doc/radosgw/s3/python.rst @@ -27,7 +27,7 @@ This creates a connection so that you can interact with the server. Listing Owned Buckets --------------------- -This gets a list of Buckets that you own. +This gets a list of buckets that you own. This also prints out the bucket name and creation date of each bucket. .. code-block:: python @@ -82,7 +82,7 @@ Deleting a Bucket .. note:: - The Bucket must be empty! Otherwise it won't work! + The bucket must be empty! 
Otherwise it won't work! .. code-block:: python @@ -183,7 +183,7 @@ The output of this will look something like:: Using S3 API Extensions ----------------------- -To use the boto3 client to tests the RadosGW extensions to the S3 API, the `extensions file`_ should be placed under: ``~/.aws/models/s3/2006-03-01/`` directory. +To use the boto3 client to test the RadosGW extensions to the S3 API, the `extensions file`_ should be placed under: ``~/.aws/models/s3/2006-03-01/`` directory. For example, an unordered list of objects could be fetched using: .. code-block:: python diff --git a/doc/radosgw/s3/ruby.rst index 435b3c63083..dce1073fc27 100644 --- a/doc/radosgw/s3/ruby.rst +++ b/doc/radosgw/s3/ruby.rst @@ -6,7 +6,7 @@ Ruby `AWS::SDK`_ Examples (aws-sdk gem ~>2) Settings --------------------- -You can setup the connection on global way: +You can set up the connection globally: .. code-block:: ruby @@ -84,7 +84,7 @@ The output will look something like this if the bucket has some files:: Deleting a Bucket ----------------- .. note:: - The Bucket must be empty! Otherwise it won't work! + The bucket must be empty! Otherwise it won't work! .. code-block:: ruby @@ -258,7 +258,7 @@ The output will look something like this if the bucket has some files:: Deleting a Bucket ----------------- .. note:: - The Bucket must be empty! Otherwise it won't work! + The bucket must be empty! Otherwise it won't work! .. code-block:: ruby diff --git a/doc/radosgw/s3_objects_dedup.rst index 28ea792f4b1..fe83124d154 100644 --- a/doc/radosgw/s3_objects_dedup.rst +++ b/doc/radosgw/s3_objects_dedup.rst @@ -99,7 +99,7 @@ Next, we iterate through these dedup candidate objects, reading their complete information from the object metadata (a per-object RADOS operation). During this step, we filter out **compressed** and **user-encrypted** objects.
-Following this, we calculate a cryptograhically strong hash of the candidate +Following this, we calculate a cryptographically strong hash of the candidate object data. This involves a full-object read which is a resource-intensive operation. The hash ensures that the dedup candidates are indeed perfect matches. If they are, we proceed with the deduplication: diff --git a/doc/radosgw/s3select.rst index cf829c6386a..3610e4268fa 100644 --- a/doc/radosgw/s3select.rst +++ b/doc/radosgw/s3select.rst @@ -15,7 +15,7 @@ The S3 Select engine makes it possible to use an SQL-like syntax to select a restricted subset of data stored in an S3 object. The S3 Select engine facilitates the use of higher level, analytic applications (for example: SPARK-SQL). The ability of the S3 Select engine to target a proper subset of -structed data within an S3 object decreases latency and increases throughput. +structured data within an S3 object decreases latency and increases throughput. For example: assume that a user needs to extract a single column that is filtered by another column, and that these columns are stored in a CSV file in @@ -72,7 +72,7 @@ review the below s3-select-feature-table_. Error Handling ~~~~~~~~~~~~~~ -Upon an error being detected, RGW returns 400-Bad-Request and a specific error message sends back to the client. +Upon an error being detected, RGW returns 400-Bad-Request and a specific error message is sent back to the client. Currently, there are 2 main types of error. **Syntax error**: the s3select parser rejects user requests that are not aligned with parser syntax definitions, as
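The S3 Select use case described above (extracting one CSV column filtered by another column) can be mimicked client-side with a short standalone sketch; this only illustrates the semantics of such a query, not the s3select engine itself:

```python
import csv
import io

def select_col1_where_col2_eq(csv_text: str, value: str):
    """Return column _1 of every CSV row whose column _2 equals `value`,
    mirroring: select s._1 from s3object s where s._2 = '<value>';"""
    reader = csv.reader(io.StringIO(csv_text))
    return [row[0] for row in reader if len(row) > 1 and row[1] == value]

data = "a,4\nb,5\nc,4\n"
print(select_col1_where_col2_eq(data, "4"))  # ['a', 'c']
```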
Additional Syntax Support ~~~~~~~~~~~~~~~~~~~~~~~~~ -S3select syntax supports table-alias ``select s._1 from s3object s where s._2 = ‘4’;`` +S3select syntax supports table-alias ``select s._1 from s3object s where s._2 = '4';`` S3select syntax supports case insensitive ``Select SUM(Cast(_1 as int)) FROM S3Object;`` @@ -677,7 +677,7 @@ A JSON Query Example # the from-clause define a single row. # _1 points to root object level. - # _1.age appears twice in Documnet-row, the last value is used for the operation. + # _1.age appears twice in the Document row, the last value is used for the operation. query = "select _1.firstname,_1.key_after_array,_1.age+4,_1.description.main_desc,_1.description.second_desc from s3object[*];"; expected_result = Joe_2,XXX,25,value_1,value_2 @@ -693,7 +693,7 @@ A JSON Query Example BOTO3 ----- -using BOTO3 is "natural" and easy due to AWS-cli support. +Using BOTO3 is "natural" and easy due to AWS-cli support. :: diff --git a/doc/radosgw/session-tags.rst b/doc/radosgw/session-tags.rst index 67a85389593..567024dd903 100644 --- a/doc/radosgw/session-tags.rst +++ b/doc/radosgw/session-tags.rst @@ -5,7 +5,7 @@ Session tags for Attribute Based Access Control in STS Session tags are key-value pairs that can be passed while federating a user (currently it is only supported as part of the web token passed to AssumeRoleWithWebIdentity). The session tags are passed along as aws:PrincipalTag in the session credentials (temporary credentials) -that is returned back by STS. These Principal Tags consists of the session tags that come in +that is returned by STS. These Principal Tags consist of the session tags that come in as part of the web token and the tags that are attached to the role being assumed. Please note that the tags have to be always specified in the following namespace: https://aws.amazon.com/tags. @@ -70,7 +70,7 @@ Tag Keys The following are the tag keys that can be used in the role's trust policy or the role's permission policy: 1. 
aws:RequestTag: This key is used to compare the key-value pair passed in the request with the key-value pair -in the role's trust policy. In case of AssumeRoleWithWebIdentity, the session tags that are passed by the idp +in the role's trust policy. In case of AssumeRoleWithWebIdentity, the session tags that are passed by the IDP in the web token can be used as aws:RequestTag in the role's trust policy based on which a federated user can be allowed to assume a role. @@ -90,7 +90,7 @@ An example of a role trust policy that uses aws:RequestTag is as follows: } 2. aws:PrincipalTag: This key is used to compare the key-value pair attached to the principal with the key-value pair -in the policy. In case of AssumeRoleWithWebIdentity, the session tags that are passed by the idp in the web token appear +in the policy. In case of AssumeRoleWithWebIdentity, the session tags that are passed by the IDP in the web token appear as Principal tags in the temporary credentials once a user has been authenticated, and these tags can be used as aws:PrincipalTag in the role's permission policy. 
@@ -168,13 +168,13 @@ An example of a role's permission policy that uses s3:ResourceTag is as follows: { "Effect":"Allow", "Action":["s3:PutBucketTagging"], - "Resource":["arn:aws:s3::t1tenant:my-test-bucket\","arn:aws:s3::t1tenant:my-test-bucket/*"] + "Resource":["arn:aws:s3::t1tenant:my-test-bucket","arn:aws:s3::t1tenant:my-test-bucket/*"] }, { "Effect":"Allow", "Action":["s3:*"], "Resource":["*"], - "Condition":{"StringEquals":{"s3:ResourceTag/Department":\"Engineering"}} + "Condition":{"StringEquals":{"s3:ResourceTag/Department":"Engineering"}} } } @@ -213,7 +213,7 @@ the s3 resource (object/ bucket): { "Effect":"Allow", "Action":["s3:PutBucketTagging"], - "Resource":["arn:aws:s3::t1tenant:my-test-bucket\","arn:aws:s3::t1tenant:my-test-bucket/*"] + "Resource":["arn:aws:s3::t1tenant:my-test-bucket","arn:aws:s3::t1tenant:my-test-bucket/*"] }, { "Effect":"Allow", diff --git a/doc/radosgw/swift/containerops.rst b/doc/radosgw/swift/containerops.rst index 434b90ef5a3..5556321f23b 100644 --- a/doc/radosgw/swift/containerops.rst +++ b/doc/radosgw/swift/containerops.rst @@ -30,7 +30,7 @@ Create a Container To create a new container, make a ``PUT`` request with the API version, account, and the name of the new container. The container name must be unique, must not -contain a forward-slash (/) character, and should be less than 256 bytes. You +contain a forward slash (``/``) character, and should be less than 256 bytes. You may include access control headers and metadata headers in the request. The operation is idempotent; that is, if you make a request to create a container that already exists, it will return with a HTTP 202 return code, but will not @@ -67,7 +67,7 @@ Headers ``X-Container-Meta-{key}`` -:Description: A user-defined meta data key that takes an arbitrary string value. +:Description: A user-defined metadata key that takes an arbitrary string value. 
:Type: String :Required: No @@ -261,7 +261,7 @@ Request Headers ``X-Container-Meta-{key}`` -:Description: A user-defined meta data key that takes an arbitrary string value. +:Description: A user-defined metadata key that takes an arbitrary string value. :Type: String :Required: No diff --git a/doc/radosgw/swift/tempurl.rst b/doc/radosgw/swift/tempurl.rst index 41dbb0ccb85..fe83e569fa0 100644 --- a/doc/radosgw/swift/tempurl.rst +++ b/doc/radosgw/swift/tempurl.rst @@ -2,7 +2,7 @@ Temp URL Operations ==================== -To allow temporary access (for eg for `GET` requests) to objects +To allow temporary access (for example, for `GET` requests) to objects without the need to share credentials, temp url functionality is supported by swift endpoint of radosgw. For this functionality, initially the value of `X-Account-Meta-Temp-URL-Key` and optionally @@ -69,7 +69,7 @@ Temporary URL uses a cryptographic HMAC-SHA1 signature, which includes the following elements: #. The value of the Request method, "GET" for instance -#. The expiry time, in format of seconds since the epoch, ie Unix time +#. The expiry time, in seconds since the epoch, i.e. Unix time #. The request path starting from "v1" onwards The above items are normalized with newlines appended between them, diff --git a/doc/radosgw/swift/tutorial.rst b/doc/radosgw/swift/tutorial.rst index fca7c30ee0f..ea8af856d7d 100644 --- a/doc/radosgw/swift/tutorial.rst +++ b/doc/radosgw/swift/tutorial.rst @@ -3,7 +3,7 @@ ========== The Swift-compatible API tutorials follow a simple container-based object -lifecycle. The first step requires you to setup a connection between your +lifecycle. The first step requires you to set up a connection between your client and the RADOS Gateway server. Then, you may follow a natural container and object lifecycle, including adding and retrieving object metadata. 
See example code for the following languages: diff --git a/doc/radosgw/sync-modules.rst b/doc/radosgw/sync-modules.rst index 2dc8330bfb6..66d81db7401 100644 --- a/doc/radosgw/sync-modules.rst +++ b/doc/radosgw/sync-modules.rst @@ -5,15 +5,15 @@ Sync Modules .. versionadded:: Kraken The :ref:`multisite` functionality of RGW introduced in Jewel allowed the ability to -create multiple zones and mirror data and metadata between them. ``Sync Modules`` -are built atop of the multisite framework that allows for forwarding data and +create multiple zones and mirror data and metadata between them. *Sync Modules* +are built on top of the multisite framework that allows for forwarding data and metadata to a different external tier. A sync module allows for a set of actions to be performed whenever a change in data occurs (metadata ops like bucket or user creation etc. are also regarded as changes in data). As the RGW multisite changes are eventually consistent at remote sites, changes are propagated asynchronously. This would allow for unlocking use cases such as backing up the object storage to an external cloud cluster or a custom backup solution using -tape drives, indexing metadata in ElasticSearch etc. +tape drives, indexing metadata in Elasticsearch etc. A sync module configuration is local to a zone. The sync module determines whether the zone exports data or can only consume data that was modified in @@ -27,7 +27,7 @@ for configuring any sync plugin. .. 
toctree:: :maxdepth: 1 - ElasticSearch Sync Module + Elasticsearch Sync Module Cloud Sync Module Archive Sync Module diff --git a/doc/radosgw/troubleshooting.rst b/doc/radosgw/troubleshooting.rst index 4a084e82a7a..a26d22d7cfe 100644 --- a/doc/radosgw/troubleshooting.rst +++ b/doc/radosgw/troubleshooting.rst @@ -165,8 +165,8 @@ Object Storage services, you can resolve this problem in a few ways: 405 MethodNotAllowed -------------------- -If you receive an 405 error, check to see if you have the S3 subdomain set up correctly. -You will need to have a wild card setting in your DNS record for subdomain functionality +If you receive a 405 error, check to see if you have the S3 subdomain set up correctly. +You must have a wildcard in your DNS record for subdomain functionality to work properly. Also, check to ensure that the default site is disabled. :: diff --git a/doc/radosgw/vault.rst b/doc/radosgw/vault.rst index afda0e040b8..797344e06ca 100644 --- a/doc/radosgw/vault.rst +++ b/doc/radosgw/vault.rst @@ -106,7 +106,7 @@ Vault agent, such as having the Vault agent listen only to localhost. Token Policies for the Object Gateway ------------------------------------- -All Vault tokens have powers as specified by the polices attached +All Vault tokens have powers as specified by the policies attached to that token. Multiple policies may be associated with one token. You should only use the policies necessary for your configuration.