trust policy of the role being assumed requires MFA.
#. AssumeRoleWithWebIdentity: Returns a set of temporary credentials for users that
- have been authenticated by a web/mobile app by an OpenID Connect /OAuth2.0 Identity Provider.
+ have been authenticated by a web/mobile app by an OpenID Connect/OAuth 2.0 Identity Provider.
Currently, Keycloak has been tested and integrated with RGW.
Parameters:
Its default value is 3600.
**ProviderId** (String/ Optional): Fully qualified host component of the domain name
- of the IDP. Valid only for OAuth2.0 tokens (not for OpenID Connect tokens).
+ of the IDP. Valid only for OAuth 2.0 tokens (not for OpenID Connect tokens).
- **WebIdentityToken** (String/ Required): The OpenID Connect/ OAuth2.0 token, which the
+ **WebIdentityToken** (String/ Required): The OpenID Connect/OAuth 2.0 token, which the
application gets in return after authenticating its user with an IDP.
#. GetCallerIdentity: Returns details about the IAM user or role whose credentials are used to call the operation.
Response:
- **Account** (The account ID that owns or contains the calling entity.
+ **Account** The account ID that owns or contains the calling entity.
**Arn** The ARN associated with the calling entity.
- **UserId** The unique identifier of the calling entity(user or assumed role).
+ **UserId** The unique identifier of the calling entity (user or assumed role).
.. note:: No permissions are required to perform GetCallerIdentity.
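
For illustration, a minimal boto3 sketch of calling GetCallerIdentity against
RGW might look as follows (the endpoint URL and credentials are placeholder
assumptions):

.. code-block:: python

   import boto3

   # hypothetical RGW endpoint and credentials
   sts_client = boto3.client('sts',
                             aws_access_key_id='<access-key>',
                             aws_secret_access_key='<secret-key>',
                             endpoint_url='http://localhost:8000',
                             region_name='')

   identity = sts_client.get_caller_identity()
   print(identity['Account'], identity['Arn'], identity['UserId'])
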
resp = s3client.list_buckets()
#. The following is an example of AssumeRoleWithWebIdentity API call, where an external app that has users authenticated with
- an OpenID Connect/ OAuth2 IDP (Keycloak in this example), assumes a role to get back temporary credentials and access S3 resources
+ an OpenID Connect/OAuth 2.0 IDP (Keycloak in this example), assumes a role to get back temporary credentials and access S3 resources
according to permission policy of the role.
.. code-block:: python
)
oidc_response = iam_client.create_open_id_connect_provider(
- Url=<URL of the OpenID Connect Provider,
+ Url=<URL of the OpenID Connect Provider>,
ClientIDList=[
<Client id registered with the IDP>
],
region_name='',
)
- response = client.assume_role_with_web_identity(
+ response = sts_client.assume_role_with_web_identity(
RoleArn=role_response['Role']['Arn'],
RoleSessionName='Bob',
DurationSeconds=3600,
Parameters:
**DurationSeconds** (Integer/ Optional): The duration in seconds for which the
credentials should remain valid. Its default value is 3600. Its default max
- value is 43200 which is can be configured using rgw sts max session duration.
+ value is 43200 which can be configured using rgw sts max session duration.
**SerialNumber** (String/ Optional): The ID number of the MFA device associated
with the user making the GetSessionToken call.
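
As a hedged boto3 sketch of the above (the endpoint, credentials, and MFA
values are placeholder assumptions):

.. code-block:: python

   import boto3

   sts_client = boto3.client('sts',
                             aws_access_key_id='<access-key>',
                             aws_secret_access_key='<secret-key>',
                             endpoint_url='http://localhost:8000',
                             region_name='')

   response = sts_client.get_session_token(
       DurationSeconds=3600,
       SerialNumber='<mfa-serial>',  # only needed when MFA is configured
       TokenCode='<totp-pin>'        # only needed when MFA is configured
   )
   temp_creds = response['Credentials']
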
rgw keystone api version = {keystone api version}
rgw keystone implicit tenants = {true for private tenant for each new user}
- rgw keystone admin password = {keystone service tenant user name}
- rgw keystone admin user = keystone service tenant user password}
+ rgw keystone admin user = {keystone service tenant user name}
+ rgw keystone admin password = {keystone service tenant user password}
rgw keystone accepted roles = {accepted user roles}
rgw keystone token cache size = {number of tokens to cache}
rgw s3 auth use keystone = true
rgw_ldap_dnattr = {attribute being used in the constructed search filter to match a username}
rgw_ldap_searchfilter = {search filter}
-The details of the integrating ldap with Ceph Object Gateway can be found here:
+The details of integrating LDAP with Ceph Object Gateway can be found here:
:doc:`ldap-auth`
Note: By default, STS and S3 APIs co-exist in the same namespace, and both S3
| user_id | 40a7140e424f493d8165abc652dc731c |
+------------+--------------------------------------------------------+
-2. Use the credentials created in the step 1. to get back a set of temporary
+2. Use the credentials created in step 1 to get back a set of temporary
credentials using GetSessionToken API.
.. code-block:: python
DurationSeconds=43200
)
-3. The temporary credentials obtained in step 2. can be used for making S3 calls:
+3. The temporary credentials obtained in step 2 can be used for making S3 calls:
.. code-block:: python
refers to an IAM user named A in that account. The Ceph Object Gateway also
supports tenant names in that position.
-Accounts IDs can also be used in ACLs for a ``Grantee`` of type ``CanonicalUser``.
+Account IDs can also be used in ACLs for a ``Grantee`` of type ``CanonicalUser``.
User IDs are also supported here.
IAM Policy
as described in :ref:`radosgw-notifications` and :ref:`Supported Zone Features <radosgw-zone-features>`.
#. **Migration Impact:** When a non-account user is migrated to an account, the
- the existing notification topics remain accessible through the RADOS Gateway admin API,
+ existing notification topics remain accessible through the RADOS Gateway admin API,
but the user loses access to them via the SNS Topic API. Despite this, the topics
remain functional, and bucket notifications will continue to be delivered as expected.
+-----+ Subuser |
+-----------+
-Users and subusers can be created, modified, viewed, suspended and removed.
-you may add a Display names and an email addresses can be added to user
+Users and subusers can be created, modified, viewed, suspended, and removed.
+Display names and email addresses can be added to user
profiles. Keys and secrets can either be specified or generated automatically.
When generating or specifying keys, remember that user IDs correspond to S3 key
types and subuser IDs correspond to Swift key types.
``end``
-:Description: Date and (optional) time that specifies the end time of the requested data (none inclusive).
+:Description: Date and (optional) time that specifies the end time of the requested data (non-inclusive).
:Type: String
:Example: ``2012-09-25 16:00:00``
:Required: No
``uid``
-:Description: The user ID under which a subuser is to be created.
+:Description: The user ID under which a subuser is to be created.
:Type: String
:Example: ``foo_user``
:Required: Yes
``purge-objects``
-:Description: Remove a buckets objects before deletion.
+:Description: Remove a bucket's objects before deletion.
:Type: Boolean
:Example: True [False]
:Required: No
Rate Limit
==========
-The Admin Operations API enables you to set and get ratelimit configurations on users and on bucket and global rate limit configurations. See `Rate Limit Management`_ for additional details.
+The Admin Operations API enables you to set and get rate limit configurations for users and buckets, as well as global rate limit configurations. See `Rate Limit Management`_ for additional details.
A rate limit includes the maximum number of operations and/or bytes per accumulation interval, separated by read and/or write (and additionally list and get operations),
applied to a bucket and/or to a user, and the maximum storage size in megabytes.
.. highlight:: python
-The `rgw` python module provides file-like access to rgw.
+The `rgw` Python module provides file-like access to RGW.
API Reference
=============
Bucket Logging
==============
-.. versionadded:: T
+.. versionadded:: Tentacle
.. contents::
Journal
```````
-The "Journal" record format uses minimum amount of data for journaling
+The "Journal" record format uses a minimum amount of data for journaling
bucket changes (this is a Ceph extension).
- bucket owner (or dash if empty)
Standard
````````
-The "Standard" record format is based on `AWS Logging Record Format`_.
+The "Standard" record format is based on the `AWS Logging Record Format`_.
- bucket owner (or dash if empty)
- bucket name (or dash if empty) in the format: ``[tenant:]<bucket name>``
+-----------------------+----------------------+----------------+
| | s3:x-amz-acl | |
| | s3:x-amz-grant-<perm>| |
-|s3:createBucket | where perm is one of | |
+|s3:CreateBucket | where perm is one of | |
| | read/write/read-acp | |
| | write-acp/ | |
| | full-control | |
| |s3:RequestObjectTag/<tag-key> | |
| | | |
+-----------------------------+---------------------------------------------------+-------------------+
-|s3:PutObjectAcl |s3:x-amz-acl & s3-amz-grant-<perm> | |
+|s3:PutObjectAcl |s3:x-amz-acl & s3:x-amz-grant-<perm> | |
|s3:PutObjectVersionAcl | | |
| +---------------------------------------------------+-------------------+
| |s3:ExistingObjectTag/<tag-key> | |
RGW deployment.
This feature currently enables the restoration of objects transitioned to
-S3-compatible cloud services. In order to faciliate this,
+S3-compatible cloud services. In order to facilitate this,
the ``retain_head_object`` option should be set to ``true``
in the ``tier-config`` when configuring the storage class.
{
"access_key": <access>,
- "secret": <secret>,`
+ "secret": <secret>,
"endpoint": <endpoint>,
"region": <region>,
"host_style": <path | virtual>,
"access_key": <access>,
"secret": <secret>,
"endpoint": <endpoint>,
- "host_style" <path | virtual>,
+ "host_style": <path | virtual>,
},
"acls": [
{
"access_key": <access>,
"secret": <secret>,
"endpoint": <endpoint>,
- "host_style" <path | virtual>, # optional
+ "host_style": <path | virtual>, # optional
} ... ],
"acl_profiles": [
{
"access_key": "",
"secret": "",
"host_style": "path",
- "location_constraint": "";
+ "location_constraint": "",
"target_storage_class": "",
"target_path": "",
"acl_mappings": [],
Ceph Object Gateway Config Reference
======================================
-The following settings may added to the Ceph configuration file (i.e., usually
+The following settings may be added to the Ceph configuration file (i.e., usually
``ceph.conf``) under the ``[client.radosgw.{instance-name}]`` section. The
settings may contain default values. If you do not specify each setting in the
Ceph configuration file, the default value will be set automatically.
D3N improves the performance of big-data jobs by speeding up repeatedly accessed dataset reads from the data lake.
Cache servers are located in the datacenter on the access side of potential network and storage bottlenecks.
-D3Ns two-layer logical cache forms a traditional caching hierarchy :sup:`*`
+D3N's two-layer logical cache forms a traditional caching hierarchy :sup:`*`
where caches nearer the client have the lowest access latency and overhead,
while caches in higher levels in the hierarchy are slower (requiring multiple hops to access).
The layer 1 cache server nearest to the client handles object requests by breaking them into blocks,
Logs
----
-- D3N related log lines in ``radosgw.*.log`` contain the string ``d3n`` (case insensitive).
-- Low level D3N logs can be enabled by the ``debug_rgw_datacache`` subsystem (up to ``debug_rgw_datacache=30``).
+- D3N-related log lines in ``radosgw.*.log`` contain the string ``d3n`` (case insensitive).
+- Low-level D3N logs can be enabled by the ``debug_rgw_datacache`` subsystem (up to ``debug_rgw_datacache=30``).
Config Reference
================
-The following D3N related settings can be added to the Ceph configuration file
+The following D3N-related settings can be added to the Ceph configuration file
(i.e., usually ``ceph.conf``) under the ``[client.rgw.{instance-name}]`` section.
.. confval:: rgw_d3n_l1_local_datacache_enabled
Detection of resharding opportunities runs as a background process
that periodically scans all buckets. A bucket that requires resharding
is added to a queue. A thread runs in the background and processes the
-queueued resharding tasks one at a time.
+queued resharding tasks one at a time.
Starting with Tentacle, dynamic resharding has the ability to reduce
the number of shards. Once the condition allowing reduction is noted,
Since dynamic resharding can now reduce the number of shards,
administrators may want to prevent the number of shards from becoming
-too low, for example if the expect the number of objects to increase
+too low, for example if they expect the number of objects to increase
in the future. This command allows administrators to set a per-bucket
minimum. This does not, however, prevent administrators from manually
resharding to a lower number of shards.
.. _radosgw-elastic-sync-module:
=========================
-ElasticSearch Sync Module
+Elasticsearch Sync Module
=========================
.. versionadded:: Kraken
.. note::
- As of 31 May 2020, only Elasticsearch 6 and lower are supported. ElasticSearch 7 is not supported.
+ As of 31 May 2020, only Elasticsearch 6 and lower are supported. Elasticsearch 7 is not supported.
-This sync module writes the metadata from other zones to `ElasticSearch`_. As of
-luminous this is a json of data fields we currently store in ElasticSearch.
+This sync module writes the metadata from other zones to `Elasticsearch`_. As of
+Luminous, this is a JSON document of the data fields we currently store in Elasticsearch.
::
-ElasticSearch tier type configurables
+Elasticsearch tier type configurables
-------------------------------------
* ``endpoint``
* ``override_index_path`` (string)
-if not empty, this string will be used as the elasticsearch index
+if not empty, this string will be used as the Elasticsearch index
path. Otherwise the index path will be determined and generated on
sync initialization.
.. versionadded:: Luminous
-Since the ElasticSearch cluster now stores object metadata, it is important that
-the ElasticSearch endpoint is not exposed to the public and only accessible to
+Since the Elasticsearch cluster now stores object metadata, it is important that
+the Elasticsearch endpoint is not exposed to the public and only accessible to
-the cluster administrators. For exposing metadata queries to the end user itself
-this poses a problem since we'd want the user to only query their metadata and
-not of any other users, this would require the ElasticSearch cluster to
-authenticate users in a way similar to RGW does which poses a problem.
+the cluster administrators. Exposing metadata queries to the end users themselves
+poses a problem, since we'd want each user to query only their own metadata and
+not that of any other users. This would require the Elasticsearch cluster to
+authenticate users in a way similar to how RGW does, which poses a problem.
As of Luminous, RGW in the metadata master zone can now service end user
-requests. This allows for not exposing the elasticsearch endpoint in public and
+requests. This allows for not exposing the Elasticsearch endpoint in public and
also solves the authentication and authorization problem since RGW itself can
authenticate the end user requests. For this purpose RGW introduces a new query
-in the bucket APIs that can service elasticsearch requests. All these requests
+in the bucket APIs that can service Elasticsearch requests. All these requests
must be sent to the metadata master zone.
Syntax
~~~~~~
-Get an elasticsearch query
+Get an Elasticsearch query
``````````````````````````
::
:Description: Sets the listening address in the form ``address[:port]``, where
the address is an IPv4 address string in dotted decimal form, or
an IPv6 address in hexadecimal notation surrounded by square
- brackets. Specifying a IPv6 endpoint would listen to IPv6 only. The
+ brackets. Specifying an IPv6 endpoint would listen to IPv6 only. The
optional port defaults to 80 for ``endpoint`` and 443 for
``ssl_endpoint``. Can be specified multiple times as in
``endpoint=[::1] endpoint=192.168.0.100:8000``.
It is possible to integrate the Ceph Object Gateway with Keystone, the OpenStack
identity service. This sets up the gateway to accept Keystone as the users'
authority. A user that Keystone authorizes to access the gateway will also be
-automatically created on the Ceph Object Gateway (if didn't exist beforehand). A
+automatically created on the Ceph Object Gateway (if it didn't exist beforehand). A
token that Keystone validates will be considered as valid by the gateway.
The following configuration options are available for Keystone integration::
recommended to be disabled in production environments. The service tenant
credentials should have admin privileges; for more details, refer to the `OpenStack
Keystone documentation`_, which explains the process in detail. The requisite
-configuration options for are::
+configuration options are::
rgw keystone admin user = {keystone service tenant user name}
rgw keystone admin password = {keystone service tenant user password}
- rgw keystone admin password = {keystone service tenant user password path} # preferred
+ rgw keystone admin password path = {keystone service tenant user password path} # preferred
rgw keystone admin tenant = {keystone service tenant name}
The Keystone URL is the Keystone admin RESTful API URL. The admin token is the
token that is configured internally in Keystone for admin requests.
-OpenStack Keystone may be terminated with a self signed ssl certificate, in
-order for radosgw to interact with Keystone in such a case, you could either
-install Keystone's ssl certificate in the node running radosgw. Alternatively
-radosgw could be made to not verify the ssl certificate at all (similar to
+OpenStack Keystone may be terminated with a self-signed SSL certificate. In
+order for radosgw to interact with Keystone in such a case, you can either
+install Keystone's SSL certificate on the node running radosgw, or make
+radosgw skip verification of the SSL certificate entirely (similar to
OpenStack clients with a ``--insecure`` switch) by setting the value of the
configurable ``rgw keystone verify ssl`` to false.
.. _OpenStack Keystone documentation: http://docs.openstack.org/developer/keystone/configuringservices.html#setting-up-projects-users-and-roles
-Cross Project(Tenant) Access
-----------------------------
+Cross-Project (Tenant) Access
+-----------------------------
In order to let a project (earlier called a 'tenant') access buckets belonging to a different project, the following config option needs to be enabled::
names (keys).
It is possible to create multiple data pools and make it so that
-different users\` buckets will be created in different RADOS pools by default,
+different users' buckets will be created in different RADOS pools by default,
thus providing the necessary scaling. The layout and naming of these pools
is controlled by a 'policy' setting.[3]
is the ``HEAD`` that contains metadata including manifest, ACLs, content type,
ETag, and user-defined metadata. The metadata is stored in xattrs.
The ``HEAD`` object may also inline up to :confval:`rgw_max_chunk_size` of object data, for efficiency
-and atomicity. This enables a convenenient tiering strategy: index pools
+and atomicity. This enables a convenient tiering strategy: index pools
are necessarily replicated (cannot be EC) and should be placed on fast SSD
OSDs. With a mix of small/hot RGW objects and larger, warm/cold RGW
objects like video files, the larger objects will automatically be placed
How it works
============
-The Ceph Object Gateway extracts the users LDAP credentials from a token. A
+The Ceph Object Gateway extracts the user's LDAP credentials from a token. A
search filter is constructed with the user name. The Ceph Object Gateway uses
the configured service account to search the directory for a matching entry. If
an entry is found, the Ceph Object Gateway attempts to bind to the found
Using a custom search filter to limit user access
=================================================
-There are two ways to use the ``rgw_search_filter`` parameter:
+There are two ways to use the ``rgw_ldap_searchfilter`` parameter:
Specifying a partial filter to further limit the constructed search filter
--------------------------------------------------------------------------
The Ceph Object Gateway will generate the search filter as usual with the
user name from the token and the value of ``rgw_ldap_dnattr``. The constructed
-filter is then combined with the partial filter from the ``rgw_search_filter``
+filter is then combined with the partial filter from the ``rgw_ldap_searchfilter``
attribute. Depending on the user name and the settings the final search filter
might become:
"(&(uid=@USERNAME@)(memberOf=cn=ceph-users,ou=groups,dc=mycompany,dc=com))"
-.. note:: Using the ``memberOf`` attribute in LDAP searches requires server side
- support from you specific LDAP server implementation.
+.. note:: Using the ``memberOf`` attribute in LDAP searches requires server-side
+ support from your specific LDAP server implementation.
Generating an access token for LDAP authentication
==================================================
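
The token is normally generated with the ``radosgw-token`` utility. As an
illustration only, the following Python sketch performs the equivalent
encoding, assuming the token is the base64-encoded JSON structure that
``radosgw-token --encode --ttype=ldap`` produces:

.. code-block:: python

   import base64
   import json

   # hypothetical LDAP credentials
   token = {"RGW_TOKEN": {"version": 1, "type": "ldap",
                          "id": "ldap-username", "key": "ldap-password"}}

   encoded = base64.b64encode(json.dumps(token).encode()).decode()
   # clients then present the encoded string in place of the S3 access key
   print(encoded)
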
This feature allows users to assign execution context to Lua scripts. The supported contexts are:
- - ``prerequest`` which will execute a script before each operation is performed
- - ``postrequest`` which will execute after each operation is performed
- - ``background`` which will execute within a specified time interval
- - ``getdata`` which will execute on objects' data when objects are downloaded
- - ``putdata`` which will execute on objects' data when objects are uploaded
+- ``prerequest`` which will execute a script before each operation is performed
+- ``postrequest`` which will execute after each operation is performed
+- ``background`` which will execute within a specified time interval
+- ``getdata`` which will execute on objects' data when objects are downloaded
+- ``putdata`` which will execute on objects' data when objects are uploaded
A request (pre or post) or data (get or put) context script may be constrained to operations belonging to a specific tenant's users.
The request context script can also access fields in the request and modify certain fields, as well as the `Global RGW Table`_.
By default, all Lua standard libraries are available in the script, however, in order to allow for additional Lua modules to be used in the script, we support adding packages to an allowlist:
- - Make sure that the ``luarocks`` package manager is installed on the host.
- - Adding a Lua package to the allowlist, or removing a packge from it does not install or remove it. For the changes to take affect a "reload" command should be called.
- - In addition all packages in the allowlist are being re-installed using the luarocks package manager on radosgw restart.
- - To add a package that contains C source code that needs to be compiled, use the ``--allow-compilation`` flag. In this case a C compiler needs to be available on the host
- - Lua packages are installed in, and used from, a directory local to the radosgw. Meaning that Lua packages in the allowlist are separated from any Lua packages available on the host.
- By default, this directory would be ``/tmp/luarocks/<entity name>``. Its prefix part (``/tmp/luarocks/``) could be set to a different location via the ``rgw_luarocks_location`` configuration parameter.
- Note that this parameter should not be set to one of the default locations where luarocks install packages (e.g. ``$HOME/.luarocks``, ``/usr/lib64/lua``, ``/usr/share/lua``).
+- Make sure that the ``luarocks`` package manager is installed on the host.
+- Adding a Lua package to the allowlist, or removing a package from it does not install or remove it. For the changes to take effect a "reload" command should be called.
+- In addition, all packages in the allowlist are re-installed using the luarocks package manager on radosgw restart.
+- To add a package that contains C source code that needs to be compiled, use the ``--allow-compilation`` flag. In this case a C compiler needs to be available on the host.
+- Lua packages are installed in, and used from, a directory local to the radosgw. Meaning that Lua packages in the allowlist are separated from any Lua packages available on the host.
+ By default, this directory would be ``/tmp/luarocks/<entity name>``. Its prefix part (``/tmp/luarocks/``) could be set to a different location via the ``rgw_luarocks_location`` configuration parameter.
+ Note that this parameter should not be set to one of the default locations where luarocks install packages (e.g. ``$HOME/.luarocks``, ``/usr/lib64/lua``, ``/usr/share/lua``).
.. toctree::
Context Free Functions
----------------------
+
Debug Log
~~~~~~~~~
The ``RGWDebugLog()`` function accepts a string and prints it to the debug log with priority 20.
+----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
-| Field | Type | Description | Iterable | Writeable | Optional |
+| Field | Type | Description | Iterable | Writable | Optional |
+====================================================+==========+==============================================================+==========+===========+==========+
| ``Request.RGWOp`` | string | radosgw operation | no | no | no |
+----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
| ``Request.Bucket.Tenant`` | string | tenant of the bucket | no | no | yes |
+----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
-| ``Request.Bucket.Name`` | string | bucket name (writeable only in ``prerequest`` context) | no | yes | no |
+| ``Request.Bucket.Name`` | string | bucket name (writable only in ``prerequest`` context) | no | yes | no |
+----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
| ``Request.Bucket.Marker`` | string | bucket marker (initial id) | no | no | yes |
+----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
| ``Request.Bucket.Quota.MaxObjects`` | integer | bucket quota max number of objects | no | no | no |
+----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
-| ``Reques.Bucket.Quota.Enabled`` | boolean | bucket quota is enabled | no | no | no |
+| ``Request.Bucket.Quota.Enabled`` | boolean | bucket quota is enabled | no | no | no |
+----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
| ``Request.Bucket.Quota.Rounded`` | boolean | bucket quota is rounded to 4K | no | no | no |
+----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
Request Functions
---------------------
+-----------------
+
Operations Log
~~~~~~~~~~~~~~
The ``Request.Log()`` function prints the requests into the operations log. This function has no parameters. It returns 0 for success and an error code if it fails.
Request Blocking and Error Handling
-----------------------------------
+
Script Execution Errors
~~~~~~~~~~~~~~~~~~~~~~~
If the Lua script fails with a syntax or runtime error, RGW will log the error. The request that triggered the script will still go through.
The Lua script's return value is evaluated only during the prerequest context and is ignored in any other RGW request-processing context.
The HTTP response status code is 403 (Forbidden) by default when a request is blocked by Lua. The response code can be changed using ``Request.Response.HTTPStatusCode`` and ``Request.Response.HTTPStatus``.
If a request is aborted this way, the ``data`` and ``postrequest`` context will also be aborted.
+
Background Context
---------------------
+------------------
The ``background`` context may be used for purposes that include analytics, monitoring, caching data for other context executions.
+
- Background script execution default interval is 5 seconds.
Data Context
---------------------
+------------
Both ``getdata`` and ``putdata`` contexts have the following fields:
+
- ``Data`` which is read-only and iterable (byte by byte). In case that an object is uploaded or retrieved in multiple chunks, the ``Data`` field will hold data of one chunk at a time.
- ``Offset`` which is holding the offset of the chunk within the entire object.
- The ``Request`` fields and the background ``RGW`` table are also available in these contexts.
Global RGW Table
--------------------
The ``RGW`` Lua table is accessible from all contexts and saves data written to it
-during execution so that it may be read and used later during other executions, from the same context of a different one.
+during execution so that it may be read and used later during other executions, from the same context or a different one.
+
- Each RGW instance has its own private and ephemeral ``RGW`` Lua table that is lost when the daemon restarts. Note that ``background`` context scripts will run on every instance.
- The maximum number of entries in the table is 100,000. Each entry has a string key and a value with a combined length of no more than 1KB.
A Lua script will abort with an error if the number of entries or entry size exceeds these limits.
Request.Trace.AddEvent("second event", event_attrs)
- The entropy value of an object could be used to detect whether the object is encrypted.
- The following script calculates the entropy and size of uploaded objects and print to debug log
+ The following script calculates the entropy and size of uploaded objects and prints to debug log.
-in the ``putdata`` context, add the following script
+Add the following script in the ``putdata`` context:
.. code-block:: lua
The Ceph Object Gateway uses :ref:`Perf Counters` to track metrics. The counters can be labeled (:ref:`Labeled Perf Counters`). When counters are labeled, they are stored in Ceph Object Gateway-specific caches.
-These metrics can be sent to the time series database Prometheus to visualize a cluster wide view of usage data (ex: number of S3 put operations on a specific bucket) over time.
+These metrics can be sent to the time series database Prometheus to visualize a cluster-wide view of usage data (for example, number of S3 put operations on a specific bucket) over time.
.. contents::
#. Number of users in the cluster
#. Number of buckets in the cluster
#. Memory usage of the Ceph Object Gateway
-#. Disk and memory usage of Promtheus.
+#. Disk and memory usage of Prometheus.
To help calculate the Ceph Object Gateway's memory usage of a cache, it should be noted that each cache entry, encompassing all of the op metrics, is 1360 bytes. This is an estimate and subject to change if metrics are added or removed from the op metrics list.
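
For example, a back-of-the-envelope estimate of the cache memory footprint
might look like this (the entity counts are assumptions):

.. code-block:: python

   ENTRY_SIZE_BYTES = 1360  # per-entry estimate quoted above

   # hypothetical cluster: 2,000 users and 10,000 buckets with cached op metrics
   users, buckets = 2000, 10000

   total_bytes = (users + buckets) * ENTRY_SIZE_BYTES
   print(f"estimated cache memory: {total_bytes / 2**20:.1f} MiB")  # ~15.6 MiB
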
users to require the use of a one-time password when removing
objects on certain buckets. The buckets need to be configured
with versioning and MFA enabled which can be done through
-the S3 api.
+the S3 API.
Time-based one-time password tokens can be assigned to a user
through radosgw-admin. Each token has a secret seed, and a serial
While the MFA IDs are set on the user's metadata, the
actual MFA one-time password configuration resides in the local zone's
-osds. Therefore, in a multi-site environment it is advisable to use
+OSDs. Therefore, in a multi-site environment it is advisable to use
different tokens for different zones.
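
As a hedged sketch of the S3 side of this flow (bucket name, serial number,
and PIN are placeholders), versioning with MFA delete can be enabled and
exercised with boto3:

.. code-block:: python

   import boto3

   s3 = boto3.client('s3',
                     aws_access_key_id='<access-key>',
                     aws_secret_access_key='<secret-key>',
                     endpoint_url='http://localhost:8000')

   # enable versioning and MFA delete on the bucket
   s3.put_bucket_versioning(
       Bucket='my-bucket',
       MFA='<totp-serial> <totp-pin>',
       VersioningConfiguration={'Status': 'Enabled', 'MFADelete': 'Enabled'})

   # removing a specific object version now requires a fresh one-time password
   s3.delete_object(Bucket='my-bucket', Key='hello.txt',
                    VersionId='<version-id>',
                    MFA='<totp-serial> <totp-pin>')
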
feeding two consecutive pins: the previous pin, and the current pin. ::
# radosgw-admin mfa resync --uid=<user-id> --totp-serial=<serial> \
- --totp-pin=<prev-pin> --totp=pin=<current-pin>
+ --totp-pin=<prev-pin> --totp-pin=<current-pin>
The application on the right is both writing and reading data from the Ceph
Cluster, by means of the RADOS Gateway (RGW). The application on the left is
only *reading* data from the Ceph Cluster, by means of an instance of RADOS
-Gateway. In both cases (read-and-write and read-only), the transmssion of
+Gateway. In both cases (read-and-write and read-only), the transmission of
data is handled RESTfully.
In the middle of this diagram, we see two zones, each of which contains an
active-active zone mode. This makes it possible to write to non-master zones.
The multi-site configuration is stored within a container called a "realm". The
-realm stores zonegroups, zones, and a time "period" with multiple epochs (which
-(the epochs) are used for tracking changes to the configuration).
+realm stores zonegroups, zones, and a time "period" with multiple epochs.
+The epochs are used for tracking changes to the configuration.
Beginning with Kraken, the ``ceph-radosgw`` daemons handle the synchronization
of data across zones, which eliminates the need for a separate synchronization
same site. This guide also assumes two Ceph Object Gateway servers named
``rgw1`` and ``rgw2``.
-.. important:: Running a single geographically-distributed Ceph storage cluster
- is NOT recommended unless you have low latency WAN connections.
+.. important:: Running a single geographically distributed Ceph storage cluster
+ is NOT recommended unless you have low-latency WAN connections.
A multi-site configuration requires a master zonegroup and a master zone. Each
zonegroup requires a master zone. Zonegroups may have one or more secondary
Tenants as such do not have any operations on them. They appear and
disappear as needed, when users are administered. In order to create,
modify, and remove users with explicit tenants, either an additional
-option --tenant is supplied, or a syntax '<tenant>$<user>' is used
-in the parameters of the radosgw-admin command.
+option ``--tenant`` is supplied, or a syntax ``<tenant>$<user>`` is used
+in the parameters of the ``radosgw-admin`` command.
Examples
--------
Once you enable this option, any newly connecting user (whether they
are using the Swift API, or Keystone-authenticated S3) will prompt
-radosgw to create a user named ``<tenant_id>$<tenant_id``, where
+radosgw to create a user named ``<tenant_id>$<tenant_id>``, where
``<tenant_id>`` is a Keystone tenant (project) UUID --- for example,
``7188e165c0ae4424ac68ae2e89a05c50$7188e165c0ae4424ac68ae2e89a05c50``.
-Whenever that user then creates an Swift container, radosgw internally
+Whenever that user then creates a Swift container, radosgw internally
translates the given container name into
``<tenant_id>/<container_name>``, such as
``7188e165c0ae4424ac68ae2e89a05c50/foo``. This ensures that if there
prefix.
It is also possible to limit the effects of implicit tenants
-to only apply to swift or s3, by setting ``rgw keystone implicit tenants``
+to only apply to Swift or S3, by setting ``rgw keystone implicit tenants``
to either ``s3`` or ``swift``. This will likely primarily
be of use to users who had previously used implicit tenants
with older versions of Ceph, where implicit tenants
-only applied to the swift protocol.
+only applied to the Swift protocol.
Notes and known issues
----------------------
- NFS protocol security is provided by the NFS-Ganesha server, as negotiated by the NFS server and clients
- + e.g., clients can by trusted (AUTH_SYS), or required to present Kerberos user credentials (RPCSEC_GSS)
+ + e.g., clients can be trusted (AUTH_SYS), or required to present Kerberos user credentials (RPCSEC_GSS)
+ RPCSEC_GSS wire security can be integrity only (krb5i) or integrity and privacy (encryption, krb5p)
+ various NFS-specific security and permission rules are available
A small number of config variables (e.g., ``rgw_nfs_namespace_expire_secs``)
are unique to RGW NFS.
-In particular, front-end selection is handled specially by the librgw.so runtime. By default, only the
+In particular, frontend selection is handled specially by the librgw.so runtime. By default, only the
``rgw-nfs`` frontend is started. Additional frontends (e.g., ``beast``) are enabled via the
``rgw nfs frontends`` config option. Its syntax is identical to the ordinary ``rgw frontends`` option.
Default options for non-default frontends are specified via ``rgw frontend defaults`` as normal.
==============
Exporting an NFS namespace and other RGW namespaces (e.g., S3 or Swift
-via the Civetweb HTTP front-end) from the same program instance is
+via the Civetweb HTTP frontend) from the same program instance is
currently not supported.
When adding objects and buckets outside of NFS, those objects will
control the retry frequency with ``retry_sleep_duration``.
.. tip:: To minimize the latency added by asynchronous notification, we
- recommended placing the "log" pool on fast media.
+ recommend placing the "log" pool on fast media.
Persistent bucket notifications are managed by the following central configuration options:
- ``broker``: Messages are considered "delivered" if acked by the broker. (This
is the default.)
- - ``kafka-brokers``: A command-separated list of ``host:port`` of Kafka brokers:
+ - ``kafka-brokers``: A comma-separated list of ``host:port`` of Kafka brokers:
these brokers (which may include the broker defined in the Kafka URI) will be
- added to Kafka URI to support sending notifcations to a Kafka cluster.
+ added to Kafka URI to support sending notifications to a Kafka cluster.
.. note::
To configure OPA, load custom policies into OPA that control the resources users
are allowed to access. Relevant data or context can also be loaded into OPA to make decisions.
-Policies and data can be loaded into OPA in the following ways::
- * OPA's RESTful APIs
- * OPA's *bundle* feature that downloads policies and data from remote HTTP servers
- * Filesystem
+Policies and data can be loaded into OPA in the following ways:
+
+* OPA's RESTful APIs
+* OPA's *bundle* feature that downloads policies and data from remote HTTP servers
+* Filesystem
Configure the Ceph Object Gateway
=================================
{"result": true}
The above is a sample request sent to OPA which contains information about the
-user, resource and the action to be performed on the resource. Based on the polices
+user, resource and the action to be performed on the resource. Based on the policies
and data loaded into OPA, it will verify whether the request should be allowed or denied.
In the sample request, RGW makes a POST request to the endpoint */v1/data/ceph/authz*,
where *ceph* is the package name and *authz* is the rule name.
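
To illustrate the shape of such a query (the input fields shown are
assumptions, not the exact schema RGW sends), an equivalent request can be
made directly against OPA's Data API:

.. code-block:: python

   import requests

   # hypothetical OPA endpoint; 'ceph' is the package, 'authz' the rule
   url = 'http://localhost:8181/v1/data/ceph/authz'
   payload = {'input': {'user': 'alice',
                        'resource': 'my-bucket/hello.txt',
                        'action': 's3:GetObject'}}

   print(requests.post(url, json=payload).json())  # e.g. {"result": true}
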
Orphan List and Associated Tooling
==================================
-.. version added:: Luminous
+.. versionadded:: Luminous
.. contents::
pool. At that point the tool will, perhaps after an extended period of
time, produce a local file containing the RADOS objects from the
designated pool that appear to be orphans. The administrator is free
-to examine this file and the decide on a course of action, perhaps
+to examine this file and then decide on a course of action, perhaps
removing those RADOS objects from the designated pool.
All intermediate results are stored on the local file system rather
encryption and compression services, and the QAT driver in kernel space has to
be loaded to drive the hardware.
-The out-of-tree QAT driver package can be downloaded from `Intel Quickassist
+The out-of-tree QAT driver package can be downloaded from `Intel QuickAssist
Technology`_.
The QATlib can be downloaded from `qatlib`_, which is used for the in-tree QAT
The out-of-tree QAT driver is gradually being migrated to an in-tree driver+QATlib.
2. The implementation of QAT-based encryption is directly based on the QAT API,
- which is included the driver package. However, QAT support for compression
+ which is included in the driver package. However, QAT support for compression
depends on the QATzip project, which is a userspace library that builds on
top of the QAT API. At the time of writing (July 2024), QATzip speeds up
gzip compression and decompression.
`OpenSSL support for RGW encryption`_ has been merged into Ceph, and Intel also
provides one `QAT Engine`_ for OpenSSL. Theoretically, QAT-based encryption in
-Ceph can be directly supported through the OpenSSl+QAT Engine.
+Ceph can be directly supported through the OpenSSL+QAT Engine.
However, the QAT Engine for OpenSSL currently supports only chained operations,
which means that Ceph will not be able to utilize QAT hardware features for
cd ceph
./do_cmake.sh -DWITH_QATDRV=ON
cd build
- ininja
+ ninja
.. note:: The section name in QAT configuration files must be ``CEPH``,
because the section name is set to ``CEPH`` in the Ceph crypto source code.
This feature is based on the Nginx modules ``ngx_http_auth_request_module`` and `nginx-aws-auth-module <https://github.com/kaltura/nginx-aws-auth-module>`_, and OpenResty for Lua capabilities.
Currently this feature will cache only AWSv4 requests (only S3 requests), caching-in the output of the first GET request
-and caching-out on subsequent GET requests, passing through transparently PUT,POST,HEAD,DELETE and COPY requests.
+and caching-out on subsequent GET requests, passing through transparently PUT, POST, HEAD, DELETE and COPY requests.
The feature introduces 2 new APIs: Auth and Cache.
There are 2 new APIs for this feature:
- **Auth API:** The cache uses this to validate that a user can access the cached data.
-- **Cache API:** Adds the ability to override securely ``Range`` header so that Nginx can use its own `smart cache <https://www.nginx.com/blog/smart-efficient-byte-range-caching-nginx/>`_ on top of S3.
+- **Cache API:** Adds the ability to override the ``Range`` header securely so that Nginx can use its own `smart cache <https://www.nginx.com/blog/smart-efficient-byte-range-caching-nginx/>`_ on top of S3.
Using this API makes it possible to read ahead in objects when the client requests a specific range of the object.
On subsequent accesses to the cached object, Nginx will satisfy requests for already-cached ranges from the cache. Uncached ranges will be read from RGW (and cached).
To create a role, run a command of the following form::
- radosgw-admin role create --role-name={role-name} [--path=="{path to the role}"] [--assume-role-policy-doc={trust-policy-document}]
+ radosgw-admin role create --role-name={role-name} [--path="{path to the role}"] [--assume-role-policy-doc={trust-policy-document}]
Request Parameters
~~~~~~~~~~~~~~~~~~
``path``
-:Description: Path to the role. The default value is a slash(``/``).
+:Description: Path to the role. The default value is a slash (``/``).
:Type: String
``assume-role-policy-doc``
The following tags (and the tags inside them) are not supported:
+-----------------------------------+----------------------------------------------+
-| Tag | Remaks |
+| Tag | Remarks |
+===================================+==============================================+
| ``<QueueConfiguration>`` | not needed, we treat all destinations as SNS |
+-----------------------------------+----------------------------------------------+
+---------------------------------------+---------------+
| ``s3:DeleteObject`` | ``WRITE`` |
+---------------------------------------+---------------+
-| ``s3:s3DeleteObjectVersion`` | ``WRITE`` |
+| ``s3:DeleteObjectVersion`` | ``WRITE`` |
+---------------------------------------+---------------+
| ``s3:PutObject`` | ``WRITE`` |
+---------------------------------------+---------------+
+---------------------------------------+---------------+
| ``s3:PutBucketTagging`` | ``WRITE_ACP`` |
+---------------------------------------+---------------+
-| ``s3:PutPutBucketVersioning`` | ``WRITE_ACP`` |
+| ``s3:PutBucketVersioning`` | ``WRITE_ACP`` |
+---------------------------------------+---------------+
| ``s3:PutBucketWebsite`` | ``WRITE_ACP`` |
+---------------------------------------+---------------+
+===============================+===========+=================================================================+
| ``CreateBucketConfiguration`` | Container | A container for the bucket configuration. |
+-------------------------------+-----------+-----------------------------------------------------------------+
-| ``LocationConstraint`` | String | A zonegroup api name, with optional :ref:`s3_bucket_placement`. |
+| ``LocationConstraint`` | String | A zonegroup API name, with optional :ref:`s3_bucket_placement`. |
+-------------------------------+-----------+-----------------------------------------------------------------+
+-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
| ``S3Tags`` | Container | Holding a list of ``FilterRule`` entities, for filtering based on object tags. | No |
| | | All filter rules in the list must match the tags defined on the object. However, | |
-| | | the object still match it it has other tags not listed in the filter. | |
+| | | the object still matches if it has other tags not listed in the filter. | |
+-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
| ``S3Key.FilterRule`` | Container | Holding ``Name`` and ``Value`` entities. ``Name`` would be: ``prefix``, ``suffix`` | Yes |
| | | or ``regex``. The ``Value`` would hold the key prefix, key suffix or a regular | |
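
As a hedged boto3 sketch (the topic ARN, bucket, and filter values are
placeholders), such filter rules can be set when configuring a bucket
notification:

.. code-block:: python

   import boto3

   s3 = boto3.client('s3',
                     aws_access_key_id='<access-key>',
                     aws_secret_access_key='<secret-key>',
                     endpoint_url='http://localhost:8000')

   s3.put_bucket_notification_configuration(
       Bucket='my-bucket',
       NotificationConfiguration={'TopicConfigurations': [{
           'Id': 'image-notifications',
           'TopicArn': 'arn:aws:sns:default::mytopic',
           'Events': ['s3:ObjectCreated:*'],
           'Filter': {'Key': {'FilterRules': [
               {'Name': 'prefix', 'Value': 'images/'},
               {'Name': 'suffix', 'Value': '.jpg'}]}}}]})
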
Listing Owned Buckets
---------------------
-This gets a list of Buckets that you own.
+This gets a list of buckets that you own.
This also prints out the bucket name, owner ID, and display name
for each bucket.
.. note::
- The Bucket must be empty! Otherwise it won't work!
+ The bucket must be empty! Otherwise it won't work!
.. code-block:: cpp
Listing Owned Buckets
---------------------
-This gets a list of Buckets that you own.
+This gets a list of buckets that you own.
This also prints out the bucket name and creation date of each bucket.
.. code-block:: csharp
.. note::
- The Bucket must be empty! Otherwise it won't work!
+ The bucket must be empty! Otherwise it won't work!
.. code-block:: csharp
Java S3 Examples
================
-Pre-requisites
---------------
+Prerequisites
+-------------
All examples are written against AWS Java SDK 2.17.42. You may need
to change some code when using another client.
Setup
-----
-The following examples may require some or all of the following java
+The following examples may require some or all of the following Java
classes to be imported:
.. code-block:: java
-----------------
.. note::
- The Bucket must be empty! Otherwise it won't work!
+ The bucket must be empty! Otherwise it won't work!
.. code-block:: java
URL will stop working).
.. note::
- The java library does not have a method for generating unsigned
+ The Java library does not have a method for generating unsigned
URLs, so the example below just generates a signed URL.
.. code-block:: java
+------------------------+-------------+-----------------------------------------------+
| **LastModified** | Date | The last modified date of the source object. |
+------------------------+-------------+-----------------------------------------------+
-| **Etag** | String | The ETag of the new object. |
+| **ETag** | String | The ETag of the new object. |
+------------------------+-------------+-----------------------------------------------+
Remove Object
-----------------
.. note::
- The Bucket must be empty! Otherwise it won't work!
+ The bucket must be empty! Otherwise it won't work!
.. code-block:: perl
.. attention::
- not available in the `Amazon::S3`_ perl module
+ not available in the `Amazon::S3`_ Perl module
Creating an Object
---------------------------------------------------
This generates an unsigned download URL for ``hello.txt``. This works
because we made ``hello.txt`` public by setting the ACL above.
-Then this generates a signed download URL for ``secret_plans.txt`` that
+It then generates a signed download URL for ``secret_plans.txt`` that
will work for 1 hour. Signed download URLs will work for the time
period even if the object is private (when the time period is up, the
URL will stop working).
URLs, so we are going to be using another module instead. Unfortunately,
most modules for generating these URLs assume that you are using Amazon,
so we have had to go with using a more obscure module, `Muck::FS::S3`_. This
- should be the same as Amazon's sample S3 perl module, but this sample
+ should be the same as Amazon's sample S3 Perl module, but this sample
module is not in CPAN. So, you can either use CPAN to install
`Muck::FS::S3`_, or install Amazon's sample S3 module manually. If you go
the manual route, you can remove ``Muck::FS::`` from the example below.
define('AWS_SECRET_KEY', 'place secret key here');
$ENDPOINT = 'http://objects.dreamhost.com';
- // require the amazon sdk from your composer vendor dir
+ // require the Amazon SDK from your composer vendor dir
require __DIR__.'/vendor/autoload.php';
// Instantiate the S3 class and point it at the desired host
.. note::
- The Bucket must be empty! Otherwise it won't work!
+ The bucket must be empty! Otherwise it won't work!
.. code-block:: php
Listing Owned Buckets
---------------------
-This gets a list of Buckets that you own.
+This gets a list of buckets that you own.
This also prints out the bucket name and creation date of each bucket.
.. code-block:: python
.. note::
- The Bucket must be empty! Otherwise it won't work!
+ The bucket must be empty! Otherwise it won't work!
.. code-block:: python
Using S3 API Extensions
-----------------------
-To use the boto3 client to tests the RadosGW extensions to the S3 API, the `extensions file`_ should be placed under: ``~/.aws/models/s3/2006-03-01/`` directory.
+To use the boto3 client to test the RadosGW extensions to the S3 API, the `extensions file`_ should be placed under: ``~/.aws/models/s3/2006-03-01/`` directory.
For example, an unordered list of objects can be fetched using:
.. code-block:: python
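
   # hedged sketch: assumes ``conn`` is a boto3 S3 client created with the
   # extensions file installed, which defines the ``AllowUnordered`` parameter
   print(conn.list_objects(Bucket='my-new-bucket', AllowUnordered=True))

Without the extensions file in place, boto3 will reject ``AllowUnordered`` as
an unknown parameter.
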
Settings
---------------------
-You can setup the connection on global way:
+You can set up the connection globally:
.. code-block:: ruby
Deleting a Bucket
-----------------
.. note::
- The Bucket must be empty! Otherwise it won't work!
+ The bucket must be empty! Otherwise it won't work!
.. code-block:: ruby
Deleting a Bucket
-----------------
.. note::
- The Bucket must be empty! Otherwise it won't work!
+ The bucket must be empty! Otherwise it won't work!
.. code-block:: ruby
information from the object metadata (a per-object RADOS operation). During
this step, we filter out **compressed** and **user-encrypted** objects.
-Following this, we calculate a cryptograhically strong hash of the candidate
+Following this, we calculate a cryptographically strong hash of the candidate
object data. This involves a full-object read which is a resource-intensive
operation. The hash ensures that the dedup candidates are indeed perfect
matches. If they are, we proceed with the deduplication:
restricted subset of data stored in an S3 object. The S3 Select engine
facilitates the use of higher level, analytic applications (for example:
SPARK-SQL). The ability of the S3 Select engine to target a proper subset of
-structed data within an S3 object decreases latency and increases throughput.
+structured data within an S3 object decreases latency and increases throughput.
For example: assume that a user needs to extract a single column that is
filtered by another column, and that these columns are stored in a CSV file in
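
A hedged boto3 sketch of such a query (bucket, key, and column positions are
assumptions):

.. code-block:: python

   import boto3

   s3 = boto3.client('s3',
                     aws_access_key_id='<access-key>',
                     aws_secret_access_key='<secret-key>',
                     endpoint_url='http://localhost:8000')

   # select column 1 from rows where column 2 equals '4'
   resp = s3.select_object_content(
       Bucket='my-bucket', Key='data.csv',
       ExpressionType='SQL',
       Expression="select s._1 from s3object s where s._2 = '4';",
       InputSerialization={'CSV': {}},
       OutputSerialization={'CSV': {}})

   for event in resp['Payload']:
       if 'Records' in event:
           print(event['Records']['Payload'].decode())
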
Error Handling
~~~~~~~~~~~~~~
-Upon an error being detected, RGW returns 400-Bad-Request and a specific error message sends back to the client.
+Upon an error being detected, RGW returns 400-Bad-Request and a specific error message is sent back to the client.
Currently, there are 2 main types of error.
**Syntax error**: the s3select parser rejects user requests that are not aligned with parser syntax definitions, as
Additional Syntax Support
~~~~~~~~~~~~~~~~~~~~~~~~~
-S3select syntax supports table-alias ``select s._1 from s3object s where s._2 = ‘4’;``
+S3select syntax supports table-alias ``select s._1 from s3object s where s._2 = '4';``
S3select syntax supports case-insensitive keywords: ``Select SUM(Cast(_1 as int)) FROM S3Object;``
# the from-clause define a single row.
# _1 points to root object level.
- # _1.age appears twice in Documnet-row, the last value is used for the operation.
+ # _1.age appears twice in Document row, the last value is used for the operation.
query = "select _1.firstname,_1.key_after_array,_1.age+4,_1.description.main_desc,_1.description.second_desc from s3object[*];";
expected_result = Joe_2,XXX,25,value_1,value_2
BOTO3
-----
-using BOTO3 is "natural" and easy due to AWS-cli support.
+Using BOTO3 is "natural" and easy due to AWS-cli support.
::
Session tags are key-value pairs that can be passed while federating a user (currently it
is only supported as part of the web token passed to AssumeRoleWithWebIdentity). The session
tags are passed along as aws:PrincipalTag in the session credentials (temporary credentials)
-that is returned back by STS. These Principal Tags consists of the session tags that come in
+that is returned back by STS. These Principal Tags consist of the session tags that come in
as part of the web token and the tags that are attached to the role being assumed. Please note
that the tags always have to be specified in the following namespace: https://aws.amazon.com/tags.
The following are the tag keys that can be used in the role's trust policy or the role's permission policy:
1. aws:RequestTag: This key is used to compare the key-value pair passed in the request with the key-value pair
-in the role's trust policy. In case of AssumeRoleWithWebIdentity, the session tags that are passed by the idp
+in the role's trust policy. In case of AssumeRoleWithWebIdentity, the session tags that are passed by the IDP
in the web token can be used as aws:RequestTag in the role's trust policy based on which a federated user can be
allowed to assume a role.
}
2. aws:PrincipalTag: This key is used to compare the key-value pair attached to the principal with the key-value pair
-in the policy. In case of AssumeRoleWithWebIdentity, the session tags that are passed by the idp in the web token appear
+in the policy. In case of AssumeRoleWithWebIdentity, the session tags that are passed by the IDP in the web token appear
as Principal tags in the temporary credentials once a user has been authenticated, and these tags can be used as
aws:PrincipalTag in the role's permission policy.
{
"Effect":"Allow",
"Action":["s3:PutBucketTagging"],
- "Resource":["arn:aws:s3::t1tenant:my-test-bucket\","arn:aws:s3::t1tenant:my-test-bucket/*"]
+ "Resource":["arn:aws:s3::t1tenant:my-test-bucket","arn:aws:s3::t1tenant:my-test-bucket/*"]
},
{
"Effect":"Allow",
"Action":["s3:*"],
"Resource":["*"],
- "Condition":{"StringEquals":{"s3:ResourceTag/Department":\"Engineering"}}
+ "Condition":{"StringEquals":{"s3:ResourceTag/Department":"Engineering"}}
}
}
{
"Effect":"Allow",
"Action":["s3:PutBucketTagging"],
- "Resource":["arn:aws:s3::t1tenant:my-test-bucket\","arn:aws:s3::t1tenant:my-test-bucket/*"]
+ "Resource":["arn:aws:s3::t1tenant:my-test-bucket","arn:aws:s3::t1tenant:my-test-bucket/*"]
},
{
"Effect":"Allow",
To create a new container, make a ``PUT`` request with the API version, account,
and the name of the new container. The container name must be unique, must not
-contain a forward-slash (/) character, and should be less than 256 bytes. You
+contain a forward slash (``/``) character, and should be less than 256 bytes. You
may include access control headers and metadata headers in the request. The
operation is idempotent; that is, if you make a request to create a container
that already exists, it will return with an HTTP 202 return code, but will not
``X-Container-Meta-{key}``
-:Description: A user-defined meta data key that takes an arbitrary string value.
+:Description: A user-defined metadata key that takes an arbitrary string value.
:Type: String
:Required: No
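
A minimal ``python-swiftclient`` sketch of creating a container with such a
metadata header (endpoint, credentials, and names are assumptions):

.. code-block:: python

   from swiftclient.client import Connection

   conn = Connection(user='test:tester', key='<secret>',
                     authurl='http://localhost:8000/auth/v1.0')

   # idempotent: repeating the PUT for an existing container returns 202
   conn.put_container('my-new-container',
                      headers={'X-Container-Meta-Color': 'blue'})
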
``X-Container-Meta-{key}``
-:Description: A user-defined meta data key that takes an arbitrary string value.
+:Description: A user-defined metadata key that takes an arbitrary string value.
:Type: String
:Required: No
Temp URL Operations
====================
-To allow temporary access (for eg for `GET` requests) to objects
+To allow temporary access (for example, for `GET` requests) to objects
without the need to share credentials, temp URL functionality is
supported by the Swift endpoint of radosgw. For this functionality,
initially the value of `X-Account-Meta-Temp-URL-Key` and optionally
the following elements:
#. The value of the Request method, "GET" for instance
-#. The expiry time, in format of seconds since the epoch, ie Unix time
+#. The expiry time, in format of seconds since the epoch, i.e. Unix time
#. The request path starting from "v1" onwards
The above items are normalized with newlines appended between them, and an HMAC is generated using the SHA-1 hashing algorithm and the temp URL key.
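
A hedged Python sketch of computing such a signature (the key, path, and
expiry values are placeholders):

.. code-block:: python

   import hmac
   from hashlib import sha1
   from time import time

   key = b'secret'               # value set via X-Account-Meta-Temp-URL-Key
   expires = int(time()) + 3600  # valid for one hour
   path = '/v1/AUTH_account/container/object'

   body = f"GET\n{expires}\n{path}"
   signature = hmac.new(key, body.encode(), sha1).hexdigest()

   print(f"{path}?temp_url_sig={signature}&temp_url_expires={expires}")
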
==========
The Swift-compatible API tutorials follow a simple container-based object
-lifecycle. The first step requires you to setup a connection between your
+lifecycle. The first step requires you to set up a connection between your
client and the RADOS Gateway server. Then, you may follow a natural
container and object lifecycle, including adding and retrieving object
metadata. See example code for the following languages:
.. versionadded:: Kraken
The :ref:`multisite` functionality of RGW introduced in Jewel allowed the ability to
-create multiple zones and mirror data and metadata between them. ``Sync Modules``
-are built atop of the multisite framework that allows for forwarding data and
+create multiple zones and mirror data and metadata between them. *Sync Modules*
+are built on top of the multisite framework that allows for forwarding data and
metadata to a different external tier. A sync module allows for a set of actions
to be performed whenever a change in data occurs (metadata ops like bucket or
user creation etc. are also regarded as changes in data). As the RGW multisite
changes are eventually consistent at remote sites, changes are propagated
asynchronously. This would allow for unlocking use cases such as backing up the
object storage to an external cloud cluster or a custom backup solution using
-tape drives, indexing metadata in ElasticSearch etc.
+tape drives, indexing metadata in Elasticsearch etc.
A sync module configuration is local to a zone. The sync module determines
whether the zone exports data or can only consume data that was modified in
.. toctree::
:maxdepth: 1
- ElasticSearch Sync Module <elastic-sync-module>
+ Elasticsearch Sync Module <elastic-sync-module>
Cloud Sync Module <cloud-sync-module>
Archive Sync Module <archive-sync-module>
405 MethodNotAllowed
--------------------
-If you receive an 405 error, check to see if you have the S3 subdomain set up correctly.
-You will need to have a wild card setting in your DNS record for subdomain functionality
+If you receive a 405 error, check to see if you have the S3 subdomain set up correctly.
+You must have a wildcard in your DNS record for subdomain functionality
to work properly.
Also, check to ensure that the default site is disabled. ::
Token Policies for the Object Gateway
-------------------------------------
-All Vault tokens have powers as specified by the polices attached
+All Vault tokens have powers as specified by the policies attached
to that token. Multiple policies may be associated with one
token. You should only use the policies necessary for your
configuration.