RGW Module
============
The ``rgw`` module provides a simple interface to deploy RGW :ref:`multisite`.
It helps with bootstrapping and configuring an RGW realm, zonegroup, zone, and
the related entities.

RGW Realm Operations
--------------------

Bootstrapping an RGW realm creates a new RGW realm entity, a new zonegroup,
and a new zone. It configures a new system user that can be used for
multisite sync operations. Under the hood this module instructs the
orchestrator to create and deploy the corresponding RGW daemons. The module
supports passing the arguments through the command line or as a spec file:

.. prompt:: bash #

   ceph rgw realm bootstrap [--realm-name] [--zonegroup-name] [--zone-name] [--port] [--placement] [--start-radosgw]
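
For example, a bootstrap invocation might look like the following sketch (the
realm, zonegroup, and zone names, the port, and the placement hosts are all
illustrative):

.. prompt:: bash #

   ceph rgw realm bootstrap --realm-name myrealm --zonegroup-name myzonegroup --zone-name myzone --port 5500 --placement="host1 host2" --start-radosgw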
The command supports configuration through a spec file (``-i`` option):

.. prompt:: bash #

   ceph rgw realm bootstrap -i spec.yaml
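
Following is an example of such a spec file (a minimal sketch that sets only
the RGW frontend port; any other supported parameters could be added under
``spec``):

.. code-block:: yaml

   spec:
     rgw_frontend_port: 5500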

.. note:: RGW multisite spec files follow the same format as cephadm
   orchestrator spec files. Thus the user can specify any supported RGW
   parameters, including advanced configuration features such as SSL
   certificates.
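
For instance, the ``spec`` section might carry SSL settings (a sketch; the
``ssl`` and ``rgw_frontend_ssl_certificate`` parameters come from the cephadm
RGW service spec, and the certificate content is a placeholder):

.. code-block:: yaml

   spec:
     rgw_frontend_port: 443
     ssl: true
     rgw_frontend_ssl_certificate: |
       -----BEGIN CERTIFICATE-----
       <certificate data>
       -----END CERTIFICATE-----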

Users can also specify custom zone endpoints in the spec (or through the
command line). In this case, no cephadm daemons will be launched. Following is
an example RGW spec file with zone endpoints:

.. code-block:: yaml
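
   # Illustrative names and endpoints; adapt them to your deployment.
   rgw_realm: myrealm
   rgw_zonegroup: myzonegroup
   rgw_zone: myzone
   zone_endpoints: http://<rgw_host1>:<port1>, http://<rgw_host2>:<port2>
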
Users can pull a realm and create a secondary zone on a different cluster that
syncs with the master zone on the primary cluster by running the ``ceph rgw
zone create`` command with the corresponding realm token.

Following is an example of a zone spec file (the values shown are illustrative):
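
.. code-block:: yaml

   # Illustrative zone name, hosts, and port; the realm token comes from the
   # primary cluster.
   rgw_zone: my-secondary-zone
   rgw_realm_token: <token>
   placement:
     hosts:
       - ceph-node-1
       - ceph-node-2
   spec:
     rgw_frontend_port: 5500
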
This file can then be passed to the zone creation command:

.. prompt:: bash #

   ceph rgw zone create -i zone-spec.yaml

.. note:: The spec file used by RGW has the same format as the one used by the
   orchestrator. Thus, the user can provide any orchestration-supported RGW
   parameters, including advanced configuration features such as SSL
   certificates.

Commands
--------

.. prompt:: bash #

   ceph rgw realm bootstrap -i spec.yaml

Create a new realm + zonegroup + zone and deploy RGW daemons via the
orchestrator using the information specified in the YAML file.

.. prompt:: bash #

   ceph rgw admin [*]

RGW admin command.

Upgrading Root CA Certificates
------------------------------

#. Make sure that the RGW service is up and running.
#. Make sure that the RGW service has been upgraded to the latest release.
#. From the primary cluster on the Manager node, run the following command:

   .. prompt:: bash #

      ceph orch cert-store get cert cephadm_root_ca_cert

#. On the nodes where the RGW service is running, store the certificate at the
   following path::

      /etc/pki/ca-trust/source/anchors/<cert_name>.crt

#. Validate the certificate:

   .. prompt:: bash #

      openssl x509 -in <cert_name>.crt -noout -text

#. Perform the above steps on the Manager node and on the RGW nodes of all
   secondary clusters.
#. After the certificates have been validated on all clusters, run the
   following command on all clusters that generate certificates:

   .. prompt:: bash #

      update-ca-trust

#. From the primary cluster, ensure that the ``curl`` command can be run by the
   user:

   .. prompt:: bash [user@primary-node]$
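
      # Illustrative check (the endpoint is a placeholder): verify that the
      # RGW endpoint is reachable now that the new root CA is trusted.
      curl https://<rgw_host>:<port>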