Configuring Federated Gateways
================================

.. versionadded:: 0.72 Emperor

In Ceph version 0.72 Emperor and beyond, you may configure each :term:`Ceph
Object Gateway` to participate in a federated architecture, with multiple
regions, and with multiple zones for a region.

- **Region**: A region represents a *logical* geographic area and contains one
  or more zones. A cluster with multiple regions must specify a master region.

- **Zone**: A zone is a *logical* grouping of one or more Ceph Object Gateway
  instance(s). A region has a master zone that processes client requests.

Background
----------

When you deploy a :term:`Ceph Object Store` service that spans geographical
locales, configuring Ceph Object Gateway regions and metadata synchronization
agents enables the service to maintain a global namespace, even though Ceph
Object Gateway instances run in different geographic locales and potentially
on different Ceph Storage Clusters. Configuring zones and data synchronization
agents enables the service to maintain one or more copy(ies) of the master
zone's data. Extra copies of the data are important for failover, backup and
disaster recovery.

You may deploy a single Ceph Storage Cluster with a federated architecture if
you have low latency network connections (this isn't recommended). You may also
deploy one Ceph Storage Cluster per region with a separate set of pools for
each zone (typical). You may also deploy a separate Ceph Storage Cluster for
each zone if your requirements and resources warrant this level of redundancy.

About this Guide
----------------

In the following sections, we will demonstrate how to configure a federated
cluster in two logical steps:

#. **Configure a Master Region:** This section of the guide describes how to
   set up a region with multiple zones, and how to synchronize data between the
   master zone and the secondary zone(s) within the master region.

#. **Configure a Secondary Region:** This section of the guide describes how
   to repeat the section on setting up a master region and multiple zones so
   that you have two regions, each with synchronization between its zones.
   Finally, you will learn how to set up a metadata synchronization agent so
   that you can maintain a global namespace for the regions in your cluster.


Configure a Master Region
=========================

This section provides an exemplary procedure for setting up a region, and two
zones within the region. The cluster will comprise two gateway daemon
instances--one per zone. This region will serve as the master region.

Naming for the Master Region
----------------------------

Before configuring the cluster, defining region, zone and instance names will
help you manage your cluster. Let's assume the region represents the United
States, and we refer to it by its standard abbreviation.

- United States: ``us``

Let's assume the zones represent the Eastern and Western United States. For
continuity, our naming convention will use ``{region name}-{zone name}`` format,
but you can use any naming convention you prefer.

- United States, East Zone: ``us-east``
- United States, West Zone: ``us-west``

Finally, let's assume that zones may have more than one Ceph Object Gateway
instance per zone. For continuity, our naming convention will use
``{region name}-{zone name}-{instance}`` format, but you can use any naming
convention you prefer.

- United States Region, Master Zone, Instance 1: ``us-east-1``
- United States Region, Secondary Zone, Instance 1: ``us-west-1``


Create Pools
------------

You may have a Ceph Storage Cluster for the entire region or a Ceph Storage
Cluster for each zone.

For continuity, our naming convention will use ``{region name}-{zone name}``
format prepended to the pool name, but you can use any naming convention you
prefer. For example:

- ``.us.rgw.root``

- ``.us-east.rgw.root``
- ``.us-east.rgw.control``
- ``.us-east.rgw.gc``
- ``.us-east.log``
- ``.us-east.intent-log``
- ``.us-east.usage``
- ``.us-east.users``
- ``.us-east.users.email``
- ``.us-east.users.swift``
- ``.us-east.users.uid``

- ``.us-west.rgw.root``
- ``.us-west.rgw.control``
- ``.us-west.rgw.gc``
- ``.us-west.log``
- ``.us-west.intent-log``
- ``.us-west.usage``
- ``.us-west.users``
- ``.us-west.users.email``
- ``.us-west.users.swift``
- ``.us-west.users.uid``

See `Configuration Reference - Pools`_ for details on the default pools for
gateways. See `Pools`_ for details on creating pools. Execute the following
to create a pool::

   ceph osd pool create {poolname} {pg-num} {pgp-num}
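
For example, a short shell loop can create the full set of pools listed above.
This is a minimal sketch: the placement group count of ``64`` is an assumption;
choose values appropriate for your cluster. ::

   # Region root pool
   ceph osd pool create .us.rgw.root 64 64

   # Per-zone pools
   for zone in us-east us-west; do
       for pool in rgw.root rgw.control rgw.gc log intent-log usage \
                   users users.email users.swift users.uid; do
           ceph osd pool create .${zone}.${pool} 64 64
       done
   done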

.. tip:: When adding a large number of pools, it may take some time for your
   cluster to return to an ``active + clean`` state.

.. topic:: CRUSH Maps

   When deploying a Ceph Storage Cluster for the entire region, consider
   using a CRUSH rule for the zone such that you do NOT have overlapping
   failure domains. See `CRUSH Map`_ for details.

When you have completed this step, execute the following to ensure that
you have created all of the foregoing pools::

   rados lspools


Create a Keyring
----------------

Each instance must have a user name and key to communicate with a Ceph Storage
Cluster. In the following steps, we use an admin node to create a keyring.
Then, we create a client user name and key for each instance. Next, we add the
keys to the Ceph Storage Cluster(s). Finally, we distribute the keyring to
each node containing an instance.

#. Create a keyring. ::

      sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
      sudo chmod +r /etc/ceph/ceph.client.radosgw.keyring

#. Generate a Ceph Object Gateway user name and key for each instance. ::

      sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.us-east-1 --gen-key
      sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.us-west-1 --gen-key

#. Add capabilities to each key. See `Configuration Reference - Pools`_ for
   details on the effect of write permissions for the monitor and creating
   pools. ::

      sudo ceph-authtool -n client.radosgw.us-east-1 --cap osd 'allow rwx' --cap mon 'allow rw' /etc/ceph/ceph.client.radosgw.keyring
      sudo ceph-authtool -n client.radosgw.us-west-1 --cap osd 'allow rwx' --cap mon 'allow rw' /etc/ceph/ceph.client.radosgw.keyring

#. Once you have created a keyring and key to enable the Ceph Object Gateway
   with access to the Ceph Storage Cluster, add each key as an entry to your
   Ceph Storage Cluster(s). For example::

      sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.us-east-1 -i /etc/ceph/ceph.client.radosgw.keyring
      sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.us-west-1 -i /etc/ceph/ceph.client.radosgw.keyring
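
To verify that the keys and capabilities were added, you can query the Ceph
Storage Cluster; for example, for the east instance::

   sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth get client.radosgw.us-east-1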

Install Apache/FastCGI
----------------------

For each :term:`Ceph Node` that runs a :term:`Ceph Object Gateway` daemon
instance, you must install Apache, FastCGI, the Ceph Object Gateway daemon
(``radosgw``) and the Ceph Object Gateway Sync Agent (``radosgw-agent``).
See `Install Apache, FastCGI and Gateway`_ for details.
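
On Debian/Ubuntu systems the installation typically looks like the following.
This is a sketch; it assumes the Ceph package repositories are already
configured on the node. ::

   sudo apt-get update
   sudo apt-get install apache2 libapache2-mod-fastcgi radosgw radosgw-agent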

Create Data Directories
-----------------------

Create data directories for each daemon instance on their respective
hosts. ::

   ssh {us-east-1}
   sudo mkdir -p /var/lib/ceph/radosgw/ceph-radosgw.us-east-1

   ssh {us-west-1}
   sudo mkdir -p /var/lib/ceph/radosgw/ceph-radosgw.us-west-1


Create a Gateway Configuration
------------------------------

For each instance, create a Ceph Object Gateway configuration file under the
``/etc/apache2/sites-available`` directory on the host(s) where you installed
the Ceph Object Gateway daemon(s). See below for an exemplary embodiment of a
gateway configuration as discussed in the following text.

.. literalinclude:: rgw.conf
   :language: ini

#. Replace the ``/{path}/{socket-name}`` entry with the path to the socket and
   the socket name. For example,
   ``/var/run/ceph/client.radosgw.us-east-1.sock``. Ensure that you use the
   same path and socket name in your ``ceph.conf`` entry.

#. Replace the ``{fqdn}`` entry with the fully-qualified domain name of the
   server.

#. Replace the ``{email.address}`` entry with the email address for the
   server administrator.

#. Add a ``ServerAlias`` if you wish to use S3-style subdomains
   (of course you do).

#. Save the configuration to a file (e.g., ``rgw-us-east.conf``).

Repeat the process for the secondary zone (e.g., ``rgw-us-west.conf``).


Enable the Configuration
------------------------

For each instance, enable the gateway configuration and disable the
default site.

#. Enable the site for the gateway configuration. ::

      sudo a2ensite {rgw-conf-filename}

#. Disable the default site. ::

      sudo a2dissite default

.. note:: Failure to disable the default site can lead to problems.
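
For example, on the host serving the ``us-east`` zone and using the
configuration file name suggested above::

   sudo a2ensite rgw-us-east.conf
   sudo a2dissite default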

Add a FastCGI Script
--------------------

FastCGI requires a script for each Ceph Object Gateway instance to
enable the S3-compatible interface. To create the script, execute
the following procedures.

#. Go to the ``/var/www`` directory. ::

      cd /var/www

#. Open an editor with the file name ``s3gw.fcgi``. **Note:** The configuration
   file specifies this filename. ::

      sudo vim s3gw.fcgi

#. Add a shell script with ``exec`` and the path to the gateway binary,
   the path to the Ceph configuration file, and the user name (``-n``;
   the same user name created in step 2 of `Create a Keyring`_).
   Copy the following into the editor. ::

      #!/bin/sh
      exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.{ID}

   For example::

      #!/bin/sh
      exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.us-east-1

#. Save the file.

#. Change the permissions on the file so that it is executable. ::

      sudo chmod +x s3gw.fcgi

Repeat the process for the secondary zone.
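
For example, on the ``us-west-1`` host you could create the script
non-interactively with a shell here-document (a sketch; adjust the instance
name to match your configuration)::

   cat << 'EOF' | sudo tee /var/www/s3gw.fcgi > /dev/null
   #!/bin/sh
   exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.us-west-1
   EOF
   sudo chmod +x /var/www/s3gw.fcgi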


Add Instances to Ceph Config File
---------------------------------

On an admin node, add an entry for each instance in the Ceph configuration file
for your Ceph Storage Cluster(s). For example::

   ...
   [client.radosgw.us-east-1]
   rgw region = us
   rgw region root pool = .us.rgw.root
   rgw zone = us-east
   rgw zone root pool = .us-east.rgw.root
   keyring = /etc/ceph/ceph.client.radosgw.keyring
   rgw dns name = {hostname}
   rgw socket path = /var/run/ceph/$name.sock
   host = {host-name}

   [client.radosgw.us-west-1]
   rgw region = us
   rgw region root pool = .us.rgw.root
   rgw zone = us-west
   rgw zone root pool = .us-west.rgw.root
   keyring = /etc/ceph/ceph.client.radosgw.keyring
   rgw dns name = {hostname}
   rgw socket path = /var/run/ceph/$name.sock
   host = {host-name}

Then, update each :term:`Ceph Node` with the updated Ceph configuration
file. For example::

   ceph-deploy --overwrite-conf config push {node1} {node2} {nodex}

Create a Region
---------------

#. Configure a region infile called ``us.json`` for the ``us`` region.

   Copy the contents of the following example to a text editor. Set
   ``is_master`` to ``true``. Replace ``{fqdn}`` with the fully-qualified
   domain name of the endpoint. The file specifies ``us-east`` as the master
   zone and lists it in the ``zones`` list along with the ``us-west`` zone.
   See `Configuration Reference - Regions`_ for details. ::

      { "name": "us",
        "api_name": "us",
        "is_master": "true",
        "endpoints": [
              "http:\/\/{fqdn}:80\/"],
        "master_zone": "us-east",
        "zones": [
              { "name": "us-east",
                "endpoints": [
                      "http:\/\/{fqdn}:80\/"],
                "log_meta": "false",
                "log_data": "false"},
              { "name": "us-west",
                "endpoints": [
                      "http:\/\/{fqdn}:80\/"],
                "log_meta": "false",
                "log_data": "false"}],
        "placement_targets": [],
        "default_placement": ""}

#. Create the ``us`` region using the ``us.json`` infile you just
   created. ::

      sudo radosgw-admin region set --infile us.json --name client.radosgw.us-east-1

#. Delete the default region (if it exists). ::

      rados -p .us.rgw.root rm region_info.default

#. Set the ``us`` region as the default region. ::

      sudo radosgw-admin region default --rgw-region=us --name client.radosgw.us-east-1

   Only one region can be the default region for a cluster.

#. Update the region map. ::

      sudo radosgw-admin regionmap update --name client.radosgw.us-east-1
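
To verify the region configuration, you can print it back; the output should
reflect the contents of the ``us.json`` infile you loaded. For example::

   sudo radosgw-admin region get --rgw-region=us --name client.radosgw.us-east-1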


Create Zones
------------

#. Configure a zone infile called ``us-east.json`` for the
   ``us-east`` zone.

   Copy the contents of the following example to a text editor.
   This configuration uses pool names prepended with the region name and
   zone name. See `Configuration Reference - Pools`_ for additional details on
   gateway pools. See `Configuration Reference - Zones`_ for additional
   details on zones. ::

      { "domain_root": ".us-east.rgw.root",
        "control_pool": ".us-east.rgw.control",
        "gc_pool": ".us-east.rgw.gc",
        "log_pool": ".us-east.log",
        "intent_log_pool": ".us-east.intent-log",
        "usage_log_pool": ".us-east.usage",
        "user_keys_pool": ".us-east.users",
        "user_email_pool": ".us-east.users.email",
        "user_swift_pool": ".us-east.users.swift",
        "user_uid_pool": ".us-east.users.uid",
        "system_key": { "access_key": "", "secret_key": ""}
      }

#. Add the ``us-east`` zone using the ``us-east.json`` infile you
   just created in both the east and west pools by specifying their respective
   user names (i.e., ``--name``). ::

      sudo radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name client.radosgw.us-east-1
      sudo radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name client.radosgw.us-west-1

   Repeat step 1 to create a zone infile for ``us-west``. Then add the zone
   using the ``us-west.json`` infile in both the east and west pools by
   specifying their respective user names (i.e., ``--name``). ::

      sudo radosgw-admin zone set --rgw-zone=us-west --infile us-west.json --name client.radosgw.us-east-1
      sudo radosgw-admin zone set --rgw-zone=us-west --infile us-west.json --name client.radosgw.us-west-1

#. Delete the default zone (if it exists). ::

      rados -p .rgw.root rm zone_info.default

#. Update the region map. ::

      sudo radosgw-admin regionmap update --name client.radosgw.us-east-1


Create Zone Users
-----------------

Ceph Object Gateway stores zone users in the zone pools, so you must create zone
users after configuring the zones. Copy the ``access_key`` and ``secret_key``
fields for each user so you can update your zone configuration once you complete
this step. ::

   sudo radosgw-admin user create --uid="us-east" --display-name="Region-US Zone-East" --name client.radosgw.us-east-1 --system
   sudo radosgw-admin user create --uid="us-west" --display-name="Region-US Zone-West" --name client.radosgw.us-west-1 --system
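
If you need to retrieve a user's keys again later, you can print them with
``radosgw-admin user info``; for example, for the east zone user::

   sudo radosgw-admin user info --uid=us-east --name client.radosgw.us-east-1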


Update Zone Configurations
--------------------------

You must update the zone configuration with zone users so that
the synchronization agents can authenticate with the zones.

#. Open your ``us-east.json`` zone configuration file and paste the contents of
   the ``access_key`` and ``secret_key`` fields from the step of creating
   zone users into the ``system_key`` field of your zone configuration
   infile. ::

      { "domain_root": ".us-east.rgw",
        "control_pool": ".us-east.rgw.control",
        "gc_pool": ".us-east.rgw.gc",
        "log_pool": ".us-east.log",
        "intent_log_pool": ".us-east.intent-log",
        "usage_log_pool": ".us-east.usage",
        "user_keys_pool": ".us-east.users",
        "user_email_pool": ".us-east.users.email",
        "user_swift_pool": ".us-east.users.swift",
        "user_uid_pool": ".us-east.users.uid",
        "system_key": {
          "access_key": "{paste-access_key-here}",
          "secret_key": "{paste-secret_key-here}"
        },
        "placement_pools": []
      }

#. Save the ``us-east.json`` file. Then, update your zone configuration. ::

      sudo radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name client.radosgw.us-east-1
      sudo radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name client.radosgw.us-west-1

#. Repeat step 1 to update the zone infile for ``us-west``. Then, update
   your zone configuration. ::

      sudo radosgw-admin zone set --rgw-zone=us-west --infile us-west.json --name client.radosgw.us-east-1
      sudo radosgw-admin zone set --rgw-zone=us-west --infile us-west.json --name client.radosgw.us-west-1
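
To confirm that the system keys are now stored in each zone's configuration,
you can print a zone back; for example::

   sudo radosgw-admin zone get --rgw-zone=us-east --name client.radosgw.us-east-1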

Restart Services
----------------

Once you have redeployed your Ceph configuration files, we recommend restarting
your Ceph Storage Cluster(s) and Apache instances.

For Ubuntu, use the following on each :term:`Ceph Node`::

   sudo restart ceph-all

For each gateway instance, we recommend restarting the ``apache2`` service. For
example::

   sudo service apache2 restart

Start Gateway Instances
-----------------------

Start up the ``radosgw`` service. ::

   sudo /etc/init.d/radosgw start

If you are running multiple instances on the same host, you must specify the
user name. ::

   sudo /etc/init.d/radosgw start --name client.radosgw.us-east-1

Open a browser and check the endpoints for each zone. A simple HTTP request
to the domain name should return the following:

.. code-block:: xml

   <ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
     <Owner>
       <ID>anonymous</ID>
       <DisplayName/>
     </Owner>
     <Buckets/>
   </ListAllMyBucketsResult>
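
You can perform the same check from the command line; an anonymous request with
``curl`` should return the XML document above::

   curl http://{fqdn}:80/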

Replicate Data
--------------

The data synchronization agent replicates the data of a master zone to a
secondary zone. The master zone of a region is the source for the secondary zone
of the region.

To configure the synchronization agent, retrieve the following from each zone:

- Zone Name
- Access Key
- Secret Key
- Hostname
- Port

You only need the hostname and port for a single instance (assuming all
gateway instances in a region/zone access the same Ceph Storage Cluster).
Specify these values in a configuration file
(e.g., ``cluster-data-sync.conf``), and include a ``log_file`` name and an
identifier for the ``daemon_id``. For example:

.. code-block:: ini

   src_access_key: {source-access-key}
   src_secret_key: {source-secret-key}
   src_host: {source-hostname}
   src_port: {source-port}
   src_zone: {source-zone}
   dest_access_key: {destination-access-key}
   dest_secret_key: {destination-secret-key}
   dest_host: {destination-hostname}
   dest_port: {destination-port}
   dest_zone: {destination-zone}
   log_file: {log.filename}
   daemon_id: {daemon-id}

A concrete example may look like this:

.. code-block:: ini

   src_access_key: DG8RE354EFPZBICHIAF0
   src_secret_key: i3U0HiRP8CXaBWrcF8bbh6CbsxGYuPPwRkixfFSb
   src_host: ceph-gateway-east
   src_port: 80
   src_zone: us-east
   dest_access_key: U60RFI6B08F32T2PD30G
   dest_secret_key: W3HuUor7Gl1Ee93pA2pq2wFk1JMQ7hTrSDecYExl
   dest_host: ceph-gateway-west
   dest_port: 80
   dest_zone: us-west
   log_file: /var/log/radosgw/radosgw-sync-us-east-west.log
   daemon_id: rgw-east-2-west

To activate the data synchronization agent, open a terminal and
execute the following::

   radosgw-agent -c cluster-data-sync.conf

When the synchronization agent is running, you should see output
indicating that the agent is synchronizing shards of data. ::

   INFO:radosgw_agent.sync:Starting incremental sync
   INFO:radosgw_agent.worker:17910 is processing shard number 0
   INFO:radosgw_agent.worker:shard 0 has 0 entries after ''
   INFO:radosgw_agent.worker:finished processing shard 0
   INFO:radosgw_agent.worker:17910 is processing shard number 1
   INFO:radosgw_agent.sync:1/64 shards processed
   INFO:radosgw_agent.worker:shard 1 has 0 entries after ''
   INFO:radosgw_agent.worker:finished processing shard 1
   INFO:radosgw_agent.sync:2/64 shards processed
   ...

.. note:: You must have an agent for each source-destination pair.


Configure a Secondary Region
============================

This section provides an exemplary procedure for setting up a cluster with
multiple regions. Configuring a cluster that spans regions requires maintaining
a global namespace, so that there are no namespace clashes among object names
stored across different regions.

This section extends the procedure in `Configure a Master Region`_, but
changes the region name and modifies a few procedures. See the following
sections for details.


Naming for the Secondary Region
-------------------------------

Before configuring the cluster, defining region, zone and instance names will
help you manage your cluster. Let's assume the region represents the European
Union, and we refer to it by its standard abbreviation.

- European Union: ``eu``

Let's assume the zones represent the Eastern and Western European Union. For
continuity, our naming convention will use ``{region name}-{zone name}``
format, but you can use any naming convention you prefer.

- European Union, East Zone: ``eu-east``
- European Union, West Zone: ``eu-west``

Finally, let's assume that zones may have more than one Ceph Object Gateway
instance per zone. For continuity, our naming convention will use
``{region name}-{zone name}-{instance}`` format, but you can use any naming
convention you prefer.

- European Union Region, Master Zone, Instance 1: ``eu-east-1``
- European Union Region, Secondary Zone, Instance 1: ``eu-west-1``


Configuring a Secondary Region
------------------------------

Repeat the exemplary procedure of `Configure a Master Region`_
with the following differences:

#. Use `Naming for the Secondary Region`_ in lieu of `Naming for
   the Master Region`_.

#. `Create Pools`_ using ``eu`` instead of ``us``.

#. `Create a Keyring`_ and the corresponding keys using ``eu`` instead of
   ``us``. You may use the same keyring if you desire, but ensure that you
   create the keys on the Ceph Storage Cluster for that region (or region
   and zone).

#. `Install Apache/FastCGI`_.

#. `Create Data Directories`_ using ``eu`` instead of ``us``.

#. `Create a Gateway Configuration`_ using ``eu`` instead of ``us`` for
   the socket names.

#. `Enable the Configuration`_.

#. `Add a FastCGI Script`_ using ``eu`` instead of ``us`` for the user names.

#. `Add Instances to Ceph Config File`_ using ``eu`` instead of ``us`` for the
   pool names.

#. `Create a Region`_ using ``eu`` instead of ``us``. Set ``is_master`` to
   ``false``. For consistency, create the master region in the secondary region
   too. ::

      sudo radosgw-admin region set --infile us.json --name client.radosgw.eu-east-1

#. `Create Zones`_ using ``eu`` instead of ``us``. Ensure that you update the
   user name (i.e., ``--name``) so that you create the zones in the correct
   cluster.

#. `Update Zone Configurations`_ using ``eu`` instead of ``us``.

#. Create the zones from the master region in the secondary region. ::

      sudo radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name client.radosgw.eu-east-1
      sudo radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name client.radosgw.eu-west-1
      sudo radosgw-admin zone set --rgw-zone=us-west --infile us-west.json --name client.radosgw.eu-east-1
      sudo radosgw-admin zone set --rgw-zone=us-west --infile us-west.json --name client.radosgw.eu-west-1

#. Create the zones from the secondary region in the master region. ::

      sudo radosgw-admin zone set --rgw-zone=eu-east --infile eu-east.json --name client.radosgw.us-east-1
      sudo radosgw-admin zone set --rgw-zone=eu-east --infile eu-east.json --name client.radosgw.us-west-1
      sudo radosgw-admin zone set --rgw-zone=eu-west --infile eu-west.json --name client.radosgw.us-east-1
      sudo radosgw-admin zone set --rgw-zone=eu-west --infile eu-west.json --name client.radosgw.us-west-1

#. `Restart Services`_.

#. `Start Gateway Instances`_.

#. `Replicate Data`_ by specifying the master zone of the master region as the
   source zone and the master zone of the secondary region as the secondary
   zone. When activating the ``radosgw-agent``, specify ``--metadata-only`` so
   that it only copies metadata.
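
   For example, the metadata agent between the two master zones might be run as
   follows. This is a sketch: the configuration file name
   ``inter-region-sync.conf`` is an assumption, and its contents follow the same
   format as the data synchronization configuration shown in `Replicate Data`_,
   with the master zones of the two regions as source and destination. ::

      radosgw-agent -c inter-region-sync.conf --metadata-only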

Once you have completed the foregoing procedure, you should have a cluster
consisting of a master region (``us``) and a secondary region (``eu``) where
there is a unified namespace between the two regions.

.. _CRUSH Map: ../../rados/operations/crush-map
.. _Install Apache, FastCGI and Gateway: ../manual-install
.. _Cephx Administration: ../../rados/operations/authentication/#cephx-administration
.. _Ceph configuration file: ../../rados/configuration/ceph-conf