--- /dev/null
+RGW Multisite
+=============
+
+This document contains directions for configuring RGW Multisite in ceph-ansible for deployments with multiple realms, zonegroups, or zones on a cluster.
+
+Multisite replication can be configured either over multiple Ceph clusters or in a single Ceph cluster using Ceph version **Jewel or newer**.
+
+The first two sections give an overview on working with ansible inventory and the variables to configure RGW Multisite.
+
+The next sections are instructions for two deployment scenarios: a single realm replicated across multiple Ceph clusters, and a single Ceph cluster with multiple realms.
+
+For information on configuring RGW Multisite with just one realm, one zone group, and one zone in a cluster, refer to [README-MULTISITE.md](README-MULTISITE.md).
+
+# Working with Ansible Inventory
+
+If you are familiar with basic ansible terminology, working with inventory files, and inventory groups, feel free to skip this section.
+
+## The Inventory File
+
+Ceph-ansible starts up all the different daemons in a Ceph cluster.
+
+Each daemon (osd.0, mon.1, rgw.a) is given a line in the inventory file. Each line is called a **host** in ansible.
+Each type of daemon (OSD, Mon, RGW, Mgr, etc.) is given a **group** with its respective daemons in the ansible inventory file.
+
+Here is an example of an inventory file (in .ini format) for a Ceph cluster with 1 Mgr, 4 RGWs, 3 OSDs, and 2 Mons:
+```
+[mgrs]
+ mgr-000 ansible_ssh_host=192.168.224.48 ansible_ssh_port=22
+[rgws]
+ rgws-000 ansible_ssh_host=192.168.216.145 ansible_ssh_port=22 radosgw_address=192.168.216.145
+ rgws-001 ansible_ssh_host=192.168.215.178 ansible_ssh_port=22 radosgw_address=192.168.215.178
+ rgws-002 ansible_ssh_host=192.168.132.221 ansible_ssh_port=22 radosgw_address=192.168.132.221
+ rgws-003 ansible_ssh_host=192.168.145.7 ansible_ssh_port=22 radosgw_address=192.168.145.7
+[osds]
+ osd-002 ansible_ssh_host=192.168.176.118 ansible_ssh_port=22
+ osd-001 ansible_ssh_host=192.168.226.21 ansible_ssh_port=22
+ osd-000 ansible_ssh_host=192.168.230.196 ansible_ssh_port=22
+[mons]
+ mon-000 ansible_ssh_host=192.168.210.155 ansible_ssh_port=22 monitor_address=192.168.210.155
+ mon-001 ansible_ssh_host=192.168.179.111 ansible_ssh_port=22 monitor_address=192.168.179.111
+```
+
+Notice there are 4 groups defined here: mgrs, rgws, osds, mons.
+
+There is one host (mgr-000) in mgrs, 4 hosts (rgws-000, rgws-001, rgws-002, rgws-003) in rgws, 3 hosts (osd-000, osd-001, osd-002) in osds, and 2 hosts (mon-000, mon-001) in mons.
+
+## The Groups of Inventory Groups
+
+Groups in the inventory can have subgroups. Consider the following inventory file:
+```
+[rgws]
+
+[rgws:children]
+usa
+canada
+
+[usa]
+ rgws-000 ansible_ssh_host=192.168.216.145 ansible_ssh_port=22 radosgw_address=192.168.216.145
+ rgws-001 ansible_ssh_host=192.168.215.178 ansible_ssh_port=22 radosgw_address=192.168.215.178
+
+[canada]
+ rgws-002 ansible_ssh_host=192.168.132.221 ansible_ssh_port=22 radosgw_address=192.168.132.221
+ rgws-003 ansible_ssh_host=192.168.145.7 ansible_ssh_port=22 radosgw_address=192.168.145.7
+```
+
+In this inventory rgws-000 and rgws-001 are in group `usa` and rgws-002 and rgws-003 are in group `canada`.
+
+`usa` and `canada` are both child groups of `rgws`. Child groups can have child groups of their own.
+
+## group_vars
+
+In the ceph-ansible tree there is a directory called `group_vars`. This directory has a collection of .yml files for variables set for each of the groups.
+
+The variables defined in `all.yml` apply to all groups in the inventory.
+If a variable, for example `rgw_realm: usa`, is set in `group_vars/usa.yml`, then `usa` will be the value of `rgw_realm` for all of the RGW hosts in group `usa`.
+
+If a group is a child of another group, the hosts in that group inherit all of the parent group's values.
+If a variable is set in both a group and its parent group, the variable evaluates to the value set in the group closest to where the host is defined.
+
+For example if in the above inventory configuration `rgw_realm: nowhere` in `group_vars/rgws.yml` and `rgw_realm: usa` in `group_vars/usa.yml`, then the value for `rgw_realm` will be `usa` for rgws-000 and rgws-001.
+
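+A quick way to confirm which value a host actually resolves is ansible's `debug` module (a sketch; it assumes the inventory file is named `hosts` and sits in the current directory):
+
+```
+ansible -i hosts rgws-000 -m debug -a "var=rgw_realm"
+```
+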
+For more information on working with ansible inventory please visit: https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html
+
+# RGW Multisite Config Overview
+
+## Inventory Groups for Multisite
+
+To run multisite, the inventory file needs to have a group for `rgws`, groups for `realms`, and groups for `zones`.
+
+Each of the groups that is a `realm` needs to be a child of the `rgws` group.
+Each of the groups that is a `zone` needs to be a child of one of the groups that is a `realm`.
+
+Each RGW host in the inventory must be in one of the groups that is a zone. Each RGW host can only be in one zone at a time.
+
+Consider the following inventory file:
+```
+[rgws]
+
+[rgws:children]
+usa
+canada
+
+[usa]
+[usa:children]
+boston
+seattle
+
+[boston]
+ rgws-000 ansible_ssh_host=192.168.216.145 ansible_ssh_port=22 radosgw_address=192.168.216.145
+[seattle]
+ rgws-001 ansible_ssh_host=192.168.215.178 ansible_ssh_port=22 radosgw_address=192.168.215.178
+
+[canada]
+[canada:children]
+toronto
+
+[toronto]
+ rgws-002 ansible_ssh_host=192.168.215.178 ansible_ssh_port=22 radosgw_address=192.168.215.199
+ rgws-003 ansible_ssh_host=192.168.215.178 ansible_ssh_port=22 radosgw_address=192.168.194.109
+```
+
+In this inventory there are 2 realms: `usa` and `canada`.
+
+The realm `usa` has 2 zones: `boston` and `seattle`. Zone `boston` contains the RGW on host rgws-000. Zone `seattle` contains the RGW on host rgws-001.
+
+The realm `canada` only has 1 zone `toronto`. Zone `toronto` contains the RGWs on the hosts rgws-002 and rgws-003.
+
+Finally, `radosgw_address` must be set for all RGW hosts.
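+
+To double-check which hosts landed in each zone group, ansible can list a group's members (a sketch; `hosts` is an assumed inventory file name):
+
+```
+ansible -i hosts boston --list-hosts
+ansible -i hosts usa --list-hosts
+```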
+
+## Multisite Variables in group_vars/all.yml
+
+The following are the multisite variables that can be configured for all RGW hosts via `group_vars/all.yml` set to their defaults:
+```
+## Rados Gateway options
+#
+radosgw_num_instances: 1
+
+#############
+# MULTISITE #
+#############
+
+rgw_multisite: false
+rgw_zone: default
+```
+
+The **only** value that needs to be changed is `rgw_multisite`. Changing this variable to `true` runs the multisite playbooks in ceph-ansible on all the RGW hosts.
+
+`rgw_zone` is set to "default" to enable compression for clusters configured without RGW multi-site.
+Changing this value in a zone-specific .yml file overrides this default value.
+
+`radosgw_num_instances` must be set to 1. The playbooks do not support deploying RGW Multisite on hosts with more than 1 RGW.
+
+## Multisite Variables in group_vars/{zone name}.yml
+Each of the zones in a multisite deployment must have a .yml file in `group_vars/` with its name.
+
+All values must be set in a zone's configuration file.
+
+In the example inventory configuration, `group_vars/` would have files for zones named `boston.yml`, `seattle.yml` and `toronto.yml`.
+
+The variables in a zone specific file must be the same as the below variables in `group_vars/zone.yml.sample`:
+```
+rgw_zone: boston
+
+rgw_zonemaster: true
+rgw_zonesecondary: false
+
+rgw_zonegroup: solarsystem
+
+rgw_zonegroupmaster: true
+```
+A group of 1 or more RGWs can be grouped into a **zone**.
+
+To avoid any confusion, the value of `rgw_zone` should always be set to the name of the file it is in. For example, this file should be named `group_vars/boston.yml`.
+
+`rgw_zonemaster` specifies that the zone will be the master zone in a zonegroup.
+
+`rgw_zonesecondary` specifies that the zone will be a secondary zone in a zonegroup.
+
+Both `rgw_zonemaster` and `rgw_zonesecondary` need to be defined. They cannot have the same value.
+
+The ceph-ansible multisite playbooks automatically make a zone the default zone if it is the only zone in a cluster.
+
+A group of 1 or more zones can be grouped into a **zonegroup**.
+
+A zonegroup must have a master zone in order for secondary zones to exist in it.
+
+Setting `rgw_zonegroupmaster: true` specifies the zonegroup will be the master zonegroup in a realm.
+
+Setting `rgw_zonegroupmaster: false` indicates the zonegroup will be non-master.
+
+There must be one master zonegroup per realm. After the master zonegroup is created there can be any number of non-master zonegroups per realm.
+
+The ceph-ansible multisite playbooks automatically make a zonegroup the default zonegroup if it is the only zonegroup in a cluster.
+
+## Multisite Variables in group_vars/{realm name}.yml
+
+Each of the realms in a multisite deployment must have a .yml file in `group_vars/` with its name.
+
+All values must be set in a realm's configuration file.
+
+In the example inventory configuration, `group_vars/` would have files named `usa.yml` and `canada.yml`.
+
+The variables in a realm specific file must be the same as the below variables in `group_vars/realm.yml.sample`:
+```
+rgw_realm: usa
+
+system_access_key: 6kWkikvapSnHyE22P7nO
+system_secret_key: MGecsMrWtKZgngOHZdrd6d3JxGO5CPWgT2lcnpSt
+
+rgw_zone_user: bostonian
+rgw_zone_user_display_name: "Mark Wahlberg"
+
+rgw_pull_port: "{{ radosgw_frontend_port }}"
+rgw_pull_proto: "http"
+rgw_pullhost: localhost
+```
+To avoid any confusion, the value of `rgw_realm` should always be set to the name of the file it is in. For example, this file should be named `group_vars/usa.yml`.
+
+The `system_access_key` and `system_secret_key` should be user-generated and different for each realm.
+
+A system user, with a display name, is created for each realm.
+
+The variables `rgw_pull_port`, `rgw_pull_proto`, and `rgw_pullhost` are joined together to make the endpoint string needed to create secondary zones.
+
+This endpoint is one of the RGW endpoints in the master zone of the zonegroup and realm you want to create secondary zones in.
+
+This endpoint **must be resolvable** from the mons and rgws in the cluster the secondary zone(s) are being created in.
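+
+A simple reachability check from one of those nodes (a sketch, assuming the default frontend port of 8080; a healthy RGW answers an anonymous request with a short `ListAllMyBucketsResult` XML document):
+
+```
+curl http://<rgw_pullhost>:8080
+```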
+
+# Deployment Scenario #1: Single Realm & Zonegroup with Multiple Ceph Clusters
+
+## Creating the Master Zone in the Primary Cluster
+
+This deployment will set up a default realm, default master zonegroup and default master zone in the Ceph cluster.
+
+The following inventory file will be used for the primary cluster:
+```
+[rgws]
+
+[rgws:children]
+usa
+
+[usa]
+[usa:children]
+boston
+
+[boston]
+ rgws-000 ansible_ssh_host=192.168.216.145 ansible_ssh_port=22 radosgw_address=192.168.216.145
+```
+
+1. Generate System Access and System Secret Keys for the Realm "usa"
+
+```
+echo system_access_key: $(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 20 | head -n 1) > usa-multi-site-keys.txt
+echo system_secret_key: $(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 40 | head -n 1) >> usa-multi-site-keys.txt
+```
+2. Edit `group_vars/all.yml` for the Primary cluster
+
+```
+## Rados Gateway options
+#
+radosgw_num_instances: 1
+
+#############
+# MULTISITE #
+#############
+
+rgw_multisite: true
+rgw_zone: default
+```
+
+3. Edit the Zone Config `group_vars/boston.yml` for the Primary cluster
+```
+cp group_vars/zone.yml.sample group_vars/boston.yml
+```
+
+```
+rgw_zone: boston
+
+rgw_zonemaster: true
+rgw_zonesecondary: false
+
+rgw_zonegroup: massachusetts
+
+rgw_zonegroupmaster: true
+```
+
+4. Edit the Realm Config `group_vars/usa.yml` for the Primary cluster
+```
+cp group_vars/realm.yml.sample group_vars/usa.yml
+```
+
+```
+rgw_realm: usa
+
+system_access_key: 6kWkikvapSnHyE22P7nO
+system_secret_key: MGecsMrWtKZgngOHZdrd6d3JxGO5CPWgT2lcnpSt
+
+rgw_zone_user: bostonian
+rgw_zone_user_display_name: "Mark Wahlberg"
+
+rgw_pull_port: "{{ radosgw_frontend_port }}"
+rgw_pull_proto: "http"
+rgw_pullhost: 192.168.216.145 # IP address for rgws-000
+```
+
+5. **(Optional)** Edit the rgws.yml in group_vars for rgw related pool creation
+
+```
+rgw_create_pools:
+ "{{ rgw_zone }}.rgw.buckets.data":
+ pg_num: 64
+ size: ""
+ type: ec
+ ec_profile: myecprofile
+ ec_k: 5
+ ec_m: 3
+ "{{ rgw_zone }}.rgw.buckets.index":
+ pg_num: 8
+ size: ""
+ type: replicated
+```
+
+**Note:** A pgcalc tool should be used to determine the optimal sizes for the rgw.buckets.data, rgw.buckets.index pools as well as any other pools declared in this dictionary.
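+
+Once the playbook has run (next step), the pools can be checked against what was declared here (a sketch of standard Ceph commands, run from a mon node):
+
+```
+ceph osd pool ls detail | grep rgw
+```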
+
+6. Run the ceph-ansible playbook on your 1st cluster
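+
+When the play finishes, the new multisite entities can be verified from a mon node (a sketch; the names match the example configs above):
+
+```
+radosgw-admin realm list
+radosgw-admin zonegroup list
+radosgw-admin zone list
+```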
+
+## Configuring the Secondary Zone in a Separate Cluster
+
+This deployment will set up a secondary zone with a different Ceph cluster as its backend.
+The secondary zone will be in the same realm and zonegroup that was created in the primary Ceph cluster.
+
+The following inventory file will be used for the secondary cluster:
+```
+[rgws]
+
+[rgws:children]
+usa
+
+[usa]
+[usa:children]
+salem
+
+[salem]
+ rgws-000 ansible_ssh_host=192.168.215.178 ansible_ssh_port=22 radosgw_address=192.168.215.178
+```
+
+1. Edit `group_vars/all.yml` for the Secondary Cluster
+
+```
+## Rados Gateway options
+#
+radosgw_num_instances: 1
+
+#############
+# MULTISITE #
+#############
+
+rgw_multisite: true
+rgw_zone: default
+```
+
+**Note:** `radosgw_num_instances` must be set to 1. The playbooks do not support deploying RGW Multisite on hosts with more than 1 RGW.
+
+2. Edit the Zone Config `group_vars/salem.yml` for the Secondary Cluster
+```
+cp group_vars/zone.yml.sample group_vars/salem.yml
+```
+
+```
+rgw_zone: salem
+
+rgw_zonemaster: false
+rgw_zonesecondary: true
+
+rgw_zonegroup: massachusetts
+
+rgw_zonegroupmaster: true
+```
+
+**Note:** `rgw_zonesecondary` is set to `true` here and `rgw_zonemaster` is set to `false`.
+
+3. Use the Exact Same Realm Config `group_vars/usa.yml` from the Primary cluster
+
+```
+rgw_realm: usa
+
+system_access_key: 6kWkikvapSnHyE22P7nO
+system_secret_key: MGecsMrWtKZgngOHZdrd6d3JxGO5CPWgT2lcnpSt
+
+rgw_zone_user: bostonian
+rgw_zone_user_display_name: "Mark Wahlberg"
+
+rgw_pull_port: "{{ radosgw_frontend_port }}"
+rgw_pull_proto: "http"
+rgw_pullhost: 192.168.216.145 # IP address for rgws-000 from the primary cluster.
+```
+
+**Note:** The endpoint made from `rgw_pull_proto` + `rgw_pullhost` + `rgw_pull_port` must be resolvable by the secondary Ceph cluster's mon and rgw node(s).
+
+4. **(Optional)** Edit the rgws.yml in group_vars for rgw related pool creation
+
+```
+rgw_create_pools:
+ "{{ rgw_zone }}.rgw.buckets.data":
+ pg_num: 64
+ size: ""
+ type: ec
+ ec_profile: myecprofile
+ ec_k: 5
+ ec_m: 3
+ "{{ rgw_zone }}.rgw.buckets.index":
+ pg_num: 8
+ size: ""
+ type: replicated
+```
+**Note:** The pg_num values should match the values for the rgw pools created on the primary cluster. Mismatching pg_num values on different sites can result in very poor performance.
+
+**Note:** An online pgcalc tool (ex: https://ceph.io/pgcalc) should be used to determine the optimal sizes for the rgw.buckets.data, rgw.buckets.index pools as well as any other pools declared in this dictionary.
+
+5. Run the ceph-ansible playbook on your 2nd cluster
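+
+Once both clusters are up, replication can be checked from either side (a sketch; the command reports the realm, its zones, and whether metadata and data sync are caught up):
+
+```
+radosgw-admin sync status
+```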
+
+## Conclusion
+
+You should now have a master zone on the primary cluster and a secondary zone on the secondary cluster in Active-Active mode.
+
+# Deployment Scenario #2: Single Ceph Cluster with Multiple Realms
+
+This deployment will set up two realms. One realm will have two different zones and zonegroups. The other will have just one zone and zonegroup.
+
+The following inventory file will be used for the cluster:
+```
+[rgws]
+
+[rgws:children]
+usa
+canada
+
+[usa]
+[usa:children]
+boston
+seattle
+
+[boston]
+ rgws-000 ansible_ssh_host=192.168.216.145 ansible_ssh_port=22 radosgw_address=192.168.216.145
+[seattle]
+ rgws-001 ansible_ssh_host=192.168.215.178 ansible_ssh_port=22 radosgw_address=192.168.215.178
+
+[canada]
+[canada:children]
+toronto
+
+[toronto]
+ rgws-002 ansible_ssh_host=192.168.215.178 ansible_ssh_port=22 radosgw_address=192.168.215.199
+ rgws-003 ansible_ssh_host=192.168.215.178 ansible_ssh_port=22 radosgw_address=192.168.194.109
+```
+
+1. Generate System Access and System Secret Keys for the Realms "usa" and "canada"
+
+```
+echo system_access_key: $(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 20 | head -n 1) > usa-multi-site-keys.txt
+echo system_secret_key: $(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 40 | head -n 1) >> usa-multi-site-keys.txt
+
+echo system_access_key: $(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 20 | head -n 1) > canada-multi-site-keys.txt
+echo system_secret_key: $(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 40 | head -n 1) >> canada-multi-site-keys.txt
+```
+2. Edit `group_vars/all.yml` for the Cluster
+
+```
+## Rados Gateway options
+#
+radosgw_num_instances: 1
+
+#############
+# MULTISITE #
+#############
+
+rgw_multisite: true
+rgw_zone: default
+```
+
+3. Edit the Zone Configs `group_vars/{boston,seattle,toronto}.yml`
+```
+for i in {boston,seattle,toronto}; do cp group_vars/zone.yml.sample group_vars/$i.yml; done
+```
+
+```
+rgw_zone: boston
+
+rgw_zonemaster: true
+rgw_zonesecondary: false
+
+rgw_zonegroup: massachusetts
+
+rgw_zonegroupmaster: true
+```
+
+```
+rgw_zone: seattle
+
+rgw_zonemaster: true
+rgw_zonesecondary: false
+
+rgw_zonegroup: washington
+
+rgw_zonegroupmaster: false
+```
+
+```
+rgw_zone: toronto
+
+rgw_zonemaster: true
+rgw_zonesecondary: false
+
+rgw_zonegroup: ontario
+
+rgw_zonegroupmaster: true
+```
+
+**Note:** Since boston and seattle are in different zonegroups (massachusetts and washington) they can both be master zones.
+
+4. Edit the Realm Configs `group_vars/{usa,canada}.yml`
+```
+for i in {usa,canada}; do cp group_vars/realm.yml.sample group_vars/$i.yml; done
+```
+
+```
+rgw_realm: usa
+
+system_access_key: usaaccesskey
+system_secret_key: usasecretkey
+
+rgw_zone_user: bostonian
+rgw_zone_user_display_name: "Mark Wahlberg"
+
+rgw_pull_port: "{{ radosgw_frontend_port }}"
+rgw_pull_proto: "http"
+rgw_pullhost: 192.168.216.145 # IP address for rgws-000
+```
+
+```
+rgw_realm: canada
+
+system_access_key: canadaaccesskey
+system_secret_key: canadasecretkey
+
+rgw_zone_user: canadian
+rgw_zone_user_display_name: "Justin Trudeau"
+
+rgw_pull_port: "{{ radosgw_frontend_port }}"
+rgw_pull_proto: "http"
+rgw_pullhost: 192.168.215.199 # IP address for rgws-002
+```
+
+**Note:** The secret keys and access keys should be replaced with the ones generated in step #1.
+
+**Note:** The endpoint made from `rgw_pull_proto` + `rgw_pullhost` + `rgw_pull_port` for each realm should be resolvable by the mons and rgws since they are in the same Ceph cluster.
+
+5. Run the ceph-ansible playbook
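+
+Afterwards, both realms and their periods should be visible from a mon node (a sketch of standard verification commands):
+
+```
+radosgw-admin realm list
+radosgw-admin period get --rgw-realm=usa
+radosgw-admin period get --rgw-realm=canada
+```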
+
RGW Multisite
=============
-Directions for configuring the RGW Multisite support in ceph-ansible
+This document contains directions for configuring the RGW Multisite support in ceph-ansible when the desired multisite configuration involves only one realm, one zone group and one zone in a cluster.
+
+For information on configuring RGW Multisite with multiple realms, zone groups, or zones in a cluster, refer to [README-MULTISITE-MULTIREALM.md](README-MULTISITE-MULTIREALM.md).
+
+In Ceph Multisite, a realm, master zone group, and master zone are created on a Primary Ceph Cluster.
+
+The realm on the primary cluster is pulled onto a secondary cluster where a new zone is created and joins the realm.
+
+Once the realm is pulled on the secondary cluster and the new zone is created, data will now sync between the primary and secondary clusters.
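+
+The realm pull that the playbooks perform is roughly equivalent to the following command on the secondary cluster (a sketch, assuming the default port 8080; the URL and keys come from the variables described below, and the key values are placeholders):
+
+```
+radosgw-admin realm pull --url=http://cluster0-rgw0:8080 --access-key=<system_access_key> --secret=<system_secret_key>
+```
+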
## Requirements
+Multisite replication can be configured either over multiple Ceph clusters or in a single Ceph cluster using Ceph version **Jewel or newer**.
+
* At least 2 Ceph clusters
-* 1 RGW per cluster
+* At least 1 RGW per cluster
+* 1 RGW per host
* Jewel or newer
-More details:
-
-* Can configure a Master and Secondary realm/zonegroup/zone on 2 separate clusters.
+## Configuring the Master Zone in the Primary Ceph Cluster
-## Configuring the Master Zone in the Primary Cluster
-
-This will setup the realm, zonegroup and master zone and make them the defaults. It will also reconfigure the specified RGW for use with the zone.
+This will set up the realm, master zonegroup and master zone and make them the defaults on the Primary Cluster.
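+
+Behind the scenes this boils down to radosgw-admin calls like the following, run on the first mon (a sketch using the example names from step 2 below; the key values are placeholders):
+
+```
+radosgw-admin realm create --rgw-realm=milkyway --default
+radosgw-admin zonegroup create --rgw-realm=milkyway --rgw-zonegroup=solarsystem --master --default
+radosgw-admin zone create --rgw-zonegroup=solarsystem --rgw-zone=jupiter --master --default --access-key=<system_access_key> --secret=<system_secret_key>
+```
+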
1. Generate System Access and System Secret Keys
```
echo system_access_key: $(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 20 | head -n 1) > multi-site-keys.txt
echo system_secret_key: $(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 40 | head -n 1) >> multi-site-keys.txt
```
-2. Edit the all.yml in group_vars
+2. Edit `group_vars/all.yml` for the Primary Cluster
```
-copy_admin_key: true
+## Rados Gateway options
+#
+radosgw_num_instances: 1
+
+#############
+# MULTISITE #
+#############
+
# Enable Multisite support
rgw_multisite: true
rgw_zone: jupiter
rgw_zonemaster: true
+rgw_zonegroupmaster: true
rgw_zonesecondary: false
rgw_multisite_proto: "http"
-rgw_multisite_endpoint_addr: "{{ ansible_fqdn }}"
-rgw_multisite_endpoints_list: "{{ rgw_multisite_proto }}://{{ ansible_fqdn }}:{{ radosgw_frontend_port }}"
rgw_zonegroup: solarsystem
rgw_zone_user: zone.user
+rgw_zone_user_display_name: "Zone User"
rgw_realm: milkyway
system_access_key: 6kWkikvapSnHyE22P7nO
system_secret_key: MGecsMrWtKZgngOHZdrd6d3JxGO5CPWgT2lcnpSt
```
-**Note:** `rgw_zonemaster` should have the value of `true` and `rgw_zonesecondary` should be `false`
-
-**Note:** replace the `system_access_key` and `system_secret_key` values with the ones you generated
+**Note:** `radosgw_num_instances` must be set to 1. The playbooks do not support deploying RGW Multisite on hosts with more than 1 RGW.
-**Note:** `ansible_fqdn` domain name assigned to `rgw_multisite_endpoint_addr` must be resolvable from the secondary Ceph clusters mon and rgw node(s)
-
-**Note:** if there is more than 1 RGW in the cluster, `rgw_multisite_endpoints` needs to be set.<br/>
-`rgw_multisite_endpoints` is a comma seperated list, with no spaces, of the RGW endpoints in the format:<br/>
-`{{ rgw_multisite_proto }}://{{ ansible_fqdn }}:{{ radosgw_frontend_port }}`<br/>
-for example: `rgw_multisite_endpoints: http://foo.example.com:8080,http://bar.example.com:8080,http://baz.example.com:8080`
+**Note:** `rgw_zone` cannot be set to "default"
+
+**Note:** `rgw_zonemaster` should have the value of `true` and `rgw_zonesecondary` should be `false`
-3. Run the ceph-ansible playbook on your 1st cluster
+**Note:** replace the `system_access_key` and `system_secret_key` values with the ones you generated
3. **(Optional)** Edit the rgws.yml in group_vars for rgw related pool creation
4. Run the ceph-ansible playbook on your 1st cluster
-## Configuring the Secondary Zone in a Separate Cluster
+## Pulling the Realm and Configuring a New Zone on a Secondary Ceph Cluster
-5. Edit the all.yml in group_vars
+This configuration will pull the realm from the primary cluster onto the secondary cluster and create a new zone on that cluster as well.
+
+5. Edit `group_vars/all.yml` for the Secondary Cluster
```
-copy_admin_key: true
-# Enable Multisite support
+## Rados Gateway options
+#
+radosgw_num_instances: 1
+
+#############
+# MULTISITE #
+#############
+
rgw_multisite: true
rgw_zone: mars
rgw_zonemaster: false
rgw_zonesecondary: true
+rgw_zonegroupmaster: false
rgw_multisite_proto: "http"
-rgw_multisite_endpoint_addr: "{{ ansible_fqdn }}"
-rgw_multisite_endpoints_list: "{{ rgw_multisite_proto }}://{{ ansible_fqdn }}:{{ radosgw_frontend_port }}"
rgw_zonegroup: solarsystem
rgw_zone_user: zone.user
+rgw_zone_user_display_name: "Zone User"
rgw_realm: milkyway
system_access_key: 6kWkikvapSnHyE22P7nO
system_secret_key: MGecsMrWtKZgngOHZdrd6d3JxGO5CPWgT2lcnpSt
rgw_pullhost: cluster0-rgw0
```
+**Note:** `rgw_zone` cannot be set to "default"
+
**Note:** `rgw_zonemaster` should have the value of `false` and `rgw_zonesecondary` should be `true`
-**Note:** `rgw_pullhost` should be the `rgw_multisite_endpoint_addr` of the RGW that is configured in the Primary Cluster
+**Note:** The endpoint made from `rgw_pull_proto` + `rgw_pullhost` + `rgw_pull_port` must be resolvable by the secondary Ceph cluster's mons and rgws
**Note:** `rgw_zone_user`, `system_access_key`, and `system_secret_key` should match what you used in the Primary Cluster
-**Note:** `ansible_fqdn` domain name assigned to `rgw_multisite_endpoint_addr` must be resolvable from the Primary Ceph clusters mon and rgw node(s)
-
-**Note:** if there is more than 1 RGW in the Secondary Cluster, `rgw_multisite_endpoints` needs to be set with the RGWs in the Secondary Cluster just like it was set in the Primary Cluster
-
-5. Run the ceph-ansible playbook on your 2nd cluster
-
6. **(Optional)** Edit the rgws.yml in group_vars for rgw related pool creation
```
# MULTISITE #
#############
+# Changing this value allows multisite code to run
#rgw_multisite: false
-# The following Multi-site related variables should be set by the user.
-#
-# If there is more than 1 RGW in a master or secondary cluster than rgw_multisite_endpoints needs to be a comma seperated list (with no spaces) of the RGW endpoints in the format:
-# {{ rgw_multisite_proto }}://{{ ansible_fqdn }}:{{ radosgw_frontend_port }}
-# ex: rgw_multisite_endpoints: http://foo.example.com:8080,http://bar.example.com:8080,http://baz.example.com:8080
+# If the desired multisite configuration involves only one realm, one zone group and one zone (per cluster), then the multisite variables can be set here.
+# Please see README-MULTISITE.md for more information.
#
-# If there is only 1 RGW in the inventory, rgw_multisite_endpoints does not need to change
+# If multiple realms or multiple zonegroups or multiple zones need to be created on a cluster, then
+# the multisite config variables should be edited in their respective zone .yml file and realm .yml file.
+# See README-MULTISITE-MULTIREALM.md for more information.
+
+# The following Multi-site related variables should be set by the user.
#
# rgw_zone is set to "default" to enable compression for clusters configured without rgw multi-site
-# If multisite is configured rgw_zone should not be set to "default". See README-MULTISITE.md for an example.
+# If multisite is configured, rgw_zone should not be set to "default".
+#
#rgw_zone: default
#rgw_zonemaster: true
#rgw_zonesecondary: false
-#rgw_multisite_proto: "http"
-#rgw_multisite_endpoint_addr: "{{ ansible_fqdn }}"
-#rgw_multisite_endpoints_list: "{{ rgw_multisite_proto }}://{{ ansible_fqdn }}:{{ radosgw_frontend_port }}"
#rgw_zonegroup: solarsystem # should be set by the user
+#rgw_zonegroupmaster: true
#rgw_zone_user: zone.user
+#rgw_zone_user_display_name: "Zone User"
#rgw_realm: milkyway # should be set by the user
+#rgw_multisite_proto: "http"
#system_access_key: 6kWkikvapSnHyE22P7nO # should be re-created by the user
#system_secret_key: MGecsMrWtKZgngOHZdrd6d3JxGO5CPWgT2lcnpSt # should be re-created by the user
# Multi-site remote pull URL variables
-#rgw_pull_port: "{{ radosgw_civetweb_port }}"
+#rgw_pull_port: "{{ radosgw_frontend_port }}"
#rgw_pull_proto: "http" # should be the same as rgw_multisite_proto for the master zone cluster
-#rgw_pullhost: localhost # rgw_pullhost only needs to be declared if there is a zone secondary. It should be the same as rgw_multisite_endpoint_addr for the master zone cluster
-
+#rgw_pullhost: localhost # rgw_pullhost only needs to be declared if there is a zone secondary.
###################
# CONFIG OVERRIDE #
#container_exec_cmd:
#docker: false
+
--- /dev/null
+#rgw_realm: usa
+
+# the user should generate a new pair of keys for each realm
+
+#system_access_key: 6kWkikvapSnHyE22P7nO
+#system_secret_key: MGecsMrWtKZgngOHZdrd6d3JxGO5CPWgT2lcnpSt
+
+#rgw_zone_user: bostonian
+#rgw_zone_user_display_name: "Mark Wahlberg"
+
+# The variables rgw_pull_port, rgw_pull_proto, and rgw_pullhost comprise one of the rgw endpoints in a master zone in the zonegroup and realm you want to create secondary zones in.
+
+#rgw_pull_port: "{{ radosgw_frontend_port }}"
+#rgw_pull_proto: "http"
+#rgw_pullhost: localhost
# MULTISITE #
#############
+# Changing this value allows multisite code to run
#rgw_multisite: false
-# The following Multi-site related variables should be set by the user.
-#
-# If there is more than 1 RGW in a master or secondary cluster than rgw_multisite_endpoints needs to be a comma seperated list (with no spaces) of the RGW endpoints in the format:
-# {{ rgw_multisite_proto }}://{{ ansible_fqdn }}:{{ radosgw_frontend_port }}
-# ex: rgw_multisite_endpoints: http://foo.example.com:8080,http://bar.example.com:8080,http://baz.example.com:8080
+# If the desired multisite configuration involves only one realm, one zone group and one zone (per cluster), then the multisite variables can be set here.
+# Please see README-MULTISITE.md for more information.
#
-# If there is only 1 RGW in the inventory, rgw_multisite_endpoints does not need to change
+# If multiple realms or multiple zonegroups or multiple zones need to be created on a cluster, then
+# the multisite config variables should be edited in their respective zone .yml file and realm .yml file.
+# See README-MULTISITE-MULTIREALM.md for more information.
+
+# The following Multi-site related variables should be set by the user.
#
# rgw_zone is set to "default" to enable compression for clusters configured without rgw multi-site
-# If multisite is configured rgw_zone should not be set to "default". See README-MULTISITE.md for an example.
+# If multisite is configured, rgw_zone should not be set to "default".
+#
#rgw_zone: default
#rgw_zonemaster: true
#rgw_zonesecondary: false
-#rgw_multisite_proto: "http"
-#rgw_multisite_endpoint_addr: "{{ ansible_fqdn }}"
-#rgw_multisite_endpoints_list: "{{ rgw_multisite_proto }}://{{ ansible_fqdn }}:{{ radosgw_frontend_port }}"
#rgw_zonegroup: solarsystem # should be set by the user
+#rgw_zonegroupmaster: true
#rgw_zone_user: zone.user
+#rgw_zone_user_display_name: "Zone User"
#rgw_realm: milkyway # should be set by the user
+#rgw_multisite_proto: "http"
#system_access_key: 6kWkikvapSnHyE22P7nO # should be re-created by the user
#system_secret_key: MGecsMrWtKZgngOHZdrd6d3JxGO5CPWgT2lcnpSt # should be re-created by the user
# Multi-site remote pull URL variables
-#rgw_pull_port: "{{ radosgw_civetweb_port }}"
+#rgw_pull_port: "{{ radosgw_frontend_port }}"
#rgw_pull_proto: "http" # should be the same as rgw_multisite_proto for the master zone cluster
-#rgw_pullhost: localhost # rgw_pullhost only needs to be declared if there is a zone secondary. It should be the same as rgw_multisite_endpoint_addr for the master zone cluster
-
+#rgw_pullhost: localhost # rgw_pullhost only needs to be declared if there is a zone secondary.
###################
# CONFIG OVERRIDE #
#container_exec_cmd:
#docker: false
+
--- /dev/null
+#rgw_zone: boston
+
+# Both rgw_zonemaster and rgw_zonesecondary must be set and they cannot have the same value
+
+#rgw_zonemaster: true
+#rgw_zonesecondary: false
+
+#rgw_zonegroup: massachusetts
+
+# The variable rgw_zonegroupmaster specifies that the zonegroup will be the master zonegroup in a realm. There can only be one master zonegroup in a realm.
+
+#rgw_zonegroupmaster: true
{% if 'num_threads' not in radosgw_frontend_options %}
rgw thread pool size = {{ radosgw_thread_pool_size }}
{% endif %}
+{% if rgw_multisite | bool %}
+rgw_realm = {{ instance['rgw_realm'] }}
+rgw_zonegroup = {{ instance['rgw_zonegroup'] }}
+rgw_zone = {{ instance['rgw_zone'] }}
+{% endif %}
{% endfor %}
{% endif %}
{% endfor %}
# MULTISITE #
#############
+# Changing this value allows multisite code to run
rgw_multisite: false
-# The following Multi-site related variables should be set by the user.
-#
-# If there is more than 1 RGW in a master or secondary cluster than rgw_multisite_endpoints needs to be a comma seperated list (with no spaces) of the RGW endpoints in the format:
-# {{ rgw_multisite_proto }}://{{ ansible_fqdn }}:{{ radosgw_frontend_port }}
-# ex: rgw_multisite_endpoints: http://foo.example.com:8080,http://bar.example.com:8080,http://baz.example.com:8080
+# If the desired multisite configuration involves only one realm, one zone group and one zone (per cluster), then the multisite variables can be set here.
+# Please see README-MULTISITE.md for more information.
#
-# If there is only 1 RGW in the inventory, rgw_multisite_endpoints does not need to change
+# If multiple realms or multiple zonegroups or multiple zones need to be created on a cluster, then
+# the multisite config variables should be edited in their respective zone .yml file and realm .yml file.
+# See README-MULTISITE-MULTIREALM.md for more information.
+
+# The following Multi-site related variables should be set by the user.
#
# rgw_zone is set to "default" to enable compression for clusters configured without rgw multi-site
-# If multisite is configured rgw_zone should not be set to "default". See README-MULTISITE.md for an example.
+# If multisite is configured, rgw_zone should not be set to "default".
+#
rgw_zone: default
-rgw_zonemaster: true
-rgw_zonesecondary: false
-rgw_multisite_proto: "http"
-rgw_multisite_endpoint_addr: "{{ ansible_fqdn }}"
-#rgw_multisite_endpoints_list: "{{ rgw_multisite_proto }}://{{ ansible_fqdn }}:{{ radosgw_frontend_port }}"
+#rgw_zonemaster: true
+#rgw_zonesecondary: false
#rgw_zonegroup: solarsystem # should be set by the user
+#rgw_zonegroupmaster: true
#rgw_zone_user: zone.user
+#rgw_zone_user_display_name: "Zone User"
#rgw_realm: milkyway # should be set by the user
+#rgw_multisite_proto: "http"
#system_access_key: 6kWkikvapSnHyE22P7nO # should be re-created by the user
#system_secret_key: MGecsMrWtKZgngOHZdrd6d3JxGO5CPWgT2lcnpSt # should be re-created by the user
# Multi-site remote pull URL variables
-rgw_pull_port: "{{ radosgw_civetweb_port }}"
-rgw_pull_proto: "http" # should be the same as rgw_multisite_proto for the master zone cluster
-#rgw_pullhost: localhost # rgw_pullhost only needs to be declared if there is a zone secondary. It should be the same as rgw_multisite_endpoint_addr for the master zone cluster
-
+#rgw_pull_port: "{{ radosgw_frontend_port }}"
+#rgw_pull_proto: "http" # should be the same as rgw_multisite_proto for the master zone cluster
+#rgw_pullhost: localhost # rgw_pullhost only needs to be declared if there is a zone secondary.
###################
# CONFIG OVERRIDE #
######################################################
container_exec_cmd:
-docker: false
\ No newline at end of file
+docker: false
_radosgw_address: "{{ hostvars[inventory_hostname][_interface][ip_version][0]['address'] }}"
when: ip_version == 'ipv6'
-- name: set_fact rgw_instances
+- name: set_fact rgw_instances without rgw multisite
set_fact:
- rgw_instances: "{{ rgw_instances|default([]) | union([{'instance_name': 'rgw' + item|string, 'radosgw_address': _radosgw_address, 'radosgw_frontend_port': radosgw_frontend_port|int + item|int}]) }}"
+ rgw_instances: "{{ rgw_instances|default([]) | union([{'instance_name': 'rgw' + item|string, 'radosgw_address': _radosgw_address, 'radosgw_frontend_port': radosgw_frontend_port|int + item|int }]) }}"
with_sequence: start=0 end={{ radosgw_num_instances|int - 1 }}
+ when:
+ - inventory_hostname in groups.get(rgw_group_name, [])
+ - not rgw_multisite | bool
+
+- name: set_fact rgw_instances with rgw multisite
+ set_fact:
+ rgw_instances: "{{ rgw_instances|default([]) | union([{'instance_name': 'rgw' + item|string, 'radosgw_address': _radosgw_address, 'radosgw_frontend_port': radosgw_frontend_port|int, 'rgw_realm': rgw_realm|string, 'rgw_zonegroup': rgw_zonegroup|string, 'rgw_zone': rgw_zone|string}]) }}"
+ with_sequence: start=0 end={{ radosgw_num_instances|int - 1 }}
+ when:
+ - inventory_hostname in groups.get(rgw_group_name, [])
+ - rgw_multisite | bool
---
-- name: update period
- command: "{{ container_exec_cmd }} radosgw-admin --cluster {{ cluster }} period update --commit"
- delegate_to: "{{ groups[mon_group_name][0] }}"
- run_once: true
-
- name: restart rgw
service:
name: "ceph-radosgw@rgw.{{ ansible_hostname }}.{{ item.instance_name }}"
- name: include_tasks start_radosgw.yml
include_tasks: start_radosgw.yml
- when: not containerized_deployment | bool
+ when:
+ - not rgw_multisite | bool
+ - not containerized_deployment | bool
- name: include start_docker_rgw.yml
include_tasks: start_docker_rgw.yml
- when: containerized_deployment | bool
+ when:
+ - not rgw_multisite | bool
+ - containerized_deployment | bool
- name: include_tasks multisite/main.yml
include_tasks: multisite/main.yml
---
- name: check if the realm already exists
- command: "{{ container_exec_cmd }} radosgw-admin realm get --rgw-realm={{ rgw_realm }}"
+ command: "{{ container_exec_cmd }} radosgw-admin realm get --cluster={{ cluster }} --rgw-realm={{ rgw_realm }}"
delegate_to: "{{ groups[mon_group_name][0] }}"
register: realmcheck
failed_when: False
check_mode: no
- name: check if the zonegroup already exists
- command: "{{ container_exec_cmd }} radosgw-admin zonegroup get --rgw-zonegroup={{ rgw_zonegroup }}"
+ command: "{{ container_exec_cmd }} radosgw-admin zonegroup get --cluster={{ cluster }} --rgw-realm={{ rgw_realm }} --rgw-zonegroup={{ rgw_zonegroup }}"
delegate_to: "{{ groups[mon_group_name][0] }}"
register: zonegroupcheck
failed_when: False
check_mode: no
- name: check if the zone already exists
- command: "{{ container_exec_cmd }} radosgw-admin zone get --rgw-zone={{ rgw_zone }}"
+ command: "{{ container_exec_cmd }} radosgw-admin zone get --rgw-realm={{ rgw_realm }} --cluster={{ cluster }} --rgw-zonegroup={{ rgw_zonegroup }} --rgw-zone={{ rgw_zone }}"
delegate_to: "{{ groups[mon_group_name][0] }}"
register: zonecheck
failed_when: False
changed_when: False
check_mode: no
-
-- name: check if the system user already exists
- command: "{{ container_exec_cmd }} radosgw-admin user info --uid={{ rgw_zone_user }}"
- delegate_to: "{{ groups[mon_group_name][0] }}"
- register: usercheck
- failed_when: False
- changed_when: False
- check_mode: no
--- /dev/null
+- name: create list realms
+ set_fact:
+ realms: "{{ realms | default([]) + [{ 'realm': hostvars[item]['rgw_realm'] }] }}"
+ with_items: "{{ groups.get(rgw_group_name, []) }}"
+ run_once: true
+ when:
+ - hostvars[item]['rgw_zonemaster'] | bool
+ - "'No such file or directory' in hostvars[item]['realmcheck'].stderr"
+
+- name: make all items in realms unique
+ set_fact:
+ realms: "{{ realms | unique }}"
+ run_once: true
+ when:
+ - realms is defined
+
+- name: create list secondary_realms
+ set_fact:
+ secondary_realms: "{{ secondary_realms | default([]) + [{ 'realm': hostvars[item]['rgw_realm'], 'is_master': hostvars[item]['rgw_zonemaster'], 'endpoint': hostvars[item]['rgw_pull_proto'] + '://' + hostvars[item]['rgw_pullhost'] + ':' + hostvars[item]['rgw_pull_port']|string, 'access_key': hostvars[item]['system_access_key'], 'secret_key': hostvars[item]['system_secret_key'] }] }}"
+ with_items: "{{ groups.get(rgw_group_name, []) }}"
+ run_once: true
+ when:
+ - not hostvars[item]['rgw_zonemaster'] | bool
+
+- name: make all items in secondary_realms unique
+ set_fact:
+    secondary_realms: "{{ secondary_realms | unique }}"
+ run_once: true
+ when:
+ - secondary_realms is defined
+
+- name: create list zonegroups
+ set_fact:
+ zonegroups: "{{ zonegroups | default([]) + [{ 'realm': hostvars[item]['rgw_realm'], 'zonegroup': hostvars[item]['rgw_zonegroup'], 'is_master': hostvars[item]['rgw_zonegroupmaster'] }] }}"
+ with_items: "{{ groups.get(rgw_group_name, []) }}"
+ run_once: true
+ when:
+ - hostvars[item]['rgw_zonemaster'] | bool
+ - "'No such file or directory' in hostvars[item]['zonegroupcheck'].stderr"
+
+- name: make all items in zonegroups unique
+ set_fact:
+ zonegroups: "{{ zonegroups | unique }}"
+ run_once: true
+ when:
+ - zonegroups is defined
+
+- name: create list zones
+ set_fact:
+ zones: "{{ zones | default([]) + [{ 'realm': hostvars[item]['rgw_realm'], 'zonegroup': hostvars[item]['rgw_zonegroup'], 'zone': hostvars[item]['rgw_zone'], 'is_master': hostvars[item]['rgw_zonemaster'], 'access_key': hostvars[item]['system_access_key'], 'secret_key': hostvars[item]['system_secret_key'] }] }}"
+ with_items: "{{ groups.get(rgw_group_name, []) }}"
+ run_once: true
+ when: "'No such file or directory' in hostvars[item]['zonecheck'].stderr"
+
+- name: make all items in zones unique
+ set_fact:
+ zones: "{{ zones | unique }}"
+ run_once: true
+ when:
+ - zones is defined
+
+- name: create a list of dicts with each rgw endpoint and its zone
+ set_fact:
+ zone_endpoint_pairs: "{{ zone_endpoint_pairs | default([]) + [{ 'endpoint': hostvars[item]['rgw_multisite_proto'] + '://' + hostvars[item]['_radosgw_address'] + ':' + radosgw_frontend_port|string, 'rgw_zone': hostvars[item]['rgw_zone'] }] }}"
+ with_items: "{{ groups.get(rgw_group_name, []) }}"
+ run_once: true
+
+- name: create string of all the endpoints in the same rgw_zone
+ set_fact:
+ zone_endpoints_string: "{{ zone_endpoints_string | default('') + item.endpoint + ',' }}"
+ with_items: "{{ zone_endpoint_pairs }}"
+ when: item.rgw_zone == rgw_zone
+
+- name: remove ',' after last endpoint in an endpoints string
+  set_fact:
+    zone_endpoints_string: "{{ zone_endpoints_string[:-1] }}"
+  when:
+    - zone_endpoints_string is defined
+    - zone_endpoints_string[-1] == ','
+
+- name: create a list of zones and all their endpoints
+ set_fact:
+ zone_endpoints_list: "{{ zone_endpoints_list | default([]) + [{ 'endpoints': hostvars[item]['zone_endpoints_string'], 'zone': hostvars[item]['rgw_zone'], 'zonegroup': hostvars[item]['rgw_zonegroup'], 'realm': hostvars[item]['rgw_realm'], 'is_master': hostvars[item]['rgw_zonemaster'] }] }}"
+ with_items: "{{ groups.get(rgw_group_name, []) }}"
+ run_once: true
+ when: hostvars[item]['zone_endpoints_string'] is defined
+
+- name: make all items in zone_endpoints_list unique
+ set_fact:
+ zone_endpoints_list: "{{ zone_endpoints_list | unique }}"
+ run_once: true
+ when:
+ - zone_endpoints_list is defined
--- /dev/null
+- name: check if the realm system user already exists
+ command: "{{ container_exec_cmd }} radosgw-admin user info --cluster={{ cluster }} --rgw-realm={{ rgw_realm }} --rgw-zonegroup={{ rgw_zonegroup }} --rgw-zone={{ rgw_zone }} --uid={{ rgw_zone_user }}"
+ delegate_to: "{{ groups[mon_group_name][0] }}"
+ register: usercheck
+ failed_when: False
+ changed_when: False
+ check_mode: no
+
+- name: create list zone_users
+ set_fact:
+ zone_users: "{{ zone_users | default([]) + [{ 'realm': hostvars[item]['rgw_realm'], 'zonegroup': hostvars[item]['rgw_zonegroup'], 'zone': hostvars[item]['rgw_zone'], 'access_key': hostvars[item]['system_access_key'], 'secret_key': hostvars[item]['system_secret_key'], 'user': hostvars[item]['rgw_zone_user'], 'display_name': hostvars[item]['rgw_zone_user_display_name'] }] }}"
+ with_items: "{{ groups.get(rgw_group_name, []) }}"
+ run_once: true
+ when:
+ - hostvars[item]['rgw_zonemaster'] | bool
+ - hostvars[item]['rgw_zonegroupmaster'] | bool
+ - "'could not fetch user info: no user info saved' in hostvars[item]['usercheck'].stderr"
+
+- name: make all items in zone_users unique
+ set_fact:
+ zone_users: "{{ zone_users | unique }}"
+ run_once: true
+ when:
+ - zone_users is defined
+
+- name: create the zone user(s)
+ command: "{{ container_exec_cmd }} radosgw-admin user create --cluster={{ cluster }} --rgw-realm={{ item.realm }} --rgw-zonegroup={{ item.zonegroup }} --rgw-zone={{ item.zone }} --uid={{ item.user }} --display-name='{{ item.display_name }}' --access-key={{ item.access_key }} --secret={{ item.secret_key }} --system"
+ delegate_to: "{{ groups[mon_group_name][0] }}"
+ run_once: true
+ with_items: "{{ zone_users }}"
+ when: zone_users is defined
+++ /dev/null
----
-- name: delete the zone user
- command: radosgw-admin user rm --uid=zone.user
- run_once: true
- failed_when: false
- register: rgw_delete_the_zone_user
- changed_when: rgw_delete_the_zone_user.rc == 0
-
-- name: remove zone from zonegroup
- command: radosgw-admin zonegroup remove --rgw-zonegroup={{ rgw_zonegroup }} --rgw-zone={{ rgw_zone }}
- run_once: true
- failed_when: false
- register: rgw_remove_zone_from_zonegroup
- changed_when: rgw_remove_zone_from_zonegroup.rc == 0
- notify: update period
-
-- name: delete the zone
- command: radosgw-admin zone delete --rgw-zonegroup={{ rgw_zonegroup }} --rgw-zone={{ rgw_zone }}
- run_once: true
- failed_when: false
- register: rgw_delete_the_zone
- changed_when: rgw_delete_the_zone.rc == 0
-
-- name: delete the zonegroup
- command: radosgw-admin zonegroup delete --rgw-zonegroup={{ rgw_zonegroup }}
- run_once: true
- failed_when: false
- register: rgw_delete_the_zonegroup
- changed_when: rgw_delete_the_zonegroup.rc == 0
-
-- name: delete the realm
- command: radosgw-admin realm delete --rgw-realm={{ rgw_realm }}
- run_once: true
- failed_when: false
- register: rgw_delete_the_realm
- changed_when: rgw_delete_the_realm.rc == 0
-
-- name: delete zone from rgw stanza in ceph.conf
- lineinfile:
- dest: "/etc/ceph/{{ cluster }}.conf"
- regexp: "rgw_zone = {{ rgw_zonegroup }}-{{ rgw_zone }}"
- state: absent
- when:
- - rgw_zone is defined
- - rgw_zonegroup is defined
- notify: restart rgw
- name: include multisite checks
include_tasks: checks.yml
+- name: include_tasks create_realm_zonegroup_zone_lists.yml
+ include_tasks: create_realm_zonegroup_zone_lists.yml
+
# Include the tasks depending on the zone type
- name: include_tasks master.yml
include_tasks: master.yml
- rgw_zonemaster | bool
- not rgw_zonesecondary | bool
+- name: include_tasks start_radosgw.yml for zonemaster rgws
+ include_tasks: ../start_radosgw.yml
+ when:
+ - rgw_zonemaster | bool
+ - not rgw_zonesecondary | bool
+ - not containerized_deployment | bool
+
+- name: include_tasks start_docker_rgw.yml for zonemaster rgws
+ include_tasks: ../start_docker_rgw.yml
+ when:
+ - rgw_zonemaster | bool
+ - not rgw_zonesecondary | bool
+ - containerized_deployment | bool
+
- name: include_tasks secondary.yml
include_tasks: secondary.yml
when:
- not rgw_zonemaster | bool
- rgw_zonesecondary | bool
-# Continue with common tasks
-- name: add zone to rgw stanza in ceph.conf
- ini_file:
- dest: "/etc/ceph/{{ cluster }}.conf"
- section: "client.rgw.{{ ansible_hostname }}.{{ item.instance_name }}"
- option: "rgw_zone"
- value: "{{ rgw_zone }}"
- with_items: "{{ rgw_instances }}"
+- name: include_tasks start_radosgw.yml for zonesecondary rgws
+ include_tasks: ../start_radosgw.yml
when:
- - rgw_instances is defined
- notify: restart rgw
+ - not rgw_zonemaster | bool
+ - rgw_zonesecondary | bool
+ - not containerized_deployment | bool
+
+- name: include_tasks start_docker_rgw.yml for zonesecondary rgws
+ include_tasks: ../start_docker_rgw.yml
+ when:
+ - not rgw_zonemaster | bool
+ - rgw_zonesecondary | bool
+ - containerized_deployment | bool
---
-- name: create the realm
- command: "{{ container_exec_cmd }} radosgw-admin realm create --rgw-realm={{ rgw_realm }} --default"
+- name: create default realm
+ command: "{{ container_exec_cmd }} radosgw-admin realm create --cluster={{ cluster }} --rgw-realm={{ item.realm }} --default"
delegate_to: "{{ groups[mon_group_name][0] }}"
run_once: true
- when: "'No such file or directory' in realmcheck.stderr"
+ with_items: "{{ realms }}"
+ when:
+ - realms is defined
+ - realms | length == 1
-- name: create the zonegroup
- command: "{{ container_exec_cmd }} radosgw-admin zonegroup create --rgw-zonegroup={{ rgw_zonegroup }} --endpoints={{ rgw_multisite_proto }}://{{ rgw_multisite_endpoint_addr }}:{{ radosgw_frontend_port }} --master --default"
+- name: create the realm(s)
+ command: "{{ container_exec_cmd }} radosgw-admin realm create --cluster={{ cluster }} --rgw-realm={{ item.realm }}"
delegate_to: "{{ groups[mon_group_name][0] }}"
run_once: true
- when: "'No such file or directory' in zonegroupcheck.stderr"
+ with_items: "{{ realms }}"
+ when:
+ - realms is defined
+ - realms | length > 1
-- name: create the zone
- command: "{{ container_exec_cmd }} radosgw-admin zone create --rgw-zonegroup={{ rgw_zonegroup }} --rgw-zone={{ rgw_zone }} --endpoints={{ rgw_multisite_proto }}://{{ rgw_multisite_endpoint_addr }}:{{ radosgw_frontend_port }} --access-key={{ system_access_key }} --secret={{ system_secret_key }} --default --master"
+- name: create default master zonegroup(s)
+ command: "{{ container_exec_cmd }} radosgw-admin zonegroup create --cluster={{ cluster }} --rgw-realm={{ item.realm }} --rgw-zonegroup={{ item.zonegroup }} --default --master"
delegate_to: "{{ groups[mon_group_name][0] }}"
run_once: true
- when: "'No such file or directory' in zonecheck.stderr"
+ with_items: "{{ zonegroups }}"
+ when:
+ - zonegroups is defined
+ - zonegroups | length == 1
+ - item.is_master | bool
-- name: create the zone user
- command: "{{ container_exec_cmd }} radosgw-admin user create --uid={{ rgw_zone_user }} --display-name=\"Zone User\" --access-key={{ system_access_key }} --secret={{ system_secret_key }} --system"
+- name: create default zonegroup(s)
+ command: "{{ container_exec_cmd }} radosgw-admin zonegroup create --cluster={{ cluster }} --rgw-realm={{ item.realm }} --rgw-zonegroup={{ item.zonegroup }} --default"
delegate_to: "{{ groups[mon_group_name][0] }}"
run_once: true
- when: "'could not fetch user info: no user info saved' in usercheck.stderr"
- notify: update period
+ with_items: "{{ zonegroups }}"
+ when:
+ - zonegroups is defined
+ - zonegroups | length == 1
+ - not item.is_master | bool
-- name: add other endpoints to the zone
- command: "{{ container_exec_cmd }} radosgw-admin zone modify --rgw-zone={{ rgw_zone }} --endpoints {{ rgw_multisite_endpoints_list }}"
+- name: create master zonegroup(s)
+ command: "{{ container_exec_cmd }} radosgw-admin zonegroup create --cluster={{ cluster }} --rgw-realm={{ item.realm }} --rgw-zonegroup={{ item.zonegroup }} --master"
delegate_to: "{{ groups[mon_group_name][0] }}"
run_once: true
- when: rgw_multisite_endpoints_list is defined
- notify: update period
+ with_items: "{{ zonegroups }}"
+ when:
+ - zonegroups is defined
+ - zonegroups | length > 1
+ - item.is_master | bool
+
+- name: create non-master non-default zonegroup(s)
+ command: "{{ container_exec_cmd }} radosgw-admin zonegroup create --cluster={{ cluster }} --rgw-realm={{ item.realm }} --rgw-zonegroup={{ item.zonegroup }}"
+ delegate_to: "{{ groups[mon_group_name][0] }}"
+ run_once: true
+ with_items: "{{ zonegroups }}"
+ when:
+ - zonegroups is defined
+ - zonegroups | length > 1
+ - not item.is_master | bool
+
+- name: create the default master zone
+ command: "{{ container_exec_cmd }} radosgw-admin zone create --cluster={{ cluster }} --rgw-realm={{ item.realm }} --rgw-zonegroup={{ item.zonegroup }} --rgw-zone={{ item.zone }} --access-key={{ item.access_key }} --secret={{ item.secret_key }} --master --default"
+ delegate_to: "{{ groups[mon_group_name][0] }}"
+ run_once: true
+ with_items: "{{ zones }}"
+ when:
+ - zones is defined
+ - zones | length == 1
+ - item.is_master | bool
+
+- name: create the master zone(s)
+ command: "{{ container_exec_cmd }} radosgw-admin zone create --cluster={{ cluster }} --rgw-realm={{ item.realm }} --rgw-zonegroup={{ item.zonegroup }} --rgw-zone={{ item.zone }} --access-key={{ item.access_key }} --secret={{ item.secret_key }} --master"
+ delegate_to: "{{ groups[mon_group_name][0] }}"
+ run_once: true
+ with_items: "{{ zones }}"
+ when:
+ - zones is defined
+ - zones | length > 1
+ - item.is_master | bool
+
+- name: add endpoints to their zone(s)
+ command: "{{ container_exec_cmd }} radosgw-admin zone modify --cluster={{ cluster }} --rgw-realm={{ item.realm }} --rgw-zonegroup={{ item.zonegroup }} --rgw-zone={{ item.zone }} --endpoints {{ item.endpoints }}"
+ with_items: "{{ zone_endpoints_list }}"
+ delegate_to: "{{ groups[mon_group_name][0] }}"
+ run_once: true
+ when:
+ - zone_endpoints_list is defined
+ - item.is_master | bool
+
+- name: update period for zone creation
+ command: "{{ container_exec_cmd }} radosgw-admin --cluster={{ cluster }} --rgw-realm={{ item.realm }} --rgw-zonegroup={{ item.zonegroup }} --rgw-zone={{ item.zone }} period update --commit"
+ delegate_to: "{{ groups[mon_group_name][0] }}"
+ run_once: true
+ with_items: "{{ zone_endpoints_list }}"
+ when:
+ - zone_endpoints_list is defined
+ - item.is_master | bool
+
+- name: include_tasks create_zone_user.yml
+ include_tasks: create_zone_user.yml
---
-- name: fetch the realm
- command: "{{ container_exec_cmd }} radosgw-admin realm pull --url={{ rgw_pull_proto }}://{{ rgw_pullhost }}:{{ rgw_pull_port }} --access-key={{ system_access_key }} --secret={{ system_secret_key }}"
+- name: fetch the realm(s)
+ command: "{{ container_exec_cmd }} radosgw-admin realm pull --cluster={{ cluster }} --rgw-realm={{ item.realm }} --url={{ item.endpoint }} --access-key={{ item.access_key }} --secret={{ item.secret_key }}"
delegate_to: "{{ groups[mon_group_name][0] }}"
run_once: true
- when: "'No such file or directory' in realmcheck.stderr"
+ with_items: "{{ secondary_realms }}"
+ when: secondary_realms is defined
-- name: fetch the period
- command: "{{ container_exec_cmd }} radosgw-admin period pull --url={{ rgw_pull_proto }}://{{ rgw_pullhost }}:{{ rgw_pull_port }} --access-key={{ system_access_key }} --secret={{ system_secret_key }}"
+- name: get the period(s)
+ command: "{{ container_exec_cmd }} radosgw-admin period get --cluster={{ cluster }} --rgw-realm={{ item.realm }}"
delegate_to: "{{ groups[mon_group_name][0] }}"
run_once: true
- when: "'No such file or directory' in realmcheck.stderr"
+ with_items: "{{ secondary_realms }}"
+ when: secondary_realms is defined
-- name: set default realm
- command: "{{ container_exec_cmd }} radosgw-admin realm default --rgw-realm={{ rgw_realm }}"
- changed_when: false
+- name: create the default zone
+ command: "{{ container_exec_cmd }} radosgw-admin zone create --cluster={{ cluster }} --rgw-realm={{ item.realm }} --rgw-zonegroup={{ item.zonegroup }} --rgw-zone={{ item.zone }} --access-key={{ item.access_key }} --secret={{ item.secret_key }} --default"
delegate_to: "{{ groups[mon_group_name][0] }}"
run_once: true
+ with_items: "{{ zones }}"
+ when:
+ - zones is defined
+ - zones | length == 1
+ - not item.is_master | bool
-- name: set default zonegroup
- command: "{{ container_exec_cmd }} radosgw-admin zonegroup default --rgw-zonegroup={{ rgw_zonegroup }}"
- changed_when: false
+- name: create the non-master non-default zone(s)
+ command: "{{ container_exec_cmd }} radosgw-admin zone create --cluster={{ cluster }} --rgw-realm={{ item.realm }} --rgw-zonegroup={{ item.zonegroup }} --rgw-zone={{ item.zone }} --access-key={{ item.access_key }} --secret={{ item.secret_key }}"
delegate_to: "{{ groups[mon_group_name][0] }}"
run_once: true
+ with_items: "{{ zones }}"
+ when:
+ - zones is defined
+ - zones | length > 1
+ - not item.is_master | bool
-- name: create the zone
- command: "{{ container_exec_cmd }} radosgw-admin zone create --rgw-zonegroup={{ rgw_zonegroup }} --rgw-zone={{ rgw_zone }} --endpoints={{ rgw_multisite_proto }}://{{ rgw_multisite_endpoint_addr }}:{{ radosgw_frontend_port }} --access-key={{ system_access_key }} --secret={{ system_secret_key }} --default"
+- name: add endpoints to their zone(s)
+ command: "{{ container_exec_cmd }} radosgw-admin zone modify --cluster={{ cluster }} --rgw-realm={{ item.realm }} --rgw-zonegroup={{ item.zonegroup }} --rgw-zone={{ item.zone }} --endpoints {{ item.endpoints }}"
+ with_items: "{{ zone_endpoints_list }}"
delegate_to: "{{ groups[mon_group_name][0] }}"
run_once: true
- when: "'No such file or directory' in zonecheck.stderr"
- notify: update period
+ when:
+ - zone_endpoints_list is defined
+ - not item.is_master | bool
-- name: add other endpoints to the zone
- command: "{{ container_exec_cmd }} radosgw-admin zone modify --rgw-zone={{ rgw_zone }} --endpoints {{ rgw_multisite_endpoints_list }}"
+- name: update period for zone creation
+ command: "{{ container_exec_cmd }} radosgw-admin --cluster={{ cluster }} --rgw-realm={{ item.realm }} --rgw-zonegroup={{ item.zonegroup }} --rgw-zone={{ item.zone }} period update --commit"
delegate_to: "{{ groups[mon_group_name][0] }}"
run_once: true
- when: rgw_multisite_endpoints_list is defined
- notify: update period
+ with_items: "{{ zone_endpoints_list }}"
+ when:
+ - zone_endpoints_list is defined
+ - not item.is_master | bool
fail:
msg: "rgw_zonemaster and rgw_zonesecondary cannot both be true"
when:
- - rgw_zonemaster
- - rgw_zonesecondary
+ - rgw_zonemaster | bool
+ - rgw_zonesecondary | bool
- name: fail if rgw_zonegroup is not set
fail:
msg: "rgw_zone_user has not been set by the user"
when: rgw_zone_user is undefined
+- name: fail if rgw_zone_user_display_name is not set
+ fail:
+ msg: "rgw_zone_user_display_name has not been set by the user"
+ when: rgw_zone_user_display_name is undefined
+
- name: fail if rgw_realm is not set
fail:
msg: "rgw_realm has not been set by the user"
fail:
msg: "rgw_pull_port has not been set by the user"
when:
- - rgw_zonesecondary
+ - rgw_zonesecondary | bool
- rgw_pull_port is undefined
- name: fail if rgw_pull_proto is not set
fail:
msg: "rgw_pull_proto has not been set by the user"
when:
- - rgw_zonesecondary
+ - rgw_zonesecondary | bool
- rgw_pull_proto is undefined
- name: fail if rgw_pullhost is not set
fail:
msg: "rgw_pullhost has not been set by the user"
when:
- - rgw_zonesecondary
+ - rgw_zonesecondary | bool
- rgw_pullhost is undefined
+
+- name: fail if radosgw_num_instances is not 1
+ fail:
+ msg: "radosgw_num_instances cannot be more than 1"
+ when: radosgw_num_instances|int > 1
rgw_zone: jupiter
rgw_zonemaster: true
rgw_zonesecondary: false
-rgw_multisite_proto: http
rgw_zonegroup: solarsystem
+rgw_zonegroupmaster: true
rgw_zone_user: zone.user
+rgw_zone_user_display_name: "Zone User"
+rgw_multisite_proto: http
rgw_realm: milkyway
system_access_key: 6kWkikvapSnHyE22P7nO
system_secret_key: MGecsMrWtKZgngOHZdrd6d3JxGO5CPWgT2lcnpSt
osd0
[rgws]
-osd0 copy_admin_key=True rgw_multisite=True rgw_zone=mars rgw_zonemaster=False rgw_zonesecondary=True rgw_zonegroup=solarsystem rgw_zone_user=zone.user rgw_realm=milkyway rgw_multisite_proto=http rgw_multisite_endpoint_addr=192.168.107.11 system_access_key=6kWkikvapSnHyE22P7nO system_secret_key=MGecsMrWtKZgngOHZdrd6d3JxGO5CPWgT2lcnpSt rgw_pull_proto=http rgw_pull_port=8080 rgw_pullhost=192.168.105.11
+osd0 copy_admin_key=True rgw_multisite=True rgw_zone=mars rgw_zonemaster=False rgw_zonesecondary=True rgw_zonegroup=solarsystem rgw_zonegroupmaster=True rgw_zone_user=zone.user rgw_zone_user_display_name="Zone User" rgw_multisite_proto=http rgw_realm=milkyway system_access_key=6kWkikvapSnHyE22P7nO system_secret_key=MGecsMrWtKZgngOHZdrd6d3JxGO5CPWgT2lcnpSt rgw_pull_proto=http rgw_pull_port=8080 rgw_pullhost=192.168.105.11 rgw_multisite_endpoint_addr=192.168.107.11
rgw_zone: jupiter
rgw_zonemaster: true
rgw_zonesecondary: false
-rgw_multisite_proto: http
rgw_zonegroup: solarsystem
+rgw_zonegroupmaster: true
rgw_zone_user: zone.user
+rgw_zone_user_display_name: "Zone User"
+rgw_multisite_proto: http
rgw_realm: milkyway
system_access_key: 6kWkikvapSnHyE22P7nO
system_secret_key: MGecsMrWtKZgngOHZdrd6d3JxGO5CPWgT2lcnpSt
osd0
[rgws]
-osd0 rgw_multisite=True rgw_zone=mars rgw_zonemaster=False rgw_zonesecondary=True rgw_zonegroup=solarsystem rgw_zone_user=zone.user rgw_realm=milkyway rgw_multisite_proto=http rgw_multisite_endpoint_addr=192.168.103.11 system_access_key=6kWkikvapSnHyE22P7nO system_secret_key=MGecsMrWtKZgngOHZdrd6d3JxGO5CPWgT2lcnpSt rgw_pull_proto=http rgw_pull_port=8080 rgw_pullhost=192.168.101.11
+osd0 rgw_multisite=True rgw_zone=mars rgw_zonemaster=False rgw_zonesecondary=True rgw_zonegroup=solarsystem rgw_zonegroupmaster=True rgw_zone_user=zone.user rgw_zone_user_display_name="Zone User" rgw_realm=milkyway rgw_multisite_proto=http system_access_key=6kWkikvapSnHyE22P7nO system_secret_key=MGecsMrWtKZgngOHZdrd6d3JxGO5CPWgT2lcnpSt rgw_pull_proto=http rgw_pull_port=8080 rgw_pullhost=192.168.101.11 rgw_multisite_endpoint_addr=192.168.103.11