From: Orit Wasserman
Date: Sun, 1 Oct 2017 05:40:27 +0000 (+0300)
Subject: doc: replace region with zonegroup in configure bucket sharding section
X-Git-Tag: v13.0.1~701^2
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=refs%2Fpull%2F18063%2Fhead;p=ceph.git

doc: replace region with zonegroup in configure bucket sharding section

Fixes: http://tracker.ceph.com/issues/21610
Signed-off-by: Orit Wasserman
---

diff --git a/doc/install/install-ceph-gateway.rst b/doc/install/install-ceph-gateway.rst
index 09fdf921e373..2bc0d0691060 100644
--- a/doc/install/install-ceph-gateway.rst
+++ b/doc/install/install-ceph-gateway.rst
@@ -273,23 +273,22 @@ On Ubuntu execute::
     sudo service radosgw restart id=rgw.
 
 For federated configurations, each zone may have a different ``index_pool``
-setting for failover. To make the value consistent for a region's zones, you
-may set ``rgw_override_bucket_index_max_shards`` in a gateway's region
+setting for failover. To make the value consistent for a zonegroup's zones, you
+may set ``rgw_override_bucket_index_max_shards`` in a gateway's zonegroup
 configuration. For example::
 
-    radosgw-admin region get > region.json
+    radosgw-admin zonegroup get > zonegroup.json
 
-Open the ``region.json`` file and edit the ``bucket_index_max_shards`` setting
-for each named zone. Save the ``region.json`` file and reset the region. For
-example::
-
-    radosgw-admin region set < region.json
+Open the ``zonegroup.json`` file and edit the ``bucket_index_max_shards`` setting
+for each named zone. Save the ``zonegroup.json`` file and reset the zonegroup.
+For example::
 
-Once you have updated your region, update the region map. For example::
+    radosgw-admin zonegroup set < zonegroup.json
 
-    radosgw-admin regionmap update --name client.rgw.ceph-client
+Once you have updated your zonegroup, update and commit the period.
+For example::
 
-Where ``client.rgw.ceph-client`` is the name of the gateway user.
+    radosgw-admin period update --commit
 
 .. note:: Mapping the index pool (for each zone, if applicable) to a CRUSH
    ruleset of SSD-based OSDs may also help with bucket index performance.
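For reviewers, the procedure the patched section describes can be sketched end to end as one shell sequence. This is a sketch only: it assumes a running multisite-configured Ceph cluster where ``radosgw-admin`` is available, and the ``jq`` edit of ``bucket_index_max_shards`` (field path and shard count ``16``) is an illustrative assumption, not part of the patch — the doc leaves the edit to any editor.

```shell
# Export the current zonegroup configuration to a JSON file:
radosgw-admin zonegroup get > zonegroup.json

# Edit bucket_index_max_shards for each named zone. Shown here with jq;
# the field path and the value 16 are illustrative assumptions:
jq '.zones[].bucket_index_max_shards = 16' zonegroup.json > zonegroup-new.json

# Apply the updated zonegroup, then commit the period so the change
# takes effect across the zonegroup (replaces the old "regionmap update"):
radosgw-admin zonegroup set < zonegroup-new.json
radosgw-admin period update --commit
```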