From 3a6471a6e6c7f67f5a3e42d322ed462d998189b7 Mon Sep 17 00:00:00 2001
From: Casey Bodley
Date: Thu, 18 May 2017 14:45:03 -0400
Subject: [PATCH] doc/rgw: update pool names, document namespaces

Fixes: http://tracker.ceph.com/issues/19504

Signed-off-by: Casey Bodley
---
 doc/radosgw/index.rst     |  1 +
 doc/radosgw/multisite.rst | 57 ++++-----------------------------------
 doc/radosgw/pools.rst     | 55 +++++++++++++++++++++++++++++++++++++
 3 files changed, 61 insertions(+), 52 deletions(-)
 create mode 100644 doc/radosgw/pools.rst

diff --git a/doc/radosgw/index.rst b/doc/radosgw/index.rst
index 499da6a8b2f21..d64e6dd6b3004 100644
--- a/doc/radosgw/index.rst
+++ b/doc/radosgw/index.rst
@@ -38,6 +38,7 @@ you may write data with one API and retrieve it with the other.
    Manual Install w/Civetweb <../../install/install-ceph-gateway>
    Multisite Configuration
+   Configuring Pools
    Config Reference
    Admin Guide
    S3 API
diff --git a/doc/radosgw/multisite.rst b/doc/radosgw/multisite.rst
index c030ba0d8c81d..0c2c44258b16f 100644
--- a/doc/radosgw/multisite.rst
+++ b/doc/radosgw/multisite.rst
@@ -74,58 +74,8 @@ In this guide, the ``rgw1`` host will serve as the master zone of the
 master zone group; and, the ``rgw2`` host will serve as the secondary zone
 of the master zone group.
 
-Pools
-=====
-
-We recommend using the `Ceph Placement Group’s per Pool
-Calculator `__ to calculate a
-suitable number of placement groups for the pools the ``ceph-radosgw``
-daemon will create. Set the calculated values as defaults in your Ceph
-configuration file. For example:
-
-::
-
-    osd pool default pg num = 50
-    osd pool default pgp num = 50
-
-.. note:: Make this change to the Ceph configuration file on your
-   storage cluster; then, either make a runtime change to the
-   configuration so that it will use those defaults when the gateway
-   instance creates the pools.
-
-Alternatively, create the pools manually. See
-`Pools `__
-for details on creating pools.
-
-Pool names particular to a zone follow the naming convention
-``{zone-name}.pool-name``. For example, a zone named ``us-east`` will
-have the following pools:
-
-- ``.rgw.root``
-
-- ``us-east.rgw.control``
-
-- ``us-east.rgw.data.root``
-
-- ``us-east.rgw.gc``
-
-- ``us-east.rgw.log``
-
-- ``us-east.rgw.intent-log``
-
-- ``us-east.rgw.usage``
-
-- ``us-east.rgw.users.keys``
-
-- ``us-east.rgw.users.email``
-
-- ``us-east.rgw.users.swift``
-
-- ``us-east.rgw.users.uid``
-
-- ``us-east.rgw.buckets.index``
-
-- ``us-east.rgw.buckets.data``
+See `Pools`_ for instructions on creating and tuning pools for Ceph
+Object Storage.
 
 
 Configuring a Master Zone
@@ -1504,3 +1454,6 @@ instance.
 |                                     | keeping inter-zone group          |         |                       |
 |                                     | synchronization progress.         |         |                       |
 +-------------------------------------+-----------------------------------+---------+-----------------------+
+
+
+.. _`Pools`: ../pools
diff --git a/doc/radosgw/pools.rst b/doc/radosgw/pools.rst
new file mode 100644
index 0000000000000..2d88a3c103b30
--- /dev/null
+++ b/doc/radosgw/pools.rst
@@ -0,0 +1,55 @@
+=====
+Pools
+=====
+
+The Ceph Object Gateway uses several pools for its various storage needs,
+which are listed in the Zone object (see ``radosgw-admin zone get``). A
+single zone named ``default`` is created automatically with pool names
+starting with ``default.rgw.``, but a `Multisite Configuration`_ will have
+multiple zones.
+
+Tuning
+======
+
+When ``radosgw`` first tries to operate on a zone pool that does not
+exist, it will create that pool with the default values from
+``osd pool default pg num`` and ``osd pool default pgp num``. These defaults
+are sufficient for some pools, but others (especially those listed in
+``placement_pools`` for the bucket index and data) will require additional
+tuning. We recommend using the `Ceph Placement Group’s per Pool
+Calculator `__ to calculate a suitable number of
+placement groups for these pools. See
+`Pools `__
+for details on pool creation.
+
+Pool Namespaces
+===============
+
+.. versionadded:: Luminous
+
+Pool names particular to a zone follow the naming convention
+``{zone-name}.pool-name``. For example, a zone named ``us-east`` will
+have the following pools:
+
+- ``.rgw.root``
+
+- ``us-east.rgw.control``
+
+- ``us-east.rgw.meta``
+
+- ``us-east.rgw.log``
+
+- ``us-east.rgw.buckets.index``
+
+- ``us-east.rgw.buckets.data``
+
+The zone definitions list several more pools than that, but many of those
+are consolidated through the use of rados namespaces. For example, all of
+the following pool entries use namespaces of the ``us-east.rgw.meta`` pool::
+
+    "user_keys_pool": "us-east.rgw.meta:users.keys",
+    "user_email_pool": "us-east.rgw.meta:users.email",
+    "user_swift_pool": "us-east.rgw.meta:users.swift",
+    "user_uid_pool": "us-east.rgw.meta:users.uid",
+
+.. _`Multisite Configuration`: ../multisite
-- 
2.39.5
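The PG sizing advice in the patch defers to the per-pool PG calculator. As a rough sketch (not part of the patch, and the real calculator's logic differs in details), such calculators commonly apply a rule of thumb of about 100 PGs per OSD, scaled by the pool's share of the data, divided by the replica count, and rounded up to a power of two:

```python
def suggested_pg_count(num_osds, data_fraction, replica_size,
                       target_pgs_per_osd=100):
    """Rough PG sizing rule of thumb: (OSDs * target PGs per OSD *
    pool's share of data) / replica count, rounded up to a power of two.
    This is an approximation, not the exact pgcalc algorithm."""
    raw = num_osds * target_pgs_per_osd * data_fraction / replica_size
    # Round up to the nearest power of two, with a floor of 1.
    pgs = 1
    while pgs < raw:
        pgs *= 2
    return pgs

# e.g. 10 OSDs, a bucket data pool holding ~35% of the data, 3x replication
print(suggested_pg_count(10, 0.35, 3))  # -> 128
```

A small bucket index pool on the same cluster would land on far fewer PGs, which is why the patch singles out the ``placement_pools`` entries as the ones needing tuning beyond the defaults.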
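The new ``pools.rst`` says the zone's pools are listed in the Zone object returned by ``radosgw-admin zone get``. A hypothetical sketch of collecting the distinct pool names from that JSON output: the sample values follow the patch, but the exact set of fields and the assumption that every pool lives in a ``*_pool`` key are illustrative, not guaranteed by rgw:

```python
import json

# Abbreviated, hypothetical excerpt of `radosgw-admin zone get` output.
zone_json = """{
  "name": "us-east",
  "control_pool": "us-east.rgw.control",
  "gc_pool": "us-east.rgw.gc",
  "log_pool": "us-east.rgw.log",
  "user_keys_pool": "us-east.rgw.meta:users.keys",
  "user_email_pool": "us-east.rgw.meta:users.email"
}"""

def zone_pools(zone):
    """Collect the distinct rados pools named in *_pool fields,
    keeping only the pool part of any "pool:namespace" entry."""
    names = set()
    for key, value in zone.items():
        if key.endswith("_pool") and isinstance(value, str):
            names.add(value.split(":", 1)[0])
    return sorted(names)

print(zone_pools(json.loads(zone_json)))
```

Note how the four ``users.*`` entries collapse into the single ``us-east.rgw.meta`` pool, which is the consolidation the Pool Namespaces section describes.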
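The ``pool:namespace`` entries shown in the Pool Namespaces section split mechanically on the first colon. A minimal sketch (the helper name is illustrative, not an rgw API):

```python
def split_pool_entry(entry):
    """Split a zone pool entry of the form "pool" or "pool:namespace"
    into a (pool, namespace) pair; namespace is "" when absent."""
    pool, _sep, namespace = entry.partition(":")
    return pool, namespace

print(split_pool_entry("us-east.rgw.meta:users.keys"))
# -> ('us-east.rgw.meta', 'users.keys')
print(split_pool_entry("us-east.rgw.log"))
# -> ('us-east.rgw.log', '')
```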