each. This cluster will require significantly more resources and significantly
more time for peering.
-For determining the optimal number of PGs per OSD, we recommend the `PGCalc`_
-tool.
-
.. _setting the number of placement groups:
.. _Create a Pool: ../pools#createpool
.. _Mapping PGs to OSDs: ../../../architecture#mapping-pgs-to-osds
-.. _pgcalc: https://old.ceph.com/pgcalc/
<../erasure-code>`_, resilience is defined as the number of coding chunks
(for example, ``m = 2`` in the default **erasure code profile**).
-- **Placement Groups**: You can set the number of placement groups (PGs) for
- the pool. In a typical configuration, the target number of PGs is
- approximately one hundred PGs per OSD. This provides reasonable balancing
- without consuming excessive computing resources. When setting up multiple
- pools, be careful to set an appropriate number of PGs for each pool and for
- the cluster as a whole. Each PG belongs to a specific pool: when multiple
- pools use the same OSDs, make sure that the **sum** of PG replicas per OSD is
- in the desired PG-per-OSD target range. To calculate an appropriate number of
- PGs for your pools, use the `pgcalc`_ tool.
+- **Placement Groups**: The :ref:`autoscaler <pg-autoscaler>` sets the number
+ of placement groups (PGs) for the pool. In a typical configuration, the
+ target number of PGs is approximately one hundred and fifty PGs per OSD. This
+ provides reasonable balancing without consuming excessive computing
+ resources. When setting up multiple pools, set an appropriate number of PGs
+ for each pool and for the cluster as a whole. Each PG belongs to a specific
+ pool: when multiple pools use the same OSDs, make sure that the **sum** of PG
+ replicas per OSD is in the desired PG-per-OSD target range.
- **CRUSH Rules**: When data is stored in a pool, the placement of the object
and its replicas (or chunks, in the case of erasure-coded pools) in your
===============================================
See :ref:`managing_bulk_flagged_pools`.
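The "sum of PG replicas per OSD" check above is simple arithmetic: each PG in a replicated pool places ``size`` replicas (or ``k + m`` chunks for erasure-coded pools), so the average load on an OSD is the sum of ``pg_num * size`` across all pools sharing those OSDs, divided by the OSD count. A minimal sketch (the function name and the example pool figures are illustrative, not from the Ceph source):

```python
def pg_replicas_per_osd(pools, num_osds):
    """Average number of PG replicas landing on each OSD.

    pools: iterable of (pg_num, size) pairs, where size is the
    replica count (or k + m for an erasure-coded pool); every PG
    places `size` copies/chunks across the shared OSDs.
    """
    total = sum(pg_num * size for pg_num, size in pools)
    return total / num_osds

# Example: three pools sharing 10 OSDs.
pools = [(256, 3), (128, 3), (64, 2)]  # (pg_num, replica/chunk count)
print(pg_replicas_per_osd(pools, 10))  # -> 128.0
```

If the result falls well outside the PG-per-OSD target range, the per-pool ``pg_num`` values (or the autoscaler's targets) should be revisited.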
-
-.. _pgcalc: https://old.ceph.com/pgcalc/
.. _Pool, PG and CRUSH Config Reference: ../../configuration/pool-pg-config-ref
.. _Bloom Filter: https://en.wikipedia.org/wiki/Bloom_filter
.. _setting the number of placement groups: ../placement-groups#set-the-number-of-placement-groups
Tuning
======
-When ``radosgw`` first tries to operate on a zone pool that does not
-exist, it will create that pool with the default values from
-``osd pool default pg num`` and ``osd pool default pgp num``. These defaults
-are sufficient for some pools, but others (especially those listed in
-``placement_pools`` for the bucket index and data) will require additional
-tuning. We recommend using the `Ceph Placement Group’s per Pool
-Calculator <https://old.ceph.com/pgcalc/>`__ to calculate a suitable number of
-placement groups for these pools. See
-`Pools <http://docs.ceph.com/en/latest/rados/operations/pools/#pools>`__
-for details on pool creation.
+When ``radosgw`` first tries to operate on a zone pool that does not exist, it
+will create that pool with the default values from ``osd pool default pg num``
+and ``osd pool default pgp num``. These defaults are sufficient for some pools,
+but others (especially those listed in ``placement_pools`` for the bucket index
+and data) will require additional tuning. See `Pools
+<http://docs.ceph.com/en/latest/rados/operations/pools/#pools>`__ for details
+on pool creation.
.. _radosgw-pool-namespaces: