From: Kamoltat
Date: Fri, 6 Aug 2021 04:23:29 +0000 (+0000)
Subject: doc/rados/operations/placement-groups: added bias + profile
X-Git-Tag: v17.1.0~1158^2
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=refs%2Fpull%2F42568%2Fhead;p=ceph.git

doc/rados/operations/placement-groups: added bias + profile

Added documentation on the autoscale profile and bias

Signed-off-by: Kamoltat
---

diff --git a/doc/rados/operations/placement-groups.rst b/doc/rados/operations/placement-groups.rst
index ec2b111d3c7b9..d3f1c5b7430dc 100644
--- a/doc/rados/operations/placement-groups.rst
+++ b/doc/rados/operations/placement-groups.rst
@@ -43,10 +43,10 @@ the PG count with this command::

 Output will be something like::

-  POOL    SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  EFFECTIVE RATIO  PG_NUM  NEW PG_NUM  AUTOSCALE
-  a     12900M                3.0        82431M  0.4695                                      8         128  warn
-  c         0                 3.0        82431M  0.0000        0.2000           0.9884       1          64  warn
-  b         0        953.6M   3.0        82431M  0.0347                                      8              warn
+  POOL    SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  PROFILE
+  a     12900M                3.0        82431M  0.4695                                            8         128  warn       scale-up
+  c         0                 3.0        82431M  0.0000        0.2000           0.9884   1.0       1          64  warn       scale-down
+  b         0        953.6M   3.0        82431M  0.0347                                            8              warn       scale-down

 **SIZE** is the amount of data stored in the pool.  **TARGET SIZE**, if
 present, is the amount of data the administrator has specified that
@@ -79,6 +79,10 @@ ratio takes precedence.
 The system uses the larger of the actual ratio and the effective ratio
 for its calculation.

+**BIAS** is used as a multiplier to manually adjust a pool's PG count
+based on prior information about how many PGs a specific pool is
+expected to have.
+
 **PG_NUM** is the current number of PGs for the pool (or the current
 number of PGs that the pool is working towards, if a ``pg_num``
 change is in progress).  **NEW PG_NUM**, if present, is what the
@@ -86,9 +90,13 @@ system believes the pool's ``pg_num`` should be changed to.  It is
 always a power of 2, and will only be present if the "ideal" value
 varies from the current value by more than a factor of 3.

-The final column, **AUTOSCALE**, is the pool ``pg_autoscale_mode``,
+**AUTOSCALE** is the pool ``pg_autoscale_mode``
 and will be either ``on``, ``off``, or ``warn``.

+The final column, **PROFILE**, shows the autoscale profile
+used by each pool. ``scale-up`` and ``scale-down`` are the
+currently available profiles.
+
 Automated scaling
 -----------------

@@ -115,6 +123,28 @@ example, a pool that maps to OSDs of class `ssd` and a pool that maps
 to OSDs of class `hdd` will each have optimal PG counts that depend on the
 number of those respective device types.

+The autoscaler uses the `scale-down` profile by default,
+where each pool starts out with a full complement of PGs and scales down
+only when usage across the pools is uneven. It also provides a `scale-up`
+profile, where each pool starts out with minimal PGs and PGs are added
+as usage in each pool grows.
+
+With only the `scale-down` profile, the autoscaler identifies any
+overlapping roots and prevents pools with such roots from scaling,
+because overlapping roots can cause problems with the scaling process.
+
+To use the `scale-up` profile::
+
+    ceph osd pool set autoscale-profile scale-up
+
+To switch back to the default `scale-down` profile::
+
+    ceph osd pool set autoscale-profile scale-down
+
+Existing clusters will continue to use the `scale-up` profile.
+To use the `scale-down` profile, users will need to set the autoscale
+profile to `scale-down` after upgrading to a version of Ceph that provides it.
+

 .. _specifying_pool_target_size:
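The **BIAS** column documented in the patch above corresponds to the per-pool
``pg_autoscale_bias`` option. As a minimal usage sketch (the pool name
``cephfs-metadata`` is only a placeholder), a pool expected to need
proportionally more PGs than its stored data alone would suggest can be given
a larger bias::

    ceph osd pool set cephfs-metadata pg_autoscale_bias 4.0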
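The **AUTOSCALE** mode is likewise a per-pool setting, controlled by the
existing ``pg_autoscale_mode`` pool option. A brief sketch, using a
hypothetical pool named ``mypool``::

    ceph osd pool set mypool pg_autoscale_mode warn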
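After switching profiles with the ``autoscale-profile`` commands added in the
patch, the status table shown at the top of the diff can be regenerated to
confirm the **PROFILE** column for each pool::

    ceph osd pool autoscale-status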