From: Kamoltat
Date: Tue, 21 Dec 2021 07:56:37 +0000 (+0000)
Subject: Added ReleaseNotes and documentation
X-Git-Tag: v17.1.0~190^2
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=abaab51dd8bb5eadbf6c149c17a073eacb3298b0;p=ceph.git

Added ReleaseNotes and documentation

Add release notes, remove the `profile`-related material from the
autoscaler documentation, and replace it with the `bulk` flag.

Signed-off-by: Kamoltat
---

diff --git a/PendingReleaseNotes b/PendingReleaseNotes
index 438bbdc6f4ea..18d39546becb 100644
--- a/PendingReleaseNotes
+++ b/PendingReleaseNotes
@@ -84,6 +84,13 @@
 * LevelDB support has been removed. ``WITH_LEVELDB`` is no longer a supported
   build option.
 
+* MON/MGR: Pools can now be created with the `--bulk` flag. Any pool created with `bulk`
+  will use a profile of the `pg_autoscaler` that provides more performance from the start.
+  However, any pool created without the `--bulk` flag will keep its old behavior
+  by default. For more details, see:
+
+  https://docs.ceph.com/en/latest/rados/operations/placement-groups/
+
 >=16.0.0
 --------
 * mgr/nfs: ``nfs`` module is moved out of volumes plugin. Prior using the
diff --git a/doc/rados/operations/placement-groups.rst b/doc/rados/operations/placement-groups.rst
index c59830635cde..caaeb7af7aac 100644
--- a/doc/rados/operations/placement-groups.rst
+++ b/doc/rados/operations/placement-groups.rst
@@ -43,10 +43,10 @@ the PG count with this command::
 
 Output will be something like::
 
-  POOL  SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  PROFILE
-  a     12900M               3.0   82431M        0.4695                                 8     128                 warn       scale-up
-  c     0                    3.0   82431M        0.0000  0.2000        0.9884           1.0   1       64          warn       scale-down
-  b     0       953.6M       3.0   82431M        0.0347                                 8                         warn       scale-down
+  POOL  SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
+  a     12900M               3.0   82431M        0.4695                                 8     128                 warn       True
+  c     0                    3.0   82431M        0.0000  0.2000        0.9884           1.0   1       64          warn       True
+  b     0       953.6M       3.0   82431M        0.0347                                 8                         warn       False
 
 **SIZE** is the amount of data stored in the pool. **TARGET SIZE**, if
 present, is the amount of data the administrator has specified that
@@ -96,9 +96,12 @@ This factor can be be adjusted with::
 **AUTOSCALE**, is the pool ``pg_autoscale_mode``
 and will be either ``on``, ``off``, or ``warn``.
 
-The final column, **PROFILE** shows the autoscale profile
-used by each pool. ``scale-up`` and ``scale-down`` are the
-currently available profiles.
+The final column, **BULK**, indicates whether the pool is a ``bulk``
+pool and will be either ``True`` or ``False``. A ``bulk`` pool
+is expected to be large and should therefore start out with a large
+number of PGs for performance purposes. On the other hand,
+pools without the ``bulk`` flag are expected to be smaller,
+e.g. ``.mgr`` or meta pools.
 
 
 Automated scaling
@@ -126,28 +129,27 @@ example, a pool that maps to OSDs of class `ssd` and a pool that maps to OSDs
 of class `hdd` will each have optimal PG counts that depend on the number of
 those respective device types.
 
-The autoscaler uses the `scale-up` profile by default,
-where it starts out each pool with minimal PGs and scales
-up PGs when there is more usage in each pool. However, it also has
-a `scale-down` profile, where each pool starts out with a full complements
-of PGs and only scales down when the usage ratio across the pools is not even.
+The autoscaler uses the `bulk` flag to determine which pools
+should start out with a full complement of PGs and only scale
+down when the usage ratio across the pools is not even.
+However, if a pool doesn't have the `bulk` flag, the pool will
+start out with minimal PGs and scale up only when there is more usage in the pool.
 
-With only the `scale-down` profile, the autoscaler identifies
-any overlapping roots and prevents the pools with such roots
-from scaling because overlapping roots can cause problems
+The autoscaler identifies any overlapping roots and prevents the pools
+with such roots from scaling because overlapping roots can cause problems
 with the scaling process.
 
-To use the `scale-down` profile::
+To create a pool with the `bulk` flag::
 
-  ceph osd pool set autoscale-profile scale-down
+  ceph osd pool create <pool-name> --bulk
 
-To switch back to the default `scale-up` profile::
+To set or unset the `bulk` flag of an existing pool::
 
-  ceph osd pool set autoscale-profile scale-up
+  ceph osd pool set <pool-name> bulk <true/false/1/0>
 
-Existing clusters will continue to use the `scale-up` profile.
-To use the `scale-down` profile, users will need to set autoscale-profile `scale-down`,
-after upgrading to a version of Ceph that provides the `scale-down` feature.
+To get the `bulk` flag of an existing pool::
+
+  ceph osd pool get <pool-name> bulk
 
 
 .. _specifying_pool_target_size:
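
A minimal end-to-end sketch of the `bulk` workflow documented above (the pool
name `testpool` is only an illustrative placeholder, and the exact
`autoscale-status` columns will vary by release and cluster)::

  # create a pool that starts out with a full complement of PGs
  ceph osd pool create testpool --bulk

  # flip the flag off and back on for an existing pool
  ceph osd pool set testpool bulk false
  ceph osd pool set testpool bulk true

  # read the flag back and inspect the autoscaler's view of the pool
  ceph osd pool get testpool bulk
  ceph osd pool autoscale-status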