pg_autoscale module will now start out all the pools
with a scale-up profile by default.
Added tests in workunits/mon/pg_autoscaler.sh
to verify that newly created pools default to
the scale-up profile.
Updated documentation and release notes to
reflect the change in the default behavior
of the pg_autoscale profile.
Fixes: https://tracker.ceph.com/issues/53309
Signed-off-by: Kamoltat <ksirivad@redhat.com>
(cherry picked from commit a9f9f7b3fd813d429c4a539edf560d3fb6eb553b)
Conflicts:
src/pybind/mgr/pg_autoscaler/module.py - trivial fix
>=16.2.6
--------
-* MGR: The pg_autoscaler has a new default 'scale-down' profile which provides more
- performance from the start for new pools (for newly created clusters).
- Existing clusters will retain the old behavior, now called the 'scale-up' profile.
+* MGR: The pg_autoscaler has a new 'scale-down' profile which provides more
+ performance from the start for new pools. However, the module will retain
+ its old behavior by default, now called the 'scale-up' profile.
For more details, see:
https://docs.ceph.com/en/latest/rados/operations/placement-groups/
to OSDs of class `hdd` will each have optimal PG counts that depend on
the number of those respective device types.
-The autoscaler uses the `scale-down` profile by default,
-where each pool starts out with a full complements of PGs and only scales
-down when the usage ratio across the pools is not even. However, it also has
-a `scale-up` profile, where it starts out each pool with minimal PGs and scales
-up PGs when there is more usage in each pool.
+The autoscaler uses the `scale-up` profile by default,
+where it starts out each pool with minimal PGs and scales
+up PGs when there is more usage in each pool. However, it also has
+a `scale-down` profile, where each pool starts out with a full complement
+of PGs and only scales down when the usage ratio across the pools is not even.
With only the `scale-down` profile, the autoscaler identifies
any overlapping roots and prevents the pools with such roots
from scaling because overlapping roots can cause problems
with the scaling process.
-To use the `scale-up` profile::
+To use the `scale-down` profile::
- ceph osd pool set autoscale-profile scale-up
+ ceph osd pool set autoscale-profile scale-down
-To switch back to the default `scale-down` profile::
+To switch back to the default `scale-up` profile::
- ceph osd pool set autoscale-profile scale-down
+ ceph osd pool set autoscale-profile scale-up
Existing clusters will continue to use the `scale-up` profile.
To use the `scale-down` profile, users will need to set autoscale-profile `scale-down`,
# get num pools again since we created more pools
NUM_POOLS=$(ceph osd pool ls | wc -l)
+# get profiles of pool a and b
+PROFILE1=$(ceph osd pool autoscale-status | grep -w 'a' | grep -o -m 1 'scale-up\|scale-down' || true)
+PROFILE2=$(ceph osd pool autoscale-status | grep -w 'b' | grep -o -m 1 'scale-up\|scale-down' || true)
+
+# evaluate the default profile a
+if [[ $PROFILE1 = "scale-up" ]]
+then
+ echo "Success: pool a PROFILE is scale-up"
+else
+ echo "Error: pool a PROFILE is not scale-up"
+ exit 1
+fi
+
+# evaluate the default profile of pool b
+if [[ $PROFILE2 = "scale-up" ]]
+then
+ echo "Success: pool b PROFILE is scale-up"
+else
+ echo "Error: pool b PROFILE is not scale-up"
+ exit 1
+fi
+
+# This part of the test evaluates the scale-down profile
+
+# change to scale-down profile
+ceph osd pool set autoscale-profile scale-down
+
+# get profiles of pool a and b
+PROFILE1=$(ceph osd pool autoscale-status | grep -w 'a' | grep -o -m 1 'scale-up\|scale-down' || true)
+PROFILE2=$(ceph osd pool autoscale-status | grep -w 'b' | grep -o -m 1 'scale-up\|scale-down' || true)
+
+# evaluate that profile a is now scale-down
+if [[ $PROFILE1 = "scale-down" ]]
+then
+ echo "Success: pool a PROFILE is scale-down"
+else
+ echo "Error: pool a PROFILE is not scale-down"
+ exit 1
+fi
+
+# evaluate the profile of b is now scale-down
+if [[ $PROFILE2 = "scale-down" ]]
+then
+ echo "Success: pool b PROFILE is scale-down"
+else
+ echo "Error: pool b PROFILE is not scale-down"
+ exit 1
+fi
+
# get pool size
POOL_SIZE_A=$(ceph osd pool get a size| grep -Eo '[0-9]{1,4}')
POOL_SIZE_B=$(ceph osd pool get b size| grep -Eo '[0-9]{1,4}')
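The `grep -o -m 1 'scale-up\|scale-down'` extraction used in the test above can be sanity-checked without a live cluster by running it against a sample status row (the row below is illustrative, not captured from real `ceph osd pool autoscale-status` output):

```shell
# Illustrative check of the profile-extraction pattern used in the test.
# SAMPLE mimics a single (hypothetical) autoscale-status row for pool 'a'.
SAMPLE="a  3072M  3.0  447.9G  0.0201  1.0  32  warn  scale-up"

# -o prints only the matched text; -m 1 stops after the first matching line,
# so at most one profile name is emitted even if more rows were piped in.
PROFILE=$(echo "$SAMPLE" | grep -o -m 1 'scale-up\|scale-down' || true)
echo "$PROFILE"
```

Note that the profile names themselves contain the letter `a`, which is why matching the pool row with a bare `grep 'a'` would also match other pools' rows; matching on the pool name as a whole word avoids that.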
version = 0;
pending.clear();
bufferlist bl;
- bl.append("scale-down");
+ bl.append("scale-up");
pending["config/mgr/mgr/pg_autoscaler/autoscale_profile"] = bl;
}
default='scale-up',
type='str',
desc='pg_autoscale profile',
- long_desc=('Determines the behavior of the autoscaler algorithm '
+ long_desc=('Determines the behavior of the autoscaler algorithm, '
'`scale-up` means that it starts out with minimum pgs '
- 'and scales up when there is pressure, `scale-down` '
- 'means starts out with full pgs and scales down when '
- 'there is pressure '),
+ 'and scales up when there is pressure; `scale-down` '
+ 'means that it starts out with full pgs and scales '
+ 'down when there is pressure'),
runtime=True),
]