* New OSD daemon command ``dump_scrub_reservations``, which reveals the
 scrub reservations held for local (primary) and remote (replica) PGs.
+
+* The ``pg_autoscale_mode`` is now set to ``on`` by default for newly
+ created pools, which means that Ceph will automatically manage the
+ number of PGs. To change this behavior (a per-pool override is
+ sketched below), or to learn more about PG autoscaling, see
+ :ref:`pg-autoscaler`. Note that existing pools in
+ upgraded clusters will still be set to ``warn`` by default.
+
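A minimal sketch of overriding the new default for a single pool, assuming a
pool named ``foo`` already exists (the pool name and the ``warn`` target are
illustrative only)::

    $ ceph osd pool set foo pg_autoscale_mode warn
    $ ceph osd pool set foo pg_autoscale_mode off
    $ ceph osd pool autoscale-status

The last command reports per-pool autoscaler state and is served by the mgr
``pg_autoscaler`` module, which must be enabled for it to return data.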
.set_description(""),
Option("osd_pool_default_pg_autoscale_mode", Option::TYPE_STR, Option::LEVEL_ADVANCED)
- .set_default("warn")
+ .set_default("on")
.set_flag(Option::FLAG_RUNTIME)
.set_enum_allowed({"off", "warn", "on"})
.set_description("Default PG autoscaling behavior for new pools"),
nearfull_ratio 0
min_compat_client jewel
- pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 192 pgp_num 192 autoscale_mode warn last_change 0 flags hashpspool stripe_width 0 application rbd
+ pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 192 pgp_num 192 autoscale_mode on last_change 0 flags hashpspool stripe_width 0 application rbd
max_osd 3
nearfull_ratio 0
min_compat_client jewel
- pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 0 flags hashpspool stripe_width 0 application rbd
+ pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode on last_change 0 flags hashpspool stripe_width 0 application rbd
max_osd 1
nearfull_ratio 0
min_compat_client jewel
- pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 192 pgp_num 192 autoscale_mode warn last_change 0 flags hashpspool stripe_width 0 application rbd
+ pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 192 pgp_num 192 autoscale_mode on last_change 0 flags hashpspool stripe_width 0 application rbd
max_osd 3
osdmaptool: writing epoch 1 to myosdmap
$ osdmaptool --print myosdmap | grep 'pool 1'
osdmaptool: osdmap file 'myosdmap'
- pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 192 pgp_num 192 autoscale_mode warn last_change 0 flags hashpspool stripe_width 0 application rbd
+ pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 192 pgp_num 192 autoscale_mode on last_change 0 flags hashpspool stripe_width 0 application rbd
$ rm -f myosdmap
nearfull_ratio 0
min_compat_client jewel
- pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 15296 pgp_num 15296 autoscale_mode warn last_change 0 flags hashpspool stripe_width 0 application rbd
+ pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 15296 pgp_num 15296 autoscale_mode on last_change 0 flags hashpspool stripe_width 0 application rbd
max_osd 239
osdmaptool: writing epoch 1 to om
$ osdmaptool --print om | grep 'pool 1'
osdmaptool: osdmap file 'om'
- pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 15296 pgp_num 15296 autoscale_mode warn last_change 0 flags hashpspool stripe_width 0 application rbd
+ pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 15296 pgp_num 15296 autoscale_mode on last_change 0 flags hashpspool stripe_width 0 application rbd
$ rm -f om
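A rough way to spot-check the new default outside of the cram tests above,
assuming an ``osdmaptool`` build that supports ``--with-default-pool``
(older builds created the default ``rbd`` pool without that flag)::

    $ osdmaptool --createsimple 3 myosdmap --with-default-pool
    $ osdmaptool --print myosdmap | grep -o 'autoscale_mode [a-z]*'
    autoscale_mode on
    $ rm -f myosdmap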