4 or 8 PGs don't provide much parallelism at baseline. Start with 16
and set the floor there; that puts a more reasonable number of OSDs to
work on a single pool.
Note that there is no magic number here. At some point someone has to
tell Ceph if an empty pool should get lots of PGs across lots of devices
to get the full throughput of the cluster. But this will be a bit less
painful/surprising for users.
Fixes: https://tracker.ceph.com/issues/42509
Signed-off-by: Sage Weil <sage@redhat.com>
(cherry picked from commit 78bf92448002eece7501da01b67f900a84207e70)
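
To see the effect described above, here is a rough back-of-the-envelope
sketch in Python. It assumes uniform random placement rather than real
CRUSH, and the 50-OSD / 3x-replication cluster is an assumption chosen
only for illustration:

    import random

    def expected_osds_used(pg_num, replica_size, num_osds, trials=200):
        # Monte Carlo estimate of how many distinct OSDs hold data for a
        # pool, assuming uniform random placement (a simplification, not
        # real CRUSH).
        total = 0
        for _ in range(trials):
            used = set()
            for _ in range(pg_num):
                used.update(random.sample(range(num_osds), replica_size))
            total += len(used)
        return total / trials

    # Assumed example cluster: 50 OSDs, 3x replication.
    for pgs in (4, 8, 16):
        print(pgs, round(expected_osds_used(pgs, 3, 50)))
    # Roughly: 4 PGs touch ~11 OSDs, 8 PGs ~20, and 16 PGs ~31 of the 50.

In other words, on a modest cluster the jump from 8 to 16 PGs moves a
new pool from engaging well under half the OSDs to engaging most of them.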
value is the same as ``pg_num`` with ``mkpool``.
:Type: 32-bit Integer
-:Default: ``8``
+:Default: ``16``
``osd pool default pgp num``
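
For illustration, a minimal sketch of what this default governs, assuming
a running cluster and driving the standard ceph CLI from Python via
subprocess; the pool name "testpool" is hypothetical:

    import subprocess

    # Creating a pool without an explicit pg_num lets the monitors apply
    # osd_pool_default_pg_num (now 16).
    subprocess.run(["ceph", "osd", "pool", "create", "testpool"], check=True)

    # Read back the pg_num the pool actually received.
    out = subprocess.run(["ceph", "osd", "pool", "get", "testpool", "pg_num"],
                         capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # e.g. "pg_num: 16" on a cluster using the new default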
.add_service("mon"),
Option("osd_pool_default_pg_num", Option::TYPE_UINT, Option::LEVEL_ADVANCED)
- .set_default(8)
+ .set_default(16)
.set_description("number of PGs for new pools")
.set_flag(Option::FLAG_RUNTIME)
.add_service("mon"),
INTERVAL = 5
-PG_NUM_MIN = 4 # unless specified on a per-pool basis
+PG_NUM_MIN = 16 # unless specified on a per-pool basis
def nearest_power_of_two(n):
v = int(n)
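
For context on the hunk above, a minimal sketch of rounding a PG estimate
to a neighboring power of two and clamping it to the raised floor. The
tie-breaking rule and the target_pg_num helper are illustrative, not the
module's actual code:

    def nearest_power_of_two(n):
        # Round n to a neighboring power of two; tie-breaking toward the
        # lower bound is a detail of this sketch.
        v = int(n)
        lower = 1 << (max(v, 1).bit_length() - 1)    # largest power of two <= v
        upper = lower if lower == v else lower << 1  # smallest power of two >= v
        return lower if (v - lower) <= (upper - v) else upper

    def target_pg_num(estimate, pg_num_min=16):
        # Hypothetical helper: clamp the rounded estimate to the new floor.
        return max(nearest_power_of_two(estimate), pg_num_min)

    assert nearest_power_of_two(13) == 16
    assert target_pg_num(5) == 16  # floor raised from 4 to 16 by this change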