Until we understand the performance regression and allocator behavior,
go back to a 64 KB min_alloc_size. This keeps the high space overhead
for small objects and erasure-coded (EC) pools, but preserves the
current performance levels for all workloads.
This partially reverts 0ec75c99ddb103c664f66c04450003ed4c407708
Signed-off-by: Sage Weil <sage@redhat.com>
(cherry picked from commit 66797ef19949503e77d52b567962c218ff0d8a77)
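The space-overhead concern above can be illustrated with a rough
back-of-the-envelope sketch (not Ceph code; the helper name is made up):
each object consumes whole min_alloc_size units on disk, so a 4 KiB
object under a 64 KiB min_alloc_size occupies 16x its logical size.

```python
def allocated(obj_size: int, min_alloc_size: int) -> int:
    """Disk space consumed: object size rounded up to min_alloc_size units."""
    units = -(-obj_size // min_alloc_size)  # ceiling division
    return units * min_alloc_size

for min_alloc in (4 * 1024, 64 * 1024):
    # A 4 KiB object: 1x overhead at 4 KiB units, 16x at 64 KiB units.
    alloc = allocated(4096, min_alloc)
    print(f"min_alloc={min_alloc}: allocated={alloc} ({alloc / 4096:.0f}x)")
```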
.set_long_description("A smaller allocation size generally means less data is read and then rewritten when a copy-on-write operation is triggered (e.g., when writing to something that was recently snapshotted). Similarly, less data is journaled before performing an overwrite (writes smaller than min_alloc_size must first pass through the BlueStore journal). Larger values of min_alloc_size reduce the amount of metadata required to describe the on-disk layout and reduce overall fragmentation."),
Option("bluestore_min_alloc_size_hdd", Option::TYPE_SIZE, Option::LEVEL_ADVANCED)
- .set_default(4_K)
+ .set_default(64_K)
.set_flag(Option::FLAG_CREATE)
.set_description("Default min_alloc_size value for rotational media")
.add_see_also("bluestore_min_alloc_size"),
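For operators who want the pre-revert behavior, the value can still be set
explicitly. A minimal ceph.conf sketch (section placement is illustrative;
because the option carries FLAG_CREATE, it is baked in when an OSD is
created and changing it later does not affect existing OSDs):

```ini
[osd]
# 4 KiB allocation unit for rotational media; only applies to
# OSDs created after this is set (FLAG_CREATE option).
bluestore_min_alloc_size_hdd = 4096
```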