- avoid a memory utilization explosion when a pg is degraded
- avoid overhead of converting pg log entries to dup op entries.
The net effect is that during recovery we will keep fewer log entries,
which means we will trigger backfill sooner rather than later.
In the future, it would still be nice to dynamically adjust this in such
a way as to avoid increasing the memory footprint (e.g., by borrowing
memory from BlueStore).
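
For operators who want different limits, a sketch of overriding these
options in ceph.conf (the values shown are the new defaults from this
change; tune to taste):

```
[osd]
osd_min_pg_log_entries = 3000
osd_max_pg_log_entries = 3000
```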
Signed-off-by: Sage Weil <sage@redhat.com>
.set_description(""),
Option("osd_min_pg_log_entries", Option::TYPE_UINT, Option::LEVEL_ADVANCED)
- .set_default(1500)
+ .set_default(3000)
.set_description("minimum number of entries to maintain in the PG log")
.add_service("osd")
.add_see_also("osd_max_pg_log_entries")
.add_see_also("osd_pg_log_dups_tracked"),
Option("osd_max_pg_log_entries", Option::TYPE_UINT, Option::LEVEL_ADVANCED)
- .set_default(10000)
+ .set_default(3000)
.set_description("maximum number of entries to maintain in the PG log when degraded before we trim")
.add_service("osd")
.add_see_also("osd_min_pg_log_entries")