Dhruba Borthakur [Thu, 27 Dec 2012 02:03:34 +0000 (18:03 -0800)]
Every level can have overlapping files.
Summary:
Leveldb has high write amplification because one file from level n
is compacted with all overlapping files in level n+1. This method
of compaction reduces read amplification (because there is only
one file to inspect per level) but the write amplification is high.
Another option would be to compact multiple files from the same
level and push it to a new file in level n+1. This means that
there will be overlapping files in each level. Each read
request might have to inspect multiple files at each
level. This could increase read amplification but should reduce
write amplification. This is called the "Hybrid" mode of
operations.
This patch introduces the "Hybrid" mode of operations (this deserves
a better name?). In the Hybrid mode, all levels can have overlapping
files. The number of files in a level determines whether compaction
is needed or not. Files in higher levels are larger than files in
lower levels. The default option is to have files of size 10MB in
L1, 100MB in L2, 1000MB in L3, and so on. If the number of files
in any level exceeds 10, then that level is a target
for compactions. A compaction process takes many files from
level n and produces a single file in level n+1. The number of
files that are picked as part of a single compaction run is
limited by the size of the output file to be produced at the next
higher level.
This patch was produced by a two-full-day Christmas-Hack of 2012.
Test Plan:
All unit tests pass.
This patch switches on the Hybrid mode by default. This is done so
that all unit tests pass with the Hybrid Mode turned on. At the time
of commit, I will switch off the Hybrid Mode.
Summary:
`Table::Open()` assumes that `size` correctly describes the size of `file`. This patch adds a check that the footer is actually the right size and, for good measure, adds assertions to `Footer::DecodeFrom()`.
This was discovered by running `valgrind ./db_test` and seeing that `Footer::DecodeFrom()` was accessing uninitialized memory.
Test Plan:
make clean check
ran `valgrind ./db_test` and saw DBTest.NoSpace no longer complains about a conditional jump being dependent on uninitialized memory.
db_bench should use the default value for max_grandparent_overlap_factor.
Summary:
This was a performance regression caused by https://reviews.facebook.net/D6729.
The default value of max_grandparent_overlap_factor was erroneously
set to 0 in db_bench, which caused compactions to create extremely
small files.
Mark Callaghan [Thu, 3 Jan 2013 20:11:50 +0000 (12:11 -0800)]
Add --seed, --read_range to db_bench
Summary:
Adds the option --seed to db_bench to specify the base for the per-thread RNG.
When it is not set, each thread uses the same value across runs of db_bench,
which defeats IO stress testing.
Adds the option --read_range. When set to a value > 1, an iterator is created
and each query done for the randomread benchmark does a range scan over that
many rows. When not set, or set to 1, the existing behavior (a point lookup) is used.
Fixes a bug where a printf format string was missing.