rocksdb.git - commit log (git-server-git.apps.pok.os.sepia.ceph.com)
13 years agoFixing some issues Valgrind found
Kosie van der Merwe [Tue, 8 Jan 2013 20:16:40 +0000 (12:16 -0800)]
Fixing some issues Valgrind found

Summary: Found some issues running Valgrind on `db_test` (there are still some outstanding ones) and fixed them.

Test Plan:
make check

Ran `valgrind ./db_test` and confirmed that the errors no longer occur

Reviewers: dhruba, vamsi, emayanke, sheki

Reviewed By: dhruba

CC: leveldb
Differential Revision: https://reviews.facebook.net/D7803

13 years agoFixed memory leak in ShardedLRUCache
Kosie van der Merwe [Tue, 8 Jan 2013 19:24:15 +0000 (11:24 -0800)]
Fixed memory leak in ShardedLRUCache

Summary: `~ShardedLRUCache()` was empty despite `init()` allocating memory on the heap. Fixed the leak by freeing memory allocated by `init()`.
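The leak pattern the summary describes can be sketched as follows. This is an illustrative stand-in (class and member names here are hypothetical, not the real ShardedLRUCache code): `init()` allocates on the heap, so an empty destructor leaks every allocation, and the fix is to release in the destructor what `init()` acquired.

```cpp
#include <cstddef>

// Hypothetical sketch of the bug: init() allocates shards on the heap,
// so an empty destructor (the original bug) would leak them all.
class ShardedCacheSketch {
 public:
  ShardedCacheSketch() : shards_(nullptr), num_shards_(0) {}

  void init(size_t num_shards) {
    num_shards_ = num_shards;
    shards_ = new int[num_shards_];  // stand-in for the real shard objects
  }

  // The fix: free the memory that init() allocated.
  ~ShardedCacheSketch() { delete[] shards_; }

  size_t num_shards() const { return num_shards_; }

 private:
  int* shards_;
  size_t num_shards_;
};
```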

Test Plan:
make check

Ran valgrind on db_test before and after the patch and saw that leaked memory went down

Reviewers: vamsi, dhruba, emayanke, sheki

Reviewed By: dhruba

CC: leveldb
Differential Revision: https://reviews.facebook.net/D7791

13 years agodb_bench should use the default value for max_grandparent_overlap_factor.
Dhruba Borthakur [Tue, 8 Jan 2013 18:03:36 +0000 (10:03 -0800)]
db_bench should use the default value for max_grandparent_overlap_factor.

Summary:
This was a performance regression caused by https://reviews.facebook.net/D6729.
The default value of max_grandparent_overlap_factor was erroneously
set to 0 in db_bench, which caused compactions to create very small files.

Test Plan: Run --benchmarks=overwrite

Reviewers: heyongqiang, emayanke, sheki, MarkCallaghan

Reviewed By: sheki

CC: leveldb
Differential Revision: https://reviews.facebook.net/D7797

13 years agoAdded clearer error message for failure to create db directory in DBImpl::Recover()
Kosie van der Merwe [Mon, 7 Jan 2013 18:11:18 +0000 (10:11 -0800)]
Added clearer error message for failure to create db directory in DBImpl::Recover()

Summary:
Changed CreateDir() to CreateDirIfMissing() so a directory that already exists no longer causes an error.

Fixed CreateDirIfMissing() and added Env.DirExists()
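The intended semantics can be sketched as below. This is a hedged illustration, not the real Env implementation (function names here are made up): creation succeeds, an already-existing directory is treated as success, but a non-directory at that path is still an error.

```cpp
#include <sys/stat.h>
#include <sys/types.h>
#include <cerrno>
#include <string>

// Illustrative version of a DirExists check: true only if the path
// exists and is a directory.
bool DirExistsSketch(const std::string& path) {
  struct stat st;
  return stat(path.c_str(), &st) == 0 && S_ISDIR(st.st_mode);
}

// Illustrative CreateDirIfMissing: creating the directory succeeds,
// an existing directory is fine, but an existing non-directory fails.
bool CreateDirIfMissingSketch(const std::string& path) {
  if (mkdir(path.c_str(), 0755) == 0) return true;    // freshly created
  if (errno == EEXIST) return DirExistsSketch(path);  // ok only if a dir
  return false;                                       // some other error
}
```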

Test Plan:
make check to test for regressions

Ran the following to check that the error message is not about missing lock files:
./db_bench --db=dir/testdb

After creating a file "testdb", ran the following to see if it failed with a sane error message:
./db_bench --db=testdb

Reviewers: dhruba, emayanke, vamsi, sheki

Reviewed By: emayanke

CC: leveldb
Differential Revision: https://reviews.facebook.net/D7707

13 years agoAdd --seed, --read_range to db_bench
Mark Callaghan [Thu, 3 Jan 2013 20:11:50 +0000 (12:11 -0800)]
Add --seed, --read_range to db_bench

Summary:
Adds the option --seed to db_bench to specify the base for the per-thread RNG.
When not set, each thread uses the same value across runs of db_bench, which
defeats IO stress testing.

Adds the option --read_range. When set to a value > 1 an iterator is created and
each query done for the randomread benchmark will do a range scan for that many
rows. When not set or set to 1 the existing behavior (a point lookup) is done.

Fixes a bug where a printf format string was missing.
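The seeding scheme described above can be sketched roughly as follows. All names here are illustrative (this is not the real db_bench Random class, and the exact derivation formula in db_bench may differ): each thread derives its RNG seed from the user-supplied base plus its thread id, so changing --seed between runs changes every thread's random stream while thread streams still differ within one run.

```cpp
#include <cstdint>

// Minimal linear-congruential generator standing in for a per-thread RNG.
struct RandomSketch {
  uint32_t state;
  explicit RandomSketch(uint32_t seed) : state(seed ? seed : 1) {}
  uint32_t Next() {
    state = state * 1664525u + 1013904223u;  // classic LCG constants
    return state;
  }
};

// Each worker thread seeds its RNG from the base seed plus its thread id.
RandomSketch MakeThreadRng(uint32_t base_seed, int thread_id) {
  return RandomSketch(base_seed + static_cast<uint32_t>(thread_id));
}
```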

Test Plan: run db_bench

Reviewers: dhruba

Reviewed By: dhruba

CC: leveldb
Differential Revision: https://reviews.facebook.net/D7749

13 years agoFixing and adding some comments
Kosie van der Merwe [Fri, 4 Jan 2013 01:13:56 +0000 (17:13 -0800)]
Fixing and adding some comments

Summary:
`MemTableList::Add()` neglected to mention that it took ownership of the reference held by its caller.

The comment in `MemTable::Get()` was wrong in describing the format of the key.

Test Plan: None

Reviewers: dhruba, sheki, emayanke, vamsi

Reviewed By: dhruba

CC: leveldb
Differential Revision: https://reviews.facebook.net/D7755

13 years agoUse a priority queue to merge files.
Abhishek Kona [Wed, 26 Dec 2012 19:51:36 +0000 (11:51 -0800)]
Use a priority queue to merge files.

Summary:
Use a std::priority_queue in merger.cc instead of doing an O(n) search
every time.
Currently only the ForwardIteration uses a Priority Queue.
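The idea behind the change can be sketched with a k-way merge over plain sorted vectors rather than the real Iterator interface (so this is not the merger.cc code itself): instead of scanning all child sources to find the smallest current key, keep them in a min-heap so each step costs O(log n).

```cpp
#include <functional>
#include <queue>
#include <tuple>
#include <vector>

// Merge several sorted runs into one sorted output using a min-heap,
// rather than an O(n) scan of all runs at every step.
std::vector<int> MergeSorted(const std::vector<std::vector<int>>& runs) {
  // Heap entries: (current value, run index, position within that run).
  using Entry = std::tuple<int, size_t, size_t>;
  std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> heap;
  for (size_t i = 0; i < runs.size(); ++i) {
    if (!runs[i].empty()) heap.emplace(runs[i][0], i, 0);
  }

  std::vector<int> out;
  while (!heap.empty()) {
    auto [v, run, pos] = heap.top();  // smallest current value
    heap.pop();
    out.push_back(v);
    if (pos + 1 < runs[run].size()) {
      heap.emplace(runs[run][pos + 1], run, pos + 1);  // advance that run
    }
  }
  return out;
}
```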

Test Plan: make all check

Reviewers: dhruba

Reviewed By: dhruba

CC: emayanke, zshao
Differential Revision: https://reviews.facebook.net/D7629

13 years agoExtendOverlappingInputs too slow for large databases.
Dhruba Borthakur [Mon, 31 Dec 2012 06:18:52 +0000 (22:18 -0800)]
ExtendOverlappingInputs too slow for large databases.

Summary:
There was a bug in the ExtendOverlappingInputs method: the terminating
condition for the backward search was incorrect.
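The backward half of such a search can be sketched as below. This is purely illustrative of the correct terminating condition (it is not the actual version_set.cc code, and the struct and function names are made up): starting just before a given file, keep extending the compaction inputs while earlier files still overlap the target key range, and stop at the first non-overlapping file.

```cpp
#include <cstddef>
#include <vector>

// Each file covers the inclusive key range [smallest, largest].
struct FileRange {
  int smallest;
  int largest;
};

// Scan backwards from `index`, extending the inputs while earlier files
// overlap [begin, end]; return the index of the first overlapping file.
size_t ExtendBackward(const std::vector<FileRange>& files, size_t index,
                      int begin, int end) {
  size_t first = index;
  while (first > 0) {
    const FileRange& f = files[first - 1];
    if (f.largest < begin || f.smallest > end) break;  // no overlap: stop
    --first;
  }
  return first;
}
```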

Test Plan: make clean check

Reviewers: sheki, emayanke, MarkCallaghan

Reviewed By: MarkCallaghan

CC: leveldb
Differential Revision: https://reviews.facebook.net/D7725