Summary:
With https://github.com/facebook/rocksdb/issues/10087, applications calling `SyncWAL()` or writing with `WriteOptions::sync=true` can suffer
a performance regression. This PR reverts to the original behavior of tracking only the syncing of closed WALs.
After reverting to the old behavior, recovery, whether kPointInTime or kAbsoluteConsistency, may fail to
detect corruption in synced WALs if the corruption is in the live WAL.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10330
Test Plan:
make check
Before https://github.com/facebook/rocksdb/issues/10087
```bash
fillsync : 750.269 micros/op 1332 ops/sec 75.027 seconds 100000 operations; 0.1 MB/s (100 ops)
fillsync : 776.492 micros/op 1287 ops/sec 77.649 seconds 100000 operations; 0.1 MB/s (100 ops)
fillsync [AVG 2 runs] : 1310 (± 44) ops/sec; 0.1 (± 0.0) MB/sec
fillsync : 805.625 micros/op 1241 ops/sec 80.563 seconds 100000 operations; 0.1 MB/s (100 ops)
fillsync [AVG 3 runs] : 1287 (± 51) ops/sec; 0.1 (± 0.0) MB/sec
fillsync [AVG 3 runs] : 1287 (± 51) ops/sec; 0.1 (± 0.0) MB/sec
fillsync [MEDIAN 3 runs] : 1287 ops/sec; 0.1 MB/sec
```
Before this PR and after https://github.com/facebook/rocksdb/issues/10087
```bash
fillsync : 1479.601 micros/op 675 ops/sec 147.960 seconds 100000 operations; 0.1 MB/s (100 ops)
fillsync : 1626.080 micros/op 614 ops/sec 162.608 seconds 100000 operations; 0.1 MB/s (100 ops)
fillsync [AVG 2 runs] : 645 (± 59) ops/sec; 0.1 (± 0.0) MB/sec
fillsync : 1588.402 micros/op 629 ops/sec 158.840 seconds 100000 operations; 0.1 MB/s (100 ops)
fillsync [AVG 3 runs] : 640 (± 35) ops/sec; 0.1 (± 0.0) MB/sec
fillsync [AVG 3 runs] : 640 (± 35) ops/sec; 0.1 (± 0.0) MB/sec
fillsync [MEDIAN 3 runs] : 629 ops/sec; 0.1 MB/sec
```
After this PR
```bash
fillsync : 749.621 micros/op 1334 ops/sec 74.962 seconds 100000 operations; 0.1 MB/s (100 ops)
fillsync : 865.577 micros/op 1155 ops/sec 86.558 seconds 100000 operations; 0.1 MB/s (100 ops)
fillsync [AVG 2 runs] : 1244 (± 175) ops/sec; 0.1 (± 0.0) MB/sec
fillsync : 845.837 micros/op 1182 ops/sec 84.584 seconds 100000 operations; 0.1 MB/s (100 ops)
fillsync [AVG 3 runs] : 1223 (± 109) ops/sec; 0.1 (± 0.0) MB/sec
fillsync [AVG 3 runs] : 1223 (± 109) ops/sec; 0.1 (± 0.0) MB/sec
fillsync [MEDIAN 3 runs] : 1182 ops/sec; 0.1 MB/sec
```
Reviewed By: ajkr
Differential Revision: D37725212
Pulled By: riversand963
fbshipit-source-id: 8fa7d13b3c7662be5d56351c42caf3266af937ae
# Rocksdb Change Log
+## 7.4.3 (07/13/2022)
+### Behavior Changes
+* For track_and_verify_wals_in_manifest, revert to the original behavior before #10087: syncing of the live WAL file is not tracked; only the synced sizes of **closed** WALs are tracked (PR #10330).
+
## 7.4.2 (06/30/2022)
### Bug Fixes
-Fix a bug in Logger where if dbname and db_log_dir are on different filesystems, dbname creation would fail wrt to db_log_dir path returning an error and fails to open the DB.
+* Fix a bug in Logger where, if dbname and db_log_dir are on different filesystems, dbname creation would fail with an error related to the db_log_dir path, causing DB open to fail.
## 7.4.1 (06/28/2022)
### Bug Fixes
ASSERT_TRUE(db_->VerifyFileChecksums(ReadOptions()).IsInvalidArgument());
}
-TEST_F(DBBasicTest, ManualWalSync) {
+// TODO: re-enable after we provide finer-grained control for WAL tracking to
+// meet the needs of different use cases, durability levels and recovery modes.
+TEST_F(DBBasicTest, DISABLED_ManualWalSync) {
Options options = CurrentOptions();
options.track_and_verify_wals_in_manifest = true;
options.wal_recovery_mode = WALRecoveryMode::kAbsoluteConsistency;
for (auto it = logs_.begin(); it != logs_.end() && it->number <= up_to;) {
auto& wal = *it;
assert(wal.IsSyncing());
- if (immutable_db_options_.track_and_verify_wals_in_manifest &&
- wal.GetPreSyncSize() > 0) {
- synced_wals.AddWal(wal.number, WalMetadata(wal.GetPreSyncSize()));
- }
if (logs_.size() > 1) {
+ if (immutable_db_options_.track_and_verify_wals_in_manifest &&
+ wal.GetPreSyncSize() > 0) {
+ synced_wals.AddWal(wal.number, WalMetadata(wal.GetPreSyncSize()));
+ }
logs_to_free_.push_back(wal.ReleaseWriter());
// To modify logs_ both mutex_ and log_write_mutex_ must be held
InstrumentedMutexLock l(&log_write_mutex_);
bool flush_verify_memtable_count = true;
// If true, the log numbers and sizes of the synced WALs are tracked
- // in MANIFEST, then during DB recovery, if a synced WAL is missing
+ // in MANIFEST. During DB recovery, if a synced WAL is missing
// from disk, or the WAL's size does not match the recorded size in
// MANIFEST, an error will be reported and the recovery will be aborted.
//
+ // This is one additional protection against WAL corruption besides the
+ // per-WAL-entry checksum.
+ //
// Note that this option does not work with secondary instance.
+ // Currently, only the syncing of closed WALs is tracked. Calling
+ // `DB::SyncWAL()`, etc., or writing with `WriteOptions::sync=true` to sync
+ // the live WAL is not tracked, for performance/efficiency reasons.
//
// Default: false
bool track_and_verify_wals_in_manifest = false;
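As a usage sketch of the option documented above (a configuration fragment assuming the public RocksDB `Options`/`DB::Open` API; it requires linking against RocksDB and is not runnable standalone):

```cpp
#include <string>

#include "rocksdb/db.h"
#include "rocksdb/options.h"

// Sketch: open a DB with MANIFEST tracking of closed WALs' synced sizes.
// Hypothetical helper for illustration; error handling is left to the caller.
rocksdb::Status OpenWithWalTracking(const std::string& path, rocksdb::DB** db) {
  rocksdb::Options options;
  options.create_if_missing = true;
  options.track_and_verify_wals_in_manifest = true;  // default: false
  return rocksdb::DB::Open(options, path, db);
}
```

Note that with this PR, enabling the option adds verification coverage only for WALs that have been closed; the live WAL remains unverified during recovery.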
"Size of per-key-value checksum in each write batch. Currently "
"only value 0 and 8 are supported.");
+DEFINE_bool(track_and_verify_wals_in_manifest, false,
+ "If true, enable WAL tracking in the MANIFEST");
+
namespace ROCKSDB_NAMESPACE {
namespace {
static Status CreateMemTableRepFactory(
}
options.allow_data_in_errors = FLAGS_allow_data_in_errors;
+ options.track_and_verify_wals_in_manifest =
+ FLAGS_track_and_verify_wals_in_manifest;
// Integrated BlobDB
options.enable_blob_files = FLAGS_enable_blob_files;