From 02ce12c2c48eaa105d218e4ae23bf0a119053e40 Mon Sep 17 00:00:00 2001
From: Radoslaw Zarzynski
Date: Tue, 8 Oct 2019 19:34:05 +0200
Subject: [PATCH] test/crimson: cbt test does rand-reads instead of seq-reads.

The rationale behind the change is not only that random-read performance
tends to be more important and meaningful. The current procedure is
actually broken because of how `rados bench` operates internally: the
number of seq-read operations is limited by the number of write ops used
to create the data set (`write` with the `--no-cleanup` option):

```cpp
int ObjBencher::write_bench(/* ... */) {
  // ...
  // write object size/number data for read benchmarks
  encode(data.object_size, b_write);
  encode(data.finished, b_write);
  // ...

int ObjBencher::fetch_bench_metadata(// ...
                                     int* num_ops, /* ... */) {
  // ...
  decode(*object_size, p);
  decode(*num_ops, p);

int ObjBencher::seq_read_bench(
  int seconds_to_run, int num_ops, int num_objects,
  int concurrentios, int pid, bool no_verify) {
  // ...
  // start initial reads
  for (int i = 0; i < concurrentios; ++i) {
    // ...
    ++data.started;
  }
  // ...
  while ((seconds_to_run && mono_clock::now() < finish_time) &&
         num_ops > data.started) {
```

This is a significant problem because the cbt test uses a short (3 s)
prefill period. As a consequence, the sequential-read testing takes
only around half a second instead of the configured `read_time`.

Signed-off-by: Radoslaw Zarzynski
---
 src/test/crimson/cbt/radosbench_4K_read.yaml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/test/crimson/cbt/radosbench_4K_read.yaml b/src/test/crimson/cbt/radosbench_4K_read.yaml
index d4205d710e0..db21f7a4df2 100644
--- a/src/test/crimson/cbt/radosbench_4K_read.yaml
+++ b/src/test/crimson/cbt/radosbench_4K_read.yaml
@@ -13,6 +13,7 @@ tasks:
       pool_profile: 'replicated'
       read_time: 30
       read_only: true
+      readmode: 'rand'
       prefill_time: 3
       acceptable:
         bandwidth: '(or (greater) (near 0.05))'
-- 
2.39.5