The rationale behind the change is not only that random
read performance tends to be more relevant and meaningful.
More importantly, the current procedure is broken because of how
`rados bench` operates internally: the number of sequential read
operations is limited by the number of write ops used to
create the data set (`write` with the `--no-cleanup` option):
```cpp
int ObjBencher::write_bench(/* ... */) {
  // ...
  //write object size/number data for read benchmarks
  encode(data.object_size, b_write);
  encode(data.finished, b_write);
  // ...

int ObjBencher::fetch_bench_metadata(// ...
                                     int* num_ops, /* ... */) {
  // ...
  decode(*object_size, p);
  decode(*num_ops, p);

int ObjBencher::seq_read_bench(
  int seconds_to_run, int num_ops, int num_objects,
  int concurrentios, int pid, bool no_verify) {
  // ...
  //start initial reads
  for (int i = 0; i < concurrentios; ++i) {
    // ...
    ++data.started;
  }
  // ...
  while ((seconds_to_run && mono_clock::now() < finish_time) &&
         num_ops > data.started) {
```
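To make the coupling explicit, here is a minimal standalone sketch (not
Ceph code; the op counts and rates below are made up purely for
illustration) of the guard quoted above: the read loop stops as soon as
the op budget inherited from the prefill runs out, no matter how much of
the read time budget remains.
```cpp
// Standalone model of seq_read_bench()'s loop guard (hypothetical numbers).
#include <iostream>

// How many reads get issued when at most `num_ops` objects exist,
// the time budget is `read_time` seconds and reads complete at `read_iops`.
static int reads_issued(int num_ops, double read_time, double read_iops) {
  int started = 0;
  double now = 0.0;
  // Same shape as the loop condition quoted above.
  while (now < read_time && num_ops > started) {
    ++started;
    now += 1.0 / read_iops;  // pretend each read takes 1/read_iops seconds
  }
  return started;
}

int main() {
  // num_ops is whatever the short prefill managed to write -- say 600
  // objects; 30 s is the read budget, 1200 reads/s an assumed read rate.
  std::cout << reads_issued(600, 30.0, 1200.0) << " reads issued\n";
  // Prints 600: the op budget, not the 30 s read_time, ends the benchmark.
  return 0;
}
```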
This causes a significant problem, as the cbt test uses a short
(3-second) prefill period. As a consequence, the sequential read
test takes only around half a second.
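To put rough, entirely hypothetical numbers on it: if the 3-second
prefill sustains about 200 writes/s, only some 600 objects exist, so
`num_ops` is 600; at, say, 1200 reads/s the read phase then ends after
600 / 1200 ≈ 0.5 s instead of the configured 30 s `read_time`.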
Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
The corresponding change in the cbt workload configuration:
```diff
 pool_profile: 'replicated'
 read_time: 30
 read_only: true
+readmode: 'rand'
 prefill_time: 3
 acceptable:
   bandwidth: '(or (greater) (near 0.05))'
```