The small segments and small segment limit
were used for the old hacky flush, which
issued IO and waited: now that the explicit
'flush journal' asok command is in use, we
can use a normal journal configuration.
Signed-off-by: John Spray <john.spray@redhat.com>
     # trim log segment as fast as possible
     self.set_conf('mds', 'mds cache size', 100)
-    self.set_conf('mds', 'mds log max segments', 2)
-    self.set_conf('mds', 'mds log events per segment', 1)
     self.set_conf('mds', 'mds verify backtrace', 1)
     self.fs.mds_restart()
     self.fs.wait_for_daemons()
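For context, the explicit flush the commit message refers to is driven through the MDS admin socket rather than by issuing IO and waiting for trim. A minimal sketch of building that asok invocation, assuming the `ceph daemon <name> flush journal` CLI form (the helper names here are illustrative, not part of this patch):

```python
import subprocess


def flush_journal_cmd(mds_id):
    # Admin-socket ("asok") command for an explicit MDS journal flush,
    # replacing the old hack of doing IO and waiting for segments to trim.
    return ["ceph", "daemon", "mds.{0}".format(mds_id), "flush", "journal"]


def flush_journal(mds_id):
    # Run the command against a live cluster; raises CalledProcessError
    # if the daemon rejects it.
    return subprocess.check_output(flush_journal_cmd(mds_id))
```

In the test framework itself the equivalent call would go through the filesystem helper's asok wrapper, so only the command list above matters for this change.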