See #7171. In rare cases CRUSH can't handle it when only 2/6 of
the OSDs are marked in. Avoid those situations for now.
Signed-off-by: Sage Weil <sage@inktank.com>
(cherry picked from commit 495f2163a8debe6323b1b67737a47a0b31172f07)
         if self.config.get('powercycle'):
             self.revive_timeout += 120
         self.clean_wait = self.config.get('clean_wait', 0)
-        self.minin = self.config.get("min_in", 2)
+        self.minin = self.config.get("min_in", 3)
         num_osds = self.in_osds + self.out_osds
         self.max_pgs = self.config.get("max_pgs_per_pool_osd", 1200) * num_osds
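The raised default only has an effect because the thrasher stops offering the "mark an OSD out" action once the count of in OSDs reaches the floor. Below is a minimal, self-contained sketch of that gating pattern; the ThrasherSketch class, its method names, and the 6-OSD starting state are illustrative assumptions, not the actual ceph_manager.Thrasher implementation.

import random


class ThrasherSketch:
    """Illustrative stand-in for the OSD thrasher's in/out bookkeeping."""

    def __init__(self, config=None):
        self.config = config or {}
        self.in_osds = [0, 1, 2, 3, 4, 5]  # the 6-OSD case from the commit message
        self.out_osds = []
        # Raised default: thrashing never leaves fewer than 3 OSDs marked in.
        self.minin = self.config.get("min_in", 3)
        self.minout = self.config.get("min_out", 0)

    def choose_action(self):
        # Marking an OSD out is only an option while more than min_in remain in.
        actions = []
        if len(self.in_osds) > self.minin:
            actions.append(self.mark_out_random_osd)
        if len(self.out_osds) > self.minout:
            actions.append(self.mark_in_random_osd)
        return random.choice(actions) if actions else None

    def mark_out_random_osd(self):
        osd = random.choice(self.in_osds)
        self.in_osds.remove(osd)
        self.out_osds.append(osd)

    def mark_in_random_osd(self):
        osd = random.choice(self.out_osds)
        self.out_osds.remove(osd)
        self.in_osds.append(osd)


t = ThrasherSketch()
for _ in range(100):
    action = t.choose_action()
    if action:
        action()
assert len(t.in_osds) >= t.minin  # with min_in=3 the cluster never drops to 2/6 in

With the old default of 2, the same loop could legitimately reach a state where only two of the six OSDs are in, which is the situation #7171 describes CRUSH mishandling.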
     The config is optional, and is a dict containing some or all of:
-    min_in: (default 2) the minimum number of OSDs to keep in the
+    min_in: (default 3) the minimum number of OSDs to keep in the
       cluster
     min_out: (default 0) the minimum number of OSDs to keep out of the