- which features are (or have been) enabled
- how many data pools
- approximate file system age (year + month of creation)
- - how much metadata is being cached per file system
+ - how many files, bytes, and snapshots
+ - how much metadata is being cached
In this revision we have also made the following changes:
# - added device health metrics (i.e., SMART data, minus serial number)
# - removed crush_rule
# - added CephFS metadata (how many MDSs, fs features, how many data pools,
-# how much metadata is cached)
+# how much metadata is cached, rfiles, rbytes, rsnapshots)
# - added more pool metadata (rep vs ec, cache tiering mode, ec profile)
# - added host count, and counts for hosts with each of (mon, osd, mds, mgr)
# - whether an OSD cluster network is in use
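The CephFS metadata listed above ends up as one record per file system in the telemetry report. As a hypothetical illustration (the field names match the code below; all values here are invented):

```python
# Invented example of a per-filesystem telemetry record; field names
# follow the report['fs']['filesystems'] entries built in the code.
fs_report = {
    'max_mds': 1,
    'ever_allowed_features': 0,
    'num_data_pools': 1,
    'standby_count_wanted': 1,
    'approx_ctime': '2019-04',   # creation year + month only
    'files': 12345,              # root rfiles, from the rank-0 MDS
    'bytes': 9876543210,         # root rbytes
    'snaps': 3,                  # root rsnapshots
}
```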
cached_dn = 0
cached_cap = 0
subtrees = 0
+ rfiles = 0
+ rbytes = 0
+ rsnaps = 0
for gid, mds in fs['info'].items():
num_sessions += self.get_latest('mds', mds['name'],
                                'mds_sessions.session_count')
cached_dn += self.get_latest('mds', mds['name'],
                             'mds_mem.dn')
cached_cap += self.get_latest('mds', mds['name'],
                              'mds_mem.cap')
subtrees += self.get_latest('mds', mds['name'],
'mds.subtrees')
+ if mds['rank'] == 0:
+ rfiles = self.get_latest('mds', mds['name'],
+ 'mds.root_rfiles')
+ rbytes = self.get_latest('mds', mds['name'],
+ 'mds.root_rbytes')
+ rsnaps = self.get_latest('mds', mds['name'],
+ 'mds.root_rsnaps')
report['fs']['filesystems'].append({
'max_mds': fs['max_mds'],
'ever_allowed_features': fs['ever_allowed_features'],
'num_data_pools': len(fs['data_pools']),
'standby_count_wanted': fs['standby_count_wanted'],
'approx_ctime': fs['created'][0:7],
+ 'files': rfiles,
+ 'bytes': rbytes,
+ 'snaps': rsnaps,
})
num_mds += len(fs['info'])
report['fs']['total_num_mds'] = num_mds
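The per-MDS aggregation above can be sketched in isolation. This is a minimal stand-alone illustration, not the module's code: `get_latest()` is stubbed out with a dict of invented perf-counter values, and only two counters are shown. The point it demonstrates is that per-daemon counters (like session counts) are summed across all ranks, while the recursive root stats are filesystem-wide and therefore read from the rank-0 MDS only.

```python
# Invented perf-counter values, keyed by "<service_type>.<service_name>".
perf = {
    'mds.a': {'mds_sessions.session_count': 5, 'mds.root_rfiles': 1000},
    'mds.b': {'mds_sessions.session_count': 3, 'mds.root_rfiles': 0},
}

def get_latest(svc_type, svc_name, counter):
    # Stand-in for the mgr module's get_latest(); unknown counters read as 0.
    return perf.get('%s.%s' % (svc_type, svc_name), {}).get(counter, 0)

# One entry per active MDS, analogous to fs['info'].values().
fs_info = [
    {'name': 'a', 'rank': 0},
    {'name': 'b', 'rank': 1},
]

num_sessions = 0
rfiles = 0
for mds in fs_info:
    # Session counts are per-daemon, so sum them across every rank...
    num_sessions += get_latest('mds', mds['name'],
                               'mds_sessions.session_count')
    # ...but root rfiles is a filesystem-wide recursive stat: take it
    # from rank 0 alone rather than summing it per daemon.
    if mds['rank'] == 0:
        rfiles = get_latest('mds', mds['name'], 'mds.root_rfiles')

print(num_sessions, rfiles)  # 8 1000
```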