From: Rick Chen
Date: Fri, 14 Sep 2018 17:42:27 +0000 (-0500)
Subject: mgr/diskprediction: add prototype diskprediction module
X-Git-Tag: v14.0.1~270^2
X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=4abb79f1591088f914aa6312597d38d9952e40ee;p=ceph-ci.git

mgr/diskprediction: add prototype diskprediction module

This module is written by Rick Chen and provides both a built-in local
predictor and a cloud mode that queries a cloud service (provided by
ProphetStor) to predict device failures.

Signed-off-by: Rick Chen
Signed-off-by: Sage Weil
---

diff --git a/COPYING b/COPYING
index cd45ce086ab..d8afa9d28ef 100644
--- a/COPYING
+++ b/COPYING
@@ -145,3 +145,8 @@ Files: src/include/timegm.h
 Copyright (C) Copyright Howard Hinnant
 Copyright (C) Copyright 2010-2011 Vicente J. Botet Escriba
 License: Boost Software License, Version 1.0
+
+Files: src/pybind/mgr/diskprediction/predictor/models/*
+Copyright: None
+License: Public domain
+
diff --git a/doc/mgr/diskprediction.rst b/doc/mgr/diskprediction.rst
new file mode 100644
index 00000000000..7c25cfbb9e4
--- /dev/null
+++ b/doc/mgr/diskprediction.rst
@@ -0,0 +1,345 @@
+=====================
+DISKPREDICTION PLUGIN
+=====================
+
+The *diskprediction* plugin supports two modes: cloud mode and local mode. In
+cloud mode, disk and Ceph operating status information is collected from the
+Ceph cluster and sent to a cloud-based DiskPrediction server over the
+Internet. The DiskPrediction server analyzes the data and provides analytics
+and prediction results for the performance and disk health states of Ceph
+clusters.
+
+Local mode doesn't require an external server for data analysis. In local
+mode, the *diskprediction* plugin uses an internal predictor module for the
+disk prediction service and returns the prediction results to the Ceph
+system.
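[Editor's note, not part of the patch] The *near_failure* states and the JSON returned by ``ceph device get-predicted-status`` (both documented later in this file) are easy to consume from client-side code. Below is a minimal sketch in plain Python; the ``LIFE_EXPECTANCY`` mapping and the ``summarize_prediction`` helper are illustrative names invented here, and the life-expectancy strings come from the table later in this document.

```python
import json

# Life-expectancy buckets for the near_failure state, taken from the
# documentation's table (Good > 6 weeks, Warning 2-6 weeks, Bad < 2 weeks).
# "Unknown" has no documented range, so it maps to a placeholder.
LIFE_EXPECTANCY = {
    'Good': '> 6 weeks',
    'Warning': '2 weeks ~ 6 weeks',
    'Bad': '< 2 weeks',
    'Unknown': 'unknown',
}


def summarize_prediction(raw_json):
    """Return a one-line summary for a get-predicted-status result."""
    result = json.loads(raw_json)
    state = result.get('near_failure', 'Unknown')
    return '%s (%s): %s' % (result.get('attachment', '?'),
                            state,
                            LIFE_EXPECTANCY.get(state, 'unknown'))


# Sample output as documented later in this file.
sample = '''{
    "near_failure": "Good",
    "disk_wwn": "5000011111111111",
    "serial_number": "111111111",
    "predicted": "2018-05-30 18:33:12",
    "attachment": "sdb"
}'''
print(summarize_prediction(sample))  # sdb (Good): > 6 weeks
```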
+
+Enabling
+========
+
+Run the following command to enable the *diskprediction* module in the Ceph
+environment:
+
+::
+
+    ceph mgr module enable diskprediction
+
+
+Select the prediction mode:
+
+::
+
+    ceph device set-prediction-mode
+
+
+Connection settings
+===================
+The connection settings are used for the connection between Ceph and the
+DiskPrediction server.
+
+Local Mode
+----------
+
+The *diskprediction* plugin leverages the Ceph device health check to collect
+disk health metrics and uses an internal predictor module to produce disk
+failure predictions, which are returned to Ceph. Thus, no connection settings
+are required in local mode. The local predictor module requires at least six
+datasets of device health metrics to make a prediction.
+
+Run the following command to use the local predictor to predict device life
+expectancy:
+
+::
+
+    ceph device predict-life-expectancy
+
+
+Cloud Mode
+----------
+
+User registration is required in cloud mode. Users have to sign up for an
+account at https://www.diskprophet.com/#/ to receive the following
+DiskPrediction server information for the connection settings.
+
+**Certificate file path**: After user registration is confirmed, the system
+will send a confirmation email including a certificate file download link.
+Download the certificate file and save it to the Ceph system. Run the
+following command to verify the file. Without certificate file verification,
+the connection settings cannot be completed.
+
+**DiskPrediction server**: The DiskPrediction server name. It could be an IP
+address if required.
+
+**Connection account**: An account name used to set up the connection between
+Ceph and the DiskPrediction server.
+
+**Connection password**: The password used to set up the connection between
+Ceph and the DiskPrediction server.
+
+Run the following command to complete the connection setup.
+
+::
+
+    ceph device set-cloud-prediction-config
+
+
+You can use the following command to display the connection settings:
+
+::
+
+    ceph device show-prediction-config
+
+
+Additional optional configuration settings are the following:
+
+:diskprediction_upload_metrics_interval: The interval at which Ceph
+    performance metrics are sent to the DiskPrediction server. Default is
+    10 minutes.
+:diskprediction_upload_smart_interval: The interval at which Ceph physical
+    device information is sent to the DiskPrediction server. Default is 12
+    hours.
+:diskprediction_retrieve_prediction_interval: The interval at which Ceph
+    retrieves physical device prediction data from the DiskPrediction
+    server. Default is 12 hours.
+
+
+
+Diskprediction Data
+===================
+
+The *diskprediction* plugin actively sends/retrieves the following data
+to/from the DiskPrediction server.
+
+
+Metrics Data
+-------------
+- Ceph cluster status
+
++----------------------+-----------------------------------------+
+|key                   |Description                              |
++======================+=========================================+
+|cluster_health        |Ceph health check status                 |
++----------------------+-----------------------------------------+
+|num_mon               |Number of monitor nodes                  |
++----------------------+-----------------------------------------+
+|num_mon_quorum        |Number of monitors in quorum             |
++----------------------+-----------------------------------------+
+|num_osd               |Total number of OSDs                     |
++----------------------+-----------------------------------------+
+|num_osd_up            |Number of OSDs that are up               |
++----------------------+-----------------------------------------+
+|num_osd_in            |Number of OSDs that are in cluster       |
++----------------------+-----------------------------------------+
+|osd_epoch             |Current epoch of OSD map                 |
++----------------------+-----------------------------------------+
+|osd_bytes             |Total capacity of cluster in bytes       |
++----------------------+-----------------------------------------+
+|osd_bytes_used        |Number of used bytes on cluster          |
++----------------------+-----------------------------------------+
+|osd_bytes_avail       |Number of available bytes on cluster     |
++----------------------+-----------------------------------------+
+|num_pool              |Number of pools                          |
++----------------------+-----------------------------------------+
+|num_pg                |Total number of placement groups         |
++----------------------+-----------------------------------------+
+|num_pg_active_clean   |Number of placement groups in            |
+|                      |active+clean state                       |
++----------------------+-----------------------------------------+
+|num_pg_active         |Number of placement groups in active     |
+|                      |state                                    |
++----------------------+-----------------------------------------+
+|num_pg_peering        |Number of placement groups in peering    |
+|                      |state                                    |
++----------------------+-----------------------------------------+
+|num_object            |Total number of objects on cluster       |
++----------------------+-----------------------------------------+
+|num_object_degraded   |Number of degraded (missing replicas)    |
+|                      |objects                                  |
++----------------------+-----------------------------------------+
+|num_object_misplaced  |Number of misplaced (wrong location in   |
+|                      |the cluster) objects                     |
++----------------------+-----------------------------------------+
+|num_object_unfound    |Number of unfound objects                |
++----------------------+-----------------------------------------+
+|num_bytes             |Total number of bytes of all objects     |
++----------------------+-----------------------------------------+
+|num_mds_up            |Number of MDSs that are up               |
++----------------------+-----------------------------------------+
+|num_mds_in            |Number of MDSs that are in cluster       |
++----------------------+-----------------------------------------+
+|num_mds_failed        |Number of failed MDSs                    |
++----------------------+-----------------------------------------+
+|mds_epoch             |Current epoch of MDS map                 |
++----------------------+-----------------------------------------+
+
+
+- Ceph mon/osd performance counts
+
+Mon:
+
++----------------------+-----------------------------------------+
+|key                   |Description                              |
++======================+=========================================+
+|num_sessions          |Current number of open monitor sessions  |
++----------------------+-----------------------------------------+
+|session_add           |Number of created monitor sessions       |
++----------------------+-----------------------------------------+
+|session_rm            |Number of remove_session calls in monitor|
++----------------------+-----------------------------------------+
+|session_trim          |Number of trimmed monitor sessions       |
++----------------------+-----------------------------------------+
+|num_elections         |Number of elections monitor took part in |
++----------------------+-----------------------------------------+
+|election_call         |Number of elections started by monitor   |
++----------------------+-----------------------------------------+
+|election_win          |Number of elections won by monitor       |
++----------------------+-----------------------------------------+
+|election_lose         |Number of elections lost by monitor      |
++----------------------+-----------------------------------------+
+
+Osd:
+
++----------------------+-----------------------------------------+
+|key                   |Description                              |
++======================+=========================================+
+|op_wip                |Replication operations currently being   |
+|                      |processed (primary)                      |
++----------------------+-----------------------------------------+
+|op_in_bytes           |Client operations total write size       |
++----------------------+-----------------------------------------+
+|op_r                  |Client read operations                   |
++----------------------+-----------------------------------------+
+|op_out_bytes          |Client operations total read size        |
++----------------------+-----------------------------------------+
+|op_w                  |Client write operations                  |
++----------------------+-----------------------------------------+
+|op_latency            |Latency of client operations (including  |
+|                      |queue time)                              |
++----------------------+-----------------------------------------+
+|op_process_latency    |Latency of client operations (excluding  |
+|                      |queue time)                              |
++----------------------+-----------------------------------------+
+|op_r_latency          |Latency of read operation (including     |
+|                      |queue time)                              |
++----------------------+-----------------------------------------+
+|op_r_process_latency  |Latency of read operation (excluding     |
+|                      |queue time)                              |
++----------------------+-----------------------------------------+
+|op_w_in_bytes         |Client data written                      |
++----------------------+-----------------------------------------+
+|op_w_latency          |Latency of write operation (including    |
+|                      |queue time)                              |
++----------------------+-----------------------------------------+
+|op_w_process_latency  |Latency of write operation (excluding    |
+|                      |queue time)                              |
++----------------------+-----------------------------------------+
+|op_rw                 |Client read-modify-write operations      |
++----------------------+-----------------------------------------+
+|op_rw_in_bytes        |Client read-modify-write operations write|
+|                      |in                                       |
++----------------------+-----------------------------------------+
+|op_rw_out_bytes       |Client read-modify-write operations read |
+|                      |out                                      |
++----------------------+-----------------------------------------+
+|op_rw_latency         |Latency of read-modify-write operation   |
+|                      |(including queue time)                   |
++----------------------+-----------------------------------------+
+|op_rw_process_latency |Latency of read-modify-write operation   |
+|                      |(excluding queue time)                   |
++----------------------+-----------------------------------------+
+
+
+- Ceph pool statistics
+
++----------------------+-----------------------------------------+
+|key                   |Description                              |
++======================+=========================================+
+|bytes_used            |Per pool bytes used                      |
++----------------------+-----------------------------------------+
+|max_avail             |Max available number of bytes in the pool|
++----------------------+-----------------------------------------+
+|objects               |Number of objects in the pool            |
++----------------------+-----------------------------------------+
+|wr_bytes              |Number of bytes written in the pool      |
++----------------------+-----------------------------------------+
+|dirty                 |Number of bytes dirty in the pool        |
++----------------------+-----------------------------------------+
+|rd_bytes              |Number of bytes read in the pool         |
++----------------------+-----------------------------------------+
+|raw_bytes_used        |Bytes used in pool including copies made |
++----------------------+-----------------------------------------+
+
+- Ceph physical device metadata
+
++----------------------+-----------------------------------------+
+|key                   |Description                              |
++======================+=========================================+
+|disk_domain_id        |Physical device identifier               |
++----------------------+-----------------------------------------+
+|disk_name             |Device attachment name                   |
++----------------------+-----------------------------------------+
+|disk_wwn              |Device WWN                               |
++----------------------+-----------------------------------------+
+|model                 |Device model name                        |
++----------------------+-----------------------------------------+
+|serial_number         |Device serial number                     |
++----------------------+-----------------------------------------+
+|size                  |Device size                              |
++----------------------+-----------------------------------------+
+|vendor                |Device vendor name                       |
++----------------------+-----------------------------------------+
+
+- Ceph object correlation information
+- The plugin agent information
+- The plugin agent cluster information
+- The plugin agent host information
+
+
+SMART Data
+-----------
+- Ceph physical device SMART data (provided by the Ceph *devicehealth*
+  plugin)
+
+
+Prediction Data
+----------------
+- Ceph physical device prediction data
+
+
+Receiving predicted health status from a Ceph OSD disk drive
+============================================================
+
+You can receive the predicted health status of a Ceph OSD disk drive by using
+the following command:
+
+::
+
+    ceph device get-predicted-status
+
+
+The get-predicted-status command returns:
+
+
+::
+
+    {
+        "near_failure": "Good",
+        "disk_wwn": "5000011111111111",
+        "serial_number": "111111111",
+        "predicted": "2018-05-30 18:33:12",
+        "attachment": "sdb"
+    }
+
+
++--------------------+-----------------------------------------------------+
+|Attribute           | Description                                         |
++====================+=====================================================+
+|near_failure        | The disk failure prediction state:                  |
+|                    | Good/Warning/Bad/Unknown                            |
++--------------------+-----------------------------------------------------+
+|disk_wwn            | Disk WWN number                                     |
++--------------------+-----------------------------------------------------+
+|serial_number       | Disk serial number                                  |
++--------------------+-----------------------------------------------------+
+|predicted           | Predicted date                                      |
++--------------------+-----------------------------------------------------+
+|attachment          | Device name on the local system                     |
++--------------------+-----------------------------------------------------+
+
+The *near_failure* attribute indicates the estimated disk life expectancy,
+as shown in the following table.
+
++--------------------+-----------------------------------------------------+
+|near_failure        | Life expectancy (weeks)                             |
++====================+=====================================================+
+|Good                | > 6 weeks                                           |
++--------------------+-----------------------------------------------------+
+|Warning             | 2 weeks ~ 6 weeks                                   |
++--------------------+-----------------------------------------------------+
+|Bad                 | < 2 weeks                                           |
++--------------------+-----------------------------------------------------+
+
+
+Debugging
+=========
+
+To map the DiskPrediction module's debug output to the Ceph logging level,
+use the following configuration:
+
+::
+
+    [mgr]
+
+        debug mgr = 20
+
+With logging set to debug for the manager, the plugin will print out logging
+messages with the prefix *mgr[diskprediction]* for easy filtering.
+
diff --git a/doc/mgr/index.rst b/doc/mgr/index.rst
index 8b3487dd807..baf225598f1 100644
--- a/doc/mgr/index.rst
+++ b/doc/mgr/index.rst
@@ -29,6 +29,7 @@ sensible.
     Writing plugins
     Writing orchestrator plugins
     Dashboard plugin
+    DiskPrediction plugin
     Local pool plugin
     RESTful plugin
     Zabbix plugin
diff --git a/qa/tasks/mgr/test_module_selftest.py b/qa/tasks/mgr/test_module_selftest.py
index 30968118f82..fa636d05ea3 100644
--- a/qa/tasks/mgr/test_module_selftest.py
+++ b/qa/tasks/mgr/test_module_selftest.py
@@ -47,6 +47,9 @@ class TestModuleSelftest(MgrTestCase):
     def test_influx(self):
         self._selftest_plugin("influx")
 
+    def test_diskprediction(self):
+        self._selftest_plugin("diskprediction")
+
     def test_telegraf(self):
         self._selftest_plugin("telegraf")
diff --git a/src/pybind/mgr/diskprediction/__init__.py b/src/pybind/mgr/diskprediction/__init__.py
new file mode 100644
index 00000000000..e65bbfbc867
--- /dev/null
+++ b/src/pybind/mgr/diskprediction/__init__.py
@@ -0,0 +1,2 @@
+from __future__ import absolute_import
+from .module import Module
diff --git a/src/pybind/mgr/diskprediction/agent/__init__.py b/src/pybind/mgr/diskprediction/agent/__init__.py
new file mode 100644
index 00000000000..64a456fa8e1
--- /dev/null
+++ b/src/pybind/mgr/diskprediction/agent/__init__.py
@@ -0,0 +1,38 @@
+from __future__ import absolute_import
+
+from ..common import timeout, TimeoutError
+
+
+class BaseAgent(object):
+
+    measurement = ''
+
+    def __init__(self, mgr_module, obj_sender, timeout=30):
+        self.data = []
+        self._client = None
+        self._client = obj_sender
+        self._logger = mgr_module.log
+        self._module_inst = mgr_module
+        self._timeout = timeout
+
+    def run(self):
+        try:
+            self._collect_data()
+            self._run()
+        except TimeoutError:
+            self._logger.error('{} failed to execute {} task'.format(
+                __name__, self.measurement))
+
+    def __nonzero__(self):
+        if not self._module_inst and not self._client:
+            return False
+        else:
+            return True
+
+    @timeout()
+    def _run(self):
+        pass
+
+    @timeout()
+    def _collect_data(self):
+        pass
diff --git a/src/pybind/mgr/diskprediction/agent/metrics/__init__.py b/src/pybind/mgr/diskprediction/agent/metrics/__init__.py
new file mode 100644
index 00000000000..57fbfd5d2eb
--- /dev/null
+++ b/src/pybind/mgr/diskprediction/agent/metrics/__init__.py
@@ -0,0 +1,61 @@
+from __future__ import absolute_import
+
+from .. import BaseAgent
+from ...common import DP_MGR_STAT_FAILED, DP_MGR_STAT_WARNING, DP_MGR_STAT_OK
+
+AGENT_VERSION = '1.0.0'
+
+
+class MetricsField(object):
+    def __init__(self):
+        self.tags = {}
+        self.fields = {}
+        self.timestamp = None
+
+    def __str__(self):
+        return str({
+            'tags': self.tags,
+            'fields': self.fields,
+            'timestamp': self.timestamp
+        })
+
+
+class MetricsAgent(BaseAgent):
+
+    def log_summary(self, status_info):
+        try:
+            if status_info:
+                measurement = status_info['measurement']
+                success_count = status_info['success_count']
+                failure_count = status_info['failure_count']
+                total_count = success_count + failure_count
+                display_string = \
+                    '%s agent stats in total count: %s, success count: %s, failure count: %s.'
+                self._logger.info(
+                    display_string % (measurement, total_count, success_count, failure_count)
+                )
+        except Exception as e:
+            self._logger.error(str(e))
+
+    def _run(self):
+        collect_data = self.data
+        result = {}
+        if collect_data:
+            status_info = self._client.send_info(collect_data, self.measurement)
+            # show summary info
+            self.log_summary(status_info)
+            # write sub_agent buffer
+            total_count = status_info['success_count'] + status_info['failure_count']
+            if total_count:
+                if status_info['success_count'] == 0:
+                    self._module_inst.status = \
+                        {'status': DP_MGR_STAT_FAILED,
+                         'reason': 'failed to send metrics data to the server'}
+                elif status_info['failure_count'] == 0:
+                    self._module_inst.status = \
+                        {'status': DP_MGR_STAT_OK}
+                else:
+                    self._module_inst.status = \
+                        {'status': DP_MGR_STAT_WARNING,
+                         'reason': 'failed to send partial metrics data to the server'}
+        return result
diff --git a/src/pybind/mgr/diskprediction/agent/metrics/ceph_cluster.py b/src/pybind/mgr/diskprediction/agent/metrics/ceph_cluster.py
new file mode 100644
index 00000000000..d49b063b247
--- /dev/null
+++ b/src/pybind/mgr/diskprediction/agent/metrics/ceph_cluster.py
@@ -0,0 +1,146 @@
+from __future__ import absolute_import
+
+import socket
+
+from . import MetricsAgent, MetricsField
+from ...common.clusterdata import ClusterAPI
+
+
+class CephCluster(MetricsField):
+    """ Ceph cluster structure """
+    measurement = 'ceph_cluster'
+
+    def __init__(self):
+        super(CephCluster, self).__init__()
+        self.tags['cluster_id'] = None
+        self.fields['agenthost'] = None
+        self.tags['agenthost_domain_id'] = None
+        self.fields['cluster_health'] = ''
+        self.fields['num_mon'] = None
+        self.fields['num_mon_quorum'] = None
+        self.fields['num_osd'] = None
+        self.fields['num_osd_up'] = None
+        self.fields['num_osd_in'] = None
+        self.fields['osd_epoch'] = None
+        self.fields['osd_bytes'] = None
+        self.fields['osd_bytes_used'] = None
+        self.fields['osd_bytes_avail'] = None
+        self.fields['num_pool'] = None
+        self.fields['num_pg'] = None
+        self.fields['num_pg_active_clean'] = None
+        self.fields['num_pg_active'] = None
+        self.fields['num_pg_peering'] = None
+        self.fields['num_object'] = None
+        self.fields['num_object_degraded'] = None
+        self.fields['num_object_misplaced'] = None
+        self.fields['num_object_unfound'] = None
+        self.fields['num_bytes'] = None
+        self.fields['num_mds_up'] = None
+        self.fields['num_mds_in'] = None
+        self.fields['num_mds_failed'] = None
+        self.fields['mds_epoch'] = None
+
+
+class CephClusterAgent(MetricsAgent):
+    measurement = 'ceph_cluster'
+
+    def _collect_data(self):
+        # process data and save to 'self.data'
+        obj_api = ClusterAPI(self._module_inst)
+        cluster_id = obj_api.get_cluster_id()
+
+        c_data = CephCluster()
+        cluster_state = obj_api.get_health_status()
+        c_data.tags['cluster_id'] = cluster_id
+        c_data.fields['cluster_health'] = str(cluster_state)
+        c_data.fields['agenthost'] = socket.gethostname()
+        c_data.tags['agenthost_domain_id'] = \
+            '%s_%s' % (cluster_id, c_data.fields['agenthost'])
+        c_data.fields['osd_epoch'] = obj_api.get_osd_epoch()
+        c_data.fields['num_mon'] = len(obj_api.get_mons())
+        c_data.fields['num_mon_quorum'] = \
+            len(obj_api.get_mon_status().get('quorum', []))
+
+        osds = obj_api.get_osds()
+        num_osd_up = 0
+        num_osd_in = 0
+        for osd_data in osds:
+            if osd_data.get('up'):
+                num_osd_up = num_osd_up + 1
+            if osd_data.get('in'):
+                num_osd_in = num_osd_in + 1
+        if osds:
+            c_data.fields['num_osd'] = len(osds)
+        else:
+            c_data.fields['num_osd'] = 0
+        c_data.fields['num_osd_up'] = num_osd_up
+        c_data.fields['num_osd_in'] = num_osd_in
+        c_data.fields['num_pool'] = len(obj_api.get_osd_pools())
+
+        df_stats = obj_api.get_df_stats()
+        total_bytes = df_stats.get('total_bytes', 0)
+        total_used_bytes = df_stats.get('total_used_bytes', 0)
+        total_avail_bytes = df_stats.get('total_avail_bytes', 0)
+        c_data.fields['osd_bytes'] = total_bytes
+        c_data.fields['osd_bytes_used'] = total_used_bytes
+        c_data.fields['osd_bytes_avail'] = total_avail_bytes
+        if total_bytes and total_avail_bytes:
+            c_data.fields['osd_bytes_used_percentage'] = \
+                round(float(total_used_bytes) / float(total_bytes) * 100, 4)
+        else:
+            c_data.fields['osd_bytes_used_percentage'] = 0.0000
+
+        pg_stats = obj_api.get_pg_stats()
+        num_bytes = 0
+        num_object = 0
+        num_object_degraded = 0
+        num_object_misplaced = 0
+        num_object_unfound = 0
+        num_pg_active = 0
+        num_pg_active_clean = 0
+        num_pg_peering = 0
+        for pg_data in pg_stats:
+            num_pg_active = num_pg_active + len(pg_data.get('acting'))
+            if 'active+clean' in pg_data.get('state'):
+                num_pg_active_clean = num_pg_active_clean + 1
+            if 'peering' in pg_data.get('state'):
+                num_pg_peering = num_pg_peering + 1
+
+            stat_sum = pg_data.get('stat_sum', {})
+            num_object = num_object + stat_sum.get('num_objects', 0)
+            num_object_degraded = \
+                num_object_degraded + stat_sum.get('num_objects_degraded', 0)
+            num_object_misplaced = \
+                num_object_misplaced + stat_sum.get('num_objects_misplaced', 0)
+            num_object_unfound = \
+                num_object_unfound + stat_sum.get('num_objects_unfound', 0)
+            num_bytes = num_bytes + stat_sum.get('num_bytes', 0)
+
+        c_data.fields['num_pg'] = len(pg_stats)
+        c_data.fields['num_object'] = num_object
+        c_data.fields['num_object_degraded'] = num_object_degraded
+        c_data.fields['num_object_misplaced'] = num_object_misplaced
+        c_data.fields['num_object_unfound'] = num_object_unfound
+        c_data.fields['num_bytes'] = num_bytes
+        c_data.fields['num_pg_active'] = num_pg_active
+        c_data.fields['num_pg_active_clean'] = num_pg_active_clean
+        c_data.fields['num_pg_peering'] = num_pg_peering
+
+        filesystems = obj_api.get_file_systems()
+        num_mds_in = 0
+        num_mds_up = 0
+        num_mds_failed = 0
+        mds_epoch = 0
+        for fs_data in filesystems:
+            num_mds_in = \
+                num_mds_in + len(fs_data.get('mdsmap', {}).get('in', []))
+            num_mds_up = \
+                num_mds_up + len(fs_data.get('mdsmap', {}).get('up', {}))
+            num_mds_failed = \
+                num_mds_failed + len(fs_data.get('mdsmap', {}).get('failed', []))
+            mds_epoch = mds_epoch + fs_data.get('mdsmap', {}).get('epoch', 0)
+        c_data.fields['num_mds_in'] = num_mds_in
+        c_data.fields['num_mds_up'] = num_mds_up
+        c_data.fields['num_mds_failed'] = num_mds_failed
+        c_data.fields['mds_epoch'] = mds_epoch
+        self.data.append(c_data)
diff --git a/src/pybind/mgr/diskprediction/agent/metrics/ceph_mon_osd.py b/src/pybind/mgr/diskprediction/agent/metrics/ceph_mon_osd.py
new file mode 100644
index 00000000000..0c85fb514cb
--- /dev/null
+++ b/src/pybind/mgr/diskprediction/agent/metrics/ceph_mon_osd.py
@@ -0,0 +1,159 @@
+from __future__ import absolute_import
+
+import socket
+
+from . import MetricsAgent, MetricsField
+from ...common.clusterdata import ClusterAPI
+
+
+class CephMON(MetricsField):
+    """ Ceph monitor structure """
+    measurement = 'ceph_mon'
+
+    def __init__(self):
+        super(CephMON, self).__init__()
+        self.tags['cluster_id'] = None
+        self.tags['mon_id'] = None
+        self.fields['agenthost'] = None
+        self.tags['agenthost_domain_id'] = None
+        self.fields['num_sessions'] = None
+        self.fields['session_add'] = None
+        self.fields['session_rm'] = None
+        self.fields['session_trim'] = None
+        self.fields['num_elections'] = None
+        self.fields['election_call'] = None
+        self.fields['election_win'] = None
+        self.fields['election_lose'] = None
+
+
+class CephOSD(MetricsField):
+    """ Ceph osd structure """
+    measurement = 'ceph_osd'
+
+    def __init__(self):
+        super(CephOSD, self).__init__()
+        self.tags['cluster_id'] = None
+        self.tags['osd_id'] = None
+        self.fields['agenthost'] = None
+        self.tags['agenthost_domain_id'] = None
+        self.tags['host_domain_id'] = None
+        self.fields['op_w'] = None
+        self.fields['op_in_bytes'] = None
+        self.fields['op_r'] = None
+        self.fields['op_out_bytes'] = None
+        self.fields['op_wip'] = None
+        self.fields['op_latency'] = None
+        self.fields['op_process_latency'] = None
+        self.fields['op_r_latency'] = None
+        self.fields['op_r_process_latency'] = None
+        self.fields['op_w_in_bytes'] = None
+        self.fields['op_w_latency'] = None
+        self.fields['op_w_process_latency'] = None
+        self.fields['op_w_prepare_latency'] = None
+        self.fields['op_rw'] = None
+        self.fields['op_rw_in_bytes'] = None
+        self.fields['op_rw_out_bytes'] = None
+        self.fields['op_rw_latency'] = None
+        self.fields['op_rw_process_latency'] = None
+        self.fields['op_rw_prepare_latency'] = None
+        self.fields['op_before_queue_op_lat'] = None
+        self.fields['op_before_dequeue_op_lat'] = None
+
+
+class CephMonOsdAgent(MetricsAgent):
+    measurement = 'ceph_mon_osd'
+
+    # counter types
+    PERFCOUNTER_LONGRUNAVG = 4
+    PERFCOUNTER_COUNTER = 8
+    PERFCOUNTER_HISTOGRAM = 0x10
+    PERFCOUNTER_TYPE_MASK = ~3
+
+    def _stattype_to_str(self, stattype):
+        typeonly = stattype & self.PERFCOUNTER_TYPE_MASK
+        if typeonly == 0:
+            return 'gauge'
+        if typeonly == self.PERFCOUNTER_LONGRUNAVG:
+            # this lie matches the DaemonState decoding: only val, no counts
+            return 'counter'
+        if typeonly == self.PERFCOUNTER_COUNTER:
+            return 'counter'
+        if typeonly == self.PERFCOUNTER_HISTOGRAM:
+            return 'histogram'
+        return ''
+
+    def _generate_osd(self, cluster_id, service_name, perf_counts):
+        obj_api = ClusterAPI(self._module_inst)
+        service_id = service_name[4:]
+        d_osd = CephOSD()
+        stat_bytes = 0
+        stat_bytes_used = 0
+        d_osd.tags['cluster_id'] = cluster_id
+        d_osd.tags['osd_id'] = service_name[4:]
+        d_osd.fields['agenthost'] = socket.gethostname()
+        d_osd.tags['agenthost_domain_id'] = \
+            '%s_%s' % (cluster_id, d_osd.fields['agenthost'])
+        d_osd.tags['host_domain_id'] = \
+            '%s_%s' % (cluster_id,
+                       obj_api.get_osd_hostname(d_osd.tags['osd_id']))
+        for i_key, i_val in perf_counts.iteritems():
+            if i_key[:4] == 'osd.':
+                key_name = i_key[4:]
+            else:
+                key_name = i_key
+            if self._stattype_to_str(i_val['type']) == 'counter':
+                value = obj_api.get_rate('osd', service_id, i_key)
+            else:
+                value = obj_api.get_latest('osd', service_id, i_key)
+            if key_name == 'stat_bytes':
+                stat_bytes = value
+            elif key_name == 'stat_bytes_used':
+                stat_bytes_used = value
+            else:
+                d_osd.fields[key_name] = value
+
+        if stat_bytes and stat_bytes_used:
+            d_osd.fields['stat_bytes_used_percentage'] = \
+                round(float(stat_bytes_used) / float(stat_bytes) * 100, 4)
+        else:
+            d_osd.fields['stat_bytes_used_percentage'] = 0.0000
+        self.data.append(d_osd)
+
+    def _generate_mon(self, cluster_id, service_name, perf_counts):
+        d_mon = CephMON()
+        d_mon.tags['cluster_id'] = cluster_id
+        d_mon.tags['mon_id'] = service_name[4:]
+        d_mon.fields['agenthost'] = socket.gethostname()
+        d_mon.tags['agenthost_domain_id'] = \
+            '%s_%s' % (cluster_id, d_mon.fields['agenthost'])
+        d_mon.fields['num_sessions'] = \
+            perf_counts.get('mon.num_sessions', {}).get('value', 0)
+        d_mon.fields['session_add'] = \
+            perf_counts.get('mon.session_add', {}).get('value', 0)
+        d_mon.fields['session_rm'] = \
+            perf_counts.get('mon.session_rm', {}).get('value', 0)
+        d_mon.fields['session_trim'] = \
+            perf_counts.get('mon.session_trim', {}).get('value', 0)
+        d_mon.fields['num_elections'] = \
+            perf_counts.get('mon.num_elections', {}).get('value', 0)
+        d_mon.fields['election_call'] = \
+            perf_counts.get('mon.election_call', {}).get('value', 0)
+        d_mon.fields['election_win'] = \
+            perf_counts.get('mon.election_win', {}).get('value', 0)
+        d_mon.fields['election_lose'] = \
+            perf_counts.get('mon.election_lose', {}).get('value', 0)
+        self.data.append(d_mon)
+
+    def _collect_data(self):
+        # process data and save to 'self.data'
+        obj_api = ClusterAPI(self._module_inst)
+        perf_data = obj_api.get_all_perf_counters()
+        if not perf_data or not isinstance(perf_data, dict):
+            self._logger.error('unable to get all perf counters')
+            return
+        cluster_id = obj_api.get_cluster_id()
+        for n_name, i_perf in perf_data.iteritems():
+            if n_name[0:3].lower() == 'mon':
+                self._generate_mon(cluster_id, n_name, i_perf)
+            elif n_name[0:3].lower() == 'osd':
+                self._generate_osd(cluster_id, n_name, i_perf)
diff --git a/src/pybind/mgr/diskprediction/agent/metrics/ceph_pool.py b/src/pybind/mgr/diskprediction/agent/metrics/ceph_pool.py
new file mode 100644
index 00000000000..86ee10a21d9
--- /dev/null
+++ b/src/pybind/mgr/diskprediction/agent/metrics/ceph_pool.py
@@ -0,0 +1,58 @@
+from __future__ import absolute_import
+
+import socket
+
+from . import MetricsAgent, MetricsField
+from ...common.clusterdata import ClusterAPI
+
+
+class CephPool(MetricsField):
+    """ Ceph pool structure """
+    measurement = 'ceph_pool'
+
+    def __init__(self):
+        super(CephPool, self).__init__()
+        self.tags['cluster_id'] = None
+        self.tags['pool_id'] = None
+        self.fields['agenthost'] = None
+        self.tags['agenthost_domain_id'] = None
+        self.fields['bytes_used'] = None
+        self.fields['max_avail'] = None
+        self.fields['objects'] = None
+        self.fields['wr_bytes'] = None
+        self.fields['dirty'] = None
+        self.fields['rd_bytes'] = None
+        self.fields['raw_bytes_used'] = None
+
+
+class CephPoolAgent(MetricsAgent):
+    measurement = 'ceph_pool'
+
+    def _collect_data(self):
+        # process data and save to 'self.data'
+        obj_api = ClusterAPI(self._module_inst)
+        df_data = obj_api.get('df')
+        cluster_id = obj_api.get_cluster_id()
+        for pool in df_data.get('pools', []):
+            d_pool = CephPool()
+            p_id = pool.get('id')
+            d_pool.tags['cluster_id'] = cluster_id
+            d_pool.tags['pool_id'] = p_id
+            d_pool.fields['agenthost'] = socket.gethostname()
+            d_pool.tags['agenthost_domain_id'] = \
+                '%s_%s' % (cluster_id, d_pool.fields['agenthost'])
+            d_pool.fields['bytes_used'] = \
+                pool.get('stats', {}).get('bytes_used', 0)
+            d_pool.fields['max_avail'] = \
+                pool.get('stats', {}).get('max_avail', 0)
+            d_pool.fields['objects'] = \
+                pool.get('stats', {}).get('objects', 0)
+            d_pool.fields['wr_bytes'] = \
+                pool.get('stats', {}).get('wr_bytes', 0)
+            d_pool.fields['dirty'] = \
+                pool.get('stats', {}).get('dirty', 0)
+            d_pool.fields['rd_bytes'] = \
+                pool.get('stats', {}).get('rd_bytes', 0)
+            d_pool.fields['raw_bytes_used'] = \
+                pool.get('stats', {}).get('raw_bytes_used', 0)
+            self.data.append(d_pool)
diff --git a/src/pybind/mgr/diskprediction/agent/metrics/db_relay.py b/src/pybind/mgr/diskprediction/agent/metrics/db_relay.py
new file mode 100644
index 00000000000..1d5ca239390
--- /dev/null
+++ b/src/pybind/mgr/diskprediction/agent/metrics/db_relay.py
@@ -0,0 +1,610 @@
+from __future__ import absolute_import + +import socket + +from . import MetricsAgent, MetricsField +from ...common import get_human_readable +from ...common.clusterdata import ClusterAPI +from ...common.cypher import CypherOP, NodeInfo + + +class BaseDP(object): + """ basic diskprediction structure """ + _fields = [] + + def __init__(self, *args, **kwargs): + if len(args) > len(self._fields): + raise TypeError('Expected {} arguments'.format(len(self._fields))) + + for name, value in zip(self._fields, args): + setattr(self, name, value) + + for name in self._fields[len(args):]: + setattr(self, name, kwargs.pop(name)) + + if kwargs: + raise TypeError('Invalid argument(s): {}'.format(','.join(kwargs))) + + +class MGRDpCeph(BaseDP): + _fields = [ + 'fsid', 'health', 'max_osd', 'size', + 'avail_size', 'raw_used', 'raw_used_percent' + ] + + +class MGRDpHost(BaseDP): + _fields = ['fsid', 'host', 'ipaddr'] + + +class MGRDpMon(BaseDP): + _fields = ['fsid', 'host', 'ipaddr'] + + +class MGRDpOsd(BaseDP): + _fields = [ + 'fsid', 'host', '_id', 'uuid', 'up', '_in', 'weight', 'public_addr', + 'cluster_addr', 'state', 'backend_filestore_dev_node', + 'backend_filestore_partition_path', 'ceph_release', 'devices', + 'osd_data', 'osd_journal', 'rotational' + ] + + +class MGRDpMds(BaseDP): + _fields = ['fsid', 'host', 'ipaddr'] + + +class MGRDpPool(BaseDP): + _fields = [ + 'fsid', 'size', 'pool_name', 'pool_id', 'type', 'min_size', + 'pg_num', 'pgp_num', 'created_time', 'used', 'pgids' + ] + + +class MGRDpRBD(BaseDP): + _fields = ['fsid', '_id', 'name', 'pool_name', 'size', 'pgids'] + + +class MGRDpPG(BaseDP): + _fields = [ + 'fsid', 'pgid', 'up_osds', 'acting_osds', 'state', + 'objects', 'degraded', 'misplaced', 'unfound' + ] + + +class MGRDpDisk(BaseDP): + _fields = ['host_domain_id', 'model', 'size'] + + +class DBRelay(MetricsField): + """ DB Relay structure """ + measurement = 'db_relay' + + def __init__(self): + super(DBRelay, self).__init__() + self.fields['agenthost'] = None 
+ self.tags['agenthost_domain_id'] = None + self.tags['dc_tag'] = 'na' + self.tags['host'] = None + self.fields['cmd'] = None + + +class DBRelayAgent(MetricsAgent): + measurement = 'db_relay' + + def __init__(self, *args, **kwargs): + super(DBRelayAgent, self).__init__(*args, **kwargs) + self._cluster_node = self._get_cluster_node() + self._cluster_id = self._cluster_node.domain_id + self._host_nodes = dict() + self._osd_nodes = dict() + + def _get_cluster_node(self): + db = ClusterAPI(self._module_inst) + cluster_id = db.get_cluster_id() + dp_cluster = MGRDpCeph( + fsid=cluster_id, + health=db.get_health_status(), + max_osd=db.get_max_osd(), + size=db.get_global_total_size(), + avail_size=db.get_global_avail_size(), + raw_used=db.get_global_raw_used_size(), + raw_used_percent=db.get_global_raw_used_percent() + ) + cluster_id = db.get_cluster_id() + cluster_name = cluster_id[-12:] + cluster_node = NodeInfo( + label='CephCluster', + domain_id=cluster_id, + name='cluster-{}'.format(cluster_name), + meta=dp_cluster.__dict__ + ) + return cluster_node + + def _cluster_contains_host(self): + cluster_id = self._cluster_id + cluster_node = self._cluster_node + + db = ClusterAPI(self._module_inst) + + hosts = set() + + # Add host from osd + osd_data = db.get_osds() + for _data in osd_data: + osd_id = _data['osd'] + if not _data.get('in'): + continue + osd_addr = _data['public_addr'].split(':')[0] + osd_metadata = db.get_osd_metadata(osd_id) + if osd_metadata: + osd_host = osd_metadata['hostname'] + hosts.add((osd_host, osd_addr)) + + # Add host from mon + mons = db.get_mons() + for _data in mons: + mon_host = _data['name'] + mon_addr = _data['public_addr'].split(':')[0] + if mon_host: + hosts.add((mon_host, mon_addr)) + + # Add host from mds + file_systems = db.get_file_systems() + for _data in file_systems: + mds_info = _data.get('mdsmap').get('info') + for _gid in mds_info: + mds_data = mds_info[_gid] + mds_addr = mds_data.get('addr').split(':')[0] + mds_host = 
mds_data.get('name') + if mds_host: + hosts.add((mds_host, mds_addr)) + + # create node relation + for tp in hosts: + data = DBRelay() + host = tp[0] + self._host_nodes[host] = None + + host_node = NodeInfo( + label='VMHost', + domain_id='{}_{}'.format(cluster_id, host), + name=host, + meta={} + ) + + # add host node relationship + cypher_cmd = CypherOP.add_link( + cluster_node, + host_node, + 'CephClusterContainsHost' + ) + cluster_host = socket.gethostname() + data.fields['agenthost'] = cluster_host + data.tags['agenthost_domain_id'] = \ + str('%s_%s' % (cluster_id, data.fields['agenthost'])) + data.tags['host'] = cluster_host + data.fields['cmd'] = str(cypher_cmd) + self._host_nodes[host] = host_node + self.data.append(data) + + def _host_contains_mon(self): + cluster_id = self._cluster_id + + db = ClusterAPI(self._module_inst) + mons = db.get_mons() + for mon in mons: + mon_name = mon.get('name', '') + mon_addr = mon.get('addr', '').split(':')[0] + for hostname in self._host_nodes: + if hostname != mon_name: + continue + + host_node = self._host_nodes[hostname] + data = DBRelay() + dp_mon = MGRDpMon( + fsid=cluster_id, + host=mon_name, + ipaddr=mon_addr + ) + + # create mon node + mon_node = NodeInfo( + label='CephMon', + domain_id='{}.mon.{}'.format(cluster_id, mon_name), + name=mon_name, + meta=dp_mon.__dict__ + ) + + # add mon node relationship + cypher_cmd = CypherOP.add_link( + host_node, + mon_node, + 'HostContainsMon' + ) + cluster_host = socket.gethostname() + data.fields['agenthost'] = cluster_host + data.tags['agenthost_domain_id'] = \ + str('%s_%s' % (cluster_id, data.fields['agenthost'])) + data.tags['host'] = cluster_host + data.fields['cmd'] = str(cypher_cmd) + self.data.append(data) + + def _host_contains_osd(self): + cluster_id = self._cluster_id + + db = ClusterAPI(self._module_inst) + osd_data = db.get_osd_data() + osd_journal = db.get_osd_journal() + for _data in db.get_osds(): + osd_id = _data['osd'] + osd_uuid = _data['uuid'] + osd_up = 
_data['up'] + osd_in = _data['in'] + if not osd_in: + continue + osd_weight = _data['weight'] + osd_public_addr = _data['public_addr'] + osd_cluster_addr = _data['cluster_addr'] + osd_state = _data['state'] + osd_metadata = db.get_osd_metadata(osd_id) + if osd_metadata: + data = DBRelay() + osd_host = osd_metadata['hostname'] + osd_ceph_version = osd_metadata['ceph_version'] + osd_rotational = osd_metadata['rotational'] + osd_devices = osd_metadata['devices'].split(',') + + # filter 'dm' device. + devices = [] + for devname in osd_devices: + if 'dm' in devname: + continue + devices.append(devname) + + for hostname in self._host_nodes: + if hostname != osd_host: + continue + + self._osd_nodes[str(osd_id)] = None + host_node = self._host_nodes[hostname] + osd_dev_node = None + for dev_node in ['backend_filestore_dev_node', + 'bluestore_bdev_dev_node']: + val = osd_metadata.get(dev_node) + if val and val.lower() != 'unknown': + osd_dev_node = val + break + + osd_dev_path = None + for dev_path in ['backend_filestore_partition_path', + 'bluestore_bdev_partition_path']: + val = osd_metadata.get(dev_path) + if val and val.lower() != 'unknown': + osd_dev_path = val + break + + dp_osd = MGRDpOsd( + fsid=cluster_id, + host=osd_host, + _id=osd_id, + uuid=osd_uuid, + up=osd_up, + _in=osd_in, + weight=osd_weight, + public_addr=osd_public_addr, + cluster_addr=osd_cluster_addr, + state=','.join(osd_state), + backend_filestore_dev_node=osd_dev_node, + backend_filestore_partition_path=osd_dev_path, + ceph_release=osd_ceph_version, + osd_data=osd_data, + osd_journal=osd_journal, + devices=','.join(devices), + rotational=osd_rotational) + + # create osd node + osd_node = NodeInfo( + label='CephOsd', + domain_id='{}.osd.{}'.format(cluster_id, osd_id), + name='OSD.{}'.format(osd_id), + meta=dp_osd.__dict__ + ) + # add osd node relationship + cypher_cmd = CypherOP.add_link( + host_node, + osd_node, + 'HostContainsOsd' + ) + cluster_host = socket.gethostname() + data.fields['agenthost'] 
= cluster_host + data.tags['agenthost_domain_id'] = \ + str('%s_%s' % (cluster_id, data.fields['agenthost'])) + data.tags['host'] = cluster_host + data.fields['cmd'] = str(cypher_cmd) + self._osd_nodes[str(osd_id)] = osd_node + self.data.append(data) + + def _host_contains_mds(self): + cluster_id = self._cluster_id + + db = ClusterAPI(self._module_inst) + file_systems = db.get_file_systems() + + for _data in file_systems: + mds_info = _data.get('mdsmap').get('info') + for _gid in mds_info: + mds_data = mds_info[_gid] + mds_addr = mds_data.get('addr').split(':')[0] + mds_host = mds_data.get('name') + mds_gid = mds_data.get('gid') + + for hostname in self._host_nodes: + if hostname != mds_host: + continue + + data = DBRelay() + host_node = self._host_nodes[hostname] + dp_mds = MGRDpMds( + fsid=cluster_id, + host=mds_host, + ipaddr=mds_addr + ) + + # create mds node + mds_node = NodeInfo( + label='CephMds', + domain_id='{}.mds.{}'.format(cluster_id, mds_gid), + name='MDS.{}'.format(mds_gid), + meta=dp_mds.__dict__ + ) + # add mds node relationship + cypher_cmd = CypherOP.add_link( + host_node, + mds_node, + 'HostContainsMds' + ) + cluster_host = socket.gethostname() + data.fields['agenthost'] = cluster_host + data.tags['agenthost_domain_id'] = \ + str('%s_%s' % (cluster_id, data.fields['agenthost'])) + data.tags['host'] = cluster_host + data.fields['cmd'] = str(cypher_cmd) + self.data.append(data) + + def _osd_contains_pg(self): + cluster_id = self._cluster_id + db = ClusterAPI(self._module_inst) + + pg_stats = db.get_pg_stats() + for osd_data in db.get_osds(): + osd_id = osd_data['osd'] + if not osd_data.get('in'): + continue + for _data in pg_stats: + state = _data.get('state') + up = _data.get('up') + acting = _data.get('acting') + pgid = _data.get('pgid') + stat_sum = _data.get('stat_sum', {}) + num_objects = stat_sum.get('num_objects') + num_objects_degraded = stat_sum.get('num_objects_degraded') + num_objects_misplaced = stat_sum.get('num_objects_misplaced') + 
num_objects_unfound = stat_sum.get('num_objects_unfound') + if osd_id in up: + if str(osd_id) not in self._osd_nodes: + continue + osd_node = self._osd_nodes[str(osd_id)] + data = DBRelay() + dp_pg = MGRDpPG( + fsid=cluster_id, + pgid=pgid, + up_osds=','.join(str(x) for x in up), + acting_osds=','.join(str(x) for x in acting), + state=state, + objects=num_objects, + degraded=num_objects_degraded, + misplaced=num_objects_misplaced, + unfound=num_objects_unfound + ) + + # create pg node + pg_node = NodeInfo( + label='CephPG', + domain_id='{}.pg.{}'.format(cluster_id, pgid), + name='PG.{}'.format(pgid), + meta=dp_pg.__dict__ + ) + + # add pg node relationship + cypher_cmd = CypherOP.add_link( + osd_node, + pg_node, + 'OsdContainsPg' + ) + cluster_host = socket.gethostname() + data.fields['agenthost'] = cluster_host + data.tags['agenthost_domain_id'] = \ + str('%s_%s' % (cluster_id, data.fields['agenthost'])) + data.tags['host'] = cluster_host + data.fields['cmd'] = str(cypher_cmd) + self.data.append(data) + + def _osd_contains_disk(self): + cluster_id = self._cluster_id + db = ClusterAPI(self._module_inst) + + osd_metadata = db.get_osd_metadata() + for osd_id in osd_metadata: + osds_smart = db.get_osd_smart(osd_id) + if not osds_smart: + continue + + if str(osd_id) not in self._osd_nodes: + continue + + hostname = db.get_osd_hostname(osd_id) + osd_node = self._osd_nodes[str(osd_id)] + for dev_name, s_val in osds_smart.iteritems(): + data = DBRelay() + disk_domain_id = str(dev_name) + try: + if isinstance(s_val.get('user_capacity'), dict): + user_capacity = \ + s_val['user_capacity'].get('bytes', {}).get('n', 0) + else: + user_capacity = s_val.get('user_capacity', 0) + except ValueError: + user_capacity = 0 + dp_disk = MGRDpDisk( + host_domain_id='{}_{}'.format(cluster_id, hostname), + model=s_val.get('model_name', ''), + size=get_human_readable( + int(user_capacity), 0) + ) + + # create disk node + disk_node = NodeInfo( + label='VMDisk', + domain_id=disk_domain_id, + 
name=dev_name, + meta=dp_disk.__dict__ + ) + + # add disk node relationship + cypher_cmd = CypherOP.add_link( + osd_node, + disk_node, + 'DiskOfOsd' + ) + cluster_host = socket.gethostname() + data.fields['agenthost'] = cluster_host + data.tags['agenthost_domain_id'] = \ + str('%s_%s' % (cluster_id, data.fields['agenthost'])) + data.tags['host'] = cluster_host + data.fields['cmd'] = str(cypher_cmd) + self.data.append(data) + + # host node and disk node relationship + data = DBRelay() + host_node = NodeInfo( + label='VMHost', + domain_id='{}_{}'.format(cluster_id, hostname), + name=hostname, + meta={} + ) + + # add host-disk node relationship + cypher_cmd = CypherOP.add_link( + host_node, + disk_node, + 'VmHostContainsVmDisk' + ) + data.fields['agenthost'] = cluster_host + data.tags['agenthost_domain_id'] = \ + str('%s_%s' % (cluster_id, data.fields['agenthost'])) + data.tags['host'] = cluster_host + data.fields['cmd'] = str(cypher_cmd) + self.data.append(data) + + def _rbd_contains_pg(self): + cluster_id = self._cluster_id + db = ClusterAPI(self._module_inst) + + pg_stats = db.get_pg_stats() + pools = db.get_osd_pools() + for pool_data in pools: + pool_name = pool_data.get('pool_name') + rbd_list = db.get_rbd_list(pool_name=pool_name) + for rbd_data in rbd_list: + image_name = rbd_data.get('name') + # Sometimes get stuck on query rbd objects info + rbd_info = db.get_rbd_info(pool_name, image_name) + rbd_id = rbd_info.get('id') + rbd_size = rbd_info.get('size') + rbd_pgids = rbd_info.get('pgs', []) + + pgids = [] + for _data in rbd_pgids: + pgid = _data.get('pgid') + if pgid: + pgids.append(pgid) + + # RBD info + dp_rbd = MGRDpRBD( + fsid=cluster_id, + _id=rbd_id, + name=image_name, + pool_name=pool_name, + size=rbd_size, + pgids=','.join(pgids) + ) + + # create rbd node + rbd_node = NodeInfo( + label='CephRBD', + domain_id='{}.rbd.{}'.format(cluster_id, image_name), + name=image_name, + meta=dp_rbd.__dict__ + ) + + for _data in pg_stats: + pgid = 
_data.get('pgid') + if pgid not in pgids: + continue + + state = _data.get('state') + up = _data.get('up') + acting = _data.get('acting') + stat_sum = _data.get('stat_sum', {}) + num_objects = stat_sum.get('num_objects') + num_objects_degraded = stat_sum.get('num_objects_degraded') + num_objects_misplaced = stat_sum.get('num_objects_misplaced') + num_objects_unfound = stat_sum.get('num_objects_unfound') + + data = DBRelay() + dp_pg = MGRDpPG( + fsid=cluster_id, + pgid=pgid, + up_osds=','.join(str(x) for x in up), + acting_osds=','.join(str(x) for x in acting), + state=state, + objects=num_objects, + degraded=num_objects_degraded, + misplaced=num_objects_misplaced, + unfound=num_objects_unfound + ) + + # create pg node + pg_node = NodeInfo( + label='CephPG', + domain_id='{}.pg.{}'.format(cluster_id, pgid), + name='PG.{}'.format(pgid), + meta=dp_pg.__dict__ + ) + + # add rbd node relationship + cypher_cmd = CypherOP.add_link( + rbd_node, + pg_node, + 'RbdContainsPg' + ) + cluster_host = socket.gethostname() + data.fields['agenthost'] = cluster_host + data.tags['agenthost_domain_id'] = \ + str('%s_%s' % (cluster_id, data.fields['agenthost'])) + data.tags['host'] = cluster_host + data.fields['cmd'] = str(cypher_cmd) + self.data.append(data) + + def _collect_data(self): + if not self._module_inst: + return + + self._cluster_contains_host() + self._host_contains_osd() + self._host_contains_mon() + self._host_contains_mds() + self._osd_contains_pg() + self._osd_contains_disk() diff --git a/src/pybind/mgr/diskprediction/agent/metrics/sai_agent.py b/src/pybind/mgr/diskprediction/agent/metrics/sai_agent.py new file mode 100644 index 00000000000..63c8e870f71 --- /dev/null +++ b/src/pybind/mgr/diskprediction/agent/metrics/sai_agent.py @@ -0,0 +1,71 @@ +# vim: tabstop=4 shiftwidth=4 softtabstop=4 +from __future__ import absolute_import + +import socket +import time + +from . 
import AGENT_VERSION, MetricsAgent, MetricsField +from ...common import DP_MGR_STAT_FAILED, DP_MGR_STAT_WARNING +from ...common.clusterdata import ClusterAPI + + +class SAIAgentFields(MetricsField): + """ SAI DiskSmart structure """ + measurement = 'sai_agent' + + def __init__(self): + super(SAIAgentFields, self).__init__() + self.tags['agenthost_domain_id'] = None + self.fields['agent_type'] = str('ceph') + self.fields['agent_version'] = str(AGENT_VERSION) + self.fields['agenthost'] = '' + self.fields['cluster_domain_id'] = '' + self.fields['heartbeat_interval'] = '' + self.fields['host_ip'] = '' + self.fields['host_name'] = '' + self.fields['is_error'] = False + self.fields['is_ceph_error'] = False + self.fields['needs_warning'] = False + self.fields['send'] = None + + +class SAIAgent(MetricsAgent): + measurement = 'sai_agent' + + def _collect_data(self): + mgr_id = [] + c_data = SAIAgentFields() + obj_api = ClusterAPI(self._module_inst) + svc_data = obj_api.get_server(socket.gethostname()) + cluster_state = obj_api.get_health_status() + if not svc_data: + raise Exception('unable to get %s service info' % socket.gethostname()) + # Filter mgr id + for s in svc_data.get('services', []): + if s.get('type', '') == 'mgr': + mgr_id.append(s.get('id')) + + for _id in mgr_id: + mgr_meta = obj_api.get_mgr_metadata(_id) + cluster_id = obj_api.get_cluster_id() + c_data.fields['cluster_domain_id'] = str(cluster_id) + c_data.fields['agenthost'] = str(socket.gethostname()) + c_data.tags['agenthost_domain_id'] = \ + str('%s_%s' % (cluster_id, c_data.fields['agenthost'])) + c_data.fields['heartbeat_interval'] = \ + int(obj_api.get_configuration('diskprediction_upload_metrics_interval')) + c_data.fields['host_ip'] = str(mgr_meta.get('addr', '127.0.0.1')) + c_data.fields['host_name'] = str(socket.gethostname()) + if obj_api.module.status.get('status', '') in [DP_MGR_STAT_WARNING, DP_MGR_STAT_FAILED]: + c_data.fields['is_error'] = bool(True) + else: + c_data.fields['is_error'] = 
bool(False) + if cluster_state in ['HEALTH_ERR', 'HEALTH_WARN']: + c_data.fields['is_ceph_error'] = bool(True) + c_data.fields['needs_warning'] = bool(True) + c_data.fields['is_error'] = bool(True) + c_data.fields['problems'] = str(obj_api.get_health_checks()) + else: + c_data.fields['is_ceph_error'] = bool(False) + c_data.fields['send'] = int(time.time() * 1000) + self.data.append(c_data) diff --git a/src/pybind/mgr/diskprediction/agent/metrics/sai_cluster.py b/src/pybind/mgr/diskprediction/agent/metrics/sai_cluster.py new file mode 100644 index 00000000000..ac9cab9aa9a --- /dev/null +++ b/src/pybind/mgr/diskprediction/agent/metrics/sai_cluster.py @@ -0,0 +1,36 @@ +from __future__ import absolute_import + +import socket + +from . import AGENT_VERSION, MetricsAgent, MetricsField +from ...common.clusterdata import ClusterAPI + + +class SAIClusterFields(MetricsField): + """ SAI Host structure """ + measurement = 'sai_cluster' + + def __init__(self): + super(SAIClusterFields, self).__init__() + self.tags['domain_id'] = None + self.fields['agenthost'] = None + self.fields['agenthost_domain_id'] = None + self.fields['name'] = None + self.fields['agent_version'] = str(AGENT_VERSION) + + +class SAICluserAgent(MetricsAgent): + measurement = 'sai_cluster' + + def _collect_data(self): + c_data = SAIClusterFields() + obj_api = ClusterAPI(self._module_inst) + cluster_id = obj_api.get_cluster_id() + + c_data.tags['domain_id'] = str(cluster_id) + c_data.tags['host_domain_id'] = '%s_%s' % (str(cluster_id), str(socket.gethostname())) + c_data.fields['agenthost'] = str(socket.gethostname()) + c_data.tags['agenthost_domain_id'] = \ + str('%s_%s' % (cluster_id, c_data.fields['agenthost'])) + c_data.fields['name'] = 'ceph mgr plugin' + self.data.append(c_data) diff --git a/src/pybind/mgr/diskprediction/agent/metrics/sai_disk.py b/src/pybind/mgr/diskprediction/agent/metrics/sai_disk.py new file mode 100644 index 00000000000..447b24ba030 --- /dev/null +++ 
b/src/pybind/mgr/diskprediction/agent/metrics/sai_disk.py @@ -0,0 +1,172 @@ +from __future__ import absolute_import + +import socket + +from . import AGENT_VERSION, MetricsAgent, MetricsField +from ...common import get_human_readable +from ...common.clusterdata import ClusterAPI + + +class SAIDiskFields(MetricsField): + """ SAI Disk structure """ + measurement = 'sai_disk' + + def __init__(self): + super(SAIDiskFields, self).__init__() + self.fields['agenthost'] = None + self.tags['agenthost_domain_id'] = None + self.tags['disk_domain_id'] = None + self.tags['disk_name'] = None + self.tags['disk_wwn'] = None + self.tags['primary_key'] = None + self.fields['cluster_domain_id'] = None + self.fields['host_domain_id'] = None + self.fields['model'] = None + self.fields['serial_number'] = None + self.fields['size'] = None + self.fields['vendor'] = None + self.fields['agent_version'] = str(AGENT_VERSION) + + """disk_status + 0: unknown 1: good 2: failure + """ + self.fields['disk_status'] = 0 + + """disk_type + 0: unknown 1: HDD 2: SSD 3: SSD NVME + 4: SSD SAS 5: SSD SATA 6: HDD SAS 7: HDD SATA + """ + self.fields['disk_type'] = 0 + + +class SAIDiskAgent(MetricsAgent): + measurement = 'sai_disk' + + @staticmethod + def _convert_disk_type(is_ssd, sata_version, protocol): + """ return type: + 0: "Unknown', 1: 'HDD', + 2: 'SSD", 3: "SSD NVME", + 4: "SSD SAS", 5: "SSD SATA", + 6: "HDD SAS", 7: "HDD SATA" + """ + if is_ssd: + if sata_version and not protocol: + disk_type = 5 + elif 'SCSI'.lower() in protocol.lower(): + disk_type = 4 + elif 'NVMe'.lower() in protocol.lower(): + disk_type = 3 + else: + disk_type = 2 + else: + if sata_version and not protocol: + disk_type = 7 + elif 'SCSI'.lower() in protocol.lower(): + disk_type = 6 + else: + disk_type = 1 + return disk_type + + def _collect_data(self): + # process data and save to 'self.data' + obj_api = ClusterAPI(self._module_inst) + cluster_id = obj_api.get_cluster_id() + osds = obj_api.get_osds() + for osd in osds: + if 
osd.get('osd') is None: + continue + if not osd.get('in'): + continue + osds_meta = obj_api.get_osd_metadata(osd.get('osd')) + if not osds_meta: + continue + osds_smart = obj_api.get_osd_smart(osd.get('osd')) + if not osds_smart: + continue + for dev_name, s_val in osds_smart.iteritems(): + d_data = SAIDiskFields() + d_data.tags['disk_name'] = str(dev_name) + d_data.fields['cluster_domain_id'] = str(cluster_id) + d_data.tags['host_domain_id'] = \ + str('%s_%s' + % (cluster_id, osds_meta.get('hostname', 'None'))) + d_data.fields['agenthost'] = str(socket.gethostname()) + d_data.tags['agenthost_domain_id'] = \ + str('%s_%s' % (cluster_id, d_data.fields['agenthost'])) + serial_number = s_val.get('serial_number') + wwn = s_val.get('wwn', {}) + wwpn = '' + if wwn: + wwpn = '%06X%X' % (wwn.get('oui', 0), wwn.get('id', 0)) + for k in wwn.keys(): + if k in ['naa', 't10', 'eui', 'iqn']: + wwpn = ('%X%s' % (wwn[k], wwpn)).lower() + break + + if wwpn: + d_data.tags['disk_domain_id'] = str(dev_name) + d_data.tags['disk_wwn'] = str(wwpn) + if serial_number: + d_data.fields['serial_number'] = str(serial_number) + else: + d_data.fields['serial_number'] = str(wwpn) + elif serial_number: + d_data.tags['disk_domain_id'] = str(dev_name) + d_data.fields['serial_number'] = str(serial_number) + if wwpn: + d_data.tags['disk_wwn'] = str(wwpn) + else: + d_data.tags['disk_wwn'] = str(serial_number) + else: + d_data.tags['disk_domain_id'] = str(dev_name) + d_data.tags['disk_wwn'] = str(dev_name) + d_data.fields['serial_number'] = str(dev_name) + d_data.tags['primary_key'] = \ + str('%s%s%s' + % (cluster_id, d_data.tags['host_domain_id'], + d_data.tags['disk_domain_id'])) + d_data.fields['disk_status'] = int(1) + is_ssd = True if s_val.get('rotation_rate') == 0 else False + vendor = s_val.get('vendor', None) + model = s_val.get('model_name', None) + if s_val.get('sata_version', {}).get('string'): + sata_version = s_val['sata_version']['string'] + else: + sata_version = '' + if 
s_val.get('device', {}).get('protocol'): + protocol = s_val['device']['protocol'] + else: + protocol = '' + d_data.fields['disk_type'] = \ + self._convert_disk_type(is_ssd, sata_version, protocol) + d_data.fields['firmware_version'] = \ + str(s_val.get('firmware_version')) + if model: + d_data.fields['model'] = str(model) + if vendor: + d_data.fields['vendor'] = str(vendor) + if sata_version: + d_data.fields['sata_version'] = str(sata_version) + if s_val.get('logical_block_size'): + d_data.fields['sector_size'] = \ + str(str(s_val['logical_block_size'])) + d_data.fields['transport_protocol'] = str('') + d_data.fields['vendor'] = \ + str(s_val.get('model_family', '')).replace('\"', '\'') + try: + if isinstance(s_val.get('user_capacity'), dict): + user_capacity = \ + s_val['user_capacity'].get('bytes', {}).get('n', 0) + else: + user_capacity = s_val.get('user_capacity', 0) + except ValueError: + user_capacity = 0 + d_data.fields['size'] = \ + get_human_readable(int(user_capacity), 0) + + if s_val.get('smart_status', {}).get('passed'): + d_data.fields['smart_health_status'] = 'PASSED' + else: + d_data.fields['smart_health_status'] = 'FAILED' + self.data.append(d_data) diff --git a/src/pybind/mgr/diskprediction/agent/metrics/sai_disk_smart.py b/src/pybind/mgr/diskprediction/agent/metrics/sai_disk_smart.py new file mode 100644 index 00000000000..509d9263aa8 --- /dev/null +++ b/src/pybind/mgr/diskprediction/agent/metrics/sai_disk_smart.py @@ -0,0 +1,156 @@ +from __future__ import absolute_import + +import datetime +import json +import _strptime +import socket +import time + +from . 
import AGENT_VERSION, MetricsAgent, MetricsField +from ...common.clusterdata import ClusterAPI + + +class SAIDiskSmartFields(MetricsField): + """ SAI DiskSmart structure """ + measurement = 'sai_disk_smart' + + def __init__(self): + super(SAIDiskSmartFields, self).__init__() + self.fields['agenthost'] = None + self.tags['agenthost_domain_id'] = None + self.tags['disk_domain_id'] = None + self.tags['disk_name'] = None + self.tags['disk_wwn'] = None + self.tags['primary_key'] = None + self.fields['cluster_domain_id'] = None + self.fields['host_domain_id'] = None + self.fields['agent_version'] = str(AGENT_VERSION) + + +class SAIDiskSmartAgent(MetricsAgent): + measurement = 'sai_disk_smart' + + def _collect_data(self): + # process data and save to 'self.data' + obj_api = ClusterAPI(self._module_inst) + cluster_id = obj_api.get_cluster_id() + osds = obj_api.get_osds() + for osd in osds: + if osd.get('osd') is None: + continue + if not osd.get('in'): + continue + osds_meta = obj_api.get_osd_metadata(osd.get('osd')) + if not osds_meta: + continue + devs_info = obj_api.get_osd_device_id(osd.get('osd')) + if devs_info: + for dev_name, dev_info in devs_info.iteritems(): + osds_smart = obj_api.get_device_health(dev_info['dev_id']) + if not osds_smart: + continue + # Always pass through last smart data record + o_key = sorted(osds_smart.iterkeys(), reverse=True)[0] + if o_key: + s_date = o_key + s_val = osds_smart[s_date] + smart_data = SAIDiskSmartFields() + smart_data.tags['disk_name'] = str(dev_name) + smart_data.fields['cluster_domain_id'] = str(cluster_id) + smart_data.tags['host_domain_id'] = \ + str('%s_%s' + % (cluster_id, osds_meta.get('hostname', 'None'))) + smart_data.fields['agenthost'] = str(socket.gethostname()) + smart_data.tags['agenthost_domain_id'] = \ + str('%s_%s' % (cluster_id, smart_data.fields['agenthost'])) + # parse attributes + ata_smart = s_val.get('ata_smart_attributes', {}) + for attr in ata_smart.get('table', []): + if attr.get('raw', 
{}).get('string'): + if str(attr.get('raw', {}).get('string', '0')).isdigit(): + smart_data.fields['%s_raw' % attr.get('id')] = \ + int(attr.get('raw', {}).get('string', '0')) + else: + if str(attr.get('raw', {}).get('string', '0')).split(' ')[0].isdigit(): + smart_data.fields['%s_raw' % attr.get('id')] = \ + int(attr.get('raw', {}).get('string', '0').split(' ')[0]) + else: + smart_data.fields['%s_raw' % attr.get('id')] = \ + attr.get('raw', {}).get('value', 0) + smart_data.fields['raw_data'] = str(json.dumps(osds_smart[s_date]).replace("\"", "\'")) + if s_val.get('temperature', {}).get('current') is not None: + smart_data.fields['CurrentDriveTemperature_raw'] = \ + int(s_val['temperature']['current']) + if s_val.get('temperature', {}).get('drive_trip') is not None: + smart_data.fields['DriveTripTemperature_raw'] = \ + int(s_val['temperature']['drive_trip']) + if s_val.get('elements_grown_list') is not None: + smart_data.fields['ElementsInGrownDefectList_raw'] = int(s_val['elements_grown_list']) + if s_val.get('power_on_time', {}).get('hours') is not None: + smart_data.fields['9_raw'] = int(s_val['power_on_time']['hours']) + if s_val.get('scsi_percentage_used_endurance_indicator') is not None: + smart_data.fields['PercentageUsedEnduranceIndicator_raw'] = \ + int(s_val['scsi_percentage_used_endurance_indicator']) + if s_val.get('scsi_error_counter_log') is not None: + s_err_counter = s_val['scsi_error_counter_log'] + for s_key in s_err_counter.keys(): + if s_key.lower() in ['read', 'write']: + for s1_key in s_err_counter[s_key].keys(): + if s1_key.lower() == 'errors_corrected_by_eccfast': + smart_data.fields['ErrorsCorrectedbyECCFast%s_raw' % s_key.capitalize()] = \ + int(s_err_counter[s_key]['errors_corrected_by_eccfast']) + elif s1_key.lower() == 'errors_corrected_by_eccdelayed': + smart_data.fields['ErrorsCorrectedbyECCDelayed%s_raw' % s_key.capitalize()] = \ + int(s_err_counter[s_key]['errors_corrected_by_eccdelayed']) + elif s1_key.lower() == 
'errors_corrected_by_rereads_rewrites': + smart_data.fields['ErrorCorrectedByRereadsRewrites%s_raw' % s_key.capitalize()] = \ + int(s_err_counter[s_key]['errors_corrected_by_rereads_rewrites']) + elif s1_key.lower() == 'total_errors_corrected': + smart_data.fields['TotalErrorsCorrected%s_raw' % s_key.capitalize()] = \ + int(s_err_counter[s_key]['total_errors_corrected']) + elif s1_key.lower() == 'correction_algorithm_invocations': + smart_data.fields['CorrectionAlgorithmInvocations%s_raw' % s_key.capitalize()] = \ + int(s_err_counter[s_key]['correction_algorithm_invocations']) + elif s1_key.lower() == 'gigabytes_processed': + smart_data.fields['GigaBytesProcessed%s_raw' % s_key.capitalize()] = \ + float(s_err_counter[s_key]['gigabytes_processed']) + elif s1_key.lower() == 'total_uncorrected_errors': + smart_data.fields['TotalUncorrectedErrors%s_raw' % s_key.capitalize()] = \ + int(s_err_counter[s_key]['total_uncorrected_errors']) + + serial_number = s_val.get('serial_number') + wwn = s_val.get('wwn', {}) + wwpn = '' + if wwn: + wwpn = '%06X%X' % (wwn.get('oui', 0), wwn.get('id', 0)) + for k in wwn.keys(): + if k in ['naa', 't10', 'eui', 'iqn']: + wwpn = ('%X%s' % (wwn[k], wwpn)).lower() + break + if wwpn: + smart_data.tags['disk_domain_id'] = str(dev_info['dev_id']) + smart_data.tags['disk_wwn'] = str(wwpn) + if serial_number: + smart_data.fields['serial_number'] = str(serial_number) + else: + smart_data.fields['serial_number'] = str(wwpn) + elif serial_number: + smart_data.tags['disk_domain_id'] = str(dev_info['dev_id']) + smart_data.fields['serial_number'] = str(serial_number) + if wwpn: + smart_data.tags['disk_wwn'] = str(wwpn) + else: + smart_data.tags['disk_wwn'] = str(serial_number) + else: + smart_data.tags['disk_domain_id'] = str(dev_info['dev_id']) + smart_data.tags['disk_wwn'] = str(dev_name) + smart_data.fields['serial_number'] = str(dev_name) + smart_data.tags['primary_key'] = \ + str('%s%s%s' + % (cluster_id, + smart_data.tags['host_domain_id'], + 
smart_data.tags['disk_domain_id'])) + smart_data.timestamp = \ + time.mktime(datetime.datetime.strptime( + s_date, '%Y%m%d-%H%M%S').timetuple()) + self.data.append(smart_data) diff --git a/src/pybind/mgr/diskprediction/agent/metrics/sai_host.py b/src/pybind/mgr/diskprediction/agent/metrics/sai_host.py new file mode 100644 index 00000000000..f3fc8ba257a --- /dev/null +++ b/src/pybind/mgr/diskprediction/agent/metrics/sai_host.py @@ -0,0 +1,105 @@ +from __future__ import absolute_import + +import socket + +from . import AGENT_VERSION, MetricsAgent, MetricsField +from ...common.clusterdata import ClusterAPI + + +class SAIHostFields(MetricsField): + """ SAI Host structure """ + measurement = 'sai_host' + + def __init__(self): + super(SAIHostFields, self).__init__() + self.tags['domain_id'] = None + self.fields['agenthost'] = None + self.tags['agenthost_domain_id'] = None + self.fields['cluster_domain_id'] = None + self.fields['name'] = None + self.fields['host_ip'] = None + self.fields['host_ipv6'] = None + self.fields['host_uuid'] = None + self.fields['os_type'] = str('ceph') + self.fields['os_name'] = None + self.fields['os_version'] = None + self.fields['agent_version'] = str(AGENT_VERSION) + + +class SAIHostAgent(MetricsAgent): + measurement = 'sai_host' + + def _collect_data(self): + db = ClusterAPI(self._module_inst) + cluster_id = db.get_cluster_id() + + hosts = set() + + osd_data = db.get_osds() + for _data in osd_data: + osd_id = _data['osd'] + if not _data.get('in'): + continue + osd_addr = _data['public_addr'].split(':')[0] + osd_metadata = db.get_osd_metadata(osd_id) + if osd_metadata: + osd_host = osd_metadata.get('hostname', 'None') + if osd_host not in hosts: + data = SAIHostFields() + data.fields['agenthost'] = str(socket.gethostname()) + data.tags['agenthost_domain_id'] = \ + str('%s_%s' % (cluster_id, data.fields['agenthost'])) + data.tags['domain_id'] = \ + str('%s_%s' % (cluster_id, osd_host)) + data.fields['cluster_domain_id'] = str(cluster_id) + 
data.fields['host_ip'] = osd_addr + data.fields['host_uuid'] = \ + str('%s_%s' % (cluster_id, osd_host)) + data.fields['os_name'] = \ + osd_metadata.get('ceph_release', '') + data.fields['os_version'] = \ + osd_metadata.get('ceph_version_short', '') + data.fields['name'] = osd_host + hosts.add(osd_host) + self.data.append(data) + + mons = db.get_mons() + for _data in mons: + mon_host = _data['name'] + mon_addr = _data['public_addr'].split(':')[0] + if mon_host not in hosts: + data = SAIHostFields() + data.fields['agenthost'] = str(socket.gethostname()) + data.tags['agenthost_domain_id'] = \ + str('%s_%s' % (cluster_id, data.fields['agenthost'])) + data.tags['domain_id'] = \ + str('%s_%s' % (cluster_id, mon_host)) + data.fields['cluster_domain_id'] = str(cluster_id) + data.fields['host_ip'] = mon_addr + data.fields['host_uuid'] = \ + str('%s_%s' % (cluster_id, mon_host)) + data.fields['name'] = mon_host + # membership checks above use the bare host name, so add the + # name itself (a tuple here would defeat deduplication) + hosts.add(mon_host) + self.data.append(data) + + file_systems = db.get_file_systems() + for _data in file_systems: + mds_info = _data.get('mdsmap').get('info') + for _gid in mds_info: + mds_data = mds_info[_gid] + mds_addr = mds_data.get('addr').split(':')[0] + mds_host = mds_data.get('name') + if mds_host not in hosts: + data = SAIHostFields() + data.fields['agenthost'] = str(socket.gethostname()) + data.tags['agenthost_domain_id'] = \ + str('%s_%s' % (cluster_id, data.fields['agenthost'])) + data.tags['domain_id'] = \ + str('%s_%s' % (cluster_id, mds_host)) + data.fields['cluster_domain_id'] = str(cluster_id) + data.fields['host_ip'] = mds_addr + data.fields['host_uuid'] = \ + str('%s_%s' % (cluster_id, mds_host)) + data.fields['name'] = mds_host + hosts.add(mds_host) + self.data.append(data) diff --git a/src/pybind/mgr/diskprediction/agent/predict/__init__.py b/src/pybind/mgr/diskprediction/agent/predict/__init__.py new file mode 100644 index 00000000000..28e2eebd2b7 --- /dev/null +++
b/src/pybind/mgr/diskprediction/agent/predict/__init__.py @@ -0,0 +1 @@ +from __future__ import absolute_import diff --git a/src/pybind/mgr/diskprediction/agent/predict/prediction.py b/src/pybind/mgr/diskprediction/agent/predict/prediction.py new file mode 100644 index 00000000000..27ba3335422 --- /dev/null +++ b/src/pybind/mgr/diskprediction/agent/predict/prediction.py @@ -0,0 +1,216 @@ +from __future__ import absolute_import +import datetime + +from .. import BaseAgent +from ...common import DP_MGR_STAT_FAILED, DP_MGR_STAT_OK +from ...common.clusterdata import ClusterAPI + +PREDICTION_FILE = '/var/tmp/disk_prediction.json' + +TIME_DAYS = 24*60*60 +TIME_WEEK = TIME_DAYS * 7 + + +class PredictionAgent(BaseAgent): + + measurement = 'sai_disk_prediction' + + @staticmethod + def _get_disk_type(is_ssd, vendor, model): + """ return type: + 0: "Unknown", 1: "HDD", + 2: "SSD", 3: "SSD NVME", + 4: "SSD SAS", 5: "SSD SATA", + 6: "HDD SAS", 7: "HDD SATA" + """ + if is_ssd: + if vendor: + disk_type = 4 + elif model: + disk_type = 5 + else: + disk_type = 2 + else: + if vendor: + disk_type = 6 + elif model: + disk_type = 7 + else: + disk_type = 1 + return disk_type + + def _store_prediction_result(self, result): + self._module_inst._prediction_result = result + + def _parse_prediction_data(self, host_domain_id, disk_domain_id): + result = {} + try: + query_info = self._client.query_info( + host_domain_id, disk_domain_id, 'sai_disk_prediction') + status_code = query_info.status_code + if status_code == 200: + result = query_info.json() + self._module_inst.status = {'status': DP_MGR_STAT_OK} + else: + resp = query_info.json() + if resp.get('error'): + self._logger.error(str(resp['error'])) + self._module_inst.status = \ + {'status': DP_MGR_STAT_FAILED, + 'reason': 'failed to parse device {} prediction data'.format(disk_domain_id)} + except Exception as e: + self._logger.error(str(e)) + return result + + @staticmethod + def _convert_timestamp(predicted_timestamp, 
life_expectancy_day): + """ + + :param predicted_timestamp: unit is nanoseconds + :param life_expectancy_day: unit is seconds + :return: + date format '%Y-%m-%d' ex. 2018-01-01 + """ + return datetime.datetime.fromtimestamp( + predicted_timestamp / (1000 ** 3) + life_expectancy_day).strftime('%Y-%m-%d') + + def _fetch_prediction_result(self): + obj_api = ClusterAPI(self._module_inst) + cluster_id = obj_api.get_cluster_id() + + result = {} + osds = obj_api.get_osds() + for osd in osds: + osd_id = osd.get('osd') + if osd_id is None: + continue + if not osd.get('in'): + continue + osds_meta = obj_api.get_osd_metadata(osd_id) + if not osds_meta: + continue + osds_smart = obj_api.get_osd_smart(osd_id) + if not osds_smart: + continue + + hostname = osds_meta.get('hostname', 'None') + host_domain_id = '%s_%s' % (cluster_id, hostname) + + for dev_name, s_val in osds_smart.iteritems(): + is_ssd = True if s_val.get('rotation_rate') == 0 else False + vendor = s_val.get('vendor', '') + model = s_val.get('model_name', '') + disk_type = self._get_disk_type(is_ssd, vendor, model) + serial_number = s_val.get('serial_number') + wwn = s_val.get('wwn', {}) + wwpn = '' + if wwn: + wwpn = '%06X%X' % (wwn.get('oui', 0), wwn.get('id', 0)) + for k in wwn.keys(): + if k in ['naa', 't10', 'eui', 'iqn']: + wwpn = ('%X%s' % (wwn[k], wwpn)).lower() + break + + tmp = {} + if wwpn: + tmp['disk_domain_id'] = dev_name + tmp['disk_wwn'] = wwpn + if serial_number: + tmp['serial_number'] = serial_number + else: + tmp['serial_number'] = wwpn + elif serial_number: + tmp['disk_domain_id'] = dev_name + tmp['serial_number'] = serial_number + if wwpn: + tmp['disk_wwn'] = wwpn + else: + tmp['disk_wwn'] = serial_number + else: + tmp['disk_domain_id'] = dev_name + tmp['disk_wwn'] = dev_name + tmp['serial_number'] = dev_name + + if s_val.get('smart_status', {}).get('passed'): + tmp['smart_health_status'] = 'PASSED' + else: + tmp['smart_health_status'] = 'FAILED' + + tmp['sata_version'] = 
s_val.get('sata_version', {}).get('string', '') + tmp['sector_size'] = str(s_val.get('logical_block_size', '')) + try: + if isinstance(s_val.get('user_capacity'), dict): + user_capacity = \ + s_val['user_capacity'].get('bytes', {}).get('n', 0) + else: + user_capacity = s_val.get('user_capacity', 0) + except ValueError: + user_capacity = 0 + disk_info = { + 'disk_name': dev_name, + 'disk_type': str(disk_type), + 'disk_status': '1', + 'disk_wwn': tmp['disk_wwn'], + 'dp_disk_idd': tmp['disk_domain_id'], + 'serial_number': tmp['serial_number'], + 'vendor': vendor, + 'sata_version': tmp['sata_version'], + 'smart_healthStatus': tmp['smart_health_status'], + 'sector_size': tmp['sector_size'], + 'size': str(user_capacity), + 'prediction': self._parse_prediction_data( + host_domain_id, tmp['disk_domain_id']) + } + # Update osd life-expectancy + predicted = None + life_expectancy_day_min = None + life_expectancy_day_max = None + devs_info = obj_api.get_osd_device_id(osd_id) + if disk_info.get('prediction', {}).get('predicted'): + predicted = int(disk_info['prediction']['predicted']) + if disk_info.get('prediction', {}).get('near_failure'): + if disk_info['prediction']['near_failure'].lower() == 'good': + life_expectancy_day_min = (TIME_WEEK * 6) + TIME_DAYS + life_expectancy_day_max = None + elif disk_info['prediction']['near_failure'].lower() == 'warning': + life_expectancy_day_min = (TIME_WEEK * 2) + life_expectancy_day_max = (TIME_WEEK * 6) + elif disk_info['prediction']['near_failure'].lower() == 'bad': + life_expectancy_day_min = 0 + life_expectancy_day_max = (TIME_WEEK * 2) - TIME_DAYS + else: + # Near failure state is unknown. 
+ predicted = None + life_expectancy_day_min = None + life_expectancy_day_max = None + + # life_expectancy_day_min is 0 for the 'bad' state, so test + # against None rather than truthiness + if predicted and tmp['disk_domain_id'] and life_expectancy_day_min is not None: + from_date = None + to_date = None + try: + if life_expectancy_day_min is not None: + from_date = self._convert_timestamp(predicted, life_expectancy_day_min) + + if life_expectancy_day_max is not None: + to_date = self._convert_timestamp(predicted, life_expectancy_day_max) + + obj_api.set_device_life_expectancy(tmp['disk_domain_id'], from_date, to_date) + self._logger.info( + 'succeeded to set device {} life expectancy from: {}, to: {}'.format( + tmp['disk_domain_id'], from_date, to_date)) + except Exception as e: + self._logger.error( + 'failed to set device {} life expectancy from: {}, to: {}, {}'.format( + tmp['disk_domain_id'], from_date, to_date, str(e))) + else: + if tmp['disk_domain_id']: + obj_api.reset_device_life_expectancy(tmp['disk_domain_id']) + if tmp['disk_domain_id']: + result[tmp['disk_domain_id']] = disk_info + + return result + + def run(self): + result = self._fetch_prediction_result() + if result: + self._store_prediction_result(result) diff --git a/src/pybind/mgr/diskprediction/common/__init__.py b/src/pybind/mgr/diskprediction/common/__init__.py new file mode 100644 index 00000000000..e567e4f72d9 --- /dev/null +++ b/src/pybind/mgr/diskprediction/common/__init__.py @@ -0,0 +1,59 @@ +from __future__ import absolute_import +import errno +from functools import wraps +from httplib import BAD_REQUEST +import os +import signal + + +DP_MGR_STAT_OK = 'OK' +DP_MGR_STAT_WARNING = 'WARNING' +DP_MGR_STAT_FAILED = 'FAILED' +DP_MGR_STAT_DISABLED = 'DISABLED' +DP_MGR_STAT_ENABLED = 'ENABLED' + + +class DummyResonse: + def __init__(self): + self.resp_json = dict() + self.content = 'DummyResponse' + self.status_code = BAD_REQUEST + + def json(self): + return self.resp_json + + +class TimeoutError(Exception): + pass + + +def timeout(seconds=10, error_message=os.strerror(errno.ETIME)): + def decorator(func): + def
_handle_timeout(signum, frame): + raise TimeoutError(error_message) + + def wrapper(*args, **kwargs): + # hasattr() returns a bool, so comparing it to None was always + # true; also avoid rebinding the closure variable 'seconds' + wait_seconds = seconds + if args and hasattr(args[0], '_timeout'): + wait_seconds = args[0]._timeout + signal.signal(signal.SIGALRM, _handle_timeout) + signal.alarm(wait_seconds) + try: + result = func(*args, **kwargs) + finally: + signal.alarm(0) + return result + + return wraps(func)(wrapper) + + return decorator + + +def get_human_readable(size, precision=2): + suffixes = ['B', 'KB', 'MB', 'GB', 'TB'] + suffix_index = 0 + while size > 1000 and suffix_index < 4: + # increment the index of the suffix + suffix_index += 1 + # apply the division + size = size/1000.0 + return '%.*f %s' % (precision, size, suffixes[suffix_index]) diff --git a/src/pybind/mgr/diskprediction/common/client_pb2.py b/src/pybind/mgr/diskprediction/common/client_pb2.py new file mode 100644 index 00000000000..9f65c731a84 --- /dev/null +++ b/src/pybind/mgr/diskprediction/common/client_pb2.py @@ -0,0 +1,1775 @@ +# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: mainServer.proto + +import sys +_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1')) +from google.protobuf import descriptor as _descriptor +from google.protobuf import message as _message +from google.protobuf import reflection as _reflection +from google.protobuf import symbol_database as _symbol_database +from google.protobuf import descriptor_pb2 +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + +from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2 + + +DESCRIPTOR = _descriptor.FileDescriptor( + name='mainServer.proto', + package='proto', + syntax='proto3', + serialized_pb=_b('\n\x10mainServer.proto\x12\x05proto\x1a\x1cgoogle/api/annotations.proto\"\x07\n\x05\x45mpty\"#\n\x10GeneralMsgOutput\x12\x0f\n\x07message\x18\x01 \x01(\t\")\n\x16GeneralHeartbeatOutput\x12\x0f\n\x07message\x18\x01 \x01(\t\"\x1d\n\nPingOutout\x12\x0f\n\x07message\x18\x01 \x01(\t\"*\n\tTestInput\x12\x1d\n\x06people\x18\x01 \x03(\x0b\x32\r.proto.Person\"\xbe\x01\n\nTestOutput\x12\x10\n\x08strArray\x18\x01 \x03(\t\x12\x31\n\x08mapValue\x18\x02 \x03(\x0b\x32\x1f.proto.TestOutput.MapValueEntry\x12\x19\n\x02pn\x18\x04 \x01(\x0b\x32\r.proto.Person\x12\x1f\n\x07profile\x18\x03 \x03(\x0b\x32\x0e.proto.Profile\x1a/\n\rMapValueEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01\"\xcf\x01\n\x06Person\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\n\n\x02id\x18\x02 \x01(\x03\x12\r\n\x05\x65mail\x18\x03 \x01(\t\x12)\n\x06phones\x18\x04 \x03(\x0b\x32\x19.proto.Person.PhoneNumber\x1a\x44\n\x0bPhoneNumber\x12\x0e\n\x06number\x18\x01 \x01(\t\x12%\n\x04type\x18\x02 \x01(\x0e\x32\x17.proto.Person.PhoneType\"+\n\tPhoneType\x12\n\n\x06MOBILE\x10\x00\x12\x08\n\x04HOME\x10\x01\x12\x08\n\x04WORK\x10\x02\"\xa9\x01\n\x07Profile\x12%\n\x08\x66ileInfo\x18\x01 \x01(\x0b\x32\x13.proto.Profile.File\x1aw\n\x04\x46ile\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x11\n\ttypeInt32\x18\x02 
\x01(\x05\x12\x11\n\ttypeInt64\x18\x03 \x01(\x03\x12\x11\n\ttypeFloat\x18\x04 \x01(\x02\x12\x12\n\ntypeDouble\x18\x05 \x01(\x01\x12\x14\n\x0c\x62ooleanValue\x18\x06 \x01(\x08\"4\n\x15GetUsersByStatusInput\x12\x0e\n\x06status\x18\x01 \x01(\t\x12\x0b\n\x03key\x18\x02 \x01(\t\":\n\x16GetUsersByStatusOutput\x12 \n\x05users\x18\x01 \x03(\x0b\x32\x11.proto.UserOutput\")\n\x16\x41\x63\x63ountHeartbeatOutput\x12\x0f\n\x07message\x18\x01 \x01(\t\"-\n\nLoginInput\x12\r\n\x05\x65mail\x18\x01 \x01(\t\x12\x10\n\x08password\x18\x02 \x01(\t\"\xf2\x01\n\nUserOutput\x12\n\n\x02id\x18\x01 \x01(\t\x12\r\n\x05\x65mail\x18\x02 \x01(\t\x12\x0e\n\x06status\x18\x03 \x01(\t\x12\r\n\x05phone\x18\x04 \x01(\t\x12\x11\n\tfirstName\x18\x05 \x01(\t\x12\x10\n\x08lastName\x18\x06 \x01(\t\x12\x13\n\x0b\x63reatedTime\x18\x07 \x01(\t\x12\x11\n\tnamespace\x18\x08 \x01(\t\x12\x12\n\ndomainName\x18\t \x01(\t\x12\x0f\n\x07\x63ompany\x18\n \x01(\t\x12\x0b\n\x03url\x18\x0b \x01(\t\x12\x14\n\x0c\x61gentAccount\x18\x0c \x01(\t\x12\x15\n\ragentPassword\x18\r \x01(\t\"s\n\x0bSingupInput\x12\r\n\x05\x65mail\x18\x01 \x01(\t\x12\r\n\x05phone\x18\x02 \x01(\t\x12\x11\n\tfirstName\x18\x03 \x01(\t\x12\x10\n\x08lastName\x18\x04 \x01(\t\x12\x10\n\x08password\x18\x05 \x01(\t\x12\x0f\n\x07\x63ompany\x18\x06 \x01(\t\"\x1f\n\x0cSingupOutput\x12\x0f\n\x07message\x18\x01 \x01(\t\"-\n\x0f\x44\x65leteUserInput\x12\r\n\x05\x65mail\x18\x01 \x01(\t\x12\x0b\n\x03key\x18\x02 \x01(\t\"C\n\x15UpdateUserStatusInput\x12\r\n\x05\x65mail\x18\x01 \x01(\t\x12\x0b\n\x03key\x18\x02 \x01(\t\x12\x0e\n\x06status\x18\x03 \x01(\t\"\'\n\x16ResendConfirmCodeInput\x12\r\n\x05\x65mail\x18\x01 \x01(\t\"+\n\x0c\x43onfirmInput\x12\r\n\x05\x65mail\x18\x01 \x01(\t\x12\x0c\n\x04\x63ode\x18\x02 \x01(\t\"$\n\x11\x44PHeartbeatOutput\x12\x0f\n\x07message\x18\x01 \x01(\t\"n\n\x17\x44PGetPhysicalDisksInput\x12\x0f\n\x07hostIds\x18\x01 \x01(\t\x12\x0b\n\x03ids\x18\x02 \x01(\t\x12\r\n\x05limit\x18\x03 \x01(\x03\x12\x0c\n\x04page\x18\x04 
\x01(\x03\x12\x0c\n\x04\x66rom\x18\x05 \x01(\t\x12\n\n\x02to\x18\x06 \x01(\t\"{\n\x19\x44PGetDisksPredictionInput\x12\x17\n\x0fphysicalDiskIds\x18\x01 \x01(\t\x12\x0e\n\x06status\x18\x02 \x01(\t\x12\r\n\x05limit\x18\x03 \x01(\x03\x12\x0c\n\x04page\x18\x04 \x01(\x03\x12\x0c\n\x04\x66rom\x18\x05 \x01(\t\x12\n\n\x02to\x18\x06 \x01(\t\"\x1e\n\x0e\x44PBinaryOutput\x12\x0c\n\x04\x64\x61ta\x18\x01 \x01(\x0c\",\n\x19\x43ollectionHeartbeatOutput\x12\x0f\n\x07message\x18\x01 \x01(\t\"\"\n\x10PostMetricsInput\x12\x0e\n\x06points\x18\x01 \x03(\t\" \n\x10PostDBRelayInput\x12\x0c\n\x04\x63mds\x18\x01 \x03(\t\":\n\x17\x43ollectionMessageOutput\x12\x0e\n\x06status\x18\x01 \x01(\x03\x12\x0f\n\x07message\x18\x02 \x01(\t2\x85\x02\n\x07General\x12\x63\n\x10GeneralHeartbeat\x12\x0c.proto.Empty\x1a\x1d.proto.GeneralHeartbeatOutput\"\"\x82\xd3\xe4\x93\x02\x1c\x12\x1a/apis/v2/general/heartbeat\x12\x46\n\x04Ping\x12\x0c.proto.Empty\x1a\x11.proto.PingOutout\"\x1d\x82\xd3\xe4\x93\x02\x17\x12\x15/apis/v2/general/ping\x12M\n\x04Test\x12\x10.proto.TestInput\x1a\x11.proto.TestOutput\" \x82\xd3\xe4\x93\x02\x1a\"\x15/apis/v2/general/test:\x01*2\xa4\x06\n\x07\x41\x63\x63ount\x12\x63\n\x10\x41\x63\x63ountHeartbeat\x12\x0c.proto.Empty\x1a\x1d.proto.AccountHeartbeatOutput\"\"\x82\xd3\xe4\x93\x02\x1c\x12\x1a/apis/v2/account/heartbeat\x12N\n\x05Login\x12\x11.proto.LoginInput\x1a\x11.proto.UserOutput\"\x1f\x82\xd3\xe4\x93\x02\x19\"\x14/apis/v2/users/login:\x01*\x12S\n\x06Signup\x12\x12.proto.SingupInput\x1a\x13.proto.SingupOutput\" \x82\xd3\xe4\x93\x02\x1a\"\x15/apis/v2/users/signup:\x01*\x12r\n\x11ResendConfirmCode\x12\x1d.proto.ResendConfirmCodeInput\x1a\x17.proto.GeneralMsgOutput\"%\x82\xd3\xe4\x93\x02\x1f\"\x1a/apis/v2/users/confirmcode:\x01*\x12_\n\x07\x43onfirm\x12\x13.proto.ConfirmInput\x1a\x17.proto.GeneralMsgOutput\"&\x82\xd3\xe4\x93\x02 
\"\x1b/apis/v2/users/confirmation:\x01*\x12g\n\x10GetUsersByStatus\x12\x1c.proto.GetUsersByStatusInput\x1a\x1d.proto.GetUsersByStatusOutput\"\x16\x82\xd3\xe4\x93\x02\x10\x12\x0e/apis/v2/users\x12\x63\n\nDeleteUser\x12\x16.proto.DeleteUserInput\x1a\x17.proto.GeneralMsgOutput\"$\x82\xd3\xe4\x93\x02\x1e*\x1c/apis/v2/users/{email}/{key}\x12l\n\x10UpdateUserStatus\x12\x1c.proto.UpdateUserStatusInput\x1a\x17.proto.GeneralMsgOutput\"!\x82\xd3\xe4\x93\x02\x1b\x1a\x16/apis/v2/users/{email}:\x01*2\xcf\x02\n\x0b\x44iskprophet\x12T\n\x0b\x44PHeartbeat\x12\x0c.proto.Empty\x1a\x18.proto.DPHeartbeatOutput\"\x1d\x82\xd3\xe4\x93\x02\x17\x12\x15/apis/v2/dp/heartbeat\x12l\n\x12\x44PGetPhysicalDisks\x12\x1e.proto.DPGetPhysicalDisksInput\x1a\x15.proto.DPBinaryOutput\"\x1f\x82\xd3\xe4\x93\x02\x19\x12\x17/apis/v2/physical-disks\x12|\n\x14\x44PGetDisksPrediction\x12 .proto.DPGetDisksPredictionInput\x1a\x15.proto.DPBinaryOutput\"+\x82\xd3\xe4\x93\x02%\x12#/apis/v2/physical-disks/predictions2\xdb\x02\n\nCollection\x12l\n\x13\x43ollectionHeartbeat\x12\x0c.proto.Empty\x1a .proto.CollectionHeartbeatOutput\"%\x82\xd3\xe4\x93\x02\x1f\x12\x1d/apis/v2/collection/heartbeat\x12o\n\x0bPostDBRelay\x12\x17.proto.PostDBRelayInput\x1a\x1e.proto.CollectionMessageOutput\"\'\x82\xd3\xe4\x93\x02!\"\x1c/apis/v2/collection/relation:\x01*\x12n\n\x0bPostMetrics\x12\x17.proto.PostMetricsInput\x1a\x1e.proto.CollectionMessageOutput\"&\x82\xd3\xe4\x93\x02 \"\x1b/apis/v2/collection/metrics:\x01*b\x06proto3') + , + dependencies=[google_dot_api_dot_annotations__pb2.DESCRIPTOR,]) + + + +_PERSON_PHONETYPE = _descriptor.EnumDescriptor( + name='PhoneType', + full_name='proto.Person.PhoneType', + filename=None, + file=DESCRIPTOR, + values=[ + _descriptor.EnumValueDescriptor( + name='MOBILE', index=0, number=0, + options=None, + type=None), + _descriptor.EnumValueDescriptor( + name='HOME', index=1, number=1, + options=None, + type=None), + _descriptor.EnumValueDescriptor( + name='WORK', index=2, number=2, + options=None, + 
type=None), + ], + containing_type=None, + options=None, + serialized_start=579, + serialized_end=622, +) +_sym_db.RegisterEnumDescriptor(_PERSON_PHONETYPE) + + +_EMPTY = _descriptor.Descriptor( + name='Empty', + full_name='proto.Empty', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=57, + serialized_end=64, +) + + +_GENERALMSGOUTPUT = _descriptor.Descriptor( + name='GeneralMsgOutput', + full_name='proto.GeneralMsgOutput', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='message', full_name='proto.GeneralMsgOutput.message', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=66, + serialized_end=101, +) + + +_GENERALHEARTBEATOUTPUT = _descriptor.Descriptor( + name='GeneralHeartbeatOutput', + full_name='proto.GeneralHeartbeatOutput', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='message', full_name='proto.GeneralHeartbeatOutput.message', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=103, + 
serialized_end=144, +) + + +_PINGOUTOUT = _descriptor.Descriptor( + name='PingOutout', + full_name='proto.PingOutout', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='message', full_name='proto.PingOutout.message', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=146, + serialized_end=175, +) + + +_TESTINPUT = _descriptor.Descriptor( + name='TestInput', + full_name='proto.TestInput', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='people', full_name='proto.TestInput.people', index=0, + number=1, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=177, + serialized_end=219, +) + + +_TESTOUTPUT_MAPVALUEENTRY = _descriptor.Descriptor( + name='MapValueEntry', + full_name='proto.TestOutput.MapValueEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='key', full_name='proto.TestOutput.MapValueEntry.key', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + 
_descriptor.FieldDescriptor( + name='value', full_name='proto.TestOutput.MapValueEntry.value', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=_descriptor._ParseOptions(descriptor_pb2.MessageOptions(), _b('8\001')), + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=365, + serialized_end=412, +) + +_TESTOUTPUT = _descriptor.Descriptor( + name='TestOutput', + full_name='proto.TestOutput', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='strArray', full_name='proto.TestOutput.strArray', index=0, + number=1, type=9, cpp_type=9, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='mapValue', full_name='proto.TestOutput.mapValue', index=1, + number=2, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='pn', full_name='proto.TestOutput.pn', index=2, + number=4, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='profile', full_name='proto.TestOutput.profile', index=3, + number=3, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + 
is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[_TESTOUTPUT_MAPVALUEENTRY, ], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=222, + serialized_end=412, +) + + +_PERSON_PHONENUMBER = _descriptor.Descriptor( + name='PhoneNumber', + full_name='proto.Person.PhoneNumber', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='number', full_name='proto.Person.PhoneNumber.number', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='type', full_name='proto.Person.PhoneNumber.type', index=1, + number=2, type=14, cpp_type=8, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=509, + serialized_end=577, +) + +_PERSON = _descriptor.Descriptor( + name='Person', + full_name='proto.Person', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='proto.Person.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='id', full_name='proto.Person.id', index=1, + number=2, type=3, cpp_type=2, label=1, + 
has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='email', full_name='proto.Person.email', index=2, + number=3, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='phones', full_name='proto.Person.phones', index=3, + number=4, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[_PERSON_PHONENUMBER, ], + enum_types=[ + _PERSON_PHONETYPE, + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=415, + serialized_end=622, +) + + +_PROFILE_FILE = _descriptor.Descriptor( + name='File', + full_name='proto.Profile.File', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='name', full_name='proto.Profile.File.name', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='typeInt32', full_name='proto.Profile.File.typeInt32', index=1, + number=2, type=5, cpp_type=1, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='typeInt64', full_name='proto.Profile.File.typeInt64', 
index=2, + number=3, type=3, cpp_type=2, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='typeFloat', full_name='proto.Profile.File.typeFloat', index=3, + number=4, type=2, cpp_type=6, label=1, + has_default_value=False, default_value=float(0), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='typeDouble', full_name='proto.Profile.File.typeDouble', index=4, + number=5, type=1, cpp_type=5, label=1, + has_default_value=False, default_value=float(0), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='booleanValue', full_name='proto.Profile.File.booleanValue', index=5, + number=6, type=8, cpp_type=7, label=1, + has_default_value=False, default_value=False, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=675, + serialized_end=794, +) + +_PROFILE = _descriptor.Descriptor( + name='Profile', + full_name='proto.Profile', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='fileInfo', full_name='proto.Profile.fileInfo', index=0, + number=1, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[_PROFILE_FILE, ], + enum_types=[ + ], + 
options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=625, + serialized_end=794, +) + + +_GETUSERSBYSTATUSINPUT = _descriptor.Descriptor( + name='GetUsersByStatusInput', + full_name='proto.GetUsersByStatusInput', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='status', full_name='proto.GetUsersByStatusInput.status', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='key', full_name='proto.GetUsersByStatusInput.key', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=796, + serialized_end=848, +) + + +_GETUSERSBYSTATUSOUTPUT = _descriptor.Descriptor( + name='GetUsersByStatusOutput', + full_name='proto.GetUsersByStatusOutput', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='users', full_name='proto.GetUsersByStatusOutput.users', index=0, + number=1, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=850, + serialized_end=908, +) + + 
+_ACCOUNTHEARTBEATOUTPUT = _descriptor.Descriptor( + name='AccountHeartbeatOutput', + full_name='proto.AccountHeartbeatOutput', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='message', full_name='proto.AccountHeartbeatOutput.message', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=910, + serialized_end=951, +) + + +_LOGININPUT = _descriptor.Descriptor( + name='LoginInput', + full_name='proto.LoginInput', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='email', full_name='proto.LoginInput.email', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='password', full_name='proto.LoginInput.password', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=953, + serialized_end=998, +) + + +_USEROUTPUT = _descriptor.Descriptor( + name='UserOutput', + full_name='proto.UserOutput', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + 
_descriptor.FieldDescriptor( + name='id', full_name='proto.UserOutput.id', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='email', full_name='proto.UserOutput.email', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='status', full_name='proto.UserOutput.status', index=2, + number=3, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='phone', full_name='proto.UserOutput.phone', index=3, + number=4, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='firstName', full_name='proto.UserOutput.firstName', index=4, + number=5, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='lastName', full_name='proto.UserOutput.lastName', index=5, + number=6, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, 
file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='createdTime', full_name='proto.UserOutput.createdTime', index=6, + number=7, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='namespace', full_name='proto.UserOutput.namespace', index=7, + number=8, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='domainName', full_name='proto.UserOutput.domainName', index=8, + number=9, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='company', full_name='proto.UserOutput.company', index=9, + number=10, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='url', full_name='proto.UserOutput.url', index=10, + number=11, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='agentAccount', full_name='proto.UserOutput.agentAccount', index=11, + number=12, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, 
+ is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='agentPassword', full_name='proto.UserOutput.agentPassword', index=12, + number=13, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1001, + serialized_end=1243, +) + + +_SINGUPINPUT = _descriptor.Descriptor( + name='SingupInput', + full_name='proto.SingupInput', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='email', full_name='proto.SingupInput.email', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='phone', full_name='proto.SingupInput.phone', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='firstName', full_name='proto.SingupInput.firstName', index=2, + number=3, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='lastName', full_name='proto.SingupInput.lastName', index=3, + number=4, type=9, cpp_type=9, label=1, + has_default_value=False, 
default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='password', full_name='proto.SingupInput.password', index=4, + number=5, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='company', full_name='proto.SingupInput.company', index=5, + number=6, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1245, + serialized_end=1360, +) + + +_SINGUPOUTPUT = _descriptor.Descriptor( + name='SingupOutput', + full_name='proto.SingupOutput', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='message', full_name='proto.SingupOutput.message', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1362, + serialized_end=1393, +) + + +_DELETEUSERINPUT = _descriptor.Descriptor( + name='DeleteUserInput', + full_name='proto.DeleteUserInput', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + 
_descriptor.FieldDescriptor( + name='email', full_name='proto.DeleteUserInput.email', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='key', full_name='proto.DeleteUserInput.key', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1395, + serialized_end=1440, +) + + +_UPDATEUSERSTATUSINPUT = _descriptor.Descriptor( + name='UpdateUserStatusInput', + full_name='proto.UpdateUserStatusInput', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='email', full_name='proto.UpdateUserStatusInput.email', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='key', full_name='proto.UpdateUserStatusInput.key', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='status', full_name='proto.UpdateUserStatusInput.status', index=2, + number=3, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + 
message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1442, + serialized_end=1509, +) + + +_RESENDCONFIRMCODEINPUT = _descriptor.Descriptor( + name='ResendConfirmCodeInput', + full_name='proto.ResendConfirmCodeInput', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='email', full_name='proto.ResendConfirmCodeInput.email', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1511, + serialized_end=1550, +) + + +_CONFIRMINPUT = _descriptor.Descriptor( + name='ConfirmInput', + full_name='proto.ConfirmInput', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='email', full_name='proto.ConfirmInput.email', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='code', full_name='proto.ConfirmInput.code', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + 
enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1552, + serialized_end=1595, +) + + +_DPHEARTBEATOUTPUT = _descriptor.Descriptor( + name='DPHeartbeatOutput', + full_name='proto.DPHeartbeatOutput', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='message', full_name='proto.DPHeartbeatOutput.message', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1597, + serialized_end=1633, +) + + +_DPGETPHYSICALDISKSINPUT = _descriptor.Descriptor( + name='DPGetPhysicalDisksInput', + full_name='proto.DPGetPhysicalDisksInput', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='hostIds', full_name='proto.DPGetPhysicalDisksInput.hostIds', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='ids', full_name='proto.DPGetPhysicalDisksInput.ids', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='limit', full_name='proto.DPGetPhysicalDisksInput.limit', index=2, + number=3, type=3, cpp_type=2, label=1, + has_default_value=False, 
default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='page', full_name='proto.DPGetPhysicalDisksInput.page', index=3, + number=4, type=3, cpp_type=2, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='from', full_name='proto.DPGetPhysicalDisksInput.from', index=4, + number=5, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='to', full_name='proto.DPGetPhysicalDisksInput.to', index=5, + number=6, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1635, + serialized_end=1745, +) + + +_DPGETDISKSPREDICTIONINPUT = _descriptor.Descriptor( + name='DPGetDisksPredictionInput', + full_name='proto.DPGetDisksPredictionInput', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='physicalDiskIds', full_name='proto.DPGetDisksPredictionInput.physicalDiskIds', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + 
name='status', full_name='proto.DPGetDisksPredictionInput.status', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='limit', full_name='proto.DPGetDisksPredictionInput.limit', index=2, + number=3, type=3, cpp_type=2, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='page', full_name='proto.DPGetDisksPredictionInput.page', index=3, + number=4, type=3, cpp_type=2, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='from', full_name='proto.DPGetDisksPredictionInput.from', index=4, + number=5, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='to', full_name='proto.DPGetDisksPredictionInput.to', index=5, + number=6, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1747, + serialized_end=1870, +) + + +_DPBINARYOUTPUT = _descriptor.Descriptor( + name='DPBinaryOutput', + full_name='proto.DPBinaryOutput', + 
filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='data', full_name='proto.DPBinaryOutput.data', index=0, + number=1, type=12, cpp_type=9, label=1, + has_default_value=False, default_value=_b(""), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1872, + serialized_end=1902, +) + + +_COLLECTIONHEARTBEATOUTPUT = _descriptor.Descriptor( + name='CollectionHeartbeatOutput', + full_name='proto.CollectionHeartbeatOutput', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='message', full_name='proto.CollectionHeartbeatOutput.message', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1904, + serialized_end=1948, +) + + +_POSTMETRICSINPUT = _descriptor.Descriptor( + name='PostMetricsInput', + full_name='proto.PostMetricsInput', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='points', full_name='proto.PostMetricsInput.points', index=0, + number=1, type=9, cpp_type=9, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + 
syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1950, + serialized_end=1984, +) + + +_POSTDBRELAYINPUT = _descriptor.Descriptor( + name='PostDBRelayInput', + full_name='proto.PostDBRelayInput', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='cmds', full_name='proto.PostDBRelayInput.cmds', index=0, + number=1, type=9, cpp_type=9, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1986, + serialized_end=2018, +) + + +_COLLECTIONMESSAGEOUTPUT = _descriptor.Descriptor( + name='CollectionMessageOutput', + full_name='proto.CollectionMessageOutput', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='status', full_name='proto.CollectionMessageOutput.status', index=0, + number=1, type=3, cpp_type=2, label=1, + has_default_value=False, default_value=0, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='message', full_name='proto.CollectionMessageOutput.message', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=2020, + serialized_end=2078, +) + +_TESTINPUT.fields_by_name['people'].message_type = _PERSON 
+_TESTOUTPUT_MAPVALUEENTRY.containing_type = _TESTOUTPUT +_TESTOUTPUT.fields_by_name['mapValue'].message_type = _TESTOUTPUT_MAPVALUEENTRY +_TESTOUTPUT.fields_by_name['pn'].message_type = _PERSON +_TESTOUTPUT.fields_by_name['profile'].message_type = _PROFILE +_PERSON_PHONENUMBER.fields_by_name['type'].enum_type = _PERSON_PHONETYPE +_PERSON_PHONENUMBER.containing_type = _PERSON +_PERSON.fields_by_name['phones'].message_type = _PERSON_PHONENUMBER +_PERSON_PHONETYPE.containing_type = _PERSON +_PROFILE_FILE.containing_type = _PROFILE +_PROFILE.fields_by_name['fileInfo'].message_type = _PROFILE_FILE +_GETUSERSBYSTATUSOUTPUT.fields_by_name['users'].message_type = _USEROUTPUT +DESCRIPTOR.message_types_by_name['Empty'] = _EMPTY +DESCRIPTOR.message_types_by_name['GeneralMsgOutput'] = _GENERALMSGOUTPUT +DESCRIPTOR.message_types_by_name['GeneralHeartbeatOutput'] = _GENERALHEARTBEATOUTPUT +DESCRIPTOR.message_types_by_name['PingOutout'] = _PINGOUTOUT +DESCRIPTOR.message_types_by_name['TestInput'] = _TESTINPUT +DESCRIPTOR.message_types_by_name['TestOutput'] = _TESTOUTPUT +DESCRIPTOR.message_types_by_name['Person'] = _PERSON +DESCRIPTOR.message_types_by_name['Profile'] = _PROFILE +DESCRIPTOR.message_types_by_name['GetUsersByStatusInput'] = _GETUSERSBYSTATUSINPUT +DESCRIPTOR.message_types_by_name['GetUsersByStatusOutput'] = _GETUSERSBYSTATUSOUTPUT +DESCRIPTOR.message_types_by_name['AccountHeartbeatOutput'] = _ACCOUNTHEARTBEATOUTPUT +DESCRIPTOR.message_types_by_name['LoginInput'] = _LOGININPUT +DESCRIPTOR.message_types_by_name['UserOutput'] = _USEROUTPUT +DESCRIPTOR.message_types_by_name['SingupInput'] = _SINGUPINPUT +DESCRIPTOR.message_types_by_name['SingupOutput'] = _SINGUPOUTPUT +DESCRIPTOR.message_types_by_name['DeleteUserInput'] = _DELETEUSERINPUT +DESCRIPTOR.message_types_by_name['UpdateUserStatusInput'] = _UPDATEUSERSTATUSINPUT +DESCRIPTOR.message_types_by_name['ResendConfirmCodeInput'] = _RESENDCONFIRMCODEINPUT +DESCRIPTOR.message_types_by_name['ConfirmInput'] = 
_CONFIRMINPUT +DESCRIPTOR.message_types_by_name['DPHeartbeatOutput'] = _DPHEARTBEATOUTPUT +DESCRIPTOR.message_types_by_name['DPGetPhysicalDisksInput'] = _DPGETPHYSICALDISKSINPUT +DESCRIPTOR.message_types_by_name['DPGetDisksPredictionInput'] = _DPGETDISKSPREDICTIONINPUT +DESCRIPTOR.message_types_by_name['DPBinaryOutput'] = _DPBINARYOUTPUT +DESCRIPTOR.message_types_by_name['CollectionHeartbeatOutput'] = _COLLECTIONHEARTBEATOUTPUT +DESCRIPTOR.message_types_by_name['PostMetricsInput'] = _POSTMETRICSINPUT +DESCRIPTOR.message_types_by_name['PostDBRelayInput'] = _POSTDBRELAYINPUT +DESCRIPTOR.message_types_by_name['CollectionMessageOutput'] = _COLLECTIONMESSAGEOUTPUT +_sym_db.RegisterFileDescriptor(DESCRIPTOR) + +Empty = _reflection.GeneratedProtocolMessageType('Empty', (_message.Message,), dict( + DESCRIPTOR = _EMPTY, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.Empty) + )) +_sym_db.RegisterMessage(Empty) + +GeneralMsgOutput = _reflection.GeneratedProtocolMessageType('GeneralMsgOutput', (_message.Message,), dict( + DESCRIPTOR = _GENERALMSGOUTPUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.GeneralMsgOutput) + )) +_sym_db.RegisterMessage(GeneralMsgOutput) + +GeneralHeartbeatOutput = _reflection.GeneratedProtocolMessageType('GeneralHeartbeatOutput', (_message.Message,), dict( + DESCRIPTOR = _GENERALHEARTBEATOUTPUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.GeneralHeartbeatOutput) + )) +_sym_db.RegisterMessage(GeneralHeartbeatOutput) + +PingOutout = _reflection.GeneratedProtocolMessageType('PingOutout', (_message.Message,), dict( + DESCRIPTOR = _PINGOUTOUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.PingOutout) + )) +_sym_db.RegisterMessage(PingOutout) + +TestInput = _reflection.GeneratedProtocolMessageType('TestInput', (_message.Message,), dict( + DESCRIPTOR = _TESTINPUT, + __module__ = 'mainServer_pb2' + # 
@@protoc_insertion_point(class_scope:proto.TestInput) + )) +_sym_db.RegisterMessage(TestInput) + +TestOutput = _reflection.GeneratedProtocolMessageType('TestOutput', (_message.Message,), dict( + + MapValueEntry = _reflection.GeneratedProtocolMessageType('MapValueEntry', (_message.Message,), dict( + DESCRIPTOR = _TESTOUTPUT_MAPVALUEENTRY, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.TestOutput.MapValueEntry) + )) + , + DESCRIPTOR = _TESTOUTPUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.TestOutput) + )) +_sym_db.RegisterMessage(TestOutput) +_sym_db.RegisterMessage(TestOutput.MapValueEntry) + +Person = _reflection.GeneratedProtocolMessageType('Person', (_message.Message,), dict( + + PhoneNumber = _reflection.GeneratedProtocolMessageType('PhoneNumber', (_message.Message,), dict( + DESCRIPTOR = _PERSON_PHONENUMBER, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.Person.PhoneNumber) + )) + , + DESCRIPTOR = _PERSON, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.Person) + )) +_sym_db.RegisterMessage(Person) +_sym_db.RegisterMessage(Person.PhoneNumber) + +Profile = _reflection.GeneratedProtocolMessageType('Profile', (_message.Message,), dict( + + File = _reflection.GeneratedProtocolMessageType('File', (_message.Message,), dict( + DESCRIPTOR = _PROFILE_FILE, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.Profile.File) + )) + , + DESCRIPTOR = _PROFILE, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.Profile) + )) +_sym_db.RegisterMessage(Profile) +_sym_db.RegisterMessage(Profile.File) + +GetUsersByStatusInput = _reflection.GeneratedProtocolMessageType('GetUsersByStatusInput', (_message.Message,), dict( + DESCRIPTOR = _GETUSERSBYSTATUSINPUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.GetUsersByStatusInput) + )) 
+_sym_db.RegisterMessage(GetUsersByStatusInput) + +GetUsersByStatusOutput = _reflection.GeneratedProtocolMessageType('GetUsersByStatusOutput', (_message.Message,), dict( + DESCRIPTOR = _GETUSERSBYSTATUSOUTPUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.GetUsersByStatusOutput) + )) +_sym_db.RegisterMessage(GetUsersByStatusOutput) + +AccountHeartbeatOutput = _reflection.GeneratedProtocolMessageType('AccountHeartbeatOutput', (_message.Message,), dict( + DESCRIPTOR = _ACCOUNTHEARTBEATOUTPUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.AccountHeartbeatOutput) + )) +_sym_db.RegisterMessage(AccountHeartbeatOutput) + +LoginInput = _reflection.GeneratedProtocolMessageType('LoginInput', (_message.Message,), dict( + DESCRIPTOR = _LOGININPUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.LoginInput) + )) +_sym_db.RegisterMessage(LoginInput) + +UserOutput = _reflection.GeneratedProtocolMessageType('UserOutput', (_message.Message,), dict( + DESCRIPTOR = _USEROUTPUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.UserOutput) + )) +_sym_db.RegisterMessage(UserOutput) + +SingupInput = _reflection.GeneratedProtocolMessageType('SingupInput', (_message.Message,), dict( + DESCRIPTOR = _SINGUPINPUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.SingupInput) + )) +_sym_db.RegisterMessage(SingupInput) + +SingupOutput = _reflection.GeneratedProtocolMessageType('SingupOutput', (_message.Message,), dict( + DESCRIPTOR = _SINGUPOUTPUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.SingupOutput) + )) +_sym_db.RegisterMessage(SingupOutput) + +DeleteUserInput = _reflection.GeneratedProtocolMessageType('DeleteUserInput', (_message.Message,), dict( + DESCRIPTOR = _DELETEUSERINPUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.DeleteUserInput) + )) 
+_sym_db.RegisterMessage(DeleteUserInput) + +UpdateUserStatusInput = _reflection.GeneratedProtocolMessageType('UpdateUserStatusInput', (_message.Message,), dict( + DESCRIPTOR = _UPDATEUSERSTATUSINPUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.UpdateUserStatusInput) + )) +_sym_db.RegisterMessage(UpdateUserStatusInput) + +ResendConfirmCodeInput = _reflection.GeneratedProtocolMessageType('ResendConfirmCodeInput', (_message.Message,), dict( + DESCRIPTOR = _RESENDCONFIRMCODEINPUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.ResendConfirmCodeInput) + )) +_sym_db.RegisterMessage(ResendConfirmCodeInput) + +ConfirmInput = _reflection.GeneratedProtocolMessageType('ConfirmInput', (_message.Message,), dict( + DESCRIPTOR = _CONFIRMINPUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.ConfirmInput) + )) +_sym_db.RegisterMessage(ConfirmInput) + +DPHeartbeatOutput = _reflection.GeneratedProtocolMessageType('DPHeartbeatOutput', (_message.Message,), dict( + DESCRIPTOR = _DPHEARTBEATOUTPUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.DPHeartbeatOutput) + )) +_sym_db.RegisterMessage(DPHeartbeatOutput) + +DPGetPhysicalDisksInput = _reflection.GeneratedProtocolMessageType('DPGetPhysicalDisksInput', (_message.Message,), dict( + DESCRIPTOR = _DPGETPHYSICALDISKSINPUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.DPGetPhysicalDisksInput) + )) +_sym_db.RegisterMessage(DPGetPhysicalDisksInput) + +DPGetDisksPredictionInput = _reflection.GeneratedProtocolMessageType('DPGetDisksPredictionInput', (_message.Message,), dict( + DESCRIPTOR = _DPGETDISKSPREDICTIONINPUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.DPGetDisksPredictionInput) + )) +_sym_db.RegisterMessage(DPGetDisksPredictionInput) + +DPBinaryOutput = _reflection.GeneratedProtocolMessageType('DPBinaryOutput', (_message.Message,), 
dict( + DESCRIPTOR = _DPBINARYOUTPUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.DPBinaryOutput) + )) +_sym_db.RegisterMessage(DPBinaryOutput) + +CollectionHeartbeatOutput = _reflection.GeneratedProtocolMessageType('CollectionHeartbeatOutput', (_message.Message,), dict( + DESCRIPTOR = _COLLECTIONHEARTBEATOUTPUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.CollectionHeartbeatOutput) + )) +_sym_db.RegisterMessage(CollectionHeartbeatOutput) + +PostMetricsInput = _reflection.GeneratedProtocolMessageType('PostMetricsInput', (_message.Message,), dict( + DESCRIPTOR = _POSTMETRICSINPUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.PostMetricsInput) + )) +_sym_db.RegisterMessage(PostMetricsInput) + +PostDBRelayInput = _reflection.GeneratedProtocolMessageType('PostDBRelayInput', (_message.Message,), dict( + DESCRIPTOR = _POSTDBRELAYINPUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.PostDBRelayInput) + )) +_sym_db.RegisterMessage(PostDBRelayInput) + +CollectionMessageOutput = _reflection.GeneratedProtocolMessageType('CollectionMessageOutput', (_message.Message,), dict( + DESCRIPTOR = _COLLECTIONMESSAGEOUTPUT, + __module__ = 'mainServer_pb2' + # @@protoc_insertion_point(class_scope:proto.CollectionMessageOutput) + )) +_sym_db.RegisterMessage(CollectionMessageOutput) + + +_TESTOUTPUT_MAPVALUEENTRY.has_options = True +_TESTOUTPUT_MAPVALUEENTRY._options = _descriptor._ParseOptions(descriptor_pb2.MessageOptions(), _b('8\001')) + +_GENERAL = _descriptor.ServiceDescriptor( + name='General', + full_name='proto.General', + file=DESCRIPTOR, + index=0, + options=None, + serialized_start=2081, + serialized_end=2342, + methods=[ + _descriptor.MethodDescriptor( + name='GeneralHeartbeat', + full_name='proto.General.GeneralHeartbeat', + index=0, + containing_service=None, + input_type=_EMPTY, + output_type=_GENERALHEARTBEATOUTPUT, + 
options=_descriptor._ParseOptions(descriptor_pb2.MethodOptions(), _b('\202\323\344\223\002\034\022\032/apis/v2/general/heartbeat')), + ), + _descriptor.MethodDescriptor( + name='Ping', + full_name='proto.General.Ping', + index=1, + containing_service=None, + input_type=_EMPTY, + output_type=_PINGOUTOUT, + options=_descriptor._ParseOptions(descriptor_pb2.MethodOptions(), _b('\202\323\344\223\002\027\022\025/apis/v2/general/ping')), + ), + _descriptor.MethodDescriptor( + name='Test', + full_name='proto.General.Test', + index=2, + containing_service=None, + input_type=_TESTINPUT, + output_type=_TESTOUTPUT, + options=_descriptor._ParseOptions(descriptor_pb2.MethodOptions(), _b('\202\323\344\223\002\032\"\025/apis/v2/general/test:\001*')), + ), +]) +_sym_db.RegisterServiceDescriptor(_GENERAL) + +DESCRIPTOR.services_by_name['General'] = _GENERAL + + +_ACCOUNT = _descriptor.ServiceDescriptor( + name='Account', + full_name='proto.Account', + file=DESCRIPTOR, + index=1, + options=None, + serialized_start=2345, + serialized_end=3149, + methods=[ + _descriptor.MethodDescriptor( + name='AccountHeartbeat', + full_name='proto.Account.AccountHeartbeat', + index=0, + containing_service=None, + input_type=_EMPTY, + output_type=_ACCOUNTHEARTBEATOUTPUT, + options=_descriptor._ParseOptions(descriptor_pb2.MethodOptions(), _b('\202\323\344\223\002\034\022\032/apis/v2/account/heartbeat')), + ), + _descriptor.MethodDescriptor( + name='Login', + full_name='proto.Account.Login', + index=1, + containing_service=None, + input_type=_LOGININPUT, + output_type=_USEROUTPUT, + options=_descriptor._ParseOptions(descriptor_pb2.MethodOptions(), _b('\202\323\344\223\002\031\"\024/apis/v2/users/login:\001*')), + ), + _descriptor.MethodDescriptor( + name='Signup', + full_name='proto.Account.Signup', + index=2, + containing_service=None, + input_type=_SINGUPINPUT, + output_type=_SINGUPOUTPUT, + options=_descriptor._ParseOptions(descriptor_pb2.MethodOptions(), 
_b('\202\323\344\223\002\032\"\025/apis/v2/users/signup:\001*')), + ), + _descriptor.MethodDescriptor( + name='ResendConfirmCode', + full_name='proto.Account.ResendConfirmCode', + index=3, + containing_service=None, + input_type=_RESENDCONFIRMCODEINPUT, + output_type=_GENERALMSGOUTPUT, + options=_descriptor._ParseOptions(descriptor_pb2.MethodOptions(), _b('\202\323\344\223\002\037\"\032/apis/v2/users/confirmcode:\001*')), + ), + _descriptor.MethodDescriptor( + name='Confirm', + full_name='proto.Account.Confirm', + index=4, + containing_service=None, + input_type=_CONFIRMINPUT, + output_type=_GENERALMSGOUTPUT, + options=_descriptor._ParseOptions(descriptor_pb2.MethodOptions(), _b('\202\323\344\223\002 \"\033/apis/v2/users/confirmation:\001*')), + ), + _descriptor.MethodDescriptor( + name='GetUsersByStatus', + full_name='proto.Account.GetUsersByStatus', + index=5, + containing_service=None, + input_type=_GETUSERSBYSTATUSINPUT, + output_type=_GETUSERSBYSTATUSOUTPUT, + options=_descriptor._ParseOptions(descriptor_pb2.MethodOptions(), _b('\202\323\344\223\002\020\022\016/apis/v2/users')), + ), + _descriptor.MethodDescriptor( + name='DeleteUser', + full_name='proto.Account.DeleteUser', + index=6, + containing_service=None, + input_type=_DELETEUSERINPUT, + output_type=_GENERALMSGOUTPUT, + options=_descriptor._ParseOptions(descriptor_pb2.MethodOptions(), _b('\202\323\344\223\002\036*\034/apis/v2/users/{email}/{key}')), + ), + _descriptor.MethodDescriptor( + name='UpdateUserStatus', + full_name='proto.Account.UpdateUserStatus', + index=7, + containing_service=None, + input_type=_UPDATEUSERSTATUSINPUT, + output_type=_GENERALMSGOUTPUT, + options=_descriptor._ParseOptions(descriptor_pb2.MethodOptions(), _b('\202\323\344\223\002\033\032\026/apis/v2/users/{email}:\001*')), + ), +]) +_sym_db.RegisterServiceDescriptor(_ACCOUNT) + +DESCRIPTOR.services_by_name['Account'] = _ACCOUNT + + +_DISKPROPHET = _descriptor.ServiceDescriptor( + name='Diskprophet', + 
full_name='proto.Diskprophet', + file=DESCRIPTOR, + index=2, + options=None, + serialized_start=3152, + serialized_end=3487, + methods=[ + _descriptor.MethodDescriptor( + name='DPHeartbeat', + full_name='proto.Diskprophet.DPHeartbeat', + index=0, + containing_service=None, + input_type=_EMPTY, + output_type=_DPHEARTBEATOUTPUT, + options=_descriptor._ParseOptions(descriptor_pb2.MethodOptions(), _b('\202\323\344\223\002\027\022\025/apis/v2/dp/heartbeat')), + ), + _descriptor.MethodDescriptor( + name='DPGetPhysicalDisks', + full_name='proto.Diskprophet.DPGetPhysicalDisks', + index=1, + containing_service=None, + input_type=_DPGETPHYSICALDISKSINPUT, + output_type=_DPBINARYOUTPUT, + options=_descriptor._ParseOptions(descriptor_pb2.MethodOptions(), _b('\202\323\344\223\002\031\022\027/apis/v2/physical-disks')), + ), + _descriptor.MethodDescriptor( + name='DPGetDisksPrediction', + full_name='proto.Diskprophet.DPGetDisksPrediction', + index=2, + containing_service=None, + input_type=_DPGETDISKSPREDICTIONINPUT, + output_type=_DPBINARYOUTPUT, + options=_descriptor._ParseOptions(descriptor_pb2.MethodOptions(), _b('\202\323\344\223\002%\022#/apis/v2/physical-disks/predictions')), + ), +]) +_sym_db.RegisterServiceDescriptor(_DISKPROPHET) + +DESCRIPTOR.services_by_name['Diskprophet'] = _DISKPROPHET + + +_COLLECTION = _descriptor.ServiceDescriptor( + name='Collection', + full_name='proto.Collection', + file=DESCRIPTOR, + index=3, + options=None, + serialized_start=3490, + serialized_end=3837, + methods=[ + _descriptor.MethodDescriptor( + name='CollectionHeartbeat', + full_name='proto.Collection.CollectionHeartbeat', + index=0, + containing_service=None, + input_type=_EMPTY, + output_type=_COLLECTIONHEARTBEATOUTPUT, + options=_descriptor._ParseOptions(descriptor_pb2.MethodOptions(), _b('\202\323\344\223\002\037\022\035/apis/v2/collection/heartbeat')), + ), + _descriptor.MethodDescriptor( + name='PostDBRelay', + full_name='proto.Collection.PostDBRelay', + index=1, + 
containing_service=None, + input_type=_POSTDBRELAYINPUT, + output_type=_COLLECTIONMESSAGEOUTPUT, + options=_descriptor._ParseOptions(descriptor_pb2.MethodOptions(), _b('\202\323\344\223\002!\"\034/apis/v2/collection/relation:\001*')), + ), + _descriptor.MethodDescriptor( + name='PostMetrics', + full_name='proto.Collection.PostMetrics', + index=2, + containing_service=None, + input_type=_POSTMETRICSINPUT, + output_type=_COLLECTIONMESSAGEOUTPUT, + options=_descriptor._ParseOptions(descriptor_pb2.MethodOptions(), _b('\202\323\344\223\002 \"\033/apis/v2/collection/metrics:\001*')), + ), +]) +_sym_db.RegisterServiceDescriptor(_COLLECTION) + +DESCRIPTOR.services_by_name['Collection'] = _COLLECTION + +# @@protoc_insertion_point(module_scope) diff --git a/src/pybind/mgr/diskprediction/common/client_pb2_grpc.py b/src/pybind/mgr/diskprediction/common/client_pb2_grpc.py new file mode 100644 index 00000000000..c1c32178a6a --- /dev/null +++ b/src/pybind/mgr/diskprediction/common/client_pb2_grpc.py @@ -0,0 +1,395 @@ +# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT! +import grpc + +import client_pb2 as mainServer__pb2 + + +class GeneralStub(object): + """-------------------------- General ------------------------------------- + """ + + def __init__(self, channel): + """Constructor. + + Args: + channel: A grpc.Channel. 
+ """ + self.GeneralHeartbeat = channel.unary_unary( + '/proto.General/GeneralHeartbeat', + request_serializer=mainServer__pb2.Empty.SerializeToString, + response_deserializer=mainServer__pb2.GeneralHeartbeatOutput.FromString, + ) + self.Ping = channel.unary_unary( + '/proto.General/Ping', + request_serializer=mainServer__pb2.Empty.SerializeToString, + response_deserializer=mainServer__pb2.PingOutout.FromString, + ) + self.Test = channel.unary_unary( + '/proto.General/Test', + request_serializer=mainServer__pb2.TestInput.SerializeToString, + response_deserializer=mainServer__pb2.TestOutput.FromString, + ) + + +class GeneralServicer(object): + """-------------------------- General ------------------------------------- + """ + + def GeneralHeartbeat(self, request, context): + # missing associated documentation comment in .proto file + pass + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details('Method not implemented!') + raise NotImplementedError('Method not implemented!') + + def Ping(self, request, context): + # missing associated documentation comment in .proto file + pass + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details('Method not implemented!') + raise NotImplementedError('Method not implemented!') + + def Test(self, request, context): + # missing associated documentation comment in .proto file + pass + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details('Method not implemented!') + raise NotImplementedError('Method not implemented!') + + +def add_GeneralServicer_to_server(servicer, server): + rpc_method_handlers = { + 'GeneralHeartbeat': grpc.unary_unary_rpc_method_handler( + servicer.GeneralHeartbeat, + request_deserializer=mainServer__pb2.Empty.FromString, + response_serializer=mainServer__pb2.GeneralHeartbeatOutput.SerializeToString, + ), + 'Ping': grpc.unary_unary_rpc_method_handler( + servicer.Ping, + request_deserializer=mainServer__pb2.Empty.FromString, + 
response_serializer=mainServer__pb2.PingOutout.SerializeToString, + ), + 'Test': grpc.unary_unary_rpc_method_handler( + servicer.Test, + request_deserializer=mainServer__pb2.TestInput.FromString, + response_serializer=mainServer__pb2.TestOutput.SerializeToString, + ), + } + generic_handler = grpc.method_handlers_generic_handler( + 'proto.General', rpc_method_handlers) + server.add_generic_rpc_handlers((generic_handler,)) + + +class AccountStub(object): + """-------------------------- SERVER ACCOUNT ------------------------------ + """ + + def __init__(self, channel): + """Constructor. + + Args: + channel: A grpc.Channel. + """ + self.AccountHeartbeat = channel.unary_unary( + '/proto.Account/AccountHeartbeat', + request_serializer=mainServer__pb2.Empty.SerializeToString, + response_deserializer=mainServer__pb2.AccountHeartbeatOutput.FromString, + ) + self.Login = channel.unary_unary( + '/proto.Account/Login', + request_serializer=mainServer__pb2.LoginInput.SerializeToString, + response_deserializer=mainServer__pb2.UserOutput.FromString, + ) + self.Signup = channel.unary_unary( + '/proto.Account/Signup', + request_serializer=mainServer__pb2.SingupInput.SerializeToString, + response_deserializer=mainServer__pb2.SingupOutput.FromString, + ) + self.ResendConfirmCode = channel.unary_unary( + '/proto.Account/ResendConfirmCode', + request_serializer=mainServer__pb2.ResendConfirmCodeInput.SerializeToString, + response_deserializer=mainServer__pb2.GeneralMsgOutput.FromString, + ) + self.Confirm = channel.unary_unary( + '/proto.Account/Confirm', + request_serializer=mainServer__pb2.ConfirmInput.SerializeToString, + response_deserializer=mainServer__pb2.GeneralMsgOutput.FromString, + ) + self.GetUsersByStatus = channel.unary_unary( + '/proto.Account/GetUsersByStatus', + request_serializer=mainServer__pb2.GetUsersByStatusInput.SerializeToString, + response_deserializer=mainServer__pb2.GetUsersByStatusOutput.FromString, + ) + self.DeleteUser = channel.unary_unary( + 
'/proto.Account/DeleteUser', + request_serializer=mainServer__pb2.DeleteUserInput.SerializeToString, + response_deserializer=mainServer__pb2.GeneralMsgOutput.FromString, + ) + self.UpdateUserStatus = channel.unary_unary( + '/proto.Account/UpdateUserStatus', + request_serializer=mainServer__pb2.UpdateUserStatusInput.SerializeToString, + response_deserializer=mainServer__pb2.GeneralMsgOutput.FromString, + ) + + +class AccountServicer(object): + """-------------------------- SERVER ACCOUNT ------------------------------ + """ + + def AccountHeartbeat(self, request, context): + # missing associated documentation comment in .proto file + pass + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details('Method not implemented!') + raise NotImplementedError('Method not implemented!') + + def Login(self, request, context): + # missing associated documentation comment in .proto file + pass + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details('Method not implemented!') + raise NotImplementedError('Method not implemented!') + + def Signup(self, request, context): + # missing associated documentation comment in .proto file + pass + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details('Method not implemented!') + raise NotImplementedError('Method not implemented!') + + def ResendConfirmCode(self, request, context): + # missing associated documentation comment in .proto file + pass + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details('Method not implemented!') + raise NotImplementedError('Method not implemented!') + + def Confirm(self, request, context): + # missing associated documentation comment in .proto file + pass + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details('Method not implemented!') + raise NotImplementedError('Method not implemented!') + + def GetUsersByStatus(self, request, context): + # missing associated documentation comment in .proto file + pass + 
context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details('Method not implemented!') + raise NotImplementedError('Method not implemented!') + + def DeleteUser(self, request, context): + # missing associated documentation comment in .proto file + pass + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details('Method not implemented!') + raise NotImplementedError('Method not implemented!') + + def UpdateUserStatus(self, request, context): + # missing associated documentation comment in .proto file + pass + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details('Method not implemented!') + raise NotImplementedError('Method not implemented!') + + +def add_AccountServicer_to_server(servicer, server): + rpc_method_handlers = { + 'AccountHeartbeat': grpc.unary_unary_rpc_method_handler( + servicer.AccountHeartbeat, + request_deserializer=mainServer__pb2.Empty.FromString, + response_serializer=mainServer__pb2.AccountHeartbeatOutput.SerializeToString, + ), + 'Login': grpc.unary_unary_rpc_method_handler( + servicer.Login, + request_deserializer=mainServer__pb2.LoginInput.FromString, + response_serializer=mainServer__pb2.UserOutput.SerializeToString, + ), + 'Signup': grpc.unary_unary_rpc_method_handler( + servicer.Signup, + request_deserializer=mainServer__pb2.SingupInput.FromString, + response_serializer=mainServer__pb2.SingupOutput.SerializeToString, + ), + 'ResendConfirmCode': grpc.unary_unary_rpc_method_handler( + servicer.ResendConfirmCode, + request_deserializer=mainServer__pb2.ResendConfirmCodeInput.FromString, + response_serializer=mainServer__pb2.GeneralMsgOutput.SerializeToString, + ), + 'Confirm': grpc.unary_unary_rpc_method_handler( + servicer.Confirm, + request_deserializer=mainServer__pb2.ConfirmInput.FromString, + response_serializer=mainServer__pb2.GeneralMsgOutput.SerializeToString, + ), + 'GetUsersByStatus': grpc.unary_unary_rpc_method_handler( + servicer.GetUsersByStatus, + 
request_deserializer=mainServer__pb2.GetUsersByStatusInput.FromString, + response_serializer=mainServer__pb2.GetUsersByStatusOutput.SerializeToString, + ), + 'DeleteUser': grpc.unary_unary_rpc_method_handler( + servicer.DeleteUser, + request_deserializer=mainServer__pb2.DeleteUserInput.FromString, + response_serializer=mainServer__pb2.GeneralMsgOutput.SerializeToString, + ), + 'UpdateUserStatus': grpc.unary_unary_rpc_method_handler( + servicer.UpdateUserStatus, + request_deserializer=mainServer__pb2.UpdateUserStatusInput.FromString, + response_serializer=mainServer__pb2.GeneralMsgOutput.SerializeToString, + ), + } + generic_handler = grpc.method_handlers_generic_handler( + 'proto.Account', rpc_method_handlers) + server.add_generic_rpc_handlers((generic_handler,)) + + +class DiskprophetStub(object): + """------------------------ SERVER DISKPROPHET --------------------------- + """ + + def __init__(self, channel): + """Constructor. + + Args: + channel: A grpc.Channel. + """ + self.DPHeartbeat = channel.unary_unary( + '/proto.Diskprophet/DPHeartbeat', + request_serializer=mainServer__pb2.Empty.SerializeToString, + response_deserializer=mainServer__pb2.DPHeartbeatOutput.FromString, + ) + self.DPGetPhysicalDisks = channel.unary_unary( + '/proto.Diskprophet/DPGetPhysicalDisks', + request_serializer=mainServer__pb2.DPGetPhysicalDisksInput.SerializeToString, + response_deserializer=mainServer__pb2.DPBinaryOutput.FromString, + ) + self.DPGetDisksPrediction = channel.unary_unary( + '/proto.Diskprophet/DPGetDisksPrediction', + request_serializer=mainServer__pb2.DPGetDisksPredictionInput.SerializeToString, + response_deserializer=mainServer__pb2.DPBinaryOutput.FromString, + ) + + +class DiskprophetServicer(object): + """------------------------ SERVER DISKPROPHET --------------------------- + """ + + def DPHeartbeat(self, request, context): + # missing associated documentation comment in .proto file + pass + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + 
context.set_details('Method not implemented!') + raise NotImplementedError('Method not implemented!') + + def DPGetPhysicalDisks(self, request, context): + # missing associated documentation comment in .proto file + pass + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details('Method not implemented!') + raise NotImplementedError('Method not implemented!') + + def DPGetDisksPrediction(self, request, context): + # missing associated documentation comment in .proto file + pass + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details('Method not implemented!') + raise NotImplementedError('Method not implemented!') + + +def add_DiskprophetServicer_to_server(servicer, server): + rpc_method_handlers = { + 'DPHeartbeat': grpc.unary_unary_rpc_method_handler( + servicer.DPHeartbeat, + request_deserializer=mainServer__pb2.Empty.FromString, + response_serializer=mainServer__pb2.DPHeartbeatOutput.SerializeToString, + ), + 'DPGetPhysicalDisks': grpc.unary_unary_rpc_method_handler( + servicer.DPGetPhysicalDisks, + request_deserializer=mainServer__pb2.DPGetPhysicalDisksInput.FromString, + response_serializer=mainServer__pb2.DPBinaryOutput.SerializeToString, + ), + 'DPGetDisksPrediction': grpc.unary_unary_rpc_method_handler( + servicer.DPGetDisksPrediction, + request_deserializer=mainServer__pb2.DPGetDisksPredictionInput.FromString, + response_serializer=mainServer__pb2.DPBinaryOutput.SerializeToString, + ), + } + generic_handler = grpc.method_handlers_generic_handler( + 'proto.Diskprophet', rpc_method_handlers) + server.add_generic_rpc_handlers((generic_handler,)) + + +class CollectionStub(object): + """------------------------ SERVER Collection --------------------------- + + """ + + def __init__(self, channel): + """Constructor. + + Args: + channel: A grpc.Channel. 
+ """ + self.CollectionHeartbeat = channel.unary_unary( + '/proto.Collection/CollectionHeartbeat', + request_serializer=mainServer__pb2.Empty.SerializeToString, + response_deserializer=mainServer__pb2.CollectionHeartbeatOutput.FromString, + ) + self.PostDBRelay = channel.unary_unary( + '/proto.Collection/PostDBRelay', + request_serializer=mainServer__pb2.PostDBRelayInput.SerializeToString, + response_deserializer=mainServer__pb2.CollectionMessageOutput.FromString, + ) + self.PostMetrics = channel.unary_unary( + '/proto.Collection/PostMetrics', + request_serializer=mainServer__pb2.PostMetricsInput.SerializeToString, + response_deserializer=mainServer__pb2.CollectionMessageOutput.FromString, + ) + + +class CollectionServicer(object): + """------------------------ SERVER Collection --------------------------- + + """ + + def CollectionHeartbeat(self, request, context): + # missing associated documentation comment in .proto file + pass + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details('Method not implemented!') + raise NotImplementedError('Method not implemented!') + + def PostDBRelay(self, request, context): + # missing associated documentation comment in .proto file + pass + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details('Method not implemented!') + raise NotImplementedError('Method not implemented!') + + def PostMetrics(self, request, context): + # missing associated documentation comment in .proto file + pass + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details('Method not implemented!') + raise NotImplementedError('Method not implemented!') + + +def add_CollectionServicer_to_server(servicer, server): + rpc_method_handlers = { + 'CollectionHeartbeat': grpc.unary_unary_rpc_method_handler( + servicer.CollectionHeartbeat, + request_deserializer=mainServer__pb2.Empty.FromString, + response_serializer=mainServer__pb2.CollectionHeartbeatOutput.SerializeToString, + ), + 'PostDBRelay': 
grpc.unary_unary_rpc_method_handler( + servicer.PostDBRelay, + request_deserializer=mainServer__pb2.PostDBRelayInput.FromString, + response_serializer=mainServer__pb2.CollectionMessageOutput.SerializeToString, + ), + 'PostMetrics': grpc.unary_unary_rpc_method_handler( + servicer.PostMetrics, + request_deserializer=mainServer__pb2.PostMetricsInput.FromString, + response_serializer=mainServer__pb2.CollectionMessageOutput.SerializeToString, + ), + } + generic_handler = grpc.method_handlers_generic_handler( + 'proto.Collection', rpc_method_handlers) + server.add_generic_rpc_handlers((generic_handler,)) diff --git a/src/pybind/mgr/diskprediction/common/clusterdata.py b/src/pybind/mgr/diskprediction/common/clusterdata.py new file mode 100644 index 00000000000..3810f0e9be0 --- /dev/null +++ b/src/pybind/mgr/diskprediction/common/clusterdata.py @@ -0,0 +1,511 @@ +""" +Ceph database API + +""" +from __future__ import absolute_import + +import json +import rbd +import rados +from mgr_module import CommandResult + + +RBD_FEATURES_NAME_MAPPING = { + rbd.RBD_FEATURE_LAYERING: 'layering', + rbd.RBD_FEATURE_STRIPINGV2: 'striping', + rbd.RBD_FEATURE_EXCLUSIVE_LOCK: 'exclusive-lock', + rbd.RBD_FEATURE_OBJECT_MAP: 'object-map', + rbd.RBD_FEATURE_FAST_DIFF: 'fast-diff', + rbd.RBD_FEATURE_DEEP_FLATTEN: 'deep-flatten', + rbd.RBD_FEATURE_JOURNALING: 'journaling', + rbd.RBD_FEATURE_DATA_POOL: 'data-pool', + rbd.RBD_FEATURE_OPERATIONS: 'operations', +} + + +def differentiate(data1, data2): + """ + # >>> times = [0, 2] + # >>> values = [100, 101] + # >>> differentiate(*zip(times, values)) + 0.5 + """ + return (data2[1] - data1[1]) / float(data2[0] - data1[0]) + + +class ClusterAPI(object): + def __init__(self, module_obj): + self.module = module_obj + + @staticmethod + def format_bitmask(features): + """ + Formats the bitmask: + # >>> format_bitmask(45) + ['deep-flatten', 'exclusive-lock', 'layering', 'object-map'] + """ + names = [val for key, val in RBD_FEATURES_NAME_MAPPING.items() + if 
key & features == key] + return sorted(names) + + def _open_connection(self, pool_name='device_health_metrics'): + pools = self.module.rados.list_pools() + is_pool = False + for pool in pools: + if pool == pool_name: + is_pool = True + break + if not is_pool: + self.module.log.debug('create %s pool' % pool_name) + # create pool + result = CommandResult('') + self.module.send_command(result, 'mon', '', json.dumps({ + 'prefix': 'osd pool create', + 'format': 'json', + 'pool': pool_name, + 'pg_num': 1, + }), '') + r, outb, outs = result.wait() + assert r == 0 + + # set pool application + result = CommandResult('') + self.module.send_command(result, 'mon', '', json.dumps({ + 'prefix': 'osd pool application enable', + 'format': 'json', + 'pool': pool_name, + 'app': 'mgr_devicehealth', + }), '') + r, outb, outs = result.wait() + assert r == 0 + + ioctx = self.module.rados.open_ioctx(pool_name) + return ioctx + + @classmethod + def _rbd_disk_usage(cls, image, snaps, whole_object=True): + class DUCallback(object): + def __init__(self): + self.used_size = 0 + + def __call__(self, offset, length, exists): + if exists: + self.used_size += length + snap_map = {} + prev_snap = None + total_used_size = 0 + for _, size, name in snaps: + image.set_snap(name) + du_callb = DUCallback() + image.diff_iterate(0, size, prev_snap, du_callb, + whole_object=whole_object) + snap_map[name] = du_callb.used_size + total_used_size += du_callb.used_size + prev_snap = name + return total_used_size, snap_map + + def _rbd_image(self, ioctx, pool_name, image_name): + with rbd.Image(ioctx, image_name) as img: + stat = img.stat() + stat['name'] = image_name + stat['id'] = img.id() + stat['pool_name'] = pool_name + features = img.features() + stat['features'] = features + stat['features_name'] = self.format_bitmask(features) + + # the following keys are deprecated + del stat['parent_pool'] + del stat['parent_name'] + stat['timestamp'] = '{}Z'.format(img.create_timestamp() + .isoformat()) + 
stat['stripe_count'] = img.stripe_count()
+            stat['stripe_unit'] = img.stripe_unit()
+            stat['data_pool'] = None
+            try:
+                parent_info = img.parent_info()
+                stat['parent'] = {
+                    'pool_name': parent_info[0],
+                    'image_name': parent_info[1],
+                    'snap_name': parent_info[2]
+                }
+            except rbd.ImageNotFound:
+                # no parent image
+                stat['parent'] = None
+            # snapshots
+            stat['snapshots'] = []
+            for snap in img.list_snaps():
+                snap['timestamp'] = '{}Z'.format(
+                    img.get_snap_timestamp(snap['id']).isoformat())
+                snap['is_protected'] = img.is_protected_snap(snap['name'])
+                snap['used_bytes'] = None
+                snap['children'] = []
+                img.set_snap(snap['name'])
+                for child_pool_name, child_image_name in img.list_children():
+                    snap['children'].append({
+                        'pool_name': child_pool_name,
+                        'image_name': child_image_name
+                    })
+                stat['snapshots'].append(snap)
+            # disk usage
+            if 'fast-diff' in stat['features_name']:
+                snaps = [(s['id'], s['size'], s['name'])
+                         for s in stat['snapshots']]
+                snaps.sort(key=lambda s: s[0])
+                snaps += [(snaps[-1][0]+1 if snaps else 0, stat['size'], None)]
+                total_prov_bytes, snaps_prov_bytes = self._rbd_disk_usage(
+                    img, snaps, True)
+                stat['total_disk_usage'] = total_prov_bytes
+                for snap, prov_bytes in snaps_prov_bytes.items():
+                    if snap is None:
+                        stat['disk_usage'] = prov_bytes
+                        continue
+                    for ss in stat['snapshots']:
+                        if ss['name'] == snap:
+                            ss['disk_usage'] = prov_bytes
+                            break
+            else:
+                stat['total_disk_usage'] = None
+                stat['disk_usage'] = None
+        return stat
+
+    def get_rbd_list(self, pool_name=None):
+        if pool_name:
+            pools = [pool_name]
+        else:
+            pools = []
+            for data in self.get_osd_pools():
+                pools.append(data['pool_name'])
+        result = []
+        for pool in pools:
+            rbd_inst = rbd.RBD()
+            with self._open_connection(str(pool)) as ioctx:
+                names = rbd_inst.list(ioctx)
+                for name in names:
+                    try:
+                        stat = self._rbd_image(ioctx, pool, name)
+                    except rbd.ImageNotFound:
+                        continue
+                    result.append(stat)
+        return result
+
+    def get_pg_summary(self):
+        return self.module.get('pg_summary')
+
+    def get_df_stats(self):
+        return self.module.get('df').get('stats', {})
+
+    def get_object_pg_info(self, pool_name, object_name):
+        result = CommandResult('')
+        data_json = {}
+        self.module.send_command(
+            result, 'mon', '', json.dumps({
+                'prefix': 'osd map',
+                'format': 'json',
+                'pool': pool_name,
+                'object': object_name,
+            }), '')
+        ret, outb, outs = result.wait()
+        try:
+            if outb:
+                data_json = json.loads(outb)
+            else:
+                self.module.log.error('unable to get %s pg info' % pool_name)
+        except Exception as e:
+            self.module.log.error(
+                'unable to get %s pg, error: %s' % (pool_name, str(e)))
+        return data_json
+
+    def get_rbd_info(self, pool_name, image_name):
+        with self._open_connection(pool_name) as ioctx:
+            try:
+                stat = self._rbd_image(ioctx, pool_name, image_name)
+                if stat.get('id'):
+                    objects = self.get_pool_objects(pool_name, stat.get('id'))
+                    if objects:
+                        stat['objects'] = objects
+                        stat['pgs'] = list()
+                        for obj_name in objects:
+                            pgs_data = self.get_object_pg_info(pool_name, obj_name)
+                            stat['pgs'].extend([pgs_data])
+            except rbd.ImageNotFound:
+                stat = {}
+        return stat
+
+    def get_pool_objects(self, pool_name, image_id=None):
+        # list_objects
+        objects = []
+        with self._open_connection(pool_name) as ioctx:
+            object_iterator = ioctx.list_objects()
+            while True:
+                try:
+                    rados_object = next(object_iterator)
+                    if image_id is None:
+                        objects.append(str(rados_object.key))
+                    else:
+                        v = str(rados_object.key).split('.')
+                        if len(v) >= 2 and v[1] == image_id:
+                            objects.append(str(rados_object.key))
+                except StopIteration:
+                    break
+        return objects
+
+    def get_global_total_size(self):
+        total_bytes = \
+            self.module.get('df').get('stats', {}).get('total_bytes')
+        total_size = float(total_bytes) / (1024 * 1024 * 1024)
+        return round(total_size)
+
+    def get_global_avail_size(self):
+        total_avail_bytes = \
+            self.module.get('df').get('stats', {}).get('total_avail_bytes')
+        total_avail_size = float(total_avail_bytes) / (1024 * 
1024 * 1024) + return round(total_avail_size, 2) + + def get_global_raw_used_size(self): + total_used_bytes = \ + self.module.get('df').get('stats', {}).get('total_used_bytes') + total_raw_used_size = float(total_used_bytes) / (1024 * 1024 * 1024) + return round(total_raw_used_size, 2) + + def get_global_raw_used_percent(self): + total_bytes = \ + self.module.get('df').get('stats').get('total_bytes') + total_used_bytes = \ + self.module.get('df').get('stats').get('total_used_bytes') + if total_bytes and total_used_bytes: + total_used_percent = \ + float(total_used_bytes) / float(total_bytes) * 100 + else: + total_used_percent = 0.0 + return round(total_used_percent, 2) + + def get_osd_data(self): + return self.module.get('config').get('osd_data', '') + + def get_osd_journal(self): + return self.module.get('config').get('osd_journal', '') + + def get_osd_metadata(self, osd_id=None): + if osd_id is not None: + return self.module.get('osd_metadata')[str(osd_id)] + return self.module.get('osd_metadata') + + def get_mgr_metadata(self, mgr_id): + return self.module.get_metadata('mgr', mgr_id) + + def get_osd_epoch(self): + return self.module.get('osd_map').get('epoch', 0) + + def get_osds(self): + return self.module.get('osd_map').get('osds', []) + + def get_max_osd(self): + return self.module.get('osd_map').get('max_osd', '') + + def get_osd_pools(self): + return self.module.get('osd_map').get('pools', []) + + def get_pool_bytes_used(self, pool_id): + bytes_used = None + pools = self.module.get('df').get('pools', []) + for pool in pools: + if pool_id == pool['id']: + bytes_used = pool['stats']['bytes_used'] + return bytes_used + + def get_cluster_id(self): + return self.module.get('mon_map').get('fsid') + + def get_health_status(self): + health = json.loads(self.module.get('health')['json']) + return health.get('status') + + def get_health_checks(self): + health = json.loads(self.module.get('health')['json']) + if health.get('checks'): + message = '' + checks = 
health['checks']
+            for key in checks.keys():
+                if message:
+                    message += ";"
+                if checks[key].get('summary', {}).get('message', ""):
+                    message += checks[key]['summary']['message']
+            return message
+        else:
+            return ''
+
+    def get_mons(self):
+        return self.module.get('mon_map').get('mons', [])
+
+    def get_mon_status(self):
+        mon_status = json.loads(self.module.get('mon_status')['json'])
+        return mon_status
+
+    def get_osd_smart(self, osd_id, device_id=None):
+        osd_devices = []
+        osd_smart = {}
+        devices = self.module.get('devices')
+        for dev in devices.get('devices', []):
+            osd = ""
+            daemons = dev.get('daemons', [])
+            for daemon in daemons:
+                if daemon[4:] != str(osd_id):
+                    continue
+                osd = daemon
+            if not osd:
+                continue
+            if dev.get('devid'):
+                osd_devices.append(dev.get('devid'))
+        for dev_id in osd_devices:
+            o_key = ''
+            if device_id and dev_id != device_id:
+                continue
+            smart_data = self.get_device_health(dev_id)
+            if smart_data:
+                o_key = sorted(smart_data.keys(), reverse=True)[0]
+            if o_key and smart_data and smart_data.values():
+                dev_smart = smart_data[o_key]
+                if dev_smart:
+                    osd_smart[dev_id] = dev_smart
+        return osd_smart
+
+    def get_device_health(self, device_id):
+        res = {}
+        try:
+            with self._open_connection() as ioctx:
+                with rados.ReadOpCtx() as op:
+                    omap_iter, ret = ioctx.get_omap_vals(op, '', '', 500)
+                    assert ret == 0
+                    try:
+                        ioctx.operate_read_op(op, device_id)
+                        for key, value in list(omap_iter):
+                            v = None
+                            try:
+                                v = json.loads(value)
+                            except ValueError:
+                                self.module.log.error(
+                                    'unable to parse value for %s: "%s"' % (key, value))
+                            res[key] = v
+                    except IOError:
+                        pass
+        except OSError as e:
+            self.module.log.error(
+                'unable to get device {} health, {}'.format(device_id, str(e)))
+        except IOError:
+            return {}
+        return res
+
+    def get_osd_hostname(self, osd_id):
+        result = ''
+        osd_metadata = self.get_osd_metadata(osd_id)
+        if osd_metadata:
+            osd_host = osd_metadata.get('hostname', 'None')
+            result = osd_host
+        return result
+
+    def
get_osd_device_id(self, osd_id): + result = {} + if not str(osd_id).isdigit(): + if str(osd_id)[0:4] == 'osd.': + osdid = osd_id[4:] + else: + raise Exception('not a valid id or number') + else: + osdid = osd_id + osd_metadata = self.get_osd_metadata(osdid) + if osd_metadata: + osd_device_ids = osd_metadata.get('device_ids', '') + if osd_device_ids: + result = {} + for osd_device_id in osd_device_ids.split(','): + dev_name = '' + if len(str(osd_device_id).split('=')) >= 2: + dev_name = osd_device_id.split('=')[0] + dev_id = osd_device_id.split('=')[1] + else: + dev_id = osd_device_id + if dev_name: + result[dev_name] = {'dev_id': dev_id} + return result + + def get_file_systems(self): + return self.module.get('fs_map').get('filesystems', []) + + def get_pg_stats(self): + return self.module.get('pg_dump').get('pg_stats', []) + + def get_all_perf_counters(self): + return self.module.get_all_perf_counters() + + def get(self, data_name): + return self.module.get(data_name) + + def set_device_life_expectancy(self, device_id, from_date, to_date=None): + result = CommandResult('') + + if to_date is None: + self.module.send_command(result, 'mon', '', json.dumps({ + 'prefix': 'device set-life-expectancy', + 'devid': device_id, + 'from': from_date + }), '') + else: + self.module.send_command(result, 'mon', '', json.dumps({ + 'prefix': 'device set-life-expectancy', + 'devid': device_id, + 'from': from_date, + 'to': to_date + }), '') + ret, outb, outs = result.wait() + if ret != 0: + self.module.log.error( + 'failed to set device life expectancy, %s' % outs) + return ret + + def reset_device_life_expectancy(self, device_id): + result = CommandResult('') + self.module.send_command(result, 'mon', '', json.dumps({ + 'prefix': 'device rm-life-expectancy', + 'devid': device_id + }), '') + ret, outb, outs = result.wait() + if ret != 0: + self.module.log.error( + 'failed to reset device life expectancy, %s' % outs) + return ret + + def get_server(self, hostname): + return 
self.module.get_server(hostname)
+
+    def get_configuration(self, key):
+        return self.module.get_configuration(key)
+
+    def get_rate(self, svc_type, svc_name, path):
+        """returns most recent rate"""
+        data = self.module.get_counter(svc_type, svc_name, path)[path]
+
+        if data and len(data) > 1:
+            return differentiate(*data[-2:])
+        return 0.0
+
+    def get_latest(self, daemon_type, daemon_name, counter):
+        return self.module.get_latest(daemon_type, daemon_name, counter)
+
+    def get_all_information(self):
+        result = dict()
+        result['osd_map'] = self.module.get('osd_map')
+        result['osd_map_tree'] = self.module.get('osd_map_tree')
+        result['osd_map_crush'] = self.module.get('osd_map_crush')
+        result['config'] = self.module.get('config')
+        result['mon_map'] = self.module.get('mon_map')
+        result['fs_map'] = self.module.get('fs_map')
+        result['osd_metadata'] = self.module.get('osd_metadata')
+        result['pg_summary'] = self.module.get('pg_summary')
+        result['pg_dump'] = self.module.get('pg_dump')
+        result['io_rate'] = self.module.get('io_rate')
+        result['df'] = self.module.get('df')
+        result['osd_stats'] = self.module.get('osd_stats')
+        result['health'] = self.get_health_status()
+        result['mon_status'] = self.get_mon_status()
+        return result
diff --git a/src/pybind/mgr/diskprediction/common/cypher.py b/src/pybind/mgr/diskprediction/common/cypher.py
new file mode 100644
index 00000000000..7b7b60e5059
--- /dev/null
+++ b/src/pybind/mgr/diskprediction/common/cypher.py
@@ -0,0 +1,71 @@
+from __future__ import absolute_import
+
+import time
+
+
+class NodeInfo(object):
+    """ Neo4j Node information """
+    def __init__(self, label, domain_id, name, meta):
+        self.label = label
+        self.domain_id = domain_id
+        self.name = name
+        self.meta = meta
+
+
+class CypherOP(object):
+    """ Cypher Operation """
+
+    @staticmethod
+    def update(node, key, value, timestamp=None):
+        # A default of int(time.time()*(1000**3)) would be evaluated only
+        # once, at import time; compute the timestamp per call instead.
+        result = ''
+        if timestamp is None:
+            timestamp = int(time.time()*(1000**3))
+        if isinstance(node, NodeInfo):
+            if key != 'time':
+                cy_value = '\'%s\'' % value
+            else:
+                cy_value = value
+            result = \
+                'set %s.%s=case when %s.time >= %s then %s.%s ELSE %s end' % (
+                    node.label, key, node.label, timestamp, node.label, key,
+                    cy_value)
+        return result
+
+    @staticmethod
+    def create_or_merge(node, timestamp=None):
+        # As in update(), avoid an import-time-evaluated timestamp default.
+        result = ''
+        if timestamp is None:
+            timestamp = int(time.time()*(1000**3))
+        if isinstance(node, NodeInfo):
+            meta_list = []
+            if isinstance(node.meta, dict):
+                for key, value in node.meta.items():
+                    meta_list.append(CypherOP.update(node, key, value, timestamp))
+            domain_id = '{domainId:\'%s\'}' % node.domain_id
+            if meta_list:
+                result = 'merge (%s:%s %s) %s %s %s' % (
+                    node.label, node.label,
+                    domain_id,
+                    CypherOP.update(node, 'name', node.name, timestamp),
+                    ' '.join(meta_list),
+                    CypherOP.update(node, 'time', timestamp, timestamp))
+            else:
+                result = 'merge (%s:%s %s) %s %s' % (
+                    node.label, node.label,
+                    domain_id,
+                    CypherOP.update(node, 'name', node.name, timestamp),
+                    CypherOP.update(node, 'time', timestamp, timestamp))
+        return result
+
+    @staticmethod
+    def add_link(snode, dnode, relationship, timestamp=None):
+        result = ''
+        if timestamp is None:
+            timestamp = int(time.time()*(1000**3))
+        if isinstance(snode, NodeInfo) and isinstance(dnode, NodeInfo):
+            cy_snode = CypherOP.create_or_merge(snode, timestamp)
+            cy_dnode = CypherOP.create_or_merge(dnode, timestamp)
+            target = snode.label + dnode.label
+            link = 'merge (%s)-[%s:%s]->(%s) set %s.time=case when %s.time >= %s then %s.time ELSE %s end' % (
+                snode.label, target, relationship,
+                dnode.label, target,
+                target, timestamp,
+                target, timestamp)
+            result = '%s %s %s' % (cy_snode, cy_dnode, link)
+        return result
diff --git a/src/pybind/mgr/diskprediction/common/grpcclient.py b/src/pybind/mgr/diskprediction/common/grpcclient.py
new file mode 100644
index 00000000000..cc331ef1535
--- /dev/null
+++ b/src/pybind/mgr/diskprediction/common/grpcclient.py
@@ -0,0 +1,235 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+import grpc
+import json
+from logging import getLogger
+import os
+import time
+
+from
. import DummyResonse +import client_pb2 +import client_pb2_grpc + + +def gen_configuration(**kwargs): + configuration = { + 'host': kwargs.get('host', 'api.diskprophet.com'), + 'user': kwargs.get('user'), + 'password': kwargs.get('password'), + 'port': kwargs.get('port', 31400), + 'mgr_inst': kwargs.get('mgr_inst', None), + 'cert_context': kwargs.get('cert_context'), + 'ssl_target_name': kwargs.get('ssl_target_name', 'api.diskprophet.com'), + 'default_authority': kwargs.get('default_authority', 'api.diskprophet.com')} + return configuration + + +class GRPcClient: + + def __init__(self, configuration): + self.auth = None + self.channel = None + self.host = configuration.get('host') + self.port = configuration.get('port') + if configuration.get('user') and configuration.get('password'): + self.auth = ( + ('account', configuration.get('user')), + ('password', configuration.get('password'))) + self.cert_context = configuration.get('cert_context') + self.ssl_target_name = configuration.get('ssl_target_name') + self.default_authority = configuration.get('default_authority') + self.mgr_inst = configuration.get('mgr_inst') + if self.mgr_inst: + self._logger = self.mgr_inst.log + else: + self._logger = getLogger() + self._get_channel() + + def __nonzero__(self): + if self.channel: + return True + else: + return False + + def _get_channel(self): + try: + creds = grpc.ssl_channel_credentials( + root_certificates=self.cert_context) + self.channel = \ + grpc.secure_channel('{}:{}'.format( + self.host, self.port), creds, + options=(('grpc.ssl_target_name_override', self.ssl_target_name,), + ('grpc.default_authority', self.default_authority),)) + except Exception as e: + self._logger.error( + 'failed to create connection exception: {}'.format( + ';'.join(str(e).split('\n\t')))) + + def test_connection(self): + try: + stub_accout = client_pb2_grpc.AccountStub(self.channel) + result = stub_accout.AccountHeartbeat(client_pb2.Empty()) + if result and "is alive" in 
str(result.message): + return True + else: + return False + except Exception as e: + self._logger.error( + 'failed to test connection exception: {}'.format( + ';'.join(str(e).split('\n\t')))) + return False + + def _send_metrics(self, data, measurement): + status_info = dict() + status_info['measurement'] = None + status_info['success_count'] = 0 + status_info['failure_count'] = 0 + for dp_data in data: + d_measurement = dp_data.measurement + if not d_measurement: + status_info['measurement'] = measurement + else: + status_info['measurement'] = d_measurement + tag_list = [] + field_list = [] + for name in dp_data.tags: + tag = '{}={}'.format(name, dp_data.tags[name]) + tag_list.append(tag) + for name in dp_data.fields: + if dp_data.fields[name] is None: + continue + if isinstance(dp_data.fields[name], str): + field = '{}=\"{}\"'.format(name, dp_data.fields[name]) + elif isinstance(dp_data.fields[name], bool): + field = '{}={}'.format(name, + str(dp_data.fields[name]).lower()) + elif (isinstance(dp_data.fields[name], int) or + isinstance(dp_data.fields[name], long)): + field = '{}={}i'.format(name, dp_data.fields[name]) + else: + field = '{}={}'.format(name, dp_data.fields[name]) + field_list.append(field) + data = '{},{} {} {}'.format( + status_info['measurement'], + ','.join(tag_list), + ','.join(field_list), + int(time.time() * 1000 * 1000 * 1000)) + try: + resp = self._send_info(data=[data], measurement=status_info['measurement']) + status_code = resp.status_code + if 200 <= status_code < 300: + self._logger.debug( + '{} send diskprediction api success(ret: {})'.format( + status_info['measurement'], status_code)) + status_info['success_count'] += 1 + else: + self._logger.error( + 'return code: {}, content: {}'.format( + status_code, resp.content)) + status_info['failure_count'] += 1 + except Exception as e: + status_info['failure_count'] += 1 + self._logger.error(str(e)) + return status_info + + def _send_db_relay(self, data, measurement): + status_info = dict() 
+ status_info['measurement'] = measurement + status_info['success_count'] = 0 + status_info['failure_count'] = 0 + for dp_data in data: + try: + resp = self._send_info( + data=[dp_data.fields['cmd']], measurement=measurement) + status_code = resp.status_code + if 200 <= status_code < 300: + self._logger.debug( + '{} send diskprediction api success(ret: {})'.format( + measurement, status_code)) + status_info['success_count'] += 1 + else: + self._logger.error( + 'return code: {}, content: {}'.format( + status_code, resp.content)) + status_info['failure_count'] += 1 + except Exception as e: + status_info['failure_count'] += 1 + self._logger.error(str(e)) + return status_info + + def send_info(self, data, measurement): + """ + :param data: data structure + :param measurement: data measurement class name + :return: + status_info = { + 'success_count': , + 'failure_count': + } + """ + if measurement == 'db_relay': + return self._send_db_relay(data, measurement) + else: + return self._send_metrics(data, measurement) + + def _send_info(self, data, measurement): + resp = DummyResonse() + try: + stub_collection = client_pb2_grpc.CollectionStub(self.channel) + if measurement == 'db_relay': + result = stub_collection.PostDBRelay( + client_pb2.PostDBRelayInput(cmds=data), metadata=self.auth) + else: + result = stub_collection.PostMetrics( + client_pb2.PostMetricsInput(points=data), metadata=self.auth) + if result and 'success' in str(result.message).lower(): + resp.status_code = 200 + resp.content = '' + else: + resp.status_code = 400 + resp.content = ';'.join(str(result).split('\n\t')) + self._logger.error( + 'failed to send info: {}'.format(resp.content)) + except Exception as e: + resp.status_code = 400 + resp.content = ';'.join(str(e).split('\n\t')) + self._logger.error( + 'failed to send info exception: {}'.format(resp.content)) + return resp + + def query_info(self, host_domain_id, disk_domain_id, measurement): + resp = DummyResonse() + try: + stub_dp = 
client_pb2_grpc.DiskprophetStub(self.channel) + predicted = stub_dp.DPGetDisksPrediction( + client_pb2.DPGetDisksPredictionInput( + physicalDiskIds=disk_domain_id), + metadata=self.auth) + if predicted and hasattr(predicted, 'data'): + resp.status_code = 200 + resp.content = '' + resp_json = json.loads(predicted.data) + rc = resp_json.get('results', []) + if rc: + series = rc[0].get('series', []) + if series: + values = series[0].get('values', []) + if not values: + resp.resp_json = {} + else: + columns = series[0].get('columns', []) + for item in values: + # get prediction key and value from server. + for name, value in zip(columns, item): + # process prediction data + resp.resp_json[name] = value + return resp + else: + resp.status_code = 400 + resp.content = '' + resp.resp_json = {'error': ';'.join(str(predicted).split('\n\t'))} + return resp + except Exception as e: + resp.status_code = 400 + resp.content = ';'.join(str(e).split('\n\t')) + resp.resp_json = {'error': resp.content} + return resp diff --git a/src/pybind/mgr/diskprediction/common/localpredictor.py b/src/pybind/mgr/diskprediction/common/localpredictor.py new file mode 100644 index 00000000000..f2ca8f16a2d --- /dev/null +++ b/src/pybind/mgr/diskprediction/common/localpredictor.py @@ -0,0 +1,120 @@ +# vim: tabstop=4 shiftwidth=4 softtabstop=4 +# from __future__ import absolute_import +from __future__ import absolute_import +from logging import getLogger +import time + +from . 
import DummyResonse +from .clusterdata import ClusterAPI +from ..predictor.DiskFailurePredictor import DiskFailurePredictor, get_diskfailurepredictor_path + + +def gen_configuration(**kwargs): + configuration = { + 'mgr_inst': kwargs.get('mgr_inst', None)} + return configuration + + +class LocalPredictor: + + def __init__(self, configuration): + self.mgr_inst = configuration.get('mgr_inst') + if self.mgr_inst: + self._logger = self.mgr_inst.log + else: + self._logger = getLogger() + + def __nonzero__(self): + if self.mgr_inst: + return True + else: + return False + + def test_connection(self): + resp = DummyResonse() + resp.status_code = 200 + resp.content = '' + return resp + + def send_info(self, data, measurement): + status_info = dict() + status_info['measurement'] = measurement + status_info['success_count'] = 0 + status_info['failure_count'] = 0 + for dp_data in data: + try: + resp = self._send_info(data=dp_data, measurement=measurement) + status_code = resp.status_code + if 200 <= status_code < 300: + self._logger.debug( + '%s send diskprediction api success(ret: %s)' + % (measurement, status_code)) + status_info['success_count'] += 1 + else: + self._logger.error( + 'return code: %s, content: %s, data: %s' % ( + status_code, resp.content, data)) + status_info['failure_count'] += 1 + except Exception as e: + status_info['failure_count'] += 1 + self._logger.error(str(e)) + return status_info + + def _send_info(self, data, measurement): + resp = DummyResonse() + resp.status_code = 200 + resp.content = '' + return resp + + def _local_predict(self, smart_datas): + obj_predictor = DiskFailurePredictor() + predictor_path = get_diskfailurepredictor_path() + models_path = "{}/models".format(predictor_path) + obj_predictor.initialize(models_path) + return obj_predictor.predict(smart_datas) + + def query_info(self, host_domain_id, disk_domain_id, measurement): + predict_datas = list() + obj_api = ClusterAPI(self.mgr_inst) + predicted_result = 'Unknown' + smart_datas = 
obj_api.get_device_health(disk_domain_id)
+        if len(smart_datas) >= 6:
+            o_keys = sorted(smart_datas.keys(), reverse=True)
+            for o_key in o_keys:
+                dev_smart = {}
+                s_val = smart_datas[o_key]
+                ata_smart = s_val.get('ata_smart_attributes', {})
+                for attr in ata_smart.get('table', []):
+                    if attr.get('raw', {}).get('string'):
+                        if str(attr.get('raw', {}).get('string', '0')).isdigit():
+                            dev_smart['smart_%s_raw' % attr.get('id')] = \
+                                int(attr.get('raw', {}).get('string', '0'))
+                        else:
+                            if str(attr.get('raw', {}).get('string', '0')).split(' ')[0].isdigit():
+                                dev_smart['smart_%s_raw' % attr.get('id')] = \
+                                    int(attr.get('raw', {}).get('string',
+                                                                '0').split(' ')[0])
+                            else:
+                                dev_smart['smart_%s_raw' % attr.get('id')] = \
+                                    attr.get('raw', {}).get('value', 0)
+                if s_val.get('power_on_time', {}).get('hours') is not None:
+                    dev_smart['smart_9_raw'] = int(s_val['power_on_time']['hours'])
+                if dev_smart:
+                    predict_datas.append(dev_smart)
+
+        if predict_datas:
+            predicted_result = self._local_predict(predict_datas)
+            resp = DummyResonse()
+            resp.status_code = 200
+            resp.resp_json = {
+                "disk_domain_id": disk_domain_id,
+                "near_failure": predicted_result,
+                "predicted": int(time.time() * (1000 ** 3))}
+            return resp
+        else:
+            resp = DummyResonse()
+            resp.status_code = 400
+            resp.content = '\'predict\' requires at least 6 days of disk SMART data'
+            resp.resp_json = \
+                {'error': '\'predict\' requires at least 6 days of disk SMART data'}
+            return resp
diff --git a/src/pybind/mgr/diskprediction/module.py b/src/pybind/mgr/diskprediction/module.py
new file mode 100644
index 00000000000..4ae2a7976a1
--- /dev/null
+++ b/src/pybind/mgr/diskprediction/module.py
@@ -0,0 +1,402 @@
+"""
+A diskprediction module
+"""
+from __future__ import absolute_import
+
+from datetime import datetime
+import errno
+import json
+from mgr_module import MgrModule
+import os
+from threading import Event
+
+from .common import DP_MGR_STAT_ENABLED, DP_MGR_STAT_DISABLED
+from .task import MetricsRunner,
PredictionRunner, SmartRunner + + +DP_AGENTS = [MetricsRunner, SmartRunner, PredictionRunner] + + +class Module(MgrModule): + + OPTIONS = [ + { + 'name': 'diskprediction_config_mode', + 'default': 'local' + }, + { + 'name': 'diskprediction_server', + 'default': '' + }, + { + 'name': 'diskprediction_port', + 'default': '31400' + }, + { + 'name': 'diskprediction_user', + 'default': '' + }, + { + 'name': 'diskprediction_password', + 'default': '' + }, + { + 'name': 'diskprediction_upload_metrics_interval', + 'default': '600' + }, + { + 'name': 'diskprediction_upload_smart_interval', + 'default': '43200' + }, + { + 'name': 'diskprediction_retrieve_prediction_interval', + 'default': '43200' + }, + { + 'name': 'diskprediction_cert_context', + 'default': '' + }, + { + 'name': 'diskprediction_ssl_target_name_override', + 'default': 'api.diskprophet.com' + }, + { + 'name': 'diskprediction_default_authority', + 'default': 'api.diskprophet.com' + } + ] + + COMMANDS = [ + { + 'cmd': 'device set-prediction-mode ' + 'name=mode,type=CephString,req=true', + 'desc': 'config disk prediction mode [\"cloud\"|\"local\"]', + 'perm': 'rw' + }, + { + 'cmd': 'device show-prediction-config', + 'desc': 'Prints diskprediction configuration', + 'perm': 'r' + }, + { + 'cmd': 'device set-cloud-prediction-config ' + 'name=server,type=CephString,req=true ' + 'name=user,type=CephString,req=true ' + 'name=password,type=CephString,req=true ' + 'name=certfile,type=CephString,req=true ' + 'name=port,type=CephString,req=false ', + 'desc': 'Configure Disk Prediction service', + 'perm': 'rw' + }, + { + 'cmd': 'device get-predicted-status ' + 'name=dev_id,type=CephString,req=true', + 'desc': 'Get physical device predicted result', + 'perm': 'r' + }, + { + 'cmd': 'device debug metrics-forced', + 'desc': 'Run metrics agent forced', + 'perm': 'r' + }, + { + 'cmd': 'device debug prediction-forced', + 'desc': 'Run prediction agent forced', + 'perm': 'r' + }, + { + 'cmd': 'device debug smart-forced', + 'desc': 
'Run smart agent forced', + 'perm': 'r' + }, + { + 'cmd': 'device predict-life-expectancy ' + 'name=dev_id,type=CephString,req=true', + 'desc': 'Predict life expectancy with local predictor', + 'perm': 'r' + }, + { + 'cmd': 'diskprediction self-test', + 'desc': 'Prints hello world to mgr.x.log', + 'perm': 'r' + }, + { + 'cmd': 'diskprediction status', + 'desc': 'Check diskprediction status', + 'perm': 'r' + } + ] + + def __init__(self, *args, **kwargs): + super(Module, self).__init__(*args, **kwargs) + self.status = {'status': DP_MGR_STAT_DISABLED} + self.shutdown_event = Event() + self._agents = [] + self._activated_cloud = False + self._activated_local = False + self._prediction_result = {} + self.config = dict() + + @property + def config_keys(self): + return dict((o['name'], o.get('default', None)) for o in self.OPTIONS) + + def set_config_option(self, option, value): + if option not in self.config_keys.keys(): + raise RuntimeError('{0} is a unknown configuration ' + 'option'.format(option)) + + if option in ['diskprediction_port', + 'diskprediction_upload_metrics_interval', + 'diskprediction_upload_smart_interval', + 'diskprediction_retrieve_prediction_interval']: + if not str(value).isdigit(): + raise RuntimeError('invalid {} configured. 
Please specify '
+                               'a valid integer {}'.format(option, value))
+
+        self.log.debug('Setting in-memory config option %s to: %s', option,
+                       value)
+        self.set_config(option, value)
+        self.config[option] = value
+
+        return True
+
+    def get_configuration(self, key):
+        return self.get_config(key, self.config_keys[key])
+
+    def _show_prediction_config(self, inbuf, cmd):
+        self.show_module_config()
+        return 0, json.dumps(self.config, indent=4), ''
+
+    def _set_prediction_mode(self, inbuf, cmd):
+        self.status = {}
+        str_mode = cmd.get('mode', 'cloud')
+        if str_mode.lower() not in ['cloud', 'local']:
+            return -errno.EINVAL, '', 'invalid configuration, mode=[cloud|local]'
+        try:
+            self.set_config('diskprediction_config_mode', str_mode)
+            for _agent in self._agents:
+                _agent.event.set()
+            return (0,
+                    'successfully configured disk prediction mode: %s'
+                    % str_mode.lower(), '')
+        except Exception as e:
+            return -errno.EINVAL, '', str(e)
+
+    def _self_test(self, inbuf, cmd):
+        from .test.test_agents import test_agents
+        test_agents(self)
+        return 0, 'self-test completed', ''
+
+    def _set_ssl_target_name(self, inbuf, cmd):
+        str_ssl_target = cmd.get('ssl_target_name', '')
+        try:
+            self.set_config('diskprediction_ssl_target_name_override', str_ssl_target)
+            return (0,
+                    'successfully configured ssl target name', '')
+        except Exception as e:
+            return -errno.EINVAL, '', str(e)
+
+    def _set_ssl_default_authority(self, inbuf, cmd):
+        str_ssl_authority = cmd.get('ssl_authority', '')
+        try:
+            self.set_config('diskprediction_default_authority', str_ssl_authority)
+            return (0,
+                    'successfully configured ssl default authority', '')
+        except Exception as e:
+            return -errno.EINVAL, '', str(e)
+
+    def _get_predicted_status(self, inbuf, cmd):
+        physical_data = dict()
+        try:
+            if not self._prediction_result:
+                for _agent in self._agents:
+                    if isinstance(_agent, PredictionRunner):
+                        _agent.event.set()
+                        break
+            pre_data = self._prediction_result.get(cmd['dev_id'])
+            if pre_data:
+                p_data =
pre_data.get('prediction', {})
+                if not p_data.get('predicted'):
+                    predicted = ''
+                else:
+                    predicted = datetime.fromtimestamp(int(
+                        p_data.get('predicted')) / (1000 ** 3))
+                d_data = {
+                    'near_failure': p_data.get('near_failure'),
+                    'predicted': str(predicted),
+                    'serial_number': pre_data.get('serial_number'),
+                    'disk_wwn': pre_data.get('disk_wwn'),
+                    'attachment': p_data.get('disk_name', '')
+                }
+                physical_data[cmd['dev_id']] = d_data
+                msg = json.dumps(d_data, indent=4)
+            else:
+                msg = 'device %s predicted data not ready' % cmd['dev_id']
+        except Exception as e:
+            if str(e).find('No such file') >= 0:
+                msg = 'unable to get device {} predicted data'.format(
+                    cmd['dev_id'])
+            else:
+                msg = 'unable to get osd {} predicted data, {}'.format(
+                    cmd['dev_id'], str(e))
+            self.log.error(msg)
+            return -errno.EINVAL, '', msg
+        return 0, msg, ''
+
+    def _set_cloud_prediction_config(self, inbuf, cmd):
+        trusted_certs = ''
+        str_cert_path = cmd.get('certfile', '')
+        if os.path.exists(str_cert_path):
+            with open(str_cert_path, 'rb') as f:
+                trusted_certs = f.read()
+            self.set_config_option(
+                'diskprediction_cert_context', trusted_certs)
+            for _agent in self._agents:
+                _agent.event.set()
+            self.set_config('diskprediction_server', cmd['server'])
+            self.set_config('diskprediction_user', cmd['user'])
+            self.set_config('diskprediction_password', cmd['password'])
+            if cmd.get('port'):
+                self.set_config('diskprediction_port', cmd['port'])
+            return 0, 'successfully configured cloud mode connection', ''
+        else:
+            return -errno.EINVAL, '', 'certificate file does not exist'
+
+    def _debug_prediction_forced(self, inbuf, cmd):
+        msg = ''
+        for _agent in self._agents:
+            if isinstance(_agent, PredictionRunner):
+                msg = 'run prediction agent successfully'
+                _agent.event.set()
+        return 0, msg, ''
+
+    def _debug_metrics_forced(self, inbuf, cmd):
+        msg = ''
+        for _agent in self._agents:
+            if isinstance(_agent, MetricsRunner):
+                msg = 'run metrics agent successfully'
+                _agent.event.set()
+        return 0,
msg, '' + + def _debug_smart_forced(self, inbuf, cmd): + msg = ' ' + for _agent in self._agents: + if isinstance(_agent, SmartRunner): + msg = 'run smart agent successfully' + _agent.event.set() + return 0, msg, '' + + def _status(self, inbuf, cmd): + return 0, json.dumps(self.status), '' + + def _predict_life_expectancy(self, inbuf, cmd): + assert cmd['dev_id'] + from .common.localpredictor import LocalPredictor, gen_configuration + conf = gen_configuration(mgr_inst=self) + obj_predictor = LocalPredictor(conf) + result = obj_predictor.query_info('', cmd['dev_id'], '') + if result.status_code == 200: + near_failure = result.json()['near_failure'] + if near_failure.lower() == 'good': + return 0, '>6w', '' + elif near_failure.lower() == 'warning': + return 0, '>=2w and <=6w', '' + elif near_failure.lower() == 'bad': + return 0, '<2w', '' + else: + return 0, 'unknown', '' + else: + return -errno.ENAVAIL, '', result.content + + def handle_command(self, inbuf, cmd): + for o_cmd in self.COMMANDS: + if cmd['prefix'] == o_cmd['cmd'][:len(cmd['prefix'])]: + fun_name = '' + avgs = o_cmd['cmd'].split(' ') + for avg in avgs: + if avg.lower() == 'diskprediction': + continue + if avg.lower() == 'device': + continue + if '=' in avg or ',' in avg or not avg: + continue + fun_name += '_%s' % avg.replace('-', '_') + if fun_name: + fun = getattr( + self, fun_name) + if fun: + return fun(inbuf, cmd) + return -errno.EINVAL, '', 'cmd not found' + + def show_module_config(self): + self.fsid = self.get('mon_map')['fsid'] + self.log.debug('Found Ceph fsid %s', self.fsid) + + for key, default in self.config_keys.items(): + self.set_config_option(key, self.get_config(key, default)) + + def serve(self): + self.log.info('Starting diskprediction module') + self.status = {'status': DP_MGR_STAT_ENABLED} + + while True: + if self.get_configuration('diskprediction_config_mode').lower() == 'cloud': + enable_cloud = True + else: + enable_cloud = False + # Enable cloud mode prediction process + if 
enable_cloud and not self._activated_cloud: + if self._activated_local: + self.stop_disk_prediction() + self.start_cloud_disk_prediction() + # Enable local mode prediction process + elif not enable_cloud and not self._activated_local: + if self._activated_cloud: + self.stop_disk_prediction() + self.start_local_disk_prediction() + + self.shutdown_event.wait(5) + if self.shutdown_event.is_set(): + break + self.stop_disk_prediction() + + def start_cloud_disk_prediction(self): + assert not self._activated_cloud + for dp_agent in DP_AGENTS: + obj_agent = dp_agent(self) + if obj_agent: + obj_agent.start() + else: + raise Exception('failed to start task %s' % obj_agent.task_name) + self._agents.append(obj_agent) + self._activated_cloud = True + self.log.info('start cloud disk prediction') + + def start_local_disk_prediction(self): + assert not self._activated_local + for dp_agent in [PredictionRunner]: + obj_agent = dp_agent(self) + if obj_agent: + obj_agent.start() + else: + raise Exception('failed to start task %s' % obj_agent.task_name) + self._agents.append(obj_agent) + self._activated_local = True + self.log.info('start local model disk prediction') + + def stop_disk_prediction(self): + assert self._activated_local or self._activated_cloud + self.status = {'status': DP_MGR_STAT_DISABLED} + while self._agents: + dp_agent = self._agents.pop() + dp_agent.terminate() + dp_agent.join(5) + del dp_agent + self._activated_local = False + self._activated_cloud = False + self.log.info('stop disk prediction') + + def shutdown(self): + self.shutdown_event.set() + super(Module, self).shutdown() diff --git a/src/pybind/mgr/diskprediction/predictor/DiskFailurePredictor.py b/src/pybind/mgr/diskprediction/predictor/DiskFailurePredictor.py new file mode 100644 index 00000000000..e16d3e1a203 --- /dev/null +++ b/src/pybind/mgr/diskprediction/predictor/DiskFailurePredictor.py @@ -0,0 +1,257 @@ +"""Sample code for disk failure prediction. 
+
+This sample code is a community version for anyone who is interested in Machine
+Learning and cares about disk failure.
+
+This class provides a disk failure prediction module. Give it a models dirpath
+to initialize a predictor instance and then use 6 days of data to predict. The
+predict function will return a string to indicate disk failure status: "Good",
+"Warning", "Bad", or "Unknown".
+
+An example code is as follows (initialize returns an error message on failure,
+so a falsy return value means success):
+
+>>> model = DiskFailurePredictor.DiskFailurePredictor()
+>>> status = model.initialize("./models")
+>>> if not status:
+...     model.predict(disk_days)
+'Bad'
+
+
+Provided by ProphetStor Data Services Inc.
+http://www.prophetstor.com/
+
+"""
+
+from __future__ import print_function
+import os
+import json
+from sklearn.externals import joblib
+
+
+def get_diskfailurepredictor_path():
+    path = os.path.abspath(__file__)
+    dir_path = os.path.dirname(path)
+    return dir_path
+
+
+class DiskFailurePredictor(object):
+    """Disk failure prediction
+
+    This class implements a disk failure prediction module.
+    """
+
+    CONFIG_FILE = "config.json"
+    EXCLUDED_ATTRS = ['smart_9_raw', 'smart_241_raw', 'smart_242_raw']
+
+    def __init__(self):
+        """
+        This function may throw an exception due to wrong file operations.
+        """
+
+        self.model_dirpath = ""
+        self.model_context = {}
+
+    def initialize(self, model_dirpath):
+        """
+        Initialize all models.
+
+        Args:
+            model_dirpath: Directory containing the models and config file.
+
+        Returns:
+            Error message. If all goes well, return None.
+
+        Raises:
+        """
+
+        config_path = os.path.join(model_dirpath, self.CONFIG_FILE)
+        if not os.path.isfile(config_path):
+            return "Missing config file: " + config_path
+        else:
+            with open(config_path) as f_conf:
+                self.model_context = json.load(f_conf)
+
+        for model_name in self.model_context:
+            model_path = os.path.join(model_dirpath, model_name)
+
+            if not os.path.isfile(model_path):
+                return "Missing model file: " + model_path
+
+        self.model_dirpath = model_dirpath
+
+    def __preprocess(self, disk_days):
+        """
+        Preprocess disk attributes.
+ + Args: + disk_days: Refer to function predict(...). + + Returns: + new_disk_days: Processed disk days. + """ + + req_attrs = [] + new_disk_days = [] + + attr_list = set.intersection(*[set(disk_day.keys()) + for disk_day in disk_days]) + for attr in attr_list: + if (attr.startswith('smart_') and attr.endswith('_raw')) and \ + attr not in self.EXCLUDED_ATTRS: + req_attrs.append(attr) + + for disk_day in disk_days: + new_disk_day = {} + for attr in req_attrs: + if float(disk_day[attr]) >= 0.0: + new_disk_day[attr] = disk_day[attr] + + new_disk_days.append(new_disk_day) + + return new_disk_days + + @staticmethod + def __get_diff_attrs(disk_days): + """ + Get 5 days of differential attributes. + + Args: + disk_days: Refer to function predict(...). + + Returns: + attr_list: All S.M.A.R.T. attributes used by the given disk. Here + we use the intersection set of all disk days. + + diff_disk_days: A list of 5 dictionaries; each dictionary + contains differential attributes. + + Raises: + Exceptions of wrong list/dict operations. + """ + + all_attrs = [set(disk_day.keys()) for disk_day in disk_days] + attr_list = list(set.intersection(*all_attrs)) + prev_days = disk_days[:-1] + curr_days = disk_days[1:] + diff_disk_days = [] + + for prev, cur in zip(prev_days, curr_days): + diff_disk_days.append({attr: (int(cur[attr]) - int(prev[attr])) + for attr in attr_list}) + + return attr_list, diff_disk_days + + def __get_best_models(self, attr_list): + """ + Find the best models from the model list according to the given + attribute list. + + Args: + attr_list: All S.M.A.R.T. attributes used by the given disk. + + Returns: + best_models: A dict mapping each model path to its 'ordered' + attribute list. Be aware that the SMART attributes must stay + in order.
+ + Raises: None + """ + + models = list(self.model_context.keys()) + + scores = [] + for model_name in models: + scores.append(sum(attr in attr_list + for attr in self.model_context[model_name])) + max_score = max(scores) + + # Skip if too few matched attributes. + if max_score < 3: + print("Too few matched attributes") + return None + + best_models = {} + best_model_indices = [idx for idx, score in enumerate(scores) + if score > max_score - 2] + for model_idx in best_model_indices: + model_name = models[model_idx] + model_path = os.path.join(self.model_dirpath, model_name) + model_attrlist = self.model_context[model_name] + best_models[model_path] = model_attrlist + + return best_models + + @staticmethod + def __get_ordered_attrs(disk_days, model_attrlist): + """ + Return ordered attributes of the given disk days. + + Args: + disk_days: Unordered disk days. + model_attrlist: Model's ordered attribute list. + + Returns: + ordered_attrs: Ordered disk days. + + Raises: None + """ + + ordered_attrs = [] + + for one_day in disk_days: + one_day_attrs = [] + + for attr in model_attrlist: + if attr in one_day: + one_day_attrs.append(one_day[attr]) + else: + one_day_attrs.append(0) + + ordered_attrs.append(one_day_attrs) + + return ordered_attrs + + def predict(self, disk_days): + """ + Predict using the given 6 days of disk S.M.A.R.T. attributes. + + Args: + disk_days: A list of 6 dictionaries. These dictionaries store + 'consecutive' days of disk SMART attributes. + Returns: + A string indicating the prediction result. One of the following + four strings is returned according to the disk failure status: + (1) Good : Disk is healthy + (2) Warning : Disk has some symptoms but may not fail immediately + (3) Bad : Disk is in danger and data backup is highly recommended + (4) Unknown : Not enough data for prediction.
+ + Raises: None + """ + + all_pred = [] + + proc_disk_days = self.__preprocess(disk_days) + attr_list, diff_data = DiskFailurePredictor.__get_diff_attrs(proc_disk_days) + modellist = self.__get_best_models(attr_list) + if modellist is None: + return "Unknown" + + for modelpath in modellist: + model_attrlist = modellist[modelpath] + ordered_data = DiskFailurePredictor.__get_ordered_attrs( + diff_data, model_attrlist) + + clf = joblib.load(modelpath) + pred = clf.predict(ordered_data) + + all_pred.append(1 if any(pred) else 0) + + score = 2 ** sum(all_pred) - len(modellist) + if score > 10: + return "Bad" + elif score > 4: + return "Warning" + else: + return "Good" diff --git a/src/pybind/mgr/diskprediction/predictor/__init__.py b/src/pybind/mgr/diskprediction/predictor/__init__.py new file mode 100644 index 00000000000..d056be7308b --- /dev/null +++ b/src/pybind/mgr/diskprediction/predictor/__init__.py @@ -0,0 +1 @@ +'''Initial file for predictors''' diff --git a/src/pybind/mgr/diskprediction/predictor/models/config.json b/src/pybind/mgr/diskprediction/predictor/models/config.json new file mode 100644 index 00000000000..61439ae5e3b --- /dev/null +++ b/src/pybind/mgr/diskprediction/predictor/models/config.json @@ -0,0 +1,77 @@ +{ +"svm_123.joblib": ["smart_197_raw", "smart_183_raw", "smart_200_raw", "smart_194_raw", "smart_254_raw", "smart_252_raw", "smart_4_raw", "smart_222_raw", "smart_187_raw", "smart_184_raw"], +"svm_105.joblib": ["smart_197_raw", "smart_4_raw", "smart_5_raw", "smart_252_raw", "smart_184_raw", "smart_223_raw", "smart_198_raw", "smart_10_raw", "smart_189_raw", "smart_222_raw"], +"svm_82.joblib":["smart_184_raw", "smart_2_raw", "smart_187_raw", "smart_225_raw", "smart_198_raw", "smart_197_raw", "smart_4_raw", "smart_13_raw", "smart_188_raw", "smart_251_raw"], +"svm_186.joblib":["smart_3_raw", "smart_11_raw", "smart_198_raw", "smart_250_raw", "smart_13_raw", "smart_200_raw", "smart_224_raw", "smart_187_raw", "smart_22_raw", "smart_4_raw", 
"smart_220_raw"], +"svm_14.joblib":["smart_12_raw", "smart_226_raw", "smart_187_raw", "smart_196_raw", "smart_5_raw", "smart_183_raw", "smart_255_raw", "smart_250_raw", "smart_201_raw", "smart_8_raw"], +"svm_10.joblib":["smart_251_raw", "smart_4_raw", "smart_223_raw", "smart_13_raw", "smart_255_raw", "smart_188_raw", "smart_197_raw", "smart_201_raw", "smart_250_raw", "smart_15_raw"], +"svm_235.joblib":["smart_15_raw", "smart_255_raw", "smart_252_raw", "smart_197_raw", "smart_250_raw", "smart_254_raw", "smart_13_raw", "smart_251_raw", "smart_198_raw", "smart_189_raw", "smart_191_raw"], +"svm_234.joblib":["smart_187_raw", "smart_183_raw", "smart_3_raw", "smart_4_raw", "smart_222_raw", "smart_184_raw", "smart_5_raw", "smart_198_raw", "smart_200_raw", "smart_8_raw", "smart_10_raw"], +"svm_119.joblib":["smart_254_raw", "smart_8_raw", "smart_183_raw", "smart_184_raw", "smart_195_raw", "smart_252_raw", "smart_191_raw", "smart_10_raw", "smart_200_raw", "smart_197_raw"], +"svm_227.joblib":["smart_254_raw", "smart_189_raw", "smart_225_raw", "smart_224_raw", "smart_197_raw", "smart_223_raw", "smart_4_raw", "smart_183_raw", "smart_11_raw", "smart_184_raw", "smart_13_raw"], +"svm_18.joblib":["smart_197_raw", "smart_3_raw", "smart_220_raw", "smart_193_raw", "smart_10_raw", "smart_187_raw", "smart_188_raw", "smart_225_raw", "smart_194_raw", "smart_13_raw"], +"svm_78.joblib":["smart_10_raw", "smart_183_raw", "smart_191_raw", "smart_13_raw", "smart_198_raw", "smart_22_raw", "smart_195_raw", "smart_12_raw", "smart_224_raw", "smart_200_raw"], +"svm_239.joblib":["smart_3_raw", "smart_254_raw", "smart_199_raw", "smart_225_raw", "smart_187_raw", "smart_195_raw", "smart_197_raw", "smart_2_raw", "smart_193_raw", "smart_220_raw", "smart_183_raw"], +"svm_174.joblib":["smart_183_raw", "smart_196_raw", "smart_225_raw", "smart_189_raw", "smart_4_raw", "smart_3_raw", "smart_9_raw", "smart_198_raw", "smart_15_raw", "smart_5_raw", "smart_194_raw"], +"svm_104.joblib":["smart_12_raw", 
"smart_198_raw", "smart_197_raw", "smart_4_raw", "smart_240_raw", "smart_187_raw", "smart_225_raw", "smart_8_raw", "smart_3_raw", "smart_2_raw"], +"svm_12.joblib":["smart_222_raw", "smart_251_raw", "smart_194_raw", "smart_9_raw", "smart_184_raw", "smart_191_raw", "smart_187_raw", "smart_255_raw", "smart_4_raw", "smart_11_raw"], +"svm_97.joblib":["smart_15_raw", "smart_197_raw", "smart_190_raw", "smart_199_raw", "smart_200_raw", "smart_12_raw", "smart_191_raw", "smart_254_raw", "smart_194_raw", "smart_201_raw"], +"svm_118.joblib":["smart_11_raw", "smart_225_raw", "smart_196_raw", "smart_197_raw", "smart_198_raw", "smart_200_raw", "smart_3_raw", "smart_10_raw", "smart_191_raw", "smart_22_raw"], +"svm_185.joblib":["smart_191_raw", "smart_254_raw", "smart_3_raw", "smart_190_raw", "smart_15_raw", "smart_22_raw", "smart_2_raw", "smart_198_raw", "smart_13_raw", "smart_226_raw", "smart_225_raw"], +"svm_206.joblib":["smart_183_raw", "smart_192_raw", "smart_197_raw", "smart_255_raw", "smart_187_raw", "smart_254_raw", "smart_198_raw", "smart_13_raw", "smart_226_raw", "smart_240_raw", "smart_8_raw"], +"svm_225.joblib":["smart_224_raw", "smart_11_raw", "smart_5_raw", "smart_4_raw", "smart_225_raw", "smart_197_raw", "smart_15_raw", "smart_183_raw", "smart_193_raw", "smart_190_raw", "smart_187_raw"], +"svm_169.joblib":["smart_252_raw", "smart_183_raw", "smart_254_raw", "smart_11_raw", "smart_193_raw", "smart_22_raw", "smart_226_raw", "smart_189_raw", "smart_225_raw", "smart_198_raw", "smart_200_raw"], +"svm_79.joblib":["smart_184_raw", "smart_196_raw", "smart_4_raw", "smart_226_raw", "smart_199_raw", "smart_187_raw", "smart_193_raw", "smart_188_raw", "smart_12_raw", "smart_250_raw"], +"svm_69.joblib":["smart_187_raw", "smart_9_raw", "smart_200_raw", "smart_11_raw", "smart_252_raw", "smart_189_raw", "smart_4_raw", "smart_188_raw", "smart_255_raw", "smart_201_raw"], +"svm_201.joblib":["smart_224_raw", "smart_8_raw", "smart_250_raw", "smart_2_raw", "smart_198_raw", "smart_15_raw", 
"smart_193_raw", "smart_223_raw", "smart_3_raw", "smart_11_raw", "smart_191_raw"], +"svm_114.joblib":["smart_226_raw", "smart_188_raw", "smart_2_raw", "smart_11_raw", "smart_4_raw", "smart_193_raw", "smart_184_raw", "smart_194_raw", "smart_198_raw", "smart_13_raw"], +"svm_219.joblib":["smart_12_raw", "smart_22_raw", "smart_8_raw", "smart_191_raw", "smart_197_raw", "smart_254_raw", "smart_15_raw", "smart_193_raw", "smart_199_raw", "smart_225_raw", "smart_192_raw"], +"svm_168.joblib":["smart_255_raw", "smart_191_raw", "smart_193_raw", "smart_220_raw", "smart_5_raw", "smart_3_raw", "smart_222_raw", "smart_223_raw", "smart_197_raw", "smart_196_raw", "smart_22_raw"], +"svm_243.joblib":["smart_11_raw", "smart_255_raw", "smart_10_raw", "smart_189_raw", "smart_225_raw", "smart_240_raw", "smart_222_raw", "smart_197_raw", "smart_183_raw", "smart_198_raw", "smart_12_raw"], +"svm_195.joblib":["smart_183_raw", "smart_5_raw", "smart_11_raw", "smart_197_raw", "smart_15_raw", "smart_9_raw", "smart_4_raw", "smart_220_raw", "smart_12_raw", "smart_192_raw", "smart_240_raw"], +"svm_222.joblib":["smart_10_raw", "smart_13_raw", "smart_188_raw", "smart_15_raw", "smart_192_raw", "smart_224_raw", "smart_225_raw", "smart_187_raw", "smart_222_raw", "smart_220_raw", "smart_252_raw"], +"svm_62.joblib":["smart_196_raw", "smart_251_raw", "smart_187_raw", "smart_224_raw", "smart_11_raw", "smart_12_raw", "smart_8_raw", "smart_199_raw", "smart_220_raw", "smart_195_raw"], +"svm_151.joblib":["smart_187_raw", "smart_223_raw", "smart_200_raw", "smart_189_raw", "smart_251_raw", "smart_255_raw", "smart_222_raw", "smart_192_raw", "smart_12_raw", "smart_183_raw", "smart_22_raw"], +"svm_125.joblib":["smart_9_raw", "smart_252_raw", "smart_197_raw", "smart_251_raw", "smart_11_raw", "smart_12_raw", "smart_188_raw", "smart_240_raw", "smart_10_raw", "smart_223_raw"], +"svm_124.joblib":["smart_193_raw", "smart_187_raw", "smart_183_raw", "smart_11_raw", "smart_10_raw", "smart_8_raw", "smart_194_raw", 
"smart_189_raw", "smart_222_raw", "smart_191_raw"], +"svm_67.joblib":["smart_2_raw", "smart_8_raw", "smart_225_raw", "smart_240_raw", "smart_13_raw", "smart_5_raw", "smart_187_raw", "smart_198_raw", "smart_199_raw", "smart_3_raw"], +"svm_115.joblib":["smart_222_raw", "smart_193_raw", "smart_223_raw", "smart_195_raw", "smart_252_raw", "smart_189_raw", "smart_199_raw", "smart_187_raw", "smart_15_raw", "smart_184_raw"], +"svm_1.joblib":["smart_201_raw", "smart_8_raw", "smart_200_raw", "smart_252_raw", "smart_251_raw", "smart_187_raw", "smart_9_raw", "smart_188_raw", "smart_15_raw", "smart_184_raw"], +"svm_112.joblib":["smart_220_raw", "smart_197_raw", "smart_10_raw", "smart_188_raw", "smart_12_raw", "smart_4_raw", "smart_196_raw", "smart_3_raw", "smart_240_raw", "smart_225_raw"], +"svm_138.joblib":["smart_183_raw", "smart_10_raw", "smart_191_raw", "smart_195_raw", "smart_223_raw", "smart_189_raw", "smart_187_raw", "smart_255_raw", "smart_226_raw", "smart_8_raw"], +"svm_229.joblib":["smart_224_raw", "smart_8_raw", "smart_192_raw", "smart_220_raw", "smart_195_raw", "smart_183_raw", "smart_250_raw", "smart_187_raw", "smart_225_raw", "smart_4_raw", "smart_252_raw"], +"svm_145.joblib":["smart_190_raw", "smart_8_raw", "smart_226_raw", "smart_184_raw", "smart_225_raw", "smart_220_raw", "smart_193_raw", "smart_183_raw", "smart_201_raw", "smart_187_raw", "smart_2_raw"], +"svm_59.joblib":["smart_188_raw", "smart_11_raw", "smart_184_raw", "smart_2_raw", "smart_220_raw", "smart_198_raw", "smart_225_raw", "smart_240_raw", "smart_197_raw", "smart_251_raw"], +"svm_204.joblib":["smart_15_raw", "smart_240_raw", "smart_225_raw", "smart_223_raw", "smart_252_raw", "smart_22_raw", "smart_200_raw", "smart_13_raw", "smart_220_raw", "smart_198_raw", "smart_191_raw"], +"svm_88.joblib":["smart_198_raw", "smart_3_raw", "smart_8_raw", "smart_225_raw", "smart_251_raw", "smart_222_raw", "smart_188_raw", "smart_10_raw", "smart_240_raw", "smart_189_raw"], +"svm_182.joblib":["smart_10_raw", 
"smart_190_raw", "smart_250_raw", "smart_15_raw", "smart_193_raw", "smart_22_raw", "smart_200_raw", "smart_8_raw", "smart_4_raw", "smart_187_raw", "smart_9_raw"], +"svm_61.joblib":["smart_5_raw", "smart_12_raw", "smart_9_raw", "smart_198_raw", "smart_195_raw", "smart_252_raw", "smart_15_raw", "smart_240_raw", "smart_255_raw", "smart_224_raw"], +"svm_50.joblib":["smart_220_raw", "smart_5_raw", "smart_194_raw", "smart_250_raw", "smart_15_raw", "smart_240_raw", "smart_8_raw", "smart_198_raw", "smart_224_raw", "smart_191_raw"], +"svm_210.joblib":["smart_8_raw", "smart_15_raw", "smart_195_raw", "smart_224_raw", "smart_5_raw", "smart_191_raw", "smart_198_raw", "smart_225_raw", "smart_200_raw", "smart_251_raw", "smart_240_raw"], +"svm_16.joblib":["smart_222_raw", "smart_10_raw", "smart_250_raw", "smart_189_raw", "smart_191_raw", "smart_2_raw", "smart_5_raw", "smart_193_raw", "smart_9_raw", "smart_187_raw"], +"svm_85.joblib":["smart_252_raw", "smart_184_raw", "smart_9_raw", "smart_5_raw", "smart_254_raw", "smart_3_raw", "smart_195_raw", "smart_10_raw", "smart_12_raw", "smart_222_raw"], +"svm_36.joblib":["smart_201_raw", "smart_251_raw", "smart_184_raw", "smart_3_raw", "smart_5_raw", "smart_183_raw", "smart_194_raw", "smart_195_raw", "smart_224_raw", "smart_2_raw"], +"svm_33.joblib":["smart_223_raw", "smart_254_raw", "smart_225_raw", "smart_9_raw", "smart_199_raw", "smart_5_raw", "smart_189_raw", "smart_194_raw", "smart_240_raw", "smart_4_raw"], +"svm_3.joblib":["smart_225_raw", "smart_194_raw", "smart_3_raw", "smart_189_raw", "smart_9_raw", "smart_254_raw", "smart_240_raw", "smart_5_raw", "smart_255_raw", "smart_223_raw"], +"svm_93.joblib":["smart_8_raw", "smart_188_raw", "smart_5_raw", "smart_10_raw", "smart_222_raw", "smart_2_raw", "smart_254_raw", "smart_12_raw", "smart_193_raw", "smart_224_raw"], +"svm_120.joblib":["smart_189_raw", "smart_224_raw", "smart_222_raw", "smart_193_raw", "smart_5_raw", "smart_201_raw", "smart_8_raw", "smart_254_raw", "smart_194_raw", 
"smart_22_raw"], +"svm_128.joblib":["smart_195_raw", "smart_184_raw", "smart_251_raw", "smart_8_raw", "smart_5_raw", "smart_196_raw", "smart_10_raw", "smart_4_raw", "smart_225_raw", "smart_191_raw"], +"svm_212.joblib":["smart_225_raw", "smart_192_raw", "smart_10_raw", "smart_12_raw", "smart_222_raw", "smart_184_raw", "smart_13_raw", "smart_226_raw", "smart_5_raw", "smart_201_raw", "smart_22_raw"], +"svm_221.joblib":["smart_255_raw", "smart_2_raw", "smart_224_raw", "smart_192_raw", "smart_252_raw", "smart_13_raw", "smart_183_raw", "smart_193_raw", "smart_15_raw", "smart_199_raw", "smart_200_raw"], +"svm_223.joblib":["smart_4_raw", "smart_194_raw", "smart_9_raw", "smart_255_raw", "smart_188_raw", "smart_201_raw", "smart_3_raw", "smart_226_raw", "smart_192_raw", "smart_251_raw", "smart_191_raw"], +"svm_44.joblib":["smart_255_raw", "smart_11_raw", "smart_200_raw", "smart_3_raw", "smart_195_raw", "smart_201_raw", "smart_4_raw", "smart_5_raw", "smart_10_raw", "smart_191_raw"], +"svm_213.joblib":["smart_22_raw", "smart_191_raw", "smart_183_raw", "smart_4_raw", "smart_194_raw", "smart_255_raw", "smart_254_raw", "smart_193_raw", "smart_11_raw", "smart_10_raw", "smart_220_raw"], +"svm_131.joblib":["smart_22_raw", "smart_194_raw", "smart_184_raw", "smart_250_raw", "smart_10_raw", "smart_189_raw", "smart_183_raw", "smart_240_raw", "smart_12_raw", "smart_252_raw"], +"svm_6.joblib":["smart_194_raw", "smart_250_raw", "smart_223_raw", "smart_224_raw", "smart_184_raw", "smart_191_raw", "smart_201_raw", "smart_9_raw", "smart_252_raw", "smart_3_raw"], +"svm_161.joblib":["smart_255_raw", "smart_222_raw", "smart_226_raw", "smart_254_raw", "smart_183_raw", "smart_22_raw", "smart_12_raw", "smart_190_raw", "smart_11_raw", "smart_192_raw", "smart_251_raw"], +"svm_72.joblib":["smart_13_raw", "smart_184_raw", "smart_223_raw", "smart_240_raw", "smart_250_raw", "smart_251_raw", "smart_201_raw", "smart_196_raw", "smart_5_raw", "smart_4_raw"], +"svm_27.joblib":["smart_189_raw", "smart_188_raw", 
"smart_255_raw", "smart_251_raw", "smart_240_raw", "smart_15_raw", "smart_9_raw", "smart_191_raw", "smart_226_raw", "smart_10_raw"], +"svm_141.joblib":["smart_9_raw", "smart_191_raw", "smart_2_raw", "smart_226_raw", "smart_13_raw", "smart_22_raw", "smart_193_raw", "smart_222_raw", "smart_220_raw", "smart_225_raw", "smart_3_raw"], +"svm_57.joblib":["smart_12_raw", "smart_252_raw", "smart_190_raw", "smart_226_raw", "smart_10_raw", "smart_189_raw", "smart_193_raw", "smart_2_raw", "smart_9_raw", "smart_223_raw"], +"svm_236.joblib":["smart_200_raw", "smart_189_raw", "smart_226_raw", "smart_252_raw", "smart_250_raw", "smart_193_raw", "smart_13_raw", "smart_2_raw", "smart_254_raw", "smart_22_raw", "smart_9_raw"], +"svm_208.joblib":["smart_223_raw", "smart_15_raw", "smart_251_raw", "smart_5_raw", "smart_198_raw", "smart_252_raw", "smart_4_raw", "smart_8_raw", "smart_220_raw", "smart_254_raw", "smart_193_raw"], +"svm_230.joblib":["smart_184_raw", "smart_5_raw", "smart_191_raw", "smart_198_raw", "smart_11_raw", "smart_255_raw", "smart_189_raw", "smart_254_raw", "smart_196_raw", "smart_199_raw", "smart_223_raw"], +"svm_134.joblib":["smart_8_raw", "smart_194_raw", "smart_4_raw", "smart_189_raw", "smart_223_raw", "smart_5_raw", "smart_187_raw", "smart_9_raw", "smart_192_raw", "smart_220_raw"], +"svm_71.joblib":["smart_220_raw", "smart_13_raw", "smart_194_raw", "smart_197_raw", "smart_192_raw", "smart_22_raw", "smart_184_raw", "smart_199_raw", "smart_222_raw", "smart_183_raw"], +"svm_109.joblib":["smart_224_raw", "smart_252_raw", "smart_2_raw", "smart_200_raw", "smart_5_raw", "smart_194_raw", "smart_222_raw", "smart_198_raw", "smart_4_raw", "smart_13_raw"] +} diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_1.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_1.joblib new file mode 100644 index 00000000000..0cbc376d2f6 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_1.joblib differ diff --git
a/src/pybind/mgr/diskprediction/predictor/models/svm_10.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_10.joblib new file mode 100644 index 00000000000..e9133c5e917 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_10.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_104.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_104.joblib new file mode 100644 index 00000000000..f89aa2ff8e1 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_104.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_105.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_105.joblib new file mode 100644 index 00000000000..4524e169c4e Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_105.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_109.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_109.joblib new file mode 100644 index 00000000000..42c9aff3180 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_109.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_112.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_112.joblib new file mode 100644 index 00000000000..bf9791dfe03 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_112.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_114.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_114.joblib new file mode 100644 index 00000000000..8409a991b5c Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_114.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_115.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_115.joblib new file mode 100644 index 00000000000..095a2d6ba7b Binary files /dev/null and 
b/src/pybind/mgr/diskprediction/predictor/models/svm_115.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_118.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_118.joblib new file mode 100644 index 00000000000..167d5fb4175 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_118.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_119.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_119.joblib new file mode 100644 index 00000000000..682da025829 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_119.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_12.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_12.joblib new file mode 100644 index 00000000000..2c2dfb68e49 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_12.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_120.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_120.joblib new file mode 100644 index 00000000000..d44ab60fa2b Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_120.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_123.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_123.joblib new file mode 100644 index 00000000000..9cae3082e12 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_123.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_124.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_124.joblib new file mode 100644 index 00000000000..3949def23c1 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_124.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_125.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_125.joblib new file mode 100644 index 00000000000..e0e7e2ebe08 Binary 
files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_125.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_128.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_128.joblib new file mode 100644 index 00000000000..9fff1a2f81a Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_128.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_131.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_131.joblib new file mode 100644 index 00000000000..8edfe40a5d1 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_131.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_134.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_134.joblib new file mode 100644 index 00000000000..37f96daa9b4 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_134.joblib differ
+  ¬QE`ƒb°EI”Bi”Åg(‡ +¨ˆJ¨‚ª¨†ê¨š¨…Ú¨ƒº°C=Ø£> ÐŽpB4E34‡ +8£Z¢Z£ Ú¢\ð:àKtD'¸¡3º +ºÃ=Ð ½Ñ_Ážè/ô‡7|0¾ðÃ@ Â` ÁP ƒ?”@†cFbFc ÆâkŒÃxL@0&b&c +¦b¦cfa6¾ÁÌÃ|,ÀB,Âb|‹%XŠ|eø˱+± +?b5Ö`-Öa=6`#6a3¶b¶cB±»°{ðöbö#p‡p*DàŽâŽãNâ"qgqçq—pWq ?ã&ná6~A¢qwƒXÄáî# x„Çx‚D$!)xŠgø¿áwü?ñžã%^á5ÞAV™þY‘ ّ¹`…Üȃ¼È(kBa Š¢ŠÃ%PePŸ½?·åQQ •QUQ ÕQµPu`‡z°G}4@C4‚à MÐÍÐr8£Z¢Z£ Ú¢\ð\ñ%:¢ÜÐ]ÐÝáŽè‰ÞèôƒúÃ>_øa a0†`(†ÁD†cFbFc ÆbÆc‚1“0S0Ó030³0ß`æbæc!a1–`)¾C¾Ç2ü€åX•X…ÕXƒµX Ø„ÍØ‚­Ø†í؁Ø…Ý؃½Ã~„ãá0"pGq Çq'q +‘838‹s8 ¸ˆK¸Œ+¸Šk¸Žø7q ·ñ ¢;¸‹˜÷áW'|œÅ9œÇ\Ä%$Ö¢ŽÕ¦Žáwü?ñžã^â^ã Þâdu¨oȊlÈþþœäD.X!7ò /ò¡z]ê=£„}J¡4ʼ?·¥e ¾ðÃ@XÙò / +  ¬Q…Q6(Šb°E TD%TFTE5TG ÔD-ÔFԅêÁõÑ Ñh G8¡ šÂªqDX£ +£lPÅ0k±ë±± ›±[± Û±¡Ø‰]CŽà(Žá8Nà$N!§qgqçqq —qWq ×q?ã&ná6~A¢qwƒXÄáÞûß¿?fçH <* "*¡2ªÀÙéï“6ý†yxyûy(;8G³Ë8e×>}ª²[ڣѫÎ[8(qB˜üÓá=ü†øVvÿð/À^J÷†ïï= ÀÛ[ÙÃ%›ûûݐ¾~Cù  ì9Ãýý~ÈO¸÷éí7À—µöú°ÖÞã”}þ÷Ñï_¥­õïWSo¨ùátÔ aƒ•mäûu¸òlÓëáýêü8ˆî~ \ No newline at end of file diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_138.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_138.joblib new file mode 100644 index 00000000000..95174aea6f6 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_138.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_14.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_14.joblib new file mode 100644 index 00000000000..2f8d79dc7c4 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_14.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_141.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_141.joblib new file mode 100644 index 00000000000..75abdcd1266 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_141.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_145.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_145.joblib new file mode 100644 index 00000000000..c79f59b0306 Binary files /dev/null and 
b/src/pybind/mgr/diskprediction/predictor/models/svm_145.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_151.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_151.joblib new file mode 100644 index 00000000000..9984001480c Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_151.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_16.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_16.joblib new file mode 100644 index 00000000000..687258ca230 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_16.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_161.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_161.joblib new file mode 100644 index 00000000000..c5c63e97973 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_161.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_168.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_168.joblib new file mode 100644 index 00000000000..e436ef9750b Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_168.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_169.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_169.joblib new file mode 100644 index 00000000000..c525797ac87 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_169.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_174.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_174.joblib new file mode 100644 index 00000000000..e9ea8b279f8 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_174.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_18.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_18.joblib new file mode 100644 index 00000000000..ae5b1933edb Binary 
files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_18.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_182.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_182.joblib new file mode 100644 index 00000000000..c8b7cdf3408 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_182.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_185.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_185.joblib new file mode 100644 index 00000000000..2c5ce59a472 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_185.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_186.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_186.joblib new file mode 100644 index 00000000000..ccc566f592c Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_186.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_195.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_195.joblib new file mode 100644 index 00000000000..2766139156d Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_195.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_201.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_201.joblib new file mode 100644 index 00000000000..43550c3ec1f Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_201.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_204.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_204.joblib new file mode 100644 index 00000000000..f8f01cfc199 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_204.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_206.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_206.joblib new file mode 100644 index 
00000000000..158a1d43b0a Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_206.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_208.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_208.joblib new file mode 100644 index 00000000000..dd9298d2f17 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_208.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_210.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_210.joblib new file mode 100644 index 00000000000..cc5d1173ba4 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_210.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_212.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_212.joblib new file mode 100644 index 00000000000..609020a9497 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_212.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_213.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_213.joblib new file mode 100644 index 00000000000..757f1269800 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_213.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_219.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_219.joblib new file mode 100644 index 00000000000..be617b73626 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_219.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_221.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_221.joblib new file mode 100644 index 00000000000..804e51a7db3 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_221.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_222.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_222.joblib new 
file mode 100644 index 00000000000..bfa31ba459a Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_222.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_223.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_223.joblib new file mode 100644 index 00000000000..0bddaea3a9e Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_223.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_225.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_225.joblib new file mode 100644 index 00000000000..78ce3d2e505 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_225.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_227.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_227.joblib new file mode 100644 index 00000000000..dd0c03e3497 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_227.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_229.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_229.joblib new file mode 100644 index 00000000000..f2bb54fb7f4 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_229.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_230.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_230.joblib new file mode 100644 index 00000000000..7e36b43ba29 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_230.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_234.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_234.joblib new file mode 100644 index 00000000000..ce3c000f262 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_234.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_235.joblib 
b/src/pybind/mgr/diskprediction/predictor/models/svm_235.joblib new file mode 100644 index 00000000000..ad159e36fb4 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_235.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_236.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_236.joblib new file mode 100644 index 00000000000..48ab45bc417 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_236.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_239.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_239.joblib new file mode 100644 index 00000000000..4ca76cfb80b Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_239.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_243.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_243.joblib new file mode 100644 index 00000000000..b885d2df04c Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_243.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_27.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_27.joblib new file mode 100644 index 00000000000..89117cb5b7a Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_27.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_3.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_3.joblib new file mode 100644 index 00000000000..7353c25a948 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_3.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_33.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_33.joblib new file mode 100644 index 00000000000..a04670cff92 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_33.joblib differ diff --git 
a/src/pybind/mgr/diskprediction/predictor/models/svm_36.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_36.joblib new file mode 100644 index 00000000000..6517beefe8a Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_36.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_44.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_44.joblib new file mode 100644 index 00000000000..56d3bcea1ed Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_44.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_50.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_50.joblib new file mode 100644 index 00000000000..833efb30bec Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_50.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_57.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_57.joblib new file mode 100644 index 00000000000..c1ce463804e Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_57.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_59.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_59.joblib new file mode 100644 index 00000000000..8b5e1ee89d3 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_59.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_6.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_6.joblib new file mode 100644 index 00000000000..0f76cd75c7f Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_6.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_61.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_61.joblib new file mode 100644 index 00000000000..220d8e46741 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_61.joblib differ diff --git 
a/src/pybind/mgr/diskprediction/predictor/models/svm_62.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_62.joblib new file mode 100644 index 00000000000..4429c6b5b04 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_62.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_67.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_67.joblib new file mode 100644 index 00000000000..08222b8b0f7 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_67.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_69.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_69.joblib new file mode 100644 index 00000000000..168ba9ee1e6 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_69.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_71.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_71.joblib new file mode 100644 index 00000000000..3d938ca1086 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_71.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_72.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_72.joblib new file mode 100644 index 00000000000..3ec9c5728d7 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_72.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_78.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_78.joblib new file mode 100644 index 00000000000..7d2f8d08b55 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_78.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_79.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_79.joblib new file mode 100644 index 00000000000..a3861af21bb Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_79.joblib differ diff 
--git a/src/pybind/mgr/diskprediction/predictor/models/svm_82.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_82.joblib new file mode 100644 index 00000000000..6e055a1275c Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_82.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_85.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_85.joblib new file mode 100644 index 00000000000..e335f9208a7 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_85.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_88.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_88.joblib new file mode 100644 index 00000000000..a35f179dee8 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_88.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_93.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_93.joblib new file mode 100644 index 00000000000..1c6e3f22ee8 Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_93.joblib differ diff --git a/src/pybind/mgr/diskprediction/predictor/models/svm_97.joblib b/src/pybind/mgr/diskprediction/predictor/models/svm_97.joblib new file mode 100644 index 00000000000..111494c2eab Binary files /dev/null and b/src/pybind/mgr/diskprediction/predictor/models/svm_97.joblib differ diff --git a/src/pybind/mgr/diskprediction/requirements.txt b/src/pybind/mgr/diskprediction/requirements.txt new file mode 100644 index 00000000000..3a22a072eba --- /dev/null +++ b/src/pybind/mgr/diskprediction/requirements.txt @@ -0,0 +1,14 @@ +google==2.0.1 +google-api-python-client==1.7.3 +google-auth==1.5.0 +google-auth-httplib2==0.0.3 +google-gax==0.12.5 +googleapis-common-protos==1.5.3 +grpc==0.3.post19 +grpc-google-logging-v2==0.8.1 +grpc-google-pubsub-v1==0.8.1 +grpcio==1.14.1 +mock==2.0.0 +numpy==1.15.1 +scikit-learn==0.19.2 +scipy==1.1.0 diff --git 
a/src/pybind/mgr/diskprediction/task.py b/src/pybind/mgr/diskprediction/task.py new file mode 100644 index 00000000000..914ea59270d --- /dev/null +++ b/src/pybind/mgr/diskprediction/task.py @@ -0,0 +1,157 @@ +from __future__ import absolute_import + +import time +from threading import Event, Thread + +from .agent.metrics.ceph_cluster import CephClusterAgent +from .agent.metrics.ceph_mon_osd import CephMonOsdAgent +from .agent.metrics.ceph_pool import CephPoolAgent +from .agent.metrics.db_relay import DBRelayAgent +from .agent.metrics.sai_agent import SAIAgent +from .agent.metrics.sai_cluster import SAICluserAgent +from .agent.metrics.sai_disk import SAIDiskAgent +from .agent.metrics.sai_disk_smart import SAIDiskSmartAgent +from .agent.metrics.sai_host import SAIHostAgent +from .agent.predict.prediction import PredictionAgent +from .common import DP_MGR_STAT_FAILED, DP_MGR_STAT_OK, DP_MGR_STAT_WARNING + + +class AgentRunner(Thread): + + task_name = '' + interval_key = '' + agents = [] + + def __init__(self, mgr_module, agent_timeout=60): + """ + + :param mgr_module: parent ceph mgr module + :param agent_timeout: (unit seconds) agent execute timeout value, default: 60 secs + """ + Thread.__init__(self) + self._agent_timeout = agent_timeout + self._module_inst = mgr_module + self._log = mgr_module.log + self._obj_sender = None + self._start_time = None + self._th = None + + self.exit = False + self.event = Event() + self.task_interval = \ + int(self._module_inst.get_configuration(self.interval_key)) + + def terminate(self): + self.exit = True + self.event.set() + self._log.info('PDS terminate %s complete' % self.task_name) + + def run(self): + self._start_time = time.time() + self._log.debug( + 'start %s, interval: %s' + % (self.task_name, self.task_interval)) + while not self.exit: + self.run_agents() + if self.event: + self.event.wait(int(self.task_interval)) + self.event.clear() + self._log.info( + 'completed %s(%s)' % (self.task_name, 
time.time()-self._start_time)) + + def run_agents(self): + try: + self._log.debug('run_agents %s' % self.task_name) + model = self._module_inst.get_configuration('diskprediction_config_mode') + if model.lower() == 'cloud': + # from .common.restapiclient import RestApiClient, gen_configuration + from .common.grpcclient import GRPcClient, gen_configuration + conf = gen_configuration( + host=self._module_inst.get_configuration('diskprediction_server'), + user=self._module_inst.get_configuration('diskprediction_user'), + password=self._module_inst.get_configuration( + 'diskprediction_password'), + port=self._module_inst.get_configuration('diskprediction_port'), + cert_context=self._module_inst.get_configuration('diskprediction_cert_context'), + mgr_inst=self._module_inst, + ssl_target_name=self._module_inst.get_configuration('diskprediction_ssl_target_name_override'), + default_authority=self._module_inst.get_configuration('diskprediction_default_authority')) + self._obj_sender = GRPcClient(conf) + else: + from .common.localpredictor import LocalPredictor, gen_configuration + conf = gen_configuration(mgr_inst=self._module_inst) + self._obj_sender = LocalPredictor(conf) + if not self._obj_sender: + self._log.error('invalid diskprediction sender') + self._module_inst.status = \ + {'status': DP_MGR_STAT_FAILED, + 'reason': 'invalid diskprediction sender'} + return + if self._obj_sender.test_connection(): + self._module_inst.status = {'status': DP_MGR_STAT_OK} + self._log.debug('succeed to test connection') + self._run() + else: + self._log.error('failed to test connection') + self._module_inst.status = \ + {'status': DP_MGR_STAT_FAILED, + 'reason': 'failed to test connection'} + except Exception as e: + self._module_inst.status = \ + {'status': DP_MGR_STAT_FAILED, + 'reason': 'failed to start %s agents, %s' + % (self.task_name, str(e))} + self._log.error( + 'failed to start %s agents, %s' % (self.task_name, str(e))) + + def _run(self): + self._log.debug('%s run' % 
self.task_name) + for agent in self.agents: + retry_count = 3 + while retry_count: + retry_count -= 1 + try: + obj_agent = agent( + self._module_inst, self._obj_sender, + self._agent_timeout) + obj_agent.run() + break + except Exception as e: + if str(e).find('configuring') >= 0: + self._log.debug( + 'failed to execute {}, {}, retry again.'.format( + agent.measurement, str(e))) + time.sleep(1) + continue + else: + self._module_inst.status = \ + {'status': DP_MGR_STAT_WARNING, + 'reason': 'failed to execute {}, {}'.format( + agent.measurement, ';'.join(str(e).split('\n\t')))} + self._log.warning( + 'failed to execute {}, {}'.format( + agent.measurement, ';'.join(str(e).split('\n\t')))) + break + + +class MetricsRunner(AgentRunner): + + task_name = 'Metrics Agent' + interval_key = 'diskprediction_upload_metrics_interval' + agents = [CephClusterAgent, CephMonOsdAgent, CephPoolAgent, + SAICluserAgent, SAIDiskAgent, SAIHostAgent, DBRelayAgent, + SAIAgent] + + +class PredictionRunner(AgentRunner): + + task_name = 'Prediction Agent' + interval_key = 'diskprediction_retrieve_prediction_interval' + agents = [PredictionAgent] + + +class SmartRunner(AgentRunner): + + task_name = 'Smart data Agent' + interval_key = 'diskprediction_upload_smart_interval' + agents = [SAIDiskSmartAgent] diff --git a/src/pybind/mgr/diskprediction/test/__init__.py b/src/pybind/mgr/diskprediction/test/__init__.py new file mode 100644 index 00000000000..1f19be57566 --- /dev/null +++ b/src/pybind/mgr/diskprediction/test/__init__.py @@ -0,0 +1 @@ +# vim: tabstop=4 shiftwidth=4 softtabstop=4 diff --git a/src/pybind/mgr/diskprediction/test/test_agents.py b/src/pybind/mgr/diskprediction/test/test_agents.py new file mode 100644 index 00000000000..55999db1994 --- /dev/null +++ b/src/pybind/mgr/diskprediction/test/test_agents.py @@ -0,0 +1,49 @@ +# vim: tabstop=4 shiftwidth=4 softtabstop=4 + +import time +import mock + +from ..agent.metrics.ceph_cluster import CephClusterAgent +from 
..agent.metrics.ceph_mon_osd import CephMonOsdAgent +from ..agent.metrics.ceph_pool import CephPoolAgent +from ..agent.metrics.db_relay import DBRelayAgent +from ..agent.metrics.sai_agent import SAIAgent +from ..agent.metrics.sai_cluster import SAICluserAgent +from ..agent.metrics.sai_disk import SAIDiskAgent +from ..agent.metrics.sai_disk_smart import SAIDiskSmartAgent +from ..agent.metrics.sai_host import SAIHostAgent +from ..agent.predict.prediction import PredictionAgent +from ..common import DummyResonse + +TEMP_RESPONSE = { + "disk_domain_id": 'abc', + "near_failure": 'Good', + "predicted": int(time.time() * (1000 ** 3))} + +def generate_sender_mock(): + sender_mock = mock.MagicMock() + sender = sender_mock + status_info = dict() + status_info['measurement'] = None + status_info['success_count'] = 1 + status_info['failure_count'] = 0 + sender_mock.send_info.return_value = status_info + + query_value = DummyResonse() + query_value.status_code = 200 + query_value.resp_json = TEMP_RESPONSE + sender_mock.query_info.return_value = query_value + return sender + + +def test_agents(mgr_inst, sender=None): + if sender is None: + sender = generate_sender_mock() + + metrics_agents = \ + [CephClusterAgent, CephMonOsdAgent, CephPoolAgent, DBRelayAgent, + SAIAgent, SAICluserAgent, SAIDiskAgent, SAIDiskSmartAgent, + SAIHostAgent, PredictionAgent] + for agent in metrics_agents: + obj_agent = agent(mgr_inst, sender) + obj_agent.run()
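The `AgentRunner.run` loop in task.py above uses a `threading.Event` as an interruptible sleep: each cycle runs the agents, then waits up to `task_interval` seconds, and `terminate()` wakes the wait immediately by setting the event. A minimal standalone sketch of that pattern (the class and callable names here are illustrative, not from the patch):

```python
import time
from threading import Event, Thread

class PeriodicRunner(Thread):
    """Sketch of the AgentRunner loop: do the work, then block on an
    Event so terminate() can interrupt a long interval at once."""

    def __init__(self, interval, work):
        Thread.__init__(self)
        self.interval = interval
        self.work = work          # zero-arg callable executed each cycle
        self.exit = False
        self.event = Event()

    def terminate(self):
        self.exit = True
        self.event.set()          # wakes the wait() below immediately

    def run(self):
        while not self.exit:
            self.work()
            self.event.wait(self.interval)  # interruptible sleep
            self.event.clear()

runs = []
r = PeriodicRunner(10, lambda: runs.append(time.time()))
r.start()
time.sleep(0.1)
r.terminate()   # returns promptly even though the interval is 10 s
r.join(timeout=2)
```

Without the event, a plain `time.sleep(interval)` would make plugin shutdown block for up to a full upload interval.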
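`AgentRunner._run` retries each agent up to three times, but only when the error message contains 'configuring' (the cluster is still settling); other errors are reported and the agent is abandoned. A simplified sketch of that policy, with hypothetical names and the non-transient case re-raised rather than logged:

```python
import time

def run_agent_with_retry(run, attempts=3, delay=0.01):
    """Sketch of AgentRunner._run's retry policy: failures whose message
    contains 'configuring' are treated as transient and retried after a
    short pause; any other failure aborts immediately."""
    last_error = None
    for _ in range(attempts):
        try:
            return run()
        except Exception as e:
            last_error = e
            if 'configuring' in str(e):
                time.sleep(delay)   # cluster still settling, try again
                continue
            raise                   # non-transient: surface at once
    raise last_error

calls = {'n': 0}

def flaky_agent():
    """Hypothetical agent that succeeds only on its third invocation."""
    calls['n'] += 1
    if calls['n'] < 3:
        raise RuntimeError('mon still configuring')
    return 'ok'

result = run_agent_with_retry(flaky_agent)
```

The 1-second sleep in the real module (shortened here) gives the mgr a chance to finish configuration between attempts.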
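test_agents.py exercises every agent against a `MagicMock` sender, so no gRPC backend or cloud account is needed: `send_info` answers with a success-count dict and `query_info` with a canned response object. A condensed sketch of that fixture (`make_sender_stub` and `_Resp` are illustrative stand-ins for the file's `generate_sender_mock` and `DummyResonse`):

```python
from unittest import mock

class _Resp(object):
    """Stand-in for the patch's DummyResonse: bare response holder."""
    pass

def make_sender_stub(resp_json):
    """Build a sender double: any send_info() call reports one success,
    any query_info() call returns a 200 response carrying resp_json."""
    sender = mock.MagicMock()
    sender.send_info.return_value = {
        'measurement': None, 'success_count': 1, 'failure_count': 0}
    resp = _Resp()
    resp.status_code = 200
    resp.resp_json = resp_json
    sender.query_info.return_value = resp
    return sender

sender = make_sender_stub({'near_failure': 'Good'})
reply = sender.query_info()   # MagicMock returns the canned _Resp for any args
```

Because `MagicMock` records calls, the same double also lets a test assert which measurements each agent attempted to send.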