# For Orchestrator related PRs
/src/cephadm @ceph/orchestrators
-/src/pybind/mgr/ansible @ceph/orchestrators
/src/pybind/mgr/orchestrator_cli @ceph/orchestrators
/src/pybind/mgr/orchestrator.py @ceph/orchestrators
/src/pybind/mgr/rook @ceph/orchestrators
%{_bindir}/ceph-mgr
%dir %{_datadir}/ceph/mgr
%{_datadir}/ceph/mgr/alerts
-%{_datadir}/ceph/mgr/ansible
%{_datadir}/ceph/mgr/balancer
%{_datadir}/ceph/mgr/crash
%{_datadir}/ceph/mgr/deepsea
lib/systemd/system/ceph-mgr*
usr/bin/ceph-mgr
usr/share/ceph/mgr/alerts
-usr/share/ceph/mgr/ansible
usr/share/ceph/mgr/balancer
usr/share/ceph/mgr/crash
usr/share/ceph/mgr/deepsea
Currently the following modules use tox:
-- Ansible (``./src/pybind/mgr/ansible``)
+- Cephadm (``./src/pybind/mgr/cephadm``)
- Insights (``./src/pybind/mgr/insights``)
- Orchestrator cli (``./src/pybind/mgr/orchestrator_cli``)
- Manager core (``./src/pybind/mgr``)
+++ /dev/null
-
-.. _ansible-module:
-
-====================
-Ansible Orchestrator
-====================
-
-This module is a :ref:`Ceph orchestrator <orchestrator-modules>` module that uses `Ansible Runner Service <https://github.com/ansible/ansible-runner-service>`_ (a RESTful API server) to execute the Ansible playbooks that implement the supported operations.
-
-For the moment, these operations are basically:
-
-- Get an inventory of the Ceph cluster nodes and all the storage devices present in each node
-- Hosts management
-- Create/remove OSDs
-- ...
-
-Usage
-=====
-
-Enable the module:
-
-::
-
- # ceph mgr module enable ansible
-
-Disable the module:
-
-::
-
- # ceph mgr module disable ansible
-
-
-Enable the Ansible orchestrator module and use it with the :ref:`CLI <orchestrator-cli-module>`:
-
-::
-
- ceph mgr module enable ansible
- ceph orchestrator set backend ansible
-
-
-Configuration
-=============
-
-The external Ansible Runner Service uses TLS mutual authentication to allow clients to use the API.
-A client certificate and a key file should be provided by the administrator of the Ansible Runner Service for each manager node.
-These files should be copied to each of the manager nodes with read access for the ceph user.
-The destination folder and the file names must always be the same on all the manager nodes,
-although the certificate/key content of these files will naturally differ on each node.
-
-Configuration must be set when the module is enabled for the first time.
-
-This can be done in one monitor node via the configuration key facility on a
-cluster-wide level (so they apply to all manager instances) as follows:
-
-First, configure the Ansible Runner Service client certificate and key:
-
-::
-
- If the provided client certificate is usable for all servers, apply it using:
- # ceph ansible set-ssl-certificate -i <location_of_the_crt_file>
- # ceph ansible set-ssl-certificate-key -i <location_of_the_key_file>
-
-
-::
-
- If the client certificate provided is for a specific manager server, use:
- # ceph ansible set-ssl-certificate <server> -i <location_of_the_crt_file>
- # ceph ansible set-ssl-certificate-key <server> -i <location_of_the_key_file>
-
-
-
-After setting the client certificate and key files, finish the configuration as follows:
-
-::
-
- # ceph config set mgr mgr/ansible/server_location <ip_address/server_name>:<port>
- # ceph config set mgr mgr/ansible/verify_server <False|True>
- # ceph config set mgr mgr/ansible/ca_bundle <path_to_ca_bundle_file>
-
-
-
-Where:
-
- * <ip_address/server_name>: The IP address/hostname of the server where the Ansible Runner Service is available.
- * <port>: The port number where the Ansible Runner Service is listening.
- * <False|True>: A boolean that controls whether the Ansible Runner Service server's TLS certificate is verified. Defaults to ``True``.
- * <path_to_ca_bundle_file>: Path to a CA bundle to use in the verification.
-
-To check that everything is OK, use the "status" orchestrator command::
-
- # ceph orchestrator status
- Backend: ansible
- Available: True
-
-Any problem connecting to the external Ansible Runner Service will be reported by this command.
-
-
-Debugging
-=========
-
-Any kind of incident with this orchestrator module can be debugged using the Ceph manager logs:
-
-Set the right log level in order to debug properly. Remember that the Python log levels debug, info, warn and err are mapped to the Ceph severities 20, 4, 1 and 0 respectively.
-
-Use the "active" manager node (the "ceph -s" command on a monitor gives you this information):
-
-* Check current debug level::
-
- [mgr0 ~]# ceph daemon mgr.mgr0 config show | grep debug_mgr
- "debug_mgr": "1/5",
- "debug_mgrc": "1/5",
-
-* Change the log level to "debug"::
-
- [mgr0 ~]# ceph daemon mgr.mgr0 config set debug_mgr 20/5
- {
- "success": ""
- }
-
-* Restore "info" log level::
-
- [mgr0 ~]# ceph daemon mgr.mgr0 config set debug_mgr 1/5
- {
- "success": ""
- }
-
-
-Operations
-==========
-
-To see the complete list of operations, use:
-:ref:`CLI <orchestrator-cli-module>`
Cephadm orchestrator <cephadm>
Rook module <rook>
DeepSea module <deepsea>
- Ansible module <ansible>
In this context, *orchestrator* refers to some external service that
provides the ability to discover devices and create Ceph services. This
-includes external projects such as ceph-ansible, DeepSea, and Rook.
+includes external projects such as DeepSea and Rook.
An *orchestrator module* is a ceph-mgr module (:ref:`mgr-module-dev`)
which implements common management operations using a particular
dashboard [label="mgr/dashboard"]
orchestrator_cli [label="mgr/orchestrator_cli"]
orchestrator [label="Orchestrator Interface"]
- ansible [label="mgr/ansible"]
- ssh [label="mgr/ssh"]
+ cephadm [label="mgr/cephadm"]
deepsea [label="mgr/deepsea"]
label = "ceph-mgr";
dashboard -> orchestrator
orchestrator_cli -> orchestrator
orchestrator -> rook -> rook_io
- orchestrator -> ansible -> ceph_ansible
orchestrator -> deepsea -> suse_deepsea
- orchestrator -> ssh
+ orchestrator -> cephadm
rook_io [label="Rook"]
- ceph_ansible [label="ceph-ansible"]
suse_deepsea [label="DeepSea"]
rankdir="TB";
}
for (auto& k : {
"pod_name", "pod_namespace", // set by rook
- "container_name" // set by ceph-ansible
+ "container_name" // set by cephadm, ceph-ansible
}) {
if (auto p = m.find(k); p != m.end()) {
f->dump_string(k, p->second);
+++ /dev/null
-from __future__ import absolute_import
-
-import os
-if 'UNITTEST' in os.environ:
- import tests
-
-from .module import Module
+++ /dev/null
-"""
-Client module to interact with the Ansible Runner Service
-"""
-
-import json
-import re
-from functools import wraps
-import collections
-import logging
-try:
- from typing import Optional, Dict, Any, List, Set
-except ImportError:
- pass # just for type checking
-
-import requests
-from orchestrator import OrchestratorError
-
-logger = logging.getLogger(__name__)
-
-# Ansible Runner service API endpoints
-API_URL = "api"
-PLAYBOOK_EXEC_URL = "api/v1/playbooks"
-PLAYBOOK_EVENTS = "api/v1/jobs/%s/events"
-EVENT_DATA_URL = "api/v1/jobs/%s/events/%s"
-URL_MANAGE_GROUP = "api/v1/groups/{group_name}"
-URL_ADD_RM_HOSTS = "api/v1/hosts/{host_name}/groups/{inventory_group}"
-
-class AnsibleRunnerServiceError(OrchestratorError):
- """Generic Ansible Runner Service Exception"""
- pass
-
-def handle_requests_exceptions(func):
- """Decorator to manage errors raised by requests library
- """
- @wraps(func)
- def inner(*args, **kwargs):
- """Generic error mgmt decorator"""
- try:
- return func(*args, **kwargs)
- except (requests.exceptions.RequestException, IOError) as ex:
- raise AnsibleRunnerServiceError(str(ex))
- return inner
-
-class ExecutionStatusCode(object):
- """Execution status of playbooks ( 'msg' field in playbook status request)
- """
-
-    SUCCESS = 0 # Playbook has been executed successfully msg = successful
- ERROR = 1 # Playbook has finished with error msg = failed
- ON_GOING = 2 # Playbook is being executed msg = running
- NOT_LAUNCHED = 3 # Not initialized
-
-class PlayBookExecution(object):
- """Object to provide all the results of a Playbook execution
- """
-
- def __init__(self, rest_client, # type: Client
- playbook, # type: str
- result_pattern="", # type: str
- the_params=None, # type: Optional[dict]
- querystr_dict=None # type: Optional[dict]
- ):
-
- self.rest_client = rest_client
-
- # Identifier of the playbook execution
- self.play_uuid = "-"
-
- # Pattern used to extract the result from the events
- self.result_task_pattern = result_pattern
-
- # Playbook name
- self.playbook = playbook
-
- # Parameters used in the playbook
- self.params = the_params
-
- # Query string used in the "launch" request
- self.querystr_dict = querystr_dict
-
- def launch(self):
- """ Launch the playbook execution
- """
-
- response = None
- endpoint = "%s/%s" % (PLAYBOOK_EXEC_URL, self.playbook)
-
- try:
- response = self.rest_client.http_post(endpoint,
- self.params,
- self.querystr_dict)
- except AnsibleRunnerServiceError:
- logger.exception("Error launching playbook <%s>", self.playbook)
- raise
-
- # Here we have a server response, but an error trying
-        # to launch the playbook is also possible (e.g. 404, playbook not found)
- # Error already logged by rest_client, but an error should be raised
- # to the orchestrator (via completion object)
- if response.ok:
- self.play_uuid = json.loads(response.text)["data"]["play_uuid"]
-            logger.info("Playbook execution launched successfully")
- else:
- raise AnsibleRunnerServiceError(response.reason)
-
- def get_status(self):
- """ Return the status of the execution
-
-        In the msg field of the response we can find:
- "msg": "successful"
- "msg": "running"
- "msg": "failed"
- """
-
- status_value = ExecutionStatusCode.NOT_LAUNCHED
- response = None
-
- if self.play_uuid == '-': # Initialized
- status_value = ExecutionStatusCode.NOT_LAUNCHED
- elif self.play_uuid == '': # Error launching playbook
- status_value = ExecutionStatusCode.ERROR
- else:
- endpoint = "%s/%s" % (PLAYBOOK_EXEC_URL, self.play_uuid)
-
- try:
- response = self.rest_client.http_get(endpoint)
- except AnsibleRunnerServiceError:
- logger.exception("Error getting playbook <%s> status",
- self.playbook)
-
- if response:
- the_status = json.loads(response.text)["msg"]
- if the_status == 'successful':
- status_value = ExecutionStatusCode.SUCCESS
- elif the_status == 'failed':
- status_value = ExecutionStatusCode.ERROR
- else:
- status_value = ExecutionStatusCode.ON_GOING
- else:
- status_value = ExecutionStatusCode.ERROR
-
- logger.info("Requested playbook execution status is: %s", status_value)
- return status_value
-
- def get_result(self, event_filter):
- """Get the data of the events filtered by a task pattern and
-        an event filter
-
- @event_filter: list of 0..N event names items
- @returns: the events that matches with the patterns provided
- """
- response = None
- if not self.play_uuid:
- return {}
-
- try:
- response = self.rest_client.http_get(PLAYBOOK_EVENTS % self.play_uuid)
- except AnsibleRunnerServiceError:
- logger.exception("Error getting playbook <%s> result", self.playbook)
-
- if not response:
- result_events = {} # type: Dict[str, Any]
- else:
- events = json.loads(response.text)["data"]["events"]
-
- # Filter by task
- if self.result_task_pattern:
- result_events = {event:data for event, data in events.items()
- if "task" in data and
- re.match(self.result_task_pattern, data["task"])}
- else:
- result_events = events
-
- # Filter by event
- if event_filter:
- type_of_events = "|".join(event_filter)
-
- result_events = {event:data for event, data in result_events.items()
- if re.match(type_of_events, data['event'])}
-
- logger.info("Requested playbook result is: %s", json.dumps(result_events))
- return result_events
-
-class Client(object):
-    """A utility object that makes it easy to connect to the Ansible Runner
-    Service and execute playbooks
- """
-
- def __init__(self,
- server_url, # type: str
- verify_server, # type: bool
- ca_bundle, # type: str
- client_cert, # type: str
- client_key # type: str
- ):
-        """Provide an HTTPS client to easily interact with the Ansible
-        Runner Service
-
-        :param server_url: The base URL <server>:<port> of the Ansible Runner
- Service
-        :param verify_server: A boolean to specify if server authenticity should
- be checked or not. (True by default)
- :param ca_bundle: If provided, an alternative Cert. Auth. bundle file
-                          will be used as source for checking the authenticity of
- the Ansible Runner Service
- :param client_cert: Path to Ansible Runner Service client certificate
- file
- :param client_key: Path to Ansible Runner Service client certificate key
- file
- """
- self.server_url = server_url
- self.client_cert = (client_cert, client_key)
-
- # used to provide the "verify" parameter in requests
- # a boolean that sometimes contains a string :-(
- self.verify_server = verify_server
-        if ca_bundle: # This intentionally overwrites
- self.verify_server = ca_bundle # type: ignore
-
- self.server_url = "https://{0}".format(self.server_url)
-
- @handle_requests_exceptions
- def is_operative(self):
-        """Indicate whether the connection with the Ansible Runner Service is OK
- """
-
- # Check the service
- response = self.http_get(API_URL)
-
- if response:
- return response.status_code == requests.codes.ok
- else:
- return False
-
- @handle_requests_exceptions
- def http_get(self, endpoint):
- """Execute an http get request
-
- :param endpoint: Ansible Runner service RESTful API endpoint
-
- :returns: A requests object
- """
-
- the_url = "%s/%s" % (self.server_url, endpoint)
-
- response = requests.get(the_url,
- verify=self.verify_server,
- cert=self.client_cert,
- headers={})
-
- if response.status_code != requests.codes.ok:
- logger.error("http GET %s <--> (%s - %s)\n%s",
- the_url, response.status_code, response.reason,
- response.text)
- else:
- logger.info("http GET %s <--> (%s - %s)",
- the_url, response.status_code, response.text)
-
- return response
-
- @handle_requests_exceptions
- def http_post(self, endpoint, payload, params_dict):
- """Execute an http post request
-
- :param endpoint: Ansible Runner service RESTful API endpoint
- :param payload: Dictionary with the data used in the post request
- :param params_dict: A dict used to build a query string
-
- :returns: A requests object
- """
-
- the_url = "%s/%s" % (self.server_url, endpoint)
- response = requests.post(the_url,
- verify=self.verify_server,
- cert=self.client_cert,
- headers={"Content-type": "application/json"},
- json=payload,
- params=params_dict)
-
- if response.status_code != requests.codes.ok:
- logger.error("http POST %s [%s] <--> (%s - %s:%s)\n",
- the_url, payload, response.status_code,
- response.reason, response.text)
- else:
- logger.info("http POST %s <--> (%s - %s)",
- the_url, response.status_code, response.text)
-
- return response
-
- @handle_requests_exceptions
- def http_delete(self, endpoint):
- """Execute an http delete request
-
- :param endpoint: Ansible Runner service RESTful API endpoint
-
- :returns: A requests object
- """
-
- the_url = "%s/%s" % (self.server_url, endpoint)
- response = requests.delete(the_url,
- verify=self.verify_server,
- cert=self.client_cert,
- headers={})
-
- if response.status_code != requests.codes.ok:
- logger.error("http DELETE %s <--> (%s - %s)\n%s",
- the_url, response.status_code, response.reason,
- response.text)
- else:
- logger.info("http DELETE %s <--> (%s - %s)",
- the_url, response.status_code, response.text)
-
- return response
-
- def http_put(self, endpoint, payload):
- """Execute an http put request
-
- :param endpoint: Ansible Runner service RESTful API endpoint
- :param payload: Dictionary with the data used in the put request
-
- :returns: A requests object
- """
- # TODO
- raise NotImplementedError("TODO")
-
-
- def add_hosts_to_group(self, hosts, group):
- """ Add one or more hosts to an Ansible inventory group
-
-        :hosts : hosts to add
- :group: Ansible inventory group where the hosts will be included
-
- :return : Nothing
-
- :raises : AnsibleRunnerServiceError if not possible to complete
- the operation
- """
-
- url_group = URL_MANAGE_GROUP.format(group_name=group)
-
- # Get/Create the group
- response = self.http_get(url_group)
- if response.status_code == 404:
- # Create the new group
- response = self.http_post(url_group, "", {})
- if response.status_code != 200:
- raise AnsibleRunnerServiceError("Error when trying to "\
- "create group:{}".format(group))
- hosts_in_group = [] # type: List[str]
- else:
- hosts_in_group = json.loads(response.text)["data"]["members"]
-
- # Here we have the group in the inventory. Add the hosts
- for host in hosts:
- if host not in hosts_in_group:
- add_url = URL_ADD_RM_HOSTS.format(host_name=host,
- inventory_group=group)
-
- response = self.http_post(add_url, "", {})
- if response.status_code != 200:
- raise AnsibleRunnerServiceError("Error when trying to "\
- "include host '{}' in group"\
- " '{}'".format(host, group))
-
- def remove_hosts_from_group(self, group, hosts):
-        """ Remove the given hosts from the group; the group itself is also
-        removed if it ends up empty
-
- : group : Group name (str)
- : hosts : List of hosts to remove
- """
-
- url_group = URL_MANAGE_GROUP.format(group_name=group)
- response = self.http_get(url_group)
-
- # Group not found is OK!
- if response.status_code == 404:
- return
-
- # Once we have the group, we remove the hosts required
- if response.status_code == 200:
- hosts_in_group = json.loads(response.text)["data"]["members"]
-
- # Delete the hosts (it does not matter if the host does not exist)
- for host in hosts:
- if host in hosts_in_group:
- url_host = URL_ADD_RM_HOSTS.format(host_name=host,
- inventory_group=group)
- response = self.http_delete(url_host)
- hosts_in_group.remove(host)
-
- # Delete the group if no hosts in it
- if not hosts_in_group:
- response = self.http_delete(url_group)
-
- def get_hosts_in_group(self, group):
-        """ Return the list of hosts in an inventory group
-
- : group : Group name (str)
- """
- url_group = URL_MANAGE_GROUP.format(group_name=group)
- response = self.http_get(url_group)
- if response.status_code == 404:
- raise AnsibleRunnerServiceError("Group {} not found in Ansible"\
- " inventory".format(group))
-
- return json.loads(response.text)["data"]["members"]
-
-
-class InventoryGroup(collections.MutableSet):
- """ Manages an Ansible Inventory Group
- """
- def __init__(self, group_name, ars_client):
- # type: (str, Client) -> None
- """Init the group_name attribute and
- Create the inventory group if it does not exist
-
- : group_name : Name of the group in the Ansible Inventory
- : returns : Nothing
- """
-
- self.elements = set() # type: Set[Any]
-
- self.group_name = group_name
- self.url_group = URL_MANAGE_GROUP.format(group_name=self.group_name)
- self.created = False
- self.ars_client = ars_client
-
- # Get/Create the group
- response = self.ars_client.http_get(self.url_group)
- if response.status_code == 404:
- return
-
- # get members if the group exists previously
- self.created = True
- self.elements.update(json.loads(response.text)["data"]["members"])
-
- def __contains__(self, host):
- """ Check if the host is in the group
-
- : host: Check if hosts is in Ansible Inventory Group
- """
- return host in self.elements
-
- def __iter__(self):
- return iter(self.elements)
-
- def __len__(self):
- return len(self.elements)
-
- def add(self, value):
- """ Add a new host to the group
- Create the Ansible Inventory group if it does not exist
-
- : value : The host(string) to add
- """
-
- if not self.created:
- self.__create_group__()
-
- add_url = URL_ADD_RM_HOSTS.format(host_name=value,
- inventory_group=self.group_name)
-
- response = self.ars_client.http_post(add_url, "", {})
- if response.status_code != 200:
- raise AnsibleRunnerServiceError("Error when trying to "\
- "include host '{}' in group"\
- " '{}'".format(value,
- self.group_name))
-
- # Refresh members
- response = self.ars_client.http_get(self.url_group)
- self.elements.update(json.loads(response.text)["data"]["members"])
-
- def discard(self, value):
- """Remove a host from the group.
- Remove the group from the Ansible inventory if it is empty
-
- : value : The host(string) to remove
- """
- url_host = URL_ADD_RM_HOSTS.format(host_name=value,
- inventory_group=self.group_name)
- response = self.ars_client.http_delete(url_host)
-
- # Refresh members
- response = self.ars_client.http_get(self.url_group)
- self.elements.update(json.loads(response.text)["data"]["members"])
-
- # Delete the group if no members
- if not self.elements:
- response = self.ars_client.http_delete(self.url_group)
-
- def update(self, iterable=None):
- """ Update the hosts in the group with the iterable items
-
-        :iterable : An iterable object with host names
- """
- for item in iterable:
- self.add(item)
-
- def clean(self, iterable=None):
- """ Remove from the group the hosts included in iterable
-        If no iterable is provided, all the hosts are removed from the group
-
-        :iterable : An iterable object with host names
- """
-
- if not iterable:
- iterable = self.elements
-
- for item in iterable:
- self.discard(item)
-
- def __create_group__(self):
- """ Create the Ansible inventory group
- """
- response = self.ars_client.http_post(self.url_group, "", {})
-
- if response.status_code != 200:
- raise AnsibleRunnerServiceError("Error when trying to "\
- "create group:{}".format(
- self.group_name))
- self.created = True
- self.elements = set()
+++ /dev/null
-"""
-ceph-mgr Ansible orchestrator module
-
-The external Orchestrator is the Ansible runner service (RESTful https service)
-"""
-
-# pylint: disable=abstract-method, no-member, bad-continuation
-import functools
-import json
-import os
-import errno
-import tempfile
-
-try:
- from typing import List, Optional, Callable, Any
-except ImportError:
- pass # just for type checking
-
-
-import requests
-
-from mgr_module import MgrModule, Option, CLIWriteCommand
-import orchestrator
-from mgr_util import verify_tls_files, ServerConfigException
-
-from .ansible_runner_svc import Client, PlayBookExecution, ExecutionStatusCode,\
- AnsibleRunnerServiceError, InventoryGroup,\
- URL_MANAGE_GROUP, URL_ADD_RM_HOSTS
-
-from .output_wizards import ProcessInventory, ProcessPlaybookResult, \
- ProcessHostsList, OutputWizard
-
-
-
-# Time to clean the completions list
-WAIT_PERIOD = 10
-
-# List of playbooks names used
-
-# Name of the playbook used in the "get_inventory" method.
-# This playbook is expected to provide a list of storage devices in the host
-# where the playbook is executed.
-GET_STORAGE_DEVICES_CATALOG_PLAYBOOK = "storage-inventory.yml"
-
-# Used in the create_osd method
-ADD_OSD_PLAYBOOK = "add-osd.yml"
-
-# Used in the remove_osds method
-REMOVE_OSD_PLAYBOOK = "shrink-osd.yml"
-
-# General multi purpose cluster playbook
-SITE_PLAYBOOK = "site.yml"
-
-# General multi-purpose playbook for removing daemons
-PURGE_PLAYBOOK = "purge-cluster.yml"
-
-# Default name for the inventory group for hosts managed by the Orchestrator
-ORCHESTRATOR_GROUP = "orchestrator"
-
-# URLs for Ansible Runner Operations
-
-# Retrieve the groups where the host is included in.
-URL_GET_HOST_GROUPS = "api/v1/hosts/{host_name}"
-
-
-
-# URLs for Ansible Runner Operations
-URL_GET_HOSTS = "api/v1/hosts"
-
-
-
-def deferred(f):
- # type: (Callable) -> Callable[..., orchestrator.Completion]
- """
-    Decorator to make orchestrator methods return
-    a completion object that executes them.
- """
-
- @functools.wraps(f)
- def wrapper(*args, **kwargs):
- return orchestrator.Completion(on_complete=lambda _: f(*args, **kwargs))
-
- return wrapper
-
-
-def clean_inventory(ar_client, clean_hosts_on_success):
- # type: (Client, dict) -> None
- """ Remove hosts from inventory groups
- """
-
- for group, hosts in clean_hosts_on_success.items():
- InventoryGroup(group, ar_client).clean(hosts)
-
-
-def playbook_operation(client, # type: Client
- playbook, # type: str
- result_pattern, # type: str
- params, # type: dict
- event_filter_list=None, # type: Optional[List[str]]
- querystr_dict=None, # type: Optional[dict]
- output_wizard=None, # type: Optional[OutputWizard]
- clean_hosts_on_success=None # type: Optional[dict]
- ):
- # type: (...) -> orchestrator.Completion
- """
- :param client : Ansible Runner Service Client
- :param playbook : The playbook to execute
- :param result_pattern: The "pattern" to discover what execution events
- have the information deemed as result
- :param params : http request payload for the playbook execution
- :param querystr_dict : http request querystring for the playbook
- execution (DO NOT MODIFY HERE)
-    :param event_filter_list: An additional filter of result events based on the event
-    :param clean_hosts_on_success: A dict with groups and hosts to remove from
-        inventory if the operation is successful.
-        Ex: {"group1": ["host1"], "group2": ["host3", "host4"]}
- """
-
- querystr_dict = querystr_dict or {}
- event_filter_list = event_filter_list or [""]
- clean_hosts_on_success = clean_hosts_on_success or {}
-
- def status(_):
- """Check the status of the playbook execution and update the status
- and result of the underlying Completion object.
- """
-
- status = pb_execution.get_status()
-
- if status in (ExecutionStatusCode.SUCCESS, ExecutionStatusCode.ERROR):
- if status == ExecutionStatusCode.ERROR:
- raw_result = pb_execution.get_result(["runner_on_failed",
- "runner_on_unreachable",
- "runner_on_no_hosts",
- "runner_on_async_failed",
- "runner_item_on_failed"])
- else:
- raw_result = pb_execution.get_result(event_filter_list)
-
- if output_wizard:
- processed_result = output_wizard.process(pb_execution.play_uuid,
- raw_result)
- else:
- processed_result = raw_result
-
-            # Clean hosts if operation is successful
- if status == ExecutionStatusCode.SUCCESS:
- assert clean_hosts_on_success is not None
- clean_inventory(client, clean_hosts_on_success)
-
- return processed_result
- else:
- return orchestrator.Completion(on_complete=status)
-
- pb_execution = PlayBookExecution(client, playbook, result_pattern, params, querystr_dict)
-
- return orchestrator.Completion(on_complete=lambda _: pb_execution.launch()).then(status)
-
-
-def ars_http_operation(url, http_operation, payload="", params_dict=None):
- def inner(ar_client):
- # type: (Client) -> str
- if http_operation == "post":
- response = ar_client.http_post(url,
- payload,
- params_dict)
- elif http_operation == "delete":
- response = ar_client.http_delete(url)
- elif http_operation == "get":
- response = ar_client.http_get(url)
- else:
- assert False, http_operation
-
-        # Any problem executing the sequence of operations will
- # produce an errored completion object.
- try:
- response.raise_for_status()
- except Exception as e:
- raise AnsibleRunnerServiceError(str(e))
-
- return response.text
- return inner
-
-
-@deferred
-def ars_change(client, operations, output_wizard=None):
- # type: (Client, List[Callable[[Client], str]], Optional[OutputWizard]) -> str
- """
- Execute one or more Ansible Runner Service Operations that implies
- a change in the cluster
-
- :param client : Ansible Runner Service Client
- :param operations : A list of http_operation objects
-
- Execute the Ansible Runner Service operations and update the status
- and result of the underlying Completion object.
- """
-
- out = None
- for my_request in operations:
- # Execute the right kind of http request
- out = my_request(client)
-    # If this point is reached, all the operations have been successfully
- # executed, and the final result is updated
- assert out is not None
- if output_wizard:
- return output_wizard.process("", out)
- else:
- return out
-
-
-def ars_read(client, url, get_operation=True, payload=None, output_wizard=None):
- # type: (Client, str, bool, Optional[str], Optional[OutputWizard]) -> orchestrator.Completion
- """
- Execute the Ansible Runner Service operation
-
- :param client : Ansible Runner Service Client
- :param url : The Ansible Runner Service URL that provides
- the operation
- :param get_operation : True if operation is provided using an http GET
- :param payload : http request payload
- """
- return ars_change(client, [ars_http_operation(url, 'get' if get_operation else 'post', payload)], output_wizard)
-
-
-class Module(MgrModule, orchestrator.Orchestrator):
- """An Orchestrator that uses <Ansible Runner Service> to perform operations
- """
-
- MODULE_OPTIONS = [
- # url:port of the Ansible Runner Service
- Option(name="server_location", type="str", default=""),
- # Check server identity (True by default)
- Option(name="verify_server", type="bool", default=True),
- # Path to an alternative CA bundle
- Option(name="ca_bundle", type="str", default="")
- ]
-
- def __init__(self, *args, **kwargs):
- super(Module, self).__init__(*args, **kwargs)
-
- self.run = False
-
- self.all_completions = []
-
- self._ar_client = None # type: Optional[Client]
-
- # TLS certificate and key file names used to connect with the external
- # Ansible Runner Service
- self.client_cert_fname = ""
- self.client_key_fname = ""
-
- # used to provide more verbose explanation of errors in status method
- self.status_message = ""
-
- self.all_progress_references = list() # type: List[orchestrator.ProgressReference]
-
- @property
- def ar_client(self):
- # type: () -> Client
- assert self._ar_client is not None
- return self._ar_client
-
- def available(self):
- """ Check if Ansible Runner service is working
- """
- available = False
- msg = ""
- try:
-
- if self._ar_client:
- available = self.ar_client.is_operative()
- if not available:
- msg = "No response from Ansible Runner Service"
- else:
- msg = "Not possible to initialize connection with Ansible "\
- "Runner service."
-
- except AnsibleRunnerServiceError as ex:
- available = False
- msg = str(ex)
-
- # Add more details to the detected problem
- if self.status_message:
- msg = "{}:\n{}".format(msg, self.status_message)
-
- return (available, msg)
-
- def process(self, completions):
- """Given a list of Completion instances, progress any which are
- incomplete.
-
- :param completions: list of Completion instances
- :Returns : True if everything is done.
- """
-
- if completions:
- self.log.info("process: completions={0}".format(orchestrator.pretty_print(completions)))
-
- def serve(self):
- """ Mandatory for standby modules
- """
- self.log.info("Starting Ansible Orchestrator module ...")
-
- try:
- # Verify config options and client certificates
- self.verify_config()
-
- # Ansible runner service client
- self._ar_client = Client(
- server_url=self.get_module_option('server_location', ''),
- verify_server=self.get_module_option('verify_server', True),
- ca_bundle=self.get_module_option('ca_bundle', ''),
- client_cert=self.client_cert_fname,
- client_key=self.client_key_fname)
-
- except AnsibleRunnerServiceError:
- self.log.exception("Ansible Runner Service not available. "
- "Check external server status/TLS identity or "
- "connection options. If configuration options changed"
- " try to disable/enable the module.")
- self.shutdown()
- return
-
- self.run = True
-
- def shutdown(self):
-
- self.log.info('Stopping Ansible orchestrator module')
- self.run = False
-
- def get_inventory(self, node_filter=None, refresh=False):
- """
-
- :param : node_filter instance
- :param : refresh any cached state
- :Return : A AnsibleReadOperation instance (Completion Object)
- """
-
-        # Create a new read completion object to execute the playbook
- op = playbook_operation(client=self.ar_client,
- playbook=GET_STORAGE_DEVICES_CATALOG_PLAYBOOK,
- result_pattern="list storage inventory",
- params={},
- output_wizard=ProcessInventory(self.ar_client),
- event_filter_list=["runner_on_ok"])
-
- self._launch_operation(op)
-
- return op
-
- def create_osds(self, drive_groups):
- """Create one or more OSDs within a single Drive Group.
-        If no host is provided, the operation affects all the hosts in the OSDS role
-
-
- :param drive_groups: List[(ceph.deployment.drive_group.DriveGroupSpec)],
- Drive group with the specification of drives to use
-
- Caveat: Currently limited to a single DriveGroup.
- The orchestrator_cli expects a single completion which
- ideally represents a set of operations. This orchestrator
- doesn't support this notion, yet. Hence it's only accepting
- a single DriveGroup for now.
- You can work around it by invoking:
-
- $: ceph orchestrator osd create -i <dg.file>
-
- multiple times. The drivegroup file must only contain one spec at a time.
- """
- drive_group = drive_groups[0]
-
- # Transform drive group specification to Ansible playbook parameters
- host, osd_spec = dg_2_ansible(drive_group)
-
-        # Create a new read completion object to execute the playbook
- op = playbook_operation(client=self.ar_client,
- playbook=ADD_OSD_PLAYBOOK,
- result_pattern="",
- params=osd_spec,
- querystr_dict={"limit": host},
- output_wizard=ProcessPlaybookResult(self.ar_client),
- event_filter_list=["playbook_on_stats"])
-
- self._launch_operation(op)
-
- return op
-
- def remove_osds(self, osd_ids, destroy=False):
-        """Remove OSDs.
-
-        :param osd_ids: list of OSD ids to be removed (List[int])
- :param destroy: unsupported.
- """
- assert not destroy
-
- extravars = {'osd_to_kill': ",".join([str(osd_id) for osd_id in osd_ids]),
- 'ireallymeanit':'yes'}
-
-        # Create a new read completion object to execute the playbook
- op = playbook_operation(client=self.ar_client,
- playbook=REMOVE_OSD_PLAYBOOK,
- result_pattern="",
- params=extravars,
- output_wizard=ProcessPlaybookResult(self.ar_client),
- event_filter_list=["playbook_on_stats"])
-
- # Execute the playbook
- self._launch_operation(op)
-
- return op
-
- def get_hosts(self):
-        """Provides a list of inventory nodes
- """
-
- host_ls_op = ars_read(self.ar_client, url=URL_GET_HOSTS,
- output_wizard=ProcessHostsList(self.ar_client))
- return host_ls_op
-
- def add_host(self, host):
- """
- Add a host to the Ansible Runner Service inventory in the "orchestrator"
- group
-
- :param host: hostname
- :returns : orchestrator.Completion
- """
-
- url_group = URL_MANAGE_GROUP.format(group_name=ORCHESTRATOR_GROUP)
-
- try:
-            # Create the orchestrator default group if it does not exist.
-            # If it exists, we ignore the error response
- dummy_response = self.ar_client.http_post(url_group, "", {})
-
- # Here, the default group exists so...
- # Prepare the operation for adding the new host
- add_url = URL_ADD_RM_HOSTS.format(host_name=host,
- inventory_group=ORCHESTRATOR_GROUP)
-
- operations = [ars_http_operation(add_url, "post", "", None)]
-
- except AnsibleRunnerServiceError as ex:
- # Problems with the external orchestrator.
- # Prepare the operation to return the error in a Completion object.
- self.log.exception("Error checking <orchestrator> group: %s",
- str(ex))
- operations = [ars_http_operation(url_group, "post", "", None)]
-
- return ars_change(self.ar_client, operations)
-
- def remove_host(self, host):
- """
- Remove a host from all the groups in the Ansible Runner Service
- inventory.
-
- :param host: hostname
- :returns : orchestrator.Completion
- """
-
- host_groups = [] # type: List[Any]
-
- try:
- # Get the list of groups where the host is included
- groups_url = URL_GET_HOST_GROUPS.format(host_name=host)
- response = self.ar_client.http_get(groups_url)
-
- if response.status_code == requests.codes.ok:
- host_groups = json.loads(response.text)["data"]["groups"]
-
- except AnsibleRunnerServiceError:
- self.log.exception("Error retrieving host groups")
- raise
-
- if not host_groups:
- # Error retrieving the groups, prepare the completion object to
- # execute the problematic operation just to provide the error
- # to the caller
- operations = [ars_http_operation(groups_url, "get")]
- else:
- # Build the operations list
- operations = list(map(lambda x:
- ars_http_operation(URL_ADD_RM_HOSTS.format(
- host_name=host,
- inventory_group=x),
- "delete"),
- host_groups))
-
- return ars_change(self.ar_client, operations)
-
- def add_rgw(self, spec):
- # type: (orchestrator.RGWSpec) -> orchestrator.Completion
-        """ Add an RGW service to the cluster
-
-        :param spec: an orchestrator.RGWSpec object
-
-        :returns: Completion object
- """
-
-
- # Add the hosts to the inventory in the right group
- hosts = spec.placement.hosts
- if not hosts:
- raise orchestrator.OrchestratorError("No hosts provided. "
- "At least one destination host is needed to install the RGW "
- "service")
-
- def set_rgwspec_defaults(spec):
- spec.rgw_multisite = spec.rgw_multisite if spec.rgw_multisite is not None else True
- spec.rgw_zonemaster = spec.rgw_zonemaster if spec.rgw_zonemaster is not None else True
- spec.rgw_zonesecondary = spec.rgw_zonesecondary \
- if spec.rgw_zonesecondary is not None else False
- spec.rgw_multisite_proto = spec.rgw_multisite_proto \
- if spec.rgw_multisite_proto is not None else "http"
- spec.rgw_frontend_port = spec.rgw_frontend_port \
- if spec.rgw_frontend_port is not None else 8080
-
- spec.rgw_zonegroup = spec.rgw_zonegroup if spec.rgw_zonegroup is not None else "default"
- spec.rgw_zone_user = spec.rgw_zone_user if spec.rgw_zone_user is not None else "zone.user"
- spec.rgw_realm = spec.rgw_realm if spec.rgw_realm is not None else "default"
-
- spec.system_access_key = spec.system_access_key \
- if spec.system_access_key is not None else spec.genkey(20)
- spec.system_secret_key = spec.system_secret_key \
- if spec.system_secret_key is not None else spec.genkey(40)
-
- set_rgwspec_defaults(spec)
- InventoryGroup("rgws", self.ar_client).update(hosts)
-
- # Limit playbook execution to certain hosts
- limited = ",".join(str(host) for host in hosts)
-
- # Add the settings for this service
- extravars = {k:v for (k,v) in spec.__dict__.items() if k.startswith('rgw_')}
- extravars['rgw_zone'] = spec.name
- extravars['rgw_multisite_endpoint_addr'] = spec.rgw_multisite_endpoint_addr
- extravars['rgw_multisite_endpoints_list'] = spec.rgw_multisite_endpoints_list
- extravars['rgw_frontend_port'] = str(spec.rgw_frontend_port)
-
- # Group hosts by resource (used in rm ops)
- resource_group = "rgw_zone_{}".format(spec.name)
- InventoryGroup(resource_group, self.ar_client).update(hosts)
-
- # Execute the playbook to create the service
- op = playbook_operation(client=self.ar_client,
- playbook=SITE_PLAYBOOK,
- result_pattern="",
- params=extravars,
- querystr_dict={"limit": limited},
- output_wizard=ProcessPlaybookResult(self.ar_client),
- event_filter_list=["playbook_on_stats"])
-
- # Execute the playbook
- self._launch_operation(op)
-
- return op
-
- def remove_rgw(self, zone):
-        """ Remove an RGW service providing <zone>
-
- :param zone: <zone name> of the RGW
- ...
- :returns : Completion object
- """
-
-
- # Ansible Inventory group for the kind of service
- group = "rgws"
-
-        # Get the list of hosts from which to remove the service
- # (hosts in resource group)
- resource_group = "rgw_zone_{}".format(zone)
-
- hosts_list = list(InventoryGroup(resource_group, self.ar_client))
- limited = ",".join(hosts_list)
-
- # Avoid manual confirmation
- extravars = {"ireallymeanit": "yes"}
-
-        # Clean the inventory after a successful operation
- clean_inventory = {}
- clean_inventory[resource_group] = hosts_list
- clean_inventory[group] = hosts_list
-
- # Execute the playbook to remove the service
- op = playbook_operation(client=self.ar_client,
- playbook=PURGE_PLAYBOOK,
- result_pattern="",
- params=extravars,
- querystr_dict={"limit": limited},
- output_wizard=ProcessPlaybookResult(self.ar_client),
- event_filter_list=["playbook_on_stats"],
- clean_hosts_on_success=clean_inventory)
-
- # Execute the playbook
- self._launch_operation(op)
-
- return op
-
- def _launch_operation(self, ansible_operation):
-        """Launch the operation and add it to the list of ongoing completion
-        objects
-
-        :param ansible_operation: a read/write Ansible operation (completion object)
- """
-
- # Add the operation to the list of things ongoing
- self.all_completions.append(ansible_operation)
-
- def verify_config(self):
-        """Verify the module's mandatory settings and provide help to
-        properly configure the orchestrator
- """
-
-        # Retrieve the TLS content to use and check it
-        # First try to get the certificate and key content for this manager
-        # instance, e.g. mgr/ansible/mgr0/[crt/key]
-        self.log.info("Trying to use the specific certificate and key "
-                      "files configured for this server")
- the_crt = self.get_store("{}/{}".format(self.get_mgr_id(), "crt"))
- the_key = self.get_store("{}/{}".format(self.get_mgr_id(), "key"))
- if the_crt is None or the_key is None:
- # If not possible... try to get generic certificates and key content
- # ex: mgr/ansible/[crt/key]
-            self.log.warning("Specific TLS files for this manager not "
- "configured, trying to use generic files")
- the_crt = self.get_store("crt")
- the_key = self.get_store("key")
-
- if the_crt is None or the_key is None:
- self.status_message = "No client certificate configured. Please "\
- "set Ansible Runner Service client "\
- "certificate and key:\n"\
- "ceph ansible set-ssl-certificate-"\
- "{key,certificate} -i <file>"
- self.log.error(self.status_message)
- return
-
- # generate certificate temp files
- self.client_cert_fname = generate_temp_file("crt", the_crt)
- self.client_key_fname = generate_temp_file("key", the_key)
-
- try:
- verify_tls_files(self.client_cert_fname, self.client_key_fname)
- except ServerConfigException as e:
- self.status_message = str(e)
-
- if self.status_message:
- self.log.error(self.status_message)
- return
-
- # Check module options
- if not self.get_module_option("server_location", ""):
- self.status_message = "No Ansible Runner Service base URL "\
-                                  "<server_name>:<port>. "\
- "Try 'ceph config set mgr mgr/{0}/server_location "\
- "<server name/ip>:<port>'".format(self.module_name)
- self.log.error(self.status_message)
- return
-
-
- if self.get_module_option("verify_server", True):
-            self.status_message = "TLS server identity verification is enabled "\
-                "by default. Use 'ceph config set mgr mgr/{0}/verify_server False' "\
-                "to disable it. Use 'ceph config set mgr mgr/{0}/ca_bundle <path>' "\
-                "to point to an alternative CA bundle used for TLS server "\
-                "verification".format(self.module_name)
- self.log.error(self.status_message)
- return
-
- # Everything ok
- self.status_message = ""
-
-
- #---------------------------------------------------------------------------
- # Ansible Orchestrator self-owned commands
- #---------------------------------------------------------------------------
- @CLIWriteCommand("ansible set-ssl-certificate",
- "name=mgr_id,type=CephString,req=false")
- def set_tls_certificate(self, mgr_id=None, inbuf=None):
-        """Load the TLS certificate into the mon k-v store
- """
- if inbuf is None:
- return -errno.EINVAL, \
- 'Please specify the certificate file with "-i" option', ''
- if mgr_id is not None:
- self.set_store("{}/crt".format(mgr_id), inbuf)
- else:
- self.set_store("crt", inbuf)
- return 0, "SSL certificate updated", ""
-
- @CLIWriteCommand("ansible set-ssl-certificate-key",
- "name=mgr_id,type=CephString,req=false")
- def set_tls_certificate_key(self, mgr_id=None, inbuf=None):
-        """Load the TLS certificate key into the mon k-v store
- """
- if inbuf is None:
- return -errno.EINVAL, \
- 'Please specify the certificate key file with "-i" option', \
- ''
- if mgr_id is not None:
- self.set_store("{}/key".format(mgr_id), inbuf)
- else:
- self.set_store("key", inbuf)
- return 0, "SSL certificate key updated", ""
-
-# Auxiliary functions
-#==============================================================================
-
-def dg_2_ansible(drive_group):
-    """ Transform a drive group specification into:
-
- a host : limit the playbook execution to this host
- a osd_spec : dict of parameters to pass to the Ansible playbook used
- to create the osds
-
- :param drive_group: (type: DriveGroupSpec)
-
-    TODO: This function will possibly be removed or heavily modified when
-          the Ansible playbook that creates OSDs uses ceph-volume batch with
-          a drive group parameter
- """
-
- # Limit the execution of the playbook to certain hosts
-    # TODO: Currently only "*" (all hosts) or a single host_name is
-    # accepted in drive_group.host_pattern
-    # This attribute is intended to be used with "fnmatch" patterns, so when
-    # this becomes effective the "get_inventory" method will be needed
-    # to obtain a list of hosts to be filtered with the "host_pattern"
- if drive_group.host_pattern in ["*"]:
- host = None # No limit in the playbook
- else:
- # For the moment, we assume that we only have 1 host
- host = drive_group.host_pattern
-
- # Compose the OSD configuration
-
-
- osd = {}
- osd["data"] = drive_group.data_devices.paths[0]
- # Other parameters will be extracted in the same way
- #osd["dmcrypt"] = drive_group.encryption
-
- # lvm_volumes parameters
-    # (for the moment, this is what the current playbook accepts)
- osd_spec = {"lvm_volumes":[osd]}
-
-    # Global scope variables can also be included in the osd_spec
- #osd_spec["osd_objectstore"] = drive_group.objectstore
-
- return host, osd_spec
-
-
-def generate_temp_file(key, content):
-    """ Generates a temporary file with the content passed as parameter
-
- :param key : used to build the temp file name
- :param content: the content that will be dumped to file
- :returns : the name of the generated file
- """
-
- fname = ""
-
- if content is not None:
- fname = "{}/{}.tmp".format(tempfile.gettempdir(), key)
- try:
- if os.path.exists(fname):
- os.remove(fname)
- with open(fname, "w") as text_file:
- text_file.write(content)
- except IOError as ex:
- raise AnsibleRunnerServiceError("Cannot store TLS certificate/key"
- " content: {}".format(str(ex)))
-
- return fname
-
-
+++ /dev/null
-"""
-ceph-mgr Output Wizards module
-
-Output wizards are used to process results in different ways in
-completion objects
-"""
-
-import json
-import logging
-
-from ceph.deployment import inventory
-from orchestrator import InventoryNode
-
-from .ansible_runner_svc import EVENT_DATA_URL
-
-logger = logging.getLogger(__name__)
-
-class OutputWizard(object):
- """Base class for help to process output in completion objects
- """
- def __init__(self, ar_client):
-        """Attributes that make it easier to work with output wizards:
-
-        :param ar_client: Ansible Runner Service client
- """
- self.ar_client = ar_client
-
- def process(self, operation_id, raw_result):
-        """Process the raw result
-
-        :param operation_id: identifies the Ansible Runner Service
-                             operation whose result we want to process
- :param raw_result: input for processing
- """
- raise NotImplementedError
-
-class ProcessInventory(OutputWizard):
- """ Adapt the output of the playbook used in 'get_inventory'
- to the Orchestrator expected output (list of InventoryNode)
- """
-
- def process(self, operation_id, raw_result):
- """
- :param operation_id: Playbook uuid
- :param raw_result: events dict with the results
-
- Example:
- inventory_events =
- {'37-100564f1-9fed-48c2-bd62-4ae8636dfcdb': {'host': '192.168.121.254',
- 'task': 'list storage inventory',
- 'event': 'runner_on_ok'},
-             '36-2016b900-e38f-7dcd-a2e7-00000000000e': {'host': '192.168.121.252',
- 'task': 'list storage inventory',
- 'event': 'runner_on_ok'}}
-
- :return : list of InventoryNode
- """
-        # Just making the method more readable
- inventory_events = raw_result
-
-        # Obtain the needed data for each result event
- inventory_nodes = []
-
- # Loop over the result events and request the event data
- for event_key, dummy_data in inventory_events.items():
-
- event_response = self.ar_client.http_get(EVENT_DATA_URL %
- (operation_id, event_key))
-
- # Process the data for each event
- if event_response:
- event_data = json.loads(event_response.text)["data"]["event_data"]
-
- host = event_data["host"]
-
- devices = json.loads(event_data["res"]["stdout"])
- devs = inventory.Devices.from_json(devices)
- inventory_nodes.append(InventoryNode(host, devs))
-
-
- return inventory_nodes
-
-class ProcessPlaybookResult(OutputWizard):
- """ Provides the result of a playbook execution as plain text
- """
- def process(self, operation_id, raw_result):
- """
- :param operation_id: Playbook uuid
- :param raw_result: events dict with the results
-
- :return : String with the playbook execution event list
- """
-        # Just making the method more readable
- inventory_events = raw_result
- result = ""
-
- # Loop over the result events and request the data
- for event_key, dummy_data in inventory_events.items():
- event_response = self.ar_client.http_get(EVENT_DATA_URL %
- (operation_id, event_key))
-
- result += event_response.text
- return result
-
-
-class ProcessHostsList(OutputWizard):
- """ Format the output of host ls call
- """
- def process(self, operation_id, raw_result):
- """ Format the output of host ls call
-
- :param operation_id: Not used in this output wizard
-        :param raw_result: in this case, a JSON string like the following:
- {
- "status": "OK",
- "msg": "",
- "data": {
- "hosts": [
- "host_a",
- "host_b",
- ...
- "host_x",
- ]
- }
- }
-
- :return: list of InventoryNodes
- """
-        # Just making the method more readable
- host_ls_json = raw_result
-
- inventory_nodes = []
-
- try:
- json_resp = json.loads(host_ls_json)
-
- for host in json_resp["data"]["hosts"]:
- inventory_nodes.append(InventoryNode(host, inventory.Devices([])))
-
- except ValueError:
- logger.exception("Malformed json response")
- except KeyError:
- logger.exception("Unexpected content in Ansible Runner Service"
- " response")
- except TypeError:
- logger.exception("Hosts data must be iterable in Ansible Runner "
- "Service response")
-
- return inventory_nodes
+++ /dev/null
-{
- "status": "OK",
- "msg": "",
- "data": {
- "events": {
- "2-6edf768f-2923-44e1-b884-f0227b811cfc": {
- "event": "playbook_on_start"
- },
- "3-2016b900-e38f-7dcd-a2e7-000000000008": {
- "event": "playbook_on_play_start"
- },
- "4-2016b900-e38f-7dcd-a2e7-000000000012": {
- "event": "playbook_on_task_start",
- "task": "Gathering Facts"
- },
- "5-19ae1e5e-aa2d-479e-845a-ef0253cc1f99": {
- "event": "runner_on_ok",
- "host": "192.168.121.245",
- "task": "Gathering Facts"
- },
- "6-aad3acc4-06a3-4c97-82ff-31e9e484b1f5": {
- "event": "runner_on_ok",
- "host": "192.168.121.61",
- "task": "Gathering Facts"
- },
- "7-55298017-3e7d-4734-b316-bbe13ce1da5e": {
- "event": "runner_on_ok",
- "host": "192.168.121.254",
- "task": "Gathering Facts"
- },
- "8-2016b900-e38f-7dcd-a2e7-00000000000a": {
- "event": "playbook_on_task_start",
- "task": "setup"
- },
- "9-2085ccb6-e337-4b9f-bc38-1d8bbf9b973f": {
- "event": "runner_on_ok",
- "host": "192.168.121.254",
- "task": "setup"
- },
- "10-e14cdbbc-4883-436c-a41c-a8194ec69075": {
- "event": "runner_on_ok",
- "host": "192.168.121.245",
- "task": "setup"
- },
- "11-6d815a26-df53-4240-b8b6-2484e88e4f48": {
- "event": "runner_on_ok",
- "host": "192.168.121.61",
- "task": "setup"
- },
- "12-2016b900-e38f-7dcd-a2e7-00000000000b": {
- "event": "playbook_on_task_start",
- "task": "Get a list of block devices (excludes loop and child devices)"
- },
- "13-799b0119-ccab-4eca-b30b-a37b0bafa02c": {
- "event": "runner_on_ok",
- "host": "192.168.121.245",
- "task": "Get a list of block devices (excludes loop and child devices)"
- },
- "14-6beb6958-4bfd-4a9c-bd2c-d20d00248605": {
- "event": "runner_on_ok",
- "host": "192.168.121.61",
- "task": "Get a list of block devices (excludes loop and child devices)"
- },
- "15-3ca99cc8-98ea-4967-8f2d-115426d00b6a": {
- "event": "runner_on_ok",
- "host": "192.168.121.254",
- "task": "Get a list of block devices (excludes loop and child devices)"
- },
- "16-2016b900-e38f-7dcd-a2e7-00000000000c": {
- "event": "playbook_on_task_start",
- "task": "check if disk {{ item }} is free"
- },
- "17-8c88141a-08d1-411f-a855-9f7702a49c4e": {
- "event": "runner_item_on_failed",
- "host": "192.168.121.245",
- "task": "check if disk vda is free"
- },
- "18-4457db98-6f18-4f63-bfaa-584db5eea05b": {
- "event": "runner_on_failed",
- "host": "192.168.121.245",
- "task": "check if disk {{ item }} is free"
- },
- "19-ac3c72cd-1fbb-495a-be69-53fa6029f356": {
- "event": "runner_item_on_failed",
- "host": "192.168.121.61",
- "task": "check if disk vda is free"
- },
- "20-d161cb70-ba2e-4571-b029-c6428a566fef": {
- "event": "runner_on_failed",
- "host": "192.168.121.61",
- "task": "check if disk {{ item }} is free"
- },
- "21-65f1ce5c-2d86-4cc3-8e10-cff6bf6cbd82": {
- "event": "runner_item_on_failed",
- "host": "192.168.121.254",
- "task": "check if disk sda is free"
- },
- "22-7f86dcd4-4ef7-4f5a-9db3-c3780b67cc4b": {
- "event": "runner_item_on_failed",
- "host": "192.168.121.254",
- "task": "check if disk sdb is free"
- },
- "23-837bf4f6-a912-46a8-b94b-55aa66a935c4": {
- "event": "runner_item_on_ok",
- "host": "192.168.121.254",
- "task": "check if disk sdc is free"
- },
- "24-adf6238d-723f-4783-9226-8475419d466e": {
- "event": "runner_item_on_failed",
- "host": "192.168.121.254",
- "task": "check if disk vda is free"
- },
- "25-554661d8-bc34-4885-a589-4960d6b8a487": {
- "event": "runner_on_failed",
- "host": "192.168.121.254",
- "task": "check if disk {{ item }} is free"
- },
- "26-2016b900-e38f-7dcd-a2e7-00000000000d": {
- "event": "playbook_on_task_start",
- "task": "Update hosts freedisk list"
- },
- "27-52df484c-30a0-4e3b-9057-02ca345c5790": {
- "event": "runner_item_on_skipped",
- "host": "192.168.121.254",
- "task": "Update hosts freedisk list"
- },
- "28-083616ad-3c1f-4fb8-a06c-5d64e670e362": {
- "event": "runner_item_on_skipped",
- "host": "192.168.121.254",
- "task": "Update hosts freedisk list"
- },
- "29-bffc68d3-5448-491f-8780-07858285f5cd": {
- "event": "runner_item_on_skipped",
- "host": "192.168.121.245",
- "task": "Update hosts freedisk list"
- },
- "30-cca2dfd9-16e9-4fcb-8bf7-c4da7dab5668": {
- "event": "runner_on_skipped",
- "host": "192.168.121.245",
- "task": "Update hosts freedisk list"
- },
- "31-158a98ac-7e8d-4ebb-8c53-4467351a2d3a": {
- "event": "runner_item_on_ok",
- "host": "192.168.121.254",
- "task": "Update hosts freedisk list"
- },
- "32-06a7e809-8d82-41df-b01d-45d94e519cb7": {
- "event": "runner_item_on_skipped",
- "host": "192.168.121.254",
- "task": "Update hosts freedisk list"
- },
- "33-d5cdbb58-728a-4be5-abf1-4a051146e727": {
- "event": "runner_item_on_skipped",
- "host": "192.168.121.61",
- "task": "Update hosts freedisk list"
- },
- "34-9b3c570b-22d8-4539-8c94-d0c1cbed8633": {
- "event": "runner_on_ok",
- "host": "192.168.121.254",
- "task": "Update hosts freedisk list"
- },
- "35-93336830-03cd-43ff-be87-a7e063ca7547": {
- "event": "runner_on_skipped",
- "host": "192.168.121.61",
- "task": "Update hosts freedisk list"
- },
- "36-2016b900-e38f-7dcd-a2e7-00000000000e": {
- "event": "playbook_on_task_start",
- "task": "RESULTS"
- },
- "37-100564f1-9fed-48c2-bd62-4ae8636dfcdb": {
- "event": "runner_on_ok",
- "host": "192.168.121.254",
- "task": "RESULTS"
- },
- "38-20a64160-30a1-481f-a3ee-36e491bc7869": {
- "event": "playbook_on_stats"
- }
- },
- "total_events": 37
- }
-}
-
+++ /dev/null
-import logging
-import unittest
-from tests import mock
-import json
-import os
-
-import requests_mock
-
-from requests.exceptions import ConnectionError
-
-from ..ansible_runner_svc import Client, PlayBookExecution, ExecutionStatusCode, \
- API_URL, PLAYBOOK_EXEC_URL, \
- PLAYBOOK_EVENTS, AnsibleRunnerServiceError
-
-
-SERVER_URL = "ars:5001"
-CERTIFICATE = ""
-
-# Playbook attributes
-PB_NAME = "test_playbook"
-PB_UUID = "1733c3ac"
-
-# Playbook execution data file
-PB_EVENTS_FILE = os.path.dirname(__file__) + "/pb_execution_events.data"
-
-# create console handler and set level to info
-logger = logging.getLogger()
-handler = logging.StreamHandler()
-handler.setLevel(logging.INFO)
-formatter = logging.Formatter("%(levelname)s - %(message)s")
-handler.setFormatter(formatter)
-logger.addHandler(handler)
-
-def mock_get_pb(mock_server, playbook_name, return_code):
-
- ars_client = Client(SERVER_URL, verify_server=False, ca_bundle="",
- client_cert = "DUMMY_PATH", client_key = "DUMMY_PATH")
-
- the_pb_url = "https://%s/%s/%s" % (SERVER_URL, PLAYBOOK_EXEC_URL, playbook_name)
-
- if return_code == 404:
- mock_server.register_uri("POST",
- the_pb_url,
- json={ "status": "NOTFOUND",
- "msg": "playbook file not found",
- "data": {}},
- status_code=return_code)
- elif return_code == 202:
- mock_server.register_uri("POST",
- the_pb_url,
- json={ "status": "STARTED",
- "msg": "starting",
- "data": { "play_uuid": "1733c3ac" }},
- status_code=return_code)
-
- return PlayBookExecution(ars_client, playbook_name,
- result_pattern = "RESULTS")
-
-class ARSclientTest(unittest.TestCase):
-
- def test_server_not_reachable(self):
-
- with self.assertRaises(AnsibleRunnerServiceError):
- ars_client = Client(SERVER_URL, verify_server=False, ca_bundle="",
- client_cert = "DUMMY_PATH", client_key = "DUMMY_PATH")
-
- status = ars_client.is_operative()
-
-
- def test_server_connection_ok(self):
-
- with requests_mock.Mocker() as mock_server:
-
- ars_client = Client(SERVER_URL, verify_server=False, ca_bundle="",
- client_cert = "DUMMY_PATH", client_key = "DUMMY_PATH")
-
- the_api_url = "https://%s/%s" % (SERVER_URL,API_URL)
- mock_server.register_uri("GET",
- the_api_url,
- text="<!DOCTYPE html>api</html>",
- status_code=200)
-
- self.assertTrue(ars_client.is_operative(),
- "Operative attribute expected to be True")
-
- def test_server_http_delete(self):
-
- with requests_mock.Mocker() as mock_server:
-
- ars_client = Client(SERVER_URL, verify_server=False, ca_bundle="",
- client_cert = "DUMMY_PATH", client_key = "DUMMY_PATH")
-
- url = "https://%s/test" % (SERVER_URL)
- mock_server.register_uri("DELETE",
- url,
- json={ "status": "OK",
- "msg": "",
- "data": {}},
- status_code=201)
-
- response = ars_client.http_delete("test")
- self.assertTrue(response.status_code == 201)
-
-class PlayBookExecutionTests(unittest.TestCase):
-
-
- def test_playbook_execution_ok(self):
- """Check playbook id is set when the playbook is launched
- """
- with requests_mock.Mocker() as mock_server:
-
- test_pb = mock_get_pb(mock_server, PB_NAME, 202)
-
- test_pb.launch()
-
- self.assertEqual(test_pb.play_uuid, PB_UUID,
-                             "Unexpected playbook uuid")
-
- def test_playbook_execution_error(self):
- """Check playbook id is not set when the playbook is not present
- """
-
- with requests_mock.Mocker() as mock_server:
-
- test_pb = mock_get_pb(mock_server, "unknown_playbook", 404)
-
- with self.assertRaises(AnsibleRunnerServiceError):
- test_pb.launch()
-
- #self.assertEqual(test_pb.play_uuid, "",
- # "Playbook uuid not empty")
-
- def test_playbook_not_launched(self):
-        """Check the right status code when the playbook execution has not been launched
- """
-
- with requests_mock.Mocker() as mock_server:
-
- test_pb = mock_get_pb(mock_server, PB_NAME, 202)
-
- # Check playbook not launched
- self.assertEqual(test_pb.get_status(),
- ExecutionStatusCode.NOT_LAUNCHED,
- "Wrong status code for playbook not launched")
-
- def test_playbook_launched(self):
-        """Check the right status code when the playbook execution has been launched
- """
-
- with requests_mock.Mocker() as mock_server:
-
- test_pb = mock_get_pb(mock_server, PB_NAME, 202)
-
- test_pb.launch()
-
- the_status_url = "https://%s/%s/%s" % (SERVER_URL,
- PLAYBOOK_EXEC_URL,
- PB_UUID)
- mock_server.register_uri("GET",
- the_status_url,
- json={"status": "OK",
- "msg": "running",
- "data": {"task": "Step 2",
- "last_task_num": 6}
- },
- status_code=200)
-
- self.assertEqual(test_pb.get_status(),
- ExecutionStatusCode.ON_GOING,
- "Wrong status code for a running playbook")
-
- self.assertEqual(test_pb.play_uuid, PB_UUID,
- "Unexpected playbook uuid")
-
- def test_playbook_finish_ok(self):
-        """Check the right status code when the playbook execution is successful
- """
- with requests_mock.Mocker() as mock_server:
-
- test_pb = mock_get_pb(mock_server, PB_NAME, 202)
-
- test_pb.launch()
-
- the_status_url = "https://%s/%s/%s" % (SERVER_URL,
- PLAYBOOK_EXEC_URL,
- PB_UUID)
- mock_server.register_uri("GET",
- the_status_url,
- json={"status": "OK",
- "msg": "successful",
- "data": {}
- },
- status_code=200)
-
- self.assertEqual(test_pb.get_status(),
- ExecutionStatusCode.SUCCESS,
-                             "Wrong status code for a playbook executed successfully")
-
- def test_playbook_finish_error(self):
-        """Check the right status code when the playbook execution has failed
- """
- with requests_mock.Mocker() as mock_server:
-
- test_pb = mock_get_pb(mock_server, PB_NAME, 202)
-
- test_pb.launch()
-
- the_status_url = "https://%s/%s/%s" % (SERVER_URL,
- PLAYBOOK_EXEC_URL,
- PB_UUID)
- mock_server.register_uri("GET",
- the_status_url,
- json={"status": "OK",
- "msg": "failed",
- "data": {}
- },
- status_code=200)
-
- self.assertEqual(test_pb.get_status(),
- ExecutionStatusCode.ERROR,
- "Wrong status code for a playbook with error")
-
- def test_playbook_get_result(self):
- """ Find the right result event in a set of different events
- """
- with requests_mock.Mocker() as mock_server:
-
- test_pb = mock_get_pb(mock_server, PB_NAME, 202)
-
- test_pb.launch()
-
- the_events_url = "https://%s/%s" % (SERVER_URL,
- PLAYBOOK_EVENTS % PB_UUID)
-
- # Get the events stored in a file
- pb_events = {}
- with open(PB_EVENTS_FILE) as events_file:
- pb_events = json.loads(events_file.read())
-
- mock_server.register_uri("GET",
- the_events_url,
- json=pb_events,
- status_code=200)
-
- result = test_pb.get_result("runner_on_ok")
-
- self.assertEqual(len(result.keys()), 1,
- "Unique result event not found")
-
- self.assertIn("37-100564f1-9fed-48c2-bd62-4ae8636dfcdb",
- result.keys(),
- "Predefined result event not found")
+++ /dev/null
-""" Test output wizards
-"""
-import unittest
-from tests import mock
-
-from ..ansible_runner_svc import EVENT_DATA_URL
-from ..output_wizards import ProcessHostsList, ProcessPlaybookResult, \
- ProcessInventory
-
-class OutputWizardProcessHostsList(unittest.TestCase):
- """Test ProcessHostsList Output Wizard
- """
- RESULT_OK = """
- {
- "status": "OK",
- "msg": "",
- "data": {
- "hosts": [
- "host_a",
- "host_b",
- "host_c"
- ]
- }
- }
- """
- ar_client = mock.Mock()
- test_wizard = ProcessHostsList(ar_client)
-
- def test_process(self):
- """Test a normal call"""
-
- nodes_list = self.test_wizard.process("", self.RESULT_OK)
- self.assertEqual([node.name for node in nodes_list],
- ["host_a", "host_b", "host_c"])
-
- def test_errors(self):
-        """Test different kinds of errors when processing the result"""
-
- # Malformed json
- host_list = self.test_wizard.process("", """{"msg": """"")
- self.assertEqual(host_list, [])
-
- # key error
- host_list = self.test_wizard.process("", """{"msg": ""}""")
- self.assertEqual(host_list, [])
-
- # Hosts not in iterable
- host_list = self.test_wizard.process("", """{"data":{"hosts": 123} }""")
- self.assertEqual(host_list, [])
-
-class OutputWizardProcessPlaybookResult(unittest.TestCase):
- """Test ProcessPlaybookResult Output Wizard
- """
- # Input to process
- INVENTORY_EVENTS = {1:"first event", 2:"second event"}
- EVENT_INFORMATION = "event information\n"
-
- # Mocked response
- mocked_response = mock.Mock()
- mocked_response.text = EVENT_INFORMATION
-
- # The Ansible Runner Service client
- ar_client = mock.Mock()
- ar_client.http_get = mock.MagicMock(return_value=mocked_response)
-
- test_wizard = ProcessPlaybookResult(ar_client)
-
- def test_process(self):
- """Test a normal call
- """
-
- operation_id = 24
- result = self.test_wizard.process(operation_id, self.INVENTORY_EVENTS)
-
-        # Check the HTTP requests are correct and compose the expected result
- expected_result = ""
- for key, dummy_data in self.INVENTORY_EVENTS.items():
- http_request = EVENT_DATA_URL % (operation_id, key)
- self.ar_client.http_get.assert_any_call(http_request)
- expected_result += self.EVENT_INFORMATION
-
-        # Check result
- self.assertEqual(result, expected_result)
-
-class OutputWizardProcessInventory(unittest.TestCase):
- """Test ProcessInventory Output Wizard
- """
- # Input to process
- INVENTORY_EVENTS = {'event_uuid_1': {'host': '192.168.121.144',
- 'task': 'list storage inventory',
- 'event': 'runner_on_ok'}}
- EVENT_DATA = r"""
- {
- "status": "OK",
- "msg": "",
- "data": {
- "uuid": "5e96d509-174d-4f5f-bd94-e278c3a5b85b",
- "counter": 11,
- "stdout": "changed: [192.168.121.144]",
- "start_line": 17,
- "end_line": 18,
- "runner_ident": "6e98b2ba-3ce1-11e9-be81-2016b900e38f",
- "created": "2019-03-02T11:50:56.582112",
- "pid": 482,
- "event_data": {
- "play_pattern": "osds",
- "play": "query each host for storage device inventory",
- "task": "list storage inventory",
- "task_args": "_ansible_version=2.6.5, _ansible_selinux_special_fs=['fuse', 'nfs', 'vboxsf', 'ramfs', '9p'], _ansible_no_log=False, _ansible_module_name=ceph_volume, _ansible_debug=False, _ansible_verbosity=0, _ansible_keep_remote_files=False, _ansible_syslog_facility=LOG_USER, _ansible_socket=None, action=inventory, _ansible_diff=False, _ansible_remote_tmp=~/.ansible/tmp, _ansible_shell_executable=/bin/sh, _ansible_check_mode=False, _ansible_tmpdir=None",
- "remote_addr": "192.168.121.144",
- "res": {
- "_ansible_parsed": true,
- "stderr_lines": [],
- "changed": true,
- "end": "2019-03-02 11:50:56.554937",
- "_ansible_no_log": false,
- "stdout": "[{\"available\": true, \"rejected_reasons\": [], \"sys_api\": {\"scheduler_mode\": \"noop\", \"rotational\": \"1\", \"vendor\": \"ATA\", \"human_readable_size\": \"50.00 GB\", \"sectors\": 0, \"sas_device_handle\": \"\", \"partitions\": {}, \"rev\": \"2.5+\", \"sas_address\": \"\", \"locked\": 0, \"sectorsize\": \"512\", \"removable\": \"0\", \"path\": \"/dev/sdc\", \"support_discard\": \"\", \"model\": \"QEMU HARDDISK\", \"ro\": \"0\", \"nr_requests\": \"128\", \"size\": 53687091200.0}, \"lvs\": [], \"path\": \"/dev/sdc\"}, {\"available\": false, \"rejected_reasons\": [\"locked\"], \"sys_api\": {\"scheduler_mode\": \"noop\", \"rotational\": \"1\", \"vendor\": \"ATA\", \"human_readable_size\": \"50.00 GB\", \"sectors\": 0, \"sas_device_handle\": \"\", \"partitions\": {}, \"rev\": \"2.5+\", \"sas_address\": \"\", \"locked\": 1, \"sectorsize\": \"512\", \"removable\": \"0\", \"path\": \"/dev/sda\", \"support_discard\": \"\", \"model\": \"QEMU HARDDISK\", \"ro\": \"0\", \"nr_requests\": \"128\", \"size\": 53687091200.0}, \"lvs\": [{\"cluster_name\": \"ceph\", \"name\": \"osd-data-dcf8a88c-5546-42d2-afa4-b36f7fb23b66\", \"osd_id\": \"3\", \"cluster_fsid\": \"30d61f3e-7ee4-4bdc-8fe7-2ad5bb3f5317\", \"type\": \"block\", \"block_uuid\": \"fVqujC-9dgh-cN9W-1XD4-zVx1-1UdA-fUS3ha\", \"osd_fsid\": \"8b7cbeba-5e86-44ff-a5f3-2e7df77753fe\"}], \"path\": \"/dev/sda\"}, {\"available\": false, \"rejected_reasons\": [\"locked\"], \"sys_api\": {\"scheduler_mode\": \"noop\", \"rotational\": \"1\", \"vendor\": \"ATA\", \"human_readable_size\": \"50.00 GB\", \"sectors\": 0, \"sas_device_handle\": \"\", \"partitions\": {}, \"rev\": \"2.5+\", \"sas_address\": \"\", \"locked\": 1, \"sectorsize\": \"512\", \"removable\": \"0\", \"path\": \"/dev/sdb\", \"support_discard\": \"\", \"model\": \"QEMU HARDDISK\", \"ro\": \"0\", \"nr_requests\": \"128\", \"size\": 53687091200.0}, \"lvs\": [{\"cluster_name\": \"ceph\", \"name\": \"osd-data-8c92e986-bd97-4b3d-ba77-2cb88e15d80f\", 
\"osd_id\": \"1\", \"cluster_fsid\": \"30d61f3e-7ee4-4bdc-8fe7-2ad5bb3f5317\", \"type\": \"block\", \"block_uuid\": \"mgzO7O-vUfu-H3mf-4R3K-2f97-ZMRH-SngBFP\", \"osd_fsid\": \"6d067688-3e1b-45f9-ad03-8abd19e9f117\"}], \"path\": \"/dev/sdb\"}, {\"available\": false, \"rejected_reasons\": [\"locked\"], \"sys_api\": {\"scheduler_mode\": \"mq-deadline\", \"rotational\": \"1\", \"vendor\": \"0x1af4\", \"human_readable_size\": \"41.00 GB\", \"sectors\": 0, \"sas_device_handle\": \"\", \"partitions\": {\"vda1\": {\"start\": \"2048\", \"holders\": [], \"sectorsize\": 512, \"sectors\": \"2048\", \"size\": \"1024.00 KB\"}, \"vda3\": {\"start\": \"2101248\", \"holders\": [\"dm-0\", \"dm-1\"], \"sectorsize\": 512, \"sectors\": \"81784832\", \"size\": \"39.00 GB\"}, \"vda2\": {\"start\": \"4096\", \"holders\": [], \"sectorsize\": 512, \"sectors\": \"2097152\", \"size\": \"1024.00 MB\"}}, \"rev\": \"\", \"sas_address\": \"\", \"locked\": 1, \"sectorsize\": \"512\", \"removable\": \"0\", \"path\": \"/dev/vda\", \"support_discard\": \"\", \"model\": \"\", \"ro\": \"0\", \"nr_requests\": \"256\", \"size\": 44023414784.0}, \"lvs\": [{\"comment\": \"not used by ceph\", \"name\": \"LogVol00\"}, {\"comment\": \"not used by ceph\", \"name\": \"LogVol01\"}], \"path\": \"/dev/vda\"}]",
- "cmd": [
- "ceph-volume",
- "inventory",
- "--format=json"
- ],
- "rc": 0,
- "start": "2019-03-02 11:50:55.150121",
- "stderr": "",
- "delta": "0:00:01.404816",
- "invocation": {
- "module_args": {
- "wal_vg": null,
- "wal": null,
- "dmcrypt": false,
- "block_db_size": "-1",
- "journal": null,
- "objectstore": "bluestore",
- "db": null,
- "batch_devices": [],
- "db_vg": null,
- "journal_vg": null,
- "cluster": "ceph",
- "osds_per_device": 1,
- "containerized": "False",
- "crush_device_class": null,
- "report": false,
- "data_vg": null,
- "data": null,
- "action": "inventory",
- "journal_size": "5120"
- }
- },
- "stdout_lines": [
-                        "[{\"available\": true, \"rejected_reasons\": [], \"sys_api\": {\"scheduler_mode\": \"noop\", \"rotational\": \"1\", \"vendor\": \"ATA\", \"human_readable_size\": \"50.00 GB\", \"sectors\": 0, \"sas_device_handle\": \"\", \"partitions\": {}, \"rev\": \"2.5+\", \"sas_address\": \"\", \"locked\": 0, \"sectorsize\": \"512\", \"removable\": \"0\", \"path\": \"/dev/sdc\", \"support_discard\": \"\", \"model\": \"QEMU HARDDISK\", \"ro\": \"0\", \"nr_requests\": \"128\", \"size\": 53687091200.0}, \"lvs\": [], \"path\": \"/dev/sdc\"}, {\"available\": false, \"rejected_reasons\": [\"locked\"], \"sys_api\": {\"scheduler_mode\": \"noop\", \"rotational\": \"1\", \"vendor\": \"ATA\", \"human_readable_size\": \"50.00 GB\", \"sectors\": 0, \"sas_device_handle\": \"\", \"partitions\": {}, \"rev\": \"2.5+\", \"sas_address\": \"\", \"locked\": 1, \"sectorsize\": \"512\", \"removable\": \"0\", \"path\": \"/dev/sda\", \"support_discard\": \"\", \"model\": \"QEMU HARDDISK\", \"ro\": \"0\", \"nr_requests\": \"128\", \"size\": 53687091200.0}, \"lvs\": [{\"cluster_name\": \"ceph\", \"name\": \"osd-data-dcf8a88c-5546-42d2-afa4-b36f7fb23b66\", \"osd_id\": \"3\", \"cluster_fsid\": \"30d61f3e-7ee4-4bdc-8fe7-2ad5bb3f5317\", \"type\": \"block\", \"block_uuid\": \"fVqujC-9dgh-cN9W-1XD4-zVx1-1UdA-fUS3ha\", \"osd_fsid\": \"8b7cbeba-5e86-44ff-a5f3-2e7df77753fe\"}], \"path\": \"/dev/sda\"}, {\"available\": false, \"rejected_reasons\": [\"locked\"], \"sys_api\": {\"scheduler_mode\": \"noop\", \"rotational\": \"1\", \"vendor\": \"ATA\", \"human_readable_size\": \"50.00 GB\", \"sectors\": 0, \"sas_device_handle\": \"\", \"partitions\": {}, \"rev\": \"2.5+\", \"sas_address\": \"\", \"locked\": 1, \"sectorsize\": \"512\", \"removable\": \"0\", \"path\": \"/dev/sdb\", \"support_discard\": \"\", \"model\": \"QEMU HARDDISK\", \"ro\": \"0\", \"nr_requests\": \"128\", \"size\": 53687091200.0}, \"lvs\": [{\"cluster_name\": \"ceph\", \"name\": \"osd-data-8c92e986-bd97-4b3d-ba77-2cb88e15d80f\", \"osd_id\": \"1\", \"cluster_fsid\": \"30d61f3e-7ee4-4bdc-8fe7-2ad5bb3f5317\", \"type\": \"block\", \"block_uuid\": \"mgzO7O-vUfu-H3mf-4R3K-2f97-ZMRH-SngBFP\", \"osd_fsid\": \"6d067688-3e1b-45f9-ad03-8abd19e9f117\"}], \"path\": \"/dev/sdb\"}, {\"available\": false, \"rejected_reasons\": [\"locked\"], \"sys_api\": {\"scheduler_mode\": \"mq-deadline\", \"rotational\": \"1\", \"vendor\": \"0x1af4\", \"human_readable_size\": \"41.00 GB\", \"sectors\": 0, \"sas_device_handle\": \"\", \"partitions\": {\"vda1\": {\"start\": \"2048\", \"holders\": [], \"sectorsize\": 512, \"sectors\": \"2048\", \"size\": \"1024.00 KB\"}, \"vda3\": {\"start\": \"2101248\", \"holders\": [\"dm-0\", \"dm-1\"], \"sectorsize\": 512, \"sectors\": \"81784832\", \"size\": \"39.00 GB\"}, \"vda2\": {\"start\": \"4096\", \"holders\": [], \"sectorsize\": 512, \"sectors\": \"2097152\", \"size\": \"1024.00 MB\"}}, \"rev\": \"\", \"sas_address\": \"\", \"locked\": 1, \"sectorsize\": \"512\", \"removable\": \"0\", \"path\": \"/dev/vda\", \"support_discard\": \"\", \"model\": \"\", \"ro\": \"0\", \"nr_requests\": \"256\", \"size\": 44023414784.0}, \"lvs\": [{\"comment\": \"not used by ceph\", \"name\": \"LogVol00\"}, {\"comment\": \"not used by ceph\", \"name\": \"LogVol01\"}], \"path\": \"/dev/vda\"}]"
- ]
- },
- "pid": 482,
- "play_uuid": "2016b900-e38f-0e09-19be-00000000000c",
- "task_uuid": "2016b900-e38f-0e09-19be-000000000012",
- "event_loop": null,
- "playbook_uuid": "e80e66f2-4a78-4a96-aaf6-fbe473f11312",
- "playbook": "storage-inventory.yml",
- "task_action": "ceph_volume",
- "host": "192.168.121.144",
- "task_path": "/usr/share/ansible-runner-service/project/storage-inventory.yml:29"
- },
- "event": "runner_on_ok"
- }
- }
- """
-
- # Mocked response
- mocked_response = mock.Mock()
- mocked_response.text = EVENT_DATA
-
- # The Ansible Runner Service client
- ar_client = mock.Mock()
- ar_client.http_get = mock.MagicMock(return_value=mocked_response)
-
- test_wizard = ProcessInventory(ar_client)
-
- def test_process(self):
- """Test a normal call
- """
- operation_id = 12
- nodes_list = self.test_wizard.process(operation_id, self.INVENTORY_EVENTS)
-
- for key, dummy_data in self.INVENTORY_EVENTS.items():
- http_request = EVENT_DATA_URL % (operation_id, key)
- self.ar_client.http_get.assert_any_call(http_request)
-
-
- # Only one host
- self.assertTrue(len(nodes_list), 1)
-
- # Host retrieved OK
- self.assertEqual(nodes_list[0].name, "192.168.121.144")
-
- # Devices
- self.assertTrue(len(nodes_list[0].devices.devices), 4)
-
- expected_device_ids = ["/dev/sdc", "/dev/sda", "/dev/sdb", "/dev/vda"]
- device_ids = [dev.path for dev in nodes_list[0].devices.devices]
-
- self.assertEqual(expected_device_ids, device_ids)
await mgrmodules.navigateTo();
});
- it('should test editing on ansible module', async () => {
- const ansibleArr = [['rq', 'ca_bundle'], ['colts', 'server_location']];
- await mgrmodules.editMgrModule('ansible', ansibleArr);
- });
-
it('should test editing on deepsea module', async () => {
const deepseaArr = [
['rq', 'salt_api_eauth'],
  // (on my local run of ceph-dev, this is subject to change, I would assume). There is likely a
  // better way of doing this.
await this.navigateTo();
- await this.waitClickableAndClick(this.getFirstTableCellWithText('devicehealth')); // checks ansible
+ await this.waitClickableAndClick(this.getFirstTableCellWithText('devicehealth'));
await element(by.cssContainingText('button', 'Edit')).click();
await this.clearInput(element(by.id('mark_out_threshold')));
await element(by.id('mark_out_threshold')).sendKeys('2419200');
await this.clearInput(element(by.id('warn_threshold')));
await element(by.id('warn_threshold')).sendKeys('7257600');
- // Checks that clearing represents in details tab of ansible
+ // Checks that clearing represents in details tab
await this.waitClickableAndClick(element(by.cssContainingText('button', 'Update')));
await this.navigateTo();
await this.waitVisibility(this.getFirstTableCellWithText('devicehealth'));
'type': 'str',
'default': None,
'desc': 'Orchestrator backend',
- 'enum_allowed': ['cephadm', 'rook', 'ansible', 'deepsea',
+ 'enum_allowed': ['cephadm', 'rook', 'deepsea',
'test_orchestrator'],
'runtime': True,
},
[testenv]
setenv = UNITTEST = true
deps = -r requirements.txt
-commands = pytest -v --cov --cov-append --cov-report=term --doctest-modules {posargs:mgr_util.py tests/ cephadm/ ansible/ progress/}
+commands = pytest -v --cov --cov-append --cov-report=term --doctest-modules {posargs:mgr_util.py tests/ cephadm/ progress/}
[testenv:mypy]
basepython = python3
-r requirements.txt
mypy
commands = mypy --config-file=../../mypy.ini \
- ansible/module.py \
cephadm/module.py \
mgr_module.py \
mgr_util.py \
$prog_name --tox-envs py27,py3
-following command will run tox with envlist of "py27" using "src/pybind/mgr/ansible/tox.ini"
-
- $prog_name --tox-envs py27 ansible
-
following command will run tox with envlist of "py27" using "/ceph/src/python-common/tox.ini"
$prog_name --tox-envs py27 --tox-path /ceph/src/python-common