+OpenStack backend
+=================
+
+The ``teuthology-openstack`` command is a wrapper around
+``teuthology-suite`` that transparently creates the teuthology cluster
+using OpenStack virtual machines.
+
+Prerequisites
+-------------
+
+An OpenStack tenant with access to the nova and cinder APIs (for
+instance http://entercloudsuite.com/). If the cinder API is not
+available (for instance https://www.ovh.com/fr/cloud/), some jobs
+won't run because they expect volumes attached to each instance.
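+
+A quick way to check that both APIs are reachable, once the credentials
+have been sourced (see below), is to list the servers and volumes
+(assuming the nova and cinder command line clients are installed)::
+
+  $ nova list
+  $ cinder list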
+
+Setup OpenStack at Enter Cloud Suite
+------------------------------------
+
+* create an account and `log in to the dashboard <https://dashboard.entercloudsuite.com/>`_
+* `create an Ubuntu 14.04 instance
+ <https://dashboard.entercloudsuite.com/console/index#/launch-instance>`_
+ with 1GB RAM and a public IP and destroy it immediately afterwards.
+* get $HOME/openrc.sh from `the horizon dashboard <https://horizon.entercloudsuite.com/project/access_and_security/?tab=access_security_tabs__api_access_tab>`_
+
+The creation/destruction of an instance via the dashboard is the
+shortest path to create the network, subnet and router that would
+otherwise need to be created via the neutron API.
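+
+For reference, a minimal sketch of what the dashboard creates, using the
+neutron CLI (the names and CIDR below are only examples)::
+
+  $ neutron net-create teuthology-net
+  $ neutron subnet-create teuthology-net 10.0.0.0/24
+  $ neutron router-create teuthology-router
+  $ neutron router-interface-add teuthology-router <subnet-id>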
+
+Setup OpenStack at OVH
+----------------------
+
+It is cheaper than EnterCloudSuite but does not provide volumes (as
+of August 2015) and is therefore unfit to run teuthology tests that
+require disks attached to the instance. Each instance has a public IP
+by default.
+
+* `create an account <https://www.ovh.com/fr/support/new_nic.xml>`_
+* get $HOME/openrc.sh from `the horizon dashboard <https://horizon.cloud.ovh.net/project/access_and_security/?tab=access_security_tabs__api_access_tab>`_
+
+Setup
+-----
+
+* Get and configure teuthology::
+
+ $ git clone -b wip-6502-openstack-v3 http://github.com/dachary/teuthology
+ $ cd teuthology ; ./bootstrap install
+ $ source virtualenv/bin/activate
+
+Get OpenStack credentials and test them
+---------------------------------------
+
+* follow the `OpenStack API Quick Start <http://docs.openstack.org/api/quick-start/content/index.html#cli-intro>`_
+* source $HOME/openrc.sh
+* verify the OpenStack client works::
+
+ $ nova list
+ +----+------------+--------+------------+-------------+-------------------------+
+ | ID | Name | Status | Task State | Power State | Networks |
+ +----+------------+--------+------------+-------------+-------------------------+
+ +----+------------+--------+------------+-------------+-------------------------+
+* upload your ssh public key with::
+
+ $ openstack keypair create --public-key ~/.ssh/id_rsa.pub myself
+ +-------------+-------------------------------------------------+
+ | Field | Value |
+ +-------------+-------------------------------------------------+
+ | fingerprint | e0:a3:ab:5f:01:54:5c:1d:19:40:d9:62:b4:b3:a1:0b |
+ | name | myself |
+ | user_id | 5cf9fa21b2e9406b9c4108c42aec6262 |
+ +-------------+-------------------------------------------------+
+
+Usage
+-----
+
+* Run the dummy suite as a test (``myself`` is a keypair created as
+ explained in the previous section)::
+
+ $ teuthology-openstack --key-name myself --suite dummy
+ Job scheduled with name ubuntu-2015-07-24_09:03:29-dummy-master---basic-openstack and ID 1
+ 2015-07-24 09:03:30,520.520 INFO:teuthology.suite:ceph sha1: dedda6245ce8db8828fdf2d1a2bfe6163f1216a1
+ 2015-07-24 09:03:31,620.620 INFO:teuthology.suite:ceph version: v9.0.2-829.gdedda62
+ 2015-07-24 09:03:31,620.620 INFO:teuthology.suite:teuthology branch: master
+ 2015-07-24 09:03:32,196.196 INFO:teuthology.suite:ceph-qa-suite branch: master
+ 2015-07-24 09:03:32,197.197 INFO:teuthology.repo_utils:Fetching from upstream into /home/ubuntu/src/ceph-qa-suite_master
+ 2015-07-24 09:03:33,096.096 INFO:teuthology.repo_utils:Resetting repo at /home/ubuntu/src/ceph-qa-suite_master to branch master
+ 2015-07-24 09:03:33,157.157 INFO:teuthology.suite:Suite dummy in /home/ubuntu/src/ceph-qa-suite_master/suites/dummy generated 1 jobs (not yet filtered)
+ 2015-07-24 09:03:33,158.158 INFO:teuthology.suite:Scheduling dummy/{all/nop.yaml}
+ 2015-07-24 09:03:34,045.045 INFO:teuthology.suite:Suite dummy in /home/ubuntu/src/ceph-qa-suite_master/suites/dummy scheduled 1 jobs.
+ 2015-07-24 09:03:34,046.046 INFO:teuthology.suite:Suite dummy in /home/ubuntu/src/ceph-qa-suite_master/suites/dummy -- 0 jobs were filtered out.
+
+ 2015-07-24 11:03:34,104.104 INFO:teuthology.openstack:
+ web interface: http://167.114.242.13:8081/
+ ssh access : ssh ubuntu@167.114.242.13 # logs in /usr/share/nginx/html
+
+* Visit the web interface (the URL is displayed at the end of the
+ teuthology-openstack output) to monitor the progress of the suite.
+
+* The virtual machine running the suite will persist for forensic
+ analysis purposes. To destroy it run::
+
+ $ teuthology-openstack --key-name myself --teardown
+
+Running the OpenStack backend integration tests
+-----------------------------------------------
+
+The easiest way to run the integration tests is to first run a dummy suite::
+
+ $ teuthology-openstack --key-name myself --suite dummy
+
+This will create a virtual machine suitable for running the
+integration tests. Once logged in to the virtual machine::
+
+ $ pkill -f teuthology-worker
+ $ cd teuthology ; pip install "tox>=1.9"
+ $ tox -v -e openstack-integration
+ integration/openstack-integration.py::TestSuite::test_suite_noop PASSED
+ ...
+ ========= 9 passed in 2545.51 seconds ========
+ $ tox -v -e openstack
+ integration/test_openstack.py::TestTeuthologyOpenStack::test_create PASSED
+ ...
+ ========= 1 passed in 204.35 seconds =========
VIRTUAL MACHINE SUPPORT
=======================
# C) Adding "Precise" conditionals somewhere, eg. conditionalizing
# this bootstrap script to only use the python-libvirt package on
# Ubuntu Precise.
- for package in python-dev libssl-dev python-pip python-virtualenv libevent-dev python-libvirt libmysqlclient-dev libffi-dev; do
+ for package in python-dev libssl-dev python-pip python-virtualenv libevent-dev python-libvirt libmysqlclient-dev libffi-dev libyaml-dev libpython-dev ; do
if [ "$(dpkg --status -- $package|sed -n 's/^Status: //p')" != "install ok installed" ]; then
# add a space after old values
missing="${missing:+$missing }$package"
--- /dev/null
+import argparse
+import sys
+
+import teuthology.openstack
+
+
+def main(argv=sys.argv[1:]):
+ teuthology.openstack.main(parse_args(argv), argv)
+
+
+def parse_args(argv):
+ parser = argparse.ArgumentParser(
+ formatter_class=argparse.RawDescriptionHelpFormatter,
+ description="""
+Run a suite of ceph integration tests. A suite is a directory containing
+facets. A facet is a directory containing config snippets. Running a suite
+means running teuthology for every configuration combination generated by
+taking one config snippet from each facet. Any config files passed on the
+command line will be used for every combination, and will override anything in
+the suite. By specifying a subdirectory in the suite argument, it is possible
+to limit the run to a specific facet. For instance -s upgrade/dumpling-x only
+runs the dumpling-x facet of the upgrade suite.
+
+Display the http and ssh access to follow the progress of the suite
+and analyze results.
+
+ firefox http://183.84.234.3:8081/
+ ssh -i teuthology-admin.pem ubuntu@183.84.234.3
+
+""")
+ parser.add_argument(
+ '-v', '--verbose',
+ action='store_true', default=None,
+ help='be more verbose',
+ )
+ parser.add_argument(
+ '--name',
+ help='OpenStack primary instance name',
+ default='teuthology',
+ )
+ parser.add_argument(
+ '--key-name',
+ help='OpenStack keypair name',
+ required=True,
+ )
+ parser.add_argument(
+ '--key-filename',
+ help='path to the ssh private key',
+ )
+ parser.add_argument(
+ '--simultaneous-jobs',
+ help='maximum number of jobs running in parallel',
+ type=int,
+ default=2,
+ )
+ parser.add_argument(
+ '--teardown',
+ action='store_true', default=None,
+ help='destroy the cluster, if it exists',
+ )
+ # copy/pasted from scripts/suite.py
+ parser.add_argument(
+ 'config_yaml',
+ nargs='*',
+ help='Optional extra job yaml to include',
+ )
+ parser.add_argument(
+ '--dry-run',
+ action='store_true', default=None,
+ help='Do a dry run; do not schedule anything',
+ )
+ parser.add_argument(
+ '-s', '--suite',
+ help='The suite to schedule',
+ )
+ parser.add_argument(
+ '-c', '--ceph',
+ help='The ceph branch to run against',
+ default='master',
+ )
+ parser.add_argument(
+ '-k', '--kernel',
+ help=('The kernel branch to run against; if not '
+ 'supplied, the installed kernel is unchanged'),
+ )
+ parser.add_argument(
+ '-f', '--flavor',
+ help=("The kernel flavor to run against: ('basic',"
+ "'gcov', 'notcmalloc')"),
+ default='basic',
+ )
+ parser.add_argument(
+ '-d', '--distro',
+ help='Distribution to run against',
+ )
+ parser.add_argument(
+ '--suite-branch',
+ help='Use this suite branch instead of the ceph branch',
+ )
+ parser.add_argument(
+ '-e', '--email',
+ help='When tests finish or time out, send an email here',
+ )
+ parser.add_argument(
+ '-N', '--num',
+ help='Number of times to run/queue the job',
+ type=int,
+ default=1,
+ )
+ parser.add_argument(
+ '-l', '--limit',
+ metavar='JOBS',
+ help='Queue at most this many jobs',
+ type=int,
+ )
+ parser.add_argument(
+ '--subset',
+ help=('Instead of scheduling the entire suite, break the '
+ 'set of jobs into <outof> pieces (each of which will '
+ 'contain each facet at least once) and schedule '
+ 'piece <index>. Scheduling 0/<outof>, 1/<outof>, '
+ '2/<outof> ... <outof>-1/<outof> will schedule all '
+ 'jobs in the suite (many more than once).')
+ )
+ parser.add_argument(
+ '-p', '--priority',
+ help='Job priority (lower is sooner)',
+ type=int,
+ default=1000,
+ )
+ parser.add_argument(
+ '--timeout',
+ help=('How long, in seconds, to wait for jobs to finish '
+ 'before sending email. This does not kill jobs.'),
+ type=int,
+ default=43200,
+ )
+ parser.add_argument(
+ '--filter',
+ help=('Only run jobs whose description contains at least one '
+ 'of the keywords in the comma separated keyword '
+ 'string specified. ')
+ )
+ parser.add_argument(
+ '--filter-out',
+ help=('Do not run jobs whose description contains any of '
+ 'the keywords in the comma separated keyword '
+ 'string specified. ')
+ )
+
+ return parser.parse_args(argv)
--timeout <timeout> How long, in seconds, to wait for jobs to finish
before sending email. This does not kill jobs.
[default: {default_results_timeout}]
- --filter KEYWORDS Only run jobs whose name contains at least one
+ --filter KEYWORDS Only run jobs whose description contains at least one
of the keywords in the comma separated keyword
string specified.
- --filter-out KEYWORDS Do not run jobs whose name contains any of
+ --filter-out KEYWORDS Do not run jobs whose description contains any of
the keywords in the comma separated keyword
string specified.
""".format(default_machine_type=config.default_machine_type,
'boto >= 2.0b4',
'bunch >= 1.0.0',
'configobj',
- 'six',
+    'six >= 1.9', # python-openstackclient won't work properly with older versions
'httplib2',
'paramiko < 1.8',
'pexpect',
'pyopenssl>=0.13',
'ndg-httpsclient',
'pyasn1',
+ 'python-openstackclient',
],
entry_points={
'console_scripts': [
'teuthology = scripts.run:main',
+ 'teuthology-openstack = scripts.openstack:main',
'teuthology-nuke = scripts.nuke:main',
'teuthology-suite = scripts.suite:main',
'teuthology-ls = scripts.ls:main',
return ret
+def lock_many_openstack(ctx, num, machine_type, user=None, description=None,
+ arch=None):
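+    """
+    Lock num machines by provisioning them on OpenStack with
+    ProvisionOpenStack, using the os type and version found in ctx
+    and the optional openstack resources hint from ctx.config.
+    Return a dict mapping each machine name to None (no ssh host key).
+    """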
+ os_type = provision.get_distro(ctx)
+ os_version = provision.get_distro_version(ctx)
+ if hasattr(ctx, 'config'):
+ resources_hint = ctx.config.get('openstack')
+ else:
+ resources_hint = None
+ machines = provision.ProvisionOpenStack().create(
+ num, os_type, os_version, arch, resources_hint)
+ result = {}
+ for machine in machines:
+ lock_one(machine, user, description)
+ result[machine] = None # we do not collect ssh host keys yet
+ return result
+
def lock_many(ctx, num, machine_type, user=None, description=None,
os_type=None, os_version=None, arch=None):
if user is None:
machine_types_list = misc.get_multi_machine_types(machine_type)
if machine_types_list == ['vps']:
machine_types = machine_types_list
+ elif machine_types_list == ['openstack']:
+ return lock_many_openstack(ctx, num, machine_type,
+ user=user,
+ description=description,
+ arch=arch)
elif 'vps' in machine_types_list:
machine_types_non_vps = list(machine_types_list)
machine_types_non_vps.remove('vps')
def unlock_one(ctx, name, user, description=None):
name = misc.canonicalize_hostname(name, user=None)
if not provision.destroy_if_vm(ctx, name, user, description):
- log.error('downburst destroy failed for %s', name)
+ log.error('destroy failed for %s', name)
request = dict(name=name, locked=False, locked_by=user,
description=description)
uri = os.path.join(config.lock_server, 'nodes', name, 'lock', '')
from .lock import list_locks
from .lock import unlock_one
from .lock import find_stale_locks
+from .lockstatus import get_status
from .misc import config_file
from .misc import merge_configs
from .misc import get_testdir
(target,) = ctx.config['targets'].keys()
host = target.split('@')[-1]
shortname = host.split('.')[0]
- if should_unlock and 'vpm' in shortname:
- return
+ if should_unlock:
+ if 'vpm' in shortname:
+ return
+ status_info = get_status(host)
+ if status_info['is_vm'] and status_info['machine_type'] == 'openstack':
+ return
log.debug('shortname: %s' % shortname)
log.debug('{ctx}'.format(ctx=ctx))
if (not ctx.noipmi and 'ipmi_user' in ctx.teuthology_config and
--- /dev/null
+#
+# Copyright (c) 2015 Red Hat, Inc.
+#
+# Author: Loic Dachary <loic@dachary.org>
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+# THE SOFTWARE.
+#
+import json
+import logging
+import os
+import paramiko
+import re
+import socket
+import subprocess
+import tempfile
+import teuthology
+
+from teuthology.contextutil import safe_while
+from teuthology.config import config as teuth_config
+from teuthology import misc
+
+log = logging.getLogger(__name__)
+
+class OpenStack(object):
+
+ # wget -O debian-8.0.qcow2 http://cdimage.debian.org/cdimage/openstack/current/debian-8.1.0-openstack-amd64.qcow2
+ # wget -O ubuntu-12.04.qcow2 https://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
+ # wget -O ubuntu-12.04-i386.qcow2 https://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-i386-disk1.img
+ # wget -O ubuntu-14.04.qcow2 https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
+ # wget -O ubuntu-14.04-i386.qcow2 https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-i386-disk1.img
+    # wget -O ubuntu-15.04.qcow2 https://cloud-images.ubuntu.com/vivid/current/vivid-server-cloudimg-amd64-disk1.img
+    # wget -O ubuntu-15.04-i386.qcow2 https://cloud-images.ubuntu.com/vivid/current/vivid-server-cloudimg-i386-disk1.img
+    # wget -O opensuse-13.2.qcow2 http://download.opensuse.org/repositories/Cloud:/Images:/openSUSE_13.2/images/openSUSE-13.2-OpenStack-Guest.x86_64.qcow2
+ # wget -O centos-7.0.qcow2 http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
+ # wget -O centos-6.6.qcow2 http://cloud.centos.org/centos/6/images/CentOS-6-x86_64-GenericCloud.qcow2
+ # wget -O fedora-22.qcow2 https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2
+ # wget -O fedora-21.qcow2 http://fedora.mirrors.ovh.net/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.qcow2
+ # wget -O fedora-20.qcow2 http://fedora.mirrors.ovh.net/linux/releases/20/Images/x86_64/Fedora-x86_64-20-20131211.1-sda.qcow2
+ image2url = {
+ 'centos-6.5': 'http://cloud.centos.org/centos/6/images/CentOS-6-x86_64-GenericCloud.qcow2',
+ 'centos-7.0': 'http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2',
+ 'ubuntu-14.04': 'https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img',
+ }
+
+ def __init__(self):
+ self.key_filename = None
+ self.username = 'ubuntu'
+ self.up_string = "UNKNOWN"
+ self.teuthology_suite = 'teuthology-suite'
+
+ @staticmethod
+ def get_value(result, field):
+ """
+ Get the value of a field from a result returned by the openstack command
+ in json format.
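+
+        For instance get_value([{'Field': 'id', 'Value': 'abc'}], 'id')
+        returns 'abc'.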
+ """
+ return filter(lambda v: v['Field'] == field, result)[0]['Value']
+
+ def image_exists(self, image):
+ """
+ Return true if the image exists in OpenStack.
+ """
+ found = misc.sh("openstack image list -f json --property name='" +
+ self.image_name(image) + "'")
+ return len(json.loads(found)) > 0
+
+ def net_id(self, network):
+ """
+ Return the uuid of the network in OpenStack.
+ """
+ r = json.loads(misc.sh("openstack network show -f json " +
+ network))
+ return self.get_value(r, 'id')
+
+ def type_version(self, os_type, os_version):
+ """
+ Return the string used to differentiate os_type and os_version in names.
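+        For instance type_version('ubuntu', '14.04') == 'ubuntu-14.04'.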
+ """
+ return os_type + '-' + os_version
+
+ def image_name(self, name):
+ """
+ Return the image name used by teuthology in OpenStack to avoid
+ conflicts with existing names.
+ """
+ return "teuthology-" + name
+
+ def image_create(self, name):
+ """
+ Upload an image into OpenStack with glance. The image has to be qcow2.
+ """
+ misc.sh("wget -c -O " + name + ".qcow2 " + self.image2url[name])
+ misc.sh("glance image-create --property ownedby=teuthology " +
+ " --disk-format=qcow2 --container-format=bare " +
+ " --file " + name + ".qcow2 --name " + self.image_name(name))
+
+ def image(self, os_type, os_version):
+ """
+ Return the image name for the given os_type and os_version. If the image
+ does not exist it will be created.
+ """
+ name = self.type_version(os_type, os_version)
+ if not self.image_exists(name):
+ self.image_create(name)
+ return self.image_name(name)
+
+ def flavor(self, hint, select):
+ """
+ Return the smallest flavor that satisfies the desired size.
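+
+        The hint is a dict such as {'ram': 1024, 'disk': 10, 'cpus': 1}
+        (MB, GB and count respectively); select is an optional regexp
+        restricting the flavor names considered, e.g. '^(vps|eg)-' on OVH.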
+ """
+ flavors_string = misc.sh("openstack flavor list -f json")
+ flavors = json.loads(flavors_string)
+ found = []
+ for flavor in flavors:
+ if select and not re.match(select, flavor['Name']):
+ continue
+ if (flavor['RAM'] >= hint['ram'] and
+ flavor['VCPUs'] >= hint['cpus'] and
+ flavor['Disk'] >= hint['disk']):
+ found.append(flavor)
+ if not found:
+ raise Exception("openstack flavor list: " + flavors_string +
+ " does not contain a flavor in which" +
+ " the desired " + str(hint) + " can fit")
+
+ def sort_flavor(a, b):
+ return (a['VCPUs'] - b['VCPUs'] or
+ a['RAM'] - b['RAM'] or
+ a['Disk'] - b['Disk'])
+ sorted_flavor = sorted(found, cmp=sort_flavor)
+ log.info("sorted flavor = " + str(sorted_flavor))
+ return sorted_flavor[0]['Name']
+
+ def cloud_init_wait(self, name_or_ip):
+ """
+ Wait for cloud-init to complete on the name_or_ip OpenStack instance.
+ """
+ log.debug('cloud_init_wait ' + name_or_ip)
+ client_args = {
+ 'timeout': 10,
+ 'username': self.username,
+ }
+ if self.key_filename:
+ log.debug("using key " + self.key_filename)
+ client_args['key_filename'] = self.key_filename
+ with safe_while(sleep=2, tries=600,
+ action="cloud_init_wait " + name_or_ip) as proceed:
+ success = False
+            # CentOS 6.6 logs in /var/log/cloud-init-output.log
+            # CentOS 7.0 logs in /var/log/cloud-init.log
+ all_done = ("tail /var/log/cloud-init*.log ; " +
+ " test -f /tmp/init.out && tail /tmp/init.out ; " +
+ " grep '" + self.up_string + "' " +
+ "/var/log/cloud-init*.log")
+ while proceed():
+ client = paramiko.SSHClient()
+ try:
+ client.set_missing_host_key_policy(
+ paramiko.AutoAddPolicy())
+ client.connect(name_or_ip, **client_args)
+ except paramiko.PasswordRequiredException as e:
+ client.close()
+ raise Exception(
+ "The private key requires a passphrase.\n"
+ "Create a new key with:"
+ " openstack keypair create myself > myself.pem\n"
+ " chmod 600 myself.pem\n"
+ "and call teuthology-openstack with the options\n"
+ " --key-name myself --key-filename myself.pem\n")
+ except paramiko.AuthenticationException as e:
+ client.close()
+ log.debug('cloud_init_wait AuthenticationException ' + str(e))
+ continue
+ except socket.timeout as e:
+ client.close()
+ log.debug('cloud_init_wait connect socket.timeout ' + str(e))
+ continue
+ except socket.error as e:
+ client.close()
+ log.debug('cloud_init_wait connect socket.error ' + str(e))
+ continue
+ except Exception as e:
+ if 'Unknown server' not in str(e):
+ log.exception('cloud_init_wait ' + name_or_ip)
+ client.close()
+ if 'Unknown server' in str(e):
+ continue
+ else:
+ raise e
+ log.debug('cloud_init_wait ' + all_done)
+ try:
+ stdin, stdout, stderr = client.exec_command(all_done)
+ stdout.channel.settimeout(5)
+ out = stdout.read()
+ log.debug('cloud_init_wait stdout ' + all_done + ' ' + out)
+ except socket.timeout as e:
+ client.close()
+ log.debug('cloud_init_wait socket.timeout ' + all_done)
+ continue
+ except socket.error as e:
+ client.close()
+ log.debug('cloud_init_wait socket.error ' + str(e) + ' ' + all_done)
+ continue
+ log.debug('cloud_init_wait stderr ' + all_done +
+ ' ' + stderr.read())
+ if stdout.channel.recv_exit_status() == 0:
+ success = True
+ client.close()
+ if success:
+ break
+ return success
+
+ def exists(self, name_or_id):
+ """
+ Return true if the OpenStack name_or_id instance exists,
+ false otherwise.
+ """
+ servers = json.loads(misc.sh("openstack server list -f json"))
+ for server in servers:
+ if (server['ID'] == name_or_id or server['Name'] == name_or_id):
+ return True
+ return False
+
+ @staticmethod
+ def get_addresses(instance_id):
+ """
+ Return the list of IPs associated with instance_id in OpenStack.
+ """
+ with safe_while(sleep=2, tries=30,
+ action="get ip " + instance_id) as proceed:
+ while proceed():
+ instance = misc.sh("openstack server show -f json " +
+ instance_id)
+ addresses = OpenStack.get_value(json.loads(instance),
+ 'addresses')
+ found = re.match('.*\d+', addresses)
+ if found:
+ return addresses
+
+ def get_ip(self, instance_id, network):
+ """
+        Return the private IP of the OpenStack instance_id. The network,
+        if not the empty string, disambiguates between multiple networks
+        attached to the instance.
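+
+        The addresses string is assumed to be of the form
+        'net1=10.0.0.4; net2=...', which is what the regexp below expects.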
+ """
+ return re.findall(network + '=([\d.]+)',
+ self.get_addresses(instance_id))[0]
+
+class TeuthologyOpenStack(OpenStack):
+
+ def __init__(self, args, config, argv):
+ super(TeuthologyOpenStack, self).__init__()
+ self.argv = argv
+ self.args = args
+ self.config = config
+ self.up_string = 'teuthology is up and running'
+ self.user_data = 'teuthology/openstack/openstack-user-data.txt'
+
+ def main(self):
+ """
+ Entry point implementing the teuthology-openstack command.
+ """
+ self.setup_logs()
+ misc.read_config(self.args)
+ self.key_filename = self.args.key_filename
+ self.verify_openstack()
+ ip = self.setup()
+ if self.args.suite:
+ self.run_suite()
+ log.info("""
+web interface: http://{ip}:8081/
+ssh access : ssh {username}@{ip} # logs in /usr/share/nginx/html
+ """.format(ip=ip,
+ username=self.username))
+ if self.args.teardown:
+ self.teardown()
+
+ def run_suite(self):
+ """
+ Delegate running teuthology-suite to the OpenStack instance
+ running the teuthology cluster.
+ """
+ original_argv = self.argv[:]
+ argv = []
+ while len(original_argv) > 0:
+ if original_argv[0] in ('--name',
+ '--key-name',
+ '--key-filename',
+ '--simultaneous-jobs'):
+ del original_argv[0:2]
+            elif original_argv[0] in ('--teardown',):
+ del original_argv[0]
+ else:
+ argv.append(original_argv.pop(0))
+ argv.append('/home/' + self.username +
+ '/teuthology/teuthology/openstack/test/openstack.yaml')
+ command = (
+ "source ~/.bashrc_teuthology ; " + self.teuthology_suite + " " +
+ " --machine-type openstack " +
+ " ".join(map(lambda x: "'" + x + "'", argv))
+ )
+ print self.ssh(command)
+
+ def setup(self):
+ """
+        Create the teuthology cluster if it does not already exist
+ and return its IP address.
+ """
+ if not self.cluster_exists():
+ self.create_security_group()
+ self.create_cluster()
+ instance_id = self.get_instance_id(self.args.name)
+ return self.get_floating_ip_or_ip(instance_id)
+
+ def setup_logs(self):
+ """
+        Set up the log level according to --verbose
+ """
+ loglevel = logging.INFO
+ if self.args.verbose:
+ loglevel = logging.DEBUG
+ logging.getLogger("paramiko.transport").setLevel(logging.DEBUG)
+ teuthology.log.setLevel(loglevel)
+
+ def ssh(self, command):
+ """
+ Run a command in the OpenStack instance of the teuthology cluster.
+ Return the stdout / stderr of the command.
+ """
+ client_args = {
+ 'username': self.username,
+ }
+ if self.key_filename:
+ log.debug("ssh using key " + self.key_filename)
+ client_args['key_filename'] = self.key_filename
+ instance_id = self.get_instance_id(self.args.name)
+ ip = self.get_floating_ip_or_ip(instance_id)
+ log.debug("ssh " + self.username + "@" + str(ip) + " " + command)
+ client = paramiko.SSHClient()
+ client.set_missing_host_key_policy(
+ paramiko.AutoAddPolicy())
+ client.connect(ip, **client_args)
+ stdin, stdout, stderr = client.exec_command(command)
+ stdout.channel.settimeout(300)
+ out = ''
+ try:
+ out = stdout.read()
+            log.debug('ssh stdout ' + command + ' ' + out)
+        except Exception:
+            log.exception('ssh ' + command + ' failed')
+        err = stderr.read()
+        log.debug('ssh stderr ' + command + ' ' + err)
+ return out + ' ' + err
+
+ def verify_openstack(self):
+ """
+ Check there is a working connection to an OpenStack cluster
+ and set the provider data member if it is among those we
+ know already.
+ """
+ try:
+ misc.sh("openstack server list")
+ except subprocess.CalledProcessError:
+ log.exception("openstack server list")
+ raise Exception("verify openrc.sh has been sourced")
+ if 'OS_AUTH_URL' not in os.environ:
+ raise Exception('no OS_AUTH_URL environment variable')
+ providers = (('cloud.ovh.net', 'ovh'),
+ ('entercloudsuite.com', 'entercloudsuite'))
+ self.provider = None
+ for (pattern, provider) in providers:
+ if pattern in os.environ['OS_AUTH_URL']:
+ self.provider = provider
+ break
+
+ def flavor(self):
+ """
+ Return an OpenStack flavor fit to run the teuthology cluster.
+ The RAM size depends on the maximum number of workers that
+ will run simultaneously.
+ """
+ hint = {
+ 'disk': 10, # GB
+ 'ram': 1024, # MB
+ 'cpus': 1,
+ }
+ if self.args.simultaneous_jobs > 25:
+ hint['ram'] = 30000 # MB
+ elif self.args.simultaneous_jobs > 10:
+ hint['ram'] = 7000 # MB
+ elif self.args.simultaneous_jobs > 3:
+ hint['ram'] = 4000 # MB
+
+ select = None
+ if self.provider == 'ovh':
+ select = '^(vps|eg)-'
+ return super(TeuthologyOpenStack, self).flavor(hint, select)
+
+ def net(self):
+ """
+ Return the network to be used when creating an OpenStack instance.
+        By default it should not be set, but some providers such as
+        entercloudsuite require it.
+ """
+ if self.provider == 'entercloudsuite':
+ return "--nic net-id=default"
+ else:
+ return ""
+
+ def get_user_data(self):
+ """
+ Create a user-data.txt file to be used to spawn the teuthology
+ cluster, based on a template where the OpenStack credentials
+ and a few other values are substituted.
+ """
+ path = tempfile.mktemp()
+ template = open(self.user_data).read()
+ openrc = ''
+ for (var, value) in os.environ.iteritems():
+ if var.startswith('OS_'):
+ openrc += ' ' + var + '=' + value
+ log.debug("OPENRC = " + openrc + " " +
+ "TEUTHOLOGY_USERNAME = " + self.username + " " +
+ "NWORKERS = " + str(self.args.simultaneous_jobs))
+ content = (template.
+ replace('OPENRC', openrc).
+ replace('TEUTHOLOGY_USERNAME', self.username).
+ replace('NWORKERS', str(self.args.simultaneous_jobs)))
+ open(path, 'w').write(content)
+ log.debug("get_user_data: " + content + " written to " + path)
+ return path
+
+ def create_security_group(self):
+ """
+ Create a security group that will be used by all teuthology
+ created instances. This should not be necessary in most cases
+ but some OpenStack providers enforce firewall restrictions even
+ among instances created within the same tenant.
+ """
+ try:
+ misc.sh("openstack security group show teuthology")
+ return
+ except subprocess.CalledProcessError:
+ pass
+ # TODO(loic): this leaves the teuthology vm very exposed
+ # it would be better to be very liberal for 192.168.0.0/16
+ # and 172.16.0.0/12 and 10.0.0.0/8 and only allow 80/8081/22
+ # for the rest.
+ misc.sh("""
+openstack security group create teuthology
+openstack security group rule create --dst-port 1:10000 teuthology
+openstack security group rule create --proto udp --dst-port 53 teuthology # dns
+ """)
+
+ @staticmethod
+ def get_unassociated_floating_ip():
+ """
+ Return a floating IP address not associated with an instance or None.
+ """
+ ips = json.loads(misc.sh("openstack ip floating list -f json"))
+ for ip in ips:
+ if not ip['Instance ID']:
+ return ip['IP']
+ return None
+
+ @staticmethod
+ def create_floating_ip():
+ pools = json.loads(misc.sh("openstack ip floating pool list -f json"))
+ if not pools:
+ return None
+ pool = pools[0]['Name']
+ try:
+ ip = json.loads(misc.sh(
+ "openstack ip floating create -f json '" + pool + "'"))
+ return TeuthologyOpenStack.get_value(ip, 'ip')
+ except subprocess.CalledProcessError:
+            log.debug("create_floating_ip: not creating a floating ip")
+ return None
+
+ @staticmethod
+ def associate_floating_ip(name_or_id):
+ """
+        Associate a floating IP with the OpenStack instance,
+        or do nothing if no floating IP can be obtained.
+ """
+ ip = TeuthologyOpenStack.get_unassociated_floating_ip()
+ if not ip:
+ ip = TeuthologyOpenStack.create_floating_ip()
+ if ip:
+ misc.sh("openstack ip floating add " + ip + " " + name_or_id)
+
+ @staticmethod
+ def get_floating_ip(instance_id):
+ """
+ Return the floating IP of the OpenStack instance_id.
+ """
+ ips = json.loads(misc.sh("openstack ip floating list -f json"))
+ for ip in ips:
+ if ip['Instance ID'] == instance_id:
+ return ip['IP']
+ return None
+
+ @staticmethod
+ def get_floating_ip_id(ip):
+ """
+ Return the id of a floating IP
+ """
+ results = json.loads(misc.sh("openstack ip floating list -f json"))
+ for result in results:
+ if result['IP'] == ip:
+ return str(result['ID'])
+ return None
+
+ @staticmethod
+ def get_floating_ip_or_ip(instance_id):
+ """
+ Return the floating ip, if any, otherwise return the last
+ IP displayed with openstack server list.
+ """
+ ip = TeuthologyOpenStack.get_floating_ip(instance_id)
+ if not ip:
+ ip = re.findall('([\d.]+)$',
+ TeuthologyOpenStack.get_addresses(instance_id))[0]
+ return ip
+
+ @staticmethod
+ def get_instance_id(name):
+ instance = json.loads(misc.sh("openstack server show -f json " + name))
+ return TeuthologyOpenStack.get_value(instance, 'id')
+
+ @staticmethod
+ def delete_floating_ip(instance_id):
+ """
+ Remove the floating ip from instance_id and delete it.
+ """
+ ip = TeuthologyOpenStack.get_floating_ip(instance_id)
+ if not ip:
+ return
+ misc.sh("openstack ip floating remove " + ip + " " + instance_id)
+ ip_id = TeuthologyOpenStack.get_floating_ip_id(ip)
+ misc.sh("openstack ip floating delete " + ip_id)
+
+ def create_cluster(self):
+ """
+ Create an OpenStack instance that runs the teuthology cluster
+ and wait for it to come up.
+ """
+ user_data = self.get_user_data()
+ instance = misc.sh(
+ "openstack server create " +
+ " --image '" + self.image('ubuntu', '14.04') + "' " +
+ " --flavor '" + self.flavor() + "' " +
+ " " + self.net() +
+ " --key-name " + self.args.key_name +
+ " --user-data " + user_data +
+ " --security-group teuthology" +
+ " --wait " + self.args.name +
+ " -f json")
+ instance_id = self.get_value(json.loads(instance), 'id')
+ os.unlink(user_data)
+ self.associate_floating_ip(instance_id)
+ ip = self.get_floating_ip_or_ip(instance_id)
+ return self.cloud_init_wait(ip)
+
+ def cluster_exists(self):
+ """
+ Return true if there exists an instance running the teuthology cluster.
+ """
+ if not self.exists(self.args.name):
+ return False
+ instance_id = self.get_instance_id(self.args.name)
+ ip = self.get_floating_ip_or_ip(instance_id)
+ return self.cloud_init_wait(ip)
+
+ def teardown(self):
+ """
+ Delete all instances run by the teuthology cluster and delete the
+ instance running the teuthology cluster.
+ """
+ self.ssh("sudo /etc/init.d/teuthology stop || true")
+ instance_id = self.get_instance_id(self.args.name)
+ self.delete_floating_ip(instance_id)
+ misc.sh("openstack server delete --wait " + self.args.name)
+
+def main(ctx, argv):
+ return TeuthologyOpenStack(ctx, teuth_config, argv).main()
--- /dev/null
+#cloud-config
+bootcmd:
+ - echo nameserver {nameserver} | tee /etc/resolv.conf
+ - echo search {lab_domain} | tee -a /etc/resolv.conf
+ - sed -ie 's/PEERDNS="yes"/PEERDNS="no"/' /etc/sysconfig/network-scripts/ifcfg-eth0
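+ # the next command derives /etc/hostname from the metadata service:
+ # the short hostname plus the last two octets of the local IPv4,
+ # zero padded, in {lab_domain}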
+ - ( curl --silent http://169.254.169.254/2009-04-04/meta-data/hostname | sed -e 's/[\.-].*//' ; eval printf "%03d%03d.{lab_domain}" $(curl --silent http://169.254.169.254/2009-04-04/meta-data/local-ipv4 | sed -e 's/.*\.\(.*\)\.\(.*\)/\1 \2/') ) | tee /etc/hostname
+ - hostname $(cat /etc/hostname)
+ - yum install -y yum-utils && yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/6/x86_64/ && yum install --nogpgcheck -y epel-release && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6 && rm /etc/yum.repos.d/dl.fedoraproject.org*
+ - ( echo ; echo "MaxSessions 1000" ) >> /etc/ssh/sshd_config
+ - ( echo 'Defaults !requiretty' ; echo 'Defaults visiblepw' ) | tee /etc/sudoers.d/cephlab_sudo
+preserve_hostname: true
+system_info:
+ default_user:
+ name: {username}
+packages:
+ - python
+ - wget
+ - git
+ - ntp
+ - dracut-modules-growroot
+runcmd:
+ - mkinitrd --force /boot/initramfs-2.6.32-504.1.3.el6.x86_64.img 2.6.32-504.1.3.el6.x86_64
+ - reboot
+#runcmd:
+# # if /mnt is on ephemeral, that moves /home/{username} on the ephemeral, otherwise it does nothing
+# - rsync -a --numeric-ids /home/{username}/ /mnt/ && rm -fr /home/{username} && ln -s /mnt /home/{username}
+final_message: "{up}, after $UPTIME seconds"
--- /dev/null
+#cloud-config
+bootcmd:
+ - echo nameserver {nameserver} | tee /etc/resolv.conf
+ - echo search {lab_domain} | tee -a /etc/resolv.conf
+ - sed -ie 's/PEERDNS="yes"/PEERDNS="no"/' /etc/sysconfig/network-scripts/ifcfg-eth0
+ - ( curl --silent http://169.254.169.254/2009-04-04/meta-data/hostname | sed -e 's/[\.-].*//' ; eval printf "%03d%03d.{lab_domain}" $(curl --silent http://169.254.169.254/2009-04-04/meta-data/local-ipv4 | sed -e 's/.*\.\(.*\)\.\(.*\)/\1 \2/') ) | tee /etc/hostname
+ - hostname $(cat /etc/hostname)
+ - ( echo ; echo "MaxSessions 1000" ) >> /etc/ssh/sshd_config
+# See https://github.com/ceph/ceph-cm-ansible/blob/master/roles/cobbler/templates/snippets/cephlab_user
+ - ( echo 'Defaults !requiretty' ; echo 'Defaults visiblepw' ) | tee /etc/sudoers.d/cephlab_sudo ; chmod 0440 /etc/sudoers.d/cephlab_sudo
+preserve_hostname: true
+system_info:
+ default_user:
+ name: {username}
+packages:
+ - python
+ - wget
+ - git
+ - ntp
+# this does not work on CentOS: the ssh key stops working, maybe because a symlink is needed to reach it
+#runcmd:
+# # if /mnt is on ephemeral, that moves /home/{username} on the ephemeral, otherwise it does nothing
+# - rsync -a --numeric-ids /home/{username}/ /mnt/ && rm -fr /home/{username} && ln -s /mnt /home/{username}
+final_message: "{up}, after $UPTIME seconds"
--- /dev/null
+openstack-ubuntu-user-data.txt
\ No newline at end of file
--- /dev/null
+#cloud-config
+users:
+ - name: clouduser
+ gecos: User
+ sudo: ["ALL=(ALL) NOPASSWD:ALL"]
+ groups: users
+ ssh_pwauth: True
+chpasswd:
+ list: |
+ clouduser:linux
+ expire: False
+ssh_pwauth: True
+
--- /dev/null
+#!/bin/bash
+#
+# Copyright (c) 2015 Red Hat, Inc.
+#
+# Author: Loic Dachary <loic@dachary.org>
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+# THE SOFTWARE.
+#
+### BEGIN INIT INFO
+# Provides: teuthology
+# Required-Start: $network $remote_fs $syslog beanstalkd nginx
+# Required-Stop: $network $remote_fs $syslog
+# Default-Start: 2 3 4 5
+# Default-Stop:
+# Short-Description: Start teuthology
+### END INIT INFO
+
+cd /home/ubuntu
+
+source /etc/default/teuthology
+
+user=${TEUTHOLOGY_USERNAME:-ubuntu}
+
+case $1 in
+ start)
+ /etc/init.d/beanstalkd start
+ su - -c "cd /home/$user/paddles ; virtualenv/bin/pecan serve config.py" $user > /var/log/paddles.log 2>&1 &
+ su - -c "cd /home/$user/pulpito ; virtualenv/bin/python run.py" $user > /var/log/pulpito.log 2>&1 &
+ sleep 3
+ (
+ cd teuthology
+ . virtualenv/bin/activate
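+        # unlock any targets still locked by a previous run of the workers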
+ teuthology-lock --list-targets --owner scheduled_$user@teuthology > /tmp/t
+ if test -s /tmp/t && ! grep -qq 'targets: {}' /tmp/t ; then
+ teuthology-lock --unlock -t /tmp/t --owner scheduled_$user@teuthology
+ fi
+ mkdir -p /tmp/log
+ chown $user /tmp/log
+ for i in $(seq 1 $NWORKERS) ; do
+ su - -c "cd /home/$user ; source openrc.sh ; cd teuthology ; LC_ALL=C virtualenv/bin/teuthology-worker --tube openstack -l /tmp/log --archive-dir /usr/share/nginx/html" $user > /var/log/teuthology.$i 2>&1 &
+ done
+ )
+ ;;
+ stop)
+ pkill -f 'pecan serve'
+ pkill -f 'python run.py'
+ pkill -f 'teuthology-worker'
+ pkill -f 'ansible'
+ /etc/init.d/beanstalkd stop
+ source /home/$user/teuthology/virtualenv/bin/activate
+ source /home/$user/openrc.sh
+ ip=$(ip a show dev eth0 | sed -n "s:.*inet \(.*\)/.*:\1:p")
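+        # delete every instance whose ownedby property matches this VM's IP
+        # (presumably the instances provisioned by its teuthology workers)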
+ openstack server list --long -f json | \
+ jq ".[] | select(.Properties | contains(\"ownedby='$ip'\")) | .ID" | \
+ while read uuid ; do
+ eval openstack server delete $uuid
+ done
+ ;;
+ restart)
+ $0 stop
+ $0 start
+ ;;
+ *)
+esac
--- /dev/null
+openstack-ubuntu-user-data.txt
\ No newline at end of file
--- /dev/null
+#cloud-config
+bootcmd:
+ - apt-get remove --purge -y resolvconf || true
+ - echo 'prepend domain-name-servers {nameserver};' | sudo tee -a /etc/dhcp/dhclient.conf
+ - echo 'supersede domain-name "{lab_domain}";' | sudo tee -a /etc/dhcp/dhclient.conf
+ - ifdown eth0 ; ifup eth0
+ - ( curl --silent http://169.254.169.254/2009-04-04/meta-data/hostname | sed -e 's/[\.-].*//' ; eval printf "%03d%03d.{lab_domain}" $(curl --silent http://169.254.169.254/2009-04-04/meta-data/local-ipv4 | sed -e 's/.*\.\(.*\)\.\(.*\)/\1 \2/') ) | tee /etc/hostname
+ - hostname $(cat /etc/hostname)
+ - echo "MaxSessions 1000" >> /etc/ssh/sshd_config
+preserve_hostname: true
+system_info:
+ default_user:
+ name: {username}
+packages:
+ - python
+ - wget
+ - git
+ - ntp
+runcmd:
+  # if /mnt is on ephemeral storage, this moves /home/{username} onto it, otherwise it does nothing
+ - rsync -a --numeric-ids /home/{username}/ /mnt/ && rm -fr /home/{username} && ln -s /mnt /home/{username}
+final_message: "{up}, after $UPTIME seconds"
--- /dev/null
+#cloud-config
+bootcmd:
+ - touch /tmp/init.out
+system_info:
+ default_user:
+ name: TEUTHOLOGY_USERNAME
+packages:
+ - python-virtualenv
+ - git
+runcmd:
+ - su - -c '(set -x ; git clone -b wip-6502-openstack-v3 http://github.com/dachary/teuthology && cd teuthology && ./bootstrap install)' TEUTHOLOGY_USERNAME >> /tmp/init.out 2>&1
+ - echo 'export OPENRC' | tee /home/TEUTHOLOGY_USERNAME/openrc.sh
+ - su - -c '(set -x ; source openrc.sh ; cd teuthology ; source virtualenv/bin/activate ; openstack keypair delete teuthology || true ; teuthology/openstack/setup-openstack.sh --nworkers NWORKERS --setup-all)' TEUTHOLOGY_USERNAME >> /tmp/init.out 2>&1
+ - /etc/init.d/teuthology restart
+final_message: "teuthology is up and running after $UPTIME seconds"
--- /dev/null
+#!/bin/bash
+#
+# Copyright (c) 2015 Red Hat, Inc.
+#
+# Author: Loic Dachary <loic@dachary.org>
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+# THE SOFTWARE.
+#
+
+#
+# Most of this file is intended to be obsoleted by the ansible
+# equivalents when they become available (setting up paddles, pulpito,
+# etc.).
+#
+function create_config() {
+ local network="$1"
+ local subnet="$2"
+ local nameserver="$3"
+ local labdomain="$4"
+ local ip="$5"
+ local flavor_select="$6"
+
+ if test "$flavor_select" ; then
+ flavor_select="flavor-select-regexp: $flavor_select"
+ fi
+
+ if test "$network" ; then
+ network="network: $network"
+ fi
+
+ cat > ~/.teuthology.yaml <<EOF
+lock_server: http://localhost:8080/
+results_server: http://localhost:8080/
+queue_port: 11300
+queue_host: localhost
+lab_domain: $labdomain
+max_job_time: 14400 # 4 hours
+teuthology_path: .
+openstack:
+ user-data: teuthology/openstack/openstack-{os_type}-{os_version}-user-data.txt
+ ip: $ip
+ nameserver: $nameserver
+ #
+ # OpenStack has predefined machine sizes (called flavors)
+ # For a given job requiring N machines, the following will select
+ # the smallest flavor that satisfies these requirements. For instance
+ # If there are three flavors
+ #
+ # F1 (10GB disk, 2000MB RAM, 1CPU)
+ # F2 (100GB disk, 7000MB RAM, 1CPU)
+ # F3 (50GB disk, 7000MB RAM, 1CPU)
+ #
+ # and machine: { disk: 40, ram: 7000, cpus: 1 }, F3 will be chosen.
+ # F1 does not have enough RAM (2000 instead of the 7000 minimum) and
+ # although F2 satisfies all the requirements, it is larger than F3
+ # (100GB instead of 50GB) and presumably more expensive.
+ #
+ machine:
+ disk: 20 # GB
+ ram: 8000 # MB
+ cpus: 1
+ volumes:
+ count: 0
+ size: 1 # GB
+ $flavor_select
+ subnet: $subnet
+ $network
+EOF
+ echo "OVERRIDE ~/.teuthology.yaml"
+ return 0
+}
+
+function teardown_paddles() {
+ if pkill -f 'pecan' ; then
+ echo "SHUTDOWN the paddles server"
+ fi
+}
+
+function setup_paddles() {
+ local ip=$1
+
+ local public_ip=$(curl --silent http://169.254.169.254/2009-04-04/meta-data/public-ipv4/)
+ if test -z "$public_ip" ; then
+ public_ip=$ip
+ fi
+
+ local paddles_dir=$(dirname $0)/../../../paddles
+
+ if ! test -d $paddles_dir ; then
+ git clone https://github.com/ceph/paddles.git $paddles_dir || return 1
+ fi
+
+ sudo apt-get -qq install -y beanstalkd postgresql postgresql-contrib postgresql-server-dev-all supervisor
+
+ if ! sudo /etc/init.d/postgresql status ; then
+ sudo mkdir -p /etc/postgresql
+ sudo chown postgres /etc/postgresql
+ sudo -u postgres pg_createcluster 9.3 paddles
+ sudo /etc/init.d/postgresql start || return 1
+ fi
+ if ! psql --command 'select 1' 'postgresql://paddles:paddles@localhost/paddles' > /dev/null 2>&1 ; then
+ sudo -u postgres psql -c "CREATE USER paddles with PASSWORD 'paddles';" || return 1
+ sudo -u postgres createdb -O paddles paddles || return 1
+ fi
+ (
+ cd $paddles_dir || return 1
+ git pull --rebase
+ git clean -ffqdx
+ sed -e "s|^address.*|address = 'http://localhost'|" \
+ -e "s|^job_log_href_templ = 'http://qa-proxy.ceph.com/teuthology|job_log_href_templ = 'http://$public_ip|" \
+ -e "/sqlite/d" \
+ -e "s|.*'postgresql+psycop.*'|'url': 'postgresql://paddles:paddles@localhost/paddles'|" \
+ -e "s/'host': '127.0.0.1'/'host': '0.0.0.0'/" \
+ < config.py.in > config.py
+ virtualenv ./virtualenv
+ source ./virtualenv/bin/activate
+ pip install -r requirements.txt
+ pip install sqlalchemy tzlocal requests netaddr
+ python setup.py develop
+ )
+
+ echo "CONFIGURED the paddles server"
+}
+
+function populate_paddles() {
+ local subnet=$1
+ local labdomain=$2
+
+ local paddles_dir=$(dirname $0)/../../../paddles
+
+ local url='postgresql://paddles:paddles@localhost/paddles'
+
+ pkill -f 'pecan serve'
+
+ sudo -u postgres dropdb paddles
+ sudo -u postgres createdb -O paddles paddles
+
+ (
+ cd $paddles_dir || return 1
+ source virtualenv/bin/activate
+ pecan populate config.py
+
+ (
+ echo "begin transaction;"
+ subnet_names_and_ips $subnet | while read name ip ; do
+ echo "insert into nodes (name,machine_type,is_vm,locked,up) values ('${name}.${labdomain}', 'openstack', TRUE, FALSE, TRUE);"
+ done
+ echo "commit transaction;"
+ ) | psql --quiet $url
+
+ setsid pecan serve config.py < /dev/null > /dev/null 2>&1 &
+ for i in $(seq 1 20) ; do
+ if curl --silent http://localhost:8080/ > /dev/null 2>&1 ; then
+ break
+ else
+ echo -n .
+ sleep 5
+ fi
+ done
+ echo -n ' '
+ )
+
+ echo "RESET the paddles server"
+}
+
+function teardown_pulpito() {
+ if pkill -f 'python run.py' ; then
+ echo "SHUTDOWN the pulpito server"
+ fi
+}
+
+function setup_pulpito() {
+ local pulpito=http://localhost:8081/
+
+ local pulpito_dir=$(dirname $0)/../../../pulpito
+
+ if curl --silent $pulpito | grep -q pulpito ; then
+ echo "OK pulpito is running"
+ return 0
+ fi
+
+ if ! test -d $pulpito_dir ; then
+ git clone https://github.com/ceph/pulpito.git $pulpito_dir || return 1
+ fi
+
+ sudo apt-get -qq install -y nginx
+ local nginx_conf=/etc/nginx/sites-available/default
+ if ! grep -qq 'autoindex on' $nginx_conf ; then
+ sudo perl -pi -e 's|location / {|location / { autoindex on;|' $nginx_conf
+ sudo /etc/init.d/nginx restart
+ echo "ADDED autoindex on to nginx configuration"
+ fi
+ sudo chown $USER /usr/share/nginx/html
+ (
+ cd $pulpito_dir || return 1
+ git pull --rebase
+ git clean -ffqdx
+ sed -e "s|paddles_address.*|paddles_address = 'http://localhost:8080'|" < config.py.in > prod.py
+ virtualenv ./virtualenv
+ source ./virtualenv/bin/activate
+ pip install -r requirements.txt
+ python run.py &
+ )
+
+ echo "LAUNCHED the pulpito server"
+}
+
+function setup_bashrc() {
+ if test -f ~/.bashrc && grep -qq '.bashrc_teuthology' ~/.bashrc ; then
+ echo "OK .bashrc_teuthology found in ~/.bashrc"
+ else
+ cat > ~/.bashrc_teuthology <<'EOF'
+source $HOME/openrc.sh
+source $HOME/teuthology/virtualenv/bin/activate
+export HISTSIZE=500000
+export PROMPT_COMMAND='history -a'
+EOF
+ echo 'source $HOME/.bashrc_teuthology' >> ~/.bashrc
+ echo "ADDED .bashrc_teuthology to ~/.bashrc"
+ fi
+}
+
+function setup_ssh_config() {
+ if test -f ~/.ssh/config && grep -qq 'StrictHostKeyChecking no' ~/.ssh/config ; then
+ echo "OK ~/.ssh/config"
+ else
+ cat >> ~/.ssh/config <<EOF
+Host *
+ StrictHostKeyChecking no
+ UserKnownHostsFile=/dev/null
+EOF
+        echo "APPENDED to ~/.ssh/config"
+ fi
+}
+
+function setup_bootscript() {
+ local nworkers=$1
+
+ local where=$(dirname $0)
+
+ sudo cp -a $where/openstack-teuthology.init /etc/init.d/teuthology
+    echo NWORKERS=$nworkers | sudo tee /etc/default/teuthology > /dev/null
+ echo "CREATED init script /etc/init.d/teuthology"
+}
+
+function get_or_create_keypair() {
+ local keypair=$1
+ local key_file=$HOME/.ssh/id_rsa
+
+ if ! openstack keypair show $keypair > /dev/null 2>&1 ; then
+ if test -f $key_file ; then
+ if ! test -f $key_file.pub ; then
+ ssh-keygen -y -f $key_file > $key_file.pub || return 1
+ fi
+ openstack keypair create --public-key $key_file.pub $keypair || return 1
+ echo "IMPORTED keypair $keypair"
+ else
+ openstack keypair create $keypair > $key_file || return 1
+ chmod 600 $key_file
+ echo "CREATED keypair $keypair"
+ fi
+ else
+ echo "OK keypair $keypair exists"
+ fi
+}
+
+function delete_keypair() {
+ local keypair=$1
+
+ if openstack keypair show $keypair > /dev/null 2>&1 ; then
+ openstack keypair delete $keypair || return 1
+ echo "REMOVED keypair $keypair"
+ fi
+}
+
+function setup_dnsmasq() {
+
+ if ! test -f /etc/dnsmasq.d/resolv ; then
+ resolver=$(grep nameserver /etc/resolv.conf | head -1 | perl -ne 'print $1 if(/\s*nameserver\s+([\d\.]+)/)')
+ sudo apt-get -qq install -y dnsmasq resolvconf
+ echo resolv-file=/etc/dnsmasq-resolv.conf | sudo tee /etc/dnsmasq.d/resolv
+ echo nameserver $resolver | sudo tee /etc/dnsmasq-resolv.conf
+ sudo /etc/init.d/dnsmasq restart
+ sudo sed -ie 's/^#IGNORE_RESOLVCONF=yes/IGNORE_RESOLVCONF=yes/' /etc/default/dnsmasq
+ echo nameserver 127.0.0.1 | sudo tee /etc/resolvconf/resolv.conf.d/head
+ sudo resolvconf -u
+ # see http://tracker.ceph.com/issues/12212 apt-mirror.front.sepia.ceph.com is not publicly accessible
+ echo host-record=apt-mirror.front.sepia.ceph.com,64.90.32.37 | sudo tee /etc/dnsmasq.d/apt-mirror
+ echo "INSTALLED dnsmasq and configured to be a resolver"
+ else
+ echo "OK dnsmasq installed"
+ fi
+}
+
+function subnet_names_and_ips() {
+ local subnet=$1
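+    # e.g. for 10.0.0.0/28 this prints lines such as: target000005 10.0.0.5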
+ python -c 'import netaddr; print "\n".join([str(i) for i in netaddr.IPNetwork("'$subnet'")])' |
+ sed -e 's/\./ /g' | while read a b c d ; do
+ printf "target%03d%03d " $c $d
+ echo $a.$b.$c.$d
+ done
+}
+
+function define_dnsmasq() {
+ local subnet=$1
+ local labdomain=$2
+ local host_records=/etc/dnsmasq.d/teuthology
+ if ! test -f $host_records ; then
+ subnet_names_and_ips $subnet | while read name ip ; do
+ echo host-record=$name.$labdomain,$ip
+ done | sudo tee $host_records > /tmp/dnsmasq
+ head -2 /tmp/dnsmasq
+ echo 'etc.'
+ sudo /etc/init.d/dnsmasq restart
+ echo "CREATED $host_records"
+ else
+ echo "OK $host_records exists"
+ fi
+}
+
+function undefine_dnsmasq() {
+ local host_records=/etc/dnsmasq.d/teuthology
+
+ sudo rm -f $host_records
+ echo "REMOVED $host_records"
+}
+
+function setup_ansible() {
+ local subnet=$1
+ local labdomain=$2
+ local dir=/etc/ansible/hosts
+ if ! test -f $dir/teuthology ; then
+ sudo mkdir -p $dir/group_vars
+ echo '[testnodes]' | sudo tee $dir/teuthology
+ subnet_names_and_ips $subnet | while read name ip ; do
+ echo $name.$labdomain
+ done | sudo tee -a $dir/teuthology > /tmp/ansible
+ head -2 /tmp/ansible
+ echo 'etc.'
+ echo 'modify_fstab: false' | sudo tee $dir/group_vars/all.yml
+ echo "CREATED $dir/teuthology"
+ else
+ echo "OK $dir/teuthology exists"
+ fi
+}
+
+function teardown_ansible() {
+ sudo rm -fr /etc/ansible/hosts/teuthology
+}
+
+function remove_images() {
+ glance image-list --property-filter ownedby=teuthology | grep -v -e ---- -e 'Disk Format' | cut -f4 -d ' ' | while read image ; do
+        echo "DELETED image $image"
+ glance image-delete $image
+ done
+}
+
+function install_packages() {
+
+ if ! test -f /etc/apt/sources.list.d/trusty-backports.list ; then
+ echo deb http://archive.ubuntu.com/ubuntu trusty-backports main universe | sudo tee /etc/apt/sources.list.d/trusty-backports.list
+ sudo apt-get update
+ fi
+
+ local packages="jq realpath"
+ sudo apt-get -qq install -y $packages
+
+    echo "INSTALLED required packages $packages"
+}
+
+CAT=${CAT:-cat}
+
+function set_nameserver() {
+ local subnet_id=$1
+ local nameserver=$2
+
+ eval local current_nameserver=$(neutron subnet-show -f json $subnet_id | jq '.[] | select(.Field == "dns_nameservers") | .Value' )
+
+ if test "$current_nameserver" = "$nameserver" ; then
+ echo "OK nameserver is $nameserver"
+ else
+ neutron subnet-update --dns-nameserver $nameserver $subnet_id || return 1
+ echo "CHANGED nameserver from $current_nameserver to $nameserver"
+ fi
+}
+
+function verify_openstack() {
+ if ! openstack server list > /dev/null ; then
+ echo ERROR: the credentials from ~/openrc.sh are not working >&2
+ return 1
+ fi
+ echo "OK $OS_TENANT_NAME can use $OS_AUTH_URL" >&2
+ local provider
+ if echo $OS_AUTH_URL | grep -qq cloud.ovh.net ; then
+ provider=ovh
+ elif echo $OS_AUTH_URL | grep -qq entercloudsuite.com ; then
+ provider=entercloudsuite
+ else
+ provider=standardopenstack
+ fi
+ echo "OPENSTACK PROVIDER $provider" >&2
+ echo $provider
+}
+
+function main() {
+ local network
+ local subnet
+ local nameserver
+ local labdomain=teuthology
+ local nworkers=2
+ local flavor_select
+ local keypair=teuthology
+
+ local do_setup_keypair=false
+ local do_create_config=false
+ local do_setup_dnsmasq=false
+ local do_install_packages=false
+ local do_setup_paddles=false
+ local do_populate_paddles=false
+ local do_setup_pulpito=false
+ local do_clobber=false
+
+ export LC_ALL=C
+
+ while [ $# -ge 1 ]; do
+ case $1 in
+ --verbose)
+ set -x
+ PS4='${FUNCNAME[0]}: $LINENO: '
+ ;;
+ --nameserver)
+ shift
+ nameserver=$1
+ ;;
+ --subnet)
+ shift
+ subnet=$1
+ ;;
+ --labdomain)
+ shift
+ labdomain=$1
+ ;;
+ --nworkers)
+ shift
+ nworkers=$1
+ ;;
+ --install)
+ do_install_packages=true
+ ;;
+ --config)
+ do_create_config=true
+ ;;
+ --setup-keypair)
+ do_setup_keypair=true
+ ;;
+ --setup-dnsmasq)
+ do_setup_dnsmasq=true
+ ;;
+ --setup-paddles)
+ do_setup_paddles=true
+ ;;
+ --setup-pulpito)
+ do_setup_pulpito=true
+ ;;
+ --populate-paddles)
+ do_populate_paddles=true
+ ;;
+ --setup-all)
+ do_install_packages=true
+ do_create_config=true
+ do_setup_keypair=true
+ do_setup_dnsmasq=true
+ do_setup_paddles=true
+ do_setup_pulpito=true
+ do_populate_paddles=true
+ ;;
+ --clobber)
+ do_clobber=true
+ ;;
+ *)
+ echo $1 is not a known option
+ return 1
+ ;;
+ esac
+ shift
+ done
+
+ if $do_install_packages ; then
+ install_packages || return 1
+ fi
+
+ local provider=$(verify_openstack)
+
+ eval local default_subnet=$(neutron subnet-list -f json | jq '.[0].cidr')
+ if test -z "$default_subnet" ; then
+ default_subnet=$(nova tenant-network-list | grep / | cut -f6 -d' ' | head -1)
+ fi
+ : ${subnet:=$default_subnet}
+
+ case $provider in
+ entercloudsuite)
+ eval local network=$(neutron net-list -f json | jq '.[] | select(.subnets | contains("'$subnet'")) | .name')
+ ;;
+ esac
+
+ case $provider in
+ ovh)
+ flavor_select='^(vps|eg)-'
+ ;;
+ esac
+
+ local ip=$(ip a show dev eth0 | sed -n "s:.*inet \(.*\)/.*:\1:p")
+ : ${nameserver:=$ip}
+
+ if $do_create_config ; then
+ create_config "$network" "$subnet" "$nameserver" "$labdomain" "$ip" "$flavor_select" || return 1
+ setup_ansible $subnet $labdomain || return 1
+ setup_ssh_config || return 1
+ setup_bashrc || return 1
+ setup_bootscript $nworkers || return 1
+ fi
+
+ if $do_setup_keypair ; then
+ get_or_create_keypair $keypair || return 1
+ fi
+
+ if $do_setup_dnsmasq ; then
+ setup_dnsmasq || return 1
+ define_dnsmasq $subnet $labdomain || return 1
+ fi
+
+ if $do_setup_paddles ; then
+ setup_paddles $ip || return 1
+ fi
+
+ if $do_populate_paddles ; then
+ populate_paddles $subnet $labdomain || return 1
+ fi
+
+ if $do_setup_pulpito ; then
+ setup_pulpito || return 1
+ fi
+
+ if $do_clobber ; then
+ undefine_dnsmasq || return 1
+ delete_keypair $keypair || return 1
+ teardown_paddles || return 1
+ teardown_pulpito || return 1
+ teardown_ansible || return 1
+ remove_images || return 1
+ fi
+}
+
+main "$@"
--- /dev/null
+archive-on-error: true
--- /dev/null
+stop_worker: true
+machine_type: openstack
+os_type: ubuntu
+os_version: "14.04"
+roles:
+- - mon.a
+ - osd.0
+tasks:
+- exec:
+ mon.a:
+    - echo "Well done!"
+
--- /dev/null
+#
+# Copyright (c) 2015 Red Hat, Inc.
+#
+# Author: Loic Dachary <loic@dachary.org>
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+# THE SOFTWARE.
+#
+import argparse
+import logging
+import json
+import os
+import subprocess
+import tempfile
+import shutil
+
+import teuthology.lock
+import teuthology.nuke
+import teuthology.misc
+import teuthology.schedule
+import teuthology.suite
+import teuthology.openstack
+import scripts.schedule
+import scripts.lock
+import scripts.suite
+
+class Integration(object):
+
+    @classmethod
+    def setup_class(cls):
+        teuthology.log.setLevel(logging.DEBUG)
+        teuthology.misc.read_config(argparse.Namespace())
+        cls.teardown_class()
+
+    @classmethod
+    def teardown_class(cls):
+ os.system("sudo /etc/init.d/beanstalkd restart")
+        # if this fails, the error is not shown, only a cryptic
+        # INTERNALERROR> IndexError: list index out of range
+        # move this to teardown() to debug it, then back to
+        # teardown_class() so it does not run after every test
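+        # destroy every instance tagged with a teuthology= property,
+        # i.e. whatever a previous run left behind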
+ all_instances = teuthology.misc.sh("openstack server list -f json --long")
+ for instance in json.loads(all_instances):
+ if 'teuthology=' in instance['Properties']:
+ teuthology.misc.sh("openstack server delete --wait " + instance['ID'])
+ teuthology.misc.sh("""
+teuthology/openstack/setup-openstack.sh \
+ --populate-paddles
+ """)
+
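+    # run one teuthology worker in the background; wait_worker()
+    # collects its output and asserts that it exited successfully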
+ def setup_worker(self):
+ self.logs = self.d + "/log"
+ os.mkdir(self.logs, 0o755)
+ self.archive = self.d + "/archive"
+ os.mkdir(self.archive, 0o755)
+ self.worker_cmd = ("teuthology-worker --tube openstack " +
+ "-l " + self.logs + " "
+ "--archive-dir " + self.archive + " ")
+ logging.info(self.worker_cmd)
+ self.worker = subprocess.Popen(self.worker_cmd,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ shell=True)
+
+ def wait_worker(self):
+ if not self.worker:
+ return
+
+ (stdoutdata, stderrdata) = self.worker.communicate()
+ stdoutdata = stdoutdata.decode('utf-8')
+ stderrdata = stderrdata.decode('utf-8')
+ logging.info(self.worker_cmd + ":" +
+ " stdout " + stdoutdata +
+ " stderr " + stderrdata + " end ")
+ assert self.worker.returncode == 0
+ self.worker = None
+
+ def get_teuthology_log(self):
+        # the archive is removed before each test, so there must be
+        # exactly one run containing exactly one job
+ run = os.listdir(self.archive)[0]
+ job = os.listdir(os.path.join(self.archive, run))[0]
+ path = os.path.join(self.archive, run, job, 'teuthology.log')
+ return open(path, 'r').read()
+
+class TestSuite(Integration):
+
+ def setup(self):
+ self.d = tempfile.mkdtemp()
+ self.setup_worker()
+ logging.info("TestSuite: done worker")
+
+ def teardown(self):
+ self.wait_worker()
+ shutil.rmtree(self.d)
+
+ def test_suite_noop(self):
+ cwd = os.getcwd()
+ args = ['--suite', 'noop',
+ '--suite-dir', cwd + '/teuthology/openstack/test',
+ '--machine-type', 'openstack',
+ '--verbose']
+ logging.info("TestSuite:test_suite_noop")
+ scripts.suite.main(args)
+ self.wait_worker()
+ log = self.get_teuthology_log()
+ assert "teuthology.run:pass" in log
+ assert "Well done" in log
+
+ def test_suite_nuke(self):
+ cwd = os.getcwd()
+ args = ['--suite', 'nuke',
+ '--suite-dir', cwd + '/teuthology/openstack/test',
+ '--machine-type', 'openstack',
+ '--verbose']
+ logging.info("TestSuite:test_suite_nuke")
+ scripts.suite.main(args)
+ self.wait_worker()
+ log = self.get_teuthology_log()
+ assert "teuthology.run:FAIL" in log
+ locks = teuthology.lock.list_locks(locked=True)
+ assert len(locks) == 0
+
+class TestSchedule(Integration):
+
+ def setup(self):
+ self.d = tempfile.mkdtemp()
+ self.setup_worker()
+
+ def teardown(self):
+ self.wait_worker()
+ shutil.rmtree(self.d)
+
+ def test_schedule_stop_worker(self):
+ job = 'teuthology/openstack/test/stop_worker.yaml'
+ args = ['--name', 'fake',
+ '--verbose',
+ '--owner', 'test@test.com',
+ '--worker', 'openstack',
+ job]
+ scripts.schedule.main(args)
+ self.wait_worker()
+
+ def test_schedule_noop(self):
+ job = 'teuthology/openstack/test/noop.yaml'
+ args = ['--name', 'fake',
+ '--verbose',
+ '--owner', 'test@test.com',
+ '--worker', 'openstack',
+ job]
+ scripts.schedule.main(args)
+ self.wait_worker()
+ log = self.get_teuthology_log()
+ assert "teuthology.run:pass" in log
+ assert "Well done" in log
+
+ def test_schedule_resources_hint(self):
+        """Testing the resources hint in a provider agnostic way is
+        tricky. The best approach seems to be to ask for at least 1GB
+        of RAM and a 10GB disk. Some providers do not offer a 1GB RAM
+        flavor (OVH for instance), in which case a 2GB RAM flavor is
+        chosen instead. It is unlikely, however, that a 4GB RAM flavor
+        would be chosen: that would mean the provider offers nothing
+        below 4GB, which is a little too high.
+
+        Since the default when installing is to ask for 7000 MB, we
+        can reasonably assume that the hint has been taken into
+        account if the instance has less than 4GB RAM.
+        """
+ try:
+ teuthology.misc.sh("openstack volume list")
+ job = 'teuthology/openstack/test/resources_hint.yaml'
+ has_cinder = True
+ except subprocess.CalledProcessError:
+ job = 'teuthology/openstack/test/resources_hint_no_cinder.yaml'
+ has_cinder = False
+ args = ['--name', 'fake',
+ '--verbose',
+ '--owner', 'test@test.com',
+ '--worker', 'openstack',
+ job]
+ scripts.schedule.main(args)
+ self.wait_worker()
+ log = self.get_teuthology_log()
+ assert "teuthology.run:pass" in log
+ assert "RAM size ok" in log
+ if has_cinder:
+ assert "Disk size ok" in log
+
+class TestLock(Integration):
+
+ def setup(self):
+ self.options = ['--verbose',
+ '--machine-type', 'openstack' ]
+
+ def test_main(self):
+ args = scripts.lock.parse_args(self.options + ['--lock'])
+ assert teuthology.lock.main(args) == 0
+
+ def test_lock_unlock(self):
+ for image in teuthology.openstack.OpenStack.image2url.keys():
+ (os_type, os_version) = image.split('-')
+ args = scripts.lock.parse_args(self.options +
+ ['--lock-many', '1',
+ '--os-type', os_type,
+ '--os-version', os_version])
+ assert teuthology.lock.main(args) == 0
+ locks = teuthology.lock.list_locks(locked=True)
+ assert len(locks) == 1
+ args = scripts.lock.parse_args(self.options +
+ ['--unlock', locks[0]['name']])
+ assert teuthology.lock.main(args) == 0
+
+ def test_list(self, capsys):
+ args = scripts.lock.parse_args(self.options + ['--list', '--all'])
+ teuthology.lock.main(args)
+ out, err = capsys.readouterr()
+ assert 'machine_type' in out
+ assert 'openstack' in out
+
+class TestNuke(Integration):
+
+ def setup(self):
+ self.options = ['--verbose',
+ '--machine-type', 'openstack']
+
+ def test_nuke(self):
+        image = list(teuthology.openstack.OpenStack.image2url.keys())[0]
+
+ (os_type, os_version) = image.split('-')
+ args = scripts.lock.parse_args(self.options +
+ ['--lock-many', '1',
+ '--os-type', os_type,
+ '--os-version', os_version])
+ assert teuthology.lock.main(args) == 0
+ locks = teuthology.lock.list_locks(locked=True)
+ logging.info('list_locks = ' + str(locks))
+ assert len(locks) == 1
+ ctx = argparse.Namespace(name=None,
+ config={
+ 'targets': { locks[0]['name']: None },
+ },
+ owner=locks[0]['locked_by'],
+ teuthology_config={})
+ teuthology.nuke.nuke(ctx, should_unlock=True)
+ locks = teuthology.lock.list_locks(locked=True)
+ assert len(locks) == 0
--- /dev/null
+overrides:
+ ceph:
+ conf:
+ global:
+ osd heartbeat grace: 100
+        # raise the mon lease timeouts to address issue #1017
+ mon lease: 15
+ mon lease ack timeout: 25
+ rgw:
+ default_idle_timeout: 1200
+ s3tests:
+ idle_timeout: 1200
+archive-on-error: true
--- /dev/null
+stop_worker: true
+machine_type: openstack
+openstack:
+ machine:
+ disk: 10 # GB
+ ram: 1024 # MB
+ cpus: 1
+ volumes:
+ count: 1
+ size: 2 # GB
+os_type: ubuntu
+os_version: "14.04"
+roles:
+- - mon.a
+ - osd.0
+tasks:
+- exec:
+ mon.a:
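+# the words are quoted separately so the logged command line does not
+# itself match the "RAM size ok" assertion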
+ - test $(sed -n -e 's/MemTotal.* \([0-9][0-9]*\).*/\1/p' < /proc/meminfo) -lt 4000000 && echo "RAM" "size" "ok"
+ - cat /proc/meminfo
+# wait for the attached volume to show up
+ - for delay in 1 2 4 8 16 32 64 128 256 512 ; do if test -e /sys/block/vdb/size ; then break ; else sleep $delay ; fi ; done
+# 4000000 sectors of 512 bytes is roughly the 2GB of the attached volume
+ - test $(cat /sys/block/vdb/size) -gt 4000000 && echo "Disk" "size" "ok"
+ - cat /sys/block/vdb/size
--- /dev/null
+stop_worker: true
+machine_type: openstack
+openstack:
+ machine:
+ disk: 10 # GB
+ ram: 1024 # MB
+ cpus: 1
+ volumes:
+ count: 0
+ size: 2 # GB
+os_type: ubuntu
+os_version: "14.04"
+roles:
+- - mon.a
+ - osd.0
+tasks:
+- exec:
+ mon.a:
+ - cat /proc/meminfo
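+# quoting each word separately keeps the logged command line from
+# matching the "RAM size ok" assertion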
+ - test $(sed -n -e 's/MemTotal.* \([0-9][0-9]*\).*/\1/p' < /proc/meminfo) -lt 4000000 && echo "RAM" "size" "ok"
--- /dev/null
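+# job with no tasks: it only stops the worker that ran it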
+stop_worker: true
--- /dev/null
+stop_worker: true
+roles:
+- - mon.a
+ - osd.0
+tasks:
+- exec:
+ mon.a:
+    - echo "Well done!"
+
--- /dev/null
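+# job that fails on purpose so that nuke-on-error unlocks the target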
+stop_worker: true
+nuke-on-error: true
+roles:
+- - client.0
+tasks:
+- exec:
+ client.0:
+ - exit 1
--- /dev/null
+#
+# Copyright (c) 2015 Red Hat, Inc.
+#
+# Author: Loic Dachary <loic@dachary.org>
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+# THE SOFTWARE.
+#
+import argparse
+import logging
+import os
+import pytest
+import tempfile
+
+import teuthology
+from teuthology import misc
+from teuthology.openstack import TeuthologyOpenStack
+import scripts.openstack
+
+class TestTeuthologyOpenStack(object):
+
+    @classmethod
+    def setup_class(cls):
+        if 'OS_AUTH_URL' not in os.environ:
+            pytest.skip('no OS_AUTH_URL environment variable')
+
+        teuthology.log.setLevel(logging.DEBUG)
+        teuthology.misc.read_config(argparse.Namespace())
+
+        # probe the tenant: create a floating IP and delete it right
+        # away; the tests needing floating IPs are skipped otherwise
+        ip = TeuthologyOpenStack.create_floating_ip()
+        if ip:
+            ip_id = TeuthologyOpenStack.get_floating_ip_id(ip)
+            misc.sh("openstack ip floating delete " + ip_id)
+            cls.can_create_floating_ips = True
+        else:
+            cls.can_create_floating_ips = False
+
+ def setup(self):
+ self.key_filename = tempfile.mktemp()
+ self.key_name = 'teuthology-test'
+ self.name = 'teuthology-test'
+ self.clobber()
+ misc.sh("""
+openstack keypair create {key_name} > {key_filename}
+chmod 600 {key_filename}
+ """.format(key_filename=self.key_filename,
+ key_name=self.key_name))
+ self.options = ['--key-name', self.key_name,
+ '--key-filename', self.key_filename,
+ '--name', self.name,
+ '--verbose']
+
+ def teardown(self):
+ self.clobber()
+ os.unlink(self.key_filename)
+
+ def clobber(self):
+ misc.sh("""
+openstack server delete {name} --wait || true
+openstack keypair delete {key_name} || true
+ """.format(key_name=self.key_name,
+ name=self.name))
+
+ def test_create(self, capsys):
+ teuthology_argv = [
+ '--suite', 'upgrade/hammer',
+ '--dry-run',
+ '--ceph', 'master',
+ '--kernel', 'distro',
+ '--flavor', 'gcov',
+ '--distro', 'ubuntu',
+ '--suite-branch', 'hammer',
+ '--email', 'loic@dachary.org',
+ '--num', '10',
+ '--limit', '23',
+ '--subset', '1/2',
+ '--priority', '101',
+ '--timeout', '234',
+ '--filter', 'trasher',
+ '--filter-out', 'erasure-code',
+ ]
+        argv = self.options + teuthology_argv
+        args = scripts.openstack.parse_args(argv)
+        # use a name that does not shadow the teuthology module
+        teuthology_openstack = TeuthologyOpenStack(args, None, argv)
+        teuthology_openstack.user_data = 'teuthology/openstack/test/user-data-test1.txt'
+        teuthology_openstack.teuthology_suite = 'echo --'
+
+        teuthology_openstack.main()
+        assert 'Ubuntu 14.04' in teuthology_openstack.ssh("lsb_release -a")
+        variables = teuthology_openstack.ssh(
+            "grep 'substituted variables' /var/log/cloud-init.log")
+        assert "nworkers=" + str(args.simultaneous_jobs) in variables
+        assert "username=" + teuthology_openstack.username in variables
+        assert os.environ['OS_AUTH_URL'] in variables
+
+        out, err = capsys.readouterr()
+        assert " ".join(teuthology_argv) in out
+
+        if self.can_create_floating_ips:
+            ip = teuthology_openstack.get_floating_ip(self.name)
+        teuthology_openstack.teardown()
+        if self.can_create_floating_ips:
+            assert teuthology_openstack.get_floating_ip_id(ip) is None
+
+ def test_floating_ip(self):
+ if not self.can_create_floating_ips:
+ pytest.skip('unable to create floating ips')
+
+ expected = TeuthologyOpenStack.create_floating_ip()
+ ip = TeuthologyOpenStack.get_unassociated_floating_ip()
+ assert expected == ip
+ ip_id = TeuthologyOpenStack.get_floating_ip_id(ip)
+ misc.sh("openstack ip floating delete " + ip_id)
--- /dev/null
+#cloud-config
+system_info:
+ default_user:
+ name: ubuntu
+final_message: "teuthology is up and running after $UPTIME seconds, substituted variables nworkers=NWORKERS openrc=OPENRC username=TEUTHOLOGY_USERNAME"
+import json
import logging
+import misc
import os
+import random
+import re
import subprocess
import tempfile
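+import time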
import yaml
+from .openstack import OpenStack
from .config import config
from .contextutil import safe_while
from .misc import decanonicalize_hostname, get_distro, get_distro_version
self.remove_config()
+class ProvisionOpenStack(OpenStack):
+ """
+ A class that provides methods for creating and destroying virtual machine
+ instances using OpenStack
+ """
+ def __init__(self):
+ super(ProvisionOpenStack, self).__init__()
+ self.user_data = tempfile.mktemp()
+ log.debug("ProvisionOpenStack: " + str(config.openstack))
+ self.basename = 'target'
+ self.up_string = 'The system is finally up'
+        # random tag applied to the instances created by this
+        # provisioner so they can be found later
+        self.property = "%16x" % random.getrandbits(128)
+
+ def __del__(self):
+ if os.path.exists(self.user_data):
+ os.unlink(self.user_data)
+
+ def init_user_data(self, os_type, os_version):
+ """
+ Get the user-data file that is fit for os_type and os_version.
+ It is responsible for setting up enough for ansible to take
+ over.
+ """
+ template_path = config['openstack']['user-data'].format(
+ os_type=os_type,
+ os_version=os_version)
+ nameserver = config['openstack'].get('nameserver', '8.8.8.8')
+ user_data_template = open(template_path).read()
+ user_data = user_data_template.format(
+ up=self.up_string,
+ nameserver=nameserver,
+ username=self.username,
+ lab_domain=config.lab_domain)
+ open(self.user_data, 'w').write(user_data)
+
+ def attach_volumes(self, name, hint):
+ """
+ Create and attach volumes to the named OpenStack instance.
+ """
+ if hint:
+ volumes = hint['volumes']
+ else:
+ volumes = config['openstack']['volumes']
+ for i in range(volumes['count']):
+ volume_name = name + '-' + str(i)
+ try:
+ misc.sh("openstack volume show -f json " +
+ volume_name)
+ except subprocess.CalledProcessError as e:
+ if 'No volume with a name or ID' not in e.output:
+ raise e
+ misc.sh("openstack volume create -f json " +
+ config['openstack'].get('volume-create', '') + " " +
+ " --size " + str(volumes['size']) + " " +
+ volume_name)
+ with safe_while(sleep=2, tries=100,
+ action="volume " + volume_name) as proceed:
+ while proceed():
+ r = misc.sh("openstack volume show -f json " +
+ volume_name)
+ status = self.get_value(json.loads(r), 'status')
+ if status == 'available':
+ break
+ else:
+ log.info("volume " + volume_name +
+ " not available yet")
+ misc.sh("openstack server add volume " +
+ name + " " + volume_name)
+
+ def list_volumes(self, name_or_id):
+ """
+ Return the uuid of the volumes attached to the name_or_id
+ OpenStack instance.
+ """
+ instance = misc.sh("openstack server show -f json " +
+ name_or_id)
+ volumes = self.get_value(json.loads(instance),
+ 'os-extended-volumes:volumes_attached')
+ return [ volume['id'] for volume in volumes ]
+
+ @staticmethod
+ def ip2name(prefix, ip):
+        """
+        Return the instance name suffixed with the last two octets
+        of the IP (the host part of a /16).
+        """
+        digits = map(int, re.findall(r'.*\.(\d+)\.(\d+)', ip)[0])
+ return prefix + "%03d%03d" % tuple(digits)
+
+ def create(self, num, os_type, os_version, arch, resources_hint):
+ """
+ Create num OpenStack instances running os_type os_version and
+ return their names. Each instance has at least the resources
+ described in resources_hint.
+ """
+ log.debug('ProvisionOpenStack:create')
+ self.init_user_data(os_type, os_version)
+ image = self.image(os_type, os_version)
+ if 'network' in config['openstack']:
+ net = "--nic net-id=" + str(self.net_id(config['openstack']['network']))
+ else:
+ net = ''
+ if resources_hint:
+ flavor_hint = resources_hint['machine']
+ else:
+ flavor_hint = config['openstack']['machine']
+ flavor = self.flavor(flavor_hint,
+ config['openstack'].get('flavor-select-regexp'))
+ misc.sh("openstack server create" +
+ " " + config['openstack'].get('server-create', '') +
+ " -f json " +
+ " --image '" + str(image) + "'" +
+ " --flavor '" + str(flavor) + "'" +
+ " --key-name teuthology " +
+ " --user-data " + str(self.user_data) +
+ " " + net +
+ " --min " + str(num) +
+ " --max " + str(num) +
+ " --security-group teuthology" +
+ " --property teuthology=" + self.property +
+ " --property ownedby=" + config.openstack['ip'] +
+ " --wait " +
+ " " + self.basename)
+ all_instances = json.loads(misc.sh("openstack server list -f json --long"))
+ instances = filter(
+ lambda instance: self.property in instance['Properties'],
+ all_instances)
+ fqdns = []
+ try:
+ network = config['openstack'].get('network', '')
+ for instance in instances:
+ name = self.ip2name(self.basename, self.get_ip(instance['ID'], network))
+ misc.sh("openstack server set " +
+ "--name " + name + " " +
+ instance['ID'])
+ fqdn = name + '.' + config.lab_domain
+ if not misc.ssh_keyscan_wait(fqdn):
+ raise ValueError('ssh_keyscan_wait failed for ' + fqdn)
+                # give cloud-init some time to start before polling it
+                time.sleep(15)
+ if not self.cloud_init_wait(fqdn):
+                    raise ValueError('cloud_init_wait failed for ' + fqdn)
+ self.attach_volumes(name, resources_hint)
+ fqdns.append(fqdn)
+        except Exception as e:
+            log.exception(str(e))
+            for instance_id in [instance['ID'] for instance in instances]:
+                self.destroy(instance_id)
+            raise e
+ return fqdns
+
+ def destroy(self, name_or_id):
+ """
+ Delete the name_or_id OpenStack instance.
+ """
+ log.debug('ProvisionOpenStack:destroy ' + name_or_id)
+ if not self.exists(name_or_id):
+ return True
+        # the attached volumes must be listed before the server is deleted
+        volumes = self.list_volumes(name_or_id)
+ misc.sh("openstack server delete --wait " + name_or_id)
+ for volume in volumes:
+ misc.sh("openstack volume delete " + volume)
+ return True
+
+
def create_if_vm(ctx, machine_name, _downburst=None):
"""
Use downburst to create a virtual machine
return False
os_type = get_distro(ctx)
os_version = get_distro_version(ctx)
+    if status_info.get('machine_type') == 'openstack':
+        return ProvisionOpenStack().create(
+            1, os_type, os_version, None, None)
has_config = hasattr(ctx, 'config') and ctx.config is not None
if has_config and 'downburst' in ctx.config:
log.error(msg.format(node=machine_name, desc_arg=description,
desc_lock=status_info['description']))
return False
+ if status_info.get('machine_type') == 'openstack':
+ return ProvisionOpenStack().destroy(
+ decanonicalize_hostname(machine_name))
+
dbrst = _downburst or Downburst(name=machine_name, os_type=None,
os_version=None, status=status_info)
return dbrst.destroy()
from teuthology.exceptions import SELinuxError
from teuthology.misc import get_archive_dir
from teuthology.orchestra.cluster import Cluster
+from teuthology.lockstatus import get_status
from . import Task
super(SELinux, self).filter_hosts()
new_cluster = Cluster()
for (remote, roles) in self.cluster.remotes.iteritems():
- if remote.shortname.startswith('vpm'):
- msg = "Excluding {host}: downburst VMs are not yet supported"
+ status_info = get_status(remote.name)
+ if status_info and status_info.get('is_vm', False):
+ msg = "Excluding {host}: VMs are not yet supported"
log.info(msg.format(host=remote.shortname))
elif remote.os.package_type == 'rpm':
new_cluster.add(remote, roles)
self.ctx = FakeNamespace()
self.ctx.config = dict()
- def test_host_exclusion(self):
+ @patch('teuthology.task.selinux.get_status')
+ def test_host_exclusion(self, mock_get_status):
+ mock_get_status.return_value = None
with patch.multiple(
Remote,
os=DEFAULT,
[tox]
-envlist = docs, py27, py27-integration, flake8
+envlist = docs, py27, py27-integration, flake8, openstack
[testenv:py27]
install_command = pip install --upgrade {opts} {packages}
pytest-cov==1.6
coverage==3.7.1
-commands=py.test --cov=teuthology --cov-report=term -v {posargs:teuthology scripts}
+commands=
+ py.test --cov=teuthology --cov-report=term -v {posargs:teuthology scripts}
[testenv:py27-integration]
install_command = pip install --upgrade {opts} {packages}
-passenv = HOME
+passenv = HOME OS_REGION_NAME OS_AUTH_URL OS_TENANT_ID OS_TENANT_NAME OS_PASSWORD OS_USERNAME
sitepackages=True
deps=
-r{toxinidir}/requirements.txt
commands=
sphinx-apidoc -f -o . ../teuthology ../teuthology/test ../teuthology/orchestra/test ../teuthology/task/test
sphinx-build -b html -d {envtmpdir}/doctrees . {envtmpdir}/html
+
+[testenv:openstack]
+install_command = pip install --upgrade {opts} {packages}
+passenv = HOME OS_REGION_NAME OS_AUTH_URL OS_TENANT_ID OS_TENANT_NAME OS_PASSWORD OS_USERNAME
+sitepackages=True
+deps=
+ -r{toxinidir}/requirements.txt
+ mock
+
+commands=py.test -v {posargs:teuthology/openstack/test/test_openstack.py}
+basepython=python2.7
+
+[testenv:openstack-integration]
+passenv = HOME OS_REGION_NAME OS_AUTH_URL OS_TENANT_ID OS_TENANT_NAME OS_PASSWORD OS_USERNAME
+basepython=python2.7
+sitepackages=True
+deps=
+ -r{toxinidir}/requirements.txt
+ mock
+
+commands=
+ py.test -v teuthology/openstack/test/openstack-integration.py