Test Suites
===========
Each suite name is determined by the name of the directory in ``ceph-qa-suite``
that contains that suite. The directory contains subdirectories and yaml files,
which, when assembled, produce valid tests that can be run. The test suite
application generates combinations of these files and thus ends up running a
set of tests based on the data in the suite's directory.
For example, consider::

    teuthology-suite -s rbd -c wip-fix -k jewel -e bob.smith@foo.com -f basic -t jewel -m mira

The above command runs the rbd suite using the wip-fix branch of ceph, the
jewel kernel, with a 'basic' kernel flavor, and the teuthology jewel branch
will be used. It will run on mira machines and send an email to
bob.smith@foo.com when it's completed. For more details on
``teuthology-suite``, please consult the output of ``teuthology-suite --help``.
In order for a queued task to be run, a teuthworker thread on a worker host
(or a machine behaving like one) must pick it up from the queue.
Sub-tasks can nest further information. For example, overrides
of install tasks are project specific, so the following section of a yaml
file would cause all ceph installations to default to using the jewel
branch::

    overrides:
      install:
        ceph:
          branch: jewel

* ``workunit``: workunits are a way of grouping tasks and behavior on targets.
* ``sequential``: group the sub-tasks into a unit where the sub-tasks run
  sequentially.
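
A sketch of how these tasks compose in a job's yaml; the workunit script
path here is illustrative, not a guaranteed entry in the repository::

    tasks:
    - install:
    - ceph:
    - sequential:
      - workunit:
          clients:
            client.0:
              - suites/bonnie.sh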
A user who wants to define a test for a new feature can implement new tasks
in this directory.
Many of these tasks are used to run Python scripts that are defined in
``ceph/ceph-qa-suite``.
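
A new task is implemented as a Python module exposing a ``task(ctx, config)``
entry point, which teuthology calls with the run context and the yaml config
for that task entry. A minimal sketch, assuming a hypothetical module name
(``hello.py``) and a hypothetical ``message`` config key:

```python
# Hypothetical teuthology/task/hello.py -- a minimal custom task sketch.
# teuthology calls task(ctx, config) with the run context and the yaml
# config supplied for this task entry in the job file.
import logging

log = logging.getLogger(__name__)


def task(ctx, config):
    """Log a configurable message for every remote in the run."""
    if config is None:
        config = {}
    # 'message' is an illustrative config key, not part of any real task.
    message = config.get('message', 'hello')
    for remote in ctx.cluster.remotes:
        log.info('%s: %s', remote.name, message)
```

The job yaml would then reference the task by module name, e.g.
``- hello: {message: hi}`` under ``tasks``.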
If machines were locked as part of the run (with the ``--lock`` switch),
teuthology normally leaves them locked when a task fails, so that the machine
state can be investigated.
VPS Hosts
---------
The following description is based on the Red Hat lab used by the upstream
Ceph development and quality assurance teams.
The teuthology database of available machines contains a ``vpshost`` field.
When a virtual machine is unlocked, downburst destroys the image on the
machine.
To find the downburst executable, teuthology first checks the PATH environment
variable. If it is not found there, teuthology next checks for
``src/downburst/virtualenv/bin/downburst`` executables in the user's home
directory.
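
The lookup order above can be sketched in shell (the paths are the ones named
in the text; adjust for your environment):

```shell
# Sketch of the lookup order described above: prefer a downburst found on
# PATH, otherwise fall back to the in-tree virtualenv copy under $HOME.
if command -v downburst >/dev/null 2>&1; then
    DOWNBURST=$(command -v downburst)
else
    DOWNBURST="$HOME/src/downburst/virtualenv/bin/downburst"
fi
echo "using: $DOWNBURST"
```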
In the abstract, each set of tests is defined by a `suite`. All of our suites
live in the `ceph-qa-suite repository
<https://github.com/ceph/ceph-qa-suite/>`__, in the `suites` subdirectory. Each
subdirectory in `suites` is a suite; they may also have "sub-suites" which may
aid in scheduling, for example, tests for a specific feature.
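
As a purely illustrative sketch (these directory names are hypothetical, not
an actual listing of the repository), a suite with a sub-suite might be laid
out like this::

    suites/
        rbd/            # the "rbd" suite
            basic/      # a sub-suite, schedulable as rbd/basic
                clusters/
                tasks/

Scheduling with ``-s rbd`` picks up everything under ``suites/rbd``, while
``-s rbd/basic`` restricts the run to the sub-suite.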
In concrete terms, a `run` is what is created by assembling the contents of a
`suite` into a number of `jobs`. A `job` is created by assembling a number of
yaml fragments. To see what a run would look like without actually scheduling
anything, pass ``--dry-run``::

    teuthology-suite -v -s smoke -c master -m mira --dry-run

The `-m mira` specifies `mira` as the machine type. Machine types are dependent
on the specific lab in use.

Assuming a build is available, that should pretend to schedule several jobs. If
it complains about missing packages, try swapping `master` with `jewel` or one
of the other Ceph stable branches.