From 2d9cb1ffb65470a1ffaeb2efaaea9b487e5c8090 Mon Sep 17 00:00:00 2001
From: Alfredo Deza
Date: Wed, 24 Jul 2013 08:43:04 -0400
Subject: [PATCH] fix RST formatting issues in README file

Signed-off-by: Alfredo Deza
---
 README.rst | 138 ++++++++++++++++++++++++++++------------------------
 1 file changed, 72 insertions(+), 66 deletions(-)

diff --git a/README.rst b/README.rst
index 8b9b53afa72f2..6a24e50ac1e1e 100644
--- a/README.rst
+++ b/README.rst
@@ -67,7 +67,7 @@ the nodes & use the live cluster ad hoc), might look like this::
 - [mon.1, osd.1]
 - [mon.2, client.0]
 targets:
-  ubuntu@host07.example.com: ssh-rsa host07_ssh_key 
+  ubuntu@host07.example.com: ssh-rsa host07_ssh_key
   ubuntu@host08.example.com: ssh-rsa host08_ssh_key
   ubuntu@host09.example.com: ssh-rsa host09_ssh_key
 tasks:
@@ -82,7 +82,7 @@ Note the colon after every task name in the ``tasks`` section.
 You need to be able to SSH in to the listed targets without
 passphrases, and the remote user needs to have passphraseless `sudo`
 access. Note that the ssh keys at the end of the ``targets``
-entries are the public ssh keys for the hosts. 
+entries are the public ssh keys for the hosts. On Ubuntu, these are located at ``/etc/ssh/ssh_host_rsa_key.pub``.
 
 If you were to save the above file as ``example.yaml``, you could run
@@ -171,33 +171,35 @@ they use ``yield``.
 Further details on some of the more complex tasks such as install or
 workunit can be obtained via python help. For example::
 
->>> import teuthology.task.workunit
->>> help(teuthology.task.workunit)
+    >>> import teuthology.task.workunit
+    >>> help(teuthology.task.workunit)
 
 displays a page of more documentation and more concrete examples.
 
 Some of the more important / commonly used tasks include:
 
-* chef -- Run the chef task.
-* ceph -- Bring up Ceph
-* install -- by default, the install task goes to gitbuilder and installs the results of the latest build. You can, however, add additional parameters to the test configuration to cause it to install any branch, SHA, archive or URL. The following are valid parameters.
+* ``chef``: Run the chef task.
+* ``ceph``: Bring up Ceph.
+* ``install``: by default, the install task goes to gitbuilder and installs the results of the latest build. You can, however, add additional parameters to the test configuration to cause it to install any branch, SHA, archive or URL. The following are valid parameters.
 
- - branch -- specify a branch (bobtail, cuttlefish...)
- - flavor -- specify a flavor (next, unstable...). Flavors can be thought of as subsets of branches. Sometimes (unstable, for example) they may have a predefined meaning.
- - project -- specify a project (ceph, samba...)
- - sha1 -- install the build with this sha1 value.
- - tag -- specify a tag/identifying text for this build (v47.2, v48.1...)
+- ``branch``: specify a branch (bobtail, cuttlefish...)
+- ``flavor``: specify a flavor (next, unstable...). Flavors can be thought of as
+  subsets of branches. Sometimes (unstable, for example) they may have
+  a predefined meaning.
+- ``project``: specify a project (ceph, samba...)
+- ``sha1``: install the build with this sha1 value.
+- ``tag``: specify a tag/identifying text for this build (v47.2, v48.1...)
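+
+As a minimal sketch (the ``project`` and ``branch`` values here are purely
+illustrative, not defaults), an ``install`` task using a couple of these
+parameters might look like::
+
+    tasks:
+    - install:
+        project: ceph
+        branch: cuttlefish
+    - ceph: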
 
-* overrides -- override behavior. Typically, this includes sub-tasks being overridden. Sub-tasks can nest further information. For example, overrides of install tasks are project specific, so the following section of a yaml file would cause all ceph installation to default into using the cuttlefish branch::
+* ``overrides``: override behavior. Typically, this includes sub-tasks being overridden. Sub-tasks can nest further information. For example, overrides of install tasks are project specific, so the following section of a yaml file would cause all ceph installations to default to using the cuttlefish branch::
 
     overrides:
       install:
        ceph:
          branch: cuttlefish
 
-* workunit -- workunits are a way of grouping tasks and behavior on targets.
-* sequential -- group the sub-tasks into a unit where the sub-tasks run sequentially as listed.
-* parallel -- group the sub-tasks into a unit where the sub-task all run in parallel.
+* ``workunit``: workunits are a way of grouping tasks and behavior on targets.
+* ``sequential``: group the sub-tasks into a unit where the sub-tasks run sequentially as listed.
+* ``parallel``: group the sub-tasks into a unit where the sub-tasks all run in parallel.
 
 Sequential and parallel tasks can be nested. Tasks run sequentially if not
 specified.
 
@@ -232,10 +234,11 @@ Test Sandbox Directory
 ======================
 
 Teuthology currently places most test files and mount points in a sandbox
-directory, defaulting to /tmp/cephtest/{rundir}. The {rundir} is the name
-of the run (as given by --name) or if no name is specified, user@host-timestamp
-is used. To change the location of the sandbox directory, the following
-options can be specified in $HOME/.teuthology.yaml:
+directory, defaulting to ``/tmp/cephtest/{rundir}``. The ``{rundir}`` is the
+name of the run (as given by ``--name``) or if no name is specified,
+``user@host-timestamp`` is used. To change the location of the sandbox
+directory, the following options can be specified in
+``$HOME/.teuthology.yaml``::
 
     base_test_dir:
 
@@ -267,7 +270,7 @@ There are fixed "slots" for virtual machines that appear in the teuthology
 database. These slots have a machine type of vps and can be locked like
 any other machine. The existence of a vpshost field is how teuthology
 knows whether or not a database entry represents a physical or a virtual
-machine. 
+machine.
 
 The following needs to be set in ~/.libvirt/libvirt.conf in order to get the
 right virtual machine associations for the Inktank lab::
 
@@ -305,48 +308,49 @@ right virtual machine associations for the Inktank lab::
 DOWNBURST:
 ----------
 
-When a virtual machine is locked, downburst is run on that machine to 
+When a virtual machine is locked, downburst is run on that machine to
 install a new image. This allows the user to set different virtual OSes
 to be installed on the newly created virtual machine. Currently the
 default virtual machine is ubuntu (precise). A different vm installation
-can be set using the --vm-type option in teuthology.lock.
+can be set using the ``--vm-type`` option in ``teuthology.lock``.
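+
+As a hypothetical example (the machine count and distro here are
+illustrative, and the exact option forms are an assumption about this
+version of the tool), such a lock request might look like::
+
+    teuthology-lock --lock-many 2 --vm-type centos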
 
 When a virtual machine is unlocked, downburst destroys the image on the
 machine.
 
 Temporary yaml files are used to downburst a virtual machine. A typical
-yaml file will look like this:
+yaml file will look like this::
 
-downburst:
-  cpus: 1
-  disk-size: 30G
-  distro: centos
-  networks:
-    - {source: front}
-  ram: 4G
+    downburst:
+      cpus: 1
+      disk-size: 30G
+      distro: centos
+      networks:
+        - {source: front}
+      ram: 4G
 
-These values are used by downburst to create the virtual machine. 
+These values are used by downburst to create the virtual machine.
 
 HOST KEYS:
 ----------
 
-Because teuthology reinstalls a new machine, a new hostkey is generated.
-After locking, once a connection is established to the new machine,
-teuthology-lock with the --list or --list-targets options will display
-the new keys. When vps machines are locked using the --lock-many option,
-a message is displayed indicating that --list-targets should be run later.
+Because teuthology reinstalls a new machine, a new hostkey is generated. After
+locking, once a connection is established to the new machine,
+``teuthology-lock`` with the ``--list`` or ``--list-targets`` options will
+display the new keys. When vps machines are locked using the ``--lock-many``
+option, a message is displayed indicating that ``--list-targets`` should be run
+later.
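+
+For instance, a minimal sketch of collecting the refreshed keys after locking
+(the output will vary with the machines you hold) might be::
+
+    teuthology-lock --list-targets
+
+the output of which is yaml that can be pasted into the ``targets`` section of
+a job file.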
 
 CEPH-QA-CHEF:
 -------------
 
 Once teuthology starts after a new vm is installed, teuthology
-checks for the existence of /ceph-qa-ready. If this file is not
-present, ceph-qa-chef is run when teuthology first comes up.
+checks for the existence of ``/ceph-qa-ready``. If this file is not
+present, ``ceph-qa-chef`` is run when teuthology first comes up.
 
 ASSUMPTIONS:
 ------------
 
-It is assumed that downburst is on the user's PATH.
+It is assumed that downburst is on the user's ``$PATH``.
 
 
 Test Suites
@@ -355,12 +359,12 @@
 Most of the current teuthology test suite execution scripts automatically
 download their tests from the master branch of the appropriate github
 repository. People who want to run experimental test suites usually modify
-the download method in the teuthology/task script to use some other branch
+the download method in the ``teuthology/task`` script to use some other branch
 or repository. This should be generalized in later teuthology releases.
 
-Teuthology QA suites can be found in src/ceph-qa-suite. Make sure that this
-directory exists in your source tree before running the test suites. 
+Teuthology QA suites can be found in ``src/ceph-qa-suite``. Make sure that this
+directory exists in your source tree before running the test suites.
 
-Each suite name is determined by the name of the directory in ceph-qa-suite
+Each suite name is determined by the name of the directory in ``ceph-qa-suite``
 that contains that suite. The directory contains subdirectories and yaml files,
 which, when assembled, produce valid tests that can be run. The test suite
 application generates combinations of these files and thus ends up running
@@ -372,14 +376,14 @@ To run a suite, enter::
 
 where:
 
-* *suite* -- the name of the suite (the directory in ceph-qa-suite).
-* *ceph* -- ceph branch to be used.
-* *kernel* -- version of the kernel to be used.
-* *email* -- email address to send the results to.
-* *flavor* -- flavor of the test
-* *teuth* -- version of teuthology to run
-* *mtype* -- machine type of the run
-* *templates* -- template file used for further modifying the suite (optional)
+* ``suite``: the name of the suite (the directory in ``ceph-qa-suite``).
+* ``ceph``: ceph branch to be used.
+* ``kernel``: version of the kernel to be used.
+* ``email``: email address to send the results to.
+* ``flavor``: flavor of the test.
+* ``teuth``: version of teuthology to run.
+* ``mtype``: machine type of the run.
+* ``templates``: template file used for further modifying the suite (optional).
 
 For example, consider::
 
@@ -390,13 +394,13 @@ a straight cuttlefish kernel, and the master flavor of cuttlefish teuthology.
 It will run on plana machines.
 
 In order for a queued task to be run, a teuthworker thread on
-teuthology.front.sepia.ceph.com needs to remove the task from the queue.
-On teuthology.front.sepia.ceph.com, run ``ps aux | grep teuthology-worker``
+``teuthology.front.sepia.ceph.com`` needs to remove the task from the queue.
+On ``teuthology.front.sepia.ceph.com``, run ``ps aux | grep teuthology-worker``
 to view currently running tasks. If no processes are reading from the test
 version that you are running, additional teuthworker tasks need to be started.
 To start these tasks:
-* copy your build tree to /home/teuthworker on teuthology.front.sepia.ceph.com.
-Give it a unique name (in this example, xxx)
+* copy your build tree to ``/home/teuthworker`` on ``teuthology.front.sepia.ceph.com``.
+* Give it a unique name (in this example, xxx)
 * start up some number of worker threads (as many as machines you are testing
 with; there are 60 running for the default queue)::
 
@@ -410,14 +414,16 @@
    own threads, or add to this file if you want your threads to be more
    permanent.
 
-Once the suite completes, an email message is sent to the users specified,
-and a large amount of information is left on teuthology.front.sepia.ceph.com
-in /var/lib/teuthworker/archive. This is symbolically linked to /a for
-convenience. A new directory is created whose name consists of a concatenation
-of the date and time that the suite was started, the name of the suite,
-the ceph branch tested, the kernel used, and the flavor. For every test run
-there is a directory whose name is the pid number of the pid of that test.
-Each of these directory contains a copy of the teuthology.log for that process.
-Other information from the suite is stored in files in the directory, and
-task-specific yaml files and other logs are saved in the subdirectories.
+Once the suite completes, an email message is sent to the users specified, and
+a large amount of information is left on ``teuthology.front.sepia.ceph.com`` in
+``/var/lib/teuthworker/archive``.
+
+This is symbolically linked to ``/a`` for convenience. A new directory is
+created whose name is a concatenation of the date and time that the suite was
+started, the name of the suite, the ceph branch tested, the kernel used, and
+the flavor (for example, a name along the lines of
+``2013-07-24_09:05:01-rbd-cuttlefish-testing-basic``). For every test run
+there is a directory whose name is the pid of that test. Each of these
+directories contains a copy of the ``teuthology.log`` for that process. Other
+information from the suite is stored in files in the directory, and
+task-specific yaml files and other logs are saved in the subdirectories.
-- 
2.39.5