From 0f2489406a8d4194c159c8777517d20b54e216b5 Mon Sep 17 00:00:00 2001
From: Kefu Chai
Date: Fri, 4 Sep 2020 19:41:30 +0800
Subject: [PATCH] doc/dev: use prompt directive when appropriate

for two reasons:

* Sphinx renders code blocks with Python syntax highlighting by default,
  so it is not surprising that it highlights keywords like "export" in
  command-line samples. To render command-line code blocks correctly, we
  should specify the syntax explicitly.

* With the help of the "prompt" directive, the user can copy and paste
  the command without the prompt. For instance, with the default "::"
  directive, the user copies "$ ceph df", which is not very convenient,
  but with the "prompt" directive, the user copies only "ceph df".

Signed-off-by: Kefu Chai
---
 doc/dev/developer_guide/basic-workflow.rst    |  58 ++++++----
 doc/dev/developer_guide/essentials.rst        |  20 ++--
 .../running-tests-in-cloud.rst                |  44 ++++++--
 doc/dev/quick_guide.rst                       | 104 +++++++++----
 doc/rados/configuration/auth-config-ref.rst   |  56 ++++++----
 doc/rados/configuration/common.rst            |   8 +-
 6 files changed, 173 insertions(+), 117 deletions(-)

diff --git a/doc/dev/developer_guide/basic-workflow.rst b/doc/dev/developer_guide/basic-workflow.rst
index 6a302c2a036..446d4a3d5ad 100644
--- a/doc/dev/developer_guide/basic-workflow.rst
+++ b/doc/dev/developer_guide/basic-workflow.rst
@@ -76,9 +76,9 @@ detailed instructions on forking. In short, if your GitHub username is
 https://github.com/mygithubaccount/ceph. Once you have created your fork,
 you clone it by doing:

-.. code::
+.. prompt:: bash $

-    $ git clone https://github.com/mygithubaccount/ceph
+   git clone https://github.com/mygithubaccount/ceph

 While it is possible to clone the upstream repo directly, in this case you
 must fork it first. Forking is what enables us to open a `GitHub pull
@@ -96,10 +96,12 @@ copy of the ``master`` branch in ``remotes/origin/master``. Since the fork
 upstream repo (https://github.com/ceph/ceph.git, typically abbreviated to
 ``ceph/ceph.git``) is updated frequently by other developers, you will need
 to sync your fork periodically. To do this, first add the upstream repo as
-a "remote" and fetch it::
+a "remote" and fetch it:
+
+.. prompt:: bash $

-    $ git remote add ceph https://github.com/ceph/ceph.git
-    $ git fetch ceph
+   git remote add ceph https://github.com/ceph/ceph.git
+   git fetch ceph

 Fetching downloads all objects (commits, branches) that were added since
 the last sync. After running these commands, all the branches from
@@ -108,27 +110,31 @@ the last sync. After running these commands, all the branches from
 ``ceph/$BRANCH_NAME`` in certain git commands.

 For example, your local ``master`` branch can be reset to the upstream Ceph
-``master`` branch by doing::
+``master`` branch by doing:
+
+.. prompt:: bash $

-    $ git fetch ceph
-    $ git checkout master
-    $ git reset --hard ceph/master
+   git fetch ceph
+   git checkout master
+   git reset --hard ceph/master

 Finally, the ``master`` branch of your fork can then be synced to upstream
-master by::
+master by:
+
+.. prompt:: bash $

-    $ git push -u origin master
+   git push -u origin master

 Bugfix branch
 -------------

 Next, create a branch for the bugfix:

-.. code::
+.. prompt:: bash $

-    $ git checkout master
-    $ git checkout -b fix_1
-    $ git push -u origin fix_1
+   git checkout master
+   git checkout -b fix_1
+   git push -u origin fix_1

 This creates a ``fix_1`` branch locally and in our GitHub fork.
 At this point, the ``fix_1`` branch is identical to the ``master`` branch, but not
@@ -153,13 +159,17 @@ see the chapters on testing.

 For now, let us just assume that you have finished work on the bugfix and
 that you have tested it and believe it works. Commit the changes to your local
-branch using the ``--signoff`` option::
+branch using the ``--signoff`` option:

-    $ git commit -as
+.. prompt:: bash $

-and push the changes to your fork::
+   git commit -as

-    $ git push origin fix_1
+and push the changes to your fork:
+
+.. prompt:: bash $
+
+   git push origin fix_1

 GitHub pull request
 -------------------
@@ -280,9 +290,9 @@ history. See `this tutorial
 introduction to rebasing. When you are done with your modifications, you
 will need to force push your branch with:

-.. code::
+.. prompt:: bash $

-    $ git push --force origin fix_1
+   git push --force origin fix_1

 Merge
 -----
@@ -325,9 +335,11 @@ you can search the git log for their email address.
 Reviewers are likely to have committed something before, and as long as they
 have committed something, the git log will probably contain their email address.

-Use the following command::
+Use the following command:

-    [branch under review]$ git log
+.. prompt:: bash [branch-under-review]$
+
+   git log

 Using ptl-tool to Generate Merge Commits
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/doc/dev/developer_guide/essentials.rst b/doc/dev/developer_guide/essentials.rst
index 011b4d02786..a3427292504 100644
--- a/doc/dev/developer_guide/essentials.rst
+++ b/doc/dev/developer_guide/essentials.rst
@@ -161,25 +161,27 @@ See instructions at :doc:`/install/build-ceph`.

 Using ccache to speed up local builds
 -------------------------------------

 Rebuilds of the ceph source tree can benefit significantly from use of
 `ccache`_. Many a times while switching branches and such, one might see
 build failures for certain older branches mostly due to older build
 artifacts. These rebuilds can significantly benefit the use of ccache. For
 a full clean source tree, one
-could do ::
+could do:
+
+.. code-block:: console

   $ make clean
-
   # note the following will nuke everything in the source tree that
   # isn't tracked by git, so make sure to backup any log files /conf options
-
   $ git clean -fdx; git submodule foreach git clean -fdx

 ccache is available as a package in most distros. To build ceph with ccache
-one can::
+one can:

-  $ cmake -DWITH_CCACHE=ON ..
+.. prompt:: bash $
+
+   cmake -DWITH_CCACHE=ON ..

 ccache can also be used for speeding up all builds in the system. for more
 details refer to the `run modes`_ of the ccache manual. The default settings
@@ -202,10 +204,12 @@ configuration file ``ccache.conf``::

 Now, set the environment variable ``SOURCE_DATE_EPOCH`` to a fixed value (a
 UNIX timestamp) and set ``ENABLE_GIT_VERSION`` to ``OFF`` when running
-``cmake``::
+``cmake``:
+
+.. prompt:: bash $

-  $ export SOURCE_DATE_EPOCH=946684800
-  $ cmake -DWITH_CCACHE=ON -DENABLE_GIT_VERSION=OFF ..
+   export SOURCE_DATE_EPOCH=946684800
+   cmake -DWITH_CCACHE=ON -DENABLE_GIT_VERSION=OFF ..

 .. note:: Binaries produced with these build options are not suitable for
    production or debugging purposes, as they do not contain the correct build
diff --git a/doc/dev/developer_guide/running-tests-in-cloud.rst b/doc/dev/developer_guide/running-tests-in-cloud.rst
index 5216107a9cd..60118aefdb8 100644
--- a/doc/dev/developer_guide/running-tests-in-cloud.rst
+++ b/doc/dev/developer_guide/running-tests-in-cloud.rst
@@ -68,6 +68,8 @@ Third, edit the file so it does not ask for your OpenStack password
 interactively. Comment out the relevant lines and replace them with
-something like::
+something like:

+.. prompt:: bash $
+
    export OS_PASSWORD="aiVeth0aejee3eep8rogho3eep7Pha6ek"

 When ``ceph-workbench ceph-qa-suite`` connects to your OpenStack tenant for
@@ -82,9 +84,11 @@ Run the dummy suite
 -------------------

 You are now ready to take your OpenStack teuthology setup for a test
-drive::
+drive:
+
+.. prompt:: bash $

-    $ ceph-workbench ceph-qa-suite --suite dummy
+   ceph-workbench ceph-qa-suite --suite dummy

 Be forewarned that the first run of ``ceph-workbench ceph-qa-suite`` on a
 pristine tenant will take a long time to complete because it downloads a VM
@@ -106,9 +110,11 @@ What this means is that ``ceph-workbench ceph-qa-suite`` triggered the test
 suite run. It does not mean that the suite run has completed. To monitor
 progress of the run, check the Pulpito web interface URL periodically, or
 if you are impatient, ssh to the teuthology machine using the ssh command
-shown and do::
+shown and do:
+
+.. prompt:: bash $

-    $ tail -f /var/log/teuthology.*
+   tail -f /var/log/teuthology.*

 The `/usr/share/nginx/html` directory contains the complete logs of the
 test suite. If we had provided the ``--upload`` option to the
@@ -119,9 +125,11 @@ Run a standalone test
 ---------------------

 The standalone test explained in `Reading a standalone test`_ can be run
-with the following command::
+with the following command:
+
+.. prompt:: bash $

-    $ ceph-workbench ceph-qa-suite --suite rados/singleton/all/admin-socket.yaml
+   ceph-workbench ceph-qa-suite --suite rados/singleton/all/admin-socket.yaml

 This will run the suite shown on the current ``master`` branch of
 ``ceph/ceph.git``. You can specify a different branch with the ``--ceph``
@@ -140,7 +148,9 @@ Interrupt a running suite
 Teuthology suites take time to run. From time to time one may wish to
-interrupt a running suite. One obvious way to do this is::
+interrupt a running suite. One obvious way to do this is:
+
+.. prompt:: bash $

-    ceph-workbench ceph-qa-suite --teardown
+   ceph-workbench ceph-qa-suite --teardown

 This destroys all VMs created by ``ceph-workbench ceph-qa-suite`` and
 returns the OpenStack tenant to a "clean slate".
@@ -149,9 +159,11 @@ Sometimes you may wish to interrupt the running suite, but keep the logs,
 the teuthology VM, the packages-repository VM, etc. To do this, you can
 ``ssh`` to the teuthology VM (using the ``ssh access`` command reported
 when you triggered the suite -- see `Run the dummy suite`_) and, once
-there::
+there:
+
+.. prompt:: bash $

-    sudo /etc/init.d/teuthology restart
+   sudo /etc/init.d/teuthology restart

 This will keep the teuthology machine, the logs and the
 packages-repository instance but nuke everything else.
@@ -182,6 +194,8 @@ Provision VMs ad hoc
 From the teuthology VM, it is possible to provision machines on an "ad hoc"
-basis, to use however you like. The magic incantation is::
+basis, to use however you like. The magic incantation is:
+
+.. prompt:: bash $

    teuthology-lock --lock-many $NUMBER_OF_MACHINES \
       --os-type $OPERATING_SYSTEM \
       --os-version $OS_VERSION \
@@ -190,16 +204,22 @@ basis, to use however you like. The magic incantation is::
 The command must be issued from the ``~/teuthology`` directory. The possible
 values for ``OPERATING_SYSTEM`` AND ``OS_VERSION`` can be found by examining
-the contents of the directory ``teuthology/openstack/``. For example::
+the contents of the directory ``teuthology/openstack/``. For example:
+
+.. prompt:: bash $

    teuthology-lock --lock-many 1 --os-type ubuntu --os-version 16.04 \
       --machine-type openstack --owner foo@example.com

-When you are finished with the machine, find it in the list of machines::
+When you are finished with the machine, find it in the list of machines:
+
+.. prompt:: bash $

    openstack server list

-to determine the name or ID, and then terminate it with::
+to determine the name or ID, and then terminate it with:
+
+.. prompt:: bash $

    openstack server delete $NAME_OR_ID
diff --git a/doc/dev/quick_guide.rst b/doc/dev/quick_guide.rst
index 873a86e02f9..8e27dfb4b71 100644
--- a/doc/dev/quick_guide.rst
+++ b/doc/dev/quick_guide.rst
@@ -11,19 +11,19 @@ The ``run-make-check.sh`` script will install Ceph dependencies, compile
 everything in debug mode and run a number of tests to verify the result
 behaves as expected.

-.. code::
+.. prompt:: bash $

-    $ ./run-make-check.sh
+   ./run-make-check.sh

 Optionally if you want to work on a specific component of Ceph, install the
 dependencies and build Ceph in debug mode with required cmake flags.

 Example:

-.. code::
+.. prompt:: bash $

-    $ ./install-deps.sh
-    $ ./do_cmake.sh -DWITH_MANPAGE=OFF -DWITH_BABELTRACE=OFF -DWITH_MGR_DASHBOARD_FRONTEND=OFF
+   ./install-deps.sh
+   ./do_cmake.sh -DWITH_MANPAGE=OFF -DWITH_BABELTRACE=OFF -DWITH_MGR_DASHBOARD_FRONTEND=OFF

 Running a development deployment
 --------------------------------
@@ -31,71 +31,71 @@ Ceph contains a script called ``vstart.sh`` (see also :doc:`/dev/dev_cluster_dep
 a simple deployment on your development system. Once the build finishes
 successfully, start the ceph deployment using the following command:

-.. code::
+.. prompt:: bash $

-    $ cd ceph/build  # Assuming this is where you ran cmake
-    $ make vstart
-    $ ../src/vstart.sh -d -n -x
+   cd ceph/build  # Assuming this is where you ran cmake
+   make vstart
+   ../src/vstart.sh -d -n -x

 You can also configure ``vstart.sh`` to use only one monitor and one
 metadata server by using the following:

-.. code::
+.. prompt:: bash $

-    $ MON=1 MDS=1 ../src/vstart.sh -d -n -x
+   MON=1 MDS=1 ../src/vstart.sh -d -n -x

 The system creates two pools on startup: `cephfs_data_a` and `cephfs_metadata_a`.
 Let's get some stats on the current pools:

-.. code::
+.. code-block:: console

-    $ bin/ceph osd pool stats
-    *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
-    pool cephfs_data_a id 1
-      nothing is going on
-
-    pool cephfs_metadata_a id 2
-      nothing is going on
-
-    $ bin/ceph osd pool stats cephfs_data_a
-    *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
-    pool cephfs_data_a id 1
-      nothing is going on
-
-    $ bin/rados df
-    POOL_NAME         USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS   WR
-    cephfs_data_a        0       0      0      0                  0       0        0      0  0      0    0
-    cephfs_metadata_a 2246      21      0     63                  0       0        0      0  0     42 8192
-
-    total_objects    21
-    total_used       244G
-    total_space      1180G
+   $ bin/ceph osd pool stats
+   *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
+   pool cephfs_data_a id 1
+     nothing is going on
+
+   pool cephfs_metadata_a id 2
+     nothing is going on
+
+   $ bin/ceph osd pool stats cephfs_data_a
+   *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
+   pool cephfs_data_a id 1
+     nothing is going on
+
+   $ bin/rados df
+   POOL_NAME         USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS   WR
+   cephfs_data_a        0       0      0      0                  0       0        0      0  0      0    0
+   cephfs_metadata_a 2246      21      0     63                  0       0        0      0  0     42 8192
+
+   total_objects    21
+   total_used       244G
+   total_space      1180G

 Make a pool and run some benchmarks against it:

-.. code::
+.. prompt:: bash $

-    $ bin/ceph osd pool create mypool
-    $ bin/rados -p mypool bench 10 write -b 123
+   bin/ceph osd pool create mypool
+   bin/rados -p mypool bench 10 write -b 123

 Place a file into the new pool:

-.. code::
+.. prompt:: bash $

-    $ bin/rados -p mypool put objectone
-    $ bin/rados -p mypool put objecttwo
+   bin/rados -p mypool put objectone
+   bin/rados -p mypool put objecttwo

 List the objects in the pool:

-.. code::
+.. prompt:: bash $

-    $ bin/rados -p mypool ls
+   bin/rados -p mypool ls

 Once you are done, type the following to stop the development ceph
 deployment:

-.. code::
+.. prompt:: bash $

-    $ ../src/stop.sh
+   ../src/stop.sh

 Resetting your vstart environment
 ---------------------------------
@@ -104,29 +104,29 @@ The vstart script creates out/ and dev/ directories which contain
 the cluster's state. If you want to quickly reset your environment,
 you might do something like this:

-.. code::
+.. prompt:: bash [build]$

-    [build]$ ../src/stop.sh
-    [build]$ rm -rf out dev
-    [build]$ MDS=1 MON=1 OSD=3 ../src/vstart.sh -n -d
+   ../src/stop.sh
+   rm -rf out dev
+   MDS=1 MON=1 OSD=3 ../src/vstart.sh -n -d

 Running a RadosGW development environment
 -----------------------------------------

 Set the ``RGW`` environment variable when running vstart.sh to enable the
 RadosGW.

-.. code::
+.. prompt:: bash $

-    $ cd build
-    $ RGW=1 ../src/vstart.sh -d -n -x
+   cd build
+   RGW=1 ../src/vstart.sh -d -n -x

 You can now use the swift python client to communicate with the RadosGW.

-.. code::
+.. prompt:: bash $

-    $ swift -A http://localhost:8000/auth -U test:tester -K testing list
-    $ swift -A http://localhost:8000/auth -U test:tester -K testing upload mycontainer ceph
-    $ swift -A http://localhost:8000/auth -U test:tester -K testing list
+   swift -A http://localhost:8000/auth -U test:tester -K testing list
+   swift -A http://localhost:8000/auth -U test:tester -K testing upload mycontainer ceph
+   swift -A http://localhost:8000/auth -U test:tester -K testing list

 Run unit tests
@@ -134,7 +134,7 @@ Run unit tests
 ==============

 The tests are located in `src/tests`. To run them type:

-.. code::
+.. prompt:: bash $

-    $ make check
+   make check
diff --git a/doc/rados/configuration/auth-config-ref.rst b/doc/rados/configuration/auth-config-ref.rst
index 23e475ddce7..829a84761a1 100644
--- a/doc/rados/configuration/auth-config-ref.rst
+++ b/doc/rados/configuration/auth-config-ref.rst
@@ -57,43 +57,57 @@ authentication disabled. If you (or your deployment utility) have already
 generated the keys, you may skip the steps related to generating keys.

 #. Create a ``client.admin`` key, and save a copy of the key for your client
-   host::
+   host:

-     ceph auth get-or-create client.admin mon 'allow *' mds 'allow *' mgr 'allow *' osd 'allow *' -o /etc/ceph/ceph.client.admin.keyring
+   .. prompt:: bash $
+
+      ceph auth get-or-create client.admin mon 'allow *' mds 'allow *' mgr 'allow *' osd 'allow *' -o /etc/ceph/ceph.client.admin.keyring

    **Warning:** This will clobber any existing
    ``/etc/ceph/client.admin.keyring`` file. Do not perform this step if a
    deployment tool has already done it for you. Be careful!

 #. Create a keyring for your monitor cluster and generate a monitor
-   secret key. ::
+   secret key.

-     ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
+   .. prompt:: bash $
+
+      ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'

 #. Copy the monitor keyring into a ``ceph.mon.keyring`` file in every monitor's
    ``mon data`` directory. For example, to copy it to ``mon.a`` in cluster ``ceph``,
-   use the following::
+   use the following:

-     cp /tmp/ceph.mon.keyring /var/lib/ceph/mon/ceph-a/keyring
+   .. prompt:: bash $
+
+      cp /tmp/ceph.mon.keyring /var/lib/ceph/mon/ceph-a/keyring

-#. Generate a secret key for every MGR, where ``{$id}`` is the MGR letter::
+#. Generate a secret key for every MGR, where ``{$id}`` is the MGR letter:

-     ceph auth get-or-create mgr.{$id} mon 'allow profile mgr' mds 'allow *' osd 'allow *' -o /var/lib/ceph/mgr/ceph-{$id}/keyring
+   .. prompt:: bash $
+
+      ceph auth get-or-create mgr.{$id} mon 'allow profile mgr' mds 'allow *' osd 'allow *' -o /var/lib/ceph/mgr/ceph-{$id}/keyring

-#. Generate a secret key for every OSD, where ``{$id}`` is the OSD number::
+#. Generate a secret key for every OSD, where ``{$id}`` is the OSD number:

-     ceph auth get-or-create osd.{$id} mon 'allow rwx' osd 'allow *' -o /var/lib/ceph/osd/ceph-{$id}/keyring
+   .. prompt:: bash $
+
+      ceph auth get-or-create osd.{$id} mon 'allow rwx' osd 'allow *' -o /var/lib/ceph/osd/ceph-{$id}/keyring

-#. Generate a secret key for every MDS, where ``{$id}`` is the MDS letter::
+#. Generate a secret key for every MDS, where ``{$id}`` is the MDS letter:

-     ceph auth get-or-create mds.{$id} mon 'allow rwx' osd 'allow *' mds 'allow *' mgr 'allow profile mds' -o /var/lib/ceph/mds/ceph-{$id}/keyring
+   .. prompt:: bash $
+
+      ceph auth get-or-create mds.{$id} mon 'allow rwx' osd 'allow *' mds 'allow *' mgr 'allow profile mds' -o /var/lib/ceph/mds/ceph-{$id}/keyring

 #. Enable ``cephx`` authentication by setting the following options in the
-   ``[global]`` section of your `Ceph configuration`_ file::
+   ``[global]`` section of your `Ceph configuration`_ file:

-     auth cluster required = cephx
-     auth service required = cephx
-     auth client required = cephx
+   .. code-block:: ini
+
+      auth cluster required = cephx
+      auth service required = cephx
+      auth client required = cephx

 #. Start or restart the Ceph cluster. See `Operating a Cluster`_ for details.

@@ -111,11 +125,13 @@ running authentication.
 **We do not recommend it.** However, it may be easier
 during setup and/or troubleshooting to temporarily disable authentication.

 #. Disable ``cephx`` authentication by setting the following options in the
-   ``[global]`` section of your `Ceph configuration`_ file::
+   ``[global]`` section of your `Ceph configuration`_ file:

-     auth cluster required = none
-     auth service required = none
-     auth client required = none
+   .. code-block:: ini
+
+      auth cluster required = none
+      auth service required = none
+      auth client required = none

 #. Start or restart the Ceph cluster. See `Operating a Cluster`_ for details.
diff --git a/doc/rados/configuration/common.rst b/doc/rados/configuration/common.rst
index 8903406422d..84608121926 100644
--- a/doc/rados/configuration/common.rst
+++ b/doc/rados/configuration/common.rst
@@ -81,7 +81,9 @@ Authentication
 .. versionadded:: Bobtail 0.56

 For Bobtail (v 0.56) and beyond, you should expressly enable or disable
-authentication in the ``[global]`` section of your Ceph configuration file. ::
+authentication in the ``[global]`` section of your Ceph configuration file.
+
+.. code-block:: ini

    auth cluster required = cephx
    auth service required = cephx
@@ -125,7 +127,7 @@ foregoing directory would evaluate to::
 You may override this path using the ``osd data`` setting. We don't recommend
 changing the default location. Create the default directory on your OSD host.

-::
+.. prompt:: bash $

    ssh {osd-host}
    sudo mkdir /var/lib/ceph/osd/ceph-{osd-number}
@@ -135,6 +137,8 @@ separate from the hard disk storing and running the operating system and
 daemons. If the OSD is for a disk other than the OS disk, prepare it for
-use with Ceph, and mount it to the directory you just created::
+use with Ceph, and mount it to the directory you just created:
+
+.. prompt:: bash $

    ssh {new-osd-host}
    sudo mkfs -t {fstype} /dev/{disk}
    sudo mount -o user_xattr /dev/{hdd} /var/lib/ceph/osd/ceph-{osd-number}
-- 
2.39.5
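
As a quick reference for reviewers, here is a minimal sketch of the two
forms side by side. It assumes the ``sphinx-prompt`` extension, which
provides the ``prompt`` directive used throughout this patch, is enabled in
the docs' Sphinx ``conf.py``:

.. code-block:: rst

   With the default literal block, the prompt is part of the copied text::

       $ ceph df

   With the prompt directive, the "$" is rendered as a prompt prefix:

   .. prompt:: bash $

      ceph df

   A custom prompt string can be passed as the second argument:

   .. prompt:: bash [build]$

      ../src/vstart.sh -d -n -x

With the ``::`` form, selecting the sample picks up ``$ ceph df``; with
``prompt``, the ``$`` sits outside the selectable text, so only ``ceph df``
is copied, which is the behavior the commit message describes.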