https://github.com/mygithubaccount/ceph. Once you have created your fork,
you clone it by doing:
-.. code::
+.. prompt:: bash $
- $ git clone https://github.com/mygithubaccount/ceph
+ git clone https://github.com/mygithubaccount/ceph
While it is possible to clone the upstream repo directly, in this case you
must fork it first. Forking is what enables us to open a `GitHub pull
upstream repo (https://github.com/ceph/ceph.git, typically abbreviated to
``ceph/ceph.git``) is updated frequently by other developers, you will need
to sync your fork periodically. To do this, first add the upstream repo as
-a "remote" and fetch it::
+a "remote" and fetch it
- $ git remote add ceph https://github.com/ceph/ceph.git
- $ git fetch ceph
+.. prompt:: bash $
+
+ git remote add ceph https://github.com/ceph/ceph.git
+ git fetch ceph
Fetching downloads all objects (commits, branches) that were added since
the last sync. After running these commands, all the branches from
``ceph/$BRANCH_NAME`` in certain git commands.
For example, your local ``master`` branch can be reset to the upstream Ceph
-``master`` branch by doing::
+``master`` branch by doing:
+
+.. prompt:: bash $
- $ git fetch ceph
- $ git checkout master
- $ git reset --hard ceph/master
+ git fetch ceph
+ git checkout master
+ git reset --hard ceph/master
Finally, the ``master`` branch of your fork can then be synced to upstream
-master by::
+master by:
+
+.. prompt:: bash $
- $ git push -u origin master
+ git push -u origin master
Bugfix branch
-------------
Next, create a branch for the bugfix:
-.. code::
+.. prompt:: bash $
- $ git checkout master
- $ git checkout -b fix_1
- $ git push -u origin fix_1
+ git checkout master
+ git checkout -b fix_1
+ git push -u origin fix_1
This creates a ``fix_1`` branch locally and in our GitHub fork. At this
point, the ``fix_1`` branch is identical to the ``master`` branch, but not
For now, let us just assume that you have finished work on the bugfix and
that you have tested it and believe it works. Commit the changes to your local
-branch using the ``--signoff`` option::
+branch using the ``--signoff`` option:
- $ git commit -as
+.. prompt:: bash $
-and push the changes to your fork::
+ git commit -as
- $ git push origin fix_1
+and push the changes to your fork:
+
+.. prompt:: bash $
+
+ git push origin fix_1
GitHub pull request
-------------------
introduction to rebasing. When you are done with your modifications, you
will need to force push your branch with:
-.. code::
+.. prompt:: bash $
- $ git push --force origin fix_1
+ git push --force origin fix_1
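+
+For reference, the "modifications" referred to above are typically an
+interactive rebase onto the upstream branch, e.g. (a sketch, assuming the
+``ceph`` remote added earlier and an up-to-date upstream ``master``):
+
+.. prompt:: bash $
+
+   git fetch ceph
+   git rebase -i ceph/master
+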
Merge
-----
to have committed something before, and as long as they have committed
something, the git log will probably contain their email address.
-Use the following command::
+Use the following command:
+
+.. prompt:: bash [branch-under-review]$
- [branch under review]$ git log
+ git log
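+
+If you only need the addresses, a shorter variant (a sketch) prints just the
+author name and email of the most recent commits:
+
+.. prompt:: bash [branch-under-review]$
+
+   git log -10 --format='%an <%ae>'
+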
Using ptl-tool to Generate Merge Commits
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Using ccache to speed up local builds
-------------------------------------
Rebuilds of the ceph source tree can benefit significantly from use of
`ccache`_.
-Many a times while switching branches and such, one might see build failures
-for certain older branches mostly due to older build artifacts. These rebuilds
-can significantly benefit the use of ccache. For a full clean source tree, one
-could do ::
+Often, when switching between branches, one may see build failures on older
+branches, mostly caused by stale build artifacts. Such rebuilds benefit
+significantly from ccache. For a fully clean source tree, one could do:
+
+.. code-block:: console
$ make clean
-
# note the following will nuke everything in the source tree that
# isn't tracked by git, so make sure to backup any log files /conf options
-
$ git clean -fdx; git submodule foreach git clean -fdx
ccache is available as a package in most distros. To build ceph with ccache
-one can::
+one can:
- $ cmake -DWITH_CCACHE=ON ..
+.. prompt:: bash $
+
+ cmake -DWITH_CCACHE=ON ..
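+
+If cached rebuilds still seem slow, it can help to inspect the cache
+statistics and, if the cache keeps overflowing, raise its maximum size (the
+``25G`` figure below is only an illustrative choice):
+
+.. prompt:: bash $
+
+   ccache -s
+   ccache -M 25G
+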
-ccache can also be used for speeding up all builds in the system. for more
+ccache can also be used to speed up all builds in the system. For more
details refer to the `run modes`_ of the ccache manual. The default settings
Now, set the environment variable ``SOURCE_DATE_EPOCH`` to a fixed value (a
UNIX timestamp) and set ``ENABLE_GIT_VERSION`` to ``OFF`` when running
-``cmake``::
+``cmake``:
+
+.. prompt:: bash $
- $ export SOURCE_DATE_EPOCH=946684800
- $ cmake -DWITH_CCACHE=ON -DENABLE_GIT_VERSION=OFF ..
+ export SOURCE_DATE_EPOCH=946684800
+ cmake -DWITH_CCACHE=ON -DENABLE_GIT_VERSION=OFF ..
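+
+The value ``946684800`` above is simply 2000-01-01T00:00:00 UTC; any fixed
+timestamp will do. With GNU ``date``, such a value can be derived like so:
+
+.. prompt:: bash $
+
+   date -u -d 2000-01-01 +%s
+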
.. note:: Binaries produced with these build options are not suitable for
production or debugging purposes, as they do not contain the correct build
interactively. Comment out the relevant lines and replace them with
-something like::
+something like:
+.. prompt:: bash $
+
export OS_PASSWORD="aiVeth0aejee3eep8rogho3eep7Pha6ek"
When ``ceph-workbench ceph-qa-suite`` connects to your OpenStack tenant for
-------------------
You are now ready to take your OpenStack teuthology setup for a test
-drive::
+drive:
+
+.. prompt:: bash $
- $ ceph-workbench ceph-qa-suite --suite dummy
+ ceph-workbench ceph-qa-suite --suite dummy
Be forewarned that the first run of ``ceph-workbench ceph-qa-suite`` on a
pristine tenant will take a long time to complete because it downloads a VM
suite run. It does not mean that the suite run has completed. To monitor
progress of the run, check the Pulpito web interface URL periodically, or
if you are impatient, ssh to the teuthology machine using the ssh command
-shown and do::
+shown and do:
+
+.. prompt:: bash $
- $ tail -f /var/log/teuthology.*
+ tail -f /var/log/teuthology.*
The `/usr/share/nginx/html` directory contains the complete logs of the
test suite. If we had provided the ``--upload`` option to the
---------------------
The standalone test explained in `Reading a standalone test`_ can be run
-with the following command::
+with the following command:
- $ ceph-workbench ceph-qa-suite --suite rados/singleton/all/admin-socket.yaml
+.. prompt:: bash $
+
+ ceph-workbench ceph-qa-suite --suite rados/singleton/all/admin-socket.yaml
This will run the suite shown on the current ``master`` branch of
``ceph/ceph.git``. You can specify a different branch with the ``--ceph``
Teuthology suites take time to run. From time to time one may wish to
-interrupt a running suite. One obvious way to do this is::
+interrupt a running suite. One obvious way to do this is:
- ceph-workbench ceph-qa-suite --teardown
+.. prompt:: bash $
+
+ ceph-workbench ceph-qa-suite --teardown
This destroys all VMs created by ``ceph-workbench ceph-qa-suite`` and
returns the OpenStack tenant to a "clean slate".
the teuthology VM, the packages-repository VM, etc. To do this, you can
``ssh`` to the teuthology VM (using the ``ssh access`` command reported
when you triggered the suite -- see `Run the dummy suite`_) and, once
-there::
+there:
- sudo /etc/init.d/teuthology restart
+.. prompt:: bash $
+
+ sudo /etc/init.d/teuthology restart
This will keep the teuthology machine, the logs and the packages-repository
instance but nuke everything else.
From the teuthology VM, it is possible to provision machines on an "ad hoc"
-basis, to use however you like. The magic incantation is::
+basis, to use however you like. The magic incantation is:
+.. prompt:: bash $
+
teuthology-lock --lock-many $NUMBER_OF_MACHINES \
--os-type $OPERATING_SYSTEM \
--os-version $OS_VERSION \
The command must be issued from the ``~/teuthology`` directory. The possible
-values for ``OPERATING_SYSTEM`` AND ``OS_VERSION`` can be found by examining
-the contents of the directory ``teuthology/openstack/``. For example::
+values for ``OPERATING_SYSTEM`` and ``OS_VERSION`` can be found by examining
+the contents of the directory ``teuthology/openstack/``. For example:
+
+.. prompt:: bash $
teuthology-lock --lock-many 1 --os-type ubuntu --os-version 16.04 \
--machine-type openstack --owner foo@example.com
-When you are finished with the machine, find it in the list of machines::
+When you are finished with the machine, find it in the list of machines:
+
+.. prompt:: bash $
openstack server list
-to determine the name or ID, and then terminate it with::
+to determine the name or ID, and then terminate it with:
+
+.. prompt:: bash $
openstack server delete $NAME_OR_ID
compile everything in debug mode and run a number of tests to verify
the result behaves as expected.
-.. code::
+.. prompt:: bash $
- $ ./run-make-check.sh
+ ./run-make-check.sh
Optionally if you want to work on a specific component of Ceph,
install the dependencies and build Ceph in debug mode with required cmake flags.
Example:
-.. code::
+.. prompt:: bash $
- $ ./install-deps.sh
- $ ./do_cmake.sh -DWITH_MANPAGE=OFF -DWITH_BABELTRACE=OFF -DWITH_MGR_DASHBOARD_FRONTEND=OFF
+ ./install-deps.sh
+ ./do_cmake.sh -DWITH_MANPAGE=OFF -DWITH_BABELTRACE=OFF -DWITH_MGR_DASHBOARD_FRONTEND=OFF
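+
+Note that ``do_cmake.sh`` only configures the build tree; the compile step is
+then run from the ``build`` directory, for example (a generator-agnostic
+sketch):
+
+.. prompt:: bash $
+
+   cd build
+   cmake --build . -j $(nproc)
+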
Running a development deployment
--------------------------------
a simple deployment on your development system. Once the build finishes successfully, start the ceph
deployment using the following command:
-.. code::
+.. prompt:: bash $
- $ cd ceph/build # Assuming this is where you ran cmake
- $ make vstart
- $ ../src/vstart.sh -d -n -x
+ cd ceph/build # Assuming this is where you ran cmake
+ make vstart
+ ../src/vstart.sh -d -n -x
You can also configure ``vstart.sh`` to use only one monitor and one metadata server by using the following:
-.. code::
+.. prompt:: bash $
- $ MON=1 MDS=1 ../src/vstart.sh -d -n -x
+ MON=1 MDS=1 ../src/vstart.sh -d -n -x
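+
+Once ``vstart.sh`` returns, a quick way to confirm that the toy cluster is up
+is to query its status from the ``build`` directory:
+
+.. prompt:: bash $
+
+   bin/ceph -s
+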
The system creates two pools on startup: `cephfs_data_a` and `cephfs_metadata_a`. Let's get some stats on
the current pools:
-.. code::
+.. code-block:: console
- $ bin/ceph osd pool stats
- *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
- pool cephfs_data_a id 1
- nothing is going on
+ $ bin/ceph osd pool stats
+ *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
+ pool cephfs_data_a id 1
+ nothing is going on
- pool cephfs_metadata_a id 2
- nothing is going on
+ pool cephfs_metadata_a id 2
+ nothing is going on
- $ bin/ceph osd pool stats cephfs_data_a
- *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
- pool cephfs_data_a id 1
- nothing is going on
+ $ bin/ceph osd pool stats cephfs_data_a
+ *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
+ pool cephfs_data_a id 1
+ nothing is going on
- $ bin/rados df
- POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR
- cephfs_data_a 0 0 0 0 0 0 0 0 0 0 0
- cephfs_metadata_a 2246 21 0 63 0 0 0 0 0 42 8192
+ $ bin/rados df
+ POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR
+ cephfs_data_a 0 0 0 0 0 0 0 0 0 0 0
+ cephfs_metadata_a 2246 21 0 63 0 0 0 0 0 42 8192
- total_objects 21
- total_used 244G
- total_space 1180G
+ total_objects 21
+ total_used 244G
+ total_space 1180G
Make a pool and run some benchmarks against it:
-.. code::
+.. prompt:: bash $
- $ bin/ceph osd pool create mypool
- $ bin/rados -p mypool bench 10 write -b 123
+ bin/ceph osd pool create mypool
+ bin/rados -p mypool bench 10 write -b 123
Place a file into the new pool:
-.. code::
+.. prompt:: bash $
- $ bin/rados -p mypool put objectone <somefile>
- $ bin/rados -p mypool put objecttwo <anotherfile>
+ bin/rados -p mypool put objectone <somefile>
+ bin/rados -p mypool put objecttwo <anotherfile>
List the objects in the pool:
-.. code::
+.. prompt:: bash $
- $ bin/rados -p mypool ls
+ bin/rados -p mypool ls
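+
+To check that the data round-trips, an object can also be read back into a
+local file (the output path here is arbitrary):
+
+.. prompt:: bash $
+
+   bin/rados -p mypool get objectone /tmp/objectone.out
+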
Once you are done, type the following to stop the development ceph deployment:
-.. code::
+.. prompt:: bash $
- $ ../src/stop.sh
+ ../src/stop.sh
Resetting your vstart environment
---------------------------------
the cluster's state. If you want to quickly reset your environment,
you might do something like this:
-.. code::
+.. prompt:: bash [build]$
- [build]$ ../src/stop.sh
- [build]$ rm -rf out dev
- [build]$ MDS=1 MON=1 OSD=3 ../src/vstart.sh -n -d
+ ../src/stop.sh
+ rm -rf out dev
+ MDS=1 MON=1 OSD=3 ../src/vstart.sh -n -d
Running a RadosGW development environment
-----------------------------------------
Set the ``RGW`` environment variable when running vstart.sh to enable the RadosGW.
-.. code::
+.. prompt:: bash $
- $ cd build
- $ RGW=1 ../src/vstart.sh -d -n -x
+ cd build
+ RGW=1 ../src/vstart.sh -d -n -x
You can now use the swift python client to communicate with the RadosGW.
-.. code::
+.. prompt:: bash $
- $ swift -A http://localhost:8000/auth -U test:tester -K testing list
- $ swift -A http://localhost:8000/auth -U test:tester -K testing upload mycontainer ceph
- $ swift -A http://localhost:8000/auth -U test:tester -K testing list
+ swift -A http://localhost:8000/auth -U test:tester -K testing list
+ swift -A http://localhost:8000/auth -U test:tester -K testing upload mycontainer ceph
+ swift -A http://localhost:8000/auth -U test:tester -K testing list
Run unit tests
The tests are located in `src/tests`. To run them type:
-.. code::
+.. prompt:: bash $
- $ make check
+ make check
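+
+``make check`` runs the entire suite, which takes a while. To run a single
+registered test, one can instead invoke ``ctest`` from the ``build`` directory
+and select it by name (the test name below is only illustrative):
+
+.. prompt:: bash $
+
+   ctest -R unittest_str_map
+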
generated the keys, you may skip the steps related to generating keys.
#. Create a ``client.admin`` key, and save a copy of the key for your client
- host::
+   host:
- ceph auth get-or-create client.admin mon 'allow *' mds 'allow *' mgr 'allow *' osd 'allow *' -o /etc/ceph/ceph.client.admin.keyring
+ .. prompt:: bash $
+
+ ceph auth get-or-create client.admin mon 'allow *' mds 'allow *' mgr 'allow *' osd 'allow *' -o /etc/ceph/ceph.client.admin.keyring
**Warning:** This will clobber any existing
``/etc/ceph/client.admin.keyring`` file. Do not perform this step if a
deployment tool has already done it for you. Be careful!
#. Create a keyring for your monitor cluster and generate a monitor
- secret key. ::
+ secret key.
+
+ .. prompt:: bash $
- ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
+ ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
#. Copy the monitor keyring into a ``ceph.mon.keyring`` file in every monitor's
``mon data`` directory. For example, to copy it to ``mon.a`` in cluster ``ceph``,
- use the following::
+   use the following:
+
+ .. prompt:: bash $
+
+ cp /tmp/ceph.mon.keyring /var/lib/ceph/mon/ceph-a/keyring
- cp /tmp/ceph.mon.keyring /var/lib/ceph/mon/ceph-a/keyring
-#. Generate a secret key for every MGR, where ``{$id}`` is the MGR letter::
+#. Generate a secret key for every MGR, where ``{$id}`` is the MGR letter:
+
+   .. prompt:: bash $
+
-      ceph auth get-or-create mgr.{$id} mon 'allow profile mgr' mds 'allow *' osd 'allow *' -o /var/lib/ceph/mgr/ceph-{$id}/keyring
+      ceph auth get-or-create mgr.{$id} mon 'allow profile mgr' mds 'allow *' osd 'allow *' -o /var/lib/ceph/mgr/ceph-{$id}/keyring

-#. Generate a secret key for every OSD, where ``{$id}`` is the OSD number::
+#. Generate a secret key for every OSD, where ``{$id}`` is the OSD number:
+
+   .. prompt:: bash $
+
-      ceph auth get-or-create osd.{$id} mon 'allow rwx' osd 'allow *' -o /var/lib/ceph/osd/ceph-{$id}/keyring
+      ceph auth get-or-create osd.{$id} mon 'allow rwx' osd 'allow *' -o /var/lib/ceph/osd/ceph-{$id}/keyring

-#. Generate a secret key for every MDS, where ``{$id}`` is the MDS letter::
+#. Generate a secret key for every MDS, where ``{$id}`` is the MDS letter:
+
+   .. prompt:: bash $
+
-      ceph auth get-or-create mds.{$id} mon 'allow rwx' osd 'allow *' mds 'allow *' mgr 'allow profile mds' -o /var/lib/ceph/mds/ceph-{$id}/keyring
+      ceph auth get-or-create mds.{$id} mon 'allow rwx' osd 'allow *' mds 'allow *' mgr 'allow profile mds' -o /var/lib/ceph/mds/ceph-{$id}/keyring
#. Enable ``cephx`` authentication by setting the following options in the
- ``[global]`` section of your `Ceph configuration`_ file::
+   ``[global]`` section of your `Ceph configuration`_ file:
- auth cluster required = cephx
- auth service required = cephx
- auth client required = cephx
+ .. code-block:: ini
+
+ auth cluster required = cephx
+ auth service required = cephx
+ auth client required = cephx
#. Start or restart the Ceph cluster. See `Operating a Cluster`_ for details.
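+
+   As a sketch, on a host whose daemons are managed by systemd this usually
+   amounts to restarting the ``ceph.target`` unit (adjust this if your
+   deployment tool manages services differently):
+
+   .. prompt:: bash $
+
+      sudo systemctl restart ceph.target
+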
during setup and/or troubleshooting to temporarily disable authentication.
#. Disable ``cephx`` authentication by setting the following options in the
- ``[global]`` section of your `Ceph configuration`_ file::
+   ``[global]`` section of your `Ceph configuration`_ file:
+
+ .. code-block:: ini
- auth cluster required = none
- auth service required = none
- auth client required = none
+ auth cluster required = none
+ auth service required = none
+ auth client required = none
#. Start or restart the Ceph cluster. See `Operating a Cluster`_ for details.
.. versionadded:: Bobtail 0.56
For Bobtail (v 0.56) and beyond, you should expressly enable or disable
-authentication in the ``[global]`` section of your Ceph configuration file. ::
+authentication in the ``[global]`` section of your Ceph configuration file.
+
+.. code-block:: ini
auth cluster required = cephx
auth service required = cephx
You may override this path using the ``osd data`` setting. We don't recommend
changing the default location. Create the default directory on your OSD host.
-::
+.. prompt:: bash $
ssh {osd-host}
sudo mkdir /var/lib/ceph/osd/ceph-{osd-number}
daemons. If the OSD is for a disk other than the OS disk, prepare it for
-use with Ceph, and mount it to the directory you just created::
+use with Ceph, and mount it to the directory you just created:
+.. prompt:: bash $
+
ssh {new-osd-host}
sudo mkfs -t {fstype} /dev/{disk}
sudo mount -o user_xattr /dev/{hdd} /var/lib/ceph/osd/ceph-{osd-number}