--- /dev/null
+==========================
+ Ceph Configuration Files
+==========================
+When you start the Ceph service, the initialization process activates a series
+of daemons that run in the background. The hosts in a typical RADOS cluster run
+at least one of three processes or daemons:
+
+- RADOS (``ceph-osd``)
+- Monitor (``ceph-mon``)
+- Metadata Server (``ceph-mds``)
+
+Each process or daemon looks for a ``ceph.conf`` file that provides its
+configuration settings. Ceph searches the following default locations, in
+order:
+
+ 1. ``$CEPH_CONF`` (*i.e.,* the path following
+ the ``$CEPH_CONF`` environment variable)
+ 2. ``-c path/path`` (*i.e.,* the ``-c`` command line argument)
+ 3. ``/etc/ceph/ceph.conf``
+ 4. ``~/.ceph/config``
+ 5. ``./ceph.conf`` (*i.e.,* in the current working directory)
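+
+For example, you can point Ceph at a specific configuration file via the
+first two locations; the path below is hypothetical::
+
+    # Shell commands, not ceph.conf settings.
+    # Set the environment variable for subsequent commands:
+    export CEPH_CONF=/srv/ceph/mycluster.conf
+    # Or pass the path explicitly with -c:
+    ceph -c /srv/ceph/mycluster.conf health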
+
+The ``ceph.conf`` file provides the settings for each Ceph daemon. Once you
+have installed the Ceph packages on the OSD Cluster hosts, you need to create
+a ``ceph.conf`` file to configure your OSD cluster.
+
+Creating ``ceph.conf``
+----------------------
+The ``ceph.conf`` file defines:
+
+- Cluster Membership
+- Host Names
+- Paths to Hosts
+- Runtime Options
+
+You can add comments to the ``ceph.conf`` file by preceding them with
+a semi-colon (;). For example::
+
+ ; <--A semi-colon precedes a comment
+ ; A comment may be anything, and always follows a semi-colon on each line.
+ ; We recommend that you provide comments in your configuration file(s).
+
+Configuration File Basics
+~~~~~~~~~~~~~~~~~~~~~~~~~
+The ``ceph.conf`` file configures each instance of the three common processes
+in a RADOS cluster.
+
++-----------------+--------------+--------------+-----------------+-------------------------------------------------+
+| Setting Scope | Process | Setting | Instance Naming | Description |
++=================+==============+==============+=================+=================================================+
+| All Modules | All | ``[global]`` | N/A | Settings affect all instances of all daemons. |
++-----------------+--------------+--------------+-----------------+-------------------------------------------------+
+| RADOS | ``ceph-osd`` | ``[osd]`` | Numeric | Settings affect RADOS instances only. |
++-----------------+--------------+--------------+-----------------+-------------------------------------------------+
+| Monitor | ``ceph-mon`` | ``[mon]`` | Alphanumeric | Settings affect monitor instances only. |
++-----------------+--------------+--------------+-----------------+-------------------------------------------------+
+| Metadata Server | ``ceph-mds`` | ``[mds]`` | Alphanumeric | Settings affect MDS instances only. |
++-----------------+--------------+--------------+-----------------+-------------------------------------------------+
+
+Metavariables
+~~~~~~~~~~~~~
+The configuration system supports certain 'metavariables,' which are typically
+used in ``[global]`` or process/daemon settings. When a metavariable occurs
+inside a configuration value, Ceph expands it into a concrete value, similar
+to variable expansion in the Bash shell.
+
+There are a few different metavariables:
+
++--------------+----------------------------------------------------------------------------------------------------------+
+| Metavariable | Description |
++==============+==========================================================================================================+
+| ``$host`` | Expands to the host name of the current daemon. |
++--------------+----------------------------------------------------------------------------------------------------------+
+| ``$type`` | Expands to one of ``mds``, ``osd``, or ``mon``, depending on the type of the current daemon. |
++--------------+----------------------------------------------------------------------------------------------------------+
+| ``$id`` | Expands to the daemon identifier. For ``osd.0``, this would be ``0``; for ``mds.a``, it would be ``a``. |
++--------------+----------------------------------------------------------------------------------------------------------+
+| ``$num`` | Same as ``$id``. |
++--------------+----------------------------------------------------------------------------------------------------------+
+| ``$name`` | Expands to ``$type.$id``. |
++--------------+----------------------------------------------------------------------------------------------------------+
+| ``$cluster`` | Expands to the cluster name. Useful when running multiple clusters on the same hardware. |
++--------------+----------------------------------------------------------------------------------------------------------+
+
+Global Settings
+~~~~~~~~~~~~~~~
+The Ceph configuration file supports a hierarchy of settings, where child
+settings inherit the settings of the parent. Global settings affect all
+instances of all processes in the cluster. Use the ``[global]`` setting for
+values that are common for all hosts in the cluster. You can override each
+``[global]`` setting by:
+
+1. Changing the setting in a particular ``[group]``.
+2. Changing the setting in a particular process type (*e.g.,* ``[osd]``, ``[mon]``, or ``[mds]``).
+3. Changing the setting in a particular process instance (*e.g.,* ``[osd.1]``).
+
+Overriding a global setting affects all child processes, except those that
+you specifically override. For example::
+
+ [global]
+ ; Enable authentication between hosts within the cluster.
+ auth supported = cephx
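+
+The override hierarchy can be sketched with a logging setting; the values
+shown are hypothetical::
+
+    [global]
+        ; default for every daemon in the cluster
+        debug ms = 0
+    [osd]
+        ; overrides [global] for all ceph-osd daemons
+        debug ms = 1
+    [osd.1]
+        ; overrides [global] and [osd] for osd.1 only
+        debug ms = 10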
+
+Process/Daemon Settings
+~~~~~~~~~~~~~~~~~~~~~~~
+You can specify settings that apply to a particular type of process. When you
+specify settings under ``[osd]``, ``[mon]``, or ``[mds]`` without specifying a
+particular instance, the setting applies to all OSDs, monitors, or metadata
+servers, respectively.
+
+Instance Settings
+~~~~~~~~~~~~~~~~~
+You may specify settings for a particular instance of a daemon. Specify an
+instance by entering its type and instance ID, delimited by a period (.).
+The instance ID for an OSD is always numeric, but it may be alphanumeric
+for monitors and metadata servers. ::
+
+ [osd.1]
+ ; settings affect osd.1 only.
+ [mon.a1]
+ ; settings affect mon.a1 only.
+ [mds.b2]
+ ; settings affect mds.b2 only.
+
+``host`` and ``addr`` Settings
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The `Hardware Recommendations <../hardware-recommendations>`_ section
+provides some hardware guidelines for configuring the cluster. It is possible
+for a single host to run multiple daemons. For example, a single host with
+multiple disks or RAIDs may run one ``ceph-osd`` for each disk or RAID.
+Additionally, a host may run both a ``ceph-mon`` and a ``ceph-osd`` daemon.
+Ideally, however, you will dedicate each host to a single type of process.
+For example, one host may run ``ceph-osd`` daemons, another host may run a
+``ceph-mds`` daemon, and other hosts may run ``ceph-mon`` daemons.
+
+Each host has a name identified by the ``host`` setting, and a network location
+(i.e., domain name or IP address) identified by the ``addr`` setting. For example::
+
+ [osd.1]
+ host = hostNumber1
+ addr = 150.140.130.120:1100
+ [osd.2]
+ host = hostNumber1
+ addr = 150.140.130.120:1102
+
+
+Monitor Configuration
+~~~~~~~~~~~~~~~~~~~~~
+Ceph typically deploys with three monitors to ensure high availability should
+a monitor instance crash. An odd number of monitors ensures that the Paxos
+algorithm can reach a majority agreement about which version of the cluster
+map is the most current.
+
+.. note:: You may deploy Ceph with a single monitor, but if the instance fails,
+ the lack of a monitor may interrupt data service availability.
+
+Ceph monitors typically listen on port ``6789``.
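+
+For example, a three-monitor configuration might be sketched as follows; the
+host names and addresses are hypothetical::
+
+    [mon.a]
+        host = mon-host-a
+        mon addr = 10.0.1.10:6789
+    [mon.b]
+        host = mon-host-b
+        mon addr = 10.0.1.11:6789
+    [mon.c]
+        host = mon-host-c
+        mon addr = 10.0.1.12:6789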
+
+Example Configuration File
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. literalinclude:: demo-ceph.conf
+ :language: ini
+
+Configuration File Deployment Options
+-------------------------------------
+The most common way to deploy the ``ceph.conf`` file in a cluster is to have
+all hosts share the same configuration file.
+
+You may create a ``ceph.conf`` file for each host if you wish, or specify a
+particular ``ceph.conf`` file for a subset of hosts within the cluster. However,
+using per-host ``ceph.conf`` configuration files imposes a maintenance burden as the
+cluster grows. In a typical deployment, an administrator creates a ``ceph.conf`` file
+on the Administration host and then copies that file to each OSD Cluster host.
+
+The current cluster deployment script, ``mkcephfs``, does not make copies of the
+``ceph.conf``. You must copy the file manually.
+++ /dev/null
-==========================
- Ceph Configuration Files
-==========================
-When you start the Ceph service, the initialization process activates a series
-of daemons that run in the background. The hosts in a typical RADOS cluster run
-at least one of three processes or daemons:
-
-- RADOS (``ceph-osd``)
-- Monitor (``ceph-mon``)
-- Metadata Server (``ceph-mds``)
-
-Each process or daemon looks for a ``ceph.conf`` file that provides their
-configuration settings. The default ``ceph.conf`` locations in sequential
-order include:
-
- 1. ``$CEPH_CONF`` (*i.e.,* the path following
- the ``$CEPH_CONF`` environment variable)
- 2. ``-c path/path`` (*i.e.,* the ``-c`` command line argument)
- 3. ``/etc/ceph/ceph.conf``
- 4. ``~/.ceph/config``
- 5. ``./ceph.conf`` (*i.e.,* in the current working directory)
-
-The ``ceph.conf`` file provides the settings for each Ceph daemon. Once you
-have installed the Ceph packages on the OSD Cluster hosts, you need to create
-a ``ceph.conf`` file to configure your OSD cluster.
-
-Creating ``ceph.conf``
-----------------------
-The ``ceph.conf`` file defines:
-
-- Cluster Membership
-- Host Names
-- Paths to Hosts
-- Runtime Options
-
-You can add comments to the ``ceph.conf`` file by preceding comments with
-a semi-colon (;). For example::
-
- ; <--A semi-colon precedes a comment
- ; A comment may be anything, and always follows a semi-colon on each line.
- ; We recommend that you provide comments in your configuration file(s).
-
-Configuration File Basics
-~~~~~~~~~~~~~~~~~~~~~~~~~
-The ``ceph.conf`` file configures each instance of the three common processes
-in a RADOS cluster.
-
-+-----------------+--------------+--------------+-----------------+-------------------------------------------------+
-| Setting Scope | Process | Setting | Instance Naming | Description |
-+=================+==============+==============+=================+=================================================+
-| All Modules | All | ``[global]`` | N/A | Settings affect all instances of all daemons. |
-+-----------------+--------------+--------------+-----------------+-------------------------------------------------+
-| RADOS | ``ceph-osd`` | ``[osd]`` | Numeric | Settings affect RADOS instances only. |
-+-----------------+--------------+--------------+-----------------+-------------------------------------------------+
-| Monitor | ``ceph-mon`` | ``[mon]`` | Alphanumeric | Settings affect monitor instances only. |
-+-----------------+--------------+--------------+-----------------+-------------------------------------------------+
-| Metadata Server | ``ceph-mds`` | ``[mds]`` | Alphanumeric | Settings affect MDS instances only. |
-+-----------------+--------------+--------------+-----------------+-------------------------------------------------+
-
-Metavariables
-~~~~~~~~~~~~~
-The configuration system supports certain 'metavariables,' which are typically
-used in ``[global]`` or process/daemon settings. If metavariables occur inside
-a configuration value, Ceph expands them into a concrete value--similar to how
-Bash shell expansion works.
-
-There are a few different metavariables:
-
-+--------------+----------------------------------------------------------------------------------------------------------+
-| Metavariable | Description |
-+==============+==========================================================================================================+
-| ``$host`` | Expands to the host name of the current daemon. |
-+--------------+----------------------------------------------------------------------------------------------------------+
-| ``$type`` | Expands to one of ``mds``, ``osd``, or ``mon``, depending on the type of the current daemon. |
-+--------------+----------------------------------------------------------------------------------------------------------+
-| ``$id`` | Expands to the daemon identifier. For ``osd.0``, this would be ``0``; for ``mds.a``, it would be ``a``. |
-+--------------+----------------------------------------------------------------------------------------------------------+
-| ``$num`` | Same as ``$id``. |
-+--------------+----------------------------------------------------------------------------------------------------------+
-| ``$name`` | Expands to ``$type.$id``. |
-+--------------+----------------------------------------------------------------------------------------------------------+
-| ``$cluster`` | Expands to the cluster name. Useful when running multiple clusters on the same hardware. |
-+--------------+----------------------------------------------------------------------------------------------------------+
-
-Global Settings
-~~~~~~~~~~~~~~~
-The Ceph configuration file supports a hierarchy of settings, where child
-settings inherit the settings of the parent. Global settings affect all
-instances of all processes in the cluster. Use the ``[global]`` setting for
-values that are common for all hosts in the cluster. You can override each
-``[global]`` setting by:
-
-1. Changing the setting in a particular ``[group]``.
-2. Changing the setting in a particular process type (*e.g.,* ``[osd]``, ``[mon]``, ``[mds]`` ).
-3. Changing the setting in a particular process (*e.g.,* ``[osd.1]`` )
-
-Overriding a global setting affects all child processes, except those that
-you specifically override. For example::
-
- [global]
- ; Enable authentication between hosts within the cluster.
- auth supported = cephx
-
-Process/Daemon Settings
-~~~~~~~~~~~~~~~~~~~~~~~
-You can specify settings that apply to a particular type of process. When you
-specify settings under ``[osd]``, ``[mon]`` or ``[mds]`` without specifying a
-particular instance, the setting will apply to all OSDs, monitors or metadata
-daemons respectively.
-
-Instance Settings
-~~~~~~~~~~~~~~~~~
-You may specify settings for particular instances of an daemon. You may specify
-an instance by entering its type, delimited by a period (.) and by the
-instance ID. The instance ID for an OSD is always numeric, but it may be
-alphanumeric for monitors and metadata servers. ::
-
- [osd.1]
- ; settings affect osd.1 only.
- [mon.a1]
- ; settings affect mon.a1 only.
- [mds.b2]
- ; settings affect mds.b2 only.
-
-``host`` and ``addr`` Settings
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The `Hardware Recommendations <../hardware_recommendations>`_ section
-provides some hardware guidelines for configuring the cluster. It is possible
-for a single host to run multiple daemons. For example, a single host with
-multiple disks or RAIDs may run one ``ceph-osd`` for each disk or RAID.
-Additionally, a host may run both a ``ceph-mon`` and an ``ceph-osd`` daemon
-on the same host. Ideally, you will have a host for a particular type of
-process. For example, one host may run ``ceph-osd`` daemons, another host
-may run a ``ceph-mds`` daemon, and other hosts may run ``ceph-mon`` daemons.
-
-Each host has a name identified by the ``host`` setting, and a network location
-(i.e., domain name or IP address) identified by the ``addr`` setting. For example::
-
- [osd.1]
- host = hostNumber1
- addr = 150.140.130.120:1100
- [osd.2]
- host = hostNumber1
- addr = 150.140.130.120:1102
-
-
-Monitor Configuration
-~~~~~~~~~~~~~~~~~~~~~
-Ceph typically deploys with 3 monitors to ensure high availability should a
-monitor instance crash. An odd number of monitors (3) ensures that the Paxos
-algorithm can determine which version of the cluster map is the most accurate.
-
-.. note:: You may deploy Ceph with a single monitor, but if the instance fails,
- the lack of a monitor may interrupt data service availability.
-
-Ceph monitors typically listen on port ``6789``.
-
-Example Configuration File
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. literalinclude:: demo-ceph.conf
- :language: ini
-
-Configuration File Deployment Options
--------------------------------------
-The most common way to deploy the ``ceph.conf`` file in a cluster is to have
-all hosts share the same configuration file.
-
-You may create a ``ceph.conf`` file for each host if you wish, or specify a
-particular ``ceph.conf`` file for a subset of hosts within the cluster. However,
-using per-host ``ceph.conf``configuration files imposes a maintenance burden as the
-cluster grows. In a typical deployment, an administrator creates a ``ceph.conf`` file
-on the Administration host and then copies that file to each OSD Cluster host.
-
-The current cluster deployment script, ``mkcephfs``, does not make copies of the
-``ceph.conf``. You must copy the file manually.
--- /dev/null
+==============================
+ Deploying Ceph Configuration
+==============================
+Ceph's current deployment script does not copy the configuration file you
+created from the Administration host to the OSD Cluster hosts. Copy the
+configuration file you created (*i.e.,* ``mycluster.conf`` in the example below)
+from the Administration host to ``/etc/ceph/ceph.conf`` on each OSD Cluster host.
+
+::
+
+ ssh myserver01 sudo tee /etc/ceph/ceph.conf <mycluster.conf
+ ssh myserver02 sudo tee /etc/ceph/ceph.conf <mycluster.conf
+ ssh myserver03 sudo tee /etc/ceph/ceph.conf <mycluster.conf
+
+
+The current deployment script does not start the Ceph services for you.
+Start the Ceph service on each OSD Cluster host. ::
+
+ ssh myserver01 sudo /etc/init.d/ceph start
+ ssh myserver02 sudo /etc/init.d/ceph start
+ ssh myserver03 sudo /etc/init.d/ceph start
+
+The current deployment script may not create the default server directories. Create
+server directories for each instance of a Ceph daemon.
+
+Using the example ``ceph.conf`` file, you would perform the following:
+
+On ``myserver01``::
+
+ mkdir srv/osd.0
+ mkdir srv/mon.a
+
+On ``myserver02``::
+
+ mkdir srv/osd.1
+ mkdir srv/mon.b
+
+On ``myserver03``::
+
+ mkdir srv/osd.2
+ mkdir srv/mon.c
+
+On ``myserver04``::
+
+ mkdir srv/osd.3
+
+.. important:: The ``host`` variable determines which host runs each instance of a Ceph daemon.
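+
+For example, with the following (hypothetical) entries, ``osd.0`` would run
+on ``myserver01`` and ``osd.1`` would run on ``myserver02``::
+
+    [osd.0]
+        host = myserver01
+    [osd.1]
+        host = myserver02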
+
--- /dev/null
+================================
+Deploying Ceph with ``mkcephfs``
+================================
+
+Once you have copied your Ceph Configuration to the OSD Cluster hosts, you may deploy Ceph with the ``mkcephfs`` script.
+
+.. note:: ``mkcephfs`` is a quick bootstrapping tool. It does not handle more complex operations, such as upgrades.
+
+ For production environments, you will deploy Ceph using Chef cookbooks (coming soon!).
+
+To run ``mkcephfs``, execute the following::
+
+ $ mkcephfs -a -c <path>/ceph.conf -k mycluster.keyring
+
+The script adds an admin key to the ``mycluster.keyring``, which is analogous to a root password.
+
+To start the cluster, execute the following::
+
+ /etc/init.d/ceph -a start
+
+Ceph should begin operating. You can check on the health of your Ceph cluster with the following::
+
+ ceph -k mycluster.keyring -c <path>/ceph.conf health
+
+++ /dev/null
-==============================
- Deploying Ceph Configuration
-==============================
-Ceph's current deployment script does not copy the configuration file you
-created from the Administration host to the OSD Cluster hosts. Copy the
-configuration file you created (*i.e.,* ``mycluster.conf`` in the example below)
-from the Administration host to ``etc/ceph/ceph.conf`` on each OSD Cluster host.
-
-::
-
- ssh myserver01 sudo tee /etc/ceph/ceph.conf <mycluster.conf
- ssh myserver02 sudo tee /etc/ceph/ceph.conf <mycluster.conf
- ssh myserver03 sudo tee /etc/ceph/ceph.conf <mycluster.conf
-
-
-The current deployment script doesn't copy the start services. Copy the ``start``
-services from the Administration host to each OSD Cluster host. ::
-
- ssh myserver01 sudo /etc/init.d/ceph start
- ssh myserver02 sudo /etc/init.d/ceph start
- ssh myserver03 sudo /etc/init.d/ceph start
-
-The current deployment script may not create the default server directories. Create
-server directories for each instance of a Ceph daemon.
-
-Using the exemplary ``ceph.conf`` file, you would perform the following:
-
-On ``myserver01``::
-
- mkdir srv/osd.0
- mkdir srv/mon.a
-
-On ``myserver02``::
-
- mkdir srv/osd.1
- mkdir srv/mon.b
-
-On ``myserver03``::
-
- mkdir srv/osd.2
- mkdir srv/mon.c
-
-On ``myserver04``::
-
- mkdir srv/osd.3
-
-.. important:: The ``host`` variable determines which host runs each instance of a Ceph daemon.
-
+++ /dev/null
-================================
-Deploying Ceph with ``mkcephfs``
-================================
-
-Once you have copied your Ceph Configuration to the OSD Cluster hosts, you may deploy Ceph with the ``mkcephfs`` script.
-
-.. note:: ``mkcephfs`` is a quick bootstrapping tool. It does not handle more complex operations, such as upgrades.
-
- For production environments, you will deploy Ceph using Chef cookbooks (coming soon!).
-
-To run ``mkcephfs``, execute the following::
-
- $ mkcephfs -a -c <path>/ceph.conf -k mycluster.keyring
-
-The script adds an admin key to the ``mycluster.keyring``, which is analogous to a root password.
-
-To start the cluster, execute the following::
-
- /etc/init.d/ceph -a start
-
-Ceph should begin operating. You can check on the health of your Ceph cluster with the following::
-
- ceph -k mycluster.keyring -c <path>/ceph.conf health
-
--- /dev/null
+=========================================
+Hard Disk and File System Recommendations
+=========================================
+
+Ceph aims for data safety, which means that when an application receives
+notice that data was written to the disk, that data actually has been written
+to the disk. For old kernels (<2.6.33), disable the write cache if the
+journal is on a raw disk. Newer kernels should work fine.
+
+Use ``hdparm`` to disable write caching on the hard disk::
+
+    $ hdparm -W 0 /dev/hda
+
+
+Ceph OSDs depend on the Extended Attributes (XATTRs) of the underlying file
+system for:
+
+- Internal object state
+- Snapshot metadata
+- RADOS Gateway Access Control Lists (ACLs).
+
+Ceph OSDs rely heavily upon the stability and performance of the underlying file
+system. The underlying file system must provide sufficient capacity for XATTRs.
+File system candidates for Ceph include B tree and B+ tree file systems such as:
+
+- ``btrfs``
+- ``XFS``
+
+If you are using ``ext4``, enable storing XATTRs in ``omap``. ::
+
+ filestore xattr use omap = true
+
+.. warning:: XATTR limits.
+
+ The RADOS Gateway's ACL and Ceph snapshots easily surpass the 4-kilobyte limit
+ for XATTRs in ``ext4``, causing the ``ceph-osd`` process to crash. Version 0.45
+ or newer uses ``leveldb`` to bypass this limitation. ``ext4`` is a poor file
+ system choice if you intend to deploy the RADOS Gateway or use snapshots on
+ versions earlier than 0.45.
+
+.. tip:: Use ``xfs`` initially and ``btrfs`` when it is ready for production.
+
+   The Ceph team believes that the best performance and stability will come from
+   ``btrfs``. The ``btrfs`` file system has internal transactions that keep the
+   local data set in a consistent state. This makes OSDs based on ``btrfs`` simple
+   to deploy, while providing scalability not currently available from block-based
+   file systems. The 64 KB XATTR limit for ``xfs`` is enough to accommodate
+   RBD snapshot metadata and RADOS Gateway ACLs. So ``xfs`` is the second-choice
+   file system of the Ceph team in the long run, but ``xfs`` is currently more
+   stable than ``btrfs``. If you only plan to use RADOS and ``rbd`` without
+   snapshots and without ``radosgw``, the ``ext4`` file system should work just fine.
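+
+As a sketch, preparing a new OSD volume with ``xfs`` might look like the
+following, where ``/dev/sdb1`` is a hypothetical partition and ``/srv/osd.0``
+a hypothetical server directory::
+
+    $ mkfs.xfs /dev/sdb1
+    $ mount /dev/sdb1 /srv/osd.0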
+
+++ /dev/null
-=========================================
-Hard Disk and File System Recommendations
-=========================================
-
-Ceph aims for data safety, which means that when the application receives notice
-that data was written to the disk, that data was actually written to the disk.
-For old kernels (<2.6.33), disable the write cache if the journal is on a raw
-disk. Newer kernels should work fine.
-
-Use ``hdparm`` to disable write caching on the hard disk::
-
- $ hdparm -W 0 /dev/hda 0
-
-
-Ceph OSDs depend on the Extended Attributes (XATTRs) of the underlying file
-system for:
-
-- Internal object state
-- Snapshot metadata
-- RADOS Gateway Access Control Lists (ACLs).
-
-Ceph OSDs rely heavily upon the stability and performance of the underlying file
-system. The underlying file system must provide sufficient capacity for XATTRs.
-File system candidates for Ceph include B tree and B+ tree file systems such as:
-
-- ``btrfs``
-- ``XFS``
-
-If you are using ``ext4``, enable XATTRs. ::
-
- filestore xattr use omap = true
-
-.. warning:: XATTR limits.
-
- The RADOS Gateway's ACL and Ceph snapshots easily surpass the 4-kilobyte limit
- for XATTRs in ``ext4``, causing the ``ceph-osd`` process to crash. Version 0.45
- or newer uses ``leveldb`` to bypass this limitation. ``ext4`` is a poor file
- system choice if you intend to deploy the RADOS Gateway or use snapshots on
- versions earlier than 0.45.
-
-.. tip:: Use ``xfs`` initially and ``btrfs`` when it is ready for production.
-
- The Ceph team believes that the best performance and stability will come from
- ``btrfs.`` The ``btrfs`` file system has internal transactions that keep the
- local data set in a consistent state. This makes OSDs based on ``btrfs`` simple
- to deploy, while providing scalability not currently available from block-based
- file systems. The 64-kb XATTR limit for ``xfs`` XATTRS is enough to accommodate
- RDB snapshot metadata and RADOS Gateway ACLs. So ``xfs`` is the second-choice
- file system of the Ceph team in the long run, but ``xfs`` is currently more
- stable than ``btrfs``. If you only plan to use RADOS and ``rbd`` without
- snapshots and without ``radosgw``, the ``ext4`` file system should work just fine.
-
.. toctree::
- file_system_recommendations
- Configuration <ceph_conf>
- Deploy Config <deploying_ceph_conf>
- deploying_ceph_with_mkcephfs
+ file-system-recommendations
+ Configuration <ceph-conf>
+ Deploy Config <deploying-ceph-conf>
+ deploying-ceph-with-mkcephfs
--- /dev/null
+====================================
+ Downloading Debian/Ubuntu Packages
+====================================
+We automatically build Debian/Ubuntu packages for any branches or tags that
+appear in the ``ceph.git`` `repository <http://github.com/ceph/ceph>`_. If you
+want to build your own packages (*e.g.,* for RPM), see
+`Build Ceph Packages <../../source/build-packages>`_.
+
+When you download release packages, you will receive the latest package build,
+which may be several weeks behind the current release or the most recent code.
+It may contain bugs that have already been fixed in the most recent versions of
+the code. Until packages contain only stable code, carefully weigh the
+tradeoff between installing from a package and retrieving the latest release
+or the most current source code and building Ceph yourself.
+
+When you execute the following commands to install the Debian/Ubuntu Ceph
+packages, replace ``{ARCH}`` with the architecture of your CPU (*e.g.,* ``amd64``
+or ``i386``), ``{DISTRO}`` with the code name of your operating system
+(*e.g.,* ``precise``, rather than the OS version number) and ``{BRANCH}`` with
+the version of Ceph you want to run (e.g., ``master``, ``stable``, ``unstable``,
+``v0.44``, *etc.*).
+
+Adding Release Packages to APT
+------------------------------
+We provide stable release packages for Debian/Ubuntu, which are signed
+with the ``release.asc`` key. Click `here <http://ceph.newdream.net/debian/dists>`_
+to see the distributions and branches supported. To install a release package,
+you must first add a release key. ::
+
+    $ wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc | sudo apt-key add -
+
+For Debian/Ubuntu releases, we use the Advanced Package Tool (APT). To retrieve
+the release packages and updates and install them with ``apt``, you must add a
+``ceph.list`` file to your ``apt`` configuration with the following path::
+
+    /etc/apt/sources.list.d/ceph.list
+
+Open the file and add the following line::
+
+ deb http://ceph.com/debian/ {DISTRO} main
+
+Remember to replace ``{DISTRO}`` with the Linux distribution for your host.
+Then, save the file.
+
+Downloading Packages
+--------------------
+Once you add either release or autobuild packages for Debian/Ubuntu, you may
+download them with ``apt`` as follows::
+
+ sudo apt-get update
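+
+After updating the package index, you can install the packages; ``ceph`` is
+the name of the base package in the Ceph repositories::
+
+    sudo apt-get install ceph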
+
+++ /dev/null
-====================================
- Downloading Debian/Ubuntu Packages
-====================================
-We automatically build Debian/Ubuntu packages for any branches or tags that
-appear in the ``ceph.git`` `repository <http://github.com/ceph/ceph>`_. If you
-want to build your own packages (*e.g.,* for RPM), see
-`Build Ceph Packages <../../source/build_packages>`_.
-
-When you download release packages, you will receive the latest package build,
-which may be several weeks behind the current release or the most recent code.
-It may contain bugs that have already been fixed in the most recent versions of
-the code. Until packages contain only stable code, you should carefully consider
-the tradeoffs of installing from a package or retrieving the latest release
-or the most current source code and building Ceph.
-
-When you execute the following commands to install the Debian/Ubuntu Ceph
-packages, replace ``{ARCH}`` with the architecture of your CPU (*e.g.,* ``amd64``
-or ``i386``), ``{DISTRO}`` with the code name of your operating system
-(*e.g.,* ``precise``, rather than the OS version number) and ``{BRANCH}`` with
-the version of Ceph you want to run (e.g., ``master``, ``stable``, ``unstable``,
-``v0.44``, *etc.*).
-
-Adding Release Packages to APT
-------------------------------
-We provide stable release packages for Debian/Ubuntu, which are signed signed
-with the ``release.asc`` key. Click `here <http://ceph.newdream.net/debian/dists>`_
-to see the distributions and branches supported. To install a release package,
-you must first add a release key. ::
-
- $ wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc \ | sudo apt-key add -
-
-For Debian/Ubuntu releases, we use the Advanced Package Tool (APT). To retrieve
-the release packages and updates and install them with ``apt``, you must add a
-``ceph.list`` file to your ``apt`` configuration with the following path::
-
- etc/apt/sources.list.d/ceph.list
-
-Open the file and add the following line::
-
- deb http://ceph.com/debian/ {DISTRO} main
-
-Remember to replace ``{DISTRO}`` with the Linux distribution for your host.
-Then, save the file.
-
-Downloading Packages
---------------------
-Once you add either release or autobuild packages for Debian/Ubuntu, you may
-download them with ``apt`` as follows::
-
- sudo apt-get update
-
--- /dev/null
+==========================
+ Hardware Recommendations
+==========================
+Ceph runs on commodity hardware and a Linux operating system over a TCP/IP
+network. The hardware recommendations for different processes/daemons differ
+considerably.
+
+OSD hosts should have ample data storage in the form of a hard drive or a RAID.
+Ceph OSDs run the RADOS service, calculate data placement with CRUSH, and
+maintain their own copy of the cluster map. Therefore, OSDs should have a
+reasonable amount of processing power.
+
+Ceph monitors require enough disk space for the cluster map, but usually do
+not encounter heavy loads. Monitor hosts do not need to be very powerful.
+
+Ceph metadata servers distribute their load. However, metadata servers must be
+capable of serving their data quickly. Metadata servers should have strong
+processing capability and plenty of RAM.
+
+.. note:: If you are not using the Ceph File System, you do not need a metadata server.
+
++--------------+----------------+------------------------------------+
+| Process | Criteria | Minimum Recommended |
++==============+================+====================================+
+| ``ceph-osd`` | Processor | 64-bit AMD-64/i386 dual-core |
+| +----------------+------------------------------------+
+| | RAM | 500 MB per daemon |
+| +----------------+------------------------------------+
+| | Volume Storage | 1-disk or RAID per daemon |
+| +----------------+------------------------------------+
+|              |                | 2x 1GB Ethernet NICs               |
++--------------+----------------+------------------------------------+
+| ``ceph-mon`` | Processor | 64-bit AMD-64/i386 |
+| +----------------+------------------------------------+
+| | RAM | 1 GB per daemon |
+| +----------------+------------------------------------+
+| | Disk Space | 10 GB per daemon |
+| +----------------+------------------------------------+
+|              |                | 2x 1GB Ethernet NICs               |
++--------------+----------------+------------------------------------+
+| ``ceph-mds`` | Processor | 64-bit AMD-64/i386 quad-core |
+| +----------------+------------------------------------+
+| | RAM | 1 GB minimum per daemon |
+| +----------------+------------------------------------+
+| | Disk Space | 1 MB per daemon |
+| +----------------+------------------------------------+
+|              |                | 2x 1GB Ethernet NICs               |
++--------------+----------------+------------------------------------+
+
+++ /dev/null
-==========================
- Hardware Recommendations
-==========================
-Ceph runs on commodity hardware and a Linux operating system over a TCP/IP
-network. The hardware recommendations for different processes/daemons differ
-considerably.
-
-OSD hosts should have ample data storage in the form of a hard drive or a RAID.
-Ceph OSDs run the RADOS service, calculate data placement with CRUSH, and
-maintain their own copy of the cluster map. Therefore, OSDs should have a
-reasonable amount of processing power.
-
-Ceph monitors require enough disk space for the cluster map, but usually do
-not encounter heavy loads. Monitor hosts do not need to be very powerful.
-
-Ceph metadata servers distribute their load. However, metadata servers must be
-capable of serving their data quickly. Metadata servers should have strong
-processing capability and plenty of RAM.
-
-.. note:: If you are not using the Ceph File System, you do not need a meta data server.
-
-+--------------+----------------+------------------------------------+
-| Process | Criteria | Minimum Recommended |
-+==============+================+====================================+
-| ``ceph-osd`` | Processor | 64-bit AMD-64/i386 dual-core |
-| +----------------+------------------------------------+
-| | RAM | 500 MB per daemon |
-| +----------------+------------------------------------+
-| | Volume Storage | 1-disk or RAID per daemon |
-| +----------------+------------------------------------+
-| | Network | 2-1GB Ethernet NICs |
-+--------------+----------------+------------------------------------+
-| ``ceph-mon`` | Processor | 64-bit AMD-64/i386 |
-| +----------------+------------------------------------+
-| | RAM | 1 GB per daemon |
-| +----------------+------------------------------------+
-| | Disk Space | 10 GB per daemon |
-| +----------------+------------------------------------+
-| | Network | 2-1GB Ethernet NICs |
-+--------------+----------------+------------------------------------+
-| ``ceph-mds`` | Processor | 64-bit AMD-64/i386 quad-core |
-| +----------------+------------------------------------+
-| | RAM | 1 GB minimum per daemon |
-| +----------------+------------------------------------+
-| | Disk Space | 1 MB per daemon |
-| +----------------+------------------------------------+
-| | Network | 2-1GB Ethernet NICs |
-+--------------+----------------+------------------------------------+
-
.. toctree::
- Hardware Recommendations <hardware_recommendations>
- Download Packages <download_packages>
- Install Packages <installing_packages>
+ Hardware Recommendations <hardware-recommendations>
+ Download Packages <download-packages>
+ Install Packages <installing-packages>
--- /dev/null
+==========================
+ Installing Ceph Packages
+==========================
+Once you have downloaded or built Ceph packages, you may install them on your
+Admin host and OSD Cluster hosts.
+
+.. important:: All hosts should be running the same package version.
+ To ensure that you are running the same version on each host with APT,
+ you may execute ``sudo apt-get update`` on each host before you install
+ the packages.
+
+Installing Packages with APT
+----------------------------
+Once you download or build the packages and add your packages to APT
+(see `Downloading Debian/Ubuntu Packages <../download-packages>`_), you may
+install them as follows::
+
+ $ sudo apt-get install ceph
+
+Installing Packages with RPM
+----------------------------
+You may install RPM packages as follows::
+
+   $ rpm -i rpmbuild/RPMS/x86_64/ceph-*.rpm
+
+.. note:: We do not build RPM packages at this time. You may build them
+   yourself by downloading the source code.
+
+Proceed to Configuring a Cluster
+--------------------------------
+Once you have prepared your hosts and installed the Ceph packages, proceed to
+`Configuring a Storage Cluster <../../config-cluster>`_.
+++ /dev/null
-==========================
- Installing Ceph Packages
-==========================
-Once you have downloaded or built Ceph packages, you may install them on your
-Admin host and OSD Cluster hosts.
-
-.. important:: All hosts should be running the same package version.
- To ensure that you are running the same version on each host with APT,
- you may execute ``sudo apt-get update`` on each host before you install
- the packages.
-
-Installing Packages with APT
-----------------------------
-Once you download or build the packages and add your packages to APT
-(see `Downloading Debian/Ubuntu Packages <../download_packages>`_), you may
-install them as follows::
-
- $ sudo apt-get install ceph
-
-Installing Packages with RPM
-----------------------------
-You may install RPM packages as follows::
-
- rpm -i rpmbuild/RPMS/x86_64/ceph-*.rpm
-
-.. note: We do not build RPM packages at this time. You may build them
- yourself by downloading the source code.
-
-Proceed to Configuring a Cluster
---------------------------------
-Once you have prepared your hosts and installed Ceph pages, proceed to
-`Configuring a Storage Cluster <../../config-cluster>`_.
--- /dev/null
+===================
+Build Ceph Packages
+===================
+
+To build packages, you must clone the `Ceph`_ repository.
+You can create installation packages from the latest code using ``dpkg-buildpackage`` for Debian/Ubuntu
+or ``rpmbuild`` for the RPM Package Manager.
+
+.. tip:: When building on a multi-core CPU, pass the ``-j`` option with twice
+   the number of cores. For example, use ``-j4`` on a dual-core processor to
+   accelerate the build.
+
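+The tip above can be written in terms of the host's core count. This is a
+sketch only; it assumes the GNU ``nproc`` utility is available on your host::
+
+   $ sudo dpkg-buildpackage -j$(( $(nproc) * 2 ))
+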
+
+Advanced Package Tool (APT)
+---------------------------
+
+To create ``.deb`` packages for Debian/Ubuntu, ensure that you have cloned the `Ceph`_ repository,
+installed the `build prerequisites`_ and installed ``debhelper``::
+
+ $ sudo apt-get install debhelper
+
+Once you have installed ``debhelper``, you can build the packages::
+
+ $ sudo dpkg-buildpackage
+
+For multi-processor CPUs use the ``-j`` option to accelerate the build.
+
+RPM Package Manager
+-------------------
+
+To create ``.rpm`` packages, ensure that you have cloned the `Ceph`_ repository,
+installed the `build prerequisites`_ and installed ``rpm-build`` and ``rpmdevtools``::
+
+ $ yum install rpm-build rpmdevtools
+
+Once you have installed the tools, set up an RPM compilation environment::
+
+ $ rpmdev-setuptree
+
+Fetch the source tarball for the RPM compilation environment::
+
+ $ wget -P ~/rpmbuild/SOURCES/ http://ceph.newdream.net/download/ceph-<version>.tar.gz
+
+Build the RPM packages::
+
+ $ rpmbuild -tb ~/rpmbuild/SOURCES/ceph-<version>.tar.gz
+
+For multi-processor CPUs, override the ``_smp_mflags`` macro (*e.g.,* pass
+``--define "_smp_mflags -j4"`` to ``rpmbuild``) to accelerate the build.
+
+
+.. _build prerequisites: ../build-prerequisites
+.. _Ceph: ../cloning-the-ceph-source-code-repository
--- /dev/null
+===================
+Build Prerequisites
+===================
+
+Before you can build Ceph source code or Ceph documentation, you need to install several libraries and tools.
+
+.. tip:: Check this section to see if there are specific prerequisites for your Linux/Unix distribution.
+
+Prerequisites for Building Ceph Source Code
+===========================================
+Ceph provides ``autoconf`` and ``automake`` scripts to get you started quickly. Ceph build scripts
+depend on the following:
+
+- ``autotools-dev``
+- ``autoconf``
+- ``automake``
+- ``cdbs``
+- ``gcc``
+- ``g++``
+- ``git``
+- ``libboost-dev``
+- ``libedit-dev``
+- ``libssl-dev``
+- ``libtool``
+- ``libfcgi``
+- ``libfcgi-dev``
+- ``libfuse-dev``
+- ``linux-kernel-headers``
+- ``libcrypto++-dev``
+- ``libcrypto++``
+- ``libexpat1-dev``
+- ``libgtkmm-2.4-dev``
+- ``pkg-config``
+- ``libcurl4-gnutls-dev``
+
+On Ubuntu, execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
+
+   $ sudo apt-get install autotools-dev autoconf automake cdbs gcc g++ git libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev libcrypto++ libexpat1-dev libgtkmm-2.4-dev pkg-config libcurl4-gnutls-dev
+
+On Debian/Squeeze, execute ``aptitude install`` for each dependency that isn't installed on your host. ::
+
+   $ aptitude install autotools-dev autoconf automake cdbs gcc g++ git libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev libcrypto++ libexpat1-dev libgtkmm-2.4-dev pkg-config libcurl4-gnutls-dev
+
+
+Ubuntu Requirements
+-------------------
+
+- ``uuid-dev``
+- ``libkeyutils-dev``
+- ``libgoogle-perftools-dev``
+- ``libatomic-ops-dev``
+- ``libaio-dev``
+- ``libgdata-common``
+- ``libgdata13``
+
+Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
+
+   $ sudo apt-get install uuid-dev libkeyutils-dev libgoogle-perftools-dev libatomic-ops-dev libaio-dev libgdata-common libgdata13
+
+Debian
+------
+Alternatively, you may also install::
+
+ $ aptitude install fakeroot dpkg-dev
+ $ aptitude install debhelper cdbs libexpat1-dev libatomic-ops-dev
+
+openSUSE 11.2 (and later)
+-------------------------
+
+- ``boost-devel``
+- ``gcc-c++``
+- ``libedit-devel``
+- ``libopenssl-devel``
+- ``fuse-devel`` (optional)
+
+Execute ``zypper install`` for each dependency that isn't installed on your host. ::
+
+   $ zypper install boost-devel gcc-c++ libedit-devel libopenssl-devel fuse-devel
+
+Prerequisites for Building Ceph Documentation
+=============================================
+Ceph utilizes Python's Sphinx documentation tool. For details on the Sphinx
+documentation tool, refer to `Sphinx <http://sphinx.pocoo.org>`_. Follow the
+directions at `Sphinx 1.1.3 <http://pypi.python.org/pypi/Sphinx>`_ to install
+Sphinx. To run Sphinx with ``admin/build-doc``, at least the following are
+required:
+
+- ``python-dev``
+- ``python-pip``
+- ``python-virtualenv``
+- ``libxml2-dev``
+- ``libxslt-dev``
+- ``doxygen``
+- ``ditaa``
+- ``graphviz``
+
+Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
+
+ $ sudo apt-get install python-dev python-pip python-virtualenv libxml2-dev libxslt-dev doxygen ditaa graphviz
+
+++ /dev/null
-===================
-Build Ceph Packages
-===================
-
-To build packages, you must clone the `Ceph`_ repository.
-You can create installation packages from the latest code using ``dpkg-buildpackage`` for Debian/Ubuntu
-or ``rpmbuild`` for the RPM Package Manager.
-
-.. tip:: When building on a multi-core CPU, use the ``-j`` and the number of cores * 2.
- For example, use ``-j4`` for a dual-core processor to accelerate the build.
-
-
-Advanced Package Tool (APT)
----------------------------
-
-To create ``.deb`` packages for Debian/Ubuntu, ensure that you have cloned the `Ceph`_ repository,
-installed the `build prerequisites`_ and installed ``debhelper``::
-
- $ sudo apt-get install debhelper
-
-Once you have installed debhelper, you can build the packages:
-
- $ sudo dpkg-buildpackage
-
-For multi-processor CPUs use the ``-j`` option to accelerate the build.
-
-RPM Package Manager
--------------------
-
-To create ``.prm`` packages, ensure that you have cloned the `Ceph`_ repository,
-installed the `build prerequisites`_ and installed ``rpm-build`` and ``rpmdevtools``::
-
- $ yum install rpm-build rpmdevtools
-
-Once you have installed the tools, setup an RPM compilation environment::
-
- $ rpmdev-setuptree
-
-Fetch the source tarball for the RPM compilation environment::
-
- $ wget -P ~/rpmbuild/SOURCES/ http://ceph.newdream.net/download/ceph-<version>.tar.gz
-
-Build the RPM packages::
-
- $ rpmbuild -tb ~/rpmbuild/SOURCES/ceph-<version>.tar.gz
-
-For multi-processor CPUs use the ``-j`` option to accelerate the build.
-
-
-.. _build prerequisites: ../build_prerequisites
-.. _Ceph: ../cloning_the_ceph_source_code_repository
\ No newline at end of file
+++ /dev/null
-===================
-Build Prerequisites
-===================
-
-Before you can build Ceph source code or Ceph documentation, you need to install several libraries and tools.
-
-.. tip:: Check this section to see if there are specific prerequisites for your Linux/Unix distribution.
-
-Prerequisites for Building Ceph Source Code
-===========================================
-Ceph provides ``autoconf`` and ``automake`` scripts to get you started quickly. Ceph build scripts
-depend on the following:
-
-- ``autotools-dev``
-- ``autoconf``
-- ``automake``
-- ``cdbs``
-- ``gcc``
-- ``g++``
-- ``git``
-- ``libboost-dev``
-- ``libedit-dev``
-- ``libssl-dev``
-- ``libtool``
-- ``libfcgi``
-- ``libfcgi-dev``
-- ``libfuse-dev``
-- ``linux-kernel-headers``
-- ``libcrypto++-dev``
-- ``libcrypto++``
-- ``libexpat1-dev``
-- ``libgtkmm-2.4-dev``
-- ``pkg-config``
-- ``libcurl4-gnutls-dev``
-
-On Ubuntu, execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
-
- $ sudo apt-get install autotools-dev autoconf automake cdbs gcc g++ git libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev libcrypto++ libexpat1-dev libgtkmm-2.4-dev
-
-On Debian/Squeeze, execute ``aptitude install`` for each dependency that isn't installed on your host. ::
-
- $ aptitude install autotools-dev autoconf automake cdbs gcc g++ git libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev libcrypto++ libexpat1-dev libgtkmm-2.4-dev
-
-
-Ubuntu Requirements
--------------------
-
-- ``uuid-dev``
-- ``libkeytutils-dev``
-- ``libgoogle-perftools-dev``
-- ``libatomic-ops-dev``
-- ``libaio-dev``
-- ``libgdata-common``
-- ``libgdata13``
-
-Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
-
- $ sudo apt-get install uuid-dev libkeytutils-dev libgoogle-perftools-dev libatomic-ops-dev libaio-dev libgdata-common libgdata13
-
-Debian
-------
-Alternatively, you may also install::
-
- $ aptitude install fakeroot dpkg-dev
- $ aptitude install debhelper cdbs libexpat1-dev libatomic-ops-dev
-
-openSUSE 11.2 (and later)
--------------------------
-
-- ``boost-devel``
-- ``gcc-c++``
-- ``libedit-devel``
-- ``libopenssl-devel``
-- ``fuse-devel`` (optional)
-
-Execute ``zypper install`` for each dependency that isn't installed on your host. ::
-
- $zypper install boost-devel gcc-c++ libedit-devel libopenssl-devel fuse-devel
-
-Prerequisites for Building Ceph Documentation
-=============================================
-Ceph utilizes Python's Sphinx documentation tool. For details on
-the Sphinx documentation tool, refer to: `Sphinx <http://sphinx.pocoo.org>`_
-Follow the directions at `Sphinx 1.1.3 <http://pypi.python.org/pypi/Sphinx>`_
-to install Sphinx. To run Sphinx, with ``admin/build-doc``, at least the following are required:
-
-- ``python-dev``
-- ``python-pip``
-- ``python-virtualenv``
-- ``libxml2-dev``
-- ``libxslt-dev``
-- ``doxygen``
-- ``ditaa``
-- ``graphviz``
-
-Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
-
- $ sudo apt-get install python-dev python-pip python-virtualenv libxml2-dev libxslt-dev doxygen ditaa graphviz
-
--- /dev/null
+=============
+Building Ceph
+=============
+
+Ceph provides build scripts for source code and for documentation.
+
+Building Ceph
+=============
+Ceph provides ``automake`` and ``configure`` scripts to streamline the build process. To build Ceph, navigate to your cloned Ceph repository and execute the following::
+
+ $ cd ceph
+ $ ./autogen.sh
+ $ ./configure
+ $ make
+
+You can use ``make -j`` to execute multiple jobs depending upon your system. For example::
+
+ $ make -j4
+
+To install Ceph locally, you may also use::
+
+ $ make install
+
+If you install Ceph locally, ``make`` will place the executables in
+``/usr/local/bin``. You may add the ``ceph.conf`` file to the ``/usr/local/bin``
+directory to run an evaluation environment of Ceph from a single directory.
+
+Building Ceph Documentation
+===========================
+Ceph utilizes Python's Sphinx documentation tool. For details on the Sphinx documentation tool, refer to `Sphinx <http://sphinx.pocoo.org>`_. To build the Ceph documentation, navigate to the Ceph repository and execute the build script::
+
+ $ cd ceph
+ $ admin/build-doc
+
+Once you build the documentation set, you may navigate to the source directory to view it::
+
+ $ cd build-doc/output
+
+There should be an ``html`` directory and a ``man`` directory containing documentation in HTML and manpage formats respectively.
+++ /dev/null
-=============
-Building Ceph
-=============
-
-Ceph provides build scripts for source code and for documentation.
-
-Building Ceph
-=============
-Ceph provides ``automake`` and ``configure`` scripts to streamline the build process. To build Ceph, navigate to your cloned Ceph repository and execute the following::
-
- $ cd ceph
- $ ./autogen.sh
- $ ./configure
- $ make
-
-You can use ``make -j`` to execute multiple jobs depending upon your system. For example::
-
- $ make -j4
-
-To install Ceph locally, you may also use::
-
- $ make install
-
-If you install Ceph locally, ``make`` will place the executables in ``usr/local/bin``.
-You may add the ``ceph.conf`` file to the ``usr/local/bin`` directory to run an evaluation environment of Ceph from a single directory.
-
-Building Ceph Documentation
-===========================
-Ceph utilizes Python’s Sphinx documentation tool. For details on the Sphinx documentation tool, refer to: `Sphinx <http://sphinx.pocoo.org>`_. To build the Ceph documentaiton, navigate to the Ceph repository and execute the build script::
-
- $ cd ceph
- $ admin/build-doc
-
-Once you build the documentation set, you may navigate to the source directory to view it::
-
- $ cd build-doc/output
-
-There should be an ``/html`` directory and a ``/man`` directory containing documentation in HTML and manpage formats respectively.
--- /dev/null
+=======================================
+Cloning the Ceph Source Code Repository
+=======================================
+To check out the Ceph source code, you must have ``git`` installed
+on your local host. To install ``git``, execute::
+
+ $ sudo apt-get install git
+
+You must also have a ``github`` account. If you do not have a
+``github`` account, go to `github.com <http://github.com>`_ and register.
+Follow the directions for setting up git at `Set Up Git <http://help.github.com/linux-set-up-git/>`_.
+
+Clone the Source
+----------------
+To clone the Ceph source code repository, execute::
+
+ $ git clone git@github.com:ceph/ceph.git
+
+Once ``git clone`` executes, you should have a full copy of the Ceph repository.
+
+Clone the Submodules
+--------------------
+Before you can build Ceph, you must navigate to your new repository and initialize and update its submodules::
+
+ $ cd ceph
+ $ git submodule init
+ $ git submodule update
+
+.. tip:: Make sure you maintain the latest copies of these submodules. Running
+   ``git status`` will tell you if the submodules are out of date. ::
+
+      $ git status
+
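+As an alternative sketch, a reasonably recent ``git`` can clone the repository
+and its submodules in a single step with the ``--recursive`` option::
+
+   $ git clone --recursive git@github.com:ceph/ceph.git
+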
+Choose a Branch
+---------------
+Once you clone the source code and submodules, your Ceph repository will be on the ``master`` branch by default, which is the unstable development branch. You may choose other branches too.
+
+- ``master``: The unstable development branch.
+- ``stable``: The bugfix branch.
+- ``next``: The release candidate branch.
+
+For example, to check out the ``master`` branch explicitly::
+
+   $ git checkout master
+
+++ /dev/null
-=======================================
-Cloning the Ceph Source Code Repository
-=======================================
-To check out the Ceph source code, you must have ``git`` installed
-on your local host. To install ``git``, execute::
-
- $ sudo apt-get install git
-
-You must also have a ``github`` account. If you do not have a
-``github`` account, go to `github.com <http://github.com>`_ and register.
-Follow the directions for setting up git at `Set Up Git <http://help.github.com/linux-set-up-git/>`_.
-
-Clone the Source
-----------------
-To clone the Ceph source code repository, execute::
-
- $ git clone git@github.com:ceph/ceph.git
-
-Once ``git clone`` executes, you should have a full copy of the Ceph repository.
-
-Clone the Submodules
---------------------
-Before you can build Ceph, you must navigate to your new repository and get the ``init`` submodule and the ``update`` submodule::
-
- $ cd ceph
- $ git submodule init
- $ git submodule update
-
-.. tip:: Make sure you maintain the latest copies of these submodules. Running ``git status`` will tell you if the submodules are out of date::
-
- $ git status
-
-Choose a Branch
----------------
-Once you clone the source code and submodules, your Ceph repository will be on the ``master`` branch by default, which is the unstable development branch. You may choose other branches too.
-
-- ``master``: The unstable development branch.
-- ``stable``: The bugfix branch.
-- ``next``: The release candidate branch.
-
-::
-
- git checkout master
-
--- /dev/null
+====================================
+ Downloading a Ceph Release Tarball
+====================================
+
+As Ceph development progresses, the Ceph team releases new versions of the
+source code. You may download source code tarballs for Ceph releases here:
+
+`Ceph Release Tarballs <http://ceph.com/download/>`_
+++ /dev/null
-====================================
- Downloading a Ceph Release Tarball
-====================================
-
-As Ceph development progresses, the Ceph team releases new versions of the
-source code. You may download source code tarballs for Ceph releases here:
-
-`Ceph Release Tarballs <http://ceph.com/download/>`_
.. toctree::
- Prerequisites <build_prerequisites>
- Get a Tarball <downloading_a_ceph_release>
- Clone the Source <cloning_the_ceph_source_code_repository>
- Build the Source <building_ceph>
- Build a Package <build_packages>
+ Prerequisites <build-prerequisites>
+ Get a Tarball <downloading-a-ceph-release>
+ Clone the Source <cloning-the-ceph-source-code-repository>
+ Build the Source <building-ceph>
+ Build a Package <build-packages>
Contributing Code <contributing>
--- /dev/null
+=====================================
+ Get Involved in the Ceph Community!
+=====================================
+These are exciting times in the Ceph community! Get involved!
+
++-----------------+-------------------------------------------------+-----------------------------------------------+
+|Channel | Description | Contact Info |
++=================+=================================================+===============================================+
+| **Blog** | Check the Ceph Blog_ periodically to keep track | http://ceph.newdream.net/news |
+| | of Ceph progress and important announcements. | |
++-----------------+-------------------------------------------------+-----------------------------------------------+
+| **IRC** | As you delve into Ceph, you may have questions | |
+| | or feedback for the Ceph development team. Ceph | - **Domain:** ``irc.oftc.net`` |
+| | developers are often available on the ``#ceph`` | - **Channel:** ``#ceph`` |
+| | IRC channel particularly during daytime hours | |
+| | in the US Pacific Standard Time zone. | |
++-----------------+-------------------------------------------------+-----------------------------------------------+
+| **Email List** | Keep in touch with developer activity by | |
+| | subscribing_ to the email list at | - Subscribe_ |
+| | ceph-devel@vger.kernel.org. You can opt out of | - Unsubscribe_ |
+| | the email list at any time by unsubscribing_. | - Gmane_ |
+| | A simple email is all it takes! If you would | |
+| | like to view the archives, go to Gmane_. | |
++-----------------+-------------------------------------------------+-----------------------------------------------+
+| **Bug Tracker** | You can help keep Ceph production worthy by | http://tracker.newdream.net/projects/ceph |
+| | filing and tracking bugs, and providing feature | |
+| | requests using the Bug Tracker_. | |
++-----------------+-------------------------------------------------+-----------------------------------------------+
+| **Source Code** | If you would like to participate in | |
+|                 | development, bug fixing, or if you just want    | - https://github.com/ceph/ceph.git            |
+|                 | the very latest code for Ceph, you can get it   | - ``git clone git@github.com:ceph/ceph.git``  |
+| | at http://github.com. | |
++-----------------+-------------------------------------------------+-----------------------------------------------+
+| **Support** | If you have a very specific problem, an | http://inktank.com |
+| | immediate need, or if your deployment requires | |
+| | significant help, consider commercial support_. | |
++-----------------+-------------------------------------------------+-----------------------------------------------+
+
+
+
+.. _Subscribe: mailto:majordomo@vger.kernel.org?body=subscribe+ceph-devel
+.. _Unsubscribe: mailto:majordomo@vger.kernel.org?body=unsubscribe+ceph-devel
+.. _subscribing: mailto:majordomo@vger.kernel.org?body=subscribe+ceph-devel
+.. _unsubscribing: mailto:majordomo@vger.kernel.org?body=unsubscribe+ceph-devel
+.. _Gmane: http://news.gmane.org/gmane.comp.file-systems.ceph.devel
+.. _Tracker: http://tracker.newdream.net/projects/ceph
+.. _Blog: http://ceph.newdream.net/news
+.. _support: http://inktank.com
+++ /dev/null
-=====================================
- Get Involved in the Ceph Community!
-=====================================
-These are exciting times in the Ceph community! Get involved!
-
-+-----------------+-------------------------------------------------+-----------------------------------------------+
-|Channel | Description | Contact Info |
-+=================+=================================================+===============================================+
-| **Blog** | Check the Ceph Blog_ periodically to keep track | http://ceph.newdream.net/news |
-| | of Ceph progress and important announcements. | |
-+-----------------+-------------------------------------------------+-----------------------------------------------+
-| **IRC** | As you delve into Ceph, you may have questions | |
-| | or feedback for the Ceph development team. Ceph | - **Domain:** ``irc.oftc.net`` |
-| | developers are often available on the ``#ceph`` | - **Channel:** ``#ceph`` |
-| | IRC channel particularly during daytime hours | |
-| | in the US Pacific Standard Time zone. | |
-+-----------------+-------------------------------------------------+-----------------------------------------------+
-| **Email List** | Keep in touch with developer activity by | |
-| | subscribing_ to the email list at | - Subscribe_ |
-| | ceph-devel@vger.kernel.org. You can opt out of | - Unsubscribe_ |
-| | the email list at any time by unsubscribing_. | - Gmane_ |
-| | A simple email is all it takes! If you would | |
-| | like to view the archives, go to Gmane_. | |
-+-----------------+-------------------------------------------------+-----------------------------------------------+
-| **Bug Tracker** | You can help keep Ceph production worthy by | http://tracker.newdream.net/projects/ceph |
-| | filing and tracking bugs, and providing feature | |
-| | requests using the Bug Tracker_. | |
-+-----------------+-------------------------------------------------+-----------------------------------------------+
-| **Source Code** | If you would like to participate in | |
-| | development, bug fixing, or if you just want | - http://github.com:ceph/ceph.git |
-| | the very latest code for Ceph, you can get it | - ``$git clone git@github.com:ceph/ceph.git`` |
-| | at http://github.com. | |
-+-----------------+-------------------------------------------------+-----------------------------------------------+
-| **Support** | If you have a very specific problem, an | http://inktank.com |
-| | immediate need, or if your deployment requires | |
-| | significant help, consider commercial support_. | |
-+-----------------+-------------------------------------------------+-----------------------------------------------+
-
-
-
-.. _Subscribe: mailto:majordomo@vger.kernel.org?body=subscribe+ceph-devel
-.. _Unsubscribe: mailto:majordomo@vger.kernel.org?body=unsubscribe+ceph-devel
-.. _subscribing: mailto:majordomo@vger.kernel.org?body=subscribe+ceph-devel
-.. _unsubscribing: mailto:majordomo@vger.kernel.org?body=unsubscribe+ceph-devel
-.. _Gmane: http://news.gmane.org/gmane.comp.file-systems.ceph.devel
-.. _Tracker: http://tracker.newdream.net/projects/ceph
-.. _Blog: http://ceph.newdream.net/news
-.. _support: http://inktank.com
.. toctree::
- Get Involved <get_involved_in_the_ceph_community>
- quick_start
+ Get Involved <get-involved-in-the-ceph-community>
+ quick-start
--- /dev/null
+=============
+ Quick Start
+=============
+Ceph is intended for large-scale deployments, but you may install Ceph on a
+single host. Quick start is intended for Debian/Ubuntu Linux distributions.
+
+1. Log in to your host.
+2. Make a directory for Ceph packages. *e.g.,* ``$ mkdir ceph``
+3. `Get Ceph packages <../../install/download-packages>`_ and add them to your
+ APT configuration file.
+4. Update and Install Ceph packages.
+ See `Downloading Debian/Ubuntu Packages <../../install/download-packages>`_
+ and `Installing Packages <../../install/installing-packages>`_ for details.
+5. Add a ``ceph.conf`` file.
+ See `Ceph Configuration Files <../../config-cluster/ceph-conf>`_ for details.
+6. Run Ceph.
+   See `Deploying Ceph with mkcephfs <../../config-cluster/deploying-ceph-with-mkcephfs>`_
+   for details.
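+
+As a sketch of step 5, a minimal single-host ``ceph.conf`` contains one section
+per daemon instance. The host name and monitor address below are placeholders,
+not recommended values::
+
+   [global]
+       ; settings here affect all daemons
+   [mon.a]
+       host = myhost
+       mon addr = 192.168.0.10:6789
+   [osd.0]
+       host = myhost
+   [mds.a]
+       host = myhost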
+++ /dev/null
-=============
- Quick Start
-=============
-Ceph is intended for large-scale deployments, but you may install Ceph on a
-single host. Quick start is intended for Debian/Ubuntu Linux distributions.
-
-1. Login to your host.
-2. Make a directory for Ceph packages. *e.g.,* ``$ mkdir ceph``
-3. `Get Ceph packages <../../install/download_packages>`_ and add them to your
- APT configuration file.
-4. Update and Install Ceph packages.
- See `Downloading Debian/Ubuntu Packages <../../install/download_packages>`_
- and `Installing Packages <../../install/installing_packages>`_ for details.
-5. Add a ``ceph.conf`` file.
- See `Ceph Configuration Files <../../config-cluster/ceph_conf>`_ for details.
-6. Run Ceph.
- See `Deploying Ceph with mkcephfs <../../config_cluster/deploying_ceph_with_mkcephfs>`_