--- /dev/null
+==========================
+ Ceph Configuration Files
+==========================
+When you start the Ceph service, the initialization process activates a series
+of daemons that run in the background. The hosts in a typical RADOS cluster run
+at least one of three processes or daemons:
+
+- RADOS (``ceph-osd``)
+- Monitor (``ceph-mon``)
+- Metadata Server (``ceph-mds``)
+
+Each process or daemon looks for a ``ceph.conf`` file that provides its
+configuration settings. The default ``ceph.conf`` locations, in sequential
+order, are:
+
+ 1. ``$CEPH_CONF`` (*i.e.,* the path following
+ the ``$CEPH_CONF`` environment variable)
+ 2. ``-c path/path`` (*i.e.,* the ``-c`` command line argument)
+ 3. ``/etc/ceph/ceph.conf``
+ 4. ``~/.ceph/config``
+ 5. ``./ceph.conf`` (*i.e.,* in the current working directory)
+
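+For example, you can point the ``ceph`` command-line utility at a
+configuration file outside the default search locations (the path below is
+illustrative)::
+
+ ceph -c /opt/test-cluster/ceph.conf health
+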
+The ``ceph.conf`` file provides the settings for each Ceph daemon. Once you
+have installed the Ceph packages on the OSD Cluster hosts, you need to create
+a ``ceph.conf`` file to configure your OSD cluster.
+
+Creating ``ceph.conf``
+----------------------
+The ``ceph.conf`` file defines:
+
+- Cluster Membership
+- Host Names
+- Paths to Hosts
+- Runtime Options
+
+You can add comments to the ``ceph.conf`` file by preceding them with
+a semi-colon (;). For example::
+
+ ; <--A semi-colon precedes a comment
+ ; A comment may be anything, and always follows a semi-colon on each line.
+ ; We recommend that you provide comments in your configuration file(s).
+
+Configuration File Basics
+~~~~~~~~~~~~~~~~~~~~~~~~~
+The ``ceph.conf`` file configures each instance of the three common processes
+in a RADOS cluster.
+
++-----------------+--------------+--------------+-----------------+-------------------------------------------------+
+| Setting Scope | Process | Setting | Instance Naming | Description |
++=================+==============+==============+=================+=================================================+
+| All Modules | All | ``[global]`` | N/A | Settings affect all instances of all daemons. |
++-----------------+--------------+--------------+-----------------+-------------------------------------------------+
+| RADOS | ``ceph-osd`` | ``[osd]`` | Numeric | Settings affect RADOS instances only. |
++-----------------+--------------+--------------+-----------------+-------------------------------------------------+
+| Monitor | ``ceph-mon`` | ``[mon]`` | Alphanumeric | Settings affect monitor instances only. |
++-----------------+--------------+--------------+-----------------+-------------------------------------------------+
+| Metadata Server | ``ceph-mds`` | ``[mds]`` | Alphanumeric | Settings affect MDS instances only. |
++-----------------+--------------+--------------+-----------------+-------------------------------------------------+
+
+Metavariables
+~~~~~~~~~~~~~
+The configuration system supports certain 'metavariables,' which are typically
+used in ``[global]`` or process/daemon settings. If metavariables occur inside
+a configuration value, Ceph expands them into a concrete value, similar to how
+Bash shell expansion works.
+
+There are a few different metavariables:
+
++--------------+----------------------------------------------------------------------------------------------------------+
+| Metavariable | Description |
++==============+==========================================================================================================+
+| ``$host`` | Expands to the host name of the current daemon. |
++--------------+----------------------------------------------------------------------------------------------------------+
+| ``$type`` | Expands to one of ``mds``, ``osd``, or ``mon``, depending on the type of the current daemon. |
++--------------+----------------------------------------------------------------------------------------------------------+
+| ``$id`` | Expands to the daemon identifier. For ``osd.0``, this would be ``0``; for ``mds.a``, it would be ``a``. |
++--------------+----------------------------------------------------------------------------------------------------------+
+| ``$num`` | Same as ``$id``. |
++--------------+----------------------------------------------------------------------------------------------------------+
+| ``$name`` | Expands to ``$type.$id``. |
++--------------+----------------------------------------------------------------------------------------------------------+
+| ``$cluster`` | Expands to the cluster name. Useful when running multiple clusters on the same hardware. |
++--------------+----------------------------------------------------------------------------------------------------------+
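+
+For example, the ``keyring`` setting below, taken from the example
+configuration file later in this document, expands to
+``/etc/ceph/osd.1.keyring`` for the daemon ``osd.1``::
+
+ [global]
+ keyring = /etc/ceph/$name.keyring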
+
+Global Settings
+~~~~~~~~~~~~~~~
+The Ceph configuration file supports a hierarchy of settings, where child
+settings inherit the settings of the parent. Global settings affect all
+instances of all processes in the cluster. Use the ``[global]`` setting for
+values that are common for all hosts in the cluster. You can override each
+``[global]`` setting by:
+
+1. Changing the setting in a particular process type (*e.g.,* ``[osd]``, ``[mon]``, ``[mds]``).
+2. Changing the setting in a particular process (*e.g.,* ``[osd.1]``).
+
+Overriding a global setting affects all child processes, except those that
+you specifically override. For example::
+
+ [global]
+ ; Enable authentication between hosts within the cluster.
+ auth supported = cephx
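+
+A child section overrides a parent setting simply by restating it. In the
+illustrative sketch below, every daemon reads the keyring path set in
+``[global]`` except ``osd.1``, which overrides it (the paths are
+illustrative)::
+
+ [global]
+ keyring = /etc/ceph/$name.keyring
+ [osd.1]
+ ; osd.1 keeps its keyring elsewhere.
+ keyring = /data/osd.1.keyring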
+
+Process/Daemon Settings
+~~~~~~~~~~~~~~~~~~~~~~~
+You can specify settings that apply to a particular type of process. When you
+specify settings under ``[osd]``, ``[mon]`` or ``[mds]`` without specifying a
+particular instance, the setting will apply to all OSDs, monitors or metadata
+daemons respectively.
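+
+For example, the following settings, drawn from the example configuration
+file later in this document, apply the same data and journal layout to every
+OSD in the cluster::
+
+ [osd]
+ osd data = /srv/osd.$id
+ osd journal = /srv/osd.$id.journal
+ osd journal size = 1000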
+
+Instance Settings
+~~~~~~~~~~~~~~~~~
+You may specify settings for particular instances of a daemon. You may specify
+an instance by entering its type and instance ID, delimited by a period (.).
+The instance ID for an OSD is always numeric, but it may be alphanumeric for
+monitors and metadata servers. ::
+
+ [osd.1]
+ ; settings affect osd.1 only.
+ [mon.a1]
+ ; settings affect mon.a1 only.
+ [mds.b2]
+ ; settings affect mds.b2 only.
+
+``host`` and ``addr`` Settings
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The `Hardware Recommendations <../hardware_recommendations>`_ section
+provides some hardware guidelines for configuring the cluster. It is possible
+for a single host to run multiple daemons. For example, a single host with
+multiple disks or RAIDs may run one ``ceph-osd`` for each disk or RAID.
+Additionally, a host may run both a ``ceph-mon`` and a ``ceph-osd`` daemon.
+Ideally, you will dedicate each host to a particular type of process. For
+example, one host may run ``ceph-osd`` daemons, another host may run a
+``ceph-mds`` daemon, and other hosts may run ``ceph-mon`` daemons.
+
+Each host has a name identified by the ``host`` setting, and a network location
+(i.e., domain name or IP address) identified by the ``addr`` setting. For example::
+
+ [osd.1]
+ host = hostNumber1
+ addr = 150.140.130.120:1100
+ [osd.2]
+ host = hostNumber1
+ addr = 150.140.130.120:1102
+
+
+Monitor Configuration
+~~~~~~~~~~~~~~~~~~~~~
+Ceph typically deploys with 3 monitors to ensure high availability should a
+monitor instance crash. An odd number of monitors (3) ensures that the Paxos
+algorithm can establish a majority agreement about which version of the
+cluster map is the most recent.
+
+.. note:: You may deploy Ceph with a single monitor, but if the instance fails,
+ the lack of a monitor may interrupt data service availability.
+
+Ceph monitors typically listen on port ``6789``.
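+
+In the example configuration file below, the three monitors are declared as
+follows::
+
+ [mon.a]
+ host = myserver01
+ mon addr = 10.0.0.101:6789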
+
+Example Configuration File
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. literalinclude:: demo-ceph.conf
+ :language: ini
+
+Configuration File Deployment Options
+-------------------------------------
+The most common way to deploy the ``ceph.conf`` file in a cluster is to have
+all hosts share the same configuration file.
+
+You may create a ``ceph.conf`` file for each host if you wish, or specify a
+particular ``ceph.conf`` file for a subset of hosts within the cluster. However,
+using per-host ``ceph.conf`` configuration files imposes a maintenance burden as the
+cluster grows. In a typical deployment, an administrator creates a ``ceph.conf`` file
+on the Administration host and then copies that file to each OSD Cluster host.
+
+The current cluster deployment script, ``mkcephfs``, does not make copies of the
+``ceph.conf``. You must copy the file manually.
--- /dev/null
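+; Example configuration for a small three-host cluster.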
+[global]
+ auth supported = cephx
+ keyring = /etc/ceph/$name.keyring
+
+[mon]
+ mon data = /srv/mon.$id
+
+[mds]
+
+[osd]
+ osd data = /srv/osd.$id
+ osd journal = /srv/osd.$id.journal
+ osd journal size = 1000
+
+[mon.a]
+ host = myserver01
+ mon addr = 10.0.0.101:6789
+
+[mon.b]
+ host = myserver02
+ mon addr = 10.0.0.102:6789
+
+[mon.c]
+ host = myserver03
+ mon addr = 10.0.0.103:6789
+
+[osd.0]
+ host = myserver01
+
+[osd.1]
+ host = myserver02
+
+[osd.2]
+ host = myserver03
+
+[mds.a]
+ host = myserver01
\ No newline at end of file
--- /dev/null
+==============================
+ Deploying Ceph Configuration
+==============================
+Ceph's current deployment script does not copy the configuration file you
+created from the Administration host to the OSD Cluster hosts. Copy the
+configuration file you created (*i.e.,* ``mycluster.conf`` in the example below)
+from the Administration host to ``/etc/ceph/ceph.conf`` on each OSD Cluster host.
+
+::
+
+ ssh myserver01 sudo tee /etc/ceph/ceph.conf <mycluster.conf
+ ssh myserver02 sudo tee /etc/ceph/ceph.conf <mycluster.conf
+ ssh myserver03 sudo tee /etc/ceph/ceph.conf <mycluster.conf
+
+
+The current deployment script does not start the Ceph services for you. Start
+the Ceph service on each OSD Cluster host. ::
+
+ ssh myserver01 sudo /etc/init.d/ceph start
+ ssh myserver02 sudo /etc/init.d/ceph start
+ ssh myserver03 sudo /etc/init.d/ceph start
+
+The current deployment script may not create the default server directories. Create
+server directories for each instance of a Ceph daemon.
+
+Using the example ``ceph.conf`` file, you would perform the following:
+
+On ``myserver01``::
+
+ mkdir /srv/osd.0
+ mkdir /srv/mon.a
+
+On ``myserver02``::
+
+ mkdir /srv/osd.1
+ mkdir /srv/mon.b
+
+On ``myserver03``::
+
+ mkdir /srv/osd.2
+ mkdir /srv/mon.c
+
+.. important:: The ``host`` variable determines which host runs each instance of a Ceph daemon.
+
--- /dev/null
+================================
+Deploying Ceph with ``mkcephfs``
+================================
+
+Once you have copied your Ceph Configuration to the OSD Cluster hosts, you may deploy Ceph with the ``mkcephfs`` script.
+
+.. note:: ``mkcephfs`` is a quick bootstrapping tool. It does not handle more complex operations, such as upgrades.
+
+ For production environments, you will deploy Ceph using Chef cookbooks (coming soon!).
+
+To run ``mkcephfs``, execute the following::
+
+ $ mkcephfs -a -c <path>/ceph.conf -k mycluster.keyring
+
+The script adds an admin key to the ``mycluster.keyring``, which is analogous to a root password. Ceph should begin operating.
+You can check on the health of your Ceph cluster with the following::
+
+ ceph -k mycluster.keyring -c mycluster.conf health
+
--- /dev/null
+=========================================
+Hard Disk and File System Recommendations
+=========================================
+
+Ceph aims for data safety, which means that when the application receives notice
+that data was written to the disk, that data was actually written to the disk.
+For old kernels (<2.6.33), disable the write cache if the journal is on a raw
+disk. Newer kernels should work fine.
+
+Use ``hdparm`` to disable write caching on the hard disk::
+
+ $ hdparm -W 0 /dev/hda
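+
+To verify the current write-cache setting (the device name is illustrative)::
+
+ $ hdparm -W /dev/hda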
+
+
+Ceph OSDs depend on the Extended Attributes (XATTRs) of the underlying file
+system for:
+
+- Internal object state
+- Snapshot metadata
+- RADOS Gateway Access Control Lists (ACLs).
+
+Ceph OSDs rely heavily upon the stability and performance of the underlying
+file system. The underlying file system must provide sufficient capacity for
+XATTRs. File system candidates for Ceph include B tree and B+ tree file
+systems such as:
+
+- ``btrfs``
+- ``XFS``
+
+If you are using ``ext4``, configure the filestore to keep XATTRs in ``omap``. ::
+
+ filestore xattr use omap = true
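+
+A minimal sketch of where the setting goes, assuming you want it to apply to
+all OSDs in the cluster::
+
+ [osd]
+ filestore xattr use omap = true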
+
+.. warning:: XATTR limits.
+
+ The RADOS Gateway's ACL and Ceph snapshots easily surpass the 4-kilobyte limit
+ for XATTRs in ``ext4``, causing the ``ceph-osd`` process to crash. Version 0.45
+ or newer uses ``leveldb`` to bypass this limitation. ``ext4`` is a poor file
+ system choice if you intend to deploy the RADOS Gateway or use snapshots on
+ versions earlier than 0.45.
+
+.. tip:: Use ``xfs`` initially and ``btrfs`` when it is ready for production.
+
+ The Ceph team believes that the best performance and stability will come from
+ ``btrfs``. The ``btrfs`` file system has internal transactions that keep the
+ local data set in a consistent state. This makes OSDs based on ``btrfs`` simple
+ to deploy, while providing scalability not currently available from block-based
+ file systems. The 64 KB XATTR limit of ``xfs`` is enough to accommodate RBD
+ snapshot metadata and RADOS Gateway ACLs. So ``xfs`` is the second-choice
+ file system of the Ceph team in the long run, but ``xfs`` is currently more
+ stable than ``btrfs``. If you only plan to use RADOS and ``rbd`` without
+ snapshots and without ``radosgw``, the ``ext4`` file system should work just fine.
+
--- /dev/null
+===============================
+ Configuring a Storage Cluster
+===============================
+Ceph can run with a cluster containing thousands of Object Storage Devices
+(OSDs). A minimal system will have at least two OSDs for data replication. To
+configure OSD clusters, you must provide settings in the configuration file.
+Ceph provides default values for many settings, which you can override in the
+configuration file. Additionally, you can make runtime modifications to the
+configuration using command-line utilities.
+
+When Ceph starts, it activates three daemons:
+
+- ``ceph-osd`` (mandatory)
+- ``ceph-mon`` (mandatory)
+- ``ceph-mds`` (mandatory for cephfs only)
+
+Each process, daemon or utility loads the host's configuration file. A process
+may have information about more than one daemon instance (*i.e.,* multiple
+contexts). A daemon or utility only has information about a single daemon
+instance (a single context).
+
+.. note:: Ceph can run on a single host for evaluation purposes.
+
+.. toctree::
+
+ file_system_recommendations
+ Configuration <ceph_conf>
+ Deploy Config <deploying_ceph_conf>
+ deploying_ceph_with_mkcephfs
--- /dev/null
+===================
+ MDS Configuration
+===================
+
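+The settings below go in the ``[mds]`` section of ``ceph.conf``; spaces and
+underscores in setting names are interchangeable. For example, to state the
+default cache size explicitly::
+
+ [mds]
+ mds cache size = 100000
+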
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| Setting | Type | Default | Description |
++===================================+=========================+============+================================================+
+| ``mds_max_file_size`` | 64-bit Integer Unsigned | 1ULL << 40 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_cache_size`` | 32-bit Integer | 100000 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_cache_mid`` | Float | .7 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_mem_max``                   | 32-bit Integer          | 1048576    | In KB.                                         |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_dir_commit_ratio``          | Float                   | .5         |                                                |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_dir_max_commit_size``       | 32-bit Integer          | 90         | In MB.                                         |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_decay_halflife``            | Float                   | 5          |                                                |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_beacon_interval``           | Float                   | 4          |                                                |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_beacon_grace``              | Float                   | 15         |                                                |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_blacklist_interval``        | Float                   | 24.0*60.0  | How long to blacklist failed nodes.            |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_session_timeout``           | Float                   | 60         | Cap bits and leases time out if client is idle.|
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_session_autoclose``         | Float                   | 300        | Autoclose idle sessions.                       |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_reconnect_timeout``         | Float                   | 45         | Secs to wait for clients during MDS restart.   |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_tick_interval``             | Float                   | 5          |                                                |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_dirstat_min_interval``      | Float                   | 1          | Try to avoid propagating more often than this. |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_scatter_nudge_interval``    | Float                   | 5          | How quickly dirstat changes propagate up.      |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_client_prealloc_inos`` | 32-bit Integer | 1000 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_early_reply`` | Boolean | true | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_use_tmap``                  | Boolean                 | true       | Use trivialmap for dir updates.                |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_default_dir_hash``          | 32-bit Integer          |            | Defaults to CEPH_STR_HASH_RJENKINS.            |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_log`` | Boolean | true | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_log_skip_corrupt_events`` | Boolean | false | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_log_max_events`` | 32-bit Integer | -1 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_log_max_segments``          | 32-bit Integer          | 30         | Segment size is defined by the file layout.    |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_log_max_expiring``          | 32-bit Integer          | 20         |                                                |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_log_eopen_size``            | 32-bit Integer          | 100        | Open inodes per log entry.                     |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_sample_interval``       | Float                   | 3.0        | In seconds.                                    |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_replicate_threshold`` | Float | 8000 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_unreplicate_threshold`` | Float | 0 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_frag`` | Boolean | false | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_split_size`` | 32-bit Integer | 10000 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_split_rd`` | Float | 25000 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_split_wr`` | Float | 10000 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_split_bits`` | 32-bit Integer | 3 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_merge_size`` | 32-bit Integer | 50 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_merge_rd`` | Float | 1000 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_merge_wr`` | Float | 1000 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_interval``              | 32-bit Integer          | 10         | In seconds.                                    |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_fragment_interval``     | 32-bit Integer          | 5          | In seconds.                                    |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_idle_threshold`` | Float | 0 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_max`` | 32-bit Integer | -1 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_max_until`` | 32-bit Integer | -1 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_mode`` | 32-bit Integer | 0 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_min_rebalance``         | Float                   | .1         | Must be this far above average before export.  |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_min_start``             | Float                   | .2         | If we need less than this, do nothing.         |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_need_min``              | Float                   | .8         | Take within this range of what we need.        |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_need_max``              | Float                   | 1.2        |                                                |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_midchunk``              | Float                   | .3         | Any subtree bigger than this is taken in full. |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_minchunk``              | Float                   | .001       | Never take anything smaller than this.         |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_target_removal_min``    | 32-bit Integer          | 5          | Min balancer iters before old target removal.  |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_bal_target_removal_max``    | 32-bit Integer          | 10         | Max balancer iters before old target removal.  |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_replay_interval``           | Float                   | 1.0        | Time to wait before starting replay again.     |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_shutdown_check`` | 32-bit Integer | 0 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_thrash_exports`` | 32-bit Integer | 0 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_thrash_fragments`` | 32-bit Integer | 0 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_dump_cache_on_map`` | Boolean | false | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_dump_cache_after_rejoin`` | Boolean | false | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_verify_scatter`` | Boolean | false | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_debug_scatterstat`` | Boolean | false | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_debug_frag`` | Boolean | false | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_debug_auth_pins`` | Boolean | false | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_debug_subtrees`` | Boolean | false | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_kill_mdstable_at`` | 32-bit Integer | 0 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_kill_export_at`` | 32-bit Integer | 0 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_kill_import_at`` | 32-bit Integer | 0 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_kill_link_at`` | 32-bit Integer | 0 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_kill_rename_at`` | 32-bit Integer | 0 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_wipe_sessions`` | Boolean | 0 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_wipe_ino_prealloc`` | Boolean | 0 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_skip_ino`` | 32-bit Integer | 0 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``max_mds`` | 32-bit Integer | 1 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_standby_for_name`` | String | "" | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_standby_for_rank`` | 32-bit Integer | -1 | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+| ``mds_standby_replay`` | Boolean | false | |
++-----------------------------------+-------------------------+------------+------------------------------------------------+
+
--- /dev/null
+=======================
+ Monitor Configuration
+=======================
+
+
+
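+These settings go in the ``[mon]`` section of ``ceph.conf``. For example, the
+monitor data path used in the example configuration in this guide::
+
+ [mon]
+ mon data = /srv/mon.$id
+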
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| Setting | Type | Default Value | Description |
++===================================+================+===============+===========================================================+
+| ``mon data`` | String | "" | |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon sync fs threshold``         | 32-bit Integer | 5             | Sync when writing this many objects; 0 to disable.        |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon tick interval`` | 32-bit Integer | 5 | |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon subscribe interval`` | Double | 300 | |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon osd auto mark in``          | Boolean        | false         | Mark any booting OSDs 'in'.                               |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon osd auto mark auto out in`` | Boolean        | true          | Mark booting auto-marked-out OSDs 'in'.                   |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon osd auto mark new in``      | Boolean        | true          | Mark booting new OSDs 'in'.                               |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon osd down out interval``     | 32-bit Integer | 300           | In seconds.                                               |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon lease``                     | Float          | 5             | The lease interval.                                       |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon lease renew interval``      | Float          | 3             | On leader: the interval for renewing the lease.           |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon lease ack timeout``         | Float          | 10.0          | On leader: timeout if the lease isn't acked by all peons. |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon clock drift allowed``       | Float          | .050          | The allowed clock drift between monitors.                 |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon clock drift warn backoff``  | Float          | 5             | Exponential backoff for clock drift warnings.             |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon accept timeout``            | Float          | 10.0          | On leader: timeout if a Paxos update isn't accepted.      |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon pg create interval``        | Float          | 30.0          | No more than every 30 seconds.                            |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon pg stuck threshold``        | 32-bit Integer | 300           | Seconds after which PGs can be considered inactive,       |
+|                                   |                |               | unclean, or stale (see ``doc/control.rst``, dump stuck).  |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon osd full ratio``            | Float          | .95           | What percent full makes an OSD 'full'.                    |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon osd nearfull ratio``        | Float          | .85           | What percent full makes an OSD 'nearfull'.                |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon globalid prealloc``         | 32-bit Integer | 100           | The number of global IDs to preallocate.                  |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon osd report timeout``        | 32-bit Integer | 900           | Grace period before declaring unresponsive OSDs dead.     |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon force standby active``      | Boolean        | true          | Should monitors force standby-replay MDS daemons active?  |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon min osdmap epochs`` | 32-bit Integer | 500 | |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon max pgmap epochs`` | 32-bit Integer | 500 | |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon max log epochs`` | 32-bit Integer | 500 | |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon probe timeout`` | Double | 2.0 | |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+| ``mon slurp timeout`` | Double | 10.0 | |
++-----------------------------------+----------------+---------------+-----------------------------------------------------------+
+
--- /dev/null
+===================
+ OSD Configuration
+===================
+
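+These settings go in the ``[osd]`` section of ``ceph.conf``. For example, the
+journal size used in the example configuration in this guide::
+
+ [osd]
+ osd journal size = 1000
+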
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| Setting | Type | Default Value | Description |
++=========================================+=====================+=======================+================================================+
+| ``osd auto upgrade tmap`` | Boolean | True | Uses ``tmap`` for ``omap`` on old objects. |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd tmapput sets users tmap`` | Boolean | False | Uses ``tmap`` for debugging only. |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd data`` | String | None | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd journal`` | String | None | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd journal size`` | 32-bit Int | 0 | The size of the journal in MBs. |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd max write size``                   | 32-bit Int          | 90                    | The maximum size of a write, in MB.            |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd balance reads`` | Boolean | False | Load balance reads? |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd shed reads`` | 32-bit Int | False (0) | Forward from primary to replica. |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd shed reads min latency`` | Double | .01 | The minimum local latency. |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd shed reads min latency diff`` | Double | 1.5 | Percentage difference from peer. 150% default. |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd client message size cap``          | 64-bit Int Unsigned | 500*1024L*1024L       | Client data allowed in memory. 500 MB default. |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd stat refresh interval`` | 64-bit Int Unsigned | .5 | The status refresh interval in seconds. |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd pg bits`` | 32-bit Int | 6 | Placement group bits per OSD. |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd pgp bits`` | 32-bit Int | 4 | Placement group p bits per OSD? |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd pg layout`` | 32-bit Int | 2 | Placement Group bits ? per OSD? |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd min rep`` | 32-bit Int | 1 | Need a description. |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd max rep`` | 32-bit Int | 10 | Need a description. |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd min raid width`` | 32-bit Int | 3 | The minimum RAID width. |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd max raid width`` | 32-bit Int | 2 | The maximum RAID width. |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd pool default crush rule`` | 32-bit Int | 0 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd pool default size`` | 32-bit Int | 2 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd pool default pg num`` | 32-bit Int | 8 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd pool default pgp num`` | 32-bit Int | 8 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd map cache max`` | 32-bit Int | 250 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd map message max``                  | 32-bit Int          | 100                   | The max maps per MOSDMap message.              |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd op threads``                       | 32-bit Int          | 2                     | 0 disables threading.                          |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd disk threads``                     | 32-bit Int          | 1                     |                                                |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd recovery threads``                 | 32-bit Int          | 1                     |                                                |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd recover clone overlap``            | Boolean             | false                 | Preserve clone overlap in recovery/migration.  |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd backfill scan min`` | 32-bit Int | 64 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd backfill scan max`` | 32-bit Int | 512 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd op thread timeout`` | 32-bit Int | 30 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd backlog thread timeout`` | 32-bit Int | 60*60*1 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd recovery thread timeout`` | 32-bit Int | 30 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd snap trim thread timeout`` | 32-bit Int | 60*60*1 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd scrub thread timeout`` | 32-bit Int | 60 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd scrub finalize thread timeout`` | 32-bit Int | 60*10 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd remove thread timeout`` | 32-bit Int | 60*60 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd command thread timeout`` | 32-bit Int | 10*60 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd age`` | Float | .8 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd age time`` | 32-bit Int | 0 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd heartbeat interval`` | 32-bit Int | 1 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd mon heartbeat interval``           | 32-bit Int          | 30                    | If no peers, ping the monitor.                 |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd heartbeat grace`` | 32-bit Int | 20 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd mon report interval max`` | 32-bit Int | 120 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd mon report interval min``          | 32-bit Int          | 5                     | PG stats, failures, up-thru, boot.             |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd mon ack timeout``                  | 32-bit Int          | 30                    | Time out a monitor that doesn't ack stats.     |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd min down reporters``               | 32-bit Int          | 1                     | OSDs needed to report a down OSD.              |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd min down reports``                 | 32-bit Int          | 3                     | Times a down OSD must be reported.             |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd default data pool replay window`` | 32-bit Int | 45 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd preserve trimmed log`` | Boolean | true | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd auto mark unfound lost`` | Boolean | false | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd recovery delay start`` | Float | 15 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd recovery max active`` | 32-bit Int | 5 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd recovery max chunk``               | 64-bit Int Unsigned | 1<<20                 | The max size of a push chunk.                  |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd recovery forget lost objects``     | Boolean             | false                 | Off for now.                                   |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd max scrubs`` | 32-bit Int | 1 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd scrub load threshold`` | Float | 0.5 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd scrub min interval`` | Float | 300 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd scrub max interval``               | Float               | 60*60*24              | Once a day.                                    |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd auto weight`` | Boolean | false | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd class error timeout``              | Double              | 60.0                  | In seconds.                                    |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd class timeout``                    | Double              | 60*60.0               | In seconds.                                    |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd class dir``                        | String              | "/rados-classes"      | Prepended with ``CEPH_LIBDIR``.                |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd check for log corruption`` | Boolean | false | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd use stale snap`` | Boolean | false | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd rollback to cluster snap`` | String | "" | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd default notify timeout``           | 32-bit Int Unsigned | 30                    | The default notify timeout, in seconds.        |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd kill backfill at`` | 32-bit Int | 0 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd min pg log entries``               | 32-bit Int Unsigned | 1000                  | Entries to keep in the PG log when trimming.   |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd op complaint time``                | Float               | 30                    | Age in secs that makes an op complaint-worthy. |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+| ``osd command max records`` | 32-bit Int | 256 | |
++-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
=========================
- Configuration reference
+ Configuration Reference
=========================
.. todo:: write me
-OSD (RADOS)
-===========
-
-Monitor
-=======
-
-MDS
-===
-
+.. toctree::
+   :maxdepth: 1
+
+   config-ref/mon-config
+ config-ref/osd-config
+ config-ref/mds-config
+++ /dev/null
-========================
-Ceph Configuration Files
-========================
-When you start the Ceph service, the initialization process activates a series of daemons that run in the background.
-The hosts in a typical RADOS cluster run at least one of three processes or daemons:
-
-- RADOS (``ceph-osd``)
-- Monitor (``ceph-mon``)
-- Metadata Server (``ceph-mds``)
-
-Each process or daemon looks for a ``ceph.conf`` file that provides their configuration settings.
-The default ``ceph.conf`` locations in sequential order include:
-
- 1. ``$CEPH_CONF`` (*i.e.,* the path following the ``$CEPH_CONF`` environment variable)
- 2. ``-c path/path`` (*i.e.,* the ``-c`` command line argument)
- 3. /etc/ceph/ceph.conf
- 4. ``~/.ceph/config``
- 5. ``./ceph.conf`` (*i.e.,* in the current working directory)
-
-The ``ceph.conf`` file provides the settings for each Ceph daemon. Once you have installed the Ceph packages on the OSD Cluster hosts, you need to create
-a ``ceph.conf`` file to configure your OSD cluster.
-
-Creating ``ceph.conf``
-----------------------
-The ``ceph.conf`` file defines:
-
-- Cluster Membership
-- Host Names
-- Paths to Hosts
-- Runtime Options
-
-You can add comments to the ``ceph.conf`` file by preceding comments with a semi-colon (;). For example::
-
- ; <--A semi-colon precedes a comment
- ; A comment may be anything, and always follows a semi-colon on each line.
- ; We recommend that you provide comments in your configuration file(s).
-
-Configuration File Basics
-~~~~~~~~~~~~~~~~~~~~~~~~~
-The ``ceph.conf`` file configures each instance of the three common processes
-in a RADOS cluster.
-
-+-----------------+--------------+--------------+-----------------+-------------------------------------------------+
-| Setting Scope | Process | Setting | Instance Naming | Description |
-+=================+==============+==============+=================+=================================================+
-| All Modules | All | ``[global]`` | N/A | Settings affect all instances of all daemons. |
-+-----------------+--------------+--------------+-----------------+-------------------------------------------------+
-| Groups | Group | ``[group]`` | Alphanumeric | Settings affect all instances within the group |
-+-----------------+--------------+--------------+-----------------+-------------------------------------------------+
-| RADOS | ``ceph-osd`` | ``[osd]`` | Numeric | Settings affect RADOS instances only. |
-+-----------------+--------------+--------------+-----------------+-------------------------------------------------+
-| Monitor | ``ceph-mon`` | ``[mon]`` | Alphanumeric | Settings affect monitor instances only. |
-+-----------------+--------------+--------------+-----------------+-------------------------------------------------+
-| Metadata Server | ``ceph-mds`` | ``[mds]`` | Alphanumeric | Settings affect MDS instances only. |
-+-----------------+--------------+--------------+-----------------+-------------------------------------------------+
-
-Metavariables
-~~~~~~~~~~~~~
-The configuration system supports certain 'metavariables,' which are typically used in ``[global]`` or process/daemon settings.
-If metavariables occur inside a configuration value, Ceph expands them into a concrete value--similar to how Bash shell expansion works.
-
-There are a few different metavariables:
-
-+--------------+----------------------------------------------------------------------------------------------------------+
-| Metavariable | Description |
-+==============+==========================================================================================================+
-| ``$host`` | Expands to the host name of the current daemon. |
-+--------------+----------------------------------------------------------------------------------------------------------+
-| ``$type`` | Expands to one of ``mds``, ``osd``, or ``mon``, depending on the type of the current daemon. |
-+--------------+----------------------------------------------------------------------------------------------------------+
-| ``$id`` | Expands to the daemon identifier. For ``osd.0``, this would be ``0``; for ``mds.a``, it would be ``a``. |
-+--------------+----------------------------------------------------------------------------------------------------------+
-| ``$num`` | Same as ``$id``. |
-+--------------+----------------------------------------------------------------------------------------------------------+
-| ``$name`` | Expands to ``$type.$id``. |
-+--------------+----------------------------------------------------------------------------------------------------------+
-
-Global Settings
-~~~~~~~~~~~~~~~
-The Ceph configuration file supports a hierarchy of settings, where child settings inherit the settings of the parent.
-Global settings affect all instances of all processes in the cluster. Use the ``[global]`` setting for values that
-are common for all hosts in the cluster. You can override each ``[global]`` setting by:
-
-1. Changing the setting in a particular ``[group]``.
-2. Changing the setting in a particular process type (*e.g.,* ``[osd]``, ``[mon]``, ``[mds]`` ).
-3. Changing the setting in a particular process (*e.g.,* ``[osd.1]`` )
-
-Overriding a global setting affects all child processes, except those that you specifically override.
-
-For example::
-
- [global]
- ; Enable authentication between hosts within the cluster.
- auth supported = cephx
-
-Group Settings
-~~~~~~~~~~~~~~
-Group settings affect all instances of all processes in a group. Use the ``[group]`` setting for values that
-are common for all hosts in a group within the cluster. Each group must have a name. For example::
-
- [group primary]
- addr = 10.9.8.7
-
- [group secondary]
- addr = 6.5.4.3
-
-
-Process/Daemon Settings
-~~~~~~~~~~~~~~~~~~~~~~~
-You can specify settings that apply to a particular type of process. When you specify settings under ``[osd]``, ``[mon]`` or ``[mds]`` without specifying a particular instance,
-the setting will apply to all OSDs, monitors or metadata daemons respectively.
-
-Instance Settings
-~~~~~~~~~~~~~~~~~
-You may specify settings for particular instances of an daemon. You may specify an instance by entering its type, delimited by a period (.) and
-by the instance ID. The instance ID for an OSD is always numeric, but it may be alphanumeric for monitors and metadata servers. ::
-
- [osd.1]
- ; settings affect osd.1 only.
- [mon.a1]
- ; settings affect mon.a1 only.
- [mds.b2]
- ; settings affect mds.b2 only.
-
-``host`` and ``addr`` Settings
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The `Hardware Recommendations <../install/hardware_recommendations>`_ section provides some hardware guidelines for configuring the cluster.
-It is possible for a single host to run multiple daemons. For example, a single host with multiple disks or RAIDs may run one ``ceph-osd``
-for each disk or RAID. Additionally, a host may run both a ``ceph-mon`` and a ``ceph-osd`` daemon. Ideally, you will dedicate
-each host to a particular type of process. For example, one host may run ``ceph-osd`` daemons, another host may run a ``ceph-mds`` daemon,
-and other hosts may run ``ceph-mon`` daemons.
-
-Each host has a name identified by the ``host`` setting, and a network location (*i.e.,* domain name or IP address) identified by the ``addr`` setting.
-For example::
-
- [osd.1]
- host = hostNumber1
- addr = 150.140.130.120:1100
- [osd.2]
- host = hostNumber1
- addr = 150.140.130.120:1102
-
-
-Monitor Configuration
-~~~~~~~~~~~~~~~~~~~~~
-Ceph typically deploys with 3 monitors to ensure high availability should a monitor instance crash. An odd number of monitors (*e.g.,* 3) ensures
-that the Paxos algorithm can always reach a majority agreement on which version of the cluster map is the most recent.
-
-.. note:: You may deploy Ceph with a single monitor, but if the instance fails, the lack of a monitor may interrupt data service availability.
-
-Ceph monitors typically listen on port ``6789``.
-
-Example Configuration File
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. literalinclude:: demo-ceph.conf
- :language: ini
-
-
-Configuration File Deployment Options
--------------------------------------
-The most common way to deploy the ``ceph.conf`` file in a cluster is to have all hosts share the same configuration file.
-
-You may create a ``ceph.conf`` file for each host if you wish, or specify a particular ``ceph.conf`` file for a subset of hosts within the cluster. However, using per-host ``ceph.conf``
-configuration files imposes a maintenance burden as the cluster grows. In a typical deployment, an administrator creates a ``ceph.conf`` file on the Administration host and then copies that
-file to each OSD Cluster host.
-
-The current cluster deployment script, ``mkcephfs``, does not make copies of the ``ceph.conf``. You must copy the file manually.
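-
-For example, mirroring the deployment steps later in this document, you might push the file from the Administration host with a small loop (host names are illustrative)::
-
-	for host in myserver01 myserver02 myserver03; do
-	    ssh $host sudo tee /etc/ceph/ceph.conf < ceph.conf
-	done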
-
-
-
-
+++ /dev/null
-[global]
-	; enable authentication between cluster hosts
-	auth supported = cephx
-	; $name expands to $type.$id (e.g., mon.a, osd.0)
-	keyring = /etc/ceph/$name.keyring
-
-[mon]
-	mon data = /srv/mon.$id
-
-[mds]
-
-[osd]
-	osd data = /srv/osd.$id
-	osd journal = /srv/osd.$id.journal
-	; journal size in MB
-	osd journal size = 1000
-
-[mon.a]
- host = myserver01
- mon addr = 10.0.0.101:6789
-
-[mon.b]
- host = myserver02
- mon addr = 10.0.0.102:6789
-
-[mon.c]
- host = myserver03
- mon addr = 10.0.0.103:6789
-
-[osd.0]
- host = myserver01
-
-[osd.1]
- host = myserver02
-
-[osd.2]
- host = myserver03
-
-[mds.a]
- host = myserver01
\ No newline at end of file
+++ /dev/null
-============================
-Deploying Ceph Configuration
-============================
-Ceph's current deployment script does not copy the configuration file you created from the Administration host
-to the OSD Cluster hosts. Copy the configuration file you created (*i.e.,* ``mycluster.conf`` in the example below)
-from the Administration host to ``/etc/ceph/ceph.conf`` on each OSD Cluster host.
-
-::
-
- ssh myserver01 sudo tee /etc/ceph/ceph.conf <mycluster.conf
- ssh myserver02 sudo tee /etc/ceph/ceph.conf <mycluster.conf
- ssh myserver03 sudo tee /etc/ceph/ceph.conf <mycluster.conf
-
-
-The current deployment script does not start the services for you. Start the
-Ceph service on each OSD Cluster host. ::
-
- ssh myserver01 sudo /etc/init.d/ceph start
- ssh myserver02 sudo /etc/init.d/ceph start
- ssh myserver03 sudo /etc/init.d/ceph start
-
-The current deployment script may not create the default server directories. Create
-server directories for each instance of a Ceph daemon.
-
-Using the example ``ceph.conf`` file, you would perform the following:
-
-On ``myserver01``::
-
-	mkdir /srv/osd.0
-	mkdir /srv/mon.a
-
-On ``myserver02``::
-
-	mkdir /srv/osd.1
-	mkdir /srv/mon.b
-
-On ``myserver03``::
-
-	mkdir /srv/osd.2
-	mkdir /srv/mon.c
-
-.. important:: The ``host`` setting determines which host runs each instance of a Ceph daemon.
-
+++ /dev/null
-================================
-Deploying Ceph with ``mkcephfs``
-================================
-
-Once you have copied your Ceph Configuration to the OSD Cluster hosts, you may deploy Ceph with the ``mkcephfs`` script.
-
-.. note:: ``mkcephfs`` is a quick bootstrapping tool. It does not handle more complex operations, such as upgrades.
-
- For production environments, you will deploy Ceph using Chef cookbooks (coming soon!).
-
-To run ``mkcephfs``, execute the following::
-
- $ mkcephfs -a -c <path>/ceph.conf -k mycluster.keyring
-
-The script adds an admin key to ``mycluster.keyring``, which is analogous to a root password. Ceph should begin operating.
-You can check on the health of your Ceph cluster with the following::
-
- ceph -k mycluster.keyring -c mycluster.conf health
-
+++ /dev/null
-===================
-Deploying with Chef
-===================
-
-Coming Soon!
\ No newline at end of file
+++ /dev/null
-==========================
-Creating a Storage Cluster
-==========================
-Ceph can run with a cluster containing thousands of Object Storage Devices (OSDs).
-A minimal system will have at least two OSDs for data replication. To configure OSD clusters, you must
-provide settings in the configuration file. Ceph provides default values for many settings, which you can
-override in the configuration file. Additionally, you can make runtime modifications to the configuration
-using command-line utilities.
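-
-For example, a runtime change might look like the following sketch. It assumes the ``injectargs`` mechanism; the daemon ID, setting, and value are illustrative only::
-
-	ceph osd tell 0 injectargs '--debug-osd 20'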
-
-When Ceph starts, it activates three daemons:
-
-- ``ceph-osd`` (mandatory)
-- ``ceph-mon`` (mandatory)
-- ``ceph-mds`` (mandatory for cephfs only)
-
-Each process, daemon or utility loads the host's configuration file. A process may have information about
-more than one daemon instance (*i.e.,* multiple contexts). A daemon or utility only has information
-about a single daemon instance (a single context).
-
-.. note:: Ceph can run on a single host for evaluation purposes.
-
-- :doc:`Ceph Configuration Files <ceph_conf>`
- - :doc:`OSD Configuration Settings <osd_configuration_settings>`
- - :doc:`Monitor Configuration Settings <mon_configuration_settings>`
- - :doc:`Metadata Server Configuration Settings <mds_configuration_settings>`
-- :doc:`Deploying the Ceph Configuration <deploying_ceph_conf>`
-- :doc:`Deploying Ceph with mkcephfs <deploying_ceph_with_mkcephfs>`
-- :doc:`Deploying Ceph with Chef (coming soon) <deploying_with_chef>`
-
-.. toctree::
- :hidden:
-
- Configuration <ceph_conf>
- [osd] Settings <osd_configuration_settings>
- [mon] Settings <mon_configuration_settings>
- [mds] Settings <mds_configuration_settings>
- Deploy Config <deploying_ceph_conf>
- deploying_ceph_with_mkcephfs
- Chef Coming Soon! <deploying_with_chef>
\ No newline at end of file
+++ /dev/null
-======================================
-Metadata Server Configuration Settings
-======================================
-
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| Setting | Type | Default | Description |
-+===================================+=========================+============+================================================+
-| ``mds_max_file_size`` | 64-bit Integer Unsigned | 1ULL << 40 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_cache_size`` | 32-bit Integer | 100000 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_cache_mid`` | Float | .7 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_mem_max``                   | 32-bit Integer          | 1048576    | In KB.                                         |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_dir_commit_ratio``          | Float                   | .5         |                                                |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_dir_max_commit_size``       | 32-bit Integer          | 90         | In MB.                                         |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_decay_halflife`` | Float | 5 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_beacon_interval`` | Float | 4 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_beacon_grace`` | Float | 15 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_blacklist_interval``        | Float                   | 24.0*60.0  | How long to blacklist failed nodes.            |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_session_timeout``           | Float                   | 60         | Cap bits and leases time out if client idle.   |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_session_autoclose``         | Float                   | 300        | Autoclose idle sessions.                       |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_reconnect_timeout``         | Float                   | 45         | Secs to wait for clients during MDS restart.   |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_tick_interval``             | Float                   | 5          |                                                |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_dirstat_min_interval``      | Float                   | 1          | Minimum interval between dirstat propagations. |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_scatter_nudge_interval``    | Float                   | 5          | How quickly dirstat changes propagate up.      |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_client_prealloc_inos`` | 32-bit Integer | 1000 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_early_reply`` | Boolean | true | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_use_tmap``                  | Boolean                 | true       | Use trivialmap for directory updates.          |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_default_dir_hash``          | 32-bit Integer          |            | Defaults to CEPH_STR_HASH_RJENKINS.            |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_log`` | Boolean | true | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_log_skip_corrupt_events`` | Boolean | false | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_log_max_events`` | 32-bit Integer | -1 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_log_max_segments``          | 32-bit Integer          | 30         | Segment size is defined by the file layout.    |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_log_max_expiring``          | 32-bit Integer          | 20         |                                                |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_log_eopen_size``            | 32-bit Integer          | 100        | Number of open inodes per log entry.           |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_sample_interval``       | Float                   | 3.0        | Balancer sampling interval in seconds.         |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_replicate_threshold`` | Float | 8000 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_unreplicate_threshold`` | Float | 0 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_frag`` | Boolean | false | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_split_size`` | 32-bit Integer | 10000 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_split_rd`` | Float | 25000 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_split_wr`` | Float | 10000 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_split_bits`` | 32-bit Integer | 3 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_merge_size`` | 32-bit Integer | 50 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_merge_rd`` | Float | 1000 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_merge_wr`` | Float | 1000 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_interval``              | 32-bit Integer          | 10         | In seconds.                                    |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_fragment_interval``     | 32-bit Integer          | 5          | In seconds.                                    |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_idle_threshold`` | Float | 0 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_max`` | 32-bit Integer | -1 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_max_until`` | 32-bit Integer | -1 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_mode`` | 32-bit Integer | 0 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_min_rebalance``         | Float                   | .1         | Must be this much above average to export.     |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_min_start``             | Float                   | .2         | If we need less than this, do nothing.         |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_need_min``              | Float                   | .8         | Take within this range of what is needed.      |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_need_max``              | Float                   | 1.2        |                                                |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_midchunk``              | Float                   | .3         | Any subtree bigger than this is taken in full. |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_minchunk``              | Float                   | .001       | Never take anything smaller than this.         |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_target_removal_min``    | 32-bit Integer          | 5          | Min balancer iterations before target removal. |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_bal_target_removal_max``    | 32-bit Integer          | 10         | Max balancer iterations before target removal. |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_replay_interval``           | Float                   | 1.0        | Time to wait before restarting replay.         |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_shutdown_check`` | 32-bit Integer | 0 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_thrash_exports`` | 32-bit Integer | 0 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_thrash_fragments`` | 32-bit Integer | 0 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_dump_cache_on_map`` | Boolean | false | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_dump_cache_after_rejoin`` | Boolean | false | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_verify_scatter`` | Boolean | false | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_debug_scatterstat`` | Boolean | false | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_debug_frag`` | Boolean | false | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_debug_auth_pins`` | Boolean | false | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_debug_subtrees`` | Boolean | false | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_kill_mdstable_at`` | 32-bit Integer | 0 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_kill_export_at`` | 32-bit Integer | 0 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_kill_import_at`` | 32-bit Integer | 0 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_kill_link_at`` | 32-bit Integer | 0 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_kill_rename_at`` | 32-bit Integer | 0 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_wipe_sessions``             | Boolean                 | false      |                                                |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_wipe_ino_prealloc``         | Boolean                 | false      |                                                |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_skip_ino`` | 32-bit Integer | 0 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``max_mds`` | 32-bit Integer | 1 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_standby_for_name`` | String | "" | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_standby_for_rank`` | 32-bit Integer | -1 | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-| ``mds_standby_replay`` | Boolean | false | |
-+-----------------------------------+-------------------------+------------+------------------------------------------------+
-
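-Settings from this table may be declared in the ``[mds]`` section of
-``ceph.conf``; setting names may be written with spaces or underscores
-interchangeably. For example (values illustrative only)::
-
-	[mds]
-	mds cache size = 100000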
-
\ No newline at end of file
+++ /dev/null
-==============================
-Monitor Configuration Settings
-==============================
-
-
-
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| Setting | Type | Default Value | Description |
-+===================================+================+===============+===========================================================+
-| ``mon data`` | String | "" | |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon sync fs threshold``         | 32-bit Integer | 5             | Sync when writing this many objects; 0 to disable.       |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon tick interval``             | 32-bit Integer | 5             |                                                           |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon subscribe interval``        | Double         | 300           |                                                           |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon osd auto mark in``          | Boolean        | false         | Mark any booting OSDs 'in'.                               |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon osd auto mark auto out in`` | Boolean        | true          | Mark booting auto-marked-out OSDs 'in'.                   |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon osd auto mark new in``      | Boolean        | true          | Mark booting new OSDs 'in'.                               |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon osd down out interval``     | 32-bit Integer | 300           | In seconds.                                               |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon lease``                     | Float          | 5             | The lease interval.                                       |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon lease renew interval``      | Float          | 3             | On leader: interval to renew the lease.                   |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon lease ack timeout``         | Float          | 10.0          | On leader: timeout if lease isn't acked by all peons.     |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon clock drift allowed``       | Float          | .050          | Allowed clock drift between monitors.                     |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon clock drift warn backoff``  | Float          | 5             | Exponential backoff for clock drift warnings.             |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon accept timeout``            | Float          | 10.0          | On leader: timeout if a Paxos update isn't accepted.      |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon pg create interval``        | Float          | 30.0          | No more than every 30 seconds.                            |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon pg stuck threshold``        | 32-bit Integer | 300           | Seconds after which PGs can be considered inactive,       |
-|                                   |                |               | unclean, or stale (see ``doc/control.rst``).              |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon osd full ratio``            | Float          | .95           | What % full makes an OSD "full".                          |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon osd nearfull ratio``        | Float          | .85           | What % full makes an OSD nearly full.                     |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon globalid prealloc``         | 32-bit Integer | 100           | How many global IDs to preallocate.                       |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon osd report timeout``        | 32-bit Integer | 900           | Grace period before declaring unresponsive OSDs dead.     |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon force standby active``      | Boolean        | true          | Should monitors force standby-replay MDS to be active.    |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon min osdmap epochs`` | 32-bit Integer | 500 | |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon max pgmap epochs`` | 32-bit Integer | 500 | |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon max log epochs`` | 32-bit Integer | 500 | |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon probe timeout`` | Double | 2.0 | |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-| ``mon slurp timeout`` | Double | 10.0 | |
-+-----------------------------------+----------------+---------------+-----------------------------------------------------------+
-
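-As with the other daemon types, these settings may be declared in the ``[mon]``
-section of ``ceph.conf``. For example (values illustrative only)::
-
-	[mon]
-	mon data = /srv/mon.$id
-	mon osd down out interval = 300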
\ No newline at end of file
+++ /dev/null
-==========================
-OSD Configuration Settings
-==========================
-
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| Setting | Type | Default Value | Description |
-+=========================================+=====================+=======================+================================================+
-| ``osd auto upgrade tmap`` | Boolean | True | Uses ``tmap`` for ``omap`` on old objects. |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd tmapput sets users tmap`` | Boolean | False | Uses ``tmap`` for debugging only. |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd data`` | String | None | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd journal`` | String | None | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd journal size`` | 32-bit Int | 0 | The size of the journal in MBs. |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd max write size``                  | 32-bit Int          | 90                    | The maximum size of a write in MB.             |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd balance reads`` | Boolean | False | Load balance reads? |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd shed reads`` | 32-bit Int | False (0) | Forward from primary to replica. |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd shed reads min latency`` | Double | .01 | The minimum local latency. |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd shed reads min latency diff`` | Double | 1.5 | Percentage difference from peer. 150% default. |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd client message size cap``         | 64-bit Int Unsigned | 500*1024L*1024L       | Client data allowed in memory. 500MB default.  |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd stat refresh interval``           | Double              | .5                    | The status refresh interval in seconds.        |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd pg bits``                         | 32-bit Int          | 6                     | Placement group bits per OSD.                  |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd pgp bits``                        | 32-bit Int          | 4                     | Placement group placement (PGP) bits per OSD.  |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd pg layout``                       | 32-bit Int          | 2                     | Placement group layout.                        |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd min rep`` | 32-bit Int | 1 | Need a description. |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd max rep`` | 32-bit Int | 10 | Need a description. |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd min raid width`` | 32-bit Int | 3 | The minimum RAID width. |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd max raid width`` | 32-bit Int | 2 | The maximum RAID width. |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd pool default crush rule`` | 32-bit Int | 0 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd pool default size`` | 32-bit Int | 2 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd pool default pg num`` | 32-bit Int | 8 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd pool default pgp num`` | 32-bit Int | 8 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd map cache max`` | 32-bit Int | 250 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd map message max``                 | 32-bit Int          | 100                   | Max maps per MOSDMap message.                  |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd op threads``                      | 32-bit Int          | 2                     | 0 means no threading.                          |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd disk threads``                    | 32-bit Int          | 1                     |                                                |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd recovery threads``                | 32-bit Int          | 1                     |                                                |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd recover clone overlap``           | Boolean             | false                 | Preserve clone overlap in recovery/migration.  |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd backfill scan min`` | 32-bit Int | 64 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd backfill scan max`` | 32-bit Int | 512 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd op thread timeout`` | 32-bit Int | 30 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd backlog thread timeout`` | 32-bit Int | 60*60*1 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd recovery thread timeout`` | 32-bit Int | 30 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd snap trim thread timeout`` | 32-bit Int | 60*60*1 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd scrub thread timeout`` | 32-bit Int | 60 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd scrub finalize thread timeout`` | 32-bit Int | 60*10 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd remove thread timeout`` | 32-bit Int | 60*60 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd command thread timeout`` | 32-bit Int | 10*60 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd age`` | Float | .8 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd age time`` | 32-bit Int | 0 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd heartbeat interval`` | 32-bit Int | 1 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd mon heartbeat interval``          | 32-bit Int          | 30                    | If no peers, ping the monitor.                 |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd heartbeat grace``                 | 32-bit Int          | 20                    |                                                |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd mon report interval max``         | 32-bit Int          | 120                   |                                                |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd mon report interval min``         | 32-bit Int          | 5                     | PG stats, failures, up-thru, and boot.         |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd mon ack timeout``                 | 32-bit Int          | 30                    | Time out a mon if it doesn't ack stats.        |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd min down reporters``              | 32-bit Int          | 1                     | Number of OSDs needed to report a down OSD.    |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd min down reports``                | 32-bit Int          | 3                     | Number of times a down OSD must be reported.   |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd default data pool replay window`` | 32-bit Int | 45 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd preserve trimmed log`` | Boolean | true | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd auto mark unfound lost`` | Boolean | false | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd recovery delay start`` | Float | 15 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd recovery max active`` | 32-bit Int | 5 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd recovery max chunk``              | 64-bit Int Unsigned | 1<<20                 | Max size of a push chunk.                      |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd recovery forget lost objects``    | Boolean             | false                 | Off for now.                                   |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd max scrubs`` | 32-bit Int | 1 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd scrub load threshold`` | Float | 0.5 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd scrub min interval`` | Float | 300 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd scrub max interval``              | Float               | 60*60*24              | Once a day.                                    |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd auto weight``                     | Boolean             | false                 |                                                |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd class error timeout``             | Double              | 60.0                  | In seconds.                                    |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd class timeout``                   | Double              | 60*60.0               | In seconds.                                    |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd class dir``                       | String              | "/rados-classes"      | Relative to the Ceph ``libdir``.               |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd check for log corruption`` | Boolean | false | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd use stale snap`` | Boolean | false | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd rollback to cluster snap`` | String | "" | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd default notify timeout``          | 32-bit Int Unsigned | 30                    | Default notify timeout in seconds.             |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd kill backfill at``                | 32-bit Int          | 0                     |                                                |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd min pg log entries``              | 32-bit Int Unsigned | 1000                  | Entries to keep in the PG log when trimming.   |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd op complaint time``               | Float               | 30                    | Age in secs that makes an op complaint-worthy. |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
-| ``osd command max records`` | 32-bit Int | 256 | |
-+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
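-
-These settings may be declared in the ``[osd]`` section of ``ceph.conf``, as in
-the example configuration file earlier in this document::
-
-	[osd]
-	osd data = /srv/osd.$id
-	osd journal = /srv/osd.$id.journal
-	osd journal size = 1000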
\ No newline at end of file
-===============
-Welcome to Ceph
-===============
-Ceph uniquely delivers **object, block, and file storage in one unified system**. Ceph is highly reliable, easy to manage, and free. The power of Ceph can transform your company’s IT infrastructure and your ability to manage vast amounts of data. Ceph delivers extraordinary scalability--thousands of clients accessing petabytes to exabytes of data. Ceph leverages commodity hardware and intelligent daemons to accommodate large numbers of storage hosts, which communicate with each other to replicate data, and redistribute data dynamically. Ceph's cluster of monitors oversees the hosts in the Ceph storage cluster to ensure that the storage hosts are running smoothly.
+=================
+ Welcome to Ceph
+=================
+Ceph uniquely delivers **object, block, and file storage in one unified
+system**. Ceph is highly reliable, easy to manage, and free. The power of Ceph
+can transform your company’s IT infrastructure and your ability to manage vast
+amounts of data. Ceph delivers extraordinary scalability--thousands of clients
+accessing petabytes to exabytes of data. Ceph leverages commodity hardware and
+intelligent daemons to accommodate large numbers of storage hosts, which
+communicate with each other to replicate data, and redistribute data
+dynamically. Ceph's cluster of monitors oversees the hosts in the Ceph storage
+cluster to ensure that the storage hosts are running smoothly.
.. image:: images/stack.png
-Ceph Development Status
-=======================
-Ceph has been under development as an open source project since 2004, and its current focus
-is on stability. The Ceph file system is functionally complete, but has not been tested well enough at scale
-and under load to recommend it for a production environment yet. We recommend deploying Ceph for testing
-and evaluation. We do not recommend deploying Ceph into a production environment or storing valuable data
-until stress testing is complete. Ceph is developed on Linux. You may attempt to deploy Ceph on other platforms,
-but Linux is the target platform for the Ceph project. You can access the Ceph file system from other operating systems
-using NFS or Samba re-exports.
-
-
.. toctree::
:maxdepth: 1
:hidden:
start/index
install/index
- create_cluster/index
+ config-cluster/index
ops/index
rec/index
config
control
api/index
+ source/index
Internals <dev/index>
man/index
architecture
+++ /dev/null
-===================
-Build Ceph Packages
-===================
-
-To build packages, you must clone the `Ceph`_ repository.
-You can create installation packages from the latest code using ``dpkg-buildpackage`` for Debian/Ubuntu
-or ``rpmbuild`` for the RPM Package Manager.
-
-.. tip:: When building on a multi-core CPU, use the ``-j`` option with twice the number of cores.
-    For example, use ``-j4`` for a dual-core processor to accelerate the build.
-
-
-Advanced Package Tool (APT)
----------------------------
-
-To create ``.deb`` packages for Debian/Ubuntu, ensure that you have cloned the `Ceph`_ repository,
-installed the `build prerequisites`_ and installed ``debhelper``::
-
- $ sudo apt-get install debhelper
-
-Once you have installed ``debhelper``, you can build the packages::
-
- $ sudo dpkg-buildpackage
-
-For multi-processor CPUs, use the ``-j`` option to accelerate the build.
-
-RPM Package Manager
--------------------
-
-To create ``.rpm`` packages, ensure that you have cloned the `Ceph`_ repository,
-installed the `build prerequisites`_ and installed ``rpm-build`` and ``rpmdevtools``::
-
- $ yum install rpm-build rpmdevtools
-
-Once you have installed the tools, setup an RPM compilation environment::
-
- $ rpmdev-setuptree
-
-Fetch the source tarball for the RPM compilation environment::
-
- $ wget -P ~/rpmbuild/SOURCES/ http://ceph.newdream.net/download/ceph-<version>.tar.gz
-
-Build the RPM packages::
-
- $ rpmbuild -tb ~/rpmbuild/SOURCES/ceph-<version>.tar.gz
-
-Unlike ``dpkg-buildpackage``, ``rpmbuild`` does not take a ``-j`` option; build parallelism for RPMs is controlled by the spec file.
-
-
-.. _build prerequisites: ../build_prerequisites
-.. _Ceph: ../cloning_the_ceph_source_code_repository
\ No newline at end of file
+++ /dev/null
-===================
-Build Prerequisites
-===================
-
-Before you can build Ceph source code or Ceph documentation, you need to install several libraries and tools.
-
-.. tip:: Check this section to see if there are specific prerequisites for your Linux/Unix distribution.
-
-Prerequisites for Building Ceph Source Code
-===========================================
-Ceph provides ``autoconf`` and ``automake`` scripts to get you started quickly. Ceph build scripts
-depend on the following:
-
-- ``autotools-dev``
-- ``autoconf``
-- ``automake``
-- ``cdbs``
-- ``gcc``
-- ``g++``
-- ``git``
-- ``libboost-dev``
-- ``libedit-dev``
-- ``libssl-dev``
-- ``libtool``
-- ``libfcgi``
-- ``libfcgi-dev``
-- ``libfuse-dev``
-- ``linux-kernel-headers``
-- ``libcrypto++-dev``
-- ``libcrypto++``
-- ``libexpat1-dev``
-- ``libgtkmm-2.4-dev``
-- ``pkg-config``
-- ``libcurl4-gnutls-dev``
-
-On Ubuntu, execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
-
-	$ sudo apt-get install autotools-dev autoconf automake cdbs gcc g++ git libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev libcrypto++ libexpat1-dev libgtkmm-2.4-dev pkg-config libcurl4-gnutls-dev
-
-On Debian/Squeeze, execute ``aptitude install`` for each dependency that isn't installed on your host. ::
-
-	$ aptitude install autotools-dev autoconf automake cdbs gcc g++ git libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev libcrypto++ libexpat1-dev libgtkmm-2.4-dev pkg-config libcurl4-gnutls-dev
-
-
-Ubuntu Requirements
--------------------
-
-- ``uuid-dev``
-- ``libkeyutils-dev``
-- ``libgoogle-perftools-dev``
-- ``libatomic-ops-dev``
-- ``libaio-dev``
-- ``libgdata-common``
-- ``libgdata13``
-
-Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
-
-	$ sudo apt-get install uuid-dev libkeyutils-dev libgoogle-perftools-dev libatomic-ops-dev libaio-dev libgdata-common libgdata13
-
-Debian
-------
-Alternatively, you may also install::
-
- $ aptitude install fakeroot dpkg-dev
- $ aptitude install debhelper cdbs libexpat1-dev libatomic-ops-dev
-
-openSUSE 11.2 (and later)
--------------------------
-
-- ``boost-devel``
-- ``gcc-c++``
-- ``libedit-devel``
-- ``libopenssl-devel``
-- ``fuse-devel`` (optional)
-
-Execute ``zypper install`` for each dependency that isn't installed on your host. ::
-
-	$ zypper install boost-devel gcc-c++ libedit-devel libopenssl-devel fuse-devel
-
-Prerequisites for Building Ceph Documentation
-=============================================
-Ceph utilizes Python's Sphinx documentation tool. For details on
-the Sphinx documentation tool, refer to `Sphinx <http://sphinx.pocoo.org>`_.
-Follow the directions at `Sphinx 1.1.3 <http://pypi.python.org/pypi/Sphinx>`_
-to install Sphinx. To run Sphinx with ``admin/build-doc``, at least the following are required:
-
-- ``python-dev``
-- ``python-pip``
-- ``python-virtualenv``
-- ``libxml2-dev``
-- ``libxslt-dev``
-- ``doxygen``
-- ``ditaa``
-- ``graphviz``
-
-Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
-
- $ sudo apt-get install python-dev python-pip python-virtualenv libxml2-dev libxslt-dev doxygen ditaa graphviz
-
+++ /dev/null
-=============
-Building Ceph
-=============
-
-Ceph provides build scripts for source code and for documentation.
-
-Building Ceph
-=============
-Ceph provides ``automake`` and ``configure`` scripts to streamline the build process. To build Ceph, navigate to your cloned Ceph repository and execute the following::
-
- $ cd ceph
- $ ./autogen.sh
- $ ./configure
- $ make
-
-You can use ``make -j`` to execute multiple jobs depending upon your system. For example::
-
- $ make -j4
-
-To install Ceph locally, you may also use::
-
- $ make install
-
-If you install Ceph locally, ``make`` will place the executables in ``/usr/local/bin``.
-You may add the ``ceph.conf`` file to the ``/usr/local/bin`` directory to run an evaluation environment of Ceph from a single directory.
-
-Building Ceph Documentation
-===========================
-Ceph utilizes Python’s Sphinx documentation tool. For details on the Sphinx documentation tool, refer to `Sphinx <http://sphinx.pocoo.org>`_. To build the Ceph documentation, navigate to the Ceph repository and execute the build script::
-
- $ cd ceph
- $ admin/build-doc
-
-Once you build the documentation set, you may navigate to the source directory to view it::
-
- $ cd build-doc/output
-
-There should be an ``/html`` directory and a ``/man`` directory containing documentation in HTML and manpage formats respectively.
+++ /dev/null
-=======================================
-Cloning the Ceph Source Code Repository
-=======================================
-To check out the Ceph source code, you must have ``git`` installed
-on your local host. To install ``git``, execute::
-
- $ sudo apt-get install git
-
-You must also have a ``github`` account. If you do not have a
-``github`` account, go to `github.com <http://github.com>`_ and register.
-Follow the directions for setting up git at `Set Up Git <http://help.github.com/linux-set-up-git/>`_.
-
-Generate SSH Keys
------------------
-You must generate SSH keys for github to clone the Ceph
-repository. If you do not have SSH keys for ``github``, execute::
-
- $ ssh-keygen -d
-
-Get the key to add to your ``github`` account (the following example assumes you used the default file path)::
-
- $ cat .ssh/id_dsa.pub
-
-Copy the public key.
-
-Add the Key
------------
-Go to your ``github`` account,
-click on "Account Settings" (i.e., the 'tools' icon); then,
-click "SSH Keys" on the left side navbar.
-
-Click "Add SSH key" in the "SSH Keys" list, enter a name for
-the key, paste the key you generated, and press the "Add key"
-button.
-
-Clone the Source
-----------------
-To clone the Ceph source code repository, execute::
-
- $ git clone git@github.com:ceph/ceph.git
-
-Once ``git clone`` executes, you should have a full copy of the Ceph repository.
-
-Clone the Submodules
---------------------
-Before you can build Ceph, you must navigate to your new repository and initialize and update the submodules::
-
- $ cd ceph
- $ git submodule init
- $ git submodule update
-
-.. tip:: Make sure you maintain the latest copies of these submodules. Running ``git status`` will tell you if the submodules are out of date::
-
- $ git status
-
-Choose a Branch
----------------
-Once you clone the source code and submodules, your Ceph repository will be on the ``master`` branch by default, which is the unstable development branch. You may choose other branches too.
-
-- ``master``: The unstable development branch.
-- ``stable``: The bugfix branch.
-- ``next``: The release candidate branch.
-
-::
-
- git checkout master
-
+++ /dev/null
-==========================
-Downloading a Ceph Release
-==========================
-As Ceph development progresses, the Ceph team releases new versions of the source code. You may download source code for Ceph releases here:
-
-`Ceph Releases <http://ceph.newdream.net/download/>`_
\ No newline at end of file
+++ /dev/null
-=========================
-Building Ceph from Source
-=========================
-
-You can build Ceph from source by downloading a release or cloning the ``ceph`` repository at github. If you intend to build Ceph
-from source, please see the build prerequisites first. Making sure you have all the prerequisites will save you time.
-
-- :doc:`Build Prerequisites <build_from_source/build_prerequisites>`
-- :doc:`Downloading a Ceph Release <build_from_source/downloading_a_ceph_release>`
-- :doc:`Cloning the Ceph Source Code Repository <build_from_source/cloning_the_ceph_source_code_repository>`
-- :doc:`Building Ceph <build_from_source/building_ceph>`
-- :doc:`Building Ceph Install Packages <build_from_source/build_packages>`
-
-
-.. toctree::
- :hidden:
-
- Prerequisites <build_from_source/build_prerequisites>
- Get a Release <build_from_source/downloading_a_ceph_release>
- Clone the Source <build_from_source/cloning_the_ceph_source_code_repository>
- Build the Source <build_from_source/building_ceph>
- Build a Package <build_from_source/build_packages>
\ No newline at end of file
-==================================
-Downloading Debian/Ubuntu Packages
-==================================
-We automatically build Debian/Ubuntu packages for any branches or tags that appear in
-the ``ceph.git`` `repository <http://github.com/ceph/ceph>`_. If you want to build your own packages
-(e.g., for RPM), see `Build Ceph Packages <../build_from_source/build_packages>`_.
-
-When you download release packages, you will receive the latest package build, which may be several weeks behind the current release
-or the most recent code. It may contain bugs that have already been fixed in the most recent versions of the code. Until packages
-contain only stable code, you should carefully consider the tradeoffs of installing from a package or retrieving the latest release
+====================================
+ Downloading Debian/Ubuntu Packages
+====================================
+We automatically build Debian/Ubuntu packages for any branches or tags that
+appear in the ``ceph.git`` `repository <http://github.com/ceph/ceph>`_. If you
+want to build your own packages (*e.g.,* for RPM), see
+`Build Ceph Packages <../../source/build_packages>`_.
+
+When you download release packages, you will receive the latest package build,
+which may be several weeks behind the current release or the most recent code.
+It may contain bugs that have already been fixed in the most recent versions of
+the code. Until packages contain only stable code, you should carefully consider
+the tradeoffs of installing from a package or retrieving the latest release
or the most current source code and building Ceph.
-When you execute the following commands to install the Debian/Ubuntu Ceph packages, replace ``{ARCH}`` with the
-architecture of your CPU (e.g., ``amd64`` or ``i386``), ``{DISTRO}`` with the code name of your operating system
-(e.g., ``precise``, rather than the OS version number) and ``{BRANCH}`` with the version of Ceph you want to
-run (e.g., ``master``, ``stable``, ``unstable``, ``v0.44``, etc.).
+When you execute the following commands to install the Debian/Ubuntu Ceph
+packages, replace ``{ARCH}`` with the architecture of your CPU (*e.g.,* ``amd64``
+or ``i386``), ``{DISTRO}`` with the code name of your operating system
+(*e.g.,* ``precise``, rather than the OS version number) and ``{BRANCH}`` with
+the version of Ceph you want to run (*e.g.,* ``master``, ``stable``, ``unstable``,
+``v0.44``, *etc.*).
Adding Release Packages to APT
------------------------------
-We provide stable release packages for Debian/Ubuntu, which are signed signed with the ``release.asc`` key.
-Click `here <http://ceph.newdream.net/debian/dists>`_ to see the distributions and branches supported.
-To install a release package, you must first add a release key. ::
+We provide stable release packages for Debian/Ubuntu, which are signed with
+the ``release.asc`` key. Click `here <http://ceph.newdream.net/debian/dists>`_
+to see the distributions and branches supported. To install a release package,
+you must first add a release key. ::
 $ wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc \
 | sudo apt-key add -
-For Debian/Ubuntu releases, we use the Advanced Package Tool (APT). To retrieve the release packages and updates
-and install them with ``apt``, you must add a ``ceph.list`` file to your ``apt`` configuration with the following
-path::
+For Debian/Ubuntu releases, we use the Advanced Package Tool (APT). To retrieve
+the release packages and updates and install them with ``apt``, you must add a
+``ceph.list`` file to your ``apt`` configuration with the following path::
 /etc/apt/sources.list.d/ceph.list
deb http://ceph.newdream.net/debian/{BRANCH}/ {DISTRO} main
-Remember to replace ``{BRANCH}`` with the branch you want to use and replace ``{DISTRO}`` with the Linux distribution for your host. Then,
-save the file.
+Remember to replace ``{BRANCH}`` with the branch you want to use and replace
+``{DISTRO}`` with the Linux distribution for your host. Then, save the file.
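+
+For example, a hypothetical host running Ubuntu ``precise`` that tracks the
+``stable`` branch would use::
+
+ # example values only; substitute your own branch and distribution
+ deb http://ceph.newdream.net/debian/stable/ precise main
+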
Adding Autobuild Packages to APT
--------------------------------
-We provide unstable release packages for Debian/Ubuntu, which contain the latest code and bug fixes.
-The autobuild packages are signed signed with the ``autobuild.asc`` key. To install an autobuild package,
-you must first add an autobuild key::
+We provide unstable release packages for Debian/Ubuntu, which contain the
+latest code and bug fixes. The autobuild packages are signed with the
+``autobuild.asc`` key. To install an autobuild package, you must first
+add an autobuild key::
 wget -q -O- https://raw.github.com/ceph/ceph/master/keys/autobuild.asc \
 | sudo apt-key add -
-For Debian/Ubuntu releases, we use the Advanced Package Tool (APT). To retrieve the autobuild packages and updates
-and install them with ``apt``, you must add a ``ceph.list`` file to your ``apt`` configuration with the following
-path::
+.. warning:: The following commands make your computer trust any code
+ that makes it into ``ceph.git``, including work in progress
+ branches and versions of code with possible security issues (that
+ were fixed afterwards). Use at your own risk!
+
+For Debian/Ubuntu releases, we use the Advanced Package Tool (APT). To
+retrieve the autobuild packages and updates and install them with ``apt``,
+you must add a ``ceph.list`` file to your ``apt`` configuration with the
+following path::
 /etc/apt/sources.list.d/ceph.list
deb http://ceph.newdream.net/debian-snapshot-amd64/{BRANCH}/ {DISTRO} main
deb-src http://ceph.newdream.net/debian-snapshot-amd64/{BRANCH}/ {DISTRO} main
-Remember to replace ``{BRANCH}`` with the branch you want to use and replace ``{DISTRO}`` with the Linux distribution for your host. Then,
-save the file.
+Remember to replace ``{BRANCH}`` with the branch you want to use and replace
+``{DISTRO}`` with the Linux distribution for your host. Then, save the file.
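+
+For example, a hypothetical 64-bit host running Ubuntu ``natty`` that tracks
+the ``master`` branch would use::
+
+ # example values only; substitute your own branch and distribution
+ deb http://ceph.newdream.net/debian-snapshot-amd64/master/ natty main
+ deb-src http://ceph.newdream.net/debian-snapshot-amd64/master/ natty main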
+
Downloading Packages
--------------------
-Once you add either release or autobuild packages for Debian/Ubuntu, you may download them with ``apt`` as follows::
+Once you add either release or autobuild packages for Debian/Ubuntu, you
+should refresh the ``apt`` index so that ``apt`` can download them::
sudo apt-get update
+++ /dev/null
-=========================================
-Hard Disk and File System Recommendations
-=========================================
-
-.. important:: Disable disk caching and asynchronous write.
-
-Ceph aims for data safety, which means that when the application receives notice that data was
-written to the disk, that data was actually written to the disk and not still in a transient state in
-a buffer or cache pending a lazy write to the hard disk.
-
-For data safety, you should mount your file system with caching disabled. Your file system should be
-mounted with ``sync`` and NOT ``async``. For example, your ``fstab`` file would reflect ``sync``. ::
-
- /dev/hda / xfs sync 0 0
-
-Use ``hdparm`` to disable write caching on the hard disk::
-
- $ hdparm -W 0 /dev/hda 0
-
-
-Ceph OSDs depend on the Extended Attributes (XATTRs) of the underlying file system for:
-
-- Internal object state
-- Snapshot metadata
-- RADOS Gateway Access Control Lists (ACLs).
-
-Ceph OSDs rely heavily upon the stability and performance of the underlying file system. The
-underlying file system must provide sufficient capacity for XATTRs. File system candidates for
-Ceph include B tree and B+ tree file systems such as:
-
-- ``btrfs``
-- ``XFS``
-
-.. warning:: XATTR limits.
-
- The RADOS Gateway's ACL and Ceph snapshots easily surpass the 4-kilobyte limit for XATTRs in ``ext4``,
- causing the ``ceph-osd`` process to crash. So ``ext4`` is a poor file system choice if
- you intend to deploy the RADOS Gateway or use snapshots. Version 0.45 or newer uses ``leveldb`` to
- bypass this limitation.
-
-.. tip:: Use ``xfs`` initially and ``btrfs`` when it is ready for production.
-
- The Ceph team believes that the best performance and stability will come from ``btrfs.``
- The ``btrfs`` file system has internal transactions that keep the local data set in a consistent state.
- This makes OSDs based on ``btrfs`` simple to deploy, while providing scalability not
- currently available from block-based file systems. The 64-kb XATTR limit for ``xfs``
- XATTRS is enough to accommodate RDB snapshot metadata and RADOS Gateway ACLs. So ``xfs`` is the second-choice
- file system of the Ceph team in the long run, but ``xfs`` is currently more stable than ``btrfs``.
- If you only plan to use RADOS and ``rbd`` without snapshots and without ``radosgw``, the ``ext4``
- file system should work just fine.
-
-========================
-Hardware Recommendations
-========================
-Ceph runs on commodity hardware and a Linux operating system over a TCP/IP network. The hardware
-recommendations for different processes/daemons differ considerably.
+==========================
+ Hardware Recommendations
+==========================
+Ceph runs on commodity hardware and a Linux operating system over a TCP/IP
+network. The hardware recommendations for different processes/daemons differ
+considerably.
-OSD hosts should have ample data storage in the form of a hard drive or a RAID. Ceph OSDs run the RADOS service, calculate data placement with CRUSH, and maintain their
-own copy of the cluster map. Therefore, OSDs should have a reasonable amount of processing power.
+OSD hosts should have ample data storage in the form of a hard drive or a RAID.
+Ceph OSDs run the RADOS service, calculate data placement with CRUSH, and
+maintain their own copy of the cluster map. Therefore, OSDs should have a
+reasonable amount of processing power.
-Ceph monitors require enough disk space for the cluster map, but usually do not encounter heavy loads. Monitor hosts do not need to be very powerful.
+Ceph monitors require enough disk space for the cluster map, but usually do
+not encounter heavy loads. Monitor hosts do not need to be very powerful.
-Ceph metadata servers distribute their load. However, metadata servers must be capable of serving their data quickly. Metadata servers should have strong processing capability and plenty of RAM.
+Ceph metadata servers distribute their load. However, metadata servers must be
+capable of serving their data quickly. Metadata servers should have strong
+processing capability and plenty of RAM.
.. note:: If you are not using the Ceph File System, you do not need a metadata server.
+++ /dev/null
-====================
-Host Recommendations
-====================
-We recommend using a dedicated Administration host for larger deployments, particularly when you intend to build Ceph from source
-or build your own packages.
-
-.. important:: The Administration host must have ``root`` access to OSD Cluster hosts for installation and maintenance.
-
-If you are installing Ceph on a single host for the first time to learn more about it, your local host is effectively your Administration host.
-
-Enable Extended Attributes (XATTRs)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-If you are using a file system other than ``XFS`` or ``btrfs``, you need to enable extended attributes. ::
-
- mount -t ext4 -o user_xattr /dev/hda mount/mount_point
-
-Install Build Prerequisites
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Install `build prerequisites`_ on your Administration host machine. If you want to
-build install packages on each OSD Cluster host, install the `build prerequisites`_
-on each OSD Cluster host.
-
-Configure SSH
-~~~~~~~~~~~~~
-Before you can install and configure Ceph on an OSD Cluster host you need to configure SSH on
-the Administration host and on each OSD Cluster host.
-
-You must be able to login via SSH from your Administration host to each OSD Cluster host
-as ``root`` using the short name to identify the OSD Cluster host. Your Administration host must be able
-to connect to each OSD Cluster host using the OSD Cluster host's short name, not its full domain name (e.g., ``shortname``
-not ``shortname.domain.com``).
-
-To connect to an OSD Cluster host from your Administration host using the OSD Cluster short name only,
-add a host configuration for the OSD Cluster host to your ``~/.ssh_config file``. You must add an entry
-for each host in your cluster. Set the user name to ``root`` or a username with root privileges. For example::
-
- Host shortname1
- Hostname shortname1.domain.com
- User root
- Host shortname2
- Hostname shortname2.domain.com
- User root
-
-You must be able to use ``sudo`` over SSH from the Administration host on each OSD Cluster host
-without ``sudo`` prompting you for a password.
-
-If you have a public key for your root password in ``.ssh/id_rsa.pub``, you must copy the key and append it
-to the contents of the ``.ssh/authorized_keys`` file on each OSD Cluster host. Create the ``.ssh/authorized_keys``
-file if it doesn't exist. When you open an SSH connection from the Administration host to an OSD Cluster host,
-SSH uses the private key in the home directory of the Administration host and tries to match it with a public
-key in the home directory of the OSD Cluster host. If the SSH keys match, SSH will log you in automatically.
-If the SSH keys do not match, SSH will prompt you for a password.
-
-To generate an SSH key pair on your Administration host, log in with ``root`` permissions, go to your ``/home`` directory and enter the following::
-
- $ ssh-keygen
-
-Add Packages to OSD Cluster Hosts
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Add the packages you downloaded or built on the Administration host to each OSD Cluster host. Perform the same steps
-from `Installing Debian/Ubuntu Packages`_ for each OSD Cluster host. To expedite adding
-the ``etc/apt/sources.list.d/ceph.list`` file to each OSD Cluster host, consider using ``tee``.
-::
-
- $ sudo tee /etc/apt/sources.list.d/ceph.list <<EOF
-
-A prompt will appear, and you can add lines to the ``ceph.list`` file. For release packages, enter::
-
- > deb http://ceph.newdream.net/debian/{BRANCH}/ {DISTRO} main
-
-For snapshot packages, enter::
-
- > deb http://ceph.newdream.net/debian-snapshot-amd64/{BRANCH}/ {DISTRO} main
- > deb-src http://ceph.newdream.net/debian-snapshot-amd64/{BRANCH}/ {DISTRO} main
-
-For packages you built on your Administration host, consider making them accessible via HTTP, and enter::
-
- > deb http://{adminhostname}.domainname.com/{package directory}
-
-Once you have added the package directories, close the file. ::
-
- > EOF
-
-
-.. _build prerequisites: ../build_from_source/build_prerequisites
-.. _Installing Debian/Ubuntu Packages: ../download_packages
\ No newline at end of file
-==========================
-Installing Ceph Components
-==========================
-Storage clusters are the foundation of the Ceph system. Ceph storage hosts provide object storage.
-Clients access the Ceph storage cluster directly from an application (using ``librados``),
-over an object storage protocol such as Amazon S3 or OpenStack Swift (using ``radosgw``), or with a block
-device (using ``rbd``). To begin using Ceph, you must first set up a storage cluster.
+=================
+ Installing Ceph
+=================
+Storage clusters are the foundation of the Ceph system. Ceph storage hosts
+provide object storage. Clients access the Ceph storage cluster directly from
+an application (using ``librados``), over an object storage protocol such as
+Amazon S3 or OpenStack Swift (using ``radosgw``), or with a block device
+(using ``rbd``). To begin using Ceph, you must first set up a storage cluster.
-The following sections provide guidance for configuring a storage cluster and installing Ceph components:
-
-1. :doc:`Hardware Recommendations <hardware_recommendations>`
-2. :doc:`File System Recommendations <file_system_recommendations>`
-3. :doc:`Host Recommendations <host_recommendations>`
-4. :doc:`Download Ceph Packages <download_packages>`
-5. :doc:`Building Ceph from Source <building_ceph_from_source>`
-6. :doc:`Installing Packages <installing_packages>`
+The following sections provide guidance for configuring a storage cluster and
+installing Ceph components:
.. toctree::
- :hidden:
- Hardware Recs <hardware_recommendations>
- File System Recs <file_system_recommendations>
- Host Recs <host_recommendations>
+ Hardware Recommendations <hardware_recommendations>
Download Packages <download_packages>
- Build From Source <building_ceph_from_source>
Install Packages <installing_packages>
-========================
-Installing Ceph Packages
-========================
-Once you have downloaded or built Ceph packages, you may install them on your Admin host and OSD Cluster hosts.
-
+==========================
+ Installing Ceph Packages
+==========================
+Once you have downloaded or built Ceph packages, you may install them on your
+Admin host and OSD Cluster hosts.
.. important:: All hosts should be running the same package version.
- To ensure that you are running the same version on each host with APT, you may
- execute ``sudo apt-get update`` on each host before you install the packages.
-
+ To ensure that you are running the same version on each host with APT,
+ you may execute ``sudo apt-get update`` on each host before you install
+ the packages.
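+
+For example, on a hypothetical three-host cluster, you might refresh the
+package index on each host over SSH::
+
+ # replace myserverNN with your own short host names
+ $ ssh myserver01 sudo apt-get update
+ $ ssh myserver02 sudo apt-get update
+ $ ssh myserver03 sudo apt-get update
+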
Installing Packages with APT
----------------------------
Once you download or build the packages and add your packages to APT
-(see `Downloading Debian/Ubuntu Packages <../download_packages>`_), you may install them as follows::
+(see `Downloading Debian/Ubuntu Packages <../download_packages>`_), you may
+install them as follows::
$ sudo apt-get install ceph
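+
+Afterward, one way to spot-check that every host ended up with the same
+version is::
+
+ $ ceph -v
+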
-
Installing Packages with RPM
----------------------------
-Once you have built your RPM packages, you may install them as follows::
+You may install RPM packages as follows::
rpm -i rpmbuild/RPMS/x86_64/ceph-*.rpm
+.. note:: We do not build RPM packages at this time. You may build them
+   yourself by downloading the source code.
-Proceed to Creating a Cluster
------------------------------
-Once you have prepared your hosts and installed Ceph pages, proceed to `Creating a Storage Cluster <../../create_cluster>`_.
\ No newline at end of file
+Proceed to Configuring a Cluster
+--------------------------------
+Once you have prepared your hosts and installed Ceph packages, proceed to
+`Configuring a Storage Cluster <../../config-cluster>`_.
:glob:
*/index
+ 1/obsync
+++ /dev/null
-=============================
- Autobuilt unstable packages
-=============================
-
-We automatically build Debian and Ubuntu packages for any branches or
-tags that appear in the |ceph.git|_. We build packages for the `amd64`
-and `i386` architectures (`arch list`_), for the following
-distributions (`distro list`_):
-
-- ``natty`` (Ubuntu 11.04)
-- ``squeeze`` (Debian 6.0)
-
-.. |ceph.git| replace::
- ``ceph.git`` repository
-.. _`ceph.git`: https://github.com/ceph/ceph
-
-.. _`arch list`: http://ceph.newdream.net/debian-snapshot-amd64/master/dists/natty/main/
-.. _`distro list`: http://ceph.newdream.net/debian-snapshot-amd64/master/dists/
-
-The current status of autobuilt packages can be found at
-http://ceph.newdream.net/gitbuilder-deb-amd64/ .
-
-If you wish to use these packages, you need to modify the
-:ref:`earlier instructions <install-debs>` as follows:
-
-.. warning:: The following commands make your computer trust any code
- that makes it into ``ceph.git``, including work in progress
- branches and versions of code with possible security issues (that
- were fixed afterwards). Use at your own risk!
-
-Whenever we say *DISTRO* below, replace it with the codename of your
-operating system.
-
-Whenever we say *BRANCH* below, replace it with the version of the
-code you want to run, e.g. ``master``, ``stable`` or ``v0.34`` (`branch list`_ [#broken-links]_).
-
-.. _`branch list`: http://ceph.newdream.net/debian-snapshot-amd64/
-
-Run these commands on all nodes::
-
- wget -q -O- https://raw.github.com/ceph/ceph/master/keys/autobuild.asc \
- | sudo apt-key add -
-
- sudo tee /etc/apt/sources.list.d/ceph.list <<EOF
- deb http://ceph.newdream.net/debian-snapshot-amd64/BRANCH/ DISTRO main
- deb-src http://ceph.newdream.net/debian-snapshot-amd64/BRANCH/ DISTRO main
- EOF
-
- sudo apt-get update
- sudo apt-get install ceph
-
-From here on, you can follow the usual set up instructions in
-:doc:`/ops/install/index`.
-
-
-
-.. rubric:: Footnotes
-
-.. [#broken-links] Technical issues with how that part of the URL
- space is HTTP reverse proxied means that the links in the generated
- directory listings are broken. Please don't click on the links,
- instead edit the URL bar manually, for now.
-
- .. todo:: Fix the gitbuilder reverse proxy to not break relative URLs.
.. toctree::
- install/index
manage/index
radosgw
rbd
monitor
- autobuilt
+++ /dev/null
-.. index:: Chef
-.. _install-chef:
-
-============================
- Installing Ceph using Chef
-============================
-
-(Try saying that fast 10 times.)
-
-.. topic:: Status as of 2011-09
-
- While we have Chef cookbooks in use internally, they are not yet
- ready to handle unsupervised installation of a full cluster. Stay
- tuned for updates.
-
-.. todo:: write me
+++ /dev/null
-===========================
- Installing a Ceph cluster
-===========================
-
-For development and really early stage testing, see :doc:`/dev/index`.
-
-For installing the latest development builds, see
-:doc:`/ops/autobuilt`.
-
-Installing any complex distributed software can be a lot of work. We
-support two automated ways of installing Ceph:
-
-- using Chef_: see :doc:`chef`
-- with the ``mkcephfs`` shell script: :doc:`mkcephfs`
-
-.. _Chef: http://wiki.opscode.com/display/chef
-
-.. topic:: Status as of 2011-09
-
- This section hides a lot of the tedious underlying details. If you
- need to, or wish to, roll your own deployment automation, or are
- doing it manually, you'll have to dig into a lot more intricate
- details. We are working on simplifying the installation, as that
- also simplifies our Chef cookbooks.
-
-
-.. toctree::
- :hidden:
-
- chef
- mkcephfs
+++ /dev/null
-.. index:: mkcephfs
-
-====================================
- Installing Ceph using ``mkcephfs``
-====================================
-
-.. note:: ``mkcephfs`` is meant as a quick bootstrapping tool. It does
- not handle more complex operations, such as upgrades. For
- production clusters, you will want to use the :ref:`Chef cookbooks
- <install-chef>`.
-
-Pick a host that has the Ceph software installed -- it does not have
-to be a part of your cluster, but it does need to have *matching
-versions* of the ``mkcephfs`` command and other Ceph tools
-installed. This will be your `admin host`.
-
-
-Installing the packages
-=======================
-
-
-.. _install-debs:
-
-Debian/Ubuntu
--------------
-
-We regularly build Debian and Ubuntu packages for the `amd64` and
-`i386` architectures, for the following distributions:
-
-- ``sid`` (Debian unstable)
-- ``wheezy`` (Debian 7.0)
-- ``squeeze`` (Debian 6.0)
-- ``precise`` (Ubuntu 12.04)
-- ``oneiric`` (Ubuntu 11.10)
-- ``natty`` (Ubuntu 11.04)
-- ``maverick`` (Ubuntu 10.10)
-
-.. todo:: http://ceph.newdream.net/debian/dists/ also has ``lucid``
- (Ubuntu 10.04), should that be removed?
-
-Whenever we say *DISTRO* below, replace that with the codename of your
-operating system.
-
-Run these commands on all nodes::
-
- wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc \
- | sudo apt-key add -
-
- sudo tee /etc/apt/sources.list.d/ceph.list <<EOF
- deb http://ceph.newdream.net/debian/ DISTRO main
- deb-src http://ceph.newdream.net/debian/ DISTRO main
- EOF
-
- sudo apt-get update
- sudo apt-get install ceph
-
-
-.. todo:: For older distributions, you may need to make sure your apt-get may read .bz2 compressed files. This works for Debian Lenny 5.0.3: ``apt-get install bzip2``
-
-.. todo:: Ponder packages; ceph.deb currently pulls in gceph (ceph.deb
- Recommends: ceph-client-tools ceph-fuse libcephfs1 librados2 librbd1
- btrfs-tools gceph) (other interesting: ceph-client-tools ceph-fuse
- libcephfs-dev librados-dev librbd-dev obsync python-ceph radosgw)
-
-
-Red Hat / CentOS / Fedora
--------------------------
-
-.. topic:: Status as of 2011-09
-
- We do not currently provide prebuilt RPMs, but we do provide a spec
- file that should work. The following will guide you to compiling it
- yourself.
-
-To ensure you have the right build-dependencies, run::
-
- yum install rpm-build rpmdevtools git fuse-devel libtool \
- libtool-ltdl-devel boost-devel libedit-devel openssl-devel \
- gcc-c++ nss-devel libatomic_ops-devel make
-
-To setup an RPM compilation environment, run::
-
- rpmdev-setuptree
-
-To fetch the Ceph source tarball, run::
-
- wget -P ~/rpmbuild/SOURCES/ http://ceph.newdream.net/download/ceph-0.35.tar.gz
-
-To build it, run::
-
- rpmbuild -tb ~/rpmbuild/SOURCES/ceph-0.35.tar.gz
-
-Finally, install the RPMs::
-
- rpm -i rpmbuild/RPMS/x86_64/ceph-*.rpm
-
-
-
-.. todo:: Other operating system support.
-
-
-Creating a ``ceph.conf`` file
-=============================
-
-On the `admin host`, create a file with a name like
-``mycluster.conf``.
-
-Here's a template for a 3-node cluster, where all three machines run a
-:ref:`monitor <monitor>` and an :ref:`object store <rados>`, and the
-first one runs the :ref:`Ceph filesystem daemon <cephfs>`. Replace the
-hostnames and IP addresses with your own, and add/remove hosts as
-appropriate. All hostnames *must* be short form (no domain).
-
-.. literalinclude:: mycluster.conf
- :language: ini
-
-Note how the ``host`` variables dictate what node runs what
-services. See :doc:`/config` for more information.
-
-It is **very important** that you only run a single monitor on each node. If
-you attempt to run more than one monitor on a node, you reduce your reliability
-and the procedures in this documentation will not behave reliably.
-
-.. todo:: More specific link for host= convention.
-
-.. todo:: Point to cluster design docs, once they are ready.
-
-.. todo:: At this point, either use 1 or 3 mons, point to :doc:`/ops/manage/grow/mon`
-
-
-Running ``mkcephfs``
-====================
-
-Verify that you can manage the nodes from the host you intend to run
-``mkcephfs`` on:
-
-- Make sure you can SSH_ from the `admin host` into all the nodes
- using the short hostnames (``myserver`` not
- ``myserver.mydept.example.com``), with no user specified
- [#ssh_config]_.
-- Make sure you can SSH_ from the `admin host` into all the nodes
- as ``root`` using the short hostnames.
-- Make sure you can run ``sudo`` without passphrase prompts on all
- nodes [#sudo]_.
-
-.. _SSH: http://openssh.org/
-
-If you are not using :ref:`Btrfs <btrfs>`, enable :ref:`extended
-attributes <xattr>`.
-
-On each node, make sure the directory ``/srv/osd.N`` (with the
-appropriate ``N``) exists, and the right filesystem is mounted. If you
-are not using a separate filesystem for the file store, just run
-``sudo mkdir /srv/osd.N`` (with the right ``N``).
-
-Then, using the right path to the ``mycluster.conf`` file you prepared
-earlier, run::
-
- mkcephfs -a -c mycluster.conf -k mycluster.keyring
-
-This will place an `admin key` into ``mycluster.keyring``. This will
-be used to manage the cluster. Treat it like a ``root`` password to
-your filesystem.
-
-.. todo:: Link to explanation of `admin key`.
-
-That should SSH into all the nodes, and set up Ceph for you.
-
-It does **not** copy the configuration, or start the services. Let's
-do that::
-
- ssh myserver01 sudo tee /etc/ceph/ceph.conf <mycluster.conf
- ssh myserver02 sudo tee /etc/ceph/ceph.conf <mycluster.conf
- ssh myserver03 sudo tee /etc/ceph/ceph.conf <mycluster.conf
- ...
-
- ssh myserver01 sudo /etc/init.d/ceph start
- ssh myserver02 sudo /etc/init.d/ceph start
- ssh myserver03 sudo /etc/init.d/ceph start
- ...
-
-After a little while, the cluster should come up and reach a healthy
-state. We can check that::
-
- ceph -k mycluster.keyring -c mycluster.conf health
- 2011-09-06 12:33:51.561012 mon <- [health]
- 2011-09-06 12:33:51.562164 mon2 -> 'HEALTH_OK' (0)
-
-.. todo:: Document "healthy"
-
-.. todo:: Improve output.
-
-
-
-.. rubric:: Footnotes
-
-.. [#ssh_config] Something like this in your ``~/.ssh_config`` may
- help -- unfortunately you need an entry per node::
-
- Host myserverNN
- Hostname myserverNN.dept.example.com
- User ubuntu
-
-.. [#sudo] The relevant ``sudoers`` syntax looks like this::
-
- %admin ALL=(ALL) NOPASSWD: ALL
+++ /dev/null
-[global]
- auth supported = cephx
- keyring = /etc/ceph/$name.keyring
-
-[mon]
- mon data = /srv/mon.$id
-
-[mds]
-
-[osd]
- osd data = /srv/osd.$id
- osd journal = /srv/osd.$id.journal
- osd journal size = 1000
-
-[mon.a]
- host = myserver01
- mon addr = 10.0.0.101:6789
-
-[mon.b]
- host = myserver02
- mon addr = 10.0.0.102:6789
-
-[mon.c]
- host = myserver03
- mon addr = 10.0.0.103:6789
-
-[osd.0]
- host = myserver01
-
-[osd.1]
- host = myserver02
-
-[osd.2]
- host = myserver03
-
-[mds.a]
- host = myserver01
--- /dev/null
+===================
+Build Ceph Packages
+===================
+
+To build packages, you must clone the `Ceph`_ repository.
+You can create installation packages from the latest code using ``dpkg-buildpackage`` for Debian/Ubuntu
+or ``rpmbuild`` for the RPM Package Manager.
+
+.. tip:: When building on a multi-core CPU, use the ``-j`` option with twice
+   the number of cores. For example, use ``-j4`` for a dual-core processor to
+   accelerate the build.
+
+
+Advanced Package Tool (APT)
+---------------------------
+
+To create ``.deb`` packages for Debian/Ubuntu, ensure that you have cloned the `Ceph`_ repository,
+installed the `build prerequisites`_ and installed ``debhelper``::
+
+ $ sudo apt-get install debhelper
+
+Once you have installed ``debhelper``, you can build the packages::
+
+ $ sudo dpkg-buildpackage
+
+For multi-processor CPUs, use the ``-j`` option to accelerate the build.
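+
+For example, on a dual-core machine::
+
+ $ sudo dpkg-buildpackage -j4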
+
+RPM Package Manager
+-------------------
+
+To create ``.rpm`` packages, ensure that you have cloned the `Ceph`_ repository,
+installed the `build prerequisites`_ and installed ``rpm-build`` and ``rpmdevtools``::
+
+ $ yum install rpm-build rpmdevtools
+
+Once you have installed the tools, set up an RPM compilation environment::
+
+ $ rpmdev-setuptree
+
+Fetch the source tarball for the RPM compilation environment::
+
+ $ wget -P ~/rpmbuild/SOURCES/ http://ceph.newdream.net/download/ceph-<version>.tar.gz
+
+Build the RPM packages::
+
+ $ rpmbuild -tb ~/rpmbuild/SOURCES/ceph-<version>.tar.gz
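+
+For example, assuming the hypothetical ``0.44`` release tarball::
+
+ $ wget -P ~/rpmbuild/SOURCES/ http://ceph.newdream.net/download/ceph-0.44.tar.gz
+ $ rpmbuild -tb ~/rpmbuild/SOURCES/ceph-0.44.tar.gz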
+
+Note that ``rpmbuild`` drives ``make`` internally; parallel builds are
+typically controlled through the RPM ``_smp_mflags`` macro rather than a
+direct ``-j`` option to ``rpmbuild``.
+
+
+.. _build prerequisites: ../build_prerequisites
+.. _Ceph: ../cloning_the_ceph_source_code_repository
\ No newline at end of file
--- /dev/null
+===================
+Build Prerequisites
+===================
+
+Before you can build Ceph source code or Ceph documentation, you need to install several libraries and tools.
+
+.. tip:: Check this section to see if there are specific prerequisites for your Linux/Unix distribution.
+
+Prerequisites for Building Ceph Source Code
+===========================================
+Ceph provides ``autoconf`` and ``automake`` scripts to get you started quickly. Ceph build scripts
+depend on the following:
+
+- ``autotools-dev``
+- ``autoconf``
+- ``automake``
+- ``cdbs``
+- ``gcc``
+- ``g++``
+- ``git``
+- ``libboost-dev``
+- ``libedit-dev``
+- ``libssl-dev``
+- ``libtool``
+- ``libfcgi``
+- ``libfcgi-dev``
+- ``libfuse-dev``
+- ``linux-kernel-headers``
+- ``libcrypto++-dev``
+- ``libcrypto++``
+- ``libexpat1-dev``
+- ``libgtkmm-2.4-dev``
+- ``pkg-config``
+- ``libcurl4-gnutls-dev``
+
+On Ubuntu, execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
+
+ $ sudo apt-get install autotools-dev autoconf automake cdbs gcc g++ git libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev libcrypto++ libexpat1-dev libgtkmm-2.4-dev pkg-config libcurl4-gnutls-dev
+
+On Debian/Squeeze, execute ``aptitude install`` for each dependency that isn't installed on your host. ::
+
+ $ aptitude install autotools-dev autoconf automake cdbs gcc g++ git libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev libcrypto++ libexpat1-dev libgtkmm-2.4-dev pkg-config libcurl4-gnutls-dev
+
+
+Ubuntu Requirements
+-------------------
+
+- ``uuid-dev``
+- ``libkeyutils-dev``
+- ``libgoogle-perftools-dev``
+- ``libatomic-ops-dev``
+- ``libaio-dev``
+- ``libgdata-common``
+- ``libgdata13``
+
+Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
+
+ $ sudo apt-get install uuid-dev libkeyutils-dev libgoogle-perftools-dev libatomic-ops-dev libaio-dev libgdata-common libgdata13
+
+Debian
+------
+Alternatively, you may install::
+
+ $ aptitude install fakeroot dpkg-dev
+ $ aptitude install debhelper cdbs libexpat1-dev libatomic-ops-dev
+
+openSUSE 11.2 (and later)
+-------------------------
+
+- ``boost-devel``
+- ``gcc-c++``
+- ``libedit-devel``
+- ``libopenssl-devel``
+- ``fuse-devel`` (optional)
+
+Execute ``zypper install`` for each dependency that isn't installed on your host. ::
+
+ $ zypper install boost-devel gcc-c++ libedit-devel libopenssl-devel fuse-devel
+
+Prerequisites for Building Ceph Documentation
+=============================================
+Ceph utilizes Python's Sphinx documentation tool. For details on the Sphinx
+documentation tool, refer to `Sphinx <http://sphinx.pocoo.org>`_. Follow the
+directions at `Sphinx 1.1.3 <http://pypi.python.org/pypi/Sphinx>`_ to install
+Sphinx. To run Sphinx with ``admin/build-doc``, at least the following are
+required:
+
+- ``python-dev``
+- ``python-pip``
+- ``python-virtualenv``
+- ``libxml2-dev``
+- ``libxslt-dev``
+- ``doxygen``
+- ``ditaa``
+- ``graphviz``
+
+Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
+
+ $ sudo apt-get install python-dev python-pip python-virtualenv libxml2-dev libxslt-dev doxygen ditaa graphviz
+
--- /dev/null
+=============
+Building Ceph
+=============
+
+Ceph provides build scripts for source code and for documentation.
+
+Building Ceph Source Code
+=========================
+Ceph provides ``automake`` and ``configure`` scripts to streamline the build
+process. To build Ceph, navigate to your cloned Ceph repository and execute
+the following::
+
+ $ cd ceph
+ $ ./autogen.sh
+ $ ./configure
+ $ make
+
+You can use ``make -j`` to run multiple jobs in parallel, depending upon your
+system. For example::
+
+ $ make -j4
+
+To install Ceph locally, you may also use::
+
+ $ make install
+
+If you install Ceph locally, ``make`` will place the executables in
+``/usr/local/bin``. You may add the ``ceph.conf`` file to the
+``/usr/local/bin`` directory to run an evaluation environment of Ceph from a
+single directory.
+
+Building Ceph Documentation
+===========================
+Ceph utilizes Python's Sphinx documentation tool. For details on the Sphinx
+documentation tool, refer to `Sphinx <http://sphinx.pocoo.org>`_. To build the
+Ceph documentation, navigate to the Ceph repository and execute the build
+script::
+
+ $ cd ceph
+ $ admin/build-doc
+
+Once you build the documentation set, you may navigate to the output
+directory to view it::
+
+ $ cd build-doc/output
+
+There should be an ``html`` directory and a ``man`` directory containing the
+documentation in HTML and manpage formats, respectively.
--- /dev/null
+=======================================
+Cloning the Ceph Source Code Repository
+=======================================
+To check out the Ceph source code, you must have ``git`` installed
+on your local host. To install ``git``, execute::
+
+ $ sudo apt-get install git
+
+You must also have a ``github`` account. If you do not have a
+``github`` account, go to `github.com <http://github.com>`_ and register.
+Follow the directions for setting up git at `Set Up Git <http://help.github.com/linux-set-up-git/>`_.
+
+Clone the Source
+----------------
+To clone the Ceph source code repository, execute::
+
+ $ git clone git@github.com:ceph/ceph.git
+
+Once ``git clone`` executes, you should have a full copy of the Ceph repository.
+
+Clone the Submodules
+--------------------
+Before you can build Ceph, you must navigate to your new repository and
+initialize and update its submodules::
+
+ $ cd ceph
+ $ git submodule init
+ $ git submodule update
+
+.. tip:: Make sure you maintain the latest copies of these submodules. Running ``git status`` will tell you if the submodules are out of date::
+
+ $ git status
+
+Choose a Branch
+---------------
+Once you clone the source code and submodules, your Ceph repository will be on the ``master`` branch by default, which is the unstable development branch. You may choose other branches too.
+
+- ``master``: The unstable development branch.
+- ``stable``: The bugfix branch.
+- ``next``: The release candidate branch.
+
+::
+
+ git checkout master
+
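+For example, to switch to the release candidate branch::
+
+ git checkout next
+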
--- /dev/null
+==========================
+ Contributing Source Code
+==========================
+If you are making source contributions, you must be added to the Ceph
+project on github. You must also generate keys and add them to your
+github account.
+
+Generate SSH Keys
+-----------------
+You must generate SSH keys for github to clone the Ceph
+repository. If you do not have SSH keys for ``github``, execute::
+
+ $ ssh-keygen -d
+
+Get the key to add to your ``github`` account (the following example
+assumes you used the default file path)::
+
+ $ cat .ssh/id_dsa.pub
+
+Copy the public key.
+
+Add the Key
+-----------
+Go to your ``github`` account, click on "Account Settings" (*i.e.,* the
+'tools' icon); then, click "SSH Keys" on the left side navbar.
+
+Click "Add SSH key" in the "SSH Keys" list, enter a name for
+the key, paste the key you generated, and press the "Add key"
+button.
--- /dev/null
+====================================
+ Downloading a Ceph Release Tarball
+====================================
+
+As Ceph development progresses, the Ceph team releases new versions of the
+source code. You may download source code tarballs for Ceph releases here:
+
+`Ceph Release Tarballs <http://ceph.newdream.net/download/>`_
--- /dev/null
+==================
+ Ceph Source Code
+==================
+
+You can build Ceph from source by downloading a release or cloning the ``ceph``
+repository at github. If you intend to build Ceph from source, please see the
+build prerequisites first. Making sure you have all the prerequisites
+will save you time.
+
+.. toctree::
+
+ Prerequisites <build_prerequisites>
+ Get a Tarball <downloading_a_ceph_release>
+ Clone the Source <cloning_the_ceph_source_code_repository>
+ Build the Source <building_ceph>
+ Build a Package <build_packages>
+ Contributing Code <contributing>
-===================================
-Get Involved in the Ceph Community!
-===================================
+=====================================
+ Get Involved in the Ceph Community!
+=====================================
These are exciting times in the Ceph community! Get involved!
+-----------------+-------------------------------------------------+-----------------------------------------------+
| | the very latest code for Ceph, you can get it | - ``$git clone git@github.com:ceph/ceph.git`` |
| | at http://github.com. | |
+-----------------+-------------------------------------------------+-----------------------------------------------+
-| **Support** | If you have a very specific problem, an | http://ceph.newdream.net/support |
+| **Support** | If you have a very specific problem, an | http://inktank.com |
| | immediate need, or if your deployment requires | |
| | significant help, consider commercial support_. | |
+-----------------+-------------------------------------------------+-----------------------------------------------+
.. _Gmane: http://news.gmane.org/gmane.comp.file-systems.ceph.devel
.. _Tracker: http://tracker.newdream.net/projects/ceph
.. _Blog: http://ceph.newdream.net/news
-.. _support: http://ceph.newdream.net/support
+.. _support: http://inktank.com
-===============
-Getting Started
-===============
-Welcome to Ceph! The following sections provide information
-that will help you get started before you install Ceph:
-
-- :doc:`Why use Ceph? <why_use_ceph>`
-- :doc:`Introduction to Clustered Storage <introduction_to_clustered_storage>`
-- :doc:`Quick Start <quick_start>`
-- :doc:`Get Involved in the Ceph Community! <get_involved_in_the_ceph_community>`
+=================
+ Getting Started
+=================
+Welcome to Ceph! The following sections provide information that will help you
+get started:
.. toctree::
- :hidden:
- why_use_ceph
- Storage Clusters <introduction_to_clustered_storage>
- quick_start
Get Involved <get_involved_in_the_ceph_community>
+ quick_start
+++ /dev/null
-=================================
-Introduction to Clustered Storage
-=================================
-
-Storage clusters are the foundation of Ceph. The move to cloud computing presages a requirement
-to support storing many petabytes of data with the ability to store exabytes of data in the near future.
-
-A number of factors make it challenging to build large storage systems. Three of them include:
-
-- **Capital Expenditure**: Proprietary systems are expensive. So building scalable systems requires using less expensive commodity hardware and a "scale out" approach to reduce build-out expenses.
-
-- **Ongoing Operating Expenses**: Supporting thousands of storage hosts can impose significant personnel expenses, particularly as hardware and networking infrastructure must be installed, maintained and replaced ongoingly.
-
-- **Loss of Data or Access to Data**: Mission-critical enterprise applications cannot suffer significant amounts of downtime, including loss of data *or access to data*. Yet, in systems with thousands of storage hosts, hardware failure is an expectation, not an exception.
-
-Because of the foregoing factors and other factors, building massive storage systems requires new thinking.
-
-Ceph uses a revolutionary approach to storage that utilizes "intelligent daemons." A major advantage of *n*-tiered
-architectures is their ability to separate concerns--e.g., presentation layers, logic layers, storage layers, etc.
-Tiered architectures simplify system design, but they also tend to underutilize resources such as CPU, RAM, and network bandwidth.
-Ceph takes advantage of these resources to create a unified storage system with extraordinary scalability.
-
-At the core of Ceph storage is a service entitled the Reliable Autonomic Distributed Object Store (RADOS).
-RADOS revolutionizes Object Storage Devices (OSD)s by utilizing the CPU, memory and network interface of
-the storage hosts to communicate with each other, replicate data, and redistribute data dynamically. RADOS
-implements an algorithm that performs Controlled Replication Under Scalable Hashing, which we refer to as CRUSH.
-CRUSH enables RADOS to plan and distribute the data automatically so that system administrators do not have to
-do it manually. By utilizing each host's computing resources, RADOS increases scalability while simultaneously
-eliminating both a performance bottleneck and a single point of failure common to systems that manage clusters centrally.
-
-Each OSD maintains a map of all the hosts in the cluster. However, system administrators must expect hardware failure
-in petabyte-to-exabyte scale systems with thousands of OSD hosts. Ceph's monitors increase the reliability of the OSD
-clusters by maintaining a master copy of the cluster map. For example, storage hosts may be turned on and not be "in
-the cluster for the purposes of providing data storage services; not connected via a network; powered off; or, suffering from
-a malfunction.
-
-Ceph provides a lightweight monitor process to address faults in the OSD clusters as they arise. Like OSDs, monitors
-should be replicated in large-scale systems so that if one monitor crashes, another monitor can serve in its place.
-When the Ceph storage cluster employs multiple monitors, the monitors may get out of sync and have different versions
-of the cluster map. Ceph utilizes an algorithm to resolve disparities among versions of the cluster map.
-
-Ceph Metadata Servers (MDSs) are only required for Ceph FS. You can use RADOS block devices or the
-RADOS Gateway without MDSs. The MDSs dynamically adapt their behavior to the current workload.
-As the size and popularity of parts of the file system hierarchy change over time, the MDSs
-dynamically redistribute the file system hierarchy among the available
-MDSs to balance the load to use server resources effectively.
-===========
-Quick Start
-===========
-
-Ceph is intended for large-scale deployments, but you may install Ceph on a single host. Quick start is intended for Debian/Ubuntu Linux distributions.
+=============
+ Quick Start
+=============
+Ceph is intended for large-scale deployments, but you may install Ceph on a
+single host. This quick start is intended for Debian/Ubuntu Linux
+distributions.
1. Log in to your host.
2. Make a directory for Ceph packages. *e.g.,* ``$ mkdir ceph``
-3. `Get Ceph packages <../../install/download_packages>`_ and add them to your APT configuration file.
-4. Update and Install Ceph packages. See `Downloading Debian/Ubuntu Packages <../../install/download_packages>`_ and `Installing Packages <../../install/installing_packages>`_ for details.
-5. Add a ``ceph.conf`` file. See `Ceph Configuration Files <../../create_cluster/ceph_conf>`_ for details.
-6. Run Ceph. See `Deploying Ceph with mkcephfs <../../create_cluster/deploying_ceph_with_mkcephfs>`_
\ No newline at end of file
+3. `Get Ceph packages <../../install/download_packages>`_ and add them to your
+ APT configuration file.
+4. Update and install Ceph packages.
+   See `Downloading Debian/Ubuntu Packages <../../install/download_packages>`_
+   and `Installing Packages <../../install/installing_packages>`_ for details,
+   or see the condensed sketch after this list.
+5. Add a ``ceph.conf`` file.
+ See `Ceph Configuration Files <../../config-cluster/ceph_conf>`_ for details.
+6. Run Ceph.
+   See `Deploying Ceph with mkcephfs <../../config-cluster/deploying_ceph_with_mkcephfs>`_
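+
+As a condensed sketch of steps 2 through 4, assuming a hypothetical host
+running Ubuntu ``precise`` that tracks the ``stable`` release branch::
+
+ $ mkdir ceph
+ $ wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc | sudo apt-key add -
+ $ sudo tee /etc/apt/sources.list.d/ceph.list <<EOF
+ > deb http://ceph.newdream.net/debian/stable/ precise main
+ > EOF
+ $ sudo apt-get update
+ $ sudo apt-get install ceph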
+++ /dev/null
-=============
-Why use Ceph?
-=============
-Ceph provides an economic and technical foundation for massive scalability. Ceph is free and open source,
-which means it does not require expensive license fees or expensive updates. Ceph can run on economical
-commodity hardware, which reduces another economic barrier to scalability. Ceph is easy to install and administer,
-so it reduces expenses related to administration. Ceph supports popular and widely accepted interfaces in a
-unified storage system (e.g., Amazon S3, Swift, FUSE, block devices, POSIX-compliant shells, etc.), so you don't
-need to build out a different storage system for each storage interface you support.
-
-Technical and personnel constraints also limit scalability. The performance profile of highly scaled systems
-can vary substantially. Ceph relieves system administrators of the complex burden of manual performance optimization
-by utilizing the storage system's computing resources to balance loads intelligently and rebalance the file system dynamically.
-Ceph replicates data automatically so that hardware failures do not result in data loss or cascading load spikes.
-Ceph is fault tolerant, so complex fail-over scenarios are unnecessary. Ceph administrators can simply replace a failed host
-with new hardware.
-
-With POSIX semantics for Unix/Linux-based operating systems, popular interfaces like Amazon S3 or Swift, block devices
-and advanced features like directory-level snapshots, you can deploy enterprise applications on Ceph while
-providing them with a long-term economical solution for scalable storage. While Ceph is open source, commercial
-support is available too! So Ceph provides a compelling solution for building petabyte-to-exabyte scale storage systems.
-
-Reasons to use Ceph include:
-
-- Extraordinary scalability
-
- - Terabytes to exabytes
- - Tens of thousands of client nodes
-
-- Standards compliant
-
- - Virtual file system (vfs)
- - Shell (bash)
- - FUSE
- - Swift-compliant interface
- - Amazon S3-compliant interface
-
-- Reliable and fault-tolerant
-
- - Strong data consistency and safety semantics
- - Intelligent load balancing and dynamic re-balancing
- - Semi-autonomous data replication
- - Node monitoring and failure detection
- - Hot swappable hardware
-
-- Economical (Ceph is free!)
-
- - Open source and free
- - Uses heterogeneous commodity hardware
- - Easy to setup and maintain
- - Commercial support is available (if needed)