+++ /dev/null
-===================
-Build Prerequisites
-===================
-
-Before you can build Ceph documentation or Ceph source code, you need to install several libraries and tools.
-
-.. tip:: Check this section to see if there are specific prerequisites for your Linux/Unix distribution.
-
-
-Prerequisites for Building Ceph Documentation
-=============================================
-Ceph utilizes Python's Sphinx documentation tool. For details on
-the Sphinx documentation tool, refer to `Sphinx <http://sphinx.pocoo.org>`_.
-Follow the directions at `Sphinx 1.1.3 <http://pypi.python.org/pypi/Sphinx>`_
-to install Sphinx. To run Sphinx via ``admin/build-doc``, at least the following packages are required:
-
-- ``python-dev``
-- ``python-pip``
-- ``python-virtualenv``
-- ``libxml2-dev``
-- ``libxslt-dev``
-- ``doxygen``
-- ``ditaa``
-- ``graphviz``
-
-Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
-
- $ sudo apt-get install python-dev python-pip python-virtualenv libxml2-dev libxslt-dev doxygen ditaa graphviz
-
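-If you are not sure whether a given package is already installed on a Debian-based host, you can query ``dpkg`` for its status first; for example::
-
- $ dpkg -s python-dev
-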
-Prerequisites for Building Ceph Source Code
-===========================================
-Ceph provides ``autoconf`` and ``automake`` scripts to get you started quickly. Ceph build scripts
-depend on the following:
-
-- ``autotools-dev``
-- ``autoconf``
-- ``automake``
-- ``cdbs``
-- ``gcc``
-- ``g++``
-- ``git``
-- ``libboost-dev``
-- ``libedit-dev``
-- ``libssl-dev``
-- ``libtool``
-- ``libfcgi``
-- ``libfcgi-dev``
-- ``libfuse-dev``
-- ``linux-kernel-headers``
-- ``libcrypto++-dev``
-- ``libcrypto++``
-- ``libexpat1-dev``
-- ``libgtkmm-2.4-dev``
-- ``pkg-config``
-
-On Ubuntu, execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
-
- $ sudo apt-get install autotools-dev autoconf automake cdbs \
-   gcc g++ git libboost-dev libedit-dev libssl-dev libtool \
-   libfcgi libfcgi-dev libfuse-dev linux-kernel-headers \
-   libcrypto++-dev libcrypto++ libexpat1-dev libgtkmm-2.4-dev \
-   pkg-config
-
-On Debian Squeeze, execute ``aptitude install`` for each dependency that isn't installed on your host. ::
-
- $ aptitude install autotools-dev autoconf automake cdbs \
-   gcc g++ git libboost-dev libedit-dev libssl-dev libtool \
-   libfcgi libfcgi-dev libfuse-dev linux-kernel-headers \
-   libcrypto++-dev libcrypto++ libexpat1-dev libgtkmm-2.4-dev \
-   pkg-config
-
-
-Ubuntu Requirements
--------------------
-
-- ``uuid-dev``
-- ``libkeyutils-dev``
-- ``libgoogle-perftools-dev``
-- ``libatomic-ops-dev``
-- ``libaio-dev``
-- ``libgdata-common``
-- ``libgdata13``
-
-Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
-
- $ sudo apt-get install uuid-dev libkeyutils-dev libgoogle-perftools-dev \
-   libatomic-ops-dev libaio-dev libgdata-common libgdata13
-
-Debian
-------
-Alternatively, you may install the following::
-
- $ aptitude install fakeroot dpkg-dev
- $ aptitude install debhelper cdbs libexpat1-dev libatomic-ops-dev
-
-openSUSE 11.2 (and later)
--------------------------
-
-- ``boost-devel``
-- ``gcc-c++``
-- ``libedit-devel``
-- ``libopenssl-devel``
-- ``fuse-devel`` (optional)
-
-Execute ``zypper install`` for each dependency that isn't installed on your host. ::
-
- $ zypper install boost-devel gcc-c++ libedit-devel libopenssl-devel fuse-devel
\ No newline at end of file
+++ /dev/null
-=============
-Building Ceph
-=============
-
-Ceph provides build scripts for source code and for documentation.
-
-Building Ceph
-=============
-Ceph provides ``automake`` and ``configure`` scripts to streamline the build process. To build Ceph, navigate to your cloned Ceph repository and execute the following::
-
- $ cd ceph
- $ ./autogen.sh
- $ ./configure
- $ make
-
-You can use ``make -j`` to run multiple build jobs in parallel, depending upon the number of cores on your system. For example, to run four jobs::
-
- $ make -j4
-
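-If you would rather not hard-code the job count, a common convention is to let ``nproc`` (from GNU coreutils) report the number of available cores::
-
- $ make -j$(nproc)
-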
-Building Ceph Documentation
-===========================
-Ceph utilizes Python's Sphinx documentation tool. For details on the Sphinx documentation tool, refer to `Sphinx <http://sphinx.pocoo.org>`_. To build the Ceph documentation, navigate to the Ceph repository and execute the build script::
-
- $ cd ceph
- $ admin/build-doc
-
-Once you build the documentation set, you may navigate to the output directory to view it::
-
- $ cd build-doc/output
-
-There should be an ``html`` directory and a ``man`` directory containing the documentation in HTML and manpage formats, respectively.
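-
-To spot-check the HTML output, open the top-level page in a browser. The example below assumes Firefox; any browser will do::
-
- $ firefox build-doc/output/html/index.html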
+++ /dev/null
-=======================================
-Cloning the Ceph Source Code Repository
-=======================================
-To check out the Ceph source code, you must have ``git`` installed
-on your local host. To install ``git``, execute::
-
- $ sudo apt-get install git
-
-You must also have a ``github`` account. If you do not have a
-``github`` account, go to `github.com <http://github.com>`_ and register.
-Follow the directions for setting up git at `Set Up Git <http://help.github.com/linux-set-up-git/>`_.
-
-Generate SSH Keys
------------------
-You must generate SSH keys for ``github`` before you can clone the Ceph
-repository. If you do not have SSH keys for ``github``, execute::
-
- $ ssh-keygen -d
-
-Get the key to add to your ``github`` account::
-
- $ cat ~/.ssh/id_dsa.pub
-
-Copy the public key.
-
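-If you are working in a graphical session, you can also send the key straight to the clipboard instead of selecting it by hand. This sketch assumes the ``xclip`` package is installed::
-
- $ xclip -sel clip < ~/.ssh/id_dsa.pub
-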
-Add the Key
------------
-Go to your ``github`` account,
-click on "Account Settings" (i.e., the 'tools' icon); then,
-click "SSH Keys" on the left side navbar.
-
-Click "Add SSH key" in the "SSH Keys" list, enter a name for
-the key, paste the key you generated, and press the "Add key"
-button.
-
-Clone the Source
-----------------
-To clone the Ceph source code repository, execute::
-
- $ git clone git@github.com:ceph/ceph.git
-
-Once ``git clone`` executes, you should have a full copy of the Ceph repository.
-
-Clone the Submodules
---------------------
-Before you can build Ceph, you must initialize and update the submodules::
-
- $ git submodule init
- $ git submodule update
-
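-Alternatively, ``git`` can perform both steps with a single command::
-
- $ git submodule update --init
-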
-.. tip:: Make sure you maintain the latest copies of these submodules. Running ``git status`` will tell you if the submodules are out of date::
-
- $ git status
-
+++ /dev/null
-==========================
-Downloading a Ceph Release
-==========================
-As Ceph development progresses, the Ceph team releases new versions. You may download Ceph releases here:
-
-`Ceph Releases <http://ceph.newdream.net/download/>`_
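-
-For example, to fetch and unpack a release tarball from the command line (the file name below is hypothetical; substitute the version you actually want)::
-
- $ wget http://ceph.newdream.net/download/ceph-x.y.z.tar.gz
- $ tar xzf ceph-x.y.z.tar.gz
- $ cd ceph-x.y.z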
\ No newline at end of file
+++ /dev/null
-========================
-File System Requirements
-========================
-Ceph OSDs depend on the Extended Attributes (XATTRs) of the underlying file system for:
-
-- Internal object state
-- Snapshot metadata
-- RADOS Gateway Access Control Lists (ACLs).
-
-Ceph OSDs rely heavily upon the stability and performance of the underlying file system,
-which must also provide sufficient capacity for XATTRs. File system candidates for
-Ceph include B-tree and B+-tree file systems such as:
-
-- ``btrfs``
-- ``XFS``
-
-.. warning:: XATTR limits.
-
- The RADOS Gateway's ACL and Ceph snapshots easily surpass the 4-kilobyte limit for XATTRs in ``ext4``,
- causing the ``ceph-osd`` process to crash. So ``ext4`` is a poor file system choice if
- you intend to deploy the RADOS Gateway or use snapshots.
-
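-To sanity-check that a candidate file system accepts XATTRs, you can write one by hand and read it back. This sketch assumes the ``attr`` package (which provides ``setfattr`` and ``getfattr``) is installed, and uses a hypothetical mount point ``/mnt/osd0``::
-
- $ touch /mnt/osd0/xattr-test
- $ setfattr -n user.test -v somevalue /mnt/osd0/xattr-test
- $ getfattr -d /mnt/osd0/xattr-test
-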
-.. tip:: Use ``btrfs``
-
-   The Ceph team believes that the best performance and stability will come from ``btrfs``.
-   The ``btrfs`` file system has internal transactions that keep the local data set in a
-   consistent state. This makes OSDs based on ``btrfs`` simple to deploy, while providing
-   scalability not currently available from block-based file systems. The 64 KB limit for
-   ``XFS`` XATTRs is enough to accommodate RBD snapshot metadata and RADOS Gateway ACLs,
-   so ``XFS`` is the second-choice file system of the Ceph team. If you only plan to use
-   RADOS and ``rbd`` without snapshots and without ``radosgw``, the ``ext4`` file system
-   should work just fine.
+++ /dev/null
-=====================
-Hardware Requirements
-=====================
-Ceph OSDs run on commodity hardware and a Linux operating system over a TCP/IP network. OSD hosts
-should have ample data storage in the form of one or more hard drives or a Redundant Array of
-Independent Disks (RAID).
-
-<Need More Info>
-
-This section should discuss the hardware requirements for each daemon,
-the tradeoffs of running one ``ceph-osd`` per machine versus one per disk,
-and hardware-related configuration options such as journal locations.
-
-+--------------+----------------+------------------------------------+
-| Process      | Criteria       | Minimum Requirement                |
-+==============+================+====================================+
-| ``ceph-osd`` | Processor      | 64-bit x86; x-cores; 2MB Ln Cache  |
-|              +----------------+------------------------------------+
-|              | RAM            | 12 GB                              |
-|              +----------------+------------------------------------+
-|              | Disk Space     | 30 GB                              |
-|              +----------------+------------------------------------+
-|              | Volume Storage | 2-4 TB SATA Drives                 |
-|              +----------------+------------------------------------+
-|              | Network        | 2x 1GB Ethernet NICs               |
-+--------------+----------------+------------------------------------+
-| ``ceph-mon`` | Processor      | 64-bit x86; x-cores; 2MB Ln Cache  |
-|              +----------------+------------------------------------+
-|              | RAM            | 12 GB                              |
-|              +----------------+------------------------------------+
-|              | Disk Space     | 30 GB                              |
-|              +----------------+------------------------------------+
-|              | Volume Storage | 2-4 TB SATA Drives                 |
-|              +----------------+------------------------------------+
-|              | Network        | 2x 1GB Ethernet NICs               |
-+--------------+----------------+------------------------------------+
-| ``ceph-mds`` | Processor      | 64-bit x86; x-cores; 2MB Ln Cache  |
-|              +----------------+------------------------------------+
-|              | RAM            | 12 GB                              |
-|              +----------------+------------------------------------+
-|              | Disk Space     | 30 GB                              |
-|              +----------------+------------------------------------+
-|              | Volume Storage | 2-4 TB SATA Drives                 |
-|              +----------------+------------------------------------+
-|              | Network        | 2x 1GB Ethernet NICs               |
-+--------------+----------------+------------------------------------+
\ No newline at end of file
+++ /dev/null
-======================================
-Installing RADOS Processes and Daemons
-======================================
-
-When you start the Ceph service, the initialization process activates a series of daemons that run in the background.
-The hosts in a typical RADOS cluster run at least one of three processes:
-
-- Object Storage Daemon (``ceph-osd``)
-- Monitor (``ceph-mon``)
-- Metadata Server (``ceph-mds``)
-
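-For example, on a host where the Ceph packages installed the bundled sysvinit script, you would typically start and stop the locally configured daemons as follows (a sketch; the init system and service name on your distribution may differ)::
-
- $ sudo service ceph start
- $ sudo service ceph stop
-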
-Each instance of a RADOS ``ceph-osd`` process performs a few essential tasks.
-
-1. Each ``ceph-osd`` instance provides clients with an object interface to the OSD for read/write operations.
-2. Each ``ceph-osd`` instance communicates and coordinates with other OSDs to store, replicate, redistribute and restore data.
-3. Each ``ceph-osd`` instance communicates with monitors to retrieve and/or update the master copy of the cluster map.
-
-Each instance of a monitor process performs a few essential tasks:
-
-1. Each ``ceph-mon`` instance communicates with other ``ceph-mon`` instances using PAXOS to establish consensus for distributed decision making.
-2. Each ``ceph-mon`` instance serves as the first point of contact for clients, and provides clients with the topology and status of the cluster.
-3. Each ``ceph-mon`` instance provides RADOS instances with a master copy of the cluster map and receives updates for the master copy of the cluster map.
-
-A metadata server (MDS) process performs a few essential tasks:
-
-1. Each ``ceph-mds`` instance provides clients with metadata regarding the file system.
-2. Each ``ceph-mds`` instance manages the file system namespace.
-3. Each ``ceph-mds`` instance coordinates access to the shared OSD cluster.
-
-
-Installing ``ceph-osd``
-=======================
-<placeholder>
-
-Installing ``ceph-mon``
-=======================
-<placeholder>
-
-Installing ``ceph-mds``
-=======================
-<placeholder>