+++ /dev/null
-==========================
-Introduction to RADOS OSDs
-==========================
-RADOS OSD clusters are the foundation of Ceph. RADOS revolutionizes OSDs by utilizing the CPU,
-memory and network interface of the storage hosts to communicate with each other, replicate data, and
-redistribute data dynamically so that system administrators do not have to plan and coordinate
-these tasks manually. By utilizing each host's computing resources, RADOS increases scalability while
-simultaneously eliminating both a performance bottleneck and a single point of failure common
-to systems that manage clusters centrally. Each OSD maintains a copy of the cluster map.
-
-Ceph provides a light-weight monitor process to address faults in the OSD clusters as they
-arise. System administrators must expect hardware failure in petabyte-to-exabyte scale systems
-with thousands of OSD hosts. Ceph's monitors increase the reliability of the OSD clusters by
-maintaining a master copy of the cluster map, and using the Paxos algorithm to resolve disparities
-among versions of the cluster map maintained by a plurality of monitors.
-
-Ceph Metadata Servers (MDSs) are only required for Ceph FS. You can use RADOS block devices or the
-RADOS Gateway without MDSs. The MDSs dynamically adapt their behavior to the current workload.
-As the size and popularity of parts of the file system hierarchy change over time, the MDSs
-dynamically redistribute the file system hierarchy among the available MDSs to balance the load
-and use server resources effectively.
-
-<image>
\ No newline at end of file
+++ /dev/null
-===================
-Build Prerequisites
-===================
-
-Before you can build Ceph documentation or Ceph source code, you need to install several libraries and tools.
-
-.. tip:: Check this section to see if there are specific prerequisites for your Linux/Unix distribution.
-
-
-Prerequisites for Building Ceph Documentation
-=============================================
-Ceph utilizes Python's Sphinx documentation tool. For details on
-the Sphinx documentation tool, refer to `Sphinx <http://sphinx.pocoo.org>`_.
-Follow the directions at `Sphinx 1.1.3 <http://pypi.python.org/pypi/Sphinx>`_
-to install Sphinx. To run Sphinx via ``admin/build-doc``, at least the following packages are required:
-
-- ``python-dev``
-- ``python-pip``
-- ``python-virtualenv``
-- ``libxml2-dev``
-- ``libxslt-dev``
-- ``doxygen``
-- ``ditaa``
-- ``graphviz``
-
-Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
-
- $ sudo apt-get install python-dev python-pip python-virtualenv libxml2-dev libxslt-dev doxygen ditaa graphviz
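-
-On a Debian or Ubuntu host, one quick way to confirm that all of the packages listed above are
-installed (a sanity check only; the package names are simply those from the list above) is to
-query ``dpkg``::
-
-   $ dpkg -l python-dev python-pip python-virtualenv libxml2-dev libxslt-dev doxygen ditaa graphviz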
-
-Prerequisites for Building Ceph Source Code
-===========================================
-Ceph provides ``autoconf`` and ``automake`` scripts to get you started quickly. Ceph build scripts
-depend on the following:
-
-- ``autotools-dev``
-- ``autoconf``
-- ``automake``
-- ``cdbs``
-- ``gcc``
-- ``g++``
-- ``git``
-- ``libboost-dev``
-- ``libedit-dev``
-- ``libssl-dev``
-- ``libtool``
-- ``libfcgi``
-- ``libfcgi-dev``
-- ``libfuse-dev``
-- ``linux-kernel-headers``
-- ``libcrypto++-dev``
-- ``libcrypto++``
-- ``libexpat1-dev``
-- ``libgtkmm-2.4-dev``
-- ``pkg-config``
-
-On Ubuntu, execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
-
-   $ sudo apt-get install autotools-dev autoconf automake cdbs \
-   gcc g++ git libboost-dev libedit-dev libssl-dev libtool \
-   libfcgi libfcgi-dev libfuse-dev linux-kernel-headers \
-   libcrypto++-dev libcrypto++ libexpat1-dev libgtkmm-2.4-dev pkg-config
-
-On Debian/Squeeze, execute ``aptitude install`` for each dependency that isn't installed on your host. ::
-
-   $ aptitude install autotools-dev autoconf automake cdbs \
-   gcc g++ git libboost-dev libedit-dev libssl-dev libtool \
-   libfcgi libfcgi-dev libfuse-dev linux-kernel-headers \
-   libcrypto++-dev libcrypto++ libexpat1-dev libgtkmm-2.4-dev pkg-config
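-
-Before configuring the build, you may want to confirm that the core toolchain is present. A quick,
-non-exhaustive check is to ask a few of the tools for their versions::
-
-   $ gcc --version
-   $ g++ --version
-   $ autoconf --version
-   $ automake --version
-   $ libtool --version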
-
-
-Ubuntu Requirements
--------------------
-
-- ``uuid-dev``
-- ``libkeyutils-dev``
-- ``libgoogle-perftools-dev``
-- ``libatomic-ops-dev``
-- ``libaio-dev``
-- ``libgdata-common``
-- ``libgdata13``
-
-Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
-
-   $ sudo apt-get install uuid-dev libkeyutils-dev libgoogle-perftools-dev \
-   libatomic-ops-dev libaio-dev libgdata-common libgdata13
-
-Debian
-------
-Alternatively, you may also install::
-
- $ aptitude install fakeroot dpkg-dev
- $ aptitude install debhelper cdbs libexpat1-dev libatomic-ops-dev
-
-openSUSE 11.2 (and later)
--------------------------
-
-- ``boost-devel``
-- ``gcc-c++``
-- ``libedit-devel``
-- ``libopenssl-devel``
-- ``fuse-devel`` (optional)
-
-Execute ``zypper install`` for each dependency that isn't installed on your host. ::
-
-   $ zypper install boost-devel gcc-c++ libedit-devel libopenssl-devel fuse-devel
\ No newline at end of file
+++ /dev/null
-=============
-Building Ceph
-=============
-
-Ceph provides build scripts for source code and for documentation.
-
-Building Ceph
-=============
-Ceph provides ``automake`` and ``configure`` scripts to streamline the build process. To build Ceph, navigate to your cloned Ceph repository and execute the following::
-
- $ cd ceph
- $ ./autogen.sh
- $ ./configure
- $ make
-
-You can use ``make -j`` to run multiple build jobs in parallel, depending on your system's resources. For example::
-
- $ make -j4
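-
-If you are unsure how many jobs to use, a common rule of thumb is to match the number of CPU cores.
-On Linux hosts with GNU coreutils, ``nproc`` reports that number, so you can run, for example::
-
-   $ make -j$(nproc)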
-
-Building Ceph Documentation
-===========================
-Ceph utilizes Python's Sphinx documentation tool. For details on the Sphinx documentation tool, refer to `Sphinx <http://sphinx.pocoo.org>`_. To build the Ceph documentation, navigate to the Ceph repository and execute the build script::
-
- $ cd ceph
- $ admin/build-doc
-
-Once you build the documentation set, you may navigate to the output directory to view it::
-
- $ cd build-doc/output
-
-There should be an ``/html`` directory and a ``/man`` directory containing documentation in HTML and manpage formats respectively.
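-
-If you would like to browse the HTML output in a web browser, one simple option (assuming Python 2,
-which the documentation build already requires, is installed) is to serve it locally and then open
-``http://localhost:8080`` in your browser::
-
-   $ cd build-doc/output/html
-   $ python -m SimpleHTTPServer 8080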
+++ /dev/null
-=======================================
-Cloning the Ceph Source Code Repository
-=======================================
-To check out the Ceph source code, you must have ``git`` installed
-on your local host. To install ``git``, execute::
-
- $ sudo apt-get install git
-
-You must also have a ``github`` account. If you do not have a
-``github`` account, go to `github.com <http://github.com>`_ and register.
-Follow the directions for setting up git at `Set Up Git <http://help.github.com/linux-set-up-git/>`_.
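-
-At a minimum, ``git`` needs a name and email address to record with your commits. If you have not
-configured them yet, you can do so as follows (substitute your own details for the placeholders)::
-
-   $ git config --global user.name "Your Name"
-   $ git config --global user.email "you@example.com"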
-
-Generate SSH Keys
------------------
-You must generate SSH keys for github to clone the Ceph
-repository. If you do not have SSH keys for ``github``, execute::
-
- $ ssh-keygen -d
-
-Get the key to add to your ``github`` account::
-
- $ cat .ssh/id_dsa.pub
-
-Copy the public key.
-
-Add the Key
------------
-Go to your ``github`` account,
-click on "Account Settings" (i.e., the 'tools' icon); then,
-click "SSH Keys" on the left side navbar.
-
-Click "Add SSH key" in the "SSH Keys" list, enter a name for
-the key, paste the key you generated, and press the "Add key"
-button.
-
-Clone the Source
-----------------
-To clone the Ceph source code repository, execute::
-
- $ git clone git@github.com:ceph/ceph.git
-
-Once ``git clone`` executes, you should have a full copy of the Ceph repository.
-
-Clone the Submodules
---------------------
-Before you can build Ceph, you must initialize and update its submodules::
-
- $ git submodule init
- $ git submodule update
-
-.. tip:: Make sure you maintain the latest copies of these submodules. Running ``git status`` will tell you if the submodules are out of date::
-
- $ git status
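-
-After pulling new commits, you can bring the submodules back up to date; the two steps above can
-also be combined into a single command::
-
-   $ git submodule update --init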
-
+++ /dev/null
-====================
-Downloading Packages
-====================
-
-We automatically build Debian and Ubuntu packages for any branches or tags that appear in
-the ``ceph.git`` `repository <http://github.com/ceph/ceph>`_. We build packages for the following
-architectures:
-
-- ``amd64``
-- ``i386``
-
-For each architecture, we build packages for the following distributions:
-
-- Debian 7.0 (``wheezy``)
-- Debian 6.0 (``squeeze``)
-- Debian unstable (``sid``)
-- Ubuntu 12.04 (``precise``)
-- Ubuntu 11.10 (``oneiric``)
-- Ubuntu 11.04 (``natty``)
-- Ubuntu 10.10 (``maverick``)
-
-When you execute the following commands to install the Ceph packages, replace ``{ARCH}`` with the architecture of your CPU,
-``{DISTRO}`` with the code name of your operating system (e.g., ``wheezy``, rather than the version number) and
-``{BRANCH}`` with the version of Ceph you want to run (e.g., ``master``, ``stable``, ``unstable``, ``v0.44``, etc.). ::
-
- wget -q -O- https://raw.github.com/ceph/ceph/master/keys/autobuild.asc \
- | sudo apt-key add -
-
- sudo tee /etc/apt/sources.list.d/ceph.list <<EOF
- deb http://ceph.newdream.net/debian-snapshot-{ARCH}/{BRANCH}/ {DISTRO} main
- deb-src http://ceph.newdream.net/debian-snapshot-{ARCH}/{BRANCH}/ {DISTRO} main
- EOF
-
- sudo apt-get update
- sudo apt-get install ceph
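-
-For example, on a 64-bit Ubuntu 12.04 (``precise``) host tracking the ``master`` branch, the two
-``ceph.list`` entries would read::
-
-   deb http://ceph.newdream.net/debian-snapshot-amd64/master/ precise main
-   deb-src http://ceph.newdream.net/debian-snapshot-amd64/master/ precise main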
-
-
-When you download packages, you will receive the latest package build, which may be several weeks behind the current release
-or the most recent code. Consequently, the packages may contain bugs that have already been fixed in the most recent versions
-of the code. Until the packages contain only stable code, you should carefully consider the tradeoffs between installing from
-a package and building Ceph yourself from the latest release or the most current source code.
\ No newline at end of file
+++ /dev/null
-==========================
-Downloading a Ceph Release
-==========================
-As Ceph development progresses, the Ceph team releases new versions. You may download Ceph releases here:
-
-`Ceph Releases <http://ceph.newdream.net/download/>`_
\ No newline at end of file
that will help you get started before you install Ceph:
- :doc:`Why use Ceph? <why_use_ceph>`
-- :doc:`Get Involved in the Ceph Community! <get_involved_in_the_ceph_community>`
+- :doc:`Introduction to Clustered Storage <introduction_to_clustered_storage>`
- :doc:`Quick Start <quick_start>`
+- :doc:`Get Involved in the Ceph Community! <get_involved_in_the_ceph_community>`
.. toctree::
:hidden:
why_use_ceph
- Get Involved <get_involved_in_the_ceph_community>
+ Storage Clusters <introduction_to_clustered_storage>
quick_start
-
+ Get Involved <get_involved_in_the_ceph_community>
--- /dev/null
+=================================
+Introduction to Clustered Storage
+=================================
+
+Storage clusters are the foundation of Ceph. The move to cloud computing creates a requirement
+to store many petabytes of data today, with the ability to store exabytes of data in the near future.
+
+A number of factors make it challenging to build large storage systems. Three of them include:
+
+- **Capital Expenditure**: Proprietary systems are expensive, so building scalable systems requires
+  using less expensive commodity hardware and a "scale out" approach to reduce build-out expenses.
+
+- **Ongoing Operating Expenses**: Supporting thousands of storage hosts can impose significant personnel
+  expenses, particularly as hardware and networking infrastructure must be installed, maintained and
+  replaced on an ongoing basis.
+
+- **Loss of Data or Access to Data**: Mission-critical enterprise applications cannot suffer significant
+  amounts of downtime, including loss of data *or access to data*. Yet, in systems with thousands of
+  storage hosts, hardware failure is an expectation, not an exception.
+
+Because of these and other factors, building massive storage systems requires new thinking.
+
+Ceph uses a revolutionary approach to storage that utilizes "intelligent daemons." A major advantage of *n*-tiered
+architectures is their ability to separate concerns--e.g., presentation layers, logic layers, storage layers, etc.
+Tiered architectures simplify system design, but they also tend to underutilize resources such as CPU, RAM, and network bandwidth.
+Ceph takes advantage of these resources to create a unified storage system with extraordinary scalability.
+
+At the core of Ceph storage is a service entitled the Reliable Autonomic Distributed Object Store (RADOS).
+RADOS revolutionizes Object Storage Devices (OSDs) by utilizing the CPU, memory and network interface of
+the storage hosts to communicate with each other, replicate data, and redistribute data dynamically. RADOS
+implements an algorithm that performs Controlled Replication Under Scalable Hashing, which we refer to as CRUSH.
+CRUSH enables RADOS to plan and distribute the data automatically so that system administrators do not have to
+do it manually. By utilizing each host's computing resources, RADOS increases scalability while simultaneously
+eliminating both a performance bottleneck and a single point of failure common to systems that manage clusters centrally.
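+
+If you are curious what CRUSH placement rules look like in practice, you can extract and decompile
+the CRUSH map from a running cluster with the ``ceph`` and ``crushtool`` utilities (an optional
+illustration; the file names below are arbitrary)::
+
+   $ ceph osd getcrushmap -o crushmap.bin
+   $ crushtool -d crushmap.bin -o crushmap.txt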
+
+Each OSD maintains a map of all the hosts in the cluster. However, system administrators must expect hardware failure
+in petabyte-to-exabyte scale systems with thousands of OSD hosts. Ceph's monitors increase the reliability of the OSD
+clusters by maintaining a master copy of the cluster map. For example, a storage host may be turned on but not be "in"
+the cluster for the purposes of providing data storage services; it may not be connected to the network; it may be
+powered off; or, it may be suffering from a malfunction.
+
+Ceph provides a light-weight monitor process to address faults in the OSD clusters as they arise. Like OSDs, monitors
+should be replicated in large-scale systems so that if one monitor crashes, another monitor can serve in its place.
+When the Ceph storage cluster employs multiple monitors, the monitors may get out of sync and have different versions
+of the cluster map. Ceph utilizes the Paxos algorithm to resolve disparities among versions of the cluster map.
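+
+Once you have a running cluster, you can inspect the maps the monitors maintain with the ``ceph``
+command-line tool, for example::
+
+   $ ceph mon stat
+   $ ceph osd dump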
+
+Ceph Metadata Servers (MDSs) are only required for Ceph FS. You can use RADOS block devices or the
+RADOS Gateway without MDSs. The MDSs dynamically adapt their behavior to the current workload.
+As the size and popularity of parts of the file system hierarchy change over time, the MDSs
+dynamically redistribute the file system hierarchy among the available
+MDSs to balance the load and use server resources effectively.
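+
+If you run Ceph FS, the state of the MDS cluster can be checked the same way::
+
+   $ ceph mds stat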
+
+<image>
+++ /dev/null
-=======
-Summary
-=======
-
-Once you complete the build, you should find the following binaries and utilities under the ``src`` directory of your cloned Ceph repository.
-
-
-=============== =========================================================================
-Utility         Description
-=============== =========================================================================
-ceph-dencoder   a utility to encode, decode, and dump ceph data structures (debugger)
-cephfs          a Ceph file system client utility
-ceph-fuse       a FUSE-based client for the Ceph file system
-ceph-mds        the Ceph file system metadata server (MDS) daemon
-ceph-mon        the Ceph monitor daemon
-ceph-osd        the RADOS OSD storage daemon
-ceph-syn        a simple synthetic workload generator
-crushtool       a utility that lets you create, compile, and decompile CRUSH map files
-monmaptool      a utility to create, view, and modify a monitor cluster map
-mount.ceph      a simple helper for mounting the Ceph file system on a Linux host
-osdmaptool      a utility that lets you create, view, and manipulate OSD cluster maps
-ceph-conf       a utility for getting information about a ceph configuration file
-=============== =========================================================================
-
-In addition, ``radosgw`` is a FastCGI service that provides a RESTful HTTP API to store objects and metadata.
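-
-To confirm that the build produced these binaries, you can list a few of them from the top of the
-source tree (paths assume the default in-tree ``automake`` build)::
-
-   $ ls src/ceph-osd src/ceph-mon src/ceph-mds src/crushtool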
-
-Once you successfully build the Ceph code, you may proceed to Installing Ceph.