Introduction
============
-This guide assumes that you are already familiar with Ceph (the distributed
+This guide has two aims. First, it should lower the barrier to entry for
+software developers who wish to get involved in the Ceph project. Second,
+it should serve as a reference for Ceph developers.
+
+We assume that readers are already familiar with Ceph (the distributed
object store and file system designed to provide excellent performance,
reliability and scalability). If not, please refer to the `project website`_
and especially the `publications list`_.

.. _`project website`: http://ceph.com
.. _`publications list`: https://ceph.com/resources/publications/

-Bare essentials
-===============
+Since this document is written for developers, who are assumed to have
+Internet access, topics that are covered elsewhere on the web are simply
+linked rather than duplicated here. If you notice that a link is broken,
+or if you know of a better one, please open a pull request.
+
+The bare essentials
+===================
+
+This chapter presents essential information that every Ceph developer needs
+to know.

Leads
-----
-* Project lead: Sage Weil
-* RADOS lead: Samuel Just
-* RGW lead: Yehuda Sadeh
-* RBD lead: Josh Durgin
-* CephFS lead: Gregory Farnum
-* Build/Ops lead: Ken Dreyer
-
-The Ceph-specific acronyms are explained under `High-level structure`_, below.
+The Ceph project is led by Sage Weil. In addition, each major project
+component has its own lead. The following table shows all the leads and
+their nicks on `github`_:
-History
--------
+
+.. _github: https://github.com/ceph/ceph
+
-The Ceph project grew out of the petabyte-scale storage research at the Storage
-Systems Research Center at the University of California, Santa Cruz. The
-project was funded primarily by a grant from the Lawrence Livermore, Sandia,
-and Los Alamos National Laboratories.
+========= =============== =============
+Scope     Lead            GitHub nick
+========= =============== =============
+Ceph      Sage Weil       liewegas
+RADOS     Samuel Just     athanatos
+RGW       Yehuda Sadeh    yehudasa
+RBD       Josh Durgin     jdurgin
+CephFS    Gregory Farnum  gregsfortytwo
+Build/Ops Ken Dreyer      ktdreyer
+========= =============== =============
-Sage Weil, the project lead, published his Ph. D. thesis entitled "Ceph:
-Reliable, Scalable, and High-Performance Distributed Storage" in 2007.
+
+The Ceph-specific acronyms in the table are explained under `High-level
+structure`_, below.
-DreamHost https://www.dreamhost.com . . .
+
+History
+-------
-Inktank . . .
+
+See the `History chapter of the Wikipedia article`_.
-On April 30, 2014 Red Hat announced its takeover of Inktank:
-http://www.redhat.com/en/about/press-releases/red-hat-acquire-inktank-provider-ceph
+
+.. _`History chapter of the Wikipedia article`: https://en.wikipedia.org/wiki/Ceph_%28software%29#History
+
Licensing
---------
Mailing list
------------
-The official development email list is ``ceph-devel@vger.kernel.org``. Subscribe by sending
-a message to ``majordomo@vger.kernel.org`` with the line::
+Ceph development email discussions take place on
+``ceph-devel@vger.kernel.org``. Subscribe by sending a message to
+``majordomo@vger.kernel.org`` with the line::
- subscribe ceph-devel
+
+    subscribe ceph-devel
+
in the body of the message.
Like any other large software project, Ceph consists of a number of components.
Viewed from a very high level, the components are:
-* **RADOS**
+
+RADOS
+-----
+
+RADOS stands for "Reliable, Autonomic Distributed Object Store". In a Ceph
+cluster, all data are stored in objects, and RADOS is the component responsible
+for that.
- RADOS stands for "Reliable, Autonomic Distributed Object Store". In a Ceph
- cluster, all data are stored in objects, and RADOS is the component responsible
- for that.
+
+RADOS itself can be further broken down into Monitors, Object Storage Daemons
+(OSDs), and clients (librados). Monitors and OSDs are introduced at
+:doc:`start/intro`. The client library is explained at :doc:`rados/api`.
- RADOS itself can be further broken down into Monitors, Object Storage Daemons
- (OSDs), and clients.
+RGW
+---
-* **RGW**
+RGW stands for RADOS Gateway. Using the embedded HTTP server civetweb_, RGW
+provides a REST interface to RADOS objects.
- RGW stands for RADOS Gateway. Using the embedded HTTP server civetweb_, RGW
- provides a REST interface to RADOS objects.
+
+.. _civetweb: https://github.com/civetweb/civetweb
+
- .. _civetweb: https://github.com/civetweb/civetweb
+A more thorough introduction to RGW can be found at :doc:`radosgw`.
-* **RBD**
+RBD
+---
- RBD stands for RADOS Block Device. It enables a Ceph cluster to store disk
- images, and includes in-kernel code enabling RBD images to be mounted.
+RBD stands for RADOS Block Device. It enables a Ceph cluster to store disk
+images, and includes in-kernel code enabling RBD images to be mapped as
+local block devices.
-* **CephFS**
+
+To delve further into RBD, see :doc:`rbd/rbd`.
- CephFS is a distributed file system that enables a Ceph cluster to be used as a NAS.
+CephFS
+------
- File system metadata is managed by Meta Data Server (MDS) daemons.
+CephFS is a distributed file system that enables a Ceph cluster to be used
+as network-attached storage (NAS).
+
+File system metadata is managed by Metadata Server (MDS) daemons. The Ceph
+file system is explained in more detail at :doc:`cephfs`.
+
+Build/Ops
+---------
+
+Ceph is regularly built and packaged for a number of major Linux
+distributions. At the time of this writing, these include Debian, Ubuntu,
+CentOS, openSUSE, and Fedora.
+
Building
========
Building from source
--------------------
-See http://docs.ceph.com/docs/master/install/build-ceph/
+See instructions at :doc:`install/build-ceph`.
+
Testing
=======
# check that it's there
./ceph health
-WIP
-===
-
-Monitors
---------
-
-MON stands for "Monitor". Each Ceph cluster has a number of monitor processes.
-See **man ceph-mon** or http://docs.ceph.com/docs/master/man/8/ceph-mon/ for
-some basic information. The monitor source code lives under **src/mon** in the
-tree: https://github.com/ceph/ceph/tree/master/src/mon
-
-OSDs
-----
-
-OSD stands for Object Storage Daemon. Typically, there is one of these for each
-disk in the cluster. See **man ceph-osd** or
-http://docs.ceph.com/docs/master/man/8/ceph-osd/ for basic information. The OSD
-source code can be found here: https://github.com/ceph/ceph/tree/master/src/osd
-
-librados
---------
-
-RADOS also includes an API for writing your own clients that can communicate
-directly with a Ceph cluster's underlying object store. The API includes
-bindings for popular scripting languages such as Python. For more information,
-see the documents under https://github.com/ceph/ceph/tree/master/doc/rados/api
-
-Build/Ops
----------
-
-Ceph supports a number of major Linux distributions and provides packaging for them.
-
-