From b6f9a9b2b1a17e2cb73fd90b5d66599f38c32d71 Mon Sep 17 00:00:00 2001
From: Nathan Cutler
Date: Mon, 30 Nov 2015 00:51:11 +0100
Subject: [PATCH] doc/dev/index.rst: begin writing Contributing to Ceph

Replace existing index.rst with the beginnings of a comprehensive
developer guide.

Move the old index.rst to index-old.rst.

Signed-off-by: Nathan Cutler
---
 doc/dev/index-old.rst |  39 +++++++++
 doc/dev/index.rst     | 184 +++++++++++++++++++++++++++++++++++++-----
 2 files changed, 204 insertions(+), 19 deletions(-)
 create mode 100644 doc/dev/index-old.rst

diff --git a/doc/dev/index-old.rst b/doc/dev/index-old.rst
new file mode 100644
index 00000000000..f18adaee0c5
--- /dev/null
+++ b/doc/dev/index-old.rst
@@ -0,0 +1,39 @@
+==================================
+ Internal developer documentation
+==================================
+
+.. note:: If you're looking for how to use Ceph as a library from your
+   own software, please see :doc:`/api/index`.
+
+You can start a development mode Ceph cluster, after compiling the source, with::
+
+    cd src
+    install -d -m0755 out dev/osd0
+    ./vstart.sh -n -x -l
+    # check that it's there
+    ./ceph health
+
+.. todo:: vstart is woefully undocumented and full of sharp sticks to poke yourself with.
+
+
+.. _mailing-list:
+
+.. rubric:: Mailing list
+
+The official development email list is ``ceph-devel@vger.kernel.org``. Subscribe by sending
+a message to ``majordomo@vger.kernel.org`` with the line::
+
+    subscribe ceph-devel
+
+in the body of the message.
+
+
+.. rubric:: Contents
+
+.. toctree::
+   :glob:
+
+   *
+   osd_internals/index*
+   mds_internals/index*
+   radosgw/index*
diff --git a/doc/dev/index.rst b/doc/dev/index.rst
index f18adaee0c5..93fddb8c6c2 100644
--- a/doc/dev/index.rst
+++ b/doc/dev/index.rst
@@ -1,9 +1,146 @@
-==================================
- Internal developer documentation
-==================================
+====================
+Contributing to Ceph
+====================
+
+----------------------
+A Guide for Developers
+----------------------
 
-.. note:: If you're looking for how to use Ceph as a library from your
-   own software, please see :doc:`/api/index`.
+:Author: Loic Dachary
+:Author: Nathan Cutler
+:License: Creative Commons Attribution-ShareAlike (CC BY-SA)
+
+.. note:: The old (pre-2016) developer documentation has been moved to :doc:`/dev/index-old`.
+
+.. contents::
+   :depth: 4
+
+Introduction
+============
+
+This guide assumes that you are already familiar with Ceph (the distributed
+object store and file system designed to provide excellent performance,
+reliability and scalability). If not, please refer to the `project website`_
+and especially the `publications list`_.
+
+.. _`project website`: http://ceph.com
+.. _`publications list`: https://ceph.com/resources/publications/
+
+Bare essentials
+===============
+
+Leads
+-----
+
+* Project lead: Sage Weil
+* RADOS lead: Samuel Just
+* RGW lead: Yehuda Sadeh
+* RBD lead: Josh Durgin
+* CephFS lead: Gregory Farnum
+* Build/Ops lead: Ken Dreyer
+
+The Ceph-specific acronyms are explained under `High-level structure`_, below.
+
+History
+-------
+
+The Ceph project grew out of petabyte-scale storage research at the Storage
+Systems Research Center of the University of California, Santa Cruz. The
+project was funded primarily by a grant from the Lawrence Livermore, Sandia,
+and Los Alamos national laboratories.
+
+Sage Weil, the project lead, published his Ph.D. thesis, entitled "Ceph:
+Reliable, Scalable, and High-Performance Distributed Storage", in 2007.
+
+DreamHost https://www.dreamhost.com . . .
+
+Inktank . . .
+
+On April 30, 2014, Red Hat announced its acquisition of Inktank:
+http://www.redhat.com/en/about/press-releases/red-hat-acquire-inktank-provider-ceph
+
+Licensing
+---------
+
+Ceph is free software.
+
+Unless stated otherwise, the Ceph source code is distributed under the terms of
+the LGPL2.1. For full details, see the file COPYING in the top-level directory
+of the source-code distribution:
+https://github.com/ceph/ceph/blob/master/COPYING
+
+Source code repositories
+------------------------
+
+The source code of Ceph lives on GitHub in a number of repositories under
+https://github.com/ceph
+
+To make a meaningful contribution to the project as a developer, a working
+knowledge of git_ is essential.
+
+.. _git: https://git-scm.com/documentation
+
+Mailing list
+------------
+
+The official development email list is ``ceph-devel@vger.kernel.org``. Subscribe by sending
+a message to ``majordomo@vger.kernel.org`` with the line::
+
+    subscribe ceph-devel
+
+in the body of the message.
+
+There are also `other Ceph-related mailing lists`_.
+
+.. _`other Ceph-related mailing lists`: https://ceph.com/resources/mailing-list-irc/
+
+IRC
+---
+
+See https://ceph.com/resources/mailing-list-irc/
+
+High-level structure
+====================
+
+Like any other large software project, Ceph consists of a number of components.
+Viewed from a very high level, the components are:
+
+* **RADOS**
+
+  RADOS stands for "Reliable, Autonomic Distributed Object Store". In a Ceph
+  cluster, all data are stored in objects, and RADOS is the component
+  responsible for storing and retrieving them.
+
+  RADOS itself can be further broken down into Monitors, Object Storage
+  Daemons (OSDs), and clients.
+
+* **RGW**
+
+  RGW stands for RADOS Gateway. Using the embedded HTTP server civetweb_, RGW
+  provides a REST interface to RADOS objects (a minimal client sketch follows
+  this list).
+
+  .. _civetweb: https://github.com/civetweb/civetweb
+
+* **RBD**
+
+  RBD stands for RADOS Block Device. It enables a Ceph cluster to store disk
+  images, and includes in-kernel code that enables RBD images to be mounted.
+
+* **CephFS**
+
+  CephFS is a distributed file system that enables a Ceph cluster to be used
+  as a NAS.
+
+  File system metadata is managed by Metadata Server (MDS) daemons.
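+
+Because RGW exposes an S3-compatible REST API, a generic S3 client library can
+talk to it. The following is a minimal, illustrative sketch using the Python
+boto library; the endpoint and credentials are placeholders, not values
+shipped with Ceph::
+
+    import boto
+    import boto.s3.connection
+
+    # Placeholder credentials and endpoint; substitute those of a real
+    # RGW instance.
+    conn = boto.connect_s3(
+        aws_access_key_id='<access-key>',
+        aws_secret_access_key='<secret-key>',
+        host='rgw.example.com',
+        is_secure=False,
+        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
+    )
+
+    # List the buckets owned by these credentials.
+    for bucket in conn.get_all_buckets():
+        print(bucket.name)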
+
+Building
+========
+
+Building from source
+--------------------
+
+See http://docs.ceph.com/docs/master/install/build-ceph/
+
+Testing
+=======
 
 You can start a development mode Ceph cluster, after compiling the source, with::
 
@@ -13,27 +150,36 @@ You can start a development mode Ceph cluster, after compiling the source, with:
     # check that it's there
     ./ceph health
 
-.. todo:: vstart is woefully undocumented and full of sharp sticks to poke yourself with.
+WIP
+===
 
+Monitors
+--------
 
-.. _mailing-list:
+MON stands for "Monitor". Each Ceph cluster has a number of monitor processes.
+See **man ceph-mon** or http://docs.ceph.com/docs/master/man/8/ceph-mon/ for
+some basic information. The monitor source code lives under **src/mon** in the
+tree: https://github.com/ceph/ceph/tree/master/src/mon
 
-.. rubric:: Mailing list
+OSDs
+----
 
-The official development email list is ``ceph-devel@vger.kernel.org``. Subscribe by sending
-a message to ``majordomo@vger.kernel.org`` with the line::
+OSD stands for Object Storage Daemon. Typically, there is one of these for each
+disk in the cluster. See **man ceph-osd** or
+http://docs.ceph.com/docs/master/man/8/ceph-osd/ for basic information. The OSD
+source code can be found here: https://github.com/ceph/ceph/tree/master/src/osd
 
-    subscribe ceph-devel
+librados
+--------
 
-in the body of the message.
+RADOS also includes an API for writing your own clients that can communicate
+directly with a Ceph cluster's underlying object store. The API includes
+bindings for popular scripting languages such as Python. For more information,
+see the documents under https://github.com/ceph/ceph/tree/master/doc/rados/api
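+
+The following is a minimal, illustrative sketch of a librados client in
+Python; it assumes a running test cluster, a ``ceph.conf`` describing it in
+the current directory, and an existing pool named ``data``::
+
+    import rados
+
+    # Connect to the cluster described by ceph.conf.
+    cluster = rados.Rados(conffile='ceph.conf')
+    cluster.connect()
+
+    # Open an I/O context on the pool, store one object, and read it back.
+    ioctx = cluster.open_ioctx('data')
+    ioctx.write_full('greeting', b'hello world')
+    print(ioctx.read('greeting'))
+
+    ioctx.close()
+    cluster.shutdown()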
+
+Build/Ops
+---------
 
-.. rubric:: Contents
+Ceph supports a number of major Linux distributions and provides packaging for them.
 
-.. toctree::
-   :glob:
-
-   *
-   osd_internals/index*
-   mds_internals/index*
-   radosgw/index*
-- 
2.47.3