From: Tommi Virtanen Date: Thu, 3 May 2012 17:10:29 +0000 (-0700) Subject: doc: Rename to use dashes not underscores in URLs. X-Git-Tag: v0.47~55^2~11 X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=93dcc9886fb833e987efd7eaf3bb71d84d79eda9;p=ceph.git doc: Rename to use dashes not underscores in URLs. This makes the-separate-words in the url match as separate words in searches, where this_way only matches an explicit "this_way" search. http://www.mattcutts.com/blog/dashes-vs-underscores/ Signed-off-by: Tommi Virtanen --- diff --git a/doc/config-cluster/ceph-conf.rst b/doc/config-cluster/ceph-conf.rst new file mode 100644 index 000000000000..c8f0213f7aa6 --- /dev/null +++ b/doc/config-cluster/ceph-conf.rst @@ -0,0 +1,176 @@ +========================== + Ceph Configuration Files +========================== +When you start the Ceph service, the initialization process activates a series +of daemons that run in the background. The hosts in a typical RADOS cluster run +at least one of three processes or daemons: + +- RADOS (``ceph-osd``) +- Monitor (``ceph-mon``) +- Metadata Server (``ceph-mds``) + +Each process or daemon looks for a ``ceph.conf`` file that provides their +configuration settings. The default ``ceph.conf`` locations in sequential +order include: + + 1. ``$CEPH_CONF`` (*i.e.,* the path following + the ``$CEPH_CONF`` environment variable) + 2. ``-c path/path`` (*i.e.,* the ``-c`` command line argument) + 3. ``/etc/ceph/ceph.conf`` + 4. ``~/.ceph/config`` + 5. ``./ceph.conf`` (*i.e.,* in the current working directory) + +The ``ceph.conf`` file provides the settings for each Ceph daemon. Once you +have installed the Ceph packages on the OSD Cluster hosts, you need to create +a ``ceph.conf`` file to configure your OSD cluster. + +Creating ``ceph.conf`` +---------------------- +The ``ceph.conf`` file defines: + +- Cluster Membership +- Host Names +- Paths to Hosts +- Runtime Options + +You can add comments to the ``ceph.conf`` file by preceding comments with +a semi-colon (;). For example:: + + ; <--A semi-colon precedes a comment + ; A comment may be anything, and always follows a semi-colon on each line. + ; We recommend that you provide comments in your configuration file(s). + +Configuration File Basics +~~~~~~~~~~~~~~~~~~~~~~~~~ +The ``ceph.conf`` file configures each instance of the three common processes +in a RADOS cluster. + ++-----------------+--------------+--------------+-----------------+-------------------------------------------------+ +| Setting Scope | Process | Setting | Instance Naming | Description | ++=================+==============+==============+=================+=================================================+ +| All Modules | All | ``[global]`` | N/A | Settings affect all instances of all daemons. | ++-----------------+--------------+--------------+-----------------+-------------------------------------------------+ +| RADOS | ``ceph-osd`` | ``[osd]`` | Numeric | Settings affect RADOS instances only. | ++-----------------+--------------+--------------+-----------------+-------------------------------------------------+ +| Monitor | ``ceph-mon`` | ``[mon]`` | Alphanumeric | Settings affect monitor instances only. | ++-----------------+--------------+--------------+-----------------+-------------------------------------------------+ +| Metadata Server | ``ceph-mds`` | ``[mds]`` | Alphanumeric | Settings affect MDS instances only. 
| ++-----------------+--------------+--------------+-----------------+-------------------------------------------------+ + +Metavariables +~~~~~~~~~~~~~ +The configuration system supports certain 'metavariables,' which are typically +used in ``[global]`` or process/daemon settings. If metavariables occur inside +a configuration value, Ceph expands them into a concrete value--similar to how +Bash shell expansion works. + +There are a few different metavariables: + ++--------------+----------------------------------------------------------------------------------------------------------+ +| Metavariable | Description | ++==============+==========================================================================================================+ +| ``$host`` | Expands to the host name of the current daemon. | ++--------------+----------------------------------------------------------------------------------------------------------+ +| ``$type`` | Expands to one of ``mds``, ``osd``, or ``mon``, depending on the type of the current daemon. | ++--------------+----------------------------------------------------------------------------------------------------------+ +| ``$id`` | Expands to the daemon identifier. For ``osd.0``, this would be ``0``; for ``mds.a``, it would be ``a``. | ++--------------+----------------------------------------------------------------------------------------------------------+ +| ``$num`` | Same as ``$id``. | ++--------------+----------------------------------------------------------------------------------------------------------+ +| ``$name`` | Expands to ``$type.$id``. | ++--------------+----------------------------------------------------------------------------------------------------------+ +| ``$cluster`` | Expands to the cluster name. Useful when running multiple clusters on the same hardware. | ++--------------+----------------------------------------------------------------------------------------------------------+ + +Global Settings +~~~~~~~~~~~~~~~ +The Ceph configuration file supports a hierarchy of settings, where child +settings inherit the settings of the parent. Global settings affect all +instances of all processes in the cluster. Use the ``[global]`` setting for +values that are common for all hosts in the cluster. You can override each +``[global]`` setting by: + +1. Changing the setting in a particular ``[group]``. +2. Changing the setting in a particular process type (*e.g.,* ``[osd]``, ``[mon]``, ``[mds]`` ). +3. Changing the setting in a particular process (*e.g.,* ``[osd.1]`` ) + +Overriding a global setting affects all child processes, except those that +you specifically override. For example:: + + [global] + ; Enable authentication between hosts within the cluster. + auth supported = cephx + +Process/Daemon Settings +~~~~~~~~~~~~~~~~~~~~~~~ +You can specify settings that apply to a particular type of process. When you +specify settings under ``[osd]``, ``[mon]`` or ``[mds]`` without specifying a +particular instance, the setting will apply to all OSDs, monitors or metadata +daemons respectively. + +Instance Settings +~~~~~~~~~~~~~~~~~ +You may specify settings for particular instances of an daemon. You may specify +an instance by entering its type, delimited by a period (.) and by the +instance ID. The instance ID for an OSD is always numeric, but it may be +alphanumeric for monitors and metadata servers. :: + + [osd.1] + ; settings affect osd.1 only. + [mon.a1] + ; settings affect mon.a1 only. + [mds.b2] + ; settings affect mds.b2 only. 
+ +``host`` and ``addr`` Settings +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +The `Hardware Recommendations <../hardware-recommendations>`_ section +provides some hardware guidelines for configuring the cluster. It is possible +for a single host to run multiple daemons. For example, a single host with +multiple disks or RAIDs may run one ``ceph-osd`` for each disk or RAID. +Additionally, a host may run both a ``ceph-mon`` and an ``ceph-osd`` daemon +on the same host. Ideally, you will have a host for a particular type of +process. For example, one host may run ``ceph-osd`` daemons, another host +may run a ``ceph-mds`` daemon, and other hosts may run ``ceph-mon`` daemons. + +Each host has a name identified by the ``host`` setting, and a network location +(i.e., domain name or IP address) identified by the ``addr`` setting. For example:: + + [osd.1] + host = hostNumber1 + addr = 150.140.130.120:1100 + [osd.2] + host = hostNumber1 + addr = 150.140.130.120:1102 + + +Monitor Configuration +~~~~~~~~~~~~~~~~~~~~~ +Ceph typically deploys with 3 monitors to ensure high availability should a +monitor instance crash. An odd number of monitors (3) ensures that the Paxos +algorithm can determine which version of the cluster map is the most accurate. + +.. note:: You may deploy Ceph with a single monitor, but if the instance fails, + the lack of a monitor may interrupt data service availability. + +Ceph monitors typically listen on port ``6789``. + +Example Configuration File +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. literalinclude:: demo-ceph.conf + :language: ini + +Configuration File Deployment Options +------------------------------------- +The most common way to deploy the ``ceph.conf`` file in a cluster is to have +all hosts share the same configuration file. + +You may create a ``ceph.conf`` file for each host if you wish, or specify a +particular ``ceph.conf`` file for a subset of hosts within the cluster. However, +using per-host ``ceph.conf``configuration files imposes a maintenance burden as the +cluster grows. In a typical deployment, an administrator creates a ``ceph.conf`` file +on the Administration host and then copies that file to each OSD Cluster host. + +The current cluster deployment script, ``mkcephfs``, does not make copies of the +``ceph.conf``. You must copy the file manually. diff --git a/doc/config-cluster/ceph_conf.rst b/doc/config-cluster/ceph_conf.rst deleted file mode 100644 index 86c0d44abfc0..000000000000 --- a/doc/config-cluster/ceph_conf.rst +++ /dev/null @@ -1,176 +0,0 @@ -========================== - Ceph Configuration Files -========================== -When you start the Ceph service, the initialization process activates a series -of daemons that run in the background. The hosts in a typical RADOS cluster run -at least one of three processes or daemons: - -- RADOS (``ceph-osd``) -- Monitor (``ceph-mon``) -- Metadata Server (``ceph-mds``) - -Each process or daemon looks for a ``ceph.conf`` file that provides their -configuration settings. The default ``ceph.conf`` locations in sequential -order include: - - 1. ``$CEPH_CONF`` (*i.e.,* the path following - the ``$CEPH_CONF`` environment variable) - 2. ``-c path/path`` (*i.e.,* the ``-c`` command line argument) - 3. ``/etc/ceph/ceph.conf`` - 4. ``~/.ceph/config`` - 5. ``./ceph.conf`` (*i.e.,* in the current working directory) - -The ``ceph.conf`` file provides the settings for each Ceph daemon. Once you -have installed the Ceph packages on the OSD Cluster hosts, you need to create -a ``ceph.conf`` file to configure your OSD cluster. 
- -Creating ``ceph.conf`` ----------------------- -The ``ceph.conf`` file defines: - -- Cluster Membership -- Host Names -- Paths to Hosts -- Runtime Options - -You can add comments to the ``ceph.conf`` file by preceding comments with -a semi-colon (;). For example:: - - ; <--A semi-colon precedes a comment - ; A comment may be anything, and always follows a semi-colon on each line. - ; We recommend that you provide comments in your configuration file(s). - -Configuration File Basics -~~~~~~~~~~~~~~~~~~~~~~~~~ -The ``ceph.conf`` file configures each instance of the three common processes -in a RADOS cluster. - -+-----------------+--------------+--------------+-----------------+-------------------------------------------------+ -| Setting Scope | Process | Setting | Instance Naming | Description | -+=================+==============+==============+=================+=================================================+ -| All Modules | All | ``[global]`` | N/A | Settings affect all instances of all daemons. | -+-----------------+--------------+--------------+-----------------+-------------------------------------------------+ -| RADOS | ``ceph-osd`` | ``[osd]`` | Numeric | Settings affect RADOS instances only. | -+-----------------+--------------+--------------+-----------------+-------------------------------------------------+ -| Monitor | ``ceph-mon`` | ``[mon]`` | Alphanumeric | Settings affect monitor instances only. | -+-----------------+--------------+--------------+-----------------+-------------------------------------------------+ -| Metadata Server | ``ceph-mds`` | ``[mds]`` | Alphanumeric | Settings affect MDS instances only. | -+-----------------+--------------+--------------+-----------------+-------------------------------------------------+ - -Metavariables -~~~~~~~~~~~~~ -The configuration system supports certain 'metavariables,' which are typically -used in ``[global]`` or process/daemon settings. If metavariables occur inside -a configuration value, Ceph expands them into a concrete value--similar to how -Bash shell expansion works. - -There are a few different metavariables: - -+--------------+----------------------------------------------------------------------------------------------------------+ -| Metavariable | Description | -+==============+==========================================================================================================+ -| ``$host`` | Expands to the host name of the current daemon. | -+--------------+----------------------------------------------------------------------------------------------------------+ -| ``$type`` | Expands to one of ``mds``, ``osd``, or ``mon``, depending on the type of the current daemon. | -+--------------+----------------------------------------------------------------------------------------------------------+ -| ``$id`` | Expands to the daemon identifier. For ``osd.0``, this would be ``0``; for ``mds.a``, it would be ``a``. | -+--------------+----------------------------------------------------------------------------------------------------------+ -| ``$num`` | Same as ``$id``. | -+--------------+----------------------------------------------------------------------------------------------------------+ -| ``$name`` | Expands to ``$type.$id``. | -+--------------+----------------------------------------------------------------------------------------------------------+ -| ``$cluster`` | Expands to the cluster name. Useful when running multiple clusters on the same hardware. 
| -+--------------+----------------------------------------------------------------------------------------------------------+ - -Global Settings -~~~~~~~~~~~~~~~ -The Ceph configuration file supports a hierarchy of settings, where child -settings inherit the settings of the parent. Global settings affect all -instances of all processes in the cluster. Use the ``[global]`` setting for -values that are common for all hosts in the cluster. You can override each -``[global]`` setting by: - -1. Changing the setting in a particular ``[group]``. -2. Changing the setting in a particular process type (*e.g.,* ``[osd]``, ``[mon]``, ``[mds]`` ). -3. Changing the setting in a particular process (*e.g.,* ``[osd.1]`` ) - -Overriding a global setting affects all child processes, except those that -you specifically override. For example:: - - [global] - ; Enable authentication between hosts within the cluster. - auth supported = cephx - -Process/Daemon Settings -~~~~~~~~~~~~~~~~~~~~~~~ -You can specify settings that apply to a particular type of process. When you -specify settings under ``[osd]``, ``[mon]`` or ``[mds]`` without specifying a -particular instance, the setting will apply to all OSDs, monitors or metadata -daemons respectively. - -Instance Settings -~~~~~~~~~~~~~~~~~ -You may specify settings for particular instances of an daemon. You may specify -an instance by entering its type, delimited by a period (.) and by the -instance ID. The instance ID for an OSD is always numeric, but it may be -alphanumeric for monitors and metadata servers. :: - - [osd.1] - ; settings affect osd.1 only. - [mon.a1] - ; settings affect mon.a1 only. - [mds.b2] - ; settings affect mds.b2 only. - -``host`` and ``addr`` Settings -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -The `Hardware Recommendations <../hardware_recommendations>`_ section -provides some hardware guidelines for configuring the cluster. It is possible -for a single host to run multiple daemons. For example, a single host with -multiple disks or RAIDs may run one ``ceph-osd`` for each disk or RAID. -Additionally, a host may run both a ``ceph-mon`` and an ``ceph-osd`` daemon -on the same host. Ideally, you will have a host for a particular type of -process. For example, one host may run ``ceph-osd`` daemons, another host -may run a ``ceph-mds`` daemon, and other hosts may run ``ceph-mon`` daemons. - -Each host has a name identified by the ``host`` setting, and a network location -(i.e., domain name or IP address) identified by the ``addr`` setting. For example:: - - [osd.1] - host = hostNumber1 - addr = 150.140.130.120:1100 - [osd.2] - host = hostNumber1 - addr = 150.140.130.120:1102 - - -Monitor Configuration -~~~~~~~~~~~~~~~~~~~~~ -Ceph typically deploys with 3 monitors to ensure high availability should a -monitor instance crash. An odd number of monitors (3) ensures that the Paxos -algorithm can determine which version of the cluster map is the most accurate. - -.. note:: You may deploy Ceph with a single monitor, but if the instance fails, - the lack of a monitor may interrupt data service availability. - -Ceph monitors typically listen on port ``6789``. - -Example Configuration File -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. literalinclude:: demo-ceph.conf - :language: ini - -Configuration File Deployment Options -------------------------------------- -The most common way to deploy the ``ceph.conf`` file in a cluster is to have -all hosts share the same configuration file. 
- -You may create a ``ceph.conf`` file for each host if you wish, or specify a -particular ``ceph.conf`` file for a subset of hosts within the cluster. However, -using per-host ``ceph.conf``configuration files imposes a maintenance burden as the -cluster grows. In a typical deployment, an administrator creates a ``ceph.conf`` file -on the Administration host and then copies that file to each OSD Cluster host. - -The current cluster deployment script, ``mkcephfs``, does not make copies of the -``ceph.conf``. You must copy the file manually. diff --git a/doc/config-cluster/deploying-ceph-conf.rst b/doc/config-cluster/deploying-ceph-conf.rst new file mode 100644 index 000000000000..71caed7eb6fa --- /dev/null +++ b/doc/config-cluster/deploying-ceph-conf.rst @@ -0,0 +1,48 @@ +============================== + Deploying Ceph Configuration +============================== +Ceph's current deployment script does not copy the configuration file you +created from the Administration host to the OSD Cluster hosts. Copy the +configuration file you created (*i.e.,* ``mycluster.conf`` in the example below) +from the Administration host to ``etc/ceph/ceph.conf`` on each OSD Cluster host. + +:: + + ssh myserver01 sudo tee /etc/ceph/ceph.conf /ceph.conf -k mycluster.keyring + +The script adds an admin key to the ``mycluster.keyring``, which is analogous to a root password. + +To start the cluster, execute the following:: + + /etc/init.d/ceph -a start + +Ceph should begin operating. You can check on the health of your Ceph cluster with the following:: + + ceph -k mycluster.keyring -c /ceph.conf health + diff --git a/doc/config-cluster/deploying_ceph_conf.rst b/doc/config-cluster/deploying_ceph_conf.rst deleted file mode 100644 index 71caed7eb6fa..000000000000 --- a/doc/config-cluster/deploying_ceph_conf.rst +++ /dev/null @@ -1,48 +0,0 @@ -============================== - Deploying Ceph Configuration -============================== -Ceph's current deployment script does not copy the configuration file you -created from the Administration host to the OSD Cluster hosts. Copy the -configuration file you created (*i.e.,* ``mycluster.conf`` in the example below) -from the Administration host to ``etc/ceph/ceph.conf`` on each OSD Cluster host. - -:: - - ssh myserver01 sudo tee /etc/ceph/ceph.conf /ceph.conf -k mycluster.keyring - -The script adds an admin key to the ``mycluster.keyring``, which is analogous to a root password. - -To start the cluster, execute the following:: - - /etc/init.d/ceph -a start - -Ceph should begin operating. You can check on the health of your Ceph cluster with the following:: - - ceph -k mycluster.keyring -c /ceph.conf health - diff --git a/doc/config-cluster/file-system-recommendations.rst b/doc/config-cluster/file-system-recommendations.rst new file mode 100644 index 000000000000..4af540d81bd0 --- /dev/null +++ b/doc/config-cluster/file-system-recommendations.rst @@ -0,0 +1,52 @@ +========================================= +Hard Disk and File System Recommendations +========================================= + +Ceph aims for data safety, which means that when the application receives notice +that data was written to the disk, that data was actually written to the disk. +For old kernels (<2.6.33), disable the write cache if the journal is on a raw +disk. Newer kernels should work fine. 
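+
+If you are unsure which kernel a host is running, check it before deciding
+whether to disable the write cache; the output below is only an example::
+
+    $ uname -r
+    2.6.32-5-amd64
+
+If the reported version is older than 2.6.33 and the journal is on a raw disk,
+disable the write cache as described below.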
+ +Use ``hdparm`` to disable write caching on the hard disk:: + + $ hdparm -W 0 /dev/hda 0 + + +Ceph OSDs depend on the Extended Attributes (XATTRs) of the underlying file +system for: + +- Internal object state +- Snapshot metadata +- RADOS Gateway Access Control Lists (ACLs). + +Ceph OSDs rely heavily upon the stability and performance of the underlying file +system. The underlying file system must provide sufficient capacity for XATTRs. +File system candidates for Ceph include B tree and B+ tree file systems such as: + +- ``btrfs`` +- ``XFS`` + +If you are using ``ext4``, enable XATTRs. :: + + filestore xattr use omap = true + +.. warning:: XATTR limits. + + The RADOS Gateway's ACL and Ceph snapshots easily surpass the 4-kilobyte limit + for XATTRs in ``ext4``, causing the ``ceph-osd`` process to crash. Version 0.45 + or newer uses ``leveldb`` to bypass this limitation. ``ext4`` is a poor file + system choice if you intend to deploy the RADOS Gateway or use snapshots on + versions earlier than 0.45. + +.. tip:: Use ``xfs`` initially and ``btrfs`` when it is ready for production. + + The Ceph team believes that the best performance and stability will come from + ``btrfs.`` The ``btrfs`` file system has internal transactions that keep the + local data set in a consistent state. This makes OSDs based on ``btrfs`` simple + to deploy, while providing scalability not currently available from block-based + file systems. The 64-kb XATTR limit for ``xfs`` XATTRS is enough to accommodate + RDB snapshot metadata and RADOS Gateway ACLs. So ``xfs`` is the second-choice + file system of the Ceph team in the long run, but ``xfs`` is currently more + stable than ``btrfs``. If you only plan to use RADOS and ``rbd`` without + snapshots and without ``radosgw``, the ``ext4`` file system should work just fine. + diff --git a/doc/config-cluster/file_system_recommendations.rst b/doc/config-cluster/file_system_recommendations.rst deleted file mode 100644 index 4af540d81bd0..000000000000 --- a/doc/config-cluster/file_system_recommendations.rst +++ /dev/null @@ -1,52 +0,0 @@ -========================================= -Hard Disk and File System Recommendations -========================================= - -Ceph aims for data safety, which means that when the application receives notice -that data was written to the disk, that data was actually written to the disk. -For old kernels (<2.6.33), disable the write cache if the journal is on a raw -disk. Newer kernels should work fine. - -Use ``hdparm`` to disable write caching on the hard disk:: - - $ hdparm -W 0 /dev/hda 0 - - -Ceph OSDs depend on the Extended Attributes (XATTRs) of the underlying file -system for: - -- Internal object state -- Snapshot metadata -- RADOS Gateway Access Control Lists (ACLs). - -Ceph OSDs rely heavily upon the stability and performance of the underlying file -system. The underlying file system must provide sufficient capacity for XATTRs. -File system candidates for Ceph include B tree and B+ tree file systems such as: - -- ``btrfs`` -- ``XFS`` - -If you are using ``ext4``, enable XATTRs. :: - - filestore xattr use omap = true - -.. warning:: XATTR limits. - - The RADOS Gateway's ACL and Ceph snapshots easily surpass the 4-kilobyte limit - for XATTRs in ``ext4``, causing the ``ceph-osd`` process to crash. Version 0.45 - or newer uses ``leveldb`` to bypass this limitation. ``ext4`` is a poor file - system choice if you intend to deploy the RADOS Gateway or use snapshots on - versions earlier than 0.45. - -.. 
tip:: Use ``xfs`` initially and ``btrfs`` when it is ready for production. - - The Ceph team believes that the best performance and stability will come from - ``btrfs.`` The ``btrfs`` file system has internal transactions that keep the - local data set in a consistent state. This makes OSDs based on ``btrfs`` simple - to deploy, while providing scalability not currently available from block-based - file systems. The 64-kb XATTR limit for ``xfs`` XATTRS is enough to accommodate - RDB snapshot metadata and RADOS Gateway ACLs. So ``xfs`` is the second-choice - file system of the Ceph team in the long run, but ``xfs`` is currently more - stable than ``btrfs``. If you only plan to use RADOS and ``rbd`` without - snapshots and without ``radosgw``, the ``ext4`` file system should work just fine. - diff --git a/doc/config-cluster/index.rst b/doc/config-cluster/index.rst index d62cec2b5247..46f627db3677 100644 --- a/doc/config-cluster/index.rst +++ b/doc/config-cluster/index.rst @@ -23,7 +23,7 @@ instance (a single context). .. toctree:: - file_system_recommendations - Configuration - Deploy Config - deploying_ceph_with_mkcephfs + file-system-recommendations + Configuration + Deploy Config + deploying-ceph-with-mkcephfs diff --git a/doc/install/download-packages.rst b/doc/install/download-packages.rst new file mode 100644 index 000000000000..21df35231f0c --- /dev/null +++ b/doc/install/download-packages.rst @@ -0,0 +1,51 @@ +==================================== + Downloading Debian/Ubuntu Packages +==================================== +We automatically build Debian/Ubuntu packages for any branches or tags that +appear in the ``ceph.git`` `repository `_. If you +want to build your own packages (*e.g.,* for RPM), see +`Build Ceph Packages <../../source/build-packages>`_. + +When you download release packages, you will receive the latest package build, +which may be several weeks behind the current release or the most recent code. +It may contain bugs that have already been fixed in the most recent versions of +the code. Until packages contain only stable code, you should carefully consider +the tradeoffs of installing from a package or retrieving the latest release +or the most current source code and building Ceph. + +When you execute the following commands to install the Debian/Ubuntu Ceph +packages, replace ``{ARCH}`` with the architecture of your CPU (*e.g.,* ``amd64`` +or ``i386``), ``{DISTRO}`` with the code name of your operating system +(*e.g.,* ``precise``, rather than the OS version number) and ``{BRANCH}`` with +the version of Ceph you want to run (e.g., ``master``, ``stable``, ``unstable``, +``v0.44``, *etc.*). + +Adding Release Packages to APT +------------------------------ +We provide stable release packages for Debian/Ubuntu, which are signed signed +with the ``release.asc`` key. Click `here `_ +to see the distributions and branches supported. To install a release package, +you must first add a release key. :: + + $ wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc \ | sudo apt-key add - + +For Debian/Ubuntu releases, we use the Advanced Package Tool (APT). To retrieve +the release packages and updates and install them with ``apt``, you must add a +``ceph.list`` file to your ``apt`` configuration with the following path:: + + etc/apt/sources.list.d/ceph.list + +Open the file and add the following line:: + + deb http://ceph.com/debian/ {DISTRO} main + +Remember to replace ``{DISTRO}`` with the Linux distribution for your host. +Then, save the file. 
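+
+For example, on an Ubuntu ``precise`` host the completed line would read::
+
+    deb http://ceph.com/debian/ precise main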
+ +Downloading Packages +-------------------- +Once you add either release or autobuild packages for Debian/Ubuntu, you may +download them with ``apt`` as follows:: + + sudo apt-get update + diff --git a/doc/install/download_packages.rst b/doc/install/download_packages.rst deleted file mode 100644 index c46724ee7789..000000000000 --- a/doc/install/download_packages.rst +++ /dev/null @@ -1,51 +0,0 @@ -==================================== - Downloading Debian/Ubuntu Packages -==================================== -We automatically build Debian/Ubuntu packages for any branches or tags that -appear in the ``ceph.git`` `repository `_. If you -want to build your own packages (*e.g.,* for RPM), see -`Build Ceph Packages <../../source/build_packages>`_. - -When you download release packages, you will receive the latest package build, -which may be several weeks behind the current release or the most recent code. -It may contain bugs that have already been fixed in the most recent versions of -the code. Until packages contain only stable code, you should carefully consider -the tradeoffs of installing from a package or retrieving the latest release -or the most current source code and building Ceph. - -When you execute the following commands to install the Debian/Ubuntu Ceph -packages, replace ``{ARCH}`` with the architecture of your CPU (*e.g.,* ``amd64`` -or ``i386``), ``{DISTRO}`` with the code name of your operating system -(*e.g.,* ``precise``, rather than the OS version number) and ``{BRANCH}`` with -the version of Ceph you want to run (e.g., ``master``, ``stable``, ``unstable``, -``v0.44``, *etc.*). - -Adding Release Packages to APT ------------------------------- -We provide stable release packages for Debian/Ubuntu, which are signed signed -with the ``release.asc`` key. Click `here `_ -to see the distributions and branches supported. To install a release package, -you must first add a release key. :: - - $ wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc \ | sudo apt-key add - - -For Debian/Ubuntu releases, we use the Advanced Package Tool (APT). To retrieve -the release packages and updates and install them with ``apt``, you must add a -``ceph.list`` file to your ``apt`` configuration with the following path:: - - etc/apt/sources.list.d/ceph.list - -Open the file and add the following line:: - - deb http://ceph.com/debian/ {DISTRO} main - -Remember to replace ``{DISTRO}`` with the Linux distribution for your host. -Then, save the file. - -Downloading Packages --------------------- -Once you add either release or autobuild packages for Debian/Ubuntu, you may -download them with ``apt`` as follows:: - - sudo apt-get update - diff --git a/doc/install/hardware-recommendations.rst b/doc/install/hardware-recommendations.rst new file mode 100644 index 000000000000..f4ff44e8cf25 --- /dev/null +++ b/doc/install/hardware-recommendations.rst @@ -0,0 +1,49 @@ +========================== + Hardware Recommendations +========================== +Ceph runs on commodity hardware and a Linux operating system over a TCP/IP +network. The hardware recommendations for different processes/daemons differ +considerably. + +OSD hosts should have ample data storage in the form of a hard drive or a RAID. +Ceph OSDs run the RADOS service, calculate data placement with CRUSH, and +maintain their own copy of the cluster map. Therefore, OSDs should have a +reasonable amount of processing power. + +Ceph monitors require enough disk space for the cluster map, but usually do +not encounter heavy loads. 
Monitor hosts do not need to be very powerful. + +Ceph metadata servers distribute their load. However, metadata servers must be +capable of serving their data quickly. Metadata servers should have strong +processing capability and plenty of RAM. + +.. note:: If you are not using the Ceph File System, you do not need a meta data server. + ++--------------+----------------+------------------------------------+ +| Process | Criteria | Minimum Recommended | ++==============+================+====================================+ +| ``ceph-osd`` | Processor | 64-bit AMD-64/i386 dual-core | +| +----------------+------------------------------------+ +| | RAM | 500 MB per daemon | +| +----------------+------------------------------------+ +| | Volume Storage | 1-disk or RAID per daemon | +| +----------------+------------------------------------+ +| | Network | 2-1GB Ethernet NICs | ++--------------+----------------+------------------------------------+ +| ``ceph-mon`` | Processor | 64-bit AMD-64/i386 | +| +----------------+------------------------------------+ +| | RAM | 1 GB per daemon | +| +----------------+------------------------------------+ +| | Disk Space | 10 GB per daemon | +| +----------------+------------------------------------+ +| | Network | 2-1GB Ethernet NICs | ++--------------+----------------+------------------------------------+ +| ``ceph-mds`` | Processor | 64-bit AMD-64/i386 quad-core | +| +----------------+------------------------------------+ +| | RAM | 1 GB minimum per daemon | +| +----------------+------------------------------------+ +| | Disk Space | 1 MB per daemon | +| +----------------+------------------------------------+ +| | Network | 2-1GB Ethernet NICs | ++--------------+----------------+------------------------------------+ + diff --git a/doc/install/hardware_recommendations.rst b/doc/install/hardware_recommendations.rst deleted file mode 100644 index f4ff44e8cf25..000000000000 --- a/doc/install/hardware_recommendations.rst +++ /dev/null @@ -1,49 +0,0 @@ -========================== - Hardware Recommendations -========================== -Ceph runs on commodity hardware and a Linux operating system over a TCP/IP -network. The hardware recommendations for different processes/daemons differ -considerably. - -OSD hosts should have ample data storage in the form of a hard drive or a RAID. -Ceph OSDs run the RADOS service, calculate data placement with CRUSH, and -maintain their own copy of the cluster map. Therefore, OSDs should have a -reasonable amount of processing power. - -Ceph monitors require enough disk space for the cluster map, but usually do -not encounter heavy loads. Monitor hosts do not need to be very powerful. - -Ceph metadata servers distribute their load. However, metadata servers must be -capable of serving their data quickly. Metadata servers should have strong -processing capability and plenty of RAM. - -.. note:: If you are not using the Ceph File System, you do not need a meta data server. 
- -+--------------+----------------+------------------------------------+ -| Process | Criteria | Minimum Recommended | -+==============+================+====================================+ -| ``ceph-osd`` | Processor | 64-bit AMD-64/i386 dual-core | -| +----------------+------------------------------------+ -| | RAM | 500 MB per daemon | -| +----------------+------------------------------------+ -| | Volume Storage | 1-disk or RAID per daemon | -| +----------------+------------------------------------+ -| | Network | 2-1GB Ethernet NICs | -+--------------+----------------+------------------------------------+ -| ``ceph-mon`` | Processor | 64-bit AMD-64/i386 | -| +----------------+------------------------------------+ -| | RAM | 1 GB per daemon | -| +----------------+------------------------------------+ -| | Disk Space | 10 GB per daemon | -| +----------------+------------------------------------+ -| | Network | 2-1GB Ethernet NICs | -+--------------+----------------+------------------------------------+ -| ``ceph-mds`` | Processor | 64-bit AMD-64/i386 quad-core | -| +----------------+------------------------------------+ -| | RAM | 1 GB minimum per daemon | -| +----------------+------------------------------------+ -| | Disk Space | 1 MB per daemon | -| +----------------+------------------------------------+ -| | Network | 2-1GB Ethernet NICs | -+--------------+----------------+------------------------------------+ - diff --git a/doc/install/index.rst b/doc/install/index.rst index e0064b84262e..be2680c52dcc 100644 --- a/doc/install/index.rst +++ b/doc/install/index.rst @@ -12,6 +12,6 @@ installing Ceph components: .. toctree:: - Hardware Recommendations - Download Packages - Install Packages + Hardware Recommendations + Download Packages + Install Packages diff --git a/doc/install/installing-packages.rst b/doc/install/installing-packages.rst new file mode 100644 index 000000000000..fd5025138d80 --- /dev/null +++ b/doc/install/installing-packages.rst @@ -0,0 +1,32 @@ +========================== + Installing Ceph Packages +========================== +Once you have downloaded or built Ceph packages, you may install them on your +Admin host and OSD Cluster hosts. + +.. important:: All hosts should be running the same package version. + To ensure that you are running the same version on each host with APT, + you may execute ``sudo apt-get update`` on each host before you install + the packages. + +Installing Packages with APT +---------------------------- +Once you download or build the packages and add your packages to APT +(see `Downloading Debian/Ubuntu Packages <../download-packages>`_), you may +install them as follows:: + + $ sudo apt-get install ceph + +Installing Packages with RPM +---------------------------- +You may install RPM packages as follows:: + + rpm -i rpmbuild/RPMS/x86_64/ceph-*.rpm + +.. note: We do not build RPM packages at this time. You may build them + yourself by downloading the source code. + +Proceed to Configuring a Cluster +-------------------------------- +Once you have prepared your hosts and installed Ceph pages, proceed to +`Configuring a Storage Cluster <../../config-cluster>`_. 
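+
+Before you proceed, you may wish to confirm that every host reports the same
+version. The commands below are one way to check, not a required step::
+
+    $ ceph -v
+    $ dpkg -s ceph | grep Version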
diff --git a/doc/install/installing_packages.rst b/doc/install/installing_packages.rst deleted file mode 100644 index 77f2a5267773..000000000000 --- a/doc/install/installing_packages.rst +++ /dev/null @@ -1,32 +0,0 @@ -========================== - Installing Ceph Packages -========================== -Once you have downloaded or built Ceph packages, you may install them on your -Admin host and OSD Cluster hosts. - -.. important:: All hosts should be running the same package version. - To ensure that you are running the same version on each host with APT, - you may execute ``sudo apt-get update`` on each host before you install - the packages. - -Installing Packages with APT ----------------------------- -Once you download or build the packages and add your packages to APT -(see `Downloading Debian/Ubuntu Packages <../download_packages>`_), you may -install them as follows:: - - $ sudo apt-get install ceph - -Installing Packages with RPM ----------------------------- -You may install RPM packages as follows:: - - rpm -i rpmbuild/RPMS/x86_64/ceph-*.rpm - -.. note: We do not build RPM packages at this time. You may build them - yourself by downloading the source code. - -Proceed to Configuring a Cluster --------------------------------- -Once you have prepared your hosts and installed Ceph pages, proceed to -`Configuring a Storage Cluster <../../config-cluster>`_. diff --git a/doc/source/build-packages.rst b/doc/source/build-packages.rst new file mode 100644 index 000000000000..1895f661710d --- /dev/null +++ b/doc/source/build-packages.rst @@ -0,0 +1,51 @@ +=================== +Build Ceph Packages +=================== + +To build packages, you must clone the `Ceph`_ repository. +You can create installation packages from the latest code using ``dpkg-buildpackage`` for Debian/Ubuntu +or ``rpmbuild`` for the RPM Package Manager. + +.. tip:: When building on a multi-core CPU, use the ``-j`` and the number of cores * 2. + For example, use ``-j4`` for a dual-core processor to accelerate the build. + + +Advanced Package Tool (APT) +--------------------------- + +To create ``.deb`` packages for Debian/Ubuntu, ensure that you have cloned the `Ceph`_ repository, +installed the `build prerequisites`_ and installed ``debhelper``:: + + $ sudo apt-get install debhelper + +Once you have installed debhelper, you can build the packages: + + $ sudo dpkg-buildpackage + +For multi-processor CPUs use the ``-j`` option to accelerate the build. + +RPM Package Manager +------------------- + +To create ``.prm`` packages, ensure that you have cloned the `Ceph`_ repository, +installed the `build prerequisites`_ and installed ``rpm-build`` and ``rpmdevtools``:: + + $ yum install rpm-build rpmdevtools + +Once you have installed the tools, setup an RPM compilation environment:: + + $ rpmdev-setuptree + +Fetch the source tarball for the RPM compilation environment:: + + $ wget -P ~/rpmbuild/SOURCES/ http://ceph.newdream.net/download/ceph-.tar.gz + +Build the RPM packages:: + + $ rpmbuild -tb ~/rpmbuild/SOURCES/ceph-.tar.gz + +For multi-processor CPUs use the ``-j`` option to accelerate the build. + + +.. _build prerequisites: ../build-prerequisites +.. 
_Ceph: ../cloning-the-ceph-source-code-repository diff --git a/doc/source/build-prerequisites.rst b/doc/source/build-prerequisites.rst new file mode 100644 index 000000000000..6333436eabb0 --- /dev/null +++ b/doc/source/build-prerequisites.rst @@ -0,0 +1,99 @@ +=================== +Build Prerequisites +=================== + +Before you can build Ceph source code or Ceph documentation, you need to install several libraries and tools. + +.. tip:: Check this section to see if there are specific prerequisites for your Linux/Unix distribution. + +Prerequisites for Building Ceph Source Code +=========================================== +Ceph provides ``autoconf`` and ``automake`` scripts to get you started quickly. Ceph build scripts +depend on the following: + +- ``autotools-dev`` +- ``autoconf`` +- ``automake`` +- ``cdbs`` +- ``gcc`` +- ``g++`` +- ``git`` +- ``libboost-dev`` +- ``libedit-dev`` +- ``libssl-dev`` +- ``libtool`` +- ``libfcgi`` +- ``libfcgi-dev`` +- ``libfuse-dev`` +- ``linux-kernel-headers`` +- ``libcrypto++-dev`` +- ``libcrypto++`` +- ``libexpat1-dev`` +- ``libgtkmm-2.4-dev`` +- ``pkg-config`` +- ``libcurl4-gnutls-dev`` + +On Ubuntu, execute ``sudo apt-get install`` for each dependency that isn't installed on your host. :: + + $ sudo apt-get install autotools-dev autoconf automake cdbs gcc g++ git libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev libcrypto++ libexpat1-dev libgtkmm-2.4-dev + +On Debian/Squeeze, execute ``aptitude install`` for each dependency that isn't installed on your host. :: + + $ aptitude install autotools-dev autoconf automake cdbs gcc g++ git libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev libcrypto++ libexpat1-dev libgtkmm-2.4-dev + + +Ubuntu Requirements +------------------- + +- ``uuid-dev`` +- ``libkeytutils-dev`` +- ``libgoogle-perftools-dev`` +- ``libatomic-ops-dev`` +- ``libaio-dev`` +- ``libgdata-common`` +- ``libgdata13`` + +Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. :: + + $ sudo apt-get install uuid-dev libkeytutils-dev libgoogle-perftools-dev libatomic-ops-dev libaio-dev libgdata-common libgdata13 + +Debian +------ +Alternatively, you may also install:: + + $ aptitude install fakeroot dpkg-dev + $ aptitude install debhelper cdbs libexpat1-dev libatomic-ops-dev + +openSUSE 11.2 (and later) +------------------------- + +- ``boost-devel`` +- ``gcc-c++`` +- ``libedit-devel`` +- ``libopenssl-devel`` +- ``fuse-devel`` (optional) + +Execute ``zypper install`` for each dependency that isn't installed on your host. :: + + $zypper install boost-devel gcc-c++ libedit-devel libopenssl-devel fuse-devel + +Prerequisites for Building Ceph Documentation +============================================= +Ceph utilizes Python's Sphinx documentation tool. For details on +the Sphinx documentation tool, refer to: `Sphinx `_ +Follow the directions at `Sphinx 1.1.3 `_ +to install Sphinx. To run Sphinx, with ``admin/build-doc``, at least the following are required: + +- ``python-dev`` +- ``python-pip`` +- ``python-virtualenv`` +- ``libxml2-dev`` +- ``libxslt-dev`` +- ``doxygen`` +- ``ditaa`` +- ``graphviz`` + +Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. 
:: + + $ sudo apt-get install python-dev python-pip python-virtualenv libxml2-dev libxslt-dev doxygen ditaa graphviz + diff --git a/doc/source/build_packages.rst b/doc/source/build_packages.rst deleted file mode 100644 index 0c9950a0c6cc..000000000000 --- a/doc/source/build_packages.rst +++ /dev/null @@ -1,51 +0,0 @@ -=================== -Build Ceph Packages -=================== - -To build packages, you must clone the `Ceph`_ repository. -You can create installation packages from the latest code using ``dpkg-buildpackage`` for Debian/Ubuntu -or ``rpmbuild`` for the RPM Package Manager. - -.. tip:: When building on a multi-core CPU, use the ``-j`` and the number of cores * 2. - For example, use ``-j4`` for a dual-core processor to accelerate the build. - - -Advanced Package Tool (APT) ---------------------------- - -To create ``.deb`` packages for Debian/Ubuntu, ensure that you have cloned the `Ceph`_ repository, -installed the `build prerequisites`_ and installed ``debhelper``:: - - $ sudo apt-get install debhelper - -Once you have installed debhelper, you can build the packages: - - $ sudo dpkg-buildpackage - -For multi-processor CPUs use the ``-j`` option to accelerate the build. - -RPM Package Manager -------------------- - -To create ``.prm`` packages, ensure that you have cloned the `Ceph`_ repository, -installed the `build prerequisites`_ and installed ``rpm-build`` and ``rpmdevtools``:: - - $ yum install rpm-build rpmdevtools - -Once you have installed the tools, setup an RPM compilation environment:: - - $ rpmdev-setuptree - -Fetch the source tarball for the RPM compilation environment:: - - $ wget -P ~/rpmbuild/SOURCES/ http://ceph.newdream.net/download/ceph-.tar.gz - -Build the RPM packages:: - - $ rpmbuild -tb ~/rpmbuild/SOURCES/ceph-.tar.gz - -For multi-processor CPUs use the ``-j`` option to accelerate the build. - - -.. _build prerequisites: ../build_prerequisites -.. _Ceph: ../cloning_the_ceph_source_code_repository \ No newline at end of file diff --git a/doc/source/build_prerequisites.rst b/doc/source/build_prerequisites.rst deleted file mode 100644 index 6333436eabb0..000000000000 --- a/doc/source/build_prerequisites.rst +++ /dev/null @@ -1,99 +0,0 @@ -=================== -Build Prerequisites -=================== - -Before you can build Ceph source code or Ceph documentation, you need to install several libraries and tools. - -.. tip:: Check this section to see if there are specific prerequisites for your Linux/Unix distribution. - -Prerequisites for Building Ceph Source Code -=========================================== -Ceph provides ``autoconf`` and ``automake`` scripts to get you started quickly. Ceph build scripts -depend on the following: - -- ``autotools-dev`` -- ``autoconf`` -- ``automake`` -- ``cdbs`` -- ``gcc`` -- ``g++`` -- ``git`` -- ``libboost-dev`` -- ``libedit-dev`` -- ``libssl-dev`` -- ``libtool`` -- ``libfcgi`` -- ``libfcgi-dev`` -- ``libfuse-dev`` -- ``linux-kernel-headers`` -- ``libcrypto++-dev`` -- ``libcrypto++`` -- ``libexpat1-dev`` -- ``libgtkmm-2.4-dev`` -- ``pkg-config`` -- ``libcurl4-gnutls-dev`` - -On Ubuntu, execute ``sudo apt-get install`` for each dependency that isn't installed on your host. :: - - $ sudo apt-get install autotools-dev autoconf automake cdbs gcc g++ git libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev libcrypto++ libexpat1-dev libgtkmm-2.4-dev - -On Debian/Squeeze, execute ``aptitude install`` for each dependency that isn't installed on your host. 
:: - - $ aptitude install autotools-dev autoconf automake cdbs gcc g++ git libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev libcrypto++ libexpat1-dev libgtkmm-2.4-dev - - -Ubuntu Requirements -------------------- - -- ``uuid-dev`` -- ``libkeytutils-dev`` -- ``libgoogle-perftools-dev`` -- ``libatomic-ops-dev`` -- ``libaio-dev`` -- ``libgdata-common`` -- ``libgdata13`` - -Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. :: - - $ sudo apt-get install uuid-dev libkeytutils-dev libgoogle-perftools-dev libatomic-ops-dev libaio-dev libgdata-common libgdata13 - -Debian ------- -Alternatively, you may also install:: - - $ aptitude install fakeroot dpkg-dev - $ aptitude install debhelper cdbs libexpat1-dev libatomic-ops-dev - -openSUSE 11.2 (and later) -------------------------- - -- ``boost-devel`` -- ``gcc-c++`` -- ``libedit-devel`` -- ``libopenssl-devel`` -- ``fuse-devel`` (optional) - -Execute ``zypper install`` for each dependency that isn't installed on your host. :: - - $zypper install boost-devel gcc-c++ libedit-devel libopenssl-devel fuse-devel - -Prerequisites for Building Ceph Documentation -============================================= -Ceph utilizes Python's Sphinx documentation tool. For details on -the Sphinx documentation tool, refer to: `Sphinx `_ -Follow the directions at `Sphinx 1.1.3 `_ -to install Sphinx. To run Sphinx, with ``admin/build-doc``, at least the following are required: - -- ``python-dev`` -- ``python-pip`` -- ``python-virtualenv`` -- ``libxml2-dev`` -- ``libxslt-dev`` -- ``doxygen`` -- ``ditaa`` -- ``graphviz`` - -Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. :: - - $ sudo apt-get install python-dev python-pip python-virtualenv libxml2-dev libxslt-dev doxygen ditaa graphviz - diff --git a/doc/source/building-ceph.rst b/doc/source/building-ceph.rst new file mode 100644 index 000000000000..626c16946615 --- /dev/null +++ b/doc/source/building-ceph.rst @@ -0,0 +1,38 @@ +============= +Building Ceph +============= + +Ceph provides build scripts for source code and for documentation. + +Building Ceph +============= +Ceph provides ``automake`` and ``configure`` scripts to streamline the build process. To build Ceph, navigate to your cloned Ceph repository and execute the following:: + + $ cd ceph + $ ./autogen.sh + $ ./configure + $ make + +You can use ``make -j`` to execute multiple jobs depending upon your system. For example:: + + $ make -j4 + +To install Ceph locally, you may also use:: + + $ make install + +If you install Ceph locally, ``make`` will place the executables in ``usr/local/bin``. +You may add the ``ceph.conf`` file to the ``usr/local/bin`` directory to run an evaluation environment of Ceph from a single directory. + +Building Ceph Documentation +=========================== +Ceph utilizes Python’s Sphinx documentation tool. For details on the Sphinx documentation tool, refer to: `Sphinx `_. To build the Ceph documentaiton, navigate to the Ceph repository and execute the build script:: + + $ cd ceph + $ admin/build-doc + +Once you build the documentation set, you may navigate to the source directory to view it:: + + $ cd build-doc/output + +There should be an ``/html`` directory and a ``/man`` directory containing documentation in HTML and manpage formats respectively. 
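+
+One convenient way to browse the HTML output locally is with Python's built-in
+web server (this assumes Python 2 is available; any static file server will
+do)::
+
+    $ cd build-doc/output/html
+    $ python -m SimpleHTTPServer 8080
+
+Then point a browser at ``http://localhost:8080``.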
diff --git a/doc/source/building_ceph.rst b/doc/source/building_ceph.rst deleted file mode 100644 index 626c16946615..000000000000 --- a/doc/source/building_ceph.rst +++ /dev/null @@ -1,38 +0,0 @@ -============= -Building Ceph -============= - -Ceph provides build scripts for source code and for documentation. - -Building Ceph -============= -Ceph provides ``automake`` and ``configure`` scripts to streamline the build process. To build Ceph, navigate to your cloned Ceph repository and execute the following:: - - $ cd ceph - $ ./autogen.sh - $ ./configure - $ make - -You can use ``make -j`` to execute multiple jobs depending upon your system. For example:: - - $ make -j4 - -To install Ceph locally, you may also use:: - - $ make install - -If you install Ceph locally, ``make`` will place the executables in ``usr/local/bin``. -You may add the ``ceph.conf`` file to the ``usr/local/bin`` directory to run an evaluation environment of Ceph from a single directory. - -Building Ceph Documentation -=========================== -Ceph utilizes Python’s Sphinx documentation tool. For details on the Sphinx documentation tool, refer to: `Sphinx `_. To build the Ceph documentaiton, navigate to the Ceph repository and execute the build script:: - - $ cd ceph - $ admin/build-doc - -Once you build the documentation set, you may navigate to the source directory to view it:: - - $ cd build-doc/output - -There should be an ``/html`` directory and a ``/man`` directory containing documentation in HTML and manpage formats respectively. diff --git a/doc/source/cloning-the-ceph-source-code-repository.rst b/doc/source/cloning-the-ceph-source-code-repository.rst new file mode 100644 index 000000000000..7b8593b402bb --- /dev/null +++ b/doc/source/cloning-the-ceph-source-code-repository.rst @@ -0,0 +1,44 @@ +======================================= +Cloning the Ceph Source Code Repository +======================================= +To check out the Ceph source code, you must have ``git`` installed +on your local host. To install ``git``, execute:: + + $ sudo apt-get install git + +You must also have a ``github`` account. If you do not have a +``github`` account, go to `github.com `_ and register. +Follow the directions for setting up git at `Set Up Git `_. + +Clone the Source +---------------- +To clone the Ceph source code repository, execute:: + + $ git clone git@github.com:ceph/ceph.git + +Once ``git clone`` executes, you should have a full copy of the Ceph repository. + +Clone the Submodules +-------------------- +Before you can build Ceph, you must navigate to your new repository and get the ``init`` submodule and the ``update`` submodule:: + + $ cd ceph + $ git submodule init + $ git submodule update + +.. tip:: Make sure you maintain the latest copies of these submodules. Running ``git status`` will tell you if the submodules are out of date:: + + $ git status + +Choose a Branch +--------------- +Once you clone the source code and submodules, your Ceph repository will be on the ``master`` branch by default, which is the unstable development branch. You may choose other branches too. + +- ``master``: The unstable development branch. +- ``stable``: The bugfix branch. +- ``next``: The release candidate branch. 
+ +:: + + git checkout master + diff --git a/doc/source/cloning_the_ceph_source_code_repository.rst b/doc/source/cloning_the_ceph_source_code_repository.rst deleted file mode 100644 index 7b8593b402bb..000000000000 --- a/doc/source/cloning_the_ceph_source_code_repository.rst +++ /dev/null @@ -1,44 +0,0 @@ -======================================= -Cloning the Ceph Source Code Repository -======================================= -To check out the Ceph source code, you must have ``git`` installed -on your local host. To install ``git``, execute:: - - $ sudo apt-get install git - -You must also have a ``github`` account. If you do not have a -``github`` account, go to `github.com `_ and register. -Follow the directions for setting up git at `Set Up Git `_. - -Clone the Source ----------------- -To clone the Ceph source code repository, execute:: - - $ git clone git@github.com:ceph/ceph.git - -Once ``git clone`` executes, you should have a full copy of the Ceph repository. - -Clone the Submodules --------------------- -Before you can build Ceph, you must navigate to your new repository and get the ``init`` submodule and the ``update`` submodule:: - - $ cd ceph - $ git submodule init - $ git submodule update - -.. tip:: Make sure you maintain the latest copies of these submodules. Running ``git status`` will tell you if the submodules are out of date:: - - $ git status - -Choose a Branch ---------------- -Once you clone the source code and submodules, your Ceph repository will be on the ``master`` branch by default, which is the unstable development branch. You may choose other branches too. - -- ``master``: The unstable development branch. -- ``stable``: The bugfix branch. -- ``next``: The release candidate branch. - -:: - - git checkout master - diff --git a/doc/source/downloading-a-ceph-release.rst b/doc/source/downloading-a-ceph-release.rst new file mode 100644 index 000000000000..d9f8c7facc04 --- /dev/null +++ b/doc/source/downloading-a-ceph-release.rst @@ -0,0 +1,8 @@ +==================================== + Downloading a Ceph Release Tarball +==================================== + +As Ceph development progresses, the Ceph team releases new versions of the +source code. You may download source code tarballs for Ceph releases here: + +`Ceph Release Tarballs `_ diff --git a/doc/source/downloading_a_ceph_release.rst b/doc/source/downloading_a_ceph_release.rst deleted file mode 100644 index d9f8c7facc04..000000000000 --- a/doc/source/downloading_a_ceph_release.rst +++ /dev/null @@ -1,8 +0,0 @@ -==================================== - Downloading a Ceph Release Tarball -==================================== - -As Ceph development progresses, the Ceph team releases new versions of the -source code. You may download source code tarballs for Ceph releases here: - -`Ceph Release Tarballs `_ diff --git a/doc/source/index.rst b/doc/source/index.rst index dd676c2f674b..a649c654431b 100644 --- a/doc/source/index.rst +++ b/doc/source/index.rst @@ -9,9 +9,9 @@ will save you time. .. toctree:: - Prerequisites - Get a Tarball - Clone the Source - Build the Source - Build a Package + Prerequisites + Get a Tarball + Clone the Source + Build the Source + Build a Package Contributing Code diff --git a/doc/start/get-involved-in-the-ceph-community.rst b/doc/start/get-involved-in-the-ceph-community.rst new file mode 100644 index 000000000000..c1267d7928b8 --- /dev/null +++ b/doc/start/get-involved-in-the-ceph-community.rst @@ -0,0 +1,48 @@ +===================================== + Get Involved in the Ceph Community! 
+=====================================
+These are exciting times in the Ceph community! Get involved!
+
++-----------------+-------------------------------------------------+-----------------------------------------------+
+|Channel          | Description                                     | Contact Info                                  |
++=================+=================================================+===============================================+
+| **Blog**        | Check the Ceph Blog_ periodically to keep track | http://ceph.newdream.net/news                 |
+|                 | of Ceph progress and important announcements.   |                                               |
++-----------------+-------------------------------------------------+-----------------------------------------------+
+| **IRC**         | As you delve into Ceph, you may have questions  |                                               |
+|                 | or feedback for the Ceph development team. Ceph | - **Domain:** ``irc.oftc.net``                |
+|                 | developers are often available on the ``#ceph`` | - **Channel:** ``#ceph``                      |
+|                 | IRC channel particularly during daytime hours   |                                               |
+|                 | in the US Pacific Standard Time zone.           |                                               |
++-----------------+-------------------------------------------------+-----------------------------------------------+
+| **Email List**  | Keep in touch with developer activity by        |                                               |
+|                 | subscribing_ to the email list at               | - Subscribe_                                  |
+|                 | ceph-devel@vger.kernel.org. You can opt out of  | - Unsubscribe_                                |
+|                 | the email list at any time by unsubscribing_.   | - Gmane_                                      |
+|                 | A simple email is all it takes! If you would    |                                               |
+|                 | like to view the archives, go to Gmane_.        |                                               |
++-----------------+-------------------------------------------------+-----------------------------------------------+
+| **Bug Tracker** | You can help keep Ceph production worthy by     | http://tracker.newdream.net/projects/ceph     |
+|                 | filing and tracking bugs, and providing feature |                                               |
+|                 | requests using the Bug Tracker_.                |                                               |
++-----------------+-------------------------------------------------+-----------------------------------------------+
+| **Source Code** | If you would like to participate in             |                                               |
+|                 | development, bug fixing, or if you just want    | - http://github.com:ceph/ceph.git             |
+|                 | the very latest code for Ceph, you can get it   | - ``$git clone git@github.com:ceph/ceph.git`` |
+|                 | at http://github.com.                           |                                               |
++-----------------+-------------------------------------------------+-----------------------------------------------+
+| **Support**     | If you have a very specific problem, an         | http://inktank.com                            |
+|                 | immediate need, or if your deployment requires  |                                               |
+|                 | significant help, consider commercial support_. |                                               |
++-----------------+-------------------------------------------------+-----------------------------------------------+
+
+
+
+.. _Subscribe: mailto:majordomo@vger.kernel.org?body=subscribe+ceph-devel
+.. _Unsubscribe: mailto:majordomo@vger.kernel.org?body=unsubscribe+ceph-devel
+.. _subscribing: mailto:majordomo@vger.kernel.org?body=subscribe+ceph-devel
+.. _unsubscribing: mailto:majordomo@vger.kernel.org?body=unsubscribe+ceph-devel
+.. _Gmane: http://news.gmane.org/gmane.comp.file-systems.ceph.devel
+.. _Tracker: http://tracker.newdream.net/projects/ceph
+.. _Blog: http://ceph.newdream.net/news
+.. _support: http://inktank.com
diff --git a/doc/start/get_involved_in_the_ceph_community.rst b/doc/start/get_involved_in_the_ceph_community.rst
deleted file mode 100644
index c1267d7928b8..000000000000
--- a/doc/start/get_involved_in_the_ceph_community.rst
+++ /dev/null
@@ -1,48 +0,0 @@
-=====================================
- Get Involved in the Ceph Community!
-=====================================
-These are exciting times in the Ceph community! Get involved!
-
-+-----------------+-------------------------------------------------+-----------------------------------------------+
-|Channel          | Description                                     | Contact Info                                  |
-+=================+=================================================+===============================================+
-| **Blog**        | Check the Ceph Blog_ periodically to keep track | http://ceph.newdream.net/news                 |
-|                 | of Ceph progress and important announcements.   |                                               |
-+-----------------+-------------------------------------------------+-----------------------------------------------+
-| **IRC**         | As you delve into Ceph, you may have questions  |                                               |
-|                 | or feedback for the Ceph development team. Ceph | - **Domain:** ``irc.oftc.net``                |
-|                 | developers are often available on the ``#ceph`` | - **Channel:** ``#ceph``                      |
-|                 | IRC channel particularly during daytime hours   |                                               |
-|                 | in the US Pacific Standard Time zone.           |                                               |
-+-----------------+-------------------------------------------------+-----------------------------------------------+
-| **Email List**  | Keep in touch with developer activity by        |                                               |
-|                 | subscribing_ to the email list at               | - Subscribe_                                  |
-|                 | ceph-devel@vger.kernel.org. You can opt out of  | - Unsubscribe_                                |
-|                 | the email list at any time by unsubscribing_.   | - Gmane_                                      |
-|                 | A simple email is all it takes! If you would    |                                               |
-|                 | like to view the archives, go to Gmane_.        |                                               |
-+-----------------+-------------------------------------------------+-----------------------------------------------+
-| **Bug Tracker** | You can help keep Ceph production worthy by     | http://tracker.newdream.net/projects/ceph     |
-|                 | filing and tracking bugs, and providing feature |                                               |
-|                 | requests using the Bug Tracker_.                |                                               |
-+-----------------+-------------------------------------------------+-----------------------------------------------+
-| **Source Code** | If you would like to participate in             |                                               |
-|                 | development, bug fixing, or if you just want    | - http://github.com:ceph/ceph.git             |
-|                 | the very latest code for Ceph, you can get it   | - ``$git clone git@github.com:ceph/ceph.git`` |
-|                 | at http://github.com.                           |                                               |
-+-----------------+-------------------------------------------------+-----------------------------------------------+
-| **Support**     | If you have a very specific problem, an         | http://inktank.com                            |
-|                 | immediate need, or if your deployment requires  |                                               |
-|                 | significant help, consider commercial support_. |                                               |
-+-----------------+-------------------------------------------------+-----------------------------------------------+
-
-
-
-.. _Subscribe: mailto:majordomo@vger.kernel.org?body=subscribe+ceph-devel
-.. _Unsubscribe: mailto:majordomo@vger.kernel.org?body=unsubscribe+ceph-devel
-.. _subscribing: mailto:majordomo@vger.kernel.org?body=subscribe+ceph-devel
-.. _unsubscribing: mailto:majordomo@vger.kernel.org?body=unsubscribe+ceph-devel
-.. _Gmane: http://news.gmane.org/gmane.comp.file-systems.ceph.devel
-.. _Tracker: http://tracker.newdream.net/projects/ceph
-.. _Blog: http://ceph.newdream.net/news
-.. _support: http://inktank.com
diff --git a/doc/start/index.rst b/doc/start/index.rst
index 6641c1f05ed0..ee41d1c962d9 100644
--- a/doc/start/index.rst
+++ b/doc/start/index.rst
@@ -6,5 +6,5 @@ get started:
 
 .. toctree::
 
-   Get Involved
-   quick_start
+   Get Involved
+   quick-start
diff --git a/doc/start/quick-start.rst b/doc/start/quick-start.rst
new file mode 100644
index 000000000000..bf7e02103761
--- /dev/null
+++ b/doc/start/quick-start.rst
@@ -0,0 +1,17 @@
+=============
+ Quick Start
+=============
+Ceph is intended for large-scale deployments, but you may install Ceph on a
+single host. Quick start is intended for Debian/Ubuntu Linux distributions.
+
+1. Log in to your host.
+2. Make a directory for Ceph packages. *e.g.,* ``$ mkdir ceph``
+3. `Get Ceph packages <../../install/download-packages>`_ and add them to your
+   APT configuration file.
+4. Update and install Ceph packages.
+   See `Downloading Debian/Ubuntu Packages <../../install/download-packages>`_
+   and `Installing Packages <../../install/installing-packages>`_ for details.
+5. Add a ``ceph.conf`` file.
+   See `Ceph Configuration Files <../../config-cluster/ceph-conf>`_ for details.
+6. Run Ceph.
+   See `Deploying Ceph with mkcephfs <../../config-cluster/deploying-ceph-with-mkcephfs>`_
diff --git a/doc/start/quick_start.rst b/doc/start/quick_start.rst
deleted file mode 100644
index 5890b3cfd2e0..000000000000
--- a/doc/start/quick_start.rst
+++ /dev/null
@@ -1,17 +0,0 @@
-=============
- Quick Start
-=============
-Ceph is intended for large-scale deployments, but you may install Ceph on a
-single host. Quick start is intended for Debian/Ubuntu Linux distributions.
-
-1. Login to your host.
-2. Make a directory for Ceph packages. *e.g.,* ``$ mkdir ceph``
-3. `Get Ceph packages <../../install/download_packages>`_ and add them to your
-   APT configuration file.
-4. Update and Install Ceph packages.
-   See `Downloading Debian/Ubuntu Packages <../../install/download_packages>`_
-   and `Installing Packages <../../install/installing_packages>`_ for details.
-5. Add a ``ceph.conf`` file.
-   See `Ceph Configuration Files <../../config-cluster/ceph_conf>`_ for details.
-6. Run Ceph.
-   See `Deploying Ceph with mkcephfs <../../config_cluster/deploying_ceph_with_mkcephfs>`_
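
To tie the quick-start steps above together, here is a minimal end-to-end sketch for a
single Debian/Ubuntu host. The APT source line, the ``mkcephfs`` flags, and the keyring
path are illustrative assumptions that vary by release; treat the linked package,
configuration, and deployment pages as the authoritative steps. ::

    # Assumed APT source; substitute the repository line from the package download page.
    $ echo "deb http://ceph.com/debian/ $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/ceph.list
    $ sudo apt-get update && sudo apt-get install ceph

    # Place your cluster configuration where the daemons look for it by default.
    $ sudo cp ceph.conf /etc/ceph/ceph.conf

    # Illustrative deployment and startup; flags and paths may differ by release.
    $ sudo mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.keyring
    $ sudo service ceph -a start
    $ ceph health

If ``ceph health`` reports ``HEALTH_OK``, the single-host cluster is running and ready
for evaluation.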