--- /dev/null
+[global]
+ auth supported = none
+
+[mon]
+ mon data = /srv/mon.$id
+ keyring = /etc/ceph/keyring.$name
+
+[mds]
+ keyring = /etc/ceph/keyring.$name
+
+[osd]
+ osd data = /srv/osd.$id
+ osd journal = /srv/osd.$id.journal
+ osd journal size = 1000
+	; the following line is required if the underlying filesystem is ext4
+ filestore xattr use omap = true
+ keyring = /etc/ceph/keyring.$name
+
+[mon.a]
+ host = localhost
+ mon addr = 127.0.0.1:6789
+
+[osd.0]
+ host = localhost
+
+[osd.1]
+ host = localhost
+
+[mds.a]
+ host = localhost
.. toctree::
+ 5-minute Quick Start <quick-start>
+ Quick Start RBD <quick-rbd>
+ Quick Start CephFS <quick-cephfs>
Get Involved <get-involved>
- quick-start
+ manual-install
-=============
- Quick Start
-=============
+==========================
+ Installing Ceph Manually
+==========================
Ceph is intended for large-scale deployments, but you may install Ceph on a
-single host. Quick start is intended for Debian/Ubuntu Linux distributions.
+single host. This procedure is intended for Debian/Ubuntu Linux distributions.
--- /dev/null
+=====================
+ Ceph FS Quick Start
+=====================
+
+To mount the Ceph FS filesystem, you must have a running Ceph cluster. You may
+execute this quick start on a separate host if you have the Ceph packages
+installed and an ``/etc/ceph/ceph.conf`` file with the appropriate IP address
+and host name settings for your cluster.
+
+Kernel Driver
+-------------
+
+Mount Ceph FS as a kernel driver. Replace ``{ip-address-of-monitor}`` with
+the IP address of your monitor host. ::
+
+ sudo mkdir /mnt/mycephfs
+ sudo mount -t ceph {ip-address-of-monitor}:6789:/ /mnt/mycephfs
+
+Filesystem in User Space (FUSE)
+-------------------------------
+
+Mount Ceph FS with FUSE. Replace ``{username}`` with your username. ::
+
+ sudo mkdir /home/{username}/cephfs
+ sudo ceph-fuse -m {ip-address-of-monitor}:6789 /home/{username}/cephfs
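To unmount either mount when you are finished, a brief sketch (assuming the mount points above):

```shell
# Kernel driver mount:
sudo umount /mnt/mycephfs

# FUSE mount; replace {username} with your username:
sudo fusermount -u /home/{username}/cephfs
```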
--- /dev/null
+=================
+ RBD Quick Start
+=================
+
+To use RADOS block devices, you must have a running Ceph cluster. You may
+execute this quick start on a separate host if you have the Ceph packages
+installed and an ``/etc/ceph/ceph.conf`` file with the appropriate IP address
+and host name settings for your cluster.
+
+Create a RADOS Block Device image. The size is specified in megabytes. ::
+
+ rbd create foo --size 4096
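You can confirm the image was created with the standard ``rbd`` listing and inspection subcommands:

```shell
rbd ls        # list images in the default 'rbd' pool; 'foo' should appear
rbd info foo  # show the image's size (4096 MB) and object layout
```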
+
+Load the ``rbd`` client module. ::
+
+ sudo modprobe rbd
+
+Map the image to a block device. ::
+
+ sudo rbd map foo --pool rbd --name client.admin
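``rbd showmapped`` lists the images currently mapped and their device nodes, which is a quick way to verify the mapping:

```shell
rbd showmapped    # 'foo' should appear, mapped to a device such as /dev/rbd0
```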
+
+Use the block device. In the following example, create a file system. ::
+
+ sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo
+
+Mount the file system. ::
+
+ sudo mkdir /mnt/myrbd
+ sudo mount /dev/rbd/rbd/foo /mnt/myrbd
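To reverse these steps when you are finished, a teardown sketch (assuming the names used above):

```shell
sudo umount /mnt/myrbd            # unmount the filesystem
sudo rbd unmap /dev/rbd/rbd/foo   # unmap the block device
rbd rm foo                        # remove the image from the cluster
```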
--- /dev/null
+======================
+ 5-minute Quick Start
+======================
+
+Thank you for trying Ceph! Petabyte-scale data clusters are quite an
+undertaking. Before delving deeper into Ceph, we recommend setting up a
+cluster on a single host to explore some of the functionality.
+
+Ceph **5-Minute Quick Start** is intended for use on one machine with a
+recent Debian/Ubuntu operating system. The intent is to help you exercise
+Ceph functionality without the deployment overhead associated with a
+production-ready storage cluster.
+
+Add Ceph Packages
+-----------------
+
+To get the latest Ceph packages, add the release key to APT, add the Ceph
+package repository to ``/etc/apt/sources.list.d/ceph.list``, then update your
+system and install Ceph. ::
+
+ wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc | sudo apt-key add -
+ echo deb http://ceph.com/debian/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
+ sudo apt-get update && sudo apt-get install ceph
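To confirm the installation succeeded, you can check the installed version (``ceph -v`` is provided by the ``ceph`` package):

```shell
ceph -v    # prints the installed Ceph version string
```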
+
+Add a Configuration File
+------------------------
+
+Copy the contents of the following configuration file and save it to
+``/etc/ceph/ceph.conf``. This file will configure Ceph to operate one monitor,
+two OSD daemons, and one metadata server on your local machine.
+
+.. literalinclude:: ceph.conf
+ :language: ini
+
+Deploy the Configuration
+------------------------
+
+To deploy the configuration, first create a data directory for each
+daemon, named to match the entries in your ``ceph.conf`` file. ::
+
+ sudo mkdir /srv/osd.0
+ sudo mkdir /srv/osd.1
+ sudo mkdir /srv/mon.a
+ sudo mkdir /srv/mds.a
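The four directories can also be created in a single loop, which is equivalent to the commands above:

```shell
# One data directory per daemon named in ceph.conf.
for d in osd.0 osd.1 mon.a mds.a; do
    sudo mkdir -p /srv/$d
done
```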
+
+Then deploy the configuration. ::
+
+ cd /etc/ceph
+ sudo mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring
+
+Start the Ceph Cluster
+----------------------
+
+Once you have deployed the configuration, start the Ceph cluster. ::
+
+ sudo service ceph start
+
+Check the health of your Ceph cluster to ensure it is ready. ::
+
+ ceph health
+
+If your cluster echoes back ``HEALTH_OK``, you may begin using your cluster.
\ No newline at end of file
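For more detail than ``ceph health`` provides, you can query the overall cluster status:

```shell
ceph -s    # summary: monitors, OSDs, placement groups, and data usage
ceph -w    # watch ongoing cluster events; press Ctrl-C to exit
```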