-==========================
- Object Store Quick Start
-==========================
+=============================
+ Storage Cluster Quick Start
+=============================
If you haven't completed your `Preflight Checklist`_, do that first. This
**Quick Start** sets up a two-node demo cluster so you can explore some of the
-object store functionality. This **Quick Start** will help you install a
-minimal Ceph cluster on a server node from your admin node using
-``ceph-deploy``.
+:term:`Ceph Storage Cluster` functionality. This **Quick Start** will help you
+install a minimal Ceph Storage Cluster on a server node from your admin node
+using ``ceph-deploy``.
.. ditaa::
/----------------\ /----------------\
cd my-cluster
.. tip:: The ``ceph-deploy`` utility will output files to the
- current directory.
+ current directory. Ensure you are in this directory when executing
+ ``ceph-deploy``.
Create a Cluster
================
-To create your cluster, declare its initial monitors, generate a filesystem ID
-(``fsid``) and generate monitor keys by entering the following command on a
-commandline prompt::
+To create your Ceph Storage Cluster, declare its initial monitors, and
+generate a filesystem ID (``fsid``) and monitor keys by entering the following
+command at a command line prompt::
ceph-deploy new {node-name}
ceph-deploy new ceph-node
-Check the output with ``ls`` and ``cat`` in the current directory. You should
-see a Ceph configuration file, a keyring, and a log file for the new cluster.
-See `ceph-deploy new -h`_ for additional details.
+Check the output of ``ceph-deploy`` with ``ls`` and ``cat`` in the current
+directory. You should see a Ceph configuration file, a keyring, and a log file
+for the new cluster. See `ceph-deploy new -h`_ for additional details.
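+
+For example, a quick look might be the following (a sketch, assuming the
+default cluster name ``ceph``; exact file names can vary with your
+``ceph-deploy`` version)::
+
+    ls
+    cat ceph.conf
+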
.. topic:: Single Node Quick Start
- Assuming only one node for your cluster, you will need to modify the default
- ``osd crush chooseleaf type`` setting (it defaults to ``1`` for ``node``) to
- ``0`` so that it will peer with OSDs on the local node. Add the following
- line to your Ceph configuration file::
+ Assuming only one node for your Ceph Storage Cluster, you will need to
+ modify the default ``osd crush chooseleaf type`` setting (it defaults to
+ ``1`` for ``node``) to ``0`` for ``device`` so that it will peer with OSDs
+ on the local node. Add the following line to your Ceph configuration file::
osd crush chooseleaf type = 0
+.. tip:: If you deploy without executing the foregoing step on a single-node
+   cluster, your Ceph Storage Cluster will not achieve an ``active + clean``
+   state. To remedy this situation, you must modify your `CRUSH Map`_.
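+
+For illustration, a minimal sketch of where ``osd crush chooseleaf type = 0``
+might sit in the ``ceph.conf`` that ``ceph-deploy new`` generated in the
+current directory (your file will also contain the generated ``fsid`` and
+monitor settings)::
+
+    [global]
+    osd crush chooseleaf type = 0
+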
Install Ceph
============
stable Ceph package to the server node. See `ceph-deploy install -h`_ for
additional details.
+.. tip:: When ``ceph-deploy`` completes installation successfully,
+ it should echo ``OK``.
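+
+If you would like to double-check the installation on the server node, one
+simple sketch (assuming the SSH access set up in the Preflight Checklist) is
+to query the installed version::
+
+    ssh ceph-node ceph -v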
+
Add a Monitor
=============
.. tip:: In production environments, we recommend running Ceph Monitors on
nodes that do not run OSDs.
-
+When you have added a monitor successfully, the ``/var/lib/ceph`` directory
+on your server node should contain ``bootstrap-mds`` and ``bootstrap-osd``
+subdirectories with keyrings in them. If these directories do not contain
+keyrings, execute ``ceph-deploy mon create`` again on the admin node.
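+
+A quick way to check is to list those directories on the server node, for
+example (a sketch; substitute your node name)::
+
+    ssh ceph-node ls /var/lib/ceph/bootstrap-osd /var/lib/ceph/bootstrap-mds
+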
Gather Keys
ceph-deploy gatherkeys ceph-node
-Once you have gathered keys, you should have a keyring named
-``{cluster-name}.client.admin.keyring``,
-``{cluster-name}.bootstrap-osd.keyring`` and
-``{cluster-name}.bootstrap-mds.keyring`` in the local directory. If you don't,
-you may have a problem with your network connection. Ensure that you complete
-this step such that you have the foregoing keyrings before proceeding further.
-
-Add OSDs
-========
-
-For a cluster's object placement groups to reach an ``active + clean`` state,
-you must have at least two OSDs and at least two copies of an object (``osd pool
-default size`` is ``2`` by default).
-
-Adding OSDs is slightly more involved than other ``ceph-deploy`` commands,
-because an OSD involves both a data store and a journal. The ``ceph-deploy``
-tool has the ability to invoke ``ceph-disk-prepare`` to prepare the disk and
-activate the OSD for you.
-
-
-List Disks
-----------
-
-To list the available disk drives on a prospective OSD node, execute the
-following::
+Once you have gathered keys, your local directory should have the following keyrings:
- ceph-deploy disk list {osd-node-name}
- ceph-deploy disk list ceph-node
+- ``{cluster-name}.client.admin.keyring``
+- ``{cluster-name}.bootstrap-osd.keyring``
+- ``{cluster-name}.bootstrap-mds.keyring``
+If you don't have these keyrings, you may not have created a monitor
+successfully, or you may have a problem with your network connection. Ensure
+that you have the foregoing keyrings before proceeding further.
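+
+For example, with the default cluster name ``ceph``, a quick check in the
+current directory might look like this (a sketch)::
+
+    ls *.keyring
+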
-Zap a Disk
-----------
+.. tip:: You may repeat this procedure. If it fails, check to see if the
+   ``/var/lib/ceph/bootstrap-osd`` and ``/var/lib/ceph/bootstrap-mds``
+   directories on the server node have keyrings. If they do not have
+   keyrings, try adding the monitor again; then, return to this step.
-To zap a disk (delete its partition table) in preparation for use with Ceph,
-execute the following::
- ceph-deploy disk zap {osd-node-name}:{disk}
- ceph-deploy disk zap ceph-node:sdb ceph-node:sdb2
+Add Ceph OSD Daemons
+====================
-.. important:: This will delete all data on the disk.
+For a cluster's object placement groups to reach an ``active + clean`` state,
+you must have at least two instances of a :term:`Ceph OSD Daemon` running and
+at least two copies of an object (``osd pool default size`` is ``2``
+by default).
+Adding Ceph OSD Daemons is slightly more involved than other ``ceph-deploy``
+commands, because a Ceph OSD Daemon involves both a data store and a journal.
+The ``ceph-deploy`` tool has the ability to invoke ``ceph-disk-prepare`` to
+prepare the disk and activate the Ceph OSD Daemon for you.
Multiple OSDs on the OS Disk (Demo Only)
----------------------------------------
For demonstration purposes, you may wish to add multiple OSDs to the OS disk
-(not recommended for production systems). To use Ceph OSDs daemons on the OS
+(not recommended for production systems). To use Ceph OSD Daemons on the OS
-disk, you must use ``prepare`` and ``activate`` as separate steps. First, define
-a directory for the Ceph OSD daemon(s). ::
+disk, you must use ``prepare`` and ``activate`` as separate steps. First,
+define a directory for the Ceph OSD daemon(s). ::
mkdir /tmp/osd0
mkdir /tmp/osd1
for Ceph to run properly. Always use more than one OSD per cluster.
+List Disks
+----------
+
+To list the available disk drives on a prospective :term:`Ceph Node`, execute
+the following::
+
+ ceph-deploy disk list {osd-node-name}
+ ceph-deploy disk list ceph-node
+
+
+Zap a Disk
+----------
+
+To zap a disk (delete its partition table) in preparation for use with Ceph,
+execute the following::
+
+ ceph-deploy disk zap {osd-node-name}:{disk}
+ ceph-deploy disk zap ceph-node:sdb ceph-node:sdb2
+
+.. important:: This will delete all data on the disk.
+
+
Add OSDs on Standalone Disks
----------------------------
ceph-deploy osd activate {osd-node-name}:{osd-partition-name}
ceph-deploy osd activate ceph-node:sdb1
-
To prepare an OSD disk and activate it in one step, execute the following::
ceph-deploy osd create {osd-node-name}:{osd-disk-name}[:/path/to/journal] [{osd-node-name}:{osd-disk-name}[:/path/to/journal]]
on the same drive. If you have already formatted your disks and created
partitions, you may also use partition syntax for your OSD disk.
-You must add a minimum of two OSDs for the placement groups in a cluster to
-achieve an ``active + clean`` state.
+You must add a minimum of two Ceph OSD Daemons for the placement groups in
+a cluster to achieve an ``active + clean`` state.
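+
+Once your OSDs are running, you can check for the ``active + clean`` state
+(a sketch; it assumes the ``ceph`` CLI can reach a monitor and read the
+``client.admin`` keyring, for example on the server node)::
+
+    ceph health
+    ceph osd tree
+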
Add a MDS
.. _Ceph Deploy: ../../rados/deployment
.. _ceph-deploy install -h: ../../rados/deployment/ceph-deploy-install
.. _ceph-deploy new -h: ../../rados/deployment/ceph-deploy-new
-.. _Running Ceph with Upstart: ../../rados/operations/operating#running-ceph-with-upstart
\ No newline at end of file
+.. _Running Ceph with Upstart: ../../rados/operations/operating#running-ceph-with-upstart
+.. _CRUSH Map: ../../rados/operations/crush-map
\ No newline at end of file