From f6c51b486d5fc01883e2aa2d2764d9e5c8a01173 Mon Sep 17 00:00:00 2001
From: John Wilkins
Date: Tue, 11 Jun 2013 14:46:12 -0700
Subject: [PATCH] doc: Added some tips and re-organized to simplify the process.

Signed-off-by: John Wilkins
---
 doc/start/quick-ceph-deploy.rst | 135 ++++++++++++++++++--------------
 1 file changed, 77 insertions(+), 58 deletions(-)

diff --git a/doc/start/quick-ceph-deploy.rst b/doc/start/quick-ceph-deploy.rst
index cdbd27f42e60c..dec6b5eee6e20 100644
--- a/doc/start/quick-ceph-deploy.rst
+++ b/doc/start/quick-ceph-deploy.rst
@@ -1,12 +1,12 @@
-==========================
- Object Store Quick Start
-==========================
+=============================
+ Storage Cluster Quick Start
+=============================
 
 If you haven't completed your `Preflight Checklist`_, do that first. This
 **Quick Start** sets up a two-node demo cluster so you can explore some of the
-object store functionality. This **Quick Start** will help you install a
-minimal Ceph cluster on a server node from your admin node using
-``ceph-deploy``.
+:term:`Ceph Storage Cluster` functionality. This **Quick Start** will help you
+install a minimal Ceph Storage Cluster on a server node from your admin node
+using ``ceph-deploy``.
 
 .. ditaa::
    /----------------\         /----------------\
@@ -28,32 +28,36 @@ configuration of your cluster. ::
   cd my-cluster
 
 .. tip:: The ``ceph-deploy`` utility will output files to the
-   current directory.
+   current directory. Ensure you are in this directory when executing
+   ``ceph-deploy``.
 
 Create a Cluster
 ================
 
-To create your cluster, declare its initial monitors, generate a filesystem ID
-(``fsid``) and generate monitor keys by entering the following command on a
-commandline prompt::
+To create your Ceph Storage Cluster, declare its initial monitors, generate a
+filesystem ID (``fsid``) and generate monitor keys by entering the following
+command at a command-line prompt::
 
   ceph-deploy new {node-name}
   ceph-deploy new ceph-node
 
-Check the output with ``ls`` and ``cat`` in the current directory. You should
-see a Ceph configuration file, a keyring, and a log file for the new cluster.
-See `ceph-deploy new -h`_ for additional details.
+Check the output of ``ceph-deploy`` with ``ls`` and ``cat`` in the current
+directory. You should see a Ceph configuration file, a keyring, and a log file
+for the new cluster. See `ceph-deploy new -h`_ for additional details.
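+
+For example, with the default cluster name of ``ceph``, the listing might look
+something like the following (the exact file names are illustrative and can
+vary by ``ceph-deploy`` release)::
+
+  # illustrative listing; file names may differ on your system
+  ls
+  ceph.conf  ceph.log  ceph.mon.keyring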
 
 .. topic:: Single Node Quick Start
 
-   Assuming only one node for your cluster, you will need to modify the default
-   ``osd crush chooseleaf type`` setting (it defaults to ``1`` for ``node``) to
-   ``0`` so that it will peer with OSDs on the local node. Add the following
-   line to your Ceph configuration file::
+   Assuming only one node for your Ceph Storage Cluster, you will need to
+   modify the default ``osd crush chooseleaf type`` setting (it defaults to
+   ``1`` for ``node``) to ``0`` for ``device`` so that it will peer with OSDs
+   on the local node. Add the following line to your Ceph configuration file::
 
       osd crush chooseleaf type = 0
 
+.. tip:: If you deploy without executing the foregoing step on a single-node
+   cluster, your Ceph Storage Cluster will not achieve an ``active + clean``
+   state. To remedy this situation, you must modify your `CRUSH Map`_.
 
 Install Ceph
 ============
 
@@ -68,6 +72,9 @@ Without additional arguments, ``ceph-deploy`` will install the most recent
 stable Ceph package to the server node. See `ceph-deploy install -h`_ for
 additional details.
 
+.. tip:: When ``ceph-deploy`` completes installation successfully,
+   it should echo ``OK``.
+
 Add a Monitor
 =============
 
@@ -82,7 +89,10 @@ following to create a Ceph Monitor::
 .. tip:: In production environments, we recommend running Ceph Monitors on
    nodes that do not run OSDs.
 
-
+When you have added a monitor successfully, directories under ``/var/lib/ceph``
+on your server node should have subdirectories ``bootstrap-mds`` and
+``bootstrap-osd`` that contain keyrings. If these directories do not contain
+keyrings, execute ``ceph-deploy mon create`` again on the admin node.
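+
+For example, on the server node you might check for the keyrings with something
+like the following (the paths and the ``ceph.keyring`` file name shown here
+assume the default cluster name of ``ceph``; the output is illustrative)::
+
+  # illustrative check; file names depend on your cluster name
+  ls /var/lib/ceph/bootstrap-osd
+  ceph.keyring
+  ls /var/lib/ceph/bootstrap-mds
+  ceph.keyring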
 
 Gather Keys
 ===========
@@ -96,55 +106,42 @@ the following to gather keys::
 
   ceph-deploy gatherkeys ceph-node
 
-Once you have gathered keys, you should have a keyring named
-``{cluster-name}.client.admin.keyring``,
-``{cluster-name}.bootstrap-osd.keyring`` and
-``{cluster-name}.bootstrap-mds.keyring`` in the local directory. If you don't,
-you may have a problem with your network connection. Ensure that you complete
-this step such that you have the foregoing keyrings before proceeding further.
-
-Add OSDs
-========
-
-For a cluster's object placement groups to reach an ``active + clean`` state,
-you must have at least two OSDs and at least two copies of an object (``osd pool
-default size`` is ``2`` by default).
-
-Adding OSDs is slightly more involved than other ``ceph-deploy`` commands,
-because an OSD involves both a data store and a journal. The ``ceph-deploy``
-tool has the ability to invoke ``ceph-disk-prepare`` to prepare the disk and
-activate the OSD for you.
-
-
-List Disks
-----------
-
-To list the available disk drives on a prospective OSD node, execute the
-following::
+Once you have gathered keys, your local directory should have the following keyrings:
 
-  ceph-deploy disk list {osd-node-name}
-  ceph-deploy disk list ceph-node
+- ``{cluster-name}.client.admin.keyring``
+- ``{cluster-name}.bootstrap-osd.keyring``
+- ``{cluster-name}.bootstrap-mds.keyring``
 
+If you don't have these keyrings, you may not have created a monitor successfully,
+or you may have a problem with your network connection. Ensure that you complete
+this step such that you have the foregoing keyrings before proceeding further.
 
-Zap a Disk
-----------
+.. tip:: You may repeat this procedure. If it fails, check to see if the
+   ``/var/lib/ceph/bootstrap-osd`` and ``/var/lib/ceph/bootstrap-mds``
+   directories on the server node have keyrings. If they do not have keyrings,
+   try adding the monitor again; then, return to this step.
 
-To zap a disk (delete its partition table) in preparation for use with Ceph,
-execute the following::
 
-  ceph-deploy disk zap {osd-node-name}:{disk}
-  ceph-deploy disk zap ceph-node:sdb ceph-node:sdb2
+Add Ceph OSD Daemons
+====================
 
-.. important:: This will delete all data on the disk.
+For a cluster's object placement groups to reach an ``active + clean`` state,
+you must have at least two instances of a :term:`Ceph OSD Daemon` running and
+at least two copies of an object (``osd pool default size`` is ``2``
+by default).
 
+Adding Ceph OSD Daemons is slightly more involved than other ``ceph-deploy``
+commands, because a Ceph OSD Daemon involves both a data store and a journal.
+The ``ceph-deploy`` tool can invoke ``ceph-disk-prepare`` to prepare the disk
+and activate the Ceph OSD Daemon for you.
 
 Multiple OSDs on the OS Disk (Demo Only)
 ----------------------------------------
 
 For demonstration purposes, you may wish to add multiple OSDs to the OS disk
 (not recommended for production systems). To use Ceph OSDs daemons on the OS
-disk, you must use ``prepare`` and ``activate`` as separate steps. First, define
-a directory for the Ceph OSD daemon(s). ::
+disk, you must use ``prepare`` and ``activate`` as separate steps. First,
+define a directory for the Ceph OSD daemon(s). ::
 
   mkdir /tmp/osd0
  mkdir /tmp/osd1
@@ -165,6 +162,28 @@ Finally, use ``activate`` to activate the Ceph OSD Daemons. ::
    for Ceph to run properly. Always use more than one OSD per cluster.
 
+List Disks
+----------
+
+To list the available disk drives on a prospective :term:`Ceph Node`, execute
+the following::
+
+  ceph-deploy disk list {osd-node-name}
+  ceph-deploy disk list ceph-node
+
+
+Zap a Disk
+----------
+
+To zap a disk (delete its partition table) in preparation for use with Ceph,
+execute the following::
+
+  ceph-deploy disk zap {osd-node-name}:{disk}
+  ceph-deploy disk zap ceph-node:sdb ceph-node:sdb2
+
+.. important:: This will delete all data on the disk.
+
+
 Add OSDs on Standalone Disks
 ----------------------------
@@ -180,7 +199,6 @@ To activate the Ceph OSD Daemon, execute the following::
 
   ceph-deploy osd activate {osd-node-name}:{osd-partition-name}
   ceph-deploy osd activate ceph-node:sdb1
 
-
 To prepare an OSD disk and activate it in one step, execute the following::
 
   ceph-deploy osd create {osd-node-name}:{osd-disk-name}[:/path/to/journal] [{osd-node-name}:{osd-disk-name}[:/path/to/journal]]
@@ -193,8 +211,8 @@ To prepare an OSD disk and activate it in one step, execute the following::
    on the same drive. If you have already formatted your disks and created
    partitions, you may also use partition syntax for your OSD disk.
 
-You must add a minimum of two OSDs for the placement groups in a cluster to
-achieve an ``active + clean`` state.
+You must add a minimum of two Ceph OSD Daemons for the placement groups in
+a cluster to achieve an ``active + clean`` state.
 
 
 Add a MDS
@@ -236,4 +254,5 @@ See `Ceph Deploy`_ for additional details.
 .. _Ceph Deploy: ../../rados/deployment
 .. _ceph-deploy install -h: ../../rados/deployment/ceph-deploy-install
 .. _ceph-deploy new -h: ../../rados/deployment/ceph-deploy-new
-.. _Running Ceph with Upstart: ../../rados/operations/operating#running-ceph-with-upstart
\ No newline at end of file
+.. _Running Ceph with Upstart: ../../rados/operations/operating#running-ceph-with-upstart
+.. _CRUSH Map: ../../rados/operations/crush-map
\ No newline at end of file
-- 
2.39.5