templates_path = ['_templates']
source_suffix = '.rst'
master_doc = 'index'
-exclude_patterns = ['**/.#*', '**/*~']
+exclude_patterns = ['**/.#*', '**/*~', 'start/quick-common.rst']
pygments_style = 'sphinx'
html_theme = 'ceph'
on your admin node. Create a three Ceph Node cluster so you can
explore Ceph functionality.
-.. ditaa::
- /------------------\ /----------------\
- | Admin Node | | node1 |
- | +-------->+ cCCC |
- | ceph–deploy | | mon.node1 |
- \---------+--------/ \----------------/
- |
- | /----------------\
- | | node2 |
- +----------------->+ cCCC |
- | | osd.0 |
- | \----------------/
- |
- | /----------------\
- | | node3 |
- +----------------->| cCCC |
- | osd.1 |
- \----------------/
-
-For best results, create a directory on your admin node node for maintaining the
-configuration that ``ceph-deploy`` generates for your cluster. ::
-
- mkdir my-cluster
- cd my-cluster
-
-.. tip:: The ``ceph-deploy`` utility will output files to the
- current directory. Ensure you are in this directory when executing
- ``ceph-deploy``.
+.. include:: quick-common.rst
As a first exercise, create a Ceph Storage Cluster with one Ceph Monitor and two
Ceph OSD Daemons. Once the cluster reaches an ``active + clean`` state, expand it
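The workflow above can be sketched as a ``ceph-deploy`` command sequence. This is a rough outline, not the full procedure: the hostnames ``node1``-``node3`` follow the diagram's example naming, and the exact OSD subcommand syntax (``osd prepare``/``osd activate`` versus ``osd create``) varies between ``ceph-deploy`` releases, so check the release you have installed. Run the commands from the cluster directory on the admin node::

	ceph-deploy new node1                    # write ceph.conf and a monitor keyring
	ceph-deploy install node1 node2 node3    # install Ceph packages on each node
	ceph-deploy mon create-initial           # deploy the initial monitor and gather keys
	# OSD creation: arguments below are illustrative; adapt the device or
	# directory paths and the subcommand form to your ceph-deploy version.
	ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1
	ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1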
.. _placement group: ../../rados/operations/placement-groups
.. _Monitoring a Cluster: ../../rados/operations/monitoring
.. _Monitoring OSDs and PGs: ../../rados/operations/monitoring-osd-pg
-.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
\ No newline at end of file
+.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
--- /dev/null
+.. ditaa::
+ /------------------\ /----------------\
+ | Admin Node | | node1 |
+ | +-------->+ cCCC |
+ | ceph-deploy |          | mon.node1      |
+ \---------+--------/ \----------------/
+ |
+ | /----------------\
+ | | node2 |
+ +----------------->+ cCCC |
+ | | osd.0 |
+ | \----------------/
+ |
+ | /----------------\
+ | | node3 |
+ +----------------->| cCCC |
+ | osd.1 |
+ \----------------/
+
+For best results, create a directory on your admin node for maintaining the
+configuration that ``ceph-deploy`` generates for your cluster. ::
+
+ mkdir my-cluster
+ cd my-cluster
+
+.. tip:: The ``ceph-deploy`` utility will output files to the
+ current directory. Ensure you are in this directory when executing
+ ``ceph-deploy``.
Before proceeding any further, see `OS Recommendations`_ to verify that you have
a supported distribution and version of Linux.
+In the descriptions below, :term:`Node` refers to a single machine.
+
+.. include:: quick-common.rst
-.. ditaa::
- /------------------\ /----------------\
- | Admin Node | | node1 |
- | +-------->+ |
- | ceph–deploy | | cCCC |
- \---------+--------/ \----------------/
- |
- | /----------------\
- | | node2 |
- +----------------->+ |
- | | cCCC |
- | \----------------/
- |
- | /----------------\
- | | node3 |
- +----------------->| |
- | cCCC |
- \----------------/
Ceph Deploy Setup