From: Zac Dover
Date: Wed, 5 Jun 2024 16:43:15 +0000 (+1000)
Subject: doc/start: s/intro.rst/index.rst/
X-Git-Tag: v17.2.8~333^2
X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=2b813da9df5cb4ab0b65c7d5a034f05e58a93676;p=ceph.git

doc/start: s/intro.rst/index.rst/

Change the filename "doc/start/intro.rst" to "doc/start/index.rst" so that
Sphinx finds the root filename for the "/start" directory in the default
location.

Signed-off-by: Zac Dover
(cherry picked from commit 84ce2212e87a4b6b2416eeab7e8e1718ae3ce87b)
---

diff --git a/doc/index.rst b/doc/index.rst
index 3d17053c2722d..5ae7ed917d610 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -101,7 +101,7 @@ about Ceph, see our `Architecture`_ section.
    :maxdepth: 3
    :hidden:
 
-   start/intro
+   start/index
    install/index
    cephadm/index
    rados/index
diff --git a/doc/start/index.rst b/doc/start/index.rst
new file mode 100644
index 0000000000000..640fb5d84a839
--- /dev/null
+++ b/doc/start/index.rst
@@ -0,0 +1,103 @@
+===============
+ Intro to Ceph
+===============
+
+Ceph can be used to provide :term:`Ceph Object Storage` to :term:`Cloud
+Platforms` and Ceph can be used to provide :term:`Ceph Block Device` services
+to :term:`Cloud Platforms`. Ceph can be used to deploy a :term:`Ceph File
+System`. All :term:`Ceph Storage Cluster` deployments begin with setting up
+each :term:`Ceph Node` and then setting up the network.
+
+A Ceph Storage Cluster requires the following: at least one Ceph Monitor and at
+least one Ceph Manager, and at least as many :term:`Ceph Object Storage
+Daemon`\s (OSDs) as there are copies of a given object stored in the
+Ceph cluster (for example, if three copies of a given object are stored in the
+Ceph cluster, then at least three OSDs must exist in that Ceph cluster).
+
+The Ceph Metadata Server is necessary to run Ceph File System clients.
+
+.. note::
+
+   It is a best practice to have a Ceph Manager for each Monitor, but it is not
+   necessary.
+
+.. ditaa::
+
+            +---------------+ +------------+ +------------+ +---------------+
+            |      OSDs     | |  Monitors  | |  Managers  | |      MDSs     |
+            +---------------+ +------------+ +------------+ +---------------+
+
+- **Monitors**: A :term:`Ceph Monitor` (``ceph-mon``) maintains maps of the
+  cluster state, including the :ref:`monitor map`, manager
+  map, the OSD map, the MDS map, and the CRUSH map. These maps are critical
+  cluster state required for Ceph daemons to coordinate with each other.
+  Monitors are also responsible for managing authentication between daemons and
+  clients. At least three monitors are normally required for redundancy and
+  high availability.
+
+- **Managers**: A :term:`Ceph Manager` daemon (``ceph-mgr``) is
+  responsible for keeping track of runtime metrics and the current
+  state of the Ceph cluster, including storage utilization, current
+  performance metrics, and system load. The Ceph Manager daemons also
+  host python-based modules to manage and expose Ceph cluster
+  information, including a web-based :ref:`mgr-dashboard` and
+  `REST API`_. At least two managers are normally required for high
+  availability.
+
+- **Ceph OSDs**: An Object Storage Daemon (:term:`Ceph OSD`,
+  ``ceph-osd``) stores data, handles data replication, recovery,
+  rebalancing, and provides some monitoring information to Ceph
+  Monitors and Managers by checking other Ceph OSD Daemons for a
+  heartbeat. At least three Ceph OSDs are normally required for
+  redundancy and high availability.
+
+- **MDSs**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores metadata
+  for the :term:`Ceph File System`. Ceph Metadata Servers allow CephFS users to
+  run basic commands (like ``ls``, ``find``, etc.) without placing a burden on
+  the Ceph Storage Cluster.
+
+Ceph stores data as objects within logical storage pools. Using the
+:term:`CRUSH` algorithm, Ceph calculates which placement group (PG) should
+contain the object, and which OSD should store the placement group. The
+CRUSH algorithm enables the Ceph Storage Cluster to scale, rebalance, and
+recover dynamically.
+
+.. _REST API: ../../mgr/restful
+
+.. container:: columns-2
+
+   .. container:: column
+
+      .. raw:: html
+
+         <h3>Recommendations</h3>
+
+      To begin using Ceph in production, you should review our hardware
+      recommendations and operating system recommendations.
+
+      .. toctree::
+         :maxdepth: 2
+
+         Beginner's Guide <beginners-guide>
+         Hardware Recommendations <hardware-recommendations>
+         OS Recommendations <os-recommendations>
+
+   .. container:: column
+
+      .. raw:: html
+
+         <h3>Get Involved</h3>
+
+      You can avail yourself of help or contribute documentation, source
+      code or bugs by getting involved in the Ceph community.
+
+      .. toctree::
+         :maxdepth: 2
+
+         get-involved
+         documenting-ceph
+
+.. toctree::
+   :maxdepth: 2
+
+   intro
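
The new ``doc/start/index.rst`` states that a cluster needs at least one
Monitor, at least one Manager, and at least as many OSDs as there are copies
of each stored object. A minimal sketch of checking those counts on a running
cluster with the ``ceph`` CLI follows; the pool name ``mypool`` is only an
illustrative placeholder.

.. code-block:: console

   $ ceph status                     # the "services" section lists mon, mgr, and osd counts
   $ ceph osd stat                   # how many OSDs exist, and how many are up and in
   $ ceph osd pool get mypool size   # replica count for the pool; at least this many OSDs must exist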
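
The Manager entry above mentions a web-based dashboard and a REST API; both
ship as ``ceph-mgr`` modules. A rough sketch of enabling them is shown below
(the dashboard additionally needs a certificate and login credentials, which
are not shown here).

.. code-block:: console

   $ ceph mgr module enable dashboard   # web-based dashboard module
   $ ceph mgr module enable restful     # REST API module
   $ ceph mgr services                  # URLs of the endpoints served by enabled modules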
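
To see the CRUSH placement described above, store an object and ask the
cluster which placement group and OSDs it maps to. The pool name, object name,
and input file below are illustrative placeholders.

.. code-block:: console

   $ ceph osd pool create mypool                       # create a logical storage pool
   $ rados put hello-object hello.txt --pool=mypool    # store a local file as an object
   $ ceph osd map mypool hello-object                  # the PG and the acting OSDs chosen by CRUSH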