From 4a6e9b0de6b899c09fcb40aa73ed3edddfdecba9 Mon Sep 17 00:00:00 2001
From: Anthony D'Atri
Date: Tue, 18 Feb 2025 16:31:47 -0500
Subject: [PATCH] doc/start: Mention RGW in Intro to Ceph

Signed-off-by: Anthony D'Atri
---
 doc/glossary.rst    |  2 +-
 doc/start/index.rst | 12 ++++++++----
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/doc/glossary.rst b/doc/glossary.rst
index 5ecee57d21de2..e6371385d2f68 100644
--- a/doc/glossary.rst
+++ b/doc/glossary.rst
@@ -224,7 +224,7 @@
    Architecture document` for details.
 
  Crimson
-    A next-generation OSD architecture whose main aim is the
+    A next-generation OSD architecture whose aim is the
     reduction of latency costs incurred due to cross-core
     communications. A re-design of the OSD reduces lock
     contention by reducing communication between shards in the data
diff --git a/doc/start/index.rst b/doc/start/index.rst
index 439e9b2455541..e2cfe6f9f91f8 100644
--- a/doc/start/index.rst
+++ b/doc/start/index.rst
@@ -23,9 +23,9 @@ The Ceph Metadata Server is necessary to run Ceph File System clients.
 
 .. ditaa::
 
-            +---------------+ +------------+ +------------+ +---------------+
-            |      OSDs     | |  Monitors  | |  Managers  | |      MDSs     |
-            +---------------+ +------------+ +------------+ +---------------+
+            +------+ +----------+ +----------+ +-------+ +------+
+            | OSDs | | Monitors | | Managers | | MDSes | | RGWs |
+            +------+ +----------+ +----------+ +-------+ +------+
 
 - **Monitors**: A :term:`Ceph Monitor` (``ceph-mon``) maintains maps of the
   cluster state, including the :ref:`monitor map`, manager
@@ -51,11 +51,15 @@ The Ceph Metadata Server is necessary to run Ceph File System clients.
   heartbeat. At least three Ceph OSDs are normally required for redundancy and
   high availability.
 
-- **MDSs**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores metadata
+- **MDSes**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores metadata
   for the :term:`Ceph File System`. Ceph Metadata Servers allow CephFS users to
   run basic commands (like ``ls``, ``find``, etc.) without placing a burden on
   the Ceph Storage Cluster.
 
+- **RGWs**: A :term:`Ceph Object Gateway` (RGW, ``ceph-radosgw``) daemon provides
+  a RESTful gateway between applications and Ceph storage clusters. The
+  S3-compatible API is most commonly used, though Swift is also available.
+
 Ceph stores data as objects within logical storage pools. Using the
 :term:`CRUSH` algorithm, Ceph calculates which placement group (PG) should
 contain the object, and which OSD should store the placement group. The
-- 
2.39.5