From 32f393933640360c24c0e6ef72455bac94cf9f88 Mon Sep 17 00:00:00 2001
From: Zac Dover
Date: Fri, 28 Apr 2023 08:35:17 +1000
Subject: [PATCH] doc/rados: m-config-ref: edit "background"

Edit the "Background" section of doc/rados/monitor/config-ref.rst

Signed-off-by: Zac Dover
(cherry picked from commit 9223863fc83095def59b416bf70f9a828a701ccc)
---
 doc/rados/configuration/mon-config-ref.rst | 45 +++++++++++-----------
 1 file changed, 22 insertions(+), 23 deletions(-)

diff --git a/doc/rados/configuration/mon-config-ref.rst b/doc/rados/configuration/mon-config-ref.rst
index 00d54e77aa2..c19728ada7c 100644
--- a/doc/rados/configuration/mon-config-ref.rst
+++ b/doc/rados/configuration/mon-config-ref.rst
@@ -16,24 +16,29 @@ consistent, but you can add, remove or replace a monitor in a cluster. See
 Background
 ==========
 
-Ceph Monitors maintain a "master copy" of the :term:`Cluster Map`, which means a
-:term:`Ceph Client` can determine the location of all Ceph Monitors, Ceph OSD
-Daemons, and Ceph Metadata Servers just by connecting to one Ceph Monitor and
+Ceph Monitors maintain a "master copy" of the :term:`Cluster Map`.
+
+The maintenance by Ceph Monitors of a :term:`Cluster Map` makes it possible for
+a :term:`Ceph Client` to determine the location of all Ceph Monitors, Ceph OSD
+Daemons, and Ceph Metadata Servers by connecting to one Ceph Monitor and
 retrieving a current cluster map. Before Ceph Clients can read from or write to
-Ceph OSD Daemons or Ceph Metadata Servers, they must connect to a Ceph Monitor
-first. With a current copy of the cluster map and the CRUSH algorithm, a Ceph
-Client can compute the location for any object. The ability to compute object
-locations allows a Ceph Client to talk directly to Ceph OSD Daemons, which is a
-very important aspect of Ceph's high scalability and performance. See
-`Scalability and High Availability`_ for additional details.
-
-The primary role of the Ceph Monitor is to maintain a master copy of the cluster
-map. Ceph Monitors also provide authentication and logging services. Ceph
-Monitors write all changes in the monitor services to a single Paxos instance,
-and Paxos writes the changes to a key/value store for strong consistency. Ceph
-Monitors can query the most recent version of the cluster map during sync
-operations. Ceph Monitors leverage the key/value store's snapshots and iterators
-(using leveldb) to perform store-wide synchronization.
+Ceph OSD Daemons or Ceph Metadata Servers, they must connect to a Ceph Monitor.
+When a Ceph Client has a current copy of the cluster map and the CRUSH
+algorithm, it can compute the location for any RADOS object within the
+cluster. This ability to compute the locations of objects makes it possible for
+Ceph Clients to talk directly to Ceph OSD Daemons. This direct communication
+with Ceph OSD Daemons represents an improvement upon traditional storage
+architectures in which clients were required to communicate with a central
+component, and that improvement contributes to Ceph's high scalability and
+performance. See `Scalability and High Availability`_ for additional details.
+
+The Ceph Monitor's primary function is to maintain a master copy of the cluster
+map. Monitors also provide authentication and logging services. All changes in
+the monitor services are written by the Ceph Monitor to a single Paxos
+instance, and Paxos writes the changes to a key/value store for strong
+consistency. Ceph Monitors are able to query the most recent version of the
+cluster map during sync operations, and they use the key/value store's
+snapshots and iterators (using leveldb) to perform store-wide synchronization.
 
 .. ditaa::
              /-------------\               /-------------\
@@ -56,12 +61,6 @@ operations. Ceph Monitors leverage the key/value store's snapshots and iterators
              | cCCC        |*---------------------+
              \-------------/
 
-
-.. deprecated:: version 0.58
-
-In Ceph versions 0.58 and earlier, Ceph Monitors use a Paxos instance for
-each service and store the map as a file.
-
 .. index:: Ceph Monitor; cluster map
 
 Cluster Maps
-- 
2.39.5
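
The edited "Background" text hinges on one idea: given the cluster map, any client can compute an object's location itself, with no central lookup. The sketch below illustrates that property only. It is not Ceph's actual CRUSH algorithm; the function name `locate_object`, the SHA-256 hash, and the flat list of OSD ids are all simplifications invented for this illustration (real CRUSH maps objects to placement groups and walks a weighted device hierarchy).

```python
import hashlib

def locate_object(object_name: str, osd_ids: list[int], replicas: int = 3) -> list[int]:
    """Toy stand-in for CRUSH-style placement.

    Hash the object name to pick a starting OSD, then take the next
    OSDs in order as replica targets. Because the computation depends
    only on the object name and the shared cluster map (here, the OSD
    list), every client independently derives the same answer and can
    talk to those OSDs directly.
    """
    digest = int(hashlib.sha256(object_name.encode()).hexdigest(), 16)
    start = digest % len(osd_ids)
    return [osd_ids[(start + i) % len(osd_ids)] for i in range(replicas)]

# Two clients with the same "cluster map" compute identical placements.
osds = [0, 1, 2, 3, 4]
placement = locate_object("rbd_data.abc123", osds)
```

The point of the design choice is visible in the sketch: there is no server round-trip in `locate_object`, so the lookup cost does not grow with cluster size, which is the scalability property the section attributes to Ceph.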