From 5acb1d4d151eeb59cf0b59c79fe149f7358a7dda Mon Sep 17 00:00:00 2001
From: Zac Dover
Date: Thu, 14 Sep 2023 00:09:45 +1000
Subject: [PATCH] doc/architecture: edit "High Avail. Monitors"

Improve the sentence structure in the "High Availability Monitors"
section of doc/architecture.rst.

Co-authored-by: Anthony D'Atri
Signed-off-by: Zac Dover
(cherry picked from commit 57019c346917e6f155d89452d768bd93bdd2e51c)
---
 doc/architecture.rst | 35 ++++++++++++++++++++---------------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/doc/architecture.rst b/doc/architecture.rst
index 2af85cd0575..95d1f745507 100644
--- a/doc/architecture.rst
+++ b/doc/architecture.rst
@@ -179,21 +179,26 @@ information recording the overall health of the Ceph Storage Cluster.
 High Availability Monitors
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Before Ceph Clients can read or write data, they must contact a Ceph Monitor
-to obtain the most recent copy of the cluster map. A Ceph Storage Cluster
-can operate with a single monitor; however, this introduces a single
-point of failure (i.e., if the monitor goes down, Ceph Clients cannot
-read or write data).
-
-For added reliability and fault tolerance, Ceph supports a cluster of monitors.
-In a cluster of monitors, latency and other faults can cause one or more
-monitors to fall behind the current state of the cluster. For this reason, Ceph
-must have agreement among various monitor instances regarding the state of the
-cluster. Ceph always uses a majority of monitors (e.g., 1, 2:3, 3:5, 4:6, etc.)
-and the `Paxos`_ algorithm to establish a consensus among the monitors about the
-current state of the cluster.
-
-For details on configuring monitors, see the `Monitor Config Reference`_.
+A Ceph Client must contact a Ceph Monitor and obtain a current copy of the
+cluster map in order to read data from or to write data to the Ceph cluster.
+
+It is possible for a Ceph cluster to function properly with only a single
+monitor, but a Ceph cluster that has only a single monitor has a single point
+of failure: if the monitor goes down, Ceph clients will be unable to read data
+from or write data to the cluster.
+
+Ceph leverages a cluster of monitors in order to increase reliability and fault
+tolerance. When a cluster of monitors is used, however, one or more of the
+monitors in the cluster can fall behind due to latency or other faults. Ceph
+mitigates these negative effects by requiring multiple monitor instances to
+agree about the state of the cluster. To establish consensus among the monitors
+regarding the state of the cluster, Ceph uses the `Paxos`_ algorithm and a
+majority of monitors (for example, one in a cluster that contains only one
+monitor, two in a cluster that contains three monitors, three in a cluster that
+contains five monitors, four in a cluster that contains six monitors, and so
+on).
+
+See the `Monitor Config Reference`_ for more detail on configuring monitors.
 
 .. index:: architecture; high availability authentication
 
-- 
2.39.5
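
The majority sizes spelled out in the rewritten paragraph follow the usual
quorum rule of floor(n/2) + 1 monitors. A minimal Python sketch of that
arithmetic, for illustration only (the helper name ``monitor_majority`` is
hypothetical, not part of Ceph)::

    def monitor_majority(n: int) -> int:
        """Smallest number of monitors that forms a majority of n: floor(n/2) + 1."""
        return n // 2 + 1

    # Reproduces the examples in the edited section:
    # 1 of 1, 2 of 3, 3 of 5, and 4 of 6 monitors.
    for size in (1, 3, 5, 6):
        print(f"{size} monitors -> majority of {monitor_majority(size)}")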