Background
==========
-Ceph Monitors maintain a "master copy" of the :term:`Cluster Map`, which means a
-:term:`Ceph Client` can determine the location of all Ceph Monitors, Ceph OSD
-Daemons, and Ceph Metadata Servers just by connecting to one Ceph Monitor and
+Ceph Monitors maintain a "master copy" of the :term:`Cluster Map`.
+
+Because Ceph Monitors maintain this master copy of the :term:`Cluster Map`, a
+:term:`Ceph Client` can determine the location of all Ceph Monitors, Ceph OSD
+Daemons, and Ceph Metadata Servers by connecting to one Ceph Monitor and
retrieving a current cluster map. Before Ceph Clients can read from or write to
-Ceph OSD Daemons or Ceph Metadata Servers, they must connect to a Ceph Monitor
-first. With a current copy of the cluster map and the CRUSH algorithm, a Ceph
-Client can compute the location for any object. The ability to compute object
-locations allows a Ceph Client to talk directly to Ceph OSD Daemons, which is a
-very important aspect of Ceph's high scalability and performance. See
-`Scalability and High Availability`_ for additional details.
-
-The primary role of the Ceph Monitor is to maintain a master copy of the cluster
-map. Ceph Monitors also provide authentication and logging services. Ceph
-Monitors write all changes in the monitor services to a single Paxos instance,
-and Paxos writes the changes to a key/value store for strong consistency. Ceph
-Monitors can query the most recent version of the cluster map during sync
-operations. Ceph Monitors leverage the key/value store's snapshots and iterators
-(using leveldb) to perform store-wide synchronization.
+Ceph OSD Daemons or Ceph Metadata Servers, they must connect to a Ceph Monitor.
+When a Ceph Client has a current copy of the cluster map and the CRUSH
+algorithm, it can compute the location of any RADOS object in the
+cluster. This ability to compute object locations makes it possible for
+Ceph Clients to talk directly to Ceph OSD Daemons. This direct communication
+with Ceph OSD Daemons represents an improvement over traditional storage
+architectures, in which clients were required to communicate with a central
+component, and that improvement contributes to Ceph's high scalability and
+performance. See `Scalability and High Availability`_ for additional details.
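+
+The following is a minimal sketch of this flow, using the ``rados`` Python
+bindings (``python3-rados``); the pool name ``mypool`` and the ``ceph.conf``
+path are assumptions made for illustration:
+
+.. code-block:: python
+
+   import rados
+
+   # Connecting contacts a Ceph Monitor listed in ceph.conf and retrieves
+   # a current cluster map.
+   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
+   cluster.connect()
+
+   # With the cluster map and CRUSH, the client computes each object's
+   # placement itself, so reads and writes go directly to the OSDs that
+   # store the object; no central lookup component is consulted.
+   ioctx = cluster.open_ioctx('mypool')  # assumes the pool already exists
+   try:
+       ioctx.write_full('hello-object', b'hello world')
+       print(ioctx.read('hello-object'))
+   finally:
+       ioctx.close()
+       cluster.shutdown()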
+
+The Ceph Monitor's primary function is to maintain a master copy of the cluster
+map. Monitors also provide authentication and logging services. All changes in
+the monitor services are written by the Ceph Monitor to a single Paxos
+instance, and Paxos writes the changes to a key/value store for strong
+consistency. Ceph Monitors are able to query the most recent version of the
+cluster map during sync operations, and they use the key/value store's
+snapshots and iterators (using leveldb) to perform store-wide synchronization.
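+
+As a rough illustration of the snapshot-and-iterate pattern described above
+(a toy sketch, not Ceph's implementation; ``KVStore`` and ``sync_to_peer``
+are invented names), a consistent snapshot lets the whole store be copied to
+a lagging peer without being torn by concurrent writes:
+
+.. code-block:: python
+
+   import copy
+
+   class KVStore:
+       """Toy key/value store standing in for the monitor's backing store."""
+
+       def __init__(self):
+           self._data = {}
+
+       def put(self, key, value):
+           self._data[key] = value
+
+       def snapshot(self):
+           # A real store such as leveldb offers cheap consistent snapshots;
+           # a deep copy models the same isolation property.
+           return copy.deepcopy(self._data)
+
+   def sync_to_peer(store, peer):
+       # Iterate a frozen, consistent view of the entire store, so writes
+       # that arrive during the sync cannot produce a torn copy on the peer.
+       for key, value in store.snapshot().items():
+           peer.put(key, value)
+
+   leader, follower = KVStore(), KVStore()
+   leader.put('osdmap:42', b'...')
+   sync_to_peer(leader, follower)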

.. ditaa diagram: the Ceph Monitor's services write changes to a single
   Paxos instance, and Paxos writes the changes to the key/value store.
-
-.. deprecated:: version 0.58
-
-In Ceph versions 0.58 and earlier, Ceph Monitors use a Paxos instance for
-each service and store the map as a file.
-
.. index:: Ceph Monitor; cluster map
Cluster Maps