- :term:`Ceph Manager`
- :term:`Ceph Metadata Server`
-.. ditaa::
-
- +---------------+ +---------------+ +---------------+ +---------------+
- |      OSDs     | |    Monitors   | |    Managers   | |      MDS      |
- +---------------+ +---------------+ +---------------+ +---------------+
-
-A Ceph Monitor maintains a master copy of the cluster map. A cluster of Ceph
-monitors ensures high availability should a monitor daemon fail. Storage cluster
-clients retrieve a copy of the cluster map from the Ceph Monitor.
+Ceph Monitors maintain the master copy of the cluster map and provide copies
+of it to storage cluster clients. Provisioning multiple monitors within the
+Ceph cluster ensures availability in the event that one of the monitor daemons
+or its host fails.
A Ceph OSD Daemon checks its own state and the state of other OSDs and reports
back to monitors.
-A Ceph Manager acts as an endpoint for monitoring, orchestration, and plug-in
+A Ceph Manager serves as an endpoint for monitoring, orchestration, and plug-in
modules.
A Ceph Metadata Server (MDS) manages file metadata when CephFS is used to
provide file services.
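+The following is a minimal sketch, from the client's side, of how these
+daemons fit together: connecting with the ``librados`` Python bindings
+retrieves the cluster map from a monitor, and the monitors can then be asked
+for a status summary that covers the monitors, managers, OSDs, and MDS
+daemons. The configuration file path and client name are assumptions for this
+example.
+
+.. code-block:: python
+
+    import json
+
+    import rados
+
+    # Connecting retrieves the cluster map from a Ceph Monitor.
+    # Adjust the conffile path and client name for your environment.
+    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
+    cluster.connect()
+
+    # Ask the monitors for a cluster status summary.
+    cmd = json.dumps({'prefix': 'status', 'format': 'json'})
+    ret, outbuf, errs = cluster.mon_command(cmd, b'')
+    print(json.loads(outbuf))
+
+    cluster.shutdown()
+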
-Storage cluster clients and each :term:`Ceph OSD Daemon` use the CRUSH algorithm
-to efficiently compute information about data location, instead of having to
-depend on a central lookup table. Ceph's high-level features include a
-native interface to the Ceph Storage Cluster via ``librados``, and a number of
-service interfaces built on top of ``librados``.
-
-
+Storage cluster clients and :term:`Ceph OSD Daemon`\s use the CRUSH algorithm
+to compute information about data location. This means that clients and OSDs
+are not bottlenecked by a central lookup table. Ceph's high-level features
+include a native interface to the Ceph Storage Cluster via ``librados``, and a
+number of service interfaces built on top of ``librados``.
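+A minimal sketch of the native interface, using the ``librados`` Python
+bindings, is shown below. The configuration file path, the pool name
+``mypool``, and the object name are assumptions for this example, and the
+pool must already exist.
+
+.. code-block:: python
+
+    import rados
+
+    # Connect to the cluster and open an I/O context on an existing pool.
+    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
+    cluster.connect()
+    ioctx = cluster.open_ioctx('mypool')
+
+    # Store and then read back a RADOS object through the native interface.
+    ioctx.write_full('hello_object', b'hello world')
+    print(ioctx.read('hello_object'))
+
+    ioctx.close()
+    cluster.shutdown()
+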
Storing Data
------------
The Ceph Storage Cluster receives data from :term:`Ceph Client`\s--whether it
comes through a :term:`Ceph Block Device`, :term:`Ceph Object Storage`, the
-:term:`Ceph File System` or a custom implementation you create using
-``librados``-- which is stored as RADOS objects. Each object is stored on an
-:term:`Object Storage Device`. Ceph OSD Daemons handle read, write, and
-replication operations on storage drives. With the older Filestore back end,
-each RADOS object was stored as a separate file on a conventional filesystem
-(usually XFS). With the new and default BlueStore back end, objects are
-stored in a monolithic database-like fashion.
+:term:`Ceph File System`, or a custom implementation that you create by using
+``librados``. The data received by the Ceph Storage Cluster is stored as RADOS
+objects. Each object is stored on an :term:`Object Storage Device` (also
+called an "OSD"). Ceph OSD Daemons handle read, write, and replication
+operations on storage drives. The default BlueStore back end stores objects
+in a monolithic, database-like fashion.
.. ditaa::

-           /-----\       +-----+       +-----+
-           | obj |------>| {d} |------>| {s} |
-           \-----/       +-----+       +-----+
+           /------\       +-----+       +-----+
+           | obj  |------>| {d} |------>| {s} |
+           \------/       +-----+       +-----+

            Object          OSD          Drive

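+Continuing the sketch above, once data has entered the cluster it exists as
+RADOS objects that can be listed and examined like any other objects in the
+pool. The pool name is again an assumption for this example.
+
+.. code-block:: python
+
+    import rados
+
+    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
+    cluster.connect()
+    ioctx = cluster.open_ioctx('mypool')
+
+    # Iterate over the pool and show each object's size and modification time.
+    for obj in ioctx.list_objects():
+        size, mtime = ioctx.stat(obj.key)
+        print(obj.key, size, mtime)
+
+    ioctx.close()
+    cluster.shutdown()
+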
-Ceph OSD Daemons store data as objects in a flat namespace (e.g., no
-hierarchy of directories). An object has an identifier, binary data, and
-metadata consisting of a set of name/value pairs. The semantics are completely
-up to :term:`Ceph Client`\s. For example, CephFS uses metadata to store file
-attributes such as the file owner, created date, last modified date, and so
-forth.
+Ceph OSD Daemons store data as objects in a flat namespace. This means that
+objects are not stored in a hierarchy of directories. An object has an
+identifier, binary data, and metadata consisting of name/value pairs.
+:term:`Ceph Client`\s determine the semantics of the object data. For example,
+CephFS uses metadata to store file attributes such as the file owner, the
+created date, and the last modified date.
.. ditaa::

           /------+------------------------------+----------------\
           | ID   | Binary Data                  | Metadata       |
           +------+------------------------------+----------------+
           | 1234 | 0101010101010100110101010010 | name1 = value1 |
           | 2345 | 0101100001010100110101010010 | name2 = value2 |
           | 3456 | 0101100001010100110110010010 | nameN = valueN |
           \------+------------------------------+----------------/

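+One way for a client to attach name/value metadata of this kind is through
+object xattrs. The following sketch uses the ``librados`` Python bindings; the
+pool, object, and attribute names are assumptions for this example.
+
+.. code-block:: python
+
+    import rados
+
+    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
+    cluster.connect()
+    ioctx = cluster.open_ioctx('mypool')
+
+    # An object consists of binary data plus name/value metadata; the client
+    # decides what the metadata means.
+    ioctx.write_full('hello_object', b'binary payload')
+    ioctx.set_xattr('hello_object', 'owner', b'alice')
+    ioctx.set_xattr('hello_object', 'state', b'draft')
+
+    print(ioctx.get_xattr('hello_object', 'owner'))
+
+    ioctx.close()
+    cluster.shutdown()
+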
Scalability and High Availability
---------------------------------
-In traditional architectures, clients talk to a centralized component (e.g., a
-gateway, broker, API, facade, etc.), which acts as a single point of entry to a
-complex subsystem. This imposes a limit to both performance and scalability,
-while introducing a single point of failure (i.e., if the centralized component
-goes down, the whole system goes down, too).
+In traditional architectures, clients talk to a centralized component. This
+centralized component might be a gateway, a broker, an API, or a facade. A
+centralized component of this kind acts as a single point of entry to a complex
+subsystem. Architectures that rely upon this kind of centralized component
+experience limits to performance and scalability and have a single point of
+failure. If the centralized component goes down, the whole system becomes
+unavailable.
-Ceph eliminates the centralized gateway to enable clients to interact with
-Ceph OSD Daemons directly. Ceph OSD Daemons create object replicas on other
-Ceph Nodes to ensure data safety and high availability. Ceph also uses a cluster
-of monitors to ensure high availability. To eliminate centralization, Ceph
-uses an algorithm called CRUSH.
+Ceph eliminates this centralized component. This enables clients to interact
+with Ceph OSDs directly. Ceph OSDs create object replicas on other Ceph Nodes
+to ensure data safety and high availability. Ceph also uses a cluster of
+monitors to ensure high availability. To eliminate centralization, Ceph uses an
+algorithm called :abbr:`CRUSH (Controlled Replication Under Scalable Hashing)`.
.. index:: CRUSH; architecture

CRUSH Introduction
~~~~~~~~~~~~~~~~~~
Ceph Clients and Ceph OSD Daemons both use the :abbr:`CRUSH (Controlled
-Replication Under Scalable Hashing)` algorithm to efficiently compute
-information about object location, instead of having to depend on a
-central lookup table. CRUSH provides a better data management mechanism compared
-to older approaches, and enables massive scale by cleanly distributing the work
-to all the clients and OSD daemons in the cluster. CRUSH uses intelligent data
-replication to ensure resiliency, which is better suited to hyper-scale storage.
-The following sections provide additional details on how CRUSH works. For a
-detailed discussion of CRUSH, see `CRUSH - Controlled, Scalable, Decentralized
-Placement of Replicated Data`_.
+Replication Under Scalable Hashing)` algorithm to compute information about
+object location instead of relying upon a central lookup table. CRUSH provides
+a better data management mechanism than older approaches do, and it enables
+massive scale by distributing the work to all the OSD daemons in the cluster
+and to all the clients that communicate with them. CRUSH uses intelligent data
+replication to ensure resiliency, which makes it better suited to hyper-scale
+storage. The following sections provide additional details on how CRUSH works.
+For a detailed discussion of CRUSH, see `CRUSH - Controlled, Scalable,
+Decentralized Placement of Replicated Data`_.
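+The idea that placement is computed rather than looked up can be illustrated
+with a deliberately simplified sketch. This is not the CRUSH algorithm; it
+only shows that any client or OSD that knows the object name, the pool's PG
+count, and the shared cluster map arrives at the same placement without
+consulting a central table. The PG count, OSD list, and replica count are
+assumptions for this example.
+
+.. code-block:: python
+
+    import hashlib
+
+    PG_NUM = 128                                          # assumed PG count
+    OSDS = ['osd.0', 'osd.1', 'osd.2', 'osd.3', 'osd.4']  # toy "cluster map"
+    REPLICAS = 3
+
+    def object_to_pg(object_name: str) -> int:
+        """Hash the object name into a placement group ID."""
+        digest = hashlib.sha256(object_name.encode()).digest()
+        return int.from_bytes(digest[:4], 'little') % PG_NUM
+
+    def pg_to_osds(pg_id: int) -> list:
+        """Deterministically pick REPLICAS distinct OSDs for a PG.
+
+        This stands in for CRUSH only to show that the mapping is a pure
+        function of the PG ID and the cluster map.
+        """
+        start = pg_id % len(OSDS)
+        return [OSDS[(start + i) % len(OSDS)] for i in range(REPLICAS)]
+
+    pg = object_to_pg('hello_object')
+    print(pg, pg_to_osds(pg))   # the same answer on every client and OSD
+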
.. index:: architecture; cluster map