lightweight processes. See `Monitoring OSDs`_ and `Heartbeats`_ for
additional details.
#. **Data Scrubbing:** To maintain data consistency, Ceph OSD Daemons scrub
   RADOS objects. Ceph OSD Daemons compare the metadata of their own local
   objects against the metadata of the replicas of those objects, which are
   stored on other OSDs. Scrubbing occurs on a per-Placement-Group basis,
   finds mismatches in object size and in other metadata, and is usually
   performed daily. Ceph OSD Daemons also perform deeper scrubbing, in
   which they compare the data in objects, bit for bit, against their
   checksums. Deep scrubbing, usually performed weekly, finds bad sectors
   on drives that are not detectable with light scrubs. See `Data
   Scrubbing`_ for details on configuring scrubbing.
#. **Replication:** Data replication involves a collaboration between Ceph
   Clients and Ceph OSD Daemons. Ceph OSD Daemons use the CRUSH algorithm
   to determine the storage location of object replicas. A Ceph Client uses
   the CRUSH algorithm to determine the storage location of an object, the
   object is mapped to a pool and to a placement group, and the client then
   consults the CRUSH map to identify the placement group's primary OSD.

   After identifying the target placement group, the client writes the
   object to the identified placement group's primary OSD. The primary OSD
   then consults its own copy of the CRUSH map to identify the secondary
   and tertiary OSDs, replicates the object to the placement groups in
   those secondary and tertiary OSDs, confirms that the object was stored
   successfully in the secondary and tertiary OSDs, and reports to the
   client that the object was stored successfully.
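
The replication and scrubbing behavior described above can be sketched as a
toy model in Python. This is a simplified illustration, not Ceph's
implementation: ``crush_locate`` is a hypothetical stand-in for the CRUSH
algorithm, and in-memory dictionaries stand in for the OSDs' object stores.

```python
import hashlib
import zlib
from collections import Counter


def crush_locate(name, osd_ids, replicas=3):
    """Toy stand-in for CRUSH: a stable hash of the object name picks the
    primary OSD, and the remaining replicas follow in ring order."""
    start = zlib.crc32(name.encode()) % len(osd_ids)
    return [osd_ids[(start + i) % len(osd_ids)] for i in range(replicas)]


def write_object(name, data, osd_ids, stores):
    """The client writes to the primary OSD; the primary then replicates
    the object to the secondary and tertiary OSDs before the write is
    acknowledged."""
    primary, *others = crush_locate(name, osd_ids)
    stores[primary][name] = data
    for osd in others:
        stores[osd][name] = data
    return primary, others


def deep_scrub(name, osds, stores):
    """Compare each replica's content digest and return the OSD ids whose
    copy disagrees with the majority. (A light scrub would compare only
    metadata, such as object size, rather than the full data.)"""
    digests = {o: hashlib.sha256(stores[o][name]).digest() for o in osds}
    majority, _ = Counter(digests.values()).most_common(1)[0]
    return [o for o, d in digests.items() if d != majority]
```

In this sketch, as in the write path described above, the primary is the
single point of coordination: clients never write to the secondary or
tertiary OSDs directly, which keeps the replicas consistent with one
another.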
.. ditaa::