From: zdover23
Date: Sat, 30 Sep 2023 00:12:15 +0000 (+1000)
Subject: Merge pull request #53726 from zdover23/wip-doc-2023-09-29-architecture-14-of-x
X-Git-Tag: v19.0.0~388
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=8f071794373ad5c5dc88ec769bf202432407a4a2;p=ceph.git

Merge pull request #53726 from zdover23/wip-doc-2023-09-29-architecture-14-of-x

doc/architecture: edit "Replication"

Reviewed-by: Cole Mitchell
---

8f071794373ad5c5dc88ec769bf202432407a4a2
diff --cc doc/architecture.rst
index 0f173573ef2b,566b299389b2..f3ce2987103a
--- a/doc/architecture.rst
+++ b/doc/architecture.rst
@@@ -420,29 -420,30 +420,30 @@@ the greater cluster provides several be
  lightweight processes. See `Monitoring OSDs`_ and `Heartbeats`_ for
  additional details.

 -#. **Data Scrubbing:** As part of maintaining data consistency and cleanliness,
 -   Ceph OSD Daemons can scrub objects. That is, Ceph OSD Daemons can compare
 -   their local objects metadata with its replicas stored on other OSDs. Scrubbing
 -   happens on a per-Placement Group base. Scrubbing (usually performed daily)
 -   catches mismatches in size and other metadata. Ceph OSD Daemons also perform deeper
 -   scrubbing by comparing data in objects bit-for-bit with their checksums.
 -   Deep scrubbing (usually performed weekly) finds bad sectors on a drive that
 -   weren't apparent in a light scrub. See `Data Scrubbing`_ for details on
 -   configuring scrubbing.
 +#. **Data Scrubbing:** To maintain data consistency, Ceph OSD Daemons scrub
 +   RADOS objects. Ceph OSD Daemons compare the metadata of their own local
 +   objects against the metadata of the replicas of those objects, which are
 +   stored on other OSDs. Scrubbing occurs on a per-Placement-Group basis,
 +   catches mismatches in object size and in object metadata, and is usually
 +   performed daily. Ceph OSD Daemons perform deeper scrubbing by comparing
 +   the data in objects, bit-for-bit, against their checksums. Deep scrubbing
 +   finds bad sectors on drives that are not detectable with light scrubs.
 +   See `Data Scrubbing`_ for details on configuring scrubbing.

- #. **Replication:** Like Ceph Clients, Ceph OSD Daemons use the CRUSH
-    algorithm, but the Ceph OSD Daemon uses it to compute where replicas of
-    objects should be stored (and for rebalancing). In a typical write scenario,
-    a client uses the CRUSH algorithm to compute where to store an object, maps
-    the object to a pool and placement group, then looks at the CRUSH map to
-    identify the primary OSD for the placement group.
-
-    The client writes the object to the identified placement group in the
-    primary OSD. Then, the primary OSD with its own copy of the CRUSH map
-    identifies the secondary and tertiary OSDs for replication purposes, and
-    replicates the object to the appropriate placement groups in the secondary
-    and tertiary OSDs (as many OSDs as additional replicas), and responds to the
-    client once it has confirmed the object was stored successfully.
+ #. **Replication:** Data replication involves a collaboration between Ceph
+    Clients and Ceph OSD Daemons. Ceph OSD Daemons use the CRUSH algorithm to
+    determine the storage location of object replicas. Ceph Clients use the
+    CRUSH algorithm to determine the storage location of an object: the
+    object is mapped to a pool and to a placement group, and the client then
+    consults the CRUSH map to identify the placement group's primary OSD.
+
+    After identifying the target placement group, the client writes the
+    object to the identified placement group's primary OSD. The primary OSD
+    then consults its own copy of the CRUSH map to identify the secondary
+    and tertiary OSDs, replicates the object to the placement groups in
+    those secondary and tertiary OSDs, confirms that the object was stored
+    successfully on the secondary and tertiary OSDs, and then reports the
+    successful write to the client.

  .. ditaa::
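The scrub behavior described in the "Data Scrubbing" item above lends itself
to a small illustration. The following Python sketch is not Ceph code: the
in-memory replica stores, the object names, and the choice of SHA-256 are
assumptions made for the example only. It shows why a light scrub (metadata
such as size) misses corruption that a deep scrub (bit-for-bit checksums)
catches::

   import hashlib

   # Three replicas of one placement group's objects: name -> data.
   # (Assumed toy data, not Ceph's on-disk format.)
   replicas = [
       {"obj-a": b"hello", "obj-b": b"world"},
       {"obj-a": b"hello", "obj-b": b"world"},
       {"obj-a": b"hello", "obj-b": b"w0rld"},  # silent bit flip: same size
   ]

   def light_scrub():
       """Compare object sizes across replicas (a stand-in for metadata)."""
       for name in replicas[0]:
           sizes = {len(replica[name]) for replica in replicas}
           if len(sizes) > 1:
               print(f"light scrub: size mismatch on {name!r}")

   def deep_scrub():
       """Compare object data bit-for-bit, via checksums, across replicas."""
       for name in replicas[0]:
           sums = {hashlib.sha256(replica[name]).hexdigest()
                   for replica in replicas}
           if len(sums) > 1:
               print(f"deep scrub: checksum mismatch on {name!r}")

   light_scrub()  # prints nothing: the corrupted replica has the right size
   deep_scrub()   # flags 'obj-b': bad bits that a light scrub cannot see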
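The write path in the revised "Replication" item can be sketched the same
way. This toy model is illustrative only: the hash-modulo mapping of object
names to placement groups resembles Ceph's scheme in spirit, but the
pg_to_osds helper is a seeded pseudo-random stand-in for CRUSH rather than
the CRUSH algorithm, and the OSD count, pg_num, and replica count are
assumptions of the sketch::

   import hashlib
   import random

   NUM_OSDS = 9   # assumed cluster size
   PG_NUM = 32    # assumed pool pg_num
   POOL_SIZE = 3  # one primary plus secondary and tertiary replicas

   # Each OSD's local object store: OSD id -> {object name: data}.
   osd_stores = {osd: {} for osd in range(NUM_OSDS)}

   def object_to_pg(object_name):
       """Client side: hash the object name into a placement group."""
       digest = hashlib.md5(object_name.encode()).digest()
       return int.from_bytes(digest[:4], "little") % PG_NUM

   def pg_to_osds(pg):
       """Stand-in for CRUSH: a seeded pseudo-random acting set for a PG.
       Seeding by PG id means every client computes the same answer from
       the same 'map'; the first OSD in the list is the primary."""
       return random.Random(pg).sample(range(NUM_OSDS), POOL_SIZE)

   def client_write(object_name, data):
       pg = object_to_pg(object_name)
       primary, *secondaries = pg_to_osds(pg)

       # The client writes only to the placement group's primary OSD.
       osd_stores[primary][object_name] = data

       # The primary replicates to the secondary and tertiary OSDs and
       # acknowledges the client only after those writes are confirmed.
       for osd in secondaries:
           osd_stores[osd][object_name] = data

       print(f"{object_name!r}: pg {pg}, primary osd.{primary}, "
             f"replicas {['osd.%d' % o for o in secondaries]} -> ack")

   client_write("hello", b"payload")

The property the seeded stand-in imitates is the important one: any client
can compute the acting set from the map alone, so no central lookup service
sits on the data path. CRUSH provides the same property with much better
balancing and failure-domain awareness.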