bad sectors on drives that are not detectable with light scrubs. See `Data
Scrubbing`_ for details on configuring scrubbing.
-#. **Replication:** Data replication involves a collaboration between Ceph
+#. **Replication:** Data replication involves collaboration between Ceph
Clients and Ceph OSD Daemons. Ceph OSD Daemons use the CRUSH algorithm to
determine the storage location of object replicas. Ceph clients use the
CRUSH algorithm to determine the storage location of an object, then the
After identifying the target placement group, the client writes the object
to the identified placement group's primary OSD. The primary OSD then
- consults its own copy of the CRUSH map to identify secondary and tertiary
- OSDS, replicates the object to the placement groups in those secondary and
- tertiary OSDs, confirms that the object was stored successfully in the
- secondary and tertiary OSDs, and reports to the client that the object
- was stored successfully.
+ consults its own copy of the CRUSH map to identify secondary OSDs,
+ replicates the object to the placement groups in those secondary OSDs,
+ confirms that the object was stored successfully in the secondary OSDs,
+ and reports to the client that the object was stored successfully. We
+ call these replication operations ``subops``.
.. ditaa::
| +------+ +------+ |
| | Ack (4) Ack (5)| |
v * * v
- +---------------+ +---------------+
- | Secondary OSD | | Tertiary OSD |
- | | | |
- +---------------+ +---------------+
+ +---------------+ +---------------+
+ | Secondary OSD | | Secondary OSD |
+ |               | |               |
+ +---------------+ +---------------+
-By performing this act of data replication, Ceph OSD Daemons relieve Ceph
-clients of the burden of replicating data.
+By performing this data replication, Ceph OSD Daemons relieve Ceph
+clients and their network interfaces of the burden of replicating data.
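+
+Because the OSD Daemons carry out the replication subops, a client issues only
+a single write per object. For example, here is a minimal sketch that stores a
+local file as an object (the pool name ``data``, the object name
+``hello-object``, and the file name ``hello.txt`` are illustrative):
+
+.. prompt:: bash $
+
+   rados -p data put hello-object hello.txt
+
+The command returns only after the primary OSD has reported that the object
+was stored successfully, as described above.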
Dynamic Cluster Management
--------------------------
Pools provide:
-- **Resilience**: It is possible to architect for the number of OSDs that may
+- **Resilience**: It is possible to plan for the number of OSDs that may
fail in parallel without data being unavailable or lost. If your cluster
uses replicated pools, the number of OSDs that can fail in parallel without
data loss is one less than the number of replicas, and the number that can
Setting Pool Quotas
===================
-To set quotas for the maximum number of bytes and/or the maximum number of
+To set quotas for the maximum number of bytes or the maximum number of
RADOS objects per pool, run a command of the following form:
.. prompt:: bash $
ceph osd pool set-quota data max_objects 10000
-To remove a quota, set its value to ``0``.
+To remove a quota, set its value to ``0``. Note that you can set a quota on
+bytes only, on RADOS objects only, or on both.
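+
+For example, assuming the same ``data`` pool as above, the following sketch
+caps the pool at roughly 10 GiB and then removes the object quota that was set
+earlier (the byte value is illustrative):
+
+.. prompt:: bash $
+
+   ceph osd pool set-quota data max_bytes 10737418240
+   ceph osd pool set-quota data max_objects 0
+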
Deleting a Pool