bad sectors on drives that are not detectable with light scrubs. See `Data
Scrubbing`_ for details on configuring scrubbing.
-#. **Replication:** Data replication involves a collaboration between Ceph
+#. **Replication:** Data replication involves collaboration between Ceph
Clients and Ceph OSD Daemons. Ceph OSD Daemons use the CRUSH algorithm to
determine the storage location of object replicas. Ceph clients use the
CRUSH algorithm to determine the storage location of an object, then the
object is mapped to a pool and a placement group, and then the client
consults the CRUSH map to identify the placement group's primary OSD.
After identifying the target placement group, the client writes the object
to the identified placement group's primary OSD. The primary OSD then
- consults its own copy of the CRUSH map to identify secondary and tertiary
- OSDS, replicates the object to the placement groups in those secondary and
- tertiary OSDs, confirms that the object was stored successfully in the
- secondary and tertiary OSDs, and reports to the client that the object
- was stored successfully.
+ consults its own copy of the CRUSH map to identify secondary
+ OSDs, replicates the object to the placement groups in those secondary
+ OSDs, confirms that the object was stored successfully in the
+ secondary OSDs, and reports to the client that the object
+ was stored successfully. We call these replication operations ``subops``.
.. ditaa::
| +------+ +------+ |
| | Ack (4) Ack (5)| |
v * * v
- +---------------+ +---------------+
- | Secondary OSD | | Tertiary OSD |
- | | | |
- +---------------+ +---------------+
+ +---------------+ +----------------+
+ | Secondary OSD | | Secondary OSD |
+ | | | |
+ +---------------+ +----------------+
-By performing this act of data replication, Ceph OSD Daemons relieve Ceph
-clients of the burden of replicating data.
+By performing this data replication, Ceph OSD Daemons relieve Ceph
+clients and their network interfaces of the burden of replicating data.
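As an illustration of the placement logic described above, you can ask the
cluster where CRUSH maps a given object. In the following command the pool
name ``mypool`` and the object name ``myobject`` are placeholders; the output
shows the placement group to which the object maps and the set of OSDs
(including the primary) that will store it:

.. prompt:: bash $

   ceph osd map mypool myobject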
Dynamic Cluster Management
--------------------------
Pools provide:
-- **Resilience**: It is possible to set the number of OSDs that are allowed to
- fail without any data being lost. If your cluster uses replicated pools, the
- number of OSDs that can fail without data loss is equal to the number of
- replicas.
+- **Resilience**: It is possible to plan for the number of OSDs that may
+ fail in parallel without data being unavailable or lost. If your cluster
+ uses replicated pools, the number of OSDs that can fail in parallel without
+ data loss is one less than the number of replicas, and the number that can
+ fail without data becoming temporarily unavailable is the number of
+ replicas minus the pool's ``min_size`` (usually one, given the default
+ ``size = 3`` and ``min_size = 2``).
For example: a typical configuration stores an object and two replicas
(copies) of each RADOS object (that is: ``size = 3``), but you can configure
Setting Pool Quotas
===================
-To set pool quotas for the maximum number of bytes and/or the maximum number of
-RADOS objects per pool, run the following command:
+To set quotas for the maximum number of bytes or the maximum number of
+RADOS objects per pool, run a command of the following form:
.. prompt:: bash $
ceph osd pool set-quota data max_objects 10000
-To remove a quota, set its value to ``0``.
+To remove a quota, set its value to ``0``. Note that you may set a quota only
+for bytes or only for RADOS objects, or you can set both.
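For example, to also cap the same pool at roughly 10 GiB (the byte value below
is illustrative), run a command of the following form:

.. prompt:: bash $

   ceph osd pool set-quota data max_bytes 10737418240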
Deleting a Pool