From: Anthony D'Atri
Date: Fri, 7 Feb 2025 15:08:38 +0000 (-0500)
Subject: doc: Clarify that there are no tertiary OSDs
X-Git-Tag: v20.0.0~217^2
X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=e261155a8a0234897803d38e899d10ec89b2b08f;p=ceph.git

doc: Clarify that there are no tertiary OSDs

Signed-off-by: Anthony D'Atri
---

diff --git a/doc/architecture.rst b/doc/architecture.rst
index 984c520a7560e..782b55e974ae0 100644
--- a/doc/architecture.rst
+++ b/doc/architecture.rst
@@ -436,7 +436,7 @@ the greater cluster provides several benefits:
    bad sectors on drives that are not detectable with light scrubs. See
    `Data Scrubbing`_ for details on configuring scrubbing.
 
-#. **Replication:** Data replication involves a collaboration between Ceph
+#. **Replication:** Data replication involves collaboration between Ceph
    Clients and Ceph OSD Daemons. Ceph OSD Daemons use the CRUSH algorithm to
    determine the storage location of object replicas. Ceph clients use the
    CRUSH algorithm to determine the storage location of an object, then the
@@ -445,11 +445,11 @@ the greater cluster provides several benefits:
 
    After identifying the target placement group, the client writes the object
    to the identified placement group's primary OSD. The primary OSD then
-   consults its own copy of the CRUSH map to identify secondary and tertiary
-   OSDS, replicates the object to the placement groups in those secondary and
-   tertiary OSDs, confirms that the object was stored successfully in the
-   secondary and tertiary OSDs, and reports to the client that the object
-   was stored successfully.
+   consults its own copy of the CRUSH map to identify secondary
+   OSDS, replicates the object to the placement groups in those secondary
+   OSDs, confirms that the object was stored successfully in the
+   secondary OSDs, and reports to the client that the object
+   was stored successfully. We call these replication operations ``subops``.
 
 .. ditaa::
 
@@ -471,13 +471,13 @@ the greater cluster provides several benefits:
        |  +------+       +------+  |
        |  | Ack (4)       Ack (5)| |
        v  *                      * v
- +---------------+   +---------------+
- | Secondary OSD |   |  Tertiary OSD |
- |               |   |               |
- +---------------+   +---------------+
+ +---------------+   +----------------+
+ | Secondary OSD |   |  Secondary OSD |
+ |               |   |                |
+ +---------------+   +----------------+
 
-By performing this act of data replication, Ceph OSD Daemons relieve Ceph
-clients of the burden of replicating data.
+By performing this data replication, Ceph OSD Daemons relieve Ceph
+clients and their network interfaces of the burden of replicating data.
 
 Dynamic Cluster Management
 --------------------------
diff --git a/doc/rados/operations/pools.rst b/doc/rados/operations/pools.rst
index 988aa72c85f1b..7ee51c1eff623 100644
--- a/doc/rados/operations/pools.rst
+++ b/doc/rados/operations/pools.rst
@@ -7,7 +7,7 @@ Pools are logical partitions that are used to store RADOS objects.
 
 Pools provide:
 
-- **Resilience**: It is possible to architect for the number of OSDs that may
+- **Resilience**: It is possible to plan for the number of OSDs that may
   fail in parallel without data being unavailable or lost. If
   your cluster uses replicated pools, the number of OSDs that can fail in
   parallel without data loss is one less than the number of replicas, and the
   number that can
@@ -245,7 +245,7 @@ pool by running the following command:
 Setting Pool Quotas
 ===================
 
-To set quotas for the maximum number of bytes and/or the maximum number of
+To set quotas for the maximum number of bytes or the maximum number of
 RADOS objects per pool, run a command of the following form:
 
 .. prompt:: bash $
 
@@ -258,7 +258,8 @@ For example:
 
   ceph osd pool set-quota data max_objects 10000
 
-To remove a quota, set its value to ``0``.
+To remove a quota, set its value to ``0``. Note that you may set a quota only
+for bytes or only for RADOS objects, or you can set both.
 
 Deleting a Pool
 ===============
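
The following is an illustrative sketch rather than part of the patch above:
``ceph osd map`` prints the placement group and acting set for an object,
where the first OSD listed is the primary and the remaining OSDs are the
replica ("secondary") OSDs that the revised ``architecture.rst`` text
describes, and ``ceph osd pool set-quota`` sets or clears the per-pool quotas
covered in ``pools.rst``. The pool name ``data`` matches the example already
used in the patched text; the object name ``foo`` and the byte value are
assumed here.

.. prompt:: bash $

   # Show the PG and acting set for object "foo" in pool "data";
   # the primary OSD is listed first, followed by the replica OSDs.
   ceph osd map data foo

   # Either or both quotas may be set on a pool.
   ceph osd pool set-quota data max_objects 10000
   ceph osd pool set-quota data max_bytes 107374182400

   # Remove a quota by setting its value back to 0.
   ceph osd pool set-quota data max_objects 0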