``osd.1``, ``osd.2`` and ``osd.3`` respectively. CRUSH actually seeks a
pseudo-random placement that will take into account failure domains you set in
your `CRUSH map`_, so you will rarely see placement groups assigned to nearest
-neighbor OSDs in a large cluster. We refer to the set of OSDs that should
-contain the replicas of a particular placement group as the **Acting Set**. In
-some cases, an OSD in the Acting Set is ``down`` or otherwise not able to
+neighbor OSDs in a large cluster.
+
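+You can ask Ceph where CRUSH has placed a particular object with the ``ceph
+osd map`` command (the pool and object names below are placeholders)::
+
+    ceph osd map {pool-name} {object-name}
+
+The output lists the placement group the object hashes to and the OSDs CRUSH
+has chosen for it.
+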
+Ceph processes a client request using the **Acting Set**, which is the set of
+OSDs that will actually handle the requests because they have a full and
+working version of a placement group shard. The set of OSDs that should
+contain a shard of a particular placement group is called the **Up Set**, i.e.
+where data is moved/copied to (or planned to be).
+
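+To see both sets for a specific placement group, you can run (the placement
+group id is a placeholder)::
+
+    ceph pg map {pg-num}
+
+which prints the osdmap epoch, the placement group, and the OSD ids in its Up
+Set and Acting Set.
+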
+In some cases, an OSD in the Acting Set is ``down`` or otherwise not able to
service requests for objects in the placement group. When these situations
arise, don't panic. Common examples include:
- An OSD in the Acting Set is ``down`` or unable to service requests,
and another OSD has temporarily assumed its duties.
-Ceph processes a client request using the **Up Set**, which is the set of OSDs
-that will actually handle the requests. In most cases, the Up Set and the Acting
-Set are virtually identical. When they are not, it may indicate that Ceph is
-migrating data, an OSD is recovering, or that there is a problem (i.e., Ceph
-usually echoes a "HEALTH WARN" state with a "stuck stale" message in such
-scenarios).
+In most cases, the Up Set and the Acting Set are identical. When they are not,
+it may indicate that Ceph is migrating the PG (it is remapped), that an OSD is
+recovering, or that there is a problem (in such scenarios, Ceph usually echoes
+a "HEALTH WARN" state with a "stuck stale" message).
To retrieve a list of placement groups, execute::