General Settings
================
-The following settings provide an Ceph OSD Daemon's ID, and determine paths to
+The following settings provide a Ceph OSD Daemon's ID, and determine paths to
data and journals. Ceph deployment scripts typically generate the UUID
automatically.
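As a rough sketch (the values below are purely illustrative, not recommended
defaults), these general settings could be pinned explicitly in ``ceph.conf``
instead of being left to the deployment scripts:

.. code-block:: ini

    [osd.0]
    # Example paths only; deployment tools normally lay these out for you.
    osd data = /var/lib/ceph/osd/ceph-0
    osd journal = /var/lib/ceph/osd/ceph-0/journal
    # Usually generated automatically when the OSD is created.
    osd uuid = 6ba7b810-9dad-11d1-80b4-00c04fd430c8
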
Scrubbing
=========
-In addition to making multiple copies of objects, Ceph insures data integrity by
+In addition to making multiple copies of objects, Ceph ensures data integrity by
scrubbing placement groups. Ceph scrubbing is analogous to ``fsck`` on the
object storage layer. For each placement group, Ceph generates a catalog of all
objects and compares each primary object and its replicas to ensure that no
:Description: This selects which priority ops will be sent to the strict
queue versus the normal queue. The ``low`` setting sends all
replication ops and higher to the strict queue, while the ``high``
- option sends only replication acknowledgement ops and higher to
+ option sends only replication acknowledgment ops and higher to
the strict queue. Setting this to ``high`` should help when a few
OSDs in the cluster are very busy, especially when combined with
``wpq`` in the ``osd op queue`` setting. OSDs that are very busy
*mclock_client* is available. It divides operations based on category
but also divides them based on the client making the request. This
helps not only manage the distribution of resources spent on different
-classes of operations but also tries to insure fairness among clients.
+classes of operations but also tries to ensure fairness among clients.
CURRENT IMPLEMENTATION NOTE: the current experimental implementation
does not enforce the limit values. As a first approximation we decided
``osd recovery max active ssd``
:Description: The number of active recovery requests per OSD at one time, if the
- priary device is non-rotational (i.e., an SSD).
+ primary device is non-rotational (i.e., an SSD).
:Type: 32-bit Integer
:Default: ``10``
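
As an illustrative sketch only (not a tuning recommendation), this limit
could be raised for SSD-backed OSDs with a ``ceph.conf`` override such as:

.. code-block:: ini

    [osd]
    # Default is 10; a higher value allows more concurrent recovery
    # requests per OSD when the primary device is an SSD.
    osd recovery max active ssd = 20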