of the output.
* ``ceph config assimilate-conf -i <input file> -o <output file>``
- will injest a configuration file from *input file* and move any
+ will ingest a configuration file from *input file* and move any
valid options into the monitors' configuration database. Any
settings that are unrecognized, invalid, or cannot be controlled by
the monitor will be returned in an abbreviated config file stored in
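As an illustrative sketch of that workflow (paths are hypothetical), assimilating an existing config file and reviewing the leftovers might look like:

```console
# Ingest options from a legacy config file into the monitors'
# configuration database; anything the monitors cannot handle is
# written back out to the -o file.
$ ceph config assimilate-conf -i /etc/ceph/ceph.conf -o /tmp/leftover.conf

# Review the options that could not be assimilated (may be empty).
$ cat /tmp/leftover.conf
```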
``filestore max inline xattr size``
-:Description: The maximimum size of an XATTR stored in the filesystem (i.e., XFS,
+:Description: The maximum size of an XATTR stored in the filesystem (e.g., XFS,
btrfs, ext4, etc.) per object. Should not be larger than the
- filesytem can handle. Default value of 0 means to use the value
+ filesystem can handle. Default value of 0 means to use the value
specific to the underlying filesystem.
:Type: Unsigned 32-bit Integer
:Required: No
``filestore max inline xattr size xfs``
-:Description: The maximimum size of an XATTR stored in the XFS filesystem.
+:Description: The maximum size of an XATTR stored in the XFS filesystem.
Only used if ``filestore max inline xattr size`` == 0.
:Type: Unsigned 32-bit Integer
:Required: No
``filestore max inline xattr size btrfs``
-:Description: The maximimum size of an XATTR stored in the btrfs filesystem.
+:Description: The maximum size of an XATTR stored in the btrfs filesystem.
Only used if ``filestore max inline xattr size`` == 0.
:Type: Unsigned 32-bit Integer
:Required: No
``filestore max inline xattr size other``
-:Description: The maximimum size of an XATTR stored in other filesystems.
+:Description: The maximum size of an XATTR stored in other filesystems.
Only used if ``filestore max inline xattr size`` == 0.
:Type: Unsigned 32-bit Integer
:Required: No
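For illustration, these options live in the ``[osd]`` section of ``ceph.conf``; the values below are only a sketch, not tuned recommendations:

```ini
[osd]
# 0 (the default) defers to the filesystem-specific option below.
filestore max inline xattr size = 0
# Consulted only while the generic option above is 0.
filestore max inline xattr size xfs = 65536
filestore max inline xattr size btrfs = 2048
filestore max inline xattr size other = 512
```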
:Description: Issue a ``HEALTH_WARN`` in the cluster log if
``mon osd down out interval`` is zero. Having this option set to
zero on the leader acts much like the ``noout`` flag. It's hard
- to figure out what's going wrong with clusters witout the
+ to figure out what's going wrong with clusters without the
              ``noout`` flag set but behaving as if it were, so we
              report a warning in this case.
:Type: Boolean
:Description: In some dynamic deployments the Ceph MON daemon might bind
to an IP address locally that is different from the ``public addr``
advertised to other peers in the network. The environment must ensure
- that routing rules are set correclty. If ``public bind addr`` is set
+ that routing rules are set correctly. If ``public bind addr`` is set
the Ceph MON daemon will bind to it locally and use ``public addr``
in the monmaps to advertise its address to peers. This behavior is limited
to the MON daemon.
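A minimal sketch of such a deployment (addresses are hypothetical): the daemon binds to a local address while advertising a different one in the monmap:

```ini
[mon.a]
# Address advertised to peers via the monmap.
public addr = 10.0.0.10:6789
# Address the daemon actually binds to locally (e.g., behind NAT).
# Routing rules must deliver traffic for public addr to this address.
public bind addr = 192.168.1.10:6789
```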
``mon priority``
:Description: The priority of the declared monitor, the lower value the more
- prefered when a client selects a monitor when trying to connect
+             preferred when a client selects a monitor to connect
to the cluster.
:Type: Unsigned 16-bit Integer
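For example (a sketch; monitor names and values are illustrative), a client would prefer ``mon.a`` here because it declares the lower value:

```ini
[mon.a]
mon priority = 0    # lower value: preferred by clients
[mon.b]
mon priority = 10   # higher value: less preferred
```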
down scrubbing on an OSD that is busy handling client
operations. ``be`` is the default and is the same
priority as all other threads in the OSD. ``rt`` means
- the disk thread will have precendence over all other
+ the disk thread will have precedence over all other
threads in the OSD. Note: Only works with the Linux Kernel
              CFQ scheduler. Since Jewel, scrubbing is no longer carried
              out by the disk iothread; see osd priority options instead.
The QoS support of Ceph is implemented using a queueing scheduler
based on `the dmClock algorithm`_. This algorithm allocates the I/O
resources of the Ceph cluster in proportion to weights, and enforces
-the constraits of minimum reservation and maximum limitation, so that
+the constraints of minimum reservation and maximum limitation, so that
the services can compete for the resources fairly. Currently the
*mclock_opclass* operation queue divides Ceph services involving I/O
resources into the following buckets:
number of shards can be controlled with the configuration options
``osd_op_num_shards``, ``osd_op_num_shards_hdd``, and
``osd_op_num_shards_ssd``. A lower number of shards will increase the
-impact of the mClock queues, but may have other deliterious effects.
+impact of the mClock queues, but may have other deleterious effects.
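As a sketch (values are illustrative, not tuned recommendations), the shard counts could be lowered per device class in ``ceph.conf``:

```ini
[osd]
# Fewer shards increase mClock's influence on scheduling, at the
# possible cost of reduced parallelism in the operation queue.
osd op num shards hdd = 1
osd op num shards ssd = 4
```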
Second, requests are transferred from the operation queue to the
operation sequencer, in which they go through the phases of