From 2e98517d365b6d3ca4d059c20db7e6329338fcbd Mon Sep 17 00:00:00 2001
From: Bryan Stillwell
Date: Thu, 23 Aug 2018 16:17:08 -0600
Subject: [PATCH] doc: Fixed spelling errors in configuration section

Correct a number of spelling mistakes and word omissions in the
cluster configuration section of the docs.

Signed-off-by: Bryan Stillwell
---
 doc/rados/configuration/ceph-conf.rst            |  2 +-
 doc/rados/configuration/filestore-config-ref.rst | 10 +++++-----
 doc/rados/configuration/mon-config-ref.rst       |  2 +-
 doc/rados/configuration/network-config-ref.rst   |  4 ++--
 doc/rados/configuration/osd-config-ref.rst       |  6 +++---
 5 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/doc/rados/configuration/ceph-conf.rst b/doc/rados/configuration/ceph-conf.rst
index b306625224faa..09c8e973da3ea 100644
--- a/doc/rados/configuration/ceph-conf.rst
+++ b/doc/rados/configuration/ceph-conf.rst
@@ -343,7 +343,7 @@ The following CLI commands are used to configure the cluster:
   of the output.
 
 * ``ceph config assimilate-conf -i <input file> -o <output file>``
-  will injest a configuration file from *input file* and move any
+  will ingest a configuration file from *input file* and move any
   valid options into the monitors' configuration database. Any
   settings that are unrecognized, invalid, or cannot be controlled by
   the monitor will be returned in an abbreviated config file stored in
diff --git a/doc/rados/configuration/filestore-config-ref.rst b/doc/rados/configuration/filestore-config-ref.rst
index bb8926d984911..3be9c48943f83 100644
--- a/doc/rados/configuration/filestore-config-ref.rst
+++ b/doc/rados/configuration/filestore-config-ref.rst
@@ -32,9 +32,9 @@ xattrs`` threshold is reached.
 
 ``filestore max inline xattr size``
 
-:Description: The maximimum size of an XATTR stored in the filesystem (i.e., XFS,
+:Description: The maximum size of an XATTR stored in the filesystem (i.e., XFS,
               btrfs, ext4, etc.) per object. Should not be larger than the
-              filesytem can handle. Default value of 0 means to use the value
+              filesystem can handle. Default value of 0 means to use the value
               specific to the underlying filesystem.
 :Type: Unsigned 32-bit Integer
 :Required: No
@@ -43,7 +43,7 @@ xattrs`` threshold is reached.
 
 ``filestore max inline xattr size xfs``
 
-:Description: The maximimum size of an XATTR stored in the XFS filesystem.
+:Description: The maximum size of an XATTR stored in the XFS filesystem.
               Only used if ``filestore max inline xattr size`` == 0.
 :Type: Unsigned 32-bit Integer
 :Required: No
@@ -52,7 +52,7 @@ xattrs`` threshold is reached.
 
 ``filestore max inline xattr size btrfs``
 
-:Description: The maximimum size of an XATTR stored in the btrfs filesystem.
+:Description: The maximum size of an XATTR stored in the btrfs filesystem.
               Only used if ``filestore max inline xattr size`` == 0.
 :Type: Unsigned 32-bit Integer
 :Required: No
@@ -61,7 +61,7 @@ xattrs`` threshold is reached.
 
 ``filestore max inline xattr size other``
 
-:Description: The maximimum size of an XATTR stored in other filesystems.
+:Description: The maximum size of an XATTR stored in other filesystems.
               Only used if ``filestore max inline xattr size`` == 0.
 :Type: Unsigned 32-bit Integer
 :Required: No
diff --git a/doc/rados/configuration/mon-config-ref.rst b/doc/rados/configuration/mon-config-ref.rst
index e0e990c19cae4..f1433e91ee2ae 100644
--- a/doc/rados/configuration/mon-config-ref.rst
+++ b/doc/rados/configuration/mon-config-ref.rst
@@ -388,7 +388,7 @@ by setting it in the ``[mon]`` section of the configuration file.
 :Description: Issue a ``HEALTH_WARN`` in cluster log if
               ``mon osd down out interval`` is zero. Having this option set
               to zero on the leader acts much like the ``noout`` flag. It's hard
-              to figure out what's going wrong with clusters witout the
+              to figure out what's going wrong with clusters without the
               ``noout`` flag set but acting like that just the same, so we
               report a warning in this case.
 :Type: Boolean
diff --git a/doc/rados/configuration/network-config-ref.rst b/doc/rados/configuration/network-config-ref.rst
index 7f1cea53d683c..85c5dddbd2093 100644
--- a/doc/rados/configuration/network-config-ref.rst
+++ b/doc/rados/configuration/network-config-ref.rst
@@ -380,7 +380,7 @@ addresses.
 :Description: In some dynamic deployments the Ceph MON daemon might bind
               to an IP address locally that is different from the ``public addr``
               advertised to other peers in the network. The environment must ensure
-              that routing rules are set correclty. If ``public bind addr`` is set
+              that routing rules are set correctly. If ``public bind addr`` is set
               the Ceph MON daemon will bind to it locally and use ``public addr``
               in the monmaps to advertise its address to peers. This behavior is limited
               to the MON daemon.
@@ -415,7 +415,7 @@ specified.
 ``mon priority``
 
 :Description: The priority of the declared monitor, the lower value the more
-              prefered when a client selects a monitor when trying to connect
+              preferred when a client selects a monitor when trying to connect
               to the cluster.
 
 :Type: Unsigned 16-bit Integer
diff --git a/doc/rados/configuration/osd-config-ref.rst b/doc/rados/configuration/osd-config-ref.rst
index 9fb367b29f199..346eb2b4af7bc 100644
--- a/doc/rados/configuration/osd-config-ref.rst
+++ b/doc/rados/configuration/osd-config-ref.rst
@@ -465,7 +465,7 @@ Operations
               down scrubbing on an OSD that is busy handling client
               operations. ``be`` is the default and is the same
               priority as all other threads in the OSD. ``rt`` means
-              the disk thread will have precendence over all other
+              the disk thread will have precedence over all other
               threads in the OSD. Note: Only works with the Linux Kernel
               CFQ scheduler. Since Jewel scrubbing is no longer carried
               out by the disk iothread, see osd priority options instead.
@@ -520,7 +520,7 @@ Core Concepts
 The QoS support of Ceph is implemented using a queueing scheduler
 based on `the dmClock algorithm`_. This algorithm allocates the I/O
 resources of the Ceph cluster in proportion to weights, and enforces
-the constraits of minimum reservation and maximum limitation, so that
+the constraints of minimum reservation and maximum limitation, so that
 the services can compete for the resources fairly. Currently the
 *mclock_opclass* operation queue divides Ceph services involving I/O
 resources into following buckets:
@@ -604,7 +604,7 @@ these queues neither interact nor share information among them.
 The number of shards can be controlled with the configuration options
 ``osd_op_num_shards``, ``osd_op_num_shards_hdd``, and
 ``osd_op_num_shards_ssd``. A lower number of shards will increase the
-impact of the mClock queues, but may have other deliterious effects.
+impact of the mClock queues, but may have other deleterious effects.
 
 Second, requests are transferred from the operation queue to the
 operation sequencer, in which they go through the phases of
-- 
2.39.5
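
As context for the ``ceph-conf.rst`` hunk above, a minimal sketch of how
``ceph config assimilate-conf`` is typically invoked; the input path and
output filename here are illustrative placeholders, not values taken from
the patch:

    # Move recognized options from a local config file into the
    # monitors' configuration database (paths are illustrative).
    ceph config assimilate-conf -i /etc/ceph/ceph.conf -o /tmp/unmanaged.conf

    # Per the docs, anything unrecognized, invalid, or not manageable
    # by the monitors is returned in the abbreviated output file.
    cat /tmp/unmanaged.conf

If the output file comes back empty, every option in the input file should
have been assimilated into the monitors' configuration database.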