From: Bryan Stillwell
Date: Thu, 9 Aug 2018 20:51:25 +0000 (-0600)
Subject: doc: Multiple spelling fixes
X-Git-Tag: v14.0.1~634^2
X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=791b00daa15ecba06b07403f0093089d84d330a2;p=ceph.git

doc: Multiple spelling fixes

I ran a lot of the docs through aspell and found a number of spelling problems.

Signed-off-by: Bryan Stillwell
---

diff --git a/doc/install/install-ceph-gateway.rst b/doc/install/install-ceph-gateway.rst
index dc1178e5a995b..92479b2800f00 100644
--- a/doc/install/install-ceph-gateway.rst
+++ b/doc/install/install-ceph-gateway.rst
@@ -264,7 +264,7 @@ system-wide value. You can also set it for each instance in your Ceph
 configuration file.

 Once you have changed your bucket sharding configuration in your Ceph
-configuration file, restart your gateway. On Red Hat Enteprise Linux execute::
+configuration file, restart your gateway. On Red Hat Enterprise Linux execute::

    sudo systemctl restart ceph-radosgw.service
diff --git a/doc/install/manual-freebsd-deployment.rst b/doc/install/manual-freebsd-deployment.rst
index 9c45e809adc87..1e4008953784a 100644
--- a/doc/install/manual-freebsd-deployment.rst
+++ b/doc/install/manual-freebsd-deployment.rst
@@ -59,7 +59,7 @@ Current implementation works on ZFS pools
 * Some cache and log (ZIL) can be attached.
   Please note that this is different from the Ceph journals. Cache and log are
   totally transparent for Ceph, and help the filesystem to keep the system
-  consistant and help performance.
+  consistent and help performance.
   Assuming that ada2 is an SSD::

     gpart create -s GPT ada2
diff --git a/doc/man/8/ceph-dencoder.rst b/doc/man/8/ceph-dencoder.rst
index cf2e429e103cd..09032aa75324b 100644
--- a/doc/man/8/ceph-dencoder.rst
+++ b/doc/man/8/ceph-dencoder.rst
@@ -68,7 +68,7 @@ Commands

 .. option:: count_tests

-   Print the number of built-in test instances of the previosly
+   Print the number of built-in test instances of the previously
    selected type that **ceph-dencoder** is able to generate.

 .. option:: select_test
diff --git a/doc/man/8/ceph-kvstore-tool.rst b/doc/man/8/ceph-kvstore-tool.rst
index bcea79694a956..da53d9ce34267 100644
--- a/doc/man/8/ceph-kvstore-tool.rst
+++ b/doc/man/8/ceph-kvstore-tool.rst
@@ -15,7 +15,7 @@ Synopsis
 Description
 ===========

-:program:`ceph-kvstore-tool` is a kvstore manipulation tool. It allows users to manipule
+:program:`ceph-kvstore-tool` is a kvstore manipulation tool. It allows users to manipulate
 leveldb/rocksdb's data (like OSD's omap) offline.

 Commands
diff --git a/doc/man/8/ceph-volume.rst b/doc/man/8/ceph-volume.rst
index 767f36a504015..d00c83644f26d 100644
--- a/doc/man/8/ceph-volume.rst
+++ b/doc/man/8/ceph-volume.rst
@@ -85,7 +85,7 @@ Usage::
 Optional Arguments:

 * [-h, --help] show the help message and exit
-* [--auto-detect-objectstore] Automatically detect the objecstore by inspecting
+* [--auto-detect-objectstore] Automatically detect the objectstore by inspecting
   the OSD
 * [--bluestore] bluestore objectstore (default)
 * [--filestore] filestore objectstore
diff --git a/doc/man/8/crushtool.rst b/doc/man/8/crushtool.rst
index 897f62ec41daa..eba3aa19a4bd1 100644
--- a/doc/man/8/crushtool.rst
+++ b/doc/man/8/crushtool.rst
@@ -120,7 +120,7 @@ pools; it only runs simulations by mapping values in the range

 .. option:: --show-utilization

-  Displays the expected and actual utilisation for each device, for
+  Displays the expected and actual utilization for each device, for
   each number of replicas. For instance::

     device 0:  stored : 951  expected : 853.333
diff --git a/doc/rados/api/librados.rst b/doc/rados/api/librados.rst
index 73d0e421ed333..3e202bd4bfffa 100644
--- a/doc/rados/api/librados.rst
+++ b/doc/rados/api/librados.rst
@@ -75,8 +75,8 @@ In the end, you will want to close your IO context and connection to RADOS with

     rados_shutdown(cluster);

-Asychronous IO
-==============
+Asynchronous IO
+===============

 When doing lots of IO, you often don't need to wait for one operation
 to complete before starting the next one. `Librados` provides
diff --git a/doc/rados/operations/health-checks.rst b/doc/rados/operations/health-checks.rst
index acd4ce8913880..1fdd72badd347 100644
--- a/doc/rados/operations/health-checks.rst
+++ b/doc/rados/operations/health-checks.rst
@@ -286,7 +286,7 @@ DEVICE_HEALTH_IN_USE
 ____________________

 One or more devices is expected to fail soon and has been marked "out"
-of the cluster based on ``mgr/devicehalth/mark_out_threshold``, but it
+of the cluster based on ``mgr/devicehealth/mark_out_threshold``, but it
 is still participating in one more PGs. This may be because it was only
 recently marked "out" and data is still migrating, or because data
 cannot be migrated off for some reason (e.g., the cluster is nearly
@@ -335,7 +335,7 @@ Detailed information about which PGs are affected is available from::

     ceph health detail

 In most cases the root cause is that one or more OSDs is currently
-down; see the dicussion for ``OSD_DOWN`` above.
+down; see the discussion for ``OSD_DOWN`` above.

 The state of specific problematic PGs can be queried with::
@@ -392,7 +392,7 @@ OSD_SCRUB_ERRORS
 ________________

 Recent OSD scrubs have uncovered inconsistencies. This error is generally
-paired with *PG_DAMANGED* (see above).
+paired with *PG_DAMAGED* (see above).

 See :doc:`pg-repair` for more information.
@@ -419,7 +419,7 @@ ___________

 The number of PGs in use in the cluster is below the configurable
 threshold of ``mon_pg_warn_min_per_osd`` PGs per OSD. This can lead
-to suboptimizal distribution and balance of data across the OSDs in
+to suboptimal distribution and balance of data across the OSDs in
 the cluster, and similar reduce overall performance.

 This may be an expected condition if data pools have not yet been
diff --git a/doc/rados/operations/monitoring.rst b/doc/rados/operations/monitoring.rst
index 3c59f057720d7..db087829f1b09 100644
--- a/doc/rados/operations/monitoring.rst
+++ b/doc/rados/operations/monitoring.rst
@@ -128,7 +128,7 @@ log.
 Monitoring Health Checks
 ========================

-Ceph continously runs various *health checks* against its own status. When
+Ceph continuously runs various *health checks* against its own status. When
 a health check fails, this is reflected in the output of ``ceph status`` (or
 ``ceph health``). In addition, messages are sent to the cluster log to
 indicate when a check fails, and when the cluster recovers.
diff --git a/doc/rados/troubleshooting/troubleshooting-mon.rst b/doc/rados/troubleshooting/troubleshooting-mon.rst
index 5f82d415d818f..dd4c8fdeaa475 100644
--- a/doc/rados/troubleshooting/troubleshooting-mon.rst
+++ b/doc/rados/troubleshooting/troubleshooting-mon.rst
@@ -200,7 +200,7 @@ What if the state is ``probing``?
   multi-monitor cluster, the monitors will stay in this state until they
   find enough monitors to form a quorum -- this means that if you have 2 out
   of 3 monitors down, the one remaining monitor will stay in this state
-  indefinitively until you bring one of the other monitors up.
+  indefinitely until you bring one of the other monitors up.

   If you have a quorum, however, the monitor should be able to find the
   remaining monitors pretty fast, as long as they can be reached. If your
@@ -337,7 +337,7 @@ Can I increase the maximum tolerated clock skew?
   This value is configurable via the ``mon-clock-drift-allowed`` option, and
   although you *CAN* it doesn't mean you *SHOULD*. The clock skew mechanism
   is in place because clock skewed monitor may not properly behave. We, as
-  developers and QA afficcionados, are comfortable with the current default
+  developers and QA aficionados, are comfortable with the current default
   value, as it will alert the user before the monitors get out hand. Changing
   this value without testing it first may cause unforeseen effects on the
   stability of the monitors and overall cluster healthiness, although there is
@@ -402,7 +402,7 @@ or::
 Recovery using healthy monitor(s)
 ---------------------------------

-If there is any survivers, we can always `replace`_ the corrupted one with a
+If there is any survivors, we can always `replace`_ the corrupted one with a
 new one. And after booting up, the new joiner will sync up with a healthy
 peer, and once it is fully sync'ed, it will be able to serve the clients.
@@ -527,7 +527,7 @@ You have quorum

     ceph tell mon.* config set debug_mon 10/10

-No quourm
+No quorum

   Use the monitor's admin socket and directly adjust the configuration
   options::
diff --git a/doc/rados/troubleshooting/troubleshooting-osd.rst b/doc/rados/troubleshooting/troubleshooting-osd.rst
index d704a72fc1a2e..12a1be6259a0c 100644
--- a/doc/rados/troubleshooting/troubleshooting-osd.rst
+++ b/doc/rados/troubleshooting/troubleshooting-osd.rst
@@ -381,7 +381,7 @@ Currently, we recommend deploying clusters with XFS.

 We recommend against using btrfs or ext4. The btrfs filesystem has
 many attractive features, but bugs in the filesystem may lead to
-performance issues and suprious ENOSPC errors. We do not recommend
+performance issues and spurious ENOSPC errors. We do not recommend
 ext4 because xattr size limitations break our support for long object
 names (needed for RGW).
@@ -477,7 +477,7 @@ Events from the OSD after stuff has been given to local disk
 - op_applied: The op has been write()'en to the backing FS (ie, applied in
   memory but not flushed out to disk) on the primary
 - sub_op_applied: op_applied, but for a replica's "subop"
-- sub_op_committed: op_commited, but for a replica's subop (only for EC pools)
+- sub_op_committed: op_committed, but for a replica's subop (only for EC pools)
 - sub_op_commit_rec/sub_op_apply_rec from : the primary marks this when it
   hears about the above, but for a particular replica
 - commit_sent: we sent a reply back to the client (or primary OSD, for sub ops)
diff --git a/doc/rados/troubleshooting/troubleshooting-pg.rst b/doc/rados/troubleshooting/troubleshooting-pg.rst
index e2cf5f019f84b..467f5ce730863 100644
--- a/doc/rados/troubleshooting/troubleshooting-pg.rst
+++ b/doc/rados/troubleshooting/troubleshooting-pg.rst
@@ -569,7 +569,7 @@ rule needs, ``--rule`` is the value of the ``ruleset`` field displayed by
 ``ceph osd crush rule dump``. The test will try mapping one million
 values (i.e. the range defined by ``[--min-x,--max-x]``) and must
 display at least one bad mapping. If it outputs nothing it
-means all mappings are successfull and you can stop right there: the
+means all mappings are successful and you can stop right there: the
 problem is elsewhere.

 The CRUSH rule can be edited by decompiling the crush map::
diff --git a/doc/start/documenting-ceph.rst b/doc/start/documenting-ceph.rst
index 42615ba5c931a..57b25f75bed26 100644
--- a/doc/start/documenting-ceph.rst
+++ b/doc/start/documenting-ceph.rst
@@ -10,7 +10,7 @@ instructions will help the Ceph project immensely.

 The Ceph documentation source resides in the ``ceph/doc`` directory of the Ceph
 repository, and Python Sphinx renders the source into HTML and manpages. The
-http://ceph.com/docs link currenly displays the ``master`` branch by default,
+http://ceph.com/docs link currently displays the ``master`` branch by default,
 but you may view documentation for older branches (e.g., ``argonaut``) or
 future branches (e.g., ``next``) as well as work-in-progress branches by
 substituting ``master`` with the branch name you prefer.
@@ -188,7 +188,7 @@ To build the documentation on Debian/Ubuntu, Fedora, or CentOS/RHEL, execute::

     admin/build-doc

-To scan for the reachablity of external links, execute::
+To scan for the reachability of external links, execute::

     admin/build-doc linkcheck
diff --git a/doc/start/kube-helm.rst b/doc/start/kube-helm.rst
index c5994614281bd..a8ae0b4c909c4 100644
--- a/doc/start/kube-helm.rst
+++ b/doc/start/kube-helm.rst
@@ -2,8 +2,8 @@
 Installation (Kubernetes + Helm)
 ================================

-The ceph-helm_ project enables you to deploy Ceph in a Kubernetes environement.
-This documentation assumes a Kubernetes environement is available.
+The ceph-helm_ project enables you to deploy Ceph in a Kubernetes environment.
+This documentation assumes a Kubernetes environment is available.

 Current limitations
 ===================
@@ -157,7 +157,7 @@ Run the helm install command to deploy Ceph::
   NAME                 TYPE
   ceph-rbd             ceph.com/rbd

-The output from helm install shows us the different types of ressources that will be deployed.
+The output from helm install shows us the different types of resources that will be deployed.

 A StorageClass named ``ceph-rbd`` of type ``ceph.com/rbd`` will be created with
 ``ceph-rbd-provisioner`` Pods. These will allow a RBD to be automatically
 provisioned upon creation of a PVC. RBDs will also be formatted when mapped for the first
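
For context on the librados section touched above (the "Asynchronous IO" heading fix): that
section describes starting an operation and collecting its completion later instead of
blocking on every call. Below is a minimal sketch of that pattern with the librados C API.
It assumes a reachable cluster, a ceph.conf in the default search path, and an existing pool
named "data"; the pool and object names are illustrative only, not taken from the patch::

    /*
     * Minimal sketch of asynchronous IO with the librados C API.
     * Assumptions: reachable cluster, default client id and config search
     * path, and an existing pool named "data". Error handling is abbreviated.
     *
     * Typical build: cc async_write.c -lrados -o async_write
     */
    #include <rados/librados.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t io;
        rados_completion_t comp;
        const char *buf = "hello from an async write";
        int ret;

        /* Connect to the cluster using the default client and ceph.conf. */
        if (rados_create(&cluster, NULL) < 0 ||
            rados_conf_read_file(cluster, NULL) < 0 ||
            rados_connect(cluster) < 0) {
            fprintf(stderr, "failed to connect to cluster\n");
            return 1;
        }

        /* Open an IO context on the (assumed) "data" pool. */
        if (rados_ioctx_create(cluster, "data", &io) < 0) {
            fprintf(stderr, "failed to open pool\n");
            rados_shutdown(cluster);
            return 1;
        }

        /* Queue the write and keep going; the completion tracks it. */
        rados_aio_create_completion(NULL, NULL, NULL, &comp);
        rados_aio_write(io, "greeting", comp, buf, strlen(buf), 0);

        /* ... other work or more queued operations could go here ... */

        /* Wait for this particular operation before tearing down. */
        rados_aio_wait_for_complete(comp);
        ret = rados_aio_get_return_value(comp);
        printf("async write returned %d\n", ret);
        rados_aio_release(comp);

        rados_ioctx_destroy(io);
        rados_shutdown(cluster);
        return ret < 0 ? 1 : 0;
    }

Many completions can be outstanding at once; waiting on (or registering a callback with) a
completion is only needed at the points where ordering or teardown actually matters.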