From d99d520493e1e269bbdedc18d9caf522199da5ac Mon Sep 17 00:00:00 2001
From: Ponnuvel Palaniyappan
Date: Fri, 18 Sep 2020 18:12:07 +0100
Subject: [PATCH] doc: Fixed a number of typos in documentation

Signed-off-by: Ponnuvel Palaniyappan
---
 doc/ceph-volume/simple/activate.rst                   | 2 +-
 doc/cephadm/concepts.rst                              | 2 +-
 doc/cephfs/administration.rst                         | 2 +-
 doc/cephfs/mantle.rst                                 | 2 +-
 doc/cephfs/multimds.rst                               | 2 +-
 doc/cephfs/nfs.rst                                    | 2 +-
 doc/dev/ceph-volume/zfs.rst                           | 4 ++--
 doc/dev/ceph_krb_auth.rst                             | 2 +-
 doc/dev/developer_guide/running-tests-using-teuth.rst | 2 +-
 doc/dev/logging.rst                                   | 2 +-
 doc/dev/osd_internals/erasure_coding/proposals.rst    | 2 +-
 doc/dev/osd_internals/manifest.rst                    | 2 +-
 doc/dev/seastore.rst                                  | 2 +-
 doc/install/get-packages.rst                          | 2 +-
 doc/man/8/ceph-bluestore-tool.rst                     | 2 +-
 doc/man/8/ceph.rst                                    | 2 +-
 doc/mgr/orchestrator.rst                              | 2 +-
 doc/mgr/orchestrator_modules.rst                      | 4 ++--
 doc/rados/operations/pg-repair.rst                    | 2 +-
 doc/rados/operations/user-management.rst              | 2 +-
 doc/rados/troubleshooting/troubleshooting-pg.rst      | 2 +-
 doc/radosgw/cloud-sync-module.rst                     | 2 +-
 doc/radosgw/config-ref.rst                            | 2 +-
 doc/radosgw/notifications.rst                         | 2 +-
 doc/radosgw/oidc.rst                                  | 2 +-
 doc/radosgw/s3select.rst                              | 2 +-
 doc/rbd/libvirt.rst                                   | 2 +-
 27 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/doc/ceph-volume/simple/activate.rst b/doc/ceph-volume/simple/activate.rst
index edbb1e3f89d60..2b2795d0be7fe 100644
--- a/doc/ceph-volume/simple/activate.rst
+++ b/doc/ceph-volume/simple/activate.rst
@@ -11,7 +11,7 @@ them, to prevent the UDEV/ceph-disk interaction that will attempt to start them
 up at boot time.

 The disabling of ``ceph-disk`` units is done only when calling ``ceph-volume
-simple activate`` directly, but is is avoided when being called by systemd when
+simple activate`` directly, but is avoided when being called by systemd when
 the system is booting up.

 The activation process requires using both the :term:`OSD id` and :term:`OSD uuid`
diff --git a/doc/cephadm/concepts.rst b/doc/cephadm/concepts.rst
index 8b1743799b4a0..363a5209fe749 100644
--- a/doc/cephadm/concepts.rst
+++ b/doc/cephadm/concepts.rst
@@ -106,7 +106,7 @@ This instructs cephadm to deploy three daemons on hosts labeled with
 ``myfs`` across the cluster.

 Then, in case there are less than three daemons deployed on the candidate
-hosts, cephadm will then then randomly choose hosts for deploying new daemons.
+hosts, cephadm will then randomly choose hosts for deploying new daemons.

 In case there are more than three daemons deployed, cephadm will remove
 existing daemons.
diff --git a/doc/cephfs/administration.rst b/doc/cephfs/administration.rst
index 4261e9480ee63..7fd83001b3e0b 100644
--- a/doc/cephfs/administration.rst
+++ b/doc/cephfs/administration.rst
@@ -302,7 +302,7 @@ is used by NFS-Ganesha.
 lazy_caps_wanted

 When a stale client resumes, if the client supports this feature, mds only needs
-to re-issue caps that are explictly wanted.
+to re-issue caps that are explicitly wanted.

 ::
diff --git a/doc/cephfs/mantle.rst b/doc/cephfs/mantle.rst
index 6d3d40d61e340..064408f712a7a 100644
--- a/doc/cephfs/mantle.rst
+++ b/doc/cephfs/mantle.rst
@@ -89,7 +89,7 @@ Mantle with `vstart.sh`

    Note that if you look at the last MDS (which could be a, b, or c -- it's
-   random), you will see an an attempt to index a nil value. This is because the
+   random), you will see an attempt to index a nil value. This is because the
    last MDS tries to check the load of its neighbor, which does not exist.

 5. Run a simple benchmark. In our case, we use the Docker mdtest image to
diff --git a/doc/cephfs/multimds.rst b/doc/cephfs/multimds.rst
index c2367e891431f..e22a84fac8137 100644
--- a/doc/cephfs/multimds.rst
+++ b/doc/cephfs/multimds.rst
@@ -184,7 +184,7 @@ that should be pinned. For example:
 Would cause any directory loaded into cache or created under ``/tmp`` to be
 ephemerally pinned 50 percent of the time.

-It is recomended to only set this to small values, like ``.001`` or ``0.1%``.
+It is recommended to only set this to small values, like ``.001`` or ``0.1%``.
 Having too many subtrees may degrade performance. For this reason, the config
 ``mds_export_ephemeral_random_max`` enforces a cap on the maximum of this
 percentage (default: ``.01``). The MDS returns ``EINVAL`` when attempting to
diff --git a/doc/cephfs/nfs.rst b/doc/cephfs/nfs.rst
index c68ed6fc77975..6c5cd8cb905b2 100644
--- a/doc/cephfs/nfs.rst
+++ b/doc/cephfs/nfs.rst
@@ -106,7 +106,7 @@ Deploy the rook operator::

 .. note:: Nautilus release or latest Ceph image should be used.

-Before proceding check if the pods are running::
+Before proceeding, check if the pods are running::

     kubectl -n rook-ceph get pod
diff --git a/doc/dev/ceph-volume/zfs.rst b/doc/dev/ceph-volume/zfs.rst
index ca961698b22db..18de7652a0612 100644
--- a/doc/dev/ceph-volume/zfs.rst
+++ b/doc/dev/ceph-volume/zfs.rst
@@ -33,7 +33,7 @@ in the context of zfs tags::
 Tags on filesystems are stored as property.
 Tags on a zpool are stored in the comment property as a concatenated list
-seperated by ``;``
+separated by ``;``

 .. _ceph-volume-zfs-tags:

@@ -164,7 +164,7 @@ Example::
 ``compression``
 ---------------

-A compression-enabled device can allways be set using the native zfs settings on
+A compression-enabled device can always be set using the native zfs settings on
 a volume or filesystem. This will/can be activated during creation of the volume
 of filesystem.
 When activated by ``ceph-volume zfs`` this tag will be created.
diff --git a/doc/dev/ceph_krb_auth.rst b/doc/dev/ceph_krb_auth.rst
index dc3c7392b5549..627b9bd8e3890 100644
--- a/doc/dev/ceph_krb_auth.rst
+++ b/doc/dev/ceph_krb_auth.rst
@@ -261,7 +261,7 @@ properly:
   + Both *(forward and reverse)* zones, with *fully qualified domain name
     (fqdn)* ``(hostname + domain.name)``

-  + KDC discover can be set up to to use DNS ``(srv resources)`` as
+  + KDC discovery can be set up to use DNS ``(srv resources)`` as
     service location protocol *(RFCs 2052, 2782)*, as well as *host
     or domain* to the *appropriate realm* ``(txt record)``.
diff --git a/doc/dev/developer_guide/running-tests-using-teuth.rst b/doc/dev/developer_guide/running-tests-using-teuth.rst
index e3761f11ec9a7..492b7790e9e0a 100644
--- a/doc/dev/developer_guide/running-tests-using-teuth.rst
+++ b/doc/dev/developer_guide/running-tests-using-teuth.rst
@@ -125,7 +125,7 @@ to terminate a job::

     teuthology-kill -r teuthology-2019-12-10_05:00:03-smoke-master-testing-basic-smithi

-Let's call the the argument passed to ``-r`` as test ID. It can be found
+Let's call the argument passed to ``-r`` the test ID. It can be found
 easily in the link to the Pulpito page for the tests you triggered. For
 example, for the above test ID, the link is -
 http://pulpito.front.sepia.ceph.com/teuthology-2019-12-10_05:00:03-smoke-master-testing-basic-smithi/
diff --git a/doc/dev/logging.rst b/doc/dev/logging.rst
index 1337bacd0d401..67d3de141389d 100644
--- a/doc/dev/logging.rst
+++ b/doc/dev/logging.rst
@@ -71,7 +71,7 @@ documentation.
 Common acronyms are OK -- don't waste screen space typing "Rados Object
 Gateway" instead of RGW. Do not use internal class names like "MDCache" or
 "Objecter". It is okay to mention internal structures if they are the direct subject of the message,
-for example in a corruption, but use plain english.
+for example in a corruption, but use plain English.

 Example: instead of "Objecter requests" say "OSD client requests"
 Example: it is okay to mention internal structure in the context of "Corrupt
 session table" (but don't say "Corrupt SessionTable")
diff --git a/doc/dev/osd_internals/erasure_coding/proposals.rst b/doc/dev/osd_internals/erasure_coding/proposals.rst
index dc66506c3c99c..d048ce8a13d36 100644
--- a/doc/dev/osd_internals/erasure_coding/proposals.rst
+++ b/doc/dev/osd_internals/erasure_coding/proposals.rst
@@ -193,7 +193,7 @@ RADOS Client Acknowledgement Generation Optimization
 ====================================================

 Now that the recovery scheme is understood, we can discuss the
-generation of of the RADOS operation acknowledgement (ACK) by the
+generation of the RADOS operation acknowledgement (ACK) by the
 primary ("sufficient" from above). It is NOT required that the
 primary wait for all shards to complete their respective prepare
 operations. Using our example where the RADOS operations writes only
diff --git a/doc/dev/osd_internals/manifest.rst b/doc/dev/osd_internals/manifest.rst
index 74c6e11f4e0cd..09b1ec5b42b50 100644
--- a/doc/dev/osd_internals/manifest.rst
+++ b/doc/dev/osd_internals/manifest.rst
@@ -284,7 +284,7 @@ This seems complicated, but it gets us two valuable properties:
    incrementing a ref
 2) We don't need to load the object_manifest_t for every clone
    to determine how to handle removing one -- just the ones
-   immediately preceeding and suceeding it.
+   immediately preceding and succeeding it.

 All clone operations will need to consider adjacent chunk_maps
 when adding or removing references.
diff --git a/doc/dev/seastore.rst b/doc/dev/seastore.rst
index 0b22e57edb68a..eb89d82196b0b 100644
--- a/doc/dev/seastore.rst
+++ b/doc/dev/seastore.rst
@@ -60,7 +60,7 @@ mixed on the underlying media.
 Persistent Memory
 -----------------

-As the intial sequential design above matures, we'll introduce
+As the initial sequential design above matures, we'll introduce
 persistent memory support for metadata and caching structures.

 Design
diff --git a/doc/install/get-packages.rst b/doc/install/get-packages.rst
index c14ea60a1f649..40468eec871c0 100644
--- a/doc/install/get-packages.rst
+++ b/doc/install/get-packages.rst
@@ -343,7 +343,7 @@ your Linux distribution codename. Replace ``{arch}`` with the CPU architecture.
 RPM Packages
 ~~~~~~~~~~~~

-Ceph requires additional additional third party libraries.
+Ceph requires additional third party libraries.
 To add the EPEL repository, execute the following

 .. prompt:: bash $
diff --git a/doc/man/8/ceph-bluestore-tool.rst b/doc/man/8/ceph-bluestore-tool.rst
index 9f7fc56d802a5..2a1813e69683a 100644
--- a/doc/man/8/ceph-bluestore-tool.rst
+++ b/doc/man/8/ceph-bluestore-tool.rst
@@ -191,7 +191,7 @@ It is advised to first check if rescue process would be successfull::

     ceph-bluestore-tool fsck --path *osd path* \
         --bluefs_replay_recovery=true --bluefs_replay_recovery_disable_compact=true

-If above fsck is successfull fix procedure can be applied.
+If the above fsck is successful, the fix procedure can be applied.

 Availability
 ============
diff --git a/doc/man/8/ceph.rst b/doc/man/8/ceph.rst
index 62fcdb7f28aae..715e667dac1fa 100644
--- a/doc/man/8/ceph.rst
+++ b/doc/man/8/ceph.rst
@@ -1152,7 +1152,7 @@ Usage::

     ceph osd pool application rm

-Subcommand ``set`` assosciates or updates, if it already exists, a key-value
+Subcommand ``set`` associates or updates, if it already exists, a key-value
 pair with the given application for the given pool.

 Usage::
diff --git a/doc/mgr/orchestrator.rst b/doc/mgr/orchestrator.rst
index 32c3bc0fac159..0e060332e8642 100644
--- a/doc/mgr/orchestrator.rst
+++ b/doc/mgr/orchestrator.rst
@@ -609,7 +609,7 @@ and ``=name`` specifies the name of the new monitor.
 Placement by labels
 -------------------

-Daemons can be explictly placed on hosts that match a specific label::
+Daemons can be explicitly placed on hosts that match a specific label::

     orch apply prometheus --placement="label:mylabel"
diff --git a/doc/mgr/orchestrator_modules.rst b/doc/mgr/orchestrator_modules.rst
index 96e1b60112990..0fd1f4bbcbb4c 100644
--- a/doc/mgr/orchestrator_modules.rst
+++ b/doc/mgr/orchestrator_modules.rst
@@ -228,7 +228,7 @@ Placement
 ---------

 A :ref:`orchestrator-cli-placement-spec` defines the placement of
-daemons of a specifc service.
+daemons of a specific service.

 In general, stateless services do not require any specific placement
 rules as they can run anywhere that sufficient system resources
@@ -285,7 +285,7 @@ OSD Replacement
 See :ref:`rados-replacing-an-osd` for the underlying process.

 Replacing OSDs is fundamentally a two-staged process, as users need to
-physically replace drives. The orchestrator therefor exposes this two-staged process.
+physically replace drives. The orchestrator therefore exposes this two-staged process.

 Phase one is a call to :meth:`Orchestrator.remove_daemons` with ``destroy=True``
 in order to mark the OSD as destroyed.
diff --git a/doc/rados/operations/pg-repair.rst b/doc/rados/operations/pg-repair.rst
index d2c0e96efeda3..5dcfe63ad055c 100644
--- a/doc/rados/operations/pg-repair.rst
+++ b/doc/rados/operations/pg-repair.rst
@@ -51,7 +51,7 @@ More Information on Placement Group Repair
 ==========================================
 Ceph stores and updates the checksums of objects stored in the cluster. When a scrub is performed on a placement group, the OSD attempts to choose an authoritative copy from among its replicas. Among all of the possible cases, only one case is consistent. After a deep scrub, Ceph calculates the checksum of an object read from the disk and compares it to the checksum previously recorded. If the current checksum and the previously recorded checksums do not match, that is an inconsistency. In the case of replicated pools, any mismatch between the checksum of any replica of an object and the checksum of the authoritative copy means that there is an inconsistency.

-The "pg repair" command attempts to fix inconsistencies of various kinds. If "pg repair" finds an inconsisent placement group, it attempts to overwrite the digest of the inconsistent copy with the digest of the authoritative copy. If "pg repair" finds an inconsistent replicated pool, it marks the inconsistent copy as missing. Recovery, in the case of replicated pools, is beyond the scope of "pg repair".
+The "pg repair" command attempts to fix inconsistencies of various kinds. If "pg repair" finds an inconsistent placement group, it attempts to overwrite the digest of the inconsistent copy with the digest of the authoritative copy. If "pg repair" finds an inconsistent replicated pool, it marks the inconsistent copy as missing. Recovery, in the case of replicated pools, is beyond the scope of "pg repair".

 For erasure coded and bluestore pools, Ceph will automatically repair if osd_scrub_auto_repair (configuration default "false") is set to true and at most osd_scrub_auto_repair_num_errors (configuration default 5) errors are found.
diff --git a/doc/rados/operations/user-management.rst b/doc/rados/operations/user-management.rst
index 71c832e455a05..739ddf4f71d76 100644
--- a/doc/rados/operations/user-management.rst
+++ b/doc/rados/operations/user-management.rst
@@ -652,7 +652,7 @@ and the user followed by the capabilities. For example::

     sudo ceph-authtool /etc/ceph/ceph.keyring -n client.ringo --cap osd 'allow rwx' --cap mon 'allow rwx'

 To update the user to the Ceph Storage Cluster, you must update the user
-in the keyring to the user entry in the the Ceph Storage Cluster. ::
+in the keyring to the user entry in the Ceph Storage Cluster. ::

     sudo ceph auth import -i /etc/ceph/ceph.keyring
diff --git a/doc/rados/troubleshooting/troubleshooting-pg.rst b/doc/rados/troubleshooting/troubleshooting-pg.rst
index 6b4bb0efe541c..60796b4a8a416 100644
--- a/doc/rados/troubleshooting/troubleshooting-pg.rst
+++ b/doc/rados/troubleshooting/troubleshooting-pg.rst
@@ -605,7 +605,7 @@ and adding the following line to the rule::

     step set_choose_tries 100

-The relevant part of of the ``crush.txt`` file should look something
+The relevant part of the ``crush.txt`` file should look something
 like::

     rule erasurepool {
diff --git a/doc/radosgw/cloud-sync-module.rst b/doc/radosgw/cloud-sync-module.rst
index 09f6c7de51365..bea7f441bc5ae 100644
--- a/doc/radosgw/cloud-sync-module.rst
+++ b/doc/radosgw/cloud-sync-module.rst
@@ -153,7 +153,7 @@ For example: ``target_path = rgwx-${zone}-${sid}/${owner}/${bucket}``

 * ``acl_profiles`` (array)

-An array of of ``acl_profile``.
+An array of ``acl_profile``.

 * ``acl_profile`` (container)
diff --git a/doc/radosgw/config-ref.rst b/doc/radosgw/config-ref.rst
index f842e149d7c7c..a94bfc6c854a1 100644
--- a/doc/radosgw/config-ref.rst
+++ b/doc/radosgw/config-ref.rst
@@ -414,7 +414,7 @@ you would look at increasing the ``rgw lc max worker`` value from the default va
 workload with a smaller number of buckets but higher number of objects (hundreds of thousands)
 per bucket you would look at tuning ``rgw lc max wp worker`` from the default value of 3.

-:NOTE: When looking to to tune either of these specific values please validate the
+:NOTE: When looking to tune either of these specific values please validate the
        current Cluster performance and Ceph Object Gateway utilization before increasing.

 Garbage Collection Settings
diff --git a/doc/radosgw/notifications.rst b/doc/radosgw/notifications.rst
index de43a6b8c2a9b..a2876c6d40860 100644
--- a/doc/radosgw/notifications.rst
+++ b/doc/radosgw/notifications.rst
@@ -37,7 +37,7 @@ Notifications may be sent synchronously, as part of the operation that triggered
 In this mode, the operation is acked only after the notification is sent to the topic's configured endpoint, which means that the round
 trip time of the notification is added to the latency of the operation itself.

-.. note:: The original triggering operation will still be considered as sucessful even if the notification fail with an error, cannot be deliverd or times out
+.. note:: The original triggering operation will still be considered as successful even if the notification fails with an error, cannot be delivered, or times out

 Notifications may also be sent asynchronously. They will be committed into persistent storage and then asynchronously
 sent to the topic's configured endpoint. In this case, the only latency added to the original operation is of committing the
 notification to persistent storage.
diff --git a/doc/radosgw/oidc.rst b/doc/radosgw/oidc.rst
index 0f5bb3a011cfd..46593f1d8a473 100644
--- a/doc/radosgw/oidc.rst
+++ b/doc/radosgw/oidc.rst
@@ -86,7 +86,7 @@ Example::
 ListOpenIDConnectProviders
 --------------------------

-Lists infomation about all IDPs
+Lists information about all IDPs

 Request Parameters
 ~~~~~~~~~~~~~~~~~~
diff --git a/doc/radosgw/s3select.rst b/doc/radosgw/s3select.rst
index dc6415ac1a09a..faee08e265e8b 100644
--- a/doc/radosgw/s3select.rst
+++ b/doc/radosgw/s3select.rst
@@ -44,7 +44,7 @@ Basic functionalities
   | **S3select** has a definite set of functionalities that should be implemented (if we wish to stay compliant with AWS), currently only a portion of it is implemented.
   | The implemented software architecture supports basic arithmetic expressions, logical and compare expressions, including nested function calls and casting operators, that alone enables the user reasonable flexibility.
-  | review the bellow feature-table_.
+  | review the below feature-table_.

 Error Handling
diff --git a/doc/rbd/libvirt.rst b/doc/rbd/libvirt.rst
index bd58e6a330f44..e3523f8a80050 100644
--- a/doc/rbd/libvirt.rst
+++ b/doc/rbd/libvirt.rst
@@ -175,7 +175,7 @@ Configuring the VM
 When configuring the VM for use with Ceph, it is important to use ``virsh``
 where appropriate. Additionally, ``virsh`` commands often require root
 privileges (i.e., ``sudo``) and will not return appropriate results or notify
-you that that root privileges are required. For a reference of ``virsh``
+you that root privileges are required. For a reference of ``virsh``
 commands, refer to `Virsh Command Reference`_.
--
2.39.5