From 8abce157f15ce96944bc00f148337746d92586ff Mon Sep 17 00:00:00 2001 From: Josh Soref <2119212+jsoref@users.noreply.github.com> Date: Sat, 25 Jun 2022 23:09:51 -0400 Subject: [PATCH] doc: Fix many spelling errors * administrators * allocated * allowed * approximate * authenticate * availability * average * behavior * binaries * bootstrap * bootstrapping * capacity * cephadm * clients * combining * command * committed * comparison * compiled * consequences * continues * convenience * cookie * crypto * dashboard * deduplication * defaults * delivered * deployment * describe * directory * documentation * dynamic * elimination * entries * expectancy * explicit * explicitly * exporter * github * hard * healthcheck * heartbeat * heavily * http * indices * infrastructure * inherit * layout * lexically * likelihood * logarithmic * manually * metadata * minimization * minimize * object * of * operation * opportunities * overwrite * prioritized * recipe * records * requirements * restructured * running * scalability * second * select * significant * specify * subscription * supported * synonym * throttle * unpinning * upgraded * value * version * which * with Plus some line wrapping and additional edits... 
Signed-off-by: Josh Soref --- doc/_themes/ceph/layout.html | 2 +- doc/architecture.rst | 2 +- doc/ceph-volume/lvm/batch.rst | 2 +- doc/cephadm/operations.rst | 2 +- doc/cephadm/services/monitoring.rst | 2 +- doc/cephadm/services/osd.rst | 2 +- doc/cephfs/mantle.rst | 2 +- doc/dev/blkin.rst | 2 +- doc/dev/ceph_krb_auth.rst | 8 +++---- doc/dev/cephadm/developing-cephadm.rst | 8 +++---- doc/dev/cephadm/host-maintenance.rst | 2 +- doc/dev/cephx_protocol.rst | 2 +- doc/dev/continuous-integration.rst | 8 +++---- doc/dev/crimson/error-handling.rst | 2 +- doc/dev/crimson/osd.rst | 2 +- doc/dev/crimson/poseidonstore.rst | 4 ++-- doc/dev/deduplication.rst | 4 ++-- ...oyement.rst => dev_cluster_deployment.rst} | 0 doc/dev/developer_guide/basic-workflow.rst | 10 ++++----- doc/dev/developer_guide/dash-devel.rst | 10 ++++----- doc/dev/developer_guide/debugging-gdb.rst | 2 +- .../developer_guide/running-tests-locally.rst | 2 +- ...tion-testing-teuthology-debugging-tips.rst | 2 +- ...s-integration-testing-teuthology-intro.rst | 4 ++-- ...ntegration-testing-teuthology-workflow.rst | 8 +++---- doc/dev/documenting.rst | 4 ++-- doc/dev/encoding.rst | 2 +- doc/dev/health-reports.rst | 6 ++--- doc/dev/mds_internals/locking.rst | 22 +++++++++++++++++-- doc/dev/msgr2.rst | 8 +++---- doc/dev/network-protocol.rst | 2 +- .../erasure_coding/proposals.rst | 2 +- doc/dev/osd_internals/manifest.rst | 2 +- doc/dev/osd_internals/osd_throttles.rst | 4 ++-- doc/dev/osd_internals/osdmap_versions.txt | 2 +- doc/dev/osd_internals/refcount.rst | 2 +- doc/dev/perf_histograms.rst | 4 ++-- doc/dev/quick_guide.rst | 2 +- doc/dev/zoned-storage.rst | 2 +- doc/foundation.rst | 2 +- doc/install/index.rst | 2 +- doc/install/mirrors.rst | 6 ++--- doc/install/windows-install.rst | 2 +- doc/man/8/ceph-diff-sorted.rst | 2 +- doc/man/8/ceph-objectstore-tool.rst | 4 ++-- doc/man/8/ceph.rst | 2 +- doc/man/8/cephadm.rst | 2 +- doc/man/8/osdmaptool.rst | 2 +- doc/mgr/administrator.rst | 2 +- doc/mgr/dashboard.rst | 4 
++-- doc/mgr/diskprediction.rst | 2 +- doc/mgr/rgw.rst | 2 +- .../configuration/bluestore-config-ref.rst | 4 ++-- .../configuration/mon-osd-interaction.rst | 2 +- doc/rados/operations/change-mon-elections.rst | 2 +- doc/rados/operations/control.rst | 2 +- doc/rados/operations/erasure-code-lrc.rst | 2 +- doc/rados/operations/health-checks.rst | 2 +- doc/rados/operations/user-management.rst | 2 +- doc/radosgw/lua-scripting.rst | 2 +- doc/radosgw/multisite.rst | 2 +- doc/radosgw/notifications.rst | 4 ++-- doc/radosgw/pubsub-module.rst | 2 +- doc/radosgw/qat-accel.rst | 2 +- doc/radosgw/rgw-cache.rst | 2 +- doc/radosgw/s3-notification-compatibility.rst | 2 +- doc/radosgw/s3select.rst | 4 ++-- doc/rbd/iscsi-initiator-win.rst | 2 +- doc/rbd/iscsi-target-cli.rst | 2 +- doc/start/documenting-ceph.rst | 2 +- doc/start/hardware-recommendations.rst | 2 +- src/doc/caching.txt | 4 ++-- src/doc/killpoints.txt | 2 +- src/doc/rgw/multisite-reshard.md | 10 ++++++++- 74 files changed, 139 insertions(+), 113 deletions(-) rename doc/dev/{dev_cluster_deployement.rst => dev_cluster_deployment.rst} (100%) diff --git a/doc/_themes/ceph/layout.html b/doc/_themes/ceph/layout.html index 81821d1b09d..f89edfe371a 100644 --- a/doc/_themes/ceph/layout.html +++ b/doc/_themes/ceph/layout.html @@ -55,7 +55,7 @@ {%- if not embedded %} - {# XXX Sphinx 1.8.0 made this an external js-file, quick fix until we refactor the template to inherert more blocks directly from sphinx #} + {# XXX Sphinx 1.8.0 made this an external js-file, quick fix until we refactor the template to inherit more blocks directly from sphinx #} {% if sphinx_version >= "1.8.0" %} {%- for scriptfile in script_files %} diff --git a/doc/architecture.rst b/doc/architecture.rst index 1f62d76f90e..4ba5f47d08a 100644 --- a/doc/architecture.rst +++ b/doc/architecture.rst @@ -603,7 +603,7 @@ name the Ceph OSD Daemons specifically (e.g., ``osd.0``, ``osd.1``, etc.), but rather refer to them as *Primary*, *Secondary*, and so forth. 
By convention, the *Primary* is the first OSD in the *Acting Set*, and is responsible for coordinating the peering process for each placement group where it acts as -the *Primary*, and is the **ONLY** OSD that that will accept client-initiated +the *Primary*, and is the **ONLY** OSD that will accept client-initiated writes to objects for a given placement group where it acts as the *Primary*. When a series of OSDs are responsible for a placement group, that series of diff --git a/doc/ceph-volume/lvm/batch.rst b/doc/ceph-volume/lvm/batch.rst index 19a2f7ab606..a636e31ec22 100644 --- a/doc/ceph-volume/lvm/batch.rst +++ b/doc/ceph-volume/lvm/batch.rst @@ -131,7 +131,7 @@ If one requires a different sizing policy for wal, db or journal devices, Implicit sizing --------------- -Scenarios in which either devices are under-comitted or not all data devices are +Scenarios in which either devices are under-committed or not all data devices are currently ready for use (due to a broken disk for example), one can still rely on `ceph-volume` automatic sizing. Users can provide hints to `ceph-volume` as to how many data devices should have diff --git a/doc/cephadm/operations.rst b/doc/cephadm/operations.rst index 9ec8371c89a..3641ccd1c85 100644 --- a/doc/cephadm/operations.rst +++ b/doc/cephadm/operations.rst @@ -348,7 +348,7 @@ CEPHADM_CHECK_KERNEL_LSM Each host within the cluster is expected to operate within the same Linux Security Module (LSM) state. For example, if the majority of the hosts are running with SELINUX in enforcing mode, any host not running in this mode is -flagged as an anomaly and a healtcheck (WARNING) state raised. +flagged as an anomaly and a healthcheck (WARNING) state raised. 
CEPHADM_CHECK_SUBSCRIPTION ~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/cephadm/services/monitoring.rst b/doc/cephadm/services/monitoring.rst index a17beba6d1e..a4b63dc9410 100644 --- a/doc/cephadm/services/monitoring.rst +++ b/doc/cephadm/services/monitoring.rst @@ -65,7 +65,7 @@ steps below: ceph orch apply alertmanager #. Deploy Prometheus. A single Prometheus instance is sufficient, but - for high availablility (HA) you might want to deploy two: + for high availability (HA) you might want to deploy two: .. prompt:: bash # diff --git a/doc/cephadm/services/osd.rst b/doc/cephadm/services/osd.rst index dd1cda84db1..cc104a4020f 100644 --- a/doc/cephadm/services/osd.rst +++ b/doc/cephadm/services/osd.rst @@ -565,7 +565,7 @@ To include disks equal to or greater than 40G in size: Sizes don't have to be specified exclusively in Gigabytes(G). -Other units of size are supported: Megabyte(M), Gigabyte(G) and Terrabyte(T). +Other units of size are supported: Megabyte(M), Gigabyte(G) and Terabyte(T). Appending the (B) for byte is also supported: ``MB``, ``GB``, ``TB``. diff --git a/doc/cephfs/mantle.rst b/doc/cephfs/mantle.rst index 064408f712a..dc9e624617d 100644 --- a/doc/cephfs/mantle.rst +++ b/doc/cephfs/mantle.rst @@ -224,7 +224,7 @@ in the MDS Map. The balancer pulls the Lua code from RADOS synchronously. We do this with a timeout: if the asynchronous read does not come back within half the balancing tick interval the operation is cancelled and a Connection Timeout error is returned. By default, the balancing tick interval is 10 seconds, so -Mantle will use a 5 second second timeout. This design allows Mantle to +Mantle will use a 5 second timeout. This design allows Mantle to immediately return an error if anything RADOS-related goes wrong. 
We use this implementation because we do not want to do a blocking OSD read diff --git a/doc/dev/blkin.rst b/doc/dev/blkin.rst index 078f6d0569e..989cddcd7ee 100644 --- a/doc/dev/blkin.rst +++ b/doc/dev/blkin.rst @@ -55,7 +55,7 @@ Create tracing session, enable tracepoints and start trace:: lttng enable-event --userspace osd:* lttng start -Perform some ceph operatin:: +Perform some Ceph operation:: rados bench -p ec 5 write diff --git a/doc/dev/ceph_krb_auth.rst b/doc/dev/ceph_krb_auth.rst index 6a865bc6add..a3e33a5fae1 100644 --- a/doc/dev/ceph_krb_auth.rst +++ b/doc/dev/ceph_krb_auth.rst @@ -86,7 +86,7 @@ Authorization Auditing Auditing takes the results from both *authentication and authorization* and - records them into an audit log. The audit log records records all actions + records them into an audit log. The audit log records all actions taking by/during the authentication and authorization for later review by the administrators. While authentication and authorization are preventive systems (in which unauthorized access is prevented), auditing is a reactive @@ -584,8 +584,8 @@ In order to configure connections (from Ceph nodes) to the KDC: Given that the *keytab client file* is/should already be copied and available at the - Kerberos client (Ceph cluster node), we should be able to athenticate using it before - going forward: :: + Kerberos client (Ceph cluster node), we should be able to authenticate using it before + continuing: :: # kdestroy -A && kinit -k -t /etc/gss_client_mon1.ktab -f 'ceph/ceph-mon1@MYDOMAIN.COM' && klist -f Ticket cache: KEYRING:persistent:0:0 @@ -1030,7 +1030,7 @@ In order to get a new MIT KDC Server running: 6. Name Resolution - As mentioned earlier, Kerberos *relies heavly on name resolution*. Most of + As mentioned earlier, Kerberos *relies heavily on name resolution*. Most of the Kerberos issues are usually related to name resolution, since Kerberos is *very picky* on both *systems names* and *host lookups*. 
diff --git a/doc/dev/cephadm/developing-cephadm.rst b/doc/dev/cephadm/developing-cephadm.rst index fdef8da1d70..61bffb35165 100644 --- a/doc/dev/cephadm/developing-cephadm.rst +++ b/doc/dev/cephadm/developing-cephadm.rst @@ -127,7 +127,7 @@ main advantages: an almost "real" environment. - Safe and isolated. Does not depend of the things you have installed in your machine. And the vms are isolated from your environment. - - Easy to work "dev" environment. For "not compilated" software pieces, + - Easy to work "dev" environment. For "not compiled" software pieces, for example any mgr module. It is an environment that allow you to test your changes interactively. @@ -137,7 +137,7 @@ Complete documentation in `kcli installation `_ + - 1. Review `requirements `_ and install/configure whatever is needed to meet them. - 2. get the kcli image and create one alias for executing the kcli command :: @@ -282,8 +282,8 @@ of the cluster. create loopback devices capable of holding osds. .. note:: Each osd will require 5GiB of space. -After bootstraping the cluster you can go inside the seed box in which you'll be -able to run cehpadm commands:: +After bootstrapping the cluster you can go inside the seed box in which you'll be +able to run cephadm commands:: box -v cluster sh [root@8d52a7860245] cephadm --help diff --git a/doc/dev/cephadm/host-maintenance.rst b/doc/dev/cephadm/host-maintenance.rst index 2bbbd158968..ff126d2ada9 100644 --- a/doc/dev/cephadm/host-maintenance.rst +++ b/doc/dev/cephadm/host-maintenance.rst @@ -58,7 +58,7 @@ The list below shows some of these additional daemons. By using the --check option first, the Admin can choose whether to proceed. This workflow is obviously optional for the CLI user, but could be integrated into the -UI workflow to help less experienced Administators manage the cluster. +UI workflow to help less experienced administrators manage the cluster. By adopting this two-phase approach, a UI based workflow would look something like this. 
diff --git a/doc/dev/cephx_protocol.rst b/doc/dev/cephx_protocol.rst index 7b8c17876ef..4b4a3a58481 100644 --- a/doc/dev/cephx_protocol.rst +++ b/doc/dev/cephx_protocol.rst @@ -102,7 +102,7 @@ we'll assume that we are in that state. The message C sends to A in phase I is build in ``CephxClientHandler::build_request()`` (in ``auth/cephx/CephxClientHandler.cc``). This routine is used for more than one purpose. In this case, we first call ``validate_tickets()`` (from routine -``CephXTicektManager::validate_tickets()`` which lives in ``auth/cephx/CephxProtocol.h``). +``CephXTicketManager::validate_tickets()`` which lives in ``auth/cephx/CephxProtocol.h``). This code runs through the list of possible tickets to determine what we need, setting values in the ``need`` flag as necessary. Then we call ``ticket.get_handler()``. This routine (in ``CephxProtocol.h``) finds a ticket of the specified type (a ticket to perform diff --git a/doc/dev/continuous-integration.rst b/doc/dev/continuous-integration.rst index d04d92f2bec..709eb72c17e 100644 --- a/doc/dev/continuous-integration.rst +++ b/doc/dev/continuous-integration.rst @@ -211,7 +211,7 @@ Uploading Dependencies To ensure that prebuilt packages are available by the jenkins agents, we need to upload them to either ``apt-mirror.front.sepia.ceph.com`` or `chacra`_. To upload -packages to the former would require the help our our lab administrator, so if we +packages to the former would require the help of our lab administrator, so if we want to maintain the package repositories on regular basis, a better choice would be to manage them using `chacractl`_. `chacra`_ represents packages repositories using a resource hierarchy, like:: @@ -230,9 +230,9 @@ branch ref a unique id of a given version of a set packages. This id is used to reference the set packages under the ``/``. 
It is a good practice to - version the packaging recipes, like the ``debian`` directory for building deb - packages and the ``spec`` for building rpm packages, and use the sha1 of the - packaging receipe for the ``ref``. But you could also use a random string for + version the packaging recipes, like the ``debian`` directory for building DEB + packages and the ``spec`` for building RPM packages, and use the SHA1 of the + packaging recipe for the ``ref``. But you could also use a random string for ``ref``, like the tag name of the built source tree. distro diff --git a/doc/dev/crimson/error-handling.rst b/doc/dev/crimson/error-handling.rst index 43017457df1..185868e701e 100644 --- a/doc/dev/crimson/error-handling.rst +++ b/doc/dev/crimson/error-handling.rst @@ -143,7 +143,7 @@ signature:: std::cout << "oops, the optimistic path generates a new error!"; return crimson::ct_error::input_output_error::make(); }, - // we have a special handler to delegate the handling up. For conveience, + // we have a special handler to delegate the handling up. For convenience, // the same behaviour is available as single argument-taking variant of // `safe_then()`. ertr::pass_further{}); diff --git a/doc/dev/crimson/osd.rst b/doc/dev/crimson/osd.rst index b4b8ed65c78..f7f132b3f9d 100644 --- a/doc/dev/crimson/osd.rst +++ b/doc/dev/crimson/osd.rst @@ -22,7 +22,7 @@ osd .. describe:: waiting_for_healthy If an OSD daemon is able to connected to its heartbeat peers, and its own - internal hearbeat does not fail, it is considered healthy. Otherwise, it + internal heartbeat does not fail, it is considered healthy. Otherwise, it puts itself in the state of `waiting_for_healthy`, and check its own reachability and internal heartbeat periodically. 
diff --git a/doc/dev/crimson/poseidonstore.rst b/doc/dev/crimson/poseidonstore.rst index 3dc2638de22..7c54c029a79 100644 --- a/doc/dev/crimson/poseidonstore.rst +++ b/doc/dev/crimson/poseidonstore.rst @@ -83,7 +83,7 @@ Towards an object store highly optimized for CPU consumption, three design choic * **PoseidonStore uses hybrid update strategies for different data size, similar to BlueStore.** As we discussed, both in-place and out-of-place update strategies have their pros and cons. - Since CPU is only bottlenecked under small I/O workloads, we chose update-in-place for small I/Os to mininize CPU consumption + Since CPU is only bottlenecked under small I/O workloads, we chose update-in-place for small I/Os to minimize CPU consumption while choosing update-out-of-place for large I/O to avoid double write. Double write for small data may be better than host-GC overhead in terms of CPU consumption in the long run. Although it leaves GC entirely up to SSDs, @@ -230,7 +230,7 @@ Crash consistency #. Crash occurs right after writing Data blocks - Data partition --> | Data blocks | - - We don't need to care this case. Data is not alloacted yet in reality. The blocks will be reused. + - We don't need to care about this case. Data is not allocated yet. The blocks will be reused. #. Crash occurs right after WAL - Data partition --> | Data blocks | diff --git a/doc/dev/deduplication.rst b/doc/dev/deduplication.rst index b715896b6f7..69d4cff070b 100644 --- a/doc/dev/deduplication.rst +++ b/doc/dev/deduplication.rst @@ -94,8 +94,8 @@ Regarding how to use, please see ``osd_internals/manifest.rst`` Usage Patterns ============== -The different Ceph interface layers present potentially different oportunities -and costs for deduplication and tiering in general. +Each Ceph interface layer presents unique opportunities and costs for +deduplication and tiering in general. 
RadosGW ------- diff --git a/doc/dev/dev_cluster_deployement.rst b/doc/dev/dev_cluster_deployment.rst similarity index 100% rename from doc/dev/dev_cluster_deployement.rst rename to doc/dev/dev_cluster_deployment.rst diff --git a/doc/dev/developer_guide/basic-workflow.rst b/doc/dev/developer_guide/basic-workflow.rst index 60e0ecb3f86..728b58ecb85 100644 --- a/doc/dev/developer_guide/basic-workflow.rst +++ b/doc/dev/developer_guide/basic-workflow.rst @@ -43,7 +43,7 @@ no tracker issue exists, create one. There is only one case in which you do not have to create a Redmine tracker issue: the case of minor documentation changes. Simple documentation cleanup does not require a corresponding tracker issue. -Major documenatation changes do require a tracker issue. Major documentation +Major documentation changes do require a tracker issue. Major documentation changes include adding new documentation chapters or files, and making substantial changes to the structure or content of the documentation. @@ -220,7 +220,7 @@ upstream repository. The second command (git checkout -b fix_1) creates a "bugfix branch" called "fix_1" in your local working copy of the repository. The changes that you make -in order to fix the bug will be commited to this branch. +in order to fix the bug will be committed to this branch. The third command (git push -u origin fix_1) pushes the bugfix branch from your local working repository to your fork of the upstream repository. @@ -479,13 +479,13 @@ This consists of two parts: Using a browser extension to auto-fill the merge message ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -If you use a browser for merging Github PRs, the easiest way to fill in -the merge message is with the `"Ceph Github Helper Extension" +If you use a browser for merging GitHub PRs, the easiest way to fill in +the merge message is with the `"Ceph GitHub Helper Extension" `_ (available for `Chrome `_ and `Firefox `_). 
-After enabling this extension, if you go to a Github PR page, a vertical helper +After enabling this extension, if you go to a GitHub PR page, a vertical helper will be displayed at the top-right corner. If you click on the user silhouette button the merge message input will be automatically populated. diff --git a/doc/dev/developer_guide/dash-devel.rst b/doc/dev/developer_guide/dash-devel.rst index a1639fb2e9f..d7f62cf5c20 100644 --- a/doc/dev/developer_guide/dash-devel.rst +++ b/doc/dev/developer_guide/dash-devel.rst @@ -31,7 +31,7 @@ introduced in this chapter are based on a so called ``vstart`` environment. .. note:: - Every ``vstart`` environment needs Ceph `to be compiled`_ from its Github + Every ``vstart`` environment needs Ceph `to be compiled`_ from its GitHub repository, though Docker environments simplify that step by providing a shell script that contains those instructions. @@ -54,7 +54,7 @@ You can read more about vstart in `Deploying a development cluster`_. Additional information for developers can also be found in the `Developer Guide`_. -.. _Deploying a development cluster: https://docs.ceph.com/docs/master/dev/dev_cluster_deployement/ +.. _Deploying a development cluster: https://docs.ceph.com/docs/master/dev/dev_cluster_deployment/ .. _Developer Guide: https://docs.ceph.com/docs/master/dev/quick_guide/ Host-based vs Docker-based Development Environments @@ -96,7 +96,7 @@ based on vstart. Those are: `ceph-dev`_ is an exception to this rule as one of the options it provides is `build-free`_. This is accomplished through a Ceph installation using - RPM system packages. You will still be able to work with a local Github + RPM system packages. You will still be able to work with a local GitHub repository like you are used to. 
@@ -1781,7 +1781,7 @@ To specify the grafana dashboard properties such as title, uid etc we can create local dashboardSchema(title, uid, time_from, refresh, schemaVersion, tags,timezone, timepicker) -To add a graph panel we can spcify the graph schema in a local function such as - +To add a graph panel we can specify the graph schema in a local function such as - :: @@ -2340,7 +2340,7 @@ If that checker failed, it means that the current Pull Request is modifying the Ceph API and therefore: #. The versioned OpenAPI specification should be updated explicitly: ``tox -e openapi-fix``. -#. The team @ceph/api will be requested for reviews (this is automated via Github CODEOWNERS), in order to asses the impact of changes. +#. The team @ceph/api will be requested for reviews (this is automated via GitHub CODEOWNERS), in order to assess the impact of changes. Additionally, Sphinx documentation can be generated from the OpenAPI specification with ``tox -e openapi-doc``. diff --git a/doc/dev/developer_guide/debugging-gdb.rst b/doc/dev/developer_guide/debugging-gdb.rst index bbe62f8c505..15314443161 100644 --- a/doc/dev/developer_guide/debugging-gdb.rst +++ b/doc/dev/developer_guide/debugging-gdb.rst @@ -24,7 +24,7 @@ Attaching gdb to the process:: .. note:: It is recommended to compile without any optimizations (``-O0`` gcc flag) - in order to avoid elimintaion of intermediate values. + in order to avoid elimination of intermediate values. 
Stopping for breakpoints while debugging may cause timeouts, so the following configuration options are suggested:: diff --git a/doc/dev/developer_guide/running-tests-locally.rst b/doc/dev/developer_guide/running-tests-locally.rst index 304e5773a81..8effd97e408 100644 --- a/doc/dev/developer_guide/running-tests-locally.rst +++ b/doc/dev/developer_guide/running-tests-locally.rst @@ -93,7 +93,7 @@ vstart_runner.py can take the following options - --interactive drops a Python shell when a test fails --log-ps-output logs ps output; might be useful while debugging --teardown tears Ceph cluster down after test(s) has finished - runnng + running --kclient use the kernel cephfs client instead of FUSE --brxnet= specify a new net/mask for the mount clients' network namespace container (Default: 192.168.0.0/16) diff --git a/doc/dev/developer_guide/testing_integration_tests/tests-integration-testing-teuthology-debugging-tips.rst b/doc/dev/developer_guide/testing_integration_tests/tests-integration-testing-teuthology-debugging-tips.rst index ac7c75e037c..a959240ba69 100644 --- a/doc/dev/developer_guide/testing_integration_tests/tests-integration-testing-teuthology-debugging-tips.rst +++ b/doc/dev/developer_guide/testing_integration_tests/tests-integration-testing-teuthology-debugging-tips.rst @@ -9,7 +9,7 @@ Test Run`_. 
Viewing Test Results -------------------- -When a teuthology run has been completed successfully, use `pulpito`_ dasboard +When a teuthology run has been completed successfully, use the `pulpito`_ dashboard to view the results:: http://pulpito.front.sepia.ceph.com/// diff --git a/doc/dev/developer_guide/testing_integration_tests/tests-integration-testing-teuthology-intro.rst b/doc/dev/developer_guide/testing_integration_tests/tests-integration-testing-teuthology-intro.rst index ac8a66f2b01..7dec58bd6df 100644 --- a/doc/dev/developer_guide/testing_integration_tests/tests-integration-testing-teuthology-intro.rst +++ b/doc/dev/developer_guide/testing_integration_tests/tests-integration-testing-teuthology-intro.rst @@ -144,7 +144,7 @@ teuthology-describe documentation and better understanding of integration tests. Tests can be documented by embedding ``meta:`` annotations in the yaml files -used to define the tests. The results can be seen in the `teuthology-desribe +used to define the tests. The results can be seen in the `teuthology-describe usecases`_ Since this is a new feature, many yaml files have yet to be annotated. @@ -581,5 +581,5 @@ test will be first. .. _Sepia Lab: https://wiki.sepia.ceph.com/doku.php .. _teuthology repository: https://github.com/ceph/teuthology .. _teuthology framework: https://github.com/ceph/teuthology -.. _teuthology-desribe usecases: https://gist.github.com/jdurgin/09711d5923b583f60afc +.. _teuthology-describe usecases: https://gist.github.com/jdurgin/09711d5923b583f60afc .. 
_ceph-deploy man page: ../../../../man/8/ceph-deploy diff --git a/doc/dev/developer_guide/testing_integration_tests/tests-integration-testing-teuthology-workflow.rst b/doc/dev/developer_guide/testing_integration_tests/tests-integration-testing-teuthology-workflow.rst index 5820ca4f4f2..64b006c57fb 100644 --- a/doc/dev/developer_guide/testing_integration_tests/tests-integration-testing-teuthology-workflow.rst +++ b/doc/dev/developer_guide/testing_integration_tests/tests-integration-testing-teuthology-workflow.rst @@ -16,7 +16,7 @@ Ceph binaries must be built for your branch before you can use teuthology to run #. To ensure that the build process has been initiated, confirm that the branch name has appeared in the list of "Latest Builds Available" at `Shaman`_. - Soon after you start the build process, the testing infrastructrure adds + Soon after you start the build process, the testing infrastructure adds other, similarly-named builds to the list of "Latest Builds Available". The names of these new builds will contain the names of various Linux distributions of Linux and will be used to test your build against those @@ -110,7 +110,7 @@ run), and ``--subset`` (used to reduce the number of tests that are triggered). .. _teuthology_testing_qa_changes: -Testing QA changes (without re-building binaires) +Testing QA changes (without re-building binaries) ************************************************* If you are making changes only in the ``qa/`` directory, you do not have to @@ -273,8 +273,8 @@ a branch named ``feature-x`` should be named ``wip-$yourname-feature-x``, where ``$yourname`` is replaced with your name. Identifying your branch with your name makes your branch easily findable on Shaman and Pulpito. -If you are using one of the stable branches (for example, nautilis, mimic, -etc.), include the name of that stable branch in your ceph-ci branch name. 
+If you are using one of the stable branches (`quincy`, `pacific`, etc.), include +the name of that stable branch in your ceph-ci branch name. For example, the ``feature-x`` PR branch should be named ``wip-feature-x-nautilus``. *This is not just a convention. This ensures that your branch is built in the correct environment.* diff --git a/doc/dev/documenting.rst b/doc/dev/documenting.rst index e6c05ee2a44..f84342bc715 100644 --- a/doc/dev/documenting.rst +++ b/doc/dev/documenting.rst @@ -42,7 +42,7 @@ The general format for function documentation is * * Detailed description when necessary * - * preconditons, postconditions, warnings, bugs or other notes + * preconditions, postconditions, warnings, bugs or other notes * * parameter reference * return value (if non-void) @@ -128,7 +128,7 @@ Inkscape -------- You can use Inkscape to generate scalable vector graphics. -https://inkscape.org/en/ for restructedText documents. +https://inkscape.org/en/ for reStructuredText documents. If you generate diagrams with Inkscape, you should commit both the Scalable Vector Graphics (SVG) file and export a diff --git a/doc/dev/encoding.rst b/doc/dev/encoding.rst index 81b52500e63..013046f330a 100644 --- a/doc/dev/encoding.rst +++ b/doc/dev/encoding.rst @@ -47,7 +47,7 @@ do not require incrementing compat_version. The ``DECODE_START`` macro takes an argument specifying the most recent message version that the code can handle. This is compared with the compat_version encoded in the message, and if the message is too new then -an exception will be thrown. Because changes to compat_verison are rare, +an exception will be thrown. Because changes to compat_version are rare, this isn't usually something to worry about when adding fields. 
In practice, changes to encoding usually involve simply adding the desired fields diff --git a/doc/dev/health-reports.rst b/doc/dev/health-reports.rst index ff45429b22a..4a6a7d67182 100644 --- a/doc/dev/health-reports.rst +++ b/doc/dev/health-reports.rst @@ -89,15 +89,15 @@ the latest ``FSMap`` and the health metrics reported by MDS daemons. But it's noteworthy that ``MgrStatMonitor`` does *not* prepare the reports by itself, it just stores whatever the health reports received from mgr! -ceph-mgr -- A Delegate Aggegator --------------------------------- +ceph-mgr -- A Delegate Aggregator +--------------------------------- In Ceph, mgr is created to share the burden of monitor, which is used to establish the consensus of information which is critical to keep the cluster function. Apparently, osdmap, mdsmap and monmap fall into this category. But what about the aggregated statistics of the cluster? They are crucial for the administrator to understand the status of the cluster, but they might not be that important to keep -the cluster running. To address this scability issue, we offloaded the work of +the cluster running. To address this scalability issue, we offloaded the work of collecting and aggregating the metrics to mgr. Now, mgr is responsible for receiving and processing the ``MPGStats`` messages from diff --git a/doc/dev/mds_internals/locking.rst b/doc/dev/mds_internals/locking.rst index e3009b3111e..cfd934f3f31 100644 --- a/doc/dev/mds_internals/locking.rst +++ b/doc/dev/mds_internals/locking.rst @@ -136,13 +136,31 @@ Note that the MDS adds a bunch of other locks for this inode, but for now let's The state transition entries are of type `sm_state_t` from `src/mds/locks.h` source. TODO: Describe these in detail. -We reach a point where the MDS fills in `LockOpVec` and invokes `Locker::acquire_locks()`, which according to the lock type and the mode (`rdlock`, etc..) tries to acquire that particular lock. 
Starting state for the lock is `LOCK_SYNC` (this may not always be the case, but consider this for simplicity). To acquire `xlock` for `iauth`, the MDS refers to the state transition table. If the current state allows the lock to be acquired, the MDS grabs the lock (which is just incrementing a counter). The current state (`LOCK_SYNC`) does not allow `xlock` to be acquired (column `x` in `LOCK_SYNC` state), thereby requiring a lock state switch. At this point, the MDS switches to an intermediate state `LOCK_SYNC_LOCK` - signifying transitioning from `LOCK_SYNC` to `LOCK_LOCK` state. The intermediate state has a couple of purposes - a. The intermediate state defines what caps are allowed to be held by cilents thereby revoking caps that are not allowed be held in this state, and b. preventing new locks to be acquired. At this point the MDS sends cap revoke messages to clients:: +We reach a point where the MDS fills in `LockOpVec` and invokes +`Locker::acquire_locks()`, which according to the lock type and the mode +(`rdlock`, etc..) tries to acquire that particular lock. Starting state for +the lock is `LOCK_SYNC` (this may not always be the case, but consider this +for simplicity). To acquire `xlock` for `iauth`, the MDS refers to the state +transition table. If the current state allows the lock to be acquired, the MDS +grabs the lock (which is just incrementing a counter). The current state +(`LOCK_SYNC`) does not allow `xlock` to be acquired (column `x` in `LOCK_SYNC` +state), thereby requiring a lock state switch. At this point, the MDS switches +to an intermediate state `LOCK_SYNC_LOCK` - signifying transitioning from +`LOCK_SYNC` to `LOCK_LOCK` state. The intermediate state has a couple of +purposes - a. The intermediate state defines what caps are allowed to be held +by clients, thereby revoking caps that are not allowed to be held in this state, +and b. preventing new locks from being acquired. 
At this point the MDS sends cap +revoke messages to clients:: 2021-11-22T07:18:20.040-0500 7fa66a3bd700 7 mds.0.locker: issue_caps allowed=pLsXsFscrl, xlocker allowed=pLsXsFscrl on [inode 0x10000000003 [2,head] /testfile auth v142 ap=1 DIRTYPARENT s=0 n(v0 rc2021-11-22T06:21:45.015746-0500 1=1+0) (iauth sync->lock) (iversion lock) caps={94134=pAsLsXsFscr/-@1,94138=pLsXsFscr/-@1} | request=1 lock=1 caps=1 dirtyparent=1 dirty=1 authpin=1 0x5633ffdac000] 2021-11-22T07:18:20.040-0500 7fa66a3bd700 20 mds.0.locker: client.94134 pending pAsLsXsFscr allowed pLsXsFscrl wanted - 2021-11-22T07:18:20.040-0500 7fa66a3bd700 7 mds.0.locker: sending MClientCaps to client.94134 seq 2 new pending pLsXsFscr was pAsLsXsFscr -As seen above, `client.94134` has `As` caps, which is getting revoked by the MDS. After the caps have been revoked, the MDS can continue to transition to further states: `LOCK_SYNC_LOCK` to `LOCK_LOCK`. Since the goal is to acquire `xlock`, the state transition conitnues (as per the lock transition state machine):: +As seen above, `client.94134` has `As` caps, which are getting revoked by the +MDS. After the caps have been revoked, the MDS can continue to transition to +further states: `LOCK_SYNC_LOCK` to `LOCK_LOCK`. Since the goal is to acquire +`xlock`, the state transition continues (as per the lock transition state +machine):: LOCK_LOCK -> LOCK_LOCK_XLOCK LOCK_LOCK_XLOCK -> LOCK_PREXLOCK diff --git a/doc/dev/msgr2.rst b/doc/dev/msgr2.rst index 05b6d201ed6..ecd6c8258cd 100644 --- a/doc/dev/msgr2.rst +++ b/doc/dev/msgr2.rst @@ -555,8 +555,8 @@ Once the handshake is completed, both peers have setup their compression handler bool is_compress uint32_t method - - the server determines whether compression is possible according to its' configuration. - - if it is possible, it will pick its' most prioritizied compression method that is also supprorted by the client. + - the server determines whether compression is possible according to the configuration. 
+ - if it is possible, it will pick the most prioritized compression method that is also supported by the client. - if none exists, it will determine that session between the peers will be handled without compression. .. ditaa:: @@ -614,7 +614,7 @@ an established session. - the target addr is who the client is trying to connect *to*, so that the server side can close the connection if the client is talking to the wrong daemon. - - type.gid (entity_name_t) is set here, by combinging the type shared in the hello + - type.gid (entity_name_t) is set here, by combining the type shared in the hello frame with the gid here. this means we don't need it in the header of every message. it also means that we can't send messages "from" other entity_name_t's. the current @@ -622,7 +622,7 @@ an established session. shouldn't break any existing functionality. implementation will likely want to mask this against what the authenticated credential allows. - - cookie is the client coookie used to identify a session, and can be used + - cookie is the client cookie used to identify a session, and can be used to reconnect to an existing session. - we've dropped the 'protocol_version' field from msgr1 diff --git a/doc/dev/network-protocol.rst b/doc/dev/network-protocol.rst index f6fb1738d69..d766a321121 100644 --- a/doc/dev/network-protocol.rst +++ b/doc/dev/network-protocol.rst @@ -141,7 +141,7 @@ CEPH_MSGR_TAG_MSG (0x07) u32le front_crc; // Checksums of the various sections. u32le middle_crc; // u32le data_crc; // - u64le sig; // Crypographic signature. + u64le sig; // Cryptographic signature. 
u8 flags; } diff --git a/doc/dev/osd_internals/erasure_coding/proposals.rst b/doc/dev/osd_internals/erasure_coding/proposals.rst index d048ce8a13d..8a30727b3a0 100644 --- a/doc/dev/osd_internals/erasure_coding/proposals.rst +++ b/doc/dev/osd_internals/erasure_coding/proposals.rst @@ -124,7 +124,7 @@ Partial Application Peering/Recovery modifications -------------------------------------------------- Some writes will be small enough to not require updating all of the -shards holding data blocks. For write amplification minization +shards holding data blocks. For write amplification minimization reasons, it would be best to avoid writing to those shards at all, and delay even sending the log entries until the next write which actually hits that shard. diff --git a/doc/dev/osd_internals/manifest.rst b/doc/dev/osd_internals/manifest.rst index 6689bf239c5..f998a04f2e7 100644 --- a/doc/dev/osd_internals/manifest.rst +++ b/doc/dev/osd_internals/manifest.rst @@ -229,7 +229,7 @@ refcounts on backing objects (or risk a reference to a dead object) Thus, we introduce a simple convention: consecutive clones which share a reference at the same offset share the same refcount. This means that a write that invokes ``make_writeable`` may decrease refcounts, -but not increase them. This has some conquences for removing clones. +but not increase them. This has some consequences for removing clones. Consider the following sequence :: write foo [0, 1024) diff --git a/doc/dev/osd_internals/osd_throttles.rst b/doc/dev/osd_internals/osd_throttles.rst index 0703b797e7d..1d6fb8f73f9 100644 --- a/doc/dev/osd_internals/osd_throttles.rst +++ b/doc/dev/osd_internals/osd_throttles.rst @@ -24,7 +24,7 @@ when any of these hits the soft limit and will block in throttle() while any has exceeded the hard limit. Tighter soft limits will cause writeback to happen more quickly, -but may cause the OSD to miss oportunities for write coalescing. 
+but may cause the OSD to miss opportunities for write coalescing. Tighter hard limits may cause a reduction in latency variance by reducing time spent flushing the journal, but may reduce writeback parallelism. @@ -68,7 +68,7 @@ changed. Setting these properly should help to smooth out op latencies by mostly avoiding the hard limit. -See FileStore::throttle_ops and FileSTore::thottle_bytes. +See FileStore::throttle_ops and FileStore::throttle_bytes. journal usage throttle ---------------------- diff --git a/doc/dev/osd_internals/osdmap_versions.txt b/doc/dev/osd_internals/osdmap_versions.txt index 12fab5ae342..2bf247dcfe5 100644 --- a/doc/dev/osd_internals/osdmap_versions.txt +++ b/doc/dev/osd_internals/osdmap_versions.txt @@ -90,7 +90,7 @@ map / 6 / - / 10 / 1fee4ccd5277b52292e255daf458330eef5f0255 / 0.64 / 2013 inc / 6 / - / 10 / * encode front hb addr * osdmap::incremental ext version bumped to 10 - * osdmap's ext versiont bumped to 10 + * osdmap's ext version bumped to 10 * because we're adding osd_addrs->hb_front_addr to map // below we have the change to ENCODE_START() for osdmap and others diff --git a/doc/dev/osd_internals/refcount.rst b/doc/dev/osd_internals/refcount.rst index 4d75ae01949..3324b63e5a0 100644 --- a/doc/dev/osd_internals/refcount.rst +++ b/doc/dev/osd_internals/refcount.rst @@ -6,7 +6,7 @@ Refcount Introduction ============ -Dedupliation, as described in ../deduplication.rst, needs a way to +Deduplication, as described in ../deduplication.rst, needs a way to maintain a target pool of deduplicated chunks with atomic ref refcounting.
To that end, there exists an osd object class refcount responsible for using the object class machinery to diff --git a/doc/dev/perf_histograms.rst b/doc/dev/perf_histograms.rst index c277ac209dd..429c0040076 100644 --- a/doc/dev/perf_histograms.rst +++ b/doc/dev/perf_histograms.rst @@ -669,9 +669,9 @@ The actual dump is similar to the schema, except that there are actual value gro } }, -This represents the 2d histogram, consisting of 9 history entrires and 32 value groups per each history entry. +This represents the 2D histogram, consisting of 9 history entries and 32 value groups per history entry. "Ranges" element denote value bounds for each of value groups. "Buckets" denote amount of value groups ("buckets"), -"Min" is a minimum accepted valaue, "quant_size" is quantization unit and "scale_type" is either "log2" (logarhitmic +"Min" is a minimum accepted value, "quant_size" is quantization unit and "scale_type" is either "log2" (logarithmic scale) or "linear" (linear scale). You can use histogram_dump.py tool (see src/tools/histogram_dump.py) for quick visualisation of existing histogram data. diff --git a/doc/dev/quick_guide.rst b/doc/dev/quick_guide.rst index e78a023d377..bccca0239a0 100644 --- a/doc/dev/quick_guide.rst +++ b/doc/dev/quick_guide.rst @@ -45,7 +45,7 @@ Running a development deployment -------------------------------- Ceph contains a script called ``vstart.sh`` (see also -:doc:`/dev/dev_cluster_deployement`) which allows developers to quickly test +:doc:`/dev/dev_cluster_deployment`) which allows developers to quickly test their code using a simple deployment on your development system. Once the build finishes successfully, start the ceph deployment using the following command: diff --git a/doc/dev/zoned-storage.rst b/doc/dev/zoned-storage.rst index 27f592b4c4e..cea741d6bfe 100644 --- a/doc/dev/zoned-storage.rst +++ b/doc/dev/zoned-storage.rst @@ -13,7 +13,7 @@ NVMe Solid State Disks with the upcoming NVMe Zoned Namespaces (ZNS) standard.
This project aims to enable Ceph to work on zoned storage drives and at the same time explore research problems related to adopting this new interface. The -first target is to enable non-ovewrite workloads (e.g. RGW) on host-managed SMR +first target is to enable non-overwrite workloads (e.g. RGW) on host-managed SMR (HM-SMR) drives and explore cleaning (garbage collection) policies. HM-SMR drives are high capacity hard drives with the ZBC/ZAC interface. The longer term goal is to support ZNS SSDs, as they become available, as well as overwrite diff --git a/doc/foundation.rst b/doc/foundation.rst index 9d34ab66934..b6546777765 100644 --- a/doc/foundation.rst +++ b/doc/foundation.rst @@ -64,7 +64,7 @@ Associate * `grnet `_ * `Monash University `_ * `NRF SARAO `_ -* `Science & Technology Facilities Councel (STFC) `_ +* `Science & Technology Facilities Council (STFC) `_ * `University of Michigan `_ * `SWITCH `_ diff --git a/doc/install/index.rst b/doc/install/index.rst index 8f76477d123..2aaf829ea9e 100644 --- a/doc/install/index.rst +++ b/doc/install/index.rst @@ -40,7 +40,7 @@ Ceph clusters using Ansible. * ceph-ansible is widely deployed. * ceph-ansible is not integrated with the new orchestrator APIs, - introduced in Nautlius and Octopus, which means that newer + introduced in Nautilus and Octopus, which means that newer management features and dashboard integration are not available. diff --git a/doc/install/mirrors.rst b/doc/install/mirrors.rst index 442b5203490..314947af72b 100644 --- a/doc/install/mirrors.rst +++ b/doc/install/mirrors.rst @@ -43,7 +43,7 @@ Mirroring ========= You can easily mirror Ceph yourself using a Bash script and rsync. An easy-to-use -script can be found at `Github`_. +script can be found at `GitHub`_. When mirroring Ceph, please keep the following guidelines in mind: @@ -59,9 +59,9 @@ If you want to provide a public mirror for other users of Ceph you can opt to become a official mirror. 
To make sure all mirrors meet the same standards some requirements have been -set for all mirrors. These can be found on `Github`_. +set for all mirrors. These can be found on `GitHub`_. If you want to apply for an official mirror, please contact the ceph-users mailinglist. -.. _Github: https://github.com/ceph/ceph/tree/master/mirroring +.. _GitHub: https://github.com/ceph/ceph/tree/master/mirroring diff --git a/doc/install/windows-install.rst b/doc/install/windows-install.rst index 3c7a011c4a0..0cf1550beb9 100644 --- a/doc/install/windows-install.rst +++ b/doc/install/windows-install.rst @@ -32,7 +32,7 @@ Dokany In order to mount Ceph filesystems, ``ceph-dokan`` requires Dokany to be installed. You may fetch the installer as well as the source code from the -Dokany Github repository: https://github.com/dokan-dev/dokany/releases +Dokany GitHub repository: https://github.com/dokan-dev/dokany/releases The minimum supported Dokany version is 1.3.1. At the time of the writing, Dokany 2.0 is in Beta stage and is unsupported. diff --git a/doc/man/8/ceph-diff-sorted.rst b/doc/man/8/ceph-diff-sorted.rst index f5fe22ed868..61e0772e170 100644 --- a/doc/man/8/ceph-diff-sorted.rst +++ b/doc/man/8/ceph-diff-sorted.rst @@ -27,7 +27,7 @@ containing billions of lines) that the standard `diff` tool cannot handle efficiently. Knowing that the lines are sorted allows this to be done efficiently with minimal memory overhead. -The sorting of each file needs to be done lexcially. Most POSIX +The sorting of each file needs to be done lexically. Most POSIX systems use the *LANG* environment variable to determine the `sort` tool's sorting order. 
To sort lexically we would need something such as: diff --git a/doc/man/8/ceph-objectstore-tool.rst b/doc/man/8/ceph-objectstore-tool.rst index 19acc591320..26ea51542a0 100644 --- a/doc/man/8/ceph-objectstore-tool.rst +++ b/doc/man/8/ceph-objectstore-tool.rst @@ -64,7 +64,7 @@ Possible -op commands:: * set-inc-osdmap * mark-complete * reset-last-complete -* apply-layour-settings +* apply-layout-settings * update-mon-db * dump-export * trim-pg-log @@ -485,4 +485,4 @@ Error Codes Availability ============ -**ceph-objectstore-tool** is part of Ceph, a massively scalable, open-source, distributed storage system. **ceph-objectstore-tool** is provided by the package `ceph-osd`. Refer to the Ceph documentation at htpp://ceph.com/docs for more information. +**ceph-objectstore-tool** is part of Ceph, a massively scalable, open-source, distributed storage system. **ceph-objectstore-tool** is provided by the package `ceph-osd`. Refer to the Ceph documentation at http://ceph.com/docs for more information. diff --git a/doc/man/8/ceph.rst b/doc/man/8/ceph.rst index 3bc66f19d6a..881c0d3077b 100644 --- a/doc/man/8/ceph.rst +++ b/doc/man/8/ceph.rst @@ -232,7 +232,7 @@ Usage:: ceph config rm