From d0892831f9f8d3916a0b8d0d064ef0a0fc4e4bc1 Mon Sep 17 00:00:00 2001
From: Abhishek Lekshmanan
Date: Wed, 9 Jul 2014 11:15:08 +0530
Subject: [PATCH] doc: fix a few typos in rados docs

Signed-off-by: Abhishek Lekshmanan
---
 doc/rados/configuration/pool-pg-config-ref.rst    | 2 +-
 doc/rados/deployment/ceph-deploy-admin.rst        | 2 +-
 doc/rados/deployment/ceph-deploy-mon.rst          | 4 ++--
 doc/rados/operations/add-or-rm-mons.rst           | 2 +-
 doc/rados/operations/authentication.rst           | 2 +-
 doc/rados/operations/crush-map.rst                | 8 ++++----
 doc/rados/operations/monitoring-osd-pg.rst        | 4 ++--
 doc/rados/operations/pg-concepts.rst              | 2 +-
 doc/rados/operations/pg-states.rst                | 2 +-
 doc/rados/troubleshooting/log-and-debug.rst       | 2 +-
 doc/rados/troubleshooting/memory-profiling.rst    | 2 +-
 doc/rados/troubleshooting/troubleshooting-mon.rst | 2 +-
 12 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/doc/rados/configuration/pool-pg-config-ref.rst b/doc/rados/configuration/pool-pg-config-ref.rst
index b699f1c760e13..ebc566223b104 100644
--- a/doc/rados/configuration/pool-pg-config-ref.rst
+++ b/doc/rados/configuration/pool-pg-config-ref.rst
@@ -20,7 +20,7 @@ Ceph configuration file.
 
 ``mon max pool pg num``
 
-:Description: The maximium number of placement groups per pool.
+:Description: The maximum number of placement groups per pool.
 :Type: Integer
 :Default: ``65536``
 
diff --git a/doc/rados/deployment/ceph-deploy-admin.rst b/doc/rados/deployment/ceph-deploy-admin.rst
index 8e290abfe4036..a91f69cfdf33d 100644
--- a/doc/rados/deployment/ceph-deploy-admin.rst
+++ b/doc/rados/deployment/ceph-deploy-admin.rst
@@ -12,7 +12,7 @@ Create an Admin Host
 ====================
 
 To enable a host to execute ceph commands with administrator
-priveleges, use the ``admin`` command. ::
+privileges, use the ``admin`` command. ::
 
    ceph-deploy admin {host-name [host-name]...}
 
diff --git a/doc/rados/deployment/ceph-deploy-mon.rst b/doc/rados/deployment/ceph-deploy-mon.rst
index 8778024055b94..bda34feee062e 100644
--- a/doc/rados/deployment/ceph-deploy-mon.rst
+++ b/doc/rados/deployment/ceph-deploy-mon.rst
@@ -34,9 +34,9 @@ the tool enforces a single monitor per host. ::
    among a majority of monitors, otherwise other steps (like
    ``ceph-deploy gatherkeys``) will fail.
 
-.. note:: When adding a monitor on a host that was not in hosts intially defined
+.. note:: When adding a monitor on a host that was not in hosts initially defined
    with the ``ceph-deploy new`` command, a ``public network`` statement needs
-   to be be added to the ceph.conf file.
+   to be added to the ceph.conf file.
 
 Remove a Monitor
 ================
diff --git a/doc/rados/operations/add-or-rm-mons.rst b/doc/rados/operations/add-or-rm-mons.rst
index aa4f03ddfebf1..9c21e920cd00f 100644
--- a/doc/rados/operations/add-or-rm-mons.rst
+++ b/doc/rados/operations/add-or-rm-mons.rst
@@ -235,7 +235,7 @@ catch up with the current state of the cluster.
 If monitors discovered each other through the Ceph configuration file instead
 of through the monmap, it would introduce additional risks because the Ceph
 configuration files aren't updated and distributed automatically. Monitors
-might inadvertantly use an older ``ceph.conf`` file, fail to recognize a
+might inadvertently use an older ``ceph.conf`` file, fail to recognize a
 monitor, fall out of a quorum, or develop a situation where `Paxos`_ isn't
 able to determine the current state of the system accurately. Consequently,
 making changes to an existing monitor's IP address must be done with great care.
diff --git a/doc/rados/operations/authentication.rst b/doc/rados/operations/authentication.rst
index 681151f73603b..1a61fde5c3d2d 100644
--- a/doc/rados/operations/authentication.rst
+++ b/doc/rados/operations/authentication.rst
@@ -41,7 +41,7 @@ See the `Cephx Configuration Reference`_ for additional details.
 .. tip:: This guide is for manual configuration. If you use a deployment tool
    such as ``ceph-deploy``, it is very likely that the tool will perform at
    least the first two steps for you. Verify that your deployment tool
-   addresses these steps so that you don't overwrite your keys inadvertantly.
+   addresses these steps so that you don't overwrite your keys inadvertently.
 
 .. _client-admin-key:
 
diff --git a/doc/rados/operations/crush-map.rst b/doc/rados/operations/crush-map.rst
index 1356f26bb42e3..7328865dfe6c9 100644
--- a/doc/rados/operations/crush-map.rst
+++ b/doc/rados/operations/crush-map.rst
@@ -90,7 +90,7 @@ string for a given daemon. The location is based on, in order of preference:
    generated with the ``hostname -s`` command.
 
 In a typical deployment scenario, provisioning software (or the system
-adminstrator) can simply set the 'crush location' field in a host's
+administrator) can simply set the 'crush location' field in a host's
 ceph.conf to describe that machine's location within the datacenter or
 cluster. This will be provide location awareness to both Ceph daemons and
 clients alike.
@@ -104,7 +104,7 @@ correct.)::
 
    osd crush location hook = /path/to/script
 
-This hook is is passed several arguments (below) and should output a single line
+This hook is passed several arguments (below) and should output a single line
 to stdout with the CRUSH location description.::
 
    $ ceph-crush-location --cluster CLUSTER --id ID --type TYPE
@@ -635,7 +635,7 @@ Placing Different Pools on Different OSDS:
 Suppose you want to have most pools default to OSDs backed by large hard drives,
 but have some pools mapped to OSDs backed by fast solid-state drives (SSDs).
 It's possible to have multiple independent CRUSH heirarchies within the same
-CRUSH map. Define two hierachies with two different root nodes--one for hard
+CRUSH map. Define two hierarchies with two different root nodes--one for hard
 disks (e.g., "root platter") and one for SSDs (e.g., "root ssd") as shown
 below::
 
@@ -1044,7 +1044,7 @@ A few important points
    required to support the feature. However, the OSD peering process
    requires examining and understanding old maps. Therefore, you should
    not run old versions of the ``ceph-osd`` daemon
-   if the cluster has previosly used non-legacy CRUSH values, even if
+   if the cluster has previously used non-legacy CRUSH values, even if
    the latest version of the map has been switched back to using the
    legacy defaults.
 
diff --git a/doc/rados/operations/monitoring-osd-pg.rst b/doc/rados/operations/monitoring-osd-pg.rst
index c57abaa60baed..ccb9adea808f5 100644
--- a/doc/rados/operations/monitoring-osd-pg.rst
+++ b/doc/rados/operations/monitoring-osd-pg.rst
@@ -498,7 +498,7 @@ Ceph provides a number of settings to manage the load spike associated with
 reassigning placement groups to an OSD (especially a new OSD). By default,
 ``osd_max_backfills`` sets the maximum number of concurrent backfills to or
 from an OSD to 10. The ``osd backfill full ratio`` enables an OSD to refuse a
-backfill request if the OSD is approaching its its full ratio (85%, by default).
+backfill request if the OSD is approaching its full ratio (85%, by default).
 If an OSD refuses a backfill request, the ``osd backfill retry interval``
 enables an OSD to retry the request (after 10 seconds, by default). OSDs can
 also set ``osd backfill scan min`` and ``osd backfill scan max`` to manage scan
@@ -573,7 +573,7 @@ location, all you need is the object name and the pool name. For example::
 
    ceph osd map {poolname} {object-name}
 
-.. topic:: Excercise: Locate an Object
+.. topic:: Exercise: Locate an Object
 
    As an exercise, lets create an object. Specify an object name, a path to
    a test file containing some object data and a pool name using the
diff --git a/doc/rados/operations/pg-concepts.rst b/doc/rados/operations/pg-concepts.rst
index 2e53852e389c1..636d6bf9a1e5b 100644
--- a/doc/rados/operations/pg-concepts.rst
+++ b/doc/rados/operations/pg-concepts.rst
@@ -18,7 +18,7 @@ of the following terms:
    responsible for a particular placement group.
 
 *Up Set*
-   The ordered list of OSDs responsible for a particular placment
+   The ordered list of OSDs responsible for a particular placement
    group for a particular epoch according to CRUSH. Normally this
    is the same as the *Acting Set*, except when the *Acting Set* has
    been explicitly overridden via ``pg_temp`` in the OSD Map.
diff --git a/doc/rados/operations/pg-states.rst b/doc/rados/operations/pg-states.rst
index 39328d44031d9..06f67eb814782 100644
--- a/doc/rados/operations/pg-states.rst
+++ b/doc/rados/operations/pg-states.rst
@@ -23,7 +23,7 @@ map is ``active + clean``.
   The placement group is waiting for clients to replay operations after an OSD crashed.
 
 *Splitting*
-  Ceph is splitting the placment group into multiple placement groups. (functional?)
+  Ceph is splitting the placement group into multiple placement groups. (functional?)
 
 *Scrubbing*
   Ceph is checking the placement group for inconsistencies.
diff --git a/doc/rados/troubleshooting/log-and-debug.rst b/doc/rados/troubleshooting/log-and-debug.rst
index cfd3e6d32f8cd..faac6b35914de 100644
--- a/doc/rados/troubleshooting/log-and-debug.rst
+++ b/doc/rados/troubleshooting/log-and-debug.rst
@@ -261,7 +261,7 @@ settings:
 
 ``log file``
 
-:Desription: The location of the logging file for your cluster.
+:Description: The location of the logging file for your cluster.
 :Type: String
 :Required: No
 :Default: ``/var/log/ceph/$cluster-$name.log``
diff --git a/doc/rados/troubleshooting/memory-profiling.rst b/doc/rados/troubleshooting/memory-profiling.rst
index 9eb05398ecd3a..1d878a88b7fdc 100644
--- a/doc/rados/troubleshooting/memory-profiling.rst
+++ b/doc/rados/troubleshooting/memory-profiling.rst
@@ -19,7 +19,7 @@ Refer to `Google Heap Profiler`_ for additional details.
 
 Once you have the heap profiler installed, start your cluster and begin using
 the heap profiler. You may enable or disable the heap profiler at runtime, or
-ensure that it runs continously. For the following commandline usage, replace
+ensure that it runs continuously. For the following commandline usage, replace
 ``{daemon-type}`` with ``osd`` or ``mds``, and replace ``daemon-id`` with the
 OSD number or metadata server letter.
 
diff --git a/doc/rados/troubleshooting/troubleshooting-mon.rst b/doc/rados/troubleshooting/troubleshooting-mon.rst
index fa69ec862d81a..d0807b7b4cea3 100644
--- a/doc/rados/troubleshooting/troubleshooting-mon.rst
+++ b/doc/rados/troubleshooting/troubleshooting-mon.rst
@@ -175,7 +175,7 @@ How to troubleshoot this?
   all your monitor nodes and make sure you're not dropping/rejecting
   connections.
 
-  If this intial troubleshooting doesn't solve your problems, then it's
+  If this initial troubleshooting doesn't solve your problems, then it's
   time to go deeper.
 
   First, check the problematic monitor's ``mon_status`` via the admin
-- 
2.39.5