From: Danny Al-Gaaf Date: Sat, 8 Mar 2014 23:01:40 +0000 (+0100) Subject: doc: s/osd/OSD/ if not part of a command X-Git-Tag: v0.78~61^2~6 X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=72ee3389af785d585ef959e76c4ce042956e9024;p=ceph.git doc: s/osd/OSD/ if not part of a command First attempt to unify the usage of OSD across the rst files. Signed-off-by: Danny Al-Gaaf --- diff --git a/doc/dev/config.rst b/doc/dev/config.rst index 15990632353d9..b9b0faaf14d1d 100644 --- a/doc/dev/config.rst +++ b/doc/dev/config.rst @@ -28,7 +28,7 @@ How do we find the configuration file? Well, in order, we check: Each stanza of the configuration file describes the key-value pairs that will be in effect for a particular subset of the daemons. The "global" stanza applies to everything. The "mon", "osd", and "mds" stanzas specify settings to take effect -for all monitors, all osds, and all mds servers, respectively. A stanza of the +for all monitors, all OSDs, and all MDS servers, respectively. A stanza of the form mon.$name, osd.$name, or mds.$name gives settings for the monitor, OSD, or MDS of that name, respectively. Configuration values that appear later in the file win over earlier ones. diff --git a/doc/dev/osd-class-path.rst b/doc/dev/osd-class-path.rst index 10d8f73f856b1..6e209bc24111d 100644 --- a/doc/dev/osd-class-path.rst +++ b/doc/dev/osd-class-path.rst @@ -7,10 +7,10 @@ 2011-12-05 17:41:00.994075 7ffe8b5c3760 librbd: failed to assign a block name for image create error: error 5: Input/output error -This usually happens because your osds can't find ``cls_rbd.so``. They +This usually happens because your OSDs can't find ``cls_rbd.so``. They search for it in ``osd_class_dir``, which may not be set correctly by default (http://tracker.newdream.net/issues/1722). Most likely it's looking in ``/usr/lib/rados-classes`` instead of ``/usr/lib64/rados-classes`` - change ``osd_class_dir`` in your -``ceph.conf`` and restart the osds to fix it. +``ceph.conf`` and restart the OSDs to fix it. diff --git a/doc/dev/osd_internals/erasure_coding/pgbackend.rst b/doc/dev/osd_internals/erasure_coding/pgbackend.rst index 7cadaa5f7eca2..02adbf284c2b3 100644 --- a/doc/dev/osd_internals/erasure_coding/pgbackend.rst +++ b/doc/dev/osd_internals/erasure_coding/pgbackend.rst @@ -12,7 +12,7 @@ coding as failure recovery mechanisms. Much of the existing PG logic, particularly that for dealing with peering, will be common to each. With both schemes, a log of recent -operations will be used to direct recovery in the event that an osd is +operations will be used to direct recovery in the event that an OSD is down or disconnected for a brief period of time. Similarly, in both cases it will be necessary to scan a recovered copy of the PG in order to recover an empty OSD. The PGBackend abstraction must be @@ -35,7 +35,7 @@ and erasure coding which PGBackend must abstract over: acting set positions. 5. Selection of a pgtemp for backfill may differ between replicated and erasure coded backends. -6. The set of necessary osds from a particular interval required to +6. The set of necessary OSDs from a particular interval required to continue peering may differ between replicated and erasure coded backends. 7. The selection of the authoritative log may differ between replicated @@ -115,7 +115,7 @@ the last interval which went active in order to minimize the number of divergent objects.
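To make difference 6 concrete, here is a minimal sketch in Python with hypothetical names (it is not the PGBackend interface): a replicated PG can continue peering with any single survivor of a past interval's *acting set*, while a K+M erasure coded PG must hear from at least K of them, as the next paragraph explains::

    def can_continue_peering(surviving_osds, prior_acting_set, k):
        # k = 1 for a replicated backend; k = K for a K+M erasure coded
        # backend, since any K chunks suffice to reconstruct an object
        # and to rule out unrecoverably divergent objects.
        return len(set(surviving_osds) & set(prior_acting_set)) >= k

    can_continue_peering([3], [1, 2, 3], k=1)   # replicated: True
    can_continue_peering([3], [1, 2, 3], k=2)   # 2+1 erasure coded: False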
The difficulty is that the current code assumes that as long as it has -an info from at least 1 osd from the prior interval, it can complete +an info from at least 1 OSD from the prior interval, it can complete peering. In order to ensure that we do not end up with an unrecoverably divergent object, a K+M erasure coded PG must hear from at least K of the replicas of the last interval to serve writes. This ensures @@ -140,8 +140,8 @@ PGBackend interfaces: PGTemp ------ -Currently, an osd is able to request a temp acting set mapping in -order to allow an up-to-date osd to serve requests while a new primary +Currently, an OSD is able to request a temp acting set mapping in +order to allow an up-to-date OSD to serve requests while a new primary is backfilled (and for other reasons). An erasure coded pg needs to be able to designate a primary for these reasons without putting it in the first position of the acting set. It also needs to be able @@ -161,7 +161,7 @@ Client Reads ------------ Reads with the replicated strategy can always be satisfied -synchronously out of the primary osd. With an erasure coded strategy, +synchronously out of the primary OSD. With an erasure coded strategy, the primary will need to request data from some number of replicas in order to satisfy a read. The perform_read() interface for PGBackend therefore will be async. @@ -179,7 +179,7 @@ With the replicated strategy, all replicas of a PG are interchangeable. With erasure coding, different positions in the acting set have different pieces of the erasure coding scheme and are not interchangeable. Worse, crush might cause chunk 2 to be written -to an osd which happens already to contain an (old) copy of chunk 4. +to an OSD which happens already to contain an (old) copy of chunk 4. This means that the OSD and PG messages need to work in terms of a type like ``pair<pg_t, chunk_id>`` in order to distinguish different pg chunks on a single OSD. @@ -293,7 +293,7 @@ Backfill See `Issue #5856`_. For the most part, backfill itself should behave similarly between replicated and erasure coded pools with a few exceptions: -1. We probably want to be able to backfill multiple osds concurrently +1. We probably want to be able to backfill multiple OSDs concurrently with an erasure coded pool in order to cut down on the read overhead. 2. We probably want to avoid having to place the backfill peers in the @@ -302,7 +302,7 @@ replicated and erasure coded pools with a few exceptions: For 2, we don't really need to place the backfill peer in the acting set for replicated PGs anyway. -For 1, PGBackend::choose_backfill() should determine which osds are +For 1, PGBackend::choose_backfill() should determine which OSDs are backfilled in a particular interval. Core changes: @@ -315,7 +315,7 @@ PGBackend interfaces: -- choose_backfill(): allows the implementation to determine which osds +- choose_backfill(): allows the implementation to determine which OSDs should be backfilled in a particular interval. .. _Issue #5856: http://tracker.ceph.com/issues/5856 diff --git a/doc/dev/peering.rst b/doc/dev/peering.rst index 6f68e629e4ed5..ed40589ba195b 100644 --- a/doc/dev/peering.rst +++ b/doc/dev/peering.rst @@ -35,7 +35,7 @@ Concepts to [3,1,2] and osd.3 becomes the primary.
*current interval* or *past interval* - a sequence of osd map epochs during which the *acting set* and *up + a sequence of OSD map epochs during which the *acting set* and *up set* for a particular PG do not change *primary* @@ -95,7 +95,7 @@ Concepts *up_thru* before a primary can successfully complete the *peering* process, it must inform a monitor that it is alive through the current - osd map epoch by having the monitor set its *up_thru* in the osd + OSD map epoch by having the monitor set its *up_thru* in the OSD map. This helps peering ignore previous *acting sets* for which peering never completed after certain sequences of failures, such as the second interval below: @@ -135,7 +135,7 @@ process: of many placement groups. Before a primary successfully completes the *peering* - process, the osd map must reflect that the OSD was alive + process, the OSD map must reflect that the OSD was alive and well as of the first epoch in the *current interval*. Changes can only be made after successful *peering*. @@ -157,11 +157,11 @@ The high level process is for the current PG primary to: 2. generate a list of *past intervals* since *last epoch started*. Consider the subset of those for which *up_thru* was greater than - the first interval epoch by the last interval epoch's osd map; that is, + the first interval epoch by the last interval epoch's OSD map; that is, the subset for which *peering* could have completed before the *acting set* changed to another set of OSDs. - Successfull *peering* will require that we be able to contact at + Successful *peering* will require that we be able to contact at least one OSD from each of *past interval*'s *acting set*. 3. ask every node in that list for its *PG info*, which includes the most @@ -213,7 +213,7 @@ The high level process is for the current PG primary to: my own (*authoritative history*) ... which may involve deciding to delete divergent objects. - b) await acknowledgement that they have persisted the PG log entries. + b) await acknowledgment that they have persisted the PG log entries. 9.
at this point all OSDs in the *acting set* agree on all of the meta-data, and would (in any future *peering*) return identical accounts of all diff --git a/doc/dev/placement-group.rst b/doc/dev/placement-group.rst index 57a02f5ee73f4..6ad709c83927c 100644 --- a/doc/dev/placement-group.rst +++ b/doc/dev/placement-group.rst @@ -58,12 +58,12 @@ something like this in pseudocode:: locator = object_name obj_hash = hash(locator) pg = obj_hash % num_pg - osds_for_pg = crush(pg) # returns a list of osds + osds_for_pg = crush(pg) # returns a list of OSDs primary = osds_for_pg[0] replicas = osds_for_pg[1:] If you want to understand the crush() part in the above, imagine a -perfectly spherical datacenter in a vacuum ;) that is, if all osds +perfectly spherical datacenter in a vacuum ;) that is, if all OSDs have weight 1.0, and there is no topology to the data center (all OSDs are on the top level), and you use defaults, etc, it simplifies to consistent hashing; you can think of it as:: @@ -76,7 +76,7 @@ consistent hashing; you can think of it as:: r = hash(pg) chosen = all_osds[ r % len(all_osds) ] if chosen in result: - # osd can be picked only once + # OSD can be picked only once continue result.append(chosen) return result diff --git a/doc/dev/rbd-layering.rst b/doc/dev/rbd-layering.rst index 2bc833f667790..e6e224ce4aee5 100644 --- a/doc/dev/rbd-layering.rst +++ b/doc/dev/rbd-layering.rst @@ -277,5 +277,5 @@ A new clone method will be added, which takes the same arguments as create except size (size of the parent image is used). Instead of expanding the rbd_info struct, we will break the metadata -retrieval into several api calls. Right now, the only users of +retrieval into several API calls. Right now, the only users of rbd_stat() other than 'rbd info' only use it to retrieve image size. diff --git a/doc/rados/operations/control.rst b/doc/rados/operations/control.rst index eccb766c9b5e6..ea5d6cb477ea3 100644 --- a/doc/rados/operations/control.rst +++ b/doc/rados/operations/control.rst @@ -91,18 +91,18 @@ or delete them if they were just created. :: OSD Subsystem ============= -Query osd subsystem status. :: +Query OSD subsystem status. :: ceph osd stat -Write a copy of the most recent osd map to a file. See +Write a copy of the most recent OSD map to a file. See `osdmaptool`_. :: ceph osd getmap -o file .. _osdmaptool: ../../man/8/osdmaptool -Write a copy of the crush map from the most recent osd map to +Write a copy of the crush map from the most recent OSD map to a file. :: ceph osd getcrushmap -o file @@ -160,7 +160,7 @@ Remove the given OSD(s). :: ceph osd rm [{id}...] -Query the current max_osd parameter in the osd map. :: +Query the current max_osd parameter in the OSD map. :: ceph osd getmaxosd @@ -269,11 +269,11 @@ Sends a scrub command to OSD ``{osd-num}``. To send the command to all OSDs, use ceph osd scrub {osd-num} -Sends a repair command to osdN. To send the command to all osds, use ``*``. :: +Sends a repair command to OSD.N. To send the command to all OSDs, use ``*``. :: ceph osd repair N -Runs a simple throughput benchmark against osdN, writing ``TOTAL_BYTES`` +Runs a simple throughput benchmark against OSD.N, writing ``TOTAL_BYTES`` in write requests of ``BYTES_PER_WRITE`` each. By default, the test writes 1 GB in total in 4-MB increments.
:: diff --git a/doc/rados/operations/pg-concepts.rst b/doc/rados/operations/pg-concepts.rst index 4f5f195437b10..2e53852e389c1 100644 --- a/doc/rados/operations/pg-concepts.rst +++ b/doc/rados/operations/pg-concepts.rst @@ -84,7 +84,7 @@ of the following terms: *up_thru* Before a *Primary* can successfully complete the *Peering* process, it must inform a monitor that it is alive through the current - osd map *Epoch* by having the monitor set its *up_thru* in the osd + OSD map *Epoch* by having the monitor set its *up_thru* in the OSD map. This helps *Peering* ignore previous *Acting Sets* for which *Peering* never completed after certain sequences of failures, such as the second interval below: diff --git a/doc/rados/troubleshooting/troubleshooting-osd.rst b/doc/rados/troubleshooting/troubleshooting-osd.rst index b2ac76495059f..8fe25f40aebc5 100644 --- a/doc/rados/troubleshooting/troubleshooting-osd.rst +++ b/doc/rados/troubleshooting/troubleshooting-osd.rst @@ -412,8 +412,8 @@ on the monitor, while marking themselves ``up``. We call this scenario If something is causing OSDs to 'flap' (repeatedly getting marked ``down`` and then ``up`` again), you can force the monitors to stop the flapping with:: - ceph osd set noup # prevent osds from getting marked up - ceph osd set nodown # prevent osds from getting marked down + ceph osd set noup # prevent OSDs from getting marked up + ceph osd set nodown # prevent OSDs from getting marked down These flags are recorded in the osdmap structure:: @@ -426,9 +426,9 @@ You can clear the flags with:: ceph osd unset nodown Two other flags are supported, ``noin`` and ``noout``, which prevent -booting OSDs from being marked ``in`` (allocated data) or down -ceph-osds from eventually being marked ``out`` (regardless of what the -current value for ``mon osd down out interval`` is). +booting OSDs from being marked ``in`` (allocated data) or protect OSDs +from eventually being marked ``out`` (regardless of what the current value for +``mon osd down out interval`` is). .. note:: ``noup``, ``noout``, and ``nodown`` are temporary in the sense that once the flags are cleared, the action they were blocking @@ -454,4 +454,4 @@ current value for ``mon osd down out interval`` is). .. _unsubscribe from the ceph-users email list: mailto:ceph-users-leave@lists.ceph.com .. _Inktank: http://inktank.com .. _OS recommendations: ../../../install/os-recommendations -.. _ceph-devel: ceph-devel@vger.kernel.org \ No newline at end of file +.. _ceph-devel: ceph-devel@vger.kernel.org diff --git a/doc/rados/troubleshooting/troubleshooting-pg.rst b/doc/rados/troubleshooting/troubleshooting-pg.rst index c4f1660ccbf7d..e3fe0f87f27a2 100644 --- a/doc/rados/troubleshooting/troubleshooting-pg.rst +++ b/doc/rados/troubleshooting/troubleshooting-pg.rst @@ -203,7 +203,7 @@ data, but it is ``down``. The full range of possible states includes:: * already probed * querying - * osd is down + * OSD is down * not queried (yet) Sometimes it simply takes some time for the cluster to query possible @@ -286,4 +286,4 @@ in the `Pool, PG and CRUSH Config Reference`_ for details. .. _check: ../../operations/placement-groups#get-the-number-of-placement-groups .. _here: ../../configuration/pool-pg-config-ref .. _Placement Groups: ../../operations/placement-groups -.. _Pool, PG and CRUSH Config Reference: ../../configuration/pool-pg-config-ref \ No newline at end of file +.. _Pool, PG and CRUSH Config Reference: ../../configuration/pool-pg-config-ref
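As a rough model of the four flags discussed in the troubleshooting section above (``noup``, ``nodown``, ``noin``, ``noout``), here is a minimal Python sketch with hypothetical names; it mirrors only the documented behavior, not the actual ceph-mon logic: each flag vetoes the corresponding marking, and everything else passes through::

    def mark(osd_state, transition, flags):
        # osd_state is (up_or_down, in_or_out); transition is one of
        # 'up', 'down', 'in', 'out'; flags is the set of osdmap flags.
        veto = {'up': 'noup', 'down': 'nodown', 'in': 'noin', 'out': 'noout'}
        if veto[transition] in flags:
            return osd_state                      # flag blocks this marking
        if transition in ('up', 'down'):
            return (transition, osd_state[1])
        return (osd_state[0], transition)

    mark(('down', 'in'), 'up', {'noup'})    # booting OSD stays down: ('down', 'in')
    mark(('down', 'in'), 'out', {'noout'})  # never auto-marked out: ('down', 'in')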