``type``
--------
-Describes if the device is a an OSD or Journal, with the ability to expand to
+Describes if the device is an OSD or Journal, with the ability to expand to
other types when supported (for example a lockbox)
Example::
======================
info.last_epoch_started records an activation epoch e for interval i
-such that all writes commited in i or earlier are reflected in the
+such that all writes committed in i or earlier are reflected in the
local info/log and no writes after i are reflected in the local
info/log. Since no committed write is ever divergent, even if we
get an authoritative log/info with an older info.last_epoch_started,
we can leave our info.last_epoch_started alone since no writes could
-have commited in any intervening interval (See PG::proc_master_log).
+have committed in any intervening interval (See PG::proc_master_log).
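As a purely illustrative aside (using hypothetical, simplified names rather than Ceph's real pg_info_t or PG::proc_master_log), the rule above boils down to never lowering the local value when merging an authoritative info::

  // Hypothetical, simplified sketch of the rule above; these are not
  // Ceph's real types.  Because no committed write can be divergent,
  // an authoritative info with an *older* last_epoch_started never
  // forces us to lower our own value.
  #include <algorithm>
  #include <cstdint>
  #include <iostream>

  using epoch_t = uint32_t;

  struct pg_info_sketch {
    epoch_t last_epoch_started = 0;  // activation epoch of some interval
  };

  void merge_authoritative(pg_info_sketch &local, const pg_info_sketch &auth) {
    // Keep whichever activation epoch is newer; an older authoritative
    // value cannot hide writes committed in an intervening interval.
    local.last_epoch_started =
        std::max(local.last_epoch_started, auth.last_epoch_started);
  }

  int main() {
    pg_info_sketch local, auth;
    local.last_epoch_started = 20;
    auth.last_epoch_started = 15;      // authoritative but older
    merge_authoritative(local, auth);
    std::cout << local.last_epoch_started << "\n";  // prints 20
  }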
info.history.last_epoch_started records a lower bound on the most
recent interval in which the pg as a whole went active and accepted
#. The high level locking rules for mixing reads and writes without exposing
uncommitted state (which might be rolled back or forgotten later)
#. The process, metadata, and protocol needed to determine the set of osds
- which partcipated in the most recent interval in which we accepted writes
+ which participated in the most recent interval in which we accepted writes
#. etc.
Instead, we choose a few abstractions (and a few kludges) to paper over the differences:
For a replicated pool, an object is readable iff it is present on
the primary (at the right version). For an ec pool, we need at least
M shards present to do a read, and we need it on the primary. For
-this reason, PGBackend needs to include some interfaces for determing
+this reason, PGBackend needs to include some interfaces for determining
when recovery is required to serve a read vs a write. This also
changes the rules for when peering has enough logs to prove that it
synchronously out of the primary OSD. With an erasure coded strategy,
the primary will need to request data from some number of replicas in
order to satisfy a read. PGBackend will therefore need to provide
-seperate objects_read_sync and objects_read_async interfaces where
+separate objects_read_sync and objects_read_async interfaces where
the former won't be implemented by the ECBackend.
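To make the split concrete, here is a hedged sketch of what such an interface pair could look like; the class and type names below are invented for illustration and are not Ceph's actual PGBackend declarations::

  // Illustrative only: a hypothetical backend interface with the
  // sync/async read split described above.  Not Ceph's actual PGBackend.
  #include <functional>
  #include <map>
  #include <stdexcept>
  #include <string>

  struct hobject_sketch { std::string name; };
  using buffer_sketch = std::string;  // stand-in for a real buffer type

  class PGBackendSketch {
   public:
    virtual ~PGBackendSketch() = default;

    // Synchronous read: only possible when the primary holds the whole
    // object locally; an erasure coded backend would not implement this.
    virtual buffer_sketch objects_read_sync(const hobject_sketch &obj) = 0;

    // Asynchronous read: the completion fires once enough replicas or
    // shards have responded.
    virtual void objects_read_async(
        const hobject_sketch &obj,
        std::function<void(buffer_sketch)> on_complete) = 0;
  };

  class ReplicatedBackendSketch : public PGBackendSketch {
    std::map<std::string, buffer_sketch> local;  // primary's local copies
   public:
    buffer_sketch objects_read_sync(const hobject_sketch &obj) override {
      return local.at(obj.name);              // served out of the primary
    }
    void objects_read_async(
        const hobject_sketch &obj,
        std::function<void(buffer_sketch)> on_complete) override {
      on_complete(objects_read_sync(obj));    // trivially completes inline
    }
  };

  class ECBackendSketch : public PGBackendSketch {
   public:
    buffer_sketch objects_read_sync(const hobject_sketch &) override {
      // A lone primary shard cannot satisfy a read by itself.
      throw std::logic_error("sync reads unsupported by the EC backend");
    }
    void objects_read_async(
        const hobject_sketch &obj,
        std::function<void(buffer_sketch)> on_complete) override {
      // A real implementation would gather M shards and decode; here we
      // just hand back a placeholder to show the call shape.
      on_complete("decoded:" + obj.name);
    }
  };

  int main() {
    ECBackendSketch ec;
    ec.objects_read_async({"foo"}, [](buffer_sketch) {
      // would run once enough shards had been read and decoded
    });
  }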
PGBackend interfaces:
In either case, our general strategy for removing the pg is to
atomically set the metadata objects (pg->log_oid, pg->biginfo_oid) to
-backfill and asynronously remove the pg collections. We do not do
+backfill and asynchronously remove the pg collections. We do not do
this inline because scanning the collections to remove the objects is
an expensive operation.
2. CLEARING_DIR: the PG's contents are being removed synchronously
- 3. DELETING_DIR: the PG's directories and metadata being queued for removal
+ 3. DELETING_DIR: the PG's directories and metadata are being queued for removal
4. DELETED_DIR: the final removal transaction has been queued
- 5. CANCELED: the deletion has been canceled
+ 5. CANCELED: the deletion has been cancelled
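The state machine described above and in the following paragraphs can be pictured with a small, hypothetical sketch (the first state in the list is not shown in this excerpt, so the sketch starts at CLEARING_DIR; this is not the actual Ceph DeletingState class)::

  // Hypothetical sketch of the deletion state machine; not Ceph's actual
  // DeletingState implementation.  Transition helpers return false once
  // the deletion has been cancelled, mirroring the contract described
  // in the surrounding text.
  #include <mutex>

  class DeletingStateSketch {
   public:
    enum State { CLEARING_DIR, DELETING_DIR, DELETED_DIR, CANCELED };

   private:
    std::mutex lock;
    State state = CLEARING_DIR;

   public:
    // Move on to queueing the directories/metadata for removal.
    // Returns false if the deletion was cancelled first.
    bool start_deleting_dir() {
      std::lock_guard<std::mutex> l(lock);
      if (state == CANCELED)
        return false;
      state = DELETING_DIR;
      return true;
    }

    // The final removal transaction has been queued; after this point
    // cancellation is no longer possible.
    bool mark_deleted_dir() {
      std::lock_guard<std::mutex> l(lock);
      if (state == CANCELED)
        return false;
      state = DELETED_DIR;
      return true;
    }

    // Cancellation succeeds only while the contents are still being
    // cleared.  In the real design, a failed attempt would additionally
    // block until the final removal transaction is queued; this sketch
    // simply reports failure.
    bool try_stop_deletion() {
      std::lock_guard<std::mutex> l(lock);
      if (state == CLEARING_DIR) {
        state = CANCELED;
        return true;
      }
      return false;
    }
  };

  int main() {
    DeletingStateSketch ds;
    bool cancelled = ds.try_stop_deletion();   // true: still clearing
    (void)cancelled;
  }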
-In 1 and 2, the deletion can be canceled. Each state transition
+In 1 and 2, the deletion can be cancelled. Each state transition
method (and check_canceled) returns false if deletion has been
-canceled and true if the state transition was successful. Similarly,
-try_stop_deletion() returns true if it succeeds in canceling the
+cancelled and true if the state transition was successful. Similarly,
+try_stop_deletion() returns true if it succeeds in cancelling the
-deletion. Additionally, try_stop_deletion() in the event that it
-fails to stop the deletion will not return until the final removal
-transaction is queued. This ensures that any operations queued after
+deletion. Additionally, if try_stop_deletion() fails to stop the
+deletion, it will not return until the final removal transaction is
+queued. This ensures that any operations queued after
OSD::_create_lock_pg must handle two cases:
1. Either there is no DeletingStateRef for the pg, or it failed to cancel
- 2. We succeeded in canceling the deletion.
+ 2. We succeeded in cancelling the deletion.
In case 1., we proceed as if there were no deletion occurring, except that
we avoid writing to the PG until the deletion finishes. In case 2., we
--------
Rados supports two related snapshotting mechanisms:
- 1. *pool snaps*: snapshots are implicitely applied to all objects
+ 1. *pool snaps*: snapshots are implicitly applied to all objects
in a pool
2. *self managed snaps*: the user must provide the current *SnapContext*
on each write.
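For concreteness, a minimal sketch of the information a *SnapContext* carries on each write might look like the following (field names and the validity rule are illustrative assumptions, not the actual Ceph definition)::

  // Illustrative sketch only: roughly the shape of a self-managed snap
  // context as described above, not Ceph's actual SnapContext type.
  #include <cstdint>
  #include <vector>

  using snapid_sketch = uint64_t;

  struct SnapContextSketch {
    snapid_sketch seq = 0;             // most recent snapshot id the client knows
    std::vector<snapid_sketch> snaps;  // existing snapshot ids, assumed newest first

    // Plausible sanity check (an assumption of this sketch): seq is at
    // least as new as every listed snapshot, and the list is ordered
    // from newest to oldest.
    bool is_valid() const {
      snapid_sketch prev = seq;
      for (snapid_sketch s : snaps) {
        if (s > prev)
          return false;
        prev = s;
      }
      return true;
    }
  };

  int main() {
    SnapContextSketch snapc;
    snapc.seq = 4;
    snapc.snaps = {4, 2, 1};           // client-tracked snapshot history
    return snapc.is_valid() ? 0 : 1;
  }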
==================
Previously, the filestore had a problem when handling large numbers of
-small ios. We throttle dirty data implicitely via the journal, but
+small ios. We throttle dirty data implicitly via the journal, but
a large number of inodes can be dirtied without filling the journal
resulting in a very long sync time when the sync finally does happen.
The flusher was not an adequate solution to this problem since it
To track the open FDs through the writeback process, there is now an
fdcache to cache open fds. lfn_open now returns a cached FDRef which
-implicitely closes the fd once all references have expired.
+implicitly closes the fd once all references have expired.
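One way to picture the FDRef behaviour described above (a sketch under the assumption of plain POSIX fds and standard smart pointers, not the actual FileStore FDCache code) is a shared pointer whose deleter closes the descriptor when the last reference expires::

  // Illustrative sketch only, not the actual FileStore FDCache/FDRef code:
  // a reference-counted fd that is implicitly closed once every copy of
  // the reference has expired, matching the behaviour described above.
  #include <fcntl.h>
  #include <unistd.h>
  #include <memory>

  using FDRefSketch = std::shared_ptr<int>;

  FDRefSketch open_cached(const char *path) {
    int fd = ::open(path, O_RDONLY);
    if (fd < 0)
      return nullptr;
    // The deleter runs when the final reference is destroyed,
    // implicitly closing the descriptor.
    return FDRefSketch(new int(fd), [](int *p) {
      ::close(*p);
      delete p;
    });
  }

  int main() {
    FDRefSketch ref = open_cached("/etc/hostname");
    if (ref) {
      FDRefSketch copy = ref;   // the cache and a caller can share the fd
      // ... reads against *copy would go here ...
    }                           // last reference dropped here -> fd closed
  }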
-Filestore syncs have a sideeffect of flushing all outstanding objects
+Filestore syncs have a side effect of flushing all outstanding objects
in the wbthrottle.