I ran a lot of the docs through aspell and found a number of spelling problems.
Signed-off-by: Bryan Stillwell <bstillwell@godaddy.com>
configuration file.
Once you have changed your bucket sharding configuration in your Ceph
-configuration file, restart your gateway. On Red Hat Enteprise Linux execute::
+configuration file, restart your gateway. On Red Hat Enterprise Linux execute::
sudo systemctl restart ceph-radosgw.service
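As a rough illustration of the preceding step, a sharding change in ``ceph.conf`` might look like this (``rgw_override_bucket_index_max_shards`` is only one of the relevant options, and 16 is an arbitrary example value)::

    [global]
    rgw_override_bucket_index_max_shards = 16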
* Some cache and log (ZIL) can be attached.
Please note that this is different from the Ceph journals. Cache and log are
totally transparent for Ceph, and help the filesystem to keep the system
- consistant and help performance.
+ consistent and help performance.
Assuming that ada2 is an SSD::
gpart create -s GPT ada2
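As a sketch of the remaining steps (the partition sizes, labels, and the pool name ``osd1`` are only examples), log and cache partitions could then be carved out and attached with::

    gpart add -t freebsd-zfs -l osd1-log   -s 1G  ada2
    gpart add -t freebsd-zfs -l osd1-cache -s 10G ada2
    zpool add osd1 log   gpt/osd1-log
    zpool add osd1 cache gpt/osd1-cache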
.. option:: count_tests
- Print the number of built-in test instances of the previosly
+ Print the number of built-in test instances of the previously
selected type that **ceph-dencoder** is able to generate.
.. option:: select_test <n>
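A usage sketch combining these options (``object_info_t`` and the test index are only examples)::

    ceph-dencoder type object_info_t count_tests
    ceph-dencoder type object_info_t select_test 1 dump_json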
Description
===========
-:program:`ceph-kvstore-tool` is a kvstore manipulation tool. It allows users to manipule
+:program:`ceph-kvstore-tool` is a kvstore manipulation tool. It allows users to manipulate
leveldb/rocksdb's data (like OSD's omap) offline.
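For example, to list all keys in a RocksDB store (the path is illustrative, and the store must not be in use by a running daemon)::

    ceph-kvstore-tool rocksdb /path/to/store list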
Commands
Optional Arguments:
* [-h, --help] show the help message and exit
-* [--auto-detect-objectstore] Automatically detect the objecstore by inspecting
+* [--auto-detect-objectstore] Automatically detect the objectstore by inspecting
the OSD
* [--bluestore] bluestore objectstore (default)
* [--filestore] filestore objectstore
.. option:: --show-utilization
- Displays the expected and actual utilisation for each device, for
+ Displays the expected and actual utilization for each device, for
each number of replicas. For instance::
device 0: stored : 951 expected : 853.333
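Output like the above can be produced with an invocation along these lines (the map file name is an example)::

    crushtool -i crushmap --test --show-utilization --num-rep 2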
rados_shutdown(cluster);
-Asychronous IO
-==============
+Asynchronous IO
+===============
When doing lots of IO, you often don't need to wait for one operation
to complete before starting the next one. `Librados` provides
____________________
One or more devices is expected to fail soon and has been marked "out"
-of the cluster based on ``mgr/devicehalth/mark_out_threshold``, but it
+of the cluster based on ``mgr/devicehealth/mark_out_threshold``, but it
is still participating in one or more PGs. This may be because it was
only recently marked "out" and data is still migrating, or because data
cannot be migrated off for some reason (e.g., the cluster is nearly
ceph health detail
In most cases the root cause is that one or more OSDs is currently
-down; see the dicussion for ``OSD_DOWN`` above.
+down; see the discussion for ``OSD_DOWN`` above.
The state of specific problematic PGs can be queried with::
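    ceph pg <pgid> query

where ``<pgid>`` is the PG id reported by ``ceph health detail`` (for example ``2.5``).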
________________
Recent OSD scrubs have uncovered inconsistencies. This error is generally
-paired with *PG_DAMANGED* (see above).
+paired with *PG_DAMAGED* (see above).
See :doc:`pg-repair` for more information.
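In many cases a single scrub inconsistency can be repaired directly once its cause is understood (the PG id below is only an example)::

    ceph pg repair 2.5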
The number of PGs in use in the cluster is below the configurable
threshold of ``mon_pg_warn_min_per_osd`` PGs per OSD. This can lead
-to suboptimizal distribution and balance of data across the OSDs in
+to suboptimal distribution and balance of data across the OSDs in
the cluster, and similarly reduce overall performance.
This may be an expected condition if data pools have not yet been
Monitoring Health Checks
========================
-Ceph continously runs various *health checks* against its own status. When
+Ceph continuously runs various *health checks* against its own status. When
a health check fails, this is reflected in the output of ``ceph status`` (or
``ceph health``). In addition, messages are sent to the cluster log to
indicate when a check fails, and when the cluster recovers.
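For example, the currently failing checks can be listed, and the cluster log followed, with the standard commands::

    ceph health detail
    ceph -w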
multi-monitor cluster, the monitors will stay in this state until they
find enough monitors to form a quorum -- this means that if you have 2 out
of 3 monitors down, the one remaining monitor will stay in this state
- indefinitively until you bring one of the other monitors up.
+ indefinitely until you bring one of the other monitors up.
If you have a quorum, however, the monitor should be able to find the
remaining monitors pretty fast, as long as they can be reached. If your
This value is configurable via the ``mon-clock-drift-allowed`` option, and
although you *CAN* change it, that doesn't mean you *SHOULD*. The clock skew mechanism
is in place because a clock-skewed monitor may not behave properly. We, as
- developers and QA afficcionados, are comfortable with the current default
+ developers and QA aficionados, are comfortable with the current default
value, as it will alert the user before the monitors get out of hand. Changing
this value without testing it first may cause unforeseen effects on the
stability of the monitors and overall cluster healthiness, although there is
Recovery using healthy monitor(s)
---------------------------------
-If there is any survivers, we can always `replace`_ the corrupted one with a
+If there are any survivors, we can always `replace`_ the corrupted one with a
new one. And after booting up, the new joiner will sync up with a healthy
peer, and once it is fully sync'ed, it will be able to serve the clients.
ceph tell mon.* config set debug_mon 10/10
-No quourm
+No quorum
Use the monitor's admin socket and directly adjust the configuration
options::
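    ceph daemon mon.<id> config set debug_mon 10/10

(This mirrors the ``ceph tell`` example above but goes through the local admin socket, substituting the monitor's id for ``<id>``.)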
We recommend against using btrfs or ext4. The btrfs filesystem has
many attractive features, but bugs in the filesystem may lead to
-performance issues and suprious ENOSPC errors. We do not recommend
+performance issues and spurious ENOSPC errors. We do not recommend
ext4 because xattr size limitations break our support for long object
names (needed for RGW).
- op_applied: The op has been write()'en to the backing FS (ie, applied in
memory but not flushed out to disk) on the primary
- sub_op_applied: op_applied, but for a replica's "subop"
-- sub_op_committed: op_commited, but for a replica's subop (only for EC pools)
+- sub_op_committed: op_committed, but for a replica's subop (only for EC pools)
- sub_op_commit_rec/sub_op_apply_rec from <X>: the primary marks this when it
hears about the above, but for a particular replica <X>
- commit_sent: we sent a reply back to the client (or primary OSD, for sub ops)
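These event markers show up, for instance, in the output of the OSD admin socket (the osd id is an example)::

    ceph daemon osd.0 dump_historic_ops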
displayed by ``ceph osd crush rule dump``. The test will try mapping
one million values (i.e. the range defined by ``[--min-x,--max-x]``)
and must display at least one bad mapping. If it outputs nothing it
-means all mappings are successfull and you can stop right there: the
+means all mappings are successful and you can stop right there: the
problem is elsewhere.
The CRUSH rule can be edited by decompiling the crush map::
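    crushtool -d crushmap -o crushmap.txt

After editing ``crushmap.txt``, it can be recompiled with ``crushtool -c crushmap.txt -o crushmap`` (the file names here are illustrative).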
The Ceph documentation source resides in the ``ceph/doc`` directory of the Ceph
repository, and Python Sphinx renders the source into HTML and manpages. The
-http://ceph.com/docs link currenly displays the ``master`` branch by default,
+http://ceph.com/docs link currently displays the ``master`` branch by default,
but you may view documentation for older branches (e.g., ``argonaut``) or future
branches (e.g., ``next``) as well as work-in-progress branches by substituting
``master`` with the branch name you prefer.
admin/build-doc
-To scan for the reachablity of external links, execute::
+To scan for the reachability of external links, execute::
admin/build-doc linkcheck
Installation (Kubernetes + Helm)
================================
-The ceph-helm_ project enables you to deploy Ceph in a Kubernetes environement.
-This documentation assumes a Kubernetes environement is available.
+The ceph-helm_ project enables you to deploy Ceph in a Kubernetes environment.
+This documentation assumes a Kubernetes environment is available.
Current limitations
===================
NAME TYPE
ceph-rbd ceph.com/rbd
-The output from helm install shows us the different types of ressources that will be deployed.
+The output from helm install shows us the different types of resources that will be deployed.
A StorageClass named ``ceph-rbd`` of type ``ceph.com/rbd`` will be created with ``ceph-rbd-provisioner`` Pods. These
will allow a RBD to be automatically provisioned upon creation of a PVC. RBDs will also be formatted when mapped for the first