``mon max pool pg num``
-:Description: The maximium number of placement groups per pool.
+:Description: The maximum number of placement groups per pool.
:Type: Integer
:Default: ``65536``
====================
To enable a host to execute ceph commands with administrator
-priveleges, use the ``admin`` command. ::
+privileges, use the ``admin`` command. ::
ceph-deploy admin {host-name [host-name]...}
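For example, to push the configuration and admin keyring to two hosts (the host
names ``node1`` and ``node2`` below are placeholders)::

    ceph-deploy admin node1 node2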
among a majority of monitors; otherwise, other steps (like ``ceph-deploy gatherkeys``)
will fail.
-.. note:: When adding a monitor on a host that was not in hosts intially defined
+.. note:: When adding a monitor on a host that was not in hosts initially defined
with the ``ceph-deploy new`` command, a ``public network`` statement needs
- to be be added to the ceph.conf file.
+ to be added to the ceph.conf file.
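A minimal sketch of that statement in the ``[global]`` section of ceph.conf; the
subnet below is only a placeholder for your actual public network::

    [global]
    public network = 10.0.0.0/24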
Remove a Monitor
================
If monitors discovered each other through the Ceph configuration file instead of
through the monmap, it would introduce additional risks because the Ceph
configuration files aren't updated and distributed automatically. Monitors
-might inadvertantly use an older ``ceph.conf`` file, fail to recognize a
+might inadvertently use an older ``ceph.conf`` file, fail to recognize a
monitor, fall out of a quorum, or develop a situation where `Paxos`_ isn't able
to determine the current state of the system accurately. Consequently, making
changes to an existing monitor's IP address must be done with great care.
.. tip:: This guide is for manual configuration. If you use a deployment tool
such as ``ceph-deploy``, it is very likely that the tool will perform at
least the first two steps for you. Verify that your deployment tool
- addresses these steps so that you don't overwrite your keys inadvertantly.
+ addresses these steps so that you don't overwrite your keys inadvertently.
.. _client-admin-key:
generated with the ``hostname -s`` command.
In a typical deployment scenario, provisioning software (or the system
-adminstrator) can simply set the 'crush location' field in a host's
+administrator) can simply set the 'crush location' field in a host's
ceph.conf to describe that machine's location within the datacenter or
cluster. This will provide location awareness to both Ceph daemons
and clients alike.
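As a rough illustration, and assuming the option is spelled ``crush location``
as the prose above suggests, the entry in the host's ceph.conf might look like
this (bucket names are placeholders)::

    crush location = root=default rack=rack1 host=node1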
osd crush location hook = /path/to/script
-This hook is is passed several arguments (below) and should output a single line
+This hook is passed several arguments (below) and should output a single line
to stdout with the CRUSH location description.::
$ ceph-crush-location --cluster CLUSTER --id ID --type TYPE
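As an illustration only (not a supplied script), a trivial hook could be a small
shell script that ignores its arguments and prints a static location; the
``rack`` and ``root`` values are placeholders::

    #!/bin/sh
    # A hypothetical location hook: the --cluster, --id and --type arguments
    # are ignored and a fixed CRUSH location is written to stdout on one line.
    echo "host=$(hostname -s) rack=rack1 root=default"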
Suppose you want to have most pools default to OSDs backed by large hard drives,
but have some pools mapped to OSDs backed by fast solid-state drives (SSDs).
It's possible to have multiple independent CRUSH hierarchies within the same
-CRUSH map. Define two hierachies with two different root nodes--one for hard
+CRUSH map. Define two hierarchies with two different root nodes--one for hard
disks (e.g., "root platter") and one for SSDs (e.g., "root ssd") as shown
below::
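    # Condensed sketch only -- the full map would also declare devices, types,
    # and host buckets; bucket ids, items and weights here are placeholders.
    root platter {
            id -1
            alg straw
            hash 0
            item osd.0 weight 1.00
            item osd.1 weight 1.00
    }

    root ssd {
            id -2
            alg straw
            hash 0
            item osd.2 weight 1.00
            item osd.3 weight 1.00
    }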
required to support the feature. However, the OSD peering process
requires examining and understanding old maps. Therefore, you
should not run old versions of the ``ceph-osd`` daemon
- if the cluster has previosly used non-legacy CRUSH values, even if
+ if the cluster has previously used non-legacy CRUSH values, even if
the latest version of the map has been switched back to using the
legacy defaults.
reassigning placement groups to an OSD (especially a new OSD). By default,
``osd_max_backfills`` sets the maximum number of concurrent backfills to or from
an OSD to 10. The ``osd backfill full ratio`` enables an OSD to refuse a
-backfill request if the OSD is approaching its its full ratio (85%, by default).
+backfill request if the OSD is approaching its full ratio (85%, by default).
If an OSD refuses a backfill request, the ``osd backfill retry interval``
enables an OSD to retry the request (after 10 seconds, by default). OSDs can
also set ``osd backfill scan min`` and ``osd backfill scan max`` to manage scan
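A hypothetical ceph.conf snippet restating the defaults mentioned above (tune
the values to your hardware)::

    [osd]
    osd max backfills = 10
    osd backfill full ratio = 0.85
    osd backfill retry interval = 10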
ceph osd map {poolname} {object-name}
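For example, with a hypothetical pool named ``data`` and an object named
``test-object-1``::

    ceph osd map data test-object-1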
-.. topic:: Excercise: Locate an Object
+.. topic:: Exercise: Locate an Object
As an exercise, let's create an object. Specify an object name, a path to a
test file containing some object data and a pool name using the
responsible for a particular placement group.
*Up Set*
- The ordered list of OSDs responsible for a particular placment
+ The ordered list of OSDs responsible for a particular placement
group for a particular epoch according to CRUSH. Normally this
is the same as the *Acting Set*, except when the *Acting Set* has
been explicitly overridden via ``pg_temp`` in the OSD Map.
The placement group is waiting for clients to replay operations after an OSD crashed.
*Splitting*
- Ceph is splitting the placment group into multiple placement groups. (functional?)
+ Ceph is splitting the placement group into multiple placement groups. (functional?)
*Scrubbing*
Ceph is checking the placement group for inconsistencies.
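To see where a particular placement group currently stands (its state, and its
up and acting sets), you can query it directly; the PG id below is a
placeholder::

    ceph pg map {pg-id}
    ceph pg {pg-id} query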
``log file``
-:Desription: The location of the logging file for your cluster.
+:Description: The location of the logging file for your cluster.
:Type: String
:Required: No
:Default: ``/var/log/ceph/$cluster-$name.log``
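For example, a hypothetical override in the ``[global]`` section of ceph.conf
(the directory is only an example)::

    [global]
    log file = /srv/ceph/logs/$cluster-$name.log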
Once you have the heap profiler installed, start your cluster and begin using
the heap profiler. You may enable or disable the heap profiler at runtime, or
-ensure that it runs continously. For the following commandline usage, replace
+ensure that it runs continuously. For the following commandline usage, replace
``{daemon-type}`` with ``osd`` or ``mds``, and replace ``daemon-id`` with the
OSD number or metadata server letter.
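As a sketch, for an OSD with id ``0`` (substitute your own daemon type and id),
the runtime invocations look like this; the exact ``heap`` subcommands can vary
by release::

    ceph tell osd.0 heap start_profiler
    ceph tell osd.0 heap dump
    ceph tell osd.0 heap stop_profiler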
all your monitor nodes and make sure you're not dropping/rejecting
connections.
- If this intial troubleshooting doesn't solve your problems, then it's
+ If this initial troubleshooting doesn't solve your problems, then it's
time to go deeper.
First, check the problematic monitor's ``mon_status`` via the admin