.. code-block:: ini
[global]
- #Enable authentication between hosts within the cluster.
- #v 0.54 and earlier
- auth supported = cephx
+ #Enable authentication between hosts within the cluster.
+ #v 0.54 and earlier
+ auth supported = cephx
- #v 0.55 and after
- auth cluster required = cephx
- auth service required = cephx
- auth client required = cephx
+ #v 0.55 and after
+ auth cluster required = cephx
+ auth service required = cephx
+ auth client required = cephx
You can specify settings that apply to a particular type of daemon. When you
.. code-block:: ini
[osd]
- osd journal size = 1000
- filestore xattr use omap = true
+ osd journal size = 1000
You may specify settings for particular instances of a daemon. You may specify
.. code-block:: ini
[osd.1]
- # settings affect osd.1 only.
+ # settings affect osd.1 only.
[mon.a]
- # settings affect mon.a only.
+ # settings affect mon.a only.
[mds.b]
- # settings affect mds.b only.
+ # settings affect mds.b only.
+
+
+If the daemon you specify is a Ceph Object Gateway client, specify the daemon and the
+instance, delimited by a period (.). For example::
+
+ [client.radosgw.instance-name]
+ # settings affect client.radosgw.instance-name only.
+
.. _ceph-metavariables:
Metavariables simplify Ceph Storage Cluster configuration dramatically. When a
metavariable is set in a configuration value, Ceph expands the metavariable into
a concrete value. Metavariables are very powerful when used within the
-``[global]``, ``[osd]``, ``[mon]`` or ``[mds]`` sections of your configuration
-file. Ceph metavariables are similar to Bash shell expansion.
+``[global]``, ``[osd]``, ``[mon]``, ``[mds]`` or ``[client]`` sections of your
+configuration file. Ceph metavariables are similar to Bash shell expansion.
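+
+For example, a metavariable such as ``$cluster`` (the cluster name) or ``$id``
+(the daemon instance identifier) lets you set a path once for every instance
+of a daemon type. A minimal illustrative sketch (the value shown is Ceph's
+default for ``osd data``, so the line is for illustration rather than
+required):
+
+.. code-block:: ini
+
+	[osd]
+		# For osd.0 in a cluster named "ceph", this expands to
+		# /var/lib/ceph/osd/ceph-0.
+		osd data = /var/lib/ceph/osd/$cluster-$id
+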
Ceph supports the following metavariables:
The `Hardware Recommendations`_ section provides some hardware guidelines for
configuring a Ceph Storage Cluster. It is possible for a single :term:`Ceph
Node` to run multiple daemons. For example, a single node with multiple drives
-or RAIDs may run one ``ceph-osd`` for each drive or RAID. Ideally, you will
-have a node for a particular type of process. For example, some nodes may run
-``ceph-osd`` daemons, other nodes may run ``ceph-mds`` daemons, and still
-other nodes may run ``ceph-mon`` daemons.
+may run one ``ceph-osd`` for each drive. Ideally, you will have a node for a
+particular type of process. For example, some nodes may run ``ceph-osd``
+daemons, other nodes may run ``ceph-mds`` daemons, and still other nodes may
+run ``ceph-mon`` daemons.
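+
+As an illustrative sketch, a node with two drives might run two OSD daemons
+(the host name ``node1`` and the instance numbers are hypothetical, and the
+``host`` entries apply to manual deployments only):
+
+.. code-block:: ini
+
+	[osd.0]
+		host = node1   # OSD backed by the first drive
+	[osd.1]
+		host = node1   # OSD backed by the second drive
+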
Each node has a name identified by the ``host`` setting. Monitors also specify
a network address and port (i.e., domain name or IP address) identified by the
``addr`` setting. A basic configuration file will typically specify only
-minimal settings for each instance of a daemon. For example:
+minimal settings for the monitor daemons. For example:
.. code-block:: ini
- [mon.a]
- host = hostName
- mon addr = 150.140.130.120:6789
-
- [osd.0]
- host = hostName
+ [global]
+ mon_initial_members = ceph1
+ mon_host = 10.0.0.1
+
.. important:: The ``host`` setting is the short name of the node (i.e., not
   an FQDN). It is **NOT** an IP address either. Enter ``hostname -s`` on
- the command line to retrieve the name of the node. Also, this setting is
- **ONLY** for ``mkcephfs`` and manual deployment. It **MUST NOT**
- be used with ``chef`` or ``ceph-deploy``, as those tools will enter the
- appropriate values for you.
+ the command line to retrieve the name of the node. Do not use ``host``
+ settings for anything other than initial monitors unless you are deploying
+ Ceph manually. You **MUST NOT** specify ``host`` under individual daemons
+ when using deployment tools like ``chef`` or ``ceph-deploy``, as those tools
+ will enter the appropriate values for you in the cluster map.
.. _ceph-network-config:
.. code-block:: ini
[mon.a]
- host = hostName
- mon addr = 150.140.130.120:6789
+ host = hostName
+ mon addr = 150.140.130.120:6789
By default, Ceph expects that you will store a monitor's data under the
following path::
For Bobtail (v 0.56) and beyond, you should expressly enable or disable
authentication in the ``[global]`` section of your Ceph configuration file. ::
- auth cluster required = cephx
- auth service required = cephx
- auth client required = cephx
+ auth cluster required = cephx
+ auth service required = cephx
+ auth client required = cephx
Additionally, you should enable message signing. See `Cephx Config Reference`_
and `Cephx Authentication`_ for details.
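+
+For example, a minimal sketch that requires signatures on all messages
+(``cephx require signatures`` is the coarsest of the signature settings; see
+the `Cephx Config Reference`_ for finer-grained options):
+
+.. code-block:: ini
+
+	[global]
+		# Require signatures on traffic between daemons and between
+		# clients and the cluster.
+		cephx require signatures = true
+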
Ceph production clusters typically deploy :term:`Ceph OSD Daemons` where one node
has one OSD daemon running a filestore on one storage drive. A typical
-deployment specifies a journal size and whether the file store's extended
-attributes (XATTRs) use an object map (i.e., when running on the ``ext4``
-filesystem). For example:
+deployment specifies a journal size. For example:
.. code-block:: ini
[osd]
- osd journal size = 10000
- filestore xattr use omap = true #enables the object map. Only if running ext4.
+ osd journal size = 10000
[osd.0]
- host = {hostname}
+ host = {hostname} #manual deployments only.
By default, Ceph expects that you will store a Ceph OSD Daemon's data with the
Logs / Debugging
================
-Ceph is still on the leading edge, so you may encounter situations that require
+Sometimes you may encounter issues with Ceph that require
modifying logging output and using Ceph's debugging features. See `Debugging and
Logging`_ for details on log rotation.
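+
+For example, a minimal sketch that raises log verbosity (the subsystems and
+levels shown are illustrative only):
+
+.. code-block:: ini
+
+	[global]
+		debug ms = 1       # messenger subsystem, low verbosity
+	[osd]
+		debug osd = 20     # verbose OSD logging on OSD nodes only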