When you start the Ceph service, the initialization process activates a series
of daemons that run in the background. A :term:`Ceph Storage Cluster` runs
-two types of daemons:
+three types of daemons:
- :term:`Ceph Monitor` (``ceph-mon``)
+- :term:`Ceph Manager` (``ceph-mgr``)
- :term:`Ceph OSD Daemon` (``ceph-osd``)
-Ceph Storage Clusters that support the :term:`Ceph Filesystem` run at least one
-:term:`Ceph Metadata Server` (``ceph-mds``). Clusters that support :term:`Ceph
-Object Storage` run Ceph Gateway daemons (``radosgw``). For your convenience,
-each daemon has a series of default values (*i.e.*, many are set by
-``ceph/src/common/config_opts.h``). You may override these settings with a Ceph
-configuration file.
+Ceph Storage Clusters that support the :term:`Ceph Filesystem` run at
+least one :term:`Ceph Metadata Server` (``ceph-mds``). Clusters that
+support :term:`Ceph Object Storage` run Ceph Gateway daemons
+(``radosgw``).
+Each daemon has a series of configuration options, each of which has a
+default value. You may adjust the behavior of the system by changing these
+configuration options.
-.. _ceph-conf-file:
+Option names
+============
-The Configuration File
-======================
+All Ceph configuration options have a unique name consisting of words
+formed with lower-case characters and connected with underscore
+(``_``) characters.
-When you start a Ceph Storage Cluster, each daemon looks for a Ceph
-configuration file (i.e., ``ceph.conf`` by default) that provides the cluster's
-configuration settings. For manual deployments, you need to create a Ceph
-configuration file. For tools that create configuration files for you (*e.g.*,
-``ceph-deploy``, Chef, etc.), you may use the information contained herein as a
-reference. The Ceph configuration file defines:
-
-- Cluster Identity
-- Authentication settings
-- Cluster membership
-- Host names
-- Host addresses
-- Paths to keyrings
-- Paths to journals
-- Paths to data
-- Other runtime options
-
-The default Ceph configuration file locations in sequential order include:
-
-#. ``$CEPH_CONF`` (*i.e.,* the path following the ``$CEPH_CONF``
- environment variable)
-#. ``-c path/path`` (*i.e.,* the ``-c`` command line argument)
-#. ``/etc/ceph/$cluster.conf``
-#. ``~/.ceph/$cluster.conf``
-#. ``./$cluster.conf`` (*i.e.,* in the current working directory)
-#. On FreeBSD systems only, ``/usr/local/etc/ceph/$cluster.conf``
+When option names are specified on the command line, underscore
+(``_``) and dash (``-``) characters can be used interchangeably (e.g.,
+``--mon-host`` is equivalent to ``--mon_host``).
-where ``$cluster`` is the cluster's name (default ``ceph``).
+When option names appear in configuration files, spaces can also be
+used in place of underscore or dash.
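+
+For example, a configuration file might spell the ``mon_host`` option
+with spaces (the address below is purely illustrative):
+
+.. code-block:: ini
+
+    [global]
+    mon host = 10.0.0.1
+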
-The Ceph configuration file uses an *ini* style syntax. You can add comments
-by preceding comments with a pound sign (#) or a semi-colon (;). For example:
+Config sources
+==============
-.. code-block:: ini
+Each Ceph daemon, process, and library will pull its configuration
+from several sources, listed below. Sources later in the list will
+override those earlier in the list when both are present.
+
+- the compiled-in default value
+- the monitor cluster's centralized configuration database
+- a configuration file stored on the local host
+- environment variables
+- command line arguments
+- runtime overrides set by an administrator
+
+One of the first things a Ceph process does on startup is parse the
+configuration options provided via the command line, environment, and
+local configuration file. The process will then contact the monitor
+cluster to retrieve configuration stored centrally for the entire
+cluster. Once a complete view of the configuration is available, the
+daemon or process startup will proceed.
+
+Bootstrap options
+-----------------
+
+Because some configuration options affect the process's ability to
+contact the monitors, authenticate, and retrieve the cluster-stored
+configuration, they may need to be stored locally on the node and set
+in a local configuration file. These options include:
+
+ - ``mon_host``, the list of monitors for the cluster
+ - ``mon_dns_srv_name`` (default: ``ceph-mon``), the name of the DNS
+ SRV record to check to identify the cluster monitors via DNS
+ - ``mon_data``, ``osd_data``, ``mds_data``, ``mgr_data``, and
+ similar options that define which local directory the daemon
+ stores its data in.
+ - ``keyring``, ``keyfile``, and/or ``key``, which can be used to
+ specify the credential used to authenticate with the monitor.
+ Note that in most cases the default keyring location is in the
+ data directory specified above.
+
+In the vast majority of cases the default values of these options are
+appropriate, with the exception of the ``mon_host`` option that
+identifies the addresses of the cluster's monitors. When DNS is used
+to identify monitors, a local Ceph configuration file can be avoided
+entirely.
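+
+When a local configuration file is used, a minimal sketch might
+contain nothing more than the monitor addresses (the addresses below
+are purely illustrative):
+
+.. code-block:: ini
+
+    [global]
+    mon_host = 10.0.0.10, 10.0.0.11, 10.0.0.12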
+
+Skipping monitor config
+-----------------------
+
+Any process may be passed the option ``--no-mon-config`` to skip the
+step that retrieves configuration from the cluster monitors. This is
+useful in cases where configuration is managed entirely via
+configuration files or where the monitor cluster is currently down but
+some maintenance activity needs to be done.
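+
+For example, an administrator performing maintenance while the
+monitors are down might start an OSD by hand with the flag appended
+(a sketch only; how daemons are normally launched depends on your
+deployment)::
+
+    ceph-osd -i 0 --no-mon-config
+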
- # <--A number (#) sign precedes a comment.
- ; A comment may be anything.
- # Comments always follow a semi-colon (;) or a pound (#) on each line.
- # The end of the line terminates a comment.
- # We recommend that you provide comments in your configuration file(s).
+.. _ceph-conf-file:
-.. _ceph-conf-settings:
-
-Config Sections
-===============
-The configuration file can configure all Ceph daemons in a Ceph Storage Cluster,
-or all Ceph daemons of a particular type. To configure a series of daemons, the
-settings must be included under the processes that will receive the
-configuration as follows:
+Configuration sections
+======================
-``[global]``
+Any given process or daemon has a single value for each configuration
+option. However, values for an option may vary across daemon types,
+and even across daemons of the same type. Ceph options that are
+stored in the monitor configuration database or in local configuration
+files are grouped into sections to indicate which daemons or clients
+they apply to.
-:Description: Settings under ``[global]`` affect all daemons in a Ceph Storage
- Cluster.
-
-:Example: ``auth supported = cephx``
+These sections include:
-``[osd]``
+``global``
-:Description: Settings under ``[osd]`` affect all ``ceph-osd`` daemons in
- the Ceph Storage Cluster, and override the same setting in
- ``[global]``.
+:Description: Settings under ``global`` affect all daemons and clients
+ in a Ceph Storage Cluster.
-:Example: ``osd journal size = 1000``
+:Example: ``log_file = /var/log/ceph/$cluster-$type.$id.log``
-``[mon]``
+``mon``
-:Description: Settings under ``[mon]`` affect all ``ceph-mon`` daemons in
+:Description: Settings under ``mon`` affect all ``ceph-mon`` daemons in
the Ceph Storage Cluster, and override the same setting in
- ``[global]``.
+ ``global``.
-:Example: ``mon addr = 10.0.0.101:6789``
+:Example: ``mon_cluster_log_to_syslog = true``
-``[mds]``
+``mgr``
-:Description: Settings under ``[mds]`` affect all ``ceph-mds`` daemons in
+:Description: Settings in the ``mgr`` section affect all ``ceph-mgr`` daemons in
the Ceph Storage Cluster, and override the same setting in
- ``[global]``.
-
-:Example: ``host = myserver01``
-
-``[client]``
+ ``global``.
-:Description: Settings under ``[client]`` affect all Ceph Clients
- (e.g., mounted Ceph Filesystems, mounted Ceph Block Devices,
- etc.).
+:Example: ``mgr_stats_period = 10``
-:Example: ``log file = /var/log/ceph/radosgw.log``
+``osd``
+:Description: Settings under ``osd`` affect all ``ceph-osd`` daemons in
+ the Ceph Storage Cluster, and override the same setting in
+ ``global``.
-Global settings affect all instances of all daemon in the Ceph Storage Cluster.
-Use the ``[global]`` setting for values that are common for all daemons in the
-Ceph Storage Cluster. You can override each ``[global]`` setting by:
+:Example: ``osd_op_queue = wpq``
-#. Changing the setting in a particular process type
- (*e.g.,* ``[osd]``, ``[mon]``, ``[mds]`` ).
+``mds``
-#. Changing the setting in a particular process (*e.g.,* ``[osd.1]`` ).
+:Description: Settings in the ``mds`` section affect all ``ceph-mds`` daemons in
+ the Ceph Storage Cluster, and override the same setting in
+ ``global``.
-Overriding a global setting affects all child processes, except those that
-you specifically override in a particular daemon.
+:Example: ``mds_cache_memory_limit = 10G``
-A typical global setting involves activating authentication. For example:
+``client``
-.. code-block:: ini
-
- [global]
- #Enable authentication between hosts within the cluster.
- #v 0.54 and earlier
- auth supported = cephx
-
- #v 0.55 and after
- auth cluster required = cephx
- auth service required = cephx
- auth client required = cephx
-
-
-You can specify settings that apply to a particular type of daemon. When you
-specify settings under ``[osd]``, ``[mon]`` or ``[mds]`` without specifying a
-particular instance, the setting will apply to all OSDs, monitors or metadata
-daemons respectively.
-
-A typical daemon-wide setting involves setting journal sizes, filestore
-settings, etc. For example:
-
-.. code-block:: ini
-
- [osd]
- osd journal size = 1000
-
-
-You may specify settings for particular instances of a daemon. You may specify
-an instance by entering its type, delimited by a period (.) and by the instance
-ID. The instance ID for a Ceph OSD Daemon is always numeric, but it may be
-alphanumeric for Ceph Monitors and Ceph Metadata Servers.
+:Description: Settings under ``client`` affect all Ceph Clients
+ (e.g., mounted Ceph Filesystems, mounted Ceph Block Devices,
+ etc.).
-.. code-block:: ini
+:Example: ``objecter_inflight_ops = 512``
- [osd.1]
- # settings affect osd.1 only.
-
- [mon.a]
- # settings affect mon.a only.
-
- [mds.b]
- # settings affect mds.b only.
+Sections may also specify an individual daemon or client name. For example,
+``mon.foo``, ``osd.123``, and ``client.smith`` are all valid section names.
-If the daemon you specify is a Ceph Gateway client, specify the daemon and the
-instance, delimited by a period (.). For example::
- [client.radosgw.instance-name]
- # settings affect client.radosgw.instance-name only.
+Any given daemon will draw its settings from the global section, the
+daemon or client type section, and the section sharing its name.
+Settings in the most-specific section take precedence, so, for example,
+if the same option is specified in ``global``, ``mon``, and
+``mon.foo`` in the same source (i.e., in the same configuration file),
+the ``mon.foo`` value will be used.
+Note that values from the local configuration file always take
+precedence over values from the monitor configuration database,
+regardless of which section they appear in.
.. _ceph-metavariables:
Metavariables
=============
-Metavariables simplify Ceph Storage Cluster configuration dramatically. When a
-metavariable is set in a configuration value, Ceph expands the metavariable into
-a concrete value. Metavariables are very powerful when used within the
-``[global]``, ``[osd]``, ``[mon]``, ``[mds]`` or ``[client]`` sections of your
-configuration file. Ceph metavariables are similar to Bash shell expansion.
+Metavariables simplify Ceph Storage Cluster configuration
+dramatically. When a metavariable appears in a configuration value,
+Ceph expands it into a concrete value at the time the configuration
+value is used. Ceph metavariables are similar to variable expansion in
+the Bash shell.
Ceph supports the following metavariables:
-
``$cluster``
:Description: Expands to the Ceph Storage Cluster name. Useful when running
``$type``
-:Description: Expands to one of ``mds``, ``osd``, or ``mon``, depending on the
- type of the instant daemon.
+:Description: Expands to a daemon or process type (e.g., ``mds``, ``osd``, or ``mon``)
:Example: ``/var/lib/ceph/$type``
``$id``
-:Description: Expands to the daemon identifier. For ``osd.0``, this would be
- ``0``; for ``mds.a``, it would be ``a``.
+:Description: Expands to the daemon or client identifier. For
+ ``osd.0``, this would be ``0``; for ``mds.a``, it would
+ be ``a``.
:Example: ``/var/lib/ceph/$type/$cluster-$id``
``$host``
-:Description: Expands to the host name of the instant daemon.
+:Description: Expands to the host name where the process is running.
``$name``
:Example: ``/var/run/ceph/$cluster-$name-$pid.asok``
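+
+For example, for the daemon ``osd.0`` in a cluster named ``ceph``, a
+value of ``/var/lib/ceph/$type/$cluster-$id`` would expand to::
+
+    /var/lib/ceph/osd/ceph-0
+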
-.. _ceph-conf-common-settings:
-
-Common Settings
-===============
-
-The `Hardware Recommendations`_ section provides some hardware guidelines for
-configuring a Ceph Storage Cluster. It is possible for a single :term:`Ceph
-Node` to run multiple daemons. For example, a single node with multiple drives
-may run one ``ceph-osd`` for each drive. Ideally, you will have a node for a
-particular type of process. For example, some nodes may run ``ceph-osd``
-daemons, other nodes may run ``ceph-mds`` daemons, and still other nodes may
-run ``ceph-mon`` daemons.
-
-Each node has a name identified by the ``host`` setting. Monitors also specify
-a network address and port (i.e., domain name or IP address) identified by the
-``addr`` setting. A basic configuration file will typically specify only
-minimal settings for each instance of monitor daemons. For example:
-
-.. code-block:: ini
- [global]
- mon_initial_members = ceph1
- mon_host = 10.0.0.1
+The Configuration File
+======================
+On startup, Ceph processes search for a configuration file in the
+following locations:
-.. important:: The ``host`` setting is the short name of the node (i.e., not
- an fqdn). It is **NOT** an IP address either. Enter ``hostname -s`` on
- the command line to retrieve the name of the node. Do not use ``host``
- settings for anything other than initial monitors unless you are deploying
- Ceph manually. You **MUST NOT** specify ``host`` under individual daemons
- when using deployment tools like ``chef`` or ``ceph-deploy``, as those tools
- will enter the appropriate values for you in the cluster map.
+#. ``$CEPH_CONF`` (*i.e.,* the path following the ``$CEPH_CONF``
+ environment variable)
+#. ``-c path/path`` (*i.e.,* the ``-c`` command line argument)
+#. ``/etc/ceph/$cluster.conf``
+#. ``~/.ceph/$cluster.conf``
+#. ``./$cluster.conf`` (*i.e.,* in the current working directory)
+#. On FreeBSD systems only, ``/usr/local/etc/ceph/$cluster.conf``
+where ``$cluster`` is the cluster's name (default ``ceph``).
-.. _ceph-network-config:
+The Ceph configuration file uses an *ini* style syntax. You can add comments
+by prefixing them with a pound sign (#) or a semi-colon (;). For example:
-Networks
-========
+.. code-block:: ini
-See the `Network Configuration Reference`_ for a detailed discussion about
-configuring a network for use with Ceph.
+ # <--A number (#) sign precedes a comment.
+ ; A comment may be anything.
+ # Comments always follow a semi-colon (;) or a pound (#) on each line.
+ # The end of the line terminates a comment.
+ # We recommend that you provide comments in your configuration file(s).
-Monitors
-========
+.. _ceph-conf-settings:
-Ceph production clusters typically deploy with a minimum 3 :term:`Ceph Monitor`
-daemons to ensure high availability should a monitor instance crash. At least
-three (3) monitors ensures that the Paxos algorithm can determine which version
-of the :term:`Ceph Cluster Map` is the most recent from a majority of Ceph
-Monitors in the quorum.
+Config file section names
+-------------------------
-.. note:: You may deploy Ceph with a single monitor, but if the instance fails,
- the lack of other monitors may interrupt data service availability.
+The configuration file is divided into sections like ``[global]`` or
+``[mon.foo]``, where each section is a valid Ceph section name
+(``global``, a daemon type, or a daemon name) surrounded by square
+brackets.
-Ceph Monitors typically listen on port ``6789``. For example:
+Global settings affect all instances of all daemons in the Ceph Storage
+Cluster. Use the ``[global]`` section for values that are common to all
+daemons in the Ceph Storage Cluster. You can override each ``[global]``
+setting by:
-.. code-block:: ini
+#. Changing the setting in a particular process type
+ (*e.g.,* ``[osd]``, ``[mon]``, ``[mds]`` ).
- [mon.a]
- host = hostName
- mon addr = 150.140.130.120:6789
+#. Changing the setting in a particular process (*e.g.,* ``[osd.1]`` ).
-By default, Ceph expects that you will store a monitor's data under the
-following path::
+Overriding a global setting affects all child processes, except those that
+you specifically override in a particular daemon. For example,
- /var/lib/ceph/mon/$cluster-$id
-
-You or a deployment tool (e.g., ``ceph-deploy``) must create the corresponding
-directory. With metavariables fully expressed and a cluster named "ceph", the
-foregoing directory would evaluate to::
+.. code-block:: ini
- /var/lib/ceph/mon/ceph-a
+ [global]
+ debug ms = 0
-For additional details, see the `Monitor Config Reference`_.
-
-.. _Monitor Config Reference: ../mon-config-ref
-
-
-.. _ceph-osd-config:
-
-
-Authentication
-==============
-
-.. versionadded:: Bobtail 0.56
-
-For Bobtail (v 0.56) and beyond, you should expressly enable or disable
-authentication in the ``[global]`` section of your Ceph configuration file. ::
-
- auth cluster required = cephx
- auth service required = cephx
- auth client required = cephx
-
-Additionally, you should enable message signing. See `Cephx Config Reference`_ for details.
-
-.. important:: When upgrading, we recommend expressly disabling authentication
- first, then perform the upgrade. Once the upgrade is complete, re-enable
- authentication.
-
-.. _Cephx Config Reference: ../auth-config-ref
-
-
-.. _ceph-monitor-config:
-
-
-OSDs
-====
-
-Ceph production clusters typically deploy :term:`Ceph OSD Daemons` where one node
-has one OSD daemon running a filestore on one storage drive. A typical
-deployment specifies a journal size. For example:
+ [osd]
+ debug ms = 1
-.. code-block:: ini
+ [osd.1]
+ debug ms = 10
- [osd]
- osd journal size = 10000
-
- [osd.0]
- host = {hostname} #manual deployments only.
+ [osd.2]
+ debug ms = 10
-By default, Ceph expects that you will store a Ceph OSD Daemon's data with the
-following path::
- /var/lib/ceph/osd/$cluster-$id
-
-You or a deployment tool (e.g., ``ceph-deploy``) must create the corresponding
-directory. With metavariables fully expressed and a cluster named "ceph", the
-foregoing directory would evaluate to::
+Monitor configuration database
+==============================
- /var/lib/ceph/osd/ceph-0
-
-You may override this path using the ``osd data`` setting. We don't recommend
-changing the default location. Create the default directory on your OSD host.
+The monitor cluster manages a database of configuration options that
+can be consumed by the entire cluster, enabling streamlined central
+configuration management for the entire system. The vast majority of
+configuration options can and should be stored here for ease of
+administration and transparency.
-::
+A handful of settings may still need to be stored in local
+configuration files because they affect the ability to connect to the
+monitors, authenticate, and fetch configuration information. In most
+cases this is limited to the ``mon_host`` option, although this can
+also be avoided through the use of DNS SRV records.
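+
+As a sketch, assuming the default ``mon_dns_srv_name`` of ``ceph-mon``
+and a hypothetical ``example.com`` domain, the monitors might be
+published with SRV records along these lines (exact record names and
+ports depend on your environment)::
+
+    _ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon1.example.com.
+    _ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon2.example.com.
+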
- ssh {osd-host}
- sudo mkdir /var/lib/ceph/osd/ceph-{osd-number}
+Sections and masks
+------------------
-The ``osd data`` path ideally leads to a mount point with a hard disk that is
-separate from the hard disk storing and running the operating system and
-daemons. If the OSD is for a disk other than the OS disk, prepare it for
-use with Ceph, and mount it to the directory you just created::
+Configuration options stored by the monitor can live in a global
+section, daemon type section, or specific daemon section, just like
+options in a configuration file can.
- ssh {new-osd-host}
- sudo mkfs -t {fstype} /dev/{disk}
- sudo mount -o user_xattr /dev/{hdd} /var/lib/ceph/osd/ceph-{osd-number}
+In addition, options may also have a *mask* associated with them to
+further restrict which daemons or clients the option applies to.
+Masks take two forms:
-We recommend using the ``xfs`` file system when running
-:command:`mkfs`. (``btrfs`` and ``ext4`` are not recommended and no
-longer tested.)
+#. ``type:location`` where *type* is a CRUSH property like `rack` or
+ `host`, and *location* is a value for that property. For example,
+ ``host:foo`` would limit the option only to daemons or clients
+ running on a particular host.
+#. ``class:device-class`` where *device-class* is the name of a CRUSH
+ device class (e.g., ``hdd`` or ``ssd``). For example,
+ ``class:ssd`` would limit the option only to OSDs backed by SSDs.
+ (This mask has no effect for non-OSD daemons or clients.)
-See the `OSD Config Reference`_ for additional configuration details.
+When setting a configuration option, the `who` may be a section name,
+a mask, or a combination of both separated by a slash (``/``)
+character. For example, ``osd/rack:foo`` would mean all OSD daemons
+in the ``foo`` rack.
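+
+As a concrete sketch, such a mask can be combined with
+``ceph config set`` to raise the debug level only for the OSD daemons
+in rack ``foo`` (``debug_osd`` is used here purely for illustration)::
+
+    ceph config set osd/rack:foo debug_osd 10
+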
+When viewing configuration options, the section name and mask are
+generally separated out into separate fields or columns to ease readability.
-Heartbeats
-==========
-During runtime operations, Ceph OSD Daemons check up on other Ceph OSD Daemons
-and report their findings to the Ceph Monitor. You do not have to provide any
-settings. However, if you have network latency issues, you may wish to modify
-the settings.
+Commands
+--------
-See `Configuring Monitor/OSD Interaction`_ for additional details.
+The following CLI commands are used to configure the cluster:
+* ``ceph config dump`` will dump the entire configuration database for
+ the cluster.
-.. _ceph-logging-and-debugging:
+* ``ceph config get <who>`` will dump the configuration for a specific
+ daemon or client (e.g., ``mds.a``), as stored in the monitors'
+ configuration database.
-Logs / Debugging
-================
+* ``ceph config set <who> <option> <value>`` will set a configuration
+ option in the monitors' configuration database.
-Sometimes you may encounter issues with Ceph that require
-modifying logging output and using Ceph's debugging. See `Debugging and
-Logging`_ for details on log rotation.
+* ``ceph config show <who>`` will show the reported running
+ configuration for a running daemon. These settings may differ from
+ those stored by the monitors if there are also local configuration
+ files in use or options have been overridden on the command line or
+ at run time. The source of the option values is reported as part
+ of the output.
-.. _Debugging and Logging: ../../troubleshooting/log-and-debug
+* ``ceph config assimilate-conf -i <input file> -o <output file>``
+ will ingest a configuration file from *input file* and move any
+ valid options into the monitors' configuration database. Any
+ settings that are unrecognized, invalid, or cannot be controlled by
+ the monitor will be returned in an abbreviated config file stored in
+ *output file*. This command is useful for transitioning from legacy
+ configuration files to centralized monitor-based configuration.
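+
+For example, a cluster-wide value could be stored in the monitors'
+database and the result then inspected (``osd_pool_default_size`` is
+shown only for illustration)::
+
+    ceph config set global osd_pool_default_size 3
+    ceph config dump
+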
-Example ceph.conf
-=================
+Help
+====
-.. literalinclude:: demo-ceph.conf
- :language: ini
+You can get help for a particular option with::
+
+ ceph config help <option>
+
+Note that this will use the configuration schema that is compiled into the running monitors. If you have a mixed-version cluster (e.g., during an upgrade), you might also want to query the option schema from a specific running daemon::
+
+ ceph daemon <name> config help [option]
+
+For example,::
+
+ $ ceph config help log_file
+ log_file - path to log file
+ (std::string, basic)
+ Default (non-daemon):
+ Default (daemon): /var/log/ceph/$cluster-$name.log
+ Can update at runtime: false
+ See also: [log_to_stderr,err_to_stderr,log_to_syslog,err_to_syslog]
+
+or::
+
+ $ ceph config help log_file -f json-pretty
+ {
+ "name": "log_file",
+ "type": "std::string",
+ "level": "basic",
+ "desc": "path to log file",
+ "long_desc": "",
+ "default": "",
+ "daemon_default": "/var/log/ceph/$cluster-$name.log",
+ "tags": [],
+ "services": [],
+ "see_also": [
+ "log_to_stderr",
+ "err_to_stderr",
+ "log_to_syslog",
+ "err_to_syslog"
+ ],
+ "enum_values": [],
+ "min": "",
+ "max": "",
+ "can_update_at_runtime": false
+ }
+
+The ``level`` property can be any of `basic`, `advanced`, or `dev`.
+The `dev` options are intended for use by developers, generally for
+testing purposes, and are not recommended for use by operators.
-.. _ceph-runtime-config:
Runtime Changes
===============
-Ceph allows you to make changes to the configuration of a ``ceph-osd``,
-``ceph-mon``, or ``ceph-mds`` daemon at runtime. This capability is quite
-useful for increasing/decreasing logging output, enabling/disabling debug
-settings, and even for runtime optimization. The following reflects runtime
-configuration usage::
-
- ceph tell {daemon-type}.{id or *} config set {name} {value}
-
-Replace ``{daemon-type}`` with one of ``osd``, ``mon`` or ``mds``. You may apply
-the runtime setting to all daemons of a particular type with ``*``, or specify
-a specific daemon's ID (i.e., its number or letter). For example, to increase
-debug logging for a ``ceph-osd`` daemon named ``osd.0``, execute the following::
-
- ceph tell osd.0 config set debug_osd 20
-
-In your ``ceph.conf`` file, you may use spaces when specifying a
-setting name. When specifying a setting name on the command line,
-ensure that you use an underscore or hyphen (``_`` or ``-``) between
-terms (e.g., ``debug osd`` becomes ``debug_osd``).
-
-
-Viewing a Configuration at Runtime
-==================================
-
-If your Ceph Storage Cluster is running, and you would like to see the
-configuration settings from a running daemon, execute the following::
-
- ceph daemon {daemon-type}.{id} config show | less
-
-If you are on a machine where osd.0 is running, the command would be::
+In most cases, Ceph allows you to make changes to the configuration of
+a daemon at runtime. This capability is quite useful for
+increasing/decreasing logging output, enabling/disabling debug
+settings, and even for runtime optimization.
- ceph daemon osd.0 config show | less
+Generally speaking, configuration options can be updated in the usual
+way via the ``ceph config set`` command. For example, to adjust the
+debug level on a specific OSD,::
-Reading Configuration Metadata at Runtime
-=========================================
+ ceph config set osd.123 debug_ms 20
-Information about the available configuration options is available via
-the ``config help`` command:
+Note that if the same option is also customized in a local
+configuration file, the monitor setting will be ignored (it has a
+lower priority than the local config file).
-::
+Override values
+---------------
- ceph daemon {daemon-type}.{id} config help | less
+You can also temporarily set an option using the `tell` or `daemon`
+interfaces on the Ceph CLI. These *override* values are ephemeral in
+that they only affect the running process and are discarded if the
+daemon or process restarts.
+Override values can be set in two ways:
-This metadata is primarily intended to be used when integrating other
-software with Ceph, such as graphical user interfaces. The output is
-a list of JSON objects, for example:
+#. From any host, we can send a message to a daemon over the network with::
-::
+ ceph tell <name> config set <option> <value>
- {
- "name": "mon_host",
- "type": "std::string",
- "level": "basic",
- "desc": "list of hosts or addresses to search for a monitor",
- "long_desc": "This is a comma, whitespace, or semicolon separated list of IP addresses or hostnames. Hostnames are resolved via DNS and all A or AAAA records are included in the search list.",
- "default": "",
- "daemon_default": "",
- "tags": [],
- "services": [
- "common"
- ],
- "see_also": [],
- "enum_values": [],
- "min": "",
- "max": ""
- }
+ For example,::
-type
-____
+ ceph tell osd.123 config set debug_osd 20
-The type of the setting, given as a C++ type name.
+ The `tell` command can also accept a wildcard for the daemon
+ identifier. For example, to adjust the debug level on all OSD
+ daemons,::
-level
-_____
+ ceph tell osd.* config set debug_osd 20
-One of `basic`, `advanced`, `dev`. The `dev` options are not intended
-for use outside of development and testing.
+#. From the host the process is running on, we can connect directly to
+ the process via a socket in ``/var/run/ceph`` with::
-desc
-____
+ ceph daemon <name> config set <option> <value>
-A short description -- this is a sentence fragment suitable for display
-in small spaces like a single line in a list.
+ For example,::
-long_desc
-_________
+ ceph daemon osd.4 config set debug_osd 20
-A full description of what the setting does, this may be as long as needed.
+Note that in the ``ceph config show`` command output these temporary
+values will be shown with a source of ``override``.
-default
-_______
-The default value, if any.
+Viewing runtime settings
+========================
-daemon_default
-______________
+You can see the current options set for a running daemon with the ``ceph config show`` command. For example,::
-An alternative default used for daemons (services) as opposed to clients.
+ ceph config show osd.0
-tags
-____
+will show you the (non-default) options for that daemon. You can also look at a specific option with::
-A list of strings indicating topics to which this setting relates. Examples
-of tags are `performance` and `networking`.
+ ceph config show osd.0 debug_osd
-services
-________
+or view all options (even those with default values) with::
-A list of strings indicating which Ceph services the setting relates to, such
-as `osd`, `mds`, `mon`. For settings that are relevant to any Ceph client
-or server, `common` is used.
+ ceph config show-with-defaults osd.0
-see_also
-________
+You can also observe settings for a running daemon by connecting to it from the local host via the admin socket. For example,::
-A list of strings indicating other configuration options that may also
-be of interest to a user setting this option.
+ ceph daemon osd.0 config show
-enum_values
-___________
+will dump all current settings,::
-Optional: a list of strings indicating the valid settings.
+ ceph daemon osd.0 config diff
-min, max
-________
+will show only non-default settings (as well as where the value came from: a config file, the monitor, an override, etc.), and::
-Optional: upper and lower (inclusive) bounds on valid settings.
-
-
-
-
-Running Multiple Clusters
-=========================
-
-With Ceph, you can run multiple Ceph Storage Clusters on the same hardware.
-Running multiple clusters provides a higher level of isolation compared to
-using different pools on the same cluster with different CRUSH rules. A
-separate cluster will have separate monitor, OSD and metadata server processes.
-When running Ceph with default settings, the default cluster name is ``ceph``,
-which means you would save your Ceph configuration file with the file name
-``ceph.conf`` in the ``/etc/ceph`` default directory.
-
-See `ceph-deploy new`_ for details.
-.. _ceph-deploy new:../ceph-deploy-new
-
-When you run multiple clusters, you must name your cluster and save the Ceph
-configuration file with the name of the cluster. For example, a cluster named
-``openstack`` will have a Ceph configuration file with the file name
-``openstack.conf`` in the ``/etc/ceph`` default directory.
-
-.. important:: Cluster names must consist of letters a-z and digits 0-9 only.
-
-Separate clusters imply separate data disks and journals, which are not shared
-between clusters. Referring to `Metavariables`_, the ``$cluster`` metavariable
-evaluates to the cluster name (i.e., ``openstack`` in the foregoing example).
-Various settings use the ``$cluster`` metavariable, including:
-
-- ``keyring``
-- ``admin socket``
-- ``log file``
-- ``pid file``
-- ``mon data``
-- ``mon cluster log file``
-- ``osd data``
-- ``osd journal``
-- ``mds data``
-- ``rgw data``
-
-See `General Settings`_, `OSD Settings`_, `Monitor Settings`_, `MDS Settings`_,
-`RGW Settings`_ and `Log Settings`_ for relevant path defaults that use the
-``$cluster`` metavariable.
-
-.. _General Settings: ../general-config-ref
-.. _OSD Settings: ../osd-config-ref
-.. _Monitor Settings: ../mon-config-ref
-.. _MDS Settings: ../../../cephfs/mds-config-ref
-.. _RGW Settings: ../../../radosgw/config-ref/
-.. _Log Settings: ../../troubleshooting/log-and-debug
-
-
-When creating default directories or files, you should use the cluster
-name at the appropriate places in the path. For example::
-
- sudo mkdir /var/lib/ceph/osd/openstack-0
- sudo mkdir /var/lib/ceph/mon/openstack-a
-
-.. important:: When running monitors on the same host, you should use
- different ports. By default, monitors use port 6789. If you already
- have monitors using port 6789, use a different port for your other cluster(s).
+ ceph daemon osd.0 config get debug_osd
-To invoke a cluster other than the default ``ceph`` cluster, use the
-``-c {filename}.conf`` option with the ``ceph`` command. For example::
+will report the value of a single option.
- ceph -c {cluster-name}.conf health
- ceph -c openstack.conf health
-.. _Hardware Recommendations: ../../../start/hardware-recommendations
-.. _Network Configuration Reference: ../network-config-ref
-.. _OSD Config Reference: ../osd-config-ref
-.. _Configuring Monitor/OSD Interaction: ../mon-osd-interaction
-.. _ceph-deploy new: ../../deployment/ceph-deploy-new#naming-a-cluster
--- /dev/null
+
+.. _ceph-conf-common-settings:
+
+Common Settings
+===============
+
+The `Hardware Recommendations`_ section provides some hardware guidelines for
+configuring a Ceph Storage Cluster. It is possible for a single :term:`Ceph
+Node` to run multiple daemons. For example, a single node with multiple drives
+may run one ``ceph-osd`` for each drive. Ideally, you will have a node for a
+particular type of process. For example, some nodes may run ``ceph-osd``
+daemons, other nodes may run ``ceph-mds`` daemons, and still other nodes may
+run ``ceph-mon`` daemons.
+
+Each node has a name identified by the ``host`` setting. Monitors also specify
+a network address and port (i.e., domain name or IP address) identified by the
+``addr`` setting. A basic configuration file will typically specify only
+minimal settings for each instance of monitor daemons. For example:
+
+.. code-block:: ini
+
+ [global]
+ mon_initial_members = ceph1
+ mon_host = 10.0.0.1
+
+
+.. important:: The ``host`` setting is the short name of the node (i.e., not
+ an fqdn). It is **NOT** an IP address either. Enter ``hostname -s`` on
+ the command line to retrieve the name of the node. Do not use ``host``
+ settings for anything other than initial monitors unless you are deploying
+ Ceph manually. You **MUST NOT** specify ``host`` under individual daemons
+ when using deployment tools like ``chef`` or ``ceph-deploy``, as those tools
+ will enter the appropriate values for you in the cluster map.
+
+
+.. _ceph-network-config:
+
+Networks
+========
+
+See the `Network Configuration Reference`_ for a detailed discussion about
+configuring a network for use with Ceph.
+
+
+Monitors
+========
+
+Ceph production clusters typically deploy with a minimum 3 :term:`Ceph Monitor`
+daemons to ensure high availability should a monitor instance crash. At least
+three (3) monitors ensures that the Paxos algorithm can determine which version
+of the :term:`Ceph Cluster Map` is the most recent from a majority of Ceph
+Monitors in the quorum.
+
+.. note:: You may deploy Ceph with a single monitor, but if the instance fails,
+ the lack of other monitors may interrupt data service availability.
+
+Ceph Monitors typically listen on port ``6789``. For example:
+
+.. code-block:: ini
+
+ [mon.a]
+ host = hostName
+ mon addr = 150.140.130.120:6789
+
+By default, Ceph expects that you will store a monitor's data under the
+following path::
+
+ /var/lib/ceph/mon/$cluster-$id
+
+You or a deployment tool (e.g., ``ceph-deploy``) must create the corresponding
+directory. With metavariables fully expressed and a cluster named "ceph", the
+foregoing directory would evaluate to::
+
+ /var/lib/ceph/mon/ceph-a
+
+For additional details, see the `Monitor Config Reference`_.
+
+.. _Monitor Config Reference: ../mon-config-ref
+
+
+.. _ceph-osd-config:
+
+
+Authentication
+==============
+
+.. versionadded:: Bobtail 0.56
+
+For Bobtail (v 0.56) and beyond, you should expressly enable or disable
+authentication in the ``[global]`` section of your Ceph configuration file. ::
+
+ auth cluster required = cephx
+ auth service required = cephx
+ auth client required = cephx
+
+Additionally, you should enable message signing. See `Cephx Config Reference`_ for details.
+
+.. important:: When upgrading, we recommend expressly disabling authentication
+ first, then perform the upgrade. Once the upgrade is complete, re-enable
+ authentication.
+
+.. _Cephx Config Reference: ../auth-config-ref
+
+
+.. _ceph-monitor-config:
+
+
+OSDs
+====
+
+Ceph production clusters typically deploy :term:`Ceph OSD Daemons` where one node
+has one OSD daemon running a filestore on one storage drive. A typical
+deployment specifies a journal size. For example:
+
+.. code-block:: ini
+
+ [osd]
+ osd journal size = 10000
+
+ [osd.0]
+ host = {hostname} #manual deployments only.
+
+
+By default, Ceph expects that you will store a Ceph OSD Daemon's data with the
+following path::
+
+ /var/lib/ceph/osd/$cluster-$id
+
+You or a deployment tool (e.g., ``ceph-deploy``) must create the corresponding
+directory. With metavariables fully expressed and a cluster named "ceph", the
+foregoing directory would evaluate to::
+
+ /var/lib/ceph/osd/ceph-0
+
+You may override this path using the ``osd data`` setting. We don't recommend
+changing the default location. Create the default directory on your OSD host.
+
+::
+
+ ssh {osd-host}
+ sudo mkdir /var/lib/ceph/osd/ceph-{osd-number}
+
+The ``osd data`` path ideally leads to a mount point with a hard disk that is
+separate from the hard disk storing and running the operating system and
+daemons. If the OSD is for a disk other than the OS disk, prepare it for
+use with Ceph, and mount it to the directory you just created::
+
+ ssh {new-osd-host}
+ sudo mkfs -t {fstype} /dev/{disk}
+ sudo mount -o user_xattr /dev/{hdd} /var/lib/ceph/osd/ceph-{osd-number}
+
+We recommend using the ``xfs`` file system when running
+:command:`mkfs`. (``btrfs`` and ``ext4`` are not recommended and no
+longer tested.)
+
+See the `OSD Config Reference`_ for additional configuration details.
+
+
+Heartbeats
+==========
+
+During runtime operations, Ceph OSD Daemons check up on other Ceph OSD Daemons
+and report their findings to the Ceph Monitor. You do not have to provide any
+settings. However, if you have network latency issues, you may wish to modify
+the settings.
+
+See `Configuring Monitor/OSD Interaction`_ for additional details.
+
+
+.. _ceph-logging-and-debugging:
+
+Logs / Debugging
+================
+
+Sometimes you may encounter issues with Ceph that require
+modifying logging output and using Ceph's debugging. See `Debugging and
+Logging`_ for details on log rotation.
+
+.. _Debugging and Logging: ../../troubleshooting/log-and-debug
+
+
+Example ceph.conf
+=================
+
+.. literalinclude:: demo-ceph.conf
+ :language: ini
+
+.. _ceph-runtime-config:
+
+
+
+Running Multiple Clusters
+=========================
+
+With Ceph, you can run multiple Ceph Storage Clusters on the same hardware.
+Running multiple clusters provides a higher level of isolation compared to
+using different pools on the same cluster with different CRUSH rules. A
+separate cluster will have separate monitor, OSD and metadata server processes.
+When running Ceph with default settings, the default cluster name is ``ceph``,
+which means you would save your Ceph configuration file with the file name
+``ceph.conf`` in the ``/etc/ceph`` default directory.
+
+See `ceph-deploy new`_ for details.
+
+When you run multiple clusters, you must name your cluster and save the Ceph
+configuration file with the name of the cluster. For example, a cluster named
+``openstack`` will have a Ceph configuration file with the file name
+``openstack.conf`` in the ``/etc/ceph`` default directory.
+
+.. important:: Cluster names must consist of letters a-z and digits 0-9 only.
+
+Separate clusters imply separate data disks and journals, which are not shared
+between clusters. Referring to `Metavariables`_, the ``$cluster`` metavariable
+evaluates to the cluster name (i.e., ``openstack`` in the foregoing example).
+Various settings use the ``$cluster`` metavariable, including:
+
+.. _Metavariables: ../ceph-conf#Metavariables
+
+- ``keyring``
+- ``admin socket``
+- ``log file``
+- ``pid file``
+- ``mon data``
+- ``mon cluster log file``
+- ``osd data``
+- ``osd journal``
+- ``mds data``
+- ``rgw data``
+
+See `General Settings`_, `OSD Settings`_, `Monitor Settings`_, `MDS Settings`_,
+`RGW Settings`_ and `Log Settings`_ for relevant path defaults that use the
+``$cluster`` metavariable.
+
+.. _General Settings: ../general-config-ref
+.. _OSD Settings: ../osd-config-ref
+.. _Monitor Settings: ../mon-config-ref
+.. _MDS Settings: ../../../cephfs/mds-config-ref
+.. _RGW Settings: ../../../radosgw/config-ref/
+.. _Log Settings: ../../troubleshooting/log-and-debug
+
+
+When creating default directories or files, you should use the cluster
+name at the appropriate places in the path. For example::
+
+ sudo mkdir /var/lib/ceph/osd/openstack-0
+ sudo mkdir /var/lib/ceph/mon/openstack-a
+
+.. important:: When running monitors on the same host, you should use
+ different ports. By default, monitors use port 6789. If you already
+ have monitors using port 6789, use a different port for your other cluster(s).
+
+To invoke a cluster other than the default ``ceph`` cluster, use the
+``-c {filename}.conf`` option with the ``ceph`` command. For example::
+
+ ceph -c {cluster-name}.conf health
+ ceph -c openstack.conf health
+
+
+.. _Hardware Recommendations: ../../../start/hardware-recommendations
+.. _Network Configuration Reference: ../network-config-ref
+.. _OSD Config Reference: ../osd-config-ref
+.. _Configuring Monitor/OSD Interaction: ../mon-osd-interaction
+.. _ceph-deploy new: ../../deployment/ceph-deploy-new#naming-a-cluster