fine with a public network only, but you may see significant performance
improvement with a second "cluster" network in a large cluster.
-We recommend running a Ceph Storage Cluster with two networks: a public
-(front-side) network and a cluster (back-side) network. To support two networks,
-each :term:`Ceph Node` will need to have more than one NIC. See `Hardware
-Recommendations - Networks`_ for additional details.
+It is possible to run a Ceph Storage Cluster with two networks: a public
+(front-side) network and a cluster (back-side) network. However, this approach
+complicates network configuration (both hardware and software) and does not
+usually have a significant impact on overall performance. For this reason,
+we generally recommend that dual-NIC systems either be configured with two
+IPs on the same network or be bonded.
+
+If, despite the complexity, one still wishes to use two networks, each
+:term:`Ceph Node` will need to have more than one NIC. See `Hardware
+Recommendations - Networks`_ for additional details.
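+
+If you do configure two networks, the split is expressed with the
+``public network`` and ``cluster network`` options in ``ceph.conf``. A
+minimal sketch, assuming hypothetical subnets (replace the CIDR ranges
+with your own):
+
+.. code-block:: ini
+
+    [global]
+    # Hypothetical example subnets; substitute your own ranges.
+    public network = 10.10.0.0/24
+    cluster network = 10.20.0.0/24
+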
.. (ditaa diagram of the public and cluster network layout omitted)
-There are several reasons to consider operating two separate networks:
-
-#. **Performance:** Ceph OSD Daemons handle data replication for the Ceph
- Clients. When Ceph OSD Daemons replicate data more than once, the network
- load between Ceph OSD Daemons easily dwarfs the network load between Ceph
- Clients and the Ceph Storage Cluster. This can introduce latency and
- create a performance problem. Recovery and rebalancing can
- also introduce significant latency on the public network. See
- `Scalability and High Availability`_ for additional details on how Ceph
- replicates data. See `Monitor / OSD Interaction`_ for details on heartbeat
- traffic.
-
-#. **Security**: While most people are generally civil, a very tiny segment of
- the population likes to engage in what's known as a Denial of Service (DoS)
- attack. When traffic between Ceph OSD Daemons gets disrupted, placement
- groups may no longer reflect an ``active + clean`` state, which may prevent
- users from reading and writing data. A great way to defeat this type of
- attack is to maintain a completely separate cluster network that doesn't
- connect directly to the internet. Also, consider using `Message Signatures`_
- to defeat spoofing attacks.
-
-
IP Tables
=========
Monitor IP Tables
-----------------
-Ceph Monitors listen on port ``6789`` by default. Additionally, Ceph Monitors
-always operate on the public network. When you add the rule using the example
-below, make sure you replace ``{iface}`` with the public network interface
-(e.g., ``eth0``, ``eth1``, etc.), ``{ip-address}`` with the IP address of the
-public network and ``{netmask}`` with the netmask for the public network. ::
+Ceph Monitors listen on ports ``3300`` (for the new v2 protocol) and
+``6789`` (for the old v1 protocol) by default. Additionally, Ceph
+Monitors always operate on the public network. When you add the rule
+using the example below, make sure you replace ``{iface}`` with the
+public network interface (e.g., ``eth0``, ``eth1``, etc.),
+``{ip-address}`` with the IP address of the public network, and
+``{netmask}`` with the netmask for the public network. ::
-    sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6789 -j ACCEPT
+    sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --match multiport --dports 3300,6789 -j ACCEPT
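+
+To confirm that a monitor is actually listening on both ports, a quick
+check with ``ss`` (from the ``iproute2`` package, assumed to be
+available on the host) can help. ::
+
+    sudo ss -tlnp | grep ceph-mon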
Ceph Daemons
============
-Ceph has one network configuration requirement that applies to all daemons: the
-Ceph configuration file **MUST** specify the ``host`` for each daemon. Ceph also
-requires that a Ceph configuration file specify the monitor IP address and its
-port.
-
-.. important:: Some deployment tools (e.g., ``ceph-deploy``, Chef) may create a
- configuration file for you. **DO NOT** set these values if the deployment
- tool does it for you.
-
-.. tip:: The ``host`` setting is the short name of the host (i.e., not
- an fqdn). It is **NOT** an IP address either. Enter ``hostname -s`` on
- the command line to retrieve the name of the host.
-
+The monitor daemons are each configured to bind to a specific IP
+address. These addresses are normally configured by your deployment
+tool. Other components in the Ceph system discover the monitors via
+the ``mon host`` configuration option, normally specified in the
+``[global]`` section of the ``ceph.conf`` file.
.. code-block:: ini
- [mon.a]
-
- host = {hostname}
- mon addr = {ip-address}:6789
+ [global]
+ mon host = 10.0.0.2, 10.0.0.3, 10.0.0.4
- [osd.0]
- host = {hostname}
+The ``mon host`` value can be a list of IP addresses or a name that is
+looked up via DNS. In the case of a DNS name with multiple A or AAAA
+records, all records are probed in order to discover a monitor. Once
+one monitor is reached, all other current monitors are discovered, so
+the ``mon host`` configuration option only needs to be sufficiently up
+to date such that a client can reach one monitor that is currently online.
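+
+For example, a DNS-based configuration might look like the following,
+where ``mons.example.com`` is a hypothetical name with one A or AAAA
+record per monitor:
+
+.. code-block:: ini
+
+    [global]
+    # Hypothetical DNS name resolving to one address per monitor.
+    mon host = mons.example.com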
-
-You do not have to set the host IP address for a daemon. If you have a static IP
-configuration and both public and cluster networks running, the Ceph
-configuration file may specify the IP address of the host for each daemon. To
-set a static IP address for a daemon, the following option(s) should appear in
-the daemon instance sections of your ``ceph.conf`` file.
+The MGR, OSD, and MDS daemons will bind to any available address and
+do not require any special configuration. However, it is possible to
+bind them to a specific IP address with the ``public addr`` (and/or,
+in the case of OSD daemons, the ``cluster addr``) configuration
+option. For example:
.. code-block:: ini
public addr = {host-public-ip-address}
cluster addr = {host-cluster-ip-address}
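+
+For instance, an OSD that should use one address on each network could
+be pinned like this (the ``[osd.0]`` section name and both addresses
+are hypothetical placeholders):
+
+.. code-block:: ini
+
+    [osd.0]
+    # Hypothetical addresses, one on each network.
+    public addr = 10.10.0.5
+    cluster addr = 10.20.0.5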
-
.. topic:: One NIC OSD in a Two Network Cluster
Generally, we do not recommend deploying an OSD host with a single NIC in a