production.
Following the same configuration as `Installation (Quick)`_, we will set up a
cluster with ``node1`` as the monitor node, and ``node2`` and ``node3`` for
OSD nodes.

.. ditaa::

           /------------------\         /----------------\
           |    Admin Node    |         |      node1     |
           |                  +-------->+                |
           \------------------/         \----------------/
Bootstrapping a monitor (a Ceph Storage Cluster, in theory) requires
a number of things:

- **Unique Identifier:** The ``fsid`` is a unique identifier for the cluster,
  and stands for File System ID from the days when the Ceph Storage Cluster was
  principally for the Ceph Filesystem. Ceph now supports native interfaces,
  block devices, and object storage gateway interfaces too, so ``fsid`` is a
bit of a misnomer.
- **Cluster Name:** Ceph clusters have a cluster name, which is a simple string
without spaces. The default cluster name is ``ceph``, but you may specify
  a different cluster name. Overriding the default cluster name is
  especially useful when you are working with multiple clusters and you need to
  clearly understand which cluster you are working with.

  For example, when you run multiple clusters in a `federated architecture`_,
the cluster name (e.g., ``us-west``, ``us-east``) identifies the cluster for
  the current CLI session. **Note:** To identify the cluster name on the
  command line interface, specify the Ceph configuration file with the
cluster name (e.g., ``ceph.conf``, ``us-west.conf``, ``us-east.conf``, etc.).
Also see CLI usage (``ceph --cluster {cluster-name}``).

- **Monitor Name:** Each monitor instance within a cluster has a unique name.
In common practice, the Ceph Monitor name is the host name (we recommend one
  Ceph Monitor per host, and no commingling of Ceph OSD Daemons with
Ceph Monitors). You may retrieve the short hostname with ``hostname -s``.
- **Monitor Map:** Bootstrapping the initial monitor(s) requires you to
  generate a monitor map. The monitor map requires the ``fsid``, the cluster
name (or uses the default), and at least one host name and its IP address.
- **Monitor Keyring**: Monitors communicate with each other via a
  secret key. You must generate a keyring with a monitor secret and provide
  it when bootstrapping the initial monitor(s).

- **Administrator Keyring**: To use the ``ceph`` CLI tools, you must have
a ``client.admin`` user. So you must generate the admin user and keyring,
and you must also add the ``client.admin`` user to the monitor keyring.

The foregoing requirements do not imply the creation of a Ceph configuration
file. However, as a best practice, we recommend creating a Ceph configuration
file and populating it with the ``fsid``, the ``mon initial members`` and the
``mon host`` settings.
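
Putting those three settings together, a minimal configuration file might look
like this (the ``fsid`` and monitor address are illustrative values matching
the examples later in this procedure; substitute your own)::

        [global]
        fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
        mon initial members = node1
        mon host = 192.168.0.1
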
You can get and set all of the monitor settings at runtime as well. However,
a Ceph configuration file may contain only those settings that override the
default values. When you add settings to a Ceph configuration file, these
settings override the default settings. Maintaining those settings in a
Ceph configuration file makes it easier to maintain your cluster.
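
For example, you can inspect and change a monitor's effective settings at
runtime through its admin socket (a sketch; it assumes a monitor named
``mon.node1`` running on the local host)::

        # show the value currently in effect for a single option
        sudo ceph daemon mon.node1 config get osd_pool_default_size

        # override it at runtime without editing the configuration file
        sudo ceph tell 'mon.*' injectargs '--osd_pool_default_size 3'
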
The procedure is as follows:

#. Log in to the initial monitor node(s)::

        ssh {hostname}

   For example::
ssh node1

#. Ensure you have a directory for the Ceph configuration file. By default,
   Ceph uses ``/etc/ceph``. When you install ``ceph``, the installer will
create the ``/etc/ceph`` directory automatically. ::

        ls /etc/ceph
**Note:** Deployment tools may remove this directory when purging a
cluster (e.g., ``ceph-deploy purgedata {node-name}``, ``ceph-deploy purge
{node-name}``).

#. Create a Ceph configuration file. By default, Ceph uses
``ceph.conf``, where ``ceph`` reflects the cluster name. ::
sudo vim /etc/ceph/ceph.conf

#. Generate a unique ID (i.e., ``fsid``) for your cluster. ::
uuidgen

#. Add the unique ID to your Ceph configuration file. ::
fsid = {UUID}

   For example::
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993

#. Add the initial monitor(s) to your Ceph configuration file. ::
mon initial members = {hostname}[,{hostname}]

   For example::
mon initial members = node1

#. Add the IP address(es) of the initial monitor(s) to your Ceph configuration
   file and save the file. ::
mon host = {ip-address}[,{ip-address}]
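
   For example, using an illustrative address on the monitors' network::

        mon host = 192.168.0.1
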
#. Generate an administrator keyring, generate a ``client.admin`` user and add
   the user to the keyring. ::
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'

#. Add the ``client.admin`` key to the ``ceph.mon.keyring``. ::
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
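
   This assumes the monitor keyring already exists at ``/tmp/ceph.mon.keyring``.
   If it does not, you can generate it first with a monitor secret (the ``mon.``
   identity and capability shown are the usual monitor-bootstrap convention)::

        ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'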

#. Generate a monitor map using the hostname(s), host IP address(es) and the FSID.
   Save it as ``/tmp/monmap``::
monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap
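
   For example, using the hostname, IP address and ``fsid`` from the earlier
   examples (illustrative values)::

        monmaptool --create --add node1 192.168.0.1 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap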

#. Populate the monitor daemon(s) with the monitor map and keyring. For
   example, on ``node1``::

        sudo -u ceph ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

#. Consider settings for a Ceph configuration file. Common settings include
the following::
[global]
osd pool default size = {n} # Write an object n times.
        osd pool default min size = {n}  # Allow writing n copies in a degraded state.
osd pool default pg num = {n}
        osd pool default pgp num = {n}
osd crush chooseleaf type = {n}
   In the foregoing example, the ``[global]`` section of the configuration might
   look like this::
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 333
        osd pool default pgp num = 333
osd crush chooseleaf type = 1
#. Touch the ``done`` file.

   Mark that the monitor is created and ready to be started::

        sudo touch /var/lib/ceph/mon/ceph-node1/done

#. Verify that the monitor is running. ::
ceph -s
You should see output that the monitor you started is up and running, and
you should see a health error indicating that placement groups are stuck
   inactive. It should look something like this::
cluster a7f64266-0894-4f1e-a635-d0aeaca0e993
health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
On each node where you run a ceph-mon daemon, you should also set up a ceph-mgr daemon.
See :ref:`mgr-administrator-guide`
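
As a minimal sketch of that setup (the authoritative steps are in the linked
guide; the key capabilities and paths below follow the usual ``ceph-mgr``
conventions for a manager daemon running on ``node1``)::

        # create a key for the manager daemon and install it where ceph-mgr looks for it
        sudo mkdir -p /var/lib/ceph/mgr/ceph-node1
        sudo ceph auth get-or-create mgr.node1 mon 'allow profile mgr' osd 'allow *' mds 'allow *' | sudo tee /var/lib/ceph/mgr/ceph-node1/keyring
        sudo chown -R ceph:ceph /var/lib/ceph/mgr/ceph-node1

        # start the daemon
        sudo systemctl start ceph-mgr@node1
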
Adding OSDs
===========

Once you have your initial monitor(s) running, you should add OSDs. Your cluster
cannot reach an ``active + clean`` state until you have enough OSDs to handle the
number of copies of an object (e.g., ``osd pool default size = 2`` requires at
least two OSDs). After bootstrapping your monitor, your cluster has a default
CRUSH map; however, the CRUSH map doesn't have any Ceph OSD Daemons mapped to
a Ceph Node.
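
You can confirm this by listing the CRUSH tree; until OSDs are added, only the
default root bucket will appear::

        ceph osd tree
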
Ceph provides the ``ceph-disk`` utility, which can prepare a disk, partition or
directory for use with Ceph. The ``ceph-disk`` utility creates the OSD ID by
incrementing the index. Additionally, ``ceph-disk`` will add the new OSD to the
CRUSH map under the host for you. Execute ``ceph-disk -h`` for CLI details.
The ``ceph-disk`` utility automates the steps of the `Long Form`_ below. To
create the first two OSDs with the short form procedure, execute the following
on ``node2`` and ``node3``:
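
#. Prepare the OSD. (The options below are a sketch consistent with the ``fsid``
   example used earlier in this document; run ``ceph-disk -h`` to confirm the
   exact flags available on your release.)::

        ssh node2
        sudo ceph-disk prepare --cluster ceph --cluster-uuid a7f64266-0894-4f1e-a635-d0aeaca0e993 /dev/hdd1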

#. Activate the OSD::

        sudo ceph-disk activate {data-path} [--activate-key {path}]

   For example::
sudo ceph-disk activate /dev/hdd1

Long Form
---------

#. Create the OSD, assuming the ``client.bootstrap-osd`` key is present on the
   machine. You may alternatively execute this command as ``client.admin`` on a
   different host where that key is present.::

ID=$(echo "{\"cephx_secret\": \"$OSD_SECRET\"}" | \
ceph osd new $UUID -i - \
-n client.bootstrap-osd -k /var/lib/ceph/bootstrap-osd/ceph.keyring)
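
   The command above relies on ``$UUID`` and ``$OSD_SECRET`` having been set
   beforehand, for example::

        UUID=$(uuidgen)
        OSD_SECRET=$(ceph-authtool --gen-print-key)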

#. Create the default directory on your new OSD. ::

        mkdir /var/lib/ceph/osd/ceph-$ID

#. If the OSD is for a drive other than the OS drive, prepare it
for use with Ceph, and mount it to the directory you just created. ::
mkfs.xfs /dev/{DEV}
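        # mount the freshly created filesystem at the new OSD's data directory
        # ({DEV} is the same placeholder device as above)
        mount /dev/{DEV} /var/lib/ceph/osd/ceph-$ID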
chown -R ceph:ceph /var/lib/ceph/osd/ceph-$ID

#. After you add an OSD to Ceph, the OSD is in your configuration. However,
   it is not yet running. You must start
your new OSD before it can begin receiving data.
For modern systemd distributions::
systemctl enable ceph-osd@$ID
systemctl start ceph-osd@$ID

   For example::

        systemctl enable ceph-osd@12
        systemctl start ceph-osd@12

Adding MDS
==========

In the following instructions, ``{id}`` is an arbitrary name, such as the hostname of the machine.

#. Create a keyring.::
ceph-authtool --create-keyring /var/lib/ceph/mds/{cluster-name}-{id}/keyring --gen-key -n mds.{id}

#. Import the keyring and set caps.::
ceph auth add mds.{id} osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/{cluster}-{id}/keyring

#. Add to ceph.conf.::
[mds.{id}]
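        # host name of the node that runs this metadata server
        # ({id} is the same placeholder used above; the MDS id commonly matches the host name)
        host = {id}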

Summing It Up
=============

Once you have your monitor and two OSDs up and running, you can watch the
placement groups peer by executing the following::
ceph -w

To view the tree, execute the following::
ceph osd tree

You should see output that looks something like this::
# id weight type name up/down reweight
-1 2 root default
-2 2 host node1
0 1 osd.0 up 1
-3 1 host node2
        1   1           osd.1   up      1

To add (or remove) additional monitors, see `Add/Remove Monitors`_.
To add (or remove) additional Ceph OSD Daemons, see `Add/Remove OSDs`_.