Bootstrap a new cluster
=======================

To create a new cluster, you need to know which *IP address* to use
for the cluster's first monitor. This is normally just the IP for the
first cluster node. If there are multiple networks and interfaces, be
sure to choose one that will be accessible by any hosts accessing the
Ceph cluster.

To bootstrap the cluster, run the following command::
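
  # cephadm bootstrap --mon-ip *<mon-ip>*
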
Cephadm logs to the ``cephadm`` cluster log channel, which means you
can monitor progress in real time with::

  # ceph -W cephadm

By default it will show info-level events and above. To see
debug-level messages too::

  # ceph config set mgr mgr/cephadm/log_to_cluster_level debug
  # ceph -W cephadm --watch-debug

Be careful: the debug messages are very verbose!

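When you are finished debugging, you can clear that override again so
the channel returns to its default level::

  # ceph config rm mgr mgr/cephadm/log_to_cluster_level
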
You can see recent events with::

  # ceph log last cephadm

These events are also logged to the ``ceph.cephadm.log`` file on
monitor hosts and/or to the monitor-daemon stderr.
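
For example, assuming the conventional ``/var/log/ceph`` log directory
on a monitor host (the exact location may differ in your deployment),
you can follow the file with::

  # tail -f /var/log/ceph/ceph.cephadm.log
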

Adding hosts to the cluster
===========================

To add a new host, tell Ceph that the new node is part of the
cluster::

  # ceph orch host add *newhost*

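For example, for a new host named ``host2`` (an illustrative name)::

  # ceph orch host add host2
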
Deploying additional monitors
=============================
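
A typical cluster runs three or five monitors. As a sketch that
mirrors the ``ceph orch apply mgr`` form shown for manager daemons
below, additional monitors can be requested from the orchestrator
with::

  # ceph orch apply mon *<number-of-monitors>* [*<host1>* ...]
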

Deploying OSDs
==============

To add OSDs to the cluster, you have two options:

1) You need to know the device name for the block device (hard disk
   or SSD) that will be used (a command to list candidate devices is
   shown below, after this list). Then::

     # ceph orch osd create *<host>*:*<path-to-device>*

   For example, to deploy an OSD on host *newhost*'s SSD::

     # ceph orch osd create newhost:/dev/disk/by-id/ata-WDC_WDS200T2B0A-00SM50_182294800028

2) You need to describe your disk setup by its properties (Drive
   Groups); see the drivegroups_ documentation for details. Then::

     # ceph orchestrator osd create -i my_drivegroups.yml

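If your Ceph version provides it, the devices that the orchestrator
considers available on each host can be listed with the following
command (shown here only as a hint for finding device names)::

  # ceph orch device ls
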
.. _drivegroups: drivegroups

Deploying manager daemons
=========================

It is a good idea to have at least one backup manager daemon. To
deploy one or more new manager daemons::

  # ceph orch apply mgr *<new-num-mgrs>* [*<host1>* ...]

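For example, to ask for two manager daemons placed on two
(illustrative) hosts::

  # ceph orch apply mgr 2 host1 host2
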
Deploying MDSs
==============