ceph-deploy mgr create node1
#. Add three OSDs. For the purposes of these instructions, we assume you have an
   unused disk in each node called ``/dev/vdb``. *Be sure that the device is not currently in use and does not contain any important data.*
ceph-deploy osd create {ceph-node}:{device}
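The placeholder form above expands to one command per node. A minimal sketch, assuming three hosts named ``node1`` through ``node3`` (hypothetical names) each with an unused ``vdb`` device; the commands are echoed rather than executed so the expansion can be checked safely::

```shell
# Sketch only: node names and the device are assumptions for illustration.
# Echo each ceph-deploy command instead of running it.
nodes="node1 node2 node3"
device="vdb"
cmd_count=0
for node in $nodes; do
    echo "ceph-deploy osd create ${node}:${device}"
    cmd_count=$((cmd_count + 1))
done
```

Substitute your actual hostnames and device, and remove the ``echo`` once the output looks right.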
run multiple Ceph Monitors so that the failure of a single Ceph
Monitor will not bring down the Ceph Storage Cluster. Ceph uses the
Paxos algorithm, which requires a majority of monitors (i.e., greater
than *N/2* where *N* is the number of monitors) to form a quorum.
Odd numbers of monitors tend to be better, although this is not required.
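The majority rule can be checked with a little arithmetic. A sketch (not part of any Ceph tooling) that prints, for each monitor count, the quorum size (a strict majority, ``N/2 + 1`` with integer division) and how many monitor failures the cluster can tolerate while keeping quorum:

```shell
# Strict majority needed for a Paxos quorum: floor(N/2) + 1.
quorum_size() { echo $(( $1 / 2 + 1 )); }

# Monitor failures tolerated while a quorum still holds.
failures_tolerated() { echo $(( $1 - ($1 / 2 + 1) )); }

for n in 1 2 3 4 5; do
    echo "$n monitors: quorum=$(quorum_size "$n"), tolerates $(failures_tolerated "$n") failure(s)"
done
```

Note that three and four monitors each tolerate only one failure, which is why odd monitor counts are preferred: the extra even monitor adds no fault tolerance.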
.. tip:: If you did not define the ``public network`` option above then
additional manager daemons ensures that if one daemon or host fails, another
one can take over without interrupting service.
To deploy additional manager daemons::
ceph-deploy mgr create node2 node3