From 917a6f9ba493b31e8f3c1f809d56463c7795347e Mon Sep 17 00:00:00 2001
From: Sage Weil <sage@redhat.com>
Date: Mon, 10 Jul 2017 10:04:24 -0400
Subject: [PATCH] doc/start/quick-ceph-deploy: fix typos etc

Signed-off-by: Sage Weil <sage@redhat.com>
---
 doc/start/quick-ceph-deploy.rst | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/doc/start/quick-ceph-deploy.rst b/doc/start/quick-ceph-deploy.rst
index 6aac745758625..cc3899a7086f0 100644
--- a/doc/start/quick-ceph-deploy.rst
+++ b/doc/start/quick-ceph-deploy.rst
@@ -123,7 +123,7 @@ configuration details, perform the following steps using ``ceph-deploy``.
       ceph-deploy mgr create node1

 #. Add three OSDs. For the purposes of these instructions, we assume you have an
-   unused disk in each node called ``/dev/vdb``. *Be sure that whatever device you use is not currently in use and does not contain any important data.*
+   unused disk in each node called ``/dev/vdb``. *Be sure that the device is not currently in use and does not contain any important data.*

       ceph-deploy osd create {ceph-node}:{device}

@@ -194,7 +194,7 @@ Manager to run. For high availability, Ceph Storage Clusters typically run
 multiple Ceph Monitors so that the failure of a single Ceph Monitor will not
 bring down the Ceph Storage Cluster. Ceph uses the Paxos algorithm, which
 requires a majority of monitors (i.e., greather
-than *N/2* where *N* is the number of monitors) for form a quorum.
+than *N/2* where *N* is the number of monitors) to form a quorum.
 Odd numbers of monitors tend to be better, although this is not required.

 .. tip: If you did not define the ``public network`` option above then
@@ -229,7 +229,7 @@ The Ceph Manager daemons operate in an active/standby pattern. Deploying
 additional manager daemons ensures that if one daemon or host fails, another
 one can take over without interrupting service.

-To deploy additional manager daemons,::
+To deploy additional manager daemons::

   ceph-deploy mgr create node2 node3
-- 
2.39.5
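
A short aside on the quorum rule described in the second hunk: a majority is
any monitor count strictly greater than *N/2*. Below is a minimal illustrative
sketch in Python; it is not part of the patch and does not call any Ceph API::

  # Illustrative only: the strict-majority (Paxos quorum) rule described
  # in doc/start/quick-ceph-deploy.rst, not Ceph code.
  def quorum_size(n_monitors: int) -> int:
      """Smallest integer strictly greater than n_monitors / 2."""
      return n_monitors // 2 + 1

  for n in (1, 2, 3, 4, 5):
      print(f"{n} monitors: quorum of {quorum_size(n)}, "
            f"tolerates {n - quorum_size(n)} failure(s)")

Three monitors tolerate one failure, while four monitors still tolerate only
one; that is why the text notes that odd numbers of monitors tend to be better.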