From: Sage Weil
Date: Sat, 7 Mar 2020 15:13:23 +0000 (-0600)
Subject: doc/cephadm: fix formatting, typos
X-Git-Tag: v15.1.1~67^2~2
X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=e0161132bcbaa357cdc642e1bbc367ea42546957;p=ceph-ci.git

doc/cephadm: fix formatting, typos

No need for [monitor 1] when accessing the CLI; this can happen from
any node or container that has a working CLI.

Signed-off-by: Sage Weil
---

diff --git a/doc/cephadm/index.rst b/doc/cephadm/index.rst
index a6e76d3467c..42096ac8df7 100644
--- a/doc/cephadm/index.rst
+++ b/doc/cephadm/index.rst
@@ -55,12 +55,11 @@ your Linux distribution::
 Bootstrap a new cluster
 =======================
 
-To create a new cluster, you need to know:
-
-* Which *IP address* to use for the cluster's first monitor. This is
-  normally just the IP for the first cluster node. If there are
-  multiple networks and interfaces, be sure to choose one that will be
-  accessible by any hosts accessing the Ceph cluster.
+To create a new cluster, you need to know which *IP address* to use
+for the cluster's first monitor. This is normally just the IP for the
+first cluster node. If there are multiple networks and interfaces, be
+sure to choose one that will be accessible by any hosts accessing the
+Ceph cluster.
 
 To bootstrap the cluster run the following command::
 
@@ -109,19 +108,19 @@ Watching cephadm log messages
 Cephadm logs to the ``cephadm`` cluster log channel, which means you can
 monitor progress in realtime with::
 
-  ceph -W cephadm
+  # ceph -W cephadm
 
 By default it will show info-level events and above. To see
-debug-level messages too,::
+debug-level messages too::
 
-  ceph config set mgr mgr/cephadm/log_to_cluster_level debug
-  ceph -W cephadm --watch-debug
+  # ceph config set mgr mgr/cephadm/log_to_cluster_level debug
+  # ceph -W cephadm --watch-debug
 
-Be careful: the debug messagse are very verbose!
+Be careful: the debug messages are very verbose!
 
 You can see recent events with::
 
-  ceph log last cephadm
+  # ceph log last cephadm
 
 These events are also logged to the ``ceph.cephadm.log`` file on
 monitor hosts and/or to the monitor-daemon stderr.
@@ -138,7 +137,7 @@ For each new host you'd like to add to the cluster, you need to do two things:
 
 #. Tell Ceph that the new node is part of the cluster::
 
-     [monitor 1] # ceph orch host add *newhost*
+     # ceph orch host add *newhost*
 
 Deploying additional monitors
 =============================
@@ -164,18 +163,18 @@ To add OSDs to the cluster, you have two options:
 1) You need to know the device name for the block device (hard disk or
    SSD) that will be used. Then,::
 
-     [monitor 1] # ceph orch osd create *<host>*:*<device-path>*
+     # ceph orch osd create *<host>*:*<device-path>*
 
    For example, to deploy an OSD on host *newhost*'s SSD,::
 
-     [monitor 1] # ceph orch osd create newhost:/dev/disk/by-id/ata-WDC_WDS200T2B0A-00SM50_182294800028
+     # ceph orch osd create newhost:/dev/disk/by-id/ata-WDC_WDS200T2B0A-00SM50_182294800028
 
 2) You need to describe your disk setup by it's properties (Drive Groups)
 
    Link to DriveGroup docs.::
 
-     [monitor 1] # ceph orchestrator osd create -i my_drivegroups.yml
+     # ceph orchestrator osd create -i my_drivegroups.yml
 
 .. _drivegroups: drivegroups::
 
@@ -186,7 +185,7 @@ Deploying manager daemons
 It is a good idea to have at least one backup manager daemon. To
 deploy one or more new manager daemons,::
 
-  [monitor 1] # ceph orch apply mgr *<new-num-mgrs>* [*<host1>* ...]
+  # ceph orch apply mgr *<new-num-mgrs>* [*<host1>* ...]
 
 Deploying MDSs
 ==============
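
As the commit message notes, the ``ceph`` commands in this patch are not tied
to a monitor host; they work from any node or container that has a working
CLI. A minimal sketch of one way to get such a CLI on an arbitrary cluster
host, assuming ``cephadm`` is installed there and the cluster's ``ceph.conf``
and admin keyring are available under ``/etc/ceph``::

  # cephadm shell             # containerized shell with a working ceph CLI
  # cephadm shell -- ceph -s  # or run a single CLI command and exit

Any other host or container that has the ``ceph`` client binary plus the
client config and keyring works just as well.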