From 82fa8f261a02079d158d45f94ad5824c25ed2854 Mon Sep 17 00:00:00 2001
From: Sage Weil <sage@redhat.com>
Date: Sat, 7 Mar 2020 11:22:47 -0600
Subject: [PATCH] doc/cephadm: fix formatting for osd section

Signed-off-by: Sage Weil <sage@redhat.com>
---
 doc/cephadm/index.rst | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/doc/cephadm/index.rst b/doc/cephadm/index.rst
index 0cbee6e170c..6ccd105aaf1 100644
--- a/doc/cephadm/index.rst
+++ b/doc/cephadm/index.rst
@@ -225,21 +225,22 @@ Deploying OSDs
 ==============
 
 To add OSDs to the cluster, you have two options:
 
-1) You need to know the device name for the block device (hard disk or SSD)
-that will be used. Then,::
-  # ceph orch osd create **<host>**:**<path-to-device>**
+#. You need to know the device name for the block device (hard disk or
+   SSD) that will be used. Then,::
 
-For example, to deploy an OSD on host *newhost*'s SSD,::
+     # ceph orch osd create **<host>**:**<path-to-device>**
 
-  # ceph orch osd create newhost:/dev/disk/by-id/ata-WDC_WDS200T2B0A-00SM50_182294800028
+   For example, to deploy an OSD on host *newhost*'s SSD,::
+     # ceph orch osd create newhost:/dev/disk/by-id/ata-WDC_WDS200T2B0A-00SM50_182294800028
 
 
-2) You need to describe your disk setup by it's properties (Drive Groups)
-Link to DriveGroup docs.::
-  # ceph orchestrator osd create -i my_drivegroups.yml
+#. You need to describe your disk setup by it's properties (Drive Groups)
+   Link to DriveGroup docs.::
+
+     # ceph orch osd create -i my_drivegroups.yml
 
 
 .. _drivegroups: drivegroups::
 
 
-- 
2.39.5
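
For reference, a rough sketch of what a Drive Groups file such as ``my_drivegroups.yml`` might contain, assuming the OSD service specification format described in the drivegroups documentation; the spec name, host pattern, and device filter below are illustrative placeholders, not values taken from this patch::

    # Hypothetical OSD service spec (Drive Groups) -- values are examples only.
    service_type: osd
    service_id: example_drivegroup    # arbitrary name for this spec
    placement:
      host_pattern: '*'               # match every host known to the orchestrator
    data_devices:
      all: true                       # consume all available, unused block devices

A file like this would then be passed to ``ceph orch osd create -i my_drivegroups.yml`` as shown in the hunk above; see the drivegroups docs linked from this section for the authoritative schema and filter options.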