From: Alfredo Deza
Date: Wed, 29 Nov 2017 16:13:47 +0000 (-0500)
Subject: doc/install use ceph-volume in manual deployment steps
X-Git-Tag: v12.2.5~152^2~13
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=1b47ce9dc62ddaf7ff222b235bd3bb9c497a0f7e;p=ceph.git

doc/install use ceph-volume in manual deployment steps

Signed-off-by: Alfredo Deza
(cherry picked from commit 3ed739e541ec45ce21845768bc043283315cf232)
---

diff --git a/doc/install/manual-deployment.rst b/doc/install/manual-deployment.rst
index affab5be9193..bfe1aa1875a6 100644
--- a/doc/install/manual-deployment.rst
+++ b/doc/install/manual-deployment.rst
@@ -311,36 +311,91 @@ a Ceph Node.
 
 Short Form
 ----------
-Ceph provides the ``ceph-disk`` utility, which can prepare a disk, partition or
-directory for use with Ceph. The ``ceph-disk`` utility creates the OSD ID by
-incrementing the index. Additionally, ``ceph-disk`` will add the new OSD to the
-CRUSH map under the host for you. Execute ``ceph-disk -h`` for CLI details.
-The ``ceph-disk`` utility automates the steps of the `Long Form`_ below. To
+Ceph provides the ``ceph-volume`` utility, which can prepare a logical volume, disk, or partition
+for use with Ceph. The ``ceph-volume`` utility creates the OSD ID by
+incrementing the index. Additionally, ``ceph-volume`` will add the new OSD to the
+CRUSH map under the host for you. Execute ``ceph-volume -h`` for CLI details.
+The ``ceph-volume`` utility automates the steps of the `Long Form`_ below. To
 create the first two OSDs with the short form procedure, execute the following
 on ``node2`` and ``node3``:
+
+bluestore
+^^^^^^^^^
+#. Create the OSD. ::
+
+     ssh {node-name}
+     sudo ceph-volume lvm create --data {data-path}
+
+   For example::
+
+     ssh node1
+     sudo ceph-volume lvm create --data /dev/hdd1
+
+Alternatively, the creation process can be split into two phases (prepare and
+activate):
+
 #. Prepare the OSD. ::
 
      ssh {node-name}
-     sudo ceph-disk prepare --cluster {cluster-name} --cluster-uuid {uuid} {data-path} [{journal-path}]
+     sudo ceph-volume lvm prepare --data {data-path}
 
    For example::
 
      ssh node1
-     sudo ceph-disk prepare --cluster ceph --cluster-uuid a7f64266-0894-4f1e-a635-d0aeaca0e993 --fs-type ext4 /dev/hdd1
+     sudo ceph-volume lvm prepare --data /dev/hdd1
+
+   Once prepared, the ``ID`` and ``FSID`` of the prepared OSD are required for
+   activation. These can be obtained by listing the OSDs on the current server::
+
+      sudo ceph-volume lvm list
 
 #. Activate the OSD::
 
-     sudo ceph-disk activate {data-path} [--activate-key {path}]
+     sudo ceph-volume lvm activate {ID} {FSID}
+
+   For example::
+
+     sudo ceph-volume lvm activate 0 a7f64266-0894-4f1e-a635-d0aeaca0e993
+
+
+filestore
+^^^^^^^^^
+#. Create the OSD. ::
+
+     ssh {node-name}
+     sudo ceph-volume lvm create --filestore --data {data-path} --journal {journal-path}
+
+   For example::
+
+     ssh node1
+     sudo ceph-volume lvm create --filestore --data /dev/hdd1 --journal /dev/hdd2
+
+Alternatively, the creation process can be split into two phases (prepare and
+activate):
+
+#. Prepare the OSD. ::
+
+     ssh {node-name}
+     sudo ceph-volume lvm prepare --filestore --data {data-path} --journal {journal-path}
+
+   For example::
+
+     ssh node1
+     sudo ceph-volume lvm prepare --filestore --data /dev/hdd1 --journal /dev/hdd2
+
+   Once prepared, the ``ID`` and ``FSID`` of the prepared OSD are required for
+   activation. These can be obtained by listing the OSDs on the current server::
+
+      sudo ceph-volume lvm list
+
+#. Activate the OSD::
+
+     sudo ceph-volume lvm activate --filestore {ID} {FSID}
+
+   For example::
+
-   **Note:** Use the ``--activate-key`` argument if you do not have a copy
-   of ``/var/lib/ceph/bootstrap-osd/{cluster}.keyring`` on the Ceph Node.
+     sudo ceph-volume lvm activate --filestore 0 a7f64266-0894-4f1e-a635-d0aeaca0e993
 
 
 Long Form
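
As a worked illustration of the two-phase bluestore flow documented in the patch above, here is a minimal sketch. It assumes an unused device ``/dev/sdb`` on ``node2``, that the bootstrap-osd keyring is already in place, and that the new OSD comes up as ID ``1``; the FSID below is a placeholder standing in for the value ``ceph-volume lvm list`` reports::

     ssh node2
     # Phase 1: prepare the device (bluestore is the default backend)
     sudo ceph-volume lvm prepare --data /dev/sdb
     # Look up the ID and FSID assigned to the newly prepared OSD
     sudo ceph-volume lvm list
     # Phase 2: activate, substituting the ID and FSID reported above
     # (ID 1 and the UUID here are illustrative placeholders)
     sudo ceph-volume lvm activate 1 f5b9c640-21ac-4b3c-9f52-77e673b4ab6e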