| **ceph-deploy** **mon** *create-initial*
-| **ceph-deploy** **osd** *prepare* [*ceph-node*]:[*dir-path*]
-
-| **ceph-deploy** **osd** *activate* [*ceph-node*]:[*dir-path*]
-
-| **ceph-deploy** **osd** *create* [*ceph-node*]:[*dir-path*]
+| **ceph-deploy** **osd** *create* *--data* *device* *ceph-node*
| **ceph-deploy** **admin** [*admin-node*][*ceph-node*...]
Usage::
- ceph-deploy disk list [HOST:[DISK]]
-
-Here, [HOST] is hostname of the node and [DISK] is disk name or path.
-
-Subcommand ``prepare`` prepares a directory, disk or drive for a Ceph OSD. It
-creates a GPT partition, marks the partition with Ceph type uuid, creates a
-file system, marks the file system as ready for Ceph consumption, uses entire
-partition and adds a new partition to the journal disk.
-
-Usage::
-
- ceph-deploy disk prepare [HOST:[DISK]]
-
-Here, [HOST] is hostname of the node and [DISK] is disk name or path.
-
-Subcommand ``activate`` activates the Ceph OSD. It mounts the volume in a
-temporary location, allocates an OSD id (if needed), remounts in the correct
-location ``/var/lib/ceph/osd/$cluster-$id`` and starts ``ceph-osd``. It is
-triggered by ``udev`` when it sees the OSD GPT partition type or on ceph service
-start with ``ceph disk activate-all``.
-
-Usage::
-
- ceph-deploy disk activate [HOST:[DISK]]
-
-Here, [HOST] is hostname of the node and [DISK] is disk name or path.
-
-Subcommand ``zap`` zaps/erases/destroys a device's partition table and contents.
-It actually uses ``sgdisk`` and it's option ``--zap-all`` to destroy both GPT and
-MBR data structures so that the disk becomes suitable for repartitioning.
-``sgdisk`` then uses ``--mbrtogpt`` to convert the MBR or BSD disklabel disk to a
-GPT disk. The ``prepare`` subcommand can now be executed which will create a new
-GPT partition.
-
-Usage::
-
- ceph-deploy disk zap [HOST:[DISK]]
+ ceph-deploy disk list HOST
-Here, [HOST] is hostname of the node and [DISK] is disk name or path.
+
+Subcommand ``zap`` zaps/erases/destroys a device's partition table and
+contents. It does so by calling ``ceph-volume lvm zap`` on the remote host,
+which also makes it possible to remove the Ceph metadata from an existing
+logical volume.
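+
+For example, to wipe ``/dev/sdb`` on a node named ``osdnode1`` (the hostname
+and device here are illustrative)::
+
+ ceph-deploy disk zap osdnode1 /dev/sdb
+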
osd
---
Manage OSDs by preparing a data disk on a remote host. ``osd`` makes use of
certain subcommands for managing OSDs.
-Subcommand ``prepare`` prepares a directory, disk or drive for a Ceph OSD. It
-first checks against multiple OSDs getting created and warns about the
-possibility of more than the recommended which would cause issues with max
-allowed PIDs in a system. It then reads the bootstrap-osd key for the cluster or
-writes the bootstrap key if not found. It then uses :program:`ceph-disk`
-utility's ``prepare`` subcommand to prepare the disk, journal and deploy the OSD
-on the desired host. Once prepared, it gives some time to the OSD to settle and
-checks for any possible errors and if found, reports to the user.
+
+Subcommand ``create`` prepares a device for a Ceph OSD. It first checks
+whether multiple OSDs are being created at once and warns if more than the
+recommended number are requested, since that can run up against the system's
+limit on the maximum number of allowed PIDs. It then reads the bootstrap-osd
+key for the cluster, or writes the bootstrap key if it is not found. It then
+uses the :program:`ceph-volume` utility's ``lvm create`` subcommand to
+prepare the disk (and the journal, if using filestore) and to deploy the OSD
+on the desired host. Once prepared, it gives the OSD some time to start and
+checks for any possible errors, reporting them to the user if found.
-Usage::
-
- ceph-deploy osd prepare HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
+
+BlueStore Usage::
+
-Subcommand ``activate`` activates the OSD prepared using ``prepare`` subcommand.
-It actually uses :program:`ceph-disk` utility's ``activate`` subcommand with
-appropriate init type based on distro to activate the OSD. Once activated, it
-gives some time to the OSD to start and checks for any possible errors and if
-found, reports to the user. It checks the status of the prepared OSD, checks the
-OSD tree and makes sure the OSDs are up and in.
+ ceph-deploy osd create --data DISK HOST
-Usage::
+
+FileStore Usage::
+
- ceph-deploy osd activate HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
+ ceph-deploy osd create --data DISK --journal JOURNAL HOST
-Subcommand ``create`` uses ``prepare`` and ``activate`` subcommands to create an
-OSD.
-
-Usage::
- ceph-deploy osd create HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
+
+.. note:: For other available flags, see ``ceph-deploy osd create --help``.
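+
+For example, assuming a node named ``osdnode1`` with an unused device at
+``/dev/sdb`` (both names are illustrative), a BlueStore OSD can be created
+with::
+
+ ceph-deploy osd create --data /dev/sdb osdnode1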
-Subcommand ``list`` lists disk partitions, Ceph OSDs and prints OSD metadata.
-It gets the osd tree from a monitor host, uses the ``ceph-disk-list`` output
-and gets the mount point by matching the line where the partition mentions
-the OSD name, reads metadata from files, checks if a journal path exists,
-if the OSD is in a OSD tree and prints the OSD metadata.
+
+Subcommand ``list`` lists the devices associated with Ceph as part of an OSD.
+It relies on the output of ``ceph-volume lvm list``, which maps OSDs to
+devices and reports other useful details about the OSD setup.
Usage::
- ceph-deploy osd list HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
+ ceph-deploy osd list HOST
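+
+Because this wraps ``ceph-volume lvm list``, the same details can also be
+obtained by running that command directly on the OSD host::
+
+ ceph-volume lvm list
+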
admin