--- /dev/null
+.\" Man page generated from reStructuredText.
+.
+.TH "CEPH-DEPLOY" "8" "December 06, 2014" "dev" "Ceph"
+.SH NAME
+ceph-deploy \- Ceph quickstart tool
+.
+.nr rst2man-indent-level 0
+.
+.de1 rstReportMargin
+\\$1 \\n[an-margin]
+level \\n[rst2man-indent-level]
+level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
+-
+\\n[rst2man-indent0]
+\\n[rst2man-indent1]
+\\n[rst2man-indent2]
+..
+.de1 INDENT
+.\" .rstReportMargin pre:
+. RS \\$1
+. nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin]
+. nr rst2man-indent-level +1
+.\" .rstReportMargin post:
+..
+.de UNINDENT
+. RE
+.\" indent \\n[an-margin]
+.\" old: \\n[rst2man-indent\\n[rst2man-indent-level]]
+.nr rst2man-indent-level -1
+.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
+.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
+..
+.
+.SH SYNOPSIS
+.nf
+\fBceph\-deploy\fP \fBnew\fP [\fIinitial\-monitor\-node(s)\fP]
+.fi
+.sp
+.nf
+\fBceph\-deploy\fP \fBinstall\fP [\fIceph\-node\fP] [\fIceph\-node\fP\&...]
+.fi
+.sp
+.nf
+\fBceph\-deploy\fP \fBmon\fP \fIcreate\-initial\fP
+.fi
+.sp
+.nf
+\fBceph\-deploy\fP \fBosd\fP \fIprepare\fP [\fIceph\-node\fP]:[\fIdir\-path\fP]
+.fi
+.sp
+.nf
+\fBceph\-deploy\fP \fBosd\fP \fIactivate\fP [\fIceph\-node\fP]:[\fIdir\-path\fP]
+.fi
+.sp
+.nf
+\fBceph\-deploy\fP \fBadmin\fP [\fIadmin\-node\fP][\fIceph\-node\fP\&...]
+.fi
+.sp
+.nf
+\fBceph\-deploy\fP \fBpurgedata\fP [\fIceph\-node\fP][\fIceph\-node\fP\&...]
+.fi
+.sp
+.nf
+\fBceph\-deploy\fP \fBforgetkeys\fP
+.fi
+.sp
+.SH DESCRIPTION
+.sp
+\fBceph\-deploy\fP is a tool which allows easy and quick deployment of a Ceph
+cluster without involving complex and detailed manual configuration. It uses
+ssh to gain access to other Ceph nodes from the admin node, sudo for
+administrator privileges on them, and underlying Python scripts to automate
+the manual process of Ceph installation on each node from the admin node itself.
+It can easily be run on a workstation and doesn\(aqt require servers, databases or
+any other automated tools. With \fBceph\-deploy\fP, it is really easy to set up and
+take down a cluster. However, it is not a generic deployment tool. It is a
+specific tool which is designed for those who want to get Ceph up and running
+quickly with only the unavoidable initial configuration settings and without the
+overhead of installing other tools like \fBChef\fP, \fBPuppet\fP or \fBJuju\fP\&. Those
+who want to customize security settings, partitions or directory locations and
+want to set up a cluster following detailed manual steps should use other tools,
+such as \fBChef\fP, \fBPuppet\fP, \fBJuju\fP or \fBCrowbar\fP\&.
+.sp
+With \fBceph\-deploy\fP, you can install ceph packages on remote nodes, create a
+cluster, add monitors, gather/forget keys, add OSDs and metadata servers,
+configure admin hosts or take down the cluster.
+.SH COMMANDS
+.sp
+\fBnew\fP: Start deploying a new cluster and write a configuration file and keyring
+for it. It tries to copy ssh keys from the admin node to gain passwordless ssh
+access to the monitor node(s), validates the host IP, and creates a cluster with a
+new initial monitor node or nodes for monitor quorum, a Ceph configuration file, a
+monitor secret keyring and a log file for the new cluster. It populates the newly
+created Ceph configuration file with the \fBfsid\fP of the cluster, and the hostnames
+and IP addresses of the initial monitor members under the \fB[global]\fP section.
+.sp
+Usage: ceph\-deploy new [MON][MON...]
+.sp
+Here, [MON] is initial monitor hostname, fqdn, or hostname:fqdn pair.
+.sp
+Other options like \-\-no\-ssh\-copykey, \-\-fsid, \-\-cluster\-network and
+\-\-public\-network can also be used with this command.
+.sp
+If more than one network interface is used, the \fBpublic network\fP setting has to
+be added under the \fB[global]\fP section of the Ceph configuration file. If the
+public subnet is given, the \fBnew\fP command will choose the one IP from the remote
+host that exists within the subnet range. The public network can also be set at
+runtime using the \fB\-\-public\-network\fP option with the command as mentioned above.
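+.sp
+For example, to bootstrap a cluster with three initial monitors on a given public
+subnet (the hostnames \fBnode1\fP, \fBnode2\fP, \fBnode3\fP and the subnet below are
+placeholders for your own environment):
+.sp
+.nf
+ceph\-deploy new \-\-public\-network 192.168.0.0/24 node1 node2 node3
+.fi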
+.sp
+It is also recommended to change the default number of replicas in the Ceph
+configuration file from 3 to 2 so that Ceph can achieve an \fB(active + clean)\fP
+state with just two Ceph OSDs. To do that, add the following line under the
+\fB[global]\fP section:
+.sp
+.nf
+osd pool default size = 2
+.fi
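+.sp
+After \fBnew\fP and this edit, a minimal \fB[global]\fP section might look like the
+following sketch; the fsid, hostname and addresses shown are placeholders that
+\fBceph\-deploy\fP generates or that you supply for your own cluster:
+.sp
+.nf
+[global]
+fsid = a7f64266\-0894\-4f1e\-a635\-d0aeaca0e993
+mon initial members = node1
+mon host = 192.168.0.11
+public network = 192.168.0.0/24
+osd pool default size = 2
+.fi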
+.sp
+\fBinstall\fP: Install Ceph packages on remote hosts. As a first step it installs
+\fByum\-plugin\-priorities\fP on the admin and other nodes using passwordless ssh and
+sudo so that Ceph packages from the upstream repository get higher priority. It then
+detects the platform and distribution of the hosts and installs Ceph normally by
+downloading distro\-compatible packages, provided an adequate repo for Ceph has
+already been added. It sets the \fBversion_kind\fP to the right one out of \fBstable\fP,
+\fBtesting\fP, and \fBdevelopment\fP\&. Generally, the latest \fBstable\fP release is used
+for installation. During detection of platform and distribution before installation,
+if it finds the \fBdistro.init\fP to be \fBsysvinit\fP (Fedora, CentOS/RHEL etc.), it
+doesn\(aqt allow installation with a custom cluster name and uses the default name
+\fBceph\fP for the cluster.
+.sp
+If the user explicitly specifies a custom repo url with \fB\-\-repo\-url\fP for
+installation, anything detected from the configuration will be overridden and
+the custom repository location will be used for installation of Ceph packages.
+If required, valid custom repositories are also detected and installed. When
+installing from a custom repo, a boolean determines the logic needed to proceed
+with the custom repo installation, and a custom repo install helper goes through
+config checks to retrieve the repos (and any extra repos defined) and installs
+them. \fBcd_conf\fP is the object built from argparse that holds the flags and
+information needed to determine which metadata from the configuration is to be
+used.
+.sp
+A user can also opt to install only the repository, without installing Ceph and
+its dependencies, by using the \fB\-\-repo\fP option.
+.sp
+Usage: ceph\-deploy install [HOST][HOST...]
+.sp
+Here, [HOST] is/are the host node(s) where Ceph is to be installed.
+.sp
+Other options like \-\-release, \-\-testing, \-\-dev, \-\-adjust\-repos, \-\-no\-adjust\-repos,
+\-\-repo, \-\-local\-mirror, \-\-repo\-url and \-\-gpg\-url can also be used with this
+command.
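+.sp
+For example, to install the default stable release on three nodes, or to pin a
+specific release codename (the hostnames below are placeholders):
+.sp
+.nf
+ceph\-deploy install node1 node2 node3
+ceph\-deploy install \-\-release firefly node1 node2 node3
+.fi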
+.sp
+\fBmds\fP: Deploy Ceph mds on remote hosts. A metadata server is needed to use
+CephFS and the \fBmds\fP command is used to create one on the desired host node.
+It uses the subcommand \fBcreate\fP to do so. \fBcreate\fP first gets the hostname
+and distro information of the desired mds host. It then tries to read the
+bootstrap\-mds key for the cluster and deploy it on the desired host. The key
+generally has a format of {cluster}.bootstrap\-mds.keyring. If it doesn\(aqt find
+a keyring, it runs \fBgatherkeys\fP to get the keyring. It then creates an mds on
+the desired host under the path /var/lib/ceph/mds/ in
+/var/lib/ceph/mds/{cluster}\-{name} format and a bootstrap keyring under
+/var/lib/ceph/bootstrap\-mds/ in /var/lib/ceph/bootstrap\-mds/{cluster}.keyring
+format. It then runs the appropriate commands based on \fBdistro.init\fP to start
+the \fBmds\fP\&. To remove the mds, the subcommand \fBdestroy\fP is used.
+.sp
+Usage: ceph\-deploy mds create [HOST[:DAEMON\-NAME]] [HOST[:DAEMON\-NAME]...]
+.sp
+ceph\-deploy mds destroy [HOST[:DAEMON\-NAME]] [HOST[:DAEMON\-NAME]...]
+.sp
+The [DAEMON\-NAME] is optional.
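+.sp
+For example, to deploy a metadata server on a single node (the hostname below is
+a placeholder):
+.sp
+.nf
+ceph\-deploy mds create node1
+.fi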
+.sp
+\fBmon\fP: Deploy Ceph monitor on remote hosts. \fBmon\fP makes use of certain
+subcommands to deploy Ceph monitors on other nodes.
+.sp
+Subcommand \fBcreate\-initial\fP deploys monitors for all hosts defined in
+\fBmon initial members\fP under the \fB[global]\fP section of the Ceph configuration
+file, waits until they form quorum and then runs \fBgatherkeys\fP, reporting the
+monitor status along the way. If the monitors don\(aqt form quorum the command
+will eventually time out.
+.sp
+Usage: ceph\-deploy mon create\-initial
+.sp
+Subcommand \fBcreate\fP is used to deploy Ceph monitors by explicitly specifying
+the hosts which are to be made monitors. If no hosts are specified it will
+default to the \fBmon initial members\fP defined under the \fB[global]\fP section of
+the Ceph configuration file. \fBcreate\fP first detects the platform and distro of
+the desired hosts and checks if the hostname is compatible for deployment. It then
+uses the monitor keyring initially created using the \fBnew\fP command and deploys
+the monitor on the desired host. If multiple hosts were specified during the
+\fBnew\fP command, i.e., if there are multiple hosts in \fBmon initial members\fP and
+multiple keyrings were created, then a concatenated keyring is used for deployment
+of the monitors. In this process a keyring parser is used which looks for
+\fB[entity]\fP sections in monitor keyrings and returns a list of those sections.
+A helper is then used to collect all keyrings into a single blob that will be
+injected into the monitors with \fB\-\-mkfs\fP on the remote nodes. All keyring
+files to be concatenated are expected to reside in a directory and end with
+\fB\&.keyring\fP\&. During this process the helper uses the list of sections returned
+by the keyring parser to check if an entity is already present in a keyring and,
+if not, adds it. The concatenated keyring is used for deployment of monitors to
+the desired multiple hosts.
+.sp
+Usage: ceph\-deploy mon create [HOST] [HOST...]
+.sp
+Here, [HOST] is hostname of desired monitor host(s).
+.sp
+Subcommand \fBadd\fP is used to add a monitor to an existing cluster. It first
+detects the platform and distro of the desired host and checks if the hostname is
+compatible for deployment. It then uses the monitor keyring, ensures the
+configuration for the new monitor host and adds the monitor to the cluster.
+If the section for the monitor exists and defines a mon addr, that
+will be used; otherwise it will fall back to resolving the hostname to an
+IP. If \fB\-\-address\fP is used it will override all other options. After
+adding the monitor to the cluster, it gives it some time to start. It then
+looks for any monitor errors and checks the monitor status. Monitor errors
+arise if the monitor is not added to \fBmon initial members\fP, if it doesn\(aqt
+exist in the monmap, or if neither the public_addr nor the public_network keys
+were defined for the monitors. Under such conditions, monitors may not be able
+to form quorum. Monitor status tells whether the monitor is up and running
+normally. The status is checked by running \fBceph daemon mon.hostname mon_status\fP
+on the remote end, which provides the output and returns a boolean status.
+\fBFalse\fP means the monitor is not healthy even if it is up and running, while
+\fBTrue\fP means the monitor is up and running correctly.
+.sp
+Usage: ceph\-deploy mon add [HOST]
+.sp
+ceph\-deploy mon add [HOST] \-\-address [IP]
+.sp
+Here, [HOST] is the hostname and [IP] is the IP address of the desired monitor
+node.
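+.sp
+For example, to add a monitor and pin its address explicitly (the hostname and IP
+below are placeholders):
+.sp
+.nf
+ceph\-deploy mon add node4 \-\-address 192.168.0.14
+.fi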
+.sp
+Subcommand \fBdestroy\fP is used to completely remove monitors on remote hosts. It
+takes hostnames as arguments. It stops the monitor, verifies that the ceph\-mon
+daemon has really stopped, creates an archive directory \fBmon\-remove\fP under
+/var/lib/ceph/, archives the old monitor directory in {cluster}\-{hostname}\-{stamp}
+format inside it, and removes the monitor from the cluster by running the
+\fBceph remove...\fP command.
+.sp
+Usage: ceph\-deploy mon destroy [HOST]
+.sp
+Here, [HOST] is hostname of monitor that is to be removed.
+.sp
+\fBgatherkeys\fP: Gather authentication keys for provisioning new nodes. It
+takes hostnames as arguments. It checks for and fetches client.admin keyring,
+monitor keyring and bootstrap\-mds/bootstrap\-osd keyring from monitor host.
+These authentication keys are used when new monitors/OSDs/MDS are added to
+the cluster.
+.sp
+Usage: ceph\-deploy gatherkeys [HOST] [HOST...]
+.sp
+Here, [HOST] is hostname of the monitor from where keys are to be pulled.
+.sp
+\fBdisk\fP: Manage disks on a remote host. It actually triggers the \fBceph\-disk\fP
+utility and its subcommands to manage disks.
+.sp
+Subcommand \fBlist\fP lists disk partitions and ceph OSDs.
+.sp
+Usage: ceph\-deploy disk list [HOST:[DISK]]
+.sp
+Here, [HOST] is hostname of the node and [DISK] is disk name or path.
+.sp
+Subcommand \fBprepare\fP prepares a directory, disk or drive for a ceph OSD. It
+creates a GPT partition, marks the partition with ceph type uuid, creates a
+file system, marks the file system as ready for ceph consumption, uses entire
+partition and adds a new partition to the journal disk.
+.sp
+Usage: ceph\-deploy disk prepare [HOST:[DISK]]
+.sp
+Here, [HOST] is hostname of the node and [DISK] is disk name or path.
+.sp
+Subcommand \fBactivate\fP activates the ceph OSD. It mounts the volume in a temporary
+location, allocates an OSD id (if needed), remounts in the correct location
+/var/lib/ceph/osd/$cluster\-$id and starts ceph\-osd. It is triggered by udev
+when it sees the OSD GPT partition type or on ceph service start with
+\(aqceph disk activate\-all\(aq.
+.sp
+Usage: ceph\-deploy disk activate [HOST:[DISK]]
+.sp
+Here, [HOST] is hostname of the node and [DISK] is disk name or path.
+.sp
+Subcommand \fBzap\fP zaps/erases/destroys a device\(aqs partition table and contents.
+It actually uses \fBsgdisk\fP and its option \fB\-\-zap\-all\fP to destroy both the
+GPT and MBR data structures so that the disk becomes suitable for
+repartitioning. \fBsgdisk\fP then uses \fB\-\-mbrtogpt\fP to convert the MBR or
+BSD disklabel disk to a GPT disk. The \fBprepare\fP subcommand can then be
+executed which will create a new GPT partition.
+.sp
+Usage: ceph\-deploy disk zap [HOST:[DISK]]
+.sp
+Here, [HOST] is hostname of the node and [DISK] is disk name or path.
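+.sp
+For example, to wipe a data disk on a node before preparing it (the hostname and
+device below are placeholders; zapping destroys all data on the device):
+.sp
+.nf
+ceph\-deploy disk zap node1:sdb
+ceph\-deploy disk list node1
+.fi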
+.sp
+\fBosd\fP: Manage OSDs by preparing data disk on remote host. \fBosd\fP makes use
+of certain subcommands for managing OSDs.
+.sp
+Subcommand \fBprepare\fP prepares a directory, disk or drive for a ceph OSD. It
+first checks whether multiple OSDs are being created at once and warns if more
+than the recommended number are requested, since that can cause issues with the
+maximum allowed PIDs on a system. It then reads the bootstrap\-osd key for the
+cluster, or writes the bootstrap key if it is not found. It then uses the
+\fBceph\-disk\fP utility\(aqs \fBprepare\fP subcommand to prepare the disk and journal
+and deploy the OSD on the desired host. Once prepared, it gives the OSD some
+time to settle, checks for any possible errors and, if found, reports them to
+the user.
+.sp
+Usage: ceph\-deploy osd prepare HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
+.sp
+Subcommand \fBactivate\fP activates the OSD prepared using the \fBprepare\fP
+subcommand. It actually uses the \fBceph\-disk\fP utility\(aqs \fBactivate\fP subcommand
+with the appropriate init type based on the distro to activate the OSD. Once
+activated, it gives the OSD some time to start, checks for any possible errors
+and, if found, reports them to the user. It checks the status of the prepared
+OSD, checks the OSD tree and makes sure the OSDs are up and in.
+.sp
+Usage: ceph\-deploy osd activate HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
+.sp
+Subcommand \fBcreate\fP uses \fBprepare\fP and \fBactivate\fP subcommands to create an
+OSD.
+.sp
+Usage: ceph\-deploy osd create HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
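+.sp
+For example, to prepare and activate an OSD in one step, with a separate journal
+device (the hostname and device names below are placeholders):
+.sp
+.nf
+ceph\-deploy osd create node1:sdb:sdc
+.fi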
+.sp
+Subcommand \fBlist\fP lists disk partitions, ceph OSDs and prints OSD metadata.
+It gets the osd tree from a monitor host, uses the \fBceph\-disk list\fP output
+and gets the mount point by matching the line where the partition mentions
+the OSD name, reads metadata from files, checks if a journal path exists,
+checks if the OSD is in the OSD tree, and prints the OSD metadata.
+.sp
+Usage: ceph\-deploy osd list HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
+.sp
+Subcommand \fBdestroy\fP is used to completely remove OSDs from remote hosts. It
+first takes the desired OSD out of the cluster and waits for the cluster to
+rebalance and placement groups to reach \fB(active+clean)\fP state again. It then
+stops the OSD, removes the OSD from CRUSH map, removes the OSD authentication
+key, removes the OSD and updates the cluster\(aqs configuration file accordingly.
+.sp
+Usage: ceph\-deploy osd destroy HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
+.sp
+\fBadmin\fP: Push configuration and client.admin key to a remote host. It takes
+the {cluster}.client.admin.keyring from the admin node and writes it under the
+/etc/ceph directory of the desired node.
+.sp
+Usage: ceph\-deploy admin [HOST] [HOST...]
+.sp
+Here, [HOST] is desired host to be configured for Ceph administration.
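+.sp
+For example, to push the configuration and admin key to the admin node itself and
+to the cluster nodes (the hostnames below are placeholders):
+.sp
+.nf
+ceph\-deploy admin admin\-node node1 node2 node3
+.fi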
+.sp
+\fBconfig\fP: Push/pull configuration file to/from a remote host. The \fBpush\fP
+subcommand takes the configuration file from the admin host and writes it to the
+remote host under the /etc/ceph directory. The \fBpull\fP subcommand does the
+opposite, i.e., it pulls the configuration file from the /etc/ceph directory of
+the remote host to the admin node.
+.sp
+Usage: ceph\-deploy config push [HOST] [HOST...]
+.sp
+Here, [HOST] is the hostname of the node where config file will be pushed.
+.sp
+ceph\-deploy config pull [HOST] [HOST...]
+.sp
+Here, [HOST] is the hostname of the node from where config file will be pulled.
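+.sp
+For example, after editing ceph.conf on the admin node, to redistribute it to the
+cluster nodes (the hostnames below are placeholders; \fB\-\-overwrite\-conf\fP allows
+replacing an existing remote ceph.conf):
+.sp
+.nf
+ceph\-deploy \-\-overwrite\-conf config push node1 node2 node3
+.fi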
+.sp
+\fBuninstall\fP: Remove Ceph packages from remote hosts. It detects the platform
+and distro of selected host and uninstalls Ceph packages from it. However, some
+dependencies like librbd1 and librados2 \fBwill not\fP be removed because they can
+cause issues with qemu\-kvm.
+.sp
+Usage: ceph\-deploy uninstall [HOST] [HOST...]
+.sp
+Here, [HOST] is hostname of the node from where Ceph will be uninstalled.
+.sp
+\fBpurge\fP: Remove Ceph packages from remote hosts and purge all data. It detects
+the platform and distro of selected host, uninstalls Ceph packages and purges all
+data. However, some dependencies like librbd1 and librados2 \fBwill not\fP be removed
+because they can cause issues with qemu\-kvm.
+.sp
+Usage: ceph\-deploy purge [HOST] [HOST...]
+.sp
+Here, [HOST] is hostname of the node from where Ceph will be purged.
+.sp
+\fBpurgedata\fP: Purge (delete, destroy, discard, shred) any Ceph data from
+/var/lib/ceph. Once it detects the platform and distro of the desired host, it
+first checks if Ceph is still installed on the selected host and, if so, it won\(aqt
+purge data from it. If Ceph is already uninstalled from the host, it tries to
+remove the contents of /var/lib/ceph. If that fails, OSDs are probably still
+mounted and need to be unmounted to continue. It unmounts the OSDs, tries to
+remove the contents of /var/lib/ceph again and checks for errors. It also
+removes the contents of /etc/ceph. Once all steps are successfully completed, all
+the Ceph data from the selected host has been removed.
+.sp
+Usage: ceph\-deploy purgedata [HOST] [HOST...]
+.sp
+Here, [HOST] is hostname of the node from where Ceph data will be purged.
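+.sp
+For example, a complete teardown of Ceph from a node typically chains \fBpurge\fP,
+\fBpurgedata\fP and \fBforgetkeys\fP (the hostname below is a placeholder):
+.sp
+.nf
+ceph\-deploy purge node1
+ceph\-deploy purgedata node1
+ceph\-deploy forgetkeys
+.fi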
+.sp
+\fBforgetkeys\fP: Remove authentication keys from the local directory. It removes
+all the authentication keys, i.e., the monitor keyring, the client.admin keyring,
+and the bootstrap\-osd and bootstrap\-mds keyrings, from the node.
+.sp
+Usage: ceph\-deploy forgetkeys
+.sp
+\fBpkg\fP: Manage packages on remote hosts. It is used for installing or removing
+packages from remote hosts. The package names for installation or removal are to
+be specified after the command. Two options, \-\-install and \-\-remove, are used
+for this purpose.
+.sp
+Usage: ceph\-deploy pkg \-\-install [PKGs] [HOST] [HOST...]
+.sp
+ceph\-deploy pkg \-\-remove [PKGs] [HOST] [HOST...]
+.sp
+Here, [PKGs] is a comma\-separated list of package names and [HOST] is the
+hostname of the remote node where the packages are to be installed or removed
+from.
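+.sp
+For example, to install a package on two nodes and later remove it (the package
+name \fBntp\fP and the hostnames below are placeholders):
+.sp
+.nf
+ceph\-deploy pkg \-\-install ntp node1 node2
+ceph\-deploy pkg \-\-remove ntp node1 node2
+.fi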
+.sp
+\fBcalamari\fP: Install and configure Calamari nodes. It first checks if the distro
+is supported for Calamari installation by ceph\-deploy. An argument \fBconnect\fP
+is used for installation and configuration. It checks for the ceph\-deploy
+configuration file (cd_conf) and the Calamari release repo or the \fBcalamari\-minion\fP
+repo. It relies on the default for repo installation as it doesn\(aqt install Ceph
+unless specified otherwise. An \fBoptions\fP dictionary is also defined because
+ceph\-deploy pops items internally, which causes issues when those items need to be
+available for every host. If the distro is Debian/Ubuntu, it is ensured that the
+proxy is disabled for the \fBcalamari\-minion\fP repo. The calamari\-minion package is
+then installed and custom repository files are added. The minion config is placed
+prior to installation so that it is present when the minion first starts. The
+config directory and the calamari salt config are created and the salt\-minion
+package is installed. If the distro is Red Hat/CentOS, the salt\-minion service
+needs to be started.
+.sp
+Usage: ceph\-deploy calamari {connect} [HOST] [HOST...]
+.sp
+Here, [HOST] is the hostname where Calamari is to be installed.
+.sp
+Other options like \-\-release and \-\-master can also be used with this command.
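+.sp
+For example, to install and configure the Calamari minion on a node (the hostname
+below is a placeholder; a \-\-master domain may be supplied if needed):
+.sp
+.nf
+ceph\-deploy calamari connect node1
+.fi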
+.SH OPTIONS
+.INDENT 0.0
+.TP
+.B \-\-version
+The currently installed version of ceph\-deploy.
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-username
+The username to connect to the remote host.
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-overwrite\-conf
+Overwrite an existing conf file on remote host (if present).
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-cluster
+Name of the cluster.
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-ceph\-conf
+Use (or reuse) a given ceph.conf file.
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-no\-ssh\-copykey
+Do not attempt to copy ssh keys.
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-fsid
+Provide an alternate FSID for ceph.conf generation.
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-cluster\-network
+Specify the (internal) cluster network.
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-public\-network
+Specify the public network for a cluster.
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-release
+Install a release known as CODENAME (default: firefly).
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-testing
+Install the latest development release.
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-dev
+Install a bleeding\-edge build from a Git branch or tag (default: master).
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-adjust\-repos
+Install packages modifying source repos.
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-no\-adjust\-repos
+Install packages without modifying source repos.
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-repo
+Install repo files only (skips package installation).
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-local\-mirror
+Fetch packages and push them to hosts for a local repo mirror.
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-repo\-url
+Specify a repo url that mirrors/contains Ceph packages.
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-gpg\-url
+Specify a GPG key url to be used with custom repos (defaults to ceph.com).
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-address
+IP address of the host node to be added to the cluster.
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-keyrings
+Concatenate multiple keyrings to be seeded on new monitors.
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-zap\-disk
+Destroy the partition table and content of a disk.
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-fs\-type
+Filesystem to use to format disk (xfs, btrfs or ext4).
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-dmcrypt
+Encrypt [data\-path] and/or journal devices with dm\-crypt.
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-dmcrypt\-key\-dir
+Directory where dm\-crypt keys are stored.
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-install
+Comma\-separated package(s) to install on remote hosts.
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-remove
+Comma\-separated package(s) to remove from remote hosts.
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-release
+Use a given release from repositories defined in ceph\-deploy\(aqs configuration.
+Defaults to \(aqcalamari\-minion\(aq.
+.UNINDENT
+.INDENT 0.0
+.TP
+.B \-\-master
+The domain for the Calamari master server.
+.UNINDENT
+.SH AVAILABILITY
+.sp
+\fBceph\-deploy\fP is a part of the Ceph distributed storage system. Please refer to
+the documentation at \fI\%http://ceph.com/ceph-deploy/docs\fP for more information.
+.SH SEE ALSO
+.sp
+\fBceph\-mon\fP(8),
+\fBceph\-osd\fP(8),
+\fBceph\-disk\fP(8),
+\fBceph\-mds\fP(8)
+.SH COPYRIGHT
+2010-2014, Inktank Storage, Inc. and contributors. Licensed under Creative Commons BY-SA
+.\" Generated by docutils manpage writer.
+.