From 3b228a45a450f5ac4f4736a7df7161ce6884e563 Mon Sep 17 00:00:00 2001
From: Sage Weil <sage@redhat.com>
Date: Fri, 4 Oct 2019 12:40:05 -0500
Subject: [PATCH] doc/bootstrap: add new bootstrap documentation

Signed-off-by: Sage Weil <sage@redhat.com>
---
 doc/bootstrap.rst | 133 ++++++++++++++++++++++++++++++++++++++++++++++
 doc/conf.py       |   1 +
 doc/index.rst     |   1 +
 3 files changed, 135 insertions(+)
 create mode 100644 doc/bootstrap.rst

diff --git a/doc/bootstrap.rst b/doc/bootstrap.rst
new file mode 100644
index 00000000000..8a288a88d82
--- /dev/null
+++ b/doc/bootstrap.rst
@@ -0,0 +1,133 @@
+============================
+ Installation (ceph-daemon)
+============================
+
+A new Ceph cluster is deployed by bootstrapping a cluster on a single
+node and then adding additional nodes and daemons via the CLI or GUI
+dashboard.
+
+Get ceph-daemon
+===============
+
+The ``ceph-daemon`` utility is used to bootstrap a new Ceph cluster.
+You can get the utility either by installing a package provided by
+your Linux distribution::
+
+  sudo apt install -y ceph-daemon   # or,
+  sudo dnf install -y ceph-daemon   # or,
+  sudo yum install -y ceph-daemon
+
+or by simply downloading the standalone script manually::
+
+  curl https://raw.githubusercontent.com/ceph/ceph/master/src/ceph-daemon > ceph-daemon
+  chmod +x ceph-daemon
+  sudo install -m 0755 ceph-daemon /usr/sbin   # optional!
+
+Bootstrap a new cluster
+=======================
+
+To create a new cluster, you need to know:
+
+* Which *IP address* to use for the cluster's first monitor.  This is
+  normally just the IP for the first cluster node.  If there are
+  multiple networks and interfaces, be sure to choose one that will be
+  accessible by any hosts accessing the Ceph cluster.
+
+To bootstrap the cluster,::
+
+  sudo ceph-daemon bootstrap --mon-ip *<mon-ip>* --output-config ceph.conf --output-keyring ceph.keyring --output-pub-ssh-key ceph.pub
+
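+For example, assuming the first host's IP address is ``10.1.2.3`` (an
+illustrative address; substitute your own),::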
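+
+  sudo ceph-daemon bootstrap --mon-ip 10.1.2.3 --output-config ceph.conf --output-keyring ceph.keyring --output-pub-ssh-key ceph.pub
+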
+This command does a few things:
+
+* Creates a monitor and manager daemon for the new cluster on the
+  local host.  A minimal configuration file needed to communicate with
+  the new cluster is written to ``ceph.conf`` in the local directory.
+* Writes a copy of the ``client.admin`` administrative (privileged!)
+  secret key to ``ceph.keyring`` in the local directory.
+* Generates a new SSH key, and adds the public key to the local root user's
+  ``/root/.ssh/authorized_keys`` file.  A copy of the public key is
+  written to ``ceph.pub`` in the local directory.
+
+Interacting with the cluster
+============================
+
+You can easily start up a container that has all of the Ceph packages
+installed to interact with your cluster::
+
+  sudo ceph-daemon shell --config ceph.conf --keyring ceph.keyring
+
+The ``--config`` and ``--keyring`` arguments will bind those local
+files to the default locations in ``/etc/ceph`` inside the container
+to allow the ``ceph`` CLI utility to work without additional
+arguments.  Inside the container, you can check the cluster status with::
+
+  ceph status
+
+To interact with the Ceph cluster outside of a container, install the
+Ceph client packages and put the configuration and privileged
+administrator key in a global location::
+
+  sudo apt install -y ceph-common   # or,
+  sudo dnf install -y ceph-common   # or,
+  sudo yum install -y ceph-common
+
+  sudo install -m 0644 ceph.conf /etc/ceph/ceph.conf
+  sudo install -m 0600 ceph.keyring /etc/ceph/ceph.keyring
+
+Adding hosts to the cluster
+===========================
+
+For each new host you'd like to add to the cluster, you need to do two
+things:
+
+#. Install the cluster's public SSH key in the new host's root user's
+   ``authorized_keys`` file.  For example,::
+
+     cat ceph.pub | ssh root@*newhost* tee -a /root/.ssh/authorized_keys
+
+#. Tell Ceph that the new node is part of the cluster::
+
+     ceph orchestrator host add *newhost*
+
+Deploying additional monitors
+=============================
+
+Normally a Ceph cluster has at least three (or, preferably, five)
+monitor daemons spread across different hosts.  Since we are deploying
+additional monitors, we again need to specify what IP address each one
+will use, either as a simple IP address or as a CIDR network name.
+
+To deploy additional monitors,::
+
+  ceph orchestrator mon update *<number>* *<host1:network1> [<host2:network2>...]*
+
+For example, to deploy a second monitor on ``newhost`` using an IP
+address in network ``10.1.2.0/24``,::
+
+  ceph orchestrator mon update 2 newhost:10.1.2.0/24
+
+Deploying OSDs
+==============
+
+To add an OSD to the cluster, you need to know the device name for the
+block device (hard disk or SSD) that will be used.  Then,::
+
+  ceph orchestrator osd create *<host>*:*<device-path>*
+
+For example, to deploy an OSD on host *newhost*'s SSD,::
+
+  ceph orchestrator osd create newhost:/dev/disk/by-id/ata-WDC_WDS200T2B0A-00SM50_182294800028
+
+Deploying manager daemons
+=========================
+
+It is a good idea to have at least one backup manager daemon.  To
+deploy one or more new manager daemons,::
+
+  ceph orchestrator mgr update *<number>* [*<host1>* ...]
+
+Deploying MDSs
+==============
+
+In order to use the CephFS file system, one or more MDS daemons are
+needed.
+
+TBD
diff --git a/doc/conf.py b/doc/conf.py
index 19e2cf45b4b..2c0eb376473 100644
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -17,6 +17,7 @@ if tags.has('man'):
                 'cephfs/*',
                 'dev/*',
                 'governance.rst',
+                'bootstrap.rst',
                 'install/*',
                 'mon/*',
                 'rados/*',
diff --git a/doc/index.rst b/doc/index.rst
index 560e8fce843..d75f7dac079 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -92,6 +92,7 @@ about Ceph, see our `Architecture`_ section.
    :hidden:
 
    start/intro
+   bootstrap
    start/index
    install/index
    start/kube-helm
-- 
2.39.5