git.apps.os.sepia.ceph.com Git - ceph-ansible.git/commitdiff
doc: add day-2 operations documentation
author Guillaume Abrioux <gabrioux@redhat.com>
Tue, 21 Apr 2020 07:50:27 +0000 (09:50 +0200)
committer Dimitri Savineau <savineau.dimitri@gmail.com>
Tue, 21 Apr 2020 13:52:01 +0000 (09:52 -0400)
This commit is the first of a series describing all the day-2 operations
that are possible via ceph-ansible using the set of playbooks provided in the
`infrastructure-playbooks` directory.

Fixes: #5061
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
docs/source/day-2/osds.rst [new file with mode: 0644]
docs/source/day-2/purge.rst [new file with mode: 0644]
docs/source/index.rst

diff --git a/docs/source/day-2/osds.rst b/docs/source/day-2/osds.rst
new file mode 100644 (file)
index 0000000..09c566f
--- /dev/null
@@ -0,0 +1,51 @@
+Adding/Removing OSD(s) after a cluster is deployed is a common operation that should be straightforward to achieve.
+
+
+Adding OSD(s)
+-------------
+
+Adding new OSD(s) to an existing host or adding a new OSD node can be achieved by running the main playbook with the ``--limit`` Ansible option.
+You first need to update your host_vars/group_vars with the new hardware and/or add the new OSD node(s) to the inventory host file.
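+
+For instance, assuming the new OSD node is ``osd-node-99`` (as in the inventory example below), a minimal host_vars file could look like the following. The ``devices`` list here is purely illustrative; replace it with the actual disks present on the node:
+
+.. code-block:: shell
+
+   $ cat host_vars/osd-node-99.yml
+   ---
+   # illustrative disk layout only; adjust to your hardware
+   devices:
+     - /dev/sdb
+     - /dev/sdc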
+
+The command looks like the following:
+
+``ansible-playbook -vv -i <your-inventory> site-container.yml --limit <node>``
+
+example:
+
+.. code-block:: shell
+
+   $ cat hosts
+   [mons]
+   mon-node-1
+   mon-node-2
+   mon-node-3
+
+   [mgrs]
+   mon-node-1
+   mon-node-2
+   mon-node-3
+
+   [osds]
+   osd-node-1
+   osd-node-2
+   osd-node-3
+   osd-node-99
+   
+   $ ansible-playbook -vv -i hosts site-container.yml --limit osd-node-99
+
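+One way to verify that the new OSD(s) joined the cluster is to check the OSD tree from a monitor node, for example:
+
+.. code-block:: shell
+
+   $ ceph osd tree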
+
+Shrinking OSD(s)
+----------------
+
+Shrinking OSDs can be done by using the ``shrink-osd.yml`` playbook provided in the ``infrastructure-playbooks`` directory.
+
+The variable ``osd_to_kill`` is a comma-separated list of OSD IDs which must be passed to the playbook (passing it as an extra var is the easiest way).
+
+The playbook will shrink all OSDs passed in ``osd_to_kill`` serially.
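+
+If you are unsure which IDs to pass, one way to list the existing OSD IDs is to query the cluster from a monitor node:
+
+.. code-block:: shell
+
+   $ ceph osd ls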
+
+example:
+
+.. code-block:: shell
+
+   $ ansible-playbook -vv -i hosts infrastructure-playbooks/shrink-osd.yml -e osd_to_kill=1,2,3
\ No newline at end of file
diff --git a/docs/source/day-2/purge.rst b/docs/source/day-2/purge.rst
new file mode 100644 (file)
index 0000000..e471733
--- /dev/null
@@ -0,0 +1,15 @@
+Purging the cluster
+-------------------
+
+ceph-ansible provides two playbooks in ``infrastructure-playbooks`` for purging a Ceph cluster: ``purge-cluster.yml`` and ``purge-container-cluster.yml``.
+
+The names are self-explanatory: ``purge-cluster.yml`` is intended to purge a non-containerized cluster, whereas ``purge-container-cluster.yml`` purges a containerized cluster.
+
+example:
+
+.. code-block:: shell
+
+   $ ansible-playbook -vv -i hosts infrastructure-playbooks/purge-container-cluster.yml
+
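+For a non-containerized cluster, use ``purge-cluster.yml`` instead:
+
+.. code-block:: shell
+
+   $ ansible-playbook -vv -i hosts infrastructure-playbooks/purge-cluster.yml
+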
+.. note::
+   These playbooks aren't intended to be run with the ``--limit`` option.
\ No newline at end of file
index 3e76c2b706f517b2ddb3fe2be44628e02b4839dd..46d7826f4c18e844a17c6af2e9efcd0f37a8c78c 100644 (file)
@@ -274,6 +274,17 @@ that scenario. As of nautilus in stable-4.0, the only scenarios available is ``l
 
    osds/scenarios
 
+Day-2 Operations
+----------------
+
+ceph-ansible provides a set of playbooks in the ``infrastructure-playbooks`` directory to perform some basic day-2 operations.
+
+.. toctree::
+   :maxdepth: 1
+
+   day-2/osds
+   day-2/purge
+
 Contribution
 ============