Converting an existing cluster to cephadm
=========================================
-Cephadm allows you to (pretty) easily convert an existing Ceph cluster that
+Cephadm allows you to convert an existing Ceph cluster that
has been deployed with ceph-deploy, ceph-ansible, DeepSea, or similar tools.
Limitations
-----------
* Cephadm only works with BlueStore OSDs. If there are FileStore OSDs
in your cluster you cannot manage them.
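+
+  One way to confirm this before starting is to check the
+  ``osd_objectstore`` reported in each OSD's metadata. The commands
+  below are only an illustrative check, not part of the adoption steps::
+
+    # ceph osd count-metadata osd_objectstore
+    # ceph osd metadata <osd-id> | grep osd_objectstore
+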
-Adoption Process
-----------------
-
-#. Get the ``cephadm`` command line too on each host. You can do this with curl or by installing the package. The simplest approach is::
+Preparation
+-----------
- [each host] # curl --silent --remote-name --location https://github.com/ceph/ceph/raw/master/src/cephadm/cephadm
- [each host] # chmod +x cephadm
+#. Get the ``cephadm`` command line tool on each host in the existing
+ cluster. See :ref:`get-cephadm`.
#. Prepare each host for use by ``cephadm``::
- [each host] # ./cephadm prepare-host
-
-#. List all Ceph daemons on the current host::
+ # cephadm prepare-host
- # ./cephadm ls
-
- You should see that all existing daemons have a type of ``legacy``
- in the resulting output.
-
-#. Determine which Ceph version you will use. You can use any Octopus
+#. Determine which Ceph version you will use. You can use any Octopus (15.2.z)
release or later. For example, ``docker.io/ceph/ceph:v15.2.0``. The default
will be the latest stable release, but if you are upgrading from an earlier
release at the same time be sure to refer to the upgrade notes for any
special steps to take while upgrading.
The image is passed to cephadm with::
- # ./cephadm --image $IMAGE <rest of command goes here>
+ # cephadm --image $IMAGE <rest of command goes here>
+
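+   For example, an adoption command pinned to a specific release might
+   look like this (the image tag here is only an example)::
+
+     # cephadm --image docker.io/ceph/ceph:v15.2.0 adopt --style legacy --name mon.<hostname>
+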
+#. Cephadm can provide a list of all Ceph daemons on the current host::
+
+ # cephadm ls
+
+ Before starting, you should see that all existing daemons have a
+ style of ``legacy`` in the resulting output. As the adoption
+ process progresses, adopted daemons will appear as style
+ ``cephadm:v1``.
+
+
+Adoption process
+----------------
+
+#. Ensure the ceph configuration is migrated to use the cluster config database.
+ If the ``/etc/ceph/ceph.conf`` is identical on each host, then on one host::
+
+ # ceph config assimilate-conf -i /etc/ceph/ceph.conf
+
+ If there are config variations on each host, you may need to repeat
+ this command on each host. You can view the cluster's
+ configuration to confirm that it is complete with::
+
+ # ceph config dump
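+
+   If you do need to repeat the assimilation on every host, a small
+   loop over SSH can help, assuming the ``ceph`` CLI and an admin
+   keyring are available on each host; the hostnames below are
+   placeholders::
+
+     # for host in host1 host2 host3; do ssh $host ceph config assimilate-conf -i /etc/ceph/ceph.conf; done
+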
#. Adopt each monitor::
- # ./cephadm adopt --style legacy --name mon.<hostname>
+ # cephadm adopt --style legacy --name mon.<hostname>
+
+ Each legacy monitor should stop, quickly restart as a cephadm
+ container, and rejoin the quorum.
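+
+   To confirm that a monitor has rejoined, you can check the quorum
+   from any host with an admin keyring, for example::
+
+     # ceph -s
+     # ceph quorum_status --format json-pretty
+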
#. Adopt each manager::
- # ./cephadm adopt --style legacy --name mgr.<hostname>
+ # cephadm adopt --style legacy --name mgr.<hostname>
#. Enable cephadm::

     # ceph mgr module enable cephadm
     # ceph orch set backend cephadm

#. Generate an SSH key::
# ceph cephadm generate-key
- # ceph cephadm get-pub-key
-
-#. Install the SSH key on each host to be managed::
+ # ceph cephadm get-pub-key > ceph.pub
- # echo <ssh key here> | sudo tee /root/.ssh/authorized_keys
+#. Install the cluster SSH key on each host in the cluster::
- Note that ``/root/.ssh/authorized_keys`` should have mode ``0600`` and
- ``/root/.ssh`` should have mode ``0700``.
+ # ssh-copy-id -f -i ceph.pub root@<host>
#. Tell cephadm which hosts to manage::

     # ceph orch host add <hostname> [ip-address]

This will perform a ``cephadm check-host`` on each host before
adding it to ensure it is working. The IP address argument is only
- required if DNS doesn't allow you to connect to each host by it's
+ required if DNS does not allow you to connect to each host by its
short name.
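+
+   For example, with placeholder hostnames and addresses::
+
+     # ceph orch host add host1 10.0.0.1
+     # ceph orch host add host2 10.0.0.2
+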
-#. Verify that the monitor and manager daemons are visible::
+#. Verify that the adopted monitor and manager daemons are visible::
# ceph orch ps
-#. Adopt all remainingg daemons::
+#. Adopt all OSDs in the cluster::
+
+ # cephadm adopt --style legacy --name <name>
+
+ For example::
+
+ # cephadm adopt --style legacy --name osd.1
+ # cephadm adopt --style legacy --name osd.2
+
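+   If a host has many legacy OSDs, a short shell loop can save some
+   typing. This sketch assumes that the JSON printed by ``cephadm ls``
+   includes ``name`` and ``style`` fields and that ``jq`` is
+   installed::
+
+     # cephadm ls | \
+         jq -r '.[] | select(.style == "legacy" and (.name | startswith("osd."))) | .name' | \
+         while read name; do cephadm adopt --style legacy --name "$name"; done
+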
+#. Redeploy MDS daemons by telling cephadm how many daemons to run for
+ each file system. You can list file systems by name with ``ceph fs
+ ls``. For each file system::
+
+ # ceph orch apply mds <fs-name> <num-daemons>
+
+ For example, in a cluster with a single file system called `foo`::
+
+ # ceph fs ls
+ name: foo, metadata pool: foo_metadata, data pools: [foo_data ]
+ # ceph orch apply mds foo 2
+
+ Wait for the new MDS daemons to start with::
+
+ # ceph orch ps --daemon-type mds
+
+ Finally, stop and remove the legacy MDS daemons::
+
+ # systemctl stop ceph-mds.target
+ # rm -rf /var/lib/ceph/mds/ceph-*
+
+#. Redeploy RGW daemons. Cephadm manages RGW daemons by zone. For each
+ zone, deploy new RGW daemons with cephadm::
+
+ # ceph orch apply rgw <realm> <zone> <placement> [--port <port>] [--ssl]
+
+ where *<placement>* can be a simple daemon count, or a list of
+ specific hosts (see :ref:`orchestrator-cli-placement-spec`).
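+
+   For example, substituting placeholder names into the command above,
+   two daemons for zone ``myzone`` in realm ``myrealm`` might be
+   deployed with::
+
+     # ceph orch apply rgw myrealm myzone 2
+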
- # ./cephadm adopt --style legacy --name <osd.0>
- # ./cephadm adopt --style legacy --name <osd.1>
- # ./cephadm adopt --style legacy --name <mds.foo>
+ Once the daemons have started and you have confirmed they are functioning,
+ stop and remove the old legacy daemons::
- Repeat for each host and daemon.
+ # systemctl stop ceph-rgw.target
+ # rm -rf /var/lib/ceph/radosgw/ceph-*
#. Check the ``ceph health detail`` output for cephadm warnings about
stray cluster daemons or hosts that are not yet managed.