From 50bc4913735e684c7f0b7feda9444b38fc96588a Mon Sep 17 00:00:00 2001
From: Sage Weil
Date: Tue, 9 Mar 2021 14:29:39 -0500
Subject: [PATCH] doc: update docs

Signed-off-by: Sage Weil
---
 doc/cephadm/adoption.rst |  8 +++-----
 doc/cephadm/rgw.rst      | 31 ++++++++++++++++++++-----------
 doc/mgr/orchestrator.rst |  4 ++--
 3 files changed, 25 insertions(+), 18 deletions(-)

diff --git a/doc/cephadm/adoption.rst b/doc/cephadm/adoption.rst
index 3e12dbfb33b..3835128fb20 100644
--- a/doc/cephadm/adoption.rst
+++ b/doc/cephadm/adoption.rst
@@ -181,10 +181,11 @@ Adoption process
 
    .. prompt:: bash #
 
-      ceph orch apply rgw <realm> <zone> [--subcluster=<subcluster>] [--port=<port>] [--ssl] [--placement=<placement>]
+      ceph orch apply rgw <name> [--rgw-realm=<realm>] [--rgw-zone=<zone>] [--port=<port>] [--ssl] [--placement=<placement>]
 
    where *<placement>* can be a simple daemon count, or a list of
-   specific hosts (see :ref:`orchestrator-cli-placement-spec`).
+   specific hosts (see :ref:`orchestrator-cli-placement-spec`), and the
+   zone and realm arguments are needed only for a multisite setup.
 
    After the daemons have started and you have confirmed that they are
    functioning, stop and remove the old, legacy daemons:
@@ -194,8 +195,5 @@ Adoption process
       systemctl stop ceph-rgw.target
       rm -rf /var/lib/ceph/radosgw/ceph-*
 
-   To learn more about adopting single-site systems without a realm, see
-   :ref:`rgw-multisite-migrate-from-single-site`.
-
 #. Check the output of the command ``ceph health detail`` for cephadm warnings
    about stray cluster daemons or hosts that are not yet managed by cephadm.
diff --git a/doc/cephadm/rgw.rst b/doc/cephadm/rgw.rst
index 3964c987a1f..a5d6d89acb9 100644
--- a/doc/cephadm/rgw.rst
+++ b/doc/cephadm/rgw.rst
@@ -8,32 +8,41 @@ Deploy RGWs
 ===========
 
 Cephadm deploys radosgw as a collection of daemons that manage a
-particular *realm* and *zone*. (For more information about realms and
-zones, see :ref:`multisite`.)
+single-cluster deployment or a particular *realm* and *zone* in a
+multisite deployment. (For more information about realms and zones,
+see :ref:`multisite`.)
 
 Note that with cephadm, radosgw daemons are configured via the monitor
 configuration database instead of via a `ceph.conf` or the command line. If
 that configuration isn't already in place (usually in the
-``client.rgw.<realm>.<zone>`` section), then the radosgw
+``client.rgw.<something>`` section), then the radosgw
 daemons will start up with default settings (e.g., binding to port
 80).
 
-To deploy a set of radosgw daemons for a particular realm and zone, run the
-following command:
+To deploy a set of radosgw daemons, with an arbitrary service name
+*name*, run the following command:
 
 .. prompt:: bash #
 
-   ceph orch apply rgw *<realm-name>* *<zone-name>* --placement="*<num-daemons>* [*<host1>* ...]"
+   ceph orch apply rgw *<name>* [--rgw-realm=*<realm-name>*] [--rgw-zone=*<zone-name>*] --placement="*<num-daemons>* [*<host1>* ...]"
 
-For example, to deploy 2 rgw daemons serving the *myorg* realm and the *us-east-1* zone on *myhost1* and *myhost2*:
+For example, to deploy 2 RGW daemons (the default) for a single-cluster RGW deployment
+under the arbitrary service id *foo*:
 
 .. prompt:: bash #
 
-   ceph orch apply rgw myorg us-east-1 --placement="2 myhost1 myhost2"
+   ceph orch apply rgw foo
 
-Cephadm will wait for a healthy cluster and automatically create the supplied realm and zone if they do not exist before deploying the rgw daemon(s)
+To deploy RGWs serving the multisite *myorg* realm and the *us-east-1* zone on
+*myhost1* and *myhost2*:
 
-Alternatively, the realm, zonegroup, and zone can be manually created using ``radosgw-admin`` commands:
+.. prompt:: bash #
+
+   ceph orch apply rgw east --rgw-realm=myorg --rgw-zone=us-east-1 --placement="2 myhost1 myhost2"
+
+Note that in a multisite situation, cephadm only deploys the daemons. It does not create
+or update the realm or zone configurations. To create a new realm and zone, you need to do
+something like:
 
 .. prompt:: bash #
 
@@ -52,7 +61,7 @@ Alternatively, the realm, zonegroup, and zone can be manually created using ``ra
    radosgw-admin period update --rgw-realm=<realm-name> --commit
 
 See :ref:`orchestrator-cli-placement-spec` for details of the placement
-specification.
+specification. See :ref:`multisite` for more information on setting up multisite RGW.
 
 .. _orchestrator-haproxy-service-spec:
 
diff --git a/doc/mgr/orchestrator.rst b/doc/mgr/orchestrator.rst
index 7174eb69c1c..2bffbdd0590 100644
--- a/doc/mgr/orchestrator.rst
+++ b/doc/mgr/orchestrator.rst
@@ -103,7 +103,7 @@ The ``name`` parameter is an identifier of the group of instances:
 Creating/growing/shrinking/removing services::
 
     ceph orch apply mds <fs_name> [--placement=<placement>] [--dry-run]
-    ceph orch apply rgw <realm> <zone> [--subcluster=<subcluster>] [--port=<port>] [--ssl] [--placement=<placement>] [--dry-run]
+    ceph orch apply rgw <name> [--rgw-realm=<realm>] [--rgw-zone=<zone>] [--port=<port>] [--ssl] [--placement=<placement>] [--dry-run]
     ceph orch apply nfs <name> [--namespace=<namespace>] [--placement=<placement>] [--dry-run]
 
     ceph orch rm <name> [--force]
@@ -157,7 +157,7 @@ This is an overview of the current implementation status of the orchestrators.
  apply nfs                     ✔          ✔
  apply osd                     ✔          ✔
  apply rbd-mirror              ✔          ✔
- apply rgw                     ⚪         ✔
+ apply rgw                     ✔          ✔
  apply container               ⚪         ✔
  host add                      ⚪         ✔
  host ls                       ✔          ✔
-- 
2.39.5
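
The rgw.rst text above now tells users to create the realm and zone themselves, since
cephadm no longer does so when deploying the daemons; the intermediate ``radosgw-admin``
commands fall between the two hunks and are not shown in the diff context. A minimal
sketch of that sequence, reusing the *myorg* / *us-east-1* names from the example and
assuming a zonegroup simply named *default* (the zonegroup name is not given in the
patch)::

    # create the realm, a master zonegroup, and a master zone
    # (the zonegroup name "default" is an assumption for illustration)
    radosgw-admin realm create --rgw-realm=myorg --default
    radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
    radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=us-east-1 --master --default

    # commit the period so the new realm/zone configuration takes effect
    radosgw-admin period update --rgw-realm=myorg --commit

    # then deploy the daemons for that realm and zone, as in the hunk above
    ceph orch apply rgw east --rgw-realm=myorg --rgw-zone=us-east-1 --placement="2 myhost1 myhost2"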