From a8e21a114bb25bec9b666db61412b2c1768680dc Mon Sep 17 00:00:00 2001
From: Joshua Schmid
Date: Mon, 6 Jul 2020 11:59:49 +0200
Subject: [PATCH] doc: add notes about --dry-run

Signed-off-by: Joshua Schmid
---
 doc/cephadm/drivegroups.rst |  8 ++++++++
 doc/mgr/orchestrator.rst    | 37 ++++++++++++++-----------------------
 2 files changed, 22 insertions(+), 23 deletions(-)

diff --git a/doc/cephadm/drivegroups.rst b/doc/cephadm/drivegroups.rst
index e4f9bd8c83ded..f1dd523e22204 100644
--- a/doc/cephadm/drivegroups.rst
+++ b/doc/cephadm/drivegroups.rst
@@ -40,6 +40,14 @@ This will go out on all the matching hosts and deploy these OSDs.
 
 Since we want to have more complex setups, there are more filters than just the 'all' filter.
 
+Also, there is a `--dry-run` flag that can be passed to the `apply osd` command, which gives you a synopsis
+of the proposed layout.
+
+Example::
+
+    [monitor 1] # ceph orch apply osd -i /path/to/osd_spec.yml --dry-run
+
+
 
 Filters
 =======
diff --git a/doc/mgr/orchestrator.rst b/doc/mgr/orchestrator.rst
index aefc92d907966..0e1ed50ddf76c 100644
--- a/doc/mgr/orchestrator.rst
+++ b/doc/mgr/orchestrator.rst
@@ -154,25 +154,23 @@ Example command::
 
 Create OSDs
 -----------
 
-Create OSDs on a group of devices on a single host::
+Create OSDs on a set of devices on a single host::
 
     ceph orch daemon add osd <host>:device1,device2
 
 Example::
 
-
-    # ceph orch daemon add osd node1:/dev/vdd
-    Created 1 OSD on host 'node1' using device '/dev/vdd'
-
+=========
 or::
 
-    ceph orch apply osd -i <json_file/yaml_file>
+    ceph orch apply osd -i <json_file/yaml_file> [--dry-run]
+
 Where the ``json_file/yaml_file`` is a DriveGroup specification.
 For a more in-depth guide to DriveGroups please refer to :ref:`drivegroups`
 
 or::
 
-    ceph orch apply osd --all-available-devices
+    ceph orch apply osd --all-available-devices [--dry-run]
 
 If the 'apply' method is used. You will be presented with a preview of what will happen.
@@ -261,25 +259,18 @@ The previously set the 'destroyed' flag is used to determined osd ids that will
 If you use OSDSpecs for osd deployment, your newly added disks will be assigned with the osd ids of their
 replaced counterpart, granted the new disk still match the OSDSpecs.
 
-For assistance in this process you can use the 'preview' feature:
-
-Example::
-
-
-    ceph orch apply osd --service-name <name_of_osd_spec> --preview
-    NAME                  HOST  DATA     DB WAL
-    <name_of_osd_spec>    node1 /dev/vdb -  -
-
-
+For assistance in this process you can use the '--dry-run' feature:
 
 
 Tip: The name of your OSDSpec can be retrieved from **ceph orch ls**
 
 Alternatively, you can use your OSDSpec file::
 
-    ceph orch apply osd -i <osd_spec_file> --preview
+    ceph orch apply osd -i <osd_spec_file> --dry-run
    NAME                  HOST  DATA     DB WAL
    <osd_spec_file>       node1 /dev/vdb -  -
 
-If this matches your anticipated behavior, just omit the --preview flag to execute the deployment.
+If this matches your anticipated behavior, just omit the --dry-run flag to execute the deployment.
 
 ..
 
@@ -318,13 +309,13 @@ error if it doesn't know how to do this transition.
 
 Update the number of monitor hosts::
 
-    ceph orch apply mon <num> [host, host:network...]
+    ceph orch apply mon <num> [host, host:network...] [--dry-run]
 
 Each host can optionally specify a network for the monitor to listen on.
 
 Update the number of manager hosts::
 
-    ceph orch apply mgr <num> [host...]
+    ceph orch apply mgr <num> [host...] [--dry-run]
 
 ..
 .. note::
@@ -400,9 +391,9 @@ The ``name`` parameter is an identifier of the group of instances:
 
 Creating/growing/shrinking/removing services::
 
-    ceph orch apply mds <fs_name> [--placement=<placement>]
-    ceph orch apply rgw <realm> <zone> [--subcluster=<subcluster>] [--port=<port>] [--ssl] [--placement=<placement>]
-    ceph orch apply nfs <name> <pool> [--namespace=<namespace>] [--placement=<placement>]
+    ceph orch apply mds <fs_name> [--placement=<placement>] [--dry-run]
+    ceph orch apply rgw <realm> <zone> [--subcluster=<subcluster>] [--port=<port>] [--ssl] [--placement=<placement>] [--dry-run]
+    ceph orch apply nfs <name> <pool> [--namespace=<namespace>] [--placement=<placement>] [--dry-run]
     ceph orch rm <service_name> [--force]
 
 Where ``placement`` is a :ref:`orchestrator-cli-placement-spec`.
@@ -624,7 +615,7 @@ specification.
 
 3. Apply the new ``ServiceSpec``::
 
-    ceph orch apply -i myservice.yaml
+    ceph orch apply -i myservice.yaml [--dry-run]
 
 Configuring the Orchestrator CLI
 ================================
-- 
2.39.5
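
For reference, a minimal OSDSpec of the kind passed via ``ceph orch apply osd -i <file> --dry-run``
could look like the sketch below. This is only an illustration: the service id, host pattern, and
device filter are hypothetical examples, not values taken from this patch::

    # osd_spec.yml -- hypothetical example DriveGroup specification
    service_type: osd
    service_id: example_drive_group   # any identifier you choose
    placement:
      host_pattern: '*'               # match every host managed by the orchestrator
    data_devices:
      all: true                       # consume all available, unused data devices

Running ``ceph orch apply osd -i osd_spec.yml --dry-run`` against such a file prints the proposed
NAME/HOST/DATA/DB/WAL layout shown in the documentation above instead of deploying any OSDs; omit
``--dry-run`` to apply the specification.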