From: Juan Miguel Olmo Martínez
Date: Wed, 24 Jun 2020 09:57:51 +0000 (+0200)
Subject: doc/mgr/orchestrator: Add information about unmanaged parameter
X-Git-Tag: v15.2.5~105^2~31
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=9d37e4869241483c0ab53f4d8b11b09893256bd9;p=ceph.git

doc/mgr/orchestrator: Add information about unmanaged parameter

Explain the 'unmanaged' parameter in OSD creation and deletion

Signed-off-by: Juan Miguel Olmo Martínez
(cherry picked from commit da62f752c5c233b5bac687ac6be5e00fe35084b9)
---

diff --git a/doc/mgr/orchestrator.rst b/doc/mgr/orchestrator.rst
index c2c86d40a097..aefc92d90796 100644
--- a/doc/mgr/orchestrator.rst
+++ b/doc/mgr/orchestrator.rst
@@ -144,6 +144,12 @@ Example command::

    ceph orch device zap my_hostname /dev/sdx

+.. note::
+    The cephadm orchestrator will automatically deploy drives that match the DriveGroup in your OSDSpec if the ``unmanaged`` flag is unset.
+    For example, if you use the ``all-available-devices`` option when creating OSDs, zapping a device will cause the cephadm orchestrator to automatically create a new OSD on that device.
+    To disable this behavior, see :ref:`orchestrator-cli-create-osds`.
+
+.. _orchestrator-cli-create-osds:

Create OSDs
-----------
@@ -152,23 +158,21 @@ Create OSDs on a group of devices on a single host::

    ceph orch daemon add osd <host>:device1,device2

-or::
-
-    ceph orch apply osd -i <json_file/yaml_file>
+Example::
+
+    # ceph orch daemon add osd node1:/dev/vdd
+    Created 1 OSD on host 'node1' using device '/dev/vdd'

or::

-    ceph orch apply osd --all-available-devices
-
+    ceph orch apply osd -i <json_file/yaml_file>
+
+Where ``json_file/yaml_file`` is a DriveGroup specification. For a more in-depth guide to DriveGroups, please refer to :ref:`drivegroups`.

-Example::
-
-    # ceph orch daemon add osd node1:/dev/vdd
-    Created osd(s) 6 on host 'node1'
+or::
+
+    ceph orch apply osd --all-available-devices

If the ``apply`` method is used, you will be presented with a preview of what will happen.

@@ -180,9 +184,29 @@ Example::

    all-available-devices node2 /dev/vdc  -      -
    all-available-devices node3 /dev/vdd  -      -

.. note::
-    Output form Cephadm orchestrator
+    Example output from cephadm orchestrator
+
+When the parameter ``all-available-devices`` or a DriveGroup specification is used, a cephadm service is created.
+This service guarantees that all available devices, or the devices included in the DriveGroup, will be used for OSDs.
+Be aware of the implications of this behavior, which is automatic and enabled by default.
+
+For example, after using::
+
+    ceph orch apply osd --all-available-devices
+
+* If you add new disks to the cluster, they will automatically be used to create new OSDs.
+* A new OSD will be created automatically if you remove an OSD and clean the LVM physical volume.
+
+If you want to avoid this behavior (disable the automatic creation of OSDs on available devices), use the ``unmanaged`` parameter::
+
+    ceph orch apply osd --all-available-devices --unmanaged=true
+
+If you have already created the OSDs using the ``all-available-devices`` service, you can change the automatic OSD creation with the following command::
+
+    ceph orch osd spec --service-name osd.all-available-devices --unmanaged

Remove an OSD
-------------------
@@ -211,6 +235,9 @@ You can query the state of the operation with::

    ceph orch osd rm status

When no PGs are left on the OSD, it will be decommissioned and removed from the cluster.

+.. note::
+    After removing an OSD, if you wipe the LVM physical volume of the device used by the removed OSD, a new OSD will be created.
+    Read information about the ``unmanaged`` parameter in :ref:`orchestrator-cli-create-osds`.

Replace an OSD
-------------------
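The ``--unmanaged`` behavior shown above can also be expressed declaratively in the DriveGroup specification passed to ``ceph orch apply osd -i``. A minimal sketch, assuming an illustrative service id ``all_devices`` and the ``all: true`` device filter (adapt both to your own DriveGroup):

.. code-block:: yaml

    # Illustrative DriveGroup: consume every available data device on all hosts,
    # but do not create OSDs automatically because the service is unmanaged.
    service_type: osd
    service_id: all_devices
    placement:
      host_pattern: '*'
    data_devices:
      all: true
    unmanaged: true

With ``unmanaged: true`` the specification is stored, but cephadm will not create OSDs from it automatically; re-applying the same file with ``unmanaged: false`` re-enables the automatic behavior.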
@@ -325,7 +352,7 @@ services of a particular type via optional --type parameter

Discover the status of a particular service or daemons::

    ceph orch ls --service_type type --service_name [--refresh]

Export the service specs known to the orchestrator as yaml in a format that is compatible with ``ceph orch apply -i``::

@@ -338,7 +365,7 @@ Daemon Status
-------------

Print a list of all daemons known to the orchestrator::

    ceph orch ps [--hostname host] [--daemon_type type] [--service_name name] [--daemon_id id] [--format f] [--refresh]

Query the status of a particular service instance (mon, osd, mds, rgw). For OSDs
the id is the numeric OSD ID, for MDS services it is the file system name::

@@ -346,18 +373,18 @@ the id is the numeric OSD ID, for MDS services it is the file system name::

.. _orchestrator-cli-cephfs:

Deploying CephFS
================

In order to set up a :term:`CephFS`, execute::

    ceph fs volume create <name> <placement>

Where ``name`` is the name of the CephFS and ``placement`` is a
:ref:`orchestrator-cli-placement-spec`.

This command will create the required Ceph pools, create the new
CephFS, and deploy mds servers.

Stateless services (MDS/RGW/NFS/rbd-mirror/iSCSI)
==================================================

@@ -385,31 +412,31 @@ e.g., ``ceph orch apply mds myfs --placement="3 host1 host2 host3"``

Service Commands::

    ceph orch <start|stop|restart|redeploy|reconfig> <service_name>

.. _orchestrator-cli-service-spec:

Service Specification
=====================

A *Service Specification* is a data structure often represented as YAML
to specify the deployment of services. For example:

.. code-block:: yaml

    service_type: rgw
    service_id: realm.zone
    placement:
      hosts:
        - host1
        - host2
        - host3
    spec: ...
    unmanaged: false

Where the properties of a service specification are the following:

* ``service_type`` is the type of the service. Needs to be either a Ceph
  service (``mon``, ``crash``, ``mds``, ``mgr``, ``osd`` or
  ``rbd-mirror``), a gateway (``nfs`` or ``rgw``), or part of the
  monitoring stack (``alertmanager``, ``grafana``, ``node-exporter`` or
  ``prometheus``).

@@ -433,8 +460,8 @@ an optional namespace:

    service_type: nfs
    service_id: mynfs
    placement:
      hosts:
        - host1
        - host2
    spec:
      pool: mypool
      namespace: nfs-ns
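A specification such as the NFS example above is normally saved to a file and submitted to the orchestrator with ``ceph orch apply -i``. A minimal usage sketch (the file name ``nfs_spec.yaml`` is an illustrative assumption)::

    # Write the specification to a file, then hand it to the orchestrator.
    cat > nfs_spec.yaml <<EOF
    service_type: nfs
    service_id: mynfs
    placement:
      hosts:
        - host1
        - host2
    spec:
      pool: mypool
      namespace: nfs-ns
    EOF
    ceph orch apply -i nfs_spec.yaml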
@@ -467,12 +494,12 @@ Many service specifications can then be applied at once using

    EOF

.. _orchestrator-cli-placement-spec:

Placement Specification
=======================

In order to allow the orchestrator to deploy a *service*, it needs to
know how many and where it should deploy *daemons*. The orchestrator
defines a placement specification that can be passed as a command
line argument.

@@ -481,22 +508,22 @@ Explicit placements
-------------------

Daemons can be explicitly placed on hosts by simply specifying them::

    orch apply prometheus "host1 host2 host3"

Or in yaml:

.. code-block:: yaml

    service_type: prometheus
    placement:
      hosts:
        - host1
        - host2
        - host3

MONs and other services may require some enhanced network specifications::

    orch daemon add mon myhost:[v2:1.2.3.4:3000,v1:1.2.3.4:6789]=name

Where ``[v2:1.2.3.4:3000,v1:1.2.3.4:6789]`` is the network address of the monitor
and ``=name`` specifies the name of the new monitor.

@@ -543,18 +570,18 @@ Or in yaml:

    placement:
      host_pattern: "*"

Setting a limit
---------------

By specifying ``count``, only that number of daemons will be created::

    orch apply prometheus 3

To deploy *daemons* on a subset of hosts, also specify the count::

    orch apply prometheus "2 host1 host2 host3"

If the count is bigger than the number of hosts, cephadm still deploys two daemons::

    orch apply prometheus "3 host1 host2"

@@ -566,7 +593,7 @@ Or in yaml:

.. code-block:: yaml

    service_type: prometheus
    placement:
      count: 3

Or with hosts:

@@ -574,10 +601,10 @@ Or with hosts:

.. code-block:: yaml

    service_type: prometheus
    placement:
      count: 2
      hosts:
        - host1
        - host2
        - host3

Updating Service Specifications
===============================
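A specification that has already been applied can later be updated by exporting it, editing it, and re-applying it. A minimal sketch of that workflow, assuming the export command referred to earlier is ``ceph orch ls --export`` and using the illustrative file name ``rgw.yaml``::

    # Export the stored specification of the rgw service from the earlier example,
    # edit it (for instance change the placement or set "unmanaged: true"),
    # and re-apply it.
    ceph orch ls --service_name rgw.realm.zone --export > rgw.yaml
    vi rgw.yaml
    ceph orch apply -i rgw.yaml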