From: Sebastian Wagner
Date: Thu, 18 Feb 2021 14:06:31 +0000 (+0100)
Subject: doc/cephadm: group general service mgmt sections into one chapter
X-Git-Tag: v17.1.0~2825^2~11
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=11fe5ef7cf7fb93440d58fee5594fad8bd7ef32b;p=ceph.git

doc/cephadm: group general service mgmt sections into one chapter

Signed-off-by: Sebastian Wagner
---

diff --git a/doc/cephadm/index.rst b/doc/cephadm/index.rst
index 61bed0b82e3..d83644c1298 100644
--- a/doc/cephadm/index.rst
+++ b/doc/cephadm/index.rst
@@ -32,6 +32,7 @@ versions of Ceph.
     adoption
     host-management
     osd
+    service-management
     upgrade
     Cephadm operations <operations>
     Cephadm monitoring <monitoring>
diff --git a/doc/cephadm/service-management.rst b/doc/cephadm/service-management.rst
new file mode 100644
index 00000000000..dbf11bf76cf
--- /dev/null
+++ b/doc/cephadm/service-management.rst
@@ -0,0 +1,281 @@
+==================
+Service Management
+==================
+
+Service Status
+==============
+
+A service is a group of daemons that are configured together.
+
+Print a list of services known to the orchestrator. The list can be limited
+to services of a particular type with the optional ``--service_type``
+parameter (mon, osd, mgr, mds, rgw) and/or to a particular service with the
+optional ``--service_name`` parameter:
+
+::
+
+    ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh]
+
+Discover the status of a particular service or daemon::
+
+    ceph orch ls --service_type type --service_name <name> [--refresh]
+
+Export the service specs known to the orchestrator as YAML, in a format
+that is compatible with ``ceph orch apply -i``::
+
+    ceph orch ls --export
+
+For examples of retrieving the specs of individual services, see
+:ref:`orchestrator-cli-service-spec-retrieve`.
+
+Daemon Status
+=============
+
+A daemon is a running systemd unit and is part of a service.
+
+Print a list of all daemons known to the orchestrator::
+
+    ceph orch ps [--hostname host] [--daemon_type type] [--service_name name] [--daemon_id id] [--format f] [--refresh]
+
+Query the status of a particular daemon instance (mon, osd, mds, rgw). For
+OSDs the id is the numeric OSD ID; for MDS services it is the file system
+name::
+
+    ceph orch ps --daemon_type osd --daemon_id 0
+
+.. _orchestrator-cli-service-spec:
+
+Service Specification
+=====================
+
+A *Service Specification* is a data structure that specifies the deployment
+of services. For example, in YAML:
+
+.. code-block:: yaml
+
+    service_type: rgw
+    service_id: realm.zone
+    placement:
+      hosts:
+        - host1
+        - host2
+        - host3
+    unmanaged: false
+    ...
+
+where the properties of a service specification are:
+
+* ``service_type``
+    The type of the service. Needs to be either a Ceph
+    service (``mon``, ``crash``, ``mds``, ``mgr``, ``osd`` or
+    ``rbd-mirror``), a gateway (``nfs`` or ``rgw``), part of the
+    monitoring stack (``alertmanager``, ``grafana``, ``node-exporter`` or
+    ``prometheus``), or ``container`` for custom containers.
+* ``service_id``
+    The name of the service.
+* ``placement``
+    See :ref:`orchestrator-cli-placement-spec`.
+* ``unmanaged``
+    If set to ``true``, the orchestrator will neither deploy nor
+    remove any daemon associated with this service. Placement and all other
+    properties will be ignored. This is useful if the service should
+    temporarily not be managed. For cephadm, see :ref:`cephadm-spec-unmanaged`.
+
+Each service type can have additional service-specific properties.
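+
+As a quick illustration, a specification like the ``rgw`` example above can
+be saved to a file (here the hypothetical ``rgw.yaml``) and submitted to the
+orchestrator with::
+
+    # Preview the resulting placement without changing the cluster.
+    ceph orch apply -i rgw.yaml --dry-run
+
+    # Apply (or re-apply) the specification.
+    ceph orch apply -i rgw.yaml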
+
+Service specifications of type ``mon``, ``mgr``, and the monitoring
+types do not require a ``service_id``.
+
+A service of type ``nfs`` requires a pool name and may contain
+an optional namespace:
+
+.. code-block:: yaml
+
+    service_type: nfs
+    service_id: mynfs
+    placement:
+      hosts:
+        - host1
+        - host2
+    spec:
+      pool: mypool
+      namespace: mynamespace
+
+where ``pool`` is the RADOS pool in which the NFS client recovery data is
+stored, and ``namespace`` is the RADOS namespace within that pool.
+
+A service of type ``osd`` is described in :ref:`drivegroups`.
+
+Many service specifications can be applied at once using
+``ceph orch apply -i`` by submitting a multi-document YAML file::
+
+    cat <<EOF | ceph orch apply -i -
+    service_type: mon
+    placement:
+      host_pattern: "mon*"
+    ---
+    service_type: mgr
+    placement:
+      host_pattern: "mgr*"
+    ---
+    service_type: osd
+    service_id: default_drive_group
+    placement:
+      host_pattern: "osd*"
+    data_devices:
+      all: true
+    EOF
+
+.. _orchestrator-cli-service-spec-retrieve:
+
+Retrieving the running Service Specification
+--------------------------------------------
+
+If the services have been started via the orchestrator, the running service
+specification can be exported with::
+
+    ceph orch ls --service-name rgw.<realm>.<zone> --export > rgw.<realm>.<zone>.yaml
+    ceph orch ls --service-type mgr --export > mgr.yaml
+    ceph orch ls --export > cluster.yaml
+
+The specification can then be changed and re-applied as above.
+
+.. _orchestrator-cli-placement-spec:
+
+Placement Specification
+=======================
+
+For the orchestrator to deploy a *service*, it needs to know where to deploy
+*daemons*, and how many to deploy. This is the role of a placement
+specification. Placement specifications can either be passed as command line
+arguments or in a YAML file.
+
+Explicit placements
+-------------------
+
+Daemons can be explicitly placed on hosts by simply specifying them::
+
+    orch apply prometheus --placement="host1 host2 host3"
+
+Or in YAML:
+
+.. code-block:: yaml
+
+    service_type: prometheus
+    placement:
+      hosts:
+        - host1
+        - host2
+        - host3
+
+MONs and other services may require some enhanced network specifications::
+
+    orch daemon add mon --placement="myhost:[v2:1.2.3.4:3300,v1:1.2.3.4:6789]=name"
+
+where ``[v2:1.2.3.4:3300,v1:1.2.3.4:6789]`` is the network address of the
+monitor and ``=name`` specifies the name of the new monitor.
+
+.. _orch-placement-by-labels:
+
+Placement by labels
+-------------------
+
+Daemons can be explicitly placed on hosts that match a specific label::
+
+    orch apply prometheus --placement="label:mylabel"
+
+Or in YAML:
+
+.. code-block:: yaml
+
+    service_type: prometheus
+    placement:
+      label: "mylabel"
+
+* See :ref:`orchestrator-host-labels`
+
+Placement by pattern matching
+-----------------------------
+
+Daemons can also be placed on hosts using a host pattern::
+
+    orch apply prometheus --placement='myhost[1-3]'
+
+Or in YAML:
+
+.. code-block:: yaml
+
+    service_type: prometheus
+    placement:
+      host_pattern: "myhost[1-3]"
+
+To place a service on *all* hosts, use ``"*"``::
+
+    orch apply node-exporter --placement='*'
+
+Or in YAML:
+
+.. code-block:: yaml
+
+    service_type: node-exporter
+    placement:
+      host_pattern: "*"
+
+Setting a limit
+---------------
+
+By specifying ``count``, only that number of daemons will be created::
+
+    orch apply prometheus --placement=3
+
+To deploy *daemons* on a subset of hosts, also specify the count::
+
+    orch apply prometheus --placement="2 host1 host2 host3"
+
+If the count is bigger than the number of hosts, cephadm deploys one daemon
+per host; the following results in two Prometheus daemons::
+
+    orch apply prometheus --placement="3 host1 host2"
+
+Or in YAML:
+
+.. code-block:: yaml
+
+    service_type: prometheus
+    placement:
+      count: 3
+
+Or with hosts:
+
+.. code-block:: yaml
+
+    service_type: prometheus
+    placement:
+      count: 2
+      hosts:
+        - host1
+        - host2
+        - host3
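+
+To check which daemons a placement actually produced, the ``ceph orch ps``
+command shown earlier can be filtered by service name (a quick sketch, using
+the ``prometheus`` service from the examples above)::
+
+    # One line is printed per deployed daemon; the number of rows
+    # reflects the effective placement count.
+    ceph orch ps --service_name prometheus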
+
+Updating Service Specifications
+===============================
+
+The Ceph Orchestrator maintains a declarative state of each service in a
+``ServiceSpec``. For certain operations, like updating the RGW HTTP port,
+we need to update the existing specification.
+
+1. List the current ``ServiceSpec``::
+
+    ceph orch ls --service_name=<service-name> --export > myservice.yaml
+
+2. Update the YAML file::
+
+    vi myservice.yaml
+
+3. Apply the new ``ServiceSpec``::
+
+    ceph orch apply -i myservice.yaml [--dry-run]
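+
+Put together, the procedure looks like this (a sketch; the service name
+``rgw.realm.zone`` and the file name are placeholders)::
+
+    # 1. Export the current specification of the service.
+    ceph orch ls --service_name=rgw.realm.zone --export > myservice.yaml
+
+    # 2. Edit the exported YAML, e.g. change a property of the service.
+    vi myservice.yaml
+
+    # 3. Preview the effect, then apply the updated specification.
+    ceph orch apply -i myservice.yaml --dry-run
+    ceph orch apply -i myservice.yaml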
diff --git a/doc/mgr/orchestrator.rst b/doc/mgr/orchestrator.rst
index 64ff9bdf7f8..b527a5cef4e 100644
--- a/doc/mgr/orchestrator.rst
+++ b/doc/mgr/orchestrator.rst
@@ -138,41 +138,6 @@ Where ``placement`` is a :ref:`orchestrator-cli-placement-spec`.
 and mandatory for others (e.g. Ansible).
 
-Service Status
-==============
-
-Print a list of services known to the orchestrator. The list can be limited to
-services on a particular host with the optional --host parameter and/or
-services of a particular type via optional --type parameter
-(mon, osd, mgr, mds, rgw):
-
-::
-
-    ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh]
-
-Discover the status of a particular service or daemons::
-
-    ceph orch ls --service_type type --service_name <name> [--refresh]
-
-Export the service specs known to the orchestrator as yaml in format
-that is compatible to ``ceph orch apply -i``::
-
-    ceph orch ls --export
-
-For examples about retrieving specs of single services see :ref:`orchestrator-cli-service-spec-retrieve`.
-
-Daemon Status
-=============
-
-Print a list of all daemons known to the orchestrator::
-
-    ceph orch ps [--hostname host] [--daemon_type type] [--service_name name] [--daemon_id id] [--format f] [--refresh]
-
-Query the status of a particular service instance (mon, osd, mds, rgw). For OSDs
-the id is the numeric OSD ID, for MDS services it is the file system name::
-
-    ceph orch ps --daemon_type osd --daemon_id 0
-
 
 .. _orchestrator-cli-cephfs:
 
@@ -464,245 +429,7 @@ where the properties of a service specification are:
     The absolute path of the directory where the file will be created
     must exist. Use the `dirs` property to create them if necessary.
 
-.. _orchestrator-cli-service-spec:
-
-Service Specification
-=====================
-
-A *Service Specification* is a data structure represented as YAML
-to specify the deployment of services. For example:
-
-.. code-block:: yaml
-
-    service_type: rgw
-    service_id: realm.zone
-    placement:
-      hosts:
-        - host1
-        - host2
-        - host3
-    unmanaged: false
-    ...
-
-where the properties of a service specification are:
-
-* ``service_type``
-    The type of the service. Needs to be either a Ceph
-    service (``mon``, ``crash``, ``mds``, ``mgr``, ``osd`` or
-    ``rbd-mirror``), a gateway (``nfs`` or ``rgw``), part of the
-    monitoring stack (``alertmanager``, ``grafana``, ``node-exporter`` or
-    ``prometheus``) or (``container``) for custom containers.
-* ``service_id``
-    The name of the service.
-* ``placement``
-    See :ref:`orchestrator-cli-placement-spec`.
-* ``unmanaged``
-    If set to ``true``, the orchestrator will not deploy nor
-    remove any daemon associated with this service. Placement and all other
-    properties will be ignored. This is useful, if this service should not
-    be managed temporarily. For cephadm, See :ref:`cephadm-spec-unmanaged`
-
-Each service type can have additional service specific properties.
-
-Service specifications of type ``mon``, ``mgr``, and the monitoring
-types do not require a ``service_id``.
-
-A service of type ``nfs`` requires a pool name and may contain
-an optional namespace:
-
-.. code-block:: yaml
-
-    service_type: nfs
-    service_id: mynfs
-    placement:
-      hosts:
-        - host1
-        - host2
-    spec:
-      pool: mypool
-      namespace: mynamespace
-
-where ``pool`` is a RADOS pool where NFS client recovery data is stored
-and ``namespace`` is a RADOS namespace where NFS client recovery
-data is stored in the pool.
-
-A service of type ``osd`` is described in :ref:`drivegroups`
-
-Many service specifications can be applied at once using
-``ceph orch apply -i`` by submitting a multi-document YAML file::
-
-    cat <<EOF | ceph orch apply -i -
-    service_type: mon
-    placement:
-      host_pattern: "mon*"
-    ---
-    service_type: mgr
-    placement:
-      host_pattern: "mgr*"
-    ---
-    service_type: osd
-    service_id: default_drive_group
-    placement:
-      host_pattern: "osd*"
-    data_devices:
-      all: true
-    EOF
-
-.. _orchestrator-cli-service-spec-retrieve:
-
-Retrieving the running Service Specification
---------------------------------------------
-
-If the services have been started via the orchestrator, the running service
-specification can be exported with::
-
-    ceph orch ls --service-name rgw.<realm>.<zone> --export > rgw.<realm>.<zone>.yaml
-    ceph orch ls --service-type mgr --export > mgr.yaml
-    ceph orch ls --export > cluster.yaml
-
-The Specification can then be changed and re-applied as above.
-
-.. _orchestrator-cli-placement-spec:
-
-Placement Specification
-=======================
-
-For the orchestrator to deploy a *service*, it needs to know where to deploy
-*daemons*, and how many to deploy. This is the role of a placement
-specification. Placement specifications can either be passed as command line arguments
-or in a YAML files.
-
-Explicit placements
--------------------
-
-Daemons can be explicitly placed on hosts by simply specifying them::
-
-    orch apply prometheus --placement="host1 host2 host3"
-
-Or in YAML:
-
-.. code-block:: yaml
-
-    service_type: prometheus
-    placement:
-      hosts:
-        - host1
-        - host2
-        - host3
-
-MONs and other services may require some enhanced network specifications::
-
-    orch daemon add mon --placement="myhost:[v2:1.2.3.4:3300,v1:1.2.3.4:6789]=name"
-
-where ``[v2:1.2.3.4:3300,v1:1.2.3.4:6789]`` is the network address of the monitor
-and ``=name`` specifies the name of the new monitor.
-
-.. _orch-placement-by-labels:
-
-Placement by labels
--------------------
-
-Daemons can be explicitly placed on hosts that match a specific label::
-
-    orch apply prometheus --placement="label:mylabel"
-
-Or in YAML:
-
-.. code-block:: yaml
-
-    service_type: prometheus
-    placement:
-      label: "mylabel"
-
-* See :ref:`orchestrator-host-labels`
-
-
-Placement by pattern matching
------------------------------
-
-Daemons can be placed on hosts as well::
-
-    orch apply prometheus --placement='myhost[1-3]'
-
-Or in YAML:
-
-.. code-block:: yaml
-
-    service_type: prometheus
-    placement:
-      host_pattern: "myhost[1-3]"
-
-To place a service on *all* hosts, use ``"*"``::
-
-    orch apply node-exporter --placement='*'
-
-Or in YAML:
-
-.. code-block:: yaml
-
-    service_type: node-exporter
-    placement:
-      host_pattern: "*"
-
-
-Setting a limit
----------------
-
-By specifying ``count``, only that number of daemons will be created::
-
-    orch apply prometheus --placement=3
-
-To deploy *daemons* on a subset of hosts, also specify the count::
-
-    orch apply prometheus --placement="2 host1 host2 host3"
-
-If the count is bigger than the amount of hosts, cephadm deploys one per host::
-
-    orch apply prometheus --placement="3 host1 host2"
-
-results in two Prometheus daemons.
-
-Or in YAML:
-
-.. code-block:: yaml
-
-    service_type: prometheus
-    placement:
-      count: 3
-
-Or with hosts:
-
-.. code-block:: yaml
-
-    service_type: prometheus
-    placement:
-      count: 2
-      hosts:
-        - host1
-        - host2
-        - host3
-
-Updating Service Specifications
-===============================
-
-The Ceph Orchestrator maintains a declarative state of each
-service in a ``ServiceSpec``. For certain operations, like updating
-the RGW HTTP port, we need to update the existing
-specification.
-
-1. List the current ``ServiceSpec``::
-
-    ceph orch ls --service_name=<service-name> --export > myservice.yaml
-
-2. Update the yaml file::
-
-    vi myservice.yaml
-
-3. Apply the new ``ServiceSpec``::
-
-    ceph orch apply -i myservice.yaml [--dry-run]
 
 Configuring the Orchestrator CLI
 ================================