--- /dev/null
+==================
+Service Management
+==================
+
+Service Status
+==============
+
+A service is a group of daemons that are configured together.
+
+Print a list of services known to the orchestrator. The list can be limited to
+services of a particular type via the optional ``--service_type`` parameter
+(``mon``, ``osd``, ``mgr``, ``mds``, ``rgw``) and/or to a particular service
+via the optional ``--service_name`` parameter:
+
+::
+
+ ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh]
+
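+For example, to list only RGW services and format the output as YAML, the
+optional flags from the synopsis above can be combined::
+
+    ceph orch ls --service_type rgw --format yaml
+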
+Discover the status of a particular service::
+
+    ceph orch ls --service_type <type> --service_name <name> [--refresh]
+
+Export the service specs known to the orchestrator as YAML, in a format
+that is compatible with ``ceph orch apply -i``::
+
+ ceph orch ls --export
+
+For examples of retrieving the specs of individual services, see :ref:`orchestrator-cli-service-spec-retrieve`.
+
+Daemon Status
+=============
+
+A daemon is a running systemd unit and is part of a service.
+
+Print a list of all daemons known to the orchestrator::
+
+ ceph orch ps [--hostname host] [--daemon_type type] [--service_name name] [--daemon_id id] [--format f] [--refresh]
+
+Query the status of a particular service instance (mon, osd, mds, rgw). For
+OSDs the id is the numeric OSD ID; for MDS services it is the file system name::
+
+ ceph orch ps --daemon_type osd --daemon_id 0
+
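+Following the same convention, a hypothetical query for the MDS daemons of a
+file system (here the placeholder name ``myfs``) might look like this::
+
+    ceph orch ps --daemon_type mds --daemon_id myfs
+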
+.. _orchestrator-cli-service-spec:
+
+Service Specification
+=====================
+
+A *Service Specification* is a data structure that specifies the
+deployment of services. For example, in YAML:
+
+.. code-block:: yaml
+
+ service_type: rgw
+ service_id: realm.zone
+ placement:
+ hosts:
+ - host1
+ - host2
+ - host3
+ unmanaged: false
+ ...
+
+where the properties of a service specification are:
+
+* ``service_type``
+ The type of the service. Needs to be either a Ceph
+ service (``mon``, ``crash``, ``mds``, ``mgr``, ``osd`` or
+ ``rbd-mirror``), a gateway (``nfs`` or ``rgw``), part of the
+ monitoring stack (``alertmanager``, ``grafana``, ``node-exporter`` or
+ ``prometheus``) or (``container``) for custom containers.
+* ``service_id``
+ The name of the service.
+* ``placement``
+ See :ref:`orchestrator-cli-placement-spec`.
+* ``unmanaged``
+  If set to ``true``, the orchestrator will neither deploy nor
+  remove any daemon associated with this service. Placement and all other
+  properties will be ignored. This is useful if the service should
+  temporarily not be managed. For cephadm, see :ref:`cephadm-spec-unmanaged`.
+
+Each service type can have additional service specific properties.
+
+Service specifications of type ``mon``, ``mgr``, and the monitoring
+types do not require a ``service_id``.
+
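+A minimal sketch of such a specification, assuming three monitor daemons are
+desired, can therefore omit the ``service_id``:
+
+.. code-block:: yaml
+
+    service_type: mon
+    placement:
+      count: 3
+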
+A service of type ``nfs`` requires a pool name and may contain
+an optional namespace:
+
+.. code-block:: yaml
+
+ service_type: nfs
+ service_id: mynfs
+ placement:
+ hosts:
+ - host1
+ - host2
+ spec:
+ pool: mypool
+ namespace: mynamespace
+
+where ``pool`` is the RADOS pool in which the NFS client recovery data is
+stored and ``namespace`` is the RADOS namespace within that pool.
+
+A service of type ``osd`` is described in :ref:`drivegroups`.
+
+Many service specifications can be applied at once using
+``ceph orch apply -i`` by submitting a multi-document YAML file::
+
+ cat <<EOF | ceph orch apply -i -
+ service_type: mon
+ placement:
+ host_pattern: "mon*"
+ ---
+ service_type: mgr
+ placement:
+ host_pattern: "mgr*"
+ ---
+ service_type: osd
+ service_id: default_drive_group
+ placement:
+ host_pattern: "osd*"
+ data_devices:
+ all: true
+ EOF
+
+.. _orchestrator-cli-service-spec-retrieve:
+
+Retrieving the running Service Specification
+--------------------------------------------
+
+If the services have been started via ``ceph orch apply ...``, then changing
+the Service Specification directly is complicated. Instead, export the
+running Service Specification::
+
+ ceph orch ls --service-name rgw.<realm>.<zone> --export > rgw.<realm>.<zone>.yaml
+ ceph orch ls --service-type mgr --export > mgr.yaml
+ ceph orch ls --export > cluster.yaml
+
+The Specification can then be changed and re-applied as above.
+
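+For example, after editing the exported ``mgr.yaml`` from above, it can be
+re-applied with::
+
+    ceph orch apply -i mgr.yaml
+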
+.. _orchestrator-cli-placement-spec:
+
+Placement Specification
+=======================
+
+For the orchestrator to deploy a *service*, it needs to know where to deploy
+*daemons*, and how many to deploy. This is the role of a placement
+specification. Placement specifications can be passed as command line
+arguments or in a YAML file.
+
+Explicit placements
+-------------------
+
+Daemons can be explicitly placed on hosts by simply specifying them::
+
+ orch apply prometheus --placement="host1 host2 host3"
+
+Or in YAML:
+
+.. code-block:: yaml
+
+ service_type: prometheus
+ placement:
+ hosts:
+ - host1
+ - host2
+ - host3
+
+MONs and other services may require some enhanced network specifications::
+
+ orch daemon add mon --placement="myhost:[v2:1.2.3.4:3300,v1:1.2.3.4:6789]=name"
+
+where ``[v2:1.2.3.4:3300,v1:1.2.3.4:6789]`` is the network address of the monitor
+and ``=name`` specifies the name of the new monitor.
+
+.. _orch-placement-by-labels:
+
+Placement by labels
+-------------------
+
+Daemons can be explicitly placed on hosts that match a specific label::
+
+ orch apply prometheus --placement="label:mylabel"
+
+Or in YAML:
+
+.. code-block:: yaml
+
+ service_type: prometheus
+ placement:
+ label: "mylabel"
+
+For more information on host labels, see :ref:`orchestrator-host-labels`.
+
+Placement by pattern matching
+-----------------------------
+
+Daemons can also be placed on hosts matching a host pattern::
+
+ orch apply prometheus --placement='myhost[1-3]'
+
+Or in YAML:
+
+.. code-block:: yaml
+
+ service_type: prometheus
+ placement:
+ host_pattern: "myhost[1-3]"
+
+To place a service on *all* hosts, use ``"*"``::
+
+ orch apply node-exporter --placement='*'
+
+Or in YAML:
+
+.. code-block:: yaml
+
+ service_type: node-exporter
+ placement:
+ host_pattern: "*"
+
+
+Setting a limit
+---------------
+
+By specifying ``count``, only that number of daemons will be created::
+
+ orch apply prometheus --placement=3
+
+To deploy *daemons* on a subset of hosts, also specify the count::
+
+ orch apply prometheus --placement="2 host1 host2 host3"
+
+If the count is bigger than the number of hosts, cephadm deploys only one
+daemon per host::
+
+ orch apply prometheus --placement="3 host1 host2"
+
+The command above results in two Prometheus daemons.
+
+Or in YAML:
+
+.. code-block:: yaml
+
+ service_type: prometheus
+ placement:
+ count: 3
+
+Or with hosts, in which case ``count`` selects a subset of the listed hosts:
+
+.. code-block:: yaml
+
+ service_type: prometheus
+ placement:
+ count: 2
+ hosts:
+ - host1
+ - host2
+ - host3
+
+Updating Service Specifications
+===============================
+
+The Ceph Orchestrator maintains a declarative state of each
+service in a ``ServiceSpec``. For certain operations, like updating
+the RGW HTTP port, we need to update the existing
+specification.
+
+1. List the current ``ServiceSpec``::
+
+ ceph orch ls --service_name=<service-name> --export > myservice.yaml
+
+2. Update the YAML file::
+
+ vi myservice.yaml
+
+3. Apply the new ``ServiceSpec``::
+
+ ceph orch apply -i myservice.yaml [--dry-run]
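+
+As a concrete sketch, updating the HTTP port of the RGW service from the
+earlier example might involve editing the exported YAML to set a port
+(assuming the RGW spec supports a ``rgw_frontend_port`` field; check the
+exact field name for your release) before re-applying it:
+
+.. code-block:: yaml
+
+    service_type: rgw
+    service_id: realm.zone
+    placement:
+      hosts:
+      - host1
+      - host2
+      - host3
+    spec:
+      rgw_frontend_port: 8080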
and mandatory for others (e.g. Ansible).
-Service Status
-==============
-
-Print a list of services known to the orchestrator. The list can be limited to
-services on a particular host with the optional --host parameter and/or
-services of a particular type via optional --type parameter
-(mon, osd, mgr, mds, rgw):
-
-::
-
- ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh]
-
-Discover the status of a particular service or daemons::
-
- ceph orch ls --service_type type --service_name <name> [--refresh]
-
-Export the service specs known to the orchestrator as yaml in format
-that is compatible to ``ceph orch apply -i``::
-
- ceph orch ls --export
-
-For examples about retrieving specs of single services see :ref:`orchestrator-cli-service-spec-retrieve`.
-
-Daemon Status
-=============
-
-Print a list of all daemons known to the orchestrator::
-
- ceph orch ps [--hostname host] [--daemon_type type] [--service_name name] [--daemon_id id] [--format f] [--refresh]
-
-Query the status of a particular service instance (mon, osd, mds, rgw). For OSDs
-the id is the numeric OSD ID, for MDS services it is the file system name::
-
- ceph orch ps --daemon_type osd --daemon_id 0
-
.. _orchestrator-cli-cephfs:
The absolute path of the directory where the file will be created must
exist. Use the `dirs` property to create them if necessary.
-.. _orchestrator-cli-service-spec:
-
-Service Specification
-=====================
-
-A *Service Specification* is a data structure represented as YAML
-to specify the deployment of services. For example:
-
-.. code-block:: yaml
-
- service_type: rgw
- service_id: realm.zone
- placement:
- hosts:
- - host1
- - host2
- - host3
- unmanaged: false
- ...
-
-where the properties of a service specification are:
-
-* ``service_type``
- The type of the service. Needs to be either a Ceph
- service (``mon``, ``crash``, ``mds``, ``mgr``, ``osd`` or
- ``rbd-mirror``), a gateway (``nfs`` or ``rgw``), part of the
- monitoring stack (``alertmanager``, ``grafana``, ``node-exporter`` or
- ``prometheus``) or (``container``) for custom containers.
-* ``service_id``
- The name of the service.
-* ``placement``
- See :ref:`orchestrator-cli-placement-spec`.
-* ``unmanaged``
- If set to ``true``, the orchestrator will not deploy nor
- remove any daemon associated with this service. Placement and all other
- properties will be ignored. This is useful, if this service should not
- be managed temporarily. For cephadm, See :ref:`cephadm-spec-unmanaged`
-
-Each service type can have additional service specific properties.
-
-Service specifications of type ``mon``, ``mgr``, and the monitoring
-types do not require a ``service_id``.
-
-A service of type ``nfs`` requires a pool name and may contain
-an optional namespace:
-
-.. code-block:: yaml
-
- service_type: nfs
- service_id: mynfs
- placement:
- hosts:
- - host1
- - host2
- spec:
- pool: mypool
- namespace: mynamespace
-
-where ``pool`` is a RADOS pool where NFS client recovery data is stored
-and ``namespace`` is a RADOS namespace where NFS client recovery
-data is stored in the pool.
-
-A service of type ``osd`` is described in :ref:`drivegroups`
-
-Many service specifications can be applied at once using
-``ceph orch apply -i`` by submitting a multi-document YAML file::
-
- cat <<EOF | ceph orch apply -i -
- service_type: mon
- placement:
- host_pattern: "mon*"
- ---
- service_type: mgr
- placement:
- host_pattern: "mgr*"
- ---
- service_type: osd
- service_id: default_drive_group
- placement:
- host_pattern: "osd*"
- data_devices:
- all: true
- EOF
-
-.. _orchestrator-cli-service-spec-retrieve:
-
-Retrieving the running Service Specification
---------------------------------------------
-
-If the services have been started via ``ceph orch apply...``, then directly changing
-the Services Specification is complicated. Instead of attempting to directly change
-the Services Specification, we suggest exporting the running Service Specification by
-following these instructions::
-
- ceph orch ls --service-name rgw.<realm>.<zone> --export > rgw.<realm>.<zone>.yaml
- ceph orch ls --service-type mgr --export > mgr.yaml
- ceph orch ls --export > cluster.yaml
-
-The Specification can then be changed and re-applied as above.
-
-.. _orchestrator-cli-placement-spec:
-
-Placement Specification
-=======================
-
-For the orchestrator to deploy a *service*, it needs to know where to deploy
-*daemons*, and how many to deploy. This is the role of a placement
-specification. Placement specifications can either be passed as command line arguments
-or in a YAML files.
-
-Explicit placements
--------------------
-
-Daemons can be explicitly placed on hosts by simply specifying them::
-
- orch apply prometheus --placement="host1 host2 host3"
-
-Or in YAML:
-
-.. code-block:: yaml
-
- service_type: prometheus
- placement:
- hosts:
- - host1
- - host2
- - host3
-
-MONs and other services may require some enhanced network specifications::
-
- orch daemon add mon --placement="myhost:[v2:1.2.3.4:3300,v1:1.2.3.4:6789]=name"
-
-where ``[v2:1.2.3.4:3300,v1:1.2.3.4:6789]`` is the network address of the monitor
-and ``=name`` specifies the name of the new monitor.
-
-.. _orch-placement-by-labels:
-
-Placement by labels
--------------------
-
-Daemons can be explicitly placed on hosts that match a specific label::
-
- orch apply prometheus --placement="label:mylabel"
-
-Or in YAML:
-
-.. code-block:: yaml
-
- service_type: prometheus
- placement:
- label: "mylabel"
-
-* See :ref:`orchestrator-host-labels`
-
-
-Placement by pattern matching
------------------------------
-
-Daemons can be placed on hosts as well::
-
- orch apply prometheus --placement='myhost[1-3]'
-
-Or in YAML:
-
-.. code-block:: yaml
-
- service_type: prometheus
- placement:
- host_pattern: "myhost[1-3]"
-
-To place a service on *all* hosts, use ``"*"``::
-
- orch apply node-exporter --placement='*'
-
-Or in YAML:
-
-.. code-block:: yaml
-
- service_type: node-exporter
- placement:
- host_pattern: "*"
-
-
-Setting a limit
----------------
-
-By specifying ``count``, only that number of daemons will be created::
-
- orch apply prometheus --placement=3
-
-To deploy *daemons* on a subset of hosts, also specify the count::
-
- orch apply prometheus --placement="2 host1 host2 host3"
-
-If the count is bigger than the amount of hosts, cephadm deploys one per host::
-
- orch apply prometheus --placement="3 host1 host2"
-
-results in two Prometheus daemons.
-
-Or in YAML:
-
-.. code-block:: yaml
-
- service_type: prometheus
- placement:
- count: 3
-
-Or with hosts:
-
-.. code-block:: yaml
-
- service_type: prometheus
- placement:
- count: 2
- hosts:
- - host1
- - host2
- - host3
-
-Updating Service Specifications
-===============================
-
-The Ceph Orchestrator maintains a declarative state of each
-service in a ``ServiceSpec``. For certain operations, like updating
-the RGW HTTP port, we need to update the existing
-specification.
-
-1. List the current ``ServiceSpec``::
-
- ceph orch ls --service_name=<service-name> --export > myservice.yaml
-
-2. Update the yaml file::
-
- vi myservice.yaml
-
-3. Apply the new ``ServiceSpec``::
- ceph orch apply -i myservice.yaml [--dry-run]
Configuring the Orchestrator CLI
================================