The relation between the names is the following:
-* A *service* has a specfic *service type*
+* A *service* has a specific *service type*
* A *daemon* is a physical instance of a *service type*
.. note::
Orchestrator modules may only implement a subset of the commands listed below.
- Also, the implementation of the commands are orchestrator module dependent and will
- differ between implementations.
+ Also, the implementation of the commands may differ between modules.
Status
======
ceph orch status
-Show current orchestrator mode and high-level status (whether the module able
-to talk to it)
+Show current orchestrator mode and high-level status (whether the orchestrator
+module is available and operational)
Host Management
===============
addr: node-02
hostname: node-02
-This can be combined with service specifications (below) to create a cluster spec file to deploy a whole cluster in one command. see ``cephadm bootstrap --apply-spec`` also to do this during bootstrap. Cluster SSH Keys must be copied to hosts prior.
+This can be combined with service specifications (below) to create a cluster spec file that deploys a whole cluster in one command. See ``cephadm bootstrap --apply-spec`` to do this during bootstrap. The cluster SSH key must be copied to hosts before they are added.
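+
+For example, a minimal cluster spec file might look like this (the hostnames,
+addresses, and placement count below are placeholders):
+
+.. code-block:: yaml
+
+    service_type: host
+    hostname: node-01
+    addr: 192.168.0.11
+    ---
+    service_type: host
+    hostname: node-02
+    addr: 192.168.0.12
+    ---
+    service_type: mon
+    placement:
+      count: 3
+
+Such a file can be applied in a single step with ``ceph orch apply -i <spec_file>``.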
OSD Management
==============
Erase Devices (Zap Devices)
---------------------------
-Erase (zap) a device so that it can be resued. ``zap`` calls ``ceph-volume zap`` on the remote host.
+Erase (zap) a device so that it can be reused. ``zap`` calls ``ceph-volume zap`` on the remote host.
::
.. note::
-Cephadm orchestrator will automatically deploy drives that match the DriveGroup in your OSDSpec if the unmanaged flag is unset.
+The cephadm orchestrator will automatically deploy OSDs on drives that match the DriveGroup in your OSDSpec if the unmanaged flag is unset.
- For example, if you use the ``all-available-devices`` option when creating OSD's, when you ``zap`` a device the cephadm orchestrator will automatically create a new OSD in the device .
+ For example, if you used the ``all-available-devices`` option when creating OSDs, the cephadm orchestrator will automatically create a new OSD on a device as soon as you ``zap`` it.
To disable this behavior, see :ref:`orchestrator-cli-create-osds`.
.. _orchestrator-cli-create-osds:
ceph orch apply osd -i <json_file/yaml_file> [--dry-run]
-Where the ``json_file/yaml_file`` is a DriveGroup specification.
+where the ``json_file/yaml_file`` is a DriveGroup specification.
For a more in-depth guide to DriveGroups please refer to :ref:`drivegroups`
-Along with ``apply`` interface if ``dry-run`` option is used, it will present a
-preview of what will happen.
+The ``--dry-run`` flag causes the orchestrator to present a preview of what will
+happen without actually creating the OSDs.
Example::
all-available-devices node2 /dev/vdc - -
all-available-devices node3 /dev/vdd - -
-.. note::
- Example output from cephadm orchestrator
-
When the parameter ``all-available-devices`` or a DriveGroup specification is used, a cephadm service is created.
-This service guarantees that all available devices or devices included in the DriveGroup will be used for OSD's.
-Take into account the implications of this behavior, which is automatic and enabled by default.
+This service guarantees that all available devices or devices included in the DriveGroup will be used for OSDs.
+Note that the effect of ``--all-available-devices`` is persistent; that is, drives which are added to the system
+or become available (say, by zapping) after the command is complete will be automatically found and added to the cluster.
-For example:
-
-After using::
+That is, after using::
ceph orch apply osd --all-available-devices
-* If you add new disks to the cluster they will automatically be used to create new OSD's.
+* If you add new disks to the cluster they will automatically be used to create new OSDs.
* A new OSD will be created automatically if you remove an OSD and clean the LVM physical volume.
-If you want to avoid this behavior (disable automatic creation of OSD in available devices), use the ``unmanaged`` parameter::
+If you want to avoid this behavior (disable the automatic creation of OSDs on available devices), use the ``unmanaged`` parameter::
ceph orch apply osd --all-available-devices --unmanaged=true
+If you have already created the OSDs using the ``all-available-devices`` service, you can disable automatic OSD creation with the following command::
+
+ ceph orch osd spec --service-name osd.all-available-devices --unmanaged
+
Remove an OSD
-------------
::
4 cephadm-dev started 42 False True 2020-07-17 13:01:45.162158
-When no PGs are left on the osd, it will be decommissioned and removed from the cluster.
+When no PGs are left on the OSD, it will be decommissioned and removed from the cluster.
.. note::
After removing an OSD, if you wipe the LVM physical volume in the device used by the removed OSD, a new OSD will be created.
Stopping OSD Removal
--------------------
-You can stop the operation with
+You can stop the queued OSD removal operation with
::
# ceph orch osd rm stop 4
Stopped OSD(s) removal
-This will reset the initial state of the OSD and remove it from the queue.
+This will restore the OSD to its initial state and take it off the removal queue.
Replace an OSD
This follows the same procedure as the "Remove OSD" part with the exception that the OSD is not permanently removed
-from the crush hierarchy, but is assigned a 'destroyed' flag.
+from the CRUSH hierarchy, but is assigned a 'destroyed' flag.
**Preserving the OSD ID**
-The previously set the 'destroyed' flag is used to determined osd ids that will be reused in the next osd deployment.
+The previously-set 'destroyed' flag is used to determine which OSD IDs will be reused in the next OSD deployment.
-If you use OSDSpecs for osd deployment, your newly added disks will be assigned with the osd ids of their replaced
-counterpart, granted the new disk still match the OSDSpecs.
+If you use OSDSpecs for OSD deployment, your newly added disks will be assigned the OSD IDs of their replaced
+counterparts, assuming the new disks still match the OSDSpecs.
-For assistance in this process you can use the '--dry-run' feature:
+For assistance in this process you can use the ``--dry-run`` feature, as in the example below.
Tip: The name of your OSDSpec can be retrieved from **ceph orch ls**
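+
+For example, reusing the OSDSpec file syntax shown above (the file name is a placeholder)::
+
+    ceph orch apply osd -i <osd_spec_file> --dry-run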
..
- Blink Device Lights
- ^^^^^^^^^^^^^^^^^^^
+ Turn On Device Lights
+ ^^^^^^^^^^^^^^^^^^^^^
::
ceph orch device ident-on <dev_id>
ceph orch osd fault-on {primary,journal,db,wal,all} <osd-id>
ceph orch osd fault-off {primary,journal,db,wal,all} <osd-id>
- Where ``journal`` is the filestore journal, ``wal`` is the write ahead log of
- bluestore and ``all`` stands for all devices associated with the osd
+ where ``journal`` is the filestore journal device, ``wal`` is the bluestore
+ write-ahead log device, and ``all`` stands for all devices associated with the OSD.
Monitor and manager management
.. _orchestrator-cli-cephfs:
-Depoying CephFS
-===============
+Deploying CephFS
+================
In order to set up a :term:`CephFS`, execute::
ceph fs volume create <fs_name> <placement spec>
-Where ``name`` is the name of the CephFS, ``placement`` is a
+where ``fs_name`` is the name of the CephFS and ``placement`` is a
:ref:`orchestrator-cli-placement-spec`.
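+
+For example (the file system name and hostnames below are placeholders)::
+
+    ceph fs volume create myfs "3 host1 host2 host3"
+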
This command will create the required Ceph pools, create the new
Stateless services (MDS/RGW/NFS/rbd-mirror/iSCSI)
=================================================
-The orchestrator is not responsible for configuring the services. Please look into the corresponding
-documentation for details.
+(Please note: The orchestrator will not configure the services themselves. See the corresponding
+documentation for service configuration details.)
The ``name`` parameter is an identifier of the group of instances:
ceph orch apply nfs <name> <pool> [--namespace=<namespace>] [--placement=<placement>] [--dry-run]
ceph orch rm <service_name> [--force]
-Where ``placement`` is a :ref:`orchestrator-cli-placement-spec`.
+where ``placement`` is a :ref:`orchestrator-cli-placement-spec`.
e.g., ``ceph orch apply mds myfs --placement="3 host1 host2 host3"``
Service Specification
=====================
-As *Service Specification* is a data structure often represented as YAML
-to specify the deployment of services. For example:
+A *Service Specification* is a data structure represented as YAML
+to specify the deployment of services. For example:
.. code-block:: yaml
spec: ...
unmanaged: false
-Where the properties of a service specification are the following:
+where the properties of a service specification are:
* ``service_type`` is the type of the service. Needs to be either a Ceph
service (``mon``, ``crash``, ``mds``, ``mgr``, ``osd`` or
``rbd-mirror``), a gateway (``nfs`` or ``rgw``), or part of the
monitoring stack (``alertmanager``, ``grafana``, ``node-exporter`` or
- ``prometheus``).
-* ``service_id`` is the name of the service. Omit the service time
+ ``prometheus``)
+* ``service_id`` is the name of the service
* ``placement`` is a :ref:`orchestrator-cli-placement-spec`
-* ``spec``: additional specifications for a specific service.
+* ``spec``: additional specifications for a specific service
* ``unmanaged``: If set to ``true``, the orchestrator will not deploy nor
remove any daemon associated with this service. Placement and all other
-properties will be ignored. This is useful, if this service should not
-be managed temporarily.
+properties will be ignored. This is useful if this service should not
+be managed temporarily.
-Each service type can have different requirements for the spec.
+Each service type can have different requirements for the ``spec`` element.
Service specifications of type ``mon``, ``mgr``, and the monitoring
-types do not require a ``service_id``
+types do not require a ``service_id``.
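+
+For example, a spec like the following is sufficient for the monitors (the
+count below is illustrative):
+
+.. code-block:: yaml
+
+    service_type: mon
+    placement:
+      count: 3
+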
-A service of type ``nfs`` requires a pool name and contain
+A service of type ``nfs`` requires a pool name and may contain
an optional namespace:
.. code-block:: yaml
pool: mypool
namespace: mynamespace
-Where ``pool`` is a RADOS pool where NFS client recovery data is stored
+where ``pool`` is a RADOS pool where NFS client recovery data is stored
and ``namespace`` is a RADOS namespace where NFS client recovery
data is stored in the pool.
-A service of type ``osd`` is in detail described in :ref:`drivegroups`
+A service of type ``osd`` is described in :ref:`drivegroups`
-Many service specifications can then be applied at once using
+Many service specifications can be applied at once using
``ceph orch apply -i`` by submitting a multi-document YAML file::
cat <<EOF | ceph orch apply -i -
Placement Specification
=======================
-In order to allow the orchestrator to deploy a *service*, it needs to
-know how many and where it should deploy *daemons*. The orchestrator
-defines a placement specification that can either be passed as a command line argument.
+For the orchestrator to deploy a *service*, it needs to know where to deploy
+*daemons*, and how many to deploy. This is the role of a placement
+specification. Placement specifications can be passed either as command line arguments
+or in a YAML file.
Explicit placements
-------------------
-Daemons can be explictly placed on hosts by simply specifying them::
+Daemons can be explicitly placed on hosts by simply specifying them::
orch apply prometheus "host1 host2 host3"
-Or in yaml:
+Or in YAML:
.. code-block:: yaml
orch daemon add mon myhost:[v2:1.2.3.4:3000,v1:1.2.3.4:6789]=name
-Where ``[v2:1.2.3.4:3000,v1:1.2.3.4:6789]`` is the network address of the monitor
+where ``[v2:1.2.3.4:3000,v1:1.2.3.4:6789]`` is the network address of the monitor
and ``=name`` specifies the name of the new monitor.
Placement by labels
-------------------
-Daemons can be explictly placed on hosts that match a specifc label::
+Daemons can be explicitly placed on hosts that match a specific label::
orch apply prometheus label:mylabel
-Or in yaml:
+Or in YAML:
.. code-block:: yaml
orch apply prometheus 'myhost[1-3]'
-Or in yaml:
+Or in YAML:
.. code-block:: yaml
orch apply crash '*'
-Or in yaml:
+Or in YAML:
.. code-block:: yaml
orch apply prometheus "2 host1 host2 host3"
-If the count is bigger than the amount of hosts, cephadm still deploys two daemons::
+If the count is bigger than the number of hosts, cephadm deploys only one per host::
orch apply prometheus "3 host1 host2"
-Or in yaml:
+results in two Prometheus daemons.
+
+Or in YAML:
.. code-block:: yaml