The cephadm orchestrator is an orchestrator module that does not rely on a separate
system such as Rook or Ansible, but rather manages hosts in a cluster by
establishing an SSH connection and issuing explicit management commands.
Orchestrator modules only provide services to other modules, which in turn
List Devices
^^^^^^^^^^^^
Print a list of discovered devices, grouped by host and optionally
filtered to a particular host:
::

    ceph orch device ls
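For example, to restrict the listing to a single host (``host1`` is a
placeholder name; substitute one of your cluster's hosts)::

    ceph orch device ls host1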
Creates or removes MONs or MGRs from the cluster. The orchestrator may
return an error if it doesn't know how to do this transition.
Update the number of monitor hosts::

    ceph orch apply mon <num> [host, host:network...]
Each host can optionally specify a network for the monitor to listen on.
Update the number of manager hosts::

    ceph orch apply mgr <num> [host...]
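As a concrete sketch, assuming three placeholder hosts ``host1``, ``host2``
and ``host3``, and a monitor network of ``10.0.0.0/24`` (substitute your own
host names and network), the commands above could be invoked as::

    ceph orch apply mon 3 host1:10.0.0.0/24 host2:10.0.0.0/24 host3:10.0.0.0/24
    ceph orch apply mgr 2 host1 host2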
Label
  arbitrary string tags that may be applied by administrators
  to hosts. Typically administrators use labels to indicate
  which hosts should run which kinds of service. Labels are
  advisory (from human input) and do not guarantee that hosts
  have particular physical capabilities.
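As an illustration, a label can be attached to a host with the host
management commands (``host1`` and the ``mon`` label are placeholder
values)::

    ceph orch host label add host1 mon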
Drive group
  journals/dbs for a group of HDDs).
Placement
  choice of which host is used to run a service.
Key Concepts
------------
The underlying orchestrator remains the source of truth for information
about whether a service is running, what is running where, which
hosts are available, etc. Orchestrator modules should avoid taking
any internal copies of this information, and read it directly from
the orchestrator backend as much as possible.
Bootstrapping hosts and adding them to the underlying orchestration
system is outside the scope of Ceph's orchestrator interface. Ceph
can only work on hosts when the orchestrator is already aware of them.
Calls to orchestrator modules are all asynchronous, and return *completion*
objects (see below) rather than returning values immediately.
the Ceph cluster's services only.
- Multipathed storage is not handled (multipathing is unnecessary for
  Ceph clusters). Each drive is assumed to be visible only on
  a single host.
Host management
---------------