modules (ceph-mgr modules which interface with external orchestration services)

As the orchestrator CLI unifies different external orchestrators, a common nomenclature
for the orchestrator module is needed.

+--------------------------------------+---------------------------------------+
| host                                 | hostname (not DNS name) of the        |
|                                      | physical host. Not the podname,       |
|                                      | container name, or hostname inside    |
|                                      | the container.                        |
+--------------------------------------+---------------------------------------+
| service type                         | The type of the service, e.g., nfs,   |
|                                      | mds, osd, mon, rgw, mgr, iscsi        |
+--------------------------------------+---------------------------------------+
| service                              | A logical service. Typically          |
|                                      | comprised of multiple service         |
|                                      | instances on multiple hosts for HA.   |
|                                      | The service name is:                  |
|                                      |                                       |
|                                      | * ``fs_name`` for mds type            |
|                                      | * ``rgw_zone`` for rgw type           |
|                                      | * ``ganesha_cluster_id`` for nfs type |
+--------------------------------------+---------------------------------------+
| service instance                     | A single instance of a service,       |
|                                      | usually a daemon, but possibly not    |
|                                      | (e.g., it might be a kernel service   |
|                                      | such as LIO or knfsd).                |
|                                      |                                       |
|                                      | This identifier should uniquely       |
|                                      | identify the instance.                |
+--------------------------------------+---------------------------------------+
| daemon                               | A running process on a host; use      |
|                                      | “service instance” instead.           |
+--------------------------------------+---------------------------------------+

The relation between the names is as follows:

* a service belongs to a service type
* a service instance belongs to a service type
* a service instance belongs to a single service

Configuration
=============

To enable the orchestrator, select the orchestrator module to use
with the ``set backend`` command::

    ceph mgr module enable rook
    ceph orchestrator set backend rook

You can then check that the backend is properly configured::

    ceph orchestrator status

Usage
=====

::

    ceph orchestrator device ls [--host=...] [--refresh]
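
For example, a sketch listing only the devices of a single host and forcing
a refresh of the cached inventory (the hostname ``node1`` is a placeholder)::

    ceph orchestrator device ls --host=node1 --refresh
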
Create OSDs
^^^^^^^^^^^

Create OSDs on a group of devices on a single host::

    ceph orchestrator osd create <host>:<drive>
    ceph orchestrator osd create -i <path-to-drive-group.json>

where ``drive-group.json`` is a JSON file containing the fields defined in
:class:`orchestrator.DriveGroupSpec`.

The output of ``osd create`` is not specified and may vary between orchestrator
backends.
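
For example, a sketch of such a file, selecting a single data device (the
host name and device path are placeholders)::

    {
        "host_pattern": "node1",
        "data_devices": {
            "paths": ["/dev/sdb"]
        }
    }
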
Decommission an OSD
^^^^^^^^^^^^^^^^^^^

::

    ceph orchestrator osd rm <osd-id> [osd-id...]

Removes one or more OSDs from the cluster and the host, if the OSDs are marked
as ``destroyed``.
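
For example, assuming OSDs 4 and 7 have already been marked as ``destroyed``::

    ceph orchestrator osd rm 4 7
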
..
Blink Device Lights
^^^^^^^^^^^^^^^^^^^

Specifying hosts is optional for some orchestrator modules
and mandatory for others (e.g. Ansible).
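
A sketch of the expected form, based on the command names in the
implementation status table below (the exact arguments are an assumption
and may vary by backend)::

    ceph orchestrator device ident-on <host> <devname>
    ceph orchestrator device ident-off <host> <devname>
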

Service Status
~~~~~~~~~~~~~~

Print a list of services known to the orchestrator. The list can be limited to
services on a particular host with the optional ``--host`` parameter, and/or to
services of a particular type with the optional ``--svc_type`` parameter::

    ceph orchestrator service ls [--host host] [--svc_type type] [--refresh|--no-cache]
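
For example, a sketch listing only the MDS services of a single host (the
hostname ``node1`` is a placeholder)::

    ceph orchestrator service ls --host node1 --svc_type mds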

Discover the status of a particular service::

    ceph orchestrator service status <type> <name>

Stateless services (MDS/RGW/NFS/rbd-mirror/iSCSI)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The orchestrator is not responsible for configuring the services themselves.
Please refer to the corresponding service documentation for details.

Sizing: the ``size`` parameter gives the number of daemons in the cluster
(e.g. the number of MDS daemons for a particular CephFS filesystem).

Creating/growing/shrinking/removing services::

    ceph orchestrator {mds,rgw} update <name> <size> [host…]
    ceph orchestrator {mds,rgw} add <name>
    ceph orchestrator nfs update <name> <size> [host…]
    ceph orchestrator nfs add <name> <pool> [--namespace=<namespace>]
    ceph orchestrator {mds,rgw,nfs} rm <name>

e.g., ``ceph orchestrator mds update myfs 3 host1 host2 host3``

Start/stop/reload a service instance::

    ceph orchestrator service-instance {start,stop,reload} <type> <instance-name>
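
For example, a sketch (the instance name is purely illustrative; the exact
naming of service instances is backend-specific)::

    ceph orchestrator service-instance reload mds mds.myfs.node1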

Current Implementation Status
=============================

This is an overview of the current implementation status of the orchestrators.

=================================== ========= ====== ========= =====
 Command                            Ansible   Rook   DeepSea   SSH
=================================== ========= ====== ========= =====
 host add                           ⚪         ⚪      ⚪         ✔️
 host ls                            ⚪         ⚪      ⚪         ✔️
 host rm                            ⚪         ⚪      ⚪         ✔️
 mgr update                         ⚪         ⚪      ⚪         ⚪
 mon update                         ⚪         ⚪      ⚪         ⚪
 osd create                         ✔️        ✔️     ⚪         ⚪
 osd device {ident,fault}-{on,off}  ⚪         ⚪      ⚪         ⚪
 osd rm                             ✔️        ⚪      ⚪         ⚪
 device {ident,fault}-{on,off}      ⚪         ⚪      ⚪         ⚪
 device ls                          ✔️        ✔️     ✔️        ⚪
 service ls                         ⚪         ✔️     ✔️        ⚪
 service status                     ⚪         ✔️     ✔️        ⚪
 service-instance status            ⚪         ⚪      ⚪         ⚪
 iscsi {stop,start,reload}          ⚪         ⚪      ⚪         ⚪
 iscsi add                          ⚪         ⚪      ⚪         ⚪
 iscsi rm                           ⚪         ⚪      ⚪         ⚪
 iscsi update                       ⚪         ⚪      ⚪         ⚪
 mds {stop,start,reload}            ⚪         ⚪      ⚪         ⚪
 mds add                            ⚪         ✔️     ⚪         ⚪
 mds rm                             ⚪         ✔️     ⚪         ⚪
 mds update                         ⚪         ⚪      ⚪         ⚪
 nfs {stop,start,reload}            ⚪         ⚪      ⚪         ⚪
 nfs add                            ⚪         ✔️     ⚪         ⚪
 nfs rm                             ⚪         ✔️     ⚪         ⚪
 nfs update                         ⚪         ⚪      ⚪         ⚪
 rbd-mirror {stop,start,reload}     ⚪         ⚪      ⚪         ⚪
 rbd-mirror add                     ⚪         ⚪      ⚪         ⚪
 rbd-mirror rm                      ⚪         ⚪      ⚪         ⚪
 rbd-mirror update                  ⚪         ⚪      ⚪         ⚪
 rgw {stop,start,reload}            ⚪         ⚪      ⚪         ⚪
 rgw add                            ⚪         ✔️     ⚪         ⚪
 rgw rm                             ⚪         ✔️     ⚪         ⚪
 rgw update                         ⚪         ⚪      ⚪         ⚪
=================================== ========= ====== ========= =====

where

* ⚪ = not yet implemented
* ❌ = not applicable
* ✔️ = implemented

class DeviceSelection(object):
"""
Used within :class:`orchestrator.DriveGroupSpec` to specify the devices
used by the Drive Group.

Any attributes (even none) can be included in the device
specification structure.
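
For example, a sketch selecting (up to) two non-rotating drives, assuming
the ``rotates`` and ``count`` attributes described below::

    DeviceSelection(rotates=False, count=2)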
"""

def __init__(self, paths=None, id_model=None, size=None, rotates=None, count=None):
# type: (List[str], str, str, bool, int) -> None
"""
ephemeral drive group device specification
TODO: translate from the user interface (Drive Groups) to an actual list of devices.
"""
if paths is None:
paths = []

#: List of absolute paths to the devices.
self.paths = paths # type: List[str]
if self.paths and any(p is not None for p in [id_model, size, rotates, count]):
raise TypeError('`paths` and other parameters are mutually exclusive')
#: A wildcard string, e.g. "SSD*"
self.id_model = id_model

#: Size specification of format LOW:HIGH.
#: Can also take the form :HIGH, LOW:
#: or an exact value (as ceph-volume inventory reports)
self.size = size

#: is the drive rotating or not
self.rotates = rotates

#: if present, limit the number of drives to this number.
self.count = count
@classmethod
# concept of applying a drive group to a (set) of hosts is tightly
# linked to the drive group itself
#
#: An fnmatch pattern to select hosts. Can also be a single host.
self.host_pattern = host_pattern
#: A :class:`orchestrator.DeviceSelection`
self.data_devices = data_devices

#: A :class:`orchestrator.DeviceSelection`
self.db_devices = db_devices

#: A :class:`orchestrator.DeviceSelection`
self.wal_devices = wal_devices

#: A :class:`orchestrator.DeviceSelection`
self.journal_devices = journal_devices
#: Number of osd daemons per "DATA" device.
#: To fully utilize nvme devices multiple osds are required.
self.osds_per_device = osds_per_device
assert objectstore in ('filestore', 'bluestore')
#: ``filestore`` or ``bluestore``
self.objectstore = objectstore
#: ``true`` or ``false``
self.encrypted = encrypted
#: How many OSDs per DB device
self.db_slots = db_slots

#: How many OSDs per WAL device
self.wal_slots = wal_slots
# FIXME: needs ceph-volume support
#: Optional: mapping of drive to OSD ID, used when the
#: created OSDs are meant to replace previous OSDs on
#: the same node.
self.osd_id_claims = {}
@classmethod
def from_json(self, json_drive_group):
"""
Initialize and verify 'Drive group' structure

:param json_drive_group: A dict (parsed from a JSON string) containing a
                         valid Drive Group specification
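
A sketch of a valid input, assuming the field names defined in
:class:`DriveGroupSpec` and :class:`DeviceSelection`::

    DriveGroupSpec.from_json({
        'host_pattern': 'node1',
        'data_devices': {'rotates': False, 'count': 2}
    })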
"""
args = {k: (DeviceSelection.from_json(v) if k.endswith('_devices') else v) for k, v in
json_drive_group.items()}