ceph orch device zap my_hostname /dev/sdx
+.. note::
+   The cephadm orchestrator will automatically deploy drives that match the DriveGroup in your OSDSpec if the ``unmanaged`` flag is unset.
+   For example, if you use the ``all-available-devices`` option when creating OSDs, then when you ``zap`` a device, the cephadm orchestrator will automatically create a new OSD on that device.
+ To disable this behavior, see :ref:`orchestrator-cli-create-osds`.
+
+.. _orchestrator-cli-create-osds:
Create OSDs
-----------
ceph orch daemon add osd <host>:device1,device2
-or::
-
- ceph orch apply osd -i <json_file/yaml_file>
+Example::
+ # ceph orch daemon add osd node1:/dev/vdd
+ Created 1 OSD on host 'node1' using device '/dev/vdd'
or::
- ceph orch apply osd --all-available-devices
-
+ ceph orch apply osd -i <json_file/yaml_file>
+Where ``json_file/yaml_file`` is a DriveGroup specification.
For a more in-depth guide to DriveGroups, please refer to :ref:`drivegroups`.
-Example::
-
- # ceph orch daemon add osd node1:/dev/vdd
- Created osd(s) 6 on host 'node1'
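+
+For illustration, a minimal DriveGroup specification could look like the
+following (the ``service_id`` and the ``rotational`` filters are only an
+example and should be adapted to your hardware):
+
+.. code-block:: yaml
+
+    service_type: osd
+    service_id: example_drive_group
+    placement:
+      host_pattern: '*'
+    data_devices:
+      rotational: 1
+    db_devices:
+      rotational: 0
+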
+or::
+ ceph orch apply osd --all-available-devices
If the ``apply`` method is used, you will be presented with a preview of what will happen.
all-available-devices node2 /dev/vdc - -
all-available-devices node3 /dev/vdd - -
-
.. note::
- Output form Cephadm orchestrator
+ Example output from cephadm orchestrator
+
+When the ``all-available-devices`` option or a DriveGroup specification is used, a cephadm service is created.
+This service ensures that all available devices, or the devices included in the DriveGroup, will be used for OSDs.
+Be aware of the implications of this behavior, which is automatic and enabled by default.
+
+For example:
+
+After using::
+
+ ceph orch apply osd --all-available-devices
+
+* If you add new disks to the cluster, they will automatically be used to create new OSDs.
+* A new OSD will be created automatically if you remove an OSD and clean the LVM physical volume.
+
+If you want to avoid this behavior (that is, disable the automatic creation of OSDs on available devices), use the ``unmanaged`` parameter::
+
+ ceph orch apply osd --all-available-devices --unmanaged=true
+
+If you have already created OSDs using the ``all-available-devices`` service, you can disable automatic OSD creation afterwards with the following command::
+
+ ceph orch osd spec --service-name osd.all-available-devices --unmanaged
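+
+As an example, the OSD services that cephadm has created (such as ``osd.all-available-devices``) can be listed with the service status command described later in this document::
+
+    ceph orch ls --service_type osd
+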
Remove an OSD
-------------------
When no PGs are left on the OSD, it will be decommissioned and removed from the cluster.
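+
+While an OSD removal is in progress, its state can typically be queried with the removal status command (output omitted here)::
+
+    ceph orch osd rm status
+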
+.. note::
+   After removing an OSD, if you wipe the LVM physical volume of the device that was used by the removed OSD, a new OSD will be created on that device automatically.
+   See the information about the ``unmanaged`` parameter in :ref:`orchestrator-cli-create-osds`.
Replace an OSD
-------------------
Discover the status of a particular service or daemons::
ceph orch ls --service_type type --service_name <name> [--refresh]
-
+
Export the service specs known to the orchestrator as YAML in a format
that is compatible with ``ceph orch apply -i``::
Print a list of all daemons known to the orchestrator::
ceph orch ps [--hostname host] [--daemon_type type] [--service_name name] [--daemon_id id] [--format f] [--refresh]
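+
+For example, to list only the monitor daemons, use the ``--daemon_type`` filter shown above (output omitted here)::
+
+    ceph orch ps --daemon_type mon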
-
+
Query the status of a particular service instance (mon, osd, mds, rgw). For OSDs
the id is the numeric OSD ID; for MDS services it is the file system name::
.. _orchestrator-cli-cephfs:
-
+
Deploying CephFS
================
In order to set up a :term:`CephFS`, execute::
ceph fs volume create <fs_name> <placement spec>
-
-Where ``name`` is the name of the CephFS, ``placement`` is a
+
+Where ``fs_name`` is the name of the CephFS and ``placement`` is a
:ref:`orchestrator-cli-placement-spec`.
-
-This command will create the required Ceph pools, create the new
+
+This command will create the required Ceph pools, create the new
CephFS, and deploy mds servers.
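+
+For example, a CephFS whose MDS daemons are placed on three hosts could be created like this (the file system name and the host names are illustrative)::
+
+    ceph fs volume create mycephfs "3 host1 host2 host3"
+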
Stateless services (MDS/RGW/NFS/rbd-mirror/iSCSI)
Service Commands::
ceph orch <start|stop|restart|redeploy|reconfig> <service_name>
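+
+For example, to restart every daemon belonging to one service, pass that service's name (the RGW service name below is illustrative)::
+
+    ceph orch restart rgw.realm.zone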
-
+
.. _orchestrator-cli-service-spec:
-
+
Service Specification
=====================
-As *Service Specification* is a data structure often represented as YAML
+A *Service Specification* is a data structure often represented as YAML
to specify the deployment of services. For example:
.. code-block:: yaml
service_type: rgw
service_id: realm.zone
- placement:
- hosts:
+ placement:
+ hosts:
- host1
- host2
- host3
spec: ...
unmanaged: false
-
+
Where the properties of a service specification are the following:
* ``service_type`` is the type of the service. Needs to be either a Ceph
- service (``mon``, ``crash``, ``mds``, ``mgr``, ``osd`` or
+ service (``mon``, ``crash``, ``mds``, ``mgr``, ``osd`` or
``rbd-mirror``), a gateway (``nfs`` or ``rgw``), or part of the
monitoring stack (``alertmanager``, ``grafana``, ``node-exporter`` or
``prometheus``).
service_type: nfs
service_id: mynfs
- placement:
- hosts:
+ placement:
+ hosts:
- host1
- host2
spec:
EOF
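+
+Equivalently, a specification can be saved to a file and applied with ``ceph orch apply`` (the file name is illustrative)::
+
+    ceph orch apply -i mynfs.yaml
+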
.. _orchestrator-cli-placement-spec:
-
+
Placement Specification
=======================
In order to allow the orchestrator to deploy a *service*, it needs to
-know how many and where it should deploy *daemons*. The orchestrator
+know how many *daemons* to deploy and where to deploy them. The orchestrator
defines a placement specification that can either be passed as a command line argument or specified in a YAML file.
Explicit placements
Daemons can be explicitly placed on hosts by simply specifying them::
orch apply prometheus "host1 host2 host3"
-
+
Or in yaml:
.. code-block:: yaml
-
+
service_type: prometheus
placement:
- hosts:
+ hosts:
- host1
- host2
- host3
-
+
MONs and other services may require some enhanced network specifications::
orch daemon add mon myhost:[v2:1.2.3.4:3000,v1:1.2.3.4:6789]=name
-
+
Where ``[v2:1.2.3.4:3000,v1:1.2.3.4:6789]`` is the network address of the monitor
and ``=name`` specifies the name of the new monitor.
placement:
host_pattern: "*"
-
+
Setting a limit
---------------
By specifying ``count``, only that number of daemons will be created::
orch apply prometheus 3
-
+
To deploy *daemons* on a subset of hosts, also specify the count::
orch apply prometheus "2 host1 host2 host3"
-
+
If the count is bigger than the number of hosts, cephadm deploys only as many daemons as there are hosts; in this example, two::
orch apply prometheus "3 host1 host2"
service_type: prometheus
placement:
count: 3
-
+
Or with hosts:
.. code-block:: yaml
service_type: prometheus
placement:
count: 2
- hosts:
+ hosts:
- host1
- host2
- - host3
+ - host3
Updating Service Specifications
===============================