how you want your OSDs configured. To begin your configuration, rename each file in ``group_vars/`` you wish to use so that it no longer includes the ``.sample``
suffix, then uncomment the options you wish to change and provide your own values.
-An example configuration that deploys the upstream ``jewel`` version of Ceph with OSDs that have collocated journals would look like this in ``group_vars/all.yml``:
+An example configuration that deploys the upstream ``octopus`` version of Ceph with the ``lvm`` batch method would look like this in ``group_vars/all.yml``:
.. code-block:: yaml

   ceph_origin: repository
   ceph_repository: community
-   ceph_stable_release: jewel
+   ceph_stable_release: octopus
   public_network: "192.168.3.0/24"
   cluster_network: "192.168.4.0/24"
   monitor_interface: eth1
   devices:
     - '/dev/sda'
     - '/dev/sdb'
-   osd_scenario: collocated

The following configuration options are required on all installations, but there may be additional required options depending on your OSD configuration
or other aspects of your cluster.
- ``ceph_origin``
- ``ceph_stable_release``
- ``public_network``
-- ``osd_scenario``
- ``monitor_interface`` or ``monitor_address``
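For instance, a minimal ``group_vars/all.yml`` covering just these options (plus the ``ceph_repository`` used in the example above) could look like the sketch below; the network and monitor address values are placeholders, and ``monitor_address`` is shown as the alternative to ``monitor_interface``:

.. code-block:: yaml

   ceph_origin: repository
   ceph_repository: community
   ceph_stable_release: octopus
   public_network: "192.168.3.0/24"
   monitor_address: 192.168.3.10
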
OSD Configuration
-----------------
-OSD configuration is set by selecting an OSD scenario and providing the configuration needed for
-that scenario. Each scenario is different in it's requirements. Selecting your OSD scenario is done
-by setting the ``osd_scenario`` configuration option.
+OSD configuration used to be set by selecting an OSD scenario and providing the configuration needed for
+that scenario. As of nautilus in stable-4.0, the only scenario available is ``lvm``.
.. toctree::
   :maxdepth: 1
OSD Scenario
============
-As of stable-3.2, the following scenarios are not supported anymore since they are associated to ``ceph-disk``:
+As of stable-4.0, the following scenarios are not supported anymore since they are associated to ``ceph-disk``:
* `collocated`
* `non-collocated`
-``ceph-disk`` was deprecated during the ceph-ansible 3.2 cycle and has been removed entirely from Ceph itself in the Nautilus version.
-Supported values for the required ``osd_scenario`` variable are:
+Since the Ceph luminous release, it is preferred to use the :ref:`lvm scenario
+<osd_scenario_lvm>` that uses the ``ceph-volume`` provisioning tool. Any other
+scenario will cause deprecation warnings.
-At present (starting from stable-3.2), there is only one scenario, which defaults to ``lvm``, see:
+``ceph-disk`` was deprecated during the ceph-ansible 3.2 cycle and has been removed entirely from Ceph itself in the Nautilus version.
+At present (starting from stable-4.0), there is only one scenario, which defaults to ``lvm``, see:
* :ref:`lvm <osd_scenario_lvm>`
So there is no need to configure ``osd_scenario`` anymore; it defaults to ``lvm``.
-Since the Ceph luminous release, it is preferred to use the :ref:`lvm scenario
-<osd_scenario_lvm>` that uses the ``ceph-volume`` provisioning tool. Any other
-scenario will cause deprecation warnings.
-
The ``lvm`` scenario mentioned above supports both containerized and non-containerized clusters.
As a reminder, deploying a containerized cluster can be done by setting ``containerized_deployment``
to ``True``.
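For example, switching to a containerized deployment needs nothing more than the following in ``group_vars/all.yml`` (a minimal sketch; container image and registry settings are left at their defaults):

.. code-block:: yaml

   containerized_deployment: True
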
This OSD scenario uses ``ceph-volume`` to create OSDs, primarily using LVM, and
is only available when the Ceph release is luminous or newer.
-
-**It is the preferred method of provisioning OSDs.**
-
-It is enabled with the following setting::
-
-
- osd_scenario: lvm
+It is automatically enabled.
Other (optional) supported settings:
.. code-block:: yaml

-   osd_scenario: lvm
   devices:
     - /dev/sda
     - /dev/sdb

.. code-block:: yaml

-   osd_scenario: lvm
   osd_auto_discovery: true

Other (optional) supported settings:
.. code-block:: yaml

   osd_objectstore: bluestore
-   osd_scenario: lvm
   lvm_volumes:
     - data: /dev/sda
     - data: /dev/sdb

.. code-block:: yaml

   osd_objectstore: bluestore
-   osd_scenario: lvm
   lvm_volumes:
     - data: data-lv1
       data_vg: data-vg1

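A common variation is to keep the BlueStore data on one logical volume and its RocksDB on another, faster one. The sketch below assumes the ``db`` and ``db_vg`` keys that upstream ceph-ansible documents for ``lvm_volumes``; they do not appear elsewhere in this excerpt, so treat those key names as an assumption:

.. code-block:: yaml

   osd_objectstore: bluestore
   lvm_volumes:
     - data: data-lv1
       data_vg: data-vg1
       db: db-lv1      # assumed key: logical volume for RocksDB
       db_vg: db-vg1   # assumed key: its volume group
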
.. code-block:: yaml

   osd_objectstore: filestore
-   osd_scenario: lvm
   lvm_volumes:
     - data: data-lv1
       data_vg: data-vg1

# If set to True, no matter which osd_objectstore you use, the data will be encrypted
#dmcrypt: False
-
-#dedicated_devices: []
-
# Use ceph-volume to create OSDs from logical volumes.
# lvm_volumes is a list of dictionaries.
#
# If set to True, no matter which osd_objectstore you use, the data will be encrypted
dmcrypt: False
-
-dedicated_devices: []
-
# Use ceph-volume to create OSDs from logical volumes.
# lvm_volumes is a list of dictionaries.
#
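# For illustration only (shape assumed from the docs above, not a new default):
# lvm_volumes:
#   - data: data-lv1
#     data_vg: data-vg1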
---
-- name: devices validation
-  block:
-    - name: validate devices is actually a device
-      parted:
-        device: "{{ item }}"
-        unit: MiB
-      register: devices_parted
-      with_items: "{{ devices }}"
-
-    - name: fail if one of the devices is not a device
-      fail:
-        msg: "{{ item }} is not a block special file!"
-      when:
-        - item.failed
-      with_items: "{{ devices_parted.results }}"
-  when:
-    - devices is defined
-
-- name: validate dedicated_device is/are actually device(s)
+- name: validate devices is actually a device
  parted:
    device: "{{ item }}"
    unit: MiB
-  register: dedicated_device_parted
-  with_items: "{{ dedicated_devices }}"
-  when:
-    - dedicated_devices|default([]) | length > 0
+  register: devices_parted
+  with_items: "{{ devices }}"

-- name: fail if one of the dedicated_device is not a device
+- name: fail if one of the devices is not a device
  fail:
    msg: "{{ item }} is not a block special file!"
-  with_items: "{{ dedicated_device_parted.results }}"
  when:
-    - dedicated_devices|default([]) | length > 0
    - item.failed
-
-- name: fail if number of dedicated_devices is not equal to number of devices
-  fail:
-    msg: "Number of dedicated_devices must be equal to number of devices. dedicated_devices: {{ dedicated_devices | length }}, devices: {{ devices | length }}"
-  when:
-    - dedicated_devices|default([]) | length > 0
-    - devices | length > 0
-    - dedicated_devices | length != devices | length
\ No newline at end of file
+  with_items: "{{ devices_parted.results }}"
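
For reference, these validation tasks iterate over the same ``devices`` list that is set in the OSD configuration shown earlier; a minimal illustrative value (device paths are placeholders) would be:

.. code-block:: yaml

   devices:
     - /dev/sda
     - /dev/sdb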