Automatic sorting of disks
--------------------------
If ``batch`` receives only a single list of data devices and no other options
are passed, ``ceph-volume`` will auto-sort disks by their rotational
property and use non-rotating disks for ``block.db`` or ``journal`` depending
on the objectstore used. If all devices are to be used for standalone OSDs,
no matter if rotating or solid state, pass ``--no-auto``.

For example, assuming :term:`bluestore` is used and ``--no-auto`` is not
passed, the deprecated behavior would deploy the following, depending on the
devices passed:

#. Devices are all spinning HDDs: 1 OSD is created per device
#. Devices are all SSDs: 2 OSDs are created per device

.. note:: Although operations in ``ceph-volume lvm create`` allow usage of
          ``block.wal`` it isn't supported with the ``auto`` behavior.

This default auto-sorting behavior is now DEPRECATED and will be changed in
future releases. Instead, devices are not automatically sorted unless the
``--auto`` option is passed.

It is recommended to make use of the explicit device lists for ``block.db``,
``block.wal`` and ``journal``.

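The effect of the deprecated auto-sorting can be sketched in Python. This is a
hypothetical simplification, not ceph-volume's actual implementation; the
``auto_sort`` helper and the ``(path, is_rotational)`` tuples are illustrative
stand-ins:

```python
# Hypothetical sketch of the deprecated auto-sorting behavior: split a
# single device list by rotational property, using solid state devices
# for block.db (bluestore). Not ceph-volume's real code.
def auto_sort(devices):
    """devices: list of (path, is_rotational) tuples."""
    data_devs = [path for path, rotational in devices if rotational]
    db_devs = [path for path, rotational in devices if not rotational]
    if not data_devs:
        # All devices are solid state: all become standalone data devices
        # (2 OSDs each), and no dedicated block.db devices are used.
        return db_devs, []
    return data_devs, db_devs

data, db = auto_sort([("/dev/sdb", True),
                      ("/dev/sdc", True),
                      ("/dev/nvme0n1", False)])
print(data)  # ['/dev/sdb', '/dev/sdc']
print(db)    # ['/dev/nvme0n1']
```

Passing ``--no-auto`` would skip this split entirely and treat every device as
a standalone data device.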
.. _ceph-volume-lvm-batch_bluestore:
Reporting
=========

By default ``batch`` will print a report of the computed OSD layout and ask
the user to confirm. The default ``pretty`` report format would look like
this::

    $ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1
    --> passed data devices: 3 physical, 0 LVM
    --> relative data size: 1.0
    --> passed block_db devices: 1 physical, 0 LVM

    Total OSDs: 3

      Type            Path                      LV Size         % of device
    ----------------------------------------------------------------------------------------------------
      data            /dev/sdb                  300.00 GB       100.00%
      block_db        /dev/nvme0n1              66.67 GB        33.33%
    ----------------------------------------------------------------------------------------------------
      data            /dev/sdc                  300.00 GB       100.00%
      block_db        /dev/nvme0n1              66.67 GB        33.33%
    ----------------------------------------------------------------------------------------------------
      data            /dev/sdd                  300.00 GB       100.00%
      block_db        /dev/nvme0n1              66.67 GB        33.33%

**JSON reporting**

Reporting can produce a structured output with ``--format json`` or
``--format json-pretty``::

    $ ceph-volume lvm batch --report --format json-pretty /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1
    --> passed data devices: 3 physical, 0 LVM
    --> relative data size: 1.0
    --> passed block_db devices: 1 physical, 0 LVM
    [
        {
            "block_db": "/dev/nvme0n1",
            "block_db_size": "66.67 GB",
            "data": "/dev/sdb",
            "data_size": "300.00 GB",
            "encryption": "None"
        },
        {
            "block_db": "/dev/nvme0n1",
            "block_db_size": "66.67 GB",
            "data": "/dev/sdc",
            "data_size": "300.00 GB",
            "encryption": "None"
        },
        {
            "block_db": "/dev/nvme0n1",
            "block_db_size": "66.67 GB",
            "data": "/dev/sdd",
            "data_size": "300.00 GB",
            "encryption": "None"
        }
    ]

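Because the JSON report is machine-readable, it can be consumed
programmatically, for example to sanity-check a layout before deploying. A
minimal sketch; the ``report`` string below is abbreviated sample data, not
live ``ceph-volume`` output:

```python
import json

# Abbreviated sample of a `--format json` report (not live output).
report = '''
[
  {"block_db": "/dev/nvme0n1", "block_db_size": "66.67 GB",
   "data": "/dev/sdb", "data_size": "300.00 GB", "encryption": "None"},
  {"block_db": "/dev/nvme0n1", "block_db_size": "66.67 GB",
   "data": "/dev/sdc", "data_size": "300.00 GB", "encryption": "None"}
]
'''

osds = json.loads(report)
# In this layout every OSD shares the same fast device for block.db.
assert all(osd["block_db"] == "/dev/nvme0n1" for osd in osds)
print(f"{len(osds)} OSDs on data devices: {[o['data'] for o in osds]}")
```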
Sizing
======
When no sizing arguments are passed, `ceph-volume` will derive the sizing from
the passed device lists (or the sorted lists when using the automatic sorting).
`ceph-volume batch` will attempt to fully utilize a device's available capacity.
Relying on automatic sizing is recommended.
If one requires a different sizing policy for wal, db or journal devices,
`ceph-volume` offers implicit and explicit sizing options.

Implicit sizing
---------------

Users can provide hints to `ceph-volume` as to how many data devices should have
their external volumes on a set of fast devices. These options are:

* ``--block-db-slots``
* ``--block-wal-slots``
* ``--journal-slots``

For example, consider an OSD host that is supposed to contain 5 data devices and
one device for wal/db volumes. However, one data device is currently broken and
is being replaced. Instead of calculating the explicit sizes for the wal/db
volume, one can simply call::

    $ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd /dev/sde --db-devices /dev/nvme0n1 --block-db-slots 5

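The slot arguments boil down to dividing the fast device's capacity by the
requested slot count. A hedged sketch of the arithmetic; ``slot_size`` is a
hypothetical helper, and ceph-volume's real sizing additionally accounts for
LVM/VG overhead:

```python
def slot_size(device_size_gb, slots):
    """Capacity reserved per external volume when --block-db-slots is
    used. Simplified: ignores the LVM overhead ceph-volume accounts for."""
    return device_size_gb / slots

# A 500 GB NVMe device sized for 5 db slots keeps one slot free for the
# replacement of the broken data device, even though only 4 data devices
# are currently present.
per_db = slot_size(500, 5)
print(per_db)  # 100.0 (GB per block.db volume)
```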
Explicit sizing
---------------

It is also possible to provide explicit sizes to `ceph-volume` via the arguments

* ``--block-db-size``
* ``--block-wal-size``
* ``--journal-size``

`ceph-volume` will try to satisfy the requested sizes given the passed disks. If
this is not possible, no OSDs will be deployed.
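The all-or-nothing behavior can be sketched as a simple feasibility check.
``check_explicit_sizes`` is a hypothetical helper for illustration, not
ceph-volume's actual implementation, and it ignores LVM overhead:

```python
def check_explicit_sizes(fast_device_gb, requested_db_gb, num_osds):
    """Return True if num_osds block.db volumes of the requested size fit
    on the fast device; otherwise ceph-volume would deploy no OSDs at all.
    Simplified sketch, ignoring LVM overhead."""
    return requested_db_gb * num_osds <= fast_device_gb

# 3 OSDs with a --block-db-size of 66 GB fit on a 200 GB device...
assert check_explicit_sizes(200, 66, 3)
# ...but requesting 70 GB each does not, so no OSDs would be deployed.
assert not check_explicit_sizes(200, 70, 3)
```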

def get_default_args():
    """Return a mapping of argument names (leading dashes stripped, inner
    ``-`` and ``.`` replaced by ``_``) to their declared defaults, or None
    for arguments that declare no default."""
    defaults = {}
    def format_name(name):
        return name.strip('-').replace('-', '_').replace('.', '_')
    for argset in (common_args, filestore_args, bluestore_args):
        defaults.update({format_name(name): val.get('default', None)
                         for name, val in argset.items()})
    return defaults
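To see what the function produces, here is a self-contained usage sketch. The
argument dictionaries below are hypothetical stand-ins for the module's real
``common_args``, ``filestore_args`` and ``bluestore_args``:

```python
# Hypothetical stand-ins for ceph-volume's real argument dictionaries.
common_args = {
    '--yes': {'default': False},
    '--format': {'default': 'pretty'},
    '--osds-per-device': {},          # no default -> maps to None
}
filestore_args = {'--journal-size': {'default': 0}}
bluestore_args = {'--block-db-size': {}}

def get_default_args():
    defaults = {}
    def format_name(name):
        return name.strip('-').replace('-', '_').replace('.', '_')
    for argset in (common_args, filestore_args, bluestore_args):
        defaults.update({format_name(name): val.get('default', None)
                         for name, val in argset.items()})
    return defaults

print(get_default_args())
# {'yes': False, 'format': 'pretty', 'osds_per_device': None,
#  'journal_size': 0, 'block_db_size': None}
```

Folding the "has a default" and "has no default" cases into one
``val.get('default', None)`` lookup per argument set is what lets the
refactored version replace six near-identical dict comprehensions.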