Automatic sorting of disks
--------------------------
-If ``batch`` receives only a single list of data devices and the ``--no-auto`` option
- is *not* passed (currently the default), ``ceph-volume`` will auto-sort disks by its rotational
- property and use non-rotating disks for ``block.db`` or ``journal`` depending
- on the objectstore used.
-This default behavior is now DEPRECATED and will be removed in future releases. Instead
- an ``auto`` option is introduced to retain this behavior.
-It is recommended to make use of the explicit device lists for ``block.db``,
- ``block.wal`` and ``journal``.
+If ``batch`` receives only a single list of data devices and the ``--no-auto``
+option is not passed, ``ceph-volume`` will auto-sort disks by their rotational
+property and use non-rotating disks for ``block.db`` or ``journal`` depending
+on the objectstore used. If all devices are to be used for standalone OSDs,
+no matter if rotating or solid state, pass ``--no-auto``.
For example assuming :term:`bluestore` is used and ``--no-auto`` is not passed,
- the deprecated behavior would deploy the following, depending on the devices
- passed:
+the deprecated behavior would deploy the following, depending on the devices
+passed:
#. Devices are all spinning HDDs: 1 OSD is created per device
#. Devices are all SSDs: 2 OSDs are created per device
#. Devices are a mix of HDDs and SSDs: data is placed on the spinning device,
the ``block.db`` is created on the SSD, as large as possible.
+.. note:: Although operations in ``ceph-volume lvm create`` allow usage of
+          ``block.wal``, it isn't supported with the ``auto`` behavior.
+
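+A minimal sketch of this auto-sorting (device names are illustrative;
+``/dev/sdb`` is rotational, ``/dev/nvme0n1`` is solid state)::
+
+    $ ceph-volume lvm batch /dev/sdb /dev/nvme0n1
+
+Here data would land on ``/dev/sdb`` and the ``block.db`` volume on
+``/dev/nvme0n1``.
+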
+This default auto-sorting behavior is now DEPRECATED and will be changed in
+future releases. Instead, devices will not be automatically sorted unless the
+``--auto`` option is passed.
+
+It is recommended to make use of the explicit device lists for ``block.db``,
+``block.wal`` and ``journal``.
+
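+For instance, a sketch of an explicit invocation (device names are
+illustrative)::
+
+    $ ceph-volume lvm batch /dev/sdb /dev/sdc --db-devices /dev/nvme0n1 --wal-devices /dev/nvme1n1
+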
+.. _ceph-volume-lvm-batch_bluestore:
+
+Reporting
+=========
+By default ``batch`` will print a report of the computed OSD layout and ask the
+user to confirm. This can be overridden by passing ``--yes``.
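+
+For example, a non-interactive deployment could look like this (sketch; device
+names are illustrative)::
+
+    $ ceph-volume lvm batch --yes /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1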
+
+If one wants to try out several invocations without being asked to deploy,
+``--report`` can be passed. ``ceph-volume`` will exit after printing the report.
+
+Consider the following invocation::
+
+ $ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1
+
+This will deploy three OSDs with external ``db`` and ``wal`` volumes on
+an NVME device.
+
+**pretty reporting**
+The ``pretty`` report format (the default) would
+look like this::
+
+ $ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1
+ --> passed data devices: 3 physical, 0 LVM
+ --> relative data size: 1.0
+ --> passed block_db devices: 1 physical, 0 LVM
+
+ Total OSDs: 3
+
+      Type            Path                                      LV Size         % of device
+    ----------------------------------------------------------------------------------------------------
+      data            /dev/sdb                                300.00 GB         100.00%
+      block_db        /dev/nvme0n1                             66.67 GB          33.33%
+    ----------------------------------------------------------------------------------------------------
+      data            /dev/sdc                                300.00 GB         100.00%
+      block_db        /dev/nvme0n1                             66.67 GB          33.33%
+    ----------------------------------------------------------------------------------------------------
+      data            /dev/sdd                                300.00 GB         100.00%
+      block_db        /dev/nvme0n1                             66.67 GB          33.33%
+
-.. note:: Although operations in ``ceph-volume lvm create`` allow usage of
-          ``block.wal`` it isn't supported with the ``batch`` sub-command
-.. _ceph-volume-lvm-batch_filestore:
-``filestore``
--------------
-The :term:`filestore` objectstore can be used when creating multiple OSDs
-with the ``batch`` sub-command. It allows two different scenarios depending
-on the input of devices:
+**JSON reporting**
+Reporting can produce a structured output with ``--format json`` or
+``--format json-pretty``::
+
+ $ ceph-volume lvm batch --report --format json-pretty /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1
+ --> passed data devices: 3 physical, 0 LVM
+ --> relative data size: 1.0
+ --> passed block_db devices: 1 physical, 0 LVM
+ [
+ {
+ "block_db": "/dev/nvme0n1",
+ "block_db_size": "66.67 GB",
+ "data": "/dev/sdb",
+ "data_size": "300.00 GB",
+ "encryption": "None"
+ },
+ {
+ "block_db": "/dev/nvme0n1",
+ "block_db_size": "66.67 GB",
+ "data": "/dev/sdc",
+ "data_size": "300.00 GB",
+ "encryption": "None"
+ },
+ {
+ "block_db": "/dev/nvme0n1",
+ "block_db_size": "66.67 GB",
+ "data": "/dev/sdd",
+ "data_size": "300.00 GB",
+ "encryption": "None"
+ }
+ ]
+
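+Because the report is machine-readable, it can be fed into other tooling. A
+sketch using ``jq`` (assuming ``jq`` is installed and that the ``-->`` progress
+lines are emitted on stderr, hence the redirect)::
+
+    $ ceph-volume lvm batch --report --format json /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1 2>/dev/null | jq -r '.[].data'
+    /dev/sdb
+    /dev/sdc
+    /dev/sdd
+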
+Sizing
+======
+When no sizing arguments are passed, `ceph-volume` will derive the sizing from
+the passed device lists (or the sorted lists when using the automatic sorting).
+`ceph-volume batch` will attempt to fully utilize a device's available capacity.
+Relying on automatic sizing is recommended.
-#. Devices are all the same type (for example all spinning HDD or all SSDs):
-   1 OSD is created per device, collocating the journal in the same HDD.
-#. Devices are a mix of HDDs and SSDs: data is placed on the spinning device,
-   while the journal is created on the SSD using the sizing options from
-   ceph.conf and falling back to the default journal size of 5GB.
-When a mix of solid and spinning devices are used, ``ceph-volume`` will try to
-detect existing volume groups on the solid devices. If a VG is found, it will
-try to create the logical volume from there, otherwise raising an error if
-space is insufficient.
+
+If not all data devices are currently ready for use (due to a broken disk, for
+example), one can still rely on automatic sizing by hinting how many data
+devices should share the fast devices. These hints are passed via:
+
+* ``--block-db-slots``
+* ``--block-wal-slots``
+* ``--journal-slots``
-If a raw solid device is used along with a device that has a volume group in
-addition to some spinning devices, ``ceph-volume`` will try to extend the
-existing volume group and then create a logical volume.
+For example, consider an OSD host that is supposed to contain 5 data devices and
+one device for wal/db volumes. However, one data device is currently broken and
+is being replaced. Instead of calculating the explicit sizes for the wal/db
+volume, one can simply call::
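+
+    $ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd /dev/sde --db-devices /dev/nvme0n1 --block-db-slots 5
+
+This would size the ``db`` volumes as if five data devices were present,
+leaving one slot free for the replacement disk.
+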
.. _ceph-volume-lvm-batch_report:
-When a call is received to create OSDs, the tool will prompt the user to
-continue if the pre-computed output is acceptable. This output is useful to
-understand the outcome of the received devices. Once confirmation is accepted,
-the process continues.
-Although prompts are good to understand outcomes, it is incredibly useful to
-try different inputs to find the best product possible. With the ``--report``
-flag, one can prevent any actual operations and just verify outcomes from
-inputs.
+
+It is also possible to provide explicit sizes via:
+
+* ``--block-db-size``
+* ``--block-wal-size``
+* ``--journal-size``
+
+``ceph-volume`` will try to satisfy the requested sizes given the passed disks;
+if this is not possible, no OSDs will be deployed.
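+
+For example (sketch; the size value and device names are illustrative)::
+
+    $ ceph-volume lvm batch --report /dev/sdb /dev/sdc --db-devices /dev/nvme0n1 --block-db-size 60GB
+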
-**pretty reporting**
-For two spinning devices, this is how the ``pretty`` report (the default) would
-look::
def get_default_args():
defaults = {}
- defaults.update({name.strip('-').replace('-', '_').replace('.', '_'): val['default'] for name, val in common_args.items()
- if 'default' in val})
- defaults.update({name.strip('-').replace('-', '_').replace('.', '_'): val['default'] for name, val in filestore_args.items()
- if 'default' in val})
- defaults.update({name.strip('-').replace('-', '_').replace('.', '_'): val['default'] for name, val in bluestore_args.items()
- if 'default' in val})
- defaults.update({name.strip('-').replace('-', '_').replace('.', '_'): None for name, val in common_args.items()
- if 'default' not in val})
- defaults.update({name.strip('-').replace('-', '_').replace('.', '_'): None for name, val in filestore_args.items()
- if 'default' not in val})
- defaults.update({name.strip('-').replace('-', '_').replace('.', '_'): None for name, val in bluestore_args.items()
- if 'default' not in val})
+    def format_name(name):
+        # Normalize a CLI flag like '--block-db-size' to the attribute
+        # name 'block_db_size' used for the parsed arguments.
+        return name.strip('-').replace('-', '_').replace('.', '_')
+    for argset in (common_args, filestore_args, bluestore_args):
+        # Use the declared default when present, otherwise fall back to None.
+        defaults.update({format_name(name): val.get('default', None)
+                         for name, val in argset.items()})
return defaults