--- /dev/null
+.. _ceph-volume-lvm-batch:
+
+``batch``
+===========
+This subcommand creates multiple OSDs at the same time from a given list of
+input devices. Depending on the device type (spinning drive or solid state),
+the internal engine decides the best approach for creating the OSDs.
+
+This decision abstracts away the many nuances of creating an OSD: how large
+should a ``block.db`` be? How can one mix a solid state device with spinning
+devices in an efficient way?
+
+The process is similar to :ref:`ceph-volume-lvm-create`, and will do the
+preparation and activation at once, following the same workflow for each OSD.
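+
+A minimal invocation only needs the devices that should become OSDs. The
+device paths below are placeholders for whatever devices exist on the host::
+
+    $ ceph-volume lvm batch /dev/sdb /dev/sdc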
+
+..
+   All the features that ``ceph-volume lvm create`` supports, like ``dmcrypt``,
+   preventing ``systemd`` units from starting, and defining bluestore or
+   filestore, are supported. Any fine-grained option that may affect a single
+   OSD is not supported, for example: specifying where journals should be
+   placed.
+
+
+.. _ceph-volume-lvm-batch_bluestore:
+
+``bluestore``
+-------------
+The :term:`bluestore` objectstore (the default) is used when creating multiple OSDs
+with the ``batch`` sub-command. It allows a few different scenarios depending
+on the input of devices:
+
+#. Devices are all spinning HDDs: 1 OSD is created per device
+#. Devices are all solid state drives (SSDs): 2 OSDs are created per device
+#. Devices are a mix of HDDs and SSDs: data is placed on the spinning device,
+   and the ``block.db`` is created on the solid state device, as large as
+   possible (see the example after this list).
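+
+For the mixed case (the third scenario), an invocation could look like the
+following, where the device paths are only examples; the tool detects each
+device type on its own and decides the placement accordingly::
+
+    $ ceph-volume lvm batch /dev/sda /dev/sdb /dev/nvme0n1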
+
+
+.. note:: Although operations in ``ceph-volume lvm create`` allow usage of
+          ``block.wal``, it isn't supported with the ``batch`` sub-command.
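+
+If a separate ``block.wal`` is needed, the OSD can instead be created
+individually with :ref:`ceph-volume-lvm-create`, which does accept it. A
+minimal sketch, where the data device and the ``block.wal`` partition are
+placeholders::
+
+    $ ceph-volume lvm create --bluestore --data /dev/sdb --block.wal /dev/sdd1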
+
+.. _ceph-volume-lvm-batch_report:
+
+Reporting
+---------
+When called to create OSDs, the tool first presents the pre-computed outcome
+and prompts the user to continue. This output is useful for understanding how
+the given devices will be used. Once the user confirms, the process continues.
+
+Although the prompt helps in understanding the outcome, it is also useful to
+try different inputs to find the best possible layout. With the ``--report``
+flag, no actual operations are performed, and the outcome for a given set of
+inputs is simply reported.
+
+**pretty reporting**
+
+For two spinning devices, this is how the ``pretty`` report (the default) would
+look::
+
+    $ ceph-volume lvm batch --report /dev/sdb /dev/sdc
+
+    Total OSDs: 2
+
+      Type            Path                      LV Size         % of device
+    --------------------------------------------------------------------------------
+      [data]          /dev/sdb                  10.74 GB        100%
+    --------------------------------------------------------------------------------
+      [data]          /dev/sdc                  10.74 GB        100%
+
+
+
+**JSON reporting**
+
+Reporting can produce richer output with ``JSON``, which gives a few more
+hints on sizing. This format is better suited for other tooling that needs to
+consume and transform the information.
+
+For two spinning devices, this is how the ``JSON`` report would look::
+
+    $ ceph-volume lvm batch --report --format=json /dev/sdb /dev/sdc
+    {
+        "osds": [
+            {
+                "block.db": {},
+                "data": {
+                    "human_readable_size": "10.74 GB",
+                    "parts": 1,
+                    "path": "/dev/sdb",
+                    "percentage": 100,
+                    "size": 11534336000.0
+                }
+            },
+            {
+                "block.db": {},
+                "data": {
+                    "human_readable_size": "10.74 GB",
+                    "parts": 1,
+                    "path": "/dev/sdc",
+                    "percentage": 100,
+                    "size": 11534336000.0
+                }
+            }
+        ],
+        "vgs": [
+            {
+                "devices": [
+                    "/dev/sdb"
+                ],
+                "parts": 1
+            },
+            {
+                "devices": [
+                    "/dev/sdc"
+                ],
+                "parts": 1
+            }
+        ]
+    }
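+
+Since the ``JSON`` report is meant to be consumed by other tooling, it can be
+parsed directly from a script. The following is a minimal sketch, not part of
+``ceph-volume`` itself: the device paths are placeholders and the report
+structure is assumed to match the example above::
+
+    import json
+    import subprocess
+
+    # Placeholder devices; use the devices intended for the batch run.
+    devices = ["/dev/sdb", "/dev/sdc"]
+
+    # Ask for the report as JSON so it can be parsed programmatically.
+    result = subprocess.run(
+        ["ceph-volume", "lvm", "batch", "--report", "--format=json"] + devices,
+        check=True,
+        stdout=subprocess.PIPE,
+    )
+
+    report = json.loads(result.stdout)
+
+    # Print a one-line summary per OSD, mirroring the "pretty" report.
+    for osd in report["osds"]:
+        data = osd["data"]
+        print("{path}: {size} ({pct}% of device)".format(
+            path=data["path"],
+            size=data["human_readable_size"],
+            pct=data["percentage"],
+        ))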