ceph-volume: use correct extents when using db-devices and >1 osds_per_device
author    Fabian Niepelt <f.niepelt@mittwald.de>
          Wed, 11 Dec 2019 13:19:14 +0000 (14:19 +0100)
committer GitHub <noreply@github.com>
          Wed, 11 Dec 2019 13:19:14 +0000 (14:19 +0100)
The actual data size, which depends on osds_per_device, needs to be calculated here. Otherwise, if osds_per_device is greater than 1, ceph-volume allocates 100% of the device's extents to the first OSD and then fails to create the LV for the second, because the volume group is already full.

Fixes: https://tracker.ceph.com/issues/39442
Signed-off-by: Fabian Niepelt <f.niepelt@mittwald.de>
src/ceph-volume/ceph_volume/devices/lvm/strategies/bluestore.py

index afefc224bb6962616d8926e6ca68953b208da687..b6020fc4a630367b81cb0bcf506af555a51d1926 100644 (file)
@@ -355,7 +355,7 @@ class MixedType(MixedStrategy):
         for osd in self.computed['osds']:
             data_path = osd['data']['path']
             data_vg = data_vgs[data_path]
-            data_lv_extents = data_vg.sizing(parts=1)['extents']
+            data_lv_extents = data_vg.sizing(parts=self.osds_per_device)['extents']
             data_uuid = system.generate_uuid()
             data_lv = lvm.create_lv(
                 'osd-block', data_uuid, vg=data_vg.name, extents=data_lv_extents)
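The effect of the one-line change can be sketched outside of ceph-volume. The helper below is a hypothetical stand-in for the volume group's `sizing()` method (not the real `VolumeGroup` class); it only illustrates how dividing the free extents by `parts` leaves room for the remaining OSDs, whereas `parts=1` hands the entire volume group to the first LV.

```python
# Minimal sketch, NOT ceph-volume's actual VolumeGroup implementation:
# illustrates why sizing must use parts=osds_per_device instead of parts=1.

def sizing(total_extents, parts):
    """Split a volume group's free extents evenly across `parts` LVs."""
    return {'parts': parts, 'extents': total_extents // parts}

# Example: a VG with 25600 free extents and osds_per_device=2.
buggy = sizing(25600, parts=1)['extents']   # first LV consumes the whole VG
fixed = sizing(25600, parts=2)['extents']   # each OSD gets half the extents

print(buggy)  # 25600 -> second create_lv() call fails, VG is already full
print(fixed)  # 12800 -> extents remain for the second OSD's LV
```

With the fix, each iteration of the loop over `self.computed['osds']` requests only its share of the volume group, so `lvm.create_lv()` succeeds for every OSD on the device.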