Podman inside Podman. This approach has its downsides too, as we have to
simulate the creation of OSDs and the addition of devices with loopback devices.
-Cephadm's box environment is a command which requires little to setup. The setup
-requires you to get the required podman images for what we call boxes and ceph.
-A box is the first layer of docker containers which can be either a seed or a
-host. A seed is the main box which holds cephadm and where you bootstrap the
-cluster. On the other hand, you have hosts with an ssh server setup so you can
-add those hosts to the cluster. The second layer, managed by cephadm, inside the
-seed box, requires the ceph image.
+Cephadm's box environment is simple to set up. The setup requires you to
+fetch the required Podman images for Ceph and for what we call boxes.
+A box is the first layer of Podman containers and can be either a seed or a
+host. The seed is the main box: it holds Cephadm and is where you bootstrap the
+cluster. Hosts, on the other hand, run an SSH server so they can be
+added to the cluster. The second layer, managed by Cephadm inside the
+seed box, requires the Ceph image.
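+For example, once a cluster is running, the first layer of boxes can be seen
+from your machine (the container names below are illustrative and depend on the
+compose project name)::
+
+    $ podman ps --format '{{.Names}}'
+    box_seed_1
+    box_hosts_1
+    box_hosts_2
+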
.. warning:: This development environment is still experimental and can have unexpected
behaviour. Please take a look at the road map and the known issues section
./box.py -v cluster setup
.. note:: It is recommended to run box with verbose (-v) as it will show the output of
- shell commands being run.
+ shell commands being run.
After getting all needed images we can create a simple cluster without OSDs and hosts with::
# explicitly change number of hosts and osds
sudo ./box.py -v cluster start --expanded --osds 5 --hosts 5
-.. warning:: OSDs are still not supported in the box implementation with podman. It is
- work in progress.
+.. warning:: OSDs are still not supported in the box implementation with Podman. It is
+ work in progress.
Without the ``--expanded`` option, explicitly adding either more hosts or OSDs won't change the state
.. note:: Each OSD will require 5 GiB of space.
After bootstrapping the cluster you can go inside the seed box, where you'll be
-able to run cephadm commands::
+able to run Cephadm commands::
./box.py -v cluster sh
[root@8d52a7860245] cephadm --help
To remove the cluster and clean up run::
./box.py cluster down
-
+
If you just want to clean up the last cluster created run::
./box.py cluster cleanup
./box.py --help
+If you want to run the box with Docker, you can. You'll have to specify which
+engine you want to use, like so::
+
+ ./box.py -v --engine docker cluster list
+
+With Docker, commands like bootstrap and OSD creation should be called with sudo,
+since creating OSDs on VGs and LVs requires root privileges::
+
+ sudo ./box.py -v --engine docker cluster start --expanded
+
+.. warning:: Using Docker as the box engine is dangerous as there were some instances
+ where the Xorg session was killed.
Known issues
------------
-* If you get permission issues with cephadm because it cannot infer the keyring
+* If you get permission issues with Cephadm because it cannot infer the keyring
and configuration, please run cephadm like this example::
cephadm shell --config /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.keyring
* Docker containers run with the --privileged flag enabled, which has been seen
  to log out user sessions on some machines.
* If SELinux is not disabled you'll probably see unexpected behaviour. For example:
- if not all permissions of Ceph repo files are set to your user it will probably
- fail starting with podman-compose.
+  if the permissions of the Ceph repo files are not set to your user, starting
+  the boxes with podman-compose will probably fail.
* If a command fails because podman couldn't find the
- container, you can debug by running the same podman-compose .. up command displayed
- with the flag -v.
+  container, you can debug by re-running the displayed podman-compose .. up command
+  with the -v flag.
Road map
------------
-* Create osds with ceph-volume raw.
+* Create osds with ``ceph-volume raw``.
* Enable ceph-volume to mark loopback devices as valid block devices in
  the inventory.
* Make the box ready to run dashboard CI tests (including cluster expansion).
+++ /dev/null
-# stable/Dockerfile
-#
-# Build a Podman container image from the latest
-# stable version of Podman on the Fedoras Updates System.
-# https://bodhi.fedoraproject.org/updates/?search=podman
-# This image can be used to create a secured container
-# that runs safely with privileges within the container.
-#
-FROM fedora:34
-
-ENV CEPHADM_PATH=/usr/local/sbin/cephadm
-# Don't include container-selinux and remove
-# directories used by yum that are just taking
-# up space.
-RUN dnf -y update; rpm --restore shadow-utils 2>/dev/null; \
-yum -y install strace podman fuse-overlayfs --exclude container-selinux; \
-rm -rf /var/cache /var/log/dnf* /var/log/yum.*
-
-RUN dnf install which firewalld chrony procps systemd openssh openssh-server openssh-clients sshpass lvm2 -y
-
-ADD https://raw.githubusercontent.com/containers/podman/main/contrib/podmanimage/stable/containers.conf /etc/containers/containers.conf
-ADD https://raw.githubusercontent.com/containers/podman/main/contrib/podmanimage/stable/podman-containers.conf /root/.config/containers/containers.conf
-
-RUN mkdir -p /root/.local/share/containers; # chown podman:podman -R /home/podman
-
-# Note VOLUME options must always happen after the chown call above
-# RUN commands can not modify existing volumes
-VOLUME /var/lib/containers
-VOLUME /root/.local/share/containers
-
-# chmod containers.conf and adjust storage.conf to enable Fuse storage.
-RUN chmod 644 /etc/containers/containers.conf; sed -i -e 's|^#mount_program|mount_program|g' -e '/additionalimage.*/a "/var/lib/shared",' -e 's|^mountopt[[:space:]]*=.*$|mountopt = "nodev,fsync=0"|g' /etc/containers/storage.conf
-RUN mkdir -p /var/lib/shared/overlay-images /var/lib/shared/overlay-layers /var/lib/shared/vfs-images /var/lib/shared/vfs-layers; touch /var/lib/shared/overlay-images/images.lock; touch /var/lib/shared/overlay-layers/layers.lock; touch /var/lib/shared/vfs-images/images.lock; touch /var/lib/shared/vfs-layers/layers.lock
-
-RUN echo 'root:root' | chpasswd
-
-RUN dnf install -y adjtimex # adjtimex syscall doesn't exist in fedora 35+ therefore we have to install it manually
- # so chronyd works
-RUN dnf -y install hostname iproute udev
-ENV _CONTAINERS_USERNS_CONFIGURED=""
-
-RUN useradd podman; \
-echo podman:0:5000 > /etc/subuid; \
-echo podman:0:5000 > /etc/subgid; \
-echo root:0:65535 > /etc/subuid; \
-echo root:0:65535 > /etc/subgid;
-
-VOLUME /home/podman/.local/share/containers
-
-ADD https://raw.githubusercontent.com/containers/libpod/master/contrib/podmanimage/stable/containers.conf /etc/containers/containers.conf
-ADD https://raw.githubusercontent.com/containers/libpod/master/contrib/podmanimage/stable/podman-containers.conf /home/podman/.config/containers/containers.conf
-
-RUN chown podman:podman -R /home/podman
-
-RUN echo 'podman:podman' | chpasswd
-RUN touch /.box_container # empty file to check if inside a container
-
-EXPOSE 8443
-EXPOSE 22
-
-ENTRYPOINT ["/usr/sbin/init"]
--- /dev/null
+# https://developers.redhat.com/blog/2014/05/05/running-systemd-within-docker-container/
+FROM centos:8 as centos-systemd
+ENV container docker
+ENV CEPHADM_PATH=/usr/local/sbin/cephadm
+
+# CentOS 8 reached EOL and the content of the CentOS 8 repos has been moved to vault.centos.org
+RUN sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-Linux-*
+RUN sed -i 's|#baseurl=http://mirror.centos.org|baseurl=https://vault.centos.org|g' /etc/yum.repos.d/CentOS-Linux-*
+
+RUN dnf -y install chrony firewalld lvm2 \
+ openssh-server openssh-clients python3 \
+ yum-utils sudo which && dnf clean all
+
+RUN systemctl enable chronyd firewalld sshd
+
+
+FROM centos-systemd as centos-systemd-docker
+# To cache cephadm images
+RUN yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
+RUN dnf -y install docker-ce && \
+ dnf clean all && systemctl enable docker
+
+# ssh utilities
+RUN dnf install epel-release -y && dnf makecache && dnf install sshpass -y
+RUN touch /.box_container # empty file to check if inside a container
+
+EXPOSE 8443
+EXPOSE 22
+
+FROM centos-systemd-docker
+WORKDIR /root
+
+CMD [ "/usr/sbin/init" ]
\ No newline at end of file
--- /dev/null
+# stable/Dockerfile
+#
+# Build a Podman container image from the latest
+# stable version of Podman on the Fedoras Updates System.
+# https://bodhi.fedoraproject.org/updates/?search=podman
+# This image can be used to create a secured container
+# that runs safely with privileges within the container.
+#
+FROM fedora:34
+
+ENV CEPHADM_PATH=/usr/local/sbin/cephadm
+# Don't include container-selinux and remove
+# directories used by yum that are just taking
+# up space.
+RUN dnf -y update; rpm --restore shadow-utils 2>/dev/null; \
+yum -y install strace podman fuse-overlayfs --exclude container-selinux; \
+rm -rf /var/cache /var/log/dnf* /var/log/yum.*
+
+RUN dnf install which firewalld chrony procps systemd openssh openssh-server openssh-clients sshpass lvm2 -y
+
+ADD https://raw.githubusercontent.com/containers/podman/main/contrib/podmanimage/stable/containers.conf /etc/containers/containers.conf
+ADD https://raw.githubusercontent.com/containers/podman/main/contrib/podmanimage/stable/podman-containers.conf /root/.config/containers/containers.conf
+
+RUN mkdir -p /root/.local/share/containers; # chown podman:podman -R /home/podman
+
+# Note VOLUME options must always happen after the chown call above
+# RUN commands can not modify existing volumes
+VOLUME /var/lib/containers
+VOLUME /root/.local/share/containers
+
+# chmod containers.conf and adjust storage.conf to enable Fuse storage.
+RUN chmod 644 /etc/containers/containers.conf; sed -i -e 's|^#mount_program|mount_program|g' -e '/additionalimage.*/a "/var/lib/shared",' -e 's|^mountopt[[:space:]]*=.*$|mountopt = "nodev,fsync=0"|g' /etc/containers/storage.conf
+RUN mkdir -p /var/lib/shared/overlay-images /var/lib/shared/overlay-layers /var/lib/shared/vfs-images /var/lib/shared/vfs-layers; touch /var/lib/shared/overlay-images/images.lock; touch /var/lib/shared/overlay-layers/layers.lock; touch /var/lib/shared/vfs-images/images.lock; touch /var/lib/shared/vfs-layers/layers.lock
+
+RUN echo 'root:root' | chpasswd
+
+RUN dnf install -y adjtimex # adjtimex syscall doesn't exist in fedora 35+ therefore we have to install it manually
+ # so chronyd works
+RUN dnf -y install hostname iproute udev
+ENV _CONTAINERS_USERNS_CONFIGURED=""
+
+RUN useradd podman; \
+echo podman:0:5000 > /etc/subuid; \
+echo podman:0:5000 > /etc/subgid; \
+echo root:0:65535 > /etc/subuid; \
+echo root:0:65535 > /etc/subgid;
+
+VOLUME /home/podman/.local/share/containers
+
+ADD https://raw.githubusercontent.com/containers/libpod/master/contrib/podmanimage/stable/containers.conf /etc/containers/containers.conf
+ADD https://raw.githubusercontent.com/containers/libpod/master/contrib/podmanimage/stable/podman-containers.conf /home/podman/.config/containers/containers.conf
+
+RUN chown podman:podman -R /home/podman
+
+RUN echo 'podman:podman' | chpasswd
+RUN touch /.box_container # empty file to check if inside a container
+
+EXPOSE 8443
+EXPOSE 22
+
+ENTRYPOINT ["/usr/sbin/init"]
run_shell_command,
run_shell_commands,
colored,
+ engine,
+ engine_compose,
Colors
)
# extract_tag
assert image_name.find(':')
image_name, tag = image_name.split(':')
- images = run_shell_command('podman image ls').split('\n')
+ images = run_shell_command(f'{engine()} image ls').split('\n')
IMAGE_NAME = 0
TAG = 1
for image in images:
def get_ceph_image():
print('Getting ceph image')
- run_shell_command(f'podman pull {CEPH_IMAGE}')
+ run_shell_command(f'{engine()} pull {CEPH_IMAGE}')
# update
- run_shell_command(f'podman build -t {CEPH_IMAGE} docker/ceph')
+ run_shell_command(f'{engine()} build -t {CEPH_IMAGE} docker/ceph')
if not os.path.exists('docker/ceph/image'):
os.mkdir('docker/ceph/image')
remove_ceph_image_tar()
- run_shell_command(f'podman save {CEPH_IMAGE} -o {CEPH_IMAGE_TAR}')
+ run_shell_command(f'{engine()} save {CEPH_IMAGE} -o {CEPH_IMAGE_TAR}')
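+    # make the image tar world-readable (presumably so it can be loaded from inside the boxes)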
+ run_shell_command(f'chmod 777 {CEPH_IMAGE_TAR}')
print('Ceph image added')
def get_box_image():
print('Getting box image')
- run_shell_command('podman build -t cephadm-box -f Dockerfile .')
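+    # each engine uses its own Dockerfile: DockerfileDocker (CentOS 8 based, with docker-ce)
+    # and DockerfilePodman (Fedora based, with podman)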
+ if engine() == 'docker':
+ run_shell_command(f'{engine()} build -t cephadm-box -f DockerfileDocker .')
+ else:
+ run_shell_command(f'{engine()} build -t cephadm-box -f DockerfilePodman .')
print('Box image added')
def check_dashboard():
@ensure_outside_container
def setup(self):
- run_shell_command('pip3 install https://github.com/containers/podman-compose/archive/devel.tar.gz')
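+        # podman-compose is only needed for the podman engine; with docker,
+        # docker-compose is assumed to be already installed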
+ if engine() == 'podman':
+ run_shell_command('pip3 install https://github.com/containers/podman-compose/archive/devel.tar.gz')
check_cgroups()
check_selinux()
os.symlink('/cephadm/cephadm', cephadm_path)
- # restart to ensure docker is using daemon.json
- # run_shell_command(
- # 'systemctl restart docker'
- # )
+ if engine() == 'docker':
+ # restart to ensure docker is using daemon.json
+ run_shell_command(
+ 'systemctl restart docker'
+ )
st = os.stat(cephadm_path)
os.chmod(cephadm_path, st.st_mode | stat.S_IEXEC)
- run_shell_command('podman load < /cephadm/box/docker/ceph/image/quay.ceph.image.tar')
+ run_shell_command(f'{engine()} load < /cephadm/box/docker/ceph/image/quay.ceph.image.tar')
# cephadm guid error because it sometimes tries to use quay.ceph.io/ceph-ci/ceph:<none>
# instead of main branch's tag
run_shell_command('export CEPH_SOURCE_FOLDER=/ceph')
run_cephadm_shell_command('ceph -s')
- print(colored('dashboard available at https://localhost:8443', Colors.OKGREEN))
print('Bootstrap completed!')
@ensure_outside_container
hosts = Config.get('hosts')
# ensure boxes don't exist
- run_shell_command('podman-compose down')
- I_am = run_shell_command('whoami')
- if 'root' in I_am:
- print(root_error_msg)
- sys.exit(1)
+ self.down()
+
+        # podman is run without sudo
+ if engine() == 'podman':
+ I_am = run_shell_command('whoami')
+ if 'root' in I_am:
+ print(root_error_msg)
+ sys.exit(1)
print('Checking docker images')
if not image_exists(CEPH_IMAGE):
print('Starting containers')
- dcflags = '-f docker-compose.yml'
- if not os.path.exists('/sys/fs/cgroup/cgroup.controllers'):
- dcflags += ' -f docker-compose.cgroup1.yml'
- run_shell_command(f'podman-compose --podman-run-args "--group-add keep-groups --network=host --device /dev/fuse -it {loop_device_arg}" up --scale hosts={hosts} -d')
- ip = run_dc_shell_command('hostname -i', 1, 'seed')
- assert ip != '127.0.0.1'
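+        # docker needs the extra cgroup v1 compose file on hosts without
+        # cgroup v2 (no /sys/fs/cgroup/cgroup.controllers)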
+ if engine() == 'docker':
+ dcflags = f'-f {Config.get("docker_yaml")}'
+ if not os.path.exists('/sys/fs/cgroup/cgroup.controllers'):
+ dcflags += f' -f {Config.get("docker_v1_yaml")}'
+ run_shell_command(f'{engine_compose()} {dcflags} up --scale hosts={hosts} -d')
+ else:
+ run_shell_command(f'{engine_compose()} -f {Config.get("podman_yaml")} --podman-run-args "--group-add keep-groups --network=host --device /dev/fuse -it {loop_device_arg}" up --scale hosts={hosts} -d')
run_shell_command('sudo sysctl net.ipv4.conf.all.forwarding=1')
run_shell_command('sudo iptables -P FORWARD ACCEPT')
sed 's/$OPTIONS/-x/g' /usr/lib/systemd/system/chronyd.service -i
systemctl daemon-reload
systemctl start chronyd
- systemctl status chronyd
+ systemctl status --no-pager chronyd
"""
for h in range(hosts):
run_dc_shell_commands(h + 1, 'hosts', chronyd_setup)
)
skip_dashboard = '--skip-dashboard' if Config.get('skip-dashboard') else ''
box_bootstrap_command = (
- f'/cephadm/box/box.py {verbose} cluster bootstrap '
+ f'/cephadm/box/box.py {verbose} --engine {engine()} cluster bootstrap '
f'--osds {osds} '
f'--hosts {hosts} '
f'{skip_deploy} '
host._add_hosts(ips, hostnames)
# TODO: add osds
- # if expanded and not Config.get('skip-deploy-osds'):
- # print('Deploying osds... This could take up to minutes')
- # osd.deploy_osds_in_vg('vg1')
- # print('Osds deployed')
+ if expanded and not Config.get('skip-deploy-osds'):
+ if engine() == 'podman':
+ print('osd deployment not supported in podman')
+ else:
+                print('Deploying osds... This could take a few minutes')
+ osd.deploy_osds_in_vg('vg1')
+ print('Osds deployed')
+
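+        # podman boxes share the host network, so the dashboard is reachable on
+        # localhost; with docker, use the seed container's IP instead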
+ dashboard_ip = 'localhost'
+ info = get_boxes_container_info(with_seed=True)
+ if engine() == 'docker':
+ for i in range(info['size']):
+ if 'seed' in info['container_names'][i]:
+ dashboard_ip = info["ips"][i]
+ print(colored(f'dashboard available at https://{dashboard_ip}:8443', Colors.OKGREEN))
print('Bootstrap finished successfully')
@ensure_outside_container
def down(self):
- run_shell_command('podman-compose down')
- cleanup_box()
+ if engine() == 'podman':
+ run_shell_command(f'{engine_compose()} -f {Config.get("podman_yaml")} down')
+ else:
+ run_shell_command(f'{engine_compose()} -f {Config.get("docker_yaml")} down')
print('Successfully killed all boxes')
@ensure_outside_container
# we need verbose to see the prompt after running shell command
Config.set('verbose', True)
print('Seed bash')
- run_shell_command('podman-compose exec seed bash')
+ run_shell_command(f'{engine_compose()} -f {Config.get("docker_yaml")} exec seed bash')
targets = {
parser.add_argument(
'-v', action='store_true', dest='verbose', help='be more verbose'
)
+ parser.add_argument(
+ '--engine', type=str, default='podman',
+ dest='engine', help='choose engine between "docker" and "podman"'
+ )
subparsers = parser.add_subparsers()
target_instances = {}
--- /dev/null
+version: "2.4"
+services:
+ cephadm-host-base:
+ build:
+ context: .
+ environment:
+ - CEPH_BRANCH=master
+ image: cephadm-box
+ privileged: true
+ stop_signal: RTMIN+3
+ volumes:
+ - ../../../:/ceph
+ - ..:/cephadm
+ - ./daemon.json:/etc/docker/daemon.json
+ # dangerous, maybe just map the loopback
+ # https://stackoverflow.com/questions/36880565/why-dont-my-udev-rules-work-inside-of-a-running-docker-container
+ - /dev:/dev
+ networks:
+ - public
+ mem_limit: "20g"
+ scale: -1
+ seed:
+ extends:
+ service: cephadm-host-base
+ ports:
+ - "3000:3000"
+ - "8443:8443"
+ - "9095:9095"
+ scale: 1
+ hosts:
+ extends:
+ service: cephadm-host-base
+ scale: 3
+
+
+volumes:
+ var-lib-docker:
+networks:
+ public:
--- /dev/null
+version: "2.4"
+services:
+ cephadm-host-base:
+ build:
+ context: .
+ environment:
+ - CEPH_BRANCH=master
+ image: cephadm-box
+ # probably not needed with rootless Docker and cgroups v2
+ # privileged: true
+ cap_add:
+ - SYS_ADMIN
+ - NET_ADMIN
+ - SYS_TIME
+ - SYS_RAWIO
+ - MKNOD
+ - NET_RAW
+ - SETUID
+ - SETGID
+ - CHOWN
+ - SYS_PTRACE
+ - SYS_TTY_CONFIG
+ - CAP_AUDIT_WRITE
+ - CAP_AUDIT_CONTROL
+ stop_signal: RTMIN+3
+ volumes:
+ - ../../../:/ceph:z
+ - ..:/cephadm:z
+ # - ./daemon.json:/etc/docker/daemon.json
+ # dangerous, maybe just map the loopback
+ # https://stackoverflow.com/questions/36880565/why-dont-my-udev-rules-work-inside-of-a-running-docker-container
+ - /run/udev:/run/udev
+ - /sys/dev/block:/sys/dev/block
+ - /sys/fs/cgroup:/sys/fs/cgroup
+ - /dev/fuse:/dev/fuse
+ - /dev/disk:/dev/disk
+ - /dev/mapper:/dev/mapper
+ - /dev/mapper/control:/dev/mapper/control
+ mem_limit: "20g"
+ scale: -1
+ seed:
+ extends:
+ service: cephadm-host-base
+ ports:
+ - "2222:22"
+ - "3000:3000"
+ - "8888:8888"
+ - "8443:8443"
+ - "9095:9095"
+ scale: 1
+ hosts:
+ extends:
+ service: cephadm-host-base
+ scale: 1
+
+
+volumes:
+ var-lib-docker:
+
+network_mode: public
+++ /dev/null
-version: "2.4"
-services:
- cephadm-host-base:
- build:
- context: .
- environment:
- - CEPH_BRANCH=master
- image: cephadm-box
- # probably not needed with rootless Docker and cgroups v2
- # privileged: true
- cap_add:
- - SYS_ADMIN
- - NET_ADMIN
- - SYS_TIME
- - SYS_RAWIO
- - MKNOD
- - NET_RAW
- - SETUID
- - SETGID
- - CHOWN
- - SYS_PTRACE
- - SYS_TTY_CONFIG
- - CAP_AUDIT_WRITE
- - CAP_AUDIT_CONTROL
- stop_signal: RTMIN+3
- volumes:
- - ../../../:/ceph:z
- - ..:/cephadm:z
- # - ./daemon.json:/etc/docker/daemon.json
- # dangerous, maybe just map the loopback
- # https://stackoverflow.com/questions/36880565/why-dont-my-udev-rules-work-inside-of-a-running-docker-container
- - /run/udev:/run/udev
- - /sys/dev/block:/sys/dev/block
- - /sys/fs/cgroup:/sys/fs/cgroup
- - /dev/fuse:/dev/fuse
- - /dev/disk:/dev/disk
- - /dev/mapper:/dev/mapper
- - /dev/mapper/control:/dev/mapper/control
- mem_limit: "20g"
- scale: -1
- seed:
- extends:
- service: cephadm-host-base
- ports:
- - "2222:22"
- - "3000:3000"
- - "8888:8888"
- - "8443:8443"
- - "9095:9095"
- scale: 1
- hosts:
- extends:
- service: cephadm-host-base
- scale: 1
-
-
-volumes:
- var-lib-docker:
-
-network_mode: public
run_cephadm_shell_command,
run_dc_shell_command,
run_shell_command,
+ engine,
)
print('Redirecting to _setup_ssh to container')
verbose = '-v' if Config.get('verbose') else ''
run_dc_shell_command(
- f'/cephadm/box/box.py {verbose} host setup_ssh {container_type} {container_index}',
+ f'/cephadm/box/box.py {verbose} --engine {engine()} host setup_ssh {container_type} {container_index}',
container_index,
container_type,
)
hostnames = ' '.join(hostnames)
hostnames = f'{hostnames}'
run_dc_shell_command(
- f'/cephadm/box/box.py {verbose} host add_hosts seed 1 --ips {ips} --hostnames {hostnames}',
+ f'/cephadm/box/box.py {verbose} --engine {engine()} host add_hosts seed 1 --ips {ips} --hostnames {hostnames}',
1,
'seed',
)
ips = f'{ips}'
# assume we only have one seed
run_dc_shell_command(
- f'/cephadm/box/box.py {verbose} host copy_cluster_ssh_key seed 1 --ips {ips}',
+ f'/cephadm/box/box.py {verbose} --engine {engine()} host copy_cluster_ssh_key seed 1 --ips {ips}',
1,
'seed',
)
run_cephadm_shell_command,
run_dc_shell_command,
run_shell_command,
+ engine
)
makes another process run in the background
"""
if inside_container():
- print('xd')
- else:
- lvs = json.loads(run_shell_command('sudo lvs --reportformat json'))
+ lvs = json.loads(run_shell_command('lvs --reportformat json'))
# distribute osds per host
hosts = get_orch_hosts()
host_index = 0
- verbose = '-v' if Config.get('verbose') else ''
for lv in lvs['report'][0]['lv']:
if lv['vg_name'] == vg:
deployed = False
while not deployed:
- hostname = hosts[host_index]['hostname']
- deployed = run_dc_shell_command(
- f'/cephadm/box/box.py -v osd deploy --data /dev/{vg}/{lv["lv_name"]} --hostname {hostname}', 1, 'seed'
+ deployed = deploy_osd(
+ f'{vg}/{lv["lv_name"]}', hosts[host_index]['hostname']
)
- deployed = 'created osd' in deployed.lower()
host_index = (host_index + 1) % len(hosts)
+ else:
+ verbose = '-v' if Config.get('verbose') else ''
+ print('Redirecting deploy osd in vg to inside container')
+ run_dc_shell_command(
+ f'/cephadm/box/box.py {verbose} --engine {engine()} osd deploy --vg {vg}', 1, 'seed'
+ )
class Osd(Target):
'config': '/etc/ceph/ceph.conf',
'keyring': '/etc/ceph/ceph.keyring',
'loop_img': 'loop-images/loop.img',
+ 'engine': 'podman',
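+        # compose files used depending on the engine selected with --engine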
+ 'docker_yaml': 'docker-compose-docker.yml',
+ 'docker_v1_yaml': 'docker-compose.cgroup1.yml',
+ 'podman_yaml': 'docker-compose-podman.yml',
}
@staticmethod
container_id = get_container_id(f'{box_type}_{index}')
print(container_id)
out = run_shell_command(
- f'podman exec -it {container_id} {command}', expect_error
+ f'{engine()} exec -it {container_id} {command}', expect_error
)
return out
return os.path.exists('/.box_container')
def get_container_id(container_name: str):
- return run_shell_command("podman ps | \grep " + container_name + " | awk '{ print $1 }'")
+ return run_shell_command(f"{engine()} ps | \grep " + container_name + " | awk '{ print $1 }'")
+
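+# helpers for the container engine chosen with --engine ('podman' or 'docker')
+# and its matching compose command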
+def engine():
+ return Config.get('engine')
+
+def engine_compose():
+ return f'{engine()}-compose'
@ensure_outside_container
def get_boxes_container_info(with_seed: bool = False) -> Dict[str, Any]:
IP = 0
CONTAINER_NAME = 1
HOSTNAME = 2
- ips_query = "podman inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} %tab% {{.Name}} %tab% {{.Config.Hostname}}' $(podman ps -aq) | sed 's#%tab%#\t#g' | sed 's#/##g' | sort -t . -k 1,1n -k 2,2n -k 3,3n -k 4,4n"
+    # f-string interpolation would mistakenly try to interpolate the inspect format options
+ ips_query = engine() + " inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} %tab% {{.Name}} %tab% {{.Config.Hostname}}' $("+ engine() + " ps -aq) | sed 's#%tab%#\t#g' | sed 's#/##g' | sort -t . -k 1,1n -k 2,2n -k 3,3n -k 4,4n"
out = run_shell_command(ips_query)
# FIXME: if things get more complex a class representing a container info might be useful,
# for now representing data this way is faster.