git-server-git.apps.pok.os.sepia.ceph.com Git - ceph-ansible.git/commitdiff
osd: ensure /var/lib/ceph/osd/{cluster}-{id} is present
author Guillaume Abrioux <gabrioux@redhat.com>
Tue, 17 Nov 2020 09:45:14 +0000 (10:45 +0100)
committer Guillaume Abrioux <gabrioux@redhat.com>
Thu, 19 Nov 2020 08:20:28 +0000 (09:20 +0100)
This commit ensures that the `/var/lib/ceph/osd/{{ cluster }}-{{ osd_id }}`
directory is present before starting OSDs.

This is needed specifically when redeploying an OSD after an OS upgrade
failure.
Since the ceph data are still present on its devices, the node can be
redeployed; however, those directories aren't present because they are
initially created by ceph-volume. We could recreate them manually, but
for a better user experience we can ask ceph-ansible to recreate them.

NOTE:
This only works for OSDs that were deployed with ceph-volume.
For ceph-disk deployed OSDs, those directories would have to be
recreated manually.
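For the ceph-disk case mentioned above, the manual recreation could look like the minimal sketch below. It is not part of this change; the `recreate_osd_dir` function name is hypothetical, and the cluster name, OSD id, and base path are assumptions to adjust for your deployment:

```shell
# Hypothetical helper: recreate the data directory for one OSD.
# Usage: recreate_osd_dir <cluster> <osd_id> [base_dir]
recreate_osd_dir() {
    cluster="$1"
    osd_id="$2"
    base="${3:-/var/lib/ceph/osd}"   # default base path used by ceph

    dir="${base}/${cluster}-${osd_id}"
    mkdir -p "${dir}"
    # The ceph user/group may not exist on every host; ignore failures here.
    chown ceph:ceph "${dir}" 2>/dev/null || true
    chmod 0755 "${dir}"
    printf '%s\n' "${dir}"
}

# Example (assumed cluster name "ceph" and OSD id 3):
# recreate_osd_dir ceph 3
```

After recreating the directory, the OSD's data partition still has to be mounted there before the `ceph-osd@<id>` unit is started.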

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1898486
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
roles/ceph-osd/tasks/start_osds.yml

index cfb93b99b0469e64d9675dec70b52542cb23d25d..78ec5be026b819f2ef47877e3323acaaafd6daad 100644 (file)
     - ceph_osd_systemd_overrides is defined
     - ansible_service_mgr == 'systemd'
 
+- name: ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present
+  file:
+    state: directory
+    path: "/var/lib/ceph/osd/{{ cluster }}-{{ item }}"
+    mode: "{{ ceph_directories_mode }}"
+    owner: "{{ ceph_uid if containerized_deployment | bool else 'ceph' }}"
+    group: "{{ ceph_uid if containerized_deployment | bool else 'ceph' }}"
+  with_items: "{{ ((ceph_osd_ids.stdout | default('{}') | from_json).keys() | list) | union(osd_ids_non_container.stdout_lines | default([])) }}"
+
 - name: systemd start osd
   systemd:
     name: ceph-osd@{{ item }}