During the transition from jewel non-containerized to containerized
deployments, the old ceph units are disabled. In some cases ceph-disk
units can still remain and will appear as 'loaded failed'. This is not a
problem, although operators might not like to see these units failing.
That's why we remove them when we find them.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1577846
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 49a47124859e6577fb99e6dd680c5244ccd6f38f)
Signed-off-by: Sébastien Han <seb@redhat.com>
pre_tasks:
- - name: collect running osds
+ - name: collect running osds and ceph-disk unit(s)
shell: |
- systemctl list-units | grep "loaded active" | grep -Eo 'ceph-osd@[0-9]{1,2}.service'
+ systemctl list-units | grep "loaded active" | grep -Eo 'ceph-osd@[0-9]{1,2}.service|ceph-disk@dev-[a-z]{3,4}[0-9]{1}.service'
register: running_osds
changed_when: false
failed_when: false
- not collect_devices.get("skipped")
- collect_devices != []
- - name: stop non-containerized ceph osd(s)
+ - name: stop/disable/mask non-containerized ceph osd(s) and ceph-disk units (if any)
systemd:
name: "{{ item }}"
state: stopped
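The `-Eo` pattern added to the collect task above can be exercised locally against sample `systemctl list-units` output; this is a minimal sketch, and the unit and device names below are illustrative, not taken from a real host:

```shell
# Illustrative systemctl list-units output (device names are made up)
sample='ceph-osd@3.service loaded active running
ceph-disk@dev-sdb1.service loaded active exited
sshd.service loaded active running'

# Same filter chain as the playbook's shell task: keep "loaded active"
# lines, then extract only the ceph-osd@N / ceph-disk@dev-XXXN unit names
printf '%s\n' "$sample" \
  | grep "loaded active" \
  | grep -Eo 'ceph-osd@[0-9]{1,2}.service|ceph-disk@dev-[a-z]{3,4}[0-9]{1}.service'
```

With the sample above this prints `ceph-osd@3.service` and `ceph-disk@dev-sdb1.service`, while unrelated units such as `sshd.service` are dropped.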