git.apps.os.sepia.ceph.com Git - ceph-ansible.git/commitdiff
ceph-handler: Fix osd restart condition
author Dimitri Savineau <dsavinea@redhat.com>
Mon, 9 Sep 2019 15:23:47 +0000 (11:23 -0400)
committer Dimitri Savineau <savineau.dimitri@gmail.com>
Wed, 11 Sep 2019 17:20:30 +0000 (13:20 -0400)
In containerized deployments, the restart OSD handler couldn't be
triggered in most Ansible executions.
This is due to the use of run_once combined with a condition on the
inventory hostname and the last filter.
The run_once is evaluated first, so Ansible picks one node in the
osd group to execute the restart task. But if that node isn't the
last one in the osd group, the task is skipped. The task is more
likely to be skipped than executed.
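The interaction described above can be sketched with a small simulation (hypothetical host names, not ceph-ansible code; this only models the assumption that run_once delegates to the first matching host):

```python
# Sketch of why run_once + the removed "last host" condition usually
# skips the task. run_once makes Ansible execute the task on a single
# host (here modeled as the first host in the group), while the removed
# condition required the inventory_hostname to be the *last* host.

osd_group = ["osd0", "osd1", "osd2"]  # hypothetical inventory group

# run_once: the task runs on one host only (modeled as the first).
run_once_host = osd_group[0]

# Removed condition:
#   inventory_hostname == groups.get(osd_group_name) | last
condition_passes = run_once_host == osd_group[-1]

print(condition_passes)  # only True when the group has a single host
```

With three hosts the condition fails, so the handler is skipped; only a single-host osd group would satisfy it, which matches the commit message's observation that the task is more likely to be skipped than executed.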

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 5b1c15653fcb4772f0839f3a57f7e36ba1b86f49)

roles/ceph-handler/handlers/main.yml

index d4d53a3a236a8ad5f85054aed82c556fe2426fc0..59e7fc5af96027bd22ad1a11457595e16b4798bb 100644 (file)
         # except when a crush location is specified. ceph-disk will start the osds before the osd crush location is specified
         - osd_group_name in group_names
         - containerized_deployment | bool
-        - inventory_hostname == groups.get(osd_group_name) | last
         - ceph_osd_container_stat.get('rc') == 0
         - ceph_osd_container_stat.get('stdout_lines', [])|length != 0
         - handler_health_osd_check | bool