after all mons are upgraded, let's reset mon_host, which is used in the
rest of the playbook for setting `container_exec_cmd`, so we are sure to
use the right value.
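
For illustration, `container_exec_cmd` is typically built from the first
monitor's hostname, roughly along these lines (a minimal sketch; the exact
expression lives in the ceph-defaults/ceph-facts roles and may differ):

```
# Sketch only: container_exec_cmd derived from the first mon's hostname.
# If mon_host still points at a mon whose container was renamed/removed
# during the upgrade, every delegated "docker exec" call fails.
- name: set_fact container_exec_cmd
  set_fact:
    container_exec_cmd: "{{ container_binary }} exec ceph-mon-{{ hostvars[groups[mon_group_name][0]]['ansible_hostname'] }}"
```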
Typical error:
```
failed: [mds0 -> mon0] (item={u'path': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'name': u'client.bootstrap-mds', u'copy_key': True}) => changed=true
ansible_loop_var: item
cmd:
- docker
- exec
- ceph-mon-mon2
- ceph
- --cluster
- ceph
- auth
- get
- client.bootstrap-mds
delta: '0:00:00.016294'
end: '2019-09-27 13:54:58.828835'
item:
copy_key: true
name: client.bootstrap-mds
path: /var/lib/ceph/bootstrap-mds/ceph.keyring
msg: non-zero return code
rc: 1
start: '2019-09-27 13:54:58.812541'
stderr: 'Error response from daemon: No such container: ceph-mon-mon2'
stderr_lines: <omitted>
stdout: ''
stdout_lines: <omitted>
```
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit d84160a170d9d134dc5b7ca246004fbe8a14b7af)
delay: "{{ health_mon_check_delay }}"
when: containerized_deployment | bool
+
+- name: reset mon_host
+ hosts: "{{ mon_group_name|default('mons') }}"
+ become: True
+ tasks:
+ - import_role:
+ name: ceph-defaults
+
+ - name: reset mon_host fact
+ set_fact:
+ mon_host: "{{ hostvars[groups[mon_group_name][0]]['ansible_hostname'] }}"
+
+
- name: upgrade ceph mgr nodes when implicitly collocated on monitors
vars:
health_mon_check_retries: 5