Note: this commit was inspired by
http://github.com/ceph/ceph-qa-suite/commit/50758a4810794d265c5d36a71d1e16799251a00d
As of 10.2.4, when upgrading a cluster from hammer to jewel, once the last
node has been upgraded the MON puts the cluster into HEALTH_WARN with the
message "all OSDs are running jewel or later but the 'require_jewel_osds'
osdmap flag is not set". The release notes say:

This is a signal for the admin to do "ceph osd set require_jewel_osds"; by
doing this, the upgrade path is complete and no more pre-Jewel OSDs may be
added to the cluster.
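
For reference, the manual step on a live cluster looks roughly like this
(run from any node with an admin keyring; the health output is abbreviated
and assumes no other warnings are pending):

    ceph health
    # HEALTH_WARN all OSDs are running jewel or later but the
    # 'require_jewel_osds' osdmap flag is not set
    ceph osd set require_jewel_osds
    ceph health
    # HEALTH_OK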
Fixes: http://tracker.ceph.com/issues/18719
Signed-off-by: Nathan Cutler <ncutler@suse.com>
- 'namespace'
num_objects: 5
name_length: [400, 800, 1600]
-- ceph.restart: [osd.2]
+- ceph.restart:
+    daemons: [osd.2]
+    wait-for-healthy: false
+    wait-for-osds-up: true
+- exec:
+    mon.a:
+      - sleep 60
+      - ceph osd set require_jewel_osds
- ceph_manager.wait_for_clean: null
- ceph_manager.do_pg_scrub:
args: ['test', 0, 'scrub']
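
The restart deliberately does not wait for HEALTH_OK (wait-for-healthy:
false), because the cluster is expected to sit in HEALTH_WARN until the flag
is set; it only waits for the restarted OSD to report up. The exec task then
sleeps for a minute (presumably to let the osdmap settle) and sets the flag
from mon.a. A sketch of the post-patch task sequence (indentation assumed,
surrounding tasks elided):

    - ceph.restart:
        daemons: [osd.2]
        wait-for-healthy: false   # HEALTH_WARN is expected until the flag is set
        wait-for-osds-up: true    # but the restarted OSD must report up
    - exec:
        mon.a:
          - sleep 60
          - ceph osd set require_jewel_osds
    - ceph_manager.wait_for_clean: null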