This adds an upgrade suite to ensure that a Ceph cluster without a
CephFS file system does not blow up on upgrade (in particular, that the
MDSMonitor does not trip). This was developed to potentially reproduce
tracker 51673, but the actual cause of that issue was an old MDSMap
encoding that was obsoleted in Pacific. To reproduce it, you must create
a cluster older than the introduction of the FSMap (~Hammer or
Infernalis). In any case, this upgrade suite may be useful in the
future, so let's keep it!

Related-to: https://tracker.ceph.com/issues/51673
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
(cherry picked from commit 9941188116e22104b625d35d7f4137f438632615)
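
For context, the condition this suite exercises can be spot-checked by hand on
any running cluster that has no CephFS file system (a rough sketch, assuming an
admin keyring is available; not part of the patch itself):

    # With no file system created, the FSMap dump should report
    # "No filesystems configured" and still decode cleanly.
    ceph fs dump

    # After restarting the mons on the new release, all daemons should
    # report consistent versions, i.e. the MDSMonitor did not trip.
    ceph versions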
--- /dev/null
+../.qa/
\ No newline at end of file
--- /dev/null
+This test just verifies that upgrades work with no file system present. In
+particular, it catches the case where the MDSMonitor blows up due to version
+mismatches.
--- /dev/null
+.qa/cephfs/objectstore-ec/bluestore-bitmap.yaml
\ No newline at end of file
--- /dev/null
+.qa/distros/supported/centos_latest.yaml
\ No newline at end of file
--- /dev/null
+.qa/cephfs/conf/
\ No newline at end of file
--- /dev/null
+roles:
+- [mon.a, mon.b, mon.c, mgr.x, mgr.y, osd.0, osd.1, osd.2, osd.3]
+openstack:
+- volumes: # attached to each instance
+ count: 4
+ size: 10 # GB
--- /dev/null
+../.qa/
\ No newline at end of file
--- /dev/null
+overrides:
+ ceph:
+ conf:
+ global:
+ mon pg warn min per osd: 0
--- /dev/null
+.qa/cephfs/overrides/whitelist_health.yaml
\ No newline at end of file
--- /dev/null
+.qa/cephfs/overrides/whitelist_wrongly_marked_down.yaml
\ No newline at end of file
--- /dev/null
+../.qa/
\ No newline at end of file
--- /dev/null
+meta:
+- desc: |
+ install ceph/octopus latest
+tasks:
+- install:
+ branch: octopus
+ exclude_packages:
+ - librados3
+ - ceph-mgr-dashboard
+ - ceph-mgr-diskprediction-local
+ - ceph-mgr-rook
+ - ceph-mgr-cephadm
+ - cephadm
+ extra_packages: ['librados2']
+- print: "**** done installing octopus"
+- ceph:
+ log-ignorelist:
+ - overall HEALTH_
+ - \(FS_
+ - \(MDS_
+ - \(OSD_
+ - \(MON_DOWN\)
+ - \(CACHE_POOL_
+ - \(POOL_
+ - \(MGR_DOWN\)
+ - \(PG_
+ - \(SMALLER_PGP_NUM\)
+ - Monitor daemon marked osd
+ - Behind on trimming
+ - Manager daemon
+ conf:
+ global:
+ mon warn on pool no app: false
+ ms bind msgr2: false
+- exec:
+ osd.0:
+ - ceph osd set-require-min-compat-client octopus
+- print: "**** done ceph"
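
A quick way to confirm the set-require-min-compat-client step above took effect
outside of teuthology (a hedged manual check, not part of the suite):

    # The OSDMap should now carry "require_min_compat_client octopus".
    ceph osd dump | grep min_compat_client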
--- /dev/null
+overrides:
+ ceph:
+ log-ignorelist:
+ - scrub mismatch
+ - ScrubResult
+ - wrongly marked
+ - \(POOL_APP_NOT_ENABLED\)
+ - \(SLOW_OPS\)
+ - overall HEALTH_
+ - \(MON_MSGR2_NOT_ENABLED\)
+ - slow request
+ conf:
+ global:
+ bluestore warn on legacy statfs: false
+ bluestore warn on no per pool omap: false
+ mon:
+ mon warn on osd down out interval zero: false
+
+tasks:
+- print: "*** upgrading, no cephfs present"
+- exec:
+ mon.a:
+ - ceph fs dump
+- install.upgrade:
+ mon.a:
+- print: "**** done install.upgrade"
+- ceph.restart:
+ daemons: [mon.*, mgr.*]
+ mon-health-to-clog: false
+ wait-for-healthy: false
+- ceph.healthy:
+- ceph.restart:
+ daemons: [osd.*]
+ wait-for-healthy: false
+ wait-for-osds-up: true
+- exec:
+ mon.a:
+ - ceph versions
+ - ceph osd dump -f json-pretty
+ - ceph fs dump
+ - ceph osd require-osd-release octopus
+ - for f in `ceph osd pool ls` ; do ceph osd pool set $f pg_autoscale_mode off ; done
+ #- ceph osd set-require-min-compat-client octopus
+- ceph.healthy:
+- print: "**** done ceph.restart"
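
For reference, the post-upgrade verification in the exec block above can also
be run by hand against the cluster; a minimal sketch of the same checks:

    # Every daemon should report the post-upgrade release.
    ceph versions
    # The OSDMap should record the new required OSD release.
    ceph osd dump -f json-pretty | grep require_osd_release
    # The FSMap must still dump cleanly with no file system present.
    ceph fs dump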