Signed-off-by: David Zafman <dzafman@redhat.com>
(cherry picked from commit a9e43ed85236c8412679da58d068253e80d21d05)
Conflicts:
    qa/suites/rados/monthrash/ceph.yaml (no changes needed)

Additional changes for luminous:
    qa/suites/rados/basic/tasks/rados_api_tests.yaml
    qa/suites/rados/singleton/all/thrash-eio.yaml
    qa/suites/smoke/basic/tasks/rados_api_tests.yaml
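
The parentheses are escaped below because the log-whitelist entries are
applied as regular expressions when the test harness scans the cluster
logs: an unescaped "(" starts a group rather than matching the literal
"(" around health codes such as "(MDS_TRIM)", and prefix entries such as
"(OSD_" are not valid patterns at all. A minimal Python sketch of the
difference (the log line and the use of the re module are illustrative
assumptions, not the harness's actual matching code):

    import re

    # Illustrative health summary line, shaped like the "overall HEALTH_"
    # output the whitelist entries are meant to match.
    log_line = "overall HEALTH_WARN 1 MDSs behind on trimming (MDS_TRIM)"

    # Escaped entry: matches the literal parenthesized health code.
    assert re.search(r"\(MDS_TRIM\)", log_line)

    # An unescaped prefix entry such as "(OSD_" leaves an unterminated
    # group and does not even compile.
    try:
        re.compile("(OSD_")
    except re.error as err:
        print("unescaped prefix entry is not a valid regex:", err)

    # Escaped, the prefix entry matches any health code starting with
    # OSD_ but not, for example, OSDMAP_FLAGS.
    assert re.search(r"\(OSD_", "overall HEALTH_WARN 1 osds down (OSD_DOWN)")
    assert not re.search(r"\(OSD_", "overall HEALTH_WARN noout flag(s) set (OSDMAP_FLAGS)")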
- Metadata damage detected
- bad backtrace on inode
- overall HEALTH_
- - (MDS_TRIM)
+ - \(MDS_TRIM\)
conf:
mds:
mds log max segments: 1
- 'deep-scrub 1 missing, 0 inconsistent objects'
- 'failed to pick suitable auth object'
- overall HEALTH_
- - (OSDMAP_FLAGS)
- - (OSD_
- - (PG_
- - (OSD_SCRUB_ERRORS)
- - (TOO_FEW_PGS)
+ - \(OSDMAP_FLAGS\)
+ - \(OSD_
+ - \(PG_
+ - \(OSD_SCRUB_ERRORS\)
+ - \(TOO_FEW_PGS\)
conf:
osd:
osd deep scrub update digest min age: 0
- attr name mismatch
- Regular scrub request, deep-scrub details will be lost
- overall HEALTH_
- - (OSDMAP_FLAGS)
- - (OSD_
- - (PG_
+ - \(OSDMAP_FLAGS\)
+ - \(OSD_
+ - \(PG_
conf:
osd:
filestore debug inject read err: true
- .*clock.*skew.*
- clocks not synchronized
- overall HEALTH_
- - (MON_CLOCK_SKEW)
+ - \(MON_CLOCK_SKEW\)
- mon_clock_skew_check:
expect-skew: false
- failsafe engaged, dropping updates
- failsafe disengaged, no longer dropping updates
- overall HEALTH_
- - (OSDMAP_FLAGS)
- - (OSD_
- - (PG_
- - (SMALLER_PG_NUM)
+ - \(OSDMAP_FLAGS\)
+ - \(OSD_
+ - \(PG_
+ - \(SMALLER_PG_NUM\)
- workunit:
clients:
all:
- MDS in read-only mode
- force file system read-only
- overall HEALTH_
- - (OSDMAP_FLAGS)
- - (OSD_FULL)
- - (MDS_READ_ONLY)
- - (POOL_FULL)
+ - \(OSDMAP_FLAGS\)
+ - \(OSD_FULL\)
+ - \(MDS_READ_ONLY\)
+ - \(POOL_FULL\)
tasks:
- install:
- ceph:
- but it is still running
- slow request
- overall HEALTH_
- - (OSDMAP_FLAGS)
- - (OSD_
- - (PG_
+ - \(OSDMAP_FLAGS\)
+ - \(OSD_
+ - \(PG_
- exec:
client.0:
- sudo ceph osd pool create foo 128 128
- missing primary copy of
- objects unfound and apparently lost
- overall HEALTH_
- - (POOL_APP_NOT_ENABLED)
- - (PG_DEGRADED)
+ - \(POOL_APP_NOT_ENABLED\)
+ - \(PG_DEGRADED\)
- full_sequential:
- exec:
client.0:
- missing primary copy of
- objects unfound and apparently lost
- overall HEALTH_
- - (OSDMAP_FLAGS)
- - (REQUEST_SLOW)
- - (PG_
+ - \(OSDMAP_FLAGS\)
+ - \(REQUEST_SLOW\)
+ - \(PG_
- \(OBJECT_MISPLACED\)
- - (OSD_
+ - \(OSD_
- thrashosds:
op_delay: 30
clean_interval: 120
- but it is still running
- slow request
- overall HEALTH_
- - (CACHE_POOL_
+ - \(CACHE_POOL_
- exec:
client.0:
- sudo ceph osd pool create base 4
overrides:
ceph:
log-whitelist:
- - (POOL_APP_NOT_ENABLED)
+ - \(POOL_APP_NOT_ENABLED\)
- but it is still running
- objects unfound and apparently lost
- overall HEALTH_
- - (CACHE_POOL_NEAR_FULL)
- - (CACHE_POOL_NO_HIT_SET)
+ - \(CACHE_POOL_NEAR_FULL\)
+ - \(CACHE_POOL_NO_HIT_SET\)
tasks:
- exec:
client.0:
- reached quota
- but it is still running
- objects unfound and apparently lost
- - (POOL_APP_NOT_ENABLED)
+ - \(POOL_APP_NOT_ENABLED\)
- thrashosds:
chance_pgnum_grow: 2
chance_pgpnum_fix: 1
- scrub mismatch
- ScrubResult
- wrongly marked
- - (POOL_APP_NOT_ENABLED)
+ - \(POOL_APP_NOT_ENABLED\)
- overall HEALTH_
conf:
global:
- scrub mismatch
- ScrubResult
- wrongly marked
- - (POOL_APP_NOT_ENABLED)
+ - \(POOL_APP_NOT_ENABLED\)
- overall HEALTH_
conf:
global: