From: Travis Nielsen
Date: Fri, 4 Mar 2022 17:42:02 +0000 (-0700)
Subject: prometheus: spell check the alert descriptions
X-Git-Tag: v16.2.8~6^2~6
X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=f08cc45b31fafa5124cbf29694f9dc0d0727d91d;p=ceph.git

prometheus: spell check the alert descriptions

Signed-off-by: Travis Nielsen
(cherry picked from commit 9cca95b16abd4af3eb3a5630acb3fb7e0cc73a4e)
---

diff --git a/monitoring/ceph-mixin/prometheus_alerts.yml b/monitoring/ceph-mixin/prometheus_alerts.yml
index 578596f4af0bc..f56b5877885d5 100644
--- a/monitoring/ceph-mixin/prometheus_alerts.yml
+++ b/monitoring/ceph-mixin/prometheus_alerts.yml
@@ -397,7 +397,7 @@ groups:
           documentation: https://docs.ceph.com/en/latest/cephfs/health-messages/#fs-degraded
           summary: Ceph filesystem is degraded
           description: >
-            One or more metdata daemons (MDS ranks) are failed or in a
+            One or more metadata daemons (MDS ranks) are failed or in a
             damaged state. At best the filesystem is partially available,
             worst case is the filesystem is completely unusable.
     - alert: CephFilesystemMDSRanksLow
@@ -533,7 +533,7 @@ groups:
             During data consistency checks (scrub), at least one PG has been flagged as being
             damaged or inconsistent.
 
-            Check to see which PG is affected, and attempt a manual repair if neccessary. To list
+            Check to see which PG is affected, and attempt a manual repair if necessary. To list
             problematic placement groups, use 'rados list-inconsistent-pg '. To repair PGs use
             the 'ceph pg repair ' command.
     - alert: CephPGRecoveryAtRisk
@@ -561,7 +561,7 @@ groups:
           documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#pg-availability
           summary: Placement group is unavailable, blocking some I/O
           description: >
-            Data availability is reduced impacting the clusters abilty to service I/O to some data. One or
+            Data availability is reduced impacting the clusters ability to service I/O to some data. One or
             more placement groups (PGs) are in a state that blocks IO.
     - alert: CephPGBackfillAtRisk
       expr: ceph_health_detail{name="PG_BACKFILL_FULL"} == 1
diff --git a/monitoring/ceph-mixin/tests_alerts/test_alerts.yml b/monitoring/ceph-mixin/tests_alerts/test_alerts.yml
index 6ff221e70006f..680082d898160 100644
--- a/monitoring/ceph-mixin/tests_alerts/test_alerts.yml
+++ b/monitoring/ceph-mixin/tests_alerts/test_alerts.yml
@@ -911,7 +911,7 @@ tests:
           documentation: https://docs.ceph.com/en/latest/cephfs/health-messages/#fs-degraded
           summary: Ceph filesystem is degraded
           description: >
-            One or more metdata daemons (MDS ranks) are failed or in a
+            One or more metadata daemons (MDS ranks) are failed or in a
             damaged state. At best the filesystem is partially available,
             worst case is the filesystem is completely unusable.
   - interval: 1m
@@ -1773,7 +1773,7 @@ tests:
             During data consistency checks (scrub), at least one PG has been flagged as being
             damaged or inconsistent.
 
-            Check to see which PG is affected, and attempt a manual repair if neccessary. To list
+            Check to see which PG is affected, and attempt a manual repair if necessary. To list
             problematic placement groups, use 'rados list-inconsistent-pg '. To repair PGs use
             the 'ceph pg repair ' command.
   - interval: 1m
@@ -1893,7 +1893,7 @@ tests:
           documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#pg-availability
           summary: Placement group is unavailable, blocking some I/O
           description: >
-            Data availability is reduced impacting the clusters abilty to service I/O to some data. One or
+            Data availability is reduced impacting the clusters ability to service I/O to some data. One or
             more placement groups (PGs) are in a state that blocks IO.
   - interval: 1m
     input_series: