From c50c17e755863c25373e252fd252af1685875f01 Mon Sep 17 00:00:00 2001
From: Sage Weil
Date: Tue, 10 Dec 2019 14:36:06 -0600
Subject: [PATCH] doc/releases/nautilus,PendingReleaseNotes: consolidate
 telemetry note

Signed-off-by: Sage Weil
---
 PendingReleaseNotes       | 27 ++++++++++-----------------
 doc/releases/nautilus.rst | 23 +++++++----------------
 2 files changed, 17 insertions(+), 33 deletions(-)

diff --git a/PendingReleaseNotes b/PendingReleaseNotes
index f7c753b2905..0c46eedeedd 100644
--- a/PendingReleaseNotes
+++ b/PendingReleaseNotes
@@ -132,26 +132,17 @@
   cluster-wide and per-pool flags to be backward compatible with
   pre-infernalis clusters.
 
-* The telemetry module now has a 'device' channel, enabled by default, that
-  will report anonymized hard disk and SSD health metrics to telemetry.ceph.com
-  in order to build and improve device failure prediction algorithms. Because
-  the content of telemetry reports has changed, you will need to either re-opt-in
-  with::
+* The telemetry module now reports more information.
 
-      ceph telemetry on
-
-  You can view exactly what information will be reported first with::
-
-      ceph telemetry show
-      ceph telemetry show device     # specifically show the device channel
-
-  If you are not comfortable sharing device metrics, you can disable that
-  channel first before re-opting-in:
+  First, there is a new 'device' channel, enabled by default, that
+  will report anonymized hard disk and SSD health metrics to
+  telemetry.ceph.com in order to build and improve device failure
+  prediction algorithms. If you are not comfortable sharing device
+  metrics, you can disable that channel first before re-opting-in::
 
       ceph config set mgr mgr/telemetry/channel_device false
-      ceph telemetry on
 
-* The telemetry module now reports more information about CephFS file systems,
+  Second, we now report more information about CephFS file systems,
   including:
 
   - how many MDS daemons (in total and per file system)
@@ -173,7 +164,9 @@
   - whether a separate OSD cluster network is being used
   - how many RBD pools and images are in the cluster, and how many pools have RBD mirroring enabled
   - how many RGW daemons, zones, and zonegroups are present; which RGW frontends are in use
-  - aggregate stats about the CRUSH map, like which algorithms are used, how big buckets are, how many rules are defined, and what tunables are in use
+  - aggregate stats about the CRUSH map, like which algorithms are used, how
+    big buckets are, how many rules are defined, and what tunables are in
+    use
 
   If you had telemetry enabled, you will need to re-opt-in with::
 
diff --git a/doc/releases/nautilus.rst b/doc/releases/nautilus.rst
index f8cacae9d10..2c7294fc8de 100644
--- a/doc/releases/nautilus.rst
+++ b/doc/releases/nautilus.rst
@@ -66,26 +66,17 @@ New health warnings:
 
 Changes in the telemetry module:
 
-* The telemetry module now has a 'device' channel, enabled by default, that
-  will report anonymized hard disk and SSD health metrics to telemetry.ceph.com
-  in order to build and improve device failure prediction algorithms. Because
-  the content of telemetry reports has changed, you will need to re-opt-in
-  with::
+* The telemetry module now reports more information.
 
-      ceph telemetry on
-
-  You can view exactly what information will be reported first with::
-
-      ceph telemetry show
-      ceph telemetry show device     # specifically show the device channel
-
-  If you are not comfortable sharing device metrics, you can disable that
-  channel first before re-opting-in:
+  First, there is a new 'device' channel, enabled by default, that
+  will report anonymized hard disk and SSD health metrics to
+  telemetry.ceph.com in order to build and improve device failure
+  prediction algorithms. If you are not comfortable sharing device
+  metrics, you can disable that channel first before re-opting-in::
 
       ceph config set mgr mgr/telemetry/channel_device false
-      ceph telemetry on
 
-* The telemetry module now reports more information about CephFS file systems,
+  Second, we now report more information about CephFS file systems,
   including:
 
   - how many MDS daemons (in total and per file system)
-- 
2.39.5
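For reference, a minimal sketch of the opt-in workflow these release notes
describe, assuming a cluster with the mgr telemetry module available; all
commands are taken from the notes above, only the ordering here is
illustrative::

    # Optionally inspect exactly what would be reported before opting in
    ceph telemetry show
    ceph telemetry show device     # specifically show the device channel

    # If you are not comfortable sharing device metrics, disable that
    # channel first
    ceph config set mgr mgr/telemetry/channel_device false

    # Re-opt-in (required because the content of telemetry reports changed)
    ceph telemetry on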