From 1007a19b7e46f6e828ba2991aa8c4c5e4db02d45 Mon Sep 17 00:00:00 2001
From: Jos Collin
Date: Thu, 28 Jun 2018 14:10:36 +0530
Subject: [PATCH] doc: Updated dashboard doc references

Signed-off-by: Jos Collin
---
 doc/mgr/administrator.rst | 2 +-
 doc/start/intro.rst       | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/doc/mgr/administrator.rst b/doc/mgr/administrator.rst
index a74006c26a55e..4b93c52c8393b 100644
--- a/doc/mgr/administrator.rst
+++ b/doc/mgr/administrator.rst
@@ -75,7 +75,7 @@ daemon, if the client tries to connect to a standby.
 Consult the documentation pages for individual manager modules for more
 information about what functionality each module provides.
 
-Here is an example of enabling the ``dashboard`` module:
+Here is an example of enabling the :term:`Dashboard` module:
 
 ::
 
diff --git a/doc/start/intro.rst b/doc/start/intro.rst
index 95b51dd839efc..d24839bc366be 100644
--- a/doc/start/intro.rst
+++ b/doc/start/intro.rst
@@ -28,8 +28,9 @@ required when running Ceph Filesystem clients.
   state of the Ceph cluster, including storage utilization, current
   performance metrics, and system load. The Ceph Manager daemons also
   host python-based plugins to manage and expose Ceph cluster
-  information, including a web-based `dashboard`_ and `REST API`_. At
-  least two managers are normally required for high availability.
+  information, including a web-based :doc:`/mgr/dashboard` and
+  `REST API`_. At least two managers are normally required for high
+  availability.
 
 - **Ceph OSDs**: A :term:`Ceph OSD` (object storage daemon,
   ``ceph-osd``) stores data, handles data replication, recovery,
@@ -51,7 +52,6 @@ contain the object, and further calculates which Ceph OSD Daemon
 should store the placement group. The CRUSH algorithm enables the Ceph
 Storage Cluster to scale, rebalance, and recover dynamically.
 
-.. _dashboard: ../../mgr/dashboard
 .. _REST API: ../../mgr/restful
 
 .. raw:: html
-- 
2.39.5
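
Note: the ``::`` literal block that follows the reworded sentence in
doc/mgr/administrator.rst lies outside the hunk's context, so its contents are
not shown above. A minimal sketch of the workflow that sentence refers to,
using the standard Ceph CLI (the exact text of that literal block is assumed,
not taken from this patch)::

    # Enable the dashboard module on the active ceph-mgr daemon.
    ceph mgr module enable dashboard

    # Confirm the module is listed as enabled and see the address it publishes.
    ceph mgr module ls
    ceph mgr services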