From f841def599ea114a72265e153984fbee9bc29ed5 Mon Sep 17 00:00:00 2001
From: Franciszek Stachura
Date: Sat, 29 Jan 2022 10:43:02 +0100
Subject: [PATCH] doc: Fix links to CRUSH, RADOS and DSP research papers.

Signed-off-by: Franciszek Stachura
---
 doc/architecture.rst                       | 4 ++--
 doc/cephfs/dynamic-metadata-management.rst | 2 +-
 doc/rados/operations/crush-map-edits.rst   | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/doc/architecture.rst b/doc/architecture.rst
index 33558c0a877..1f62d76f90e 100644
--- a/doc/architecture.rst
+++ b/doc/architecture.rst
@@ -1619,13 +1619,13 @@ instance for high availability.
 
 
 
-.. _RADOS - A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters: https://ceph.com/wp-content/uploads/2016/08/weil-rados-pdsw07.pdf
+.. _RADOS - A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters: https://ceph.com/assets/pdfs/weil-rados-pdsw07.pdf
 .. _Paxos: https://en.wikipedia.org/wiki/Paxos_(computer_science)
 .. _Monitor Config Reference: ../rados/configuration/mon-config-ref
 .. _Monitoring OSDs and PGs: ../rados/operations/monitoring-osd-pg
 .. _Heartbeats: ../rados/configuration/mon-osd-interaction
 .. _Monitoring OSDs: ../rados/operations/monitoring-osd-pg/#monitoring-osds
-.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf
+.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.com/assets/pdfs/weil-crush-sc06.pdf
 .. _Data Scrubbing: ../rados/configuration/osd-config-ref#scrubbing
 .. _Report Peering Failure: ../rados/configuration/mon-osd-interaction#osds-report-peering-failure
 .. _Troubleshooting Peering Failure: ../rados/troubleshooting/troubleshooting-pg#placement-group-down-peering-failure
diff --git a/doc/cephfs/dynamic-metadata-management.rst b/doc/cephfs/dynamic-metadata-management.rst
index 4408be7bfbf..6e7ada9fca0 100644
--- a/doc/cephfs/dynamic-metadata-management.rst
+++ b/doc/cephfs/dynamic-metadata-management.rst
@@ -9,7 +9,7 @@ interdependent nature of the file system metadata. So in CephFS,
 the metadata workload is decoupled from data workload so as to avoid placing
 unnecessary strain on the RADOS cluster. The metadata is hence handled by a
 cluster of Metadata Servers (MDSs).
-CephFS distributes metadata across MDSs via `Dynamic Subtree Partitioning <https://ceph.com/wp-content/uploads/2016/08/weil-mds-sc04.pdf>`__.
+CephFS distributes metadata across MDSs via `Dynamic Subtree Partitioning <https://ceph.com/assets/pdfs/weil-mds-sc04.pdf>`__.
 
 Dynamic Subtree Partitioning
 ----------------------------
diff --git a/doc/rados/operations/crush-map-edits.rst b/doc/rados/operations/crush-map-edits.rst
index eaed1c70902..3e3d3dfcb59 100644
--- a/doc/rados/operations/crush-map-edits.rst
+++ b/doc/rados/operations/crush-map-edits.rst
@@ -711,4 +711,4 @@ Further, as noted above, be careful running old versions of the
 ``ceph-osd`` daemon after reverting to legacy values as the feature bit is
 not perfectly enforced.
 
-.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf
+.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.com/assets/pdfs/weil-crush-sc06.pdf
-- 
2.39.5