From: Franciszek Stachura
Date: Sat, 29 Jan 2022 09:43:02 +0000 (+0100)
Subject: doc: Fix links to CRUSH, RADOS and DSP research papers.
X-Git-Tag: v18.0.0~1478^2
X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=f841def599ea114a72265e153984fbee9bc29ed5;p=ceph.git

doc: Fix links to CRUSH, RADOS and DSP research papers.

Signed-off-by: Franciszek Stachura
---

diff --git a/doc/architecture.rst b/doc/architecture.rst
index 33558c0a877a5..1f62d76f90e3c 100644
--- a/doc/architecture.rst
+++ b/doc/architecture.rst
@@ -1619,13 +1619,13 @@ instance for high availability.
 
 
 
-.. _RADOS - A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters: https://ceph.com/wp-content/uploads/2016/08/weil-rados-pdsw07.pdf
+.. _RADOS - A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters: https://ceph.com/assets/pdfs/weil-rados-pdsw07.pdf
 .. _Paxos: https://en.wikipedia.org/wiki/Paxos_(computer_science)
 .. _Monitor Config Reference: ../rados/configuration/mon-config-ref
 .. _Monitoring OSDs and PGs: ../rados/operations/monitoring-osd-pg
 .. _Heartbeats: ../rados/configuration/mon-osd-interaction
 .. _Monitoring OSDs: ../rados/operations/monitoring-osd-pg/#monitoring-osds
-.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf
+.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.com/assets/pdfs/weil-crush-sc06.pdf
 .. _Data Scrubbing: ../rados/configuration/osd-config-ref#scrubbing
 .. _Report Peering Failure: ../rados/configuration/mon-osd-interaction#osds-report-peering-failure
 .. _Troubleshooting Peering Failure: ../rados/troubleshooting/troubleshooting-pg#placement-group-down-peering-failure
diff --git a/doc/cephfs/dynamic-metadata-management.rst b/doc/cephfs/dynamic-metadata-management.rst
index 4408be7bfbf47..6e7ada9fca02e 100644
--- a/doc/cephfs/dynamic-metadata-management.rst
+++ b/doc/cephfs/dynamic-metadata-management.rst
@@ -9,7 +9,7 @@ interdependent nature of the file system metadata. So in CephFS, the
 metadata workload is decoupled from data workload so as to avoid placing
 unnecessary strain on the RADOS cluster. The metadata is hence handled by
 a cluster of Metadata Servers (MDSs).
-CephFS distributes metadata across MDSs via `Dynamic Subtree Partitioning <https://ceph.com/wp-content/uploads/2016/08/weil-mds-sc04.pdf>`__.
+CephFS distributes metadata across MDSs via `Dynamic Subtree Partitioning <https://ceph.com/assets/pdfs/weil-mds-sc04.pdf>`__.
 
 Dynamic Subtree Partitioning
 ----------------------------
diff --git a/doc/rados/operations/crush-map-edits.rst b/doc/rados/operations/crush-map-edits.rst
index eaed1c709026e..3e3d3dfcb59b5 100644
--- a/doc/rados/operations/crush-map-edits.rst
+++ b/doc/rados/operations/crush-map-edits.rst
@@ -711,4 +711,4 @@ Further, as noted above, be careful running old versions of the
 ``ceph-osd`` daemon after reverting to legacy values as the feature bit
 is not perfectly enforced.
 
-.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf
+.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.com/assets/pdfs/weil-crush-sc06.pdf