doc: Fix links to CRUSH, RADOS and DSP research papers. 44833/head
author    Franciszek Stachura <fbstachura@gmail.com>
          Sat, 29 Jan 2022 09:43:02 +0000 (10:43 +0100)
committer Franciszek Stachura <fbstachura@gmail.com>
          Sun, 30 Jan 2022 16:23:33 +0000 (17:23 +0100)
Signed-off-by: Franciszek Stachura <fbstachura@gmail.com>
doc/architecture.rst
doc/cephfs/dynamic-metadata-management.rst
doc/rados/operations/crush-map-edits.rst

diff --git a/doc/architecture.rst b/doc/architecture.rst
index 33558c0a877a5b6fac7779d06b71ce4fbfa97c6f..1f62d76f90e3cda28077da0c9be7c7b2df646f79 100644
--- a/doc/architecture.rst
+++ b/doc/architecture.rst
@@ -1619,13 +1619,13 @@ instance for high availability.
 
 
 
-.. _RADOS - A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters: https://ceph.com/wp-content/uploads/2016/08/weil-rados-pdsw07.pdf
+.. _RADOS - A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters: https://ceph.com/assets/pdfs/weil-rados-pdsw07.pdf
 .. _Paxos: https://en.wikipedia.org/wiki/Paxos_(computer_science)
 .. _Monitor Config Reference: ../rados/configuration/mon-config-ref
 .. _Monitoring OSDs and PGs: ../rados/operations/monitoring-osd-pg
 .. _Heartbeats: ../rados/configuration/mon-osd-interaction
 .. _Monitoring OSDs: ../rados/operations/monitoring-osd-pg/#monitoring-osds
-.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf
+.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.com/assets/pdfs/weil-crush-sc06.pdf
 .. _Data Scrubbing: ../rados/configuration/osd-config-ref#scrubbing
 .. _Report Peering Failure: ../rados/configuration/mon-osd-interaction#osds-report-peering-failure
 .. _Troubleshooting Peering Failure: ../rados/troubleshooting/troubleshooting-pg#placement-group-down-peering-failure
diff --git a/doc/cephfs/dynamic-metadata-management.rst b/doc/cephfs/dynamic-metadata-management.rst
index 4408be7bfbf4784fc8f5ab720c3a81f86f58572a..6e7ada9fca02e754c6687e96fdff63d7c48a797b 100644
--- a/doc/cephfs/dynamic-metadata-management.rst
+++ b/doc/cephfs/dynamic-metadata-management.rst
@@ -9,7 +9,7 @@ interdependent nature of the file system metadata. So in CephFS,
 the metadata workload is decoupled from data workload so as to
 avoid placing unnecessary strain on the RADOS cluster. The metadata
 is hence handled by a cluster of Metadata Servers (MDSs). 
-CephFS distributes metadata across MDSs via `Dynamic Subtree Partitioning <https://ceph.com/wp-content/uploads/2016/08/weil-mds-sc04.pdf>`__.
+CephFS distributes metadata across MDSs via `Dynamic Subtree Partitioning <https://ceph.com/assets/pdfs/weil-mds-sc04.pdf>`__.
 
 Dynamic Subtree Partitioning
 ----------------------------
diff --git a/doc/rados/operations/crush-map-edits.rst b/doc/rados/operations/crush-map-edits.rst
index eaed1c709026e8a7918d9bb14803e053b0cd1c9b..3e3d3dfcb59b522bae3d08d565208547dc694793 100644
--- a/doc/rados/operations/crush-map-edits.rst
+++ b/doc/rados/operations/crush-map-edits.rst
@@ -711,4 +711,4 @@ Further, as noted above, be careful running old versions of the
 ``ceph-osd`` daemon after reverting to legacy values as the feature
 bit is not perfectly enforced.
 
-.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf
+.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.com/assets/pdfs/weil-crush-sc06.pdf
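Not part of the commit itself: a minimal sketch, assuming Python 3 with only the standard library, that spot-checks whether the three replacement PDF URLs taken verbatim from the hunks above still resolve::

    import urllib.request

    # URLs copied from the updated link targets in this diff.
    urls = [
        "https://ceph.com/assets/pdfs/weil-rados-pdsw07.pdf",
        "https://ceph.com/assets/pdfs/weil-crush-sc06.pdf",
        "https://ceph.com/assets/pdfs/weil-mds-sc04.pdf",
    ]

    for url in urls:
        # A HEAD request confirms the link resolves without downloading the PDF.
        req = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                print(resp.status, url)
        except Exception as exc:  # e.g. HTTPError, URLError
            print("FAIL", url, exc)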