doc: Include links from hardware-recommendations to glossary (ref: 1754/head)
author Kevin Dalley <kevin@kelphead.org>
Fri, 2 May 2014 00:04:43 +0000 (17:04 -0700)
committer Kevin Dalley <kevin@kelphead.org>
Fri, 2 May 2014 00:04:43 +0000 (17:04 -0700)
Included the :term: role in parts of hardware-recommendations so that
glossary links appear.
Signed-off-by: Kevin Dalley <kevin@kelphead.org>
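
This change relies on Sphinx's glossary cross-referencing: a word marked with
the :term: role becomes a hyperlink to the matching entry declared under a
.. glossary:: directive on the project's glossary page. A minimal sketch of
how the two pieces fit together (the definition body below is illustrative,
not the actual glossary wording):

    .. glossary::

       RADOS
          Reliable Autonomic Distributed Object Store, the object store
          layer that Ceph OSDs serve.

    Ceph OSDs run the :term:`RADOS` service.

With a matching entry present, the built HTML links the term to its
definition; if no glossary entry matches, Sphinx normally emits a
"term not in glossary" warning during the build.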
doc/start/hardware-recommendations.rst

index 58d2f437d1d638b0bb18487e83f8c636803e6dcf..ffbc37a58900f760ff763218417eee7321232462 100644
@@ -24,8 +24,8 @@ CPU
 
 Ceph metadata servers dynamically redistribute their load, which is CPU
 intensive. So your metadata servers should have significant processing power
-(e.g., quad core or better CPUs). Ceph OSDs run the RADOS service, calculate
-data placement with CRUSH, replicate data, and maintain their own copy of the
+(e.g., quad core or better CPUs). Ceph OSDs run the :term:`RADOS` service, calculate
+data placement with :term:`CRUSH`, replicate data, and maintain their own copy of the
 cluster map. Therefore, OSDs should have a reasonable amount of processing power
 (e.g., dual core processors). Monitors simply maintain a master copy of the
 cluster map, so they are not CPU intensive. You must also consider whether the
@@ -344,4 +344,4 @@ configurations for Ceph OSDs, and a lighter configuration for monitors.
 .. _Argonaut v. Bobtail Performance Preview: http://ceph.com/uncategorized/argonaut-vs-bobtail-performance-preview/
 .. _Bobtail Performance - I/O Scheduler Comparison: http://ceph.com/community/ceph-bobtail-performance-io-scheduler-comparison/ 
 .. _Mapping Pools to Different Types of OSDs: http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
-.. _OS Recommendations: ../os-recommendations
\ No newline at end of file
+.. _OS Recommendations: ../os-recommendations