From: Kevin Dalley
Date: Fri, 2 May 2014 00:04:43 +0000 (-0700)
Subject: doc: Include links from hardware-recommendations to glossary
X-Git-Tag: v0.81~86^2
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=c879e895da494b14bd03d45131704dccda518d76;p=ceph.git

doc: Include links from hardware-recommendations to glossary

Included :term: in parts of hardware-recommendations so that glossary
links appear.

Signed-off-by: Kevin Dalley
---

diff --git a/doc/start/hardware-recommendations.rst b/doc/start/hardware-recommendations.rst
index 58d2f437d1d6..ffbc37a58900 100644
--- a/doc/start/hardware-recommendations.rst
+++ b/doc/start/hardware-recommendations.rst
@@ -24,8 +24,8 @@ CPU
 
 Ceph metadata servers dynamically redistribute their load, which is CPU
 intensive. So your metadata servers should have significant processing power
-(e.g., quad core or better CPUs). Ceph OSDs run the RADOS service, calculate
-data placement with CRUSH, replicate data, and maintain their own copy of the
+(e.g., quad core or better CPUs). Ceph OSDs run the :term:`RADOS` service, calculate
+data placement with :term:`CRUSH`, replicate data, and maintain their own copy of the
 cluster map. Therefore, OSDs should have a reasonable amount of processing power
 (e.g., dual core processors). Monitors simply maintain a master copy of the
 cluster map, so they are not CPU intensive. You must also consider whether the
@@ -344,4 +344,4 @@ configurations for Ceph OSDs, and a lighter configuration for monitors.
 .. _Argonaut v. Bobtail Performance Preview: http://ceph.com/uncategorized/argonaut-vs-bobtail-performance-preview/
 .. _Bobtail Performance - I/O Scheduler Comparison: http://ceph.com/community/ceph-bobtail-performance-io-scheduler-comparison/
 .. _Mapping Pools to Different Types of OSDs: http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
-.. _OS Recommendations: ../os-recommendations
\ No newline at end of file
+.. _OS Recommendations: ../os-recommendations
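
For readers unfamiliar with the :term: role used in this patch: it is
standard Sphinx markup that turns an inline word into a hyperlink to a
matching entry in a ``.. glossary::`` directive elsewhere in the doc
tree. A minimal sketch of how the two sides fit together (the entry
wording below is illustrative, not copied from Ceph's glossary file):

    .. glossary::

       CRUSH
          Controlled Replication Under Scalable Hashing, the algorithm
          Ceph uses to compute where data is stored and retrieved.

    Ceph OSDs calculate data placement with :term:`CRUSH`.

When Sphinx builds the documentation, :term:`CRUSH` renders as a link
to the glossary entry of the same name; referencing a term with no
matching glossary entry produces a build warning.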