git.apps.os.sepia.ceph.com Git - ceph.git/commitdiff
doc: Removed Calxeda example.
author John Wilkins <john.wilkins@inktank.com>
Fri, 17 Jan 2014 01:00:38 +0000 (17:00 -0800)
committer John Wilkins <john.wilkins@inktank.com>
Fri, 17 Jan 2014 01:00:38 +0000 (17:00 -0800)
Signed-off-by: John Wilkins <john.wilkins@inktank.com>
doc/start/hardware-recommendations.rst

index c589301a435c2d83b0c490b55e5e7d37640295fe..58d2f437d1d638b0bb18487e83f8c636803e6dcf 100644
@@ -336,41 +336,7 @@ configurations for Ceph OSDs, and a lighter configuration for monitors.
 +----------------+----------------+------------------------------------+
 
 
-Calxeda Example
----------------
-
-A recent (2013) Ceph cluster project uses ARM hardware to achieve low
-power consumption and high storage density.
-
-+----------------+----------------+----------------------------------------+
-|  Configuration | Criteria       | Minimum Recommended                    |
-+================+================+========================================+
-| SuperMicro     | Processor Card |  3x Calxeda EnergyCard building blocks |
-| SC 847 Chassis +----------------+----------------------------------------+
-| 4U             | CPU            |  4x ECX-1000 ARM 1.4 GHz SoC per card  |
-|                +----------------+----------------------------------------+
-|                | RAM            |  4 GB per System-on-a-chip (SoC)       |
-|                +----------------+----------------------------------------+
-|                | Volume Storage |  36x 3TB Seagate Barracuda SATA        |
-|                +----------------+----------------------------------------+
-|                | Client Network |  1x 10Gb Ethernet NIC                  |
-|                +----------------+----------------------------------------+
-|                | OSD Network    |  1x 10Gb Ethernet NIC                  |
-|                +----------------+----------------------------------------+
-|                | Mgmt. Network  |  1x 1Gb Ethernet NIC                   |
-+----------------+----------------+----------------------------------------+
-
-The chassis configuration enables the deployment of 36 Ceph OSD Daemons per
-chassis, one for each 3TB drive. Each System-on-a-chip (SoC) processor runs 3
-Ceph OSD Daemons. Four SoC processors per card allow the 12 processors to run
-36 Ceph OSD Daemons with capacity remaining for rebalancing, backfilling and
-recovery. This configuration provides 108TB of storage (slightly less after full
-ratio settings) per 4U chassis. Using a chassis exclusively for Ceph OSD Daemons
-makes it possible to expand the cluster's storage capacity significantly with
-relative ease.
-
-**Note:** The project uses Ceph for cold storage, so there are no SSDs
-for journals.
+
 
 
 .. _Ceph Write Throughput 1: http://ceph.com/community/ceph-performance-part-1-disk-controller-write-throughput/
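
For reference, the capacity arithmetic in the removed example can be restated as a minimal Python sketch (the sketch is not part of the commit itself). The card, SoC, OSD, and drive counts come from the removed table; the 0.95 full ratio is an assumption based on Ceph's default mon_osd_full_ratio.

    # Capacity arithmetic behind the removed Calxeda example.
    CARDS_PER_CHASSIS = 3     # Calxeda EnergyCard building blocks
    SOCS_PER_CARD = 4         # ECX-1000 SoCs per EnergyCard
    OSDS_PER_SOC = 3          # Ceph OSD Daemons run by each SoC
    DRIVES_PER_CHASSIS = 36   # 3TB Seagate Barracuda SATA drives
    TB_PER_DRIVE = 3
    FULL_RATIO = 0.95         # assumed; Ceph's default mon_osd_full_ratio

    socs = CARDS_PER_CHASSIS * SOCS_PER_CARD    # 12 processors per chassis
    osds = socs * OSDS_PER_SOC                  # 36 OSDs, one per drive
    raw_tb = DRIVES_PER_CHASSIS * TB_PER_DRIVE  # 108 TB raw per 4U chassis
    usable_tb = raw_tb * FULL_RATIO             # "slightly less after full ratio"

    assert osds == DRIVES_PER_CHASSIS           # one OSD Daemon per 3TB drive
    print(f"{socs} SoCs -> {osds} OSDs, {raw_tb} TB raw, ~{usable_tb:.1f} TB usable")

Running it prints "12 SoCs -> 36 OSDs, 108 TB raw, ~102.6 TB usable", which matches the 108TB figure and the "slightly less after full ratio settings" caveat in the removed text.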