+----------------+----------------+------------------------------------+
Calxeda Example
---------------

A recent (2013) Ceph cluster project uses ARM hardware to obtain low
power consumption and high storage density.

+----------------+----------------+----------------------------------------+
| Configuration  | Criteria       | Minimum Recommended                    |
+================+================+========================================+
| SuperMicro     | Processor Card | 3x Calxeda EnergyCard building blocks  |
| SC 847 Chassis +----------------+----------------------------------------+
| 4U             | CPU            | 4x ECX-1000 ARM 1.4 GHz SoC per card   |
|                +----------------+----------------------------------------+
|                | RAM            | 4 GB per System-on-a-chip (SoC)        |
|                +----------------+----------------------------------------+
|                | Volume Storage | 36x 3TB Seagate Barracuda SATA         |
|                +----------------+----------------------------------------+
|                | Client Network | 1x 10Gb Ethernet NIC                   |
|                +----------------+----------------------------------------+
|                | OSD Network    | 1x 10Gb Ethernet NIC                   |
|                +----------------+----------------------------------------+
|                | Mgmt. Network  | 1x 1Gb Ethernet NIC                    |
+----------------+----------------+----------------------------------------+

This chassis configuration supports 36 Ceph OSD Daemons per chassis,
one for each 3TB drive. Each System-on-a-chip (SoC) processor runs
three Ceph OSD Daemons, so the 12 processors (four SoCs per card across
three cards) run all 36 daemons with capacity remaining for
rebalancing, backfilling and recovery. This configuration provides
108TB of raw storage per 4U chassis, with slightly less usable capacity
once full ratio settings are applied. Dedicating a chassis exclusively
to Ceph OSD Daemons makes it easy to expand the cluster's storage
capacity significantly.
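
The arithmetic behind these figures is easy to check. The following is
a minimal sketch, assuming the default ``mon osd full ratio`` of 0.95;
the variable names are illustrative and not part of Ceph.

.. code-block:: python

   # Back-of-the-envelope capacity math for one 4U chassis.
   CARDS_PER_CHASSIS = 3   # Calxeda EnergyCard building blocks
   SOCS_PER_CARD = 4       # ECX-1000 SoCs per card
   OSDS_PER_SOC = 3        # one Ceph OSD Daemon per drive
   DRIVE_TB = 3            # 3TB Seagate Barracuda SATA drives
   FULL_RATIO = 0.95       # assumed default "mon osd full ratio"

   socs = CARDS_PER_CHASSIS * SOCS_PER_CARD  # 12 processors
   osds = socs * OSDS_PER_SOC                # 36 OSD daemons
   raw_tb = osds * DRIVE_TB                  # 108TB raw
   usable_tb = raw_tb * FULL_RATIO           # ~102.6TB before "full"

   print(f"{osds} OSDs, {raw_tb}TB raw, ~{usable_tb:.1f}TB usable")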

**Note:** The project uses Ceph for cold storage, so there are no SSDs
for journals.
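
With no SSDs, each OSD journal simply stays at its default location on
the OSD's own data drive. A minimal ``ceph.conf`` sketch of that layout
follows; the journal size shown is illustrative, not taken from the
project.

.. code-block:: ini

   [osd]
   # No separate journal device: the default path keeps each journal
   # on the same drive as its OSD's data.
   osd journal = /var/lib/ceph/osd/$cluster-$id/journal
   # Journal size in MB (10GB here); pick a value suited to the drives.
   osd journal size = 10240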