The simplest erasure coded pool is equivalent to `RAID5
<https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_5>`_ and
-requires at least three hosts::
+requires at least three hosts:
- $ ceph osd pool create ecpool erasure
- pool 'ecpool' created
- $ echo ABCDEFGHI | rados --pool ecpool put NYAN -
- $ rados --pool ecpool get NYAN -
- ABCDEFGHI
+.. prompt:: bash $
+
+ ceph osd pool create ecpool erasure
+
+::
+
+ pool 'ecpool' created
+
+.. prompt:: bash $
+
+ echo ABCDEFGHI | rados --pool ecpool put NYAN -
+ rados --pool ecpool get NYAN -
+
+::
+
+ ABCDEFGHI
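
To confirm which erasure code profile a newly created pool is using, the
profile name can be queried from the pool itself; a minimal sketch (the
output assumes the pool was created as above, without naming a profile):

.. prompt:: bash $

   ceph osd pool get ecpool erasure_code_profile

::

   erasure_code_profile: default
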
Erasure code profiles
---------------------
The default erasure code profile sustains the loss of two OSDs. It
is equivalent to a replicated pool of size three but requires 2TB
instead of 3TB to store 1TB of data. The default profile can be
-displayed with::
+displayed with:
+
+.. prompt:: bash $
- $ ceph osd erasure-code-profile get default
- k=2
- m=2
- plugin=jerasure
- crush-failure-domain=host
- technique=reed_sol_van
+ ceph osd erasure-code-profile get default
+
+::
+
+ k=2
+ m=2
+ plugin=jerasure
+ crush-failure-domain=host
+ technique=reed_sol_van
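
The raw space consumed by an erasure coded pool is roughly *(K+M)/K* times
the data stored, which is where the figures above come from: with the
default *K=2* and *M=2*, 1TB of data becomes two 0.5TB data chunks plus two
0.5TB coding chunks, i.e. 2TB of raw space, versus 3TB for a replicated
pool of size three. The profiles defined on a cluster can be listed as
follows (a sketch; a fresh cluster will typically only show ``default``):

.. prompt:: bash $

   ceph osd erasure-code-profile ls

::

   default
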
Choosing the right profile is important because it cannot be modified
after the pool is created: a new pool with a different profile needs
to be created and all objects from the previous pool moved to the new.

The most important parameters of the profile are *K*, *M* and
*crush-failure-domain* because they define the storage overhead and
the data durability. For instance, if the desired architecture must
sustain the loss of two racks with a storage overhead of 67%,
-the following profile can be defined::
+the following profile can be defined:
+
+.. prompt:: bash $
- $ ceph osd erasure-code-profile set myprofile \
+ ceph osd erasure-code-profile set myprofile \
k=3 \
m=2 \
crush-failure-domain=rack
- $ ceph osd pool create ecpool erasure myprofile
- $ echo ABCDEFGHI | rados --pool ecpool put NYAN -
- $ rados --pool ecpool get NYAN -
+ ceph osd pool create ecpool erasure myprofile
+ echo ABCDEFGHI | rados --pool ecpool put NYAN -
+ rados --pool ecpool get NYAN -
+
+::
+
ABCDEFGHI
The *NYAN* object will be divided into three data chunks (*K=3*) and two
additional coding chunks will be created (*M=2*).
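
Each chunk is stored on a different OSD, chosen according to the
*crush-failure-domain* of the profile. A minimal sketch for inspecting where
the chunks of a given object ended up is ``ceph osd map``, which prints the
placement group and the acting set of OSDs; with this profile the acting set
should contain *K+M* = 5 OSDs, each in a different rack:

.. prompt:: bash $

   ceph osd map ecpool NYAN
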
Since Luminous, partial writes for an erasure coded pool may be
enabled with a per-pool setting. This lets RBD and CephFS store their
-data in an erasure coded pool::
+data in an erasure coded pool:
+
+.. prompt:: bash $
ceph osd pool set ec_pool allow_ec_overwrites true
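
The setting can be checked afterwards; a minimal sketch, querying the same
per-pool option that was just changed, which should confirm the flag is now
enabled:

.. prompt:: bash $

   ceph osd pool get ec_pool allow_ec_overwrites
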
Erasure coded pools do not support omap, so to use them with RBD and
CephFS you must instruct them to store their data in an ec pool, and
their metadata in a replicated pool. For RBD, this means using the
-erasure coded pool as the ``--data-pool`` during image creation::
+erasure coded pool as the ``--data-pool`` during image creation:
+
+.. prompt:: bash $
rbd create --size 1G --data-pool ec_pool replicated_pool/image_name
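
For CephFS, the data can likewise be directed to the erasure coded pool by
adding it as an additional data pool of the file system and setting a file
layout on a directory; a sketch, assuming a file system named ``cephfs``
mounted at ``/mnt/cephfs`` (both names are placeholders):

.. prompt:: bash $

   ceph fs add_data_pool cephfs ec_pool
   setfattr -n ceph.dir.layout.pool -v ec_pool /mnt/cephfs/ecdir
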
To overcome these limitations, one can set up a `cache tier <../cache-tiering>`_
before the erasure coded pool.
-For instance, if the pool *hot-storage* is made of fast storage::
+For instance, if the pool *hot-storage* is made of fast storage:
+
+.. prompt:: bash $
- $ ceph osd tier add ecpool hot-storage
- $ ceph osd tier cache-mode hot-storage writeback
- $ ceph osd tier set-overlay ecpool hot-storage
+ ceph osd tier add ecpool hot-storage
+ ceph osd tier cache-mode hot-storage writeback
+ ceph osd tier set-overlay ecpool hot-storage
will place the *hot-storage* pool as a tier of *ecpool* in *writeback*
mode, so that every write and read to the *ecpool* actually uses the
*hot-storage* pool and benefits from its flexibility and speed.
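
If the cache tier is no longer needed, it can be detached again; one
possible teardown sequence, sketched here under the assumption that the
cache can be fully flushed (see the cache tiering documentation for
details):

.. prompt:: bash $

   ceph osd tier cache-mode hot-storage proxy
   rados -p hot-storage cache-flush-evict-all
   ceph osd tier remove-overlay ecpool
   ceph osd tier remove ecpool hot-storage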