From: Zac Dover
Date: Sun, 2 Oct 2022 04:55:46 +0000 (+1000)
Subject: doc/rados: fix prompts in erasure-code.rst
X-Git-Tag: v16.2.11~287^2
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=481a7ce011f4a8076ebbec21b8d2da4450b9b45c;p=ceph.git

doc/rados: fix prompts in erasure-code.rst

This commit adds unselectable prompts to doc/rados/
erasure-code.rst.

Signed-off-by: Zac Dover
---

diff --git a/doc/rados/operations/erasure-code.rst b/doc/rados/operations/erasure-code.rst
index ddbea214ba5d..f3e7aa53e05e 100644
--- a/doc/rados/operations/erasure-code.rst
+++ b/doc/rados/operations/erasure-code.rst
@@ -16,13 +16,24 @@ Creating a sample erasure coded pool
 
 The simplest erasure coded pool is equivalent to `RAID5 `_ and
-requires at least three hosts::
+requires at least three hosts:
 
-    $ ceph osd pool create ecpool erasure
-    pool 'ecpool' created
-    $ echo ABCDEFGHI | rados --pool ecpool put NYAN -
-    $ rados --pool ecpool get NYAN -
-    ABCDEFGHI
+.. prompt:: bash $
+
+   ceph osd pool create ecpool erasure
+
+::
+
+   pool 'ecpool' created
+
+.. prompt:: bash $
+
+   echo ABCDEFGHI | rados --pool ecpool put NYAN -
+   rados --pool ecpool get NYAN -
+
+::
+
+   ABCDEFGHI
 
 Erasure code profiles
 ---------------------
@@ -30,14 +41,19 @@ Erasure code profiles
 
 The default erasure code profile sustains the loss of two OSDs. It
 is equivalent to a replicated pool of size three but requires 2TB
 instead of 3TB to store 1TB of data. The default profile can be
-displayed with::
+displayed with:
+
+.. prompt:: bash $
 
-    $ ceph osd erasure-code-profile get default
-    k=2
-    m=2
-    plugin=jerasure
-    crush-failure-domain=host
-    technique=reed_sol_van
+   ceph osd erasure-code-profile get default
+
+::
+
+   k=2
+   m=2
+   plugin=jerasure
+   crush-failure-domain=host
+   technique=reed_sol_van
 
 Choosing the right profile is important because it cannot be modified
 after the pool is created: a new pool with a different profile needs
@@ -47,15 +63,20 @@ The most important parameters of the profile are *K*, *M* and
 *crush-failure-domain* because they define the storage overhead and
 the data durability. For instance, if the desired architecture must
 sustain the loss of two racks with a storage overhead of 67%,
-the following profile can be defined::
+the following profile can be defined:
+
+.. prompt:: bash $
 
-    $ ceph osd erasure-code-profile set myprofile \
+   ceph osd erasure-code-profile set myprofile \
      k=3 \
      m=2 \
      crush-failure-domain=rack
-    $ ceph osd pool create ecpool erasure myprofile
-    $ echo ABCDEFGHI | rados --pool ecpool put NYAN -
-    $ rados --pool ecpool get NYAN -
+   ceph osd pool create ecpool erasure myprofile
+   echo ABCDEFGHI | rados --pool ecpool put NYAN -
+   rados --pool ecpool get NYAN -
+
+::
+
     ABCDEFGHI
 
 The *NYAN* object will be divided into three (*K=3*) and two additional
@@ -121,7 +142,9 @@ perform full object writes and appends.
 
 Since Luminous, partial writes for an erasure coded pool may be
 enabled with a per-pool setting. This lets RBD and CephFS store their
-data in an erasure coded pool::
+data in an erasure coded pool:
+
+.. prompt:: bash $
 
     ceph osd pool set ec_pool allow_ec_overwrites true
 
@@ -133,7 +156,9 @@ ec overwrites yields low performance compared to bluestore.
 
 Erasure coded pools do not support omap, so to use them with RBD and
 CephFS you must instruct them to store their data in an ec pool, and
 their metadata in a replicated pool.
 For RBD, this means using the
-erasure coded pool as the ``--data-pool`` during image creation::
+erasure coded pool as the ``--data-pool`` during image creation:
+
+.. prompt:: bash $
 
     rbd create --size 1G --data-pool ec_pool replicated_pool/image_name
 
@@ -149,11 +174,13 @@ lack some functionalities such as omap. To overcome these
 limitations, one can set up a `cache tier <../cache-tiering>`_
 before the erasure coded pool.
 
-For instance, if the pool *hot-storage* is made of fast storage::
+For instance, if the pool *hot-storage* is made of fast storage:
+
+.. prompt:: bash $
 
-    $ ceph osd tier add ecpool hot-storage
-    $ ceph osd tier cache-mode hot-storage writeback
-    $ ceph osd tier set-overlay ecpool hot-storage
+   ceph osd tier add ecpool hot-storage
+   ceph osd tier cache-mode hot-storage writeback
+   ceph osd tier set-overlay ecpool hot-storage
 
 will place the *hot-storage* pool as a tier of *ecpool* in *writeback*
 mode so that every write and read to the *ecpool* are actually using