Erasure code
==============
-By default, Ceph `pools <../pools>`_ are created with the type "replicated". In
+By default, Ceph :ref:`rados_pools` are created with the type "replicated". In
replicated-type pools, every object is copied to multiple disks. This
multiple copying is the method of data protection known as "replication".
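In contrast, erasure-coded pools divide each object into data and coding
chunks, which consumes less raw capacity than replication at the cost of some
extra computation. As an illustrative sketch (the pool name ``ecpool`` is an
example), such a pool is created by naming the ``erasure`` type explicitly::

   ceph osd pool create ecpool erasure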
-More information can be found in the `erasure-code profiles
-<../erasure-code-profile>`_ documentation.
+More information can be found in the :ref:`erasure-code-profiles`
+documentation.
Erasure Coding with Overwrites
rbd create --size 1G --data-pool ec_pool replicated_pool/image_name
For CephFS, an erasure-coded pool can be set as the default data pool during
-file system creation or via `file layouts <../../../cephfs/file-layouts>`_.
+file system creation or via :ref:`file-layouts`.
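For instance, an existing erasure-coded pool might be attached to a file
system and then selected for a particular directory through a file layout; a
rough sketch, assuming a file system named ``cephfs``, a pool named
``ec_pool`` with overwrites already enabled, and a mount at ``/mnt/cephfs``::

   # make the erasure-coded pool available to the file system
   ceph fs add_data_pool cephfs ec_pool
   # direct new files under this directory to the erasure-coded pool
   setfattr -n ceph.dir.layout.pool -v ec_pool /mnt/cephfs/archive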
Erasure-coded pool overhead
---------------------------
For example: a typical configuration stores three replicas
(copies) of each RADOS object (that is: ``size = 3``), but you can configure
- the number of replicas on a per-pool basis. For `erasure-coded pools
- <../erasure-code>`_, resilience is defined as the number of coding (aka parity) chunks
+ the number of replicas on a per-pool basis. For :ref:`erasure-coded pools
+ <ecpool>`, resilience is defined as the number of coding (aka parity) chunks
(for example, ``m = 2`` in the default erasure code profile).
- **Placement Groups**: The :ref:`autoscaler <pg-autoscaler>` sets the number
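As a worked comparison with illustrative values (three-way replication versus
an erasure code profile with ``k = 2`` data chunks and ``m = 2`` coding
chunks), the raw-to-usable capacity ratios are::

   replicated, size = 3  :  raw / usable = 3.0
   erasure, k = 2, m = 2 :  raw / usable = (k + m) / k = (2 + 2) / 2 = 2.0

Both layouts survive two overlapping failures, but the erasure-coded pool
uses about a third less raw capacity for the same amount of stored data.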
Creating a Pool
===============
-Before creating a pool, consult `Pool, PG and CRUSH Config Reference`_. The
+Before creating a pool, consult :ref:`rados_config_pool_pg_crush_ref`. The
Ceph central configuration database contains a default setting
(namely, ``osd_pool_default_pg_num``) that determines the number of PGs assigned
to a new pool if no specific value has been specified. It is possible to change
this value from its default. For more on the subject of setting the number of
-PGs per pool, see `setting the number of placement groups`_.
+PGs per pool, see :ref:`setting the number of placement groups`.
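For example (the value ``128`` and the pool name ``mypool`` are arbitrary),
the cluster-wide default can be changed in the central configuration
database, and the PG count of an existing pool can be set directly::

   ceph config set global osd_pool_default_pg_num 128
   ceph osd pool set mypool pg_num 128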
.. note:: In Luminous and later releases, each pool must be associated with the
application that will be using the pool. For more information, see
The pool's data protection strategy. This can be either ``replicated``
(like RAID1 and RAID10) or ``erasure`` (a kind
- of `generalized parity RAID <../erasure-code>`_ strategy like RAID6 but
+ of :ref:`generalized parity RAID <ecpool>` strategy like RAID6 but
more flexible). A
``replicated`` pool yields less usable capacity for a given amount of
raw storage but is suitable for all Ceph components and use cases.
:Type: String
:Required: No.
- :Default: For ``replicated`` pools, it is by default the rule specified by the :confval:`osd_pool_default_crush_rule` configuration option. This rule must exist. For ``erasure`` pools, it is the ``erasure-code`` rule if the ``default`` `erasure code profile`_ is used or the ``{pool-name}`` rule if not. This rule will be created implicitly if it doesn't already exist.
+ :Default: For ``replicated`` pools, it is by default the rule specified by the :confval:`osd_pool_default_crush_rule` configuration option. This rule must exist. For ``erasure`` pools, it is the ``erasure-code`` rule if the ``default`` :ref:`erasure code profile <erasure-code-profiles>` is used or the ``{pool-name}`` rule if not. This rule will be created implicitly if it doesn't already exist.
.. describe:: [erasure-code-profile=profile]
- For ``erasure`` pools only. Instructs Ceph to use the specified `erasure
- code profile`_. This profile must be an existing profile as defined via
+ For ``erasure`` pools only. Instructs Ceph to use the specified :ref:`erasure
+ code profile <erasure-code-profiles>`. This profile must be an existing profile as defined via
the dashboard or by invoking ``osd erasure-code-profile set``. Note that
changes to the EC profile of a pool after creation do *not* take effect.
To change the EC profile of an existing pool one must modify the pool to
:Type: String
:Required: No.
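For example, a profile could be defined first and then referenced when the
pool is created; the names and values below are illustrative only::

   # define a profile with 4 data chunks and 2 coding chunks
   ceph osd erasure-code-profile set myprofile k=4 m=2 crush-failure-domain=host
   # create a pool that uses the profile
   ceph osd pool create ecpool erasure myprofile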
-.. _erasure code profile: ../erasure-code-profile
-
.. describe:: --autoscale-mode=<on,off,warn>
- ``on``: the Ceph cluster will autotune changes to the number of PGs in the pool based on actual usage.
in central configuration, otherwise the Ceph monitors will refuse to remove
pools.
-For more information, see `Monitor Configuration`_.
-
-.. _Monitor Configuration: ../../configuration/mon-config-ref
+For more information, see :ref:`Monitor Configuration <monitor-config-reference>`.
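For illustration (the pool name is an example), the usual sequence is to
enable pool deletion in the central configuration database and then remove
the pool::

   ceph config set mon mon_allow_pool_delete true
   ceph osd pool delete mypool mypool --yes-i-really-really-mean-it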
If there are custom CRUSH rules that are no longer in use or needed, consider
deleting those rules.
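For example, rules can be listed and an unused one removed as follows (the
rule name is a placeholder)::

   ceph osd crush rule ls
   ceph osd crush rule rm {rule-name}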
.. describe:: min_size
- :Description: Sets the minimum number of active replicas (or shards) required for PGs to be active and thus for I/O operations to proceed. For further details, see `Setting the Number of RADOS Object Replicas`_. For erasure-coded pools, this should be set to a value greater than ``K``. If I/O is allowed with only ``K`` shards available, there will be no redundancy and data will be lost in the event of an additional, permanent OSD failure. For more information, see `Erasure Code <../erasure-code>`_
+ :Description: Sets the minimum number of active replicas (or shards) required for PGs to be active and thus for I/O operations to proceed. For further details, see `Setting the Number of RADOS Object Replicas`_. For erasure-coded pools, this should be set to a value greater than ``K``. If I/O is allowed with only ``K`` shards available, there will be no redundancy and data will be lost in the event of an additional, permanent OSD failure. For more information, see :ref:`ecpool`.
:Type: Integer
:Version: ``0.54`` and above
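For example, for an erasure-coded pool created with ``k=2, m=2``, setting
``min_size`` to ``K + 1 = 3`` keeps one chunk of redundancy while I/O
continues (the pool name is illustrative)::

   ceph osd pool set ec_pool min_size 3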
:Type: String
:Required: Yes.
-.. _Pool, PG and CRUSH Config Reference: ../../configuration/pool-pg-config-ref
.. _Bloom Filter: https://en.wikipedia.org/wiki/Bloom_filter
-.. _setting the number of placement groups: ../placement-groups#set-the-number-of-placement-groups
.. _Erasure Coding with Overwrites: ../erasure-code#erasure-coding-with-overwrites
.. _Block Device Commands: ../../../rbd/rados-rbd-cmds/#create-a-block-device-pool
.. _pgcalc: ../pgcalc
If the monitors don't have a quorum or if there are errors with the monitor
status, address the monitor issues before proceeding by consulting the material
-in `Troubleshooting Monitors <../troubleshooting-mon>`_.
+in :ref:`rados-troubleshooting-mon`.
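For example, the quorum state can be confirmed with::

   ceph status
   ceph quorum_status --format json-pretty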
Next, check your networks to make sure that they are running properly. Networks
can have a significant impact on OSD operation and performance. Look for
copy of your data on at least one OSD. Deleting placement group directories
is a rare and extreme intervention. It is not to be undertaken lightly.
-See `Monitor Config Reference`_ for more information.
+See :ref:`monitor-config-reference` for more information.
OSDs are Slow/Unresponsive
.. _iostat: https://en.wikipedia.org/wiki/Iostat
-.. _Ceph Logging and Debugging: ../../configuration/ceph-conf#ceph-logging-and-debugging
.. _Logging and Debugging: ../log-and-debug
-.. _Debugging and Logging: ../debug
.. _Monitor/OSD Interaction: ../../configuration/mon-osd-interaction
-.. _Monitor Config Reference: ../../configuration/mon-config-ref
.. _monitoring your OSDs: ../../operations/monitoring-osd-pg
.. _monitoring OSDs: ../../operations/monitoring-osd-pg/#monitoring-osds