=======
When you first deploy a cluster without creating a pool, Ceph uses the default
-pools for storing data. A pool differs from CRUSH's location-based buckets in
-that a pool doesn't have a single physical location, and a pool provides you
-with some additional functionality, including:
+pools for storing data. A pool provides you with:
-- **Replicas**: You can set the desired number of copies/replicas of an object.
+- **Resilience**: You can set how many OSDs are allowed to fail without losing data.
+ For replicated pools, it is the desired number of copies/replicas of an object.
A typical configuration stores an object and one additional copy
(i.e., ``size = 2``), but you can determine the number of copies/replicas.
+ For erasure coded pools, it is the number of coding chunks
+ (i.e., ``erasure-code-m=2``).
- **Placement Groups**: You can set the number of placement groups for the pool.
  A typical configuration uses approximately 100 placement groups per OSD to
  provide optimal balancing without using up too many computing resources.
  When setting up multiple pools, be careful to set a reasonable number of
  placement groups for both the pool and the cluster as a whole.
- **CRUSH Rules**: When you store data in a pool, a CRUSH ruleset mapped to the
- pool enables CRUSH to identify a rule for the placement of the primary object
- and object replicas in your cluster. You can create a custom CRUSH rule for your
- pool.
+ pool enables CRUSH to identify a rule for the placement of the object
+ and its replicas (or chunks for erasure coded pools) in your cluster.
+ You can create a custom CRUSH rule for your pool.
- **Snapshots**: When you create snapshots with ``ceph osd pool mksnap``,
you effectively take a snapshot of a particular pool.
To create a pool, execute::
- ceph osd pool create {pool-name} {pg-num} [{pgp-num}]
+ ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated]
+ ceph osd pool create {pool-name} {pg-num} {pgp-num} erasure \
+ [{crush_ruleset=ruleset}] \
+ [{erasure-code-directory=directory}] \
+ [{erasure-code-plugin=plugin}] \
+ [{erasure-code-k=data-chunks}] \
+ [{erasure-code-m=coding-chunks}] \
+ [{key=value} ...]
Where:
:Required: Yes. Picks up default or Ceph configuration value if not specified.
:Default: 8
+``{replicated|erasure}``
+
+:Description: The pool type which may either be **replicated** to
+ recover from lost OSDs by keeping multiple copies of the
+ objects or **erasure** to get a kind of generalized
+ RAID5 capability. The **replicated** pools require more
+ raw storage but implement all Ceph operations. The
+ **erasure** pools require less raw storage but only
+ implement a subset of the available operations.
+
+:Type: String
+:Required: No.
+:Default: replicated
+
+``{crush_ruleset=ruleset}``
+
+:Description: For **erasure** pools only. Set the name of the CRUSH
+ **ruleset**. It must be an existing ruleset matching
+ the requirements of the underlying erasure code plugin.
+
+:Type: String
+:Required: No.
+
+``{erasure-code-directory=directory}``
+
+:Description: For **erasure** pools only. Set the **directory** name
+ from which the erasure code plugin is loaded.
+
+:Type: String
+:Required: No.
+:Default: /usr/lib/ceph/erasure-code
+
+``{erasure-code-plugin=plugin}``
+
+:Description: For **erasure** pools only. Use the erasure code **plugin**
+ to compute coding chunks and recover missing chunks.
+
+:Type: String
+:Required: No.
+:Default: jerasure
+
+``{erasure-code-k=data-chunks}``
+
+:Description: For **erasure** pools using the **jerasure** plugin
+              only. Each object is split into **data-chunks** parts,
+ each stored on a different OSD.
+
+:Type: Integer
+:Required: No.
+:Default: 4
+
+``{erasure-code-m=coding-chunks}``
+
+:Description: For **erasure** pools using the **jerasure** plugin
+ only. Compute **coding chunks** for each object and
+ store them on different OSDs. The number of coding
+ chunks is also the number of OSDs that can be down
+ without losing data.
+
+:Type: Integer
+:Required: No.
+:Default: 2
+
+``{key=value}``
+
+:Description: For **erasure** pools, the semantics of the remaining
+ key/value pairs is defined by the erasure code plugin.
+ For **replicated** pools, the key/value pairs are
+ ignored.
+
+:Type: String
+:Required: No.
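
For example, a replicated pool and a small erasure coded pool could be
created as follows (the pool names ``mypool`` and ``ecpool`` and the
chunk counts are illustrative values, not defaults)::

        ceph osd pool create mypool 128 128 replicated
        ceph osd pool create ecpool 12 12 erasure \
           erasure-code-k=2 erasure-code-m=1

With ``erasure-code-k=2`` and ``erasure-code-m=1``, each object is split
into two data chunks plus one coding chunk, so one OSD can be lost
without losing data.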
When you create a pool, set the number of placement groups to a reasonable value
(e.g., ``100``). Consider the total number of placement groups per OSD too.
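
As a back-of-the-envelope check, the roughly 100 placement groups per OSD
guidance can be sketched in a few lines of Python. This is a rule-of-thumb
calculation, not a Ceph API; the helper name, the division by replica
count, and the power-of-two rounding convention are assumptions:

```python
def suggested_pg_count(num_osds, pool_size, pgs_per_osd=100):
    """Rough PG count for a pool spanning all OSDs: aim for about
    pgs_per_osd placement groups per OSD, divided by the replica
    count, rounded up to the next power of two."""
    target = num_osds * pgs_per_osd / pool_size
    power = 1
    while power < target:
        power *= 2
    return power

# e.g. 9 OSDs with size=3: target is 300, rounded up to 512
print(suggested_pg_count(9, 3))
```

Treat the result as a starting point only; the total across all pools
still has to stay within a reasonable per-OSD budget.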
``size``
-:Description: Sets the number of replicas for objects in the pool. See `Set the Number of Object Replicas`_ for further details.
+:Description: Sets the number of replicas for objects in the pool. See `Set the Number of Object Replicas`_ for further details. Replicated pools only.
:Type: Integer
``min_size``
-:Description: Sets the minimum number of replicas required for io. See `Set the Number of Object Replicas`_ for further details
+:Description: Sets the minimum number of replicas required for I/O. See `Set the Number of Object Replicas`_ for further details. Replicated pools only.
:Type: Integer
.. note:: Version ``0.54`` and above
Set the Number of Object Replicas
=================================
-To set the number of object replicas, execute the following::
+To set the number of object replicas on a replicated pool, execute the following::
ceph osd pool set {poolname} size {num-replicas}
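
For example, on a pool named ``data`` (substitute your own pool name)::

        ceph osd pool set data size 3

Note that an object might still accept I/O in degraded mode with fewer
than ``size`` replicas; the ``min_size`` setting described above governs
the minimum required for I/O.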