.. code:: bash
- $ ceph osd pool create cephfs_data <pg_num>
- $ ceph osd pool create cephfs_metadata <pg_num>
+ $ ceph osd pool create cephfs_data
+ $ ceph osd pool create cephfs_metadata
Generally, the metadata pool will have at most a few gigabytes of data. For
this reason, a smaller PG count is usually recommended. 64 or 128 is commonly
used in practice.
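If you prefer to pin the metadata pool to a small PG count yourself,
``pg_num`` can still be passed as an optional argument (the value 64
below is only illustrative)::
    $ ceph osd pool create cephfs_metadata 64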
::
ceph fs flag set enable_multiple true --yes-i-really-mean-it
- ceph osd pool create recovery <pg-num> replicated <crush-rule-name>
+ ceph osd pool create recovery replicated <crush-rule-name>
ceph fs new recovery-fs recovery <data pool> --allow-dangerous-metadata-overlay
cephfs-data-scan init --force-init --filesystem recovery-fs --alternate-pool recovery
ceph fs reset recovery-fs --yes-i-really-mean-it
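A quick sanity check that the recovery filesystem and its pool are in
place (any filesystem- and pool-listing commands will do)::
    ceph fs ls
    ceph osd pool ls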
First, create the pool. In this example we create the ``hadoop1`` pool with
replication factor 1. ::
- ceph osd pool create hadoop1 100
+ ceph osd pool create hadoop1
ceph osd pool set hadoop1 size 1
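To confirm that the replication factor took effect, ``ceph osd pool get``
is one way to check::
    ceph osd pool get hadoop1 size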
Next, determine the pool id. This can be done by examining the output of the
Now put something in using rados, check that it made it, get it back, and remove it.::
- ./ceph osd pool create test-blkin 8
+ ./ceph osd pool create test-blkin
./rados put test-object-1 ./vstart.sh --pool=test-blkin
./rados -p test-blkin ls
./ceph osd map test-blkin test-object-1
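To complete the round trip described above (the copy filename is just a
placeholder), read the object back and remove it::
    ./rados -p test-blkin get test-object-1 ./vstart-copy.sh
    ./rados -p test-blkin rm test-object-1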
#. Use as much functionality of the cluster as you can, to exercise as many object encoder methods as possible::
- bin/ceph osd pool create mypool 8
+ bin/ceph osd pool create mypool
bin/rados -p mypool bench 10 write -b 123
bin/ceph osd out 0
bin/ceph osd in 0
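A few more commands that exercise additional encode/decode paths (this
is only a sketch; any mix of cluster operations helps)::
    bin/ceph osd tree
    bin/ceph pg dump
    bin/ceph osd pool set mypool size 3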
Set up an erasure-coded pool::
- $ ceph osd pool create ecpool 12 12 erasure
+ $ ceph osd pool create ecpool erasure
Set up an erasure-coded pool and the associated CRUSH rule ``ecrule``::
$ ceph osd crush rule create-erasure ecrule
- $ ceph osd pool create ecpool 12 12 erasure \
- default ecrule
+ $ ceph osd pool create ecpool erasure default ecrule
Set the CRUSH failure domain to osd (instead of host, which is the default)::
plugin=jerasure
technique=reed_sol_van
crush-failure-domain=osd
- $ ceph osd pool create ecpool 12 12 erasure myprofile
+ $ ceph osd pool create ecpool erasure myprofile
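To double-check what the profile contains before (or after) creating
the pool::
    $ ceph osd erasure-code-profile get myprofile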
Control the parameters of the erasure code plugin::
m=2
plugin=jerasure
technique=reed_sol_van
- $ ceph osd pool create ecpool 12 12 erasure \
- myprofile
+ $ ceph osd pool create ecpool erasure myprofile
Choose an alternate erasure code plugin::
plugin=jerasure
technique=reed_sol_van
crush-failure-domain=osd
- $ ceph osd pool create ecpool 12 12 erasure myprofile
+ $ ceph osd pool create ecpool erasure myprofile
The *plugin* is dynamically loaded from *directory* and expected to
implement the *int __erasure_code_init(char *plugin_name, char *directory)* function
.. code::
- $ bin/ceph osd pool create mypool 8
+ $ bin/ceph osd pool create mypool
$ bin/rados -p mypool bench 10 write -b 123
Place a file into the new pool:
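For example (the object name and file path here are only placeholders):
.. code::
    $ bin/rados -p mypool put sample-object /path/to/some/file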
Usage::
- ceph osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure}
+ ceph osd pool create <poolname> {<int[0-]>} {<int[0-]>} {replicated|erasure}
{<erasure_code_profile>} {<rule>} {<int>}
Subcommand ``delete`` deletes pool.
Creates/deletes/renames a storage pool. ::
- ceph osd pool create {pool-name} pg_num [pgp_num]
+ ceph osd pool create {pool-name} [pg_num [pgp_num]]
ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]
ceph osd pool rename {old-name} {new-name}
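For example (``mypool`` is only a placeholder name; note that deleting
a pool also requires ``mon_allow_pool_delete`` to be set to ``true`` on
the monitors)::
    ceph osd pool create mypool
    ceph osd pool rename mypool mypool-renamed
    ceph osd pool delete mypool-renamed mypool-renamed --yes-i-really-really-mean-it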
plugin=clay \
k=4 m=2 d=5 \
crush-failure-domain=host
- $ ceph osd pool create claypool 12 12 erasure CLAYprofile
+ $ ceph osd pool create claypool erasure CLAYprofile
Creating a clay profile
plugin=lrc \
k=4 m=2 l=3 \
crush-failure-domain=host
- $ ceph osd pool create lrcpool 12 12 erasure LRCprofile
+ $ ceph osd pool create lrcpool erasure LRCprofile
Reduce recovery bandwidth between racks
k=4 m=2 l=3 \
crush-locality=rack \
crush-failure-domain=host
- $ ceph osd pool create lrcpool 12 12 erasure LRCprofile
+ $ ceph osd pool create lrcpool erasure LRCprofile
Create an lrc profile
plugin=lrc \
mapping=DD_ \
layers='[ [ "DDc", "" ] ]'
- $ ceph osd pool create lrcpool 12 12 erasure LRCprofile
+ $ ceph osd pool create lrcpool erasure LRCprofile
Reduce recovery bandwidth between hosts
---------------------------------------
[ "cDDD____", "" ],
[ "____cDDD", "" ],
]'
- $ ceph osd pool create lrcpool 12 12 erasure LRCprofile
+ $ ceph osd pool create lrcpool erasure LRCprofile
Reduce recovery bandwidth between racks
[ "choose", "rack", 2 ],
[ "chooseleaf", "host", 4 ],
]'
- $ ceph osd pool create lrcpool 12 12 erasure LRCprofile
+ $ ceph osd pool create lrcpool erasure LRCprofile
Testing with different Erasure Code backends
--------------------------------------------
plugin=lrc \
mapping=DD_ \
layers='[ [ "DDc", "plugin=isa technique=cauchy" ] ]'
- $ ceph osd pool create lrcpool 12 12 erasure LRCprofile
+ $ ceph osd pool create lrcpool erasure LRCprofile
You could also use a different erasure code profile for each
layer. ::
[ "cDDD____", "plugin=isa" ],
[ "____cDDD", "plugin=jerasure" ],
]'
- $ ceph osd pool create lrcpool 12 12 erasure LRCprofile
+ $ ceph osd pool create lrcpool erasure LRCprofile
plugin=shec \
k=8 m=4 c=3 \
crush-failure-domain=host
- $ ceph osd pool create shecpool 256 256 erasure SHECprofile
+ $ ceph osd pool create shecpool erasure SHECprofile
<https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_5>`_ and
requires at least three hosts::
- $ ceph osd pool create ecpool 12 12 erasure
+ $ ceph osd pool create ecpool erasure
pool 'ecpool' created
$ echo ABCDEFGHI | rados --pool ecpool put NYAN -
$ rados --pool ecpool get NYAN -
k=3 \
m=2 \
crush-failure-domain=rack
- $ ceph osd pool create ecpool 12 12 erasure myprofile
+ $ ceph osd pool create ecpool erasure myprofile
$ echo ABCDEFGHI | rados --pool ecpool put NYAN -
$ rados --pool ecpool get NYAN -
ABCDEFGHI
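To see which OSDs were chosen to hold the chunks of the object, one
quick way to inspect placement is::
    $ ceph osd map ecpool NYAN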
When creating a new pool with::
- ceph osd pool create {pool-name} pg_num
+ ceph osd pool create {pool-name} [pg_num]
-it is mandatory to choose the value of ``pg_num`` because it cannot (currently) be
-calculated automatically. Here are a few values commonly used:
+it is optional to choose the value of ``pg_num``. If you do not
+specify ``pg_num``, the cluster can (by default) automatically tune it
+for you based on how much data is stored in the pool (see above, :ref:`pg-autoscaler`).
-- Less than 5 OSDs set ``pg_num`` to 128
+Alternatively, ``pg_num`` can be explicitly provided. However,
+whether or not you specify a ``pg_num`` value does not affect whether
+the value is automatically tuned by the cluster afterwards. To
+enable or disable auto-tuning::
-- Between 5 and 10 OSDs set ``pg_num`` to 512
+ ceph osd pool set {pool-name} pg_autoscale_mode (on|off|warn)
-- Between 10 and 50 OSDs set ``pg_num`` to 1024
+The "rule of thumb" for PGs per OSD has traditionally be 100. With
+the additional of the balancer (which is also enabled by default), a
+value of more like 50 PGs per OSD is probably reasonable. The
+challenge (which the autoscaler normally does for you), is to:
-- If you have more than 50 OSDs, you need to understand the tradeoffs
- and how to calculate the ``pg_num`` value by yourself
-
-- For calculating ``pg_num`` value by yourself please take help of `pgcalc`_ tool
-
-As the number of OSDs increases, choosing the right value for pg_num
-becomes more important because it has a significant influence on the
-behavior of the cluster as well as the durability of the data when
-something goes wrong (i.e. the probability that a catastrophic event
-leads to data loss).
+- have the PGs per pool proportional to the data in the pool, and
+- end up with 50-100 PGs per OSD, after the replication or
+  erasure-coding fan-out of each PG across OSDs is taken into
+  consideration
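To see what the autoscaler has chosen (and would choose) for each pool,
you can inspect its status, assuming the ``pg_autoscaler`` manager
module is enabled::
    ceph osd pool autoscale-status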
How are Placement Groups used?
===============================
To create a pool, execute::
- ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] \
+ ceph osd pool create {pool-name} [{pg-num} [{pgp-num}]] [replicated] \
[crush-rule-name] [expected-num-objects]
- ceph osd pool create {pool-name} {pg-num} {pgp-num} erasure \
+ ceph osd pool create {pool-name} [{pg-num} [{pgp-num}]] erasure \
[erasure-code-profile] [crush-rule-name] [expected_num_objects]
Where:
coded pool that requires fewer OSDs::
ceph osd erasure-code-profile set myprofile k=5 m=3
- ceph osd pool create erasurepool 16 16 erasure myprofile
+ ceph osd pool create erasurepool erasure myprofile
or add new OSDs and the PG will automatically use them.
to have OSDs residing on the same host with::
ceph osd erasure-code-profile set myprofile crush-failure-domain=osd
- ceph osd pool create erasurepool 16 16 erasure myprofile
+ ceph osd pool create erasurepool erasure myprofile
CRUSH gives up too soon
-----------------------
To configure Ceph for use with ``libvirt``, perform the following steps:
#. `Create a pool`_. The following example uses the
- pool name ``libvirt-pool`` with 128 placement groups. ::
+ pool name ``libvirt-pool``. ::
- ceph osd pool create libvirt-pool 128 128
+ ceph osd pool create libvirt-pool
Verify the pool exists. ::
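    # any pool-listing command will do; for example:
    ceph osd lspools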
Kubernetes volume storage. Ensure your Ceph cluster is running, then create
the pool. ::
- $ ceph osd pool create kubernetes 128
+ $ ceph osd pool create kubernetes
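A newly created pool must be initialized before RBD can use it; the
``rbd`` command provides an init subcommand for this::
    $ rbd pool init kubernetes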
See `Create a Pool`_ for details on specifying the number of placement groups
for your pools, and `Placement Groups`_ for details on the number of placement
pool. We recommend creating a pool for Cinder and a pool for Glance. Ensure
your Ceph cluster is running, then create the pools. ::
- ceph osd pool create volumes 128
- ceph osd pool create images 128
- ceph osd pool create backups 128
- ceph osd pool create vms 128
+ ceph osd pool create volumes
+ ceph osd pool create images
+ ceph osd pool create backups
+ ceph osd pool create vms
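Newly created pools must be initialized prior to use. Since these pools
will back RBD images, initialize them with the ``rbd`` tool::
    rbd pool init volumes
    rbd pool init images
    rbd pool init backups
    rbd pool init vms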
See `Create a Pool`_ for details on specifying the number of placement groups for
your pools, and `Placement Groups`_ for details on the number of placement
Create and initialize the RBD pool::
- $ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- ceph osd pool create rbd 256
+ $ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- ceph osd pool create rbd
pool 'rbd' created
$ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- rbd pool init rbd
``rados put`` command on the command line. For example::
echo {Test-data} > testfile.txt
- ceph osd pool create mytest 8
+ ceph osd pool create mytest
rados put {object-name} {file-path} --pool=mytest
rados put test-object-1 testfile.txt --pool=mytest
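To confirm the object was stored, and to clean up afterwards (the
object and pool names follow the example above)::
    rados -p mytest ls
    rados rm test-object-1 --pool=mytest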
::
- ceph osd pool create cephfs_data <pg_num>
- ceph osd pool create cephfs_metadata <pg_num>
+ ceph osd pool create cephfs_data
+ ceph osd pool create cephfs_metadata
ceph fs new <fs_name> cephfs_metadata cephfs_data
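Once the filesystem has been created, you can confirm that it and its
pools are in place::
    ceph fs ls
    ceph mds stat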