From: Sage Weil
Date: Thu, 19 Sep 2019 15:47:07 +0000 (-0500)
Subject: doc: remove all pg_num arguments to 'osd pool create'
X-Git-Tag: v15.1.0~1455^2~5
X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=7b988e9fe1a5630606c0c10b74740a4a276eb7fd;p=ceph-ci.git

doc: remove all pg_num arguments to 'osd pool create'

Also, update the discussion about pg_num and pool creation, with a
reference to the autoscaler.

Signed-off-by: Sage Weil
---

diff --git a/doc/cephfs/createfs.rst b/doc/cephfs/createfs.rst
index 04616739067..11c54a1cd3c 100644
--- a/doc/cephfs/createfs.rst
+++ b/doc/cephfs/createfs.rst
@@ -28,8 +28,8 @@ might run the following commands:
 
 .. code:: bash
 
-    $ ceph osd pool create cephfs_data
-    $ ceph osd pool create cephfs_metadata
+    $ ceph osd pool create cephfs_data
+    $ ceph osd pool create cephfs_metadata
 
 Generally, the metadata pool will have at most a few gigabytes of data. For
 this reason, a smaller PG count is usually recommended. 64 or 128 is commonly
diff --git a/doc/cephfs/disaster-recovery-experts.rst b/doc/cephfs/disaster-recovery-experts.rst
index dad9f88ba2d..47c7bd9bfc0 100644
--- a/doc/cephfs/disaster-recovery-experts.rst
+++ b/doc/cephfs/disaster-recovery-experts.rst
@@ -219,7 +219,7 @@ it with empty file system data structures:
 ::
 
     ceph fs flag set enable_multiple true --yes-i-really-mean-it
-    ceph osd pool create recovery replicated
+    ceph osd pool create recovery replicated
     ceph fs new recovery-fs recovery --allow-dangerous-metadata-overlay
     cephfs-data-scan init --force-init --filesystem recovery-fs --alternate-pool recovery
     ceph fs reset recovery-fs --yes-i-really-mean-it
diff --git a/doc/cephfs/hadoop.rst b/doc/cephfs/hadoop.rst
index cfb291e1efd..e2c0ffedc63 100644
--- a/doc/cephfs/hadoop.rst
+++ b/doc/cephfs/hadoop.rst
@@ -146,7 +146,7 @@ for storing file system data using the ``ceph fs add_data_pool`` command.
 First, create the pool. In this example we create the ``hadoop1`` pool with
 replication factor 1. ::
 
-    ceph osd pool create hadoop1 100
+    ceph osd pool create hadoop1
     ceph osd pool set hadoop1 size 1
 
 Next, determine the pool id. This can be done by examining the output of the
diff --git a/doc/dev/blkin.rst b/doc/dev/blkin.rst
index 574ae8029dc..00a5748faaa 100644
--- a/doc/dev/blkin.rst
+++ b/doc/dev/blkin.rst
@@ -111,7 +111,7 @@ You may want to check that ceph is up.::
 Now put something in using rados, check that it made it, get it back, and
 remove it.::
 
-  ./ceph osd pool create test-blkin 8
+  ./ceph osd pool create test-blkin
   ./rados put test-object-1 ./vstart.sh --pool=test-blkin
   ./rados -p test-blkin ls
   ./ceph osd map test-blkin test-object-1
diff --git a/doc/dev/corpus.rst b/doc/dev/corpus.rst
index 879a819262a..c538028768b 100644
--- a/doc/dev/corpus.rst
+++ b/doc/dev/corpus.rst
@@ -41,7 +41,7 @@ script of ``script/gen-corpus.sh``, or by following the instructions below:
 
 #. Use as much functionality of the cluster as you can, to exercise as many
    object encoder methods as possible::
 
-     bin/ceph osd pool create mypool 8
+     bin/ceph osd pool create mypool
      bin/rados -p mypool bench 10 write -b 123
      bin/ceph osd out 0
      bin/ceph osd in 0
diff --git a/doc/dev/erasure-coded-pool.rst b/doc/dev/erasure-coded-pool.rst
index 670865baff0..8ad697702e9 100644
--- a/doc/dev/erasure-coded-pool.rst
+++ b/doc/dev/erasure-coded-pool.rst
@@ -49,13 +49,12 @@ Interface
 
 Set up an erasure-coded pool::
 
-   $ ceph osd pool create ecpool 12 12 erasure
+   $ ceph osd pool create ecpool erasure
 
 Set up an erasure-coded pool and the associated CRUSH rule ``ecrule``::
 
    $ ceph osd crush rule create-erasure ecrule
-   $ ceph osd pool create ecpool 12 12 erasure \
-       default ecrule
+   $ ceph osd pool create ecpool erasure default ecrule
 
 Set the CRUSH failure domain to osd (instead of host, which is the default)::
 
@@ -67,7 +66,7 @@ Set the CRUSH failure domain to osd (instead of host, which is the default)::
      plugin=jerasure
      technique=reed_sol_van
      crush-failure-domain=osd
-   $ ceph osd pool create ecpool 12 12 erasure myprofile
+   $ ceph osd pool create ecpool erasure myprofile
 
 Control the parameters of the erasure code plugin::
 
@@ -78,8 +77,7 @@ Control the parameters of the erasure code plugin::
      m=2
      plugin=jerasure
      technique=reed_sol_van
-   $ ceph osd pool create ecpool 12 12 erasure \
-       myprofile
+   $ ceph osd pool create ecpool erasure myprofile
 
 Choose an alternate erasure code plugin::
diff --git a/doc/dev/osd_internals/erasure_coding/developer_notes.rst b/doc/dev/osd_internals/erasure_coding/developer_notes.rst
index fca56ce2536..090dda194ab 100644
--- a/doc/dev/osd_internals/erasure_coding/developer_notes.rst
+++ b/doc/dev/osd_internals/erasure_coding/developer_notes.rst
@@ -174,7 +174,7 @@ key=value pairs stored in an `erasure code profile`_.
      plugin=jerasure
      technique=reed_sol_van
      crush-failure-domain=osd
-   $ ceph osd pool create ecpool 12 12 erasure myprofile
+   $ ceph osd pool create ecpool erasure myprofile
 
 The *plugin* is dynamically loaded from *directory* and expected to implement
 the *int __erasure_code_init(char *plugin_name, char *directory)* function
diff --git a/doc/dev/quick_guide.rst b/doc/dev/quick_guide.rst
index 7bda55f22c3..c2f02fe6bc6 100644
--- a/doc/dev/quick_guide.rst
+++ b/doc/dev/quick_guide.rst
@@ -66,7 +66,7 @@ Make a pool and run some benchmarks against it:
 
 .. code::
 
-    $ bin/ceph osd pool create mypool 8
+    $ bin/ceph osd pool create mypool
     $ bin/rados -p mypool bench 10 write -b 123
 
 Place a file into the new pool:
diff --git a/doc/man/8/ceph.rst b/doc/man/8/ceph.rst
index fcb7c9cd20a..40e71304d5e 100644
--- a/doc/man/8/ceph.rst
+++ b/doc/man/8/ceph.rst
@@ -943,7 +943,7 @@ Subcommand ``create`` creates pool.
 
 Usage::
 
-    ceph osd pool create {} {replicated|erasure}
+    ceph osd pool create {} {} {replicated|erasure} {} {} {}
 
 Subcommand ``delete`` deletes pool.
diff --git a/doc/rados/operations/control.rst b/doc/rados/operations/control.rst
index 72ebd350c38..68ea3c42f13 100644
--- a/doc/rados/operations/control.rst
+++ b/doc/rados/operations/control.rst
@@ -253,7 +253,7 @@ Creates/deletes a snapshot of a pool. ::
 
 Creates/deletes/renames a storage pool. ::
 
-    ceph osd pool create {pool-name} pg_num [pgp_num]
+    ceph osd pool create {pool-name} [pg_num [pgp_num]]
     ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]
     ceph osd pool rename {old-name} {new-name}
diff --git a/doc/rados/operations/erasure-code-clay.rst b/doc/rados/operations/erasure-code-clay.rst
index 70f4ffffc76..e9cfa96e6a2 100644
--- a/doc/rados/operations/erasure-code-clay.rst
+++ b/doc/rados/operations/erasure-code-clay.rst
@@ -44,7 +44,7 @@ An example configuration that can be used to observe reduced bandwidth usage::
         plugin=clay \
         k=4 m=2 d=5 \
         crush-failure-domain=host
-   $ ceph osd pool create claypool 12 12 erasure CLAYprofile
+   $ ceph osd pool create claypool erasure CLAYprofile
 
 
 Creating a clay profile
diff --git a/doc/rados/operations/erasure-code-lrc.rst b/doc/rados/operations/erasure-code-lrc.rst
index c2f24b3cbaf..a144b92eacf 100644
--- a/doc/rados/operations/erasure-code-lrc.rst
+++ b/doc/rados/operations/erasure-code-lrc.rst
@@ -28,7 +28,7 @@ observed.::
              plugin=lrc \
              k=4 m=2 l=3 \
              crush-failure-domain=host
-   $ ceph osd pool create lrcpool 12 12 erasure LRCprofile
+   $ ceph osd pool create lrcpool erasure LRCprofile
 
 
 Reduce recovery bandwidth between racks
@@ -42,7 +42,7 @@ OSD is in the same rack as the lost chunk.::
              k=4 m=2 l=3 \
              crush-locality=rack \
              crush-failure-domain=host
-   $ ceph osd pool create lrcpool 12 12 erasure LRCprofile
+   $ ceph osd pool create lrcpool erasure LRCprofile
 
 
 Create an lrc profile
@@ -196,7 +196,7 @@ by default.::
              plugin=lrc \
              mapping=DD_ \
             layers='[ [ "DDc", "" ] ]'
-   $ ceph osd pool create lrcpool 12 12 erasure LRCprofile
+   $ ceph osd pool create lrcpool erasure LRCprofile
 
 Reduce recovery bandwidth between hosts
 ---------------------------------------
@@ -214,7 +214,7 @@ the layout of the chunks is different::
                [ "cDDD____", "" ],
                [ "____cDDD", "" ],
              ]'
-   $ ceph osd pool create lrcpool 12 12 erasure LRCprofile
+   $ ceph osd pool create lrcpool erasure LRCprofile
 
 
 Reduce recovery bandwidth between racks
@@ -235,7 +235,7 @@ OSD is in the same rack as the lost chunk.::
                [ "choose", "rack", 2 ],
                [ "chooseleaf", "host", 4 ],
              ]'
-   $ ceph osd pool create lrcpool 12 12 erasure LRCprofile
+   $ ceph osd pool create lrcpool erasure LRCprofile
 
 Testing with different Erasure Code backends
 --------------------------------------------
@@ -251,7 +251,7 @@ be used in the lrcpool.::
              plugin=lrc \
              mapping=DD_ \
             layers='[ [ "DDc", "plugin=isa technique=cauchy" ] ]'
-   $ ceph osd pool create lrcpool 12 12 erasure LRCprofile
+   $ ceph osd pool create lrcpool erasure LRCprofile
 
 You could also use a different erasure code profile for for each
 layer.::
@@ -264,7 +264,7 @@ layer.::
                [ "cDDD____", "plugin=isa" ],
                [ "____cDDD", "plugin=jerasure" ],
              ]'
-   $ ceph osd pool create lrcpool 12 12 erasure LRCprofile
+   $ ceph osd pool create lrcpool erasure LRCprofile
diff --git a/doc/rados/operations/erasure-code-shec.rst b/doc/rados/operations/erasure-code-shec.rst
index c799b3ae19f..dd5708a3b92 100644
--- a/doc/rados/operations/erasure-code-shec.rst
+++ b/doc/rados/operations/erasure-code-shec.rst
@@ -141,4 +141,4 @@ Erasure code profile examples
              plugin=shec \
              k=8 m=4 c=3 \
              crush-failure-domain=host
-   $ ceph osd pool create shecpool 256 256 erasure SHECprofile
+   $ ceph osd pool create shecpool erasure SHECprofile
diff --git a/doc/rados/operations/erasure-code.rst b/doc/rados/operations/erasure-code.rst
index 9fe83a3f117..23e23bb8f9a 100644
--- a/doc/rados/operations/erasure-code.rst
+++ b/doc/rados/operations/erasure-code.rst
@@ -18,7 +18,7 @@ The simplest
 erasure coded pool is equivalent to `RAID5
 `_ and requires at least three hosts::
 
-    $ ceph osd pool create ecpool 12 12 erasure
+    $ ceph osd pool create ecpool erasure
     pool 'ecpool' created
     $ echo ABCDEFGHI | rados --pool ecpool put NYAN -
     $ rados --pool ecpool get NYAN -
@@ -56,7 +56,7 @@ the following profile can be defined::
         k=3 \
         m=2 \
         crush-failure-domain=rack
-    $ ceph osd pool create ecpool 12 12 erasure myprofile
+    $ ceph osd pool create ecpool erasure myprofile
     $ echo ABCDEFGHI | rados --pool ecpool put NYAN -
     $ rados --pool ecpool get NYAN -
     ABCDEFGHI
diff --git a/doc/rados/operations/placement-groups.rst b/doc/rados/operations/placement-groups.rst
index d5cc267c27d..9a417117f9c 100644
--- a/doc/rados/operations/placement-groups.rst
+++ b/doc/rados/operations/placement-groups.rst
@@ -167,27 +167,28 @@ A preselection of pg_num
 When creating a new pool with::
 
-    ceph osd pool create {pool-name} pg_num
+    ceph osd pool create {pool-name} [pg_num]
 
-it is mandatory to choose the value of ``pg_num`` because it cannot (currently) be
-calculated automatically. Here are a few values commonly used:
+it is optional to choose the value of ``pg_num``. If you do not
+specify ``pg_num``, the cluster can (by default) automatically tune it
+for you based on how much data is stored in the pool (see above, :ref:`pg-autoscaler`).
 
-- Less than 5 OSDs set ``pg_num`` to 128
+Alternatively, ``pg_num`` can be explicitly provided. However,
+whether you specify a ``pg_num`` value or not does not affect whether
+the value is automatically tuned by the cluster after the fact. To
+enable or disable auto-tuning::
 
-- Between 5 and 10 OSDs set ``pg_num`` to 512
+    ceph osd pool set {pool-name} pg_autoscale_mode (on|off|warn)
 
-- Between 10 and 50 OSDs set ``pg_num`` to 1024
+The "rule of thumb" for PGs per OSD has traditionally been 100. With
+the addition of the balancer (which is also enabled by default), a
+value of more like 50 PGs per OSD is probably reasonable. The
+challenge (which the autoscaler normally handles for you) is to:
 
-- If you have more than 50 OSDs, you need to understand the tradeoffs
-  and how to calculate the ``pg_num`` value by yourself
-
-- For calculating ``pg_num`` value by yourself please take help of `pgcalc`_ tool
-
-As the number of OSDs increases, choosing the right value for pg_num
-becomes more important because it has a significant influence on the
-behavior of the cluster as well as the durability of the data when
-something goes wrong (i.e. the probability that a catastrophic event
-leads to data loss).
+- have the PGs per pool proportional to the data in the pool, and
+- end up with 50-100 PGs per OSD, after the replication or
+  erasure-coding fan-out of each PG across OSDs is taken into
+  consideration
 
 How are Placement Groups used ?
 ===============================
 
diff --git a/doc/rados/operations/pools.rst b/doc/rados/operations/pools.rst
index 18fdd3ee11e..8aa6f378b69 100644
--- a/doc/rados/operations/pools.rst
+++ b/doc/rados/operations/pools.rst
@@ -58,9 +58,9 @@ For example::
 
 To create a pool, execute::
 
-    ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] \
+    ceph osd pool create {pool-name} [{pg-num} [{pgp-num}]] [replicated] \
              [crush-rule-name] [expected-num-objects]
-    ceph osd pool create {pool-name} {pg-num} {pgp-num} erasure \
+    ceph osd pool create {pool-name} [{pg-num} [{pgp-num}]] erasure \
              [erasure-code-profile] [crush-rule-name] [expected_num_objects]
 
 Where:
diff --git a/doc/rados/troubleshooting/troubleshooting-pg.rst b/doc/rados/troubleshooting/troubleshooting-pg.rst
index a11b972f50f..7aa3846c1ce 100644
--- a/doc/rados/troubleshooting/troubleshooting-pg.rst
+++ b/doc/rados/troubleshooting/troubleshooting-pg.rst
@@ -476,7 +476,7 @@ If the Ceph cluster only has 8 OSDs and the erasure coded pool needs
 coded pool that requires less OSDs::
 
     ceph osd erasure-code-profile set myprofile k=5 m=3
-    ceph osd pool create erasurepool 16 16 erasure myprofile
+    ceph osd pool create erasurepool erasure myprofile
 
 or add a new OSDs and the PG will automatically use them.
 
@@ -515,7 +515,7 @@ You can resolve the problem by creating a new pool in which PGs are allowed
 to have OSDs residing on the same host with::
 
     ceph osd erasure-code-profile set myprofile crush-failure-domain=osd
-    ceph osd pool create erasurepool 16 16 erasure myprofile
+    ceph osd pool create erasurepool erasure myprofile
 
 CRUSH gives up too soon
 -----------------------
diff --git a/doc/rbd/libvirt.rst b/doc/rbd/libvirt.rst
index f953b1f08e2..694179df513 100644
--- a/doc/rbd/libvirt.rst
+++ b/doc/rbd/libvirt.rst
@@ -59,9 +59,9 @@ Configuring Ceph
 To configure Ceph for use with ``libvirt``, perform the following steps:
 
 #. `Create a pool`_. The following example uses the
-   pool name ``libvirt-pool`` with 128 placement groups. ::
+   pool name ``libvirt-pool``.::
 
-    ceph osd pool create libvirt-pool 128 128
+    ceph osd pool create libvirt-pool
 
    Verify the pool exists. ::
diff --git a/doc/rbd/rbd-kubernetes.rst b/doc/rbd/rbd-kubernetes.rst
index c83f6851c60..04466b0acf7 100644
--- a/doc/rbd/rbd-kubernetes.rst
+++ b/doc/rbd/rbd-kubernetes.rst
@@ -44,7 +44,7 @@ By default, Ceph block devices use the ``rbd`` pool. Create a pool for
 Kubernetes volume storage. Ensure your Ceph cluster is running, then create
 the pool. ::
 
-  $ ceph osd pool create kubernetes 128
+  $ ceph osd pool create kubernetes
 
 See `Create a Pool`_ for details on specifying the number of placement
 groups for your pools, and `Placement Groups`_ for details on the number of placement
diff --git a/doc/rbd/rbd-openstack.rst b/doc/rbd/rbd-openstack.rst
index 929d4c92616..75aefbb0597 100644
--- a/doc/rbd/rbd-openstack.rst
+++ b/doc/rbd/rbd-openstack.rst
@@ -77,10 +77,10 @@ By default, Ceph block devices use the ``rbd`` pool. You may use any available
 pool. We recommend creating a pool for Cinder and a pool for Glance. Ensure
 your Ceph cluster is running, then create the pools. ::
 
-    ceph osd pool create volumes 128
-    ceph osd pool create images 128
-    ceph osd pool create backups 128
-    ceph osd pool create vms 128
+    ceph osd pool create volumes
+    ceph osd pool create images
+    ceph osd pool create backups
+    ceph osd pool create vms
 
 See `Create a Pool`_ for detail on specifying the number of placement
 groups for your pools, and `Placement Groups`_ for details on the number of placement
diff --git a/doc/start/kube-helm.rst b/doc/start/kube-helm.rst
index e50f0d4d6a7..ff5b9902ec1 100644
--- a/doc/start/kube-helm.rst
+++ b/doc/start/kube-helm.rst
@@ -255,7 +255,7 @@ Copy the user secret from the ``ceph`` namespace to ``default``::
 
 Create and initialize the RBD pool::
 
-  $ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- ceph osd pool create rbd 256
+  $ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- ceph osd pool create rbd
   pool 'rbd' created
 
   $ kubectl -n ceph exec -ti ceph-mon-cppdk -c ceph-mon -- rbd pool init rbd
diff --git a/doc/start/quick-ceph-deploy.rst b/doc/start/quick-ceph-deploy.rst
index 40a17472715..84b0998c00f 100644
--- a/doc/start/quick-ceph-deploy.rst
+++ b/doc/start/quick-ceph-deploy.rst
@@ -299,7 +299,7 @@ example::
 ``rados put`` command on the command line. For example::
 
    echo {Test-data} > testfile.txt
-   ceph osd pool create mytest 8
+   ceph osd pool create mytest
    rados put {object-name} {file-path} --pool=mytest
    rados put test-object-1 testfile.txt --pool=mytest
diff --git a/doc/start/quick-cephfs.rst b/doc/start/quick-cephfs.rst
index 5e74679dbb1..da98f9465cd 100644
--- a/doc/start/quick-cephfs.rst
+++ b/doc/start/quick-cephfs.rst
@@ -36,8 +36,8 @@ become active until you create some pools and a file system. See :doc:`/cephfs/
 
 ::
 
-    ceph osd pool create cephfs_data
-    ceph osd pool create cephfs_metadata
+    ceph osd pool create cephfs_data
+    ceph osd pool create cephfs_metadata
     ceph fs new cephfs_metadata cephfs_data
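
For illustration, a minimal sketch of the workflow the updated docs describe follows;
the pool name ``mypool`` and the ``rbd`` application used here are placeholders and
are not taken from the patch above::

    # create a pool without a pg_num; the pg_autoscaler picks a value and
    # adjusts it later based on how much data the pool stores
    ceph osd pool create mypool
    ceph osd pool application enable mypool rbd

    # turn automatic pg_num tuning on, off, or into a warning for this pool
    ceph osd pool set mypool pg_autoscale_mode on

    # review the current and suggested PG counts chosen by the autoscaler
    ceph osd pool autoscale-status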