From: Alfredo Deza
Date: Mon, 14 Aug 2017 22:57:06 +0000 (-0400)
Subject: doc/rados/operations add newline after definition
X-Git-Tag: v13.0.0~94^2~20
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=e925b9f899536a881bd27d3d7aaf6509531de540;p=ceph.git

doc/rados/operations add newline after definition

Signed-off-by: Alfredo Deza
---

diff --git a/doc/rados/operations/pools.rst b/doc/rados/operations/pools.rst
index 70155937cad0..131ee514701a 100644
--- a/doc/rados/operations/pools.rst
+++ b/doc/rados/operations/pools.rst
@@ -6,33 +6,33 @@ When you first deploy a cluster without creating a pool, Ceph uses the default
 pools for storing data. A pool provides you with:
 
 - **Resilience**: You can set how many OSD are allowed to fail without losing data.
-  For replicated pools, it is the desired number of copies/replicas of an object. 
+  For replicated pools, it is the desired number of copies/replicas of an object.
   A typical configuration stores an object and one additional copy
   (i.e., ``size = 2``), but you can determine the number of copies/replicas.
   For `erasure coded pools <../erasure-code>`_, it is the number of
   coding chunks (i.e. ``m=2`` in the **erasure code profile**)
- 
+
 - **Placement Groups**: You can set the number of placement groups for the pool.
-  A typical configuration uses approximately 100 placement groups per OSD to 
-  provide optimal balancing without using up too many computing resources. When 
+  A typical configuration uses approximately 100 placement groups per OSD to
+  provide optimal balancing without using up too many computing resources. When
   setting up multiple pools, be careful to ensure you set a reasonable number of
-  placement groups for both the pool and the cluster as a whole. 
+  placement groups for both the pool and the cluster as a whole.
 
-- **CRUSH Rules**: When you store data in a pool, a CRUSH ruleset mapped to the 
-  pool enables CRUSH to identify a rule for the placement of the object 
-  and its replicas (or chunks for erasure coded pools) in your cluster. 
+- **CRUSH Rules**: When you store data in a pool, a CRUSH ruleset mapped to the
+  pool enables CRUSH to identify a rule for the placement of the object
+  and its replicas (or chunks for erasure coded pools) in your cluster.
   You can create a custom CRUSH rule for your pool.
- 
-- **Snapshots**: When you create snapshots with ``ceph osd pool mksnap``, 
+
+- **Snapshots**: When you create snapshots with ``ceph osd pool mksnap``,
   you effectively take a snapshot of a particular pool.
- 
-To organize data into pools, you can list, create, and remove pools. 
+
+To organize data into pools, you can list, create, and remove pools.
 You can also view the utilization statistics for each pool.
 
 List Pools
 ==========
 
-To list your cluster's pools, execute:: 
+To list your cluster's pools, execute::
 
 	ceph osd lspools
 
@@ -53,19 +53,19 @@ For details on placement group numbers refer to `setting the number of placement
 application using the pool. See `Associate Pool to Application`_ below for more
 information.
 
-For example:: 
+For example::
 
 	osd pool default pg num = 100
 	osd pool default pgp num = 100
 
-To create a pool, execute:: 
+To create a pool, execute::
 
 	ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] \
 	     [crush-rule-name] [expected-num-objects]
 	ceph osd pool create {pool-name} {pg-num} {pgp-num} erasure \
 	     [erasure-code-profile] [crush-rule-name] [expected_num_objects]
 
-Where: 
+Where:
 
 ``{pool-name}``
 
@@ -76,7 +76,7 @@ Where:
 ``{pg-num}``
 
 :Description: The total number of placement groups for the pool. See `Placement
-              Groups`_ for details on calculating a suitable number. The 
+              Groups`_ for details on calculating a suitable number. The
               default value ``8`` is NOT suitable for most systems.
 
 :Type: Integer
@@ -86,7 +86,7 @@ Where:
 ``{pgp-num}``
 
 :Description: The total number of placement groups for placement purposes. This
-              **should be equal to the total number of placement groups**, except 
+              **should be equal to the total number of placement groups**, except
               for placement group splitting scenarios.
 
 :Type: Integer
@@ -105,7 +105,7 @@ Where:
               implement a subset of the available operations.
 
 :Type: String
-:Required: No. 
+:Required: No.
 :Default: replicated
 
 ``[crush-rule-name]``
@@ -114,7 +114,7 @@ Where:
               rule must exist.
 
 :Type: String
-:Required: No. 
+:Required: No.
 :Default: For **replicated** pools it is the ruleset specified by the
           ``osd pool default crush replicated ruleset`` config variable.
           This ruleset must exist.
@@ -128,11 +128,11 @@ Where:
 .. _erasure code profile: ../erasure-code-profile
 
 :Description: For **erasure** pools only. Use the `erasure code profile`_. It
-              must be an existing profile as defined by 
+              must be an existing profile as defined by
               **osd erasure-code-profile set**.
 
 :Type: String
-:Required: No. 
+:Required: No.
 
 When you create a pool, set the number of placement groups to a reasonable
 value (e.g., ``100``). Consider the total number of placement groups per OSD too.
@@ -155,7 +155,7 @@ placement groups for your pool.
 
 :Type: Integer
 :Required: No.
-:Default: 0, no splitting at the pool creation time. 
+:Default: 0, no splitting at the pool creation time.
 
 Associate Pool to Application
 =============================
@@ -177,12 +177,12 @@ a pool.::
 
 Set Pool Quotas
 ===============
 
-You can set pool quotas for the maximum number of bytes and/or the maximum 
+You can set pool quotas for the maximum number of bytes and/or the maximum
 number of objects per pool. ::
 
-	ceph osd pool set-quota {pool-name} [max_objects {obj-count}] [max_bytes {bytes}] 
+	ceph osd pool set-quota {pool-name} [max_objects {obj-count}] [max_bytes {bytes}]
 
-For example:: 
+For example::
 
 	ceph osd pool set-quota data max_objects 10000
 
@@ -203,7 +203,7 @@ configuration. Otherwise they will refuse to remove a pool.
 See `Monitor Configuration`_ for more information.
 
 .. _Monitor Configuration: ../../configuration/mon-config-ref
- 
+
 If you created your own rulesets and rules for a pool you created, you should
 consider removing them when you no longer need your pool::
 
@@ -226,42 +226,42 @@ exists, you should consider deleting those users too::
 
 Rename a Pool
 =============
 
-To rename a pool, execute:: 
+To rename a pool, execute::
 
 	ceph osd pool rename {current-pool-name} {new-pool-name}
 
-If you rename a pool and you have per-pool capabilities for an authenticated 
+If you rename a pool and you have per-pool capabilities for an authenticated
 user, you must update the user's capabilities (i.e., caps) with the new pool
-name. 
+name.
 
 .. note:: Version ``0.48`` Argonaut and above.
 
 Show Pool Statistics
 ====================
 
-To show a pool's utilization statistics, execute:: 
+To show a pool's utilization statistics, execute::
 
 	rados df
- 
+
 
 Make a Snapshot of a Pool
 =========================
 
-To make a snapshot of a pool, execute:: 
+To make a snapshot of a pool, execute::
+
+	ceph osd pool mksnap {pool-name} {snap-name}
 
-	ceph osd pool mksnap {pool-name} {snap-name} 
- 
 .. note:: Version ``0.48`` Argonaut and above.
 
 
 Remove a Snapshot of a Pool
 ===========================
 
-To remove a snapshot of a pool, execute:: 
+To remove a snapshot of a pool, execute::
 
 	ceph osd pool rmsnap {pool-name} {snap-name}
 
-.. note:: Version ``0.48`` Argonaut and above. 
+.. note:: Version ``0.48`` Argonaut and above.
 
 .. _setpoolvalues:
 
@@ -269,15 +269,16 @@ To remove a snapshot of a pool, execute::
 
 Set Pool Values
 ===============
 
-To set a value to a pool, execute the following:: 
+To set a value to a pool, execute the following::
 
 	ceph osd pool set {pool-name} {key} {value}
- 
-You may set values for the following keys: 
+
+You may set values for the following keys:
 
 .. _compression_algorithm:
 
 ``compression_algorithm``
+
 :Description: Sets inline compression algorithm to use for underlying BlueStore.
               This setting overrides the `global setting `_ of ``bluestore compression algorithm``.
 
@@ -310,8 +311,8 @@ You may set values for the following keys:
 
 ``size``
 
-:Description: Sets the number of replicas for objects in the pool. 
-              See `Set the Number of Object Replicas`_ for further details. 
+:Description: Sets the number of replicas for objects in the pool.
+              See `Set the Number of Object Replicas`_ for further details.
               Replicated pools only.
 
 :Type: Integer
@@ -320,8 +321,8 @@ You may set values for the following keys:
 
 ``min_size``
 
-:Description: Sets the minimum number of replicas required for I/O. 
-              See `Set the Number of Object Replicas`_ for further details. 
+:Description: Sets the minimum number of replicas required for I/O.
+              See `Set the Number of Object Replicas`_ for further details.
               Replicated pools only.
 
 :Type: Integer
@@ -331,7 +332,7 @@ You may set values for the following keys:
 
 ``pg_num``
 
-:Description: The effective number of placement groups to use when calculating 
+:Description: The effective number of placement groups to use when calculating
               data placement.
 :Type: Integer
 :Valid Range: Superior to ``pg_num`` current value.
 
@@ -340,7 +341,7 @@ You may set values for the following keys:
 
 ``pgp_num``
 
-:Description: The effective number of placement groups for placement to use 
+:Description: The effective number of placement groups for placement to use
               when calculating data placement.
 
 :Type: Integer
 
@@ -370,7 +371,7 @@ You may set values for the following keys:
 :Description: Set/Unset HASHPSPOOL flag on a given pool.
 :Type: Integer
 :Valid Range: 1 sets flag, 0 unsets flag
-:Version: Version ``0.48`` Argonaut and above. 
+:Version: Version ``0.48`` Argonaut and above.
 
 .. _nodelete:
 
@@ -438,7 +439,7 @@ You may set values for the following keys:
 
 ``hit_set_count``
 
-:Description: The number of hit sets to store for cache pools. The higher 
+:Description: The number of hit sets to store for cache pools. The higher
               the number, the more RAM consumed by the ``ceph-osd`` daemon.
 
 :Type: Integer
 
@@ -448,8 +449,8 @@ You may set values for the following keys:
 
 ``hit_set_period``
 
-:Description: The duration of a hit set period in seconds for cache pools. 
-              The higher the number, the more RAM consumed by the 
+:Description: The duration of a hit set period in seconds for cache pools.
+              The higher the number, the more RAM consumed by the
               ``ceph-osd`` daemon.
 
 :Type: Integer
 
@@ -470,10 +471,10 @@ You may set values for the following keys:
 
 ``cache_target_dirty_ratio``
 
-:Description: The percentage of the cache pool containing modified (dirty) 
+:Description: The percentage of the cache pool containing modified (dirty)
               objects before the cache tiering agent will flush them to the
               backing storage pool.
- 
+
 :Type: Double
 :Default: ``.4``
 
@@ -495,7 +496,7 @@ You may set values for the following keys:
 :Description: The percentage of the cache pool containing unmodified (clean)
               objects before the cache tiering agent will evict them
               from the cache pool.
- 
+
 :Type: Double
 :Default: ``.8``
 
@@ -503,17 +504,17 @@ You may set values for the following keys:
 
 ``target_max_bytes``
 
-:Description: Ceph will begin flushing or evicting objects when the 
+:Description: Ceph will begin flushing or evicting objects when the
               ``max_bytes`` threshold is triggered.
- 
+
 :Type: Integer
 :Example: ``1000000000000`` #1-TB
 
 .. _target_max_objects:
 
-``target_max_objects`` 
+``target_max_objects``
 
-:Description: Ceph will begin flushing or evicting objects when the 
+:Description: Ceph will begin flushing or evicting objects when the
               ``max_objects`` threshold is triggered.
 
 :Type: Integer
 
@@ -540,11 +541,11 @@ You may set values for the following keys:
 
 ``cache_min_flush_age``
 
-:Description: The time (in seconds) before the cache tiering agent will flush 
+:Description: The time (in seconds) before the cache tiering agent will flush
               an object from the cache pool to the storage pool.
- 
+
 :Type: Integer
-:Example: ``600`` 10min 
+:Example: ``600`` 10min
 
 .. _cache_min_evict_age:
 
@@ -552,7 +553,7 @@ You may set values for the following keys:
 ``cache_min_evict_age``
 
 :Description: The time (in seconds) before the cache tiering agent will evict
               an object from the cache pool.
- 
+
 :Type: Integer
 :Example: ``1800`` 30min
 
@@ -607,11 +608,11 @@ You may set values for the following keys:
 
 Get Pool Values
 ===============
 
-To get a value from a pool, execute the following:: 
+To get a value from a pool, execute the following::
 
 	ceph osd pool get {pool-name} {key}
- 
-You may get values for the following keys: 
+
+You may get values for the following keys:
 
 ``size``
 
@@ -691,18 +692,18 @@ You may get values for the following keys:
 
 ``cache_target_full_ratio``
 
 :Description: see cache_target_full_ratio_
- 
+
 :Type: Double
 
 ``target_max_bytes``
 
 :Description: see target_max_bytes_
- 
+
 :Type: Integer
 
-``target_max_objects`` 
+``target_max_objects``
 
 :Description: see target_max_objects_
 
 :Type: Integer
 
@@ -712,14 +713,14 @@ You may get values for the following keys:
 
 ``cache_min_flush_age``
 
 :Description: see cache_min_flush_age_
- 
+
 :Type: Integer
 
 ``cache_min_evict_age``
 
 :Description: see cache_min_evict_age_
- 
+
 :Type: Integer
 
@@ -754,19 +755,19 @@ Set the Number of Object Replicas
 =================================
 
-To set the number of object replicas on a replicated pool, execute the following:: 
+To set the number of object replicas on a replicated pool, execute the following::
 
 	ceph osd pool set {poolname} size {num-replicas}
 
 .. important:: The ``{num-replicas}`` includes the object itself.
-   If you want the object and two copies of the object for a total of 
+   If you want the object and two copies of the object for a total of
    three instances of the object, specify ``3``.
- 
-For example:: 
+
+For example::
 
 	ceph osd pool set data size 3
 
-You may execute this command for each pool. **Note:** An object might accept 
+You may execute this command for each pool. **Note:** An object might accept
 I/Os in degraded mode with fewer than ``pool size`` replicas. To set a minimum
 number of required replicas for I/O, you should use the ``min_size`` setting.
 For example::
 
 	ceph osd pool set data min_size 2
 
 This ensures that no object in the data pool will receive I/O with fewer than
@@ -780,12 +781,12 @@ This ensures that no object in the data pool will receive I/O with fewer than
 
 Get the Number of Object Replicas
 =================================
 
-To get the number of object replicas, execute the following:: 
+To get the number of object replicas, execute the following::
 
 	ceph osd dump | grep 'replicated size'
- 
+
 Ceph will list the pools, with the ``replicated size`` attribute highlighted.
-By default, ceph creates two replicas of an object (a total of three copies, or 
+By default, ceph creates two replicas of an object (a total of three copies, or
 a size of 3).
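
As a quick way to exercise the commands this patch documents, a minimal
end-to-end session might look like the following. This is an illustrative
sketch, not part of the patch: the pool name ``example-pool``, the ``rbd``
application tag, and all numbers are hypothetical, and it assumes a running
Luminous-era cluster with the defaults described above::

	# create a replicated pool with 100 placement groups (and pgp-num to match)
	ceph osd pool create example-pool 100 100 replicated
	# tag the pool with the application that will use it
	ceph osd pool application enable example-pool rbd
	# cap the pool at 10000 objects
	ceph osd pool set-quota example-pool max_objects 10000
	# keep three replicas of each object, and require two for I/O
	ceph osd pool set example-pool size 3
	ceph osd pool set example-pool min_size 2
	# confirm the pool exists and check its utilization
	ceph osd lspools
	rados df
	# clean up; as noted above, this requires the monitors to allow pool deletion
	ceph osd pool delete example-pool example-pool --yes-i-really-really-mean-it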