``bluestore_cache_meta_ratio`` and ``bluestore_cache_kv_ratio`` options are
used as fallbacks.
-.. conf_val:: bluestore_cache_autotune
-.. conf_val:: osd_memory_target
-.. conf_val:: bluestore_cache_autotune_interval
-.. conf_val:: osd_memory_base
-.. conf_val:: osd_memory_expected_fragmentation
-.. conf_val:: osd_memory_cache_min
-.. conf_val:: osd_memory_cache_resize_interval
+.. confval:: bluestore_cache_autotune
+.. confval:: osd_memory_target
+.. confval:: bluestore_cache_autotune_interval
+.. confval:: osd_memory_base
+.. confval:: osd_memory_expected_fragmentation
+.. confval:: osd_memory_cache_min
+.. confval:: osd_memory_cache_resize_interval
Manual Cache Sizing
The data fraction can be calculated by
``<effective_cache_size> * (1 - bluestore_cache_meta_ratio - bluestore_cache_kv_ratio)``
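+
+For example, assuming an effective cache size of 3 GiB and illustrative
+ratios of 0.45 for both ``bluestore_cache_meta_ratio`` and
+``bluestore_cache_kv_ratio``, the data fraction would be
+``3 GiB * (1 - 0.45 - 0.45) = 0.3 GiB``.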
-.. conf_val:: bluestore_cache_size
-.. conf_val:: bluestore_cache_size_hdd
-.. conf_val:: bluestore_cache_size_ssd
-.. conf_val:: bluestore_cache_meta_ratio
-.. conf_val:: bluestore_cache_kv_ratio
+.. confval:: bluestore_cache_size
+.. confval:: bluestore_cache_size_hdd
+.. confval:: bluestore_cache_size_ssd
+.. confval:: bluestore_cache_meta_ratio
+.. confval:: bluestore_cache_kv_ratio
Checksums
=========
ceph osd pool set <pool-name> csum_type <algorithm>
-.. conf_val:: bluestore_csum_type
+.. confval:: bluestore_csum_type
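+
+For example, to explicitly select the default ``crc32c`` algorithm on a
+hypothetical pool named ``mypool``, run
+``ceph osd pool set mypool csum_type crc32c``.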
Inline Compression
==================
.. literalinclude:: pool-pg.conf
:language: ini
-
-``mon_max_pool_pg_num``
-
-:Description: The maximum number of placement groups per pool.
-:Type: Integer
-:Default: ``65536``
-
-
-``mon_pg_create_interval``
-
-:Description: Number of seconds between PG creation in the same
- Ceph OSD Daemon.
-
-:Type: Float
-:Default: ``30.0``
-
-
-``mon_pg_stuck_threshold``
-
-:Description: Number of seconds after which PGs can be considered as
- being stuck.
-
-:Type: 32-bit Integer
-:Default: ``300``
-
-``mon_pg_min_inactive``
-
-:Description: Raise ``HEALTH_ERR`` if the count of PGs that have been
- inactive longer than the ``mon_pg_stuck_threshold`` exceeds this
- setting. A non-positive number means disabled, never go into ERR.
-:Type: Integer
-:Default: ``1``
-
-
-``mon_pg_warn_min_per_osd``
-
-:Description: Raise ``HEALTH_WARN`` if the average number
- of PGs per ``in`` OSD is under this number. A non-positive number
- disables this.
-:Type: Integer
-:Default: ``30``
-
-
-``mon_pg_warn_min_objects``
-
-:Description: Do not warn if the total number of RADOS objects in cluster is below
- this number
-:Type: Integer
-:Default: ``1000``
-
-
-``mon_pg_warn_min_pool_objects``
-
-:Description: Do not warn on pools whose RADOS object count is below this number
-:Type: Integer
-:Default: ``1000``
-
-
-``mon_pg_check_down_all_threshold``
-
-:Description: Percentage threshold of ``down`` OSDs above which we check all PGs
- for stale ones.
-:Type: Float
-:Default: ``0.5``
-
-
-``mon_pg_warn_max_object_skew``
-
-:Description: Raise ``HEALTH_WARN`` if the average RADOS object count per PG
- of any pool is greater than ``mon_pg_warn_max_object_skew`` times
- the average RADOS object count per PG of all pools. Zero or a non-positive
- number disables this. Note that this option applies to ``ceph-mgr`` daemons.
-:Type: Float
-:Default: ``10``
-
-
-``mon_delta_reset_interval``
-
-:Description: Seconds of inactivity before we reset the PG delta to 0. We keep
- track of the delta of the used space of each pool, so, for
- example, it would be easier for us to understand the progress of
- recovery or the performance of cache tier. But if there's no
- activity reported for a certain pool, we just reset the history of
- deltas of that pool.
-:Type: Integer
-:Default: ``10``
-
-
-``mon_osd_max_op_age``
-
-:Description: Maximum op age before we get concerned (make it a power of 2).
- ``HEALTH_WARN`` will be raised if a request has been blocked longer
- than this limit.
-:Type: Float
-:Default: ``32.0``
-
-
-``osd_pg_bits``
-
-:Description: Placement group bits per Ceph OSD Daemon.
-:Type: 32-bit Integer
-:Default: ``6``
-
-
-``osd_pgp_bits``
-
-:Description: The number of bits per Ceph OSD Daemon for PGPs.
-:Type: 32-bit Integer
-:Default: ``6``
-
-
-``osd_crush_chooseleaf_type``
-
-:Description: The bucket type to use for ``chooseleaf`` in a CRUSH rule. Uses
- ordinal rank rather than name.
-
-:Type: 32-bit Integer
-:Default: ``1``. Typically a host containing one or more Ceph OSD Daemons.
-
-
-``osd_crush_initial_weight``
-
-:Description: The initial CRUSH weight for newly added OSDs.
-
-:Type: Double
-:Default: ``the size of a newly added OSD in TB``. By default, the initial CRUSH
- weight for a newly added OSD is set to its device size in TB.
- See `Weighting Bucket Items`_ for details.
-
-
-``osd_pool_default_crush_rule``
-
-:Description: The default CRUSH rule to use when creating a replicated pool.
-:Type: 8-bit Integer
-:Default: ``-1``, which means "pick the rule with the lowest numerical ID and
- use that". This is to make pool creation work in the absence of rule 0.
-
-
-``osd_pool_erasure_code_stripe_unit``
-
-:Description: Sets the default size, in bytes, of a chunk of an object
- stripe for erasure coded pools. Every object of size S
- will be stored as N stripes, with each data chunk
- receiving ``stripe unit`` bytes. Each stripe of ``N *
- stripe unit`` bytes will be encoded/decoded
- individually. This option can is overridden by the
- ``stripe_unit`` setting in an erasure code profile.
-
-:Type: Unsigned 32-bit Integer
-:Default: ``4096``
-
-
-``osd_pool_default_size``
-
-:Description: Sets the number of replicas for objects in the pool. The default
- value is the same as
- ``ceph osd pool set {pool-name} size {size}``.
-
-:Type: 32-bit Integer
-:Default: ``3``
-
-
-``osd_pool_default_min_size``
-
-:Description: Sets the minimum number of written replicas for objects in the
- pool in order to acknowledge an I/O operation to the client. If
- minimum is not met, Ceph will not acknowledge the I/O to the
- client, **which may result in data loss**. This setting ensures
- a minimum number of replicas when operating in ``degraded`` mode.
-
-:Type: 32-bit Integer
-:Default: ``0``, which means no particular minimum. If ``0``,
- minimum is ``size - (size / 2)``.
-
-
-``osd_pool_default_pg_num``
-
-:Description: The default number of placement groups for a pool. The default
- value is the same as ``pg_num`` with ``mkpool``.
-
-:Type: 32-bit Integer
-:Default: ``32``
-
-
-``osd_pool_default_pgp_num``
-
-:Description: The default number of placement groups for placement for a pool.
- The default value is the same as ``pgp_num`` with ``mkpool``.
- PG and PGP should be equal (for now).
-
-:Type: 32-bit Integer
-:Default: ``8``
-
-
-``osd_pool_default_flags``
-
-:Description: The default flags for new pools.
-:Type: 32-bit Integer
-:Default: ``0``
-
-
-``osd_max_pgls``
-
-:Description: The maximum number of placement groups to list. A client
- requesting a large number can tie up the Ceph OSD Daemon.
-
-:Type: Unsigned 64-bit Integer
-:Default: ``1024``
-:Note: Default should be fine.
-
-
-``osd_min_pg_log_entries``
-
-:Description: The minimum number of placement group logs to maintain
- when trimming log files.
-
-:Type: 32-bit Int Unsigned
-:Default: ``250``
-
-
-``osd_max_pg_log_entries``
-
-:Description: The maximum number of placement group logs to maintain
- when trimming log files.
-
-:Type: 32-bit Int Unsigned
-:Default: ``10000``
-
-
-``osd_default_data_pool_replay_window``
-
-:Description: The time (in seconds) for an OSD to wait for a client to replay
- a request.
-
-:Type: 32-bit Integer
-:Default: ``45``
-
-``osd_max_pg_per_osd_hard_ratio``
-
-:Description: The ratio of number of PGs per OSD allowed by the cluster before the
- OSD refuses to create new PGs. An OSD stops creating new PGs if the number
- of PGs it serves exceeds
- ``osd_max_pg_per_osd_hard_ratio`` \* ``mon_max_pg_per_osd``.
-
-:Type: Float
-:Default: ``2``
-
-``osd_recovery_priority``
-
-:Description: Priority of recovery in the work queue.
-
-:Type: Integer
-:Default: ``5``
-
-``osd_recovery_op_priority``
-
-:Description: Default priority used for recovery operations if pool doesn't override.
-
-:Type: Integer
-:Default: ``3``
+.. confval:: mon_max_pool_pg_num
+.. confval:: mon_pg_stuck_threshold
+.. confval:: mon_pg_warn_min_per_osd
+.. confval:: mon_pg_warn_min_objects
+.. confval:: mon_pg_warn_min_pool_objects
+.. confval:: mon_pg_check_down_all_threshold
+.. confval:: mon_pg_warn_max_object_skew
+.. confval:: mon_delta_reset_interval
+.. confval:: osd_crush_chooseleaf_type
+.. confval:: osd_crush_initial_weight
+.. confval:: osd_pool_default_crush_rule
+.. confval:: osd_pool_erasure_code_stripe_unit
+.. confval:: osd_pool_default_size
+.. confval:: osd_pool_default_min_size
+.. confval:: osd_pool_default_pg_num
+.. confval:: osd_pool_default_pgp_num
+.. confval:: osd_pool_default_flags
+.. confval:: osd_max_pgls
+.. confval:: osd_min_pg_log_entries
+.. confval:: osd_max_pg_log_entries
+.. confval:: osd_default_data_pool_replay_window
+.. confval:: osd_max_pg_per_osd_hard_ratio
+.. confval:: osd_recovery_priority
+.. confval:: osd_recovery_op_priority
.. _pool: ../../operations/pools
.. _Monitoring OSDs and PGs: ../../operations/monitoring-osd-pg#peering
type: float
level: advanced
desc: window duration for rate calculations in 'ceph status'
+ fmt_desc: Seconds of inactivity before the PG delta is reset to 0. Ceph keeps
+    track of the delta of the used space of each pool, which makes it easier
+    to gauge, for example, the progress of recovery or the performance of a
+    cache tier. If no activity is reported for a pool for this interval, the
+    history of deltas for that pool is reset.
default: 10
services:
- mon
desc: number of seconds after which pgs can be considered stuck inactive, unclean,
etc
long_desc: see doc/control.rst under dump_stuck for more info
+ fmt_desc: Number of seconds after which PGs can be considered stuck.
default: 1_min
services:
- mgr
type: uint
level: advanced
desc: minimal number PGs per (in) osd before we warn the admin
+ fmt_desc: Raise ``HEALTH_WARN`` if the average number
+ of PGs per ``in`` OSD is under this number. A non-positive number
+ disables this.
default: 0
services:
- mgr
type: float
level: advanced
desc: max skew few average in objects per pg
+ fmt_desc: Raise ``HEALTH_WARN`` if the average RADOS object count per PG
+ of any pool is greater than ``mon_pg_warn_max_object_skew`` times
+ the average RADOS object count per PG of all pools. Zero or a non-positive
+ number disables this. Note that this option applies to ``ceph-mgr`` daemons.
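+    For example, if the cluster-wide average were 100 RADOS objects per PG,
+    a factor of ``10`` would raise the warning for any pool averaging more
+    than 1,000 objects per PG.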
default: 10
services:
- mgr
type: int
level: advanced
desc: 'do not warn below this object #'
+ fmt_desc: Do not warn if the total number of RADOS objects in the cluster is
+    below this number.
default: 10000
services:
- mgr
type: int
level: advanced
desc: 'do not warn on pools below this object #'
+ fmt_desc: Do not warn on pools whose RADOS object count is below this number.
default: 1000
services:
- mgr
type: float
level: advanced
desc: threshold of down osds after which we check all pgs
+ fmt_desc: Percentage threshold of ``down`` OSDs above which we check all PGs
+ for stale ones.
default: 0.5
services:
- mgr
type: uint
level: advanced
default: 64_K
+ fmt_desc: The maximum number of placement groups per pool.
- name: mon_pool_quota_warn_threshold
type: int
level: advanced
type: uint
level: advanced
desc: maximum number of results when listing objects in a pool
+ fmt_desc: The maximum number of placement groups to list. A client
+ requesting a large number can tie up the Ceph OSD Daemon.
default: 1_K
with_legacy: true
- name: osd_client_message_size_cap
type: int
level: dev
desc: default chooseleaf type for osdmaptool --create
+ fmt_desc: The bucket type to use for ``chooseleaf`` in a CRUSH rule. Uses
+ ordinal rank rather than name.
default: 1
flags:
- cluster_create
level: advanced
desc: if >= 0, initial CRUSH weight for newly created OSDs
long_desc: If this value is negative, the size of the OSD in TiB is used.
+ fmt_desc: The initial CRUSH weight for newly added OSDs. By default, the
+    initial CRUSH weight for a newly added OSD is set to its device size in
+    TB. See `Weighting Bucket Items`_ for details.
default: -1
with_legacy: true
# whether turn on fast read on the pool or not
type: int
level: advanced
desc: CRUSH rule for newly created pools
+ fmt_desc: The default CRUSH rule to use when creating a replicated pool. The
+ default value of ``-1`` means "pick the rule with the lowest numerical ID and
+ use that". This is to make pool creation work in the absence of rule 0.
default: -1
services:
- mon
type: size
level: advanced
desc: the amount of data (in bytes) in a data chunk, per stripe
+ fmt_desc: Sets the default size, in bytes, of a chunk of an object
+    stripe for erasure coded pools. Every object of size S
+    will be stored as N stripes, with each data chunk
+    receiving ``stripe unit`` bytes. Each stripe of
+    ``N * stripe unit`` bytes will be encoded/decoded
+    individually. This option can be overridden by the
+    ``stripe_unit`` setting in an erasure code profile.
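+    For example, assuming an erasure code profile with ``k=4`` data chunks
+    and the default 4096-byte stripe unit, each stripe would carry
+    ``4 * 4096 = 16384`` bytes of object data.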
default: 4_K
services:
- mon
type: uint
level: advanced
desc: the number of copies of an object for new replicated pools
+ fmt_desc: Sets the number of replicas for objects in the pool. The default
+ value is the same as
+ ``ceph osd pool set {pool-name} size {size}``.
default: 3
services:
- mon
desc: the minimal number of copies allowed to write to a degraded pool for new replicated
pools
long_desc: 0 means no specific default; ceph will use size-size/2
+ fmt_desc: Sets the minimum number of written replicas for objects in the
+    pool in order to acknowledge an I/O operation to the client. If the
+    minimum is not met, Ceph will not acknowledge the I/O to the
+    client, **which may result in data loss**. This setting ensures
+    a minimum number of replicas when operating in ``degraded`` mode.
+    The default value is ``0``, which means no particular minimum. If ``0``,
+    the minimum is ``size - (size / 2)``.
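+    For example, with a pool ``size`` of ``3``, the computed minimum is
+    ``3 - (3 / 2) = 2`` (using integer division).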
default: 0
services:
- mon
type: uint
level: advanced
desc: number of PGs for new pools
+ fmt_desc: The default number of placement groups for a pool. The default
+ value is the same as ``pg_num`` with ``mkpool``.
default: 32
services:
- mon
type: uint
level: advanced
desc: number of PGs for placement purposes (0 to match pg_num)
+ fmt_desc: The default number of placement groups for placement for a pool.
+ The default value is the same as ``pgp_num`` with ``mkpool``.
+ PG and PGP should be equal (for now).
default: 0
services:
- mon
type: int
level: dev
desc: (integer) flags to set on new pools
+ fmt_desc: The default flags for new pools.
default: 0
services:
- mon
type: int
level: advanced
default: 45
+ fmt_desc: The time (in seconds) for an OSD to wait for a client to replay
+ a request.
- name: osd_auto_mark_unfound_lost
type: bool
level: advanced
type: uint
level: dev
desc: minimum number of entries to maintain in the PG log
+ fmt_desc: The minimum number of placement group log entries to maintain
+    when trimming the PG log.
default: 250
services:
- osd
type: uint
level: dev
desc: maximum number of entries to maintain in the PG log
+ fmt_desc: The maximum number of placement group log entries to maintain
+    when trimming the PG log.
default: 10000
services:
- osd
desc: Maximum number of PG per OSD, a factor of 'mon_max_pg_per_osd'
long_desc: OSD will refuse to instantiate PG if the number of PG it serves exceeds
this number.
+ fmt_desc: The ratio of number of PGs per OSD allowed by the cluster before the
+ OSD refuses to create new PGs. An OSD stops creating new PGs if the number
+ of PGs it serves exceeds
+ ``osd_max_pg_per_osd_hard_ratio`` \* ``mon_max_pg_per_osd``.
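+    For example, with this ratio at its default of ``3`` and
+    ``mon_max_pg_per_osd`` set to a hypothetical ``250``, an OSD stops
+    creating new PGs once it already serves more than ``750`` of them.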
default: 3
see_also:
- mon_max_pg_per_osd
type: uint
level: advanced
desc: Priority to use for recovery operations if not specified for the pool
+ fmt_desc: Default priority used for recovery operations if the pool does not
+    override it.
default: 3
with_legacy: true
- name: osd_peering_op_priority