practice is to partition the journal drive (often an SSD), and mount it such
that Ceph uses the entire partition for the journal.
-
-``osd_uuid``
-
-:Description: The universally unique identifier (UUID) for the Ceph OSD Daemon.
-:Type: UUID
-:Default: The UUID.
-:Note: The ``osd_uuid`` applies to a single Ceph OSD Daemon. The ``fsid``
- applies to the entire cluster.
-
-
-``osd_data``
-
-:Description: The path to the OSDs data. You must create the directory when
- deploying Ceph. You should mount a drive for OSD data at this
- mount point. We do not recommend changing the default.
-
-:Type: String
-:Default: ``/var/lib/ceph/osd/$cluster-$id``
-
-
-``osd_max_write_size``
-
-:Description: The maximum size of a write in megabytes.
-:Type: 32-bit Integer
-:Default: ``90``
-
-
-``osd_max_object_size``
-
-:Description: The maximum size of a RADOS object in bytes.
-:Type: 32-bit Unsigned Integer
-:Default: 128MB
-
-
-``osd_client_message_size_cap``
-
-:Description: The largest client data message allowed in memory.
-:Type: 64-bit Unsigned Integer
-:Default: 500MB default. ``500*1024L*1024L``
-
-
-``osd_class_dir``
-
-:Description: The class path for RADOS class plug-ins.
-:Type: String
-:Default: ``$libdir/rados-classes``
-
+.. confval:: osd_uuid
+.. confval:: osd_data
+.. confval:: osd_max_write_size
+.. confval:: osd_max_object_size
+.. confval:: osd_client_message_size_cap
+.. confval:: osd_class_dir
+ :default: $libdir/rados-classes
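+
+A minimal ``ceph.conf`` sketch that makes two of these limits explicit is shown
+below. The values are simply the defaults restated for illustration; in most
+deployments these settings can be left untouched.
+
+.. code-block:: ini
+
+    [osd]
+    # Maximum size of a single write, in megabytes (the default).
+    osd_max_write_size = 90
+    # Largest amount of in-flight client data held in memory, in bytes (500 MB).
+    osd_client_message_size_cap = 524288000
+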
.. index:: OSD; file system
osd_journal_size = 10240
-``osd_journal``
-
-:Description: The path to the OSD's journal. This may be a path to a file or a
- block device (such as a partition of an SSD). If it is a file,
- you must create the directory to contain it. We recommend using a
- separate fast device when the ``osd_data`` drive is an HDD.
-
-:Type: String
-:Default: ``/var/lib/ceph/osd/$cluster-$id/journal``
-
-
-``osd_journal_size``
-
-:Description: The size of the journal in megabytes.
-
-:Type: 32-bit Integer
-:Default: ``5120``
-
+.. confval:: osd_journal
+.. confval:: osd_journal_size
See `Journal Config Reference`_ for additional details.
scrubbing operations.
-``osd_max_scrubs``
-
-:Description: The maximum number of simultaneous scrub operations for
- a Ceph OSD Daemon.
-
-:Type: 32-bit Int
-:Default: ``1``
-
-``osd_scrub_begin_hour``
-
-:Description: This restricts scrubbing to this hour of the day or later.
- Use ``osd_scrub_begin_hour = 0`` and ``osd_scrub_end_hour = 0``
- to allow scrubbing the entire day. Along with ``osd_scrub_end_hour``, they define a time
- window, in which the scrubs can happen.
- But a scrub will be performed
- no matter whether the time window allows or not, as long as the placement
- group's scrub interval exceeds ``osd_scrub_max_interval``.
-:Type: Integer in the range of 0 to 23
-:Default: ``0``
-
-
-``osd_scrub_end_hour``
-
-:Description: This restricts scrubbing to the hour earlier than this.
- Use ``osd_scrub_begin_hour = 0`` and ``osd_scrub_end_hour = 0`` to allow scrubbing
- for the entire day. Along with ``osd_scrub_begin_hour``, they define a time
- window, in which the scrubs can happen. But a scrub will be performed
- no matter whether the time window allows or not, as long as the placement
- group's scrub interval exceeds ``osd_scrub_max_interval``.
-:Type: Integer in the range of 0 to 23
-:Default: ``0``
-
-
-``osd_scrub_begin_week_day``
-
-:Description: This restricts scrubbing to this day of the week or later.
- 0 = Sunday, 1 = Monday, etc. Use ``osd_scrub_begin_week_day = 0``
- and ``osd_scrub_end_week_day = 0`` to allow scrubbing for the entire week.
- Along with ``osd_scrub_end_week_day``, they define a time window in which
- scrubs can happen. But a scrub will be performed
- no matter whether the time window allows or not, when the PG's
- scrub interval exceeds ``osd_scrub_max_interval``.
-:Type: Integer in the range of 0 to 6
-:Default: ``0``
-
-
-``osd_scrub_end_week_day``
-
-:Description: This restricts scrubbing to days of the week earlier than this.
- 0 = Sunday, 1 = Monday, etc. Use ``osd_scrub_begin_week_day = 0``
- and ``osd_scrub_end_week_day = 0`` to allow scrubbing for the entire week.
- Along with ``osd_scrub_begin_week_day``, they define a time
- window, in which the scrubs can happen. But a scrub will be performed
- no matter whether the time window allows or not, as long as the placement
- group's scrub interval exceeds ``osd_scrub_max_interval``.
-:Type: Integer in the range of 0 to 6
-:Default: ``0``
-
-
-``osd scrub during recovery``
-
-:Description: Allow scrub during recovery. Setting this to ``false`` will disable
- scheduling new scrub (and deep--scrub) while there is active recovery.
- Already running scrubs will be continued. This might be useful to reduce
- load on busy clusters.
-:Type: Boolean
-:Default: ``false``
-
-
-``osd_scrub_thread_timeout``
-
-:Description: The maximum time in seconds before timing out a scrub thread.
-:Type: 32-bit Integer
-:Default: ``60``
-
-
-``osd_scrub_finalize_thread_timeout``
-
-:Description: The maximum time in seconds before timing out a scrub finalize
- thread.
-
-:Type: 32-bit Integer
-:Default: ``10*60``
-
-
-``osd_scrub_load_threshold``
-
-:Description: The normalized maximum load. Ceph will not scrub when the system load
- (as defined by ``getloadavg() / number of online CPUs``) is higher than this number.
- Default is ``0.5``.
-
-:Type: Float
-:Default: ``0.5``
-
-
-``osd_scrub_min_interval``
-
-:Description: The minimal interval in seconds for scrubbing the Ceph OSD Daemon
- when the Ceph Storage Cluster load is low.
-
-:Type: Float
-:Default: Once per day. ``24*60*60``
-
-.. _osd_scrub_max_interval:
-
-``osd_scrub_max_interval``
-
-:Description: The maximum interval in seconds for scrubbing the Ceph OSD Daemon
- irrespective of cluster load.
-
-:Type: Float
-:Default: Once per week. ``7*24*60*60``
-
-
-``osd_scrub_chunk_min``
-
-:Description: The minimal number of object store chunks to scrub during single operation.
- Ceph blocks writes to single chunk during scrub.
-
-:Type: 32-bit Integer
-:Default: 5
-
-
-``osd_scrub_chunk_max``
-
-:Description: The maximum number of object store chunks to scrub during single operation.
-
-:Type: 32-bit Integer
-:Default: 25
-
-
-``osd_scrub_sleep``
-
-:Description: Time to sleep before scrubbing the next group of chunks. Increasing this value will slow
- down the overall rate of scrubbing so that client operations will be less impacted.
-
-:Type: Float
-:Default: 0
-
-
-``osd_deep_scrub_interval``
-
-:Description: The interval for "deep" scrubbing (fully reading all data). The
- ``osd_scrub_load_threshold`` does not affect this setting.
-
-:Type: Float
-:Default: Once per week. ``7*24*60*60``
-
-
-``osd_scrub_interval_randomize_ratio``
-
-:Description: Add a random delay to ``osd_scrub_min_interval`` when scheduling
- the next scrub job for a PG. The delay is a random
- value less than ``osd_scrub_min_interval`` \*
- ``osd_scrub_interval_randomized_ratio``. The default setting
- spreads scrubs throughout the allowed time
- window of ``[1, 1.5]`` \* ``osd_scrub_min_interval``.
-:Type: Float
-:Default: ``0.5``
-
-``osd_deep_scrub_stride``
-
-:Description: Read size when doing a deep scrub.
-:Type: 32-bit Integer
-:Default: 512 KB. ``524288``
-
-
-``osd_scrub_auto_repair``
-
-:Description: Setting this to ``true`` will enable automatic PG repair when errors
- are found by scrubs or deep-scrubs. However, if more than
- ``osd_scrub_auto_repair_num_errors`` errors are found a repair is NOT performed.
-:Type: Boolean
-:Default: ``false``
-
-
-``osd_scrub_auto_repair_num_errors``
-
-:Description: Auto repair will not occur if more than this many errors are found.
-:Type: 32-bit Integer
-:Default: ``5``
-
+.. confval:: osd_max_scrubs
+.. confval:: osd_scrub_begin_hour
+.. confval:: osd_scrub_end_hour
+.. confval:: osd_scrub_begin_week_day
+.. confval:: osd_scrub_end_week_day
+.. confval:: osd_scrub_during_recovery
+.. confval:: osd_scrub_load_threshold
+.. confval:: osd_scrub_min_interval
+.. confval:: osd_scrub_max_interval
+.. confval:: osd_scrub_chunk_min
+.. confval:: osd_scrub_chunk_max
+.. confval:: osd_scrub_sleep
+.. confval:: osd_deep_scrub_interval
+.. confval:: osd_scrub_interval_randomize_ratio
+.. confval:: osd_deep_scrub_stride
+.. confval:: osd_scrub_auto_repair
+.. confval:: osd_scrub_auto_repair_num_errors
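+
+For example, to confine routine scrubbing to an early-morning window and to
+soften its impact on client I/O, a ``ceph.conf`` fragment along the following
+lines could be used. The hours and values are illustrative only; note that a
+PG whose scrub interval exceeds ``osd_scrub_max_interval`` is still scrubbed
+outside the configured window.
+
+.. code-block:: ini
+
+    [osd]
+    # Only start new scrubs between 01:00 and 07:00.
+    osd_scrub_begin_hour = 1
+    osd_scrub_end_hour = 7
+    # Do not start scrubs while the per-CPU load average exceeds 0.3.
+    osd_scrub_load_threshold = 0.3
+    # Sleep 0.1 seconds between chunks to reduce the impact on clients.
+    osd_scrub_sleep = 0.1
+
+With the default ``osd_scrub_min_interval`` of one day and the default
+``osd_scrub_interval_randomize_ratio`` of ``0.5``, each PG's next scrub is
+scheduled at a random point between 24 and 36 hours after the previous one.
+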
.. index:: OSD; operations settings
type: uuid
level: advanced
desc: uuid label for a new OSD
+ fmt_desc: The universally unique identifier (UUID) for the Ceph OSD Daemon.
+ note: The ``osd_uuid`` applies to a single Ceph OSD Daemon. The ``fsid``
+ applies to the entire cluster.
flags:
- create
with_legacy: true
type: str
level: advanced
desc: path to OSD data
+ fmt_desc: The path to the OSD's data. You must create the directory when
+ deploying Ceph. You should mount a drive for OSD data at this
+ mount point. We do not recommend changing the default.
default: /var/lib/ceph/osd/$cluster-$id
flags:
- no_mon_update
type: str
level: advanced
desc: path to OSD journal (when FileStore backend is in use)
+ fmt_desc: The path to the OSD's journal. This may be a path to a file or a
+ block device (such as a partition of an SSD). If it is a file,
+ you must create the directory to contain it. We recommend using a
+ separate fast device when the ``osd_data`` drive is an HDD.
default: /var/lib/ceph/osd/$cluster-$id/journal
flags:
- no_mon_update
type: size
level: advanced
desc: size of FileStore journal (in MiB)
+ fmt_desc: The size of the journal in megabytes.
default: 5_K
flags:
- create
long_desc: This setting prevents clients from doing very large writes to RADOS. If
you set this to a value below what clients expect, they will receive an error
when attempting to write to the cluster.
+ fmt_desc: The maximum size of a write in megabytes.
default: 90
min: 4
with_legacy: true
desc: maximum memory to devote to in-flight client requests
long_desc: If this value is exceeded, the OSD will not read any new client data
off of the network until memory is freed.
+ fmt_desc: The largest client data message allowed in memory.
default: 500_M
with_legacy: true
- name: osd_client_message_cap
type: int
level: advanced
desc: Maximum concurrent scrubs on a single OSD
+ fmt_desc: The maximum number of simultaneous scrub operations for
+ a Ceph OSD Daemon.
default: 1
with_legacy: true
- name: osd_scrub_during_recovery
type: bool
level: advanced
desc: Allow scrubbing when PGs on the OSD are undergoing recovery
+ fmt_desc: Allow scrub during recovery. Setting this to ``false`` will disable
+ the scheduling of new scrubs (and deep scrubs) while there is active recovery.
+ Already running scrubs will continue. This can be useful for reducing
+ load on busy clusters.
default: false
with_legacy: true
- name: osd_repair_during_recovery
level: advanced
desc: Restrict scrubbing to this hour of the day or later
long_desc: Use osd_scrub_begin_hour=0 and osd_scrub_end_hour=0 for the entire day.
+ fmt_desc: This restricts scrubbing to this hour of the day or later.
+ Use ``osd_scrub_begin_hour = 0`` and ``osd_scrub_end_hour = 0``
+ to allow scrubbing for the entire day. Together with ``osd_scrub_end_hour``,
+ this defines a time window in which scrubs can happen. However, regardless
+ of the time window, a scrub will be performed whenever a placement group's
+ scrub interval exceeds ``osd_scrub_max_interval``.
default: 0
see_also:
- osd_scrub_end_hour
level: advanced
desc: Restrict scrubbing to hours of the day earlier than this
long_desc: Use osd_scrub_begin_hour=0 and osd_scrub_end_hour=0 for the entire day.
+ fmt_desc: This restricts scrubbing to hours of the day earlier than this.
+ Use ``osd_scrub_begin_hour = 0`` and ``osd_scrub_end_hour = 0`` to allow scrubbing
+ for the entire day. Together with ``osd_scrub_begin_hour``, this defines a time
+ window in which scrubs can happen. However, regardless of the time window,
+ a scrub will be performed whenever a placement group's scrub interval
+ exceeds ``osd_scrub_max_interval``.
default: 0
see_also:
- osd_scrub_begin_hour
desc: Restrict scrubbing to this day of the week or later
long_desc: 0 = Sunday, 1 = Monday, etc. Use osd_scrub_begin_week_day=0 osd_scrub_end_week_day=0
for the entire week.
+ fmt_desc: This restricts scrubbing to this day of the week or later.
+ 0 = Sunday, 1 = Monday, etc. Use ``osd_scrub_begin_week_day = 0``
+ and ``osd_scrub_end_week_day = 0`` to allow scrubbing for the entire week.
+ Together with ``osd_scrub_end_week_day``, this defines a time window in which
+ scrubs can happen. However, regardless of the time window, a scrub will be
+ performed whenever a placement group's scrub interval exceeds
+ ``osd_scrub_max_interval``.
default: 0
see_also:
- osd_scrub_end_week_day
desc: Restrict scrubbing to days of the week earlier than this
long_desc: 0 = Sunday, 1 = Monday, etc. Use osd_scrub_begin_week_day=0 osd_scrub_end_week_day=0
for the entire week.
+ fmt_desc: This restricts scrubbing to days of the week earlier than this.
+ 0 = Sunday, 1 = Monday, etc. Use ``osd_scrub_begin_week_day = 0``
+ and ``osd_scrub_end_week_day = 0`` to allow scrubbing for the entire week.
+ Together with ``osd_scrub_begin_week_day``, this defines a time window in
+ which scrubs can happen. However, regardless of the time window, a scrub
+ will be performed whenever a placement group's scrub interval exceeds
+ ``osd_scrub_max_interval``.
default: 0
see_also:
- osd_scrub_begin_week_day
type: float
level: advanced
desc: Allow scrubbing when system load divided by number of CPUs is below this value
+ fmt_desc: The normalized maximum load. Ceph will not scrub when the system load
+ (as defined by ``getloadavg() / number of online CPUs``) is higher than this value.
default: 0.5
with_legacy: true
# if load is low
type: float
level: advanced
desc: Scrub each PG no more often than this interval
+ fmt_desc: The minimum interval in seconds for scrubbing the Ceph OSD Daemon
+ when the Ceph Storage Cluster load is low.
default: 1_day
see_also:
- osd_scrub_max_interval
type: float
level: advanced
desc: Scrub each PG no less often than this interval
+ fmt_desc: The maximum interval in seconds for scrubbing the Ceph OSD Daemon
+ irrespective of cluster load.
default: 7_day
see_also:
- osd_scrub_min_interval
desc: Ratio of scrub interval to randomly vary
long_desc: This prevents a scrub 'stampede' by randomly varying the scrub intervals
so that they are soon uniformly distributed over the week
+ fmt_desc: Add a random delay to ``osd_scrub_min_interval`` when scheduling
+ the next scrub job for a PG. The delay is a random
+ value less than ``osd_scrub_min_interval`` \*
+ ``osd_scrub_interval_randomize_ratio``. The default setting
+ spreads scrubs throughout the allowed time
+ window of ``[1, 1.5]`` \* ``osd_scrub_min_interval``.
default: 0.5
see_also:
- osd_scrub_min_interval
type: int
level: advanced
desc: Minimum number of objects to scrub in a single chunk
+ fmt_desc: The minimum number of object store chunks to scrub during a single operation.
+ Ceph blocks writes to a single chunk during a scrub.
default: 5
see_also:
- osd_scrub_chunk_max
type: int
level: advanced
desc: Maximum number of objects to scrub in a single chunk
+ fmt_desc: The maximum number of object store chunks to scrub during a single operation.
default: 25
see_also:
- osd_scrub_chunk_min
type: float
level: advanced
desc: Duration to inject a delay during scrubbing
+ fmt_desc: Time to sleep before scrubbing the next group of chunks. Increasing this value will slow
+ down the overall rate of scrubbing so that client operations will be less impacted.
default: 0
with_legacy: true
# more sleep between [deep]scrub ops
type: bool
level: advanced
desc: Automatically repair damaged objects detected during scrub
+ fmt_desc: Setting this to ``true`` will enable automatic PG repair when errors
+ are found by scrubs or deep-scrubs. However, if more than
+ ``osd_scrub_auto_repair_num_errors`` errors are found, a repair is NOT performed.
default: false
with_legacy: true
# only auto-repair when number of errors is below this threshold
type: uint
level: advanced
desc: Maximum number of detected errors to automatically repair
+ fmt_desc: Auto repair will not occur if more than this many errors are found.
default: 5
see_also:
- osd_scrub_auto_repair
type: float
level: advanced
desc: Deep scrub each PG (i.e., verify data checksums) at least this often
+ fmt_desc: The interval for "deep" scrubbing (fully reading all data). The
+ ``osd_scrub_load_threshold`` does not affect this setting.
default: 7_day
with_legacy: true
- name: osd_deep_scrub_randomize_ratio
type: size
level: advanced
desc: Number of bytes to read from an object at a time during deep scrub
+ fmt_desc: Read size when doing a deep scrub.
default: 512_K
with_legacy: true
- name: osd_deep_scrub_keys
type: str
level: advanced
default: @CMAKE_INSTALL_LIBDIR@/rados-classes
+ fmt_desc: The class path for RADOS class plug-ins.
with_legacy: true
- name: osd_open_classes_on_start
type: bool
type: size
level: advanced
default: 128_M
+ fmt_desc: The maximum size of a RADOS object in bytes.
with_legacy: true
# max rados object name len
- name: osd_max_object_name_len