* ``CEPH_VOLUME_SYSTEMD_TRIES``: Defaults to 30
* ``CEPH_VOLUME_SYSTEMD_INTERVAL``: Defaults to 5
-The *"tries"* is a number that sets the maximum amount of times the unit will
+The *"tries"* is a number that sets the maximum number of times the unit will
attempt to activate an OSD before giving up.
The *"interval"* is a value in seconds that determines the waiting time before
A single source of truth for CephFS exports is implemented in the volumes
module of the :term:`Ceph Manager` daemon (ceph-mgr). The OpenStack shared
-file system service (manila_), Ceph Containter Storage Interface (CSI_),
+file system service (manila_), Ceph Container Storage Interface (CSI_),
storage administrators among others can use the common CLI provided by the
ceph-mgr volumes module to manage the CephFS exports.
$ ceph fs volume create <vol_name>
-This creates a CephFS file sytem and its data and metadata pools. It also tries
-to create MDSes for the filesytem using the enabled ceph-mgr orchestrator
+This creates a CephFS file system and its data and metadata pools. It also tries
+to create MDSes for the filesystem using the enabled ceph-mgr orchestrator
module (see :doc:`/mgr/orchestrator_cli`), e.g., rook.
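For example (the volume name ``cephfs1`` below is only illustrative), a volume
can be created and then listed to confirm it exists::

    $ ceph fs volume create cephfs1
    $ ceph fs volume ls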
Remove a volume using::
The auth MDS for an inode can change over time as well. The MDSs will
actively balance responsibility for the inode cache amongst
-themselves, but this can be overriden by **pinning** certain subtrees
+themselves, but this can be overridden by **pinning** certain subtrees
to a single MDS.
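As an illustrative sketch (the mount path and rank below are hypothetical), a
subtree can be pinned to a particular MDS rank through the ``ceph.dir.pin``
extended attribute::

    $ setfattr -n ceph.dir.pin -v 2 /mnt/cephfs/some/dir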
In modern Linux kernels (v4.17 or later), writeback errors are reported
once to every file description that is open at the time of the error. In
-addition, unreported errors that occured before the file description was
+addition, unreported errors that occurred before the file description was
opened will also be returned on fsync.
See `PostgreSQL's summary of fsync() error reporting across operating systems
| |
-* Connection failure after session is established because server reseted,
+* Connection failure after session is established because server reset,
and then client reconnects.
.. ditaa:: +---------+ +--------+
of the connection.
-* Connection failure after session is established because client reseted,
+* Connection failure after session is established because client reset,
and then client reconnects.
.. ditaa:: +---------+ +--------+
ceph osd erasure-code-profile set myprofile \
directory=<dir> \ # mandatory
plugin=jerasure \ # mandatory
- m=10 \ # optional and plugin dependant
- k=3 \ # optional and plugin dependant
- technique=reed_sol_van \ # optional and plugin dependant
+ m=10 \ # optional and plugin dependent
+ k=3 \ # optional and plugin dependent
+ technique=reed_sol_van \ # optional and plugin dependent
Notes
-----
The choice of whether to use a read-modify-write or a
parity-delta-write is a complex policy issue that is TBD in the details
-and is likely to be heavily dependant on the computational costs
+and is likely to be heavily dependent on the computational costs
associated with a parity-delta vs. a regular parity-generation
operation. However, it is believed that the parity-delta scheme is
likely to be the preferred choice, when available.
``ceph-deploy``, notice that the configuration file only has one setting
``rgw_frontends`` (and that's assuming you elected to change the default port).
The ``ceph-deploy`` utility generates the data directory and the keyring for
-you--placing the keyring in ``/var/lib/ceph/radosgw/{rgw-intance}``. The daemon
+you--placing the keyring in ``/var/lib/ceph/radosgw/{rgw-instance}``. The daemon
looks in default locations, whereas you may have specified different settings
in your Ceph configuration file. Since you already have keys and a data
directory, you will want to maintain those paths in your Ceph configuration
ceph osd pool application enable <pool-name> <app> {--yes-i-really-mean-it}
-Subcommand ``get`` displays the value for the given key that is assosciated
+Subcommand ``get`` displays the value for the given key that is associated
with the given application of the given pool. Not passing the optional
arguments would display all key-value pairs for all applications for all
pools.
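A minimal sketch, using a hypothetical pool named ``mypool``::

    $ ceph osd pool application enable mypool rbd
    $ ceph osd pool application get mypool rbd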
LibradosPP (C++)
==================
-.. note:: The librados C++ API is not guarenteed to be API+ABI stable
+.. note:: The librados C++ API is not guaranteed to be API+ABI stable
between major releases. All applications using the librados C++ API must
be recompiled and relinked against a specific Ceph release.
the ``osd_memory_target`` configuration option. This is a best effort
algorithm and caches will not shrink smaller than the amount specified by
``osd_memory_cache_min``. Cache ratios will be chosen based on a hierarchy
-of priorities. If priority information is not availabe, the
+of priorities. If priority information is not available, the
``bluestore_cache_meta_ratio`` and ``bluestore_cache_kv_ratio`` options are
used as fallbacks.
``osd_memory_target``
-:Description: When tcmalloc is available and cache autotuning is enabled, try to keep this many bytes mapped in memory. Note: This may not exactly match the RSS memory usage of the process. While the total amount of heap memory mapped by the process should generally stay close to this target, there is no guarantee that the kernel will actually reclaim memory that has been unmapped. During initial developement, it was found that some kernels result in the OSD's RSS Memory exceeding the mapped memory by up to 20%. It is hypothesised however, that the kernel generally may be more aggressive about reclaiming unmapped memory when there is a high amount of memory pressure. Your mileage may vary.
+:Description: When tcmalloc is available and cache autotuning is enabled, try to keep this many bytes mapped in memory. Note: This may not exactly match the RSS memory usage of the process. While the total amount of heap memory mapped by the process should generally stay close to this target, there is no guarantee that the kernel will actually reclaim memory that has been unmapped. During initial development, it was found that some kernels result in the OSD's RSS memory exceeding the mapped memory by up to 20%. It is hypothesised, however, that the kernel generally may be more aggressive about reclaiming unmapped memory when there is a high amount of memory pressure. Your mileage may vary.
:Type: Unsigned Integer
:Required: Yes
:Default: ``4294967296``
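As an illustration (the 8 GiB value is arbitrary), the target can be adjusted
at runtime through the ``ceph config`` interface::

    $ ceph config set osd osd_memory_target 8589934592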
``osd_memory_base``
:Description: When tcmalloc and cache autotuning are enabled, estimate the minimum amount of memory in bytes the OSD will need. This is used to help the autotuner estimate the expected aggregate memory consumption of the caches.
-:Type: Unsigned Interger
+:Type: Unsigned Integer
:Required: No
:Default: ``805306368``
since the previous rule distributed across devices of multiple
classes but the adjusted rules will only map to devices of the
specified *device-class*, but that often is an accepted level of
- data movement when the nubmer of outlier devices is small.
+ data movement when the number of outlier devices is small.
#. ``--reclassify-bucket <match-pattern> <device-class> <default-parent>``
- This will allow you to merge a parallel type-specific hiearchy with the normal hierarchy. For example, many users have maps like::
+ This will allow you to merge a parallel type-specific hierarchy with the normal hierarchy. For example, many users have maps like::
host node1 {
id -2 # do not change unnecessarily
``crush-root={root}``
:Description: The name of the crush bucket used for the first step of
- the CRUSH rule. For intance **step take default**.
+ the CRUSH rule. For instance **step take default**.
:Type: String
:Required: No.
``crush-root={root}``
:Description: The name of the crush bucket used for the first step of
- the CRUSH rule. For intance **step take default**.
+ the CRUSH rule. For instance **step take default**.
:Type: String
:Required: No.
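For instance, a profile that points the rule's first step at a hypothetical
root named ``default`` could be created and inspected with::

    $ ceph osd erasure-code-profile set myprofile k=4 m=2 crush-root=default
    $ ceph osd erasure-code-profile get myprofile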
Ceph OSDs send heartbeat ping messages amongst themselves to monitor daemon availability. We
also use the response times to monitor network performance.
While it is possible that a busy OSD could delay a ping response, we can assume
-that if a network switch fails mutiple delays will be detected between distinct pairs of OSDs.
+that if a network switch fails, multiple delays will be detected between distinct pairs of OSDs.
By default we will warn about ping times which exceed 1 second (1000 milliseconds).
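If such a warning is raised, one way to inspect it (OSD id ``0`` is only an
example, and the ``dump_osd_network`` admin socket command is assumed to be
available in your release) is::

    $ ceph health detail
    $ ceph daemon osd.0 dump_osd_network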
def __init__(self, credentials, service_name, region_name):
self.credentials = credentials
# We initialize these value here so the unit tests can have
- # valid values. But these will get overriden in ``add_auth``
+ # valid values. But these will get overridden in ``add_auth``
# later for real requests.
self._region_name = region_name
if service_name == 'sts':
- Notification deletion is an extension to the S3 notification API
- When the bucket is deleted, any notification defined on it is also deleted
- - Deleting an unkown notification (e.g. double delete) is not considered an error
+ - Deleting an unknown notification (e.g. double delete) is not considered an error
Syntax
~~~~~~
option will need to be specified after ensuring all descendent clone images are
not in use.
-Commiting the live-migration will remove the cross-links between the source
+Committing the live-migration will remove the cross-links between the source
and target images, and will remove the source image::
$ rbd trash list --all
# there are two sections
#
# releases: ... for named releases
-# developement: ... for dev releases
+# development: ... for dev releases
#
# by default a `version` is interpreted as a sphinx reference when rendered (see
# schedule.rst for the existing tags such as `_13.2.2`). If a version should not