See :ref:`setpoolvalues` for details.
-.. index: architecture; placement group mapping
+.. index:: architecture; placement group mapping
Mapping PGs to OSDs
~~~~~~~~~~~~~~~~~~~
ceph fs snap-schedule retention add / 24h4w # add 24 hourly and 4 weekly to retention
ceph fs snap-schedule retention remove / 7d4w # remove 7 daily and 4 weekly, leaves 24 hourly
-.. note: When adding a path to snap-schedule, remember to strip off the mount
+.. note:: When adding a path to snap-schedule, remember to strip off the mount
point path prefix. Paths to snap-schedule should start at the appropriate
CephFS file system root and not at the host file system root.
For example, if the Ceph File System is mounted at ``/mnt`` and the path under
which snapshots need to be taken is ``/mnt/some/path``, then the actual path required
by snap-schedule is only ``/some/path``.
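For example, with the file system mounted at ``/mnt`` as above, a schedule for
snapshots under ``/mnt/some/path`` is added using the CephFS-relative path. A
minimal sketch (the one-hour interval is only an illustrative value):

ceph fs snap-schedule add /some/path 1h    # hourly snapshots under /mnt/some/path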
-.. note: It should be noted that the "created" field in the snap-schedule status
+.. note:: The "created" field in the snap-schedule status command output is
the timestamp at which the schedule was created. The "created"
timestamp has nothing to do with the creation of actual snapshots. The actual
snapshot creation is accounted for in the "created_count" field, which is a
cumulative count of the total number of snapshots created so far.
-.. note: The maximum number of snapshots to retain per directory is limited by the
+.. note:: The maximum number of snapshots to retain per directory is limited by the
config tunable `mds_max_snaps_per_dir`. This tunable defaults to 100.
To ensure a new snapshot can be created, one snapshot less than this will be
retained. So by default, a maximum of 99 snapshots will be retained.
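If a longer history is needed, the limit can be raised through the normal
configuration mechanism, for example (a sketch; the value 150 is only
illustrative):

ceph config set mds mds_max_snaps_per_dir 150    # allows up to 149 retained snapshots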
-.. note: The --fs argument is now required if there is more than one file system.
+.. note:: The --fs argument is now required if there is more than one file system.
Active and inactive schedules
-----------------------------
* All commits are cherry-picked with ``git cherry-pick -x`` to
reference the original commit
-.. note: If a backport is appropriate, the submitter is responsible for
+.. note:: If a backport is appropriate, the submitter is responsible for
determining appropriate target stable branches to which backports must be
made.
according to these settings. The ``mds_autoscaler`` simply adjusts the
number of MDS daemons spawned by the orchestrator.
-.. note: There is no CLI as of the Tentacle release. There are no module
+.. note:: There is no CLI as of the Tentacle release. There are no module
configurations as of the Tentacle release. Enable or disable the module to
turn the functionality on or off.
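Enabling and disabling is done with the standard manager-module commands, for
example:

ceph mgr module enable mds_autoscaler     # turn automatic MDS scaling on
ceph mgr module disable mds_autoscaler    # turn it back off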
ceph osd crush rule create-erasure {name} {profile-name}
-.. note: When creating a new pool, it is not necessary to create the rule
+.. note:: When creating a new pool, it is not necessary to create the rule
explicitly. If only the erasure-code profile is specified and the rule
argument is omitted, then Ceph will create the CRUSH rule automatically.
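For example, a pool created directly from an erasure-code profile picks up an
automatically generated rule. A sketch, in which ``ecpool``, the PG counts,
and ``myprofile`` are placeholder values:

ceph osd pool create ecpool 32 32 erasure myprofile   # CRUSH rule is created automatically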
Choosing the Number of PGs
==========================
-.. note: It is rarely necessary to do the math in this section by hand.
+.. note:: It is rarely necessary to do the math in this section by hand.
Instead, use the ``ceph osd pool autoscale-status`` command in combination
with the ``target_size_bytes`` or ``target_size_ratio`` pool properties. For
more information, see :ref:`pg-autoscaler`.
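A typical workflow is to review the autoscaler's view of the pools and, where
useful, hint at a pool's expected share of the cluster. A sketch, with
``mypool`` and the ratio as placeholder values:

ceph osd pool autoscale-status                    # review current pg_num recommendations
ceph osd pool set mypool target_size_ratio 0.2    # hint: pool expected to use ~20% of capacity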
ceph pg repair 1.4
-.. warning: This command overwrites the "bad" copies with "authoritative"
+.. warning:: This command overwrites the "bad" copies with "authoritative"
copies. In most cases, Ceph is able to choose authoritative copies from all
the available replicas by using some predefined criteria. This, however,
does not work in every case. For example, it might be the case that the
Token Authentication
--------------------
-.. note: Never use root tokens with Ceph in production environments.
+.. note:: Never use root tokens with Ceph in production environments.
The token authentication method expects a Vault token to be present in a
plaintext file. The Object Gateway can be configured to use token authentication
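A sketch of selecting the token method through the configuration database,
assuming the ``rgw_crypt_vault_*`` options of the Vault integration; the
daemon name, Vault address, and token-file path are placeholders:

ceph config set client.rgw.gateway1 rgw_crypt_vault_auth token
ceph config set client.rgw.gateway1 rgw_crypt_vault_addr https://vault-server:8200
ceph config set client.rgw.gateway1 rgw_crypt_vault_token_file /etc/ceph/vault.token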
v0.67.6 "Dumpling"
==================
-.. note: This release contains a librbd bug that is fixed in v0.67.7. Please upgrade to v0.67.7 and do not use v0.67.6.
+.. note:: This release contains a librbd bug that is fixed in v0.67.7. Please upgrade to v0.67.7 and do not use v0.67.6.
This Dumpling point release contains a number of important fixes for
the OSD, monitor, and radosgw. Most significantly, a change that
The BlueStore on-disk format is expected to continue to evolve. However, we
will provide support in the OSD to migrate to the new format on upgrade.
-.. note: BlueStore is still marked "experimental" in Kraken. We
+.. note:: BlueStore is still marked "experimental" in Kraken. We
recommend its use for proof-of-concept and test environments, or
other cases where data loss can be tolerated. Although it is
stable in our testing environment, the code is new and bugs are
Note that canceling the upgrade simply stops the process; there is no ability to
downgrade back to Octopus.
-.. note:
+.. note::
If you have deployed an RGW service on Octopus using the default port (7280), you
will need to redeploy it because the default port changed (to 80 or 443, depending
on whether SSL is enabled):
- .. prompt: bash #
+ .. prompt:: bash #
ceph orch apply rgw <realm>.<zone> --port 7280
This is the second hotfix release in the Squid series.
We recommend that all users update to this release.
-.. warning: Upgrade to Squid v19.2.2. Do not upgrade to Squid v19.2.1.
+.. warning:: Upgrade to Squid v19.2.2. Do not upgrade to Squid v19.2.1.
Release Date
------------
=============
This is the first backport release in the Squid series.
-.. warning: Do not upgrade to Squid v19.2.1. Upgrade instead to Squid v19.2.2.
+.. warning:: Do not upgrade to Squid v19.2.1. Upgrade instead to Squid v19.2.2.
Release Date
------------