From: Nathan Cutler
Date: Tue, 19 May 2020 13:48:51 +0000 (+0200)
Subject: doc/releases/octopus: osd_calc_pg_upmaps_max_stddev removal
X-Git-Tag: v17.0.0~2282^2~7
X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=ba3c1b3d6b9185458b42c3645b5a5bf65818cf17;p=ceph.git

doc/releases/octopus: osd_calc_pg_upmaps_max_stddev removal

The release note about osd_calc_pg_upmaps_max_stddev removal was added
to PendingReleaseNotes by 65d03bae8b4f50cc3cbaa50640eaeab4cabd711f but
never found its way into the official v15.2.0 release notes.

Signed-off-by: Nathan Cutler
---

diff --git a/PendingReleaseNotes b/PendingReleaseNotes
index 1394e545ab720..dfb8948eefcec 100644
--- a/PendingReleaseNotes
+++ b/PendingReleaseNotes
@@ -3,16 +3,6 @@
 
 * CVE-2020-10736: Fixes an authorization bypass in monitor and manager daemons
 
-* The configuration value ``osd_calc_pg_upmaps_max_stddev`` used for upmap
-  balancing has been removed. Instead use the mgr balancer config
-  ``upmap_max_deviation`` which now is an integer number of PGs of deviation
-  from the target PGs per OSD. This can be set with a command like
-  ``ceph config set mgr mgr/balancer/upmap_max_deviation 2``. The default
-  ``upmap_max_deviation`` is 1. There are situations where crush rules
-  would not allow a pool to ever have completely balanced PGs. For example, if
-  crush requires 1 replica on each of 3 racks, but there are fewer OSDs in 1 of
-  the racks. In those cases, the configuration value can be increased.
-
 * MDS daemons can now be assigned to manage a particular file system via the
   new ``mds_join_fs`` option. The monitors will try to use only MDS for a file
   system with mds_join_fs equal to the file system name (strong affinity).
diff --git a/doc/releases/octopus.rst b/doc/releases/octopus.rst
index 1b8c7c268ac90..e6d3200312c33 100644
--- a/doc/releases/octopus.rst
+++ b/doc/releases/octopus.rst
@@ -689,17 +689,26 @@ Upgrade compatibility notes
   'nvme' instead of 'hdd' or 'ssd'. This appears to be limited to
   cases where BlueStore was deployed with older versions of ceph-disk,
   or manually without ceph-volume and LVM. Going forward, the OSD
-  will limit itself to only 'hdd' and 'ssd' (or whatever device class o
+  will limit itself to only 'hdd' and 'ssd' (or whatever device class
   the user manually specifies).
 
-* RGW: a mismatch between the bucket notification documentation and the actual
-  message format was fixed. This means that any endpoints receiving bucket
-  notification, will now receive the same notifications inside an JSON array
-  named 'Records'. Note that this does not affect pulling bucket notification
-  from a subscription in a 'pubsub' zone, as these are already wrapped inside
-  that array.
-
-
+* RGW: a mismatch between the bucket notification documentation and
+  the actual message format was fixed. This means that any endpoints
+  receiving bucket notification, will now receive the same notifications
+  inside an JSON array named 'Records'. Note that this does not affect
+  pulling bucket notification from a subscription in a 'pubsub' zone,
+  as these are already wrapped inside that array.
+
+* The configuration value ``osd_calc_pg_upmaps_max_stddev`` used for
+  upmap balancing has been removed. Instead use the mgr balancer config
+  ``upmap_max_deviation`` which now is an integer number of PGs of
+  deviation from the target PGs per OSD. This can be set with a command
+  like ``ceph config set mgr mgr/balancer/upmap_max_deviation 2``. The
+  default ``upmap_max_deviation`` is 1. There are situations where
+  crush rules would not allow a pool to ever have completely balanced
+  PGs. For example, if crush requires 1 replica on each of 3 racks, but
+  there are fewer OSDs in one of the racks. In those cases, the
+  configuration value can be increased.
 
 Changelog
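
For reference, the balancer setting described in the note above can be
applied and then checked from the ceph CLI. This is a minimal sketch,
assuming the mgr balancer module is enabled and running in upmap mode;
only the first command is taken verbatim from the release note, the
others are ordinary config/balancer queries used here for illustration:

    # allow up to 2 PGs of deviation from the target PGs per OSD
    ceph config set mgr mgr/balancer/upmap_max_deviation 2

    # confirm the value currently in effect for the mgr
    ceph config get mgr mgr/balancer/upmap_max_deviation

    # observe what the balancer does with the new threshold
    ceph balancer status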