git.apps.os.sepia.ceph.com Git - ceph.git/log
2 years ago  doc: Update jerasure.org references 51726/head
Anthony D'Atri [Tue, 23 May 2023 23:13:33 +0000 (19:13 -0400)]
doc: Update jerasure.org references

Signed-off-by: Anthony D'Atri <anthonyeleven@users.noreply.github.com>
(cherry picked from commit 5e60e0de275f7260aeae9e664ca22ebfdf8fc5f9)

2 years ago  Merge pull request #51721 from zdover23/wip-doc-2023-05-24-backport-51679-to-quincy
Anthony D'Atri [Tue, 23 May 2023 22:29:02 +0000 (18:29 -0400)]
Merge pull request #51721 from zdover23/wip-doc-2023-05-24-backport-51679-to-quincy

quincy: doc/mgr: edit "leaderboard" in telemetry.rst

2 years ago  Merge pull request #51653 from zdover23/wip-doc-2023-05-22-backport-51319-to-quincy
zdover23 [Tue, 23 May 2023 22:13:01 +0000 (08:13 +1000)]
Merge pull request #51653 from zdover23/wip-doc-2023-05-22-backport-51319-to-quincy

quincy: doc: deprecate the cache tiering

Reviewed-by: Cole Mitchell <cole.mitchell.ceph@gmail.com>
2 years ago  doc/mgr: edit "leaderboard" in telemetry.rst 51721/head
Zac Dover [Mon, 22 May 2023 20:06:52 +0000 (06:06 +1000)]
doc/mgr: edit "leaderboard" in telemetry.rst

Standardize the presentation of commands in the "Leaderboard" section of
doc/mgr/telemetry.rst.

Follow-up to https://github.com/ceph/ceph/pull/50977

Signed-off-by: Zac Dover <zac.dover@proton.me>
(cherry picked from commit 4935ad8aed2aa796015473de5b38cc973ba85ba1)

2 years ago  Merge pull request #51496 from cbodley/wip-55703-quincy
Yuri Weinstein [Tue, 23 May 2023 19:54:47 +0000 (15:54 -0400)]
Merge pull request #51496 from cbodley/wip-55703-quincy

quincy: rgw multisite: complete fix for metadata sync issue

Reviewed-by: Shilpa Jagannath <smanjara@redhat.com>
2 years ago  Merge pull request #50892 from cfsnyder/wip-58511-quincy
Yuri Weinstein [Tue, 23 May 2023 19:51:13 +0000 (15:51 -0400)]
Merge pull request #50892 from cfsnyder/wip-58511-quincy

quincy: rgw: optimizations for handling ECANCELED errors from within get_obj_state

Reviewed-by: Shilpa Jagannath <smanjara@redhat.com>
2 years ago  Merge pull request #50962 from yuvalif/wip-58284-quincy
Yuri Weinstein [Tue, 23 May 2023 19:49:15 +0000 (15:49 -0400)]
Merge pull request #50962 from yuvalif/wip-58284-quincy

quincy: rgw/notifications: send mtime in complete multipart upload event

Reviewed-by: Shilpa Jagannath <smanjara@redhat.com>
2 years ago  Merge pull request #50208 from cfsnyder/wip-58212-quincy
Yuri Weinstein [Tue, 23 May 2023 19:47:15 +0000 (15:47 -0400)]
Merge pull request #50208 from cfsnyder/wip-58212-quincy

quincy: rgw: concurrency for multi object deletes

Reviewed-by: Daniel Gryniewicz <dang@redhat.com>
Reviewed-by: Casey Bodley <cbodley@redhat.com>
2 years ago  Merge pull request #49808 from yuvalif/wip-58285-quincy
Yuri Weinstein [Tue, 23 May 2023 19:46:17 +0000 (15:46 -0400)]
Merge pull request #49808 from yuvalif/wip-58285-quincy

quincy: rgw/notifications: sending metadata in COPY and CompleteMultipartUpload

Reviewed-by: Shilpa Jagannath <smanjara@redhat.com>
2 years ago  Merge pull request #51512 from Matan-B/wip-61150-quincy
Yuri Weinstein [Tue, 23 May 2023 15:21:28 +0000 (11:21 -0400)]
Merge pull request #51512 from Matan-B/wip-61150-quincy

quincy: OSD: Fix check_past_interval_bounds()

Reviewed-by: Samuel Just <sjust@redhat.com>
2 years ago  Merge pull request #51694 from zdover23/wip-doc-2023-05-23-backport-51682-to-quincy
Anthony D'Atri [Tue, 23 May 2023 12:10:56 +0000 (08:10 -0400)]
Merge pull request #51694 from zdover23/wip-doc-2023-05-23-backport-51682-to-quincy

quincy: doc/glossary: update bluestore entry

2 years ago  doc/glossary: update bluestore entry 51694/head
Zac Dover [Mon, 22 May 2023 21:41:09 +0000 (07:41 +1000)]
doc/glossary: update bluestore entry

Update the BlueStore entry in the glossary, explaining that as of Reef,
BlueStore (and not FileStore) is the only storage backend
for Ceph.

Signed-off-by: Zac Dover <zac.dover@proton.me>
(cherry picked from commit bcee264276128f622c35e3aab81fdecb2b8afc10)

2 years ago  Merge pull request #50979 from batrick/i59294
Yuri Weinstein [Mon, 22 May 2023 23:27:51 +0000 (19:27 -0400)]
Merge pull request #50979 from batrick/i59294

quincy: MgrMonitor: batch commit OSDMap and MgrMap mutations

Reviewed-by: Ilya Dryomov <idryomov@redhat.com>
Reviewed-by: Ramana Raja <rraja@redhat.com>
2 years ago  Merge pull request #50964 from ajarr/wip-58998-quincy
Yuri Weinstein [Mon, 22 May 2023 23:26:45 +0000 (19:26 -0400)]
Merge pull request #50964 from ajarr/wip-58998-quincy

quincy: mgr: store names of modules that register RADOS clients in the MgrMap

Reviewed-by: Ilya Dryomov <idryomov@redhat.com>
2 years ago  Merge pull request #50893 from cfsnyder/wip-59329-quincy
Yuri Weinstein [Mon, 22 May 2023 23:26:10 +0000 (19:26 -0400)]
Merge pull request #50893 from cfsnyder/wip-59329-quincy

quincy: kv/RocksDBStore: Add CompactOnDeletion support

Reviewed-by: Igor Fedotov <ifedotov@suse.com>
2 years ago  Merge pull request #50693 from kamoltat/wip-ksirivad-backport-quincy-50334
Yuri Weinstein [Mon, 22 May 2023 23:25:27 +0000 (19:25 -0400)]
Merge pull request #50693 from kamoltat/wip-ksirivad-backport-quincy-50334

quincy: pybind/mgr/pg_autoscaler: Reorderd if statement for the func: _maybe_adjust

Reviewed-by: Laura Flores <lflores@redhat.com>
2 years ago  Merge pull request #50480 from ljflores/wip-58954-quincy
Yuri Weinstein [Mon, 22 May 2023 23:24:51 +0000 (19:24 -0400)]
Merge pull request #50480 from ljflores/wip-58954-quincy

quincy: mgr/telemetry: make sure histograms are formatted in `all` commands

Reviewed-by: Yaarit Hatuka <yaarithatuka@gmail.com>
2 years ago  doc: deprecate the cache tiering 51653/head
Radosław Zarzyński [Tue, 2 May 2023 15:52:23 +0000 (17:52 +0200)]
doc: deprecate the cache tiering

This topic has been discussed many times; recently at the Dev
Summit of Cephalocon 2023.

This commit is the minimal version of the work, contained entirely
within `doc`. However, it will likely be expanded, as there
were ideas such as adding cache tiering back to the experimental feature
list (Sam) to warn users when deploying a new cluster.

Signed-off-by: Radosław Zarzyński <rzarzyns@redhat.com>
(cherry picked from commit 535b8db33ea03fbab7ef0c4df5251658f956b0c5)

2 years ago  Merge pull request #51620 from zdover23/wip-doc-2023-05-21-backport-51618-to-quincy
zdover23 [Mon, 22 May 2023 01:27:31 +0000 (11:27 +1000)]
Merge pull request #51620 from zdover23/wip-doc-2023-05-21-backport-51618-to-quincy

quincy: doc: Add missing `ceph` command in documentation section `REPLACING A…

Reviewed-by: Anthony D'Atri <anthony.datri@gmail.com>
2 years ago  doc: Add missing `ceph` command in documentation section `REPLACING AN OSD` 51620/head
Alexander Proschek [Sat, 20 May 2023 21:06:09 +0000 (14:06 -0700)]
doc: Add missing `ceph` command in documentation section `REPLACING AN OSD`

Signed-off-by: Alexander Proschek <alexander.proschek@protonmail.com>
(cherry picked from commit 0557d5e465556adba6d25db62a40ba55a5dd2400)

2 years ago  Merge pull request #51596 from zdover23/wip-doc-2023-05-20-backport-51594-to-quincy
zdover23 [Fri, 19 May 2023 20:19:48 +0000 (06:19 +1000)]
Merge pull request #51596 from zdover23/wip-doc-2023-05-20-backport-51594-to-quincy

quincy: doc/rados: edit data-placement.rst

Reviewed-by: Anthony D'Atri <anthony.datri@gmail.com>
2 years ago  doc/rados: edit data-placement.rst 51596/head
Zac Dover [Fri, 19 May 2023 16:26:45 +0000 (02:26 +1000)]
doc/rados: edit data-placement.rst

Edit doc/rados/data-placement.rst.

Co-authored-by: Cole Mitchell <cole.mitchell@gmail.com>
Signed-off-by: Zac Dover <zac.dover@proton.me>
(cherry picked from commit 32600c27c4dca6b9d5fae9892c0a1660b672781c)

2 years ago  Merge pull request #51586 from zdover23/wip-doc-2023-05-19-backport-51580-to-quincy
Anthony D'Atri [Fri, 19 May 2023 12:13:53 +0000 (08:13 -0400)]
Merge pull request #51586 from zdover23/wip-doc-2023-05-19-backport-51580-to-quincy

quincy: doc/radosgw: explain multisite dynamic sharding

2 years ago  doc/radosgw: explain multisite dynamic sharding 51586/head
Zac Dover [Thu, 18 May 2023 21:07:02 +0000 (07:07 +1000)]
doc/radosgw: explain multisite dynamic sharding

Add a note to doc/radosgw/dynamicresharding.rst and a note to
doc/radosgw/multisite.rst that explains that dynamic resharding is not
supported in releases prior to Reef.

This commit is made in response to a request from Mathias Chapelain.

Co-authored-by: Anthony D'Atri <anthony.datri@gmail.com>
Signed-off-by: Zac Dover <zac.dover@proton.me>
(cherry picked from commit d4ed4223d914328361528990f89f1ee4acd30e79)

2 years ago  Merge pull request #51577 from zdover23/wip-doc-2023-05-19-backport-51572-to-quincy
Anthony D'Atri [Thu, 18 May 2023 22:42:16 +0000 (18:42 -0400)]
Merge pull request #51577 from zdover23/wip-doc-2023-05-19-backport-51572-to-quincy

quincy: doc/rados: line-edit devices.rst

2 years ago  doc/rados: line-edit devices.rst 51577/head
Zac Dover [Thu, 18 May 2023 14:13:41 +0000 (00:13 +1000)]
doc/rados: line-edit devices.rst

Edit doc/rados/operations/devices.rst.

Co-authored-by: Cole Mitchell <cole.mitchell@gmail.com>
Signed-off-by: Zac Dover <zac.dover@proton.me>
(cherry picked from commit 8d589b43d76a4e291c96c3750d068dba18eb9309)

2 years ago  Merge pull request #51490 from zdover23/wip-doc-2023-05-16-backport-51485-to-quincy
zdover23 [Thu, 18 May 2023 14:50:20 +0000 (00:50 +1000)]
Merge pull request #51490 from zdover23/wip-doc-2023-05-16-backport-51485-to-quincy

quincy: doc/start/os-recommendations: drop 4.14 kernel and reword guidance

Reviewed-by: Anthony D'Atri <anthony.datri@gmail.com>
2 years ago  Merge pull request #51543 from zdover23/wip-doc-2023-05-18-backport-51534-to-quincy
Anthony D'Atri [Wed, 17 May 2023 22:44:15 +0000 (18:44 -0400)]
Merge pull request #51543 from zdover23/wip-doc-2023-05-18-backport-51534-to-quincy

quincy: doc/cephfs: line-edit "Mirroring Module"

2 years ago  doc/cephfs: line-edit "Mirroring Module" 51543/head
Zac Dover [Wed, 17 May 2023 12:25:38 +0000 (22:25 +1000)]
doc/cephfs: line-edit "Mirroring Module"

Line-edit the "Mirroring Module" section of
doc/cephfs/cephfs-mirroring.rst. Add prompts and formatting where they
contribute to clearer, more complete sentences.

This commit is a follow-up to https://github.com/ceph/ceph/pull/51505.

Co-authored-by: Anthony D'Atri <anthony.datri@gmail.com>
Signed-off-by: Zac Dover <zac.dover@proton.me>
(cherry picked from commit dd8855d9a934bcdd6a026f1308ba7410b1e143e3)

2 years ago  Merge pull request #51521 from zdover23/wip-doc-2023-05-17-backport-51505-to-quincy
Anthony D'Atri [Wed, 17 May 2023 12:56:56 +0000 (08:56 -0400)]
Merge pull request #51521 from zdover23/wip-doc-2023-05-17-backport-51505-to-quincy

quincy: doc: explain cephfs mirroring `peer_add` step in detail

2 years ago  Merge pull request #51525 from aaSharma14/wip-61179-quincy
Nizamudeen A [Wed, 17 May 2023 12:30:18 +0000 (18:00 +0530)]
Merge pull request #51525 from aaSharma14/wip-61179-quincy

quincy: mgr/dashboard: fix regression caused by cephPgImabalance alert

Reviewed-by: Nizamudeen A <nia@redhat.com>
2 years ago  mgr/dashboard: fix regression caused by cephPgImabalance alert 51525/head
Aashish Sharma [Mon, 8 May 2023 07:19:13 +0000 (12:49 +0530)]
mgr/dashboard: fix regression caused by cephPgImabalance alert

An earlier fix introduced a regression due to which alerts are not
displayed in the active alerts tab.
This PR fixes that issue.

Fixes: https://tracker.ceph.com/issues/59666
Signed-off-by: Aashish Sharma <aasharma@redhat.com>
(cherry picked from commit d0a1431fb836f1dd227df85f9e75e098edfdeac9)

2 years ago  doc: explain cephfs mirroring `peer_add` step in detail 51521/head
Venky Shankar [Tue, 16 May 2023 05:25:34 +0000 (10:55 +0530)]
doc: explain cephfs mirroring `peer_add` step in detail

@zdover23 reached out regarding the missing explanation of the `peer_add`
step in the cephfs mirroring documentation. Add some explanation and
an example to make the step clear.

Signed-off-by: Venky Shankar <vshankar@redhat.com>
(cherry picked from commit 6a6e887ff1f7f7d76db7f30f8410783b2f8153b0)

2 years ago  Merge pull request #51503 from zdover23/wip-doc-2023-05-16-backport-51492-to-quincy
Anthony D'Atri [Wed, 17 May 2023 00:49:59 +0000 (20:49 -0400)]
Merge pull request #51503 from zdover23/wip-doc-2023-05-16-backport-51492-to-quincy

quincy: doc/start: KRBD feature flag support note

2 years ago  messages/MOSDMap: Remove get_oldest/newest 51512/head
Matan Breizman [Wed, 1 Feb 2023 09:56:44 +0000 (09:56 +0000)]
messages/MOSDMap: Remove get_oldest/newest

cluster_osdmap_trim_lower_bound and newest_map are public.

Signed-off-by: Matan Breizman <mbreizma@redhat.com>
(cherry picked from commit 0c5f02ed5c79232e81e4ce5e6656e7502acb0ac6)

2 years ago  crimson/osd: Rename MOSDMap::oldest_map users
Matan Breizman [Wed, 1 Feb 2023 09:32:53 +0000 (09:32 +0000)]
crimson/osd: Rename MOSDMap::oldest_map users

Note: this commit was manually fixed (crimson only)

Signed-off-by: Matan Breizman <mbreizma@redhat.com>
(cherry picked from commit 3f92107d99f4ee6a521e994728418ce7cffd2d95)

2 years ago  messages/MOSDMap: Rename oldest_map to cluster_osdmap_trim_lower_bound
Matan Breizman [Wed, 1 Feb 2023 08:49:19 +0000 (08:49 +0000)]
messages/MOSDMap: Rename oldest_map to cluster_osdmap_trim_lower_bound

Previously, MOSDMap messages sent to other OSDs were populated with the
superblock's oldest_map. We should, instead, use the superblock's
cluster_osdmap_trim_lower_bound, because oldest_map is merely a marker
for each osd's trimming progress.

As specified in the docs:
***
We use cluster_osdmap_trim_lower_bound rather than a specific osd's oldest_map
because we don't necessarily trim all MOSDMap::oldest_map. In order to avoid
doing too much work at once we limit the amount of osdmaps trimmed using
``osd_target_transaction_size`` in OSD::trim_maps().
For this reason, a specific OSD's oldest_map can lag behind
OSDSuperblock::cluster_osdmap_trim_lower_bound
for a while.
***
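
Below is a minimal standalone sketch of the quoted behavior (invented values
and simplified logic; this is not the actual OSD::trim_maps() code), showing
why an OSD's oldest_map can trail cluster_osdmap_trim_lower_bound:

#include <algorithm>
#include <cstdint>
#include <iostream>

int main() {
  std::uint32_t oldest_map = 100;                              // this OSD's local trim progress
  const std::uint32_t cluster_osdmap_trim_lower_bound = 1000;  // cluster-wide bound (hypothetical)
  const std::uint32_t osd_target_transaction_size = 30;        // max maps trimmed per pass

  // Each trim pass removes at most osd_target_transaction_size maps, so the
  // local oldest_map catches up with the cluster-wide bound only gradually.
  const std::uint32_t trim_to = std::min(cluster_osdmap_trim_lower_bound,
                                         oldest_map + osd_target_transaction_size);
  std::cout << "trimming epochs [" << oldest_map << ", " << trim_to << ")\n";
  oldest_map = trim_to;
  std::cout << "local oldest_map now " << oldest_map
            << ", cluster trim lower bound " << cluster_osdmap_trim_lower_bound << "\n";
  return 0;
}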

Signed-off-by: Matan Breizman <mbreizma@redhat.com>
(cherry picked from commit 721c9aea1869bf95a167b8879756b4f71fbf41bf)

2 years ago  crimson/osd/osd.cc: Add cluster_osdmap_trim_lower_bound to print
Matan Breizman [Wed, 1 Feb 2023 09:46:57 +0000 (09:46 +0000)]
crimson/osd/osd.cc: Add cluster_osdmap_trim_lower_bound to print

Note: this commit was manually fixed (crimson only)

Signed-off-by: Matan Breizman <mbreizma@redhat.com>
(cherry picked from commit 27584ffcd6bb5aaa7606f80646808687bdc54f38)

2 years ago  osd/OSD.cc: Add cluster_osdmap_trim_lower_bound to status
Matan Breizman [Wed, 1 Feb 2023 09:42:25 +0000 (09:42 +0000)]
osd/OSD.cc: Add cluster_osdmap_trim_lower_bound to status

Signed-off-by: Matan Breizman <mbreizma@redhat.com>
(cherry picked from commit fd66cf8bad9d3a7700962ac6a5435f82061092c1)

2 years ago  osd: Rename max_oldest_map to cluster_osdmap_trim_lower_bound
Matan Breizman [Tue, 31 Jan 2023 09:13:38 +0000 (09:13 +0000)]
osd: Rename max_oldest_map to cluster_osdmap_trim_lower_bound

Signed-off-by: Matan Breizman <mbreizma@redhat.com>
(cherry picked from commit ff58b81ddff5e99a57ca6cf2c5dae560fbc8adad)

2 years ago  osd: Remove oldest_stored_osdmap()
Matan Breizman [Thu, 3 Nov 2022 08:59:11 +0000 (08:59 +0000)]
osd: Remove oldest_stored_osdmap()

The only usage was for identifying map gaps on new intervals.
We should use max_oldest_stored_osdmap() instead, since a specific
osd's oldest_map may lag behind.

Signed-off-by: Matan Breizman <mbreizma@redhat.com>
(cherry picked from commit 2541b49927670927fc34acf4af712fdb9f7a5bf7)

2 years ago  osd: Fix check_past_interval_bounds()
Matan Breizman [Wed, 2 Nov 2022 10:40:03 +0000 (10:40 +0000)]
osd: Fix check_past_interval_bounds()

When getting the required past interval bounds, we use
oldest_map or the current pg info (lec/ec).
Before this change, we set the oldest_map epoch using the
osd's superblock.oldest_map.
The fix uses the max_oldest_map received from other peers
instead, since a specific osd's oldest_map can lag for a while
in order to avoid large trimming workloads.
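
As a rough, hedged illustration of the described fix (an approximation with
assumed names and a simplified formula; the real check_past_interval_bounds()
involves more state):

#include <algorithm>
#include <cstdint>
#include <iostream>

// Derive the lower bound for required past intervals from the cluster-wide
// bound gathered from peers, not from this OSD's own superblock.oldest_map,
// which may lag behind while trimming is throttled. Illustrative only.
std::uint32_t required_past_interval_start(std::uint32_t max_oldest_map,
                                           std::uint32_t last_epoch_clean) {
  return std::max(max_oldest_map, last_epoch_clean);
}

int main() {
  // The local superblock.oldest_map might still be 100 while peers report 900.
  std::cout << required_past_interval_start(900, 850) << "\n";  // prints 900
}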

Fixes: https://tracker.ceph.com/issues/49689
Signed-off-by: Matan Breizman <mbreizma@redhat.com>
(cherry picked from commit 0c611b362fb9cc4225f18283f74299551c2c5953)

2 years ago  doc/dev/osd_internals: add past_intervals.rst
Samuel Just [Wed, 26 Oct 2022 04:46:24 +0000 (21:46 -0700)]
doc/dev/osd_internals: add past_intervals.rst

Add an explanation of past_intervals.

Signed-off-by: Samuel Just <sjust@redhat.com>
Signed-off-by: Matan Breizman <mbreizma@redhat.com>
(cherry picked from commit cd4c031e5e5f5b0318347a7957310cb7358380f6)

2 years ago  osd: move OSDService::max_oldest_map into OSDSuperblock
Matan Breizman [Thu, 3 Nov 2022 07:51:35 +0000 (07:51 +0000)]
osd: move OSDService::max_oldest_map into OSDSuperblock

Persist max_oldest_map to the superblock.

Note: cherry-pick was manually edited (change in existing OSD members from original commit)

Signed-off-by: Matan Breizman <mbreizma@redhat.com>
(cherry picked from commit 5c65627edc7daaf71c034c21579dfcd0b9982f10)

2 years ago  doc/start: KRBD feature flag support note 51503/head
Zac Dover [Mon, 15 May 2023 17:04:43 +0000 (03:04 +1000)]
doc/start: KRBD feature flag support note

Add KRBD feature flag support note to doc/start/os-recommendations.rst.

This change was suggested by Anthony D'Atri in https://github.com/ceph/ceph/pull/51485.

Co-authored-by: Ilya Dryomov <idryomov@redhat.com>
Co-authored-by: Anthony D'Atri <anthony.datri@gmail.com>
Signed-off-by: Zac Dover <zac.dover@proton.me>
(cherry picked from commit 2a619dba2d22749e6facaf8dd0d370e16a1672c4)

2 years ago  rgw/multisite: set truncated flag to true after fetching remote mdlogs. 51496/head
Shilpa Jagannath [Wed, 4 May 2022 14:40:41 +0000 (10:40 -0400)]
rgw/multisite: set truncated flag to true after fetching remote mdlogs.
This is in addition to 2ed1c3e. It ensures that we continue syncing logs
that have already been fetched and don't end up cloning more logs prematurely.

Signed-off-by: Shilpa Jagannath <smanjara@redhat.com>
(cherry picked from commit 256f6b3a07505a5f67b6583e69f588cd1d408c6e)

2 years ago  rgw: read incremental metalog from master cluster based on truncate variable
gengjichao [Mon, 20 Jul 2020 02:33:00 +0000 (10:33 +0800)]
rgw: read incremental metalog from master cluster based on truncate variable
When the log entry in the meta.log object of the secondary cluster is empty,
the value of max_marker is also empty, which cannot satisfy the requirement that
mdlog_marker <= max_marker. As a result, the secondary cluster cannot fetch
new log entries from the master cluster and loops indefinitely; finally, the
secondary cluster's metadata cannot catch up with the master cluster. When
truncate is false, it means that the secondary cluster's meta.log is empty,
so we can read more from the master cluster.

Fixes: https://tracker.ceph.com/issues/46563
Signed-off-by: gengjichao <gengjichao@jd.com>
(cherry picked from commit 2ed1c3e28a326e1422b0b6af47dd0af300ae85a2)
(cherry picked from commit 3b61154d39d6c4a1f29c6464e588589155982fcd)

2 years ago  Merge pull request #51478 from zdover23/wip-doc-2023-05-15-backport-51473-to-quincy
Anthony D'Atri [Mon, 15 May 2023 16:51:46 +0000 (12:51 -0400)]
Merge pull request #51478 from zdover23/wip-doc-2023-05-15-backport-51473-to-quincy

quincy: doc/rados: edit devices.rst

2 years ago  doc/start/os-recommendations: drop 4.14 kernel and reword guidance 51490/head
Ilya Dryomov [Fri, 12 May 2023 11:55:32 +0000 (13:55 +0200)]
doc/start/os-recommendations: drop 4.14 kernel and reword guidance

The 4.14 LTS kernel has less than a year left in terms of maintenance,
drop it.

Also, the current wording with an explicit list of kernels tends to go
stale: it's missing the latest 6.1 LTS kernel.

Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
(cherry picked from commit b697f96f3872f178f024e85799445386204a96e1)

2 years ago  doc/rados: edit devices.rst 51478/head
Zac Dover [Mon, 15 May 2023 01:01:19 +0000 (11:01 +1000)]
doc/rados: edit devices.rst

Line-edit doc/rados/operations/devices.rst.

Co-authored-by: Anthony D'Atri <anthony.datri@gmail.com>
Co-authored-by: Cole Mitchell <cole.mitchell.ceph@gmail.com>
Signed-off-by: Zac Dover <zac.dover@proton.me>
(cherry picked from commit 8321b457a25a4394439f908c500091ce30e0736a)

2 years ago  Merge pull request #51470 from zdover23/wip-doc-2023-05-14-backport-51175-to-quincy
Anthony D'Atri [Sun, 14 May 2023 11:07:08 +0000 (07:07 -0400)]
Merge pull request #51470 from zdover23/wip-doc-2023-05-14-backport-51175-to-quincy

quincy: doc: add link to "documenting ceph" to index.rst

2 years ago  doc: add link to "documenting ceph" to index.rst 51470/head
Zac Dover [Fri, 21 Apr 2023 20:59:04 +0000 (22:59 +0200)]
doc: add link to "documenting ceph" to index.rst

Add a link to the landing page of docs.ceph.com to direct documentation
contributors to documentation-related information.

Signed-off-by: Zac Dover <zac.dover@proton.me>
(cherry picked from commit 155a382cb2e8b80dca260ca7abdc3cc89c805edb)

2 years ago  Merge pull request #51466 from zdover23/wip-doc-2023-05-13-backport-51463-to-quincy
zdover23 [Sat, 13 May 2023 14:18:44 +0000 (00:18 +1000)]
Merge pull request #51466 from zdover23/wip-doc-2023-05-13-backport-51463-to-quincy

quincy: doc/cephfs: edit fs-volumes.rst (1 of x)

Reviewed-by: Cole Mitchell <cole.mitchell.ceph@gmail.com>
2 years ago  doc/cephfs: edit fs-volumes.rst (1 of x) 51466/head
Zac Dover [Fri, 12 May 2023 15:49:14 +0000 (01:49 +1000)]
doc/cephfs: edit fs-volumes.rst (1 of x)

Edit the syntax of the English language in the file
doc/cephfs/fs-volumes.rst up to (but not including) the section called
"FS Subvolumes".

Signed-off-by: Zac Dover <zac.dover@proton.me>
(cherry picked from commit a1184070a1a3d2f6c1462c62f88fe70df5626c36)

2 years ago  Merge pull request #51459 from zdover23/wip-doc-2023-05-12-backport-51458-to-quincy
zdover23 [Fri, 12 May 2023 13:28:55 +0000 (23:28 +1000)]
Merge pull request #51459 from zdover23/wip-doc-2023-05-12-backport-51458-to-quincy

quincy: doc/cephfs: rectify prompts in fs-volumes.rst

Reviewed-by: Cole Mitchell <cole.mitchell.ceph@gmail.com>
2 years ago  doc/cephfs: rectify prompts in fs-volumes.rst 51459/head
Zac Dover [Fri, 12 May 2023 10:35:25 +0000 (20:35 +1000)]
doc/cephfs: rectify prompts in fs-volumes.rst

Make sure all prompts are unselectable. This PR is meant to be
backported to Reef, Quincy, and Pacific, to get all of the prompts into
a fit state so that a line-edit can be performed on the English language
in this file.

Follows https://github.com/ceph/ceph/pull/51427.

Signed-off-by: Zac Dover <zac.dover@proton.me>
(cherry picked from commit 1f88f10fe6d2069d3d474fe490e69a809afb1f56)

2 years ago  Merge pull request #51435 from zdover23/wip-doc-2023-05-11-backport-51427-to-quincy
zdover23 [Fri, 12 May 2023 12:43:04 +0000 (22:43 +1000)]
Merge pull request #51435 from zdover23/wip-doc-2023-05-11-backport-51427-to-quincy

quincy: doc/cephfs: fix prompts in fs-volumes.rst

Reviewed-by: Anthony D'Atri <anthony.datri@gmail.com>
2 years ago  doc/cephfs: fix prompts in fs-volumes.rst 51435/head
Zac Dover [Wed, 10 May 2023 14:52:50 +0000 (00:52 +1000)]
doc/cephfs: fix prompts in fs-volumes.rst

Fixed a regression introduced in
e5355e3d66e1438d51de6b57eae79fab47cd0184 that broke the unselectable
prompts in the RST.

Signed-off-by: Zac Dover <zac.dover@proton.me>
(cherry picked from commit e019948783adf41207d70e8cd2540d335e07b80b)

2 years ago  Merge pull request #51372 from zdover23/wip-doc-2023-05-06-backport-51359-to-quincy
zdover23 [Wed, 10 May 2023 13:23:14 +0000 (23:23 +1000)]
Merge pull request #51372 from zdover23/wip-doc-2023-05-06-backport-51359-to-quincy

quincy: doc/cephfs: repairing inaccessible FSes

Reviewed-by: Cole Mitchell <cole.mitchell.ceph@gmail.com>
2 years ago  Merge pull request #51420 from zdover23/wip-doc-2023-05-10-backport-51403-to-quincy
Anthony D'Atri [Wed, 10 May 2023 12:24:07 +0000 (08:24 -0400)]
Merge pull request #51420 from zdover23/wip-doc-2023-05-10-backport-51403-to-quincy

quincy: doc/start: fix "Planet Ceph" link

2 years ago  doc/start: fix "Planet Ceph" link 51420/head
Zac Dover [Tue, 9 May 2023 03:39:10 +0000 (13:39 +1000)]
doc/start: fix "Planet Ceph" link

Fix a link to Planet Ceph on the doc/start/get-involved.rst page.

Reported 2023 Apr 21, here:
https://pad.ceph.com/p/Report_Documentation_Bugs

Signed-off-by: Zac Dover <zac.dover@proton.me>
(cherry picked from commit 67ebc206648144e533b627b9c22f29695764b26b)

2 years ago  Merge pull request #51398 from zdover23/wip-doc-2023-05-09-backport-51394-to-quincy
Anthony D'Atri [Tue, 9 May 2023 08:48:53 +0000 (04:48 -0400)]
Merge pull request #51398 from zdover23/wip-doc-2023-05-09-backport-51394-to-quincy

quincy: doc/dev/encoding.txt: update per std::optional

2 years ago  Merge pull request #51401 from zdover23/wip-doc-2023-05-09-backport-51392-to-quincy
Anthony D'Atri [Tue, 9 May 2023 08:37:46 +0000 (04:37 -0400)]
Merge pull request #51401 from zdover23/wip-doc-2023-05-09-backport-51392-to-quincy

quincy: doc: update multisite doc

2 years ago  doc: update multisite doc 51401/head
parth-gr [Mon, 8 May 2023 13:53:29 +0000 (19:23 +0530)]
doc: update multisite doc

The command for getting the zone group was spelled incorrectly.
Updated it to radosgw-admin.

Signed-off-by: parth-gr <paarora@redhat.com>
(cherry picked from commit edab93b2f15b19f05a86aab499ba11b56135aaf3)

2 years ago  doc/dev/encoding.txt: update per std::optional 51398/head
Radoslaw Zarzynski [Mon, 8 May 2023 14:41:22 +0000 (14:41 +0000)]
doc/dev/encoding.txt: update per std::optional

Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
(cherry picked from commit 622829cebcca8ae4ec6f0463a4d74c909998a92d)

2 years ago  Merge pull request #49973 from sseshasa/wip-quincy-fix-mclk-rec-backfill-cost
Neha Ojha [Mon, 8 May 2023 18:49:22 +0000 (11:49 -0700)]
Merge pull request #49973 from sseshasa/wip-quincy-fix-mclk-rec-backfill-cost

quincy: osd: mClock recovery/backfill cost fixes

Reviewed-by: Samuel Just <sjust@redhat.com>
2 years ago  Merge pull request #51390 from zdover23/wip-doc-2023-05-08-backport-51387-to-quincy
zdover23 [Mon, 8 May 2023 13:37:13 +0000 (23:37 +1000)]
Merge pull request #51390 from zdover23/wip-doc-2023-05-08-backport-51387-to-quincy

quincy: doc/rados: stretch-mode.rst (other commands)

Reviewed-by: Cole Mitchell <cole.mitchell.ceph@gmail.com>
2 years ago  doc/rados: stretch-mode.rst (other commands) 51390/head
Zac Dover [Mon, 8 May 2023 11:08:49 +0000 (21:08 +1000)]
doc/rados: stretch-mode.rst (other commands)

Edit the "Other Commands" section of
doc/rados/operations/stretch-mode.rst.

Signed-off-by: Zac Dover <zac.dover@proton.me>
(cherry picked from commit fde33f1a5b8dbd03c096140887e04038a82f3076)

2 years ago  Merge pull request #51367 from rhcs-dashboard/quincy-crud
Nizamudeen A [Mon, 8 May 2023 10:45:07 +0000 (16:15 +0530)]
Merge pull request #51367 from rhcs-dashboard/quincy-crud

quincy: mgr/dashboard CRUD component backport

Reviewed-by: Pegonzal <NOT@FOUND>
Reviewed-by: Ernesto Puerta <epuertat@redhat.com>
Reviewed-by: Nizamudeen A <nia@redhat.com>
2 years ago  qa/: Override mClock profile to 'high_recovery_ops' for qa tests 49973/head
Sridhar Seshasayee [Sat, 29 Apr 2023 04:58:04 +0000 (10:28 +0530)]
qa/: Override mClock profile to 'high_recovery_ops' for qa tests

The qa tests are not client I/O centric and mostly focus on triggering
recovery/backfills and monitor them for completion within a finite amount
of time. The same holds true for scrub operations.

Therefore, an mClock profile that optimizes background operations is a
better fit for qa-related tests. The osd_mclock_profile is thus
globally overridden to the 'high_recovery_ops' profile for the Rados suite, as
it fits the requirement.

Also, many standalone tests expect recovery and scrub operations to
complete within a finite time. To ensure this, the osd_mclock_profile
option is set to 'high_recovery_ops' as part of the run_osd() function
in ceph-helpers.sh.

A subset of standalone tests explicitly used 'high_recovery_ops' profile.
Since the profile is now set as part of run_osd(), the earlier overrides
are redundant and therefore removed from the tests.

Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
2 years ago  doc/: Modify mClock configuration documentation to reflect profile changes
Sridhar Seshasayee [Tue, 11 Apr 2023 17:57:05 +0000 (23:27 +0530)]
doc/: Modify mClock configuration documentation to reflect profile changes

Modify the relevant documentation to reflect:

- change in the default mClock profile to 'balanced'
- new allocations for ops across mClock profiles
- change in the osd_max_backfills limit
- miscellaneous changes related to warnings.

Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
2 years ago  common/options/osd.yaml.in: Change mclock max sequential bandwidth for SSDs
Sridhar Seshasayee [Tue, 11 Apr 2023 16:48:51 +0000 (22:18 +0530)]
common/options/osd.yaml.in: Change mclock max sequential bandwidth for SSDs

The osd_mclock_max_sequential_bandwidth_ssd is changed to 1200 MiB/s as
a reasonable middle ground considering the broad range of SSD capabilities.
This allows mClock's cost model to extract the SSD's capability
depending on the cost of the IO being performed.

Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
2 years ago  osd/: Retain the default osd_max_backfills limit to 1 for mClock
Sridhar Seshasayee [Tue, 11 Apr 2023 16:28:35 +0000 (21:58 +0530)]
osd/: Retain the default osd_max_backfills limit to 1 for mClock

The earlier limit of 3 was still aggressive enough to have an impact on
the client and other competing operations. Retain the current default
for mClock. This can be modified if necessary after setting the
osd_mclock_override_recovery_settings option.

Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
2 years ago  common/options/osd.yaml.in: change mclock profile default to balanced
Samuel Just [Tue, 11 Apr 2023 15:15:38 +0000 (08:15 -0700)]
common/options/osd.yaml.in: change mclock profile default to balanced

Let's use the middle profile as the default.
Modify the standalone tests accordingly.

Signed-off-by: Samuel Just <sjust@redhat.com>
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
2 years ago  osd/scheduler/mClockScheduler: avoid limits for recovery
Samuel Just [Tue, 11 Apr 2023 15:10:04 +0000 (08:10 -0700)]
osd/scheduler/mClockScheduler: avoid limits for recovery

Now that recovery operations are split between background_recovery and
background_best_effort, rebalance qos params to avoid penalizing
background_recovery while idle.

Signed-off-by: Samuel Just <sjust@redhat.com>
2 years ago  osd/: add counters for ops delayed due to degraded|unreadable target
Samuel Just [Mon, 10 Apr 2023 21:18:49 +0000 (14:18 -0700)]
osd/: add counters for ops delayed due to degraded|unreadable target

Signed-off-by: Samuel Just <sjust@redhat.com>
2 years ago  osd/: add counters for queue latency for PGRecovery[Context]
Samuel Just [Thu, 6 Apr 2023 21:15:02 +0000 (14:15 -0700)]
osd/: add counters for queue latency for PGRecovery[Context]

Signed-off-by: Samuel Just <sjust@redhat.com>
2 years ago  osd/: add per-op latency averages for each recovery related message
Samuel Just [Thu, 6 Apr 2023 20:50:48 +0000 (20:50 +0000)]
osd/: add per-op latency averages for each recovery related message

Signed-off-by: Samuel Just <sjust@redhat.com>
2 years ago  osd/: differentiate priority for PGRecovery[Context]
Samuel Just [Thu, 6 Apr 2023 07:04:05 +0000 (00:04 -0700)]
osd/: differentiate priority for PGRecovery[Context]

PGs with degraded objects should be higher priority.

Signed-off-by: Samuel Just <sjust@redhat.com>
2 years ago  osd/: add MSG_OSD_PG_(BACKFILL|BACKFILL_REMOVE|SCAN) as recovery messages
Samuel Just [Thu, 6 Apr 2023 05:57:48 +0000 (22:57 -0700)]
osd/: add MSG_OSD_PG_(BACKFILL|BACKFILL_REMOVE|SCAN) as recovery messages

Otherwise, these end up as PGOpItem and therefore as immediate:

class PGOpItem : public PGOpQueueable {
...
  op_scheduler_class get_scheduler_class() const final {
    auto type = op->get_req()->get_type();
    if (type == CEPH_MSG_OSD_OP ||
        type == CEPH_MSG_OSD_BACKOFF) {
      return op_scheduler_class::client;
    } else {
      return op_scheduler_class::immediate;
    }
  }
...
};

This was probably causing a bunch of extra interference with client
ops.
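
For illustration, a self-contained sketch of the scheduling effect described
above (simplified stand-ins for the real message constants and scheduler
types; not the actual OpSchedulerItem code):

#include <iostream>

// Simplified stand-ins for the real message-type constants and classes.
enum msg_type { CEPH_MSG_OSD_OP, CEPH_MSG_OSD_BACKOFF,
                MSG_OSD_PG_BACKFILL, MSG_OSD_PG_BACKFILL_REMOVE,
                MSG_OSD_PG_SCAN, MSG_OTHER };
enum class op_scheduler_class { client, background_recovery, immediate };

bool is_recovery_msg(msg_type t) {
  switch (t) {
  case MSG_OSD_PG_BACKFILL:
  case MSG_OSD_PG_BACKFILL_REMOVE:
  case MSG_OSD_PG_SCAN:
    return true;
  default:
    return false;
  }
}

op_scheduler_class get_scheduler_class(msg_type t) {
  if (t == CEPH_MSG_OSD_OP || t == CEPH_MSG_OSD_BACKOFF)
    return op_scheduler_class::client;
  if (is_recovery_msg(t))
    return op_scheduler_class::background_recovery;  // after this change
  return op_scheduler_class::immediate;              // previous fate of backfill/scan msgs
}

int main() {
  std::cout << "PG_SCAN treated as recovery: "
            << (get_scheduler_class(MSG_OSD_PG_SCAN) ==
                op_scheduler_class::background_recovery)
            << "\n";
}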

Signed-off-by: Samuel Just <sjust@redhat.com>
2 years ago  osd/: differentiate scheduler class for undersized/degraded vs data movement
Samuel Just [Thu, 6 Apr 2023 05:57:42 +0000 (22:57 -0700)]
osd/: differentiate scheduler class for undersized/degraded vs data movement

Recovery operations on pgs/objects that have fewer than the configured
number of copies should be treated more urgently than operations on
pgs/objects that simply need to be moved to a new location.

Signed-off-by: Samuel Just <sjust@redhat.com>
2 years ago  osd/.../OpSchedulerItem: add MSG_OSD_PG_PULL to is_recovery_msg
Samuel Just [Thu, 6 Apr 2023 04:30:18 +0000 (04:30 +0000)]
osd/.../OpSchedulerItem: add MSG_OSD_PG_PULL to is_recovery_msg

Signed-off-by: Samuel Just <sjust@redhat.com>
2 years ago  osd/: move PGRecoveryMsg check from osd into PGRecoveryMsg::is_recovery_msg
Samuel Just [Thu, 6 Apr 2023 04:23:23 +0000 (04:23 +0000)]
osd/: move PGRecoveryMsg check from osd into PGRecoveryMsg::is_recovery_msg

Signed-off-by: Samuel Just <sjust@redhat.com>
2 years ago  osd/: move get_recovery_op_priority into PeeringState next to get_*_priority
Samuel Just [Thu, 6 Apr 2023 03:45:19 +0000 (03:45 +0000)]
osd/: move get_recovery_op_priority into PeeringState next to get_*_priority

Consolidate methods governing recovery scheduling in PeeringState.

Signed-off-by: Samuel Just <sjust@redhat.com>
2 years ago  osd/scheduler: simplify qos specific params in OpSchedulerItem
Samuel Just [Tue, 4 Apr 2023 23:34:17 +0000 (23:34 +0000)]
osd/scheduler: simplify qos specific params in OpSchedulerItem

is_qos_item() was only used in operator<< for OpSchedulerItem.  However,
it's actually useful to see priority for mclock items since it affects
whether it goes into the immediate queues and, for some types, the
class.  Unconditionally display both class_id and priority.
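
A small sketch of the display change being described (stand-in type and field
names are assumptions, not the actual OpSchedulerItem operator<<):

#include <iostream>

// Stand-in item type for illustration only.
struct item_t {
  unsigned class_id;
  unsigned priority;
};

// After this change: always print both class_id and priority, since priority
// matters for mclock items too (immediate-queue routing and, for some types,
// the class).
std::ostream &operator<<(std::ostream &os, const item_t &i) {
  return os << "class_id=" << i.class_id << " prio=" << i.priority;
}

int main() { std::cout << item_t{2, 127} << "\n"; }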

Signed-off-by: Samuel Just <sjust@redhat.com>
2 years ago  osd/scheduler: remove unused PGOpItem::maybe_get_mosd_op
Samuel Just [Tue, 4 Apr 2023 23:22:59 +0000 (23:22 +0000)]
osd/scheduler: remove unused PGOpItem::maybe_get_mosd_op

Signed-off-by: Samuel Just <sjust@redhat.com>
2 years ago  osd/scheduler: remove OpQueueable::get_order_locker() and supporting machinery
Samuel Just [Tue, 4 Apr 2023 23:13:41 +0000 (23:13 +0000)]
osd/scheduler: remove OpQueueable::get_order_locker() and supporting machinery

Apparently unused.

Signed-off-by: Samuel Just <sjust@redhat.com>
2 years ago  osd/scheduler: remove OpQueueable::get_op_type() and supporting machinery
Samuel Just [Tue, 4 Apr 2023 23:05:56 +0000 (23:05 +0000)]
osd/scheduler: remove OpQueueable::get_op_type() and supporting machinery

Apparently unused.

Signed-off-by: Samuel Just <sjust@redhat.com>
2 years ago  PeeringState::clamp_recovery_priority: use std::clamp
Samuel Just [Mon, 3 Apr 2023 20:31:46 +0000 (13:31 -0700)]
PeeringState::clamp_recovery_priority: use std::clamp

Signed-off-by: Samuel Just <sjust@redhat.com>
2 years ago  doc: Modify mClock configuration documentation to reflect new cost model
Sridhar Seshasayee [Sat, 25 Mar 2023 07:16:09 +0000 (12:46 +0530)]
doc: Modify mClock configuration documentation to reflect new cost model

Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
2 years ago  osd: Retain overridden mClock recovery settings across osd restarts
Sridhar Seshasayee [Tue, 21 Feb 2023 13:01:32 +0000 (18:31 +0530)]
osd: Retain overridden mClock recovery settings across osd restarts

Fix an issue where an overridden mClock recovery setting (set prior to
an osd restart) could be lost after an osd restart.

For example, consider that prior to an osd restart, the option
'osd_max_backfills' was successfully set to a value different from the
mClock default. If the osd was restarted for some reason, the
boot-up sequence incorrectly reset the backfill value to the
mClock default within the async local/remote reservers. This fix
ensures that no change is made if the current overridden value is
different from the mClock default.
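
A minimal sketch of the guard being described (assumed names and simplified
logic; the real reserver update path differs):

#include <cstdint>
#include <iostream>

// Old behavior (the bug): on boot, unconditionally reset the reserver limit to
// the mClock built-in default, losing any operator override.
// Fixed behavior: only fall back to the default when no override is in place.
std::uint64_t reserver_limit_on_boot(std::uint64_t configured_max_backfills,
                                     std::uint64_t mclock_default) {
  return (configured_max_backfills != mclock_default)
             ? configured_max_backfills  // keep the overridden value
             : mclock_default;
}

int main() {
  std::cout << reserver_limit_on_boot(/*configured*/ 5, /*default*/ 1) << "\n";  // prints 5
}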

Modify an existing standalone test to verify that the local and remote
async reservers are updated to the desired number of backfills under
normal conditions and also across osd restarts.

Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
2 years ago  osd: Set default max active recovery and backfill limits for mClock
Sridhar Seshasayee [Mon, 20 Mar 2023 13:24:57 +0000 (18:54 +0530)]
osd: Set default max active recovery and backfill limits for mClock

Client ops are sensitive to the recovery load, so recovery limits must be
carefully set for osds whose underlying device is an HDD. Tests revealed that
recoveries with osd_max_backfills = 10 and osd_recovery_max_active_hdd = 5
were still aggressive and overwhelmed client ops. The built-in defaults
for mClock are now set to:

    1) osd_recovery_max_active_hdd = 3
    2) osd_recovery_max_active_ssd = 10
    3) osd_max_backfills = 3

The above may be modified if necessary by setting
osd_mclock_override_recovery_settings option.

Fixes: https://tracker.ceph.com/issues/58529
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
2 years ago  osd/scheduler/mClockScheduler: make is_rotational const
Sridhar Seshasayee [Wed, 29 Mar 2023 19:33:08 +0000 (01:03 +0530)]
osd/scheduler/mClockScheduler: make is_rotational const

Signed-off-by: Samuel Just <sjust@redhat.com>
2 years ago  osd/scheduler/mClockScheduler: simplify profile handling
Sridhar Seshasayee [Wed, 29 Mar 2023 19:31:29 +0000 (01:01 +0530)]
osd/scheduler/mClockScheduler: simplify profile handling

Previously, setting default configs from the configured profile was
split across:
- enable_mclock_profile_settings
- set_mclock_profile - sets mclock_profile class member
- set_*_allocations - updates client_allocs class member
- set_profile_config - sets profile based on client_allocs class member

This made tracing the effect of changing the profile pretty challenging
due to passing state through class member variables.

Instead, define a simple profile_t with three constexpr values
corresponding to the three profiles and handle it all in a single
set_config_defaults_from_profile() method.
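
For illustration, a self-contained sketch of the profile_t approach (the type
and method names follow the commit message, but the fields and all numbers are
invented placeholders, not the real mClock allocations):

#include <cstdint>
#include <iostream>
#include <string>

// A standalone approximation of the approach described above; the numeric
// values are NOT the actual mClock allocations.
struct profile_t {
  struct allocation_t {
    double reservation;   // proportion of OSD capacity
    std::uint64_t weight;
    double limit;         // proportion of OSD capacity (0.0 == unlimited here)
  };
  allocation_t client;
  allocation_t background_recovery;
  allocation_t background_best_effort;
};

constexpr profile_t balanced{{0.50, 1, 0.0}, {0.50, 1, 0.0}, {0.0, 1, 0.90}};
constexpr profile_t high_client_ops{{0.60, 2, 0.0}, {0.40, 1, 0.70}, {0.0, 1, 0.70}};
constexpr profile_t high_recovery_ops{{0.30, 1, 0.80}, {0.70, 2, 0.0}, {0.0, 1, 0.0}};

// One place to translate the configured profile name into QoS defaults,
// instead of spreading the state across several setter methods.
const profile_t &profile_for(const std::string &name) {
  if (name == "high_client_ops") return high_client_ops;
  if (name == "high_recovery_ops") return high_recovery_ops;
  return balanced;  // the new default in this series
}

void set_config_defaults_from_profile(const std::string &name) {
  const profile_t &p = profile_for(name);
  std::cout << name << ": client reservation proportion = "
            << p.client.reservation << "\n";
}

int main() { set_config_defaults_from_profile("balanced"); }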

Signed-off-by: Samuel Just <sjust@redhat.com>
2 years ago  osd: Modify mClock scheduler's cost model to represent cost in bytes
Sridhar Seshasayee [Thu, 9 Feb 2023 15:35:22 +0000 (21:05 +0530)]
osd: Modify mClock scheduler's cost model to represent cost in bytes

The mClock scheduler's cost model for HDDs/SSDs is modified and now
represents the cost of an IO in terms of bytes.

The cost parameters, namely osd_mclock_cost_per_io_usec_[hdd|ssd]
and osd_mclock_cost_per_byte_usec_[hdd|ssd], which represent the cost
of an IO in seconds, are inaccurate and therefore removed.

The new model considers the following aspects of an osd to calculate
the cost of an IO:

 - osd_mclock_max_capacity_iops_[hdd|ssd] (existing option)
   The measured random write IOPS at 4 KiB block size. This is
   measured during OSD boot-up using OSD bench tool.
 - osd_mclock_max_sequential_bandwidth_[hdd|ssd] (new config option)
   The maximum sequential bandwidth of the underlying device.
   For HDDs, 150 MiB/s is considered, and for SSDs 750 MiB/s is
   considered in the cost calculation.

The following important changes are made to arrive at the overall
cost of an IO,

1. Represent QoS reservation and limit config parameter as proportion:
The reservation and limit parameters are now set in terms of a
proportion of the OSD's max IOPS capacity. The earlier representation
was in terms of IOPS per OSD shard which required the user to perform
calculations before setting the parameter. Representing the
reservation and limit in terms of proportions is much more intuitive
and simpler for a user.

2. Cost per IO Calculation:
Using the above config options, osd_bandwidth_cost_per_io for the osd is
calculated and set. It is the ratio of the max sequential bandwidth to
the max random write IOPS of the osd. It is a constant and represents the
base cost of an IO in terms of bytes. This is added to the actual size of
the IO (in bytes) to represent the overall cost of the IO operation. See
mClockScheduler::calc_scaled_cost().

3. Cost calculation in Bytes:
The settings for reservation and limit, expressed as a fraction of the OSD's
maximum IOPS capacity, are converted to bytes/sec before updating the
mClock server's ClientInfo structure. This is done for each OSD op shard
using osd_bandwidth_capacity_per_shard, as shown below:

    (res|lim)  = (IOPS proportion) * osd_bandwidth_capacity_per_shard
    (Bytes/sec)   (unitless)             (bytes/sec)

The above result is updated within the mClock server's ClientInfo
structure for different op_scheduler_class operations. See
mClockScheduler::ClientRegistry::update_from_config().

The overall cost of an IO operation (in secs) is finally determined
during the tag calculations performed in the mClock server. See
crimson::dmclock::RequestTag::tag_calc() for more details.
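
A worked numeric sketch of the calculations in (2) and (3) above; all device
figures and the shard count are invented for illustration and are not defaults
or measured values:

#include <iostream>

int main() {
  // Assumed OSD characteristics (hypothetical numbers):
  const double max_seq_bw_bytes    = 1200.0 * (1 << 20);  // sequential bandwidth, bytes/sec
  const double max_rand_write_iops = 20000.0;             // measured 4 KiB random write IOPS
  const int    num_op_shards       = 8;                   // assumed shard count

  // (2) Base cost of an IO in bytes: sequential bandwidth / random write IOPS.
  const double bandwidth_cost_per_io = max_seq_bw_bytes / max_rand_write_iops;

  // Overall cost of a 4 KiB write: base cost plus the IO's own size.
  const double io_size_bytes = 4096.0;
  const double scaled_cost   = bandwidth_cost_per_io + io_size_bytes;

  // (3) A reservation given as a proportion of capacity, converted to bytes/sec
  // per op shard before being handed to the mClock server.
  const double reservation_proportion       = 0.25;
  const double bandwidth_capacity_per_shard = max_seq_bw_bytes / num_op_shards;
  const double reservation_bytes_per_sec    = reservation_proportion *
                                              bandwidth_capacity_per_shard;

  std::cout << "base cost per IO (bytes):        " << bandwidth_cost_per_io << "\n"
            << "cost of a 4 KiB write (bytes):   " << scaled_cost << "\n"
            << "reservation (bytes/sec, /shard): " << reservation_bytes_per_sec << "\n";
}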

4. Profile Allocations:
Optimize mClock profile allocations due to the change in the cost model
and lower recovery cost.

5. Modify standalone tests to reflect the change in the QoS config
parameter representation of reservation and limit options.

Fixes: https://tracker.ceph.com/issues/58529
Fixes: https://tracker.ceph.com/issues/59080
Signed-off-by: Samuel Just <sjust@redhat.com>
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
2 years ago  osd: update PGRecovery queue item cost to reflect object size
Sridhar Seshasayee [Fri, 3 Feb 2023 12:23:06 +0000 (17:53 +0530)]
osd: update PGRecovery queue item cost to reflect object size

Previously, we used a static value of osd_recovery_cost (20M
by default) for PGRecovery. For pools with relatively small
objects, this causes mclock to backfill very slowly, as
20M massively overestimates the amount of IO each recovery
queue operation requires. Instead, add a cost_per_object
parameter to OSDService::awaiting_throttle and set it to the
average object size in the PG being queued.
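
A hedged sketch of the cost change (assumed helper name and fallback behavior;
the actual OSDService code differs):

#include <cstdint>
#include <iostream>

// Illustrative only: cost each queued recovery item by the PG's average object
// size so mclock sees a realistic amount of IO, instead of a fixed 20M estimate.
std::uint64_t recovery_item_cost(std::uint64_t pg_bytes, std::uint64_t pg_objects,
                                 std::uint64_t default_cost /* e.g. 20 MiB */) {
  if (pg_objects == 0)
    return default_cost;           // fall back when the PG holds no objects
  return pg_bytes / pg_objects;    // average object size as the per-item cost
}

int main() {
  // A PG of 80 GiB across 20000 objects -> roughly 4 MiB per queued recovery op,
  // far below the old static 20 MiB estimate.
  std::cout << recovery_item_cost(80ull << 30, 20000, 20ull << 20) << "\n";
}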

Fixes: https://tracker.ceph.com/issues/58606
Signed-off-by: Samuel Just <sjust@redhat.com>
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
2 years ago  osd: update OSDService::queue_recovery_context to specify cost
Sridhar Seshasayee [Fri, 3 Feb 2023 12:17:38 +0000 (17:47 +0530)]
osd: update OSDService::queue_recovery_context to specify cost

Previously, we always queued this with cost osd_recovery_cost which
defaults to 20M. With mclock, this caused these items to be delayed
heavily. Instead, base the cost on the operation queued.

Fixes: https://tracker.ceph.com/issues/58606
Signed-off-by: Samuel Just <sjust@redhat.com>
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
2 years ago  osd/osd_types: use appropriate cost value for PullOp
Sridhar Seshasayee [Fri, 3 Feb 2023 12:12:46 +0000 (17:42 +0530)]
osd/osd_types: use appropriate cost value for PullOp

See included comments -- previous values did not account for object
size.  This causes problems for mclock which is much more strict
in how it interprets costs.

Fixes: https://tracker.ceph.com/issues/58607
Signed-off-by: Samuel Just <sjust@redhat.com>
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
2 years ago  osd/osd_types: use appropriate cost value for PushReplyOp
Sridhar Seshasayee [Thu, 2 Feb 2023 12:16:27 +0000 (17:46 +0530)]
osd/osd_types: use appropriate cost value for PushReplyOp

See included comments -- previous values did not account for object
size.  This causes problems for mclock which is much more strict
in how it interprets costs.
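
A hedged sketch of the idea (assumed names; the actual cost estimation in
osd_types is more detailed):

#include <cstdint>
#include <iostream>

// Illustrative only: let the op's cost scale with the size of the object being
// pushed, plus a per-op overhead, since mclock interprets cost in bytes.
std::uint64_t push_reply_cost(std::uint64_t per_op_overhead_bytes,
                              std::uint64_t object_size_bytes) {
  return per_op_overhead_bytes + object_size_bytes;
}

int main() {
  std::cout << push_reply_cost(8192, 4ull << 20) << "\n";  // ~4 MiB object
}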

Fixes: https://tracker.ceph.com/issues/58529
Signed-off-by: Samuel Just <sjust@redhat.com>
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
2 years ago  mgr/dashboard: Edit ceph authx users 51367/head
Pedro Gonzalez Gomez [Mon, 20 Feb 2023 13:37:00 +0000 (14:37 +0100)]
mgr/dashboard: Edit ceph authx users

Signed-off-by: Pedro Gonzalez Gomez <pegonzal@redhat.com>
Signed-off-by: Pere Diaz Bou <pdiazbou@redhat.com>
(cherry picked from commit 8177a748bd831568417df5c687109fbbbd9b981d)
(cherry picked from commit bc73c1aec686282547d6b920ef7ef239d0231f40)