git.apps.os.sepia.ceph.com Git - ceph.git/log
ceph.git
2 years agorgw: Fix `rgw::sal::Bucket::empty` static method signatures
Adam C. Emerson [Mon, 11 Jul 2022 15:52:09 +0000 (11:52 -0400)]
rgw: Fix `rgw::sal::Bucket::empty` static method signatures

The `unique_ptr` overload should take its argument by reference.

Both should be const.

Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
(cherry picked from commit b1d3e6c00674ebf6bde08968789a426d65db73d9)

Conflicts:
src/rgw/rgw_sal.h
 - `unique_ptr` overload of empty

Fixes: https://tracker.ceph.com/issues/56585
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
(cherry picked from commit e40ce4a3511e669e761da5f39d81b14e6cdbdeba)
Resolves: rhbz#118423
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
2 years agocephadm: continue to support --yes-i-know flag
Adam King [Mon, 22 Aug 2022 13:34:12 +0000 (09:34 -0400)]
cephadm: continue to support --yes-i-know flag

Signed-off-by: Adam King <adking@redhat.com>
Resolves: rhbz#2073030

2 years agocephadm: add "su root root" to cephadm.log logrotate config
Adam King [Tue, 19 Jul 2022 19:02:47 +0000 (15:02 -0400)]
cephadm: add "su root root" to cephadm.log logrotate config

Fixes: https://tracker.ceph.com/issues/56639
Signed-off-by: Adam King <adking@redhat.com>
(cherry picked from commit c0929e7e3ea14b0f8bfc71dc8ff74efddafb3ffc)
(cherry picked from commit e890629d694f28ab288ae8e1b17824c306b7afe0)

Resolves: rhbz#2099670

2 years agomgr/cephadm: use hostname from crush map for osd memory autotuning
Adam King [Fri, 3 Jun 2022 01:32:53 +0000 (21:32 -0400)]
mgr/cephadm: use hostname from crush map for osd memory autotuning

Fixes: https://tracker.ceph.com/issues/55841
Signed-off-by: Adam King <adking@redhat.com>
(cherry picked from commit 50f28aa56edd348c3816335bef3bbfaf5133ae54)
(cherry picked from commit 8ac0eaed304f151ec3d9801abbca7eed9c88383a)

Resolves: rhbz#2107849

2 years agomgr/cephadm: store device info separately from rest of host cache
Adam King [Mon, 23 May 2022 19:57:14 +0000 (15:57 -0400)]
mgr/cephadm: store device info separately from rest of host cache

Device info tends to take up the most space out of
everything, so the hope is that by giving it its own
location in the config-key store we can avoid hitting
issues where the host cache value we attempt to
place in the config-key store exceeds the size limit.

Fixes: https://tracker.ceph.com/issues/54251
Fixes: https://tracker.ceph.com/issues/53624
Signed-off-by: Adam King <adking@redhat.com>
(cherry picked from commit e35d4144d380cef190a04517b4d7b30d520d5b4f)
(cherry picked from commit f9cf5f1316e4ea2e9e86ddd70351b644198a582b)

Resolves: rhbz#2053276

2 years agoosd/SnapMapper: fix pacific legacy key conversion and introduce test
Manuel Lausch [Thu, 30 Jun 2022 12:29:53 +0000 (14:29 +0200)]
osd/SnapMapper: fix pacific legacy key conversion and introduce test

Octopus modified the SnapMapper key format from

  <LEGACY_MAPPING_PREFIX><snapid>_<shardid>_<hobject_t::to_str()>

to

  <MAPPING_PREFIX><pool>_<snapid>_<shardid>_<hobject_t::to_str()>

When this change was introduced, 94ebe0ea also introduced a conversion
with a crucial bug which essentially destroyed legacy keys by mapping them
to

  <MAPPING_PREFIX><poolid>_<snapid>_

without the object-unique suffix.  This commit fixes this conversion going
forward, but a fix for existing clusters still needs to be developed.

Fixes: https://tracker.ceph.com/issues/56147
Signed-off-by: Manuel Lausch <manuel.lausch@1und1.de>
Signed-off-by: Matan Breizman <mbreizma@redhat.com>
(cherry picked from commit 66bea86ab447b2db8a948c7913dc6c7a4a995c31)
(cherry picked from commit 9b5fbb8cb80c1d606f138bdbdce62686c084207a)

Resolves: rhbz#2107404

2 years agomgr: relax "pending_service_map.epoch > service_map.epoch" assert
Mykola Golub [Thu, 21 Apr 2022 08:57:25 +0000 (11:57 +0300)]
mgr: relax "pending_service_map.epoch > service_map.epoch" assert

When we are activating we may receive several service map updates
initiated by the previous active mgr. Treat them all as the initial map.

The code also adds a "pending_service_map_dirty == 0" assert, which we
expect to be true when receiving an initial map -- otherwise we can't
just initialize pending_service_map with the received map.

Fixes: https://tracker.ceph.com/issues/51835
Signed-off-by: Mykola Golub <mgolub@suse.com>
(cherry picked from commit cc2721ccdb33248a732abd1919df808ef8a1f80f)
(cherry picked from commit 7fc2f0f675a8c9da7cc1563854ab79578f906f1e)

Resolves: rhbz#1984881

2 years agomgr/dashboard: add rbd primary info
Pere Diaz Bou [Wed, 1 Jun 2022 10:44:35 +0000 (12:44 +0200)]
mgr/dashboard: add rbd primary info

Signed-off-by: Pere Diaz Bou <pdiazbou@redhat.com>
Resolves: rhbz#2115243

2 years agomgr/dashboard: add cephadm e2e tests for checking count for services instances
Avan Thakkar [Wed, 3 Aug 2022 08:45:02 +0000 (14:15 +0530)]
mgr/dashboard: add cephadm e2e tests for checking count for services instances

Resolves: rhbz#2101771

Signed-off-by: Avan Thakkar <athakkar@redhat.com>
(cherry picked from commit 25b03d85153e368098ebe5bf066f0872449af729)

2 years agomgr/dashboard: cluster > hosts: host list tables doesn't show all services deployed
Avan Thakkar [Mon, 25 Jul 2022 13:49:22 +0000 (19:19 +0530)]
mgr/dashboard: cluster > hosts: host list tables doesn't show all services deployed

Resolves: rhbz#2101771

Fixes: https://tracker.ceph.com/issues/53210
Signed-off-by: Avan Thakkar <athakkar@redhat.com>
Service instances were displaying only the ceph services; with these changes,
instances of cephadm services will be displayed as well.

(cherry picked from commit 4d50da7629145d40da3a2820c3b5c8cdb2bca33f)

2 years agomgr/dashboard: rbd image primary ui
Pedro Gonzalez Gomez [Fri, 3 Jun 2022 10:13:44 +0000 (12:13 +0200)]
mgr/dashboard: rbd image primary ui

Resolves: rhbz#2115243

Signed-off-by: Pedro Gonzalez Gomez <pegonzal@redhat.com>
Signed-off-by: Pere Diaz Bou <pdiazbou@redhat.com>
(cherry picked from commit 1a17bd27a08d87ef24457788542f064f4e917d54)

2 years agomgr/dashboard: fix linting issues
Pere Diaz Bou [Tue, 21 Jun 2022 08:34:56 +0000 (10:34 +0200)]
mgr/dashboard: fix linting issues

Resolves: rhbz#1891012

Signed-off-by: Pere Diaz Bou <pdiazbou@redhat.com>
(cherry picked from commit 536c3833d990c95cd7b8e46e84abc43d7ffe8607)

2 years agomgr/dashboard: Error page cleanup
Nizamudeen A [Wed, 15 Jun 2022 17:22:39 +0000 (22:52 +0530)]
mgr/dashboard: Error page cleanup

Resolves: rhbz#1891012

Some error page cleanups

Signed-off-by: Nizamudeen A <nia@redhat.com>
(cherry picked from commit ec899f2de30760edd6aa10fb7d5bdd61fadf83af)

2 years agomgr/dashboard: configure rbd mirroring
Nizamudeen A [Mon, 6 Jun 2022 05:51:29 +0000 (11:21 +0530)]
mgr/dashboard: configure rbd mirroring

Resolves: rhbz#1891012

One-click button, in the case of an orch cluster, for configuring
rbd-mirroring when it's not properly set up. This button will create an
rbd-mirror service and also an rbd-labelled replicated pool (size 3),
if they don't already exist.

Fixes: https://tracker.ceph.com/issues/55646
Signed-off-by: Nizamudeen A <nia@redhat.com>
 Conflicts:
src/pybind/mgr/dashboard/frontend/src/app/core/error/error.component.html
src/pybind/mgr/dashboard/frontend/src/app/shared/services/module-status-guard.service.ts

This commit had minor conflicts where dashboardButton
had to be explicitly added and the state of api/orch/status in
module-status-guard had to be updated.

(cherry picked from commit e918150212074a757c95f05ecb9d528ff1fe6e06)

2 years agomgr/dashboard: add rbd status endpoint
Melissa Li [Thu, 26 May 2022 18:07:30 +0000 (14:07 -0400)]
mgr/dashboard: add rbd status endpoint

Resolves: rhbz#1891012

Show "No RBD pools available" error page when accessing block/rbd if there are no rbd pools.
Add a "button_name" and "button_route" property to `ModuleStatusGuardService` config to customize the button on the error page.
Modify `ModuleStatusGuardService` to execute API calls to `/ui-api/<uiApiPath>/status` which uses the `UIRouter`.

Fixes: https://tracker.ceph.com/issues/42109
Signed-off-by: Melissa Li <melissali@redhat.com>
(cherry picked from commit 6ac9b3cfe171a8902454ea907b3ba37d83eda3dc)
(cherry picked from commit f150637b0417d5877c8dc9d125f8e9cc837fec71)

2 years agomgr/dashboard: improve edit site name action in rbd-mirroring
Nizamudeen A [Mon, 13 Jun 2022 08:24:15 +0000 (13:54 +0530)]
mgr/dashboard: improve edit site name action in rbd-mirroring

Resolves: rhbz#1891012

Fixes: https://tracker.ceph.com/issues/55896
Signed-off-by: Nizamudeen A <nia@redhat.com>
(cherry picked from commit 83f15e764421960a4fc1a2db99b22b8f76fff9f2)

2 years agomgr/dashboard: rbd force resync from front-end
Sarthak0702 [Thu, 2 Jun 2022 22:58:31 +0000 (04:28 +0530)]
mgr/dashboard: rbd force resync from front-end

Resolves: rhbz#1891012

Signed-off-by: Sarthak0702 <sarthak.dev.0702@gmail.com>
(cherry picked from commit 2b005395818700185e2c0f7713fae0c321471b2d)

2 years agomgr/dashboard: fix mirroring e2e and lint errors
Avan Thakkar [Wed, 8 Jun 2022 11:03:18 +0000 (16:33 +0530)]
mgr/dashboard: fix mirroring e2e and lint errors

Resolves: rhbz#1891012

Signed-off-by: Avan Thakkar <athakkar@redhat.com>
(cherry picked from commit e95ba57e49aad7187f4593d448c8112fb2e28812)

2 years agomgr/dashboard: add byte info, move state, add idle state
Pere Diaz Bou [Tue, 7 Jun 2022 17:57:44 +0000 (19:57 +0200)]
mgr/dashboard: add byte info, move state, add idle state

Resolves: rhbz#1891012

Idle substate added from snapshot mode.
Instead of seconds info we display bytes and entries info.

Signed-off-by: Pere Diaz Bou <pdiazbou@redhat.com>
(cherry picked from commit 453446104b8fe86b3f561b99a7b5a5838ee89478)

2 years agomgr/dashboard: move replaying images to Syncing tab
Pere Diaz Bou [Mon, 30 May 2022 14:10:11 +0000 (16:10 +0200)]
mgr/dashboard: move replaying images to Syncing tab

Resolves: rhbz#1891012

Images with the 'Replaying' state will be displayed in the Syncing tab.
syncTmpl was removed as it was unnecessary if the state is provided from
the backend.

Replaying images, in contrast to Syncing images, don't have a progress
percentage; nevertheless, we have an approximation of how much time is
left until the image is fully synced. Therefore, we can use
seconds_until_synced to represent the progress.

Signed-off-by: Pere Diaz Bou <pdiazbou@redhat.com>
(cherry picked from commit d225328363abc22f1f30f1a740e4d83faa0dfb8c)

2 years agomgr/dashboard: snapshot mirroring from dashboard
Pere Diaz Bou [Fri, 13 May 2022 15:15:33 +0000 (17:15 +0200)]
mgr/dashboard: snapshot mirroring from dashboard

Resolves: rhbz#1891012

Enable snapshot mirroring from Pools -> Image.

Also show the mirror-snapshot in images where the snapshot mode is
enabled.

When parsing images, if an image has the snapshot mode enabled, it will
try to run commands that don't work with that mode. The solution was to
not run those for now and to append the mode in the get call.

Fixes: https://tracker.ceph.com/issues/55648
Signed-off-by: Pere Diaz Bou <pdiazbou@redhat.com>
Signed-off-by: Nizamudeen A <nia@redhat.com>
Signed-off-by: Aashish Sharma <aasharma@redhat.com>
Signed-off-by: Avan Thakkar <athakkar@redhat.com>
(cherry picked from commit 489a385a95d6ffa5dbd4c5f9c53c1f80ea179142)
(cherry picked from commit 3ca9ca7e215562912daf00ca3cca40dbc5d560b5)

2 years agomgr/dashboard: expose image mirroring commands as endpoints
Pere Diaz Bou [Thu, 12 May 2022 18:29:01 +0000 (20:29 +0200)]
mgr/dashboard: expose image mirroring commands as endpoints

Resolves: rhbz#1891012

Expose:
  - enable/disable mirroring in image
  - promote/demote (primary and non-primary)
  - resync
  - snapshot mode:
    - mirror image snapshot (manual snapshot)
    - schedule

Fixes: https://tracker.ceph.com/issues/55645
Signed-off-by: Pere Diaz Bou <pdiazbou@redhat.com>
(cherry picked from commit 8bd89415fe340512f457acd58225934e9ed8e4e1)
(cherry picked from commit a1922767f984d9173e67c828ade54ac84cfe8f51)

2 years agomgr/dashboard: Get "Different Storage Class" metrics in Prometheus dashboard
Aashish Sharma [Tue, 19 Jul 2022 09:04:18 +0000 (14:34 +0530)]
mgr/dashboard: Get "Different Storage Class" metrics in Prometheus dashboard

Get metrics for the different storage classes (e.g. "HDDRule" and "MixedUse") of the "Raw Storage" for ceph VMs, so that Prometheus can scrape the data and display it in Grafana.

Resolves: rhbz#2095632

Fixes: https://tracker.ceph.com/issues/56625
Signed-off-by: Aashish Sharma <aasharma@redhat.com>
(cherry picked from commit 20705677acd5935981bd8510702b47e1b0678d27)
(cherry picked from commit 895536fb050e2767aa67373357569c8e4994693e)

2 years agorbd: remove incorrect use of std::includes()
Ilya Dryomov [Fri, 12 Aug 2022 09:10:45 +0000 (11:10 +0200)]
rbd: remove incorrect use of std::includes()

- std::includes() requires sorted ranges but command specs aren't
  sorted
- the purpose of std::includes() is to check whether the second range
  is a subsequence of the first range, but here the size of the second
  range is always equal to the size of the first range, which means
  that, had the ranges been sorted, std::includes() would have checked
  straight equality

Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
(cherry picked from commit 1483e2a20237ad65fa3188b910e4e19f3ace9030)

Resolves: rhbz#2117166

2 years agorbd: find_action() should sort actions first
Ilya Dryomov [Fri, 12 Aug 2022 09:10:45 +0000 (11:10 +0200)]
rbd: find_action() should sort actions first

The order in which objects with static storage duration in
different TUs are initialized is undefined.  If the compiler
chooses to initialize Shell::Action objects in action/Trash.cc
before Shell::Action objects in action/TrashPurgeSchedule.cc,
all "rbd trash purge schedule ..." commands get shadowed by
"rbd trash purge" command:

$ rbd trash purge schedule list
rbd: too many arguments

The confusing error arises because "rbd trash purge" takes a single
positional argument.  "schedule" gets interpreted as <pool-spec> and
"list" generates an error.

Fixes: https://tracker.ceph.com/issues/57107
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
(cherry picked from commit e0b428a87a48f2132c825ffde4629709a57b76d6)

Resolves: rhbz#2117166

2 years agodoc/cephadm: os tuning profile documentation
Adam King [Thu, 23 Jun 2022 19:46:22 +0000 (15:46 -0400)]
doc/cephadm: os tuning profile documentation

Signed-off-by: Adam King <adking@redhat.com>
(cherry picked from commit 39cc904ca6a032490868cb6d87d2ecd8e5895430)
(cherry picked from commit f7c31f361d7a76ce07cd389c55b75d61d96d2112)

Resolves: rhbz#2068335

2 years agomgr/cephadm: unit tests for tuned os profiles
Adam King [Thu, 23 Jun 2022 16:57:14 +0000 (12:57 -0400)]
mgr/cephadm: unit tests for tuned os profiles

Signed-off-by: Adam King <adking@redhat.com>
(cherry picked from commit 4f6ee4e20b74bc545b0c44db0208b94a54c648e1)
(cherry picked from commit 35dfeba771feb814f4e83f2624fad3b129d2432c)

Resolves: rhbz#2068335

2 years agomgr/cephadm: support for os tuning profiles
Adam King [Tue, 31 May 2022 20:22:49 +0000 (16:22 -0400)]
mgr/cephadm: support for os tuning profiles

Fixes: https://tracker.ceph.com/issues/55819
Signed-off-by: Adam King <adking@redhat.com>
(cherry picked from commit 91e6e80ce38031d5eba148efd7c4aaede09021ac)

Conflicts:
src/python-common/ceph/deployment/service_spec.py
(cherry picked from commit 26d5f9230b57635ea46aa9a3367bbae28ac1c389)

Resolves: rhbz#2068335

2 years agoosd: Handle oncommits and wait for future work items from mClock queue
Sridhar Seshasayee [Thu, 21 Jul 2022 16:01:55 +0000 (21:31 +0530)]
osd: Handle oncommits and wait for future work items from mClock queue

When a worker thread with the smallest thread index waits for future work
items from the mClock queue, oncommit callbacks are called. But after the
callback, the thread has to continue waiting instead of returning to the
ShardedThreadPool::shardedthreadpool_worker() loop. Returning results in
the threads with the smallest index across all shards busy-looping,
causing very high CPU utilization.

The fix involves reacquiring the shard_lock and waiting on sdata_cond
until notified or until the time period lapses. After this, the thread
with the smallest index repopulates the oncommit queue from the
context_queue if there were any additions.

Fixes: https://tracker.ceph.com/issues/56530
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
(cherry picked from commit 180a5a7bffd4d96c472cc39447717958dd51bbd9)
(cherry picked from commit 2207f8d7318ba219f071fa1fa2f1cb553b7e06c2)

Resolves: rhbz#2114612

2 years agomgr/dashboard: add required validation for frontend and monitor port
Avan Thakkar [Mon, 25 Jul 2022 10:34:00 +0000 (16:04 +0530)]
mgr/dashboard: add required validation for frontend and monitor port

Resolves: rhbz#2080916

Fixes: https://tracker.ceph.com/issues/56688
Signed-off-by: Avan Thakkar <athakkar@redhat.com>
(cherry picked from commit ea770fd858cae9a8b52c60b293f48cc4dbc925f8)

2 years agomgr/dashboard: Show error on creating service with duplicate service id
Aashish Sharma [Mon, 25 Jul 2022 08:18:39 +0000 (13:48 +0530)]
mgr/dashboard: Show error on creating service with duplicate service id

Resolves: rhbz#2064850

Fixes: https://tracker.ceph.com/issues/56689
Signed-off-by: Aashish Sharma <aasharma@redhat.com>
(cherry picked from commit 07cfd44193f6ebd552d13325858e8b5b5c131bfb)
(cherry picked from commit 3afb825a683bea24b691fc6887ba8923469ff440)

2 years agomgr/dashboard: ingress backend service should list all supported services
Avan Thakkar [Wed, 6 Jul 2022 09:24:26 +0000 (14:54 +0530)]
mgr/dashboard: ingress backend service should list all supported services

Resolves: rhbz#2102776

Fixes: https://tracker.ceph.com/issues/56478
Signed-off-by: Avan Thakkar <athakkar@redhat.com>
(cherry picked from commit 32118522cb71722c3cc20a1e1a9ca9ffdf7897e4)

2 years agomgr/dashboard: do not recommend throughput for ssd's only cluster
Nizamudeen A [Mon, 18 Jul 2022 05:38:28 +0000 (11:08 +0530)]
mgr/dashboard: do not recommend throughput for ssd's only cluster

This fixes a bug where the throughput option was recommended even if
only SSDs are present in the cluster.

Resolves: rhbz#2101680

Fixes: https://tracker.ceph.com/issues/56413
Signed-off-by: Nizamudeen A <nia@redhat.com>
(cherry picked from commit 0f6f79f1d9c716e13f662151459fa107f651f158)
(cherry picked from commit dc81fcd37c63c7916fc0f2f8109c022cd560e989)

2 years agomgr/dashboard: iops optimized option enabled
Pere Diaz Bou [Thu, 5 May 2022 14:34:36 +0000 (16:34 +0200)]
mgr/dashboard: iops optimized option enabled

Resolves: rhbz#2101680

Signed-off-by: Pere Diaz Bou <pdiazbou@redhat.com>
(cherry picked from commit 6fd7b91d3795c93a0e520ea9adc6ed34d2d91a89)

2 years agomgr/dashboard: test throughput deployment option
Pere Diaz Bou [Fri, 6 May 2022 08:48:32 +0000 (10:48 +0200)]
mgr/dashboard: test throughput deployment option

Resolves: rhbz#2101680

Signed-off-by: Pere Diaz Bou <pdiazbou@redhat.com>
(cherry picked from commit d58b9499208cd611897518009495f9e80515349d)
(cherry picked from commit 842ce6c33c91436bf400512b00f152d2e089a938)

2 years agomgr/dashboard: throughput optimized option enabled
Pere Diaz Bou [Tue, 3 May 2022 12:28:22 +0000 (14:28 +0200)]
mgr/dashboard: throughput optimized option enabled

Resolves: rhbz#2101680

Signed-off-by: Pere Diaz Bou <pdiazbou@redhat.com>
(cherry picked from commit f2474bcb767893dc750b8f1231b4583925f9bfb1)
(cherry picked from commit a00fa3df760edef52d46638b1de3bd03a4b851be)

2 years agomgr/dashboard: OSD Creation Workflow initial works
Nizamudeen A [Tue, 22 Feb 2022 10:21:03 +0000 (15:51 +0530)]
mgr/dashboard: OSD Creation Workflow initial works

Introducing the Cost/Capacity-Optimized deployment option
Used the bootstrap accordion
Adapted the e2e tests but did not write new ones for the deployment option

Resolves: rhbz#2101680

Fixes: https://tracker.ceph.com/issues/54340
Fixes: https://tracker.ceph.com/issues/54563
Signed-off-by: Nizamudeen A <nia@redhat.com>
Signed-off-by: Sarthak0702 <sarthak.0702@gmail.com>
(cherry picked from commit 6c2dcb740efb793a3f6ef593793151a34c19ca01)
(cherry picked from commit b350266d9758c29e559849c2527bbd3d5cbb0a22)

2 years agomgr/dashboard: retrieve disk status
Pere Diaz Bou [Fri, 4 Mar 2022 08:58:36 +0000 (09:58 +0100)]
mgr/dashboard: retrieve disk status

Resolves: rhbz#2101680
Signed-off-by: Pere Diaz Bou <pdiazbou@redhat.com>
(cherry picked from commit a1d1c853a5e4ff9a317591b99b75e005ccc862c9)
(cherry picked from commit 1ee11d090d11fdb438abe3ed5073e39daa43c2e8)

2 years agomgr/dashboard: Dashboard should display some helpful (error) message when the iframe...
Ngwa Sedrick Meh [Mon, 14 Feb 2022 05:17:47 +0000 (06:17 +0100)]
mgr/dashboard: Dashboard should display some helpful (error) message when the iframe-embedded Grafana dashboard failed to load

This commit adds checks for successful Grafana panel loads before displaying dashboards and informs the user if the request is blocked by the browser.

Resolves: rhbz#2056478
Fixes: https://tracker.ceph.com/issues/54206
Signed-off-by: Ngwa Sedrick Meh <nsedrick101@gmail.com>
(cherry picked from commit a4b66efb2a2139bd88a6a088af9bd5e079e46105)
(cherry picked from commit 30fa3dbe654471072352a5027a5b363143e3f8f7)

2 years agomgr/dashboard: branding: Update doc URLS
Nizamudeen A [Tue, 11 Aug 2020 11:19:36 +0000 (16:49 +0530)]
mgr/dashboard: branding: Update doc URLS

Updating all the doc URLs to point to the downstream.

Resolves: rhbz#2106618
Signed-off-by: Nizamudeen A <nia@redhat.com>
2 years agomgr/dashboard: branding: apply patternfly to all elements
Nizamudeen A [Sun, 1 Nov 2020 10:51:04 +0000 (16:21 +0530)]
mgr/dashboard: branding: apply patternfly to all elements

Resolves: rhbz#2106618
Signed-off-by: Nizamudeen A <nia@redhat.com>
(cherry picked from commit db7fec96c8d8a0ee5aa6939a8971c5eac661265c)

2 years agomgr/dashboard: branding: About modal box
Nizamudeen A [Fri, 6 Nov 2020 10:05:38 +0000 (15:35 +0530)]
mgr/dashboard: branding: About modal box

Resolves: rhbz#2106618
Signed-off-by: Nizamudeen A <nia@redhat.com>
2 years agomgr/dashboard: branding: Navigation and Notification bar
Nizamudeen A [Wed, 21 Oct 2020 18:34:04 +0000 (00:04 +0530)]
mgr/dashboard: branding: Navigation and Notification bar

Resolves: rhbz#2106618

Signed-off-by: Nizamudeen A <nia@redhat.com>
(cherry picked from commit cfd7da700db0a78c84293115550800ed8db2775f)

2 years agomgr/dashboard: branding: login page pacific
Nizamudeen A [Mon, 31 Aug 2020 11:09:50 +0000 (16:39 +0530)]
mgr/dashboard: branding: login page pacific

Resolves: rhbz#2106618
Signed-off-by: Nizamudeen A <nia@redhat.com>
(cherry picked from commit deffbb9454a79c1b453ef3aa74ec9464b199dcd1)

2 years agorgw/dbstore: Fix build errors on centos9
Soumya Koduri [Wed, 27 Apr 2022 09:24:46 +0000 (14:54 +0530)]
rgw/dbstore: Fix build errors on centos9

Related: rhbz#2109960
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
(cherry picked from commit fc1008a15bc9f7c37330e613f7c81680bc5605fc)

2 years ago17.2.3 47344/head v17.2.3
Ceph Release Team [Thu, 28 Jul 2022 21:52:12 +0000 (21:52 +0000)]
17.2.3

Signed-off-by: Ceph Release Team <ceph-maintainers@ceph.io>
2 years agolibcephsqlite: ceph-mgr crashes when compiled with gcc12
Ganesh Maharaj Mahalingam [Mon, 11 Apr 2022 17:15:43 +0000 (10:15 -0700)]
libcephsqlite: ceph-mgr crashes when compiled with gcc12

The regex in libcephsqlite, when compiled with GCC 12, treats '-' as a
range operator, resulting in the following error:
"Invalid start of '[x-x]' range in regular expression"

Fixes: https://tracker.ceph.com/issues/55304
Signed-off-by: Ganesh Maharaj Mahalingam <ganesh.mahalingam@intel.com>
(cherry picked from commit ac043a09c5ffb4b434b8644920004b3d5b7f9d8c)
(cherry picked from commit 5a10875d4570356afa3af024f63ac229caaa765b)

2 years ago17.2.2 v17.2.2
Jenkins Build Slave User [Thu, 21 Jul 2022 17:29:33 +0000 (17:29 +0000)]
17.2.2

2 years agorgw: s3website check for bucket before retargeting
Seena Fallah [Fri, 1 Jul 2022 21:19:40 +0000 (23:19 +0200)]
rgw: s3website check for bucket before retargeting

On requesting the s3website API without a bucket name, it will crash because s->bucket is null.

Fixes: https://tracker.ceph.com/issues/56281
Signed-off-by: Seena Fallah <seenafallah@gmail.com>
(cherry picked from commit 933dbabb3a2a43fd016bc61cc0ee5e27f7ad32e7)
(cherry picked from commit e2b923600c28626b921107c13e2c12d691eed3f1)

2 years agoqa: validate subvolume discover on upgrade
Kotresh HR [Fri, 4 Feb 2022 09:58:39 +0000 (15:28 +0530)]
qa: validate subvolume discover on upgrade

Validate subvolume discover on upgrade from a
legacy subvolume to v1. The handcrafted ".meta"
file at the legacy subvolume root should not be
used for any subvolume APIs like getpath and
authorize.

Signed-off-by: Kotresh HR <khiremat@redhat.com>
(cherry picked from commit fcc118500c545fe6018cd3f2742127b92c657def)
(cherry picked from commit 7e6453d32f78e4a02771396c96dcba472f275135)

2 years agomgr/volumes: V2 Fix for test_subvolume_retain_snapshot_invalid_recreate
Kotresh HR [Thu, 9 Jun 2022 08:00:59 +0000 (13:30 +0530)]
mgr/volumes: V2 Fix for test_subvolume_retain_snapshot_invalid_recreate

Signed-off-by: Kotresh HR <khiremat@redhat.com>
(cherry picked from commit ac791879096c8a9e5bda4eef69f1269eee819932)

2 years agomgr/volumes: Fix subvolume discover during upgrade
Kotresh HR [Fri, 4 Feb 2022 09:25:03 +0000 (14:55 +0530)]
mgr/volumes: Fix subvolume discover during upgrade

Fixes subvolume discover to use the correct
metadata file after an upgrade from a legacy
subvolume to v1. The fix makes sure it doesn't
use the handcrafted metadata file placed in the
root of the legacy subvolume.

Co-authored-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@cern.ch>
Co-authored-by: Dan van der Ster <daniel.vanderster@cern.ch>
Co-authored-by: Ramana Raja <rraja@redhat.com>
Signed-off-by: Kotresh HR <khiremat@redhat.com>
(cherry picked from commit 7eba9cab6cfb9a13a84062177d7a0fa228311e13)
(cherry picked from commit 50f4ac7b75185c89ede210b2c975672e8bdd73aa)

3 years ago17.2.1 46827/head v17.2.1
Ceph Release Team [Thu, 23 Jun 2022 14:41:35 +0000 (14:41 +0000)]
17.2.1

Signed-off-by: Ceph Release Team <ceph-maintainers@ceph.io>
3 years agotools: ceph-objectstore-tool is able to trim pg log dups' entries.
Radosław Zarzyński [Sat, 11 Jun 2022 19:29:29 +0000 (21:29 +0200)]
tools: ceph-objectstore-tool is able to trim pg log dups' entries.

The main assumption is trimming just dups doesn't need any update
to the corresponding pg_info_t.

Testing:

1. cluster without the autoscaler
```
rzarz@ubulap:~/dev/ceph/build$ MON=1 MGR=1 OSD=3 MGR=1 MDS=0 ../src/vstart.sh -l -b -n -o "osd_pg_log_dups_tracked=3000000" -o "osd_pool_default_pg_autoscale_mode=off"
```

2. 8 PGs in the testing pool.
```
rzarz@ubulap:~/dev/ceph/build$ bin/ceph osd pool create test-pool 8 8
```

3. Provisioning dups with rados bench
```
bin/rados bench -p test-pool 300 write -b 4096  --no-cleanup
...
Total time run:         300.034
Total writes made:      103413
Write size:             4096
Object size:            4096
Bandwidth (MB/sec):     1.34637
Stddev Bandwidth:       0.589071
Max bandwidth (MB/sec): 2.4375
Min bandwidth (MB/sec): 0.902344
Average IOPS:           344
Stddev IOPS:            150.802
Max IOPS:               624
Min IOPS:               231
Average Latency(s):     0.0464151
Stddev Latency(s):      0.0183627
Max latency(s):         0.0928424
Min latency(s):         0.0131932
```

4. Killing osd.0
```
rzarz@ubulap:~/dev/ceph/build$ kill 2572129 # pid of osd.0
```

5. Listing PGs on osd.0 and calculating number of pg log's entries and
dups:

```
rzarz@ubulap:~/dev/ceph/build$ bin/ceph-objectstore-tool --data-path dev/osd0 --op list-pgs --pgid 2.c > osd0_pgs.txt
rzarz@ubulap:~/dev/ceph/build$ for pgid in `cat osd0_pgs.txt`; do echo $pgid; bin/ceph-objectstore-tool --data-path dev/osd0 --op log --pgid $pgid | jq '(.pg_log_t.log|length),(.pg_log_t.dups|length)'; done
2.7
10020
3100
2.6
10100
3000
2.3
10012
2800
2.1
10049
2900
2.2
10057
2700
2.0
10027
2900
2.5
10077
2700
2.4
10072
2900
1.0
97
0
```

6. Trimming dups
```
rzarz@ubulap:~/dev/ceph/build$ CEPH_ARGS="--osd_pg_log_dups_tracked 2500 --osd_pg_log_trim_max=100" bin/ceph-objectstore-tool --data-path dev/osd0 --op trim-pg-log-dups --pgid 2.7
max_dup_entries=2500 max_chunk_size=100
Removing keys dup_0000000020.00000000000000000001 - dup_0000000020.00000000000000000100
Removing keys dup_0000000020.00000000000000000101 - dup_0000000020.00000000000000000200
Removing keys dup_0000000020.00000000000000000201 - dup_0000000020.00000000000000000300
Removing keys dup_0000000020.00000000000000000301 - dup_0000000020.00000000000000000400
Removing keys dup_0000000020.00000000000000000401 - dup_0000000020.00000000000000000500
Removing keys dup_0000000020.00000000000000000501 - dup_0000000020.00000000000000000600
Finished trimming, now compacting...
Finished trimming pg log dups
```

7. Checking number of pg log's entries and dups
```
rzarz@ubulap:~/dev/ceph/build$ for pgid in `cat osd0_pgs.txt`; do echo $pgid; bin/ceph-objectstore-tool --data-path dev/osd0 --op log --pgid $pgid | jq '(.pg_log_t.log|length),(.pg_log_t.dups|length)'; done
2.7
10020
2500
2.6
10100
3000
2.3
10012
2800
2.1
10049
2900
2.2
10057
2700
2.0
10027
2900
2.5
10077
2700
2.4
10072
2900
1.0
97
0
```

Fixes: https://tracker.ceph.com/issues/53729
Signed-off-by: Radosław Zarzyński <rzarzyns@redhat.com>
(cherry picked from commit a2190f901abf2fed20c65e59f53b38c10545cb5a)
(cherry picked from commit 3d3193fc6d71e178af0a288e010c308d61767562)

3 years agoMerge pull request #46577 from cbodley/wip-quincy-qa-smoke-s3tests
Yuri Weinstein [Mon, 13 Jun 2022 16:01:03 +0000 (09:01 -0700)]
Merge pull request #46577 from cbodley/wip-quincy-qa-smoke-s3tests

qa/smoke: use ceph-quincy branch of s3tests

Reviewed-by: Yuri Weinstein <yweinste@redhat.com>
3 years agoMerge pull request #46607 from rzarzynski/wip-55982-quincy
Yuri Weinstein [Fri, 10 Jun 2022 17:20:42 +0000 (10:20 -0700)]
Merge pull request #46607 from rzarzynski/wip-55982-quincy

quincy: osd: log the number of 'dups' entries in a PG Log

Reviewed-by: Laura Flores <lflores@redhat.com>
3 years agoMerge pull request #46605 from rzarzynski/wip-55981-quincy
Yuri Weinstein [Fri, 10 Jun 2022 17:20:03 +0000 (10:20 -0700)]
Merge pull request #46605 from rzarzynski/wip-55981-quincy

quincy: revert backport of #45529

Reviewed-by: Laura Flores <lflores@redhat.com>
3 years agoosd: log the number of 'dups' entries in a PG Log 46607/head
Radoslaw Zarzynski [Thu, 9 Jun 2022 18:44:10 +0000 (18:44 +0000)]
osd: log the number of 'dups' entries in a PG Log

We really want to have the ability to know how many
entries `PGLog::IndexedLog::dups` has inside.
The current ways are either invasive (stopping an OSD)
or indirect (examination of `dump_mempools`).

The code comes from Nitzan Mordechai (part of
ede37edd79a9d5560dfb417ec176327edfc0e4a3).

Fixes: https://tracker.ceph.com/issues/55982
Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
(cherry picked from commit 8f1c8a7309976098644bb978d2c1095089522846)

3 years agoRevert "tools/ceph_objectstore_took: Add duplicate entry trimming" 46605/head
Radoslaw Zarzynski [Thu, 9 Jun 2022 20:10:31 +0000 (20:10 +0000)]
Revert "tools/ceph_objectstore_took: Add duplicate entry trimming"

This reverts commit 5245fb33dd02e53cf0ef5d2f7be5904b6fbe63ce.

Although the chunking in off-line `dups` trimming (via COT) seems
fine, the `ceph-objectstore-tool` is a client of `trim()` of
`PGLog::IndexedLog` which means than a partial revert is not
possible without extensive changes.

The backport ticket is: https://tracker.ceph.com/issues/55981

Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
3 years agoRevert "osd/PGLog.cc: Trim duplicates by number of entries"
Radoslaw Zarzynski [Thu, 9 Jun 2022 18:22:45 +0000 (18:22 +0000)]
Revert "osd/PGLog.cc: Trim duplicates by number of entries"

This reverts commit 3ff0df6a28a1d9e197bdba40be7126fed8a14ae9,
which is the in-OSD part of the fix for the accumulation of `dup`
entries in a PG Log. Brainstorming it has raised questions about
the OSD's behaviour during an upgrade if there are tons of dups
in the log. What must be double-checked before bringing it back
is ensuring we chunk the deletions properly so as not to cause
OOMs / stalls in, for example, RocksDB.

The backport ticket is: https://tracker.ceph.com/issues/55981

Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
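
The chunking concern above (deleting a huge number of dup entries without
any single oversized transaction) can be sketched in Python. This is an
illustrative model, not Ceph code; `trim_dups_in_chunks` and the batch
size are hypothetical names chosen for the sketch.

```python
def trim_dups_in_chunks(dups, keep, chunk_size=1000):
    """Delete all but the newest `keep` dup entries, in bounded batches.

    Returns the list of deletion batches, modeling one transaction per
    chunk so that no single transaction (e.g. a RocksDB write batch)
    grows without bound, plus the surviving entries.
    """
    to_delete = len(dups) - keep
    batches = []
    while to_delete > 0:
        n = min(chunk_size, to_delete)
        batches.append(dups[:n])  # the oldest entries are dropped first
        dups = dups[n:]
        to_delete -= n
    return batches, dups

# 10,050 dups trimmed down to 50, 1000 at a time -> 10 bounded batches
batches, remaining = trim_dups_in_chunks(list(range(10050)), keep=50)
```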
3 years agoqa/smoke: use ceph-quincy branch of s3tests 46577/head
Casey Bodley [Wed, 8 Jun 2022 18:46:57 +0000 (14:46 -0400)]
qa/smoke: use ceph-quincy branch of s3tests

Signed-off-by: Casey Bodley <cbodley@redhat.com>
3 years agoMerge pull request #46544 from yaarith/rook-telemetry-release-notes-quincy
Neha Ojha [Wed, 8 Jun 2022 14:34:09 +0000 (07:34 -0700)]
Merge pull request #46544 from yaarith/rook-telemetry-release-notes-quincy

quincy: PendingReleaseNotes: add a note about Rook telemetry

Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Laura Flores <lflores@redhat.com>
3 years agoMerge pull request #46508 from nmshelke/wip-55797-quincy
Yuri Weinstein [Wed, 8 Jun 2022 14:08:40 +0000 (07:08 -0700)]
Merge pull request #46508 from nmshelke/wip-55797-quincy

quincy: mgr/volumes: set, get, list and remove metadata of snapshot

Reviewed-by: Venky Shankar <vshankar@redhat.com>
3 years agoMerge pull request #46449 from zdover23/wip-doc-2022-05-31-backport-46430-quincy
zdover23 [Tue, 7 Jun 2022 18:19:17 +0000 (04:19 +1000)]
Merge pull request #46449 from zdover23/wip-doc-2022-05-31-backport-46430-quincy

quincy: doc/start: update "memory" in hardware-recs.rst

Reviewed-by: Mark Nelson <mnelson@redhat.com>
3 years agoMerge pull request #46492 from kalebskeithley/wip-55810-quincy
Yuri Weinstein [Tue, 7 Jun 2022 14:55:18 +0000 (07:55 -0700)]
Merge pull request #46492 from kalebskeithley/wip-55810-quincy

quincy: rocksdb: build with rocksdb-7.y.z

Reviewed-by: Laura Flores <lflores@redhat.com>
3 years agoMerge pull request #46497 from lxbsz/wip-55658
Yuri Weinstein [Tue, 7 Jun 2022 13:57:30 +0000 (06:57 -0700)]
Merge pull request #46497 from lxbsz/wip-55658

quincy: mds: trigger to flush the mdlog in handle_find_ino()

Reviewed-by: Venky Shankar <vshankar@redhat.com>
3 years agoMerge pull request #46496 from lxbsz/wip-55661
Yuri Weinstein [Tue, 7 Jun 2022 13:56:59 +0000 (06:56 -0700)]
Merge pull request #46496 from lxbsz/wip-55661

quincy: qa: add filesystem/file sync stuck test support

Reviewed-by: Venky Shankar <vshankar@redhat.com>
3 years agoMerge pull request #46476 from lxbsz/wip-55447
Yuri Weinstein [Tue, 7 Jun 2022 13:54:30 +0000 (06:54 -0700)]
Merge pull request #46476 from lxbsz/wip-55447

quincy: client: add option to disable collecting and sending metrics

Reviewed-by: Venky Shankar <vshankar@redhat.com>
3 years agoMerge pull request #46175 from cfsnyder/wip-55441-quincy
Yuri Weinstein [Tue, 7 Jun 2022 13:53:46 +0000 (06:53 -0700)]
Merge pull request #46175 from cfsnyder/wip-55441-quincy

quincy: os/bluestore: set upper and lower bounds on rocksdb omap iterators

Reviewed-by: Neha Ojha <nojha@redhat.com>
3 years agoMerge pull request #46542 from idryomov/wip-rbd-codeowners-quincy
Ilya Dryomov [Tue, 7 Jun 2022 09:49:44 +0000 (11:49 +0200)]
Merge pull request #46542 from idryomov/wip-rbd-codeowners-quincy

quincy: CODEOWNERS: add RBD team

Reviewed-by: Deepika Upadhyay <dupadhya@redhat.com>
3 years agoPendingReleaseNotes: add a note about Rook telemetry 46544/head
Yaarit Hatuka [Mon, 6 Jun 2022 19:34:19 +0000 (19:34 +0000)]
PendingReleaseNotes: add a note about Rook telemetry

Signed-off-by: Yaarit Hatuka <yaarit@redhat.com>
(cherry picked from commit 1742d1b36555684aa7a75cbaa9f128131af9b6a7)

Conflicts:
Needed to reorder notes to appear in the 17.2.1 section.

3 years agoCODEOWNERS: add RBD team 46542/head
Ilya Dryomov [Wed, 1 Jun 2022 07:22:15 +0000 (09:22 +0200)]
CODEOWNERS: add RBD team

Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
(cherry picked from commit 00a44f1c6b3c5270f1c9d75cf6dcac3f0d470fa9)

3 years agoMerge pull request #46519 from neha-ojha/wip-46415-quincy
Neha Ojha [Mon, 6 Jun 2022 16:38:54 +0000 (09:38 -0700)]
Merge pull request #46519 from neha-ojha/wip-46415-quincy

quincy: .github/CODEOWNERS: tag core devs on core PRs

Reviewed-by: Laura Flores <lflores@redhat.com>
3 years agoMerge pull request #46469 from gregsfortytwo/wip-55746-quincy
Yuri Weinstein [Mon, 6 Jun 2022 14:58:10 +0000 (07:58 -0700)]
Merge pull request #46469 from gregsfortytwo/wip-55746-quincy

quincy: Implement CIDR blocklisting

Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Anthony D'Atri <anthony.datri@gmail.com>
3 years agoMerge pull request #46468 from ljflores/wip-55811-quincy
Yuri Weinstein [Mon, 6 Jun 2022 14:56:53 +0000 (07:56 -0700)]
Merge pull request #46468 from ljflores/wip-55811-quincy

quincy: os/bluestore: turn `bluestore zero block detection` off by default

Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Ilya Dryomov <idryomov@redhat.com>
3 years agoMerge pull request #46418 from ronen-fr/wip-rf-46320-quincy
Yuri Weinstein [Mon, 6 Jun 2022 14:55:44 +0000 (07:55 -0700)]
Merge pull request #46418 from ronen-fr/wip-rf-46320-quincy

quincy: osd/scrub: restart snap trimming after a failed scrub

Reviewed-by: Neha Ojha <nojha@redhat.com>
3 years ago.github/CODEOWNERS: tag core devs on core PRs 46519/head
Neha Ojha [Fri, 27 May 2022 19:34:57 +0000 (19:34 +0000)]
.github/CODEOWNERS: tag core devs on core PRs

Start with everything that is present under core in .github/labeler.yml.

Signed-off-by: Neha Ojha <nojha@redhat.com>
(cherry picked from commit 8303c6b911154ee936adb46e7c3491b174d22df8)

3 years agoMerge pull request #46486 from yaarith/wip-55816-quincy
Neha Ojha [Fri, 3 Jun 2022 20:15:12 +0000 (13:15 -0700)]
Merge pull request #46486 from yaarith/wip-55816-quincy

quincy: mgr/telemetry: add Rook data

Reviewed-by: Laura Flores <lflores@redhat.com>
3 years agoMerge pull request #46453 from rhcs-dashboard/wip-55115-quincy
Ernesto Puerta [Fri, 3 Jun 2022 14:49:02 +0000 (16:49 +0200)]
Merge pull request #46453 from rhcs-dashboard/wip-55115-quincy

quincy: mgr/dashboard:  don't log 3xx as errors

Reviewed-by: nSedrickm <NOT@FOUND>
Reviewed-by: Pere Diaz Bou <pdiazbou@redhat.com>
3 years agoqa: set, get, list and remove custom metadata for snapshot 46508/head
Nikhilkumar Shelke [Thu, 28 Apr 2022 18:38:05 +0000 (00:08 +0530)]
qa: set, get, list and remove custom metadata for snapshot

The following tests are added:
1. Set custom metadata for a subvolume snapshot.
2. Set custom metadata for a subvolume snapshot (idempotency).
3. Get custom metadata for a specified key.
4. Get custom metadata if the specified key does not exist (expecting error ENOENT).
5. Get custom metadata if no key-value pair has been added, i.e. the section does not exist (expecting error ENOENT).
6. Update the value for an existing key in custom metadata.
7. List custom metadata of a subvolume snapshot.
8. List custom metadata of a subvolume snapshot if no key-value pair has been added (expecting an empty json/dictionary).
9. Remove custom metadata for a specified key.
10. Remove custom metadata if the specified key does not exist (expecting error ENOENT).
11. Remove custom metadata if no key-value pair has been added, i.e. the section does not exist (expecting error ENOENT).
12. Remove custom metadata with the --force option.
13. Remove custom metadata with the --force option if the specified key does not exist (expecting the command to succeed because of the '--force' option).
14. Remove a subvolume snapshot and verify whether its metadata is removed.

Fixes: https://tracker.ceph.com/issues/55401
Signed-off-by: Nikhilkumar Shelke <nshelke@redhat.com>
(cherry picked from commit 6fd28cc9d67b96ba87f0dffbf41d626229e904e3)

3 years agodocs: set, get, list and remove custom metadata for snapshot
Nikhilkumar Shelke [Wed, 27 Apr 2022 16:41:07 +0000 (22:11 +0530)]
docs: set, get, list and remove custom metadata for snapshot

Set custom metadata on the snapshot as a key-value pair using
    $ ceph fs subvolume snapshot metadata set <vol_name> <subvol_name> <snap_name> <key_name> <value> [--group_name <subvol_group_name>]
    note: If the key_name already exists then the old value will get replaced by the new value.
    note: The key_name and value should be a string of ASCII characters (as specified in python's string.printable). The key_name is case-insensitive and always stored in lower case.
    note: Custom metadata on a snapshot is not preserved when snapshotting the subvolume, and hence is also not preserved when cloning the subvolume snapshot.

Get custom metadata set on the snapshot using the metadata key::
    $ ceph fs subvolume snapshot metadata get <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>]

List custom metadata (key-value pairs) set on the snapshot using::
    $ ceph fs subvolume snapshot metadata ls <vol_name> <subvol_name> <snap_name> [--group_name <subvol_group_name>]

Remove custom metadata set on the snapshot using the metadata key::
    $ ceph fs subvolume snapshot metadata rm <vol_name> <subvol_name> <snap_name> <key_name> [--group_name <subvol_group_name>] [--force]
    Using the '--force' flag allows the command to succeed when it would otherwise fail because the metadata key does not exist.

Fixes: https://tracker.ceph.com/issues/55401
Signed-off-by: Nikhilkumar Shelke <nshelke@redhat.com>
(cherry picked from commit 59a0cbc14bf2832080e983729de5c462ddc70bb3)
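
As a rough model of the semantics documented above (keys are
case-insensitive and stored lower-cased, set replaces an existing value,
get/rm on a missing key fail with ENOENT, and `--force` suppresses that
failure), here is a Python sketch. It is illustrative only, not the
mgr/volumes implementation; the class and method names are made up for
the example.

```python
import errno

class SnapshotMetadata:
    """Toy model of subvolume snapshot metadata semantics."""

    def __init__(self):
        self._data = {}

    def set(self, key, value):
        # key_name is case-insensitive and stored in lower case;
        # an existing value is replaced by the new one.
        self._data[key.lower()] = value

    def get(self, key):
        try:
            return self._data[key.lower()]
        except KeyError:
            raise OSError(errno.ENOENT, "no such metadata key")

    def ls(self):
        return dict(self._data)

    def rm(self, key, force=False):
        # with force=True a missing key is silently ignored
        if self._data.pop(key.lower(), None) is None and not force:
            raise OSError(errno.ENOENT, "no such metadata key")

md = SnapshotMetadata()
md.set("PVC", "my-claim")
assert md.get("pvc") == "my-claim"   # case-insensitive lookup
md.rm("missing", force=True)         # --force: no error on a missing key
```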

3 years agomgr/volumes: set, get, list and remove custom metadata for snapshot
Nikhilkumar Shelke [Wed, 27 Apr 2022 16:20:33 +0000 (21:50 +0530)]
mgr/volumes: set, get, list and remove custom metadata for snapshot

If CephFS in ODF is configured in external mode, users would like to
use subvolume snapshot metadata to store some OpenShift-specific
information, such as the PVC/PV/namespace the subvolumes/snapshots
are coming from. For RBD volumes, it is possible to add metadata to
the images using the 'rbd image-meta' command. However, this feature
is not available for CephFS volumes. We'd like to request this
capability.

Adding following commands:
    ceph fs subvolume snapshot metadata set <vol_name> <sub_name> <snap_name> <key_name> <value> [<group_name>]
    ceph fs subvolume snapshot metadata get <vol_name> <sub_name> <snap_name> <key_name> [<group_name>]
    ceph fs subvolume snapshot metadata ls <vol_name> <sub_name> <snap_name> [<group_name>]
    ceph fs subvolume snapshot metadata rm <vol_name> <sub_name> <snap_name> <key_name> [<group_name>] [--force]

Fixes: https://tracker.ceph.com/issues/55401
Signed-off-by: Nikhilkumar Shelke <nshelke@redhat.com>
(cherry picked from commit 559222cfe8d552cd2d7aef7361de4140820ae74a)

3 years agoMerge pull request #46503 from rhcs-dashboard/wip-55831-quincy
Ernesto Puerta [Thu, 2 Jun 2022 17:06:51 +0000 (19:06 +0200)]
Merge pull request #46503 from rhcs-dashboard/wip-55831-quincy

quincy: qa: fix teuthology master branch ref

Reviewed-by: Avan Thakkar <athakkar@redhat.com>
Reviewed-by: Laura Flores <lflores@redhat.com>
Reviewed-by: neha-ojha <NOT@FOUND>
3 years agoqa: fix teuthology master branch ref 46503/head
Ernesto Puerta [Thu, 2 Jun 2022 10:27:02 +0000 (12:27 +0200)]
qa: fix teuthology master branch ref

Fixes: https://tracker.ceph.com/issues/55826
Signed-off-by: Ernesto Puerta <epuertat@redhat.com>
(cherry picked from commit e91773df68c286266a2855e69bf542b4c73379d9)

3 years agomds: trigger to flush the mdlog in handle_find_ino() 46497/head
Xiubo Li [Tue, 19 Apr 2022 06:21:49 +0000 (14:21 +0800)]
mds: trigger to flush the mdlog in handle_find_ino()

If the CInode was just created via openc in the current auth MDS,
but the client sends a getattr request to another replica MDS, the
path will resolve to '#INODE-NUMBER' only, because the CInode hasn't
been linked yet. The replica MDS will keep retrying until the auth
MDS flushes the mdlog and C_MDS_openc_finish and link_primary_inode
are called, up to 5 seconds later.

Fixes: https://tracker.ceph.com/issues/55240
Signed-off-by: Xiubo Li <xiubli@redhat.com>
(cherry picked from commit 5d6dd5d1acf94eade4a2c0f48777c95473d454ae)

3 years agoqa: add file/filesystem sync crash test case 46496/head
Xiubo Li [Thu, 14 Apr 2022 03:50:19 +0000 (11:50 +0800)]
qa: add file/filesystem sync crash test case

This is one test case for the possible kernel crash bug when doing
the file sync or filesystem sync.

Fixes: https://tracker.ceph.com/issues/55329
Signed-off-by: Xiubo Li <xiubli@redhat.com>
(cherry picked from commit cd3e903b0c525c7946794b5fbf002da482b3b4bc)

3 years agoqa: add file sync stuck test support
Xiubo Li [Tue, 12 Apr 2022 11:40:02 +0000 (19:40 +0800)]
qa: add file sync stuck test support

This will test the file sync of a directory, which may be stuck for
up to 5 seconds. This is because the related code waits for all the
unsafe requests to get a safe reply from the MDSes, but the MDSes
consider it unnecessary to flush the mdlog immediately after an
early reply, and the mdlog is only flushed every 5 seconds in the
tick thread.

This should have been fixed in kclient and libcephfs by triggering
an mdlog flush before waiting for the requests' safe replies.

Fixes: https://tracker.ceph.com/issues/55283
Signed-off-by: Xiubo Li <xiubli@redhat.com>
(cherry picked from commit 3db3b4e2a4b853192c5b30c9594947ba45f96e03)
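
The timing issue described above can be modeled with a toy calculation:
an unsafe request only becomes safe once the mdlog is flushed, and
without an explicit flush the client waits for the next 5-second tick.
The names and numbers below are illustrative, not the actual client/MDS
code.

```python
TICK_INTERVAL = 5  # the mdlog is flushed every 5 seconds by the tick thread

def sync_wait_time(request_time, explicit_flush=False):
    """Model how long fsync() blocks waiting for a safe reply.

    Without an explicit mdlog flush the safe reply only arrives at
    the next periodic tick; with the fix the client triggers a flush
    and the wait is effectively immediate.
    """
    if explicit_flush:
        return 0
    next_tick = (request_time // TICK_INTERVAL + 1) * TICK_INTERVAL
    return next_tick - request_time

# a request issued 2s after the last tick blocks for 3s without the fix
assert sync_wait_time(2) == 3
assert sync_wait_time(2, explicit_flush=True) == 0
```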

3 years agoqa: add filesystem sync stuck test support
Xiubo Li [Tue, 12 Apr 2022 04:37:13 +0000 (12:37 +0800)]
qa: add filesystem sync stuck test support

This will test the sync of the filesystem, which may be stuck for
up to 5 seconds. This is because the related code waits for all the
unsafe requests to get a safe reply from the MDSes, but the MDSes
consider it unnecessary to flush the mdlog immediately after an
early reply, and the mdlog is only flushed every 5 seconds in the
tick thread.

This should have been fixed in kclient and libcephfs by triggering
an mdlog flush before waiting for the requests' safe replies.

Fixes: https://tracker.ceph.com/issues/55283
Signed-off-by: Xiubo Li <xiubli@redhat.com>
(cherry picked from commit b6fc5480f6ba6352fa72062e1376d0dd6b9074cd)

3 years agoMerge pull request #46491 from ceph/quincy-nobranch
David Galloway [Wed, 1 Jun 2022 20:19:49 +0000 (16:19 -0400)]
Merge pull request #46491 from ceph/quincy-nobranch

quincy: qa: remove .teuthology_branch file

3 years agorocksdb: build with rocksdb-7.y.z 46492/head
Kaleb S. KEITHLEY [Mon, 23 May 2022 11:41:26 +0000 (07:41 -0400)]
rocksdb: build with rocksdb-7.y.z

RocksDB 7, specifically 7.2.2 has landed in Fedora 37/rawhide.

https://tracker.ceph.com/issues/55730

Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
(cherry picked from commit eea10281e6f4078f261b05b6bd9c9c9aec129201)

3 years agoqa: remove .teuthology_branch file 46491/head
Jeff Layton [Wed, 1 Jun 2022 18:26:33 +0000 (14:26 -0400)]
qa: remove .teuthology_branch file

This was originally added to help support the py2 -> py3 conversion.
That's long since complete so we should be able to just remove this file
now.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
(cherry picked from commit 81430de9b70be16a439bf2445f3345b83035a861)

3 years agoosd/scrub: restart snap trimming after a failed scrub 46418/head
Ronen Friedman [Tue, 17 May 2022 16:13:59 +0000 (16:13 +0000)]
osd/scrub: restart snap trimming after a failed scrub

A followup to PR#45640.
In PR#45640 snap trimming was restarted (if blocked) after all
successful scrubs, and after most scrub failures. Still, a few
failure scenarios did not handle snaptrim restart correctly.

The current PR cleans up and fixes the interaction between
scrub initiation/termination (for whatever cause) and snap
trimming.

Signed-off-by: Ronen Friedman <rfriedma@redhat.com>
(cherry picked from commit 290e744a9b6c64f3da805056625b963f0eedaf33)

3 years agomgr/telemetry: add Rook data 46486/head
Yaarit Hatuka [Wed, 1 Jun 2022 04:46:17 +0000 (04:46 +0000)]
mgr/telemetry: add Rook data

Add the first Rook data collection to telemetry's basic channel.

We choose to nag with this collection since we wish to know the volume
of Rook deployments in the wild.

The next Rook collections should have consecutive numbers (basic_rook_v02,
basic_rook_v03, ...).

See tracker below for more details.

Fixes: https://tracker.ceph.com/issues/55740
Signed-off-by: Yaarit Hatuka <yaarit@redhat.com>
(cherry picked from commit 63f5dcdb520ea4f5e0400e9c6d9f0da29998e437)
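
The naming convention mentioned above (consecutive `basic_rook_vNN`
collection names) could be expressed as a small helper. This is only an
illustration of the convention; `next_rook_collection` is a hypothetical
name, not part of the telemetry module.

```python
import re

def next_rook_collection(name):
    """Given a collection name like 'basic_rook_v02', return the next one."""
    m = re.fullmatch(r"(basic_rook_v)(\d+)", name)
    if not m:
        raise ValueError(f"unexpected collection name: {name}")
    prefix, num = m.group(1), int(m.group(2))
    return f"{prefix}{num + 1:02d}"

assert next_rook_collection("basic_rook_v01") == "basic_rook_v02"
```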

3 years agoMerge pull request #46446 from rhcs-dashboard/wip-54585-quincy
Ernesto Puerta [Wed, 1 Jun 2022 16:43:21 +0000 (18:43 +0200)]
Merge pull request #46446 from rhcs-dashboard/wip-54585-quincy

quincy: mgr/dashboard: fix columns in host table with NaN Undefined

Reviewed-by: Avan Thakkar <athakkar@redhat.com>
Reviewed-by: nSedrickm <NOT@FOUND>
3 years agoMerge pull request #46455 from rhcs-dashboard/wip-55589-quincy
Ernesto Puerta [Wed, 1 Jun 2022 16:35:28 +0000 (18:35 +0200)]
Merge pull request #46455 from rhcs-dashboard/wip-55589-quincy

quincy: mgr/dashboard: WDC multipath bug fixes

Reviewed-by: Avan Thakkar <athakkar@redhat.com>
Reviewed-by: Ernesto Puerta <epuertat@redhat.com>
3 years agoPendingReleaseNotes: add note about `bluestore_zero_block_detection` config option 46468/head
Laura Flores [Fri, 27 May 2022 18:28:19 +0000 (13:28 -0500)]
PendingReleaseNotes: add note about `bluestore_zero_block_detection` config option

Signed-off-by: Laura Flores <lflores@redhat.com>
(cherry picked from commit fce2a7782d6bddada9258e5028e6459e72b4381e)

Conflicts:
PendingReleaseNotes
- The cherry-pick applied cleanly, but some extra release notes about >=18.0.0
  were still added, which I had to remove.

3 years agoqa: drop get_blocklisted_instances in TestMirroring 46469/head
Jos Collin [Fri, 6 May 2022 10:58:23 +0000 (16:28 +0530)]
qa: drop get_blocklisted_instances in TestMirroring

drop get_blocklisted_instances in TestMirroring and use
is_addr_blocklisted instead.

Signed-off-by: Jos Collin <jcollin@redhat.com>
(cherry picked from commit 0e66107c89a127bec4f1a3c83894ff858919c8f3)

3 years agoqa: fix is_addr_blocklisted() to get blocklisted clients from 'osd dump'
Jos Collin [Wed, 4 May 2022 13:03:12 +0000 (18:33 +0530)]
qa: fix is_addr_blocklisted() to get blocklisted clients from 'osd dump'

By the introduction of range blocklist, the 'blocklist ls' command outputs
two lists. It's also straightforward to get the blocklisted clients directly
from 'osd dump' to avoid regression.

Fixes: https://tracker.ceph.com/issues/55516
Signed-off-by: Jos Collin <jcollin@redhat.com>
(cherry picked from commit 47de5d79b8190458847072aae1c29db7d6a9b66b)
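
The approach described above — reading blocklisted clients from 'osd
dump' rather than the two lists now printed by 'blocklist ls' — can be
sketched like this. The dictionary shape is a simplified assumption for
the example, not the exact `ceph osd dump --format json` schema.

```python
def blocklisted_addrs(osd_dump):
    """Extract blocklisted client addresses from a (simplified) osd dump.

    With range blocklisting, plain entries and range entries live in
    separate sections; checking one specific client address only needs
    the plain 'blocklist' section.
    """
    return set(osd_dump.get("blocklist", {}))

def is_addr_blocklisted(osd_dump, addr):
    # membership test against the plain blocklist entries
    return addr in blocklisted_addrs(osd_dump)

dump = {
    "blocklist": {"192.168.0.104:0/3994145196": "2022-06-01T00:00:00"},
    "range_blocklist": {},
}
assert is_addr_blocklisted(dump, "192.168.0.104:0/3994145196")
assert not is_addr_blocklisted(dump, "10.0.0.1:0/1")
```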

3 years agoclient: force send global open_files/metadata metrics 46476/head
Xiubo Li [Thu, 5 May 2022 02:22:04 +0000 (10:22 +0800)]
client: force send global open_files/metadata metrics

This change fixes two missing fixes introduced by commit
e9a26c551c763f75a403ff26f6304d5c10f2ca38.

Fixes: https://tracker.ceph.com/issues/54411
Signed-off-by: Xiubo Li <xiubli@redhat.com>
(cherry picked from commit a328d3264a0b64c03cf90c2d39f37c6962fa45cb)

Conflicts:
          src/client/Client.cc

3 years agomds, client: only send the metrics supported by MDSes
Xiubo Li [Wed, 16 Mar 2022 09:15:57 +0000 (17:15 +0800)]
mds, client: only send the metrics supported by MDSes

For old ceph clusters the clients won't send any metrics to them by
default unless the clusters have this commit backported, but the
option 'client_collect_and_send_global_metrics' can still be used
to enable it manually.

This will fix the crash bug when upgrading from old ceph clusters,
which will crash the MDSes once they receive unknown metrics.

Fixes: https://tracker.ceph.com/issues/54411
Signed-off-by: Xiubo Li <xiubli@redhat.com>
(cherry picked from commit e9a26c551c763f75a403ff26f6304d5c10f2ca38)

Conflicts:
          src/client/Client.cc