From: Ilya Dryomov
Date: Tue, 9 Jul 2024 20:54:10 +0000 (+0200)
Subject: qa/suites/fs/workload: drop mgrmodules stanza
X-Git-Tag: v20.0.0~1220^2
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=d25fe4e7860e889c09242ea52f876ee23d54ff13;p=ceph.git

qa/suites/fs/workload: drop mgrmodules stanza

After commit 5601f91bf97e ("qa: move some configs to cluster-conf"),
mgrmodules shouldn't be needed.  Currently the "ceph mgr module enable"
command actually gets run twice, with the one triggered by mgrmodules
coming in 10 minutes later:

    2024-07-04T02:37:06.207 INFO:tasks.cephadm:enabling module snap_schedule
    2024-07-04T02:37:06.208 DEBUG:teuthology.orchestra.run.smithi032:> sudo /home/ubuntu/cephtest/cephadm ... shell ... -- sudo ceph --cluster ceph mgr module enable snap_schedule
    2024-07-04T02:47:08.104 INFO:teuthology.run_tasks:Running task sequential...
    2024-07-04T02:47:08.114 INFO:teuthology.task.sequential:In sequential, running task sequential...
    2024-07-04T02:47:08.114 INFO:teuthology.task.sequential:In sequential, running task print...
    2024-07-04T02:47:08.115 INFO:teuthology.task.print:Enabling mgr modules
    2024-07-04T02:47:08.115 INFO:teuthology.task.sequential:In sequential, running task exec...
    2024-07-04T02:47:08.116 INFO:teuthology.task.exec:Executing custom commands...
    2024-07-04T02:47:08.116 INFO:teuthology.task.exec:Running commands on role mon.a host ubuntu@smithi032.front.sepia.ceph.com
    2024-07-04T02:47:08.116 DEBUG:teuthology.orchestra.run.smithi032:> sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph mgr module enable snap_schedule'

Signed-off-by: Ilya Dryomov
---

diff --git a/qa/cephfs/begin/3-modules.yaml b/qa/cephfs/begin/3-modules.yaml
deleted file mode 100644
index 259473425693..000000000000
--- a/qa/cephfs/begin/3-modules.yaml
+++ /dev/null
@@ -1,19 +0,0 @@
-# Enable mgr modules now before any CephFS mounts are created by the mgr. This
-# avoids the potential race of the mgr mounting CephFS and then getting failed
-# over by the monitors before the monitors have a chance to note the new client
-# session from the mgr beacon. In that case, the monitors will not blocklist
-# that client mount automatically so the MDS will eventually do the eviction
-# (and create a cluster log warning which we want to avoid).
-#
-# Note: ideally the mgr would gently stop mgr modules before respawning so that
-# the client mounts can be unmounted but this caused issues historically with
-# modules like the dashboard so an abrupt restart was chosen instead.
-
-mgrmodules:
-  sequential:
-    - print: "Enabling mgr modules"
-  # other fragments append to this
-
-tasks:
-  - sequential:
-    - mgrmodules
diff --git a/qa/suites/fs/workload/begin/3-modules.yaml b/qa/suites/fs/workload/begin/3-modules.yaml
deleted file mode 120000
index 1eba706a59dc..000000000000
--- a/qa/suites/fs/workload/begin/3-modules.yaml
+++ /dev/null
@@ -1 +0,0 @@
-.qa/cephfs/begin/3-modules.yaml
\ No newline at end of file
diff --git a/qa/suites/fs/workload/tasks/3-snaps/yes.yaml b/qa/suites/fs/workload/tasks/3-snaps/yes.yaml
index dee81778942e..51bbe2a3dbfa 100644
--- a/qa/suites/fs/workload/tasks/3-snaps/yes.yaml
+++ b/qa/suites/fs/workload/tasks/3-snaps/yes.yaml
@@ -1,8 +1,3 @@
-mgrmodules:
-  sequential:
-    - exec:
-        mon.a:
-          - ceph mgr module enable snap_schedule
 overrides:
   ceph:
     mgr-modules: