git.apps.os.sepia.ceph.com Git - ceph-ansible.git/log
2 years ago shrink-osd fails when the OSD container is stopped
Teoman ONAY [Wed, 1 Mar 2023 20:26:54 +0000 (21:26 +0100)]
shrink-osd fails when the OSD container is stopped

ceph-volume simple scan cannot be executed from the host, as it is
meant to be run inside the OSD container.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2164414
Signed-off-by: Teoman ONAY <tonay@ibm.com>
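
A minimal sketch of the constraint described above, assuming a
containerized OSD named ceph-osd-<id>; the task shape and the osd_id
variable are illustrative, not the actual fix:

    # Hypothetical task: run the scan inside the OSD container, where
    # ceph-volume simple scan is meant to be executed.
    - name: run ceph-volume simple scan inside the OSD container
      ansible.builtin.command: >
        podman exec ceph-osd-{{ osd_id }}
        ceph-volume simple scan
      changed_when: false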
2 years ago Fix selinux label issues
Teoman ONAY [Tue, 14 Mar 2023 11:26:56 +0000 (12:26 +0100)]
Fix selinux label issues

Add --security-opt label=disable to all containers
accessing /var/lib/ceph. The podman SELinux relabeling behaviour
changed since podman-3:4.2.0-1, which prevents some containers from
accessing files in these subdirectories.

Signed-off-by: Teoman ONAY <tonay@ibm.com>
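
A minimal sketch of the flag in context; only --security-opt
label=disable comes from the commit, while the task shape and the
other run options are assumptions:

    # Illustrative task: the ceph_docker_* variables follow
    # ceph-ansible conventions; only the --security-opt flag is
    # taken from the commit.
    - name: run a ceph container with SELinux labeling disabled
      ansible.builtin.command: >
        podman run --rm --net=host
        --security-opt label=disable
        -v /var/lib/ceph:/var/lib/ceph
        -v /etc/ceph:/etc/ceph
        {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}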
2 years ago Fixes selinux relabeling issue for nfs container
Teoman ONAY [Thu, 2 Mar 2023 22:01:48 +0000 (23:01 +0100)]
Fixes selinux relabeling issue for nfs container

Signed-off-by: Teoman ONAY <tonay@ibm.com>
2 years ago Uses a more recent version of the CentOS stream 8 box
Teoman ONAY [Tue, 7 Feb 2023 13:30:52 +0000 (14:30 +0100)]
Uses a more recent version of the CentOS stream 8 box

Uses the latest centos/streamX image available.

Signed-off-by: Teoman ONAY <tonay@ibm.com>
2 years ago osd: drop filestore support
Guillaume Abrioux [Wed, 15 Feb 2023 03:35:01 +0000 (04:35 +0100)]
osd: drop filestore support

filestore is about to be removed from Ceph. This commit removes
filestore support from ceph-ansible.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2 years ago devices: remove duplicated disks after the readlink resolve
Seena Fallah [Mon, 13 Feb 2023 17:07:25 +0000 (18:07 +0100)]
devices: remove duplicated disks after the readlink resolve

If a disk has a symlink, it will be added to the devices list twice: once with the resolved path and once with the defined path.
We can rebuild the list from the readlink output, because readlink always returns the correct path for all disks.

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
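
A sketch of the approach, assuming the standard `devices` list; the
task names and the register variable are illustrative:

    # Resolve each configured device to its canonical path.
    - name: resolve each device path with readlink
      ansible.builtin.command: readlink -f {{ item }}
      loop: "{{ devices }}"
      register: resolved_devices
      changed_when: false

    # Rebuild the list from the resolved paths so a disk referenced
    # both by symlink and by real path appears only once.
    - name: rebuild the devices list from the resolved paths
      ansible.builtin.set_fact:
        devices: "{{ resolved_devices.results | map(attribute='stdout') | unique | list }}"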
2 years ago devices: exclude db disks on osd_auto_discovery enabled
Seena Fallah [Mon, 13 Feb 2023 16:12:41 +0000 (17:12 +0100)]
devices: exclude db disks on osd_auto_discovery enabled

Exclude disks that were defined in dedicated_devices and bluestore_wal_devices when osd_auto_discovery is enabled.

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
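
A sketch of the exclusion, assuming the standard ceph-ansible variable
names; the task itself is illustrative:

    # Drop dedicated DB and WAL disks from the auto-discovered list.
    - name: exclude dedicated DB and WAL devices from auto-discovery
      ansible.builtin.set_fact:
        devices: "{{ devices
                     | difference(dedicated_devices | default([]))
                     | difference(bluestore_wal_devices | default([])) }}"
      when: osd_auto_discovery | bool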
2 years ago Collocated mgr with mon fails to start on RHEL 8.7
Teoman ONAY [Tue, 7 Feb 2023 13:53:39 +0000 (14:53 +0100)]
Collocated mgr with mon fails to start on RHEL 8.7

With podman version podman-3:4.2.0-4.module+el8.7.0+17064+3b31f55c and
later, the mgr fails to start if the mon is already running.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2169767
Signed-off-by: Teoman ONAY <tonay@ibm.com>
2 years ago ceph-config: make sure rgw_instances is set
Guillaume Abrioux [Tue, 7 Feb 2023 00:52:43 +0000 (01:52 +0100)]
ceph-config: make sure rgw_instances is set

We need to make sure `rgw_instances` is set before `ceph.conf` is
rendered. Otherwise, the `ceph-crash` play in the main playbook updates
(via ceph-handler) the `ceph.conf` on rgw nodes and removes rgw instances
sections.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2141604
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2 years ago Initialize rbd pool at pool creation
Teoman ONAY [Tue, 29 Nov 2022 08:47:58 +0000 (09:47 +0100)]
Initialize rbd pool at pool creation

When creating an RBD pool, it needs to be initialized as per the
documentation [1]. Modified (pre_)generate_ceph_cmd to make it usable
with any command that takes the same parameters as the ceph command.

[1]https://docs.ceph.com/en/latest/rbd/rados-rbd-cmds/#create-a-block-device-pool

Signed-off-by: Teoman ONAY <tonay@redhat.com>
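
The underlying commands come from the linked documentation [1];
wrapping them in a task like this is only a sketch, and the pool name
is illustrative:

    # Create the pool, then initialize it for RBD use, per the docs.
    - name: create and initialize an RBD pool
      ansible.builtin.command: "{{ item }}"
      loop:
        - ceph osd pool create rbd
        - rbd pool init rbd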
2 years ago Check the OSD storage file first rather than after it is created
Mario Codeniera [Tue, 6 Dec 2022 08:18:03 +0000 (21:18 +1300)]
Check the OSD storage file first rather than after it is created

Signed-off-by: Mario Codeniera <M.Codeniera@massey.ac.nz>
2 years ago tests: use quay.io instead of quay.ceph.io
Guillaume Abrioux [Tue, 6 Dec 2022 12:14:07 +0000 (13:14 +0100)]
tests: use quay.io instead of quay.ceph.io

This makes the CI use quay.io instead of quay.ceph.io

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2 years ago cephadm-adopt: fix rbd-mirror adoption
Guillaume Abrioux [Mon, 14 Nov 2022 11:29:37 +0000 (12:29 +0100)]
cephadm-adopt: fix rbd-mirror adoption

The recent rbdmirror refactor introduced a regression in the
cephadm-adopt playbook.
Given that the rbd-mirror peer addition is now done by using the monitor
config-key store method during the cluster deployment, we can drop this
play from the cephadm-adopt.yml playbook.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2140569
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2 years ago Setting fact _radosgw_address fails when RGW is on a different network
Teoman ONAY [Tue, 18 Oct 2022 13:28:54 +0000 (15:28 +0200)]
Setting fact _radosgw_address fails when RGW is on a different network

Changed the when condition to only execute that fact setting on RGW
nodes; before, it was run on all nodes and failed if the node
was not on the same network range as the RGW.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2131150
Signed-off-by: Teoman ONAY <tonay@redhat.com>
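
A sketch of the kind of guard described above; the ips_in_ranges
filter and the variables follow ceph-ansible conventions, but the
exact expression is illustrative:

    # Only set the fact on RGW nodes, so nodes outside the RGW
    # network range are never evaluated.
    - name: set _radosgw_address from radosgw_address_block
      ansible.builtin.set_fact:
        _radosgw_address: "{{ ansible_facts['all_ipv4_addresses']
                              | ips_in_ranges(radosgw_address_block.split(','))
                              | first }}"
      when: inventory_hostname in groups.get(rgw_group_name, [])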
2 years ago dashboard: support --limit execution with rgw
Guillaume Abrioux [Wed, 13 Apr 2022 08:42:47 +0000 (10:42 +0200)]
dashboard: support --limit execution with rgw

When the following conditions are met:

- rgw is deployed,
- dashboard is deployed,
- playbook is called with --limit,
- the node being processed is collocated with either a mon or a mgr.

The playbook fails because `rgw_instances` is undefined.
The idea here is to make sure this variable is always defined.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2063029
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
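
A minimal sketch of making the variable always defined:

    # Guarantee rgw_instances exists even on nodes that never
    # computed it, so later templating cannot hit an undefined var.
    - name: ensure rgw_instances is defined
      ansible.builtin.set_fact:
        rgw_instances: "{{ rgw_instances | default([]) }}"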
2 years ago facts: follow up on aa0cc93
Guillaume Abrioux [Thu, 21 Apr 2022 08:06:56 +0000 (10:06 +0200)]
facts: follow up on aa0cc93

When these variables are defined in the inventory host file,
all tasks are skipped because the node being played isn't
aware of the values from the rgw nodes.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2063029
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2 years ago switch-to-containers: ignore errors when stopping service
Guillaume Abrioux [Mon, 17 Oct 2022 08:20:21 +0000 (10:20 +0200)]
switch-to-containers: ignore errors when stopping service

There might be cases where it can break idempotency.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
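
A sketch of the pattern, assuming an illustrative unit name and loop
variable:

    # Tolerate a failure when the unit is already stopped or gone,
    # so re-running the playbook stays idempotent.
    - name: stop and disable the OSD service, tolerating failures
      ansible.builtin.systemd:
        name: ceph-osd@{{ item }}
        state: stopped
        enabled: no
      failed_when: false
      loop: "{{ osd_ids }}"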
2 years ago switch-to-containers: fix rbd-mirror migration
Guillaume Abrioux [Fri, 14 Oct 2022 17:33:26 +0000 (19:33 +0200)]
switch-to-containers: fix rbd-mirror migration

`--state=enabled` isn't a valid filter so the unit from the packaging
never gets removed.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2134917
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
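
A sketch of querying the unit-file state instead: "enabled" is a
unit-file state rather than a unit state, so systemctl is-enabled (or
list-unit-files --state=enabled) is the right tool. The instance name
is an assumption:

    # Check the unit-file state of the packaged rbd-mirror unit;
    # the @rbd-mirror.<hostname> instance name is illustrative.
    - name: check whether the packaged rbd-mirror unit is enabled
      ansible.builtin.command: >
        systemctl is-enabled ceph-rbd-mirror@rbd-mirror.{{ ansible_facts['hostname'] }}
      register: rbd_mirror_unit_status
      changed_when: false
      failed_when: false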
2 years ago library/radosgw_user.py: fix user update
John Karasev [Wed, 28 Sep 2022 18:57:41 +0000 (11:57 -0700)]
library/radosgw_user.py: fix user update

Removes the case where display_name was defined previously but
was not provided when modifying. Without this change the module
would change display_name to name even if display_name did not equal
name originally. See #7296

Signed-off-by: John Karasev <john.karasev@intel.com>
2 years ago ceph-osd: remove unused ceph config set for osd_memory_target
Seena Fallah [Sat, 24 Sep 2022 17:49:09 +0000 (19:49 +0200)]
ceph-osd: remove unused ceph config set for osd_memory_target

As the conf is always set in the config file, there is no need to also set it with `ceph config`.
This also makes it hard to run the playbook with the `ceph_update_config` tag, as the task won't run and will create an inconsistency between the config managements of the cluster.

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
2 years ago ceph-config: fix overriding osd_memory_target
Seena Fallah [Sat, 24 Sep 2022 17:46:20 +0000 (19:46 +0200)]
ceph-config: fix overriding osd_memory_target

When the value is overridden in `ceph_conf_overrides`, there is no need to calculate and set `osd_memory_target` again, since we want the conf to be overridden by our desired value.

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
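
The override described above takes this shape in the inventory; the
numeric value is illustrative:

    # Pin osd_memory_target (bytes) via ceph_conf_overrides so
    # ceph-ansible skips its own calculation.
    ceph_conf_overrides:
      osd:
        osd_memory_target: 4294967296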
2 years ago ceph-config: don't check for devices on existing osds
Seena Fallah [Sat, 24 Sep 2022 17:21:41 +0000 (19:21 +0200)]
ceph-config: don't check for devices on existing osds

When osd_auto_discovery is true, the `devices` var will be empty (as the disks have holders).
In general there is also no need to check for devices before listing them with ceph-volume, as we have `default({})` on the stdout in the `num_osds` set_fact in the next task.

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
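
A sketch of the guard mentioned above; the register name is an
assumption:

    # If ceph-volume printed nothing, fall back to an empty JSON
    # object instead of failing the from_json parse.
    - name: set_fact num_osds from the ceph-volume output
      ansible.builtin.set_fact:
        num_osds: "{{ (ceph_volume_list.stdout | default('{}') | from_json) | length }}"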
2 years ago ceph-config: always set _osd_memory_target
Seena Fallah [Sat, 24 Sep 2022 17:07:36 +0000 (19:07 +0200)]
ceph-config: always set _osd_memory_target

This should be set when rolling_update is true as well; otherwise, it will reset to the default on upgrade.

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
2 years ago common: v18/reef kickoff
Guillaume Abrioux [Fri, 7 Oct 2022 08:49:00 +0000 (10:49 +0200)]
common: v18/reef kickoff

Align with ceph/ceph/pull/47458 since it has been merged.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2 years ago doc: update README-MULTISITE.md
quangln94 [Fri, 7 Oct 2022 14:23:47 +0000 (16:23 +0200)]
doc: update README-MULTISITE.md

I updated the parameters according to PR #7315.
In addition, where rgw_zonesecondary: True and rgw_zonemaster: False,
I changed rgw_zonegroupmaster to False on the lines below:

https://github.com/ceph/ceph-ansible/blob/main/README-MULTISITE.md?plain=1#L396
https://github.com/ceph/ceph-ansible/blob/main/README-MULTISITE.md?plain=1#L417
https://github.com/ceph/ceph-ansible/blob/main/README-MULTISITE.md?plain=1#L520
https://github.com/ceph/ceph-ansible/blob/main/README-MULTISITE.md?plain=1#L535

I also added a note at line 205: https://github.com/ceph/ceph-ansible/blob/main/README-MULTISITE.md?plain=1#L205

Signed-off-by: quangln94 <ngocquang.ptit@gmail.com>
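
The secondary-zone combination described above, as it would appear in
the inventory:

    # Secondary zone in a multisite setup: neither the zone nor the
    # zonegroup master lives on this cluster.
    rgw_zonemaster: False
    rgw_zonesecondary: True
    rgw_zonegroupmaster: False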
2 years ago Update ceph_ec_profile.py
quangln94 [Fri, 23 Sep 2022 07:32:04 +0000 (14:32 +0700)]
Update ceph_ec_profile.py

The parameter "crush_root" is unsupported by the ceph_ec_profile module.
See this issue: https://github.com/ceph/ceph-ansible/issues/7306
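
A sketch of invoking the module without the unsupported parameter; the
k/m values are illustrative and the exact parameter set is an
assumption:

    # Hypothetical invocation: create an EC profile using only
    # parameters the module supports, omitting crush_root.
    - name: create an erasure-code profile without crush_root
      ceph_ec_profile:
        name: myprofile
        k: 4
        m: 2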