git.apps.os.sepia.ceph.com Git - ceph-ansible.git/log
ceph-ansible.git
3 years agotests: add osd node in collocation
Guillaume Abrioux [Tue, 28 Sep 2021 20:24:43 +0000 (22:24 +0200)]
tests: add osd node in collocation

We update the pool size from 1 to 2 in the idempotency test but only
1 OSD node is available, so add a second one.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
3 years agocephadm-adopt: add no_log: true
Guillaume Abrioux [Tue, 21 Sep 2021 08:41:53 +0000 (10:41 +0200)]
cephadm-adopt: add no_log: true

Let's add a `no_log: true` on the `cephadm registry-login` task.
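
A minimal sketch of such a task, assuming the registry credential variables
defined in ceph-defaults (`ceph_docker_registry*`); the real task may differ:

```yaml
- name: cephadm registry login
  command: >
    cephadm registry-login
    --registry-url {{ ceph_docker_registry }}
    --registry-username {{ ceph_docker_registry_username }}
    --registry-password {{ ceph_docker_registry_password }}
  no_log: true
  changed_when: false
```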

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
3 years agoadopt: stop iscsi services in the first place
Guillaume Abrioux [Fri, 24 Sep 2021 12:45:11 +0000 (14:45 +0200)]
adopt: stop iscsi services in the first place

If old containers are still running, the tcmu-runner process can be
unable to open devices and there's nothing else to do but restart
the container.

Also, as per discussion with iscsi experts, iscsi should be migrated before
the OSDs (the client should be closed before the server).

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2000412
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agotests: auth_allow_insecure_global_id_reclaim false
Dimitri Savineau [Tue, 10 Aug 2021 15:41:50 +0000 (11:41 -0400)]
tests: auth_allow_insecure_global_id_reclaim false

Otherwise the clients won't be able to reconnect after the reboot in the
all_daemons and collocation jobs.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agotests: fix container-cephadm job
Guillaume Abrioux [Thu, 16 Sep 2021 14:53:33 +0000 (16:53 +0200)]
tests: fix container-cephadm job

Add the missing `containerized_deployment` variable in group_vars.
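
For reference, this is a single group_vars entry, e.g.:

```yaml
containerized_deployment: true
```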

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agocommon: install ceph-volume package
Guillaume Abrioux [Thu, 16 Sep 2021 12:02:17 +0000 (14:02 +0200)]
common: install ceph-volume package

Since the Pacific release, ceph-volume has its own package, so
ceph-ansible has to install it explicitly on OSD nodes.
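
A sketch of the corresponding install task (the actual role task and its
conditions may differ; `osd_group_name` is the group variable from ceph-defaults):

```yaml
- name: install the ceph-volume package
  package:
    name: ceph-volume
    state: present
  when: inventory_hostname in groups.get(osd_group_name, [])
```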

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agocephadm-adopt: set cephadm registry login info
Daniel Pivonka [Thu, 9 Sep 2021 21:14:10 +0000 (17:14 -0400)]
cephadm-adopt: set cephadm registry login info

The registry login info needs to be stored in the cluster for cephadm and future hosts.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2000103
Signed-off-by: Daniel Pivonka <dpivonka@redhat.com>
4 years agoRevert "tests: rename grafana to monitoring"
Guillaume Abrioux [Thu, 9 Sep 2021 14:01:47 +0000 (16:01 +0200)]
Revert "tests: rename grafana to monitoring"

This reverts commit a36586a7777dc34cb977f81dc2d5bdfa2bebd4b6.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agotests: rename grafana to monitoring
Dimitri Savineau [Mon, 9 Aug 2021 17:38:26 +0000 (13:38 -0400)]
tests: rename grafana to monitoring

Since the grafana-server group has been renamed to monitoring, update the
associated tests accordingly.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agopurge: add remove_docker tag
Seena Fallah [Mon, 16 Aug 2021 20:37:40 +0000 (01:07 +0430)]
purge: add remove_docker tag

This makes it possible to skip the docker removal tasks.

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
4 years agopurge: add container_binary needed for zap osds
Seena Fallah [Mon, 16 Aug 2021 20:08:47 +0000 (00:38 +0430)]
purge: add container_binary needed for zap osds

`container_binary` isn't set anymore in the purge osd play because of a
regression introduced by 60aa70a.
The CI didn't catch it because the play purging node-exporter sets this
variable for all nodes before we run the purge osd play.

This commit fixes this regression.

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
4 years agoceph-defaults: set quay.io as the default registry
Dimitri Savineau [Fri, 27 Aug 2021 16:01:27 +0000 (12:01 -0400)]
ceph-defaults: set quay.io as the default registry

Because the ceph container images are now only pushed to the quay.io
registry, this updates the default registry value.
The docker.io registry can still be used but doesn't receive updated
container images.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agopurge-dashboard: remove cid files
Dimitri Savineau [Tue, 7 Sep 2021 16:13:37 +0000 (12:13 -0400)]
purge-dashboard: remove cid files

This adds the service cid file cleanup, as supported in the classic purge
playbook since b9dd253.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1786691
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agotests/rgw: use json format output for user info
Dimitri Savineau [Thu, 26 Aug 2021 20:45:07 +0000 (16:45 -0400)]
tests/rgw: use json format output for user info

If the radosgw user already exists then we need to have the output in json
format because we are expecting to load the output with json.loads().
Otherwise we get a pytest failure like:

```console
self = <json.decoder.JSONDecoder object at 0x7fa2f00a5fd0>, s = '', idx = 0

    def raw_decode(self, s, idx=0):
        """Decode a JSON document from ``s`` (a ``str`` beginning with
        a JSON document) and return a 2-tuple of the Python
        representation and the index in ``s`` where the document ended.

        This can be used to decode a JSON document from a string that may
        have extraneous data at the end.

        """
        try:
            obj, end = self.scan_once(s, idx)
        except StopIteration as err:
>           raise JSONDecodeError("Expecting value", s, err.value) from None
E           json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agotests/rgw: add timeout 5s to radosgw-admin command
Dimitri Savineau [Tue, 10 Aug 2021 15:57:01 +0000 (11:57 -0400)]
tests/rgw: add timeout 5s to radosgw-admin command

If the radosgw daemons aren't up and running correctly (e.g. not registered
in the servicemap or the OSDs are down) then the radosgw-admin command will
hang forever.
Jenkins will kill the jobs after 3h but we don't want to wait until this global
timeout.
Adding the timeout 5 command to the radosgw-admin commands (which is already
present on other ceph calls) allows the job to fail earlier.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agocephadm-adopt: fix orch host add with FQDN
Dimitri Savineau [Thu, 26 Aug 2021 16:06:11 +0000 (12:06 -0400)]
cephadm-adopt: fix orch host add with FQDN

When a node is configured with an FQDN as the hostname value, the
`ceph orch host add` command will fail because the `ansible_hostname` fact used
by that command contains the short hostname, which won't match the current
hostname (FQDN).
Instead we can use the `ansible_nodename` fact.
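
A sketch of the adjusted task; the real playbook wraps the ceph command
differently, this only illustrates the fact swap:

```yaml
- name: add host to the cephadm inventory
  command: "ceph orch host add {{ ansible_facts['nodename'] }}"
  changed_when: false
  delegate_to: "{{ groups[mon_group_name][0] }}"
```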

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1997083
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agocontainer: explicitly pull monitoring images
Dimitri Savineau [Thu, 19 Aug 2021 18:08:06 +0000 (14:08 -0400)]
container: explicitly pull monitoring images

We don't pull the monitoring container images (alertmanager, prometheus,
node-exporter and grafana) in a dedicated task like we're doing for the
ceph container image.
This means that the container image pull is done during the start of the
systemd service.
As a result, pulling the image behind a proxy doesn't work with podman.
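
A sketch of an explicit pull task, assuming the monitoring image variables
from ceph-defaults:

```yaml
- name: pull monitoring container images
  command: "{{ container_binary }} pull {{ item }}"
  changed_when: false
  loop:
    - "{{ alertmanager_container_image }}"
    - "{{ prometheus_container_image }}"
    - "{{ node_exporter_container_image }}"
    - "{{ grafana_container_image }}"
```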

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1995574
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agoRevert "tests: use old build of ceph@master"
Dimitri Savineau [Thu, 19 Aug 2021 18:32:21 +0000 (14:32 -0400)]
Revert "tests: use old build of ceph@master"

This reverts commit 47a451426a8308a4ea80e0a1e4d867e9dd290fe5.

This build isn't available on shaman anymore.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agoiscsi: don't set default value for trusted_ip_list
Guillaume Abrioux [Wed, 18 Aug 2021 11:23:44 +0000 (13:23 +0200)]
iscsi: don't set default value for trusted_ip_list

It restricts access to the iSCSI API.
It can be left empty if the API isn't going to be accessed from outside the
gateway node.

Even though this seems to be a limited use case, it's better to leave it
empty by default than having a meaningless default value.

We could make this variable mandatory but that would be a breaking
change. Let's just add logic in the template so this variable is set in the
configuration file only if it was specified by the user.
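
In the iscsi-gateway.cfg template that logic can be as simple as this Jinja
sketch (variable name as in the commit):

```
{% if trusted_ip_list is defined %}
trusted_ip_list = {{ trusted_ip_list }}
{% endif %}
```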

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1994930
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Co-authored-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agocephadm-adopt: remove ceph-nfs.target
Dimitri Savineau [Wed, 18 Aug 2021 15:15:39 +0000 (11:15 -0400)]
cephadm-adopt: remove ceph-nfs.target

This systemd target doesn't exist at all.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agocontainers: introduce target systemd unit
Guillaume Abrioux [Tue, 10 Aug 2021 13:21:19 +0000 (15:21 +0200)]
containers: introduce target systemd unit

This adds ceph-*.target systemd unit files support for containerized
deployments.
This also fixes a regression introduced by PR #6719 (rgw and nfs systemd
units not getting purged)
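
A sketch of the kind of task this implies per daemon (unit names follow the
ceph-<daemon>.target pattern; the actual role tasks may differ):

```yaml
- name: enable and start the ceph-osd target
  systemd:
    name: ceph-osd.target
    state: started
    enabled: true
    daemon_reload: true
```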

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1962748
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agoVagrantfile: fallback on 'vagrant_variables.yml.sample'
Guillaume Abrioux [Tue, 10 Aug 2021 14:11:37 +0000 (16:11 +0200)]
Vagrantfile: fallback on 'vagrant_variables.yml.sample'

When using a vagrant command from the root directory of the repo, it
throws an error if no 'vagrant_variables.yml' file is present.

```
Message: Errno::ENOENT: No such file or directory @ rb_sysopen - /home/guits/workspaces/ceph-ansible/vagrant_variables.yml
```

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agoceph-container-engine: allow override container_package_name and container_service_name
Seena Fallah [Thu, 5 Aug 2021 11:03:55 +0000 (15:33 +0430)]
ceph-container-engine: allow override container_package_name and container_service_name

Only include specific variables when they are undefined

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
4 years agocephadm-adopt: use cephadm_ssh_user for ssh user
Seena Fallah [Tue, 27 Jul 2021 17:44:38 +0000 (22:14 +0430)]
cephadm-adopt: use cephadm_ssh_user for ssh user

Use `cephadm_ssh_user` to set a custom user (instead of root) for cephadm to SSH to the hosts.
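
For illustration, cephadm keeps this setting in the cluster; a hedged sketch
of the kind of task involved (the playbook's actual command wrapper may differ):

```yaml
- name: set the cephadm ssh user
  command: "ceph cephadm set-user {{ cephadm_ssh_user | default('root') }}"
  changed_when: false
  delegate_to: "{{ groups[mon_group_name][0] }}"
```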

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
4 years agoroles: remove leftover from pr #4319
Guillaume Abrioux [Tue, 10 Aug 2021 13:34:50 +0000 (15:34 +0200)]
roles: remove leftover from pr #4319

PR #4319 introduced some useless `become: true` on systemd tasks.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agoupdate: gather facts only one time
Guillaume Abrioux [Tue, 17 Aug 2021 14:07:03 +0000 (16:07 +0200)]
update: gather facts only one time

this play doesn't need to gather facts from localhost

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agoceph-dashboard: fix object gateway integration
Dimitri Savineau [Tue, 17 Aug 2021 15:27:57 +0000 (11:27 -0400)]
ceph-dashboard: fix object gateway integration

Since [1], multiple ceph dashboard commands have been removed, which breaks
the current ceph-ansible dashboard with RGW automation.
This removes the following dashboard rgw commands:

- ceph dashboard set-rgw-api-access-key
- ceph dashboard set-rgw-api-secret-key
- ceph dashboard set-rgw-api-host
- ceph dashboard set-rgw-api-port
- ceph dashboard set-rgw-api-scheme

Which are replaced by `ceph dashboard set-rgw-credentials`

The RGW user creation task is also removed.

Finally, move the delegate_to statement from the rgw tasks to the block
level.

[1] https://github.com/ceph/ceph/pull/42252

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agoceph-volume: hide OSD keyring during creation
Dimitri Savineau [Thu, 12 Aug 2021 15:08:27 +0000 (11:08 -0400)]
ceph-volume: hide OSD keyring during creation

When using ceph-volume lvm create/prepare/batch, the keyring of each
OSD created is displayed in the output.
Let's replace those with '*' characters.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agotests: use old build of ceph@master
Guillaume Abrioux [Thu, 12 Aug 2021 21:45:06 +0000 (23:45 +0200)]
tests: use old build of ceph@master

This is for unlocking the CI.
It is intended to be reverted.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agoceph-mon: do not log monitor keyring
Dimitri Savineau [Wed, 11 Aug 2021 20:01:08 +0000 (16:01 -0400)]
ceph-mon: do not log monitor keyring

We don't want to display the keyring in the ansible log.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agocommon: do not log keyring secret
Guillaume Abrioux [Mon, 9 Aug 2021 12:57:33 +0000 (14:57 +0200)]
common: do not log keyring secret

Let's not display any keyring secret by default in the ansible log.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1980744
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agoceph-dashboard: fix TLS cert openssl generation
Dimitri Savineau [Mon, 9 Aug 2021 14:33:40 +0000 (10:33 -0400)]
ceph-dashboard: fix TLS cert openssl generation

With OpenSSL versions prior to 1.1.1 (like CentOS 7 with 1.0.2k), the -addext
option doesn't exist.
As a solution, this uses the default openssl.cnf configuration file as a
template and adds the subjectAltName in the v3_ca section. This temporary
openssl configuration file is removed after the TLS certificate creation.
This patch also moves the run_once statement to the block level.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1978869
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agoFixes typo in rgw-add-users-buckets playbook
VasishtaShastry [Fri, 6 Aug 2021 10:40:19 +0000 (16:10 +0530)]
Fixes typo in rgw-add-users-buckets playbook

Signed-off-by: VasishtaShastry <vipin.indiasmg@gmail.com>
4 years agodashboard: subj_alt_names fact refactor
Guillaume Abrioux [Thu, 5 Aug 2021 13:00:49 +0000 (15:00 +0200)]
dashboard: subj_alt_names fact refactor

the current way the variable is built results in:

```
2021-08-03 04:18:23,020 - ceph.ceph - INFO - ok: [ceph-sangadi-4x-indpt6-node1-installer] => changed=false
  ansible_facts:
    subj_alt_names: |-
      subjectAltName=ceph-sangadi-4x-indpt6-node1-installer/subjectAltName=10.0.210.223/subjectAltName=ceph-sangadi-4x-indpt6-node1-installersubjectAltName=ceph-sangadi-4x-indpt6-node2/subjectAltName=10.0.210.252/subjectAltName=ceph-sangadi-4x-indpt6-node2/
```

which is incorrect.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1978869
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agoadopt: import rgw ssl certificate into kv store
Guillaume Abrioux [Wed, 28 Jul 2021 19:50:15 +0000 (21:50 +0200)]
adopt: import rgw ssl certificate into kv store

Without this, when rgw is managed by cephadm, it fails to start because
the ssl certificate isn't present in the kv store.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1987010
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1988404
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Co-authored-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agocephadm-adopt: remove nfs pool and namespace
Dimitri Savineau [Wed, 4 Aug 2021 19:11:59 +0000 (15:11 -0400)]
cephadm-adopt: remove nfs pool and namespace

This has been removed from the code (orch apply name).
The default pool name is now .nfs

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agoinfra: use dedicated variables for balancer status
Dimitri Savineau [Tue, 3 Aug 2021 15:58:49 +0000 (11:58 -0400)]
infra: use dedicated variables for balancer status

The balancer status is registered during the cephadm-adopt, rolling_update
and switch2container playbooks. But it is also used in the ceph-handler role,
which is included in those playbooks too.
Even if the ceph-handler tasks are skipped for rolling_update and
switch2container, the balancer_status variable is erased with the skip task
result.

play1:
  register: balancer_status
play2:
  register: balancer_status <-- skipped
play3:
  when: (balancer_status.stdout | from_json)['active'] | bool

This leads to issues like:

The conditional check '(balancer_status.stdout | from_json)['active'] | bool'
failed. The error was: Unexpected templating type error occurred on
({% if (balancer_status.stdout | from_json)['active'] | bool %} True
{% else %} False {% endif %}): expected string or buffer.
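
Registering the result into a playbook-specific variable avoids the
clobbering; a sketch:

```yaml
- name: get the balancer status
  command: "ceph balancer status --format json"
  register: balancer_status_update   # dedicated name instead of the shared balancer_status
  changed_when: false
  run_once: true
```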

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1982054
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agopodman pids.max default value is 2048, docker's one is 4096 which are
Teoman ONAY [Tue, 3 Aug 2021 14:06:53 +0000 (16:06 +0200)]
podman's pids.max default value is 2048 and docker's is 4096, which are
sufficient for the default value (512) of the rgw thread pool size.
But if that value is increased close to the pids-limit value,
it does not leave room for the other processes to spawn and run within
the container, and the container crashes.

Set pids-limit to unlimited regardless of the container engine.
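
Note that "unlimited" is spelled differently per engine (0 for podman,
-1 for docker), so the run flag in the systemd unit templates can be expressed
with a Jinja conditional like this sketch:

```
--pids-limit={{ 0 if container_binary == 'podman' else -1 }}
```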

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1987041
Signed-off-by: Teoman ONAY <tonay@redhat.com>
4 years agoceph-defaults: remove radosgw_civetweb_ variables
Dimitri Savineau [Thu, 29 Jul 2021 15:42:03 +0000 (11:42 -0400)]
ceph-defaults: remove radosgw_civetweb_ variables

radosgw_civetweb_xxx variables are legacy variables and users should
have switched to radosgw_frontend_xxx variables instead.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agoosds: use osd pool ls instead of osd dump command
Dimitri Savineau [Wed, 28 Jul 2021 18:54:15 +0000 (14:54 -0400)]
osds: use osd pool ls instead of osd dump command

The ceph osd pool ls detail command is a subset of the ceph osd dump
command.

$ ceph osd dump --format json|wc -c
10117
$ ceph osd pool ls detail --format json|wc -c
4740

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agolibrary: exit on user creation failure
Dimitri Savineau [Wed, 28 Jul 2021 16:27:00 +0000 (12:27 -0400)]
library: exit on user creation failure

When the ceph dashboard user creation fails then the issue is hidden
as we don't check the return code and don't print the error message
in the module output.

This ends up with a failure on the ceph dashboard set roles command saying
that the user doesn't exist.

By failing on the user creation, we will have an explicit explanation of
the issue (like a weak password).

Closes: #6197
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agorolling_update: get ceph version when mons exist
Dimitri Savineau [Thu, 29 Jul 2021 16:26:33 +0000 (12:26 -0400)]
rolling_update: get ceph version when mons exist

eec3878 introduced a regression for upgrade scenarios where there are no
monitor nodes at all (like ganesha standalone, external clients, etc.)

TASK [get the ceph release being deployed] ************************************
task path: infrastructure-playbooks/rolling_update.yml:121
Thursday 29 July 2021  15:55:29 +0000 (0:00:00.484)       0:00:15.802 *********
fatal: [client0]: FAILED! =>
  msg: '''dict object'' has no attribute ''mons'''

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agoinfrastructure-playbooks: Get Ceph info in check mode
Benoît Knecht [Mon, 26 Jul 2021 15:10:19 +0000 (17:10 +0200)]
infrastructure-playbooks: Get Ceph info in check mode

In the `set osd flags` block, run the Ceph commands that gather information
from the cluster (and don't make any changes to it) even when running in check
mode.

This allows the tasks that depend on the variables set by those tasks to
succeed in check mode.

Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
4 years agoceph-handler: Fix osd handler in check mode
Benoît Knecht [Mon, 26 Jul 2021 11:03:56 +0000 (13:03 +0200)]
ceph-handler: Fix osd handler in check mode

Run the Ceph commands that only gather information (without making any changes
to the cluster) when running Ansible in check mode.

This allows the tasks that depend on the variables set by those tasks to
succeed in check mode.

Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
4 years agoceph-defaults: add missing grafana dashboards
Dimitri Savineau [Tue, 27 Jul 2021 14:30:30 +0000 (10:30 -0400)]
ceph-defaults: add missing grafana dashboards

The radosgw-sync-overview and rbd-details grafana dashboards were missing
from the list.

Closes: #6758
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agoupdate: check the ceph release
Guillaume Abrioux [Mon, 26 Jul 2021 09:19:36 +0000 (11:19 +0200)]
update: check the ceph release

Check early which Ceph release is going to be deployed and fail if it
doesn't correspond to the ceph-ansible version being used.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1978643
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agoalertmanager: allow disable dashboard tls verify
Dimitri Savineau [Fri, 23 Jul 2021 14:27:55 +0000 (10:27 -0400)]
alertmanager: allow disable dashboard tls verify

When using self-signed/untrusted CA certificates, alertmanager displays
an error in its logs. This commit should make those messages
disappear.
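
In the alertmanager configuration this amounts to skipping TLS verification
on the webhook pointing at the dashboard; a sketch, where
`dashboard_backend_url` stands in for however the template resolves the
manager address:

```yaml
receivers:
  - name: 'ceph-dashboard'
    webhook_configs:
      - url: "{{ dashboard_backend_url }}/api/prometheus_receiver"
        http_config:
          tls_config:
            insecure_skip_verify: true
```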

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1936299
Co-authored-by: Guillaume Abrioux <gabrioux@redhat.com>
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agomultisite: use node fqdn for endpoints when https
Dimitri Savineau [Fri, 9 Jul 2021 21:24:09 +0000 (17:24 -0400)]
multisite: use node fqdn for endpoints when https

When the rgw_multisite_proto variable is set to https, we shouldn't use
the IP address in the zone endpoints list but the node FQDN, to match the
TLS certificate CN.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1965504
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agopurge: support osd_auto_discovery
Guillaume Abrioux [Wed, 21 Jul 2021 21:16:59 +0000 (23:16 +0200)]
purge: support osd_auto_discovery

This adds a task that zaps by osd id so we can support the scenario
where osds were deployed with `osd_auto_discovery` set to true.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1876860
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agopurge: merge playbooks
Guillaume Abrioux [Tue, 13 Jul 2021 16:48:42 +0000 (18:48 +0200)]
purge: merge playbooks

This refactor merges the two playbooks so we only have to maintain 1
playbook.
(Symlink the old purge-container-cluster.yml playbook for backward
 compatibility).

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agopurge: drop variables from 'hosts' sections
Guillaume Abrioux [Tue, 13 Jul 2021 15:11:22 +0000 (17:11 +0200)]
purge: drop variables from 'hosts' sections

Those variables are useless given that it is not possible to override them.
Let's replace them with the hardcoded name instead.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agocommon: remove unnecessary run_once statements
Dimitri Savineau [Tue, 20 Jul 2021 15:38:44 +0000 (11:38 -0400)]
common: remove unnecessary run_once statements

1303611 introduced tasks for disabling the pg_autoscaler on pools and
the balancer, but those tasks are already executed on the first monitor
node so we don't need to add the run_once statement.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agocommon: fix py2 pool_list from_json when skipped
Dimitri Savineau [Tue, 20 Jul 2021 19:53:48 +0000 (15:53 -0400)]
common: fix py2 pool_list from_json when skipped

When using python 2 and the task with a loop is skipped, an error is
generated.

Unexpected templating type error occurred on
({{ (pool_list.stdout | from_json)['pools'] }}): expected string or buffer

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agocommon: disable/enable pg_autoscaler
Guillaume Abrioux [Mon, 14 Jun 2021 16:01:41 +0000 (18:01 +0200)]
common: disable/enable pg_autoscaler

The PG autoscaler can disrupt the PG checks, so the idea here is to
disable it and re-enable it after the restart is done.
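
A sketch of the disable step, where `pool_names` is an illustrative variable
holding the list of pools:

```yaml
- name: disable the pg autoscaler on each pool
  command: "ceph osd pool set {{ item }} pg_autoscale_mode off"
  changed_when: false
  loop: "{{ pool_names }}"
```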

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agoceph-mgr: move mgr module list to common
Dimitri Savineau [Thu, 15 Jul 2021 19:38:07 +0000 (15:38 -0400)]
ceph-mgr: move mgr module list to common

Populating the ceph_mgr_modules list in the mgr_modules tasks file doesn't
make sense since that file is only executed if the list isn't empty or we're
using the dashboard.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agoceph-nfs: allow overriding NFS_CORE_PARAM
Dimitri Savineau [Thu, 15 Jul 2021 20:24:28 +0000 (16:24 -0400)]
ceph-nfs: allow overriding NFS_CORE_PARAM

We already have config override variables for existing blocks (like
ganesha_ceph_export_overrides, ganesha_log_overrides, etc...) or a
global one (ganesha_conf_overrides), but redefining the NFS_CORE_PARAM
block in that variable will erase all previous values (currently only
Bind_Addr).

ganesha_core_param_overrides: |
        Enable_UDP = false;
        NFS_Port = 2050;

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1941775
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agopurge: reindent playbook
Guillaume Abrioux [Tue, 13 Jul 2021 12:26:40 +0000 (14:26 +0200)]
purge: reindent playbook

This commit reindents the playbook.
Also improve readability by adding an extra line between plays.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agolib/ceph-volume: support zapping by osd_id
Guillaume Abrioux [Fri, 9 Jul 2021 09:07:08 +0000 (11:07 +0200)]
lib/ceph-volume: support zapping by osd_id

This commit adds the support for zapping an osd by osd_id in the
ceph_volume module.
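
Usage from a playbook then looks roughly like this sketch (`osd_ids_to_zap`
is an illustrative variable; the parameter name follows the commit):

```yaml
- name: zap OSDs by id
  ceph_volume:
    action: zap
    osd_id: "{{ item }}"
  loop: "{{ osd_ids_to_zap | default([]) }}"
```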

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agocephadm-adopt: enable osd memory autotune for HCI
Dimitri Savineau [Mon, 12 Jul 2021 12:58:42 +0000 (08:58 -0400)]
cephadm-adopt: enable osd memory autotune for HCI

This enables the osd_memory_target_autotune option on HCI environment.
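
A sketch of the kind of task this adds, assuming the `is_hci` flag used
elsewhere in ceph-ansible to mark hyperconverged setups:

```yaml
- name: enable osd memory autotune on HCI deployments
  command: "ceph config set osd osd_memory_target_autotune true"
  changed_when: false
  when: is_hci | bool
```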

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1973149
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agorolling_update: check quorum state before upgrade
Dimitri Savineau [Fri, 9 Jul 2021 20:09:49 +0000 (16:09 -0400)]
rolling_update: check quorum state before upgrade

If one of the monitors is out of the quorum then nothing prevents the upgrade
playbook from running.
We only check if we have at least three monitor nodes but we should also
check if those monitor nodes are correctly present in the quorum.
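
A sketch of such a check; the exact tasks in the playbook may differ:

```yaml
- name: get quorum status
  command: "ceph quorum_status --format json"
  register: quorum_status
  changed_when: false
  run_once: true

- name: fail when a monitor is out of the quorum
  fail:
    msg: "A monitor is out of the quorum, fix this before upgrading."
  when: groups[mon_group_name] | length != (quorum_status.stdout | from_json)['quorum_names'] | length
```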

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1952571
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agoupdate: fail the playbook if straw2 conversion failed
Guillaume Abrioux [Fri, 9 Jul 2021 14:29:09 +0000 (16:29 +0200)]
update: fail the playbook if straw2 conversion failed

It's better to fail the playbook so the user is aware the straw2
migration has failed.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agoupdate: followup on pr #6689
Guillaume Abrioux [Fri, 9 Jul 2021 07:19:52 +0000 (09:19 +0200)]
update: followup on pr #6689

Add the missing 'osd' command.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agoupdate: convert straw bucket
Guillaume Abrioux [Thu, 8 Jul 2021 19:57:13 +0000 (21:57 +0200)]
update: convert straw bucket

After an upgrade, the presence of straw buckets will produce the
following warning (HEALTH_WARN):

```
crush map has legacy tunables (require firefly, min is hammer)
```

Because straw buckets are a firefly feature, they need to be converted to
straw2.
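
The conversion itself is a single command which the playbook can run
(and fail on), e.g. this sketch:

```yaml
- name: convert legacy straw buckets to straw2
  command: "ceph osd crush set-all-straw-buckets-to-straw2"
  changed_when: false
```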

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967964
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agocephadm-adopt: set application on ganesha pool
Dimitri Savineau [Thu, 8 Jul 2021 17:50:11 +0000 (13:50 -0400)]
cephadm-adopt: set application on ganesha pool

Set the nfs application to the ganesha pool.
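
Which boils down to a task like this sketch (`nfs_ganesha_pool` is an
illustrative variable name for the ganesha pool):

```yaml
- name: set the nfs application on the ganesha pool
  command: "ceph osd pool application enable {{ nfs_ganesha_pool | default('nfs-ganesha') }} nfs"
  changed_when: false
```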

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1956840
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agodashboard: remove "certificate is valid for" error
Guillaume Abrioux [Tue, 6 Jul 2021 12:18:51 +0000 (14:18 +0200)]
dashboard: remove "certificate is valid for" error

When deploying the dashboard with ssl certificates generated by
ceph-ansible, we enforce the CN to 'ceph-dashboard', which can make
applications such as alertmanager complain as follows:

`err="Post https://mgr0:8443/api/prometheus_receiver: x509: certificate is valid for ceph-dashboard, not mgr0" context_err="context deadline exceeded"`

The idea here is to add alternative names matching all mgr/mon instances
in the certificate so this error won't appear in logs.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1978869
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agoworkflow: add dashboard playbook to ansible-lint
Dimitri Savineau [Mon, 5 Jul 2021 18:20:35 +0000 (14:20 -0400)]
workflow: add dashboard playbook to ansible-lint

The dashboard.yml playbook was missing from the ansible-lint workflow.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agoinfra: add playbook to purge dashboard/monitoring
Dimitri Savineau [Mon, 5 Jul 2021 18:07:05 +0000 (14:07 -0400)]
infra: add playbook to purge dashboard/monitoring

The dashboard/monitoring stack can be deployed via the dashboard_enabled
variable. But there's nothing similar if we want to remove only that part
and keep the ceph cluster up and running.
The current purge playbooks remove everything.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1786691
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agodashboard: support dedicated network for the dashboard
Guillaume Abrioux [Mon, 5 Jul 2021 15:49:26 +0000 (17:49 +0200)]
dashboard: support dedicated network for the dashboard

This introduces a new variable `dashboard_network` in order to support
deploying the dashboard on a different subnet.
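
Usage is a single group_vars entry, e.g.:

```yaml
dashboard_network: 192.168.42.0/24
```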

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1927574
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agoceph-crash: add install checkpoint
Dimitri Savineau [Mon, 5 Jul 2021 14:11:57 +0000 (10:11 -0400)]
ceph-crash: add install checkpoint

The ceph crash install checkpoint callback was missing in the main
playbooks.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agocephadm_adopt: add any_errors_fatal on play
Guillaume Abrioux [Mon, 28 Jun 2021 12:12:40 +0000 (14:12 +0200)]
cephadm_adopt: add any_errors_fatal on play

Add any_errors_fatal: true in cephadm-adopt playbook.
We should stop the playbook execution when a task throws an error.
Otherwise it can lead to unexpected behavior.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1976179
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agopurge: add monitoring group in final cleanup play
Guillaume Abrioux [Fri, 2 Jul 2021 12:57:52 +0000 (14:57 +0200)]
purge: add monitoring group in final cleanup play

This adds the monitoring group in the "final cleanup play" so any cid
files generated are properly removed when purging the cluster.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1974536
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agoprometheus: fix prometheus target url
Dimitri Savineau [Fri, 2 Jul 2021 13:13:43 +0000 (09:13 -0400)]
prometheus: fix prometheus target url

The prometheus service isn't binding on localhost.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1933560
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agoceph-facts: move device facts to its own file
Dimitri Savineau [Wed, 16 Dec 2020 19:18:08 +0000 (14:18 -0500)]
ceph-facts: move device facts to its own file

Instead of reusing the condition 'inventory_hostname in groups[osds]'
on each device facts task, we can move all the tasks into a
dedicated file and set the condition on the import_tasks statement.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agoceph-validate: check logical volumes
Dimitri Savineau [Tue, 15 Dec 2020 22:34:34 +0000 (17:34 -0500)]
ceph-validate: check logical volumes

We currently don't check if the logical volumes used in the lvm_volumes list
for either bluestore data/db/wal or filestore data/journal exist.
We only do this on raw devices for the batch scenario.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agoceph-validate: check db/journal/wal devices too
Dimitri Savineau [Tue, 15 Dec 2020 20:08:00 +0000 (15:08 -0500)]
ceph-validate: check db/journal/wal devices too

When using dedicated devices for the db/journal/wal objectstore with
ceph-volume lvm batch, we should also validate that those devices
exist and don't use a gpt partition table, in addition to the devices
and lvm_volume.data variables.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agoceph-validate: use root device from ansible_mounts
Dimitri Savineau [Tue, 15 Dec 2020 20:04:57 +0000 (15:04 -0500)]
ceph-validate: use root device from ansible_mounts

Instead of using the findmnt command to find the device associated with the
root mount point, we can use the ansible_mounts fact.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agoceph-validate: do not resolve devices
Dimitri Savineau [Tue, 15 Dec 2020 20:02:59 +0000 (15:02 -0500)]
ceph-validate: do not resolve devices

This is already done in the ceph-facts role.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agoceph-validate: check block presence first
Dimitri Savineau [Tue, 15 Dec 2020 20:00:28 +0000 (15:00 -0500)]
ceph-validate: check block presence first

Instead of doing two parted calls, we can check first if the device exists
and then test the partition table.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agoceph-validate: check devices from lvm_volumes
Dimitri Savineau [Tue, 15 Dec 2020 19:49:57 +0000 (14:49 -0500)]
ceph-validate: check devices from lvm_volumes

2888c08 introduced a regression as the check_devices tasks file was
only included based on the devices variable.
But that file also validates some devices from the lvm_volumes variable.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1906022
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agocontainer: set tcmalloc value by default
Dimitri Savineau [Tue, 29 Jun 2021 17:24:29 +0000 (13:24 -0400)]
container: set tcmalloc value by default

All ceph daemons need to have the TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES
environment variable set to 128MB by default in container setup.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1970913
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agorhcs: remove ISO install method
Dimitri Savineau [Mon, 28 Jun 2021 15:01:22 +0000 (11:01 -0400)]
rhcs: remove ISO install method

Starting with RHCS 5, there's no ISO available anymore.
This removes all ISO variables and the ceph_repository_type variable.

Closes: #6626
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agolibrary: flake8 ceph-ansible modules
Wong Hoi Sing Edison [Thu, 17 Jun 2021 16:18:07 +0000 (00:18 +0800)]
library: flake8 ceph-ansible modules

This commit ensures all ceph-ansible modules pass flake8 properly.

Signed-off-by: Wong Hoi Sing Edison <hswong3i@pantarei-design.com>
4 years agoworkflows: test against 1 python version only
Guillaume Abrioux [Tue, 29 Jun 2021 23:24:36 +0000 (01:24 +0200)]
workflows: test against 1 python version only

Let's drop py3.6 and py3.7

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agoworkflows: add signed-off check
Guillaume Abrioux [Tue, 29 Jun 2021 22:24:01 +0000 (00:24 +0200)]
workflows: add signed-off check

This adds a github workflow for checking the signed off line in commit
messages.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agoworkflow: add group_vars/defaults checks
Guillaume Abrioux [Tue, 29 Jun 2021 19:06:37 +0000 (21:06 +0200)]
workflow: add group_vars/defaults checks

Let's use a GitHub workflow for checking default values.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agoworkflow: add syntax check
Guillaume Abrioux [Tue, 29 Jun 2021 18:47:33 +0000 (20:47 +0200)]
workflow: add syntax check

This adds the ansible --syntax-check test in the ansible-lint workflow

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agotests: remove legacy file
Guillaume Abrioux [Mon, 28 Jun 2021 16:05:26 +0000 (18:05 +0200)]
tests: remove legacy file

This inventory isn't used anywhere.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agoshrink-mgr: modify existing mgr check
Guillaume Abrioux [Mon, 28 Jun 2021 18:16:03 +0000 (20:16 +0200)]
shrink-mgr: modify existing mgr check

Do not rely on the inventory aliases in order to check if the selected
manager to be removed is present.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967897
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agocephadm-adopt/rgw: add host target in svc_id
Guillaume Abrioux [Tue, 29 Jun 2021 12:02:45 +0000 (14:02 +0200)]
cephadm-adopt/rgw: add host target in svc_id

If multiple realms were deployed with several instances belonging to the same
realm and zone using the same port on different nodes, the service id
expected by cephadm will be the same and therefore only one service will
be deployed. We need to create a service called
`<node>.<realm>.<zone>.<port>` to be sure the service name will be unique
and correctly deployed on the expected node in order to preserve backward
compatibility with the rgw instances that were deployed with
ceph-ansible.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967455
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agoswitch2container: run ceph-validate role
Dimitri Savineau [Mon, 28 Jun 2021 14:46:40 +0000 (10:46 -0400)]
switch2container: run ceph-validate role

This adds the ceph-validate role before starting the switch to a containerized
deployment.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1968177
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
4 years agolibrary/ceph_key.py: rewrite for generate_ceph_cmd()
Wong Hoi Sing Edison [Thu, 17 Jun 2021 15:43:13 +0000 (23:43 +0800)]
library/ceph_key.py: rewrite for generate_ceph_cmd()

Also code lint with flake8

Signed-off-by: Wong Hoi Sing Edison <hswong3i@pantarei-design.com>
4 years agodashboard: Add new prometheus alert
Boris Ranto [Tue, 8 Jun 2021 07:43:23 +0000 (09:43 +0200)]
dashboard: Add new prometheus alert

It was requested for us to update our alerting definitions to include a
slow OSD Ops health check.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1951664
Signed-off-by: Boris Ranto <branto@redhat.com>
4 years agocephadm-adopt: support rgw multisite adoption
Guillaume Abrioux [Wed, 23 Jun 2021 13:24:23 +0000 (15:24 +0200)]
cephadm-adopt: support rgw multisite adoption

We need to support rgw multisite deployments.
This commit makes the adoption playbook support this kind of deployment.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967455
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agomultisite: fix bug during switch2containers
Guillaume Abrioux [Wed, 16 Jun 2021 07:39:18 +0000 (09:39 +0200)]
multisite: fix bug during switch2containers

When running the switch-to-containers playbook with multisite enabled,
the fact "rgw_instances" is only set for the node being processed
(serial: 1). The consequence is that the set_fact of
'rgw_instances_all' can't iterate over all rgw nodes in order to look up
each 'rgw_instances_host'.

Adding a condition checking whether hostvars[item]["rgw_instances_host"]
is defined fixes this issue.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967926
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agotests: Retry generating SSH vagrant config. Also add some debug.
David Galloway [Tue, 15 Jun 2021 20:17:19 +0000 (16:17 -0400)]
tests: Retry generating SSH vagrant config.  Also add some debug.

Signed-off-by: David Galloway <dgallowa@redhat.com>
4 years agonfs: do no copy client.bootstrap-rgw when using mds
Guillaume Abrioux [Tue, 15 Jun 2021 09:02:05 +0000 (11:02 +0200)]
nfs: do no copy client.bootstrap-rgw when using mds

There's no need to copy this keyring when using nfs with mds

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agocontainer: conditionally disable lvmetad
Guillaume Abrioux [Fri, 21 May 2021 11:25:25 +0000 (13:25 +0200)]
container: conditionally disable lvmetad

Enabling lvmetad in containerized deployments on el7-based OS might
cause issues.
This commit makes it possible to disable this service if needed.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1955040
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agoceph_key: handle error in a better way
Guillaume Abrioux [Mon, 7 Jun 2021 12:51:43 +0000 (14:51 +0200)]
ceph_key: handle error in a better way

When calling the `ceph_key` module with `state: info`, if the ceph
command called fails, the actual error is hidden by the module which
makes it pretty difficult to troubleshoot.

The current code always states that if rc is not equal to 0 the keyring
doesn't exist.

`state: info` should always return the actual rc, stdout and stderr.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1964889
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agocephadm-adopt: fix mgr placement hosts task
Guillaume Abrioux [Thu, 10 Jun 2021 13:12:41 +0000 (15:12 +0200)]
cephadm-adopt: fix mgr placement hosts task

When no `[mgrs]` group is defined in the inventory, mgr daemons are
implicitly collocated with monitors.
This task currently relies on the length of the mgr group in order to
tell cephadm to deploy mgr daemons.
If there's no `[mgrs]` group defined in the inventory, it will ask
cephadm to deploy 0 mgr daemons, which doesn't make sense and will throw
an error.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1970313
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
4 years agotests: allocate more memory for all_in_one job
Guillaume Abrioux [Fri, 11 Jun 2021 13:15:47 +0000 (15:15 +0200)]
tests: allocate more memory for all_in_one job

Since we fire up far fewer VMs than the other jobs, we can afford to allocate
more memory to this job.
Each VM hosts more daemons, so 1024MB can be too little.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>