git.apps.os.sepia.ceph.com Git - ceph-ansible.git/log
ceph-ansible.git
5 years agodocker2podman: set disk_list for non lvm scenario v3.2.47
Dimitri Savineau [Thu, 30 Jul 2020 13:54:33 +0000 (09:54 -0400)]
docker2podman: set disk_list for non lvm scenario

When using non-lvm scenarios (collocated or non-collocated), the
disk_list variable isn't set because this is normally done during the
ceph-osd role (start_osds.yml), which isn't executed in the
docker2podman playbook.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1862046
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
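
For illustration only, a minimal sketch of the kind of task this adds, assuming
ceph-disk based OSDs (task shape illustrative, not the exact task from the playbook):

```yaml
- name: get disk_list for ceph-disk based OSDs
  command: ceph-disk list --format json
  register: disk_list
  changed_when: false
  when: osd_scenario != 'lvm'
```
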
5 years agotests: pin pytest-forked to 1.2.0
Dimitri Savineau [Wed, 29 Jul 2020 14:12:57 +0000 (10:12 -0400)]
tests: pin pytest-forked to 1.2.0

The pytest-forked 1.3.0 release isn't compatible with the pytest release
we are using in that branch.

-----------------------
pytest-forked 1.3.0 requires pytest>=3.10, but you'll have pytest 3.6.1
which is incompatible.
-----------------------

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years agoREADME-MULTISITE: fix old conflict v3.2.46
Dimitri Savineau [Tue, 7 Jul 2020 19:10:28 +0000 (15:10 -0400)]
README-MULTISITE: fix old conflict

The automatic backport [1] done by mergify merged the backport PR
even though a conflict was present in the documentation.

[1] https://github.com/ceph/ceph-ansible/pull/3803

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years agofacts: explicitly disable facter and ohai
Dimitri Savineau [Tue, 30 Jun 2020 14:13:42 +0000 (10:13 -0400)]
facts: explicitly disable facter and ohai

By default, Ansible gathers facts from facter and ohai if they are installed on
the remote nodes. Since we don't need them, let's exclude these facts
from our fact gathering.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit c95adc564b8be6f9f9b1ba8568072daf39da7a2c)
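
A minimal sketch of how that exclusion can be expressed with the setup module
(play wiring omitted; not necessarily the exact task in the role):

```yaml
- name: gather facts without facter and ohai
  setup:
    gather_subset:
      - '!facter'
      - '!ohai'
```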

5 years agoceph-osd: exit gracefully when no data partition
Dimitri Savineau [Fri, 26 Jun 2020 17:49:17 +0000 (13:49 -0400)]
ceph-osd: exit gracefully when no data partition

When using collocated or non-collocated osd_scenarios (ceph-disk) and
trying to determine the OSD_DEVICE from the OSD_ID passed to the systemd
unit, we can be in a situation where the OSD hasn't been activated
but the OSD ID exists.
This means the data partition isn't in an activated state and the ceph-disk
list command won't show the OSD ID on the data partition.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1850377
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years agoinfra: introduce docker to podman playbook
Guillaume Abrioux [Mon, 6 Jul 2020 07:27:38 +0000 (09:27 +0200)]
infra: introduce docker to podman playbook

This isn't backported from master because there are too many changes
between stable-3.2 and newer branches.

NOTE:
This playbook *doesn't* add podman support in stable-3.2 at all.
This is a TripleO-dedicated playbook which is intended to be run
early during the FFU workflow in order to prepare the OS upgrade.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1853457
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
5 years agodoc: add a note about deprecated branches
Guillaume Abrioux [Fri, 3 Jul 2020 05:14:57 +0000 (07:14 +0200)]
doc: add a note about deprecated branches

This commit adds a note about the `stable-3.0` and `stable-3.1` branches,
which are deprecated and no longer maintained.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit bbe30bcc69ffcf117ee97e8500f5247b4542f186)

5 years agodoc: add a note about containerized deployments
Guillaume Abrioux [Fri, 3 Jul 2020 04:58:49 +0000 (06:58 +0200)]
doc: add a note about containerized deployments

This commit updates the documentation to add a note about containerized
deployments.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit e61488507b400b8fb2eedab99889871da27eef12)

5 years agodoc: fix warning treated as an error
Guillaume Abrioux [Fri, 3 Jul 2020 07:14:13 +0000 (09:14 +0200)]
doc: fix warning treated as an error

Typical error:

```
Warning, treated as error:
/home/jenkins-build/build/workspace/ceph-ansible-docs-pull-requests/docs/source/day-2/upgrade.rst:2:Title underline too short.
```

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 5c254861bdc146d3ef73dc99d6f52f7d03e22deb)

5 years agoswitch_to_containers: don't set noup flag v3.2.45
Guillaume Abrioux [Tue, 16 Jun 2020 15:43:13 +0000 (17:43 +0200)]
switch_to_containers: don't set noup flag

We shouldn't set this flag when running the switch_to_containers playbook.
Otherwise the playbook fails waiting for pgs to be clean.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1843569
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit b91d60d38456f9e316bee3daeb2f72dda0315cae)

5 years agoswitch-to-containers: set and unset osd flags
Guillaume Abrioux [Fri, 3 Apr 2020 13:36:23 +0000 (15:36 +0200)]
switch-to-containers: set and unset osd flags

The workflow in this playbook should be the same as in rolling_update:
we should first set the noout and nodeep-scrub flags before migrating the
first osd and unset the osd flags after the last osd is migrated.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 2cfaa056e020615bb99eb9db1520a977e5ac3ef4)
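
A hedged sketch of the "set" side of that flow (the "unset" side mirrors it after
the last OSD is migrated; variable names follow ceph-ansible defaults, the task
shape is illustrative):

```yaml
- name: set osd flags before migrating the first OSD
  command: "ceph --cluster {{ cluster }} osd set {{ item }}"
  run_once: true
  delegate_to: "{{ groups[mon_group_name][0] }}"
  with_items:
    - noout
    - nodeep-scrub
```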

5 years agoRevert "switch-to-containers: set and unset osd flags" v3.2.44
Guillaume Abrioux [Thu, 25 Jun 2020 15:08:10 +0000 (17:08 +0200)]
Revert "switch-to-containers: set and unset osd flags"

This reverts commit 5a4134098a6b0bdb513425f23c8bc8804cd75adc.

We need to provide a tag for RHCS 3.3z6 without this commit.

5 years agoRevert "switch_to_containers: don't set noup flag"
Guillaume Abrioux [Thu, 25 Jun 2020 15:07:25 +0000 (17:07 +0200)]
Revert "switch_to_containers: don't set noup flag"

This reverts commit b7ec4a995b9df0fb673160864887b973012b39bc.

We need to provide a tag for RHCS 3.3z6 without this commit.

5 years agodocker: Add Requires on docker service
Dimitri Savineau [Mon, 22 Jun 2020 17:58:10 +0000 (13:58 -0400)]
docker: Add Requires on docker service

When using the docker container engine, the systemd unit scripts only
declare a dependency on the docker daemon via the After parameter.
But if docker is restarted on a live system, the ceph systemd units
should wait for the docker daemon to be fully restarted.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1846830
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit bd22f1d1ec8c692848aee5337cd0d682a3a058b7)

5 years agodocs: Add upgrade operation.
Dimitri Savineau [Mon, 25 May 2020 13:44:12 +0000 (09:44 -0400)]
docs: Add upgrade operation.

This commit adds a chapter about the ceph upgrade process.

Closes: #5393
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit e41487dbce9dd5e9d754270bec426bea920406be)

5 years agoswitch_to_containers: don't set noup flag
Guillaume Abrioux [Tue, 16 Jun 2020 15:43:13 +0000 (17:43 +0200)]
switch_to_containers: don't set noup flag

We shouldn't set this flag when running the switch_to_containers playbook.
Otherwise the playbook fails waiting for pgs to be clean.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1843569
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit b91d60d38456f9e316bee3daeb2f72dda0315cae)

5 years agoswitch-to-containers: set and unset osd flags
Guillaume Abrioux [Fri, 3 Apr 2020 13:36:23 +0000 (15:36 +0200)]
switch-to-containers: set and unset osd flags

The workflow in this playbook should be the same as in rolling_update:
we should first set the noout and nodeep-scrub flags before migrating the
first osd and unset the osd flags after the last osd is migrated.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 2cfaa056e020615bb99eb9db1520a977e5ac3ef4)

5 years agoclients: move dummy container creation v3.2.43
Guillaume Abrioux [Mon, 27 Apr 2020 14:45:51 +0000 (16:45 +0200)]
clients: move dummy container creation

This commit moves the dummy container creation task right before the
cephx keys creation task so it isn't run at the wrong time.

Also, this commit makes the dummy container run forever.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1828105
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
5 years agotypo: updating type check on rc v3.2.42
ianwatsonrh [Tue, 21 Apr 2020 13:14:46 +0000 (14:14 +0100)]
typo: updating type check on rc

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1827271
Signed-off-by: ianwatsonrh <ianwatson@redhat.com>
(cherry picked from commit ccf6a7f153c7a36a700b914db956058f2408304b)

5 years agodoc: add day-2 operations documentation
Guillaume Abrioux [Tue, 21 Apr 2020 07:50:27 +0000 (09:50 +0200)]
doc: add day-2 operations documentation

This commit is the first of a series describing all day-2 operations
that are possible via ceph-ansible using a set of playbooks provided in
the `infrastructure-playbooks` directory.

Fixes: #5061
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 7e800303e9933cb61a5288b608e8d4f2cfdd7746)

5 years agolibrary/ceph_volume: look for error messages in stderr v3.2.41
Rishabh Dave [Tue, 7 Apr 2020 11:50:35 +0000 (17:20 +0530)]
library/ceph_volume: look for error messages in stderr

Error messages were moved from stdout to stderr here -
https://github.com/ceph/ceph/commit/b8d6dcbe9f803c96c0af68da54f1262e9b6a9e77#diff-20f7c578a4e69ec61a5869d706567a24R137.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1793542
Signed-off-by: Rishabh Dave <ridave@redhat.com>
(cherry picked from commit 4249d1e02d6da07466a4ddf1282cf4600a131773)

5 years agoceph-validate: update RHEL requirement for RHCS
Dimitri Savineau [Thu, 9 Apr 2020 18:00:52 +0000 (14:00 -0400)]
ceph-validate: update RHEL requirement for RHCS

We were not testing the right ansible_distribution fact value for the RHEL
distribution.
This commit also updates the minimal RHEL version supported by RHCS.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 5de74fe512575b2873b5863f5817f676954d3469)

5 years agoadd-osd: refact the playbook
Guillaume Abrioux [Tue, 7 Apr 2020 13:35:38 +0000 (15:35 +0200)]
add-osd: refact the playbook

There's no need to have two plays anymore since we now set/unset osd
flags in the `ceph-osd` role.

Also, this commit makes the `ceph-facts` role be called after
`ceph-defaults`.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
5 years agoadd-osd: fix fact gathering in add-osd
Guillaume Abrioux [Tue, 7 Apr 2020 12:57:47 +0000 (14:57 +0200)]
add-osd: fix fact gathering in add-osd

This commit makes this playbook gather facts from all nodes except
clients.
When collocating OSDs on other nodes it can fail like the following:

```
fatal: [vm252-11]: FAILED! => {
    "msg": "'ansible.vars.hostvars.HostVarsVars object' has no attribute 'ansible_hostname'"
}
```

In that case, a fact from an RGW node is referenced when rendering
`ceph.conf.j2`, but it fails because facts are gathered only from mon and
osd nodes.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1806765
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
5 years agoadd-osd: unset noup flag after last osd is deployed
Guillaume Abrioux [Tue, 7 Apr 2020 09:16:25 +0000 (11:16 +0200)]
add-osd: unset noup flag after last osd is deployed

This commit fixes a bug when using the `add-osd.yml` playbook.
The `noup` flag is set early but never gets unset before the "wait for pgs
clean" check, so the playbook always fails because the OSDs are never
seen UP.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1816023
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
5 years agoceph_key: fetch key when needed
Guillaume Abrioux [Fri, 3 Apr 2020 16:23:00 +0000 (18:23 +0200)]
ceph_key: fetch key when needed

Fetch the key when it is present in the cluster but not on the node.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit ccfa249919b338197daec353cb5d4e535b3fb734)

5 years agoceph_key: fix idempotency when no secret is passed
Guillaume Abrioux [Fri, 3 Apr 2020 08:24:32 +0000 (10:24 +0200)]
ceph_key: fix idempotency when no secret is passed

553584cbd0d014429e665f998776e8d198f72d2b introduced a regression: when no
secret is passed, it overwrites the secret each time the task is run.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 003defec0311af0f03da861f80d596852bdb9cf5)

5 years agoceph_key: remove 'update' state
Guillaume Abrioux [Tue, 17 Mar 2020 14:34:11 +0000 (15:34 +0100)]
ceph_key: remove 'update' state

With this change, the state `present` is enough to update a keyring.
If the keyring already exists, it will be updated if the caps or secret
passed to the module are different.
If the keyring doesn't exist, it will be created.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1808367
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 553584cbd0d014429e665f998776e8d198f72d2b)
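
A hedged usage sketch of the module with `state: present` (keyring name and caps
are illustrative):

```yaml
- name: create or update a client keyring
  ceph_key:
    name: client.app1          # illustrative keyring name
    state: present
    caps:
      mon: "allow r"
      osd: "allow rw pool=app1"
    cluster: ceph
```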

5 years agotests: add mgr nodes to shrink_mon inventory
Dimitri Savineau [Thu, 2 Apr 2020 18:31:38 +0000 (14:31 -0400)]
tests: add mgr nodes to shrink_mon inventory

Since 306ce82 we explicitly fail when there's no mgr node present in the
inventory.

fatal: [mon0]: FAILED! => {
    "changed": false
}

MSG:

Please add a mgr host to your inventory.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years agoosd: support changing default rule even when osd_crush_location isn't defined
Guillaume Abrioux [Thu, 12 Mar 2020 11:14:01 +0000 (12:14 +0100)]
osd: support changing default rule even when osd_crush_location isn't defined

Creating crush rules even with no crush hierarchy configuration is a
valid scenario, so we shouldn't be bound to the first task result (which
configures the crush hierarchy) to be able to add new crush rules.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1816989
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 5b0476385ccb00a9edb9092a183c18e2637afd5d)

5 years agoAdd site-container.yml symlink
Dimitri Savineau [Tue, 31 Mar 2020 20:53:17 +0000 (16:53 -0400)]
Add site-container.yml symlink

This adds a symlink to the site-docker.yml.sample playbook.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years agoswitch_to_containers: exclude clients nodes from facts gathering
Guillaume Abrioux [Mon, 9 Dec 2019 13:20:42 +0000 (14:20 +0100)]
switch_to_containers: exclude clients nodes from facts gathering

Just like site.yml and rolling_update, let's exclude client nodes from
the fact gathering.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 332c39376b45375a2c0566406b49896ae316a293)
(cherry picked from commit 5c3ba0787cf346c7e5eb5df74a1da4984c16e7aa)

5 years agomain: exclude client nodes from facts gathering when delegate_facts_host
Guillaume Abrioux [Wed, 2 Oct 2019 13:36:30 +0000 (15:36 +0200)]
main: exclude client nodes from facts gathering when delegate_facts_host

This commit excludes client nodes from facts gathering, they are not
needed and can speed up this task.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 865d2eac9ba81bdb9ecbd841e4b73608648dfae2)

5 years agoThe _filtered_clients list should intersect with ansible_play_batch
John Fulton [Thu, 6 Feb 2020 02:23:54 +0000 (21:23 -0500)]
The _filtered_clients list should intersect with ansible_play_batch

Client configuration with --limit fails without this patch
because certain tasks are only run on the first host in the
_filtered_clients list, and it's likely that first host will
not be included in what's specified with --limit. To fix this,
the _filtered_clients list should be built from all clients
in the inventory that are also in the running play.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1798781
Signed-off-by: John Fulton <fulton@redhat.com>
(cherry picked from commit e4bf4857f556465c60f89d32d5f2a92d25d5c90f)
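
A minimal sketch of that intersection (group variable names as in ceph-ansible
defaults; task shape illustrative):

```yaml
- name: build the filtered client list from clients in the running play
  set_fact:
    _filtered_clients: "{{ groups.get(client_group_name, []) | intersect(ansible_play_batch) }}"
```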

5 years agodefaults: remove legacy comment
Guillaume Abrioux [Thu, 26 Mar 2020 06:38:36 +0000 (07:38 +0100)]
defaults: remove legacy comment

This is no longer true, let's remove this comment given that this option
is not ignored in containerized deployments.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit e551b5ba1a65e653540b5b5c7cb3a4f5d32b2540)

5 years agodocker-common: remove legacy tasks for ntp configuration
Guillaume Abrioux [Fri, 19 Oct 2018 13:57:18 +0000 (15:57 +0200)]
docker-common: remove legacy tasks for ntp configuration

Those tasks aren't needed in docker-common since the introduction of
the `ceph-infra` role. They are duplicated tasks.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1810376
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit cd0195c5622ecc2d0001eb7c988d7b6a6fac1d5e)

5 years agotests: add inventory host for 4.0 upgrade job
Guillaume Abrioux [Wed, 4 Mar 2020 22:05:35 +0000 (23:05 +0100)]
tests: add inventory host for 4.0 upgrade job

This inventory is intended to be used in the upgrade scenario in
stable-4.0 branch.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
5 years agotests: modify add-osd job
Guillaume Abrioux [Tue, 3 Mar 2020 10:08:22 +0000 (11:08 +0100)]
tests: modify add-osd job

This commit modifies the way we test the add-osd scenario given that the
add-osd.yml playbook is broken at the moment.

As a workaround we can use the main playbook with `--limit` to achieve this
operation.

Note: This commit is intended to be reverted once we get a fix.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
5 years agotests: pg num should be a power of two number
Dimitri Savineau [Mon, 17 Feb 2020 18:46:09 +0000 (13:46 -0500)]
tests: pg num should be a power of two number

This patch changes the pg_num value of the rgw pools foo and bar to be
a power of two.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
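
In test group_vars terms this is simply (values illustrative):

```yaml
rgw_create_pools:
  foo:
    pg_num: 16   # a power of two
  bar:
    pg_num: 16
```
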
5 years agoceph-rgw: Fix customize pool size "when" condition
Benoît Knecht [Mon, 20 Jan 2020 10:36:27 +0000 (11:36 +0100)]
ceph-rgw: Fix customize pool size "when" condition

In 3c31b19ab39f297635c84edb9e8a5de6c2da7707, I fixed the `customize pool
size` task by replacing `item.size` with `item.value.size`. However, I
missed the same issue in the `when` condition.

Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
(cherry picked from commit 3842aa1a30277b5ea3acf78ac1aef37bad5afb14)

5 years agoceph-rgw: Fix custom pool size setting
Benoît Knecht [Mon, 30 Dec 2019 09:53:20 +0000 (10:53 +0100)]
ceph-rgw: Fix custom pool size setting

RadosGW pools can be created by setting

```yaml
rgw_create_pools:
  .rgw.root:
    pg_num: 512
    size: 2
```

for instance. However, doing so would create pools of size
`osd_pool_default_size` regardless of the `size` value. This was due to
the fact that the Ansible task used

```
{{ item.size | default(osd_pool_default_size) }}
```

as the pool size value, but `item.size` is always undefined; the
correct variable is `item.value.size`.

Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
(cherry picked from commit 3c31b19ab39f297635c84edb9e8a5de6c2da7707)
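
The corrected lookup, per the explanation above, therefore becomes (one-line sketch):

```yaml
size: "{{ item.value.size | default(osd_pool_default_size) }}"
```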

5 years agoceph-{mon,osd}: move default crush variables v3.2.40
Dimitri Savineau [Mon, 10 Feb 2020 18:43:31 +0000 (13:43 -0500)]
ceph-{mon,osd}: move default crush variables

Since ed36a11 we moved the crush rules creation code from the ceph-mon to
the ceph-osd role.
To keep backward compatibility we kept the possibility to set the
crush variables on the mons side, but we didn't move the default values.
As a result, when crush_rule_config is set to true and the default values
for crush_rules are expected, the crush rule creation task
fails.

"msg": "'ansible.vars.hostvars.HostVarsVars object' has no attribute
'crush_rules'"

This patch moves the default crush variables from the ceph-mon to the ceph-osd
role and also uses those default values when nothing is defined on the
mons side.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1798864
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 1fc6b337142efdc76c10340c076653d298e11c68)
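
For reference, a hedged sketch of the variables involved, following ceph-ansible's
crush_rules structure (rule name and values are illustrative):

```yaml
crush_rule_config: true
crush_rules:
  - name: replicated_rule   # illustrative
    root: default
    type: host
    default: true
```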

5 years agoceph-validate: fail if no mgr host is present
Dimitri Savineau [Tue, 11 Feb 2020 16:44:28 +0000 (11:44 -0500)]
ceph-validate: fail if no mgr host is present

We already stop the upgrade playbook (rolling_update.yml) if there's
no mgr node present so we should also do the same for initial
deployment.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1788644
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years agoceph-mon: use interactive session with aliases
Dimitri Savineau [Tue, 4 Feb 2020 16:31:47 +0000 (11:31 -0500)]
ceph-mon: use interactive session with aliases

When using ceph aliases with commands that require manual intervention
to stop (like Ctrl+C), the command keeps running inside the container.
To handle this, we should use the interactive session option (-it)
with the docker commands.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1797874
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years agoiscsi: Fix crashes during rolling update v3.2.39
Mike Christie [Tue, 28 Jan 2020 22:31:55 +0000 (16:31 -0600)]
iscsi: Fix crashes during rolling update

During a rolling update we will run the ceph iscsigw tasks that start
the daemons, then run the configure_iscsi.yml tasks which can create
iscsi objects like targets, disks, clients, etc. The problem is that
once the daemons are started they will accept configuration requests,
or may want to update the system themselves. Those operations can then
conflict with the configure_iscsi.yml tasks that set up objects and we
can end up with crashes due to the kernel being in an unsupported state.

This could also happen during creation, but is less likely because no
objects are set up yet, so there are no watchers or users accessing the
gws yet. The fix in this patch works for both update and initial setup.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1795806

Signed-off-by: Mike Christie <mchristi@redhat.com>
(cherry picked from commit 77f3b5d51b84a6338847c5f6a93f22a3a6a683d2)

5 years agotests: retry to fire up VMs on vagrant failure
Guillaume Abrioux [Tue, 2 Apr 2019 12:53:19 +0000 (14:53 +0200)]
tests: retry to fire up VMs on vagrant failure

Add a script to retry several times to fire up VMs to avoid vagrant
failures.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Co-authored-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit 1ecb3a9352d869d8fde694cefae9de8af8f6fee8)

5 years agoconfig: fix external client scenario
Guillaume Abrioux [Fri, 31 Jan 2020 10:51:54 +0000 (11:51 +0100)]
config: fix external client scenario

When no monitor group is present in the inventory, this task fails.
This affects only non-containerized deployments.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit e7bc0794054008ac2d6771f0d29d275493319665)

5 years agotests: add external_clients scenario
Guillaume Abrioux [Thu, 30 Jan 2020 12:00:47 +0000 (13:00 +0100)]
tests: add external_clients scenario

This commit adds a new 'external ceph clients' scenario.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 641729357e5c8bc4dbf90b5c4f7b20e1d3d51f7d)

5 years agovalidate: allow running ceph-ansible 3.2 against ansible 2.7
Guillaume Abrioux [Fri, 31 Jan 2020 09:23:20 +0000 (10:23 +0100)]
validate: allow running ceph-ansible 3.2 against ansible 2.7

This commit allows ceph-ansible 3.2 to be run against ansible 2.7

However, note that running stable-3.2 against ansible 2.7 doesn't get
any testing upstream and might break the playbook; only ansible 2.6 is
officially supported.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1781635
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
5 years agotests: add 'all_in_one' scenario
Guillaume Abrioux [Mon, 27 Jan 2020 14:49:30 +0000 (15:49 +0100)]
tests: add 'all_in_one' scenario

Add new scenario 'all_in_one' in order to catch more collocated related
issues.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 3e7dbb4b16dc7de3b46c18db4c00e7f2c2a50453)

5 years agoupdate: remove legacy tasks
Guillaume Abrioux [Tue, 28 Jan 2020 09:29:03 +0000 (10:29 +0100)]
update: remove legacy tasks

These tasks should have been removed with backport #4756

Note:
This should have been backported from master but it's not possible
because of too many changes between master and stable-3.2.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1740463
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
5 years agoceph-defaults: remove rgw from ceph_conf_overrides
Dimitri Savineau [Tue, 28 Jan 2020 15:27:34 +0000 (10:27 -0500)]
ceph-defaults: remove rgw from ceph_conf_overrides

The [rgw] section, whether set in the ceph.conf file or via the ceph_conf_overrides
variable, doesn't exist and has no effect.
To apply overrides to all radosgw instances we should use either the
[global] or [client] sections.
Overrides per radosgw instance should still use the
[client.rgw.{instance-name}] section.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1794552
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 2f07b8513158d3fc36c5b0d29386f46dd28b5efa)
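
A hedged example of where such overrides belong (option names, values and the
instance name are illustrative):

```yaml
ceph_conf_overrides:
  global:
    rgw_dynamic_resharding: false      # applies to every radosgw instance
  client.rgw.rgw0:                     # per-instance section; instance name illustrative
    rgw_frontends: "civetweb port=8080"
```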

5 years agodefaults: change monitor|radosgw_address default values
Guillaume Abrioux [Mon, 9 Dec 2019 17:23:15 +0000 (18:23 +0100)]
defaults: change monitor|radosgw_address default values

To avoid confusion, let's change the default value from `0.0.0.0` to
`x.x.x.x`.
Users might think setting `0.0.0.0` will make the daemon bind on all
interfaces.

Fixes: #4827
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit fc02fc98ebce0f99e81628e76ad28e7bf65435de)

5 years agotox: allow copy admin key for purge scenario
Dimitri Savineau [Mon, 13 Jan 2020 18:36:50 +0000 (13:36 -0500)]
tox: allow copy admin key for purge scenario

This is enabled in the group_vars/clients file but it's overridden in
extra vars by tox.
Let's do it like that for now.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years agotests: add coverage on purge playbook
Guillaume Abrioux [Thu, 7 Nov 2019 12:39:25 +0000 (13:39 +0100)]
tests: add coverage on purge playbook

This commit adds a playbook to be played before we run the purge playbook:
it first creates an rbd image, then maps an rbd device on client0 so the
purge playbook will try to unmap it.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit db77fbda15bf9f79f8122559b01b6625005ae29c)

5 years agopurge: use sysfs to unmap rbd devices
Guillaume Abrioux [Mon, 4 Nov 2019 14:59:39 +0000 (15:59 +0100)]
purge: use sysfs to unmap rbd devices

In a containerized context, using the binary provided in Atomic OS won't
work because it's an old version provided by ceph-common, based on
10.2.5.
Using a container could be an option, but for a large cluster with hundreds
of client nodes that would require pulling the image on each of them
just to unmap the rbd devices.

Let's use the sysfs method in order to avoid any issue related to the ceph
version that is shipped on the host.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1766064
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 3cfcc7a105156dfde65b23e9d8662cd848537094)
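
A minimal sketch of the sysfs approach as an Ansible task (the variable holding
the mapped devices is illustrative, and the sysfs node may be /sys/bus/rbd/remove
on kernels without single_major support):

```yaml
- name: unmap rbd devices via sysfs
  # writes the device id (e.g. 0 for /dev/rbd0) to the rbd removal node
  shell: echo "{{ item | regex_replace('^/dev/rbd', '') }}" > /sys/bus/rbd/remove_single_major
  with_items: "{{ mapped_rbd_devices | default([]) }}"   # e.g. ['/dev/rbd0']; variable illustrative
```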

5 years agoupdate: only run post osd upgrade play on 1 mon
Guillaume Abrioux [Mon, 18 Nov 2019 17:12:00 +0000 (18:12 +0100)]
update: only run post osd upgrade play on 1 mon

There is no need to run these tasks n times from each monitor.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit c878e99589bde0eecb8ac72a7ec8bc1f66403eeb)

5 years agoupdate: use flags noout and nodeep-scrub only
Guillaume Abrioux [Mon, 18 Nov 2019 16:59:56 +0000 (17:59 +0100)]
update: use flags noout and nodeep-scrub only

1. set noout and nodeep-scrub flags,
2. upgrade each OSD node, one by one, wait for active+clean pgs
3. after all osd nodes are upgraded, unset flags

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Co-authored-by: Rachana Patel <racpatel@redhat.com>
(cherry picked from commit 548db78b9535348dff616665be749503f80c4fca)

5 years agoceph-defaults: exclude rbd devices from discovery
Dimitri Savineau [Mon, 16 Dec 2019 20:12:47 +0000 (15:12 -0500)]
ceph-defaults: exclude rbd devices from discovery

The RBD devices aren't excluded from the devices list in the LVM auto
discovery scenario.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1783908
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 6f0556f01536932bdf47e8f1aab341b2c6761537)

5 years agoceph-osd: wait for all osds once
Dimitri Savineau [Wed, 27 Nov 2019 14:29:06 +0000 (09:29 -0500)]
ceph-osd: wait for all osds once

cf8c6a3 moves the 'wait for all osds' task from openstack_config to the
main tasks list.
But the openstack_config code was executed only on the last OSD node.
We don't need to do this check on all OSD nodes, so we need to set
run_once to true on that task.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 5bd1cf40eb5823aab3c4e16b60b37c30600f9283)

5 years agoceph-osd: wait for all osd before crush rules
Dimitri Savineau [Tue, 26 Nov 2019 16:09:11 +0000 (11:09 -0500)]
ceph-osd: wait for all osd before crush rules

When creating crush rules with the device class parameter we need to be sure
that all OSDs are up and running because the device class list is
populated with this information.
This is now enabled for all scenarios, not only openstack_config.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit cf8c6a384999be8caedce1121dfd57ae114d5bb6)

5 years agorolling_update: create crush rule after osd play
Dimitri Savineau [Mon, 18 Nov 2019 16:07:16 +0000 (11:07 -0500)]
rolling_update: create crush rule after osd play

When upgrading from jewel to luminous we can execute the crush rule tasks
only once the 'osd require-osd-release luminous' command has been run.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years agoceph-osd: add device class to crush rules
Dimitri Savineau [Thu, 31 Oct 2019 20:24:12 +0000 (16:24 -0400)]
ceph-osd: add device class to crush rules

This adds device class support to crush rules when using the class key
in the rule dict via the create-replicated sub command.
If the class key isn't specified then we use the create-simple sub
command for backward compatibility.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1636508
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit ef2cb99f739ade80e285d83050ac01184aafc753)
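
A hedged example of a rule using the class key (rule name and class are
illustrative):

```yaml
crush_rules:
  - name: ssd_rule      # illustrative
    root: default
    type: host
    class: ssd          # with 'class', create-replicated is used; without it, create-simple
```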

5 years agomove crush rule creation from mon to osd role
Dimitri Savineau [Thu, 31 Oct 2019 20:17:33 +0000 (16:17 -0400)]
move crush rule creation from mon to osd role

If we want to create crush rules with the create-replicated sub command
and device class then we need to have the OSD created before the crush
rules otherwise the device classes won't exist.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit ed36a11eabbdbb040652991300cdfc93d51ed491)

5 years agoceph-validate: add rbdmirror validation
Dimitri Savineau [Tue, 5 Nov 2019 16:53:22 +0000 (11:53 -0500)]
ceph-validate: add rbdmirror validation

When ceph_rbd_mirror_configure is set to true we need to ensure that
the required variables aren't empty.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1760553
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 4a065cebd70d259bfd59b6f5f9baa45d516a9c3a)

5 years agoswitch_to_containers: set GUID on lockbox part
Dimitri Savineau [Mon, 16 Dec 2019 21:41:20 +0000 (16:41 -0500)]
switch_to_containers: set GUID on lockbox part

The ceph lockbox partition (part number 5) used with non-lvm scenarios
and in non-containerized deployments doesn't have a valid PARTUUID.
The value is set to 00000000-0000-0000-0000-000000000000 for each OSD
device.

$ blkid -t PARTLABEL="ceph lockbox" -o value -s PARTUUID
00000000-0000-0000-0000-000000000000
00000000-0000-0000-0000-000000000000
00000000-0000-0000-0000-000000000000
00000000-0000-0000-0000-000000000000
00000000-0000-0000-0000-000000000000

When switching to containerized deployment we manually mount the lockbox
partition by using the PARTUUID.
Unfortunately, because we usually have multiple OSDs on the same
node, we can't have the right symlink in /dev/disk/by-partuuid because it
will point to only one partition.

/dev/disk/by-partuuid/00000000-0000-0000-0000-000000000000 -> ../../sdb5

After the switch_to_containers playbook, only one OSD will restart
correctly and the others will try to access the wrong device, causing
errors like 'xxxx is still in use'.

When deploying with containers and dmcrypt OSDs we force a PARTUUID
value during the ceph-disk prepare task.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1616159
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years agoceph-mds: allow directory fragmentation
Dimitri Savineau [Tue, 17 Dec 2019 18:24:06 +0000 (13:24 -0500)]
ceph-mds: allow directory fragmentation

We need to explicitly enable the allow_dirfrags flag on the cephfs pool
after upgrading to Luminous.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1776233
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years agofacts: avoid duplicated element in devices list
Guillaume Abrioux [Wed, 20 Nov 2019 10:02:49 +0000 (11:02 +0100)]
facts: avoid duplicated element in devices list

When using `osd_auto_discovery`, `devices` is built multiple times due
to multiple runs of the `ceph-facts` role. It ends up with duplicate
instances of the same device in the list.

Using `unique` filter when building the list fixes this issue.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 23b1f43897db0a03ef94f51e83ed3c562c4584d0)
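
The fix boils down to applying the `unique` filter when the list is built, e.g.
(one-line sketch):

```yaml
devices: "{{ devices | unique }}"
```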

5 years agotests: add shrink-osd-legacy testing
Guillaume Abrioux [Thu, 5 Dec 2019 17:42:24 +0000 (18:42 +0100)]
tests: add shrink-osd-legacy testing

This commit introduce back testing against ceph-disk deployed osds.

In stable-3.2, which is the most common version used by customers
(downstream PoV), a bunch of OSDs are still deployed using ceph-disk.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
5 years agoshrink-osd: support fqdn in inventory
Guillaume Abrioux [Mon, 9 Dec 2019 14:52:26 +0000 (15:52 +0100)]
shrink-osd: support fqdn in inventory

When using FQDNs in the inventory, that playbook fails because some tasks
use the result of ceph osd tree (which returns shortnames) to get
some data in hostvars[].

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1779021
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 6d9ca6b05b52694dec53ce61fdc16bb83c93979d)

5 years agoceph-iscsi: add ceph-iscsi stable repositories
Dimitri Savineau [Wed, 8 Jan 2020 15:54:42 +0000 (10:54 -0500)]
ceph-iscsi: add ceph-iscsi stable repositories

This commit adds support for the ceph-iscsi stable repository when
using the community ceph_repository instead of always using the devel
repositories.
We're still using the devel repositories for rtslib and tcmu-runner in
both cases (dev and community).

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years agoansible.cfg: do not enforce PreferredAuthentications v3.2.38
Guillaume Abrioux [Mon, 9 Dec 2019 16:10:11 +0000 (17:10 +0100)]
ansible.cfg: do not enforce PreferredAuthentications

There's no need to enforce PreferredAuthentications by default.
Users can still choose to override the ansible.cfg with any additional
parameter like this one to fit their infrastructure.

Fixes: #4826
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit d682412e2aa5eeb411cc0dff9a3ffef4b4aa8683)

5 years agoceph-osd: update systemd unit script
Dimitri Savineau [Mon, 9 Dec 2019 14:43:17 +0000 (09:43 -0500)]
ceph-osd: update systemd unit script

The systemd unit script wasn't updated with the new container name
format (without the hostname).
We now have the same start/stop docker commands for all scenarios.
During the device-to-id OSD migration we need to be sure that the
old containers with the hostname are stopped.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1780688
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years agotests: add lvm-auto-discovery scenario
Dimitri Savineau [Thu, 5 Dec 2019 16:01:55 +0000 (11:01 -0500)]
tests: add lvm-auto-discovery scenario

This adds the lvm-auto-discovery scenario.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years agoceph-defaults: exclude md devices from discovery
Dimitri Savineau [Wed, 4 Dec 2019 17:32:49 +0000 (12:32 -0500)]
ceph-defaults: exclude md devices from discovery

The md devices (RAID software) aren't excluded from the devices list in
the auto discovery scenario.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1764601
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 014f51c2a42e4922b43da07b97c4b810ede32200)

5 years agofacts: fix auto_discovery exclude
Guillaume Abrioux [Mon, 25 Feb 2019 23:07:01 +0000 (00:07 +0100)]
facts: fix auto_discovery exclude

The previous approach was wrong.
Checking if `item.key` is in `osd_auto_discovery_exclude` (`['dm-',
'loop']`) is incorrect because it will obviously not match. Therefore,
the condition will return `True` whatever device we are checking.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 8f420072727441fd6e6a22a15cc9034a3a678cae)

5 years agoosd: add possibility to exclude device in osd_auto_discovery
Guillaume Abrioux [Mon, 11 Feb 2019 12:52:37 +0000 (13:52 +0100)]
osd: add possibility to exclude device in osd_auto_discovery

Add a new `osd_auto_discovery_exclude` variable to give the possibility of
excluding some devices in the auto_discovery scenario.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 83d7ef777ec19eb5f96c553d869ec8ad90fd5061)
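
Usage sketch, with the `dm-`/`loop` prefixes mentioned in the related fix above
(values illustrative):

```yaml
osd_auto_discovery: true
osd_auto_discovery_exclude:
  - dm-
  - loop
```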

5 years agoceph-facts: generate devices when osd_auto_discovery is true
Andrew Schoen [Thu, 10 Jan 2019 17:26:19 +0000 (11:26 -0600)]
ceph-facts: generate devices when osd_auto_discovery is true

This task used to live in ceph-osd, but we need it defined here so that
ceph-config can use it when trying to determine the number of osds.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit 88eda479a9d6df6224a24f56dbcbdb204daab150)

5 years agotests: reduce max_mds from 3 to 2
Dimitri Savineau [Wed, 4 Dec 2019 17:12:05 +0000 (12:12 -0500)]
tests: reduce max_mds from 3 to 2

Having the max_mds value equal to the number of mds nodes generates a
warning in the ceph cluster status:

cluster:
id:     6d3e49a4-ab4d-4e03-a7d6-58913b8ec00a'
health: HEALTH_WARN'
        insufficient standby MDS daemons available'
(...)
services:
  mds:     cephfs:3 {0=mds1=up:active,1=mds0=up:active,2=mds2=up:active}'

Let's use 2 active and 1 standby mds.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 4a6d19dae296969954e5101e9bd53443fddde03d)
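
In group_vars terms this amounts to something like the following sketch (the
variable is named mds_max_mds in ceph-ansible defaults, but check your branch):

```yaml
# two active MDS daemons and one standby out of three mds nodes
mds_max_mds: 2
```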

5 years agoswitch_to_containers: fix umount ceph partitions v3.2.37
Dimitri Savineau [Wed, 27 Nov 2019 16:27:09 +0000 (11:27 -0500)]
switch_to_containers: fix umount ceph partitions

When a container is already running on a non-containerized node, the
umount ceph partition task is skipped.
This is due to the container ps command which always returns 0 even if
the filter matches nothing.

We should run the umount task when:
1/ the container command is failing (not installed) : rc != 0
2/ the container command reports running ceph-osd containers : rc == 0

Also we should not fail on the ceph directory listing.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1616159
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 39cfe0aa65ddd96458ba9d0a031d801efbb0d394)

5 years agotests: fix update scenario (container)
Guillaume Abrioux [Mon, 2 Dec 2019 14:16:39 +0000 (15:16 +0100)]
tests: fix update scenario (container)

The path to the inventory isn't correct because we are missing the variable
`CONTAINER_DIR` here.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
5 years agotests: revert vagrant_variable file name detection
Guillaume Abrioux [Mon, 25 Nov 2019 09:03:08 +0000 (10:03 +0100)]
tests: revert vagrant_variable file name detection

This commit reverts the following change:

https://github.com/ceph/ceph-ansible/pull/4510/commits/fcf181342a70b78a355d1c985699028012326b5f#diff-23b6f443c01ea2efcb4f36eedfea9089R7-R14

this is causing CI failures so this commit is intended to unlock the CI.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 5353ab8a23aee92ebab8146eeeeffcfcb25c0865)

5 years agorolling_update: don't enable ceph-mon unit
Dimitri Savineau [Wed, 20 Nov 2019 19:40:52 +0000 (14:40 -0500)]
rolling_update: don't enable ceph-mon unit

On non-containerized deployments the ceph-mon hostname/fqdn systemd
services are stopped at the beginning of the mon upgrade.
But the enabled parameter is set to true for both tasks, so even if we're
not using the fqdn it will enable the systemd unit based on it.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1649617
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years agocontainer: add always tag on gather fact tasks v3.2.36
Dimitri Savineau [Thu, 14 Nov 2019 14:29:29 +0000 (09:29 -0500)]
container: add always tag on gather fact tasks

If we execute the site-container.yml playbook with specific tags (like
ceph_update_config) then we need to be sure to gather the facts, otherwise
we will see errors like:

The task includes an option with an undefined variable. The error was:
'ansible_hostname' is undefined

This commit also adds the missing 'gather_facts: false' to the mons plays.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1754432
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit d7fd769b6ddcb63086cc414e00cce31433d56673)

5 years agoEvades validation of ceph_repository_type in containerized scenario
VasishtaShastry [Thu, 7 Nov 2019 12:00:21 +0000 (17:30 +0530)]
Evades validation of ceph_repository_type in containerized scenario
This will prevent site-docker.yml from failing with the configurations described in the docs.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1769760
Signed-off-by: VasishtaShastry <vipin.indiasmg@gmail.com>
Co-Authored-By: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 9a1f1626c3e57e64bdcd8d37ae600c21f3ea2a24)

5 years agoceph_key: restore file mode after a key is fetched
Guillaume Abrioux [Thu, 14 Nov 2019 09:30:34 +0000 (10:30 +0100)]
ceph_key: restore file mode after a key is fetched

When `import_key` is enabled and the key already exists, it is only
fetched using the ceph CLI. If the mode specified in the `ceph_key` task is
different from what the ceph CLI applies, the mode isn't restored because
we don't call `module.set_fs_attributes_if_different()` before
`module.exit_json(**result)`.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1734513
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit b717b5f736448903c69882392e0691fba60893aa)

5 years agoRemove outdated documentation
Noah Watkins [Thu, 15 Nov 2018 22:04:45 +0000 (14:04 -0800)]
Remove outdated documentation

Fixes BZ
https://bugzilla.redhat.com/show_bug.cgi?id=1640525

Signed-off-by: Noah Watkins <nwatkins@redhat.com>
5 years agomergify: remove mergify config on stable-3.2
Guillaume Abrioux [Thu, 7 Nov 2019 20:14:51 +0000 (21:14 +0100)]
mergify: remove mergify config on stable-3.2

This commit removes the mergify config on stable-3.2

At the moment there is no need to have a mergify config on this branch
given that we don't use it.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
5 years agoceph-osd: fix fs.aio-max-nr sysctl condition
Dimitri Savineau [Wed, 6 Nov 2019 15:15:53 +0000 (10:15 -0500)]
ceph-osd: fix fs.aio-max-nr sysctl condition

[1] introduced a regression on the fs.aio-max-nr sysctl value condition.
The enable key isn't a boolean but a string because the expression isn't
evaluated.
This string output "(osd_objectstore == 'bluestore')" is always true
because the item.enable condition only matches a non-empty string. So the
sysctl value was applied for both the filestore and bluestore backends.

[2] added the bool filter to the condition, but the filter always returns
false on a string so the sysctl wasn't applied at all.

This commit fixes the enable key value by evaluating the expression instead
of using the string.

[1] https://github.com/ceph/ceph-ansible/commit/08a2b58
[2] https://github.com/ceph/ceph-ansible/commit/ab54fe2

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit ece46d33be566994d6ce799fdc4547299b352429)
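
A hedged sketch of the idea (the dict structure is illustrative, not the exact
one in the role; the point is that 'enable' ends up as an evaluated boolean):

```yaml
- name: apply os tuning parameters
  sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
  with_items:
    # 'enable' is templated into a real boolean, not the literal string
    - { name: fs.aio-max-nr, value: '1048576', enable: "{{ osd_objectstore == 'bluestore' }}" }
  when: item.enable | bool
```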

5 years agoSupport comma-delimited subnets in firewall v3.2.35
Harald Jensås [Fri, 6 Sep 2019 14:24:30 +0000 (16:24 +0200)]
Support comma-delimited subnets in firewall

ceph.conf supports a comma-separated list of
subnet CIDRs for the public_network and the
cluster network. ceph-ansible should support
setting up the firewall for this configuration.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1767392
Closes: #4425
Related: #4333
https://docs.ceph.com/docs/nautilus/rados/configuration/network-config-ref/#network-config-settings

Signed-off-by: Harald Jensås <hjensas@redhat.com>
(cherry picked from commit d94229204d84fc27c5997d273dff577af0ab1684)
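
A hedged sketch of the resulting firewall task (zone, firewalld service and
subnet values are illustrative), assuming e.g.
public_network: "192.168.1.0/24,192.168.2.0/24":

```yaml
- name: open monitor ports for each public subnet
  firewalld:
    service: ceph-mon
    zone: public              # zone illustrative
    source: "{{ item }}"
    permanent: true
    immediate: true
    state: enabled
  with_items: "{{ public_network.split(',') }}"
```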

5 years agoceph-infra: Remove restart firewalld handler
Dimitri Savineau [Fri, 15 Feb 2019 20:58:04 +0000 (15:58 -0500)]
ceph-infra: Remove restart firewalld handler

There's no need to restart firewalld service when a new rule is
added due to the usage of the immediate flag.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit b7338d438a18b6de9083e4302d809ead7a46b6c6)

5 years agoceph-osd: Remove ulimit nofile on container start
Dimitri Savineau [Wed, 30 Oct 2019 15:45:44 +0000 (11:45 -0400)]
ceph-osd: Remove ulimit nofile on container start

Even if this improves ceph-disk/ceph-volume performance, it also
impacts the ceph-osd process.
The ceph-osd process shouldn't use the 1024:4096 value for the max open
files.
Remove the ulimit option from the container engine and do this kind
of change on the container side [1].

[1] https://github.com/ceph/ceph-container/pull/1497

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1702285
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 9a996aef7f79d5018e6999362fd025e9c04c9b3f)

5 years agoupdate: add default values when setting fact
Guillaume Abrioux [Tue, 29 Oct 2019 17:19:15 +0000 (18:19 +0100)]
update: add default values when setting fact

This commit adds a default value in the with_dict because when using
python 2.7, if a task using a with_dict has a condition, it is
evaluated anyway whereas in python 3 it isn't.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1766499
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
5 years agorolling_update: remove default filter on mds group
Dimitri Savineau [Fri, 25 Oct 2019 21:03:46 +0000 (17:03 -0400)]
rolling_update: remove default filter on mds group

There's no need to use the default filter on active/standby groups
because if the group doesn't exist then the play is just skipped.

Currently this generates warnings like:

[WARNING]: Could not match supplied host pattern, ignoring: |
[WARNING]: Could not match supplied host pattern, ignoring: default([])

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 2ca79fcc99bcff6f73478f11e67ba7edb178b029)

5 years agorolling_update: fix active mds host value
Dimitri Savineau [Fri, 25 Oct 2019 20:47:50 +0000 (16:47 -0400)]
rolling_update: fix active mds host value

The active mds host should be based on the inventory hostname and not on
the ansible hostname.
The value returned under the mdsmap structure is based on the OS hostname,
so we need to find the right node in the inventory with this value when
doing operations on inventory nodes.

Otherwise we could see errors like:

The task includes an option with an undefined variable. The error was:
"hostvars[foobar]" is undefined

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit f1f2352c7974f0839b5e74cb23849e943a1131c6)

5 years agoupdate: skip mds deactivation when no mds in inventory v3.2.34
Guillaume Abrioux [Wed, 23 Oct 2019 13:48:32 +0000 (15:48 +0200)]
update: skip mds deactivation when no mds in inventory

Let's skip this part of the code if there's no mds node in the
inventory.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 5ec906c3af2d188de23cc354ecb9ddcfc0af9d90)

5 years agoopenstack_config: fix docker exec command v3.2.33
Dimitri Savineau [Thu, 24 Oct 2019 14:36:58 +0000 (10:36 -0400)]
openstack_config: fix docker exec command

container_exec_cmd should be replaced by docker_exec_cmd.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1765110
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years agoupdate: follow new recommandation to upgrade mds cluster v3.2.32
Guillaume Abrioux [Thu, 10 Oct 2019 17:09:49 +0000 (19:09 +0200)]
update: follow new recommandation to upgrade mds cluster

Refactor the mds cluster upgrade code in order to follow the documented
recommendation.
See: https://github.com/ceph/ceph/blob/luminous/doc/cephfs/upgrading.rst

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1569689
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 71cebf80a623388c64dfcb190133eb5f54a524f9)

5 years agotests: fix the size on the second data LV
Dimitri Savineau [Thu, 17 Oct 2019 18:28:45 +0000 (14:28 -0400)]
tests: fix the size on the second data LV

The commit replaces the pv/vg/lv commands run via the ansible command
module with the lvg and lvol modules.
This also fixes the size of the second data LV because we were only using
50% of the remaining space instead of 100%.

With a 50G device, the result was:
  - data-lv1 was 25G
  - data-lv2 was 12.5G
Instead of:
  - data-lv1 was 25G
  - data-lv2 was 25G

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 2c03c6fcd33ba6b3a3daf73ecd011e87ab41c0a0)
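
A minimal sketch of the lvol usage that grabs all remaining space (VG/LV names
are illustrative, taken from the test layout):

```yaml
- name: create the second data LV with all remaining space
  lvol:
    vg: test_group    # names illustrative
    lv: data-lv2
    size: 100%FREE
```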

5 years agocommon: do not override ceph_release when using custom repo
Guillaume Abrioux [Thu, 17 Oct 2019 15:05:53 +0000 (17:05 +0200)]
common: do not override ceph_release when using custom repo

Otherwise it fails like the following:

```
TASK [ceph-mds : allow multimds] **************************************************************************************************************************************************
Monday 22 July 2019  16:37:38 +0800 (0:00:03.269)       0:13:25.651 ***********
fatal: [rhel7u6clone1]: FAILED! => {"msg": "The conditional check 'ceph_release_num[ceph_release] == ceph_release_num.luminous' failed. The error was: error while evaluating conditional (ceph_release_num[ceph_release] == ceph_release_num.luminous): 'dict object' has no attribute u'dummy'\n\nThe error appears to have been in '/usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml': line 43, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: allow multimds\n  ^ here\n"}
```

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1645379
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 4e9504c939a1daddceef3806f7165844952a6618)