git.apps.os.sepia.ceph.com Git - ceph-ansible.git/log
5 years ago  tests: retry to fire up VMs on vagrant failure
Guillaume Abrioux [Tue, 2 Apr 2019 12:53:19 +0000 (14:53 +0200)]
tests: retry to fire up VMs on vagrant failure

Add a script to retry several times to fire up VMs to avoid vagrant
failures.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Co-authored-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit 1ecb3a9352d869d8fde694cefae9de8af8f6fee8)

5 years ago  config: fix external client scenario
Guillaume Abrioux [Fri, 31 Jan 2020 10:51:54 +0000 (11:51 +0100)]
config: fix external client scenario

When no monitor group is present in the inventory, this task fails.
This affects only non-containerized deployments.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit e7bc0794054008ac2d6771f0d29d275493319665)

5 years ago  tests: add external_clients scenario
Guillaume Abrioux [Thu, 30 Jan 2020 12:00:47 +0000 (13:00 +0100)]
tests: add external_clients scenario

This commit adds a new 'external ceph clients' scenario.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 641729357e5c8bc4dbf90b5c4f7b20e1d3d51f7d)

5 years ago  validate: allow running ceph-ansible 3.2 against ansible 2.7
Guillaume Abrioux [Fri, 31 Jan 2020 09:23:20 +0000 (10:23 +0100)]
validate: allow running ceph-ansible 3.2 against ansible 2.7

This commit allows ceph-ansible 3.2 to be run against ansible 2.7.

However, note that running stable-3.2 against ansible 2.7 doesn't get
any testing upstream and might break the playbooks; only ansible 2.6 is
officially supported.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1781635
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
5 years ago  tests: add 'all_in_one' scenario
Guillaume Abrioux [Mon, 27 Jan 2020 14:49:30 +0000 (15:49 +0100)]
tests: add 'all_in_one' scenario

Add a new scenario 'all_in_one' in order to catch more
collocation-related issues.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 3e7dbb4b16dc7de3b46c18db4c00e7f2c2a50453)

5 years ago  update: remove legacy tasks
Guillaume Abrioux [Tue, 28 Jan 2020 09:29:03 +0000 (10:29 +0100)]
update: remove legacy tasks

These tasks should have been removed with backport #4756

Note:
This should have been backported from master but it's not possible
because of too many changes between master and stable-3.2.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1740463
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
5 years ago  ceph-defaults: remove rgw from ceph_conf_overrides
Dimitri Savineau [Tue, 28 Jan 2020 15:27:34 +0000 (10:27 -0500)]
ceph-defaults: remove rgw from ceph_conf_overrides

The [rgw] section, whether in the ceph.conf file or via the
ceph_conf_overrides variable, isn't a valid section and has no effect.
To apply overrides to all radosgw instances we should use either the
[global] or [client] sections.
Overrides per radosgw instance should still use the
[client.rgw.{instance-name}] section.
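For illustration, a hedged sketch of overrides honoring this advice
(section names come from the message above; the keys and values are
hypothetical examples, not the role's defaults):

```
ceph_conf_overrides:
  global:
    # applies to every daemon, radosgw instances included
    rgw_dynamic_resharding: false
  client.rgw.rgw0:
    # applies to the 'rgw0' radosgw instance only
    rgw_frontends: "civetweb port=8080"
```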

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1794552
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 2f07b8513158d3fc36c5b0d29386f46dd28b5efa)

5 years ago  defaults: change monitor|radosgw_address default values
Guillaume Abrioux [Mon, 9 Dec 2019 17:23:15 +0000 (18:23 +0100)]
defaults: change monitor|radosgw_address default values

Users might think setting `0.0.0.0` will make the daemon bind on all
interfaces; to avoid that confusion, let's change the default value
from `0.0.0.0` to `x.x.x.x`.

Fixes: #4827
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit fc02fc98ebce0f99e81628e76ad28e7bf65435de)

5 years ago  tox: allow copy admin key for purge scenario
Dimitri Savineau [Mon, 13 Jan 2020 18:36:50 +0000 (13:36 -0500)]
tox: allow copy admin key for purge scenario

This is enabled in the group_vars/clients file but it's overridden in
extra vars by tox.
Let's do it like that for now.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years ago  tests: add coverage on purge playbook
Guillaume Abrioux [Thu, 7 Nov 2019 12:39:25 +0000 (13:39 +0100)]
tests: add coverage on purge playbook

This commit adds a playbook to be played before we run the purge
playbook: it first creates an rbd image, then maps an rbd device on
client0, so the purge playbook will have to unmap it.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit db77fbda15bf9f79f8122559b01b6625005ae29c)

5 years ago  purge: use sysfs to unmap rbd devices
Guillaume Abrioux [Mon, 4 Nov 2019 14:59:39 +0000 (15:59 +0100)]
purge: use sysfs to unmap rbd devices

In a containerized context, using the binary provided by the Atomic OS
won't work because it's an old version provided by ceph-common, based
on 10.2.5.
Using a container could be an option, but for a large cluster with
hundreds of client nodes that would require pulling the image on each
of them just to unmap the rbd devices.

Let's use the sysfs method in order to avoid any issue related to the
ceph version that is shipped on the host.
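For illustration, a minimal sketch of the sysfs method (the
`rbd_devices` list is hypothetical; the real playbook task differs):

```
- name: unmap rbd devices through sysfs
  # /dev/rbd0 -> device id 0, written to /sys/bus/rbd/remove
  shell: echo {{ item | regex_replace('^/dev/rbd', '') }} > /sys/bus/rbd/remove
  with_items: "{{ rbd_devices | default([]) }}"
```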

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1766064
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 3cfcc7a105156dfde65b23e9d8662cd848537094)

5 years ago  update: only run post osd upgrade play on 1 mon
Guillaume Abrioux [Mon, 18 Nov 2019 17:12:00 +0000 (18:12 +0100)]
update: only run post osd upgrade play on 1 mon

There is no need to run these tasks n times from each monitor.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit c878e99589bde0eecb8ac72a7ec8bc1f66403eeb)

5 years ago  update: use flags noout and nodeep-scrub only
Guillaume Abrioux [Mon, 18 Nov 2019 16:59:56 +0000 (17:59 +0100)]
update: use flags noout and nodeep-scrub only

1. set noout and nodeep-scrub flags,
2. upgrade each OSD node, one by one, wait for active+clean pgs
3. after all osd nodes are upgraded, unset flags
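
A minimal sketch of steps 1 and 3, assuming the usual `cluster` and
`mon_group_name` variables from the ceph-ansible defaults:

```
- name: set osd flags before the upgrade
  command: ceph --cluster {{ cluster }} osd set {{ item }}
  with_items: [noout, nodeep-scrub]
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true

- name: unset osd flags after the upgrade
  command: ceph --cluster {{ cluster }} osd unset {{ item }}
  with_items: [noout, nodeep-scrub]
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true
```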

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Co-authored-by: Rachana Patel <racpatel@redhat.com>
(cherry picked from commit 548db78b9535348dff616665be749503f80c4fca)

5 years ago  ceph-defaults: exclude rbd devices from discovery
Dimitri Savineau [Mon, 16 Dec 2019 20:12:47 +0000 (15:12 -0500)]
ceph-defaults: exclude rbd devices from discovery

The RBD devices aren't excluded from the devices list in the LVM auto
discovery scenario.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1783908
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 6f0556f01536932bdf47e8f1aab341b2c6761537)

5 years ago  ceph-osd: wait for all osds once
Dimitri Savineau [Wed, 27 Nov 2019 14:29:06 +0000 (09:29 -0500)]
ceph-osd: wait for all osds once

cf8c6a3 moves the 'wait for all osds' task from openstack_config to the
main tasks list.
But the openstack_config code was executed only on the last OSD node.
We don't need to run this check on every OSD node, so we set run_once
to true on that task.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 5bd1cf40eb5823aab3c4e16b60b37c30600f9283)

5 years ago  ceph-osd: wait for all osd before crush rules
Dimitri Savineau [Tue, 26 Nov 2019 16:09:11 +0000 (11:09 -0500)]
ceph-osd: wait for all osd before crush rules

When creating crush rules with the device class parameter we need to be
sure that all OSDs are up and running because the device class list is
populated with this information.
This is now enabled for all scenarios, not only openstack_config.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit cf8c6a384999be8caedce1121dfd57ae114d5bb6)

5 years ago  rolling_update: create crush rule after osd play
Dimitri Savineau [Mon, 18 Nov 2019 16:07:16 +0000 (11:07 -0500)]
rolling_update: create crush rule after osd play

When upgrading from jewel to luminous we can execute the crush rule
tasks only once the 'osd require-osd-release luminous' command has been
run.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years ago  ceph-osd: add device class to crush rules
Dimitri Savineau [Thu, 31 Oct 2019 20:24:12 +0000 (16:24 -0400)]
ceph-osd: add device class to crush rules

This adds device class support to crush rules when using the class key
in the rule dict via the create-replicated sub command.
If the class key isn't specified then we use the create-simple sub
command for backward compatibility.
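
A hedged sketch of the rule dict this change consumes (the exact schema
in the role may differ):

```
crush_rules:
  - name: replicated_hdd
    root: default
    type: host
    class: hdd    # class key present -> 'osd crush rule create-replicated'
  - name: replicated_all
    root: default
    type: host    # no class key -> 'osd crush rule create-simple'
```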

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1636508
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit ef2cb99f739ade80e285d83050ac01184aafc753)

5 years ago  move crush rule creation from mon to osd role
Dimitri Savineau [Thu, 31 Oct 2019 20:17:33 +0000 (16:17 -0400)]
move crush rule creation from mon to osd role

If we want to create crush rules with the create-replicated sub command
and device class then we need to have the OSD created before the crush
rules otherwise the device classes won't exist.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit ed36a11eabbdbb040652991300cdfc93d51ed491)

5 years ago  ceph-validate: add rbdmirror validation
Dimitri Savineau [Tue, 5 Nov 2019 16:53:22 +0000 (11:53 -0500)]
ceph-validate: add rbdmirror validation

When ceph_rbd_mirror_configure is set to true we need to ensure that
the required variables aren't empty.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1760553
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 4a065cebd70d259bfd59b6f5f9baa45d516a9c3a)

5 years ago  switch_to_containers: set GUID on lockbox part
Dimitri Savineau [Mon, 16 Dec 2019 21:41:20 +0000 (16:41 -0500)]
switch_to_containers: set GUID on lockbox part

The ceph lockbox partition (part number 5) used with non-lvm scenarios
and in non-containerized deployments doesn't have a valid PARTUUID.
The value is set to 00000000-0000-0000-0000-000000000000 for each OSD
device.

$ blkid -t PARTLABEL="ceph lockbox" -o value -s PARTUUID
00000000-0000-0000-0000-000000000000
00000000-0000-0000-0000-000000000000
00000000-0000-0000-0000-000000000000
00000000-0000-0000-0000-000000000000
00000000-0000-0000-0000-000000000000

When switching to containerized deployment we manually mount the lockbox
partition by using the PARTUUID.
Unfortunately, because we usually have multiple OSDs on the same node,
we can't have the right symlink in /dev/disk/by-partuuid because it
will point to only one partition.

/dev/disk/by-partuuid/00000000-0000-0000-0000-000000000000 -> ../../sdb5

After the switch_to_containers playbook, only one OSD will restart
correctly and the others will try to access the wrong device, causing
errors like 'xxxx is still in use'.

When deploying with containers and dmcrypt OSDs we force a PARTUUID
value during the ceph-disk prepare task.
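
A hedged sketch of forcing such a value with sgdisk (the uuid
generation and device list here are illustrative, not the exact task):

```
- name: force a unique PARTUUID on the ceph lockbox partition (part 5)
  # to_uuid derives a deterministic, per-device uuid from the device path
  command: sgdisk --partition-guid=5:{{ item | to_uuid }} {{ item }}
  with_items: "{{ devices }}"
```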

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1616159
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years ago  ceph-mds: allow directory fragmentation
Dimitri Savineau [Tue, 17 Dec 2019 18:24:06 +0000 (13:24 -0500)]
ceph-mds: allow directory fragmentation

We need to explicitly enable the allow_dirfrags flag on the cephfs
filesystem after upgrading to Luminous.
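
A minimal sketch of the underlying command, assuming the `cluster` and
`cephfs` variables from the ceph-ansible defaults and the luminous
`allow_dirfrags` fs setting:

```
- name: allow directory fragmentation on the cephfs filesystem
  command: ceph --cluster {{ cluster }} fs set {{ cephfs }} allow_dirfrags true
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true
```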

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1776233
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years ago  facts: avoid duplicated element in devices list
Guillaume Abrioux [Wed, 20 Nov 2019 10:02:49 +0000 (11:02 +0100)]
facts: avoid duplicated element in devices list

When using `osd_auto_discovery`, `devices` is built multiple times due
to multiple runs of the `ceph-facts` role. It ends up with duplicate
instances of the same device in the list.

Using the `unique` filter when building the list fixes this issue.
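
A minimal sketch of the pattern, assuming devices are collected from
the `ansible_devices` facts (the real set_fact in the role differs):

```
- name: build devices list without duplicates
  set_fact:
    devices: "{{ ((devices | default([])) + ['/dev/' + item.key]) | unique }}"
  with_dict: "{{ ansible_devices }}"
```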

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 23b1f43897db0a03ef94f51e83ed3c562c4584d0)

5 years ago  tests: add shrink-osd-legacy testing
Guillaume Abrioux [Thu, 5 Dec 2019 17:42:24 +0000 (18:42 +0100)]
tests: add shrink-osd-legacy testing

This commit reintroduces testing against ceph-disk deployed osds.

In stable-3.2, which is the most common version used by customers
(downstream PoV), a bunch of OSDs are still deployed using ceph-disk.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
5 years ago  shrink-osd: support fqdn in inventory
Guillaume Abrioux [Mon, 9 Dec 2019 14:52:26 +0000 (15:52 +0100)]
shrink-osd: support fqdn in inventory

When using fqdn in the inventory, that playbook fails because some
tasks use the result of ceph osd tree (which returns short names) to
look up data in hostvars[].

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1779021
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 6d9ca6b05b52694dec53ce61fdc16bb83c93979d)

5 years ago  ceph-iscsi: add ceph-iscsi stable repositories
Dimitri Savineau [Wed, 8 Jan 2020 15:54:42 +0000 (10:54 -0500)]
ceph-iscsi: add ceph-iscsi stable repositories

This commit adds support for the ceph-iscsi stable repository when
ceph_repository is set to 'community' instead of always using the devel
repositories.
We're still using the devel repositories for rtslib and tcmu-runner in
both cases (dev and community).

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years ago  ansible.cfg: do not enforce PreferredAuthentications v3.2.38
Guillaume Abrioux [Mon, 9 Dec 2019 16:10:11 +0000 (17:10 +0100)]
ansible.cfg: do not enforce PreferredAuthentications

There's no need to enforce PreferredAuthentications by default.
Users can still choose to override the ansible.cfg with any additional
parameter like this one to fit their infrastructure.

Fixes: #4826
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit d682412e2aa5eeb411cc0dff9a3ffef4b4aa8683)

5 years ago  ceph-osd: update systemd unit script
Dimitri Savineau [Mon, 9 Dec 2019 14:43:17 +0000 (09:43 -0500)]
ceph-osd: update systemd unit script

The systemd unit script wasn't updated with the new container name
format (without the hostname).
We now have the same start/stop docker commands for all scenarios.
During the device to id OSD migration we need to be sure that the
old containers with the hostname in their name are stopped.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1780688
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years ago  tests: add lvm-auto-discovery scenario
Dimitri Savineau [Thu, 5 Dec 2019 16:01:55 +0000 (11:01 -0500)]
tests: add lvm-auto-discovery scenario

This adds the lvm-auto-discovery scenario.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years ago  ceph-defaults: exclude md devices from discovery
Dimitri Savineau [Wed, 4 Dec 2019 17:32:49 +0000 (12:32 -0500)]
ceph-defaults: exclude md devices from discovery

The md devices (RAID software) aren't excluded from the devices list in
the auto discovery scenario.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1764601
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 014f51c2a42e4922b43da07b97c4b810ede32200)

5 years ago  facts: fix auto_discovery exclude
Guillaume Abrioux [Mon, 25 Feb 2019 23:07:01 +0000 (00:07 +0100)]
facts: fix auto_discovery exclude

The previous approach was wrong.
Checking if `item.key` is in `osd_auto_discovery_exclude` (`['dm-',
'loop']`) is incorrect because it will obviously never match. Therefore,
the condition returns `True` whatever device we are checking.
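
A hedged sketch of a correct test, using the `match` test instead of
the `in` lookup:

```
- name: add devices not matching an excluded pattern
  set_fact:
    devices: "{{ (devices | default([])) + ['/dev/' + item.key] }}"
  with_dict: "{{ ansible_devices }}"
  # e.g. 'dm-0' is match('(dm-|loop)') -> excluded
  when: item.key is not match('(' ~ (osd_auto_discovery_exclude | join('|')) ~ ')')
```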

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 8f420072727441fd6e6a22a15cc9034a3a678cae)

5 years ago  osd: add possibility to exclude device in osd_auto_discovery
Guillaume Abrioux [Mon, 11 Feb 2019 12:52:37 +0000 (13:52 +0100)]
osd: add possibility to exclude device in osd_auto_discovery

Add a new `osd_auto_discovery_exclude` variable to give the possibility
of excluding some devices in the auto_discovery scenario.
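
For illustration, enabling auto discovery while excluding device-mapper
and loop devices (the default exclude patterns named in the fix entry
above):

```
osd_auto_discovery: true
osd_auto_discovery_exclude:
  - dm-
  - loop
```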

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 83d7ef777ec19eb5f96c553d869ec8ad90fd5061)

5 years ago  ceph-facts: generate devices when osd_auto_discovery is true
Andrew Schoen [Thu, 10 Jan 2019 17:26:19 +0000 (11:26 -0600)]
ceph-facts: generate devices when osd_auto_discovery is true

This task used to live in ceph-osd, but we need it defined here so that
ceph-config can use it when trying to determine the number of osds.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit 88eda479a9d6df6224a24f56dbcbdb204daab150)

5 years ago  tests: reduce max_mds from 3 to 2
Dimitri Savineau [Wed, 4 Dec 2019 17:12:05 +0000 (12:12 -0500)]
tests: reduce max_mds from 3 to 2

Having the max_mds value equal to the number of mds nodes generates a
warning in the ceph cluster status:

  cluster:
    id:     6d3e49a4-ab4d-4e03-a7d6-58913b8ec00a
    health: HEALTH_WARN
            insufficient standby MDS daemons available
  (...)
  services:
    mds:     cephfs:3 {0=mds1=up:active,1=mds0=up:active,2=mds2=up:active}

Let's use 2 active and 1 standby mds.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 4a6d19dae296969954e5101e9bd53443fddde03d)

5 years ago  switch_to_containers: fix umount ceph partitions v3.2.37
Dimitri Savineau [Wed, 27 Nov 2019 16:27:09 +0000 (11:27 -0500)]
switch_to_containers: fix umount ceph partitions

When a container is already running on a non-containerized node, the
umount ceph partition task is skipped.
This is due to the container ps command which always returns 0 even if
the filter matches nothing.

We should run the umount task when:
1/ the container command is failing (not installed) : rc != 0
2/ the container command reports running ceph-osd containers : rc == 0

Also we should not fail on the ceph directory listing.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1616159
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 39cfe0aa65ddd96458ba9d0a031d801efbb0d394)

5 years ago  tests: fix update scenario (container)
Guillaume Abrioux [Mon, 2 Dec 2019 14:16:39 +0000 (15:16 +0100)]
tests: fix update scenario (container)

The path to the inventory isn't correct because we are missing the variable
`CONTAINER_DIR` here.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
5 years ago  tests: revert vagrant_variable file name detection
Guillaume Abrioux [Mon, 25 Nov 2019 09:03:08 +0000 (10:03 +0100)]
tests: revert vagrant_variable file name detection

This commit reverts the following change:

https://github.com/ceph/ceph-ansible/pull/4510/commits/fcf181342a70b78a355d1c985699028012326b5f#diff-23b6f443c01ea2efcb4f36eedfea9089R7-R14

this is causing CI failures so this commit is intended to unlock the CI.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 5353ab8a23aee92ebab8146eeeeffcfcb25c0865)

5 years ago  rolling_update: don't enable ceph-mon unit
Dimitri Savineau [Wed, 20 Nov 2019 19:40:52 +0000 (14:40 -0500)]
rolling_update: don't enable ceph-mon unit

On non-containerized deployments the ceph-mon hostname/fqdn systemd
services are stopped at the beginning of the mon upgrade.
But the enabled parameter is set to true for both tasks, so even if
we're not using the fqdn, the systemd unit based on it gets enabled.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1649617
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years ago  container: add always tag on gather fact tasks v3.2.36
Dimitri Savineau [Thu, 14 Nov 2019 14:29:29 +0000 (09:29 -0500)]
container: add always tag on gather fact tasks

If we execute the site-container.yml playbook with specific tags (like
ceph_update_config) then we need to be sure to gather the facts,
otherwise we will see errors like:

The task includes an option with an undefined variable. The error was:
'ansible_hostname' is undefined

This commit also adds missing 'gather_facts: false' to mons plays.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1754432
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit d7fd769b6ddcb63086cc414e00cce31433d56673)

5 years ago  Evades validation of ceph_repository_type in containerized scenario
VasishtaShastry [Thu, 7 Nov 2019 12:00:21 +0000 (17:30 +0530)]
Evades validation of ceph_repository_type in containerized scenario

This prevents site-docker.yml from failing with the configurations
described in the docs.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1769760
Signed-off-by: VasishtaShastry <vipin.indiasmg@gmail.com>
Co-Authored-By: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 9a1f1626c3e57e64bdcd8d37ae600c21f3ea2a24)

5 years ago  ceph_key: restore file mode after a key is fetched
Guillaume Abrioux [Thu, 14 Nov 2019 09:30:34 +0000 (10:30 +0100)]
ceph_key: restore file mode after a key is fetched

When `import_key` is enabled and the key already exists, it is only
fetched using the ceph CLI. If the mode specified in the `ceph_key` task
differs from what the ceph CLI applies, the mode isn't restored because
we don't call `module.set_fs_attributes_if_different()` before
`module.exit_json(**result)`.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1734513
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit b717b5f736448903c69882392e0691fba60893aa)

5 years ago  Remove outdated documentation
Noah Watkins [Thu, 15 Nov 2018 22:04:45 +0000 (14:04 -0800)]
Remove outdated documentation

Fixes BZ
https://bugzilla.redhat.com/show_bug.cgi?id=1640525

Signed-off-by: Noah Watkins <nwatkins@redhat.com>
5 years ago  mergify: remove mergify config on stable-3.2
Guillaume Abrioux [Thu, 7 Nov 2019 20:14:51 +0000 (21:14 +0100)]
mergify: remove mergify config on stable-3.2

This commit removes the mergify config on stable-3.2

At the moment there is no need to have a mergify config on this branch
given that we don't use it.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
5 years ago  ceph-osd: fix fs.aio-max-nr sysctl condition
Dimitri Savineau [Wed, 6 Nov 2019 15:15:53 +0000 (10:15 -0500)]
ceph-osd: fix fs.aio-max-nr sysctl condition

[1] introduced a regression on the fs.aio-max-nr sysctl value condition.
The enable key isn't a boolean but a string because the expression isn't
evaluated.
This string output "(osd_objectstore == 'bluestore')" is always true
because the item.enable condition only matches non-empty strings. So the
sysctl value was applied for both filestore and bluestore backends.

[2] added the bool filter to the condition but the filter always returns
false on strings, so the sysctl wasn't applied at all.

This commit fixes the enable key value by evaluating the value instead
of using the string.
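
A hedged sketch of the fixed entry (the `os_tuning_params` item layout
may differ from the role's exact list):

```
os_tuning_params:
  # the jinja expression is evaluated to a real boolean now
  - { name: fs.aio-max-nr, value: 1048576, enable: "{{ osd_objectstore == 'bluestore' }}" }
```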

[1] https://github.com/ceph/ceph-ansible/commit/08a2b58
[2] https://github.com/ceph/ceph-ansible/commit/ab54fe2

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit ece46d33be566994d6ce799fdc4547299b352429)

5 years ago  Support comma-delimited subnets in firewall v3.2.35
Harald Jensås [Fri, 6 Sep 2019 14:24:30 +0000 (16:24 +0200)]
Support comma-delimited subnets in firewall

ceph.conf supports a comma-separated list of subnet CIDRs for the
public_network and the cluster network. ceph-ansible should support
setting up the firewall for this configuration.
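
A minimal sketch of a firewall task consuming such a list, assuming the
firewalld module and a hypothetical `ceph_mon_firewall_zone` variable:

```
- name: open the ceph zone to every public_network subnet
  firewalld:
    zone: "{{ ceph_mon_firewall_zone }}"
    source: "{{ item }}"
    permanent: true
    immediate: true
    state: enabled
  with_items: "{{ public_network.split(',') }}"
```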

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1767392
Closes: #4425
Related: #4333
https://docs.ceph.com/docs/nautilus/rados/configuration/network-config-ref/#network-config-settings

Signed-off-by: Harald Jensås <hjensas@redhat.com>
(cherry picked from commit d94229204d84fc27c5997d273dff577af0ab1684)

5 years ago  ceph-infra: Remove restart firewalld handler
Dimitri Savineau [Fri, 15 Feb 2019 20:58:04 +0000 (15:58 -0500)]
ceph-infra: Remove restart firewalld handler

There's no need to restart firewalld service when a new rule is
added due to the usage of the immediate flag.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit b7338d438a18b6de9083e4302d809ead7a46b6c6)

5 years ago  ceph-osd: Remove ulimit nofile on container start
Dimitri Savineau [Wed, 30 Oct 2019 15:45:44 +0000 (11:45 -0400)]
ceph-osd: Remove ulimit nofile on container start

Even if this improves ceph-disk/ceph-volume performance, it also
impacts the ceph-osd process.
The ceph-osd process shouldn't use the 1024:4096 value for the max open
files.
We remove the ulimit option from the container engine and do this kind
of change on the container side [1].

[1] https://github.com/ceph/ceph-container/pull/1497

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1702285
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 9a996aef7f79d5018e6999362fd025e9c04c9b3f)

5 years ago  update: add default values when setting fact
Guillaume Abrioux [Tue, 29 Oct 2019 17:19:15 +0000 (18:19 +0100)]
update: add default values when setting fact

This commit adds a default value in the with_dict because when using
python 2.7, if a task using a with_dict has a condition, it is
evaluated anyway whereas in python 3 it isn't.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1766499
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
5 years ago  rolling_update: remove default filter on mds group
Dimitri Savineau [Fri, 25 Oct 2019 21:03:46 +0000 (17:03 -0400)]
rolling_update: remove default filter on mds group

There's no need to use the default filter on active/standby groups
because if the group doesn't exist then the play is just skipped.

Currently this generates warnings like:

[WARNING]: Could not match supplied host pattern, ignoring: |
[WARNING]: Could not match supplied host pattern, ignoring: default([])

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 2ca79fcc99bcff6f73478f11e67ba7edb178b029)

5 years ago  rolling_update: fix active mds host value
Dimitri Savineau [Fri, 25 Oct 2019 20:47:50 +0000 (16:47 -0400)]
rolling_update: fix active mds host value

The active mds host should be based on the inventory hostname and not on
the ansible hostname.
The value returned under the mdsmap structure is based on the OS
hostname, so we need to find the right node in the inventory with this
value when doing operations on inventory nodes.

Otherwise we could see errors like:

The task includes an option with an undefined variable. The error was:
"hostvars[foobar]" is undefined

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit f1f2352c7974f0839b5e74cb23849e943a1131c6)

5 years ago  update: skip mds deactivation when no mds in inventory v3.2.34
Guillaume Abrioux [Wed, 23 Oct 2019 13:48:32 +0000 (15:48 +0200)]
update: skip mds deactivation when no mds in inventory

Let's skip this part of the code if there's no mds node in the
inventory.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 5ec906c3af2d188de23cc354ecb9ddcfc0af9d90)

5 years ago  openstack_config: fix docker exec command v3.2.33
Dimitri Savineau [Thu, 24 Oct 2019 14:36:58 +0000 (10:36 -0400)]
openstack_config: fix docker exec command

container_exec_cmd should be replaced by docker_exec_cmd.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1765110
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years ago  update: follow new recommendation to upgrade mds cluster v3.2.32
Guillaume Abrioux [Thu, 10 Oct 2019 17:09:49 +0000 (19:09 +0200)]
update: follow new recommendation to upgrade mds cluster

Refactor the mds cluster upgrade code in order to follow the documented
recommendation.
See: https://github.com/ceph/ceph/blob/luminous/doc/cephfs/upgrading.rst

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1569689
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 71cebf80a623388c64dfcb190133eb5f54a524f9)

5 years ago  tests: fix the size on the second data LV
Dimitri Savineau [Thu, 17 Oct 2019 18:28:45 +0000 (14:28 -0400)]
tests: fix the size on the second data LV

The commit replaces the pv/vg/lv commands used with the ansible command
module by the lvg and lvol modules.
This also fixes the size of the second data LV because we were only using
50% of the remaining space instead of 100%.

With a 50G device, the result was:
  - data-lv1 was 25G
  - data-lv2 was 12.5G
Instead of:
  - data-lv1 was 25G
  - data-lv2 was 25G

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 2c03c6fcd33ba6b3a3daf73ecd011e87ab41c0a0)

5 years ago  common: do not override ceph_release when using custom repo
Guillaume Abrioux [Thu, 17 Oct 2019 15:05:53 +0000 (17:05 +0200)]
common: do not override ceph_release when using custom repo

Otherwise it fails like the following:

```
TASK [ceph-mds : allow multimds] **************************************************************************************************************************************************
Monday 22 July 2019  16:37:38 +0800 (0:00:03.269)       0:13:25.651 ***********
fatal: [rhel7u6clone1]: FAILED! => {"msg": "The conditional check 'ceph_release_num[ceph_release] == ceph_release_num.luminous' failed. The error was: error while evaluating conditional (ceph_release_num[ceph_release] == ceph_release_num.luminous): 'dict object' has no attribute u'dummy'\n\nThe error appears to have been in '/usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml': line 43, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: allow multimds\n  ^ here\n"}
```

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1645379
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 4e9504c939a1daddceef3806f7165844952a6618)

5 years ago  tests: add multimds coverage
Guillaume Abrioux [Thu, 17 Oct 2019 14:20:22 +0000 (16:20 +0200)]
tests: add multimds coverage

This commit makes the all_daemons scenario deploying 3 mds in order to
cover the multimds case.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
5 years ago  rbd-mirror: fail if the peer is not added
Dimitri Savineau [Tue, 15 Oct 2019 15:32:40 +0000 (11:32 -0400)]
rbd-mirror: fail if the peer is not added

Due to the 'failed_when: false' statement present in the peer task, the
playbook continues to run even if the peer task fails (like an
incorrect remote peer format):

"stderr": "rbd: invalid spec 'admin@cluster1'"

This patch adds a task to list the peers present and adds the peer only
if it's not already there. With this we don't need the failed_when
statement anymore.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1665877
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 0b1e9c0737ca84c2e4a34f827cf91e1a11007b16)

5 years ago  Remove validate action and notario dependency
Dimitri Savineau [Fri, 11 Oct 2019 15:42:36 +0000 (11:42 -0400)]
Remove validate action and notario dependency

The current ceph-validate role is using both validate action and fail
module tasks to validate the ceph configuration.
The validate action is based on the notario python library. When one of
the notario validations fails, a python stack trace is reported to the
ansible task. This output isn't understandable by users.

This patch removes the validate action and the notario dependency. The
validation is now done with only the fail ansible module.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1654790
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years ago  tests: fix rgw multisite vagrant variables v3.2.31
Dimitri Savineau [Fri, 4 Oct 2019 14:07:05 +0000 (10:07 -0400)]
tests: fix rgw multisite vagrant variables

The secondary vagrant variables didn't have the grafana vm variable
set, which creates a vagrant error.

There was an error loading a Vagrantfile. The file being loaded
and the error message are shown below. This is usually caused by
an invalid or undefined variable.

This patch also changes the ssh-extra-args parameter to ssh-common-args
to get the same values for ssh/sftp/scp. Otherwise we can see warnings
from ansible and some tasks are failing.

[WARNING]: sftp transfer mechanism failed on [mon0]. Use ANSIBLE_DEBUG=1
to see detailed information

It also updates the ssh-common-args value for the rgw-multisite scenario
to reflect the ANSIBLE_SSH_ARGS environment variable value.

Finally, changing the IP addresses due to the Vagrant refactor done in
commit 778c51a.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 010158ff847bb59920f6a5bbf383a1cb7056c0cf)

5 years ago  switch_to_containers: optimize ownership change
Guillaume Abrioux [Mon, 7 Oct 2019 07:19:50 +0000 (09:19 +0200)]
switch_to_containers: optimize ownership change

As per https://github.com/ceph/ceph-ansible/pull/4323#issuecomment-538420164

using the `find` command should be faster.
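
A hedged sketch of the `find`-based approach (uid/gid 167 is the ceph
user in RHEL-based containers; the path is illustrative):

```
- name: fix ownership of ceph directories with find
  command: find /var/lib/ceph -not -user 167 -exec chown -h 167:167 {} +
```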

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1757400
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Co-Authored-by: Giulio Fidente <gfidente@redhat.com>
(cherry picked from commit c5d0c90bb7d8382fde2f07820c2d8547c8a3603e)

5 years ago  validate: prevent from installing OSD on same disk as the OS
Guillaume Abrioux [Tue, 8 Oct 2019 12:42:44 +0000 (14:42 +0200)]
validate: prevent from installing OSD on same disk as the OS

This commit adds a validation task to prevent installing an OSD on
the same disk as the OS.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1623580
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 80e2d00b16629c4815019e8bc58c4539bd109710)

5 years ago  tests: update tox due to pipeline removal
Guillaume Abrioux [Tue, 8 Oct 2019 15:23:08 +0000 (17:23 +0200)]
tests: update tox due to pipeline removal

This commit reflects the recent changes in ceph/ceph-build#1406

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit bcaf8cedeec0f06eb8641b0038569a5cd3a3e7be)

5 years ago  switch_to_containers: umount osd lockbox partition v3.2.30
Dimitri Savineau [Mon, 7 Oct 2019 19:47:52 +0000 (15:47 -0400)]
switch_to_containers: umount osd lockbox partition

When switching from a baremetal deployment to a containerized deployment
we only umount the OSD data partition.
If the OSD is encrypted (dmcrypt: true) then there's an additional
partition (part number 5) used for the lockbox and mounted in the
/var/lib/ceph/osd-lockbox/ directory.
Because this partition isn't unmounted, the containerized OSDs aren't
able to start. The partition is still mounted by the system and can't
be remounted from the container.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1616159
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 19edf707a50c2e86110b2ba0231091b6bd355bd1)

5 years ago  ceph-config: remove container_binary variable
Dimitri Savineau [Mon, 7 Oct 2019 20:51:32 +0000 (16:51 -0400)]
ceph-config: remove container_binary variable

9e7972a introduced a regression via the container_binary variable
which is undefined.
The CEPH_CONTAINER_BINARY environment variable isn't used at all.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years ago  ceph-mgr: fix ceph_key module with container
Dimitri Savineau [Mon, 7 Oct 2019 17:01:19 +0000 (13:01 -0400)]
ceph-mgr: fix ceph_key module with container

556052b changed the way the mgr keyrings are created but the ceph_key
module needs the containerized parameter when the deployment is using
containers.
This module doesn't support the CEPH_CONTAINER_[BINARY|IMAGE]
environment variables.

Closes: #4547
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
5 years ago  nfs: stop nfs server service in all context
Guillaume Abrioux [Mon, 7 Oct 2019 08:34:07 +0000 (10:34 +0200)]
nfs: stop nfs server service in all context

This commit moves this task in order to stop the nfs server service
regardless of the deployment type desired (containerized or
non-containerized).

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1508506
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 6c6a512a720de0268e2d099926413e2816c65174)

5 years ago  nfs: stop nfs server service
Guillaume Abrioux [Mon, 7 Oct 2019 08:21:51 +0000 (10:21 +0200)]
nfs: stop nfs server service

The syntax here wasn't working; this refactor fixes the task.
Also, removing the `ignore_errors: true` which was hiding the failure.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1508506
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 47034effe0bb7de14442b0ba884ff4abe793b4b7)

5 years ago  playbook: add missing tags
Guillaume Abrioux [Fri, 4 Oct 2019 12:58:11 +0000 (14:58 +0200)]
playbook: add missing tags

Add missing tag on ceph-handler role call.
Otherwise, we can't use `--tags='ceph_update_config'` for updating the
ceph configuration file.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1754432
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit f59dad620d43a740466a26f7fb8eba1ffc5ba0af)

5 years ago  ceph-mgr: create keys for MGRs
Rishabh Dave [Thu, 2 May 2019 12:48:00 +0000 (08:48 -0400)]
ceph-mgr: create keys for MGRs

Add code in ceph-mgr for creating a keyring for the manager so that
managers can be deployed on a separate node too.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1552210
Signed-off-by: Rishabh Dave <ridave@redhat.com>
(cherry picked from commit 56bfec7c58407e269f6e6fa7b4c8a5928953dc6f)

5 years ago  ceph-handler: don't restart all OSDs with limit
Dimitri Savineau [Wed, 2 Oct 2019 18:48:53 +0000 (14:48 -0400)]
ceph-handler: don't restart all OSDs with limit

When using the ansible --limit option on one or a few OSD nodes, if the
handler is triggered then we will restart the OSD service on all OSD
nodes instead of only the hosts limited by the limit value.
Even if the play is limited by the --limit value we are using all OSD
nodes from the OSD group.

  with_items: '{{ groups[osd_group_name] }}'

Instead we should iterate only on the nodes present in both the OSD
group and the limit list.
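
A hedged sketch of such an intersection (whether the actual fix uses
`ansible_play_batch` is an assumption):

```
with_items: "{{ groups[osd_group_name] | intersect(ansible_play_batch) }}"
```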

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 0346871fb5c46fb1fedfb24ffe5a8c02108c244e)

5 years ago  Vagrantfile: support more than 9 nodes per daemon type
Guillaume Abrioux [Wed, 2 Oct 2019 08:14:52 +0000 (10:14 +0200)]
Vagrantfile: support more than 9 nodes per daemon type

Because of the current ip address assignment, it's not possible to
deploy more than 9 nodes per daemon type.
This commit refactors this a bit and allows us to get around this
limitation.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 778c51a0ff7a8c66c464f4828a0f87dd290e1c3e)

5 years ago  tests: set gateway_ip_list dynamically
Guillaume Abrioux [Fri, 4 Oct 2019 02:20:44 +0000 (04:20 +0200)]
tests: set gateway_ip_list dynamically

so we don't have to hardcode this in the tests.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
5 years ago  osd: refact 'wait for all osd to be up' task
Guillaume Abrioux [Wed, 14 Aug 2019 08:47:40 +0000 (10:47 +0200)]
osd: refact 'wait for all osd to be up' task

Let's use `until` instead of doing the test in bash with a python
one-liner; also, use `command` instead of `shell`.
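
A minimal sketch of the `until`-based task, assuming `ceph osd stat -f
json` exposes num_osds/num_up_osds and reusing the `docker_exec_cmd`
prefix mentioned elsewhere in this log:

```
- name: wait for all osd to be up
  command: "{{ docker_exec_cmd | default('') }} ceph --cluster {{ cluster }} osd stat -f json"
  register: osd_stat
  until: >
    (osd_stat.stdout | from_json).num_osds | int > 0 and
    (osd_stat.stdout | from_json).num_osds == (osd_stat.stdout | from_json).num_up_osds
  retries: 60
  delay: 3
```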

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit c76cd5ad844b71a41d898febafda326ad067cc05)

5 years ago  validate: fix gpt header check v3.2.29
Guillaume Abrioux [Mon, 30 Sep 2019 15:10:23 +0000 (17:10 +0200)]
validate: fix gpt header check

Check for gpt header when osd scenario is lvm or lvm batch.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1731310
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
5 years ago  Set proper ownership command performance improvement
Kevin Jones [Sat, 10 Aug 2019 19:44:32 +0000 (15:44 -0400)]
Set proper ownership command performance improvement

By changing the set ownership command from using the file module in combination with a with_items loop to a raw chown command, we can achieve a 98% performance increase here.

On a ceph cluster with a significant amount of directories and files in /var/lib/ceph, the file module has to run checks on ownership of all those directories and files to determine whether a change is needed.

In this case, we just want to explicitly set the ownership of all these
directories and files to the ceph_uid.

Added a context note to all set proper ownership tasks.
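
A minimal sketch of the raw command replacing the file-module loop
(the paths are illustrative):

```
- name: set proper ownership on ceph directories
  command: chown -R {{ ceph_uid }}:{{ ceph_uid }} /var/lib/ceph /etc/ceph
```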

Signed-off-by: Kevin Jones <kevinjones@redhat.com>
(cherry picked from commit 47bf47c9d87fe057bc1402f7a1aa0a84d13b5fd0)

5 years ago  ceph-config: do not always assume containers when calculating num_osds
Andrew Schoen [Thu, 31 Jan 2019 19:38:53 +0000 (13:38 -0600)]
ceph-config: do not always assume containers when calculating num_osds

CEPH_CONTAINER_IMAGE should be None if containerized_deployment
is False.

Resolves: #4498

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit 70a4368bc5a58f198b23751f8478f2d79bc5e9f4)

6 years ago  mon: use ceph_key module for containerized mgr keyring creation
Guillaume Abrioux [Wed, 25 Sep 2019 14:02:08 +0000 (16:02 +0200)]
mon: use ceph_key module for containerized mgr keyring creation

This commit replaces a `command` task with `ceph_key` in order to create
mgr keyrings.

This allows us to use the `mode` parameter to set the right mode on the
generated keys.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1734513
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  ceph-osd: handle loop devices with containers
Dimitri Savineau [Wed, 25 Sep 2019 13:40:34 +0000 (09:40 -0400)]
ceph-osd: handle loop devices with containers

Since we changed the way we run the OSD containers, using the ID
instead of the device name, we lost the ability to use loop devices.
Loop devices behave like nvme or cciss devices because the partitions
are referenced with an extra 'p' before the partition number.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1749097

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  config: support num_osds fact setting in containerized deployment v3.2.28
Guillaume Abrioux [Wed, 30 Jan 2019 23:07:30 +0000 (00:07 +0100)]
config: support num_osds fact setting in containerized deployment

This part of the code must be supported in containerized deployments.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1664112
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit fe1528adb4fa853693ba1c207856dfad58cef270)

6 years ago  ceph-handler: Fix osd restart condition v3.2.27
Dimitri Savineau [Mon, 9 Sep 2019 15:23:47 +0000 (11:23 -0400)]
ceph-handler: Fix osd restart condition

In containerized deployments, the restart OSD handler couldn't be
triggered in most ansible executions.
This is due to the usage of run_once + a condition on the inventory
hostname and the last filter.
The run_once is triggered first so ansible will pick a node in the
osd group to execute the restart task. But if this node isn't the
last one in the osd group then the task is ignored. The task is more
likely to be ignored than executed.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 5b1c15653fcb4772f0839f3a57f7e36ba1b86f49)

6 years ago  rbd-mirror: Allow to copy the admin keyring
Dimitri Savineau [Mon, 9 Sep 2019 18:33:55 +0000 (14:33 -0400)]
rbd-mirror: Allow to copy the admin keyring

The ceph-rbd-mirror role allows copying the admin keyring via the
copy_admin_key variable but there's actually no task in that role
doing the job.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 1f505628dd5e62226ceee975679f1629788771f9)

6 years ago  rbd-mirror: Use the rbd mirror client keyring
Dimitri Savineau [Mon, 9 Sep 2019 18:18:49 +0000 (14:18 -0400)]
rbd-mirror: Use the rbd mirror client keyring

The admin keyring isn't present by default on the rbd mirror nodes so
the rbd commands related to the mirroring configuration will fail.
Instead we can use the rbd mirror client keyring.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit a3d36df02564d2301a0b51e5d63ea5ffb8ae6968)

6 years ago  Look for additional names when checking ceph-nfs container status
Giulio Fidente [Mon, 9 Sep 2019 17:07:02 +0000 (19:07 +0200)]
Look for additional names when checking ceph-nfs container status

Ganesha cannot be operated active/active; in those deployments
where it is managed by pacemaker, the container name can be
different from the default.

This change uses "ceph_nfs_service_suffix" where previously
missing to ensure tasks will work with customized names.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1750005
Signed-off-by: Giulio Fidente <gfidente@redhat.com>
(cherry picked from commit d2a2bd7c423182b60460bffa6b3d6a28c7d12227)

6 years ago  rbd-mirror: configure pool and peer
Dimitri Savineau [Wed, 4 Sep 2019 18:35:20 +0000 (14:35 -0400)]
rbd-mirror: configure pool and peer

The rbd mirror configuration was only available for non-containerized
deployments and was also incomplete.
We now enable the mirroring on the pool and add the remote peer in both
scenarios.

The default mirroring mode is set to 'pool' but can be configured via
the ceph_rbd_mirror_mode variable.

This commit also fixes an issue on the rbd mirror command if the ceph
cluster name isn't using the default value (ceph) due to a missing
--cluster parameter to the command.
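
A minimal sketch of the two underlying commands (the pool/peer variable
names are hypothetical):

```
- name: enable mirroring on the pool
  command: >
    rbd --cluster {{ cluster }} mirror pool enable
    {{ ceph_rbd_mirror_pool }} {{ ceph_rbd_mirror_mode | default('pool') }}

- name: add the remote peer
  command: >
    rbd --cluster {{ cluster }} mirror pool peer add
    {{ ceph_rbd_mirror_pool }}
    {{ ceph_rbd_mirror_remote_user }}@{{ ceph_rbd_mirror_remote_cluster }}
```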

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1665877
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 7e5e21741eb0143e3d981dc1891253ad331d9753)

6 years ago  tests: update dedicated mgr node all_daemons
Dimitri Savineau [Fri, 30 Aug 2019 18:36:12 +0000 (14:36 -0400)]
tests: update dedicated mgr node all_daemons

5b29144 changed the mgr node to a dedicated node instead of the first
monitor node.
But the change didn't update the switch-to-containers inventory, which
causes this playbook to fail.
Also update the ubuntu inventory to have the same configuration.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  ceph-infra: Apply firewall rules with container v3.2.26
Dimitri Savineau [Wed, 31 Jul 2019 18:02:41 +0000 (14:02 -0400)]
ceph-infra: Apply firewall rules with container

We don't have a reason to not apply firewall rules on the host when
using a containerized deployment.
The TripleO environments already manage the ceph firewall rules outside
ceph-ansible and set the configure_firewall variable to false.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1733251
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 771f25b1f823ce46995b62dec84f9f3515bfac2c)

6 years ago  ceph-client: Use profile rbd in keyring caps v3.2.25
Dimitri Savineau [Mon, 26 Aug 2019 19:35:19 +0000 (15:35 -0400)]
ceph-client: Use profile rbd in keyring caps

Like the OpenStack keyrings, we can use the profile rbd for the clients
keyring (both mon and osd).

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 49aa05b96c6614a07127238fe157c2bf87315618)

6 years ago  Revert "osd: add 'osd blacklist' cap for osp keyrings"
Dimitri Savineau [Mon, 26 Aug 2019 19:04:41 +0000 (15:04 -0400)]
Revert "osd: add 'osd blacklist' cap for osp keyrings"

This reverts commit 2d955757ee9324a018374f628664e2e15dcb7903.

The "osd blacklist" isn't an osd caps but should be used with mon caps.
Also the correct caps for this is: 'allow command "osd blacklist"'.
The current change is breaking the openstack and clients keyrings.
By using the profile rbd (which is already used) we already rely on the
ability to blacklist dead client.

Resolves: #4385

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 717af83475e4ece252b4a776dcd17b013451a075)

6 years ago  ceph-osd: Add ulimit nofile on container start
Dimitri Savineau [Tue, 6 Aug 2019 15:52:59 +0000 (11:52 -0400)]
ceph-osd: Add ulimit nofile on container start

On containerized deployment, the OSD entrypoint runs some ceph-volume
commands (lvm/simple scan and/or activate) which perform badly without
the ulimit option.
This option was added for all previous ceph-volume commands but not on
the ceph-osd container startup.
Also update the hard limit value to 4096 to reflect the default
baremetal value.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1744390
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 9a4ac46d1977726c6e9b0ce8e4f051f15ce2ca12)

6 years ago  mgr: add a check task for all mgr to be up
Guillaume Abrioux [Thu, 22 Aug 2019 12:33:20 +0000 (14:33 +0200)]
mgr: add a check task for all mgr to be up

This can't be backported from master since there were too many
modifications in the meantime.

When the mgrs aren't all ready, sometimes the following error can show
up:

```
stderr: 'Error ENOENT: all mgr daemons do not support module ''status'', pass --force to force enablement'
```

This commit adds a check so all mgr are available when we try to enable
modules.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  validate: fail if gpt header found on unprepared devices
Guillaume Abrioux [Wed, 17 Jul 2019 13:55:12 +0000 (15:55 +0200)]
validate: fail if gpt header found on unprepared devices

ceph-volume will complain if gpt headers are found on devices.
This commit checks whether a gpt header is present on devices passed in
the `devices` variable and fails early.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1730541
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 487d7016850524af64254aa3a9ec61222ec926a4)

6 years ago  release-note: add two deprecations warning and removal
Guillaume Abrioux [Thu, 1 Aug 2019 09:29:22 +0000 (11:29 +0200)]
release-note: add two deprecations warning and removal

In `stable-4.0`, the group name `iscsi-gws` will go away and some rgw
systemd service names will disappear as well:

(`ceph-rgw@<hostname>.service`, `ceph-radosgw@<hostname>.service`,
`ceph-radosgw@radosgw.<hostname>.service`,
`ceph-radosgw@radosgw.gateway.service`)

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  validate: do not validate devices or lvm_volumes in osd_auto_discovery case
Guillaume Abrioux [Wed, 14 Aug 2019 12:20:58 +0000 (14:20 +0200)]
validate: do not validate devices or lvm_volumes in osd_auto_discovery case

we shouldn't validate these two variables when `osd_auto_discovery` is
set.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1644623
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 243edfbc963baa68af0e0acf8489c4b920f57c1a)

6 years ago  update: use ids to restart osds instead of device name v3.2.24
Guillaume Abrioux [Tue, 13 Aug 2019 09:13:30 +0000 (11:13 +0200)]
update: use ids to restart osds instead of device name

We must use the ids instead of device names in the tasks executed in
`post_tasks` for the osd rolling update, otherwise it ends up with old
systemd units enabled.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1739209
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  osd: copy systemd-device-to-id.sh on all osd nodes before running it v3.2.23
Guillaume Abrioux [Mon, 12 Aug 2019 08:01:43 +0000 (10:01 +0200)]
osd: copy systemd-device-to-id.sh on all osd nodes before running it

Otherwise it will fail when running the rolling_update.yml playbook
because of the `serial: 1` usage.
The task which copies the script is run only against the current node
being played, whereas the task which runs the script is run against all
nodes in a loop; it ends up with the typical error:

```
2019-08-08 17:47:05,115 p=14905 u=ubuntu |  failed: [magna023 -> magna030] (item=magna030) => {
    "changed": true,
    "cmd": [
        "/usr/bin/env",
        "bash",
        "/tmp/systemd-device-to-id.sh"
    ],
    "delta": "0:00:00.004339",
    "end": "2019-08-08 17:46:59.059670",
    "invocation": {
        "module_args": {
            "_raw_params": "/usr/bin/env bash /tmp/systemd-device-to-id.sh",
            "_uses_shell": false,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "warn": true
        }
    },
    "item": "magna030",
    "msg": "non-zero return code",
    "rc": 127,
    "start": "2019-08-08 17:46:59.055331",
    "stderr": "bash: /tmp/systemd-device-to-id.sh: No such file or directory",
    "stderr_lines": [
        "bash: /tmp/systemd-device-to-id.sh: No such file or directory"
    ],
    "stdout": "",
    "stdout_lines": []
}
```

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1739209
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  tests: deploy mgr on a dedicated node (all_daemons scenario)
Guillaume Abrioux [Thu, 8 Aug 2019 11:43:29 +0000 (13:43 +0200)]
tests: deploy mgr on a dedicated node (all_daemons scenario)

Let's deploy mgr on a dedicated node.
This makes the update job fail on the stable-4.0 branch since there's a
mismatch between the two inventories.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  osd: add 'osd blacklist' cap for osp keyrings
Guillaume Abrioux [Mon, 15 Jul 2019 07:57:06 +0000 (09:57 +0200)]
osd: add 'osd blacklist' cap for osp keyrings

This commits adds the `osd blacklist` cap on all OSP clients keyrings.

Fixes: #2296
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 2d955757ee9324a018374f628664e2e15dcb7903)

6 years ago  shrink-osd: Stop ceph-disk container based on ID v3.2.22
Dimitri Savineau [Mon, 5 Aug 2019 18:32:18 +0000 (14:32 -0400)]
shrink-osd: Stop ceph-disk container based on ID

Since bedc0ab we now manage ceph-osd systemd unit scripts based on the
ID instead of the device name, but this was not present in the
shrink-osd playbook (ceph-disk version).
To keep backward compatibility on deployments that haven't done the
transition to OSD ids yet, we should stop unit scripts for both device
and ID.
This commit adds the ulimit nofile container option to get better
performance on ceph-disk commands.
It also fixes an issue when the OSD id matches multiple OSD ids with
the same first digit.

$ ceph-disk list | grep osd.1
 /dev/sdb1 ceph data, prepared, cluster ceph, osd.1, block /dev/sdb2
 /dev/sdg1 ceph data, prepared, cluster ceph, osd.12, block /dev/sdg2

Finally, remove the directory of the shrunken OSD.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  rgw: add beast frontend
Dimitri Savineau [Tue, 26 Feb 2019 14:16:37 +0000 (09:16 -0500)]
rgw: add beast frontend

Allow configuring the rgw beast frontend in addition to civetweb
(the default value).
Add rgw_thread_pool_size variable with 512 as default value and keep
backward compatibility with num_threads option when using civetweb.
Update radosgw_civetweb_num_threads to reflect rgw_thread_pool_size
change.
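
For illustration, selecting the new frontend from group_vars (assuming
the usual variable names; beast remains optional and civetweb stays the
default):

```
radosgw_frontend_type: beast
rgw_thread_pool_size: 512
```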

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1733406
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit d17b1b48b6d4259f88445a0752e1c13b4522ced0)

6 years ago  ceph-osd: check container engine rc for pools
Dimitri Savineau [Mon, 22 Jul 2019 20:58:40 +0000 (16:58 -0400)]
ceph-osd: check container engine rc for pools

When creating the OpenStack pools, we only check if the return code
from the pool list command isn't 0 (ie: if the pool doesn't exist). In
that case, the return code will be 2. That's why the next condition is
rc != 0 for the pool creation.
But in containerized deployments, the return code could be different if
there's a failure on the container engine command (like the container
not running). In that case, the return code could be either 1 (docker)
or 125 (podman), so we should fail at this point and not in the next
tasks.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1732157

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit d549fffdd24d21661b64b31bda20b4e8c6aa82b6)