git.apps.os.sepia.ceph.com Git - ceph-ansible.git/log
6 years ago tox.ini: setup LVs in OSD hosts for '*-cluster' scenarios
Ramana Raja [Fri, 30 Nov 2018 15:01:13 +0000 (20:31 +0530)]
tox.ini: setup LVs in OSD hosts for '*-cluster' scenarios

... as the scenarios set up ceph clusters with LVM OSDs.

Closes: https://github.com/ceph/ceph-ansible/issues/3399
Signed-off-by: Ramana Raja <rraja@redhat.com>
6 years ago osd: manage legacy ceph-disk non-container startup v3.2.0rc6
Sébastien Han [Thu, 29 Nov 2018 13:59:25 +0000 (14:59 +0100)]
osd: manage legacy ceph-disk non-container startup

The code is now able (again) to start osds that were configured with
ceph-disk in a non-container scenario.

Closes: https://github.com/ceph/ceph-ansible/issues/3388
Signed-off-by: Sébastien Han <seb@redhat.com>
6 years ago config: write jinja comment with appropriate syntax
Guillaume Abrioux [Thu, 29 Nov 2018 09:16:52 +0000 (10:16 +0100)]
config: write jinja comment with appropriate syntax

jinja comments should be written using the jinja syntax `{# ... #}`.
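
For instance, in a `ceph.conf.j2`-style template (a hypothetical snippet), only the Jinja form is stripped from the rendered file:

```
{# this comment is removed by the template engine and never reaches ceph.conf #}
[global]
fsid = {{ fsid }}
```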

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1654441
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit a86c2b85263f84891e2cbf7e782f7ac8891257b3)

6 years ago rolling_update: default ceph json output to empty dict v3.2.0rc5
Sébastien Han [Wed, 28 Nov 2018 23:27:49 +0000 (00:27 +0100)]
rolling_update: default ceph json output to empty dict

So we can avoid the following failure:

The conditional check 'hostvars[mon_host]['ansible_hostname'] in (ceph_health_raw.stdout | from_json)["quorum_names"] or hostvars[mon_host]['ansible_fqdn'] in (ceph_health_raw.stdout | from_json)["quorum_names"]
' failed. The error was: No JSON object could be decoded

We just need to set a default; the next iteration will have a more
complete json since the command won't fail.
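
A minimal sketch of the idea (the exact task in rolling_update.yml differs): default the raw output to an empty JSON object so `from_json` always succeeds, and guard the key lookup:

```
when: >
  hostvars[mon_host]['ansible_hostname'] in
  ((ceph_health_raw.stdout | default('{}') | from_json).get('quorum_names', []))
```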

Signed-off-by: Sébastien Han <seb@redhat.com>
6 years ago client: change default pool size
Guillaume Abrioux [Wed, 21 Nov 2018 16:28:00 +0000 (17:28 +0100)]
client: change default pool size

default pool size should match the real default that is defined in ceph
itself.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit ed42262b372ace8688c2b20a05d143e46174ec08)

6 years ago defaults: change default size for openstack pools
Guillaume Abrioux [Wed, 21 Nov 2018 16:27:11 +0000 (17:27 +0100)]
defaults: change default size for openstack pools

default pool size should match the real default that is defined in ceph
itself.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 6d1fe329980b91944cae58e68b909d34892667e7)

6 years ago defaults: change default pool size for cephfs_pools
Guillaume Abrioux [Wed, 21 Nov 2018 16:08:19 +0000 (17:08 +0100)]
defaults: change default pool size for cephfs_pools

default pool size should match the real default that is defined in ceph
itself.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit fdc438dd0dd7ad91b296008a7335460a88c2ca4a)

6 years ago defaults: add ceph related vars file
Guillaume Abrioux [Wed, 21 Nov 2018 10:06:45 +0000 (11:06 +0100)]
defaults: add ceph related vars file

This adds a level of granularity.
We can have ceph-specific variables that the user shouldn't have to change
here.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit f1735e9bb016dd30c2164b9f8ec6f644914052b1)

6 years ago refact osd pool size customization
Guillaume Abrioux [Wed, 21 Nov 2018 10:00:11 +0000 (11:00 +0100)]
refact osd pool size customization

Add a real default value for osd pool size customization.
Ceph itself has an `osd_pool_default_size` with a default value of `3`.

If users don't specify a pool size in various pools definition within
ceph-ansible, we should default to `3`.

By the way, this kind of condition isn't really clear:
```
when:
  - rbd_pool_size | default ("")
```

we should try to get the customized value, then default to what is in
`osd_pool_default_size` (which has its default value pointing to
`ceph_osd_pool_default_size` (`3`) as well), and compare it to
`ceph_osd_pool_default_size`.
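
A sketch of the intended check, assuming the variable names above:

```
when:
  - rbd_pool_size | default(osd_pool_default_size) != ceph_osd_pool_default_size
```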

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 7774069d45477df9f37c98bc414b3bf38cf41feb)

6 years ago mon: move `osd_pool_default_pg_num` in `ceph-defaults`
Guillaume Abrioux [Tue, 13 Nov 2018 14:40:35 +0000 (15:40 +0100)]
mon: move `osd_pool_default_pg_num` in `ceph-defaults`

The `osd_pool_default_pg_num` parameter is set in `ceph-mon`.
When using ceph-ansible with `--limit` on a specific group of nodes, it
will fail when trying to access this variable since it wouldn't be
defined.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1518696
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit d4c0960f04342e995db2453b50940aa9933ceb09)

6 years ago tests: change default pool size
Guillaume Abrioux [Wed, 21 Nov 2018 16:28:31 +0000 (17:28 +0100)]
tests: change default pool size

the default pool size in our tests should be explicitly set to 1

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago update: fix a typo
Guillaume Abrioux [Mon, 26 Nov 2018 13:10:19 +0000 (14:10 +0100)]
update: fix a typo

`hostvars[groups[mon_host]]['ansible_hostname']` seems to be a typo.
That should be `hostvars[mon_host]['ansible_hostname']`

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 7c99b6df6d8f0daa05ed8da987984d638af3a794)

6 years ago tests: do not fully override previous ceph_conf_overrides
Guillaume Abrioux [Thu, 22 Nov 2018 10:33:20 +0000 (11:33 +0100)]
tests: do not fully override previous ceph_conf_overrides

We run an initial deployment with `osd_pool_default_size: 1` in
`ceph_conf_overrides`.
When re-running the playbook to test idempotency and handlers, we reset
`ceph_conf_overrides`; we must append the new value instead of just
overwriting it, otherwise this can lead to errors in the CI.
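
One way to merge the new value into the existing overrides instead of overwriting them, sketched with the `osd_pool_default_size` setting from above:

```
ceph_conf_overrides: "{{ ceph_conf_overrides | combine({'global': {'osd_pool_default_size': 1}}, recursive=True) }}"
```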

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit f290e49df86a6c878dfffa4d017537f3be6ff615)

6 years ago rolling_update: refact set_fact `mon_host`
Guillaume Abrioux [Thu, 22 Nov 2018 16:52:58 +0000 (17:52 +0100)]
rolling_update: refact set_fact `mon_host`

each monitor node should select another monitor which isn't itself;
otherwise, one node in the monitor group won't set this fact, which
causes a failure.

Typical error:
```
TASK [create potentially missing keys (rbd and rbd-mirror) when mon is containerized] ***
task path: /home/jenkins-build/build/workspace/ceph-ansible-prs-dev-update_docker_cluster/rolling_update.yml:200
Thursday 22 November 2018  14:02:30 +0000 (0:00:07.493)       0:02:50.005 *****
fatal: [mon1]: FAILED! => {}

MSG:

The task includes an option with an undefined variable. The error was: 'dict object' has no attribute u'mon2'
```
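
A sketch of the refactor, assuming the usual `mon_group_name` group: every node picks a monitor other than itself.

```
- name: set_fact mon_host
  set_fact:
    mon_host: "{{ groups[mon_group_name] | difference([inventory_hostname]) | first }}"
```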

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit af78173584f1b3a99515e9b94f450be22420c545)

6 years ago rolling_update: create rbd and rbd-mirror keyrings
Sébastien Han [Wed, 21 Nov 2018 15:18:58 +0000 (16:18 +0100)]
rolling_update: create rbd and rbd-mirror keyrings

During an upgrade ceph won't create keys that did not exist in the
previous version. So after an upgrade from, say, Jewel to Luminous, once
all the monitors have the new version they should get or create the
keys. It's ok for the task to fail, especially for the rbd-mirror
key, which only appears in Nautilus.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1650572
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 4e267bee4f9263b9ac3b5649f1e3cf3cbaf12d10)

6 years ago ceph_key: add a get_key function
Sébastien Han [Wed, 21 Nov 2018 15:17:04 +0000 (16:17 +0100)]
ceph_key: add a get_key function

When checking if a key exists we also have to ensure that the key exists
on the filesystem; the key can change in Ceph but still have an outdated
version on the filesystem. This commit solves that issue.

Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 691f373543d96d26b1af61c4ff7731fd888a9ce9)

6 years ago switch: do not look for devices anymore
Sébastien Han [Mon, 19 Nov 2018 13:58:03 +0000 (14:58 +0100)]
switch: do not look for devices anymore

It's easier to look up a directory instead of the block devices,
especially because ceph-volume and ceph-disk have different ways to
handle devices.

Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit c14f9b78ff7b88419148ac2dd01611b7ec830598)

6 years ago switch: disable all ceph units
Sébastien Han [Fri, 16 Nov 2018 15:15:24 +0000 (16:15 +0100)]
switch: disable all ceph units

Prior to this commit we were only disabling the ceph-osd units, but forgot
ceph.target, which controls everything and will restart the
ceph-osd units at each reboot.
Now that everything gets disabled, there won't be any conflicts between
the old non-container and the new container units.
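
A sketch of the additional step, using the `systemd` module:

```
- name: disable ceph.target
  systemd:
    name: ceph.target
    enabled: no
```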

Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit cd56dad9fa4574f8474c362083d97003f62926ab)

6 years ago switch: do not mask systemd unit
Sébastien Han [Tue, 13 Nov 2018 16:43:21 +0000 (17:43 +0100)]
switch: do not mask systemd unit

If we mask it we won't be able to start the OSD container, since the
OSD container now uses the OSD ID in its unit name, such as: ceph-osd@0

Fixes the error: Failed to execute operation: Cannot send after transport endpoint shutdown

Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit fe1d09925ae1525e99f22a3eab9ca1823c079bda)

6 years ago osd: re-introduce disk_list check
Sébastien Han [Wed, 28 Nov 2018 23:10:29 +0000 (00:10 +0100)]
osd: re-introduce disk_list check

This commit
https://github.com/ceph/ceph-ansible/commit/4cc1506303739f13bb7a6e1022646ef90e004c90#diff-51bbe3572e46e3b219ad726da44b64ebL13
accidentally removed this check.

This is a must-have for ceph-disk based containerized OSDs.

Signed-off-by: Sébastien Han <seb@redhat.com>
6 years ago validate: change default value for `radosgw_address`
Guillaume Abrioux [Wed, 28 Nov 2018 19:53:10 +0000 (20:53 +0100)]
validate: change default value for `radosgw_address`

change the default value of `radosgw_address` to keep consistency with
`monitor_address`.
Moreover, `ceph-validate` checks if the value is '0.0.0.0' to determine
if it has to run `check_eth_rgw.yml`.
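
A sketch of the resulting guard, assuming the default is now `0.0.0.0`:

```
- name: include check_eth_rgw.yml
  include_tasks: check_eth_rgw.yml
  when: radosgw_address == '0.0.0.0'
```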

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1600227
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit e4869ac8bd574af56952f02b1c8f63ecae0d5d86)

6 years ago tests: rgw_multisite allow clusters to talk to each other
Guillaume Abrioux [Wed, 28 Nov 2018 17:46:45 +0000 (18:46 +0100)]
tests: rgw_multisite allow clusters to talk to each other

Adding this rule on the hypervisor will allow the clusters to talk to
each other.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 96ce8761ba7656d4a657df319362084f72080320)

6 years ago tests: set pool size to 1 in ceph-override.json
Guillaume Abrioux [Thu, 8 Nov 2018 08:08:28 +0000 (09:08 +0100)]
tests: set pool size to 1 in ceph-override.json

setting this to 1 makes the CI cover the related code in the
playbook without breaking the upgrade scenarios.

Those scenarios were broken because there is a check `TASK [waiting for
clean pgs...]` in rolling_update.yml; since the pool size for
`cephfs_metadata` and `cephfs_data` is updated to `2` in
`ceph-override.json` and there are not enough OSDs to honor this size,
some PGs are degraded and make the mentioned check fail.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 3ac6619fb9aa0ea29041ce122b4dfac9a51fc235)

6 years ago osd: commonize start_osd code
Guillaume Abrioux [Wed, 7 Nov 2018 10:45:29 +0000 (11:45 +0100)]
osd: commonize start_osd code

since the introduction of `ceph-volume`, there is no need to split those tasks.

Let's refactor this part of the code so it's clearer.

By the way, this was breaking rolling_update.yml when `openstack_config:
true` was set, because nothing ensured OSDs were started in the ceph-osd role
(in `openstack_config.yml` there is a check ensuring all OSDs are UP, which was
obviously failing), and it resulted in OSDs on the last OSD node not being
started anyway.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit f7fcc012e9a5b5d37bcffd39f3062adbc2886006)

6 years ago mgr: fix mgr keyring error on rolling_update
Guillaume Abrioux [Tue, 27 Nov 2018 12:42:41 +0000 (13:42 +0100)]
mgr: fix mgr keyring error on rolling_update

when upgrading from RHCS 2.5 to 3.2, it fails because the task `create
ceph mgr keyring(s) when mon is containerized` has the when condition
`inventory_hostname == groups[mon_group_name]|last`.
First, this is incorrect because `inventory_hostname` refers to a
mgr node, so this condition would never have been satisfied.
Then, this condition combined with `serial: 1` makes the mgr keyring
creation get skipped on the first node. Furthermore, the `ceph-mgr` role
tries to copy the mgr keyring (it's not aware we are running with
`serial: 1`), which leads to a failure like the following:

```
TASK [ceph-mgr : copy ceph keyring(s) if needed] ***************************************************************************************************************************************************************************************************************************************************************************
task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:10
Tuesday 27 November 2018  12:03:34 +0000 (0:00:00.296)       0:11:01.290 ******
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AnsibleFileNotFound: Could not find or access '~/ceph-ansible-keys/48d78ac1-e0d6-4e35-ab3e-772aea7828fc//etc/ceph/local.mgr.magna021.keyring'
failed: [magna021] (item={u'dest': u'/var/lib/ceph/mgr/local-magna021/keyring', u'name': u'/etc/ceph/local.mgr.magna021.keyring', u'copy_key': True}) => {"changed": false, "item": {"copy_key": true, "dest": "/var/lib/ceph/mgr/local-magna021/keyring", "name": "/etc/ceph/local.mgr.magna021.keyring"}, "msg": "Could not find or access '~/ceph-ansible-keys/48d78ac1-e0d6-4e35-ab3e-772aea7828fc//etc/ceph/local.mgr.magna021.keyring'"}
```

The ceph_key module is idempotent, so there is no need to have such a
condition.
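
A sketch of the fix; the module arguments shown (name, caps) are illustrative, the point being that the `when` condition is simply dropped:

```
- name: create ceph mgr keyring(s) when mon is containerized
  ceph_key:
    name: "mgr.{{ hostvars[item]['ansible_hostname'] }}"
    state: present
    caps:
      mon: allow profile mgr
      osd: allow *
      mds: allow *
  with_items: "{{ groups[mgr_group_name] }}"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  # no 'inventory_hostname == groups[mon_group_name]|last' guard anymore:
  # ceph_key is idempotent, so every batch of a 'serial: 1' run can run it
```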

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1649957
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 73287f91bcdbc4e9cf95f52f8389b561418cf3bd)

6 years ago tests: apply dev_setup on the secondary cluster for rgw_multisite
Guillaume Abrioux [Tue, 27 Nov 2018 09:26:41 +0000 (10:26 +0100)]
tests: apply dev_setup on the secondary cluster for rgw_multisite

we must apply this playbook before deploying the secondary cluster.
Otherwise, there will be a mismatch between the two deployed clusters.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 3d8f4e63045d9453549e86fc280556663a9a9a1c)

6 years ago handler: show unit logs on error
Sébastien Han [Tue, 27 Nov 2018 09:45:05 +0000 (10:45 +0100)]
handler: show unit logs on error

This will tremendously help debugging daemons that fail on restart by
showing the systemd unit logs.

Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit a9b337ba660da641f36c79a92e0aace217175ff0)

6 years ago ceph-volume: be idempotent when the batch strategy changes
Andrew Schoen [Tue, 20 Nov 2018 20:28:58 +0000 (14:28 -0600)]
ceph-volume: be idempotent when the batch strategy changes

If you deploy with 2 HDDs and 1 SSD, then on each subsequent deploy both
HDD drives will be filtered out, because they're already used by ceph.
ceph-volume will report this as a 'strategy change' because the device
list went from a mixed type of HDD and SSD to a single type of only SSD.

This situation results in a non-zero exit code from ceph-volume. We want
to handle this situation gracefully and report that nothing will be changed.
A json structure similar to what ceph-volume would have given is
returned in the 'stdout' key.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1650306
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit e13f32c1c5be2e4007714f704297827b16488ec6)

6 years ago config: convert _osd_memory_target to int
Guillaume Abrioux [Wed, 21 Nov 2018 13:38:25 +0000 (14:38 +0100)]
config: convert _osd_memory_target to int

ceph.conf doesn't accept float values.

Typical error seen:
```
$ sudo ceph daemon osd.2 config get osd_memory_target
Can't get admin socket path: unable to get conf option admin_socket for osd.2:
parse error setting 'osd_memory_target' to '7823740108,8' (strict_si_cast:
unit prefix not recognized)
```

This commit ensures the value inserted in ceph.conf will be an integer.
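
A sketch of the cast at the point where the value is rendered into ceph.conf:

```
osd memory target = {{ _osd_memory_target | int }}
```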

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 68dde424f6b254c657d75f1b5c47131dc84d9fc3)

6 years ago infra: don't restart firewalld if unit is masked v3.2.0rc4
Guillaume Abrioux [Thu, 15 Nov 2018 20:56:11 +0000 (21:56 +0100)]
infra: don't restart firewalld if unit is masked

if the firewalld.service systemd unit is masked, the handler will fail
when trying to restart it.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1650281
(cherry picked from commit 63b9835cbb0510415a2d0077697a0107e2d6c4f3)
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago osd_memory_target: standardize unit and fix calculation
Neha Ojha [Mon, 19 Nov 2018 06:50:02 +0000 (06:50 +0000)]
osd_memory_target: standardize unit and fix calculation

* The default value of osd_memory_target used by ceph is 4294967296 bytes,
so use the same as the ceph-ansible default.

* Convert ansible_memtotal_mb to bytes to calculate osd_memory_target, as
sketched below

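A sketch of the conversion; the fraction of host memory used here is illustrative, not the role's real formula:

```
# ansible_memtotal_mb is in MiB; 1 MiB = 1048576 bytes
_osd_memory_target: "{{ (ansible_memtotal_mb * 1048576 * 0.7) | int }}"
```
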
Signed-off-by: Neha Ojha <nojha@redhat.com>
(cherry picked from commit 10538e9a23c60c4e634226aafe0456680c5ccc6d)

6 years ago client: fix a typo in create_users_keys.yml
Guillaume Abrioux [Sat, 17 Nov 2018 16:40:35 +0000 (17:40 +0100)]
client: fix a typo in create_users_keys.yml

cd1e4ee024ef400ded25e8c99948648ead3a0892 introduced a typo.
This commit fixes it.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 393ab94728cfff9ab2d4846eb39095becf69ad32)

6 years ago validate: allow stable-3.2 to run with ansible 2.4 v3.2.0rc3
Guillaume Abrioux [Thu, 15 Nov 2018 21:03:28 +0000 (22:03 +0100)]
validate: allow stable-3.2 to run with ansible 2.4

Although this is not officially supported, this commit allows
`stable-3.2` to run against ansible 2.4.
This should ease the transition in RHOSP.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago igw: add support for IPv6 v3.2.0rc2
Jason Dillaman [Fri, 2 Nov 2018 14:30:34 +0000 (10:30 -0400)]
igw: add support for IPv6

Signed-off-by: Jason Dillaman <dillaman@redhat.com>
(cherry picked from commit 0aff0e9ede433d75d040a70d1a21b0acd8f4790f)

Conflicts:
library/igw_purge.py: trivial resolution
roles/ceph-iscsi-gw/library/igw_purge.py: trivial resolution

6 years ago igw: open iscsi target port
Mike Christie [Tue, 30 Oct 2018 19:03:37 +0000 (14:03 -0500)]
igw: open iscsi target port

Open the port the iscsi target uses for iscsi traffic.
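
A sketch of such a rule with the `firewalld` module, assuming the standard iSCSI port 3260:

```
- name: open iscsi target port
  firewalld:
    port: "3260/tcp"
    zone: public
    permanent: true
    immediate: true
    state: enabled
```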

Signed-off-by: Mike Christie <mchristi@redhat.com>
(cherry picked from commit 5ba7d1671ed421995e263f6abf6c2ccffac12422)

6 years ago igw: use api_port variable for firewall port setting
Mike Christie [Thu, 8 Nov 2018 21:23:24 +0000 (15:23 -0600)]
igw: use api_port variable for firewall port setting

Don't hard-code the api port because it might be overridden by the user.

Signed-off-by: Mike Christie <mchristi@redhat.com>
(cherry picked from commit e2f1f81de4c829b52760dc6a98e2f8751d51255e)

6 years ago igw: fix firewall iscsi_group_name check
Mike Christie [Tue, 30 Oct 2018 18:54:52 +0000 (13:54 -0500)]
igw: fix firewall iscsi_group_name check

The firewall setup for igw is not getting applied because iscsi_group_name
does not exist. It should be iscsi_gw_group_name.

Signed-off-by: Mike Christie <mchristi@redhat.com>
(cherry picked from commit a4ff52842cc53917388901971a01242b036455e4)

6 years ago igw: Fix default api port
Mike Christie [Tue, 30 Oct 2018 18:54:03 +0000 (13:54 -0500)]
igw: Fix default api port

The default igw api port is 5000 in the manual setup docs and in the
ceph-iscsi-config package, so this syncs up ansible.

Signed-off-by: Mike Christie <mchristi@redhat.com>
(cherry picked from commit a10853c5f8bbd113b07efbb7ae93a2ef3f8304da)

6 years ago ceph-validate: Added functions to accept true and false
VasishtaShastry [Sun, 28 Oct 2018 17:37:21 +0000 (23:07 +0530)]
ceph-validate: Added functions to accept true and false

ceph-validate used to throw an error when flags were set to 'true' or 'false' instead of True and False.
Now the user can set the flags 'dmcrypt' and 'osd_auto_discovery' to 'true' or 'false'.

Will fix - Bug 1638325

Signed-off-by: VasishtaShastry <vipin.indiasmg@gmail.com>
(cherry picked from commit 098f42f2334c442bf418f09d3f4b3b99750c7ba0)

6 years ago remove configuration files for ceph packages on ubuntu clusters
Rishabh Dave [Wed, 31 Oct 2018 14:46:13 +0000 (10:46 -0400)]
remove configuration files for ceph packages on ubuntu clusters

For apt-get, the purge command needs to be used instead of the remove
command in order to remove the related configuration files. Otherwise,
packages might be shown as installed when running the dpkg command even
after removing them.
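
A sketch using the `apt` module's `purge` flag (the package list variable is hypothetical):

```
- name: purge ceph packages and their configuration files
  apt:
    name: "{{ ceph_packages }}"  # hypothetical list of ceph packages
    state: absent
    purge: true
```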

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1640061
Signed-off-by: Rishabh Dave <ridave@redhat.com>
(cherry picked from commit 640cad3fd810f0aacd41fc35b96f0be3f85fbd0d)

6 years ago igw: stop tcmu-runner on iscsi purge
Mike Christie [Thu, 8 Nov 2018 21:38:08 +0000 (15:38 -0600)]
igw: stop tcmu-runner on iscsi purge

When the iscsi purge playbook is run we stop the gw and api daemons but
not tcmu-runner, which I forgot in the previous PR.

Fixes Red Hat BZ:
https://bugzilla.redhat.com/show_bug.cgi?id=1621255

Signed-off-by: Mike Christie <mchristi@redhat.com>
(cherry picked from commit b523a44a1a9cf60f7512af833d97c52c1dee1bba)

6 years ago tests: test ooo_collocation against v3.0.3 ceph-container image
Guillaume Abrioux [Tue, 6 Nov 2018 08:17:29 +0000 (09:17 +0100)]
tests: test ooo_collocation against v3.0.3 ceph-container image

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 811f043947e946eb60bf2fc70a8f7f300a0cd4dc)

6 years ago rbd-mirror: enable ceph-rbd-mirror.target
Sébastien Han [Mon, 5 Nov 2018 17:53:44 +0000 (18:53 +0100)]
rbd-mirror: enable ceph-rbd-mirror.target

Without this the daemon will never start after reboot.

Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit b7a791e9029e4aca31b00a118e6eb6ac1737dc6d)

6 years ago validate: do not validate ceph_repository if deploying containers
Andrew Schoen [Wed, 31 Oct 2018 15:25:26 +0000 (10:25 -0500)]
validate: do not validate ceph_repository if deploying containers

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1630975
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit 9cd8ecf0cc74e5edfe54cbb5cebf1d72ca3bab8a)

6 years ago rgw: move multisite default variables in ceph-defaults v3.2.0rc1
Guillaume Abrioux [Tue, 30 Oct 2018 14:01:46 +0000 (15:01 +0100)]
rgw: move multisite default variables in ceph-defaults

Move all the rgw multisite variables into ceph-defaults so ceph-validate
can go through them.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago tests: add more memory for rgw_multisite scenarios
Guillaume Abrioux [Mon, 29 Oct 2018 17:50:31 +0000 (18:50 +0100)]
tests: add more memory for rgw_multisite scenarios

Adding more memory to the VMs for rgw_multisite scenarios could avoid this
error I have recently hit in the CI:

(It is worth setting 1024 MB since there are only 2 nodes in those
scenarios.)

```
fatal: [osd0]: FAILED! => {
    "changed": false,
    "cmd": [
        "docker",
        "run",
        "--rm",
        "--entrypoint",
        "/usr/bin/ceph",
        "docker.io/ceph/daemon:latest-luminous",
        "--version"
    ],
    "delta": "0:00:04.799084",
    "end": "2018-10-29 17:10:39.136602",
    "rc": 1,
    "start": "2018-10-29 17:10:34.337518"
}

STDERR:

Traceback (most recent call last):
  File "/usr/bin/ceph", line 125, in <module>
    import rados
ImportError: libceph-common.so.0: cannot map zero-fill pages: Cannot allocate memory
```

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago rgw: move multisite related tasks after docker/main.yml
Guillaume Abrioux [Mon, 29 Oct 2018 15:37:12 +0000 (16:37 +0100)]
rgw: move multisite related tasks after docker/main.yml

We must play this task after the container has started, otherwise the
rgw_multisite tasks will fail.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago rgw: add rgw_multisite for containerized deployments
Guillaume Abrioux [Mon, 29 Oct 2018 13:05:59 +0000 (14:05 +0100)]
rgw: add rgw_multisite for containerized deployments

run commands in containers for containerized deployments.
(At the moment, all commands are run on the host only.)

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago tests: add rgw_multisite functional test
Guillaume Abrioux [Mon, 29 Oct 2018 12:30:59 +0000 (13:30 +0100)]
tests: add rgw_multisite functional test

Add a playbook that will upload a file on the master, then try to get
info from the secondary node; this way we can check whether the
replication is ok.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago rgw: add testing scenario for rgw multisite
Guillaume Abrioux [Wed, 24 Oct 2018 20:27:28 +0000 (22:27 +0200)]
rgw: add testing scenario for rgw multisite

This will set up 2 clusters with rgw multisite enabled.
The first cluster will act as the 'master', the 2nd will be the secondary
one.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago validate: remove check on rgw_multisite_endpoint_addr definition
Guillaume Abrioux [Mon, 29 Oct 2018 11:05:09 +0000 (12:05 +0100)]
validate: remove check on rgw_multisite_endpoint_addr definition

since `rgw_multisite_endpoint_addr` has a default value of
`{{ ansible_fqdn }}`, it shouldn't be mandatory to set this variable.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago rgw: add ceph-validate tasks for multisite, other fixes
Ali Maredia [Fri, 26 Oct 2018 14:39:56 +0000 (14:39 +0000)]
rgw: add ceph-validate tasks for multisite, other fixes

- updated README-MULTISITE
- re-added destroy.yml
- added tasks in ceph-validate to make sure the
rgw multisite vars are set

Signed-off-by: Ali Maredia <amaredia@redhat.com>
6 years ago rgw: add a dedicated variable for multisite endpoint
Guillaume Abrioux [Fri, 26 Oct 2018 09:14:12 +0000 (11:14 +0200)]
rgw: add a dedicated variable for multisite endpoint

We should give users the possibility to set the IP they want as the
multisite endpoint, setting the default value to `{{ ansible_fqdn }}` so as
not to force them to set this variable.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago rgw: update rgw multisite tasks
Ali Maredia [Mon, 18 Sep 2017 22:33:23 +0000 (18:33 -0400)]
rgw: update rgw multisite tasks

- remove destroy tasks
- cleanup conditionals and syntax
- remove unnecessary realm pulls
- enable multisite to be tested in automated
testing infra
- add multisite related vars to main.yml and
group_vars
- update README-MULTISITE
- ensure all `radosgw-admin` commands are being run
on a mon

Signed-off-by: Ali Maredia <amaredia@redhat.com>
6 years ago travis: add ansible-galaxy integration
Sébastien Han [Tue, 30 Oct 2018 11:18:16 +0000 (12:18 +0100)]
travis: add ansible-galaxy integration

This instructs Travis to notify Galaxy when a build completes. Since 3.0,
ansible-galaxy has had the ability to build and push roles from repos
with multiple roles.

Closes: https://github.com/ceph/ceph-ansible/issues/3165
Signed-off-by: Sébastien Han <seb@redhat.com>
6 years ago gitignore: add mergify and travis as exceptions
Sébastien Han [Tue, 30 Oct 2018 11:20:44 +0000 (12:20 +0100)]
gitignore: add mergify and travis as exceptions

Git must notice changes from .travis.yml and .mergify.yml

Signed-off-by: Sébastien Han <seb@redhat.com>
6 years ago contrib: rm script push-roles-to-ansible-galaxy.sh
Sébastien Han [Tue, 30 Oct 2018 11:04:59 +0000 (12:04 +0100)]
contrib: rm script push-roles-to-ansible-galaxy.sh

The script is not used anymore, and soon Travis CI will do this job of
pushing the roles to the galaxy.

Signed-off-by: Sébastien Han <seb@redhat.com>
6 years ago cleanup repo's root
Sébastien Han [Tue, 30 Oct 2018 10:28:23 +0000 (11:28 +0100)]
cleanup repo's root

Remove old files and move scripts to the contrib directory.

Signed-off-by: Sébastien Han <seb@redhat.com>
6 years ago ceph-volume: fix TypeError exception when setting osds-per-device > 1
Maciej Naruszewicz [Fri, 19 Oct 2018 20:40:36 +0000 (22:40 +0200)]
ceph-volume: fix TypeError exception when setting osds-per-device > 1

osds-per-device needs to be passed to run_command as a string.
Otherwise, the expandvars method will try to iterate over an integer.

Signed-off-by: Maciej Naruszewicz <maciej.naruszewicz@intel.com>
6 years ago testinfra: change test osds for containers
Sébastien Han [Mon, 29 Oct 2018 15:24:45 +0000 (16:24 +0100)]
testinfra: change test osds for containers

We do not use @<device> anymore, so we don't need to perform the
readlink check.

Also we are making an exception for ooo, which is still using ceph-disk.

Signed-off-by: Sébastien Han <seb@redhat.com>
6 years ago ceph_volume: add container support for batch
Sébastien Han [Fri, 26 Oct 2018 14:30:32 +0000 (16:30 +0200)]
ceph_volume: add container support for batch

https://tracker.ceph.com/issues/36363 has been resolved and the patch
has been backported to luminous and mimic so let's enable the container
support.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1541415
Signed-off-by: Sébastien Han <seb@redhat.com>
6 years ago test_osd: dynamically get the osd container
Sébastien Han [Mon, 29 Oct 2018 11:00:40 +0000 (12:00 +0100)]
test_osd: dynamically get the osd container

Do not enforce the container name since this will fail when we have
multiple VMs running OSDs.

Signed-off-by: Sébastien Han <seb@redhat.com>
6 years ago test: convert all the tests to use lvm
Sébastien Han [Wed, 10 Oct 2018 19:29:56 +0000 (15:29 -0400)]
test: convert all the tests to use lvm

ceph-disk is now deprecated in ceph-ansible, so let's convert all the CI
tests to use lvm instead of ceph-disk.

Signed-off-by: Sébastien Han <seb@redhat.com>
6 years ago tox: change container image to use master
Sébastien Han [Thu, 25 Oct 2018 14:15:36 +0000 (16:15 +0200)]
tox: change container image to use master

We have a latest-master image which contains builds from upstream ceph,
so let's use it to verify builds.

Signed-off-by: Sébastien Han <seb@redhat.com>
6 years ago test: remove ceph-disk CI tests
Sébastien Han [Wed, 10 Oct 2018 18:55:20 +0000 (14:55 -0400)]
test: remove ceph-disk CI tests

Since we are removing the ceph-disk tests from the CI in master,
there is no need to have the functional tests in master anymore.

Signed-off-by: Sébastien Han <seb@redhat.com>
6 years ago roles: fix *_docker_memory_limit default value
Guillaume Abrioux [Mon, 29 Oct 2018 10:46:46 +0000 (11:46 +0100)]
roles: fix *_docker_memory_limit default value

append the 'm' suffix to specify the unit size used in all
`*_docker_memory_limit` defaults.
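
A sketch of what such a default looks like with the suffix (the value shown is illustrative):

```
ceph_osd_docker_memory_limit: "{{ ansible_memtotal_mb }}m"
```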

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago roles: do not limit docker_memory_limit for various daemons
Neha Ojha [Thu, 25 Oct 2018 17:45:00 +0000 (17:45 +0000)]
roles: do not limit docker_memory_limit for various daemons

Since we do not have enough data to put valid upper bounds for the memory
usage of these daemons, do not put artificial limits by default. This will
help us avoid failures like OOM kills due to low default values.

Whenever required, these limits can be manually enforced by the user.

More details in
https://bugzilla.redhat.com/show_bug.cgi?id=1638148

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1638148
Signed-off-by: Neha Ojha <nojha@redhat.com>
6 years ago Merge branch 'jcsp-wip-rm-calamari'
Sébastien Han [Mon, 29 Oct 2018 13:53:47 +0000 (14:53 +0100)]
Merge branch 'jcsp-wip-rm-calamari'

6 years ago Merge branch 'master' into wip-rm-calamari 3147/head
Sébastien Han [Mon, 29 Oct 2018 13:50:37 +0000 (14:50 +0100)]
Merge branch 'master' into wip-rm-calamari

6 years ago infrastructure playbooks: ensure nvme_device is defined in lv-create.yml
Ali Maredia [Mon, 29 Oct 2018 06:01:25 +0000 (06:01 +0000)]
infrastructure playbooks: ensure nvme_device is defined in lv-create.yml

Signed-off-by: Ali Maredia <amaredia@redhat.com>
6 years ago nfs: do not create the nfs user if already present
Sébastien Han [Fri, 26 Oct 2018 13:27:33 +0000 (15:27 +0200)]
nfs: do not create the nfs user if already present

Check if the user exists and skip its creation if true.

Closes: https://github.com/ceph/ceph-ansible/issues/3254
Signed-off-by: Sébastien Han <seb@redhat.com>
6 years ago Fix problem with ceph_key in python3
Jairo Llopis [Thu, 4 Oct 2018 05:48:03 +0000 (07:48 +0200)]
Fix problem with ceph_key in python3

Pretty basic problem: `iteritems` was removed in Python 3.

Signed-off-by: Jairo Llopis <yajo.sk8@gmail.com>
6 years ago ceph_volume: better error handling
Sébastien Han [Wed, 24 Oct 2018 14:55:52 +0000 (16:55 +0200)]
ceph_volume: better error handling

When loading the json, if invalid, we should fail with a meaningful
error.

Signed-off-by: Sébastien Han <seb@redhat.com>
6 years ago ceph_volume: expose ceph-volume logs on the host
Sébastien Han [Wed, 24 Oct 2018 14:53:12 +0000 (16:53 +0200)]
ceph_volume: expose ceph-volume logs on the host

This will tremendously help debugging failures while performing any
ceph-volume command in containers.

Signed-off-by: Sébastien Han <seb@redhat.com>
6 years ago resync group_vars/*.sample files
Guillaume Abrioux [Fri, 26 Oct 2018 07:46:29 +0000 (09:46 +0200)]
resync group_vars/*.sample files

ee2d52d33df2a311cdf0ff62abd353fccb3affbc missed this sync between
ceph-defaults/defaults/main.yml and group_vars/all.yml.sample

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago tox: fix a typo
Guillaume Abrioux [Thu, 25 Oct 2018 12:42:54 +0000 (14:42 +0200)]
tox: fix a typo

the line setting `ANSIBLE_CONFIG` obviously contains a typo introduced
by 1e283bf69be8b9efbc1a7a873d91212ad57c7351

`ANSIBLE_CONFIG` has to point to a path only (path to an ansible.cfg)

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago igw: stop daemons on purge all calls v3.2.0beta9
Mike Christie [Wed, 12 Sep 2018 20:37:44 +0000 (15:37 -0500)]
igw: stop daemons on purge all calls

When purging the entire igw config (lio and rbd), stop and disable the api
and gw daemons.

Fixes Red Hat BZ
https://bugzilla.redhat.com/show_bug.cgi?id=1621255

Signed-off-by: Mike Christie <mchristi@redhat.com>
6 years ago ceph-validate: avoid "list index out of range" error
Rishabh Dave [Tue, 9 Oct 2018 20:47:40 +0000 (02:17 +0530)]
ceph-validate: avoid "list index out of range" error

Be sure that error.path has more than one member before using them.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
6 years ago ceph-infra: reload firewall after rules are added v3.2.0beta8
Guillaume Abrioux [Tue, 23 Oct 2018 07:49:50 +0000 (09:49 +0200)]
ceph-infra: reload firewall after rules are added

we ensure that firewalld is installed and running before adding any
rule. It makes no sense anymore not to reload firewalld once the rules
are added.
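
A sketch of the reload step, using the `systemd` module:

```
- name: reload firewalld
  systemd:
    name: firewalld
    state: reloaded
```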

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago allow custom pool size
Rishabh Dave [Mon, 1 Oct 2018 15:11:13 +0000 (11:11 -0400)]
allow custom pool size

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1596339
Signed-off-by: Rishabh Dave <ridave@redhat.com>
6 years ago tests: remove unnecessary variables definition v3.2.0beta7
Guillaume Abrioux [Fri, 19 Oct 2018 11:19:59 +0000 (13:19 +0200)]
tests: remove unnecessary variables definition

since we set `configure_firewall: true` in
`ceph-defaults/defaults/main.yml`, there is no need to explicitly set it
in the `centos7_cluster` and `docker_cluster` testing scenarios.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago defaults: set default `configure_firewall` to `True`
Guillaume Abrioux [Fri, 19 Oct 2018 11:16:23 +0000 (13:16 +0200)]
defaults: set default `configure_firewall` to `True`

Let's configure firewalld by default.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1526400
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago rolling_update: fix upgrade when using fqdn
Sébastien Han [Thu, 9 Aug 2018 09:32:53 +0000 (11:32 +0200)]
rolling_update: fix upgrade when using fqdn

Clusters that were deployed using 'mon_use_fqdn' have a different unit
name, so during the upgrade this must be used, otherwise the upgrade will
fail, looking for a unit that does not exist.
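
A sketch of the unit-name selection during the upgrade:

```
- name: stop the ceph mon unit
  systemd:
    name: "ceph-mon@{{ ansible_fqdn if mon_use_fqdn | default(false) | bool else ansible_hostname }}"
    state: stopped
```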

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1597516
Signed-off-by: Sébastien Han <seb@redhat.com>
6 years ago validate: check the version of python-notario
Andrew Schoen [Tue, 16 Oct 2018 15:20:54 +0000 (10:20 -0500)]
validate: check the version of python-notario

If the version of python-notario is < 0.0.13, an error message is given
like "TypeError: validate() got an unexpected keyword argument
'defined_keys'", which is not helpful in figuring
out that you've got an incorrect version of python-notario.

This check will avoid that situation by telling the user that they need
to upgrade python-notario before they hit that error.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
6 years ago iscsi: fix networking issue on containerized env
Guillaume Abrioux [Thu, 18 Oct 2018 20:29:02 +0000 (22:29 +0200)]
iscsi: fix networking issue on containerized env

The iscsi-gw containers can't reach monitors without `--net=host`

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago Revert "tests: test `test_all_docker_osds_are_up_and_in()` from mon nodes"
Guillaume Abrioux [Thu, 18 Oct 2018 13:43:36 +0000 (15:43 +0200)]
Revert "tests: test `test_all_docker_osds_are_up_and_in()` from mon nodes"

This approach doesn't work with all scenarios because it compares an
expected local OSD number to the global OSD number found in the whole
cluster.

This reverts commit b8ad35ceb99cdbd1644c79dd689b818f095ba8b8.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago tests: set configure_firewall: true in centos7|docker_cluster
Guillaume Abrioux [Thu, 18 Oct 2018 11:45:14 +0000 (13:45 +0200)]
tests: set configure_firewall: true in centos7|docker_cluster

This way the CI will cover this part of the code.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago infra: move restart fw handler in ceph-infra role
Guillaume Abrioux [Thu, 18 Oct 2018 11:41:49 +0000 (13:41 +0200)]
infra: move restart fw handler in ceph-infra role

Move the handler that restarts the firewall into the ceph-infra role.

Closes: #3243
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago tests: test `test_all_docker_osds_are_up_and_in()` from mon nodes v3.2.0beta6
Guillaume Abrioux [Tue, 16 Oct 2018 14:25:12 +0000 (16:25 +0200)]
tests: test `test_all_docker_osds_are_up_and_in()` from mon nodes

Let's get the osd tree from the mons instead of the osds.
This way we don't have to predict an OSD container name.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago add-osds: followup on 3632b26
Guillaume Abrioux [Wed, 17 Oct 2018 11:57:09 +0000 (13:57 +0200)]
add-osds: followup on 3632b26

Three fixes:

- fix a typo in vagrant_variables that caused a networking issue in the
containerized scenario.
- add containerized_deployment: true
- remove a useless block of code: the fact docker_exec_cmd is set in
ceph-defaults, which is played right after.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago infra: add a gather-ceph-logs.yml playbook
Sébastien Han [Thu, 24 May 2018 17:47:29 +0000 (10:47 -0700)]
infra: add a gather-ceph-logs.yml playbook

Add a gather-ceph-logs.yml which will log onto all the machines from
your inventory and will gather ceph logs. This is not intended to work
on containerized environments since the logs are stored in journald.
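
A sketch of what such a playbook boils down to (paths are illustrative):

```
- hosts: all
  gather_facts: false
  tasks:
    - name: fetch the main ceph log from every node
      fetch:
        src: /var/log/ceph/ceph.log   # illustrative; the playbook gathers more files
        dest: "fetched-logs/{{ inventory_hostname }}/"
```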

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1582280
Signed-off-by: Sébastien Han <seb@redhat.com>
6 years ago tests: add tests for day-2-operation playbook
Guillaume Abrioux [Tue, 16 Oct 2018 15:05:10 +0000 (17:05 +0200)]
tests: add tests for day-2-operation playbook

Adding testing scenarios for day-2-operation playbook.

Steps:
- deploys a cluster,
- run testinfra,
- test idempotency,
- add a new osd node,
- run testinfra

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago infra: rename osd-configure to add-osd and improve it
Sébastien Han [Thu, 27 Sep 2018 14:31:22 +0000 (16:31 +0200)]
infra: rename osd-configure to add-osd and improve it

The playbook has various improvements:

* run ceph-validate role before doing anything
* run ceph-fetch-keys only on the first monitor of the inventory list
* set the noup flag so PGs get distributed once all the new OSDs have been
added to the cluster, and unset it when they are up and running (sketched
below)
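
A sketch of the flag handling around the OSD deployment, assuming the usual `cluster` and `mon_group_name` variables:

```
- name: set noup flag
  command: "ceph --cluster {{ cluster }} osd set noup"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true

# ... the ceph-osd role runs here and brings up the new OSDs ...

- name: unset noup flag
  command: "ceph --cluster {{ cluster }} osd unset noup"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true
```
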

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1624962
Signed-off-by: Sébastien Han <seb@redhat.com>
6 years ago ceph-fetch-keys: refact
Sébastien Han [Thu, 27 Sep 2018 14:29:22 +0000 (16:29 +0200)]
ceph-fetch-keys: refact

This commit simplifies the usage of the ceph-fetch-keys role. The role
now has a nicer way to find various ceph keys and fetch them onto the
ansible server.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1624962
Signed-off-by: Sébastien Han <seb@redhat.com>
6 years ago Add ability to use a different client container
Andy McCrae [Fri, 5 Oct 2018 13:36:36 +0000 (14:36 +0100)]
Add ability to use a different client container

Currently a throw-away container is built to run ceph client
commands to set up users, pools & auth keys. This utilises
the same base ceph container, which has all the ceph services
inside it.

This PR allows the use of a separate container if the deployer
wishes - but defaults to using the same full ceph container.

This can be used for different architectures or distributions,
which may support the Ceph client, but not the Ceph server,
and allows the deployer to build and specify a separate client
container if need be.

Signed-off-by: Andy McCrae <andy.mccrae@gmail.com>
6 years ago infra: fix wrong condition on firewalld start task
Guillaume Abrioux [Tue, 16 Oct 2018 13:09:48 +0000 (15:09 +0200)]
infra: fix wrong condition on firewalld start task

a non-skipped task won't have the `skipped` attribute, so the `start
firewalld` task will complain about that.
Indeed, the `skipped` and `rc` attributes won't exist, since the first task
`check firewalld installation on redhat or suse` won't be skipped in
the case of a non-containerized deployment.

Fixes: #3236
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1541840
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago ceph-defaults: set ceph_stable_openstack_release_uca to queens
Christian Berendt [Thu, 11 Oct 2018 10:26:04 +0000 (12:26 +0200)]
ceph-defaults: set ceph_stable_openstack_release_uca to queens

Liberty is no longer available in the UCA. The last available release there
is currently Queens.

Signed-off-by: Christian Berendt <berendt@betacloud-solutions.de>
6 years ago contrib: add a bash script to snapshot libvirt vms
Guillaume Abrioux [Mon, 15 Oct 2018 21:42:16 +0000 (23:42 +0200)]
contrib: add a bash script to snapshot libvirt vms

This script is still a 'work in progress' but can be used to take
snapshots of libvirt VMs.
This can save some time when deploying again and again.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago handler: remove some leftovers in restart_*_daemon.sh.j2
Guillaume Abrioux [Mon, 15 Oct 2018 13:32:17 +0000 (15:32 +0200)]
handler: remove some leftovers in restart_*_daemon.sh.j2

Remove some legacy bits in those restart scripts.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago doc: update default osd_memory_target value
Guillaume Abrioux [Mon, 15 Oct 2018 21:54:47 +0000 (23:54 +0200)]
doc: update default osd_objectstore value

since dc3319c3c4e2fb58cb1b5e6c60f165ed28260dc8 this should be reflected
in the doc.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>