git.apps.os.sepia.ceph.com Git - ceph-ansible.git/log
6 years ago  shrink-rbdmirror: check if rbdmirror is well removed from cluster 4238/head
Guillaume Abrioux [Thu, 11 Jul 2019 09:26:50 +0000 (11:26 +0200)]
shrink-rbdmirror: check if rbdmirror is well removed from cluster

This commit adds a check to ensure the daemon has been removed from the
cluster.
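
A minimal sketch of what such a check could look like (command, variable
names and retry values are illustrative, not the actual playbook code):

```
- name: ensure rbd-mirror is removed from the service map
  command: "ceph --cluster {{ cluster }} service dump --format json"
  register: service_dump
  retries: 12
  delay: 2
  until: "'rbd-mirror' not in service_dump.stdout"
  changed_when: false
```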

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  tests/functional: add a test for shrink-rbdmirror.yml
Rishabh Dave [Tue, 25 Jun 2019 13:30:53 +0000 (19:00 +0530)]
tests/functional: add a test for shrink-rbdmirror.yml

Add a new functional test that deploys a Ceph cluster with three nodes
for MON, OSD and RBD Mirror, and then runs shrink-rbdmirror.yml to test it.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
6 years ago  add a playbook that removes rbd-mirror from a node
Rishabh Dave [Tue, 25 Jun 2019 13:08:17 +0000 (18:38 +0530)]
add a playbook that removes rbd-mirror from a node

Add a playbook named "shrink-rbdmirror.yml" in infrastructure-playbooks/
that removes an RBD Mirror from a node in an already deployed Ceph
cluster.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1677431
Signed-off-by: Rishabh Dave <ridave@redhat.com>
6 years ago  ceph-infra: update handler with daemon variable
Dimitri Savineau [Wed, 10 Jul 2019 18:58:58 +0000 (14:58 -0400)]
ceph-infra: update handler with daemon variable

Both the ntp and chrony daemons use a variable for the service name
because it can differ depending on the GNU/Linux distribution.
This was updated in 9d88d3199 for chrony, but only for the start task,
not for the handler.
This commit fixes it for both ntp and chrony.
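
For illustration, a handler of this shape, where the service name comes
from a variable instead of being hardcoded (the variable name is an
assumption):

```
- name: restart chronyd
  service:
    # the name differs per distribution, e.g. chronyd on RHEL, chrony on Ubuntu
    name: "{{ chrony_daemon_name }}"
    state: restarted
```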

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  ceph-infra: Open prometheus port
Dimitri Savineau [Wed, 10 Jul 2019 21:19:29 +0000 (17:19 -0400)]
ceph-infra: Open prometheus port

The Prometheus port 9090 isn't open in the firewall configuration.
Also, the dashboard task on the grafana node isn't required because
it's already present on the mgr node.
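
A sketch of the corresponding firewalld task (zone and task name are
assumptions):

```
- name: open prometheus port
  firewalld:
    port: "9090/tcp"
    zone: public
    permanent: true
    immediate: true
    state: enabled
```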

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  handler: remove legacy condition
Guillaume Abrioux [Tue, 9 Jul 2019 14:03:26 +0000 (16:03 +0200)]
handler: remove legacy condition

Since everything is already in a block with the same condition, there's
no need to repeat the condition on each of these tasks.
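
For example, in a layout like this sketch, the per-task `when` that
duplicated the block condition can simply be dropped (task names are
illustrative):

```
- name: mon handler tasks
  when: mon_group_name in group_names
  block:
    - name: copy mon restart script
      template:
        src: restart_mon_daemon.sh.j2
        dest: /tmp/restart_mon_daemon.sh
        mode: '0750'
      # no need to repeat "when: mon_group_name in group_names" here
```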

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  validate: improve message printed in check_devices.yml
Guillaume Abrioux [Wed, 10 Jul 2019 13:08:39 +0000 (15:08 +0200)]
validate: improve message printed in check_devices.yml

The message prints the whole content of the registered variable from the
playbook, which is not needed and makes the message pretty unclear and
unreadable.

```
"msg": "{'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'err': u'Error: Could not stat device /dev/sdf - No such file or directory.\\n', 'item': u'/dev/sdf', '_ansible_item_result': True, u'failed': False, '_ansible_item_label': u'/dev/sdf', u'msg': u\"Error while getting device information with parted script: '/sbin/parted -s -m /dev/sdf -- unit 'MiB' print'\", u'rc': 1, u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/sdf', u'unit': u'MiB'}}, 'failed_when_result': False, '_ansible_ignore_errors': None, u'out': u''} is not a block special file!"
```
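
A clearer failure task would print only the device path, along these
lines (the registered variable name and condition are assumptions):

```
- name: fail if one of the devices is not a block special file
  fail:
    msg: "{{ item.item }} is not a block special file!"
  with_items: "{{ parted_results.results }}"
  when: item.rc | default(0) != 0
```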

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1719023
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  ceph-iscsi: Update gateway config/template
Dimitri Savineau [Mon, 8 Jul 2019 18:36:07 +0000 (14:36 -0400)]
ceph-iscsi: Update gateway config/template

- Remove gateway_keyring from the configuration file because it's
not used in ceph-iscsi 3.x release.
- Use config_template instead of template module for iscsi-gateway
configuration file. Because the file is an ini file and we might want
to override more parameters than those present in ceph-ansible.
- Because we can now set the pool name in the configuration, we should
use a variable for that. This refactors the code to use the iscsi_pool_*
variables, which are also used to configure the pool size.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  ceph-dashboard: remove bool filter for rgw vars
Dimitri Savineau [Mon, 8 Jul 2019 13:43:13 +0000 (09:43 -0400)]
ceph-dashboard: remove bool filter for rgw vars

Some dashboard_rgw_api_* variables are using the bool filter but those
variables are strings with an empty string as default value.
So we should test the variable against an empty string instead of a
bool.

dashboard_rgw_api_host: ''
dashboard_rgw_api_port: ''
dashboard_rgw_api_scheme: ''
dashboard_rgw_api_admin_resource: ''
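
An empty-string test then looks like this sketch (the exact task in the
role may differ):

```
- name: set the rgw api host for the dashboard
  command: "ceph dashboard set-rgw-api-host {{ dashboard_rgw_api_host }}"
  changed_when: false
  when: dashboard_rgw_api_host != ''
```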

Resolves: #4179

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  dashboard: Use upstream default port
Boris Ranto [Tue, 9 Jul 2019 20:32:38 +0000 (22:32 +0200)]
dashboard: Use upstream default port

We are currently using an incorrect default dashboard port: upstream
uses 8443 instead of 8234 by default. This should get us closer to the
upstream project.

Signed-off-by: Boris Ranto <branto@redhat.com>
6 years ago  tests/functional: add a test for shrink-mgr.yml
Rishabh Dave [Thu, 20 Jun 2019 10:23:22 +0000 (15:53 +0530)]
tests/functional: add a test for shrink-mgr.yml

Add a new functional test that deploys a Ceph cluster with three nodes
for MON, OSD and MGR and then runs shrink-mgr.yml to test it.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
6 years ago  add a playbook that removes manager from a node
Rishabh Dave [Wed, 12 Jun 2019 04:46:00 +0000 (10:16 +0530)]
add a playbook that removes manager from a node

Add a playbook, named "shrink-mgr.yml", in infrastructure-playbooks/
that removes an MGR from a node in an already deployed Ceph cluster.
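
The core of such a playbook might look like this sketch (variable and
task names are assumptions):

```
- name: stop and disable the manager service
  service:
    name: "ceph-mgr@{{ hostvars[mgr_to_kill]['ansible_hostname'] }}"
    state: stopped
    enabled: false
  delegate_to: "{{ mgr_to_kill }}"
```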

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1677431
Signed-off-by: Rishabh Dave <ridave@redhat.com>
6 years ago  ceph-handler: fix cluster name in socket path
Dimitri Savineau [Wed, 3 Jul 2019 15:26:42 +0000 (11:26 -0400)]
ceph-handler: fix cluster name in socket path

c90f605b5 introduced the default ceph cluster name in the rgw
socket path used by the rgw restart script, but it should use the
`cluster` variable instead.
This commit also fixes this in the osd restart script.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  shrink-mds: refact post tasks
Guillaume Abrioux [Wed, 3 Jul 2019 08:45:46 +0000 (10:45 +0200)]
shrink-mds: refact post tasks

This commit refactors the way we check that the "mds_to_kill" node is
properly stopped.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Signed-off-by: Rishabh Dave <ridave@redhat.com>
6 years ago  tests/functional: add a test for shrink-mds.yml
Rishabh Dave [Fri, 10 May 2019 16:10:07 +0000 (21:40 +0530)]
tests/functional: add a test for shrink-mds.yml

Add a new functional test that deploys a Ceph cluster with three nodes
for MON, OSD and MDS and then runs shrink-mds.yml to test it.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
6 years ago  add a playbook that removes mds from a node
Rishabh Dave [Thu, 20 Jun 2019 09:59:25 +0000 (15:29 +0530)]
add a playbook that removes mds from a node

Add a playbook, named "shrink-mds.yml", in infrastructure-playbooks/
that removes an MDS from a node in an already deployed Ceph cluster.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1677431
Signed-off-by: Rishabh Dave <ridave@redhat.com>
6 years ago  Add package-install tag on ceph-grafana-dashboard pkg install.
fmount [Thu, 4 Jul 2019 09:43:39 +0000 (11:43 +0200)]
Add package-install tag on ceph-grafana-dashboard pkg install.

According to the OSP pattern, we need the package-install tag
to control what is installed on the host. This commit just adds
the missing tag to meet the TripleO requirements.

See: /issues/4197 for details

Fixes: #4197
Signed-off-by: fmount <fpantano@redhat.com>
6 years ago  ceph-iscsi-gw: Update log directories bind mount
Dimitri Savineau [Thu, 4 Jul 2019 14:10:00 +0000 (10:10 -0400)]
ceph-iscsi-gw: Update log directories bind mount

On containerized deployments we need to bind mount the ceph-iscsi
directory to avoid writing the logs in the container.
The /var/log/ceph directory isn't used by the rbd-target-api/gw services
because they have their own log directories.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  ceph-mon: Fix cluster name parameter
ilyashestopalov [Wed, 3 Jul 2019 16:58:32 +0000 (19:58 +0300)]
ceph-mon: Fix cluster name parameter

This fixes the ability to add nodes with the monitor role to an existing
cluster whose name differs from the default name.

Signed-off-by: ilyashestopalov <usr.tester@yandex.ru>
6 years ago  iscsi: refact deprecated variables
Guillaume Abrioux [Tue, 2 Jul 2019 13:30:12 +0000 (15:30 +0200)]
iscsi: refact deprecated variables

This commit moves some old variables into ceph-defaults so the
`use_new_ceph_iscsi` fact can be set in the ceph-facts role.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  igw: Add check for missing iqn
Mike Christie [Thu, 30 May 2019 16:17:09 +0000 (11:17 -0500)]
igw: Add check for missing iqn

If the user is still using the older packages and does not set up
the target iqn, they will just get a vague error message later on.
This adds a check during the validate stage, so the problem is clear to
the user.

Signed-off-by: Mike Christie <mchristi@redhat.com>
6 years ago  igw: Add check for mismatch ceph-iscsi and iscsigws.yml settings
Mike Christie [Mon, 13 May 2019 17:59:27 +0000 (12:59 -0500)]
igw: Add check for mismatch ceph-iscsi and iscsigws.yml settings

If the user has manually installed ceph-iscsi but is trying to set up an
iscsi object in iscsigws.yml, they will just get a python crash. This patch
adds a check and a more user-friendly error message for this case.

Signed-off-by: Mike Christie <mchristi@redhat.com>
6 years ago  igw: Update tests to use ceph-iscsi package
Mike Christie [Thu, 6 Jun 2019 14:47:56 +0000 (09:47 -0500)]
igw: Update tests to use ceph-iscsi package

gateway_ip_list is deprecated and is only used with the old
ceph-iscsi-config/cli packages that are no longer being developed
(the GitHub repos are archived). Because ceph-iscsi-config/cli is no longer
being worked on, this modifies the tests to stress the ceph-iscsi
based installs.

Signed-off-by: Mike Christie <mchristi@redhat.com>
6 years ago  igw: Update iscsigws.yml.sample for ceph-iscsi support
Mike Christie [Sun, 12 May 2019 07:38:46 +0000 (02:38 -0500)]
igw: Update iscsigws.yml.sample for ceph-iscsi support

Update iscsigws.yml.sample to document that we cannot use ansible to
set up iSCSI objects when using the new ceph-iscsi package.

Signed-off-by: Mike Christie <mchristi@redhat.com>
6 years ago  igw: Support ceph-iscsi package for install
Mike Christie [Thu, 30 May 2019 16:28:34 +0000 (11:28 -0500)]
igw: Support ceph-iscsi package for install

This adds support for the ceph-iscsi package during install. ceph-iscsi
does not support setting up targets/gws, luns and clients with the
current library/igw_* code. Going forward those tasks should be done with
gwcli or the dashboard. ceph-iscsi will only be used if the user has no
iscsi objects set up, so we do not break existing setups.

The next patch will update iscsigws.yml.sample to document that
users must not set up any iscsi objects if they want to use the new
package and tools.

Signed-off-by: Mike Christie <mchristi@redhat.com>
6 years ago  igw: drop gateway_ip_list for container setups
Mike Christie [Thu, 30 May 2019 15:55:45 +0000 (10:55 -0500)]
igw: drop gateway_ip_list for container setups

The gateway_ip_list is not used in container setups, so drop it
for that case.

Signed-off-by: Mike Christie <mchristi@redhat.com>
6 years ago  igw: move gateway_ip_list check to validate role
Mike Christie [Thu, 30 May 2019 15:54:04 +0000 (10:54 -0500)]
igw: move gateway_ip_list check to validate role

Signed-off-by: Mike Christie <mchristi@redhat.com>
6 years ago  igw: Support new ceph-iscsi package during purge
Mike Christie [Sat, 11 May 2019 22:23:37 +0000 (17:23 -0500)]
igw: Support new ceph-iscsi package during purge

The ceph-iscsi-config and ceph-iscsi-cli packages were combined into
ceph-iscsi and its APIs changed. This fixes up the iscsi purge task to
support both the new API and the old one.

Signed-off-by: Mike Christie <mchristi@redhat.com>
6 years ago  tests: wait 30sec before running testinfra
Guillaume Abrioux [Wed, 3 Jul 2019 13:27:18 +0000 (15:27 +0200)]
tests: wait 30sec before running testinfra

Add back a 30s sleep after the nodes have rebooted, before running
testinfra.
This was accidentally removed by d5be83e

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  ceph-handler: Fix rgw socket in restart script
Dimitri Savineau [Tue, 7 May 2019 20:33:21 +0000 (16:33 -0400)]
ceph-handler: Fix rgw socket in restart script

Since Mimic the radosgw socket has two extra fields in the socket
name (before the .asok suffix): <pid>.<ctid>

Before:
  /var/run/ceph/ceph-client.rgw.cephaio-1.asok
After:
  /var/run/ceph/ceph-client.rgw.cephaio-1.16913.23928832.asok

The radosgw restart script doesn't handle this and could fail during
an upgrade.
If the SOCKETS variable isn't defined in the script, the test
command won't fail because its return code is 0:

$ test -S
$ echo $?
0

There are multiple issues in that script:
  - The default SOCKETS value isn't defined due to a typo
(SOCKET vs SOCKETS).
  - Because the socket name includes the pid, we need to check the
socket name after the service restart.
  - After restarting the radosgw service, we need to wait a few seconds
otherwise the socket won't be created.
  - Update the wget parameters because the command runs in a loop.
We now use the same options as curl.
  - The check_rest function doesn't test the radosgw at all due to
a wrong test command (testing against a string) and always returns 0.
It needs to use the DOCKER_EXECS variable in order to execute the
command.

$ test 'wget http://192.168.100.11:8080'
$ echo $?
0

Also remove the test based on ansible_fqdn because we only use
the ansible_hostname + rgw instance name.

Finally, group all the for loops into a single one.

Resolves: #3926

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  Add radosgw_frontend_ssl_certificate parameter
Giulio Fidente [Wed, 19 Jun 2019 12:59:15 +0000 (14:59 +0200)]
Add radosgw_frontend_ssl_certificate parameter

This is necessary when configuring RGW with SSL because
in addition to passing specific frontend options, civetweb
appends the 's' character to the binding port and beast uses
ssl_endpoint instead of endpoint.
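
For illustration, a group_vars snippet using the new parameter (paths
are placeholders); the comment shows the frontend option it should
produce for beast:

```
radosgw_frontend_type: beast
radosgw_frontend_port: 443
radosgw_frontend_ssl_certificate: /etc/ceph/private/rgw-cert.pem
# expected result:
#   rgw frontends = beast ssl_endpoint=0.0.0.0:443 ssl_certificate=/etc/ceph/private/rgw-cert.pem
```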

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1722071
Signed-off-by: Giulio Fidente <gfidente@redhat.com>
6 years ago  tox-podman.ini: add missing reruns option
Dimitri Savineau [Fri, 28 Jun 2019 20:07:20 +0000 (16:07 -0400)]
tox-podman.ini: add missing reruns option

The first py.test call didn't have the reruns option. The commit
fixes it.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  Add condition on dashboard installer phase
Dimitri Savineau [Fri, 28 Jun 2019 18:14:53 +0000 (14:14 -0400)]
Add condition on dashboard installer phase

Even if the dashboard feature is disabled, the installer status will
still report the dashboard, grafana and node-exporter role timings.

INSTALLER STATUS **********************************
Install Ceph Monitor           : Complete (0:01:21)
Install Ceph Manager           : Complete (0:00:49)
Install Ceph Dashboard         : Complete (0:00:00)
Install Ceph Grafana           : Complete (0:00:02)

We need to set the dashboard_enabled condition on those installer
phase pre/post tasks.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  nfs: clean template
Guillaume Abrioux [Fri, 21 Jun 2019 16:04:47 +0000 (18:04 +0200)]
nfs: clean template

remove legacy options

```
ganesha.nfsd-115[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:13): Unknown parameter (Dir_Max)
ganesha.nfsd-115[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:14): Unknown parameter (Cache_FDs)

```

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  containers: improve logging
Guillaume Abrioux [Wed, 26 Jun 2019 13:38:58 +0000 (15:38 +0200)]
containers: improve logging

bindmount /var/log/ceph on all containers so it's possible to retrieve
logs from the host.

related ceph-container PR: ceph/ceph-container#1408

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1710548
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  ceph-osd: Add CONTAINER_IMAGE env variable
Dimitri Savineau [Thu, 27 Jun 2019 14:26:40 +0000 (10:26 -0400)]
ceph-osd: Add CONTAINER_IMAGE env variable

This environment variable was added in cb381b4 but was removed in
4d35e9e.
This commit reintroduces the change.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  Set grafana_server_addr fact for ipv6 scenarios.
fmount [Wed, 19 Jun 2019 11:52:31 +0000 (13:52 +0200)]
Set grafana_server_addr fact for ipv6 scenarios.

As bz1721914 describes, the grafana_server_addr
fact is not defined when the ip_version used is ipv6.
This commit adds the ip_version condition to set
this fact correctly.
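
A sketch of the ipv6 branch of that fact setting (the address lookup
shown here is simplified compared to the role code):

```
- name: set grafana_server_addr fact (ipv6)
  set_fact:
    grafana_server_addr: "{{ hostvars[groups['grafana-server'][0]]['ansible_default_ipv6']['address'] }}"
  when: ip_version == 'ipv6'
```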

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1721914
Signed-off-by: fmount <fpantano@redhat.com>
6 years ago  facts: fix bug in grafana_server_addr fact setting
Guillaume Abrioux [Mon, 24 Jun 2019 20:30:23 +0000 (22:30 +0200)]
facts: fix bug in grafana_server_addr fact setting

If no grafana-server group is defined while an mgr group is, that task
will fail because `hostvars[groups[grafana_server_group_name][0]]` can't
return anything since `groups['grafana-server']` will be a non-existent
key.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  tests: clean nfs_ganesha variables
Guillaume Abrioux [Tue, 25 Jun 2019 16:03:27 +0000 (18:03 +0200)]
tests: clean nfs_ganesha variables

- clean up some leftovers.
- move nfs_ganesha_[stable|dev] into group_vars so dev_setup.yml can modify them.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  nfs: add missing | bool filters
Guillaume Abrioux [Tue, 25 Jun 2019 15:11:28 +0000 (17:11 +0200)]
nfs: add missing | bool filters

To address this warning:
```
[DEPRECATION WARNING]: evaluating nfs_ganesha_dev as a bare variable, this
behaviour will go away and you might need to add |bool to the expression in the
 future
```

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  nfs: remove duplicate task
Guillaume Abrioux [Tue, 25 Jun 2019 08:23:57 +0000 (10:23 +0200)]
nfs: remove duplicate task

This task is already present in pre_requisite_non_container.yml

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  tests: test nfs-ganesha deployment
Guillaume Abrioux [Tue, 25 Jun 2019 05:44:43 +0000 (07:44 +0200)]
tests: test nfs-ganesha deployment

Add back the nfs-ganesha deployment testing which was removed because of
broken dependencies.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  validate.py: Fix alphabetical order on uca
Gabriel Ramirez [Tue, 25 Jun 2019 04:52:11 +0000 (21:52 -0700)]
validate.py: Fix alphabetical order on uca

Alphabetized ceph_repository_uca keys due to errors validating when
using UCA/queens repository on Ubuntu 16.04

An exception occurred during task execution. To see the full
traceback, use -vvv. The error was:
SchemaError: -> ceph_stable_repo_uca  schema item is not
alphabetically ordered

Closes: #4154
Signed-off-by: Gabriel Ramirez <gabrielramirez1109@gmail.com>
6 years ago  tests: deploy nfs-ganesha in container-all_daemons
Guillaume Abrioux [Fri, 21 Jun 2019 16:16:51 +0000 (18:16 +0200)]
tests: deploy nfs-ganesha in container-all_daemons

This commit brings back the nfs-ganesha testing in containerized
deployments.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  purge: ensure no ceph kernel thread is present
Guillaume Abrioux [Fri, 21 Jun 2019 14:10:16 +0000 (16:10 +0200)]
purge: ensure no ceph kernel thread is present

This tries to first unmount any cephfs/nfs-ganesha mount points on client
nodes, then unmap any mapped rbd devices and finally tries to remove the
ceph kernel modules.
If this fails, it means some resources are still busy and should be cleaned
up manually before continuing to purge the cluster.
This is done early in the playbook so the cluster stays untouched until
everything is ready for that operation; otherwise, if you try to redeploy
a cluster, it could end up confused by leftovers from a previous
deployment.
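
A sketch of those early cleanup tasks on the client nodes (the
mountpoint variable is illustrative):

```
- name: unmount any cephfs/nfs-ganesha mountpoint
  mount:
    path: "{{ item }}"
    state: unmounted
  with_items: "{{ ceph_mountpoints | default([]) }}"

- name: remove ceph kernel modules
  modprobe:
    name: "{{ item }}"
    state: absent
  with_items:
    - rbd
    - ceph
    - libceph
```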

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1337915
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  ceph-handler: Fix OSD restart script
Dimitri Savineau [Thu, 20 Jun 2019 21:33:39 +0000 (17:33 -0400)]
ceph-handler: Fix OSD restart script

There are two big issues with the current OSD restart script.

1/ We try to test if the ceph osd daemon socket exists but we use a
wildcard for the socket name: /var/run/ceph/*.asok.
This fails because we usually have multiple ceph osd sockets (or
other collocated ceph daemons) present in the /var/run/ceph directory.
Currently the test fails with:

bash: line xxx: [: too many arguments

But it doesn't stop the script execution.
Instead, we can specify the full ceph osd socket name because we
already know the OSD id.

2/ The container filter pattern is wrong and can match multiple
containers, causing the script to fail.
We use the filter with two different patterns: one with the device
name (sda, sdb, ...) and the other with the OSD id (ceph-osd-0,
ceph-osd-15, ...).
In both cases we could match more than needed.

$ docker container ls
CONTAINER ID IMAGE              NAMES
958121a7cc7d ceph-daemon:latest ceph-osd-strg0-sda
589a982d43b5 ceph-daemon:latest ceph-osd-strg0-sdb
46c7240d71f3 ceph-daemon:latest ceph-osd-strg0-sdaa
877985ec3aca ceph-daemon:latest ceph-osd-strg0-sdab
$ docker container ls -q -f "name=sda"
958121a7cc7d
46c7240d71f3
877985ec3aca

$ docker container ls
CONTAINER ID IMAGE              NAMES
2db399b3ee85 ceph-daemon:latest ceph-osd-5
099dc13f08f1 ceph-daemon:latest ceph-osd-13
5d0c2fe8f121 ceph-daemon:latest ceph-osd-17
d6c7b89db1d1 ceph-daemon:latest ceph-osd-1
$ docker container ls -q -f "name=ceph-osd-1"
099dc13f08f1
5d0c2fe8f121
d6c7b89db1d1

Adding an extra '$' character at the end of the pattern solves the
problem.

Finally, remove the get_container_osd_id function because it's not
used in the script at all.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  Change ansible_lsb by ansible_distribution_release
Dimitri Savineau [Thu, 20 Jun 2019 18:04:24 +0000 (14:04 -0400)]
Change ansible_lsb by ansible_distribution_release

The ansible_lsb fact is based on the lsb package (lsb-base,
lsb-release or redhat-lsb-core).
If the package isn't installed on the remote host then the fact isn't
populated.

--------
"ansible_lsb": {},
--------

Switching to the ansible_distribution_release fact instead.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  upgrade: accept HEALTH_OK and HEALTH_WARN as valid state
Guillaume Abrioux [Wed, 19 Jun 2019 13:07:24 +0000 (15:07 +0200)]
upgrade: accept HEALTH_OK and HEALTH_WARN as valid state

3a100cfa5265b3a5327ef6a8d382a8059391b903 introduced a check which is a
bit too restrictive; let's accept both HEALTH_OK and HEALTH_WARN.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  Add higher retry/delay defaults to check the quorum status.
fpantano [Wed, 19 Jun 2019 17:13:37 +0000 (19:13 +0200)]
Add higher retry/delay defaults to check the quorum status.

As per bz1718981, this commit adds higher retry/delay values for
checking the quorum status. This is helpful for several OSP deployments
that fail during scale up.
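
For illustration, a quorum check with higher retry/delay values (the
numbers here are placeholders, not the exact new defaults):

```
- name: check the quorum status
  command: "ceph --cluster {{ cluster }} quorum_status --format json"
  register: quorum_status
  retries: 60
  delay: 10
  until: quorum_status.rc == 0
  changed_when: false
```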

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1718981
Signed-off-by: fpantano <fpantano@redhat.com>
6 years ago  ceph-volume: Set max open files limit on container
Dimitri Savineau [Thu, 20 Jun 2019 14:28:44 +0000 (10:28 -0400)]
ceph-volume: Set max open files limit on container

The ceph-volume lvm list command takes ages to complete when there are
a lot of LV devices on a containerized deployment.
For instance, with 25 OSDs on a node it takes 3 mins 44s to list the
OSDs.
Adding the max open files limit to the container engine cli when
executing the ceph-volume command improves the execution time
a lot (~30s).

This was impacting OSD creation with ceph-volume (both filestore
and bluestore) when using multiple LV devices.
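
A sketch of the resulting call for docker (the nofile values are
illustrative):

```
- name: list ceph-volume lvm OSDs
  command: >
    docker run --rm --privileged --ulimit nofile=1024:4096
    -v /dev:/dev -v /etc/ceph:/etc/ceph
    --entrypoint=ceph-volume
    {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
    lvm list --format json
  register: ceph_volume_lvm_list
  changed_when: false
```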

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1702285
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  facts: add a retry on get current fsid task
Guillaume Abrioux [Thu, 20 Jun 2019 12:45:07 +0000 (14:45 +0200)]
facts: add a retry on get current fsid task

Sometimes the following task fails:

```
TASK [ceph-facts : get current fsid] *******************************************
task path: /home/jenkins-build/build/workspace/ceph-ansible-prs-dev-centos-container-update/roles/ceph-facts/tasks/facts.yml:78
Wednesday 19 June 2019  18:12:49 +0000 (0:00:00.203)       0:02:39.995 ********
fatal: [mon2 -> mon1]: FAILED! => changed=true
  cmd:
  - timeout
  - --foreground
  - -s
  - KILL
  - 600s
  - docker
  - exec
  - ceph-mon-mon1
  - ceph
  - --cluster
  - ceph
  - daemon
  - mon.mon1
  - config
  - get
  - fsid
  delta: '0:00:00.239339'
  end: '2019-06-19 18:12:49.812099'
  msg: non-zero return code
  rc: 22
  start: '2019-06-19 18:12:49.572760'
  stderr: 'admin_socket: exception getting command descriptions: [Errno 2] No such file or directory'
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
```

Not sure exactly why, since just before this task mon1 seems to be well
UP; otherwise it wouldn't have passed the task `waiting for the
containerized monitor to join the quorum`.

As a quick fix/workaround, let's add a retry, which allows us to get
around this situation:

```
TASK [ceph-facts : get current fsid] *******************************************
task path: /home/jenkins-build/build/workspace/ceph-ansible-scenario/roles/ceph-facts/tasks/facts.yml:78
Thursday 20 June 2019  15:35:07 +0000 (0:00:00.201)       0:03:47.288 *********
FAILED - RETRYING: get current fsid (3 retries left).
changed: [mon2 -> mon1] => changed=true
  attempts: 2
  cmd:
  - timeout
  - --foreground
  - -s
  - KILL
  - 600s
  - docker
  - exec
  - ceph-mon-mon1
  - ceph
  - --cluster
  - ceph
  - daemon
  - mon.mon1
  - config
  - get
  - fsid
  delta: '0:00:00.290252'
  end: '2019-06-20 15:35:13.960188'
  rc: 0
  start: '2019-06-20 15:35:13.669936'
  stderr: ''
  stderr_lines: <omitted>
  stdout: |-
    {
        "fsid": "153e159d-7ade-42a7-842c-4d04348b901e"
    }
  stdout_lines: <omitted>
```
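
The retry boils down to something like this sketch (the delay value is
an assumption; the transcript above shows 3 retries):

```
- name: get current fsid
  command: "docker exec ceph-mon-{{ ansible_hostname }} ceph --cluster {{ cluster }} daemon mon.{{ ansible_hostname }} config get fsid"
  register: current_fsid
  retries: 3
  delay: 5
  until: current_fsid.rc == 0
```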

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  roles: Remove useless become (true) flag
Dimitri Savineau [Tue, 18 Jun 2019 17:50:11 +0000 (13:50 -0400)]
roles: Remove useless become (true) flag

We already set the become flag to true at a play level in the site*
playbooks so we don't need to set it at a task level.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  osd: remove legacy task
Guillaume Abrioux [Tue, 11 Jun 2019 09:16:51 +0000 (11:16 +0200)]
osd: remove legacy task

`parted_results` isn't used anymore in the playbook.

By the way, `parted` seems to cause issues because it changes the
ownership on devices:

```
[root@osd0 ~]# ls -l /dev/sdc*
brw-rw----. 1 root disk 8, 32 Jun 11 08:53 /dev/sdc
brw-rw----. 1 ceph ceph 8, 33 Jun 11 08:53 /dev/sdc1
brw-rw----. 1 ceph ceph 8, 34 Jun 11 08:53 /dev/sdc2

[root@osd0 ~]# parted -s /dev/sdc print
Model: ATA QEMU HARDDISK (scsi)
Disk /dev/sdc: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name           Flags
 1      1049kB  1075MB  1074MB               ceph block.db
 2      1075MB  2149MB  1074MB               ceph block.db

[root@osd0 ~]# #We can see ownerships have changed from ceph:ceph to root:disk:
[root@osd0 ~]# ls -l /dev/sdc*
brw-rw----. 1 root disk 8, 32 Jun 11 08:57 /dev/sdc
brw-rw----. 1 root disk 8, 33 Jun 11 08:57 /dev/sdc1
brw-rw----. 1 root disk 8, 34 Jun 11 08:57 /dev/sdc2
[root@osd0 ~]#
```

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  rolling_update: fail early if cluster state is not OK
Guillaume Abrioux [Mon, 10 Jun 2019 14:26:18 +0000 (16:26 +0200)]
rolling_update: fail early if cluster state is not OK

Starting an upgrade when the cluster isn't HEALTH_OK isn't a good idea.
Let's check the cluster status before trying to upgrade.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  rolling_update: only mask and stop unit in mgr part
Guillaume Abrioux [Mon, 10 Jun 2019 13:18:43 +0000 (15:18 +0200)]
rolling_update: only mask and stop unit in mgr part

Otherwise it fails like the following:

```
fatal: [mon0]: FAILED! => changed=false
  msg: |-
    Unable to enable service ceph-mgr@mon0: Failed to execute operation: Cannot send after transport endpoint shutdown
```

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  Add installer phase for dashboard roles
Dimitri Savineau [Mon, 17 Jun 2019 19:52:04 +0000 (15:52 -0400)]
Add installer phase for dashboard roles

This commit adds support for the installer phase for the dashboard,
grafana and node-exporter roles.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  remove ceph restapi references
Dimitri Savineau [Mon, 17 Jun 2019 19:02:25 +0000 (15:02 -0400)]
remove ceph restapi references

The ceph restapi configuration was only available until Luminous
release so we don't need those leftovers for nautilus+.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  dashboard: fix hosts sections in main playbook
Guillaume Abrioux [Fri, 14 Jun 2019 13:27:11 +0000 (15:27 +0200)]
dashboard: fix hosts sections in main playbook

ceph-dashboard should be deployed on either a dedicated mgr node or a
mon if they are collocated.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  use pre_tasks and post_tasks in shrink-mon.yml too
Rishabh Dave [Sat, 15 Jun 2019 12:07:13 +0000 (17:37 +0530)]
use pre_tasks and post_tasks in shrink-mon.yml too

This commit should've been part of commit
2fb12ae55462f5601a439a104a5b0c01929accd9.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
6 years ago  tests: Update ansible ssh_args variable
Dimitri Savineau [Fri, 14 Jun 2019 21:31:39 +0000 (17:31 -0400)]
tests: Update ansible ssh_args variable

Because we're using vagrant, an ssh config file will be created for
each node with options like user, host, port, identity, etc.
But via tox we override ANSIBLE_SSH_ARGS to use this file, which
removes the default value set in ansible.cfg.

Also add PreferredAuthentications=publickey because CentOS/RHEL
servers are configured with GSSAPIAuthentication enabled for the ssh
server, forcing the client to make a PTR DNS query.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  tests: increase docker pull timeout
Guillaume Abrioux [Fri, 14 Jun 2019 09:45:29 +0000 (11:45 +0200)]
tests: increase docker pull timeout

CI is facing issues where docker pull reaches the timeout; let's increase
it to avoid CI failures.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  ceph-infra: make chronyd default NTP daemon
Rishabh Dave [Wed, 12 Jun 2019 09:09:44 +0000 (14:39 +0530)]
ceph-infra: make chronyd default NTP daemon

Since timesyncd is not available on RHEL-based OSs, change the default
NTP daemon to chronyd for them. Also, chronyd is named chrony on
Ubuntu, so set the Ansible fact accordingly.

Fixes: https://github.com/ceph/ceph-ansible/issues/3628
Signed-off-by: Rishabh Dave <ridave@redhat.com>
6 years ago  ceph-infra: update cache for Ubuntu
Rishabh Dave [Thu, 13 Jun 2019 08:06:00 +0000 (13:36 +0530)]
ceph-infra: update cache for Ubuntu

Ubuntu-based CI jobs often fail with error code 404 while installing
NTP daemons. Updating cache beforehand should fix the issue.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
6 years ago  align cephfs pool creation
Rishabh Dave [Tue, 10 Apr 2018 09:32:58 +0000 (11:32 +0200)]
align cephfs pool creation

The definitions of cephfs pools should match openstack pools.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
Co-Authored-by: Simone Caronni <simone.caronni@teralytics.net>
6 years ago  iscsi: assign application (rbd) to pool 'rbd'
Guillaume Abrioux [Tue, 11 Jun 2019 20:03:59 +0000 (22:03 +0200)]
iscsi: assign application (rbd) to pool 'rbd'

If we don't assign the rbd application tag to this pool,
the cluster will go into `HEALTH_WARN` state like the following:

```
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
    application not enabled on pool 'rbd'
```

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  ceph-handler: replace fuser by /proc/net/unix
Dimitri Savineau [Thu, 6 Jun 2019 18:08:18 +0000 (14:08 -0400)]
ceph-handler: replace fuser by /proc/net/unix

We're using the fuser command to see if a process is using a ceph unix
socket file. But the fuser command runs through every PID present in
/proc to see if one of them is using the file.
On a system running thousands of processes, the fuser command can take
a long time to finish.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1717011

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  mon: enforce mon0 delegation for initial_mon_key register
Guillaume Abrioux [Wed, 12 Jun 2019 09:38:49 +0000 (11:38 +0200)]
mon: enforce mon0 delegation for initial_mon_key register

Since this task is designed to always run on the first monitor, let's
enforce the container name accordingly; otherwise it could fail like the
following:

```
fatal: [mon1 -> mon0]: FAILED! => changed=true
  cmd:
  - docker
  - exec
  - ceph-mon-mon1
  - ceph
  - --cluster
  - ceph
  - --name
  - mon.
  - -k
  - /var/lib/ceph/mon/ceph-mon0/keyring
  - auth
  - get-key
  - mon.
  delta: '0:00:00.085025'
  end: '2019-06-12 06:12:27.677936'
  msg: non-zero return code
  rc: 1
  start: '2019-06-12 06:12:27.592911'
  stderr: 'Error response from daemon: No such container: ceph-mon-mon1'
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
```

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  tests: remove unused variable
Guillaume Abrioux [Wed, 12 Jun 2019 14:35:14 +0000 (16:35 +0200)]
tests: remove unused variable

`-e MGR_DASHBOARD=0` isn't needed here anymore; let's remove this legacy setting.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  tests: update docker image tag used in ooo job
Guillaume Abrioux [Wed, 12 Jun 2019 14:31:32 +0000 (16:31 +0200)]
tests: update docker image tag used in ooo job

ceph-ansible@master isn't intended to deploy luminous.
Let's use latest-master on the ceph-ansible@master branch.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  dashboard: add allow_embedding support
Guillaume Abrioux [Wed, 12 Jun 2019 06:01:06 +0000 (08:01 +0200)]
dashboard: add allow_embedding support

Add a variable to support the allow_embedding setting.

See ceph/ceph-ansible/issues/4084 for details.

Fixes: #4084
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  dashboard: fix dashboard_url setting
Guillaume Abrioux [Wed, 12 Jun 2019 06:31:47 +0000 (08:31 +0200)]
dashboard: fix dashboard_url setting

This setting must be set to something resolvable.

See: ceph/ceph-ansible/issues/4085 for details

Fixes: #4085
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  ceph-node-exporter: Fix systemd template
Dimitri Savineau [Tue, 11 Jun 2019 14:46:35 +0000 (10:46 -0400)]
ceph-node-exporter: Fix systemd template

069076b introduced a bug in the systemd unit script template. This
commit fixes the options used by the node-exporter container.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  ceph-node-exporter: use modprobe ansible module
Dimitri Savineau [Tue, 11 Jun 2019 13:35:28 +0000 (09:35 -0400)]
ceph-node-exporter: use modprobe ansible module

Instead of using the modprobe command from the path in the systemd
unit script, we can use the modprobe ansible module.
That way we don't have to manage the binary path based on the linux
distribution.
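
With the module, the task becomes distribution-agnostic; a sketch (the
module loaded here is only an example, not necessarily what
node-exporter needs):

```
- name: load kernel module
  modprobe:
    name: ip_vs   # example module name
    state: present
```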

Resolves: #4072

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  Fix units and add ability to have a dedicated instance
fmount [Thu, 23 May 2019 14:21:08 +0000 (16:21 +0200)]
Fix units and add ability to have a dedicated instance

A few fixes on the systemd unit templates for the node_exporter and
alertmanager container parameters.
Added the ability to use a dedicated instance to deploy the
dashboard components (prometheus and grafana).
This commit also introduces the grafana_group_name variable
to refer to the grafana group and keep consistency with the other
groups.
During the integration with TripleO, some grafana/prometheus
template variables ended up undefined. This commit adds the
ability to check whether the group exists and, accordingly, create
different job groups in the prometheus template.

Signed-off-by: fmount <fpantano@redhat.com>
6 years ago  validate: fail in check_devices at the right task
Guillaume Abrioux [Fri, 7 Jun 2019 08:50:28 +0000 (10:50 +0200)]
validate: fail in check_devices at the right task

see https://bugzilla.redhat.com/show_bug.cgi?id=1648168#c17 for details.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1648168#c17
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  spec: bring back possibility to install ceph with custom repo
Guillaume Abrioux [Fri, 7 Jun 2019 08:16:16 +0000 (10:16 +0200)]
spec: bring back possibility to install ceph with custom repo

This can be seen as a regression for customers who were used to deploying
in offline environments with custom repositories.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1673254
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  podman: Add systemd dependency on network.target
Dimitri Savineau [Thu, 6 Jun 2019 19:41:35 +0000 (15:41 -0400)]
podman: Add systemd dependency on network.target

When using podman, the systemd unit scripts don't have a dependency
on the network. So we're not sure that the network is up and running
when the containers are starting.
With docker this behaviour is already handled because the systemd
unit scripts depend on docker service which is started after the
network.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  purge-cluster: clean all ceph repo files
Dimitri Savineau [Thu, 6 Jun 2019 17:51:16 +0000 (13:51 -0400)]
purge-cluster: clean all ceph repo files

We currently only purge the rh_storage yum repository file, but
depending on the ceph_repository value in use, the ceph repository
file could have a different name.

Resolves: #4056

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  Add section for purging rgw loadbalancer in purge-cluster.yml
guihecheng [Fri, 1 Mar 2019 07:51:43 +0000 (15:51 +0800)]
Add section for purging rgw loadbalancer in purge-cluster.yml

Signed-off-by: guihecheng <guihecheng@cmiot.chinamobile.com>
6 years ago  Add section for rgw loadbalancer in site.yml
guihecheng [Thu, 4 Apr 2019 03:33:15 +0000 (11:33 +0800)]
Add section for rgw loadbalancer in site.yml

This drives the ceph rgw loadbalancer deployment.

Signed-off-by: guihecheng <guihecheng@cmiot.chinamobile.com>
6 years ago  Add role definitions of ceph-rgw-loadbalancer
guihecheng [Thu, 4 Apr 2019 02:54:41 +0000 (10:54 +0800)]
Add role definitions of ceph-rgw-loadbalancer

This adds support for an rgw loadbalancer based on HAProxy and Keepalived.
We define a single role, ceph-rgw-loadbalancer, which includes both the
HAProxy and Keepalived configuration.

A single haproxy backend is used to balance all RGW instances and
a single frontend is exported via a single port, default 80.

Keepalived is used to maintain the high availability of all haproxy
instances. You are free to use any number of VIPs. A single VIP is
shared across all keepalived instances and there will be one
master for one VIP, selected sequentially, and others serve as
backups.
This assumes that each keepalived instance is on the same node as
one haproxy instance and we use a simple check script to detect
the state of each haproxy instance and trigger the VIP failover
upon its failure.

Signed-off-by: guihecheng <guihecheng@cmiot.chinamobile.com>
6 years ago  ansible: use 'bool' filter on boolean conditionals
L3D [Wed, 22 May 2019 08:02:42 +0000 (10:02 +0200)]
ansible: use 'bool' filter on boolean conditionals

Running ceph-ansible produces a lot of ``[DEPRECATION WARNING]`` messages like these:
```
[DEPRECATION WARNING]: evaluating containerized_deployment as a bare variable,
this behaviour will go away and you might need to add |bool to the expression
in the future. Also see CONDITIONAL_BARE_VARS configuration toggle.. This
feature will be removed in version 2.12. Deprecation warnings can be disabled
by setting deprecation_warnings=False in ansible.cfg.
```

Now append ``| bool`` to the affected variables.

In some places the coding style changed from ``variable|bool`` to ``variable | bool`` *(with spaces around the pipe)*.
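
Before/after on an illustrative task:

```
- name: include containerized tasks
  include_tasks: containerized.yml
  # before (triggers the deprecation warning):
  #   when: containerized_deployment
  when: containerized_deployment | bool
```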

Closes: #4022
Signed-off-by: L3D <l3d@c3woc.de>
6 years ago  container-common: support podman on Ubuntu
Dimitri Savineau [Fri, 17 May 2019 21:10:34 +0000 (17:10 -0400)]
container-common: support podman on Ubuntu

Currently we're only able to use podman on Ubuntu if podman is
installed manually before the ceph-ansible execution, because the deb
package lives in an external repository.
We already manage the docker-ce installation via an external
repository, so we should be able to allow the podman installation
with the same mechanism too.

https://github.com/containers/libpod/blob/master/install.md#ubuntu

Resolves: #3947

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  ceph-osd: do not relabel /run/udev in containerized context
Guillaume Abrioux [Mon, 3 Jun 2019 17:15:30 +0000 (19:15 +0200)]
ceph-osd: do not relabel /run/udev in containerized context

Otherwise content in /run/udev is mislabeled and prevents some services
like NetworkManager from starting.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  tests: test podman against atomic os instead rhel8
Guillaume Abrioux [Thu, 23 May 2019 08:49:54 +0000 (10:49 +0200)]
tests: test podman against atomic os instead rhel8

The rhel8 image used is an outdated beta version; it is not worth
maintaining this image upstream since podman can be tested with a
newer centos/atomic-host image.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  site-container: update container-engine role
Dimitri Savineau [Tue, 28 May 2019 20:43:48 +0000 (16:43 -0400)]
site-container: update container-engine role

Since the split between the container-engine and container-common roles,
the tags and conditions were not updated to reflect the change.

- ceph-container-engine needs with_pkg tag
- ceph-container-common needs fetch_container_images
- we don't need to pull the container image in a dedicated task for
atomic host. We can now use the ceph-container-common role.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  ceph-nfs: use template module for configuration
Dimitri Savineau [Mon, 3 Jun 2019 19:28:39 +0000 (15:28 -0400)]
ceph-nfs: use template module for configuration

789cef7 introduced a regression in the ganesha configuration file
generation: the new config_template module version broke it.
But the ganesha.conf file isn't an ini file and doesn't really
need the config_template module. Instead we can use the
classic template module.

Resolves: #4045

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  dashboard: add a last default value in grafana-server host section
Guillaume Abrioux [Mon, 20 May 2019 14:32:08 +0000 (16:32 +0200)]
dashboard: add a last default value in grafana-server host section

If there are no mgrs or mons in the inventory, it will fail with the following error:

```
ERROR! The field 'hosts' has an invalid value, which includes an undefined variable. The error was: 'dict object' has no attribute 'mons'

The error appears to be in '/home/guits/ceph-ansible/site-docker.yml.sample': line 539, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- hosts: '{{ (groups["mgrs"] | default(groups["mons"]))[0] }}'
  ^ here
We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:

    with_items:
      - {{ foo }}

Should be written as:

    with_items:
      - "{{ foo }}"
```

let's add an `omit` so it just displays this message instead:

```
PLAY [[]] *******************
skipping: no hosts matched
```

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  dashboard: move ceph-grafana-dashboards package installation
Guillaume Abrioux [Thu, 23 May 2019 08:26:36 +0000 (10:26 +0200)]
dashboard: move ceph-grafana-dashboards package installation

This commit moves the package installation into the ceph-dashboard role.
This is needed to install the ceph dashboard json files in
`/etc/grafana/dashboards/ceph-dashboard/`.

Closes: #4026
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  infra: refact dashboard firewall rules
Guillaume Abrioux [Wed, 22 May 2019 14:31:21 +0000 (16:31 +0200)]
infra: refact dashboard firewall rules

- There is no need to open ports 3000, 8234, 9283 on all nodes.
- Add missing rule for alertmanager (port 9093)

Closes: #4023
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  dashboard: append mgr modules to ceph_mgr_modules
Guillaume Abrioux [Wed, 22 May 2019 14:12:16 +0000 (16:12 +0200)]
dashboard: append mgr modules to ceph_mgr_modules

When `dashboard_enabled` is `True`, append the `dashboard` and
`prometheus` modules to `ceph_mgr_modules` so they are automatically
loaded.
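
A sketch of the corresponding fact manipulation:

```
- name: append dashboard-related modules to ceph_mgr_modules
  set_fact:
    ceph_mgr_modules: "{{ ceph_mgr_modules | union(['dashboard', 'prometheus']) }}"
  when: dashboard_enabled | bool
```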

Closes: #4026
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  remove ceph-agent role and references
Dimitri Savineau [Tue, 28 May 2019 14:55:03 +0000 (10:55 -0400)]
remove ceph-agent role and references

The ceph-agent role was used only for RHCS 2 (jewel) so it's not
useful anymore.
The current code would fail on CentOS distributions because the rhscon
package is only available on Red Hat with the RHCS 2 repository, and
that ceph release is supported on the stable-3.0 branch.

Resolves: #4020

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  validate: add a check for nfs standalone
Guillaume Abrioux [Mon, 20 May 2019 14:28:42 +0000 (16:28 +0200)]
validate: add a check for nfs standalone

if `nfs_obj_gw` is True when deploying an internal ganesha with an
external ceph cluster, `ceph_nfs_rgw_access_key` and
`ceph_nfs_rgw_secret_key` must be provided so the
ganesha configuration file can be generated.
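
A sketch of such a validation task (message and structure are
illustrative):

```
- name: fail on missing rgw credentials for internal ganesha with external ceph
  fail:
    msg: "ceph_nfs_rgw_access_key and ceph_nfs_rgw_secret_key must be provided"
  when:
    - nfs_obj_gw | bool
    - ceph_nfs_rgw_access_key is undefined or ceph_nfs_rgw_secret_key is undefined
```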

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  nfs: support internal Ganesha with external ceph cluster
Guillaume Abrioux [Mon, 20 May 2019 13:58:10 +0000 (15:58 +0200)]
nfs: support internal Ganesha with external ceph cluster

This commit allows deploying an internal ganesha with an external ceph
cluster.

This requires defining `external_cluster_mon_ips` with a comma-separated
list of external monitors.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1710358
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  ceph-facts: generate fsid on mon node
Dimitri Savineau [Fri, 31 May 2019 17:26:30 +0000 (13:26 -0400)]
ceph-facts: generate fsid on mon node

The fsid generation is done via a python command. When the ansible
controller node only has python3 available (like RHEL 8), the
python command isn't necessarily present, causing the fsid generation
to fail.
We already do some resource creation (like the ceph keyring secret) with
the python command from the mon node, so we should do the same
for the fsid.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1714631

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  vagrant: Default box to centos/7
Dimitri Savineau [Fri, 31 May 2019 14:22:15 +0000 (10:22 -0400)]
vagrant: Default box to centos/7

We don't use ceph/ubuntu-xenial anymore but only centos/7 and
centos/atomic-host.
Changing the default to centos/7.

Resolves: #4036

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
6 years ago  Sync config_template from upstream
Kevin Carter [Wed, 22 May 2019 18:08:10 +0000 (13:08 -0500)]
Sync config_template from upstream

This change pulls in the most recent release of the config_template module
into the ceph_ansible action plugins.

Signed-off-by: Kevin Carter <kecarter@redhat.com>
6 years ago  tests: add retries on failing tests in testinfra
Guillaume Abrioux [Wed, 22 May 2019 08:42:33 +0000 (10:42 +0200)]
tests: add retries on failing tests in testinfra

This commit adds `pytest-rerunfailures` to requirements.txt so we can
retry failing tests in testinfra and avoid false positives (e.g. sometimes
a service takes too much time to start).

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  roles: introduce `ceph-container-engine` role
Guillaume Abrioux [Mon, 20 May 2019 07:46:10 +0000 (09:46 +0200)]
roles: introduce `ceph-container-engine` role

This commit splits the current `ceph-container-common` role.

This introduces a new role `ceph-container-engine` which handles the
tasks specific to the installation of containers tools (docker/podman).

This is needed for the ceph-dashboard implementation for 2 main reasons:

1/ Since the ceph-dashboard stack is only containerized, we must install
everything needed to run containers even in non containerized
deployments. Splitting this role allows us to not have to call the full
`ceph-container-common` role which would run a bunch of unneeded tasks
that would have been skipped anyway.

2/ The current implementation would have required to run
`ceph-container-common` on all ceph-clients nodes which would have been
conflicting with 9d3517c670ea2e944565e1a3e150a966b2d399de (we don't want
to run ceph-container-common on all client nodes, see mentioned commit
for more details)

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
6 years ago  ceph-mgr: install python-routes for dashboard
Dimitri Savineau [Fri, 17 May 2019 15:24:00 +0000 (11:24 -0400)]
ceph-mgr: install python-routes for dashboard

The ceph mgr dashboard requires the routes python library to be
installed on the system.

Resolves: #3995

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>