ceph-ansible.git
5 years ago Configure ceph dashboard backend and dashboard_frontend_vip
Francesco Pantano [Wed, 12 Feb 2020 12:58:59 +0000 (13:58 +0100)]
Configure ceph dashboard backend and dashboard_frontend_vip

This change introduces a new set of tasks to configure the
ceph dashboard backend so it listens only on the mgr-related
subnet (and not on '*'). For the same reason, the proper
server address is added in both the prometheus and alertmanager
systemd units.
This patch also adds the "dashboard_frontend_vip" parameter
to make sure we're able to support the HA model when multiple
grafana instances are deployed.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1792230
Signed-off-by: Francesco Pantano <fpantano@redhat.com>
(cherry picked from commit 15ed9eebf15c631b33c9de54f361e3d8ae993b3d)
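
As an illustration, a hedged sketch of how the new parameter might be set in group_vars; the variable name comes from this change, while the address is a made-up value:

```yaml
# group_vars/all.yml (illustrative value only)
# VIP sitting in front of the grafana instances in the HA model
dashboard_frontend_vip: 192.168.100.10
```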

5 years ago ceph-rgw: increase connection timeout to 10
Dimitri Savineau [Thu, 20 Feb 2020 14:49:17 +0000 (09:49 -0500)]
ceph-rgw: increase connection timeout to 10

A 5s connection timeout can be too low in some setups. Let's increase
it to 10s.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 44e750ee5d2e3a7a45d02d76f9adba5895d56667)

5 years ago containers: add KillMode=none to systemd templates
Dimitri Savineau [Tue, 11 Feb 2020 15:09:51 +0000 (10:09 -0500)]
containers: add KillMode=none to systemd templates

Because we rely on docker|podman to manage the containers, we don't
need systemd to manage the process (like killing it).

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 5a03e0ee1c840e9632b21e1fd8f8d88c9b736d8b)
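
A hedged sketch of such a unit template fragment; only the KillMode=none line is what this change adds, the unit name and ExecStop command are illustrative:

```
[Service]
# docker|podman owns the container lifecycle, so keep systemd from
# killing the container processes itself on stop/restart
KillMode=none
ExecStop=/usr/bin/docker stop ceph-osd-%i
```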

5 years ago rgw: extend automatic rgw pool creation capability v4.0.15
Ali Maredia [Tue, 10 Sep 2019 22:01:48 +0000 (22:01 +0000)]
rgw: extend automatic rgw pool creation capability

Add support for erasure code pools.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1731148
Signed-off-by: Ali Maredia <amaredia@redhat.com>
Co-authored-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 1834c1e48de4627ac9b12f7d84691080c7fd8c7a)
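
For illustration, a hedged sketch of an erasure-coded entry in rgw_create_pools; the key names (type, ec_profile, ec_k, ec_m) are assumptions based on this change, and the pool names and values are made up:

```yaml
rgw_create_pools:
  defaults.rgw.buckets.data:
    pg_num: 16
    type: ec              # erasure code pool
    ec_profile: myecprofile
    ec_k: 5
    ec_m: 3
  defaults.rgw.buckets.index:
    pg_num: 16
    size: 2
    type: replicated      # previous behaviour, still supported
```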

5 years ago ceph-rgw-loadbalancer: Fix SSL newline issue
Florian Faltermeier [Wed, 18 Dec 2019 13:31:57 +0000 (14:31 +0100)]
ceph-rgw-loadbalancer: Fix SSL newline issue

The ad7a5da commit introduced a regression when using TLS on haproxy
via the haproxy_frontend_ssl_certificate variable.
This caused the "stats socket" and the "tune.ssl.default-dh-param"
parameters to end up on the same line, resulting in haproxy failing to start.

[ALERT] 351/140240 (21388) : parsing [xxxxx] : 'stats socket' : unknown
keyword 'tune.ssl.default-dh-param'. Registered
[ALERT] 351/140240 (21388) : Fatal errors found in configuration.

Fixes: #4869
Signed-off-by: Florian Faltermeier <florian.faltermeier@uibk.ac.at>
(cherry picked from commit 9d081e2453285599b78ade93ff9617a28b3d1f02)
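
In other words, the rendered global section must keep each keyword on its own line; a hedged sketch (the socket path and dh-param value are illustrative):

```
global
    stats socket /var/lib/haproxy/stats
    tune.ssl.default-dh-param 4096
```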

5 years ago ceph-defaults: remove bootstrap_dirs_xxx vars
Dimitri Savineau [Wed, 12 Feb 2020 19:34:30 +0000 (14:34 -0500)]
ceph-defaults: remove bootstrap_dirs_xxx vars

The bootstrap_dirs_owner and bootstrap_dirs_group variables aren't
used anymore in the code.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit c644ea904108b302a40f3ced77cf5a6ccaee6fa4)

5 years ago rgw: don't create user on secondary zones
Dimitri Savineau [Tue, 5 Nov 2019 16:32:06 +0000 (11:32 -0500)]
rgw: don't create user on secondary zones

The rgw user for the Ceph dashboard integration shouldn't be created
on secondary rgw zones.

Closes: #4707
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1794351
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 16e12bf2bbf645d78063ba7d0b7a89b2348e56d1)

5 years ago ceph-{mon,osd}: move default crush variables
Dimitri Savineau [Mon, 10 Feb 2020 18:43:31 +0000 (13:43 -0500)]
ceph-{mon,osd}: move default crush variables

Since ed36a11 we moved the crush rule creation code from the ceph-mon to
the ceph-osd role.
To keep backward compatibility we kept the possibility to set the crush
variables on the mons side, but we didn't move the default values.
As a result, when crush_rule_config is set to true and the default
values are expected for crush_rules, the crush rule creation task fails.

"msg": "'ansible.vars.hostvars.HostVarsVars object' has no attribute
'crush_rules'"

This patch moves the default crush variables from the ceph-mon to the
ceph-osd role, and also uses those default values when nothing is
defined on the mons side.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1798864
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 1fc6b337142efdc76c10340c076653d298e11c68)
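
For context, a hedged sketch of the kind of defaults involved; the variable names follow the ceph-ansible conventions, but the exact shipped values are assumptions:

```yaml
# role defaults now living in ceph-osd (sketch)
crush_rule_config: false
crush_rule_hdd:
  name: HDD
  root: default
  type: host
  default: false
crush_rules:
  - "{{ crush_rule_hdd }}"
```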

5 years ago ceph-grafana: fix grafana_{crt,key} condition
Dimitri Savineau [Wed, 12 Feb 2020 15:38:25 +0000 (10:38 -0500)]
ceph-grafana: fix grafana_{crt,key} condition

The grafana_{crt,key} variables aren't booleans but strings. The
default value is an empty string, so we should base the conditional on
the string length instead of the bool filter.

Closes: #5053
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 15bd4cd189d0c4009bcd9dd80b296492e336661e)
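
A minimal sketch of the corrected condition, assuming the tasks are guarded on both variables:

```yaml
when:
  - grafana_crt | length > 0
  - grafana_key | length > 0
```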

5 years ago ceph-prometheus: add alertmanager HA config
Dimitri Savineau [Thu, 13 Feb 2020 20:56:23 +0000 (15:56 -0500)]
ceph-prometheus: add alertmanager HA config

When using multiple alertmanager nodes (via the grafana-server group)
we need to specify the other peers in the configuration.

https://prometheus.io/docs/alerting/alertmanager/#high-availability
https://github.com/prometheus/alertmanager#high-availability

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1792225
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit b9d975385c2dceca3b06c18d4c37eadbe9f48c92)
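
Concretely, each alertmanager instance gets started with its peers listed via the upstream clustering flags; a hedged sketch with made-up hostnames:

```
alertmanager \
  --cluster.listen-address=0.0.0.0:9094 \
  --cluster.peer=grafana1:9094 \
  --cluster.peer=grafana2:9094
```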

5 years ago The _filtered_clients list should intersect with ansible_play_batch
John Fulton [Thu, 6 Feb 2020 02:23:54 +0000 (21:23 -0500)]
The _filtered_clients list should intersect with ansible_play_batch

Client configuration with --limit fails without this patch
because certain tasks are only done on the first host in the
_filtered_clients list, and it's likely that this first host will
not be included in what's specified with --limit. To fix this,
the _filtered_clients list should be built from all clients
in the inventory that are also in the running play.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1798781
Signed-off-by: John Fulton <fulton@redhat.com>
(cherry picked from commit e4bf4857f556465c60f89d32d5f2a92d25d5c90f)
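
A minimal sketch of the described fix, assuming a ceph-facts style set_fact task (client_group_name is the repo's group name variable):

```yaml
- name: set_fact _filtered_clients
  set_fact:
    _filtered_clients: "{{ groups[client_group_name] | intersect(ansible_play_batch) }}"
```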

5 years ago ceph-nfs: add nfs-ganesha-rados-urls package
Dimitri Savineau [Thu, 6 Feb 2020 20:41:46 +0000 (15:41 -0500)]
ceph-nfs: add nfs-ganesha-rados-urls package

Since nfs-ganesha 2.8.3 the rados-urls library has been moved to a
dedicated package.
We don't have the same nfs-ganesha 2.8.x version between the community
and rhcs repositories.

community: 2.8.1
rhcs: 2.8.3

As a workaround we will install that package only for rhcs setup.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 0a3e85e8cabf69a68329c749db277f9527cfc053)

5 years ago ceph-nfs: fix ceph_nfs_ceph_user variable
Dimitri Savineau [Mon, 10 Feb 2020 16:06:48 +0000 (11:06 -0500)]
ceph-nfs: fix ceph_nfs_ceph_user variable

The ceph_nfs_ceph_user variable is a string for the ceph-nfs role but a
list in the ceph-client role.
6a6785b introduced confusion between the two variable types in the
ceph-nfs role for external ceph with ganesha.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1801319
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 10951eeea8236e359d0c6f4b07c0355ffe800a11)

5 years ago dashboard: allow configuring multiple grafana hosts
Dimitri Savineau [Mon, 27 Jan 2020 19:47:00 +0000 (14:47 -0500)]
dashboard: allow configuring multiple grafana hosts

When using multiple grafana hosts we set the grafana and
prometheus URLs and push the dashboard layout to a single node.

grafana_server_addrs is the list of all grafana nodes and is used during
the ceph-dashboard role (on mgr/mon nodes).
grafana_server_addr is the current grafana node used during the
ceph-grafana and ceph-prometheus roles (on grafana-server nodes).

We no longer have the grafana_server_addr fact duplication code between
external and collocated nodes.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1784011
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit c6e96699f71c634b3028d819bdc29aedd8b64ce9)

5 years ago switch_to_containers: increase health check values
Guillaume Abrioux [Thu, 30 Jan 2020 10:33:38 +0000 (11:33 +0100)]
switch_to_containers: increase health check values

This commit increases the default values for the health check variables
consumed in the
switch-from-non-containerized-to-containerized-ceph-daemons.yml playbook.
It also moves these variables into the `ceph-defaults` role so the user
can set different values if needed.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1783223
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 3700aa5385a986460f49370b9bfcb762c414d54d)

5 years ago purge/update: remove backward compatibility legacy
Guillaume Abrioux [Tue, 26 Nov 2019 13:43:07 +0000 (14:43 +0100)]
purge/update: remove backward compatibility legacy

This was introduced in 3.1 and marked as deprecated.
We can definitely drop it in stable-4.0.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 0441812959c3283b2db9795eeddbaa022b35aff7)

5 years ago Add option for HAproxy to act as an SSL frontend termination point for loadbalanced RGW...
Stanley Lam [Thu, 21 Nov 2019 22:40:51 +0000 (14:40 -0800)]
Add option for HAproxy to act as an SSL frontend termination point for loadbalanced RGW instances.

Signed-off-by: Stanley Lam <stanleylam_604@hotmail.com>
(cherry picked from commit ad7a5dad3f0b3f731107d3f7f1011dc129135e9a)

5 years ago switch_to_containers: exclude client nodes from facts gathering
Guillaume Abrioux [Mon, 9 Dec 2019 13:20:42 +0000 (14:20 +0100)]
switch_to_containers: exclude client nodes from facts gathering

Just like site.yml and rolling_update, let's exclude client nodes from
the fact gathering.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 332c39376b45375a2c0566406b49896ae316a293)

5 years ago ceph-handler: Use /proc/net/unix for rgw socket
Dimitri Savineau [Tue, 6 Aug 2019 15:41:02 +0000 (11:41 -0400)]
ceph-handler: Use /proc/net/unix for rgw socket

If for some reason there's an old rgw socket file present in the
/var/run/ceph/ directory, then the test command could fail with:

test: xxxxxxxxx.asok: binary operator expected

$ ls -hl /var/run/ceph/
total 0
srwxr-xr-x. ceph-client.rgw.rgw0.rgw0.68.94153614631472.asok
srwxr-xr-x. ceph-client.rgw.rgw0.rgw0.68.94240997655088.asok

We can check the radosgw socket in /proc/net/unix to avoid using a
wildcard in the socket name.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 60cbfdc2a60520e4cd14ee6f54acdc511092b77e)
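
A hedged sketch of the idea; the socket path prefix is illustrative:

```
# check for a live rgw socket instead of globbing possibly stale .asok files
grep -q /var/run/ceph/ceph-client.rgw /proc/net/unix
```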

5 years ago filestore-to-bluestore: skip bluestore osd nodes
Dimitri Savineau [Thu, 23 Jan 2020 21:58:14 +0000 (16:58 -0500)]
filestore-to-bluestore: skip bluestore osd nodes

If the OSD node is already using bluestore OSDs then we should skip
all the remaining tasks to avoid purging OSDs for nothing.
Instead, we warn the user.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1790472
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 83c5a1d7a831910f4baa11ab95f9354b233a796a)

5 years ago ceph-container-engine: lvm2 on OSD nodes only
Dimitri Savineau [Wed, 29 Jan 2020 03:31:04 +0000 (22:31 -0500)]
ceph-container-engine: lvm2 on OSD nodes only

Since de8f2a9 the lvm2 package installation has been moved from the
ceph-osd role to the ceph-container-engine role, but the scope wasn't
limited to the OSD nodes only.
This commit fixes that behaviour.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit fa8aa8c86470505a10ae2ce0573bd37958226624)

5 years ago update: remove legacy tasks
Guillaume Abrioux [Tue, 28 Jan 2020 09:29:03 +0000 (10:29 +0100)]
update: remove legacy tasks

These tasks should have been removed with backport #4756

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1793564
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>

5 years ago ceph-common: rhcs 4 repositories for rhel 7
Dimitri Savineau [Fri, 31 Jan 2020 13:59:21 +0000 (08:59 -0500)]
ceph-common: rhcs 4 repositories for rhel 7

RHCS 4 is available for both RHEL 7 and 8 so we should also enable the
cdn repositories for that distribution.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1796853
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 9b40a959b9c42abb8b98ec0a0e458203b6331314)

5 years ago iscsi: Fix crashes during rolling update
Mike Christie [Tue, 28 Jan 2020 22:31:55 +0000 (16:31 -0600)]
iscsi: Fix crashes during rolling update

During a rolling update we run the ceph iscsigw tasks that start
the daemons, then run the configure_iscsi.yml tasks which can create
iscsi objects like targets, disks, clients, etc. The problem is that
once the daemons are started they will accept configuration requests,
or may want to update the system themselves. Those operations can then
conflict with the configure_iscsi.yml tasks that set up objects, and we
can end up with crashes due to the kernel being in an unsupported state.

This could also happen during creation, but is less likely because no
objects are set up yet, so there are no watchers or users accessing the
gws yet. The fix in this patch works for both update and initial setup.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1795806

Signed-off-by: Mike Christie <mchristi@redhat.com>
(cherry picked from commit 77f3b5d51b84a6338847c5f6a93f22a3a6a683d2)

5 years ago purge: fix purge cluster failure
wujie1993 [Sun, 5 Jan 2020 07:31:46 +0000 (15:31 +0800)]
purge: fix purge cluster failure

Fix purge cluster failing when local container images do not exist.

Purge node-exporter and grafana-server only when dashboard_enabled is set to True.

Signed-off-by: wujie1993 <qq594jj@gmail.com>
(cherry picked from commit d8b0b3cbd94655a36bbaa828410977eba6b1aa21)

5 years ago config: fix external client scenario
Guillaume Abrioux [Fri, 31 Jan 2020 10:51:54 +0000 (11:51 +0100)]
config: fix external client scenario

When no monitor group is present in the inventory, this task fails.
This affects only non-containerized deployments.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit e7bc0794054008ac2d6771f0d29d275493319665)

5 years ago tests: add external_clients scenario
Guillaume Abrioux [Thu, 30 Jan 2020 12:00:47 +0000 (13:00 +0100)]
tests: add external_clients scenario

This commit adds a new 'external ceph clients' scenario.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 641729357e5c8bc4dbf90b5c4f7b20e1d3d51f7d)

5 years ago ceph-defaults: remove rgw from ceph_conf_overrides
Dimitri Savineau [Tue, 28 Jan 2020 15:27:34 +0000 (10:27 -0500)]
ceph-defaults: remove rgw from ceph_conf_overrides

The [rgw] section, whether set in the ceph.conf file directly or via the
ceph_conf_overrides variable, doesn't exist as a valid section and has no
effect.
To apply overrides to all radosgw instances we should use either the
[global] or [client] sections.
Overrides per radosgw instance should still use the
[client.rgw.{instance-name}] section.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1794552
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 2f07b8513158d3fc36c5b0d29386f46dd28b5efa)

5 years ago dashboard: add quotes when passing password to the CLI
Guillaume Abrioux [Tue, 28 Jan 2020 14:32:27 +0000 (15:32 +0100)]
dashboard: add quotes when passing password to the CLI

Otherwise, if the variable contains a '$' it will be interpreted as a
BASH variable.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 8c3759f8ce04a4c2df673290197ef12fe98d992a)
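
A hedged sketch of the pattern; the exact dashboard subcommand used by the role isn't verified here:

```
# single quotes keep a '$' in the password from being expanded by the shell
ceph dashboard ac-user-create admin '{{ dashboard_admin_password }}' administrator
```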

5 years ago tests: set dashboard|grafana_admin_password
Guillaume Abrioux [Tue, 28 Jan 2020 13:04:45 +0000 (14:04 +0100)]
tests: set dashboard|grafana_admin_password

Set these 2 variables in all test scenarios where `dashboard_enabled` is
`True`

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit c040199c8f277b33bd1447f2cbce71861055711f)

5 years ago validate: fail if dashboard|grafana_admin_password aren't set
Guillaume Abrioux [Tue, 28 Jan 2020 12:55:54 +0000 (13:55 +0100)]
validate: fail if dashboard|grafana_admin_password aren't set

This commit adds a task to make sure the user sets a custom password for
the `grafana_admin_password` and `dashboard_admin_password` variables.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1795509
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 99328545de07d94c4a2bdd67c6ac8bc9280f23c5)

5 years ago ceph-facts: fix _container_exec_cmd fact value v4.0.14
Dimitri Savineau [Wed, 29 Jan 2020 02:34:24 +0000 (21:34 -0500)]
ceph-facts: fix _container_exec_cmd fact value

When using a different name between the inventory_hostname and the
ansible_hostname, the _container_exec_cmd fact will get a wrong
value based on the inventory_hostname instead of the ansible_hostname.
This happens when the ceph cluster is already running (update/upgrade).

Later, the container exec commands will fail because the container name
is wrong.

We should always set the _container_exec_cmd based on the
ansible_hostname fact.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1795792
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 1fcafffdad43476d1d99766a57b4087cfe43718b)
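
A minimal sketch of the idea, assuming the usual ceph-mon container naming convention:

```yaml
- name: set_fact _container_exec_cmd
  set_fact:
    _container_exec_cmd: "{{ container_binary }} exec ceph-mon-{{ hostvars[groups[mon_group_name][0]]['ansible_hostname'] }}"
```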

5 years ago tox: set extra vars for filestore-to-bluestore
Dimitri Savineau [Mon, 27 Jan 2020 16:31:47 +0000 (11:31 -0500)]
tox: set extra vars for filestore-to-bluestore

The ansible extra variables aren't set with the ansible-playbook
command running the filestore-to-bluestore playbook.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit a27290bf98470236d2a1362224fb5555884259c9)

5 years ago filestore-to-bluestore: fix undefined osd_fsid_list
Dimitri Savineau [Mon, 27 Jan 2020 14:36:56 +0000 (09:36 -0500)]
filestore-to-bluestore: fix undefined osd_fsid_list

If the playbook is used on a host running bluestore OSDs then the
osd_fsid_list won't be filled, because the bluestore OSDs are reported
with 'type: block' by the ceph-volume lvm list command while we are
looking for 'type: data' (filestore).

TASK [zap ceph-volume prepared OSDs] *********
fatal: [xxxxx]: FAILED! =>
  msg: '''osd_fsid_list'' is undefined

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1729267
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit cd76054f76fa4ce618335a3693c6c0f95d9209e6)

5 years ago tests: add 'all_in_one' scenario v4.0.13
Guillaume Abrioux [Mon, 27 Jan 2020 14:49:30 +0000 (15:49 +0100)]
tests: add 'all_in_one' scenario

Add a new scenario 'all_in_one' in order to catch more
collocation-related issues.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 3e7dbb4b16dc7de3b46c18db4c00e7f2c2a50453)

5 years ago fix calls to `container_exec_cmd` in ceph-osd role
Guillaume Abrioux [Mon, 27 Jan 2020 12:31:29 +0000 (13:31 +0100)]
fix calls to `container_exec_cmd` in ceph-osd role

We must call `container_exec_cmd` from the right monitor node, otherwise
the value of the fact might mismatch between the delegated node and the
node being played.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1794900
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 2f919f8971891c780ecb46ac233da27cc14ae2b9)

5 years ago filestore-to-bluestore: don't fail when there's no PV
Dimitri Savineau [Fri, 24 Jan 2020 16:50:34 +0000 (11:50 -0500)]
filestore-to-bluestore: don't fail when there's no PV

When the PV is already removed from the devices, we should not fail, in
order to avoid errors like:

stderr: No PV found on device /dev/sdb.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1729267
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit a9c23005455a57f2fe1e5356a6ab24f47f1eaa2f)

5 years ago handler: read container_exec_cmd value from first mon v4.0.12
Guillaume Abrioux [Thu, 23 Jan 2020 14:51:17 +0000 (15:51 +0100)]
handler: read container_exec_cmd value from first mon

Given that we delegate to the first monitor, we must read the value of
`container_exec_cmd` from this node.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1792320
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit eb9112d8fbbd33e89a365029feaaed0459c9b86a)

5 years ago ceph-facts: Fix for 'running_mon is undefined' error, so that
Vytenis Sabaliauskas [Thu, 23 Jan 2020 08:58:18 +0000 (10:58 +0200)]
ceph-facts: Fix for 'running_mon is undefined' error, so that
fact 'running_mon' is set once 'grep' successfully exits with 'rc == 0'

Signed-off-by: Vytenis Sabaliauskas <vytenis.sabaliauskas@protonmail.com>
(cherry picked from commit ed1eaa1f38022dfea37d8352d81fc0aa6058fa23)

5 years ago site-container: don't skip ceph-container-common
Dimitri Savineau [Wed, 22 Jan 2020 19:45:38 +0000 (14:45 -0500)]
site-container: don't skip ceph-container-common

On HCI environments the OSD and Client nodes are collocated. Because we
aren't running the ceph-container-common role on the client nodes (except
the first one, for keyring purposes), the ceph role execution fails due
to undefined variables.

Closes: #4970
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1794195
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 671b1aba3c7ca9eca8c630bc4ce13d6dc2e185c5)

5 years ago rolling_update: support upgrading 3.x + ceph-metrics on a dedicated node v4.0.11
Guillaume Abrioux [Wed, 22 Jan 2020 14:00:01 +0000 (15:00 +0100)]
rolling_update: support upgrading 3.x + ceph-metrics on a dedicated node

When upgrading from RHCS 3.x, where ceph-metrics was deployed on a
dedicated node, to RHCS 4.0, it fails like the following:

```
fatal: [magna005]: FAILED! => changed=false
  gid: 0
  group: root
  mode: '0755'
  msg: 'chown failed: failed to look up user ceph'
  owner: root
  path: /etc/ceph
  secontext: unconfined_u:object_r:etc_t:s0
  size: 4096
  state: directory
  uid: 0
```

Because we are trying to run `ceph-config` on this node and it doesn't
make sense, we should simply run this play on all groups except
`[grafana-server]`.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1793885
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit e5812fe45b328951e0ecc249e3b83f03e7a0a4ce)

5 years ago filestore-to-bluestore: fix osd_auto_discovery
Dimitri Savineau [Tue, 21 Jan 2020 21:37:10 +0000 (16:37 -0500)]
filestore-to-bluestore: fix osd_auto_discovery

When osd_auto_discovery is set we need to refresh the
ansible_devices fact after the filestore OSD purge,
otherwise the devices fact won't be populated.
Also remove the gpt header on ceph_disk_osds_devices, because
the devices list is empty at this point for osd_auto_discovery.
Add the bool filter where needed.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1729267
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit bb3eae0c8033dc0ffbee44f490f6ad483bd109b9)

5 years ago filestore-to-bluestore: --destroy with raw devices
Dimitri Savineau [Mon, 20 Jan 2020 21:40:58 +0000 (16:40 -0500)]
filestore-to-bluestore: --destroy with raw devices

We still need --destroy when using a raw device, otherwise we won't be
able to recreate the lvm stack on that device with bluestore.

Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-bdc67a84-894a-4687-b43f-bcd76317580a /dev/sdd
 stderr: Physical volume '/dev/sdd' is already in volume group 'ceph-b7801d50-e827-4857-95ec-3291ad6f0151'
  Unable to add physical volume '/dev/sdd' to volume group 'ceph-b7801d50-e827-4857-95ec-3291ad6f0151'
  /dev/sdd: physical volume not initialized.
--> Was unable to complete a new OSD, will rollback changes

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1792227
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit f995b079a6a4f936da04fce3d55449361b2109e3)

5 years ago ceph-osd: set container objectstore env variables
Dimitri Savineau [Mon, 20 Jan 2020 16:24:08 +0000 (11:24 -0500)]
ceph-osd: set container objectstore env variables

Because we need to manage legacy ceph-disk based OSDs with ceph-volume,
we need a way to know the osd_objectstore in the container.
This was done like this previously with ceph-disk, so we should also
do it with ceph-volume.
Note that this won't have any impact on ceph-volume lvm based OSDs.

Rename the docker_env_args fact to container_env_args and move the
container condition to the include_tasks call.
Remove the OSD_DMCRYPT env variable from the ceph-osd template because
it's now included in the container_env_args variable.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1792122
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit c9e1fe3d928111275be5ab06ae01d82df8fa8bd4)

5 years ago ceph-rgw: Fix customize pool size "when" condition
Benoît Knecht [Mon, 20 Jan 2020 10:36:27 +0000 (11:36 +0100)]
ceph-rgw: Fix customize pool size "when" condition

In 3c31b19ab39f297635c84edb9e8a5de6c2da7707, I fixed the `customize pool
size` task by replacing `item.size` with `item.value.size`. However, I
missed the same issue in the `when` condition.

Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
(cherry picked from commit 3842aa1a30277b5ea3acf78ac1aef37bad5afb14)

5 years ago handler: fix call to container_exec_cmd in handler_osds
Guillaume Abrioux [Fri, 17 Jan 2020 14:50:40 +0000 (15:50 +0100)]
handler: fix call to container_exec_cmd in handler_osds

When unsetting the noup flag, we must call container_exec_cmd from the
delegated node (first mon member).
Also, add `run_once: true` because this task needs to be run only
once.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1792320
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 22865cde9c7cb3be30c359ce8679ec841b16a663)

5 years ago Fix undefined running_mon
Dmitriy Rabotyagov [Thu, 16 Jan 2020 18:23:58 +0000 (20:23 +0200)]
Fix undefined running_mon

Since commit [1] introduced running_mon, it can be undefined,
which results in the fatal error [2]. This patch defines the default
value that was used before patch [1].
Signed-off-by: Dmitriy Rabotyagov <drabotyagov@vexxhost.com>
[1] https://github.com/ceph/ceph-ansible/commit/8dcbcecd713b0cd7769d3b4d04ef5c2f15881377
[2] https://zuul.opendev.org/t/openstack/build/c82a73aeabd64fd583694ed04b947731/log/job-output.txt#14011

(cherry picked from commit 2478a7b94856fc1b4639b4293383f7c3f2ae0f05)

5 years ago remove container_exec_cmd_mgr fact v4.0.10
Guillaume Abrioux [Wed, 15 Jan 2020 13:39:16 +0000 (14:39 +0100)]
remove container_exec_cmd_mgr fact

Iterating over all monitors in order to delegate a
`{{ container_binary }}` command fails when collocating mgrs with mons,
because ceph-facts resets `container_exec_cmd` to point to the first
member of the monitor group.

The idea is to force `container_exec_cmd` to be reset in ceph-mgr.
This commit also removes the `container_exec_cmd_mgr` fact.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1791282
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 8dcbcecd713b0cd7769d3b4d04ef5c2f15881377)

5 years ago shrink-mds: fix condition on fs deletion
Guillaume Abrioux [Wed, 15 Jan 2020 06:17:08 +0000 (07:17 +0100)]
shrink-mds: fix condition on fs deletion

The new ceph status registered in `ceph_status` will report
`fsmap.up = 0` when it's the last mds, given that the check is done
after we shrink the mds; this means the condition is wrong. Also add a
condition so we don't try to delete the fs if a standby node is going
to rejoin the cluster.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1787543
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 3d0898aa5db7b264d17a6948747a55b0834629e2)

5 years ago ceph-iscsi: don't use brackets with trusted_ip_list
Dimitri Savineau [Tue, 14 Jan 2020 15:07:56 +0000 (10:07 -0500)]
ceph-iscsi: don't use brackets with trusted_ip_list

The trusted_ip_list parameter of the rbd-target-api service doesn't
support ipv6 addresses with brackets.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1787531
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit bd87d69183db4bc9a9cdd08f67e53c5d029dacd0)

5 years ago container: move lvm2 package installation
Dimitri Savineau [Thu, 2 Jan 2020 20:50:24 +0000 (15:50 -0500)]
container: move lvm2 package installation

Before this patch, the lvm2 package installation was done during the
ceph-osd role.
However, we were running a ceph-volume command in the ceph-config role
before ceph-osd. If lvm2 wasn't installed then the ceph-volume command
failed:

error checking path "/run/lock/lvm": stat /run/lock/lvm: no such file or
directory

This wasn't visible before because lvm2 was automatically installed as a
docker dependency, but it's not the same for podman on CentOS 8.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit de8f2a9f83194e465d10207c7ae0569700345b9c)

5 years ago osd: use _devices fact in lvm batch scenario v4.0.9
Guillaume Abrioux [Tue, 14 Jan 2020 08:42:43 +0000 (09:42 +0100)]
osd: use _devices fact in lvm batch scenario

Since fd1718f3796312e29cd5fd64fcc46826741303d2, we must use `_devices`
when deploying with the lvm batch scenario.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 5558664f376e3fa09b99071bb615a65a6b134a3f)

5 years ago osd: do not run openstack_config during upgrade
Guillaume Abrioux [Fri, 8 Nov 2019 15:21:54 +0000 (16:21 +0100)]
osd: do not run openstack_config during upgrade

There is no need to run this part of the playbook when upgrading the
cluster.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit af6875706af93f133299156403f51d3ad48d17d3)

5 years ago tests: use main playbook for add_osds job
Guillaume Abrioux [Fri, 8 Nov 2019 08:53:58 +0000 (09:53 +0100)]
tests: use main playbook for add_osds job

This commit replaces the playbook used for the add_osds job, in
accordance with the add-osd.yml playbook removal.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit fef1cd4c4b3714c0595adbc0a4059a44f6968e80)

5 years ago osd: support scaling up using --limit
Guillaume Abrioux [Tue, 9 Jul 2019 13:40:47 +0000 (15:40 +0200)]
osd: support scaling up using --limit

This commit leaves add-osd.yml in place but marks the playbook as
deprecated.
Scaling up OSDs is now possible using --limit.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 3496a0efa2d7ae2872476387e9d801fb32414f63)

5 years ago ceph-facts: move grafana fact to dedicated file
Dimitri Savineau [Mon, 13 Jan 2020 15:24:52 +0000 (10:24 -0500)]
ceph-facts: move grafana fact to dedicated file

We don't need to execute the grafana fact tasks every time, but only
during the dashboard deployment, especially for the ceph-grafana,
ceph-prometheus and ceph-dashboard roles.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1790303
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit f940e695ab839aafac3be73163d8c84a2d1a8ebf)

5 years ago facts: fix osp/ceph external use case
Guillaume Abrioux [Mon, 13 Jan 2020 14:30:13 +0000 (15:30 +0100)]
facts: fix osp/ceph external use case

d6da508a9b6829d2d0633c7200efdffce14f403f broke the osp/ceph external use case.

We must skip these tasks when no monitor is present in the inventory.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1790508
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 2592a1e1e84f0c3f407ffd879fc8cee87ad35894)

5 years ago defaults: change monitor|radosgw_address default values
Guillaume Abrioux [Mon, 9 Dec 2019 17:23:15 +0000 (18:23 +0100)]
defaults: change monitor|radosgw_address default values

To avoid confusion, let's change the default value from `0.0.0.0` to
`x.x.x.x`.
Users might think setting `0.0.0.0` will make the daemon bind on all
interfaces.

Fixes: #4827
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit fc02fc98ebce0f99e81628e76ad28e7bf65435de)

5 years ago osd: ensure osd ids collected are well restarted
Guillaume Abrioux [Mon, 13 Jan 2020 15:31:00 +0000 (16:31 +0100)]
osd: ensure osd ids collected are well restarted

This commit refactors the condition in the loop of that task so that
all potential osd ids found are properly restarted.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1790212
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 58e6bfed2d1c9f6e86fd1a680f26539f539afcd0)

5 years ago tests: add time command in vagrant_up.sh v4.0.8
Guillaume Abrioux [Thu, 17 Oct 2019 13:37:31 +0000 (15:37 +0200)]
tests: add time command in vagrant_up.sh

monitor how long it takes to get all VMs up and running

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 16bcef4f28c24b56d3896ac193226be139b4d2f2)

5 years ago tests: retry to fire up VMs on vagrant failure
Guillaume Abrioux [Tue, 2 Apr 2019 12:53:19 +0000 (14:53 +0200)]
tests: retry to fire up VMs on vagrant failure

Add a script to retry several times to fire up VMs to avoid vagrant
failures.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Co-authored-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit 1ecb3a9352d869d8fde694cefae9de8af8f6fee8)

5 years ago tests: add a docker2podman scenario
Guillaume Abrioux [Fri, 10 Jan 2020 13:31:42 +0000 (14:31 +0100)]
tests: add a docker2podman scenario

This commit adds a new scenario in order to test the
docker-to-podman.yml migration playbook.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit dc672e86eca3cfbd047c5968852511c07368d4b4)

5 years ago docker2podman: use set_fact to override variables
Guillaume Abrioux [Fri, 10 Jan 2020 13:30:35 +0000 (14:30 +0100)]
docker2podman: use set_fact to override variables

play vars have lower precedence than role vars and `set_fact`.
We must use a `set_fact` to reset these variables.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit b0c491800a785df88613ef7a9c2680a7540a8c90)

5 years ago docker2podman: force systemd to reload config
Guillaume Abrioux [Fri, 10 Jan 2020 13:29:50 +0000 (14:29 +0100)]
docker2podman: force systemd to reload config

This is needed after a change is made in systemd unit files.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 1c2ec9fb4042b65088d055ee4cd8bc773e241dcf)

5 years ago docker2podman: install podman
Guillaume Abrioux [Fri, 10 Jan 2020 10:17:27 +0000 (11:17 +0100)]
docker2podman: install podman

This commit adds a package installation task in order to install podman
during the docker-to-podman.yml migration playbook.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit d746575fd0ac83d5861b6aae0143aa8d390760e6)

5 years ago update: only run post osd upgrade play on 1 mon
Guillaume Abrioux [Mon, 18 Nov 2019 17:12:00 +0000 (18:12 +0100)]
update: only run post osd upgrade play on 1 mon

There is no need to run these tasks n times from each monitor.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit c878e99589bde0eecb8ac72a7ec8bc1f66403eeb)

5 years ago update: use flags noout and nodeep-scrub only
Guillaume Abrioux [Mon, 18 Nov 2019 16:59:56 +0000 (17:59 +0100)]
update: use flags noout and nodeep-scrub only

1. set noout and nodeep-scrub flags,
2. upgrade each OSD node, one by one, wait for active+clean pgs
3. after all osd nodes are upgraded, unset flags

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Co-authored-by: Rachana Patel <racpatel@redhat.com>
(cherry picked from commit 548db78b9535348dff616665be749503f80c4fca)
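
In ceph CLI terms, the flow looks like this:

```
ceph osd set noout
ceph osd set nodeep-scrub
# upgrade each OSD node, one by one, waiting for active+clean pgs
ceph osd unset noout
ceph osd unset nodeep-scrub
```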

5 years ago ceph-validate: add rbdmirror validation
Dimitri Savineau [Tue, 5 Nov 2019 16:53:22 +0000 (11:53 -0500)]
ceph-validate: add rbdmirror validation

When ceph_rbd_mirror_configure is set to true we need to ensure that
the required variables aren't empty.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1760553
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 4a065cebd70d259bfd59b6f5f9baa45d516a9c3a)

5 years ago ceph-osd: wait for all osds once
Dimitri Savineau [Wed, 27 Nov 2019 14:29:06 +0000 (09:29 -0500)]
ceph-osd: wait for all osds once

cf8c6a3 moves the 'wait for all osds' task from openstack_config to the
main tasks list.
But the openstack_config code was executed only on the last OSD node.
We don't need to do this check on all OSD nodes, so we set run_once to
true on that task.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 5bd1cf40eb5823aab3c4e16b60b37c30600f9283)

5 years ago ceph-osd: wait for all osd before crush rules
Dimitri Savineau [Tue, 26 Nov 2019 16:09:11 +0000 (11:09 -0500)]
ceph-osd: wait for all osd before crush rules

When creating crush rules with the device class parameter we need to be
sure that all OSDs are up and running, because the device class list is
populated with this information.
This is now enabled for all scenarios, not only openstack_config.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit cf8c6a384999be8caedce1121dfd57ae114d5bb6)

5 years ago ceph-osd: add device class to crush rules
Dimitri Savineau [Thu, 31 Oct 2019 20:24:12 +0000 (16:24 -0400)]
ceph-osd: add device class to crush rules

This adds device class support to crush rules when using the class key
in the rule dict via the create-replicated sub command.
If the class key isn't specified then we use the create-simple sub
command for backward compatibility.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1636508
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit ef2cb99f739ade80e285d83050ac01184aafc753)
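
A hedged sketch of what such rule dicts might look like; names and values are illustrative:

```yaml
crush_rules:
  - name: replicated_hdd
    root: default
    type: host
    class: hdd   # class key present -> 'ceph osd crush rule create-replicated'
  - name: simple_rule
    root: default
    type: rack   # no class key -> 'ceph osd crush rule create-simple'
```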

5 years ago move crush rule creation from mon to osd role
Dimitri Savineau [Thu, 31 Oct 2019 20:17:33 +0000 (16:17 -0400)]
move crush rule creation from mon to osd role

If we want to create crush rules with the create-replicated sub command
and device class then we need to have the OSD created before the crush
rules otherwise the device classes won't exist.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit ed36a11eabbdbb040652991300cdfc93d51ed491)

5 years ago purge-iscsi-gateways: don't run all ceph-facts
Dimitri Savineau [Fri, 10 Jan 2020 14:31:26 +0000 (09:31 -0500)]
purge-iscsi-gateways: don't run all ceph-facts

We only need the container_binary fact. Because we're not gathering the
facts from all nodes, the purge fails when trying to get one of the
grafana facts.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1786686
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit a09d1c38bf80e412265f58d732c554262ef23cc7)

5 years ago rolling_update: run registry auth before upgrading
Dimitri Savineau [Thu, 9 Jan 2020 19:57:08 +0000 (14:57 -0500)]
rolling_update: run registry auth before upgrading

There are some tasks using the new container image during the rolling
upgrade playbook that need the registry login to be executed first,
otherwise the nodes won't be able to pull the container image.

Unable to find image 'xxx.io/foo/bar:latest' locally
Trying to pull repository xxx.io/foo/bar ...
/usr/bin/docker-current: Get https://xxx.io/v2/foo/bar/manifests/latest:
unauthorized

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 3f344fdefe02c3b597b886cbef8b7456a7db28eb)

5 years ago config: exclude ceph-disk prepared osds in lvm batch report
Guillaume Abrioux [Thu, 9 Jan 2020 18:31:57 +0000 (19:31 +0100)]
config: exclude ceph-disk prepared osds in lvm batch report

We must exclude the devices already used and prepared by ceph-disk when
doing the lvm batch report. Otherwise it fails because ceph-volume
complains about the GPT header.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1786682
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit fd1718f3796312e29cd5fd64fcc46826741303d2)

5 years ago tests: use community repository v4.0.7
Dimitri Savineau [Thu, 9 Jan 2020 20:24:17 +0000 (15:24 -0500)]
tests: use community repository

We don't need to use dev repository on stable branches.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>

5 years ago shrink-rgw: refactor global workflow
Dimitri Savineau [Thu, 9 Jan 2020 16:48:13 +0000 (11:48 -0500)]
shrink-rgw: refactor global workflow

Instead of running the ceph roles against localhost, we should do it on
the first mon.
The ansible and inventory hostnames of the rgw nodes could be different.
Ensure that the rgw instance to remove is present in the cluster.
Fix the rgw service and directory path.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1677431
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 747555dfa601b4925204fd878735c296ef728e5d)

5 years ago mon: support replacing a mon
Guillaume Abrioux [Thu, 9 Jan 2020 15:46:34 +0000 (16:46 +0100)]
mon: support replacing a mon

We must pick a mon which actually exists in ceph-facts in order to
detect if a cluster is running. Otherwise, it will state that no cluster
is running, which ends up deploying a new monitor isolated in a new
quorum.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1622688
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 86f3eeb717c7daac8c6330fdaa7f8a3c83f94b0d)

5 years ago ceph-iscsi: manage ipv6 in trusted_ip_list
Dimitri Savineau [Tue, 7 Jan 2020 20:01:48 +0000 (15:01 -0500)]
ceph-iscsi: manage ipv6 in trusted_ip_list

Only the ipv4 addresses of the nodes running the dashboard mgr module
were added to the trusted_ip_list configuration file on the iscsigws
nodes.
This also adds the iscsi gateways with ipv6 configuration to the ceph
dashboard.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1787531
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 70eba66182aebfcb7056521eb9da7c6c13f574da)

5 years ago ceph-rgw: Fix custom pool size setting
Benoît Knecht [Mon, 30 Dec 2019 09:53:20 +0000 (10:53 +0100)]
ceph-rgw: Fix custom pool size setting

RadosGW pools can be created by setting

```yaml
rgw_create_pools:
  .rgw.root:
    pg_num: 512
    size: 2
```

for instance. However, doing so would create pools of size
`osd_pool_default_size` regardless of the `size` value. This was due to
the fact that the Ansible task used

```
{{ item.size | default(osd_pool_default_size) }}
```

as the pool size value, but `item.size` is always undefined; the
correct variable is `item.value.size`.

Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
(cherry picked from commit 3c31b19ab39f297635c84edb9e8a5de6c2da7707)
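
The fix, per the message above, is to read the size from `item.value.size`:

```
{{ item.value.size | default(osd_pool_default_size) }}
```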

5 years ago handler: fix bug
Guillaume Abrioux [Thu, 19 Dec 2019 10:29:41 +0000 (11:29 +0100)]
handler: fix bug

411bd07d54fc3f585296b68f2fd04484328399b5 introduced a bug in handlers.

Using `handler_*_status` instead of `hostvars[item]['handler_*_status']`
causes handlers to be triggered in any case, even though
`handler_*_status` was set to `False` on a specific node.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1622688
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 30200802d97abf56c09ebd39f64184b2b4622c50)

5 years ago ceph-nfs: add ganesha_t type to selinux
Dimitri Savineau [Mon, 6 Jan 2020 14:09:42 +0000 (09:09 -0500)]
ceph-nfs: add ganesha_t type to selinux

Since RHEL 8.1 we need to add the ganesha_t type to the permissive
SELinux list, otherwise the nfs-ganesha service won't start.
This was done on RHEL 7 previously, and it is part of the
nfs-ganesha-selinux package on RHEL 8.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1786110
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit d75812529069244734732d05cc5aa3ddbc99b7c5)

5 years ago shrink-osd: support fqdn in inventory
Guillaume Abrioux [Mon, 9 Dec 2019 14:52:26 +0000 (15:52 +0100)]
shrink-osd: support fqdn in inventory

When using fqdns in the inventory, that playbook fails because some
tasks use the result of ceph osd tree (which returns shortnames) to get
some data in hostvars[].

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1779021
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 6d9ca6b05b52694dec53ce61fdc16bb83c93979d)

5 years ago ceph-defaults: exclude rbd devices from discovery
Dimitri Savineau [Mon, 16 Dec 2019 20:12:47 +0000 (15:12 -0500)]
ceph-defaults: exclude rbd devices from discovery

The RBD devices aren't excluded from the devices list in the LVM auto
discovery scenario.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1783908
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 6f0556f01536932bdf47e8f1aab341b2c6761537)

5 years ago ceph-infra: replace hardcoded grafana group name
Dimitri Savineau [Mon, 16 Dec 2019 16:03:21 +0000 (11:03 -0500)]
ceph-infra: replace hardcoded grafana group name

The grafana-server group name was hardcoded in the grafana/prometheus
firewalld tasks condition.
We should use the associated variable: grafana_server_group_name.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 2c06678cdeed20f0d40f1693abbf8678250c25ea)

5 years ago ceph-infra: move dashboard into a dedicated file
Dimitri Savineau [Mon, 16 Dec 2019 16:00:35 +0000 (11:00 -0500)]
ceph-infra: move dashboard into a dedicated file

Instead of using multiple dashboard_enabled conditions in the
configure_firewall file, we could just have the condition once and
include the dedicated tasks list.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit f4c261ef9023d006cabfac28b45e7820bb132ceb)

5 years ago ceph-infra: open dashboard port on monitor
Dimitri Savineau [Mon, 16 Dec 2019 15:48:26 +0000 (10:48 -0500)]
ceph-infra: open dashboard port on monitor

When there's no mgr group defined in the ansible inventory, the mgrs are
deployed implicitly on the mon nodes.
If the dashboard is enabled then we need to open the dashboard port on
the node that is running the ceph mgr process (mgr or mon).
The current code only allows opening that port on the mgr nodes when
they are explicitly present in the inventory, but not implicitly.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1783520
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 4535985188dcc656ff4da60318dc07b44eabf3a6)

5 years ago dashboard: use fqdn in external url
Guillaume Abrioux [Thu, 2 Jan 2020 17:09:38 +0000 (18:09 +0100)]
dashboard: use fqdn in external url

Force fqdn to be used in external url for prometheus and alertmanager.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1765485
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 498bc45859f9a7ac4b3ac419e21852164f8a762e)

5 years ago tests: use ceph iscsi stable repository
Dimitri Savineau [Wed, 8 Jan 2020 14:20:01 +0000 (09:20 -0500)]
tests: use ceph iscsi stable repository

The ceph iscsi repository was still set to dev (shaman) instead of
using the stable ceph-iscsi repository.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>

5 years ago purge-iscsi-gateways: remove node from dashboard
Dimitri Savineau [Mon, 6 Jan 2020 20:22:51 +0000 (15:22 -0500)]
purge-iscsi-gateways: remove node from dashboard

When using the ceph dashboard with iscsi gateway nodes, we also need to
remove the nodes from the ceph dashboard list.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1786686
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 931a842f21e5eb847ad371640307b7c0fef198bd)

5 years ago filestore-to-bluestore: umount partitions before zapping them
Guillaume Abrioux [Wed, 18 Dec 2019 14:48:32 +0000 (15:48 +0100)]
filestore-to-bluestore: umount partitions before zapping them

When an OSD is stopped, it leaves partitions mounted.
We must umount them before zapping them, otherwise errors like "Device is
busy" will show up.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1729267
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 8056514134512f20a4b02028fb051f075ad7a145)

5 years ago shrink-mds: do not play ceph-facts entirely
Guillaume Abrioux [Wed, 8 Jan 2020 15:10:17 +0000 (16:10 +0100)]
shrink-mds: do not play ceph-facts entirely

We only need to set `container_binary`.
Let's use the `tasks_from` option.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 0ae0a9ce2812796d943085c6622f4188f16d6231)

5 years ago shrink-mds: use fact from delegated node
Guillaume Abrioux [Wed, 8 Jan 2020 14:02:24 +0000 (15:02 +0100)]
shrink-mds: use fact from delegated node

The command is delegated to the first monitor, so we must use the fact
`container_exec_cmd` from that node.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 77b39d235b9b713b7e814296164db27b4d428ae0)

5 years ago facts: use correct python interpreter
Guillaume Abrioux [Wed, 8 Jan 2020 13:14:41 +0000 (14:14 +0100)]
facts: use correct python interpreter

That task is delegated to the first mon, so we should always use the
`discovered_interpreter_python` fact from that node.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 5adb735c78767545993192c67cf12b9e03f42138)

5 years ago shrink-mds: fix filesystem removal task
Guillaume Abrioux [Fri, 3 Jan 2020 15:02:48 +0000 (16:02 +0100)]
shrink-mds: fix filesystem removal task

This commit deletes the filesystem when no more MDS is present after the
shrinking operation.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1787543
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 38278a6bb5eb2ec4186984f2006094fe7e36dd79)

5 years ago shrink-mds: ensure max_mds is always honored
Guillaume Abrioux [Fri, 3 Jan 2020 14:56:43 +0000 (15:56 +0100)]
shrink-mds: ensure max_mds is always honored

This commit prevents shrinking an mds node when max_mds wouldn't be
honored after that operation.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 2cfe5a04bfcb4aae6b7842621dfacab90bdcc7c3)

5 years ago ceph_volume: support filestore to bluestore migration
Guillaume Abrioux [Tue, 7 Jan 2020 15:29:48 +0000 (16:29 +0100)]
ceph_volume: support filestore to bluestore migration

This commit adds the filestore to bluestore migration support in
ceph_volume module.

We must append to the executed command only the relevant options
according to what is passed in `osd_objectstore`.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit aabba3baab50fe4fb86535cd8838edc4af87d917)

5 years ago filestore-to-bluestore: ensure all dm are closed 4885/head v4.0.6
Guillaume Abrioux [Tue, 10 Dec 2019 22:04:57 +0000 (23:04 +0100)]
filestore-to-bluestore: ensure all dm are closed

This commit adds a task to ensure device mappers are well closed when
the lvm batch scenario is used.
Otherwise, OSDs can't be redeployed, given that the devices are rejected
by ceph-volume because they are locked.

Adding a condition `devices | default([]) | length > 0` to remove these
dm only when using the lvm batch scenario.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 8e6ef818a287e8bf139420142493843077ea3851)

5 years ago filestore-to-bluestore: force OSDs to be marked down
Guillaume Abrioux [Tue, 10 Dec 2019 22:03:40 +0000 (23:03 +0100)]
filestore-to-bluestore: force OSDs to be marked down

Otherwise, it can sometimes take a while for an OSD to be seen as down,
which causes the `ceph osd purge` command to fail.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 51d601193ee2050a002dafd29005019e26e2a804)

5 years ago filestore-to-bluestore: do not use --destroy
Guillaume Abrioux [Tue, 10 Dec 2019 14:59:50 +0000 (15:59 +0100)]
filestore-to-bluestore: do not use --destroy

Do not use `--destroy` when zapping a device.
Otherwise, it destroys VGs while they are still needed to redeploy the
OSDs.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit e3305e6bb655aa64f936687097cbcb6fc62f43cb)