upgrade: skip luminous tasks for jewel minor update
These tasks are needed only when upgrading to luminous.
They are not needed for a Jewel minor upgrade and, in any case, they fail
because the `ceph versions` command doesn't exist there.
osds: change default value for `dedicated_devices`
This is to keep backward compatibility with stable-2.2 and satisfy the
"verify dedicated devices have been provided" check in
`check_mandatory_vars.yml`. That check looks for
`dedicated_devices`, so we need to default its value to
`raw_journal_devices` when `raw_multi_journal` is set to `True`.
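A minimal sketch of the kind of fallback this describes (the variable names come from the message above; the exact task layout is an assumption):

```yaml
# hypothetical sketch: default dedicated_devices to raw_journal_devices when the
# legacy raw_multi_journal flag is set and no dedicated_devices were provided
- name: default dedicated_devices to raw_journal_devices
  set_fact:
    dedicated_devices: "{{ raw_journal_devices }}"
  when:
    - raw_multi_journal | default(false) | bool
    - dedicated_devices | default([]) | length == 0
```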
Sébastien Han [Tue, 19 Dec 2017 17:54:19 +0000 (18:54 +0100)]
osd: skip devices marked as '/dev/dead'
In a non-collocated scenario, if a drive is faulty we can't simply
remove it from the 'devices' list without breaking or having to
re-arrange the order of 'dedicated_devices'; we want to keep that
device list ordered. Marking the device as '/dev/dead' prevents the
activation from failing on a drive that we know is bad but can't remove
yet without messing up the mapping between devices and dedicated_devices.
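For illustration, a hypothetical inventory fragment showing why the ordering matters: the faulty drive keeps its slot so the remaining journals still line up with their data disks.

```yaml
# hypothetical example: /dev/sdc is faulty; its slot is kept as '/dev/dead'
# so the devices <-> dedicated_devices mapping stays aligned
devices:
  - /dev/sdb
  - /dev/dead      # faulty drive, skipped during activation
  - /dev/sdd
dedicated_devices:
  - /dev/nvme0n1
  - /dev/nvme0n1
  - /dev/nvme0n1
```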
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 6db4aea453b6371345b2a1db96ab449b34870235) Signed-off-by: Sébastien Han <seb@redhat.com>
Sébastien Han [Wed, 17 Jan 2018 14:18:11 +0000 (15:18 +0100)]
rolling update: add mgr exception for jewel minor updates
When updating from one minor Jewel version to another, the playbook
fails on the task "fail if no mgr host is present in the inventory".
This can now be worked around by running Ansible with
-e jewel_minor_update=true
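A rough sketch of what the exception amounts to, assuming the check is a plain `fail` task (only the task name and the variable come from the message above):

```yaml
# hypothetical sketch of the guarded check
- name: fail if no mgr host is present in the inventory
  fail:
    msg: "please add a [mgrs] section to your inventory"
  when:
    - groups.get('mgrs', []) | length == 0
    - not jewel_minor_update | default(false) | bool
```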
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1535382 Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 8af745947695ff7dc543754db802ec57c3238adf) Signed-off-by: Sébastien Han <seb@redhat.com>
Sébastien Han [Thu, 18 Jan 2018 09:06:34 +0000 (10:06 +0100)]
rgw: disable legacy unit
Some systems that were deployed with old tools can be left with units named
"ceph-radosgw@radosgw.gateway.service". As a consequence, they
prevent the new unit from starting.
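A hedged sketch of the cleanup, assuming the systemd module is used (the unit name comes from the message above):

```yaml
# hypothetical sketch: stop and disable the stale unit left by older tools
- name: disable legacy radosgw unit
  systemd:
    name: ceph-radosgw@radosgw.gateway.service
    state: stopped
    enabled: false
  failed_when: false   # the unit may simply not exist on clean systems
```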
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1509584 Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit f88795e8433f92ddc049d3e0d87e7757448e5005) Signed-off-by: Sébastien Han <seb@redhat.com>
Sébastien Han [Tue, 16 Jan 2018 16:43:54 +0000 (17:43 +0100)]
common/docker-common: always start ntp
There is no need to start ntp only if the package was already present. If
the package is not present, we install it and then enable and start
the service.
The original fix is part of this commit:
https://github.com/ceph/ceph-ansible/commit/849786967ac4c6235e624243019f0b54bf3340a4
However, this is a feature addition so it cannot be backported. Hence
this commit.
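A minimal sketch of the resulting behaviour (package and service names vary per distro and are assumptions here):

```yaml
# hypothetical sketch: install ntp unconditionally, then enable and start it
- name: install ntp
  package:
    name: ntp
    state: present

- name: enable and start the ntp service
  service:
    name: ntpd          # 'ntp' on Debian-based systems
    state: started
    enabled: true
```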
Sébastien Han [Thu, 21 Dec 2017 18:57:01 +0000 (19:57 +0100)]
ci: test on ansible 2.4.2
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 7ba25b20dcb199f81666b34cae6c1b95c30b1033) Signed-off-by: Sébastien Han <seb@redhat.com>
Having handlers in both the ceph-defaults and ceph-docker-common roles can make
the playbook restart services twice: handlers can be triggered a first
time because of a change in ceph.conf and a second time because a new
image has been pulled.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit b29a42cba6a4059b2c0035572d570c0812f48d16) Signed-off-by: Sébastien Han <seb@redhat.com>
Sébastien Han [Fri, 15 Dec 2017 18:43:23 +0000 (19:43 +0100)]
container: restart container when there is a new image
There was no really good way to implement this.
We had several options and none of them were ideal, since handlers cannot
be triggered cross-roles.
We could have achieved it by doing:
* option 1: add a dependency in the meta of the ceph-docker-common
role. We had that long ago and decided to stop, so everything is
managed via site.yml.
* option 2: import files from another role. This is messy, we don't do
that anywhere in the current code base, and we intend to keep it that way.
* option 3: pull the image from the ceph-config role. This is not
suitable either, since the docker command won't be available unless you run
an Atomic distro. It would also mean pulling twice: first in ceph-config,
then in ceph-docker-common.
The only option I came up with was to duplicate a bit of the ceph-config
handlers code.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1526513 Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 8a19a83354cd8a4f9a729b3864850ec69be6d5da) Signed-off-by: Sébastien Han <seb@redhat.com>
containers: fix bug when looking for existing cluster
In a containerized deployment, `docker_exec_cmd` is not set before the
task that tries to retrieve the current fsid is played, which means the
playbook considers there is no existing fsid and tries to generate a new one.
Sébastien Han [Tue, 9 Jan 2018 13:34:09 +0000 (14:34 +0100)]
container: change the way we force no logs inside the container
Previously we were using ceph_conf_overrides, however this doesn't play
nice with software like TripleO that uses ceph_conf_overrides inside its
own code. For now, and since this is the only occurrence of this, we can
ensure no logs through the ceph conf template.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1532619 Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit c2e04623a54007674ec60647a9e5ddd2da4f991b) Signed-off-by: Sébastien Han <seb@redhat.com>
Sébastien Han [Tue, 9 Jan 2018 12:54:50 +0000 (13:54 +0100)]
mon: use crush rules for non-container too
There is no reason why we can't use crush rules when deploying
containers. So move the include into main.yml so it can be called.
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit f0787e64da45fdbefb2ff1376a0705fadf6a502d) Signed-off-by: Sébastien Han <seb@redhat.com>
Andrew Schoen [Fri, 5 Jan 2018 19:47:10 +0000 (13:47 -0600)]
test: set UPDATE_CEPH_DOCKER_IMAGE_TAG for jewel tests
We want to be explicit here and update to luminous, not to
the 'latest' tag.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit a8509fbc9c0328670224f608abea17d8e64257ab) Signed-off-by: Sébastien Han <seb@redhat.com>
Andrew Schoen [Fri, 5 Jan 2018 18:42:16 +0000 (12:42 -0600)]
switch-to-containers: do not fail when stopping the nfs-ganesha service
If we're working with a jewel cluster then this service will not exist.
This is mainly a problem with CI testing because our tests are set up to
work with both jewel and luminous, meaning that even though we want to
test jewel we still have an nfs-ganesha host in the test, causing these
tasks to run.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit b613321c210155f390d4ddb7dcda8dc685a6e9ea) Signed-off-by: Sébastien Han <seb@redhat.com>
Andrew Schoen [Fri, 5 Jan 2018 18:37:36 +0000 (12:37 -0600)]
switch-to-containers: do not fail when stopping the ceph-mgr daemon
If we are working with a jewel cluster, ceph-mgr does not exist
and this makes the playbook fail.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit 0b4b60e3c9cabbbda2883feb40a6f80763c66b50) Signed-off-by: Sébastien Han <seb@redhat.com>
Andrew Schoen [Fri, 5 Jan 2018 16:06:53 +0000 (10:06 -0600)]
rolling_update: do not fail the playbook if nfs-ganesha is not present
The rolling update playbook was attempting to stop the
nfs-ganesha service on nodes where jewel is still installed.
The nfs-ganesha service does not exist in jewel, so the task fails.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit 997edea271b713b29f896ebb87dc6df29a60488b) Signed-off-by: Sébastien Han <seb@redhat.com>
The `bluestore_purge_osd_non_container` scenario is failing because it
keeps old osd_uuid information on devices, which causes `ceph-disk activate`
to fail when trying to redeploy a new cluster after a purge.
Typical error seen:
```
2017-12-13 14:29:48.021288 7f6620651d00 -1
bluestore(/var/lib/ceph/tmp/mnt.2_3gh6/block) _check_or_set_bdev_label
bdev /var/lib/ceph/tmp/mnt.2_3gh6/block fsid 770080e2-20db-450f-bc17-81b55f167982 does not match our fsid f33efff0-2f07-4203-ad8d-8a0844d6bda0
```
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit eeedefdf0207f04e67af490e03d895324ab609a1) Signed-off-by: Sébastien Han <seb@redhat.com>
Sébastien Han [Wed, 20 Dec 2017 14:29:02 +0000 (15:29 +0100)]
mon: always run ceph-create-keys
ceph-create-keys is idempotent, so it's not an issue to run it each time
we play ansible. This also fixes issues where the 'creates' arg skips the
task and no keys get generated on newer versions, e.g. during an upgrade.
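A sketch of the idea, assuming the task shells out to ceph-create-keys directly (the exact arguments are an assumption):

```yaml
# hypothetical sketch: no 'creates' argument, so the command runs on every play;
# ceph-create-keys is idempotent so re-running it is harmless
- name: create ceph initial keys
  command: ceph-create-keys --cluster {{ cluster }} --id {{ monitor_name }}
  changed_when: false
```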
Closes: https://github.com/ceph/ceph-ansible/issues/2228 Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 0b55abe3d0fc6db6c93d963545781c05a31503bb) Signed-off-by: Sébastien Han <seb@redhat.com>
Sébastien Han [Thu, 21 Dec 2017 09:19:22 +0000 (10:19 +0100)]
rgw: disable legacy rgw service unit
When upgrading from OSP11 to OSP12 container, ceph-ansible attempts to
disable the RGW service provided by the overcloud image. The task
attempts to stop/disable ceph-rgw@{{ ansible-hostname }} and
ceph-radosgw@{{ ansible-hostname }}.service. The actual service name is
ceph-radosgw@radosgw.$name
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1525209 Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit ad54e19262f3d523ad57ee39e64d6927b0c21dea) Signed-off-by: Sébastien Han <seb@redhat.com>
Sébastien Han [Wed, 20 Dec 2017 12:39:33 +0000 (13:39 +0100)]
fix jewel scenarios on container
When deploying Jewel from master we still need to enable this code since
the container image has such a check. This check still exists because
ceph-disk is not able to create a GPT label on a drive that does not
have one.
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 39f2bfd5d58bae3fef2dd4fca0b2bab2e67ba21f) Signed-off-by: Sébastien Han <seb@redhat.com>
Sébastien Han [Tue, 19 Dec 2017 14:10:05 +0000 (15:10 +0100)]
site-docker: ability to disable fact sharing
When deploying with Ansible at large scale, the delegate_facts method
consumes a lot of memory on the host that is running Ansible. This can
cause various issues like memory exhaustion on that machine.
You can now run Ansible with "-e delegate_facts_host=False" to disable
the fact sharing.
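Roughly, the fact-delegation play can now be guarded like this (a sketch; the real play lives in site-docker.yml):

```yaml
# hypothetical sketch: fact delegation guarded by the new variable
- name: gather facts and delegate them to all hosts
  setup:
  delegate_to: "{{ item }}"
  delegate_facts: true
  with_items: "{{ groups['all'] }}"
  when: delegate_facts_host | default(true) | bool
```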
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit c315f81dfe440945aaa90265cd3294fdea549942) Signed-off-by: Sébastien Han <seb@redhat.com>
Sébastien Han [Fri, 15 Dec 2017 16:39:32 +0000 (17:39 +0100)]
rolling_update: do not require root to answer question
There is no need to ask for root on the local action. This will prompt
for a password if the current user is not part of sudoers, which is
unnecessary anyway.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1516947 Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 200785832f3b56dd8c5766ec0b503c5d77b4a984) Signed-off-by: Sébastien Han <seb@redhat.com>
Sébastien Han [Mon, 18 Dec 2017 15:43:37 +0000 (16:43 +0100)]
osd: best effort if no device is found during activation
We have a scenario where we switch from non-container to containers. This
means we don't know anything about the ceph partitions associated with an
OSD. Normally, in a containerized context, we have files containing the
preparation sequence. From these files we can get the capabilities of
each OSD. As a last resort we use a ceph-disk call inside a dummy bash
container to discover the ceph journal on the current OSD.
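As a rough illustration of the last-resort discovery (image variables and docker options are assumptions):

```yaml
# hypothetical sketch: run ceph-disk from a throwaway container to list partitions
- name: discover ceph journal partitions with ceph-disk
  shell: >
    docker run --rm --privileged
    -v /dev:/dev -v /etc/ceph:/etc/ceph
    --entrypoint ceph-disk
    {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
    list
  register: ceph_disk_partitions
  changed_when: false
```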
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1525612 Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit bbc79765f3e8b93b707b0f25f94e975c1bd85c66) Signed-off-by: Sébastien Han <seb@redhat.com>
Sébastien Han [Tue, 19 Dec 2017 10:17:04 +0000 (11:17 +0100)]
nfs: fix package install for debian/suse systems
This resolves the following error:
E: There were unauthenticated packages and -y was used without
--allow-unauthenticated
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit dfbef8361d3ac03788aa1f93b23907bc9595a730) Signed-off-by: Sébastien Han <seb@redhat.com>
The name docker_version is very generic and is also used by other
roles. As a result, there may be name conflicts. To avoid this a
ceph_ prefix should be used for this fact. Since it is an internal
fact renaming is not a problem.
Sébastien Han [Fri, 20 Oct 2017 09:14:13 +0000 (11:14 +0200)]
common: move restapi template to config
Closes: github.com/ceph/ceph-ansible/issues/1981 Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit ba5c6e66f03314d1b7263225e75f0f56c438db3b) Signed-off-by: Sébastien Han <seb@redhat.com>
The entrypoint to generate user keyrings is `ceph-authtool`; therefore,
it can expand the `$(ceph-authtool --gen-print-key)` inside the
container. Users must generate a keyring themselves.
This commit also adds a check to ensure keyrings are properly filled when
`user_config: true`.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit ab1dd3027a4b9932e58f28b86ab46979eb1f1682) Signed-off-by: Sébastien Han <seb@redhat.com>
Sébastien Han [Thu, 14 Dec 2017 10:31:28 +0000 (11:31 +0100)]
default: look for the right return code on socket stat in-use
As reported in https://github.com/ceph/ceph-ansible/issues/2254, the
check with fuser is not ideal. If fuser is not available the return code
is 127. Here we want to make sure that we are looking for the correct
return code, which is 1.
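A sketch of the distinction being made (paths and task layout are assumptions; the return codes are the point):

```yaml
# hypothetical sketch: rc 0 = socket in use, rc 1 = not in use, rc 127 = fuser missing
- name: check if the ceph socket is in use
  shell: fuser --silent /var/run/ceph/*.asok
  register: ceph_socket
  failed_when: false
  changed_when: false

# key the follow-up logic on rc == 1 ("socket exists but nothing uses it"),
# so a missing fuser binary (rc 127) is not misread as "not in use"
- debug:
    msg: "socket not in use"
  when: ceph_socket.rc == 1
```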
Closes: https://github.com/ceph/ceph-ansible/issues/2254 Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 7eaf444328c8c381c673883913cf71b8ebe9d064) Signed-off-by: Sébastien Han <seb@redhat.com>
This task hangs because `{{ inventory_hostname }}` doesn't resolve to an
actual IP address.
Using `hostvars[inventory_hostname]['ansible_default_ipv4']['address']`
should fix this because it will reach the node with its actual IP
address.
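For illustration, a hypothetical task using the corrected lookup:

```yaml
# hypothetical sketch: wait on the node's real IP instead of a name that may not resolve
- name: wait for the node to listen on the monitor port
  wait_for:
    host: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}"
    port: 6789
    timeout: 300
```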
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit aaaf980140832de694ef0ffe3282dabbf0b90081) Signed-off-by: Sébastien Han <seb@redhat.com>
Sébastien Han [Wed, 22 Nov 2017 16:11:50 +0000 (17:11 +0100)]
common: install ceph-common on all the machines
Since some daemons now install their own packages the task checking the
ceph version fails on Debian systems. So the 'ceph-common' package must
be installed on all the machines.
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit bb7b29a9fcc33e7316bbe7dad3dc3cd5395ef8ab) Signed-off-by: Sébastien Han <seb@redhat.com>
John Fulton [Thu, 16 Nov 2017 16:29:59 +0000 (11:29 -0500)]
Make openstack_keys param support no acls list
A recent change [1] required that the openstack_keys
param always contain an acls list. However, it's
possible it might not contain that list. Thus, this
patch sets a default for that list to be empty if it
is not in the structure as defined by the user.
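A sketch of the structure this allows, where `acls` may simply be left out (other fields omitted; everything except `name` and `acls` is an assumption):

```yaml
# hypothetical example: the acls list is optional and defaults to []
openstack_keys:
  - name: client.glance
    acls: []               # may be omitted entirely
  - name: client.cinder    # no acls given; tasks treat it as "item.acls | default([])"
```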
Sébastien Han [Thu, 16 Nov 2017 13:55:08 +0000 (14:55 +0100)]
osd: fix bad activation for dmcrypt
We were activating dmcrypt devices with the wrong command. Basically the
first task executed the wrong activate command. The task fails but
continues because of the 'failed_when: false'. Then the right activation
sequence is done by the next task.
John Fulton [Mon, 6 Nov 2017 22:24:48 +0000 (17:24 -0500)]
Set permissions and ACLs of OpenStack keys on all ceph-mons
If ceph-ansible deploys a Ceph cluster with "openstack_config: true"
and sets the openstack_keys map to have certain ACLs or permissions,
the requested ACLs or permissions are only set on one of the monitor
nodes [2] when they should be set on all of them.
This patch solves [3] the above issue by having the chmod and setfacl
tasks iterate over the list of mon nodes (including the mon node that the
task was delegated to) to apply the chmod or setfacl to the keys in
openstack_keys.
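A rough sketch of the resulting iteration (module choice, paths and mode are assumptions):

```yaml
# hypothetical sketch: apply the mode on every monitor, not only on the delegate
- name: set keyring permissions on all monitors
  file:
    path: "/etc/ceph/{{ cluster }}.{{ item.1.name }}.keyring"
    mode: "0600"
  delegate_to: "{{ item.0 }}"
  with_nested:
    - "{{ groups[mon_group_name] }}"
    - "{{ openstack_keys }}"
```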
Like 80d32dec, the path to the fact is not correct.
In any case, we retrieve the IP address from hostvars; the variable
is how we get the interface name, depending on where it has been set
(e.g. inventory host file vs. group_vars/).
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1510906 Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 44df3f9102773c10011c82b5c1a20e7ae46e0001) Signed-off-by: Sébastien Han <seb@redhat.com>
ceph-ansible is now being tested against Ansible 2.2 and 2.4. We
need to update tox.ini so we use the right version of testinfra
depending on which ansible version we are using.
purge-docker-cluster must remove all osd_disk_prepare logs in
`{{ ceph_osd_docker_run_script_path }}`, otherwise if you purge your
cluster and try to redeploy it, OSDs will fail to start because they
will try to find a partition uuid which doesn't exist.