ceph-defaults: fix handlers that are always triggered
Handlers are always triggered in ceph-ansible because the ceph.conf file is
generated with the key/value pairs of each section in a random order.
In Python, a plain dict does not guarantee any ordering, so each time we
render ceph.conf from such a dict the keys come out in a different order,
since the mechanism behind it consists of rendering a file from a Python dict
of keys/values. Therefore, as a quick workaround, sorting this dict before
rendering the configuration file ensures it is always rendered the same way.
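A minimal sketch of the workaround, assuming a hypothetical `ceph_conf` dict of
sections (the real role renders through its own template/plugin, and the
handler name here is illustrative):

```yaml
# render the config with sections and options sorted so repeated runs produce
# byte-identical output and the handler stops firing on every play
- name: render ceph.conf with a stable key order
  copy:
    dest: /etc/ceph/ceph.conf
    content: |
      {% for section, options in ceph_conf | dictsort %}
      [{{ section }}]
      {% for key, value in options | dictsort %}
      {{ key }} = {{ value }}
      {% endfor %}
      {% endfor %}
  notify: restart ceph mons
```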
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit ec042219e64a321fa67fce0384af76eeb238c645) Signed-off-by: Sébastien Han <seb@redhat.com>
Sébastien Han [Tue, 17 Oct 2017 09:49:41 +0000 (11:49 +0200)]
rpm: remove ability to install ceph community version
The downstream version of ceph-ansible could still trigger an installation
from the upstream repository and import its keys.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1503019 Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit c72ddee2d9e93e72722004b109733a68ffd6b8d1) Signed-off-by: Sébastien Han <seb@redhat.com>
Sébastien Han [Mon, 16 Oct 2017 12:15:43 +0000 (14:15 +0200)]
upgrade: support for rbd mirror and nfs
- Add upgrade support for the rbd mirror and nfs daemons (see the sketch
  after this list).
- Only works with systemd (remove the sysvinit and upstart occurrences).
- A bit of cleanup.
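A minimal sketch of the systemd-only restart step; the unit names follow the
ceph-rbd-mirror@/nfs-ganesha convention but are assumptions, not the exact
tasks from the playbook:

```yaml
- name: restart rbd mirror daemon
  systemd:
    name: "ceph-rbd-mirror@rbd-mirror.{{ ansible_hostname }}"
    state: restarted
    enabled: yes

- name: restart nfs ganesha daemon
  systemd:
    name: nfs-ganesha
    state: restarted
    enabled: yes
```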
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit d920d4839d029cc2eed4cb0556782a20f867ddcc) Signed-off-by: Sébastien Han <seb@redhat.com>
Sébastien Han [Wed, 11 Oct 2017 16:29:34 +0000 (18:29 +0200)]
config: properly render ceph.conf when doing collocation
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit aa70b07ae20407b20ec3b71320d2148788d2742e) Signed-off-by: Sébastien Han <seb@redhat.com>
Sébastien Han [Wed, 11 Oct 2017 11:21:37 +0000 (13:21 +0200)]
osd: rollback bindmount of /run/udev
Bind-mounting /run/udev causes unknown issues when trying to start a dmcrypt
container: the container gets stuck at mount time while opening the LUKS
device. It is still unclear why this causes trouble, but we need to move
forward. Also, the bind mount does not seem to help in any way with the race
condition we have seen.
Here is the log for dmcrypt:
```
cryptsetup 1.7.4 processing "cryptsetup --debug --verbose --key-file key luksClose fbf8887d-8694-46ca-b9ff-be79a668e2a9"
Running command close.
Locking memory.
Installing SIGINT/SIGTERM handler.
Unblocking interruption on signal.
Allocating crypt device context by device fbf8887d-8694-46ca-b9ff-be79a668e2a9.
Initialising device-mapper backend library.
dm version [ opencount flush ] [16384] (*1)
dm versions [ opencount flush ] [16384] (*1)
Detected dm-crypt version 1.14.1, dm-ioctl version 4.35.0.
Device-mapper backend running with UDEV support enabled.
dm status fbf8887d-8694-46ca-b9ff-be79a668e2a9 [ opencount flush ] [16384] (*1)
Releasing device-mapper backend.
Trying to open and read device /dev/sdc1 with direct-io.
Allocating crypt device /dev/sdc1 context.
Trying to open and read device /dev/sdc1 with direct-io.
Initialising device-mapper backend library.
dm table fbf8887d-8694-46ca-b9ff-be79a668e2a9 [ opencount flush securedata ] [16384] (*1)
Trying to open and read device /dev/sdc1 with direct-io.
Crypto backend (gcrypt 1.5.3) initialized in cryptsetup library version 1.7.4.
Detected kernel Linux 3.10.0-693.el7.x86_64 x86_64.
Reading LUKS header of size 1024 from device /dev/sdc1
Key length 32, device size 1943016847 sectors, header size 2050 sectors.
Deactivating volume fbf8887d-8694-46ca-b9ff-be79a668e2a9.
dm status fbf8887d-8694-46ca-b9ff-be79a668e2a9 [ opencount flush ] [16384] (*1)
Udev cookie 0xd4d14e4 (semid 32769) created
Udev cookie 0xd4d14e4 (semid 32769) incremented to 1
Udev cookie 0xd4d14e4 (semid 32769) incremented to 2
Udev cookie 0xd4d14e4 (semid 32769) assigned to REMOVE task(2) with flags (0x0)
dm remove fbf8887d-8694-46ca-b9ff-be79a668e2a9 [ opencount flush retryremove ] [16384] (*1)
fbf8887d-8694-46ca-b9ff-be79a668e2a9: Stacking NODE_DEL [verify_udev]
Udev cookie 0xd4d14e4 (semid 32769) decremented to 1
Udev cookie 0xd4d14e4 (semid 32769) waiting for zero
```
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit d0a9e57bfcf68e41e25a1b3868ded447d09f8199) Signed-off-by: Sébastien Han <seb@redhat.com>
Sébastien Han [Wed, 11 Oct 2017 10:52:12 +0000 (12:52 +0200)]
purge-iscsi: fix group name
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1500281 Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 85e13a864c1317849d7bf34441fa1f7b33939556) Signed-off-by: Sébastien Han <seb@redhat.com>
In addition to c4dcdaa20, this commit adds the missing condition on the
install tasks for debian_rhcs deployments. Without it, these tasks are played
on any kind of deployment; a sketch of the kind of guard is shown below.
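A minimal sketch of such a guard, assuming the ceph_repository convention used
elsewhere in ceph-ansible; the included file name is hypothetical:

```yaml
- name: include debian rhcs install tasks
  # hypothetical file name; the point is the condition restricting this to
  # RHCS-on-Debian deployments
  include: installs/install_debian_rhcs.yml
  when:
    - ansible_os_family == 'Debian'
    - ceph_repository == 'rhcs'
```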
Jan Provaznik [Tue, 10 Oct 2017 10:43:23 +0000 (12:43 +0200)]
Ceph-nfs dynamic exports fixes
* DBus on the host should include the ganesha service file.
* To allow the ganesha container to respond on DBus, it needs to run
  in --privileged mode (the ganesha folks have been contacted to look at
  this); see the sketch after this list.
* The ceph_nfs_include_exports_dir variable is replaced with the more general
  ceph_nfs_dynamic_exports.
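A rough sketch of the privileged run with the DBus socket exposed; the image
name, paths and task shape are assumptions (the real role drives the container
through its own unit/template):

```yaml
- name: start the nfs-ganesha container with dbus access for dynamic exports
  command: >
    docker run --rm --privileged --net=host
    -v /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket
    -v /etc/ceph:/etc/ceph
    --name ceph-nfs-{{ ansible_hostname }}
    {{ ceph_docker_image | default('ceph/daemon') }}
  when: ceph_nfs_dynamic_exports | bool
```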
Sébastien Han [Tue, 10 Oct 2017 07:57:39 +0000 (09:57 +0200)]
purge: fix journal purge
Using a condition on osd_scenario == 'non-collocated' was wrong, since these
partitions can also be collocated on a single device. Removing the check
ensures these partitions are purged.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1499871 Signed-off-by: Sébastien Han <seb@redhat.com>
Make the `ceph-mgr` role handle the installation of the `ceph-mgr` package
itself, because it is complicated to manage elsewhere given that we may
install either `jewel` or `luminous`.
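A minimal sketch of the idea, assuming the ceph_release/ceph_release_num facts
set by ceph-defaults; only luminous and later ship a separate ceph-mgr
package:

```yaml
- name: install ceph-mgr package
  package:
    name: ceph-mgr
    state: present
  # skip on jewel, where there is no standalone ceph-mgr package
  when: ceph_release_num[ceph_release] > ceph_release_num['jewel']
```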
Sébastien Han [Sun, 8 Oct 2017 15:29:32 +0000 (17:29 +0200)]
ci: re-add osd_pool_default_size to 1 with the override
If we don't do this, the client creates pools with a replica count of 3,
since osd_pool_default_size was gone from ceph-override.json. This was making
switch_to_containers fail.
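An illustrative override fragment (shown as YAML; the CI file itself is
ceph-override.json):

```yaml
ceph_conf_overrides:
  global:
    # single replica so the small CI cluster can reach a clean state
    osd_pool_default_size: 1
```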
Sébastien Han [Sun, 8 Oct 2017 13:54:36 +0000 (15:54 +0200)]
infra: add independent purge-iscsi-gateways.yml
The current inclusion of purge-iscsi-gateways.yml in purge-cluster.yml is not
working well and is blocking the CI too, so remove it from purge-cluster.yml
and re-add the original purge-iscsi-gateways.yml.
Boris Ranto [Fri, 6 Oct 2017 20:54:34 +0000 (22:54 +0200)]
purge-cluster: Do not use shell for rm
The shell wildcard expansion of non-existing paths fails on zsh, making the
whole script fail. We can use the file module with with_fileglob instead to
alleviate the problem.
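A minimal sketch of that pattern; the path is illustrative, and note that
with_fileglob expands the glob on the machine running Ansible:

```yaml
- name: remove leftover ceph files
  file:
    path: "{{ item }}"
    state: absent
  # with_fileglob only yields paths that actually exist, so an empty match is
  # a no-op instead of a shell error
  with_fileglob:
    - /etc/ceph/*
```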
Sébastien Han [Fri, 6 Oct 2017 14:49:46 +0000 (16:49 +0200)]
use get to check stdout_lines
During the initial play, the docker command does not exist yet, so there are
no stdout_lines on the registered result. Using get allows us to fix this by
falling back to an empty list when the command fails.
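A minimal sketch of the pattern; the registered variable name is hypothetical:

```yaml
- name: list running ceph containers
  command: docker ps -q --filter "name=ceph"
  register: ceph_containers
  failed_when: false

- name: act only when containers were found
  debug:
    msg: "ceph containers are running"
  # .get() returns the empty-list default when the command failed and the
  # result has no stdout_lines key
  when: ceph_containers.get('stdout_lines', []) | length > 0
```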
Use an intermediate variable to build the final `dedicated_devices` list to
avoid duplicate entries in that array. (We need a 1:1 relation between
`dedicated_devices` and `devices` since we are using a `with_together` later;
see the sketch below.)
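An illustration of why the 1:1 mapping matters; the task is a simplified
sketch, not the actual osd preparation task:

```yaml
# with_together zips the two lists positionally, so every entry in devices
# must line up with exactly one entry in dedicated_devices
- name: prepare osds with a dedicated journal device
  command: ceph-disk prepare {{ item.0 }} {{ item.1 }}
  with_together:
    - "{{ devices }}"
    - "{{ dedicated_devices }}"
```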
The shared folder is not required for tests.
We should avoid hitting the error:
```
uninitialized constant VagrantPlugins::ProviderLibvirt::Action::ShareFolders
```
Also, disabling it might reduce the time needed in certain cases for the VMs
to start.
If two environments use the same subnet, we will get into trouble because of
IP address conflicts.
This commit ensures each scenario has a unique subnet for both the public and
cluster networks, so we can set up several test environments at a time on the
same hypervisor.
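An illustrative vagrant_variables.yml fragment; the subnet values shown are
assumptions, the point is that each scenario picks non-overlapping ones:

```yaml
# one scenario's networks; another scenario would use different third octets
public_subnet: 192.168.15
cluster_subnet: 192.168.16
```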
Sébastien Han [Thu, 5 Oct 2017 12:21:37 +0000 (14:21 +0200)]
jewel: remove rbd check
The value of doing this is fairly low compared to the complexity it adds.
So we remove these tasks; if the rbd pool on Jewel doesn't have the right PG
value, you can always increase it.
iscsi-gw needs an 'rbd' pool to configure the iscsi target.
Note: I could have used the facts already set in `ceph-mon`, but I deliberately
did not, to avoid creating a dependency between these two roles.
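A rough sketch of the pool creation, assuming the cluster and pg-count
variables are resolved by the role itself rather than taken from ceph-mon
facts:

```yaml
- name: create the rbd pool needed by the iscsi target
  command: >
    ceph --cluster {{ cluster | default('ceph') }}
    osd pool create rbd {{ osd_pool_default_pg_num | default(128) }}
  changed_when: false
```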
This commit refactors the code for all the `set_osd_pool_default_*` related
tasks by avoiding useless `set_fact` tasks to determine whether a key is
present in `ceph_conf_overrides`.
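Illustrative of the pattern: read the override directly with a default instead
of a separate set_fact per key (the fallback value here is an assumption):

```yaml
# defaults-style sketch: no set_fact needed to probe ceph_conf_overrides
osd_pool_default_pg_num: "{{ ceph_conf_overrides.get('global', {}).get('osd_pool_default_pg_num', 128) }}"
```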
Al Lau [Fri, 29 Sep 2017 17:19:05 +0000 (10:19 -0700)]
Only perform actions on the rbd pool after it has been created
The rbd pool is the default pool that gets created during ceph cluster
initialization. If we act on the rbd related operations too early, the rbd
pool does not exist yet. Move the call to perform rbd operations to a later
stage, after the other pools have been created.
The rbd_pool.yml playbook has all the operations related to the rbd pool.
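A sketch of the resulting ordering; rbd_pool.yml is the file named above, the
other include name is hypothetical:

```yaml
# run the generic pool creation first, then the rbd-specific operations
- include: create_pools.yml   # hypothetical pool-creation step
- include: rbd_pool.yml       # rbd pool operations, now after the pools exist
```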
Replace the always_run (deprecated) directive with check_mode.
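A minimal sketch of the directive swap (the task itself is illustrative):

```yaml
- name: list existing pools
  command: ceph osd lspools
  register: existing_pools
  check_mode: no   # replaces the deprecated "always_run: true"
```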
Most of the ceph-related tasks only need to run once. The run_once directive
executes the task on the first host.
The ceph sub-command to delete a pool is delete (not rm).
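Illustrative of both points above: the pool deletion runs once and uses the
delete sub-command (the repeated pool name and confirmation flag are required
by ceph itself):

```yaml
- name: delete the rbd pool
  command: ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
  run_once: true
```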
This upload includes these changes:
- Use the fail module instead of assert (see the sketch after this list).
- From the luminous release, the rbd pool is no longer created by default, so
  delete the code that creates the rbd pool for luminous.
- Conform the .yml files to the suggested syntax.
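A minimal sketch of the fail-module usage; the message and condition are
illustrative:

```yaml
- name: abort when the expected rbd pool is missing on a pre-luminous cluster
  fail:
    msg: "expected the default rbd pool to exist"
  when: "'rbd' not in existing_pools.stdout"
```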
The commands are executed on the mcp nodes, and I think the shell Ansible
module is the right one to use. The command module is used to execute
commands on remote nodes. I can make the change to use the command module if
that is preferred.