ceph-ansible.git
7 years ago iscsi: add python-rtslib repository
Sébastien Han [Mon, 14 May 2018 07:21:48 +0000 (09:21 +0200)]
iscsi: add python-rtslib repository

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago Allow os_tuning_params to overwrite fs.aio-max-nr
Andy McCrae [Thu, 10 May 2018 10:15:30 +0000 (11:15 +0100)]
Allow os_tuning_params to overwrite fs.aio-max-nr

The order of fs.aio-max-nr (which is hard-coded to 1048576) means that
if you set fs.aio-max-nr in os_tuning_params it will effectively be
ignored for bluestore scenarios.

To resolve this we should move the setting of fs.aio-max-nr above the
setting of os_tuning_params; this way the operator can define the
value of fs.aio-max-nr to be something other than 1048576 if they want
to.

Additionally, we can make the sysctl settings happen in 1 task rather
than multiple.
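
A minimal sketch of the combined sysctl task described above (variable and
file names here are illustrative, not necessarily the ones used in the role):

    - name: apply operating system tuning
      sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        sysctl_file: /etc/sysctl.d/ceph-tuning.conf
        ignoreerrors: yes
      with_items:
        # the hard-coded default is applied first so os_tuning_params can override it
        - { name: fs.aio-max-nr, value: 1048576 }
        - "{{ os_tuning_params }}"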

7 years ago Makefile: bail out on unknown Git tag formats
Ken Dreyer [Thu, 10 May 2018 20:39:07 +0000 (14:39 -0600)]
Makefile: bail out on unknown Git tag formats

Prior to this change, if we created entirely new Git tag patterns like
"3.2.0alpha" or "3.2.0foobar", the Makefile would incorrectly translate
the Git tag name into a Name-Version-Release that would prevent upgrades
to "newer" versions.

This happened for example in
https://bugs.centos.org/view.php?id=14593, "Incorrect naming scheme for
a build of ceph-ansible prevents subsequent updates to be installed"

If we encounter a new Git tag format that we cannot parse,
pessimistically bail out early instead of trying to build an RPM.

The purpose of this safeguard is to prevent Jenkins from building RPMs
that cannot be easily upgraded.

7 years ago client: remove default value for pg_num in pools creation
Guillaume Abrioux [Thu, 3 May 2018 19:36:21 +0000 (21:36 +0200)]
client: remove default value for pg_num in pools creation

Trying to set the default value for pg_num to
`hostvars[groups[mon_group_name][0]]['osd_pool_default_pg_num'])` will
break in the case of an external client nodes deployment.
The `pg_num` attribute should be mandatory and will be tested in the future
`ceph-validate` role.
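
With that default gone, each pool entry is expected to carry its own pg_num,
e.g. something like this in group_vars (values purely illustrative):

    openstack_pools:
      - { name: images, pg_num: 32, rule_name: "" }
      - { name: volumes, pg_num: 64, rule_name: "" }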

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
7 years ago contrib: update backport script to reflect stable branch
Sébastien Han [Wed, 9 May 2018 21:30:12 +0000 (14:30 -0700)]
contrib: update backport script to reflect stable branch

Since we now do backports on stable-3.0 and stable-3.1 we have to use
the name of the stable branch in the backport branch name. If we don't
do this we will end up with conflicting branch names.

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago adds missing state needed to upgrade nfs-ganesha
Gregory Meno [Wed, 9 May 2018 18:17:26 +0000 (11:17 -0700)]
adds missing state needed to upgrade nfs-ganesha

We were missing this state in the tasks for os_family Red Hat.

fixes: bz1575859
Signed-off-by: Gregory Meno <gmeno@redhat.com>
7 years ago mon: fix mgr keyring creation when upgrading from jewel v3.1.0rc2
Guillaume Abrioux [Wed, 9 May 2018 12:42:27 +0000 (14:42 +0200)]
mon: fix mgr keyring creation when upgrading from jewel

On containerized deployments, when upgrading from jewel to luminous, mgr
keyring creation fails because the command that creates the mgr keyring is
executed in a container that is still running jewel (the container is only
restarted later to run the new image), so it fails with a bad entity error.

To get around this, we can delegate the keyring creation command to the
first monitor while the playbook is running on the last monitor. That way
we ensure the command is issued in a container that has already been
restarted with the new image.
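
A rough sketch of the delegation described above (task layout and variable
names are illustrative, not the exact ones from the role):

    - name: create ceph mgr keyring(s) on the first monitor
      command: >
        {{ docker_exec_cmd }} ceph --cluster {{ cluster }}
        auth get-or-create mgr.{{ hostvars[item]['ansible_hostname'] }}
        mon 'allow profile mgr' osd 'allow *' mds 'allow *'
      with_items: "{{ groups[mon_group_name] }}"
      delegate_to: "{{ groups[mon_group_name][0] }}"
      # only run once the playbook reaches the last monitor; by then the first
      # monitor's container is already running the new image
      when: inventory_hostname == groups[mon_group_name] | last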

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1574995
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
7 years ago osd: clean legacy syntax in ceph-osd-run.sh.j2
Guillaume Abrioux [Wed, 9 May 2018 01:10:30 +0000 (03:10 +0200)]
osd: clean legacy syntax in ceph-osd-run.sh.j2

Quick cleanup of a legacy syntax left over from e0a264c7e

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
7 years ago Make sure the restart_mds_daemon script is created with the correct MDS name
Simone Caronni [Thu, 5 Apr 2018 14:14:23 +0000 (16:14 +0200)]
Make sure the restart_mds_daemon script is created with the correct MDS name

7 years ago common: enable Tools repo for rhcs clients
Sébastien Han [Tue, 8 May 2018 14:11:14 +0000 (07:11 -0700)]
common: enable Tools repo for rhcs clients

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1574458
Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago Fix install of nfs-ganesha-ceph for Debian/SuSE v3.1.0beta9
Andy McCrae [Thu, 22 Mar 2018 12:19:22 +0000 (12:19 +0000)]
Fix install of nfs-ganesha-ceph for Debian/SuSE

The Debian and SuSE installs for nfs-ganesha on the non-rhcs repository
require you to use allow_unauthenticated for Debian, and disable_gpg_check
for SuSE. The nfs-ganesha-rgw package already does this, but the
nfs-ganesha-ceph package will fail to install because of this same
issue.

This PR moves the installations to happen when the appropriate flags are
set to True (nfs_obj_gw & nfs_file_gw), but does it per distro (one for
SuSE and one for Debian) so that the appropriate flag can be passed to
ignore the GPG check.

7 years ago playbook: improve facts gathering
Guillaume Abrioux [Thu, 3 May 2018 16:41:16 +0000 (18:41 +0200)]
playbook: improve facts gathering

There is no need to gather facts in an O(N^2) way.
Only one node should gather the facts from the other nodes.
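
A hedged illustration of gathering facts once and delegating them, instead of
every node doing it for every other node (the actual play may differ):

    - name: gather facts for all nodes
      setup:
      delegate_to: "{{ item }}"
      delegate_facts: true
      with_items: "{{ groups['all'] }}"
      run_once: true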

Fixes: #2553
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
7 years ago ceph-nfs: disable attribute caching
Ramana Raja [Thu, 3 May 2018 12:10:13 +0000 (17:40 +0530)]
ceph-nfs: disable attribute caching

When 'ceph_nfs_disable_caching' is set to True, disable attribute
caching done by Ganesha for all Ganesha exports.

Signed-off-by: Ramana Raja <rraja@redhat.com>
7 years ago common: copy iso files if rolling_update
Sébastien Han [Thu, 3 May 2018 14:54:53 +0000 (16:54 +0200)]
common: copy iso files if rolling_update

If we are in the middle of an update we want the new package
version to be installed, so the task that copies the repo files should
not be skipped.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1572032
Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago Move apt cache update to individual task per role
Andy McCrae [Thu, 26 Apr 2018 09:42:11 +0000 (10:42 +0100)]
Move apt cache update to individual task per role

The apt-cache update can fail due to transient issues related to the
action being a network operation. To reduce the impact of these
transient failures this patch adds a retry to the update_cache task.

However, the apt_repository tasks that would trigger an apt_update
won't retry it on failure in the same way, so this PR
moves the apt_update into an individual task, run once per role.

Finally, the apt_repository tasks no longer have a changed_when: false,
and the apt_cache update is only performed once per role, if the
repositories change. Otherwise the cache is updated on the "apt" install
tasks if the cache_timeout has been reached.
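
A small sketch of the retried cache update (retry counts and task name are
illustrative):

    - name: update apt cache
      apt:
        update_cache: yes
      register: apt_cache_update
      until: apt_cache_update is succeeded
      retries: 5
      delay: 2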

7 years ago client: fix pool creation
Guillaume Abrioux [Mon, 30 Apr 2018 18:53:42 +0000 (20:53 +0200)]
client: fix pool creation

The value in `docker_exec_client_cmd` doesn't allow checking for
existing pools because it is set with the wrong value for the entrypoint
that is going to be used.
It means the check was going to fail anyway, even if the pools actually exist.

Using Jinja syntax to set `docker_exec_cmd` lets us handle the case
where you don't have monitors in your inventory.
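
A hedged sketch of setting the exec prefix with Jinja so it stays empty when
no monitors are present in the inventory (names are illustrative):

    - name: set_fact docker_exec_cmd
      set_fact:
        docker_exec_cmd: >-
          {% if groups.get(mon_group_name, []) | length > 0 -%}
          docker exec ceph-mon-{{ hostvars[groups[mon_group_name][0]]['ansible_hostname'] }}
          {%- endif %}
      when: containerized_deployment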

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
7 years ago mon: change application pool support
Sébastien Han [Thu, 26 Apr 2018 17:55:48 +0000 (19:55 +0200)]
mon: change application pool support

If an entry in openstack_pools contains an application key, it will be used
to enable that application on the pool.
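
A hedged example of what consuming such an application key could look like
(pool values and task shape are illustrative):

    # e.g. in group_vars:
    #   openstack_pools:
    #     - { name: images, pg_num: 32, application: rbd }
    - name: assign application to pool(s)
      command: >
        {{ docker_exec_cmd | default('') }} ceph --cluster {{ cluster }}
        osd pool application enable {{ item.name }} {{ item.application }}
      with_items: "{{ openstack_pools }}"
      when: item.application is defined
      changed_when: false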

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1562220
Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago check if pools already exist before creating them
Guillaume Abrioux [Fri, 27 Apr 2018 12:48:33 +0000 (14:48 +0200)]
check if pools already exist before creating them

Add a task to check if pools already exist before we create them.
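
A minimal sketch of such a check feeding the creation task (command prefix
and variable names are assumptions):

    - name: check if pools already exist
      command: "{{ docker_exec_cmd | default('') }} ceph --cluster {{ cluster }} osd pool ls"
      register: existing_pools
      changed_when: false

    - name: create pools
      command: "{{ docker_exec_cmd | default('') }} ceph --cluster {{ cluster }} osd pool create {{ item.name }} {{ item.pg_num }}"
      with_items: "{{ pools }}"
      when: item.name not in existing_pools.stdout_lines
      changed_when: false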

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
7 years ago tests: update the type for the rule used in pools
Guillaume Abrioux [Wed, 25 Apr 2018 15:33:35 +0000 (17:33 +0200)]
tests: update the type for the rule used in pools

As of ceph 12.2.5 the parameter `type` no longer takes a name but
an id, therefore an `int` is expected; otherwise it will fail with the
following error

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
7 years ago switch: fix ceph_uid fact for osd
Guillaume Abrioux [Wed, 25 Apr 2018 12:20:35 +0000 (14:20 +0200)]
switch: fix ceph_uid fact for osd

In addition to b324c17, this commit fixes the ceph uid for the osd role in the
switch from non-containerized to containerized playbook.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
7 years ago switch: resolve device path so we can umount the osd data dir
Sébastien Han [Thu, 19 Apr 2018 12:45:03 +0000 (14:45 +0200)]
switch: resolve device path so we can umount the osd data dir

If we don't do this, umounting devices declared like this
/dev/disk/by-id/ata-QEMU_HARDDISK_QM00001

will fail like:

umount: /dev/disk/by-id/ata-QEMU_HARDDISK_QM000011: mountpoint not found

Since we append '1' (partition 1), this won't work.
So we need to resolve the link to get something like /dev/sdb and then
append 1 to get /dev/sdb1.
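
A hedged sketch of the resolution step (the playbook's real task is likely
shaped differently):

    - name: resolve device path(s)
      command: readlink -f {{ item }}
      register: resolved_devices
      with_items: "{{ devices }}"
      changed_when: false

    # later, unmount "{{ item.stdout }}1" (e.g. /dev/sdb1) instead of appending
    # '1' to the original /dev/disk/by-id/... link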

Signed-off-by: Sébastien Han <seb@redhat.com>
Co-authored-by: Guillaume Abrioux <gabrioux@redhat.com>
7 years ago switch: fix ceph_uid fact
Sébastien Han [Thu, 19 Apr 2018 08:28:56 +0000 (10:28 +0200)]
switch: fix ceph_uid fact

'latest' is now CentOS, not Ubuntu anymore, so the condition was wrong.

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago Revert "add .vscode/ to gitignore"
Sébastien Han [Fri, 27 Apr 2018 11:19:25 +0000 (13:19 +0200)]
Revert "add .vscode/ to gitignore"

This reverts commit 3c4319ca4b5355d69b2925e916420f86d29ee524.

7 years ago mon/client: honor key mode when copying it to other nodes v3.1.0beta8
Sébastien Han [Mon, 23 Apr 2018 08:02:16 +0000 (10:02 +0200)]
mon/client: honor key mode when copying it to other nodes

The last mon creates the keys with a particular mode; while copying them
to the other mons (first and second) we must re-use the mode that was
set.

The same applies to the client node: the slurp preserves the initial
'item' so we can get the mode for the copy.
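
Roughly, the slurp/copy pair described above looks like this (file list and
names are illustrative):

    - name: slurp the keys from the last monitor
      slurp:
        src: "{{ item.name }}"
      with_items: "{{ ceph_keys_to_copy }}"
      register: slurped_keys
      delegate_to: "{{ groups[mon_group_name] | last }}"

    - name: copy the keys with their original mode
      copy:
        dest: "{{ item.source }}"
        content: "{{ item.content | b64decode }}"
        # 'item.item' is the original loop entry preserved in slurp's results
        mode: "{{ item.item.mode | default('0600') }}"
      with_items: "{{ slurped_keys.results }}"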

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago ci: bump client nodes to 2
Sébastien Han [Mon, 23 Apr 2018 08:01:23 +0000 (10:01 +0200)]
ci: bump client nodes to 2

In order to test the key distribution is correct we must have 2 client
nodes.

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago mon: remove redundant copy task
Sébastien Han [Mon, 23 Apr 2018 07:52:18 +0000 (09:52 +0200)]
mon: remove redundant copy task

We had the same task twice, and one of them was overriding the mode.

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago mon/client: remove acl code
Sébastien Han [Fri, 20 Apr 2018 14:44:41 +0000 (16:44 +0200)]
mon/client: remove acl code

Applying ACLs on the keyrings is not done anymore, so let's remove this
code.

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago mon/client: apply mode from ceph_key
Sébastien Han [Fri, 20 Apr 2018 14:37:05 +0000 (16:37 +0200)]
mon/client: apply mode from ceph_key

Do not use a dedicated task for this but use the ceph_key module
capability to set file mode.

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago ceph_key: ability to apply a mode to a file
Sébastien Han [Fri, 20 Apr 2018 14:35:39 +0000 (16:35 +0200)]
ceph_key: ability to apply a mode to a file

You can now create keys and set a file mode on them. Use the 'mode'
parameter for that; the mode must be octal, e.g. 0644.
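
An illustrative call (only 'mode' is confirmed by this change; the other
parameters shown are assumptions about the module's interface):

    - name: create a key with a specific mode
      ceph_key:
        name: client.test
        state: present
        caps:
          mon: "allow r"
          osd: "allow rwx pool=test"
        mode: "0600"
        cluster: ceph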

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago add AArch64 to supported architecture
Di Xu [Mon, 23 Apr 2018 02:08:48 +0000 (10:08 +0800)]
add AArch64 to supported architecture

Works on the AArch64 platform.

7 years ago mon: remove mgr key from ceph_config_keys
Sébastien Han [Thu, 19 Apr 2018 16:54:53 +0000 (18:54 +0200)]
mon: remove mgr key from ceph_config_keys

This key is created after the last mon is up, so there is no need to try
to push it from the first mon. The initial mon container is not creating
the mgr key, ansible does, so this key will never exist there.
The key will go into the fetch dir once the last mon is up; then, when
ceph-mgr plays, it will try to get it from the fetch directory.

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago mon: remove mon map from ceph_config_keys
Sébastien Han [Thu, 19 Apr 2018 16:40:16 +0000 (18:40 +0200)]
mon: remove mon map from ceph_config_keys

During the initial bootstrap of the first mon, the monmap file is
destroyed so it's not available and ansible will never find it.

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago config_template: resync with upstream
Sébastien Han [Sat, 31 Mar 2018 10:43:42 +0000 (12:43 +0200)]
config_template: resync with upstream

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago ci: test ansible 2.5
Sébastien Han [Wed, 28 Mar 2018 19:52:40 +0000 (21:52 +0200)]
ci: test ansible 2.5

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago Expose /var/run/ceph
Sébastien Han [Thu, 12 Apr 2018 13:52:30 +0000 (15:52 +0200)]
Expose /var/run/ceph

Useful for software that does data collection/monitoring, like collectd.
It can connect to the socket and then retrieve information.

Even though the sockets are exposed now, I'm keeping the docker exec to
check the socket; this will allow newer versions of ceph-ansible to work
with older versions.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1563280
Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago default: extend ceph_uid and gid
Sébastien Han [Fri, 13 Apr 2018 17:42:17 +0000 (19:42 +0200)]
default: extend ceph_uid and gid

We now have the ability to detect the uid/gid of the ceph user depending
on the distribution we are running on, also when we are doing non-container
deployments.

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago move create ceph initial directories to default
Sébastien Han [Fri, 13 Apr 2018 15:56:06 +0000 (17:56 +0200)]
move create ceph initial directories to default

This is needed for both non-container and container deployments.

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago shrink-osd: ability to shrink NVMe drives
Sébastien Han [Fri, 20 Apr 2018 09:13:51 +0000 (11:13 +0200)]
shrink-osd: ability to shrink NVMe drives

Now if the service name contains nvme we know we need to remove the last
2 characters instead of 1.

If nvme, then osd_to_kill_disks is nvme0n1 and we need nvme0.
If ssd or hdd, then osd_to_kill_disks is sda1 and we need sda.
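
A hedged sketch of that suffix handling in Jinja (variable names are
illustrative):

    - name: resolve parent device of each partition to kill
      set_fact:
        # nvme0n1 -> nvme0 (strip 2 chars), sda1 -> sda (strip 1 char)
        osd_to_kill_parent_devices: >-
          {{ (osd_to_kill_parent_devices | default([]))
             + [item[:-2] if 'nvme' in item else item[:-1]] }}
      with_items: "{{ osd_to_kill_disks }}"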

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1561456
Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago selinux: remove chcon calls
Sébastien Han [Tue, 17 Apr 2018 13:32:53 +0000 (15:32 +0200)]
selinux: remove chcon calls

We now bindmount with the :z option at the end of the -v command, so
this will basically run the exact same command as we used to run, so to
speak:

chcon -Rt svirt_sandbox_file_t /var/lib/ceph

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago client: add a --rm option to run the container
Sébastien Han [Tue, 17 Apr 2018 12:16:41 +0000 (14:16 +0200)]
client: add a --rm option to run the container

This fixes the case where the playbook died and never removed the
container. So now, once the container exits it will remove itself from
the container list.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1568157
Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago client: import the key in ceph if copy_admin_key is true v3.1.0beta7
Sébastien Han [Wed, 18 Apr 2018 13:44:36 +0000 (15:44 +0200)]
client: import the key in ceph if copy_admin_key is true

If the user has set copy_admin_key to true we assume he/she wants to
import the key in Ceph and not only create the key on the filesystem.

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago client: add quotes to the dict values
Sébastien Han [Wed, 18 Apr 2018 13:11:55 +0000 (15:11 +0200)]
client: add quotes to the dict values

ceph-authtool does not support raw arguments, so we have to quote the caps
declaration, i.e. allow 'bla bla' instead of allow bla bla.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1568157
Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago Add support for --diff in config_template
Andy McCrae [Wed, 21 Mar 2018 15:57:00 +0000 (15:57 +0000)]
Add support for --diff in config_template

Add support for the Ansible --diff mode in config_template. This will
show the before/after for config_template changes, in the same way as
the base copy and template modules do.

To utilise this run your playbooks with "--diff --check".

7 years ago refactor the way we copy keys
Sébastien Han [Wed, 11 Apr 2018 15:15:29 +0000 (17:15 +0200)]
refactor the way we copy keys

This commit does a couple of things:

* use a common.yml file that contains things that can be played on both
container and non-container

* refactor the ability to copy the admin key to the nodes

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago ceph-defaults: fix ceph_uid fact on container deployments
Randy J. Martinez [Thu, 29 Mar 2018 04:17:02 +0000 (23:17 -0500)]
ceph-defaults: fix ceph_uid fact on container deployments

Red Hat is now using tags[3,latest] for image rhceph/rhceph-3-rhel7.
Because of this, the ceph_uid conditional passes for Debian
when 'ceph_docker_image_tag: latest' on RH deployments.
I've added an additional task to check for the rhceph image specifically,
and also updated the RH family task for the ceph/daemon [centos|fedora] tags.
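
A rough sketch of the extra check (the uid value 167 matches the ceph user on
Red Hat based packaging, but the exact conditions in the role may differ):

    - name: set ceph_uid for red hat based container images
      set_fact:
        ceph_uid: 167
      when: "'rhceph' in ceph_docker_image or 'rhel' in ceph_docker_image_tag"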

Signed-off-by: Randy J. Martinez <ramartin@redhat.com>
7 years ago rhcs: re-add apt-pinning
Sébastien Han [Tue, 17 Apr 2018 13:59:52 +0000 (15:59 +0200)]
rhcs: re-add apt-pinning

When installing rhcs on Debian systems the Red Hat repos must have the
highest priority, so we avoid package conflicts and install the rhcs
version.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1565850
Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago defaults: check only 1 time if there is a running cluster
Guillaume Abrioux [Mon, 9 Apr 2018 16:07:31 +0000 (18:07 +0200)]
defaults: check only 1 time if there is a running cluster

There is no need to check for a running cluster n*nodes times in
`ceph-defaults`, so let's add a `run_once: true` to save some resources
and time.
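
The check itself stays the same; only run_once changes. Something along these
lines (command and names illustrative):

    - name: check if a ceph cluster is already running
      command: "{{ docker_exec_cmd | default('') }} ceph --connect-timeout 3 --cluster {{ cluster }} fsid"
      register: ceph_current_fsid
      changed_when: false
      failed_when: false
      run_once: true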

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
7 years ago site: make it more readable
Guillaume Abrioux [Tue, 10 Apr 2018 13:30:16 +0000 (15:30 +0200)]
site: make it more readable

These conditions introduced by d981c6bd2 were insane.
This should be a bit easier to read.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
7 years ago osd: do not do anything if the dev has a partition
Sébastien Han [Fri, 13 Apr 2018 14:36:43 +0000 (16:36 +0200)]
osd: do not do anything if the dev has a partition

Regardless of whether the partition is 'ceph' or something else, we don't want
to be as strict as checking for a particular partition.
If the drive has a partition, we just don't do anything.

This solves the case where the server reboots and disks get a different
/dev/sdX (node) allocation. In this case, prior to restarting the server
/dev/sda was an OSD, but now it's /dev/sdb and the other way around.
In such a scenario we would try to prepare the OSD and create a new
partition, so let's not mess around with devices that have partitions.
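
A hedged sketch of that guard (the real role probably inspects partitions
differently):

    - name: count existing partitions on each osd device
      shell: lsblk --noheadings --list {{ item }} | wc -l
      register: osd_partition_count
      with_items: "{{ devices }}"
      changed_when: false

    # prepare/zap a device only when nothing but the bare disk was listed, e.g.:
    #   when: item.stdout | int <= 1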

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1498303
Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago tests: update tests for mds to cover multimds case
Guillaume Abrioux [Thu, 12 Apr 2018 07:55:25 +0000 (09:55 +0200)]
tests: update tests for mds to cover multimds case

In the case of multimds we must check the number of mds that are up instead of
just checking whether the hostname of the node is in the fsmap.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
7 years ago common: add tools repo for iscsi gw
Sébastien Han [Thu, 12 Apr 2018 10:15:35 +0000 (12:15 +0200)]
common: add tools repo for iscsi gw

To install iscsi gw packages we need to enable the tools repo.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1547849
Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago Remove deprecated allow_multimds
Douglas Fuller [Wed, 4 Apr 2018 18:23:25 +0000 (14:23 -0400)]
Remove deprecated allow_multimds

allow_multimds will be officially deprecated in Mimic; specify it
only for the versions of Ceph where it was declared stable. Going
forward, specify only max_mds.
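
Going forward, the role only has to set max_mds, roughly like this (command
prefix and variable names are assumptions):

    - name: set max_mds on the cephfs filesystem
      command: "{{ docker_exec_cmd | default('') }} ceph --cluster {{ cluster }} fs set {{ cephfs }} max_mds {{ mds_max_mds }}"
      changed_when: false
      when: mds_max_mds | int > 1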

Signed-off-by: Douglas Fuller <dfuller@redhat.com>
7 years ago Fixed a typo (extra space) v3.1.0beta6
vasishta p shastry [Tue, 10 Apr 2018 13:37:35 +0000 (19:07 +0530)]
Fixed a typo (extra space)

7 years ago osd: to support copy_admin_key
vasishta p shastry [Tue, 10 Apr 2018 13:21:50 +0000 (18:51 +0530)]
osd: to support copy_admin_key

7 years ago mds: to support copy_admin_keyring
vasishta p shastry [Tue, 10 Apr 2018 12:39:43 +0000 (18:09 +0530)]
mds: to support copy_admin_keyring

7 years ago nfs: to support copy_admin_key - containerized
vasishta p shastry [Tue, 10 Apr 2018 12:37:11 +0000 (18:07 +0530)]
nfs: to support copy_admin_key - containerized

7 years ago nfs: ensure nfs-server service is stopped
Ali Maredia [Mon, 2 Apr 2018 17:47:31 +0000 (13:47 -0400)]
nfs: ensure nfs-server service is stopped

NFS-Ganesha cannot start if the nfs-server service
is running. This commit stops nfs-server in case it
is running on a (Debian, Red Hat, SuSE) node before
the nfs-ganesha service starts up.
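
A minimal sketch of that stop task (the service name mapping is an
assumption; Debian's kernel NFS service is nfs-kernel-server):

    - name: ensure the kernel nfs server is stopped
      systemd:
        name: "{{ 'nfs-server' if ansible_os_family in ['RedHat', 'Suse'] else 'nfs-kernel-server' }}"
        state: stopped
        enabled: no
      failed_when: false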

fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1508506

Signed-off-by: Ali Maredia <amaredia@redhat.com>
7 years ago ceph-nfs: allow disabling ganesha caching
Ramana Raja [Mon, 9 Apr 2018 12:03:33 +0000 (17:33 +0530)]
ceph-nfs: allow disabling ganesha caching

Add a variable, ceph_nfs_disable_caching, that if set to true
disables ganesha's directory and attribute caching as much as
possible.

Also, disable caching done by ganesha, when 'nfs_file_gw'
variable is true, i.e., when Ganesha is used as CephFS's gateway.
This is the recommended Ganesha setting as libcephfs already caches
information. And doing so helps avoid cache incoherency issues
especially with clustered ganesha over CephFS.

Fixes: https://tracker.ceph.com/issues/23393
Signed-off-by: Ramana Raja <rraja@redhat.com>
7 years ago ceph-defaults: bring backward compatibility for old syntax
Sébastien Han [Tue, 10 Apr 2018 13:39:44 +0000 (15:39 +0200)]
ceph-defaults: bring backward compatibility for old syntax

If people keep on using mon_cap, osd_cap, etc., the playbook will
translate this old syntax on the fly.
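
For illustration, the two shapes of a key entry (values are made up):

    # old syntax, still accepted and translated on the fly:
    #   keys:
    #     - { name: client.glance, mon_cap: "allow r", osd_cap: "allow rwx pool=images" }
    # new syntax:
    keys:
      - name: client.glance
        caps:
          mon: "allow r"
          osd: "allow rwx pool=images"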

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago ci: fix tripleO scenario
Sébastien Han [Mon, 9 Apr 2018 22:33:33 +0000 (00:33 +0200)]
ci: fix tripleO scenario

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago ci: client copy admin key
Sébastien Han [Thu, 5 Apr 2018 16:52:23 +0000 (18:52 +0200)]
ci: client copy admin key

If we don't copy the admin key we can't add the key into ceph.

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago ci: remove useless tests
Sébastien Han [Wed, 4 Apr 2018 14:31:04 +0000 (16:31 +0200)]
ci: remove useless tests

These are already handled by ceph-client/defaults/main.yml so the keys
will be created once user_config is set to True.

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago ceph_key: use ceph_key in the playbook
Sébastien Han [Wed, 4 Apr 2018 14:22:36 +0000 (16:22 +0200)]
ceph_key: use ceph_key in the playbook

Replaced all the occurrences of raw commands using the 'command' module
with the ceph_key module instead.

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago infra: add playbook example for ceph_key module
Sébastien Han [Fri, 30 Mar 2018 14:56:44 +0000 (16:56 +0200)]
infra: add playbook example for ceph_key module

Helper playbook to manage CephX keys.

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago add ceph_key module
Sébastien Han [Sun, 18 Mar 2018 14:53:45 +0000 (15:53 +0100)]
add ceph_key module

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago ceph_volume: objectstore should default to 'bluestore'
Andrew Schoen [Thu, 5 Apr 2018 14:12:32 +0000 (09:12 -0500)]
ceph_volume: objectstore should default to 'bluestore'

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
7 years ago ceph_volume: refactor to not run ceph osd destroy
Andrew Schoen [Tue, 3 Apr 2018 16:55:36 +0000 (11:55 -0500)]
ceph_volume: refactor to not run ceph osd destroy

This changes state to action and gives the options 'create'
or 'zap'. The zap parameter is also removed.
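
Illustrative calls with the new interface (parameter names other than
'action' are assumptions about the module):

    - name: create a bluestore OSD
      ceph_volume:
        objectstore: bluestore
        data: data-lv1
        data_vg: data-vg1
        action: create

    - name: zap a logical volume
      ceph_volume:
        data: data-lv1
        data_vg: data-vg1
        action: zap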

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
7 years ago ceph_volume: preserve newlines in stdout and stderr when zapping
Andrew Schoen [Wed, 28 Mar 2018 16:10:17 +0000 (11:10 -0500)]
ceph_volume: preserve newlines in stdout and stderr when zapping

Because we have many commands we might need to run, the
ANSIBLE_STDOUT_CALLBACK won't format these nicely, because we're
not reporting them back at the root level of the json result.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
7 years ago purge-cluster: no need to use objectstore for ceph_volume module
Andrew Schoen [Wed, 14 Mar 2018 19:46:37 +0000 (14:46 -0500)]
purge-cluster: no need to use objectstore for ceph_volume module

When zapping, objectstore is not required.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
7 years ago ceph_volume: rc should be 0 on successful runs
Andrew Schoen [Wed, 14 Mar 2018 17:26:43 +0000 (12:26 -0500)]
ceph_volume: rc should be 0 on successful runs

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
7 years ago ceph_volume: defines the zap param in module_args
Andrew Schoen [Wed, 14 Mar 2018 17:19:42 +0000 (12:19 -0500)]
ceph_volume: defines the zap param in module_args

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
7 years ago ceph_volume: make state not required so I can provide a default
Andrew Schoen [Wed, 14 Mar 2018 16:49:48 +0000 (11:49 -0500)]
ceph_volume: make state not required so I can provide a default

I want a default value of 'present' for state, so it can not
be made required. Otherwise it'll throw a 'Module alias error'
from ansible.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
7 years ago ceph_volume: objectstore is now optional except when state is present
Andrew Schoen [Wed, 14 Mar 2018 16:47:07 +0000 (11:47 -0500)]
ceph_volume: objectstore is now optional except when state is present

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
7 years ago purge-cluster: use ceph_volume module to zap and destroy OSDs
Andrew Schoen [Wed, 14 Mar 2018 16:32:19 +0000 (11:32 -0500)]
purge-cluster: use ceph_volume module to zap and destroy OSDs

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
7 years ago tests: no need to remove partitions in lvm_setup.yml
Andrew Schoen [Mon, 12 Mar 2018 19:06:39 +0000 (14:06 -0500)]
tests: no need to remove partitions in lvm_setup.yml

Now that we are using ceph_volume_zap the partitions are
kept around and should be able to be reused.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
7 years ago ceph_volume: adds a zap property and reworks to support state: absent
Andrew Schoen [Wed, 14 Mar 2018 16:24:40 +0000 (11:24 -0500)]
ceph_volume: adds a zap property and reworks to support state: absent

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
7 years ago ceph_volume: adds a state property
Andrew Schoen [Wed, 14 Mar 2018 15:14:21 +0000 (10:14 -0500)]
ceph_volume: adds a state property

This can be either present or absent.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
7 years ago ceph_volume: remove the subcommand argument
Andrew Schoen [Wed, 14 Mar 2018 14:57:49 +0000 (09:57 -0500)]
ceph_volume: remove the subcommand argument

This really isn't needed currently and I don't believe it is a good
mechanism for switching subcommands anyway. The user of this module
should not have to be familiar with all ceph-volume subcommands.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
7 years ago purge-docker: added conditionals needed to successfully re-run purge
Randy J. Martinez [Wed, 28 Mar 2018 23:46:54 +0000 (18:46 -0500)]
purge-docker: added conditionals needed to successfully re-run purge

Added 'ignore_errors: true' to multiple tasks which run docker commands, even in
cases where docker is no longer installed. Without this, certain tasks in
purge-docker-cluster.yml cause the playbook to fail when re-run, stopping the
purge and leaving behind a dirty environment and a playbook which can no longer
be run.

Fix regex on line 275: sometimes 'list-units' will output 4 spaces between
loaded+active; the update accounts for both scenarios.

Purge fetch_directory: in other roles fetch_directory is hard linked, e.g.
"{{ fetch_directory }}"/"{{ somedir }}". That being said, fetch_directory will
never have a trailing slash in all.yml, so this task was never being run
(causing failures when trying to re-deploy).
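
For example, container removal now tolerates a missing docker binary or an
already-removed container (the task shown is illustrative):

    - name: remove ceph osd container
      command: docker rm -f ceph-osd-{{ ansible_hostname }}
      ignore_errors: true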

Signed-off-by: Randy J. Martinez <ramartin@redhat.com>
7 years ago Fixed wrong path of ceph.conf in docs. v3.1.0beta5
JohnHaan [Tue, 10 Apr 2018 00:48:47 +0000 (09:48 +0900)]
Fixed wrong path of ceph.conf in docs.

The ceph.conf sample template moved to ceph-config.
Therefore the docs need to be changed to point at the right directory.

Signed-off-by: JohnHaan <yongiman@gmail.com>
7 years ago defaults: fix backward compatibility
Guillaume Abrioux [Mon, 9 Apr 2018 11:02:44 +0000 (13:02 +0200)]
defaults: fix backward compatibility

Backward compatibility with `ceph_mon_docker_interface` and
`ceph_mon_docker_subnet` was not working since there was no lookup on
`monitor_interface` and `public_network`.
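
A hedged sketch of the lookup (the 'interface' and '0.0.0.0/0' placeholders
match the role's usual defaults, but treat the exact conditions as
assumptions):

    - name: get monitor_interface from the legacy variable
      set_fact:
        monitor_interface: "{{ ceph_mon_docker_interface }}"
      when:
        - ceph_mon_docker_interface is defined
        - monitor_interface == 'interface'

    - name: get public_network from the legacy variable
      set_fact:
        public_network: "{{ ceph_mon_docker_subnet }}"
      when:
        - ceph_mon_docker_subnet is defined
        - public_network == '0.0.0.0/0'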

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
7 years ago common: upgrade/install ceph-test RPM first
Ken Dreyer [Thu, 5 Apr 2018 19:40:15 +0000 (13:40 -0600)]
common: upgrade/install ceph-test RPM first

Prior to this change, if a user had ceph-test-12.2.1 installed, and
upgraded to ceph v12.2.3 or newer, the RPM upgrade process would
fail.

The problem is that the ceph-test RPM did not depend on an exact version
of ceph-common until v12.2.3.

In Ceph v12.2.3, ceph-{osdomap,kvstore,monstore}-tool binaries moved
from ceph-test into ceph-base. When ceph-test is not yet up-to-date, Yum
encounters package conflicts between the older ceph-test and newer
ceph-base.

When all users have upgraded beyond Ceph < 12.2.3, this is no longer
relevant.

7 years ago ceph-defaults: fix ceph_uid for container image tag latest
Sébastien Han [Mon, 9 Apr 2018 08:01:30 +0000 (10:01 +0200)]
ceph-defaults: fix ceph_uid for container image tag latest

According to our recent change, we now use CentOS as the latest
container image. We need to reflect this in the ceph_uid.

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago tox: use container latest tag for upgrades
Sébastien Han [Thu, 5 Apr 2018 08:28:51 +0000 (10:28 +0200)]
tox: use container latest tag for upgrades

Currently tag-build-master-luminous-ubuntu-16.04 is not used anymore.
Also, 'latest' now points to CentOS so we need to make that switch here
too.

We now have latest tags for each stable release, so let's use them and
point tox at them to deploy the right version.

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago Use the CentOS repo for Red Hat dev packages
Zack Cerza [Fri, 6 Apr 2018 16:17:48 +0000 (10:17 -0600)]
Use the CentOS repo for Red Hat dev packages

No use even trying to use something that doesn't exist.

Signed-off-by: Zack Cerza <zack@redhat.com>
7 years ago site-docker: followup on #2487
Guillaume Abrioux [Wed, 4 Apr 2018 09:46:51 +0000 (11:46 +0200)]
site-docker: followup on #2487

Get a non-empty array as the default value for `groups.get('clients')`,
otherwise the `| first` filter will complain because it can't work with an
empty array.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
7 years ago add .vscode/ to gitignore
Sébastien Han [Wed, 4 Apr 2018 14:23:54 +0000 (16:23 +0200)]
add .vscode/ to gitignore

I personally dev on vscode and I have some preferences to save when it
comes to running the python unit tests, so excluding this directory is
actually useful.

Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago Deploying without managed monitors failed
Attila Fazekas [Wed, 4 Apr 2018 13:30:55 +0000 (15:30 +0200)]
Deploying without managed monitors failed

TripleO deployment failed when the monitors are not managed
by TripleO itself, with:
    FAILED! => {"msg": "list object has no element 0"}

The failing play item was introduced by
f46217b69ae18317cb0c1cc3e391a0bca5767eb6.

fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1552327

Signed-off-by: Attila Fazekas <afazekas@redhat.com>
7 years ago defaults: remove `run_once: true` when creating fetch_directory
Guillaume Abrioux [Tue, 3 Apr 2018 11:43:53 +0000 (13:43 +0200)]
defaults: remove `run_once: true` when creating fetch_directory

Because of `serial: 1`, it can be an issue when the playbook is being
run on client nodes.
Since the refactor of `ceph-client` we skip the `ceph-defaults` role on
every node except the first client node, which means the task is not
going to be played because of `run_once: true`.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
7 years ago config: use fact `ceph_uid`
Guillaume Abrioux [Tue, 3 Apr 2018 11:41:07 +0000 (13:41 +0200)]
config: use fact `ceph_uid`

Use fact `ceph_uid` in the task which ensures `/etc/ceph` exists in
containerized deployments.
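
In other words, something like this (illustrative):

    - name: ensure /etc/ceph exists
      file:
        path: /etc/ceph
        state: directory
        owner: "{{ ceph_uid }}"
        group: "{{ ceph_uid }}"
        mode: "0755"
      when: containerized_deployment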

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
7 years ago clients: refact `ceph-clients` role
Guillaume Abrioux [Fri, 30 Mar 2018 11:48:17 +0000 (13:48 +0200)]
clients: refact `ceph-clients` role

This commit refactors this role so we don't have to pull the container image
on client nodes just to create pools and keys.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1550977
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
7 years ago client: remove legacy code
Guillaume Abrioux [Fri, 30 Mar 2018 10:50:14 +0000 (12:50 +0200)]
client: remove legacy code

This seems to be a leftover.
This commit removes an unnecessary 'set linux permissions' task on
`/var/lib/ceph`.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
7 years ago container: play docker-common only on first client node
Guillaume Abrioux [Fri, 30 Mar 2018 10:45:15 +0000 (12:45 +0200)]
container: play docker-common only on first client node

This commit sets the default behavior to play
`ceph-docker-common` only on the first node in the clients group.

Currently, we play docker-common to pull the container image so we can run
ceph commands in order to generate keys or create pools.
On a cluster with a large number of client nodes, proceeding this way can be
time consuming. An alternative is to pull the container image
only on the first node and then copy the keys to the other nodes.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
7 years ago move selinux check to `ceph-defaults`
Guillaume Abrioux [Fri, 30 Mar 2018 10:38:41 +0000 (12:38 +0200)]
move selinux check to `ceph-defaults`

This check has been alone in `ceph-docker-common` since a previous code
refactor.
Moving this check into `ceph-defaults` allows us to run `ceph-clients`
without having to run `ceph-docker-common`, even in non-containerized
deployments.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
7 years ago ceph-iscsi: fix certificates generation and distribution
Sébastien Han [Tue, 3 Apr 2018 13:20:06 +0000 (15:20 +0200)]
ceph-iscsi: fix certificates generation and distribution

Prior to this patch, the certificates were being generated on a single
node only (because of the run_once: true). Thus certificates were not
distributed to all the gateway nodes.

This would require a second ansible run to work. This patch fixes the
creation and the keys' distribution on all the nodes.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1540845
Signed-off-by: Sébastien Han <seb@redhat.com>
7 years ago do not delegate facts on client nodes
Guillaume Abrioux [Wed, 21 Mar 2018 18:01:51 +0000 (19:01 +0100)]
do not delegate facts on client nodes

This commit is a workaround for
https://bugzilla.redhat.com/show_bug.cgi?id=1550977

We iterate over all nodes on each node and delegate the facts gathering.
This consumes a lot of memory when there is a large number of nodes in the
inventory.
That way of gathering is not necessary for client nodes, so we can simply
gather local facts for these nodes.
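
A hedged sketch of restricting the delegated gathering to non-client nodes
(group names are illustrative); client nodes simply gather their own local
facts:

    - name: gather and delegate facts for non-client nodes
      setup:
      delegate_to: "{{ item }}"
      delegate_facts: true
      with_items: "{{ groups['all'] | difference(groups.get('clients', [])) }}"
      run_once: true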

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
7 years ago purge-docker: remove redundant task
Guillaume Abrioux [Tue, 27 Mar 2018 12:26:12 +0000 (14:26 +0200)]
purge-docker: remove redundant task

The `remove_packages` prompt is redundant with the `ireallymeanit` prompt
since it does exactly the same thing. I guess the only goal of this task
was to make a break to warn the user about the `--skip-tags=with_pkg` feature.
This warning should be part of the first prompt.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
7 years ago ceph-mds: delete duplicate tasks which cause multimds container deployments to fail.
Randy J. Martinez [Thu, 29 Mar 2018 00:15:19 +0000 (19:15 -0500)]
ceph-mds: delete duplicate tasks which cause multimds container deployments to fail.

This update resolves the error ['cephfs' is undefined] in multimds container deployments.
See roles/ceph-mon/tasks/create_mds_filesystems.yml: the same last two tasks are present there, and they actually need to happen in that role since "{{ cephfs }}" gets defined in
roles/ceph-mon/defaults/main.yml, not in roles/ceph-mds/defaults/main.yml.

Signed-off-by: Randy J. Martinez <ramartin@redhat.com>
7 years ago ceph-osd note that some scenarios use ceph-disk vs. ceph-volume
Alfredo Deza [Wed, 28 Mar 2018 20:40:04 +0000 (16:40 -0400)]
ceph-osd note that some scenarios use ceph-disk vs. ceph-volume

Signed-off-by: Alfredo Deza <adeza@redhat.com>
7 years ago Refer to expected-num-objects as expected_num_objects, not size
John Fulton [Sun, 25 Mar 2018 20:36:27 +0000 (20:36 +0000)]
Refer to expected-num-objects as expected_num_objects, not size

Follow up patch to PR 2432 [1] which replaces "size" (sorry if
the original bug used that term, which can be confusing) with
expected_num_objects as is used in the Ceph documentation [2].

[1] https://github.com/ceph/ceph-ansible/pull/2432/files
[2] http://docs.ceph.com/docs/jewel/rados/operations/pools