Once the cluster is upgraded to nautilus, we can complete the process by
disallowing pre-nautilus OSDs and enabling all new nautilus-only functionality.
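A hedged sketch of what this finalization step looks like; `ceph osd require-osd-release nautilus` is the standard Ceph CLI for it, while the Ansible framing (group name, cluster variable) is illustrative:
```yaml
- name: disallow pre-nautilus OSDs and enable all new nautilus-only functionality
  command: "ceph --cluster {{ cluster | default('ceph') }} osd require-osd-release nautilus"
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true
  changed_when: false
```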
update: ensure mgrs are upgraded after ALL monitors
As of 1c760904b0bd1b6b0f49d6ac19d87d79f185c18f, ceph-ansible implicitly
bootstraps managers on monitors.
mgrs must be upgraded only after all monitors; therefore, this commit
refactors the way mgrs are upgraded to make sure we don't upgrade a mgr
during the monitors upgrade.
This commit also ensures we handle the case where managers are deployed
on dedicated nodes.
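A hedged sketch of the resulting ordering in rolling_update, assuming the default `mons`/`mgrs` group names; the play and task names are illustrative, not the literal playbook content:
```yaml
- name: upgrade ceph mon cluster
  hosts: mons
  serial: 1
  become: true
  tasks:
    - name: restart the upgraded monitor
      service:
        name: "ceph-mon@{{ ansible_hostname }}"
        state: restarted

# managers are only touched once every monitor above is done, whether they
# are collocated on the monitors or split out on dedicated nodes
- name: upgrade ceph mgr nodes
  hosts: mgrs
  serial: 1
  become: true
  tasks:
    - name: restart the manager
      service:
        name: "ceph-mgr@{{ ansible_hostname }}"
        state: restarted
```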
update: ensure /var/lib/ceph/bootstrap-rbd-mirror is present
This directory is created by ceph-config node by node.
In the upgrade context we need it to be created on ALL monitors during
the first iteration, because the task right after it creates and distributes
the keyrings to all monitors.
This prevents the packaging from restarting services before we actually
need to restart them in the rolling update sequence.
We want to handle service restarts in the rolling_update playbook.
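A hedged sketch of creating the directory on every monitor up front (ownership, mode and the `mons` group name are assumptions):
```yaml
- name: ensure /var/lib/ceph/bootstrap-rbd-mirror exists on all monitors
  file:
    path: /var/lib/ceph/bootstrap-rbd-mirror
    state: directory
    owner: ceph
    group: ceph
    mode: "0755"
  delegate_to: "{{ item }}"
  run_once: true
  with_items: "{{ groups['mons'] }}"
```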
update: remove an old parameter in ceph_key module call
The `containerized` parameter in the ceph_key module doesn't exist anymore.
This was making the module fail, but the failure was hidden by the
`ignore_errors: True`.
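A hedged illustration of the kind of call involved; apart from `containerized`, the parameters shown are assumptions about the module interface, not its exact signature:
```yaml
- name: create a cephx key with the ceph_key module
  ceph_key:
    name: client.rbd-mirror
    state: present
    cluster: "{{ cluster | default('ceph') }}"
    caps:
      mon: "allow r"
    # containerized: "{{ docker_exec_cmd }}"   # removed: this parameter no longer exists
  ignore_errors: true   # this is what was hiding the failure caused by the stale parameter
```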
ceph_key: `lookup_ceph_initial_entities` shouldn't fail on update
As of nautilus, the initial keyrings list has changed. This means that when
upgrading from Luminous or Mimic, a mismatch is expected between what is
found on the cluster and the initial keyring list hardcoded in the ceph_key
module. We shouldn't fail when upgrading to nautilus.
handlers: do not trigger handlers on rolling_update
The rolling_update playbook already takes care of stopping/starting services
during the sequence. There's no need to trigger potentially unwanted
service restarts.
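A hedged sketch of gating a restart handler on the update flag; the handler name and the way the `rolling_update` variable is exposed are assumptions:
```yaml
- name: restart ceph mon daemon(s)
  service:
    name: "ceph-mon@{{ ansible_hostname }}"
    state: restarted
  when: not (rolling_update | default(false) | bool)
```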
Dimitri Savineau [Wed, 20 Mar 2019 19:30:46 +0000 (15:30 -0400)]
ceph-osd: Ensure lvm2 is installed
When using the lvm osd_scenario, we never check whether the lvm2 package is
present on the host.
When using containerized deployment with docker on CentOS/RedHat this
package will be automatically installed as a dependency, but not on
Ubuntu distributions.
OSDs deployed via ceph-volume require the lvmetad.socket to be active
and running.
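A hedged sketch of the package task, assuming the scenario is selected through the existing `osd_scenario` variable:
```yaml
- name: install lvm2 package
  package:
    name: lvm2
    state: present
  register: result
  until: result is succeeded
  when: osd_scenario == 'lvm'
```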
osd: backward compatibility with old disk_list.sh location
Since all files in the container image have moved to `/opt/ceph-container`,
this check must look for both the new AND the old path so it stays backward
compatible. Otherwise it could end up templating an inconsistent
`ceph-osd-run.sh`.
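A hedged sketch of probing the new location and falling back to the old one when it is absent; the image variables follow the usual ceph-ansible naming and the registered variable name is illustrative:
```yaml
- name: check if the new disk_list.sh path exists in the container image
  command: >
    {{ container_binary | default('docker') }} run --rm --entrypoint=stat
    {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
    /opt/ceph-container/bin/disk_list.sh
  register: disk_list
  changed_when: false
  failed_when: false   # rc != 0 means the image still ships the old path
```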
Dimitri Savineau [Thu, 14 Mar 2019 20:22:01 +0000 (16:22 -0400)]
ceph-validate: fail if there's no ipaddr available in monitor_address_block subnet
When using monitor_address_block to determine the IP address of the
monitor node, we need an IP address from that CIDR to be present in the
Ansible facts (ansible_all_ipv[46]_addresses).
Currently we don't check whether such an IP address is available during
the ceph-validate role.
As a result, the ceph-config role fails due to an empty list during
ceph.conf template creation, but the error isn't explicit.
```
TASK [ceph-config : generate ceph.conf configuration file] *****
fatal: [0]: FAILED! => {"msg": "No first item, sequence was empty."}
```
With this patch we will fail before the ceph deployment with an
explicit failure message.
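A hedged sketch of the explicit validation, using the standard `ipaddr` filter; only the IPv4 case is shown and the message wording is illustrative:
```yaml
- name: fail if no ip address from monitor_address_block is found in the facts
  fail:
    msg: "No IP address from the {{ monitor_address_block }} subnet found in ansible_all_ipv4_addresses"
  when:
    - monitor_address_block is defined
    - ansible_all_ipv4_addresses | ipaddr(monitor_address_block) | length == 0
```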
Dimitri Savineau [Fri, 15 Mar 2019 15:30:15 +0000 (11:30 -0400)]
ceph-common: Install yum plugin priorities
When using the community repository we need to set the priority on the
Ceph repositories because we could have conflicts with EPEL packages.
In order to set the priority on the Ceph repositories, we need to
install the yum-plugin-priorities package.
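A hedged sketch of the package task, gated on the community repository selection (the gating variables mirror the ones used elsewhere in the roles):
```yaml
- name: install yum-plugin-priorities
  package:
    name: yum-plugin-priorities
    state: present
  register: result
  until: result is succeeded
  when:
    - ansible_os_family == 'RedHat'
    - ceph_origin == 'repository'
    - ceph_repository == 'community'
```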
Rishabh Dave [Mon, 11 Mar 2019 10:49:32 +0000 (16:19 +0530)]
don't append path components while calling os.path.join()
This creates confusion about whether directory/file names are being
formed by appending strings or by joining path components.
Since the latter should never be done manually, get rid of the statements
creating the confusion.
Rishabh Dave [Mon, 11 Mar 2019 10:20:08 +0000 (15:50 +0530)]
use os.path.join() correctly
os.path.join adds the separator (i.e. '/') between the provided path
components only if needed. Providing a single path component doesn't
lead to any checks.
Currently the default crush rule value is added to the ceph config
on the mon nodes as an extra configuration applied after the template
generation via the ansible ini module.
This implies two behaviors:
1/ On each ceph-ansible run, the ceph.conf will be regenerated via
ceph-config+template and then ceph-mon+ini_file. This leads to an
unnecessary daemon restart.
2/ When other ceph daemons are collocated on the monitor nodes
(like mgr or rgw), the default crush rule value will be erased by
the ceph.conf template (mon -> mgr -> rgw).
This patch adds the osd_pool_default_crush_rule config to the ceph
template and only for the monitor nodes (like crush_rules.yml).
The default crush rule id is read (if it exists) from the current ceph
configuration.
The default configuration is -1 (ceph default).
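A hedged sketch of reading the running value before templating; the task name, registered variable, and fall-back handling are illustrative:
```yaml
- name: read the default crush rule id from the running monitor
  command: >
    ceph --cluster {{ cluster | default('ceph') }}
    daemon mon.{{ monitor_name }} config get osd_pool_default_crush_rule
  register: default_crush_rule
  changed_when: false
  failed_when: false   # fall back to -1 (ceph default) when nothing is configured yet
```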
Dimitri Savineau [Mon, 11 Mar 2019 14:44:47 +0000 (10:44 -0400)]
ceph-osd: Install numactl package when needed
With 3e32dce we can run OSD containers with numactl support.
When using the numactl command in a containerized deployment we need to
be sure that the corresponding package is installed on the host.
The package installation is only executed when the
ceph_osd_numactl_opts variable isn't empty.
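A hedged sketch of the conditional install described above:
```yaml
- name: install numactl when ceph_osd_numactl_opts is set
  package:
    name: numactl
    state: present
  register: result
  until: result is succeeded
  when: ceph_osd_numactl_opts | default('') | length > 0
```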
I suspect `./generate_group_vars_sample.sh` wasn't used in b8d580b3f48c69ba9882df773c4d144b73d01c95 because it introduced a typo in
`group_vars/all.yml.sample` and `group_vars/clients.yml.sample`.
We don't need to set After=docker.service when the container_binary
variable isn't set to docker.
It doesn't break anything currently but it could be confusing when
using podman.
Instead of using subscription-manager with the command module, we can use
the rhsm_repository Ansible module.
This module already uses the repos list feature to determine whether a
repository is enabled or not. That way the module is idempotent, so
we don't need changed_when: false anymore.
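A hedged sketch with the rhsm_repository module; the repository ids below are placeholders, not the literal list enabled by the role:
```yaml
- name: enable red hat storage repositories
  rhsm_repository:
    name:
      - rhel-7-server-rpms
      - rhel-7-server-extras-rpms
    state: enabled
```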
Because the client name is part of the client key path we can reuse
the user variable to build this path.
Also remove a duplicate user variable declaration.
Because we're still using Linux distributions with python 2.7 (like
CentOS/RHEL 7) it could be useful to run travis tests against python
2.7, even though its support ends in 2020.
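A hedged .travis.yml sketch; the interpreter list is the point here, the install/script steps are placeholders:
```yaml
language: python
python:
  - "2.7"
  - "3.6"
install:
  - pip install -r requirements.txt
script:
  - pytest
```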
The ceph stable community repository only enables the basearch
packages URL.
Add the noarch URL because, starting with the nautilus release, some
packages are published there that are useful for mgr or grafana.
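A hedged sketch of the extra repository definition, assuming the usual download.ceph.com layout:
```yaml
- name: add ceph stable noarch repository
  yum_repository:
    name: ceph_stable_noarch
    description: Ceph Stable noarch repo
    baseurl: "https://download.ceph.com/rpm-{{ ceph_stable_release }}/el{{ ansible_distribution_major_version }}/noarch"
    gpgcheck: true
    gpgkey: https://download.ceph.com/keys/release.asc
    state: present
```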
After b8d580b and e9e5d5a we could have either item.min_size or
osd_pool_default_min_size set as a string instead of an int, causing the
condition to be true when it should be false.
As a result, the task could try to set the pool min_size value to
0, which leads to:
Error EINVAL: pool min_size must be between 1 and 1
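A hedged sketch of the explicit cast; the surrounding task (pool loop, condition shape) is illustrative, the point is forcing both sides of the comparison to `int`:
```yaml
- name: customize pool min_size
  command: >
    ceph --cluster {{ cluster | default('ceph') }}
    osd pool set {{ item.name }} min_size {{ item.min_size | default(osd_pool_default_min_size) }}
  with_items: "{{ pools }}"
  changed_when: false
  when: (item.min_size | default(osd_pool_default_min_size) | int) > (osd_pool_default_min_size | int)
```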
Otherwise a bunch of jobs will fail like the following:
```
[WARNING]: Unable to parse /home/jenkins-build/build/workspace/ceph-ansible-nightly-luminous-ubuntu-container-stable-3.2-bluestore_lvm_osds/tests/functional/bs-lvm-osds/container/hosts-ubuntu as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
```
As of testinfra 2.0.0, the binary name is `py.test`.
But let's pin the version to 1.19.0.
Indeed, migrating to 2.0.0 requires our current testing to be reworked a bit.
Since we don't have the bandwidth ATM for this, it's better to simply
keep testing with testinfra 1.19.0.
Note that I've replaced all `testinfra` occurrences by `py.test` anyway.
Removing the content of this directory seems a bit aggressive and causes a
redeployment to fail after a purge on Debian-based distributions.
Typical error:
```
fatal: [mon0]: FAILED! => changed=false
attempts: 3
msg: No package matching 'ceph' is available
```
The following task considers the cache to be still valid, so apt
doesn't refresh it:
```
- name: update apt cache if cache_valid_time has expired
apt:
update_cache: yes
cache_valid_time: 3600
register: result
until: result is succeeded
```
Since the task installing the ceph packages has `update_cache: no`, it
fails:
```
- name: install ceph for debian
apt:
name: "{{ debian_ceph_pkgs | unique }}"
update_cache: no
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
default_release: "{{ ceph_stable_release_uca | default('') }}{{ ansible_distribution_release ~ '-backports' if ceph_origin == 'distro' and ceph_use_distro_backports else '' }}"
register: result
until: result is succeeded
```
/tmp/* isn't specific to ceph either, so we shouldn't remove everything
in this directory.
Using the `shell` module seems to be the only way to make this task work
on RHEL-based AND Debian-based distributions.
On Ubuntu, using the `command` Ansible module fails as follows
(regardless of `sudo` usage):
```
ok: [osd1] => changed=false
cmd: command -v ceph-volume
failed_when_result: false
msg: '[Errno 2] No such file or directory: ''command'': ''command'''
rc: 2
```
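A hedged sketch of the working form; `command -v` is a shell builtin, which is exactly why the `command` module cannot execute it:
```yaml
- name: check if ceph-volume is available
  shell: command -v ceph-volume
  register: ceph_volume_present
  changed_when: false
  failed_when: false
```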
Kevin Coakley [Thu, 28 Feb 2019 20:57:03 +0000 (12:57 -0800)]
Add changed_when: false to the "get osd ids" statement
The "get osd ids" statement only registers the osd_ids_non_container variable. Running "ls /var/lib/ceph/osd/ | sed 's/.*-//'" should never produce a change on the system. Adding changed_when: false prevents irrelevant change messages from Ansible.
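A minimal sketch of the task as described:
```yaml
- name: get osd ids
  shell: ls /var/lib/ceph/osd/ | sed 's/.*-//'
  register: osd_ids_non_container
  changed_when: false
```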
Dimitri Savineau [Wed, 27 Feb 2019 16:07:38 +0000 (11:07 -0500)]
mon: Add mds permissions to client.admin
The administrator keyring needs full capabilities on mds like mon,
osd and mgr.
Without this, the client.admin key won't be able to run commands
against mds (like `ceph tell mds.0 session ls`).
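A hedged sketch of the resulting capability set (the variable name is illustrative):
```yaml
client_admin_caps:
  mon: allow *
  osd: allow *
  mds: allow *
  mgr: allow *
```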
Kevin Coakley [Tue, 26 Feb 2019 17:30:31 +0000 (09:30 -0800)]
Set permissions on monitor directory to u=rwX,g=rX,o=rX recursive
Set directories to 755 and files to 644 in /var/lib/ceph/mon/{{ cluster }}-{{ monitor_name }} recursively, instead of setting both files and directories to 755 recursively. The ceph-mon process writes files to this path with permissions 644. This update stops Ansible from changing the permissions in /var/lib/ceph/mon/{{ cluster }}-{{ monitor_name }} every time ceph-mon writes a file, and increases idempotency.
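A hedged sketch of the recursive permission task; with the symbolic `X`, directories keep 755 while regular files end up as 644 (the ownership shown is an assumption):
```yaml
- name: set proper ownership and permissions on the monitor directory
  file:
    path: "/var/lib/ceph/mon/{{ cluster }}-{{ monitor_name }}"
    state: directory
    owner: ceph
    group: ceph
    mode: "u=rwX,g=rX,o=rX"
    recurse: true
```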
The previous approach was wrong.
Checking whether `item.key` is in `osd_auto_discovery_exclude` (`['dm-',
'loop']`) is incorrect because it will obviously never match: the list
contains device-name prefixes, not full device names. Therefore, the
condition returns `True` whatever device we are checking.
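A hedged sketch of a prefix-aware test, not the literal condition used by the role:
```yaml
- name: collect auto-discovered devices, skipping excluded prefixes
  set_fact:
    devices: "{{ devices | default([]) + ['/dev/' + item.key] }}"
  with_dict: "{{ ansible_devices }}"
  when: item.key is not match('(' + osd_auto_discovery_exclude | default(['dm-', 'loop']) | join('|') + ')')
```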
When the following failure is thrown:
```
rhel-8.0.0-beta-1.7- [=== ] --- B/s | 0 B --:-- ETArhel-8.0.0-beta-1.7-appstream 0.0 B/s | 0 B 00:00
rhel-8.0.0-beta-1.7- [=== ] --- B/s | 0 B --:-- ETArhel-8.0.0-beta-1.7-baseos 0.0 B/s | 0 B 00:00
rhel-8.0.0-beta-1.7- [ === ] --- B/s | 0 B --:-- ETArhel-8.0.0-beta-1.7-builder 0.0 B/s | 0 B 00:00
Failed to synchronize cache for repo 'rhel-8.0.0-beta-1.7-appstream', ignoring this repo.
Failed to synchronize cache for repo 'rhel-8.0.0-beta-1.7-baseos', ignoring this repo.
Failed to synchronize cache for repo 'rhel-8.0.0-beta-1.7-builder', ignoring this repo.
No match for argument: python3
Error: Unable to find a match
```
dnf returns 0 anyway.
Let's ensure the pattern 'Failed' isn't present in the output.
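A hedged sketch of checking the output pattern in addition to the return code (the exact task and package are illustrative):
```yaml
- name: install python3
  command: dnf install -y python3
  register: result
  changed_when: false
  failed_when: result.rc != 0 or 'Failed' in result.stdout or 'Failed' in result.stderr
```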
I didn't use the `ceph/ubuntu-bionic` image because it's broken at the
time of writing this commit. I'll switch back to `ceph/ubuntu-bionic` as
soon as it is fixed.
osd: make the 'wait for all osd to be up' task configurable
Introduce two new variables to make the 'wait for all osd to
be up' check configurable.
For some deployments, OSDs can take longer to be seen as UP and IN.
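A hedged sketch of the two tunables (the variable names are assumptions):
```yaml
nb_retry_wait_osd_up: 60
delay_wait_osd_up: 10
```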
```
TASK [ceph-mgr : fetch ceph mgr keyring] *****
skipping: [mon-000] => {
"changed": false,
"skip_reason": "Conditional result was False"
}
skipping: [mon-002] => {
"changed": false,
"skip_reason": "Conditional result was False"
}
skipping: [mon-001] => {
"changed": false,
"skip_reason": "Conditional result was False"
}
TASK [ceph-mgr : copy ceph keyring(s) if needed] *******************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option
failed: [mon-002] (item={'name': '/etc/ceph/ceph.mgr.li895-17.keyring', 'dest': '/var/lib/ceph/mgr/ceph-li895-17/keyring', 'copy_key': True}) => {
"changed": false,
"item": {
"copy_key": true,
"dest": "/var/lib/ceph/mgr/ceph-li895-17/keyring",
"name": "/etc/ceph/ceph.mgr.li895-17.keyring"
}
}
MSG:
Could not find or access 'fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li895-17.keyring'
Searched in:
/root/ceph-ansible/roles/ceph-mgr/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li895-17.keyring
/root/ceph-ansible/roles/ceph-mgr/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li895-17.keyring
/root/ceph-ansible/roles/ceph-mgr/tasks/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li895-17.keyring
/root/ceph-ansible/roles/ceph-mgr/tasks/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li895-17.keyring
/root/ceph-ansible/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li895-17.keyring
/root/ceph-ansible/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li895-17.keyring on the Ansible Controller.
If you are using a module and expect the file to exist on the remote, see the remote_src option
skipping: [mon-002] => (item={'name': '/etc/ceph/ceph.client.admin.keyring', 'dest': '/etc/ceph/ceph.client.admin.keyring', 'copy_key': False}) => {
"changed": false,
"item": {
"copy_key": false,
"dest": "/etc/ceph/ceph.client.admin.keyring",
"name": "/etc/ceph/ceph.client.admin.keyring"
},
"skip_reason": "Conditional result was False"
}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option
failed: [mon-001] (item={'name': '/etc/ceph/ceph.mgr.li985-128.keyring', 'dest': '/var/lib/ceph/mgr/ceph-li985-128/keyring', 'copy_key': True}) => {
"changed": false,
"item": {
"copy_key": true,
"dest": "/var/lib/ceph/mgr/ceph-li985-128/keyring",
"name": "/etc/ceph/ceph.mgr.li985-128.keyring"
}
}
MSG:
Could not find or access 'fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li985-128.keyring'
Searched in:
/root/ceph-ansible/roles/ceph-mgr/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li985-128.keyring
/root/ceph-ansible/roles/ceph-mgr/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li985-128.keyring
/root/ceph-ansible/roles/ceph-mgr/tasks/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li985-128.keyring
/root/ceph-ansible/roles/ceph-mgr/tasks/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li985-128.keyring
/root/ceph-ansible/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li985-128.keyring
/root/ceph-ansible/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li985-128.keyring on the Ansible Controller.
If you are using a module and expect the file to exist on the remote, see the remote_src option
skipping: [mon-001] => (item={'name': '/etc/ceph/ceph.client.admin.keyring', 'dest': '/etc/ceph/ceph.client.admin.keyring', 'copy_key': False}) => {
"changed": false,
"item": {
"copy_key": false,
"dest": "/etc/ceph/ceph.client.admin.keyring",
"name": "/etc/ceph/ceph.client.admin.keyring"
},
"skip_reason": "Conditional result was False"
}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option
failed: [mon-000] (item={'name': '/etc/ceph/ceph.mgr.li1166-30.keyring', 'dest': '/var/lib/ceph/mgr/ceph-li1166-30/keyring', 'copy_key': True}) => {
"changed": false,
"item": {
"copy_key": true,
"dest": "/var/lib/ceph/mgr/ceph-li1166-30/keyring",
"name": "/etc/ceph/ceph.mgr.li1166-30.keyring"
}
}
MSG:
Could not find or access 'fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li1166-30.keyring'
Searched in:
/root/ceph-ansible/roles/ceph-mgr/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li1166-30.keyring
/root/ceph-ansible/roles/ceph-mgr/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li1166-30.keyring
/root/ceph-ansible/roles/ceph-mgr/tasks/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li1166-30.keyring
/root/ceph-ansible/roles/ceph-mgr/tasks/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li1166-30.keyring
/root/ceph-ansible/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li1166-30.keyring
/root/ceph-ansible/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li1166-30.keyring on the Ansible Controller.
If you are using a module and expect the file to exist on the remote, see the remote_src option
skipping: [mon-000] => (item={'name': '/etc/ceph/ceph.client.admin.keyring', 'dest': '/etc/ceph/ceph.client.admin.keyring', 'copy_key': False}) => {
"changed": false,
"item": {
"copy_key": false,
"dest": "/etc/ceph/ceph.client.admin.keyring",
"name": "/etc/ceph/ceph.client.admin.keyring"
},
"skip_reason": "Conditional result was False"
}
NO MORE HOSTS LEFT *************************************************************************************************************************************************************************************************
to retry, use: --limit @/root/ceph-linode/linode.retry
```
David Waiting [Mon, 10 Dec 2018 14:54:18 +0000 (09:54 -0500)]
ensure at least one osd is up
The existing task checks that the number of OSDs is equal to the number of up OSDs before continuing.
The problem is that if none of the OSDs have been discovered yet, the task will exit immediately and subsequent pool creation will fail (num_osds = 0, num_up_osds = 0).
In this change, we also check that at least one OSD is present. In our testing, this results in the task correctly waiting for all OSDs to come up before continuing.
Signed-off-by: David Waiting <david_waiting@comcast.com>
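A hedged sketch of the combined check, assuming `ceph -s -f json` exposes the OSD counters under an `osdmap` key (the exact JSON layout varies between releases) and reusing the tunables sketched earlier:
```yaml
- name: wait for all osd to be up (with at least one present)
  command: "ceph --cluster {{ cluster | default('ceph') }} -s -f json"
  register: ceph_health_raw
  changed_when: false
  until: >
    (ceph_health_raw.stdout | from_json).osdmap.num_osds | int > 0 and
    (ceph_health_raw.stdout | from_json).osdmap.num_osds == (ceph_health_raw.stdout | from_json).osdmap.num_up_osds
  retries: "{{ nb_retry_wait_osd_up | default(60) }}"
  delay: "{{ delay_wait_osd_up | default(10) }}"
```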
switch_to_containers: use ceph binary from container
Use the ceph binary from the container instead of the one from the host.
If the ceph CLI version isn't compatible between the host and the container
image, it can cause the CLI to hang.
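A hedged sketch of building the CLI wrapper from the image; the variable names follow the usual ceph-ansible conventions, not the literal task:
```yaml
- name: set the ceph command to run from the container image
  set_fact:
    ceph_cmd: >-
      {{ container_binary }} run --rm --net=host
      -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph:/var/lib/ceph:z
      --entrypoint=ceph
      {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
```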
Instead of using the `RuntimeDirectory` parameter in systemd unit files,
let's use a systemd `tmpfiles.d` configuration to ensure `/run/ceph` exists.
Explanation:
`podman` doesn't create `/var/run/ceph` if it doesn't exist at the time
the container is run, while `docker` used to create it.
In the `switch_to_containers` scenario, `/run/ceph` gets created by
a tmpfiles.d systemd file; when switching to containers, the systemd
unit file complains because `/run/ceph` already exists.
The better fix would be to ensure `/usr/lib/tmpfiles.d/ceph-common.conf`
is removed and to rely only on the `RuntimeDirectory` systemd unit file
parameter. But since we come from a non-containerized environment which is
already running, `/run/ceph` is already created; when starting the unit to
start the container, systemd will still complain, and we can't simply
remove the directory if daemons are collocated.
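A hedged sketch of ensuring /run/ceph via tmpfiles.d (mode and ownership are assumptions):
```yaml
- name: add a tmpfiles.d entry for /run/ceph
  copy:
    content: "d /run/ceph 0770 ceph ceph -\n"
    dest: /etc/tmpfiles.d/ceph-common.conf
    owner: root
    group: root
    mode: "0644"

- name: create /run/ceph right away from the tmpfiles.d entry
  command: systemd-tmpfiles --create /etc/tmpfiles.d/ceph-common.conf
  changed_when: false
```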
switch_to_containers: do not try to redeploy monitors
The `ceph-mon` role tries to redeploy monitors because it assumes they were
not yet deployed, since `mon_socket_stat` and `ceph_mon_container_stat` are
undefined (indeed, we stop the daemon before calling `ceph-mon` in the
switch_to_containers playbook).