1ac94c048ff1d1385de2892d0ecef7879ec563e9 introduced support for
multiple rgw instances on a single host but missed implementing this
feature in rolling_update.
config: make sure ceph_release is set for all client nodes
`ceph_release` is set in `ceph-container-common` but this role is
played only on the first client node; this means ceph-config will fail
on all client nodes except the first one.
This commit ensures `ceph_release` is set for all client nodes.
Sébastien Han [Mon, 21 Jan 2019 11:08:56 +0000 (12:08 +0100)]
mon: enable msgr2
Enabling msgr2 style declaration for Nautilus and above. Prior releases
will keep the right syntax.
When upgrading from Mimic to Nautilus we must maintain something in the
form of:
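For illustration, a mon_host entry listing both protocols might look
like this (addresses are placeholders):

    # placeholder address; v2 defaults to port 3300, v1 to 6789
    mon_host = [v2:192.168.0.10:3300,v1:192.168.0.10:6789]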
Sébastien Han [Wed, 9 Jan 2019 12:23:07 +0000 (13:23 +0100)]
mon: ability to change mon listening port on container
You can now use 'ceph_mon_container_listen_port' to change the port the
monitor will listen on.
The default is now 3300 (assigned by IANA) since Nautilus introduced
the messenger v2 transport protocol.
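A minimal sketch of the new knob (file placement is illustrative):

    # group_vars/all.yml -- illustrative placement
    ceph_mon_container_listen_port: 3300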
Sébastien Han [Mon, 21 Jan 2019 12:53:53 +0000 (13:53 +0100)]
Revert "mon: force peer addition"
This reverts commit ee08d1f89a588e878324141dd0f80c65058a377d, which was
mostly there to work around a bug in ceph@master. Now that ceph@master
is fixed, we can revert it. Thanks to https://github.com/ceph/ceph/pull/25900
guihecheng [Fri, 9 Nov 2018 00:56:57 +0000 (08:56 +0800)]
rgw: add support for multiple rgw instances on a single host
With this, we can have multiple rgw instances on a single host
in a single run, without having to use rgw-standalone.yml, which does
not seem able to bind ports separately.
If you want multiple rgw instances, just change 'radosgw_instances'
to the number you want; it defaults to 1.
Not compatible with Multi-Site yet.
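A hedged example, using the variable name from this commit message
(file placement is illustrative):

    # group_vars/rgws.yml -- spawns two rgw instances per rgw host
    radosgw_instances: 2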
To avoid duplicating code in `site.yml.sample`, `site-docker.yml.sample`
and `setup.yml`, let's isolate this part of the code and simply include
it each time we need it.
This condition is useless and it also creates issues that our CI does
not catch. ceph_release is set by either ceph-common or
ceph-docker-common so let's keep it this way.
Sébastien Han [Tue, 8 Jan 2019 17:36:14 +0000 (18:36 +0100)]
mon: force peer addition
Something changed with the introduction of msgr2: we have to
add each node as a peer so the monitors can form a quorum. This might be
due to our CI environment, although adding this is completely harmless
and solves monitors not being able to form quorum.
It seems that the initial monitor map didn't contain the right
information about the peers (addresses like 0.0.0.0/0r1, for each rank).
Bruceforce [Thu, 3 Jan 2019 15:08:58 +0000 (16:08 +0100)]
nfs-ganesha: fixed nfs_ganesha_dev_apt_repo variable
The nfs_ganesha_dev_apt_repo variable was set incorrectly in the task
"fetch nfs-ganesha development repository".
Rishabh Dave [Wed, 12 Dec 2018 11:15:00 +0000 (16:45 +0530)]
ceph-infra: merge ntp_debian.yml and ntp_rpm.yml
Merge ntp_debian.yml and ntp_rpm.yml into one (the new file is called
setup_ntp.yml) since they are almost identical. Also avoid repetition
of the common setup step for ntpd and chronyd services.
Rishabh Dave [Wed, 2 Jan 2019 11:34:49 +0000 (17:04 +0530)]
copy certificates as root user
Since the current user on the controller node might not have
permission to read the TLS certificate and related files, copy these
files to the Ceph nodes as the root user.
Fixes: https://github.com/ceph/ceph-ansible/issues/3465
Signed-off-by: Rishabh Dave <ridave@redhat.com>
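A minimal sketch of the approach, assuming privilege escalation on the
copy task; the variable name is hypothetical:

    # illustrative: run the copy with root privileges via 'become'
    - name: copy TLS certificate and related files to the Ceph nodes
      become: true
      copy:
        src: "{{ item }}"
        dest: /etc/ceph/
        mode: '0600'
      with_items: "{{ tls_cert_files }}"  # hypothetical variable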
update: do not enforce `serial: 1` on client nodes
There is no need to enforce `serial: 1` on client nodes.
Let's make it parameterizable by introducing a new *extra* variable
`client_update_batch`; if not set, this will default to `{{
ansible_forks }}`.
NOTE: this is only usable as an extra variable passed with
`-e client_update_batch=<num>`
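For example (playbook path as found in the ceph-ansible tree; inventory
flags omitted):

    ansible-playbook infrastructure-playbooks/rolling_update.yml -e client_update_batch=10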
Rishabh Dave [Mon, 17 Dec 2018 10:34:46 +0000 (16:04 +0530)]
set any_errors_fatal to true for all host sections
Add `any_errors_fatal: true` to all host sections in `site.yml.sample`
and `site-container.yml.sample` so that the playbook execution
stops immediately when an error occurs.
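A minimal sketch of one such host section (the role list is
illustrative):

    - hosts: mons
      become: true
      any_errors_fatal: true
      roles:
        - ceph-defaults
        - ceph-mon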
Sébastien Han [Thu, 13 Dec 2018 10:38:23 +0000 (11:38 +0100)]
mon: remove ceph aliases for containers
These aliases have led to several issues, making users believe that
ceph binaries are actually present on the host when running the
command. However, it wasn't explicit that the commands only ran inside
a container.
This has brought too much confusion so we decided to remove them.
Closes: https://github.com/ceph/ceph-ansible/issues/3445
Signed-off-by: Sébastien Han <seb@redhat.com>
Sometimes we play the whole `ceph-defaults` role just to access the
default value of some variables. It means we play the `facts.yml` part
of this role while it's not desired. Splitting this role will speed up
the playbook.
The OSD part of the purge delegates commands to the monitor node, so we
need to gather monitor facts to know the `ansible_hostname` fact that
is used in the `docker_exec_cmd` fact.
Sébastien Han [Mon, 3 Dec 2018 21:59:17 +0000 (22:59 +0100)]
purge-cluster: add support for mon/mgr collocation
Recently we introduced the default collocation of mon/mgr without the
need for a dedicated mgrs section. This means we have to stop the mgr
process on that machine too.
Sébastien Han [Mon, 3 Dec 2018 21:46:52 +0000 (22:46 +0100)]
purge-docker-cluster: add support for mgr/mon collocation
Recently we introduced the collocation of mon and mgr by default, so we
don't need to have an explicit mgrs section for this. This means we have
to remove the mgr container on the mon machines too.
Sébastien Han [Thu, 4 Oct 2018 15:40:25 +0000 (17:40 +0200)]
purge-docker-cluster: add ceph-volume support
This commit adds support for purging clusters that were deployed
with ceph-volume. It also uses a block instruction to cleanly separate
the work to do when lvm is used or not.
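A hedged sketch of the block-based separation; the task details are
illustrative, only the structure reflects this commit:

    # group lvm-specific purge tasks under a single condition
    - name: purge ceph-volume prepared OSDs
      block:
        - name: zap and destroy ceph-volume devices
          command: ceph-volume lvm zap --destroy {{ item }}
          with_items: "{{ devices }}"
      when: osd_scenario == 'lvm'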
Sébastien Han [Fri, 7 Dec 2018 11:20:27 +0000 (12:20 +0100)]
ceph-defaults: do not use podman only on atomic
We want to test podman on f29 non-atomic; atomic is not a hard
requirement. However, if you want to use podman, you will have to
install it first before running the playbook.
Sébastien Han [Thu, 6 Dec 2018 12:58:37 +0000 (13:58 +0100)]
mgr: little refactor
This commit removes the default module, so ceph-ansible does not enable
any manager module.
To enable a module you need to set a value for 'ceph_mgr_modules'; you
can pass a list of modules like this:
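For illustration (the module names are examples):

    # group_vars/all.yml -- example module list
    ceph_mgr_modules:
      - status
      - dashboard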
Sébastien Han [Tue, 4 Dec 2018 17:41:36 +0000 (18:41 +0100)]
tox: add missing ceph_docker_image_tag in shrink
When calling shrink on a containerized deployment, we were first doing
the setup with `latest-master` and then, when calling the playbook, we
were using the default value for `ceph_docker_image_tag` that comes
from ceph-defaults. Now we pass
`ceph_docker_image_tag={env:CEPH_DOCKER_IMAGE_TAG:latest-master}` so the
play will run the right container image.
Sébastien Han [Tue, 4 Dec 2018 09:44:28 +0000 (10:44 +0100)]
test: disable nfs for containers
Based on https://github.com/ceph/ceph-container/pull/1269 and given
there are no stable packages or a reliable repository, we disable nfs
ganesha temporarily.
Sébastien Han [Wed, 28 Nov 2018 23:10:29 +0000 (00:10 +0100)]
osd: re-introduce disk_list check
This commit
https://github.com/ceph/ceph-ansible/commit/4cc1506303739f13bb7a6e1022646ef90e004c90#diff-51bbe3572e46e3b219ad726da44b64ebL13
accidentally removed this check.
This is a must-have for ceph-disk based containerized OSDs.
Sébastien Han [Fri, 30 Nov 2018 10:20:03 +0000 (11:20 +0100)]
osd: discover osd_objectstore on the fly
Applying and passing the OSD_BLUESTORE/FILESTORE on the fly is wrong for
existing clusters as their config will be changed.
Typically, if an OSD was prepared with ceph-disk on filestore and we
change the default objectstore to bluestore, the activation will fail.
The osd_objectstore flag should only be used for the preparation, not
the activation. The activation now detects the OSD's objectstore, which
prevents failures like the one described above.
Sébastien Han [Tue, 27 Nov 2018 16:50:44 +0000 (17:50 +0100)]
ceph-osd: change jinja condition
If an existing cluster runs this config and has ceph-disk OSDs, jinja
won't expect `expose_partitions` since it sits inside the 'old' if. We
need it as part of the `osd_scenario != 'lvm'` condition.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1640273
Signed-off-by: Sébastien Han <seb@redhat.com>
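A hedged sketch of the intended template structure; the macro argument
and surrounding logic are illustrative:

    {# evaluate expose_partitions for every non-lvm scenario #}
    {% if osd_scenario != 'lvm' %}
    {{ expose_partitions(osd_id) }}
    {% endif %}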
Sébastien Han [Mon, 3 Dec 2018 10:59:49 +0000 (11:59 +0100)]
ceph_key: allow setting 'dest' to a file
This is useful in situations where you fetch the key from the mon store
and want to write the file with a different name to a dedicated
directory. This is important when fetching the mgr keys: they are
created as mgr.ceph-mon2 but we want them in /var/lib/ceph/mgr/ceph-ceph-mon0/keyring.
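A hedged sketch of such a task; the key name and path come from the
example above, the remaining parameters are illustrative:

    # 'dest' now accepts a file path instead of a directory
    - name: fetch the mgr key into its keyring file
      ceph_key:
        name: mgr.ceph-mon2
        state: present
        dest: /var/lib/ceph/mgr/ceph-ceph-mon0/keyring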
Sébastien Han [Fri, 16 Nov 2018 09:50:38 +0000 (10:50 +0100)]
mon: do not serialize container bootstrap
This commit unifies the container and non-container code, which in the
meantime gives us the ability to deploy N mon containers at the same
time without having to serialize the deployment. This drastically
reduces the time needed to bootstrap the cluster.
Note, this is only possible since Nautilus because the monitors
bootstrap the initial keys on their own once they reach quorum. In the
Nautilus version of the ceph-container mon, we stopped generating the
keys 'manually' from inside the container; for more detail see: https://github.com/ceph/ceph-container/pull/1238
Sébastien Han [Fri, 26 Oct 2018 12:32:49 +0000 (14:32 +0200)]
mgr: only copy keys with dedicated mgr
When collocating mon and mgr, the mgr container will attempt to create
its own key since it has the admin key at its disposal. Also, at this
point there is nothing to fetch since the key is not created by the
mons; as mentioned above, the mgr creates the key on its own.
Sébastien Han [Tue, 16 Oct 2018 13:40:35 +0000 (15:40 +0200)]
site: collocated mon and mgr by default
This will speed up the deployment and also deploy mon and mgr collocated
just as recommended.
This won't prevent you from adding more dedicated machines for mgr if
needed.
Sébastien Han [Mon, 26 Nov 2018 10:08:27 +0000 (11:08 +0100)]
mon: default ceph_health_raw to json
During the first iteration, the command won't return anything, or can
simply fail and might not return a valid json structure. Ansible will
fail parsing it in the `from_json` filter, so let's default that
variable to an empty dictionary.
Sébastien Han [Mon, 26 Nov 2018 09:56:14 +0000 (10:56 +0100)]
container-common: remove old check
This removes a bit of unnecessary code: the check was always wrong
because of the condition 'not ceph_current_status.get('rc', 1) == 0'.
It will never match since `not` applies to a bool and we are checking
for an rc.
Also, even if the check worked, it would be a major blocker after a
complete meltdown. If the whole platform is shut down then nothing
will be up, but files will be present, so this check is definitely wrong.
Sébastien Han [Fri, 26 Oct 2018 12:13:43 +0000 (14:13 +0200)]
rolling-update: remove old condition
This failure condition was only valid at the time when clusters didn't
have ceph-mgr activated. Now since we collocate the ceph-mgr with the
mon by default, if the daemon wasn't present it will be created during
the upgrade.
Sébastien Han [Fri, 16 Nov 2018 09:46:10 +0000 (10:46 +0100)]
ceph_key: apply permissions using the ansible 'file' module
Instead of applying file permissions from our code, let's rely on the
ansible 'file' module for this. This is now handled at the task
declaration level instead of inside the module.
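A minimal sketch of the task-level approach (path and ownership are
illustrative):

    # permissions applied via the 'file' module, next to the key task
    - name: set keyring permissions
      file:
        path: /etc/ceph/ceph.client.admin.keyring
        owner: ceph
        group: ceph
        mode: '0600'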
Sébastien Han [Mon, 26 Nov 2018 10:06:10 +0000 (11:06 +0100)]
sites: fail the playbook on any failure
We need to apply any_errors_fatal: true to every play so it can take
effect, not only on the initial pass. With this flag, any error in the
playbook will cause the playbook to stop.