Sébastien Han [Wed, 2 Aug 2017 13:13:06 +0000 (15:13 +0200)]
common: automate setting up online repositories for ceph deployments on debian nodes
This commit automates the process of setting up online repositories for
Red Hat Ceph Storage on Debian nodes. The manual steps are currently
described here:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/installation_guide_for_ubuntu/prerequisites#online_repositories
If you are an RHCS customer running a Debian-based system, you can now
access packages through the Red Hat CDN.
For this, set ceph_rhcs and ceph_rhcs_cdn_install to true. Then set your
customer credentials in ceph_rhcs_cdn_debian_repo, replacing
customername:customerpasswd with your details.
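A minimal group_vars sketch of that configuration; the repo URL follows the placeholder form used by the variable and is shown here as an assumption:

    ceph_rhcs: true
    ceph_rhcs_cdn_install: true
    # replace customername:customerpasswd with your CDN credentials
    ceph_rhcs_cdn_debian_repo: "https://customername:customerpasswd@rhcs.download.redhat.com"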
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1434175
Signed-off-by: Sébastien Han <seb@redhat.com>
Sébastien Han [Wed, 2 Aug 2017 16:09:46 +0000 (18:09 +0200)]
common: override and autodetect ceph_stable_release
For ceph_dev and RHCS installations we need to detect the release, since
we do not declare it explicitly. Keeping the default ceph_stable_release
can lead to several failures, some of which have already been reported.
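A hedged sketch of the autodetection idea, assuming a gathered ceph_version fact such as "12.2.0"; the task and expression are illustrative, not the exact implementation:

    - name: autodetect ceph release from the installed version
      set_fact:
        ceph_release: "{{ 'luminous' if ceph_version.split('.')[0] | int >= 12 else 'jewel' }}"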
Fixes: https://github.com/ceph/ceph-ansible/issues/1712
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1476210
Signed-off-by: Sébastien Han <seb@redhat.com>
Sébastien Han [Thu, 27 Jul 2017 15:05:59 +0000 (17:05 +0200)]
osd: simplify scenarios
There are only two main scenarios now:
* collocated: everything remains on the same device:
- data, db, wal for bluestore
- data and journal for filestore
* non-collocated: dedicated devices for some of the components
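A minimal group_vars sketch of both scenarios; device paths are placeholders:

    # collocated: everything on the same device
    osd_scenario: collocated
    devices:
      - /dev/sdb

    # non-collocated: dedicated devices for some of the components
    # osd_scenario: non-collocated
    # devices:
    #   - /dev/sdb
    # dedicated_devices:
    #   - /dev/sdc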
In containerized deployments, if you try to update your `ceph.conf` file
it won't actually be updated on your nodes, because it is overwritten by
the copy of the file present in your fetch directory.
Move `fsid`,`monitor_name`,`docker_exec_cmd` and `ceph_release` set_fact
to `ceph-defaults` role.
This allows reusing these facts without having to play `ceph-common`
or `ceph-docker-common`.
Merge the `ceph-docker-common` and `ceph-common` default vars into the
`ceph-defaults` role.
Remove redundant variables declaration in `ceph-mon` and `ceph-osd` roles.
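A minimal sketch of the kind of set_fact now living in ceph-defaults; the values mirror ceph-ansible conventions but are illustrative:

    - name: set_fact monitor_name
      set_fact:
        monitor_name: "{{ ansible_hostname }}"

    - name: set_fact docker_exec_cmd
      set_fact:
        docker_exec_cmd: "docker exec ceph-mon-{{ ansible_hostname }}"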
Sébastien Han [Thu, 27 Jul 2017 13:50:38 +0000 (15:50 +0200)]
common: only add a daemon section if we run on the host
We don't want heterogeneous ceph.conf files anymore and believe each
node should only carry the section for the daemon it actually runs.
If we don't do this and use profiles, e.g. rgw, we would get a new rgw
section on some of the nodes where no rgw runs.
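A hedged sketch of the templating idea in a ceph.conf Jinja2 template, assuming an inventory group named rgws; the group name and fragment are assumptions:

    {% if inventory_hostname in groups.get('rgws', []) %}
    [client.rgw.{{ ansible_hostname }}]
    host = {{ ansible_hostname }}
    {% endif %}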
Sébastien Han [Wed, 26 Jul 2017 14:46:57 +0000 (16:46 +0200)]
osd: do not enable osd@id unit file
ceph-disk is responsible for enabling the unit file if needed. Actually,
since https://github.com/ceph/ceph/pull/12241 it seems it is not
even needed. In the event of a restart, udev rules will be triggered and
will run `ceph-disk activate` on the device, so enabling the unit is not
needed.
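For illustration, the dropped task looked roughly like this; a hypothetical reconstruction, not the exact removed code:

    - name: enable ceph-osd unit  # removed; ceph-disk and udev handle activation
      service:
        name: "ceph-osd@{{ item }}"
        enabled: yes
      with_items: "{{ osd_ids }}"  # osd_ids is a placeholder variable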
Closes: https://github.com/ceph/ceph-ansible/issues/1142
Signed-off-by: Sébastien Han <seb@redhat.com>
Christian Zunker [Mon, 12 Jun 2017 08:31:49 +0000 (08:31 +0000)]
Restart OSDs during initial setup when crush location is used
OSDs get started by ceph-disk before the ceph.conf file is written
with a crush location. That results in a crush map without the
configured crush locations.
To prevent this, we have to restart the OSDs during the initial setup
after the crush location was added to the ceph.conf file.
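A minimal sketch of that restart, gated on the crush_location toggle; the names follow ceph-ansible conventions but are assumptions here:

    - name: restart osds once the crush location is in ceph.conf
      service:
        name: "ceph-osd@{{ item }}"
        state: restarted
      with_items: "{{ osd_ids }}"  # osd_ids is a placeholder variable
      when: crush_location | bool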
Sébastien Han [Mon, 24 Jul 2017 09:35:08 +0000 (11:35 +0200)]
osd: refactor osd scenarios
We have multiple issues with ceph-disk's CLI across bluestore and Ceph
releases, mainly due to CLI changes in Luminous. Luminous introduced
--bluestore and --filestore options, which do not exist on releases
older than Luminous. The default store being bluestore on Luminous,
simply checking for the store is not enough, so we have to build a
specific ceph-disk command line depending on the Ceph version we are
running and the desired osd_store.
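A hedged sketch of the version-dependent option building, assuming ceph-ansible's osd_objectstore and ceph_release variables; the exact expression is illustrative:

    - name: build ceph-disk prepare options
      set_fact:
        # only Luminous and later understand --bluestore/--filestore
        ceph_disk_cli_options: "{{ '--' ~ osd_objectstore if ceph_release not in ['jewel', 'kraken'] else '' }}"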
John Fulton [Wed, 19 Jul 2017 22:20:18 +0000 (22:20 +0000)]
Allow user to define ACLs for OpenStack keys
The keys and openstack_keys structures now support an optional
key called acls, whose value is a list of strings one could pass
to setfacl. The ansible ACL module applies the ACLs to all
openstack keys with this property.
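A minimal sketch of an entry using the new key; the surrounding fields are illustrative:

    openstack_keys:
      - name: client.glance
        mon_cap: "allow r"
        osd_cap: "allow rwx pool=images"
        acls:
          - "u:glance:r--"  # strings handed to setfacl via the acl module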
The workaround here is rendering `ceph_conf_overrides` before passing it
to `config_template`, to be sure a section won't be added twice in
ceph.conf.
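A hedged sketch of that workaround, assuming a set_fact pass forces Jinja rendering before config_template consumes the value; the fact name is hypothetical:

    - name: render ceph_conf_overrides
      set_fact:
        ceph_conf_overrides_rendered: "{{ ceph_conf_overrides }}"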
Sébastien Han [Wed, 27 Jul 2016 15:31:35 +0000 (17:31 +0200)]
profiles: introducing cluster profiles
This commit introduces a new directory called "profiles", which
contains sets of variables for particular use cases. These profiles
provide guidance for certain deployment scenarios.
Andrew Schoen [Mon, 17 Jul 2017 15:26:48 +0000 (10:26 -0500)]
tests: run all existing tests with shaman repos
If you use the 'dev' factor, the testing scenario will
use repos from shaman.ceph.com. You can define CEPH_DEV_BRANCH
and CEPH_DEV_SHA1 to specify which repo you'd like to test.
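A hedged usage example; the tox environment name is hypothetical and depends on your scenario list:

    CEPH_DEV_BRANCH=master CEPH_DEV_SHA1=latest tox -e dev-centos7_cluster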
Some tasks fetch files to `{{ fetch_directory }}/docker_mon_files` and
then try to copy them from `{{ fetch_directory }}/{{ fsid }}`. That
causes the playbook to fail.
Client: keep consistency between `openstack_key` and `keys`
This keeps consistency between `{{ openstack_keys }}` and `{{ keys }}`,
used respectively in the `ceph-mon` and `ceph-client` roles.
This commit also adds the possibility to set mds caps.
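A minimal sketch of a keys entry carrying mds caps; the fields are illustrative:

    keys:
      - name: client.cephfs
        mon_cap: "allow r"
        osd_cap: "allow rw pool=cephfs_data"
        mds_cap: "allow rw"  # newly supported
        mode: "0600"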
The `test_osds_listen_on_*` tests assume OSDs always listen on
consecutive tcp ports starting from `6800`.
Eg.
If you have 2 OSDs, tests will assume it should listen on 2 ports for each
network (`public_network` and `cluster_network`), therefore:
`6800, 6801, 6802, 6803`
but sometimes it doesn't happen this way and OSDs end up listening on
non-consecutive tcp ports.
`failed: internal error: Monitor path /var/lib/libvirt/qemu/domain-bs-docker-cluster-dmcrypt-journal-collocation_mon0_1499294943_ba9faf7bf296533177f6/monitor.sock too big for destination`
Since we can't upgrade libvirt in our CI, we need to shorten the
directory names to work around this issue.
Doc: containerized deploy with custom admin secret
In addition to ceph/ceph-docker@69d9aa6, this explains how to deploy a
containerized cluster with a custom admin secret.
Basically, you just need to pass the `admin_secret` defined in your
`group_vars/all.yml` to the `ceph_mon_docker_extra_env` variable.
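A minimal group_vars/all.yml sketch of that wiring; the ADMIN_SECRET environment variable name is an assumption based on ceph-docker, and the key value is illustrative:

    admin_secret: "AQDLGDRaAAAAABAA6yZPOTcJ2/PPpdH7rGyCzw=="  # illustrative value
    ceph_mon_docker_extra_env: "-e ADMIN_SECRET={{ admin_secret }}"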