`failed: internal error: Monitor path /var/lib/libvirt/qemu/domain-bs-docker-cluster-dmcrypt-journal-collocation_mon0_1499294943_ba9faf7bf296533177f6/monitor.sock too big for destination`
and we can't upgrade libvirt in our CI for some reason, so we need to
make the directory names shorter to work around this issue.
Doc: containerized deploy with custom admin secret
In addition to ceph/ceph-docker@69d9aa6, this explains how to deploy a
containerized cluster with a custom admin secret.
Basically, you just need to pass the `admin_secret` defined in your
`group_vars/all.yml` to the `ceph_mon_docker_extra_env` variable.
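A minimal sketch of what this can look like in `group_vars/all.yml`, assuming the container image reads the admin key from an `ADMIN_SECRET` environment variable as introduced in ceph/ceph-docker@69d9aa6 (the key value below is a placeholder):

```yaml
# group_vars/all.yml -- illustrative values only
admin_secret: "<your base64 cephx admin key>"

# forward the secret to the monitor container as an environment variable
ceph_mon_docker_extra_env: -e ADMIN_SECRET={{ admin_secret }}
```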
Sébastien Han [Thu, 29 Jun 2017 15:34:54 +0000 (17:34 +0200)]
osd: ability to set db and wal to bluestore
This commit refactors how we deploy bluestore. We have existing
scenarios that we don't want to change too much. This commit eases the
user experience by not changing the way you use scenarios. Bluestore is
just a different interface to store objects, but the scenarios more or
less remain the same.
If you set osd_objectstore == 'bluestore' along with
journal_collocation: true, you will get an OSD running bluestore with DB
and WAL partitions on the same device.
If you set osd_objectstore == 'bluestore' along with
raw_multi_journal: true, you will get an OSD running bluestore with a
dedicated drive for the rocksdb DB, then the remaining
drives (used with 'devices') will have WAL and DATA collocated.
If you set osd_objectstore == 'bluestore' along with
raw_multi_journal: true and declare bluestore_wal_devices you will get
an OSD running bluestore with a dedicated drive for the rocksdb DB, a
dedicated drive partition for rocksdb WAL and a dedicated drive for
DATA.
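Sketched as `group_vars` fragments covering the three scenarios above (device paths are placeholders; listing the dedicated DB drives via `raw_journal_devices`, as in the existing raw_multi_journal scenario, is an assumption here):

```yaml
# Scenario 1: bluestore with DB and WAL collocated on each data device
osd_objectstore: bluestore
journal_collocation: true
devices:
  - /dev/sdb
  - /dev/sdc
---
# Scenario 2: dedicated drive for the rocksdb DB; WAL and DATA collocated
osd_objectstore: bluestore
raw_multi_journal: true
devices:
  - /dev/sdb
  - /dev/sdc
raw_journal_devices:
  - /dev/nvme0n1
---
# Scenario 3: dedicated DB drive, dedicated WAL drive, dedicated DATA drive
osd_objectstore: bluestore
raw_multi_journal: true
devices:
  - /dev/sdb
raw_journal_devices:
  - /dev/nvme0n1
bluestore_wal_devices:
  - /dev/nvme1n1
```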
Mon: Readd the include of check_mandatory_vars.yml
The check regarding the networking scenario configuration was
moved from ceph-common to ceph-mon in 1de8176, but the include was not
re-added in 189f4fe.
e8187f6 does not fix IPv6 as expected since the `ansible_default_*` facts
are filled with the IP address carried by the network interface used by
the default gateway route. Moreover, it assumes that the MON_IP address
will be this IP address, which is not always the case.
We need to keep using the previous fact but add some intelligence to the
template to determine how to retrieve the IPv4 or IPv6 address, since the
path to the fact in `hostvars` differs between the IPv4 and IPv6 cases.
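A hypothetical sketch of that template logic, written as a task for readability (`monitor_interface` and `ip_version` are existing ceph-ansible settings; the task name and fact name are illustrative):

```yaml
- name: resolve the monitor address from the configured interface
  set_fact:
    # ipv4 is a dict under the interface fact, ipv6 is a list of dicts
    _monitor_address: >-
      {{ hostvars[inventory_hostname]['ansible_' + monitor_interface]['ipv4']['address']
         if ip_version == 'ipv4'
         else hostvars[inventory_hostname]['ansible_' + monitor_interface]['ipv6'][0]['address'] }}
```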
Christian Zunker [Mon, 12 Jun 2017 08:36:29 +0000 (08:36 +0000)]
Create OpenStack pools with crush rule
Add an extra variable to the OpenStack pools, which creates them with
defined rules. This allows placing different pools on, e.g.,
different types of disks.
This commit also sets a new default rule when one is defined and moves
the rbd pool to the new rule.
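For illustration, assuming the rule is attached to each pool definition via a `rule_name` key (the key name and rule names below are assumptions):

```yaml
# group_vars sketch: create the crush rules beforehand, then
# point each OpenStack pool at the rule matching the desired disk type
openstack_glance_pool:
  name: images
  pg_num: "{{ osd_pool_default_pg_num }}"
  rule_name: "ssd_rule"

openstack_cinder_pool:
  name: volumes
  pg_num: "{{ osd_pool_default_pg_num }}"
  rule_name: "hdd_rule"
```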
Sébastien Han [Fri, 23 Jun 2017 14:16:06 +0000 (16:16 +0200)]
docker: refactor followup
Followup on https://github.com/ceph/ceph-ansible/pull/1469 where we
merged most of the container code from roles/ceph-*/tasks/docker/*.yml
into roles/ceph-docker-common/tasks/
It seems that we forgot to remove the original files.
David Galloway [Tue, 20 Jun 2017 16:34:46 +0000 (12:34 -0400)]
infra: Fix ceph.conf creation when taking over existing cluster
Fixes bug introduced in https://github.com/ceph/ceph-ansible/pull/1330
The "stat ceph.conf" task was basically using the stat module on a
string instead of the ceph.conf filename. This caused the "generate
ceph configuration file" task to fail.
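A hedged sketch of the corrected task, assuming the configuration file path is built from the `cluster` name (the path and register name are illustrative):

```yaml
- name: stat ceph.conf
  stat:
    path: "/etc/ceph/{{ cluster }}.conf"
  register: ceph_conf_stat
```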
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1463382
Signed-off-by: David Galloway <dgallowa@redhat.com>
John Fulton [Mon, 19 Jun 2017 18:25:59 +0000 (14:25 -0400)]
Add OpenStack metrics pool
OpenStack's Gnocchi service expects to have a pool called "metrics".
This change adds "metrics" to the list of `openstack_pools` and
creates a corresponding key. It is only run if the user sets
`openstack_config: true`.
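Conceptually, the change boils down to a `group_vars` fragment like the following (the pool variable names and the pg_num expression are assumptions for illustration):

```yaml
openstack_gnocchi_pool:
  name: metrics
  pg_num: "{{ osd_pool_default_pg_num }}"

openstack_pools:
  - "{{ openstack_glance_pool }}"
  - "{{ openstack_cinder_pool }}"
  - "{{ openstack_gnocchi_pool }}"
```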
Peter Jenkins [Wed, 14 Jun 2017 07:44:13 +0000 (10:44 +0300)]
Bluestore: Omit "osd mkfs type" etc from ceph.conf
Remove "osd mkfs type" and the other pre-Bluestore parameters from the
generated ceph.conf so that disk activation on OSDs will work. The
current default xfs config results in a failed deployment and
incorrect partition metadata.
Sébastien Han [Mon, 12 Jun 2017 12:38:10 +0000 (14:38 +0200)]
ceph-mon: fix get rbd size hanging
For a newly created cluster, the command `ceph --cluster {{ cluster }} osd
pool get rbd size` does not respond properly.
We only want to check whether the rbd pool exists, so we now use an
`ls | grep` approach.
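A sketch of the `ls | grep` check, assuming `ceph osd pool ls` prints one pool name per line (the task wording and register name are illustrative):

```yaml
- name: check if the rbd pool exists
  shell: ceph --cluster {{ cluster }} osd pool ls | grep -q '^rbd$'
  register: rbd_pool_exists
  failed_when: false
  changed_when: false
```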
Closes: https://github.com/ceph/ceph-ansible/issues/1547
Signed-off-by: Sébastien Han <seb@redhat.com>
Common: Add a default for ceph_docker_on_openstack
Add a default value for `ceph_docker_on_openstack` to avoid a
conditional check error on the `pause after docker install before starting`
task in `roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml`.
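The fix amounts to a one-line default, e.g. in `roles/ceph-docker-common/defaults/main.yml` (the exact file location is an assumption):

```yaml
ceph_docker_on_openstack: false
```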
Andrew Schoen [Tue, 6 Jun 2017 14:04:58 +0000 (09:04 -0500)]
remove ceph-iscsi-gw play from site.yml.sample
We ship ceph-iscsi-gw in a separate repo downstream and do not package
it with ceph-ansible. Including the play for ceph-iscsi-gw in
site.yml.sample makes the playbook fail when using the downstream
packages.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1454945
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
Andrew Schoen [Mon, 5 Jun 2017 15:51:47 +0000 (10:51 -0500)]
ceph-mon: fix support for ipv6 on containerized mons
The fact `['ansible_$interface']['ipv4']` is a dictionary, whereas
`['ansible_$interface']['ipv6']` is a list. If we use
`ansible_default_ipv6|ipv4`, it is always a dictionary, which allows us to
get the IPv6 and IPv4 address without adding more complexity to the
template.
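For reference, the shapes of these facts look roughly like this (interface name and addresses are placeholders):

```yaml
# ansible_eth0.ipv4    -> a dict: {"address": "192.0.2.10", ...}
# ansible_eth0.ipv6    -> a list: [{"address": "2001:db8::10", ...}, ...]
# ansible_default_ipv4 -> a dict: {"address": "192.0.2.10", ...}
# ansible_default_ipv6 -> a dict: {"address": "2001:db8::10", ...}
```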
Andrew Schoen [Fri, 2 Jun 2017 12:42:34 +0000 (07:42 -0500)]
purge-docker-cluster: include ceph_docker_registry
We need to include ceph_docker_registry when removing containers/images
because if we don't, it will assume docker.io, which is not always where
the image originated from, causing the playbook to fail.
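A hedged sketch of the kind of purge task that needs the registry prefix (module parameters follow Ansible's `docker_image` module; the variable names are ceph-ansible's):

```yaml
- name: remove ceph container image
  docker_image:
    name: "{{ ceph_docker_registry }}/{{ ceph_docker_image }}"
    tag: "{{ ceph_docker_image_tag }}"
    state: absent
```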
For some reason we changed the check of PGs, but it appears this could be
dangerous because the current check might be satisfied as long as a single
PG is active+clean.
Since `-e CEPH_DAEMON=OSD_CEPH_DISK_ACTIVATE` is already hardcoded in
`ceph-osd-run.sh.j2`, there is no need to also add it as a default value
in the defaults vars.