Ivan Font [Fri, 26 Aug 2016 00:37:57 +0000 (17:37 -0700)]
NFS fixes
- Move mon_containerized_default_ceph_conf_with_kv config from ceph-mon
to ceph-common defaults as it's used in ceph-nfs
- Update the conditional so the ganesha config is only generated when
mon_containerized_default_ceph_conf_with_kv is false (see the sketch below)
- Revert change to store radosgw keyring using ansible_hostname on
ansible server so that ceph-nfs can find it
- Update ceph-ceph-nfs0-rgw-user container to use ansible_hostname
variable
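As a rough sketch of the first two points (variable placement from this change; the default value, task name and file paths are assumptions for illustration only):
# roles/ceph-common/defaults/main.yml -- sketch, default value assumed
mon_containerized_default_ceph_conf_with_kv: false

# ceph-nfs task -- sketch, template and destination names are illustrative
- name: generate ganesha configuration file
  template:
    src: ganesha.conf.j2
    dest: /etc/ganesha/ganesha.conf
  when: not mon_containerized_default_ceph_conf_with_kv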
Sébastien Han [Tue, 23 Aug 2016 13:35:54 +0000 (15:35 +0200)]
docker: fix more than one monitor deployment
There is no need to run the actions from
roles/ceph-mon/tasks/docker/create_configs.yml
on the first monitor only since the monitor deployment happens
**serially**.
Moreover, with Vagrant it is useful to allow auto-creation of the cluster
fsid, so that option is now enabled. If this is not desired you can still
set a fixed value, e.g. `fsid: 9c9c0448-0551-401d-b55b-e5b3a42bae42`.
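For reference, a minimal sketch of the serial monitor play in a containerized sample (host group and role names assumed to match the samples):
- hosts: mons
  serial: 1
  roles:
    - ceph-mon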
Ivan Font [Mon, 22 Aug 2016 17:42:27 +0000 (10:42 -0700)]
Restrict fact gathering to mons and update ceph.conf
- Gather facts only for mons before processing ceph-mon role serially in
containerized playbook sample
- Update the ceph.conf template so that it generates a valid ceph.conf
- Move fsal_rgw config to ceph-common, as it's shared with ceph-rgw
- Update all.docker.sample with NFS config
- Rename fsal_rgw to nfs_obj_gw and fsal_ceph to nfs_file_gw, because
the former names mean nothing to non-Ganesha developers (see the sketch below)
Signed-off-by: Daniel Gryniewicz <dang@redhat.com>
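A short sketch of the renamed toggles as they might appear in the sample variables (the values shown are illustrative, not quoted from the sample files):
# group_vars sketch -- values illustrative
nfs_file_gw: true   # was fsal_ceph: file access (CephFS) through the NFS gateway
nfs_obj_gw: true    # was fsal_rgw: object access (RGW) through the NFS gateway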
Sébastien Han [Wed, 17 Aug 2016 09:48:42 +0000 (11:48 +0200)]
create a directory for infrastructure playbooks
Since we have a couple of infrastructure-related playbooks (in addition
to the roles we use to deploy Ceph), it makes sense to keep them in a
separate directory.
Sébastien Han [Thu, 11 Aug 2016 15:20:07 +0000 (17:20 +0200)]
add shrink playbooks: mons and osds
We now have the ability to shrink a ceph cluster with the help of 2 new
playbooks. Even though large portions of the two are identical, I thought
it would make more sense to keep them separate, for several reasons:
* it is rare to remove mon(s) and osd(s)
* this remains a tricky process, so to avoid any overlap we keep things separated
For monitors, just pass the list of monitor hostnames you want to remove
from the cluster (the hostnames must be resolvable) and run the playbook
like this:
ansible-playbook shrink-cluster.yml -e mon_host=ceph-mon-01,ceph-mon-02
Are you sure you want to shrink the cluster? [no]: yes
For OSDs, just pass the list of OSD IDs you want to remove from the
cluster and run the playbook like this:
ansible-playbook shrink-cluster.yml -e osd_ids=0,2,4
Are you sure you want to shrink the cluster? [no]: yes
If you know what you're doing, you can skip the confirmation prompt by
pre-answering it with an extra variable on the command line.
Daniel Lin [Mon, 6 Jun 2016 14:22:20 +0000 (10:22 -0400)]
Allow ceph-ansible to be run on a locally built/installed Ceph
- First, install Ceph into a directory with CMake:
cmake -DCMAKE_INSTALL_LIBEXECDIR=/usr/lib -DWITH_SYSTEMD=ON -DCMAKE_INSTALL_PREFIX:PATH=/usr <ceph_src_dir> && make DESTDIR=<install_dir> install/strip
- ceph-ansible then copies over the install_dir
- The user can run rundep_installer.sh to install, from rundep, any runtime dependencies that Ceph needs onto the machine
Ivan Font [Thu, 28 Jul 2016 14:42:19 +0000 (07:42 -0700)]
Add option to enable ntp
This fixes #845 for containerized deployments. We now also mount the
/etc/localtime volume in the containers in order to synchronize the host
timezone with the container timezone.
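A minimal sketch of the timezone part, assuming the Ansible docker module of that era and illustrative image/variable names:
- name: run the ceph mon container
  docker:
    name: "ceph-mon-{{ ansible_hostname }}"
    image: "{{ ceph_mon_docker_username }}/{{ ceph_mon_docker_imagename }}"
    net: host
    volumes:
      - /etc/localtime:/etc/localtime:ro   # keep the container timezone in sync with the host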
Ken Dreyer [Tue, 2 Aug 2016 15:57:24 +0000 (09:57 -0600)]
ceph-common: client settings are for libvirt
Prior to this change, each ceph cluster node would end up with several
"qemu-client-$pid.log" files owned by root. The [client] section would
capture *all* client activity (for example the "ceph health" command,
etc), not just librbd-in-qemu.
Restrict this section to libvirt clients only so that we don't generate
these spurious log files for other Ceph client traffic.
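A hedged sketch of the resulting ceph.conf section (the exact options and log path are illustrative, not quoted from the template):
[client.libvirt]
# previously these settings lived under a global [client] section
log file = /var/log/ceph/qemu-client-$pid.log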
ceph-mon: fix the loop in `secure the cluster' task
Deployment fails when ``secure_cluster`` is false:
TASK [ceph-mon : secure the cluster]
*******************************************
fatal: [saceph-mon.vm.ceph.asheplyakov]: FAILED! => {"failed": true, "msg": "'dict object' has no attribute 'stdout_lines'"}
fatal: [saceph-mon2.vm.ceph.asheplyakov]: FAILED! => {"failed": true, "msg": "'dict object' has no attribute 'stdout_lines'"}
fatal: [saceph-mon3.vm.ceph.asheplyakov]: FAILED! => {"failed": true, "msg": "'dict object' has no attribute 'stdout_lines'"}
A conditional include evaluates all included tasks with the (additional)
conditional applied to every task [1]. Thus all tasks from `secure_cluster.yml'
are always evaluated (with an additional 'when: secure_cluster' condition).
The `secure the cluster' task iterates over ``ceph_pools.stdout_lines``
even if ``secure_cluster`` is false: in loops, Ansible applies the
conditional to every item (by design) [2]. However the `collect all the
pools' task is skipped if the very same condition evaluates to false,
which leaves ``ceph_pools`` undefined, so the `secure the cluster' task
fails. Provide a default (empty) list to avoid the problem.
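A minimal sketch of the fix, with the task body abbreviated and the flag list name treated as illustrative:
- name: secure the cluster
  command: ceph osd pool set {{ item[0] }} {{ item[1] }} true
  with_nested:
    - "{{ ceph_pools.stdout_lines | default([]) }}"   # default([]) avoids the undefined variable
    - "{{ secure_cluster_flags }}"
  when: secure_cluster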