qa/tasks/vstart_runner.py runs src/stop.sh and src/vstart.sh if --create
is passed to it. Once in a few times, vstart_runner.py hangs on running
stop.sh. Running "ps -ef | grep ceph" shows that the following command is
launched every time vstart_runner.py hangs at stop.sh -
/usr/bin/python3.9 bin/ceph -c <path-to-ceph-repo>/build/ceph.conf tell mds.* client ls
Every time an instance of vstart_runner.py hangs while executing stop.sh,
a new "ceph tell mds.* client ls" command is launched. This doesn't happen
when stop.sh is run manually. I suspect the issue lies in the interaction
between this command in stop.sh and the Python subprocess module. In any
case, a simple fix is to skip running this command when no ceph-mds
daemons are present on the system.
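To illustrate the suspected interaction, a caller driving such a command
through the subprocess module can bound it with a timeout so a hung
"ceph tell" does not block forever. This is only a sketch of an
alternative mitigation, not part of this patch; run_with_timeout is a
hypothetical helper, and sleep stands in for the real ceph binary:

```python
import subprocess

def run_with_timeout(cmd, timeout=60):
    """Run cmd and return its exit code, or None if it did not finish in time.

    Without the timeout, a hung command (like the "ceph tell mds.* client ls"
    reported above) would block the caller indefinitely.
    """
    try:
        return subprocess.run(cmd, capture_output=True, timeout=timeout).returncode
    except subprocess.TimeoutExpired:
        # The child is killed by subprocess.run before this is raised.
        return None

# Stand-in commands instead of the real ceph binary:
assert run_with_timeout(["sleep", "0.1"], timeout=5) == 0      # finishes in time
assert run_with_timeout(["sleep", "5"], timeout=0.2) is None   # simulated hang
```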
Signed-off-by: Rishabh Dave <ridave@redhat.com>
done
#Get fuse mounts of the cluster
- CEPH_FUSE_MNTS=$("${CEPH_BIN}"/ceph -c $conf_fn tell mds.* client ls 2>/dev/null | grep mount_point | tr -d '",' | awk '{print $2}')
- [ -n "$CEPH_FUSE_MNTS" ] && sudo umount -f $CEPH_FUSE_MNTS
+ num_of_ceph_mdss=$(ps -e | grep -c ' ceph-mds$')
+ if [ "$num_of_ceph_mdss" -ne 0 ]; then
+ CEPH_FUSE_MNTS=$("${CEPH_BIN}"/ceph -c $conf_fn tell mds.* client ls 2>/dev/null | grep mount_point | tr -d '",' | awk '{print $2}')
+ [ -n "$CEPH_FUSE_MNTS" ] && sudo umount -f $CEPH_FUSE_MNTS
+ fi
}
usage="usage: $0 [all] [mon] [mds] [osd] [rgw] [nfs] [--crimson] [--cephadm]\n"