This ticket suggests that (1) the root cause is an orphaned exec (due to,
e.g., ssh dropping or a timeout) that corrupts the container state, (2) -f
may sometimes be needed to recover, and (3) newer podman versions fix it.
https://github.com/containers/libpod/issues/3226
Way back in
26f9fe54cb635cbcd8f74849d6fa3528cdf5d755 we found that passing
-f on the first attempt was a Bad Idea, so we'd rather not rely on it here.
Instead, just avoid triggering the bug.
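
For reference, a minimal sketch of the failure mode outside the test (the
container name and image below are illustrative, not part of the test):

    # start a throwaway container (name/image are stand-ins)
    podman run -d --name cephadm-bug-demo docker.io/library/alpine sleep 3600
    # kill the exec before it finishes; older podman can leave the exec session orphaned
    timeout 1 podman exec cephadm-bug-demo sleep 10
    # with podman 1.6.2 the container state can then be wedged; -f may be needed to clean up
    podman rm -f cephadm-bug-demo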
Signed-off-by: Sage Weil <sage@redhat.com>
$CEPHADM enter --fsid $FSID --name mon.a -- pidof ceph-mon
expect_false $CEPHADM enter --fsid $FSID --name mgr.x -- pidof ceph-mon
$CEPHADM enter --fsid $FSID --name mgr.x -- pidof ceph-mgr
-expect_false $CEPHADM --timeout 1 enter --fsid $FSID --name mon.a -- sleep 10
+# this triggers a bug in older versions of podman, including Ubuntu 18.04's 1.6.2
+#expect_false $CEPHADM --timeout 1 enter --fsid $FSID --name mon.a -- sleep 10
$CEPHADM --timeout 10 enter --fsid $FSID --name mon.a -- sleep 1
## ceph-volume