Problem:
A relocation involves a demote, which leaves a demote snapshot on both the
primary and the secondary, each with a peer UUID attached. A subsequent
promote, say on site-b, triggers removal of the primary demote group snap on
site-a. That removal in turn starts image snap removal, issuing a request to
the image replayer (idle at the time), but the request had not yet gone
through.
The next command was a --force promote on site-a. Because the image snap
removal request on site-a was already in flight, it still went through and
removed the image snaps, leaving the primary demote group snap on site-a
partially deleted with its peer UUID still attached. Note that site-a is now
primary again.
A later test then demoted site-b and requested a resync on it. But because the
primary demote snapshot lingers in this partially deleted state with its peer
UUID still attached, the resync can never go through, leaving the syncing
snaps in an incomplete state forever.
Note: no CLI command on the new primary, site-a, can delete the partially
deleted snap, because a peer UUID is attached to it.
Solution:
Do not prune any primary snapshot (including primary demote snapshots) as part
of the secondary daemon operations.
It is safe, however, to unlink peers from remote group snapshots (such as
non-primary demote snapshots); only pruning of primary demote snapshots by the
secondary daemon must be avoided.
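The guard added by this change can be sketched in isolation. The types below
are simplified, hypothetical stand-ins for the cls::rbd definitions (the real
state enum and namespace struct differ), but the decision logic matches the
diff: unlinking the peer is always allowed, while pruning is skipped whenever
the group snapshot is primary (including primary-demoted).

```cpp
#include <cassert>

// Hypothetical, simplified stand-in for the mirror snapshot state; the
// real cls::rbd enum has different spellings and more machinery.
enum class MirrorSnapshotState {
  PRIMARY,
  PRIMARY_DEMOTED,
  NON_PRIMARY,
  NON_PRIMARY_DEMOTED
};

// Simplified stand-in for cls::rbd::GroupSnapshotNamespaceMirror.
struct GroupSnapshotNamespaceMirror {
  MirrorSnapshotState state;

  // A demoted primary still counts as primary for pruning purposes.
  bool is_primary() const {
    return state == MirrorSnapshotState::PRIMARY ||
           state == MirrorSnapshotState::PRIMARY_DEMOTED;
  }
};

// The secondary daemon may unlink the peer UUID, but must never prune a
// primary (including primary-demoted) group snapshot.
bool may_prune(const GroupSnapshotNamespaceMirror& ns) {
  return !ns.is_primary();
}
```

With this predicate, a partially deleted primary demote snap can no longer be
produced by the secondary daemon; only non-primary snapshots remain prunable.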
Even if a relocation back to the initial cluster occurs in the future, the
primary demoted snapshot on the current secondary (which becomes primary
during the relocation) will not be synced again, even though it still has a
peer UUID attached. This is because the scan_for_unsynced_group_snapshots()
function uses the last local group snapshot ID to locate the corresponding
snapshot on the remote side and continues syncing from that point onward,
rather than starting from the beginning.
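That continuation behavior can be illustrated with a minimal sketch. The
helper below is hypothetical (the real scan_for_unsynced_group_snapshots()
operates on richer snapshot records), but it shows the same idea: find the
last local group snapshot ID in the remote list and sync only what follows it.

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Illustrative only: given the remote group snapshot IDs in order and the
// last local group snapshot ID, return the remote snapshots still to sync.
std::vector<std::string> unsynced_remote_snaps(
    const std::vector<std::string>& remote_snap_ids,
    const std::string& last_local_snap_id) {
  auto it = std::find(remote_snap_ids.begin(), remote_snap_ids.end(),
                      last_local_snap_id);
  if (it == remote_snap_ids.end()) {
    // No common point found; a full (re)sync starts from the beginning,
    // which is when the lingering demote snap would get synced and pruned.
    return remote_snap_ids;
  }
  // Continue from the snapshot after the match, skipping older ones such
  // as the already-present demote snapshot.
  return {std::next(it), remote_snap_ids.end()};
}
```

Because scanning resumes after the matched ID, snapshots at or before it
(including the lingering demote snap) are skipped in the normal path.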
However, in the event of a full resync request, this snapshot will be synced.
That is expected behavior and should not pose any issues; it actually helps
prune the primary demote snap on the primary more quickly later.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
}
}
mirror_group_snapshot_unlink_peer(prune_snap->id);
- // prune all the image snaps of the group snap locally
- if (prune_all_image_snapshots(prune_snap, locker)) {
+ const auto& ns = std::get<cls::rbd::GroupSnapshotNamespaceMirror>(
+ prune_snap->snapshot_namespace);
+ if (ns.is_primary() ||
+ prune_all_image_snapshots(prune_snap, locker)) {
prune_snap = nullptr;
skip_next_snap_check = false;
continue;