From: Adrián Larumbe <adrian.larumbe@collabora.com>
Date: Wed, 8 Apr 2026 19:12:23 +0000 (+0100)
Subject: drm/panthor: Extend VM locked region for remap case to be a superset

drm/panthor: Extend VM locked region for remap case to be a superset

In the event of an sm_step_remap() that leads to a partial unmap of a
transparent huge page, the new locked region required by the extended
unmap might not be a superset of the original one. If it then leaves out
a portion of the initially requested region, the ensuing map will
trigger a warning.

Fixes: 8e7460eac786 ("drm/panthor: Support partial unmaps of huge pages")
Reviewed-by: Boris Brezillon
Reviewed-by: Steven Price
Reviewed-by: Liviu Dudau
Link: https://patch.msgid.link/20260408191228.537625-1-adrian.larumbe@collabora.com
Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---

diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index 41604a7aaf85..888fafc938a7 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -1638,6 +1638,25 @@ static int panthor_vm_lock_region(struct panthor_vm *vm, u64 start, u64 size)
 	    start + size <= vm->locked_region.start + vm->locked_region.size)
 		return 0;
 
+	/* sm_step_remap() may need a locked region that isn't a strict superset
+	 * of the original one because of having to extend unmap boundaries beyond
+	 * it to deal with partial unmaps of transparent huge pages. What we want
+	 * in those cases is to lock the union of both regions. The new region must
+	 * always overlap with the original one, because the upper and lower unmap
+	 * boundaries in a remap operation can only shift up or down respectively,
+	 * but never otherwise.
+	 */
+	if (vm->locked_region.size) {
+		u64 end = max(vm->locked_region.start + vm->locked_region.size,
+			      start + size);
+
+		drm_WARN_ON_ONCE(&vm->ptdev->base, (start + size <= vm->locked_region.start) ||
+				 (start >= vm->locked_region.start + vm->locked_region.size));
+
+		start = min(start, vm->locked_region.start);
+		size = end - start;
+	}
+
 	mutex_lock(&ptdev->mmu->as.slots_lock);
 	if (vm->as.id >= 0 && size) {
 		/* Lock the region that needs to be updated */
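
Note: for readers who want to see the range arithmetic in isolation, below is a
minimal user-space sketch of the union-of-regions logic this patch adds. The
names (struct region, extend_to_union) and the sample addresses are
illustrative stand-ins, not part of the driver; kernel helpers such as
min()/max() and drm_WARN_ON_ONCE() are approximated with plain comparisons and
assert().

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

struct region {
	uint64_t start;
	uint64_t size;
};

/* Grow (*start, *size) so it covers both the requested range and the
 * currently locked region, mirroring the union computed in
 * panthor_vm_lock_region() above.
 */
static void extend_to_union(const struct region *locked,
			    uint64_t *start, uint64_t *size)
{
	uint64_t locked_end = locked->start + locked->size;
	uint64_t end = *start + *size;

	if (!locked->size)
		return;

	/* A remap guarantees the two regions overlap: the lower unmap
	 * boundary only ever shifts down and the upper one only up.
	 */
	assert(end > locked->start && *start < locked_end);

	if (locked_end > end)
		end = locked_end;
	if (locked->start < *start)
		*start = locked->start;
	*size = end - *start;
}

int main(void)
{
	/* A 2 MiB region locked at 0x200000; a partial unmap of a huge
	 * page extends the request below the locked region's start.
	 */
	struct region locked = { .start = 0x200000, .size = 0x200000 };
	uint64_t start = 0x1f0000, size = 0x20000;

	extend_to_union(&locked, &start, &size);

	/* Prints: union: start=0x1f0000 size=0x210000 */
	printf("union: start=0x%llx size=0x%llx\n",
	       (unsigned long long)start, (unsigned long long)size);
	return 0;
}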