mm/migrate: preserve soft dirty in remove_migration_pte()
Author:     Ralph Campbell <rcampbell@nvidia.com>
AuthorDate: Fri, 4 Sep 2020 23:36:07 +0000 (16:36 -0700)
Committer:  Linus Torvalds <torvalds@linux-foundation.org>
CommitDate: Sat, 5 Sep 2020 19:14:30 +0000 (12:14 -0700)
The code to remove a migration PTE and replace it with a device private
PTE was not copying the soft dirty bit from the migration entry.  This
could lead to page contents not being marked dirty when faulting the page
back from device private memory.

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Bharata B Rao <bharata@linux.ibm.com>
Link: https://lkml.kernel.org/r/20200831212222.22409-3-rcampbell@nvidia.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
diff --git a/mm/migrate.c b/mm/migrate.c
index 1d791d4207254213fb408b104f4abd5c6dfda44f..941b89383cf3dec58a2fcfa7cca09559ddb4fc04 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -249,6 +249,8 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
                if (unlikely(is_device_private_page(new))) {
                        entry = make_device_private_entry(new, pte_write(pte));
                        pte = swp_entry_to_pte(entry);
+                       if (pte_swp_soft_dirty(*pvmw.pte))
+                               pte = pte_swp_mksoft_dirty(pte);
                        if (pte_swp_uffd_wp(*pvmw.pte))
                                pte = pte_swp_mkuffd_wp(pte);
                }