s390/mm: fix huge pte soft dirty copying
Author:     Janosch Frank <frankja@linux.ibm.com>
AuthorDate: Tue, 7 Jul 2020 13:38:54 +0000 (15:38 +0200)
Commit:     Greg Kroah-Hartman <gregkh@linuxfoundation.org>
CommitDate: Wed, 22 Jul 2020 07:22:19 +0000 (09:22 +0200)
commit 528a9539348a0234375dfaa1ca5dbbb2f8f8e8d2 upstream.

If the pmd is soft dirty, we must mark the pte as soft dirty (and not as
hardware dirty). This fixes some cases of guest migration with huge page
backings.

Cc: <stable@vger.kernel.org> # 4.8
Fixes: bc29b7ac1d9f ("s390/mm: clean up pte/pmd encoding")
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c
index e19ea9ebe960bd0444a46b3d6fe39c67267f90e7..777a4418693fb21df8c167ef4614b1f3b1c2ef96 100644
--- a/arch/s390/mm/hugetlbpage.c
+++ b/arch/s390/mm/hugetlbpage.c
@@ -117,7 +117,7 @@ static inline pte_t __rste_to_pte(unsigned long rste)
                                             _PAGE_YOUNG);
 #ifdef CONFIG_MEM_SOFT_DIRTY
                pte_val(pte) |= move_set_bit(rste, _SEGMENT_ENTRY_SOFT_DIRTY,
-                                            _PAGE_DIRTY);
+                                            _PAGE_SOFT_DIRTY);
 #endif
                pte_val(pte) |= move_set_bit(rste, _SEGMENT_ENTRY_NOEXEC,
                                             _PAGE_NOEXEC);
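
For readers outside the kernel tree, here is a minimal, self-contained sketch
of the bug pattern and the fix. The bit masks and the move_set_bit() body
below are illustrative stand-ins, not the actual s390 definitions from
asm/pgtable.h; only the shape of the bug (routing the segment-entry soft-dirty
bit to the pte's hardware dirty bit instead of its soft-dirty bit) mirrors the
one-line diff above.

#include <stdio.h>

/* Hypothetical single-bit masks; the real values live in asm/pgtable.h. */
#define _SEGMENT_ENTRY_SOFT_DIRTY	0x4000UL
#define _PAGE_DIRTY			0x0010UL
#define _PAGE_SOFT_DIRTY		0x0020UL

/* If single-bit mask "a" is set in x, return single-bit mask "b", else 0. */
static unsigned long move_set_bit(unsigned long x, unsigned long a,
				  unsigned long b)
{
	return (x & a) ? b : 0;
}

int main(void)
{
	unsigned long rste = _SEGMENT_ENTRY_SOFT_DIRTY; /* soft-dirty pmd */

	/* Before the fix: soft dirty leaked into the hardware dirty bit. */
	unsigned long buggy = move_set_bit(rste, _SEGMENT_ENTRY_SOFT_DIRTY,
					   _PAGE_DIRTY);
	/* After the fix: soft dirty stays soft dirty on the pte. */
	unsigned long fixed = move_set_bit(rste, _SEGMENT_ENTRY_SOFT_DIRTY,
					   _PAGE_SOFT_DIRTY);

	printf("buggy pte bits: %#lx (dirty set, soft dirty lost)\n", buggy);
	printf("fixed pte bits: %#lx (soft dirty preserved)\n", fixed);
	return 0;
}

Soft-dirty bits are what userspace page tracking reads to find recently
touched pages, so mapping the segment bit to _PAGE_DIRTY made such pages
invisible to that tracking, which is presumably why the commit message calls
out guest migration with huge page backings.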