www.infradead.org Git - users/dwmw2/linux.git/commitdiff
mm/rmap: warn on new PTE-mapped folios in page_add_anon_rmap()
author David Hildenbrand <david@redhat.com>
Wed, 13 Sep 2023 12:51:11 +0000 (14:51 +0200)
committer Andrew Morton <akpm@linux-foundation.org>
Wed, 4 Oct 2023 17:32:27 +0000 (10:32 -0700)
If the swapin code ever decides not to use order-0 pages and instead supplies a
PTE-mapped large folio, we will have to change how we call
__folio_set_anon() -- eventually with exclusive=false and an adjusted
address.  For now, let's add a VM_WARN_ON_FOLIO() with a comment describing
the situation.

Link: https://lkml.kernel.org/r/20230913125113.313322-5-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <muchun.song@linux.dev>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/rmap.c

index 65de3832123ef2cc94579c3a2d8e643f26e4ad01..9b40c3feba3ebe1e822ef8a594c6bcb3cc01f6bb 100644 (file)
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1238,6 +1238,13 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 
        if (unlikely(!folio_test_anon(folio))) {
                VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+               /*
+                * For a PTE-mapped large folio, we only know that the single
+                * PTE is exclusive. Further, __folio_set_anon() might not get
+                * folio->index right when not given the address of the head
+                * page.
+                */
+               VM_WARN_ON_FOLIO(folio_test_large(folio) && !compound, folio);
                __folio_set_anon(folio, vma, address,
                                 !!(flags & RMAP_EXCLUSIVE));
        } else if (likely(!folio_test_ksm(folio))) {