mm/vma: do not register private-anon mappings with khugepaged during mmap
author	Dev Jain <dev.jain@arm.com>
	Thu, 6 Mar 2025 06:30:37 +0000 (12:00 +0530)
committer	Andrew Morton <akpm@linux-foundation.org>
	Mon, 17 Mar 2025 00:40:24 +0000 (17:40 -0700)
We already register private-anon VMAs with khugepaged at fault time, in
do_huge_pmd_anonymous_page().  Commit "register suitable readonly file
vmas for khugepaged" moved the khugepaged registration logic from
shmem_mmap to the generic mmap path, so private-anon mappings now also
get registered at mmap time, even though the fault-time registration
already covers them.
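
For context, the fault-time registration referred to above is the
khugepaged_enter_vma() call made early in do_huge_pmd_anonymous_page()
(mm/huge_memory.c).  Roughly, as a paraphrased sketch rather than a
verbatim quote (helper names such as thp_vma_suitable_order() vary
between kernel versions):

	/* do_huge_pmd_anonymous_page(), abridged sketch */
	if (!thp_vma_suitable_order(vma, haddr, PMD_ORDER))
		return VM_FAULT_FALLBACK;
	/*
	 * The first PMD-sized anonymous fault hands the mm to khugepaged,
	 * so an extra registration at mmap time adds nothing for the
	 * private-anon case.
	 */
	khugepaged_enter_vma(vma, vma->vm_flags);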

The userspace-visible effect should be that khugepaged will
unnecessarily scan mms which haven't yet faulted anything in.  Note that
it won't actually collapse anything there, because all the PTEs are
none.
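
To make that scenario concrete, here is a hypothetical userspace
illustration (not part of the patch): a process whose only THP-eligible
mapping is an untouched private-anon region.  Before this change, the
mmap() itself registered the mm with khugepaged, so the daemon would
walk a range whose PTEs are all none.

/* demo.c: map a PMD-sized private-anon region and never touch it */
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 2UL << 20;		/* 2 MiB, one PMD on x86-64 */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/*
	 * Nothing has been faulted in, so every PTE in the range is none
	 * and khugepaged has nothing it could collapse here.
	 */
	printf("mapped %zu bytes at %p, idling\n", len, p);
	pause();
	return 0;
}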

Now that I think about it, the mm is going to have a file VMA anyway
during fork+exec, so the mm already gets registered during mmap via the
non-anon case (I *think*), which means at least one of the mmap-time
and fault-time registrations is redundant.

Make this logic specific to non-anon mappings.

Link: https://lkml.kernel.org/r/20250306063037.16299-1-dev.jain@arm.com
Fixes: 613bec092fe7 ("mm: mmap: register suitable readonly file vmas for khugepaged")
Signed-off-by: Dev Jain <dev.jain@arm.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@kernel.org>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jann Horn <jannh@google.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <yang@os.amperecomputing.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/vma.c

index 96bcb372c90e43e1ff503c5277b2492ca343272b..71ca012c616c991ccde38160c05e1828ce581168 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -2381,7 +2381,8 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
         * vma_merge_new_range() calls khugepaged_enter_vma() too, the below
         * call covers the non-merge case.
         */
-       khugepaged_enter_vma(vma, map->flags);
+       if (!vma_is_anonymous(vma))
+               khugepaged_enter_vma(vma, map->flags);
        ksm_add_vma(vma);
        *vmap = vma;
        return 0;
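
For reference, the check added above relies on vma_is_anonymous() from
include/linux/mm.h, which is essentially (paraphrased):

	static inline bool vma_is_anonymous(struct vm_area_struct *vma)
	{
		return !vma->vm_ops;	/* anon VMAs have no vm_operations */
	}

so file-backed and shmem mappings, which do set vm_ops, still get
registered with khugepaged here, while private-anon mappings are left to
the fault-time registration in do_huge_pmd_anonymous_page().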