fs/proc/task_mmu: remove per-page mapcount dependency for "mapmax" (CONFIG_NO_PAGE_MAPCOUNT)
author     David Hildenbrand <david@redhat.com>
           Mon, 3 Mar 2025 16:30:11 +0000 (17:30 +0100)
committer  Andrew Morton <akpm@linux-foundation.org>
           Tue, 4 Mar 2025 05:50:46 +0000 (21:50 -0800)
Let's implement an alternative when per-page mapcounts in large folios are
no longer maintained -- soon with CONFIG_NO_PAGE_MAPCOUNT.

For calculating "mapmax", we now use the average per-page mapcount in a
large folio instead of the precise per-page mapcount.

For hugetlb folios and folios that are not partially mapped into MMs,
there is no change.
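
As an illustration: if a 512-page THP is fully mapped by one process and
a single page of it is additionally mapped by a second process, the
precise mapcount of that page is 2, while the average across the folio
is 513/512, which rounds to 1.  Below is a minimal sketch of such
averaging using only generic folio helpers (folio_mapcount(),
folio_nr_pages(), folio_test_large()); the in-tree
folio_average_page_mapcount() may differ in detail, e.g., in how entire
mappings are handled:

	/* Sketch only: average number of mappings per page in a folio. */
	static inline int average_page_mapcount_sketch(struct folio *folio)
	{
		const unsigned long nr = folio_nr_pages(folio);

		if (!folio_test_large(folio))
			return folio_mapcount(folio);
		/* Round to nearest when spreading the total over all pages. */
		return (folio_mapcount(folio) + nr / 2) / nr;
	}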

Likely, this change will not matter much in practice, and an alternative
might be to simply remove this stat with CONFIG_NO_PAGE_MAPCOUNT.
However, there might be value to it, so let's keep it like that and
document the behavior.

Link: https://lkml.kernel.org/r/20250303163014.1128035-19-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jann Horn <jannh@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Lance Yang <ioworker0@gmail.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michal Koutný <mkoutny@suse.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Zefan Li <lizefan.x@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Documentation/filesystems/proc.rst
fs/proc/task_mmu.c

index 09f0aed5a08bafcca7643ef26c047d14eb414df5..1aa190017f79661ba10ca0e948ecdcdb7c900cf3 100644 (file)
@@ -686,6 +686,11 @@ Where:
 node locality page counters (N0 == node0, N1 == node1, ...) and the kernel page
 size, in KB, that is backing the mapping up.
 
+Note that some kernel configurations do not track the precise number of times
+a page that is part of a larger allocation (e.g., THP) is mapped. In these
+configurations, "mapmax" might correspond to the average number of mappings
+per page in such a larger allocation instead.
+
 1.2 Kernel data
 ---------------
 
index f937c2df7b3f4021ba3030859fece62a790651b3..5043376ebd4763898170c73e9a46455e53ad9f2d 100644 (file)
@@ -2866,7 +2866,12 @@ static void gather_stats(struct page *page, struct numa_maps *md, int pte_dirty,
                        unsigned long nr_pages)
 {
        struct folio *folio = page_folio(page);
-       int count = folio_precise_page_mapcount(folio, page);
+       int count;
+
+       if (IS_ENABLED(CONFIG_PAGE_MAPCOUNT))
+               count = folio_precise_page_mapcount(folio, page);
+       else
+               count = folio_average_page_mapcount(folio);
 
        md->pages += nr_pages;
        if (pte_dirty || folio_test_dirty(folio))
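
(Not shown in this hunk: further down in gather_stats(), the computed
count is folded into the running maximum that show_numa_map() reports
as "mapmax", roughly:

	if (count > md->mapcount_max)
		md->mapcount_max = count;

so with CONFIG_NO_PAGE_MAPCOUNT the reported value becomes a maximum
over per-folio averages rather than over precise per-page mapcounts.)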