fs/proc/task_mmu: reduce scope of lazy mmu region
author	Ryan Roberts <ryan.roberts@arm.com>
	Mon, 3 Mar 2025 14:15:36 +0000 (14:15 +0000)
committer	Andrew Morton <akpm@linux-foundation.org>
	Tue, 4 Mar 2025 05:50:47 +0000 (21:50 -0800)
commit	4bbfac3fb9716ee315a677fa2b355c0b7d60a623
tree	c09397cf407accb015370ba4dcc9a9240d390193
parent	e3817713c2a39ea4f61db15cfc0c6370b0ad816a
fs/proc/task_mmu: reduce scope of lazy mmu region

Update the way arch_[enter|leave]_lazy_mmu_mode() is called in
pagemap_scan_pmd_entry() to follow the normal pattern of holding the ptl
for user space mappings.  As a result, the scope is reduced to only the pte
table, but that's where most of the performance win is.

While I believe there wasn't technically a bug here, the original, wider
scope made it easier to accidentally nest lazy mmu regions or, worse, to
call something like kmap() which expects an immediate-mode pte
modification but would instead have it deferred.

Link: https://lkml.kernel.org/r/20250303141542.3371656-3-ryan.roberts@arm.com
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Juergen Gross <jgross@suse.com>
Cc: Andreas Larsson <andreas@gaisler.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
fs/proc/task_mmu.c