fs/proc/task_mmu: reduce scope of lazy mmu region
author Ryan Roberts <ryan.roberts@arm.com>
Mon, 3 Mar 2025 14:15:36 +0000 (14:15 +0000)
committer Andrew Morton <akpm@linux-foundation.org>
Mon, 17 Mar 2025 07:05:34 +0000 (00:05 -0700)
commit ad449d856bd7e7461ac740abb9b5d10a824e0166
tree 1b21eb0fd7687d196c4f9052da1c60a16847fabe
parent 691ee97e1a9de0cdb3efb893c1f180e3f4a35e32
fs/proc/task_mmu: reduce scope of lazy mmu region

Update the way arch_[enter|leave]_lazy_mmu_mode() is called in
pagemap_scan_pmd_entry() to follow the normal pattern of only entering
lazy mmu mode while the ptl is held for the user space mapping.  As a
result the scope is reduced to just the pte table, but that is where most
of the performance win is.

While I believe there wasn't technically a bug here, the wider scope made
it easier to accidentally nest lazy mmu regions or, worse, to call
something like kmap() which expects an immediate-mode pte modification
that would instead end up deferred.

Link: https://lkml.kernel.org/r/20250303141542.3371656-3-ryan.roberts@arm.com
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Juergen Gross <jgross@suse.com>
Cc: Andreas Larsson <andreas@gaisler.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
fs/proc/task_mmu.c