From: Kevin Brodsky
Date: Mon, 8 Sep 2025 07:39:25 +0000 (+0100)
Subject: mm: remove arch_flush_lazy_mmu_mode()
X-Git-Url: https://www.infradead.org/git/?a=commitdiff_plain;h=6b3a99fc80ea4af4fee739527cfbb4016dcfc73a;p=users%2Fjedix%2Flinux-maple.git

mm: remove arch_flush_lazy_mmu_mode()

Patch series "Nesting support for lazy MMU mode", v2.

When the lazy MMU mode was introduced eons ago, it wasn't made clear
whether such a sequence was legal:

	arch_enter_lazy_mmu_mode()
	...
	arch_enter_lazy_mmu_mode()
	...
	arch_leave_lazy_mmu_mode()
	...
	arch_leave_lazy_mmu_mode()

It seems fair to say that nested calls to
arch_{enter,leave}_lazy_mmu_mode() were not expected, and most
architectures never explicitly supported them.

Ryan Roberts' series from March [1] attempted to prevent nesting from
ever occurring, and mostly succeeded.  Unfortunately, a corner case
(DEBUG_PAGEALLOC) may still cause nesting to occur on arm64.  Ryan
proposed [2] to address that corner case at the generic level, but this
approach received pushback; a follow-up attempt [3] tried to solve the
issue on arm64 only, but it was deemed too fragile.

Relying on lazy_mmu sections never nesting is fragile in general,
because callers of various standard mm functions cannot know whether
those functions use lazy_mmu themselves.  This series therefore
performs a U-turn and adds support for nested lazy_mmu sections, on
all architectures.

The main change enabling nesting is patch 2, following the approach
suggested by Catalin Marinas [4]: have enter() return some state and
the matching leave() take that state.  In this series, the state is
only used to handle nesting, but it could be used for other purposes,
such as restoring context modified by enter(); the proposed kpkeys
framework would be an immediate user [5].
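To illustrate the enter()/leave() pairing described above, here is a
minimal sketch of how an architecture might implement the state-based
API; the lazy_mmu_state_t type and the flag-based bookkeeping below
are hypothetical and shown for illustration only (they are not part of
this patch):

	/*
	 * Hypothetical sketch: enter() reports whether it actually
	 * enabled the mode, so that only the outermost leave()
	 * disables it.
	 */
	typedef bool lazy_mmu_state_t;

	static inline lazy_mmu_state_t arch_enter_lazy_mmu_mode(void)
	{
		/* Nested call: the mode is already active. */
		if (test_and_set_thread_flag(TIF_LAZY_MMU))
			return false;

		return true;
	}

	static inline void arch_leave_lazy_mmu_mode(lazy_mmu_state_t state)
	{
		/* Only the outermost section clears the flag. */
		if (state)
			clear_thread_flag(TIF_LAZY_MMU);
	}

A (possibly nested) section would then look like:

	lazy_mmu_state_t state = arch_enter_lazy_mmu_mode();
	...
	arch_leave_lazy_mmu_mode(state);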
This patch (of 7):

This function has only ever been used in arch/x86, so there is no need
for other architectures to implement it.  Remove it from
linux/pgtable.h and from all architectures besides x86.

The arm64 implementation is not empty, but it is only called from
arch_leave_lazy_mmu_mode(), so we can simply fold it there.

Link: https://lkml.kernel.org/r/20250908073931.4159362-1-kevin.brodsky@arm.com
Link: https://lkml.kernel.org/r/20250908073931.4159362-2-kevin.brodsky@arm.com
Link: https://lore.kernel.org/all/20250303141542.3371656-1-ryan.roberts@arm.com/ [1]
Link: https://lore.kernel.org/all/20250530140446.2387131-1-ryan.roberts@arm.com/ [2]
Link: https://lore.kernel.org/all/20250606135654.178300-1-ryan.roberts@arm.com/ [3]
Link: https://lore.kernel.org/all/aEhKSq0zVaUJkomX@arm.com/ [4]
Link: https://lore.kernel.org/linux-hardening/20250815085512.2182322-19-kevin.brodsky@arm.com/ [5]
Signed-off-by: Kevin Brodsky
Acked-by: Mike Rapoport (Microsoft)
Reviewed-by: Yeoreum Yun
Acked-by: David Hildenbrand
Cc: Alexander Gordeev
Cc: Andreas Larsson
Cc: Borislav Petkov
Cc: Boris Ostrovsky
Cc: Catalin Marinas
Cc: Christophe Leroy
Cc: David S. Miller
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: Jann Horn
Cc: Juergen Gross
Cc: levi.yun
Cc: Liam Howlett
Cc: Lorenzo Stoakes
Cc: Madhavan Srinivasan
Cc: Michael Ellerman
Cc: Michal Hocko
Cc: Nicholas Piggin
Cc: Peter Zijlstra
Cc: Ryan Roberts
Cc: Suren Baghdasaryan
Cc: Thomas Gleixner
Cc: Vlastimil Babka
Cc: Will Deacon
Signed-off-by: Andrew Morton
---

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index abd2dee416b3..728d7b6ed20a 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -101,21 +101,14 @@ static inline void arch_enter_lazy_mmu_mode(void)
 	set_thread_flag(TIF_LAZY_MMU);
 }
 
-static inline void arch_flush_lazy_mmu_mode(void)
+static inline void arch_leave_lazy_mmu_mode(void)
 {
 	if (in_interrupt())
 		return;
 
 	if (test_and_clear_thread_flag(TIF_LAZY_MMU_PENDING))
 		emit_pte_barriers();
-}
-
-static inline void arch_leave_lazy_mmu_mode(void)
-{
-	if (in_interrupt())
-		return;
-
-	arch_flush_lazy_mmu_mode();
+
 	clear_thread_flag(TIF_LAZY_MMU);
 }
 
diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
index 146287d9580f..176d7fd79eeb 100644
--- a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
@@ -55,8 +55,6 @@ static inline void arch_leave_lazy_mmu_mode(void)
 	preempt_enable();
 }
 
-#define arch_flush_lazy_mmu_mode() do {} while (0)
-
 extern void hash__tlbiel_all(unsigned int action);
 
 extern void flush_hash_page(unsigned long vpn, real_pte_t pte, int psize,
diff --git a/arch/sparc/include/asm/tlbflush_64.h b/arch/sparc/include/asm/tlbflush_64.h
index 8b8cdaa69272..cd144eb31bdd 100644
--- a/arch/sparc/include/asm/tlbflush_64.h
+++ b/arch/sparc/include/asm/tlbflush_64.h
@@ -44,7 +44,6 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 void flush_tlb_pending(void);
 void arch_enter_lazy_mmu_mode(void);
 void arch_leave_lazy_mmu_mode(void);
-#define arch_flush_lazy_mmu_mode() do {} while (0)
 
 /* Local cpu only. */
 void __flush_tlb_all(void);
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index e33df3da6980..14fd672bc9b2 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -117,7 +117,8 @@ extern pmdval_t early_pmd_flags;
 #define pte_val(x)	native_pte_val(x)
 #define __pte(x)	native_make_pte(x)
 
-#define arch_end_context_switch(prev)	do {} while(0)
+#define arch_end_context_switch(prev)	do {} while (0)
+#define arch_flush_lazy_mmu_mode()	do {} while (0)
 #endif /* CONFIG_PARAVIRT_XXL */
 
 static inline pmd_t pmd_set_flags(pmd_t pmd, pmdval_t set)
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 94249e671a7e..8d6007123cdf 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -234,7 +234,6 @@ static inline int pmd_dirty(pmd_t pmd)
 #ifndef __HAVE_ARCH_ENTER_LAZY_MMU_MODE
 #define arch_enter_lazy_mmu_mode()	do {} while (0)
 #define arch_leave_lazy_mmu_mode()	do {} while (0)
-#define arch_flush_lazy_mmu_mode()	do {} while (0)
 #endif
 
 #ifndef pte_batch_hint