powerpc/mm: support nested lazy_mmu sections
author    Kevin Brodsky <kevin.brodsky@arm.com>
          Mon, 8 Sep 2025 07:39:29 +0000 (08:39 +0100)
committer Andrew Morton <akpm@linux-foundation.org>
          Fri, 12 Sep 2025 00:26:03 +0000 (17:26 -0700)
The lazy_mmu API now allows nested sections to be handled by arch code:
enter() can return a flag if it is called inside another lazy_mmu section,
so that the matching call to leave() keeps any optimisation enabled.

This patch implements that new logic for powerpc: if there is an active
batch, then enter() returns LAZY_MMU_NESTED and the matching leave()
leaves batch->active set.  The preempt_{enable,disable} calls are left
untouched as they already handle nesting themselves.

TLB flushing is still done in leave() regardless of the nesting level, as
the caller may rely on it whether nesting is occurring or not.

Link: https://lkml.kernel.org/r/20250908073931.4159362-6-kevin.brodsky@arm.com
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Andreas Larsson <andreas@gaisler.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: David Hildenbrand <david@redhat.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jann Horn <jannh@google.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: levi.yun <yeoreum.yun@arm.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport (Microsoft) <rppt@kernel.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
arch/powerpc/include/asm/book3s/64/tlbflush-hash.h

index c9f1e819e567738578e4c5cab023c5fc21f4a795..e92bce2efca6ad6230cce907a67c494f154e9ee6 100644
@@ -39,9 +39,13 @@ static inline lazy_mmu_state_t arch_enter_lazy_mmu_mode(void)
         */
        preempt_disable();
        batch = this_cpu_ptr(&ppc64_tlb_batch);
-       batch->active = 1;
 
-       return LAZY_MMU_DEFAULT;
+       if (!batch->active) {
+               batch->active = 1;
+               return LAZY_MMU_DEFAULT;
+       } else {
+               return LAZY_MMU_NESTED;
+       }
 }
 
 static inline void arch_leave_lazy_mmu_mode(lazy_mmu_state_t state)
@@ -54,7 +58,10 @@ static inline void arch_leave_lazy_mmu_mode(lazy_mmu_state_t state)
 
        if (batch->index)
                __flush_tlb_pending(batch);
-       batch->active = 0;
+
+       if (state != LAZY_MMU_NESTED)
+               batch->active = 0;
+
        preempt_enable();
 }