From: Kevin Brodsky
Date: Mon, 8 Sep 2025 07:39:31 +0000 (+0100)
Subject: mm: update lazy_mmu documentation
X-Git-Url: https://www.infradead.org/git/?a=commitdiff_plain;h=fb58f57d14001858cd0bef705435b8d37684b3ec;p=users%2Fjedix%2Flinux-maple.git

mm: update lazy_mmu documentation

We now support nested lazy_mmu sections on all architectures implementing
the API. Update the API comment accordingly.

Link: https://lkml.kernel.org/r/20250908073931.4159362-8-kevin.brodsky@arm.com
Signed-off-by: Kevin Brodsky
Acked-by: Mike Rapoport (Microsoft)
Reviewed-by: Yeoreum Yun
Cc: Alexander Gordeev
Cc: Andreas Larsson
Cc: Borislav Petkov
Cc: Boris Ostrovsky
Cc: Catalin Marinas
Cc: Christophe Leroy
Cc: David Hildenbrand
Cc: David S. Miller
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: Jann Horn
Cc: Juergen Gross
Cc: Liam Howlett
Cc: Lorenzo Stoakes
Cc: Madhavan Srinivasan
Cc: Michael Ellerman
Cc: Michal Hocko
Cc: Nicholas Piggin
Cc: Peter Zijlstra
Cc: Ryan Roberts
Cc: Suren Baghdasaryan
Cc: Thomas Gleixner
Cc: Vlastimil Babka
Cc: Will Deacon
Signed-off-by: Andrew Morton
---

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index df0eb898b3fc7..85cd1fdb914fd 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -228,8 +228,18 @@ static inline int pmd_dirty(pmd_t pmd)
  * of the lazy mode. So the implementation must assume preemption may be enabled
  * and cpu migration is possible; it must take steps to be robust against this.
  * (In practice, for user PTE updates, the appropriate page table lock(s) are
- * held, but for kernel PTE updates, no lock is held). Nesting is not permitted
- * and the mode cannot be used in interrupt context.
+ * held, but for kernel PTE updates, no lock is held). The mode cannot be used
+ * in interrupt context.
+ *
+ * Calls may be nested: an arch_{enter,leave}_lazy_mmu_mode() pair may be called
+ * while the lazy MMU mode has already been enabled. An implementation should
+ * handle this using the state returned by enter() and taken by the matching
+ * leave() call; the LAZY_MMU_{DEFAULT,NESTED} flags can be used to indicate
+ * whether this enter/leave pair is nested inside another or not. (It is up to
+ * the implementation to track whether the lazy MMU mode is enabled at any point
+ * in time.) The expectation is that leave() will flush any batched state
+ * unconditionally, but only leave the lazy MMU mode if the passed state is not
+ * LAZY_MMU_NESTED.
  */
 #ifndef __HAVE_ARCH_ENTER_LAZY_MMU_MODE
 typedef int lazy_mmu_state_t;
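
For illustration, here is a minimal, self-contained user-space sketch of the
enter/leave contract described in the comment above. The flag
in_lazy_mmu_mode, the helper flush_batched_updates() and the numeric values
chosen for LAZY_MMU_DEFAULT/LAZY_MMU_NESTED are assumptions made for this
sketch only; they are not taken from any architecture's actual implementation,
which would typically keep this state per CPU or per task and batch real PTE
updates.

/*
 * Sketch of the nested lazy MMU enter/leave contract. Names such as
 * in_lazy_mmu_mode and flush_batched_updates() are illustrative only.
 */
#include <stdbool.h>
#include <stdio.h>

typedef int lazy_mmu_state_t;

#define LAZY_MMU_DEFAULT        0
#define LAZY_MMU_NESTED         1

/* Stands in for the per-CPU/per-task state a real implementation keeps. */
static bool in_lazy_mmu_mode;

/* Stands in for flushing whatever PTE updates were batched so far. */
static void flush_batched_updates(void)
{
        printf("flushing batched updates\n");
}

static lazy_mmu_state_t arch_enter_lazy_mmu_mode(void)
{
        if (in_lazy_mmu_mode)
                return LAZY_MMU_NESTED; /* enabled by an outer section */

        in_lazy_mmu_mode = true;
        return LAZY_MMU_DEFAULT;
}

static void arch_leave_lazy_mmu_mode(lazy_mmu_state_t state)
{
        /* Flush unconditionally, even when leaving a nested section... */
        flush_batched_updates();

        /* ...but only the outermost leave() actually exits the mode. */
        if (state != LAZY_MMU_NESTED)
                in_lazy_mmu_mode = false;
}

int main(void)
{
        lazy_mmu_state_t outer, inner;

        outer = arch_enter_lazy_mmu_mode();     /* returns LAZY_MMU_DEFAULT */
        inner = arch_enter_lazy_mmu_mode();     /* returns LAZY_MMU_NESTED  */
        arch_leave_lazy_mmu_mode(inner);        /* flushes, stays in lazy mode */
        arch_leave_lazy_mmu_mode(outer);        /* flushes, leaves lazy mode   */

        printf("in lazy mode at exit: %d\n", in_lazy_mmu_mode);
        return 0;
}

With this pattern the inner leave() still flushes any batched updates, so no
stale state survives past any leave() call, while only the outermost
enter/leave pair toggles the mode itself.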