From: Hugh Dickins
Date: Mon, 8 Sep 2025 22:24:54 +0000 (-0700)
Subject: mm: lru_add_drain_all() do local lru_add_drain() first
X-Git-Url: https://www.infradead.org/git/?a=commitdiff_plain;h=663d9bc12ab5b92a63e35816b1964a9eda52b490;p=users%2Fjedix%2Flinux-maple.git

mm: lru_add_drain_all() do local lru_add_drain() first

No numbers to back this up, but it seemed obvious to me that if there
are competing lru_add_drain_all()ers, the work will be minimized if
each flushes its own local queues before locking and doing cross-CPU
drains.

Link: https://lkml.kernel.org/r/33389bf8-f79d-d4dd-b7a4-680c4aa21b23@google.com
Signed-off-by: Hugh Dickins
Acked-by: David Hildenbrand
Cc: "Aneesh Kumar K.V"
Cc: Axel Rasmussen
Cc: Chris Li
Cc: Christoph Hellwig
Cc: Jason Gunthorpe
Cc: Johannes Weiner
Cc: John Hubbard
Cc: Keir Fraser
Cc: Konstantin Khlebnikov
Cc: Li Zhe
Cc: Matthew Wilcox (Oracle)
Cc: Peter Xu
Cc: Rik van Riel
Cc: Shivank Garg
Cc: Vlastimil Babka
Cc: Wei Xu
Cc: Will Deacon
Cc: yangge
Cc: Yuanchu Xie
Cc: Yu Zhao
Signed-off-by: Andrew Morton
---

diff --git a/mm/swap.c b/mm/swap.c
index b74ebe865dd92..881e53b2877e6 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -834,6 +834,9 @@ static inline void __lru_add_drain_all(bool force_all_cpus)
 	 */
 	this_gen = smp_load_acquire(&lru_drain_gen);

+	/* It helps everyone if we do our own local drain immediately. */
+	lru_add_drain();
+
 	mutex_lock(&lock);

 	/*
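
A minimal userspace sketch of the ordering the commit message describes,
using invented names (drain_local(), drain_all(), local_batch[]) rather
than the kernel's interfaces: each contender flushes its own local batch
before taking the lock that serializes the cross-thread drain, so less
work remains for whoever ends up holding the lock.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS 4

/* Stand-in for per-CPU pending batches: one counter per thread. */
static _Atomic int local_batch[NTHREADS];
static pthread_mutex_t drain_lock = PTHREAD_MUTEX_INITIALIZER;

/* Flush only this thread's own batch; cheap, no lock needed. */
static void drain_local(int self)
{
	atomic_store(&local_batch[self], 0);
}

/* Flush every thread's batch, serialized by drain_lock. */
static void drain_all(int self)
{
	/* Do our own local drain immediately, before contending on the lock. */
	drain_local(self);

	pthread_mutex_lock(&drain_lock);
	for (int i = 0; i < NTHREADS; i++)
		atomic_store(&local_batch[i], 0);	/* cross-thread drain */
	pthread_mutex_unlock(&drain_lock);
}

static void *worker(void *arg)
{
	int self = (int)(long)arg;

	atomic_store(&local_batch[self], 1);	/* queue some local work */
	drain_all(self);
	return NULL;
}

int main(void)
{
	pthread_t t[NTHREADS];

	for (long i = 0; i < NTHREADS; i++)
		pthread_create(&t[i], NULL, worker, (void *)i);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(t[i], NULL);

	printf("all batches drained\n");
	return 0;
}

Built with cc -pthread, the toy shows the same design choice as the
patch: the pass done under drain_lock shrinks when each caller has
already emptied its own slot before contending for the lock.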