mm/gup: local lru_add_drain() to avoid lru_add_drain_all()
author    Hugh Dickins <hughd@google.com>
          Mon, 8 Sep 2025 22:16:53 +0000 (15:16 -0700)
committer Andrew Morton <akpm@linux-foundation.org>
          Fri, 12 Sep 2025 00:23:38 +0000 (17:23 -0700)
In many cases, when collect_longterm_unpinnable_folios() does need to drain
an LRU cache to release a reference, the cache in question is on this same
CPU, and it is much more efficiently drained by a preliminary local
lru_add_drain() than by the later cross-CPU lru_add_drain_all().
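For illustration, a minimal userspace sketch of the staged drain in the patch
below; local_drain(), global_drain() and has_extra_refs() are hypothetical
stand-ins for lru_add_drain(), lru_add_drain_all() and the ref-count check,
not kernel APIs:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stubs: a cheap per-CPU drain, an expensive all-CPU
     * drain, and the "ref_count != expected + 1" unpinnable test. */
    static void local_drain(void)  { puts("drain this CPU's cache"); }
    static void global_drain(void) { puts("drain every CPU's cache"); }
    static bool has_extra_refs(int folio) { return folio % 2; }

    int main(void)
    {
            int drained = 0;   /* 0: none yet, 1: local done, 2: global done */

            for (int folio = 0; folio < 4; folio++) {
                    /* First offender: try the cheap local drain. */
                    if (drained == 0 && has_extra_refs(folio)) {
                            local_drain();
                            drained = 1;
                    }
                    /* Same folio still pinned by a cache? Pay for the
                     * cross-CPU drain, at most once per call. */
                    if (drained == 1 && has_extra_refs(folio)) {
                            global_drain();
                            drained = 2;
                    }
            }
            return 0;
    }

The point of the two stages is that the expensive cross-CPU drain is only
paid for when the cheap local drain turned out not to be enough.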

Marked for stable, to counter the increase in lru_add_drain_all()s from
"mm/gup: check ref_count instead of lru before migration".  Note for clean
backports: can take 6.16 commit a03db236aebf ("gup: optimize longterm
pin_user_pages() for large folio") first.

Link: https://lkml.kernel.org/r/66f2751f-283e-816d-9530-765db7edc465@google.com
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@kernel.org>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: Chris Li <chrisl@kernel.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Keir Fraser <keirf@google.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Li Zhe <lizhe.67@bytedance.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Shivank Garg <shivankg@amd.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wei Xu <weixugc@google.com>
Cc: Will Deacon <will@kernel.org>
Cc: yangge <yangge1116@126.com>
Cc: Yuanchu Xie <yuanchu@google.com>
Cc: Yu Zhao <yuzhao@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/gup.c

index 82aec6443c0afb011c89f80c0819fa87cf8e3aee..b47066a54f5239b373c177e694c8aa92038f3cf3 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2287,8 +2287,8 @@ static unsigned long collect_longterm_unpinnable_folios(
                struct pages_or_folios *pofs)
 {
        unsigned long collected = 0;
-       bool drain_allow = true;
        struct folio *folio;
+       int drained = 0;
        long i = 0;
 
        for (folio = pofs_get_folio(pofs, i); folio;
@@ -2307,10 +2307,17 @@ static unsigned long collect_longterm_unpinnable_folios(
                        continue;
                }
 
-               if (drain_allow && folio_ref_count(folio) !=
-                                  folio_expected_ref_count(folio) + 1) {
+               if (drained == 0 &&
+                               folio_ref_count(folio) !=
+                               folio_expected_ref_count(folio) + 1) {
+                       lru_add_drain();
+                       drained = 1;
+               }
+               if (drained == 1 &&
+                               folio_ref_count(folio) !=
+                               folio_expected_ref_count(folio) + 1) {
                        lru_add_drain_all();
-                       drain_allow = false;
+                       drained = 2;
                }
 
                if (!folio_isolate_lru(folio))