mm/migrate: don't call folio_putback_active_hugetlb() on dst hugetlb folio
author	David Hildenbrand <david@redhat.com>
	Mon, 13 Jan 2025 13:16:08 +0000 (14:16 +0100)
committer	Andrew Morton <akpm@linux-foundation.org>
	Sun, 26 Jan 2025 04:22:41 +0000 (20:22 -0800)
We replaced a simple put_page() with a putback_active_hugepage() call in
commit 3aaa76e125c1 ("mm: migrate: hugetlb: putback destination hugepage
to active list"), to set the "active" flag on the dst hugetlb folio.

Nowadays, the "active" list is decoupled from the flag, which has been
renamed "migratable".

Calling "putback" on something that wasn't allocated is weird and not
future proof, especially if we might reach that path when migration failed
and we just want to free the freshly allocated hugetlb folio.

Let's simply handle the migratable flag and the active list flag in
move_hugetlb_state(), where we know that allocation succeeded and already
handle the temporary flag; use a simple folio_put() to return our
reference.

Link: https://lkml.kernel.org/r/20250113131611.2554758-4-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/hugetlb.c
mm/migrate.c

index 15a68996426500079077ef6cc9e2cde8805b5d36..4f95031eeeeacf30293f0475df73d84583bdbaaa 100644 (file)
@@ -7533,6 +7533,16 @@ void move_hugetlb_state(struct folio *old_folio, struct folio *new_folio, int re
                }
                spin_unlock_irq(&hugetlb_lock);
        }
+
+       /*
+        * Our old folio is isolated and has "migratable" cleared until it
+        * is putback. As migration succeeded, set the new folio "migratable"
+        * and add it to the active list.
+        */
+       spin_lock_irq(&hugetlb_lock);
+       folio_set_hugetlb_migratable(new_folio);
+       list_move_tail(&new_folio->lru, &(folio_hstate(new_folio))->hugepage_activelist);
+       spin_unlock_irq(&hugetlb_lock);
 }
 
 static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
index c3052877e844e68fe0f0c54f2c7a6266c1451508..2a6da67a0eda06a24375aa9c3cdf1b69acf9465d 100644 (file)
@@ -1539,14 +1539,14 @@ out:
                list_move_tail(&src->lru, ret);
 
        /*
-        * If migration was not successful and there's a freeing callback, use
-        * it.  Otherwise, put_page() will drop the reference grabbed during
-        * isolation.
+        * If migration was not successful and there's a freeing callback,
+        * return the folio to that special allocator. Otherwise, simply drop
+        * our additional reference.
         */
        if (put_new_folio)
                put_new_folio(dst, private);
        else
-               folio_putback_active_hugetlb(dst);
+               folio_put(dst);
 
        return rc;
 }