From: Zi Yan
Date: Fri, 28 Feb 2025 17:49:53 +0000 (-0500)
Subject: mm/migrate: fix shmem xarray update during migration
X-Git-Url: https://www.infradead.org/git/?a=commitdiff_plain;h=9ebb5bf9c7300a5aee70b692be9d2e2660a5539d;p=users%2Fjedix%2Flinux-maple.git

mm/migrate: fix shmem xarray update during migration

Pagecache uses multi-index entries for large folios, and so does shmem.
Only the swap cache still stores multiple entries for a single large
folio.  Commit fc346d0a70a1 ("mm: migrate high-order folios in swap
cache correctly") fixed the swap cache but got shmem wrong by storing
multiple entries for a large shmem folio.  This results in a soft
lockup, as reported by Liu Shixin.

Fix it by storing a single entry for a shmem folio.

Link: https://lkml.kernel.org/r/20250228174953.2222831-1-ziy@nvidia.com
Fixes: fc346d0a70a1 ("mm: migrate high-order folios in swap cache correctly")
Signed-off-by: Zi Yan
Reported-by: Liu Shixin
Closes: https://lore.kernel.org/all/28546fb4-5210-bf75-16d6-43e1f8646080@huawei.com/
Reviewed-by: Shivank Garg
Reviewed-by: Baolin Wang
Cc: Barry Song
Cc: Charan Teja Kalla
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: Kefeng Wang
Cc: Lance Yang
Cc: Matthew Wilcox (Oracle)
Cc: Ryan Roberts
Cc:
Signed-off-by: Andrew Morton
---

diff --git a/mm/migrate.c b/mm/migrate.c
index fb19a18892c89..198c7c463aa53 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -524,7 +524,11 @@ static int __folio_migrate_mapping(struct address_space *mapping,
 			folio_set_swapcache(newfolio);
 			newfolio->private = folio_get_private(folio);
 		}
-		entries = nr;
+		/* shmem uses high-order entry */
+		if (!folio_test_anon(folio))
+			entries = 1;
+		else
+			entries = nr;
 	} else {
 		VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
 		entries = 1;