In do_migrate_range(), the hwpoisoned folio may be a large folio, which
can't be handled by unmap_poisoned_folio().
I can reproduce this issue in qemu after adding a delay in memory_failure():

BUG: kernel NULL pointer dereference, address: 0000000000000000
Workqueue: kacpi_hotplug acpi_hotplug_work_fn
RIP: 0010:try_to_unmap_one+0x16a/0xfc0
<TASK>
rmap_walk_anon+0xda/0x1f0
try_to_unmap+0x78/0x80
? __pfx_try_to_unmap_one+0x10/0x10
? __pfx_folio_not_mapped+0x10/0x10
? __pfx_folio_lock_anon_vma_read+0x10/0x10
unmap_poisoned_folio+0x60/0x140
do_migrate_range+0x4d1/0x600
? slab_memory_callback+0x6a/0x190
? notifier_call_chain+0x56/0xb0
offline_pages+0x3e6/0x460
memory_subsys_offline+0x130/0x1f0
device_offline+0xba/0x110
acpi_bus_offline+0xb7/0x130
acpi_scan_hot_remove+0x77/0x290
acpi_device_hotplug+0x1e0/0x240
acpi_hotplug_work_fn+0x1a/0x30
process_one_work+0x186/0x340
In this case, just make offline_pages() fail.
Also, do_migrate_range() may be called between memory_failure() setting
the hwpoison flag and isolating the folio from the LRU, so remove the
WARN_ON().
Likewise, the other callers of unmap_poisoned_folio() only invoke it after
the folio has been isolated, so follow the same convention in
do_migrate_range().
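Putting the pieces together, the hwpoison handling in do_migrate_range()
is intended to end up roughly as below.  This is a consolidated sketch of
the hunks quoted later, not a verbatim copy; in particular the
folio_unlock() pairing is an assumption about the surrounding code that is
not visible in the quoted context:

	if (folio_contain_hwpoisoned_page(folio)) {
		/* unmap_poisoned_folio() can't handle non-hugetlb large folios */
		if (folio_test_large(folio) && !folio_test_hugetlb(folio))
			goto err_out;
		/* unmap only once the folio has been isolated from the LRU */
		if (folio_test_lru(folio) && !folio_isolate_lru(folio))
			goto err_out;
		if (folio_mapped(folio)) {
			folio_lock(folio);
			unmap_poisoned_folio(folio, pfn, false);
			folio_unlock(folio);	/* assumed pairing with folio_lock() */
		}
	}
	...
	return 0;
err_out:
	folio_put(folio);		/* drop the reference held on this folio */
	putback_movable_pages(&source);	/* return already-isolated folios */
	return -EBUSY;

With that, a hwpoisoned large folio never reaches unmap_poisoned_folio();
do_migrate_range() returns -EBUSY instead and offline_pages() gives up on
the range rather than crashing in try_to_unmap_one().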
Link: https://lkml.kernel.org/r/20250627125747.3094074-3-tujinjiang@huawei.com
Fixes: b15c87263a69 ("hwpoison, memory_hotplug: allow hwpoisoned pages to be offlined")
Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
 	return 0;
 }

-static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
+static int do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 {
 	struct folio *folio;
 	unsigned long pfn;

 		pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;

 		if (folio_contain_hwpoisoned_page(folio)) {
-			if (WARN_ON(folio_test_lru(folio)))
-				folio_isolate_lru(folio);
+			if (folio_test_large(folio) && !folio_test_hugetlb(folio))
+				goto err_out;
+			if (folio_test_lru(folio) && !folio_isolate_lru(folio))
+				goto err_out;
 			if (folio_mapped(folio)) {
 				folio_lock(folio);
 				unmap_poisoned_folio(folio, pfn, false);

 			putback_movable_pages(&source);
 		}
 	}
+	return 0;
+err_out:
+	folio_put(folio);
+	putback_movable_pages(&source);
+	return -EBUSY;
 }

 static int __init cmdline_parse_movable_node(char *p)

 			ret = scan_movable_pages(pfn, end_pfn, &pfn);
 			if (!ret) {
-				/*
-				 * TODO: fatal migration failures should bail
-				 * out
-				 */
-				do_migrate_range(pfn, end_pfn);
+				ret = do_migrate_range(pfn, end_pfn);
+				if (ret)
+					break;
 			}
 		} while (!ret);