From: David Woodhouse
Date: Wed, 23 Apr 2025 06:41:15 +0000 (+0100)
Subject: mm/mm_init: Use for_each_valid_pfn() in init_unavailable_range()
X-Git-Url: https://www.infradead.org/git/?a=commitdiff_plain;h=refs%2Fheads%2Ffor_each_valid_pfn-4;p=users%2Fdwmw2%2Flinux.git

mm/mm_init: Use for_each_valid_pfn() in init_unavailable_range()

Currently, memmap_init initializes pfn_hole with 0 instead of
ARCH_PFN_OFFSET. Then init_unavailable_range will start iterating each
page from the page at address zero to the first available page, but it
won't do anything for pages below ARCH_PFN_OFFSET because pfn_valid
won't pass.

If ARCH_PFN_OFFSET is very large (e.g., something like 2^64 - 2 GiB if
the kernel is used as a library and loaded at a very high address), the
pointless iteration for pages below ARCH_PFN_OFFSET will take a very
long time, and the kernel will look stuck at boot time.

Use for_each_valid_pfn() to skip the pointless iterations.

Reported-by: Ruihan Li
Suggested-by: Mike Rapoport
Signed-off-by: David Woodhouse
Reviewed-by: Mike Rapoport (Microsoft)
Tested-by: Ruihan Li
---

diff --git a/mm/mm_init.c b/mm/mm_init.c
index 41884f2155c47..0d1a4546825cf 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -845,11 +845,7 @@ static void __init init_unavailable_range(unsigned long spfn,
 	unsigned long pfn;
 	u64 pgcnt = 0;
 
-	for (pfn = spfn; pfn < epfn; pfn++) {
-		if (!pfn_valid(pageblock_start_pfn(pfn))) {
-			pfn = pageblock_end_pfn(pfn) - 1;
-			continue;
-		}
+	for_each_valid_pfn(pfn, spfn, epfn) {
 		__init_single_page(pfn_to_page(pfn), pfn, zone, node);
 		__SetPageReserved(pfn_to_page(pfn));
 		pgcnt++;
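
Note for readers unfamiliar with the new iterator: as a rough,
semantics-only sketch (an assumption for illustration, not the
kernel's actual optimized definition), for_each_valid_pfn() behaves
like a pfn_valid()-filtered loop. A trivial fallback with the same
semantics could be written as:

/*
 * Semantics-only sketch, NOT the kernel's real macro: visit exactly
 * those pfns in [start_pfn, end_pfn) for which pfn_valid() is true.
 * The empty-braces/else trick lets the caller's { ... } block attach
 * to the else branch, so invalid pfns simply fall through.
 */
#define for_each_valid_pfn(pfn, start_pfn, end_pfn)		\
	for ((pfn) = (start_pfn); (pfn) < (end_pfn); (pfn)++)	\
		if (!pfn_valid(pfn)) {				\
			/* invalid pfn: skip to the next one */	\
		} else

This trivial form still walks every pfn one at a time, so it would not
fix the slow boot described above. The point of the real iterator is
that configurations which can identify invalid ranges cheaply (e.g.
whole sparsemem sections) can provide a version that advances past
them wholesale, making the huge hole below ARCH_PFN_OFFSET cost far
less than one loop iteration per page.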