mm/mm_init: Use for_each_valid_pfn() in init_unavailable_range()
author	David Woodhouse <dwmw@amazon.co.uk>
Wed, 23 Apr 2025 06:41:15 +0000 (07:41 +0100)
committer	David Woodhouse <dwmw@amazon.co.uk>
Fri, 25 Apr 2025 22:57:53 +0000 (23:57 +0100)
Currently, memmap_init initializes pfn_hole with 0 instead of
ARCH_PFN_OFFSET. Then init_unavailable_range() starts iterating every
page from the page at address zero up to the first available page, but
it won't do anything for the pages below ARCH_PFN_OFFSET, because
pfn_valid() won't pass for them.
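
For context, on FLATMEM the generic pfn_valid() rejects everything
below ARCH_PFN_OFFSET; it looks along these lines (a sketch of the
helper in include/linux/mmzone.h, not a verbatim copy):

	static inline int pfn_valid(unsigned long pfn)
	{
		/* avoid <asm/page.h> include hell */
		unsigned long pfn_offset = ARCH_PFN_OFFSET;

		/* Only PFNs in [pfn_offset, pfn_offset + max_mapnr) exist. */
		return pfn >= pfn_offset && (pfn - pfn_offset) < max_mapnr;
	}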

If ARCH_PFN_OFFSET is very large (e.g., something like 2^64-2GiB if
the kernel is used as a library and loaded at a very high address),
that pointless iteration over the pages below ARCH_PFN_OFFSET takes a
very long time, and the kernel appears to hang at boot. (With 4KiB
pages, an offset near the top of a 64-bit address space means on the
order of 2^52 PFNs to walk.)

Use for_each_valid_pfn() to skip the pointless iterations.
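
In the FLATMEM case, such a helper can simply clamp the walk to the
range that pfn_valid() would accept in the first place. A minimal
sketch of that idea (illustrative only, not necessarily the exact
definition merged into include/linux/mmzone.h):

	/*
	 * Illustrative only: restrict the iteration to the intersection of
	 * [start_pfn, end_pfn) and [ARCH_PFN_OFFSET, ARCH_PFN_OFFSET +
	 * max_mapnr), so no time is spent on PFNs that pfn_valid() would
	 * reject anyway.
	 */
	#define for_each_valid_pfn(pfn, start_pfn, end_pfn)			  \
		for ((pfn) = max_t(unsigned long, (start_pfn), ARCH_PFN_OFFSET); \
		     (pfn) < min_t(unsigned long, (end_pfn),			  \
				   ARCH_PFN_OFFSET + max_mapnr);		  \
		     (pfn)++)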

Reported-by: Ruihan Li <lrh2000@pku.edu.cn>
Suggested-by: Mike Rapoport <rppt@kernel.org>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Tested-by: Ruihan Li <lrh2000@pku.edu.cn>
mm/mm_init.c

index 01d6005ac8e84f26d681342e8e0b2b55f9254c61..a69310a3d24ec5309a45d723c826323e17dea593 100644
@@ -845,11 +845,7 @@ static void __init init_unavailable_range(unsigned long spfn,
        unsigned long pfn;
        u64 pgcnt = 0;
 
-       for (pfn = spfn; pfn < epfn; pfn++) {
-               if (!pfn_valid(pageblock_start_pfn(pfn))) {
-                       pfn = pageblock_end_pfn(pfn) - 1;
-                       continue;
-               }
+       for_each_valid_pfn(pfn, spfn, epfn) {
                __init_single_page(pfn_to_page(pfn), pfn, zone, node);
                __SetPageReserved(pfn_to_page(pfn));
                pgcnt++;