From: Li Zhe
Date: Fri, 19 Sep 2025 09:23:53 +0000 (+0800)
Subject: hugetlb: increase number of reserved hugepages via cmdline
X-Git-Url: https://www.infradead.org/git/?a=commitdiff_plain;h=3dfd02c900379d209ac9dcac24b4a61d8478842a;p=users%2Fhch%2Fmisc.git

hugetlb: increase number of reserved hugepages via cmdline

Commit 79359d6d24df ("hugetlb: perform vmemmap optimization on a list of
pages") batches the submission of HugeTLB vmemmap optimization (HVO)
during hugepage reservation.  With HVO enabled, hugepages obtained from
the buddy allocator are not submitted for optimization, and their
struct-page memory is therefore not released, until the entire
reservation request has been satisfied.  As a result, struct-page memory
that could have been freed in the course of the allocation is not
available for reuse by the ongoing reservation, artificially limiting
the number of huge pages that can ultimately be provided.

Since commit b1222550fbf7 ("mm/hugetlb: do pre-HVO for bootmem allocated
pages") already applies early HVO to bootmem-allocated huge pages, this
patch extends the same benefit to non-bootmem pages by incrementally
submitting them for HVO as they are allocated, returning struct-page
memory to the buddy allocator as the reservation proceeds.

The change raises the maximum 2 MiB hugepage reservation from just under
376 GB to more than 381 GB on a 384 GB x86 VM.
Link: https://lkml.kernel.org/r/20250919092353.41671-1-lizhe.67@bytedance.com
Signed-off-by: Li Zhe
Cc: David Hildenbrand
Cc: Muchun Song
Cc: Oscar Salvador
Signed-off-by: Andrew Morton
---

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d2471a0b6002..1f65609cb724 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3538,7 +3538,14 @@ static void __init hugetlb_pages_alloc_boot_node(unsigned long start, unsigned l
 	nodes_clear(node_alloc_noretry);
 	for (i = 0; i < num; ++i) {
-		struct folio *folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
+		struct folio *folio;
+
+		if (hugetlb_vmemmap_optimizable_size(h) &&
+		    (si_mem_available() == 0) && !list_empty(&folio_list)) {
+			prep_and_add_allocated_folios(h, &folio_list);
+			INIT_LIST_HEAD(&folio_list);
+		}
+		folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
 				&node_alloc_noretry, &next_node);
 		if (!folio)
 			break;