From: Wei Yang
Date: Tue, 14 Oct 2025 13:46:05 +0000 (+0000)
Subject: mm/huge_memory: optimize old_order derivation during folio splitting
X-Git-Url: https://www.infradead.org/git/?a=commitdiff_plain;h=add4b4da03e925acdb2d281c9bf5882163b5d28b;p=users%2Fjedix%2Flinux-maple.git

mm/huge_memory: optimize old_order derivation during folio splitting

Folio splitting requires both the folio's original order (@old_order) and
the new target order (@split_order).  In the current implementation,
@old_order is repeatedly retrieved with folio_order().  However, on every
iteration after the first, the folio being split is the result of the
previous split, so its order is already known: it equals the previous
iteration's @split_order.

Optimize the logic:

* Instead of calling folio_order(), set @old_order directly to the value
  of @split_order from the previous iteration.

* Use the initial @split_order directly (it was previously held in a
  separate @start_order variable) and drop the now-redundant @start_order.

This avoids unnecessary folio_order() calls and simplifies the loop setup.

Link: https://lkml.kernel.org/r/20251014134606.22543-5-richard.weiyang@gmail.com
Signed-off-by: Wei Yang
Cc: Zi Yan
Cc: Baolin Wang
Cc: Barry Song
Cc: David Hildenbrand
Cc: Dev Jain
Cc: Lance Yang
Cc: Liam Howlett
Cc: Lorenzo Stoakes
Cc: Mariano Pache
Cc: Ryan Roberts
Signed-off-by: Andrew Morton
---

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 59ca7924dbfb..cf9a6c505b33 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3528,8 +3528,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 		struct address_space *mapping, bool uniform_split)
 {
 	bool is_anon = folio_test_anon(folio);
-	int order = folio_order(folio);
-	int start_order = uniform_split ? new_order : order - 1;
+	int old_order = folio_order(folio);
 	int split_order;
 
 	folio_clear_has_hwpoisoned(folio);
@@ -3538,10 +3537,9 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 	 * split to new_order one order at a time. For uniform split,
 	 * folio is split to new_order directly.
 	 */
-	for (split_order = start_order;
+	for (split_order = uniform_split ? new_order : old_order - 1;
 	     split_order >= new_order; split_order--) {
-		int old_order = folio_order(folio);
 		int new_folios = 1UL << (old_order - split_order);
 
 		/* order-1 anonymous folio is not supported */
@@ -3576,6 +3574,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 			mod_mthp_stat(split_order, MTHP_STAT_NR_ANON, new_folios);
 		}
 		folio = page_folio(split_at);
+		old_order = split_order;
 	}
 
 	return 0;
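
For reference, here is a minimal userspace sketch of the loop-carried
order pattern the patch applies. split_loop() and the printf output are
illustrative stand-ins, not kernel code; only the loop structure mirrors
__split_unmapped_folio():

	#include <stdio.h>

	/*
	 * Mimics the patched loop: the current order is queried once up
	 * front (folio_order() in the kernel) and then carried from one
	 * iteration to the next instead of being re-derived each time.
	 */
	static void split_loop(int initial_order, int new_order, int uniform_split)
	{
		int old_order = initial_order;	/* known once, up front */
		int split_order;

		for (split_order = uniform_split ? new_order : old_order - 1;
		     split_order >= new_order; split_order--) {
			int new_folios = 1 << (old_order - split_order);

			printf("split order-%d -> %d folio(s) of order-%d\n",
			       old_order, new_folios, split_order);

			/*
			 * The piece we keep splitting is a result of this
			 * split, so its order is exactly split_order.
			 */
			old_order = split_order;
		}
	}

	int main(void)
	{
		split_loop(4, 1, 0);	/* non-uniform: one order at a time */
		split_loop(4, 1, 1);	/* uniform: straight to new_order */
		return 0;
	}

The non-uniform run prints 4 -> 3 -> 2 -> 1 with two new folios per step,
while the uniform run goes from order 4 to eight order-1 folios in one
step; in both cases the next iteration's old order equals the previous
@split_order, which is the invariant that makes the repeated
folio_order() calls redundant.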