From: Pankaj Raghav
Date: Fri, 5 Sep 2025 15:00:12 +0000 (+0200)
Subject: huge_memory: return -EINVAL in folio split functions when THP is disabled
X-Git-Url: https://www.infradead.org/git/?a=commitdiff_plain;h=fe1a50b2e39a83f90e19bc5058ba27f92854d4c1;p=users%2Fjedix%2Flinux-maple.git
huge_memory: return -EINVAL in folio split functions when THP is disabled
split_huge_page_to_list_[to_order](), split_huge_page() and
try_folio_split() return 0 on success and an error code on failure.

When THP is disabled, however, the stub versions of these functions
return 0, indicating success, even though splitting a folio is not
possible without THP and an error code should be returned instead.

Make all these functions return -EINVAL to indicate failure instead of 0.
As large folios depend on CONFIG_THP, also issue a warning, since these
functions should never be called without a large folio.
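
For illustration, a minimal, hypothetical caller sketch (not part of
this patch; shrink_large_folio() is an invented name) shows how the
return value is normally consumed.  With the old stubs such a caller
would see 0 and wrongly proceed as if the folio had been split:

  /*
   * Hypothetical caller, for illustration only: split_folio_to_list()
   * is expected to return 0 only when the folio was actually split.
   */
  static int shrink_large_folio(struct folio *folio, struct list_head *list)
  {
  	int err;

  	if (!folio_test_large(folio))
  		return 0;

  	/*
  	 * With THP disabled, the old stub returned 0 here, so the
  	 * caller would continue as if @folio had been split into
  	 * order-0 pages.  After this patch the stub returns -EINVAL
  	 * (and warns), and the error propagates instead.
  	 */
  	err = split_folio_to_list(folio, list);
  	if (err)
  		return err;

  	/* Only reached when the folio really was split. */
  	return 0;
  }
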
Link: https://lkml.kernel.org/r/20250905150012.93714-1-kernel@pankajraghav.com
Signed-off-by: Pankaj Raghav
Reported-by: kernel test robot
Closes: https://lore.kernel.org/oe-kbuild-all/202509051753.riCeG7LC-lkp@intel.com/
Acked-by: David Hildenbrand
Acked-by: Zi Yan
Acked-by: Kiryl Shutsemau
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Barry Song
Reviewed-by: Anshuman Khandual
Signed-off-by: Andrew Morton
---
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 29ef70022da1..f327d62fc985 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -588,22 +588,26 @@ static inline int
split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
unsigned int new_order)
{
- return 0;
+ VM_WARN_ON_ONCE_PAGE(1, page);
+ return -EINVAL;
}
static inline int split_huge_page(struct page *page)
{
- return 0;
+ VM_WARN_ON_ONCE_PAGE(1, page);
+ return -EINVAL;
}
static inline int split_folio_to_list(struct folio *folio, struct list_head *list)
{
- return 0;
+ VM_WARN_ON_ONCE_FOLIO(1, folio);
+ return -EINVAL;
}
static inline int try_folio_split(struct folio *folio, struct page *page,
struct list_head *list)
{
- return 0;
+ VM_WARN_ON_ONCE_FOLIO(1, folio);
+ return -EINVAL;
}
static inline void deferred_split_folio(struct folio *folio, bool partially_mapped) {}