From: Manish Kumar
Date: Wed, 15 Oct 2025 17:50:41 +0000 (+0530)
Subject: mm/page_isolation: clarify FIXME around shrink_slab() in memory hotplug
X-Git-Url: https://www.infradead.org/git/?a=commitdiff_plain;h=e5dfdaea8c4e981b98d9e7215f5853895fe9a161;p=users%2Fjedix%2Flinux-maple.git

mm/page_isolation: clarify FIXME around shrink_slab() in memory hotplug

The existing FIXME comment notes that memory hotplug doesn't invoke
shrink_slab() directly.  This patch adds context explaining that this is
an intentional design choice to avoid recursion or deadlocks in the
memory reclaim path, as slab shrinking is handled by vmscan.

Link: https://lkml.kernel.org/r/20251015175041.40408-1-manish1588@gmail.com
Signed-off-by: Manish Kumar
Cc: Brendan Jackman
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Suren Baghdasaryan
Cc: Vlastimil Babka
Cc: Zi Yan
Signed-off-by: Andrew Morton
---

diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index f72b6cd38b95..e7e9eb57ec31 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -176,10 +176,16 @@ static int set_migratetype_isolate(struct page *page, enum pb_isolate_mode mode,
 
 	/*
 	 * FIXME: Now, memory hotplug doesn't call shrink_slab() by itself.
-	 * We just check MOVABLE pages.
+	 *
+	 * This is an intentional limitation: invoking shrink_slab() from a
+	 * hotplug path can cause reclaim recursion or deadlock if the normal
+	 * memory reclaim (vmscan) path is already active.  Slab shrinking is
+	 * handled by the vmscan reclaim code under normal operation, so hotplug
+	 * avoids direct calls into shrink_slab() to prevent reentrancy issues.
+	 *
+	 * We therefore only check MOVABLE pages here.
 	 *
 	 * Pass the intersection of [start_pfn, end_pfn) and the page's pageblock
-	 * to avoid redundant checks.
 	 */
 	check_unmovable_start = max(page_to_pfn(page), start_pfn);
 	check_unmovable_end = min(pageblock_end_pfn(page_to_pfn(page)),