From 264a88cafdbd0f4579af903145ac605d030f3f66 Mon Sep 17 00:00:00 2001
From: Honggyu Kim
Date: Fri, 27 Dec 2024 18:57:37 +0900
Subject: [PATCH] mm/mempolicy: count MPOL_WEIGHTED_INTERLEAVE to "interleave_hit"

Commit fa3bea4e1f82 introduced MPOL_WEIGHTED_INTERLEAVE but missed
adding its counter to "interleave_hit" of numastat, which is located
in the /sys/devices/system/node/nodeN/ directory.

It'd be better to add the weighted interleaving counter to the
existing "interleave_hit" instead of introducing a new counter
"weighted_interleave_hit".

Link: https://lkml.kernel.org/r/20241227095737.645-1-honggyu.kim@sk.com
Fixes: fa3bea4e1f82 ("mm/mempolicy: introduce MPOL_WEIGHTED_INTERLEAVE for weighted interleaving")
Signed-off-by: Honggyu Kim
Reviewed-by: Gregory Price
Reviewed-by: Hyeonggon Yoo
Tested-by: Yunjeong Mun
Cc: Andi Kleen
Signed-off-by: Andrew Morton
---
 mm/mempolicy.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 04f35659717ae..162407fbf2bc7 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2268,7 +2268,8 @@ struct page *alloc_pages_mpol_noprof(gfp_t gfp, unsigned int order,
 
 	page = __alloc_pages_noprof(gfp, order, nid, nodemask);
 
-	if (unlikely(pol->mode == MPOL_INTERLEAVE) && page) {
+	if (unlikely(pol->mode == MPOL_INTERLEAVE ||
+		     pol->mode == MPOL_WEIGHTED_INTERLEAVE) && page) {
 		/* skip NUMA_INTERLEAVE_HIT update if numa stats is disabled */
 		if (static_branch_likely(&vm_numa_stat_key) &&
 		    page_to_nid(page) == nid) {
--
2.50.1
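
Not part of the patch itself: as a rough way to observe the counter this change
affects, the minimal userspace sketch below reads the "interleave_hit" field
from a node's numastat file in sysfs. It assumes node0 exists and that numastat
exposes a line of the form "interleave_hit <count>" as described in the commit
message; running it before and after allocating memory under a
weighted-interleave policy should show the counter increasing once the fix is
applied.

/*
 * Verification sketch (hypothetical helper, not part of the patch):
 * read the "interleave_hit" counter for a given NUMA node from
 * /sys/devices/system/node/node<N>/numastat.
 */
#include <stdio.h>
#include <string.h>

static long read_interleave_hit(int node)
{
	char path[128];
	char name[64];
	long value;
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/node/node%d/numastat", node);
	f = fopen(path, "r");
	if (!f)
		return -1;

	/* numastat is a list of "name value" pairs, one per line */
	value = -1;
	while (fscanf(f, "%63s %ld", name, &value) == 2) {
		if (!strcmp(name, "interleave_hit"))
			break;
		value = -1;
	}
	fclose(f);
	return value;
}

int main(void)
{
	long hits = read_interleave_hit(0);

	if (hits < 0) {
		fprintf(stderr, "could not read interleave_hit for node0\n");
		return 1;
	}
	printf("node0 interleave_hit: %ld\n", hits);
	return 0;
}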