From: Vlastimil Babka
Date: Mon, 23 Aug 2021 23:59:00 +0000 (+1000)
Subject: mm, slub: detach whole partial list at once in unfreeze_partials()
X-Git-Url: https://www.infradead.org/git/?a=commitdiff_plain;h=c9c73fec8db3c827d0d0d4c1587ef180f18b9412;p=users%2Fjedix%2Flinux-maple.git

mm, slub: detach whole partial list at once in unfreeze_partials()

Instead of iterating through the live percpu partial list, detach it from
the kmem_cache_cpu at once.  This is simpler and will allow further
optimization.

Link: https://lkml.kernel.org/r/20210805152000.12817-25-vbabka@suse.cz
Signed-off-by: Vlastimil Babka
Cc: Christoph Lameter
Cc: David Rientjes
Cc: Jann Horn
Cc: Jesper Dangaard Brouer
Cc: Joonsoo Kim
Cc: Mel Gorman
Cc: Mike Galbraith
Cc: Pekka Enberg
Cc: Sebastian Andrzej Siewior
Cc: Thomas Gleixner
Signed-off-by: Andrew Morton
Signed-off-by: Stephen Rothwell
---

diff --git a/mm/slub.c b/mm/slub.c
index 240b22328212..d8bfc41dc1f0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2344,16 +2344,20 @@ static void unfreeze_partials(struct kmem_cache *s,
 {
 #ifdef CONFIG_SLUB_CPU_PARTIAL
 	struct kmem_cache_node *n = NULL, *n2 = NULL;
-	struct page *page, *discard_page = NULL;
+	struct page *page, *partial_page, *discard_page = NULL;
 	unsigned long flags;
 
 	local_irq_save(flags);
 
-	while ((page = slub_percpu_partial(c))) {
+	partial_page = slub_percpu_partial(c);
+	c->partial = NULL;
+
+	while (partial_page) {
 		struct page new;
 		struct page old;
 
-		slub_set_percpu_partial(c, page);
+		page = partial_page;
+		partial_page = page->next;
 
 		n2 = get_node(s, page_to_nid(page));
 		if (n != n2) {
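
For illustration, a minimal user-space C sketch of the pattern the patch
adopts: detach the whole singly linked list from the shared head in one
step, then walk the now-private chain.  The struct node type and the
detach_all() helper below are hypothetical stand-ins for the kernel's
struct page ->next chain and the slub_percpu_partial()/c->partial
manipulation; they are not kernel APIs.

	#include <stdio.h>
	#include <stddef.h>

	/* Hypothetical node; stands in for struct page's ->next chain. */
	struct node {
		int id;
		struct node *next;
	};

	/*
	 * Detach the whole list at once: take the current head and clear
	 * the shared pointer, mirroring the patch's
	 * "partial_page = slub_percpu_partial(c); c->partial = NULL;".
	 * Afterwards the caller owns the chain privately.
	 */
	static struct node *detach_all(struct node **shared_head)
	{
		struct node *list = *shared_head;

		*shared_head = NULL;
		return list;
	}

	int main(void)
	{
		struct node c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };
		struct node *shared = &a;	/* stands in for c->partial */
		struct node *list, *n;

		/* One detach instead of popping the live head per iteration. */
		list = detach_all(&shared);

		while (list) {
			n = list;
			list = n->next;	/* advance before processing n */
			printf("processing node %d\n", n->id);
		}
		return 0;
	}

Walking a detached, private list means the loop no longer reads or writes
the live per-cpu pointer on every iteration; that is the simplification the
changelog describes, and presumably what leaves room for the further
optimization it mentions.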