mm/zswap: split zswap rb-tree
author		Chengming Zhou <zhouchengming@bytedance.com>
		Fri, 19 Jan 2024 11:22:23 +0000 (11:22 +0000)
committer	Andrew Morton <akpm@linux-foundation.org>
		Thu, 22 Feb 2024 18:24:39 +0000 (10:24 -0800)
commit		44c7c734a5132fc02f5584c7207f1d0c483f3ccd
tree		b39c42fb5a5e642d098dc930816962a75a5f2eab
parent		bb29fd7760ae39905127afd31fc83294625ff704
mm/zswap: split zswap rb-tree

Each swapfile has one rb-tree to look up the mapping of swp_entry_t to
zswap_entry, protected by a spinlock, which can cause heavy lock
contention when multiple tasks zswap_store()/zswap_load() concurrently.
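
For reference, a simplified sketch of the current per-swapfile tree,
assuming the existing zswap_tree layout in mm/zswap.c:

	/* Sketch: one tree and one lock per swapfile. */
	struct zswap_tree {
		struct rb_root rbroot;	/* swap offset -> zswap_entry */
		spinlock_t lock;	/* serializes store/load/invalidate */
	};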

Address the scalability problem by splitting the zswap rb-tree into
multiple rb-trees, each covering SWAP_ADDRESS_SPACE_PAGES (64M) of swap
space, just like we did in the swap cache address_space splitting.  See
the sketch below.
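
A minimal sketch of the idea, assuming an array of zswap_tree per
swapfile indexed by the high bits of the swap offset (names are
illustrative, not necessarily the exact patch):

	/* Sketch: pick the tree covering this swap entry.  Each tree
	 * spans SWAP_ADDRESS_SPACE_PAGES slots, mirroring the swap
	 * cache address_space split.
	 */
	static struct zswap_tree *zswap_trees[MAX_SWAPFILES];

	static inline struct zswap_tree *swap_zswap_tree(swp_entry_t swp)
	{
		return &zswap_trees[swp_type(swp)][swp_offset(swp) >>
						   SWAP_ADDRESS_SPACE_SHIFT];
	}

Contention then only arises between tasks touching the same 64M range
of the same swapfile, instead of the whole swapfile.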

Although this method can't eliminate the spinlock contention completely,
it mitigates much of it.  Below are the results of a kernel build in
tmpfs with the zswap shrinker enabled:

     linux-next  zswap-lock-optimize
real 1m9.181s    1m3.820s
user 17m44.036s  17m40.100s
sys  7m37.297s   4m54.622s

So there is a clear improvement, particularly in sys time.

Link: https://lkml.kernel.org/r/20240117-b4-zswap-lock-optimize-v2-2-b5cc55479090@bytedance.com
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Nhat Pham <nphamcs@gmail.com>
Acked-by: Yosry Ahmed <yosryahmed@google.com>
Cc: Chris Li <chriscli@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
include/linux/zswap.h
mm/swapfile.c
mm/zswap.c