mm: shmem: extend shmem_partial_swap_usage() to support large folio swap
author Baolin Wang <baolin.wang@linux.alibaba.com>
Mon, 12 Aug 2024 07:42:03 +0000 (15:42 +0800)
committer Andrew Morton <akpm@linux-foundation.org>
Wed, 4 Sep 2024 04:15:33 +0000 (21:15 -0700)
To support shmem large folio swapout in the following patches, use
xa_get_order() to get the order of the swap entry when calculating the
swap usage of shmem.

Link: https://lkml.kernel.org/r/60b130b9fc3e422bb91293a172c2113c85e9233a.1723434324.git.baolin.wang@linux.alibaba.com
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Chris Li <chrisl@kernel.org>
Cc: Daniel Gomez <da.gomez@samsung.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Lance Yang <ioworker0@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Pankaj Raghav <p.raghav@samsung.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/shmem.c

index 22cdc10f27ea32714a06414359da248afb6860a6..02fb188d627f48ba1f32a8ea8f7b8d08e8de7861 100644 (file)
@@ -890,7 +890,7 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
                if (xas_retry(&xas, page))
                        continue;
                if (xa_is_value(page))
-                       swapped++;
+                       swapped += 1 << xa_get_order(xas.xa, xas.xa_index);
                if (xas.xa_index == max)
                        break;
                if (need_resched()) {
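
A minimal sketch of the loop around the changed line, for context only: the
locals (mapping, start, max, swapped) and the surrounding RCU/XArray
iteration are assumed from the usual pattern in shmem_partial_swap_usage()
and are not part of this hunk.

	/*
	 * Sketch, not the full function: each value entry in the mapping
	 * is a swapped-out folio.  xa_get_order() returns the order of
	 * that entry, so a large folio of order N now accounts for
	 * 1 << N swapped pages instead of just one.
	 */
	XA_STATE(xas, &mapping->i_pages, start);	/* assumed locals */
	struct page *page;
	unsigned long swapped = 0;

	rcu_read_lock();
	xas_for_each(&xas, page, max) {
		if (xas_retry(&xas, page))
			continue;
		if (xa_is_value(page))
			/* count every page covered by this swap entry */
			swapped += 1 << xa_get_order(xas.xa, xas.xa_index);
		if (xas.xa_index == max)
			break;
		if (need_resched()) {
			xas_pause(&xas);
			cond_resched_rcu();
		}
	}
	rcu_read_unlock();

	return swapped << PAGE_SHIFT;	/* pages to bytes, assumed return convention */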