mm: remove folio_test_anon(folio)==false path in __folio_add_anon_rmap()
author    Barry Song <v-songbaohua@oppo.com>
          Mon, 17 Jun 2024 23:11:37 +0000 (11:11 +1200)
committer Andrew Morton <akpm@linux-foundation.org>
          Thu, 4 Jul 2024 02:30:18 +0000 (19:30 -0700)
The folio_test_anon(folio)==false case has been relocated to
folio_add_new_anon_rmap().  Additionally, four other callers
consistently pass anonymous folios, as the call stacks below (and the
sketch after them) show:

stack 1:
remove_migration_pmd
   -> folio_add_anon_rmap_pmd
      -> __folio_add_anon_rmap (RMAP_LEVEL_PMD)

stack 2:
__split_huge_pmd_locked
   -> folio_add_anon_rmap_ptes
      -> __folio_add_anon_rmap

stack 3:
remove_migration_pte
   -> folio_add_anon_rmap_pte
      -> __folio_add_anon_rmap

stack 4:
try_to_merge_one_page
   -> replace_page
      -> folio_add_anon_rmap_pte
         -> __folio_add_anon_rmap
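
The anon/file branch in remove_migration_pte() (stack 3), condensed and
lightly paraphrased from mm/migrate.c, illustrates why this path only
ever hands anonymous folios to __folio_add_anon_rmap():

    /* remove_migration_pte(), condensed sketch, not verbatim */
    if (folio_test_anon(folio))
            folio_add_anon_rmap_pte(folio, new, vma,
                                    pvmw.address, rmap_flags);
    else
            folio_add_file_rmap_pte(folio, new, vma);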

__folio_add_anon_rmap() now only needs to handle the
folio_test_anon(folio)==true case, so the !folio_test_anon(folio) path
within it can be removed.
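
Callers that may still see a not-yet-anonymous folio are expected to go
through folio_add_new_anon_rmap() first, roughly as below.  This is a
condensed sketch following the earlier patches in this series; the
variable names are illustrative rather than a specific call site:

    /* condensed sketch; not a verbatim call site */
    if (unlikely(!folio_test_anon(folio)))
            folio_add_new_anon_rmap(folio, vma, address, rmap_flags);
    else
            folio_add_anon_rmap_pte(folio, page, vma, address,
                                    rmap_flags);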

Link: https://lkml.kernel.org/r/20240617231137.80726-4-21cnbao@gmail.com
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
Suggested-by: David Hildenbrand <david@redhat.com>
Tested-by: Shuai Yuan <yuanshuai@oppo.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Chris Li <chrisl@kernel.org>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Yosry Ahmed <yosryahmed@google.com>
Cc: Yu Zhao <yuzhao@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/rmap.c

index 41012fe7a05a5476b1d13fa4e018dacf07de5a99..8616308610b9fb64b1465395b75417163ec189c2 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1297,23 +1297,12 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio,
 {
        int i, nr, nr_pmdmapped = 0;
 
+       VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
+
        nr = __folio_add_rmap(folio, page, nr_pages, level, &nr_pmdmapped);
 
-       if (unlikely(!folio_test_anon(folio))) {
-               VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
-               /*
-                * For a PTE-mapped large folio, we only know that the single
-                * PTE is exclusive. Further, __folio_set_anon() might not get
-                * folio->index right when not given the address of the head
-                * page.
-                */
-               VM_WARN_ON_FOLIO(folio_test_large(folio) &&
-                                level != RMAP_LEVEL_PMD, folio);
-               __folio_set_anon(folio, vma, address,
-                                !!(flags & RMAP_EXCLUSIVE));
-       } else if (likely(!folio_test_ksm(folio))) {
+       if (likely(!folio_test_ksm(folio)))
                __page_check_anon_rmap(folio, page, vma, address);
-       }
 
        __folio_mod_stat(folio, nr, nr_pmdmapped);