mm,unmap: avoid flushing TLB in batch if PTE is inaccessible
author		Huang Ying <ying.huang@intel.com>
		Mon, 10 Apr 2023 07:52:24 +0000 (15:52 +0800)
committer	Andrew Morton <akpm@linux-foundation.org>
		Fri, 21 Apr 2023 23:07:58 +0000 (16:07 -0700)
0Day/LKP reported a performance regression for commit 7e12beb8ca2a
("migrate_pages: batch flushing TLB").  In that commit, the TLB flushing
during page migration is batched, so ptep_clear_flush() is replaced with
set_tlb_ubc_flush_pending() in try_to_migrate_one().  Further
investigation found that ptep_clear_flush() avoids the TLB flush
altogether when the PTE is inaccessible.  The batched TLB flushing can
be optimized in the same way to improve performance.
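
For reference, the non-batched path already skips the flush this way:
the generic ptep_clear_flush() in mm/pgtable-generic.c only issues
flush_tlb_page() when the old PTE was accessible.  Roughly (a sketch
quoted from memory, not the authoritative source):

	pte_t ptep_clear_flush(struct vm_area_struct *vma,
			       unsigned long address, pte_t *ptep)
	{
		struct mm_struct *mm = (vma)->vm_mm;
		pte_t pte;

		pte = ptep_get_and_clear(mm, address, ptep);
		/*
		 * An inaccessible PTE cannot be cached in any TLB,
		 * so the flush (and its IPIs) can be skipped.
		 */
		if (pte_accessible(mm, pte))
			flush_tlb_page(vma, address);
		return pte;
	}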

So in this patch, we check pte_accessible() before calling
set_tlb_ubc_flush_pending() in try_to_unmap_one() and
try_to_migrate_one().  Tests show that the patch improves the benchmark
score of the anon-cow-rand-mt test case of the vm-scalability test
suite by up to 2.1% on an Intel server machine, and reduces the number
of TLB flushing IPIs by up to 44.3%.
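
What "inaccessible" means is architecture-specific; on x86,
pte_accessible() looks roughly like the following (a sketch from
memory; the real definition lives in arch/x86/include/asm/pgtable.h):

	static inline bool pte_accessible(struct mm_struct *mm, pte_t a)
	{
		/* A present PTE may be cached in some TLB. */
		if (pte_flags(a) & _PAGE_PRESENT)
			return true;

		/*
		 * A PROT_NONE PTE (e.g. from NUMA balancing) may still
		 * be cached while a deferred flush is pending.
		 */
		if ((pte_flags(a) & _PAGE_PROTNONE) &&
		    mm_tlb_flush_pending(mm))
			return true;

		return false;
	}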

Link: https://lore.kernel.org/oe-lkp/202303192325.ecbaf968-yujie.liu@intel.com
Link: https://lore.kernel.org/oe-lkp/ab92aaddf1b52ede15e2c608696c36765a2602c1.camel@intel.com/
Link: https://lkml.kernel.org/r/20230410075224.827740-1-ying.huang@intel.com
Fixes: 7e12beb8ca2a ("migrate_pages: batch flushing TLB")
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Reported-by: kernel test robot <yujie.liu@intel.com>
Reviewed-by: Nadav Amit <namit@vmware.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Hugh Dickins <hughd@google.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/rmap.c

index ba901c416785cb743de16104bd8ab5d0ffb75c22..e33246a0e9ba520b386f9410aca3439c41b60791 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1580,7 +1580,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
                                 */
                                pteval = ptep_get_and_clear(mm, address, pvmw.pte);
 
-                               set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
+                               if (pte_accessible(mm, pteval))
+                                       set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
                        } else {
                                pteval = ptep_clear_flush(vma, address, pvmw.pte);
                        }
@@ -1961,7 +1962,8 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
                                 */
                                pteval = ptep_get_and_clear(mm, address, pvmw.pte);
 
-                               set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
+                               if (pte_accessible(mm, pteval))
+                                       set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
                        } else {
                                pteval = ptep_clear_flush(vma, address, pvmw.pte);
                        }