author David Hildenbrand <david@redhat.com>
Fri, 2 Aug 2024 15:55:24 +0000 (17:55 +0200)
committer Andrew Morton <akpm@linux-foundation.org>
Mon, 2 Sep 2024 03:26:02 +0000 (20:26 -0700)
commit e317a8d8b4f600fc7ec9725e26417030ee594f52
tree e72d26b518abbc8d08645745d288ddc95b4b990b
parent 7290840de65e04c1fc59dab692468fc9600c6038
mm/ksm: convert break_ksm() from walk_page_range_vma() to folio_walk

Let's simplify by reusing folio_walk.  Keep the existing behavior of also
handling migration entries and zeropages; a rough sketch of the resulting
check follows the changed-file list below.

Link: https://lkml.kernel.org/r/20240802155524.517137-12-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Claudio Imbrenda <imbrenda@linux.ibm.com>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Janosch Frank <frankja@linux.ibm.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/ksm.c
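
The patch body is not reproduced on this page, so the following is only a
rough sketch (not the literal diff) of how break_ksm() can express its PTE
check with the folio_walk API introduced earlier in this series.  The
folio_walk_start()/folio_walk_end() calls and the FW_MIGRATION/FW_ZEROPAGE
flags are the real interfaces from <linux/pagewalk.h>; the surrounding
unshare-fault loop is a simplification of the pre-existing function and may
differ in detail from the applied patch.

```c
/* Sketch of mm/ksm.c:break_ksm() after the conversion -- not the literal diff. */
#include <linux/mm.h>
#include <linux/pagewalk.h>	/* struct folio_walk, folio_walk_start()/end() */
#include <linux/ksm.h>		/* folio_test_ksm(), is_ksm_zero_pte() */

static int break_ksm(struct vm_area_struct *vma, unsigned long addr,
		     bool lock_vma)
{
	vm_fault_t ret = 0;

	if (lock_vma)
		vma_start_write(vma);

	do {
		bool ksm_page = false;
		struct folio_walk fw;
		struct folio *folio;

		cond_resched();
		/*
		 * Replaces the walk_page_range_vma() + break_ksm_ops walk:
		 * FW_MIGRATION and FW_ZEROPAGE preserve the old behavior of
		 * also spotting KSM migration entries and the KSM zeropage.
		 */
		folio = folio_walk_start(&fw, vma, addr,
					 FW_MIGRATION | FW_ZEROPAGE);
		if (folio) {
			/* KSM folios are small, so only PTE-mapped folios matter. */
			if (!folio_test_large(folio) &&
			    (folio_test_ksm(folio) || is_ksm_zero_pte(fw.pte)))
				ksm_page = true;
			folio_walk_end(&fw, vma);
		}

		if (!ksm_page)
			return 0;
		/* Break sharing via an unshare fault, as before. */
		ret = handle_mm_fault(vma, addr,
				      FAULT_FLAG_UNSHARE | FAULT_FLAG_REMOTE,
				      NULL);
	} while (!(ret & (VM_FAULT_WRITE | VM_FAULT_SIGBUS |
			  VM_FAULT_SIGSEGV | VM_FAULT_OOM)));

	/* Loop until no KSM page remains; only OOM is reported to the caller. */
	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
}
```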