From: David Stevens
Date: Tue, 18 Apr 2023 08:40:31 +0000 (+0900)
Subject: mm/shmem: Fix race in shmem_undo_range w/THP
X-Git-Url: https://www.infradead.org/git/?a=commitdiff_plain;h=baac0f9f362db62d80ce0017b530436c34a53e8c;p=users%2Fjedix%2Flinux-maple.git

mm/shmem: Fix race in shmem_undo_range w/THP

Split folios during the second loop of shmem_undo_range.  It's not
sufficient to only split folios when dealing with partial pages, since
it's possible for a THP to be faulted in after that point.  Calling
truncate_inode_folio in that situation can result in throwing away data
outside of the range being targeted.

Link: https://lkml.kernel.org/r/20230418084031.3439795-1-stevensd@google.com
Fixes: b9a8a4195c7d ("truncate,shmem: Handle truncates that split large folios")
Signed-off-by: David Stevens
Cc: Matthew Wilcox (Oracle)
Cc: Suleiman Souhlal
Cc:
Signed-off-by: Andrew Morton
---

diff --git a/mm/shmem.c b/mm/shmem.c
index 448f393d8ab2..347c84a9f2ff 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1022,7 +1022,22 @@ whole_folios:
 			}
 			VM_BUG_ON_FOLIO(folio_test_writeback(folio),
 					folio);
-			truncate_inode_folio(mapping, folio);
+
+			if (!folio_test_large(folio)) {
+				truncate_inode_folio(mapping, folio);
+			} else if (truncate_inode_partial_folio(folio, lstart, lend)) {
+				/*
+				 * If we split a page, reset the loop so that we
+				 * pick up the new sub pages. Otherwise the THP
+				 * was entirely dropped or the target range was
+				 * zeroed, so just continue the loop as is.
+				 */
+				if (!folio_test_large(folio)) {
+					folio_unlock(folio);
+					index = start;
+					break;
+				}
+			}
 		}
 		folio_unlock(folio);
 	}
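
To make the window concrete (illustrative numbers, assuming 4KiB pages and
2MiB THPs): punching a hole over pages [0, 768) while a THP covering pages
[512, 1024) is faulted in meant the old code passed that whole folio to
truncate_inode_folio(), also discarding pages [768, 1024) that lie outside
the hole.  Below is a simplified, annotated sketch of the loop body as it
looks after this patch -- not the literal kernel code -- assuming the
mm/truncate.c behaviour where truncate_inode_partial_folio() returns false
only when a dirty large folio could not be split and so must not be
discarded:

	/* Small folio: it lies entirely inside [lstart, lend], drop it. */
	if (!folio_test_large(folio)) {
		truncate_inode_folio(mapping, folio);
	} else if (truncate_inode_partial_folio(folio, lstart, lend)) {
		/*
		 * Large folio: it may have been faulted in after the first
		 * pass and can extend past the target range, so only the
		 * part inside [lstart, lend] is zeroed and the folio is
		 * split if possible.  If the split succeeded, restart the
		 * scan from 'start' so the new sub-pages that fall inside
		 * the range are found and dropped on the next pass.
		 */
		if (!folio_test_large(folio)) {
			folio_unlock(folio);
			index = start;
			break;
		}
		/*
		 * Still large: the THP was either dropped entirely or the
		 * target range was zeroed in place; keep scanning as is.
		 */
	}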