btrfs: make btrfs_cleanup_ordered_extents() support large folios
author     Qu Wenruo <wqu@suse.com>
           Sat, 19 Jul 2025 22:26:48 +0000 (07:56 +0930)
committer  David Sterba <dsterba@suse.com>
           Thu, 7 Aug 2025 15:07:15 +0000 (17:07 +0200)
When hitting a large folio, btrfs_cleanup_ordered_extents() will get the
same large folio multiple times, clearing the same range again and
again.

Thankfully this does not cause anything wrong, just inefficiency.

This is caused by iterating folios using the old page index, so we can
hit the same large folio again and again.

Enhance it by advancing @index to the index at the folio's end, and only
increase @index by 1 if we fail to grab a folio.
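
The arithmetic behind the change can be illustrated outside the kernel.
The following is a minimal standalone C sketch (not kernel code): the
fake_folio struct, fake_folio_end() helper, and the 16-page folio size
are made-up stand-ins for struct folio, folio_end() and a real large
folio, and PAGE_SHIFT is assumed to be the common 4K value. It only
shows how converting the folio's end byte offset back to a page index
skips past every page covered by the large folio, instead of revisiting
it index by index.

	#include <stdio.h>

	#define PAGE_SHIFT	12
	#define PAGE_SIZE	(1UL << PAGE_SHIFT)

	/* Stand-in for struct folio: start byte offset and page count. */
	struct fake_folio {
		unsigned long start;
		unsigned long nr_pages;
	};

	/* Mimics folio_end(): byte offset just past the folio's last byte. */
	static unsigned long fake_folio_end(const struct fake_folio *folio)
	{
		return folio->start + folio->nr_pages * PAGE_SIZE;
	}

	int main(void)
	{
		/* A 16-page (64K) large folio starting at page index 3. */
		struct fake_folio folio = { .start = 3 * PAGE_SIZE, .nr_pages = 16 };
		unsigned long index = 3;

		printf("processing folio at index %lu\n", index);

		/* Old behaviour: index++ would look up this folio 15 more times. */
		/* New behaviour: jump straight past the folio in one step. */
		index = fake_folio_end(&folio) >> PAGE_SHIFT;

		printf("next index to look up: %lu\n", index); /* prints 19 */
		return 0;
	}

With a 4K PAGE_SIZE, fake_folio_end() returns 19 * PAGE_SIZE, so the
next lookup starts at page index 19, one past the large folio, which is
exactly what the hunk below does with folio_end(folio) >> PAGE_SHIFT.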

Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
fs/btrfs/inode.c

index b77dd22b8cdbe3601028594efadd32e06e83036b..a2de289e662b2fe5711b9027bdc9a486cd642f9b 100644 (file)
@@ -401,10 +401,12 @@ static inline void btrfs_cleanup_ordered_extents(struct btrfs_inode *inode,
 
        while (index <= end_index) {
                folio = filemap_get_folio(inode->vfs_inode.i_mapping, index);
-               index++;
-               if (IS_ERR(folio))
+               if (IS_ERR(folio)) {
+                       index++;
                        continue;
+               }
 
+               index = folio_end(folio) >> PAGE_SHIFT;
                /*
                 * Here we just clear all Ordered bits for every page in the
                 * range, then btrfs_mark_ordered_io_finished() will handle