btrfs: enable large data folio support for defrag
Currently we gracefully reject large folios for defrag, but the
implementation itself is already mostly compatible with large folios.
There are several parts of defrag in btrfs:
- Extent map checking
  That is defrag_collect_targets(), which prepares the list of target
  ranges that should be defragged.
  This part is completely unrelated to folios, thus it doesn't care
  about the folio size.
- Target folio preparation
  That is defrag_prepare_one_folio(), which locks and reads (if needed)
  the target folio (see the sketch after this list).
  Since folio read and lock already support large folios, this part
  needs only minor changes.
- Redirtying the target range of the folio
  This is already done in a way that supports large folios.
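As a rough illustration of the folio preparation step, below is a
minimal sketch built on the generic folio API. Only
defrag_prepare_one_folio() is a name from this commit; the btrfs
specific mapping checks and the actual ordered extent wait are elided,
so this is a sketch rather than the real patch:

  static struct folio *defrag_prepare_one_folio(struct btrfs_inode *inode,
						pgoff_t index)
  {
	struct address_space *mapping = inode->vfs_inode.i_mapping;
	struct folio *folio;

	/* Grab the folio locked, creating it if it is not cached yet. */
	folio = __filemap_get_folio(mapping, index,
				    FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
				    mapping_gfp_mask(mapping));
	if (IS_ERR(folio))
		return folio;

	/*
	 * For large folios, any ordered extent wait must cover the whole
	 * folio, i.e. the range from folio_pos(folio) to
	 * folio_pos(folio) + folio_size(folio), not just one page.
	 * (Elided here.)
	 */

	if (!folio_test_uptodate(folio)) {
		/* ->read_folio unlocks the folio once the read completes. */
		btrfs_read_folio(NULL, folio);
		folio_lock(folio);
		if (!folio_test_uptodate(folio)) {
			folio_unlock(folio);
			folio_put(folio);
			return ERR_PTR(-EIO);
		}
	}
	return folio;
  }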
So it's pretty straightforward to enable large folios for defrag:
- Do not reject large folios for experimental builds
This affects the large folio check inside defrag_prepare_one_folio().
- Wait for ordered extents of the whole folio in
defrag_prepare_one_folio()
- Lock the whole extent range for all involved folios in
defrag_one_range()
- Allow the folios[] array to be partially empty
  Since we can have large folios, folios[] will not always be fully
  populated (see the sketch after this list).
This affects:
* How to allocate folios in defrag_one_range()
    We can no longer iterate by page index; instead, the end position
    of the current folio is used as the iterator.
* How to free the folios[] array
    If we hit an empty slot, it means large folios are involved and we
    have already reached the end of the used entries.
  * How to mark the range dirty
    Instead of using the page index directly, we have to go through
    each folio and check whether it covers the defrag target, inside
    defrag_one_locked_target().
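To make the partially filled folios[] handling concrete, here is a
hedged sketch of the fill loop. The wrapper function and its
parameters are illustrative only; defrag_prepare_one_folio() is the
helper this commit modifies:

  static long fill_folios_sketch(struct btrfs_inode *inode, u64 start,
				 u32 len, struct folio **folios,
				 unsigned int nr_folios)
  {
	u64 cur = start;
	unsigned int nr = 0;

	/*
	 * Iterate by the end position of each returned folio rather
	 * than by page index.  A large folio covers several pages, so
	 * the array can legitimately end up only partially filled.
	 */
	while (cur < start + len && nr < nr_folios) {
		struct folio *folio;

		folio = defrag_prepare_one_folio(inode, cur >> PAGE_SHIFT);
		if (IS_ERR(folio)) {
			/* Empty slots mark the end of the used entries. */
			while (nr > 0) {
				nr--;
				folio_unlock(folios[nr]);
				folio_put(folios[nr]);
			}
			return PTR_ERR(folio);
		}
		folios[nr++] = folio;
		cur = folio_pos(folio) + folio_size(folio);
	}
	return nr;
  }

Marking the range dirty then walks only the filled entries and checks
each folio against the defrag target, matching the
defrag_one_locked_target() change above.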
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>