btrfs: prepare compression paths for large data folios
None of the compression algorithms inside btrfs support large folios,
due to the following points:

- btrfs_calc_input_length() assumes a page sized folio

- kmap_local_folio() callers use offset_in_page() to calculate the offset
Prepare them to support large data folios by:
- Add a folio parameter to btrfs_calc_input_length()
  And use that folio parameter to calculate the correct length.
  While at it, also add extra ASSERT()s to make sure the parameter
  @cur is inside the folio range.
  This affects only zlib and zstd. Lzo compresses at most one block at
  a time, thus it is not affected. (See the first sketch after this
  list.)
- Use offset_in_folio() to calculate the kmap_local_folio() offset
  This affects all 3 algorithms. (See the second sketch after this
  list.)
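
A minimal sketch of what the updated helper could look like, assuming
the btrfs ASSERT() macro and the generic folio_pos()/folio_size()
helpers; the parameter names other than @cur (e.g. @range_end) are
placeholders, and the real signature in fs/btrfs/compression.h may
differ:

  /*
   * Sketch: how many bytes of @folio can be fed to the compressor,
   * starting at file offset @cur and never crossing @range_end
   * (exclusive end of the range being compressed).
   */
  static inline u32 btrfs_calc_input_length(struct folio *folio,
                                            u64 range_end, u64 cur)
  {
          /* Exclusive end of the file range covered by this folio. */
          const u64 folio_end = folio_pos(folio) + folio_size(folio);

          /* @cur must be inside the folio range. */
          ASSERT(folio_pos(folio) <= cur);
          ASSERT(cur < folio_end);

          /* Never go past the end of the range nor of the folio. */
          return min(range_end, folio_end) - cur;
  }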
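
The kmap_local_folio() change boils down to the following pattern
(hypothetical call site with a placeholder @start offset, not the
exact btrfs code). kmap_local_folio() maps only the single page that
contains the given offset, so the offset must be relative to the
folio, not to a page:

  /* Before: only correct when the folio is a single page. */
  kaddr = kmap_local_folio(folio, offset_in_page(start));

  /* After: picks the right page inside a large folio. */
  kaddr = kmap_local_folio(folio, offset_in_folio(folio, start));
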
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>