crypto: x86/aes-xts - wire up AESNI + AVX implementation
Add an AES-XTS implementation "xts-aes-aesni-avx" for x86_64 CPUs that
have the AES-NI and AVX extensions but not VAES. It's similar to the
existing xts-aes-aesni in that it uses xmm registers to operate on one AES
block at a time. It differs from xts-aes-aesni in the following ways:
- It uses the VEX-coded (non-destructive) instructions from AVX.
This improves performance slightly.
- It incorporates some additional optimizations such as interleaving the
tweak computation with AES en/decryption (the tweak update is sketched
in C after this list), handling single-page messages more efficiently,
and caching the first round key.
- It supports only 64-bit (x86_64).
- It's generated by an assembly macro that will also be used to generate
VAES-based implementations.
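For reference, the tweak computation mentioned above is just a
multiplication by x in GF(2^128) with the reduction polynomial
x^128 + x^7 + x^2 + x + 1.  A rough C sketch is below; the function name
is made up for illustration, and the real code does the equivalent in
assembly on xmm registers, interleaved with the AES rounds:

    #include <linux/types.h>        /* u64 */

    /*
     * Illustrative only: multiply a 128-bit XTS tweak by x in GF(2^128),
     * with the tweak stored as two little-endian 64-bit halves
     * (t[0] = low half, t[1] = high half).
     */
    static void example_xts_mul_x(u64 t[2])
    {
            u64 carry = t[1] >> 63;                 /* bit shifted off the top */

            t[1] = (t[1] << 1) | (t[0] >> 63);
            t[0] = (t[0] << 1) ^ (carry * 0x87);    /* reduce mod the polynomial */
    }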
The performance improvement over xts-aes-aesni varies from small to
large, depending on the CPU and other factors such as the size of the
messages en/decrypted. For example, the following increases in
AES-256-XTS decryption throughput are seen on the following CPUs:
(The above CPUs don't support VAES, so they can't use VAES instead.)
While this isn't as large an improvement as what VAES provides, this
still seems worthwhile. This implementation is fairly easy to provide
based on the assembly macro that's needed for VAES anyway, and it will
be the best implementation on a large number of CPUs (very roughly, the
CPUs launched by Intel and AMD from 2011 to 2018).
This makes the existing xts-aes-aesni *mostly* obsolete. For now, leave
it in place to support 32-bit kernels and also CPUs like Intel Westmere
that support AES-NI but not AVX. (We could potentially remove it anyway
and just rely on the indirect acceleration via ecb-aes-aesni in those
cases, but that change will need to be considered separately.)
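As a usage sketch (illustrative only; the function name is made up, and
error paths and the actual en/decrypt calls are elided): callers just ask
the crypto API for "xts(aes)", and the core picks the highest-priority
implementation the CPU supports, e.g. xts-aes-aesni-avx on AES-NI + AVX
machines without VAES:

    #include <linux/types.h>
    #include <linux/err.h>
    #include <crypto/skcipher.h>

    static int example_alloc_xts(const u8 *key, unsigned int keylen)
    {
            struct crypto_skcipher *tfm;
            int err;

            /* Resolves to the best "xts(aes)" driver for this CPU. */
            tfm = crypto_alloc_skcipher("xts(aes)", 0, 0);
            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);

            /* For AES-256-XTS the key is 64 bytes (two 256-bit keys). */
            err = crypto_skcipher_setkey(tfm, key, keylen);

            crypto_free_skcipher(tfm);
            return err;
    }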
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>