Eric Biggers [Tue, 22 Apr 2025 15:27:10 +0000 (08:27 -0700)]
crypto: mips - move library functions to arch/mips/lib/crypto/
Continue disentangling the crypto library functions from the generic
crypto infrastructure by moving the mips ChaCha and Poly1305 library
functions into a new directory arch/mips/lib/crypto/ that does not
depend on CRYPTO. This mirrors the distinction between crypto/ and
lib/crypto/.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 22 Apr 2025 15:27:09 +0000 (08:27 -0700)]
crypto: arm64 - move library functions to arch/arm64/lib/crypto/
Continue disentangling the crypto library functions from the generic
crypto infrastructure by moving the arm64 ChaCha and Poly1305 library
functions into a new directory arch/arm64/lib/crypto/ that does not
depend on CRYPTO. This mirrors the distinction between crypto/ and
lib/crypto/.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 22 Apr 2025 15:27:08 +0000 (08:27 -0700)]
crypto: arm - move library functions to arch/arm/lib/crypto/
Continue disentangling the crypto library functions from the generic
crypto infrastructure by moving the arm BLAKE2s, ChaCha, and Poly1305
library functions into a new directory arch/arm/lib/crypto/ that does
not depend on CRYPTO. This mirrors the distinction between crypto/ and
lib/crypto/.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Tue, 22 Apr 2025 15:27:07 +0000 (08:27 -0700)]
crypto: x86 - drop redundant dependencies on X86
arch/x86/crypto/Kconfig is sourced only when CONFIG_X86=y, so there is
no need for the symbols defined inside it to depend on X86.
In the case of CRYPTO_TWOFISH_586 and CRYPTO_TWOFISH_X86_64, the
dependency was actually on '(X86 || UML_X86)', which suggests that these
two symbols were intended to be available under user-mode Linux as well.
Yet these symbols were still defined only when CONFIG_X86=y, so that was never
actually the case. Just remove the redundant dependency.
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: riscv - Use SYM_FUNC_START for functions only called directly
After some recent changes to the RISC-V crypto code that turned some
indirect function calls into direct ones, builds with CONFIG_CFI_CLANG
fail with:
ld.lld: error: undefined symbol: __kcfi_typeid_sm3_transform_zvksh_zvkb
>>> referenced by arch/riscv/crypto/sm3-riscv64-zvksh-zvkb.o:(.text+0x2) in archive vmlinux.a
ld.lld: error: undefined symbol: __kcfi_typeid_sha512_transform_zvknhb_zvkb
>>> referenced by arch/riscv/crypto/sha512-riscv64-zvknhb-zvkb.o:(.text+0x2) in archive vmlinux.a
ld.lld: error: undefined symbol: __kcfi_typeid_sha256_transform_zvknha_or_zvknhb_zvkb
>>> referenced by arch/riscv/crypto/sha256-riscv64-zvknha_or_zvknhb-zvkb.o:(.text+0x2) in archive vmlinux.a
As these functions are no longer called indirectly (i.e., their addresses are
no longer taken), the compiler will not emit __kcfi_typeid symbols for them,
but SYM_TYPED_FUNC_START expects those symbols to exist at link time.
Switch the definitions of these functions to use SYM_FUNC_START, as they
no longer need kCFI type information since they are only called
directly.
Fixes: 1523eaed0ac5 ("crypto: riscv/sm3 - Use API partial block handling")
Fixes: 561aab1104d8 ("crypto: riscv/sha512 - Use API partial block handling")
Fixes: e6c5597badf2 ("crypto: riscv/sha256 - Use API partial block handling")
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Thu, 24 Apr 2025 14:42:51 +0000 (22:42 +0800)]
crypto: engine - Remove CRYPTO_ALG_ENGINE bit
Remove the private and obsolete CRYPTO_ALG_ENGINE bit which is
conflicting with the new CRYPTO_ALG_DUP_FIRST bit.
Reported-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Fixes: f1440a90465b ("crypto: api - Add support for duplicating algorithms before registration")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Fri, 18 Apr 2025 03:00:57 +0000 (11:00 +0800)]
crypto: lib/sm3 - Remove partial block helpers
Now that all sm3_base users have been converted to use the API
partial block handling, remove the partial block helpers as well
as the lib/crypto functions.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Fri, 18 Apr 2025 02:58:41 +0000 (10:58 +0800)]
crypto: shash - Handle partial blocks in API
Provide an option to handle the partial blocks in the shash API.
Almost every hash algorithm has a block size and is only able
to hash partial blocks on finalisation.
Rather than duplicating the partial block handling many times,
add this functionality to the shash API.
It is optional (e.g., hmac never needs this, as it relies on the
partial block handling of the underlying hash); to enable it,
set the bit CRYPTO_AHASH_ALG_BLOCK_ONLY.
The export format is always that of the underlying hash export,
plus the partial block buffer, followed by a single byte for the
partial block length.
Set the bit CRYPTO_AHASH_ALG_FINAL_NONZERO to withhold an extra
byte in the partial block. This will come in handy when this
is extended to ahash where hardware often can't deal with a
zero-length final.
It will also be used for algorithms requiring an extra block for
finalisation (e.g., cmac).
As an optimisation, set the bit CRYPTO_AHASH_ALG_FINUP_MAX if
the algorithm wishes to get as much data as possible instead of
just the last partial block.
The descriptor will be zeroed after finalisation.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
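A minimal sketch of how an algorithm might opt in, assuming the flags are set
in base.cra_flags (the flag names come from the text above; the callbacks,
names, and sizes are placeholders, not a real driver):

#include <crypto/hash.h>

static int example_init(struct shash_desc *desc)
{
	return 0;	/* placeholder */
}

static int example_update(struct shash_desc *desc, const u8 *data,
			  unsigned int len)
{
	return 0;	/* with BLOCK_ONLY, only ever sees whole blocks */
}

static int example_finup(struct shash_desc *desc, const u8 *data,
			 unsigned int len, u8 *out)
{
	return 0;	/* receives the buffered partial block here */
}

static struct shash_alg example_alg_sketch = {
	.digestsize	= 32,
	.init		= example_init,
	.update		= example_update,
	.finup		= example_finup,
	.descsize	= 64,
	.base		= {
		.cra_name	= "example",
		.cra_blocksize	= 64,
		.cra_flags	= CRYPTO_AHASH_ALG_BLOCK_ONLY |
				  CRYPTO_AHASH_ALG_FINUP_MAX,
	},
};

/* Export layout with CRYPTO_AHASH_ALG_BLOCK_ONLY, per the description above:
 *   [underlying hash export][partial block buffer][1 byte: buffered length]
 */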
Merge crypto tree to pick up scompress off-by-one patch. The
merge resolution is non-trivial as the dst handling code has been
moved in front of the src.
Fixes: da001fb651b00e1d ("crypto: atmel-i2c - add support for SHA204A random number generator")
Cc: <stable@vger.kernel.org>
Signed-off-by: Marek Behún <kabel@kernel.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert Xu [Mon, 21 Apr 2025 03:31:31 +0000 (11:31 +0800)]
crypto: scomp - Fix off-by-one bug when calculating last page
Fix off-by-one bug in the last page calculation for src and dst.
Reported-by: Nhat Pham <nphamcs@gmail.com>
Fixes: 2d3553ecb4e3 ("crypto: scomp - Remove support for some non-trivial SG lists")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
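As an editor's illustration of the bug class (not the actual patch): the last
byte of a region starting at offset with len bytes sits at offset + len - 1,
so dividing offset + len by PAGE_SIZE overshoots by one page whenever the
region ends exactly on a page boundary.

#include <linux/mm.h>		/* PAGE_SIZE */
#include <linux/types.h>

/* Last page index touched by [offset, offset + len), assuming len > 0. */
static inline unsigned long last_page_index(unsigned long offset, size_t len)
{
	/* (offset + len) / PAGE_SIZE would be off by one for page-aligned ends */
	return (offset + len - 1) / PAGE_SIZE;
}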
Herbert Xu [Thu, 17 Apr 2025 02:26:28 +0000 (10:26 +0800)]
powerpc/crc: Include uaccess.h and others
The powerpc crc code was relying on pagefault_disable() being
pulled in indirectly by unrelated header files.
Fix this by explicitly including uaccess.h. Also add the other missing
header files to prevent similar problems in the future.
Reported-by: Eric Biggers <ebiggers@kernel.org>
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Fixes: 7ba8df47810f ("asm-generic: Make simd.h more resilient")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
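A hedged sketch of the usage pattern involved (the function and the CRC step
are placeholders): pagefault_disable() and pagefault_enable() are declared in
<linux/uaccess.h>, so depending on them arriving via other headers is fragile.

#include <linux/types.h>
#include <linux/uaccess.h>	/* pagefault_disable() / pagefault_enable() */
#include <asm/simd.h>		/* may_use_simd() */

static u32 crc_vector_sketch(u32 crc, const u8 *p, size_t len)
{
	if (!may_use_simd())
		return crc;	/* a real caller would fall back to generic code */

	pagefault_disable();
	/* ... vector CRC computation over p[0..len) ... */
	pagefault_enable();
	return crc;
}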
Herbert Xu [Wed, 16 Apr 2025 07:48:26 +0000 (15:48 +0800)]
crypto: public_key - Make sig/tfm local to if clause in software_key_query
The recent code changes triggered a false-positive
maybe-uninitialized warning in software_key_query. Rearrange the
code by moving the sig/tfm variables into the if clause where they
are actually used.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
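A generic sketch of the pattern (the function and the algorithm-name handling
are hypothetical, not the actual software_key_query code): declaring sig/tfm
inside the branch that initialises and uses them makes it obvious to the
compiler that neither is ever read uninitialised.

#include <linux/err.h>
#include <crypto/sig.h>
#include <crypto/akcipher.h>

static int query_sketch(bool is_sig_op, const char *alg)
{
	if (is_sig_op) {
		struct crypto_sig *sig = crypto_alloc_sig(alg, 0, 0);

		if (IS_ERR(sig))
			return PTR_ERR(sig);
		/* ... signature-based query using sig ... */
		crypto_free_sig(sig);
	} else {
		struct crypto_akcipher *tfm = crypto_alloc_akcipher(alg, 0, 0);

		if (IS_ERR(tfm))
			return PTR_ERR(tfm);
		/* ... encryption-based query using tfm ... */
		crypto_free_akcipher(tfm);
	}
	return 0;
}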
Herbert Xu [Tue, 15 Apr 2025 09:23:19 +0000 (17:23 +0800)]
crypto: deflate - Make the acomp walk atomic
Add an atomic flag to the acomp walk and use that in deflate.
Due to the use of a per-cpu context, it is impossible to sleep
during the walk in deflate.
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202504151654.4c3b6393-lkp@intel.com
Fixes: 08cabc7d3c86 ("crypto: deflate - Convert to acomp")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
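As a hypothetical illustration (not the acomp walk API itself) of why the
per-cpu context rules out sleeping: once the per-cpu pointer is taken,
preemption is disabled, so everything done under it, including walking the
buffers, must run atomically.

#include <linux/types.h>
#include <linux/percpu.h>

struct deflate_stream_sketch {		/* hypothetical per-cpu scratch state */
	u8 workspace[4096];
};

static DEFINE_PER_CPU(struct deflate_stream_sketch, deflate_stream);

static int compress_sketch(const u8 *src, unsigned int slen)
{
	struct deflate_stream_sketch *stream;
	int ret = 0;

	/* get_cpu_ptr() disables preemption; nothing from here to
	 * put_cpu_ptr() may sleep, so the buffer walk has to be atomic
	 * (no GFP_KERNEL allocations, no sleeping mappings). */
	stream = get_cpu_ptr(&deflate_stream);
	/* ... walk src and compress into stream->workspace ... */
	put_cpu_ptr(&deflate_stream);
	return ret;
}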
crypto: sun8i-ss - use API helpers to setup fallback request
Rather than setting up the fallback request by hand, use the
ahash_request_set_callback() and ahash_request_set_crypt() API helpers
to properly set up the new request.
This also ensures that the completion callback is properly passed down
to the fallback algorithm, which avoids a crash with async fallbacks.
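A hedged sketch of the pattern (the fallback request is assumed to be
pre-allocated and already bound to the fallback tfm by the driver; the fields
read from the original request are the standard ahash ones):

#include <crypto/hash.h>

static int run_fallback_sketch(struct ahash_request *areq,
			       struct ahash_request *freq)
{
	/* Propagate the original request's flags and completion callback,
	 * so an async fallback completes the right request. */
	ahash_request_set_callback(freq, areq->base.flags,
				   areq->base.complete, areq->base.data);
	/* Same source scatterlist, result buffer, and length. */
	ahash_request_set_crypt(freq, areq->src, areq->result, areq->nbytes);

	/* The return value (possibly -EINPROGRESS) propagates to the caller. */
	return crypto_ahash_digest(freq);
}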
Since crypto/poly1305.c now registers a poly1305-$(ARCH) shash algorithm
that uses the architecture's Poly1305 library functions, individual
architectures no longer need to do the same. Therefore, remove the
redundant shash algorithm from the arch-specific code and leave just the
library functions there.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Eric Biggers [Sun, 13 Apr 2025 04:54:14 +0000 (21:54 -0700)]
crypto: poly1305 - centralize the shash wrappers for arch code
Following the example of the crc32, crc32c, and chacha code, make the
crypto subsystem register both generic and architecture-optimized
poly1305 shash algorithms, both implemented on top of the appropriate
library functions. This eliminates the need for every architecture to
implement the same shash glue code.
Note that the poly1305 shash requires that the key be prepended to the
data, which differs from the library functions where the key is simply a
parameter to poly1305_init(). Previously this was handled at a fairly
low level, polluting the library code with shash-specific code.
Reorganize things so that the shash code handles this quirk itself.
Also, to register the architecture-optimized shashes only when
architecture-optimized code is actually being used, add a function
poly1305_is_arch_optimized() and make each arch implement it. Change
each architecture's Poly1305 module_init function to arch_initcall so
that the CPU feature detection is guaranteed to run before
poly1305_is_arch_optimized() gets called by crypto/poly1305.c. (In
cases where poly1305_is_arch_optimized() just returns true
unconditionally, using arch_initcall is not strictly needed, but it's
still good to be consistent across architectures.)
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
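A hedged sketch of what one architecture's side of this might look like (the
static key, the feature check, and the init function names are placeholders,
not taken from any particular arch):

#include <crypto/internal/poly1305.h>
#include <linux/jump_label.h>
#include <linux/module.h>

static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_simd);

/* Queried by crypto/poly1305.c to decide whether to register poly1305-$(ARCH). */
bool poly1305_is_arch_optimized(void)
{
	return static_key_enabled(&have_simd);
}
EXPORT_SYMBOL(poly1305_is_arch_optimized);

static bool __init poly1305_cpu_supported_placeholder(void)
{
	return false;	/* stand-in for the arch's real CPU feature detection */
}

static int __init poly1305_arch_mod_init(void)
{
	if (poly1305_cpu_supported_placeholder())
		static_branch_enable(&have_simd);
	return 0;
}
/* arch_initcall runs before the later initcall that registers the shashes,
 * so the feature detection above is guaranteed to have happened first. */
arch_initcall(poly1305_arch_mod_init);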