Stefan Agner [Mon, 7 Aug 2017 17:31:20 +0000 (10:31 -0700)]
lib/mpi: fix build with clang
Use just @ to denote comments which works with gcc and clang.
Otherwise clang reports an escape sequence error:
error: invalid % escape in inline assembly string
Use %0-%3 as operand references, this avoids:
error: invalid operand in inline asm: 'umull ${1:r}, ${0:r}, ${2:r}, ${3:r}'
Also remove superfluous casts on output operands to avoid warnings
such as:
warning: invalid use of a cast in an inline asm context requiring an l-value
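A minimal sketch of the resulting style, using a hypothetical helper rather than the actual mpi macros: '@' comments and plain %0-%3 operand references, with un-cast l-values as outputs:
  static inline void umul32(unsigned long a, unsigned long b,
                            unsigned long *lo, unsigned long *hi)
  {
          unsigned long l, h;

          /* '@' starts an ARM assembler comment; accepted by gcc and clang */
          asm("umull %0, %1, %2, %3    @ 32x32 -> 64 bit multiply"
              : "=&r" (l), "=&r" (h)   /* plain l-value outputs, no casts */
              : "r" (a), "r" (b));
          *lo = l;
          *hi = h;
  }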
Signed-off-by: Stefan Agner <stefan@agner.ch> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Mogens Lauridsen [Thu, 3 Aug 2017 13:34:12 +0000 (15:34 +0200)]
crypto: sahara - Fix dma unmap direction
The direction used in dma_unmap_sg in the aes calculation is wrong.
This results in the cache not being invalidated correctly when the aes
calculation is done and the result has been DMA'ed to memory.
This is seen as sporadically wrong results from the aes calculation.
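Illustrative sketch only (not the driver's actual variable names): the buffer the engine writes the result into has to be unmapped with DMA_FROM_DEVICE so the CPU cache lines are invalidated before the result is read:
  /* input data was mapped towards the device */
  dma_unmap_sg(dev, req->src, src_nents, DMA_TO_DEVICE);
  /* the AES result was written by the device; invalidate CPU cache lines */
  dma_unmap_sg(dev, req->dst, dst_nents, DMA_FROM_DEVICE);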
Signed-off-by: Mogens Lauridsen <mlauridsen171@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
af_alg_alloc_areq: allocation of the request data structure for the
cipher operation
af_alg_get_rsgl: creation of the RX SGL anchored in the request data
structure
The following changes to the implementation without affecting the
functionality have been applied to synchronize slightly different code
bases in algif_skcipher and algif_aead:
The wakeup in af_alg_wait_for_data is triggered when either more data
is received or the indicator that more data is to be expected is
released. The first is triggered by user space, the second is
triggered by the kernel upon finishing the processing of data
(i.e. the kernel is ready for more).
af_alg_sendmsg uses size_t in min_t calculation for obtaining len.
Return code determination is consistent with algif_skcipher. The
scope of the variable i is reduced to match algif_aead. The type of the
variable i is switched from int to unsigned int to match algif_aead.
af_alg_sendpage does not contain the superfluous err = 0 from
aead_sendpage.
af_alg_async_cb requires the number of output bytes to be stored in
areq->outlen before the AIO callback is triggered.
POLLIN / POLLRDNORM is now set when either no more data is expected or
the kernel has been supplied with data. This is consistent with the wakeup
from sleep when the kernel waits for data.
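A minimal sketch of that poll condition, with field names assumed rather than taken verbatim from crypto/af_alg.c:
  /* readable when no more TX data is expected or processed data is ready */
  if (!ctx->more || ctx->used)
          mask |= POLLIN | POLLRDNORM;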
The request data structure is extended by the field last_rsgl which
points to the last RX SGL list entry. This shall help the recvmsg
implementation chain the RX SGL to other SG(L)s if needed. It is
currently used by algif_aead which chains the tag SGL to the RX SGL
during decryption.
Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Like the other drivers implementing RSA in hardware, this
can be avoided by always enabling the base support when we build
CCP.
Fixes: ceeec0afd684 ("crypto: ccp - Add support for RSA on the CCP") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The added support for version 5 CCPs introduced a false-positive
warning in the RSA implementation:
drivers/crypto/ccp/ccp-ops.c: In function 'ccp_run_rsa_cmd':
drivers/crypto/ccp/ccp-ops.c:1856:3: error: 'sb_count' may be used uninitialized in this function [-Werror=maybe-uninitialized]
This changes the code in a way that should make it easier for
the compiler to track the state of the sb_count variable, and
avoid the warning.
Fixes: 6ba46c7d4d7e ("crypto: ccp - Fix base RSA function for version 5 CCPs") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Gary R Hook <gary.hook@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: serpent - improve __serpent_setkey with UBSAN
When UBSAN is enabled, we get a very large stack frame for
__serpent_setkey: the register allocator ends up using more registers
than it has and has to spill temporary values to the stack. The code
was originally optimized for in-order x86-32 CPU implementations using
older compilers, but it now runs into a highly suboptimal case on all
CPU architectures, as seen by this warning:
crypto/serpent_generic.c: In function '__serpent_setkey':
crypto/serpent_generic.c:436:1: error: the frame size of 2720 bytes is larger than 2048 bytes [-Werror=frame-larger-than=]
Disabling -fsanitize=alignment would avoid that warning; presumably the
option turns off an optimization step that is required for getting the
register allocation right, but there is no easy way to do that on gcc-7
(gcc-8 introduces a function attribute for this).
I tried to figure out a way to modify the source code instead, and noticed
that the two stages of the setkey() function (keyiter and sbox) each are
fine by themselves, but not when combined into one function. Splitting
out the entire sbox into a separate function also happens to work fine
with all compilers I tried (arm, arm64 and x86).
The setkey function uses a strange way to handle offsets into the key
array, using both negative and positive index values, as well as adjusting
the array pointer back and forth. I have checked that this actually
makes no difference to modern compilers, but I left that untouched
to make the patch easier to review and to keep the code closer to
the reference implementation.
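A generic illustration of the split (not the serpent code itself): giving the second stage its own non-inlined function gives each stage its own, smaller stack frame under UBSAN:
  /* hypothetical two-stage setkey, mirroring the keyiter/sbox split */
  static noinline void setkey_sbox_stage(u32 *k)
  {
          /* S-box substitution pass over the expanded key material */
  }

  static void setkey_keyiter_stage(u32 *k)
  {
          /* linear recurrence expanding the user-supplied key */
  }

  static int setkey_example(u32 *k)
  {
          setkey_keyiter_stage(k);
          setkey_sbox_stage(k);   /* its spills no longer enlarge this frame */
          return 0;
  }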
Stephan Mueller [Sun, 30 Jul 2017 12:32:58 +0000 (14:32 +0200)]
crypto: algif_aead - copy AAD from src to dst
Use the NULL cipher to copy the AAD and PT/CT from the TX SGL
to the RX SGL. This allows an in-place crypto operation on the
RX SGL for encryption, because the TX data is always smaller or
equal to the RX data (the RX data will hold the tag).
For decryption, a per-request TX SGL is created which will only hold
the tag value. As the RX SGL will have no space for the tag value and
an in-place operation will not write the tag buffer, the TX SGL with the
tag value is chained to the RX SGL. This now allows an in-place
crypto operation.
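A hedged sketch of the decryption-side chaining described above (variable names assumed): the per-request TX SGL holding only the tag is hooked behind the RX SGL through a chain entry:
  /* the extra RX SGL slot holds the chain link to the tag SGL */
  sg_chain(rx_sg, rx_nents + 1, tag_sg);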
Gary R Hook [Tue, 25 Jul 2017 19:21:34 +0000 (14:21 -0500)]
crypto: ccp - Rework the unit-size check for XTS-AES
The CCP supports a limited set of unit-size values. Change the check
for this parameter such that acceptable values match the enumeration.
Then clarify the conditions under which we must use the fallback
implementation.
Signed-off-by: Gary R Hook <gary.hook@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Gary R Hook [Tue, 25 Jul 2017 19:12:11 +0000 (14:12 -0500)]
crypto: ccp - Fix XTS-AES-128 support on v5 CCPs
Version 5 CCPs have some new requirements for XTS-AES: the type field
must be specified, and the key requires 512 bits, with each part
occupying 256 bits and padded with zeroes.
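Illustration only (the exact buffer layout and byte order are not shown here): expanding a 2 x 128-bit XTS key so each half occupies a zero-padded 256-bit slot of the 512-bit key area:
  u8 xts_key_area[64] = { 0 };               /* 512 bits, zero padded */

  memcpy(xts_key_area,      key,      16);   /* AES key   -> first 256-bit slot  */
  memcpy(xts_key_area + 32, key + 16, 16);   /* tweak key -> second 256-bit slot */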
cc: <stable@vger.kernel.org> # 4.9.x+
Signed-off-by: Gary R Hook <ghook@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: arm64/aes - avoid expanded lookup tables in the final round
For the final round, avoid the expanded and padded lookup tables
exported by the generic AES driver. Instead, for encryption, we can
perform byte loads from the same table we used for the inner rounds,
which will still be hot in the caches. For decryption, use the inverse
AES Sbox directly, which is 4x smaller than the inverse lookup table
exported by the generic driver.
This should significantly reduce the Dcache footprint of our code,
which makes the code more robust against timing attacks. It does not
introduce any additional module dependencies, given that we already
rely on the core AES module for the shared key expansion routines.
It also frees up register x18, which is not available as a scratch
register on all platforms; avoiding it improves the shareability of
this code.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: arm/aes - avoid expanded lookup tables in the final round
For the final round, avoid the expanded and padded lookup tables
exported by the generic AES driver. Instead, for encryption, we can
perform byte loads from the same table we used for the inner rounds,
which will still be hot in the caches. For decryption, use the inverse
AES Sbox directly, which is 4x smaller than the inverse lookup table
exported by the generic driver.
This should significantly reduce the Dcache footprint of our code,
which makes the code more robust against timing attacks. It does not
introduce any additional module dependencies, given that we already
rely on the core AES module for the shared key expansion routines.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: arm64/ghash - add NEON accelerated fallback for 64-bit PMULL
Implement a NEON fallback for systems that do support NEON but have
no support for the optional 64x64->128 polynomial multiplication
instruction that is part of the ARMv8 Crypto Extensions. It is based
on the paper "Fast Software Polynomial Multiplication on ARM Processors
Using the NEON Engine" by Danilo Camara, Conrado Gouvea, Julio Lopez and
Ricardo Dahab (https://hal.inria.fr/hal-01506572), but has been reworked
extensively for the AArch64 ISA.
On a low-end core such as the Cortex-A53 found in the Raspberry Pi3, the
NEON based implementation is 4x faster than the table based one, and
is time invariant as well, making it less vulnerable to timing attacks.
When combined with the bit-sliced NEON implementation of AES-CTR, the
AES-GCM performance increases by 2x (from 58 to 29 cycles per byte).
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: arm/ghash - add NEON accelerated fallback for vmull.p64
Implement a NEON fallback for systems that do support NEON but have
no support for the optional 64x64->128 polynomial multiplication
instruction that is part of the ARMv8 Crypto Extensions. It is based
on the paper "Fast Software Polynomial Multiplication on ARM Processors
Using the NEON Engine" by Danilo Camara, Conrado Gouvea, Julio Lopez and
Ricardo Dahab (https://hal.inria.fr/hal-01506572)
On a 32-bit guest executing under KVM on a Cortex-A57, the new code is
not only 4x faster than the generic table based GHASH driver, it is also
time invariant. (Note that the existing vmull.p64 code is 16x faster on
this core).
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: arm64/gcm - implement native driver using v8 Crypto Extensions
Currently, the AES-GCM implementation for arm64 systems that support the
ARMv8 Crypto Extensions is based on the generic GCM module, which combines
the AES-CTR implementation using AES instructions with the PMULL based
GHASH driver. This is suboptimal, given the fact that the input data needs
to be loaded twice, once for the encryption and again for the MAC
calculation.
On Cortex-A57 (r1p2) and other recent cores that implement micro-op fusing
for the AES instructions, AES executes at less than 1 cycle per byte, which
means that any cycles wasted on loading the data twice hurt even more.
So implement a new GCM driver that combines the AES and PMULL instructions
at the block level. This improves performance on Cortex-A57 by ~37% (from
3.5 cpb to 2.6 cpb)
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: arm64/aes-bs - implement non-SIMD fallback for AES-CTR
Of the various chaining modes implemented by the bit sliced AES driver,
only CTR is exposed as a synchronous cipher, and requires a fallback in
order to remain usable once we update the kernel mode NEON handling logic
to disallow nested use. So wire up the existing CTR fallback C code.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: arm64/chacha20 - take may_use_simd() into account
To accommodate systems that disallow the use of kernel mode NEON in
some circumstances, take the return value of may_use_simd into
account when deciding whether to invoke the C fallback routine.
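The resulting pattern, sketched with hypothetical function names (the actual routine names in the arm64 driver may differ):
  if (!may_use_simd()) {
          chacha20_generic_block(state, dst, src, bytes);  /* C fallback */
          return;
  }

  kernel_neon_begin();
  chacha20_neon_block(state, dst, src, bytes);
  kernel_neon_end();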
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: arm64/aes-blk - add a non-SIMD fallback for synchronous CTR
To accommodate systems that may disallow use of the NEON in kernel mode
in some circumstances, introduce a C fallback for synchronous AES in CTR
mode, and use it if may_use_simd() returns false.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The arm64 kernel will shortly disallow nested kernel mode NEON.
So honour this in the ARMv8 Crypto Extensions implementation of
CCM-AES, and fall back to a scalar implementation using the generic
crypto helpers for AES, XOR and incrementing the CTR counter.
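A sketch of one scalar CTR step using those generic helpers; the tfm and buffer names (ctx->fallback_tfm, ks, ctr, buf) are illustrative:
  /* encrypt the counter block with the generic AES cipher */
  crypto_cipher_encrypt_one(ctx->fallback_tfm, ks, ctr);
  /* XOR the keystream into the data in place */
  crypto_xor(buf, ks, AES_BLOCK_SIZE);
  /* increment the big-endian counter for the next block */
  crypto_inc(ctr, AES_BLOCK_SIZE);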
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: arm64/aes-ce-cipher - match round key endianness with generic code
In order to be able to reuse the generic AES code as a fallback for
situations where the NEON may not be used, update the key handling
to match the byte order of the generic code: it stores round keys
as sequences of 32-bit quantities rather than streams of bytes, and
so our code needs to be updated to reflect that.
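A hedged sketch of the convention (rk, key_words and in_key are illustrative names): round keys are kept as arrays of u32 in the byte order the generic code expects:
  /* load each 32-bit round key word the way the generic AES code stores it */
  for (i = 0; i < key_words; i++)
          rk[i] = get_unaligned_le32(in_key + i * sizeof(u32));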
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
where crypto_xor() is preceded or followed by a memcpy() invocation
that is only there because crypto_xor() uses its output parameter as
one of the inputs. To avoid having to add new instances of this pattern
in the arm64 code, which will be refactored to implement non-SIMD
fallbacks, add an alternative implementation called crypto_xor_cpy(),
taking separate input and output arguments. This removes the need for
the separate memcpy().
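The pattern this removes, sketched side by side:
  /* before: copy, then XOR in place */
  memcpy(dst, src, len);
  crypto_xor(dst, keystream, len);

  /* after: separate input and output arguments, no memcpy() needed */
  crypto_xor_cpy(dst, src, keystream, len);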
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: algapi - use separate dst and src operands for __crypto_xor()
In preparation for introducing crypto_xor_cpy(), which will use separate
operands for input and output, modify the __crypto_xor() implementation
that it will share with the existing crypto_xor(), and which provides the
actual functionality when the inline version is not used.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Zain Wang [Mon, 24 Jul 2017 01:23:13 +0000 (09:23 +0800)]
crypto: rockchip - move the crypto completion from interrupt context
It's illegal to call the completion function from hardirq context;
it will cause runtime tests to fail. Let's add a new task (done_task)
to move the update operation out of hardirq context.
Signed-off-by: zain wang <wzz@rock-chips.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add binding documentation for the Freescale RNGC found on
some i.MX2/3 SoCs.
Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de> Signed-off-by: Martin Kaiser <martin@kaiser.cx> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: scompress - defer allocation of scratch buffer to first use
The scompress code allocates 2 x 128 KB of scratch buffers for each CPU,
so that clients of the async API can use synchronous implementations
even from atomic context. However, on systems such as Cavium Thunderx
(which has 96 cores), this adds up to a non-negligible 24 MB. Also,
32-bit systems may prefer to use their precious vmalloc space for other
things, especially since there don't appear to be any clients for the
async compression API yet.
So let's defer allocation of the scratch buffers until the first time
we allocate an acompress cipher based on an scompress implementation.
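A hedged sketch of the deferred allocation (lock, counter and helper names assumed): the scratch buffers are allocated the first time an acomp tfm backed by an scomp algorithm is instantiated:
  int ret = 0;

  mutex_lock(&scomp_lock);
  if (!scomp_scratch_users++)          /* first user allocates for all CPUs */
          ret = crypto_scomp_alloc_all_scratches();
  mutex_unlock(&scomp_lock);
  return ret;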
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: scompress - free partially allocated scratch buffers on failure
When allocating the per-CPU scratch buffers, we allocate the source
and destination buffers separately, but bail immediately if the second
allocation fails, without freeing the first one. Fix that.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: scompress - don't sleep with preemption disabled
Due to the use of per-CPU buffers, scomp_acomp_comp_decomp() executes
with preemption disabled, and so whether the CRYPTO_TFM_REQ_MAY_SLEEP
flag is set is irrelevant, since we cannot sleep anyway. So disregard
the flag, and use GFP_ATOMIC unconditionally.
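In code terms the change amounts to something like the following; the allocation site and variable names are purely illustrative:
  /* preemption is disabled while the per-CPU scratch buffers are in use,
   * so a sleeping allocation is never safe here, whatever the request says
   */
  buf = kmalloc(len, GFP_ATOMIC);   /* unconditionally atomic */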
crypto: brcm - Support more FlexRM rings than SPU engines.
Enhance the code to generically support cases where the number of DMA
rings is greater than or equal to the number of SPU engines.
The new hardware has an underlying DMA engine (FlexRM) with 32 rings
which can be used to communicate with any of the available
10 SPU engines.
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com> Reviewed-by: Scott Branden <scott.branden@broadcom.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: ccp - remove duplicate module version and author entry
commit 720419f01832 ("crypto: ccp - Introduce the AMD Secure Processor device")
moved the module registration from ccp-dev.c to sp-dev.c, but the patch missed
removing the module version and author entries from ccp-dev.c.
This causes the below warning during boot when CONFIG_CRYPTO_DEV_SP_CCP=y
and CONFIG_CRYPTO_DEV_CCP_CRYPTO=y are set.
Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: Gary R Hook <gary.hook@amd.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: David S. Miller <davem@davemloft.net> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove xts(aes) speed tests with 2 x 192-bit keys, since implementations
adhering strictly to IEEE 1619-2007 standard cannot cope with key sizes
other than 2 x 128, 2 x 256 bits - i.e. AES-XTS-{128,256}:
[...]
tcrypt: test 5 (384 bit key, 16 byte blocks):
caam_jr 8020000.jr: key size mismatch
tcrypt: setkey() failed flags=200000
[...]
Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Colin Ian King [Wed, 19 Jul 2017 09:24:15 +0000 (10:24 +0100)]
Crypto: atmel-ecc: Make a couple of local functions static
Functions atmel_ecc_i2c_client_alloc and atmel_ecc_i2c_client_free are
local to the source file and do not need to be in global scope. Make
them static.
Cleans up sparse warnings:
symbol 'atmel_ecc_i2c_client_alloc' was not declared. Should it be static?
symbol 'atmel_ecc_i2c_client_free' was not declared. Should it be static?
Signed-off-by: Colin Ian King <colin.king@canonical.com> Acked-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Gustavo A. R. Silva [Tue, 18 Jul 2017 23:07:12 +0000 (18:07 -0500)]
crypto: img-hash - remove unnecessary static in img_hash_remove()
Remove unnecessary static on local variable hdev. The variable
is initialized before being used, on every execution path throughout
the function. The static has no benefit and removing it reduces the
object file size.
This issue was detected using Coccinelle and the following semantic patch:
https://github.com/GustavoARSilva/coccinelle/blob/master/static/static_unused.cocci
In the following log you can see a significant difference in the object
file size. This log is the output of the size command, before and after
the code change:
before:
text data bss dec hex filename
14842 6464 128 21434 53ba drivers/crypto/img-hash.o
after:
text data bss dec hex filename
14789 6376 64 21229 52ed drivers/crypto/img-hash.o
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Gustavo A. R. Silva [Tue, 18 Jul 2017 23:05:50 +0000 (18:05 -0500)]
crypto: atmel-tdes - remove unnecessary static in atmel_tdes_remove()
Remove unnecessary static on local variable tdes_dd. The variable
is initialized before being used, on every execution path throughout
the function. The static has no benefit and removing it reduces the
object file size.
This issue was detected using Coccinelle and the following semantic patch:
https://github.com/GustavoARSilva/coccinelle/blob/master/static/static_unused.cocci
In the following log you can see a significant difference in the object
file size. This log is the output of the size command, before and after
the code change:
before:
text data bss dec hex filename
17079 8704 128 25911 6537 drivers/crypto/atmel-tdes.o
after:
text data bss dec hex filename
17039 8616 64 25719 6477 drivers/crypto/atmel-tdes.o
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Gustavo A. R. Silva [Tue, 18 Jul 2017 23:04:45 +0000 (18:04 -0500)]
crypto: atmel-sha - remove unnecessary static in atmel_sha_remove()
Remove unnecessary static on local variable sha_dd. The variable
is initialized before being used, on every execution path throughout
the function. The static has no benefit and removing it reduces the
object file size.
This issue was detected using Coccinelle and the following semantic patch:
https://github.com/GustavoARSilva/coccinelle/blob/master/static/static_unused.cocci
In the following log you can see a significant difference in the object
file size. This log is the output of the size command, before and after
the code change:
before:
text data bss dec hex filename
30005 10264 128 40397 9dcd drivers/crypto/atmel-sha.o
after:
text data bss dec hex filename
29934 10208 64 40206 9d0e drivers/crypto/atmel-sha.o
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Gustavo A. R. Silva [Tue, 18 Jul 2017 23:03:11 +0000 (18:03 -0500)]
crypto: omap-sham - remove unnecessary static in omap_sham_remove()
Remove unnecessary static on local variable dd. The variable
is initialized before being used, on every execution path throughout
the function. The static has no benefit and removing it reduces the
object file size.
This issue was detected using Coccinelle and the following semantic patch:
https://github.com/GustavoARSilva/coccinelle/blob/master/static/static_unused.cocci
In the following log you can see a difference in the object file size.
This log is the output of the size command, before and after the code
change:
before:
text data bss dec hex filename
26135 11944 128 38207 953f drivers/crypto/omap-sham.o
after:
text data bss dec hex filename
26084 11856 64 38004 9474 drivers/crypto/omap-sham.o
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rob Herring [Tue, 18 Jul 2017 21:42:56 +0000 (16:42 -0500)]
crypto: n2 - Convert to using %pOF instead of full_name
Now that we have a custom printf format specifier, convert users of
full_name to use %pOF instead. This is preparation to remove storing
of the full path string for each node.
Signed-off-by: Rob Herring <robh@kernel.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: linux-crypto@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add support for using the caam/jr backend on DPAA2-based SoCs.
These have some particularities we have to account for:
-HW S/G format is different
-Management Complex (MC) firmware initializes / manages (partially)
the CAAM block: MCFGR, QI enablement in QICTL, RNG
Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Gary R Hook [Mon, 17 Jul 2017 20:16:13 +0000 (15:16 -0500)]
crypto: ccp - Fix base RSA function for version 5 CCPs
Version 5 devices have requirements for buffer lengths, as well as
parameter format (e.g. bits vs. bytes). Fix the base CCP driver
code to meet the requirements of all supported versions.
Signed-off-by: Gary R Hook <gary.hook@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Jason A. Donenfeld [Sun, 16 Jul 2017 17:22:06 +0000 (19:22 +0200)]
crypto: rng - ensure that the RNG is ready before using
Otherwise, we might be seeding the RNG using bad randomness, which is
dangerous. The one use of this function from within the kernel -- not
from userspace -- is being removed (keys/big_key), so that call site
isn't relevant in assessing this.
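One way to implement the guarantee described above, assuming the wait_for_random_bytes() helper from the random core and illustrative seed buffer names:
  /* block until the CRNG is initialized before pulling seed material */
  err = wait_for_random_bytes();
  if (err)
          return err;
  get_random_bytes(seed, seedsize);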
Cc: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Stephan Mueller [Sun, 25 Jun 2017 15:12:59 +0000 (17:12 +0200)]
crypto: algif_aead - overhaul memory management
The updated memory management is described in the top part of the code.
As one benefit of the changed memory management, the AIO and synchronous
operation is now implemented in one common function. The AF_ALG
operation uses the async kernel crypto API interface for each cipher
operation. Thus, the only difference between the AIO and sync operation
types visible from user space is:
1. the callback function to be invoked when the asynchronous operation
is completed
2. whether to wait for the completion of the kernel crypto API operation
or not
The change includes the overhaul of the TX and RX SGL handling. The TX
SGL holding the data sent from user space to the kernel is now dynamic
similar to algif_skcipher. This dynamic nature allows a continuous
operation of a thread sending data and a second thread receiving the
data. These threads do not need to synchronize, as the kernel processes
as much data from the TX SGL as is needed to fill the RX SGL.
The caller reading the data from the kernel defines the amount of data
to be processed. Considering that the interface covers AEAD
authenticating ciphers, the reader must provide a buffer of the
correct size. Thus the reader defines the encryption size.
Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The updated memory management is described in the top part of the code.
As one benefit of the changed memory management, the AIO and synchronous
operation is now implemented in one common function. The AF_ALG
operation uses the async kernel crypto API interface for each cipher
operation. Thus, the only difference between the AIO and sync operation
types visible from user space is:
1. the callback function to be invoked when the asynchronous operation
is completed
2. whether to wait for the completion of the kernel crypto API operation
or not
In addition, the code structure is adjusted to match the structure of
algif_aead for easier code assessment.
The user space interface changed slightly as follows: the old AIO
operation returned zero upon success and < 0 in case of an error to user
space. As all other AF_ALG interfaces (including the sync skcipher
interface) returned the number of processed bytes upon success and < 0
in case of an error, the new skcipher interface (regardless of AIO or
sync) returns the number of processed bytes in case of success.
Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Harald Freudenberger [Tue, 11 Jul 2017 07:36:23 +0000 (09:36 +0200)]
hwrng: remember rng chosen by user
When a user chooses an rng source via the sysfs attribute,
this rng should be sticky, even when other sources
with better quality register later. This patch introduces
a simple way to remember the user's choice. This is
reflected by a new sysfs attribute file 'rng_selected'
which shows whether the current rng has been chosen by
userspace. The new attribute file shows '1' for a user
selected rng and '0' otherwise.
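A hedged sketch of the new attribute's show routine (function and flag names assumed):
  static ssize_t rng_selected_show(struct device *dev,
                                   struct device_attribute *attr, char *buf)
  {
          /* '1' if the current rng was picked by userspace, '0' otherwise */
          return scnprintf(buf, PAGE_SIZE, "%d\n", cur_rng_set_by_user);
  }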
Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com> Reviewed-by: PrasannaKumar Muralidharan <prasannatsmkumar@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Harald Freudenberger [Tue, 11 Jul 2017 07:36:22 +0000 (09:36 +0200)]
hwrng: use rng source with best quality
This patch reworks the hwrng core to always use the
rng source with the best entropy quality.
On registration and unregistration the hwrng core now
tries to choose the best (= highest quality value)
rng source. The internal list of registered rng sources
is now kept sorted by quality and the topmost rng
is chosen.
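A sketch of quality-ordered insertion (list and field names assumed): keeping the list sorted means the head is always the best available source:
  struct hwrng *tmp;

  list_for_each_entry(tmp, &rng_list, list) {
          if (tmp->quality < rng->quality)
                  break;
  }
  /* insert before the first lower-quality entry (or at the tail) */
  list_add_tail(&rng->list, &tmp->list);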
Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com> Reviewed-by: PrasannaKumar Muralidharan <prasannatsmkumar@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: caam/qi - fix AD length endianness in S/G entry
Associated data (AD) length is read by CAAM from an S/G entry
that is initially filled by the GPP.
Accordingly, AD length has to be stored in CAAM endianness.
Fixes: b189817cf789 ("crypto: caam/qi - add ablkcipher and authenc algorithms") Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: caam/qi - handle large number of S/Gs case
For more than 16 S/G entries, the driver currently corrupts memory
on ARMv8; see the KASAN log below.
Note: this does not reproduce on PowerPC due to different (smaller)
cache line size - 64 bytes on PPC vs. 128 bytes on ARMv8.
One such use case is one of the cbc(aes) test vectors - with 8 S/G
entries and src != dst. The driver needs 1 (IV) + 2 x 8 = 17 entries,
which goes over the 16 S/G entries limit:
(CAAM_QI_MEMCACHE_SIZE - offsetof(struct ablkcipher_edesc, sgt)) /
sizeof(struct qm_sg_entry) = 256 / 16 = 16 S/Gs
Fix this by:
-increasing object size in caamqicache pool from 512 to 768; this means
the maximum number of S/G entries grows from (at least) 16 to 32
(again, for ARMv8 case of 128-byte cache line)
-adding checks in the driver to fail gracefully (ENOMEM) in case the 32 S/G
entries limit is exceeded
==================================================================
BUG: KASAN: slab-out-of-bounds in ablkcipher_edesc_alloc+0x4ec/0xf60
Write of size 1 at addr ffff800021cb6003 by task cryptomgr_test/1394
Memory state around the buggy address:
 ffff800021cb5f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 ffff800021cb5f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffff800021cb6000: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
                   ^
 ffff800021cb6080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff800021cb6100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
==================================================================
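A hedged sketch of the added bounds check mentioned above (the limit macro name is assumed):
  if (unlikely(qm_sg_ents > CAAM_QI_MAX_SG_ENTRIES)) {
          dev_err(qidev, "No space for %d S/G entries\n", qm_sg_ents);
          return ERR_PTR(-ENOMEM);
  }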
Fixes: b189817cf789 ("crypto: caam/qi - add ablkcipher and authenc algorithms") Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: caam/qi - properly set IV after {en,de}crypt
caam/qi needs a fix similar to what was done for caam/jr in
commit "crypto: caam/qi - properly set IV after {en,de}crypt",
to allow for ablkcipher/skcipher chunking/streaming.
Cc: <stable@vger.kernel.org> Fixes: b189817cf789 ("crypto: caam/qi - add ablkcipher and authenc algorithms") Suggested-by: David Gstir <david@sigma-star.at> Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: caam/qi - fix compilation with DEBUG enabled
caam/qi driver does not compile when DEBUG is enabled
(CRYPTO_DEV_FSL_CAAM_DEBUG=y):
drivers/crypto/caam/caamalg_qi.c: In function 'ablkcipher_done':
drivers/crypto/caam/caamalg_qi.c:794:2: error: implicit declaration of function 'dbg_dump_sg' [-Werror=implicit-function-declaration]
dbg_dump_sg(KERN_ERR, "dst @" __stringify(__LINE__)": ",
Since dbg_dump_sg() is shared between caam/jr and caam/qi, move it
to a shared location and export it.
At the same time:
-reduce ifdeffery by providing a no-op implementation for the !DEBUG case
-rename it to caam_dump_sg() to be consistent in terms of
exported symbols namespace (caam_*)
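A sketch of the no-op arrangement; the exact prototype is assumed to mirror print_hex_dump()-style arguments:
  #ifdef DEBUG
  void caam_dump_sg(const char *level, const char *prefix_str, int prefix_type,
                    int rowsize, int groupsize, struct scatterlist *sg,
                    size_t tlen, bool ascii);
  #else
  /* no-op stub keeps callers free of #ifdef DEBUG */
  static inline void caam_dump_sg(const char *level, const char *prefix_str,
                                  int prefix_type, int rowsize, int groupsize,
                                  struct scatterlist *sg, size_t tlen,
                                  bool ascii)
  {}
  #endif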
Cc: <stable@vger.kernel.org> Fixes: b189817cf789 ("crypto: caam/qi - add ablkcipher and authenc algorithms") Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Chris Gorman [Thu, 6 Jul 2017 18:44:56 +0000 (14:44 -0400)]
crypto: geode-aes - fixed coding style warnings and error
fixed WARNING: Block comments should align the * on each line
fixed WARNINGs: Missing a blank line after declarations
fixed ERROR: space prohibited before that ',' (ctx:WxE)
Signed-off-by: Chris Gorman <chrisjohgorman@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
CCP device initialization is now integrated into the higher level SP device.
To avoid confusion, let's rename the ccp driver initialization files
(ccp-platform.c -> sp-platform.c, ccp-pci.c -> sp-pci.c). The patch does not
make any functional changes other than renaming files and structures.
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com> Acked-by: Gary R Hook <gary.hook@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CCP and PSP devices that are part of the AMD Secure Processor may share
the same interrupt. Hence we expand the SP device to register a common
interrupt handler and provide functions for the CCP and PSP devices to
register their interrupt callbacks, which will be invoked upon interrupt.
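A hedged sketch of the dispatch (structure field names assumed): the SP device owns the interrupt line and forwards it to whichever sub-devices registered a callback:
  static irqreturn_t sp_irq_handler(int irq, void *data)
  {
          struct sp_device *sp = data;

          if (sp->ccp_irq_handler)
                  sp->ccp_irq_handler(irq, sp->ccp_irq_data);
          if (sp->psp_irq_handler)
                  sp->psp_irq_handler(irq, sp->psp_irq_data);

          return IRQ_HANDLED;
  }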
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com> Acked-by: Gary R Hook <gary.hook@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: ccp - Introduce the AMD Secure Processor device
The CCP device is part of the AMD Secure Processor. In order to expand
the usage of the AMD Secure Processor, create a framework that allows
functional components of the AMD Secure Processor to be initialized and
handled appropriately.
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com> Acked-by: Gary R Hook <gary.hook@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto: ccp - Use devres interface to allocate PCI/iomap and cleanup
Update the pci and platform files to use the devres interface to allocate the
PCI and iomap resources. Also add helper functions to consolidate duplicated
module init, exit and power management code.
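A sketch of the devres-based PCI setup (the BAR index and region name are illustrative); the managed variants release everything automatically on driver detach:
  ret = pcim_enable_device(pdev);
  if (ret)
          return ret;

  ret = pcim_iomap_regions(pdev, 1 << bar, "sp");
  if (ret)
          return ret;

  io_map = pcim_iomap_table(pdev)[bar];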
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com> Acked-by: Gary R Hook <gary.hook@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The device features hardware acceleration for the NIST standard
P256 prime curve and supports the complete key life cycle from
private key generation to ECDH key agreement.
Random private key generation is supported internally within
the device to ensure that the private key can never be known
outside of the device. If the user wants to use their own private
keys, the driver will fall back to the ecdh software implementation.
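A hedged sketch of that fallback decision (the flag and fallback tfm field are assumed names):
  /* a caller-supplied private key cannot stay inside the device, so the
   * whole key agreement is handed to the software ecdh implementation
   */
  if (user_supplied_private_key)
          return crypto_kpp_set_secret(ctx->fallback, buf, len);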
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>