The initial value for the max calculation needs to be 0. The fallback
needs to come last. With the old code, the value was never smaller
than the fallback.
For RSA_ALT, use MPI_MAX_SIZE. Only use this if RSA_ALT is enabled.
For PSA, check PSA_ASYMMETRIC_SIGNATURE_MAX_SIZE, and separately check
the special case of ECDSA where PSA and mbedtls have different
representations for the signature.
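Putting these rules together, the running-max pattern can be sketched as below. This is not the exact library code: the branch conditions and bounds are simplified, and only the macro names mentioned above are taken from the real headers.

```c
/* Sketch: build MBEDTLS_PK_SIGNATURE_MAX_SIZE as a running maximum.
 * Start at 0 and let each enabled signature type raise the bound,
 * so no branch can silently lower it. */
#define MBEDTLS_PK_SIGNATURE_MAX_SIZE 0

#if (defined(MBEDTLS_RSA_C) || defined(MBEDTLS_PK_RSA_ALT_SUPPORT)) && \
    MBEDTLS_MPI_MAX_SIZE > MBEDTLS_PK_SIGNATURE_MAX_SIZE
#undef MBEDTLS_PK_SIGNATURE_MAX_SIZE
#define MBEDTLS_PK_SIGNATURE_MAX_SIZE MBEDTLS_MPI_MAX_SIZE
#endif

#if defined(MBEDTLS_ECDSA_C) && \
    MBEDTLS_ECDSA_MAX_LEN > MBEDTLS_PK_SIGNATURE_MAX_SIZE
#undef MBEDTLS_PK_SIGNATURE_MAX_SIZE
#define MBEDTLS_PK_SIGNATURE_MAX_SIZE MBEDTLS_ECDSA_MAX_LEN
#endif

#if defined(MBEDTLS_USE_PSA_CRYPTO) && \
    PSA_ASYMMETRIC_SIGNATURE_MAX_SIZE > MBEDTLS_PK_SIGNATURE_MAX_SIZE
#undef MBEDTLS_PK_SIGNATURE_MAX_SIZE
#define MBEDTLS_PK_SIGNATURE_MAX_SIZE PSA_ASYMMETRIC_SIGNATURE_MAX_SIZE
#endif

/* Any unconditional fallback would go here, after every branch that can
 * raise the bound, so it only wins when nothing else applies. */
```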
PSA_ASYMMETRIC_SIGNATURE_MAX_SIZE was taking the maximum ECDSA key
size as the ECDSA signature size. Fix it to use the actual maximum
size of an ECDSA signature.
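For reference, a raw ECDSA signature is the concatenation r || s, so its maximum size is twice the curve's byte size, not the key size. A minimal sketch of the arithmetic (the macro names here are illustrative, not the library's):

```c
/* Bytes needed to hold a value of the given bit size. */
#define BITS_TO_BYTES(bits) (((bits) + 7) / 8)

/* Raw ECDSA signature: r and s concatenated, each of curve size. */
#define ECDSA_RAW_SIGNATURE_SIZE(curve_bits) (2 * BITS_TO_BYTES(curve_bits))

/* Example, secp521r1:
 *   key (coordinate) size:  BITS_TO_BYTES(521)            = 66 bytes
 *   signature size:         ECDSA_RAW_SIGNATURE_SIZE(521) = 132 bytes */
```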
mbedtls_pk_sign does not take the size of its output buffer as a
parameter. We guarantee that MBEDTLS_PK_SIGNATURE_MAX_SIZE is enough.
For RSA and ECDSA signatures made in software, this is ensured by the
way MBEDTLS_PK_SIGNATURE_MAX_SIZE is defined at compile time. For
signatures made through RSA-alt and PSA, this is not guaranteed
robustly at compile time, but we can test it at runtime, so do that.
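A sketch of what such a runtime check can look like (the helper name is hypothetical and the error code is just a plausible choice):

```c
#include "mbedtls/pk.h"

/* Hypothetical guard for backends whose output size is not bounded at
 * compile time (RSA-alt, PSA): reject any signature that would overflow
 * a buffer of MBEDTLS_PK_SIGNATURE_MAX_SIZE bytes. */
static int pk_check_sig_len(size_t sig_len)
{
    if (sig_len > MBEDTLS_PK_SIGNATURE_MAX_SIZE) {
        return MBEDTLS_ERR_PK_BAD_INPUT_DATA;
    }
    return 0;
}
```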
The original definition of MBEDTLS_PK_SIGNATURE_MAX_SIZE only took RSA
into account. An ECDSA signature may be larger than the maximum
possible RSA signature size, depending on build options; for example
this is the case with config-suite-b.h.
In pk_sign_verify, if mbedtls_pk_sign() failed, sig_len was passed to
mbedtls_pk_verify_restartable() without having been initialized. This
worked only because, in the only test case that expects signing to
fail, the verify implementation doesn't look at sig_len before failing
for the expected reason.
The value of sig_len if sign() fails is undefined, so set sig_len to
something sensible.
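The shape of the fix, sketched below (pk_sign_verify is a test function; the names and the SHA-256 choice here are illustrative):

```c
#include "mbedtls/md.h"
#include "mbedtls/pk.h"

static void sign_then_verify_sketch(mbedtls_pk_context *pk,
                                    const unsigned char *hash, size_t hash_len,
                                    int (*f_rng)(void *, unsigned char *, size_t),
                                    void *p_rng)
{
    unsigned char sig[MBEDTLS_PK_SIGNATURE_MAX_SIZE];
    size_t sig_len = sizeof(sig);   /* defined value no matter what */

    if (mbedtls_pk_sign(pk, MBEDTLS_MD_SHA256, hash, hash_len,
                        sig, &sig_len, f_rng, p_rng) != 0) {
        sig_len = 0;                /* signing failed: no valid signature */
    }

    /* sig_len is now always initialized before being passed on to
     * mbedtls_pk_verify_restartable() or any other verify call. */
    (void) sig;
    (void) sig_len;
}
```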
It was not obvious before that `AES_KEY` and `RSA_KEY` were shorthand
for key material. A user copy-pasting the code snippet would run into a
compilation error if they didn't realize this. Make it more obvious that
key material must come from somewhere external by turning the snippets
that use global keys into functions that take a key as a parameter.
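As an illustration of the new snippet shape (using the classic AES API here rather than the exact functions in the affected documentation):

```c
#include "mbedtls/aes.h"

/* The key is a parameter, so it is obvious that the caller has to supply
 * real key material from somewhere external to the snippet. */
static int encrypt_block_with_key(const unsigned char *key,
                                  unsigned int key_bits,
                                  const unsigned char input[16],
                                  unsigned char output[16])
{
    mbedtls_aes_context aes;
    int ret;

    mbedtls_aes_init(&aes);
    ret = mbedtls_aes_setkey_enc(&aes, key, key_bits);
    if (ret == 0) {
        ret = mbedtls_aes_crypt_ecb(&aes, MBEDTLS_AES_ENCRYPT, input, output);
    }
    mbedtls_aes_free(&aes);
    return ret;
}
```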
When writing a private EC key, use a constant size for the private
value, as specified in RFC 5915. Previously, the value was written
as an ASN.1 INTEGER, which caused the size of the key to leak
about 1 bit of information on average, and could cause the value to be
1 byte too large for the output buffer.
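A sketch of the fixed-size write (the function name is illustrative and error handling is simplified; the key point is that the OCTET STRING length comes from the curve, not from the numeric value of d):

```c
#include "mbedtls/asn1write.h"
#include "mbedtls/ecp.h"
#include "mbedtls/platform_util.h"

static int write_ec_private_value(unsigned char **p, unsigned char *start,
                                  const mbedtls_ecp_keypair *ec)
{
    unsigned char buf[MBEDTLS_ECP_MAX_BYTES];
    size_t byte_length = mbedtls_mpi_size(&ec->grp.N); /* order length, fixed per curve */
    int ret;

    /* mbedtls_mpi_write_binary() left-pads with zeros up to byte_length,
     * so short private values do not shrink the encoding. */
    ret = mbedtls_mpi_write_binary(&ec->d, buf, byte_length);
    if (ret == 0) {
        ret = mbedtls_asn1_write_octet_string(p, start, buf, byte_length);
    }

    mbedtls_platform_zeroize(buf, sizeof(buf));
    return ret;
}
```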
Add pk_write test cases where the ASN.1 INTEGER encoding of the
private value would not have the mandatory size for the OCTET STRING
that contains the value.
ec_256_long_prv.pem is a random secp256r1 private key, selected so
that the private value is >= 2^255, i.e. the top bit of the first byte
is set (which would cause the INTEGER encoding to have an extra
leading 0 byte).
ec_521_short_prv.pem is a random secp521r1 private key, selected so
that the private value is < 2^519, i.e. the first byte is 0 and the
top bit of the second byte is 0 (which would cause the INTEGER
encoding to have one less 0 byte at the start).
1. Rename variables according to the Mbed TLS coding style.
2. Add a comment explaining why the optimization is safe.
3. Make the initialization of T2 safer and zeroize memory on function exit.
Enabling memory_buffer_alloc is slow and makes ASan ineffective. We
have a patch pending to remove it from the full config. In the
meantime, disable it explicitly.
Record checking fails if mbedtls_ssl_check_record() is called with an
external buffer. The received record's sequence number is available in
the incoming record, but not in the SSL context's `in_ctr` variable,
which is the one used when decoding the sequence number.
To fix the problem, temporarily update the SSL context's `in_ctr` to
point into the received record header, and restore it afterwards.
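The actual change lives inside mbedtls_ssl_check_record(); the save/point/restore pattern it relies on is sketched below. The helper names are hypothetical and the header offset is an assumption (a DTLS record where the epoch and sequence number follow the 1-byte type and 2-byte version).

```c
#include "mbedtls/ssl.h"

/* Hypothetical helper standing in for the internal record-header parsing
 * that reads the sequence number via ssl->in_ctr. */
static int parse_record_header_sketch(mbedtls_ssl_context *ssl,
                                      unsigned char *buf, size_t buflen);

/* Pattern applied while checking an externally supplied record (sketch). */
static int check_record_sketch(mbedtls_ssl_context *ssl,
                               unsigned char *buf, size_t buflen)
{
    unsigned char *saved_in_ctr = ssl->in_ctr;
    int ret;

    /* Assumed DTLS layout: type (1) + version (2), then the 8-byte
     * epoch + sequence number that in_ctr normally points at. */
    ssl->in_ctr = buf + 3;

    ret = parse_record_header_sketch(ssl, buf, buflen);

    /* Restore the context so normal record processing is unaffected. */
    ssl->in_ctr = saved_in_ctr;
    return ret;
}
```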
You can't reuse a CTR_DRBG context without free()ing it and
re-init()ing it. This generally happened to work, but was never
guaranteed. It could have failed with alternative implementations of
the AES module because mbedtls_ctr_drbg_seed() calls
mbedtls_aes_init() on a context which is already initialized if
mbedtls_ctr_drbg_seed() hasn't been called before, plausibly causing a
memory leak. Since the addition of mbedtls_ctr_drbg_set_nonce_len(),
the second call to mbedtls_ctr_drbg_seed() uses a nonsensical value as
the entropy nonce length.
Calling free() and seed() with no intervening init fails when
MBEDTLS_THREADING_C is enabled and all-bits-zero is not a valid mutex
representation.
You can't reuse a CTR_DRBG context without free()ing it and
re-init()ing. This generally happened to work, but was never
guaranteed. It could have failed with alternative implementations of
the AES module because mbedtls_ctr_drbg_seed() calls
mbedtls_aes_init() on a context which is already initialized if
mbedtls_ctr_drbg_seed() hasn't been called before, plausibly causing a
memory leak. Calling free() and seed() with no intervening init fails
when MBEDTLS_THREADING_C is enabled and all-bits-zero is not a valid
mutex representation. So add the missing free() and init().
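For reference, the supported reuse pattern (the personalization string here is just an example):

```c
#include "mbedtls/ctr_drbg.h"
#include "mbedtls/entropy.h"

/* Reusing a CTR_DRBG context: free it, re-init it, then seed it again. */
static int reuse_ctr_drbg(mbedtls_ctr_drbg_context *ctr_drbg,
                          mbedtls_entropy_context *entropy)
{
    const unsigned char pers[] = "ctr_drbg reuse example";

    mbedtls_ctr_drbg_free(ctr_drbg);   /* discard the old instance */
    mbedtls_ctr_drbg_init(ctr_drbg);   /* required before seeding again */

    return mbedtls_ctr_drbg_seed(ctr_drbg, mbedtls_entropy_func, entropy,
                                 pers, sizeof(pers) - 1);
}
```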
We want to explicitly disallow creating new transactions when a
transaction is already in progress. However, we were incorrectly
checking for the existence of the injected entropy file before
continuing with creating a transaction. This meant we could have a
transaction already in progress and would be able to still create a new
transaction. It also meant we couldn't start a new transaction if any
entropy had been injected. Check the transaction file instead of the
injected entropy file in order to prevent multiple concurrent
transactions.
The default entropy nonce length is either zero or nonzero depending
on the desired security strength and the entropy length.
The implementation calculates the actual entropy nonce length from the
actual entropy length, and therefore it doesn't need a constant that
indicates the default entropy nonce length. A portable application may
be interested in this constant, however. And our test code could
definitely use it.
Define a constant MBEDTLS_CTR_DRBG_ENTROPY_NONCE_LEN and use it in
test code. Previously, test_suite_ctr_drbg had knowledge about the
default entropy nonce length built in and test_suite_psa_crypto_init
failed. Now both use MBEDTLS_CTR_DRBG_ENTROPY_NONCE_LEN.
This change means that the test ctr_drbg_entropy_usage no longer
validates that the default entropy nonce length is sensible. So add a
new test that checks that the default entropy length and the default
entropy nonce length are sufficient to ensure the expected security
strength.
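The new test performs this check at run time; the arithmetic it relies on can be sketched as a compile-time assertion (EXPECTED_STRENGTH_BITS is a name local to this example, and the check assumes a full-entropy source):

```c
#include "mbedtls/ctr_drbg.h"

#if defined(MBEDTLS_CTR_DRBG_USE_128_BIT_KEY)
#define EXPECTED_STRENGTH_BITS 128
#else
#define EXPECTED_STRENGTH_BITS 256
#endif

/* SP 800-90A: the entropy input must carry at least the security strength
 * and the nonce at least half of it, so the initial seeding must gather
 * at least 3/2 * strength bits in total. */
#if (MBEDTLS_CTR_DRBG_ENTROPY_LEN + MBEDTLS_CTR_DRBG_ENTROPY_NONCE_LEN) * 8 < \
    EXPECTED_STRENGTH_BITS * 3 / 2
#error "Default entropy length + entropy nonce length is below the expected strength"
#endif
```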
Change the default entropy nonce length to be nonzero in some cases.
Specifically, the default nonce length is now set in such a way that
the entropy input during the initial seeding always contains enough
entropy to achieve the maximum possible security strength per
NIST SP 800-90A given the key size and entropy length.
If MBEDTLS_CTR_DRBG_ENTROPY_LEN is kept to its default value,
mbedtls_ctr_drbg_seed() now grabs extra entropy for a nonce if
MBEDTLS_CTR_DRBG_USE_128_BIT_KEY is disabled and either
MBEDTLS_ENTROPY_FORCE_SHA256 is enabled or MBEDTLS_SHA512_C is
disabled. If MBEDTLS_CTR_DRBG_USE_128_BIT_KEY is enabled, or if
the entropy module uses SHA-512, then the default value of
MBEDTLS_CTR_DRBG_ENTROPY_LEN does not require a second call to the
entropy function to achieve the maximum security strength.
This choice of default nonce size guarantees NIST compliance with the
maximum security strength while keeping backward compatibility and
performance high: in configurations that do not require grabbing more
entropy, the code will not grab more entropy than before.
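One way this default could be expressed, shown as a sketch rather than the exact expression in the header; MBEDTLS_CTR_DRBG_KEYSIZE is the AES key size in bytes, which also equals the security strength in bytes:

```c
#if !defined(MBEDTLS_CTR_DRBG_ENTROPY_NONCE_LEN)
#if MBEDTLS_CTR_DRBG_ENTROPY_LEN >= MBEDTLS_CTR_DRBG_KEYSIZE * 3 / 2
/* The entropy input alone covers strength + strength/2: no nonce needed,
 * so no more entropy is grabbed than in previous releases. */
#define MBEDTLS_CTR_DRBG_ENTROPY_NONCE_LEN 0
#else
/* Otherwise grab roughly half as much again as an entropy nonce. */
#define MBEDTLS_CTR_DRBG_ENTROPY_NONCE_LEN ((MBEDTLS_CTR_DRBG_ENTROPY_LEN + 1) / 2)
#endif
#endif
```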
Add a new function mbedtls_ctr_drbg_set_nonce_len() which configures
the DRBG instance to call f_entropy a second time during the initial
seeding to grab a nonce.
The default nonce length is 0, so there is no behavior change unless
the user calls the new function.
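Usage sketch (the 16-byte nonce length is just an example; the call has to come after init and before the initial seeding):

```c
#include "mbedtls/ctr_drbg.h"
#include "mbedtls/entropy.h"

static int seed_with_explicit_nonce(mbedtls_ctr_drbg_context *ctr_drbg,
                                    mbedtls_entropy_context *entropy)
{
    int ret;

    mbedtls_ctr_drbg_init(ctr_drbg);

    ret = mbedtls_ctr_drbg_set_nonce_len(ctr_drbg, 16);
    if (ret != 0) {
        return ret;
    }

    /* f_entropy is now called twice during the initial seeding:
     * once for the entropy input and once for the nonce. */
    return mbedtls_ctr_drbg_seed(ctr_drbg, mbedtls_entropy_func, entropy,
                                 NULL, 0);
}
```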
When running 'make test' with GNU make, if a test suite program
displays "PASSED", this was automatically counted as a pass. This
would in particular count as passing:
* A test suite with the substring "PASSED" in a test description.
* A test suite where all the test cases succeeded, but the final
cleanup failed, in particular if a sanitizer reported a memory leak.
Use the test executable's return status instead to determine whether
the test suite passed. It's always 0 on PASSED unless the executable's
cleanup code fails, and it's never 0 on any failure.
Fix ARMmbed/mbed-crypto#303
Some sanitizers default to displaying an error message and recovering.
This could result in a test being recorded as passing despite a
complaint from the sanitizer. Turn off sanitizer recovery to avoid
this risk.