Let the caller decide what certificates and keys are loaded (EC/RSA)
instead of loading both for the server, and an unspecified one
for the client. Use only DER encoding.
Change the encoding of key types, EC curve families and DH group
families to make the low-order bit a parity bit (with even parity).
This ensures that distinct key type values always have a Hamming
distance of at least 2, which makes it easier for implementations to
resist single bit flips.
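As a rough illustration (hypothetical helpers, not the actual PSA macros), even parity in the low-order bit can be produced and checked as below; flipping any single bit of a valid value breaks the parity, so distinct valid encodings stay at Hamming distance of at least 2:
```c
#include <stdint.h>

/* Hypothetical helper: given a value whose bit 0 is reserved for parity,
 * set bit 0 so that the total number of 1 bits is even. */
static uint16_t set_even_parity( uint16_t base )
{
    uint16_t v = base & 0xfffe;        /* clear the parity bit */
    uint16_t p = v;
    p ^= p >> 8; p ^= p >> 4; p ^= p >> 2; p ^= p >> 1;
    return v | ( p & 1 );              /* popcount of the result is even */
}

/* Hypothetical sanity check an implementation could run on a key type value. */
static int has_even_parity( uint16_t type )
{
    uint16_t p = type;
    p ^= p >> 8; p ^= p >> 4; p ^= p >> 2; p ^= p >> 1;
    return ( p & 1 ) == 0;
}
```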
All key types now have a 32-bit encoding in which the bottom 16 bits
are zero. Change to using 16 bits only.
Keep 32 bits for key types in storage, but move the significant
half-word from the top to the bottom.
Likewise, change EC curve and DH group families from 32 bits, of which
the top 8 and bottom 16 bits are zero, to 8 bits only.
Reorder psa_core_key_attributes_t to avoid padding.
Remove the values of curve encodings that are based on the TLS registry
and include the curve size, keeping only the new encoding that merely
encodes a curve family in 8 bits.
Keep the old constant names as aliases for the new values and
deprecate the old names.
Define constants for ECC curve families and DH group families. These
constants have 0x0000 in the lower 16 bits of the key type.
Support these constants in the implementation and in the PSA metadata
tests.
Switch the slot management and secure element driver HAL tests to the
new curve encodings. This requires SE driver code to become slightly
more clever when figuring out the bit-size of an imported EC key since
it now needs to take the data size into account.
Switch some documentation to the new encodings.
Remove the macro PSA_ECC_CURVE_BITS which can no longer be implemented.
Change the representation of psa_ecc_curve_t and psa_dh_group_t from
the IETF 16-bit encoding to a custom 24-bit encoding where the upper 8
bits represent a curve family and the lower 16 bits are the key size
in bits. Families are based on naming and mathematical similarity,
with sufficiently precise families that no two curves in a family have
the same bit size (for example SECP-R1 and SECP-R2 are two different
families).
As a consequence, the lower 16 bits of a key type value are always
either the key size or 0.
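A sketch of the described layout, with illustrative macro names rather than the PSA ones:
```c
#include <stdint.h>

/* Illustrative only: upper 8 bits = curve family, lower 16 bits = bit size. */
#define CURVE_MAKE( family, bits ) \
    ( ( (uint32_t) (family) << 16 ) | ( (uint32_t) (bits) & 0xffff ) )
#define CURVE_FAMILY( curve )  ( (uint8_t) ( (curve) >> 16 ) )
#define CURVE_BITS( curve )    ( (uint16_t) ( (curve) & 0xffff ) )
```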
Don't rely on the bit size encoded in the PSA curve identifier, in
preparation for removing that.
For some inputs, the error code on EC key creation changes from
PSA_ERROR_INVALID_ARGUMENT to PSA_ERROR_NOT_SUPPORTED or vice versa.
There will be further such changes in subsequent commits.
Key types are now encoded through a category in the upper 4 bits (bits
28-31) and a type-within-category in the next 11 bits (bits 17-27),
with bit 16 unused and bits 0-15 only used for the EC curve or DH
group.
For symmetric keys, bits 20-22 encode the base-2 logarithm of the block
size (0x0 = stream cipher, 0x3 = 8-byte blocks, 0x4 = 16-byte blocks).
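The layout described above could be pictured with accessor macros along these lines (illustrative names, not the actual PSA definitions):
```c
#include <stdint.h>

#define KEY_TYPE_CATEGORY( type )        ( ( (type) >> 28 ) & 0xf )
#define KEY_TYPE_WITHIN_CATEGORY( type ) ( ( (type) >> 17 ) & 0x7ff )
#define KEY_TYPE_CURVE_OR_GROUP( type )  ( (uint16_t) ( (type) & 0xffff ) )

/* Bits 20-22 hold log2 of the block size, so this yields 1 for stream
 * ciphers, 8 for the value 0x3 and 16 for the value 0x4. */
#define KEY_TYPE_BLOCK_SIZE( type )      ( 1u << ( ( (type) >> 20 ) & 0x7 ) )
```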
psa_hash_compare is tested for good cases and invalid-signature cases
in hash_compute_compare. Also test invalid-argument cases. Also run a
few autonomous test cases with valid arguments.
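For reference, a minimal usage sketch of the function under test, assuming the PSA Crypto 1.0 API shape and a build with SHA-256 enabled (error handling reduced to passing the status through):
```c
#include "psa/crypto.h"

static psa_status_t check_sha256( const uint8_t *msg, size_t msg_len,
                                  const uint8_t *hash, size_t hash_len )
{
    /* PSA_SUCCESS if the hash matches, PSA_ERROR_INVALID_SIGNATURE if it
     * doesn't, PSA_ERROR_INVALID_ARGUMENT for a wrong hash length. */
    return psa_hash_compare( PSA_ALG_SHA_256, msg, msg_len, hash, hash_len );
}
```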
Because two buffers were aliased too early in the code, it was possible that
after an allocation failure, free() would be called twice for the same pointer.
Previously, the mocked non-blocking read/write callbacks returned 0 when the buffer was empty/full, which caused an ERR_SSL_CONN_EOF error in the tests that used these mocked callbacks. Besides that, the non-blocking read/write callbacks returned ERR_SSL_WANT_READ/_WRITE depending on the block pattern set by the test design. This behavior forced a redesign of these functions so that they can be used in other tests.
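A sketch of the redesigned behaviour for the receive side, with a hypothetical ring-buffer type standing in for the test suite's buffer; only the BIO callback shape and the error constant are the real Mbed TLS API:
```c
#include <stddef.h>
#include "mbedtls/ssl.h"

/* Hypothetical stand-in for the test suite's ring buffer type. */
typedef struct
{
    unsigned char *buffer;
    size_t start;
    size_t content_length;
    size_t capacity;
} ring_buffer_t;

/* Non-blocking receive in the mbedtls BIO callback shape: an empty buffer
 * now reports WANT_READ instead of 0 (which would be treated as EOF). */
static int mock_tcp_recv_nb( void *ctx, unsigned char *buf, size_t len )
{
    ring_buffer_t *in = (ring_buffer_t *) ctx;
    size_t i;

    if( in->content_length == 0 )
        return( MBEDTLS_ERR_SSL_WANT_READ );

    if( len > in->content_length )
        len = in->content_length;
    for( i = 0; i < len; i++ )
        buf[i] = in->buffer[( in->start + i ) % in->capacity];
    in->start = ( in->start + len ) % in->capacity;
    in->content_length -= len;
    return( (int) len );
}
```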
This error occurs when the free space in the buffer is in the middle (the buffer has come full circle) and the function mbedtls_test_buffer_put is called. The arguments for memcpy are then calculated incorrectly and the program ends with a segmentation fault.
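The corrected wraparound handling amounts to splitting the copy in two when the write region runs past the physical end of the array. A sketch with illustrative field names, reusing the ring_buffer_t stand-in from the previous sketch:
```c
#include <string.h>

static size_t buffer_put_sketch( ring_buffer_t *buf,
                                 const unsigned char *input, size_t input_len )
{
    size_t write_pos = ( buf->start + buf->content_length ) % buf->capacity;
    size_t free_space = buf->capacity - buf->content_length;
    size_t first_chunk;

    if( input_len > free_space )
        input_len = free_space;

    /* Bytes that fit before the end of the array; the remainder wraps around. */
    first_chunk = buf->capacity - write_pos;
    if( first_chunk > input_len )
        first_chunk = input_len;

    memcpy( buf->buffer + write_pos, input, first_chunk );
    memcpy( buf->buffer, input + first_chunk, input_len - first_chunk );

    buf->content_length += input_len;
    return( input_len );
}
```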
Before, the string to parse could contain trailing garbage (there was
never more than one byte of it), and a separate argument
indicated the length of the content. Now, the string to parse is the
exact content, and the test code runs an extra test step with a
trailing byte added.
If there was a fatal error (bizarre behavior from the standard
library, or missing test data file), execute_tests did not close the
outcome file. Fix this.
Fix get_len_step when buffer_size==0. The intent of this test is to
ensure (via static or runtime buffer overflow analysis) that
mbedtls_asn1_get_len does not attempt to access beyond the end of the
buffer. When buffer_size is 0 (reached from get_len when parsing a
1-byte buffer), the buffer is buf[1..1] because allocating a 0-byte
buffer might yield a null pointer rather than a valid pointer. In this
case the end of the buffer is p==buf+1, not buf+buffer_size which is
buf+0.
The test passed because calling mbedtls_asn1_get_len(&p,end,...) with
end < p happens to work, but this is not guaranteed.
In a unit test we want to avoid accessing the network. To test the
handshake in the unit test suite we need to implement a connection
between the server and the client. This socket implementation uses
two ring buffers to mock the transport layer.
In a unit test we want to avoid accessing the network. To test the
handshake in the unit test suite we need to implement a connection
between the server and the client. This ring buffer implementation will
serve as the said connection.
The new macro ASSERT_ALLOC allocates memory with mbedtls_calloc and
fails the test if the allocation fails. It outputs a null pointer if
the requested size is 0. It is meant to replace existing calls to
mbedtls_calloc.
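The macro is roughly of this shape (a sketch close to, but not necessarily identical to, the real definition; TEST_ASSERT and mbedtls_calloc are the existing test and platform helpers):
```c
#define ASSERT_ALLOC( pointer, length )                            \
    do                                                             \
    {                                                              \
        TEST_ASSERT( ( pointer ) == NULL );                        \
        if( ( length ) != 0 )                                      \
        {                                                          \
            ( pointer ) = mbedtls_calloc( sizeof( *( pointer ) ),  \
                                          ( length ) );            \
            TEST_ASSERT( ( pointer ) != NULL );                    \
        }                                                          \
    }                                                              \
    while( 0 )
```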
We're going to create some edge cases where the attributes of a key
are not bitwise identical to the attributes passed during creation.
Have a test function ready for that.
When MBEDTLS_TEST_DEPRECATED is defined, run some additional tests to
validate deprecated PSA macros. We don't need to test deprecated
features extensively, but we should at least ensure that they don't
break the build.
Add some code to component_build_deprecated in all.sh to run these
tests with MBEDTLS_DEPRECATED_WARNING enabled. The tests are also
executed when MBEDTLS_DEPRECATED_WARNING and
MBEDTLS_DEPRECATED_REMOVED are both disabled.
Rename some macros and functions related to signature which are
changing as part of the addition of psa_sign_message and
psa_verify_message.
perl -i -pe '%t = (
PSA_KEY_USAGE_SIGN => PSA_KEY_USAGE_SIGN_HASH,
PSA_KEY_USAGE_VERIFY => PSA_KEY_USAGE_VERIFY_HASH,
PSA_ASYMMETRIC_SIGNATURE_MAX_SIZE => PSA_SIGNATURE_MAX_SIZE,
PSA_ASYMMETRIC_SIGN_OUTPUT_SIZE => PSA_SIGN_OUTPUT_SIZE,
psa_asymmetric_sign => psa_sign_hash,
psa_asymmetric_verify => psa_verify_hash,
); s/\b(@{[join("|", keys %t)]})\b/$t{$1}/ge' $(git ls-files . ':!:**/crypto_compat.h')
The test suites should always run self-tests for all enabled features.
Otherwise we miss failing self-tests in CI runs, because we don't
always run the selftest program independently.
There was one spurious dependency to remove:
MBEDTLS_CTR_DRBG_USE_128_BIT_KEY for ctr_drbg, which was broken but
has now been fixed.
First deal with deleted files.
* Files deleted by us: keep them deleted.
* Files deleted by them, whether modified by us or not: keep our version.
```
git rm $(git status -s | sed -n 's/^DU //p')
git reset -- $(git status -s | sed -n 's/^D //p')
git checkout -- $(git status -s | sed -n 's/^ D //p')
git add -- $(git status -s | sed -n 's/^UD //p')
```
Individual files with conflicts:
* `3rdparty/everest/library/Hacl_Curve25519_joined.c`: spurious conflict because git mistakenly identified this file as a rename. Keep our version.
* `README.md`: conflict due to their change in a paragraph that doesn't exist in our version. Keep our version of this paragraph.
* `docs/architecture/Makefile`: near-identical additions. Adapt the definition of `all_markdown` and include the clean target.
* `doxygen/input/docs_mainpage.h`: conflict in the version number. Keep our version number.
* `include/mbedtls/config.h`: two delete/modify conflicts. Keep the removed chunks out.
* `library/CMakeLists.txt`: discard all their changes as they are not relevant.
* `library/Makefile`:
* Discard the added chunk about the crypto submodule starting with `INCLUDING_FROM_MBEDTLS:=1`.
* delete/modify: keep the removed chunk out.
* library build: This is almost delete/modify. Their changes are mostly not applicable. Do keep the `libmbedcrypto.$(DLEXT): | libmbedcrypto.a` order dependency.
* `.c.o`: `-o` was added on both sides but in a different place. Change to their place.
* `library/error.c`: to be regenerated.
* `library/version_features.c`: to be regenerated.
* `programs/Makefile`: Most of the changes are not relevant. The one relevant change is in the `clean` target for Windows; adapt it by removing `/S` from our version.
* `programs/test/query_config.c`: to be regenerated.
* `scripts/config.py`: added in parallel on both sides. Keep our version.
* `scripts/footprint.sh`: parallel changes. Keep our version.
* `scripts/generate_visualc_files.pl`: one delete/modify conflict. Keep the removed chunks out.
* `tests/Makefile`: discard all of their changes.
* `tests/scripts/all.sh`:
* `pre_initialize_variables` add `append_outcome`: add it.
* `pre_initialize_variables` add `ASAN_CFLAGS`: already there, keep our version.
* `pre_parse_command_line` add `--no-append-outcome`: add it.
* `pre_parse_command_line` add `--outcome-file`: add it.
* `pre_print_configuration`: add `MBEDTLS_TEST_OUTCOME_FILE`.
* Several changes in SSL-specific components: keep our version without them.
* Several changes where `config.pl` was changed to `config.py` and there was an adjacent difference: keep our version.
* Changes regarding the inclusion of `MBEDTLS_MEMORY_xxx`: ignore them here, they will be normalized in a subsequent commit.
* `component_test_full_cmake_gcc_asan`: add it without the TLS tests.
* `component_test_no_use_psa_crypto_full_cmake_asan`: keep the fixed `msg`, discard other changes.
* `component_test_memory_buffer_allocator_backtrace`, `component_test_memory_buffer_allocator`: add them without the TLS tests.
* `component_test_m32_everest`: added in parallel on both sides. Keep our version.
* `tests/scripts/check-names.sh`, `tests/scripts/list-enum-consts.pl`, `tests/scripts/list-identifiers.sh`, `tests/scripts/list-macros.sh`: discard all of their changes.
* `tests/scripts/test-ref-configs.pl`: the change in the conflict is not relevant, so keep our version there.
* `visualc/VS2010/*.vcxproj`: to be regenerated.
Regenerate files:
```
scripts/generate_visualc_files.pl
git add visualc/VS2010/*.vcxproj
scripts/generate_errors.pl
git add library/error.c
scripts/generate_features.pl
git add library/version_features.c
scripts/generate_query_config.pl
git add programs/test/query_config.c
```
Rejected changes in non-conflicting files:
* `CMakeLists.txt`: discard their addition which has already been side-ported.
* `doxygen/mbedtls.doxyfile`: keep the version number change. Discard the changes related to `../crypto` paths.
Keep the following changes after examination:
* `.travis.yml`: all of their changes are relevant.
* `include/mbedtls/error.h`: do keep their changes. Even though Crypto doesn't use TLS errors, it must not encroach on TLS's allocated numbers.
* `tests/scripts/check-test-cases.py`: keep the code dealing with `ssl-opt.sh`. It works correctly when the file is not present.
In pk_sign_verify, if mbedtls_pk_sign() failed, sig_len was passed to
mbedtls_pk_verify_restartable() without having been initialized. This
worked only because in the only test case that expects signature to
fail, the verify implementation doesn't look at sig_len before failing
for the expected reason.
The value of sig_len if sign() fails is undefined, so set sig_len to
something sensible.
Add pk_write test cases where the ASN.1 INTEGER encoding of the
private value would not have the mandatory size for the OCTET STRING
that contains the value.
ec_256_long_prv.pem is a random secp256r1 private key, selected so
that the private value is >= 2^255, i.e. the top bit of the first byte
is set (which would cause the INTEGER encoding to have an extra
leading 0 byte).
ec_521_short_prv.pem is a random secp521r1 private key, selected so
that the private value is < 2^519, i.e. the first byte is 0 and the
top bit of the second byte is 0 (which would cause the INTEGER
encoding to have one less 0 byte at the start).
The corner case tests were designed for 32 and 64 bit limbs
independently and performed only on the target platform. On the other
platform they are not corner cases anymore, but we can still exercise
them.
The corner case tests were designed for 64 bit limbs and failed on 32
bit platforms because the numbers in the test ended up being stored in a
different number of limbs and the function (correctly) returned an error
upon receiving them.
The signature of mbedtls_mpi_cmp_mpi_ct() was meant to support using it
in place of mbedtls_mpi_cmp_mpi(), which meant full comparison
functionality and a signed result.
To make the function more universal and friendly to constant-time
coding, we change the result type to unsigned. Theoretically, we could
still encode the three-way comparison result in an unsigned value, but
it would be less intuitive.
Therefore we no longer represent the full comparison result, and the
functionality is constrained to checking whether the first operand is
less than the second. This is sufficient to support the current use
case, and any relationship between MPIs can still be checked.
The only drawback is that we need to call the function twice when
checking for equality, but this can be optimised later if and when it is
needed.
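A usage sketch showing how the constrained primitive still lets callers check any relation. This assumes the constrained function is exposed as mbedtls_mpi_lt_mpi_ct( X, Y, &result ), with result set to 1 when X < Y and 0 otherwise, and that both operands have the same number of limbs:
```c
#include "mbedtls/bignum.h"

/* Returns 0 on success and sets *equal to 1 if X == Y, 0 otherwise. */
static int mpi_equal_ct( const mbedtls_mpi *X, const mbedtls_mpi *Y,
                         unsigned *equal )
{
    unsigned x_lt_y, y_lt_x;
    int ret;

    if( ( ret = mbedtls_mpi_lt_mpi_ct( X, Y, &x_lt_y ) ) != 0 )
        return( ret );
    if( ( ret = mbedtls_mpi_lt_mpi_ct( Y, X, &y_lt_x ) ) != 0 )
        return( ret );

    /* X == Y exactly when neither X < Y nor Y < X holds. */
    *equal = 1 - ( x_lt_y | y_lt_x );
    return( 0 );
}
```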
None of the test cases in tests_suite_memory_buffer_alloc actually
need MBEDTLS_MEMORY_DEBUG. Some have additional checks when
MBEDTLS_MEMORY_DEBUG is enabled, but all are useful even without it. So
enable them all and #ifdef out the parts that require MBEDTLS_MEMORY_DEBUG.
The test case "Memory buffer small buffer" emits a message
"FATAL: verification of first header failed". In this test case, it's
actually expected, but it looks weird to see this message from a
passing test. Add a comment that states this explicitly, and modify
the test description to indicate that the failure is expected, and
change the test function name to be more accurate.
Fix #309
The default entropy nonce length is either zero or nonzero depending
on the desired security strength and the entropy length.
The implementation calculates the actual entropy nonce length from the
actual entropy length, and therefore it doesn't need a constant that
indicates the default entropy nonce length. A portable application may
be interested in this constant, however. And our test code could
definitely use it.
Define a constant MBEDTLS_CTR_DRBG_ENTROPY_NONCE_LEN and use it in
test code. Previously, test_suite_ctr_drbg had knowledge about the
default entropy nonce length built in and test_suite_psa_crypto_init
failed. Now both use MBEDTLS_CTR_DRBG_ENTROPY_NONCE_LEN.
This change means that the test ctr_drbg_entropy_usage no longer
validates that the default entropy nonce length is sensible. So add a
new test that checks that the default entropy length and the default
entropy nonce length are sufficient to ensure the expected security
strength.
Change the default entropy nonce length to be nonzero in some cases.
Specifically, the default nonce length is now set in such a way that
the entropy input during the initial seeding always contains enough
entropy to achieve the maximum possible security strength per
NIST SP 800-90A given the key size and entropy length.
If MBEDTLS_CTR_DRBG_ENTROPY_LEN is kept to its default value,
mbedtls_ctr_drbg_seed() now grabs extra entropy for a nonce if
MBEDTLS_CTR_DRBG_USE_128_BIT_KEY is disabled and either
MBEDTLS_ENTROPY_FORCE_SHA256 is enabled or MBEDTLS_SHA512_C is
disabled. If MBEDTLS_CTR_DRBG_USE_128_BIT_KEY is enabled, or if
the entropy module uses SHA-512, then the default value of
MBEDTLS_CTR_DRBG_ENTROPY_LEN does not require a second call to the
entropy function to achieve the maximum security strength.
This choice of default nonce size guarantees NIST compliance with the
maximum security strength while keeping backward compatibility and
performance high: in configurations that do not require grabbing more
entropy, the code will not grab more entropy than before.
mbedtls_ctr_drbg_seed() always set the entropy length to the default,
so a call to mbedtls_ctr_drbg_set_entropy_len() before seed() had no
effect. Change this to the more intuitive behavior that
set_entropy_len() sets the entropy length and seed() respects that and
only uses the default entropy length if there was no call to
set_entropy_len().
This removes the need for the test-only function
mbedtls_ctr_drbg_seed_entropy_len(). Just call
mbedtls_ctr_drbg_set_entropy_len() followed by
mbedtls_ctr_drbg_seed(), it works now.
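A usage sketch of the new behaviour (error checking omitted for brevity):
```c
#include "mbedtls/ctr_drbg.h"
#include "mbedtls/entropy.h"

static int seed_with_custom_entropy_len( mbedtls_ctr_drbg_context *ctr_drbg,
                                         mbedtls_entropy_context *entropy )
{
    mbedtls_entropy_init( entropy );
    mbedtls_ctr_drbg_init( ctr_drbg );

    /* Previously this setting was silently overwritten by
     * mbedtls_ctr_drbg_seed(); now seeding respects it. */
    mbedtls_ctr_drbg_set_entropy_len( ctr_drbg, 48 );

    return mbedtls_ctr_drbg_seed( ctr_drbg, mbedtls_entropy_func, entropy,
                                  (const unsigned char *) "test app", 8 );
}
```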
Consolidate the invalid-handle tests from test_suite_psa_crypto and
test_suite_psa_crypto_slot_management. Start with the code in
test_suite_psa_crypto_slot_management and adapt it to test one invalid
handle value per run of the test function.
mbedtls_asn1_get_int() and mbedtls_asn1_get_mpi() behave differently
on negative INTEGERs. Don't change the library behavior for now
because this might break interoperability in some applications. Change
the test function to match the library behavior.
Fix the test data with negative INTEGERs. These test cases were
previously not run (they were introduced but deliberately deactivated
in 27d806fab4). The test data was
actually wrong: ASN.1 uses two's complement, which has no negative 0,
and some encodings were wrong. Now the tests have correct data, and
the test code rectifies the expected data to match the library
behavior.
mbedtls_asn1_get_int() and mbedtls_asn1_get_mpi() behave differently
on an empty INTEGER (0200). Don't change the library behavior for now
because this might break interoperability in some applications. Write
a test function that matches the library behavior.
When the asn1parse module is enabled but the bignum module is
disabled, the asn1parse test suite did not work. Fix this.
* Fix a syntax error in get_integer() (label immediately followed by a
closing brace).
* Fix an unused variable in get_integer().
* Fix `TEST_ASSERT( *p == q );` in nested_parse() failing because `*p`
was not set.
* Fix nested_parse() not outputting the length of what it parsed.
Add some ECDSA test cases where the hash is shorter or longer than the
key length, to check that the API doesn't enforce a relationship
between the two.
For the sign_deterministic tests, the keys are
tests/data_files/ec_256_prv.pem and tests/data_files/ec_384_prv.pem
and the signatures were obtained with Python Cryptodome:
from binascii import hexlify, unhexlify
from Crypto.Hash import SHA256, SHA384
from Crypto.PublicKey import ECC
from Crypto.Signature import DSS
k2 = ECC.import_key(unhexlify("3077020101042049c9a8c18c4b885638c431cf1df1c994131609b580d4fd43a0cab17db2f13eeea00a06082a8648ce3d030107a144034200047772656f814b399279d5e1f1781fac6f099a3c5ca1b0e35351834b08b65e0b572590cdaf8f769361bcf34acfc11e5e074e8426bdde04be6e653945449617de45"))
SHA384.new(b'hello').hexdigest()
hexlify(DSS.new(k2, 'deterministic-rfc6979').sign(SHA384.new(b'hello')))
k3 = ECC.import_key(unhexlify("3081a402010104303f5d8d9be280b5696cc5cc9f94cf8af7e6b61dd6592b2ab2b3a4c607450417ec327dcdcaed7c10053d719a0574f0a76aa00706052b81040022a16403620004d9c662b50ba29ca47990450e043aeaf4f0c69b15676d112f622a71c93059af999691c5680d2b44d111579db12f4a413a2ed5c45fcfb67b5b63e00b91ebe59d09a6b1ac2c0c4282aa12317ed5914f999bc488bb132e8342cc36f2ca5e3379c747"))
SHA256.new(b'hello').hexdigest()
hexlify(DSS.new(k3, 'deterministic-rfc6979').sign(SHA256.new(b'hello')))
There were tests to ensure that each entropy source reaches its
threshold, but no test that covers the total amount of entropy. Add
test cases with a known set of entropy sources and make sure that we
always gather at least MBEDTLS_ENTROPY_BLOCK_SIZE bytes from a strong
source.
Always pass a context object to entropy_dummy_source. This lets us
write tests that register more than one source and keep track of how
many times each one is called.
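A sketch of what such a parameterized dummy source can look like; the struct and function names are illustrative, but the callback shape is the one mbedtls_entropy_add_source() expects:
```c
#include <string.h>
#include "mbedtls/entropy.h"

typedef struct
{
    size_t calls;   /* how many times this source has been polled */
    size_t length;  /* how many bytes to return per call */
} dummy_source_state_t;

static int entropy_dummy_source( void *data, unsigned char *output,
                                 size_t len, size_t *olen )
{
    dummy_source_state_t *state = (dummy_source_state_t *) data;

    state->calls++;
    *olen = len < state->length ? len : state->length;
    memset( output, 0x2a, *olen );
    return( 0 );
}

/* Registration, e.g. as a strong source with a 32-byte threshold:
 * mbedtls_entropy_add_source( &entropy, entropy_dummy_source, &state,
 *                             32, MBEDTLS_ENTROPY_SOURCE_STRONG );
 */
```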
* origin/pr/2843: (26 commits)
Make hyperlink a hyperlink in every markdown flavor
Update the crypto submodule to be the same as development
Document test case descriptions
Restore MBEDTLS_TEST_OUTCOME_FILE after test_default_out_of_box
ssl-opt.sh: Fix some test case descriptions
Reject non-ASCII characters in test case descriptions
Process input files as binary
Factor description-checking code into a common function
Fix cosmetic error in warnings
Fix regex matching run_test calls in ssl-opt.sh
all.sh: run check-test-cases.py
Better information messages for quick checks
Fix configuration short name in key-exchanges.pl
Make test case descriptions unique
New test script check-test-cases.py
Document the test outcome file
Create infrastructure for architecture documents in Markdown
all.sh --outcome-file creates an outcome file
Set meaningful test configuration names when running tests
ssl-opt: remove semicolons from test case descriptions
...
Add invasive checks that peek at the stored persistent data after some
successful import, generation or destruction operations and after
reinitialization to ensure that the persistent data in storage has the
expected content.
Add a parameter to the p_validate_slot_number method to allow the
driver to modify the persistent data.
With the current structure of the core, the persistent data is already
updated. All it took was adding a way to modify it.
When registering a key in a secure element, go through the transaction
mechanism. This makes the code simpler, at the expense of a few extra
storage operations. Given that registering a key is typically very
rare over the lifetime of a device, this is an acceptable loss.
Drivers must now have a p_validate_slot_number method, otherwise
registering a key is not possible. This reduces the risk that due to a
mistake during the integration of a device, an application might claim
a slot in a way that is not supported by the driver.
If none of the inputs to a key derivation is a
PSA_KEY_DERIVATION_INPUT_SECRET passed with
psa_key_derivation_input_key(), forbid
psa_key_derivation_output_key(). It usually doesn't make sense to
derive a key object if the secret isn't itself a proper key.
After passing some inputs, try getting one byte of output, just to
check that this succeeds (for a valid sequence of inputs) or fails
with BAD_STATE (for an invalid sequence of inputs). Either output a
1-byte key or a 1-byte buffer depending on the test data.
The test data was expanded as follows:
* Output key type (or not a key): same as the SECRET input if success
is expected, otherwise NONE.
* Expected status: PSA_SUCCESS after valid inputs, BAD_STATE after any
invalid input.
Allow a direct input as the SECRET input step in a key derivation, in
addition to allowing DERIVE keys. This makes it easier for
applications to run a key derivation where the "secret" input is
obtained from somewhere else. This makes it possible for the "secret"
input to be empty (keys cannot be empty), which some protocols do (for
example the IV derivation in EAP-TLS).
Conversely, allow a RAW_DATA key as the INFO/LABEL/SALT/SEED input to a key
derivation, in addition to allowing direct inputs. This doesn't
improve security, but removes a step when a personalization parameter
is stored in the key store, and allows this personalization parameter
to remain opaque.
Add test cases that explore step/key-type-and-keyhood combinations.
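A sketch of the two newly allowed shapes, written with PSA 1.0-style names and with status checks omitted; `raw_secret`, `info_key` and the output buffer are illustrative, and `raw_secret` may be empty:
```c
#include "psa/crypto.h"

static void derive_sketch( const uint8_t *raw_secret, size_t raw_secret_length,
                           psa_key_handle_t info_key,
                           uint8_t *output, size_t output_length )
{
    psa_key_derivation_operation_t op = PSA_KEY_DERIVATION_OPERATION_INIT;

    psa_key_derivation_setup( &op, PSA_ALG_HKDF( PSA_ALG_SHA_256 ) );

    /* SECRET provided directly as bytes (may even be empty, as in the
     * EAP-TLS IV derivation) instead of requiring a DERIVE key. */
    psa_key_derivation_input_bytes( &op, PSA_KEY_DERIVATION_INPUT_SECRET,
                                    raw_secret, raw_secret_length );

    /* INFO taken from a stored RAW_DATA key instead of a direct input. */
    psa_key_derivation_input_key( &op, PSA_KEY_DERIVATION_INPUT_INFO,
                                  info_key );

    psa_key_derivation_output_bytes( &op, output, output_length );
    psa_key_derivation_abort( &op );
}
```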
This commit only makes derive_input more flexible so that the key
derivation API can be tested with different key types and raw data for
each input step. The behavior of the test cases remains the same.
In the tests, the uint32 is given as a big-endian stream; however, the
char buffer that collected the stream read it as is, without converting
it. Add a temporary buffer, call `greentea_getc()` 8 times to fill it,
and then put it in the correct endianness for input to `unhexify()`.
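A rough sketch of the described fix; `greentea_getc()` and `unhexify()` are the existing on-target test helpers, and the prototypes assumed below may differ from the real ones:
```c
#include <stdint.h>
#include <stddef.h>

/* Assumed helper prototypes (actual headers/signatures may differ). */
int greentea_getc( void );
int unhexify( unsigned char *obuf, const char *ibuf );

static uint32_t receive_uint32_sketch( void )
{
    char hex[9];
    unsigned char value_be[4];
    size_t i;

    /* Collect the 8 hex characters first instead of interpreting the
     * raw character stream directly. */
    for( i = 0; i < 8; i++ )
        hex[i] = (char) greentea_getc();
    hex[8] = '\0';

    unhexify( value_be, hex );   /* 4 bytes, most significant first */

    return( ( (uint32_t) value_be[0] << 24 ) |
            ( (uint32_t) value_be[1] << 16 ) |
            ( (uint32_t) value_be[2] <<  8 ) |
            (uint32_t) value_be[3] );
}
```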
Reduce the stack usage of `test_suite_pkcs1_v21` by reducing the
size of the buffers used in the tests to a reasonable size that is still
big enough, and change the size passed to the API to the size of the
output buffer.
Reduce the stack usage of `test_suite_rsa` by reducing the
size of the buffers used in the tests to a reasonable size that is still
big enough, and change the size of the data to decrypt in the data file.
The current test generator code accepts multiple colons as a
separator, but this is just happenstance due to how the code is
written; it isn't robust. Replace "::" with ":", which is more
future-proof and allows simple separator-based navigation.
Make check-test-cases.py pass.
Prior to this commit, there were many repeated test descriptions, but
none with the same test data and dependencies and comments, as checked
with the following command:
for x in tests/suites/*.data; do perl -00 -ne 'warn "$ARGV: $. = $seen{$_}\n" if $seen{$_}; $seen{$_}=$.' $x; done
Wherever a test suite contains multiple test cases with the exact same
description, add " [#1]", " [#2]", etc. to make the descriptions
unique. We don't currently use this particular arrangement of
punctuation, so all occurrences of " [#" were added by this script.
I used the following ad hoc code:
import sys
def fix_test_suite(data_file_name):
    in_paragraph = False
    total = {}
    index = {}
    lines = None
    with open(data_file_name) as data_file:
        lines = list(data_file.readlines())
    for line in lines:
        if line == '\n':
            in_paragraph = False
            continue
        if line.startswith('#'):
            continue
        if not in_paragraph:
            # This is a test case description line.
            total[line] = total.get(line, 0) + 1
            index[line] = 0
            in_paragraph = True
    with open(data_file_name, 'w') as data_file:
        for line in lines:
            if line in total and total[line] > 1:
                index[line] += 1
                line = '%s [#%d]\n' % (line[:-1], index[line])
            data_file.write(line)
for data_file_name in sys.argv[1:]:
    fix_test_suite(data_file_name)
A test case for 32+0 was present three times, evidently the result of overeager
copy-paste. Replace the duplicates by test cases that read more than
32 bytes, which exercises HKDF a little more (32 bytes is significant
because HKDF-SHA-256 produces output in blocks of 32 bytes).
I obtained the test data by running our implementation, because we're
confident in our implementation now thanks to other test cases: this
data is useful as a non-regression test.
There should have been a good-saltlen test case and a bad-saltlen test
case for both sizes 522 and 528, but the 522-bad-saltlen test case was
missing and the 528-good-saltlen test case was repeated. Fix this.
Don't use semicolons in test case descriptions. The test outcome file
is a semicolon-separated CSV file without quotes to keep things
simple, so fields in that file may not contain semicolons.
If the environment variable MBEDTLS_TEST_OUTCOME_FILE is set, then for
each test case, write a line to the file with the given name, of the
form
PLATFORM;CONFIGURATION;TEST SUITE;TEST CASE DESCRIPTION;PASS/FAIL/SKIP;CAUSE
PLATFORM and CONFIGURATION come from the environment variables
MBEDTLS_TEST_PLATFORM and MBEDTLS_TEST_CONFIGURATION.
Errors while writing the test outcome file are not considered fatal,
and are not reported except for an error initially opening the file.
This is in line with other write errors that are not checked.
This commit adds multiple test cases to the X.509 CRT parsing test suite
exercising the stack's behaviour when facing CertificatePolicy extensions
that are malformed for a variety of reasons. It follows the same scheme
as in other negative parsing tests: For each ASN.1 component, have test
cases for (a) unexpected tag, (b) missing length, (c) invalid length
encoding, (d) length out of bounds.
This commit modifies the test
X509 CRT ASN1 (TBSCertificate v3, inv CertificatePolicies, data missing)
which exercises the behaviour of the X.509 CRT parser when facing a
CertificatePolicy extension with empty data field.
The following adaptations are made:
- The subject ID and issuer ID are modified to have length 0.
The previous values `aa` and `bb` are OK, but a generic ASN.1
parser will try to interpret them as ASN.1 tags and fail. For
maintainability, it's therefore better to use something that
can be parsed as ASN.1, and an empty ID is the easiest solution
here.
- The TBS part of the certificate wasn't followed by signature
algorithm and signature fields, which makes the test incompatible
with future changes swapping to breadth-first parsing of
certificates.
This commit moves the X.509 negative parsing tests for the
CertificatePolicy extension to the place where negative
testing of other extensions happens.
Judging from its name, the purpose of the test
TBSCertificate v3, ext CertificatePolicies tag, bool len missing
in test_suite_x509parse.data is to exercise the X.509 parsing stack's
behaviour when parsing a CertificatePolicy extension which lacks the
length field of the boolean 'Criticality' value.
However, the test fails at an earlier stage due to a mismatch of inner
and outer length of the explicit ASN.1 extensions structure.
Since we already have tests exercising
- mismatch of inner and outer length in the extensions structure, namely
'X509 CRT ASN1 (TBS, inv v3Ext, inner tag invalid)'
- missing length of the 'Criticality' field in an extension, namely
'X509 CRT ASN1 (TBS, inv v3Ext, critical length missing)'
and since for both tests there's no relevance to the use of the
policy extension OID, the test
'TBSCertificate v3, ext CertificatePolicies tag, bool len missing'
can be dropped.
The signature must have exactly the same length as the key; it can't
be longer. Fix #258
If the signature doesn't have the correct size, that's an invalid
signature, not a problem with an output buffer size. Fix the error code.
Add test cases.
In psa_asymmetric_sign, immediately reject an empty signature buffer.
This can never be right.
Add test cases (one RSA and one ECDSA).
Change the SE HAL mock tests not to use an empty signature buffer.
Add tests for derivation.
Test both 7 bits and 9 bits, in case the implementation truncated the
bit size down and 7 was rejected as 0 rather than because it isn't a
multiple of 8.
There is no corresponding test for import because import determines
the key size from the key data, which is always a whole number of bytes.
Tweak test data for one test case to not rely on mbedtls_asn1_get_int
lacking support for leading zeros. Instead, use a number that is
actually out of range for int.
Tweak test data for one test case to not rely on
mbedtls_asn1_get_bitstring_null rejecting bitstrings shorter than two
octets. Instead, try bit strings that are genuinely invalid, or have a
nonzero number of unused bits.
Add a test case with a correct empty signature. This is commented out
because asn1parse currently does not support this. Uncomment it when
asn1parse is updated to support this.
The new macro ASSERT_ALLOC_WEAK does not fail the test case if the
memory allocation fails. This is useful for tests that allocate a
large amount of memory, but that aren't useful on platforms where
allocating such a large amount is not possible.
Ideally this macro should mark the test as skipped. We don't yet have
a facility for that but we're working on it. Once we have a skip
functionality, this macro should be changed to use it.
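A sketch of the intended shape, close to the ASSERT_ALLOC macro except that an allocation failure makes the test exit early rather than fail (the real definition may differ in detail):
```c
#define ASSERT_ALLOC_WEAK( pointer, length )                        \
    do                                                              \
    {                                                               \
        TEST_ASSERT( ( pointer ) == NULL );                         \
        if( ( length ) != 0 )                                       \
        {                                                           \
            ( pointer ) = mbedtls_calloc( sizeof( *( pointer ) ),   \
                                          ( length ) );             \
            if( ( pointer ) == NULL )                               \
                goto exit;  /* treated as a pass, not a failure */  \
        }                                                           \
    }                                                               \
    while( 0 )
```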
Use the test-many-sizes framework for string writes as
well (previously, it was only used for booleans and integers). This
way, more edge cases are tested with less test code.
This commit removes buffer overwrite checks. Instead of these checks,
run the test suite under a memory sanitizer (which we do in our CI).
Omit negative integers and MPIs that would result in values that look
like negative INTEGERs, since the library doesn't respect the
specifications there, but fixing it has a serious risk of breaking
interoperability when ASN.1 is used in X.509 and other
cryptography-related applications.
Add self-contained ASN.1 parsing tests, so that ASN.1 parsing is not
solely tested through X.509 and TLS.
The tests cover every function and almost complete line coverage in
asn1parse.c.
A few test cases containing negative and edge case INTEGER values are
deliberately deactivated because the historical library behavior is at
odds with official specifications, but changing the behavior might
break interoperability.
Other than that, these tests revealed a couple of minor bugs which
will be fixed in subsequent commits.
This leak wasn't discovered by the CI because the only test in
all.sh exercising the respective path enabled the custom memory
buffer allocator implementations of calloc() and free(), hence
bypassing ASan.
In preparation for deprecating the old and less secure deterministic
ECDSA signature function, we need to remove it from the test. At the
same time, the new function needs to be tested. Modifying the tests
to use the new function achieves both of these goals.
* crypto/development: (863 commits)
crypto_platform: Fix typo
des: Reduce number of self-test iterations
Fix -O0 build for Aarch64 bignum multiplication.
Make GNUC-compatible compilers use the right mbedtls_t_udbl again on Aarch64 builds.
Add optimized bignum multiplication for Aarch64.
Enable 64-bit limbs for all Aarch64 builds.
HMAC DRBG: Split entropy-gathering requests to reduce request sizes
psa: Use application key ID where necessary
psa: Adapt set_key_id() for when owner is included
psa: Add PSA_KEY_ID_INIT
psa: Don't duplicate policy initializer
crypto_extra: Use const seed for entropy injection
getting_started: Update for PSA Crypto API 1.0b3
Editorial fixes.
Cross reference 'key handles' from INVALID_HANDLE
Update documentation for psa_destroy_key
Update documentation for psa_close_key
Update psa_open_key documentation
Remove duplicated information in psa_open_key
Initialize key bits to max size + 1 in psa_import_key
...
Bring Mbed TLS 2.18.0 and 2.18.1 release changes back into the
development branch. We had branched to release 2.18.0 and 2.18.1 in
order to allow those releases to go out without having to block work on
the `development` branch.
Manually resolve conflicts in the Changelog by moving all freshly added
changes to a new, unreleased version entry.
Reject changes to include/mbedtls/platform.h made in the mbedtls-2.18
branch, as that file is now sourced from Mbed Crypto.
* mbedtls-2.18:
platform: Include stdarg.h where needed
Update Mbed Crypto to contain mbed-crypto#152
CMake: Add a subdirectory build regression test
README: Enable builds as a CMake subproject
ChangeLog: Enable builds as a CMake subproject
Remove use of CMAKE_SOURCE_DIR
Update library version to 2.18.0
This commit introduces a new SSL error code
`MBEDTLS_ERR_SSL_VERSION_MISMATCH`
which can be used to indicate operation failure due to a
mismatch of version or configuration.
It is put to use in the implementation of `mbedtls_ssl_session_load()`
to signal the attempt to de-serialize a session which has been serialized
in a build of Mbed TLS using a different version or configuration.
This commit improves the test exercising the behaviour of
session deserialization when facing an unexpected version
or config, by testing ver/cfg corruption at any bit in the
ver/cfg header of the serialized data; previously, it had
only tested the first bit of each byte.
The chosen fix matches what's currently done in the baremetal branch, except
that the `#ifdef`s have been adapted because now in baremetal the digest is
not kept if renegotiation is disabled.
We have explicit recommendations to use US spelling for technical writing, so
let's apply this to code as well for uniformity. (My fingers tend to prefer UK
spelling, so this needs to be fixed in many places.)
sed -i 's/\([Ss]eriali\)s/\1z/g' **/*.[ch] **/*.function **/*.data ChangeLog
This test works regardless of the serialisation format and embedded pointers
in it, contrary to the load-save test, though it requires more maintenance of
the test code (sync the member list with the struct definition).
This uncovered a bug that led to a double-free in practice (in general it could
be free() on any invalid value): initially the session structure is loaded
with `memcpy()` which copies the previous values of pointers peer_cert and
ticket to heap-allocated buffers (or any other value if the input is
attacker-controlled). Now if we exit before we got a chance to replace those
invalid values with valid ones (for example because the input buffer is too
small, or because the second malloc() failed), then the next call to
session_free() is going to call free() on invalid pointers.
This bug is fixed in this commit by always setting the pointers to NULL right
after they've been read from the serialised state, so that the invalid values
can never be used.
(An alternative would be to NULL-ify them when writing, which was rejected
mostly because we need to do it when reading anyway (as the consequences of
free(invalid) are too severe to take any risk), so doing it when writing as
well is redundant and a waste of code size.)
Also, while thinking about what happens in case of errors, it became apparent
to me that it was bad practice to leave the session structure in a
half-initialised state and rely on the caller to call session_free(), so this
commit also ensures we always clear the structure when loading failed.
This test appeared to be passing for the wrong reason; it's actually not
appropriate for the current implementation. The serialised data contains
values of pointers to heap-allocated buffers. There is no reason these should
be identical after a load-save pair. They just happened to be identical when I
first ran the test due to the place of session_free() in the test code and the
fact that the libc's malloc() reused the same buffers. The test no longer
passes if other malloc() implementations are used (for example, when compiling
with asan which avoids re-using the buffer, probably for better error
detection).
So, disable this test for now (we can re-enable it when we change how
sessions are serialised, which will be done in a future PR, hence the name of
the dummy macro in depends_on). In the next commit we're going to add a test
that save-load is the identity instead - which will be more work in testing as
it will require checking each field manually, but at least is reliable.
This initial test ensures that a load-save function is the identity. It is so
far incomplete in that it only tests sessions without tickets or certificate.
This will be improved in the next commits.
* crypto/pr/212: (337 commits)
Make TODO comments consistent
Fix PSA tests
Fix psa_generate_random for >1024 bytes
Add tests to generate more random than MBEDTLS_CTR_DRBG_MAX_REQUEST
Fix double free in psa_generate_key when psa_generate_random fails
Fix copypasta in test data
Avoid a lowercase letter in a macro name
Correct some comments
Fix PSA init/deinit in mbedtls_xxx tests when using PSA
Make psa_calculate_key_bits return psa_key_bits_t
Adjust secure element code to the new ITS interface
More refactoring: consolidate attribute validation
Fix policy validity check on key creation.
Add test function for import with a bad policy
Test key creation with an invalid type (0 and nonzero)
Remove "allocated" flag from key slots
Take advantage of psa_core_key_attributes_t internally #2
Store the key size in the slot in memory
Take advantage of psa_core_key_attributes_t internally: key loading
Switch storage functions over to psa_core_key_attributes_t
...
Drivers that allow destroying a key must have a destroy method. This
test bug was previously not caught because of an implementation bug
that lost the error triggered by the missing destroy method.
Add a flow where the key is imported or fake-generated in the secure
element, then call psa_export_public_key and do the software
verification with the public key.
Factor common code of ram_import and ram_fake_generate into a common
auxiliary function.
Reject key types that aren't supported by this test code.
Report the bit size correctly for EC key pairs.
The methods to import and generate a key in a secure element drivers
were written for an earlier version of the application-side interface.
Now that there is a psa_key_attributes_t structure that combines all
key metadata including its lifetime (location), type, size, policy and
extra type-specific data (domain parameters), pass that to drivers
instead of separate arguments for each piece of metadata. This makes
the interface less cluttered.
Update parameter names and descriptions to follow general conventions.
Document the public-key output on key generation more precisely.
Explain that it is optional in a driver, and when a driver would
implement it. Declare that it is optional in the core, too (which
means that a crypto core might not support drivers for secure elements
that do need this feature).
Update the implementation and the tests accordingly.
Register an existing key in a secure element.
Minimal implementation that doesn't call any driver method and just
lets the application declare whatever it wants.
Pass the key creation method (import/generate/derive/copy) to the
driver methods to allocate or validate a slot number. This allows
drivers to enforce policies such as "this key slot can only be used
for keys generated inside the secure element".