This commit makes a first step towards modularizing the incoming record
processing by having it operate on instances of the structure mbedtls_record
representing SSL records.
So far, only record encryption/decryption operates in terms of record
instances, while the rest of the parsing does not. In particular,
ssl_parse_record_header() operates directly on the fixed input buffer,
setting the various ssl->in_xxx pointers and fields, and only immediately
before/after calling ssl_decrypt_buf() are these fields converted to/from
mbedtls_record instances.
This commit does not yet remove the ssl->in_xxx fields, but makes a step
towards extending the lifetime of the mbedtls_record structure representing
incoming records, by modifying ssl_parse_record_header() to set up an
instance of mbedtls_record and to set the ssl->in_xxx fields from that
instance. The instance constructed this way is not used any further yet;
in particular, it is not yet consolidated with the instance set up for use
in ssl_decrypt_record(). That's left for a later commit.
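For orientation, the kind of per-record data such a structure carries is
sketched below. This is an illustrative sketch only, with indicative field
names; it is not necessarily the exact definition of mbedtls_record in the
library headers.

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative sketch only; the actual mbedtls_record definition in
     * the library headers may differ. */
    typedef struct
    {
        uint8_t ctr[8];       /* Record sequence number (in DTLS: epoch
                               * plus 6-byte sequence number)             */
        uint8_t type;         /* Record content type                      */
        uint8_t ver[2];       /* Record-layer protocol version            */

        unsigned char *buf;   /* Buffer holding record header and content */
        size_t buf_len;       /* Total length of that buffer              */
        size_t data_offset;   /* Offset of the record content within buf  */
        size_t data_len;      /* Length of the record content             */
    } mbedtls_record;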
Previously, ssl_parse_record_header() did not check whether the current
datagram is large enough to hold a record of the advertised size. This
could lead to records being silently skipped over or backed up on the
basis of an invalid record length. Concretely, the following would happen:
1) In the case of a record from an old epoch, the record would be
'skipped over' by setting next_record_offset according to the advertised
but non-validated length, and only the subsequent call to
mbedtls_ssl_fetch_input() would notice, via an assertion failure, that
the record length is too large for the current incoming datagram.
While not critical, this is fragile, and also contrary to the intent
that MBEDTLS_ERR_SSL_INTERNAL_ERROR should never be triggerable by
external input.
2) In the case of a future record being buffered, we might back up a
record before having validated its length, hence copying parts of the
input buffer that don't belong to the current record. This is a bug,
and it is only by luck that it doesn't seem to have critical
consequences.
This commit fixes this by modifying ssl_parse_record_header() to check that
the current incoming datagram is large enough to hold a record of the
advertised length, returning MBEDTLS_ERR_SSL_INVALID_RECORD otherwise.
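In spirit, the added validation amounts to the following check; the helper
below is purely illustrative (names are indicative, and the actual code
performs the check inline on the parsed header fields).

    #include <stddef.h>

    /* Illustrative helper, not the actual library code: does a record of
     * the advertised length fit into the remainder of the datagram?     */
    static int record_fits_in_datagram( size_t record_offset, /* start of record    */
                                        size_t header_len,    /* record header size */
                                        size_t body_len,      /* advertised length  */
                                        size_t datagram_len ) /* bytes available    */
    {
        if( record_offset > datagram_len ||
            header_len > datagram_len - record_offset )
            return( 0 );

        return( body_len <= datagram_len - record_offset - header_len );
    }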
We don't send alerts on other instances of ill-formed records,
so why should we do it here? If we want to keep it, the alerts
should rather be sent from ssl_get_next_record().
As explained in the previous commit, if mbedtls_ssl_fetch_input()
is called multiple times, all but the first call are equivalent to
bounds checks on the incoming datagram.
In DTLS, if mbedtls_ssl_fetch_input() is called multiple times without
resetting the input buffer in between, the non-initial calls are functionally
equivalent to mere bounds checks ensuring that the incoming datagram is
large enough to hold the requested data. In the interest of code size
and modularity (removing a call to a non-const function which is logically
const in this instance), this commit replaces such a call to
mbedtls_ssl_fetch_input() with an explicit bounds check in
ssl_parse_record_header().
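Schematically, once the whole datagram already sits in the input buffer,
a repeated fetch of nb_want bytes cannot pull in more data and degenerates
into the following check (names are indicative, not the actual library
code):

    #include <stddef.h>

    /* Illustrative sketch only: with the datagram already read in full,
     * "fetching" nb_want bytes reduces to checking that the datagram
     * actually contains that many bytes. */
    static int datagram_holds( size_t bytes_in_datagram, size_t nb_want )
    {
        return( nb_want <= bytes_in_datagram );
    }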
Previously, `ssl_handle_possible_reconnect()` was part of
`ssl_parse_record_header()`, which was required to return a non-zero error
code to indicate a record that should not be processed further because it
was invalid, unexpected, a duplicate, etc. In this case, some error codes
would trigger further actions, e.g. `MBEDTLS_ERR_SSL_EARLY_MESSAGE` would
lead to potential buffering of the record, but eventually the record would
be dropped regardless of the precise value of the error code. The error code
`MBEDTLS_ERR_SSL_HELLO_VERIFY_REQUIRED` returned from
`ssl_handle_possible_reconnect()` did not receive any special treatment and
led to silent dropping of the record - in particular, it was never returned
to the user.
In the new logic this commit introduces, `ssl_handle_possible_reconnect()`
is part of `ssl_check_client_reconnect()`, which is triggered _after_
`ssl_parse_record_header()` has found an unexpected record; this is already
on the code path that eventually drops the record. We want to leave this
code path only if a valid cookie has been found and we want to reset, and
to do nothing otherwise. That's why `ssl_handle_possible_reconnect()` now
returns `0` unless a valid cookie has been found or a fatal error occurred.
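A schematic sketch of the resulting call-site convention is given below;
the wrapper name is made up for illustration, and the exact error handling
in the library may differ.

    /* Schematic sketch only, not verbatim library code. */
    static int ssl_on_unexpected_record( mbedtls_ssl_context *ssl )
    {
        int ret = ssl_check_client_reconnect( ssl );
        if( ret != 0 )
            return( ret ); /* fatal error, or valid cookie: leave the
                            * record-dropping path and reset              */

        return( 0 );       /* no valid cookie: keep silently dropping
                            * the record, nothing reported to the user    */
    }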
Availability of sufficient incoming data should be checked when
it is needed, which is in mbedtls_ssl_fetch_input(), and this
function has the necessary bounds checks in place.
The check is in terms of the internal input buffer length and was hence
likely originally intended to protect against overflowing the input
buffer when fetching data from the underlying transport in
mbedtls_ssl_fetch_input(). For locality of reasoning, it's better to
perform such a check close to where it's needed, and in fact,
mbedtls_ssl_fetch_input() _does_ contain an equivalent bounds check,
rendering the bounds check in question redundant.
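For reference, the kind of guard mbedtls_ssl_fetch_input() itself applies
is of the following shape; this is an indicative sketch with made-up names,
not the exact library code.

    #include <stddef.h>

    /* Indicative sketch only: refuse to fetch more data than the input
     * buffer can hold starting from the current record header position. */
    static int fetch_request_fits( size_t nb_want,     /* bytes requested         */
                                   size_t hdr_offset,  /* header offset in buffer */
                                   size_t in_buf_len ) /* total input buffer size */
    {
        if( hdr_offset > in_buf_len )
            return( 0 );

        return( nb_want <= in_buf_len - hdr_offset );
    }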
mbedtls_ssl_decrypt_buf() asserts that the passed transform is not NULL,
but the function is only invoked in a single place, and this invocation
visibly sits within a branch that ensures the incoming transform isn't
NULL. Remove the assertion for the benefit of code size.
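Schematically, the call site looks as follows (indicative only, not
verbatim), which is why the internal NULL check adds no safety:

    /* Schematic call site, indicative only: decryption is attempted only
     * when an incoming transform is in place. */
    if( ssl->transform_in != NULL )
    {
        ret = mbedtls_ssl_decrypt_buf( ssl, ssl->transform_in, rec );
        if( ret != 0 )
            return( ret );
    }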
The previous code performed architectural maximum record length checks
both before and after record decryption. Since MBEDTLS_SSL_IN_CONTENT_LEN
bounds the maximum length of the record plaintext, it suffices to check
only once, after (potential) decryption.
This must not be confused with the internal check that the record
length is small enough for the record to fit into the internal input
buffer; that check is done in mbedtls_ssl_fetch_input().
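The single remaining check is, in spirit, the following; the helper is
purely illustrative, with an indicative name and parameter.

    #include <stddef.h>
    #include "mbedtls/ssl.h"

    /* Illustrative helper only: after (potential) decryption, the record
     * plaintext must not exceed the configured maximum content length. */
    static int ssl_check_record_content_len( size_t plaintext_len )
    {
        if( plaintext_len > MBEDTLS_SSL_IN_CONTENT_LEN )
            return( MBEDTLS_ERR_SSL_INVALID_RECORD );

        return( 0 );
    }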
Due to how the checking script is run in Docker, worktree_rev is
ambiguous when running rev-parse. We're running it in the checked-out
worktree, so we can use HEAD instead, which is unambiguous.
* origin/pr/2660:
Fix parsing issue when int parameter is in base 16
Refactor receive_uint32()
Refactor get_byte function
Make the script portable to both pythons
Update the test encoding to support python3
update the test script
* origin/pr/1622: (29 commits)
Do not build fuzz on windows
No booleans and import config
Removing space before opening parenthesis
Style corrections
Syntax fix
Fixes warnings from MSVC
Add a linker flag to enable gcov in basic-build-test.sh
checks MBEDTLS_PEM_PARSE_C
Restore programs/fuzz/Makefile after in-tree cmake
Move fuzz directory to programs
Documentation for corpus generation
Restore tests/fuzz/Makefile after in-tree cmake
Adding ifdefs to avoid warnings for unused globals
Adds LDFLAGS fsanitize=address
Ignore compiled object files and executables
Also clean the fuzz subdirectory
copyediting README.md
Protecting client/server fuzz targts with ifdefs
Makefile support 1
Fuzz README and direct compilation
...
* origin/pr/2729:
Split _abi_compliance_command into smaller functions
Record the commits that were compared
Document how to build the typical argument for -s
Allow running /somewhere/else/path/to/abi_check.py
* origin/pr/2706:
Update Mbed Crypto to contain mbed-crypto#152
CMake: Add a subdirectory build regression test
README: Enable builds as a CMake subproject
ChangeLog: Enable builds as a CMake subproject
Remove use of CMAKE_SOURCE_DIR
* origin/pr/2632:
Adapt ChangeLog
Avoid use of large stack buffers in mbedtls_x509_write_crt_pem()
Improve documentation of mbedtls_pem_write_buffer()
Perform CRT writing in-place on the output buffer
Adapt x509write_crt.c to coding style