From Hanno:
When a server replies to a cookieless ClientHello with a HelloVerifyRequest,
it is supposed to reset the connection and wait for a subsequent ClientHello
which includes the cookie from the HelloVerifyRequest.
In testing environments, it might happen that resetting the server
takes longer than it takes the client to reply to the HelloVerifyRequest
with the ClientHello+Cookie. In this case, the ClientHello gets lost
and the client will need to retransmit it. This may happen even if the underlying
datagram transport is reliable.
This commit continues commit 47db877 by removing the resend guards in the
ssl-opt.sh tests 'DTLS fragmenting: proxy MTU, XXX', which sometimes made
the tests fail when the log showed a resend from the client.
See 47db877 for more information.
When a server replies to a cookieless ClientHello with a HelloVerifyRequest,
it is supposed to reset the connection and wait for a subsequent ClientHello
which includes the cookie from the HelloVerifyRequest.
In testing environments, it might happen that resetting the server
takes longer than it takes the client to reply to the HelloVerifyRequest
with the ClientHello+Cookie. In this case, the ClientHello gets lost
and the client will need to retransmit it. This may happen even if the underlying
datagram transport is reliable.
This commit removes a guard in the ssl-opt.sh test
'DTLS fragmenting: proxy MTU, resumed handshake' which made
the test fail when the log showed a resend from the client.
We previously observed random-looking failures from this test. I think they
were caused by a race condition where the client tries to reconnect while the
server is still closing the connection and has not yet returned to an
accepting state. In that case, the server would fail to see and reply to the
ClientHello, and the client would have to resend it.
I believe logs of failing runs are compatible with this interpretation:
- the proxy logs show that the new ClientHello and the server's closing Alert
are sent within the same millisecond.
- the client logs show the server's closing Alert is received after the new
handshake has been started (discarding message from wrong epoch).
The attempted fix is for the client to wait a bit before reconnecting, which
should vastly enhance the probability of the server reaching its accepting
state before the client tries to reconnect. The value of 1 second is arbitrary
but should be more than enough even on loaded machines.
The test was run locally 100 times in a row on a slightly loaded machine (an
instance of all.sh running in parallel) without any failure after this fix.
Depends on the current transform, which might change when retransmitting a
flight containing a Finished message, so compute it only after the transform
is swapped.
Use the same values as the other 3D tests: this should make the test a bit
faster than with the default values, while not increasing the failure rate.
While at it:
- adjust "needs_more_time" setting for 3d interop tests (we can't set the
timeout values for other implementations, so the test might be slow)
- fix some supposedly DTLS 1.0 test that were using dtls1_2 on the command
line
This setting belongs to the individual connection, not to a configuration
shared by many connections. (If a default value is desired, that can be handled
by the application code that calls mbedtls_ssl_set_mtu().)
There are at least two ways in which this matters:
- per-connection settings can be adjusted if MTU estimates become available
during the lifetime of the connection
- it is at least conceivable that a server might recognize restricted clients
based on their IP range and immediately set a lower MTU for them. This is much
easier to do with a per-connection setting than by maintaining multiple
near-duplicated ssl_config objects that differ only by the MTU setting.
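For illustration, a minimal sketch of the per-connection setter (the helper
name and the MTU value passed by the caller are just examples):

    #include "mbedtls/ssl.h"

    /* The MTU is set on the mbedtls_ssl_context (the individual
     * connection), not on the shared mbedtls_ssl_config, so it can
     * differ between connections and be updated while a connection
     * is live. */
    void example_apply_mtu_estimate( mbedtls_ssl_context *ssl,
                                     uint16_t estimated_mtu )
    {
        mbedtls_ssl_set_mtu( ssl, estimated_mtu );
    }

A server could, for instance, call this with a lower value right after
accepting a client it believes to be on a constrained link, and again later
if a better MTU estimate becomes available.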
This, for example, led to the following corner-case bug:
The code attempted to piggy-back a Finished message at
the end of a datagram where precisely 12 bytes of payload
were still available. This led to an empty Finished fragment
being sent, and when mbedtls_ssl_flight_transmit() was called
again, it believed that it was just starting to send the
Finished message, and therefore called ssl_swap_epochs() even
though the swap had already happened in the call that sent
the empty fragment. As a result, the second call would send
the 'rest' of the Finished message with the wrong epoch.
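As a purely hypothetical illustration of the kind of guard that avoids this
(not the actual library code; the helper name is invented and the 12-byte
DTLS handshake header length is taken from the description above):

    #include <stddef.h>

    #define EXAMPLE_DTLS_HS_HDR_LEN 12   /* DTLS handshake header length */

    /* A handshake fragment is only worth piggy-backing onto the current
     * datagram if, after the handshake header, at least one byte of
     * payload still fits; otherwise the datagram should be flushed
     * first and the epoch swap deferred to the next call. */
    int example_can_piggyback_fragment( size_t space_left )
    {
        return( space_left > EXAMPLE_DTLS_HS_HDR_LEN );
    }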
This commit adds four tests to ssl-opt.sh running default
DTLS client and server with and without datagram packing
enabled, and checking that datagram packing is / is not
used by inspecting the debug output.
The UDP proxy does not currently dissect datagrams into records,
and hence the coverage of the reordering, packet loss and duplication
tests is much smaller if datagram packing is in use.
This commit disables datagram packing for most UDP proxy tests,
in particular all 3D (drop, duplicate, delay) tests.
Now that datagram packing can be dynamically configured,
the test exercising the behavior of Mbed TLS when facing
an out-of-order CCS message can be re-introduced, disabling
datagram packing for the sender of the delayed CCS.
This commit adds a new command line option `dgram_packing`
to the example client application programs/ssl/ssl_client2,
making it possible to allow or forbid the use of datagram packing.
This commit adds a new command line option `dgram_packing`
to the example server application programs/ssl/ssl_server2,
making it possible to allow or forbid the use of datagram packing.
This commit adds a public function
`mbedtls_ssl_conf_datagram_packing()`
that allows the application to allow or forbid the packing
of multiple records within a single datagram.
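A usage sketch, assuming the new function follows the usual
mbedtls_ssl_conf_xxx() pattern of taking the shared mbedtls_ssl_config and a
flag; the exact signature may differ, so check ssl.h:

    #include "mbedtls/ssl.h"

    /* Sketch only: forbid packing multiple records into one datagram,
     * e.g. for peers or middleboxes that cannot handle packed
     * datagrams. The signature used here is an assumption. */
    void example_forbid_packing( mbedtls_ssl_config *conf )
    {
        mbedtls_ssl_conf_datagram_packing( conf, 0 );
    }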
The tests "DTLS fragmenting: none (for reference)" and
"DTLS fragmenting: none (for reference) (MTU)" used a
maximum fragment length and an MTU value, respectively, of 2048,
which was meant to be large enough that fragmentation
of the certificate message would not be necessary.
However, it is not large enough to hold the entire flight
to which the certificate belongs, and hence there will
be fragmentation as soon as datagram packing is used.
This commit increases the maximum fragment length and
MTU values to 4096 bytes to ensure that even with datagram
packing in place, no fragmentation is necessary.
A similar change was made in "DTLS fragmenting: client (MTU)".
The `partial` argument is only used when DTLS and same-port
client reconnect are enabled. This commit marks the variable
as unused if that's not the case.
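A hypothetical sketch of the pattern, assuming the relevant compile-time
options are MBEDTLS_SSL_PROTO_DTLS and MBEDTLS_SSL_DTLS_CLIENT_PORT_REUSE
(the surrounding function is invented for illustration):

    #include "mbedtls/ssl.h"

    /* `partial` is referenced only when both DTLS and same-port client
     * reconnect are compiled in, so it is explicitly marked as unused
     * otherwise. */
    void example_reset( int partial )
    {
    #if defined(MBEDTLS_SSL_PROTO_DTLS) && \
        defined(MBEDTLS_SSL_DTLS_CLIENT_PORT_REUSE)
        if( partial == 0 )
        {
            /* full reset, e.g. also clear state kept across reconnects */
        }
    #else
        (void) partial;
    #endif
    }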
If neither the maximum fragment length extension nor DTLS
is used, the SSL context argument is unnecessary as the
maximum payload length is hardcoded as MBEDTLS_SSL_MAX_CONTENT_LEN.
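For illustration, a sketch of that distinction (the helper name is invented;
the configuration macros are the existing ones):

    #include <stddef.h>
    #include "mbedtls/ssl.h"

    #if !defined(MBEDTLS_SSL_MAX_FRAGMENT_LENGTH) && \
        !defined(MBEDTLS_SSL_PROTO_DTLS)
    /* Neither MFL nor DTLS: the limit is a compile-time constant and no
     * SSL context is needed. */
    size_t example_max_payload( void )
    {
        return( MBEDTLS_SSL_MAX_CONTENT_LEN );
    }
    #else
    /* Otherwise the limit depends on the negotiated maximum fragment
     * length and/or the MTU, so the connection's context is needed. */
    size_t example_max_payload( const mbedtls_ssl_context *ssl )
    {
        (void) ssl;   /* a real implementation would inspect ssl here */
        return( MBEDTLS_SSL_MAX_CONTENT_LEN );
    }
    #endif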
The test exercising a delayed CCS message is not
expected to work when datagram packing is used,
as the current UDP proxy is not able to recognize
records which are not at the beginning of a
datagram.
This commit finally enables datagram packing by modifying the
record preparation function ssl_write_record() so that it does
not always call mbedtls_ssl_flush_output().
The packing of multiple records within a single datagram works
by increasing the pointer `out_hdr` (pointing to the beginning
of the next outgoing record) within the datagram buffer, as
long as space is available and no flush was mandatory.
This commit does not yet change the code's behavior of always
flushing after preparing a record, but it introduces the logic
of increasing `out_hdr` after preparing the record, and resetting
it after the flush has been completed.
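A simplified sketch of that logic (not the actual library code; names and
the buffer size are illustrative):

    #include <stddef.h>

    /* `buf` stands for the datagram output buffer, `out_hdr` for the
     * position where the next record header will be written. */
    typedef struct
    {
        unsigned char buf[1500];
        unsigned char *out_hdr;
    } example_out_ctx;

    /* After preparing a record of `record_len` bytes at `out_hdr`:
     * either flush the datagram and reset `out_hdr` to the start of the
     * buffer, or keep the datagram open and append the next record
     * right after the one just prepared. */
    void example_after_record( example_out_ctx *ctx,
                               size_t record_len,
                               int flush_required )
    {
        size_t used = (size_t)( ctx->out_hdr - ctx->buf ) + record_len;

        if( flush_required || used >= sizeof( ctx->buf ) )
        {
            /* ... send the first `used` bytes of ctx->buf as one
             * datagram ... */
            ctx->out_hdr = ctx->buf;
        }
        else
        {
            /* Keep the datagram open: the next record starts right
             * after the one just prepared. */
            ctx->out_hdr += record_len;
        }
    }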