Previously, the UDP proxy could only remember one delayed message
for future transmission; if two messages were delayed in succession,
without another one being normally forwarded in between,
the message that got delayed first would be dropped.
This commit enhances the UDP proxy so that it can delay up to a
compile-time fixed number of messages in succession.
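A minimal sketch of the kind of bookkeeping this implies (the array size,
message size and names are illustrative; the actual udp_proxy.c code may
differ):

    #include <string.h>

    #define MAX_DELAYED_MSG 2             /* compile-time maximum, illustrative */

    typedef struct
    {
        unsigned char data[2048];         /* illustrative maximum datagram size */
        size_t len;
    } delayed_msg;

    static delayed_msg delayed[MAX_DELAYED_MSG];  /* messages held back         */
    static size_t num_delayed = 0;

    /* Hold back a message for later transmission; fail only when the
     * compile-time capacity is exhausted, instead of overwriting the
     * previously delayed message. */
    static int delay_message( const unsigned char *msg, size_t len )
    {
        if( num_delayed == MAX_DELAYED_MSG || len > sizeof( delayed[0].data ) )
            return( -1 );

        memcpy( delayed[num_delayed].data, msg, len );
        delayed[num_delayed].len = len;
        num_delayed++;

        return( 0 );
    }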
This setting belongs to the individual connection, not to a configuration
shared by many connections. (If a default value is desired, that can be handled
by the application code that calls mbedtls_ssl_set_mtu().)
There are at least two ways in which this matters:
- per-connection settings can be adjusted if MTU estimates become available
during the lifetime of the connection
- it is at least conceivable that a server might recognize restricted clients
based on their IP range and immediately set a lower MTU for them. This is much
easier to do with a per-connection setting than by maintaining multiple
near-duplicate ssl_config objects that differ only by the MTU setting.
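As a usage sketch of the per-connection setting described above (the update
trigger and the pmtu value are application-specific assumptions, not part of
the library):

    #include <stdint.h>
    #include "mbedtls/ssl.h"

    /* Called by the application once a path MTU estimate for this peer
     * becomes available, e.g. during the lifetime of the connection. */
    static void update_connection_mtu( mbedtls_ssl_context *ssl, uint16_t pmtu )
    {
        /* Affects only this connection; other connections sharing the
         * same mbedtls_ssl_config keep their own MTU setting. */
        mbedtls_ssl_set_mtu( ssl, pmtu );
    }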
This commit adds a new command line option `dgram_packing`
to the example client application programs/ssl/ssl_client2,
allowing the use of datagram packing to be enabled or disabled.
This commit adds a new command line option `dgram_packing`
to the example server application programs/ssl/ssl_server2,
allowing the use of datagram packing to be enabled or disabled.
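For example (assuming the option follows these programs' usual key=value
syntax), a DTLS run with datagram packing forbidden might be started as
"ssl_server2 dtls=1 dgram_packing=0", and analogously for ssl_client2.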
For now, just check that setting the MTU causes us to fragment. More tests are coming in
follow-up commits to ensure we respect the exact value set, including when
renegotiating.
sprintf( (char *) buf, "%s\r\n", base );
The above code generates a -Wformat-overflow warning since buf and base
are the same size. buf should be sizeof( base ) plus the characters added
by the format string; in this case, 2 bytes for "\r\n".
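A minimal sketch of the kind of fix implied, assuming base is a fixed-size
buffer (the size and contents here are illustrative):

    #include <stdio.h>

    int main( void )
    {
        unsigned char base[1024] = "hello";        /* illustrative size/content */
        /* buf needs sizeof( base ) plus the 2 bytes added by "\r\n";
         * sizeof( base ) already accounts for the terminating '\0'. */
        unsigned char buf[sizeof( base ) + 2];

        sprintf( (char *) buf, "%s\r\n", base );

        return( 0 );
    }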
When MBEDTLS_MEMORY_BUFFER_ALLOC_C was defined, the sample ssl_server2.c was
using its own memory buffer for memory allocated by the library. The size of
that buffer wasn't obvious, so this adds a macro for it, making the allocated
memory size more visible and hence easier to configure.
Newer features have increased the library's overall RAM usage when all
features are enabled. With all features enabled, ssl_server2.c was running out
of memory for the ssl-opt.sh test 'Authentication: client max_int chain,
server required'.
This commit increases the memory buffer allocation for ssl_server2.c to allow
the test to work with all features enabled.
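A sketch of the kind of change described (the macro name and the size value
are illustrative, not necessarily those used in ssl_server2.c; the mbedtls
configuration header is assumed to be included first, as in the sample
programs):

    #if defined(MBEDTLS_MEMORY_BUFFER_ALLOC_C)
    #include "mbedtls/memory_buffer_alloc.h"

    /* Named macro so the size of the static heap handed to the library
     * is visible in one place and easy to adjust. */
    #define MEMORY_HEAP_SIZE      120000

    static unsigned char alloc_buf[MEMORY_HEAP_SIZE];
    #endif

    int main( void )
    {
    #if defined(MBEDTLS_MEMORY_BUFFER_ALLOC_C)
        mbedtls_memory_buffer_alloc_init( alloc_buf, MEMORY_HEAP_SIZE );
    #endif
        /* ... rest of the server ... */
        return( 0 );
    }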
Functions timed with TIME_AND_TSC() didn't have their return values checked.
I'm not sure whether Coverity complained about existing uses, but it did about
new ones, since we consistently check their return values everywhere but here,
which it rightfully finds suspicious.
So, let's check return values. This probably adds a few cycles to existing
loop overhead, but on my machine (x86_64) the added overhead is less than the
random-looking variation between various runs, so it's acceptable.
Some calls had their own particular error checking; remove that in favour of
the new general solution.
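A simplified sketch of the pattern (this is not the actual benchmark.c macro;
mbedtls_set_alarm() and mbedtls_timing_alarmed are the existing timing
helpers, everything else is illustrative):

    #include <stdio.h>
    #include "mbedtls/timing.h"

    /* Run CODE repeatedly for roughly one second and count iterations,
     * aborting (and reporting failure) as soon as CODE returns nonzero
     * instead of silently ignoring its return value. */
    #define TIME_AND_CHECK( TITLE, CODE )                                 \
        do                                                                \
        {                                                                 \
            unsigned long ii;                                             \
            int ret = 0;                                                  \
                                                                          \
            printf( "  %-24s : ", TITLE );                                \
            fflush( stdout );                                             \
                                                                          \
            mbedtls_set_alarm( 1 );                                       \
            for( ii = 1; ret == 0 && ! mbedtls_timing_alarmed; ii++ )     \
                ret = CODE;                                               \
                                                                          \
            if( ret != 0 )                                                \
                printf( "FAILED\n" );                                     \
            else                                                          \
                printf( "%lu iterations\n", ii );                         \
        } while( 0 )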