The *_packet_pressure_parallel tests are useful for catching synchronization
errors, but they push the practical limits of the network stack. Failing
these tests is not unreasonable.
The *_packet_pressure tests are somewhat more forgiving, but they also
push the practical limits of the network stack. Hopefully these
will become stable in the near future.
Generalized handling of DNS servers when an interface is brought up with both
IPv4 and IPv6 addresses. Falls back to the Google DNS servers if no DNS server
is found through DHCP.
Also added support for the `add_dns_server` method in lwIP to support
custom servers.
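A minimal sketch of registering a custom server (the interface type, the exact
access path to `add_dns_server`, and the resolver address are assumptions for
illustration, not the definitive API):

```cpp
#include "mbed.h"
#include "EthernetInterface.h"

int main()
{
    EthernetInterface net;
    net.connect();  // DHCP; the stack falls back to Google DNS if none is offered

    // Register a site-local resolver; reaching add_dns_server directly
    // through the interface is an assumption here
    SocketAddress dns("192.168.1.1");
    nsapi_error_t err = net.add_dns_server(dns);
    if (err != NSAPI_ERROR_OK) {
        printf("add_dns_server failed: %d\n", err);
    }
}
```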
Depending on timing and hardware, there might be some delay before the master's
request is noticed, so it is better to loop on slave.receive() than to rely on
a single call.
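A minimal sketch of the polling loop (the pins and slave address are
board-specific assumptions):

```cpp
#include "mbed.h"

int main()
{
    I2CSlave slave(p9, p10);   // SDA/SCL pins are an assumption
    slave.address(0xA0);

    char buf[10] = "ABCDEFGHI";
    while (1) {
        // Keep polling: a single receive() call can miss the master's
        // request if it arrives a moment later
        switch (slave.receive()) {
            case I2CSlave::ReadAddressed:
                slave.write(buf, sizeof(buf));  // master reads from us
                break;
            case I2CSlave::WriteAddressed:
                slave.read(buf, sizeof(buf));   // master writes to us
                break;
        }
    }
}
```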
To allow a network stack to support both NSAPI and its own options, try to make
sure the NSAPI levels don't collide with level numbers likely to be used by
network stacks.
Distinguish between socket and stack options, and tighten up documentation. Add
IP MRU stack options as an example (implementation not immediately planned for
any stack, but could be useful).
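A sketch of the idea (the numeric levels are illustrative assumptions; the
point is that they sit far above the small integers, such as lwIP's
SOL_SOCKET, that a stack is likely to claim for itself):

```cpp
// Option levels: large values to stay clear of stack-internal levels
typedef enum nsapi_level {
    NSAPI_STACK  = 5000,   // options applying to the stack as a whole
    NSAPI_SOCKET = 7000,   // options applying to a single socket
} nsapi_level_t;

// Stack-level options, e.g. the IP MRU options added as an example
typedef enum nsapi_stack_option {
    NSAPI_IPV4_MRU,   // maximum receive unit for IPv4
    NSAPI_IPV6_MRU,   // maximum receive unit for IPv6
} nsapi_stack_option_t;
```

A stack that sees a level it does not recognize can simply return an
unsupported error, leaving its own level numbers free for native options.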
Despite being able to buffer an arbitrary stream of data,
TCP send is still limited by the available buffer space in the
network stack. Errors from TCP send are perfectly reasonable
and should be handled by reducing the amount of data attempted per call.
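A minimal sketch of that handling, assuming a connected TCPSocket (the
function and variable names are illustrative):

```cpp
#include "mbed.h"
#include "TCPSocket.h"

// Send `size` bytes, shrinking the per-call attempt when the stack's
// buffers fill up, rather than treating a failed send as fatal
nsapi_size_or_error_t send_all(TCPSocket &sock, const uint8_t *data, size_t size)
{
    size_t sent = 0;
    size_t attempt = size;   // how much to try in each send() call
    while (sent < size) {
        size_t remaining = size - sent;
        size_t chunk = attempt < remaining ? attempt : remaining;
        nsapi_size_or_error_t ret = sock.send(data + sent, chunk);
        if (ret >= 0) {
            sent += ret;         // partial sends are normal, keep going
        } else if (attempt > 1) {
            attempt /= 2;        // buffers full: retry with a smaller piece
        } else {
            return ret;          // a hard error even at one byte
        }
    }
    return (nsapi_size_or_error_t)sent;
}
```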
These tests could adopt the dynamically sized buffers used for the
packet-pressure tests; however, throughput is not an important aspect
of these tests.
Printing out dropped packets caused significantly more overhead in the
parallel tests due to increased noise on the network. This noise would
push the tests past their provided timeouts.
Dynamic buffers give the network stack the maximum throughput while
still supporting smaller devices. This should expose the largest number
of issues across differently sized platforms.
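A sketch of the sizing loop (the bounds are illustrative assumptions):

```cpp
#include <stdint.h>
#include <stdlib.h>

// Start large for throughput, halve until the allocation fits the target
uint8_t *alloc_test_buffer(size_t *out_size)
{
    size_t size = 8192;
    uint8_t *buffer = (uint8_t *)malloc(size);
    while (!buffer && size > 32) {
        size /= 2;                          // smaller targets get less
        buffer = (uint8_t *)malloc(size);
    }
    *out_size = size;
    return buffer;   // NULL only if even the smallest size failed
}
```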
Additionally, restructured the UDP tests to avoid unintentionally flooding
the receiving side with bad data after failed packets.
Also added a bit more documentation.
A larger buffer gives the network stack the best chance of maximizing
throughput. However, the initial buffer size did not fit on small
targets. Resized 8192 -> 1024.
Added a test for the pattern of packets used during the DTLS
handshake. This pattern (5x ~300-byte packets) has been very
problematic for new network interfaces.
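A minimal sketch of that traffic shape over UDP (the peer address, port, and
payload contents are placeholder assumptions):

```cpp
#include "mbed.h"
#include "UDPSocket.h"

void send_handshake_pattern(NetworkInterface *net)
{
    UDPSocket sock;
    sock.open(net);
    SocketAddress dest("192.168.1.100", 4433);   // placeholder peer

    uint8_t packet[300];
    memset(packet, 0xAA, sizeof(packet));
    for (int i = 0; i < 5; i++) {
        // Five back-to-back ~300-byte datagrams: the shape that has
        // tripped up new network interfaces
        sock.sendto(dest, packet, sizeof(packet));
    }
    sock.close();
}
```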