mirror of https://github.com/ARMmbed/mbed-os.git
Our lwIP configuration had TCP_QUEUE_OOSEQ disabled. This can cause significant performance problems, as observed during testing: one lost packet can lock an input stream into a mode where the transmitter keeps thinking packets are being lost, and so keeps slowing down. This caused test failures - a transfer that would normally take 10s hit a 60s timeout. Turning the option on increases code size, but does not significantly increase static memory use. The memory used for out-of-order packets comes from the same pbuf pool as outgoing TCP segments, so there is contention when running bidirectionally. Out-of-order processing is on by default in lwIP - this seems to be another example of us excessively paring it back.
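As a rough illustration (not the verbatim mbed OS change), enabling the option lives in the lwipopts.h listed below. TCP_QUEUE_OOSEQ is a standard lwIP option; the TCP_OOSEQ_MAX_PBUFS / TCP_OOSEQ_MAX_BYTES caps shown are optional knobs available in recent lwIP versions, and the values used here are hypothetical examples, not mbed OS defaults.

```c
/* Sketch of the relevant lwipopts.h settings (illustrative only). */

/* Queue TCP segments that arrive out of order instead of dropping them,
 * so a single lost packet no longer convinces the transmitter that data
 * keeps getting lost. Costs code size; the queued segments come from the
 * normal pbuf pool, shared with outgoing TCP segments. */
#define TCP_QUEUE_OOSEQ         1

/* Optional (recent lwIP versions): bound how much of the pbuf pool one
 * connection's out-of-order queue may hold, limiting contention with
 * transmit buffers during bidirectional transfers. Example values only. */
#define TCP_OOSEQ_MAX_PBUFS     4
#define TCP_OOSEQ_MAX_BYTES     (4 * TCP_MSS)
```

Without an explicit cap, the out-of-order queue is bounded only by pbuf pool availability, which is why the bidirectional contention noted above is worth keeping in mind.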
Directory contents:

lwip
lwip-eth/arch
lwip-sys
.mbedignore
CONTRIBUTING.md
EthernetInterface.cpp
EthernetInterface.h
emac_lwip.c
emac_stack_lwip.cpp
eth_arch.h
lwip_stack.c
lwip_stack.h
lwipopts.h
mbed_lib.json
ppp_lwip.cpp
ppp_lwip.h