Depending on timing and HW, there might be some delay before the master's
request is noticed, so it is better to loop on slave.receive() than to
make a single call.
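A minimal sketch of the polling loop, using the mbed I2CSlave API (the
pin names and the 0xA0 slave address are assumptions for an arbitrary
board):

    #include "mbed.h"

    I2CSlave slave(I2C_SDA, I2C_SCL);

    int main() {
        char buf[10];
        slave.address(0xA0);
        while (1) {
            // Keep polling: the master's request may arrive late
            int event = slave.receive();
            switch (event) {
                case I2CSlave::ReadAddressed:
                    slave.write(buf, sizeof(buf));  // master reads from us
                    break;
                case I2CSlave::WriteAddressed:
                    slave.read(buf, sizeof(buf));   // master writes to us
                    break;
                default:
                    break;                          // NoData: try again
            }
        }
    }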
Supported STM32 targets use one of two possible versions of the I2C IP.
This patch makes start/stop and single-byte read/write work correctly
for IP V2. These operations were not working before and do not seem to
be widely used.
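A minimal sketch of the byte-level master sequence this patch fixes,
using the mbed I2C API (pins, the 0xA0 device address, and the register
layout are assumptions):

    #include "mbed.h"

    I2C i2c(I2C_SDA, I2C_SCL);

    int read_register(int reg) {
        i2c.start();             // explicit start condition
        i2c.write(0xA0);         // address + write bit, returns 1 on ACK
        i2c.write(reg);          // register to read, one byte at a time
        i2c.start();             // repeated start
        i2c.write(0xA1);         // address + read bit
        int value = i2c.read(0); // read one byte, reply with NACK
        i2c.stop();              // explicit stop condition
        return value;
    }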
Previously, the RTOS threads test conditionally changed the thread
stack size for all test cases based on the target. Now, it uses the
default stack size for all targets when threads are created serially,
and uses a 512-byte stack for the threads that are created in parallel.
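A minimal sketch of the two creation patterns, assuming the mbed OS 5
Thread API (the test body is a placeholder):

    #include "mbed.h"
    #include "rtos.h"

    void test_body() { /* ... */ }

    int main() {
        // Serial case: one thread alive at a time, default stack size
        {
            Thread t;
            t.start(test_body);
            t.join();
        }

        // Parallel case: several threads alive at once, so each gets
        // a smaller 512-byte stack to fit on constrained targets
        Thread a(osPriorityNormal, 512);
        Thread b(osPriorityNormal, 512);
        a.start(test_body);
        b.start(test_body);
        a.join();
        b.join();
    }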
This change was spurred by a confusing error. I attempted to compile for
the RZ_A1H (a Cortex-A device), and I had the standalone ARM compiler,
which supports Cortex-A, in my system path. However, the default path for
the ARM compiler in settings.py points at a Keil installation, which only
supports Cortex-M. The build found my Keil installation and used that
instead. This change proposes to remove this default behavior and instead
require the user to explicitly set the intended compiler, either by a
settings file, mbed CLI, environment variables, or by placing the
compiler in their PATH.
To allow a network stack to support both NSAPI options and its own, try
to make sure the NSAPI levels don't collide with level numbers likely to
be used by network stacks.
Distinguish between socket and stack options, and tighten up the
documentation. Add IP MRU stack options as an example (implementation is
not immediately planned for any stack, but could be useful).
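A sketch of the separated levels and option enums, in the style of
mbed's nsapi_types.h (the numeric level values and the non-MRU option
names here are assumptions):

    typedef enum nsapi_level {
        NSAPI_STACK  = 5000,  // option applies to the stack as a whole
        NSAPI_SOCKET = 7000,  // option applies to a single socket
    } nsapi_level_t;

    typedef enum nsapi_socket_option {
        NSAPI_REUSEADDR,      // allow rebinding a local address in use
        NSAPI_KEEPALIVE,      // enable TCP keepalive probes
    } nsapi_socket_option_t;

    typedef enum nsapi_stack_option {
        NSAPI_IPV4_MRU,       // IPv4 maximum receive unit (the example)
        NSAPI_IPV6_MRU,       // IPv6 maximum receive unit
    } nsapi_stack_option_t;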
I modified the TTB setting of the RO_DATA area. The current setting
marks this area as not executable, so a prefetch abort occurs when
trying to execute code placed there. I therefore changed the descriptor
from "Sect_Normal_RO" to "Sect_Normal_Cod".
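An illustrative sketch of the distinction, assuming the ARMv7-A
short-descriptor section format (only the XN bit is shown; the real
descriptors also set the TEX/C/B/AP/domain fields):

    #include <stdint.h>

    #define SECTION_ENTRY (0x2u)     // bits[1:0] = 0b10: section descriptor
    #define SECTION_XN    (1u << 4)  // Execute Never bit

    uint32_t Sect_Normal_RO  = SECTION_ENTRY | SECTION_XN; // prefetch abort
    uint32_t Sect_Normal_Cod = SECTION_ENTRY;              // execution OK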
Despite being able to buffer an arbitrary stream of data, TCP send is
still limited by the available buffer space in the network stack. Errors
from TCP send are perfectly reasonable and should be handled by reducing
the amount of data attempted in each send.
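A minimal sketch of that handling (socket setup omitted; the helper
name is a placeholder):

    #include "mbed.h"
    #include "TCPSocket.h"

    nsapi_size_or_error_t send_all(TCPSocket &sock, const char *data, int size) {
        int sent = 0;
        int chunk = size;
        while (sent < size) {
            int result = sock.send(data + sent, chunk);
            if (result < 0) {
                chunk /= 2;        // stack buffers full: attempt less data
                if (chunk == 0) {
                    return result; // fail only if nothing at all fits
                }
                continue;
            }
            sent += result;        // partial sends are also normal
            chunk = size - sent;
        }
        return sent;
    }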
These tests could adopt the dynamically sized buffers used for the
packet-pressure tests; however, throughput is not an important feature
of these tests.
Printing out dropped packets caused significantly more overhead in the
parallel tests due to increased noise on the network. This noise would
push the tests past their provided timeouts.
Dynamic buffers give the network stack the maximum throughput while
still supporting smaller devices. This should expose the largest number
of issues across differently sized platforms.
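A sketch of the dynamic-buffer approach: start from a generous size and
halve it until allocation succeeds, so large targets get maximum
throughput while small targets can still run the test (the starting and
minimum sizes are assumptions):

    #include <stdlib.h>

    char *alloc_test_buffer(size_t *size) {
        *size = 8192;
        char *buffer = (char *)malloc(*size);
        while (!buffer && *size > 32) {
            *size /= 2;                    // fall back for smaller devices
            buffer = (char *)malloc(*size);
        }
        return buffer;                     // NULL only if the minimum failed
    }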
Additionally, restructured the UDP tests to avoid unintentionally
flooding the receiving side with bad data after failed packets. Also
added a bit more documentation.
A larger buffer gives the network stack the best options for maximizing
throughput. However, the initial buffer size did not fit on small
targets, so it was resized from 8192 to 1024 bytes.
Added a test for the pattern of packets used during the DTLS handshake.
This pattern (5x ~300-byte packets) has been very problematic for new
network interfaces.
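A sketch of the traffic pattern the test exercises: five back-to-back
datagrams of roughly 300 bytes, mimicking a DTLS handshake flight (the
destination and payload contents are placeholders):

    #include "mbed.h"
    #include "UDPSocket.h"
    #include <string.h>

    void send_handshake_pattern(UDPSocket &sock, const SocketAddress &dest) {
        char packet[300];
        memset(packet, 0xAB, sizeof(packet)); // payload contents don't matter
        for (int i = 0; i < 5; i++) {
            sock.sendto(dest, packet, sizeof(packet));
        }
    }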
Attempts to maximize the device's bandwidth with an exponentially
growing transaction of random sequences. Also prints the time taken and
the bandwidth reached during the tests.
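A sketch of the exponentially growing pattern, assuming the mbed OS 5
TCPSocket and Timer APIs (socket setup and the error handling described
earlier are omitted; names and sizes are placeholders):

    #include "mbed.h"
    #include "TCPSocket.h"
    #include <stdlib.h>

    void bandwidth_test(TCPSocket &sock, char *buffer, size_t max_size) {
        Timer timer;
        for (size_t size = 32; size <= max_size; size *= 2) {
            for (size_t i = 0; i < size; i++) {
                buffer[i] = rand();          // random payload sequence
            }
            timer.reset();
            timer.start();
            sock.send(buffer, size);
            timer.stop();
            float seconds = timer.read();
            printf("%u bytes in %f s (%f B/s)\r\n",
                   (unsigned)size, seconds, size / seconds);
        }
    }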