*_packet_pressure_parallel tests are useful for checking for synchronization
errors, but they push the practical limits of the network stack. Failing
these tests is not unreasonable.
*_packet_pressure tests are a bit more reasonable, but they also push the
practical limits of the network stack. Hopefully these will become stable
in the near future.
Generalized handling of DNS servers when brought up with both IPv4 and
IPv6 addresses. Falls back to the Google DNS servers if no DNS server is
found through DHCP.
Also added the `add_dns_server` method to lwip to support custom servers.
Depending on timing and hardware, there might be some delay before the master
request is noticed, so it is better to loop on slave.receive() in a while loop
than to make a single call.
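For illustration, a minimal sketch of such a polling loop, assuming mbed's I2CSlave and Timer APIs; the pins, buffer handling and timeout are placeholders, and the slave address is assumed to be configured elsewhere:

```cpp
#include "mbed.h"

I2CSlave slave(I2C_SDA, I2C_SCL);   // pins are illustrative

// Poll receive() until the master's request shows up (or we time out),
// rather than relying on a single receive() call.
bool wait_for_master_write(char *buf, int len, int timeout_ms)
{
    Timer t;
    t.start();
    while (t.read_ms() < timeout_ms) {
        if (slave.receive() == I2CSlave::WriteAddressed) {
            slave.read(buf, len);   // master wrote data addressed to us
            return true;
        }
    }
    return false;
}
```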
To allow a network stack to support both NSAPI and its own options, try to make
sure the NSAPI levels don't collide with level numbers likely to be used by
network stacks.
Distinguish between socket and stack options, and tighten up documentation. Add
IP MRU stack options as an example (implementation not immediately planned for
any stack, but could be useful).
Despite being able to buffer an arbitrary stream of data,
TCP send is still limited by the available buffer space in the
network stack. Errors from TCP send are perfectly reasonable
and should be handled by reducing the amount of data attempted per send.
These tests could adopt the dynamically sized buffers used for the
packet-pressure tests, however throughput is not an important feature
of these tests.
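A sketch of that kind of fallback, assuming a TCPSocket and illustrative helper names rather than the actual test code:

```cpp
// When send() reports an error (typically because the stack's buffers are
// full), retry with a smaller chunk instead of treating it as fatal.
nsapi_size_or_error_t send_all(TCPSocket &sock, const uint8_t *data, size_t len)
{
    size_t sent = 0;
    size_t chunk = len;
    while (sent < len) {
        nsapi_size_or_error_t ret = sock.send(data + sent, chunk);
        if (ret > 0) {
            sent += ret;
            chunk = len - sent;   // try the full remainder next time
        } else if (chunk > 1) {
            chunk /= 2;           // reduce the amount attempted and retry
        } else {
            return ret;           // even a 1-byte send fails: report the error
        }
    }
    return sent;
}
```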
Printing out dropped packets caused significantly more overhead in the
parallel tests due to the increased noise on the network. This overhead
would push the tests past their provided timeouts.
Dynamic buffers give the network stack the maximum throughput while
still supporting smaller devices. This should expose the largest number
of issues across differently sized platforms.
Additionally, restructured the UDP tests to avoid unintentionally flooding
the receiving side with bad data after failed packets.
Also added a bit more documentation.
A larger buffer gives the network stack the best opportunity to maximize
throughput. However, the initial buffer size did not fit on small
targets. Resized 8192 -> 1024.
Added test for the pattern of packets used during the DTLS
handshake. This pattern (5x ~300 byte packets) has been very
problematic for new network interfaces.
Attempt to maximize the device's bandwidth with an exponentially growing
transaction of random sequences. Also prints the time taken and the bandwidth
reached during the tests.
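Roughly the shape of the loop, with hypothetical helpers (fill_random, send_all) and an assumed upper bound:

```cpp
Timer timer;
for (size_t size = 64; size <= MAX_SIZE; size *= 2) {   // MAX_SIZE is assumed
    fill_random(buffer, size);      // hypothetical helper: random payload
    timer.reset();
    timer.start();
    send_all(sock, buffer, size);   // hypothetical helper: push it all through
    timer.stop();
    float secs = timer.read();
    printf("%u bytes in %.3f s (%.1f KB/s)\n",
           (unsigned)size, secs, (size / 1024.0f) / secs);
}
```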
A few new error codes are added to nsapi_error_t and
support for non-blocking socket connect is added.
Nanostack's connect call will be non-blocking.
LWIP's connect call, by contrast, is currently blocking; it could now be
changed to be non-blocking.
1. Add targets into build_travis.py and tests.py.
2. Add target SPI pins into SPI SD test samples.
3. Rename target TOOLCHAIN_GCC_ARM/retarget.c to avoid name collision of compiled retarget.o with platform/retargets.cpp.
condition                 POSIX error     mbed error
good host, closed port    ECONNREFUSED    NSAPI_ERROR_NO_CONNECTION
bad host                  EHOSTUNREACH    NSAPI_ERROR_NO_CONNECTION
bad network               ENETUNREACH     NSAPI_ERROR_NO_CONNECTION
During open, the socket checked the internal stack variable,
assuming it would always be null on a socket not connected to
the network. However, when a socket was closed, the stack variable
was not updated, causing the socket to incorrectly return a
parameter error if reopened.
The simple fix was to set the stack to null on close. A non-null
stack is a precondition for a non-null socket variable, so no additional
checks are needed in socket functions.
It's currently possible to generate a socket event when a non-blocking socket is closed:
1. _pending is set to 0 in https://github.com/ARMmbed/mbed-os/blob/master/features/netsocket/TCPSocket.cpp#L22
when the socket is created.
2. close() calls event() in https://github.com/ARMmbed/mbed-os/blob/master/features/netsocket/Socket.cpp#L66
3. event() increments _pending, and since _pending is 1 it will call _callback() in https://github.com/ARMmbed/mbed-os/blob/master/features/netsocket/TCPSocket.cpp#L167
However, if send() (for example) is called, this can happen:
- send() is called and sets _pending to 0.
- when the data is sent, event() is called, which sets _pending to 1 and calls _callback().
- if close() is called at this point, there won't be an event generated for close() anymore,
since _pending will be set to 2.
Same thing for recv. Also, same thing for TCPServer and UDPSocket.
This PR changes the initial value of _pending to 1 instead of 0, so that
events are never generated for close().
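A simplified sketch of the mechanism described above (not the exact source):

```cpp
// The user callback only fires when _pending transitions to 1. With _pending
// initialised to 1 on creation, the event raised by close() takes it to 2,
// so no callback is generated for the close itself.
void TCPSocket::event()
{
    _pending += 1;
    if (_callback && _pending == 1) {
        _callback();
    }
}
```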
cd60f73 Merge branch 'mbed-os-lwip-rc2-maint' into mbed-os-lwip-rc2-maint-prefixed
3a50479 fixed bug #49676 (Possible endless loop when parsing dhcp options) & added unit test for that
git-subtree-dir: features/FEATURE_LWIP/lwip-interface/lwip
git-subtree-split: cd60f73f110829e00df46593fea5db26bcfb1662
mbed compile doesn't support two different FEATURE_X folders being merged, so we'll have to move our nanostack driver into the Nanostack folder for the time being.
The configuration option for the mbed TLS specific hardware acceleration
has to be in the macro section and not in the device capabilities
section in targets.json.
The option has also been renamed to better reflect its function.
Crypto hardware acceleration might require defining a lot of mbed
TLS specific macros. Enumerating all of them in `targets.json` creates
too much noise, therefore we move them into a target-specific mbed TLS
header.
A target with crypto hardware acceleration has to
- indicate its capability in `targets.json` by adding "CRYPTO"
  to the "device_has" section
- define its crypto hardware acceleration related macros
  in an `mbedtls_device.h` header, as sketched below
- place the `mbedtls_device.h` file in the
  `features/mbedtls/targets/TARGET_XXXX`
  directory specific to the target
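A minimal sketch of such a header, assuming a target that accelerates AES and SHA-256; the actual set of *_ALT macros depends on the hardware:

```cpp
/* features/mbedtls/targets/TARGET_XXXX/mbedtls_device.h (sketch) */
#ifndef MBEDTLS_DEVICE_H
#define MBEDTLS_DEVICE_H

/* Switch the listed primitives over to the target's hardware-backed
 * implementations; which macros apply here is an assumption. */
#define MBEDTLS_AES_ALT
#define MBEDTLS_SHA256_ALT

#endif /* MBEDTLS_DEVICE_H */
```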
This is to allow other types of PHY drivers than just RF.
Mesh-API does not actually care about the driver type; it is the driver's
responsibility to register the right handlers with Nanostack.
* Implement a wrapper class NanostackRfPhy to ensure backward
compatibility.
* Remove mesh_connect()/disconnect() functions from MeshInterface
This job is already done in inherited classes.
* LoWPANNDInterface and ThreadInterface should only be used with
NanostackRfPhy.
* Move all the functionality to LoWPANNDInterface and
ThreadInterface classes.
* AbstractMesh class modified to be pure virtual
* Thread/6LoWPAN specific functionality totally separated.
Now the linker will drop the unreferenced classes.
* MeshInterfaceNanostack now inherits from AbstractMesh
The cbmaster_done function is a callback which will be called from
the asynch I2C interrupt handler. Calling printf from this context
sometimes leads to missed interrupts on the slave side. This was
encountered at least on STM32F3 MCUs.
Initially these assertions were added to protect against simultaneous
send/recv from the same socket when similarly purposed mutexes were
removed.
However, simultaneous send/recv can still be useful for UDP if the
payload is guaranteed to be less than the MTU across the entire
connection.
set_ip_bytes() does a 16-byte memcpy from the input buffer to
the local nsapi_addr_t regardless of the address version.
If the address version is IPv4, the input buffer may only be
4 bytes in size. This causes an out-of-bounds access on the input buffer.
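The fix this implies looks roughly like the following fragment; the nsapi_addr_t fields come from the netsocket API, but the exact shape of the method and the set_addr helper are assumptions:

```cpp
// Copy only as many bytes as the address version provides, so an IPv4
// caller can safely pass a 4-byte buffer.
void SocketAddress::set_ip_bytes(const void *bytes, nsapi_version_t version)
{
    nsapi_addr_t addr = {};
    addr.version = version;
    if (version == NSAPI_IPv4) {
        memcpy(addr.bytes, bytes, 4);
    } else if (version == NSAPI_IPv6) {
        memcpy(addr.bytes, bytes, 16);
    }
    set_addr(addr);
}
```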
Signed-off-by: Tony Wu <tonywu@realtek.com>
nsapi_error_t - enum of errors or 0 for NSAPI_ERROR_OK
nsapi_size_t - unsigned size of data that could be sent
nsapi_size_or_error_t - either a non-negative size or negative error
Replace Comodo and OpenDNS IPv4 servers with Google and DNS.WATCH IPv6
servers, so IPv6-only devices (e.g. 6LoWPAN) have a default.
3 IPv4 resolvers should be plenty - existing code doesn't remember which
one last worked, so if early list entries were unreachable performance
would be consistently bad anyway. Replacing two entries avoids
increasing image size and RAM consumption.
On an IPv6-only or IPv4-only system, the sendto() for the wrong type of
address should fail immediately - change loop to move on to the next
server for any sendto() error.
Previously, exhausting hardware buffers would begin blocking the lwip
thread. This patch changes the emac layer to simply drop ethernet
frames, leaving recovery up to a higher level protocol.
This is consistent with the behaviour of the emac layer when unable
to allocate dynamic memory.
- cc.h@57,1: "BYTE_ORDER" redefined
- lwip_inet_chksum.c@560,44: passing argument 1 of 'thumb2_checksum'
discards 'const' qualifier from pointer target type
- lwip_pbuf.c@1172,9: variable 'err' set but not used
- SocketAddress.cpp@293,1: control reaches end of non-void function
Takes advantage of the get_ip_address function to predict the IP
address version wanted by the underlying interface. This should avoid
the need for most IPv6 interfaces to overload gethostbyname.
suggested by @kjbracey-arm
This change prevents the standard library from allocating a large buffer
on the heap. On GCC_ARM, this is a saving of 1K. On ARM, this is a saving
of 64 bytes.
This was actually several bugs colluding together.
1. Confusion on the buffer-semaphore paradigm used led to misuse of the
tx semaphore and potential for odd behaviour.
2. Equality tests on tx_consume_index and tx_produce_index did not
handle overflow correctly. This would allow tx_consume_index to catch
up to tx_produce_index and trick the k64f_rx_reclaim function into
forgetting about a whole buffer of pbufs.
3. On top of all of that, the ENET_BUFFDESCRIPTOR_TX_READ_MASK was not
correctly read immediately after being set due to either a compiler
optimization or hardware delays. This caused k64f_low_level_output
to eagerly overrun existing buff-descriptors before they had been
completely sent. Adopting the counting-semaphore paradigm for item 1
avoided this concern.
As pointed out by @infinnovation, the overflow only occurs in the rare
case that the 120MHz CPU can actually generate packets faster than the
ENET hardware can transmit on a 100Mbps link.
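For the wraparound issue in point 2, the usual pattern is to compare element counts rather than raw index equality; a generic sketch with illustrative names, not the driver code:

```cpp
#include <cstdint>

#define ENET_TX_RING_LEN 8   // illustrative ring size

// Free-running 32-bit indices: unsigned subtraction stays correct even after
// the counters wrap, so "full" can never be mistaken for "empty".
static volatile uint32_t tx_produce_index;
static volatile uint32_t tx_consume_index;

static inline uint32_t tx_in_flight(void)
{
    return tx_produce_index - tx_consume_index;
}

static inline bool tx_ring_empty(void) { return tx_in_flight() == 0; }
static inline bool tx_ring_full(void)  { return tx_in_flight() == ENET_TX_RING_LEN; }
```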
+ Added ``void debug(bool dbg)`` method to allow enabling/disabling
serial debug at runtime.
+ Replaced calls to ``debug`` with ``debug_if`` to prevent messages
from being printed over serial when debug is disabled.
Signed-off-by: Bruno Monteiro Pires <brunomonteiropires@gmail.com>
In the config store create test in test case #5 the amount of available
memory is determined by fully allocating the heap. This is done
multiple times to determine if there is a memory leak. This causes
problems when even slight fragmentation occurs in the heap, since
the size that can be allocated is decreased slightly, which the test
flags as a memory leak.
This patch makes memory leak detection more robust by using the metrics
provided by mbed_stats_heap_get. These metrics are an exact
measurement of the memory allocated and are not affected by fragmentation.
This allows the memory leak test to report correct values regardless of
fragmentation.
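For reference, the metric-based check can be as simple as comparing heap statistics before and after the allocation cycle; a sketch assuming the mbed_stats.h API, with the test scaffolding omitted:

```cpp
#include "mbed_stats.h"

// current_size counts bytes currently allocated, so it is an exact measure
// of outstanding allocations and is unaffected by heap fragmentation.
static bool heap_leaked(void (*run_alloc_cycle)(void))
{
    mbed_stats_heap_t before, after;
    mbed_stats_heap_get(&before);
    run_alloc_cycle();                // allocate and free everything
    mbed_stats_heap_get(&after);
    return after.current_size != before.current_size;
}
```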
When closing a file handle, remove the handle from the handle list
regardless of the reference count of the key it points to.
This prevents the config store from keeping a reference to file handles
that have gone out of scope.
The function cfstore_delete_ex is written under the assumption that
CFSTORE_REALLOC will never fail if the size is decreasing. Regardless
of the status of CFSTORE_REALLOC the entry is removed from the config
store and zeroed. This works correctly if CFSTORE_REALLOC correctly
updates area_0_tail, but can lead to crashes when area_0_tail is
left unchanged. The crash occurs because, when iterating over the config
store data, cfstore_get_next_hkvt is unable to determine the end of
valid data.
This patch fixes this problem by handling the realloc failure case:
area_0_tail is updated even if CFSTORE_REALLOC returns NULL. This
patch also adds an assert to check for out-of-bounds entries when
calling cfstore_get_next_hkvt. This allows an assert to be triggered
if this bug is re-introduced, rather than a crash.
When the config store is powered down area_0_head is freed, but
area_0_len is not set to 0. This causes problems when cfstore_realloc_ex is
called, since on the first allocation it appears that the config store
size is decreasing, and therefore the data is not initialized.
Since the data is uninitialized, various fields such as the reference
count can have invalid values. On GCC_ARM built with heap stats enabled
this manifests as a crash due to an invalid reference count.
This patch fixes this problem by setting area_0_len to 0 when the data
is freed.
* Binary build script is modified to follow the current mbed OS baseline structure.
* License files are moved to the correct location.
* Contribution.md is also moved to the correct location.
Update the current version of mbed TLS with the development HEAD of the
mbed TLS project repository. This mostly includes the latest CMAC
feature. Also, update the version in the importer Makefile and
VERSION.txt with the hash of the mbed TLS commit that was sync'ed.
Returning a wifi access point without information regarding the
security type is only valid if the security type is unknown (from
the perspective of the network-socket API). For clarity in situations
in which scan may return an unsupported, but known security type,
type name has been changed to NSAPI_SECURITY_UNKNOWN.
Using a class for this structure is more idiomatic C++, allows
future deprecations, and gives more flexibility in the internal structure.
Additionally, this matches the design of the SocketAddress class
specific to the network-socket API.
Current asynchronous functions do not match the blocking/attach
pattern of the current codebase.
Unfortunately, the asynchronous API cannot be updated within the
current time constraints. The async functions will be removed
for now and can be reconsidered at a later date.
* The application has been using MBED_MESH_API_6LOWPAN_ND_SECURITY_MODE as the macro to define the security mode.
* The fallback mechanism, in case of absence of a neo or yotta macro definition, was setting the macro to
the yotta format, which was not used at all in the application.
* The bug was fixed by changing YOTTA_CFG_MBED_MESH_API_6LOWPAN_ND_SECURITY_MODE to
MBED_MESH_API_6LOWPAN_ND_SECURITY_MODE in the fallback mechanism.