- Removed target alias from the EXPORT_MAP in targets.py as it didn't work
- Added copies of the LPC4088 target exporters
- Fixed a flag issue in the gcc toolchain
- Changed defines in eth, USBDevice, rpt and rtos to handle
  TARGET_LPC4088_DM
Fix a bug as described below.
- If the Ethernet driver is given multiple transmit buffers without waiting for the received data, it cannot send the data correctly.
1) Fix the endianness of TX_DESC_UPDATED_MASK so Tx buffers can be released after transmission.
2) Avoid an assert() failure due to an uninitialized variable in the enet_hal_config_tx_fifo() function.
Signed-off-by: Sergio Scaglia <sergio.scaglia@arm.com>
If lwIP placed more than 2 pbufs in a TCP segment, the ethernet driver
would fail to send it as it didn't have enough Tx descriptors. The
maximum number of pbufs outstanding for transmit that lwIP keeps is
defined by the TCP_SND_QUEUELEN macro. I modified the value of
LPC_NUM_BUFF_TXDESCS to take advantage of this lwIP value. The +1
takes into account that LPC_EMAC->TxProduceIndex ==
LPC_EMAC->TxConsumeIndex is reserved for indicating that the queue is
empty, so a full queue uses one less than the maximum count.
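
Based on that description, the resulting definition presumably looks
something like the sketch below (where it lives in the driver headers
is an assumption on my part):

    /* Size the Tx descriptor ring from lwIP's own transmit queue depth.
     * The +1 covers the ring slot that must stay unused so that
     * TxProduceIndex == TxConsumeIndex can mean "queue empty". */
    #define LPC_NUM_BUFF_TXDESCS (TCP_SND_QUEUELEN + 1)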
tcp_write() would incorrectly byte swap the checksum one too many times
when concatenating a pbuf to an existing TCP segment if the number of
bytes in the concatenated data was odd. I hit this issue when I tried
to reproduce a lost segment issue reported by an mbed user in this forum
thread: http://mbed.org/forum/mbed/topic/4354/?page=2#comment-22657
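
To show why the parity of the data length matters, here is a small
standalone sketch (not the lwIP code) of folding a partial Internet
checksum into a running one: a chunk that starts at an odd byte offset
must be byte swapped exactly once, and swapping it a second time, as
the bug above did, corrupts the result.

    #include <stdint.h>

    /* Fold the partial ones'-complement sum "part" into "acc".  If the
     * chunk started at an odd byte offset, its bytes are shifted by one
     * relative to the 16-bit accumulator and need exactly one swap. */
    static uint16_t chksum_concat(uint16_t acc, uint16_t part, int odd_offset)
    {
        uint32_t sum;

        if (odd_offset)
            part = (uint16_t)((part << 8) | (part >> 8));
        sum = (uint32_t)acc + part;
        while (sum >> 16)                     /* fold the end-around carry */
            sum = (sum & 0xFFFFu) + (sum >> 16);
        return (uint16_t)sum;
    }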
For tests such as TCPEchoServer
(http://mbed.org/users/emilmont/notebook/networking-libraries-benchmark/)
this change showed a 28% improvement (14Mbps to 18Mbps) when the echo
test was modified to instead use 1K data buffers.
I targeted these two functions based on manual profiling samples which
showed that a great deal of time was being spent in these two functions
when the network stack was being slammed with UDP packets.
I now use a signal to communicate when a packet has been received by
the ethernet hardware and should be processed by the packet_rx thread.
Previously the change to make the lwIP stack thread safe introduced
enough delay in packet_rx that the semaphore count could lag behind
the processed packets and overflow its maximum token count. Now the
ISR uses the signal to indicate that >= 1 packet has been received
since the last time packet_rx() was awakened.
Previously the ethernet driver used generic sys_arch* APIs exposed from
lwIP to manipulate the semaphores. I now call CMSIS RTOS APIs
directly when using the signals. I think this is acceptable since that
same driver source file already contains similar os* calls that talk
directly to the RTOS.
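
Roughly, the handshake now looks like the sketch below. It uses the
CMSIS-RTOS v1 signal calls, but the signal bit, thread handle and
helper names are placeholders of mine, not the actual driver code.

    #include "cmsis_os.h"

    #define RX_SIGNAL (1 << 0)              /* placeholder signal bit */

    static osThreadId packet_rx_thread_id;  /* set when the thread is created */

    static void process_all_received_packets(void);

    void ENET_IRQHandler(void)
    {
        /* ... clear the ethernet interrupt flags ... */
        /* Tell packet_rx() that one or more packets have arrived since
         * the last time it ran; repeated sets before it wakes are merged. */
        osSignalSet(packet_rx_thread_id, RX_SIGNAL);
    }

    static void packet_rx(void const *arg)
    {
        for (;;) {
            /* Sleep until the ISR raises the signal, then drain every
             * packet that is currently available. */
            osSignalWait(RX_SIGNAL, osWaitForever);
            process_all_received_packets();
        }
    }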
This reverts commit acb35785c9.
It turns out that this commit actually causes problems if an ethernet
interrupt is dropped because a higher privilege task is running, such
as LocalFileSystem accesses. If this happens, the semaphore count isn't
incremented enough times and the packet_rx() thread will fall behind and
end up running as though it had only one ethernet receive buffer. This
causes even more lost packets.
I plan to fix this by switching the semaphore to be a signal so that
the synchronization object is more boolean. It simply indicates if an
interrupt has arrived since the last time packet_rx() was awakened to
process inbound packets.
I recently pulled an NXP crash fix for their ethernet driver which will
requeue a pbuf to the ethernet driver rather than sending it to the
lwIP stack if it can't allocate a new pbuf to keep the ethernet
hardware primed with available packet buffers. While recently
reviewing this code I noticed that the full size of the pbuf wasn't
used on this re-queueing operation but the size of the last received
packet. I now reset the pbuf size back to its originally allocated
size before doing this requeue operation.
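
Conceptually the change amounts to something like this (the fields are
lwIP's pbuf fields, but RX_BUF_SIZE and rxqueue_pbuf() are placeholder
names, not the driver's):

    #include "lwip/pbuf.h"

    /* Restore a pbuf to its full, originally allocated size before
     * handing it back to the receive ring. */
    static void requeue_full_pbuf(struct pbuf *p)
    {
        p->len = p->tot_len = RX_BUF_SIZE;  /* undo the trim from the last frame */
        rxqueue_pbuf(p);                    /* keep the hardware primed */
    }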
Previously the packet_rx() function would wait on the RxSem and when
signalled it would process all available inbound packets. This used to
cause no problem but once the thread synchronization was turned
on via SYS_LIGHTWEIGHT_PROT, the semaphore actually started to overflow
its maximum token count of 65535. This caused the mbed_die() flashing
LEDs of death. The old code was really breaking the producer/consumer
pattern that I typically see with a semaphore since the consumer was
written to consume more than 1 produced object per semaphore wait.
Before the thread synchronization was enabled, the packet_rx() thread
could use a single time slice to process all of these packets and then
loop back around a few more times to decrement the semaphore count
while skipping the packet processing since it had all been done.
Now the packet processing code would cause the thread to give up its
time slice as it hit newly enabled critical sections. In the end it
was possible for the code to leak 2 semaphore signals for each wakeup
of the thread. After about 10 seconds of load, this
would cause a leak of 65535 signals.
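
A stripped-down sketch of that old pattern (all names here are
placeholders, not the actual driver code): the ISR posts one token per
interrupt, while the woken thread drains every pending packet, so under
load the surplus tokens are never consumed.

    #include "lwip/sys.h"

    static sys_sem_t rx_sem;              /* the RxSem described above */

    static int  rx_packet_is_pending(void);
    static void process_one_rx_packet(void);

    /* Producer: +1 token per ethernet interrupt. */
    void ENET_IRQHandler(void)
    {
        sys_sem_signal(&rx_sem);
    }

    /* Consumer: -1 token per wakeup, but it may process many packets per
     * wakeup, so the extra tokens accumulate once the critical sections
     * slow this loop down, eventually hitting the 65535 ceiling. */
    static void packet_rx(void *arg)
    {
        for (;;) {
            sys_arch_sem_wait(&rx_sem, 0);
            while (rx_packet_is_pending())
                process_one_rx_packet();
        }
    }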
NOTE: Two potential issues with this change:
1) The LPC_EMAC->RxConsumeIndex != LPC_EMAC->RxProduceIndex check was
removed from packet_rx(). I believe that this is Ok since the same
condition is later checked in lpc_low_level_input() anyway so it
won't now try to process more packets than what exist.
2) What if ENET_IRQHandler(void) ends up not signalling the RxSem for
every packet received? When would that happen? I could see it
happening if the ethernet hardware would try to pend more than 1
interrupt when the priority was too elevated to process the
pending requests. Putting the consumer loop back in packet_rx()
and using a Signal instead of a Semaphore might be a better
solution?