1) Fix the endianness of TX_DESC_UPDATED_MASK so that Tx buffers can be released after transmission.
2) Avoid an assert() failure caused by an uninitialized variable in the enet_hal_config_tx_fifo() function.
Signed-off-by: Sergio Scaglia <sergio.scaglia@arm.com>
If lwIP placed more than 2 pbufs in a TCP segment, the ethernet driver
would fail to send it because it didn't have enough Tx descriptors. The
maximum number of pbufs that lwIP keeps outstanding for transmit is
defined by the TCP_SND_QUEUELEN macro. I modified the value of
LPC_NUM_BUFF_TXDESCS to take advantage of this lwIP value. The +1
takes into account that LPC_EMAC->TxProduceIndex ==
LPC_EMAC->TxConsumeIndex is reserved for indicating that the queue is
empty, so a full queue uses one less than the maximum count.
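As a rough sketch, the sizing described above could look like this in
the driver header (surrounding conditionals omitted; TCP_SND_QUEUELEN
comes from lwIP's configuration):

/* One Tx descriptor per pbuf that lwIP can queue for a segment, plus 1
 * because TxProduceIndex == TxConsumeIndex is reserved to mean the queue
 * is empty, so a full queue holds one less than the descriptor count. */
#define LPC_NUM_BUFF_TXDESCS (TCP_SND_QUEUELEN + 1)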
tcp_write() would incorrectly byte swap the checksum one extra time
when concatenating a pbuf to an existing TCP segment if the number of
bytes in the concatenated data was odd. I hit this issue when I tried
to reproduce a lost-segment issue reported by an mbed user in this forum
thread: http://mbed.org/forum/mbed/topic/4354/?page=2#comment-22657
For tests such as TCPEchoServer
(http://mbed.org/users/emilmont/notebook/networking-libraries-benchmark/)
this change showed a 28% improvement (14Mbps to 18Mbps) when the echo
test was modified to instead use 1K data buffers.
I targeted these two functions based on manual profiling samples which
showed that a great deal of time was being spent in these two functions
when the network stack was being slammed with UDP packets.
I now use a signal to communicate when a packet has been received by
the ethernet hardware and should be processed by the packet_rx thread.
Previously the change to make the lwIP stack thread safe introduced
enough delay in packet_rx that the semaphore count could lag behind
the processed packets and overflow its maximum token count. Now the
ISR uses the signal to indicate that >= 1 packet has been received
since the last time packet_rx() was awakened.
Previously the ethernet driver used generic sys_arch* APIs exposed from
lwIP to manipulate the semaphores. I now call CMSIS RTOS APIs
directly when using the signals. I think this is acceptable since that
same driver source file already contains similar os* calls that talk
directly to the RTOS.
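As a rough sketch of the signalling pattern described above (the signal
mask and thread-id variable names are illustrative, not the exact
driver names):

#include "cmsis_os.h"

#define RX_SIGNAL (1 << 0)             /* illustrative signal mask */
static osThreadId packet_rx_thread_id; /* recorded when packet_rx() starts */

void ENET_IRQHandler(void) {
    /* ...acknowledge/clear the hardware interrupt source... */
    /* Tell packet_rx() that >= 1 packet arrived since it last ran. */
    osSignalSet(packet_rx_thread_id, RX_SIGNAL);
}

static void packet_rx(void *arg) {
    (void)arg;
    packet_rx_thread_id = osThreadGetId();
    for (;;) {
        osSignalWait(RX_SIGNAL, osWaitForever);
        /* drain every packet currently available from the hardware */
    }
}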
This reverts commit acb35785c9.
It turns out that this commit actually causes problems if an ethernet
interrupt is dropped because a higher privilege task is running, such
as LocalFileSystem accesses. If this happens, the semaphore count isn't
incremented enough times and the packet_rx() thread will fall behind and
end up running as though it had only one ethernet receive buffer. This
causes even more lost packets.
I plan to fix this by switching the semaphore to be a signal so that
the synchronization object is more boolean in nature. It simply
indicates whether an interrupt has arrived since the last time
packet_rx() was awakened to
process inbound packets.
I recently pulled an NXP crash fix for their ethernet driver which will
requeue a pbuf to the ethernet driver rather than sending it to the
lwIP stack if it can't allocate a new pbuf to keep the ethernet
hardware primed with available packet buffers. While recently
reviewing this code I noticed that it wasn't the full size of the pbuf
that was used for this re-queueing operation but the size of the last
received packet. I now reset the pbuf size back to its originally
allocated size before doing this requeue operation.
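A minimal sketch of that reset, assuming the pbuf was originally
allocated with EMAC_ETH_MAX_FLEN bytes and that lpc_rxqueue_pbuf() is
the routine that hands a pbuf back to the hardware:

/* The previous receive shrank p->len/p->tot_len to the last packet's
 * size, so restore the full allocation before re-queueing the pbuf. */
p->len = p->tot_len = (u16_t) EMAC_ETH_MAX_FLEN;
lpc_rxqueue_pbuf(lpc_enetif, p);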
Previously the packet_rx() function would wait on the RxSem and when
signalled it would process all available inbound packets. This used to
cause no problem but once the thread synchronization was turned
on via SYS_LIGHTWEIGHT_PROT, the semaphore actually started to overflow
its maximum token count of 65535. This caused the mbed_die() flashing
LEDs of death. The old code was really breaking the producer/consumer
pattern that I typically see with a semaphore since the consumer was
written to consume more than 1 produced object per semaphore wait.
Before the thread synchronization was enabled, the packet_rx() thread
could use a single time slice to process all of these packets and then
loop back around a few more times to decrement the semaphore count
while skipping the packet processing since it had all been done.
Now the packet processing code would cause the thread to give up its
time slice as it hit newly enabled critical sections. In the end it
was possible for the code to leak 2 semaphore signals for every 1 for
which the thread was awakened. After about 10 seconds of load, this
would cause a leak of 65535 signals.
NOTE: Two potential issues with this change:
1) The LPC_EMAC->RxConsumeIndex != LPC_EMAC->RxProduceIndex check was
removed from packet_rx(). I believe that this is Ok since the same
condition is later checked in lpc_low_level_input() anyway so it
won't now try to process more packets than what exist.
2) What if ENET_IRQHandler(void) ends up not signalling the RxSem for
every packet received? When would that happen? I could see it
happening if the ethernet hardware would try to pend more than 1
interrupt when the priority was too elevated to process the
pending requests. Putting the consumer loop back in packet_rx()
and using a Signal instead of a Semaphore might be a better
solution?
This option actually enables the use of the lwip_sys_mutex for
protecting concurrent access to such important lwIP resources as:
select_cb_list (this is the one which originally flagged the problem)
sockets array
mem stats (if enabled)
heap (if LWIP_ALLOW_MEM_FREE_FROM_OTHER_CONTEXT was non-zero)
memp pool allocs/frees
netif->loop_last pbuf linked list
pbuf reference counts
...
I first noticed this issue when I hit a crash while slamming the net
stack with a large number of TCP packets (I was actually sending 1k
data buffers from the TCPEchoServer mbed sample.) It crashed in the
last line of this code snippet from event_callback:
for (scb = select_cb_list; scb != NULL; scb = scb->next) {
    if (scb->sem_signalled == 0) {
It was crashing because scb had an invalid address so it generated a
bus fault. I figured that memory was either corrupted or there was
some kind of concurrency issue. In trying to determine which, I wanted
to walk through the select_cb_list linked list and see where it was
corrupted:
(gdb) p scb
$1 = (struct lwip_select_cb *) 0x85100080
(gdb) p select_cb_list
$2 = (struct lwip_select_cb *) 0x0
That was interesting: the head of the linked list was now NULL, but it
must have had a non-NULL value when this loop started running or we
would have never gotten to the point where we hit this crash.
This was starting to look like a concurrency issue since the linked
list was modified out from underneath this thread. Looking through the
source code for this function, I saw use of macros like
SYS_ARCH_PROTECT and SYS_ARCH_UNPROTECT which looked like they should
be providing the thread synchronization. I disassembled the
event_callback() function in the debugger and saw no signs of the
synchronization API usage that I expected. A search
through the code for the definition of these SYS_ARCH_UN/PROTECT
macros led me to discover that they were actually ignored unless an
implementation defined them itself (the mbed version doesn't do so) or
the SYS_LIGHTWEIGHT_PROT macro is set to non-zero (the mbed version
didn't do this either). Flipping the SYS_LIGHTWEIGHT_PROT macro on in
lwipopts.h fixed the crash I kept hitting, increased the code size a
bit, and unfortunately slows things down since it now
actually serializes access to these data structures by making calls
to the RTOS sync APIs.
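The fix itself is a one-line configuration change:

/* lwipopts.h */
#define SYS_LIGHTWEIGHT_PROT    1   /* make SYS_ARCH_PROTECT/UNPROTECT real */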
After making my previous commit to completely disable LWIP_ASSERT
macro invocations, I ended up with a warning in pbuf.c where an
err variable was set but only checked for success in an assert. I
added a "(void)err;" reference to silence this warning.
I was doing some debugging that had me looking at the disassembly of
lpc_rx_queue() from within the debugger. I was looking for the call to
pbuf_alloc() that we see in the following code snippet:
p = pbuf_alloc(PBUF_RAW, (u16_t) EMAC_ETH_MAX_FLEN, PBUF_RAM);
if (p == NULL) {
    LWIP_DEBUGF(UDP_LPC_EMAC | LWIP_DBG_TRACE,
        ("lpc_rx_queue: could not allocate RX pbuf (free desc=%d)\n",
         lpc_enetif->rx_free_descs));
    return queued;
}
/* pbufs allocated from the RAM pool should be non-chained. */
LWIP_ASSERT("lpc_rx_queue: pbuf is not contiguous (chained)",
            pbuf_clen(p) <= 1);
When I was looking through the disassembly for this code I noticed a
call to pbuf_clen() in the actual machine code.
=> 0x0000bab0 <+24>: bl 0x44c0 <pbuf_clen>
0x0000bab4 <+28>: ldr r3, [r4, #112] ; 0x70
0x0000bab6 <+30>: ldrh.w r12, [r5, #10]
0x0000baba <+34>: add.w r2, r3, #9
0x0000babe <+38>: add.w r0, r12, #4294967295 ; 0xffffffff
The only call to pbuf_clen() made from this function is made from
within the LWIP_ASSERT. When I looked more closely at how this macro
was defined, I saw that the mbed version of the stack had disabled the
LWIP_PLATFORM_ASSERT macro when LWIP_DEBUG was false, which means that
no action is taken when an assert fails but the LWIP_ASSERT macro is
still allowed to evaluate the assert expression.
Defining the LWIP_NOASSERT macro will fully disable the LWIP_ASSERT
macro.
I saw one of my TCP/IP samples shrink about 0.5K when I made this
change.
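The change amounts to defining that macro in lwipopts.h:

/* lwipopts.h */
#define LWIP_NOASSERT   1   /* compile out LWIP_ASSERT expression evaluation */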
The code in netif_set_ipaddr() would read the memory pointed to by its
ipaddr parameter, even if it was NULL, on this line:
if ((ip_addr_cmp(ipaddr, &(netif->ip_addr))) == 0) {
On the Cortex-M3, it is typically OK to read from address 0 so this
code will actually compare the reset stack pointer value to the
current value in netif->ip_addr.
Later in the code, this same pointer will be used for a second read:
ip_addr_set(&(netif->ip_addr), ipaddr);
The ip_addr_set call will first check to see if the ipaddr is NULL and
if so, treats it like IP_ADDR_ANY (4 bytes of 0).
/** Safely copy one IP address to another (src may be NULL) */
#define ip_addr_set(dest, src) ((dest)->addr = \
                                    ((src) == NULL ? 0 : \
                                     (src)->addr))
The issue here is that when GCC optimizes this code, it assumes that
the first dereference of ipaddr would have thrown an invalid memory
access exception and execution would never make it to this second
dereference. Therefore it optimizes out the NULL check in ip_addr_set.
The -fno-delete-null-pointer-checks flag will disable this optimization and
is a good thing to use with GCC in general on Cortex-M parts. I will
let the mbed guys make that change to their build system.
I have however corrected the code so that the intent of how to handle a
NULL ipaddr is more obvious and gets rid of the potential NULL
dereference.
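A minimal sketch of that correction in netif_set_ipaddr(): treat a NULL
ipaddr as IP_ADDR_ANY before it is ever dereferenced, leaving the rest
of the function as it was:

/* Handle the NULL case up front so ipaddr is never dereferenced when NULL. */
if (ipaddr == NULL) {
    ipaddr = IP_ADDR_ANY;
}
if ((ip_addr_cmp(ipaddr, &(netif->ip_addr))) == 0) {
    /* existing address-changed handling continues here */
}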
By the way, this bug caused connect() to fail in obtaining an
address from DHCP. If I recall correctly from when I first debugged
this issue (late last year), I actually saw the initial value of the
stack pointer being used in the DHCP request as an IP address which
caused it to be rejected.
Peter's and my changes to LPC1768.ld ended up adding the same AHBSRAM0
and AHBSRAM1 section clauses to the script twice. I removed one copy.
I also pulled Peter's define of the ETHMEM_SECTION macro up into the
previous nested #if so that the preprocessor wouldn't spit out a
redefined macro warning.
I verified that building the code clean before and after these changes
still results in the same .bin file but now without warnings and/or
duplicate code.
I started out looking at some UDP receive code that was only able to
handle 3 inbound 550 byte datagrams out of 16 when sent in quick
succession. I stepped through the ethernet driver code and it
seemed to work as expected but it just couldn't queue up more than
3 PBUFs for each burst. It was almost like it was being starved of
CPU cycles. Based on that observation, I looked up the thread
priorities for the receive ethernet thread and found the following
close to the top of the lpc17_emac.c source file:
#define RX_PRIORITY (osPriorityNormal)
This got me to thinking: what is the priority of the TCP thread? It
turns out that it gets its priority from the following line in
lwipopts.h:
#define TCPIP_THREAD_PRIO 1
Interesting! What priority is 1? It turns out that it corresponds
to osPriorityAboveNormal. This means that while the tcp thread is
handling one packet that has been posted to its mailbox from the
ethernet receive thread, the receive thread is starved from processing
any more inbound ethernet packets.
What happens if we set TCPIP_THREAD_PRIO to osPriorityNormal? Crash!
The ethernet driver ends up crashing in lpc_low_level_input() when
it tries to set p->len on a NULL p pointer. The p pointer ended up
being NULL because an earlier call to pbuf_alloc() in lpc_rx_queue()
failed its allocation (I will have more to say about this failed
allocation later since that is caused by yet another bug). I pulled a
fix from http://lpcware.com/content/bugtrackerissue/lpc17xx-mac-bugs to
remedy this issue. When the pbuf allocation fails, it discards the
inbound packet in the pbuf and just puts it back into the rx queue.
This means we never end up with a NULL pointer in that queue to
dereference and crash on.
With that bug fixed, the application would just appear to hang after
receiving and processing a few datagrams. I could place breakpoints in
the packet_rx() thread function and found that it was being signalled
by the ethernet ISR but it was always failing to allocate new PBUFs,
which is what led to our previous crash. This means that the new
crash prevention code was just discarding every packet that arrived.
Why are these allocations failing? In my opinion, this was the most
interesting bug to track down. Is there a memory leak somewhere in
the code which maybe only triggers in low memory situations? I
figured the easiest way to determine that would be to learn a bit
about the format of the lwIP heap from which the PBUF was failing to
be allocated. I started by just stepping into the failing lwIP memory
allocator, mem_malloc(). The loop which searches the free list starts
with this code:
for (ptr = (mem_size_t)((u8_t *)lfree - ram);
This loop didn't even go through one iteration and when I looked at the
initial ptr value it contained a really large value. It turns out that
lfree was actually lower than ram. At this point I figured that lfree
had probably been corrupted during a free operation after one of the
heap allocations had been underflowed/overflowed to cause the metadata
for an allocation to be corrupted. As I started thinking about how to
track that kind of bug down, I noticed that the ram variable might be
too large (0x20080a68). I restarted the debugger and looked at the
initial value. It was at a nice even address (0x2007c000) and
certainly nothing like what I saw when the allocations were failing.
This global variable shouldn't change at all during the execution of
the program. I placed a memory access watchpoint on this ram variable
and it fired very quickly inside of the rt_mbx_send() function. The
ram variable was being changed by this line in rt_mbx_send():
p_MCB->msg[p_MCB->first] = p_msg;
What the what? Why does writing to the mailbox queue overwrite the
ram global variable? Let's start by looking at the data structure used
in the lwIP port to target RTX (defined in sys_arch.h):
// === MAIL BOX ===
typedef struct {
    osMessageQId    id;
    osMessageQDef_t def;
    uint32_t        queue[MB_SIZE];
} sys_mbox_t;
Compare that to the utility macro that RTX defines to help set up one of
these mailboxes with its queue:
#define osMessageQDef(name, queue_sz, type) \
    uint32_t os_messageQ_q_##name[4+(queue_sz)]; \
    osMessageQDef_t os_messageQ_def_##name = \
    { (queue_sz), (os_messageQ_q_##name) }
Note the 4+(queue_sz) used in the definition of the message queue
array. What a hack! The RTX OS requires an extra 16 bytes to contain
its OS_MCB header and this is how it adds it in. Obviously the
sys_mbox_t structure used in the lwIP OS targeting code doesn't have
this. Without it, the RTX mailbox routines end up scribbling on the
memory that follows the structure. Adding 4 to the queue array size in
that structure fixes the memory allocation failure that I was seeing,
and now the network stack can handle between 7 and 10 datagrams within
a burst.
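Concretely, the adjusted sys_arch.h definition reserves those four extra
words in the structure itself:

// === MAIL BOX ===
typedef struct {
    osMessageQId    id;
    osMessageQDef_t def;
    uint32_t        queue[4 + MB_SIZE]; /* +4 words for RTX's OS_MCB header */
} sys_mbox_t;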
The phy_speed_100mbs, phy_full_duplex, and phy_link_active fields of
PHY_STATUS_TYPE are 1 bit wide but lpc_phy_init() attempted to
initialize them to a value of 2. I switched the initializations to
be 0 instead and it still generated the same .bin image.
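For illustration only (the status variable name here is hypothetical),
the change is of this form:

/* 1-bit fields cannot hold the value 2; 0 yields the same generated image */
phy_status.phy_speed_100mbs = 0;
phy_status.phy_full_duplex  = 0;
phy_status.phy_link_active  = 0;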
The first issue was a potential out-of-range index read in dhcp_handle_ack().
The (n < DNS_MAX_SERVERS) check should occur first. There is also a
documented lwIP bug for this issue here:
http://savannah.nongnu.org/bugs/?36170
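A hedged reconstruction of the reordered loop condition in
dhcp_handle_ack(), matching the fix tracked in that bug report:

/* Check the index bound first so the option lookup never reads past the end. */
while ((n < DNS_MAX_SERVERS) &&
       dhcp_option_given(dhcp, DHCP_OPTION_IDX_DNS_SERVER + n)) {
    /* ...record DNS server n as before... */
    n++;
}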
In dhcp_bind() there is no need to perform the NULL check in
ip_addr_isany() for &gw_addr. Just check (gw_addr.addr == IPADDR_ANY)
instead.
I refactored the chaddr[] copy in dhcp_create_msg() to first copy all
of the valid bytes in hwaddr and then pad the rest of the bytes with 0.
Before it used to check on every destination byte if it should copy or
pad. GCC originally complained about an index out of range read from
the hwaddr[] array even though it was protected by a conditional
operator. The refactor makes the intent a bit clearer and saves the
extra comparison per loop iteration. It also stops GCC from
complaining :)
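A sketch of the refactored copy, using the names from dhcp_create_msg():

/* Copy the valid hardware address bytes first, then zero-pad the rest. */
for (i = 0; i < netif->hwaddr_len; i++) {
    dhcp->msg_out->chaddr[i] = netif->hwaddr[i];
}
for (; i < DHCP_CHADDR_LEN; i++) {
    dhcp->msg_out->chaddr[i] = 0;
}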
GCC will issue a warning when the ip_addr_isany() macro is used on
a pointer which can never be NULL since the macro's NULL check will
always be false:
#define ip_addr_isany(addr1) ((addr1) == NULL || \
(addr1)->addr == IPADDR_ANY)
In these cases, it is probably clearer to just perform the
x.addr == IPADDR_ANY check inline.
The dn variable in lpc_low_level_output() was originally defined as a
u32_t but it is later compared to the s32_t return value from
lpc_tx_ready(). Since it is initialized from pbuf_clen(), which returns
a u8_t, an s32_t can safely hold the initial value and remains
consistent with the signed lpc_tx_ready() comparison.
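A minimal sketch of the adjusted declaration and the comparison it
feeds (the wait loop body is elided):

s32_t dn = (s32_t) pbuf_clen(p);   /* was u32_t; now matches lpc_tx_ready()'s type */
while (dn > lpc_tx_ready(netif)) {
    /* wait until enough Tx descriptors are free */
}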
I also modified writtenLen in TCPSocketConnection::send_all() and
readLen in TCPSocketConnection::receive_all() to be of type int instead
of size_t. This is more consistent with their usage within these
methods (they accumulate int ret values and are compared to the int
length value) and with their use as signed integer return values.