If lwIP placed more than 2 pbufs in a TCP segment, the ethernet driver
would fail to send it as it didn't have enough Tx descriptors. The
maximum number of pbufs that lwIP keeps outstanding for transmit is
defined by the TCP_SND_QUEUELEN macro. I modified the value of
LPC_NUM_BUFF_TXDESCS to take advantage of this lwIP value. The +1
accounts for the fact that LPC_EMAC->TxProduceIndex ==
LPC_EMAC->TxConsumeIndex is reserved for indicating that the queue is
empty, so a full queue uses one less than the maximum count.
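A minimal sketch of the resulting definition, assuming it follows the
description above (the exact placement in the driver isn't shown here):

/* One descriptor per pbuf that lwIP may queue for transmit, plus the
   one slot that must stay empty because produce == consume means
   "queue empty". */
#define LPC_NUM_BUFF_TXDESCS (TCP_SND_QUEUELEN + 1)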
I now use a signal to communicate when a packet has been received by
the ethernet hardware and should be processed by the packet_rx thread.
The earlier change to make the lwIP stack thread-safe introduced
enough delay in packet_rx that the semaphore count could lag behind
the processed packets and overflow its maximum token count. Now the
ISR uses the signal to indicate that one or more packets have been
received since the last time packet_rx() was awakened.
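A minimal sketch of that pattern using the CMSIS-RTOS v1 signal API
(RX_SIGNAL and rx_thread_id are illustrative names, not the driver's):

/* In ENET_IRQHandler(): record that at least one packet is pending.
   Repeated interrupts before the thread runs just re-set the same
   flag, so nothing can overflow. */
osSignalSet(rx_thread_id, RX_SIGNAL);

/* In packet_rx(): sleep until the flag is raised, then process every
   pending packet before waiting again. */
osSignalWait(RX_SIGNAL, osWaitForever);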
Previously the ethernet driver used generic sys_arch* APIs exposed from
lwIP to manipulate the semaphores. I now call CMSIS RTOS APIs
directly when using the signals. I think this is acceptable since that
same driver source file already contains similar os* calls that talk
directly to the RTOS.
This reverts commit acb35785c9.
It turns out that this commit actually causes problems if an ethernet
interrupt is dropped because a higher privilege task is running, such
as LocalFileSystem accesses. If this happens, the semaphore count isn't
incremented enough times and the packet_rx() thread will fall behind and
end up running as though it had only one ethernet receive buffer. This
causes even more lost packets.
I plan to fix this by switching the semaphore to be a signal so that
the synchronization object is boolean: it simply indicates whether an
interrupt has arrived since the last time packet_rx() was awakened to
process inbound packets.
I recently pulled an NXP crash fix for their ethernet driver which
requeues a pbuf to the ethernet driver, rather than sending it to the
lwIP stack, if it can't allocate a new pbuf to keep the ethernet
hardware primed with available packet buffers. While recently
reviewing this code I noticed that the requeue operation used the size
of the last received packet rather than the full size of the pbuf. I
now reset the pbuf size back to its originally allocated size before
doing this requeue operation.
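A minimal sketch of the reset, with the buffer size constant and
requeue helper names assumed from lpc17_emac.c:

/* The receive path shrank p->len/p->tot_len to the last packet's
   length; restore the full allocation before recycling the pbuf. */
p->len = p->tot_len = EMAC_ETH_MAX_FLEN;
lpc_rxqueue_pbuf(lpc_enetif, p);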
Previously the packet_rx() function would wait on the RxSem and when
signalled it would process all available inbound packets. This used to
cause no problem but once the thread synchronization was turned
on via SYS_LIGHTWEIGHT_PROT, the semaphore actually started to overflow
its maximum token count of 65535. This caused the mbed_die() flashing
LEDs of death. The old code broke the producer/consumer pattern
typically used with a semaphore, since the consumer was written to
consume more than one produced object per semaphore wait.
Before the thread synchronization was enabled, the packet_rx() thread
could use a single time slice to process all of these packets and then
loop back around a few more times to decrement the semaphore count
while skipping the packet processing since it had all been done.
With the critical sections newly enabled by SYS_LIGHTWEIGHT_PROT, the
packet processing code could now cause the thread to give up its time
slice. In the end it was possible for the code to leak 2 semaphore
signals for every 1 on which the thread was awakened. After about 10
seconds of load, this would push the semaphore past its 65535 limit.
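A minimal sketch of the reworked consumer, with the semaphore and
input helper names assumed from lpc17_emac.c:

/* One semaphore token now pairs with one pass of packet processing,
   restoring the 1:1 producer/consumer relationship. */
while (1) {
    sys_arch_sem_wait(&lpc_enetif->RxSem, 0);
    lpc_enetif_input(lpc_enetif->netif);
}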
NOTE: Two potential issues with this change:
1) The LPC_EMAC->RxConsumeIndex != LPC_EMAC->RxProduceIndex check was
   removed from packet_rx(). I believe this is OK since the same
   condition is checked later in lpc_low_level_input() anyway, so it
   won't try to process more packets than exist.
2) What if ENET_IRQHandler(void) ends up not signalling the RxSem for
   every packet received? When would that happen? I could see it
   happening if the ethernet hardware tried to pend more than one
   interrupt while the interrupt priority was too elevated for the
   pending requests to be processed. Putting the consumer loop back
   in packet_rx() and using a Signal instead of a Semaphore might be
   a better solution?
Peter's and my changes to LPC1768.ld ended up adding the same AHBSRAM0
and AHBSRAM1 section clauses to the script twice. I removed one copy.
I also pulled Peter's define of the ETHMEM_SECTION macro up into the
previous nested #if so that the preprocessor wouldn't spit out a
redefined macro warning.
I verified that a clean build before and after these changes still
produces the same .bin file, now without the warning or the
duplicated code.
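A hypothetical illustration of the single-definition pattern (the real
guard conditions and section attribute live in the mbed sources and
aren't reproduced here):

#if defined(TARGET_LPC1768)
  /* Defined inside the existing #if, every build configuration sees
     exactly one definition, so no redefined-macro warning is emitted. */
  #ifndef ETHMEM_SECTION
    #define ETHMEM_SECTION __attribute__((section("AHBSRAM1")))
  #endif
#endif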
I started out looking at some UDP receive code that was only able to
handle 3 inbound 550 byte datagrams out of 16 when sent in quick
succession. I stepped through the ethernet driver code and it
seemed to work as expected but it just couldn't queue up more than
3 PBUFs for each burst. It was almost like it was being starved of
CPU cycles. Based on that observation, I looked up the thread
priorities for the receive ethernet thread and found the following
close to the top of the lpc17_emac.c source file:
#define RX_PRIORITY (osPriorityNormal)
This got me thinking: what is the priority of the tcp thread? It
turns out that it gets its priority from the following line in
lwipopts.h:
#define TCPIP_THREAD_PRIO 1
Interesting! What priority is 1? It turns out that it corresponds
to osPriorityAboveNormal. This means that while the tcp thread is
handling one packet that has been posted to its mailbox by the
ethernet receive thread, the receive thread is starved and can't
process any more inbound ethernet packets.
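For reference, the CMSIS-RTOS priority values from cmsis_os.h
(abridged):

typedef enum {
    osPriorityIdle        = -3,
    osPriorityLow         = -2,
    osPriorityBelowNormal = -1,
    osPriorityNormal      =  0,
    osPriorityAboveNormal = +1,  /* == the tcp thread's priority of 1 */
    osPriorityHigh        = +2,
    osPriorityRealtime    = +3
} osPriority;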
What happens if we set TCPIP_THREAD_PRIO to osPriorityNormal? Crash!
The ethernet driver ends up crashing in lpc_low_level_input() when
it tries to set p->len on a NULL p pointer. The p pointer ended up
being NULL because an earlier call to pbuf_alloc() in lpc_rx_queue()
failed its allocation (I will have more to say about this failed
allocation later since that is caused by yet another bug). I pulled a
fix from http://lpcware.com/content/bugtrackerissue/lpc17xx-mac-bugs to
remedy this issue. When the pbuf allocation fails, it discards the
inbound packet in the pbuf and just puts it back into the rx queue.
This means we never end up with a NULL pointer in that queue to
dereference and crash on.
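A minimal sketch of the shape of that fix, with helper names assumed
from lpc17_emac.c:

/* pbuf_alloc() failed inside lpc_rx_queue(), so recycle the pbuf we
   just received back onto the receive ring instead of forwarding it;
   the ring never ends up holding a NULL buffer pointer. */
lpc_rxqueue_pbuf(lpc_enetif, p);
p = NULL;  /* the inbound packet is dropped */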
With that bug fixed, the application would just appear to hang after
receiving and processing a few datagrams. I could place breakpoints in
the packet_rx() thread function and found that it was being signalled
by the ethernet ISR but it was always failing to allocate new PBUFs,
which is what led to our previous crash. This means that the new
crash prevention code was just discarding every packet that arrived.
Why are these allocations failing? In my opinion, this was the most
interesting bug to track down. Is there a memory leak somewhere in
the code which maybe only triggers in low memory situations? I
figured the easiest way to determine that would be to learn a bit
about the format of the lwIP heap from which the PBUF was failing to
be allocated. I started by just stepping into the failing lwIP memory
allocator, mem_malloc(). The loop which searches the free list starts
with this code:
for (ptr = (mem_size_t)((u8_t *)lfree - ram);
This loop didn't even go through one iteration and when I looked at the
initial ptr value it contained a really large value. It turns out that
lfree was actually lower than ram. At this point I figured that lfree
had probably been corrupted during a free operation after one of the
heap allocations had been underflowed/overflowed to cause the metadata
for an allocation to be corrupted. As I started thinking about how to
track that kind of bug down, I noticed that the ram variable might be
too large (0x20080a68). I restarted the debugger and looked at the
initial value. It was at a nice even address (0x2007c000) and
certainly nothing like what I saw when the allocations were failing.
This global variable shouldn't change at all during the execution of
the program. I placed a memory access watchpoint on this ram variable
and it fired very quickly inside of the rt_mbx_send() function. The
ram variable was being changed by this line in rt_mbx_send():
p_MCB->msg[p_MCB->first] = p_msg;
What the what? Why does writing to the mailbox queue overwrite the
ram global variable? Let's start by looking at the data structure used
in the lwIP port to target RTX (defined in sys_arch.h):
// === MAIL BOX ===
typedef struct {
    osMessageQId    id;
    osMessageQDef_t def;
    uint32_t        queue[MB_SIZE];
} sys_mbox_t;
Compare that to the utility macro that RTX defines to help set up one
of these mailboxes with its queue:
#define osMessageQDef(name, queue_sz, type) \
    uint32_t os_messageQ_q_##name[4+(queue_sz)]; \
    osMessageQDef_t os_messageQ_def_##name = \
    { (queue_sz), (os_messageQ_q_##name) }
Note the 4+(queue_sz) used in the definition of the message queue
array. What a hack! The RTX OS requires an extra 16 bytes to contain
its OS_MCB header, and this is how it adds them in. Obviously the
sys_mbox_t structure used in the lwIP OS targeting code doesn't have
this. Without it, the RTX mailbox routines end up scribbling on the
memory that follows the structure. Adding 4 to the queue array size in
that structure fixes the memory allocation failure that I was seeing,
and now the network stack can handle between 7 and 10 datagrams within
a burst.
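With the fix applied, the structure reserves those four extra words
(a sketch of the described change):

typedef struct {
    osMessageQId    id;
    osMessageQDef_t def;
    uint32_t        queue[4 + MB_SIZE];  /* +4 words for RTX's OS_MCB header */
} sys_mbox_t;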
The phy_speed_100mbs, phy_full_duplex, and phy_link_active fields of
PHY_STATUS_TYPE are 1 bit wide, but lpc_phy_init() attempted to
initialize them to a value of 2, which a 1-bit field truncates to 0
anyway. I switched the initializations to 0 instead, and it still
generated the same .bin image.
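A minimal sketch of why both initializations produce the same image
(field layout per the description above; exact field types assumed):

typedef struct {
    u32_t phy_speed_100mbs : 1;  /* a 1-bit field can only hold 0 or 1 */
    u32_t phy_full_duplex  : 1;
    u32_t phy_link_active  : 1;
} PHY_STATUS_TYPE;

PHY_STATUS_TYPE status;
/* Assigning 2 keeps only the low bit, so both of these store 0: */
status.phy_speed_100mbs = 2;
status.phy_speed_100mbs = 0;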
The dn variable in lpc_low_level_output() was originally defined as a
u32_t, but it is later compared to the s32_t return value from
lpc_tx_ready(). Since dn is initialized from pbuf_clen(), which
returns a u8_t, an s32_t can safely hold the initial value and stays
consistent with the signed lpc_tx_ready() comparison, so I changed its
type to s32_t.
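A minimal sketch of the type change (the surrounding code shape is
assumed):

/* dn was a u32_t; as an s32_t it still holds pbuf_clen()'s u8_t
   result and now matches lpc_tx_ready()'s signed return value. */
s32_t dn = (s32_t) pbuf_clen(p);
while (dn > lpc_tx_ready(netif)) {
    /* wait for transmit descriptors to free up (elided) */
}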
I also modified writtenLen in TCPSocketConnection::send_all() and
readLen in TCPSocketConnection::receive_all() to be of type int
instead of size_t. This is more consistent with their usage within
these methods (they accumulate int ret values and are compared to the
int length value) and with their use as signed integer return values.