A fault is triggered by reading LPC_CAN1->IER while the peripheral is powered off. Fixed by checking the power control register before reading the IER register.
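A minimal sketch of the guard, assuming the LPC176x power control layout where PCONP bit 13 (PCCAN1) gates CAN1; the helper name is illustrative and the actual HAL code may structure the check differently:
    // Hypothetical helper: read CAN1's IER only when the block is powered.
    static uint32_t can1_read_ier(void) {
        if (LPC_SC->PCONP & (1u << 13)) {   // PCCAN1 bit on LPC176x
            return LPC_CAN1->IER;           // safe now that the peripheral clock is on
        }
        return 0;                           // powered off: report no interrupts enabled
    }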
While fixing this issue in the various LPC* ports, I noticed a comment
pointing to this mbed forum post which summarizes this bug quite well:
https://mbed.org/forum/bugs-suggestions/topic/4473/
This bug was introduced in the following commit:
2662e105c4
The following code was added to serial_putc() as part of this commit:
    uint32_t lsr = obj->uart->LSR;
    lsr = lsr;
    uint32_t thr = obj->uart->THR;
    thr = thr;
As the forum post indicates, this causes serial_putc() to eat an inbound
received byte if one exists, since reading THR really reads the RBR, the
Receiver Buffer Register. This looks like code that was added so that the
developer could take a snapshot of these registers and look at them in the
debugger. It probably got committed in error.
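For reference, a typical serial_putc() for these NXP UARTs looks roughly like the sketch below (assuming the serial_t layout implied by the snippet above); the fix is simply to drop the debug reads so nothing on the transmit path ever reads the RBR:
    void serial_putc(serial_t *obj, int c) {
        while (!(obj->uart->LSR & (1 << 5))) {  // wait for THRE (transmit holding register empty)
        }
        obj->uart->THR = c;                     // writes go to THR; only reads alias the RBR
    }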
LPC11U24/LPC11U24_301/LPC11U35_401 shared the same startup file for the ARM
and uARM toolchains, which was wrong because the initial SP value is
different for LPC11U24_301. This commit fixes the issue by giving each
target its own startup file.
There was a lot of overlap in the code for LPC810 and LPC812, including
duplicated source files. This commit adds a TARGET_LPC81X_COMMON folder in
both HAL and CMSIS; this folder holds the code common to both targets.
The dependency file generated by GCC might contain more than one
dependency listed on a single line, which wasn't taken into account by the
GCC dependency file interpreter. This commit fixes this issue.
If the FileBase::lookup operation in the constructor of FilePath returns
NULL, subsequent operations (such as isFile()/isFileSystem()) will call
methods on a NULL 'fb' pointer. This commit fixes this issue by adding
explicit NULL checks and a new method in FilePath (exists()).
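A minimal sketch of the guarded accessors, assuming the 'fb' member described above and a hypothetical getPathType()/FileBaseType query on FileBase; the real FilePath class has more methods than shown here:
    bool FilePath::exists() {
        return fb != NULL;                              // lookup() actually found the path
    }

    bool FilePath::isFile() {
        return exists() && (fb->getPathType() == FileBaseType);   // hypothetical type query
    }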
Asm versions of netstack memcpy() and lwip_standard_chksum()
[Note] I'm generally a bit reluctant when including optimizations like this (from an architectural standpoint), because they tend to be a bit too specific (for example, this one works only with lwIP+GCC+Cortex-M3 or M4), but for now it looks like this is the right place for them, although the optimized memcpy should ideally be in libc (or even better replaced with a DMA transfer in this particular case). But this will be both a nice optimization and a reminder of what we need to implement/change in the future.
Based on great feedback from Martin Kojtal on my previous commit, I
have modified my USBHost::fillControlBuf() change to be more portable.
ddb3fbe826 (commitcomment-3988044)
The code now fills in the setupPacket byte array a byte at a time,
using bit shifts to extract lower and upper bytes from the 16-bit
values so that the code should work on big endian or little endian
machines. I also removed the 2-byte alignment attribute from the
setupPacket array as it is no longer required.
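The shape of the change is roughly the sketch below; the helper name and parameters are illustrative, not the actual USBHost code, but filling byte by byte keeps the buffer in USB's little-endian wire order regardless of host endianness:
    #include <stdint.h>

    static void fill_setup_packet(uint8_t setupPacket[8], uint8_t requestType,
                                  uint8_t request, uint16_t value,
                                  uint16_t index, uint16_t length) {
        setupPacket[0] = requestType;
        setupPacket[1] = request;
        setupPacket[2] = (uint8_t)(value & 0xff);          // wValue, low byte first
        setupPacket[3] = (uint8_t)((value >> 8) & 0xff);
        setupPacket[4] = (uint8_t)(index & 0xff);          // wIndex
        setupPacket[5] = (uint8_t)((index >> 8) & 0xff);
        setupPacket[6] = (uint8_t)(length & 0xff);         // wLength
        setupPacket[7] = (uint8_t)((length >> 8) & 0xff);
    }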
1. Added: GCC_CR toolchain ID for LPC2368. (targets.py)
2. Modified: Startup code for the GCC_ARM and GCC_CR toolchains.
3. Verified: The "ticker" and "basic" test programs work well, so far.
(Fixed typo.)
1. Added: GCC_CR toolchain ID for LPC2368. (targets.py)
2. Modified: Startup code for the GCC_ARM and GCC_CR toolchains.
3. Verified: The "ticker" and "basic" test programs work well, so far.
I verified that the hang issue I was seeing when building and running
the mbed official networking tests with GCC_ARM was related to this
issue reported on the mbed forums:
http://mbed.org/forum/mbed/topic/3803/?page=1#comment-18934
If you are using the 4.7 2013q1 update of GCC_ARM or newer, it will
have a _sbrk() implementation which checks the new top-of-heap pointer
against the current thread's stack pointer (SP).
See this GCC_ARM related thread for more information:
https://answers.launchpad.net/gcc-arm-embedded/+question/218972
When using RTX RTOS threads, the thread's stack pointer can easily
point to an address which is below the current top of heap, so this
check will incorrectly fail the allocation.
I have added a _sbrk() implementation to the mbed SDK which checks the
heap pointer against the MSP instead of the current thread SP. I have
only enabled this for TOOLCHAIN_GCC_ARM as this is the only GCC based
toolchain that I am sure requires this.
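The idea is roughly the sketch below, assuming newlib's _sbrk() contract and a linker-provided 'end' symbol; the actual implementation in the mbed SDK may differ in detail:
    #include <errno.h>
    #include <sys/types.h>
    #include "cmsis.h"   // pulls in the CMSIS core header that provides __get_MSP()

    extern "C" caddr_t _sbrk(int incr) {
        extern char end;                                  // end of .bss, set by the linker script
        static unsigned char *heap = (unsigned char *)&end;
        unsigned char *prev_heap = heap;
        unsigned char *new_heap  = heap + incr;

        // Compare against the main stack pointer (MSP) instead of the current
        // thread SP, so RTX thread stacks placed below the heap do not make
        // valid allocations fail.
        if (new_heap >= (unsigned char *)__get_MSP()) {
            errno = ENOMEM;
            return (caddr_t)-1;
        }
        heap = new_heap;
        return (caddr_t)prev_heap;
    }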
For tests such as TCPEchoServer
(http://mbed.org/users/emilmont/notebook/networking-libraries-benchmark/)
this change showed a 28% improvement (14Mbps to 18Mbps) when the echo
test was modified to instead use 1K data buffers.
I targeted these two functions based on manual profiling samples which
showed that a great deal of time was being spent in them when the
network stack was being slammed with UDP packets.
A new hooks mechanism (hooks.py) allows various targets to customize
part(s) of the build process. This was implemented to allow generation of
custom binary images for the EA LPC4088 target, but it should be generic
enough to allow other such customizations in the future. For now, only the
'binary' step is hooked in toolchains/arm.py.
I changed the following initialization from:
    uint8_t cmd[6] = {0x12, (lun << 5) | evpd, page_code, 0, 36, 0};
to:
    uint8_t cmd[6] = {0x12, uint8_t((lun << 5) | evpd), page_code, 0, 36, 0};
This makes it clear to the compiler that we are OK with the 32-bit
integer result from the shift and logical OR operation (after integral
promotions) being truncated down to an 8-bit unsigned value. This is
safe as long as lun only has a value of 7 or lower.
USBHostHub.cpp made a few calls to the USBHALHost::deviceDisconnected()
virtual method passing in NULL for the addr parameter which is actually
declared as uint32_t and not a pointer type. Switching these calls
to pass in a 0 for this parameter silences GCC warnings about
incompatible types.
I removed the initialization of some variables which were never used to
silence GCC warnings.
One of them had me removing the following line from
USBHostMouse::rxHandler():
    int len = int_in->getLengthTransferred();
The variable len is never used again in this method and it doesn't
appear that the int_in->getLengthTransferred() call has any side
effects which would require this extraneous call to remain in the code.
There are a few warnings thrown by GCC for this source file that I have
attempted to correct. I would definitely appreciate feedback from
others on these changes:
* USBHost::usb_process() would attempt to write past the end of the
deviceInited[] array in the following code snippet:
    if (i == MAX_DEVICE_CONNECTED) {
        USB_ERR("Too many device connected!!\r\n");
        deviceInited[i] = false;
The i variable is guaranteed to index 1 item past the end of this
array since it only contains MAX_DEVICE_CONNECTED elements. I have
removed the line which sets deviceInited[i] to false. Two questions
result though:
1) What was the intent of this line of code and is it Ok that I
just removed it or should it be replaced with something else?
2) I see nowhere else that elements in the deviceInited array are
set to false except when all are set to false in the usbThread()
method. Should there be code in DEVICE_DISCONNECTED_EVENT to
do this as well?
* USBHost::transferCompleted(volatile uint32_t addr) was comparing addr
to NULL, which is of pointer type. GCC issues a warning on this
since the types are different (void* being compared to uint32_t). I
switched it to just compare with 0 instead.
* There is a switch statement in USBHost::unqueueEndpoint() which
is conditional on an ENDPOINT_TYPE enumeration but doesn't contain
cases for all values in the enumeration. I just added a default
case that simply breaks on other values to silence this GCC warning and
allow the code to continue working as it did before. Is it Ok that
this particular piece of code only handles these two particular
cases?
* USBHost::fillControlBuf() was generating a warning about possible
alignment issues when accessing the setupPacket byte array as 16-bit
half words instead. I changed the casting to silence the warnings
and modified the declaration of the setupPacket field to make sure
that it is at least 2-byte aligned for these 16-bit accesses.
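For this last item, the change amounts to something like the sketch below; the member name, the helper, and the 8-byte size (a USB setup packet is always 8 bytes) are illustrative rather than the actual USBHost.h declaration:
    // A 2-byte aligned setup buffer allows aligned 16-bit stores.
    __attribute__((aligned(2))) static uint8_t setupPacket[8];

    static void set_wLength(uint16_t length) {
        uint16_t *setup16 = (uint16_t *)setupPacket;
        setup16[3] = length;   // wLength occupies bytes 6..7 of a USB setup packet
    }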
The following two memcpy() calls in USBEndpoint::init() are passing a
NULL pointer as the copy source location.
    memcpy(td_list_[0], 0, sizeof(HCTD));
    memcpy(td_list_[1], 0, sizeof(HCTD));
I suspect that these were meant to be memset() calls instead, so I
switched them. In one of the places where this method is called,
USBHost::newEndpoint(), it passes in an array of HCTD objects
which have already been cleared with similar memset() calls. I am
therefore pretty certain that these were meant to be memset() calls,
but if all callers already guarantee that they are zeroed out then
maybe the memset()s can be removed from USBEndpoint::init() anyway.
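With the switch, the two calls become straightforward zero fills of the transfer descriptors:
    memset(td_list_[0], 0, sizeof(HCTD));   // clear the descriptor rather than copy from NULL
    memset(td_list_[1], 0, sizeof(HCTD));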
I updated a few things in the USBHost source code to get it to compile
with GCC:
* In USBHALHost.cpp, the usb_buf global variable was defined with two
different alignment specifiers:
    static volatile __align(256) uint8_t usb_buf[TOTAL_SIZE] __attribute((section("AHBSRAM1"),aligned)); //256 bytes aligned!
The first one was not accepted by GCC. I removed the duplicate
alignment specifier and updated the one in the existing __attribute
property to force the desired 256-byte alignment (see the sketch after
this list).
* Removed the explicit use of the __packed property from structures
and instead used the PACKED macro defined in the mbed SDK's
toolchain.h header file. This macro does the right thing for the
various compilers supported by the mbed team.
* Updated USB_* message macros in dbg.h to place spaces around
references to the 'x' macro parameter. Without the spaces, C++11
compilation treats x as a user-defined string-literal suffix, which it
doesn't recognize. Adding the spaces makes the use of the parameter
easier to see, fixes this compile-time error, and doesn't add any extra
space to the final output string since the compiler only concatenates
the contents within the double quotes of the strings (see the sketch
below).
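For the first and last items above, the resulting lines look roughly like the sketch below; the real macro bodies in dbg.h differ, but the attribute syntax and the spacing are the point:
    // usb_buf with a single, GCC-friendly alignment specifier:
    static volatile uint8_t usb_buf[TOTAL_SIZE] __attribute__((section("AHBSRAM1"), aligned(256)));

    // Spaces around the macro parameter keep it a separate preprocessor token;
    // adjacent string literals are still concatenated with no extra characters.
    #define USB_DBG(x, ...) printf("[USB_DBG] " x "\r\n", ##__VA_ARGS__)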
By the way, I build with the gnu++11 standard since I have had requests
for its support from gcc4mbed. Some of the items fixed here might not
be errors when using older standards with GCC.
I built and tested a USBHostMSD sample with these updates using both
GCC and the online compiler. I will test more of the host interfaces
before issuing a pull request containing this commit.
I now use a signal to communicate when a packet has been received by
the ethernet hardware and should be processed by the packet_rx thread.
Previously, the change to make the lwIP stack thread safe introduced
enough delay in packet_rx that the semaphore count could lag behind
the processed packets and overflow its maximum token count. Now the
ISR uses the signal to indicate that >= 1 packet has been received
since the last time packet_rx() was woken.
Previously the ethernet driver used generic sys_arch* APIs exposed from
lwIP to manipulate the semaphores. I now call CMSIS RTOS APIs
directly when using the signals. I think this is acceptable since that
same driver source file already contains similar os* calls that talk
directly to the RTOS.
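The handshake is roughly the sketch below, assuming the CMSIS-RTOS RTX v1 signal calls (osSignalSet/osSignalWait); the RX_SIGNAL bit, thread id variable, and drain helper are hypothetical names, not the actual driver identifiers:
    #define RX_SIGNAL (1 << 0)

    // In ENET_IRQHandler(): note that one or more frames have arrived. Setting
    // an already-set signal is harmless, so nothing can overflow the way the
    // semaphore count did.
    //     osSignalSet(packet_rx_thread_id, RX_SIGNAL);

    // In packet_rx(): wake on the signal, then drain every pending receive
    // buffer before sleeping again.
    static void packet_rx_loop(void) {
        for (;;) {
            osSignalWait(RX_SIGNAL, osWaitForever);
            process_all_pending_rx_frames();   // hypothetical helper that empties the EMAC ring
        }
    }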
This reverts commit acb35785c9.
It turns out that this commit actually causes problems if an ethernet
interrupt is dropped because a higher privilege task is running, such
as LocalFileSystem accesses. If this happens, the semaphore count isn't
incremented enough times and the packet_rx() thread will fall behind and
end up running as though it had only one ethernet receive buffer. This
causes even more lost packets.
I plan to fix this by switching the semaphore to be a signal so that
the synchronization object is more boolean: it simply indicates whether
an interrupt has arrived since the last time packet_rx() was woken to
process inbound packets.
I recently pulled an NXP crash fix for their ethernet driver which will
requeue a pbuf to the ethernet driver, rather than sending it to the
lwIP stack, if it can't allocate a new pbuf to keep the ethernet
hardware primed with available packet buffers. While recently
reviewing this code I noticed that this requeue operation used not the
full size of the pbuf but the size of the last received packet. I now
reset the pbuf size back to its originally allocated size before doing
the requeue.
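The fix is essentially a one-liner before the requeue, roughly as below; rx_pbuf_size stands in for however the driver records the originally allocated payload size, and the requeue helper name is only illustrative:
    p->len = p->tot_len = rx_pbuf_size;   // undo the truncation to the last packet's length
    lpc_rxqueue_pbuf(lpc_enetif, p);      // hand the full-size buffer back to the EMAC DMA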
Previously the packet_rx() function would wait on the RxSem and, when
signalled, would process all available inbound packets. This used to
cause no problem, but once the thread synchronization was turned
on via SYS_LIGHTWEIGHT_PROT, the semaphore actually started to overflow
its maximum token count of 65535. This caused the mbed_die() flashing
LEDs of death. The old code was really breaking the producer/consumer
pattern that I typically see with a semaphore, since the consumer was
written to consume more than 1 produced object per semaphore wait.
Before the thread synchronization was enabled, the packet_rx() thread
could use a single time slice to process all of these packets and then
loop back around a few more times to decrement the semaphore count
while skipping the packet processing since it had all been done.
Now the packet processing code would cause the thread to give up its
time slice as it hit the newly enabled critical sections. In the end it
was possible for the code to leak 2 semaphore signals for every time
the thread was woken. After about 10 seconds of load, this would cause
a leak of 65535 signals.
NOTE: Two potential issues with this change:
1) The LPC_EMAC->RxConsumeIndex != LPC_EMAC->RxProduceIndex check was
removed from packet_rx(). I believe that this is Ok since the same
condition is later checked in lpc_low_level_input() anyway, so it
won't now try to process more packets than exist.
2) What if ENET_IRQHandler(void) ends up not signalling the RxSem for
every packet received? When would that happen? I could see it
happening if the ethernet hardware tried to pend more than 1
interrupt while the current execution priority was too elevated for
the pending requests to be processed. Putting the consumer loop back
in packet_rx() and using a Signal instead of a Semaphore might be a
better solution?