pull/965/head
bcostm 2015-03-13 10:14:56 +01:00
commit de420415a7
16 changed files with 2219 additions and 356 deletions

43
CONTRIBUTING.md Normal file

@ -0,0 +1,43 @@
# Description
This document is a cheat sheet for everyone who wants to contribute to the mbedmicro/mbed repository on GitHub.
All changes to the code base should originate from GitHub Issues and take advantage of existing GitHub flows. The goal is to attract contributors and allow them to contribute to code and documentation at the same time.
The guidelines in this document help new and existing contributors understand the process workflow and align with project rules before a pull request is submitted. They explain how a participant should do things like format code, test fixes, and submit patches.
## Where to get more information?
You can, for example, read more in the ```docs``` section in the [mbedmicro/mbed/doc](https://github.com/PrzemekWirkus/mbed/tree/docs/docs) directory.
# How to contribute
We really appreciate your contributions! We are an open source project and we need your help. We want to keep it as easy as possible to contribute changes that get things working in your environment. There are a few guidelines that we need contributors to follow so that we can keep on top of things.
You can pick up an existing [mbed GitHub Issue](https://github.com/mbedmicro/mbed/issues) and solve it, or implement a new feature you find important, attractive or just necessary. We will review your proposal via the pull request mechanism, give you comments and merge your changes if we decide your contribution satisfies criteria such as quality.
# Enhancements vs Bugs
Enhancements are:
* New feature implementations.
* Code refactoring.
* Coding rules and coding style improvements.
* Code comment improvements.
* Documentation work.
Bugs are:
* Issues raised internally or externally by mbedmicro/mbed users.
* Issues created internally (within the mbed team) from the Continuous Integration pipeline and build servers.
* Issues detected using automation tools such as compilers, sanitizers, static code analysis tools etc.
# Gate Keeper role
The Gate Keeper is the person responsible for executing the GitHub process workflow and for the repository / project code base. The Gate Keeper is also responsible for the quality stamp on code (pull requests) and approves or rejects code changes in the project's code base.
Gate Keepers will review your pull request code, give you comments in the pull request comment section and, in the end, if everything goes well, merge your pull request into one of our branches (most probably the default ```master``` branch).
Please be patient, digest Gate Keeper's feedback and respond promptly :)
# mbed SDK porting
* For more information regarding mbed SDK porting please refer to [mbed SDK porting](http://developer.mbed.org/handbook/mbed-SDK-porting) handbook.
* Before starting the mbed SDK porting, you might want to familiarize yourself with the [mbed SDK library internals](http://developer.mbed.org/handbook/mbed-library-internals) first.
# Glossary
* Gate Keeper - person responsible for the overall code-base quality of the mbedmicro/mbed project.
* Enhancement - new feature deployment, code refactoring actions or existing code improvements.
* Bugfix - issues originating from the GitHub Issues pool, raised internally within the mbed classic team, or issues from automated code validators like linters, static code analysis tools etc.
* Mbed classic - mbed SDK 2.0, located on GitHub at mbedmicro/mbed.

601
docs/BUILDING.md Normal file

@ -0,0 +1,601 @@
# Mbed SDK build script environment
## Introduction
The Mbed test framework allows users to test their mbed device applications, build the mbed SDK library, re-run tests, run mbed SDK regression, add new tests and get all these results automatically. Everything is done on your machine, so you have full control over the compilation and the tests you run.
It uses the Python 2.7 programming language to drive all tests, so make sure Python 2.7 is installed on your system and included in your system PATH. To compile the mbed SDK and tests you will need one or more supported compilers installed on your system.
To follow this short introduction you should already:
* Know what the mbed SDK is in general.
* Know how to install Python 2.7 and the ARM target cross compilers.
* Have C/C++ programming experience and at least a willingness to learn a bit about Python.
## Test automation
Currently our simple test framework allows users to run tests on their machines (hosts) in a fully automated manner. All you need to do is to prepare two configuration files.
## Test automation limitations
Note that for tests which require connected external peripherals, for example Ethernet, SD flash cards, external EEPROM tests, loops etc., you need to either:
* Modify the test source code to match the components' pin names to the actual mbed board pins where the peripheral is connected, or
* Wire your board the same way the test defines it.
## Prerequisites
The mbed test suite and build scripts are Python 2.7 applications and require the Python 2.7 runtime environment and [setuptools](https://pythonhosted.org/an_example_pypi_project/setuptools.html) to install dependencies.
What we need:
* Installed [Python 2.7](https://www.python.org/download/releases/2.7) programming language.
* Installed [setuptools](https://pythonhosted.org/an_example_pypi_project/setuptools.html#installing-setuptools-and-easy-install)
* Optionally you can install [pip](https://pip.pypa.io/en/latest/installing.html) which is the PyPA recommended tool for installing Python packages from command line.
The mbed SDK specifies a ```setup.py``` file in its repo root directory which holds information about all the packages it depends on. Bear in mind that only a few simple steps are required to install all dependencies.
First, clone the mbed SDK repo and go to the mbed SDK repo's directory:
```
$ git clone https://github.com/mbedmicro/mbed.git
$ cd mbed
```
Second, invoke ```setup.py``` so ```setuptools``` can install the mbed SDK's dependencies (external Python modules required by the mbed SDK):
```
$ python setup.py install
```
or
```
$ sudo python setup.py install
```
when your system requires administrator rights to install new Python packages.
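For reference, the dependency list lives in ```setup.py```'s ```install_requires``` section. A minimal sketch of what such a file declares is shown below; the package names match the manual-installation section that follows, while the remaining metadata is purely illustrative:
```python
from setuptools import setup

setup(
    name="mbed-tools",        # illustrative metadata, not the real values
    version="0.1",
    packages=["workspace_tools"],
    # External Python modules the build scripts and test suite rely on:
    install_requires=["pyserial", "prettytable", "intelhex"],
)
```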
## Prerequisites (manual Python package dependency installation)
**Please only read this chapter if you had problems installing the mbed SDK's Python package dependencies**.
Below you can find the list of the mbed SDK's Python module dependencies, with instructions on how to install them manually.
You can skip this part if you've already installed [Python 2.7](https://www.python.org/download/releases/2.7) and [setuptools](https://pythonhosted.org/an_example_pypi_project/setuptools.html) and successfully [installed all dependencies](#prerequisites).
* Please make sure you've installed [pip](https://pip.pypa.io/en/latest/installing.html) or [easy_install](https://pythonhosted.org/setuptools/easy_install.html#installing-easy-install)
Note: Easy Install is a python module (easy_install) bundled with [setuptools](https://pythonhosted.org/an_example_pypi_project/setuptools.html#installing-setuptools-and-easy-install) that lets you automatically download, build, install, and manage Python packages.
* Installed [pySerial](https://pypi.python.org/pypi/pyserial) module for Python 2.7.
pySerial can be installed from PyPI, either by manually downloading the files and installing them or by using:
```
$ pip install pyserial
```
or:
```
easy_install -U pyserial
```
* Installed [prettytable](https://code.google.com/p/prettytable/wiki/Installation) module for Python 2.7.
prettytable can be installed from PyPI, either by manually downloading the files and installing them or by using:
```
$ pip install prettytable
```
* Installed [IntelHex](https://pypi.python.org/pypi/IntelHex) module.
IntelHex may be downloaded from https://launchpad.net/intelhex/+download or http://www.bialix.com/intelhex/.
Assuming Python is properly installed on your platform, installation should just require running the following command from the root directory of the archive:
```
sudo python setup.py install
```
This will install the intelhex package into your system's site-packages directory. After that is done, any other Python scripts or modules should be able to import the package using:
```
$ python
Python 2.7.8 (default, Jun 30 2014, 16:03:49) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from intelhex import IntelHex
>>>
```
* You can check if you have correctly installed the above modules (or already have them) by starting Python and importing them.
```
$ python
Python 2.7.8 (default, Jun 30 2014, 16:03:49) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import serial
>>> import prettytable
>>> from intelhex import IntelHex
>>>
```
* Installed Git open source distributed version control system.
* Installed at least one of the compilers supported by the Mbed SDK workspace tools:
Compiler | Mbed SDK Abbreviation | Example Version
-----------------------|-----------------------|-----------
Keil ARM Compiler | ARM, uARM | ARM C/C++ Compiler, 5.03 [Build 117]
GCC ARM | GCC_ARM | gcc version 4.8.3 20131129 (release)
GCC CodeSourcery | GCC_CS | gcc version 4.8.1 (Sourcery CodeBench Lite 2013.11-24)
GCC CodeRed | GCC_CR | gcc version 4.6.2 20121016 (release)
IAR Embedded Workbench | IAR | IAR ANSI C/C++ Compiler V6.70.1.5641/W32 for ARM
* Mbed board. You can find list of supported platforms [here](https://mbed.org/platforms/).
### Getting Mbed SDK sources with test suite
So you have already installed Python (with the required modules) together with at least one supported compiler you will use with your mbed board. Great!
Now let's go further and get the Mbed SDK together with the test suite. In the next few steps we will clone the latest Mbed SDK source code and configure the path(s) to our compiler(s).
* Open a console and run the command below to clone the Mbed SDK repository hosted on [Github](https://github.com/mbedmicro/mbed).
```
$ git clone https://github.com/mbedmicro/mbed.git
Cloning into 'mbed'...
remote: Counting objects: 37221, done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 37221 (delta 0), reused 0 (delta 0), pack-reused 37218
Receiving objects: 100% (37221/37221), 20.38 MiB | 511.00 KiB/s, done.
Resolving deltas: 100% (24455/24455), done.
Checking connectivity... done.
Checking out files: 100% (3994/3994), done.
```
* Now go to the mbed directory you've just cloned and you can see the root directory structure of the Mbed SDK library sources. Just type the following commands:
```
$ cd mbed
$ ls
LICENSE MANIFEST.in README.md libraries setup.py travis workspace_tools
```
The directories we are interested in:
```
mbed/workspace_tools/ - test suite scripts, build scripts etc.
mbed/libraries/tests/ - mbed SDK tests,
mbed/libraries/tests/mbed/ - tests for the mbed SDK and peripherals,
mbed/libraries/tests/net/echo/ - tests for the Ethernet interface,
mbed/libraries/tests/rtos/mbed/ - tests for the RTOS.
```
### Workspace tools
Workspace tools are a set of Python scripts used off-line by the Mbed SDK team to:
* Compile and build mbed SDK,
* Compile and build libraries included in mbed SDK repo like e.g. ETH (Ethernet), USB, RTOS or CMSIS,
* Compile, build and run mbed SDK tests,
* Run test regression locally and in CI server,
* Get library, target, test configuration (paths, parameters, names etc.).
### Configure workspace tools to work with your compilers
Before we can run our first test we need to configure our test environment a little.
Now we need to tell the workspace tools where our compilers are.
* Please go to the ```mbed/workspace_tools/``` directory and create an empty file called ```private_settings.py```.
```
$ touch private_settings.py
```
* Populate this file with the Python code below:
```python
from os.path import join
# ARMCC
ARM_PATH = "C:/Work/toolchains/ARMCompiler_5.03_117_Windows"
ARM_BIN = join(ARM_PATH, "bin")
ARM_INC = join(ARM_PATH, "include")
ARM_LIB = join(ARM_PATH, "lib")
ARM_CPPLIB = join(ARM_LIB, "cpplib")
MY_ARM_CLIB = join(ARM_PATH, "lib", "microlib")
# GCC ARM
GCC_ARM_PATH = "C:/Work/toolchains/gcc_arm_4_8/4_8_2013q4/bin"
# GCC CodeSourcery
GCC_CS_PATH = "C:/Work/toolchains/Sourcery_CodeBench_Lite_for_ARM_EABI/bin"
# GCC CodeRed
GCC_CR_PATH = "C:/Work/toolchains/LPCXpresso_6.1.4_194/lpcxpresso/tools/bin"
# IAR
IAR_PATH = "C:/Work/toolchains/iar_6_5/arm"
SERVER_ADDRESS = "127.0.0.1"
LOCALHOST = "127.0.0.1"
# This is moved to separate JSON configuration file used by singletest.py
MUTs = {
}
```
Note: You need to provide the absolute path to the compiler(s) installed on your host machine. Replace the corresponding variable values with the paths to the compilers installed on your system:
* ```ARM_PATH``` for armcc compiler.
* ```GCC_ARM_PATH``` for GCC ARM compiler.
* ```GCC_CS_PATH``` for GCC CodeSourcery compiler.
* ```GCC_CR_PATH``` for GCC CodeRed compiler.
* ```IAR_PATH``` for IAR compiler.
If, for example, you do not use the ```IAR``` compiler, you do not have to modify anything. Workspace tools will use the ```IAR_PATH``` variable only if you explicitly ask for it from the command line. So do not worry and replace only the paths for the compilers installed on your system.
Note: Because this is a Python script and ```ARM_PATH```, ```GCC_ARM_PATH```, ```GCC_CS_PATH```, ```GCC_CR_PATH```, ```IAR_PATH``` are Python string variables, please use double backslashes or single forward slashes as the directory delimiter to avoid incorrect path formats. For example:
```python
ARM_PATH = "C:/Work/toolchains/ARMCompiler_5.03_117_Windows"
GCC_ARM_PATH = "C:/Work/toolchains/gcc_arm_4_8/4_8_2013q4/bin"
GCC_CS_PATH = "C:/Work/toolchains/Sourcery_CodeBench_Lite_for_ARM_EABI/bin"
GCC_CR_PATH = "C:/Work/toolchains/LPCXpresso_6.1.4_194/lpcxpresso/tools/bin"
IAR_PATH = "C:/Work/toolchains/iar_6_5/arm"
```
Note: Settings in ```private_settings.py``` will override the default variable values in the ```mbed/workspace_tools/settings.py``` file.
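The override mechanism is the usual Python pattern: ```settings.py``` defines defaults and then tries to import everything from ```private_settings.py```, falling back to the defaults when that file is missing. Roughly, as a sketch of the pattern rather than the verbatim contents of ```settings.py```:
```python
# settings.py (simplified sketch) - default values first...
ARM_PATH = "C:/Program Files/ARM"   # illustrative default

# ...then anything you define in private_settings.py wins.
try:
    from private_settings import *
except ImportError:
    print "[WARNING] Using default settings, no private_settings.py found"
```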
## Build Mbed SDK library from sources
Let's build the mbed SDK library off-line from sources using your compiler. We've already cloned the mbed SDK sources, installed the compilers and added their paths to ```private_settings.py```.
We should now be ready to use the workspace tools script ```build.py``` to compile and build the mbed SDK from sources.
We are still using the console. You should already be in the ```mbed/workspace_tools/``` directory; if not, go to ```mbed/workspace_tools/``` and type the command below:
```
$ python build.py -m LPC1768 -t ARM
```
or, if you want to take advantage of multi-threaded compilation, use the option ```-j X``` where ```X``` is the number of cores you want to use to compile the mbed SDK. See below:
```
$ python build.py -m LPC1768 -t ARM -j 4
Building library CMSIS (LPC1768, ARM)
Copy: core_ca9.h
Copy: core_caFunc.h
...
Compile: us_ticker_api.c
Compile: wait_api.c
Library: mbed.ar
Creating archive 'C:\temp\x\mbed\build\mbed\TARGET_LPC1768\TOOLCHAIN_ARM_STD\mbed.ar'
Copy: board.o
Copy: retarget.o
Completed in: (42.58)s
Build successes:
* ARM::LPC1768
```
The above command will build the mbed SDK for the [LPC1768](http://developer.mbed.org/platforms/mbed-LPC1768/) platform using the ARM compiler.
Let's have a look at the directory structure under ```mbed/build/```. We can see that for ```LPC1768``` a new directory ```TARGET_LPC1768``` was created. This directory contains all build primitives.
The directory ```mbed/TARGET_LPC1768/TOOLCHAIN_ARM_STD/``` contains the mbed SDK library ```mbed.ar```. This directory structure also stores all the headers which you should use with ```mbed.ar``` when building your own software.
```
$ tree ./mbed/build/
Folder PATH listing
Volume serial number is 006C006F 6243:3EA9
./MBED/BUILD
+---mbed
+---.temp
¦ +---TARGET_LPC1768
¦ +---TOOLCHAIN_ARM_STD
¦ +---TARGET_NXP
¦ +---TARGET_LPC176X
¦ +---TOOLCHAIN_ARM_STD
+---TARGET_LPC1768
+---TARGET_NXP
¦ +---TARGET_LPC176X
¦ +---TARGET_MBED_LPC1768
+---TOOLCHAIN_ARM_STD
```
Note: Why ```LPC1768```? For this example we are using ```LPC1768``` because this platform supports all compilers, so you only need to specify the proper compiler.
If you are not using the ARM Compiler replace ```ARM``` with your compiler nickname: ```GCC_ARM```, ```GCC_CS```, ```GCC_CR``` or ```IAR```. For example, if you are using IAR type the command:
```
$ python build.py -m LPC1768 -t IAR
```
Note: Workspace tools track changes in the source code, so if, for example, the mbed SDK or test source code changes, the ```build.py``` script will recompile the project with all dependencies. If there are no changes in the code, consecutive mbed SDK re-builds using ```build.py``` will not rebuild the project when it is not necessary. Try to run the last command once again; we can see that the ```build.py``` script will not recompile the project (there are no changes):
```
$ python build.py -m LPC1768 -t ARM
Building library CMSIS (LPC1768, ARM)
Building library MBED (LPC1768, ARM)
Completed in: (0.15)s
Build successes:
* ARM::LPC1768
```
### build.py script
The build script located in ```mbed/workspace_tools/``` is our core script used to drive the compilation, linking and building process for:
* mbed SDK (with libs like Ethernet, RTOS, USB, USB host).
* Tests which also can be linked with libraries like RTOS or Ethernet.
Note: The test suite also uses the same build script, inheriting the same properties such as automatic dependency tracking and project rebuild when the source code changes.
The ```build.py``` script is a powerful tool to build the mbed SDK for all available platforms using all cross-compilers supported by mbed. The script uses our workspace tools build API to create the desired platform-compiler builds. Use the script option ```--help``` to check all script parameters.
```
$ python build.py --help
```
* The command line parameter ```-m``` specifies the MCUs/platforms for which you want to build the mbed SDK. More than one MCU/platform may be specified with this parameter, using a comma as the delimiter.
Example for one platform build:
```
$ python build.py -m LPC1768 -t ARM
```
or for many platforms:
```
$ python build.py -m LPC1768,NUCLEO_L152RE -t ARM
```
* The parameter ```-t``` defines which toolchain should be used for the mbed SDK build. You can build the Mbed SDK for multiple toolchains with one command.
The example below (note there is no space after the comma) will compile the mbed SDK for the Freescale Freedom KL25Z platform using the ARM and GCC_ARM compilers:
```
$ python build.py -m KL25Z -t ARM,GCC_ARM
```
* You can combine these techniques to compile multiple targets with multiple compilers.
The example below will compile the mbed SDK for Freescale's KL25Z and KL46Z platforms using the ARM and GCC_ARM compilers:
```
$ python build.py -m KL25Z,KL46Z -t ARM,GCC_ARM
```
* Building libraries included in the mbed SDK's source code: the parameters ```-r```, ```-e```, ```-u```, ```-U```, ```-d```, ```-b``` will add the ```RTOS```, ```Ethernet```, ```USB```, ```USB Host```, ```DSP``` and ```U-Blox``` libraries respectively.
The example below will build the Mbed SDK library for the NXP LPC1768 platform together with the RTOS (```-r``` switch) and Ethernet (```-e``` switch) libraries.
```
$ python build.py -m LPC1768 -t ARM -r -e
Building library CMSIS (LPC1768, ARM)
Building library MBED (LPC1768, ARM)
Building library RTX (LPC1768, ARM)
Building library RTOS (LPC1768, ARM)
Building library ETH (LPC1768, ARM)
Completed in: (0.48)s
Build successes:
* ARM::LPC1768
```
* If you're unsure which platforms and toolchains are supported, please use the switch ```-S``` to print a simple matrix of platform to compiler dependencies.
```
$ python build.py -S
+-------------------------+-----------+-----------+-----------+-----------+-----------+-----------+------------+---------------+
| Platform | ARM | uARM | GCC_ARM | IAR | GCC_CR | GCC_CS | GCC_CW_EWL | GCC_CW_NEWLIB |
+-------------------------+-----------+-----------+-----------+-----------+-----------+-----------+------------+---------------+
| APPNEARME_MICRONFCBOARD | Supported | Default | Supported | - | - | - | - | - |
| ARCH_BLE | Default | - | Supported | Supported | - | - | - | - |
| ARCH_GPRS | Supported | Default | Supported | Supported | Supported | - | - | - |
...
| UBLOX_C029 | Supported | Default | Supported | Supported | - | - | - | - |
| WALLBOT_BLE | Default | - | Supported | Supported | - | - | - | - |
| XADOW_M0 | Supported | Default | Supported | Supported | Supported | - | - | - |
+-------------------------+-----------+-----------+-----------+-----------+-----------+-----------+------------+---------------+
*Default - default on-line compiler
*Supported - supported off-line compiler
Total platforms: 90
Total permutations: 297
```
The above list can be overwhelming, so please do not hesitate to use the switch ```-f``` to filter the ```Platform``` column.
```
$ python build.py -S -f ^K
+--------------+-----------+---------+-----------+-----------+--------+--------+------------+---------------+
| Platform | ARM | uARM | GCC_ARM | IAR | GCC_CR | GCC_CS | GCC_CW_EWL | GCC_CW_NEWLIB |
+--------------+-----------+---------+-----------+-----------+--------+--------+------------+---------------+
| K20D50M | Default | - | Supported | Supported | - | - | - | - |
| K22F | Default | - | Supported | Supported | - | - | - | - |
| K64F | Default | - | Supported | Supported | - | - | - | - |
| KL05Z | Supported | Default | Supported | Supported | - | - | - | - |
| KL25Z | Default | - | Supported | Supported | - | - | Supported | Supported |
| KL43Z | Default | - | Supported | - | - | - | - | - |
| KL46Z | Default | - | Supported | Supported | - | - | - | - |
| NRF51_DK | Default | - | Supported | Supported | - | - | - | - |
| NRF51_DK_OTA | Default | - | Supported | - | - | - | - | - |
+--------------+-----------+---------+-----------+-----------+--------+--------+------------+---------------+
*Default - default on-line compiler
*Supported - supported off-line compiler
Total platforms: 9
Total permutations: 28
```
or just give platform name:
```
$ python build.py -S -f LPC1768
+----------+---------+-----------+-----------+-----------+-----------+-----------+------------+---------------+
| Platform | ARM | uARM | GCC_ARM | IAR | GCC_CR | GCC_CS | GCC_CW_EWL | GCC_CW_NEWLIB |
+----------+---------+-----------+-----------+-----------+-----------+-----------+------------+---------------+
| LPC1768 | Default | Supported | Supported | Supported | Supported | Supported | - | - |
+----------+---------+-----------+-----------+-----------+-----------+-----------+------------+---------------+
*Default - default on-line compiler
*Supported - supported off-line compiler
Total platforms: 1
Total permutations: 6
```
* You can be more verbose with ```-v```, especially if you want to see each compilation / linking command build.py is executing:
```
$ python build.py -t GCC_ARM -m LPC1768 -j 8 -v
Building library CMSIS (LPC1768, GCC_ARM)
Copy: LPC1768.ld
Compile: startup_LPC17xx.s
[DEBUG] Command: C:/Work/toolchains/gcc_arm_4_8/4_8_2013q4/bin\arm-none-eabi-gcc
-x assembler-with-cpp -c -Wall -Wextra -Wno-unused-parameter -Wno-missing-field-initializers
-fmessage-length=0 -fno-exceptions -fno-builtin -ffunction-sections -fdata-sections -MMD
-fno-delete-null-pointer-checks -fomit-frame-pointer -mcpu=cortex-m3 -mthumb -O2
-DTARGET_LPC1768 -DTARGET_M3 -DTARGET_CORTEX_M -DTARGET_NXP -DTARGET_LPC176X
-DTARGET_MBED_LPC1768 -DTOOLCHAIN_GCC_ARM -DTOOLCHAIN_GCC -D__CORTEX_M3 -DARM_MATH_CM3
-DMBED_BUILD_TIMESTAMP=1424903604.77 -D__MBED__=1 -IC:\Work\mbed\libraries\mbed\targets\cmsis
-IC:\Work\mbed\libraries\mbed\targets\cmsis\TARGET_NXP
-IC:\Work\mbed\libraries\mbed\targets\cmsis\TARGET_NXP\TARGET_LPC176X -IC:\Work\mbed\libraries\mbed\targets\cmsis\TARGET_NXP\TARGET_LPC176X\TOOLCHAIN_GCC_ARM
-o C:\Work\mbed\build\mbed\.temp\TARGET_LPC1768\TOOLCHAIN_GCC_ARM\TARGET_NXP\TARGET_LPC176X\TOOLCHAIN_GCC_ARM\startup_LPC17xx.o
C:\Work\mbed\libraries\mbed\targets\cmsis\TARGET_NXP\TARGET_LPC176X\TOOLCHAIN_GCC_ARM\startup_LPC17xx.s
[DEBUG] Return: 0
...
```
## Cppcheck analysis
[Cppcheck](http://cppcheck.sourceforge.net/) is a static analysis tool for C/C++ code. Unlike C/C++ compilers and many other analysis tools it does not detect syntax errors in the code. Cppcheck primarily detects the types of bugs that the compilers normally do not detect. The goal is to detect only real errors in the code (i.e. have zero false positives).
Prerequisites:
* Please install ```Cppcheck``` on your system before you use it with the build scripts.
* You should also add Cppcheck to your system path.
The ```build.py``` script supports switching between compilation / building and pure static code analysis. You can use the switch ```--cppcheck``` to perform Cppcheck static code analysis.
* When you are using the ```--cppcheck``` switch, all macros, toolchain dependencies etc. are preserved, so you can be sure you are checking exactly the same code you would compile for your application.
* Cppcheck analysis can take up to a few minutes on slower machines.
* Usually you will use the switches ```-t``` and ```-m``` to define the toolchain and MCU (platform) respectively. You should do the same for Cppcheck analysis. Please note that the build script can also compile and build the RTOS, Ethernet library etc. If you want to check those too, just use the corresponding build script switches (e.g. ```-r```, ```-e```, ...).
Example:
```
$ python build.py -t uARM -m NUCLEO_F334R8 --cppcheck
```
# make.py script
```make.py``` is a ```mbed/workspace_tools/``` script used to build tests (we sometimes call them 'programs') one by one, manually. The script also allows you to flash a board with a test and execute it. This is deprecated functionality and will not be described here. Instead, please use ```singletest.py``` to build the mbed SDK and tests and run the automation for test cases included in ```mbedmicro/mbed```.
Note: The ```make.py``` script depends on an already built mbed SDK and library sources, so you need to pre-build the mbed SDK and, for example, the RTOS library to link a 'program' (test) with the mbed SDK and RTOS library. To pre-build the mbed SDK please use the ```build.py``` script.
Just for the sake of example, here are a few ways to use ```make.py``` together with the Freedom K64F board.
* We need to build the mbed SDK (in the directory ```mbed/build/```):
```
$ python build.py -t GCC_ARM -m K64F -j 8
Building library CMSIS (K64F, GCC_ARM)
Building library MBED (K64F, GCC_ARM)
Completed in: (0.59)s
Build successes:
* GCC_ARM::K64F
```
* We can print all 'programs' (test cases) ```make.py``` can build for us:
```
$ python make.py
.
[ 0] MBED_A1: Basic
[ 1] MBED_A2: Semihost file system
[ 2] MBED_A3: C++ STL
[ 3] MBED_A4: I2C TMP102
.
```
For example, the 'program' under index ```2``` is the ```MBED_A3``` test case, which we can build and flash onto a K64F board.
* Build a test with ```make.py``` by specifying the test case name with the ```-n``` option:
```
$ python make.py -t GCC_ARM -m K64F -n MBED_A3
Building project STL (K64F, GCC_ARM)
Compile: main.cpp
[Warning] main.cpp@76: In function 'int main()': deprecated conversion from string constant to 'char*' [-Wwrite-strings]
.
.
.
[Warning] main.cpp@76: In function 'int main()': deprecated conversion from string constant to 'char*' [-Wwrite-strings]
Compile: test_env.cpp
Link: stl
Elf2Bin: stl
Image: C:\Work\mbed\build\test\K64F\GCC_ARM\MBED_A3\stl.bin
```
Because we previously built the mbed SDK, we are now able to drive the test case compilation and linking with the mbed SDK and produce the ```MBED_A3``` test case binary in the build directory:
```
C:\Work\mbed\build\test\K64F\GCC_ARM\MBED_A3\stl.bin
```
For more help type ```$ python make.py --help``` in your command line.
# project.py script
The ```project.py``` script is used to export test cases ('programs') from the test case portfolio to an off-line IDE. This is an easy way to export a test project to IDEs such as:
* codesourcery.
* coide.
* ds5_5.
* emblocks.
* gcc_arm.
* iar.
* kds.
* lpcxpresso.
* uvision.
You can export a project using the command line. All you need to do is to specify the mbed platform name (option ```-m```), your IDE (option ```-i```) and the project name you want to export (option ```-n``` or option ```-p```).
In the example below we export our project so we can work on it using the GCC ARM cross-compiler. The build mechanism used to drive the exported build will be ```Make```.
```
$ python project.py -m K64F -n MBED_A3 -i gcc_arm
Copy: test_env.h
Copy: AnalogIn.h
Copy: AnalogOut.h
.
.
.
Copy: K64FN1M0xxx12.ld
Copy: main.cpp
Successful exports:
* K64F::gcc_arm C:\Work\mbed\build\export\MBED_A3_gcc_arm_K64F.zip
```
You can see that the exporter placed the compressed project export as a ```zip``` file in the ```mbed/build/export/``` directory.
Example structure of the export file ```MBED_A3_gcc_arm_K64F.zip```:
```
MBED_A3
├───env
└───mbed
├───api
├───common
├───hal
└───targets
├───cmsis
│ └───TARGET_Freescale
│ └───TARGET_MCU_K64F
│ └───TOOLCHAIN_GCC_ARM
└───hal
└───TARGET_Freescale
└───TARGET_KPSDK_MCUS
├───TARGET_KPSDK_CODE
│ ├───common
│ │ └───phyksz8081
│ ├───drivers
│ │ ├───clock
│ │ ├───enet
│ │ │ └───src
│ │ ├───interrupt
│ │ └───pit
│ │ ├───common
│ │ └───src
│ ├───hal
│ │ ├───adc
│ │ ├───can
│ │ ├───dac
│ │ ├───dmamux
│ │ ├───dspi
│ │ ├───edma
│ │ ├───enet
│ │ ├───flextimer
│ │ ├───gpio
│ │ ├───i2c
│ │ ├───llwu
│ │ ├───lptmr
│ │ ├───lpuart
│ │ ├───mcg
│ │ ├───mpu
│ │ ├───osc
│ │ ├───pdb
│ │ ├───pit
│ │ ├───pmc
│ │ ├───port
│ │ ├───rcm
│ │ ├───rtc
│ │ ├───sai
│ │ ├───sdhc
│ │ ├───sim
│ │ ├───smc
│ │ ├───uart
│ │ └───wdog
│ └───utilities
│ └───src
└───TARGET_MCU_K64F
├───device
│ ├───device
│ │ └───MK64F12
│ └───MK64F12
├───MK64F12
└───TARGET_FRDM
```
After unpacking the exporter's ```zip``` file we can go to the directory and see the files inside the MBED_A3 directory:
```
$ ls
GettingStarted.htm Makefile env main.cpp mbed
```
The exporter generated a ```Makefile``` for us, so now we can build our software:
```
$ make -j 8
.
.
.
text data bss dec hex filename
29336 184 336 29856 74a0 MBED_A3.elf
```
We can see the root directory of the exported project is now populated with binary files:
* MBED_A3.bin
* MBED_A3.elf
* MBED_A3.hex
You also have the map file ```MBED_A3.map``` at your disposal.
```
$ ls
GettingStarted.htm MBED_A3.bin MBED_A3.elf MBED_A3.hex MBED_A3.map Makefile env main.cpp main.d main.o mbed
```

269
docs/COMMITTERS.md Normal file

@ -0,0 +1,269 @@
# Committing changes to mbedmicro/mbed
* Our current branching model is very simple. We are using the ```master``` branch to merge all pull requests.
* Based on a stable ```SHA``` version of the ```master``` branch we decide to release and at the same time ```tag``` our build release.
* Our current release versioning follows a simple integer version: ```94```, ```95```, ```96``` etc.
# Committer Guide
## How to decide what release(s) should be patched
This section provides a guide to help a committer decide the specific base branch that a change set should be merged into.
Currently our default branch is the ```master``` branch. All pull requests should be created against the ```master``` branch.
The mbed SDK is currently released on the master branch under a certain tag name (see [Git tagging basics](http://git-scm.com/book/en/v2/Git-Basics-Tagging)). You can see the mbed SDK tags and switch between them, for example to go back to a previous mbed SDK release.
```
$ git tag
```
Please note: the mbed SDK ```master``` branch's ```HEAD``` is our latest code and may not be as stable as you expect. We are putting our best effort into running regression testing (in-house) against pull requests and the latest code.
Each commit to ```master``` will trigger [GitHub's Travis Continuous Integration](https://travis-ci.org/mbedmicro/mbed/builds).
### Pull request
Please send pull requests with changes which are:
* Complete (your code will compile and perform as expected).
* Tested on hardware.
  * You can use the included mbed SDK test suite to perform testing. See TESTING.md.
  * If your change or feature does not include a test case, please add one (or more) to cover the new functionality.
  * If you can't test your functionality, describe why.
* Documented source code:
  * New and modified functions have descriptive comments.
  * You follow the coding rules and styles provided by the mbed SDK project.
* Documented pull request description:
  * A description of changes is added - explain your change / enhancement.
  * References to existing issues, other pull requests or forum discussions are included.
  * Test results are added.
After you send us your pull request our Gate Keeper will change the state of the pull request to:
* ```enhancement``` or ```bug``` when the pull request creates a new improvement or fixes an issue.
Then we will set labels for you:
* ```review``` to let you know your pull request is under review and you can expect review-related comments from us.
* ```in progress``` when your pull request requires some additional change which will for now block this pull request from merging.
In the end we will remove the ```review``` label and merge your change if everything goes well.
## C++ coding rules & coding guidelines
### Rules
* The mbed SDK code follows K&R style (Reference: [K&R style](http://en.wikipedia.org/wiki/Indent_style#K.26R_style)) with at least 2 exceptions which can be found in the list below the code snippet:
```c++
static const PinMap PinMap_ADC[] = {
    {PTC2, ADC0_SE4b, 0},
    {NC  , NC       , 0}
};

uint32_t adc_function(analogin_t *obj, uint32_t options)
{
    uint32_t instance = obj->adc >> ADC_INSTANCE_SHIFT;
    switch (options) {
        case 1:
            timeout = 6;
            break;
        default:
            timeout = 10;
            break;
    }
    while (!adc_hal_is_conversion_completed(instance, 0)) {
        if (timeout == 0) {
            break;
        } else {
            timeout--;
        }
    }
    if (obj->adc == ADC_CHANNEL0) {
        adc_measure_channel(instance);
        adc_stop_channel(instance);
    } else {
        error("channel not available");
    }
#if DEBUG
    for (uint32_t i = 0; i < 10; i++) {
        printf("Loop : %d", i);
    }
#endif
    return adc_hal_get_conversion_value(instance, 0);
}
```
* Indentation - 4 spaces. Please do not use tabs!
* Braces - K&R, except for functions where the opening brace is on a new line.
* 1TBS - use braces for the statements ```if```, ```else```, ```while```, ```for``` (exception from K&R). Reference: [1TBS](http://en.wikipedia.org/wiki/Indent_style#Variant:_1TBS).
* One line per statement.
* A preprocessor macro starts at the beginning of a new line; the code inside is indented according to the code above it.
* Cases within switch are indented (exception from K&R).
* Space after the statements ```if```, ```while```, ```for```, ```switch```; the same applies to binary and ternary operators.
* Each line has preferably at most 120 characters.
* For pointers, ```*``` is adjacent to a name (analogin_t *obj).
* Don't leave trailing spaces at the end of lines.
* Empty lines should have no trailing spaces.
* Unix line endings are default option for files.
* Use capital letters for macros.
* A file should have an empty line at the end.
and:
* We are not using C++11 yet, so do not write code compliant with this standard.
* We are not using libraries like ```BOOST```, so please do not include any ```BOOST``` headers in your code.
* C++ & templates: please take into consideration that templates are not fully supported by all cross-compilers. You may have difficulties compiling template code with some cross-compilers, so make sure your template code compiles with more than one compiler.
### Naming conventions
Classes:
* Begins with a capital letter, and each word in it begins also with a capital letter (```AnalogIn```, ```BusInOut```).
* Methods contain lower case letters, with distinct words separated by underscores.
* Private members start with an underscore.
User defined types (typedef):
* Structures - suffix ```_t``` - to denote it is a user defined type.
* Enumerations - the type name and value names follow the same naming convention as classes (e.g. ```MyNewEnum```).
Functions:
* Contain lower case letters (as methods within classes).
* Distinct words separated by underscores (```wait_ms```, ```read_u16```).
* Please make sure that all functions in your module have a unique prefix, so that when your module is compiled with other modules, function names (and e.g. extern global variable names) do not conflict.
Example code look&feel:
```c++
#define ADC_INSTANCE_SHIFT 8

class AnalogIn {
public:
    /** Create an AnalogIn, connected to the specified pin
     *
     * @param pin AnalogIn pin to connect to
     * @param name (optional) A string to identify the object
     */
    AnalogIn(PinName pin) {
        analogin_init(&_adc, pin);
    }

    /** Read the input voltage, represented as a float in the range [0.0, 1.0]
     *
     * @returns
     *     A floating-point value representing the current input voltage, measured as a percentage
     */
    uint32_t read() {
        return analogin_read(&_adc, operation);
    }

protected:
    analogin_t _adc;
};

typedef enum {
    ADC0_SE0 = (0 << ADC_INSTANCE_SHIFT) | 0,
} ADCName;

struct analogin_s {
    ADCName adc;
};

typedef struct analogin_s analogin_t;
```
### Doxygen documentation
All functions / methods should be documented using doxygen javadoc in a header file. For more information regarding writing API documentation, follow [this](https://mbed.org/handbook/API-Documentation) link.
Example of well documented code:
```c++
#ifndef ADC_H
#define ADC_H

#ifdef __cplusplus
extern "C" {
#endif

/** ADC Measurement method
 *
 * @param obj     Pointer to the analogin object.
 * @param options Options to be enabled by ADC peripheral.
 *
 * @returns
 *     Measurement value on defined ADC channel.
 */
uint32_t adc_function(analogin_t *obj, uint32_t options);

#ifdef __cplusplus
}
#endif

#endif
```
### C/C++ Source code indenter
In the Mbed project you can use the AStyle (Reference: [Artistic Style](http://astyle.sourceforge.net/)) source code indenter to help you auto-format your source code. It will certainly not correct all your coding style issues, but it will eliminate most of them. You can download AStyle from the location referenced above.
The official Mbed SDK style uses the AStyle settings below (defined by command line switches):
```
--style=kr --indent=spaces=4 --indent-switches
```
To format your file you can execute the command below. Just replace ```$(FULL_CURRENT_PATH)``` with the path to your source file.
```
$ astyle.exe --style=kr --indent=spaces=4 --indent-switches $(FULL_CURRENT_PATH)
```
## Python coding rules & coding guidelines
Some of our tools in workspace_tools are written in ```Python 2.7```. When developing tools in Python we prefer to keep similar code styles across all Python source code. Please note that not all rules must be enforced. For example we do not limit you to 80 characters per line; just be sure your code can fit on a widescreen display.
Please stay compatible with ```Python 2.7```, but nothing stops you from writing your code so that in the future it will be Python 3 friendly.
Please check our Python source code (especially ```test_api.py``` and ```singletest.py```) to get a notion of how your new code should look. We know our code is not perfect, but please try to fit the same coding style to the existing code so the source looks consistent and is not a series of different flavors.
Some general guidelines:
* Use Python idioms, please refer to one of many on-line guidelines how to write Pythonic code: [Code Like a Pythonista: Idiomatic Python](http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html).
* Please do not use tabs. Please use 4 spaces for indentation instead.
* Please put a space character between operators, after commas etc.
* Please document your code, write comments and ```doc``` sections for each function or class you implement.
### Static Code Analyzers for Python
If you are an old-school developer you surely remember tools like lint. "lint was the name originally given to a particular program that flagged some suspicious and non-portable constructs (likely to be bugs) in C language source code." Now lint-like programs are used to check for similar code issues in multiple languages, including Python. Please do use them if you want to commit new code to workspace_tools and other mbed SDK Python tooling.
Below is the list of Python lint tools you may want to use (example invocations follow the list):
* [pyflakes](https://pypi.python.org/pypi/pyflakes) - Please scan your code with pyflakes and remove all issues it reports. If you are unsure whether something should be modified or not, you can skip the lint-related fix and mention the issue as a possible additional commit in your pull request description.
* [pylint](http://www.pylint.org/) - Please scan your code with pylint and check if there are any issues which can be resolved and are obvious "to fix" bugs. For example, you may have forgotten to add 'self' as the first parameter in a class method's parameter list, or you may be calling unknown functions / functions from modules that were not imported.
* [pychecker](http://pychecker.sourceforge.net/) - optional, but more the merrier ;)
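For example, before submitting a pull request you could run both tools against the scripts you touched:
```
$ pyflakes singletest.py
$ pylint test_api.py
```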
Example Python look&feel:
```python
class HostRegistry:
    """ Class stores registry with host tests and objects representing them
    """
    HOST_TESTS = {}  # host_test_name -> host_test_object

    def register_host_test(self, ht_name, ht_object):
        """ Registers (adds) host test by name in HOST_TESTS registry
            if host test is not already registered (check by name).
        """
        if ht_name not in self.HOST_TESTS:
            self.HOST_TESTS[ht_name] = ht_object

    def unregister_host_test(self, ht_name):
        """ Unregisters (removes) host test by name from HOST_TESTS registry.
        """
        if ht_name in self.HOST_TESTS:
            self.HOST_TESTS[ht_name] = None

    def get_host_test(self, ht_name):
        """ Returns HOST_TEST if host name is valid.
            In case no host test is available return None.
        """
        return self.HOST_TESTS[ht_name] if ht_name in self.HOST_TESTS else None

    def is_host_test(self, ht_name):
        """ Function returns True if host name is valid (is in HOST_TESTS).
        """
        return ht_name in self.HOST_TESTS
```
## Testing
Please refer to the TESTING.md document for details regarding the mbed SDK test suite and build scripts included in ```mbed/workspace_tools/```.
## Before pull request checklist
* Your pull request description section contains:
  * Rationale - tell us why you submitted this pull request. This is your chance to write us a summary of your change.
  * Description - describe the changes you've made and tell us which new features / functionalities were implemented.
  * Manual / Cookbook / Handbook - you can put here a manual, cookbook or handbook related to your change / enhancement. Your documentation can stay with the pull request.
  * Test results (if applicable).
* Make sure you followed project's coding rules and styles.
* No dependencies are created on external C/C++ libraries which are not already included in our repository.
* Please make sure that all functions in your module have a unique prefix (no namespace collisions).
* You reused existing functionality. Please do not add or rewrite existing code; e.g. use mbed's ```FunctionPointer``` if possible to store your function pointers rather than writing another wrapper for it, we already have one. If some functionality is missing, just add it! Extend our APIs wisely!
* Were you consistent? Please continue using the style / code formatting, variable naming etc. of the files you are modifying.
* Your code compiles and links, and doesn't generate additional compilation warnings.

877
docs/TESTING.md Normal file

@ -0,0 +1,877 @@
# Mbed SDK automated test suite
## Introduction
The test suite allows users to run, locally on their machines, the Mbed SDK's tests included in the Mbed SDK repository. It also allows users to create their own tests and, for example, add new tests to the test set as they progress with their project. If a test is generic enough it could be included in the official Mbed SDK test pool; just propose it via a normal pull request!
Each test is supervised by a Python script called a "host test", which will at least monitor the test's output. The test suite uses the build script API to compile and build the test source together with the libraries required by the test, like CMSIS, Mbed, Ethernet, USB etc.
## What is host test?
The test suite supports a test supervisor concept. This concept is realized by a separate Python script called a ```host test```. Host tests can be found in the ```mbed/workspace_tools/host_tests/``` directory. Note: In newer mbed versions (mbed OS) host tests will be a separate library.
The host test script is executed in parallel with the test runner to monitor test execution. A basic host test just monitors the device's default serial port for test results returned by the test runner. Simple tests will print the test result on the serial port. In other cases the host test can, for example, judge from the results returned by the test runner whether the test passed or failed. It all depends on the test itself.
In some cases the host test can be a TCP server echoing packets from the test runner and judging packet loss. In other cases it can just check if values returned from an accelerometer are actually valid (sane).
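To give an idea of what the simplest case looks like, here is a stripped-down sketch of a serial-monitoring supervisor. The real host tests live in ```mbed/workspace_tools/host_tests/```; the port name and the ```{success}``` / ```{failure}``` markers below are assumptions for illustration, not the actual mbed host test protocol:
```python
import serial  # pySerial, already listed as a test suite dependency

def monitor_test(port="COM4", baudrate=9600, timeout=60):
    # Poll the MUT's default serial port and derive a verdict from its output.
    conn = serial.Serial(port, baudrate=baudrate, timeout=timeout)
    try:
        while True:
            line = conn.readline()
            if not line:
                return "TIMEOUT"        # device printed nothing before timeout
            if "{success}" in line:     # illustrative pass marker
                return "PASS"
            if "{failure}" in line:     # illustrative fail marker
                return "FAIL"
    finally:
        conn.close()
```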
## Test suite core: singletest.py script
The ```singletest.py``` script located in ```mbed/workspace_tools/``` is a test suite script which allows users to compile and build tests and test runners (it also supports the CppUTest unit test library). The script is also responsible for test execution on devices selected by configuration files.
### Parameters of singletest.py
The test execution script ```singletest.py``` is a fairly powerful tool to run tests for the mbed SDK platform. It is flexible and allows users to configure the test execution process and define which mbed platforms will be tested.
By specifying external configuration files (in JSON format) you can gain flexibility and prepare many different test scenarios. Just pass the configuration file names to the script and run it.
#### MUTs Specification
You can easily configure your MUTs (Mbed Under Test) by creating a configuration file with the MUTs' description.
Note: This configuration file must be in [JSON format](http://www.w3schools.com/json/).
Note: Unfortunately the JSON format does not allow you to have comments inside the JSON code.
Let's see an example and try to configure a small "test farm" with three devices connected to your host computer. In this example no peripherals (like SD card or EEPROM) are connected to our Mbed boards. We will use three platforms in this example:
* [NXP LPC1768](https://mbed.org/platforms/mbed-LPC1768) board.
* [Freescale KL25Z](https://mbed.org/platforms/KL25Z) board and
* [STMicro Nucleo F103RB](https://mbed.org/platforms/ST-Nucleo-F103RB) board.
After connecting boards to our host machine (PC) we can check which serial ports and disks they occupy. For our example let's assume that:
* ```LPC1768``` serial port is on ```COM4``` and disk drive is ```J:```.
* ```KL25Z``` serial port is on ```COM39``` and disk drive is ```E:```.
* ```NUCLEO_F103RB``` serial port is on ```COM11``` and disk drive is ```I:```.
If you are working under Linux your port and disk could look like ```/dev/ttyACM5``` and ```/media/usb5```.
This information is needed to create the ```muts_all.json``` configuration file. You can create it in the ```mbed/workspace_tools/``` directory:
```
$ touch muts_all.json
```
Its name will be passed to the ```singletest.py``` script after the ```-M``` (MUTs specification) switch. Let's see what this file's content would look like in our example:
```json
{
"1" : {"mcu": "LPC1768",
"port":"COM4",
"disk":"J:\\",
"peripherals": []
},
"2" : {"mcu": "KL25Z",
"port":"COM39",
"disk":"E:\\",
"peripherals": []
},
"3" : {"mcu": "NUCLEO_F103RB",
"port":"COM11",
"disk":"I:\\",
"peripherals": []
}
}
```
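Under Linux the same file simply uses device nodes and mount points instead of COM ports and drive letters, for example (the exact device paths are illustrative and depend on your system):
```json
{
    "1" : {"mcu": "LPC1768",
           "port":"/dev/ttyACM3",
           "disk":"/media/usb3",
           "peripherals": []
    },
    "2" : {"mcu": "KL25Z",
           "port":"/dev/ttyACM5",
           "disk":"/media/usb5",
           "peripherals": []
    }
}
```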
Note: We will leave the field ```peripherals``` empty for the sake of this example; we will explain it later. All you need to do now is to properly fill in the fields ```mcu```, ```port``` and ```disk```.
Note: Please make sure the files ```muts_all.json``` and ```test_spec.json``` are in the ```workspace_tools/``` directory. We will assume in this example that they are.
Where do you find the ```mcu``` names? You can use the option ```-S``` of the ```build.py``` script (in the ```mbed/workspace_tools/``` directory) to check all supported off-line MCU names.
Note: If you update the mbed device firmware or even disconnect / reconnect the mbed device, you may find that the serial port / disk configuration has changed. You need to update the configuration file accordingly, or you will face connection problems and obviously tests will not run.
#### Peripherals testing
When using a MUTs configuration file (switch ```-M```) you can define in the MUTs JSON file which peripherals are connected to your device:
```json
{
"1" : {"mcu" : "KL25Z",
"port" : "COM39",
"disk" : "E:\\",
"peripherals" : ["SD", "24LC256"]}
}
```
You can force the test suite to run only common tests (switch ```-C```) or only peripheral tests (switch ```-P```).
```
$ python singletest.py -i test_spec.json -M muts_all.json -C
```
will not include tests for SD card and EEPROM 24LC256.
```
$ python singletest.py -i test_spec.json -M muts_all.json -P
```
will only run tests bound to the SD card and EEPROM 24LC256.
Note: option ```-P``` is useful, for example, when you have the same platform and different shields you want to test. There is no need to test the common part (timers, RTC, RTOS etc.) all the time. You can force peripheral-only testing on some devices and, for example, only common tests on other devices.
#### Additional MUTs configuration file settings
You can add extra information to each MUT configuration. In particular you can specify which flashing (binary copy) method should be used, how to reset the target and, for example, set the reset timeout (used to delay test execution just after reset).
muts_all.json:
```json
{
"1" : {"mcu" : "LPC1768",
"port" : "COM77",
"disk" : "G:\\",
"peripherals" : ["TMP102", "digital_loop", "port_loop", "analog_loop", "SD"]},
"2" : {"mcu" : "KL25Z",
"port" : "COM89",
"disk" : "F:\\",
"peripherals" : ["SD", "24LC256", "KL25Z"],
"copy_method" : "copy",
"reset_type" : "default",
"reset_tout" : "2"},
"3" : {"mcu" : "LPC11U24",
"port" : "COM76",
"disk" : "E:\\",
"peripherals" : []}
}
```
Please note that for MUT no. 2 a few extra parameters were defined: ```copy_method```, ```reset_type``` and ```reset_tout```. Using these extra options you can tell the test suite more about the MUT you are using. This will allow you to be more flexible in how you configure and use your MUTs.
* ```copy_method``` - STRING - tells the test suite which binary copy method should be used.
You may notice that the ```singletest.py``` command line help contains descriptions of:
* Option ```-c``` (in the MUTs file called ```copy_method```) with the available copy methods supported by the test suite plugin system.
* Option ```-r``` (in the MUTs file called ```reset_type```) with the available reset methods supported by the test suite plugin system.
* ```reset_type``` - STRING - some boards may require special reset handling, for example a vendor specific command must be executed to reset the device.
* ```reset_tout``` - INTEGER - extra timeout applied just after the device is reset. May be used to wait a few seconds so the device can finish booting, flashing data internally etc.
Part of help listing for singletest.py:
```
-c COPY_METHOD, --copy-method=COPY_METHOD
Select binary copy (flash) method. Default is Python's
shutil.copy() method. Plugin support: copy, cp,
default, eACommander, eACommander-usb, xcopy
-r MUT_RESET_TYPE, --reset-type=MUT_RESET_TYPE
Extra reset method used to reset MUT by host test
script. Plugin support: default, eACommander,
eACommander-usb
```
----
Now we've already defined how our devices are connected to our host PC. We can continue and define which of these MUTs will be tested and which compilers we will use to compile and build the Mbed SDK and tests. To do so we need to create a test specification file (let's call it ```test_spec.json```) and put inside it information about which MUTs we actually want to test. We will pass this file's name to the ```singletest.py``` script using the ```-i``` switch.
Below we can see how a sample ```test_spec.json``` file's content could look. (I've included all possible toolchains; we will change it in a moment):
```json
{
"targets": {
"LPC1768" : ["ARM", "uARM", "GCC_ARM", "GCC_CS", "GCC_CR", "IAR"],
"KL25Z" : ["ARM", "GCC_ARM"],
"NUCLEO_F103RB" : ["ARM", "uARM"]
}
}
```
The above example configuration will force tests for the LPC1768, KL25Z and NUCLEO_F103RB platforms, and:
* Compilers: ```ARM```, ```uARM```, ```GCC_ARM```, ```GCC_CS```, ```GCC_CR``` and ```IAR``` will be used to compile tests for NXP's ```LPC1768```.
* Compilers: ```ARM``` and ```GCC_ARM``` will be used for Freescale's ```KL25Z``` platform.
* Compilers: ```ARM``` and ```uARM``` will be used for STMicro's ```NUCLEO_F103RB``` platform.
For our example purposes let's assume we only have the Keil ARM compiler, so let's change the configuration in the ```test_spec.json``` file and reduce the number of compilers to those we actually have:
```json
{
"targets": {
"LPC1768" : ["ARM", "uARM"],
"KL25Z" : ["ARM"],
"NUCLEO_F103RB" : ["ARM", "uARM"]
}
}
```
#### Run your tests
After you configure all your MUTs and compilers you are ready to run tests. Make sure your devices are connected and your configuration files reflect your current configuration (serial ports, devices). Go to the workspace_tools directory in your mbed location.
```
$ cd workspace_tools/
```
and execute the test suite script:
```
$ python singletest.py -i test_spec.json -M muts_all.json
```
To check your configuration before test execution please use ```--config``` switch:
```
$ python singletest.py -i test_spec.json -M muts_all.json --config
MUTs configuration in m.json:
+-------+-------------+---------------+------+-------+
| index | peripherals | mcu | disk | port |
+-------+-------------+---------------+------+-------+
| 1 | | LPC1768 | J:\ | COM4 |
| 3 | | NUCLEO_F103RB | I:\ | COM11 |
| 2 | | KL25Z | E:\ | COM39 |
+-------+-------------+---------------+------+-------+
Test specification in t.json:
+---------------+-----+------+
| mcu | ARM | uARM |
+---------------+-----+------+
| NUCLEO_F103RB | Yes | Yes |
| KL25Z | Yes | - |
| LPC1768 | Yes | Yes |
+---------------+-----+------+
```
It should help you locate basic problems with the configuration files and toolchain configuration.
Note: Configurations with issues will be marked with ```*``` sign.
Having multiple configuration files allows you to manage your test scenarios in a more flexible manner. You can:
* Set up all platforms and toolchains used during testing.
* Define (using script's ```-n``` switch) which tests you want to run during testing.
* Just run regression (all tests). Regression is default setting for test script.
You can also force ```singletest.py``` script to:
* Run only peripherals' tests (switch ```-P```) or
* Just skip peripherals' tests (switch ```-C```).
* Build the mbed SDK, libraries and corresponding tests with multiple cores; just use the ```-j X``` option where ```X``` is the number of cores you want to use for compilation.
```
$ python singletest.py -i test_spec.json -M muts_all.json -j 8
```
* Print test cases' console output using the ```-V``` option.
* Only build the mbed SDK, tests and dependent libraries with the switch ```-O```:
```
$ python singletest.py -i test_spec.json -M muts_all.json -j 8 -O
```
* Execute each test case multiple times with the ```--global-loops X``` option, where ```X``` is the number of repeats. Additionally, use the option ```-W``` to continue repeating a test case's execution only if it continues to fail.
```
$ python singletest.py -i test_spec.json -M muts_all.json --global-loops 3 -W
```
* The option ```--loops``` can be used to override the global loop count and redefine the loop count for particular tests. Define test loops as ```TEST_ID=X``` where ```X``` is an integer; separate loop count definitions with commas if necessary, e.g. ```TEST_1=5,TEST_2=20,TEST_3=2```.
```
$ python singletest.py -i test_spec.json -M muts_all.json RTOS_1=10,RTOS_2=5
```
This will execute test ```RTOS_1``` ten (10) times and test ```RTOS_2``` five (5) times.
* Force a non-default copy method. Note that mbed platforms can be flashed with a simple binary drag & drop; we just copy the file onto the mbed's disk and the interface chip flashes the target MCU with the given binary. You can force a non-standard copy method (the default is Python's ```shutil.copy()```) by using the option ```-c COPY_METHOD``` where ```COPY_METHOD``` can be a shell / command line copy command like ```cp```, ```copy```, ```xcopy``` etc. Make sure those commands are available from the command line!
```
$ python singletest.py -i test_spec.json -M muts_all.json -c cp
```
* Run only selected tests. You can select which tests should be executed when you run the test suite. Use the ```-n``` switch to define, by their IDs, the tests you want to execute. Use commas to separate test IDs:
```
$ python singletest.py -i test_spec.json -M muts_all.json -n RTOS_1,RTOS_2,RTOS_3,MBED_10,MBED_16,MBED_11
```
* Set a common output binary name for all tests. In some cases you would like to have the same name for all tests. You can use the switch ```--firmware-name``` to specify the build script's output binary name (without extension).
In the example below we would like all test binaries to be called ```firmware.bin``` (or with another extension like .elf or .hex, depending on the target's accepted format).
```
$ python singletest.py -i test_spec.json -M muts_all.json --firmware-name firmware
```
* Where to find the test list? Tests are defined in the file ```tests.py``` in the ```mbed/workspace_tools/``` directory. ```singletest.py``` uses the test metadata in ```tests.py``` to resolve library dependencies and build tests for the proper platforms and peripherals. The option ```-R``` can be used to get test names, direct paths and test configuration (a sketch of a single entry follows the listing below).
```
$ python singletest.py -R
+-------------+-----------+---------------------------------------+--------------+-------------------+----------+--------------------------------------------------------+
| id | automated | description | peripherals | host_test | duration | source_dir |
+-------------+-----------+---------------------------------------+--------------+-------------------+----------+--------------------------------------------------------+
| MBED_1 | False | I2C SRF08 | SRF08 | host_test | 10 | C:\Work\mbed\libraries\tests\mbed\i2c_SRF08 |
| MBED_10 | True | Hello World | - | host_test | 10 | C:\Work\mbed\libraries\tests\mbed\hello |
| MBED_11 | True | Ticker Int | - | host_test | 20 | C:\Work\mbed\libraries\tests\mbed\ticker |
| MBED_12 | True | C++ | - | host_test | 10 | C:\Work\mbed\libraries\tests\mbed\cpp |
| MBED_13 | False | Heap & Stack | - | host_test | 10 | C:\Work\mbed\libraries\tests\mbed\heap_and_stack |
| MBED_14 | False | Serial Interrupt | - | host_test | 10 | C:\Work\mbed\libraries\tests\mbed\serial_interrupt |
| MBED_15 | False | RPC | - | host_test | 10 | C:\Work\mbed\libraries\tests\mbed\rpc |
| MBED_16 | True | RTC | - | host_test | 15 | C:\Work\mbed\libraries\tests\mbed\rtc |
| MBED_17 | False | Serial Interrupt 2 | - | host_test | 10 | C:\Work\mbed\libraries\tests\mbed\serial_interrupt_2 |
| MBED_18 | False | Local FS Directory | - | host_test | 10 | C:\Work\mbed\libraries\tests\mbed\dir |
...
```
Note: you can filter tests by the ```id``` column; just use the ```-f``` option and give a test name or regular expression:
```
$ python singletest.py -R -f RTOS
+--------------+-----------+-------------------------+-------------+-----------+----------+---------------------------------------------------+
| id | automated | description | peripherals | host_test | duration | source_dir |
+--------------+-----------+-------------------------+-------------+-----------+----------+---------------------------------------------------+
| CMSIS_RTOS_1 | False | Basic | - | host_test | 10 | C:\Work\mbed\libraries\tests\rtos\cmsis\basic |
| CMSIS_RTOS_2 | False | Mutex | - | host_test | 20 | C:\Work\mbed\libraries\tests\rtos\cmsis\mutex |
| CMSIS_RTOS_3 | False | Semaphore | - | host_test | 20 | C:\Work\mbed\libraries\tests\rtos\cmsis\semaphore |
| CMSIS_RTOS_4 | False | Signals | - | host_test | 10 | C:\Work\mbed\libraries\tests\rtos\cmsis\signals |
| CMSIS_RTOS_5 | False | Queue | - | host_test | 20 | C:\Work\mbed\libraries\tests\rtos\cmsis\queue |
| CMSIS_RTOS_6 | False | Mail | - | host_test | 20 | C:\Work\mbed\libraries\tests\rtos\cmsis\mail |
| CMSIS_RTOS_7 | False | Timer | - | host_test | 10 | C:\Work\mbed\libraries\tests\rtos\cmsis\timer |
| CMSIS_RTOS_8 | False | ISR | - | host_test | 10 | C:\Work\mbed\libraries\tests\rtos\cmsis\isr |
| RTOS_1 | True | Basic thread | - | host_test | 15 | C:\Work\mbed\libraries\tests\rtos\mbed\basic |
| RTOS_2 | True | Mutex resource lock | - | host_test | 20 | C:\Work\mbed\libraries\tests\rtos\mbed\mutex |
| RTOS_3 | True | Semaphore resource lock | - | host_test | 20 | C:\Work\mbed\libraries\tests\rtos\mbed\semaphore |
| RTOS_4 | True | Signals messaging | - | host_test | 10 | C:\Work\mbed\libraries\tests\rtos\mbed\signals |
| RTOS_5 | True | Queue messaging | - | host_test | 10 | C:\Work\mbed\libraries\tests\rtos\mbed\queue |
| RTOS_6 | True | Mail messaging | - | host_test | 10 | C:\Work\mbed\libraries\tests\rtos\mbed\mail |
| RTOS_7 | True | Timer | - | host_test | 15 | C:\Work\mbed\libraries\tests\rtos\mbed\timer |
| RTOS_8 | True | ISR (Queue) | - | host_test | 10 | C:\Work\mbed\libraries\tests\rtos\mbed\isr |
| RTOS_9 | True | SD File write-read | SD | host_test | 10 | C:\Work\mbed\libraries\tests\rtos\mbed\file |
+--------------+-----------+-------------------------+-------------+-----------+----------+---------------------------------------------------+
```
* Shuffle your tests. We strongly encourage you to shuffle the test order each time you execute the test suite script.
Rationale: it is possible that tests executed in one particular order will pass while in another order they will fail. To shuffle your test order please use option ```-u``` (or ```--shuffle```):
```
$ python singletest.py -i test_spec.json -M muts_all.json --shuffle
```
The above command will force the test script to randomly generate a shuffle seed and shuffle the test execution order. Note: the shuffle seed is a float in ```[0.0, 1.0)```.
You can always recreate a particular test order by forcing shuffle (```-u``` or ```--shuffle``` switch) and passing the shuffle seed back to the test suite using the ```--shuffle-seed``` switch:
```
$ python singletest.py -i test_spec.json -M muts_all.json --shuffle --shuffle-seed 0.4041028336
```
Note: You can also supply your own shuffle seed to drive particular test execution order scenarios. Just make sure the shuffle seed is a float in ```[0.0, 1.0)```.
You can find the test shuffle seed in the test summary:
```
...
| OK | LPC1768 | ARM | MBED_A9 | Serial Echo at 115200 | 2.84 | 10 | 1/1 |
+--------+---------+-----------+-----------+-----------------------------+--------------------+---------------+-------+
Result: 1 FAIL / 22 OK
Shuffle Seed: 0.4041028336
Completed in 234.85 sec
```
### Example of device configuration (one device connected to host computer)
This example will show you how to configure a single device, run general tests or only peripheral tests. We will also show some real test result examples.
1. We will test only one board: an STMicro Nucleo ```F334R8``` board connected to our PC (port ```COM46```, disk ```E:```).
2. We will also connect an EEPROM ```24LC256``` to the SDA, SCL pins of our Nucleo board and define the 24LC256 peripheral to make sure our test suite will run all available tests for the ```24LC256```.
Let's configure our one MUT and set uARM as the only compiler we will use to compile the mbed SDK and tests.
We also need to create two configuration files, ```muts_all.json``` and ```test_spec.json```, to pass our small testbed configuration to the test script.
muts_all.json:
```json
{
"1" : {
"mcu": "NUCLEO_F334R8",
"port":"COM46",
"disk":"E:\\",
"peripherals": ["24LC256"]
}
}
```
Note: By defining ```"peripherals": ["24LC256"]``` we are telling the test suite that this particular board has a 24LC256 EEPROM connected to it.
test_spec.json:
```json
{
"targets": {
"NUCLEO_F334R8" : ["uARM"]
}
}
```
Note:
* Please make sure the device is connected before starting the tests.
* Please make sure the files ```muts_all.json``` and ```test_spec.json``` are in the ```mbed/workspace_tools/``` directory.
Now you can call the test suite and execute the tests:
```
$ python singletest.py -i test_spec.json -M muts_all.json
...
Test summary:
+--------+---------------+-----------+-----------+---------------------------------+--------------------+---------------+
| Result | Target | Toolchain | Test ID | Test Description | Elapsed Time (sec) | Timeout (sec) |
+--------+---------------+-----------+-----------+---------------------------------+--------------------+---------------+
| OK | NUCLEO_F334R8 | uARM | MBED_A25 | I2C EEPROM line read/write test | 12.41 | 15 |
| OK | NUCLEO_F334R8 | uARM | MBED_A1 | Basic | 3.42 | 10 |
| OK | NUCLEO_F334R8 | uARM | EXAMPLE_1 | /dev/null | 3.42 | 10 |
| OK | NUCLEO_F334R8 | uARM | MBED_24 | Timeout Int us | 11.47 | 15 |
| OK | NUCLEO_F334R8 | uARM | MBED_25 | Time us | 11.43 | 15 |
| OK | NUCLEO_F334R8 | uARM | MBED_26 | Integer constant division | 3.37 | 10 |
| OK | NUCLEO_F334R8 | uARM | MBED_23 | Ticker Int us | 12.43 | 15 |
| OK | NUCLEO_F334R8 | uARM | MBED_A19 | I2C EEPROM read/write test | 11.42 | 15 |
| OK | NUCLEO_F334R8 | uARM | MBED_11 | Ticker Int | 12.43 | 20 |
| OK | NUCLEO_F334R8 | uARM | MBED_10 | Hello World | 2.42 | 10 |
| OK | NUCLEO_F334R8 | uARM | MBED_12 | C++ | 3.42 | 10 |
| OK | NUCLEO_F334R8 | uARM | MBED_16 | RTC | 4.76 | 15 |
| UNDEF | NUCLEO_F334R8 | uARM | MBED_2 | stdio | 20.42 | 20 |
| UNDEF | NUCLEO_F334R8 | uARM | MBED_A9 | Serial Echo at 115200 | 10.37 | 10 |
+--------+---------------+-----------+-----------+---------------------------------+--------------------+---------------+
Result: 2 UNDEF / 12 OK
Completed in 160 sec
```
If you want an additional test summary with results in separate columns, please use the option ```-t```.
```
$ python singletest.py -i test_spec.json -M muts_all.json -t
...
Test summary:
+---------------+-----------+---------------------------------+-------+
| Target | Test ID | Test Description | uARM |
+---------------+-----------+---------------------------------+-------+
| NUCLEO_F334R8 | EXAMPLE_1 | /dev/null | OK |
| NUCLEO_F334R8 | MBED_10 | Hello World | OK |
| NUCLEO_F334R8 | MBED_11 | Ticker Int | OK |
| NUCLEO_F334R8 | MBED_12 | C++ | OK |
| NUCLEO_F334R8 | MBED_16 | RTC | OK |
| NUCLEO_F334R8 | MBED_2 | stdio | UNDEF |
| NUCLEO_F334R8 | MBED_23 | Ticker Int us | OK |
| NUCLEO_F334R8 | MBED_24 | Timeout Int us | OK |
| NUCLEO_F334R8 | MBED_25 | Time us | OK |
| NUCLEO_F334R8 | MBED_26 | Integer constant division | OK |
| NUCLEO_F334R8 | MBED_A1 | Basic | OK |
| NUCLEO_F334R8 | MBED_A19 | I2C EEPROM read/write test | OK |
| NUCLEO_F334R8 | MBED_A25 | I2C EEPROM line read/write test | OK |
| NUCLEO_F334R8 | MBED_A9 | Serial Echo at 115200 | UNDEF |
+---------------+-----------+---------------------------------+-------+
```
----
Please do not forget you can combine several options to get the result you want. For example, you may want to repeat a few tests multiple times, shuffle the test execution order and select only the tests which are critical for you at this point. You can do it using the switches ```-n``` and ```--global-loops``` together with ```--loops``` and ```--shuffle```:
The command below will:
* Run only tests: ```RTOS_1```, ```RTOS_2```, ```RTOS_3```, ```MBED_10```, ```MBED_16```, ```MBED_11```.
* Shuffle the test execution order. Note: tests within loops will not be shuffled.
* Set the global loop count to 3 - each test will be repeated 3 times.
* Override the global loop count (set above to 3) and:
  * Force test RTOS_1 to execute 3 times.
  * Force test RTOS_2 to execute 4 times.
  * Force test RTOS_3 to execute 5 times.
  * Force test MBED_11 to execute 5 times.
```
$ python singletest.py -i test_spec.json -M muts_all.json -n RTOS_1,RTOS_2,RTOS_3,MBED_10,MBED_16,MBED_11 --shuffle --global-loops 3 --loops RTOS_1=3,RTOS_2=4,RTOS_3=5,MBED_11=5
```
# CppUTest unit test library support
## CppUTest in Mbed SDK testing introduction
[CppUTest](http://cpputest.github.io/) is a C/C++ based xUnit test framework for unit testing and for test-driving your code. It is written in C++ but is used in C and C++ projects, and is frequently used in embedded systems, although it works for any C/C++ project.
The mbed SDK test suite supports writing tests using CppUTest. All you need to do is provide the CppUTest sources and includes with the mbed SDK port. This is already done for you, so all that remains is to get the proper sources into your project directory.
CppUTest's core design principles are:
* Simple in design and simple in use.
* Portable to old and new platforms.
* Built with Test-driven Development in mind.
## Where to get more help about the CppUTest library and unit testing
* You can read the [CppUTest manual](http://cpputest.github.io/manual.html)
* [CppUTest forum](https://groups.google.com/forum/?fromgroups#!forum/cpputest)
* [CppUTest on GitHub](https://github.com/cpputest/cpputest)
* Finally, if unit testing is a new concept for you, you can get a grasp of it from the Wikipedia page on [unit testing](http://en.wikipedia.org/wiki/Unit_testing) and continue from there.
## How to add CppUTest to your current Mbed SDK installation
### Do I need CppUTest port for Mbed SDK?
Yes, you do. If you want to use CppUTest with the mbed SDK you need a CppUTest version with an ARMCC compiler port (only the ARM flavor for now) and an mbed SDK console port (if you want output on the serial port). All of this is already prepared by mbed engineers and you can get it, for example, here: http://mbed.org/users/rgrover1/code/CppUTest/
### Prerequisites
* Installed [git client](http://git-scm.com/downloads/).
* Installed [Mercurial client](http://mercurial.selenic.com/).
### How / where to install
We want to create a directory structure similar to the one below:
```
\your_project_directory
├───cpputest
│ ├───include
│ └───src
└───mbed
├───libraries
├───travis
└───workspace_tools
```
Please go to the directory with your project. For example, it could be ```c:\Projects\Project```.
```
$ cd c:\Projects\Project
```
If your project directory already includes your mbed SDK repository, just execute the command below (using the Mercurial console client). It should download CppUTest with the mbed SDK port.
```
$ hg clone https://mbed.org/users/rgrover1/code/cpputest/
```
You should see something like this after you execute Mercurial clone command:
```
$ hg clone https://mbed.org/users/rgrover1/code/cpputest/
destination directory: cpputest
requesting all changes
adding changesets
adding manifests
adding file changes
added 3 changesets with 69 changes to 42 files
updating to branch default
41 files updated, 0 files merged, 0 files removed, 0 files unresolved
```
Confirm your project structure. It should look more or less like this:
```
$ ls
cpputest mbed
```
From now on CppUTest is in the correct path. Each time you compile unit tests, the build script will look for the CppUTest library in the same directory where the mbed library is.
## New off-line mbed SDK project with CppUTest support
If you are creating a new mbed SDK project and you want to use CppUTest with it, you need to download both the mbed SDK and CppUTest with the mbed port to the same directory. You can do it like this:
```
$ cd c:\Projects\Project
$ git clone https://github.com/mbedmicro/mbed.git
$ hg clone https://mbed.org/users/rgrover1/code/cpputest/
```
After the above three steps you should have the proper directory structure. All you need to do now is configure your ```private_settings.py``` in the ```mbed/workspace_tools/``` directory. Please refer to the mbed SDK build script documentation for details.
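A minimal ```private_settings.py``` sketch might look like the one below. Note that the paths are placeholders for your local toolchain installations, and the exact variable names to override may differ between SDK versions - check ```mbed/workspace_tools/settings.py``` for the names your copy actually uses.
```python
# private_settings.py - local overrides for workspace_tools/settings.py
# NOTE: variable names and paths below are examples only; verify them against
# settings.py in your mbed copy before relying on them.
ARM_PATH = "C:/Keil_v5/ARM/ARMCC"          # ARM / uARM toolchain root (placeholder path)
GCC_ARM_PATH = "C:/gcc-arm-none-eabi/bin"  # GCC ARM Embedded binaries (placeholder path)
```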
## CppUTest with mbed port
To make sure you actually have the CppUTest library with the mbed SDK port, you can go to the CppUTest ```armcc``` platform directory:
```
$ cd c:/Projects/Project/cpputest/src/Platforms/armcc/
```
And open the file ```UtestPlatform.cpp```.
You should find the part of the code responsible for porting the console to the default serial port of the mbed device:
```c++
#include "Serial.h"
using namespace mbed;
int PlatformSpecificPutchar(int c)
{
/* Please modify this block for test results to be reported as
* console output. The following is a sample implementation using a
* Serial object connected to the console. */
#define NEED_TEST_REPORT_AS_CONSOLE_OUTPUT 1
#if NEED_TEST_REPORT_AS_CONSOLE_OUTPUT
extern Serial console;
#define NEED_LINE_FEED_IN_ADDITION_TO_NEWLINE 1
#if NEED_LINE_FEED_IN_ADDITION_TO_NEWLINE
/* CppUTest emits \n line terminators in its reports; some terminals
* need the linefeed (\r) in addition. */
if (c == '\n') {
console.putc('\r');
}
#endif /* #if NEED_LINE_FEED_IN_ADDITION_TO_NEWLINE */
return (console.putc(c));
#else /* NEED_TEST_REPORT_AS_CONSOLE_OUTPUT */
return (0);
#endif /* NEED_TEST_REPORT_AS_CONSOLE_OUTPUT */
}
```
You can find the CppUTest test runner's main function in the mbed sources: ```c:/Projects/Project/mbed/libraries/tests/utest/testrunner/testrunner.cpp```. The test runner code (in ```testrunner.cpp```) only defines the console object and executes all unit tests:
```c++
#include "CommandLineTestRunner.h"
#include <stdio.h>
#include "mbed.h"
#include "testrunner.h"
#include "test_env.h"
/**
Object 'mbed_cpputest_console' is used to show prints on console.
It is declared in \cpputest\src\Platforms\armcc\UtestPlatform.cpp
*/
Serial mbed_cpputest_console(STDIO_UART_TX, STDIO_UART_RX);
int main(int ac, char** av) {
MBED_HOSTTEST_TIMEOUT(20);
MBED_HOSTTEST_SELECT(default_auto);
MBED_HOSTTEST_DESCRIPTION(Unit test);
MBED_HOSTTEST_START("UT");
unsigned failureCount = 0;
{
// Some compilers may not pass ac, av so we need to supply them ourselves
int ac = 2;
char* av[] = {__FILE__, "-v"};
failureCount = CommandLineTestRunner::RunAllTests(ac, av);
}
MBED_HOSTTEST_RESULT(failureCount == 0);
return failureCount;
}
```
## Unit test location
Unit test source code is located in the following directory: ```c:/Projects/Project/mbed/libraries/tests/utest/```
Each sub-directory except ```testrunner``` contains compilable unit test source files with test groups and test cases. You can see the ```utest``` structure below. Please note this is just an example and in the future this directory will contain many sub-directories with unit tests.
```
$ c:\Projects\Project\mbed\libraries\tests\utest> tree
utest
├───basic
├───semihost_fs
└───testrunner
```
## Define unit tests in mbed SDK test suite structure
All tests defined in the test suite are described in the ```mbed/workspace_tools/tests.py``` file. This file stores the data structure ```TESTS```, which is a list of simple structures describing each test. Below you can find an example entry from the ```TESTS``` structure which configures one of the unit tests.
```
.
.
.
{
"id": "UT_2", "description": "Semihost file system",
"source_dir": join(TEST_DIR, "utest", "file"),
"dependencies": [MBED_LIBRARIES, TEST_MBED_LIB, CPPUTEST_LIBRARY],
"automated": False,
"mcu": ["LPC1768", "LPC2368", "LPC11U24"]
},
.
.
.
```
Note: In the dependencies section we've added the library ```CPPUTEST_LIBRARY```, which points the build script to the CppUTest library with the mbed port. This is a must for unit tests to be compiled with the CppUTest library.
### Tests are now divided into two types:
#### 'Hello world' tests
The first type of test cases we call 'hello world' tests. They do not depend on the CppUTest library and are monolithic programs, usually composed of one main function. You can find these tests in the example directories below:
* ```mbed/libraries/tests/mbed/```
* ```mbed/libraries/tests/net/```
* ```mbed/libraries/tests/rtos/```
* ```mbed/libraries/tests/usb/```
Usually hello world test cases use the ```test_env.cpp``` and ```test_env.h``` files, which implement a simple test framework used to communicate with the host test and help the test framework instrument your tests.
Below you can see a listing of the ```test_env.h``` file, which contains simple macro definitions used to communicate (via serial port printouts) between the test case (on hardware) and the host test script (on the host computer).
Each test case should print basic information on the console, such as:
* Default test case timeout.
* Which host test should be used to supervise test case execution.
* Test description and test case ID (short identifier).
```c++
.
.
.
// Test result related notification functions
void notify_start();
void notify_completion(bool success);
bool notify_completion_str(bool success, char* buffer);
void notify_performance_coefficient(const char* measurement_name, const int value);
void notify_performance_coefficient(const char* measurement_name, const unsigned int value);
void notify_performance_coefficient(const char* measurement_name, const double value);
// Host test auto-detection API
void notify_host_test_name(const char *host_test);
void notify_timeout(int timeout);
void notify_test_id(const char *test_id);
void notify_test_description(const char *description);
// Host test auto-detection API
#define MBED_HOSTTEST_START(TESTID) notify_test_id(TESTID); notify_start()
#define MBED_HOSTTEST_SELECT(NAME) notify_host_test_name(#NAME)
#define MBED_HOSTTEST_TIMEOUT(SECONDS) notify_timeout(SECONDS)
#define MBED_HOSTTEST_DESCRIPTION(DESC) notify_test_description(#DESC)
#define MBED_HOSTTEST_RESULT(RESULT) notify_completion(RESULT)
/**
Test auto-detection preamble example:
main() {
MBED_HOSTTEST_TIMEOUT(10);
MBED_HOSTTEST_SELECT( host_test );
MBED_HOSTTEST_DESCRIPTION(Hello World);
MBED_HOSTTEST_START("MBED_10");
// Proper 'host_test.py' should take over supervising of this test
// Test code
bool result = ...;
MBED_HOSTTEST_RESULT(result);
}
*/
.
.
.
```
Example of 'hello world' test:
```c++
#include "mbed.h"
#include "test_env.h"
#define CUSTOM_TIME 1256729737
int main() {
MBED_HOSTTEST_TIMEOUT(20);
MBED_HOSTTEST_SELECT(rtc_auto);
MBED_HOSTTEST_DESCRIPTION(RTC);
MBED_HOSTTEST_START("MBED_16");
char buffer[32] = {0};
set_time(CUSTOM_TIME); // Set RTC time to Wed, 28 Oct 2009 11:35:37
while(1) {
time_t seconds = time(NULL);
strftime(buffer, 32, "%Y-%m-%d %H:%M:%S %p", localtime(&seconds));
printf("MBED: [%ld] [%s]\r\n", seconds, buffer);
wait(1);
}
}
```
#### 'Unit test' test cases
The second group of tests are unit tests. They use the CppUTest library and require you to write ```TEST_GROUP```s and ```TEST```s in your test files. The test suite will add the test runner sources to your test automatically so you can concentrate on writing tests.
Example of unit test:
```c++
#include "TestHarness.h"
#include <utility>
#include "mbed.h"
TEST_GROUP(BusOut_mask)
{
};
TEST(BusOut_mask, led_1_2_3)
{
BusOut bus_data(LED1, LED2, LED3);
CHECK_EQUAL(0x07, bus_data.mask());
}
TEST(BusOut_mask, led_nc_nc_nc_nc)
{
BusOut bus_data(NC, NC, NC, NC);
CHECK_EQUAL(0x00, bus_data.mask());
}
TEST(BusOut_mask, led_1_2_3_nc_nc)
{
BusOut bus_data(LED1, LED2, LED3, NC, NC);
CHECK_EQUAL(0x07, bus_data.mask());
}
TEST(BusOut_mask, led_1_nc_2_nc_nc_3)
{
BusOut bus_data(LED1, NC, LED2, NC, NC, LED3);
CHECK_EQUAL(0x25, bus_data.mask());
}
///////////////////////////////////////////////////////////////////////////////
#ifdef MBED_OPERATORS
TEST_GROUP(BusOut_digitalout_write)
{
};
TEST(BusOut_digitalout_write, led_nc)
{
BusOut bus_data(NC);
CHECK_EQUAL(false, bus_data[0].is_connected())
}
TEST(BusOut_digitalout_write, led_1_2_3)
{
BusOut bus_data(LED1, LED2, LED3);
bus_data[0].write(1);
bus_data[1].write(1);
bus_data[2].write(1);
CHECK(bus_data[0].read());
CHECK(bus_data[1].read());
CHECK(bus_data[2].read());
}
TEST(BusOut_digitalout_write, led_1_2_3_nc_nc)
{
BusOut bus_data(LED1, LED2, LED3, NC, NC);
bus_data[0].write(0);
bus_data[1].write(0);
bus_data[2].write(0);
CHECK(bus_data[0].read() == 0);
CHECK(bus_data[1].read() == 0);
CHECK(bus_data[2].read() == 0);
}
TEST(BusOut_digitalout_write, led_1_nc_2_nc_nc_3)
{
BusOut bus_data(LED1, NC, LED2, NC, NC, LED3);
bus_data[0].write(1);
bus_data[2].write(0);
bus_data[5].write(0);
CHECK(bus_data[0].read());
CHECK(bus_data[2].read() == 0);
CHECK(bus_data[5].read() == 0);
}
#endif
```
## Example
In the example below we will run the two example unit tests that are currently available. Tests ```UT_1``` and ```UT_2``` are unit tests used, for now, only to check whether the mbed SDK works with the CppUTest library and whether tests are being executed. In the future the number of unit tests will increase; also, nothing should stop you from writing and executing your own unit tests!
### Example configuration
By default unit tests ```UT_1``` and ```UT_2``` are not automated - the test suite will simply ignore them. Also, we do not want to create a dependency on the CppUTest library each time someone executes the automation.
Note: To compile the ```UT_1``` and ```UT_2``` tests, the CppUTest library installation described above is needed, and not all users wish to add unit test libraries to their project. Also, users new to mbed may find it difficult. This is why unit testing is an extra feature, active only after you deliberately install and enable the needed components.
The snippet below shows how to modify the 'automated' flag so the test suite will consider unit tests ```UT_1``` and ```UT_2``` as part of the "automated test portfolio". Just change the flag 'automated' from ```False``` to ```True```.
```tests.py``` listing related to ```UT_1``` and ```UT_2```:
```python
.
.
.
# CPPUTEST Library provides Unit testing Framework
#
# To write TESTs and TEST_GROUPs please add CPPUTEST_LIBRARY to 'dependencies'
#
# This will also include:
# 1. test runner - main function with call to CommandLineTestRunner::RunAllTests(ac, av)
# 2. Serial console object to print test result on serial port console
#
# Unit testing with cpputest library
{
"id": "UT_1", "description": "Basic",
"source_dir": join(TEST_DIR, "utest", "basic"),
"dependencies": [MBED_LIBRARIES, TEST_MBED_LIB, CPPUTEST_LIBRARY],
"automated": True,
},
{
"id": "UT_2", "description": "Semihost file system",
"source_dir": join(TEST_DIR, "utest", "semihost_fs"),
"dependencies": [MBED_LIBRARIES, TEST_MBED_LIB, CPPUTEST_LIBRARY],
"automated": True,
"mcu": ["LPC1768", "LPC2368", "LPC11U24"]
},
.
.
.
```
### Execute tests
In this test I will use the common [LPC1768](http://developer.mbed.org/platforms/mbed-LPC1768/) mbed-enabled board because unit test ```UT_2``` checks semihost functionality, which is available on this board and a handful of others.
Configure your ```test_spec.json``` and ```muts_all.json``` files (refer to the test suite build script and automation description) and set the mbed disk and serial port.
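For reference, a matching pair of configuration files for this run might look like the sketch below. The port ```COM77``` and disk ```E:\``` are simply the values visible in the output that follows; adjust them to your own setup.
muts_all.json:
```json
{
    "1" : {
        "mcu": "LPC1768",
        "port": "COM77",
        "disk": "E:\\"
    }
}
```
test_spec.json:
```json
{
    "targets": {
        "LPC1768" : ["ARM"]
    }
}
```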
```
$ singletest.py -i test_spec.json -M muts_all.json -n UT_1,UT_2 -V
Building library CMSIS (LPC1768, ARM)
Building library MBED (LPC1768, ARM)
Building library CPPUTEST (LPC1768, ARM)
Building project BASIC (LPC1768, ARM)
Executing 'python host_test.py -p COM77 -d E:\ -t 10'
Test::Output::Start
Host test instrumentation on port: "COM77" and disk: "E:\"
TEST(FirstTestGroup, FirstTest) - 0 ms
OK (1 tests, 1 ran, 3 checks, 0 ignored, 0 filtered out, 3 ms)
{{success}}
{{end}}
Test::Output::Finish
TargetTest::LPC1768::ARM::UT_1::Basic [OK] in 2.43 of 10 sec
Building library CPPUTEST (LPC1768, ARM)
Building project SEMIHOST_FS (LPC1768, ARM)
Executing 'python host_test.py -p COM77 -d E:\ -t 10'
Test::Output::Start
Host test instrumentation on port: "COM77" and disk: "E:\"
TEST(FirstTestGroup, FirstTest) - 9 ms
OK (1 tests, 1 ran, 10 checks, 0 ignored, 0 filtered out, 10 ms)
{{success}}
{{end}}
Test::Output::Finish
TargetTest::LPC1768::ARM::UT_2::Semihost file system [OK] in 2.43 of 10 sec
Test summary:
+--------+---------+-----------+---------+----------------------+--------------------+---------------+-------+
| Result | Target | Toolchain | Test ID | Test Description | Elapsed Time (sec) | Timeout (sec) | Loops |
+--------+---------+-----------+---------+----------------------+--------------------+---------------+-------+
| OK | LPC1768 | ARM | UT_1 | Basic | 2.43 | 10 | 1/1 |
| OK | LPC1768 | ARM | UT_2 | Semihost file system | 2.43 | 10 | 1/1 |
+--------+---------+-----------+---------+----------------------+--------------------+---------------+-------+
Result: 2 OK
Completed in 12.02 sec
```
You can compile the unit tests with any of the supported compilers; below are just a few examples with working results:
```
Test summary:
+--------+---------+-----------+---------+----------------------+--------------------+---------------+-------+
| Result | Target | Toolchain | Test ID | Test Description | Elapsed Time (sec) | Timeout (sec) | Loops |
+--------+---------+-----------+---------+----------------------+--------------------+---------------+-------+
| OK | LPC1768 | ARM | UT_1 | Basic | 2.43 | 10 | 1/1 |
| OK | LPC1768 | ARM | UT_2 | Semihost file system | 2.43 | 10 | 1/1 |
| OK | LPC1768 | uARM | UT_1 | Basic | 2.43 | 10 | 1/1 |
| OK | LPC1768 | uARM | UT_2 | Semihost file system | 2.43 | 10 | 1/1 |
| OK | LPC1768 | GCC_ARM | UT_1 | Basic | 2.43 | 10 | 1/1 |
| OK | LPC1768 | GCC_ARM | UT_2 | Semihost file system | 2.43 | 10 | 1/1 |
| OK | LPC1768 | GCC_CR | UT_1 | Basic | 3.44 | 10 | 1/1 |
| OK | LPC1768 | GCC_CR | UT_2 | Semihost file system | 3.43 | 10 | 1/1 |
+--------+---------+-----------+---------+----------------------+--------------------+---------------+-------+
Result: 8 OK
Completed in 55.85 sec
```
@ -187,7 +187,7 @@ bool USBDevice::controlOut(void)
/* Check we should be transferring data OUT */
if (transfer.direction != HOST_TO_DEVICE)
{
#if defined(TARGET_KL25Z) | defined(TARGET_KL43Z) | defined(TARGET_KL46Z) | defined(TARGET_K20D5M) | defined(TARGET_K64F) | defined(TARGET_K22F)
#if defined(TARGET_KL25Z) | defined(TARGET_KL43Z) | defined(TARGET_KL46Z) | defined(TARGET_K20D5M) | defined(TARGET_K64F) | defined(TARGET_K22F) | defined(TARGET_TEENSY3_1)
/*
* We seem to have a pending device-to-host transfer. The host must have
* sent a new control request without waiting for us to finish processing
@ -41,7 +41,7 @@ typedef enum {
#include "USBEndpoints_LPC17_LPC23.h"
#elif defined(TARGET_LPC11UXX) || defined(TARGET_LPC1347) || defined (TARGET_LPC11U6X) || defined (TARGET_LPC1549)
#include "USBEndpoints_LPC11U.h"
#elif defined(TARGET_KL25Z) | defined(TARGET_KL43Z) | defined(TARGET_KL46Z) | defined(TARGET_K20D50M) | defined(TARGET_K64F) | defined(TARGET_K22F)
#elif defined(TARGET_KL25Z) | defined(TARGET_KL43Z) | defined(TARGET_KL46Z) | defined(TARGET_K20D50M) | defined(TARGET_K64F) | defined(TARGET_K22F) | defined(TARGET_TEENSY3_1)
#include "USBEndpoints_KL25Z.h"
#elif defined (TARGET_STM32F4)
#include "USBEndpoints_STM32F4.h"
@ -16,7 +16,7 @@
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#if defined(TARGET_KL25Z) | defined(TARGET_KL43Z) | defined(TARGET_KL46Z) | defined(TARGET_K20D50M) | defined(TARGET_K64F) | defined(TARGET_K22F)
#if defined(TARGET_KL25Z) | defined(TARGET_KL43Z) | defined(TARGET_KL46Z) | defined(TARGET_K20D50M) | defined(TARGET_K64F) | defined(TARGET_K22F) | defined(TARGET_TEENSY3_1)
#include "USBHAL.h"
@ -47,6 +47,7 @@
0 ... Multipurpose Clock Generator (MCG) in FLL Engaged Internal (FEI) mode
Reference clock source for MCG module is the slow internal clock source 32.768kHz
Core clock = 41.94MHz, BusClock = 41.94MHz
This works on Teensy3.1
1 ... Multipurpose Clock Generator (MCG) in PLL Engaged External (PEE) mode
Reference clock source for MCG module is an external crystal 8MHz
Core clock = 48MHz, BusClock = 48MHz
@ -56,7 +57,7 @@
3 ... Multipurpose Clock Generator (MCG) in PLL Engaged External (PEE) mode
Reference clock source for MCG module is an external crystal 16MHz
Core clock = 72MHz, BusClock = 48MHz
This is the Teensy3.1 72Mhz set up
This is the default Teensy3.1 72Mhz set up
*/
/*----------------------------------------------------------------------------
@ -98,7 +99,6 @@ uint32_t SystemCoreClock = DEFAULT_SYSTEM_CLOCK;
/* ----------------------------------------------------------------------------
-- SystemInit()
---------------------------------------------------------------------------- */
void SystemInit (void) {
#if (DISABLE_WDOG)
/* Disable the WDOG module */
@ -106,143 +106,120 @@ void SystemInit (void) {
WDOG->UNLOCK = (uint16_t)0xC520u; /* Key 1 */
/* WDOG_UNLOCK : WDOGUNLOCK=0xD928 */
WDOG->UNLOCK = (uint16_t)0xD928u; /* Key 2 */
/* WDOG_STCTRLH: ??=0,DISTESTWDOG=0,BYTESEL=0,TESTSEL=0,TESTWDOG=0,??=0,STNDBYEN=1,WAITEN=1,STOPEN=1,DBGEN=0,ALLOWUPDATE=1,WINEN=0,IRQRSTEN=0,CLKSRC=1,WDOGEN=0 */
/* WDOG_STCTRLH: DISTESTWDOG=0,BYTESEL=0,TESTSEL=0,TESTWDOG=0,STNDBYEN=1,WAITEN=1,STOPEN=1,DBGEN=0,ALLOWUPDATE=1,WINEN=0,IRQRSTEN=0,CLKSRC=1,WDOGEN=0 */
WDOG->STCTRLH = (uint16_t)0x01D2u;
#endif /* (DISABLE_WDOG) */
#if (CLOCK_SETUP == 0)
/* SIM->CLKDIV1: OUTDIV1=0,OUTDIV2=0,OUTDIV3=1,OUTDIV4=1,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0 */
SIM->CLKDIV1 = (uint32_t)0x00110000u; /* Update system prescalers */
/* SIM->CLKDIV1: OUTDIV1=0,OUTDIV2=0,OUTDIV4=1 Set Prescalers 41.94MHz cpu, 41.94MHz system, 20.97MHz flash*/
SIM->CLKDIV1 = SIM_CLKDIV1_OUTDIV4(1);
/* Switch to FEI Mode */
/* MCG->C1: CLKS=0,FRDIV=0,IREFS=1,IRCLKEN=1,IREFSTEN=0 */
MCG->C1 = (uint8_t)0x06u;
/* MCG->C2: ??=0,??=0,RANGE0=0,HGO=0,EREFS=0,LP=0,IRCS=0 */
MCG->C1 = MCG_C1_IREFS_MASK | MCG_C1_IRCLKEN_MASK;
/* MCG->C2: LOCKRE0=0,RANGE0=0,HGO=0,EREFS=0,LP=0,IRCS=0 */
MCG->C2 = (uint8_t)0x00u;
/* MCG_C4: DMX32=0,DRST_DRS=1 */
MCG->C4 = (uint8_t)((MCG->C4 & (uint8_t)~(uint8_t)0xC0u) | (uint8_t)0x20u);
/* MCG->C5: ??=0,PLLCLKEN=0,PLLSTEN=0,PRDIV0=0 */
/* MCG->C5: PLLCLKEN=0,PLLSTEN=0,PRDIV0=0 */
MCG->C5 = (uint8_t)0x00u;
/* MCG->C6: LOLIE=0,PLLS=0,CME=0,VDIV0=0 */
MCG->C6 = (uint8_t)0x00u;
while((MCG->S & MCG_S_IREFST_MASK) == 0u) { /* Check that the source of the FLL reference clock is the internal reference clock. */
}
while((MCG->S & 0x0Cu) != 0x00u) { /* Wait until output of the FLL is selected */
}
while((MCG->S & MCG_S_IREFST_MASK) == 0u) { } /* Check that the source of the FLL reference clock is the internal reference clock. */
while((MCG->S & 0x0Cu) != 0x00u) { } /* Wait until output of the FLL is selected */
#elif (CLOCK_SETUP == 1)
/* SIM->CLKDIV1: OUTDIV1=0,OUTDIV2=0,OUTDIV3=1,OUTDIV4=1,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0 */
SIM->CLKDIV1 = (uint32_t)0x00110000u; /* Update system prescalers */
/* SIM->CLKDIV1: OUTDIV1=0,OUTDIV2=0,OUTDIV4=1 Set Prescalers 48MHz cpu, 48MHz system, 24MHz flash*/
SIM->CLKDIV1 = SIM_CLKDIV1_OUTDIV4(1);
/* Switch to FBE Mode */
/* OSC0->CR: ERCLKEN=0,??=0,EREFSTEN=0,??=0,SC2P=0,SC4P=0,SC8P=0,SC16P=0 */
/* OSC0->CR: ERCLKEN=0,EREFSTEN=0,SC2P=0,SC4P=0,SC8P=0,SC16P=0 */
OSC0->CR = (uint8_t)0x00u;
/* MCG->C7: OSCSEL=0 */
MCG->C7 = (uint8_t)0x00u;
/* MCG->C2: ??=0,??=0,RANGE0=2,HGO=0,EREFS=1,LP=0,IRCS=0 */
MCG->C2 = (uint8_t)0x24u;
/* MCG->C2: LOCKRE0=0,RANGE0=2,HGO=0,EREFS=1,LP=0,IRCS=0 */
MCG->C2 = MCG_C2_RANGE0(2);
/* MCG->C1: CLKS=2,FRDIV=3,IREFS=0,IRCLKEN=1,IREFSTEN=0 */
MCG->C1 = (uint8_t)0x9Au;
MCG->C1 = MCG_C1_CLKS(2) | MCG_C1_FRDIV(3) | MCG_C1_IRCLKEN_MASK;
/* MCG->C4: DMX32=0,DRST_DRS=0 */
MCG->C4 &= (uint8_t)~(uint8_t)0xE0u;
/* MCG->C5: ??=0,PLLCLKEN=0,PLLSTEN=0,PRDIV0=3 */
MCG->C5 = (uint8_t)0x03u;
/* MCG->C5: PLLCLKEN=0,PLLSTEN=0,PRDIV0=3 */
MCG->C5 = MCG_C5_PRDIV0(3);
/* MCG->C6: LOLIE=0,PLLS=0,CME=0,VDIV0=0 */
MCG->C6 = (uint8_t)0x00u;
while((MCG->S & MCG_S_OSCINIT0_MASK) == 0u) { /* Check that the oscillator is running */
}
#if 0 /* ARM: THIS CHECK IS REMOVED DUE TO BUG WITH SLOW IRC IN REV. 1.0 */
while((MCG->S & MCG_S_IREFST_MASK) != 0u) { /* Check that the source of the FLL reference clock is the external reference clock. */
}
#endif
while((MCG->S & 0x0Cu) != 0x08u) { /* Wait until external reference clock is selected as MCG output */
}
while((MCG->S & MCG_S_OSCINIT0_MASK) == 0u) { } /* Check that the oscillator is running */
while((MCG->S & 0x0Cu) != 0x08u) { } /* Wait until external reference clock is selected as MCG output */
/* Switch to PBE Mode */
/* MCG_C5: ??=0,PLLCLKEN=0,PLLSTEN=0,PRDIV0=3 */
MCG->C5 = (uint8_t)0x03u;
/* MCG_C5: PLLCLKEN=0,PLLSTEN=0,PRDIV0=3 */
MCG->C5 = MCG_C5_PRDIV0(3);
/* MCG->C6: LOLIE=0,PLLS=1,CME=0,VDIV0=0 */
MCG->C6 = (uint8_t)0x40u;
while((MCG->S & MCG_S_PLLST_MASK) == 0u) { /* Wait until the source of the PLLS clock has switched to the PLL */
}
while((MCG->S & MCG_S_LOCK0_MASK) == 0u) { /* Wait until locked */
}
MCG->C6 = MCG_C6_PLLS_MASK;
while((MCG->S & MCG_S_PLLST_MASK) == 0u) { } /* Wait until the source of the PLLS clock has switched to the PLL */
while((MCG->S & MCG_S_LOCK0_MASK) == 0u) { } /* Wait until locked */
/* Switch to PEE Mode */
/* MCG->C1: CLKS=0,FRDIV=3,IREFS=0,IRCLKEN=1,IREFSTEN=0 */
MCG->C1 = (uint8_t)0x1Au;
while((MCG->S & 0x0Cu) != 0x0Cu) { /* Wait until output of the PLL is selected */
}
while((MCG->S & MCG_S_LOCK0_MASK) == 0u) { /* Wait until locked */
}
MCG->C1 = MCG_C1_FRDIV(3) | MCG_C1_IRCLKEN_MASK;
while((MCG->S & 0x0Cu) != 0x0Cu) { } /* Wait until output of the PLL is selected */
while((MCG->S & MCG_S_LOCK0_MASK) == 0u) { } /* Wait until locked */
#elif (CLOCK_SETUP == 2)
/* SIM_CLKDIV1: OUTDIV1=0,OUTDIV2=0,OUTDIV3=1,OUTDIV4=1,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0 */
SIM->CLKDIV1 = (uint32_t)0x00110000u; /* Update system prescalers */
/* SIM->CLKDIV1: OUTDIV1=0,OUTDIV2=0,OUTDIV4=1 Set Prescalers 8MHz cpu, 8MHz system, 8MHz flash*/
SIM->CLKDIV1 = SIM_CLKDIV1_OUTDIV4(1);
/* Switch to FBE Mode */
/* OSC0->CR: ERCLKEN=0,??=0,EREFSTEN=0,??=0,SC2P=0,SC4P=0,SC8P=0,SC16P=0 */
/* OSC0->CR: ERCLKEN=0,EREFSTEN=0,SC2P=0,SC4P=0,SC8P=0,SC16P=0 */
OSC0->CR = (uint8_t)0x00u;
/* MCG->C7: OSCSEL=0 */
MCG->C7 = (uint8_t)0x00u;
/* MCG->C2: ??=0,??=0,RANGE0=2,HGO=0,EREFS=1,LP=0,IRCS=0 */
MCG->C2 = (uint8_t)0x24u;
/* MCG->C2: LOCKRE0=0,RANGE0=2,HGO=0,EREFS=1,LP=0,IRCS=0 */
MCG->C2 = MCG_C2_RANGE0(2) | MCG_C2_EREFS0_MASK;
/* MCG->C1: CLKS=2,FRDIV=3,IREFS=0,IRCLKEN=1,IREFSTEN=0 */
MCG->C1 = (uint8_t)0x9Au;
MCG->C1 = MCG_C1_CLKS(2) | MCG_C1_FRDIV(3) | MCG_C1_IRCLKEN_MASK;
/* MCG->C4: DMX32=0,DRST_DRS=0 */
MCG->C4 &= (uint8_t)~(uint8_t)0xE0u;
/* MCG->C5: ??=0,PLLCLKEN=0,PLLSTEN=0,PRDIV0=0 */
/* MCG->C5: PLLCLKEN=0,PLLSTEN=0,PRDIV0=0 */
MCG->C5 = (uint8_t)0x00u;
/* MCG->C6: LOLIE=0,PLLS=0,CME=0,VDIV0=0 */
MCG->C6 = (uint8_t)0x00u;
while((MCG->S & MCG_S_OSCINIT0_MASK) == 0u) { /* Check that the oscillator is running */
}
#if 0 /* ARM: THIS CHECK IS REMOVED DUE TO BUG WITH SLOW IRC IN REV. 1.0 */
while((MCG->S & MCG_S_IREFST_MASK) != 0u) { /* Check that the source of the FLL reference clock is the external reference clock. */
}
#endif
while((MCG->S & 0x0CU) != 0x08u) { /* Wait until external reference clock is selected as MCG output */
}
while((MCG->S & MCG_S_OSCINIT0_MASK) == 0u) { } /* Check that the oscillator is running */
while((MCG->S & 0x0CU) != 0x08u) { } /* Wait until external reference clock is selected as MCG output */
/* Switch to BLPE Mode */
/* MCG->C2: ??=0,??=0,RANGE0=2,HGO=0,EREFS=1,LP=0,IRCS=0 */
MCG->C2 = (uint8_t)0x24u;
/* MCG->C2: LOCKRE0=0,RANGE0=2,HGO=0,EREFS=1,LP=0,IRCS=0 */
MCG->C2 = MCG_C2_RANGE0(2) | MCG_C2_EREFS0_MASK;
#elif (CLOCK_SETUP == 3)
/* SIM->CLKDIV1: OUTDIV1=0,OUTDIV2=0,OUTDIV3=1,OUTDIV4=1,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0,??=0 */
SIM->CLKDIV1 = (uint32_t)0x00110000u; /* Update system prescalers */
/* SIM->CLKDIV1: OUTDIV1=0,OUTDIV2=0,OUTDIV4=1 Set Prescalers 72MHz cpu, 72MHz system, 36MHz flash*/
SIM->CLKDIV1 = SIM_CLKDIV1_OUTDIV4(1);
/* SIM->CLKDIV2: USBDIV=2,USBFRAC=1 Divide 72MHz system clock for USB 48MHz */
SIM->CLKDIV2 = SIM_CLKDIV2_USBDIV(2) | SIM_CLKDIV2_USBFRAC_MASK;
/* OSC0->CR: ERCLKEN=0,EREFSTEN=0,SC2P=1,SC4P=0,SC8P=1,SC16P=0 10pF loading capacitors for 16MHz system oscillator*/
OSC0->CR = OSC_CR_SC8P_MASK | OSC_CR_SC2P_MASK;
/* Switch to FBE Mode */
/* OSC0->CR: ERCLKEN=0,??=0,EREFSTEN=0,??=0,SC2P=0,SC4P=0,SC8P=0,SC16P=0 */
OSC0->CR = (uint8_t)0x0Au; // this is required if there are no external capacitors fitted to the Xtal
/* MCG->C7: OSCSEL=0 */
MCG->C7 = (uint8_t)0x00u;
/* MCG->C2: ??=0,??=0,RANGE0=2,HGO=0,EREFS=1,LP=0,IRCS=0 */
MCG->C2 = (uint8_t)0x24u;
/* MCG->C2: LOCKRE0=0,RANGE0=2,HGO=0,EREFS=1,LP=0,IRCS=0 */
MCG->C2 = MCG_C2_RANGE0(2) | MCG_C2_EREFS0_MASK;
//MCG->C2 = (uint8_t)0x24u;
/* MCG->C1: CLKS=2,FRDIV=3,IREFS=0,IRCLKEN=1,IREFSTEN=0 */
MCG->C1 = (uint8_t)0x9Au;
/* MCG->C4: DMX32=0,DRST_DRS=0 */
MCG->C4 &= (uint8_t)~(uint8_t)0xE0u;
/* MCG->C5: ??=0,PLLCLKEN=0,PLLSTEN=0,PRDIV0=3 */
MCG->C5 = (uint8_t)0x07u;
MCG->C1 = MCG_C1_CLKS(2) | MCG_C1_FRDIV(3) | MCG_C1_IRCLKEN_MASK;
/* MCG->C4: DMX32=0,DRST_DRS=0,FCTRIM=0,SCFTRIM=0 */
MCG->C4 &= (uint8_t)~(uint8_t)0xE0u;
/* MCG->C5: PLLCLKEN=0,PLLSTEN=0,PRDIV0=7 */
MCG->C5 = MCG_C5_PRDIV0(7);
/* MCG->C6: LOLIE=0,PLLS=0,CME=0,VDIV0=0 */
MCG->C6 = (uint8_t)0x00u;
while((MCG->S & MCG_S_OSCINIT0_MASK) == 0u) { /* Check that the oscillator is running */
}
#if 0 /* ARM: THIS CHECK IS REMOVED DUE TO BUG WITH SLOW IRC IN REV. 1.0 */
while((MCG->S & MCG_S_IREFST_MASK) != 0u) { /* Check that the source of the FLL reference clock is the external reference clock. */
}
#endif
while((MCG->S & 0x0Cu) != 0x08u) { /* Wait until external reference clock is selected as MCG output */
}
while((MCG->S & MCG_S_OSCINIT0_MASK) == 0u) { } /* Check that the oscillator is running */
while((MCG->S & 0x0Cu) != 0x08u) { } /* Wait until external reference clock is selected as MCG output */
/* Switch to PBE Mode */
/* MCG_C5: ??=0,PLLCLKEN=0,PLLSTEN=0,PRDIV0=3 */
MCG->C5 = (uint8_t)0x05u;
/* MCG->C6: LOLIE=0,PLLS=1,CME=0,VDIV0=0 */
MCG->C6 = (uint8_t)0x43u;
while((MCG->S & MCG_S_PLLST_MASK) == 0u) { /* Wait until the source of the PLLS clock has switched to the PLL */
}
while((MCG->S & MCG_S_LOCK0_MASK) == 0u) { /* Wait until locked */
}
/* MCG_C5: PLLCLKEN=0,PLLSTEN=0,PRDIV0=5 */
MCG->C5 = MCG_C5_PRDIV0(5);
/* MCG->C6: LOLIE=0,PLLS=1,CME=0,VDIV0=3 */
MCG->C6 = MCG_C6_PLLS_MASK | MCG_C6_VDIV0(3);
while((MCG->S & MCG_S_PLLST_MASK) == 0u) { } /* Wait until the source of the PLLS clock has switched to the PLL */
while((MCG->S & MCG_S_LOCK0_MASK) == 0u) { } /* Wait until locked */
/* Switch to PEE Mode */
/* MCG->C1: CLKS=0,FRDIV=3,IREFS=0,IRCLKEN=1,IREFSTEN=0 */
MCG->C1 = (uint8_t)0x22u;
while((MCG->S & 0x0Cu) != 0x0Cu) { /* Wait until output of the PLL is selected */
}
while((MCG->S & MCG_S_LOCK0_MASK) == 0u) { /* Wait until locked */
}
#endif /* (CLOCK_SETUP == 3) */
/* MCG->C1: CLKS=0,FRDIV=2,IREFS=0,IRCLKEN=1,IREFSTEN=0 */
MCG->C1 = MCG_C1_FRDIV(2) | MCG_C1_IRCLKEN_MASK;
while((MCG->S & 0x0Cu) != 0x0Cu) { } /* Wait until output of the PLL is selected */
while((MCG->S & MCG_S_LOCK0_MASK) == 0u) { } /* Wait until locked */
#endif /* (CLOCK_SETUP) */
}
/* ----------------------------------------------------------------------------
@ -210,40 +210,40 @@ typedef enum {
DAC0_OUT = 0xFEFE, /* DAC does not have Pin Name in RM */
// Teensy3.1 Headers
p0 = PTB16,
p1 = PTB17,
p2 = PTD0,
p3 = PTA12,
p4 = PTA13,
p5 = PTD7,
p6 = PTD4,
p7 = PTD2,
p8 = PTD3,
p9 = PTC3,
p10 = PTC4,
p11 = PTC6,
p12 = PTC7,
p13 = PTC5,
p14 = PTD1,
p15 = PTC0,
p16 = PTB0,
p17 = PTB1,
p18 = PTB3,
p19 = PTB2,
p20 = PTD5,
p21 = PTD6,
p22 = PTC1,
p23 = PTC2,
p24 = PTA5,
p25 = PTD19,
p26 = PTE1,
p27 = PTC9,
p28 = PTC8,
p29 = PTC10,
p30 = PTC11,
p31 = PTE0,
p32 = PTB18,
p33 = PTA4,
D0 = PTB16,
D1 = PTB17,
D2 = PTD0,
D3 = PTA12,
D4 = PTA13,
D5 = PTD7,
D6 = PTD4,
D7 = PTD2,
D8 = PTD3,
D9 = PTC3,
D10 = PTC4,
D11 = PTC6,
D12 = PTC7,
D13 = PTC5,
D14 = PTD1,
D15 = PTC0,
D16 = PTB0,
D17 = PTB1,
D18 = PTB3,
D19 = PTB2,
D20 = PTD5,
D21 = PTD6,
D22 = PTC1,
D23 = PTC2,
D24 = PTA5,
D25 = PTB19,
D26 = PTE1,
D27 = PTC9,
D28 = PTC8,
D29 = PTC10,
D30 = PTC11,
D31 = PTE0,
D32 = PTB18,
D33 = PTA4,
A0 = PTD1,
A1 = PTC0,
@ -255,7 +255,13 @@ typedef enum {
A7 = PTD6,
A8 = PTC1,
A9 = PTC2,
A15 = PTE1,
A16 = PTC9,
A17 = PTC8,
A18 = PTC10,
A19 = PTC11,
A20 = PTE0,
I2C_SCL = PTB3,
I2C_SDA = PTB2,
@ -267,7 +273,18 @@ typedef enum {
SERIAL_TX = PTB17,
SERIAL_RX = PTB16,
PWM = PTD7,
PWM = PTA12,
PWM1 = PTA13,
PWM2 = PTD7,
PWM3 = PTD4,
PWM4 = PTC3,
PWM5 = PTC4,
PWM6 = PTD5,
PWM7 = PTD6,
PWM8 = PTC1,
PWM9 = PTC2,
PWM10 = PTB19,
PWM11 = PTB18,
DAC = DAC0_OUT,
@ -41,7 +41,7 @@ struct port_s {
struct pwmout_s {
__IO uint32_t *MOD;
__IO uint32_t *CNT;
__IO uint32_t *SYNC;
__IO uint32_t *CnV;
};
@ -38,26 +38,30 @@ void pwmout_init(pwmout_t* obj, PinName pin) {
}
pwm_clock = clkval;
unsigned int port = (unsigned int)pin >> PORT_SHIFT;
unsigned int ftm_n = (pwm >> TPM_SHIFT);
unsigned int ch_n = (pwm & 0xFF);
SIM->SCGC5 |= 1 << (SIM_SCGC5_PORTA_SHIFT + port);
SIM->SCGC6 |= 1 << (SIM_SCGC6_FTM0_SHIFT + ftm_n);
FTM_Type *ftm = (FTM_Type *)(FTM0_BASE + 0x1000 * ftm_n);
ftm->CONF |= FTM_CONF_BDMMODE(3);
ftm->SC = FTM_SC_CLKS(1) | FTM_SC_PS(clkdiv); // (clock)MHz / clkdiv ~= (0.75)MHz
ftm->CONTROLS[ch_n].CnSC = (FTM_CnSC_MSB_MASK | FTM_CnSC_ELSB_MASK); /* No Interrupts; High True pulses on Edge Aligned PWM */
ftm->MODE = FTM_MODE_FTMEN_MASK;
ftm->SYNC = FTM_SYNC_CNTMIN_MASK;
ftm->SYNCONF = FTM_SYNCONF_SYNCMODE_MASK | FTM_SYNCONF_SWSOC_MASK | FTM_SYNCONF_SWWRBUF_MASK;
//Without SYNCEN set CnV does not seem to update
ftm->COMBINE = FTM_COMBINE_SYNCEN0_MASK | FTM_COMBINE_SYNCEN1_MASK | FTM_COMBINE_SYNCEN2_MASK | FTM_COMBINE_SYNCEN3_MASK;
obj->CnV = &ftm->CONTROLS[ch_n].CnV;
obj->MOD = &ftm->MOD;
obj->CNT = &ftm->CNT;
obj->SYNC = &ftm->SYNC;
// default to 20ms: standard for servos, and fine for e.g. brightness control
pwmout_period_ms(obj, 20);
pwmout_write(obj, 0);
pwmout_write(obj, 0.0);
// Wire pinout
pinmap_pinout(pin, PinMap_PWM);
}
@ -70,11 +74,14 @@ void pwmout_write(pwmout_t* obj, float value) {
} else if (value > 1.0) {
value = 1.0;
}
while(*obj->SYNC & FTM_SYNC_SWSYNC_MASK);
*obj->CnV = (uint32_t)((float)(*obj->MOD + 1) * value);
*obj->SYNC |= FTM_SYNC_SWSYNC_MASK;
}
float pwmout_read(pwmout_t* obj) {
while(*obj->SYNC & FTM_SYNC_SWSYNC_MASK);
float v = (float)(*obj->CnV) / (float)(*obj->MOD + 1);
return (v > 1.0) ? (1.0) : (v);
}
@ -91,6 +98,7 @@ void pwmout_period_ms(pwmout_t* obj, int ms) {
void pwmout_period_us(pwmout_t* obj, int us) {
float dc = pwmout_read(obj);
*obj->MOD = (uint32_t)(pwm_clock * (float)us) - 1;
*obj->SYNC |= FTM_SYNC_SWSYNC_MASK;
pwmout_write(obj, dc);
}
@ -104,4 +112,5 @@ void pwmout_pulsewidth_ms(pwmout_t* obj, int ms) {
void pwmout_pulsewidth_us(pwmout_t* obj, int us) {
*obj->CnV = (uint32_t)(pwm_clock * (float)us);
*obj->SYNC |= FTM_SYNC_SWSYNC_MASK;
}
@ -29,9 +29,15 @@ static void init(void) {
void rtc_init(void) {
init();
// Enable the oscillator
#if defined (TARGET_K20D50M)
RTC->CR |= RTC_CR_OSCE_MASK;
#else
// Teensy3.1 requires 20pF MCU loading capacitors for 32KHz RTC oscillator
/* RTC->CR: SC2P=0,SC4P=1,SC8P=0,SC16P=1,CLKO=0,OSCE=1,UM=0,SUP=0,SPE=0,SWR=0 */
RTC->CR |= RTC_CR_OSCE_MASK |RTC_CR_SC16P_MASK | RTC_CR_SC4P_MASK;
#endif
//Configure the TSR. default value: 1
RTC->TSR = 1;
@ -39,13 +39,32 @@ void deepsleep(void)
SCB->SCR = 1<<SCB_SCR_SLEEPDEEP_Pos;
__WFI();
//Switch back to PLL as clock source if needed
//The interrupt that woke up the device will run at reduced speed
if (PLL_FLL_en) {
#if defined (TARGET_K20D50M)
if (MCG->C6 & (1<<MCG_C6_PLLS_SHIFT) != 0) /* If PLL */
while((MCG->S & MCG_S_LOCK0_MASK) == 0x00U); /* Wait until locked */
MCG->C1 &= ~MCG_C1_CLKS_MASK;
#else
// MCG->C1: CLKS=2,FRDIV=3,IREFS=0,IRCLKEN=1,IREFSTEN=0
MCG->C1 = MCG_C1_CLKS(2) | MCG_C1_FRDIV(3) | MCG_C1_IRCLKEN_MASK;
// MCG->C6: LOLIE=0,PLLS=0,CME=0,VDIV0=0
MCG->C6 = MCG_C6_VDIV0(0);
while((MCG->S & MCG_S_OSCINIT0_MASK) == 0u) { } // Check that the oscillator is running
while((MCG->S & 0x0Cu) != 0x08u) { } // Wait until external reference clock is selected as MCG output
// MCG->C5: PLLCLKEN=0,PLLSTEN=0,PRDIV0=3
MCG->C5 = MCG_C5_PRDIV0(5);
// MCG->C6: LOLIE=0,PLLS=1,CME=0,VDIV0=3
MCG->C6 = MCG_C6_PLLS_MASK | MCG_C6_VDIV0(3);
while((MCG->S & 0x0Cu) != 0x08u) { } // Wait until external reference clock is selected as MCG output
while((MCG->S & MCG_S_PLLST_MASK) == 0u) { } // Wait until the source of the PLLS clock has switched to the PLL
while((MCG->S & MCG_S_LOCK0_MASK) == 0u) { } // Wait until locked
// MCG->C1: CLKS=0,FRDIV=2,IREFS=0,IRCLKEN=1,IREFSTEN=0
MCG->C1 = MCG_C1_FRDIV(2) | MCG_C1_IRCLKEN_MASK;;
while((MCG->S & 0x0Cu) != 0x0Cu) { } // Wait until output of the PLL is selected
while((MCG->S & MCG_S_LOCK0_MASK) == 0u) { } // Wait until locked
#endif
}
}
@ -96,7 +96,7 @@ void serial_format(serial_t *obj, int data_bits, SerialParity parity, int stop_b
UART_HAL_SetBitCountPerChar(uart_addrs[obj->index], (uart_bit_count_per_char_t)data_bits);
UART_HAL_SetParityMode(uart_addrs[obj->index], (uart_parity_mode_t)parity);
#if FSL_FEATURE_UART_HAS_STOP_BIT_CONFIG_SUPPORT
UART_HAL_SetStopBitCount(uart_addrs[obj->index], (uart_stop_bit_count_t)stop_bits);
UART_HAL_SetStopBitCount(uart_addrs[obj->index], (uart_stop_bit_count_t)--stop_bits);
#endif
}
@ -221,6 +221,7 @@ if __name__ == '__main__':
_opts_verbose=opts.verbose,
_opts_firmware_global_name=opts.firmware_global_name,
_opts_only_build_tests=opts.only_build_tests,
_opts_parallel_test_exec=opts.parallel_test_exec,
_opts_suppress_summary=opts.suppress_summary,
_opts_test_x_toolchain_summary=opts.test_x_toolchain_summary,
_opts_copy_method=opts.copy_method,
@ -34,7 +34,7 @@ from prettytable import PrettyTable
from time import sleep, time
from Queue import Queue, Empty
from os.path import join, exists, basename
from threading import Thread
from threading import Thread, Lock
from subprocess import Popen, PIPE
# Imports related to mbed build api
@ -167,6 +167,7 @@ class SingleTestRunner(object):
_opts_verbose=False,
_opts_firmware_global_name=None,
_opts_only_build_tests=False,
_opts_parallel_test_exec=False,
_opts_suppress_summary=False,
_opts_test_x_toolchain_summary=False,
_opts_copy_method=None,
@ -217,6 +218,7 @@ class SingleTestRunner(object):
self.opts_verbose = _opts_verbose
self.opts_firmware_global_name = _opts_firmware_global_name
self.opts_only_build_tests = _opts_only_build_tests
self.opts_parallel_test_exec = _opts_parallel_test_exec
self.opts_suppress_summary = _opts_suppress_summary
self.opts_test_x_toolchain_summary = _opts_test_x_toolchain_summary
self.opts_copy_method = _opts_copy_method
@ -257,7 +259,7 @@ class SingleTestRunner(object):
"shuffle_test_order" : str(self.opts_shuffle_test_order),
"shuffle_test_seed" : str(self.opts_shuffle_test_seed),
"test_by_names" : str(self.opts_test_by_names),
"peripheral_by_names" : str(self.opts_peripheral_by_names),
"peripheral_by_names" : str(self.opts_peripheral_by_names),
"test_only_peripheral" : str(self.opts_test_only_peripheral),
"test_only_common" : str(self.opts_test_only_common),
"verbose" : str(self.opts_verbose),
@ -284,225 +286,260 @@ class SingleTestRunner(object):
result = False
return result
# This will store target / toolchain specific properties
test_suite_properties_ext = {} # target : toolchain
# Here we store test results
test_summary = []
# Here we store test results in extended data structure
test_summary_ext = {}
execute_thread_slice_lock = Lock()
def execute_thread_slice(self, q, target, toolchains, clean, test_ids):
for toolchain in toolchains:
# print target, toolchain
# Test suite properties returned to external tools like CI
test_suite_properties = {}
test_suite_properties['jobs'] = self.opts_jobs
test_suite_properties['clean'] = clean
test_suite_properties['target'] = target
test_suite_properties['test_ids'] = ', '.join(test_ids)
test_suite_properties['toolchain'] = toolchain
test_suite_properties['shuffle_random_seed'] = self.shuffle_random_seed
# print '=== %s::%s ===' % (target, toolchain)
# Let's build our test
if target not in TARGET_MAP:
print self.logger.log_line(self.logger.LogType.NOTIF, 'Skipped tests for %s target. Target platform not found'% (target))
continue
T = TARGET_MAP[target]
build_mbed_libs_options = ["analyze"] if self.opts_goanna_for_mbed_sdk else None
clean_mbed_libs_options = True if self.opts_goanna_for_mbed_sdk or clean or self.opts_clean else None
try:
build_mbed_libs_result = build_mbed_libs(T,
toolchain,
options=build_mbed_libs_options,
clean=clean_mbed_libs_options,
jobs=self.opts_jobs)
if not build_mbed_libs_result:
print self.logger.log_line(self.logger.LogType.NOTIF, 'Skipped tests for %s target. Toolchain %s is not yet supported for this target'% (T.name, toolchain))
continue
except ToolException:
print self.logger.log_line(self.logger.LogType.ERROR, 'There were errors while building MBED libs for %s using %s'% (target, toolchain))
#return self.test_summary, self.shuffle_random_seed, self.test_summary_ext, self.test_suite_properties_ext
q.put(target + '_'.join(toolchains))
return
build_dir = join(BUILD_DIR, "test", target, toolchain)
test_suite_properties['build_mbed_libs_result'] = build_mbed_libs_result
test_suite_properties['build_dir'] = build_dir
test_suite_properties['skipped'] = []
# Enumerate through all tests and shuffle test order if requested
test_map_keys = sorted(TEST_MAP.keys())
if self.opts_shuffle_test_order:
random.shuffle(test_map_keys, self.shuffle_random_func)
# Update database with shuffle seed f applicable
if self.db_logger:
self.db_logger.reconnect();
if self.db_logger.is_connected():
self.db_logger.update_build_id_info(self.db_logger_build_id, _shuffle_seed=self.shuffle_random_func())
self.db_logger.disconnect();
if self.db_logger:
self.db_logger.reconnect();
if self.db_logger.is_connected():
# Update MUTs and Test Specification in database
self.db_logger.update_build_id_info(self.db_logger_build_id, _muts=self.muts, _test_spec=self.test_spec)
# Update Extra information in database (some options passed to test suite)
self.db_logger.update_build_id_info(self.db_logger_build_id, _extra=json.dumps(self.dump_options()))
self.db_logger.disconnect();
for test_id in test_map_keys:
test = TEST_MAP[test_id]
if self.opts_test_by_names and test_id not in self.opts_test_by_names.split(','):
continue
if test_ids and test_id not in test_ids:
continue
if self.opts_test_only_peripheral and not test.peripherals:
if self.opts_verbose_skipped_tests:
print self.logger.log_line(self.logger.LogType.INFO, 'Common test skipped for target %s'% (target))
test_suite_properties['skipped'].append(test_id)
continue
if self.opts_peripheral_by_names and test.peripherals and not len([i for i in test.peripherals if i in self.opts_peripheral_by_names.split(',')]):
# We will skip tests not forced with -p option
if self.opts_verbose_skipped_tests:
print self.logger.log_line(self.logger.LogType.INFO, 'Common test skipped for target %s'% (target))
test_suite_properties['skipped'].append(test_id)
continue
if self.opts_test_only_common and test.peripherals:
if self.opts_verbose_skipped_tests:
print self.logger.log_line(self.logger.LogType.INFO, 'Peripheral test skipped for target %s'% (target))
test_suite_properties['skipped'].append(test_id)
continue
if test.automated and test.is_supported(target, toolchain):
if test.peripherals is None and self.opts_only_build_tests:
# When users are using 'build only flag' and test do not have
# specified peripherals we can allow test building by default
pass
elif self.opts_peripheral_by_names and test_id not in self.opts_peripheral_by_names.split(','):
# If we force peripheral with option -p we expect test
# to pass even if peripheral is not in MUTs file.
pass
elif not self.is_peripherals_available(target, test.peripherals):
if self.opts_verbose_skipped_tests:
if test.peripherals:
print self.logger.log_line(self.logger.LogType.INFO, 'Peripheral %s test skipped for target %s'% (",".join(test.peripherals), target))
else:
print self.logger.log_line(self.logger.LogType.INFO, 'Test %s skipped for target %s'% (test_id, target))
test_suite_properties['skipped'].append(test_id)
continue
build_project_options = ["analyze"] if self.opts_goanna_for_tests else None
clean_project_options = True if self.opts_goanna_for_tests or clean or self.opts_clean else None
# Detect which lib should be added to test
# Some libs have to compiled like RTOS or ETH
libraries = []
for lib in LIBRARIES:
if lib['build_dir'] in test.dependencies:
libraries.append(lib['id'])
# Build libs for test
for lib_id in libraries:
try:
build_lib(lib_id,
T,
toolchain,
options=build_project_options,
verbose=self.opts_verbose,
clean=clean_mbed_libs_options,
jobs=self.opts_jobs)
except ToolException:
print self.logger.log_line(self.logger.LogType.ERROR, 'There were errors while building library %s'% (lib_id))
#return self.test_summary, self.shuffle_random_seed, self.test_summary_ext, self.test_suite_properties_ext
q.put(target + '_'.join(toolchains))
return
test_suite_properties['test.libs.%s.%s.%s'% (target, toolchain, test_id)] = ', '.join(libraries)
# TODO: move this 2 below loops to separate function
INC_DIRS = []
for lib_id in libraries:
if 'inc_dirs_ext' in LIBRARY_MAP[lib_id] and LIBRARY_MAP[lib_id]['inc_dirs_ext']:
INC_DIRS.extend(LIBRARY_MAP[lib_id]['inc_dirs_ext'])
MACROS = []
for lib_id in libraries:
if 'macros' in LIBRARY_MAP[lib_id] and LIBRARY_MAP[lib_id]['macros']:
MACROS.extend(LIBRARY_MAP[lib_id]['macros'])
MACROS.append('TEST_SUITE_TARGET_NAME="%s"'% target)
MACROS.append('TEST_SUITE_TEST_ID="%s"'% test_id)
test_uuid = uuid.uuid4()
MACROS.append('TEST_SUITE_UUID="%s"'% str(test_uuid))
project_name = self.opts_firmware_global_name if self.opts_firmware_global_name else None
try:
path = build_project(test.source_dir,
join(build_dir, test_id),
T,
toolchain,
test.dependencies,
options=build_project_options,
clean=clean_project_options,
verbose=self.opts_verbose,
name=project_name,
macros=MACROS,
inc_dirs=INC_DIRS,
jobs=self.opts_jobs)
except ToolException:
project_name_str = project_name if project_name is not None else test_id
print self.logger.log_line(self.logger.LogType.ERROR, 'There were errors while building project %s'% (project_name_str))
# return self.test_summary, self.shuffle_random_seed, self.test_summary_ext, self.test_suite_properties_ext
q.put(target + '_'.join(toolchains))
return
if self.opts_only_build_tests:
# With this option we are skipping testing phase
continue
# Test duration can be increased by global value
test_duration = test.duration
if self.opts_extend_test_timeout is not None:
test_duration += self.opts_extend_test_timeout
# For an automated test the duration act as a timeout after
# which the test gets interrupted
test_spec = self.shape_test_request(target, path, test_id, test_duration)
test_loops = self.get_test_loop_count(test_id)
test_suite_properties['test.duration.%s.%s.%s'% (target, toolchain, test_id)] = test_duration
test_suite_properties['test.loops.%s.%s.%s'% (target, toolchain, test_id)] = test_loops
test_suite_properties['test.path.%s.%s.%s'% (target, toolchain, test_id)] = path
# read MUTs, test specification and perform tests
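# handle() locates a matching MUT, flashes the freshly built binary and runs
# the test, returning a one-line summary result plus detailed results
# (assumed to include the per-loop breakdown).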
single_test_result, detailed_test_results = self.handle(test_spec, target, toolchain, test_loops=test_loops)
# Append test results to global test summary
if single_test_result is not None:
self.test_summary.append(single_test_result)
# Prepare extended test results data structure (it can be used to generate detailed test report)
if toolchain not in self.test_summary_ext:
self.test_summary_ext[toolchain] = {} # test_summary_ext : toolchain
if target not in self.test_summary_ext[toolchain]:
self.test_summary_ext[toolchain][target] = {} # test_summary_ext : toolchain : target
if test_id not in self.test_summary_ext[toolchain][target]:
self.test_summary_ext[toolchain][target][test_id] = detailed_test_results # test_summary_ext : toolchain : target : test_id
test_suite_properties['skipped'] = ', '.join(test_suite_properties['skipped'])
self.test_suite_properties_ext[target][toolchain] = test_suite_properties
# return self.test_summary, self.shuffle_random_seed, test_summary_ext, self.test_suite_properties_ext
q.put(target + '_'.join(toolchains))
return
def execute(self):
clean = self.test_spec.get('clean', False)
test_ids = self.test_spec.get('test_ids', [])
# This will store target / toolchain specific properties
test_suite_properties_ext = {} # target : toolchain
# Here we store test results
test_summary = []
# Here we store test results in extended data structure
test_summary_ext = {}
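# Each execute_thread_slice() worker (and the serial path below) puts one
# sentinel on this queue when its target slice completes; we later get() one
# item per slice instead of join()ing threads in a fixed order.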
q = Queue()
# Generate seed for shuffle if seed is not provided on the command line
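# The seed is rounded to SHUFFLE_SEED_ROUND decimal places so it can be
# reported alongside the results and passed back in to reproduce the order.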
self.shuffle_random_seed = round(random.random(), self.SHUFFLE_SEED_ROUND)
if self.opts_shuffle_test_seed is not None and self.is_shuffle_seed_float():
self.shuffle_random_seed = round(float(self.opts_shuffle_test_seed), self.SHUFFLE_SEED_ROUND)
for target, toolchains in self.test_spec['targets'].iteritems():
test_suite_properties_ext[target] = {}
for toolchain in toolchains:
# Test suite properties returned to external tools like CI
test_suite_properties = {}
test_suite_properties['jobs'] = self.opts_jobs
test_suite_properties['clean'] = clean
test_suite_properties['target'] = target
test_suite_properties['test_ids'] = ', '.join(test_ids)
test_suite_properties['toolchain'] = toolchain
test_suite_properties['shuffle_random_seed'] = self.shuffle_random_seed
if self.opts_parallel_test_exec:
###################################################################
# Experimental, parallel test execution per singletest instance.
###################################################################
execute_threads = [] # Threads used to build mbed SDK, libs, test cases and execute tests
# Note: We are building here in parallel for each target separately!
# So we are not building the same thing multiple times and compilers
# in separate threads do not collide.
# Inside the execute_thread_slice() function, handle() will be called to
# get information about available MUTs (per target).
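# One thread is spawned per target; all toolchains for that target are then
# processed sequentially inside its execute_thread_slice() call.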
for target, toolchains in self.test_spec['targets'].iteritems():
self.test_suite_properties_ext[target] = {}
t = threading.Thread(target=self.execute_thread_slice, args = (q, target, toolchains, clean, test_ids))
t.daemon = True
t.start()
execute_threads.append(t)
# print '=== %s::%s ===' % (target, toolchain)
# Let's build our test
if target not in TARGET_MAP:
print self.logger.log_line(self.logger.LogType.NOTIF, 'Skipped tests for %s target. Target platform not found'% (target))
continue
T = TARGET_MAP[target]
build_mbed_libs_options = ["analyze"] if self.opts_goanna_for_mbed_sdk else None
clean_mbed_libs_options = True if self.opts_goanna_for_mbed_sdk or clean or self.opts_clean else None
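# build_mbed_libs() is expected to return a falsy result when the selected
# toolchain is not yet supported for this target; that case is reported and
# skipped below rather than treated as a build error.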
try:
build_mbed_libs_result = build_mbed_libs(T,
toolchain,
options=build_mbed_libs_options,
clean=clean_mbed_libs_options,
jobs=self.opts_jobs)
if not build_mbed_libs_result:
print self.logger.log_line(self.logger.LogType.NOTIF, 'Skipped tests for %s target. Toolchain %s is not yet supported for this target'% (T.name, toolchain))
continue
except ToolException:
print self.logger.log_line(self.logger.LogType.ERROR, 'There were errors while building MBED libs for %s using %s'% (target, toolchain))
return test_summary, self.shuffle_random_seed, test_summary_ext, test_suite_properties_ext
build_dir = join(BUILD_DIR, "test", target, toolchain)
test_suite_properties['build_mbed_libs_result'] = build_mbed_libs_result
test_suite_properties['build_dir'] = build_dir
test_suite_properties['skipped'] = []
# Enumerate through all tests and shuffle test order if requested
test_map_keys = sorted(TEST_MAP.keys())
if self.opts_shuffle_test_order:
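# shuffle_random_func is assumed to simply return self.shuffle_random_seed so
# that random.shuffle() produces a reproducible order for a given seed.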
random.shuffle(test_map_keys, self.shuffle_random_func)
# Update database with shuffle seed if applicable
if self.db_logger:
self.db_logger.reconnect()
if self.db_logger.is_connected():
self.db_logger.update_build_id_info(self.db_logger_build_id, _shuffle_seed=self.shuffle_random_func())
self.db_logger.disconnect()
if self.db_logger:
self.db_logger.reconnect()
if self.db_logger.is_connected():
# Update MUTs and Test Specification in database
self.db_logger.update_build_id_info(self.db_logger_build_id, _muts=self.muts, _test_spec=self.test_spec)
# Update Extra information in database (some options passed to test suite)
self.db_logger.update_build_id_info(self.db_logger_build_id, _extra=json.dumps(self.dump_options()))
self.db_logger.disconnect()
for test_id in test_map_keys:
test = TEST_MAP[test_id]
if self.opts_test_by_names and test_id not in self.opts_test_by_names.split(','):
continue
if test_ids and test_id not in test_ids:
continue
if self.opts_test_only_peripheral and not test.peripherals:
if self.opts_verbose_skipped_tests:
print self.logger.log_line(self.logger.LogType.INFO, 'Common test skipped for target %s'% (target))
test_suite_properties['skipped'].append(test_id)
continue
if self.opts_peripheral_by_names and test.peripherals and not len([i for i in test.peripherals if i in self.opts_peripheral_by_names.split(',')]):
# We will skip tests not forced with -p option
if self.opts_verbose_skipped_tests:
print self.logger.log_line(self.logger.LogType.INFO, 'Common test skipped for target %s'% (target))
test_suite_properties['skipped'].append(test_id)
continue
if self.opts_test_only_common and test.peripherals:
if self.opts_verbose_skipped_tests:
print self.logger.log_line(self.logger.LogType.INFO, 'Peripheral test skipped for target %s'% (target))
test_suite_properties['skipped'].append(test_id)
continue
if test.automated and test.is_supported(target, toolchain):
if test.peripherals is None and self.opts_only_build_tests:
# When the user passes the 'build only' flag and the test does not
# specify peripherals, we allow the test to be built by default
pass
elif self.opts_peripheral_by_names and test_id not in self.opts_peripheral_by_names.split(','):
# If a peripheral is forced with the -p option, we expect the test
# to be built and run even if the peripheral is not listed in the MUTs file.
pass
elif not self.is_peripherals_available(target, test.peripherals):
if self.opts_verbose_skipped_tests:
if test.peripherals:
print self.logger.log_line(self.logger.LogType.INFO, 'Peripheral %s test skipped for target %s'% (",".join(test.peripherals), target))
else:
print self.logger.log_line(self.logger.LogType.INFO, 'Test %s skipped for target %s'% (test_id, target))
test_suite_properties['skipped'].append(test_id)
continue
build_project_options = ["analyze"] if self.opts_goanna_for_tests else None
clean_project_options = True if self.opts_goanna_for_tests or clean or self.opts_clean else None
# Detect which libs should be added to the test
# Some libs, like RTOS or ETH, have to be compiled separately
libraries = []
for lib in LIBRARIES:
if lib['build_dir'] in test.dependencies:
libraries.append(lib['id'])
# Build libs for test
for lib_id in libraries:
try:
build_lib(lib_id,
T,
toolchain,
options=build_project_options,
verbose=self.opts_verbose,
clean=clean_mbed_libs_options,
jobs=self.opts_jobs)
except ToolException:
print self.logger.log_line(self.logger.LogType.ERROR, 'There were errors while building library %s'% (lib_id))
return test_summary, self.shuffle_random_seed, test_summary_ext, test_suite_properties_ext
test_suite_properties['test.libs.%s.%s.%s'% (target, toolchain, test_id)] = ', '.join(libraries)
# TODO: move the two loops below to a separate function
INC_DIRS = []
for lib_id in libraries:
if 'inc_dirs_ext' in LIBRARY_MAP[lib_id] and LIBRARY_MAP[lib_id]['inc_dirs_ext']:
INC_DIRS.extend(LIBRARY_MAP[lib_id]['inc_dirs_ext'])
MACROS = []
for lib_id in libraries:
if 'macros' in LIBRARY_MAP[lib_id] and LIBRARY_MAP[lib_id]['macros']:
MACROS.extend(LIBRARY_MAP[lib_id]['macros'])
MACROS.append('TEST_SUITE_TARGET_NAME="%s"'% target)
MACROS.append('TEST_SUITE_TEST_ID="%s"'% test_id)
test_uuid = uuid.uuid4()
MACROS.append('TEST_SUITE_UUID="%s"'% str(test_uuid))
project_name = self.opts_firmware_global_name if self.opts_firmware_global_name else None
try:
path = build_project(test.source_dir,
join(build_dir, test_id),
T,
toolchain,
test.dependencies,
options=build_project_options,
clean=clean_project_options,
verbose=self.opts_verbose,
name=project_name,
macros=MACROS,
inc_dirs=INC_DIRS,
jobs=self.opts_jobs)
except ToolException:
project_name_str = project_name if project_name is not None else test_id
print self.logger.log_line(self.logger.LogType.ERROR, 'There were errors while building project %s'% (project_name_str))
return test_summary, self.shuffle_random_seed, test_summary_ext, test_suite_properties_ext
if self.opts_only_build_tests:
# With this option we skip the testing phase
continue
# Test duration can be increased by global value
test_duration = test.duration
if self.opts_extend_test_timeout is not None:
test_duration += self.opts_extend_test_timeout
# For an automated test the duration acts as a timeout after
# which the test gets interrupted
test_spec = self.shape_test_request(target, path, test_id, test_duration)
test_loops = self.get_test_loop_count(test_id)
test_suite_properties['test.duration.%s.%s.%s'% (target, toolchain, test_id)] = test_duration
test_suite_properties['test.loops.%s.%s.%s'% (target, toolchain, test_id)] = test_loops
test_suite_properties['test.path.%s.%s.%s'% (target, toolchain, test_id)] = path
# read MUTs, test specification and perform tests
single_test_result, detailed_test_results = self.handle(test_spec, target, toolchain, test_loops=test_loops)
# Append test results to global test summary
if single_test_result is not None:
test_summary.append(single_test_result)
# Prepare extended test results data structure (it can be used to generate detailed test report)
if toolchain not in test_summary_ext:
test_summary_ext[toolchain] = {} # test_summary_ext : toolchain
if target not in test_summary_ext[toolchain]:
test_summary_ext[toolchain][target] = {} # test_summary_ext : toolchain : target
if test_id not in test_summary_ext[toolchain][target]:
test_summary_ext[toolchain][target][test_id] = detailed_test_results # test_summary_ext : toolchain : target : test_id
test_suite_properties['skipped'] = ', '.join(test_suite_properties['skipped'])
test_suite_properties_ext[target][toolchain] = test_suite_properties
for t in execute_threads:
q.get() # Wait for each thread slice to signal completion via the queue; we avoid t.join() so we do not have to wait on threads in a fixed order
else:
# Serialized (not parallel) test execution
for target, toolchains in self.test_spec['targets'].iteritems():
if target not in self.test_suite_properties_ext:
self.test_suite_properties_ext[target] = {}
self.execute_thread_slice(q, target, toolchains, clean, test_ids)
q.get()
if self.db_logger:
self.db_logger.reconnect()
@@ -510,7 +547,7 @@ class SingleTestRunner(object):
self.db_logger.update_build_id_info(self.db_logger_build_id, _status_fk=self.db_logger.BUILD_ID_STATUS_COMPLETED)
self.db_logger.disconnect()
return self.test_summary, self.shuffle_random_seed, self.test_summary_ext, self.test_suite_properties_ext
def generate_test_summary_by_target(self, test_summary, shuffle_seed=None):
""" Prints well-formed summary with results (SQL table like)
@@ -641,7 +678,8 @@ class SingleTestRunner(object):
def handle(self, test_spec, target_name, toolchain_name, test_loops=1):
""" Function determines MUT's mbed disk/port and copies binary to
target.
Test is being invoked afterwards.
"""
data = json.loads(test_spec)
# Get test information, image and test timeout
@@ -1527,7 +1565,7 @@ def get_default_test_options_parser():
parser.add_option('-p', '--peripheral-by-names',
dest='peripheral_by_names',
help='Forces discovery of particular peripherals. Use comma to separate peripheral names.')
copy_methods = host_tests_plugins.get_plugin_caps('CopyMethod')
copy_methods_str = "Plugin support: " + ', '.join(copy_methods)
@@ -1592,6 +1630,12 @@ def get_default_test_options_parser():
default=False,
help="Only build tests, skips actual test procedures (flashing etc.)")
parser.add_option('', '--parallel',
dest='parallel_test_exec',
default=False,
action="store_true",
help='Experimental: execute test runners in parallel for the MUTs connected to your host (speeds up test result collection)')
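# --parallel (dest: parallel_test_exec) is assumed to map to
# self.opts_parallel_test_exec, which makes SingleTestRunner.execute() spawn
# one execute_thread_slice() worker thread per target.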
parser.add_option('', '--config',
dest='verbose_test_configuration_only',
default=False,