Merge pull request #1893 from screamerbg/tools-improvements

mbed Tools features and improvements
pull/1901/head
Sam Grove 2016-06-10 18:34:23 +01:00 committed by GitHub
commit 52e93aebd0
335 changed files with 2739 additions and 682 deletions

.gitignore vendored

@@ -7,7 +7,7 @@ dist
MANIFEST
# Private settings
private_settings.py
mbed_settings.py
# Default Build Directory
.build/


@@ -1,7 +1,7 @@
python:
- "2.7"
script: "python workspace_tools/build_travis.py"
script: "python tools/build_travis.py"
before_install:
- sudo add-apt-repository -y ppa:terry.guo/gcc-arm-embedded
- sudo apt-get update -qq


@@ -1,3 +1,3 @@
graft workspace_tools
recursive-exclude workspace_tools *.pyc
graft tools
recursive-exclude tools *.pyc
include LICENSE


@@ -157,7 +157,7 @@ Develop
```
> "venv/Scripts/activate"
> pip install -r requirements.txt
> cd workspace_tools
> cd tools
> ... do things ...
> "venv/Scripts/deactivate"
```


@@ -130,11 +130,11 @@ Checking out files: 100% (3994/3994), done.
```
$ cd mbed
$ ls
LICENSE MANIFEST.in README.md libraries setup.py travis workspace_tools
LICENSE MANIFEST.in README.md libraries setup.py travis tools
```
The directory structure we are interested in:
```
mbed/workspace_tools/ - test suite scripts, build scripts etc.
mbed/tools/ - test suite scripts, build scripts etc.
mbed/libraries/tests/ - mbed SDK tests,
mbed/libraries/tests/mbed/ - tests for mbed SDK and peripherals tests,
mbed/libraries/tests/net/echo/ - tests for Ethernet interface,
@@ -153,9 +153,9 @@ Workspace tools are set of Python scripts used off-line by Mbed SDK team to:
Before we can run our first test we need to configure our test environment a little!
Now we need to tell workspace tools where our compilers are.
* Please to go ```mbed/workspace_tools/``` directory and create empty file called ```private_settings.py```.
* Please go to the ```mbed``` directory and create an empty file called ```mbed_settings.py```.
```
$ touch private_settings.py
$ touch mbed_settings.py
```
* Populate this file with the Python code below:
```python
@@ -203,13 +203,13 @@ GCC_CR_PATH = "C:/Work/toolchains/LPCXpresso_6.1.4_194/lpcxpresso/tools/bin"
IAR_PATH = "C:/Work/toolchains/iar_6_5/arm"
```
Note: Settings in ```private_settings.py``` will overwrite variables with default values in ```mbed/workspace_tools/settings.py``` file.
Note: Settings in ```mbed_settings.py``` will overwrite variables with default values in ```mbed/default_settings.py``` file.
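For reference, a complete ```mbed_settings.py``` can be as small as a handful of path assignments. The paths below are placeholders for one particular machine, not canonical values; set only the variables for the toolchains you actually installed:

```python
# mbed_settings.py -- minimal sketch; every path here is an example
# and must be replaced with your own toolchain installation paths.
ARM_PATH = "C:/Work/toolchains/ARMCompiler_5.03"    # armcc (Keil) root
GCC_ARM_PATH = "C:/Work/toolchains/gcc_arm/bin"     # arm-none-eabi-* binaries
IAR_PATH = "C:/Work/toolchains/iar_6_5/arm"         # IAR Workbench ARM directory
```

Any variable you do not set here keeps its default from ```mbed/default_settings.py```.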
## Build Mbed SDK library from sources
Let's build mbed SDK library off-line from sources using your compiler. We've already cloned mbed SDK sources, we've also installed compilers and added their paths to ```private_settings.py```.
Let's build the mbed SDK library off-line from sources using your compiler. We've already cloned the mbed SDK sources, installed the compilers and added their paths to ```mbed_settings.py```.
We should now be ready to use the workspace tools script ```build.py``` to compile and build the mbed SDK from sources.
We are still using console. You should be already in ```mbed/workspace_tools/``` directory if not go to ```mbed/workspace_tools/``` and type below command:
We are still using the console. You should already be in the ```mbed/tools/``` directory; if not, go to ```mbed/tools/``` and type the command below:
```
$ python build.py -m LPC1768 -t ARM
```
@@ -276,7 +276,7 @@ Build successes:
### build.py script
Build script located in mbed/workspace_tools/ is our core script solution to drive compilation, linking and building process for:
The build script located in mbed/tools/ is our core script for driving the compilation, linking and build process for:
* mbed SDK (with libs like Ethernet, RTOS, USB, USB host).
* Tests which also can be linked with libraries like RTOS or Ethernet.
@@ -426,7 +426,7 @@ $ python build.py -t uARM -m NUCLEO_F334R8 --cppcheck
```
# make.py script
```make.py``` is a ```mbed/workspace_tools/``` script used to build tests (we call them sometimes 'programs') one by one manually. This script allows you to flash board, execute and test it. However, this script is deprecated and will not be described here. Instead please use ```singletest.py``` file to build mbed SDK, tests and run automation for test cases included in ```mbedmicro/mbed```.
```make.py``` is a ```mbed/tools/``` script used to build tests (we sometimes call them 'programs') one by one, manually. This script also allows you to flash a board, execute the test and check it. However, this script is deprecated and will not be described here. Instead, please use ```singletest.py``` to build the mbed SDK, build tests and run the automation for the test cases included in ```mbedmicro/mbed```.
Note: ```make.py``` depends on an already built mbed SDK and library sources, so you need to pre-build the mbed SDK and other libraries (such as the RTOS library) in order to link a 'program' (test) with the mbed SDK and RTOS library. To pre-build the mbed SDK please use the ```build.py``` script.
Just for the sake of example, here are a few ways to use ```make.py``` together with the Freedom K64F board.


@@ -198,7 +198,7 @@ $ astyle.exe --style=kr --indent=spaces=4 --indent-switches $(FULL_CURRENT_PATH)
```
## Python coding rules & coding guidelines
Some of our tools in workspace_tools are written in ```Python 2.7```. In case of developing tools for python we prefer to keep similar code styles across all Python source code. Please note that not all rules must be enforced. For example we do not limit you to 80 characters per line, just be sure your code can fit to widescreen display.
Some of our tools in ```tools``` are written in ```Python 2.7```. When developing Python tools we prefer to keep a similar code style across all Python source code. Please note that not all rules must be enforced. For example, we do not limit you to 80 characters per line; just be sure your code fits a widescreen display.
Please stay compatible with ```Python 2.7```, but nothing stops you from writing your code so that in the future it will be Python 3 friendly.
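As a tiny illustration of that style (this snippet is illustrative, not taken from the repository), the ```__future__``` imports let the same file run unchanged under both interpreters:

```python
from __future__ import print_function, division  # harmless no-ops on Python 3

def toolchain_summary(toolchains):
    """Return a stable, comma-separated summary of toolchain names."""
    return ", ".join(sorted(toolchains))

# print() behaves identically on 2.7 and 3.x thanks to the import above
print(toolchain_summary({"GCC_ARM", "ARM", "IAR"}))
```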
@@ -211,7 +211,7 @@ Some general guidelines:
* Please document your code, write comments and ```doc``` sections for each function or class you implement.
### Static Code Analyzers for Python
If you are old-school developer for sure you remember tools like lint. "lint was the name originally given to a particular program that flagged some suspicious and non-portable constructs (likely to be bugs) in C language source code." Now lint-like programs are used to check similar code issues for multiple languages, also for Python. Please do use them if you want to commit new code to workspace_tools and other mbed SDK Python tooling.
If you are old-school developer for sure you remember tools like lint. "lint was the name originally given to a particular program that flagged some suspicious and non-portable constructs (likely to be bugs) in C language source code." Now lint-like programs are used to check similar code issues for multiple languages, also for Python. Please do use them if you want to commit new code to tools and other mbed SDK Python tooling.
Below is a list of Python lint tools you may want to use:
@@ -254,7 +254,7 @@ class HostRegistry:
```
## Testing
Please refer to TESTING.md document for detais regarding mbed SDK test suite and build scripts included in ```mbed/workspace_tools/```.
Please refer to the TESTING.md document for details regarding the mbed SDK test suite and build scripts included in ```mbed/tools/```.
## Before pull request checklist
* Your pull request description section contains:


@@ -6,7 +6,7 @@ Test suit allows users to run locally on their machines Mbed SDKs tests inclu
Each test is supervised by a Python script called "host test". The test suite uses the build script API to compile and build the test source together with the libraries required by the test, such as CMSIS, Mbed, Ethernet and USB.
## What is host test?
Test suite supports test supervisor concept. This concept is realized by separate Python script called ```host test```. Host tests can be found in ```mbed/workspace_tools/host_tests/``` directory. Note: In newer mbed versions (mbed OS) host tests will be separate library.
Test suite supports test supervisor concept. This concept is realized by separate Python script called ```host test```. Host tests can be found in ```mbed/tools/host_tests/``` directory. Note: In newer mbed versions (mbed OS) host tests will be separate library.
The host test script is executed in parallel with the test runner to monitor test execution. A basic host test just monitors the device's default serial port for test results returned by the test runner. Simple tests will print their result on the serial port. In other cases a host test can, for example, judge from the results returned by the test runner whether the test passed or failed. It all depends on the test itself.
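The judging step of a simple host test can be pictured as a scan over the runner's serial output. The verdict tokens below are placeholders for illustration, not the exact protocol used by the scripts in ```mbed/tools/host_tests/```:

```python
def judge(output_lines, success_token="{success}", failure_token="{failure}"):
    """Scan test-runner output for a verdict token (tokens are illustrative)."""
    for line in output_lines:
        if success_token in line:
            return "OK"
        if failure_token in line:
            return "FAIL"
    # No verdict seen before the output ended: treat it as a timeout
    return "TIMEOUT"
```

Real host tests additionally drive the transport (serial port, TCP) and apply per-test timeouts; this sketch only shows the pass/fail decision.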
@@ -14,7 +14,7 @@ In some cases host test can be TCP server echoing packets from test runner and j
## Test suite core: singletest.py script
```singletest.py``` script located in ```mbed/workspace_tools/``` is a test suite script which allows users to compile, build tests and test runners (also supports CppUTest unit test library). Script also is responsible for test execution on devices selected by configuration files.
The ```singletest.py``` script located in ```mbed/tools/``` is a test suite script which allows users to compile and build tests and test runners (it also supports the CppUTest unit test library). The script is also responsible for test execution on devices selected by configuration files.
### Parameters of singletest.py
@@ -37,7 +37,7 @@ After connecting boards to our host machine (PC) we can check which serial ports
* ```NUCLEO_F103RB``` serial port is on ```COM11``` and disk drive is ```I:```.
If you are working under Linux your port and disk could look like /dev/ttyACM5 and /media/usb5.
This information is needed to create ```muts_all.json``` configuration file. You can create it in ```mbed/workspace_tools/``` directory:
This information is needed to create ```muts_all.json``` configuration file. You can create it in ```mbed/tools/``` directory:
```
$ touch muts_all.json
```
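Filled in for the boards mentioned above, a ```muts_all.json``` might look like this (the MUT indices, ports and disks here are illustrative; adjust them to match your own machine):

```
{
    "1": {
        "mcu": "LPC1768",
        "port": "COM4",
        "disk": "J:\\",
        "peripherals": []
    },
    "2": {
        "mcu": "NUCLEO_F103RB",
        "port": "COM11",
        "disk": "I:\\",
        "peripherals": []
    }
}
```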
@@ -67,8 +67,8 @@ Its name will be passed to ```singletest.py``` script after ```-M``` (MUTs speci
Note: We will leave field ```peripherals``` empty for the sake of this example. We will explain it later. All you need to do now is to properly fill fields ```mcu```, ```port``` and ```disk```.
Note: Please make sure files muts_all.json and test_spec.json are in workspace_tools/ directory. We will assume in this example they are.
Where to find ```mcu``` names? You can use option ```-S``` of ```build.py``` script (in ```mbed/workspace_tools/``` directory) to check all supported off-line MCUs names.
Note: Please make sure the files muts_all.json and test_spec.json are in the tools/ directory. We will assume in this example they are.
Where to find ```mcu``` names? You can use option ```-S``` of ```build.py``` script (in ```mbed/tools/``` directory) to check all supported off-line MCUs names.
Note: If you update the mbed device firmware or even disconnect / reconnect the mbed device, you may find that the serial port / disk configuration has changed. You need to update the configuration file accordingly, or you will face connection problems and the tests will obviously not run.
@@ -172,9 +172,9 @@ For our example purposes let's assume we only have Keil ARM compiler, so let's c
```
#### Run your tests
After you configure all your MUTs and compilers you are ready to run tests. Make sure your devices are connected and your configuration files reflect your current configuration (serial ports, devices). Go to workspace_tools directory in your mbed location.
After you configure all your MUTs and compilers you are ready to run tests. Make sure your devices are connected and your configuration files reflect your current configuration (serial ports, devices). Go to the ```tools``` directory in your mbed location.
```
$ cd workspace_tools/
$ cd tools/
```
and execute test suite script.
```
@@ -244,7 +244,7 @@ In below example we would like to have all test binaries called ```firmware.bin```
```
$ python singletest.py -i test_spec.json -M muts_all.json --firmware-name firmware
```
* Where to find test list? Tests are defined in file ```tests.py``` in ```mbed/workspace_tools/``` directory. ```singletest.py``` uses test metadata in ```tests.py``` to resolve libraries dependencies and build tests for proper platforms and peripherals. Option ```-R``` can be used to get test names and direct path and test configuration.
* Where to find the test list? Tests are defined in the file ```tests.py``` in the ```mbed/tools/``` directory. ```singletest.py``` uses test metadata in ```tests.py``` to resolve library dependencies and build tests for the proper platforms and peripherals. Option ```-R``` can be used to get test names, direct paths and test configuration.
```
$ python singletest.py -R
+-------------+-----------+---------------------------------------+--------------+-------------------+----------+--------------------------------------------------------+
@@ -344,7 +344,7 @@ test_spec.json:
```
Note:
* Please make sure the device is connected before we start running tests.
* Please make sure files ```muts_all.json``` and ```test_spec.json``` are in ```mbed/workspace_tools/``` directory.
* Please make sure files ```muts_all.json``` and ```test_spec.json``` are in ```mbed/tools/``` directory.
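For reference, a minimal ```test_spec.json``` maps each target to the toolchains it should be built with; the concrete target and toolchain names below are an illustration, so substitute your own:

```
{
    "targets": {
        "LPC1768": ["ARM", "GCC_ARM"],
        "NUCLEO_F103RB": ["uARM"]
    }
}
```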
Now you can call test suite and execute tests:
```
$ python singletest.py -i test_spec.json -M muts_all.json
@@ -451,7 +451,7 @@ We want to create directory structure similar to one below:
└───mbed
├───libraries
├───travis
└───workspace_tools
└───tools
```
Please go to the directory with your project. For example, it could be c:\Projects\Project.
@@ -492,7 +492,7 @@ $ git clone https://github.com/mbedmicro/mbed.git
$ hg clone https://mbed.org/users/rgrover1/code/cpputest/
```
After above three steps you should have proper directory structure. All you need to do now is to configure your ```private_settings.py``` in ```mbed/workspace_tools/``` directory. Please refer to mbed SDK build script documentation for details.
After the above three steps you should have the proper directory structure. All you need to do now is to configure ```mbed_settings.py``` in the ```mbed``` directory. Please refer to the mbed SDK build script documentation for details.
## CppUTest with mbed port
To make sure you actually have the CppUTest library with the mbed SDK port, you can go to the CppUTest ```armcc``` platform directory:
@@ -577,7 +577,7 @@ utest
```
## Define unit tests in mbed SDK test suite structure
All tests defined in test suite are described in ```mbed/workspace_tools/tests.py``` file. This file stores data structure ```TESTS``` which is a list of simple structures describing each test. Below you can find example of ```TESTS``` structure which is configuring one of the unit tests.
All tests defined in the test suite are described in the ```mbed/tools/tests.py``` file. This file stores a data structure, ```TESTS```, which is a list of simple structures describing each test. Below you can find an example of a ```TESTS``` entry which configures one of the unit tests.
```
.
.


@@ -1,6 +1,6 @@
# Adding and configuring mbed targets
mbed uses JSON as a description language for its build targets. The JSON description of mbed targets can be found in `workspace_tools/targets.json`. To better understand how a target is defined, we'll use this example (taken from `targets.json`):
mbed uses JSON as a description language for its build targets. The JSON description of mbed targets can be found in `tools/targets.json`. To better understand how a target is defined, we'll use this example (taken from `targets.json`):
```
"TEENSY3_1": {
@@ -173,4 +173,4 @@ This property is used to pass additional data to the project generator (used to
```
The `target` property of `progen` specifies the target name that must be used for the exporter (if different than the mbed target name).
For each exporter, a template for exporting can also be specified. In this example, the template used for generating a uVision project file is in a file called `uvision_microlib.uvproj.tmpl`. It is assumed that all the templates are located in `workspace_tools/export`.
For each exporter, a template for exporting can also be specified. In this example, the template used for generating a uVision project file is in a file called `uvision_microlib.uvproj.tmpl`. It is assumed that all the templates are located in `tools/export`.
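As an illustration of the description above, a ```progen``` property with a per-exporter template might be shaped like this (the target name here is invented for the example, not copied from ```targets.json```):

```
"progen": {
    "target": "my-target-name",
    "uvision": {
        "template": ["uvision_microlib.uvproj.tmpl"]
    }
}
```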


@@ -16,17 +16,17 @@ DESCRIPTION = """A set of Python scripts that can be used to compile programs wr
OWNER_NAMES = 'emilmont, bogdanm'
OWNER_EMAILS = 'Emilio.Monti@arm.com, Bogdan.Marinescu@arm.com'
# If private_settings.py exists in workspace_tools, read it in a temporary file
# If mbed_settings.py exists in tools, read it in a temporary file
# so it can be restored later
private_settings = join('workspace_tools', 'private_settings.py')
mbed_settings = join('mbed_settings.py')
backup = None
if isfile(private_settings):
if isfile(mbed_settings):
backup = TemporaryFile()
with open(private_settings, "rb") as f:
with open(mbed_settings, "rb") as f:
copyfileobj(f, backup)
# Create the correct private_settings.py for the distribution
with open(private_settings, "wt") as f:
# Create the correct mbed_settings.py for the distribution
with open(mbed_settings, "wt") as f:
f.write("from mbed_settings import *\n")
setup(name='mbed-tools',
@@ -42,8 +42,8 @@ setup(name='mbed-tools',
license=LICENSE,
install_requires=["PrettyTable>=0.7.2", "PySerial>=2.7", "IntelHex>=1.3", "colorama>=0.3.3", "Jinja2>=2.7.3", "project-generator>=0.8.11,<0.9.0", "junit-xml", "requests", "pyYAML"])
# Restore previous private_settings if needed
# Restore previous mbed_settings if needed
if backup:
backup.seek(0)
with open(private_settings, "wb") as f:
with open(mbed_settings, "wb") as f:
copyfileobj(backup, f)
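The backup/restore pattern in this ```setup.py``` hunk can be captured as a small standalone helper. The function names below are invented for the sketch and are not part of the repository:

```python
import os
from shutil import copyfileobj
from tempfile import TemporaryFile

def swap_file(path, new_text):
    """Overwrite `path` with `new_text`; return a backup handle (or None)."""
    backup = None
    if os.path.isfile(path):
        backup = TemporaryFile()
        with open(path, "rb") as f:
            copyfileobj(f, backup)  # keep the original bytes around
    with open(path, "wt") as f:
        f.write(new_text)
    return backup

def restore_file(path, backup):
    """Undo swap_file: write the original bytes back if a backup exists."""
    if backup is not None:
        backup.seek(0)
        with open(path, "wb") as f:
            copyfileobj(backup, f)
```

This mirrors what ```setup.py``` does around the ```setup(...)``` call: stash the user's ```mbed_settings.py```, ship a canonical one in the distribution, then put the original back.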

workspace_tools/build.py → tools/build.py Executable file → Normal file

@@ -27,14 +27,14 @@ ROOT = abspath(join(dirname(__file__), ".."))
sys.path.insert(0, ROOT)
from workspace_tools.toolchains import TOOLCHAINS
from workspace_tools.targets import TARGET_NAMES, TARGET_MAP
from workspace_tools.options import get_default_options_parser
from workspace_tools.build_api import build_mbed_libs, build_lib
from workspace_tools.build_api import mcu_toolchain_matrix
from workspace_tools.build_api import static_analysis_scan, static_analysis_scan_lib, static_analysis_scan_library
from workspace_tools.build_api import print_build_results
from workspace_tools.settings import CPPCHECK_CMD, CPPCHECK_MSG_FORMAT
from tools.toolchains import TOOLCHAINS
from tools.targets import TARGET_NAMES, TARGET_MAP
from tools.options import get_default_options_parser
from tools.build_api import build_library, build_mbed_libs, build_lib
from tools.build_api import mcu_toolchain_matrix
from tools.build_api import static_analysis_scan, static_analysis_scan_lib, static_analysis_scan_library
from tools.build_api import print_build_results
from tools.settings import CPPCHECK_CMD, CPPCHECK_MSG_FORMAT
if __name__ == '__main__':
start = time()
@@ -42,6 +42,15 @@ if __name__ == '__main__':
# Parse Options
parser = get_default_options_parser()
parser.add_option("--source", dest="source_dir",
default=None, help="The source (input) directory", action="append")
parser.add_option("--build", dest="build_dir",
default=None, help="The build (output) directory")
parser.add_option("--no-archive", dest="no_archive", action="store_true",
default=False, help="Do not produce archive (.ar) file, but rather .o")
# Extra libraries
parser.add_option("-r", "--rtos",
action="store_true",
@@ -119,7 +128,7 @@
help='For some commands you can use filter to filter out results')
parser.add_option("-j", "--jobs", type="int", dest="jobs",
default=1, help="Number of concurrent jobs (default 1). Use 0 for auto based on host machine's number of CPUs")
default=0, help="Number of concurrent jobs. Default: 0/auto (based on host machine's number of CPUs)")
parser.add_option("-v", "--verbose",
action="store_true",
@@ -183,7 +192,7 @@ if __name__ == '__main__':
if options.usb_host:
libraries.append("usb_host")
if options.dsp:
libraries.extend(["cmsis_dsp", "dsp"])
libraries.extend(["dsp"])
if options.fat:
libraries.extend(["fat"])
if options.ublox:
@@ -224,7 +233,18 @@
tt_id = "%s::%s" % (toolchain, target)
try:
mcu = TARGET_MAP[target]
lib_build_res = build_mbed_libs(mcu, toolchain,
if options.source_dir:
lib_build_res = build_library(options.source_dir, options.build_dir, mcu, toolchain,
options=options.options,
extra_verbose=options.extra_verbose_notify,
verbose=options.verbose,
silent=options.silent,
jobs=options.jobs,
clean=options.clean,
archive=(not options.no_archive),
macros=options.macros)
else:
lib_build_res = build_mbed_libs(mcu, toolchain,
options=options.options,
extra_verbose=options.extra_verbose_notify,
verbose=options.verbose,
@@ -232,6 +252,7 @@
jobs=options.jobs,
clean=options.clean,
macros=options.macros)
for lib_id in libraries:
build_lib(lib_id, mcu, toolchain,
options=options.options,


@@ -19,20 +19,22 @@ import re
import tempfile
import colorama
from copy import copy
from types import ListType
from shutil import rmtree
from os.path import join, exists, basename
from os.path import join, exists, basename, abspath, normpath
from os import getcwd, walk
from time import time
import fnmatch
from workspace_tools.utils import mkdir, run_cmd, run_cmd_ext, NotSupportedException
from workspace_tools.paths import MBED_TARGETS_PATH, MBED_LIBRARIES, MBED_API, MBED_HAL, MBED_COMMON
from workspace_tools.targets import TARGET_NAMES, TARGET_MAP
from workspace_tools.libraries import Library
from workspace_tools.toolchains import TOOLCHAIN_CLASSES
from tools.utils import mkdir, run_cmd, run_cmd_ext, NotSupportedException, ToolException
from tools.paths import MBED_TARGETS_PATH, MBED_LIBRARIES, MBED_API, MBED_HAL, MBED_COMMON
from tools.targets import TARGET_NAMES, TARGET_MAP
from tools.libraries import Library
from tools.toolchains import TOOLCHAIN_CLASSES
from jinja2 import FileSystemLoader
from jinja2.environment import Environment
from tools.config import Config
def prep_report(report, target_name, toolchain_name, id_name):
# Setup report keys
@@ -75,37 +77,90 @@ def add_result_to_report(report, result):
result_wrap = { 0: result }
report[target][toolchain][id_name].append(result_wrap)
def get_config(src_path, target, toolchain_name):
# Convert src_path to a list if needed
src_paths = [src_path] if type(src_path) != ListType else src_path
# We need to remove all paths which are repeated to avoid
# multiple compilations and linking with the same objects
src_paths = [src_paths[0]] + list(set(src_paths[1:]))
# Create configuration object
config = Config(target, src_paths)
# If the 'target' argument is a string, convert it to a target instance
if isinstance(target, str):
try:
target = TARGET_MAP[target]
except KeyError:
raise KeyError("Target '%s' not found" % target)
# Toolchain instance
try:
toolchain = TOOLCHAIN_CLASSES[toolchain_name](target, options=None, notify=None, macros=None, silent=True, extra_verbose=False)
except KeyError as e:
raise KeyError("Toolchain %s not supported" % toolchain_name)
# Scan src_path for config files
resources = toolchain.scan_resources(src_paths[0])
for path in src_paths[1:]:
resources.add(toolchain.scan_resources(path))
config.add_config_files(resources.json_files)
return config.get_config_data()
def build_project(src_path, build_path, target, toolchain_name,
libraries_paths=None, options=None, linker_script=None,
clean=False, notify=None, verbose=False, name=None, macros=None, inc_dirs=None,
jobs=1, silent=False, report=None, properties=None, project_id=None, project_description=None, extra_verbose=False):
jobs=1, silent=False, report=None, properties=None, project_id=None, project_description=None,
extra_verbose=False, config=None):
""" This function builds project. Project can be for example one test / UT
"""
# Toolchain instance
toolchain = TOOLCHAIN_CLASSES[toolchain_name](target, options, notify, macros, silent, extra_verbose=extra_verbose)
toolchain.VERBOSE = verbose
toolchain.jobs = jobs
toolchain.build_all = clean
# Convert src_path to a list if needed
src_paths = [src_path] if type(src_path) != ListType else src_path
# We need to remove all paths which are repeated to avoid
# multiple compilations and linking with the same objects
src_paths = [src_paths[0]] + list(set(src_paths[1:]))
PROJECT_BASENAME = basename(src_paths[0])
first_src_path = src_paths[0] if src_paths[0] != "." and src_paths[0] != "./" else getcwd()
abs_path = abspath(first_src_path)
project_name = basename(normpath(abs_path))
# If the configuration object was not yet created, create it now
config = config or Config(target, src_paths)
# If the 'target' argument is a string, convert it to a target instance
if isinstance(target, str):
try:
target = TARGET_MAP[target]
except KeyError:
raise KeyError("Target '%s' not found" % target)
# Toolchain instance
try:
toolchain = TOOLCHAIN_CLASSES[toolchain_name](target, options, notify, macros, silent, extra_verbose=extra_verbose)
except KeyError as e:
raise KeyError("Toolchain %s not supported" % toolchain_name)
toolchain.VERBOSE = verbose
toolchain.jobs = jobs
toolchain.build_all = clean
if name is None:
# We will use default project name based on project folder name
name = PROJECT_BASENAME
toolchain.info("Building project %s (%s, %s)" % (PROJECT_BASENAME.upper(), target.name, toolchain_name))
name = project_name
toolchain.info("Building project %s (%s, %s)" % (project_name, target.name, toolchain_name))
else:
# User used custom global project name to have the same name for the
toolchain.info("Building project %s to %s (%s, %s)" % (PROJECT_BASENAME.upper(), name, target.name, toolchain_name))
toolchain.info("Building project %s to %s (%s, %s)" % (project_name, name, target.name, toolchain_name))
if report != None:
start = time()
id_name = project_id.upper()
description = project_description
# If project_id is specified, use that over the default name
id_name = project_id.upper() if project_id else name.upper()
description = project_description if project_description else name
vendor_label = target.extra_labels[0]
cur_result = None
prep_report(report, target.name, toolchain_name, id_name)
@@ -139,17 +194,22 @@ def build_project(src_path, build_path, target, toolchain_name,
resources.inc_dirs.extend(inc_dirs)
else:
resources.inc_dirs.append(inc_dirs)
# Update the configuration with any .json files found while scanning
config.add_config_files(resources.json_files)
# And add the configuration macros to the toolchain
toolchain.add_macros(config.get_config_data_macros())
# Compile Sources
for path in src_paths:
src = toolchain.scan_resources(path)
objects = toolchain.compile_sources(src, build_path, resources.inc_dirs)
resources.objects.extend(objects)
# Link Program
res, needed_update = toolchain.link_program(resources, build_path, name)
res, _ = toolchain.link_program(resources, build_path, name)
if report != None and needed_update:
if report != None:
end = time()
cur_result["elapsed_time"] = end - start
cur_result["output"] = toolchain.get_output()
@@ -170,6 +230,155 @@ def build_project(src_path, build_path, target, toolchain_name,
cur_result["elapsed_time"] = end - start
toolchain_output = toolchain.get_output()
if toolchain_output:
cur_result["output"] += toolchain_output
add_result_to_report(report, cur_result)
# Let Exception propagate
raise e
def build_library(src_paths, build_path, target, toolchain_name,
dependencies_paths=None, options=None, name=None, clean=False, archive=True,
notify=None, verbose=False, macros=None, inc_dirs=None, inc_dirs_ext=None,
jobs=1, silent=False, report=None, properties=None, extra_verbose=False,
project_id=None):
""" src_path: the path of the source directory
build_path: the path of the build directory
target: ['LPC1768', 'LPC11U24', 'LPC2368']
toolchain: ['ARM', 'uARM', 'GCC_ARM', 'GCC_CR']
library_paths: List of paths to additional libraries
clean: Rebuild everything if True
notify: Notify function for logs
verbose: Write the actual tools command lines if True
inc_dirs: additional include directories which should be included in build
inc_dirs_ext: additional include directories which should be copied to library directory
"""
if type(src_paths) != ListType:
src_paths = [src_paths]
# The first path will give the name to the library
project_name = basename(src_paths[0] if src_paths[0] != "." and src_paths[0] != "./" else getcwd())
if name is None:
# We will use default project name based on project folder name
name = project_name
if report != None:
start = time()
# If project_id is specified, use that over the default name
id_name = project_id.upper() if project_id else name.upper()
description = name
vendor_label = target.extra_labels[0]
cur_result = None
prep_report(report, target.name, toolchain_name, id_name)
cur_result = create_result(target.name, toolchain_name, id_name, description)
if properties != None:
prep_properties(properties, target.name, toolchain_name, vendor_label)
for src_path in src_paths:
if not exists(src_path):
error_msg = "The library source folder does not exist: %s", src_path
if report != None:
cur_result["output"] = error_msg
cur_result["result"] = "FAIL"
add_result_to_report(report, cur_result)
raise Exception(error_msg)
try:
# Toolchain instance
toolchain = TOOLCHAIN_CLASSES[toolchain_name](target, options, macros=macros, notify=notify, silent=silent, extra_verbose=extra_verbose)
toolchain.VERBOSE = verbose
toolchain.jobs = jobs
toolchain.build_all = clean
toolchain.info("Building library %s (%s, %s)" % (name, target.name, toolchain_name))
# Scan Resources
resources = None
for path in src_paths:
# Scan resources
resource = toolchain.scan_resources(path)
# Copy headers, objects and static libraries - all files needed for static lib
toolchain.copy_files(resource.headers, build_path, rel_path=resource.base_path)
toolchain.copy_files(resource.objects, build_path, rel_path=resource.base_path)
toolchain.copy_files(resource.libraries, build_path, rel_path=resource.base_path)
if resource.linker_script:
toolchain.copy_files(resource.linker_script, build_path, rel_path=resource.base_path)
# Extend resources collection
if not resources:
resources = resource
else:
resources.add(resource)
# We need to add if necessary additional include directories
if inc_dirs:
if type(inc_dirs) == ListType:
resources.inc_dirs.extend(inc_dirs)
else:
resources.inc_dirs.append(inc_dirs)
# Add extra include directories / files which are required by library
# This files usually are not in the same directory as source files so
# previous scan will not include them
if inc_dirs_ext is not None:
for inc_ext in inc_dirs_ext:
resources.add(toolchain.scan_resources(inc_ext))
# Dependencies Include Paths
if dependencies_paths is not None:
for path in dependencies_paths:
lib_resources = toolchain.scan_resources(path)
resources.inc_dirs.extend(lib_resources.inc_dirs)
if archive:
# Use temp path when building archive
tmp_path = join(build_path, '.temp')
mkdir(tmp_path)
else:
tmp_path = build_path
# Handle configuration
config = Config(target)
# Update the configuration with any .json files found while scanning
config.add_config_files(resources.json_files)
# And add the configuration macros to the toolchain
toolchain.add_macros(config.get_config_data_macros())
# Compile Sources
for path in src_paths:
src = toolchain.scan_resources(path)
objects = toolchain.compile_sources(src, abspath(tmp_path), resources.inc_dirs)
resources.objects.extend(objects)
if archive:
toolchain.build_library(objects, build_path, name)
if report != None:
end = time()
cur_result["elapsed_time"] = end - start
cur_result["output"] = toolchain.get_output()
cur_result["result"] = "OK"
add_result_to_report(report, cur_result)
except Exception, e:
if report != None:
end = time()
if isinstance(e, ToolException):
cur_result["result"] = "FAIL"
elif isinstance(e, NotSupportedException):
cur_result["result"] = "NOT_SUPPORTED"
cur_result["elapsed_time"] = end - start
toolchain_output = toolchain.get_output()
if toolchain_output:
cur_result["output"] += toolchain_output
@@ -181,11 +390,32 @@ def build_project(src_path, build_path, target, toolchain_name,
# Let Exception propagate
raise e
######################
### Legacy methods ###
######################
def build_library(src_paths, build_path, target, toolchain_name,
dependencies_paths=None, options=None, name=None, clean=False,
notify=None, verbose=False, macros=None, inc_dirs=None, inc_dirs_ext=None,
jobs=1, silent=False, report=None, properties=None, extra_verbose=False):
def build_lib(lib_id, target, toolchain_name, options=None, verbose=False, clean=False, macros=None, notify=None, jobs=1, silent=False, report=None, properties=None, extra_verbose=False):
""" Legacy method for building mbed libraries
Function builds library in proper directory using all dependencies and macros defined by user.
"""
lib = Library(lib_id)
if not lib.is_supported(target, toolchain_name):
print 'Library "%s" is not yet supported on target %s with toolchain %s' % (lib_id, target.name, toolchain_name)
return False
# We need to combine macros from parameter list with macros from library definition
MACROS = lib.macros if lib.macros else []
if macros:
macros.extend(MACROS)
else:
macros = MACROS
src_paths = lib.source_dir
build_path = lib.build_dir
dependencies_paths = lib.dependencies
inc_dirs = lib.inc_dirs
inc_dirs_ext = lib.inc_dirs_ext
""" src_path: the path of the source directory
build_path: the path of the build directory
target: ['LPC1768', 'LPC11U24', 'LPC2368']
@@ -300,34 +530,6 @@ def build_library(src_paths, build_path, target, toolchain_name,
# Let Exception propagate
raise e
def build_lib(lib_id, target, toolchain, options=None, verbose=False, clean=False, macros=None, notify=None, jobs=1, silent=False, report=None, properties=None, extra_verbose=False):
""" Wrapper for build_library function.
Function builds library in proper directory using all dependencies and macros defined by user.
"""
lib = Library(lib_id)
if lib.is_supported(target, toolchain):
# We need to combine macros from parameter list with macros from library definition
MACROS = lib.macros if lib.macros else []
if macros:
MACROS.extend(macros)
return build_library(lib.source_dir, lib.build_dir, target, toolchain, lib.dependencies, options,
verbose=verbose,
silent=silent,
clean=clean,
macros=MACROS,
notify=notify,
inc_dirs=lib.inc_dirs,
inc_dirs_ext=lib.inc_dirs_ext,
jobs=jobs,
report=report,
properties=properties,
extra_verbose=extra_verbose)
else:
print 'Library "%s" is not yet supported on target %s with toolchain %s' % (lib_id, target.name, toolchain)
return False
# We do have unique legacy conventions about how we build and package the mbed library
def build_mbed_libs(target, toolchain_name, options=None, verbose=False, clean=False, macros=None, notify=None, jobs=1, silent=False, report=None, properties=None, extra_verbose=False):
""" Function returns True is library was built and false if building was skipped """
@@ -417,12 +619,12 @@ def build_mbed_libs(target, toolchain_name, options=None, verbose=False, clean=F
for o in separate_objects:
objects.remove(o)
needed_update = toolchain.build_library(objects, BUILD_TOOLCHAIN, "mbed")
toolchain.build_library(objects, BUILD_TOOLCHAIN, "mbed")
for o in separate_objects:
toolchain.copy_files(o, BUILD_TOOLCHAIN)
if report != None and needed_update:
if report != None:
end = time()
cur_result["elapsed_time"] = end - start
cur_result["output"] = toolchain.get_output()
@@ -449,6 +651,7 @@ def build_mbed_libs(target, toolchain_name, options=None, verbose=False, clean=F
# Let Exception propagate
raise e
def get_unique_supported_toolchains():
""" Get list of all unique toolchains supported by targets """
unique_supported_toolchains = []
@@ -734,3 +937,63 @@ def write_build_report(build_report, template_filename, filename):
with open(filename, 'w+') as f:
f.write(template.render(failing_builds=build_report_failing, passing_builds=build_report_passing))
def scan_for_source_paths(path, exclude_paths=None):
ignorepatterns = []
paths = []
def is_ignored(file_path):
for pattern in ignorepatterns:
if fnmatch.fnmatch(file_path, pattern):
return True
return False
""" os.walk(top[, topdown=True[, onerror=None[, followlinks=False]]])
When topdown is True, the caller can modify the dirnames list in-place
(perhaps using del or slice assignment), and walk() will only recurse into
the subdirectories whose names remain in dirnames; this can be used to prune
the search, impose a specific order of visiting, or even to inform walk()
about directories the caller creates or renames before it resumes walk()
again. Modifying dirnames when topdown is False is ineffective, because in
bottom-up mode the directories in dirnames are generated before dirpath
itself is generated.
"""
for root, dirs, files in walk(path, followlinks=True):
# Remove ignored directories
# Check if folder contains .mbedignore
if ".mbedignore" in files :
with open (join(root,".mbedignore"), "r") as f:
lines=f.readlines()
lines = [l.strip() for l in lines] # Strip whitespaces
lines = [l for l in lines if l != ""] # Strip empty lines
lines = [l for l in lines if not re.match("^#",l)] # Strip comment lines
# Append root path to glob patterns
# and append patterns to ignorepatterns
ignorepatterns.extend([join(root,line.strip()) for line in lines])
for d in copy(dirs):
dir_path = join(root, d)
# Always ignore hidden directories
if d.startswith('.'):
dirs.remove(d)
# Remove dirs that already match the ignorepatterns
# to avoid traversing into them and to prevent them
# from appearing in the include path.
if is_ignored(join(dir_path,"")):
dirs.remove(d)
if exclude_paths:
for exclude_path in exclude_paths:
rel_path = relpath(dir_path, exclude_path)
if not (rel_path.startswith('..')):
dirs.remove(d)
break
# Add root to include paths
paths.append(root)
return paths
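The `.mbedignore` handling above boils down to glob matching with `fnmatch` against patterns rooted at the directory containing the ignore file. A minimal standalone sketch (the file contents and paths here are hypothetical):

```python
import fnmatch

# Hypothetical .mbedignore contents: one glob pattern per line;
# '#' lines are comments and blank lines are skipped, as in the scan above.
mbedignore_lines = ["# generated sources", "", "gen/*", "*.bak"]

root = "/project/lib"
# Patterns are rooted at the directory containing the .mbedignore file
patterns = [root + "/" + l.strip() for l in mbedignore_lines
            if l.strip() and not l.strip().startswith("#")]

def is_ignored(file_path):
    # Same check the scanner applies to each candidate path
    return any(fnmatch.fnmatch(file_path, p) for p in patterns)

print(is_ignored("/project/lib/gen/parser.c"))  # True
print(is_ignored("/project/lib/src/main.c"))    # False
```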


@@ -25,14 +25,14 @@ import json
ROOT = abspath(join(dirname(__file__), ".."))
sys.path.insert(0, ROOT)
from workspace_tools.build_api import build_mbed_libs
from workspace_tools.build_api import write_build_report
from workspace_tools.targets import TARGET_MAP, TARGET_NAMES
from workspace_tools.test_exporters import ReportExporter, ResultExporterType
from workspace_tools.test_api import SingleTestRunner
from workspace_tools.test_api import singletest_in_cli_mode
from workspace_tools.paths import TEST_DIR
from workspace_tools.tests import TEST_MAP
from tools.build_api import build_mbed_libs
from tools.build_api import write_build_report
from tools.targets import TARGET_MAP, TARGET_NAMES
from tools.test_exporters import ReportExporter, ResultExporterType
from tools.test_api import SingleTestRunner
from tools.test_api import singletest_in_cli_mode
from tools.paths import TEST_DIR
from tools.tests import TEST_MAP
OFFICIAL_MBED_LIBRARY_BUILD = (
('LPC11U24', ('ARM', 'uARM', 'GCC_ARM', 'IAR')),


@@ -139,7 +139,7 @@ def run_builds(dry_run):
toolchain_list = build["toolchains"]
if type(toolchain_list) != type([]): toolchain_list = [toolchain_list]
for toolchain in toolchain_list:
cmdline = "python workspace_tools/build.py -m %s -t %s -j 4 -c --silent "% (build["target"], toolchain)
cmdline = "python tools/build.py -m %s -t %s -j 4 -c --silent "% (build["target"], toolchain)
libs = build.get("libs", [])
if libs:
cmdline = cmdline + " ".join(["--" + l for l in libs])
@@ -163,14 +163,14 @@ def run_test_linking(dry_run):
for test_lib in tests:
test_names = tests[test_lib]
test_lib_switch = "--" + test_lib if test_lib else ""
cmdline = "python workspace_tools/make.py -m %s -t %s -c --silent %s -n %s " % (link["target"], toolchain, test_lib_switch, ",".join(test_names))
cmdline = "python tools/make.py -m %s -t %s -c --silent %s -n %s " % (link["target"], toolchain, test_lib_switch, ",".join(test_names))
print "Executing: " + cmdline
if not dry_run:
if os.system(cmdline) != 0:
sys.exit(1)
def run_test_testsuite(dry_run):
cmdline = "python workspace_tools/singletest.py --version"
cmdline = "python tools/singletest.py --version"
print "Executing: " + cmdline
if not dry_run:
if os.system(cmdline) != 0:


@@ -286,24 +286,24 @@ from buildbot.config import BuilderConfig
c['builders'] = []
copy_private_settings = ShellCommand(name = "copy private_settings.py",
command = "cp ../private_settings.py workspace_tools/private_settings.py",
copy_mbed_settings = ShellCommand(name = "copy mbed_settings.py",
command = "cp ../mbed_settings.py mbed_settings.py",
haltOnFailure = True,
description = "Copy private_settings.py")
description = "Copy mbed_settings.py")
mbed_build_release = BuildFactory()
mbed_build_release.addStep(git_clone)
mbed_build_release.addStep(copy_private_settings)
mbed_build_release.addStep(copy_mbed_settings)
for target_name, toolchains in OFFICIAL_MBED_LIBRARY_BUILD:
builder_name = "All_TC_%s" % target_name
mbed_build = BuildFactory()
mbed_build.addStep(git_clone)
mbed_build.addStep(copy_private_settings)
mbed_build.addStep(copy_mbed_settings)
# Adding all chains for target
for toolchain in toolchains:
build_py = BuildCommand(name = "Build %s using %s" % (target_name, toolchain),
command = "python workspace_tools/build.py -m %s -t %s" % (target_name, toolchain),
command = "python tools/build.py -m %s -t %s" % (target_name, toolchain),
haltOnFailure = True,
warnOnWarnings = True,
description = "Building %s using %s" % (target_name, toolchain),
@@ -314,12 +314,12 @@ for target_name, toolchains in OFFICIAL_MBED_LIBRARY_BUILD:
if target_name in OFFICIAL_MBED_TESTBED_SUPPORTED_HARDWARE:
copy_example_test_spec_json = ShellCommand(name = "Copy example_test_spec.json",
command = "cp ../example_test_spec.json workspace_tools/data/example_test_spec.json",
command = "cp ../example_test_spec.json tools/data/example_test_spec.json",
haltOnFailure = True,
description = "Copy example_test_spec.json")
autotest_py = ShellCommand(name = "Running autotest.py for %s" % (target_name),
command = "python workspace_tools/autotest.py workspace_tools/data/example_test_spec.json",
command = "python tools/autotest.py tools/data/example_test_spec.json",
haltOnFailure = True,
description = "Running autotest.py")
@@ -337,12 +337,12 @@ for target_name, toolchains in OFFICIAL_MBED_LIBRARY_BUILD:
factory=mbed_build))
# copy_example_test_spec_json = ShellCommand(name = "Copy example_test_spec.json",
# command = "cp ../example_test_spec.json workspace_tools/data/example_test_spec.json",
# command = "cp ../example_test_spec.json tools/data/example_test_spec.json",
# haltOnFailure = True,
# description = "Copy example_test_spec.json")
singletest_py = TestCommand(name = "Running Target Tests",
command = "python workspace_tools/singletest.py -i workspace_tools/test_spec.json -M workspace_tools/muts_all.json",
command = "python tools/singletest.py -i tools/test_spec.json -M tools/muts_all.json",
haltOnFailure = True,
warnOnWarnings = True,
description = "Running Target Tests",

tools/config.py (new file)

@@ -0,0 +1,325 @@
"""
mbed SDK
Copyright (c) 2016 ARM Limited
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
# Implementation of mbed configuration mechanism
from copy import deepcopy
from collections import OrderedDict
from tools.utils import json_file_to_dict, ToolException
from tools.targets import Target
import os
# Base class for all configuration exceptions
class ConfigException(Exception):
pass
# This class keeps information about a single configuration parameter
class ConfigParameter:
# name: the name of the configuration parameter
# data: the data associated with the configuration parameter
# unit_name: the unit (target/library/application) that defines this parameter
# unit_kind: the kind of the unit ("target", "library" or "application")
def __init__(self, name, data, unit_name, unit_kind):
self.name = self.get_full_name(name, unit_name, unit_kind, allow_prefix = False)
self.defined_by = self.get_display_name(unit_name, unit_kind)
self.set_by = self.defined_by
self.help_text = data.get("help", None)
self.value = data.get("value", None)
self.required = data.get("required", False)
self.macro_name = data.get("macro_name", "MBED_CONF_%s" % self.sanitize(self.name.upper()))
# Return the full (prefixed) name of a parameter.
# If the parameter already has a prefix, check if it is valid
# name: the simple (unqualified) name of the parameter
# unit_name: the unit (target/library/application) that defines this parameter
# unit_kind: the kind of the unit ("target", "library" or "application")
# label: the name of the label in the 'target_overrides' section (optional)
# allow_prefix: True to allow the original name to have a prefix, False otherwise
@staticmethod
def get_full_name(name, unit_name, unit_kind, label = None, allow_prefix = True):
if name.find('.') == -1: # the name is not prefixed
if unit_kind == "target":
prefix = "target."
elif unit_kind == "application":
prefix = "app."
else:
prefix = unit_name + '.'
return prefix + name
# The name has a prefix, so check if it is valid
if not allow_prefix:
raise ConfigException("Invalid parameter name '%s' in '%s'" % (name, ConfigParameter.get_display_name(unit_name, unit_kind, label)))
temp = name.split(".")
# Check if the parameter syntax is correct (must be unit_name.parameter_name)
if len(temp) != 2:
raise ConfigException("Invalid parameter name '%s' in '%s'" % (name, ConfigParameter.get_display_name(unit_name, unit_kind, label)))
prefix = temp[0]
# Check if the given parameter prefix matches the expected prefix
if (unit_kind == "library" and prefix != unit_name) or (unit_kind == "target" and prefix != "target"):
raise ConfigException("Invalid prefix '%s' for parameter name '%s' in '%s'" % (prefix, name, ConfigParameter.get_display_name(unit_name, unit_kind, label)))
return name
# Return the name displayed for a unit when interrogating the origin
# and the last set place of a parameter
# unit_name: the unit (target/library/application) that defines this parameter
# unit_kind: the kind of the unit ("target", "library" or "application")
# label: the name of the label in the 'target_overrides' section (optional)
@staticmethod
def get_display_name(unit_name, unit_kind, label = None):
if unit_kind == "target":
return "target:" + unit_name
elif unit_kind == "application":
return "application%s" % ("[%s]" % label if label else "")
else: # library
return "library:%s%s" % (unit_name, "[%s]" % label if label else "")
# "Sanitize" a name so that it is a valid C macro name
# Currently it simply replaces '.' and '-' with '_'
# name: the un-sanitized name.
@staticmethod
def sanitize(name):
return name.replace('.', '_').replace('-', '_')
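Taken together, `get_full_name` and `sanitize` determine the C macro name a parameter ends up with. A standalone sketch of the default-prefix path (the library and parameter names below are made up):

```python
# Mirror of the naming rules above: unqualified names get a unit prefix,
# then '.' and '-' are replaced with '_' to form a valid C macro name.
def get_full_name(name, unit_name, unit_kind):
    if "." not in name:  # unqualified name: add the unit prefix
        prefix = {"target": "target.",
                  "application": "app."}.get(unit_kind, unit_name + ".")
        return prefix + name
    return name  # already qualified

def sanitize(name):
    return name.replace('.', '_').replace('-', '_')

full = get_full_name("stack-size", "my-lib", "library")
macro = "MBED_CONF_%s" % sanitize(full.upper())
print(full)   # my-lib.stack-size
print(macro)  # MBED_CONF_MY_LIB_STACK_SIZE
```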
# Sets a value for this parameter, remember the place where it was set
# value: the value of the parameter
# unit_name: the unit (target/library/application) that defines this parameter
# unit_kind: the kind of the unit ("target", "library" or "application")
# label: the name of the label in the 'target_overrides' section (optional)
def set_value(self, value, unit_name, unit_kind, label = None):
self.value = value
self.set_by = self.get_display_name(unit_name, unit_kind, label)
# Return the string representation of this configuration parameter
def __str__(self):
if self.value is not None:
return '%s = %s (macro name: "%s")' % (self.name, self.value, self.macro_name)
else:
return '%s has no value' % self.name
# Return a verbose description of this configuration parameter as a string
def get_verbose_description(self):
desc = "Name: %s%s\n" % (self.name, " (required parameter)" if self.required else "")
if self.help_text:
desc += " Description: %s\n" % self.help_text
desc += " Defined by: %s\n" % self.defined_by
if not self.value:
return desc + " No value set"
desc += " Macro name: %s\n" % self.macro_name
desc += " Value: %s (set by %s)" % (self.value, self.set_by)
return desc
# A representation of a configuration macro. It handles both macros without a value (MACRO)
# and with a value (MACRO=VALUE)
class ConfigMacro:
def __init__(self, name, unit_name, unit_kind):
self.name = name
self.defined_by = ConfigParameter.get_display_name(unit_name, unit_kind)
if name.find("=") != -1:
tmp = name.split("=")
if len(tmp) != 2:
raise ValueError("Invalid macro definition '%s' in '%s'" % (name, self.defined_by))
self.macro_name = tmp[0]
else:
self.macro_name = name
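A macro definition is therefore either a bare name or `NAME=VALUE`, and only the part before `=` identifies it for duplicate checking. A small standalone sketch of that extraction:

```python
# Mirrors ConfigMacro's name extraction: bare macros pass through,
# NAME=VALUE keeps only NAME, anything else is rejected.
def macro_name(definition):
    if "=" in definition:
        parts = definition.split("=")
        if len(parts) != 2:
            raise ValueError("Invalid macro definition '%s'" % definition)
        return parts[0]
    return definition

print(macro_name("USE_FEATURE_X"))    # USE_FEATURE_X
print(macro_name("BUFFER_SIZE=256"))  # BUFFER_SIZE
```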
# 'Config' implements the mbed configuration mechanism
class Config:
# Libraries and applications have different names for their configuration files
__mbed_app_config_name = "mbed_app.json"
__mbed_lib_config_name = "mbed_lib.json"
# Allowed keys in configuration dictionaries
# (targets can have any kind of keys, so this validation is not applicable to them)
__allowed_keys = {
"library": set(["name", "config", "target_overrides", "macros", "__config_path"]),
"application": set(["config", "custom_targets", "target_overrides", "macros", "__config_path"])
}
# The initialization arguments for Config are:
# target: the name of the mbed target used for this configuration instance
# top_level_dirs: a list of top level source directories (where mbed_app.json could be found)
# __init__ will look for the application configuration file in top_level_dirs.
# If found once, it'll parse it and check if it has a 'custom_targets' key.
# If it does, it'll update the list of targets as needed.
# If found more than once, an exception is raised
# top_level_dirs can be None (in this case, mbed_app.json will not be searched)
def __init__(self, target, top_level_dirs = []):
app_config_location = None
for s in (top_level_dirs or []):
full_path = os.path.join(s, self.__mbed_app_config_name)
if os.path.isfile(full_path):
if app_config_location is not None:
raise ConfigException("Duplicate '%s' file in '%s' and '%s'" % (self.__mbed_app_config_name, app_config_location, full_path))
else:
app_config_location = full_path
self.app_config_data = json_file_to_dict(app_config_location) if app_config_location else {}
# Check the keys in the application configuration data
unknown_keys = set(self.app_config_data.keys()) - self.__allowed_keys["application"]
if unknown_keys:
raise ConfigException("Unknown key(s) '%s' in %s" % (",".join(unknown_keys), self.__mbed_app_config_name))
# Update the list of targets with the ones defined in the application config, if applicable
Target.add_py_targets(self.app_config_data.get("custom_targets", {}))
self.lib_config_data = {}
# Make sure that each config is processed only once
self.processed_configs = {}
self.target = target if isinstance(target, str) else target.name
self.target_labels = Target.get_target(self.target).get_labels()
# Add one or more configuration files
def add_config_files(self, flist):
for f in flist:
if not f.endswith(self.__mbed_lib_config_name):
continue
full_path = os.path.normpath(os.path.abspath(f))
# Check that we didn't already process this file
if self.processed_configs.has_key(full_path):
continue
self.processed_configs[full_path] = True
# Read the library configuration and add a "__full_config_path" attribute to it
cfg = json_file_to_dict(f)
cfg["__config_path"] = full_path
# If there's already a configuration for a module with the same name, exit with error
if self.lib_config_data.has_key(cfg["name"]):
raise ConfigException("Library name '%s' is not unique (defined in '%s' and '%s')" % (cfg["name"], full_path, self.lib_config_data[cfg["name"]]["__config_path"]))
self.lib_config_data[cfg["name"]] = cfg
# Helper function: process a "config" section in either a target, a library or the application
# data: a dictionary with the configuration parameters
# params: storage for the discovered configuration parameters
# unit_name: the unit (target/library/application) that defines this parameter
# unit_kind: the kind of the unit ("target", "library" or "application")
def _process_config_parameters(self, data, params, unit_name, unit_kind):
for name, v in data.items():
full_name = ConfigParameter.get_full_name(name, unit_name, unit_kind)
# If the parameter was already defined, raise an error
if full_name in params:
raise ConfigException("Parameter name '%s' defined in both '%s' and '%s'" % (name, ConfigParameter.get_display_name(unit_name, unit_kind), params[full_name].defined_by))
# Otherwise add it to the list of known parameters
# If "v" is not a dictionary, this is a shortcut definition, otherwise it is a full definition
params[full_name] = ConfigParameter(name, v if isinstance(v, dict) else {"value": v}, unit_name, unit_kind)
return params
# Helper function: process "config" and "target_overrides" in a given dictionary
# data: the configuration data of the library/application
# params: storage for the discovered configuration parameters
# unit_name: the unit (library/application) that defines this parameter
# unit_kind: the kind of the unit ("library" or "application")
def _process_config_and_overrides(self, data, params, unit_name, unit_kind):
self._process_config_parameters(data.get("config", {}), params, unit_name, unit_kind)
for label, overrides in data.get("target_overrides", {}).items():
# If the label is defined by the target or it has the special value "*", process the overrides
if (label == '*') or (label in self.target_labels):
for name, v in overrides.items():
# Get the full name of the parameter
full_name = ConfigParameter.get_full_name(name, unit_name, unit_kind, label)
# If an attempt is made to override a parameter that isn't defined, raise an error
if not full_name in params:
raise ConfigException("Attempt to override undefined parameter '%s' in '%s'" % (full_name, ConfigParameter.get_display_name(unit_name, unit_kind, label)))
params[full_name].set_value(v, unit_name, unit_kind, label)
return params
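The label matching above means an override block applies when its label is `'*'` or one of the target's labels. A standalone sketch with a made-up target and a dictionary mimicking a hypothetical `mbed_lib.json` `target_overrides` section (note the wildcard entry is listed first, so the label-specific value wins here):

```python
# Labels a hypothetical target might carry
target_labels = ["K64F", "FRDM", "Freescale"]

target_overrides = {
    "*":     {"buffer-size": 128},  # applies to every target
    "K64F":  {"buffer-size": 512},  # applies because K64F is a label
    "NRF51": {"buffer-size": 64},   # skipped for this target
}

# Same filter as the loop above: apply '*' and matching-label blocks
values = {}
for label, overrides in target_overrides.items():
    if label == '*' or label in target_labels:
        values.update(overrides)

print(values["buffer-size"])  # 512
```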
# Read and interpret configuration data defined by targets
def get_target_config_data(self):
# We consider the resolution order for our target and sort it by level reversed,
# so that we first look at the top level target (the parent), then its direct children,
# then the children's children and so on, until we reach self.target
# TODO: this might not work so well in some multiple inheritance scenarios
# At each step, look at two keys of the target data:
# - config_parameters: used to define new configuration parameters
# - config_overrides: used to override already defined configuration parameters
params, json_data = {}, Target.get_json_target_data()
resolution_order = [e[0] for e in sorted(Target.get_target(self.target).resolution_order, key = lambda e: e[1], reverse = True)]
for tname in resolution_order:
# Read the target data directly from its description
t = json_data[tname]
# Process definitions first
self._process_config_parameters(t.get("config", {}), params, tname, "target")
# Then process overrides
for name, v in t.get("overrides", {}).items():
full_name = ConfigParameter.get_full_name(name, tname, "target")
# If the parameter name is not defined or if there isn't a path from this target to the target where the
# parameter was defined in the target inheritance tree, raise an error
# We need to use 'defined_by[7:]' to remove the "target:" prefix from defined_by
if (not full_name in params) or (not params[full_name].defined_by[7:] in Target.get_target(tname).resolution_order_names):
raise ConfigException("Attempt to override undefined parameter '%s' in '%s'" % (name, ConfigParameter.get_display_name(tname, "target")))
# Otherwise update the value of the parameter
params[full_name].set_value(v, tname, "target")
return params
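The sort by level, reversed, is what makes inheritance work top-down: the most distant ancestor is processed first and the concrete target's own settings are applied last. A tiny sketch with hypothetical (name, level) pairs:

```python
# Hypothetical resolution_order pairs: (target name, distance from self.target)
resolution_order = [("K64F", 0), ("Target", 2), ("MCU_K64F", 1)]

# Same expression as above: sort by level descending, keep only the names
ordered = [e[0] for e in sorted(resolution_order,
                                key=lambda e: e[1], reverse=True)]
print(ordered)  # ['Target', 'MCU_K64F', 'K64F']
```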
# Helper function: process a macro definition, checking for incompatible duplicate definitions
# mlist: list of macro names to process
# macros: dictionary with currently discovered macros
# unit_name: the unit (library/application) that defines this macro
# unit_kind: the kind of the unit ("library" or "application")
def _process_macros(self, mlist, macros, unit_name, unit_kind):
for mname in mlist:
m = ConfigMacro(mname, unit_name, unit_kind)
if (m.macro_name in macros) and (macros[m.macro_name].name != mname):
# Found an incompatible definition of the macro in another module, so raise an error
full_unit_name = ConfigParameter.get_display_name(unit_name, unit_kind)
raise ConfigException("Macro '%s' defined in both '%s' and '%s' with incompatible values" % (m.macro_name, macros[m.macro_name].defined_by, full_unit_name))
macros[m.macro_name] = m
# Read and interpret configuration data defined by libs
# It is assumed that "add_config_files" above was already called and the library configuration data
# exists in self.lib_config_data
def get_lib_config_data(self):
all_params, macros = {}, {}
for lib_name, lib_data in self.lib_config_data.items():
unknown_keys = set(lib_data.keys()) - self.__allowed_keys["library"]
if unknown_keys:
raise ConfigException("Unknown key(s) '%s' in %s" % (",".join(unknown_keys), lib_name))
all_params.update(self._process_config_and_overrides(lib_data, {}, lib_name, "library"))
self._process_macros(lib_data.get("macros", []), macros, lib_name, "library")
return all_params, macros
# Read and interpret the configuration data defined by the application
# The application can override any configuration parameter, as well as define its own configuration data
# params: the dictionary with configuration parameters found so far (in the target and in libraries)
# macros: the list of macros defined in the configuration
def get_app_config_data(self, params, macros):
app_cfg = self.app_config_data
# The application can have a "config" and a "target_overrides" section just like a library
self._process_config_and_overrides(app_cfg, params, "app", "application")
# The application can also define macros
self._process_macros(app_cfg.get("macros", []), macros, "app", "application")
# Return the configuration data in two parts:
# - params: a dictionary with (name, ConfigParam) entries
# - macros: the list of macros defined with "macros" in libraries and in the application
def get_config_data(self):
all_params = self.get_target_config_data()
lib_params, macros = self.get_lib_config_data()
all_params.update(lib_params)
self.get_app_config_data(all_params, macros)
return all_params, [m.name for m in macros.values()]
# Helper: verify if there are any required parameters without a value in 'params'
def _check_required_parameters(self, params):
for p in params.values():
if p.required and (p.value is None):
raise ConfigException("Required parameter '%s' defined by '%s' doesn't have a value" % (p.name, p.defined_by))
# Return the macro definitions generated for a dictionary of configuration parameters
# params: a dictionary of (name, ConfigParameters instance) mappings
@staticmethod
def parameters_to_macros(params):
return ['%s=%s' % (m.macro_name, m.value) for m in params.values() if m.value is not None]
# Return the configuration data converted to a list of C macros
def get_config_data_macros(self):
params, macros = self.get_config_data()
self._check_required_parameters(params)
return macros + self.parameters_to_macros(params)
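End to end, `get_config_data_macros` turns every parameter that has a value into a `NAME=VALUE` macro and skips unset ones. A standalone sketch with a made-up parameter set (the `Param` stand-in keeps only the fields `parameters_to_macros` reads):

```python
# Minimal stand-in for ConfigParameter, for illustration only
class Param(object):
    def __init__(self, macro_name, value):
        self.macro_name = macro_name
        self.value = value

params = {
    "my-lib.buffer-size": Param("MBED_CONF_MY_LIB_BUFFER_SIZE", 512),
    "my-lib.debug":       Param("MBED_CONF_MY_LIB_DEBUG", None),  # unset: skipped
}

# Same list comprehension as parameters_to_macros above
macros = ['%s=%s' % (p.macro_name, p.value)
          for p in params.values() if p.value is not None]
print(macros)  # ['MBED_CONF_MY_LIB_BUFFER_SIZE=512']
```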


@@ -14,7 +14,7 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from workspace_tools.targets import TARGETS
from tools.targets import TARGETS
DEFAULT_SUPPORT = {}
CORTEX_ARM_SUPPORT = {}


@@ -17,7 +17,7 @@ limitations under the License.
from os.path import join
from jinja2 import Template
from workspace_tools.paths import TOOLS_DATA, MBED_RPC
from tools.paths import TOOLS_DATA, MBED_RPC
RPC_TEMPLATES_PATH = join(TOOLS_DATA, "rpc")



@@ -19,10 +19,10 @@ from os.path import join, exists, basename
from shutil import copytree, rmtree, copy
import yaml
from workspace_tools.utils import mkdir
from workspace_tools.export import uvision4, uvision5, codered, gccarm, ds5_5, iar, emblocks, coide, kds, zip, simplicityv3, atmelstudio, sw4stm32, e2studio
from workspace_tools.export.exporters import zip_working_directory_and_clean_up, OldLibrariesException
from workspace_tools.targets import TARGET_NAMES, EXPORT_MAP, TARGET_MAP
from tools.utils import mkdir
from tools.export import uvision4, uvision5, codered, gccarm, ds5_5, iar, emblocks, coide, kds, zip, simplicityv3, atmelstudio, sw4stm32, e2studio
from tools.export.exporters import zip_working_directory_and_clean_up, OldLibrariesException
from tools.targets import TARGET_NAMES, EXPORT_MAP, TARGET_MAP
from project_generator_definitions.definitions import ProGenDef
@@ -57,7 +57,7 @@ def online_build_url_resolver(url):
def export(project_path, project_name, ide, target, destination='/tmp/',
tempdir=None, clean=True, extra_symbols=None, build_url_resolver=online_build_url_resolver):
tempdir=None, clean=True, extra_symbols=None, zip=True, relative=False, build_url_resolver=online_build_url_resolver):
# Convention: we are using capitals for toolchain and target names
if target is not None:
target = target.upper()
@@ -74,7 +74,7 @@ def export(project_path, project_name, ide, target, destination='/tmp/',
try:
ide = "zip"
exporter = zip.ZIP(target, tempdir, project_name, build_url_resolver, extra_symbols=extra_symbols)
exporter.scan_and_copy_resources(project_path, tempdir)
exporter.scan_and_copy_resources(project_path, tempdir, relative)
exporter.generate()
report['success'] = True
except OldLibrariesException, e:
@@ -101,7 +101,7 @@ def export(project_path, project_name, ide, target, destination='/tmp/',
# target checked, export
try:
exporter = Exporter(target, tempdir, project_name, build_url_resolver, extra_symbols=extra_symbols)
exporter.scan_and_copy_resources(project_path, tempdir)
exporter.scan_and_copy_resources(project_path, tempdir, relative)
exporter.generate()
report['success'] = True
except OldLibrariesException, e:
@@ -133,8 +133,12 @@ def export(project_path, project_name, ide, target, destination='/tmp/',
# add readme file to every offline export.
open(os.path.join(tempdir, 'GettingStarted.htm'),'w').write('<meta http-equiv="refresh" content="0; url=http://mbed.org/handbook/Getting-Started-mbed-Exporters#%s"/>'% (ide))
# copy .hgignore file to exported directory as well.
copy(os.path.join(exporter.TEMPLATE_DIR,'.hgignore'),tempdir)
zip_path = zip_working_directory_and_clean_up(tempdir, destination, project_name, clean)
if exists(os.path.join(exporter.TEMPLATE_DIR,'.hgignore')):
copy(os.path.join(exporter.TEMPLATE_DIR,'.hgignore'), tempdir)
if zip:
zip_path = zip_working_directory_and_clean_up(tempdir, destination, project_name, clean)
else:
zip_path = destination
return zip_path, report

