There are two shell-like things that we care about:
bash: does environment variable expansion with $
batch: does environment variable expansion by surrounding a name with %
So, to detect batch, try to expand a variable in the shell with $.
If you get a literal $ back, you are in batch!
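A minimal Python sketch of that heuristic (the command and helper name are illustrative, not the actual tool code):
```
import subprocess

def running_under_batch():
    # Ask the current shell to expand a $-style variable. bash expands it,
    # while cmd.exe (batch) echoes the literal '$FOO' back, so a '$' in the
    # output suggests we are talking to batch.
    out = subprocess.check_output("echo $FOO", shell=True, universal_newlines=True)
    return "$" in out
```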
When "default_build": "small" is configured in target.json:
- build.py + make.py:
  - uses the linker option --specs=nano.specs
  - defines the macro MBED_RTOS_SINGLE_THREAD
- exporting with project.py + make (Makefile):
  - doesn't use the linker option --specs=nano.specs
  - doesn't define the macro MBED_RTOS_SINGLE_THREAD
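A rough Python sketch of the behaviour the build path already has and the Makefile export should mirror (names are illustrative, not the actual build_api code):
```
def apply_small_build(target, ld_flags, macros):
    # When the target requests the "small" build, link against newlib-nano
    # and build the RTOS in single-threaded mode.
    if getattr(target, "default_build", "standard") == "small":
        ld_flags.append("--specs=nano.specs")
        macros.append("MBED_RTOS_SINGLE_THREAD")
    return ld_flags, macros
```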
With this patch in place, tests.py uses target names instead of target
instances, which makes it possible to use application-defined targets
with tests.
With this change, custom targets defined by the application being
tested in its mbed_app.json file can be used with tests. Note that
`build_project` already accepts both target names and instances, so the
call to `build_project` inside `build_tests` will still work.
The configuration object is now created early in the build_project
function. This way, if there's an mbed_app.json that contains a custom
target, that target is taken into account. This is useful (for example)
when compiling tests for an application that defines a custom target.
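Conceptually, something like the sketch below (the exact constructor arguments may differ from tools/config.py):
```
from tools.config import Config

def build_project(src_paths, build_path, target, toolchain_name, app_config=None):
    # Create the configuration object up front so that a custom target
    # defined in the application's mbed_app.json is resolved before the
    # toolchain is instantiated and the sources are scanned.
    config = Config(target, src_paths, app_config=app_config)
    target = config.target  # may now be an application-defined target
    # ... toolchain creation and compilation continue with this target ...
```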
- removing redundancy as discussed in PR #2087:
  - in target.json the core option can have only these values: "Cortex-M0", "Cortex-M0+", "Cortex-M1", "Cortex-M3", "Cortex-M4", "Cortex-M7", "Cortex-A9" (Cortex-M4F and Cortex-M7F removed)
  - in target.json an additional fpu option with the values "single" and "double" can be used
  - build and export scripts are changed to handle this
- tested (compiling, running on hardware) with nucleo_f767 (Cortex-M7 with double-precision FPU), nucleo_f746 (Cortex-M7 with single-precision FPU), nucleo_f446 and nucleo_l467 (Cortex-M4 with single-precision FPU), teensy31 (Cortex-M4 without FPU - build test only), nucleo_l073 (Cortex-M0)
- singletest results are added to the PR #2087 comments
- creating a new core name Cortex_M7F_DP for a target with a double-precision FPU
- adding the new core name to arm.py to set compiler/linker flags for a double-precision FPU when configured in target.json
- up to now, gcc.py set the flag for a double-precision FPU -> a target with an STM32F746 didn't run when using double variables - the MCU has only a single-precision FPU
- changing gcc.py to use single precision for Cortex-M7 and double precision for Cortex_M7F_DP (see the sketch below)
Tested with NUCLEO_F746 and NUCLEO_F767, both with build.py + make.py and by exporting with project.py, then compiling/flashing.
- iar.py needs a similar extension - I didn't change that yet because:
  - it does not run at the moment (Python exception)
  - it is currently being worked on in PR #1948
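A sketch of the flag selection described above (illustrative only; the real logic lives in tools/toolchains/gcc.py and the exact -mfloat-abi choice may differ):
```
def cortex_m7_fpu_flags(core):
    # Double-precision FPU (FPv5-D16) for the new Cortex_M7F_DP core name,
    # single-precision FPU (FPv5-SP-D16) for plain Cortex-M7 targets.
    if core == "Cortex_M7F_DP":
        return ["-mcpu=cortex-m7", "-mfpu=fpv5-d16", "-mfloat-abi=softfp"]
    elif core == "Cortex-M7":
        return ["-mcpu=cortex-m7", "-mfpu=fpv5-sp-d16", "-mfloat-abi=softfp"]
    return []
```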
This mitigates the Windows path length issue by shortening the test binary name
to just the test case folder name instead of the full unique test name used by
the tools. This doesn't solve the Windows path limit of 260 characters, but it
does reduce the number of characters used.
The template file already enables VLAs, as it is for C only. The --vla flag
conflicts with the --cpp flag, which is enabled implicitly when C++ is enabled.
libpath is not required for exporters, as they provide default paths.
This caused problems when the paths were not correct for the mbed tools: a
project would fail to build because the path was not found.
Before this PR:
```
Successful exports:
* K64F::uvision .\projectfiles\uvision\Unnamed_Project_K64F
```
After this PR:
```
Successful exports:
* K64F::uvision .\projectfiles\uvision_K64F\Unnamed_Project
```
The directory name now contains <ide>_<target>, and there's a single
project per directory as a result.
If we work with relative sources, the flag should be set to True; otherwise,
to False.
This fixes wrong paths when exporting with the --source argument. The exporter
would assume sources were copied, and thus reference them all within the root
of the generated project.
The IAR assembler doesn't accept '--preinclude', but it accepts -D.
This commit changes the way the config-related macros are propagated
to the IAR assembler to use '-D' instead of '--preinclude'. This is
the only change related to functionality, the others are small,
backward compatible changes to the config code to make passing arguments
to the toolchain instances easier.
Tested by compiling blinky with IAR, GCC_ARM and ARM for K64F.
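A minimal sketch of the assembler side of that change (the helper name is illustrative):
```
def iar_asm_config_options(config_macros):
    # The IAR assembler cannot pre-include a header, so pass each
    # config-derived macro as a -D definition instead.
    return ["-D" + macro for macro in config_macros]
```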
- Removing stack & heap (dynamic) RAM information
This information was misleading and shouldn't be shown in memap.
E.g. each task may have its own stack region configured at run time.
- Adding a 'bytes' unit to the total memory info
- Right alignment of numbers, so it is easier to compare them
For example, the .mbedignore in tools/ contains '*' and naturally should match all files and folders, including tools/ itself. Without this fix, tools/ is added to the include path.
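A sketch of the matching rule the fix enforces (fnmatch-style patterns; the function name is illustrative):
```
from fnmatch import fnmatch

def path_is_ignored(path, ignore_patterns):
    # A '*' entry in .mbedignore matches every path in (and including) the
    # directory that owns it, so that directory must not be added to the
    # include path either.
    return any(fnmatch(path, pattern) for pattern in ignore_patterns)
```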
The ARM and GNU compilers are currently in a mode where they accept VLAs
in C++ as an extension. IAR does not accept them in C++.
Avoid potential portability surprises by making GCC warn, and
deactivating the extension in ArmCC.
IAR defaults to C99 mode, but doesn't enable VLAs by default. Enable them
to make it more conformant.
We don't have much if any code using actual variable-length arrays, but
variably-modified types are occasionally used. The same switch controls
both.
(VLAs were actually already enabled in most of the project export
templates, but not the build script).
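A rough per-toolchain summary of the flags implied above (a sketch only; verify the exact option spellings against the toolchain classes):
```
VLA_FLAGS = {
    "GCC_ARM": ["-Wvla"],     # keep the extension but warn when a VLA is used
    "ARM":     ["--no_vla"],  # deactivate the VLA extension in ArmCC
    "IAR":     ["--vla"],     # enable VLAs/variably modified types in C99 mode
}
```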
PR #1974 added a new configuration parameter to K64F, which in turn made
some tests break, because they found an unexpected configuration
parameter. Fixed by defining a special target for the tests
(test_target) that can be used independently of the actual mbed targets.
This commit uses the previously introduced feature of generating
configuration data as a C header file rather than as command line macro
definitions. Each toolchain was modified to use prefix headers if
requested, and build_api.py was modified to set up the toolchain's
prefix header content using the data generated by the config system.
Tested by compiling blinky for GCC and ARMCC. I'm having a few issues
with my IAR license currently, but both ARMCC and IAR use the same
`--preinclude` option for prefix headers, so this shouldn't be an issue.
Note that at the moment all exporters still use the previous
configuration data mechanism (individual macro definitions as opposed to
a prefix header). Exporters will be updated in one or more PRs that will
follow.
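For reference, the option each compiler uses to pre-include the generated header (a sketch; the exact wiring inside the toolchain classes differs):
```
# GCC pre-includes a header with -include; ARMCC and IAR use --preinclude.
PREINCLUDE_FLAG = {
    "GCC_ARM": "-include",
    "ARM":     "--preinclude",
    "IAR":     "--preinclude",
}

def preinclude_options(toolchain_name, config_header_path):
    # Sketch only: build the option pair that makes the generated config
    # header visible to every translation unit.
    return [PREINCLUDE_FLAG[toolchain_name], config_header_path]
```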
The current implementation of the configuration system "compiles" the
configuration parameters to macros defined on the command line. This
works, but has a few limitations:
- it can bring back the maximum command line length issues in Windows.
- depending on the type of the configuration parameter, it might require
special quoting of its value in the command line.
- various 3rd party IDE/tools seem to have some limitations regarding
the total length of the macros that can be defined.
This commit is the first step in replacing the current mechanism with
one that generates configuration in header files that will be
automatically included, instead of command line macro definitions. The
commit only adds the method for generating the header file, it doesn't
actually change the way config is used yet (that will happen in step 2),
thus it is backwards compatible. The logic of the configuration system
itself is unchanged (in fact, the whole change, not only this commit, is
meant to be completely transparent to users of the configuration
system).
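A sketch of what such a generated header could look like, given the config parameters and macros (names and layout are illustrative, not the exact output of the config system):
```
def config_data_to_header(params, macros):
    # Emit one #define per config parameter/macro so the compiler can
    # pre-include this file instead of receiving dozens of -D options.
    lines = ["// Automatically generated configuration file.",
             "#ifndef __MBED_CONFIG_DATA__",
             "#define __MBED_CONFIG_DATA__",
             ""]
    lines += ["#define %s %s" % (name, value) for name, value in sorted(params.items())]
    lines += ["#define %s" % macro for macro in sorted(macros)]
    lines += ["", "#endif"]
    return "\n".join(lines)
```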
The commit also fixes an issue in tools/get_config.py that appeared as a
result of a recent PR: the signature of "get_config" in
tools/build_api.py changed, but tools/get_config.py was not updated.
Previously it was always a .zip, even when using sources. This patch fixes it:
```
Successful exports:
* K64F::uvision path\projectfiles\uvision\Unnamed_Project_K64F
```
Previously, .hex files were not copied when building source as a library.
This prevented builds that pre-compile source as a library and then
include the build directory as the only source (because there is no
softdevice present). This PR copies .hex files when compiling source
as a library.
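In essence (a sketch; the attribute and variable names are illustrative):
```
import shutil

def copy_hex_resources(resources, build_path):
    # When sources are pre-compiled into a library, also copy any .hex
    # resources (e.g. a softdevice image) into the build directory so a later
    # build that uses only the build directory can still find them.
    for hex_file in resources.hex_files:
        shutil.copy(hex_file, build_path)
```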