* Refactor skill test input utterances
Slight cleanup to make it easier to add more varied test cases.
* Add test for common QA
"question" can now be used instead of "utterance" to test a common qa skill
Merge consecutive .*'s into a single .*
The processing of "{{modifier}} {{precip}} is expected on {{day}}" will first
replace each {{}} with .* as before:
".* .* is expected on .*"
then a second pass merges any consecutive .*'s, resulting in
".* is expected on .*"
* Let skill tester expand dialogs
The skill tester needs to expand the dialogs in the same way as the dialog renderer to be able to correctly assert the expected_dialog criteria.
* Minor docstring changes
* Add lt and gt to skill tester evaluation vocabulary
lt returns True if the message item is LESS THAN the value in the config
gt returns True if the message item is GREATER THAN the value in the config
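As a rough illustration (assumed names, not the actual evaluator code), the comparison rules behave like this:
```python
def evaluate_comparison(operator, message_value, config_value):
    """Apply an lt/gt rule: compare a value taken from the message with
    the value given in the test configuration."""
    if operator == 'lt':
        return message_value < config_value
    if operator == 'gt':
        return message_value > config_value
    raise ValueError('Unknown operator: {}'.format(operator))

# evaluate_comparison('gt', 0.9, 0.8) -> True
# evaluate_comparison('lt', 0.9, 0.8) -> False
```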
* Add separate Exception for Skilltest errors
* Add support for common playback skill messages
CPS_query:
New test json possibilities:
"play_query": Emits a message that can be caught by CPS_match_query_phrase()
"play_query_match": Structure with info about the expected match
    "phrase": the matched phrase
    "confidence_threshold": the minimum confidence the phrase should result in
Example:
{
    "play_query": "the news",
    "play_query_match": {
        "phrase": "the news",
        "confidence_threshold": 0.8
    }
}
"play_start": Emits a message that can be caught by CPS_start using sub-fields:
    "phrase": the matched phrase
    "callback_data": dict with info for the function
Example:
{
    "play_start": {
        "phrase": "the news",
        "callback_data": {
        }
    },
    "expected_data": {"__type__": "mycroft.audio.service.play"}
}
The skill log is redirected to a string during loading, and if the skill
fails to load, the captured loading logs are output when the first test for
the skill is executed.
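A minimal sketch of the general technique (names are illustrative, not the actual loader code): send log output to an in-memory buffer while the skill loads, and keep the captured text so it can be printed if loading failed.
```python
import io
import logging

def load_skill_with_captured_log(load_skill, skill_path):
    """Load a skill while capturing its log output in a string buffer.

    `load_skill` is a stand-in for the real loader; returns the skill
    (or None on failure) together with the captured log text.
    """
    buffer = io.StringIO()
    handler = logging.StreamHandler(buffer)
    root = logging.getLogger()
    root.addHandler(handler)
    try:
        skill = load_skill(skill_path)
    finally:
        root.removeHandler(handler)
    return skill, buffer.getvalue()
```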
Allowing multiple dialog choices can help in cases where a skill has a number of dialog files that can each be triggered independently by the same intent. For example, the weather skill inquiry "will it rain" can trigger one response when rain is expected and another if there is no rain in the near future.
```
"expected_dialog": ["dialog1", "dialog2"]
```
and
```
"expected_response": ["text 1", "text 2"]
```
are now possible. The test will pass if a line from either dialog1 or dialog2 is matched (or, for "expected_response", if "text 1" or "text 2" is matched).
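A minimal sketch of how matching against a list of dialogs could work (assuming {{}} template fields and *.dialog files; not the actual tester code):
```python
import re
from pathlib import Path

def response_matches_any_dialog(response, dialog_names, dialog_dir):
    """Return True if the response matches a line from any listed dialog file."""
    for name in dialog_names:
        dialog_file = Path(dialog_dir) / '{}.dialog'.format(name)
        for line in dialog_file.read_text().splitlines():
            # Replace {{field}} templates with wildcards before matching
            pattern = re.sub(r'{{.*?}}', '.*', line.strip())
            if pattern and re.match(pattern, response):
                return True
    return False
```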
The test setup function will be run after the skill is loaded but before testing starts.
This provides a place to set up things that are standard for all test cases.
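A hypothetical sketch of such a setup hook placed in the skill's test package (the hook name and signature are assumptions, not the documented API):
```python
# Hypothetical setup hook: runs once after the skill is loaded, before
# any test case executes.  Name and signature are illustrative only.
def test_setup(skill):
    # e.g. point the skill at test settings instead of a real backend
    skill.settings['api_key'] = 'dummy-key-for-tests'
```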
Several visual changes to the logs (no functional difference)
* Added 'with' to close test_case_file, quieting warning
* Highlight Mycroft's utterances in output and move the print to fix the order of printing
* Better highlight sending a response
* Highlight the test utterance
* Improve the standalone skill tester
Using the skill tester was difficult: the skill author had to copy a file
locally, and figuring out why a test failed wasn't obvious. This change
improves it in several ways:
* The tester can now be run as a module, allowing any skill to be tested
by entering the skill folder and running:
```
python -m test.integrationtests.skills.runner
```
Optionally you can pass along a path to the skill.
* The runner will display help with a '--help' parameter, pointing to
documentation on creating the tests.
* Information on where the tests are expected is printed during execution
* The *.intent.json naming was reduced to simply *.json since these files
  already live under the ```test/intent/``` folder. (This is backwards
  compatible with existing intent tests.)
* The failing rule is now displayed at the bottom of the run report, making
it easier to figure out where issues exist during test creation
* Headers and terminal colors are used in the output, making it easier to
visually parse the output from the execution of tests.
==== Documentation Notes ====
Update the skill documentation to reflect using
python -m test.integrationtests.skills.runner
instead of copying the skill_developers_testrunner.py.
NOTE: This does have to be performed within a developer venv in order to
access the test suite. Consider moving this into a mycroft-core-dev package
in the future for Mark 1 / Picroft users.
* Quieting warning from Codacy
* Replace single_test with runner
Adds support for "test_env" to the runner script
* Update the discover_test to match the runner
- catch intent tests from *.json files
- add failure msg to assert
* Turn off color using MST_NO_COLOR env variable
* Update format for skill listing
Now sends the skills with their id and active status
* Add commands to activate/deactivate skills
* Add "unload all except one" functionality
* Update after rebasing
- fix identifying skills
* Unload skills if they're removed from disk
* Rename _shutdown to default_shutdown
The method is not intended to be non-public, and renaming it should quiet
the Codacy bot.
* Handle keep command without argument
* Add new commands to help
- Split help into multiple pages as needed
* Support :activate all
This also removes the ability to pass the skills dir as the first argument. Instead, the directory is now read from an environment variable called SKILLS_DIR to prevent conflicts with pytest arguments.
Forcing all content to string limits the number of tests that can be run with this; for example, a list of strings will fail. This change keeps the original type from the JSON.
Adds the possibility to add a responses list in the test case. The test
will respond with each of the entries in the list to get_response
requests.
Ex:
{
    [...]
    "responses": ["yes", "Miami"]
}
expected_data would fail in combination with intent tests; this
differentiates it from the other rules.
Add the message type as "__type__" in the data for processing using
expected_data.
The custom test runner can be used for mocking third-party applications
or services.
To use it, create an __init__.py in the SKILL_DIR/test folder.
The base of the file should look something like:
```python
from test.integrationtests.skills.skill_tester import SkillTest

def skill_runner(skill, example, emitter, loader, m1, m2):
    return SkillTest(skill, example, emitter).run(loader)
```
Then the skill_runner can be decorated by `mock.patch` to mock out resources.
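For example, the runner above could be wrapped like this (the patched targets are made-up examples of third-party resources a skill might use):
```python
from unittest import mock
from test.integrationtests.skills.skill_tester import SkillTest

# Each mock.patch adds one mock argument (m1, m2), matching the signature
# shown above; decorators are applied bottom-up, so m1 is the post mock.
@mock.patch('requests.get')
@mock.patch('requests.post')
def skill_runner(skill, example, emitter, loader, m1, m2):
    m1.return_value.status_code = 200   # mocked requests.post
    m2.return_value.status_code = 200   # mocked requests.get
    return SkillTest(skill, example, emitter).run(loader)
```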
Allows the intent test to check expected data content.
Example:
{
    "utterance": "set a weekend alarm at 9 am",
    "expected_data": {
        "ampm": "am",
        "time": "9",
        "daytype": "weekend"
    },
    "expected_response": "Okay. Setting a .* alarm"
}