Merge pull request #8 from MycroftAI/dev

update dev
Dominik 2020-10-01 18:57:31 +02:00 committed by GitHub
commit b4a0c51a5f
190 changed files with 5454 additions and 1832 deletions


@ -4,26 +4,32 @@ So you want to contribute to Mycroft?
This should be as easy as possible for you but there are a few things to consider when contributing.
The following guidelines for contribution should be followed if you want to submit a pull request.
## How to prepare
## How to Prepare
* You need a [GitHub account](https://github.com/signup/free)
* Submit an [issue ticket](https://github.com/MycroftAI/mycroft/issues) for your issue if there is not one yet.
* Submit an [issue ticket](https://github.com/MycroftAI/mycroft-core/issues) for your issue if one does not already exist.
* Describe the issue and include steps to reproduce if it's a bug.
* Ensure to mention the earliest version that you know is affected.
* If you are able and want to fix this, fork the repository on GitHub
* If you are able and want to fix this, fork the repository on GitHub and follow the instructions below.
## Make Changes
1. [Fork the Project](https://help.github.com/articles/fork-a-repo/)
2. [Create a new Issue](https://help.github.com/articles/creating-an-issue/)
3. Create a **feature** or **bugfix** branch based on **dev** with your issue identifier. For example, if your issue identifier is: **issue-123** then you will create either: **feature/issue-123** or **bugfix/issue-123**. Use **feature** prefix for issues related to new functionalities or enhancements and **bugfix** in case of bugs found on the **dev** branch
4. Make sure you stick to the coding style and OO patterns that are used already.
5. Document code using [Google-style docstrings](http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html). Our automated documentation tools expect that format. All functions and class methods that are expected to be called externally should include a docstring. (And those that aren't [should be prefixed with a single underscore](https://docs.python.org/2/tutorial/classes.html#private-variables-and-class-local-references)).
6. Make commits in logical units and describe them properly. Use your issue identifier at the very begin of each commit. For instance:
`git commit -m "Issues-123 - Fixing 'A' sound on Spelling Skill"`
7. Before committing, format your code following the PEP8 rules and organize your imports removing unused libs. To check whether you are following these rules, install pep8 and run `pep8 mycroft test` while in the `mycroft-core` folder. This will check for formatting issues in the `mycroft` and `test` folders.
8. Once you have committed everything and are done with your branch, you have to rebase your code with **dev**. Do the following steps:
2. Clone onto your local machine and set MycroftAI/mycroft-core as your upstream branch
```
git clone https://github.com/<your-username>/<repo-name>
cd <repo-name>
git remote add upstream https://github.com/MycroftAI/mycroft-core
```
3. If one does not already exist, [create a new issue](https://help.github.com/articles/creating-an-issue/) on the [MycroftAI/mycroft-core Issues Tracker](https://github.com/MycroftAI/mycroft-core/issues)
4. Create a **feature** or **bugfix** branch in your forked repo, based on **dev** with your issue identifier. For example, if your issue identifier is: **issue-123** then you will create either: **feature/issue-123** or **bugfix/issue-123**. Use **feature** prefix for issues related to new functionalities or enhancements and **bugfix** in case of bugs found on the **dev** branch
5. Make sure you stick to the coding style and OO patterns that are used already.
6. Document code using [Google-style docstrings](http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html). Our automated documentation tools expect that format. All functions and class methods that are expected to be called externally should include a docstring. (And those that aren't [should be prefixed with a single underscore](https://docs.python.org/2/tutorial/classes.html#private-variables-and-class-local-references)). A short example is shown after this list.
7. Make commits in logical units and describe them properly. Use your issue identifier at the very beginning of each commit. For instance:
`git commit -m "Issue-123 - Fixing 'A' sound on Spelling Skill"`
8. Before committing, format your code following the PEP8 rules and organize your imports removing unused libs. To check whether you are following these rules, install pep8 and run `pep8 mycroft test` while in the `mycroft-core` folder. This will check for formatting issues in the `mycroft` and `test` folders.
9. Once you have committed everything and are done with your branch, you have to rebase your code with **dev**. Do the following steps:
1. Make sure you do not have any changes left on your branch
2. Checkout on dev branch and make sure it is up-to-date
3. Checkout your branch and rebase it with dev
@ -38,11 +44,11 @@ git checkout <your_branch_name>
git rebase dev
git push -f
```
9. If possible, create unit tests for your changes
* [Unit Tests for most contributions](https://github.com/MycroftAI/mycroft-core/tree/dev/test)
* [Intent Tests for new skills](https://docs.mycroft.ai/development/creating-a-skill#testing-your-skill)
* We utilize TRAVIS-CI, which will test each pull request. To test locally you can run: `./start-mycroft.sh unittest`
10. Once everything is OK, you can finally [create a Pull Request (PR) on Github](https://help.github.com/articles/using-pull-requests/) in order to be reviewed and merged.
10. If possible, create unit tests for your changes
* [Unit Tests for most contributions](https://github.com/MycroftAI/mycroft-core/tree/dev/test)
* [Intent Tests for new skills](https://mycroft-ai.gitbook.io/docs/#testing-your-skill)
* We utilize TRAVIS-CI, which will test each pull request. To test locally you can run: `./start-mycroft.sh unittest`
11. Once everything is okay, you can finally [create a Pull Request (PR)](https://help.github.com/articles/using-pull-requests/) on [MycroftAI/mycroft-core](https://github.com/MycroftAI/mycroft-core/pulls) to have your code reviewed and merged.
**Note**: Even if you have write access to the master branch, do not work directly on master!
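For reference, a minimal sketch of the Google-style docstring format requested in step 6. The function, its arguments and return value are purely illustrative; only the docstring layout matters here:

```python
def normalize_volume(level, maximum=100):
    """Clamp a requested volume level to the allowed range.

    Arguments:
        level (int): requested volume level
        maximum (int): upper bound for the returned value

    Returns:
        int: a value between 0 and maximum
    """
    return max(0, min(level, maximum))
```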


@ -26,8 +26,6 @@ install:
- mkdir ${TMPDIR}
- echo ${TMPDIR}
- VIRTUALENV_ROOT=${VIRTUAL_ENV} ./dev_setup.sh
- pip install -r requirements.txt
- pip install -r test-requirements.txt
# command to run tests
script:
- pycodestyle mycroft test

Jenkinsfile

@ -9,6 +9,23 @@ pipeline {
}
stages {
// Run the build in the against the dev branch to check for compile errors
stage('Add CLA label to PR') {
when {
anyOf {
changeRequest target: 'dev'
}
}
environment {
//spawns GITHUB_USR and GITHUB_PSW environment variables
GITHUB=credentials('38b2e4a6-167a-40b2-be6f-d69be42c8190')
}
steps {
// Using an install of Github repo CLA tagger
// (https://github.com/forslund/github-repo-cla)
sh '~/github-repo-cla/mycroft-core-cla-check.sh'
}
}
stage('Run Integration Tests') {
when {
anyOf {
@ -29,8 +46,7 @@ pipeline {
}
steps {
echo 'Building Mark I Voight-Kampff Docker Image'
sh 'cp test/Dockerfile.test Dockerfile'
sh 'docker build \
sh 'docker build -f test/Dockerfile \
--target voight_kampff_builder \
--build-arg platform=mycroft_mark_1 \
-t voight-kampff-mark-1:${BRANCH_ALIAS} .'
@ -167,6 +183,32 @@ pipeline {
}
}
}
// Build snap package for release
stage('Build development Snap package') {
when {
anyOf {
branch 'dev'
}
}
steps {
echo "Launching package build for ${env.BRANCH_NAME}"
build (job: '../Mycroft-snap/dev', wait: false,
parameters: [[$class: 'StringParameterValue',
name: 'BRANCH', value: env.BRANCH_NAME]])
}
}
stage('Build Release Snap package') {
when {
tag "release/v*.*.*"
}
steps {
echo "Launching package build for ${env.TAG_NAME}"
build (job: '../Mycroft-snap/dev', wait: false,
parameters: [[$class: 'StringParameterValue',
name: 'BRANCH', value: env.TAG_NAME]])
}
}
// Build a voight_kampff image for major releases. This will be used
// by the mycroft-skills repository to test skill changes. Skills are
// tested against major releases to determine if they play nicely with
@ -187,8 +229,7 @@ pipeline {
}
steps {
echo 'Building ${TAG_NAME} Docker Image for Skill Testing'
sh 'cp test/Dockerfile.test Dockerfile'
sh 'docker build \
sh 'docker build -f test/Dockerfile \
--target voight_kampff_builder \
--build-arg platform=mycroft_mark_1 \
-t voight-kampff-mark-1:${SKILL_BRANCH} .'


@ -208,4 +208,4 @@ Component licenses for mycroft-core:
The mycroft-core software references various Python Packages (via PIP),
each of which has a separate license. All are compatible with the
Apache 2.0 license. See the referenced packages listed in the
"requirements.txt" file for specific terms and conditions.
"requirements/requirements.txt" file for specific terms and conditions.


@ -1,5 +1,5 @@
recursive-include mycroft/client/speech/recognizer/model *
include requirements.txt
include requirements/requirements.txt
include mycroft/configuration/*.conf
recursive-include mycroft/res *
recursive-include mycroft/res/snd *


@ -22,23 +22,28 @@ echo -e "\e[36mMycroft\e[0m is your open source voice assistant. Full source"
echo "can be found at: ${DIR}"
echo
echo "Mycroft-specific commands you can use from the Linux command prompt:"
echo " mycroft-cli-client command line client, useful for debugging"
echo " mycroft-cli-client Command line client, useful for debugging"
echo " mycroft-msm Mycroft Skills Manager, to manage your Skills"
echo " mycroft-msk Mycroft Skills Kit, create and share Skills"
echo " mycroft-start Launch/restart Mycroft services"
echo " mycroft-stop Stop Mycroft services"
echo
echo "Scripting Utilities:"
echo " mycroft-speak <phr> have Mycroft speak a phrase to the user"
echo " mycroft-say-to <utt> send an utterance to Mycroft as if spoken by a user"
echo " mycroft-listen Activate the microphone to listen for a command"
echo " mycroft-speak <phr> Have Mycroft speak a phrase to the user"
echo " mycroft-say-to <utt> Send an utterance to Mycroft as if spoken by a user"
echo
echo "Mycroft's Python Virtual Environment (venv) control:"
echo " mycroft-venv-activate enter the venv"
echo " mycroft-venv-deactivate exit the venv"
echo " mycroft-pip install a Python package within the venv"
echo " mycroft-pip Install a Python package within the venv"
echo " mycroft-venv-activate Enter the venv"
echo " mycroft-venv-deactivate Exit the venv"
echo
echo "Skill Development:"
echo " mycroft-msk Mycroft Skills Kit, create and share Skills"
echo " mycroft-skill-testrunner Run integration tests on Mycroft Skills"
echo
echo "Other:"
echo " mycroft-mic-test record and playback to directly test microphone"
echo " mycroft-help display this message"
echo " mycroft-config Manage your local Mycroft configuration files"
echo " mycroft-mic-test Record and playback to directly test microphone"
echo " mycroft-help Display this message"
echo
echo "For more information, see https://mycroft.ai and https://github.com/MycroftAI"
echo "For more information, see https://mycroft.ai/documentation"


@ -20,12 +20,40 @@ DIR="$( dirname "$SOURCE" )"
# Enter the Mycroft venv
source "$DIR/../venv-activate.sh" -q
function vktest-clear() {
FEATURES_DIR="$DIR/../test/integrationtests/voight_kampff/features"
num_feature_files=$(ls $FEATURES_DIR | wc -l)
# A clean directory will have `steps/` and `environment.py`
if [ $num_feature_files -gt "2" ] ; then
echo "Removing Feature files..."
rm ${DIR}/../test/integrationtests/voight_kampff/features/*.feature
rm ${DIR}/../test/integrationtests/voight_kampff/features/*.config.json
fi
STEPS_DIR="$FEATURES_DIR/steps"
num_steps_files=$(ls $STEPS_DIR | wc -l)
if [ $num_steps_files -gt "2" ] ; then
echo "Removing Custom Step files..."
TMP_DIR="$STEPS_DIR/tmp"
mkdir $TMP_DIR
mv "$STEPS_DIR/configuration.py" $TMP_DIR
mv "$STEPS_DIR/utterance_responses.py" $TMP_DIR
rm ${STEPS_DIR}/*.py
mv ${TMP_DIR}/* $STEPS_DIR
rmdir $TMP_DIR
fi
echo "Voight Kampff tests clear."
}
# Invoke the individual skill tester
if [ "$#" -eq 0 ] ; then
python -m test.integrationtests.skills.runner .
elif [ "$1" = "vktest" ] ; then
shift
python -m test.integrationtests.voight_kampff "$@"
if [ "$2" = "clear" ] ; then
vktest-clear
else
shift
python -m test.integrationtests.voight_kampff "$@"
fi
else
python -m test.integrationtests.skills.runner $@
fi


@ -294,7 +294,7 @@ function os_is() {
}
function os_is_like() {
grep "^ID_LIKE=" /etc/os-release | awk -F'=' '/^ID_LIKE/ {print $2}' | sed 's/\"//g' | grep -P -q '(^|\s)'"$1"'(\s|$)'
grep "^ID_LIKE=" /etc/os-release | awk -F'=' '/^ID_LIKE/ {print $2}' | sed 's/\"//g' | grep -q "\\b$1\\b"
}
function redhat_common_install() {
@ -337,7 +337,7 @@ function open_suse_install() {
function fedora_install() {
$SUDO dnf install -y git python3 python3-devel python3-pip python3-setuptools python3-virtualenv pygobject3-devel libtool libffi-devel openssl-devel autoconf bison swig glib2-devel portaudio-devel mpg123 mpg123-plugins-pulseaudio screen curl pkgconfig libicu-devel automake libjpeg-turbo-devel fann-devel gcc-c++ redhat-rpm-config jq
$SUDO dnf install -y git python3 python3-devel python3-pip python3-setuptools python3-virtualenv pygobject3-devel libtool libffi-devel openssl-devel autoconf bison swig glib2-devel portaudio-devel mpg123 mpg123-plugins-pulseaudio screen curl pkgconfig libicu-devel automake libjpeg-turbo-devel fann-devel gcc-c++ redhat-rpm-config jq make
}
@ -368,6 +368,14 @@ function redhat_install() {
}
function gentoo_install() {
$SUDO emerge --noreplace dev-vcs/git dev-lang/python dev-python/setuptools dev-python/pygobject dev-python/requests sys-devel/libtool virtual/libffi virtual/jpeg dev-libs/openssl sys-devel/autoconf sys-devel/bison dev-lang/swig dev-libs/glib media-libs/portaudio media-sound/mpg123 media-libs/flac net-misc/curl sci-mathematics/fann sys-devel/gcc app-misc/jq media-libs/alsa-lib dev-libs/icu
}
function alpine_install() {
$SUDO apk add alpine-sdk git python3 py3-pip py3-setuptools py3-virtualenv mpg123 vorbis-tools pulseaudio-utils fann-dev automake autoconf libtool pcre2-dev pulseaudio-dev alsa-lib-dev swig python3-dev portaudio-dev libjpeg-turbo-dev
}
function install_deps() {
echo 'Installing packages...'
if found_exe zypper ; then
@ -390,10 +398,18 @@ function install_deps() {
# Fedora
echo "$GREEN Installing packages for Fedora...$RESET"
fedora_install
elif found_exe pacman; then
elif found_exe pacman && os_is arch ; then
# Arch Linux
echo "$GREEN Installing packages for Arch...$RESET"
arch_install
elif found_exe emerge && os_is gentoo; then
# Gentoo Linux
echo "$GREEN Installing packages for Gentoo Linux ...$RESET"
gentoo_install
elif found_exe apk && os_is alpine; then
# Alpine Linux
echo "$GREEN Installing packages for Alpine Linux...$RESET"
alpine_install
else
echo
echo -e "${YELLOW}Could not find package manager
@ -495,16 +511,28 @@ if ! grep -q "$TOP" $VENV_PATH_FILE ; then
fi
# install required python modules
if ! pip install -r requirements.txt ; then
echo 'Warning: Failed to install all requirements. Continue? y/N'
if ! pip install -r requirements/requirements.txt ; then
echo 'Warning: Failed to install required dependencies. Continue? y/N'
read -n1 continue
if [[ $continue != 'y' ]] ; then
exit 1
fi
fi
if ! pip install -r test-requirements.txt ; then
echo "Warning test requirements wasn't installed, Note: normal operation should still work fine..."
# install optional python modules
if [[ ! $(pip install -r requirements/extra-audiobackend.txt) ||
! $(pip install -r requirements/extra-stt.txt) ||
! $(pip install -r requirements/extra-mark1.txt) ]] ; then
echo 'Warning: Failed to install some optional dependencies. Continue? y/N'
read -n1 continue
if [[ $continue != 'y' ]] ; then
exit 1
fi
fi
if ! pip install -r requirements/tests.txt ; then
echo "Warning: Test requirements failed to install. Note: normal operation should still work fine..."
fi
SYSMEM=$(free | awk '/^Mem:/ { print $2 }')
@ -563,4 +591,4 @@ if [[ ! -w /var/log/mycroft/ ]] ; then
fi
#Store a fingerprint of setup
md5sum requirements.txt test-requirements.txt dev_setup.sh > .installed
md5sum requirements/requirements.txt requirements/extra-audiobackend.txt requirements/extra-stt.txt requirements/extra-mark1.txt requirements/tests.txt dev_setup.sh > .installed


@ -12,8 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Mycroft audio service.
"""Mycroft audio service.
This handles playback of audio and speech
"""
@ -27,23 +26,48 @@ import mycroft.audio.speech as speech
from .audioservice import AudioService
def main():
def on_ready():
LOG.info('Audio service is ready.')
def on_error(e='Unknown'):
LOG.error('Audio service failed to launch ({}).'.format(repr(e)))
def on_stopping():
LOG.info('Audio service is shutting down...')
def main(ready_hook=on_ready, error_hook=on_error, stopping_hook=on_stopping):
""" Main function. Run when file is invoked. """
reset_sigint_handler()
check_for_signal("isSpeaking")
bus = MessageBusClient() # Connect to the Mycroft Messagebus
Configuration.set_config_update_handlers(bus)
speech.init(bus)
try:
reset_sigint_handler()
check_for_signal("isSpeaking")
bus = MessageBusClient() # Connect to the Mycroft Messagebus
Configuration.set_config_update_handlers(bus)
speech.init(bus)
LOG.info("Starting Audio Services")
bus.on('message', create_echo_function('AUDIO', ['mycroft.audio.service']))
audio = AudioService(bus) # Connect audio service instance to message bus
create_daemon(bus.run_forever)
LOG.info("Starting Audio Services")
bus.on('message', create_echo_function('AUDIO',
['mycroft.audio.service']))
wait_for_exit_signal()
# Connect audio service instance to message bus
audio = AudioService(bus)
except Exception as e:
error_hook(e)
else:
create_daemon(bus.run_forever)
if audio.wait_for_load() and len(audio.service) > 0:
# If at least one service exists, report ready
ready_hook()
wait_for_exit_signal()
stopping_hook()
else:
error_hook('No audio services loaded')
speech.shutdown()
audio.shutdown()
speech.shutdown()
audio.shutdown()
main()
if __name__ == '__main__':
main()
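The refactored entrypoint exposes its status hooks as parameters, so a launcher can observe the service lifecycle. A minimal sketch of that idea; the wrapper below is hypothetical, and the module path is assumed from this file's role as the audio service `__main__`:

```python
from mycroft.audio.__main__ import main  # assumed module path

def report(state):
    # Stand-in for whatever a supervisor or monitoring tool expects.
    print('audio service is now:', state)

main(ready_hook=lambda: report('ready'),
     error_hook=lambda e='Unknown': report('error: {}'.format(e)),
     stopping_hook=lambda: report('stopping'))
```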


@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import imp
import importlib
import sys
import time
from os import listdir
@ -22,15 +22,17 @@ from threading import Lock
from mycroft.configuration import Configuration
from mycroft.messagebus.message import Message
from mycroft.util.log import LOG
from mycroft.util.monotonic_event import MonotonicEvent
from .services import RemoteAudioBackend
MINUTES = 60 # Seconds in a minute
MAINMODULE = '__init__'
sys.path.append(abspath(dirname(__file__)))
def create_service_descriptor(service_folder):
def create_service_spec(service_folder):
"""Prepares a descriptor that can be used together with imp.
Args:
@ -39,7 +41,11 @@ def create_service_descriptor(service_folder):
Returns:
Dict with import information
"""
info = imp.find_module(MAINMODULE, [service_folder])
module_name = 'audioservice_' + basename(service_folder)
path = join(service_folder, MAINMODULE + '.py')
spec = importlib.util.spec_from_file_location(module_name, path)
mod = importlib.util.module_from_spec(spec)
info = {'spec': spec, 'mod': mod, 'module_name': module_name}
return {"name": basename(service_folder), "info": info}
@ -66,7 +72,7 @@ def get_services(services_folder):
not MAINMODULE + ".py" in listdir(name)):
continue
try:
services.append(create_service_descriptor(name))
services.append(create_service_spec(name))
except Exception:
LOG.error('Failed to create service from ' + name,
exc_info=True)
@ -74,7 +80,7 @@ def get_services(services_folder):
not MAINMODULE + ".py" in listdir(location)):
continue
try:
services.append(create_service_descriptor(location))
services.append(create_service_spec(location))
except Exception:
LOG.error('Failed to create service from ' + location,
exc_info=True)
@ -99,8 +105,11 @@ def load_services(config, bus, path=None):
for descriptor in service_directories:
LOG.info('Loading ' + descriptor['name'])
try:
service_module = imp.load_module(descriptor["name"] + MAINMODULE,
*descriptor["info"])
service_module = descriptor['info']['mod']
spec = descriptor['info']['spec']
module_name = descriptor['info']['module_name']
sys.modules[module_name] = service_module
spec.loader.exec_module(service_module)
except Exception as e:
LOG.error('Failed to import module ' + descriptor['name'] + '\n' +
repr(e))
@ -144,6 +153,7 @@ class AudioService:
self.play_start_time = 0
self.volume_is_low = False
self._loaded = MonotonicEvent()
bus.once('open', self.load_services_callback)
def load_services_callback(self):
@ -152,7 +162,6 @@ class AudioService:
service and default and registers the event handlers for the
subsystem.
"""
services = load_services(self.config, self.bus)
# Sort services so local services are checked first
local = [s for s in services if not isinstance(s, RemoteAudioBackend)]
@ -190,7 +199,20 @@ class AudioService:
self.bus.on('recognizer_loop:audio_output_start', self._lower_volume)
self.bus.on('recognizer_loop:record_begin', self._lower_volume)
self.bus.on('recognizer_loop:audio_output_end', self._restore_volume)
self.bus.on('recognizer_loop:record_end', self._restore_volume)
self.bus.on('recognizer_loop:record_end',
self._restore_volume_after_record)
self._loaded.set() # Report services loaded
def wait_for_load(self, timeout=3 * MINUTES):
"""Wait for services to be loaded.
Arguments:
timeout (float): Seconds to wait (default 3 minutes)
Returns:
(bool) True if loading completed within timeout, else False.
"""
return self._loaded.wait(timeout)
def track_start(self, track):
"""Callback method called from the services to indicate start of
@ -294,6 +316,31 @@ class AudioService:
if not self.volume_is_low:
self.current.restore_volume()
def _restore_volume_after_record(self, message=None):
"""
Restores the volume when Mycroft is done recording.
If no utterance detected, restore immediately.
If no response is made in reasonable time, then also restore.
Args:
message: message bus message, not used but required
"""
def restore_volume():
LOG.debug('restoring volume')
self.current.restore_volume()
if self.current:
self.bus.on('recognizer_loop:speech.recognition.unknown',
restore_volume)
speak_msg_detected = self.bus.wait_for_message('speak',
timeout=8.0)
if not speak_msg_detected:
restore_volume()
self.bus.remove('recognizer_loop:speech.recognition.unknown',
restore_volume)
else:
LOG.debug("No audio service to restore volume of")
def play(self, tracks, prefered_service, repeat=False):
"""
play starts playing the audio on the prefered service if it
@ -440,4 +487,5 @@ class AudioService:
self.bus.remove('recognizer_loop:record_begin', self._lower_volume)
self.bus.remove('recognizer_loop:audio_output_end',
self._restore_volume)
self.bus.remove('recognizer_loop:record_end', self._restore_volume)
self.bus.remove('recognizer_loop:record_end',
self._restore_volume_after_record)


@ -12,44 +12,82 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import sys
"""Entrypoint for enclosure service.
This provides any "enclosure" specific functionality, for example GUI or
control over the Mark-1 Faceplate.
"""
from mycroft.configuration import LocalConf, SYSTEM_CONFIG
from mycroft.util.log import LOG
from mycroft.messagebus.client import MessageBusClient
from mycroft.configuration import Configuration, LocalConf, SYSTEM_CONFIG
from mycroft.util import (create_daemon, wait_for_exit_signal,
reset_sigint_handler)
def main():
# Read the system configuration
system_config = LocalConf(SYSTEM_CONFIG)
platform = system_config.get("enclosure", {}).get("platform")
def on_ready():
LOG.info("Enclosure started!")
def on_stopping():
LOG.info('Enclosure is shutting down...')
def on_error(e='Unknown'):
LOG.error('Enclosure failed to start. ({})'.format(repr(e)))
def create_enclosure(platform):
"""Create an enclosure based on the provided platform string.
Arguments:
platform (str): platform name string
Returns:
Enclosure object
"""
if platform == "mycroft_mark_1":
LOG.debug("Creating Mark I Enclosure")
LOG.info("Creating Mark I Enclosure")
from mycroft.client.enclosure.mark1 import EnclosureMark1
enclosure = EnclosureMark1()
elif platform == "mycroft_mark_2":
LOG.debug("Creating Mark II Enclosure")
LOG.info("Creating Mark II Enclosure")
from mycroft.client.enclosure.mark2 import EnclosureMark2
enclosure = EnclosureMark2()
else:
LOG.debug("Creating generic enclosure, platform='{}'".format(platform))
LOG.info("Creating generic enclosure, platform='{}'".format(platform))
# TODO: Mechanism to load from elsewhere. E.g. read a script path from
# the mycroft.conf, then load/launch that script.
from mycroft.client.enclosure.generic import EnclosureGeneric
enclosure = EnclosureGeneric()
return enclosure
def main(ready_hook=on_ready, error_hook=on_error, stopping_hook=on_stopping):
# Read the system configuration
"""Launch one of the available enclosure implementations.
This depends on the configured platform and can currently be either
mycroft_mark_1 or mycroft_mark_2. If unconfigured, a generic enclosure with
only the GUI bus will be started.
"""
# Read the system configuration
system_config = LocalConf(SYSTEM_CONFIG)
platform = system_config.get("enclosure", {}).get("platform")
enclosure = create_enclosure(platform)
if enclosure:
try:
LOG.debug("Enclosure started!")
enclosure.run()
reset_sigint_handler()
create_daemon(enclosure.run)
ready_hook()
wait_for_exit_signal()
stopping_hook()
except Exception as e:
print(e)
finally:
sys.exit()
else:
LOG.debug("No enclosure available for this hardware, running headless")
LOG.info("No enclosure available for this hardware, running headless")
if __name__ == "__main__":


@ -104,6 +104,7 @@ class Enclosure:
self.bus.on("gui.page.delete", self.on_gui_delete_page)
self.bus.on("gui.clear.namespace", self.on_gui_delete_namespace)
self.bus.on("gui.event.send", self.on_gui_send_event)
self.bus.on("gui.status.request", self.handle_gui_status_request)
def run(self):
try:
@ -114,6 +115,16 @@ class Enclosure:
######################################################################
# GUI client API
@property
def gui_connected(self):
"""Returns True if at least 1 gui is connected, else False"""
return len(GUIWebsocketHandler.clients) > 0
def handle_gui_status_request(self, message):
"""Reply to gui status request, allows querying if a gui is
connected using the message bus"""
self.bus.emit(message.reply("gui.status.request.response",
{"connected": self.gui_connected}))
def send(self, msg_dict):
""" Send to all registered GUIs. """


@ -15,7 +15,6 @@
import subprocess
import time
import sys
from alsaaudio import Mixer
from threading import Thread, Timer
import mycroft.dialog


@ -171,18 +171,19 @@ def handle_open():
EnclosureAPI(bus).reset()
def main():
global bus
global loop
global config
reset_sigint_handler()
PIDLock("voice")
bus = MessageBusClient() # Mycroft messagebus, see mycroft.messagebus
Configuration.set_config_update_handlers(bus)
config = Configuration.get()
def on_ready():
LOG.info('Speech client is ready.')
# Register handlers on internal RecognizerLoop bus
loop = RecognizerLoop()
def on_stopping():
LOG.info('Speech service is shutting down...')
def on_error(e='Unknown'):
LOG.error('Speech service failed to launch ({}).'.format(repr(e)))
def connect_loop_events(loop):
loop.on('recognizer_loop:utterance', handle_utterance)
loop.on('recognizer_loop:speech.recognition.unknown', handle_unknown)
loop.on('speak', handle_speak)
@ -192,6 +193,8 @@ def main():
loop.on('recognizer_loop:record_end', handle_record_end)
loop.on('recognizer_loop:no_internet', handle_no_internet)
def connect_bus_events(bus):
# Register handlers for events on main Mycroft messagebus
bus.on('open', handle_open)
bus.on('complete_intent_failure', handle_complete_intent_failure)
@ -207,10 +210,31 @@ def main():
bus.on('mycroft.stop', handle_stop)
bus.on('message', create_echo_function('VOICE'))
create_daemon(bus.run_forever)
create_daemon(loop.run)
wait_for_exit_signal()
def main(ready_hook=on_ready, error_hook=on_error, stopping_hook=on_stopping,
watchdog=lambda: None):
global bus
global loop
global config
try:
reset_sigint_handler()
PIDLock("voice")
bus = MessageBusClient() # Mycroft messagebus, see mycroft.messagebus
Configuration.set_config_update_handlers(bus)
config = Configuration.get()
# Register handlers on internal RecognizerLoop bus
loop = RecognizerLoop(watchdog)
connect_loop_events(loop)
connect_bus_events(bus)
create_daemon(bus.run_forever)
create_daemon(loop.run)
except Exception as e:
error_hook(e)
else:
ready_hook()
wait_for_exit_signal()
stopping_hook()
if __name__ == "__main__":


@ -0,0 +1,102 @@
# Copyright 2020 Mycroft AI Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Data structures used by the speech client."""
class RollingMean:
"""Simple rolling mean calculation optimized for speed.
The optimization is made for cases where values are retrieved at a rate
comparable to the rate of sample additions.
Arguments:
mean_samples: Number of samples to use for mean value
"""
def __init__(self, mean_samples):
self.num_samples = mean_samples
self.samples = []
self.value = None  # Leave uninitialized
self.replace_pos = 0 # Position to replace
def append_sample(self, sample):
"""Add a sample to the buffer.
The sample will be appended if there is room in the buffer,
otherwise it will replace the oldest sample in the buffer.
"""
sample = float(sample)
current_len = len(self.samples)
if current_len < self.num_samples:
# build the mean
self.samples.append(sample)
if self.value is not None:
avgsum = self.value * current_len + sample
self.value = avgsum / (current_len + 1)
else: # If no samples are in the buffer set the sample as mean
self.value = sample
else:
# Remove the contribution of the old sample
replace_val = self.samples[self.replace_pos]
self.value -= replace_val / self.num_samples
# Replace it with the new sample and update the mean with its
# contribution
self.value += sample / self.num_samples
self.samples[self.replace_pos] = sample
# Update replace position
self.replace_pos = (self.replace_pos + 1) % self.num_samples
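A brief usage sketch of the new class; the import path is an assumption based on the `from .data_structures import ...` line in the listener diff further below:

```python
from mycroft.client.speech.data_structures import RollingMean  # assumed path

mean = RollingMean(3)                # mean over the last 3 samples
for sample in (2.0, 4.0, 6.0):
    mean.append_sample(sample)
print(mean.value)                    # 4.0 while the buffer is still filling up

mean.append_sample(8.0)              # replaces the oldest sample (2.0)
print(mean.value)                    # 6.0, the mean of 4.0, 6.0 and 8.0
```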
class CyclicAudioBuffer:
"""A Cyclic audio buffer for storing binary data.
TODO: The class is still unoptimized and performance can probably be
enhanced.
Arguments:
size (int): size in bytes
initial_data (bytes): initial buffer data
"""
def __init__(self, size, initial_data):
self.size = size
# Get at most size bytes from the end of the initial data
self._buffer = initial_data[-size:]
def append(self, data):
"""Add new data to the buffer, and slide out data if the buffer is full
Arguments:
data (bytes): binary data to append to the buffer. If buffer size
is exceeded the oldest data will be dropped.
"""
buff = self._buffer + data
if len(buff) > self.size:
buff = buff[-self.size:]
self._buffer = buff
def get(self):
"""Get the binary data."""
return self._buffer
def get_last(self, size):
"""Get the last entries of the buffer."""
return self._buffer[-size:]
def __getitem__(self, key):
return self._buffer[key]
def __len__(self):
return len(self._buffer)
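And a corresponding sketch for CyclicAudioBuffer, under the same assumed import path:

```python
from mycroft.client.speech.data_structures import CyclicAudioBuffer  # assumed path

buf = CyclicAudioBuffer(4, b'\x00\x00')   # 4-byte capacity, seeded with silence
buf.append(b'\x01\x02')
print(buf.get())          # b'\x00\x00\x01\x02' (buffer is now full)
buf.append(b'\x03\x04')
print(buf.get())          # b'\x01\x02\x03\x04' (oldest bytes slid out)
print(buf.get_last(2))    # b'\x03\x04'
```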


@ -24,13 +24,14 @@ from contextlib import suppress
from glob import glob
from os.path import dirname, exists, join, abspath, expanduser, isfile, isdir
from shutil import rmtree
from threading import Timer, Event, Thread
from threading import Timer, Thread
from urllib.error import HTTPError
from petact import install_package
from mycroft.configuration import Configuration, LocalConf, USER_CONFIG
from mycroft.util.log import LOG
from mycroft.util.monotonic_event import MonotonicEvent
RECOGNIZER_DIR = join(abspath(dirname(__file__)), "recognizer")
INIT_TIMEOUT = 10 # In seconds
@ -44,15 +45,32 @@ class NoModelAvailable(Exception):
pass
def msec_to_sec(msecs):
"""Convert milliseconds to seconds.
Arguments:
msecs: milliseconds
Returns:
input converted from milliseconds to seconds
"""
return msecs / 1000
class HotWordEngine:
def __init__(self, key_phrase="hey mycroft", config=None, lang="en-us"):
self.key_phrase = str(key_phrase).lower()
# rough estimate 1 phoneme per 2 chars
self.num_phonemes = len(key_phrase) / 2 + 1
if config is None:
config = Configuration.get().get("hot_words", {})
config = config.get(self.key_phrase, {})
self.config = config
# rough estimate 1 phoneme per 2 chars
self.num_phonemes = len(key_phrase) / 2 + 1
phoneme_duration = msec_to_sec(config.get('phoneme_duration', 120))
self.expected_duration = self.num_phonemes * phoneme_duration
self.listener_config = Configuration.get().get("listener", {})
self.lang = str(self.config.get("lang", lang)).lower()
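As a worked example with the defaults: the key phrase "hey mycroft" is 11 characters, so num_phonemes is 11 / 2 + 1 = 6.5, and with the default phoneme_duration of 120 ms the expected_duration becomes 6.5 * 0.12 s = 0.78 s of wake word audio.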
@ -100,9 +118,16 @@ class PocketsphinxHotWord(HotWordEngine):
return file_name
def create_config(self, dict_name, config):
"""If language config doesn't exist then
we use default language (english) config as a fallback.
"""
model_file = join(RECOGNIZER_DIR, 'model', self.lang, 'hmm')
if not exists(model_file):
LOG.error('PocketSphinx model not found at ' + str(model_file))
LOG.error(
'PocketSphinx model not found at "{}". '.format(model_file) +
'Falling back to en-us model'
)
model_file = join(RECOGNIZER_DIR, 'model', 'en-us', 'hmm')
config.set_string('-hmm', model_file)
config.set_string('-dict', dict_name)
config.set_string('-keyphrase', self.key_phrase)
@ -385,7 +410,7 @@ class HotWordFactory:
def load_module(module, hotword, config, lang, loop):
LOG.info('Loading "{}" wake word via {}'.format(hotword, module))
instance = None
complete = Event()
complete = MonotonicEvent()
def initialize():
nonlocal instance, complete


@ -16,7 +16,7 @@ import time
from threading import Thread
import speech_recognition as sr
import pyaudio
from pyee import EventEmitter
from pyee import BaseEventEmitter
from requests import RequestException
from requests.exceptions import ConnectionError
@ -271,14 +271,19 @@ def recognizer_conf_hash(config):
return hash(json.dumps(c, sort_keys=True))
class RecognizerLoop(EventEmitter):
class RecognizerLoop(BaseEventEmitter):
""" EventEmitter loop running speech recognition.
Local wake word recognizer and remote general speech recognition.
Arguments:
watchdog: (callable) function to call periodically indicating
operational status.
"""
def __init__(self):
def __init__(self, watchdog=None):
super(RecognizerLoop, self).__init__()
self._watchdog = watchdog
self.mute_calls = 0
self._load_config()
@ -305,7 +310,7 @@ class RecognizerLoop(EventEmitter):
# TODO - localization
self.wakeup_recognizer = self.create_wakeup_recognizer()
self.responsive_recognizer = ResponsiveRecognizer(
self.wakeword_recognizer)
self.wakeword_recognizer, self._watchdog)
self.state = RecognizerLoopState()
def create_wake_word_recognizer(self):


@ -15,7 +15,7 @@
import audioop
from time import sleep, time as get_time
from collections import deque
from collections import deque, namedtuple
import datetime
import json
import os
@ -44,20 +44,26 @@ from mycroft.util import (
)
from mycroft.util.log import LOG
from .data_structures import RollingMean, CyclicAudioBuffer
WakeWordData = namedtuple('WakeWordData',
['audio', 'found', 'stopped', 'end_audio'])
class MutableStream:
def __init__(self, wrapped_stream, format, muted=False):
assert wrapped_stream is not None
self.wrapped_stream = wrapped_stream
self.muted = muted
if muted:
self.mute()
self.SAMPLE_WIDTH = pyaudio.get_sample_size(format)
self.muted_buffer = b''.join([b'\x00' * self.SAMPLE_WIDTH])
self.read_lock = Lock()
self.muted = muted
if muted:
self.mute()
def mute(self):
"""Stop the stream and set the muted flag."""
with self.read_lock:
@ -180,11 +186,131 @@ class MutableMicrophone(Microphone):
def is_muted(self):
return self.muted
def duration_to_bytes(self, sec):
"""Converts a duration in seconds to number of recorded bytes.
Arguments:
sec: number of seconds
Returns:
(int) equivalent number of bytes recorded by this Mic
"""
return int(sec * self.SAMPLE_RATE) * self.SAMPLE_WIDTH
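For example, with a 16 kHz, 16-bit (2-byte) source, duration_to_bytes(0.5) returns int(0.5 * 16000) * 2 = 16000 bytes.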
def get_silence(num_bytes):
return b'\0' * num_bytes
class NoiseTracker:
"""Noise tracker, used to deterimine if an audio utterance is complete.
The current implementation expects a number of loud chunks (not necessary
in one continous sequence) followed by a short period of continous quiet
audio data to be considered complete.
Arguments:
minimum (int): lower noise level will be threshold for "quiet" level
maximum (int): ceiling of noise level
sec_per_buffer (float): the length of each buffer used when updating
the tracker
loud_time_limit (float): time in seconds of loud audio required before
the utterance can be considered complete
silence_time_limit (float): time limit for silence to abort sentence
silence_after_loud_time (float): time of silence to finalize the sentence.
default 0.25 seconds.
"""
def __init__(self, minimum, maximum, sec_per_buffer, loud_time_limit,
silence_time_limit, silence_after_loud_time=0.25):
self.min_level = minimum
self.max_level = maximum
self.sec_per_buffer = sec_per_buffer
self.num_loud_chunks = 0
self.level = 0
# Smallest number of loud chunks required to return loud enough
self.min_loud_chunks = int(loud_time_limit / sec_per_buffer)
self.max_silence_duration = silence_time_limit
self.silence_duration = 0
# time of quiet period after long enough loud data to consider the
# sentence complete
self.silence_after_loud = silence_after_loud_time
# Constants
self.increase_multiplier = 200
self.decrease_multiplier = 100
def _increase_noise(self):
"""Bumps the current level.
Modifies the noise level with a factor depending on the buffer length.
"""
if self.level < self.max_level:
self.level += self.increase_multiplier * self.sec_per_buffer
def _decrease_noise(self):
"""Decrease the current level.
Modifies the noise level with a factor depending on the buffer length.
"""
if self.level > self.min_level:
self.level -= self.decrease_multiplier * self.sec_per_buffer
def update(self, is_loud):
"""Update the tracking. with either a loud chunk or a quiet chunk.
Arguments:
is_loud: True if a loud chunk should be registered
False if a quiet chunk should be registered
"""
if is_loud:
self._increase_noise()
self.num_loud_chunks += 1
else:
self._decrease_noise()
# Update duration of energy under the threshold level
if self._quiet_enough():
self.silence_duration += self.sec_per_buffer
else: # Reset silence duration
self.silence_duration = 0
def _loud_enough(self):
"""Check if the noise loudness criteria is fulfilled.
The noise is considered loud enough if it's been over the threshold
for a certain number of chunks (accumulated, not in a row).
"""
return self.num_loud_chunks > self.min_loud_chunks
def _quiet_enough(self):
"""Check if the noise quietness criteria is fulfilled.
The quiet level is instant and will return True if the level is lower
or equal to the minimum noise level.
"""
return self.level <= self.min_level
def recording_complete(self):
"""Has the end creteria for the recording been met.
If the noise level has decresed from a loud level to a low level
the user has stopped speaking.
Alternatively if a lot of silence was recorded without detecting
a loud enough phrase.
"""
too_much_silence = (self.silence_duration > self.max_silence_duration)
if too_much_silence:
LOG.debug('Too much silence recorded without start of sentence '
'detected')
return ((self._quiet_enough() and
self.silence_duration > self.silence_after_loud) and
(self._loud_enough() or too_much_silence))
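A rough sketch of how the tracker is driven: feed it one loud/quiet flag per audio chunk and poll recording_complete() to decide when the utterance has ended. The values and import path below are illustrative:

```python
from mycroft.client.speech.mic import NoiseTracker  # assumed path

sec_per_buffer = 0.1
tracker = NoiseTracker(minimum=0, maximum=25, sec_per_buffer=sec_per_buffer,
                       loud_time_limit=0.5, silence_time_limit=3.0)

# Roughly 0.8 s of "speech" followed by quiet chunks.
for is_loud in [True] * 8 + [False] * 10:
    tracker.update(is_loud)
    if tracker.recording_complete():
        print('utterance complete')
        break
```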
class ResponsiveRecognizer(speech_recognition.Recognizer):
# Padding of silence when feeding to pocketsphinx
SILENCE_SEC = 0.01
@ -197,18 +323,11 @@ class ResponsiveRecognizer(speech_recognition.Recognizer):
# before a phrase will be considered complete
MIN_SILENCE_AT_END = 0.25
# The maximum seconds a phrase can be recorded,
# provided there is noise the entire time
RECORDING_TIMEOUT = 10.0
# The maximum time it will continue to record silence
# when not enough noise has been detected
RECORDING_TIMEOUT_WITH_SILENCE = 3.0
# Time between pocketsphinx checks for the wake word
SEC_BETWEEN_WW_CHECKS = 0.2
def __init__(self, wake_word_recognizer):
def __init__(self, wake_word_recognizer, watchdog=None):
self._watchdog = watchdog or (lambda: None) # Default to dummy func
self.config = Configuration.get()
listener_config = self.config.get('listener')
self.upload_url = listener_config['wake_word_upload']['url']
@ -217,7 +336,7 @@ class ResponsiveRecognizer(speech_recognition.Recognizer):
self.overflow_exc = listener_config.get('overflow_exception', False)
speech_recognition.Recognizer.__init__(self)
super().__init__()
self.wake_word_recognizer = wake_word_recognizer
self.audio = pyaudio.PyAudio()
self.multiplier = listener_config.get('multiplier')
@ -235,23 +354,24 @@ class ResponsiveRecognizer(speech_recognition.Recognizer):
if self.save_utterances and not isdir(self.saved_utterances_dir):
os.mkdir(self.saved_utterances_dir)
self.upload_lock = Lock()
self.filenames_to_upload = []
self.mic_level_file = os.path.join(get_ipc_directory(), "mic_level")
# Signal statuses
self._stop_signaled = False
self._listen_triggered = False
# The maximum audio in seconds to keep for transcribing a phrase
# The wake word must fit in this time
num_phonemes = wake_word_recognizer.num_phonemes
len_phoneme = listener_config.get('phoneme_duration', 120) / 1000.0
self.TEST_WW_SEC = num_phonemes * len_phoneme
self.SAVED_WW_SEC = max(3, self.TEST_WW_SEC)
self._account_id = None
# The maximum seconds a phrase can be recorded,
# provided there is noise the entire time
self.recording_timeout = listener_config.get('recording_timeout',
10.0)
# The maximum time it will continue to record silence
# when not enough noise has been detected
self.recording_timeout_with_silence = listener_config.get(
'recording_timeout_with_silence', 3.0)
@property
def account_id(self):
"""Fetch account from backend when needed.
@ -288,7 +408,7 @@ class ResponsiveRecognizer(speech_recognition.Recognizer):
Essentially, this code waits for a period of silence and then returns
the audio. If silence isn't detected, it will terminate and return
a buffer of RECORDING_TIMEOUT duration.
a buffer of self.recording_timeout duration.
Args:
source (AudioSource): Source producing the audio chunks
@ -303,37 +423,16 @@ class ResponsiveRecognizer(speech_recognition.Recognizer):
bytearray: complete audio buffer recorded, including any
silence at the end of the user's utterance
"""
num_loud_chunks = 0
noise = 0
max_noise = 25
min_noise = 0
silence_duration = 0
def increase_noise(level):
if level < max_noise:
return level + 200 * sec_per_buffer
return level
def decrease_noise(level):
if level > min_noise:
return level - 100 * sec_per_buffer
return level
# Smallest number of loud chunks required to return
min_loud_chunks = int(self.MIN_LOUD_SEC_PER_PHRASE / sec_per_buffer)
noise_tracker = NoiseTracker(0, 25, sec_per_buffer,
self.MIN_LOUD_SEC_PER_PHRASE,
self.recording_timeout_with_silence)
# Maximum number of chunks to record before timing out
max_chunks = int(self.RECORDING_TIMEOUT / sec_per_buffer)
max_chunks = int(self.recording_timeout / sec_per_buffer)
num_chunks = 0
# Will return if exceeded this even if there's not enough loud chunks
max_chunks_of_silence = int(self.RECORDING_TIMEOUT_WITH_SILENCE /
sec_per_buffer)
# bytearray to store audio in
# bytearray to store audio in, initialized with a single sample of
# silence.
byte_data = get_silence(source.SAMPLE_WIDTH)
if stream:
@ -354,33 +453,20 @@ class ResponsiveRecognizer(speech_recognition.Recognizer):
energy = self.calc_energy(chunk, source.SAMPLE_WIDTH)
test_threshold = self.energy_threshold * self.multiplier
is_loud = energy > test_threshold
if is_loud:
noise = increase_noise(noise)
num_loud_chunks += 1
else:
noise = decrease_noise(noise)
noise_tracker.update(is_loud)
if not is_loud:
self._adjust_threshold(energy, sec_per_buffer)
# The phrase is complete if the noise_tracker end of sentence
# criteria is met or if the top-button is pressed
phrase_complete = (noise_tracker.recording_complete() or
check_for_signal('buttonPress'))
# Periodically write the energy level to the mic level file.
if num_chunks % 10 == 0:
self._watchdog()
self.write_mic_level(energy, source)
was_loud_enough = num_loud_chunks > min_loud_chunks
quiet_enough = noise <= min_noise
if quiet_enough:
silence_duration += sec_per_buffer
if silence_duration < self.MIN_SILENCE_AT_END:
quiet_enough = False # gotta be silent for min of 1/4 sec
else:
silence_duration = 0
recorded_too_much_silence = num_chunks > max_chunks_of_silence
if quiet_enough and (was_loud_enough or recorded_too_much_silence):
phrase_complete = True
# Pressing top-button will end recording immediately
if check_for_signal('buttonPress'):
phrase_complete = True
return byte_data
def write_mic_level(self, energy, source):
@ -392,17 +478,12 @@ class ResponsiveRecognizer(speech_recognition.Recognizer):
)
)
@staticmethod
def sec_to_bytes(sec, source):
return int(sec * source.SAMPLE_RATE) * source.SAMPLE_WIDTH
def _skip_wake_word(self):
"""Check if told programatically to skip the wake word
For example when we are in a dialog with the user.
"""
# TODO: remove startListening signal check in 20.02
if check_for_signal('startListening') or self._listen_triggered:
if self._listen_triggered:
return True
# Pressing the Mark 1 button can start recording (unless
@ -420,9 +501,7 @@ class ResponsiveRecognizer(speech_recognition.Recognizer):
return False
def stop(self):
"""
Signal stop and exit waiting state.
"""
"""Signal stop and exit waiting state."""
self._stop_signaled = True
def _compile_metadata(self):
@ -443,140 +522,141 @@ class ResponsiveRecognizer(speech_recognition.Recognizer):
'model': str(model_hash)
}
def _upload_wake_word(self, audio, metadata):
requests.post(
self.upload_url, files={
'audio': BytesIO(audio.get_wav_data()),
'metadata': StringIO(json.dumps(metadata))
}
)
def trigger_listen(self):
"""Externally trigger listening."""
LOG.debug('Listen triggered from external source.')
self._listen_triggered = True
def _wait_until_wake_word(self, source, sec_per_buffer, emitter):
def _upload_wakeword(self, audio, metadata):
"""Upload the wakeword in a background thread."""
LOG.debug(
"Wakeword uploading has been disabled. The API endpoint used in "
"Mycroft-core v20.2 and below has been deprecated. To contribute "
"new wakeword samples please upgrade to v20.8 or above."
)
# def upload(audio, metadata):
# requests.post(self.upload_url,
# files={'audio': BytesIO(audio.get_wav_data()),
# 'metadata': StringIO(json.dumps(metadata))})
# Thread(target=upload, daemon=True, args=(audio, metadata)).start()
def _send_wakeword_info(self, emitter):
"""Send messagebus message indicating that a wakeword was received.
Arguments:
emitter: bus emitter to send information on.
"""
SessionManager.touch()
payload = {'utterance': self.wake_word_name,
'session': SessionManager.get().session_id}
emitter.emit("recognizer_loop:wakeword", payload)
def _write_wakeword_to_disk(self, audio, metadata):
"""Write wakeword to disk.
Arguments:
audio: Audio data to write
metadata: List of metadata about the captured wakeword
"""
filename = join(self.saved_wake_words_dir,
'_'.join(str(metadata[k]) for k in sorted(metadata)) +
'.wav')
with open(filename, 'wb') as f:
f.write(audio.get_wav_data())
def _handle_wakeword_found(self, audio_data, source):
"""Perform actions to be triggered after a wakeword is found.
This includes: emit event on messagebus that a wakeword is heard,
store wakeword to disk if configured and sending the wakeword data
to the cloud in case the user has opted into the data sharing.
"""
# Save and upload positive wake words as appropriate
upload_allowed = (self.config['opt_in'] and not self.upload_disabled)
if (self.save_wake_words or upload_allowed):
audio = self._create_audio_data(audio_data, source)
metadata = self._compile_metadata()
if self.save_wake_words:
# Save wake word locally
self._write_wakeword_to_disk(audio, metadata)
# Upload wake word for opt_in people
if upload_allowed:
self._upload_wakeword(audio, metadata)
def _wait_until_wake_word(self, source, sec_per_buffer):
"""Listen continuously on source until a wake word is spoken
Args:
Arguments:
source (AudioSource): Source producing the audio chunks
sec_per_buffer (float): Fractional number of seconds in each chunk
"""
# The maximum audio in seconds to keep for transcribing a phrase
# The wake word must fit in this time
ww_duration = self.wake_word_recognizer.expected_duration
ww_test_duration = max(3, ww_duration)
mic_write_counter = 0
num_silent_bytes = int(self.SILENCE_SEC * source.SAMPLE_RATE *
source.SAMPLE_WIDTH)
silence = get_silence(num_silent_bytes)
# bytearray to store audio in
byte_data = silence
# Max bytes for byte_data before audio is removed from the front
max_size = source.duration_to_bytes(ww_duration)
test_size = source.duration_to_bytes(ww_test_duration)
audio_buffer = CyclicAudioBuffer(max_size, silence)
buffers_per_check = self.SEC_BETWEEN_WW_CHECKS / sec_per_buffer
buffers_since_check = 0.0
# Max bytes for byte_data before audio is removed from the front
max_size = self.sec_to_bytes(self.SAVED_WW_SEC, source)
test_size = self.sec_to_bytes(self.TEST_WW_SEC, source)
said_wake_word = False
# Rolling buffer to track the audio energy (loudness) heard on
# the source recently. An average audio energy is maintained
# based on these levels.
energies = []
idx_energy = 0
avg_energy = 0.0
energy_avg_samples = int(5 / sec_per_buffer) # avg over last 5 secs
counter = 0
average_samples = int(5 / sec_per_buffer) # average over last 5 secs
audio_mean = RollingMean(average_samples)
# These are frames immediately after wake word is detected
# that we want to keep to send to STT
ww_frames = deque(maxlen=7)
while not said_wake_word and not self._stop_signaled:
if self._skip_wake_word():
break
said_wake_word = False
while (not said_wake_word and not self._stop_signaled and
not self._skip_wake_word()):
chunk = self.record_sound_chunk(source)
audio_buffer.append(chunk)
ww_frames.append(chunk)
energy = self.calc_energy(chunk, source.SAMPLE_WIDTH)
audio_mean.append_sample(energy)
if energy < self.energy_threshold * self.multiplier:
self._adjust_threshold(energy, sec_per_buffer)
# maintain the threshold using average
if self.energy_threshold < energy < audio_mean.value * 1.5:
# bump the threshold to just above this value
self.energy_threshold = energy * 1.2
if len(energies) < energy_avg_samples:
# build the average
energies.append(energy)
avg_energy += float(energy) / energy_avg_samples
else:
# maintain the running average and rolling buffer
avg_energy -= float(energies[idx_energy]) / energy_avg_samples
avg_energy += float(energy) / energy_avg_samples
energies[idx_energy] = energy
idx_energy = (idx_energy + 1) % energy_avg_samples
# maintain the threshold using average
if energy < avg_energy * 1.5:
if energy > self.energy_threshold:
# bump the threshold to just above this value
self.energy_threshold = energy * 1.2
# Periodically output energy level stats. This can be used to
# visualize the microphone input, e.g. a needle on a meter.
if counter % 3:
if mic_write_counter % 3:
self._watchdog()
self.write_mic_level(energy, source)
counter += 1
# At first, the buffer is empty and must fill up. After that
# just drop the first chunk bytes to keep it the same size.
needs_to_grow = len(byte_data) < max_size
if needs_to_grow:
byte_data += chunk
else: # Remove beginning of audio and add new chunk to end
byte_data = byte_data[len(chunk):] + chunk
mic_write_counter += 1
buffers_since_check += 1.0
# Send chunk to wake_word_recognizer
self.wake_word_recognizer.update(chunk)
if buffers_since_check > buffers_per_check:
buffers_since_check -= buffers_per_check
chopped = byte_data[-test_size:] \
if test_size < len(byte_data) else byte_data
audio_data = chopped + silence
audio_data = audio_buffer.get_last(test_size) + silence
said_wake_word = \
self.wake_word_recognizer.found_wake_word(audio_data)
# Save positive wake words as appropriate
if said_wake_word:
SessionManager.touch()
payload = {
'utterance': self.wake_word_name,
'session': SessionManager.get().session_id,
}
emitter.emit("recognizer_loop:wakeword", payload)
audio = None
mtd = None
if self.save_wake_words:
# Save wake word locally
audio = self._create_audio_data(byte_data, source)
mtd = self._compile_metadata()
module = self.wake_word_recognizer.__class__.__name__
fn = join(
self.saved_wake_words_dir,
'_'.join(str(mtd[k]) for k in sorted(mtd)) + '.wav'
)
with open(fn, 'wb') as f:
f.write(audio.get_wav_data())
if self.config['opt_in'] and not self.upload_disabled:
# Upload wake word for opt_in people
Thread(
target=self._upload_wake_word, daemon=True,
args=[audio or
self._create_audio_data(byte_data, source),
mtd or self._compile_metadata()]
).start()
return ww_frames
self._listen_triggered = False
return WakeWordData(audio_data, said_wake_word,
self._stop_signaled, ww_frames)
@staticmethod
def _create_audio_data(raw_data, source):
@ -586,6 +666,17 @@ class ResponsiveRecognizer(speech_recognition.Recognizer):
"""
return AudioData(raw_data, source.SAMPLE_RATE, source.SAMPLE_WIDTH)
def mute_and_confirm_listening(self, source):
audio_file = resolve_resource_file(
self.config.get('sounds').get('start_listening'))
if audio_file:
source.mute()
play_wav(audio_file).wait()
source.unmute()
return True
else:
return False
def listen(self, source, emitter, stream=None):
"""Listens for chunks of audio that Mycroft should perform STT on.
@ -617,22 +708,23 @@ class ResponsiveRecognizer(speech_recognition.Recognizer):
self.adjust_for_ambient_noise(source, 1.0)
LOG.debug("Waiting for wake word...")
ww_frames = self._wait_until_wake_word(source, sec_per_buffer, emitter)
ww_data = self._wait_until_wake_word(source, sec_per_buffer)
self._listen_triggered = False
if self._stop_signaled:
ww_frames = None
if ww_data.found:
# If the wakeword was heard send it
self._send_wakeword_info(emitter)
self._handle_wakeword_found(ww_data.audio, source)
ww_frames = ww_data.end_audio
if ww_data.stopped:
# If the waiting returned from a stop signal
return
LOG.debug("Recording...")
# If enabled, play a wave file with a short sound to audibly
# indicate recording has begun.
if self.config.get('confirm_listening'):
audio_file = resolve_resource_file(
self.config.get('sounds').get('start_listening'))
if audio_file:
source.mute()
play_wav(audio_file).wait()
source.unmute()
if self.mute_and_confirm_listening(source):
# Clear frames from wakeword detections since they're
# irrelevant after mute - play wav - unmute sequence
ww_frames = None


@ -1018,7 +1018,7 @@ def show_skills(skills):
scr.addstr(curses.LINES - 1, 0,
center(23) + "Press any key to continue", CLR_HEADING)
scr.refresh()
scr.get_wch() # blocks
wait_for_any_key()
prepare_page()
elif row == curses.LINES - 2:
# Reached bottom of screen, start at top and move output to a
@ -1065,6 +1065,21 @@ def _get_cmd_param(cmd, keyword):
return parts[-1]
def wait_for_any_key():
"""Block until key is pressed.
This works around curses.error that can occur on old versions of ncurses.
"""
while True:
try:
scr.get_wch() # blocks
except curses.error:
# Loop if get_wch throws error
time.sleep(0.05)
else:
break
def handle_cmd(cmd):
global show_meter
global screen_mode
@ -1142,7 +1157,8 @@ def handle_cmd(cmd):
if message:
show_skills(message.data)
scr.get_wch() # blocks
wait_for_any_key()
screen_mode = SCR_MAIN
set_screen_dirty()
elif "deactivate" in cmd:


@ -28,8 +28,8 @@ from .locations import (DEFAULT_CONFIG, SYSTEM_CONFIG, USER_CONFIG,
def is_remote_list(values):
''' check if this list corresponds to a backend formatted collection of
dictionaries '''
"""Check if list corresponds to a backend formatted collection of dicts
"""
for v in values:
if not isinstance(v, dict):
return False
@ -39,13 +39,11 @@ def is_remote_list(values):
def translate_remote(config, setting):
"""
Translate config names from server to equivalents usable
in mycroft-core.
"""Translate config names from server to equivalents for mycroft-core.
Args:
config: base config to populate
settings: remote settings to be translated
Arguments:
config: base config to populate
settings: remote settings to be translated
"""
IGNORED_SETTINGS = ["uuid", "@type", "active", "user", "device"]
@ -69,12 +67,11 @@ def translate_remote(config, setting):
def translate_list(config, values):
"""
Translate list formatted by mycroft server.
"""Translate list formatted by mycroft server.
Args:
config (dict): target config
values (list): list from mycroft server config
Arguments:
config (dict): target config
values (list): list from mycroft server config
"""
for v in values:
module = v["@type"]
@ -85,9 +82,7 @@ def translate_list(config, values):
class LocalConf(dict):
"""
Config dict from file.
"""
"""Config dictionary from file."""
def __init__(self, path):
super(LocalConf, self).__init__()
if path:
@ -95,11 +90,10 @@ class LocalConf(dict):
self.load_local(path)
def load_local(self, path):
"""
Load local json file into self.
"""Load local json file into self.
Args:
path (str): file to load
Arguments:
path (str): file to load
"""
if exists(path) and isfile(path):
try:
@ -115,10 +109,10 @@ class LocalConf(dict):
LOG.debug("Configuration '{}' not defined, skipping".format(path))
def store(self, path=None):
"""
Cache the received settings locally. The cache will be used if
the remote is unreachable to load settings that are as close
to the user's as possible
"""Cache the received settings locally.
The cache will be used if the remote is unreachable to load settings
that are as close to the user's as possible.
"""
path = path or self.path
with open(path, 'w') as f:
@ -129,9 +123,7 @@ class LocalConf(dict):
class RemoteConf(LocalConf):
"""
Config dict fetched from mycroft.ai
"""
"""Config dictionary fetched from mycroft.ai."""
def __init__(self, cache=None):
super(RemoteConf, self).__init__(None)
@ -176,18 +168,23 @@ class RemoteConf(LocalConf):
class Configuration:
"""Namespace for operations on the configuration singleton."""
__config = {} # Cached config
__patch = {} # Patch config that skills can update to override config
@staticmethod
def get(configs=None, cache=True):
"""
Get configuration, returns cached instance if available otherwise
builds a new configuration dict.
"""Get configuration
Args:
configs (list): List of configuration dicts
cache (boolean): True if the result should be cached
Returns cached instance if available otherwise builds a new
configuration dict.
Arguments:
configs (list): List of configuration dicts
cache (boolean): True if the result should be cached
Returns:
(dict) configuration dictionary.
"""
if Configuration.__config:
return Configuration.__config
@ -196,14 +193,14 @@ class Configuration:
@staticmethod
def load_config_stack(configs=None, cache=False):
"""
load a stack of config dicts into a single dict
"""Load a stack of config dicts into a single dict
Args:
configs (list): list of dicts to load
cache (boolean): True if result should be cached
Arguments:
configs (list): list of dicts to load
cache (boolean): True if result should be cached
Returns: merged dict of all configuration files
Returns:
(dict) merged dict of all configuration files
"""
if not configs:
configs = [LocalConf(DEFAULT_CONFIG), RemoteConf(),
@ -233,29 +230,40 @@ class Configuration:
def set_config_update_handlers(bus):
"""Setup websocket handlers to update config.
Args:
Arguments:
bus: Message bus client instance
"""
bus.on("configuration.updated", Configuration.updated)
bus.on("configuration.patch", Configuration.patch)
bus.on("configuration.patch.clear", Configuration.patch_clear)
@staticmethod
def updated(message):
"""
handler for configuration.updated, triggers an update
of cached config.
"""Handler for configuration.updated,
Triggers an update of cached config.
"""
Configuration.load_config_stack(cache=True)
@staticmethod
def patch(message):
"""
patch the volatile dict usable by skills
"""Patch the volatile dict usable by skills
Args:
message: Messagebus message should contain a config
in the data payload.
Arguments:
message: Messagebus message should contain a config
in the data payload.
"""
config = message.data.get("config", {})
merge_dict(Configuration.__patch, config)
Configuration.load_config_stack(cache=True)
@staticmethod
def patch_clear(message):
"""Clear the config patch space.
Arguments:
message: Messagebus message should contain a config
in the data payload.
"""
Configuration.__patch = {}
Configuration.load_config_stack(cache=True)
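As a usage sketch (not part of this diff), a skill or external process could drive the patch mechanism above over the message bus; the `confirm_listening` value below is purely illustrative:

```
from mycroft.messagebus.message import Message

def apply_patch(bus):
    # Overlay a volatile setting on top of the loaded config stack
    bus.emit(Message("configuration.patch",
                     {"config": {"confirm_listening": False}}))

def clear_patches(bus):
    # Drop every previously applied patch
    bus.emit(Message("configuration.patch.clear"))
```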

View File

@ -102,7 +102,7 @@
// Relative to "data_dir"
"cache": ".skills-repo",
"url": "https://github.com/MycroftAI/mycroft-skills",
"branch": "20.02"
"branch": "20.08"
}
},
"upload_skill_manifest": true,
@ -189,7 +189,11 @@
"multiplier": 1.0,
"energy_ratio": 1.5,
"wake_word": "hey mycroft",
"stand_up_word": "wake up"
"stand_up_word": "wake up",
// Settings used by microphone to set recording timeout
"recording_timeout": 10.0,
"recording_timeout_with_silence": 3.0
},
// Settings used for any precise wake words
@ -281,9 +285,16 @@
// Text to Speech parameters
// Override: REMOTE
"tts": {
// Engine. Options: "mimic", "google", "marytts", "fatts", "espeak", "spdsay", "responsive_voice", "yandex"
// Engine. Options: "mimic", "google", "marytts", "fatts", "espeak",
// "spdsay", "responsive_voice", "yandex", "polly"
"pulse_duck": false,
"module": "mimic",
"polly": {
"voice": "Matthew",
"region": "us-east-1",
"access_key_id": "",
"secret_access_key": ""
},
"mimic": {
"voice": "ap"
},

View File

@ -14,5 +14,4 @@
#
"""Provides utilities for rendering dialogs and populating with custom data."""
from .dialog import (MustacheDialogRenderer, DialogLoader,
load_dialogs, get)
from .dialog import (MustacheDialogRenderer, load_dialogs, get)

View File

@ -120,28 +120,6 @@ class MustacheDialogRenderer:
return line
class DialogLoader:
"""Loads a collection of dialog files into a renderer implementation.
TODO: Remove in 20.02
"""
def __init__(self, renderer_factory=MustacheDialogRenderer):
LOG.warning('Deprecated, use load_dialogs() instead.')
self.__renderer = renderer_factory()
def load(self, dialog_dir):
"""Load all dialog files within the specified directory.
Args:
dialog_dir (str): directory that contains dialog files
Returns:
a loaded instance of a dialog renderer
"""
return load_dialogs(dialog_dir, self.__renderer)
def load_dialogs(dialog_dir, renderer=None):
"""Load all dialog files within the specified directory.

View File

@ -34,11 +34,21 @@ class SkillGUI:
def __init__(self, skill):
self.__session_data = {} # synced to GUI for use by this skill's pages
self.page = None # the active GUI page (e.g. QML template) to show
self.skill = skill
self.on_gui_changed_callback = None
self.config = Configuration.get()
@property
def connected(self):
"""Returns True if at least 1 gui is connected, else False"""
if self.skill.bus:
reply = self.skill.bus.wait_for_response(
Message("gui.status.request"), "gui.status.request.response")
if reply:
return reply.data["connected"]
return False
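A hedged sketch of how a skill might use the new `connected` property; the handler name and spoken text are illustrative:

```
def handle_status(self):
    # Prefer a visual response when a GUI client is attached
    if self.gui.connected:
        self.gui.show_text("All systems go", title="Status")
    else:
        self.speak("All systems go")
```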
@property
def remote_url(self):
"""Returns configuration value for url of remote-server."""
@ -111,7 +121,7 @@ class SkillGUI:
self.skill.bus.emit(Message("gui.clear.namespace",
{"__from": self.skill.skill_id}))
def send_event(self, event_name, params={}):
def send_event(self, event_name, params=None):
"""Trigger a gui event.
Arguments:
@ -119,12 +129,14 @@ class SkillGUI:
params: json serializable object containing any parameters that
should be sent along with the request.
"""
params = params or {}
self.skill.bus.emit(Message("gui.event.send",
{"__from": self.skill.skill_id,
"event_name": event_name,
"params": params}))
def show_page(self, name, override_idle=None):
def show_page(self, name, override_idle=None,
override_animations=False):
"""Begin showing the page in the GUI
Arguments:
@ -133,10 +145,14 @@ class SkillGUI:
True: Takes over the resting page indefinitely
(int): Delays resting page for the specified number of
seconds.
override_animations (boolean):
True: Disables showing all platform skill animations.
False: 'Default' always show animations.
"""
self.show_pages([name], 0, override_idle)
self.show_pages([name], 0, override_idle, override_animations)
def show_pages(self, page_names, index=0, override_idle=None):
def show_pages(self, page_names, index=0, override_idle=None,
override_animations=False):
"""Begin showing the list of pages in the GUI.
Arguments:
@ -148,6 +164,9 @@ class SkillGUI:
True: Takes over the resting page indefinitely
(int): Delays resting page for the specified number of
seconds.
override_animations (boolean):
True: Disables showing all platform skill animations.
False: 'Default' always show animations.
"""
if not isinstance(page_names, list):
raise ValueError('page_names must be a list')
@ -181,7 +200,8 @@ class SkillGUI:
{"page": page_urls,
"index": index,
"__from": self.skill.skill_id,
"__idle": override_idle}))
"__idle": override_idle,
"__animations": override_animations}))
def remove_page(self, page):
"""Remove a single page from the GUI.
@ -214,21 +234,30 @@ class SkillGUI:
{"page": page_urls,
"__from": self.skill.skill_id}))
def show_text(self, text, title=None, override_idle=None):
def show_text(self, text, title=None, override_idle=None,
override_animations=False):
"""Display a GUI page for viewing simple text.
Arguments:
text (str): Main text content. It will auto-paginate
title (str): A title to display above the text content.
override_idle (boolean, int):
True: Takes over the resting page indefinitely
(int): Delays resting page for the specified number of
seconds.
override_animations (boolean):
True: Disables showing all platform skill animations.
False: 'Default' always show animations.
"""
self.clear()
self["text"] = text
self["title"] = title
self.show_page("SYSTEM_TextFrame.qml", override_idle)
self.show_page("SYSTEM_TextFrame.qml", override_idle,
override_animations)
def show_image(self, url, caption=None,
title=None, fill=None,
override_idle=None):
override_idle=None, override_animations=False):
"""Display a GUI page for viewing an image.
Arguments:
@ -237,35 +266,88 @@ class SkillGUI:
title (str): A title to display above the image content
fill (str): Fill type supports 'PreserveAspectFit',
'PreserveAspectCrop', 'Stretch'
override_idle (boolean, int):
True: Takes over the resting page indefinitely
(int): Delays resting page for the specified number of
seconds.
override_animations (boolean):
True: Disables showing all platform skill animations.
False: 'Default' always show animations.
"""
self.clear()
self["image"] = url
self["title"] = title
self["caption"] = caption
self["fill"] = fill
self.show_page("SYSTEM_ImageFrame.qml", override_idle)
self.show_page("SYSTEM_ImageFrame.qml", override_idle,
override_animations)
def show_html(self, html, resource_url=None, override_idle=None):
def show_animated_image(self, url, caption=None,
title=None, fill=None,
override_idle=None, override_animations=False):
"""Display a GUI page for viewing an image.
Arguments:
url (str): Pointer to the .gif image
caption (str): A caption to show under the image
title (str): A title to display above the image content
fill (str): Fill type supports 'PreserveAspectFit',
'PreserveAspectCrop', 'Stretch'
override_idle (boolean, int):
True: Takes over the resting page indefinitely
(int): Delays resting page for the specified number of
seconds.
override_animations (boolean):
True: Disables showing all platform skill animations.
False: 'Default' always show animations.
"""
self.clear()
self["image"] = url
self["title"] = title
self["caption"] = caption
self["fill"] = fill
self.show_page("SYSTEM_AnimatedImageFrame.qml", override_idle,
override_animations)
def show_html(self, html, resource_url=None, override_idle=None,
override_animations=False):
"""Display an HTML page in the GUI.
Arguments:
html (str): HTML text to display
resource_url (str): Pointer to HTML resources
override_idle (boolean, int):
True: Takes over the resting page indefinitely
(int): Delays resting page for the specified number of
seconds.
override_animations (boolean):
True: Disables showing all platform skill animations.
False: 'Default' always show animations.
"""
self.clear()
self["html"] = html
self["resourceLocation"] = resource_url
self.show_page("SYSTEM_HtmlFrame.qml", override_idle)
self.show_page("SYSTEM_HtmlFrame.qml", override_idle,
override_animations)
def show_url(self, url, override_idle=None):
def show_url(self, url, override_idle=None,
override_animations=False):
"""Display an HTML page in the GUI.
Arguments:
url (str): URL to render
override_idle (boolean, int):
True: Takes over the resting page indefinitely
(int): Delays resting page for the specified number of
seconds.
override_animations (boolean):
True: Disables showing all platform skill animations.
False: 'Default' always show animations.
"""
self.clear()
self["url"] = url
self.show_page("SYSTEM_UrlFrame.qml", override_idle)
self.show_page("SYSTEM_UrlFrame.qml", override_idle,
override_animations)
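To illustrate the new keyword arguments threaded through the `show_*` helpers above, a hypothetical skill handler might call them like this (the URL and file name are placeholders):

```
def handle_show_docs(self):
    # Keep the page up for 30 seconds and skip platform animations
    self.gui.show_url("https://mycroft.ai", override_idle=30,
                      override_animations=True)

def handle_show_progress(self):
    self.gui.show_animated_image("loading.gif", caption="Working on it...",
                                 override_idle=True)
```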
def shutdown(self):
"""Shutdown gui interface.

View File

@ -11,4 +11,4 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .client import MessageBusClient
from .client import MessageBusClient, MessageWaiter

View File

@ -27,14 +27,61 @@ from mycroft.messagebus.load_config import load_message_bus_config
from mycroft.messagebus.message import Message
from mycroft.util import create_echo_function
from mycroft.util.log import LOG
from .threaded_event_emitter import ThreadedEventEmitter
from pyee import ExecutorEventEmitter
class MessageWaiter:
"""Wait for a single message.
Encapsulate the wait-for-a-message logic, separating the setup from
the actual waiting so the wait can be set up, actions can be
performed and _then_ the message can be waited for.
Arguments:
bus: Bus to check for messages on
message_type: message type to wait for
"""
def __init__(self, bus, message_type):
self.bus = bus
self.msg_type = message_type
self.received_msg = None
# Setup response handler
self.bus.once(message_type, self._handler)
def _handler(self, message):
"""Receive response data."""
self.received_msg = message
def wait(self, timeout=3.0):
"""Wait for message.
Arguments:
timeout (int or float): seconds to wait for message
Returns:
Message or None
"""
start_time = time.monotonic()
while self.received_msg is None:
time.sleep(0.2)
if time.monotonic() - start_time > timeout:
try:
self.bus.remove(self.msg_type, self._handler)
except (ValueError, KeyError):
# ValueError occurs on pyee 5.0.1 removing handlers
# registered with once.
# KeyError may theoretically occur if the event occurs as
# the handler is removed
pass
break
return self.received_msg
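A minimal usage sketch of the set-up/act/wait pattern described in the docstring; the message types are illustrative and the client is assumed to already be connected (e.g. via `run_in_thread()`):

```
from mycroft.messagebus.client import MessageBusClient, MessageWaiter
from mycroft.messagebus.message import Message

bus = MessageBusClient()
bus.run_in_thread()

waiter = MessageWaiter(bus, 'my.request.response')  # set up first...
bus.emit(Message('my.request'))                     # ...then act...
response = waiter.wait(timeout=3.0)                 # ...then wait
```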
class MessageBusClient:
def __init__(self, host=None, port=None, route=None, ssl=None):
config_overrides = dict(host=host, port=port, route=route, ssl=ssl)
self.config = load_message_bus_config(**config_overrides)
self.emitter = ThreadedEventEmitter()
self.emitter = ExecutorEventEmitter()
self.client = self.create_client()
self.retry = 5
self.connected_event = Event()
@ -120,42 +167,36 @@ class MessageBusClient:
LOG.warning('Could not send {} message because connection '
'has been closed'.format(message.msg_type))
def wait_for_response(self, message, reply_type=None, timeout=None):
def wait_for_message(self, message_type, timeout=3.0):
"""Wait for a message of a specific type.
Arguments:
message_type (str): the message type of the expected message
timeout: seconds to wait before timeout, defaults to 3
Returns:
The received message or None if the response timed out
"""
return MessageWaiter(self, message_type).wait(timeout)
def wait_for_response(self, message, reply_type=None, timeout=3.0):
"""Send a message and wait for a response.
Args:
Arguments:
message (Message): message to send
reply_type (str): the message type of the expected reply.
Defaults to "<message.msg_type>.response".
timeout: seconds to wait before timeout, defaults to 3
Returns:
The received message or None if the response timed out
"""
response = []
def handler(message):
"""Receive response data."""
response.append(message)
# Setup response handler
self.once(reply_type or message.msg_type + '.response', handler)
# Send request
message_type = reply_type or message.msg_type + '.response'
waiter = MessageWaiter(self, message_type) # Setup response handler
# Send the message and wait for its response
self.emit(message)
# Wait for response
start_time = time.monotonic()
while len(response) == 0:
time.sleep(0.2)
if time.monotonic() - start_time > (timeout or 3.0):
try:
self.remove(reply_type, handler)
except (ValueError, KeyError):
# ValueError occurs on pyee 1.0.1 removing handlers
# registered with once.
# KeyError may theoretically occur if the event occurs as
# the handler is removed
pass
return None
return response[0]
return waiter.wait(timeout)
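Continuing the sketch above, the two helpers differ only in whether a message is sent first; both message types here are illustrative:

```
# Send a query and block (up to the timeout) for 'weather.request.response'
reply = bus.wait_for_response(Message('weather.request'))

# Just wait for the next 'mycroft.ready' without sending anything
ready = bus.wait_for_message('mycroft.ready', timeout=10.0)
```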
def on(self, event_name, func):
self.emitter.on(event_name, func)

View File

@ -1,65 +0,0 @@
# Copyright 2019 Mycroft AI Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from pyee import EventEmitter
from multiprocessing.pool import ThreadPool
from collections import defaultdict
class ThreadedEventEmitter(EventEmitter):
""" Event Emitter using the threadpool to run event functions in
separate threads.
"""
def __init__(self, threads=10):
super().__init__()
self.pool = ThreadPool(threads)
self.wrappers = defaultdict(list)
def on(self, event, f=None):
""" Wrap on with a threaded launcher. """
def wrapped(*args, **kwargs):
return self.pool.apply_async(f, args, kwargs)
w = super().on(event, wrapped)
# Store mapping from function to wrapped function
self.wrappers[event].append((f, wrapped))
return w
def once(self, event, f=None):
""" Wrap once with a threaded launcher. """
def wrapped(*args, **kwargs):
return self.pool.apply_async(f, args, kwargs)
wrapped = super().once(event, wrapped)
self.wrappers[event].append((f, wrapped))
return wrapped
def remove_listener(self, event_name, func):
""" Wrap the remove to translate from function to wrapped
function.
"""
for w in self.wrappers[event_name]:
if w[0] == func:
self.wrappers[event_name].remove(w)
return super().remove_listener(event_name, w[1])
# if no wrapper exists try removing the function
return super().remove_listener(event_name, func)
def remove_all_listeners(self, event_name):
"""Remove all listeners with name.
event_name: identifier of event handler
"""
super().remove_all_listeners(event_name)
self.wrappers.pop(event_name)

View File

@ -33,7 +33,19 @@ from mycroft.util import (
from mycroft.util.log import LOG
def main():
def on_ready():
LOG.info('Message bus service started!')
def on_error(e='Unknown'):
LOG.info('Message bus failed to start ({})'.format(repr(e)))
def on_stopping():
LOG.info('Message bus is shutting down...')
def main(ready_hook=on_ready, error_hook=on_error, stopping_hook=on_stopping):
import tornado.options
LOG.info('Starting message bus service...')
reset_sigint_handler()
@ -51,8 +63,9 @@ def main():
application = web.Application(routes, debug=True)
application.listen(config.port, config.host)
create_daemon(ioloop.IOLoop.instance().start)
LOG.info('Message bus service started!')
ready_hook()
wait_for_exit_signal()
stopping_hook()
if __name__ == "__main__":
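A hedged sketch of how a supervising process might reuse these hooks; the import path assumes this file is `mycroft/messagebus/service/__main__.py`, and the callables are illustrative:

```
from mycroft.messagebus.service.__main__ import main

def notify_ready():
    print('message bus is up')

def notify_error(e='Unknown'):
    print('message bus failed:', repr(e))

main(ready_hook=notify_ready, error_hook=notify_error)
```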

View File

@ -18,7 +18,7 @@ import sys
import traceback
from tornado.websocket import WebSocketHandler
from pyee import EventEmitter
from pyee import BaseEventEmitter
from mycroft.messagebus.message import Message
from mycroft.util.log import LOG
@ -29,7 +29,7 @@ client_connections = []
class MessageBusEventHandler(WebSocketHandler):
def __init__(self, application, request, **kwargs):
super().__init__(application, request, **kwargs)
self.emitter = EventEmitter()
self.emitter = BaseEventEmitter()
def on(self, event_name, handler):
self.emitter.on(event_name, handler)

View File

@ -0,0 +1 @@
a

View File

@ -0,0 +1,4 @@
Mám problém s komunikací se servery Mycroftu. Prosím, dejte mi pár minut než na mě budete mluvit.
Mám problém s komunikací se servery Mycroftu. Prosím, počkejte pár minut než na mě budete mluvit.
Vypadá to, že se nemohu připojit k serverům Mycroftu. Prosím, dejte mi pár minut než na mě budete mluvit.
Vypadá to, že se nemohu připojit k serverům Mycroftu. Prosím, počkejte pár minut než na mě budete mluvit.

View File

@ -0,0 +1,3 @@
zrušit
nevadí
zapomeň to

View File

@ -0,0 +1,2 @@
Kontroluji aktualizace
Prosím o chvilku strpení, aktualizuji se

View File

@ -0,0 +1 @@
den

View File

@ -0,0 +1 @@
dny

View File

@ -0,0 +1 @@
hodina

View File

@ -0,0 +1 @@
hodiny

View File

@ -0,0 +1,4 @@
Omlouvám se, nerozuměl jsem
Bojím se, že jsem neporozuměl
Můžete to říci znovu?
Můžete to zopakovat?

View File

@ -0,0 +1,2 @@
poslední možnost
poslední

View File

@ -0,0 +1 @@
Interakční data nebudou dále odesílána do Mycroft AI.

View File

@ -0,0 +1 @@
Od teď budu odesílat interakční data Mycroft AI pro účely učení. Momentálně to obsahuje i nahrávání wake-word aktivace.

View File

@ -0,0 +1 @@
< < < NAČÍTÁNÍ < < <

View File

@ -0,0 +1 @@
RESTARTUJI...

View File

@ -0,0 +1 @@
< < < SYNCHRONIZACE < < <

View File

@ -0,0 +1 @@
< < < AKTUALIZUJI < < <

View File

@ -0,0 +1 @@
minuta

View File

@ -0,0 +1 @@
minuty

View File

@ -0,0 +1 @@
Dobrý den, já jsem Mycroft, váš nový hlasový asistent. Abych vám mohl asistovat, potřebuji být připojen k internetu. Můžete mne připojit síťovým kabelem nebo přes wifi. Následujte tyto instrukce pro použití wifi:

View File

@ -0,0 +1,2 @@
ne
nechci

View File

@ -0,0 +1,3 @@
Zdá se, že nejsem připojen k internetu, zkontrolujte prosím vaše síťové připojení.
Nemohu se připojit k internetu, zkontrolujte prosím vaše síťové připojení.
Mám problém se připojit k internetu, zkontrolujte prosím vaše síťové připojení.

View File

@ -0,0 +1 @@
Počkejte prosím chvíli než dokončím startování.

View File

@ -0,0 +1 @@
nebo

View File

@ -0,0 +1,5 @@
jalepeno: hallipeenyo
ai: A.I.
mycroftai: mycroft A.I.
spotify: spot-ify
corgi: core-gee

View File

@ -0,0 +1 @@
Byl jsem resetován do továrního nastavení.

View File

@ -0,0 +1 @@
sekunda

View File

@ -0,0 +1 @@
sekundy

View File

@ -0,0 +1 @@
Vyskytla se chyba při zpracování požadavku v {{skill}}

View File

@ -0,0 +1,2 @@
Nyní jsem zaktualizován na poslední verzi.
Dovednosti jsou zaktualizovány. Jsem připraven vám pomáhat.

View File

@ -0,0 +1 @@
vyskytla se chyba při aktualizaci dovedností

View File

@ -0,0 +1 @@
SSH přihlašování bylo zakázáno

View File

@ -0,0 +1 @@
SSH přihlašování je povoleno

View File

@ -0,0 +1 @@
Potřebuji se restartovat po synchronizaci času s internetem, hned budu zpět.

View File

@ -0,0 +1,6 @@
ano
potvrzuji
jistě
potvrzeno
souhlasím
prosím

View File

@ -0,0 +1,33 @@
{
"unit system": {
"metric": {"system_unit": "metric"},
"imperial": {"system_unit": "imperial"}
},
"location": {
"stockholm": {
"location": {
"city": {
"name": "Stockholm",
"state": {
"code": "SE.18",
"country": {
"code": "SE",
"name": "Sweden"
},
"name": "Stockholm"
}
},
"coordinate": {
"latitude": 59.38306,
"longitude": 16.66667
},
"timezone": {
"code": "Europe/Stockholm",
"dst_offset": 7200000,
"name": "Europe/Stockholm",
"offset": 3600000
}
}
}
}
}

View File

@ -0,0 +1 @@
i

View File

@ -0,0 +1,4 @@
Wystąpił problem z połączeniem do serwerów Mycroft. Daj mi parę minut przed kolejnym poleceniem.
Wystąpił problem z połączeniem do serwerów Mycroft. Poczekaj parę minut przed kolejnym poleceniem.
Wygląda na to, że nie mogę się połączyć z serwerami Mycroft. Daj mi parę minut przed kolejnym poleceniem.
Wygląda na to, że nie mogę się połączyć z serwerami Mycroft. Poczekaj parę minut przed kolejnym poleceniem.

View File

@ -0,0 +1,4 @@
anuluj
nieważne
zapomnij
stop

View File

@ -0,0 +1,2 @@
Sprawdzam aktualizacje
Aktualizuję się, daj mi chwilę

View File

@ -0,0 +1 @@
dzień

View File

@ -0,0 +1 @@
dni

View File

@ -0,0 +1 @@
godzina

View File

@ -0,0 +1 @@
godziny

View File

@ -0,0 +1,4 @@
Przepraszam, nie rozumiem tego
Możesz powtórzyć?
Nie jestem pewny czy to zrozumiałem
Możesz powiedzieć ponownie?

View File

@ -0,0 +1,3 @@
ostatni wybór
ostatnia opcja
ostatnia

View File

@ -0,0 +1 @@
Dane interakcji nie będą już wysyłane do Mycroft AI.

View File

@ -0,0 +1 @@
Od tej pory będę wysyłał dane interakcji do Mycroft AI dzięki czemu będę mądrzejszy. Obecnie wysyłam także nagrania polecenia aktywacji.

View File

@ -0,0 +1 @@
< < < ŁADUJĘ < < <

View File

@ -0,0 +1 @@
RESETUJĘ...

View File

@ -0,0 +1 @@
< < < SYNCHRONIZACJA < < <

View File

@ -0,0 +1 @@
< < < AKTUALIZUJĘ < < <

View File

@ -0,0 +1 @@
minuta

View File

@ -0,0 +1 @@
minuty

View File

@ -0,0 +1 @@
Cześć, jestem Mycroft, Twój nowy asystent. Żeby móc to robić, potrzebuję połączenia z Internetem. Możesz mnie podpiąć albo kablem Ethernet, albo do wifi. Aby połączyć z wifi, użyj tych instrukcji:

View File

@ -0,0 +1,2 @@
nie
odmawiam

View File

@ -0,0 +1,3 @@
Nie mogę się połączyć z Internetem, sprawdź proszę połączenie.
Wygląda na to, że nie jestem podłączony do Internetu, sprawdź proszę połączenie.
Połączenie z Internetem nie działa, zweryfikuj czy jest poprawne.

View File

@ -0,0 +1 @@
Poczekaj jeszcze chwilę aż skończę się uruchamiać.

View File

@ -0,0 +1 @@
albo

View File

@ -0,0 +1,5 @@
jalepeno: hallipeenyo
ai: A.I.
mycroftai: mycroft A.I.
spotify: spot-ify
corgi: core-gee

View File

@ -0,0 +1 @@
Zostałem przywrócony do ustawień fabrycznych.

View File

@ -0,0 +1 @@
sekunda

View File

@ -0,0 +1 @@
sekundy

View File

@ -0,0 +1 @@
Wystąpił błąd podczas przetwarzania polecenia przez {{skill}}

View File

@ -0,0 +1,2 @@
Jestem na bieżąco z aktualizacjami
Aktualizacje zainstalowane, w czym mogę pomóc?

View File

@ -0,0 +1 @@
wystąpił błąd podczas aktualizacji umiejętności

View File

@ -0,0 +1 @@
Logowanie przez SSH zostało wyłączone

View File

@ -0,0 +1 @@
Logowanie przez SSH zostało włączone

View File

@ -0,0 +1 @@
Muszę się zresetować po synchronizacji zegara, zaraz wracam.

View File

@ -0,0 +1,6 @@
tak
pewnie
jasne
oczywiście
potwierdzam
zgadza się

View File

@ -0,0 +1,83 @@
import QtQuick.Layouts 1.4
import QtQuick 2.4
import QtQuick.Controls 2.0
import org.kde.kirigami 2.4 as Kirigami
import Mycroft 1.0 as Mycroft
Mycroft.Delegate {
id: systemImageFrame
skillBackgroundColorOverlay: "#000000"
property bool hasTitle: sessionData.title.length > 0 ? true : false
property bool hasCaption: sessionData.caption.length > 0 ? true : false
ColumnLayout {
id: systemImageFrameLayout
anchors.fill: parent
Kirigami.Heading {
id: systemImageTitle
visible: hasTitle
enabled: hasTitle
Layout.fillWidth: true
Layout.preferredHeight: paintedHeight + Kirigami.Units.largeSpacing
level: 3
text: sessionData.title
wrapMode: Text.Wrap
font.family: "Noto Sans"
font.weight: Font.Bold
}
AnimatedImage {
id: systemImageDisplay
visible: true
enabled: true
Layout.fillWidth: true
Layout.fillHeight: true
source: sessionData.image
property var fill: sessionData.fill
onFillChanged: {
console.log(fill)
if(fill == "PreserveAspectCrop"){
systemImageDisplay.fillMode = 2
} else if (fill == "PreserveAspectFit"){
console.log("inFit")
systemImageDisplay.fillMode = 1
} else if (fill == "Stretch"){
systemImageDisplay.fillMode = 0
} else {
systemImageDisplay.fillMode = 0
}
}
Rectangle {
id: systemImageCaptionBox
visible: hasCaption
enabled: hasCaption
anchors.bottom: parent.bottom
anchors.left: parent.left
anchors.right: parent.right
height: systemImageCaption.paintedHeight
color: "#95000000"
Kirigami.Heading {
id: systemImageCaption
level: 2
anchors.left: parent.left
anchors.leftMargin: Kirigami.Units.largeSpacing
anchors.right: parent.right
anchors.rightMargin: Kirigami.Units.largeSpacing
anchors.verticalCenter: parent.verticalCenter
text: sessionData.caption
wrapMode: Text.Wrap
font.family: "Noto Sans"
font.weight: Font.Bold
}
}
}
}
}

View File

@ -43,7 +43,6 @@ from mycroft.util.log import LOG
from .core import FallbackSkill
from .event_scheduler import EventScheduler
from .intent_service import IntentService
from .padatious_service import PadatiousService
from .skill_manager import SkillManager
RASPBERRY_PI_PLATFORMS = ('mycroft_mark_1', 'picroft', 'mycroft_mark_2pi')
@ -173,7 +172,20 @@ class DevicePrimer(object):
wait_while_speaking()
def main():
def on_ready():
LOG.info('Skill service is ready.')
def on_error(e='Unknown'):
LOG.info('Skill service failed to launch ({})'.format(repr(e)))
def on_stopping():
LOG.info('Skill service is shutting down...')
def main(ready_hook=on_ready, error_hook=on_error, stopping_hook=on_stopping,
watchdog=None):
reset_sigint_handler()
# Create PID file, prevent multiple instances of this service
mycroft.lock.Lock('skills')
@ -185,18 +197,21 @@ def main():
bus = _start_message_bus_client()
_register_intent_services(bus)
event_scheduler = EventScheduler(bus)
skill_manager = _initialize_skill_manager(bus)
skill_manager = _initialize_skill_manager(bus, watchdog)
_wait_for_internet_connection()
if skill_manager is None:
skill_manager = _initialize_skill_manager(bus)
skill_manager = _initialize_skill_manager(bus, watchdog)
device_primer = DevicePrimer(bus, config)
device_primer.prepare_device()
skill_manager.start()
while not skill_manager.is_alive():
time.sleep(0.1)
ready_hook() # Report ready status
wait_for_exit_signal()
stopping_hook() # Report shutdown started
shutdown(skill_manager, event_scheduler)
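Similarly for the skills service, a wrapper could supply its own status hooks and a watchdog callable (names below are illustrative; the import path assumes this file is `mycroft/skills/__main__.py`):

```
from mycroft.skills.__main__ import main

def kick_watchdog():
    pass  # e.g. notify a process supervisor that the service is still alive

main(ready_hook=lambda: print('skills ready'),
     stopping_hook=lambda: print('skills stopping'),
     watchdog=kick_watchdog)
```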
@ -224,24 +239,24 @@ def _register_intent_services(bus):
bus: messagebus client to register the services on
"""
service = IntentService(bus)
try:
PadatiousService(bus, service)
except Exception as e:
LOG.exception('Failed to create padatious handlers '
'({})'.format(repr(e)))
# Register handler to trigger fallback system
bus.on(
'mycroft.skills.fallback',
FallbackSkill.make_intent_failure_handler(bus)
)
# Backwards compatibility TODO: remove in 20.08
bus.on('intent_failure', FallbackSkill.make_intent_failure_handler(bus))
return service
def _initialize_skill_manager(bus):
def _initialize_skill_manager(bus, watchdog):
"""Create a thread that monitors the loaded skills, looking for updates
Returns:
SkillManager instance or None if it couldn't be initialized
"""
try:
skill_manager = SkillManager(bus)
skill_manager = SkillManager(bus, watchdog)
skill_manager.load_priority()
except MsmException:
# skill manager couldn't be created, wait for network connection and

Some files were not shown because too many files have changed in this diff.