An error occurred while testing skill settings: the check for whether settings should be sent failed when a field was uploaded without a value. To guard against this, the field is now checked for a value first.
SkillManager now handles the skillmanager.list message and will reply with the
mycroft.skills.list message including a list of the loaded skills.
==== Protocol Notes ====
Added messages:
- skillmanager.list: requests that the skill manager send the list of loaded skills on the messagebus
- mycroft.skills.list: reply containing the list of loaded skills
The bug was due to identifier collisions in the backend, as well as settingsmeta schemas containing a label field with no name attribute. This fix ensures there are no further identifier collisions with broken settingsmeta schemas already existing in the database.
Add support for:
* mycroft.util.format.nice_time()
* mycroft.util.format.pronounce_number()
* implemented unittests for above
Also renamed the helper method convert_number() to
_convert_to_mixed_fraction()
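A quick usage sketch of the two new helpers (outputs are illustrative; keyword arguments such as use_ampm follow the mycroft.util.format signatures of this era):
```
from datetime import datetime
from mycroft.util.format import nice_time, pronounce_number

dt = datetime(2017, 1, 31, 13, 22, 3)
print(nice_time(dt))                 # e.g. "one twenty two"
print(nice_time(dt, use_ampm=True))  # e.g. "one twenty two p.m."
print(pronounce_number(4.5))         # e.g. "four and a half"
```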
* Add support for deepspeech_server
deepspeech_server is a server running deepspeech (obviously). It's quite
easy to install and run (package available in pip and then a config file
pointing to the model)
This PR adds the DeepSpeechServerSTT class, an STT interface allowing
mycroft to use one of these servers.
config needed:
"stt": {
"module": "deepspeech_server",
"deepspeech_server": {
"uri": "http://IP-ADDRESS:PORT/stt"
}
}
* Add deepspeech_server example to mycroft.conf
With the NTP checks in place, the sequence of visual and audio cues
was a little clunky. This refines it slightly for normal use and to
play better with the pairing process.
* Implement max-line to limit memory usage
The major point of the PR is to limit the memory usage of the CLI by
implementing a maximum log line limit. It defaults to 5000.
Other changes:
* Add "--debug" option to support troubleshooting/debugging the CLI itself
* Add support for jumping to the top (Ctrl+T/Ctrl+PgUp) or bottom (Ctrl+B/Ctrl+PgDn) of the logs.
* Remove the "OLDEST" message from the log. It was really no longer necessary since the log navigation issues got straightened out, and it complicated the max log line logic.
Raspberry Pi's don't have a built-in clock, so at boot-up the clock just picks up from when they were last running. Normally this is corrected very quickly by NTP from an internet server, but if there is no network connection that cannot happen.
When an out-of-the-box Mark 1 or Picroft is being set up, the clock is set to whenever the image was created. Upon completing the Wifi setup step the NTP service can finally sync with the internet, so time suddenly "jumps" to weeks later -- usually. In either case (when the date jumps or when the date is erroneously months old), there is potential for havoc.
These changes deal with that situation. Upon network connection, an NTP
synchronization is forced. If a major time jump is detected (more than
1 hour), the user is notified that the clock change requires a reboot,
and the system restarts.
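A hedged sketch of that check (the real logic lives in the startup scripts; names and the reboot command are illustrative):
```
import os
import time

MAX_SKEW = 3600  # one hour, per the rule above

def check_clock_jump(before, after):
    """Reboot if NTP moved the clock by more than an hour (sketch)."""
    if abs(after - before) > MAX_SKEW:
        # notify the user, then restart so all services agree on the time
        print("Time has changed; rebooting...")
        os.system("sudo shutdown -r now")

before = time.time()
# ... force an NTP synchronization here ...
check_clock_jump(before, time.time())
```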
Other changes:
* use the new "system." message namespace
* add pause before the system.reboot during a WIPE, allowing reset to totally complete
==== Tech Notes ====
When a new track starts playing the audio service will send a message
indicating which track has started.
==== Protocol Notes ====
"mycroft.audio.service.track_start" message added
- add_event() now accepts the parameter once, registering the event as a one-shot event.
- remove_event for non-existing events is handled
- added a test for this
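In a skill this might look like the following sketch (the once keyword is the new parameter described above):
```
from mycroft import MycroftSkill

class PingSkill(MycroftSkill):
    def initialize(self):
        # one-shot: the handler is removed after it fires the first time
        self.add_event("ping.event", self.handle_ping, once=True)
        # removing a non-existing event no longer raises
        self.remove_event("does.not.exist")

    def handle_ping(self, message):
        self.speak("pong")
```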
The speak method digs through the stack trying to find a Message object
and, if found, uses the context from that message when sending the data
to the speech subsystem.
==== Tech Notes ====
STT, intent handling, intent fallbacks and skill handlers are now timed and
tied together with an ident (consistent through the chain, so the flow from
STT until completion of the skill handler can be followed).
TTS execution time is also measured; right now this is not tied into the
ident due to the nature of the speech.
The report is always called "timing" and always contains the following
fields:
- id: Identifier grouping the metrics into interactions
- system: Which part (STT, intent service, skill handler, etc)
- start_time: timestamp for when the action started
- time: how long it took to execute the action
Each system adds its own specific information; for example,
the intent_service adds the intent_type, i.e. which handler was matched.
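A representative payload, per the field list above (values are illustrative):
```
report = {
    "id": "d1c8a4e0",              # groups all metrics of one interaction
    "system": "intent_service",    # which part produced the metric
    "start_time": 1513341213.6,    # timestamp when the action started
    "time": 0.42,                  # seconds the action took
    "intent_type": "HandleWeatherIntent",  # system-specific extra field
}
```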
==== Protocol Notes ====
mycroft.skills.loaded is sent together with the skill id and skill name
whenever a skill is loaded. This is used in the intent_service to
convert from id to skill name when reporting.
Basically, at any place where handle_metric is called I was adding a
try-except block to catch possible network/HTTP issues. Because of this,
I feel it's best to add it here.
Move the language specific functions and constants into separate files.
This will avoid many unnecessary conflicts due to involuntary encoding
changes.
Add message.utterance_remainder() method
This helper will return the portion of an utterance not
consumed by the Adapt parser already. For example,
"turn on the kitchen light" would have a remainder of
"the kitchen" if there was an Intent with entities that
matched "turn on" and "light". The returned text is passed
through normalize().
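Inside an intent handler, usage might look like this sketch:
```
def handle_turn_on(self, message):
    # For "turn on the kitchen light", with an Intent matching
    # "turn on" and "light", the normalized leftover is "the kitchen"
    remainder = message.utterance_remainder()
    self.log.info("Remainder: " + remainder)
```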
This messes up Pairing so it will be reverted until there is a backend
change to support it
This reverts commit 71611ca6be, reversing
changes made to c7da63c536.
The out-of-box spiel given by Mycroft was coming at the user pretty fast.
This adds a momentary pause in the spoken text.
Also cleaned up some ugly messagebus interaction to use the speak() method.
When a token has been generated for the provided developer credentials
id, the method will return it as JSON; if it doesn't exist, HTTPError will
be raised (404 Not Found).
Add Python 2/3 compatibility
==== Tech Notes ====
This allows the main bus, skills and cli to be run in both python 2.7 and
3.5+.
Mainly trivial changes
- syntax for exceptions
- logic for importing correct Queue module
- .iteritems -> future.utils.iteritems when accessing a dict's key/value
pairs
* Allow audio service to be run in python 3
* Make speech client work with python 3
* Importing of Queue version dependent
* Exception syntax corrected
* Creating sound buffer is version dependent
- Adapt context use range from builtins
- Use compatible next() instead of .next() when walking the skill
directory
* Make CLI Python 3 Compatible
- Use compatible BytesIO instead of StringsIO
- Open files as text instead of binary
- Make sure integer divisions are used
* Make messagebus send compatible
* Fix failing travis
Re-add future 0.16.0
* Make string checks compatible
* basestring doesn't exist in python 3 so it's imported from the "past"
* Fix latest compatibility issues in speech client
- handle urllib
- handle encoding before calling md5
* Make Api.build_json() python 2/3 compatible
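Two of the trivial changes listed above, sketched out as they typically look:
```
# Version-dependent import of the Queue module
try:
    import Queue  # Python 2
except ImportError:
    import queue as Queue  # Python 3

# basestring only exists in Python 2, so import it from "past"
from past.builtins import basestring

def is_text(value):
    return isinstance(value, basestring)
```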
This method will load a translatable (and expandable) list of names
and values from the dialog/xx-xx/ folder of a skill. For example:
dialog/en-en/Colors.value
```
# List colors and their hex RGB values
alice blue, #F0F8FF
antique white, #FAEBD7
aqua, #00FFFF
```
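In mycroft-core this loader is exposed on MycroftSkill as translate_namedvalues(); assuming that method name, a skill would consume the file roughly like this:
```
from mycroft import MycroftSkill

class ColorSkill(MycroftSkill):
    def initialize(self):
        # Loads dialog/<lang>/Colors.value into a dict, e.g.
        # {"alice blue": "#F0F8FF", "antique white": "#FAEBD7", ...}
        self.colors = self.translate_namedvalues("Colors")
```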
Fixes a misspelling and removes "i am awake.dialog", which is
no longer used. (The dialog is handled by the [Sleep skill](https://github.com/MycroftAI/skill-naptime) instead.)
The mapping from a VT100 key to NCURSES PgUp/PgDn was inverted,
resulting in different behavior for a keyboard and monitor plugged
into the Pi than when SSH'ed into the unit.
Creating several visual feedback points to let users know what is
occurring in the boot sequence. This also expects a new version
of the enclosure.
* On shutdown and reset, change eye color to a dark gray
* On initial startup the enclosure will have left the eyes at a
yellow color, indicating that it is on but not fully running yet.
When the enclosure comes up (and doesn't kick off the out-of-box
or the notice that they need to run wifi setup) it will begin to
scroll an "UPDATING" message.
This assumes that someone else will remove the "UPDATING" message.
That gets handled by the Mycroft-Mark-1 skill, which resets the
mouth and sets the eye color to default.
- "tonight" is re-interpreted as PM
- check is performed to check if previous word exist before accessing it
to handle sentences containing only a simple date
When there is an error in settingsmeta.json, the load of the skill
would fail. Now it generates an error message but continues on.
Additionally, the exception is caught and information is now displayed
about where the error in the JSON is (line and column).
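A sketch of the guarded load (names are illustrative, not the exact core code):
```
import json

from mycroft.util.log import LOG

def load_settings_meta(path):
    try:
        with open(path) as f:
            return json.load(f)
    except ValueError as e:
        # json embeds the position in the message, e.g.
        # "Expecting ',' delimiter: line 7 column 5 (char 120)"
        LOG.error("Failed to load {}: {}".format(path, e))
        return None  # the skill continues loading without settings
```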
Several more tweaks for the CLI:
* Ctrl+Left and Ctrl+Right are now available to cycle through previous
entries, just like Ctrl+P and Ctrl+N
* Added ":clear log" command
* fixed identifier to be unique across all accounts
* skills now load on HTTP error, and settings upload to the web once identity2 exists
* extracted logic out of __init__ so that skills work on load and settings get uploaded once identity2 exists
* initialized class attribute to None
Several minor documentation changes, plus:
* 'cancel' now has to be an exact match
* Cancel events return None instead of the spoken cancellation string
* Reduced timeout to 10 seconds instead of 20
* Changed 'text' to 'announcement' and simplified logic
The lock could be taken by a waiting thread between sentences in a multi-sentence utterance. This locking method allows the entire utterance to be synthesized before handling the next one.
Since garbage collection can be a bit on the slow side, it
shouldn't be done on a normal user's device.
A developer can still enable this by setting the configuration flag
"debug" to true.
The reference count sometimes reported that references were remaining
even when they actually weren't. By enforcing a garbage collector run
after shutting down a skill, the false positives are reduced if not
removed altogether.
The functionality of the PgUp and PgDn buttons was inverted in previous
commit.
Also added support for a ":keycode show" to assist in debugging things like
this, plus dropped support the "555" and "500" support which aren't obvious
why they were implemented in the first place. If users encounter problems
under different terminal emulators or whatever, we can use ":keycode show"
to track it down.
Finally, made the initial refresh occur in 1s to clean up any screen
corruption created by messages put out by package imports at startup.
Several minor changes to the CLI:
* Placed lock on screen refresh from a new log message
* Slowed frequency of mic updates to limit opportunity for screen corruption.
Still happening occasionally, which is vexing.
* Up/Down arrow now scroll logs by a single line
* Ctrl+P/Ctrl+N (Previous/Next) now scroll through the command history
* Ensured that the "Oldest" line is always the first in the log
- Add support for unnamed intents.
- Add more debugging information for skill handler errors
- Clean up skill name for pronunciation
- Update docstring for initialize method
Corrected my refinement after a previous review.
Also added support for splitting the name of a skill before running it through
TTS, making "VolumeSkill" sound like "Volume Skill" and such. Plus a log
message before raising some errors in the skill wrapper.
Instead of speaking directly from the listener send a message that the
naptime_skill can use to trigger speech and/or other indications that
the listener is awake
The isSpeaking signal would only be generated when the actual audio playback
started, but this could be several seconds for TTS engines like Mimic which
take some time to generate the audio file for playback. This changes the
creation of the "isSpeaking" signal to the start of the execute() method,
which should queue up audio and leave the signal set until the queue has
eventually been cleared.
This allows skill writers to ignore naming intents. Combined with a
forthcoming change to Adapt that makes the name default to None in
IntentBuilder(), this allows the current:
@intent_handler(IntentBuilder("CurrentWeatherIntent").require(
"Weather").optionally("Location").build())
def handle_current_weather(self, message):
...
To become:
@intent_handler(IntentBuilder().require("Weather").optionally("Location"))
def handle_current_weather(self, message):
...
Which will automatically name the Intent "handle_current_weather".
Also dropped the log message in the default initialize() method since it is
common to not override it now.
If the machine is not connected to the network, getting the user uuid in MutableMic.__init__ will fail, raising ConnectionError. This caused the speech client to crash.
Adding handler for ConnectionError resolves this issue.
==== Tech Notes ====
Configuration.get() was originally thought to take a stack of configuration dicts and merge them into a single configuration dict, but it can also handle file paths.
The Api() class can't use the RemoteConfig class, so it supplies its own stack of configuration files to avoid infinite recursion. Previously the files were converted to dicts before being passed into the Configuration, which meant that 3 files were loaded even if the information was already cached. Now the files are passed as file references that are only converted to dicts if the cache is not found.
In code like this:
self.speak_dialog("something")
mycroft.audio.wait_while_speaking()
It was possible that the speaking of "something" would take longer to
start than the 0.1 seconds that was built into the wait_while_speaking().
The definition of this behavior is slightly fuzzy, but this is definitely
a case where the expectation is that previous request for speech would
start and complete. For now, I have just bumped the minimum wait to
0.3 seconds.
In the long run we might consider tracking specific speak requests and
generating a notification when that request has been serviced. Then the
skill could automatically hold off until that request has been serviced.
But the basic skill code won't have to change to make this happen, so
this additional sleep is adequate for today.
Also snuck in a minor change to a comment.
Add support for 'mycroft.mic.listen' on the messagebus to trigger the system
to listen for STT processing. This can be posted on the messagebus by outside
systems, such as a physical or GUI Listen button.
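For example, a hardware button handler could post it with the command-line sender added elsewhere in this changelog:
python -m mycroft.messagebus.send mycroft.mic.listen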
The recent change to remove Pystache introduced a bug in cases where
the passed-in key/value data included non-strings for the value. This
showed up in the weather skill which passed in "lat" and "lon" as floats.
Now the value is explicitly converted using str().
When the TTS engine provides visemes to the faceplate, the information
passed along consists of the mouth shape and the duration to display it.
When the system gets backed up for some reason (e.g. the CPU is briefly
overloaded), the code would attempt to catch up the animation but would
still send the 'expired' viseme across the serial port to the faceplate
with no wait-time. This results in a fast-moving mouth to catch up,
which isn't very pleasing.
Now the viseme is passed along with an expiration date, so if the time
to display it has already passed then the viseme code gets thrown away
instead of being sent across the (relatively slow) serial port. This
allows better catch-up.
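A sketch of the skip logic (the enclosure call name is illustrative):
```
import time

def send_viseme(enclosure, code, expiration):
    # Drop visemes whose display window has already passed instead of
    # pushing stale data over the (relatively slow) serial port.
    if time.time() > expiration:
        return  # expired: skipping helps the animation catch up
    enclosure.mouth_viseme(code)  # illustrative enclosure call
```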
Several refinements:
* Remove the "What time is it" preloaded example
* Page up/down now moves by 1/2 the # visible log lines instead of always 10 lines
* Reduce the frequency of full-screen redraws to 10 secs instead of 5 (fewer are needed with the corruption fix)
* Add the version of mycroft-core in the upper-right corner
* FIX: screen redraw now uses a lock, preventing corruption from drawing simultaneously from multiple threads
On certain platforms it escapes quotes, which causes problems. Since we only use it for something as simple as key/value replacements, it doesn't make sense to keep it as a dependency.
As noted on the Chromium Dev How-to [1] and on the
SpeechRecognition library docs [2], the Google STT API
is really not the right API to call. While it's simple,
and allows you to authenticate via a simple token, it is
also limited to a max of *50* requests per day.
That's not a lot. I don't think many people will find that
useful, outside of quick testing (if they can even get an
API key - I couldn't figure out how to generate one that
worked correctly).
This commit introduces a new STT backend, google_cloud, so that
the Google STT backend can be deprecated eventually. To do so, we
needed to:
- Install the Google API Client Library
- Create a new STT class which knows how to turn a provided Google
JSON credentials file into a string (the SpeechRecognition library
expects you to pass in the JSON string, not a file path, nor an object).
Any person wishing to use this will need to:
- Enable the Cloud Speech API on the Google Cloud Platform console
- Create a new Service Account Key, and download the credentials to
a secure location
- Configure that location in mycroft.conf, like so:
"stt": {
"google_cloud": {
"credential" {
"json": { contents of downloaded credentials }
}
}
}
It's worth noting that the Cloud Speech API has a free quota of
60 minutes per month, which would probably stretch further than 50
individual requests. So for hobbyists, it should be nothing but
a net benefit.
[1] http://www.chromium.org/developers/how-tos/api-keys
[2] https://github.com/Uberi/speech_recognition/blob/master/reference/library-reference.rst#recognizer_instancerecognize_googleaudio_data-key--none-language--en-us-show_all--false
==== Tech notes ====
The send script will throw an IOError exception if the bus service isn't
started. This correctly catches it and emits a single warning.
==== Tech Notes ====
The queue command will start playback if no playback is running.
The mpg123 and vlc services were updated to support queueing tracks while playing.
==== Protocol Notes ====
mycroft.audio.service.queue message added
==== Tech Notes ====
As soon as mycroft, mycroft.api or mycroft.skills.core was imported, the
configuration was loaded (including reaching out to the web config).
This reduces unnecessary configuration loading and makes it easier
to handle configuration in the tests.
A Skill can now implement the get_intro_message() method, which can return a
string to be spoken the very first time a skill is run. This is intended to
be an announcement, such as further steps necessary to setup the skill.
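A minimal skill-side sketch (skill and wording are hypothetical):
```
from mycroft import MycroftSkill

class WeatherSkill(MycroftSkill):
    def get_intro_message(self):
        # Spoken only the very first time the skill runs
        return ("Before I can check the weather you will need "
                "to add an API key at home.mycroft.ai")
```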
Also stopped generating the Error message when the expected StopIteration
occurs on an intent failure. This confused new developers and polluted the logs.
Finally, corrected some documentation typos.
==== Tech Notes ====
Before the refactoring of the configuration system the
ConfigurationManager allowed loading config files directly from strings.
This commit restores that functionality
==== Tech Notes ====
This allows message bus messages to be removed inside core.py. When canceling scheduled events, the message bus messages were not removed, which could cause duplicate listeners and handlers for the same intent. Adding the remove_event function removes the actual messages from the bus to prevent potential duplicates.
==== Tech Notes ====
Location was added as a default context keyword when the context manager
was added, as an example of how the context feature could be used.
However, in the current greedy implementation it can cause some confusion,
with lingering context providing an incorrect Location.
The feature can still be turned on in configuration if someone wants to
experiment with it.
The prompt during skill downloads was occurring even when the "speak" flag was
set to False. Now it is honored.
Also removed the "no network connection.dialog" which essentially was a copy of
the "not connected to the internet.dialog" file.
This cleans up the amount of noise in the logs:
* Removed logging of the serial port raw read/writes.
* Removed the "Setting active skill" log in display_manager.py
* Corrected typo "dispaly"
* Added default CLI filter for mouth.display and mouth.icon messages
* Fixed bug when adding new filters
Several attempts to connect to the backend fail before pairing has
been completed, which was producing errors that prevented the
messagebus and audio services from starting up.
Now the DeviceApi().is_subscriber property returns False if the
check fails, assuming unpaired == not subscribed.
Also stopped logging the call stack on the HTTPError exception, it
made an expected situation look like a major crash.
==== Tech Notes ====
Previously, the current chromecast application was quit when mycroft started, which caused some interference. Now the current app is quit right before starting playback on the device instead.
PR 1049 introduced several cosmetic PEP8 errors that were easily fixed.
Additionally, there are unittests that include non-ASCII characters which are
failing. As pt-PT support is a work in progress, I just commented them out
with TODOs next to them.
==== Tech Notes ====
Since playback is now performed in a thread, curate_cache could
clean out generated speech before or in the middle of playing back the
queue.
==== Fixed Issues ====
#1141
==== Tech Notes ====
Curate cache now only removes cache files if the free disk space is below the
set percentage AND below a set absolute amount of free disk space.
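A sketch of the dual condition using os.statvfs (threshold values are illustrative):
```
import os

def needs_curation(directory, min_percent=5.0, min_free=50 * 1024 * 1024):
    """True only when BOTH free-space conditions are violated."""
    st = os.statvfs(directory)
    free = st.f_bavail * st.f_frsize   # free bytes for non-root users
    total = st.f_blocks * st.f_frsize
    percent_free = 100.0 * free / total
    # remove cache files only when the percentage AND the absolute
    # amount of free disk space are both below their thresholds
    return percent_free < min_percent and free < min_free
```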
==== Tech Notes ====
- removed old main.py
- replace reference to ConfigurationManager in api tests
- reset configuration after use in configuration test
- fixed a PEP-8 issue
==== Tech Notes ====
The previous approach did not work since the websocket client was not running;
this uses the recently added standalone send functionality to send a single
message.
==== Tech Notes ====
Since the voice is quite a large download, a stalled download is a real
possibility. Using wget allows resume and retry of the download in a
simple way.
==== Tech Notes ====
- Using download utility to download voice binary
- reverts to default voice if not premium
- uses default voice during download and switches over when done
The audio service could not handle https URLs. VLC can support them,
so https was added to its supported_uris. Additionally, the play()
function in the audio service could not correctly search the
backends for a supported URI, so that code has been fixed.
When (re)booting a Mark 1 unit will show rolling eyes until it reaches
a "ready" state. This happens by sending a command to the Arduino.
There is also code that prevents sending commands out the serial port
if not running on a Mark 1. In certain situations, the message
indicating that the Mark 1 Arduino was found was posted to the
messagebus before it was fully open. When this was missed, the system
didn't think it was on a Mark 1 and the command to stop the eyes from
rolling (and for further interactions with the Mark 1 hardware) were
not sent.
The Mark 1 Arduino detection is now triggered when the messagebus
'open' notification is generated rather than when the object is
constructed.
==== Fixed Issues ====
#967 - Eyes never stop spinning on startup (Mark 1)
==== Fixed Issues ====
One example is reloading the pairing skill after a skill update (30-60
seconds), which results in a new pairing code being generated.
==== Tech Notes ====
Skill modification dates are now compared individually instead of against a
single global last-modified time.
This fixes at least a potential issue with the Mark 1 boot sequence.
The system posts a "system.version" message then registers a
listener for the response. There is a chance that the response
sneaks in before the handler is registered. This just reorders the
sequence of that code.
==== Fixed Issues ====
ISSUE #967 - Eyes never stop spinning on startup (Mark 1)
==== Tech Notes ====
The msm lock wasn't checked before reloading skills, allowing skills to be loaded before msm had finished installing requirements.
A check was added to ensure this, along with a separate lock enforcing the handling of block_msm and release_msm in the correct order; otherwise the update procedure could cause a deadlock.
The interval was previously not updated after success (unless speak was set to
true). This caused the skills to continuously update.
Also set speak = false as default since that's the most commonly used
case right now.
Significantly reworked the loading/updating of Skills. Unified
all management under a single SkillManager class. This class
runs as a thread that initially loads, upgrades (via MSM)
and reloads skills.
Removed the independent threads that were being run. The skill
updating still happens once an hour, but works in conjunction
with the scan to reload modified skills. Also added messagebus
notifications from MSM so mycroft-core can pause reloading
skills until the installation is complete.
Added a new mycroft.messagebus.send module to allow command
line interaction with the messagebus, e.g.:
python -m mycroft.messagebus.send mycroft.wifi.start
python -m mycroft.messagebus.send speak '{"utterance":"hello"}'
==== Fixed Issues ====
MSM installs that have PIP dependencies were failing, as the
load would occur after code was retrieved but before PIP install
completed. Restart was required to load new skills.
==== Tech Notes ====
TODO: Change the way we manage modules. The auto-load of the
remote configuration for the module is silly, slow and wasteful.
I made the WebsocketClient.build_url() method static in
anticipation of being able to do this more efficiently when the
submodule load doesn't hit the remote API automatically.
==== Localization Notes ====
Modified 'sorry I couldn't install default skills' message.
==== Protocol Notes ====
MSM now generates:
msm.updating
msm.installing
msm.install.succeeded { "skill" : name }
msm.install.failed { "skill" : name, "error" : code }
msm.installed
msm.updated
msm.removing
msm.remove.succeeded { "skill" : name }
msm.remove.failed { "skill" : name, "error" : code }
msm.removed
An update can now be forced by posting 'skillmanager.update' to the
messagebus.
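With the send module described above, that is:
python -m mycroft.messagebus.send skillmanager.update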
Added the capability to auto-upload changes from settingsmeta.json to home.mycroft.ai
==== Documentation Notes ====
If a developer makes changes to settingsmeta.json, they will be auto-uploaded to home.mycroft.ai
==== Protocol Notes ====
hash and uuid are now stored as variables in files located in ~/.mycroft/skills/{skill-name}
Fix log autocomplete
==== Tech Notes ====
Due to the use of setattr(), IDEs like PyCharm couldn't use autocomplete on the LOG class. This unrolls the loop so that the logging attributes appear in the autocomplete menu.
This commit officially switches the mycroft-core repository from
GPLv3.0 licensing to Apache 2.0. All dependencies on GPL'ed code
have been removed and we have contacted all previous contributors
with still-existing code in the repository to agree to this change.
Going forward, all contributors will sign a Contributor License
Agreement (CLA) by visiting https://mycroft.ai/cla, then they will
be included in the Mycroft Project's overall Contributor list,
found at: https://github.com/MycroftAI/contributors. This cleanly
protects the project, the contributors, and all who build upon
the technology.
Further discussion can be found at this blog post:
https://mycroft.ai/blog/right-license/
This commit also removes all __author__="" lines from the code. These
lines are painful to maintain and the etiquette surrounding their
maintenance is unclear. Do you remove a name from the list if the
last line of code they wrote gets replaced? Etc. Now all
contributors are publicly acknowledged in the aforementioned repo,
and actual authorship is maintained by GitHub in a much more
effective and elegant way!
Finally, a few references to "Mycroft AI" were changed to the correct
legal entity name "Mycroft AI Inc."
==== Fixed Issues ====
#403 Update License.md and file headers to Apache 2.0
#400 Update LICENSE.md
==== Documentation Notes ====
Deprecated the ScheduledSkill and ScheduledCRUDSkill classes.
These capabilities have been superseded by the more flexible MycroftSkill
class methods schedule_event(), schedule_repeating_event(), update_event(),
and cancel_event().
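A migration sketch of a repeating task using the new methods (argument order follows the scheduling API described here; treat it as illustrative):
```
from mycroft import MycroftSkill

class ReminderSkill(MycroftSkill):
    def initialize(self):
        # Run self.update_data every 60 seconds instead of
        # subclassing the deprecated ScheduledSkill.
        self.schedule_repeating_event(self.update_data, None, 60,
                                      name="UpdateData")

    def update_data(self, message):
        pass  # periodic work goes here
```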
Fix configuration manager saving
==== Tech Notes ====
Fixes the ConfigurationManager.save() method. Previously, incorrect use of the dict update method resulted in an empty config. Now the saved dict is recursively merged so that only the affected fields are updated.
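The core idea of the fix, as a standalone sketch (not the exact mycroft-core code):
```
def merge_dict(base, delta):
    """Recursively merge delta into base, only touching changed keys."""
    for key, value in delta.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            merge_dict(base[key], value)  # descend instead of replacing
        else:
            base[key] = value
    return base

config = {"tts": {"module": "mimic", "mimic": {"voice": "ap"}}}
merge_dict(config, {"tts": {"module": "espeak"}})
# the nested "mimic" settings survive; only "module" changed
```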
==== Tech Notes ====
Add mycroft.skill.handler.start and mycroft.skill.handler.complete to the fallback handler. The handler in the data field will be called "fallback"; upon completion, the handler that was used will be reported in the "fallback_handler" data entry.
* fix
* pep8
* fallback handler name
==== Tech Notes ====
If cancel and add event messages arrive at roughly the same time, the add
would be overridden by the cancel. It is more intuitive to handle the cancel
first and add newly arrived events afterwards.
==== Tech Notes ====
update_event and cancel_event did not use a name unique to the skill,
forcing the user to build it themselves. Now the unique name is
constructed in the method _unique_name() for all event scheduling
methods.
==== Tech Notes ====
When using multiple decorators on a method, inspect will return incorrect
values despite @wraps. This causes the intent handler to fail to
execute.
@decorator can't really help here since it doesn't handle
decorators with arguments (as far as I can understand).
This is a workaround using the fact that the argument count will be zero
on methods with multiple decorators; it basically tries the usual
signatures.
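The workaround amounts to roughly this (a sketch, not the exact core code):
```
import inspect

def call_handler(handler, message):
    # With multiple decorators, inspect may report zero arguments even
    # though the underlying method takes (self, message), so fall back
    # to trying the usual signatures in order.
    try:
        nargs = len(inspect.getargspec(handler).args)
    except TypeError:
        nargs = 0
    if nargs == 2:
        handler(message)      # bound method: (self, message)
    elif nargs == 1:
        handler()             # bound method taking no message
    else:
        try:
            handler(message)  # unknown signature: try with message
        except TypeError:
            handler()
```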