==== Tech Notes ====
When a language without time rules was defaulted back to US English
time rules, the class was instantiated prematurely. Now the default
returns the class itself rather than an instance.
==== Fixed Issues ====
#1027
==== Tech Notes ====
io.open is the default implementation of open() in Python 3 and handles
encodings in a better way, defaulting to UTF-8.
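A minimal illustration of the behaviour described above; the file path is just an example:

    import io

    # io.open accepts an explicit encoding on both Python 2 and 3, so the
    # file is decoded as UTF-8 regardless of the system locale
    with io.open('dialog/en-us/hello.dialog', 'r', encoding='utf-8') as f:
        for line in f:
            print(line.strip())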
==== Fixed Issues ====
#1022
==== Tech Notes ====
When a one_of intent is hit, the intent returned by Adapt doesn't look like normal require/optional intent parameters. This PR adds a check for entities before trying to access them when updating context.
This is a temporary workaround while it's determined whether the Adapt behaviour is correct or should be modified to conform to the normal format (see issue 66 in the adapt repo), but in any case it's a good sanity check.
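A rough sketch of that sanity check; the `__tags__`/`entities` layout and the context manager call are assumptions about the surrounding code, not a confirmed format:

    def update_context_safely(self, intent):
        # one_of matches may not carry the usual entity structure,
        # so guard before indexing instead of raising a KeyError
        for tag in intent.get('__tags__', []):
            if 'entities' not in tag:
                continue
            self.context_manager.inject_context(tag['entities'][0])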
==== Fixed Issues ====
The expect_response parameter is now correctly passed along to
self.speak().
==== Tech Notes ====
NONE - explain new algorithms in detail, tool changes, etc.
==== Documentation Notes ====
NONE - description of a new feature or notes on behavior changes
==== Localization Notes ====
NONE - point to new strings, language specific functions, etc.
==== Environment Notes ====
NONE - new package requirements, new files being written to disk, etc.
==== Protocol Notes ====
NONE - message types added or changed, new signals, APIs, etc.
==== Fixed Issues ====
==== Tech Notes ====
This PR corrects a couple of small issues that led to skills being left
in memory after shutdown:
- The handler for `stop.mycroft` wasn't removed from the event emitter
  when the skill shut down. It is now added using `self.add_event()`
- The registered intent list `self.events` created a circular reference
  that Python couldn't resolve, so it is now deleted at shutdown
- Timers in scheduled skills weren't terminated properly. Now, if the
  timer is alive, it will be joined
==== Documentation Notes ====
Registering event handlers should use `self.add_event()` instead of
`self.emitter.on()` to make sure they are cleaned up when the skill is
terminated.
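A minimal sketch of the recommended pattern; the skill and message names are illustrative:

    from mycroft.skills.core import MycroftSkill

    class ExampleSkill(MycroftSkill):
        def initialize(self):
            # add_event() registers the handler on the emitter and tracks it,
            # so it can be removed automatically when the skill shuts down
            self.add_event('example.custom.message', self.handle_custom)

        def handle_custom(self, message):
            self.speak('Received the custom message')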
==== Localization Notes ====
NONE - point to new strings, language specific functions, etc.
==== Environment Notes ====
NONE - new package requirements, new files being written to disk, etc.
==== Protocol Notes ====
NONE - message types added or changed, new signals, APIs, etc.
==== Tech Notes ====
Allow the cached config to be updated over the messagebus from a skill or other connected software. Listens for the configuration.patch signal and updates the loaded config.
==== Protocol Notes ====
new messagebus signal configuration.patch
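A sketch of how a skill might send such a patch over the messagebus; the payload layout (a "config" wrapper key) and the patched values are assumptions:

    from mycroft.messagebus.message import Message

    # hypothetical patch changing the TTS module in the cached config
    self.emitter.emit(Message('configuration.patch',
                              {'config': {'tts': {'module': 'mimic'}}}))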
==== Tech Notes ====
Some functions have been orphaned in core.py and are only used in the
tests. To clean up, these have been moved to where they're used.
==== Fixed Issues ====
#1007
==== Tech Notes ====
The converse system changed the API of the load_skill() function. Since
the skill_container wasn't updated accordingly, it stopped working.
This PR updates the container to use the new interface.
==== Fixed Issues ====
#1001
==== Tech Notes ====
Previously the skills were reloaded a couple of times during startup
since updates of the .pyc files and possibly the settings.json file were
made.
This commit adds finer control over which files to check.
Currently all files in the skill root except ones ending in .pyc and
settings.json are checked, along with all visible subdirectories.
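Roughly, the check amounts to something like the sketch below; the function name is illustrative, not the actual implementation:

    import os

    def _get_last_modified_date(skill_root):
        """Newest modification time under skill_root, ignoring .pyc files,
        settings.json and hidden subdirectories (illustrative sketch)."""
        last_date = 0
        for root, dirs, files in os.walk(skill_root):
            # only descend into visible subdirectories
            dirs[:] = [d for d in dirs if not d.startswith('.')]
            for f in files:
                if f.endswith('.pyc') or f == 'settings.json':
                    continue
                last_date = max(last_date,
                                os.path.getmtime(os.path.join(root, f)))
        return last_date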
The most recently used skills now have an opportunity to preview all
utterances before they hit the intent system.
==== Tech Notes ====
Skills get a preview in the order of activation -- most recent first --
and can either consume the utterance or ignore it. If consumed,
processing stops. If ignored, the next most recent skill gets a shot
at it. Finally, if no skill consumes it the intent system takes over,
running as it always has.
Skills remain "active" for 5 minutes after last use.
A skill achieves this by implementing the converse() method, e.g.
    def converse(self, utterances, lang="en-us"):
        if ...:  # skill-specific check of the utterances
            return True   # handled, consume utterance
        else:
            return False  # not for this skill, pass it along
==== Tech Notes ====
IntentBuilder objects always need to be built into an Intent to be
usable in Mycroft. Since this is always necessary, the code doing this
can be moved so users don't need to do this step.
This commit makes the `register_intent()` method check the argument
type: if it's an IntentBuilder object it builds the Intent, if it's an
Intent it continues as normal, and any other type raises a ValueError.
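From a skill author's point of view this means an IntentBuilder can be passed straight in; the keyword and handler names below are illustrative:

    from adapt.intent import IntentBuilder

    # no explicit .build() needed any more; register_intent() accepts
    # either an IntentBuilder or an already built Intent
    self.register_intent(IntentBuilder('HelloIntent').require('HelloKeyword'),
                         self.handle_hello)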
==== Tech Notes ====
Downloader() is a Thread subclass starting and handling a download. The
properties done and status are set when the download is completed or
has failed.
A download manager ensures continuation of started downloads (per
process).
complete_action, a function to be called after the download is complete,
can be supplied to the downloader.
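A hypothetical usage sketch based on the description above; the module path, helper name and signature are assumptions:

    from mycroft.util.download import download  # assumed wrapper around Downloader

    def notify(dest):
        print('Finished downloading to ' + dest)

    dl = download('https://example.com/model.tar.gz', '/tmp/model.tar.gz',
                  complete_action=notify)
    dl.join()          # Downloader is a Thread subclass, so it can be joined
    print(dl.status)   # status/done are set on completion or failure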
==== Tech Notes ====
The DeviceApi now has a `get_subscription()` method returning the entire
subscription structure, and the property `is_subscriber` returning True
if the device is connected to a paying account and False if it's a free
account.
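For example, a skill could gate a feature on subscription status (minimal sketch):

    from mycroft.api import DeviceApi

    api = DeviceApi()
    if api.is_subscriber:                 # paying account
        details = api.get_subscription()  # full subscription structure
    else:
        details = None                    # free account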
==== Tech Notes ====
pulse_lower_volume() scans through the list of running input sinks and
reduces their volume to 30% of the original volume, skipping over the
stream named 'mycroft-voice'.
pulse_restore_volume() restores the volume again.
If activated with the 'pulseaudio' config parameter, they will be called
when mycroft starts/stops listening and starts/stops speaking.
==== Environment Notes ====
The Audio->pulseaudio configuration parameter can now be set to 'lower'
to activate this feature.
Bug fix proposed for successful wake-word recording.
Optional: a suggestion to change the save location of the recording
files as well; otherwise the user could potentially get a permission
denied error.
-NeonGecko.com Inc
Add intent fallback system
==== Fixed Issues ====
NONE - replace with issue numbers, e.g. #123, #304
==== Tech Notes ====
NONE - explain new algorithms in detail, tool changes, etc.
==== Documentation Notes ====
The FallbackSkill needs to be documented in detail for skill creators.
==== Environment Notes ====
NONE
==== Protocol Notes ====
multi_utterance_intent_failure replaced with complete_intent_failure
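Until that documentation exists, a rough sketch of what a fallback skill might look like; the priority value and handler logic are illustrative:

    from mycroft.skills.core import FallbackSkill

    class ChattyFallback(FallbackSkill):
        def initialize(self):
            # register the handler with a mid-range priority of 50
            self.register_fallback(self.handle_fallback, 50)

        def handle_fallback(self, message):
            utterance = message.data.get('utterance', '')
            if 'chat' in utterance:
                self.speak('I can chat about that.')
                return True   # utterance handled, stop the fallback chain
            return False      # not handled, let the next fallback try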
==== Fixed Issues ====
#969
==== Tech Notes ====
The criteria for excluding messages were inverted and excluded all
messages containing 'mycroft.audio.service'. The criteria have now been
fixed.
==== Documentation Notes ====
NONE - things like description of a new feature or notes on behavior
changes
==== Localization Notes ====
NONE - point out new strings, functions needing international versions,
etc.
==== Environment Notes ====
NONE - new package requirements, new files being written to disk,
etc.
==== Protocol Notes ====
NONE - message types added or changed, new signals, etc.
Some exception code was attempting to use the LOGGER, but one had
not been created for this file. So when the exception occurred
it caused a crash.
NOTE: We need to standardize on log/LOG/LOGGER!
==== Fixed Issues ====
#960
==== Tech Notes ====
The messagebus handler for "mycroft.stop" halts and exits if an
exception occurs in any methods that are registered to that name. The
handler executes the stop() method that's provided by the user and is
not verified. To ensure that other skills are unaffected exceptions in
the user provided method are caught.
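In practice this amounts to wrapping the user's stop() in a try/except, roughly like the sketch below (names are illustrative; LOG is assumed to be the module logger):

    def __handle_stop(self, event):
        try:
            self.stop()   # the user-provided stop() may raise anything
        except Exception:
            LOG.exception('Failed to stop skill: ' + self.name)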
==== Documentation Notes ====
NONE - things like description of a new feature or notes on behavior
changes
==== Localization Notes ====
NONE - point out new strings, functions needing international versions,
etc.
==== Environment Notes ====
NONE - new package requirements, new files being written to disk,
etc.
==== Protocol Notes ====
NONE - message types added or changed, new signals, etc.
==== Fixed Issues ====
#958
==== Tech Notes ====
Adds the method clear_visimes() to the voice playback thread to stop the
visime stream instead of having the visime stream check for signals.
==== Documentation Notes ====
NONE - things like description of a new feature or notes on behavior
changes
==== Localization Notes ====
NONE - point out new strings, functions needing international versions,
etc.
==== Environment Notes ====
NONE - new package requirements, new files being written to disk,
etc.
==== Protocol Notes ====
NONE - message types added or changed, new signals, etc.
When the audio configuration option "pulseaudio" is set to "mute",
running audio streams will be muted while mycroft is speaking and while
mycroft is listening.
Chromecast devices can be manually added with a backend entry with type set to "chromecast", or the service can autodetect all Chromecasts and register them by name by setting the Audio configuration parameter autodetect-chromecast to true.
Currently this implementation is very basic and uses the default media controller. This limits the usable URIs to http(s); support for local files and hopefully more services will be added later.
The user can now specify which backend to use to play media. By default the keywords are the names of the backends (vlc, mpg123, mopidy) but they can be set in the config.
Example:
audio.mopidy.name = "livingroom"
A possible user interaction might be
- "Hey mycroft, play the news in the livingroom"
The detection is very basic and not very elegant but works as a proof of concept.
Three backends have been added: mopidy, vlc and mpg123. Depending on the URI type, an appropriate service is selected when media playback is requested (for example, the mopidy service can handle spotify://... and local://...).
A playback control skill can pause, resume, stop, change track, etc. for any started media. So, for example, if the NPR news skill used this after starting to play the news, the user could say "Hey Mycroft, pause" and the playback would pause. The playback control also handles things like lowering the volume of the playback if mycroft is asked another question.
Currently the different backends run under the playback control; this was done mostly for convenience and the services should be moved in the future.
Usage:
The user needs to import the audioservice interface
`from mycroft.skills.audioservice import AudioService`
and initialize an instance in the skill `initialize` method
`self.audio_service = AudioService(self.emitter)`
Then playing a URI is as simple as
`self.audio_service.play(uri)`
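Putting the pieces together, a minimal sketch of a skill using the interface; the skill name, handler and URI are illustrative:

    from mycroft.skills.core import MycroftSkill
    from mycroft.skills.audioservice import AudioService

    class NewsSkill(MycroftSkill):
        def initialize(self):
            # the interface wraps messagebus calls to the playback control
            self.audio_service = AudioService(self.emitter)

        def handle_play_news(self, message):
            # the URI scheme decides which backend handles playback
            self.audio_service.play('http://example.com/news_stream.mp3')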
TODO:
* Configuration (Alias for the different backends, service specific config, active services, etc.)
* Manual selection of backend (This is prepared in the audioservice interface but not implemented)
* More feature-complete audio service interface (playback control, get track name, etc.)
* Separate audio services from the playback control
* Probably lots more
==== Fixed Issues ====
==== Tech Notes ====
Uses the supplied name parameter if present, otherwise uses the name of
the class.
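A small sketch of the two equivalent forms; the skill name is illustrative:

    from mycroft.skills.core import MycroftSkill

    class TimeSkill(MycroftSkill):
        def __init__(self):
            # the name defaults to the class name, "TimeSkill"
            super(TimeSkill, self).__init__()
            # explicit, equivalent form:
            # super(TimeSkill, self).__init__(name="TimeSkill")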
==== Documentation Notes ====
The skill creation documentation should mention that the name parameter
is optional.
==== Localization Notes ====
NONE
==== Environment Notes ====
NONE
==== Protocol Notes ====
NONE
This is because this will break the Wolfram Alpha skill unless they
update skills, but if they update before getting the new version, it
will also break Wolfram.
This reverts commit 6ca4161335.
* Create new FallbackSkill base class for implementing fallback behavior
Also removes the multi utterance intent fail. It only makes sense to emit an intent_failure regardless of the number of intents.
0.8.17 or earlier build. A Series of Unfortunate Events leads to two instances
of the mycroft-skills service running. This causes double-answers, which is
confusing to everyone. The root problem that led to this is corrected in
release 0.8.18, but resolving it in packaging is ridiculously difficult.
So, we'll do this simple hack for the interim. It is harmless and useless if
not running in the Mark 1 environment.
Adds the ExtractDateTime parse function from Christopher. When imported from mycroft/util/parse.py, it'll take a sentence like "What's the weather like 5 weeks from next Wednesday?" and extract a Python datetime object for that date.
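A hypothetical usage sketch; the exported name and return value in mycroft/util/parse.py are assumptions based on the description above:

    # assumed function name; the note refers to it as ExtractDateTime
    from mycroft.util.parse import extract_datetime

    result = extract_datetime(
        "What's the weather like 5 weeks from next Wednesday?")
    # per the note, the result carries a Python datetime for that date
    print(result)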
* Added requirements.txt change for importing dateutil
* Check queue empty with self.queue.empty() instead of len()
* Add error logging of exceptions in tts thread.
* Limit the number of audio_output_start messages.
recognizer_loop:audio_output_start message will only be sent if the
queue has been empty since last loop.