When the TTS engine provides visemes to the faceplate, the information
passed along consists of the mouth shape and the duration to display it.
When the system gets backed up for some reason (e.g. the CPU is briefly
overloaded), the code would attempt to catch the animation up but would
still send the 'expired' viseme across the serial port to the faceplate
with no wait time. The result is a fast-moving mouth racing to catch up,
which isn't very pleasing.
Now each viseme is passed along with an expiration date, so if its
display time has already passed the viseme code is thrown away instead
of being sent across the (relatively slow) serial port. This allows the
animation to catch up more gracefully.
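A minimal sketch of the idea, assuming visemes arrive as (code, end_time)
pairs with absolute deadlines; the faceplate call name is illustrative, not
the actual mycroft-core API:

    import time

    def send_visemes(visemes, enclosure):
        # `visemes` is assumed to be a list of (code, end_time) pairs, where
        # end_time is an absolute time.time() deadline for that mouth shape.
        for code, end_time in visemes:
            remaining = end_time - time.time()
            if remaining <= 0:
                # Expired: throw the viseme away instead of pushing it over
                # the slow serial link, so the animation catches up.
                continue
            enclosure.mouth_viseme(code)  # illustrative faceplate call
            time.sleep(remaining)         # hold the shape until its deadline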
==== Tech Notes ====
Since playback is now performed in a thread, curate_cache could clean
out generated speech before, or in the middle of, playing back the
queue.
==== Tech Notes ====
Since the voice is quite a large download, a stalled download is a real
possibility. Using wget allows resume and retry of the download in a
simple way.
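As an illustration (a sketch, not the actual script), a resumable,
retrying download with wget could be invoked like this:

    import subprocess

    def download_voice(url, dest):
        # -c       resume a partially downloaded file instead of restarting
        # --tries  retry the transfer the given number of times
        return subprocess.call(
            ['wget', '-c', '--tries=5', '-O', dest, url]) == 0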
==== Tech Notes ====
- Uses a download utility to download the voice binary
- Reverts to the default voice if not premium
- Uses the default voice during download and switches over when done
This commit officially switches the mycroft-core repository from
GPLv3.0 licensing to Apache 2.0. All dependencies on GPL'ed code
have been removed and we have contacted all previous contributors
with still-existing code in the repository to agree to this change.
Going forward, all contributors will sign a Contributor License
Agreement (CLA) by visiting https://mycroft.ai/cla, then they will
be included in the Mycroft Project's overall Contributor list,
found at: https://github.com/MycroftAI/contributors. This cleanly
protects the project, the contributors, and everyone who builds upon
the technology.
Further discussion can be found in this blog post:
https://mycroft.ai/blog/right-license/
This commit also removes all __author__="" lines from the code. These
lines are painful to maintain and the etiquette surrounding their
maintenance is unclear. Do you remove a name from the list if the
last line of code they wrote gets replaced? Etc. Now all
contributors are publicly acknowledged in the aforementioned repo,
and actual authorship is tracked by GitHub in a much more
effective and elegant way!
Finally, a few references to "Mycroft AI" were changed to the correct
legal entity name "Mycroft AI Inc."
==== Fixed Issues ====
#403 Update License.md and file headers to Apache 2.0
#400 Update LICENSE.md
==== Documentation Notes ====
Deprecated the ScheduledSkill and ScheduledCRUDSkill classes.
These capabilities have been superseded by the more flexible MycroftSkill
class methods schedule_event(), schedule_repeating_event(), update_event(),
and cancel_event().
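For example, a skill might now schedule events like this (a sketch only;
exact signatures and import paths may vary slightly between releases):

    from datetime import datetime, timedelta

    from mycroft.skills.core import MycroftSkill

    class ReminderSkill(MycroftSkill):
        def initialize(self):
            # One-shot event roughly a minute from now
            self.schedule_event(self.handle_reminder,
                                datetime.now() + timedelta(minutes=1),
                                name='one_off_reminder')
            # Repeating event every 600 seconds, starting immediately
            self.schedule_repeating_event(self.handle_checkin, None, 600,
                                          name='checkin')

        def handle_reminder(self, message):
            self.speak('Reminder fired')

        def handle_checkin(self, message):
            self.speak('Periodic check-in')

        def shutdown(self):
            self.cancel_event('checkin')
            super(ReminderSkill, self).shutdown()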
==== Tech Notes ====
isSpeaking was lowered as soon as the TTS had synthesized the audio,
not when the output finished. This commit moves the signal
raising/lowering into the TTS instead of the 'mycroft.speak' handler.
==== Fixed Issues ====
#958
==== Tech Notes ====
Adds method clear_visimes() to the voice playback thread to stop the
viseme stream, instead of having the viseme stream check for signals.
==== Documentation Notes ====
NONE
==== Localization Notes ====
NONE
==== Environment Notes ====
NONE
==== Protocol Notes ====
NONE
* Check queue empty with self.queue.empty() instead of len()
* Add error logging of exceptions in the TTS thread.
* Limit the number of audio_output_start messages.
The recognizer_loop:audio_output_start message will only be sent if the
queue has been empty since the last loop.
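A sketch of this gating (names here are illustrative, not the actual
mycroft-core ones):

    from queue import Empty

    def playback_loop(queue, emit, play):
        # `queue` holds synthesized audio snippets, `emit` posts a message
        # type on the bus, and `play` plays one snippet.
        was_empty = True
        while True:
            try:
                snippet = queue.get(timeout=0.2)
            except Empty:
                was_empty = True
                continue
            if was_empty:
                # Announce output start only when coming out of an idle
                # state, so a long queue does not emit one start message
                # per snippet.
                emit('recognizer_loop:audio_output_start')
                was_empty = False
            try:
                play(snippet)
            except Exception as e:
                # Log and keep going rather than letting the thread die.
                print('TTS playback error: {}'.format(e))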
* BUGFIX: The big bug was calling is_paired() during wake_word_in_audio(). When not paired, that call hit the server, taking about a second. Since it happened multiple times a second, the audio buffers got severely backed up. This resulted in weird behavior later as the buffers were cleared out.
* Added mycroft.api.has_been_paired(), which just looks for the pairing key (it does not validate it is still active with the server, like is_paired())
* The enclosure now checks for internet connectivity and kicks off the wifisetup process, not the wifisetup client itself.
* During the "onboarding" process, the microphone is muted using the new "mycroft.mic.mute" message. After pairing completes, the "mycroft.mic.unmute" is expected to be sent from the pairing skill. Unmuting again after a re-pairing is harmless.
* mute_and_speak() is smart enough to not unmute itself when complete if muted before
* util.check_for_signal() now accepts -1 as the lifetime. This means it never times out.
* util.stop_speaking() is more intelligent about shutting down the spoken text (including text that has been split at periods) and visemes
* Added a mycroft.api.is_paired() method
* Added mycroft.util.is_speaking and mycroft.util.wait_while_speaking() methods (see the sketch after this list)
* RESET now waits for the spoken notice to complete
* Stopped the "Checking for updates" and "Skills updated" prompts (commented out for now, probably will eliminate)
* Wifi setup filters out hidden ("x00") networks
* Visemes should keep up better if they get behind (will skip)
* Mimic is now searched for on the user's path
* Onboarding process:
- wifi setup starts automatically
- User is walked through the process
- wake word and button pressing are ignored
- At end, a short tutorial is given
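The is_speaking / wait_while_speaking utilities mentioned above can be
approximated on top of the signal mechanism like this (a sketch, not
necessarily the exact implementation):

    import time

    from mycroft.util import check_for_signal

    def is_speaking():
        # A lifetime of -1 means the 'isSpeaking' signal never times out;
        # it is lowered explicitly when audio output finishes.
        return check_for_signal('isSpeaking', -1)

    def wait_while_speaking():
        time.sleep(0.1)  # give a just-issued speak request time to start
        while is_speaking():
            time.sleep(0.1)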
The TTS audio is now cached. If the same TTS output is requested again, the cached WAV and phoneme sequence are reused.
Major points:
* Created mycroft.util.get_cache_directory(). You can give this a domain, also. The mycroft.conf can define where this directory resides, so enclosures can have this reside on a ramdisk, for instance.
* Created mycroft.util.curate_cache(). This retains a percentage of the disk size free.
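Example usage of the new cache utilities (the curate_cache keyword name and
the synthesis call are assumptions for illustration):

    import hashlib
    import os

    from mycroft.util import curate_cache, get_cache_directory

    # Cache synthesized speech under a TTS-specific domain; mycroft.conf can
    # point this at a ramdisk on constrained enclosures.
    cache_dir = get_cache_directory('tts')

    text = 'Hello world'
    key = hashlib.md5(text.encode('utf-8')).hexdigest()  # illustrative key
    wav_file = os.path.join(cache_dir, key + '.wav')

    if not os.path.exists(wav_file):
        synthesize_to_file(text, wav_file)  # hypothetical TTS call

    # Prune old entries so a percentage of the disk stays free.
    curate_cache(cache_dir, min_free_percent=5.0)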
* Upgrading gTTS
- Using play_mp3 to play mp3 files
Conflicts:
mycroft/tts/google_tts.py
* Update gTTS version
* Default to 'us-en' if lang is omitted
* Fix multiple sentence speech.
Wait until audio has been played before exiting `GoogleTTS.execute()`
- Initialize the TTS ws (websocket) and enclosure in the main process
Note:
- This is a minimal change to fix the problem.
- The ultimate goal is a totally isolated TTS process, which will require its own main and ws initialization; this is to be developed soon.