* Add support for deepspeech_server
deepspeech_server is a server running DeepSpeech (obviously). It's quite
easy to install and run: the package is available on pip, and it only
needs a config file pointing to the model.
This PR adds the DeepSpeechServerSTT class, an STT interface allowing
mycroft to use one of these servers.
config needed:
"stt": {
"module": "deepspeech_server",
"deepspeech_server": {
"uri": "http://IP-ADDRESS:PORT/stt"
}
}
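For illustration, a minimal sketch of what such a backend can look like (the base-class details are assumptions; the server is assumed to accept raw WAV data via POST and answer with the plain-text transcription):

    from requests import post

    from mycroft.stt import STT


    class DeepSpeechServerSTT(STT):
        """STT backend posting recorded audio to a deepspeech_server."""

        def execute(self, audio, language=None):
            # "uri" comes from the "deepspeech_server" config block
            response = post(self.config.get("uri"),
                            data=audio.get_wav_data())
            return response.text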
* Add deepspeech_server example to mycroft.conf
With the NTP checks in place, the sequence of visual and audio cues
was a little clunky. This refines it slightly for normal use and to
play better with the pairing process.
* Implement max-line to limit memory usage
The major point of this PR is to limit the memory usage of the CLI by
implementing a maximum log length, defaulting to 5000 lines (see the
sketch after the list below).
Other changes:
* Add "--debug" option to support troubleshooting/debugging the CLI itself
* Add support for jumping to the top (Ctrl+T/Ctrl+PgUp) or bottom (Ctrl+B/Ctrl+PgDn) of the logs.
* Remove the "OLDEST" message from the log. It was really no longer necessary since the log navigation issues got straightened out, and it complicated the max log line logic.
Raspberry Pis don't have a built-in clock, so at boot-up the clock just picks up from when they were last running. Normally this is corrected very quickly by NTP from an internet server, but without a network connection that cannot happen.
When an out-of-the-box Mark 1 or Picroft is being set up, the clock is set to whenever the image was created. Upon completing the Wifi setup step the NTP service can finally sync with the internet, so the time suddenly "jumps", usually to weeks later. In either case (when the date jumps or when the date is erroneously months old), there is potential for havoc.
These changes deal with that situation. Upon network connection, an NTP
synchronization is forced. If a major time jump is detected (more than
1 hour), the user is notified that the clock change requires a reboot,
and the system restarts.
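A rough sketch of the time-jump check (the sync command is an assumption; depending on the image it could be ntpdate, chrony or systemd-timesyncd):

    import subprocess
    import time

    MAX_SKEW = 60 * 60  # one hour, the reboot threshold described above

    def force_sync_and_check():
        before = time.time()
        subprocess.call(["ntpdate", "-u", "pool.ntp.org"])  # assumed command
        if abs(time.time() - before) > MAX_SKEW:
            # notify the user that the clock changed, then restart
            subprocess.call(["shutdown", "-r", "now"])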
Other changes:
* use the new "system." message namespace
* add a pause before the system.reboot during a WIPE, allowing the reset to fully complete
*
The speak method digs through the stack trying to find a Message object
and, if one is found, uses the context from that message when sending the
data to the speech subsystem.
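A minimal sketch of that stack walk (which frames and variable types get inspected is simplified here):

    import inspect

    from mycroft.messagebus.message import Message


    def _message_from_stack():
        """Scan caller frames for a local variable holding a Message."""
        for frame_info in inspect.stack():
            frame = frame_info[0]
            for value in frame.f_locals.values():
                if isinstance(value, Message):
                    return value
        return None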
==== Tech Notes ====
STT, intent handling, intent fallbacks and skill handlers are now timed and
tied together with an ident (consistent through the chain, so the flow from
STT until completion of the skill handler can be followed).
TTS execution time is also measured; right now this is not tied into the
ident due to the nature of the speech.
The report is always called "timing" and always contains the following
fields:
- id: Identifier grouping the metrics into interactions
- system: Which part (STT, intent service, skill handler, etc)
- start_time: timestamp for when the action started
- time: how long it took to execute the action
Each system adds its own specific information; for example, the
intent_service adds the intent_type, i.e. which handler was matched.
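For illustration, a report for an intent match could look like this (all values made up):

    report = {
        "id": "d39f4e52",            # ident shared across the interaction
        "system": "intent_service",  # which part produced the metric
        "start_time": 1514764800.0,  # when the action started
        "time": 0.042,               # seconds the action took
        "intent_type": "HelloWorldIntent"  # system-specific extra
    }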
==== Protocol Notes ====
mycroft.skills.loaded is sent together with the skill id and skill name
whenever a skill is loaded. This is used by the intent_service to
convert from id to skill name when reporting.
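Sketched as a bus message (the exact payload keys are assumptions):

    from mycroft.messagebus.message import Message

    def announce_skill_loaded(bus, skill_id, skill_name):
        # payload keys "id"/"name" are assumptions for illustration
        bus.emit(Message("mycroft.skills.loaded",
                         {"id": skill_id, "name": skill_name}))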
Basically, everywhere handle_metric is called I was adding a try-except
block to catch possible network/HTTP issues, so it's best to just handle
them here.
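A sketch of the centralized handling (endpoint and payload shape are assumptions):

    import requests

    from mycroft.util.log import LOG

    def handle_metric(name, data):
        """Upload a metric, swallowing network problems in one place."""
        try:
            requests.post("https://example.com/metrics/" + name,
                          json=data, timeout=2)
        except requests.RequestException as e:
            LOG.warning("Metric upload failed: {}".format(e))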
Move the language-specific functions and constants into separate files.
This will avoid many unnecessary conflicts caused by inadvertent encoding
changes.
Add message.utterance_remainder() method
This helper returns the portion of an utterance not already
consumed by the Adapt parser. For example,
"turn on the kitchen light" would have a remainder of
"the kitchen" if there was an Intent with entities that
matched "turn on" and "light". The returned text is passed
through normalize().
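A hypothetical skill using the helper (class name, keywords and dialog are made up):

    from adapt.intent import IntentBuilder
    from mycroft.skills.core import MycroftSkill, intent_handler


    class LightSkill(MycroftSkill):
        @intent_handler(IntentBuilder("TurnOn").require("TurnOnKeyword")
                        .require("LightKeyword"))
        def handle_turn_on(self, message):
            # utterance: "turn on the kitchen light";
            # Adapt consumed "turn on" and "light", so:
            target = message.utterance_remainder()  # -> "the kitchen"
            self.speak("Turning on {}".format(target))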