The speak method digs through the call stack trying to find a Message
object and, if one is found, uses the context from that message when
sending the data to the speech subsystem.
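A minimal sketch of such a stack walk (the helper name and the scan of
frame locals are an assumed approach, not the exact code):

    import inspect

    from mycroft.messagebus.message import Message


    def _find_message_in_stack():
        """Walk up the call stack looking for a local Message object.

        Returns the first Message found, or None if no caller holds one.
        """
        for frame_info in inspect.stack():
            for value in frame_info.frame.f_locals.values():
                if isinstance(value, Message):
                    return value
        return None

If a message is found its context can be merged into the speak request;
otherwise the call proceeds without inherited context.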
==== Tech Notes ====
STT, intent handling, intent fallbacks and skill handlers are now timed
and tied together with an ident (consistent through the chain), so the
flow from STT until completion of the skill handler can be followed.
TTS execution time is also measured; right now this is not tied into
the ident due to the nature of the speech handling.
The report is always called "timing" and always contains the following
fields:
- id: Identifier grouping the metrics into interactions
- system: Which part (STT, intent service, skill handler, etc)
- start_time: timestamp for when the action started
- time: how long it took to execute the action
Each system adds its own specific information, for example the
intent_service adds the intent_type, i.e. which handler was matched.
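For illustration, a report with these fields might be assembled like
this (report_timing and the stub sender are hypothetical helpers, not
the actual implementation):

    import time


    def handle_metric(name, data):
        """Stand-in for the metric sender discussed under Protocol Notes."""
        print(name, data)


    def report_timing(ident, system, start_time, duration, extra=None):
        """Build and send a 'timing' report with the common fields."""
        report = {
            'id': ident,           # groups the metrics into one interaction
            'system': system,      # STT, intent_service, skill handler, ...
            'start_time': start_time,
            'time': duration,
        }
        if extra:                  # system-specific data, e.g. intent_type
            report.update(extra)
        handle_metric('timing', report)


    # Example: timing an STT request.
    start = time.time()
    # ... perform the STT call ...
    report_timing('example-ident', 'stt', start, time.time() - start)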
==== Protocol Notes ====
mycroft.skills.loaded is sent together with skill id and skill name
whenever a skill is loaded. This is used in the intent_service to
convert from id to skill name when reporting.
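Both ends of that exchange could look roughly like this (the messagebus
client import path, the handler wiring and the exact data field names
are assumptions):

    from mycroft.messagebus.client.ws import WebsocketClient
    from mycroft.messagebus.message import Message

    bus = WebsocketClient()  # assumes a running messagebus service

    # Skill loader side: announce the freshly loaded skill.
    bus.emit(Message('mycroft.skills.loaded',
                     {'id': 'skill-weather', 'name': 'WeatherSkill'}))

    # Intent service side: keep an id -> name map for the reports.
    skill_names = {}

    def handle_skill_loaded(message):
        skill_names[message.data['id']] = message.data['name']

    bus.on('mycroft.skills.loaded', handle_skill_loaded)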
At every place where handle_metric is called I was adding a try-except
block to catch possible network/HTTP issues, so I feel it's best to add
the handling in handle_metric itself.
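In other words, something along these lines (the upload call and its
endpoint are placeholders):

    import requests

    from mycroft.util.log import LOG


    def upload_metric(name, data):
        """Placeholder: POST the metric to the metrics endpoint."""
        requests.post('https://example.invalid/metrics/' + name, json=data)


    def handle_metric(name, data):
        """Send a metric, swallowing network errors in one central place.

        Catching connection problems here means no caller needs its own
        try-except around metric reporting.
        """
        try:
            upload_metric(name, data)
        except requests.RequestException as e:
            LOG.error('Metric {} could not be uploaded: {}'.format(name, e))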
As suggested by @turboproc, this restores GoogleCloudSTT. For details on the change see https://github.com/Uberi/speech_recognition/releases
==== Fixed Issues ====
#1329
==== Environment Notes ====
SpeechRecognition updated to 3.8.1
Move the language-specific functions and constants into separate files.
This will avoid many unnecessary conflicts due to involuntary encoding
changes.
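For example, the split might look like this, with a thin dispatcher
choosing the implementation (the module and function names below are
illustrative, not the exact ones):

    # Hypothetical layout after the split:
    #   mycroft/util/lang/parse_en.py  -- English-only functions/constants
    #   mycroft/util/lang/parse_sv.py  -- Swedish-only functions/constants
    #
    # The public module keeps only a language-neutral dispatcher, so a
    # change to parse_sv.py never conflicts with one to parse_en.py.

    def extract_number(text, lang='en-us'):
        if lang.startswith('en'):
            from mycroft.util.lang.parse_en import extract_number_en
            return extract_number_en(text)
        elif lang.startswith('sv'):
            from mycroft.util.lang.parse_sv import extract_number_sv
            return extract_number_sv(text)
        raise NotImplementedError('Unsupported language: ' + lang)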
Add message.utterance_remainder() method
This helper returns the portion of an utterance not already
consumed by the Adapt parser. For example,
"turn on the kitchen light" would have a remainder of
"the kitchen" if an Intent's entities matched
"turn on" and "light". The returned text is passed
through normalize().
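A sketch of how such a helper can work, assuming Adapt's matched tokens
are exposed on the message data under '__tags__' with the matched text
in a 'key' field (both names are assumptions here):

    import re

    from mycroft.util.parse import normalize


    def utterance_remainder(message):
        """Return the part of an utterance Adapt did not consume."""
        utt = normalize(message.data.get('utterance', ''))
        for tag in message.data.get('__tags__', []):
            # Remove only whole-word occurrences of the matched token.
            utt = re.sub(r'\b' + re.escape(tag.get('key', '')) + r'\b',
                         '', utt)
        return normalize(utt)

normalize() is applied both before and after stripping so the matched
tokens and the result are compared in the same normalized form.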