Added a RemoteTTSException for non-timeout errors. Mimic2 will
consider HTTP status codes 200-299 as OK, while any other response
will raise a RemoteTTSException.
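A minimal sketch of the status check, assuming a requests-based backend call; the function name and parameters are illustrative:

```python
import requests


class RemoteTTSException(Exception):
    """Raised when the remote TTS backend returns a non-2xx response."""


def request_synthesis(url, params):
    # Hypothetical helper; only the 2xx check mirrors the change above
    response = requests.get(url, params=params, timeout=10)
    if not 200 <= response.status_code < 300:
        raise RemoteTTSException('Backend returned HTTP status '
                                 '{}'.format(response.status_code))
    return response.content
```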
If rendering a chunk of a sentence takes too long, the audio queue
may run out and trigger listening prematurely.
This moves the listening trigger to after the last chunk.
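A rough sketch of the idea, with queue contents and helper names assumed: only the final chunk of an utterance carries the flag that lets the playback side trigger listening.

```python
import queue

audio_queue = queue.Queue()


def queue_sentence(chunks):
    # Producer: mark only the last chunk, so listening can't start
    # while later chunks are still being rendered
    for i, chunk in enumerate(chunks):
        audio_queue.put((chunk, i == len(chunks) - 1))


def playback_loop(play, trigger_listen):
    # Consumer: trigger listening only after the final chunk played
    while True:
        audio, listen = audio_queue.get()
        play(audio)
        if listen:
            trigger_listen()
```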
Major refactoring of the skills startup sequence
- Restructure to a less nested structure
- Remove usage of globals by wrapping state variables in a class. This allows things like caching a negative pairing status throughout the startup process (see the sketch after this list)
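A minimal sketch of the pattern with hypothetical names; the point is that startup state, including a cached pairing result, lives on one object instead of in module-level globals:

```python
class SkillLoaderState:
    """Hypothetical container for state previously held in globals."""

    def __init__(self, check_pairing):
        self.loaded_skills = {}
        self._check_pairing = check_pairing
        self._paired = None  # None means not yet checked

    def is_paired(self):
        # Cache the result, including a negative one, so the pairing
        # status isn't re-queried throughout the startup process
        if self._paired is None:
            self._paired = self._check_pairing()
        return self._paired
```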
* Refactor mimic2 to use the shared tts architecture
* Make sure the queue is cleared
- Add a convenience method grouping clear_queue and clear_visemes (see the sketch after this list)
- The start time is now set before the lock is taken, allowing multiple speech requests queued before the stop signal to also be cancelled
- Make sure any pending TTS generation is cleared from the queue by calling tts.clear() when breaking out of the chunking loop.
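A minimal sketch of what that convenience method could look like; the class context and surrounding state are assumed:

```python
class TTS:
    # ...existing queue and viseme handling...

    def clear(self):
        """Convenience method clearing both queued audio and visemes."""
        self.clear_queue()
        self.clear_visemes()
```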
This fixes a stray lock.release() that caused a silent
`RuntimeError: release unlocked lock`
after TTS execution, just before reporting the timing for the TTS system.
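For illustration only (not project code): a second release of a threading.Lock reproduces exactly this error, which is why the stray release had to go:

```python
import threading

lock = threading.Lock()
lock.acquire()
lock.release()
try:
    lock.release()  # the stray second release
except RuntimeError as e:
    print(e)  # "release unlocked lock"
```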
The Mark 1 button press can now be "consumed" when a skill handles
the Stop command. When this happens, the button press will not
trigger listening mode. An additional press would be needed to
trigger listening.
This introduces the "mycroft.stop.handled" messagebus message. It
carries a data field called "by" which identifies who handled it.
Currently the values are "TTS", for when speaking ends, or the name
of a skill which implements Stop and returns True from the call.
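Emitting the new message might look like this; the helper name is an assumption, the message type and "by" field come from the notes above:

```python
from mycroft.messagebus.message import Message


def report_stop_handled(bus, handler_name):
    """Announce a consumed stop; handler_name is 'TTS' or a skill name."""
    bus.emit(Message('mycroft.stop.handled', {'by': handler_name}))
```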
Also fixed a potential bug where the flag to clear queued visemes
was left set after a button press.
- Engines now specify whether they support ssml rather than relying on the configuration
- The text client strips out ssml tags
- Engines can modify tags via the `self.modify_tag` method (sketched after this list)
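A sketch of how an engine might declare and adapt tags. `modify_tag` comes from the notes above; the `ssml_tags` attribute name and the tag dialect are assumptions:

```python
class ExampleTTS:
    # Tags this engine can render; unsupported tags would be stripped
    # ('ssml_tags' is an assumed attribute name)
    ssml_tags = ['speak', 'break', 'prosody']

    def modify_tag(self, tag):
        # Translate a standard ssml tag into the engine's own dialect
        return tag.replace('break', 'pause')
```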
This sends a ctrl+c signal (SIGINT) to each process, which allows code to exit properly by handling KeyboardInterrupt.
Other notable changes:
- create_daemon method used to clean up creation of daemon threads
- create_echo_function used to reduce code duplication with messagebus
echo functions
- wait_for_exit_signal used to wait for ctrl+c (SIGINT)
- reset_sigint_handler used to ensure SIGINT will raise KeyboardInterrupt
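Standard-library sketches of what these helpers might look like; the bodies are assumptions based on the descriptions above:

```python
import signal
import threading
import time


def reset_sigint_handler():
    # Restore the default handler so SIGINT raises KeyboardInterrupt
    # even if a parent process installed something else
    signal.signal(signal.SIGINT, signal.default_int_handler)


def create_daemon(target, args=()):
    # Start a daemon thread; daemon threads don't block process exit
    thread = threading.Thread(target=target, args=args)
    thread.daemon = True
    thread.start()
    return thread


def wait_for_exit_signal():
    # Block the main thread until ctrl+c (SIGINT) arrives
    try:
        while True:
            time.sleep(100)
    except KeyboardInterrupt:
        pass
```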
The speak method digs through the stack trying to find a Message object
and, if one is found, uses the context from that message when sending
the data to the speech subsystem.
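A sketch of that stack walk, assuming Python's inspect module, the Message class from mycroft.messagebus.message, and that callers keep the incoming message in a local named `message`:

```python
import inspect

from mycroft.messagebus.message import Message


def _dig_for_message():
    # Walk up the call stack; if any frame holds a Message in a local
    # named 'message', reuse it (and its context) for the speech request
    for frame_info in inspect.stack():
        candidate = frame_info.frame.f_locals.get('message')
        if isinstance(candidate, Message):
            return candidate
    return None
```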
==== Tech Notes ====
STT, intent handling, intent fallbacks and skill handlers are now timed
and tied together with an ident (consistent through the chain), so the
flow from STT until completion of the skill handler can be followed.
TTS execution time is also measured; right now this is not tied into the
ident due to the nature of the speech.
The report is always called "timing" and always contains the following
fields:
- id: identifier grouping the metrics into interactions
- system: which part (STT, intent service, skill handler, etc.)
- start_time: timestamp for when the action started
- time: how long it took to execute the action
Each system adds its own specific information; for example, the
intent_service adds the intent_type, i.e. which handler was matched.
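A sketch of emitting such a report; the helper name and bus argument are assumptions, the field names come from the list above:

```python
import time

from mycroft.messagebus.message import Message


def report_timing(bus, ident, system, start_time, additional=None):
    # Hypothetical helper emitting the "timing" report described above
    data = {
        'id': ident,
        'system': system,
        'start_time': start_time,
        'time': time.time() - start_time,
    }
    data.update(additional or {})  # e.g. intent_type for intent_service
    bus.emit(Message('timing', data))
```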
==== Protocol Notes ====
mycroft.skills.loaded is sent together with skill id and skill name
whenever a skill is loaded. This is used in the intent_service to
convert from id to skill name when reporting.
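The emission might look like this (sketch; the helper name and any payload keys beyond id and name are assumptions):

```python
from mycroft.messagebus.message import Message


def announce_loaded(bus, skill_id, skill_name):
    # Lets the intent_service map skill ids back to readable names
    bus.emit(Message('mycroft.skills.loaded',
                     {'id': skill_id, 'name': skill_name}))
```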
The lock could be taken by a waiting thread between the sentences of a multi-sentence utterance. This locking method allows the entire utterance to be synthesized before the next is handled.
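The change amounts to holding the lock around the whole utterance instead of around each sentence; a sketch with assumed names:

```python
import threading

tts_lock = threading.Lock()


def speak_utterance(sentences, synthesize):
    # Hold the lock for the full utterance so a waiting thread cannot
    # grab it between sentences
    with tts_lock:
        for sentence in sentences:
            synthesize(sentence)
```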
Add support for 'mycroft.mic.listen' on the messagebus to trigger the system
to listen for STT processing. This can be posted on the messagebus by outside
systems, such as a physical or GUI Listen button.
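An outside system, such as a GUI Listen button handler, might post it like this (bus client setup assumed):

```python
from mycroft.messagebus.message import Message


def on_listen_button_pressed(bus):
    # Ask the system to start recording for STT processing
    bus.emit(Message('mycroft.mic.listen'))
```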