If the expect_response flag is set to True, STT will be triggered just as if the wake word had been received or the button on the Mycroft enclosure had been pressed.
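As a hedged illustration (the MycroftSkill import path and speak() signature below are assumptions, not part of this change), a skill might use the flag like this:

    from mycroft import MycroftSkill  # assumed import path

    class ReminderSkill(MycroftSkill):
        def confirm_reminder(self):
            # With expect_response=True the listener starts recording right
            # after the prompt, with no wake word or button press needed.
            self.speak("Should I remind you again in ten minutes?",
                       expect_response=True)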
This adds several refinements to the listening sound mechanism introduced earlier:
* Added a default sound file
* Allowed various ways to override "resource files" for customization
* Moved the sound configuration path from "confirm_ding" to
"sounds" > "start_listening"
* Added a "sounds" > "end_listening" configuration entry for future use
This submission also adds a new mycroft.util.resolve_resource_file(res_name)
method. It takes a name such as "snd/start_listening.wav" and looks (in
order, as sketched after this list):
* For an absolute path <res_name>
* For ~/.mycroft/<res_name>
* For /opt/mycroft/<res_name>
* For mycroft/res/<res_name> within the source package
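A minimal sketch of that lookup order, assuming only the standard library (the real mycroft.util implementation may differ in its details):

    import os

    def resolve_resource_file(res_name):
        """Return the first existing path for res_name, or None if nothing matches."""
        candidates = [
            res_name,                                    # 1. absolute path as given
            os.path.expanduser(os.path.join('~/.mycroft', res_name)),  # 2. user override
            os.path.join('/opt/mycroft', res_name),      # 3. system-wide override
            os.path.join(os.path.dirname(os.path.abspath(__file__)),
                         'res', res_name),               # 4. bundled mycroft/res default
        ]
        for path in candidates:
            if os.path.isfile(path):
                return path
        return None

With this in place, the "sounds" > "start_listening" entry can name either the bundled "snd/start_listening.wav" or a customized copy dropped under ~/.mycroft/ or /opt/mycroft/.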
* Listen confirmation
If enabled, plays a wave file to confirm that Mycroft is listening
* Listen confirmation ding config options
* Rename config option
* Update mycroft.conf
* Type: Rename config option
* Missing imports
Whoops, forgot them (was copying edits since I didn't have my dev environment set up)
* Removing unnecessary import functions
This is a bit of a hack for Picroft. The analog audio on a Pi often blocks
for 30 seconds, so we don't want to break the spoken text on periods there
(decreasing the chance of encountering the block). We keep the split for
non-Picroft installs, since it gives the user feedback faster on longer
phrases.
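A hedged sketch of that split logic (the platform flag and helper name are illustrative assumptions, not the exact code):

    import re

    def chunk_utterance(utterance, is_picroft):
        # On Picroft, keep the utterance whole so the blocking-prone analog
        # audio path is exercised as few times as possible.
        if is_picroft:
            return [utterance]
        # Elsewhere, split on sentence-ending periods so the user starts
        # hearing audio sooner on long phrases.
        return [chunk for chunk in re.split(r'(?<=\.)\s+', utterance) if chunk]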
* Fixes issue #434. Developers working on both Cerberus and Home during the transition would have to re-pair.
Also bumping enclosure client version.
* Correcting an error from when the Tarturus code was merged. At startup it was calling Enclosure.system_reset(), which reboots the Arduino, instead of Enclosure.reset(), which sets the UI to a "ready for input" state.
While in here, I also added docstrings for all Enclosure API methods.
* Increment Arduino code version
* Adding a call to reset the face UI when the enclosure service starts up. This is needed because the enclosure.reset posted by the speech service on the messagebus sometimes occurs before the enclosure client is up and listening for it -- especially if there is an Arduino firmware upgrade.
In the future, we may want to consider a core service roll-call that gets triggered whenever any of the core services come up.
* Update dev_setup.sh
- Initialize the TTS websocket (ws) and enclosure in the main process (a rough sketch follows the notes below)
Note:
- This is a minimal change to fix the problem.
- The ultimate goal, to be developed soon, is a totally isolated TTS process with its own main and ws initialization.
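A rough sketch of the shape of that change (module paths and call signatures here are assumptions for illustration, not a verbatim copy of the fix):

    from threading import Thread

    from mycroft.messagebus.client.ws import WebsocketClient  # assumed path
    from mycroft.enclosure.api import EnclosureAPI            # assumed path
    from mycroft.tts import TTSFactory                        # assumed path

    def main():
        # One shared messagebus connection, created in the main process
        ws = WebsocketClient()
        Thread(target=ws.run_forever).start()
        # Enclosure and TTS reuse that connection instead of building their own
        enclosure = EnclosureAPI(ws)
        tts = TTSFactory.create()
        tts.init(ws)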
"enclosure.system.reset" on the messagebus (which was intended to
only reset the enclosure's visual elements) to simply "reset()" and
"enclosure.reset" to avoid confusion with the "system.reset" serial
port message (which resets the Arduino).
* The new reset(), unlike system_reset(), means the Enclosure appearance
should be reset to its defaults. The implementation of this is now a reset
of both the mouth and the eyes. This command gets sent to the Enclosure once
the speech client has fully opened its connection to the messagebus.
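For illustration, that emit might look roughly like this once the bus connection is open (the Message import path and handler name are assumptions):

    from mycroft.messagebus.message import Message  # assumed import path

    def on_bus_connected(ws):
        # Restore the Enclosure's default appearance (mouth and eyes).
        # Note this is "enclosure.reset", not "system.reset", which would
        # reboot the Arduino over the serial port.
        ws.emit(Message('enclosure.reset'))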
* Several changes related to button pressing on the Mycroft unit:
- Pressing the button when it isn't listening starts listening
- Pressing the button while it is listening stops the listen
- Added a mycroft.util.signal() mechanism for out-of-thread communication (sketched after this list)
- Pressing the button now creates a "buttonPress" signal from the Enclosure
- Viseme playback and the aplay call check for the "buttonPress" signal and abort
- Removed "Sorry I didn't catch that", which was irritating during false activations
* Fixed spacing that pep8 yelled about
The 1980s birthed a new form of interaction between computers and users. For the first time, computers became capable of understanding the most basic form of human communication: pointing and grunting. The mouse and the GUI revolutionized computing and made computers accessible to the masses.
We have now entered a third era. We are rapidly approaching a time when computer systems will understand human language and respond using the most natural form of human communication – speech.
This is an important development. Some might even call it revolutionary.
Despite its importance, however, the technologies that will underpin this new method of interaction are the property of major tech firms who don't necessarily have the public's best interests at heart.
Not anymore.
Meet Mycroft – the world's first open source natural language platform. Mycroft understands human language and responds with speech. It is being designed to run on anything from a phone to an automobile and will change the way we interact with open source technologies in profound ways.
Our goal here at Mycroft is to improve this technology to the point that, when you interact with the software, it is impossible to tell whether you are talking to a human or a machine.
This initial release of the Mycroft software represents a significant effort by the Mycroft community to give the open source world access to this important technology. We are all hoping that the software will be useful to the public and will help to usher in a new era of human-machine interaction.
Our community welcomes everyone to use Mycroft, improve the software and contribute back to the project. With your help and support we can truly make Mycroft an AI for everyone.
Joshua W Montgomery – May 17, 2016