Commit Graph

26 Commits (18f4ff3e901faae71351204c66a901b19eed2dc1)

Author SHA1 Message Date
kfezer 0446d58ba3 Merge pull request #578 from SoloVeniaASaludar/bugfix/issue-577
Speech client: fix error in the control of the interval between calls to the wake-up recognizer (#577)
2017-03-29 17:30:46 +00:00
SoloVeniaASaludar 4df486da82 Update mic.py 2017-03-20 19:41:33 +01:00
SoloVeniaASaludar 3c6a40c5ea Update mic.py 2017-03-20 19:40:30 +01:00
SoloVeniaASaludar e63cb1d628 Update mic.py 2017-03-18 11:02:33 +01:00
penrods bc9956cd68 Fixing sloppy copy/paste errors. 2017-03-10 16:23:00 -06:00
penrods 9fce7d4620 This implements CLI enhancements per issue #547
Main CLI enhancements:
* Microphone meter
* Long log line left/right scrolling
* Eliminated flicker
* VT100 ESC key code support (used by some terms)

In addition, achieving the meter required a mechanism for local Inter-Process Communication (IPC). This is implemented using the file system. By default a folder structure is created under /tmp/mycroft/ipc, but it can be redirected elsewhere by setting the config value in mycroft.conf:
    "ipc_path" : "/path/to/somewhere"
In the future, Mark 1 and Picroft will get RAM disks to avoid burning out the SD card; this also makes for a very fast communication mechanism. All of this is hidden behind util.get_ipc_directory() (a sketch of such a helper follows this entry).

Further, the named-signal mechanism was changed to use the IPC folder. A signal can now have a lifetime (not just one-shot).
2017-03-10 01:30:15 -06:00
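A minimal sketch of how such a file-system IPC directory helper could look, assuming a plain JSON read of mycroft.conf and the "ipc_path" key described above; the config_file argument and the optional domain subfolder are illustrative, not the project's actual signature:

    import json
    import os

    DEFAULT_IPC_PATH = "/tmp/mycroft/ipc"            # default named in the commit above

    def get_ipc_directory(domain=None, config_file="mycroft.conf"):
        """Return (and create) the base directory used for file-based IPC."""
        path = DEFAULT_IPC_PATH
        if os.path.isfile(config_file):              # optional "ipc_path" override
            with open(config_file) as f:
                path = json.load(f).get("ipc_path", DEFAULT_IPC_PATH)
        if domain:
            path = os.path.join(path, domain)        # e.g. a per-purpose subfolder
        os.makedirs(path, exist_ok=True)             # safe if it already exists
        return path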
penrods e47e38c92c Fixes issue #528
Max recording time is now 10 seconds instead of 30. This deals with cases where a noisy background prevents the listener's silence detection from triggering. Thirty seconds was far too long to keep listening -- nobody is going to be saying something that long, at least for now (a rough sketch of the cap follows this entry).
2017-02-23 20:54:39 -08:00
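A rough sketch of that kind of cap, with illustrative names and thresholds rather than the project's actual listener code: audio is collected until sustained silence is detected or the hard time limit is hit.

    import time

    def record_phrase(read_chunk, is_silent, max_seconds=10.0, silence_needed=1.0):
        """Collect audio chunks until sustained silence or the hard time cap."""
        audio = bytearray()
        start = last_sound = time.time()
        while True:
            chunk = read_chunk()                     # next block of raw audio
            audio += chunk
            now = time.time()
            if not is_silent(chunk):
                last_sound = now
            if now - last_sound >= silence_needed:   # normal end of the phrase
                break
            if now - start >= max_seconds:           # hard cap (was 30 s, now 10 s)
                break
        return bytes(audio)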
Jonathan D'Orleans 416191e598 Issues 350 - Synchronizing local and remote configuration 2016-12-17 10:27:01 -05:00
Jonathan D'Orleans 4c1ba4e337 Issues 356 - Rebasing with master 2016-12-17 10:16:29 -05:00
Steve cac955fa64 Feature button behavior (#365)
* Several changes related to button pressing on the Mycroft unit:
- Pressing the button when it isn't listening starts listening
- Pressing the button while listening stops the listen
- Added a mycroft.util.signal() mechanism for out-of-thread communication (a sketch follows this entry)
- Pressing the button now creates a "buttonPress" signal from the Enclosure
- The viseme playback and aplay check for the 'buttonPress' signal to abort
- Removed "Sorry I didn't catch that", which was irritating during false activations

* Fixed spacing that pep8 yelled about
2016-09-22 13:16:11 -05:00
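A rough sketch of a file-backed signal in the spirit of the mechanism described above; the function names, the lifetime handling, and the folder path (borrowed from the IPC commit higher up) are illustrative rather than the project's actual helpers:

    import os
    import time

    SIGNAL_DIR = "/tmp/mycroft/ipc/signal"           # IPC folder from the commit above

    def create_signal(name):
        """Raise a named signal by creating an empty marker file."""
        os.makedirs(SIGNAL_DIR, exist_ok=True)
        open(os.path.join(SIGNAL_DIR, name), "w").close()

    def check_for_signal(name, lifetime_secs=0):
        """Return True if the signal exists (and is young enough), consuming it."""
        path = os.path.join(SIGNAL_DIR, name)
        if not os.path.isfile(path):
            return False
        expired = lifetime_secs and time.time() - os.path.getmtime(path) > lifetime_secs
        os.remove(path)                              # consume (or clear an expired) signal
        return not expired

For example, the viseme/aplay playback loop could poll check_for_signal("buttonPress") and abort as soon as the Enclosure raises it.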
Ethan Ward b60bd99e79 Fix unused imports (#296) 2016-07-18 15:45:11 -05:00
Matthew D. Scholefield a43b7b3ace Add new listener config options- Fixes #260 (#266)
* Added new listener config options

* Fixed audio unit tests
This adds a clearer audio file for the wakeup test and changes to the new LocalRecognizer constructor

* Added .dict files to .gitignore
This is because they are now auto-generated on startup rather than stored permanently

* Fixed audio accuracy test for new LocalRecognizer constructor

* Added support for spaces in wake word config
In the phonemes, a new word is indicated by a period character. Separating the words changes the way pocketsphinx interprets the sound and, in this case, improves it (an illustrative example follows this entry).

* Fixed unit test
2016-07-07 18:24:52 -05:00
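An illustrative sketch of how a pocketsphinx .dict file could be auto-generated at startup from a multi-word wake word, splitting the phoneme string on the '.' separator; the function names and file layout are assumptions, not the commit's actual code:

    def generate_dict_entries(wake_word, phonemes):
        """Pair each word of the wake word with its phoneme group.

        The phoneme string uses '.' to separate words, e.g.
        "HH EY . M AY K R AO F T" for a two-word wake word like "hey mycroft".
        """
        words = wake_word.split()
        groups = [g.strip() for g in phonemes.split(".")]
        if len(words) != len(groups):
            raise ValueError("need one '.'-separated phoneme group per word")
        return ["{} {}".format(w, g) for w, g in zip(words, groups)]

    def write_dict_file(path, wake_word, phonemes):
        """Write the auto-generated dictionary (why .dict files are gitignored)."""
        with open(path, "w") as f:
            f.write("\n".join(generate_dict_entries(wake_word, phonemes)) + "\n")

    # generate_dict_entries("hey mycroft", "HH EY . M AY K R AO F T")
    # -> ["hey HH EY", "mycroft M AY K R AO F T"]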
Arron Atchison a9c00d69c3 Merge pull request #230 from Wolfgange3311999/feature/issues-224
Added timeout override to listener - fixes #224
2016-06-23 17:27:29 -05:00
Matthew Scholefield d2dc5443ee Fixed pep8 for mic.py 2016-06-23 17:14:49 -05:00
Matthew Scholefield 54cf720f8b Added timeout override to listener and fixed bug with it
Previously, the listener's behavior was influenced by the buffer size. Now it is not, since the noise is increased by a factor of the seconds per buffer (see the sketch after this entry).
2016-06-23 16:03:47 -05:00
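A small sketch of the scaling idea with illustrative names and rates: because each adjustment is proportional to the chunk's duration, halving the buffer size (and so doubling the number of chunks) leaves the overall rate of change of the noise score unchanged.

    def update_noise_level(noise, chunk_is_loud, chunk_size, sample_rate,
                           increase_per_second=10.0, decrease_per_second=5.0):
        """Adjust an accumulated noise score by an amount scaled to chunk length."""
        seconds_per_buffer = chunk_size / float(sample_rate)
        if chunk_is_loud:
            noise += increase_per_second * seconds_per_buffer
        else:
            noise -= decrease_per_second * seconds_per_buffer
        return max(noise, 0.0)                       # never let the score go negative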
Matthew Scholefield d9905a2d07 Removed unnecessary variable in ResponsiveRecognizer
This was left over from when I was experimenting with making the class asynchronous
2016-06-21 11:54:41 -05:00
Ryan Sipes 159ece558d Revert "Revert "Listener improvements (Fixes #128)"" 2016-06-18 14:00:07 -05:00
Ryan Sipes 32ce7a492f Revert "Listener improvements (Fixes #128)" 2016-06-18 13:21:21 -05:00
Matthew D. Scholefield 5df1e5e59b Removed extra import from previous commit 2016-06-17 22:13:26 -05:00
Wolfgange3311999 99d125640a Added a timeout of 30 seconds for recording a phrase 2016-06-17 21:05:28 -05:00
Wolfgange3311999 153cd62abd Changed to using += operator
Previously I thought bytearrays were immutable for some reason, which is not the case (see the example after this entry).
2016-06-17 20:55:01 -05:00
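A quick illustration of the point: += on a bytearray extends the same object in place, whereas bytes objects are immutable and concatenation builds a new one.

    buf = bytearray(b"abc")
    chunk = b"def"

    buf += chunk                     # extends the same bytearray in place
    assert buf == bytearray(b"abcdef")

    frozen = b"abc"
    frozen = frozen + chunk          # bytes are immutable: this creates a new object
    assert frozen == b"abcdef"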
Matthew Scholefield d0cbddc961 Removed unused variable 2016-06-17 17:10:47 -05:00
Wolfgange3311999 b1900c3d81 Rewrote listener 2016-06-17 16:50:41 -05:00
Ryan Sipes 8f2c451938 Fixed Missing License Headers on All Files.
GPL license header added to the top of each Python file.
2016-05-26 11:16:13 -05:00
Leo Arias d618676089 Issues-4 - Fix pep8 errors. 2016-05-23 17:23:47 +00:00
Arron Atchison 6e42bb1736 In the 1970s, computer users had to understand the arcane syntax of the machines they used. They programmed their computers in the machine's native language and hardly gave it a thought.
The 1980s birthed a new form of interaction between computers and users.  For the first time computers became capable of understanding the most basic form of human communication - pointing and grunting.  The mouse and the GUI revolutionized computing and made computers accessible to the masses.

We have now entered a third era.  We are rapidly approaching a time when computer systems will understand human language and respond using the most natural form of human communication – speech.

This is an important development.  Some might even call it revolutionary.

Despite its importance, however, the technologies that will underpin this new method of interaction are the property of major tech firms who don't necessarily have the public's best interests at heart.

Not anymore.

Meet Mycroft – the world's first open source natural language platform. Mycroft understands human language and responds with speech. It is being designed to run on anything from a phone to an automobile, and it will change the way we interact with open source technologies in profound ways.

Our goal here at Mycroft is to improve this technology to the point that, when you interact with the software, it is impossible to tell whether you are talking to a human or a machine.

This initial release of the Mycroft software represents a significant effort by the Mycroft community to give the open source world access to this important technology. We all hope the software will be useful to the public and will help usher in a new era of human-machine interaction.

Our community welcomes everyone to use Mycroft, improve the software and contribute back to the project.  With your help and support we can truly make Mycroft an AI for everyone.

Joshua W Montgomery – May 17, 2016
2016-05-20 09:16:01 -05:00