Changed git clone to git clone --depth=1 to reduce the clone size, since this is only for deploying Mycroft and not for development. Added instructions for developers to clone the full repository instead of the shallow one, as shown below.
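For reference, the two commands look like this (the repository URL here is assumed to be the standard mycroft-core one):

    # shallow clone - sufficient for deploying Mycroft
    git clone --depth=1 https://github.com/MycroftAI/mycroft-core.git

    # full clone with history - for development work
    git clone https://github.com/MycroftAI/mycroft-core.git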
As noted in the Chromium Dev How-to [1] and in the
SpeechRecognition library docs [2], the Google STT API
is really not the right API to call. While it is simple
and lets you authenticate with a plain token, it is
also limited to a maximum of *50* requests per day.
That's not a lot. I don't think many people will find that
useful, outside of quick testing (if they can even get an
API key - I couldn't figure out how to generate one that
worked correctly).
This commit introduces a new STT backend, google_cloud, so that
the Google STT backend can be deprecated eventually. To do so, we
needed to:
- Install the Google API Client Library
- Create a new STT class that knows how to turn the provided Google
JSON credentials file into a string (the SpeechRecognition library
expects the JSON string itself, not a file path or a parsed object);
see the sketch after this list.
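For illustration, a minimal standalone sketch of such a class follows. The class name, constructor arguments, and config layout are assumptions made for this example; the real backend in mycroft-core derives from its STT base class, but the credential handling is the interesting part:

    import json

    from speech_recognition import Recognizer


    class GoogleCloudSTT:
        """Sketch of a google_cloud STT backend.

        ``config`` is assumed to be the "stt" -> "google_cloud" section
        of mycroft.conf, with ``credential.json`` holding the parsed
        service-account credentials.
        """

        def __init__(self, config, lang="en-US"):
            self.lang = lang
            self.recognizer = Recognizer()
            # SpeechRecognition wants the credentials as a JSON *string*,
            # not a file path or a dict, so serialize them here.
            self.credentials_json = json.dumps(config["credential"]["json"])

        def execute(self, audio):
            # ``audio`` is a speech_recognition.AudioData instance
            return self.recognizer.recognize_google_cloud(
                audio,
                credentials_json=self.credentials_json,
                language=self.lang)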
Any person wishing to use this will need to:
- Enable the Cloud Speech API on the Google Cloud Platform console
- Create a new Service Account Key, and download the credentials to
a secure location
- Configure those credentials in mycroft.conf, like so (the "json" key
  holds the contents of the downloaded file):
    "stt": {
      "google_cloud": {
        "credential": {
          "json": { contents of downloaded credentials }
        }
      }
    }
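Note (an assumption, not stated in the original commit): mycroft.conf also selects which backend is active via the "module" key of the "stt" section, as it does for the existing backends, so alongside the block above you would presumably also need:

    "stt": {
      "module": "google_cloud",
      ...
    }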
It's worth noting that the Cloud Speech API has a free quota of
60 minutes per month, which would probably stretch further than 50
individual requests. So for hobbyists, it should be nothing but
a net benefit.
[1] http://www.chromium.org/developers/how-tos/api-keys
[2] https://github.com/Uberi/speech_recognition/blob/master/reference/library-reference.rst#recognizer_instancerecognize_googleaudio_data-key--none-language--en-us-show_all--false
Significant edits:
* Removed mentions of the old start.sh / mycroft.sh.
* Enhanced the getting-started instructions with more explicit steps.
* Reorganized to place the most commonly wanted information up front.
* Syntax, layout, and typo repairs.
Fixed a bunch of spelling errors, punctuation problems, and layout inconsistencies. Clarified some of the language. Added a link to the `screen` man page.
* Chat link update
Added: https://mycroft.ai/to/chat/
* Added home.mycroft.ai as config option
* Capitalization and small syntax edits
* Noticed that the FAQ/Common Errors section of README.md said that mycroft.sh would start the CLI, so I added it.
* Changed "three" to "four" in the FAQ, since four processes are listed.
* Added a --quiet option to start.sh cli to prevent echo when using the CLI.
* Per @the7erm's suggestion: changed the log flush to 1 second for screen.
* Modified mycroft.sh to support starting in three modes:
- service, skills, voice, cli --quiet
- service, skills, voice
- service, skills, cli
* Updated the Quick Start to reflect the changes to mycroft.sh.
The 1980s birthed a new form of interaction between computers and users. For the first time, computers became capable of understanding the most basic form of human communication - pointing and grunting. The mouse and the GUI revolutionized computing and made computers accessible to the masses.
We have now entered a third era. We are rapidly approaching a time when computer systems will understand human language and respond using the most natural form of human communication – speech.
This is an important development. Some might even call it revolutionary.
Despite its importance, however, the technologies that will underpin this new method of interaction are the property of major tech firms who don't necessarily have the public's best interests at heart.
Not anymore.
Meet Mycroft – the world's first open source natural language platform. Mycroft understands human language and responds with speech. It is being designed to run on anything from a phone to an automobile and will change the way we interact with open source technologies in profound ways.
Our goal here at Mycroft is to improve this technology to the point that, when you interact with the software, it is impossible to tell whether you are talking to a human or a machine.
This initial release of the Mycroft software represents a significant effort by the Mycroft community to give the open source world access to this important technology. We are all hoping that the software will be useful to the public and will help to usher in a new era of human-machine interaction.
Our community welcomes everyone to use Mycroft, improve the software, and contribute back to the project. With your help and support, we can truly make Mycroft an AI for everyone.
Joshua W Montgomery – May 17, 2016