* Add a processing queue to influxdb
* Updates after reviews
* Remove lint
* Move retry loop to thread class
* Move constant calculation out of loop
* Deprecate retry_queue_limit
* Simplify entity update
* Split entity platform from entity component
* Decouple entity platform from entity component
* Always include unit of measurement again
* Lint
* Fix test
* Fix generic_thermostat bug when restoring state at HA startup
If you don't set "initial_operation_mode" in the config, `self._enabled`
is set to `True` when GenericThermostat is initialized. The restore path
then never takes the `if self._current_operation != STATE_OFF` branch,
so `self._enabled` stays `True` even when the restored state is off.
That's the bug (sketched below).
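A minimal sketch of the failure mode and the fix, with the class heavily simplified; only `_enabled`, `_current_operation`, and `STATE_OFF` come from the commit, the rest is illustrative:
```python
STATE_OFF = "off"

class GenericThermostat:
    def __init__(self, initial_operation_mode=None):
        # Without initial_operation_mode the thermostat starts enabled.
        self._current_operation = initial_operation_mode
        self._enabled = True

    def restore_state(self, old_state):
        # The fix: restoring an "off" state must also disable the
        # thermostat; otherwise _enabled keeps its True default.
        self._current_operation = old_state
        if self._current_operation == STATE_OFF:
            self._enabled = False
```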
* add a test to describe the restore case
* make port mapping optional
* dependencies + improvements
* Added bytes and packets sensors from IGD
* flake8 check
* new sensor with upnp counters
* checks
* whitespace in blank lines
* requirements update
* added sensor.upnp to .coveragerc
* downgrade miniupnpc
The latest version of miniupnpc is 2.0, but PyPI only has 1.9.
Fortunately, 1.9 is sufficient.
* Revert to non-async
miniupnpc makes blocking network calls, so this component can't be moved
to a coroutine (see the sketch below).
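For context, a rough sketch of the blocking calls involved; the method names here are taken from the miniupnpc Python bindings as I understand them, so treat them as assumptions:
```python
import miniupnpc

# Every call below performs blocking network I/O, which is why the
# component stays in a regular (threaded) update path, not a coroutine.
upnp = miniupnpc.UPnP()
upnp.discoverdelay = 200   # ms to wait for IGD discovery responses
upnp.discover()            # blocking: SSDP multicast discovery
upnp.selectigd()           # blocking: select the Internet Gateway Device

# Counters behind the bytes/packets sensors (method names assumed).
print(upnp.totalbytereceived(), upnp.totalbytesent())
print(upnp.totalpacketreceived(), upnp.totalpacketsent())
```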
* Cleanup
Forgot to remove the asyncio import.
* Add baudrate option
* merge
* Added Mediaroom media_player component
* Updated header
Works with MEO and VDF set-top boxes in Portugal
* formatting
* Development Checklist (done)
* fix formatting according to houndci-bot
* more format fixes (thanks, houndci-bot)
* more fixes
* too much cleanup...
* too much
* pylint check
* Initial commit
Basic configuration testing
* flake8 and lint
The module `asyncio.test_utils` has been removed from Python in the 3.7 branch, because it was intended to be a private module for internal testing of asyncio. For more information, see the upstream bug report at https://bugs.python.org/issue32273 and the upstream PR at https://github.com/python/cpython/pull/4785.
For this commit, I have migrated the small amount of functionality that was being used from `asyncio.test_utils` directly into the `RunThreadsafeTests` class. To see the original `asyncio.test_utils.TestCase` class, which I pulled some functionality from, please see: https://github.com/python/cpython/blob/3.6/Lib/asyncio/test_utils.py#L440
Note: in addition to being broken in 3.7, this test case also seems to be broken in Python 3.6.4 when using Docker. This PR fixes the test when run in Docker.
To reproduce: `./script/test_docker -- tests/util/test_async.py`
failing output (prior to this commit):
```
... trimmed ...
py36 runtests: PYTHONHASHSEED='3262989550'
py36 runtests: commands[0] | py.test --timeout=9 --duration=10 --cov --cov-report= tests/util/test_async.py
Test session starts (platform: linux, Python 3.6.4, pytest 3.3.1, pytest-sugar 0.9.0)
rootdir: /usr/src/app, inifile: setup.cfg
plugins: timeout-1.2.1, sugar-0.9.0, cov-2.5.1, aiohttp-0.3.0
timeout: 9.0s method: signal
―――――――――――――――――― ERROR collecting tests/util/test_async.py ――――――――――――――――――――――――
ImportError while importing test module '/usr/src/app/tests/util/test_async.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
tests/util/test_async.py:3: in <module>
from asyncio import test_utils
/usr/local/lib/python3.6/asyncio/test_utils.py:36: in <module>
from test import support
E ImportError: cannot import name 'support'
```
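A sketch of the kind of helper that was migrated: rather than relying on the removed private module, the test case manages its own event loop (simplified; the real class pulls over a bit more from `test_utils.TestCase`):
```python
import asyncio
import unittest

class RunThreadsafeTests(unittest.TestCase):
    """Simplified stand-in for what was taken from asyncio.test_utils."""

    def setUp(self):
        # A dedicated loop per test, detached from the current thread,
        # mirroring what asyncio.test_utils.TestCase used to provide.
        self.loop = asyncio.new_event_loop()
        asyncio.set_event_loop(None)

    def tearDown(self):
        self.loop.close()
```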
* Fixing demo platform to use support_flags
* Fixed tests as well
* Moved humidity low / high as always available based on defaults
* Updated demo platform to show more combinations
* Entity#unique_id defaults to None
* Initial commit entity registry
* Clean up unique_id property
* Lint
* Add tests to entity component
* Lint
* Restore some unique ids
* Spelling
* Remove use of IP address for unique ID
* Add tests
* Add tests
* Fix tests
* Add some docs
* Add one more test
* Fix new test…
* google_assistant: Refactor query_device
The previous code had issues where some domains could break out of the
shared handling and end up with weird brightness values, and we weren't
enforcing the `on` and `online` keys in the response.
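A hedged sketch of the response shape being enforced; the `on` and `online` keys come from the commit, while the handler itself is illustrative:
```python
def query_device(state):
    """Illustrative QUERY handler for a single device."""
    # Every device must report these two keys.
    response = {
        "on": state.state != "off",
        "online": True,
    }
    brightness = state.attributes.get("brightness")
    if brightness is not None:
        # Normalize HA's 0-255 brightness to a clamped 0-100 percentage
        # so no domain can leak out-of-range values into the response.
        response["brightness"] = max(0, min(100, round(brightness / 255 * 100)))
    return response
```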
* google_assistant: Add media_player to query test
* Refactor alexa smart_home tests
The previous tests had a lot of copy-pasted code due to a lack of helpers
for higher-level assertions. I'm hoping this makes it more reasonable to
extend all interfaces to support properties.
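An illustrative example of the kind of higher-level assertion that removes the copy-pasted blocks; the helper name and shape are assumptions, not the exact test code:
```python
def assert_endpoint_capabilities(endpoint, *interfaces):
    """Assert a discovered endpoint exposes exactly these Alexa interfaces."""
    found = {cap["interface"] for cap in endpoint["capabilities"]}
    assert found == set(interfaces), f"expected {set(interfaces)}, got {found}"
```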
* Lint
* Refactor Alexa Smart Home API
Having an object per interface will make it easier to support
properties.
Ideally, properties are reported in context in all responses. However, the
current implementation reports them only in response to a ReportState
request. This seems to work sufficiently: as long as the device is open in
the Alexa app, Amazon polls the device state every few seconds with a
ReportState request.
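A rough sketch of the object-per-interface idea; the class and method names are illustrative, not the merged code:
```python
class AlexaPowerController:
    """One object per Alexa interface, each able to serialize the
    properties it can report."""

    namespace = "Alexa.PowerController"

    def __init__(self, entity):
        self.entity = entity

    def serialize_properties(self):
        value = "ON" if self.entity.state != "off" else "OFF"
        yield {
            "namespace": self.namespace,
            "name": "powerState",
            "value": value,
        }
```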
* Report properties for some Alexa interfaces
Fixes (mostly) #11874.
Other interfaces will need properties implemented as well.
Implementing properties for just PowerController seems sufficient to
eliminate the "There was a problem." error for any device that supports
it, even if other interfaces are supported. Of course the additional
properties will be reported incorrectly in the Alexa app.
Includes a minor bugfix: `reportable` was previously placed incorrectly
in the responses, so Amazon was ignoring it.
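For reference, a hedged sketch of where those properties land in a ReportState exchange; the envelope follows the Alexa Smart Home v3 docs, with details like `messageId` and `correlationToken` trimmed:
```python
# A StateReport answering a ReportState directive: the interface
# properties travel in "context", not in the event payload.
state_report = {
    "context": {
        "properties": [
            {
                "namespace": "Alexa.PowerController",
                "name": "powerState",
                "value": "ON",
            }
        ]
    },
    "event": {
        "header": {
            "namespace": "Alexa",
            "name": "StateReport",
            "payloadVersion": "3",
        },
        "payload": {},
    },
}
```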
* Fixed Canary temperature sensor and remapped air quality value
* Addressed review comment
* - Fixed canary tests and added more tests
- Removed py-canary requirements from tests
* Noop to trigger a build again
* - Removed py-canary requirements from tests
* Addressed PR comment
* - Updated tests
- Removed py-canary from gen_requirements_all.py
* - Fixed hound violation
* Added back py-canary to gen_requirements_all.py as it's still needed in tests
* Added back py-canary to test requirements as it's still needed in tests
* Address PR comment
* Send Alexa Smart Home responses to debug log
* Report scripts and groups as scenes to Alexa
The Alexa API docs have a couple display categories that sound relevant
to scenes or scripts:
ACTIVITY_TRIGGER: Describes a combination of devices set to a
specific state, when the state change must occur in a specific
order. For example, a "watch Netflix" scene might require: 1. the TV
to be powered on and 2. the input set to HDMI1.
SCENE_TRIGGER: Describes a combination of devices set to a specific
state, when the order of the state change is not important. For
example, a bedtime scene might include turning off lights and
lowering the thermostat, but the order is unimportant.
Additionally, Alexa has a notion of scenes that support deactivation.
This is a natural fit for groups, and for scripts with delays, which can
be cancelled.
https://developer.amazon.com/docs/device-apis/alexa-discovery.html#display-categories
The mechanism to map entities to the Alexa Discovery response is
refactored since extending the data structures in MAPPING_COMPONENT to
implement supportsDeactivation would have added complication to what I
already found to be a confusing construct.
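A hedged sketch of what a script exposed as a scene could look like in the Discovery response; the endpoint details are illustrative, while `SCENE_TRIGGER` and `supportsDeactivation` come from the Alexa docs linked above:
```python
endpoint = {
    "endpointId": "script.watch_movie",   # illustrative entity
    "friendlyName": "Watch movie",
    "displayCategories": ["SCENE_TRIGGER"],
    "capabilities": [
        {
            "type": "AlexaInterface",
            "interface": "Alexa.SceneController",
            "version": "3",
            # Groups, and scripts with cancellable delays, can be
            # deactivated ("Alexa, turn off ...").
            "supportsDeactivation": True,
        }
    ],
}
```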
* Release worker thread while waiting for Z-wave startup
* Increase zwave startup timeout
* Adjust test
* Use asyncio.sleep in _check_awaked
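The idea behind these Z-Wave commits, sketched; only `_check_awaked` is named in the commits, and the rest (including the openzwave-style state check) is illustrative:
```python
import asyncio

async def _check_awaked(network, timeout=300):
    """Poll Z-Wave startup without tying up a worker thread."""
    for _ in range(timeout):
        if network.state >= network.STATE_AWAKED:
            return True
        # Yields to the event loop instead of blocking in time.sleep().
        await asyncio.sleep(1)
    return False
```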
* Remove lint
* Name loop parameter
* Expose Alexa Smart Home via HTTP POST
Haaska uses the deprecated v2 Alexa Smart Home payload. Exposing the v3
implementation this way allows an easy path to upgrading Haaska and
reducing code duplication with Home Assistant Cloud.
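A hedged sketch of what exposing the handler over HTTP can look like in Home Assistant terms; the URL, view name, and `async_handle_message` entry point are assumptions:
```python
from homeassistant.components.http import HomeAssistantView

class AlexaSmartHomeView(HomeAssistantView):
    """Accept Alexa Smart Home v3 directives via HTTP POST."""

    url = "/api/alexa/smart_home"   # assumed path
    name = "api:alexa:smart_home"

    async def post(self, request):
        hass = request.app["hass"]
        message = await request.json()
        # async_handle_message is the assumed v3 message handler.
        response = await async_handle_message(hass, message)
        return self.json(response)
```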