* Add counter entities to the ZHA coordinator device
* rework to prepare for non coordinator device counters
* Initial scaffolding to support quirks v2 entities
* update for zigpy changes
* add assertion error message
* clean up test
* update group entity discovery kwargs
* constants and clearer names
* apply custom device configuration
* quirks switches
* quirks select entities
* quirks sensor entities
* update discovery
* move call to super
* add complex quirks v2 discovery test
* remove duplicate replaces
* add quirks v2 button entity support
* add quirks v2 binary sensor entity support
* fix exception in counter entity discovery
* oops
* update formatting
* support custom on and off values
* logging
* don't filter out entities quirks says should be created
* fix type alias warnings
* sync up with zigpy changes and additions
* add a binary sensor test
* button coverage
* switch coverage
* initial select coverage
* number coverage
* sensor coverage
* update discovery after rebase
* coverage
* single line
* line lengths
* fix double underscore
* review comments
* set category from quirks in base entity
* line lengths
* move comment
* imports
* simplify
* simplify
Because the setup retry was scheduled as a task, it would
not unset self._async_cancel_retry_setup in time, and we would
try to unsub self._async_cancel_retry_setup after it had already
fired. Change it to call a callback that runs right away so it
unsets self._async_cancel_retry_setup as soon as it is called,
leaving no race.
fixes #111796
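A minimal sketch of the fix, with illustrative names (not the actual config-entry code): the cancel handle is cleared synchronously inside the timer callback, before any coroutine is scheduled, so an unload can never try to cancel a handle that has already fired.
```python
import asyncio


class EntrySetupRetry:
    """Toy illustration of clearing the cancel handle synchronously."""

    def __init__(self, loop: asyncio.AbstractEventLoop) -> None:
        self._loop = loop
        self._async_cancel_retry_setup: asyncio.TimerHandle | None = None

    def schedule_retry(self, delay: float) -> None:
        # The callback runs directly in the event loop, not as a task,
        # so the handle is unset the moment the timer fires.
        self._async_cancel_retry_setup = self._loop.call_later(
            delay, self._retry_now
        )

    def _retry_now(self) -> None:
        # Clear the handle first; only then start the async setup work.
        self._async_cancel_retry_setup = None
        self._loop.create_task(self._async_setup())

    async def _async_setup(self) -> None:
        await asyncio.sleep(0)  # stand-in for the real setup
```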
* Deprecate @bind_hass and log error if used inside custom component
* Log also when accessing `hass.components`
* Log warning only when `hass.components` is used
* Change version
* Process code review
* Use `None` instead of `"unknown"` when the current version is unknown
* Only use the current file version from the OTA notification
* Use `sw_version`, if available, and update `current_file_version`
* Assume the current version is the latest version
* Fix lint errors
* Use `image` instead of `firmware`
* Include a changelog if updates expose it
* Clear latest firmware only after updating the installed version
* Bump minimum zigpy version to 0.63.0
* Create a data update coordinator to consolidate updates
* Fix overridden `async_update`
* Fix most unit tests
* Simplify `test_devices` to fix current tests
* Use a dict comprehension for creating mocked entities
* Fix unit tests (thanks @dmulcahey!)
* Update the currently installed version on cluster attribute update
* Drop `PARALLEL_UPDATES` now that we use an update coordinator
* Drop `_reset_progress`, it is already handled by the update component
* Do not update the progress if we are not supposed to be updating
* Ignore latest version (e.g. if device attrs changed) if zigpy rejects it
* Clean up handling of command id in `Ota.cluster_command`
* Start progress at 1%: 0 and False are considered equal and are filtered!
Use `ceil` instead of remapping 1-100
* The installed version will be auto-updated when the upgrade succeeds
* Avoid 1 as well, it collides with `True`
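The two progress commits above stem from Python's bool/int equivalence: `0 == False` and `1 == True`, so endpoint values can be swallowed by boolean-style filtering. A small sketch of the `ceil`-based mapping, with illustrative names:
```python
from math import ceil


def map_progress(done: int, total: int) -> int:
    """Map raw progress to 2..100 so it never collides with False or True."""
    # 0 == False and 1 == True in Python, so a consumer that filters
    # progress against a boolean sentinel drops both endpoints; ceil
    # keeps any started update at >= 1, and max() lifts it past 1.
    return max(2, ceil(100 * done / total))


assert map_progress(1, 100_000) == 2
assert map_progress(50_000, 100_000) == 50
assert map_progress(100_000, 100_000) == 100
```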
* Bump zigpy to (unreleased) 0.63.2
* Fix unit tests
* Fix existing unit tests
Send both event types
Globally enable sending both event types
* Remove unnecessary branches
* Test ignoring invalid progress callbacks
* Test updating a device with a no longer compatible firmware
* Import tplink in the executor to avoid blocking the event loop
2024-02-27 22:44:19.908 DEBUG (MainThread) [homeassistant.loader] Component tplink import took 1.620 seconds (loaded_executor=False)
* patch out discovery because it happens too fast now
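A hedged sketch of the pattern motivated by the log line above: run the blocking module import in the default executor so the event loop is not stalled for the 1.6 seconds the import takes.
```python
import asyncio
import importlib
from types import ModuleType


async def async_import_module(name: str) -> ModuleType:
    """Import a heavy module in a worker thread instead of the event loop."""
    loop = asyncio.get_running_loop()
    # importlib.import_module does blocking disk I/O and bytecode
    # compilation, so a slow import would otherwise block every callback.
    return await loop.run_in_executor(None, importlib.import_module, name)
```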
Avoid creating system monitor sensors for non-dirs
Currently we create sensors for /etc/hosts and /etc/asound.conf, since
they are bind mounts in the container, and each of these sensors has to
have its own coordinator.
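A minimal sketch of the guard, assuming a psutil-style partition listing (the real integration code may differ):
```python
import os

import psutil


def mount_points_needing_sensors() -> list[str]:
    """Skip bind-mounted files such as /etc/hosts; only dirs get sensors."""
    return [
        part.mountpoint
        for part in psutil.disk_partitions(all=True)
        if os.path.isdir(part.mountpoint)
    ]
```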
* Reduce latency to load storage by making the task eager
This changes the semantics a bit under the hood: the load can now
raise sooner, which means we do not store the task as _load_task
if it raises right away. Concurrent calls that hit a failure are
therefore likely to try the load again, which is a tiny performance
hit for this failure case.
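A demonstration of the eager-start behavior using Python 3.12's `asyncio.eager_task_factory` (Home Assistant uses its own eager-task helper): the coroutine body runs synchronously inside the create call, so very fast or immediately failing loads finish before the task would normally have been scheduled.
```python
import asyncio


async def main() -> None:
    # Python 3.12+: tasks begin executing inside create_task itself.
    asyncio.get_running_loop().set_task_factory(asyncio.eager_task_factory)

    async def load() -> str:
        return "loaded without suspending"

    task = asyncio.ensure_future(load())
    # The coroutine already ran to completion during creation; a load
    # that raises right away would likewise be done (with its exception
    # set) before being stored anywhere.
    print(task.done(), task.result())  # True loaded without suspending


asyncio.run(main())
```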
* fix
* will now finish in time
* Add Grid import export to enphase Envoy
* Update snapshot for labels dict element in entity registry
* use identity check for enum
* Revert use of identity check, didn't add entities
* Implement review feedback for tests
* CT phase sensors disabled by default
* import PHASENAMES from pyenphase
* Update tests/components/enphase_envoy/test_sensor.py
* Update tests/components/enphase_envoy/test_sensor.py
---------
Co-authored-by: J. Nick Koston <nick@koston.org>
* Always allow ignore and unignore flows for single config entry integrations
* Update tests/test_config_entries.py
Co-authored-by: Martin Hjelmare <marhje52@gmail.com>
---------
Co-authored-by: Martin Hjelmare <marhje52@gmail.com>
* introduce preserve last value option
* improve comments
* add unit test
* skip scheduling purge on a preserved value
* do not schedule sensor update if preserving last value
* fix unit test to use new mock time pattern
pattern introduced in https://github.com/home-assistant/core/pull/93499
* rename preserve_last_val to keep_last_sample
* add keep_last_sample config validation
* Deprecate Logi Circle integration
* Update homeassistant/components/logi_circle/__init__.py
Co-authored-by: G Johansson <goran.johansson@shiftit.se>
---------
Co-authored-by: G Johansson <goran.johansson@shiftit.se>
* Remove zeroconf from ssdp after deps
This was added in #36277 but is no longer needed since
we set up discovery integrations ahead of time to ensure
their deps are updated before other integrations can load
them.
* adjust test
* Use an eager task in the update coordinator scheduled refresh
We have a lot of places that will not suspend because the refresh function
decides it does not need to update. Currently these have to be scheduled
on the event loop even though they are a no-op.
Since _handle_refresh_interval is subclassed in some integrations, I created
a dunder wrapper function so that subclassing integrations cannot override it.
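A sketch of the dunder-wrapper idea, with illustrative names: name mangling makes the wrapper effectively final, so an integration that overrides `_handle_refresh_interval` still cannot replace the scheduling path.
```python
import asyncio


class Coordinator:
    def _schedule_refresh(self, loop: asyncio.AbstractEventLoop) -> None:
        # Schedule the mangled wrapper, not the overridable method.
        loop.call_later(30, self.__schedule_refresh_task, loop)

    def __schedule_refresh_task(self, loop: asyncio.AbstractEventLoop) -> None:
        # Mangled to _Coordinator__schedule_refresh_task, so subclasses
        # cannot override it; the eager-task optimization lives here.
        loop.create_task(self._handle_refresh_interval())

    async def _handle_refresh_interval(self) -> None:
        """Integrations may override this; the wrapper above still runs."""
```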
* fix time fires outside of patch
* Deconflict based on wake word
* Undo test
* Make wake up key a string, rename error
* Update snapshot
* Change to "wake word phrase" and normalize
* Move normalization into the wake provider
* Working on describe
* Use satellite info to resolve wake word phrase
* Add test for wake word phrase
* Match phrase with model name in wake word provider
* Check model id
* Use one constant wake word cooldown
* Update homeassistant/components/assist_pipeline/error.py
Co-authored-by: Paulus Schoutsen <balloob@gmail.com>
* Fix wake word tests
---------
Co-authored-by: Paulus Schoutsen <balloob@gmail.com>
* rebase off dev
* Update homeassistant/components/weatherflow_cloud/const.py
Co-authored-by: Joost Lekkerkerker <joostlek@outlook.com>
* Addressing 1st round of PR Comments
* Update homeassistant/components/weatherflow_cloud/config_flow.py
Co-authored-by: Joost Lekkerkerker <joostlek@outlook.com>
* addressing PR Comments
* fixing the last comment that I can see
* Update homeassistant/components/weatherflow_cloud/coordinator.py
OOPS
Co-authored-by: G Johansson <goran.johansson@shiftit.se>
* Update homeassistant/components/weatherflow_cloud/weather.py
Co-authored-by: G Johansson <goran.johansson@shiftit.se>
* Update homeassistant/components/weatherflow_cloud/coordinator.py
Co-authored-by: G Johansson <goran.johansson@shiftit.se>
* switching to station id
* Update homeassistant/components/weatherflow_cloud/strings.json
Co-authored-by: G Johansson <goran.johansson@shiftit.se>
* addressing PR
* Updated tests to be better
* Updated tests accordingly
* REAuth flow and tests added
* Update homeassistant/components/weatherflow_cloud/strings.json
Co-authored-by: G Johansson <goran.johansson@shiftit.se>
* Update homeassistant/components/weatherflow_cloud/coordinator.py
Co-authored-by: G Johansson <goran.johansson@shiftit.se>
* Addressing PR comments
* Apply suggestions from code review
* ruff fix
---------
Co-authored-by: Joost Lekkerkerker <joostlek@outlook.com>
Co-authored-by: G Johansson <goran.johansson@shiftit.se>
* Add support for pre-imports at setup time
alternative solution to #111331
* refactor
* refactor
* refactor
* mark >1.0s integrations
* no point in executor if already loaded
* no point in executor if already loaded
* cleanup
* cleanup
* two more
* one more
* analytics loads a lot more integrations
* cloud
* debug
* psutil, hardware
* try zha
* Update homeassistant/setup.py
* await
* comments
* coverage
* coverage
* coverage
* move logic to loader
* move logic to loader
* preserve comments
* Image entity media source
* MJPEG streaming
* Update on change rather than fixed interval
* Only send boundary twice
* return when image has no data
* Write each frame twice
* Use friendly name when browsing
* Fix sending of double frame
* Initial image proxy test
* Improve proxy stream test
* Refactor
* Code review fixes
* Allow setting if we support multiple config entries in config flow
* Move property to config flow instead of flow handler
* Move marking an integration as single instance only to manifest
* Revert line remove
* Avoid init a config flow or adding a new entry on a single instance with an entry
* Revert changes in test
* Process code review comments
* Apply code review suggestion
* Use discovery flow helper for hardware integrations
The discovery flow helper defers loading discovered integrations until after startup
to improve startup reliability.
* Use discovery flow helper for hardware integrations
The discovery flow helper defers loading discovered integrations until after startup
to improve startup reliability. Since hardware was not listed as a
discovery integration, the notification for new discoveries was missing.
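A generic sketch of the defer-until-started pattern the discovery flow helper provides (toy code, not the helper's real implementation):
```python
class DiscoveryFlowDispatcher:
    """Queue discovery-initiated flows until startup has finished."""

    def __init__(self) -> None:
        self._started = False
        self._pending: list[tuple[str, dict]] = []

    def async_create_flow(self, domain: str, data: dict) -> None:
        if not self._started:
            # Startup is still running: remember the discovery instead
            # of importing and setting up the integration right now.
            self._pending.append((domain, data))
            return
        self._start_flow(domain, data)

    def async_mark_started(self) -> None:
        self._started = True
        for domain, data in self._pending:
            self._start_flow(domain, data)
        self._pending.clear()

    def _start_flow(self, domain: str, data: dict) -> None:
        print(f"starting config flow for {domain} with {data}")
```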
The entity was removed before the entity registry could update it
```
Traceback (most recent call last):
File "/Users/bdraco/home-assistant/homeassistant/helpers/entity.py", line 1482, in _async_process_registry_update_or_remove
assert registry_entry is not None
AssertionError
```
* Subscribe to Traccar Server events
* No need to unsubscribe on error
* typo
* rename _attrs
* arg type
* reorder return type
* more specific
* Update stale docstring
* Avoid circular import in Storage.async_delay_save
We call Storage.async_delay_save for every entity being added or removed
from the registry. The late import took more time than everything else
in the function.
* Avoid reschedule churn in Storage.async_delay_save
When we are adding or removing entities we will call async_delay_save
quite often, and each call has to add and remove a TimerHandle on the
event loop, which can add up when a lot of registry items are changing.
If the timer handle still has 80% of the time remaining on it,
we will avoid rescheduling and let it fire at the time the
original async_delay_save call was made. This ensures we
do not force the event loop to rebuild its heapq because
too many timer handles were cancelled at once.
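A hedged sketch of the reschedule guard described above (the 80% threshold comes from the text; the names are illustrative):
```python
import asyncio
from collections.abc import Callable


class DelaySaver:
    """Coalesce rapid delay-save requests into one event-loop timer."""

    def __init__(self, loop: asyncio.AbstractEventLoop, delay: float) -> None:
        self._loop = loop
        self._delay = delay
        self._timer: asyncio.TimerHandle | None = None

    def async_delay_save(self, save: Callable[[], None]) -> None:
        if self._timer is not None:
            remaining = self._timer.when() - self._loop.time()
            if remaining >= 0.8 * self._delay:
                # Keep the existing timer: cancelling and re-adding a
                # TimerHandle on every registry change forces the loop
                # to rebuild its timer heapq.
                return
            self._timer.cancel()
        self._timer = self._loop.call_later(self._delay, save)
```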
* div0
* add coverage for 0 since we had none
* fix bad conflict
* tweaks
* tweaks
* tweaks
* tweaks
* tweaks
* tweaks
* more test fixes
* mqtt tests rely on event loop overhead
* Convert debouncer async_shutdown to be a normal function
nothing was being awaited here, and the shutdown call was only used
in integrations marked internal and other internals. It is possible
that a custom component might have been using the method, but it
seemed uncommon enough that it did not warrant marking this as a breaking
change. The update coordinator is no longer awaiting anything in
async_shutdown either now, but it seemed likely that this use
would get subclassed.
* fix
* Add counter entities to the ZHA coordinator device
* rework to prepare for non coordinator device counters
* counter entity test
* update log lines
* disable by default
* Bump library
* Update code to the new library version
* Improve diagnostics
* Fix tests
---------
Co-authored-by: Maciej Bieniek <478555+bieniu@users.noreply.github.com>
* aemet: disable legacy options
This enables proper timezone handling:
- Atlantic/Canary for the Canary Islands.
- Europe/Madrid for the Iberian Peninsula.
Also provides daily data for the current day after AEMET stops providing the
full day interval, which is normally after midday (12:00).
This is a breaking change because with the previous behaviour the daily data
for the current day wasn't available after midday, and now it will be.
To work around this, the integration library falls back to the
12-24 interval data if the 00-24 interval is no longer provided by the API.
Signed-off-by: Álvaro Fernández Rojas <noltari@gmail.com>
* Fix AEMET tests with v0.4.8
Signed-off-by: Álvaro Fernández Rojas <noltari@gmail.com>
---------
Signed-off-by: Álvaro Fernández Rojas <noltari@gmail.com>
* Add switch platform for husqvarna_automower
* Use RestrictedReasons const
* Typing
* Add snapshot testing
* Invert switch
* Test successful service calls
* Assert client mock calls
* Use getattr
* Update snapshot
* Add available property
* Add a new base class for control entities
* Make switch unavailable if mower is in error state
* Sort platforms
---------
Co-authored-by: Martin Hjelmare <marhje52@gmail.com>
* Add switch platform
* Add myuplink switch platform
* Update tests according to review
* Address more review comments
* Adjust types
* More typing
* Fix typo
* Use constants in tests
* Revert constants
* Catch aiohttp.ClientError when API call fails
* Add test case for failed async_set_device_points call
* Test api failures for both toggle directions
* Use parametrize for testing switching
* Move backup/* WS commands to the backup integration
* Call correct command
* Use debug for logging
* Remove assertion of hass.data for setup test
* parametrize token fixture
* Add sensor platform for husqvarna_automower
* Address review comments
* Try to fix test
* Improve sensors
* Address review
* Adapt some values
* Add test
* Add test for cutting blade usage time
* Import TEST_MOWER_ID
* Use parentheses around the lambda and indent it so it's easier to distinguish the entity description parameters from the lambda
* Add OpenID authentication to wolflink integration
* Update wolf-comm to 0.0.2
* Upgrade wolf_comm to 0.0.3 + fix tests
* Version 0.0.4 of wolf_comm including LICENSE.txt
* Update requirements to wolf_comm 0.0.4
---------
Co-authored-by: Jan Rothkegel <jan.rothkegel@web.de>
* Add async_schedule_call to the Debouncer
async_schedule_call allows the Debouncer to schedule a call
from a callback without having to create tasks to run
async_call
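A toy sketch of the idea (not the real Debouncer API): scheduling from a synchronous callback only arms a timer, so callers no longer create a task just to invoke `async_call`.
```python
import asyncio
from collections.abc import Callable, Coroutine
from typing import Any


class Debouncer:
    """Toy debouncer illustrating a callback-safe schedule method."""

    def __init__(
        self,
        loop: asyncio.AbstractEventLoop,
        cooldown: float,
        func: Callable[[], Coroutine[Any, Any, None]],
    ) -> None:
        self._loop = loop
        self._cooldown = cooldown
        self._func = func
        self._timer: asyncio.TimerHandle | None = None

    def async_schedule_call(self) -> None:
        # Safe from a plain callback: no coroutine or task is created here.
        if self._timer is None:
            self._timer = self._loop.call_later(self._cooldown, self._fire)

    def _fire(self) -> None:
        self._timer = None
        self._loop.create_task(self._func())
```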
* Update homeassistant/helpers/debounce.py
* Make device registry cleanup a fully callback-based function
* fix typing; the code supported callback functions, but the typing did not
* fixes
* fixes
* fix
* we had no coverage for other job types
* we had no coverage for other job types
* add calendar
* rename function
* remove device from test
* requested changes
* extend range
* fix async_get_events
* catch and test edge cases
* remove commented code
* rebase snapshot
* Reduce overhead to load multiple languages in translations
Instead of loading in a task, we now group everything
to be loaded into a single executor job
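A sketch of the consolidation described above, assuming JSON translation files on disk (illustrative, not the real loader): one executor job reads every requested language instead of one task per language.
```python
import asyncio
import json
from pathlib import Path


def _load_translation_files(paths: dict[str, Path]) -> dict[str, dict]:
    # One blocking function loads every language, so the event loop pays
    # for a single executor hop instead of one scheduled task per file.
    return {
        language: json.loads(path.read_text(encoding="utf-8"))
        for language, path in paths.items()
    }


async def async_load_translations(paths: dict[str, Path]) -> dict[str, dict]:
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, _load_translation_files, paths)
```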
* fixes
* fixes
* fixes
* fixes
* fixes
* update tests
* add missing coverage (was existing)
* Avoid scheduling registry loads as tasks in tests
Since we patch out async_load in Store, these will not yield
to the event loop, so it makes sense to await them instead
of creating tasks.
This reduced my local test run times by ~2.5% on average.
* mock out save as well so we do not schedule tasks to save empty data
* tweaks
* fix lingering files
* another one
* too much for one PR, reduce
* fix targets
* Add presets
* Make hvac_modes dynamic
* Add supported features
* Fix set preset mode
* Add coverage
* Remove unused constants
* Remove the extra newline
* Add snapshot assertion to new test
* Add comments
* Use ServiceValidationError
* Test for ServiceValidationError
* Fix typo
* Refactor to use _handle_coordinator_update
* Remove preset_mode prop
* Apply suggestions from code review
Co-authored-by: J. Nick Koston <nick@koston.org>
---------
Co-authored-by: J. Nick Koston <nick@koston.org>
fixes
```
2024-02-19 13:51:58.128 ERROR (MainThread) [homeassistant] Error doing job: Task exception was never retrieved: File "/Users/bdraco/home-assistant/venv/bin/hass", line 8, in <module>
sys.exit(main())
File "/Users/bdraco/home-assistant/homeassistant/__main__.py", line 209, in main
exit_code = runner.run(runtime_conf)
File "/Users/bdraco/home-assistant/homeassistant/runner.py", line 188, in run
return loop.run_until_complete(setup_and_run_hass(runtime_config))
File "/opt/homebrew/Cellar/python@3.12/3.12.1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/base_events.py", line 673, in run_until_complete
self.run_forever()
File "/opt/homebrew/Cellar/python@3.12/3.12.1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/base_events.py", line 640, in run_forever
self._run_once()
File "/opt/homebrew/Cellar/python@3.12/3.12.1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/base_events.py", line 1965, in _run_once
handle._run()
File "/opt/homebrew/Cellar/python@3.12/3.12.1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/events.py", line 84, in _run
self._context.run(self._callback, *self._args)
File "/Users/bdraco/home-assistant/homeassistant/helpers/entity_platform.py", line 610, in async_add_entities
await add_func(coros, entities, timeout)
File "/Users/bdraco/home-assistant/homeassistant/helpers/entity_platform.py", line 561, in _async_add_entities
await coro
File "/Users/bdraco/home-assistant/homeassistant/helpers/entity_platform.py", line 652, in _async_add_entity
entity.add_to_platform_start(
File "/Users/bdraco/home-assistant/homeassistant/components/device_tracker/config_entry.py", line 356, in add_to_platform_start
_async_connected_device_registered(
File "/Users/bdraco/home-assistant/homeassistant/components/device_tracker/config_entry.py", line 94, in _async_connected_device_registered
async_dispatcher_send(
File "/Users/bdraco/home-assistant/homeassistant/helpers/dispatcher.py", line 227, in async_dispatcher_send
hass.async_run_hass_job(job, *args)
File "/Users/bdraco/home-assistant/homeassistant/core.py", line 701, in async_run_hass_job
hassjob.target(*args)
File "/Users/bdraco/home-assistant/homeassistant/util/logging.py", line 133, in _callback_wrapper
func(*args)
File "/Users/bdraco/home-assistant/homeassistant/components/dhcp/__init__.py", line 392, in _async_process_device_data
self.async_process_client(ip_address, hostname, mac_address)
File "/Users/bdraco/home-assistant/homeassistant/components/dhcp/__init__.py", line 268, in async_process_client
discovery_flow.async_create_flow(
File "/Users/bdraco/home-assistant/homeassistant/helpers/discovery_flow.py", line 32, in async_create_flow
hass.async_create_task(init_coro, f"discovery flow {domain} {context}")
File "/Users/bdraco/home-assistant/homeassistant/core.py", line 634, in async_create_task
task = self.loop.create_task(target, name=name)
Traceback (most recent call last):
File "/Users/bdraco/home-assistant/homeassistant/config_entries.py", line 1017, in async_init
flow, result = await self._async_init(flow_id, handler, context, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bdraco/home-assistant/homeassistant/config_entries.py", line 1047, in _async_init
result = await self._async_handle_step(flow, flow.init_step, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bdraco/home-assistant/homeassistant/data_entry_flow.py", line 501, in _async_handle_step
result: FlowResult = await getattr(flow, method)(user_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bdraco/home-assistant/homeassistant/components/hunterdouglas_powerview/config_flow.py", line 127, in async_step_dhcp
return await self.async_step_discovery_confirm()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bdraco/home-assistant/homeassistant/components/hunterdouglas_powerview/config_flow.py", line 152, in async_step_discovery_confirm
assert self.discovered_ip and self.discovered_name
AssertionError
```