* Add ability to pass the config entry explicitly in data update coordinators (a sketch of the pattern follows this commit block)
* Implement in accuweather
* Raise if config entry not set
* Move accuweather models
* Fix gogogate2
* Fix rainforest_raven
* init
* Update homeassistant/helpers/update_coordinator.py
Co-authored-by: Joost Lekkerkerker <joostlek@outlook.com>
* fix typo, ruff
* consistency with rest, test
* pylint suppression
* ruff
* ruff
* switch to one test
* add last exc
* add tests for auth & Entry Errors
* move exceptions to correct test
* Update update_coordinator.py
Co-authored-by: G Johansson <goran.johansson@shiftit.se>
* test setup call
* simplify
---------
Co-authored-by: Joost Lekkerkerker <joostlek@outlook.com>
Co-authored-by: G Johansson <goran.johansson@shiftit.se>
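A minimal, framework-free sketch of the pattern described above: the coordinator takes its config entry explicitly and raises if the entry is needed but was never set. The names (`ConfigEntry`, `BaseCoordinator`, `ExampleCoordinator`, `_require_entry`) are illustrative only, not the actual Home Assistant helper code.

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class ConfigEntry:
    """Stand-in for a config entry carrying integration data."""

    entry_id: str
    data: dict = field(default_factory=dict)


class BaseCoordinator:
    """Stand-in for the data update coordinator base class."""

    def __init__(self, *, config_entry: ConfigEntry | None = None) -> None:
        self.config_entry = config_entry

    def _require_entry(self) -> ConfigEntry:
        # Raise if the config entry was not passed in explicitly.
        if self.config_entry is None:
            raise RuntimeError("This coordinator requires a config entry, but none was set")
        return self.config_entry


class ExampleCoordinator(BaseCoordinator):
    """Integration-style subclass that relies on the entry being present."""

    def api_key(self) -> str:
        return self._require_entry().data["api_key"]


coordinator = ExampleCoordinator(config_entry=ConfigEntry("abc123", {"api_key": "secret"}))
print(coordinator.api_key())
```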
* Fix race in TimestampDataUpdateCoordinator
The last_update_success_time value was set after the listeners were fired,
which could lead to a loop: a listener might think the data is still stale
and re-trigger an update.
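A small illustrative sketch of the fix (`TimestampCoordinatorSketch` and its methods are hypothetical names, not the real class): stamp `last_update_success_time` before firing the listeners, so a staleness check running inside a listener cannot immediately re-trigger a refresh.

```python
from __future__ import annotations

import time
from typing import Any, Callable


class TimestampCoordinatorSketch:
    """Illustrative coordinator that tracks when the last refresh succeeded."""

    def __init__(self, stale_after: float = 60.0) -> None:
        self.data: Any = None
        self.last_update_success_time: float | None = None
        self.stale_after = stale_after
        self._listeners: list[Callable[[], None]] = []

    def add_listener(self, listener: Callable[[], None]) -> None:
        self._listeners.append(listener)

    def refresh(self) -> None:
        self.data = {"value": 42}  # placeholder for the real update
        # Fix: stamp the success time first ...
        self.last_update_success_time = time.monotonic()
        # ... and only then fire the listeners, so is_stale() already sees the
        # new stamp and a listener cannot schedule another refresh in a loop.
        for listener in self._listeners:
            listener()

    def is_stale(self) -> bool:
        return (
            self.last_update_success_time is None
            or time.monotonic() - self.last_update_success_time > self.stale_after
        )
```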
* coverage
* docstring
* Phase out periodic tasks
* false by default or some tests will block forever; will need to fix each one manually
* kwarg works
* kwarg works
* kwarg works
* fixes
* fix more tests
* fix more tests
* fix lifx
* opensky
* pvpc_hourly_pricing
* adjust more
* adjust more
* smarttub
* adjust more
* adjust more
* adjust more
* adjust more
* adjust
* no eager executor
* zha
* qnap_qsw
* fix more
* fix fix
* docs
* it's a wrapper now
* add more coverage
* coverage
* cover all combos
* more fixes
* more fixes
* more fixes
* remaining issues are legit bugs in tests
* make tplink test more predictable
* more fixes
* feedreader
* grind out some more
* make test race safe
* one more
* Schedule periodic coordinator updates as background tasks.
Currently, the coordinator's periodic refreshes delay startup because they are not scheduled as background tasks. If startup takes long enough for the first scheduled refresh to come due, startup waits on it, and on busy systems additional coordinators' scheduled refreshes fire as well, delaying startup further. This chain of events can make startup take so long that it hits the safety timeout because too many coordinators are refreshing.
This case can also happen with scheduled entity refreshes, but it's less common. A future PR will address that case.
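A plain-asyncio sketch of the idea, with hypothetical names (`TaskManagerSketch`, `create_background_task`, `wait_for_startup_tasks`): startup only waits for foreground tasks, while periodic refreshes go into a separate background set that startup never waits on.

```python
from __future__ import annotations

import asyncio
from typing import Any, Coroutine


class TaskManagerSketch:
    """Tracks foreground tasks separately from background (periodic) ones."""

    def __init__(self) -> None:
        self._foreground: set[asyncio.Task] = set()
        self._background: set[asyncio.Task] = set()

    def create_task(self, coro: Coroutine[Any, Any, Any]) -> asyncio.Task:
        # Startup waits for these.
        task = asyncio.get_running_loop().create_task(coro)
        self._foreground.add(task)
        task.add_done_callback(self._foreground.discard)
        return task

    def create_background_task(self, coro: Coroutine[Any, Any, Any]) -> asyncio.Task:
        # Periodic coordinator refreshes land here; startup never waits on them.
        task = asyncio.get_running_loop().create_task(coro)
        self._background.add(task)
        task.add_done_callback(self._background.discard)
        return task

    async def wait_for_startup_tasks(self) -> None:
        # Only foreground tasks gate startup.
        while self._foreground:
            await asyncio.gather(*self._foreground, return_exceptions=True)
```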
* periodic_tasks
* periodic_tasks
* periodic_tasks
* merge
* merge
* merge
* merge
* merge
* fix test that calls the sync API from async
* one more place
* cannot chain
* async_run_periodic_hass_job
* sun and pattern time changes from automations also block startup
* Revert "sun and pattern time changes from automations also block startup"
This reverts commit 6de2defa05.
* make sure polling is cancelled when config entry is unloaded
* Revert "Revert "sun and pattern time changes from automations also block startup""
This reverts commit e8f12aad55.
* remove DisabledError from homewizard test as it relies on a race
* fix race
* direct coverage
* Use an eager task in the update coordinator scheduled refresh
We have a lot of places that will not suspend because the refresh function
decides it does not need to update. Currently these have to be scheduled
on the event loop even though they are a no-op.
Since _handle_refresh_interval is subclassed in some integrations, I created
a dunder wrapper function to avoid integrations subclassing it.
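A sketch of the dunder-wrapper approach with illustrative names; Python 3.12's `asyncio.eager_task_factory` stands in for the internal eager-task helper here, so this is an assumption-laden illustration rather than the real coordinator code.

```python
import asyncio


class CoordinatorSketch:
    """Illustrative coordinator whose refresh often has nothing to do."""

    needs_update = False

    async def _handle_refresh_interval(self) -> None:
        # Integrations may override this; it frequently returns without awaiting.
        if not self.needs_update:
            return
        await asyncio.sleep(0)  # placeholder for a real refresh

    def __schedule_refresh_interval(self) -> None:
        # Name-mangled wrapper: how the refresh is scheduled stays out of
        # subclasses, which only ever override _handle_refresh_interval.
        asyncio.get_running_loop().create_task(self._handle_refresh_interval())

    def schedule_refresh(self) -> None:
        self.__schedule_refresh_interval()


async def main() -> None:
    # Python 3.12+: eager tasks run the coroutine immediately, so a refresh that
    # decides it has nothing to do never has to be queued on the event loop.
    asyncio.get_running_loop().set_task_factory(asyncio.eager_task_factory)
    CoordinatorSketch().schedule_refresh()


asyncio.run(main())
```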
* fix time fires outside of patch
* Convert debouncer async_shutdown to be a normal function
Nothing was being awaited here, and the shutdown call was only used
in integrations marked internal and in other internals. It's possible
that a custom component was using the method, but that seemed uncommon
enough that it did not warrant marking this as a breaking change.
The update coordinator no longer awaits anything in async_shutdown
either, but it seemed likely that this one would get subclassed.
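An illustrative stand-in (not the real `Debouncer`) showing the shape of the change: nothing in shutdown awaits, so it becomes a plain synchronous method that cancels the pending timer and clears state, and callers drop the `await`.

```python
import asyncio
from typing import Callable, Optional


class DebouncerSketch:
    """Illustrative debouncer that delays calls by a cooldown period."""

    def __init__(self, cooldown: float, function: Callable[[], None]) -> None:
        self.cooldown = cooldown
        self.function: Optional[Callable[[], None]] = function
        self._timer_handle: Optional[asyncio.TimerHandle] = None

    def schedule_call(self) -> None:
        if self._timer_handle is None and self.function is not None:
            loop = asyncio.get_running_loop()
            self._timer_handle = loop.call_later(self.cooldown, self.function)

    def shutdown(self) -> None:
        # Previously an async def that awaited nothing; as a normal function
        # it just cancels the pending timer and clears state.
        if self._timer_handle is not None:
            self._timer_handle.cancel()
            self._timer_handle = None
        self.function = None
```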
* fix
* Avoid useless time fetch in DataUpdateCoordinator
Since we used the async_call_at helper, it would always call dt_util.utcnow()
to feed _handle_refresh_interval, which threw the value away. This meant we
fetched the time twice as often as needed for each update.
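A rough sketch of the optimization, assuming nothing about the real helper's signature (the `schedule_refresh` function here is hypothetical): compute the next fire time once and schedule the handler directly, instead of going through a wrapper that fetches the current time again only to have the handler discard it.

```python
import asyncio


def schedule_refresh(
    loop: asyncio.AbstractEventLoop, interval: float, refresh
) -> asyncio.TimerHandle:
    # The next fire time is computed once from the loop's clock; nothing fetches
    # the current time again just to hand the handler a value it ignores.
    return loop.call_at(loop.time() + interval, refresh)


async def main() -> None:
    loop = asyncio.get_running_loop()
    done = asyncio.Event()
    schedule_refresh(loop, 0.01, done.set)
    await done.wait()


asyncio.run(main())
```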
* tweak
* compat
* adjust comment
* Avoid looking up the callable type for HassJob when we already know it
When the frontend connects, we call async_listen with run_immediately MANY
times even though we already know the job type (it will always be a callback).
Passing the known type avoids the lookup and reduces the latency to get the frontend going.
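A framework-free sketch of the idea (`JobSketch`, `JobType`, and `_classify` are hypothetical names, not the real `HassJob` API): the wrapper introspects its target only when the caller does not pass an explicit job type, so hot paths that already know they have a callback skip the lookup.

```python
from __future__ import annotations

import asyncio
import enum
from typing import Any, Callable


class JobType(enum.Enum):
    CALLBACK = "callback"
    COROUTINE_FUNCTION = "coroutine_function"
    EXECUTOR = "executor"


class JobSketch:
    """Wraps a target callable together with how it should be invoked."""

    __slots__ = ("target", "job_type")

    def __init__(
        self, target: Callable[..., Any], job_type: JobType | None = None
    ) -> None:
        self.target = target
        # Only introspect the target when the caller did not tell us the type.
        self.job_type = job_type if job_type is not None else self._classify(target)

    @staticmethod
    def _classify(target: Callable[..., Any]) -> JobType:
        if asyncio.iscoroutinefunction(target):
            return JobType.COROUTINE_FUNCTION
        if getattr(target, "_is_callback", False):
            return JobType.CALLBACK
        return JobType.EXECUTOR


# Frontend-listener style usage: the type is known up front, so no lookup happens.
job = JobSketch(lambda event: None, job_type=JobType.CALLBACK)
```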
* missing coverage