Co-authored-by: Sid <27780930+autinerd@users.noreply.github.com>
Co-authored-by: Marc Mueller <30130371+cdce8p@users.noreply.github.com>
Co-authored-by: J. Nick Koston <nick@koston.org>
* Move setup time logging into the context manager
We were fetching the time twice, but since the context
manager already tracks the timing, move the logging there
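A minimal sketch of the pattern (names such as `setup_timer` and `durations` are illustrative, not the actual Home Assistant helper): the context manager owns the clock reads, so the caller no longer fetches the time separately for logging.

```python
# Illustrative sketch only: the context manager records the elapsed time itself,
# so callers do not read the clock a second time just to log it.
import contextlib
import time
from collections.abc import Iterator


@contextlib.contextmanager
def setup_timer(domain: str, durations: dict[str, float]) -> Iterator[None]:
    """Time a setup block and record the duration keyed by domain."""
    start = time.monotonic()
    try:
        yield
    finally:
        # Store/log from inside the manager instead of re-reading the clock outside.
        durations[domain] = time.monotonic() - start
```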
* remove log setup assertions from integration test
* tweak logging to give us better data for tracking issues
* redundant
* adjust
* preen
* fixes
* adjust
* make api change internal so nobody uses it
* coverage
* fix test
* fix more tests
* coverage
* more tests assuming internal calls
* fix more
* adjust
* adjust
* fix axis tests
* fix broadlink -- it does not call async_forward_entry_setup
* missed some
* remove useless patch
* rename, detect it both ways
* clear
* debug
* try to fix
* handle phase finishing out while paused
* where it's set does not need to know it's late, as that is an implementation detail of setup
* where it's set does not need to know it's late, as that is an implementation detail of setup
* tweak
* simplify
* reduce complexity
* revert order change as it makes review harder
* revert naming changes as it makes review harder
* improve comment
* improve debug
* late dispatch test
* test the other way as well
* Update setup.py
* Update setup.py
* Update setup.py
* simplify
* reduce
* Avoid pre-importing config flows if the integration does not support migration
Currently we pre-import the config flow module if it exists, since
setting up the config entry requires comparing the versions found
in config_flow.py. We can avoid the pre-import if the integration
does not implement async_migrate_entry, which means we avoid loading
many config flows into memory at startup.
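A hedged sketch of the idea (function and module names are illustrative, not the real loader code): only pre-import `config_flow.py` when the component module actually defines `async_migrate_entry`.

```python
# Illustrative sketch: skip loading config_flow.py at startup when the
# integration cannot migrate entries, since the version comparison is
# only needed for migration.
import importlib
from types import ModuleType


def maybe_import_config_flow(component: ModuleType, domain: str) -> ModuleType | None:
    """Import <domain>.config_flow only if entry migration is possible."""
    if not hasattr(component, "async_migrate_entry"):
        # No migration support: no version comparison needed, so do not
        # pull the config flow module into memory at startup.
        return None
    return importlib.import_module(f"homeassistant.components.{domain}.config_flow")
```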
* cover
* fix missing block
* do not call directly
* it's too fast now, the test gets further along
* Update homeassistant/loader.py
* Phase out periodic tasks
* false by default, or some tests will block forever; will need to fix each one manually
* kwarg works
* kwarg works
* kwarg works
* fixes
* fix more tests
* fix more tests
* fix lifx
* opensky
* pvpc_hourly_pricing
* adjust more
* adjust more
* smarttub
* adjust more
* adjust more
* adjust more
* adjust more
* adjust
* no eager executor
* zha
* qnap_qsw
* fix more
* fix fix
* docs
* it's a wrapper now
* add more coverage
* coverage
* cover all combos
* more fixes
* more fixes
* more fixes
* remaining issues are legit bugs in tests
* make tplink test more predictable
* more fixes
* feedreader
* grind out some more
* make test race safe
* one more
* Schedule periodic coordinator updates as background tasks.
Currently, the coordinator's periodic refreshes delay startup because they are not scheduled as background tasks. If startup takes long enough to reach a coordinator's first scheduled refresh, startup waits on it, and on busy systems more coordinators' scheduled refreshes fire, delaying startup further. This chain of events results in startup taking a long time and hitting the safety timeout because too many coordinators are refreshing.
The same problem can occur with scheduled entity refreshes, but it's less common. A future PR will address that case.
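The gist, as a self-contained asyncio sketch (class and method names are illustrative; Home Assistant's own task helpers differ in detail): periodic refreshes go into a "background" bucket that startup never awaits, while tracked startup tasks still gate it.

```python
# Illustrative sketch: tracked tasks block startup, background tasks do not.
import asyncio


class MiniScheduler:
    def __init__(self) -> None:
        self._startup_tasks: set[asyncio.Task] = set()
        self._background_tasks: set[asyncio.Task] = set()

    def create_task(self, coro) -> asyncio.Task:
        """Tracked task: startup waits for these to finish."""
        task = asyncio.get_running_loop().create_task(coro)
        self._startup_tasks.add(task)
        task.add_done_callback(self._startup_tasks.discard)
        return task

    def create_background_task(self, coro) -> asyncio.Task:
        """Background task: kept referenced, but never awaited during startup."""
        task = asyncio.get_running_loop().create_task(coro)
        self._background_tasks.add(task)
        task.add_done_callback(self._background_tasks.discard)
        return task

    async def async_block_till_startup_done(self) -> None:
        # Only the tracked tasks gate startup; periodic refreshes do not.
        while self._startup_tasks:
            await asyncio.gather(*self._startup_tasks, return_exceptions=True)
```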
* periodic_tasks
* periodic_tasks
* periodic_tasks
* merge
* merge
* merge
* merge
* merge
* fix test that calls the sync api from async
* one more place
* cannot chain
* async_run_periodic_hass_job
* sun and pattern time changes from automations also block startup
* Revert "sun and pattern time changes from automations also block startup"
This reverts commit 6de2defa05.
* make sure polling is cancelled when config entry is unloaded
* Revert "Revert "sun and pattern time changes from automations also block startup""
This reverts commit e8f12aad55.
* remove DisabledError from homewizard test as it relies on a race
* fix race
* direct coverage
Some of the data we had to search for was already available
in a dict or underlying data structure. Expose it
instead of having to rebuild it every time.
There are more places this can be used, but I only did
the device registry cleanup for now
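A hypothetical sketch of the pattern (the registry and field names are made up for illustration): read the index the registry already maintains instead of rebuilding the same mapping on every cleanup pass.

```python
# Illustrative sketch: keep an index up to date as devices change, then let
# cleanup read it directly instead of scanning every device each time.
from dataclasses import dataclass, field


@dataclass
class Device:
    device_id: str
    config_entry_id: str


@dataclass
class MiniDeviceRegistry:
    devices: dict[str, Device] = field(default_factory=dict)
    _entry_index: dict[str, set[str]] = field(default_factory=dict)

    def async_add(self, device: Device) -> None:
        self.devices[device.device_id] = device
        self._entry_index.setdefault(device.config_entry_id, set()).add(device.device_id)

    def devices_for_entry(self, entry_id: str) -> set[str]:
        return self._entry_index.get(entry_id, set())
```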
Because setting up again was scheduled as a task, it would
not unset self._async_cancel_retry_setup in time, and we would
try to unsub self._async_cancel_retry_setup after it had already
fired. Change it to call a callback that runs right away, so it
unsets self._async_cancel_retry_setup as soon as it's called
and there is no race.
fixes #111796
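An illustrative sketch of the race and the fix (class and attribute names are hypothetical, not the config entry code): the timer callback runs synchronously, unsets the cancel handle first, and only then kicks off the setup task, so nothing can try to cancel a handle that already fired.

```python
# Illustrative sketch: unset the retry-cancel handle inside a synchronous
# callback before starting the setup task, instead of inside a scheduled task.
import asyncio


class RetryingEntry:
    def __init__(self) -> None:
        self._async_cancel_retry_setup = None  # set while a retry is scheduled

    def _schedule_retry(self, loop: asyncio.AbstractEventLoop, delay: float) -> None:
        def _retry_now() -> None:
            # Runs right away in the timer callback: clear the handle first,
            # then create the setup task.
            self._async_cancel_retry_setup = None
            loop.create_task(self._async_setup())

        handle = loop.call_later(delay, _retry_now)
        self._async_cancel_retry_setup = handle.cancel

    async def _async_setup(self) -> None:
        ...
```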
* Always allow ignore and unignore flows for single config entry integrations
* Update tests/test_config_entries.py
Co-authored-by: Martin Hjelmare <marhje52@gmail.com>
---------
Co-authored-by: Martin Hjelmare <marhje52@gmail.com>
* Allow setting whether we support multiple config entries in config flow
* Move property to config flow instead of flow handler
* Move marking an integration as single instance only to manifest
* Revert line remove
* Avoid initializing a config flow or adding a new entry for a single-instance integration that already has an entry
* Revert changes in test
* Process code review comments
* Apply code review suggestion
* Avoid circular import in Storage.async_delay_save
We call Storage.async_delay_save for every entity being added or removed
from the registry. The late import took more time than everything else
in the function.
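A hypothetical illustration of why the late import hurt (using `json` as a stand-in for the helper that was imported inside the function): even a cached import statement in a hot path pays import-machinery and lookup costs on every call, which adds up when the function runs for every registry change. The actual change restructured the imports so the module can be imported at the top level without the circular dependency.

```python
# Illustrative micro-benchmark: per-call import vs. top-level import in a hot function.
import timeit


def delay_save_with_late_import() -> None:
    from json import dumps  # stand-in for the helper that used to be imported late
    dumps({})


from json import dumps  # hoisted: resolved once at module load


def delay_save_with_top_level_import() -> None:
    dumps({})


if __name__ == "__main__":
    print("late import:", timeit.timeit(delay_save_with_late_import, number=100_000))
    print("top-level:  ", timeit.timeit(delay_save_with_top_level_import, number=100_000))
```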
* Avoid reschedule churn in Storage.async_delay_save
When we are adding or removing entities we call async_delay_save
quite often, and each call has to add and remove a TimerHandle on
the event loop, which can add up when a lot of registry items are changing.
If the timer handle still has 80% of its time remaining
we avoid rescheduling and let it fire at the time the
original async_delay_save call scheduled. This ensures we
do not force the event loop to rebuild its heapq because
too many timer handles were cancelled at once
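A sketch of the reschedule guard (a minimal illustration, not the Storage code itself): keep the existing timer if at least 80% of the requested delay remains, otherwise cancel and reschedule.

```python
# Illustrative sketch: bursts of calls reuse the existing TimerHandle instead
# of cancelling and recreating it every time.
import asyncio


class DelaySaver:
    def __init__(self) -> None:
        self._timer: asyncio.TimerHandle | None = None
        self._deadline: float = 0.0

    def async_delay_save(self, save_cb, delay: float) -> None:
        loop = asyncio.get_running_loop()
        now = loop.time()
        if self._timer is not None:
            remaining = self._deadline - now
            # Guard delay == 0 to avoid a divide-by-zero (presumably the "div0" fix below).
            if delay > 0 and remaining / delay >= 0.8:
                return  # close enough to the original deadline; let it fire
            self._timer.cancel()
        self._deadline = now + delay
        self._timer = loop.call_later(delay, save_cb)
```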
* div0
* add coverage for 0 since we had none
* fix bad conflict
* tweaks
* tweaks
* tweaks
* tweaks
* tweaks
* tweaks
* more test fixes
* mqtt tests rely on event loop overhead
* Convert debouncer async_shutdown to be a normal function
nothing was being awaited here, and the shutdown call was only used
in integrations marked internal and in other internals. It's possible
that a custom component might have been using the method, but it
seemed uncommon enough that it did not warrant marking this as a breaking
change. The update coordinator is no longer awaiting anything in
async_shutdown either now, but it seemed likely that this use
would get subclassed.
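An illustrative before/after (a sketch, not the actual Debouncer code): since nothing in shutdown awaits, it can be a plain synchronous method and call sites no longer need to await it.

```python
# Illustrative sketch of converting an async shutdown into a normal function.
import asyncio


class MiniDebouncer:
    def __init__(self) -> None:
        self._timer: asyncio.TimerHandle | None = None
        self._shutdown_requested = False

    # before:
    # async def async_shutdown(self) -> None:
    #     self._shutdown_requested = True
    #     if self._timer:
    #         self._timer.cancel()

    # after: same body, no coroutine overhead, no await at call sites.
    def async_shutdown(self) -> None:
        self._shutdown_requested = True
        if self._timer:
            self._timer.cancel()
```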
* fix
* Add async_schedule_reload helper to the ConfigEntries manager
We have cases where the setup retry kicks in right before
the reload happens, causing the reload to fail with
OperationNotAllowed. async_schedule_reload will
cancel the setup retry before the async_reload task
is created to avoid this problem.
I updated a few integrations that were most likely
to have this problem. Future PRs will do a more
extensive audit.
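A sketch of the ordering the helper enforces (hypothetical manager code, not the real ConfigEntries class): cancel any pending setup retry synchronously, then create the reload task, so the retry cannot fire in the gap and make the reload hit OperationNotAllowed.

```python
# Illustrative sketch: cancel-then-schedule, with no await in between.
import asyncio


class MiniConfigEntries:
    def __init__(self) -> None:
        self._retry_handles: dict[str, asyncio.TimerHandle] = {}

    def async_schedule_reload(self, entry_id: str) -> asyncio.Task:
        if (handle := self._retry_handles.pop(entry_id, None)) is not None:
            handle.cancel()  # the retry can no longer race the reload
        return asyncio.get_running_loop().create_task(self.async_reload(entry_id))

    async def async_reload(self, entry_id: str) -> None:
        ...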
* coverage
* revert for now since this needs more refactoring in a followup
* cover
* cleanup and fixes
* Don't blow up if config entries have unhashable unique IDs
* Add test
* Add comment on when we remove the guard
* Don't stringify hashable non-string unique_id
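A hypothetical guard illustrating the fix (the function is made up for the sketch): only fall back to stringifying when the unique_id is actually unhashable, and keep hashable non-string IDs as-is.

```python
# Illustrative sketch: tolerate unhashable unique IDs instead of blowing up.
from collections.abc import Hashable
from typing import Any


def index_key(unique_id: Any) -> Any:
    if isinstance(unique_id, Hashable):
        # Hashable non-string IDs are kept as-is rather than stringified.
        return unique_id
    # Temporary guard for misbehaving integrations; slated for removal later.
    return str(unique_id)
```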
* Add helper function to update and reload config entry to config flow
* Use async_create_task
* Remove await
* Reload only when updated & add task name
* Rename function
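A hedged sketch of what such a helper does (names and signature are illustrative, not necessarily the final Home Assistant API): update the entry, schedule a named reload task only when the data actually changed, and do not await the reload inside the flow.

```python
# Illustrative sketch: reload only on a real update, via a named task.
import asyncio
from dataclasses import dataclass, field
from typing import Any


@dataclass
class MiniEntry:
    entry_id: str
    data: dict[str, Any] = field(default_factory=dict)


def async_update_and_reload(entry: MiniEntry, new_data: dict[str, Any]) -> asyncio.Task | None:
    if entry.data == new_data:
        return None  # nothing changed, skip the reload entirely
    entry.data = new_data
    return asyncio.get_running_loop().create_task(
        _async_reload(entry), name=f"config entry reload {entry.entry_id}"
    )


async def _async_reload(entry: MiniEntry) -> None:
    ...
```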
* Improve performance of abort_entries_match
In #90406 a ChainMap was added whose __iter__
and __contains__ end up creating temp dicts
for matching
174e9da083/Lib/collections/__init__.py (L1022)
We can avoid this by removing the ChainMap, since there
are only two mappings to match on.
This also means options no longer obscures data
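A sketch of the faster matching (illustrative, not the exact abort_entries_match code): check data and options directly instead of going through a ChainMap, which also keeps options from shadowing keys in data.

```python
# Illustrative sketch: match against the two mappings directly.
from typing import Any


def entry_matches(
    data: dict[str, Any], options: dict[str, Any], match: dict[str, Any]
) -> bool:
    return all(
        (key in data and data[key] == value)
        or (key in options and options[key] == value)
        for key, value in match.items()
    )
```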
* adjust comment