* Move backup/* WS commands to the backup integration
* Call correct command
* Use debug for logging
* Remove assertion of hass.data for setup test
* parametrize token fixture
* Refactor integration startup time tracking to reduce overhead
- Use monotonic time for watching integration startup time: it avoids incorrect values if time moves backwards (e.g. due to NTP adjustments during startup) and removes many time conversions, since we want durations in seconds and not local time
- Use loop scheduling instead of a task
- Move all the dispatcher logic into the new _WatchPendingSetups
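A minimal sketch of the two ideas above (monotonic timing and loop scheduling); the names and structure here are illustrative, not the actual _WatchPendingSetups implementation:

```python
import time

# Track setup durations with a monotonic clock so NTP adjustments during
# startup cannot skew the values, and no local-time conversion is needed.
setup_started: dict[str, float] = {}

def start_timer(domain: str) -> None:
    """Record when a domain begins setting up."""
    setup_started[domain] = time.monotonic()

def elapsed(domain: str) -> float:
    """Return how many seconds a domain has been setting up."""
    return time.monotonic() - setup_started[domain]

# Loop scheduling instead of a task: re-arm a timer handle to check pending
# setups periodically rather than keeping a long-lived asyncio task alive.
# handle = loop.call_later(60, _check_pending)
```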
* websocket as well
* tweaks
* simplify logic
* preserve logic
* preserve logic
* lint
* adjust
* Significantly reduce executor contention during bootstrap
At startup we have a thundering herd wanting to use the executor
to load manifest.json. Since we know which integrations we are
about to load in each resolver step, group the manifest loads
into single executor jobs by calling async_get_integrations on
the deps of the integrations after they are resolved.
In practice this reduced the number of executor jobs
by 80% during bootstrap.
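A hedged sketch of the batching described above: resolve all of an integration's dependency manifests in one async_get_integrations call instead of one executor job per dependency. The wrapper itself is illustrative.

```python
from homeassistant.loader import async_get_integrations

async def _load_dep_manifests(hass, integration):
    """Load all dependency manifests of one integration in a single batch.

    Illustrative wrapper: async_get_integrations groups the manifest.json
    reads into one executor job instead of one job per dependency.
    """
    domains = [*integration.dependencies, *integration.after_dependencies]
    # Maps each domain to an Integration, or to the exception raised while
    # trying to load it.
    return await async_get_integrations(hass, domains)
```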
* merge
* naming
* tweak
* tweak
* not enough contention to be worth it there
* refactor to avoid waiting
* refactor to avoid waiting
* tweaks
* tweaks
* tweak
* background is fine
* comment
* Drop use of regex in helpers.extract_domain_configs
* Update test
* Revert test update
* Add domain_from_config_key helper
* Add validator
* Address review comment
* Update snapshots
* Inline domain_from_config_key in validator
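A minimal sketch of what a regex-free domain_from_config_key can look like; the exact implementation in the helper and the validator wiring are assumptions.

```python
def domain_from_config_key(key: str) -> str:
    """Extract the domain from a config key such as "light 2" (illustrative).

    Config keys may carry an optional suffix after the first space, so a
    simple partition on the first space replaces the previous regex.
    """
    return key.partition(" ")[0]

assert domain_from_config_key("light 2") == "light"
assert domain_from_config_key("sensor") == "sensor"
```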
* Migrate restore_state helper to use registry loading pattern
As more entities have started using restore_state over time, it
has become a startup bottleneck: each entity being added creates
a task to load restore state data that is already loaded,
since it is a singleton.
We now use the same pattern as the registry helpers.
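A hedged sketch of the registry-style pattern referred to above: the restore state data is loaded once during startup and entities read the already-loaded singleton instead of each creating a load task. The reader helper below is hypothetical.

```python
DATA_RESTORE_STATE = "restore_state"  # singleton key in hass.data (name per the rename below)

async def async_load(hass) -> None:
    """Load the restore state data exactly once, during startup."""
    hass.data[DATA_RESTORE_STATE] = await _async_read_stored_states(hass)  # hypothetical reader

def async_get(hass):
    """Return the already-loaded singleton without spawning a task."""
    return hass.data[DATA_RESTORE_STATE]
```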
* fix refactoring error -- guess I am tired
* fixes
* fix tests
* fix more
* fix more
* fix zha tests
* fix zha tests
* comments
* fix error
* add missing coverage
* s/DATA_RESTORE_STATE_TASK/DATA_RESTORE_STATE/g
The default behavior of these warnings is to go to stderr, which in
some setups easily goes unnoticed. For example, in Docker-based setups
they end up only in the container logs, and not e.g. in the HA log.
Capture these warnings to make them available in the logs where other
such messages are, and to make them subject to filtering as usual.
https://docs.python.org/3/library/logging.html#logging.captureWarnings
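The standard-library call referenced above, shown in isolation:

```python
import logging
import warnings

# Route warnings.warn() output through the "py.warnings" logger so it shows
# up in the normal log handlers and is subject to the usual filtering.
logging.captureWarnings(True)

warnings.warn("this now lands in the log instead of only on stderr")
```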
* Do not wait
* Correct tests
* Manage after dependencies stage 1
* test bootstrap dependencies
* Assert the log shows the dependency is waited for
* Improve docstrings
* Assert outside callback
* Patch async_get_integrations
* Revert changes made to snips integration
* Undo changes to mqtt_statestream
* Fix memory churn in state templates
The LRU for state templates was limited to 512 states. As soon
as it was exhausted, system performance would tank because each
template that iterated all states had to create and then GC every
state beyond the 512 limit.
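An illustrative (not actual) sketch of why a bounded LRU churns once a template touches more states than the bound: each full pass over all states evicts and recreates the overflow entries.

```python
from collections import OrderedDict

class BoundedStateCache:
    """Toy LRU used only to illustrate the churn described above."""

    def __init__(self, maxsize: int = 512) -> None:
        self._entries: OrderedDict[str, object] = OrderedDict()
        self._maxsize = maxsize

    def get_or_create(self, entity_id: str, factory):
        if entity_id in self._entries:
            self._entries.move_to_end(entity_id)
            return self._entries[entity_id]
        value = factory()  # a new wrapper the GC must later collect
        self._entries[entity_id] = value
        if len(self._entries) > self._maxsize:
            self._entries.popitem(last=False)  # evict the least recently used
        return value
```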
* does it scale?
* avoid copy on all
* comment
* preen
* cover
* cover
* comments
* comments
* comments
* preen
* preen
* Adds a loader to enable jinja imports.
* Switch to in-memory
* Move loading custom_jinja off of the event loop
* Raise TemplateNotFound if template doesn't exist
* Fix docstring
* Adds a service to reload custom jinja
* Remove IO from test setup
* Improve coverage and small refactor
* Incorporate feedback and use .jinja extension
* Check the loaded sources in test.
* Incorporate PR feedback.
* Update homeassistant/helpers/template.py
Co-authored-by: Erik Montnemery <erik@montnemery.com>
---------
Co-authored-by: Erik Montnemery <erik@montnemery.com>
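A hedged sketch of an in-memory Jinja loader along the lines described above, assuming the .jinja sources were already read off the event loop into a dict; the class name is illustrative.

```python
from jinja2 import BaseLoader, TemplateNotFound

class InMemoryJinjaLoader(BaseLoader):
    """Serve user-provided .jinja sources from memory (illustrative)."""

    def __init__(self, sources: dict[str, str]) -> None:
        self._sources = sources

    def get_source(self, environment, template):
        if template not in self._sources:
            raise TemplateNotFound(template)
        # A constant uptodate callable means templates only change when the
        # sources dict itself is replaced, e.g. by the reload service.
        return self._sources[template], template, lambda: True
```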
* Allow configuring country and language in core config
* Add script for updating list of countries
* Use black for formatting
* Fix quoting
* Move country codes to a separate file
* Address review comments
* Add generated/countries.py
* Get default language from owner account
* Remove unused variable
* Add script to generate list of supported languages
* Add tests
* Fix stale docstring
* Use format_python_namespace
* Correct async_user_store
* Improve typing
* Fix with_store decorator
* Initialize language in core store migration
* Fix startup
* Tweak
* Apply suggestions from code review
Co-authored-by: Franck Nijhof <git@frenck.dev>
* Update storage.py
Co-authored-by: Franck Nijhof <git@frenck.dev>
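A minimal sketch of a country validator against the generated list; the COUNTRIES name and module path are assumptions based on the commits above.

```python
import voluptuous as vol

# Assumed to be a set of ISO 3166-1 alpha-2 codes generated by the script.
from homeassistant.generated.countries import COUNTRIES

def validate_country(value: str) -> str:
    """Validate a configured country code (illustrative)."""
    if value not in COUNTRIES:
        raise vol.Invalid(f"Unknown country code: {value}")
    return value
```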
Prime platform.uname cache at startup to fix blocking subprocess
- Multiple modules check platform.uname()[0] at startup, which
  does a blocking subprocess call. We can avoid this happening
  in the event loop and disrupting startup stability by priming
  the cache ahead of time in the executor.
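A minimal sketch of the priming idea: calling platform.uname() once in the executor fills its internal cache, so later calls from the event loop return instantly.

```python
import platform

async def _async_prime_uname_cache(hass) -> None:
    # platform.uname() caches its result after the first call, so running it
    # in an executor thread keeps the blocking work off the event loop.
    await hass.async_add_executor_job(platform.uname)
```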
* Limit concurrency of async_get_integration to avoid creating extra threads
Since async_get_integration is waiting on the disk most of the time,
it would end up creating many new threads because the disk could
not deliver the data in time.
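A hedged sketch of one way to cap the concurrency with a semaphore; the limit value and wrapper are illustrative, not the actual implementation.

```python
import asyncio

from homeassistant.loader import async_get_integration

# While manifest loads wait on the disk, the default executor would otherwise
# keep spawning new threads for each pending load; cap them instead.
_LOAD_LIMITER = asyncio.Semaphore(10)  # illustrative limit

async def _limited_get_integration(hass, domain):
    async with _LOAD_LIMITER:
        return await async_get_integration(hass, domain)
```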
* pylint
* Detect lingering threads after tests
* Make sure cast is set up before checking state
* Make sure we ask executors of old hass to shut down
We are not waiting here, just hoping for the best
* Make sure all instances of hass and their executors are stopped.
Co-authored-by: Paulus Schoutsen <balloob@gmail.com>
* Also apply hass stopping to scripts
* Adjust to changes in how we set up the executor
* Add new CoreState.stopped
Co-authored-by: Paulus Schoutsen <balloob@gmail.com>
* Part 1 of 2 (no breaking changes in part 1).
When integrations configured via the UI block startup or fail to start,
the webserver can remain offline, which makes it impossible
to recover without manually changing files in
.storage since the UI is not available.
This change is the foundation that part 2 will build on,
enabling a listener to start the webserver when the frontend
is finished loading.
Frontend Changes (home-assistant/frontend#6068)
* Address review comments
* bump timeout to 1800s, adjust comment
* bump timeout to 4h
* remove timeout failsafe
* and the test
* Periodically log when integrations are taking a while to set up
When one or more integrations are taking a while to set up,
it is hard to determine which one is the cause. We can
help narrow this down for the user with a periodic log
message about which domains are still waiting to be set up
every 30s.
* 30 -> 60 per discussion
* only log when the integration is actually doing setup
* reduce, fix race in test
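A hedged sketch of the periodic logging described above, using the 60 second interval settled on in the discussion; the names and structure are illustrative.

```python
import asyncio
import logging

_LOGGER = logging.getLogger(__name__)
LOG_SLOW_STARTUP_INTERVAL = 60  # seconds

async def _async_log_pending_setups(get_pending) -> None:
    """Log which domains are still setting up until none remain (illustrative)."""
    while True:
        await asyncio.sleep(LOG_SLOW_STARTUP_INTERVAL)
        pending = get_pending()
        if not pending:
            return
        _LOGGER.warning(
            "Waiting on integrations to complete setup: %s",
            ", ".join(sorted(pending)),
        )
```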