* Load pending state attributes and event data ids at startup
Since we queue all events to be processed after startup,
we can have a thundering herd of queries to prime the
LRUs of event data and state attribute ids. Since we
know we are about to process a chunk of events, we can
fetch all the ids in two queries.
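A minimal sketch of the two-query priming idea, assuming a hypothetical `state_attributes` table and cache dict (the same pattern would be repeated once for `event_data`, hence two queries; this is not the recorder's actual code):
```python
from sqlalchemy import Column, Integer, MetaData, String, Table, select

metadata = MetaData()
# Hypothetical stand-in for the recorder's state_attributes table.
state_attributes = Table(
    "state_attributes",
    metadata,
    Column("attributes_id", Integer, primary_key=True),
    Column("shared_attrs", String),
)

def prime_attributes_cache(conn, pending: set[str], cache: dict[str, int]) -> None:
    """Resolve every attribute blob we are about to record in ONE query
    instead of issuing one SELECT per cache miss."""
    rows = conn.execute(
        select(
            state_attributes.c.attributes_id, state_attributes.c.shared_attrs
        ).where(state_attributes.c.shared_attrs.in_(pending))
    )
    for attributes_id, shared_attrs in rows:
        cache[shared_attrs] = attributes_id
```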
* lru
* fix hang
* Fix recorder LRU being destroyed if event session is reopened
We would clear the LRU in _close_event_session, but
it would never get replaced with an LRU again, so
it would leak memory if the event session is reopened.
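A sketch of the shape of the fix, with hypothetical names (the `lru-dict` package provides the bounded mapping used here):
```python
from lru import LRU  # lru-dict package

class EventSessionLike:  # hypothetical shape, not the actual recorder class
    def __init__(self) -> None:
        self._state_attributes_ids = LRU(2048)

    def _close_event_session(self) -> None:
        # Clear the existing LRU in place so a reopened session keeps the
        # same bounded cache; the buggy version cleared it in a way that
        # never restored an LRU, so memory was no longer bounded.
        self._state_attributes_ids.clear()
```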
* cleanup
* Mark PostgreSQL range select as fast
Currently we were using the slow range select workaround for
PostgreSQL that was originally developed for MariaDB, but
it is actually slower on PostgreSQL.
fixes #83253
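One way to express "fast on PostgreSQL, workaround only for MariaDB/MySQL" is a small per-dialect capability check; this sketch is illustrative, not the recorder's actual code:
```python
from enum import Enum

class Dialect(Enum):
    MYSQL = "mysql"  # MariaDB reports itself as "mysql" to SQLAlchemy
    POSTGRESQL = "postgresql"

def range_select_is_fast(dialect: Dialect) -> bool:
    """Only MariaDB/MySQL needs the chunked range-select workaround;
    running it on PostgreSQL was actually slower than the plain query."""
    return dialect is not Dialect.MYSQL
```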
* Fix statistics_at_time query not using index
fixes #82411
* fix refactoring error
* fix query so SQLAlchemy does not get confused
* split it
* write as subquery
* reduce
* cleanup
* reduce
* Revert "reduce"
This reverts commit 43b4b55778.
* Remove default from created statistics schema
We were still inserting created times because, even though
None was explicitly passed when creating the object, the
column default would still be used
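A self-contained demonstration of the behavior the commit describes, on a toy model (not the actual statistics schema): with a column default in place, SQLAlchemy treats an explicit None as "no value" and applies the default at flush time.
```python
from datetime import datetime, timezone

from sqlalchemy import create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase):
    pass

class Stat(Base):  # toy model demonstrating the pitfall
    __tablename__ = "stat"
    id: Mapped[int] = mapped_column(primary_key=True)
    created: Mapped[datetime | None] = mapped_column(
        default=lambda: datetime.now(timezone.utc)
    )

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add(Stat(created=None))  # explicitly passing None...
    session.commit()
    # ...still stores a created time, because the default fired anyway.
    assert session.scalars(select(Stat)).one().created is not None
```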
* adjust column
* preserve original pre-SQLAlchemy 2.0 behavior
* Adjust size of recorder LRU based on number of entities
If there are a large number of entities, the cache would
get thrashed because more state attributes were being
recorded than the cache could hold. This meant we had
to go back to the database for lookups frequently when
an instance had more than 2048 entities that changed
frequently.
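A sketch of a sizing rule in that spirit; the floor of 2048 comes from the old fixed size, while the headroom multiplier here is purely illustrative:
```python
def attributes_lru_size(entity_count: int, floor: int = 2048) -> int:
    """Scale the recorder LRU with the entity count so busy instances do
    not thrash the cache; keep the old fixed size as a lower bound.
    The 2x headroom factor is an assumption, not the shipped constant."""
    return max(floor, entity_count * 2)
```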
* add a test
* do not actually record 4096 states
* patch target
* patch target
* Ensure doorbird always uses the internal URL
The DoorBird should always use the internal URL to
ensure the webhooks work. The DoorBird does not
verify SSL, so there is no concern about SSL matching,
according to the LAN-2-LAN API v0.32 (Dec 21 2022)
* adjust
* Update homeassistant/components/doorbird/__init__.py
* Only expose default cloud domains in default agent
* Copy exposed domain list to conversation
* Implement requested changes
* Add test for exposed devices/areas
* Use a set for config entries task tracking
* Allow adding background tasks to config entries
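These two pieces fit together; a sketch of the pattern with a hypothetical stand-in class, not the real ConfigEntry API (a set gives O(1) discard in the done callback, unlike list.remove):
```python
import asyncio
from collections.abc import Coroutine
from typing import Any

class ConfigEntryLike:  # hypothetical stand-in for ConfigEntry
    def __init__(self) -> None:
        self._tasks: set[asyncio.Task] = set()

    def async_create_background_task(
        self, coro: Coroutine[Any, Any, Any], name: str
    ) -> asyncio.Task:
        """Track a background task so it is neither GCed early nor leaked."""
        task = asyncio.get_running_loop().create_task(coro, name=name)
        self._tasks.add(task)
        # Drop the reference once the task finishes.
        task.add_done_callback(self._tasks.discard)
        return task
```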
* Add tests for config entry add tasks
* Update docstrings on core create task
* Migrate roon and august
* Use in more places
* Guard for None
* Fix dangling task for elkm1
* Update homeassistant/components/elkm1/__init__.py
Co-authored-by: J. Nick Koston <nick@koston.org>
---------
Co-authored-by: J. Nick Koston <nick@koston.org>
```
2023-02-16 20:44:54.516 ERROR (MainThread) [homeassistant.config_entries] Error setting up entry alexander for sense
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/aiohttp/connector.py", line 1152, in _create_direct_connection
hosts = await asyncio.shield(host_resolved)
File "/usr/local/lib/python3.10/site-packages/aiohttp/connector.py", line 1152, in _create_direct_connection
hosts = await asyncio.shield(host_resolved)
File "/usr/local/lib/python3.10/site-packages/aiohttp/connector.py", line 861, in _resolve_host
await event.wait()
File "/usr/local/lib/python3.10/site-packages/aiohttp/locks.py", line 34, in wait
raise self._exc
File "/usr/local/lib/python3.10/site-packages/aiohttp/connector.py", line 874, in _resolve_host
addrs = await self._resolver.resolve(host, port, family=self._family)
File "/usr/local/lib/python3.10/site-packages/aiohttp/resolver.py", line 33, in resolve
infos = await self._loop.getaddrinfo(
File "/usr/local/lib/python3.10/asyncio/base_events.py", line 860, in getaddrinfo
return await self.run_in_executor(
File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.10/socket.py", line 955, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -3] Try again
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/config_entries.py", line 381, in async_setup
result = await component.async_setup_entry(hass, self)
File "/usr/src/homeassistant/homeassistant/components/sense/__init__.py", line 83, in async_setup_entry
await gateway.get_monitor_data()
File "/usr/local/lib/python3.10/site-packages/sense_energy/asyncsenseable.py", line 214, in get_monitor_data
json = await self._api_call("app/monitors/%s/overview" % self.sense_monitor_id)
File "/usr/local/lib/python3.10/site-packages/sense_energy/asyncsenseable.py", line 174, in _api_call
async with self._client_session.get(
File "/usr/local/lib/python3.10/site-packages/aiohttp/client.py", line 1141, in __aenter__
self._resp = await self._coro
File "/usr/local/lib/python3.10/site-packages/aiohttp/client.py", line 536, in _request
conn = await self._connector.connect(
File "/usr/local/lib/python3.10/site-packages/aiohttp/connector.py", line 540, in connect
proto = await self._create_connection(req, traces, timeout)
File "/usr/local/lib/python3.10/site-packages/aiohttp/connector.py", line 901, in _create_connection
_, proto = await self._create_direct_connection(req, traces, timeout)
File "/usr/local/lib/python3.10/site-packages/aiohttp/connector.py", line 1166, in _create_direct_connection
raise ClientConnectorError(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host api.sense.com:443 ssl:default [Try again]
```
* Use blocking in service calls and verify result
* Block for 2 seconds and update states after
* Small timeout in service call to allow exceptions
* Move sun test
* Stop processing when we hit bad encryption
* Accept webhook payload that is a list
* Rename functions because we import them
* Revert a debug thing
---------
Co-authored-by: epenet <6771947+epenet@users.noreply.github.com>
* Remove profiler.memory service
guppy3 is not Python 3.11 compatible
https://github.com/zhuyifei1999/guppy3/issues/41
This service will return if and when guppy3 becomes
Python 3.11 compatible
* squash
* temp remove
* temp dump tests
* temp dump tests
* drop a few more to get a run
* drop a few more to get a run
* Account for changed Python 3.11 enum.IntFlag behavior in zha
There may be additional changes needed, but I could only
see what needed to be updated based on the tests
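Two of the well-known Python 3.11 IntFlag behavior changes, shown on a toy flag (which specific change bit zha is not recorded in this log):
```python
from enum import IntFlag

class Cap(IntFlag):  # toy flag, not a zha class
    A = 1
    B = 2

# str(): 3.10 prints "Cap.A"; 3.11 uses int.__str__ and prints "1".
print(str(Cap.A))
# Inversion: 3.10 keeps the raw bit pattern (<Cap.-2: -2>); 3.11 bounds
# the complement to the defined flags, giving <Cap.B: 2>.
print(repr(~Cap.A))
```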
* merge
* restore
* restore
* legacy value
* tweak a bit for the python 3.11 timings
* block cchardet
* conditional
* adjust est
* test
* not yet
* tweak
* give a little leeway for timing
* Fix otbr tests
* Increase database test timeout
It looks like we need a little more time to run
with the additional tests in #87019
* Fix aprs tests with python 3.11
* merge fix
* hints
* Update homeassistant/package_constraints.txt
* Update script/gen_requirements_all.py
* Constrain uamqp for Python 3.10 only
* Bump vulcan-api to 2.3.0
see https://github.com/kapi2289/vulcan-api/pull/126
see https://github.com/home-assistant/core/pull/88038
see https://github.com/home-assistant/docker/pull/260
* add ban
* Bump python-matter-server to 2.1.1
* revert
* Update tests/asyncio_legacy.py
---------
Co-authored-by: Erik <erik@montnemery.com>
Co-authored-by: Franck Nijhof <git@frenck.dev>
Co-authored-by: Marcel van der Veldt <m.vanderveldt@outlook.com>
* Refactor zeroconf task handling
- Avoid the need to create tasks for most callbacks
- Fixes the untracked task that could get unexpectedly GCed
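A sketch of the dispatch idea with hypothetical names: plain-function callbacks run inline, and only coroutine callbacks get a task, which must be strongly referenced because the event loop keeps only a weak reference to running tasks.
```python
import asyncio

class ZeroconfDispatcherLike:  # hypothetical shape, not HA's actual class
    def __init__(self) -> None:
        self._pending: set[asyncio.Task] = set()

    def _on_service_update(self, callback, *args) -> None:
        if not asyncio.iscoroutinefunction(callback):
            # Most callbacks are plain functions: run them inline instead
            # of paying for (and having to track) a task per event.
            callback(*args)
            return
        # Coroutine callbacks still need a task; track it so it cannot be
        # garbage collected before it finishes.
        task = asyncio.get_running_loop().create_task(callback(*args))
        self._pending.add(task)
        task.add_done_callback(self._pending.discard)
```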
* be consistent
* be consistent
* fix zeroconf tests
* runtime
* Revert "runtime"
This reverts commit 19e6b61837.
* precalc
* refactor
* tweak
* update tests
The check for identical flows only worked after
the start event. We now check against pending
flows as well.
If startup took a while, we could end up
with quite the thundering herd.