* Avoid asking recorder platforms for list_statistic_ids when already complete
If we already had all the data needed for list_statistic_ids, we would
still query recorder platforms and throw away the results
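A minimal sketch of the early return described above, with illustrative shapes for the metadata and platform inputs (the real `list_statistic_ids` signature in the recorder differs):

```python
from typing import Any


def list_statistic_ids(
    metadata: dict[str, dict[str, Any]],
    platforms: list[Any],
    statistic_ids: set[str] | None = None,
) -> dict[str, dict[str, Any]]:
    """Sketch: return early when metadata already answers the request."""
    # Collect everything we already know from the statistics metadata table.
    result = {
        stat_id: meta
        for stat_id, meta in metadata.items()
        if statistic_ids is None or stat_id in statistic_ids
    }
    # If specific ids were requested and metadata already covered all of
    # them, skip the platform queries instead of discarding their results.
    if statistic_ids is not None and statistic_ids <= result.keys():
        return result
    for platform in platforms:
        result.update(platform.list_statistic_ids(statistic_ids))
    return result
```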
* Update homeassistant/components/recorder/statistics.py
Ensure new tables are created using InnoDB
InnoDB is the only supported engine to use with MariaDB
or MySQL, as we currently have large keys in the states
table that will not work with MyISAM. Other storage
engines, including Aria, will likely work fine, but they
are not officially supported.
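The engine can be pinned with SQLAlchemy's standard `mysql_engine` table option; the model below is a simplified stand-in, not the real schema:

```python
from sqlalchemy import Column, Integer, Text
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class States(Base):
    """Simplified stand-in for the recorder states table."""

    __tablename__ = "states"
    # mysql_engine is a standard SQLAlchemy dialect option; MyISAM cannot
    # hold the large indexed keys this table uses, so pin InnoDB.
    __table_args__ = {"mysql_engine": "InnoDB"}

    state_id = Column(Integer, primary_key=True)
    state = Column(Text)
```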
* Load pending state attributes and event data ids at startup
Since we queue all events to be processed after startup,
we can have a thundering herd of queries to prime the
LRUs of event data and state attribute ids. Because we
know we are about to process a chunk of events, we can
fetch all the ids in two queries
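A sketch of the two-query prime. It assumes the recorder's `StateAttributes`/`EventData` models with `shared_attrs`/`shared_data` columns; the helper itself is illustrative:

```python
from sqlalchemy import select


def prime_caches(session, pending_events, attr_id_cache, data_id_cache):
    """Sketch: fill both LRUs with two IN queries instead of one per event."""
    shared_attrs = {e.shared_attrs for e in pending_events if e.shared_attrs}
    shared_data = {e.shared_data for e in pending_events if e.shared_data}

    # One query for all pending state attribute ids...
    for attributes_id, shared in session.execute(
        select(StateAttributes.attributes_id, StateAttributes.shared_attrs)
        .where(StateAttributes.shared_attrs.in_(shared_attrs))
    ):
        attr_id_cache[shared] = attributes_id

    # ...and one for all pending event data ids.
    for data_id, shared in session.execute(
        select(EventData.data_id, EventData.shared_data)
        .where(EventData.shared_data.in_(shared_data))
    ):
        data_id_cache[shared] = data_id
```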
* lru
* fix hang
* Fix recorder LRU being destroyed if event session is reopened
We cleared the LRU in _close_event_session but
never replaced it with a new LRU, so it would
leak memory if the event session was reopened
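One plausible form of the bug and its fix, using the lru-dict package the recorder relies on (the class is a simplified stand-in):

```python
from lru import LRU  # lru-dict package


class EventSessionState:
    """Simplified stand-in for the recorder's event session bookkeeping."""

    def __init__(self, cache_size: int = 2048) -> None:
        self._cache_size = cache_size
        self._state_attributes_ids = LRU(cache_size)

    def _close_event_session(self) -> None:
        # Leaky version cleared the cache by swapping in a container that
        # was never an LRU again, so it grew without bound:
        #     self._state_attributes_ids = {}
        # Fixed version installs a fresh LRU so the cache stays bounded.
        self._state_attributes_ids = LRU(self._cache_size)
```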
* cleanup
* Mark PostgreSQL range select as fast
We were using the slow range select workaround for
PostgreSQL that was originally developed for MariaDB,
but it's actually slower on PostgreSQL
fixes #83253
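A sketch of the dialect check, with hypothetical helper names for the two query paths:

```python
def select_range(session, start, end, dialect_name: str):
    """Sketch: only MySQL/MariaDB need the slow range-select workaround."""
    if dialect_name in ("mysql", "mariadb"):
        # Workaround originally added for the MariaDB query planner.
        return _slow_dependent_subquery_range(session, start, end)
    # PostgreSQL handles the plain range select efficiently, so take
    # the fast path instead of the workaround.
    return _fast_range_select(session, start, end)
```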
* Fix statistics_at_time query not using index (see the subquery sketch after this run of commits)
fixes #82411
* fix refactoring error
* fix query so SQLAlchemy does not get confused
* split it
* write as subquery
* reduce
* cleanup
* reduce
* Revert "reduce"
This reverts commit 43b4b55778.
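The "write as subquery" step above roughly corresponds to the shape below: find the newest row per metadata_id at or before a time, then join back for the full rows so the planner can use the (metadata_id, start) index. This assumes the recorder's StatisticsShortTerm model and is not the exact final statement:

```python
from sqlalchemy import func, select


def statistics_at_time_stmt(metadata_ids, start_time):
    """Sketch: latest stats row per metadata_id at or before start_time."""
    # Inner query: newest start per metadata_id within the window.
    most_recent = (
        select(
            StatisticsShortTerm.metadata_id,
            func.max(StatisticsShortTerm.start).label("max_start"),
        )
        .where(StatisticsShortTerm.start <= start_time)
        .where(StatisticsShortTerm.metadata_id.in_(metadata_ids))
        .group_by(StatisticsShortTerm.metadata_id)
        .subquery()
    )
    # Outer query joins back to fetch the full rows.
    return select(StatisticsShortTerm).join(
        most_recent,
        (StatisticsShortTerm.metadata_id == most_recent.c.metadata_id)
        & (StatisticsShortTerm.start == most_recent.c.max_start),
    )
```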
* Remove default from created statistics schema
We were still inserting created times because even though
None was passed explicitly when creating the object, the
default would still be used
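This is standard SQLAlchemy behavior: a column default fires whenever the attribute is None at flush time, so passing created=None does not suppress it. A simplified stand-in:

```python
from datetime import datetime, timezone

from sqlalchemy import Column, DateTime, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Statistics(Base):
    """Simplified stand-in for the statistics table."""

    __tablename__ = "statistics"
    id = Column(Integer, primary_key=True)
    # Before: Column(DateTime(), default=lambda: datetime.now(timezone.utc))
    # Statistics(created=None) still wrote a created time, because the
    # default fires for any None value at INSERT.
    # After: no default, so None stays None.
    created = Column(DateTime())
```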
* adjust column
* preserve original pre-SQLAlchemy 2.0 behavior
* Adjust size of recorder LRU based on number of entities
If there are a large number of entities, the cache would
get thrashed as there were more state attributes being
recorded than the cache could hold. This meant we had to
go back to the database for lookups whenever an instance
had more than 2048 rapidly changing entities
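A sketch of scaling the cache with the entity count; the floor here is illustrative, not the recorder's actual constant:

```python
from lru import LRU  # lru-dict package

CACHE_SIZE_FLOOR = 2048  # illustrative floor, not the real constant


def sized_attributes_cache(entity_count: int) -> LRU:
    """Size the LRU from the entity count so recording N frequently
    changing entities cannot evict its own working set."""
    return LRU(max(CACHE_SIZE_FLOOR, entity_count))
```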
* add a test
* do not actually record 4096 states
* patch target
* patch target
* Optimize fetching statistics
* speed up
* avoid double groupby
* avoid another loop
* tweak flow
* fixes
* tweak
* avoid a new dt object in the cache for week/month
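One way to read the week/month item above: memoize the period-start datetime so rows in the same bucket share one object instead of each allocating a fresh one. A hypothetical sketch:

```python
from datetime import datetime, timedelta
from functools import lru_cache


@lru_cache(maxsize=None)
def week_start(year: int, month: int, day: int) -> datetime:
    """Return the Monday 00:00 that starts this date's week, cached so
    every row in the same week reuses a single datetime object."""
    when = datetime(year, month, day)
    return when - timedelta(days=when.weekday())
```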
* Add JSON type definitions
* Sample use
* Keep mutable for a follow-up PR (avoid dead code)
* Use list/dict
* Remove JsonObjectType
* Remove reference to Union
* Cleanup
* Improve rest
* Rename json_dict => json_data
* Add docstring
* Add type hint to json_loads
* Add cast
* Move type alias to json helpers
* Cleanup
* Create and use json_loads_object
* Make error more explicit and add tests
* Use JsonObjectType in conversation
* Remove quotes
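A sketch approximating the final shape of these helpers: a recursive value alias, an object alias, and a loader that enforces the top-level type (details may differ from the actual homeassistant.util.json implementation):

```python
import json

# Recursive alias for any value parsed from JSON.
JsonValueType = (
    dict[str, "JsonValueType"] | list["JsonValueType"] | str | int | float | bool | None
)
# A JSON object (the common top-level shape).
JsonObjectType = dict[str, JsonValueType]


def json_loads_object(data: bytes | str) -> JsonObjectType:
    """Parse JSON and raise an explicit error if it is not an object."""
    value = json.loads(data)
    if isinstance(value, dict):
        return value
    raise ValueError(f"Expected JSON to be parsed as a dict got {type(value)}")
```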
* Fix recorder run history during schema migration
RunHistory.get and RunHistory.current can be called before
RunHistory.start. We need to return a RecorderRuns object
with the recording_start time that will be used when start
is called to ensure history queries still work as expected.
fixes #87112
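A minimal sketch of the fix under those constraints, with simplified stand-ins for RecorderRuns and RunHistory rather than the real classes:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class RecorderRuns:
    """Simplified stand-in for the ORM model."""

    start: datetime


class RunHistory:
    def __init__(self) -> None:
        self.recording_start = datetime.now(timezone.utc)
        self._current: RecorderRuns | None = None

    @property
    def current(self) -> RecorderRuns:
        # Called during schema migration, before start(): hand back a
        # placeholder anchored at recording_start so history queries
        # still get a valid run window.
        if self._current is None:
            return RecorderRuns(start=self.recording_start)
        return self._current

    def start(self) -> None:
        # Reuse recording_start so the persisted run matches the
        # placeholder that earlier callers may have seen.
        self._current = RecorderRuns(start=self.recording_start)
```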