* Optimize fetching statistics
* speed up
* avoid double groupby
* avoid another loop
* tweak flow
* fixes
* tweak
* avoid a new dt object in the cache for week/month
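A rough sketch of the shape these optimizations suggest (all names here are hypothetical, not the actual recorder code): group the sorted rows in a single pass instead of grouping twice or looping again, and cache the period-start datetime per bucket so week/month lookups reuse one dt object instead of constructing a new one for every row.

```python
from datetime import datetime, timezone
from functools import lru_cache
from itertools import groupby


@lru_cache(maxsize=None)
def period_start(bucket: int) -> datetime:
    # Reuse one datetime per period bucket instead of building a
    # new object on every row (the week/month cache idea above).
    return datetime.fromtimestamp(bucket, tz=timezone.utc)


def stats_by_metadata(rows: list[tuple[int, int, float]]) -> dict:
    # rows is assumed sorted by metadata_id; a single groupby pass
    # replaces a double groupby plus an extra loop over the groups.
    return {
        metadata_id: [(period_start(ts), value) for _, ts, value in group]
        for metadata_id, group in groupby(rows, key=lambda row: row[0])
    }
```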
* Add JSON type definitions
* Sample use
* Keep mutable for a follow-up PR (avoid dead code)
* Use list/dict
* Remove JsonObjectType
* Remove reference to Union
* Cleanup
* Improve rest
* Rename json_dict => json_data
* Add docstring
* Add type hint to json_loads
* Add cast
* Move type alias to json helpers
* Cleanup
* Create and use json_loads_object
* Make error more explicit and add tests
* Use JsonObjectType in conversation
* Remove quotes
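A minimal sketch of what the type aliases and the json_loads_object helper described in the commits above might look like; the stdlib json module stands in for whatever parser the real helpers use, and the bodies here are illustrative rather than the actual implementation.

```python
import json
from typing import cast

# Recursive alias for any JSON value, plus the object (dict) case.
JsonValueType = (
    dict[str, "JsonValueType"] | list["JsonValueType"]
    | str | int | float | bool | None
)
JsonObjectType = dict[str, JsonValueType]


def json_loads(data: bytes | str) -> JsonValueType:
    """Parse JSON data with a typed return value."""
    return cast(JsonValueType, json.loads(data))


def json_loads_object(data: bytes | str) -> JsonObjectType:
    """Parse JSON data and ensure the result is a JSON object."""
    value = json_loads(data)
    if isinstance(value, dict):
        return value
    # Explicit error so callers see what was actually parsed.
    raise ValueError(f"Expected JSON to be parsed as a dict, got {type(value)}")
```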
Fix recorder run history during schema migration
RunHistory.get and RunHistory.current can be called before
RunHistory.start. We need to return a RecorderRuns object
with the recording_start time that will be used when start
is called to ensure history queries still work as expected.
fixes #87112
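A simplified, hypothetical model of the fix (the real RunHistory is more involved and persists runs via the database session): before start() has run, get()/current hand back a placeholder RecorderRuns anchored at recording_start, so history queries during schema migration resolve to a usable run.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class RecorderRuns:
    start: datetime
    created: datetime


class RunHistory:
    def __init__(self) -> None:
        self.recording_start = datetime.now(tz=timezone.utc)
        self._current_run: RecorderRuns | None = None

    @property
    def current(self) -> RecorderRuns:
        if self._current_run is None:
            # Not started yet (e.g. schema migration in progress):
            # return a placeholder carrying the eventual start time.
            return RecorderRuns(
                start=self.recording_start,
                created=datetime.now(tz=timezone.utc),
            )
        return self._current_run

    def start(self) -> None:
        # The real implementation also persists the run row; here we
        # only record it in memory for illustration.
        self._current_run = RecorderRuns(
            start=self.recording_start,
            created=datetime.now(tz=timezone.utc),
        )
```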
* Speed up comparing State and Event objects
Use the default Python implementation for State and Event __hash__ and __eq__.
The default implementation compares based on the id() of the object,
which is effectively what we want here anyway. These overrides were
left over from the days when these used to be attrs objects.
By avoiding implementing these ourselves, all of the equality checks
can happen in native code.
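An illustration of the idea with a simplified stand-in class (not the real Event): with no __eq__/__hash__ overrides, Python's object defaults compare and hash by identity in native code, matching the previous semantics.

```python
class Event:
    """Simplified stand-in: no __eq__/__hash__ overrides, so the
    identity-based object defaults (implemented in C) apply."""

    def __init__(self, event_type: str, data: dict | None = None) -> None:
        self.event_type = event_type
        self.data = data if data is not None else {}


e1 = Event("state_changed")
e2 = Event("state_changed")
assert e1 == e1            # object.__eq__ compares by identity
assert e1 != e2            # distinct instances are never equal
assert len({e1, e2}) == 2  # hashed by id(), usable in sets/dicts
```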
* tweak
* adjust tests
* write out some more
* fix test to not compare objects
* more test fixes
* more test fixes
* correct stats tests
* fix more tests
* fix more tests
* update sensor recorder tests
* Chunk MariaDB data migration to avoid running out of buffer space
This will make the migration slower, but since innodb_buffer_pool_size
defaults to 128M and is not tuned to the database size, there is a
risk of running out of buffer space for large databases
* Update homeassistant/components/recorder/migration.py
* hard code since bandit thinks it's an injection
* Update homeassistant/components/recorder/migration.py
* guard against manually modified data/corrupt db
* adjust to 10k per chunk
* adjust to 50k per chunk
* memory still just fine at 250k
* but slower
* commit after each chunk to reduce lock pressure
* adjust
* set to 0 if null so we do not loop forever (this should only happen if the data is missing)
* tweak
* tweak
* limit cleanup
* lower limit to give some more buffer
* where required for sqlite
* sqlite can wipe as many as needed with no limit
* limit on mysql only
* chunk postgres
* fix limit
* tweak
* fix reference
* fix
* tweak for ram
* postgres memory reduction
* defer cleanup
* fix
* same order
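Putting the chunking notes above together, one hypothetical shape for the pattern (table, column, and SQL are illustrative, and the real migration differs per dialect, e.g. LIMIT on MySQL only, no limit needed on SQLite): rewrite rows in fixed-size batches and commit after each chunk so lock pressure and buffer-pool usage stay bounded, with a null guard so corrupt or hand-edited data cannot make the loop run forever.

```python
from sqlalchemy import text

CHUNK_SIZE = 50_000  # the notes above settled near this size


def migrate_in_chunks(session_maker) -> None:
    """Hypothetical sketch of the chunked-migration pattern above."""
    with session_maker() as session:
        # Guard against a manually modified/corrupt db: a NULL
        # boundary becomes 0 so the loop below terminates.
        max_id = session.execute(
            text("SELECT MAX(event_id) FROM events")
        ).scalar() or 0
    low = 1
    while low <= max_id:
        with session_maker() as session:
            session.execute(
                text(
                    "UPDATE events SET context_id = NULL "
                    "WHERE event_id BETWEEN :low AND :high"
                ),
                {"low": low, "high": low + CHUNK_SIZE - 1},
            )
            # Commit after each chunk to reduce lock pressure and
            # keep InnoDB buffer space usage bounded.
            session.commit()
        low += CHUNK_SIZE
```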
If there are a lot of excluded events for the recorder, it
can have a performance impact, as the list has to be searched
every time an event fires in HA
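One common remedy, sketched here as an assumption about the direction of the change rather than the actual patch: keep the exclusions in a frozenset so each fired event costs a single average-O(1) hash lookup instead of an O(n) scan of a list.

```python
# Hypothetical exclude set; the real values come from configuration.
EXCLUDED_EVENT_TYPES = frozenset(
    {"call_service", "component_loaded", "service_registered"}
)


def should_record(event_type: str) -> bool:
    # One hash lookup per fired event instead of a linear search.
    return event_type not in EXCLUDED_EVENT_TYPES
```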