* Avoid regex for negative zero check in sensor
We can avoid calling the regex for every sensor value,
since values are not negative zero most of the time.
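A minimal sketch of the fast-path idea, assuming a hypothetical pattern and helper name; the cheap substring test skips the regex for typical values:

```python
import re

# Hypothetical pattern for a negative-zero numeric string such as "-0" or "-0.00".
NEGATIVE_ZERO_PATTERN = re.compile(r"^-0(\.0*)?$")

def is_negative_zero(value: str) -> bool:
    """Return True if the state string represents negative zero.

    The substring check short-circuits the regex for the vast majority
    of sensor values, which do not contain "-0" at all.
    """
    return "-0" in value and NEGATIVE_ZERO_PATTERN.match(value) is not None
```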
* tweak
* Apply suggestions from code review
* simpler
* cover
* safer and still fast
* prep for py3.11
* fix check
* add missing cover
* more coverage
* coverage
* Migrate restore_state helper to use registry loading pattern
As more entities have started using restore_state over time, it
has become a startup bottleneck: each entity being added creates
a task to load restore state data that has already been loaded,
since the data is a singleton.
We now use the same pattern as the registry helpers
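A rough sketch of that pattern under illustrative names (the real helper lives in homeassistant.helpers.restore_state; `hass` is only assumed to expose a `data` dict):

```python
DATA_RESTORE_STATE = "restore_state"  # singleton key in hass.data

class RestoreStateData:
    """Stand-in for the data that every restoring entity needs."""

    @classmethod
    async def async_load(cls) -> "RestoreStateData":
        # In reality this reads the last known states from storage, once.
        return cls()

async def async_load(hass) -> None:
    """Load the singleton a single time during startup, like the registry helpers."""
    hass.data[DATA_RESTORE_STATE] = await RestoreStateData.async_load()

def async_get(hass) -> RestoreStateData:
    """Entities call this synchronously; no per-entity loading task is created."""
    return hass.data[DATA_RESTORE_STATE]
```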
* fix refactoring error -- guess I am tired
* fixes
* fix tests
* fix more
* fix zha tests
* comments
* fix error
* add missing coverage
* s/DATA_RESTORE_STATE_TASK/DATA_RESTORE_STATE/g
* Support calculating changes between consecutive sum statistics
* Add support for unit conversion when calculating change
* Don't include sum in WS response unless requested
* Improve tests
* Break out calculating change to its own function
* Improve test coverage
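The change calculation for consecutive sum statistics boils down to a difference between the two sums, with an optional unit conversion applied; a hedged sketch with a hypothetical helper:

```python
from __future__ import annotations

from collections.abc import Callable

def calculate_change(
    previous_sum: float | None,
    current_sum: float | None,
    convert: Callable[[float], float] | None = None,
) -> float | None:
    """Return the change between two consecutive sum statistics."""
    if previous_sum is None or current_sum is None:
        return None
    change = current_sum - previous_sum
    return convert(change) if convert is not None else change

# e.g. converting a 2.5 kWh change to Wh for the requested unit
assert calculate_change(10.0, 12.5, lambda kwh: kwh * 1000) == 2500.0
```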
* Reduce overhead of legacy database columns on new installs
* not working as expected
* override the type compiler
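"Override the type compiler" refers to SQLAlchemy's compiler extension: legacy columns keep their names while new installs compile them down to near-zero-cost types. A hedged sketch of the technique, using illustrative marker types rather than the recorder's actual schema:

```python
from sqlalchemy import CHAR, DateTime
from sqlalchemy.ext.compiler import compiles

class UnusedDateTime(DateTime):
    """Marker type for legacy datetime columns no longer written on new installs."""

class Unused(CHAR):
    """Marker type for legacy string columns no longer written on new installs."""

@compiles(UnusedDateTime, "mysql", "sqlite")
@compiles(Unused, "mysql", "sqlite")
def _compile_char_zero(type_, compiler, **kw):
    # A zero-length CHAR keeps the column definition but costs almost nothing.
    return "CHAR(0)"

@compiles(Unused, "postgresql")
def _compile_char_one(type_, compiler, **kw):
    # PostgreSQL does not allow CHAR(0), hence the "pgsql char1" commit below.
    return "CHAR(1)"
```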
* Apply suggestions from code review
* pgsql char1
* make entity filter test setup with old schema
* fix some more tests that were mutating state
* fix more dbstate mutations
* add shim for older tests
* split migration tests
* add coverage for purging legacy data
* tweak
* more fixes
* drop some legacy
* fix another test
* fix a few more
* add casts for postgresql in case someone deletes the schema changes table
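A hedged illustration of the explicit casts with SQLAlchemy's `cast()` (illustrative table and column names): PostgreSQL is stricter than SQLite/MySQL about implicit conversions, so the fallback path compares legacy columns using a known type.

```python
from sqlalchemy import Text, cast, column, select, table

states = table("states", column("state_id"), column("entity_id"))

# Cast the legacy column to a known text type so PostgreSQL accepts the
# comparison even if the column was created with a minimal type.
stmt = select(states.c.state_id).where(
    cast(states.c.entity_id, Text) == "light.kitchen"
)
print(stmt)
```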
* dry
If the time period for the mean/time weighted average was smaller
than we can measure (less than one microsecond), generating
statistics would fail with a divide by zero error. This likely
only happens if the database schema precision is incorrect.
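A minimal sketch of the guard (hypothetical helper, not the recorder's actual statistics code):

```python
from __future__ import annotations

def _weighted_mean(accumulated: float, start_us: int, end_us: int) -> float | None:
    """Divide the time-weighted accumulator by the period length in microseconds."""
    duration_us = end_us - start_us
    if duration_us <= 0:
        # A period shorter than we can measure (< 1 microsecond) previously
        # raised ZeroDivisionError; it normally means the schema precision is wrong.
        return None
    return accumulated / duration_us
```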
* Avoid database executor job to fetch statistic metadata on cache hit
Since we will almost always have a cache hit when fetching
statistic metadata, we can avoid an executor job.
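A hedged sketch of the cache-first lookup (hypothetical class; the real code is in homeassistant/components/recorder/table_managers/statistics_meta.py, and only the standard `hass.async_add_executor_job` helper is assumed):

```python
from __future__ import annotations

class StatisticsMetaCache:
    """Illustrative cache-first metadata lookup, not the real recorder class."""

    def __init__(self, hass) -> None:
        self.hass = hass
        self._cache: dict[str, dict] = {}

    async def async_get_metadata(self, statistic_id: str) -> dict | None:
        """Return metadata, dispatching to the database executor only on a miss."""
        if (meta := self._cache.get(statistic_id)) is not None:
            return meta  # almost always taken: no executor job, no thread hop
        meta = await self.hass.async_add_executor_job(
            self._get_metadata_from_db, statistic_id
        )
        if meta is not None:
            self._cache[statistic_id] = meta
        return meta

    def _get_metadata_from_db(self, statistic_id: str) -> dict | None:
        """Blocking database query; runs in the executor thread."""
        raise NotImplementedError  # placeholder for the real query
```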
* remove exception catch since the threading.excepthook will actually catch this in production
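For reference, `threading.excepthook` (Python 3.8+) receives any exception that escapes a thread's target, which is why the local try/except became redundant; a small standalone illustration:

```python
import threading

def log_thread_exception(args: threading.ExceptHookArgs) -> None:
    # Any exception escaping a Thread's run()/target ends up here in production.
    print(f"Uncaught in {args.thread.name}: {args.exc_type.__name__}: {args.exc_value}")

threading.excepthook = log_thread_exception

worker = threading.Thread(target=lambda: 1 / 0, name="db-worker")
worker.start()
worker.join()  # prints: Uncaught in db-worker: ZeroDivisionError: division by zero
```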
* fix a few missed ones
* threadsafe
* Update homeassistant/components/recorder/table_managers/statistics_meta.py
* coverage and optimistic caching
* Speed up comparing State and Event objects
Use the default Python implementation for State and Event __hash__ and __eq__.
The default implementation compares based on the id() of the object,
which is effectively what we want here anyway. These overrides are
left over from the days when these used to be attrs objects.
By avoiding implementing these ourselves, all of the equality checks
can happen in native code.
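A simplified before/after illustration (stand-in class, not the real homeassistant.core.State):

```python
class State:
    """Stand-in for homeassistant.core.State."""

    def __init__(self, entity_id: str, state: str) -> None:
        self.entity_id = entity_id
        self.state = state

    # Previously an attrs-era override along these lines forced a Python-level
    # attribute comparison on every == and every hash lookup:
    #
    #     def __eq__(self, other):
    #         return (self.entity_id, self.state) == (other.entity_id, other.state)
    #
    # With no overrides, object.__eq__ / object.__hash__ compare by identity
    # (effectively id()), which runs in native code and matches how callers
    # actually use these objects.

a = State("light.kitchen", "on")
b = State("light.kitchen", "on")
assert a == a               # same object
assert a != b               # distinct objects are unequal even with equal attributes
assert len({a, b}) == 2     # hashing is identity-based too
```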
* tweak
* adjust tests
* write out some more
* fix test to not compare objects
* more test fixes
* correct stats tests
* fix more tests
* update sensor recorder tests