* Add deduplicate script
* Fix forecast_solar incorrect key with space
* Fix utf-8
* Do not create references to arbitrary other integrations
* Add commented code to only allow applying to referencing integrations
* Tweak
* Bug fix
* Add command line arg for limit reference
* never suggest updating common keys
* Output of script
* Apply suggestions from code review
Co-authored-by: Michael <35783820+mib1185@users.noreply.github.com>
* Add common translations
* Fix package names to match pypi index metadata
* uses _
* uses -
* fix metadata
* Add additional coverage to history websocket api
related issue #93258
* Fix results when union query ends up at the end instead of the front
* Apply suggestions from code review
* resort
* zero instead
* fix exception
* fix tests
* Significantly speed up recorder event listener
This code is called every time an event happens since it
subscribes to all events. It's our most frequently called
listener out of the box.
It used to have a separate filter function, but the two were
combined after an earlier core refactoring and were never
optimized after that.
This change reduces the run time by ~70%.
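A minimal sketch of the idea described above, with assumed names (the real listener lives in the recorder component and performs more checks): the previously separate filter call is inlined into the hot-path callback so every event costs one function call less.

```python
# Hypothetical sketch (names assumed): the per-event filter runs inline in the
# listener callback instead of in a separate filter callable.
from queue import SimpleQueue

EXCLUDED_EVENT_TYPES = {"recorder_hourly_statistics_scheduled"}  # assumed set


class EventListener:
    def __init__(self) -> None:
        self._queue: SimpleQueue = SimpleQueue()

    def listen(self, event) -> None:
        """Hot path: called for every event on the bus."""
        # Before: `if not self._event_filter(event): return` (extra call).
        if event.event_type in EXCLUDED_EVENT_TYPES:
            return
        self._queue.put_nowait(event)
```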
* decruft
* Support calculating changes between consecutive sum statistics
* Add support for unit conversion when calculating change
* Don't include sum in WS response unless requested
* Improve tests
* Break out calculating change to its own function
* Improve test coverage
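A minimal sketch of the change calculation described above, with hypothetical names; the actual statistics code works on database rows rather than plain lists, and the conversion hook stands in for the recorder's unit converter classes.

```python
# Hypothetical sketch: derive "change" from consecutive sum statistics and
# optionally convert units before returning the values.
from collections.abc import Callable


def changes_from_sums(
    sums: list[float | None],
    convert: Callable[[float], float] | None = None,
) -> list[float | None]:
    """Return the delta between each sum and the previous known sum."""
    changes: list[float | None] = []
    previous: float | None = None
    for value in sums:
        if value is None or previous is None:
            changes.append(None)
        else:
            delta = value - previous
            changes.append(convert(delta) if convert else delta)
        if value is not None:
            previous = value
    return changes


# Example: kWh sums converted to Wh changes.
print(changes_from_sums([1.0, 1.5, None, 2.5], convert=lambda v: v * 1000))
# [None, 500.0, None, 1000.0]
```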
* Auto repair incorrect collation on MySQL schema
As we do more union queries in 2023.5.x, a mismatch between
collations on tables makes them fail with an error that is hard
for the user to figure out how to fix:
`Error executing query: (MySQLdb.OperationalError) (1271, "Illegal mix of collations for operation UNION")`
This was reported in the #beta channel and by PM from others,
so the problem is not isolated to a single user:
https://discord.com/channels/330944238910963714/427516175237382144/1100908739910963272
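A hedged sketch of the repair idea, with a placeholder DSN and an assumed target collation; the actual repair in Home Assistant may differ in which tables and collation it checks.

```python
# Hypothetical sketch: detect a collation mismatch and convert the table so
# UNION queries across tables stop failing with MySQL error 1271.
from sqlalchemy import create_engine, text

engine = create_engine("mysql+mysqldb://user:pass@localhost/homeassistant")  # placeholder DSN

with engine.begin() as conn:
    collation = conn.execute(
        text(
            "SELECT table_collation FROM information_schema.tables "
            "WHERE table_schema = DATABASE() AND table_name = 'states'"
        )
    ).scalar()
    if collation != "utf8mb4_unicode_ci":  # assumed target collation
        # Converting the character set rewrites the table; on large databases
        # this can take a while, which is why it is done as a repair step.
        conn.execute(
            text(
                "ALTER TABLE states CONVERT TO CHARACTER SET utf8mb4 "
                "COLLATE utf8mb4_unicode_ci"
            )
        )
```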
* test with ascii since older MariaDB versions may not work otherwise
* Revert "test with ascii since older MariaDB versions may not work otherwise"
This reverts commit 787fda1aefcd8418a28a8a8f430e7e7232218ef8.
* older versions need to check collation_server because the collation is not reflected if it's the default
* Speed up logbook and history queries where ORM rows are not needed
This avoids having SQLAlchemy wrap the Result in ChunkedIteratorResult,
which has additional overhead we do not need for these cases
* more places
* anything that uses _sorted_statistics_to_dict does not need ORM rows either
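A small sketch of the idea, assuming SQLAlchemy 2.x: executing a column-only statement through the Core connection returns a plain `CursorResult` instead of the ORM-wrapped result that `Session.execute()` produces. The helper name is hypothetical.

```python
# Hypothetical helper: return plain row tuples for a column-only SELECT
# without the ORM result wrapping that Session.execute() adds.
from sqlalchemy import Select
from sqlalchemy.orm import Session


def execute_without_orm_rows(session: Session, stmt: Select) -> list:
    # session.execute(stmt) would hand back a ChunkedIteratorResult; the Core
    # connection gives a plain CursorResult with less per-row overhead.
    return session.connection().execute(stmt).all()
```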
* Fall back to generating a new ULID on migration if context is missing or invalid
It was discovered that PostgreSQL will do a full scan if
there is low cardinality on the index because of missing
context ids. We now generate a ULID from the timestamp
of the row if the context data is missing or invalid.
fixes #91514
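A hedged sketch of the fallback, with hypothetical helper names; it follows the ULID spec (48-bit millisecond timestamp plus 80 random bits, Crockford base32), so rows migrated without valid context data still sort by time and keep the index cardinality useful.

```python
# Hypothetical sketch: mint a ULID whose timestamp component comes from the
# row's own timestamp when the stored context id is missing or invalid.
import secrets

_CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"


def ulid_at_time(timestamp: float) -> str:
    """Return a 26-character ULID for the given unix timestamp (seconds)."""
    value = (int(timestamp * 1000) << 80) | secrets.randbits(80)
    chars = []
    for _ in range(26):  # 26 * 5 bits covers the 128-bit ULID value
        chars.append(_CROCKFORD[value & 0x1F])
        value >>= 5
    return "".join(reversed(chars))


def context_id_for_row(context_id: str | None, row_timestamp: float) -> str:
    # Assumed validity check; the real migration has its own rules.
    if context_id and len(context_id) == 26:
        return context_id
    return ulid_at_time(row_timestamp)
```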
* tests
* tweak
* tweak
* preen
* Ensure the recorder run shuts down if the run loop raises
If anything goes wrong with the recorder we should
still try to shut down cleanly
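A minimal sketch of the pattern, with hypothetical method names; the real recorder does its cleanup in a dedicated shutdown path.

```python
# Hypothetical sketch: the shutdown path always runs, even when the run loop
# (or the migration it performs) raises.
class Recorder:
    def _run_loop(self) -> None:
        ...  # migration + event processing would happen here

    def _shutdown(self) -> None:
        print("closing database connection")

    def run(self) -> None:
        try:
            self._run_loop()
        finally:
            # Always runs, so the session/engine are not left dangling.
            self._shutdown()
```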
* tweak
* tests
* tests
* handle migration failure
* tweak comment
* naming
* order
* reword
* adjust test
* fixes
* threading
* failure case
* fix test
* have to wait for stop because the task blocks on thread join
* Use local timezone for recorder connection
The fix in #90335 had an unexpected side effect of
using UTC for the timezone, since all recorder operations
use UTC. Since only SQLite must use the database executor,
we can use a separate connection pool which uses local time.
This also ensures that the engines are disposed of
when Home Assistant is shut down, as previously we
did not disconnect cleanly.
* coverage
* fix unclean shutdown in config flow
* tweaks
https://www.sqlite.org/pragma.html#pragma_optimize
> To achieve the best long-term query performance without the need to do a detailed engineering analysis of the application schema and SQL, it is recommended that applications run "PRAGMA optimize" (with no arguments) just before closing each database connection. Long-running applications might also benefit from setting a timer to run "PRAGMA optimize" every few hours.
> This pragma is usually a no-op or nearly so and is very fast.
Since we keep the recorder connection open for the entire time HA
is running, we fall into the long-running application bucket
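A small sketch of applying the quoted recommendation, using the standard `sqlite3` module rather than the recorder's actual engine handling; a long-running process could also schedule the same pragma every few hours.

```python
# Hypothetical sketch: run "PRAGMA optimize" right before the connection is
# closed, per the SQLite recommendation quoted above.
import sqlite3


def close_connection(conn: sqlite3.Connection) -> None:
    try:
        conn.execute("PRAGMA optimize")
    finally:
        conn.close()


conn = sqlite3.connect(":memory:")
close_connection(conn)
```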
* delete more code
* tweak
* tweak
* wrappers
* restore lost performance
* compact
* reduce
* fix refactor
* DRY
* tweak
* delete the start time state injector
* move away the legacy code
* tweak
* adjust
* adjust
* tweak
* ignore impossible
* fix a bug where the first start was changed to the start time when there was no previously recorded history
* avoid the empty scan most cases
* postgresql
* fixes
* workaround for mariadb < 10.4
* remove unused
* remove unused
* adjust
* bail early
* tweak
* tweak
* fix more tests
* fix RecorderRun being initialized in the future in the test
* run history tests on schema 30 as well
* Revert "run history tests on schema 30 as well"
This reverts commit d798b100ac.
* reduce
* cleanup
* tweak
* reduce
* prune
* adjust
* adjust
* adjust
* reverse later is faster because the index is in forward order and the data size we are reversing is much smaller even if we are in python code
* Revert "reverse later is faster because the index is in forward order and the data size we are reversing is much smaller even if we are in python code"
This reverts commit bf974e103e.
* fix test
* Revert "Revert "reverse later is faster because the index is in forward order and the data size we are reversing is much smaller even if we are in python code""
This reverts commit 119354499e.
* more coverage
* adjust
* fix for table order
* impossible for it to be missing
* remove some more legacy from the all states
* Reduce overhead of legacy database columns on new installs
* not working as expected
* override the type compiler
* Apply suggestions from code review
* pgsql char1
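A hedged sketch of the type-compiler override mentioned above, using SQLAlchemy's `compiles()` extension; the class name and exact dialect/type choices are assumptions, though the "pgsql char1" commit suggests PostgreSQL gets `CHAR(1)` because it has no `CHAR(0)`.

```python
# Hypothetical sketch: render legacy, always-empty columns as the narrowest
# type each dialect allows, so new installs do not pay for unused storage.
from sqlalchemy import DateTime
from sqlalchemy.ext.compiler import compiles


class UnusedDateTime(DateTime):
    """A datetime column kept only for old-schema compatibility."""


@compiles(UnusedDateTime, "mysql")
@compiles(UnusedDateTime, "sqlite")
def _compile_unused_char0(element, compiler, **kw):
    return "CHAR(0)"


@compiles(UnusedDateTime, "postgresql")
def _compile_unused_char1(element, compiler, **kw):
    # PostgreSQL has no CHAR(0), hence CHAR(1) here.
    return "CHAR(1)"
```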
* make entity filter test setup with old schema
* fix some more tests that were mutating state
* fix more dbstate mutations
* add shim for older tests
* split migration tests
* add coverage for purging legacy data
* tweak
* more fixes
* drop some legacy
* fix another test
* fix a few more
* add casts for postgresql in case someone deletes the schema changes table
* dry