* Lazy loading of service descriptions
* Fix tests
* Load YAML in executor
* Return a copy of available services to allow mutations
* Remove lint
* Add zha/services.yaml
* Only cache descriptions for known services
* Remove lint
* Remove description loading during service registration
* Remove description parameter from async_register
* Test async_get_all_descriptions
* Remove lint
* Fix typos from multi-edit
* Remove unused arguments
* Remove unused import os
* Remove unused import os, part 2
* Remove unneeded coroutine decorator
* Only use executor for loading files
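Roughly what the lazy loading looks like: the blocking YAML parse runs in the executor, the result is cached per file, and callers get a copy so mutations don't leak back into the cache. A minimal sketch, assuming illustrative names (`load_services_file` and the `cache` dict are not the actual implementation):
```python
import yaml


def load_services_file(path):
    """Blocking helper: parse a services.yaml file (must run in the executor)."""
    with open(path, encoding="utf-8") as yaml_file:
        return yaml.safe_load(yaml_file) or {}


async def async_get_descriptions(hass, path, cache):
    """Return service descriptions, loading and caching them lazily."""
    if path not in cache:
        # Keep blocking file/YAML work off the event loop.
        cache[path] = await hass.loop.run_in_executor(
            None, load_services_file, path)
    # Hand out a copy so callers can mutate the result safely.
    return dict(cache[path])
```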
* Cleanups suggested in review
* Increase test coverage
* Fix races in existing tests
* Add EntityFilter helper
* Changes in entityfilter after code review
* Convert recorder to use EntityFilter
* Fix flake/lint errors in recorder
* Update entity filter helper to return function
* Update recorder to use updated entity filter
* Better docstrings in entityfilter
* Update entityfilter.py
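The helper now returns a plain callable, so consumers like the recorder just call it with an entity_id. A simplified sketch of the closure-based filter (the real helper's include/exclude precedence rules are a bit more involved):
```python
def generate_filter(include_domains, include_entities,
                    exclude_domains, exclude_entities):
    """Return a function that decides whether an entity_id is kept."""
    include_d = set(include_domains)
    include_e = set(include_entities)
    exclude_d = set(exclude_domains)
    exclude_e = set(exclude_entities)

    def entity_filter(entity_id):
        """Return True if the entity should be recorded/shown."""
        domain = entity_id.split(".", 1)[0]
        if entity_id in exclude_e:
            return False
        if entity_id in include_e:
            return True
        if domain in exclude_d:
            return False
        if domain in include_d:
            return True
        # No explicit rule matched: include everything unless include
        # lists were configured.
        return not (include_d or include_e)

    return entity_filter


# Usage, e.g. from the recorder's config:
# entities_filter = generate_filter(["light"], [], [], ["light.noisy_sensor"])
# entities_filter("light.kitchen")       -> True
# entities_filter("light.noisy_sensor")  -> False
# entities_filter("switch.porch")        -> False (not in the include list)
```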
* Extra check for incoming connections
The incoming connection may come from something other than self.db_url, because
a 'custom_component' could be opening its own connections; if those are not
sqlite3 connections, an error is raised because they don't have the
`dbapi_connection.isolation_level` attribute.
* lint fix
* simplify check: isinstance test only
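Simplified, the connect-event listener now just does an isinstance check so only real sqlite3 connections get the sqlite-specific tuning. A rough sketch (the exact PRAGMA settings the recorder applies may differ):
```python
import sqlite3

from sqlalchemy import event


def setup_connection_events(engine):
    """Attach a connect listener that only touches sqlite3 connections."""

    @event.listens_for(engine, "connect")
    def set_sqlite_options(dbapi_connection, connection_record):
        # Connections opened by a custom_component against another database
        # don't have sqlite's isolation_level attribute, so leave them alone.
        if not isinstance(dbapi_connection, sqlite3.Connection):
            return
        old_isolation = dbapi_connection.isolation_level
        dbapi_connection.isolation_level = None
        cursor = dbapi_connection.cursor()
        cursor.execute("PRAGMA journal_mode=WAL")
        cursor.close()
        dbapi_connection.isolation_level = old_isolation
```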
* Add recorder purge service
* Recorder test to match purge config
* Remove purge timer, move service handler to setup, add service description file
* Tests for recorder purge service
* Recorder purge timer rework, add purge service parameter, tests
* Purge service schema change
* Service description: change value range
* First cleanup
* Fix name of config
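Put together, the purge service ends up registered during setup with a small voluptuous schema and documented in services.yaml. A hedged sketch of roughly what that registration looks like (the parameter name, value range, and `do_purge` handler here are illustrative, not the actual implementation):
```python
import voluptuous as vol

SERVICE_PURGE_SCHEMA = vol.Schema({
    # Illustrative: one positive integer controlling how many days to keep.
    vol.Required("keep_days"): vol.All(vol.Coerce(int), vol.Range(min=1)),
})


def register_purge_service(hass, instance):
    """Register recorder.purge during component setup (no purge timer needed)."""

    def handle_purge_service(service):
        """Handle a call to the purge service."""
        instance.do_purge(service.data["keep_days"])

    hass.services.register(
        "recorder", "purge", handle_purge_service,
        schema=SERVICE_PURGE_SCHEMA)
```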
* Add DEBUG-level log for db row to native object conversion
This is now the bottleneck (by a large margin) for big history queries, so I'm leaving this logging in to help diagnose a slow history page for users.
* Rewrite of the "first synthetic datapoint" query for multiple entities
The old method was written in a manner that prevented an index from being used in the innermost GROUP BY statement, causing massive performance issues, especially when querying for a large time period.
The new query does have one material change that will cause it to return different results than before: instead of using max(state_id) to get the latest entry, we now get max(last_updated). This is more appropriate (the primary key should not be assumed to be in the order of event firing) and allows an index to be used on the innermost query. I added another JOIN layer to account for cases where there are two entries with the exact same `last_updated` for a given entity; in that case we use `state_id` as a tiebreaker.
For performance reasons the domain filters were also moved to the outermost query, as it's far more efficient to filter there than on the innermost query as before (due to the indexing problems with GROUP BY noted above).
The result is a query that only needs to do a filesort on the final result set, which will only be as many rows as there are entities.
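A hedged SQLAlchemy-style sketch of the shape of the new query (States is the recorder's model; this is not the literal query in the history component):
```python
from sqlalchemy import and_, func


def last_states_before(session, States, run_start, entity_ids, excluded_domains):
    """Latest state per entity before run_start (shape of the rewritten query)."""
    # Innermost: newest last_updated per entity. The GROUP BY can now use an
    # index because there is no ORDER BY or primary-key aggregation here.
    most_recent_update = (
        session.query(
            States.entity_id.label("entity_id"),
            func.max(States.last_updated).label("max_updated"),
        )
        .filter(States.last_updated < run_start)
        .filter(States.entity_id.in_(entity_ids))
        .group_by(States.entity_id)
        .subquery()
    )

    # Extra JOIN layer: if two rows share the same last_updated for an
    # entity, fall back to the largest state_id as a tiebreaker.
    most_recent_id = (
        session.query(func.max(States.state_id).label("max_state_id"))
        .join(
            most_recent_update,
            and_(
                States.entity_id == most_recent_update.c.entity_id,
                States.last_updated == most_recent_update.c.max_updated,
            ),
        )
        .group_by(States.entity_id)
        .subquery()
    )

    # Outermost: fetch the rows and apply the (cheap) domain filters here
    # instead of in the inner query.
    return (
        session.query(States)
        .join(most_recent_id, States.state_id == most_recent_id.c.max_state_id)
        .filter(~States.domain.in_(excluded_domains))
        .all()
    )
```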
* Remove the ORDER BY entity_id when fetching states, and add logging
Having this ORDER BY in the query prevents it from using an index due to the range filter, so it has been removed.
We already do a `groupby` in the `states_to_json` method which accomplishes exactly what the ORDER BY in the query was trying to do anyway, so this change causes no functional difference.
Also added DEBUG-level logging to allow diagnosing a user's slow history page.
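For reference, the grouping the ORDER BY was doing is handled in Python roughly like this (a simplified sketch of the `states_to_json`-style grouping; the real method also inserts the synthetic first datapoints and converts rows with to_native()):
```python
from collections import defaultdict
from itertools import groupby


def group_states_by_entity(states):
    """Group state rows per entity_id without relying on SQL ordering."""
    result = defaultdict(list)
    # groupby only merges adjacent rows, but extending a defaultdict list
    # means non-contiguous runs for the same entity still end up together,
    # so the query no longer needs an ORDER BY entity_id.
    for ent_id, group in groupby(states, key=lambda state: state.entity_id):
        result[ent_id].extend(group)
    return result
```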
* Add DEBUG-level logging for the synthetic-first-datapoint query
For diagnosing a user's slow history page
* Missed a couple instances of `created` that should be `last_updated`
* Remove `entity_id` sorting from state_changes; match significant_update
This is the same change as 09b3498f41, but applied to the `state_changes_during_period` method, which I missed before. This should give the same performance boost to the history sensor component!
* Bugfix in History query used for History Sensor
The date filter was using different columns for the upper and lower bounds. It still worked, but it was slow!
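In other words, both bounds of the time window should constrain the same indexed column. A small illustration of the corrected shape of the filter (column names as in the recorder's States model):
```python
def states_in_window(session, States, start_time, end_time):
    """Query states within a window, bounding both ends on last_updated."""
    # Previously the lower and upper bounds used different columns, which
    # still returned results but defeated the index.
    return (
        session.query(States)
        .filter(States.last_updated >= start_time)
        .filter(States.last_updated < end_time)
    )
```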
* Update Recorder purge script to use more appropriate columns
Two reasons:
1. The `created` column's meaning is fairly arbitrary and does not represent when an event or state change actually occurred. It seems more correct to purge based on the event date than on the time the database row was written.
2. The new columns are indexed, which will speed up this purge script by orders of magnitude.
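A hedged sketch of the purge deletes keyed on those indexed time columns (keep-days handling simplified; Events/States are the recorder models):
```python
from datetime import timedelta

import homeassistant.util.dt as dt_util


def purge_old_data(session, Events, States, purge_days):
    """Delete events/states older than purge_days, keyed on indexed time columns."""
    purge_before = dt_util.utcnow() - timedelta(days=purge_days)

    # States.last_updated and Events.time_fired are indexed and reflect when
    # the change actually happened, unlike the row-insertion 'created' column.
    deleted_states = (
        session.query(States)
        .filter(States.last_updated < purge_before)
        .delete(synchronize_session=False)
    )
    deleted_events = (
        session.query(Events)
        .filter(Events.time_fired < purge_before)
        .delete(synchronize_session=False)
    )
    session.commit()
    return deleted_states, deleted_events
```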
* Updating db model to match new query optimizations
A few things here:
1. New schema version with a new index and several removed indexes
2. A new method in the migration script to drop old indexes
3. Added an INFO-level log message when a new index will be added, as this can take quite some time on a Raspberry Pi
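A rough sketch of those two pieces: a helper to drop indexes the new schema no longer wants, and an INFO-level heads-up before adding the new one (version numbers and index names here are placeholders, not the real migration):
```python
import logging

from sqlalchemy.exc import OperationalError

_LOGGER = logging.getLogger(__name__)


def _drop_index(engine, index_name):
    """Drop an index left over from an older schema version."""
    try:
        # Dialect differences (e.g. MySQL's DROP INDEX ... ON table) are
        # glossed over in this sketch.
        engine.execute("DROP INDEX {}".format(index_name))
    except OperationalError:
        _LOGGER.debug("Index %s was already gone", index_name)


def _apply_update(engine, new_version):
    """Perform the schema changes for a single version bump (illustrative)."""
    if new_version == 4:  # placeholder version number
        _LOGGER.info(
            "Adding a new index to the states table. Note: this can take "
            "several minutes on a Raspberry Pi or other slow hardware. "
            "Please be patient!")
        # _create_index(...) as sketched further down
        _drop_index(engine, "old_states_index")  # placeholder name
```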
* Try/except around database updates in recorder. Resolves #6919
* Fixing failed test for line length
* Catch only OperationalError and retry connections before giving up
* Include SQLAlchemy exception handling in a single function
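A hedged sketch of what that consolidated handling could look like: retry only on sqlalchemy's OperationalError (typically a dropped MySQL connection) and give up on everything else (function name and retry constants are illustrative):
```python
import logging
import time

from sqlalchemy.exc import OperationalError, SQLAlchemyError

_LOGGER = logging.getLogger(__name__)

RETRIES = 3
RETRY_WAIT = 2  # seconds


def commit_with_retry(session):
    """Try to commit, retrying only transient OperationalErrors."""
    for attempt in range(RETRIES):
        try:
            session.commit()
            return True
        except OperationalError as err:
            _LOGGER.error("Error in database connectivity: %s", err)
            session.rollback()
            if attempt < RETRIES - 1:
                time.sleep(RETRY_WAIT)
        except SQLAlchemyError as err:
            # Anything else is a real bug or bad data: roll back and give up.
            _LOGGER.exception("Error saving to database: %s", err)
            session.rollback()
            return False
    return False
```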
* New indexes for states table
* Added recorder_runs indexes
* Created a new function for compound indexes.
A new function was created because it makes creating a single-field index a little
cleaner, since one doesn't have to wrap the column in a list. The difference is
mostly in how the index name is built, so with a bit more logic it would be
possible to combine the two into one function. Given how infrequently migration
changes are run, I thought the code bloat was probably a worthwhile trade-off for now.
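A rough sketch of that kind of helper: `*column_names` spares single-field callers from building a list, and the index name is derived from the table and columns (the real code later references model-defined indexes by name, so details differ):
```python
from sqlalchemy import Index

from homeassistant.components.recorder import models


def _create_index(engine, table_name, *column_names):
    """Create a (possibly compound) index on an existing table."""
    table = models.Base.metadata.tables[table_name]
    index_name = "ix_{}_{}".format(table_name, "_".join(column_names))
    columns = [getattr(table.c, name) for name in column_names]
    Index(index_name, *columns).create(engine)


# Usage (illustrative):
# _create_index(engine, "states", "last_updated")               # single field
# _create_index(engine, "states", "entity_id", "last_updated")  # compound
```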
* Adjusted indexes, POC for ref indexes by name.
* Corrected lint errors
* Fixed pydocstyle error
* Moved create_index function outside apply_update
* Moved to single line (just barely)
* Wait up to 9 seconds
* Set number of recorder retries to 8
* Do not sleep when reporting last connection error if no retries left
* Make sure we clean up old engine if connection is retrying
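Taken together, the startup connection attempt now retries a fixed number of times, disposes of the previous engine between attempts, and skips the final sleep when there is nothing left to retry. A rough sketch (constants and the helper name are illustrative):
```python
import logging
import time

from sqlalchemy import create_engine
from sqlalchemy.exc import SQLAlchemyError

_LOGGER = logging.getLogger(__name__)

MAX_RETRIES = 8
CONNECT_RETRY_WAIT = 3  # seconds; illustrative


def connect_with_retry(db_url):
    """Return a working engine, retrying transient connection failures."""
    engine = None
    for tries_left in range(MAX_RETRIES, 0, -1):
        if engine is not None:
            # Clean up the engine from the previous failed attempt.
            engine.dispose()
        engine = create_engine(db_url)
        try:
            engine.connect().close()
            return engine
        except SQLAlchemyError as err:
            _LOGGER.error("Error during connection setup: %s", err)
            if tries_left > 1:
                # Skip the sleep after reporting the very last failure.
                time.sleep(CONNECT_RETRY_WAIT)
    raise RuntimeError("Could not connect to the recorder database")
```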
* Update __init__.py
* Restore states
* feedback
* Remove component move into recorder
* space
* helper
* Address my own comments
* Improve test coverage
* Add test for light restore state
* [recorder] Add tests for full schema migration
* Remove leftover code
* Fix duplicate creation of sqlalchemy Index object
* It's that kind of day...
* Improve models_original docstring
* Index events time_fired to improve logbook perf.
* Updated implementation to track schema versions
* Added tests for schema migration support logic
* Rename check_schema to migrate_schema
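Schema versions are tracked in a small table so migrate_schema knows which updates still need to run. A hedged sketch of the flow, assuming a SchemaChanges model with a schema_version column (details of the real models may differ):
```python
import logging

from homeassistant.components.recorder.models import SCHEMA_VERSION, SchemaChanges

_LOGGER = logging.getLogger(__name__)


def migrate_schema(session, engine):
    """Bring the database schema up to the current SCHEMA_VERSION."""
    res = (session.query(SchemaChanges)
           .order_by(SchemaChanges.change_id.desc())
           .first())
    current_version = getattr(res, "schema_version", None)

    if current_version == SCHEMA_VERSION:
        return

    _LOGGER.debug("Database requires upgrade. Schema version: %s", current_version)

    for version in range(current_version or 0, SCHEMA_VERSION):
        new_version = version + 1
        _LOGGER.info("Upgrading recorder db schema to version %s", new_version)
        _apply_update(engine, new_version)  # per-version changes, as sketched above
        session.add(SchemaChanges(schema_version=new_version))
        session.commit()
```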
* Add event loop to the core
* Add block_till_done to HA core object
* Fix some tests
* Linting core
* Fix statemachine tests
* Core test fixes
* Fix block_till_done to wait for the loop and queue to empty
* Fix test_core to pass, and correct start/stop/block_till_done
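block_till_done is what lets threaded test code wait for the now-async core to drain its pending work before asserting. A minimal illustration of the pattern the fixed tests follow (test name and listener are illustrative):
```python
def test_event_is_processed(hass):
    """Fire an event and wait for the loop/queue to drain before asserting."""
    events = []
    hass.bus.listen("test_event", lambda event: events.append(event))

    hass.bus.fire("test_event")
    # Without this, the assertion races the event loop worker.
    hass.block_till_done()

    assert len(events) == 1
```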
* Fix remote tests
* Fix tests: block_till_done
* Fix linting
* Fix more tests
* Fix final linting
* Fix remote test
* remove unnecessary import
* reduce sleep to avoid slowing down the tests excessively
* fix remaining tests to wait for non-threadsafe operations
* Add async_ doc strings for event loop / coroutine info
* Fix command line test to block for the right timeout
* Fix py3.4.2 loop var access
* Fix SERVICE_CALL_LIMIT being in effect for other tests
* Fix lint errors
* Fix lint error with proper placement
* Fix slave start to not start a timer
* Add asyncio compatible listeners.
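Listeners can now be coroutines scheduled directly on the event loop rather than going through the worker pool. A hedged usage sketch, written in the pre-async/await style that matches the 3.4.2 minimum below (the event names here are illustrative):
```python
import asyncio


@asyncio.coroutine
def async_setup_listener(hass):
    """Register an event listener that runs inside the event loop."""

    @asyncio.coroutine
    def handle_state_change(event):
        """Coroutine listener: no thread hop needed for async work."""
        yield from asyncio.sleep(0)  # stand-in for some other async call
        hass.bus.async_fire("custom_event", {"from": event.event_type})

    # async_listen must be called from within the event loop.
    hass.bus.async_listen("state_changed", handle_state_change)
```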
* Increase min Python version to 3.4.2
* Move async backports to util
* Add backported async tests
* Fix linting
* Simplify Python version check
* Fix lint
* Remove unneeded try/except and queue listener appropriately.
* Fix tuple vs. list unorderable error on version compare.
* Fix version tests
* Update recorder.
models.py:
- Use scoped_session in models.py to fix shutdown error
__init__.py:
- Session _commit & retry method
- Single session var for purge_data
- Ensure single _INSTANCE
- Repeat purge every 2 days
- Show correct time in log_error
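A small sketch of the scoped_session part mentioned above (the _commit retry helper looks much like the commit_with_retry sketch further up):
```python
from sqlalchemy.orm import scoped_session, sessionmaker

# A thread-local session registry: each recorder/purge thread gets its own
# session, and Session.remove() at shutdown avoids the teardown errors seen
# with a single shared session.
Session = scoped_session(sessionmaker())


def setup_session(engine):
    """Bind the session factory once the engine is ready."""
    Session.configure(bind=engine)
    return Session
```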
* _commit
* Restore models to old functionality, swap purge, remove _INSTANCE cleanup from tests, typing ignore Base class
* pylint
* Remove recorder from model unit test
* Switch to SQLAlchemy for the Recorder component. Gives the ability to use MySQL or other databases.
* fixes for failed lint
* add conversion script
* code review fixes and refactor to use to_native() model methods and execute() helper
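Roughly what the execute() helper referred to above does: run the query, convert rows with the models' to_native() methods, and retry a couple of times on SQLAlchemy errors (constants are illustrative):
```python
import logging
import time

from sqlalchemy.exc import SQLAlchemyError

_LOGGER = logging.getLogger(__name__)

RETRIES = 3
QUERY_RETRY_WAIT = 0.1  # seconds


def execute(qry):
    """Run a query, converting rows with to_native() and retrying on errors."""
    for attempt in range(RETRIES):
        try:
            return [row.to_native() for row in qry.all()]
        except SQLAlchemyError as err:
            _LOGGER.error("Error executing query: %s", err)
            if attempt == RETRIES - 1:
                raise
            time.sleep(QUERY_RETRY_WAIT)
```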
* move script to homeassistant.scripts module
* Style fixes that my tox lint/flake8 runs missed
* move exclusion up