* raise an exception when event_type exceeds the max length that the recorder supports
* add test
* use max length constant in recorder
* update config entry reloaded service name
* remove exception string function because it's not needed
* increase limit to 64 and revert event name change
* fix test
* assert exception args
* fix test
* add comment about migration
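A rough sketch of the event_type length guard described in the bullets above, assuming the 64-character limit; the constant and exception names here are illustrative, not necessarily the ones used in the recorder:
```
MAX_LENGTH_EVENT_EVENT_TYPE = 64  # limit raised to 64 per the bullets above


class MaxLengthExceeded(Exception):
    """Raised when a value is longer than the database column allows."""


def validate_event_type(event_type: str) -> None:
    """Reject event types that would not fit in the events.event_type column."""
    if len(event_type) > MAX_LENGTH_EVENT_EVENT_TYPE:
        raise MaxLengthExceeded(event_type, "event_type", MAX_LENGTH_EVENT_EVENT_TYPE)
```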
* Add index to old_state_id column for older databases
The schema was updated in #43610 but the index was not
added on migration.
* Handle postgresql missing ondelete
* create index first
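A minimal sketch of the migration step described above, assuming a plain SQLAlchemy engine; the index name and the raw-SQL form are illustrative rather than the component's exact migration code:
```
from sqlalchemy import text


def add_old_state_id_index(engine) -> None:
    """Add the old_state_id index the schema gained in #43610 but migrations missed."""
    # Create the index first, before any foreign key fix-ups, per the note above.
    with engine.connect() as conn:
        conn.execute(
            text("CREATE INDEX ix_states_old_state_id ON states (old_state_id)")
        )
```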
By default these tables are created with utf8, which can only hold 3 bytes per character.
This meant that any emoji would trigger a MySQLdb._exceptions.OperationalError because
they are 4 bytes.
This will only fix the issue for users who recreate their tables.
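A minimal sketch of the charset declaration, assuming SQLAlchemy declarative models like the recorder's; the cut-down Events model and the collation choice are illustrative:
```
from sqlalchemy import Column, Integer, Text
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Events(Base):
    """Cut-down model showing only the 4-byte charset declaration for MySQL."""

    __tablename__ = "events"
    __table_args__ = {
        "mysql_default_charset": "utf8mb4",
        "mysql_collate": "utf8mb4_unicode_ci",
    }
    event_id = Column(Integer, primary_key=True)
    event_data = Column(Text)
```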
* MariaDB doesn't purge #42402
This addresses home-assistant#42402
Relationships within the "states" table and between the "states" and "events" tables (home-assistant#40467) prevent the purge from working correctly. The database grows without any purge.
This proposal sets the related references to NULL and permits the rows to be deleted.
Further explanation can be found in home-assistant#42402.
This proposal also allows the "events" and "states" tables to be purged in any order.
* Update models.py
Corrected for Black style requirements
* Update homeassistant/components/recorder/models.py
Co-authored-by: J. Nick Koston <nick@koston.org>
* Add the options to foreign key constraints
* purge old states when database gets deleted out from under us
* pylint
Co-authored-by: J. Nick Koston <nick@koston.org>
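A hedged sketch of attaching an ON DELETE option to the foreign key constraint, which is what lets rows in one table be removed while the other still references them; the cut-down models and the choice of SET NULL are illustrative, not the exact constraints in the component:
```
from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Events(Base):
    __tablename__ = "events"
    event_id = Column(Integer, primary_key=True)


class States(Base):
    __tablename__ = "states"
    state_id = Column(Integer, primary_key=True)
    # When an event row is deleted, the referencing state keeps its row and
    # the reference is cleared instead of blocking the delete.
    event_id = Column(
        Integer, ForeignKey("events.event_id", ondelete="SET NULL"), index=True
    )
```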
We currently serialize the event data for state change events
and then replace it because we save the state in the states table.
Since the old state and new state are both contained in the event,
the cost of serializing the data has a noticeable impact when there
are many state_changed events.
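Roughly, the idea is the following (the constant value matches Home Assistant's state_changed event type; the helper name and placeholder payload are illustrative):
```
import json

EVENT_STATE_CHANGED = "state_changed"


def serialize_event_data(event) -> str:
    """Return the JSON payload to store for an event."""
    if event.event_type == EVENT_STATE_CHANGED:
        # The old and new state are recoverable from the states table, so
        # skip the expensive JSON dump of the event data.
        return "{}"
    return json.dumps(event.data, default=str)
```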
Cleanup indexes as >50% of the db size was indexes,
many of them unused in any current query
Logbook search was having to filter event_types without
an index:
Created ix_events_event_type_time_fired
Dropped ix_events_event_type
States had a redundant key on a composite index:
Dropped ix_states_entity_id
It's unused since we have ix_states_entity_id_last_updated
De-duplicate storage of context in states, as it's always stored
in events and can be found by joining the state on the event_id.
Dropped ix_states_context_id
Dropped ix_states_context_parent_id
Dropped ix_states_context_user_id
After schema v9:
STATES............................................ 10186 40.9%
EVENTS............................................ 5502 22.1%
IX_STATES_ENTITY_ID_LAST_UPDATED.................. 2177 8.7%
IX_EVENTS_EVENT_TYPE_TIME_FIRED................... 1910 7.7%
IX_EVENTS_CONTEXT_ID.............................. 1592 6.4%
IX_EVENTS_TIME_FIRED.............................. 1383 5.6%
IX_STATES_LAST_UPDATED............................ 1079 4.3%
IX_STATES_EVENT_ID................................ 375 1.5%
IX_EVENTS_CONTEXT_PARENT_ID....................... 347 1.4%
IX_EVENTS_CONTEXT_USER_ID......................... 346 1.4%
IX_RECORDER_RUNS_START_END........................ 1 0.004%
RECORDER_RUNS..................................... 1 0.004%
SCHEMA_CHANGES.................................... 1 0.004%
SQLITE_MASTER..................................... 1 0.004%
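For reference, a rough sketch of how the two surviving composite indexes above can be declared with SQLAlchemy; the models are trimmed to the relevant columns and are not the full recorder schema:
```
from sqlalchemy import Column, DateTime, Index, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Events(Base):
    __tablename__ = "events"
    __table_args__ = (
        Index("ix_events_event_type_time_fired", "event_type", "time_fired"),
    )
    event_id = Column(Integer, primary_key=True)
    event_type = Column(String(64))
    time_fired = Column(DateTime(timezone=True))


class States(Base):
    __tablename__ = "states"
    __table_args__ = (
        Index("ix_states_entity_id_last_updated", "entity_id", "last_updated"),
    )
    state_id = Column(Integer, primary_key=True)
    entity_id = Column(String(255))
    last_updated = Column(DateTime(timezone=True))
```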
* adj
* time_fired_isoformat
* remove unused code
* tests for processing timestamps
* restore missing import lost in merge conflict
* test for None case
* Add old_state_id to states, remove old/new state data from events since it can now be found by a join
* remove state lookup on restart
* Ensure old_state is set for existing states
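A sketch of how the old state can now be recovered with a self-join on old_state_id instead of reading it out of the event payload; the session and States model are assumed from the recorder, and the helper name is illustrative:
```
from sqlalchemy.orm import aliased


def state_with_old_state(session, states_model, state_id):
    """Return the (new, old) row pair joined through old_state_id."""
    old_states = aliased(states_model)
    return (
        session.query(states_model, old_states)
        .outerjoin(old_states, states_model.old_state_id == old_states.state_id)
        .filter(states_model.state_id == state_id)
        .one_or_none()
    )
```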
* Improve history api performance
A new option "minimal_response" reduces the amount of data
sent between the first and last history states to only the
"last_changed" and "state" fields.
Calling to_native is now avoided where possible and only
done at the end for rows that will be returned in the response.
When sending the `minimal_response` option, the history
API now returns a JSON response similar to the following
for an entity
Testing:
History API Response time for 1 day
Average of 10 runs with minimal_response
Before: 19.89s. (content length : 3427428)
After: 8.44s (content length: 592199)
```
[{
"attributes": {--TRUNCATED--},
"context": {--TRUNCATED--},
"entity_id": "binary_sensor.powerwall_status",
"last_changed": "2020-05-18T23:20:03.213000+00:00",
"last_updated": "2020-05-18T23:20:03.213000+00:00",
"state": "on"
},
...
{
"last_changed": "2020-05-19T00:41:08Z",
"state": "unavailable"
},
...
{
"attributes": {--TRUNCATED--},
"context": {--TRUNCATED--},
"entity_id": "binary_sensor.powerwall_status",
"last_changed": "2020-05-19T00:42:08.069698+00:00",
"last_updated": "2020-05-19T00:42:08.069698+00:00",
"state": "on"
}]
```
* Remove impossible state check
* Remove another impossible state check
* Update homeassistant/components/history/__init__.py
Co-authored-by: Paulus Schoutsen <paulus@home-assistant.io>
* Reorder to save some indent per review
* Make query response make sense with to_native=False
* Update test for 00:00 to Z change
* Update homeassistant/components/recorder/models.py
Co-authored-by: Paulus Schoutsen <paulus@home-assistant.io>
Co-authored-by: Paulus Schoutsen <paulus@home-assistant.io>
The database fields are timezone-aware via DateTime(timezone=True), so the
default value should be timezone-aware too. When using CockroachDB this is
fatal and results in the recorder crashing.
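A minimal sketch of the fix, assuming SQLAlchemy declarative models; the RecorderRuns column shown and the utcnow helper name are illustrative:
```
from datetime import datetime, timezone

from sqlalchemy import Column, DateTime, Integer
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


def utcnow() -> datetime:
    """Timezone-aware replacement for datetime.utcnow()."""
    return datetime.now(timezone.utc)


class RecorderRuns(Base):
    __tablename__ = "recorder_runs"
    run_id = Column(Integer, primary_key=True)
    # The column is timezone-aware, so the default must be timezone-aware too.
    start = Column(DateTime(timezone=True), default=utcnow)
```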
* [recorder] Use orjson to parse json faster
* Remove from http manifest
* Bump to orjson 2.5.1
* Empty commit to trigger CI
Co-authored-by: Paulus Schoutsen <paulus@home-assistant.io>
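The swap itself is small; a hedged sketch (the wrapper name is illustrative):
```
import orjson


def json_loads(payload):
    """Parse a stored JSON payload; orjson accepts str or bytes and is much faster."""
    return orjson.loads(payload)
```
A thin wrapper like this is convenient because orjson.dumps returns bytes rather than str, so call sites stay unchanged when swapping the parser in.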
* move imports to top-level in recorder init
* move imports to top-level in recorder migration
* move imports to top-level in recorder models
* move imports to top-level in recorder purge
* move imports to top-level in recorder util
* fix pylint
* Add DEBUG-level log for db row to native object conversion
This is now the bottleneck (by a large margin) for big history queries, so I'm leaving this log feature in to help diagnose users with a slow history page
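A sketch of what such a timing log can look like (logger setup and function shape are illustrative, not the component's exact code):
```
import logging
import time

_LOGGER = logging.getLogger(__name__)


def rows_to_native(rows):
    """Convert db rows to native objects, logging how long the conversion took."""
    start = time.perf_counter()
    states = [row.to_native() for row in rows]
    _LOGGER.debug(
        "converting %d rows to native objects took %fs",
        len(rows),
        time.perf_counter() - start,
    )
    return states
```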
* Rewrite of the "first synthetic datapoint" query for multiple entities
The old method was written in a manner that prevented an index from being used in the inner-most GROUP BY statement, causing massive performance issues especially when querying for a large time period.
The new query does have one material change that will cause it to return different results than before: instead of using max(state_id) to get the latest entry, we now get the max(last_updated). This is more appropriate (the primary key should not be assumed to be in order of event firing) and allows an index to be used on the inner-most query. I added another JOIN layer to account for cases where there are two entries with the exact same `last_updated` for a given entity. In this case we do use `state_id` as a tiebreaker.
For performance reasons the domain filters were also moved to the outermost query, as it's way more efficient to do it there than on the innermost query as before (due to indexing with GROUP BY problems)
The result is a query that only needs to do a filesort on the final result set, which will only be as many rows as there are entities.
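A hedged SQLAlchemy sketch of the reshaped query: the innermost layer takes max(last_updated) per entity (indexed), the next layer breaks ties on state_id, and the outer layer fetches the rows. The session and States model are assumed from the recorder, and the real query differs in detail (e.g. the domain filters mentioned above):
```
from sqlalchemy import and_, func


def most_recent_states_before(session, states, entity_ids, run_start):
    """Latest state row per entity at or before run_start."""
    latest_updated = (
        session.query(
            states.entity_id.label("entity_id"),
            func.max(states.last_updated).label("max_last_updated"),
        )
        .filter(states.last_updated < run_start, states.entity_id.in_(entity_ids))
        .group_by(states.entity_id)
        .subquery()
    )
    latest_ids = (
        session.query(func.max(states.state_id).label("max_state_id"))
        .join(
            latest_updated,
            and_(
                states.entity_id == latest_updated.c.entity_id,
                states.last_updated == latest_updated.c.max_last_updated,
            ),
        )
        .group_by(states.entity_id)
        .subquery()
    )
    return (
        session.query(states)
        .join(latest_ids, states.state_id == latest_ids.c.max_state_id)
        .all()
    )
```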
* Remove the ORDER BY entity_id when fetching states, and add logging
Having this ORDER BY in the query prevents it from using an index due to the range filter, so it has been removed.
We already do a `groupby` in the `states_to_json` method which accomplishes exactly what the ORDER BY in the query was trying to do anyway, so this change causes no functional difference.
Also added DEBUG-level logging to allow diagnosing a user's slow history page.
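For illustration, the kind of grouping that makes the SQL ORDER BY unnecessary (the function shape is illustrative):
```
from itertools import groupby


def group_states_by_entity(states):
    """Yield (entity_id, rows) pairs, the grouping states_to_json relies on."""
    for entity_id, rows in groupby(
        sorted(states, key=lambda state: state.entity_id),
        key=lambda state: state.entity_id,
    ):
        yield entity_id, list(rows)
```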
* Add DEBUG-level logging for the synthetic-first-datapoint query
For diagnosing a user's slow history page
* Missed a couple instances of `created` that should be `last_updated`
* Remove `entity_id` sorting from state_changes; match significant_update
This is the same change as 09b3498f41, but applied to the `state_changes_during_period` method, which I missed before. This should give the same performance boost to the history sensor component!
* Bugfix in History query used for History Sensor
The date filter was using a different column for the upper and lower bounds. It would work, but it would be slow!
* Update Recorder purge script to use more appropriate columns
Two reasons: 1. the `created` column's meaning is fairly arbitrary and does not represent when an event or state change actually occurred. It seems more correct to purge based on the event date than the time the database row was written.
2. The new columns are indexed, which will speed up this purge script by orders of magnitude
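A hedged sketch of purging on the indexed timestamps; purge_before is a timezone-aware cutoff, and the models are assumed from the recorder schema:
```
def purge_old_data(session, events, states, purge_before):
    """Delete states/events older than purge_before using the indexed columns."""
    deleted_states = (
        session.query(states)
        .filter(states.last_updated < purge_before)
        .delete(synchronize_session=False)
    )
    deleted_events = (
        session.query(events)
        .filter(events.time_fired < purge_before)
        .delete(synchronize_session=False)
    )
    session.commit()
    return deleted_states, deleted_events
```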
* Updating db model to match new query optimizations
A few things here: 1. New schema version with a new index and several removed indexes
2. A new method in the migration script to drop old indexes
3. Added an INFO-level log message when a new index will be added, as this can take quite some time on a Raspberry Pi
* New indexes for states table
* Added recorder_runs indexes
* Created a new function for compound indexes.
A separate function keeps single-field index creation a little cleaner, since the caller
doesn't have to pass a list. The difference is mostly in how the index name is built,
so with a bit more logic it would be possible to combine the two into one function.
Given how often migration changes are run, I thought the code bloat was probably a
worthy trade-off for now.
* Adjusted indexes, POC for ref indexes by name.
* Corrected lint errors
* Fixed pydocstyle error
* Moved create_index function outside apply_update
* Moved to single line (just barely)
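A minimal sketch of such a combined helper, assuming the table is reflected via SQLAlchemy; the exact name-building and reflection approach in the migration code may differ:
```
from sqlalchemy import Index, MetaData, Table


def create_index(engine, table_name: str, *column_names: str) -> None:
    """Create ix_<table>_<columns joined by _> over one or more columns."""
    table = Table(table_name, MetaData(), autoload_with=engine)
    index_name = f"ix_{table_name}_{'_'.join(column_names)}"
    Index(index_name, *(table.c[name] for name in column_names)).create(engine)
```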
* Index events time_fired to improve logbook perf.
* Updated implementation to track schema versions
* Added tests for schema migration support logic
* Rename check_schema to migrate_schema
* Update recorder.
models.py:
- Use scoped_session in models.py to fix shutdown error
__init__.py:
- Session _commit & retry method
- Single session var for purge_data
- Ensure single _INSTANCE
- Repeat purge every 2 days
- Show correct time in log_error
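A rough sketch of the commit-and-retry helper; the retry count, sleep, and logging are illustrative:
```
import logging
import time

from sqlalchemy.exc import SQLAlchemyError

_LOGGER = logging.getLogger(__name__)


def commit(session, max_retries: int = 3) -> bool:
    """Commit the session, rolling back and retrying on errors."""
    for attempt in range(1, max_retries + 1):
        try:
            session.commit()
            return True
        except SQLAlchemyError as err:
            _LOGGER.error("Error committing (attempt %d of %d): %s", attempt, max_retries, err)
            session.rollback()
            time.sleep(1)
    return False
```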
* _commit
* Restore models to old functionality, swap purge, remove _INSTANCE cleanup from tests, typing ignore Base class
* pylint
* Remove recorder from model unit test
* Switch to SQLAlchemy for the Recorder component. Gives the ability to use MySQL or other databases.
* fixes for failed lint
* add conversion script
* code review fixes and refactor to use to_native() model methods and execute() helper
* move script to homeassistant.scripts module
* style fixes my tox lint/flake8 missed
* move exclusion up
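In essence, the switch lets the recorder build its engine from a configurable database URL, so SQLite, MySQL, or PostgreSQL all work; a minimal sketch (the default SQLite filename matches Home Assistant's, the rest is illustrative):
```
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker


def setup_connection(db_url: str = "sqlite:///home-assistant_v2.db"):
    """Create the engine and session factory for the given database URL."""
    engine = create_engine(db_url)
    return engine, scoped_session(sessionmaker(bind=engine))
```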