Handle cancelation in wait_for_ble_connections_free
If `wait_for_ble_connections_free` was canceled because of a timeout or
because the ESP disconnected from Home Assistant, the future it was
waiting on would be canceled. When we reconnect and receive the next
callback we need to handle the future already being done.
fixes
```
2023-03-21 02:34:36.876 ERROR (MainThread) [homeassistant] Error doing job: Fatal error: protocol.data_received() call failed.
Traceback (most recent call last):
File "/usr/local/lib/python3.10/asyncio/selector_events.py", line 868, in _read_ready__data_received
self._protocol.data_received(data)
File "/usr/local/lib/python3.10/site-packages/aioesphomeapi/_frame_helper.py", line 195, in data_received
self._callback_packet(msg_type_int, bytes(packet_data))
File "/usr/local/lib/python3.10/site-packages/aioesphomeapi/_frame_helper.py", line 110, in _callback_packet
self._on_pkt(Packet(type_, data))
File "/usr/local/lib/python3.10/site-packages/aioesphomeapi/connection.py", line 688, in _process_packet
handler(msg)
File "/usr/local/lib/python3.10/site-packages/aioesphomeapi/client.py", line 482, in on_msg
on_bluetooth_connections_free_update(resp.free, resp.limit)
File "/usr/src/homeassistant/homeassistant/components/esphome/entry_data.py", line 136, in async_update_ble_connection_limits
fut.set_result(free)
asyncio.exceptions.InvalidStateError: invalid state
```
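A minimal sketch of the guard, assuming the entry data keeps the pending futures in a list named `_ble_connection_free_futures` (the class and attribute names here are illustrative):
```python
import asyncio


class RuntimeEntryData:
    """Sketch of entry data holding waiters for a free BLE connection slot."""

    def __init__(self) -> None:
        # Futures created by wait_for_ble_connections_free (name assumed).
        self._ble_connection_free_futures: list[asyncio.Future[int]] = []

    def async_update_ble_connection_limits(self, free: int, limit: int) -> None:
        """Resolve pending waiters when the ESP reports its connection limits."""
        for fut in self._ble_connection_free_futures:
            # The future may already be done (canceled by a timeout or by the
            # ESP disconnecting) before this callback fires; only resolve it if
            # it is still pending, otherwise set_result raises the
            # InvalidStateError shown in the traceback above.
            if not fut.done():
                fut.set_result(free)
        self._ble_connection_free_futures.clear()
```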
* Adds base code for matter lock
* Adds basic matter door lock support
* Adds matter lock fixture
* Adds tests for matter lock
* Addresses feedback
* Added logic to handle intermediate states of matter lock
* Addresses feedback
* Introduce a delay between update entity calls
* Update homeassistant/components/zwave_js/update.py
Co-authored-by: Martin Hjelmare <marhje52@gmail.com>
* move delay to constant and patch
* rename constant
* Switch to async_call_later
* Remove failing test
* Reimplement to solve task problem
* comment
* pass count directly so that value doesn't mutate before we store it
* lines
* Fix logic and tests
* Comments
* Readd missed coverage
* Add test for delays
* cleanup
* Fix async_added_to_hass logic
* flip conditional
* Store firmware info in extra data so we can restore it along with latest version
* Comment
* comment
* Add test for is_running check and fix bugs
* comment
* Add tests for various restore state scenarios
* move comment so it's less confusing
* improve typing
* consolidate into constant and remove unused one
* Update update.py
* update test to unknown state during partial restore
* fix elif check
* Fix type
* clean up test docstrings and function names
---------
Co-authored-by: Martin Hjelmare <marhje52@gmail.com>
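As a rough illustration of the delay mechanism, a sketch using `async_call_later` (the constant name, delay value, and entity method are assumptions, not the integration's actual names):
```python
from homeassistant.core import HomeAssistant, callback
from homeassistant.helpers.event import async_call_later

# Assumed constant name and value; the real delay is defined in the integration.
DELAY_BETWEEN_UPDATE_CHECKS = 5  # seconds


@callback
def schedule_update_check(hass: HomeAssistant, update_entity) -> None:
    """Schedule the next firmware update check after a fixed delay (sketch)."""

    async def _check(_now) -> None:
        # Hypothetical coroutine standing in for the entity's update check.
        await update_entity.async_check_for_firmware_update()

    # async_call_later returns an unsubscribe callback; storing it lets the
    # entity cancel the pending check when it is removed from hass.
    update_entity.unsub_check = async_call_later(
        hass, DELAY_BETWEEN_UPDATE_CHECKS, _check
    )
```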
If a user manually migrated their database to MySQL or PostgreSQL
and incorrectly created the timestamp columns as float, we would
fail to correct them to double, because when we migrated the columns
to use timestamps I missed that we also needed to change the column
types for µs precision.
- If the user had previously duplicated data we could end up
picking the next metadata_id, and there could be stale rows
in the database that already have that metadata_id. This can only
happen from bad manual migrations (which is what this function is
validating in the first place). To solve this we now insert
data with a future date and look at the latest inserted row
instead of the first.
Example
```
['stored_statistics',
defaultdict(<class 'list'>,
{'recorder.db_test_schema': [{'end': 948589200.0,
'last_reset': None,
'max': None,
'mean': 2021.0,
'min': None,
'start': 948585600.0,
'state': None,
'sum': 394.5068},
{'end': 1601946000.000001,
'last_reset': 1601942400.000001,
'max': 1.000000000000001,
'mean': 1.000000000000001,
'min': 1.000000000000001,
'start': 1601942400.000001,
'state': 1.000000000000001,
'sum': 1.000000000000001}]})]
```
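A rough sketch of the probe, assuming a SQLAlchemy ORM statistics class with `metadata_id`, `start_ts`, and `mean` columns (names illustrative, not the recorder's actual schema helpers); the point is ordering by start descending so a stale row cannot shadow the probe:
```python
from datetime import datetime, timedelta, timezone

from sqlalchemy import select
from sqlalchemy.orm import Session

# Probe value matching the µs-precision example above.
PRECISE_NUMBER = 1.000000000000001


def validate_double_precision(session: Session, stats_cls, metadata_id: int) -> bool:
    """Insert a probe row dated far in the future, then read back the *latest*
    row for this metadata_id, so stale rows left behind by a bad manual
    migration cannot be picked up instead of the probe."""
    future_start = datetime.now(timezone.utc) + timedelta(days=3650)
    probe = stats_cls(
        metadata_id=metadata_id,
        start_ts=future_start.timestamp() + 0.000001,
        mean=PRECISE_NUMBER,
    )
    session.add(probe)
    session.flush()

    latest = session.execute(
        select(stats_cls)
        .where(stats_cls.metadata_id == metadata_id)
        .order_by(stats_cls.start_ts.desc())
        .limit(1)
    ).scalar_one()
    precision_ok = latest.mean == PRECISE_NUMBER
    session.rollback()  # the probe row is only used for validation
    return precision_ok
```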
Update the calendar event trigger logic to have more exhaustive coverage. The
trigger now uses a timespan to create an explicit window for considering
upcoming events. The start/end of the timespan is now explicit, rather
than being derived from the alarm time.
The trigger is now broken into composable pieces:
- A timespan object for more explicitly managing the time window
- A function to get events during a time span
- A function to process upcoming events and determine the trigger times
The existing listener is now just responsible for scheduling alarms and glue.
This fixes a bug with DST handling where the conversion back and forth between
UTC and the local timezone ended up dropping events during the spring-forward
jump. In practice, an event was returned from the scan, but it was never fired
by the trigger because (1) it was filtered out of the interval and (2) the
event list was previously cleared every iteration, so it would get dropped.
Future improvements can bake more invariant checking into this structure.
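A simplified sketch of the composable pieces described above (the class, function, and attribute names are illustrative, not the trigger's actual API):
```python
from __future__ import annotations

from dataclasses import dataclass
import datetime


@dataclass(frozen=True)
class Timespan:
    """An explicit [start, end) window in UTC used when scanning for events."""

    start: datetime.datetime
    end: datetime.datetime

    def includes(self, when: datetime.datetime) -> bool:
        return self.start <= when < self.end

    def next_upcoming(self, now: datetime.datetime, interval: datetime.timedelta) -> Timespan:
        """Return the next window, advanced past `now`, so scanning never skips
        or double-counts a span around a DST transition (all math stays in UTC)."""
        return Timespan(self.end, max(now, self.end) + interval)


async def async_get_events_in_span(calendar, span: Timespan):
    """Fetch events overlapping the window (hypothetical calendar API)."""
    return await calendar.async_get_events(span.start, span.end)


def get_trigger_times(events, span: Timespan, offset: datetime.timedelta) -> list[datetime.datetime]:
    """Map events to trigger times (event start plus offset) inside the window."""
    times = []
    for event in events:
        trigger_time = event.start + offset  # event.start assumed tz-aware UTC
        if span.includes(trigger_time):
            times.append(trigger_time)
    return sorted(times)
```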
* shield Reolink webhook callback from cancelation
* Update homeassistant/components/reolink/host.py
Co-authored-by: Paulus Schoutsen <paulus@home-assistant.io>
* fix styling
* fix black
* Revert to using asyncio.shield
---------
Co-authored-by: Paulus Schoutsen <paulus@home-assistant.io>
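A minimal sketch of the shielding pattern, with a hypothetical processing coroutine standing in for the Reolink host logic:
```python
import asyncio

from aiohttp import web


async def handle_webhook(hass, webhook_id: str, request: web.Request) -> None:
    """Sketch of the webhook handler: processing is shielded so cancelation of
    this handler (e.g. the client dropping the HTTP connection) does not
    cancel the work on the camera event."""
    data = await request.text()
    # asyncio.shield wraps the coroutine in its own task; if the outer await
    # is canceled, the inner task keeps running to completion.
    await asyncio.shield(_async_process_webhook(hass, webhook_id, data))


async def _async_process_webhook(hass, webhook_id: str, data: str) -> None:
    """Hypothetical stand-in for the Reolink host's webhook processing."""
    ...
```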
Set unique on StatesMeta and EventTypes
These should have been marked unique originally to prevent
collision bugs from going unnoticed. These changes have not been
in a beta yet, so this is not a breaking change.
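For reference, marking a lookup column unique in SQLAlchemy looks roughly like this (a sketch; the column names follow the tables' purpose and are not copied from the schema):
```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class EventTypes(Base):
    """Sketch of a lookup table whose value column is marked unique so that
    accidental duplicate rows fail loudly instead of colliding silently."""

    __tablename__ = "event_types"

    event_type_id = Column(Integer, primary_key=True)
    # unique=True creates a unique index on the lookup value.
    event_type = Column(String(64), unique=True)


class StatesMeta(Base):
    """Same pattern for the states metadata lookup."""

    __tablename__ = "states_meta"

    metadata_id = Column(Integer, primary_key=True)
    entity_id = Column(String(255), unique=True)
```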
* Fix CPU thrashing during purge after all legacy events were removed
We now remove the index of event ids on the states table when it is
all NULLs to save space. The purge path needs to avoid checking for legacy
rows to purge once the index has been removed, since doing so results in a
full table scan each purge cycle that always finds no legacy rows to purge
* one more place
* drop the key constraint as well
* fixes
* more sqlite
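Conceptually, the purge guard looks something like this (a sketch; the flag and query shown are illustrative, not the recorder's actual code):
```python
from sqlalchemy import text
from sqlalchemy.orm import Session


def find_legacy_rows_to_purge(session: Session, legacy_index_exists: bool) -> list[int]:
    """Return candidate legacy state rows, but only while the event_id index
    still exists; once it has been dropped this lookup would be a full table
    scan that always comes back empty."""
    if not legacy_index_exists:
        return []
    rows = session.execute(
        text("SELECT state_id FROM states WHERE event_id IS NOT NULL LIMIT 1000")
    )
    return [row[0] for row in rows]
```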
* Avoid database executor job to fetch statistic metadata on cache hit
Since we will almost always have a cache hit when fetching
statistic metadata, we can avoid an executor job
* remove exception catch since the threading.excepthook will actually catch this in production
* fix a few missed ones
* threadsafe
* Update homeassistant/components/recorder/table_managers/statistics_meta.py
* coverage and optimistic caching
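A minimal sketch of the cache-hit fast path (the class, method, and helper names here are illustrative):
```python
from __future__ import annotations


def _fetch_metadata_from_db(statistic_id: str) -> dict:
    """Placeholder for the blocking database query run in the db executor."""
    return {}


class StatisticsMetaCache:
    """In-memory cache in front of the statistics_meta table."""

    def __init__(self) -> None:
        self._stat_id_to_meta: dict[str, dict] = {}

    def get(self, statistic_id: str) -> dict | None:
        return self._stat_id_to_meta.get(statistic_id)

    def update(self, statistic_id: str, meta: dict) -> None:
        # Optimistically cache metadata as soon as it is fetched or written.
        self._stat_id_to_meta[statistic_id] = meta


async def async_get_metadata(hass, cache: StatisticsMetaCache, statistic_id: str) -> dict:
    """Serve metadata from the cache when possible; only dispatch an executor
    job to the database on a miss."""
    if (meta := cache.get(statistic_id)) is not None:
        return meta  # cache hit: no executor job needed
    meta = await hass.async_add_executor_job(_fetch_metadata_from_db, statistic_id)
    cache.update(statistic_id, meta)
    return meta
```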