[matter] Initial contribution (#18486)
Signed-off-by: Dan Cunningham <dan@digitaldan.com>
parent f0b681ffb2
commit f88de93591
@ -217,6 +217,7 @@
/bundles/org.openhab.binding.luxtronikheatpump/ @sgiehl
/bundles/org.openhab.binding.magentatv/ @markus7017
/bundles/org.openhab.binding.mail/ @J-N-K
/bundles/org.openhab.binding.matter/ @digitaldan
/bundles/org.openhab.binding.max/ @marcelrv
/bundles/org.openhab.binding.mcd/ @simon-dengler
/bundles/org.openhab.binding.mcp23017/ @aogorek
@ -1071,6 +1071,11 @@
        <artifactId>org.openhab.binding.mail</artifactId>
        <version>${project.version}</version>
      </dependency>
      <dependency>
        <groupId>org.openhab.addons.bundles</groupId>
        <artifactId>org.openhab.binding.matter</artifactId>
        <version>${project.version}</version>
      </dependency>
      <dependency>
        <groupId>org.openhab.addons.bundles</groupId>
        <artifactId>org.openhab.binding.max</artifactId>
@ -0,0 +1,13 @@
# Node.js dependencies
code-gen/node_modules/
matter-server/node_modules/

# Node.js bin dirs
code-gen/node
matter-server/node

# Build output directories
code-gen/dist/
code-gen/out/
matter-server/dist/
matter-server/out/
@ -0,0 +1,214 @@
# Development Guide

This document describes how to set up your development environment and contribute to the project. The project consists of three main components:

1. Code Generation Tool
2. Matter.js WebSocket Service
3. openHAB Java Add-on

## General Build Requirements

- Java 17 or higher
- Node.js 18 or higher
- npm 9 or higher

## Building the Project

The project uses Maven as the primary build tool. To build all components:

```bash
mvn clean install
```

### Maven Build Process

The `mvn clean install` command executes several steps to build the WebSocket server and package everything together.
By default, this will not regenerate the Matter cluster classes. To regenerate the cluster classes, use the `code-gen` profile:

```bash
mvn clean install -P code-gen
```

The following Maven steps are executed:

1. **Clean Phase**
   - Without `-P code-gen`: Cleans only standard build output directories
   - With `-P code-gen`: Additionally cleans:
     - The `code-gen/out` directory
     - Generated Java classes in `src/main/java/org/openhab/binding/matter/internal/client/dto/cluster/gen`

2. **Generate Resources Phase**
   - Sets up the Node.js and npm environment
   - Installs Matter server dependencies
   - Builds the Matter server using webpack
   - Copies the built `matter.js` to the appropriate resource directory for inclusion in the final JAR

3. **Generate Sources Phase** (only with `-P code-gen`)
   - Runs the code generation tool:
     1. Installs code-gen npm dependencies
     2. Runs the main `app.ts`, which uses custom Handlebars templates to generate code from the Matter.js SDK definitions
     3. Moves generated Java classes to `src/main/java/.../internal/client/dto/cluster/gen`
     4. Cleans up temporary output directories

4. **Process Sources Phase** (only with `-P code-gen`)
   - Formats generated code using Spotless

5. **Compile and Package**
   - Compiles Java sources

## Project Components

### 1. Code Generation Tool (`code-gen/`)

#### Purpose

The code generation tool is responsible for creating Java classes from the Matter.js SDK definitions. It processes the Matter protocol specifications and generates type-safe Java code that represents Matter clusters, attributes, and commands.

#### Architecture

- Located in the `code-gen/` directory
- Uses TypeScript for the code generation logic (see `code-gen/app.ts`)
- Uses Handlebars templates for Java code generation (see `code-gen/templates`)
- Processes definitions directly from the Matter.js SDK (`Matter.children....`)

#### Building and Running

```bash
cd code-gen
npm install
npm run build
```

The generated Java classes will be placed in the openHAB add-on's source directory.

### 2. Matter.js WebSocket Service (`matter-server/`)

#### Purpose

The Matter.js WebSocket service acts as a bridge between the openHAB binding and the Matter.js SDK. It provides a WebSocket interface that allows the Java binding to communicate with Matter devices through the Matter.js protocol implementation.

#### Architecture

- WebSocket server implementation in TypeScript
- Two main operation modes:
  - Client Controller: Acts as a Matter controller, allowing communication with Matter devices
  - Bridge Controller: Acts as a Matter bridge node, exposing non-Matter devices (openHAB items) as endpoints for third-party clients to control. This binds to the default Matter port by default.
- The modes are independent of each other and create their own Matter instances
- Real-time event system for device state updates

#### WebSocket Protocol

##### Connection Establishment

1. The client connects to the WebSocket server with query parameters:
   - `service`: Either 'bridge' or 'client'
   - For client mode: `controllerName` parameter required
   - For bridge mode: `uniqueId` parameter required
2. The server validates the connection parameters and initializes the appropriate controller
3. The server sends a 'ready' event when the controller is initialized
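The connection parameters above can be sketched as a small URL builder (the host and port here are illustrative assumptions; the parameter names follow the protocol description):

```typescript
// Build the WebSocket URL for connecting to the matter-server.
// Host and port are illustrative defaults, not fixed by the protocol.
function connectionUrl(
  service: "bridge" | "client",
  opts: { controllerName?: string; uniqueId?: string },
  host = "localhost",
  port = 8888
): string {
  const params = new URLSearchParams({ service });
  // Client mode identifies itself by controller name,
  // bridge mode by a unique id.
  if (service === "client" && opts.controllerName !== undefined) {
    params.set("controllerName", opts.controllerName);
  }
  if (service === "bridge" && opts.uniqueId !== undefined) {
    params.set("uniqueId", opts.uniqueId);
  }
  return `ws://${host}:${port}?${params.toString()}`;
}
```

For example, `connectionUrl("client", { controllerName: "main" })` produces `ws://localhost:8888?service=client&controllerName=main`.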
##### Message Types

###### Requests

```typescript
{
    id: string;        // Unique request identifier, echoed in the response to track messages
    namespace: string; // Command RPC namespace
    function: string;  // Function to execute in the namespace
    args?: any[];      // Optional function arguments
}
```

###### Responses

```typescript
{
    type: string; // "response"
    message: {
        type: string;   // "result", "resultError", "resultSuccess"
        id: string;     // Matching ID from the original request
        result?: any;   // Operation result (if successful)
        error?: string; // Error message (if failed)
    }
}
```

###### Events

```typescript
{
    type: string; // "event"
    message: {
        type: string; // Event type (see below)
        data?: any;   // Event data (string, number, boolean, map, etc.)
    }
}
```
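Putting the request and response shapes together, a client correlates each response to its request by `id`. A minimal sketch (the transport is omitted, and the namespace/function values used in the comments are placeholders, not documented RPC names):

```typescript
interface Request {
  id: string;        // unique request identifier
  namespace: string; // command RPC namespace
  function: string;  // function to execute in the namespace
  args?: any[];      // optional function arguments
}

let nextId = 0;
const pending = new Map<string, Request>();

// Build a request envelope and remember it until its response arrives.
function buildRequest(namespace: string, fn: string, args?: any[]): Request {
  const req: Request = { id: String(++nextId), namespace, function: fn, args };
  pending.set(req.id, req);
  return req;
}

// Match an incoming "response" message back to the request with the same id.
// Event messages are ignored here; they carry no request id.
function matchResponse(msg: {
  type: string;
  message: { type: string; id: string; result?: any; error?: string };
}): Request | undefined {
  if (msg.type !== "response") {
    return undefined;
  }
  const req = pending.get(msg.message.id);
  pending.delete(msg.message.id);
  return req;
}
```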
##### Event Types

- `attributeChanged`: Device attribute value updates
- `eventTriggered`: Device-triggered events
- `nodeStateInformation`: Device connection state changes
- `nodeData`: Device data updates (cluster and attributes)
- `bridgeEvent`: Bridge-related events

##### Node States

- `Connected`: Node is connected and ready for querying of data
- `Disconnected`: Node is disconnected
- `Reconnecting`: Node is attempting to reconnect (but still offline)
- `WaitingForDeviceDiscovery`: Waiting for an mDNS announcement (so still offline)
- `StructureChanged`: Node structure has been modified
- `Decommissioned`: Node has been decommissioned

#### Components

- `app.ts`: Main server implementation and WebSocket handling
  - Manages WebSocket connections
  - Handles message routing
  - Implements the connection lifecycle
- `Controller.ts`: Base abstract controller functionality (implemented by client and bridge)
  - Common controller operations
  - Message handling framework
  - Handles looking up namespaces and functions for remote commands
- `client/`: Matter controller functionality
- `bridge/`: Matter bridge functionality
- `util/`: Shared utilities and helper functions

#### Building and Running

```bash
cd matter-server
npm install
npm run webpack
```

Server configuration options:

- `--port`: WebSocket server port (default: 8888)
- `--host`: Server host address

#### Error Handling

- Connection errors trigger immediate WebSocket closure
- Operation errors are returned in response messages
- Node state changes are communicated via events
- Automatic reconnection for temporary disconnections
- Parent process monitoring for clean shutdown

### 3. openHAB Matter Binding (`src/`)

#### Purpose

The openHAB Matter binding provides integration between openHAB and Matter devices. It implements the openHAB binding framework and communicates with Matter devices through the Matter.js WebSocket service.

#### Architecture

##### Shared Client Code

- Location: `src/main/java/.../internal/client/`
- Handles WebSocket communication with the Matter server
- Implements message serialization/deserialization
- Manages the connection lifecycle

##### Controller Code

- Location: `src/main/java/.../internal/controller/`
- Implements Matter device control logic
- Manages device state and commands through "converter" classes

##### Bridge Code

- Location: `src/main/java/.../internal/bridge/`
- Implements openHAB Matter bridge functionality
- Uses Item metadata tags to identify Items to expose (similar to HomeKit, Alexa, Google Assistant, etc.)
- Handles device pairing and commissioning of third-party controllers (Amazon, Apple, Google, etc.)

#### Building

```bash
mvn clean install
```
@ -0,0 +1,20 @@
This content is produced and maintained by the openHAB project.

* Project home: https://www.openhab.org

== Declared Project Licenses

This program and the accompanying materials are made available under the terms
of the Eclipse Public License 2.0 which is available at
https://www.eclipse.org/legal/epl-2.0/.

== Source Code

https://github.com/openhab/openhab-addons

== Third-party Content

matter.js
* License: Apache 2.0
* Project: https://github.com/project-chip/matter.js
* Source: https://github.com/project-chip/matter.js
@ -0,0 +1,535 @@
# Matter Binding

The Matter Binding for openHAB allows seamless integration with Matter-compatible devices.

## Supported functionality

This binding supports two different types of Matter functionality, which operate independently of each other.

- [Matter Client](#matter-client)
  - This allows openHAB to discover and control other Matter devices like lights, thermostats, window coverings, locks, etc.

- [Matter Bridge](#matter-bridge)
  - This allows openHAB to expose items as Matter devices to other Matter clients.
    This allows local control of openHAB devices from other ecosystems like Apple Home, Amazon, and Google Home.

For more information on the Matter specification, see the [Matter Ecosystem Overview](#matter-ecosystem-overview) section at the end of this document.

## Matter.JS Runtime

This binding uses the excellent [matter.js](https://github.com/project-chip/matter.js) implementation of the Matter 1.4 protocol.

As such, this binding requires Node.js 18+ and will attempt to download and cache an appropriate version when started if one is not already installed on the system.
Alpine Linux users (typically Docker) and those on older Linux distributions will need to install this manually, as the official Node.js builds are not compatible.

## Matter and IPv6

Matter **requires** IPv6 to be enabled and routable between openHAB and the Matter device.
This means IPv6 needs to be enabled on the host openHAB is running on, and the network must be able to route IPv6 unicast and multicast messages.
Docker, VLANs, subnets and other configurations can prevent Matter from working if not configured correctly.

# Matter Client

This describes the Matter controller functionality for discovering and controlling Matter devices.

## Supported Things

The Matter Binding supports the following types of things:

- `controller`: The main controller that interfaces with Matter devices.
  It requires the configuration parameter `nodeId`, which sets the local Matter node ID for this controller (must be unique in the fabric).
  **This must be added manually.**
- `node`: Represents an individual node within the Matter network.
  The only configuration parameter is `nodeId`.
  A standard node will map Matter endpoints to openHAB channel groups.
  **This will be discovered automatically** when a pairing code is used to scan for a device and should not be added manually.
- `endpoint`: Represents a standalone endpoint as a child of a `node` thing.
  Only endpoints exposed by Matter bridges will be added as `endpoint` things; otherwise Matter endpoints are mapped on a `node` thing as channel groups.
  An `endpoint` thing **will be discovered automatically** when a node is added that has multiple bridged endpoints and should not be added manually.

## Discovery

Matter controllers must be added manually.
Nodes (devices) will be discovered when a `pairCode` is used to search for a device to add.
Bridged endpoints will be added to the inbox once the parent node is added as a thing.

### Device Pairing: General

The pairing action can be found in the settings of the "Controller" thing under "Actions" -> "Pair Matter Device".

<img src="doc/pairing.png" alt="Matter Pairing" width="600"/>

This action will give feedback on the pairing process; if successful, a device will be added to the Inbox.

See [Device Pairing: Code Types](#device-pairing-code-types) for more information on pairing codes and code formats.

The same codes can also be used in the openHAB Thing discovery UI, although feedback is limited and only a single controller is supported.

<img src="doc/thing-discovery.png" alt="Thing Discovery" width="600"/>

### Device Pairing: Code Types

In order to pair (commission, in Matter terminology) a device, you must have an 11-digit manual pairing code (e.g. 123-4567-8901 or 12345678901) or a QR code (e.g. MT:ABCDEF1234567890123).
If the device has not been paired before, use the code provided by the manufacturer and **ensure the device is in pairing mode**; refer to your device's pairing instructions for more information.
You can include dashes or omit them in a manual pairing code.
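The accepted manual code formats can be illustrated with a small normalization sketch (illustrative only; the binding performs its own validation):

```typescript
// Normalize a manual pairing code: dashes are optional, but the
// result must be exactly 11 digits (e.g. "123-4567-8901" -> "12345678901").
// QR codes (the "MT:..." form) are a separate format and are not handled here.
function normalizeManualPairingCode(code: string): string {
  const digits = code.replace(/-/g, "");
  if (!/^\d{11}$/.test(digits)) {
    throw new Error(`Not an 11-digit manual pairing code: ${code}`);
  }
  return digits;
}
```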
If the device is paired with another Matter ecosystem (Apple, Google, Amazon, etc.), you must use that ecosystem to generate a new pairing code and search for devices.
The pairing code and device will only be available for commissioning for a limited time.
Refer to the ecosystem that generated the code for the exact duration (typically 5-15 minutes). In this case, openHAB still talks directly to the device and is not associated with that existing ecosystem.

If the device seems to be found in the logs but cannot be added, it's possible the device has already been paired.
Hard resetting the device may help in this case.
See your device documentation for how to hard reset the device.

### Device Pairing: Thread Devices

Thread devices require a Thread border router and a Bluetooth-enabled device to facilitate the Thread joining process (typically a mobile device).
Until there is a supported Thread border router integration in openHAB and the openHAB mobile apps, it's strongly recommended to pair the device to a commercial router with Thread support first (Apple TV 4K, Google Nest Hub 2, Amazon Gen 4 Echo, etc.), then generate a Matter pairing code using that ecosystem and add the device normally.
This still allows openHAB direct access to the device using only the embedded Thread border router and does not interact with the underlying provider's home automation stack.

Support for using an OpenThread Border Router has been verified to work and will be coming soon to openHAB, but in some cases it requires strong expertise in IPv6 routing as well as support in our mobile clients.

### Enabling IPv6 Thread Connectivity on Linux Hosts

It is important to make sure that Router Advertisements (RA) and Route Information Options (RIO) are enabled on your host so that Thread border routers can announce routes to the Thread network.
This is done by setting the following sysctl options:

1. `net.ipv6.conf.wlan0.accept_ra` should be at least `1` if IP forwarding is not enabled, and `2` otherwise.
2. `net.ipv6.conf.wlan0.accept_ra_rt_info_max_plen` should not be smaller than `64`.

`accept_ra` defaults to `1` on most distributions.

There may be other network daemons which override this option (for example, dhcpcd on Raspberry Pi will override `accept_ra` to 0).

You can check the `accept_ra` value with:

```shell
$ sudo sysctl -n net.ipv6.conf.wlan0.accept_ra
0
```

And set the value to 1 (or 2 in case IP forwarding is enabled) with:

```shell
$ sudo sysctl -w net.ipv6.conf.wlan0.accept_ra=1
net.ipv6.conf.wlan0.accept_ra = 1
```

The `accept_ra_rt_info_max_plen` option on most Linux distributions defaults to 0; set it to 64 with:

```shell
$ sudo sysctl -w net.ipv6.conf.wlan0.accept_ra_rt_info_max_plen=64
net.ipv6.conf.wlan0.accept_ra_rt_info_max_plen = 64
```

To make these changes permanent, add the following lines to `/etc/sysctl.conf`:

```ini
net.ipv6.conf.eth0.accept_ra=1
net.ipv6.conf.eth0.accept_ra_rt_info_max_plen=64
```

Raspberry Pi users may need to add the following lines to `/etc/dhcpcd.conf` to prevent dhcpcd from overriding the `accept_ra` value:

```ini
noipv6
noipv6rs
```

***NOTE: Please ensure you use the right interface name for your network interface.*** The above examples use `wlan0` and `eth0` as examples.
You can find the correct interface name by running `ip a` and looking for the interface that has an IPv6 address assigned to it.
## Thing Configuration

### Controller Thing Configuration

The controller thing must be created manually before devices can be discovered.

| Name   | Type   | Description                            | Default | Required | Advanced |
|--------|--------|----------------------------------------|---------|----------|----------|
| nodeId | number | The Matter node ID for this controller | 0       | yes      | no       |

Note: The controller `nodeId` must not be changed after a controller is created.

### Node Thing Configuration

Nodes are discovered automatically (see [Discovery](#discovery) for more information) and should not be added manually.

| Name   | Type | Description             | Default | Required | Advanced |
|--------|------|-------------------------|---------|----------|----------|
| nodeId | text | The node ID of the node | N/A     | yes      | no       |

### Endpoint Thing Configuration

Endpoints are discovered automatically once their parent node has been added (see [Discovery](#discovery) for more information) and should not be added manually.

| Name       | Type   | Description                     | Default | Required | Advanced |
|------------|--------|---------------------------------|---------|----------|----------|
| endpointId | number | The endpoint ID within the node | N/A     | yes      | no       |

## Thing Actions

### Node Thing Actions

| Name | Description |
|------|-------------|
| Decommission Matter node from fabric | This will remove the device from the Matter fabric. If the device is online and reachable, this will attempt to remove the credentials from the device first before removing it from the network. Once a device is removed, this thing will go offline and can be removed. |
| Generate a new pairing code for a Matter device | Generates a new manual and QR pairing code to be used to pair the Matter device with an external Matter controller. |
| List Connected Matter Fabrics | This will list all the Matter fabrics this node belongs to. |
| Remove Connected Matter Fabric | This removes a connected Matter fabric from a device. Use the 'List Connected Matter Fabrics' action to retrieve the fabric index number. |

For nodes that contain a Thread Border Router Management Cluster, the following additional actions will be present:

| Name | Description |
|------|-------------|
| Thread: Load external operational dataset | Updates the local operational dataset configuration from a hex or JSON string for the node. Use the 'Thread: Push local operational dataset' action to push the dataset back to the device after loading. |
| Thread: Load operational dataset from device | Updates the local operational dataset configuration from the device. |
| Thread: Operational Dataset Generator | Generates a new operational dataset and optionally saves it locally. |
| Thread: Push local operational dataset | Pushes the local operational dataset configuration to the device. |

A Thread operational dataset is a hex-encoded string which contains a Thread border router's configuration.
Using the same operational dataset across multiple Thread border routers allows those routers to form a single network where Thread devices can roam from router to router.
Some Thread border routers allow a "pending" operational dataset to be configured; this allows routers to coordinate the configuration change with current Thread devices without requiring those devices to be reconfigured (live migration).

## Channels

### Controller Channels

Controllers have no channels.

### Node and Bridge Endpoint Channels

Channels are dynamically added based on the endpoint type and Matter cluster supported.
Each endpoint is represented as a channel group.
Possible channels include:
## Endpoint Channels

| Channel ID | Type | Label | Description | Category | ReadOnly | Pattern |
|------------|------|-------|-------------|----------|----------|---------|
| battery-voltage | Number:ElectricPotential | Battery Voltage | The current battery voltage | Energy | true | %.1f %unit% |
| battery-alarm | String | Battery Alarm | The battery alarm state | Energy | true | |
| powersource-batpercentremaining | Number:Dimensionless | Battery Percent Remaining | Indicates the estimated percentage of battery charge remaining until the battery will no longer be able to provide power to the node | Energy | true | %d %% |
| powersource-batchargelevel | Number | Battery Charge Level | Indicates a coarse ranking of the charge level of the battery, used to indicate when intervention is required | Energy | true | |
| booleanstate-statevalue | Switch | Boolean State | Indicates a boolean state value | Status | true | |
| colorcontrol-color | Color | Color | The color channel allows control of the color of a light. It is also possible to dim values and switch the light on and off. | ColorLight | | |
| colorcontrol-temperature | Dimmer | Color Temperature | Sets the color temperature of the light | ColorLight | | |
| colorcontrol-temperature-abs | Number:Temperature | Color Temperature | Sets the color temperature of the light in mirek | ColorLight | | %.0f %unit% |
| doorlock-lockstate | Switch | Door Lock State | Locks and unlocks the door and maintains the lock state | Door | | |
| fancontrol-fanmode | Number | Fan Mode | Set the fan mode | HVAC | | |
| onoffcontrol-onoff | Switch | Switch | Switches the power on and off | Light | | |
| levelcontrol-level | Dimmer | Dimmer | Sets the level of the light | Light | | |
| modeselect-mode | Number | Mode Select | Selection of 1 or more states | | | %d |
| switch-switch | Number | Switch | Indication of a switch or remote being activated | | true | %d |
| switch-switchlatched | Trigger | Switched Latched Trigger | This trigger shall indicate the new value of the CurrentPosition attribute as a JSON object, i.e. after the move. | | | |
| switch-initialpress | Trigger | Initial Press Trigger | This trigger shall indicate the new value of the CurrentPosition attribute as a JSON object, i.e. while pressed. | | | |
| switch-longpress | Trigger | Long Press Trigger | This trigger shall indicate the new value of the CurrentPosition attribute as a JSON object, i.e. while pressed. | | | |
| switch-shortrelease | Trigger | Short Release Trigger | This trigger shall indicate the previous value of the CurrentPosition attribute as a JSON object, i.e. just prior to release. | | | |
| switch-longrelease | Trigger | Long Release Trigger | This trigger shall indicate the previous value of the CurrentPosition attribute as a JSON object, i.e. just prior to release. | | | |
| switch-multipressongoing | Trigger | Multi-Press Ongoing Trigger | This trigger shall indicate 2 numeric fields as a JSON object. The first is the new value of the CurrentPosition attribute, i.e. while pressed. The second is the multi-press code with a value of N when the Nth press of a multi-press sequence has been detected. | | | |
| switch-multipresscomplete | Trigger | Multi-Press Complete Trigger | This trigger shall indicate 2 numeric fields as a JSON object. The first is the new value of the CurrentPosition attribute, i.e. while pressed. The second is how many times the momentary switch has been pressed in a multi-press sequence. | | | |
| thermostat-localtemperature | Number:Temperature | Local Temperature | Indicates the local temperature provided by the thermostat | HVAC | true | %.1f %unit% |
| thermostat-outdoortemperature | Number:Temperature | Outdoor Temperature | Indicates the outdoor temperature provided by the thermostat | HVAC | true | %.1f %unit% |
| thermostat-occupiedheating | Number:Temperature | Occupied Heating Setpoint | Set the heating temperature when the room is occupied | HVAC | | %.1f %unit% |
| thermostat-occupiedcooling | Number:Temperature | Occupied Cooling Setpoint | Set the cooling temperature when the room is occupied | HVAC | | %.1f %unit% |
| thermostat-unoccupiedheating | Number:Temperature | Unoccupied Heating Setpoint | Set the heating temperature when the room is unoccupied | HVAC | | %.1f %unit% |
| thermostat-unoccupiedcooling | Number:Temperature | Unoccupied Cooling Setpoint | Set the cooling temperature when the room is unoccupied | HVAC | | %.1f %unit% |
| thermostat-systemmode | Number | System Mode | Set the system mode of the thermostat | HVAC | | |
| thermostat-runningmode | Number | Running Mode | The running mode of the thermostat | HVAC | true | |
| windowcovering-lift | Rollershutter | Window Covering Lift | Sets the window covering level - supporting open/close and up/down type commands | Blinds | | %.0f %% |
| fancontrol-percent | Dimmer | Fan Control Percent | The current fan speed percentage level | HVAC | true | %.0f %% |
| fancontrol-mode | Number | Fan Control Mode | The current mode of the fan | HVAC | | |
| temperaturemeasurement-measuredvalue | Number:Temperature | Temperature | The measured temperature | Temperature | true | %.1f %unit% |
| occupancysensing-occupied | Switch | Occupancy | Indicates if an occupancy sensor is triggered | Presence | true | |
| relativehumiditymeasurement-measuredvalue | Number:Dimensionless | Humidity | The measured humidity | Humidity | true | %.0f %% |
| illuminancemeasurement-measuredvalue | Number:Illuminance | Illuminance | The measured illuminance in lux | Illuminance | true | %d %unit% |
| wifinetworkdiagnostics-rssi | Number:Power | Signal | Wi-Fi signal strength indicator | QualityOfService | true | %d %unit% |
| electricalpowermeasurement-activepower | Number:Power | Active Power | The active power measurement in watts | Energy | true | %.1f %unit% |
| electricalpowermeasurement-activecurrent | Number:ElectricCurrent | Active Current | The active current measurement in amperes | Energy | true | %.1f %unit% |
| electricalpowermeasurement-voltage | Number:ElectricPotential | Voltage | The voltage measurement in volts | Energy | true | %.2f %unit% |
| electricalenergymeasurement-energymeasurmement-energy | Number:Energy | Energy | The measured energy | Energy | true | %.1f %unit% |
| electricalenergymeasurement-cumulativeenergyimported-energy | Number:Energy | Cumulative Energy Imported | The cumulative energy imported measurement | Energy | true | %.1f %unit% |
| electricalenergymeasurement-cumulativeenergyexported-energy | Number:Energy | Cumulative Energy Exported | The cumulative energy exported measurement | Energy | true | %.1f %unit% |
| electricalenergymeasurement-periodicenergyimported-energy | Number:Energy | Periodic Energy Imported | The periodic energy imported measurement | Energy | true | %.1f %unit% |
| electricalenergymeasurement-periodicenergyexported-energy | Number:Energy | Periodic Energy Exported | The periodic energy exported measurement | Energy | true | %.1f %unit% |
| threadnetworkdiagnostics-channel | Number | Channel | The Thread network channel | Network | true | %d |
| threadnetworkdiagnostics-routingrole | Number | Routing Role | The Thread routing role (0=Unspecified, 1=Unassigned, 2=Sleepy End Device, 3=End Device, 4=Reed, 5=Router, 6=Leader) | Network | true | %d |
| threadnetworkdiagnostics-networkname | String | Network Name | The Thread network name | Network | true | |
| threadnetworkdiagnostics-panid | Number | PAN ID | The Thread network PAN ID | Network | true | %d |
| threadnetworkdiagnostics-extendedpanid | Number | Extended PAN ID | The Thread network extended PAN ID | Network | true | %d |
| threadnetworkdiagnostics-rloc16 | Number | RLOC16 | The Thread network RLOC16 address | Network | true | %d |
| threadborderroutermanagement-borderroutername | String | Border Router Name | The name of the Thread border router | Network | true | |
| threadborderroutermanagement-borderagentid | String | Border Agent ID | The unique identifier of the Thread border agent | Network | true | |
| threadborderroutermanagement-threadversion | Number | Thread Version | The version of the Thread protocol being used | Network | true | %d |
| threadborderroutermanagement-interfaceenabled | Switch | Interface Enabled | Whether the Thread border router interface is enabled | Network | | |
| threadborderroutermanagement-activedatasettimestamp | Number | Active Dataset Timestamp | Timestamp of the active Thread network dataset | Network | true | %d |
| threadborderroutermanagement-activedataset | String | Active Dataset | The active Thread network dataset configuration | Network | | |
| threadborderroutermanagement-pendingdatasettimestamp | Number | Pending Dataset Timestamp | Timestamp of the pending Thread network dataset (only available if the PAN change feature is supported) | Network | true | %d |
| threadborderroutermanagement-pendingdataset | String | Pending Dataset | The pending Thread network dataset configuration (only available if the PAN change feature is supported) | Network | | |
## Full Example

### Thing Configuration

```java
// Thing configuration example for the Matter controller:
Thing matter:controller:main [ nodeId="1" ]

// Thing configuration example for a Matter node:
Thing matter:node:main:12345678901234567890 [ nodeId="12345678901234567890" ]

// Thing configuration example for a Matter bridge endpoint:
Thing matter:endpoint:main:12345678901234567890:2 [ endpointId=2 ]
```

### Item Configuration

```java
Dimmer MyDimmer "My Endpoint Dimmer" { channel="matter:node:main:12345678901234567890:1#levelcontrol-level" }
Dimmer MyBridgedDimmer "My Bridged Dimmer" { channel="matter:endpoint:main:12345678901234567890:2#levelcontrol-level" }
```

### Sitemap Configuration

Optional sitemap configuration:

```perl
sitemap home label="Home"
{
    Frame label="Matter Devices"
    {
        Dimmer item=MyDimmer
    }
}
```

# Matter Bridge

openHAB can also expose Items and Item groups as Matter devices to third-party Matter clients like Google Home, Apple Home and Amazon Alexa. This allows local control from those ecosystems and can be used instead of cloud-based integrations for features like voice assistants.

## Configuration

The openHAB Matter bridge uses metadata tags with the key "matter", similar to the Alexa, Google Assistant and Apple HomeKit integrations.
Matter metadata tag values generally follow the Matter "Device Type" and "Cluster" specifications as closely as possible.
Items and item groups are initially tagged with a Matter "Device Type", the Matter designation for common device categories like lights, thermostats, locks, window coverings, etc.
For single items, like a light switch or dimmer, simply tagging the item with the Matter device type is enough.
For more complicated devices, like thermostats, a group item is tagged with the device type, and its child members are tagged with the cluster attribute(s) they will be associated with.
Multiple attributes use a comma-delimited format like `attribute1, attribute2, ... attributeN`.
For devices like fans that support a group with multiple items, where only a single item is used for control (such as On/Off or speed), you can tag that item directly with both the device type and the cluster attribute(s), separated by a comma.

Pairing codes and other options can be found in the Main UI under "Settings -> Add-on Settings -> Matter Binding".

### Device Types

| Type | Item Type | Tag | Option |
|---------------------|---------------------------------------|-------------------|---------------------------------------------------------------------------------|
| OnOff Light | Switch, Dimmer | OnOffLight | |
| Dimmable Light | Dimmer | DimmableLight | |
| Color Light | Color | ColorLight | |
| On/Off Plug In Unit | Switch, Dimmer | OnOffPlugInUnit | |
| Thermostat | Group | Thermostat | |
| Window Covering | Rollershutter, Dimmer, String, Switch | WindowCovering | String types: [OPEN="OPEN", CLOSED="CLOSED"], Switch types: [invert=true/false] |
| Temperature Sensor | Number | TemperatureSensor | |
| Humidity Sensor | Number | HumiditySensor | |
| Occupancy Sensor | Switch, Contact | OccupancySensor | |
| Contact Sensor | Switch, Contact | ContactSensor | |
| Door Lock | Switch | DoorLock | |
| Fan | Group, Switch, String, Dimmer | Fan | |

### Global Options

* Endpoint Labels
  * By default, the Item label is used as the Matter label, but it can be overridden by adding a `label` key as a metadata option, either by itself or as part of the other options required for a device.
  * Example: `[label="My Custom Label"]`
* Fixed Labels
  * Matter has a concept of "Fixed Labels", which allows devices to expose arbitrary label names and values that clients can use for tasks like grouping devices in rooms.
  * Example: `[fixedLabels="room=Office, floor=1"]`

### Thermostat group member tags

| Type | Item Type | Tag | Options |
|---------------------|------------------------|------------------------------------|------------------------------------------------------------------------------------------|
| Current Temperature | Number | thermostat.localTemperature | |
| Outdoor Temperature | Number | thermostat.outdoorTemperature | |
| Heating Setpoint | Number | thermostat.occupiedHeatingSetpoint | |
| Cooling Setpoint | Number | thermostat.occupiedCoolingSetpoint | |
| System Mode | Number, String, Switch | thermostat.systemMode | [OFF=0,AUTO=1,ON=1,COOL=3,HEAT=4,EMERGENCY_HEAT=5,PRECOOLING=6,FAN_ONLY=7,DRY=8,SLEEP=9] |
| Running Mode | Number, String | thermostat.runningMode | |

For `systemMode`, the `ON` option should map to whichever system mode value is appropriate when an `ON` command is issued; it defaults to the `AUTO` mapping.

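As an illustration of the `ON` mapping (the item and group names here are hypothetical), a Switch member of a thermostat group could map `ON` to the `HEAT` mode value from the mapping table above:

```java
// Hypothetical item: an ON command switches the thermostat to HEAT (4), OFF to mode 0
Switch MyHVAC_SimpleMode "Heat On/Off" (MyHVAC) {matter="thermostat.systemMode" [OFF=0, ON=4, HEAT=4, AUTO=1]}
```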
The following options can be set on any thermostat member item or on the Group item to configure temperature limits.

| Setting | Description | Value (in 0.01°C) |
|--------------------------------------|-------------------------------------------------------------------------------------------------|-------------------|
| `thermostat-minHeatSetpointLimit` | The minimum allowable heat setpoint limit. | 0 |
| `thermostat-maxHeatSetpointLimit` | The maximum allowable heat setpoint limit. | 3500 |
| `thermostat-absMinHeatSetpointLimit` | The absolute minimum heat setpoint limit that cannot be exceeded by the `minHeatSetpointLimit`. | 0 |
| `thermostat-absMaxHeatSetpointLimit` | The absolute maximum heat setpoint limit that cannot be exceeded by the `maxHeatSetpointLimit`. | 3500 |
| `thermostat-minCoolSetpointLimit` | The minimum allowable cool setpoint limit. | 0 |
| `thermostat-maxCoolSetpointLimit` | The maximum allowable cool setpoint limit. | 3500 |
| `thermostat-absMinCoolSetpointLimit` | The absolute minimum cool setpoint limit that cannot be exceeded by the `minCoolSetpointLimit`. | 0 |
| `thermostat-absMaxCoolSetpointLimit` | The absolute maximum cool setpoint limit that cannot be exceeded by the `maxCoolSetpointLimit`. | 3500 |
| `thermostat-minSetpointDeadBand` | The minimum deadband (temperature gap) between heating and cooling setpoints. | 0 |

### Fan group member tags

| Type | Item Type | Tag | Options |
|----------------|------------------------|---------------------------|---------------------------------------------------------|
| Fan Mode | Number, String, Switch | fanControl.fanMode | [OFF=0, LOW=1, MEDIUM=2, HIGH=3, ON=4, AUTO=5, SMART=6] |
| Fan Percentage | Dimmer | fanControl.percentSetting | |
| Fan OnOff | Switch | onOff.onOff | |

The following options can be set on the Fan Mode item or on the Group item to configure fan behavior.

| Setting | Description | Value |
|------------------------------|----------------------------------------------------------------------------------------------------------|-------|
| `fanControl-fanModeSequence` | The sequence of fan modes to cycle through. See [Fan Mode Sequence Options](#fan-mode-sequence-options). | 5 |

#### Fan Mode Sequence Options

| Value | Description |
|-------|-------------------|
| 0 | OffLowMedHigh |
| 1 | OffLowHigh |
| 2 | OffLowMedHighAuto |
| 3 | OffLowHighAuto |
| 4 | OffHighAuto |
| 5 | OffHigh |

### Example

```java
Dimmer TestDimmer "Test Dimmer [%d%%]" {matter="DimmableLight" [label="My Custom Dimmer", fixedLabels="room=Bedroom 1, floor=2, direction=up, customLabel=Custom Value"]}

Group TestHVAC "Thermostat" ["HVAC"] {matter="Thermostat" [thermostat-minHeatSetpointLimit=0, thermostat-maxHeatSetpointLimit=3500]}
Number:Temperature TestHVAC_Temperature "Temperature [%d °F]" (TestHVAC) ["Measurement","Temperature"] {matter="thermostat.localTemperature"}
Number:Temperature TestHVAC_HeatSetpoint "Heat Setpoint [%d °F]" (TestHVAC) ["Setpoint", "Temperature"] {matter="thermostat.occupiedHeatingSetpoint"}
Number:Temperature TestHVAC_CoolSetpoint "Cool Setpoint [%d °F]" (TestHVAC) ["Setpoint", "Temperature"] {matter="thermostat.occupiedCoolingSetpoint"}
Number TestHVAC_Mode "Mode [%s]" (TestHVAC) ["Control"] {matter="thermostat.systemMode" [OFF=0, HEAT=1, COOL=2, AUTO=3]}

Switch TestDoorLock "Door Lock" {matter="DoorLock"}
Rollershutter TestShade "Window Shade" {matter="WindowCovering"}
Number:Temperature TestTemperatureSensor "Temperature Sensor" {matter="TemperatureSensor"}
Number TestHumiditySensor "Humidity Sensor" {matter="HumiditySensor"}
Switch TestOccupancySensor "Occupancy Sensor" {matter="OccupancySensor"}

// Fan with group item control
Group TestFan "Test Fan" {matter="Fan" [fanControl-fanModeSequence=3]}
Dimmer TestFanSpeed "Speed" (TestFan) {matter="fanControl.percentSetting"}
Switch TestFanOnOff "On/Off" (TestFan) {matter="fanControl.fanMode"}
Number TestFanMode "Mode" (TestFan) {matter="fanControl.fanMode" [OFF=0, LOW=1, MEDIUM=2, HIGH=3, ON=4, AUTO=5, SMART=6]}

// Fan with single item control, so no group item is needed
Switch TestFanSingleItem "On/Off" {matter="Fan, fanControl.fanMode"}
```

### Bridge FAQ

* Alexa: When pairing, after a minute Alexa reports "Something went wrong"
  * Alexa can take 3-4 seconds per device to process, which can take longer than the Alexa UI is willing to wait.
    Eventually the pairing will complete, which for a large number of devices may take a few minutes.
* Alexa: Suddenly stops working and says it could not connect to a device, or that a device is not responding.
  * Check the Settings page in the Main UI to confirm the bridge is running.
  * Ensure the openHAB item has the proper matter tag, and that the item is being loaded at all (check for item file errors).
  * Rarely, you may need to reboot the Alexa device.
    If you have multiple devices and are not sure which holds the primary Matter connection, you may need to reboot all of them.

# Matter Ecosystem Overview

Matter is an open-source connectivity standard for smart home devices, allowing seamless communication between a wide range of devices, controllers, and ecosystems.

Below is a high-level overview of the Matter ecosystem as well as common terminology used in the Matter standard.

## Matter Devices

### Nodes and Endpoints

In the Matter ecosystem, a **node** represents a single device that joins a Matter network and has a locally routable IPv6 address.
A **node** can have multiple **endpoints**, which are logical representations of specific features or functionality of the device.
For example, a smart thermostat (node) may have an endpoint for general thermostat control (heating, cooling, current temperature, operating state, etc.) and another endpoint for humidity sensing.
Many devices will only have a single endpoint.
[Matter Bridges](#bridges) will expose multiple endpoints, one for each device they are bridging, and the bridge itself will be a node.

**Example:**

- A Thermostat node with an endpoint for general temperature control and another endpoint for a remote temperature or humidity sensor.

### Controllers

A **controller** manages the interaction between Matter devices and other parts of the network.
Controllers can send commands, receive updates, and facilitate device communication.
They also handle the commissioning process when new devices are added to the network.

**Example:**

- openHAB, another smart home hub, or a smartphone app that manages your smart light bulbs, door locks, and sensors (Google Home, Apple Home, Amazon Alexa, etc.)

### Bridges

A **bridge** is a special type of node that connects non-Matter devices to a Matter network, effectively translating between protocols.
Bridges allow legacy devices to be controlled via the Matter standard.

openHAB fully supports connecting to Matter bridges.
In addition, openHAB can run its own Matter bridge service, exposing openHAB items as Matter endpoints to third-party systems.
See [Matter Bridge](#matter-bridge) for information on running a bridge server.

**Example:**

- A bridge that connects Zigbee or Z-Wave devices, making them accessible within a Matter ecosystem. The IKEA Dirigera and Philips Hue Bridge both act as Matter bridges and are supported in openHAB.

### Thread Border Routers

A **Thread Border Router** is a device that allows devices connected via Thread (a low-power wireless protocol) to communicate with devices on other networks, such as Wi-Fi or Ethernet.
It facilitates IPv6-based communication between Thread networks and the local IP network.

**Example:**

- An OpenThread Border Router (open source), as well as recent Apple TV, Amazon Echo, and Google Nest Hub models, all include embedded Thread border routers.

## IPv6 and Network Connectivity

Matter devices operate over an IPv6 network, and obtaining an IPv6 address is required for communication.
Devices can connect to the network via different interfaces:

### Ethernet

Ethernet-connected Matter devices receive an IPv6 address through standard DHCPv6 or stateless address autoconfiguration (SLAAC).

### Wi-Fi

Wi-Fi-enabled Matter devices also receive an IPv6 address using DHCPv6 or SLAAC.
They rely on the existing Wi-Fi infrastructure for communication within the Matter ecosystem.

### Thread

Thread-based Matter devices connect to the network via a **Thread Border Router**.
They receive an IPv6 address from the Thread Border Router.

## IPv6 Requirements

For Matter devices to function correctly, **IPv6 must be enabled** and supported in both the local network (router) and the Matter controllers.
Without IPv6, devices won't be able to communicate properly within the Matter ecosystem.
Ensure that your router has IPv6 enabled and that any Matter controllers (like smart hubs, apps or openHAB) are configured to support IPv6 as well.

**Note that environments like Docker require special configuration to enable IPv6.**

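As an illustration, on a standalone Docker Engine host, IPv6 can typically be enabled for the default bridge network via `/etc/docker/daemon.json`; the ULA prefix below is only a placeholder, and the exact steps depend on your Docker setup:

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00:cafe::/64"
}
```

After editing this file, restart the Docker daemon. Depending on the setup, host networking may still be needed for mDNS-based Matter discovery to work from inside a container.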
## Matter Commissioning and Pairing Codes

Commissioning a Matter device involves securely adding it to the network using a **pairing code**.
This process ensures that only authorized devices can join the network.

### Pairing Code from the Device

When commissioning a new Matter device, it typically has a printed QR code or numeric pairing code that you scan or enter during setup. This pairing code allows the controller to establish a secure connection to the device and add it to the network.
Once a device pairing code is in use, it typically cannot be used again to pair other controllers.

### Additional Pairing Code from a Controller

If a device has already been commissioned and you want to add it to another Matter controller, the existing controller can generate an additional pairing code.
This is useful when sharing access to a device across multiple hubs or apps.
Apple Home, Google Home, Amazon Alexa and openHAB all support generating pairing codes for already-paired devices.

### Example

- When setting up a smart lock, you may scan a QR code directly from the lock, or use the 11-digit pairing code printed on it, to pair it with openHAB. If you later want to control the lock from another app or hub, you would retrieve a new pairing code directly from openHAB.

@ -0,0 +1,82 @@
# Matter Code Generator

This system generates Java classes for Matter clusters, device types, and related functionality. It uses Handlebars templates to transform Matter.js protocol definitions into Java code suitable for serialization.

## Overview

The code generator consists of:

- `app.ts`: Main generator script that processes Matter.js definitions and generates Java code
- Template files in `src/templates/`:
  - `cluster-class.hbs`: Template for individual cluster classes
  - `base-cluster.hbs`: Template for the base cluster class
  - `cluster-constants.hbs`: Template for cluster constants
  - `cluster-registry.hbs`: Template for the cluster registry
  - `device-types-class.hbs`: Template for device type definitions
  - `data-types-class.hbs`: Template for data type definitions

## Main Generator (app.ts)

The generator script:

1. Imports Matter.js protocol definitions
2. Maps Matter.js data types to Java types
3. Processes cluster inheritance and references between clusters
4. Compiles Handlebars templates
5. Generates Java code files in the `out/` directory

## Templates

### cluster-class.hbs

Generates individual cluster classes with:

- Cluster attributes
- Struct definitions
- Enum definitions
- Command methods
- toString() implementation

### base-cluster.hbs

Generates the base cluster class with:

- Common fields and methods
- Global struct/enum definitions

### cluster-constants.hbs

Generates constants for:

- Channel names
- Channel labels
- Channel IDs
- Channel type UIDs

Note: these constants are not currently used in the binding.

### cluster-registry.hbs

Generates a registry mapping cluster IDs to cluster classes.

### device-types-class.hbs

Generates device type definitions and mappings.

## Usage

1. Install dependencies and run the generator:

   ```bash
   npm install && npm run start
   ```

2. The generated Java files will be in the `out/` directory.

Note that the Maven `pom.xml` executes these steps when building the project, including linting the generated files and moving them from the `out` directory to the primary add-on `src` directory.

## Handlebars Helpers

The generator includes several Handlebars helpers for string manipulation to assist with Java naming conventions:

- `asUpperCase`: Convert to uppercase
- `asLowerCase`: Convert to lowercase
- `asUpperCamelCase`: Convert to UpperCamelCase
- `asLowerCamelCase`: Convert to lowerCamelCase
- `asTitleCase`: Convert to Title Case
- `asEnumField`: Convert to ENUM_FIELD format
- `asHex`: Convert a number to a hex string
- and many others
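For illustration, the helpers are thin wrappers over plain string functions. This standalone sketch mirrors the logic of three of the helpers defined in `src/app.ts` (extracted here so it runs without Handlebars):

```typescript
// Standalone sketch of three naming helpers; logic mirrors src/app.ts.

// "on_off" -> "OnOff": uppercase the first letter and letters after '_' or spaces
function toUpperCamelCase(str: string): string {
    return str.replace(/(^\w|[_\s]\w)/g, m => m.replace(/[_\s]/, "").toUpperCase());
}

// "onOffLight" -> "ON_OFF_LIGHT": split camelCase with underscores, then uppercase
function toUpperSnakeCase(str: string): string {
    return str
        .replace(/([a-z])([A-Z])/g, "$1_$2") // insert underscore between camelCase words
        .replace(/[\s-]+/g, "_")             // replace spaces and hyphens with underscores
        .toUpperCase();
}

// 6 with length 4 -> "0x0006": hex string, zero-padded to the requested width
function toHex(decimal: number, length = 0): string {
    let hex = decimal.toString(16).toUpperCase();
    if (length > 0) {
        hex = hex.padStart(length, "0");
    }
    return `0x${hex}`;
}

console.log(toUpperCamelCase("on_off"));     // OnOff
console.log(toUpperSnakeCase("onOffLight")); // ON_OFF_LIGHT
console.log(toHex(6, 4));                    // 0x0006
```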
@ -0,0 +1,26 @@
{
  "name": "code-gen",
  "version": "0.1.0",
  "description": "",
  "scripts": {
    "clean": "tsc --clean",
    "build": "tsc --build",
    "build-clean": "tsc --build --clean",
    "start": "ts-node src/app.ts",
    "test": "ts-node src/test.ts"
  },
  "devDependencies": {
    "ts-loader": "^9.4.4",
    "ts-node": "^10.9.2",
    "typescript": "^5.2.2"
  },
  "dependencies": {
    "@matter/main": "v0.14.0-alpha.0-20250531-7ed2d6da8",
    "handlebars": "^4.7.8"
  },
  "files": [
    "dist/**/*",
    "src/**/*",
    "README.md"
  ]
}
@ -0,0 +1,536 @@
import { AnyElement, FieldElement, Matter, ClusterElement, DatatypeElement, AttributeElement, CommandElement, AnyValueElement, ClusterModel, MatterModel } from "@matter/main/model";
|
||||
import "@matter/model/resources";
|
||||
import handlebars from "handlebars";
|
||||
import { Bytes } from "@matter/general"
|
||||
import fs from "fs";
|
||||
|
||||
// Convert Matter object to JSON string and parse it back, this is a workaround to avoid some typescript issues iterating over the Matter object
|
||||
const matterData = JSON.parse(JSON.stringify(Matter)) as MatterModel;
|
||||
|
||||
matterData.children.filter(c => c.name == 'LevelControl' || c.name == 'ColorControl').forEach(c => {
|
||||
c.children?.filter(c => c.tag == 'command').forEach(c => {
|
||||
c.children?.filter(c => c.type == 'Options').forEach(c => {
|
||||
c.type = 'OptionsBitmap';
|
||||
})
|
||||
})
|
||||
})
|
||||
|
||||
interface ExtendedClusterElement extends ClusterElement {
|
||||
attributes: AnyValueElement[],
|
||||
commands: CommandElement[],
|
||||
datatypes: AnyValueElement[],
|
||||
enums: AnyValueElement[],
|
||||
structs: AnyValueElement[],
|
||||
maps: AnyValueElement[],
|
||||
typeMapping: Map<string, string | undefined>
|
||||
}
|
||||
|
||||
function toJSON(data: any, space = 2) {
|
||||
return JSON.stringify(data, (_, value) => {
|
||||
if (typeof value === "bigint") {
|
||||
return value.toString();
|
||||
}
|
||||
if (value instanceof Uint8Array) {
|
||||
return Bytes.toHex(value);
|
||||
}
|
||||
if (value === undefined) {
|
||||
return "undefined";
|
||||
}
|
||||
return value;
|
||||
}, space);
|
||||
}
|
||||
|
||||
handlebars.registerHelper('asUpperCase', function (str) {
|
||||
return toUpperCase(str);
|
||||
});
|
||||
|
||||
handlebars.registerHelper('asLowerCase', function (str) {
|
||||
return toLowerCase(str);
|
||||
});
|
||||
|
||||
handlebars.registerHelper('asUpperCamelCase', function (str) {
|
||||
return toUpperCamelCase(str);
|
||||
});
|
||||
|
||||
handlebars.registerHelper('asLowerCamelCase', function (str) {
|
||||
return toLowerCamelCase(str);
|
||||
});
|
||||
|
||||
handlebars.registerHelper('asTitleCase', function (str) {
|
||||
return toTitleCase(str);
|
||||
});
|
||||
|
||||
handlebars.registerHelper('asEnumField', function (str) {
|
||||
return toEnumField(str);
|
||||
});
|
||||
|
||||
handlebars.registerHelper('asUpperSnakeCase', function (str) {
|
||||
return toUpperSnakeCase(str);
|
||||
});
|
||||
|
||||
handlebars.registerHelper('asSpacedTitleCase', function (str) {
|
||||
return toSpacedTitleCase(str);
|
||||
});
|
||||
|
||||
handlebars.registerHelper('asHex', function (decimal, length) {
|
||||
return toHex(decimal, length);
|
||||
});
|
||||
|
||||
handlebars.registerHelper('isLastElement', function (index: number, count: number) {
|
||||
return index >= count - 1;
|
||||
});
|
||||
handlebars.registerHelper('isFirstElement', function (index: number) {
|
||||
return index == 0;
|
||||
});
|
||||
handlebars.registerHelper('isEmpty', function (e: Array<any> | String | undefined) {
|
||||
return e == undefined || e.length == 0
|
||||
});
|
||||
handlebars.registerHelper('isDepreciated', function (field) {
|
||||
return field.conformance == "D" || field.conformance == "X" || field.conformance == "[!LT]"
|
||||
});
|
||||
handlebars.registerHelper('isReadOnly', function (field) {
|
||||
return field.access.indexOf('RW') == -1;
|
||||
});
|
||||
handlebars.registerHelper('toBitmapType', function (constraint) {
|
||||
return constraint != undefined && constraint.indexOf(" to ") > 0 ? "short" : "boolean"
|
||||
});
|
||||
handlebars.registerHelper('toBitmapChildName', function (child, type) {
|
||||
return type == "FeatureMap" ? toLowerCamelCase(child.title) : toLowerCamelCase(child.name)
|
||||
});
|
||||
handlebars.registerHelper('isAttribute', function (field) {
|
||||
return field.tag == 'attribute'
|
||||
});
|
||||
|
||||
handlebars.registerHelper('isNonNull', function (field) {
|
||||
return field.access?.indexOf('RW') > -1 || field.isNonNull;
|
||||
});
|
||||
|
||||
function toUpperCase(str: string | undefined) {
|
||||
if (str == undefined) {
|
||||
return "UNDEFINED"
|
||||
}
|
||||
return str.toUpperCase();
|
||||
}
|
||||
|
||||
function toLowerCase(str: string | undefined) {
|
||||
if (str == undefined) {
|
||||
return "undefined"
|
||||
}
|
||||
return str.toLowerCase();
|
||||
}
|
||||
|
||||
function toUpperCamelCase(str: string | undefined) {
|
||||
if (str == undefined) {
|
||||
return "undefined"
|
||||
}
|
||||
return str.replace(/(^\w|[_\s]\w)/g, match => match.replace(/[_\s]/, '').toUpperCase());
|
||||
}
|
||||
|
||||
function toLowerCamelCase(str: string): string {
|
||||
if (str == undefined) {
|
||||
return "undefined"
|
||||
}
|
||||
return str.replace(/(?:^\w|[_\s]\w)/g, (match, offset) => {
|
||||
return offset === 0 ? match.toLowerCase() : match.replace(/[_\s]/, '').toUpperCase();
|
||||
});
|
||||
}
|
||||
|
||||
function toTitleCase(str: string | undefined): string {
|
||||
if (!str) {
|
||||
return "Undefined";
|
||||
}
|
||||
return str
|
||||
.replace(/([a-z])([A-Z])/g, '$1 $2') // Add a space before uppercase letters that follow lowercase letters
|
||||
.replace(/[_\s]+/g, ' ') // Replace underscores or multiple spaces with a single space
|
||||
.trim()
|
||||
.split(' ')
|
||||
.map(word => word.charAt(0).toUpperCase() + word.slice(1).toLowerCase())
|
||||
.join(' ');
|
||||
}
|
||||
|
||||
function toEnumField(str: string): string {
|
||||
// Check if the string starts with a number and prepend "V" if it does
|
||||
if (/^\d/.test(str)) {
|
||||
str = 'V' + str;
|
||||
}
|
||||
|
||||
// First split camelCase words by inserting underscores
|
||||
str = str
|
||||
// Split between lowercase and uppercase letters
|
||||
.replace(/([a-z])([A-Z])/g, '$1_$2')
|
||||
// Split between uppercase letters followed by lowercase
|
||||
.replace(/([A-Z])([A-Z][a-z])/g, '$1_$2')
|
||||
// Replace any remaining spaces with underscores
|
||||
.replace(/\s+/g, '_')
|
||||
// Finally convert to uppercase
|
||||
.toUpperCase();
|
||||
|
||||
return str;
|
||||
}
|
||||
|
||||
function toUpperSnakeCase(str: string | undefined) {
|
||||
if (str == undefined) {
|
||||
return "UNDEFINED"
|
||||
}
|
||||
return str
|
||||
.replace(/([a-z])([A-Z])/g, '$1_$2') // Insert underscore between camelCase
|
||||
.replace(/[\s-]+/g, '_') // Replace spaces and hyphens with underscore
|
||||
.toUpperCase();
|
||||
}
|
||||
|
||||
function toSpacedTitleCase(str: string | undefined): string {
|
||||
if (!str) {
|
||||
return "Undefined";
|
||||
}
|
||||
return str
|
||||
.replace(/([a-z])([A-Z])/g, '$1 $2') // Add a space before uppercase letters that follow lowercase letters
|
||||
.replace(/([A-Z])([A-Z][a-z])/g, '$1 $2') // Split between capital letters when followed by capital+lowercase
|
||||
.replace(/([a-z])([A-Z][a-z])/g, '$1 $2') // Split between lowercase and camelCase word
|
||||
.replace(/([a-zA-Z])(\d)/g, '$1 $2') // Split between letters and numbers
|
||||
.replace(/(\d)([a-zA-Z])/g, '$1 $2') // Split between numbers and letters
|
||||
.replace(/[_\s]+/g, ' ') // Replace underscores or multiple spaces with a single space
|
||||
.trim();
|
||||
}
|
||||
|
||||
function toHex(decimal: number, length = 0) {
|
||||
let hex = decimal.toString(16).toUpperCase();
|
||||
if (length > 0) {
|
||||
hex = hex.padStart(length, '0');
|
||||
}
|
||||
return `0x${hex}`;
|
||||
}
|
||||
|
||||
/**
|
||||
*
|
||||
* @param field Lookup function to map matter native types to Java native types
|
||||
* @returns
|
||||
*/
|
||||
function matterNativeTypeToJavaNativeType(field: AnyElement) {
|
||||
switch (field.type || field.name) {
|
||||
case "bool":
|
||||
return "Boolean";
|
||||
case "uint8":
|
||||
case "uint16":
|
||||
case "uint24":
|
||||
case "uint32":
|
||||
case "int8":
|
||||
case "int16":
|
||||
case "int24":
|
||||
case "int32":
|
||||
case "status":
|
||||
return "Integer";
|
||||
case "uint40":
|
||||
case "uint48":
|
||||
case "uint56":
|
||||
case "uint64":
|
||||
case "int40":
|
||||
case "int48":
|
||||
case "int56":
|
||||
case "int64":
|
||||
return "BigInteger";
|
||||
case "single":
|
||||
return "Float";
|
||||
case "double":
|
||||
return "Double";
|
||||
case "date":
|
||||
return "date";
|
||||
case "string":
|
||||
case "locationdesc":
|
||||
return "String";
|
||||
case "octstr":
|
||||
return "OctetString";
|
||||
// this are semantic tag fields
|
||||
case "tag":
|
||||
case "namespace":
|
||||
return "Integer";
|
||||
case "list":
|
||||
case "struct":
|
||||
case "map8":
|
||||
case "map16":
|
||||
case "map32":
|
||||
case "map64":
|
||||
case "map8":
|
||||
case "map16":
|
||||
case "map32":
|
||||
case "map64":
|
||||
//these are complex types and do not map to a Java native type
|
||||
default:
|
||||
return undefined;
|
||||
}
|
||||
}
|
||||
|
||||
function filterDep(e: AnyValueElement) {
|
||||
//remove fields flagged as depreciated or not used
|
||||
const children = e.children?.filter(field => {
|
||||
const f = field as FieldElement;
|
||||
return f.conformance != 'X' && f.conformance != 'D'
|
||||
})
|
||||
e.children = children as AnyValueElement[];
|
||||
return e;
|
||||
}
|
||||
|
||||
/**
 * typeMapper attempts to look up the Java native type for any Matter element; this includes Integers, Strings, Booleans, etc.
 *
 * If there is no matching type, then the Matter element is a complex type, like maps, enums and structs.
 * These complex types are represented as Java classes, so the mapping will refer to that complex type, which
 * will be templated out later in the code.
 *
 * This code also traverses any children of the data type, applying the same logic.
 *
 * @param mappings - existing set of mapping lookups for types
 * @param dt - the data type we are operating on
 * @returns the data type, which now includes a new field 'mappedType' at all levels of the object
 */
function typeMapper(mappings: Map<string, string | undefined>, dt: AnyValueElement): any {
    let mappedType: string | undefined;
    if (dt.tag == 'attribute' && dt.type?.startsWith('enum') || dt.type?.startsWith('map') || dt.type?.startsWith('struct')) {
        // these types will be generated as inner classes and will be referred to by name
        mappedType = dt.name;
    } else {
        // this resolves raw types
        mappedType = dt.type && mappings.get(dt.type) || matterNativeTypeToJavaNativeType(dt) || dt.type || "String";
    }
    if (mappedType == 'list') {
        const ct = dt.children?.[0].type;
        // if the type is cluster.type, then it's referring to a type in another cluster
        if (ct && ct.indexOf('.') > 0) {
            const [otherCluster, otherType] = ct.split('.');
            mappedType = `List<${toUpperCamelCase(otherCluster + "Cluster")}.${toUpperCamelCase(otherType)}>`;
        } else {
            mappedType = `List<${toUpperCamelCase(ct && mappings.get(ct) || ct)}>`;
        }
    } else if (mappedType && mappedType.indexOf('.') > 0) {
        // some types reference other clusters, like MediaPlayback.CharacteristicEnum
        const [cName, dtName] = mappedType.split('.');
        mappedType = `${toUpperCamelCase(cName)}Cluster.${toUpperCamelCase(dtName)}`;
    } else if (mappings.get(mappedType)) {
        // if the type is already mapped, then use the mapped type
        mappedType = mappings.get(mappedType);
    }

    const children = dt.children?.map((child) => {
        return typeMapper(mappings, child as AnyValueElement) as AnyValueElement;
    });
    return {
        ...dt,
        children: children,
        mappedType: mappedType
    };
}
/**
 * Certain clusters have complex inheritance that we don't support yet (and don't need right now)
 */
const skipClusters = new Set(['Messages']);

/**
 * Global types (not in a cluster)
 */
const globalDataTypes = (matterData.children as DatatypeElement[]).filter(c => c.tag === 'datatype');
const globalAttributes = (matterData.children as AttributeElement[]).filter(c => c.tag === 'attribute');

/**
 * Global type mapping lookup; clusters will combine this with their own mapping
 */
const globalTypeMapping = new Map();
// some types are special and need to be mapped to Java native types here
globalTypeMapping.set("FabricIndex", "Integer");
// semantic tag fields
globalTypeMapping.set("namespace", "Integer");
globalTypeMapping.set("tag", "Integer");

globalDataTypes.forEach(dt => {
    matterNativeTypeToJavaNativeType(dt) && globalTypeMapping.set(dt.name, matterNativeTypeToJavaNativeType(dt));
});
// it seems there is a global data type that overrides the string type
globalTypeMapping.set("string", "String");

globalAttributes.forEach(dt => matterNativeTypeToJavaNativeType(dt) && globalTypeMapping.set(dt.name, matterNativeTypeToJavaNativeType(dt)));
const clusters: ExtendedClusterElement[] = (matterData.children as ClusterElement[])
    .filter(c => c.tag === 'cluster')
    .filter(c => !skipClusters.has(c.name))
    .map(cluster => {
        // typeMapping is a map of Matter types to Java types
        const typeMapping = new Map<string, string | undefined>(globalTypeMapping);
        const dataTypes = (cluster.children || []).filter(c => c.tag === 'datatype') as DatatypeElement[];
        const maps = (cluster.children || []).filter(c => c.type?.startsWith('map')) as AnyValueElement[];
        const enums = (cluster.children || []).filter(c => c.type?.startsWith('enum')) as AnyValueElement[];
        const structs = (cluster.children || [])
            .filter(dt => dt.type === 'struct' || dt.tag === 'event')
            .map(dt => typeMapper(typeMapping, dt as AnyValueElement));
        dataTypes?.forEach(dt => {
            if (dt.type && dt.type.indexOf('.') > 0) {
                return typeMapping.set(dt.name, dt.type);
            }
            return matterNativeTypeToJavaNativeType(dt) && typeMapping.set(dt.name, matterNativeTypeToJavaNativeType(dt));
        });

        // if the cluster has a type, then the Java class will extend this type (which is another cluster)
        const parent = cluster.type ? matterData.children.find(c => c.name == cluster.type) : undefined;

        const attributes = cluster.children?.filter(c => c.tag == 'attribute')?.filter((element, index, self) => {
            // remove duplicates; not sure why they exist in the model
            const dupIndex = self.findIndex(e => e.name === element.name);
            if (dupIndex != index) {
                if (element.conformance?.toString().startsWith('[!')) {
                    return false;
                }
            }
            // if the parent cluster has an attribute with the same name, then don't include it, as we need to use the parent's attribute
            if (parent) {
                const parentAttr = parent.children?.filter(c => c.tag == 'attribute')?.find(c => c.name == element.name);
                if (parentAttr) {
                    return false;
                }
            }
            return true;
        }).filter(attr => filterDep(attr)).map(dt => typeMapper(typeMapping, dt));

        // some command types reference attribute types (LevelControl Options)
        attributes?.forEach(dt => {
            if (dt.type && dt.type.indexOf('.') > 0) {
                typeMapping.set(dt.name, dt.type);
                return;
            }
            typeMapping.set(dt.name, matterNativeTypeToJavaNativeType(dt) || dt.type);

            // some local attributes, like FeatureMap, reference the global attribute type
            if (dt.children) {
                const ga = globalAttributes.find(a => a.name == dt.type);
                if (ga && ga.type?.startsWith('map') && !maps?.find(e => e.name == dt.name)) {
                    maps?.push(dt as ClusterModel.Child);
                }
            }
        });

        // clean up commands
        const commandsRaw = cluster.children?.filter(c => c.tag == 'command');
        const commands = commandsRaw?.map(command => {
            // some commands reference others
            if (command.type != undefined) {
                command.children = commandsRaw?.find(c => c.name == command.type)?.children || [];
            }
            return command;
        }).map(command => filterDep(command)).filter(c => (c as CommandElement).direction == "request").map(dt => {
            if (dt.type && dt.type.indexOf('.') > 0) {
                typeMapping.set(dt.name, dt.type);
                return;
            }
            const newCommand = typeMapper(typeMapping, dt);
            newCommand.children?.forEach((c: any) => {
                if (c.type?.startsWith('map') && !maps?.find(e => e.name == c.name)) {
                    maps?.push(c as ClusterModel.Child);
                }
            });
            return newCommand;
        });

        return {
            ...cluster,
            attributes: attributes,
            commands: commands,
            datatypes: dataTypes,
            enums: enums,
            structs: structs,
            maps: maps,
            typeMapping: typeMapping
        } as ExtendedClusterElement;
    }) || [];
function copyClusterDatatype(sourceCluster: ExtendedClusterElement, destCluster: ExtendedClusterElement, name: string) {
    let dt = sourceCluster.datatypes?.find(d => d.name == name) || sourceCluster.enums?.find(d => d.name == name) || sourceCluster.structs?.find(d => d.name == name) || sourceCluster.attributes?.find(d => d.name == name);
    if (dt) {
        destCluster.typeMapping.set(name, name);
        if (dt.type) {
            if (dt.type.startsWith('enum')) {
                if (!destCluster.enums) {
                    destCluster.enums = [];
                }
                destCluster.enums.push(dt);
            } else if (dt.type.startsWith('map')) {
                if (!destCluster.maps) {
                    destCluster.maps = [];
                }
                destCluster.maps.push(dt);
            } else if (dt.type == 'struct') {
                if (!destCluster.structs) {
                    destCluster.structs = [];
                }
                destCluster.structs.push(dt);
            } else {
                if (!destCluster.datatypes) {
                    destCluster.datatypes = [];
                }
                destCluster.datatypes.push(dt);
            }
        }
        destCluster.commands = destCluster.commands.map(c => typeMapper(destCluster.typeMapping, c));
    }
}
clusters.forEach(cluster => {
    cluster.typeMapping.forEach((value, key, map) => {
        if (value && value.indexOf('.') > 0) {
            const [cName, dtName] = value.split('.');
            const otherCluster = clusters.find(c => c.name != cluster.name && c.name == cName);
            if (otherCluster) {
                copyClusterDatatype(otherCluster, cluster, dtName);
            }
            return;
        }
    });
});
// Compile Handlebars templates
const clusterSource = fs.readFileSync('src/templates/cluster-class.java.hbs', 'utf8');
const clusterTemplate = handlebars.compile(clusterSource);
const baseClusterSource = fs.readFileSync('src/templates/base-cluster.java.hbs', 'utf8');
const baseClusterTemplate = handlebars.compile(baseClusterSource);
const deviceTypeSource = fs.readFileSync('src/templates/device-types-class.java.hbs', 'utf8');
const deviceTypeTemplate = handlebars.compile(deviceTypeSource);
const clusterRegistrySource = fs.readFileSync('src/templates/cluster-registry.java.hbs', 'utf8');
const clusterRegistryTemplate = handlebars.compile(clusterRegistrySource);
const clusterConstantsSource = fs.readFileSync('src/templates/cluster-constants.java.hbs', 'utf8');
const clusterConstantsTemplate = handlebars.compile(clusterConstantsSource);
// Generate Java code

const datatypes = {
    enums: [
        ...globalDataTypes?.filter(c => c.type?.startsWith('enum')),
        ...globalAttributes?.filter(c => c.type?.startsWith('enum'))
    ],
    structs: [
        ...globalDataTypes?.filter(c => c.type?.startsWith('struct')),
        ...globalAttributes?.filter(c => c.type?.startsWith('struct'))
    ].map(e => typeMapper(globalTypeMapping, e)),
    maps: [
        ...globalDataTypes?.filter(c => c.type?.startsWith('map')),
        ...globalAttributes?.filter(c => c.type?.startsWith('map'))
    ]
};

// create the output directory synchronously so it exists before the writes below
fs.mkdirSync('out', { recursive: true });

const baseClusterClass = baseClusterTemplate(datatypes);
fs.writeFileSync(`out/BaseCluster.java`, baseClusterClass);

const deviceTypeClass = deviceTypeTemplate({ deviceTypes: matterData.children.filter((c: AnyElement) => c.tag === 'deviceType' && c.id !== undefined) });
fs.writeFileSync(`out/DeviceTypes.java`, deviceTypeClass);

const clusterRegistryClass = clusterRegistryTemplate({ clusters: clusters });
fs.writeFileSync(`out/ClusterRegistry.java`, clusterRegistryClass);

const clusterConstantsClass = clusterConstantsTemplate({ clusters: clusters });
fs.writeFileSync(`out/ClusterConstants.java`, clusterConstantsClass);

clusters.forEach(cluster => {
    const javaCode = clusterTemplate(cluster);
    fs.writeFileSync(`out/${cluster.name}Cluster.java`, javaCode);
});
@@ -0,0 +1,172 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */

// AUTO-GENERATED, DO NOT EDIT!

package org.openhab.binding.matter.internal.client.dto.cluster.gen;

import java.math.BigInteger;
import java.util.List;
import java.util.Map;

import org.eclipse.jdt.annotation.NonNull;

import com.google.gson.Gson;

/**
 * {{asUpperCamelCase name}}
 *
 * @author Dan Cunningham - Initial contribution
 */

public class BaseCluster {

    protected static final Gson GSON = new Gson();
    public BigInteger nodeId;
    public int endpointId;
    public int id;
    public String name;

    public interface MatterEnum {
        Integer getValue();

        String getLabel();

        public static <E extends MatterEnum> E fromValue(Class<E> enumClass, int value) {
            E[] constants = enumClass.getEnumConstants();
            if (constants != null) {
                for (E enumConstant : constants) {
                    if (enumConstant != null) {
                        if (enumConstant.getValue().equals(value)) {
                            return enumConstant;
                        }
                    }
                }
            }
            throw new IllegalArgumentException("Unknown value: " + value);
        }
    }

    public BaseCluster(BigInteger nodeId, int endpointId, int clusterId, String clusterName) {
        this.nodeId = nodeId;
        this.endpointId = endpointId;
        this.id = clusterId;
        this.name = clusterName;
    }

    public static class OctetString {
        public byte[] value;

        public OctetString(byte[] value) {
            this.value = value;
        }

        public OctetString(String hexString) {
            int length = hexString.length();
            value = new byte[length / 2];
            for (int i = 0; i < length; i += 2) {
                value[i / 2] = (byte) ((Character.digit(hexString.charAt(i), 16) << 4)
                        + Character.digit(hexString.charAt(i + 1), 16));
            }
        }

        public @NonNull String toHexString() {
            StringBuilder hexString = new StringBuilder();
            for (byte b : value) {
                String hex = Integer.toHexString(0xFF & b);
                if (hex.length() == 1) {
                    hexString.append('0');
                }
                hexString.append(hex);
            }
            return hexString.toString();
        }

        @Override
        public @NonNull String toString() {
            return toHexString();
        }
    }

    {{#each structs}}
    {{#if (isFirstElement @index)}}
    // Structs
    {{/if}}
    public class {{asUpperCamelCase name}} {
        {{#each children}}
        public {{{asUpperCamelCase mappedType}}} {{asLowerCamelCase name}}; // {{type}}
        {{/each}}
        public {{asUpperCamelCase name}}({{#each children}}{{{asUpperCamelCase mappedType}}} {{asLowerCamelCase name}}{{#unless (isLastElement @index ../children.length)}}, {{/unless}}{{/each}}) {
            {{#each children}}
            this.{{asLowerCamelCase name}} = {{asLowerCamelCase name}};
            {{/each}}
        }
    }
    {{/each}}

    {{#each enums}}
    {{#if (isFirstElement @index)}}
    // Enums
    {{/if}}
    public enum {{asUpperCamelCase name}} implements MatterEnum {
        {{#if (isEmpty children)}}
        DEFAULT(0, "Default");
        {{else}}
        {{#each children}}
        {{asEnumField name}}({{id}}, "{{name}}"){{#unless (isLastElement @index ../children.length)}},{{else}};{{/unless}}
        {{/each}}
        {{/if}}
        public final Integer value;
        public final String label;
        private {{asUpperCamelCase name}}(Integer value, String label) {
            this.value = value;
            this.label = label;
        }

        @Override
        public Integer getValue() {
            return value;
        }

        @Override
        public String getLabel() {
            return label;
        }
    }

    {{/each}}

    {{#each maps}}
    {{#if (isFirstElement @index)}}
    // Bitmaps
    {{/if}}
    public static class {{asUpperCamelCase name}} {
        {{#if (isEmpty children)}}
        public List<Boolean> map;
        public {{asUpperCamelCase name}}(List<Boolean> map) {
            this.map = map;
        {{else}}
        {{#each children}}
        public boolean {{asLowerCamelCase name}};
        {{/each}}
        public {{asUpperCamelCase name}}({{#each children}}boolean {{asLowerCamelCase name}}{{#unless (isLastElement @index ../children.length)}}, {{/unless}}{{/each}}) {
            {{#each children}}
            this.{{asLowerCamelCase name}} = {{asLowerCamelCase name}};
            {{/each}}
        {{/if}}
        }
    }
    {{/each}}

}
@@ -0,0 +1,186 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */

// AUTO-GENERATED, DO NOT EDIT!

package org.openhab.binding.matter.internal.client.dto.cluster.gen;

import java.math.BigInteger;
import java.util.List;
import java.util.Map;
import java.util.LinkedHashMap;

import org.eclipse.jdt.annotation.NonNull;

import org.openhab.binding.matter.internal.client.dto.cluster.ClusterCommand;

/**
 * {{asUpperCamelCase name}}
 *
 * @author Dan Cunningham - Initial contribution
 */
public {{#if id}}{{else}}abstract {{/if}}class {{asUpperCamelCase name}}Cluster extends {{#if type}}{{asUpperCamelCase type}}Cluster{{else}}BaseCluster{{/if}} {

    {{#if id}}public static final int CLUSTER_ID = {{asHex id 4}};{{else}}{{/if}}
    public static final String CLUSTER_NAME = "{{asUpperCamelCase name}}";
    public static final String CLUSTER_PREFIX = "{{asLowerCamelCase name}}";
    {{#each attributes}}
    {{#if (isDepreciated this)}}
    {{else}}
    public static final String ATTRIBUTE_{{asUpperSnakeCase name}} = "{{asLowerCamelCase name}}";
    {{/if}}
    {{/each}}

    {{#each attributes}}
    {{#if (isDepreciated this)}}
    {{else}}
    {{#if details}}
    /**
     * {{details}}
     */
    {{/if}}
    public {{{asUpperCamelCase mappedType}}} {{asLowerCamelCase name}}; // {{id}} {{type}} {{access}}
    {{/if}}
    {{/each}}
    {{#each structs}}
    {{#if (isFirstElement @index)}}
    // Structs
    {{/if}}
    {{#if details}}
    /**
     * {{details}}
     */
    {{/if}}
    public class {{asUpperCamelCase name}} {
        {{#each children}}
        {{#if details}}
        /**
         * {{details}}
         */
        {{/if}}
        public {{{mappedType}}} {{asLowerCamelCase name}}; // {{type}}
        {{/each}}
        public {{asUpperCamelCase name}}({{#each children}}{{{mappedType}}} {{asLowerCamelCase name}}{{#unless (isLastElement @index ../children.length)}}, {{/unless}}{{/each}}) {
            {{#each children}}
            this.{{asLowerCamelCase name}} = {{asLowerCamelCase name}};
            {{/each}}
        }
    }
    {{/each}}

    {{#each enums}}
    {{#if (isFirstElement @index)}}
    // Enums
    {{/if}}
    {{#if details}}
    /**
     * {{details}}
     */
    {{/if}}
    public enum {{asUpperCamelCase name}} implements MatterEnum {
        {{#if (isEmpty children)}}
        DEFAULT(0, "Default");
        {{else}}
        {{#each children}}
        {{asEnumField name}}({{id}}, "{{asSpacedTitleCase name}}"){{#unless (isLastElement @index ../children.length)}},{{else}};{{/unless}}
        {{/each}}
        {{/if}}
        public final Integer value;
        public final String label;
        private {{asUpperCamelCase name}}(Integer value, String label) {
            this.value = value;
            this.label = label;
        }

        @Override
        public Integer getValue() {
            return value;
        }

        @Override
        public String getLabel() {
            return label;
        }
    }
    {{/each}}

    {{#each maps}}
    {{#if (isFirstElement @index)}}
    // Bitmaps
    {{/if}}
    {{#if details}}
    /**
     * {{details}}
     */
    {{/if}}
    public static class {{asUpperCamelCase name}} {
        {{#each children}}
        {{#if details}}
        /**
         * {{description}}
         * {{details}}
         */
        {{/if}}
        public {{toBitmapType constraint}} {{toBitmapChildName this ../type}};
        {{/each}}
        public {{asUpperCamelCase name}}({{#each children}}{{toBitmapType constraint}} {{toBitmapChildName this ../type}}{{#unless (isLastElement @index ../children.length)}}, {{/unless}}{{/each}}) {
            {{#each children}}
            this.{{toBitmapChildName this ../type}} = {{toBitmapChildName this ../type}};
            {{/each}}
        }
    }
    {{/each}}
    {{#if id}}
    public {{asUpperCamelCase name}}Cluster(BigInteger nodeId, int endpointId) {
        super(nodeId, endpointId, {{id}}, "{{asUpperCamelCase name}}");
    }
    {{/if}}
    protected {{asUpperCamelCase name}}Cluster(BigInteger nodeId, int endpointId, int clusterId, String clusterName) {
        super(nodeId, endpointId, clusterId, clusterName);
    }
    {{#each commands}}
    {{#if (isFirstElement @index)}}
    // Commands
    {{/if}}
    {{#if details}}
    /**
     * {{details}}
     */
    {{/if}}
    public static ClusterCommand {{asLowerCamelCase name}}({{#each children}}{{#unless (isDepreciated this)}}{{{mappedType}}} {{asLowerCamelCase name}}{{/unless}}{{#unless (isLastElement @index ../children.length)}}, {{/unless}}{{/each}}) {
        {{#each children}}
        {{#if (isFirstElement @index)}}
        Map<String, Object> map = new LinkedHashMap<>();
        {{/if}}
        {{#unless (isDepreciated this)}}
        if ({{asLowerCamelCase name}} != null) {
            map.put("{{asLowerCamelCase name}}", {{asLowerCamelCase name}});
        }
        {{/unless}}
        {{/each}}
        return new ClusterCommand("{{asLowerCamelCase name}}"{{#if children}}, map{{/if}});
    }
    {{/each}}

    @Override
    public @NonNull String toString() {
        String str = "";
        {{#each attributes}}
        {{#if (isDepreciated this)}}
        {{else}}
        str += "{{asLowerCamelCase name}} : " + {{asLowerCamelCase name}} + "\n";
        {{/if}}
        {{/each}}
        return str;
    }
}
|
@@ -0,0 +1,45 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */

// AUTO-GENERATED, DO NOT EDIT!

package org.openhab.binding.matter.internal.client.dto.cluster.gen;

import org.openhab.core.thing.type.ChannelTypeUID;

/**
 * ClusterConstants
 *
 * @author Dan Cunningham - Initial contribution
 */
public class ClusterConstants {

    {{#each clusters}}
    // {{name}} Cluster
    {{#each children}}
    {{#if (isDepreciated this)}}
    {{else}}
    {{#if (isAttribute this)}}
    {{#if access}}
    public static final String CHANNEL_NAME_{{asUpperCase ../name}}_{{asUpperCase name}} = "{{asUpperCamelCase name}}";
    public static final String CHANNEL_LABEL_{{asUpperCase ../name}}_{{asUpperCase name}} = "{{asTitleCase name}}";
    public static final String CHANNEL_ID_{{asUpperCase ../name}}_{{asUpperCase name}} = "{{asLowerCase ../name}}-{{asLowerCase name}}";
    public static final ChannelTypeUID CHANNEL_{{asUpperCase ../name}}_{{asUpperCase name}} = new ChannelTypeUID(
            "matter:" + CHANNEL_ID_{{asUpperCase ../name}}_{{asUpperCase name}});

    {{/if}}
    {{/if}}
    {{/if}}
    {{/each}}
    {{/each}}
}
@@ -0,0 +1,37 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */

// AUTO-GENERATED, DO NOT EDIT!

package org.openhab.binding.matter.internal.client.dto.cluster.gen;

import java.util.HashMap;
import java.util.Map;

/**
 * ClusterRegistry
 *
 * @author Dan Cunningham - Initial contribution
 */
public class ClusterRegistry {

    public static final Map<Integer, Class<? extends BaseCluster>> CLUSTER_IDS = new HashMap<>();
    static {
        {{#each clusters}}
        {{#if (isEmpty id)}}
        {{else}}
        CLUSTER_IDS.put({{id}}, {{asUpperCamelCase name}}Cluster.class);
        {{/if}}
        {{/each}}
    }
}
@@ -0,0 +1,101 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */

// AUTO-GENERATED, DO NOT EDIT!

package org.openhab.binding.matter.internal.client.dto.cluster.gen;

import java.math.BigInteger;
import java.util.List;

/**
 * {{asUpperCamelCase name}}
 *
 * @author Dan Cunningham - Initial contribution
 */

public class DataTypes {

    {{#each structs}}
    {{#if (isFirstElement @index)}}
    // Structs
    {{/if}}
    public class {{asUpperCamelCase name}} {
        {{#each children}}
        public {{{asUpperCamelCase mappedType}}} {{asLowerCamelCase name}}; // {{type}}
        {{/each}}
        public {{asUpperCamelCase name}}({{#each children}}{{{asUpperCamelCase mappedType}}} {{asLowerCamelCase name}}{{#unless (isLastElement @index ../children.length)}}, {{/unless}}{{/each}}) {
            {{#each children}}
            this.{{asLowerCamelCase name}} = {{asLowerCamelCase name}};
            {{/each}}
        }
    }
    {{/each}}

    {{#each enums}}
    {{#if (isFirstElement @index)}}
    // Enums
    {{/if}}
    public enum {{asUpperCamelCase name}} implements BaseCluster.MatterEnum {
        {{#if (isEmpty children)}}
        DEFAULT(0, "Default");
        {{else}}
        {{#each children}}
        {{asEnumField name}}({{id}}, "{{name}}"){{#unless (isLastElement @index ../children.length)}},{{else}};{{/unless}}
        {{/each}}
        {{/if}}
        public final Integer value;
        public final String label;
        private {{asUpperCamelCase name}}(Integer value, String label) {
            this.value = value;
            this.label = label;
        }

        @Override
        public Integer getValue() {
            return value;
        }

        @Override
        public String getLabel() {
            return label;
        }
    }

    {{/each}}

    {{#each maps}}
    {{#if (isFirstElement @index)}}
    // Bitmaps
    {{/if}}
    public static class {{asUpperCamelCase name}} {
        {{#if (isEmpty children)}}
        public List<Boolean> map;
        public {{asUpperCamelCase name}}(List<Boolean> map) {
            this.map = map;
        {{else}}
        {{#each children}}
        public boolean {{asLowerCamelCase name}};
        {{/each}}
        public {{asUpperCamelCase name}}({{#each children}}boolean {{asLowerCamelCase name}}{{#unless (isLastElement @index ../children.length)}}, {{/unless}}{{/each}}) {
            {{#each children}}
            this.{{asLowerCamelCase name}} = {{asLowerCamelCase name}};
            {{/each}}
        {{/if}}
        }
    }
    {{/each}}

}
@@ -0,0 +1,45 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */

// AUTO-GENERATED, DO NOT EDIT!

package org.openhab.binding.matter.internal.client.dto.cluster.gen;

import java.util.HashMap;
import java.util.Map;

/**
 * DeviceTypes
 *
 * @author Dan Cunningham - Initial contribution
 */

public class DeviceTypes {

    public static final Map<Integer, String> DEVICE_MAPPING = new HashMap<>();
    static {
        {{#each deviceTypes}}
        {{#if (isEmpty id)}}
        {{else}}
        DEVICE_MAPPING.put({{id}}, "{{asUpperCamelCase name}}");
        {{/if}}
        {{/each}}
    }
    {{#each deviceTypes}}
    /**
     * {{details}}
     **/
    public static final Integer {{asUpperSnakeCase name}} = {{id}};
    {{/each}}
}
@@ -0,0 +1,47 @@
{
    "compilerOptions": {
        // Participate in workspace
        "composite": true,

        // Add compatibility with CommonJS modules
        "esModuleInterop": true,

        // Compile incrementally using a tsbuildinfo state file
        "incremental": true,

        // Target modern JavaScript (ES2020)
        "target": "es2020",

        // Generate CommonJS modules
        "module": "commonjs",

        // Use Node-style dependency resolution
        "moduleResolution": "node",

        "lib": ["ES2015", "DOM"],

        // Do not load globals from node_modules by default
        "types": [
            "node"
        ],

        // Enforce a subset of our code conventions
        "forceConsistentCasingInFileNames": true,
        "noImplicitAny": true,
        "noImplicitOverride": true,
        "noUnusedParameters": false,
        "noUnusedLocals": false,
        "strict": true,
        "strictNullChecks": true,
        "allowJs": true,
        "skipLibCheck": true,
        "outDir": "dist",
        "rootDir": "src",
        "resolveJsonModule": true
    },
    "include": ["src/**/*.ts"],
    "ts-node": {
        "transpileOnly": true,
        "files": true
    }
}
Binary file not shown. (image, 179 KiB)
Binary file not shown. (image, 216 KiB)
File diff suppressed because it is too large.
@@ -0,0 +1,43 @@
{
|
||||
"name": "matter-server",
|
||||
"version": "0.1.0",
|
||||
"description": "",
|
||||
"bin": "dist/src/app.js",
|
||||
"scripts": {
|
||||
"clean": "tsc --clean",
|
||||
"build": "tsc --build",
|
||||
"build-clean": "tsc --build --clean",
|
||||
"start": "ts-node src/app.ts",
|
||||
"webpack": "webpack --mode production",
|
||||
"webpack-dev": "webpack --mode development"
|
||||
},
|
||||
"devDependencies": {
|
||||
"@types/bn.js": "^5.1.5",
|
||||
"@types/node": "^20.9.0",
|
||||
"@types/uuid": "^9.0.7",
|
||||
"@types/ws": "^8.5.10",
|
||||
"@types/yargs": "^17.0.32",
|
||||
"ts-loader": "^9.4.4",
|
||||
"ts-node": "^10.9.1",
|
||||
"typescript": "^5.2.2",
|
||||
"webpack": "^5.88.2",
|
||||
"webpack-cli": "^5.1.4"
|
||||
},
|
||||
"dependencies": {
|
||||
"@matter/main": "v0.14.0-alpha.0-20250531-7ed2d6da8",
|
||||
"@matter/node": "v0.14.0-alpha.0-20250531-7ed2d6da8",
|
||||
"@project-chip/matter.js" : "v0.14.0-alpha.0-20250531-7ed2d6da8",
|
||||
"uuid": "^9.0.1",
|
||||
"ws": "^8.18.0",
|
||||
"yargs": "^17.7.2"
|
||||
},
|
||||
"files": [
|
||||
"dist/**/*",
|
||||
"src/**/*",
|
||||
"LICENSE",
|
||||
"README.md"
|
||||
],
|
||||
"publishConfig": {
|
||||
"access": "public"
|
||||
}
|
||||
}
|
@@ -0,0 +1,65 @@
import { WebSocketSession } from "./app";
import { Request, MessageType } from "./MessageTypes";
import { Logger } from "@matter/general";
import { printError } from "./util/error";

const logger = Logger.get("Controller");

export abstract class Controller {

    constructor(protected ws: WebSocketSession, protected params: URLSearchParams) {
    }

    /**
     * Initializes the controller
     */
    abstract init(): Promise<void>;

    /**
     * Closes the controller
     */
    abstract close(): void;

    /**
     * Returns the unique identifier of the controller
     */
    abstract id(): string;

    /**
     * Executes a command, similar to an RPC call, on the controller implementation
     * @param namespace
     * @param functionName
     * @param args
     */
    abstract executeCommand(namespace: string, functionName: string, args: any[]): any | Promise<any>;

    /**
     * Handles a request from the client
     * @param request
     */
    async handleRequest(request: Request): Promise<void> {
        const { id, namespace, function: functionName, args } = request;
        logger.debug(`Received request: ${Logger.toJSON(request)}`);
        try {
            const result = this.executeCommand(namespace, functionName, args || []);
            if (result instanceof Promise) {
                result.then((asyncResult) => {
                    this.ws.sendResponse(MessageType.ResultSuccess, id, asyncResult);
                }).catch((error) => {
                    printError(logger, error, functionName);
                    this.ws.sendResponse(MessageType.ResultError, id, undefined, error.message);
                });
            } else {
                this.ws.sendResponse(MessageType.ResultSuccess, id, result);
            }
        } catch (error) {
            if (error instanceof Error) {
                printError(logger, error, functionName);
                this.ws.sendResponse(MessageType.ResultError, id, undefined, error.message);
            } else {
                logger.error(`Unexpected error executing function ${functionName}: ${error}`);
                this.ws.sendResponse(MessageType.ResultError, id, undefined, String(error));
            }
        }
    }
}
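The dynamic dispatch that `executeCommand` implementations perform can be sketched in isolation: look up a function by name on a target object and invoke it with the request arguments. The `fakeNode` target below is a made-up stand-in, not part of the binding:

```typescript
// Hedged sketch of name-based command dispatch, mirroring the pattern the
// controllers use; `fakeNode` and its `add` method are hypothetical.
function dispatch(target: any, functionName: string, args: any[]): any {
    if (typeof target[functionName] !== "function") {
        throw new Error(`Function ${functionName} not found`);
    }
    return target[functionName](...args);
}

const fakeNode = {
    add: (a: number, b: number) => a + b,
};

console.log(dispatch(fakeNode, "add", [2, 3])); // 5
```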
@@ -0,0 +1,89 @@
export interface Request {
    id: string;
    namespace: string;
    function: string;
    args?: any[];
}

export interface Response {
    type: string;
    id: string;
    result?: any;
    error?: string;
}

export interface Event {
    type: string;
    data?: any;
}

export enum EventType {
    AttributeChanged = "attributeChanged",
    EventTriggered = "eventTriggered",
    NodeStateInformation = "nodeStateInformation",
    NodeData = "nodeData",
    BridgeEvent = "bridgeEvent"
}

export interface Message {
    type: string;
    message: any;
}

export enum MessageType {
    Result = "result",
    ResultError = "resultError",
    ResultSuccess = "resultSuccess",
}

export enum BridgeEventType {
    AttributeChanged = "attributeChanged",
    EventTriggered = "eventTriggered",
}

export interface BridgeEvent {
    type: string;
    data: any;
}

export interface BridgeAttributeChangedEvent {
    endpointId: string;
    clusterName: string;
    attributeName: string;
    data: any;
}

export interface BridgeEventTrigger {
    eventName: string;
    data: any;
}

export enum NodeState {
    /** Node is connected, but not fully initialized. */
    CONNECTED = "Connected",

    /**
     * Node is disconnected. Data is stale and interactions will most likely return an error. If the
     * controller instance is still active, the device will be reconnected once it is available again.
     */
    DISCONNECTED = "Disconnected",

    /** Node is reconnecting. Data is stale. It is not yet known whether the reconnection will succeed. */
    RECONNECTING = "Reconnecting",

    /**
     * The node could not be connected; the controller is now waiting for an MDNS announcement and
     * retries every 10 minutes.
     */
    WAITING_FOR_DEVICE_DISCOVERY = "WaitingForDeviceDiscovery",

    /**
     * The node structure has changed (endpoints were added or removed). Data is up to date.
     * This state is only fired when the subscribeAllAttributesAndEvents option is set to true.
     */
    STRUCTURE_CHANGED = "StructureChanged",

    /** The node was just decommissioned. */
    DECOMMISSIONED = "Decommissioned",
}
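A hypothetical round trip through these types: a client builds a `Request`, and the server wraps the result in the `Message` envelope before serializing it onto the WebSocket. The interfaces are redeclared minimally here so the sketch is self-contained, and all payload values are invented:

```typescript
// Assumed minimal copies of the Request and Message shapes from MessageTypes.ts;
// the ids, namespace, and result values are made up for illustration.
interface Request { id: string; namespace: string; function: string; args?: any[]; }
interface Message { type: string; message: any; }

const request: Request = { id: "42", namespace: "nodes", function: "listNodes", args: [] };

// A success response as the server-side sendResponse helper would wrap it
const response: Message = {
    type: "response",
    message: { type: "resultSuccess", id: request.id, result: ["node-1"] },
};

// Messages travel as JSON over the WebSocket
const decoded: Message = JSON.parse(JSON.stringify(response));
console.log(decoded.message.id); // 42
```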
@@ -0,0 +1,187 @@
import WebSocket, { Server } from 'ws';
import { LogFormat, Logger, LogLevel } from "@matter/general";
import { IncomingMessage } from 'http';
import { ClientController } from './client/ClientController';
import { Controller } from './Controller';
import yargs from 'yargs';
import { hideBin } from 'yargs/helpers';
import { Request, Response, Message, MessageType } from './MessageTypes';
import { BridgeController } from './bridge/BridgeController';
import { printError } from './util/error';

const argv: any = yargs(hideBin(process.argv)).argv;

const logger = Logger.get("matter");
Logger.level = LogLevel.DEBUG;
Logger.format = LogFormat.PLAIN;

process.on("SIGINT", () => shutdownHandler("SIGINT"));
process.on("SIGTERM", () => shutdownHandler("SIGTERM"));
process.on('uncaughtException', function (err) {
    logger.error(`Caught exception: ${err} ${err.stack}`);
});

const parentPid = process.ppid;
setInterval(() => {
    try {
        // Try sending signal 0 to the parent process.
        // If the parent is dead, this will throw an error;
        // otherwise this process would linger forever and consume a full CPU core.
        process.kill(parentPid, 0);
    } catch (e) {
        console.error("Parent process exited. Shutting down Node.js...");
        process.exit(1);
    }
}, 5000);

const shutdownHandler = async (signal: string) => {
    logger.info(`Received ${signal}. Closing WebSocket connections...`);

    const closePromises: Promise<void>[] = [];

    wss.clients.forEach((client: WebSocket) => {
        if (client.readyState === WebSocket.OPEN) {
            closePromises.push(
                new Promise<void>((resolve) => {
                    client.close(1000, "Server shutting down");
                    client.on('close', () => {
                        resolve();
                    });
                    client.on('error', (err) => {
                        console.error('Error while closing WebSocket connection:', err);
                        resolve();
                    });
                })
            );
        }
    });

    await Promise.all(closePromises)
        .then(() => {
            logger.info("All WebSocket connections closed.");
            return new Promise<void>((resolve) => wss.close(() => resolve()));
        })
        .then(() => {
            logger.info("WebSocket server closed.");
            process.exit(0);
        })
        .catch((err) => {
            console.error("Error during shutdown:", err);
            process.exit(1);
        });
};

export interface WebSocketSession extends WebSocket {
    controller?: Controller;
    sendResponse(type: string, id: string, result?: any, error?: string): void;
    sendEvent(type: string, data?: any): void;
}

const socketPort = argv.port ? parseInt(argv.port) : 8888;
const wss: Server = new WebSocket.Server({ port: socketPort, host: argv.host });

wss.on('connection', async (ws: WebSocketSession, req: IncomingMessage) => {

    ws.sendResponse = (type: string, id: string, result?: any, error?: string) => {
        const message: Message = {
            type: 'response',
            message: {
                type,
                id,
                result,
                error
            }
        };
        logger.debug(`Sending response: ${Logger.toJSON(message)}`);
        ws.send(Logger.toJSON(message));
    };

    ws.sendEvent = (type: string, data?: any) => {
        const message: Message = {
            type: 'event',
            message: {
                type,
                data
            }
        };
        logger.debug(`Sending event: ${Logger.toJSON(message)}`);
        ws.send(Logger.toJSON(message));
    };

    ws.on('open', () => {
        logger.info('WebSocket opened');
    });

    ws.on('message', (message: string) => {
        try {
            const request: Request = JSON.parse(message);
            ws.controller?.handleRequest(request);
        } catch (error) {
            if (error instanceof Error) {
                ws.sendResponse(MessageType.ResultError, '', undefined, error.message);
            }
        }
    });

    ws.on('close', async () => {
        logger.info('WebSocket closed');
        if (ws.controller) {
            await ws.controller.close();
        }
    });

    ws.on('error', (error: Error) => {
        logger.error(`WebSocket error: ${error} ${error.stack}`);
    });

    if (!req.url) {
        logger.error('No URL in the request');
        ws.close(1002, 'No URL in the request');
        return;
    }

    const params = new URLSearchParams(req.url.slice(req.url.indexOf('?')));
    const service = params.get('service') === 'bridge' ? 'bridge' : 'client';

    if (service === 'client') {
        const controllerName = params.get('controllerName');
        try {
            if (controllerName == null) {
                throw new Error('No controllerName parameter in the request');
            }
            wss.clients.forEach((client: WebSocket) => {
                const session = client as WebSocketSession;
                if (session.controller && session.controller.id() === `client-${controllerName}`) {
                    throw new Error(`Controller with name ${controllerName} already exists!`);
                }
            });
            ws.controller = new ClientController(ws, params);
            await ws.controller.init();
        } catch (error: any) {
            printError(logger, error, "ClientController.init()");
            logger.error("returning error", error.message);
            ws.close(1002, error.message);
            return;
        }
    } else {
        // For now we only support one bridge
        const uniqueId = "0";
        try {
            wss.clients.forEach((client: WebSocket) => {
                const session = client as WebSocketSession;
                if (session.controller && session.controller.id() === `bridge-${uniqueId}`) {
                    throw new Error(`Bridge with uniqueId ${uniqueId} already exists!`);
                }
            });
            ws.controller = new BridgeController(ws, params);
            await ws.controller.init();
        } catch (error: any) {
            printError(logger, error, "BridgeController.init()");
            logger.error("returning error", error.message);
            ws.close(1002, error.message);
            return;
        }
    }
    ws.sendEvent('ready', 'Controller initialized');
});

logger.info(`CHIP Controller Server listening on port ${socketPort}`);
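The connection routing above boils down to parsing the query string and defaulting to client mode when `service` is not `bridge`. A self-contained sketch of that selection logic (the URLs are examples, not fixed paths used by the binding):

```typescript
// Mirrors the query-string routing in app.ts: everything from '?' onward is
// parsed with URLSearchParams and the 'service' parameter picks the mode.
function serviceFor(url: string): "bridge" | "client" {
    const params = new URLSearchParams(url.slice(url.indexOf("?")));
    return params.get("service") === "bridge" ? "bridge" : "client";
}

console.log(serviceFor("/ws?service=bridge&storagePath=/tmp")); // bridge
console.log(serviceFor("/ws?controllerName=main"));             // client
```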
@@ -0,0 +1,56 @@
import { Logger } from "@matter/general";
import { Controller } from "../Controller";
import { WebSocketSession } from "../app";
import { DeviceNode } from "./DeviceNode";

const logger = Logger.get("BridgeController");

export class BridgeController extends Controller {

    deviceNode!: DeviceNode;

    constructor(override ws: WebSocketSession, override params: URLSearchParams) {
        super(ws, params);
        const storagePath = this.params.get('storagePath');

        if (storagePath === null) {
            throw new Error('No storagePath parameter in the request');
        }

        const deviceName = this.params.get('deviceName');
        const vendorName = this.params.get('vendorName');
        const passcode = this.params.get('passcode');
        const discriminator = this.params.get('discriminator');
        const vendorId = this.params.get('vendorId');
        const productName = this.params.get('productName');
        const productId = this.params.get('productId');
        const port = this.params.get('port');

        if (deviceName === null || vendorName === null || passcode === null || discriminator === null || vendorId === null || productName === null || productId === null || port === null) {
            throw new Error('Missing parameters in the request');
        }

        this.deviceNode = new DeviceNode(this, storagePath, deviceName, vendorName, parseInt(passcode), parseInt(discriminator), parseInt(vendorId), productName, parseInt(productId), parseInt(port));
    }

    override id(): string {
        return DeviceNode.DEFAULT_NODE_ID;
    }

    override async init() {
    }

    executeCommand(namespace: string, functionName: string, args: any[]): any | Promise<any> {
        const baseObject: any = this.deviceNode;

        logger.debug(`Executing function ${namespace}.${functionName}(${Logger.toJSON(args)})`);

        if (typeof baseObject[functionName] !== 'function') {
            throw new Error(`Function ${functionName} not found`);
        }

        return baseObject[functionName](...args);
    }

    async close() {
        return this.deviceNode.close();
    }
}
@@ -0,0 +1,277 @@
// Include this first to auto-register Crypto, Network and Time Node.js implementations
import "@matter/node";

import { FabricIndex, VendorId } from "@matter/types";
import { DeviceCommissioner, FabricManager, SessionManager } from "@matter/protocol";
import { Endpoint, ServerNode } from "@matter/node";
import { AggregatorEndpoint } from "@matter/node/endpoints";
import { Environment, Logger, StorageService } from "@matter/general";
import { GenericDeviceType } from "./devices/GenericDeviceType";
import { OnOffLightDeviceType } from "./devices/OnOffLightDeviceType";
import { OnOffPlugInDeviceType } from "./devices/OnOffPlugInDeviceType";
import { DimmableDeviceType } from "./devices/DimmableDeviceType";
import { ThermostatDeviceType } from "./devices/ThermostatDeviceType";
import { WindowCoveringDeviceType } from "./devices/WindowCoveringDeviceType";
import { BridgeController } from "./BridgeController";
import { DoorLockDeviceType } from "./devices/DoorLockDeviceType";
import { TemperatureSensorType } from "./devices/TemperatureSensorType";
import { HumiditySensorType } from "./devices/HumiditySensorType";
import { OccupancySensorDeviceType } from "./devices/OccupancySensorDeviceType";
import { ContactSensorDeviceType } from "./devices/ContactSensorDeviceType";
import { FanDeviceType } from "./devices/FanDeviceType";
import { ColorDeviceType } from "./devices/ColorDeviceType";
import { BridgeEvent, BridgeEventType, EventType } from "../MessageTypes";
import { BasicInformationServer } from "@matter/node/behaviors";

type DeviceType = OnOffLightDeviceType | OnOffPlugInDeviceType | DimmableDeviceType | ThermostatDeviceType | WindowCoveringDeviceType | DoorLockDeviceType | TemperatureSensorType | HumiditySensorType | OccupancySensorDeviceType | ContactSensorDeviceType | FanDeviceType | ColorDeviceType;

const logger = Logger.get("DeviceNode");

/**
 * The root device node for the Matter bridge: creates the initial aggregator endpoint,
 * adds devices to the bridge, and manages storage.
 */
export class DeviceNode {
    static DEFAULT_NODE_ID = "oh-bridge";

    private server: ServerNode | null = null;
    #environment: Environment = Environment.default;

    private aggregator: Endpoint<AggregatorEndpoint> | null = null;
    private devices: Map<string, GenericDeviceType> = new Map();
    private storageService: StorageService;
    private inCommissioning: boolean = false;

    constructor(private bridgeController: BridgeController, private storagePath: string, private deviceName: string, private vendorName: string, private passcode: number, private discriminator: number, private vendorId: number, private productName: string, private productId: number, private port: number) {
        logger.info(`Device Node Storage location: ${this.storagePath} (Directory)`);
        this.#environment.vars.set('storage.path', this.storagePath);
        this.storageService = this.#environment.get(StorageService);
    }

    // Public methods

    async initializeBridge(resetStorage: boolean = false) {
        await this.close();
        logger.info(`Initializing bridge`);
        await this.#init();
        if (resetStorage) {
            logger.info(`!!! Erasing ServerNode Storage !!!`);
            await this.server?.erase();
            await this.close();
            // Generate a new uniqueId for the bridge (bridgeBasicInformation.uniqueId)
            const ohStorage = await this.#ohBridgeStorage();
            await ohStorage.set("basicInformation.uniqueId", BasicInformationServer.createUniqueId());
            logger.info(`Initializing bridge again`);
            await this.#init();
        }
        logger.info(`Bridge initialized`);
    }

    async startBridge() {
        if (this.devices.size === 0) {
            throw new Error("No devices added, not starting");
        }
        if (!this.server) {
            throw new Error("Server not initialized, not starting");
        }
        if (this.server.lifecycle.isOnline) {
            throw new Error("Server is already started, not starting");
        }
        this.server.events.commissioning.enabled$Changed.on(async () => {
            logger.info(`Commissioning state changed to ${this.server?.state.commissioning.enabled}`);
            this.#sendCommissioningStatus();
        });
        this.server.lifecycle.online.on(() => {
            logger.info(`Bridge online`);
            this.#sendCommissioningStatus();
        });
        logger.info(this.server);
        logger.info(`Starting bridge`);
        await this.server.start();
        logger.info(`Bridge started`);
        const ohStorage = await this.#ohBridgeStorage();
        await ohStorage.set("lastStart", Date.now());
    }

    async close() {
        await this.server?.close();
        this.server = null;
        this.devices.clear();
    }

    async addEndpoint(deviceType: string, id: string, nodeLabel: string, productName: string, productLabel: string, serialNumber: string, attributeMap: { [key: string]: any }) {
        if (this.devices.has(id)) {
            throw new Error(`Device ${id} already exists!`);
        }

        if (!this.aggregator) {
            throw new Error(`Aggregator not initialized, aborting.`);
        }

        // Map the device type name to the concrete device class so each class can be
        // constructed with the shared parameter list in one place.
        const deviceTypeMap: { [key: string]: new (bridgeController: BridgeController, attributeMap: { [key: string]: any }, id: string, nodeLabel: string, productName: string, productLabel: string, serialNumber: string) => DeviceType } = {
            "OnOffLight": OnOffLightDeviceType,
            "OnOffPlugInUnit": OnOffPlugInDeviceType,
            "DimmableLight": DimmableDeviceType,
            "Thermostat": ThermostatDeviceType,
            "WindowCovering": WindowCoveringDeviceType,
            "DoorLock": DoorLockDeviceType,
            "TemperatureSensor": TemperatureSensorType,
            "HumiditySensor": HumiditySensorType,
            "OccupancySensor": OccupancySensorDeviceType,
            "ContactSensor": ContactSensorDeviceType,
            "Fan": FanDeviceType,
            "ColorLight": ColorDeviceType
        };

        const DeviceClass = deviceTypeMap[deviceType];
        if (!DeviceClass) {
            throw new Error(`Unsupported device type ${deviceType}`);
        }
        const device = new DeviceClass(this.bridgeController, attributeMap, id, nodeLabel, productName, productLabel, serialNumber);
        this.devices.set(id, device);
        await this.aggregator.add(device.endpoint);
    }

    async setEndpointState(endpointId: string, clusterName: string, stateName: string, stateValue: any) {
        const device = this.devices.get(endpointId);
        if (device) {
            device.updateState(clusterName, stateName, stateValue);
        }
    }

    async openCommissioningWindow() {
        const dc = this.#getStartedServer().env.get(DeviceCommissioner);
        logger.debug('opening basic commissioning window');
        await dc.allowBasicCommissioning(() => {
            logger.debug('commissioning window closed');
            this.inCommissioning = false;
            this.#sendCommissioningStatus();
        });
        this.inCommissioning = true;
        logger.debug('basic commissioning window open');
        this.#sendCommissioningStatus();
    }

    async closeCommissioningWindow() {
        const server = this.#getStartedServer();
        if (!server.state.commissioning.commissioned) {
            logger.debug('bridge is not commissioned, not closing commissioning window');
            this.#sendCommissioningStatus();
            return;
        }
        const dc = server.env.get(DeviceCommissioner);
        logger.debug('closing basic commissioning window');
        await dc.endCommissioning();
    }

    getCommissioningState() {
        const server = this.#getStartedServer();
        return {
            pairingCodes: {
                manualPairingCode: server.state.commissioning.pairingCodes.manualPairingCode,
                qrPairingCode: server.state.commissioning.pairingCodes.qrPairingCode
            },
            commissioningWindowOpen: !server.state.commissioning.commissioned || this.inCommissioning
        };
    }

    getFabrics() {
        const fabricManager = this.#getStartedServer().env.get(FabricManager);
        return fabricManager.fabrics;
    }

    async removeFabric(fabricIndex: number) {
        const fabricManager = this.#getStartedServer().env.get(FabricManager);
        await fabricManager.removeFabric(FabricIndex(fabricIndex));
    }

    // Private methods

    async #init() {
        const ohStorage = await this.#ohBridgeStorage();
        const uniqueId = await this.#uniqueIdForBridge();
        logger.info(`Unique ID: ${uniqueId}`);
        /**
         * Create a Matter ServerNode, which contains the Root Endpoint and all relevant data and configuration
         */
        try {
            this.server = await ServerNode.create({
                // Required: Give the Node a unique ID which is used to store the state of this node
                id: DeviceNode.DEFAULT_NODE_ID,

                // Provide network-relevant configuration like the port
                // Optional when operating only one device on a host; the default port is 5540
                network: {
                    port: this.port,
                },

                // Provide commissioning-relevant settings
                // Optional for development/testing purposes
                commissioning: {
                    passcode: this.passcode,
                    discriminator: this.discriminator,
                },

                // Provide node announcement settings
                // Optional: if omitted, some development defaults are used
                productDescription: {
                    name: this.deviceName,
                    deviceType: AggregatorEndpoint.deviceType,
                },

                // Provide defaults for the BasicInformation cluster on the Root endpoint
                // Optional: if omitted, some development defaults are used
                basicInformation: {
                    vendorName: this.vendorName,
                    vendorId: VendorId(this.vendorId),
                    nodeLabel: this.productName,
                    productName: this.productName,
                    productLabel: this.productName,
                    productId: this.productId,
                    uniqueId: uniqueId,
                },
            });
            this.aggregator = new Endpoint(AggregatorEndpoint, { id: "aggregator" });
            await this.server.add(this.aggregator);
            await ohStorage.set("basicInformation.uniqueId", uniqueId);
            logger.info(`ServerNode created with uniqueId: ${uniqueId}`);
        } catch (e) {
            logger.error(`Error starting server: ${e}`);
            throw e;
        }
    }

    #getStartedServer() {
        if (!this.server || !this.server.lifecycle.isOnline) {
            throw new Error("Server not ready");
        }
        return this.server;
    }

    async #ohBridgeStorage() {
        return (await this.storageService.open(DeviceNode.DEFAULT_NODE_ID)).createContext("openhab");
    }

    async #rootStorage() {
        return (await this.storageService.open(DeviceNode.DEFAULT_NODE_ID)).createContext("root");
    }

    async #uniqueIdForBridge() {
        const rootContext = await this.#ohBridgeStorage();
        return rootContext.get("basicInformation.uniqueId", BasicInformationServer.createUniqueId());
    }

    #sendCommissioningStatus() {
        const state = this.getCommissioningState();
        const be: BridgeEvent = {
            type: BridgeEventType.EventTriggered,
            data: {
                eventName: state.commissioningWindowOpen ? "commissioningWindowOpen" : "commissioningWindowClosed",
                data: state.pairingCodes
            }
        };
        this.bridgeController.ws.sendEvent(EventType.BridgeEvent, be);
    }
}
@@ -0,0 +1,90 @@
import { Endpoint } from "@matter/node";
import { ExtendedColorLightDevice } from "@matter/node/devices/extended-color-light";
import { GenericDeviceType } from './GenericDeviceType'; // Adjust the path as needed
import { ColorControlServer } from "@matter/main/behaviors";
import { ColorControl, LevelControl, OnOff } from "@matter/main/clusters";

export class ColorDeviceType extends GenericDeviceType {

    private normalizeValue(value: number, min: number, max: number): number {
        return Math.min(Math.max(value, min), max);
    }

    override createEndpoint(clusterValues: Record<string, any>) {
        const { colorControl } = clusterValues;
        const { colorTempPhysicalMinMireds, colorTempPhysicalMaxMireds } = colorControl;

        colorControl.colorTemperatureMireds = this.normalizeValue(
            colorControl.colorTemperatureMireds,
            colorTempPhysicalMinMireds,
            colorTempPhysicalMaxMireds
        );

        colorControl.startUpColorTemperatureMireds = this.normalizeValue(
            colorControl.startUpColorTemperatureMireds,
            colorTempPhysicalMinMireds,
            colorTempPhysicalMaxMireds
        );

        colorControl.coupleColorTempToLevelMinMireds = this.normalizeValue(
            colorControl.coupleColorTempToLevelMinMireds,
            colorTempPhysicalMinMireds,
            colorTempPhysicalMaxMireds
        );

        colorControl.coupleColorTempToLevelMaxMireds = this.normalizeValue(
            colorControl.coupleColorTempToLevelMaxMireds,
            colorTempPhysicalMinMireds,
            colorTempPhysicalMaxMireds
        );

        const endpoint = new Endpoint(ExtendedColorLightDevice.with(
            // setLocally=true for createOnOffServer, otherwise moveToHueAndSaturationLogic
            // will not be called because matter.js thinks the device is OFF.
            this.createOnOffServer(true).with(OnOff.Feature.Lighting),
            this.createLevelControlServer().with(LevelControl.Feature.Lighting),
            this.createColorControlServer().with(ColorControl.Feature.HueSaturation, ColorControl.Feature.ColorTemperature),
            ...this.defaultClusterServers()), {
            ...this.endPointDefaults(),
            ...clusterValues
        });

        return endpoint;
    }

    override defaultClusterValues() {
        return {
            levelControl: {
                currentLevel: 0
            },
            onOff: {
                onOff: false
            },
            colorControl: {
                colorMode: 0,
                currentHue: 0,
                currentSaturation: 0,
                colorTemperatureMireds: 154,
                startUpColorTemperatureMireds: 154,
                colorTempPhysicalMinMireds: 154,
                colorTempPhysicalMaxMireds: 667,
                coupleColorTempToLevelMinMireds: 154,
                coupleColorTempToLevelMaxMireds: 667
            }
        };
    }

    protected createColorControlServer(): typeof ColorControlServer {
        const parent = this;
        return class extends ColorControlServer {
            override async moveToColorTemperatureLogic(targetMireds: number, transitionTime: number) {
                await parent.sendBridgeEvent("colorControl", "colorTemperatureMireds", targetMireds);
                return super.moveToColorTemperatureLogic(targetMireds, transitionTime);
            }

            override async moveToHueAndSaturationLogic(targetHue: number, targetSaturation: number, transitionTime: number) {
                await parent.sendBridgeEvent("colorControl", "currentHue", targetHue);
                await parent.sendBridgeEvent("colorControl", "currentSaturation", targetSaturation);
            }
        };
    }
}
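The `normalizeValue` helper above is a plain clamp; this sketch shows how out-of-range color temperatures get pulled into the physical mireds range (154 to 667 mireds in the defaults above):

```typescript
// Standalone clamp matching the normalizeValue helper in ColorDeviceType.
const clamp = (value: number, min: number, max: number): number =>
    Math.min(Math.max(value, min), max);

console.log(clamp(100, 154, 667));  // 154 (below physical minimum)
console.log(clamp(370, 154, 667));  // 370 (already in range)
console.log(clamp(1000, 154, 667)); // 667 (above physical maximum)
```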
@@ -0,0 +1,28 @@
import { Endpoint } from "@matter/node";
import { ContactSensorDevice } from "@matter/node/devices/contact-sensor";
import { GenericDeviceType } from './GenericDeviceType'; // Adjust the path as needed

export class ContactSensorDeviceType extends GenericDeviceType {

    override createEndpoint(clusterValues: Record<string, any>) {
        const endpoint = new Endpoint(ContactSensorDevice.with(...this.defaultClusterServers()), {
            ...this.endPointDefaults(),
            ...clusterValues
        });

        return endpoint;
    }

    override defaultClusterValues() {
        return {
            booleanState: {
                stateValue: false
            }
        };
    }
}
@@ -0,0 +1,35 @@
import { Endpoint } from "@matter/node";
import { DimmableLightDevice } from "@matter/node/devices/dimmable-light";
import { GenericDeviceType } from './GenericDeviceType';
import { LevelControl, OnOff } from "@matter/main/clusters";
import { FixedLabelServer, LevelControlServer, OnOffServer } from "@matter/main/behaviors";
import { TypeFromPartialBitSchema } from "@matter/main/types";

const LevelControlType = LevelControlServer.with(LevelControl.Feature.Lighting);

export class DimmableDeviceType extends GenericDeviceType {

    override createEndpoint(clusterValues: Record<string, any>) {
        const endpoint = new Endpoint(DimmableLightDevice.with(
            this.createOnOffServer(),
            this.createLevelControlServer().with(LevelControl.Feature.Lighting),
            ...this.defaultClusterServers()), {
            ...this.endPointDefaults(),
            ...clusterValues
        });
        return endpoint;
    }

    override defaultClusterValues() {
        return {
            levelControl: {
                currentLevel: 254
            },
            onOff: {
                onOff: false
            },
        };
    }
}
@ -0,0 +1,46 @@
import { Endpoint } from "@matter/node";
import { DoorLockDevice } from "@matter/node/devices/door-lock";
import { GenericDeviceType } from './GenericDeviceType';
import { DoorLockServer } from "@matter/main/behaviors";
import { DoorLock } from "@matter/main/clusters";
import LockState = DoorLock.LockState;

export class DoorLockDeviceType extends GenericDeviceType {

    override createEndpoint(clusterValues: Record<string, any>) {
        const endpoint = new Endpoint(DoorLockDevice.with(...this.defaultClusterServers(), this.createDoorLockServer()), {
            ...this.endPointDefaults(),
            ...clusterValues
        });
        return endpoint;
    }

    override defaultClusterValues() {
        return {
            doorLock: {
                lockState: 0,
                lockType: 2,
                actuatorEnabled: true,
                doorState: 1,
                maxPinCodeLength: 10,
                minPinCodeLength: 1,
                wrongCodeEntryLimit: 5,
                userCodeTemporaryDisableTime: 10,
                operatingMode: 0
            }
        };
    }

    protected createDoorLockServer(): typeof DoorLockServer {
        const parent = this;
        return class extends DoorLockServer {
            override async lockDoor() {
                await parent.sendBridgeEvent("doorLock", "lockState", LockState.Locked);
            }

            override async unlockDoor() {
                await parent.sendBridgeEvent("doorLock", "lockState", LockState.Unlocked);
            }
        };
    }
}

@ -0,0 +1,72 @@
import { Endpoint } from "@matter/node";
import { FanDevice } from "@matter/node/devices/fan";
import { GenericDeviceType } from "./GenericDeviceType";
import { FanControl, OnOff } from "@matter/main/clusters";
import { FanControlServer, OnOffServer } from "@matter/node/behaviors";
import { Logger } from "@matter/main";

const logger = Logger.get("FanDeviceType");

export class FanDeviceType extends GenericDeviceType {
    override createEndpoint(clusterValues: Record<string, any>) {
        const features: FanControl.Feature[] = [];
        if (clusterValues.fanControl.featureMap.step === true) {
            features.push(FanControl.Feature.Step);
        }

        switch (clusterValues.fanControl.fanModeSequence) {
            case FanControl.FanModeSequence.OffLowMedHighAuto:
            case FanControl.FanModeSequence.OffLowHighAuto:
            case FanControl.FanModeSequence.OffHighAuto:
                features.push(FanControl.Feature.Auto);
                clusterValues.fanControl.featureMap.auto = true;
                break;
            default:
                clusterValues.fanControl.featureMap.auto = false;
                break;
        }

        logger.debug(`createEndpoint values: ${JSON.stringify(clusterValues)}`);
        const endpoint = new Endpoint(
            FanDevice.with(
                ...this.defaultClusterServers(),
                FanControlServer.with(...features),
                ...(clusterValues.onOff?.onOff !== undefined
                    ? [this.createOnOffServer()]
                    : [])
            ),
            {
                ...this.endPointDefaults(),
                ...clusterValues,
            }
        );
        endpoint.events.fanControl.fanMode$Changed.on((value) => {
            this.sendBridgeEvent("fanControl", "fanMode", value);
        });

        endpoint.events.fanControl.percentSetting$Changed.on((value) => {
            this.sendBridgeEvent("fanControl", "percentSetting", value);
        });

        return endpoint;
    }

    override defaultClusterValues() {
        return {
            fanControl: {
                featureMap: {
                    auto: false,
                    step: false,
                    multiSpeed: false,
                    airflowDirection: false,
                    rocking: false,
                    wind: false,
                },
                fanMode: FanControl.FanMode.Off,
                fanModeSequence: FanControl.FanModeSequence.OffHigh,
                percentCurrent: 0,
                percentSetting: 0,
            },
        };
    }
}

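The fan endpoint above enables the FanControl Auto feature only for mode sequences that include an Auto mode. That selection rule can be sketched standalone (the helper name and string type are illustrative, not part of the binding):

```typescript
// Illustrative restatement of the switch in FanDeviceType.createEndpoint:
// only the sequences whose names end in "Auto" enable the Auto feature.
type FanModeSequenceName =
    | "OffLowMedHigh" | "OffLowHigh"
    | "OffLowMedHighAuto" | "OffLowHighAuto"
    | "OffHighAuto" | "OffHigh";

function sequenceSupportsAuto(sequence: FanModeSequenceName): boolean {
    return sequence.endsWith("Auto");
}
```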
@ -0,0 +1,157 @@
import { Endpoint } from "@matter/main";
import { BridgeController } from "../BridgeController";
import { EventType, BridgeEvent, BridgeEventType } from '../../MessageTypes';
import { BridgedDeviceBasicInformationServer } from "@matter/node/behaviors/bridged-device-basic-information";
import { FixedLabelServer, LevelControlServer, OnOffServer } from "@matter/main/behaviors";
import { LevelControl, OnOff } from "@matter/main/clusters";
import { Logger } from "@matter/main";
import { TypeFromPartialBitSchema } from "@matter/main/types";

const logger = Logger.get("GenericDevice");
const OnOffType = OnOffServer.with(OnOff.Feature.Lighting);

/**
 * This is the base class for all matter device types.
 */
export abstract class GenericDeviceType {

    protected updateLocks = new Set<string>();
    endpoint: Endpoint;

    constructor(
        protected bridgeController: BridgeController,
        protected attributeMap: Record<string, any>,
        protected endpointId: string,
        protected nodeLabel: string,
        protected productName: string,
        protected productLabel: string,
        protected serialNumber: string
    ) {
        this.nodeLabel = this.#truncateString(nodeLabel);
        this.productLabel = this.#truncateString(productLabel);
        this.productName = this.#truncateString(productName);
        this.serialNumber = this.#truncateString(serialNumber);
        this.endpoint = this.createEndpoint(this.#generateAttributes(this.defaultClusterValues(), attributeMap));
        logger.debug(`New Device: label: ${this.nodeLabel} name: ${this.productName} product label: ${this.productLabel} serial: ${this.serialNumber}`);
    }

    abstract defaultClusterValues(): Record<string, any>;
    abstract createEndpoint(clusterValues: Record<string, any>): Endpoint;

    public async updateState(clusterName: string, attributeName: string, attributeValue: any) {
        const args = {} as { [key: string]: any };
        args[clusterName] = {} as { [key: string]: any };
        args[clusterName][attributeName] = attributeValue;
        await this.endpoint.set(args);
    }

    protected sendBridgeEvent(clusterName: string, attributeName: string, attributeValue: any) {
        const be: BridgeEvent = {
            type: BridgeEventType.AttributeChanged,
            data: {
                endpointId: this.endpoint.id,
                clusterName: clusterName,
                attributeName: attributeName,
                data: attributeValue,
            }
        };
        this.sendEvent(EventType.BridgeEvent, be);
    }

    protected sendEvent(eventName: string, data: any) {
        logger.debug(`Sending event: ${eventName} with data: ${data}`);
        this.bridgeController.ws.sendEvent(eventName, data);
    }

    protected endPointDefaults() {
        return {
            id: this.endpointId,
            bridgedDeviceBasicInformation: {
                nodeLabel: this.nodeLabel,
                productName: this.productName,
                productLabel: this.productLabel,
                serialNumber: this.serialNumber,
                reachable: true,
            }
        };
    }

    // Note that these overrides assume openHAB will send the state back when it changes,
    // so we do not set it here prematurely. Otherwise we would want to call super.on() and
    // so on (the same applies to level control or any other cluster behavior) to set local state.

    protected createOnOffServer(setLocally: boolean = false): typeof OnOffType {
        const parent = this;
        return class extends OnOffType {
            override async on() {
                await parent.sendBridgeEvent("onOff", "onOff", true);
                if (setLocally) {
                    await super.on();
                }
            }
            override async off() {
                await parent.sendBridgeEvent("onOff", "onOff", false);
                if (setLocally) {
                    await super.off();
                }
            }
        };
    }

    protected createLevelControlServer(): typeof LevelControlServer {
        const parent = this;
        return class extends LevelControlServer {
            override async moveToLevelLogic(
                level: number,
                transitionTime: number | null,
                withOnOff: boolean,
                options: TypeFromPartialBitSchema<typeof LevelControl.Options>,
            ) {
                await parent.sendBridgeEvent("levelControl", "currentLevel", level);
            }
        };
    }

    protected defaultClusterServers() {
        return [
            BridgedDeviceBasicInformationServer,
            FixedLabelServer
        ];
    }

    #truncateString(str: string, maxLength: number = 32): string {
        return str.slice(0, maxLength);
    }

    #generateAttributes<T extends Record<string, any>, U extends Partial<T>>(defaults: T, overrides: U): T {
        const alwaysAdd = ["fixedLabel"];
        let entries = this.#mergeWithDefaults(defaults, overrides);
        // Ensure entries include the values from overrides for the keys in alwaysAdd
        alwaysAdd.forEach((key) => {
            if (key in overrides) {
                entries[key as keyof T] = overrides[key as keyof T]!;
            }
        });
        return entries;
    }

    #mergeWithDefaults<T extends Record<string, any>, U extends Partial<T>>(defaults: T, overrides: U): T {
        function isPlainObject(value: any): value is Record<string, any> {
            return value && typeof value === 'object' && !Array.isArray(value);
        }
        // Get unique keys from both objects
        const allKeys = [...new Set([...Object.keys(defaults), ...Object.keys(overrides)])];

        return allKeys.reduce((result, key) => {
            const defaultValue = defaults[key];
            const overrideValue = overrides[key];

            // If both values exist and are objects, merge them recursively
            if (isPlainObject(defaultValue) && isPlainObject(overrideValue)) {
                result[key] = this.#mergeWithDefaults(defaultValue, overrideValue);
            } else {
                // Use override value if it exists, otherwise use default value
                result[key] = key in overrides ? overrideValue : defaultValue;
            }

            return result;
        }, {} as Record<string, any>) as T;
    }
}

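The defaults-merge strategy used by GenericDeviceType (override values win, nested plain objects merge recursively) can be sketched standalone. This is an illustrative re-implementation with hypothetical data, not an import from the binding:

```typescript
// Standalone sketch of the GenericDeviceType defaults-merge behavior:
// override values win, nested plain objects are merged recursively.
function isPlainObject(value: unknown): value is Record<string, any> {
    return !!value && typeof value === "object" && !Array.isArray(value);
}

function mergeWithDefaults<T extends Record<string, any>>(defaults: T, overrides: Partial<T>): T {
    const allKeys = [...new Set([...Object.keys(defaults), ...Object.keys(overrides)])];
    return allKeys.reduce((result, key) => {
        const d = (defaults as any)[key];
        const o = (overrides as any)[key];
        result[key] = isPlainObject(d) && isPlainObject(o)
            ? mergeWithDefaults(d, o)           // recurse into nested cluster values
            : key in overrides ? o : d;         // override wins when present
        return result;
    }, {} as Record<string, any>) as T;
}

// Example: an override for one cluster leaves the other cluster's defaults intact.
const merged = mergeWithDefaults(
    { onOff: { onOff: false }, levelControl: { currentLevel: 254 } },
    { levelControl: { currentLevel: 100 } },
);
```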
@ -0,0 +1,22 @@
import { Endpoint } from "@matter/node";
import { HumiditySensorDevice } from "@matter/node/devices/humidity-sensor";
import { GenericDeviceType } from './GenericDeviceType';

export class HumiditySensorType extends GenericDeviceType {

    override createEndpoint(clusterValues: Record<string, any>) {
        const endpoint = new Endpoint(HumiditySensorDevice.with(...this.defaultClusterServers()), {
            ...this.endPointDefaults(),
            ...clusterValues
        });
        return endpoint;
    }

    override defaultClusterValues() {
        return {
            relativeHumidityMeasurement: {
                measuredValue: 0
            }
        };
    }
}

@ -0,0 +1,34 @@
import { Endpoint } from "@matter/node";
import { OccupancySensorDevice } from "@matter/node/devices/occupancy-sensor";
import { GenericDeviceType } from "./GenericDeviceType";
import { OccupancySensing } from "@matter/main/clusters";
import { OccupancySensingServer } from "@matter/node/behaviors";

/**
 * This is the device type for the occupancy sensor.
 */
export class OccupancySensorDeviceType extends GenericDeviceType {
    override createEndpoint(clusterValues: Record<string, any>) {
        const endpoint = new Endpoint(
            OccupancySensorDevice.with(
                OccupancySensingServer.with(OccupancySensing.Feature.PassiveInfrared),
                ...this.defaultClusterServers()),
            {
                ...this.endPointDefaults(),
                ...clusterValues,
            }
        );
        return endpoint;
    }

    override defaultClusterValues() {
        return {
            occupancySensing: {
                occupancy: {
                    occupied: false,
                },
                occupancySensorType: OccupancySensing.OccupancySensorType.Pir,
            },
        };
    }
}

@ -0,0 +1,24 @@
import { Endpoint } from "@matter/node";
import { OnOffLightDevice } from "@matter/node/devices/on-off-light";
import { GenericDeviceType } from './GenericDeviceType';
import { OnOff } from "@matter/main/clusters";

export class OnOffLightDeviceType extends GenericDeviceType {

    override createEndpoint(clusterValues: Record<string, any>) {
        const endpoint = new Endpoint(OnOffLightDevice.with(
            ...this.defaultClusterServers(), this.createOnOffServer()), {
            ...this.endPointDefaults(),
            ...clusterValues
        });
        return endpoint;
    }

    override defaultClusterValues() {
        return {
            onOff: {
                onOff: false
            }
        };
    }
}

@ -0,0 +1,25 @@
import { Endpoint } from "@matter/node";
import { OnOffPlugInUnitDevice } from "@matter/node/devices/on-off-plug-in-unit";
import { GenericDeviceType } from './GenericDeviceType';
import { OnOff } from "@matter/main/clusters";

export class OnOffPlugInDeviceType extends GenericDeviceType {

    override createEndpoint(clusterValues: Record<string, any>) {
        const endpoint = new Endpoint(OnOffPlugInUnitDevice.with(
            ...this.defaultClusterServers(),
            this.createOnOffServer()), {
            ...this.endPointDefaults(),
            ...clusterValues
        });
        return endpoint;
    }

    override defaultClusterValues() {
        return {
            onOff: {
                onOff: false
            }
        };
    }
}

@ -0,0 +1,21 @@
import { Endpoint } from "@matter/node";
import { TemperatureSensorDevice } from "@matter/node/devices/temperature-sensor";
import { GenericDeviceType } from './GenericDeviceType';

export class TemperatureSensorType extends GenericDeviceType {

    override createEndpoint(clusterValues: Record<string, any>) {
        const endpoint = new Endpoint(TemperatureSensorDevice.with(...this.defaultClusterServers()), {
            ...this.endPointDefaults(),
            ...clusterValues
        });
        return endpoint;
    }

    override defaultClusterValues() {
        return {
            temperatureMeasurement: {
                measuredValue: 0
            }
        };
    }
}

@ -0,0 +1,70 @@
import { Endpoint } from "@matter/node";
import { ThermostatDevice } from "@matter/node/devices/thermostat";
import { ThermostatServer } from '@matter/node/behaviors/thermostat';
import { Thermostat } from '@matter/main/clusters';
import { GenericDeviceType } from './GenericDeviceType';

export class ThermostatDeviceType extends GenericDeviceType {

    override createEndpoint(clusterValues: Record<string, any>) {
        let controlSequenceOfOperation = -1;
        const features: Thermostat.Feature[] = [];
        if (clusterValues.thermostat?.occupiedHeatingSetpoint != undefined) {
            features.push(Thermostat.Feature.Heating);
            controlSequenceOfOperation = 2;
        }
        if (clusterValues.thermostat?.occupiedCoolingSetpoint != undefined) {
            features.push(Thermostat.Feature.Cooling);
            controlSequenceOfOperation = 0;
        }
        if (features.indexOf(Thermostat.Feature.Heating) != -1 && features.indexOf(Thermostat.Feature.Cooling) != -1) {
            features.push(Thermostat.Feature.AutoMode);
            controlSequenceOfOperation = 4;
        }

        if (controlSequenceOfOperation < 0) {
            throw new Error("At least heating, cooling or both must be added");
        }

        clusterValues.thermostat.controlSequenceOfOperation = controlSequenceOfOperation;

        const endpoint = new Endpoint(ThermostatDevice.with(this.createOnOffServer().with(), ThermostatServer.with(
            ...features
        ), ...this.defaultClusterServers()), {
            ...this.endPointDefaults(),
            ...clusterValues
        });
        endpoint.events.thermostat.occupiedHeatingSetpoint$Changed?.on((value) => {
            this.sendBridgeEvent('thermostat', 'occupiedHeatingSetpoint', value);
        });
        endpoint.events.thermostat.occupiedCoolingSetpoint$Changed?.on((value) => {
            this.sendBridgeEvent('thermostat', 'occupiedCoolingSetpoint', value);
        });
        endpoint.events.thermostat.systemMode$Changed.on((value) => {
            this.sendBridgeEvent('thermostat', 'systemMode', value);
        });
        return endpoint;
    }

    override defaultClusterValues() {
        return {
            thermostat: {
                systemMode: 0,
                localTemperature: 0,
                minHeatSetpointLimit: 0,
                maxHeatSetpointLimit: 3500,
                absMinHeatSetpointLimit: 0,
                absMaxHeatSetpointLimit: 3500,
                minCoolSetpointLimit: 0,
                absMinCoolSetpointLimit: 0,
                maxCoolSetpointLimit: 3500,
                absMaxCoolSetpointLimit: 3500,
                minSetpointDeadBand: 0
            },
            onOff: {
                onOff: false
            }
        };
    }
}

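ThermostatDeviceType derives the controlSequenceOfOperation attribute from which setpoints openHAB supplies. A standalone restatement of that mapping (the helper is illustrative, not part of the binding):

```typescript
// Mirrors the feature detection in ThermostatDeviceType.createEndpoint:
// cooling only -> 0, heating only -> 2, heating + cooling (auto mode) -> 4.
function controlSequenceOfOperation(hasHeating: boolean, hasCooling: boolean): number {
    if (hasHeating && hasCooling) return 4; // both setpoints: AutoMode is added
    if (hasHeating) return 2;               // heating-only
    if (hasCooling) return 0;               // cooling-only
    throw new Error("At least heating, cooling or both must be added");
}
```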
@ -0,0 +1,60 @@
import { Endpoint } from "@matter/node";
import { WindowCoveringDevice } from "@matter/node/devices/window-covering";
import { MovementDirection, MovementType, WindowCoveringServer } from '@matter/node/behaviors/window-covering';
import { WindowCovering } from '@matter/main/clusters';
import { GenericDeviceType } from './GenericDeviceType';

export class WindowCoveringDeviceType extends GenericDeviceType {

    override createEndpoint(clusterValues: Record<string, any>) {
        const features: WindowCovering.Feature[] = [];
        features.push(WindowCovering.Feature.Lift);
        features.push(WindowCovering.Feature.PositionAwareLift);
        const endpoint = new Endpoint(WindowCoveringDevice.with(this.createWindowCoveringServer().with(
            ...features,
        ), ...this.defaultClusterServers()), {
            ...this.endPointDefaults(),
            ...clusterValues
        });
        endpoint.events.windowCovering.operationalStatus$Changed.on(value => {
            this.sendBridgeEvent("windowCovering", "operationalStatus", value);
        });
        return endpoint;
    }

    override defaultClusterValues() {
        return {
            windowCovering: {
                currentPositionLiftPercent100ths: 0,
                configStatus: {
                    operational: true,
                    onlineReserved: false,
                    liftMovementReversed: false,
                    liftPositionAware: true,
                    tiltPositionAware: false,
                    liftEncoderControlled: true,
                    tiltEncoderControlled: false
                }
            }
        };
    }

    // this allows us to get all commands to move the device, not just if it thinks the position has changed
    private createWindowCoveringServer(): typeof WindowCoveringServer {
        const parent = this;
        return class extends WindowCoveringServer {
            override async handleMovement(type: MovementType, reversed: boolean, direction: MovementDirection, targetPercent100ths?: number): Promise<void> {
                if (targetPercent100ths != null) {
                    await parent.sendBridgeEvent("windowCovering", "targetPositionLiftPercent100ths", targetPercent100ths);
                }
            }
            override async handleStopMovement() {
                await parent.sendBridgeEvent("windowCovering", "operationalStatus", {
                    global: WindowCovering.MovementStatus.Stopped,
                    lift: WindowCovering.MovementStatus.Stopped,
                    tilt: WindowCovering.MovementStatus.Stopped
                });
            }
        };
    }
}

@ -0,0 +1,70 @@
import { Logger } from "@matter/general";
import { ControllerNode } from "./ControllerNode";
import { Nodes } from "./namespaces/Nodes";
import { Clusters } from "./namespaces/Clusters";
import { WebSocketSession } from "../app";
import { Controller } from "../Controller";

const logger = Logger.get("ClientController");

/**
 * This class exists to expose the "nodes" and "clusters" namespaces to websocket clients
 */
export class ClientController extends Controller {

    nodes?: Nodes;
    clusters?: Clusters;
    controllerNode: ControllerNode;
    controllerName: string;

    constructor(override ws: WebSocketSession, override params: URLSearchParams) {
        super(ws, params);
        const stringId = this.params.get('nodeId');
        const nodeId = stringId != null ? parseInt(stringId) : null;
        const storagePath = this.params.get('storagePath');
        const controllerName = this.params.get('controllerName');

        if (nodeId === null || storagePath === null || controllerName === null) {
            throw new Error('Missing required parameters in the request');
        }

        this.controllerName = controllerName;
        this.controllerNode = new ControllerNode(storagePath, controllerName, nodeId, ws);
    }

    id(): string {
        return "client-" + this.controllerName;
    }

    async init() {
        await this.controllerNode.initialize();
        logger.info(`Started Node`);
        // set up listeners to send events back to the client
        this.nodes = new Nodes(this.controllerNode);
        this.clusters = new Clusters(this.controllerNode);
    }

    async close() {
        logger.info(`Closing Node`);
        await this.controllerNode?.close();
        logger.info(`Node Closed`);
    }

    executeCommand(namespace: string, functionName: string, args: any[]): any | Promise<any> {
        const controllerAny: any = this;

        logger.debug(`Executing function ${namespace}.${functionName}(${Logger.toJSON(args)})`);

        if (typeof controllerAny[namespace] !== 'object') {
            throw new Error(`Namespace ${namespace} not found`);
        }

        const baseObject = controllerAny[namespace];
        if (typeof baseObject[functionName] !== 'function') {
            throw new Error(`Function ${functionName} not found`);
        }

        return baseObject[functionName](...args);
    }
}

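The namespace.function dispatch in ClientController.executeCommand can be sketched in isolation. The names and sample namespaces below are illustrative, not the binding's actual namespace objects:

```typescript
// Minimal sketch of the dispatch pattern: an incoming websocket request names
// a namespace (e.g. "nodes", "clusters") and a method on it, which is looked
// up dynamically and invoked with the supplied arguments.
type Namespaces = Record<string, Record<string, (...args: any[]) => any>>;

function executeCommand(root: Namespaces, namespace: string, functionName: string, args: any[]): any {
    const baseObject = root[namespace];
    if (typeof baseObject !== "object") {
        throw new Error(`Namespace ${namespace} not found`);
    }
    if (typeof baseObject[functionName] !== "function") {
        throw new Error(`Function ${functionName} not found`);
    }
    return baseObject[functionName](...args);
}

// Hypothetical namespace table standing in for the controller's nodes/clusters objects.
const controller: Namespaces = {
    nodes: { list: () => ["node-1", "node-2"] },
};
```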
@ -0,0 +1,367 @@
// Include this first to auto-register Crypto, Network and Time Node.js implementations
import { CommissioningController } from "@project-chip/matter.js";
import { NodeId } from "@matter/types";
import { PairedNode, NodeStates, Endpoint } from "@project-chip/matter.js/device";
import { Environment, Logger, StorageContext } from "@matter/general";
import { ControllerStore } from "@matter/node";
import { WebSocketSession } from "../app";
import { EventType, NodeState } from '../MessageTypes';
import { printError } from "../util/error";

const logger = Logger.get("ControllerNode");

/**
 * This class represents the Matter Controller / Admin client
 */
export class ControllerNode {

    private environment: Environment = Environment.default;
    private storageContext?: StorageContext;
    private nodes: Map<NodeId, PairedNode> = new Map();
    commissioningController?: CommissioningController;

    constructor(
        private readonly storageLocation: string,
        private readonly controllerName: string,
        private readonly nodeNum: number,
        private readonly ws: WebSocketSession,
        private readonly netInterface?: string
    ) { }

    get Store() {
        if (!this.storageContext) {
            throw new Error("Storage uninitialized");
        }
        return this.storageContext;
    }

    /**
     * Closes the controller node
     */
    async close() {
        await this.commissioningController?.close();
        this.nodes.clear();
    }

    /**
     * Initializes the controller node
     */
    async initialize() {
        const outputDir = this.storageLocation;
        const id = `${this.controllerName}-${this.nodeNum.toString()}`;
        const prefix = "openHAB: ";
        const fabricLabel = prefix + this.controllerName.substring(0, 31 - prefix.length);

        logger.info(`Storage location: ${outputDir} (Directory)`);
        this.environment.vars.set('storage.path', outputDir);

        // TODO we may need to choose which network interface to use
        if (this.netInterface !== undefined) {
            this.environment.vars.set("mdns.networkinterface", this.netInterface);
        }
        this.commissioningController = new CommissioningController({
            environment: {
                environment: this.environment,
                id,
            },
            autoConnect: false,
            adminFabricLabel: fabricLabel,
        });
        await this.commissioningController.initializeControllerStore();

        const controllerStore = this.environment.get(ControllerStore);
        // TODO: Implement resetStorage
        // if (resetStorage) {
        //     await controllerStore.erase();
        // }
        this.storageContext = controllerStore.storage.createContext("Node");

        if (await this.Store.has("ControllerFabricLabel")) {
            await this.commissioningController.updateFabricLabel(
                await this.Store.get<string>("ControllerFabricLabel", fabricLabel),
            );
        }

        await this.commissioningController.start();
    }

    /**
     * Connects to a node, setting up event listeners. If called multiple times for the same node, it will trigger a node reconnect.
     * If a connection timeout is provided, the function will return a promise that will resolve when the node is initialized or reject if the node
     * becomes disconnected or the timeout is reached. Note that the node will continue to connect in the background and the client will be notified
     * when the node is initialized through the NodeStateInformation event. To stop the reconnection, call the disconnectNode method.
     *
     * @param nodeId The nodeId of the node to connect to
     * @param connectionTimeout Optional timeout in milliseconds. If omitted or non-positive, no timeout will be applied
     * @returns Promise that resolves when the node is initialized
     * @throws Error if connection times out or node becomes disconnected
     */
    async initializeNode(nodeId: string | number, connectionTimeout?: number): Promise<void> {
        if (this.commissioningController === undefined) {
            throw new Error("CommissioningController not initialized");
        }

        let node = this.nodes.get(NodeId(BigInt(nodeId)));
        if (node !== undefined) {
            node.triggerReconnect();

            return new Promise((resolve, reject) => {
                let timeoutId: NodeJS.Timeout | undefined;

                if (connectionTimeout && connectionTimeout > 0) {
                    timeoutId = setTimeout(() => {
                        logger.info(`Node ${node?.nodeId} state: ${node?.state}`);
                        if (node?.state === NodeStates.Disconnected ||
                            node?.state === NodeStates.WaitingForDeviceDiscovery ||
                            node?.state === NodeStates.Reconnecting) {
                            reject(new Error(`Node ${node?.nodeId} reconnection failed: ${NodeStates[node?.state]}`));
                        } else {
                            reject(new Error(`Node ${node?.nodeId} reconnection timed out`));
                        }
                    }, connectionTimeout);
                }

                // Cancel timer if node initializes
                node?.events.initializedFromRemote.once(() => {
                    logger.info(`Node ${node?.nodeId} initialized from remote`);
                    if (timeoutId) clearTimeout(timeoutId);
                    resolve();
                });
            });
        }

        node = await this.commissioningController.getNode(NodeId(BigInt(nodeId)));
        if (node === undefined) {
            throw new Error(`Node ${nodeId} not connected`);
        }
        node.connect();
        this.nodes.set(node.nodeId, node);

        // register event listeners once the node is fully connected
        node.events.initializedFromRemote.once(() => {
            node.events.attributeChanged.on((data) => {
                data.path.nodeId = node.nodeId;
                this.ws.sendEvent(EventType.AttributeChanged, data);
            });

            node.events.eventTriggered.on((data) => {
                data.path.nodeId = node.nodeId;
                this.ws.sendEvent(EventType.EventTriggered, data);
            });

            node.events.stateChanged.on(info => {
                const data: any = {
                    nodeId: node.nodeId,
                    state: NodeStates[info]
                };
                this.ws.sendEvent(EventType.NodeStateInformation, data);
            });

            node.events.structureChanged.on(() => {
                const data: any = {
                    nodeId: node.nodeId,
                    state: NodeState.STRUCTURE_CHANGED
                };
                this.ws.sendEvent(EventType.NodeStateInformation, data);
            });

            node.events.decommissioned.on(() => {
                this.nodes.delete(node.nodeId);
                const data: any = {
                    nodeId: node.nodeId,
                    state: NodeState.DECOMMISSIONED
                };
                this.ws.sendEvent(EventType.NodeStateInformation, data);
            });
        });

        return new Promise((resolve, reject) => {
            let timeoutId: NodeJS.Timeout | undefined;

            if (connectionTimeout && connectionTimeout > 0) {
                timeoutId = setTimeout(() => {
                    logger.info(`Node ${node?.nodeId} initialization timed out`);

                    // register a listener to send the node state information once the node is connected at some future time
                    node.events.initializedFromRemote.once(() => {
                        const data: any = {
                            nodeId: node.nodeId,
                            state: NodeStates.Connected
                        };
                        this.ws.sendEvent(EventType.NodeStateInformation, data);
                    });

                    if (node?.state === NodeStates.Disconnected ||
                        node?.state === NodeStates.WaitingForDeviceDiscovery) {
                        reject(new Error(`Node ${node.nodeId} connection failed: ${NodeStates[node.state]}`));
                    } else {
                        reject(new Error(`Node ${node.nodeId} connection timed out`));
                    }
                }, connectionTimeout);
            }

            node.events.initializedFromRemote.once(() => {
                if (timeoutId) clearTimeout(timeoutId);
                resolve();
            });
        });
    }

    /**
     * Returns a node by nodeId. If the node has not been initialized, it will throw an error.
     * @param nodeId
     * @returns
     */
    getNode(nodeId: number | string | NodeId) {
        if (this.commissioningController === undefined) {
            throw new Error("CommissioningController not initialized");
        }
        //const node = await this.commissioningController.connectNode(NodeId(BigInt(nodeId)))
        const node = this.nodes.get(NodeId(BigInt(nodeId)));
        if (node === undefined) {
            throw new Error(`Node ${nodeId} not connected`);
        }
        return node;
    }

    /**
     * Removes a node from the controller
     * @param nodeId
     */
    async removeNode(nodeId: number | string | NodeId) {
        const node = this.nodes.get(NodeId(BigInt(nodeId)));
        if (node !== undefined) {
            try {
                await node.decommission();
            } catch (error) {
                logger.error(`Error decommissioning node ${nodeId}: ${error}, force removing node`);
                await this.commissioningController?.removeNode(NodeId(BigInt(nodeId)), false);
                this.nodes.delete(NodeId(BigInt(nodeId)));
            }
        } else {
            await this.commissioningController?.removeNode(NodeId(BigInt(nodeId)), false);
        }
    }

    /**
     * Returns all commissioned node ids
     * @returns
     */
    async getCommissionedNodes() {
        return this.commissioningController?.getCommissionedNodes();
    }

    /**
     * Finds the given endpoint, including nested endpoints
     * @param node
     * @param endpointId
     * @returns
     */
    getEndpoint(node: PairedNode, endpointId: number) {
        const endpoints = node.getDevices();
        for (const e of endpoints) {
            const endpoint = this.findEndpoint(e, endpointId);
            if (endpoint != undefined) {
                return endpoint;
            }
        }
        return undefined;
    }

    /**
     * Endpoints can have child endpoints. This function recursively searches for the endpoint with the given id.
     * @param root
     * @param endpointId
     * @returns
     */
    private findEndpoint(root: Endpoint, endpointId: number): Endpoint | undefined {
        if (root.number === endpointId) {
            return root;
        }
        for (const endpoint of root.getChildEndpoints()) {
            const found = this.findEndpoint(endpoint, endpointId);
            if (found !== undefined) {
                return found;
            }
        }
        return undefined;
    }

    /**
     * Serializes a node and sends it to the web socket
     * @param node
     * @param endpointId Optional endpointId to serialize. If omitted, all endpoints will be serialized.
     */
    sendSerializedNode(node: PairedNode, endpointId?: number) {
        this.serializePairedNode(node, endpointId).then(data => {
            this.ws.sendEvent(EventType.NodeData, data);
        }).catch(error => {
            logger.error(`Error serializing node: ${error}`);
            printError(logger, error, "serializePairedNode");
            node.triggerReconnect();
        });
    }

    /**
     * Serializes a node and returns the json string
     * @param node
     * @param endpointId Optional endpointId to serialize. If omitted, the root endpoint will be serialized.
     * @returns
     */
    async serializePairedNode(node: PairedNode, endpointId?: number) {
        if (!this.commissioningController) {
            throw new Error("CommissioningController not initialized");
        }

        // Recursive function to build the hierarchy
        async function serializeEndpoint(endpoint: Endpoint): Promise<any> {
            const endpointData: any = {
                number: endpoint.number,
                clusters: {},
                children: []
            };

            // Serialize clusters
            for (const cluster of endpoint.getAllClusterClients()) {
                if (!cluster.id) continue;

                const clusterData: any = {
                    id: cluster.id,
                    name: cluster.name
                };

                // Serialize attributes
                for (const attributeName in cluster.attributes) {
                    // Skip numeric referenced attributes
                    if (/^\d+$/.test(attributeName)) continue;
                    const attribute = cluster.attributes[attributeName];
|
||||
if (!attribute) continue;
|
||||
const attributeValue = await attribute.get();
|
||||
logger.debug(`Attribute ${attributeName} value: ${attributeValue}`);
|
||||
if (attributeValue !== undefined) {
|
||||
clusterData[attributeName] = attributeValue;
|
||||
}
|
||||
}
|
||||
|
||||
endpointData.clusters[cluster.name] = clusterData;
|
||||
}
|
||||
|
||||
for (const child of endpoint.getChildEndpoints()) {
|
||||
endpointData.children.push(await serializeEndpoint(child));
|
||||
}
|
||||
|
||||
return endpointData;
|
||||
}
|
||||
|
||||
// Start serialization from the root endpoint
|
||||
const rootEndpoint = endpointId !== undefined ? this.getEndpoint(node, endpointId) : node.getRootEndpoint();
|
||||
if (rootEndpoint === undefined) {
|
||||
throw new Error(`Endpoint not found for node ${node.nodeId} and endpointId ${endpointId}`);
|
||||
}
|
||||
const data: any = {
|
||||
id: node.nodeId,
|
||||
rootEndpoint: await serializeEndpoint(rootEndpoint)
|
||||
};
|
||||
|
||||
return data;
|
||||
}
|
||||
}
|
|
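The recursive `serializeEndpoint` walk above can be illustrated against a plain object tree; the `FakeEndpoint` shape below is a hypothetical stand-in for matter.js's `Endpoint` type, used only to show the depth-first traversal:

```typescript
// Minimal sketch of the recursive endpoint serialization above, using a
// hypothetical plain-object endpoint tree instead of matter.js types.
interface FakeEndpoint {
    number: number;
    clusters: Record<string, unknown>;
    children: FakeEndpoint[];
}

function serializeEndpoint(endpoint: FakeEndpoint): any {
    return {
        number: endpoint.number,
        clusters: endpoint.clusters,
        // Children are serialized depth-first, mirroring the async version above
        children: endpoint.children.map(serializeEndpoint),
    };
}

const root: FakeEndpoint = {
    number: 0,
    clusters: { OnOff: { id: 6, name: "OnOff" } },
    children: [{ number: 1, clusters: {}, children: [] }],
};

const data = serializeEndpoint(root);
console.log(data.children[0].number); // 1
```

The real implementation is async only because each attribute read (`attribute.get()`) may hit the network; the tree shape it produces is the same.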
@ -0,0 +1,212 @@
import { Logger } from "@matter/general";
import { ControllerNode } from "../ControllerNode";
import { convertJsonDataWithModel } from "../../util/Json";
import { ClusterId, ValidationError } from "@matter/main/types";
import * as MatterClusters from "@matter/types/clusters";
import { SupportedAttributeClient } from "@matter/protocol";
import { ClusterModel, MatterModel } from "@matter/model";
import { camelize, capitalize } from "../../util/String";

const logger = Logger.get("Clusters");

/**
 * This class is used by websocket clients interacting with Matter Clusters to send commands like OnOff, LevelControl, etc...
 * Methods not marked as private are intended to be exposed to websocket clients
 */
export class Clusters {
    constructor(private controllerNode: ControllerNode) {
    }

    /**
     * Dynamically executes a command on a specified cluster within a device node.
     * This method retrieves the cluster client for the device at the given node and endpoint, checks the existence
     * of the command on the cluster, and calls it with any provided arguments.
     *
     * @param nodeId Identifier for the node containing the target device.
     * @param endpointId Endpoint on the node where the command is directed.
     * @param clusterName Name of the cluster targeted by the command.
     * @param commandName Specific command to be executed on the cluster.
     * @param args Optional arguments for executing the command.
     * @throws Error if the cluster or command is not found on the device.
     */
    async command(nodeId: number, endpointId: number, clusterName: string, commandName: string, args: any) {
        logger.debug(`command ${nodeId} ${endpointId} ${clusterName} ${commandName} ${Logger.toJSON(args)}`);
        const device = await this.controllerNode.getNode(nodeId).getDeviceById(endpointId);
        if (device == undefined) {
            throw new Error(`Endpoint ${endpointId} not found`);
        }

        const cluster = this.#clusterForName(clusterName);
        if (cluster.id === undefined) {
            throw new Error(`Cluster ID for ${clusterName} not found`);
        }

        const clusterClient = device.getClusterClientById(ClusterId(cluster.id));
        if (clusterClient === undefined) {
            throw new Error(`Cluster client for ${clusterName} not found`);
        }

        const uppercaseName = capitalize(commandName);
        const command = cluster.commands.find(c => c.name === uppercaseName);
        if (command == undefined) {
            throw new Error(`Cluster Function ${commandName} not found`);
        }

        let convertedArgs: any = undefined;
        if (args !== undefined && Object.keys(args).length > 0) {
            convertedArgs = convertJsonDataWithModel(command, args);
        }

        return clusterClient.commands[commandName](convertedArgs);
    }

    /**
     * Writes an attribute to a device (not all attributes are writable)
     * @param nodeId
     * @param endpointId
     * @param clusterName
     * @param attributeName
     * @param value
     */
    async writeAttribute(nodeId: number, endpointId: number, clusterName: string, attributeName: string, value: string) {
        let parsedValue: any;
        try {
            parsedValue = JSON.parse(value);
        } catch (error) {
            try {
                parsedValue = JSON.parse(`"${value}"`);
            } catch (innerError) {
                throw new Error(`ERROR: Could not parse value ${value} as JSON.`);
            }
        }

        const device = await this.controllerNode.getNode(nodeId).getDeviceById(endpointId);
        if (device == undefined) {
            throw new Error(`Endpoint ${endpointId} not found`);
        }

        const cluster = this.#clusterForName(clusterName);
        if (cluster.id === undefined) {
            throw new Error(`Cluster ID for ${clusterName} not found`);
        }

        const clusterClient = device.getClusterClientById(ClusterId(cluster.id));
        if (clusterClient === undefined) {
            throw new Error(`Cluster client for ${clusterName} not found`);
        }

        const attributeClient = clusterClient.attributes[attributeName];
        if (!(attributeClient instanceof SupportedAttributeClient)) {
            throw new Error(`Attribute ${nodeId}/${endpointId}/${clusterName}/${attributeName} not supported.`);
        }

        const uppercaseName = capitalize(attributeName);
        const attribute = cluster.attributes.find(c => c.name === uppercaseName);
        if (attribute == undefined) {
            throw new Error(`Attribute ${attributeName} not found`);
        }

        try {
            parsedValue = convertJsonDataWithModel(attribute, parsedValue);
            await attributeClient.set(parsedValue);
            console.log(
                `Attribute ${attributeName} ${nodeId}/${endpointId}/${clusterName}/${attributeName} set to ${Logger.toJSON(value)}`,
            );
        } catch (error) {
            if (error instanceof ValidationError) {
                throw new Error(
                    `Could not validate data for attribute ${attributeName} to ${Logger.toJSON(parsedValue)}: ${error}${error.fieldName !== undefined ? ` in field ${error.fieldName}` : ""}`,
                );
            } else {
                throw new Error(`Could not set attribute ${attributeName} to ${Logger.toJSON(parsedValue)}: ${error}`);
            }
        }
    }

    /**
     * Reads an attribute from a device
     * @param nodeId
     * @param endpointId
     * @param clusterName
     * @param attributeName
     */
    async readAttribute(nodeId: number, endpointId: number, clusterName: string, attributeName: string) {
        const device = await this.controllerNode.getNode(nodeId).getDeviceById(endpointId);
        if (device == undefined) {
            throw new Error(`Endpoint ${endpointId} not found`);
        }

        const cluster = this.#clusterForName(clusterName);
        if (cluster.id === undefined) {
            throw new Error(`Cluster ID for ${clusterName} not found`);
        }

        const clusterClient = device.getClusterClientById(ClusterId(cluster.id));
        if (clusterClient === undefined) {
            throw new Error(`Cluster client for ${clusterName} not found`);
        }

        const attributeClient = clusterClient.attributes[attributeName];
        if (!(attributeClient instanceof SupportedAttributeClient)) {
            throw new Error(`Attribute ${nodeId}/${endpointId}/${clusterName}/${attributeName} not supported.`);
        }

        const uppercaseName = capitalize(attributeName);
        const attribute = cluster.attributes.find(c => c.name === uppercaseName);
        if (attribute == undefined) {
            throw new Error(`Attribute ${attributeName} not found`);
        }

        return await attributeClient.get(true);
    }

    /**
     * Reads all attribute data for a single cluster on an endpoint
     * @param nodeId
     * @param endpointId
     * @param clusterNameOrId
     * @returns
     */
    async readCluster(nodeId: string | number, endpointId: number, clusterNameOrId: string | number) {
        const device = await this.controllerNode.getNode(nodeId).getDeviceById(endpointId);
        if (device === undefined) {
            throw new Error(`Endpoint ${endpointId} not found`);
        }

        const clusterId = typeof clusterNameOrId === 'string' ? this.#clusterForName(clusterNameOrId).id : clusterNameOrId;
        if (clusterId === undefined) {
            throw new Error(`Cluster ID for ${clusterNameOrId} not found`);
        }

        const clusterClient = device.getClusterClientById(ClusterId(clusterId));
        if (clusterClient === undefined) {
            throw new Error(`Cluster client for ${clusterNameOrId} not found`);
        }

        const clusterData: any = {
            id: clusterClient.id,
            name: clusterClient.name
        };

        // Serialize attributes
        for (const attributeName in clusterClient.attributes) {
            // Skip numeric referenced attributes
            if (/^\d+$/.test(attributeName)) continue;
            const attribute = clusterClient.attributes[attributeName];
            if (!attribute) continue;
            const attributeValue = await attribute.get();
            logger.debug(`Attribute ${attributeName} value: ${attributeValue}`);
            if (attributeValue !== undefined) {
                clusterData[attributeName] = attributeValue;
            }
        }

        return clusterData;
    }

    #clusterForName(clusterName: string): ClusterModel {
        const cluster = MatterModel.standard.clusters.find(c => c.name === clusterName);
        if (cluster == null) {
            throw new Error(`Cluster ${clusterName} not found`);
        }
        return cluster;
    }
}
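The lookup pattern used by `command()` and the attribute methods — capitalize the camelCase name supplied over the websocket, then match it against the model's element list — can be sketched standalone (the `commands` array below is a hypothetical stand-in for `cluster.commands` from `@matter/model`):

```typescript
// Hypothetical stand-in for cluster.commands from @matter/model
const commands = [{ name: "On" }, { name: "Off" }, { name: "Toggle" }];

function capitalize(str: string) {
    return str.charAt(0).toUpperCase() + str.slice(1);
}

// Resolve a websocket-supplied camelCase command name against the model,
// which stores command names in UpperCamelCase.
function findCommand(commandName: string) {
    const uppercaseName = capitalize(commandName);
    return commands.find(c => c.name === uppercaseName);
}

console.log(findCommand("toggle")?.name); // Toggle
```

An unknown name yields `undefined`, which is why the real methods follow the lookup with an explicit "not found" throw.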
@ -0,0 +1,281 @@
import { NodeCommissioningOptions } from "@project-chip/matter.js";
import { GeneralCommissioning, OperationalCredentialsCluster } from "@matter/main/clusters";
import { ManualPairingCodeCodec, QrPairingCodeCodec, QrCode, NodeId, FabricIndex } from "@matter/types";
import { Logger } from "@matter/main";
import { ControllerNode } from "../ControllerNode";

const logger = Logger.get("matter");

/**
 * This class is used for exposing Matter nodes. This includes node lifecycle functions, node fabrics, node data, and other node related methods to websocket clients.
 * Methods not marked as private are intended to be exposed to websocket clients
 */
export class Nodes {

    constructor(private controllerNode: ControllerNode) {
    }

    /**
     * Returns all commissioned node Ids
     * @returns
     */
    async listNodes() {
        if (this.controllerNode.commissioningController === undefined) {
            throw new Error("CommissioningController not initialized");
        }
        return this.controllerNode.getCommissionedNodes();
    }

    /**
     * Initializes a node and connects to it
     * @param nodeId
     * @returns
     */
    async initializeNode(nodeId: string | number) {
        return this.controllerNode.initializeNode(nodeId);
    }

    /**
     * Requests all attribute data for a node
     * @param nodeId
     * @returns
     */
    async requestAllData(nodeId: string | number) {
        const node = await this.controllerNode.getNode(nodeId);
        if (node.initialized) {
            return this.controllerNode.sendSerializedNode(node);
        } else {
            throw new Error(`Node ${nodeId} not initialized`);
        }
    }

    /**
     * Requests all attribute data for a single endpoint and its children
     * @param nodeId
     * @param endpointId
     * @returns
     */
    async requestEndpointData(nodeId: string | number, endpointId: number) {
        const node = await this.controllerNode.getNode(nodeId);
        if (node.initialized) {
            return this.controllerNode.sendSerializedNode(node, endpointId);
        } else {
            throw new Error(`Node ${nodeId} not initialized`);
        }
    }

    /**
     * Pairs a node using a pairing code, supports multiple pairing code formats
     * @param pairingCode
     * @param shortDiscriminator
     * @param setupPinCode
     * @returns
     */
    async pairNode(pairingCode: string | undefined, shortDiscriminator: number | undefined, setupPinCode: number | undefined) {
        let discriminator: number | undefined;
        let nodeIdStr: string | undefined;
        let ipPort: number | undefined;
        let ip: string | undefined;
        let instanceId: string | undefined;
        let ble = false;

        if (typeof pairingCode === "string" && pairingCode.trim().length > 0) {
            pairingCode = pairingCode.trim();
            if (pairingCode.toUpperCase().indexOf('MT:') == 0) {
                const qrcode = QrPairingCodeCodec.decode(pairingCode.toUpperCase())[0];
                setupPinCode = qrcode.passcode;
                discriminator = qrcode.discriminator;
            } else {
                const { shortDiscriminator: pairingCodeShortDiscriminator, passcode } =
                    ManualPairingCodeCodec.decode(pairingCode);
                shortDiscriminator = pairingCodeShortDiscriminator;
                setupPinCode = passcode;
                discriminator = undefined;
            }
        } else if (discriminator === undefined && shortDiscriminator === undefined) {
            discriminator = 3840;
        }

        const nodeId = nodeIdStr !== undefined ? NodeId(BigInt(nodeIdStr)) : undefined;
        if (this.controllerNode.commissioningController === undefined) {
            throw new Error("CommissioningController not initialized");
        }

        const options = {
            discovery: {
                knownAddress:
                    ip !== undefined && ipPort !== undefined
                        ? { ip, port: ipPort, type: "udp" }
                        : undefined,
                identifierData:
                    instanceId !== undefined
                        ? { instanceId }
                        : discriminator !== undefined
                            ? { longDiscriminator: discriminator }
                            : shortDiscriminator !== undefined
                                ? { shortDiscriminator }
                                : {},
                discoveryCapabilities: {
                    ble,
                    onIpNetwork: true,
                },
            },
            passcode: setupPinCode
        } as NodeCommissioningOptions;

        options.commissioning = {
            nodeId: nodeId !== undefined ? NodeId(nodeId) : undefined,
            regulatoryLocation: GeneralCommissioning.RegulatoryLocationType.Outdoor, // Set to the most restrictive if relevant
            regulatoryCountryCode: "XX"
        };

        if (this.controllerNode.Store.has("WiFiSsid") && this.controllerNode.Store.has("WiFiPassword")) {
            options.commissioning.wifiNetwork = {
                wifiSsid: await this.controllerNode.Store.get<string>("WiFiSsid", ""),
                wifiCredentials: await this.controllerNode.Store.get<string>("WiFiPassword", ""),
            };
        }
        if (
            this.controllerNode.Store.has("ThreadName") &&
            this.controllerNode.Store.has("ThreadOperationalDataset")
        ) {
            options.commissioning.threadNetwork = {
                networkName: await this.controllerNode.Store.get<string>("ThreadName", ""),
                operationalDataset: await this.controllerNode.Store.get<string>(
                    "ThreadOperationalDataset",
                    "",
                ),
            };
        }

        const commissionedNodeId =
            await this.controllerNode.commissioningController.commissionNode(options);

        console.log(`Commissioned Node: ${commissionedNodeId}`);
        return commissionedNodeId;
    }

    /**
     * Disconnects a node
     * @param nodeId
     */
    async disconnectNode(nodeId: number | string) {
        const node = this.controllerNode.getNode(nodeId);
        await node.disconnect();
    }

    /**
     * Reconnects a node
     * @param nodeId
     */
    async reconnectNode(nodeId: number | string) {
        const node = await this.controllerNode.getNode(nodeId);
        node.triggerReconnect();
    }

    /**
     * Returns the fabrics for a node. Fabrics are the set of Matter networks that the node has been commissioned to (openHAB, Alexa, Google, Apple, etc.)
     * @param nodeId
     * @returns
     */
    async getFabrics(nodeId: number | string) {
        const node = await this.controllerNode.getNode(nodeId);
        const operationalCredentialsCluster = node.getRootClusterClient(OperationalCredentialsCluster);
        if (operationalCredentialsCluster === undefined) {
            throw new Error(`OperationalCredentialsCluster for node ${nodeId} not found.`);
        }
        return await operationalCredentialsCluster.getFabricsAttribute(true, false);
    }

    /**
     * Removes a fabric from a node, effectively decommissioning the node from the specific network
     * @param nodeId
     * @param index
     * @returns
     */
    async removeFabric(nodeId: number | string, index: number) {
        if (this.controllerNode.commissioningController === undefined) {
            console.log("Controller not initialized, nothing to disconnect.");
            return;
        }

        const node = await this.controllerNode.getNode(nodeId);
        if (node === undefined) {
            throw new Error(`Node ${nodeId} not found`);
        }
        const operationalCredentialsCluster = node.getRootClusterClient(OperationalCredentialsCluster);

        if (operationalCredentialsCluster === undefined) {
            throw new Error(`OperationalCredentialsCluster for node ${nodeId} not found.`);
        }

        const fabricInstance = FabricIndex(index);
        const ourFabricIndex = await operationalCredentialsCluster.getCurrentFabricIndexAttribute(true);

        if (ourFabricIndex == fabricInstance) {
            throw new Error("Will not delete our own fabric");
        }

        await operationalCredentialsCluster.commands.removeFabric({ fabricIndex: fabricInstance });
    }

    /**
     * Removes a node from the commissioning controller
     * @param nodeId
     */
    async removeNode(nodeId: number | string) {
        await this.controllerNode.removeNode(nodeId);
    }

    /**
     * Returns active session information for all connected nodes.
     * @returns
     */
    sessionInformation() {
        return this.controllerNode.commissioningController?.getActiveSessionInformation() || {};
    }

    /**
     * Opens a basic commissioning window for a node allowing for manual pairing to an additional fabric.
     * @param nodeId
     * @param timeout
     */
    async basicCommissioningWindow(nodeId: number | string, timeout = 900) {
        const node = await this.controllerNode.getNode(nodeId);
        await node.openBasicCommissioningWindow(timeout);
        console.log(`Basic Commissioning Window for node ${nodeId} opened`);
    }

    /**
     * Opens an enhanced commissioning window for a node allowing for QR code pairing to an additional fabric.
     * @param nodeId
     * @param timeout
     * @returns
     */
    async enhancedCommissioningWindow(nodeId: number | string, timeout = 900) {
        const node = await this.controllerNode.getNode(nodeId);
        const data = await node.openEnhancedCommissioningWindow(timeout);

        console.log(`Enhanced Commissioning Window for node ${nodeId} opened`);
        const { qrPairingCode, manualPairingCode } = data;

        console.log(QrCode.get(qrPairingCode));
        console.log(
            `QR Code URL: https://project-chip.github.io/connectedhomeip/qrcode.html?data=${qrPairingCode}`,
        );
        console.log(`Manual pairing code: ${manualPairingCode}`);
        return data;
    }

    /**
     * Logs the structure of a node
     * @param nodeId
     */
    async logNode(nodeId: number | string) {
        const node = await this.controllerNode.getNode(nodeId);
        console.log("Logging structure of Node ", node.nodeId.toString());
        node.logStructure();
    }
}
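The nested ternary building `identifierData` in `pairNode` encodes a precedence order for device discovery: an instance id wins, then a long discriminator, then a short discriminator, else an empty match-anything object. A standalone sketch of just that precedence (plain objects, no matter.js types):

```typescript
// Sketch of the identifierData precedence used in pairNode above:
// instanceId wins, then long discriminator, then short discriminator.
function identifierData(instanceId?: string, discriminator?: number, shortDiscriminator?: number) {
    return instanceId !== undefined
        ? { instanceId }
        : discriminator !== undefined
            ? { longDiscriminator: discriminator }
            : shortDiscriminator !== undefined
                ? { shortDiscriminator }
                : {};
}

console.log(identifierData(undefined, 3840, undefined)); // { longDiscriminator: 3840 }
```

3840 is also the default discriminator `pairNode` falls back to when no pairing code and no discriminator are supplied.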
@ -0,0 +1,74 @@
import { Bytes } from "@matter/general";
import { ValueModel } from "@matter/model";
import { ValidationDatatypeMismatchError } from "@matter/types";
import { camelize } from "./String";
import { Logger } from "@matter/general";

const logger = Logger.get("Clusters");

export function convertJsonDataWithModel(model: ValueModel, data: any): any {
    const definingModel = model.definingModel ?? model;
    logger.debug(`convertJsonDataWithModel: type ${definingModel.effectiveMetatype}`);
    logger.debug(`convertJsonDataWithModel: data ${data}`);
    logger.debug(`convertJsonDataWithModel: model ${model}`);
    switch (definingModel.effectiveMetatype) {
        case "array":
            if (!Array.isArray(data)) {
                throw new ValidationDatatypeMismatchError(`Expected array, got ${typeof data}`);
            }
            return data.map(item => convertJsonDataWithModel(definingModel.children[0], item));
        case "object":
            if (typeof data !== "object") {
                throw new ValidationDatatypeMismatchError(`Expected object, got ${typeof data}`);
            }
            for (const child of definingModel.children) {
                const childKeyName = camelize(child.name);
                data[childKeyName] = convertJsonDataWithModel(child, data[childKeyName]);
            }
            return data;
        case "integer":
            if (typeof data === "string") {
                if (definingModel.metabase?.byteSize !== undefined && definingModel.metabase.byteSize > 6) {
                    // If we have an integer with byteSize > 6 and a string value, we need to convert the
                    // string to a BigInt; BigInt also handles 0x prefixed hex strings
                    return BigInt(data);
                } else if (data.startsWith("0x")) {
                    // Else if hex string, convert to number
                    return parseInt(data.substring(2), 16);
                }
            }
            break;
        case "bytes":
            if (typeof data === "string") {
                // ByteArray encoded as hex string ... so convert to ByteArray
                return Bytes.fromHex(data);
            }
            break;
    }

    return data;
}

export function toJSON(data: any) {
    return JSON.stringify(data, (_, value) => {
        if (typeof value === "bigint") {
            return value.toString();
        }
        if (value instanceof Uint8Array) {
            return Bytes.toHex(value);
        }
        if (value === undefined) {
            return "undefined";
        }
        return value;
    });
}

export function fromJSON(data: any) {
    return JSON.parse(data, (key, value) => {
        if (typeof value === "string" && value.startsWith("0x")) {
            return Bytes.fromHex(value);
        }
        return value;
    });
}
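The `toJSON` replacer above exists because `JSON.stringify` throws on BigInt values and serializes `Uint8Array` as an index-keyed object. A self-contained version behaves like this (the local `toHex` helper stands in for matter.js's `Bytes.toHex`):

```typescript
// Self-contained sketch of the toJSON replacer above; the local hex
// encoder stands in for the matter.js Bytes.toHex helper.
function toHex(bytes: Uint8Array): string {
    return Array.from(bytes, b => b.toString(16).padStart(2, "0")).join("");
}

function toJSON(data: any): string {
    return JSON.stringify(data, (_, value) => {
        if (typeof value === "bigint") return value.toString();     // BigInt -> decimal string
        if (value instanceof Uint8Array) return toHex(value);       // bytes -> hex string
        if (value === undefined) return "undefined";                // keep keys with undefined values
        return value;
    });
}

console.log(toJSON({ nodeId: BigInt("12345678901234567890"), cert: new Uint8Array([0xde, 0xad]) }));
// {"nodeId":"12345678901234567890","cert":"dead"}
```

Note the replacer runs before `JSON.stringify` would otherwise raise its BigInt `TypeError`, so no try/catch is needed.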
@ -0,0 +1,7 @@
export function camelize(str: string) {
    return str.charAt(0).toLowerCase() + str.slice(1);
}

export function capitalize(str: string) {
    return str.charAt(0).toUpperCase() + str.slice(1);
}
@ -0,0 +1,20 @@
import { Logger } from "@matter/general";

export function printError(logger: Logger, error: Error, functionName: string) {
    logger.error(`Error executing function ${functionName}: ${error.message}`);
    logger.error(`Stack trace: ${error.stack}`);

    // Log additional error properties if available
    if ('code' in error) {
        logger.error(`Error code: ${(error as any).code}`);
    }
    if ('name' in error) {
        logger.error(`Error name: ${(error as any).name}`);
    }

    // Fallback: log the entire error object in case there are other useful details
    logger.error(`Full error object: ${JSON.stringify(error, Object.getOwnPropertyNames(error), 2)}`);
    logger.error("--------------------------------");
    logger.error(error);
}
@ -0,0 +1,41 @@
{
    "compilerOptions": {
        // Participate in workspace
        "composite": true,

        // Add compatibility with CommonJS modules
        "esModuleInterop": true,

        // Compile incrementally using tsbuildinfo state file
        "incremental": true,

        // Target modern ECMAScript output
        "target": "es2022",

        // Use Node16 module format
        "module": "node16",

        // Use node-style dependency resolution
        "moduleResolution": "node16",

        "lib": ["ES2022", "DOM"],

        // Do not load globals from node_modules by default
        "types": [
            "node"
        ],

        // Enforce a subset of our code conventions
        "forceConsistentCasingInFileNames": true,
        "noImplicitAny": true,
        "noImplicitOverride": true,
        "noUnusedParameters": false,
        "noUnusedLocals": false,
        "strict": true,
        "strictNullChecks": true,
        "allowJs": true,
        "skipLibCheck": true,
        "outDir": "dist"
    },
    "include": ["src/**/*.ts"]
}
@ -0,0 +1,22 @@
const path = require('path');

module.exports = {
    entry: './src/app.ts',
    target: 'node',
    externals: { bufferutil: "bufferutil", "utf-8-validate": "utf-8-validate" },
    module: {
        rules: [
            {
                test: /\.tsx?$/,
                use: 'ts-loader'
            }
        ]
    },
    resolve: {
        extensions: ['.tsx', '.ts', '.js']
    },
    output: {
        filename: 'matter.js', // single output JS file
        path: path.resolve(__dirname, 'dist')
    }
};
@ -0,0 +1,234 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">

  <modelVersion>4.0.0</modelVersion>

  <parent>
    <groupId>org.openhab.addons.bundles</groupId>
    <artifactId>org.openhab.addons.reactor.bundles</artifactId>
    <version>5.0.0-SNAPSHOT</version>
  </parent>

  <artifactId>org.openhab.binding.matter</artifactId>

  <name>openHAB Add-ons :: Bundles :: Matter Binding</name>

  <properties>
    <generated.code.dir>src/main/java/org/openhab/binding/matter/internal/client/dto/cluster/gen</generated.code.dir>
  </properties>

  <build>
    <plugins>
      <plugin>
        <groupId>com.github.eirslett</groupId>
        <artifactId>frontend-maven-plugin</artifactId>
        <configuration>
          <nodeVersion>v20.12.2</nodeVersion>
          <npmVersion>10.5.0</npmVersion>
        </configuration>

        <executions>
          <!-- Matter server webpack commands -->
          <execution>
            <id>Install node and npm</id>
            <goals>
              <goal>install-node-and-npm</goal>
            </goals>
            <phase>generate-resources</phase>
            <configuration>
              <workingDirectory>matter-server</workingDirectory>
            </configuration>
          </execution>

          <execution>
            <id>npm install</id>
            <goals>
              <goal>npm</goal>
            </goals>
            <configuration>
              <workingDirectory>matter-server</workingDirectory>
              <arguments>install</arguments>
            </configuration>
          </execution>

          <execution>
            <id>npm run webpack-dev</id>
            <goals>
              <goal>npm</goal>
            </goals>
            <configuration>
              <workingDirectory>matter-server</workingDirectory>
              <arguments>run webpack-dev</arguments>
            </configuration>
          </execution>
        </executions>
      </plugin>

      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>build-helper-maven-plugin</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>add-resource</goal>
            </goals>
            <phase>generate-resources</phase>
            <configuration>
              <resources>
                <resource>
                  <directory>matter-server/dist</directory>
                  <include>matter.js</include>
                  <targetPath>matter-server</targetPath>
                </resource>
              </resources>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>

  <profiles>
    <profile>
      <id>code-gen</id>
      <build>
        <plugins>
          <!-- Code generation commands -->
          <plugin>
            <groupId>com.github.eirslett</groupId>
            <artifactId>frontend-maven-plugin</artifactId>
            <configuration>
              <nodeVersion>v20.12.2</nodeVersion>
              <npmVersion>10.5.0</npmVersion>
            </configuration>
            <executions>
              <execution>
                <id>Install node and npm for codegen</id>
                <goals>
                  <goal>install-node-and-npm</goal>
                </goals>
                <phase>generate-sources</phase>
                <configuration>
                  <workingDirectory>code-gen</workingDirectory>
                </configuration>
              </execution>

              <execution>
                <id>npm-install-codegen</id>
                <goals>
                  <goal>npm</goal>
                </goals>
                <phase>generate-sources</phase>
                <configuration>
                  <workingDirectory>code-gen</workingDirectory>
                  <arguments>install</arguments>
                </configuration>
              </execution>

              <execution>
                <id>npm-run-codegen</id>
                <goals>
                  <goal>npm</goal>
                </goals>
                <phase>generate-sources</phase>
                <configuration>
                  <workingDirectory>code-gen</workingDirectory>
                  <arguments>run start</arguments>
                </configuration>
              </execution>
            </executions>
          </plugin>

          <!-- Clean generated code directory -->
          <plugin>
            <artifactId>maven-clean-plugin</artifactId>
            <executions>
              <execution>
                <id>clean-generated-code</id>
                <goals>
                  <goal>clean</goal>
                </goals>
                <phase>clean</phase>
                <configuration>
                  <filesets>
                    <fileset>
                      <directory>code-gen/out</directory>
                    </fileset>
                    <fileset>
                      <directory>${generated.code.dir}</directory>
                    </fileset>
                  </filesets>
                </configuration>
              </execution>
            </executions>
          </plugin>

          <!-- Move generated files -->
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-resources-plugin</artifactId>
            <executions>
              <execution>
                <id>copy-generated-code</id>
                <goals>
                  <goal>copy-resources</goal>
                </goals>
                <phase>generate-sources</phase>
                <configuration>
                  <outputDirectory>${generated.code.dir}</outputDirectory>
                  <resources>
                    <resource>
                      <directory>code-gen/out</directory>
                    </resource>
                  </resources>
                </configuration>
              </execution>
            </executions>
          </plugin>

          <!-- Clean up temporary output directory -->
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-antrun-plugin</artifactId>
            <executions>
              <execution>
                <id>cleanup-output</id>
                <goals>
                  <goal>run</goal>
                </goals>
                <phase>generate-sources</phase>
                <configuration>
                  <target>
                    <delete dir="code-gen/out"/>
                  </target>
                </configuration>
              </execution>
            </executions>
          </plugin>

          <!-- Run spotless -->
          <plugin>
            <groupId>com.diffplug.spotless</groupId>
            <artifactId>spotless-maven-plugin</artifactId>
            <executions>
              <!-- Disable the default check execution -->
              <execution>
                <id>codestyle_check</id>
                <phase>none</phase>
              </execution>

              <execution>
                <id>format-generated-code</id>
|
||||
<goals>
|
||||
<goal>apply</goal>
|
||||
</goals>
|
||||
<phase>process-sources</phase>
|
||||
</execution>
|
||||
</executions>
|
||||
</plugin>
|
||||
</plugins>
|
||||
</build>
|
||||
</profile>
|
||||
</profiles>
|
||||
</project>
@ -0,0 +1,9 @@
<?xml version="1.0" encoding="UTF-8"?>
<features name="org.openhab.binding.matter-${project.version}" xmlns="http://karaf.apache.org/xmlns/features/v1.4.0">
	<repository>mvn:org.openhab.core.features.karaf/org.openhab.core.features.karaf.openhab-core/${ohc.version}/xml/features</repository>

	<feature name="openhab-binding-matter" description="Matter Binding" version="${project.version}">
		<feature>openhab-runtime-base</feature>
		<bundle start-level="80">mvn:org.openhab.addons.bundles/org.openhab.binding.matter/${project.version}</bundle>
	</feature>
</features>
@ -0,0 +1,382 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.openhab.core.thing.ThingTypeUID;
import org.openhab.core.thing.type.ChannelTypeUID;

/**
 * The {@link MatterBindingConstants} class defines common constants, which are
 * used across the whole binding.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
public class MatterBindingConstants {
    public static final String BINDING_ID = "matter";
    // List of all Thing Type UIDs
    public static final ThingTypeUID THING_TYPE_CONTROLLER = new ThingTypeUID(BINDING_ID, "controller");
    public static final ThingTypeUID THING_TYPE_NODE = new ThingTypeUID(BINDING_ID, "node");
    public static final ThingTypeUID THING_TYPE_ENDPOINT = new ThingTypeUID(BINDING_ID, "endpoint");
    public static final String CONFIG_DESCRIPTION_URI_THING_PREFIX = "thing";
    // List of Channel UIDs
    public static final String CHANNEL_ID_ONOFF_ONOFF = "onoffcontrol-onoff";
    public static final ChannelTypeUID CHANNEL_ONOFF_ONOFF = new ChannelTypeUID(BINDING_ID, CHANNEL_ID_ONOFF_ONOFF);
    public static final String CHANNEL_ID_LEVEL_LEVEL = "levelcontrol-level";
    public static final ChannelTypeUID CHANNEL_LEVEL_LEVEL = new ChannelTypeUID(BINDING_ID, CHANNEL_ID_LEVEL_LEVEL);
    public static final String CHANNEL_ID_COLOR_COLOR = "colorcontrol-color";
    public static final ChannelTypeUID CHANNEL_COLOR_COLOR = new ChannelTypeUID(BINDING_ID, CHANNEL_ID_COLOR_COLOR);
    public static final String CHANNEL_ID_COLOR_TEMPERATURE = "colorcontrol-temperature";
    public static final ChannelTypeUID CHANNEL_COLOR_TEMPERATURE = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_COLOR_TEMPERATURE);
    public static final String CHANNEL_ID_COLOR_TEMPERATURE_ABS = "colorcontrol-temperature-abs";
    public static final ChannelTypeUID CHANNEL_COLOR_TEMPERATURE_ABS = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_COLOR_TEMPERATURE_ABS);
    public static final String CHANNEL_ID_POWER_BATTERYPERCENT = "powersource-batpercentremaining";
    public static final ChannelTypeUID CHANNEL_POWER_BATTERYPERCENT = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_POWER_BATTERYPERCENT);
    public static final String CHANNEL_ID_POWER_CHARGELEVEL = "powersource-batchargelevel";
    public static final ChannelTypeUID CHANNEL_POWER_CHARGELEVEL = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_POWER_CHARGELEVEL);
    public static final String CHANNEL_ID_THERMOSTAT_LOCALTEMPERATURE = "thermostat-localtemperature";
    public static final ChannelTypeUID CHANNEL_THERMOSTAT_LOCALTEMPERATURE = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_THERMOSTAT_LOCALTEMPERATURE);
    public static final String CHANNEL_ID_THERMOSTAT_OUTDOORTEMPERATURE = "thermostat-outdoortemperature";
    public static final ChannelTypeUID CHANNEL_THERMOSTAT_OUTDOORTEMPERATURE = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_THERMOSTAT_OUTDOORTEMPERATURE);
    public static final String CHANNEL_ID_THERMOSTAT_OCCUPIEDCOOLING = "thermostat-occupiedcooling";
    public static final ChannelTypeUID CHANNEL_THERMOSTAT_OCCUPIEDCOOLING = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_THERMOSTAT_OCCUPIEDCOOLING);
    public static final String CHANNEL_ID_THERMOSTAT_OCCUPIEDHEATING = "thermostat-occupiedheating";
    public static final ChannelTypeUID CHANNEL_THERMOSTAT_OCCUPIEDHEATING = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_THERMOSTAT_OCCUPIEDHEATING);
    public static final String CHANNEL_ID_THERMOSTAT_UNOCCUPIEDCOOLING = "thermostat-unoccupiedcooling";
    public static final ChannelTypeUID CHANNEL_THERMOSTAT_UNOCCUPIEDCOOLING = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_THERMOSTAT_UNOCCUPIEDCOOLING);
    public static final String CHANNEL_ID_THERMOSTAT_UNOCCUPIEDHEATING = "thermostat-unoccupiedheating";
    public static final ChannelTypeUID CHANNEL_THERMOSTAT_UNOCCUPIEDHEATING = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_THERMOSTAT_UNOCCUPIEDHEATING);
    public static final String CHANNEL_ID_THERMOSTAT_SYSTEMMODE = "thermostat-systemmode";
    public static final ChannelTypeUID CHANNEL_THERMOSTAT_SYSTEMMODE = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_THERMOSTAT_SYSTEMMODE);
    public static final String CHANNEL_ID_THERMOSTAT_RUNNINGMODE = "thermostat-runningmode";
    public static final ChannelTypeUID CHANNEL_THERMOSTAT_RUNNINGMODE = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_THERMOSTAT_RUNNINGMODE);
    public static final String CHANNEL_ID_THERMOSTAT_HEATING_DEMAND = "thermostat-heatingdemand";
    public static final ChannelTypeUID CHANNEL_THERMOSTAT_HEATING_DEMAND = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_THERMOSTAT_HEATING_DEMAND);
    public static final String CHANNEL_ID_THERMOSTAT_COOLING_DEMAND = "thermostat-coolingdemand";
    public static final ChannelTypeUID CHANNEL_THERMOSTAT_COOLING_DEMAND = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_THERMOSTAT_COOLING_DEMAND);
    public static final String CHANNEL_ID_DOORLOCK_STATE = "doorlock-lockstate";
    public static final ChannelTypeUID CHANNEL_DOORLOCK_STATE = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_DOORLOCK_STATE);
    public static final String CHANNEL_ID_WINDOWCOVERING_LIFT = "windowcovering-lift";
    public static final ChannelTypeUID CHANNEL_WINDOWCOVERING_LIFT = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_WINDOWCOVERING_LIFT);
    public static final String CHANNEL_ID_FANCONTROL_PERCENT = "fancontrol-percent";
    public static final ChannelTypeUID CHANNEL_FANCONTROL_PERCENT = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_FANCONTROL_PERCENT);
    public static final String CHANNEL_ID_FANCONTROL_MODE = "fancontrol-fanmode";
    public static final ChannelTypeUID CHANNEL_FANCONTROL_MODE = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_FANCONTROL_MODE);
    public static final String CHANNEL_ID_TEMPERATUREMEASURMENT_MEASUREDVALUE = "temperaturemeasurement-measuredvalue";
    public static final ChannelTypeUID CHANNEL_TEMPERATUREMEASURMENT_MEASUREDVALUE = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_TEMPERATUREMEASURMENT_MEASUREDVALUE);
    public static final String CHANNEL_ID_HUMIDITYMEASURMENT_MEASUREDVALUE = "relativehumiditymeasurement-measuredvalue";
    public static final ChannelTypeUID CHANNEL_HUMIDITYMEASURMENT_MEASUREDVALUE = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_HUMIDITYMEASURMENT_MEASUREDVALUE);
    public static final String CHANNEL_ID_OCCUPANCYSENSING_OCCUPIED = "occupancysensing-occupied";
    public static final ChannelTypeUID CHANNEL_OCCUPANCYSENSING_OCCUPIED = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_OCCUPANCYSENSING_OCCUPIED);
    public static final String CHANNEL_ID_ILLUMINANCEMEASURMENT_MEASUREDVALUE = "illuminancemeasurement-measuredvalue";
    public static final ChannelTypeUID CHANNEL_ILLUMINANCEMEASURMENT_MEASUREDVALUE = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_ILLUMINANCEMEASURMENT_MEASUREDVALUE);
    public static final String CHANNEL_ID_MODESELECT_MODE = "modeselect-mode";
    public static final ChannelTypeUID CHANNEL_MODESELECT_MODE = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_MODESELECT_MODE);
    public static final String CHANNEL_ID_BOOLEANSTATE_STATEVALUE = "booleanstate-statevalue";
    public static final ChannelTypeUID CHANNEL_BOOLEANSTATE_STATEVALUE = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_BOOLEANSTATE_STATEVALUE);
    public static final String CHANNEL_ID_WIFINETWORKDIAGNOSTICS_RSSI = "wifinetworkdiagnostics-rssi";
    public static final ChannelTypeUID CHANNEL_WIFINETWORKDIAGNOSTICS_RSSI = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_WIFINETWORKDIAGNOSTICS_RSSI);
    public static final String CHANNEL_ID_SWITCH_SWITCH = "switch-switch";
    public static final ChannelTypeUID CHANNEL_SWITCH_SWITCH = new ChannelTypeUID(BINDING_ID, CHANNEL_ID_SWITCH_SWITCH);
    public static final String CHANNEL_ID_SWITCH_SWITCHLATECHED = "switch-switchlatched";
    public static final ChannelTypeUID CHANNEL_SWITCH_SWITCHLATECHED = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_SWITCH_SWITCHLATECHED);
    public static final String CHANNEL_ID_SWITCH_INITIALPRESS = "switch-initialpress";
    public static final ChannelTypeUID CHANNEL_SWITCH_INITIALPRESS = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_SWITCH_INITIALPRESS);
    public static final String CHANNEL_ID_SWITCH_LONGPRESS = "switch-longpress";
    public static final ChannelTypeUID CHANNEL_SWITCH_LONGPRESS = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_SWITCH_LONGPRESS);
    public static final String CHANNEL_ID_SWITCH_SHORTRELEASE = "switch-shortrelease";
    public static final ChannelTypeUID CHANNEL_SWITCH_SHORTRELEASE = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_SWITCH_SHORTRELEASE);
    public static final String CHANNEL_ID_SWITCH_LONGRELEASE = "switch-longrelease";
    public static final ChannelTypeUID CHANNEL_SWITCH_LONGRELEASE = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_SWITCH_LONGRELEASE);
    public static final String CHANNEL_ID_SWITCH_MULTIPRESSONGOING = "switch-multipressongoing";
    public static final ChannelTypeUID CHANNEL_SWITCH_MULTIPRESSONGOING = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_SWITCH_MULTIPRESSONGOING);
    public static final String CHANNEL_ID_SWITCH_MULTIPRESSCOMPLETE = "switch-multipresscomplete";
    public static final ChannelTypeUID CHANNEL_SWITCH_MULTIPRESSCOMPLETE = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_SWITCH_MULTIPRESSCOMPLETE);
    // shared by energy imported and exported
    public static final ChannelTypeUID CHANNEL_ELECTRICALENERGYMEASUREMENT_ENERGYMEASUREMENT_ENERGY = new ChannelTypeUID(
            BINDING_ID, "electricalenergymeasurement-energymeasurmement-energy");

    public static final String CHANNEL_ID_ELECTRICALENERGYMEASUREMENT_CUMULATIVEENERGYIMPORTED_ENERGY = "electricalenergymeasurement-cumulativeenergyimported-energy";

    public static final String CHANNEL_ID_ELECTRICALENERGYMEASUREMENT_CUMULATIVEENERGYEXPORTED_ENERGY = "electricalenergymeasurement-cumulativeenergyexported-energy";

    public static final String CHANNEL_ID_ELECTRICALENERGYMEASUREMENT_PERIODICENERGYIMPORTED_ENERGY = "electricalenergymeasurement-periodicenergyimported-energy";

    public static final String CHANNEL_ID_ELECTRICALENERGYMEASUREMENT_PERIODICENERGYEXPORTED_ENERGY = "electricalenergymeasurement-periodicenergyexported-energy";

    public static final String CHANNEL_ID_ELECTRICALPOWERMEASUREMENT_VOLTAGE = "electricalpowermeasurement-voltage";
    public static final ChannelTypeUID CHANNEL_ELECTRICALPOWERMEASUREMENT_VOLTAGE = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_ELECTRICALPOWERMEASUREMENT_VOLTAGE);

    public static final String CHANNEL_ID_ELECTRICALPOWERMEASUREMENT_ACTIVECURRENT = "electricalpowermeasurement-activecurrent";
    public static final ChannelTypeUID CHANNEL_ELECTRICALPOWERMEASUREMENT_ACTIVECURRENT = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_ELECTRICALPOWERMEASUREMENT_ACTIVECURRENT);

    public static final String CHANNEL_ID_ELECTRICALPOWERMEASUREMENT_ACTIVEPOWER = "electricalpowermeasurement-activepower";
    public static final ChannelTypeUID CHANNEL_ELECTRICALPOWERMEASUREMENT_ACTIVEPOWER = new ChannelTypeUID(BINDING_ID,
            CHANNEL_ID_ELECTRICALPOWERMEASUREMENT_ACTIVEPOWER);

    public static final String CHANNEL_ID_THREADBORDERROUTERMANAGEMENT_BORDERROUTERNAME = "threadborderroutermgmt-borderroutername";
    public static final String CHANNEL_ID_THREADBORDERROUTERMANAGEMENT_BORDERAGENTID = "threadborderroutermgmt-borderagentid";
    public static final String CHANNEL_ID_THREADBORDERROUTERMANAGEMENT_THREADVERSION = "threadborderroutermgmt-threadversion";
    public static final String CHANNEL_ID_THREADBORDERROUTERMANAGEMENT_INTERFACEENABLED = "threadborderroutermgmt-interfaceenabled";
    public static final String CHANNEL_ID_THREADBORDERROUTERMANAGEMENT_ACTIVEDATASETTIMESTAMP = "threadborderroutermgmt-activedatasettimestamp";
    public static final String CHANNEL_ID_THREADBORDERROUTERMANAGEMENT_PENDINGDATASETTIMESTAMP = "threadborderroutermgmt-pendingdatasettimestamp";
    public static final String CHANNEL_ID_THREADBORDERROUTERMANAGEMENT_ACTIVEDATASET = "threadborderroutermgmt-activedataset";
    public static final String CHANNEL_ID_THREADBORDERROUTERMANAGEMENT_PENDINGDATASET = "threadborderroutermgmt-pendingdataset";

    public static final ChannelTypeUID CHANNEL_THREADBORDERROUTERMANAGEMENT_BORDERROUTERNAME = new ChannelTypeUID(
            BINDING_ID, CHANNEL_ID_THREADBORDERROUTERMANAGEMENT_BORDERROUTERNAME);
    public static final ChannelTypeUID CHANNEL_THREADBORDERROUTERMANAGEMENT_BORDERAGENTID = new ChannelTypeUID(
            BINDING_ID, CHANNEL_ID_THREADBORDERROUTERMANAGEMENT_BORDERAGENTID);
    public static final ChannelTypeUID CHANNEL_THREADBORDERROUTERMANAGEMENT_THREADVERSION = new ChannelTypeUID(
            BINDING_ID, CHANNEL_ID_THREADBORDERROUTERMANAGEMENT_THREADVERSION);
    public static final ChannelTypeUID CHANNEL_THREADBORDERROUTERMANAGEMENT_INTERFACEENABLED = new ChannelTypeUID(
            BINDING_ID, CHANNEL_ID_THREADBORDERROUTERMANAGEMENT_INTERFACEENABLED);
    public static final ChannelTypeUID CHANNEL_THREADBORDERROUTERMANAGEMENT_ACTIVEDATASETTIMESTAMP = new ChannelTypeUID(
            BINDING_ID, CHANNEL_ID_THREADBORDERROUTERMANAGEMENT_ACTIVEDATASETTIMESTAMP);
    public static final ChannelTypeUID CHANNEL_THREADBORDERROUTERMANAGEMENT_PENDINGDATASETTIMESTAMP = new ChannelTypeUID(
            BINDING_ID, CHANNEL_ID_THREADBORDERROUTERMANAGEMENT_PENDINGDATASETTIMESTAMP);
    public static final ChannelTypeUID CHANNEL_THREADBORDERROUTERMANAGEMENT_ACTIVEDATASET = new ChannelTypeUID(
            BINDING_ID, CHANNEL_ID_THREADBORDERROUTERMANAGEMENT_ACTIVEDATASET);
    public static final ChannelTypeUID CHANNEL_THREADBORDERROUTERMANAGEMENT_PENDINGDATASET = new ChannelTypeUID(
            BINDING_ID, CHANNEL_ID_THREADBORDERROUTERMANAGEMENT_PENDINGDATASET);

    // Thread Border Router Configuration Keys
    public static final String CONFIG_THREAD_CHANNEL = "channel";
    public static final String CONFIG_THREAD_ALLOWED_CHANNELS = "allowedChannels";
    public static final String CONFIG_THREAD_EXTENDED_PAN_ID = "extendedPanId";
    public static final String CONFIG_THREAD_MESH_LOCAL_PREFIX = "meshLocalPrefix";
    public static final String CONFIG_THREAD_NETWORK_NAME = "networkName";
    public static final String CONFIG_THREAD_NETWORK_KEY = "networkKey";
    public static final String CONFIG_THREAD_PAN_ID = "panId";
    public static final String CONFIG_THREAD_PSKC = "pskc";
    public static final String CONFIG_THREAD_ACTIVE_TIMESTAMP_SECONDS = "activeTimestampSeconds";
    public static final String CONFIG_THREAD_ACTIVE_TIMESTAMP_TICKS = "activeTimestampTicks";
    public static final String CONFIG_THREAD_ACTIVE_TIMESTAMP_AUTHORITATIVE = "activeTimestampAuthoritative";
    public static final String CONFIG_THREAD_DELAY_TIMER = "delayTimer";
    public static final String CONFIG_THREAD_ROTATION_TIME = "rotationTime";
    public static final String CONFIG_THREAD_OBTAIN_NETWORK_KEY = "obtainNetworkKey";
    public static final String CONFIG_THREAD_NATIVE_COMMISSIONING = "nativeCommissioning";
    public static final String CONFIG_THREAD_ROUTERS = "routers";
    public static final String CONFIG_THREAD_EXTERNAL_COMMISSIONING = "externalCommissioning";
    public static final String CONFIG_THREAD_COMMERCIAL_COMMISSIONING = "commercialCommissioning";
    public static final String CONFIG_THREAD_AUTONOMOUS_ENROLLMENT = "autonomousEnrollment";
    public static final String CONFIG_THREAD_NETWORK_KEY_PROVISIONING = "networkKeyProvisioning";
    public static final String CONFIG_THREAD_TOBLE_LINK = "tobleLink";
    public static final String CONFIG_THREAD_NON_CCM_ROUTERS = "nonCcmRouters";

    // Thread Border Router Configuration Labels
    public static final String CONFIG_LABEL_THREAD_BORDER_ROUTER_OPERATIONAL_DATASET = "@text/thing-type.config.matter.node.thread_border_router_operational_dataset.label";
    public static final String CONFIG_LABEL_THREAD_NETWORK_CHANNEL_NUMBER = "@text/thing-type.config.matter.node.thread_network_channel_number.label";
    public static final String CONFIG_LABEL_THREAD_NETWORK_ALLOWED_CHANNELS = "@text/thing-type.config.matter.node.thread_network_allowed_channels.label";
    public static final String CONFIG_LABEL_THREAD_EXTENDED_PAN_ID = "@text/thing-type.config.matter.node.thread_extended_pan_id.label";
    public static final String CONFIG_LABEL_THREAD_MESH_LOCAL_PREFIX = "@text/thing-type.config.matter.node.thread_mesh_local_prefix.label";
    public static final String CONFIG_LABEL_THREAD_NETWORK_NAME = "@text/thing-type.config.matter.node.thread_network_name.label";
    public static final String CONFIG_LABEL_THREAD_NETWORK_KEY = "@text/thing-type.config.matter.node.thread_network_key.label";
    public static final String CONFIG_LABEL_THREAD_PAN_ID = "@text/thing-type.config.matter.node.thread_pan_id.label";
    public static final String CONFIG_LABEL_THREAD_PSKC = "@text/thing-type.config.matter.node.thread_pskc.label";
    public static final String CONFIG_LABEL_THREAD_ACTIVE_TIMESTAMP_SECONDS = "@text/thing-type.config.matter.node.thread_active_timestamp_seconds.label";
    public static final String CONFIG_LABEL_THREAD_ACTIVE_TIMESTAMP_TICKS = "@text/thing-type.config.matter.node.thread_active_timestamp_ticks.label";
    public static final String CONFIG_LABEL_THREAD_ACTIVE_TIMESTAMP_IS_AUTHORITATIVE = "@text/thing-type.config.matter.node.thread_active_timestamp_is_authoritative.label";
    public static final String CONFIG_LABEL_THREAD_DATASET_SECURITY_POLICY = "@text/thing-type.config.matter.node.thread_dataset_security_policy.label";
    public static final String CONFIG_LABEL_SECURITY_POLICY_ROTATION_TIME = "@text/thing-type.config.matter.node.security_policy_rotation_time.label";
    public static final String CONFIG_LABEL_SECURITY_POLICY_OBTAIN_NETWORK_KEY = "@text/thing-type.config.matter.node.security_policy_obtain_network_key.label";
    public static final String CONFIG_LABEL_SECURITY_POLICY_NATIVE_COMMISSIONING = "@text/thing-type.config.matter.node.security_policy_native_commissioning.label";
    public static final String CONFIG_LABEL_SECURITY_POLICY_ROUTERS = "@text/thing-type.config.matter.node.security_policy_routers.label";
    public static final String CONFIG_LABEL_SECURITY_POLICY_EXTERNAL_COMMISSIONING = "@text/thing-type.config.matter.node.security_policy_external_commissioning.label";
    public static final String CONFIG_LABEL_SECURITY_POLICY_COMMERCIAL_COMMISSIONING = "@text/thing-type.config.matter.node.security_policy_commercial_commissioning.label";
    public static final String CONFIG_LABEL_SECURITY_POLICY_AUTONOMOUS_ENROLLMENT = "@text/thing-type.config.matter.node.security_policy_autonomous_enrollment.label";
    public static final String CONFIG_LABEL_SECURITY_POLICY_NETWORK_KEY_PROVISIONING = "@text/thing-type.config.matter.node.security_policy_network_key_provisioning.label";
    public static final String CONFIG_LABEL_SECURITY_POLICY_TO_BLE_LINK = "@text/thing-type.config.matter.node.security_policy_to_ble_link.label";
    public static final String CONFIG_LABEL_SECURITY_POLICY_NON_CCM_ROUTERS = "@text/thing-type.config.matter.node.security_policy_non_ccm_routers.label";

    // Thread Border Router Configuration Descriptions
    public static final String CONFIG_DESC_THREAD_BORDER_ROUTER_OPERATIONAL_DATASET = "@text/thing-type.config.matter.node.thread_border_router_operational_dataset.description";
    public static final String CONFIG_DESC_THREAD_NETWORK_CHANNEL_NUMBER = "@text/thing-type.config.matter.node.thread_network_channel_number.description";
    public static final String CONFIG_DESC_THREAD_NETWORK_ALLOWED_CHANNELS = "@text/thing-type.config.matter.node.thread_network_allowed_channels.description";
    public static final String CONFIG_DESC_THREAD_EXTENDED_PAN_ID = "@text/thing-type.config.matter.node.thread_extended_pan_id.description";
    public static final String CONFIG_DESC_THREAD_MESH_LOCAL_PREFIX = "@text/thing-type.config.matter.node.thread_mesh_local_prefix.description";
    public static final String CONFIG_DESC_THREAD_NETWORK_NAME = "@text/thing-type.config.matter.node.thread_network_name.description";
    public static final String CONFIG_DESC_THREAD_NETWORK_KEY = "@text/thing-type.config.matter.node.thread_network_key.description";
    public static final String CONFIG_DESC_THREAD_PAN_ID = "@text/thing-type.config.matter.node.thread_pan_id.description";
    public static final String CONFIG_DESC_THREAD_PSKC = "@text/thing-type.config.matter.node.thread_pskc.description";
    public static final String CONFIG_DESC_THREAD_ACTIVE_TIMESTAMP_SECONDS = "@text/thing-type.config.matter.node.thread_active_timestamp_seconds.description";
    public static final String CONFIG_DESC_THREAD_ACTIVE_TIMESTAMP_TICKS = "@text/thing-type.config.matter.node.thread_active_timestamp_ticks.description";
    public static final String CONFIG_DESC_THREAD_ACTIVE_TIMESTAMP_IS_AUTHORITATIVE = "@text/thing-type.config.matter.node.thread_active_timestamp_is_authoritative.description";
    public static final String CONFIG_DESC_THREAD_DATASET_SECURITY_POLICY = "@text/thing-type.config.matter.node.thread_dataset_security_policy.description";
    public static final String CONFIG_DESC_SECURITY_POLICY_ROTATION_TIME = "@text/thing-type.config.matter.node.security_policy_rotation_time.description";
    public static final String CONFIG_DESC_SECURITY_POLICY_OBTAIN_NETWORK_KEY = "@text/thing-type.config.matter.node.security_policy_obtain_network_key.description";
    public static final String CONFIG_DESC_SECURITY_POLICY_NATIVE_COMMISSIONING = "@text/thing-type.config.matter.node.security_policy_native_commissioning.description";
    public static final String CONFIG_DESC_SECURITY_POLICY_ROUTERS = "@text/thing-type.config.matter.node.security_policy_routers.description";
    public static final String CONFIG_DESC_SECURITY_POLICY_EXTERNAL_COMMISSIONING = "@text/thing-type.config.matter.node.security_policy_external_commissioning.description";
    public static final String CONFIG_DESC_SECURITY_POLICY_COMMERCIAL_COMMISSIONING = "@text/thing-type.config.matter.node.security_policy_commercial_commissioning.description";
    public static final String CONFIG_DESC_SECURITY_POLICY_AUTONOMOUS_ENROLLMENT = "@text/thing-type.config.matter.node.security_policy_autonomous_enrollment.description";
    public static final String CONFIG_DESC_SECURITY_POLICY_NETWORK_KEY_PROVISIONING = "@text/thing-type.config.matter.node.security_policy_network_key_provisioning.description";
    public static final String CONFIG_DESC_SECURITY_POLICY_TO_BLE_LINK = "@text/thing-type.config.matter.node.security_policy_to_ble_link.description";
    public static final String CONFIG_DESC_SECURITY_POLICY_NON_CCM_ROUTERS = "@text/thing-type.config.matter.node.security_policy_non_ccm_routers.description";
    // Thread Border Router Configuration Group Names
    public static final String CONFIG_GROUP_THREAD_BORDER_ROUTER_CORE = "threadBorderRouterCore";
    public static final String CONFIG_GROUP_SECURITY_POLICY = "securityPolicy";

    // Matter Node Actions
    public static final String THING_ACTION_LABEL_NODE_GENERATE_NEW_PAIRING_CODE = "@text/thing-action.node.generateNewPairingCode.label";
    public static final String THING_ACTION_DESC_NODE_GENERATE_NEW_PAIRING_CODE = "@text/thing-action.node.generateNewPairingCode.description";
    public static final String THING_ACTION_LABEL_NODE_GENERATE_NEW_PAIRING_CODE_MANUAL_PAIRING_CODE = "@text/thing-action.node.generateNewPairingCode.manual-pairing-code.label";
    public static final String THING_ACTION_LABEL_NODE_GENERATE_NEW_PAIRING_CODE_QR_PAIRING_CODE = "@text/thing-action.node.generateNewPairingCode.qr-pairing-code.label";
    public static final String THING_ACTION_LABEL_NODE_DECOMMISSION = "@text/thing-action.node.decommission.label";
    public static final String THING_ACTION_DESC_NODE_DECOMMISSION = "@text/thing-action.node.decommission.description";
    public static final String THING_ACTION_LABEL_NODE_DECOMMISSION_RESULT = "@text/thing-action.node.decommission.result.label";
    public static final String THING_ACTION_LABEL_NODE_GET_FABRICS = "@text/thing-action.node.getFabrics.label";
    public static final String THING_ACTION_DESC_NODE_GET_FABRICS = "@text/thing-action.node.getFabrics.description";
    public static final String THING_ACTION_LABEL_NODE_GET_FABRICS_RESULT = "@text/thing-action.node.getFabrics.result.label";
    public static final String THING_ACTION_LABEL_NODE_REMOVE_FABRIC = "@text/thing-action.node.removeFabric.label";
    public static final String THING_ACTION_DESC_NODE_REMOVE_FABRIC = "@text/thing-action.node.removeFabric.description";
    public static final String THING_ACTION_LABEL_NODE_REMOVE_FABRIC_RESULT = "@text/thing-action.node.removeFabric.result.label";
    public static final String THING_ACTION_LABEL_NODE_REMOVE_FABRIC_INDEX = "@text/thing-action.node.removeFabric.index.label";
    public static final String THING_ACTION_DESC_NODE_REMOVE_FABRIC_INDEX = "@text/thing-action.node.removeFabric.index.description";

    // Action Result Messages
    public static final String THING_ACTION_RESULT_SUCCESS = "@text/thing-action.result.success";
    public static final String THING_ACTION_RESULT_NO_HANDLER = "@text/thing-action.result.no-handler";
    public static final String THING_ACTION_RESULT_NO_FABRICS = "@text/thing-action.result.no-fabrics";

    // Matter OTBR Actions
    public static final String THING_ACTION_LABEL_OTBR_LOAD_EXTERNAL_DATASET = "@text/thing-action.otbr.loadExternalDataset.label";
    public static final String THING_ACTION_DESC_OTBR_LOAD_EXTERNAL_DATASET = "@text/thing-action.otbr.loadExternalDataset.description";
    public static final String THING_ACTION_LABEL_OTBR_LOAD_EXTERNAL_DATASET_DATASET = "@text/thing-action.otbr.loadExternalDataset.dataset.label";
    public static final String THING_ACTION_DESC_OTBR_LOAD_EXTERNAL_DATASET_DATASET = "@text/thing-action.otbr.loadExternalDataset.dataset.description";
    public static final String THING_ACTION_LABEL_OTBR_LOAD_EXTERNAL_DATASET_RESULT = "@text/thing-action.otbr.loadExternalDataset.result.label";

    public static final String THING_ACTION_LABEL_OTBR_LOAD_ACTIVE_DATASET = "@text/thing-action.otbr.loadActiveDataset.label";
    public static final String THING_ACTION_DESC_OTBR_LOAD_ACTIVE_DATASET = "@text/thing-action.otbr.loadActiveDataset.description";
    public static final String THING_ACTION_LABEL_OTBR_LOAD_ACTIVE_DATASET_RESULT = "@text/thing-action.otbr.loadActiveDataset.result.label";
    public static final String THING_ACTION_LABEL_OTBR_LOAD_ACTIVE_DATASET_DATASET = "@text/thing-action.otbr.loadActiveDataset.dataset.label";

    public static final String THING_ACTION_LABEL_OTBR_PUSH_DATASET = "@text/thing-action.otbr.pushDataset.label";
    public static final String THING_ACTION_DESC_OTBR_PUSH_DATASET = "@text/thing-action.otbr.pushDataset.description";
    public static final String THING_ACTION_LABEL_OTBR_PUSH_DATASET_RESULT = "@text/thing-action.otbr.pushDataset.result.label";
    public static final String THING_ACTION_LABEL_OTBR_PUSH_DATASET_DELAY = "@text/thing-action.otbr.pushDataset.delay.label";
    public static final String THING_ACTION_DESC_OTBR_PUSH_DATASET_DELAY = "@text/thing-action.otbr.pushDataset.delay.description";
    public static final String THING_ACTION_LABEL_OTBR_PUSH_DATASET_GENERATE_TIME = "@text/thing-action.otbr.pushDataset.generateTime.label";
    public static final String THING_ACTION_DESC_OTBR_PUSH_DATASET_GENERATE_TIME = "@text/thing-action.otbr.pushDataset.generateTime.description";
    public static final String THING_ACTION_LABEL_OTBR_PUSH_DATASET_INCREMENT_TIME = "@text/thing-action.otbr.pushDataset.incrementTime.label";
    public static final String THING_ACTION_DESC_OTBR_PUSH_DATASET_INCREMENT_TIME = "@text/thing-action.otbr.pushDataset.incrementTime.description";

    public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET = "@text/thing-action.otbr.generateDataset.label";
    public static final String THING_ACTION_DESC_OTBR_GENERATE_DATASET = "@text/thing-action.otbr.generateDataset.description";
    public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_RESULT = "@text/thing-action.otbr.generateDataset.result.label";
    public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_JSON = "@text/thing-action.otbr.generateDataset.json.label";
    public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_HEX = "@text/thing-action.otbr.generateDataset.hex.label";
    public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_SAVE = "@text/thing-action.otbr.generateDataset.save.label";
    public static final String THING_ACTION_DESC_OTBR_GENERATE_DATASET_SAVE = "@text/thing-action.otbr.generateDataset.save.description";
    public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_CHANNEL = "@text/thing-action.otbr.generateDataset.channel.label";
    public static final String THING_ACTION_DESC_OTBR_GENERATE_DATASET_CHANNEL = "@text/thing-action.otbr.generateDataset.channel.description";
    public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_TIMESTAMP_SECONDS = "@text/thing-action.otbr.generateDataset.timestampSeconds.label";
    public static final String THING_ACTION_DESC_OTBR_GENERATE_DATASET_TIMESTAMP_SECONDS = "@text/thing-action.otbr.generateDataset.timestampSeconds.description";
    public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_TIMESTAMP_TICKS = "@text/thing-action.otbr.generateDataset.timestampTicks.label";
|
||||
public static final String THING_ACTION_DESC_OTBR_GENERATE_DATASET_TIMESTAMP_TICKS = "@text/thing-action.otbr.generateDataset.timestampTicks.description";
|
||||
public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_TIMESTAMP_AUTHORITATIVE = "@text/thing-action.otbr.generateDataset.timestampAuthoritative.label";
|
||||
public static final String THING_ACTION_DESC_OTBR_GENERATE_DATASET_TIMESTAMP_AUTHORITATIVE = "@text/thing-action.otbr.generateDataset.timestampAuthoritative.description";
|
||||
public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_PAN_ID = "@text/thing-action.otbr.generateDataset.panId.label";
|
||||
public static final String THING_ACTION_DESC_OTBR_GENERATE_DATASET_PAN_ID = "@text/thing-action.otbr.generateDataset.panId.description";
|
||||
public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_EXTENDED_PAN_ID = "@text/thing-action.otbr.generateDataset.extendedPanId.label";
|
||||
public static final String THING_ACTION_DESC_OTBR_GENERATE_DATASET_EXTENDED_PAN_ID = "@text/thing-action.otbr.generateDataset.extendedPanId.description";
|
||||
public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_MESH_PREFIX = "@text/thing-action.otbr.generateDataset.meshPrefix.label";
|
||||
public static final String THING_ACTION_DESC_OTBR_GENERATE_DATASET_MESH_PREFIX = "@text/thing-action.otbr.generateDataset.meshPrefix.description";
|
||||
public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_NETWORK_NAME = "@text/thing-action.otbr.generateDataset.networkName.label";
|
||||
public static final String THING_ACTION_DESC_OTBR_GENERATE_DATASET_NETWORK_NAME = "@text/thing-action.otbr.generateDataset.networkName.description";
|
||||
public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_NETWORK_KEY = "@text/thing-action.otbr.generateDataset.networkKey.label";
|
||||
public static final String THING_ACTION_DESC_OTBR_GENERATE_DATASET_NETWORK_KEY = "@text/thing-action.otbr.generateDataset.networkKey.description";
|
||||
public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_PASSPHRASE = "@text/thing-action.otbr.generateDataset.passphrase.label";
|
||||
public static final String THING_ACTION_DESC_OTBR_GENERATE_DATASET_PASSPHRASE = "@text/thing-action.otbr.generateDataset.passphrase.description";
|
||||
public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_ROTATION_TIME = "@text/thing-action.otbr.generateDataset.rotationTime.label";
|
||||
public static final String THING_ACTION_DESC_OTBR_GENERATE_DATASET_ROTATION_TIME = "@text/thing-action.otbr.generateDataset.rotationTime.description";
|
||||
public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_OBTAIN_NETWORK_KEY = "@text/thing-action.otbr.generateDataset.obtainNetworkKey.label";
|
||||
public static final String THING_ACTION_DESC_OTBR_GENERATE_DATASET_OBTAIN_NETWORK_KEY = "@text/thing-action.otbr.generateDataset.obtainNetworkKey.description";
|
||||
public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_NATIVE_COMMISSIONING = "@text/thing-action.otbr.generateDataset.nativeCommissioning.label";
|
||||
public static final String THING_ACTION_DESC_OTBR_GENERATE_DATASET_NATIVE_COMMISSIONING = "@text/thing-action.otbr.generateDataset.nativeCommissioning.description";
|
||||
public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_ROUTERS = "@text/thing-action.otbr.generateDataset.routers.label";
|
||||
public static final String THING_ACTION_DESC_OTBR_GENERATE_DATASET_ROUTERS = "@text/thing-action.otbr.generateDataset.routers.description";
|
||||
public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_EXTERNAL_COMMISSIONING = "@text/thing-action.otbr.generateDataset.externalCommissioning.label";
|
||||
public static final String THING_ACTION_DESC_OTBR_GENERATE_DATASET_EXTERNAL_COMMISSIONING = "@text/thing-action.otbr.generateDataset.externalCommissioning.description";
|
||||
public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_COMMERCIAL_COMMISSIONING = "@text/thing-action.otbr.generateDataset.commercialCommissioning.label";
|
||||
public static final String THING_ACTION_DESC_OTBR_GENERATE_DATASET_COMMERCIAL_COMMISSIONING = "@text/thing-action.otbr.generateDataset.commercialCommissioning.description";
|
||||
public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_AUTONOMOUS_ENROLLMENT = "@text/thing-action.otbr.generateDataset.autonomousEnrollment.label";
|
||||
public static final String THING_ACTION_DESC_OTBR_GENERATE_DATASET_AUTONOMOUS_ENROLLMENT = "@text/thing-action.otbr.generateDataset.autonomousEnrollment.description";
|
||||
public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_NETWORK_KEY_PROVISIONING = "@text/thing-action.otbr.generateDataset.networkKeyProvisioning.label";
|
||||
public static final String THING_ACTION_DESC_OTBR_GENERATE_DATASET_NETWORK_KEY_PROVISIONING = "@text/thing-action.otbr.generateDataset.networkKeyProvisioning.description";
|
||||
public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_TOBLE_LINK = "@text/thing-action.otbr.generateDataset.tobleLink.label";
|
||||
public static final String THING_ACTION_DESC_OTBR_GENERATE_DATASET_TOBLE_LINK = "@text/thing-action.otbr.generateDataset.tobleLink.description";
|
||||
public static final String THING_ACTION_LABEL_OTBR_GENERATE_DATASET_NON_CCM_ROUTERS = "@text/thing-action.otbr.generateDataset.nonCcmRouters.label";
|
||||
public static final String THING_ACTION_DESC_OTBR_GENERATE_DATASET_NON_CCM_ROUTERS = "@text/thing-action.otbr.generateDataset.nonCcmRouters.description";
|
||||
|
||||
// Matter OTBR Action Results
|
||||
public static final String THING_ACTION_RESULT_NO_CONVERTER = "@text/thing-action.result.no-converter";
|
||||
public static final String THING_ACTION_RESULT_INVALID_JSON = "@text/thing-action.result.invalid-json";
|
||||
public static final String THING_ACTION_RESULT_ERROR_GENERATING_KEY = "@text/thing-action.result.error-generating-key";
|
||||
public static final String THING_ACTION_RESULT_ERROR_SETTING_DATASET = "@text/thing-action.result.error-setting-dataset";
|
||||
|
||||
// Matter Controller Actions
|
||||
public static final String THING_ACTION_LABEL_CONTROLLER_PAIR_DEVICE = "@text/thing-action.controller.pairDevice.label";
|
||||
public static final String THING_ACTION_DESC_CONTROLLER_PAIR_DEVICE = "@text/thing-action.controller.pairDevice.description";
|
||||
public static final String THING_ACTION_LABEL_CONTROLLER_PAIR_DEVICE_CODE = "@text/thing-action.controller.pairDevice.code.label";
|
||||
public static final String THING_ACTION_DESC_CONTROLLER_PAIR_DEVICE_CODE = "@text/thing-action.controller.pairDevice.code.description";
|
||||
public static final String THING_ACTION_LABEL_CONTROLLER_PAIR_DEVICE_RESULT = "@text/thing-action.controller.pairDevice.result.label";
|
||||
public static final String THING_ACTION_RESULT_DEVICE_ADDED = "@text/thing-action.result.device-added";
|
||||
public static final String THING_ACTION_RESULT_PAIRING_FAILED = "@text/thing-action.result.pairing-failed";
|
||||
|
||||
// Matter Controller Statuses
|
||||
public static final String THING_STATUS_DETAIL_CONTROLLER_WAITING_FOR_DATA = "@text/thing-status.detail.controller.waitingForData";
|
||||
public static final String THING_STATUS_DETAIL_ENDPOINT_THING_NOT_REACHABLE = "@text/thing-status.detail.endpoint.thingNotReachable";
|
||||
|
||||
// Discovery
|
||||
public static final String DISCOVERY_MATTER_BRIDGE_ENDPOINT_LABEL = "@text/discovery.matter.bridge-endpoint.label";
|
||||
public static final String DISCOVERY_MATTER_NODE_DEVICE_LABEL = "@text/discovery.matter.node-device.label";
|
||||
public static final String DISCOVERY_MATTER_UNKNOWN_NODE_LABEL = "@text/discovery.matter.unknown-node.label";
|
||||
public static final String DISCOVERY_MATTER_SCAN_INPUT_LABEL = "@text/discovery.matter.scan-input.label";
|
||||
public static final String DISCOVERY_MATTER_SCAN_INPUT_DESCRIPTION = "@text/discovery.matter.scan-input.description";
|
||||
}
|

@@ -0,0 +1,117 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal;

import java.net.URI;
import java.util.Collection;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.eclipse.jdt.annotation.Nullable;
import org.openhab.core.storage.StorageService;
import org.openhab.core.thing.ThingTypeUID;
import org.openhab.core.thing.binding.AbstractStorageBasedTypeProvider;
import org.openhab.core.thing.binding.ThingTypeProvider;
import org.openhab.core.thing.type.ChannelGroupType;
import org.openhab.core.thing.type.ChannelGroupTypeProvider;
import org.openhab.core.thing.type.ChannelGroupTypeUID;
import org.openhab.core.thing.type.ChannelTypeProvider;
import org.openhab.core.thing.type.ThingType;
import org.openhab.core.thing.type.ThingTypeBuilder;
import org.openhab.core.thing.type.ThingTypeRegistry;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

/**
 * This class is used to provide channel types dynamically for Matter devices.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
@Component(immediate = false, service = { ThingTypeProvider.class, ChannelTypeProvider.class,
        ChannelGroupTypeProvider.class, MatterChannelTypeProvider.class })
public class MatterChannelTypeProvider extends AbstractStorageBasedTypeProvider {
    private final ThingTypeRegistry thingTypeRegistry;

    @Activate
    public MatterChannelTypeProvider(@Reference ThingTypeRegistry thingTypeRegistry,
            @Reference StorageService storageService) {
        super(storageService);
        this.thingTypeRegistry = thingTypeRegistry;
    }

    /**
     * Clones the defaults from an XML-defined thing type (baseType). Optionally pass in the bridge types of parent
     * things that are dynamic and not defined in XML.
     */
    public ThingTypeBuilder derive(ThingTypeUID newTypeId, ThingTypeUID baseTypeId,
            @Nullable List<String> supportedBridgeTypeUIDs) {
        ThingType baseType = thingTypeRegistry.getThingType(baseTypeId);

        if (baseType == null) {
            throw new IllegalArgumentException("Base type not found: " + baseTypeId);
        }

        ThingTypeBuilder result = ThingTypeBuilder.instance(newTypeId, baseType.getLabel())
                .withChannelGroupDefinitions(baseType.getChannelGroupDefinitions())
                .withChannelDefinitions(baseType.getChannelDefinitions())
                .withExtensibleChannelTypeIds(baseType.getExtensibleChannelTypeIds())
                .withSupportedBridgeTypeUIDs(supportedBridgeTypeUIDs != null ? supportedBridgeTypeUIDs
                        : baseType.getSupportedBridgeTypeUIDs())
                .withProperties(baseType.getProperties()).isListed(false);

        String representationProperty = baseType.getRepresentationProperty();
        if (representationProperty != null) {
            result = result.withRepresentationProperty(representationProperty);
        }
        URI configDescriptionURI = baseType.getConfigDescriptionURI();
        if (configDescriptionURI != null) {
            result = result.withConfigDescriptionURI(configDescriptionURI);
        }
        String category = baseType.getCategory();
        if (category != null) {
            result = result.withCategory(category);
        }
        String description = baseType.getDescription();
        if (description != null) {
            result = result.withDescription(description);
        }

        return result;
    }

    public void updateChannelGroupTypesForPrefix(String prefix, Collection<ChannelGroupType> types) {
        Collection<ChannelGroupType> oldCgts = channelGroupTypesForPrefix(prefix);

        Set<ChannelGroupTypeUID> oldUids = oldCgts.stream().map(ChannelGroupType::getUID).collect(Collectors.toSet());
        Collection<ChannelGroupTypeUID> uids = types.stream().map(ChannelGroupType::getUID).toList();

        oldUids.removeAll(uids);
        // oldUids now contains only UIDs that no longer exist, so remove them
        oldUids.forEach(this::removeChannelGroupType);
        types.forEach(this::putChannelGroupType);
    }

    public void removeChannelGroupTypesForPrefix(String prefix) {
        channelGroupTypesForPrefix(prefix).forEach(cgt -> removeChannelGroupType(cgt.getUID()));
    }

    private Collection<ChannelGroupType> channelGroupTypesForPrefix(String prefix) {
        return getChannelGroupTypes(null).stream().filter(cgt -> cgt.getUID().getId().startsWith(prefix + "-"))
                .toList();
    }
}
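The stale-type cleanup in `updateChannelGroupTypesForPrefix` above is a plain set difference: collect the old UIDs, subtract the current ones, and whatever remains is removed from storage. A minimal standalone sketch of the same pattern, using strings as hypothetical stand-ins for `ChannelGroupTypeUID`s:

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class StaleUidSketch {
    // Returns the members of oldUids that are absent from newUids,
    // i.e. the entries updateChannelGroupTypesForPrefix would remove.
    static Set<String> stale(Set<String> oldUids, Collection<String> newUids) {
        Set<String> result = new HashSet<>(oldUids);
        result.removeAll(newUids);
        return result;
    }

    public static void main(String[] args) {
        Set<String> old = Set.of("matter-a", "matter-b", "matter-c");
        List<String> current = List.of("matter-b", "matter-c", "matter-d");
        // matter-a is the only old UID missing from the current set
        System.out.println(stale(old, current));
    }
}
```

Copying into a fresh `HashSet` before the `removeAll` matters: it keeps the computation from mutating the caller's collection, which mirrors how the provider derives `oldUids` from a fresh stream collect.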

@@ -0,0 +1,53 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal;

import java.net.URI;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.eclipse.jdt.annotation.Nullable;
import org.openhab.core.config.core.ConfigDescription;
import org.openhab.core.config.core.ConfigDescriptionProvider;
import org.osgi.service.component.annotations.Component;

/**
 * Extends the ConfigDescriptionProvider to dynamically add ConfigDescriptions.
 *
 * @author Dan Cunningham - Initial contribution
 */
@Component(service = { MatterConfigDescriptionProvider.class, ConfigDescriptionProvider.class })
@NonNullByDefault
public class MatterConfigDescriptionProvider implements ConfigDescriptionProvider {

    private final Map<URI, ConfigDescription> configDescriptionsByURI = new HashMap<>();

    @Override
    public Collection<ConfigDescription> getConfigDescriptions(@Nullable Locale locale) {
        return new ArrayList<>(configDescriptionsByURI.values());
    }

    @Override
    @Nullable
    public ConfigDescription getConfigDescription(URI uri, @Nullable Locale locale) {
        return configDescriptionsByURI.get(uri);
    }

    public void addConfigDescription(ConfigDescription configDescription) {
        configDescriptionsByURI.put(configDescription.getUID(), configDescription);
    }
}

@@ -0,0 +1,101 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal;

import static org.openhab.binding.matter.internal.MatterBindingConstants.THING_TYPE_CONTROLLER;
import static org.openhab.binding.matter.internal.MatterBindingConstants.THING_TYPE_ENDPOINT;
import static org.openhab.binding.matter.internal.MatterBindingConstants.THING_TYPE_NODE;

import java.util.Set;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.eclipse.jdt.annotation.Nullable;
import org.openhab.binding.matter.internal.client.MatterWebsocketService;
import org.openhab.binding.matter.internal.handler.ControllerHandler;
import org.openhab.binding.matter.internal.handler.EndpointHandler;
import org.openhab.binding.matter.internal.handler.NodeHandler;
import org.openhab.binding.matter.internal.util.MatterUIDUtils;
import org.openhab.binding.matter.internal.util.TranslationService;
import org.openhab.core.thing.Bridge;
import org.openhab.core.thing.Thing;
import org.openhab.core.thing.ThingTypeUID;
import org.openhab.core.thing.binding.BaseThingHandlerFactory;
import org.openhab.core.thing.binding.ThingHandler;
import org.openhab.core.thing.binding.ThingHandlerFactory;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

/**
 * The {@link MatterHandlerFactory} is responsible for creating things and thing handlers.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
@Component(service = { ThingHandlerFactory.class, MatterHandlerFactory.class })
public class MatterHandlerFactory extends BaseThingHandlerFactory {
    private static final Set<ThingTypeUID> SUPPORTED_THING_TYPES_UIDS = Set.of(THING_TYPE_CONTROLLER, THING_TYPE_NODE,
            THING_TYPE_ENDPOINT);

    private final MatterStateDescriptionOptionProvider stateDescriptionProvider;
    private final MatterWebsocketService websocketService;
    private final MatterChannelTypeProvider channelGroupTypeProvider;
    private final MatterConfigDescriptionProvider configDescriptionProvider;
    private final TranslationService translationService;

    @Activate
    public MatterHandlerFactory(@Reference MatterWebsocketService websocketService,
            @Reference MatterStateDescriptionOptionProvider stateDescriptionProvider,
            @Reference MatterChannelTypeProvider channelGroupTypeProvider,
            @Reference MatterConfigDescriptionProvider configDescriptionProvider,
            @Reference TranslationService translationService) {
        this.websocketService = websocketService;
        this.stateDescriptionProvider = stateDescriptionProvider;
        this.channelGroupTypeProvider = channelGroupTypeProvider;
        this.configDescriptionProvider = configDescriptionProvider;
        this.translationService = translationService;
    }

    @Override
    public boolean supportsThingType(ThingTypeUID thingTypeUID) {
        ThingTypeUID baseTypeUID = MatterUIDUtils.baseTypeForThingType(thingTypeUID);
        return SUPPORTED_THING_TYPES_UIDS.contains(baseTypeUID != null ? baseTypeUID : thingTypeUID);
    }

    @Override
    protected @Nullable ThingHandler createHandler(Thing thing) {
        ThingTypeUID thingTypeUID = thing.getThingTypeUID();

        if (THING_TYPE_CONTROLLER.equals(thingTypeUID)) {
            return new ControllerHandler((Bridge) thing, websocketService, translationService);
        }

        ThingTypeUID baseTypeUID = MatterUIDUtils.baseTypeForThingType(thingTypeUID);
        ThingTypeUID derivedTypeUID = baseTypeUID != null ? baseTypeUID : thingTypeUID;

        if (THING_TYPE_NODE.equals(derivedTypeUID)) {
            return new NodeHandler((Bridge) thing, this, stateDescriptionProvider, channelGroupTypeProvider,
                    configDescriptionProvider, translationService);
        }

        if (THING_TYPE_ENDPOINT.equals(derivedTypeUID)) {
            return new EndpointHandler(thing, this, stateDescriptionProvider, channelGroupTypeProvider,
                    configDescriptionProvider, translationService);
        }

        return null;
    }
}
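Both `supportsThingType` and `createHandler` above apply one resolution rule: map a dynamically derived thing type back to its base type, and fall back to the type itself when no base exists. A self-contained sketch of that rule, with a plain map standing in for the lookup that `MatterUIDUtils.baseTypeForThingType` performs (the UID strings here are illustrative, not actual binding UIDs):

```java
import java.util.Map;
import java.util.Set;

public class TypeResolutionSketch {
    // Stand-in for MatterUIDUtils.baseTypeForThingType: derived -> base, or absent.
    static final Map<String, String> BASE_TYPES = Map.of("matter:node-abc123", "matter:node");

    static final Set<String> SUPPORTED = Set.of("matter:controller", "matter:node", "matter:endpoint");

    // Same fallback as supportsThingType: use the base type when one exists,
    // otherwise treat the type as its own base.
    static boolean supports(String thingType) {
        String base = BASE_TYPES.get(thingType);
        return SUPPORTED.contains(base != null ? base : thingType);
    }

    public static void main(String[] args) {
        System.out.println(supports("matter:node-abc123")); // derived type resolves to matter:node
        System.out.println(supports("matter:controller")); // no base entry, matched directly
    }
}
```

The fallback is what lets one factory serve both the statically declared types and the per-device derived types the channel type provider registers at runtime.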

@@ -0,0 +1,86 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal;

import java.math.BigDecimal;
import java.util.Locale;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.eclipse.jdt.annotation.Nullable;
import org.openhab.core.events.EventPublisher;
import org.openhab.core.thing.Channel;
import org.openhab.core.thing.ChannelUID;
import org.openhab.core.thing.binding.BaseDynamicStateDescriptionProvider;
import org.openhab.core.thing.events.ThingEventFactory;
import org.openhab.core.thing.i18n.ChannelTypeI18nLocalizationService;
import org.openhab.core.thing.link.ItemChannelLinkRegistry;
import org.openhab.core.thing.type.DynamicStateDescriptionProvider;
import org.openhab.core.types.StateDescription;
import org.openhab.core.types.StateDescriptionFragment;
import org.openhab.core.types.StateDescriptionFragmentBuilder;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

/**
 * Dynamic provider of state options.
 *
 * @author Dan Cunningham - Initial contribution
 */
@Component(service = { DynamicStateDescriptionProvider.class, MatterStateDescriptionOptionProvider.class })
@NonNullByDefault
public class MatterStateDescriptionOptionProvider extends BaseDynamicStateDescriptionProvider {

    private final Map<ChannelUID, StateDescriptionFragment> stateDescriptionFragments = new ConcurrentHashMap<>();

    @Activate
    public MatterStateDescriptionOptionProvider(final @Reference EventPublisher eventPublisher,
            final @Reference ItemChannelLinkRegistry itemChannelLinkRegistry,
            final @Reference ChannelTypeI18nLocalizationService channelTypeI18nLocalizationService) {
        this.eventPublisher = eventPublisher;
        this.itemChannelLinkRegistry = itemChannelLinkRegistry;
        this.channelTypeI18nLocalizationService = channelTypeI18nLocalizationService;
    }

    @Override
    public @Nullable StateDescription getStateDescription(Channel channel, @Nullable StateDescription original,
            @Nullable Locale locale) {
        StateDescriptionFragment stateDescriptionFragment = stateDescriptionFragments.get(channel.getUID());
        return stateDescriptionFragment != null ? stateDescriptionFragment.toStateDescription()
                : super.getStateDescription(channel, original, locale);
    }

    public void setMinMax(ChannelUID channelUID, BigDecimal min, BigDecimal max, @Nullable BigDecimal step,
            @Nullable String pattern) {
        StateDescriptionFragment oldStateDescriptionFragment = stateDescriptionFragments.get(channelUID);
        StateDescriptionFragmentBuilder builder = StateDescriptionFragmentBuilder.create().withMinimum(min)
                .withMaximum(max);
        if (step != null) {
            builder = builder.withStep(step);
        }
        if (pattern != null) {
            builder = builder.withPattern(pattern);
        }
        StateDescriptionFragment newStateDescriptionFragment = builder.build();
        if (!newStateDescriptionFragment.equals(oldStateDescriptionFragment)) {
            stateDescriptionFragments.put(channelUID, newStateDescriptionFragment);
            ItemChannelLinkRegistry itemChannelLinkRegistry = this.itemChannelLinkRegistry;
            postEvent(ThingEventFactory.createChannelDescriptionChangedEvent(channelUID,
                    itemChannelLinkRegistry != null ? itemChannelLinkRegistry.getLinkedItemNames(channelUID) : Set.of(),
                    newStateDescriptionFragment, oldStateDescriptionFragment));
        }
    }
}
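`setMinMax` above only caches the rebuilt fragment and posts a channel-description-changed event when it actually differs from the cached one, which keeps repeated identical updates from flooding the event bus. A minimal standalone sketch of that change guard, with strings as hypothetical stand-ins for `StateDescriptionFragment`s and a counter standing in for `postEvent`:

```java
import java.util.HashMap;
import java.util.Map;

public class ChangeGuardSketch {
    private final Map<String, String> fragments = new HashMap<>();
    private int eventsPosted = 0;

    // Mirrors setMinMax: cache and publish only when the description changed.
    void update(String channelUid, String fragment) {
        String old = fragments.get(channelUid);
        if (!fragment.equals(old)) {
            fragments.put(channelUid, fragment);
            eventsPosted++; // stand-in for postEvent(...)
        }
    }

    int eventsPosted() {
        return eventsPosted;
    }

    public static void main(String[] args) {
        ChangeGuardSketch guard = new ChangeGuardSketch();
        guard.update("matter:node:1:level", "min=0,max=100");
        guard.update("matter:node:1:level", "min=0,max=100"); // identical, no event
        guard.update("matter:node:1:level", "min=0,max=254");
        System.out.println(guard.eventsPosted()); // prints 2
    }
}
```

Note the comparison runs on the freshly built value against the previously cached one, so the old fragment can still be handed to the event as the "before" state, exactly as the provider passes `oldStateDescriptionFragment` into the created event.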

@@ -0,0 +1,82 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.actions;

import java.util.concurrent.ExecutionException;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.eclipse.jdt.annotation.Nullable;
import org.openhab.binding.matter.internal.MatterBindingConstants;
import org.openhab.binding.matter.internal.handler.ControllerHandler;
import org.openhab.binding.matter.internal.util.TranslationService;
import org.openhab.core.automation.annotation.ActionInput;
import org.openhab.core.automation.annotation.ActionOutput;
import org.openhab.core.automation.annotation.ActionOutputs;
import org.openhab.core.automation.annotation.RuleAction;
import org.openhab.core.thing.binding.ThingActions;
import org.openhab.core.thing.binding.ThingActionsScope;
import org.openhab.core.thing.binding.ThingHandler;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ServiceScope;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * The {@link MatterControllerActions} exposes Matter related actions for the Matter Controller Thing.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
@Component(scope = ServiceScope.PROTOTYPE, service = MatterControllerActions.class)
@ThingActionsScope(name = "matter")
public class MatterControllerActions implements ThingActions {
    public final Logger logger = LoggerFactory.getLogger(getClass());
    private @Nullable ControllerHandler handler;
    private final TranslationService translationService;

    @Activate
    public MatterControllerActions(@Reference TranslationService translationService) {
        this.translationService = translationService;
    }

    @Override
    public void setThingHandler(@Nullable ThingHandler handler) {
        if (handler instanceof ControllerHandler controllerHandler) {
            this.handler = controllerHandler;
        }
    }

    @Override
    public @Nullable ThingHandler getThingHandler() {
        return handler;
    }

    @RuleAction(label = MatterBindingConstants.THING_ACTION_LABEL_CONTROLLER_PAIR_DEVICE, description = MatterBindingConstants.THING_ACTION_DESC_CONTROLLER_PAIR_DEVICE)
    public @Nullable @ActionOutputs({
            @ActionOutput(name = "result", label = MatterBindingConstants.THING_ACTION_LABEL_CONTROLLER_PAIR_DEVICE_RESULT, type = "java.lang.String") }) String pairDevice(
                    @ActionInput(name = "code", label = MatterBindingConstants.THING_ACTION_LABEL_CONTROLLER_PAIR_DEVICE_CODE, description = MatterBindingConstants.THING_ACTION_DESC_CONTROLLER_PAIR_DEVICE_CODE, type = "java.lang.String") String code) {
        ControllerHandler handler = this.handler;
        if (handler != null) {
            try {
                handler.startScan(code).get();
                return translationService.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_DEVICE_ADDED);
            } catch (InterruptedException | ExecutionException e) {
                return handler.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_PAIRING_FAILED)
                        + e.getLocalizedMessage();
            }
        }
        return translationService.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_NO_HANDLER);
    }
}
|
@ -0,0 +1,162 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.actions;

import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
import java.util.concurrent.ExecutionException;
import java.util.stream.Collectors;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.eclipse.jdt.annotation.Nullable;
import org.openhab.binding.matter.internal.MatterBindingConstants;
import org.openhab.binding.matter.internal.client.dto.PairingCodes;
import org.openhab.binding.matter.internal.client.dto.cluster.gen.OperationalCredentialsCluster;
import org.openhab.binding.matter.internal.controller.MatterControllerClient;
import org.openhab.binding.matter.internal.handler.NodeHandler;
import org.openhab.binding.matter.internal.util.MatterVendorIDs;
import org.openhab.binding.matter.internal.util.TranslationService;
import org.openhab.core.automation.annotation.ActionInput;
import org.openhab.core.automation.annotation.ActionOutput;
import org.openhab.core.automation.annotation.ActionOutputs;
import org.openhab.core.automation.annotation.RuleAction;
import org.openhab.core.thing.binding.ThingActions;
import org.openhab.core.thing.binding.ThingActionsScope;
import org.openhab.core.thing.binding.ThingHandler;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ServiceScope;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.google.gson.JsonParseException;

/**
 * The {@link MatterNodeActions} exposes Matter related actions for the Matter Node Thing.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
@Component(scope = ServiceScope.PROTOTYPE, service = MatterNodeActions.class)
@ThingActionsScope(name = "matter")
public class MatterNodeActions implements ThingActions {
    public final Logger logger = LoggerFactory.getLogger(getClass());
    protected @Nullable NodeHandler handler;
    private final TranslationService translationService;

    @Activate
    public MatterNodeActions(@Reference TranslationService translationService) {
        this.translationService = translationService;
    }

    @Override
    public void setThingHandler(@Nullable ThingHandler handler) {
        if (handler instanceof NodeHandler nodeHandler) {
            this.handler = nodeHandler;
        }
    }

    @Override
    public @Nullable ThingHandler getThingHandler() {
        return handler;
    }

    @RuleAction(label = MatterBindingConstants.THING_ACTION_LABEL_NODE_GENERATE_NEW_PAIRING_CODE, description = MatterBindingConstants.THING_ACTION_DESC_NODE_GENERATE_NEW_PAIRING_CODE)
    public @ActionOutputs({
            @ActionOutput(name = "manualPairingCode", label = MatterBindingConstants.THING_ACTION_LABEL_NODE_GENERATE_NEW_PAIRING_CODE_MANUAL_PAIRING_CODE, type = "java.lang.String"),
            @ActionOutput(name = "qrPairingCode", label = MatterBindingConstants.THING_ACTION_LABEL_NODE_GENERATE_NEW_PAIRING_CODE_QR_PAIRING_CODE, type = "qrCode") }) Map<String, Object> generateNewPairingCode() {
        NodeHandler handler = this.handler;
        if (handler != null) {
            MatterControllerClient client = handler.getClient();
            if (client != null) {
                try {
                    PairingCodes code = client.enhancedCommissioningWindow(handler.getNodeId()).get();
                    return Map.of("manualPairingCode", code.manualPairingCode, "qrPairingCode", code.qrPairingCode);
                } catch (InterruptedException | ExecutionException | JsonParseException e) {
                    logger.debug("Failed to generate new pairing code for device {}", handler.getNodeId(), e);
                }
            }
        }
        return Map.of("manualPairingCode",
                translationService.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_NO_HANDLER),
                "qrPairingCode",
                translationService.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_NO_HANDLER));
    }

    @RuleAction(label = MatterBindingConstants.THING_ACTION_LABEL_NODE_DECOMMISSION, description = MatterBindingConstants.THING_ACTION_DESC_NODE_DECOMMISSION)
    public @ActionOutputs({
            @ActionOutput(name = "result", label = MatterBindingConstants.THING_ACTION_LABEL_NODE_DECOMMISSION_RESULT, type = "java.lang.String") }) String decommissionNode() {
        NodeHandler handler = this.handler;
        if (handler != null) {
            MatterControllerClient client = handler.getClient();
            if (client != null) {
                try {
                    client.removeNode(handler.getNodeId()).get();
                    return translationService.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_SUCCESS);
                } catch (InterruptedException | ExecutionException e) {
                    logger.debug("Failed to decommission device {}", handler.getNodeId(), e);
                    return Objects.requireNonNull(Optional.ofNullable(e.getLocalizedMessage()).orElse(e.toString()));
                }
            }
        }
        return translationService.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_NO_HANDLER);
    }

    @RuleAction(label = MatterBindingConstants.THING_ACTION_LABEL_NODE_GET_FABRICS, description = MatterBindingConstants.THING_ACTION_DESC_NODE_GET_FABRICS)
    public @ActionOutputs({
            @ActionOutput(name = "result", label = MatterBindingConstants.THING_ACTION_LABEL_NODE_GET_FABRICS_RESULT, type = "java.lang.String") }) String getFabrics() {
        NodeHandler handler = this.handler;
        if (handler != null) {
            MatterControllerClient client = handler.getClient();
            if (client != null) {
                try {
                    List<OperationalCredentialsCluster.FabricDescriptorStruct> fabrics = client
                            .getFabrics(handler.getNodeId()).get();
                    String result = fabrics.stream().map(fabric -> String.format("#%d %s (%s)", fabric.fabricIndex,
                            fabric.label, MatterVendorIDs.VENDOR_IDS.get(fabric.vendorId)))
                            .collect(Collectors.joining(", "));
                    return result.isEmpty()
                            ? translationService.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_NO_FABRICS)
                            : result;
                } catch (InterruptedException | ExecutionException | JsonParseException e) {
                    logger.debug("Failed to retrieve fabrics {}", handler.getNodeId(), e);
                    return Objects.requireNonNull(Optional.ofNullable(e.getLocalizedMessage()).orElse(e.toString()));
                }
            }
        }
        return translationService.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_NO_HANDLER);
    }

    @RuleAction(label = MatterBindingConstants.THING_ACTION_LABEL_NODE_REMOVE_FABRIC, description = MatterBindingConstants.THING_ACTION_DESC_NODE_REMOVE_FABRIC)
    public @ActionOutputs({
            @ActionOutput(name = "result", label = MatterBindingConstants.THING_ACTION_LABEL_NODE_REMOVE_FABRIC_RESULT, type = "java.lang.String") }) String removeFabric(
            @ActionInput(name = "indexNumber", label = MatterBindingConstants.THING_ACTION_LABEL_NODE_REMOVE_FABRIC_INDEX, description = MatterBindingConstants.THING_ACTION_DESC_NODE_REMOVE_FABRIC_INDEX) Integer indexNumber) {
        NodeHandler handler = this.handler;
        if (handler != null) {
            MatterControllerClient client = handler.getClient();
            if (client != null) {
                try {
                    client.removeFabric(handler.getNodeId(), indexNumber).get();
                    return translationService.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_SUCCESS);
                } catch (InterruptedException | ExecutionException e) {
                    logger.debug("Failed to remove fabric {} {} ", handler.getNodeId(), indexNumber, e);
                    return Objects.requireNonNull(Optional.ofNullable(e.getLocalizedMessage()).orElse(e.toString()));
                }
            }
        }
        return translationService.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_NO_HANDLER);
    }
}

@ -0,0 +1,303 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.actions;

import java.security.NoSuchAlgorithmException;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.eclipse.jdt.annotation.Nullable;
import org.openhab.binding.matter.internal.MatterBindingConstants;
import org.openhab.binding.matter.internal.controller.devices.converter.ThreadBorderRouterManagementConverter;
import org.openhab.binding.matter.internal.handler.NodeHandler;
import org.openhab.binding.matter.internal.util.ThreadDataset;
import org.openhab.binding.matter.internal.util.ThreadDataset.ThreadTimestamp;
import org.openhab.binding.matter.internal.util.TlvCodec;
import org.openhab.binding.matter.internal.util.TranslationService;
import org.openhab.core.automation.annotation.ActionInput;
import org.openhab.core.automation.annotation.ActionOutput;
import org.openhab.core.automation.annotation.ActionOutputs;
import org.openhab.core.automation.annotation.RuleAction;
import org.openhab.core.thing.binding.ThingActions;
import org.openhab.core.thing.binding.ThingActionsScope;
import org.openhab.core.thing.binding.ThingHandler;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ServiceScope;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * The {@link MatterOTBRActions} exposes Thread Border Router related actions
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
@Component(scope = ServiceScope.PROTOTYPE, service = MatterOTBRActions.class)
@ThingActionsScope(name = "matter-otbr")
public class MatterOTBRActions implements ThingActions {
    public final Logger logger = LoggerFactory.getLogger(getClass());

    protected @Nullable NodeHandler handler;
    private final TranslationService translationService;

    @Activate
    public MatterOTBRActions(@Reference TranslationService translationService) {
        this.translationService = translationService;
    }

    @Override
    public void setThingHandler(@Nullable ThingHandler handler) {
        if (handler instanceof NodeHandler nodeHandler) {
            this.handler = nodeHandler;
        }
    }

    @Override
    public @Nullable ThingHandler getThingHandler() {
        return handler;
    }

    @RuleAction(label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_LOAD_EXTERNAL_DATASET, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_LOAD_EXTERNAL_DATASET)
    public @ActionOutputs({
            @ActionOutput(name = "result", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_LOAD_EXTERNAL_DATASET_RESULT, type = "java.lang.String") }) String loadExternalOperationalDataset(
            @ActionInput(name = "dataset", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_LOAD_EXTERNAL_DATASET_DATASET, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_LOAD_EXTERNAL_DATASET_DATASET) String dataset) {
        NodeHandler handler = this.handler;
        if (handler != null) {
            ThreadBorderRouterManagementConverter converter = handler
                    .findConverterByType(ThreadBorderRouterManagementConverter.class);
            if (converter != null) {
                try {
                    ThreadDataset tds = null;
                    if (dataset.trim().startsWith("{")) {
                        tds = ThreadDataset.fromJson(dataset);
                        if (tds == null) {
                            return translationService
                                    .getTranslation(MatterBindingConstants.THING_ACTION_RESULT_INVALID_JSON);
                        }
                    } else {
                        tds = ThreadDataset.fromHex(dataset);
                    }
                    converter.updateThreadConfiguration(tds.toHex());
                    return translationService.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_SUCCESS);
                } catch (Exception e) {
                    logger.debug("Error setting dataset", e);
                    return "error: " + e.getMessage();
                }
            } else {
                return translationService.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_NO_CONVERTER);
            }
        } else {
            return translationService.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_NO_HANDLER);
        }
    }

    @RuleAction(label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_LOAD_ACTIVE_DATASET, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_LOAD_ACTIVE_DATASET)
    public @ActionOutputs({
            @ActionOutput(name = "result", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_LOAD_ACTIVE_DATASET_RESULT, type = "java.lang.String"),
            @ActionOutput(name = "dataset", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_LOAD_ACTIVE_DATASET_DATASET, type = "java.lang.String") }) Map<String, Object> loadActiveOperationalDataset() {
        NodeHandler handler = this.handler;
        if (handler != null) {
            ThreadBorderRouterManagementConverter converter = handler
                    .findConverterByType(ThreadBorderRouterManagementConverter.class);
            if (converter != null) {
                try {
                    String dataset = Objects.requireNonNull(converter.getActiveDataset().get(),
                            "Could not get active dataset");
                    converter.updateThreadConfiguration(dataset);
                    return Map.of("result",
                            translationService.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_SUCCESS),
                            "dataset", dataset);
                } catch (Exception e) {
                    logger.debug("Error setting dataset", e);
                    String message = Objects.requireNonNull(Optional.ofNullable(e.getMessage()).orElse(e.toString()));
                    return Map.of("error", message);
                }
            } else {
                return Map.of("error",
                        translationService.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_NO_CONVERTER));
            }
        } else {
            return Map.of("error",
                    translationService.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_NO_HANDLER));
        }
    }

    @RuleAction(label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_PUSH_DATASET, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_PUSH_DATASET)
    public @ActionOutputs({
            @ActionOutput(name = "result", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_PUSH_DATASET_RESULT, type = "java.lang.String") }) String pushOperationalDataSetHex(
            @ActionInput(name = "delay", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_PUSH_DATASET_DELAY, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_PUSH_DATASET_DELAY, defaultValue = "30000", required = true) @Nullable Long delay,
            @ActionInput(name = "generatePendingTime", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_PUSH_DATASET_GENERATE_TIME, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_PUSH_DATASET_GENERATE_TIME, defaultValue = "true", required = true) @Nullable Boolean generatePendingTime,
            @ActionInput(name = "incrementActiveTime", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_PUSH_DATASET_INCREMENT_TIME, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_PUSH_DATASET_INCREMENT_TIME, defaultValue = "1", required = true) @Nullable Integer incrementActiveTime) {
        NodeHandler handler = this.handler;
        if (handler == null) {
            return translationService.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_NO_HANDLER);
        }
        ThreadBorderRouterManagementConverter converter = handler
                .findConverterByType(ThreadBorderRouterManagementConverter.class);
        if (converter == null) {
            return translationService.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_NO_CONVERTER);
        }
        ThreadDataset tds = converter.datasetFromConfiguration();
        if (delay == null) {
            delay = 30000L;
        }
        // default to generating a new pending timestamp
        if (generatePendingTime == null || generatePendingTime.booleanValue()) {
            tds.setPendingTimestamp(ThreadTimestamp.now(false));
        }
        ThreadTimestamp ts = Objects
                .requireNonNull(tds.getActiveTimestampObject().orElse(new ThreadTimestamp(1, 0, false)));

        ts.setSeconds(ts.getSeconds() + (incrementActiveTime == null ? 1 : incrementActiveTime.intValue()));
        tds.setActiveTimestamp(ts);
        tds.setDelayTimer(delay);
        logger.debug("New dataset: {}", tds.toJson());
        String dataset = tds.toHex();
        logger.debug("New dataset hex: {}", dataset);
        try {
            converter.setPendingDataset(dataset).get();
            return translationService.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_SUCCESS) + ": "
                    + tds.toJson();
        } catch (Exception e) {
            logger.debug("Error setting pending dataset", e);
            return "error: " + e.getMessage();
        }
    }

    @RuleAction(label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_GENERATE_DATASET)
    public @ActionOutputs({
            @ActionOutput(name = "result", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_RESULT, type = "java.lang.String"),
            @ActionOutput(name = "datasetJson", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_JSON, type = "java.lang.String"),
            @ActionOutput(name = "datasetHex", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_HEX, type = "java.lang.String") }) Map<String, Object> generateOperationalDataset(
            @ActionInput(name = "save", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_SAVE, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_GENERATE_DATASET_SAVE, defaultValue = "false", required = true) @Nullable Boolean save,
            @ActionInput(name = "channel", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_CHANNEL, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_GENERATE_DATASET_CHANNEL, defaultValue = "22", required = true) @Nullable Integer channel,
            @ActionInput(name = "activeTimestampSeconds", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_TIMESTAMP_SECONDS, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_GENERATE_DATASET_TIMESTAMP_SECONDS, defaultValue = "1", required = true) @Nullable Long activeTimestampSeconds,
            @ActionInput(name = "activeTimestampTicks", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_TIMESTAMP_TICKS, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_GENERATE_DATASET_TIMESTAMP_TICKS, defaultValue = "0", required = true) @Nullable Integer activeTimestampTicks,
            @ActionInput(name = "activeTimestampAuthoritative", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_TIMESTAMP_AUTHORITATIVE, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_GENERATE_DATASET_TIMESTAMP_AUTHORITATIVE, defaultValue = "false", required = true) @Nullable Boolean activeTimestampAuthoritative,
            @ActionInput(name = "panId", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_PAN_ID, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_GENERATE_DATASET_PAN_ID, defaultValue = "4460", required = true) @Nullable Integer panId,
            @ActionInput(name = "extendedPanId", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_EXTENDED_PAN_ID, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_GENERATE_DATASET_EXTENDED_PAN_ID, defaultValue = "1111111122222222", required = true) @Nullable String extendedPanId,
            @ActionInput(name = "meshLocalPrefix", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_MESH_PREFIX, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_GENERATE_DATASET_MESH_PREFIX, defaultValue = "fd11:22::/64", required = true) @Nullable String meshLocalPrefix,
            @ActionInput(name = "networkName", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_NETWORK_NAME, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_GENERATE_DATASET_NETWORK_NAME, defaultValue = "openHAB-Thread", required = true) @Nullable String networkName,
            @ActionInput(name = "networkKey", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_NETWORK_KEY, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_GENERATE_DATASET_NETWORK_KEY, required = false) @Nullable String networkKey,
            @ActionInput(name = "passPhrase", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_PASSPHRASE, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_GENERATE_DATASET_PASSPHRASE, defaultValue = "j01Nme", required = true) @Nullable String passPhrase,
            @ActionInput(name = "rotationTime", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_ROTATION_TIME, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_GENERATE_DATASET_ROTATION_TIME, defaultValue = "672") @Nullable Integer rotationTime,
            @ActionInput(name = "obtainNetworkKey", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_OBTAIN_NETWORK_KEY, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_GENERATE_DATASET_OBTAIN_NETWORK_KEY, defaultValue = "true") @Nullable Boolean obtainNetworkKey,
            @ActionInput(name = "nativeCommissioning", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_NATIVE_COMMISSIONING, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_GENERATE_DATASET_NATIVE_COMMISSIONING, defaultValue = "true") @Nullable Boolean nativeCommissioning,
            @ActionInput(name = "routers", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_ROUTERS, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_GENERATE_DATASET_ROUTERS, defaultValue = "true") @Nullable Boolean routers,
            @ActionInput(name = "externalCommissioning", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_EXTERNAL_COMMISSIONING, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_GENERATE_DATASET_EXTERNAL_COMMISSIONING, defaultValue = "true") @Nullable Boolean externalCommissioning,
            @ActionInput(name = "commercialCommissioning", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_COMMERCIAL_COMMISSIONING, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_GENERATE_DATASET_COMMERCIAL_COMMISSIONING, defaultValue = "false") @Nullable Boolean commercialCommissioning,
            @ActionInput(name = "autonomousEnrollment", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_AUTONOMOUS_ENROLLMENT, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_GENERATE_DATASET_AUTONOMOUS_ENROLLMENT, defaultValue = "true") @Nullable Boolean autonomousEnrollment,
            @ActionInput(name = "networkKeyProvisioning", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_NETWORK_KEY_PROVISIONING, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_GENERATE_DATASET_NETWORK_KEY_PROVISIONING, defaultValue = "true") @Nullable Boolean networkKeyProvisioning,
            @ActionInput(name = "tobleLink", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_TOBLE_LINK, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_GENERATE_DATASET_TOBLE_LINK, defaultValue = "true") @Nullable Boolean tobleLink,
            @ActionInput(name = "nonCcmRouters", label = MatterBindingConstants.THING_ACTION_LABEL_OTBR_GENERATE_DATASET_NON_CCM_ROUTERS, description = MatterBindingConstants.THING_ACTION_DESC_OTBR_GENERATE_DATASET_NON_CCM_ROUTERS, defaultValue = "false") @Nullable Boolean nonCcmRouters) {
        NodeHandler handler = this.handler;
        if (handler == null) {
            return Map.of("error",
                    translationService.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_NO_HANDLER));
        }
        ThreadBorderRouterManagementConverter converter = handler
                .findConverterByType(ThreadBorderRouterManagementConverter.class);
        if (converter == null) {
            return Map.of("error",
                    translationService.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_NO_CONVERTER));
        }
        try {
            ThreadTimestamp timestamp = new ThreadTimestamp(1, 0, false);
            if (activeTimestampSeconds != null) {
                timestamp.setSeconds(activeTimestampSeconds.longValue());
            }
            if (activeTimestampTicks != null) {
                timestamp.setTicks(activeTimestampTicks.intValue());
            }
            if (activeTimestampAuthoritative != null) {
                timestamp.setAuthoritative(activeTimestampAuthoritative.booleanValue());
            }
            long channelMask = 134215680;
            if (channel == null) {
                channel = 22;
            }
            if (panId == null) {
                panId = 4460;
            }
            if (extendedPanId == null) {
                extendedPanId = "1111111122222222";
            }
            if (meshLocalPrefix == null) {
                meshLocalPrefix = "fd11:22::/64";
            }
            if (networkName == null) {
                networkName = "openHAB-Thread";
            }
            if (passPhrase == null) {
                passPhrase = "j01Nme";
            }
            if (networkKey == null) {
                try {
                    networkKey = TlvCodec.bytesToHex(ThreadDataset.generateMasterKey());
                } catch (NoSuchAlgorithmException e) {
                    logger.debug("Error generating master key", e);
                    return Map.of("error",
                            translationService
                                    .getTranslation(MatterBindingConstants.THING_ACTION_RESULT_ERROR_GENERATING_KEY)
                                    + ": " + e.getMessage());
                }
            }
            if (save == null) {
                save = false;
            }

            String pskc = TlvCodec.bytesToHex(ThreadDataset.generatePskc(passPhrase, networkName, extendedPanId));

            logger.debug(
                    "All values: channel: {}, panId: {}, extendedPanId: {}, meshLocalPrefix: {}, networkName: {}, networkKey: {}, pskc: {}",
                    channel, panId, extendedPanId, meshLocalPrefix, networkName, networkKey, pskc);

            ThreadDataset dataset = new ThreadDataset(timestamp, null, null, channel, channelMask, panId, networkName,
                    networkKey, extendedPanId, pskc, meshLocalPrefix, null);

            int rotationTimeValue = (rotationTime == null) ? 672 : rotationTime.intValue();
            dataset.setSecurityPolicyRotation(rotationTimeValue);
            dataset.setObtainNetworkKey(obtainNetworkKey != null ? obtainNetworkKey.booleanValue() : true);
            dataset.setNativeCommissioning(nativeCommissioning != null ? nativeCommissioning.booleanValue() : true);
            dataset.setRoutersEnabled(routers != null ? routers.booleanValue() : true);
            dataset.setCommercialCommissioning(
                    commercialCommissioning != null ? commercialCommissioning.booleanValue() : false);
            dataset.setExternalCommissioning(
                    externalCommissioning != null ? externalCommissioning.booleanValue() : true);
            dataset.setAutonomousEnrollment(autonomousEnrollment != null ? autonomousEnrollment.booleanValue() : true);
            dataset.setNetworkKeyProvisioning(
                    networkKeyProvisioning != null ? networkKeyProvisioning.booleanValue() : true);
            dataset.setToBleLink(tobleLink != null ? tobleLink.booleanValue() : true);
            dataset.setNonCcmRouters(nonCcmRouters != null ? nonCcmRouters.booleanValue() : false);

            String json = dataset.toJson();
            String hex = dataset.toHex();
            logger.debug("Generated dataset: {}", json);
            logger.debug("Generated dataset hex: {}", hex);
            if (save.booleanValue()) {
                converter.updateThreadConfiguration(hex);
            }
            return Map.of("result",
                    translationService.getTranslation(MatterBindingConstants.THING_ACTION_RESULT_SUCCESS),
                    "datasetJson", json, "datasetHex", hex);
        } catch (Exception e) {
            logger.debug("Error setting active dataset", e);
            return Map.of("error", translationService
                    .getTranslation(MatterBindingConstants.THING_ACTION_RESULT_ERROR_SETTING_DATASET));
        }
    }
}

@ -0,0 +1,577 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.bridge;

import java.io.File;
import java.io.IOException;
import java.util.Collection;
import java.util.Dictionary;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;
import java.util.Random;
import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.eclipse.jdt.annotation.Nullable;
import org.openhab.binding.matter.internal.bridge.devices.DeviceRegistry;
import org.openhab.binding.matter.internal.bridge.devices.GenericDevice;
import org.openhab.binding.matter.internal.client.MatterClientListener;
import org.openhab.binding.matter.internal.client.MatterWebsocketService;
import org.openhab.binding.matter.internal.client.dto.ws.AttributeChangedMessage;
import org.openhab.binding.matter.internal.client.dto.ws.BridgeCommissionState;
import org.openhab.binding.matter.internal.client.dto.ws.BridgeEventAttributeChanged;
import org.openhab.binding.matter.internal.client.dto.ws.BridgeEventMessage;
import org.openhab.binding.matter.internal.client.dto.ws.BridgeEventTriggered;
import org.openhab.binding.matter.internal.client.dto.ws.EventTriggeredMessage;
import org.openhab.binding.matter.internal.client.dto.ws.NodeDataMessage;
import org.openhab.binding.matter.internal.client.dto.ws.NodeStateMessage;
import org.openhab.core.OpenHAB;
import org.openhab.core.common.ThreadPoolManager;
import org.openhab.core.common.registry.RegistryChangeListener;
import org.openhab.core.config.core.ConfigurableService;
import org.openhab.core.config.core.Configuration;
import org.openhab.core.items.GenericItem;
import org.openhab.core.items.Item;
import org.openhab.core.items.ItemNotFoundException;
import org.openhab.core.items.ItemRegistry;
import org.openhab.core.items.ItemRegistryChangeListener;
import org.openhab.core.items.Metadata;
import org.openhab.core.items.MetadataKey;
import org.openhab.core.items.MetadataRegistry;
import org.osgi.framework.Constants;
import org.osgi.service.cm.ConfigurationAdmin;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;
import org.osgi.service.component.annotations.Modified;
import org.osgi.service.component.annotations.Reference;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.google.gson.JsonParseException;

/**
 * The {@link MatterBridge} is the main class for the Matter Bridge service.
 *
 * It is responsible for exposing a "Matter Bridge" server and exposing items as endpoints on the bridge.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
@Component(immediate = true, service = MatterBridge.class, configurationPid = MatterBridge.CONFIG_PID, property = Constants.SERVICE_PID
        + "=" + MatterBridge.CONFIG_PID)
@ConfigurableService(category = "io", label = "Matter Bridge", description_uri = MatterBridge.CONFIG_URI)
public class MatterBridge implements MatterClientListener {
    private final Logger logger = LoggerFactory.getLogger(MatterBridge.class);
    private static final String CONFIG_PID = "org.openhab.matter";
    private static final String CONFIG_URI = "io:matter";

    // Matter Bridge Device Info *Basic Information Cluster*
    private static final String VENDOR_NAME = "openHAB";
    private static final String DEVICE_NAME = "Bridge Device";
    private static final String PRODUCT_ID = "0001";
    private static final String VENDOR_ID = "65521";

    private final Map<String, GenericDevice> devices = new HashMap<>();

    private MatterBridgeClient client;
    private ItemRegistry itemRegistry;
    private MetadataRegistry metadataRegistry;
    private MatterWebsocketService websocketService;
    private ConfigurationAdmin configAdmin;
    private MatterBridgeSettings settings;

    private final ItemRegistryChangeListener itemRegistryChangeListener;
    private final RegistryChangeListener<Metadata> metadataRegistryChangeListener;
    private final ScheduledExecutorService scheduler = ThreadPoolManager
            .getScheduledPool(ThreadPoolManager.THREAD_POOL_NAME_COMMON);
    private boolean resetStorage = false;
    private @Nullable ScheduledFuture<?> modifyFuture;
    private @Nullable ScheduledFuture<?> reconnectFuture;
    private RunningState runningState = RunningState.Stopped;

    @Activate
    public MatterBridge(final @Reference ItemRegistry itemRegistry, final @Reference MetadataRegistry metadataRegistry,
            final @Reference MatterWebsocketService websocketService, final @Reference ConfigurationAdmin configAdmin) {
|
||||
this.itemRegistry = itemRegistry;
|
||||
this.metadataRegistry = metadataRegistry;
|
||||
this.websocketService = websocketService;
|
||||
this.configAdmin = configAdmin;
|
||||
this.client = new MatterBridgeClient();
|
||||
this.settings = new MatterBridgeSettings();
|
||||
|
||||
itemRegistryChangeListener = new ItemRegistryChangeListener() {
|
||||
private boolean handleMetadataChange(Item item) {
|
||||
if (metadataRegistry.get(new MetadataKey("matter", item.getUID())) != null) {
|
||||
updateModifyFuture();
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
@Override
|
||||
public void added(Item element) {
|
||||
handleMetadataChange(element);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void updated(Item oldElement, Item element) {
|
||||
if (!handleMetadataChange(oldElement)) {
|
||||
handleMetadataChange(element);
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void allItemsChanged(Collection<String> oldItemNames) {
|
||||
updateModifyFuture();
|
||||
}
|
||||
|
||||
@Override
|
||||
public void removed(Item element) {
|
||||
handleMetadataChange(element);
|
||||
}
|
||||
};
|
||||
this.itemRegistry.addRegistryChangeListener(itemRegistryChangeListener);
|
||||
|
||||
metadataRegistryChangeListener = new RegistryChangeListener<>() {
|
||||
private boolean handleMetadataChange(Metadata element) {
|
||||
if ("matter".equals(element.getUID().getNamespace())) {
|
||||
updateModifyFuture();
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
public void added(Metadata element) {
|
||||
handleMetadataChange(element);
|
||||
}
|
||||
|
||||
public void removed(Metadata element) {
|
||||
handleMetadataChange(element);
|
||||
}
|
||||
|
||||
public void updated(Metadata oldElement, Metadata element) {
|
||||
if (!handleMetadataChange(oldElement)) {
|
||||
handleMetadataChange(element);
|
||||
}
|
||||
}
|
||||
};
|
||||
this.metadataRegistry.addRegistryChangeListener(metadataRegistryChangeListener);
|
||||
}
|
||||
|
||||
@Activate
|
||||
public void activate(Map<String, Object> properties) {
|
||||
logger.debug("Activating Matter Bridge {}", properties);
|
||||
// if this returns true, we will wait for @Modified to be called after the config is persisted
|
||||
if (!parseInitialConfig(properties)) {
|
||||
this.settings = (new Configuration(properties)).as(MatterBridgeSettings.class);
|
||||
if (this.settings.enableBridge) {
|
||||
connectClient();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@Deactivate
|
||||
public void deactivate() {
|
||||
logger.debug("Deactivating Matter Bridge");
|
||||
itemRegistry.removeRegistryChangeListener(itemRegistryChangeListener);
|
||||
metadataRegistry.removeRegistryChangeListener(metadataRegistryChangeListener);
|
||||
stopClient();
|
||||
}
|
||||
|
||||
@Modified
|
||||
protected void modified(Map<String, Object> properties) {
|
||||
logger.debug("Modified Matter Bridge {}", properties);
|
||||
MatterBridgeSettings settings = (new Configuration(properties)).as(MatterBridgeSettings.class);
|
||||
boolean restart = false;
|
||||
if (this.settings.enableBridge != settings.enableBridge) {
|
||||
restart = true;
|
||||
}
|
||||
if (!this.settings.bridgeName.equals(settings.bridgeName)) {
|
||||
restart = true;
|
||||
}
|
||||
if (this.settings.discriminator != settings.discriminator) {
|
||||
restart = true;
|
||||
}
|
||||
if (this.settings.passcode != settings.passcode) {
|
||||
restart = true;
|
||||
}
|
||||
if (this.settings.port != settings.port) {
|
||||
restart = true;
|
||||
}
|
||||
if (settings.resetBridge) {
|
||||
this.resetStorage = true;
|
||||
settings.resetBridge = false;
|
||||
restart = true;
|
||||
}
|
||||
|
||||
this.settings = settings;
|
||||
|
||||
if (!settings.enableBridge) {
|
||||
stopClient();
|
||||
} else if (!client.isConnected() || restart) {
|
||||
stopClient();
|
||||
scheduleConnect();
|
||||
} else {
|
||||
manageCommissioningWindow(settings.openCommissioningWindow);
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onDisconnect(String reason) {
|
||||
stopClient();
|
||||
if (this.settings.enableBridge) {
|
||||
scheduleConnect();
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onConnect() {
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onReady() {
|
||||
registerItems();
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onEvent(NodeStateMessage message) {
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onEvent(AttributeChangedMessage message) {
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onEvent(EventTriggeredMessage message) {
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onEvent(NodeDataMessage message) {
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onEvent(BridgeEventMessage message) {
|
||||
if (message instanceof BridgeEventAttributeChanged attributeChanged) {
|
||||
GenericDevice d = devices.get(attributeChanged.data.endpointId);
|
||||
if (d != null) {
|
||||
d.handleMatterEvent(attributeChanged.data.clusterName, attributeChanged.data.attributeName,
|
||||
attributeChanged.data.data);
|
||||
}
|
||||
} else if (message instanceof BridgeEventTriggered bridgeEventTriggered) {
|
||||
switch (bridgeEventTriggered.data.eventName) {
|
||||
case "commissioningWindowOpen":
|
||||
updateConfig(Map.of("openCommissioningWindow", true));
|
||||
break;
|
||||
case "commissioningWindowClosed":
|
||||
updateConfig(Map.of("openCommissioningWindow", false));
|
||||
break;
|
||||
default:
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
public void restart() {
|
||||
stopClient();
|
||||
connectClient();
|
||||
}
|
||||
|
||||
public void allowCommissioning() {
|
||||
manageCommissioningWindow(true);
|
||||
}
|
||||
|
||||
public void resetStorage() {
|
||||
this.resetStorage = true;
|
||||
stopClient();
|
||||
connectClient();
|
||||
}
|
||||
|
||||
public String listFabrics() throws InterruptedException, ExecutionException {
|
||||
return client.getFabrics().get().toString();
|
||||
}
|
||||
|
||||
public void removeFabric(String fabricId) {
|
||||
try {
|
||||
client.removeFabric(Integer.parseInt(fabricId)).get();
|
||||
} catch (InterruptedException | ExecutionException e) {
|
||||
logger.debug("Could not remove fabric", e);
|
||||
}
|
||||
}
|
||||
|
||||
private synchronized void connectClient() {
|
||||
if (client.isConnected()) {
|
||||
logger.debug("Already Connected, returning");
|
||||
return;
|
||||
}
|
||||
|
||||
String folderName = OpenHAB.getUserDataFolder() + File.separator + "matter";
|
||||
File folder = new File(folderName);
|
||||
if (!folder.exists()) {
|
||||
folder.mkdirs();
|
||||
}
|
||||
|
||||
Map<String, String> paramsMap = new HashMap<>();
|
||||
|
||||
paramsMap.put("service", "bridge");
|
||||
paramsMap.put("storagePath", folder.getAbsolutePath());
|
||||
|
||||
// default values the bridge exposes to clients
|
||||
paramsMap.put("deviceName", DEVICE_NAME);
|
||||
paramsMap.put("vendorName", VENDOR_NAME);
|
||||
paramsMap.put("vendorId", VENDOR_ID);
|
||||
paramsMap.put("productId", PRODUCT_ID);
|
||||
|
||||
paramsMap.put("productName", settings.bridgeName);
|
||||
paramsMap.put("passcode", String.valueOf(settings.passcode));
|
||||
paramsMap.put("discriminator", String.valueOf(settings.discriminator));
|
||||
paramsMap.put("port", String.valueOf(settings.port));
|
||||
|
||||
client.addListener(this);
|
||||
client.connectWhenReady(this.websocketService, paramsMap);
|
||||
}
|
||||
|
||||
private void stopClient() {
|
||||
logger.debug("Stopping Matter Bridge Client");
|
||||
cancelConnect();
|
||||
updateRunningState(RunningState.Stopped, null);
|
||||
ScheduledFuture<?> modifyFuture = this.modifyFuture;
|
||||
if (modifyFuture != null) {
|
||||
modifyFuture.cancel(true);
|
||||
}
|
||||
client.removeListener(this);
|
||||
client.disconnect();
|
||||
devices.values().forEach(GenericDevice::dispose);
|
||||
devices.clear();
|
||||
}
|
||||
|
||||
private void scheduleConnect() {
|
||||
cancelConnect();
|
||||
this.reconnectFuture = scheduler.schedule(this::connectClient, 5, TimeUnit.SECONDS);
|
||||
}
|
||||
|
||||
private void cancelConnect() {
|
||||
ScheduledFuture<?> reconnectFuture = this.reconnectFuture;
|
||||
if (reconnectFuture != null) {
|
||||
reconnectFuture.cancel(true);
|
||||
}
|
||||
}
|
||||
|
||||
private boolean parseInitialConfig(Map<String, Object> properties) {
|
||||
logger.debug("Parse Config Matter Bridge");
|
||||
|
||||
Dictionary<String, Object> props = null;
|
||||
org.osgi.service.cm.Configuration config = null;
|
||||
|
||||
try {
|
||||
config = configAdmin.getConfiguration(MatterBridge.CONFIG_PID);
|
||||
props = config.getProperties();
|
||||
} catch (IOException e) {
|
||||
logger.warn("cannot retrieve config admin {}", e.getMessage());
|
||||
}
|
||||
|
||||
if (props == null) { // if null, the configuration is new
|
||||
props = new Hashtable<>();
|
||||
}
|
||||
|
||||
// A discriminator uniquely identifies a Matter device on the IPV6 network, 12-bit integer (0-4095)
|
||||
int discriminator = -1;
|
||||
@Nullable
|
||||
Object discriminatorProp = props.get("discriminator");
|
||||
if (discriminatorProp instanceof String discriminatorString) {
|
||||
try {
|
||||
discriminator = Integer.parseInt(discriminatorString);
|
||||
} catch (NumberFormatException e) {
|
||||
logger.debug("Could not parse discriminator {}", discriminatorString);
|
||||
}
|
||||
} else if (discriminatorProp instanceof Integer discriminatorInteger) {
|
||||
discriminator = discriminatorInteger;
|
||||
}
|
||||
|
||||
// randomly create one if not set
|
||||
if (discriminator < 0) {
|
||||
Random random = new Random();
|
||||
discriminator = random.nextInt(4096);
|
||||
}
|
||||
|
||||
props.put("discriminator", discriminator);
|
||||
|
||||
// this should never be persisted true, temporary settings
|
||||
props.put("resetBridge", false);
|
||||
|
||||
boolean changed = false;
|
||||
if (config != null) {
|
||||
try {
|
||||
changed = config.updateIfDifferent(props);
|
||||
} catch (IOException e) {
|
||||
logger.warn("cannot update configuration {}", e.getMessage());
|
||||
}
|
||||
}
|
||||
return changed;
|
||||
}
|
||||
|
||||
private synchronized void registerItems() {
|
||||
try {
|
||||
logger.debug("Initializing bridge, resetStorage: {}", resetStorage);
|
||||
client.initializeBridge(resetStorage).get();
|
||||
if (resetStorage) {
|
||||
resetStorage = false;
|
||||
updateConfig(Map.of("resetBridge", false));
|
||||
}
|
||||
} catch (InterruptedException | ExecutionException e) {
|
||||
logger.debug("Could not initialize endpoints", e);
|
||||
updateRunningState(RunningState.Error, e.getMessage());
|
||||
return;
|
||||
}
|
||||
|
||||
updateRunningState(RunningState.Starting, null);
|
||||
|
||||
// clear out any existing devices
|
||||
devices.values().forEach(GenericDevice::dispose);
|
||||
devices.clear();
|
||||
|
||||
for (Metadata metadata : metadataRegistry.getAll()) {
|
||||
final MetadataKey uid = metadata.getUID();
|
||||
if ("matter".equals(uid.getNamespace())) {
|
||||
try {
|
||||
logger.debug("Metadata {}", metadata);
|
||||
if (devices.containsKey(uid.getItemName())) {
|
||||
logger.debug("Updating item {}", uid.getItemName());
|
||||
}
|
||||
final GenericItem item = (GenericItem) itemRegistry.getItem(uid.getItemName());
|
||||
String deviceType = metadata.getValue();
|
||||
String[] parts = deviceType.split(",");
|
||||
for (String part : parts) {
|
||||
GenericDevice device = DeviceRegistry.createDevice(part.trim(), metadataRegistry, client, item);
|
||||
if (device != null) {
|
||||
try {
|
||||
device.registerDevice().get();
|
||||
logger.debug("Registered item {} with device type {}", item.getName(),
|
||||
device.deviceType());
|
||||
devices.put(item.getName(), device);
|
||||
} catch (InterruptedException | ExecutionException e) {
|
||||
logger.debug("Could not register device with bridge", e);
|
||||
updateRunningState(RunningState.Error, e.getMessage());
|
||||
device.dispose();
|
||||
devices.values().forEach(GenericDevice::dispose);
|
||||
devices.clear();
|
||||
return;
|
||||
}
|
||||
break;
|
||||
}
|
||||
}
|
||||
} catch (ItemNotFoundException e) {
|
||||
logger.debug("Could not find item {}", uid.getItemName(), e);
|
||||
}
|
||||
}
|
||||
}
|
||||
if (devices.isEmpty()) {
|
||||
logger.info("No devices found to register with bridge, not starting bridge");
|
||||
updateRunningState(RunningState.Stopped, "No items found with matter metadata");
|
||||
return;
|
||||
}
|
||||
try {
|
||||
client.startBridge().get();
|
||||
updateRunningState(RunningState.Running, null);
|
||||
updatePairingCodes();
|
||||
} catch (InterruptedException | ExecutionException e) {
|
||||
logger.debug("Could not start bridge", e);
|
||||
}
|
||||
}
|
||||
|
||||
private void manageCommissioningWindow(boolean open) {
|
||||
if (runningState != RunningState.Running) {
|
||||
return;
|
||||
}
|
||||
if (open) {
|
||||
try {
|
||||
client.openCommissioningWindow().get();
|
||||
} catch (CancellationException | InterruptedException | ExecutionException e) {
|
||||
logger.debug("Could not open commissioning window", e);
|
||||
}
|
||||
} else {
|
||||
try {
|
||||
client.closeCommissioningWindow().get();
|
||||
} catch (CancellationException | InterruptedException | ExecutionException e) {
|
||||
logger.debug("Could not close commissioning window", e);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
private void updatePairingCodes() {
|
||||
try {
|
||||
BridgeCommissionState state = client.getCommissioningState().get();
|
||||
updateConfig(Map.of("manualPairingCode", state.pairingCodes.manualPairingCode, "qrCode",
|
||||
state.pairingCodes.qrPairingCode, "openCommissioningWindow", state.commissioningWindowOpen));
|
||||
} catch (CancellationException | InterruptedException | ExecutionException | JsonParseException e) {
|
||||
logger.debug("Could not query codes", e);
|
||||
}
|
||||
}
|
||||
|
||||
private void updateConfig(Map<String, Object> entries) {
|
||||
try {
|
||||
org.osgi.service.cm.Configuration config = configAdmin.getConfiguration(MatterBridge.CONFIG_PID);
|
||||
Dictionary<String, Object> props = config.getProperties();
|
||||
if (props == null) {
|
||||
return;
|
||||
}
|
||||
entries.forEach((k, v) -> props.put(k, v));
|
||||
// if this updates, it will trigger a @Modified call
|
||||
config.updateIfDifferent(props);
|
||||
} catch (IOException e) {
|
||||
logger.debug("Could not load configuration", e);
|
||||
}
|
||||
}
|
||||
|
||||
private void updateRunningState(RunningState newState, @Nullable String message) {
|
||||
runningState = newState;
|
||||
updateConfig(Map.of("runningState", runningState.toString() + (message != null ? ": " + message : "")));
|
||||
}
|
||||
|
||||
/**
|
||||
* This should be called by changes to items or metadata
|
||||
*/
|
||||
private void updateModifyFuture() {
|
||||
// if the bridge is not enabled, we don't need to update the future
|
||||
if (!settings.enableBridge) {
|
||||
return;
|
||||
}
|
||||
ScheduledFuture<?> modifyFuture = this.modifyFuture;
|
||||
if (modifyFuture != null) {
|
||||
modifyFuture.cancel(true);
|
||||
}
|
||||
this.modifyFuture = scheduler.schedule(this::registerItems, 5, TimeUnit.SECONDS);
|
||||
}
|
||||
|
||||
enum RunningState {
|
||||
Stopped("Stopped"),
|
||||
Starting("Starting"),
|
||||
Running("Running"),
|
||||
Error("Error");
|
||||
|
||||
private final String runningState;
|
||||
|
||||
RunningState(String runningState) {
|
||||
this.runningState = runningState;
|
||||
}
|
||||
|
||||
@Override
|
||||
public String toString() {
|
||||
return runningState;
|
||||
}
|
||||
}
|
||||
}
|
|
@ -0,0 +1,104 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.bridge;

import java.util.Map;
import java.util.concurrent.CompletableFuture;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.openhab.binding.matter.internal.client.MatterWebsocketClient;
import org.openhab.binding.matter.internal.client.dto.ws.BridgeCommissionState;

import com.google.gson.JsonElement;
import com.google.gson.JsonParseException;

/**
 * The {@link MatterBridgeClient} is a client for the Matter Bridge service.
 *
 * It is responsible for sending messages to the Matter Bridge websocket server and receiving responses.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
public class MatterBridgeClient extends MatterWebsocketClient {

    public CompletableFuture<String> addEndpoint(String deviceType, String id, String nodeLabel, String productName,
            String productLabel, String serialNumber, Map<String, Map<String, Object>> attributeMap) {
        CompletableFuture<JsonElement> future = sendMessage("bridge", "addEndpoint",
                new Object[] { deviceType, id, nodeLabel, productName, productLabel, serialNumber, attributeMap });
        return future.thenApply(obj -> obj.toString());
    }

    public CompletableFuture<Void> setEndpointState(String endpointId, String clusterName, String attributeName,
            Object state) {
        CompletableFuture<JsonElement> future = sendMessage("bridge", "setEndpointState",
                new Object[] { endpointId, clusterName, attributeName, state });
        return future.thenAccept(obj -> {
            // Do nothing, just complete the future
        });
    }

    public CompletableFuture<Void> initializeBridge(boolean resetStorage) {
        CompletableFuture<JsonElement> future = sendMessage("bridge", "initializeBridge",
                new Object[] { resetStorage });
        return future.thenAccept(obj -> {
            // Do nothing, just complete the future
        });
    }

    public CompletableFuture<Void> startBridge() {
        CompletableFuture<JsonElement> future = sendMessage("bridge", "startBridge", new Object[0]);
        return future.thenAccept(obj -> {
            // Do nothing, just complete the future
        });
    }

    public CompletableFuture<BridgeCommissionState> getCommissioningState() {
        CompletableFuture<JsonElement> future = sendMessage("bridge", "getCommissioningState", new Object[0]);
        return future.thenApply(obj -> {
            BridgeCommissionState state = gson.fromJson(obj, BridgeCommissionState.class);
            if (state == null) {
                throw new JsonParseException("Could not deserialize commissioning state");
            }
            return state;
        });
    }

    public CompletableFuture<Void> openCommissioningWindow() {
        CompletableFuture<JsonElement> future = sendMessage("bridge", "openCommissioningWindow", new Object[0]);
        return future.thenAccept(obj -> {
            // Do nothing, just complete the future
        });
    }

    public CompletableFuture<Void> closeCommissioningWindow() {
        CompletableFuture<JsonElement> future = sendMessage("bridge", "closeCommissioningWindow", new Object[0]);
        return future.thenAccept(obj -> {
            // Do nothing, just complete the future
        });
    }

    public CompletableFuture<Void> removeFabric(int fabricIndex) {
        CompletableFuture<JsonElement> future = sendMessage("bridge", "removeFabric", new Object[] { fabricIndex });
        return future.thenAccept(obj -> {
            // Do nothing, just complete the future
        });
    }

    public CompletableFuture<String> getFabrics() {
        CompletableFuture<JsonElement> future = sendMessage("bridge", "getFabrics", new Object[0]);
        return future.thenApply(obj -> obj.toString());
    }
}
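The client methods above all return `CompletableFuture`s resolved by a single websocket request/response exchange, and callers such as `MatterBridge.registerItems()` sequence them with blocking `get()` calls or chaining. A minimal self-contained sketch of that composition style (with a stubbed `sendMessage` standing in for the real websocket call, so the names and the "ok" payload here are illustrative only):

```java
import java.util.concurrent.CompletableFuture;

public class BridgeCallChain {
    // Stand-in for MatterWebsocketClient.sendMessage: the real method performs a
    // websocket request and completes the future when the JSON response arrives.
    static CompletableFuture<String> sendMessage(String namespace, String function) {
        return CompletableFuture.completedFuture(namespace + ":" + function + ":ok");
    }

    public static void main(String[] args) {
        // initializeBridge -> startBridge -> getCommissioningState, each step
        // starting only after the previous response has been received
        String result = sendMessage("bridge", "initializeBridge")
                .thenCompose(r -> sendMessage("bridge", "startBridge"))
                .thenCompose(r -> sendMessage("bridge", "getCommissioningState"))
                .join();
        System.out.println(result); // prints "bridge:getCommissioningState:ok"
    }
}
```

Chaining with `thenCompose` keeps the steps sequential without blocking a thread between requests, which is why the client exposes futures rather than synchronous calls.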
@ -0,0 +1,40 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.bridge;

import org.eclipse.jdt.annotation.NonNullByDefault;

/**
 * The {@link MatterBridgeSettings} is the settings configuration for the Matter Bridge service.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
public class MatterBridgeSettings {
    public boolean enableBridge = true;
    public String runningState = "Stopped";
    public String bridgeName = "openHAB";
    public int port = 5540;
    public int passcode = 20202021;
    public int discriminator = -1;
    public String qrCode = "";
    public String manualPairingCode = "";
    public boolean resetBridge = false;
    public boolean openCommissioningWindow = false;

    @Override
    public String toString() {
        return "MatterBridgeSettings [name=" + bridgeName + ", port=" + port + ", passcode=" + passcode
                + ", discriminator=" + discriminator + ", qrCode=" + qrCode + ", manualPairingCode=" + manualPairingCode
                + ", resetBridge=" + resetBridge + ", openCommissioningWindow=" + openCommissioningWindow + "]";
    }
}
@ -0,0 +1,240 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.bridge.devices;

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.eclipse.jdt.annotation.Nullable;
import org.openhab.binding.matter.internal.bridge.MatterBridgeClient;
import org.openhab.binding.matter.internal.client.dto.cluster.gen.ColorControlCluster;
import org.openhab.binding.matter.internal.client.dto.cluster.gen.LevelControlCluster;
import org.openhab.binding.matter.internal.client.dto.cluster.gen.OnOffCluster;
import org.openhab.binding.matter.internal.util.ValueUtils;
import org.openhab.core.items.GenericItem;
import org.openhab.core.items.Item;
import org.openhab.core.items.MetadataRegistry;
import org.openhab.core.library.items.ColorItem;
import org.openhab.core.library.types.DecimalType;
import org.openhab.core.library.types.HSBType;
import org.openhab.core.library.types.OnOffType;
import org.openhab.core.library.types.PercentType;
import org.openhab.core.types.State;
import org.openhab.core.util.ColorUtil;

/**
 * The {@link ColorDevice} is a device that represents a Color Light.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
public class ColorDevice extends GenericDevice {
    // how long to wait (at most) for the device to turn on before updating the HSB values
    private static final int ONOFF_DELAY_MILLIS = 500;
    // the onFuture is used to wait for the device to turn on before updating the HSB values
    private CompletableFuture<Void> onFuture = CompletableFuture.completedFuture(null);
    // lastH, lastS and lastB cache the most recent HSB values as they come in from the device
    private @Nullable DecimalType lastH;
    private @Nullable PercentType lastS;
    private @Nullable PercentType lastB;

    public ColorDevice(MetadataRegistry metadataRegistry, MatterBridgeClient client, GenericItem item) {
        super(metadataRegistry, client, item);
    }

    @Override
    public String deviceType() {
        return "ColorLight";
    }

    @Override
    protected MatterDeviceOptions activate() {
        primaryItem.addStateChangeListener(this);
        MetaDataMapping primaryMetadata = metaDataMapping(primaryItem);
        Map<String, Object> attributeMap = primaryMetadata.getAttributeOptions();
        if (primaryItem instanceof ColorItem colorItem) {
            HSBType hsbType = colorItem.getStateAs(HSBType.class);
            if (hsbType == null) {
                hsbType = new HSBType();
            }
            Integer currentHue = toHue(hsbType.getHue());
            Integer currentSaturation = toSaturation(hsbType.getSaturation());
            Integer currentLevel = toBrightness(hsbType.getBrightness());
            attributeMap.put(LevelControlCluster.CLUSTER_PREFIX + "." + LevelControlCluster.ATTRIBUTE_CURRENT_LEVEL,
                    Math.max(currentLevel, 1));
            attributeMap.put(ColorControlCluster.CLUSTER_PREFIX + "." + ColorControlCluster.ATTRIBUTE_CURRENT_HUE,
                    currentHue);
            attributeMap.put(
                    ColorControlCluster.CLUSTER_PREFIX + "." + ColorControlCluster.ATTRIBUTE_CURRENT_SATURATION,
                    currentSaturation);
            attributeMap.put(OnOffCluster.CLUSTER_PREFIX + "." + OnOffCluster.ATTRIBUTE_ON_OFF, currentLevel > 0);
        }

        return new MatterDeviceOptions(attributeMap, primaryMetadata.label);
    }

    @Override
    public void dispose() {
        primaryItem.removeStateChangeListener(this);
    }

    @Override
    public void handleMatterEvent(String clusterName, String attributeName, Object data) {
        Double value = Double.valueOf(0);
        if (data instanceof Double d) {
            value = d;
        }
        switch (attributeName) {
            case OnOffCluster.ATTRIBUTE_ON_OFF:
                updateOnOff(Boolean.valueOf(data.toString()));
                break;
            case LevelControlCluster.ATTRIBUTE_CURRENT_LEVEL:
                updateBrightness(ValueUtils.levelToPercent(value.intValue()));
                break;
            // currentHue and currentSaturation are always updated together sequentially in the matter.js bridge code
            case ColorControlCluster.ATTRIBUTE_CURRENT_HUE:
                float hueValue = value == 0 ? 0.0f : value.floatValue() * 360.0f / 254.0f;
                lastH = new DecimalType(Float.valueOf(hueValue).toString());
                updateHueSaturation();
                break;
            case ColorControlCluster.ATTRIBUTE_CURRENT_SATURATION:
                float saturationValue = value == 0 ? 0.0f : value.floatValue() / 254.0f * 100.0f;
                lastS = new PercentType(Float.valueOf(saturationValue).toString());
                updateHueSaturation();
                break;
            case ColorControlCluster.ATTRIBUTE_COLOR_TEMPERATURE_MIREDS:
                Double kelvin = 1e6 / (Double) data;
                HSBType ctHSB = ColorUtil.xyToHsb(ColorUtil.kelvinToXY(Math.max(1000, Math.min(kelvin, 10000))));
                lastH = ctHSB.getHue();
                lastS = ctHSB.getSaturation();
                updateHueSaturation();
                break;
            default:
                break;
        }
    }

    @Override
    public void updateState(Item item, State state) {
        if (state instanceof HSBType hsb) {
            if (hsb.getBrightness().intValue() == 0) {
                setEndpointState(OnOffCluster.CLUSTER_PREFIX, OnOffCluster.ATTRIBUTE_ON_OFF, false);
            } else {
                // since we are on, complete the future
                completeOnFuture();
                lastB = null; // reset the cached brightness
                setEndpointState(OnOffCluster.CLUSTER_PREFIX, OnOffCluster.ATTRIBUTE_ON_OFF, true);
                setEndpointState(LevelControlCluster.CLUSTER_PREFIX, LevelControlCluster.ATTRIBUTE_CURRENT_LEVEL,
                        toBrightness(hsb.getBrightness()));
            }
            setEndpointState(ColorControlCluster.CLUSTER_PREFIX, ColorControlCluster.ATTRIBUTE_CURRENT_HUE,
                    toHue(hsb.getHue()));
            setEndpointState(ColorControlCluster.CLUSTER_PREFIX, ColorControlCluster.ATTRIBUTE_CURRENT_SATURATION,
                    toSaturation(hsb.getSaturation()));
        }
    }

    private void updateBrightness(PercentType brightness) {
        if (primaryItem instanceof ColorItem colorItem) {
            lastB = brightness;
            colorItem.send(brightness);
        }
    }

    private synchronized void updateOnOff(boolean onOff) {
        if (primaryItem instanceof ColorItem colorItem) {
            if (!onFuture.isDone()) {
                onFuture.cancel(true);
            }
            // if we are turning on, we need to wait for the device to turn on before updating the HSB, because
            // brightness stays 0 until the device has turned on (and we need to query this state)
            if (onOff) {
                onFuture = new CompletableFuture<>();
                onFuture.orTimeout(ONOFF_DELAY_MILLIS, TimeUnit.MILLISECONDS);
                onFuture.whenComplete((v, ex) -> {
                    if (lastH != null && lastS != null) {
                        // if these are not null, we need to update the HSB now
                        updateHSB();
                    }
                });
            }
            colorItem.send(OnOffType.from(onOff));
        }
    }

    private synchronized void updateHSB() {
        if (primaryItem instanceof ColorItem colorItem) {
            HSBType hsb = colorItem.getStateAs(HSBType.class);
            if (hsb == null) {
                return;
            }

            DecimalType lastH = this.lastH;
            PercentType lastS = this.lastS;
            PercentType lastB = this.lastB;

            if (lastH == null && lastS == null) {
                return;
            }

            DecimalType h = hsb.getHue();
            PercentType s = hsb.getSaturation();
            PercentType b = hsb.getBrightness();

            if (lastH != null) {
                h = lastH;
            }
            if (lastS != null) {
                s = lastS;
            }
            if (lastB != null) {
                b = lastB;
            }
            // the device is still off but should not be, just set the brightness to 100%
            if (b.intValue() == 0) {
                b = new PercentType(100);
            }
            colorItem.send(new HSBType(h, s, b));
        }
        this.lastH = null;
        this.lastS = null;
    }

    private void updateHueSaturation() {
        if (onFuture.isDone() && lastH != null && lastS != null) {
            // we have OnOff and both Hue and Saturation, so update
            updateHSB();
        }
    }

    private synchronized void completeOnFuture() {
        if (!onFuture.isDone()) {
            onFuture.complete(null);
        }
    }

    private Integer toHue(DecimalType h) {
        return Math.round(h.floatValue() * 254.0f / 360.0f);
    }

    private Integer toSaturation(PercentType s) {
        return Math.round(s.floatValue() * 254.0f / 100.0f);
    }

    private Integer toBrightness(PercentType b) {
        return ValueUtils.percentToLevel(b);
    }
}
@ -0,0 +1,88 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.bridge.devices;

import java.util.Map;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.openhab.binding.matter.internal.bridge.MatterBridgeClient;
import org.openhab.binding.matter.internal.client.dto.cluster.gen.BooleanStateCluster;
import org.openhab.core.items.GenericItem;
import org.openhab.core.items.Item;
import org.openhab.core.items.MetadataRegistry;
import org.openhab.core.library.types.OnOffType;
import org.openhab.core.library.types.OpenClosedType;
import org.openhab.core.types.State;

/**
 * The {@link ContactSensorDevice} is a device that represents a Contact Sensor.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
public class ContactSensorDevice extends GenericDevice {

    public ContactSensorDevice(MetadataRegistry metadataRegistry, MatterBridgeClient client, GenericItem item) {
        super(metadataRegistry, client, item);
    }

    @Override
    public String deviceType() {
        return "ContactSensor";
    }

    @Override
    public void handleMatterEvent(String clusterName, String attributeName, Object data) {
    }

    @Override
    protected MatterDeviceOptions activate() {
        primaryItem.addStateChangeListener(this);
        MetaDataMapping primaryMetadata = metaDataMapping(primaryItem);
        Map<String, Object> attributeMap = primaryMetadata.getAttributeOptions();
        attributeMap.put(BooleanStateCluster.CLUSTER_PREFIX + "." + BooleanStateCluster.ATTRIBUTE_STATE_VALUE,
                contactState(primaryItem.getState()));
        return new MatterDeviceOptions(attributeMap, primaryMetadata.label);
    }

    @Override
    public void dispose() {
        primaryItem.removeStateChangeListener(this);
    }

    @Override
    public void updateState(Item item, State state) {
        setEndpointState(BooleanStateCluster.CLUSTER_PREFIX, BooleanStateCluster.ATTRIBUTE_STATE_VALUE,
                contactState(state));
    }

    /**
     * Matter Device Library Specification R1.3
     * 7.1.4.2. Boolean State Cluster
     * True: Closed or contact
     * False: Open or no contact
     *
     * @param state the openHAB item state to map
     * @return true when closed (contact), false when open (no contact)
     */
    private boolean contactState(State state) {
        boolean open = true;
        if (state instanceof OnOffType onOffType) {
            open = onOffType == OnOffType.ON;
        }
        if (state instanceof OpenClosedType openClosedType) {
            open = openClosedType == OpenClosedType.OPEN;
        }
        return !open;
    }
}
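The `contactState` mapping above inverts openHAB's open/closed semantics because the Matter Boolean State cluster reports `true` for closed/contact. A minimal standalone sketch of that inversion, using hypothetical stand-in enums rather than the real openHAB `OnOffType`/`OpenClosedType` classes:

```java
// Illustrative re-implementation of the contactState mapping; OnOff and
// OpenClosed are stand-ins for the openHAB types, not the real classes.
public class ContactStateSketch {
    enum OnOff { ON, OFF }
    enum OpenClosed { OPEN, CLOSED }

    // Matter Boolean State: true = closed/contact, false = open/no contact
    static boolean contactState(Object state) {
        boolean open = true;
        if (state instanceof OnOff onOff) {
            open = onOff == OnOff.ON; // ON is treated as "open"
        }
        if (state instanceof OpenClosed openClosed) {
            open = openClosed == OpenClosed.OPEN;
        }
        return !open;
    }

    public static void main(String[] args) {
        System.out.println(contactState(OpenClosed.CLOSED)); // true
        System.out.println(contactState(OpenClosed.OPEN));   // false
        System.out.println(contactState(OnOff.OFF));         // true
    }
}
```

Note the pattern-matching `instanceof` used here (and throughout the binding) requires Java 16+, which the project's Java 17 baseline covers.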
@ -0,0 +1,68 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.bridge.devices;

import java.lang.reflect.Constructor;
import java.util.HashMap;
import java.util.Map;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.eclipse.jdt.annotation.Nullable;
import org.openhab.binding.matter.internal.bridge.MatterBridgeClient;
import org.openhab.core.items.GenericItem;
import org.openhab.core.items.MetadataRegistry;

/**
 * The {@link DeviceRegistry} is a registry of device types that are supported by the Matter Bridge service.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
public class DeviceRegistry {
    private static final Map<String, Class<? extends GenericDevice>> DEVICE_TYPES = new HashMap<>();

    static {
        registerDevice("OnOffLight", OnOffLightDevice.class);
        registerDevice("OnOffPlugInUnit", OnOffPlugInUnitDevice.class);
        registerDevice("DimmableLight", DimmableLightDevice.class);
        registerDevice("Thermostat", ThermostatDevice.class);
        registerDevice("WindowCovering", WindowCoveringDevice.class);
        registerDevice("DoorLock", DoorLockDevice.class);
        registerDevice("TemperatureSensor", TemperatureSensorDevice.class);
        registerDevice("HumiditySensor", HumiditySensorDevice.class);
        registerDevice("OccupancySensor", OccupancySensorDevice.class);
        registerDevice("ContactSensor", ContactSensorDevice.class);
        registerDevice("ColorLight", ColorDevice.class);
        registerDevice("Fan", FanDevice.class);
    }

    private static void registerDevice(String deviceType, Class<? extends GenericDevice> device) {
        DEVICE_TYPES.put(deviceType, device);
    }

    public static @Nullable GenericDevice createDevice(String deviceType, MetadataRegistry metadataRegistry,
            MatterBridgeClient client, GenericItem item) {
        Class<? extends GenericDevice> clazz = DEVICE_TYPES.get(deviceType);
        if (clazz != null) {
            try {
                Class<?>[] constructorParameterTypes = new Class<?>[] { MetadataRegistry.class,
                        MatterBridgeClient.class, GenericItem.class };
                Constructor<? extends GenericDevice> constructor = clazz.getConstructor(constructorParameterTypes);
                return constructor.newInstance(metadataRegistry, client, item);
            } catch (Exception e) {
                // ignore and fall through to return null
            }
        }
        return null;
    }
}
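`DeviceRegistry.createDevice` above resolves a device-type string to a class and instantiates it reflectively through a fixed three-argument constructor signature, returning `null` when the type is unknown or construction fails. A simplified sketch of that registry pattern with hypothetical stand-in types (`Device`, `Lamp` are not part of the binding):

```java
import java.lang.reflect.Constructor;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the DeviceRegistry pattern: a string key resolves to a
// class, which is instantiated reflectively via a known constructor shape.
public class RegistrySketch {
    interface Device { String name(); }

    static class Lamp implements Device {
        private final String label;
        public Lamp(String label) { this.label = label; }
        public String name() { return "Lamp:" + label; }
    }

    private static final Map<String, Class<? extends Device>> TYPES = new HashMap<>();
    static {
        TYPES.put("Lamp", Lamp.class);
    }

    // Returns null when the type is unknown or construction fails, mirroring
    // the null-on-failure contract of createDevice.
    static Device create(String type, String label) {
        Class<? extends Device> clazz = TYPES.get(type);
        if (clazz == null) {
            return null;
        }
        try {
            Constructor<? extends Device> ctor = clazz.getConstructor(String.class);
            return ctor.newInstance(label);
        } catch (ReflectiveOperationException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(create("Lamp", "desk").name()); // Lamp:desk
        System.out.println(create("Fan", "x"));            // null
    }
}
```

The reflective lookup keeps the registry table declarative: adding a device type is a single `registerDevice` line, provided the class exposes the expected constructor.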
@ -0,0 +1,127 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.bridge.devices;

import java.util.Map;
import java.util.Optional;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.openhab.binding.matter.internal.bridge.MatterBridgeClient;
import org.openhab.binding.matter.internal.client.dto.cluster.gen.LevelControlCluster;
import org.openhab.binding.matter.internal.client.dto.cluster.gen.OnOffCluster;
import org.openhab.binding.matter.internal.util.ValueUtils;
import org.openhab.core.items.GenericItem;
import org.openhab.core.items.GroupItem;
import org.openhab.core.items.Item;
import org.openhab.core.items.MetadataRegistry;
import org.openhab.core.library.items.DimmerItem;
import org.openhab.core.library.items.SwitchItem;
import org.openhab.core.library.types.HSBType;
import org.openhab.core.library.types.OnOffType;
import org.openhab.core.library.types.PercentType;
import org.openhab.core.types.State;

/**
 * The {@link DimmableLightDevice} is a device that represents a Dimmable Light.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
public class DimmableLightDevice extends GenericDevice {

    private State lastOnOffState = OnOffType.OFF;

    public DimmableLightDevice(MetadataRegistry metadataRegistry, MatterBridgeClient client, GenericItem item) {
        super(metadataRegistry, client, item);
    }

    @Override
    public String deviceType() {
        return "DimmableLight";
    }

    @Override
    protected MatterDeviceOptions activate() {
        primaryItem.addStateChangeListener(this);
        MetaDataMapping primaryMetadata = metaDataMapping(primaryItem);
        Map<String, Object> attributeMap = primaryMetadata.getAttributeOptions();
        PercentType level = Optional.ofNullable(primaryItem.getStateAs(PercentType.class))
                .orElseGet(() -> new PercentType(0));
        lastOnOffState = level.intValue() > 0 ? OnOffType.ON : OnOffType.OFF;
        attributeMap.put(LevelControlCluster.CLUSTER_PREFIX + "." + LevelControlCluster.ATTRIBUTE_CURRENT_LEVEL,
                Math.max(1, ValueUtils.percentToLevel(level)));
        attributeMap.put(OnOffCluster.CLUSTER_PREFIX + "." + OnOffCluster.ATTRIBUTE_ON_OFF, level.intValue() > 0);
        return new MatterDeviceOptions(attributeMap, primaryMetadata.label);
    }

    @Override
    public void dispose() {
        primaryItem.removeStateChangeListener(this);
    }

    @Override
    public void handleMatterEvent(String clusterName, String attributeName, Object data) {
        switch (attributeName) {
            case OnOffCluster.ATTRIBUTE_ON_OFF:
                updateOnOff(OnOffType.from(Boolean.valueOf(data.toString())));
                break;
            case LevelControlCluster.ATTRIBUTE_CURRENT_LEVEL:
                if (lastOnOffState == OnOffType.ON) {
                    updateLevel(ValueUtils.levelToPercent(((Double) data).intValue()));
                }
                break;
            default:
                break;
        }
    }

    @Override
    public void updateState(Item item, State state) {
        if (state instanceof HSBType hsb) {
            setEndpointState(LevelControlCluster.CLUSTER_PREFIX, LevelControlCluster.ATTRIBUTE_CURRENT_LEVEL,
                    ValueUtils.percentToLevel(hsb.getBrightness()));
            setEndpointState(OnOffCluster.CLUSTER_PREFIX, OnOffCluster.ATTRIBUTE_ON_OFF,
                    hsb.getBrightness().intValue() > 0);
            lastOnOffState = hsb.getBrightness().intValue() > 0 ? OnOffType.ON : OnOffType.OFF;
        } else if (state instanceof PercentType percentType) {
            setEndpointState(OnOffCluster.CLUSTER_PREFIX, OnOffCluster.ATTRIBUTE_ON_OFF, percentType.intValue() > 0);
            if (percentType.intValue() > 0) {
                setEndpointState(LevelControlCluster.CLUSTER_PREFIX, LevelControlCluster.ATTRIBUTE_CURRENT_LEVEL,
                        ValueUtils.percentToLevel(percentType));
                lastOnOffState = OnOffType.ON;
            } else {
                lastOnOffState = OnOffType.OFF;
            }
        } else if (state instanceof OnOffType onOffType) {
            setEndpointState(OnOffCluster.CLUSTER_PREFIX, OnOffCluster.ATTRIBUTE_ON_OFF, onOffType == OnOffType.ON);
            lastOnOffState = onOffType;
        }
    }

    private void updateOnOff(OnOffType onOffType) {
        lastOnOffState = onOffType;
        if (primaryItem instanceof GroupItem groupItem) {
            groupItem.send(onOffType);
        } else {
            ((SwitchItem) primaryItem).send(onOffType);
        }
    }

    private void updateLevel(PercentType level) {
        if (primaryItem instanceof GroupItem groupItem) {
            groupItem.send(level);
        } else {
            ((DimmerItem) primaryItem).send(level);
        }
    }
}
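`DimmableLightDevice` delegates percent/level conversion to `ValueUtils`. A sketch of the conversion it relies on, assuming the usual linear mapping between openHAB's 0–100 percent range and Matter's 0–254 LevelControl range (the exact rounding and edge-case handling in the real `ValueUtils` may differ):

```java
// Assumed linear percent<->level mapping: Matter LevelControl levels run
// 0..254, openHAB percents 0..100. This is a sketch, not ValueUtils' source.
public class LevelSketch {
    static int percentToLevel(int percent) {
        return (int) Math.round(percent * 254.0 / 100.0);
    }

    static int levelToPercent(int level) {
        return (int) Math.round(level * 100.0 / 254.0);
    }

    public static void main(String[] args) {
        System.out.println(percentToLevel(100)); // 254
        System.out.println(percentToLevel(50));  // 127
        System.out.println(levelToPercent(254)); // 100
    }
}
```

This also explains the `Math.max(1, ValueUtils.percentToLevel(level))` in `activate()`: some Matter stacks treat level 0 as invalid for `currentLevel`, so the binding clamps to a minimum of 1.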
@ -0,0 +1,88 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.bridge.devices;

import java.util.Map;
import java.util.Optional;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.openhab.binding.matter.internal.bridge.MatterBridgeClient;
import org.openhab.binding.matter.internal.client.dto.cluster.gen.DoorLockCluster;
import org.openhab.core.items.GenericItem;
import org.openhab.core.items.GroupItem;
import org.openhab.core.items.Item;
import org.openhab.core.items.MetadataRegistry;
import org.openhab.core.library.items.SwitchItem;
import org.openhab.core.library.types.OnOffType;
import org.openhab.core.types.State;

/**
 * The {@link DoorLockDevice} is a device that represents a Door Lock.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
public class DoorLockDevice extends GenericDevice {

    public DoorLockDevice(MetadataRegistry metadataRegistry, MatterBridgeClient client, GenericItem item) {
        super(metadataRegistry, client, item);
    }

    @Override
    public String deviceType() {
        return "DoorLock";
    }

    @Override
    public void handleMatterEvent(String clusterName, String attributeName, Object data) {
        switch (attributeName) {
            case DoorLockCluster.ATTRIBUTE_LOCK_STATE: {
                int lockInt = ((Double) data).intValue();
                boolean locked = DoorLockCluster.LockStateEnum.LOCKED.getValue() == lockInt;
                if (primaryItem instanceof GroupItem groupItem) {
                    groupItem.send(OnOffType.from(locked));
                } else {
                    ((SwitchItem) primaryItem).send(OnOffType.from(locked));
                }
                break;
            }
            default:
                break;
        }
    }

    @Override
    protected MatterDeviceOptions activate() {
        primaryItem.addStateChangeListener(this);
        MetaDataMapping primaryMetadata = metaDataMapping(primaryItem);
        Map<String, Object> attributeMap = primaryMetadata.getAttributeOptions();
        attributeMap.put(DoorLockCluster.CLUSTER_PREFIX + "." + DoorLockCluster.ATTRIBUTE_LOCK_STATE,
                Optional.ofNullable(primaryItem.getStateAs(OnOffType.class))
                        .orElseGet(() -> OnOffType.OFF) == OnOffType.ON ? DoorLockCluster.LockStateEnum.LOCKED.value
                                : DoorLockCluster.LockStateEnum.UNLOCKED.value);
        return new MatterDeviceOptions(attributeMap, primaryMetadata.label);
    }

    @Override
    public void dispose() {
        primaryItem.removeStateChangeListener(this);
    }

    @Override
    public void updateState(Item item, State state) {
        if (state instanceof OnOffType onOffType) {
            setEndpointState(DoorLockCluster.CLUSTER_PREFIX, DoorLockCluster.ATTRIBUTE_LOCK_STATE,
                    onOffType == OnOffType.ON ? DoorLockCluster.LockStateEnum.LOCKED.value
                            : DoorLockCluster.LockStateEnum.UNLOCKED.value);
        }
    }
}
@ -0,0 +1,413 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.bridge.devices;

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.eclipse.jdt.annotation.Nullable;
import org.openhab.binding.matter.internal.bridge.MatterBridgeClient;
import org.openhab.binding.matter.internal.client.dto.cluster.gen.FanControlCluster;
import org.openhab.binding.matter.internal.client.dto.cluster.gen.OnOffCluster;
import org.openhab.core.items.GenericItem;
import org.openhab.core.items.GroupItem;
import org.openhab.core.items.Item;
import org.openhab.core.items.MetadataRegistry;
import org.openhab.core.library.items.DimmerItem;
import org.openhab.core.library.items.NumberItem;
import org.openhab.core.library.items.StringItem;
import org.openhab.core.library.items.SwitchItem;
import org.openhab.core.library.types.DecimalType;
import org.openhab.core.library.types.OnOffType;
import org.openhab.core.library.types.PercentType;
import org.openhab.core.library.types.StringType;
import org.openhab.core.types.State;

/**
 * The {@link FanDevice} is a device that represents a Fan.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
public class FanDevice extends GenericDevice {
    private final Map<String, GenericItem> itemMap = new HashMap<>();
    private final Map<String, String> attributeToItemNameMap = new HashMap<>();
    private final FanModeMapper fanModeMapper = new FanModeMapper();
    private @Nullable Integer lastSpeed;
    private @Nullable Integer lastMode;
    private @Nullable OnOffType lastOnOff;

    public FanDevice(MetadataRegistry metadataRegistry, MatterBridgeClient client, GenericItem item) {
        super(metadataRegistry, client, item);
    }

    @Override
    public String deviceType() {
        return "Fan";
    }

    @Override
    public void handleMatterEvent(String clusterName, String attributeName, Object data) {
        GenericItem item = itemForAttribute(clusterName, attributeName);
        // if we have an item bound to this attribute, we can just update it, otherwise we need to handle updating
        // other items (see else block)
        if (item != null) {
            switch (attributeName) {
                case FanControlCluster.ATTRIBUTE_FAN_MODE:
                    try {
                        int mode = ((Double) data).intValue();
                        String mappedMode = fanModeMapper.toCustomValue(mode);
                        if (item instanceof NumberItem numberItem) {
                            numberItem.send(new DecimalType(mappedMode));
                        } else if (item instanceof StringItem stringItem) {
                            stringItem.send(new StringType(mappedMode));
                        } else if (item instanceof SwitchItem switchItem) {
                            switchItem.send(mode > 0 ? OnOffType.ON : OnOffType.OFF);
                        }
                    } catch (FanModeMappingException e) {
                        logger.debug("Could not convert {} to custom value", data);
                    }
                    break;
                case FanControlCluster.ATTRIBUTE_PERCENT_SETTING:
                    int level = ((Double) data).intValue();
                    if (item instanceof GroupItem groupItem) {
                        groupItem.send(new PercentType(level));
                    } else if (item instanceof DimmerItem dimmerItem) {
                        dimmerItem.send(new PercentType(level));
                    }
                    break;
                case OnOffCluster.ATTRIBUTE_ON_OFF:
                    if (item instanceof SwitchItem switchItem) {
                        OnOffType onOff = OnOffType.from((Boolean) data);
                        switchItem.send(onOff);
                        lastOnOff = onOff;
                    }
                    break;
                default:
                    break;
            }
        } else {
            // if there is not an item bound to a specific attribute, we need to handle updating other items and fake it
            switch (attributeName) {
                case FanControlCluster.ATTRIBUTE_PERCENT_SETTING: {
                    int level = ((Double) data).intValue();
                    // try and update the on/off state if set
                    GenericItem genericItem = itemForAttribute(OnOffCluster.CLUSTER_PREFIX,
                            OnOffCluster.ATTRIBUTE_ON_OFF);
                    if (genericItem instanceof SwitchItem switchItem) {
                        switchItem.send(OnOffType.from(level > 0));
                    }
                    // try and update the fan mode if set
                    genericItem = itemForAttribute(FanControlCluster.CLUSTER_PREFIX,
                            FanControlCluster.ATTRIBUTE_FAN_MODE);
                    try {
                        String mappedMode = fanModeMapper
                                .toCustomValue(level > 0 ? FanControlCluster.FanModeEnum.ON.value
                                        : FanControlCluster.FanModeEnum.OFF.value);
                        if (genericItem instanceof NumberItem numberItem) {
                            numberItem.send(new DecimalType(mappedMode));
                        } else if (genericItem instanceof StringItem stringItem) {
                            stringItem.send(new StringType(mappedMode));
                        } else if (genericItem instanceof SwitchItem switchItem) {
                            switchItem.send(OnOffType.from(level > 0));
                        }
                    } catch (FanModeMappingException e) {
                        logger.debug("Could not convert {} to custom value", data);
                    }
                    break;
                }
                case FanControlCluster.ATTRIBUTE_FAN_MODE: {
                    int mode = ((Double) data).intValue();
                    GenericItem genericItem = itemForAttribute(FanControlCluster.CLUSTER_PREFIX,
                            FanControlCluster.ATTRIBUTE_PERCENT_SETTING);
                    PercentType level = mode > 0 ? PercentType.HUNDRED : PercentType.ZERO;
                    if (genericItem instanceof GroupItem groupItem) {
                        groupItem.send(level);
                    } else if (genericItem instanceof DimmerItem dimmerItem) {
                        dimmerItem.send(level);
                    }
                    // try and update the on/off state if set
                    genericItem = itemForAttribute(OnOffCluster.CLUSTER_PREFIX, OnOffCluster.ATTRIBUTE_ON_OFF);
                    if (genericItem instanceof SwitchItem switchItem) {
                        switchItem.send(OnOffType.from(mode > 0));
                    }
                    break;
                }
                default:
                    break;
            }
        }
    }

    @Override
    protected MatterDeviceOptions activate() {
        primaryItem.addStateChangeListener(this);
        Map<String, Object> attributeMap = new HashMap<>();
        MetaDataMapping primaryMetadata = metaDataMapping(primaryItem);
        attributeMap.putAll(primaryMetadata.getAttributeOptions());
        Set<Item> members = new HashSet<>();
        members.add(primaryItem);
        if (primaryItem instanceof GroupItem groupItem) {
            members.addAll(groupItem.getAllMembers());
        }
        for (Item member : members) {
            if (member instanceof GenericItem genericMember) {
                MetaDataMapping metadata = metaDataMapping(genericMember);
                State state = genericMember.getState();
                for (String attribute : metadata.attributes) {
                    String[] pair = attribute.split("\\.");
                    if (pair.length != 2) {
                        logger.debug("Unknown attribute format {}", attribute);
                        continue;
                    }
                    String attributeName = pair[1];
                    switch (attributeName) {
                        case FanControlCluster.ATTRIBUTE_PERCENT_SETTING:
                            if (state instanceof PercentType percentType) {
                                int speed = percentType.intValue();
                                attributeMap.put(attribute, speed);
                                lastSpeed = speed;
                            } else {
                                attributeMap.put(attribute, 0);
                                lastSpeed = 0;
                            }
                            break;
                        case FanControlCluster.ATTRIBUTE_FAN_MODE:
                            int mode = 0;
                            if (state instanceof DecimalType decimalType) {
                                mode = decimalType.intValue();
                            }
                            attributeMap.put(attribute, mode);
                            fanModeMapper.initializeMappings(metadata.config);
                            lastMode = mode;
                            break;
                        case OnOffCluster.ATTRIBUTE_ON_OFF:
                            if (state instanceof OnOffType onOffType) {
                                attributeMap.put(attribute, onOffType == OnOffType.ON);
                                lastOnOff = onOffType;
                            } else {
                                attributeMap.put(attribute, false);
                                lastOnOff = OnOffType.OFF;
                            }
                            break;
                        default:
                            continue;
                    }
                    if (!itemMap.containsKey(genericMember.getUID())) {
                        itemMap.put(genericMember.getUID(), genericMember);
                        genericMember.addStateChangeListener(this);
                    }
                    attributeMap.putAll(metadata.getAttributeOptions());
                    attributeToItemNameMap.put(attribute, genericMember.getUID());
                }
            }
        }
        updateMissingAttributes().forEach((attribute, value) -> {
            attributeMap.put(attribute, value);
        });
        return new MatterDeviceOptions(attributeMap, primaryMetadata.label);
    }

    @Override
    public void dispose() {
        attributeToItemNameMap.clear();
        primaryItem.removeStateChangeListener(this);
        itemMap.forEach((uid, item) -> {
            item.removeStateChangeListener(this);
        });
        itemMap.clear();
    }

    @Override
    public void updateState(Item item, State state) {
        attributeToItemNameMap.forEach((attribute, itemUid) -> {
            if (itemUid.equals(item.getUID())) {
                String[] pair = attribute.split("\\.");
                if (pair.length != 2) {
                    logger.debug("Unknown attribute format {}", attribute);
                    return;
                }
                String clusterName = pair[0];
                String attributeName = pair[1];
                switch (attributeName) {
                    case FanControlCluster.ATTRIBUTE_PERCENT_SETTING:
                        if (state instanceof PercentType percentType) {
                            int speed = percentType.intValue();
                            setEndpointState(clusterName, attributeName, speed);
                            setEndpointState(clusterName, FanControlCluster.ATTRIBUTE_PERCENT_CURRENT, speed);
                            lastSpeed = speed;
                        }
                        break;
                    case FanControlCluster.ATTRIBUTE_FAN_MODE:
                        if (state instanceof OnOffType onOffType) {
                            int mode = onOffType == OnOffType.ON ? FanControlCluster.FanModeEnum.ON.value
                                    : FanControlCluster.FanModeEnum.OFF.value;
                            setEndpointState(clusterName, attributeName, mode);
                            lastMode = mode;
                        } else {
                            try {
                                int mode = fanModeMapper.fromCustomValue(state.toString()).value;
                                setEndpointState(clusterName, attributeName, mode);
                                lastMode = mode;
                            } catch (FanModeMappingException e) {
                                logger.debug("Could not convert {} to matter value", state.toString());
                            }
                        }
                        break;
                    case OnOffCluster.ATTRIBUTE_ON_OFF:
                        if (state instanceof OnOffType onOffType) {
                            setEndpointState(clusterName, attributeName, onOffType == OnOffType.ON);
                            lastOnOff = onOffType;
                        }
                        break;
                    default:
                        break;
                }
            }
        });
        sendMissingAttributes();
    }

    /**
     * The Fan device type mandates that mode and speed be present; this fakes those attributes when they have no
     * bound item.
     *
     * @return attribute values to report for any unbound mode/speed attributes
     */
    private Map<String, Object> updateMissingAttributes() {
        Map<String, Object> attributeMap = new HashMap<>();
        OnOffType onOff = lastOnOff;
        Integer mode = lastMode;
        Integer speed = lastSpeed;
        if (lastSpeed == null) {
            if (onOff != null) {
                attributeMap.put(FanControlCluster.CLUSTER_PREFIX + "." + FanControlCluster.ATTRIBUTE_PERCENT_CURRENT,
                        onOff == OnOffType.ON ? 100 : 0);
                attributeMap.put(FanControlCluster.CLUSTER_PREFIX + "." + FanControlCluster.ATTRIBUTE_PERCENT_SETTING,
                        onOff == OnOffType.ON ? 100 : 0);
            } else if (mode != null) {
                attributeMap.put(FanControlCluster.CLUSTER_PREFIX + "." + FanControlCluster.ATTRIBUTE_PERCENT_CURRENT,
                        mode == 0 ? 0 : 100);
                attributeMap.put(FanControlCluster.CLUSTER_PREFIX + "." + FanControlCluster.ATTRIBUTE_PERCENT_SETTING,
                        mode == 0 ? 0 : 100);
            }
        }
        if (mode == null) {
            if (onOff != null) {
                attributeMap.put(FanControlCluster.CLUSTER_PREFIX + "." + FanControlCluster.ATTRIBUTE_FAN_MODE,
                        onOff == OnOffType.ON ? FanControlCluster.FanModeEnum.ON.value
                                : FanControlCluster.FanModeEnum.OFF.value);
            } else if (speed != null) {
                attributeMap.put(FanControlCluster.CLUSTER_PREFIX + "." + FanControlCluster.ATTRIBUTE_FAN_MODE,
                        speed == 0 ? FanControlCluster.FanModeEnum.OFF.value : FanControlCluster.FanModeEnum.ON.value);
            }
        }
        return attributeMap;
    }

    private void sendMissingAttributes() {
        updateMissingAttributes().forEach((attribute, value) -> {
            String[] pair = attribute.split("\\.");
            if (pair.length != 2) {
                logger.debug("Unknown attribute format {}", attribute);
                return;
            }
            String clusterName = pair[0];
            String attributeName = pair[1];
            setEndpointState(clusterName, attributeName, value);
        });
    }

    private @Nullable GenericItem itemForAttribute(String clusterName, String attributeName) {
        String pathName = clusterName + "." + attributeName;
        String itemUid = attributeToItemNameMap.get(pathName);
        if (itemUid != null) {
            return itemMap.get(itemUid);
        }
        return null;
    }

    class FanModeMapper {
        private final Map<Integer, String> intToCustomMap = new HashMap<>();
        private final Map<String, FanControlCluster.FanModeEnum> customToEnumMap = new HashMap<>();

        public FanModeMapper() {
            Map<String, Object> mappings = new HashMap<>();
            FanControlCluster.FanModeEnum[] modes = FanControlCluster.FanModeEnum.values();
            for (FanControlCluster.FanModeEnum mode : modes) {
                mappings.put(mode.name(), mode.getValue());
            }
            initializeMappings(mappings);
        }

        public FanModeMapper(Map<String, Object> mappings) {
            initializeMappings(mappings);
        }

        private void initializeMappings(Map<String, Object> mappings) {
            if (mappings.isEmpty()) {
                return;
            }

            // don't bother mapping if there's no OFF
            if (!mappings.containsKey("OFF")) {
                return;
            }

            intToCustomMap.clear();
            customToEnumMap.clear();
            for (Map.Entry<String, Object> entry : mappings.entrySet()) {
                String customKey = entry.getKey().trim();
                Object valueObj = entry.getValue();
                String customValue = valueObj.toString().trim();

                try {
                    FanControlCluster.FanModeEnum mode = FanControlCluster.FanModeEnum.valueOf(customKey);
                    intToCustomMap.put(mode.value, customValue);
                    customToEnumMap.put(customValue, mode);
                } catch (IllegalArgumentException e) {
                    // ignore unknown values
                }
            }
        }

        public String toCustomValue(int modeValue) throws FanModeMappingException {
            String value = intToCustomMap.get(modeValue);
            if (value == null) {
                throw new FanModeMappingException("No mapping for mode: " + modeValue);
            }
            return value;
        }

        public FanControlCluster.FanModeEnum fromCustomValue(String customValue) throws FanModeMappingException {
            FanControlCluster.FanModeEnum value = customToEnumMap.get(customValue);
            if (value == null) {
                throw new FanModeMappingException("No mapping for custom value: " + customValue);
            }
            return value;
        }
    }

    class FanModeMappingException extends Exception {
        private static final long serialVersionUID = 1L;

        public FanModeMappingException(String message) {
            super(message);
        }
    }
}
@ -0,0 +1,224 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.bridge.devices;

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.eclipse.jdt.annotation.Nullable;
import org.openhab.binding.matter.internal.bridge.MatterBridgeClient;
import org.openhab.core.items.GenericItem;
import org.openhab.core.items.Item;
import org.openhab.core.items.Metadata;
import org.openhab.core.items.MetadataKey;
import org.openhab.core.items.MetadataRegistry;
import org.openhab.core.items.StateChangeListener;
import org.openhab.core.types.State;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * The {@link GenericDevice} is a base class for all devices that are managed by the bridge.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
public abstract class GenericDevice implements StateChangeListener {
    protected final Logger logger = LoggerFactory.getLogger(getClass());

    protected final GenericItem primaryItem;
    protected @Nullable Metadata primaryItemMetadata;
    protected final MatterBridgeClient client;
    protected final MetadataRegistry metadataRegistry;
    protected boolean activated = false;

    public GenericDevice(MetadataRegistry metadataRegistry, MatterBridgeClient client, GenericItem primaryItem) {
        this.metadataRegistry = metadataRegistry;
        this.client = client;
        this.primaryItem = primaryItem;
        this.primaryItemMetadata = metadataRegistry.get(new MetadataKey("matter", primaryItem.getUID()));
    }

    public abstract String deviceType();

    /**
     * Activate the device. This returns the device options for the device; inheriting classes should override this
     * method to return the correct device options, set the initial state of the device and register listeners.
     *
     * @return the device options
     */
    protected abstract MatterDeviceOptions activate();

    /**
     * Dispose of the device. Inheriting classes should unregister the device and remove their listeners.
     */
    public abstract void dispose();

    /**
     * Handle openHAB item state changes.
     */
    public abstract void updateState(Item item, State state);

    /**
     * Handle matter events.
     *
     * @param clusterName the cluster name
     * @param attributeName the attribute name
     * @param data the raw matter data value
     */
    public abstract void handleMatterEvent(String clusterName, String attributeName, Object data);

    @Override
    public void stateChanged(Item item, State oldState, State newState) {
        logger.debug("{} state changed from {} to {}", item.getName(), oldState, newState);
        updateState(item, newState);
    }

    @Override
    public void stateUpdated(Item item, State state) {
    }

    public synchronized CompletableFuture<String> registerDevice() {
        if (activated) {
            throw new IllegalStateException("Device already registered");
        }
        MatterDeviceOptions options = activate();
        activated = true;
        return client.addEndpoint(deviceType(), primaryItem.getName(), options.label, primaryItem.getName(),
                "Type " + primaryItem.getType(), String.valueOf(primaryItem.getName().hashCode()), options.clusters);
    }

    public String getName() {
        return primaryItem.getName();
    }

    public CompletableFuture<Void> setEndpointState(String clusterName, String attributeName, Object state) {
        return client.setEndpointState(primaryItem.getName(), clusterName, attributeName, state);
    }

    protected MetaDataMapping metaDataMapping(GenericItem item) {
        Metadata metadata = metadataRegistry.get(new MetadataKey("matter", item.getUID()));
        String label = item.getLabel();
        List<String> attributeList = List.of();
        Map<String, Object> config = new HashMap<>();
        if (metadata != null) {
            attributeList = Arrays.stream(metadata.getValue().split(",")).map(String::trim)
                    .collect(Collectors.toList());
            metadata.getConfiguration().forEach((key, value) -> {
                config.put(key.replace('-', '.').trim(), value);
            });
            if (config.get("label") instanceof String customLabel) {
                label = customLabel;
            }

            // convert the value of fixed labels into a cluster attribute
            if (config.get("fixedLabels") instanceof String fixedLabels) {
                List<KeyValue> labelList = parseFixedLabels(fixedLabels);
                config.put("fixedLabel.labelList", labelList);
            }
        }

        if (label == null) {
            label = item.getName();
        }

        return new MetaDataMapping(attributeList, config, label);
    }

    /**
     * This class is used to map the metadata to the endpoint options.
     */
    class MetaDataMapping {
        public final List<String> attributes;
        /**
         * The config for the item. This is a mix of custom mappings like "ON=1" and cluster attributes like
         * "clusterName.attributeName=2".
         */
        public final Map<String, Object> config;
        /**
         * The label for the item.
         */
        public final String label;

        public MetaDataMapping(List<String> attributes, Map<String, Object> config, String label) {
            this.attributes = attributes;
            this.config = config;
            this.label = label;
        }

        /**
         * Get the attribute options from the config. This filters the entries to just keys like
         * "clusterName.attributeName".
         *
         * @return the attribute options
         */
        public Map<String, Object> getAttributeOptions() {
            return config.entrySet().stream().filter(entry -> entry.getKey().contains("."))
                    .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
        }
    }

    class MatterDeviceOptions {
        public final Map<String, Map<String, Object>> clusters;
        public final String label;

        public MatterDeviceOptions(Map<String, Object> attributes, String label) {
            this.clusters = mapClusterAttributes(attributes);
            this.label = label;
        }
    }

    Map<String, Map<String, Object>> mapClusterAttributes(Map<String, Object> clusterAttributes) {
        Map<String, Map<String, Object>> returnMap = new HashMap<>();
        clusterAttributes.forEach((key, value) -> {
            String[] parts = key.split("\\.");
            if (parts.length != 2) {
                throw new IllegalArgumentException("Key must be in the format 'clusterName.attributeName'");
            }
            String clusterName = parts[0];
            String attributeName = parts[1];

            // Get or create the child map for the clusterName, then record the attribute value
            returnMap.computeIfAbsent(clusterName, k -> new HashMap<>()).put(attributeName, value);
        });
        return returnMap;
    }

    private List<KeyValue> parseFixedLabels(String labels) {
        Map<String, String> keyValueMap = Arrays.stream(labels.split(",")).map(pair -> pair.trim().split("=", 2))
                .filter(parts -> parts.length == 2)
                .collect(Collectors.toMap(parts -> parts[0].trim(), parts -> parts[1].trim()));
        return keyValueMap.entrySet().stream().map(entry -> new KeyValue(entry.getKey(), entry.getValue())).toList();
    }

    class KeyValue {
        public final String label;
        public final String value;

        public KeyValue(String label, String value) {
            this.label = label;
            this.value = value;
        }
    }
}
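The `mapClusterAttributes` helper above flattens metadata keys of the form `clusterName.attributeName` into a per-cluster attribute map. A minimal standalone sketch of that grouping (the `ClusterAttributeGrouper` name is illustrative, not part of the binding):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the key-splitting done by mapClusterAttributes:
// "onOff.onOff" -> {onOff={onOff=...}}; keys without exactly one dot are rejected.
public class ClusterAttributeGrouper {
    public static Map<String, Map<String, Object>> group(Map<String, Object> flat) {
        Map<String, Map<String, Object>> result = new HashMap<>();
        flat.forEach((key, value) -> {
            String[] parts = key.split("\\.");
            if (parts.length != 2) {
                throw new IllegalArgumentException("Key must be in the format 'clusterName.attributeName': " + key);
            }
            // Get or create the per-cluster map, then record the attribute value
            result.computeIfAbsent(parts[0], k -> new HashMap<>()).put(parts[1], value);
        });
        return result;
    }
}
```

Because `computeIfAbsent` both looks up and creates the child map, attributes for the same cluster accumulate into one map regardless of insertion order.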
@ -0,0 +1,84 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.bridge.devices;

import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.Map;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.eclipse.jdt.annotation.Nullable;
import org.openhab.binding.matter.internal.bridge.MatterBridgeClient;
import org.openhab.binding.matter.internal.client.dto.cluster.gen.RelativeHumidityMeasurementCluster;
import org.openhab.core.items.GenericItem;
import org.openhab.core.items.Item;
import org.openhab.core.items.MetadataRegistry;
import org.openhab.core.library.types.QuantityType;
import org.openhab.core.types.State;

/**
 * The {@link HumiditySensorDevice} is a device that represents a Humidity Sensor.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
public class HumiditySensorDevice extends GenericDevice {
    private static final BigDecimal HUMIDITY_MULTIPLIER = new BigDecimal(100);

    public HumiditySensorDevice(MetadataRegistry metadataRegistry, MatterBridgeClient client, GenericItem item) {
        super(metadataRegistry, client, item);
    }

    @Override
    public String deviceType() {
        return "HumiditySensor";
    }

    @Override
    public void handleMatterEvent(String clusterName, String attributeName, Object data) {
    }

    @Override
    protected MatterDeviceOptions activate() {
        primaryItem.addStateChangeListener(this);
        MetaDataMapping primaryMetadata = metaDataMapping(primaryItem);
        Map<String, Object> attributeMap = primaryMetadata.getAttributeOptions();
        attributeMap.put(
                RelativeHumidityMeasurementCluster.CLUSTER_PREFIX + "."
                        + RelativeHumidityMeasurementCluster.ATTRIBUTE_MEASURED_VALUE,
                toMatterValue(primaryItem.getState()));
        return new MatterDeviceOptions(attributeMap, primaryMetadata.label);
    }

    @Override
    public void dispose() {
        primaryItem.removeStateChangeListener(this);
    }

    @Override
    public void updateState(Item item, State state) {
        setEndpointState(RelativeHumidityMeasurementCluster.CLUSTER_PREFIX,
                RelativeHumidityMeasurementCluster.ATTRIBUTE_MEASURED_VALUE, toMatterValue(state));
    }

    private int toMatterValue(@Nullable State humidity) {
        BigDecimal value = new BigDecimal(0);
        if (humidity instanceof QuantityType quantityType) {
            value = quantityType.toBigDecimal();
        } else if (humidity instanceof Number number) {
            value = BigDecimal.valueOf(number.doubleValue());
        }
        return value.setScale(2, RoundingMode.CEILING).multiply(HUMIDITY_MULTIPLIER).intValue();
    }
}
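The conversion above relies on the Matter convention that the RelativeHumidityMeasurement cluster's MeasuredValue is expressed in hundredths of a percent. A standalone sketch of just the scaling step (the `HumidityScale` class name is illustrative, not part of the binding):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Illustrative sketch: Matter's RelativeHumidityMeasurement MeasuredValue is
// in hundredths of a percent, so 45.2 % RH is reported as 4520.
public class HumidityScale {
    public static int toMatterValue(double percent) {
        return BigDecimal.valueOf(percent)
                .setScale(2, RoundingMode.CEILING) // mirrors the binding's rounding mode
                .multiply(BigDecimal.valueOf(100))
                .intValue();
    }
}
```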
@ -0,0 +1,83 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.bridge.devices;

import java.util.Map;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.openhab.binding.matter.internal.bridge.MatterBridgeClient;
import org.openhab.binding.matter.internal.client.dto.cluster.gen.OccupancySensingCluster;
import org.openhab.core.items.GenericItem;
import org.openhab.core.items.Item;
import org.openhab.core.items.MetadataRegistry;
import org.openhab.core.library.types.OnOffType;
import org.openhab.core.library.types.OpenClosedType;
import org.openhab.core.types.State;

import com.google.gson.JsonObject;

/**
 * The {@link OccupancySensorDevice} is a device that represents an Occupancy Sensor.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
public class OccupancySensorDevice extends GenericDevice {

    public OccupancySensorDevice(MetadataRegistry metadataRegistry, MatterBridgeClient client, GenericItem item) {
        super(metadataRegistry, client, item);
    }

    @Override
    public String deviceType() {
        return "OccupancySensor";
    }

    @Override
    public void handleMatterEvent(String clusterName, String attributeName, Object data) {
    }

    @Override
    protected MatterDeviceOptions activate() {
        primaryItem.addStateChangeListener(this);
        MetaDataMapping primaryMetadata = metaDataMapping(primaryItem);
        Map<String, Object> attributeMap = primaryMetadata.getAttributeOptions();
        attributeMap.put(OccupancySensingCluster.CLUSTER_PREFIX + "." + OccupancySensingCluster.ATTRIBUTE_OCCUPANCY,
                occupiedState(primaryItem.getState()));
        return new MatterDeviceOptions(attributeMap, primaryMetadata.label);
    }

    @Override
    public void dispose() {
        primaryItem.removeStateChangeListener(this);
    }

    @Override
    public void updateState(Item item, State state) {
        setEndpointState(OccupancySensingCluster.CLUSTER_PREFIX, OccupancySensingCluster.ATTRIBUTE_OCCUPANCY,
                occupiedState(state));
    }

    private JsonObject occupiedState(State state) {
        boolean occupied = false;
        if (state instanceof OnOffType onOffType) {
            occupied = onOffType == OnOffType.ON;
        } else if (state instanceof OpenClosedType openClosedType) {
            occupied = openClosedType == OpenClosedType.OPEN;
        }
        JsonObject stateJson = new JsonObject();
        stateJson.addProperty("occupied", occupied);
        return stateJson;
    }
}
@ -0,0 +1,101 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.bridge.devices;

import java.util.Map;
import java.util.Optional;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.openhab.binding.matter.internal.bridge.MatterBridgeClient;
import org.openhab.binding.matter.internal.client.dto.cluster.gen.LevelControlCluster;
import org.openhab.binding.matter.internal.client.dto.cluster.gen.OnOffCluster;
import org.openhab.core.items.GenericItem;
import org.openhab.core.items.GroupItem;
import org.openhab.core.items.Item;
import org.openhab.core.items.MetadataRegistry;
import org.openhab.core.library.items.SwitchItem;
import org.openhab.core.library.types.HSBType;
import org.openhab.core.library.types.OnOffType;
import org.openhab.core.library.types.PercentType;
import org.openhab.core.types.State;

/**
 * The {@link OnOffLightDevice} is a device that represents an On/Off Light.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
public class OnOffLightDevice extends GenericDevice {

    public OnOffLightDevice(MetadataRegistry metadataRegistry, MatterBridgeClient client, GenericItem item) {
        super(metadataRegistry, client, item);
    }

    @Override
    public String deviceType() {
        return "OnOffLight";
    }

    @Override
    public void handleMatterEvent(String clusterName, String attributeName, Object data) {
        switch (attributeName) {
            case OnOffCluster.ATTRIBUTE_ON_OFF: {
                if (primaryItem instanceof GroupItem groupItem) {
                    groupItem.send(OnOffType.from(Boolean.valueOf(data.toString())));
                } else if (primaryItem instanceof SwitchItem switchItem) {
                    switchItem.send(OnOffType.from(Boolean.valueOf(data.toString())));
                }
            }
                break;
            case LevelControlCluster.ATTRIBUTE_CURRENT_LEVEL: {
                OnOffType onOff = OnOffType.from(((Double) data).intValue() > 0);
                if (primaryItem instanceof GroupItem groupItem) {
                    groupItem.send(onOff);
                } else if (primaryItem instanceof SwitchItem switchItem) {
                    switchItem.send(onOff);
                }
            }
                break;
            default:
                break;
        }
    }

    @Override
    protected MatterDeviceOptions activate() {
        primaryItem.addStateChangeListener(this);
        MetaDataMapping primaryMetadata = metaDataMapping(primaryItem);
        Map<String, Object> attributeMap = primaryMetadata.getAttributeOptions();
        attributeMap.put(OnOffCluster.CLUSTER_PREFIX + "." + OnOffCluster.ATTRIBUTE_ON_OFF, Optional
                .ofNullable(primaryItem.getStateAs(OnOffType.class)).orElseGet(() -> OnOffType.OFF) == OnOffType.ON);
        return new MatterDeviceOptions(attributeMap, primaryMetadata.label);
    }

    @Override
    public void dispose() {
        primaryItem.removeStateChangeListener(this);
    }

    @Override
    public void updateState(Item item, State state) {
        if (state instanceof HSBType hsb) {
            setEndpointState(OnOffCluster.CLUSTER_PREFIX, OnOffCluster.ATTRIBUTE_ON_OFF,
                    hsb.getBrightness().intValue() > 0);
        } else if (state instanceof PercentType percentType) {
            setEndpointState(OnOffCluster.CLUSTER_PREFIX, OnOffCluster.ATTRIBUTE_ON_OFF, percentType.intValue() > 0);
        } else if (state instanceof OnOffType onOffType) {
            setEndpointState(OnOffCluster.CLUSTER_PREFIX, OnOffCluster.ATTRIBUTE_ON_OFF, onOffType == OnOffType.ON);
        }
    }
}
@ -0,0 +1,36 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.bridge.devices;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.openhab.binding.matter.internal.bridge.MatterBridgeClient;
import org.openhab.core.items.GenericItem;
import org.openhab.core.items.MetadataRegistry;

/**
 * The {@link OnOffPlugInUnitDevice} is a device that represents an On/Off Plug-In Unit.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
public class OnOffPlugInUnitDevice extends OnOffLightDevice {

    public OnOffPlugInUnitDevice(MetadataRegistry metadataRegistry, MatterBridgeClient client, GenericItem item) {
        super(metadataRegistry, client, item);
    }

    @Override
    public String deviceType() {
        return "OnOffPlugInUnit";
    }
}
@ -0,0 +1,74 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.bridge.devices;

import java.util.Map;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.openhab.binding.matter.internal.bridge.MatterBridgeClient;
import org.openhab.binding.matter.internal.client.dto.cluster.gen.TemperatureMeasurementCluster;
import org.openhab.binding.matter.internal.util.ValueUtils;
import org.openhab.core.items.GenericItem;
import org.openhab.core.items.Item;
import org.openhab.core.items.MetadataRegistry;
import org.openhab.core.types.State;

/**
 * The {@link TemperatureSensorDevice} is a device that represents a Temperature Sensor.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
public class TemperatureSensorDevice extends GenericDevice {

    public TemperatureSensorDevice(MetadataRegistry metadataRegistry, MatterBridgeClient client, GenericItem item) {
        super(metadataRegistry, client, item);
    }

    @Override
    public String deviceType() {
        return "TemperatureSensor";
    }

    @Override
    public void handleMatterEvent(String clusterName, String attributeName, Object data) {
    }

    @Override
    protected MatterDeviceOptions activate() {
        primaryItem.addStateChangeListener(this);
        MetaDataMapping primaryMetadata = metaDataMapping(primaryItem);
        Map<String, Object> attributeMap = primaryMetadata.getAttributeOptions();
        State state = primaryItem.getState();
        Integer value = ValueUtils.temperatureToValue(state);
        attributeMap.put(TemperatureMeasurementCluster.CLUSTER_PREFIX + "."
                + TemperatureMeasurementCluster.ATTRIBUTE_MEASURED_VALUE, value == null ? 0 : value);
        return new MatterDeviceOptions(attributeMap, primaryMetadata.label);
    }

    @Override
    public void dispose() {
        primaryItem.removeStateChangeListener(this);
    }

    @Override
    public void updateState(Item item, State state) {
        Integer value = ValueUtils.temperatureToValue(state);
        if (value != null) {
            setEndpointState(TemperatureMeasurementCluster.CLUSTER_PREFIX,
                    TemperatureMeasurementCluster.ATTRIBUTE_MEASURED_VALUE, value);
        } else {
            logger.debug("Could not convert {} to matter value", state.toString());
        }
    }
}
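`ValueUtils.temperatureToValue` is the binding's own helper, but the convention it follows is that Matter's TemperatureMeasurement cluster reports hundredths of a degree Celsius. A simplified sketch of that scaling, ignoring the unit conversion and UNDEF handling the real helper performs (the `TemperatureScale` name is illustrative):

```java
// Illustrative sketch: Matter TemperatureMeasurement values are hundredths of
// a degree Celsius, so 21.5 degrees C becomes 2150. The binding's ValueUtils
// additionally converts QuantityType units and handles undefined states.
public class TemperatureScale {
    public static int toMatterValue(double celsius) {
        return (int) Math.round(celsius * 100);
    }
}
```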
@ -0,0 +1,328 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.bridge.devices;

import java.util.HashMap;
import java.util.Map;

import javax.measure.quantity.Temperature;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.eclipse.jdt.annotation.Nullable;
import org.openhab.binding.matter.internal.bridge.MatterBridgeClient;
import org.openhab.binding.matter.internal.client.dto.cluster.gen.OnOffCluster;
import org.openhab.binding.matter.internal.client.dto.cluster.gen.ThermostatCluster;
import org.openhab.binding.matter.internal.util.ValueUtils;
import org.openhab.core.items.GenericItem;
import org.openhab.core.items.GroupItem;
import org.openhab.core.items.Item;
import org.openhab.core.items.MetadataRegistry;
import org.openhab.core.library.items.NumberItem;
import org.openhab.core.library.items.StringItem;
import org.openhab.core.library.items.SwitchItem;
import org.openhab.core.library.types.DecimalType;
import org.openhab.core.library.types.OnOffType;
import org.openhab.core.library.types.QuantityType;
import org.openhab.core.library.types.StringType;
import org.openhab.core.types.State;
import org.openhab.core.types.UnDefType;

/**
 * The {@link ThermostatDevice} is a device that represents a Thermostat.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
public class ThermostatDevice extends GenericDevice {
    private final Map<String, GenericItem> itemMap = new HashMap<>();
    private final Map<String, String> attributeToItemNameMap = new HashMap<>();
    private final SystemModeMapper systemModeMapper = new SystemModeMapper();

    public ThermostatDevice(MetadataRegistry metadataRegistry, MatterBridgeClient client, GenericItem item) {
        super(metadataRegistry, client, item);
    }

    @Override
    public String deviceType() {
        return "Thermostat";
    }

    @Override
    public void handleMatterEvent(String clusterName, String attributeName, Object data) {
        String pathName = clusterName + "." + attributeName;
        String itemUid = attributeToItemNameMap.get(pathName);
        if (itemUid != null) {
            GenericItem item = itemMap.get(itemUid);
            if (item != null) {
                switch (attributeName) {
                    case ThermostatCluster.ATTRIBUTE_OCCUPIED_HEATING_SETPOINT:
                    case ThermostatCluster.ATTRIBUTE_OCCUPIED_COOLING_SETPOINT:
                        if (item instanceof NumberItem numberItem) {
                            QuantityType<Temperature> t = ValueUtils
                                    .valueToTemperature(Float.valueOf(data.toString()).intValue());
                            numberItem.send(t);
                        }
                        break;
                    case ThermostatCluster.ATTRIBUTE_SYSTEM_MODE:
                        try {
                            int mode = ((Double) data).intValue();
                            String mappedMode = systemModeMapper.toCustomValue(mode);
                            if (item instanceof NumberItem numberItem) {
                                numberItem.send(new DecimalType(mappedMode));
                            } else if (item instanceof StringItem stringItem) {
                                stringItem.send(new StringType(mappedMode));
                            } else if (item instanceof SwitchItem switchItem) {
                                switchItem.send(OnOffType.from(mode > 0));
                            }
                        } catch (SystemModeMappingException e) {
                            logger.debug("Could not convert {} to custom value", data);
                        }
                        break;
                    case OnOffCluster.ATTRIBUTE_ON_OFF:
                        try {
                            if (data instanceof Boolean onOff) {
                                String mappedMode = onOff ? systemModeMapper.onToCustomValue()
                                        : systemModeMapper.toCustomValue(0);
                                if (item instanceof NumberItem) {
                                    item.setState(new DecimalType(mappedMode));
                                } else {
                                    item.setState(new StringType(mappedMode));
                                }
                            }
                        } catch (SystemModeMappingException e) {
                            logger.debug("Could not convert {} to custom value", data);
                        }
                        break;
                    default:
                        break;
                }
            }
        }
    }

    @Override
    public void updateState(Item item, State state) {
        attributeToItemNameMap.forEach((attribute, itemUid) -> {
            if (itemUid.equals(item.getUID())) {
                // we need to do conversion here
                String[] pair = attribute.split("\\.");
                if (pair.length != 2) {
                    logger.debug("Unknown attribute format {}", attribute);
                    return;
                }
                String clusterName = pair[0];
                String attributeName = pair[1];
                switch (attributeName) {
                    case ThermostatCluster.ATTRIBUTE_LOCAL_TEMPERATURE:
                    case ThermostatCluster.ATTRIBUTE_OUTDOOR_TEMPERATURE:
                    case ThermostatCluster.ATTRIBUTE_OCCUPIED_HEATING_SETPOINT:
                    case ThermostatCluster.ATTRIBUTE_OCCUPIED_COOLING_SETPOINT:
                        Integer value = ValueUtils.temperatureToValue(state);
                        if (value != null) {
                            logger.debug("Setting {} to {}", attributeName, value);
                            setEndpointState(clusterName, attributeName, value);
                        } else {
                            logger.debug("Could not convert {} to matter value", state.toString());
                        }
                        break;
                    case ThermostatCluster.ATTRIBUTE_SYSTEM_MODE:
                        try {
                            int mode = systemModeMapper.fromCustomValue(state.toString()).value;
                            setEndpointState(clusterName, attributeName, mode);
                            setEndpointState(OnOffCluster.CLUSTER_PREFIX, OnOffCluster.ATTRIBUTE_ON_OFF, mode > 0);
                        } catch (SystemModeMappingException e) {
                            logger.debug("Could not convert {} to matter value", state.toString());
                        }
                        break;
                    default:
                        break;
                }
            }
        });
    }

    @Override
    protected MatterDeviceOptions activate() {
        primaryItem.addStateChangeListener(this);
        Map<String, Object> attributeMap = new HashMap<>();
        MetaDataMapping primaryMetadata = metaDataMapping(primaryItem);
        // add any settings for attributes from config, like thermostat.minHeatSetpointLimit=0
        attributeMap.putAll(primaryMetadata.getAttributeOptions());
        for (Item member : ((GroupItem) primaryItem).getAllMembers()) {
            if (member instanceof GenericItem genericMember) {
                MetaDataMapping metadata = metaDataMapping(genericMember);
                State state = genericMember.getState();
                for (String attribute : metadata.attributes) {
                    String[] pair = attribute.split("\\.");
                    if (pair.length != 2) {
                        logger.debug("Unknown attribute format {}", attribute);
                        continue;
                    }
                    String attributeName = pair[1];
                    switch (attributeName) {
                        case ThermostatCluster.ATTRIBUTE_LOCAL_TEMPERATURE:
                        case ThermostatCluster.ATTRIBUTE_OUTDOOR_TEMPERATURE:
                        case ThermostatCluster.ATTRIBUTE_OCCUPIED_HEATING_SETPOINT:
                        case ThermostatCluster.ATTRIBUTE_OCCUPIED_COOLING_SETPOINT:
                            if (state instanceof UnDefType) {
                                attributeMap.put(attribute, 0);
                            } else {
                                Integer value = ValueUtils.temperatureToValue(state);
                                attributeMap.put(attribute, value != null ? value : 0);
                            }
                            break;
                        case ThermostatCluster.ATTRIBUTE_SYSTEM_MODE:
                            try {
                                systemModeMapper.initializeMappings(metadata.config);
                                int mode = systemModeMapper.fromCustomValue(state.toString()).value;
                                attributeMap.put(attribute, mode);
                                attributeMap.put(OnOffCluster.CLUSTER_PREFIX + "." + OnOffCluster.ATTRIBUTE_ON_OFF,
                                        mode > 0);
                            } catch (SystemModeMappingException e) {
                                logger.debug("Could not convert {} to matter value", state.toString());
                            }
                            break;
                        default:
                            continue;
                    }
                    if (!itemMap.containsKey(genericMember.getUID())) {
                        itemMap.put(genericMember.getUID(), genericMember);
                        genericMember.addStateChangeListener(this);
                    }
                    // add any settings for attributes from config, like thermostat.minHeatSetpointLimit=0
                    attributeMap.putAll(metadata.getAttributeOptions());
                    attributeToItemNameMap.put(attribute, genericMember.getUID());
                }
            }
        }
        return new MatterDeviceOptions(attributeMap, primaryMetadata.label);
    }

    @Override
    public void dispose() {
        attributeToItemNameMap.clear();
        primaryItem.removeStateChangeListener(this);
        itemMap.forEach((uid, item) -> {
            ((GenericItem) item).removeStateChangeListener(this);
        });
        itemMap.clear();
    }

    class SystemModeMapper {
        private final Map<Integer, String> intToCustomMap = new HashMap<>();
        private final Map<String, ThermostatCluster.SystemModeEnum> customToEnumMap = new HashMap<>();
        private @Nullable String onMode = null;

        public SystemModeMapper() {
            Map<String, Object> mappings = new HashMap<>();
            ThermostatCluster.SystemModeEnum[] modes = ThermostatCluster.SystemModeEnum.values();
            for (ThermostatCluster.SystemModeEnum mode : modes) {
                mappings.put(mode.name(), mode.getValue());
            }
            mappings.put("ON", ThermostatCluster.SystemModeEnum.AUTO.getValue()); // this is a special case for ON
            initializeMappings(mappings);
        }

        public SystemModeMapper(Map<String, Object> mappings) {
            initializeMappings(mappings);
        }

        private void initializeMappings(Map<String, Object> mappings) {
            if (mappings.isEmpty()) {
                return;
            }

            // don't bother mapping if there's no OFF
            if (!mappings.containsKey("OFF")) {
                return;
            }

            if (!mappings.containsKey("ON")) {
|
||||
Object onObject = mappings.get("COOL");
|
||||
if (onObject != null) {
|
||||
mappings.put("ON", onObject);
|
||||
} else {
|
||||
onObject = mappings.get("HEAT");
|
||||
if (onObject != null) {
|
||||
mappings.put("ON", onObject);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
intToCustomMap.clear();
|
||||
customToEnumMap.clear();
|
||||
for (Map.Entry<String, Object> entry : mappings.entrySet()) {
|
||||
String customKey = entry.getKey().trim();
|
||||
Object valueObj = entry.getValue();
|
||||
String customValue = valueObj.toString().trim();
|
||||
|
||||
if ("ON".equals(customKey)) {
|
||||
onMode = customValue;
|
||||
continue;
|
||||
}
|
||||
|
||||
try {
|
||||
ThermostatCluster.SystemModeEnum mode = ThermostatCluster.SystemModeEnum.valueOf(customKey);
|
||||
intToCustomMap.put(mode.value, customValue);
|
||||
customToEnumMap.put(customValue, mode);
|
||||
} catch (IllegalArgumentException e) {
|
||||
// ignore unknown values
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
public String toCustomValue(int modeValue) throws SystemModeMappingException {
|
||||
String value = intToCustomMap.get(modeValue);
|
||||
if (value == null) {
|
||||
throw new SystemModeMappingException("No mapping for mode: " + modeValue);
|
||||
}
|
||||
return value;
|
||||
}
|
||||
|
||||
public String onToCustomValue() throws SystemModeMappingException {
|
||||
String value = this.onMode;
|
||||
if (value == null) {
|
||||
value = intToCustomMap.get(ThermostatCluster.SystemModeEnum.AUTO.value);
|
||||
}
|
||||
if (value == null) {
|
||||
value = ThermostatCluster.SystemModeEnum.AUTO.getValue().toString();
|
||||
}
|
||||
return value;
|
||||
}
|
||||
|
||||
public ThermostatCluster.SystemModeEnum fromCustomValue(String customValue) throws SystemModeMappingException {
|
||||
if ("ON".equals(customValue)) {
|
||||
String onMode = this.onMode;
|
||||
if (onMode != null) {
|
||||
return fromCustomValue(onMode);
|
||||
} else {
|
||||
return ThermostatCluster.SystemModeEnum.AUTO;
|
||||
}
|
||||
}
|
||||
|
||||
ThermostatCluster.SystemModeEnum value = customToEnumMap.get(customValue);
|
||||
if (value == null) {
|
||||
throw new SystemModeMappingException("No mapping for custom value: " + customValue);
|
||||
}
|
||||
return value;
|
||||
}
|
||||
}
|
||||
|
||||
class SystemModeMappingException extends Exception {
|
||||
private static final long serialVersionUID = 1L;
|
||||
|
||||
public SystemModeMappingException(String message) {
|
||||
super(message);
|
||||
}
|
||||
}
|
||||
}
|
|
@ -0,0 +1,225 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.bridge.devices;

import java.util.AbstractMap;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.eclipse.jdt.annotation.Nullable;
import org.openhab.binding.matter.internal.bridge.MatterBridgeClient;
import org.openhab.binding.matter.internal.client.dto.cluster.gen.WindowCoveringCluster;
import org.openhab.core.items.GenericItem;
import org.openhab.core.items.GroupItem;
import org.openhab.core.items.Item;
import org.openhab.core.items.Metadata;
import org.openhab.core.items.MetadataRegistry;
import org.openhab.core.library.items.DimmerItem;
import org.openhab.core.library.items.RollershutterItem;
import org.openhab.core.library.items.StringItem;
import org.openhab.core.library.items.SwitchItem;
import org.openhab.core.library.types.OnOffType;
import org.openhab.core.library.types.OpenClosedType;
import org.openhab.core.library.types.PercentType;
import org.openhab.core.library.types.StopMoveType;
import org.openhab.core.library.types.StringType;
import org.openhab.core.library.types.UpDownType;
import org.openhab.core.types.State;

/**
 * The {@link WindowCoveringDevice} is a device that represents a Window Covering.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
public class WindowCoveringDevice extends GenericDevice {
    private ScheduledExecutorService operationalStateScheduler = Executors.newSingleThreadScheduledExecutor();
    private @Nullable ScheduledFuture<?> operationalStateTimer = null;
    private @Nullable Integer lastTargetPercent = null;

    public WindowCoveringDevice(MetadataRegistry metadataRegistry, MatterBridgeClient client, GenericItem item) {
        super(metadataRegistry, client, item);
    }

    @Override
    public String deviceType() {
        return "WindowCovering";
    }

    @Override
    protected MatterDeviceOptions activate() {
        primaryItem.addStateChangeListener(this);
        MetaDataMapping primaryMetadata = metaDataMapping(primaryItem);
        Map<String, Object> attributeMap = primaryMetadata.getAttributeOptions();
        attributeMap.put(
                WindowCoveringCluster.CLUSTER_PREFIX + "."
                        + WindowCoveringCluster.ATTRIBUTE_CURRENT_POSITION_LIFT_PERCENT100THS,
                itemStateToPercent(primaryItem.getState()) * 100);
        return new MatterDeviceOptions(attributeMap, primaryMetadata.label);
    }

    @Override
    public void dispose() {
        primaryItem.removeStateChangeListener(this);
        cancelTimer();
    }

    @Override
    public void handleMatterEvent(String clusterName, String attributeName, Object data) {
        switch (attributeName) {
            case WindowCoveringCluster.ATTRIBUTE_TARGET_POSITION_LIFT_PERCENT100THS:
                PercentType percentType = new PercentType((int) ((Double) data / 100));
                lastTargetPercent = percentType.intValue();
                int currentPercent = itemStateToPercent(primaryItem.getState());
                if (currentPercent >= 0) {
                    updateOperationalStatus(currentPercent);
                }
                // do logic to send op state
                boolean open = percentType.intValue() == 0;
                Metadata primaryItemMetadata = this.primaryItemMetadata;
                String key = open ? "OPEN" : "CLOSED";
                if (primaryItem instanceof GroupItem groupItem) {
                    groupItem.send(percentType);
                } else if (primaryItem instanceof DimmerItem dimmerItem) {
                    dimmerItem.send(percentType);
                } else if (primaryItem instanceof RollershutterItem rollerShutterItem) {
                    if (percentType.intValue() == 100) {
                        rollerShutterItem.send(UpDownType.DOWN);
                    } else if (percentType.intValue() == 0) {
                        rollerShutterItem.send(UpDownType.UP);
                    } else {
                        rollerShutterItem.send(percentType);
                    }
                } else if (primaryItem instanceof SwitchItem switchItem) {
                    boolean invert = false;
                    if (primaryItemMetadata != null) {
                        Object invertObject = primaryItemMetadata.getConfiguration().getOrDefault("invert", false);
                        if (invertObject instanceof Boolean invertValue) {
                            invert = invertValue;
                        }
                    }
                    switchItem.send(OnOffType.from(invert ? open ? "ON" : "OFF" : open ? "OFF" : "ON"));
                } else if (primaryItem instanceof StringItem stringItem) {
                    Object value = key;
                    if (primaryItemMetadata != null) {
                        value = primaryItemMetadata.getConfiguration().getOrDefault(key, key);
                    }
                    stringItem.send(new StringType(value.toString()));
                }
                break;
            case WindowCoveringCluster.ATTRIBUTE_OPERATIONAL_STATUS:
                if (data instanceof AbstractMap treeMap) {
                    @SuppressWarnings("unchecked")
                    AbstractMap<String, Object> map = (AbstractMap<String, Object>) treeMap;
                    if (map.get("global") instanceof Integer value) {
                        if (WindowCoveringCluster.MovementStatus.STOPPED.getValue().equals(value)
                                && primaryItem instanceof RollershutterItem rollerShutterItem) {
                            rollerShutterItem.send(StopMoveType.STOP);
                            cancelTimer();
                            lastTargetPercent = null;
                            // will send stop back
                            updateOperationalStatus(0);
                        }
                    }
                }
                break;
            default:
                break;
        }
    }

    @Override
    public void updateState(Item item, State state) {
        int localPercent = itemStateToPercent(state);
        if (localPercent >= 0) {
            try {
                setEndpointState(WindowCoveringCluster.CLUSTER_PREFIX,
                        WindowCoveringCluster.ATTRIBUTE_CURRENT_POSITION_LIFT_PERCENT100THS, localPercent * 100).get();
            } catch (InterruptedException | ExecutionException e) {
                logger.debug("Could not set state", e);
                return;
            }
            cancelTimer();
            final Integer lp = localPercent;
            this.operationalStateTimer = operationalStateScheduler.schedule(() -> updateOperationalStatus(lp), 1000,
                    TimeUnit.MILLISECONDS);
        }
    }

    private int itemStateToPercent(State state) {
        int localPercent = 0;
        if (state instanceof PercentType percentType) {
            localPercent = percentType.intValue();
        } else if (state instanceof OpenClosedType openClosedType) {
            localPercent = openClosedType == OpenClosedType.OPEN ? 0 : 100;
        } else if (state instanceof OnOffType onOffType) {
            Metadata primaryItemMetadata = this.primaryItemMetadata;
            boolean invert = false;
            if (primaryItemMetadata != null) {
                logger.debug("primaryItemMetadata: {}", primaryItemMetadata);
                Object invertObject = primaryItemMetadata.getConfiguration().getOrDefault("invert", false);
                if (invertObject instanceof Boolean invertValue) {
                    logger.debug("invertObject: {}", invertObject);
                    invert = invertValue;
                }
            }
            localPercent = invert ? onOffType == OnOffType.ON ? 0 : 100 : onOffType == OnOffType.ON ? 100 : 0;
        } else if (state instanceof StringType stringType) {
            Metadata primaryItemMetadata = this.primaryItemMetadata;
            if (primaryItemMetadata != null) {
                Object openValue = primaryItemMetadata.getConfiguration().get("OPEN");
                Object closeValue = primaryItemMetadata.getConfiguration().get("CLOSED");
                if (openValue instanceof String && closeValue instanceof String) {
                    if (stringType.equals(openValue)) {
                        localPercent = 0;
                    } else if (stringType.equals(closeValue)) {
                        localPercent = 100;
                    }
                }
            }
        }
        return localPercent;
    }

    private void updateOperationalStatus(Integer localPercent) {
        Integer lastTargetPercent = this.lastTargetPercent;
        WindowCoveringCluster.MovementStatus status = WindowCoveringCluster.MovementStatus.STOPPED;
        if (lastTargetPercent != null) {
            if (lastTargetPercent < localPercent) {
                status = WindowCoveringCluster.MovementStatus.CLOSING;
            } else if (lastTargetPercent > localPercent) {
                status = WindowCoveringCluster.MovementStatus.OPENING;
            } else {
                this.lastTargetPercent = null;
            }
        }
        AbstractMap<String, Object> t = new LinkedHashMap<String, Object>();
        t.put("global", status.getValue());
        t.put("lift", status.getValue());
        setEndpointState(WindowCoveringCluster.CLUSTER_PREFIX, WindowCoveringCluster.ATTRIBUTE_OPERATIONAL_STATUS, t);
    }

    private void cancelTimer() {
        ScheduledFuture<?> operationalStateTimer = this.operationalStateTimer;
        if (operationalStateTimer != null) {
            operationalStateTimer.cancel(true);
        }
        this.operationalStateTimer = null;
    }
}
@ -0,0 +1,27 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.client;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.openhab.binding.matter.internal.client.dto.ws.AttributeChangedMessage;

/**
 * A listener for attribute changes
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
public interface AttributeListener {

    public void onEvent(AttributeChangedMessage message);
}
@ -0,0 +1,27 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.client;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.openhab.binding.matter.internal.client.dto.ws.EventTriggeredMessage;

/**
 * A listener for event triggered messages
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
public interface EventTriggeredListener {

    public void onEvent(EventTriggeredMessage message);
}
@ -0,0 +1,44 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.client;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.openhab.binding.matter.internal.client.dto.ws.AttributeChangedMessage;
import org.openhab.binding.matter.internal.client.dto.ws.BridgeEventMessage;
import org.openhab.binding.matter.internal.client.dto.ws.EventTriggeredMessage;
import org.openhab.binding.matter.internal.client.dto.ws.NodeDataMessage;
import org.openhab.binding.matter.internal.client.dto.ws.NodeStateMessage;

/**
 * A listener for Matter client events
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
public interface MatterClientListener {
    public void onDisconnect(String reason);

    public void onConnect();

    public void onReady();

    public void onEvent(NodeStateMessage message);

    public void onEvent(AttributeChangedMessage message);

    public void onEvent(EventTriggeredMessage message);

    public void onEvent(BridgeEventMessage message);

    public void onEvent(NodeDataMessage message);
}
@ -0,0 +1,700 @@
|
|||
/*
|
||||
* Copyright (c) 2010-2025 Contributors to the openHAB project
|
||||
*
|
||||
* See the NOTICE file(s) distributed with this work for additional
|
||||
* information.
|
||||
*
|
||||
* This program and the accompanying materials are made available under the
|
||||
* terms of the Eclipse Public License 2.0 which is available at
|
||||
* http://www.eclipse.org/legal/epl-2.0
|
||||
*
|
||||
* SPDX-License-Identifier: EPL-2.0
|
||||
*/
|
||||
package org.openhab.binding.matter.internal.client;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.lang.reflect.Field;
|
||||
import java.lang.reflect.ParameterizedType;
|
||||
import java.lang.reflect.Type;
|
||||
import java.math.BigInteger;
|
||||
import java.net.URI;
|
||||
import java.net.URLEncoder;
|
||||
import java.nio.charset.StandardCharsets;
|
||||
import java.util.ArrayList;
|
||||
import java.util.HashMap;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Set;
|
||||
import java.util.UUID;
|
||||
import java.util.concurrent.CompletableFuture;
|
||||
import java.util.concurrent.ConcurrentHashMap;
|
||||
import java.util.concurrent.CopyOnWriteArrayList;
|
||||
import java.util.concurrent.ScheduledExecutorService;
|
||||
import java.util.concurrent.TimeUnit;
|
||||
import java.util.concurrent.TimeoutException;
|
||||
import java.util.stream.Collectors;
|
||||
|
||||
import org.eclipse.jdt.annotation.NonNullByDefault;
|
||||
import org.eclipse.jdt.annotation.Nullable;
|
||||
import org.eclipse.jetty.websocket.api.Session;
|
||||
import org.eclipse.jetty.websocket.api.WebSocketListener;
|
||||
import org.eclipse.jetty.websocket.api.WebSocketPolicy;
|
||||
import org.eclipse.jetty.websocket.client.ClientUpgradeRequest;
|
||||
import org.eclipse.jetty.websocket.client.WebSocketClient;
|
||||
import org.openhab.binding.matter.internal.client.dto.Endpoint;
|
||||
import org.openhab.binding.matter.internal.client.dto.Node;
|
||||
import org.openhab.binding.matter.internal.client.dto.cluster.gen.BaseCluster;
|
||||
import org.openhab.binding.matter.internal.client.dto.cluster.gen.BaseCluster.OctetString;
|
||||
import org.openhab.binding.matter.internal.client.dto.cluster.gen.ClusterRegistry;
|
||||
import org.openhab.binding.matter.internal.client.dto.ws.AttributeChangedMessage;
|
||||
import org.openhab.binding.matter.internal.client.dto.ws.BridgeEventAttributeChanged;
|
||||
import org.openhab.binding.matter.internal.client.dto.ws.BridgeEventMessage;
|
||||
import org.openhab.binding.matter.internal.client.dto.ws.BridgeEventTriggered;
|
||||
import org.openhab.binding.matter.internal.client.dto.ws.Event;
|
||||
import org.openhab.binding.matter.internal.client.dto.ws.EventTriggeredMessage;
|
||||
import org.openhab.binding.matter.internal.client.dto.ws.Message;
|
||||
import org.openhab.binding.matter.internal.client.dto.ws.NodeDataMessage;
|
||||
import org.openhab.binding.matter.internal.client.dto.ws.NodeStateMessage;
|
||||
import org.openhab.binding.matter.internal.client.dto.ws.Path;
|
||||
import org.openhab.binding.matter.internal.client.dto.ws.Request;
|
||||
import org.openhab.binding.matter.internal.client.dto.ws.Response;
|
||||
import org.openhab.binding.matter.internal.client.dto.ws.TriggerEvent;
|
||||
import org.openhab.core.common.ThreadPoolManager;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import com.google.gson.Gson;
|
||||
import com.google.gson.GsonBuilder;
|
||||
import com.google.gson.JsonArray;
|
||||
import com.google.gson.JsonDeserializationContext;
|
||||
import com.google.gson.JsonDeserializer;
|
||||
import com.google.gson.JsonElement;
|
||||
import com.google.gson.JsonObject;
|
||||
import com.google.gson.JsonParseException;
|
||||
import com.google.gson.JsonPrimitive;
|
||||
import com.google.gson.JsonSerializationContext;
|
||||
import com.google.gson.JsonSerializer;
|
||||
import com.google.gson.JsonSyntaxException;
|
||||
import com.google.gson.reflect.TypeToken;
|
||||
|
||||
/**
|
||||
* A client for the Matter WebSocket API for communicating with a Matter controller
|
||||
*
|
||||
* @author Dan Cunningham - Initial contribution
|
||||
*/
|
||||
@NonNullByDefault
|
||||
public class MatterWebsocketClient implements WebSocketListener, MatterWebsocketService.NodeProcessListener {
|
||||
|
||||
protected final Logger logger = LoggerFactory.getLogger(getClass());
|
||||
|
||||
private static final int BUFFER_SIZE = 1048576 * 2; // 2 Mb
|
||||
private static final int REQUEST_TIMEOUT_SECONDS = 60 * 3; // 3 minutes
|
||||
|
||||
private final ScheduledExecutorService scheduler = ThreadPoolManager
|
||||
.getScheduledPool("matter.MatterWebsocketClient");
|
||||
|
||||
protected final Gson gson = new GsonBuilder().registerTypeAdapter(Node.class, new NodeDeserializer())
|
||||
.registerTypeAdapter(BigInteger.class, new BigIntegerSerializer())
|
||||
.registerTypeHierarchyAdapter(BaseCluster.MatterEnum.class, new MatterEnumDeserializer())
|
||||
.registerTypeAdapter(AttributeChangedMessage.class, new AttributeChangedMessageDeserializer())
|
||||
.registerTypeAdapter(EventTriggeredMessage.class, new EventTriggeredMessageDeserializer())
|
||||
.registerTypeAdapter(OctetString.class, new OctetStringDeserializer())
|
||||
.registerTypeAdapter(OctetString.class, new OctetStringSerializer()).create();
|
||||
|
||||
protected final WebSocketClient client = new WebSocketClient();
|
||||
protected final ConcurrentHashMap<String, CompletableFuture<JsonElement>> pendingRequests = new ConcurrentHashMap<>();
|
||||
protected final CopyOnWriteArrayList<MatterClientListener> clientListeners = new CopyOnWriteArrayList<>();
|
||||
|
||||
@Nullable
|
||||
private Session session;
|
||||
@Nullable
|
||||
Map<String, String> connectionParameters;
|
||||
|
||||
@Nullable
|
||||
private MatterWebsocketService wss;
|
||||
|
||||
/**
|
||||
* Connect to a local Matter controller running on this host in openHAB, primarily use case
|
||||
*
|
||||
* @param nodeId
|
||||
* @param storagePath
|
||||
* @param controllerName
|
||||
* @throws Exception
|
||||
*/
|
||||
public void connectWhenReady(MatterWebsocketService wss, Map<String, String> connectionParameters) {
|
||||
this.connectionParameters = connectionParameters;
|
||||
this.wss = wss;
|
||||
wss.addProcessListener(this);
|
||||
}
|
||||
|
||||
/**
|
||||
* Disconnect from the controller
|
||||
*/
|
||||
public void disconnect() {
|
||||
Session session = this.session;
|
||||
try {
|
||||
pendingRequests.forEach((id, future) -> {
|
||||
if (!future.isDone()) {
|
||||
future.completeExceptionally(new Exception("Client disconnected"));
|
||||
}
|
||||
});
|
||||
pendingRequests.clear();
|
||||
|
||||
if (session != null && session.isOpen()) {
|
||||
session.disconnect();
|
||||
session.close();
|
||||
session = null;
|
||||
}
|
||||
} catch (IOException e) {
|
||||
logger.debug("Error trying to disconnect", e);
|
||||
} finally {
|
||||
try {
|
||||
client.stop();
|
||||
} catch (Exception e) {
|
||||
logger.debug("Error closing Web Socket", e);
|
||||
}
|
||||
MatterWebsocketService wss = this.wss;
|
||||
if (wss != null) {
|
||||
wss.removeProcessListener(this);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Add a listener to the client
|
||||
*
|
||||
* @param listener
|
||||
*/
|
||||
public void addListener(MatterClientListener listener) {
|
||||
clientListeners.add(listener);
|
||||
}
|
||||
|
||||
/**
|
||||
* Remove a listener from the client
|
||||
*
|
||||
* @param listener
|
||||
*/
|
||||
public void removeListener(MatterClientListener listener) {
|
||||
clientListeners.remove(listener);
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if the client is connected to the controller
|
||||
*
|
||||
* @return
|
||||
*/
|
||||
public boolean isConnected() {
|
||||
Session session = this.session;
|
||||
return session != null && session.isOpen();
|
||||
}
|
||||
|
||||
/**
|
||||
* Send a generic command to the controller
|
||||
*
|
||||
* @param namespace
|
||||
* @param functionName
|
||||
* @param objects
|
||||
* @return
|
||||
*/
|
||||
public CompletableFuture<String> genericCommand(String namespace, String functionName,
|
||||
@Nullable Object... objects) {
|
||||
CompletableFuture<JsonElement> future = sendMessage(namespace, functionName, objects);
|
||||
return future.thenApply(obj -> obj.toString());
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onNodeExit(int exitCode) {
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onNodeReady(int port) {
|
||||
logger.debug("onNodeReady port {}", port);
|
||||
if (isConnected()) {
|
||||
logger.debug("Already connected, aborting!");
|
||||
return;
|
||||
}
|
||||
try {
|
||||
connectWebsocket("localhost", port);
|
||||
} catch (Exception e) {
|
||||
disconnect();
|
||||
logger.debug("Could not connect", e);
|
||||
for (MatterClientListener listener : clientListeners) {
|
||||
String msg = e.getLocalizedMessage();
|
||||
listener.onDisconnect(msg != null ? msg : "Exception connecting");
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onWebSocketConnect(@Nullable Session session) {
|
||||
if (session != null) {
|
||||
final WebSocketPolicy currentPolicy = session.getPolicy();
|
||||
currentPolicy.setInputBufferSize(BUFFER_SIZE);
|
||||
currentPolicy.setMaxTextMessageSize(BUFFER_SIZE);
|
||||
currentPolicy.setMaxBinaryMessageSize(BUFFER_SIZE);
|
||||
this.session = session;
|
||||
for (MatterClientListener listener : clientListeners) {
|
||||
listener.onConnect();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onWebSocketText(@Nullable String msg) {
|
||||
logger.debug("onWebSocketText {}", msg);
|
||||
scheduler.submit(() -> {
|
||||
Message message = gson.fromJson(msg, Message.class);
|
||||
if (message == null) {
|
||||
logger.debug("invalid Message");
|
||||
return;
|
||||
}
|
||||
if ("response".equals(message.type)) {
|
||||
Response response = gson.fromJson(message.message, Response.class);
|
||||
if (response == null) {
|
||||
logger.debug("invalid response Message");
|
||||
return;
|
||||
}
|
||||
CompletableFuture<JsonElement> future = pendingRequests.remove(response.id);
|
||||
if (future == null) {
|
||||
logger.debug("no future for response id {}, type {} , did the request timeout?", response.id,
|
||||
response.type);
|
||||
return;
|
||||
}
|
||||
logger.debug("result type: {} ", response.type);
|
||||
if (!"resultSuccess".equals(response.type)) {
|
||||
future.completeExceptionally(new Exception(response.error));
|
||||
} else {
|
||||
future.complete(response.result);
|
||||
}
|
||||
} else if ("event".equals(message.type)) {
|
||||
Event event = gson.fromJson(message.message, Event.class);
|
||||
if (event == null) {
|
||||
logger.debug("invalid Event");
|
||||
return;
|
||||
}
|
||||
switch (event.type) {
|
||||
case "attributeChanged":
|
||||
logger.debug("attributeChanged message {}", event.data);
|
||||
AttributeChangedMessage changedMessage = gson.fromJson(event.data,
|
||||
AttributeChangedMessage.class);
|
||||
if (changedMessage == null) {
|
||||
logger.debug("invalid AttributeChangedMessage");
|
||||
return;
|
||||
}
|
||||
for (MatterClientListener listener : clientListeners) {
|
||||
try {
|
||||
listener.onEvent(changedMessage);
|
||||
} catch (Exception e) {
|
||||
logger.debug("Error notifying listener", e);
|
||||
}
|
||||
}
|
||||
break;
|
||||
case "eventTriggered":
|
||||
logger.debug("eventTriggered message {}", event.data);
|
||||
EventTriggeredMessage triggeredMessage = gson.fromJson(event.data, EventTriggeredMessage.class);
|
||||
if (triggeredMessage == null) {
|
||||
logger.debug("invalid EventTriggeredMessage");
|
||||
return;
|
||||
}
|
||||
for (MatterClientListener listener : clientListeners) {
|
||||
try {
|
||||
listener.onEvent(triggeredMessage);
|
||||
} catch (Exception e) {
|
||||
logger.debug("Error notifying listener", e);
|
||||
}
|
||||
}
|
||||
break;
|
||||
case "nodeStateInformation":
|
||||
logger.debug("nodeStateInformation message {}", event.data);
|
||||
NodeStateMessage nodeStateMessage = gson.fromJson(event.data, NodeStateMessage.class);
|
||||
                if (nodeStateMessage == null) {
                    logger.debug("invalid NodeStateMessage");
                    return;
                }
                for (MatterClientListener listener : clientListeners) {
                    try {
                        listener.onEvent(nodeStateMessage);
                    } catch (Exception e) {
                        logger.debug("Error notifying listener", e);
                    }
                }
                break;
            case "nodeData":
                logger.debug("nodeData message {}", event.data);
                Node node = gson.fromJson(event.data, Node.class);
                if (node == null) {
                    logger.debug("invalid nodeData");
                    return;
                }
                for (MatterClientListener listener : clientListeners) {
                    try {
                        listener.onEvent(new NodeDataMessage(node));
                    } catch (Exception e) {
                        logger.debug("Error notifying listener", e);
                    }
                }
                break;
            case "bridgeEvent":
                logger.debug("bridgeEvent message {}", event.data);
                BridgeEventMessage bridgeEventMessage = gson.fromJson(event.data, BridgeEventMessage.class);

                if (bridgeEventMessage == null) {
                    logger.debug("invalid bridgeEvent");
                    return;
                }

                switch (bridgeEventMessage.type) {
                    case "attributeChanged":
                        bridgeEventMessage = gson.fromJson(event.data, BridgeEventAttributeChanged.class);
                        break;
                    case "eventTriggered":
                        bridgeEventMessage = gson.fromJson(event.data, BridgeEventTriggered.class);
                        break;
                }

                if (bridgeEventMessage == null) {
                    logger.debug("invalid bridgeEvent subtype");
                    return;
                }

                for (MatterClientListener listener : clientListeners) {
                    try {
                        listener.onEvent(bridgeEventMessage);
                    } catch (Exception e) {
                        logger.debug("Error notifying listener", e);
                    }
                }
                break;
            case "ready":
                for (MatterClientListener listener : clientListeners) {
                    listener.onReady();
                }
                break;
            default:
                break;
        }
    }
});
}

@Override
public void onWebSocketClose(int statusCode, @Nullable String reason) {
    logger.debug("onWebSocketClose {} {}", statusCode, reason);
    for (MatterClientListener listener : clientListeners) {
        listener.onDisconnect(reason != null ? reason : "Code " + statusCode);
    }
}

@Override
public void onWebSocketError(@Nullable Throwable cause) {
    logger.debug("onWebSocketError", cause);
}

@Override
public void onWebSocketBinary(byte @Nullable [] payload, int offset, int len) {
    logger.debug("onWebSocketBinary data, not supported");
}

protected CompletableFuture<JsonElement> sendMessage(String namespace, String functionName,
        @Nullable Object args[]) {
    return sendMessage(namespace, functionName, args, REQUEST_TIMEOUT_SECONDS);
}

protected CompletableFuture<JsonElement> sendMessage(String namespace, String functionName, @Nullable Object args[],
        int timeoutSeconds) {
    // Use a final local so the lambda below can capture it and report the timeout actually used
    final int timeout = timeoutSeconds <= 0 ? REQUEST_TIMEOUT_SECONDS : timeoutSeconds;
    CompletableFuture<JsonElement> responseFuture = new CompletableFuture<>();

    Session session = this.session;
    if (session == null) {
        logger.debug("Could not send {} {} : no valid session", namespace, functionName);
        return responseFuture;
    }
    String requestId = UUID.randomUUID().toString();
    pendingRequests.put(requestId, responseFuture);
    Request message = new Request(requestId, namespace, functionName, args);
    String jsonMessage = gson.toJson(message);
    logger.debug("sendMessage: {}", jsonMessage);
    session.getRemote().sendStringByFuture(jsonMessage);

    // Fail the pending request if no response arrives within the timeout window
    scheduler.schedule(() -> {
        CompletableFuture<JsonElement> future = pendingRequests.remove(requestId);
        if (future != null && !future.isDone()) {
            future.completeExceptionally(new TimeoutException(
                    String.format("Request %s:%s timed out after %d seconds", namespace, functionName, timeout)));
        }
    }, timeout, TimeUnit.SECONDS);

    return responseFuture;
}

private void connectWebsocket(String host, int port) throws Exception {
    String dest = "ws://" + host + ":" + port;
    Map<String, String> connectionParameters = this.connectionParameters;
    if (connectionParameters != null) {
        dest += "?" + connectionParameters.entrySet().stream()
                .map((Map.Entry<String, String> entry) -> URLEncoder.encode(entry.getKey(), StandardCharsets.UTF_8)
                        + "=" + URLEncoder.encode(entry.getValue(), StandardCharsets.UTF_8))
                .collect(Collectors.joining("&"));
    }

    logger.debug("Connecting {}", dest);
    WebSocketClient client = new WebSocketClient();
    client.setMaxIdleTimeout(Long.MAX_VALUE);
    client.start();
    URI uri = new URI(dest);
    client.connect(this, uri, new ClientUpgradeRequest()).get();
}

@NonNullByDefault({})
class NodeDeserializer implements JsonDeserializer<Node> {
    @Override
    public @Nullable Node deserialize(JsonElement json, Type typeOfT, JsonDeserializationContext context)
            throws JsonParseException {
        JsonObject jsonObjectNode = json.getAsJsonObject();
        Node node = new Node();
        node.id = jsonObjectNode.get("id").getAsBigInteger();

        // Deserialize root endpoint
        JsonObject rootEndpointJson = jsonObjectNode.getAsJsonObject("rootEndpoint");
        Endpoint rootEndpoint = deserializeEndpoint(rootEndpointJson, context);
        node.rootEndpoint = rootEndpoint;

        return node;
    }

    private Endpoint deserializeEndpoint(JsonObject endpointJson, JsonDeserializationContext context) {
        Endpoint endpoint = new Endpoint();
        endpoint.number = endpointJson.get("number").getAsInt();
        endpoint.clusters = new HashMap<>();
        logger.trace("deserializeEndpoint {}", endpoint.number);

        // Deserialize clusters
        JsonObject clustersJson = endpointJson.getAsJsonObject("clusters");
        Set<Map.Entry<String, JsonElement>> clusterEntries = clustersJson.entrySet();
        for (Map.Entry<String, JsonElement> clusterEntry : clusterEntries) {
            String clusterName = clusterEntry.getKey();
            JsonElement clusterElement = clusterEntry.getValue();
            logger.trace("Cluster {}", clusterEntry);
            try {
                Class<?> clazz = Class.forName(BaseCluster.class.getPackageName() + "." + clusterName + "Cluster");
                if (BaseCluster.class.isAssignableFrom(clazz)) {
                    BaseCluster cluster = context.deserialize(clusterElement, clazz);
                    deserializeFields(cluster, clusterElement, clazz, context);
                    endpoint.clusters.put(clusterName, cluster);
                    logger.trace("deserializeEndpoint adding cluster {} to endpoint {}", clusterName,
                            endpoint.number);
                }
            } catch (ClassNotFoundException e) {
                logger.debug("Cluster not found: {}", clusterName);
            } catch (JsonSyntaxException | IllegalArgumentException | SecurityException
                    | IllegalAccessException e) {
                logger.debug("Exception for cluster {}", clusterName, e);
            }
        }

        // Deserialize child endpoints
        endpoint.children = new ArrayList<>();
        JsonArray childrenJson = endpointJson.getAsJsonArray("children");
        if (childrenJson != null) {
            for (JsonElement childElement : childrenJson) {
                JsonObject childJson = childElement.getAsJsonObject();
                Endpoint childEndpoint = deserializeEndpoint(childJson, context);
                endpoint.children.add(childEndpoint);
            }
        }

        return endpoint;
    }

    private void deserializeFields(Object instance, JsonElement jsonElement, Class<?> clazz,
            JsonDeserializationContext context) throws IllegalAccessException {
        JsonObject jsonObject = jsonElement.getAsJsonObject();
        for (Map.Entry<String, JsonElement> entry : jsonObject.entrySet()) {
            String fieldName = entry.getKey();
            JsonElement element = entry.getValue();

            try {
                Field field = getField(clazz, fieldName);
                field.setAccessible(true);

                if (List.class.isAssignableFrom(field.getType())) {
                    // Handle lists generically
                    Type fieldType = ((ParameterizedType) field.getGenericType()).getActualTypeArguments()[0];
                    List<?> list = context.deserialize(element,
                            TypeToken.getParameterized(List.class, fieldType).getType());
                    field.set(instance, list);
                } else {
                    // Handle normal fields
                    Object fieldValue = context.deserialize(element, field.getType());
                    field.set(instance, fieldValue);
                }
            } catch (NoSuchFieldException e) {
                logger.trace("Skipping field {}", fieldName);
            }
        }
    }

    private Field getField(Class<?> clazz, String fieldName) throws NoSuchFieldException {
        try {
            return clazz.getDeclaredField(fieldName);
        } catch (NoSuchFieldException e) {
            Class<?> superClass = clazz.getSuperclass();
            if (superClass == null) {
                throw e;
            } else {
                return getField(superClass, fieldName);
            }
        }
    }
}

@NonNullByDefault({})
class AttributeChangedMessageDeserializer implements JsonDeserializer<AttributeChangedMessage> {
    @Override
    public @Nullable AttributeChangedMessage deserialize(JsonElement json, Type typeOfT,
            JsonDeserializationContext context) throws JsonParseException {
        JsonObject jsonObject = json.getAsJsonObject();

        Path path = context.deserialize(jsonObject.get("path"), Path.class);
        Long version = jsonObject.get("version").getAsLong();

        JsonElement valueElement = jsonObject.get("value");
        Object value = null;

        // Use ClusterRegistry to find the cluster class
        Class<? extends BaseCluster> clusterClass = ClusterRegistry.CLUSTER_IDS.get(path.clusterId);
        if (clusterClass != null) {
            try {
                // Use reflection to find the field type
                Field field = getField(clusterClass, path.attributeName);
                if (field != null) {
                    value = context.deserialize(valueElement, field.getType());
                }
            } catch (NoSuchFieldException e) {
                logger.debug("Field not found for attribute: {}", path.attributeName, e);
            }
        }

        if (value == null) {
            // Fall back to primitive types if no specific class is found
            if (valueElement.isJsonPrimitive()) {
                JsonPrimitive primitive = valueElement.getAsJsonPrimitive();
                if (primitive.isNumber()) {
                    value = primitive.getAsNumber();
                } else if (primitive.isString()) {
                    value = primitive.getAsString();
                } else if (primitive.isBoolean()) {
                    value = primitive.getAsBoolean();
                }
            } else if (valueElement.isJsonArray()) {
                value = context.deserialize(valueElement.getAsJsonArray(), List.class);
            } else {
                value = valueElement.toString();
            }
        }

        return new AttributeChangedMessage(path, version, value);
    }

    private Field getField(Class<?> clazz, String fieldName) throws NoSuchFieldException {
        try {
            return clazz.getDeclaredField(fieldName);
        } catch (NoSuchFieldException e) {
            Class<?> superClass = clazz.getSuperclass();
            if (superClass == null) {
                throw e;
            } else {
                return getField(superClass, fieldName);
            }
        }
    }
}

/**
 * BigInteger values have to be represented as strings in JSON
 */
@NonNullByDefault({})
class BigIntegerSerializer implements JsonSerializer<BigInteger> {
    @Override
    public JsonElement serialize(BigInteger src, Type typeOfSrc, JsonSerializationContext context) {
        return new JsonPrimitive(src.toString());
    }
}

@NonNullByDefault({})
class MatterEnumDeserializer implements JsonDeserializer<BaseCluster.MatterEnum> {
    @Override
    public BaseCluster.MatterEnum deserialize(JsonElement json, Type typeOfT, JsonDeserializationContext context)
            throws JsonParseException {
        int value = json.getAsInt();
        Class<?> rawType = (Class<?>) typeOfT;

        if (BaseCluster.MatterEnum.class.isAssignableFrom(rawType) && rawType.isEnum()) {
            @SuppressWarnings("unchecked")
            Class<? extends BaseCluster.MatterEnum> enumType = (Class<? extends BaseCluster.MatterEnum>) rawType;
            return BaseCluster.MatterEnum.fromValue(enumType, value);
        }

        throw new JsonParseException("Unable to deserialize " + typeOfT);
    }
}

@NonNullByDefault({})
class EventTriggeredMessageDeserializer implements JsonDeserializer<EventTriggeredMessage> {
    @Override
    public EventTriggeredMessage deserialize(JsonElement json, Type typeOfT, JsonDeserializationContext context)
            throws JsonParseException {
        JsonObject jsonObject = json.getAsJsonObject();
        Path path = context.deserialize(jsonObject.get("path"), Path.class);
        JsonArray eventsArray = jsonObject.getAsJsonArray("events");

        TriggerEvent[] events = new TriggerEvent[eventsArray.size()];
        Class<? extends BaseCluster> clusterClass = ClusterRegistry.CLUSTER_IDS.get(path.clusterId);

        String eventName = path.eventName;
        String className = Character.toUpperCase(eventName.charAt(0)) + eventName.substring(1);
        for (int i = 0; i < eventsArray.size(); i++) {
            JsonObject eventObject = eventsArray.get(i).getAsJsonObject();
            TriggerEvent event = context.deserialize(eventObject, TriggerEvent.class);
            if (clusterClass != null) {
                try {
                    Class<?> eventClass = Class.forName(clusterClass.getName() + "$" + className);
                    event.data = context.deserialize(eventObject.get("data"), eventClass);
                } catch (ClassNotFoundException e) {
                    logger.debug("Event class not found for event: {}", path.eventName, e);
                }
            }
            events[i] = event;
        }
        return new EventTriggeredMessage(path, events);
    }
}

@NonNullByDefault({})
class OctetStringDeserializer implements JsonDeserializer<OctetString> {
    @Override
    public OctetString deserialize(JsonElement json, Type typeOfT, JsonDeserializationContext context)
            throws JsonParseException {
        return new OctetString(json.getAsString());
    }
}

@NonNullByDefault({})
class OctetStringSerializer implements JsonSerializer<OctetString> {
    @Override
    public JsonElement serialize(OctetString src, Type typeOfSrc, JsonSerializationContext context) {
        return new JsonPrimitive(src.toString());
    }
}

/**
 * Get the Gson instance for use in tests
 */
Gson getGson() {
    return gson;
}
}

@@ -0,0 +1,320 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.client;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.eclipse.jdt.annotation.Nullable;
import org.openhab.core.common.ThreadPoolManager;
import org.openhab.core.io.net.http.HttpClientFactory;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ServiceScope;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * A service for managing the Matter Node.js process
 *
 * @author Dan Cunningham - Initial contribution
 */
@Component(service = MatterWebsocketService.class, scope = ServiceScope.SINGLETON)
@NonNullByDefault
public class MatterWebsocketService {
    private final Logger logger = LoggerFactory.getLogger(getClass());
    private static final Pattern LOG_PATTERN = Pattern
            .compile("^\\S+\\s+\\S+\\s+(TRACE|DEBUG|INFO|WARN|ERROR)\\s+(\\S+)\\s+(.*)$");
    private static final String MATTER_JS_PATH = "/matter-server/matter.js";
    // Delay before restarting the node process after it exits, and before notifying listeners that it is ready
    private static final int STARTUP_DELAY_SECONDS = 5;
    // Timeout for shutting down the node process
    private static final int SHUTDOWN_TIMEOUT_SECONDS = 3;
    private final List<NodeProcessListener> processListeners = new ArrayList<>();
    private final ExecutorService executorService = Executors.newFixedThreadPool(4);
    private final ScheduledExecutorService scheduler = ThreadPoolManager
            .getScheduledPool("matter.MatterWebsocketService");
    private @Nullable ScheduledFuture<?> notifyFuture;
    private @Nullable ScheduledFuture<?> restartFuture;
    // The path to the Node.js executable (node or node.exe)
    private final String nodePath;
    // The Node.js process running the matter.js script
    private @Nullable Process nodeProcess;
    // The state of the service: STARTING, READY, or SHUTTING_DOWN
    private ServiceState state = ServiceState.STARTING;
    // The port the node process is listening on
    private int port;

    @Activate
    public MatterWebsocketService(final @Reference HttpClientFactory httpClientFactory) throws IOException {
        NodeJSRuntimeManager nodeManager = new NodeJSRuntimeManager(httpClientFactory.getCommonHttpClient());
        this.nodePath = nodeManager.getNodePath();
        scheduledStart(0);
    }

    @Deactivate
    public void deactivate() {
        stopNode();
        executorService.shutdown();
    }

    public void restart() {
        stopNode();
        scheduledStart(STARTUP_DELAY_SECONDS);
    }

    public void addProcessListener(NodeProcessListener listener) {
        processListeners.add(listener);
        if (state == ServiceState.READY) {
            listener.onNodeReady(port);
        }
    }

    public void removeProcessListener(NodeProcessListener listener) {
        processListeners.remove(listener);
    }

    public void stopNode() {
        logger.debug("stopNode");
        state = ServiceState.SHUTTING_DOWN;
        cancelFutures();
        Process nodeProcess = this.nodeProcess;
        if (nodeProcess != null && nodeProcess.isAlive()) {
            nodeProcess.destroy();
            try {
                // Wait for the process to terminate, then force-kill if it does not
                if (!nodeProcess.waitFor(SHUTDOWN_TIMEOUT_SECONDS, TimeUnit.SECONDS)) {
                    nodeProcess.destroyForcibly();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                logger.debug("Interrupted while waiting for Node process to stop", e);
            }
        }
    }

    public int getPort() {
        return port;
    }

    public boolean isReady() {
        return state == ServiceState.READY;
    }

    private void cancelFutures() {
        ScheduledFuture<?> notifyFuture = this.notifyFuture;
        if (notifyFuture != null) {
            notifyFuture.cancel(true);
        }
        ScheduledFuture<?> restartFuture = this.restartFuture;
        if (restartFuture != null) {
            restartFuture.cancel(true);
        }
    }

    private boolean isRestarting() {
        ScheduledFuture<?> restartFuture = this.restartFuture;
        return restartFuture != null && !restartFuture.isDone();
    }

    private synchronized void scheduledStart(int delay) {
        if (isRestarting()) {
            logger.debug("Restart already scheduled, skipping");
            return;
        }
        logger.debug("Scheduling restart in {} seconds", delay);
        restartFuture = scheduler.schedule(() -> {
            try {
                port = runNodeWithResource(MATTER_JS_PATH);
            } catch (IOException e) {
                logger.warn("Failed to restart the Matter Node process", e);
            }
        }, delay, TimeUnit.SECONDS);
    }

    private int runNodeWithResource(String resourcePath, String... additionalArgs) throws IOException {
        state = ServiceState.STARTING;
        Path scriptPath = extractResourceToTempFile(resourcePath);

        port = findAvailablePort();
        List<String> command = new ArrayList<>();
        command.add(nodePath);
        command.add(scriptPath.toString());
        command.add("--host");
        command.add("localhost");
        command.add("--port");
        command.add(String.valueOf(port));
        command.addAll(List.of(additionalArgs));

        ProcessBuilder pb = new ProcessBuilder(command);
        nodeProcess = pb.start();

        // Start output and error stream readers
        executorService.submit(this::readOutputStream);
        executorService.submit(this::readErrorStream);

        // Wait for the process to exit in a separate thread
        executorService.submit(() -> {
            int exitCode = -1;
            try {
                Process nodeProcess = this.nodeProcess;
                if (nodeProcess != null) {
                    exitCode = nodeProcess.waitFor();
                    logger.debug("Node process exited with code: {}", exitCode);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                logger.debug("Interrupted while waiting for Node process to exit", e);
            } finally {
                try {
                    Files.deleteIfExists(scriptPath);
                    notifyExitListeners(exitCode);
                } catch (IOException e) {
                    logger.debug("Failed to delete temporary script file", e);
                }

                if (state != ServiceState.SHUTTING_DOWN) {
                    logger.debug("trying to restart, state: {}", state);
                    scheduledStart(STARTUP_DELAY_SECONDS);
                }
            }
        });
        return port;
    }

    private void readOutputStream() {
        Process nodeProcess = this.nodeProcess;
        if (nodeProcess != null) {
            processStream(nodeProcess.getInputStream(), "Error reading Node process output", true);
        }
    }

    private void readErrorStream() {
        Process nodeProcess = this.nodeProcess;
        if (nodeProcess != null) {
            processStream(nodeProcess.getErrorStream(), "Error reading Node process error stream", false);
        }
    }

    private void processStream(InputStream inputStream, String errorMessage, boolean triggerNotify) {
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // Only schedule the ready notification once
                if (state == ServiceState.STARTING && triggerNotify && notifyFuture == null) {
                    notifyFuture = scheduler.schedule(() -> {
                        state = ServiceState.READY;
                        this.notifyFuture = null;
                        notifyReadyListeners();
                    }, STARTUP_DELAY_SECONDS, TimeUnit.SECONDS);
                }
                if (logger.isTraceEnabled()) {
                    Matcher matcher = LOG_PATTERN.matcher(line);
                    if (matcher.matches()) {
                        String component = matcher.group(2);
                        String message = matcher.group(3);
                        logger.trace("{}: {}", component, message);
                    } else {
                        // Pass the line as an argument, not as the format string
                        logger.trace("{}", line);
                    }
                }
            }
        } catch (IOException e) {
            if (state != ServiceState.SHUTTING_DOWN) {
                logger.debug("{}", errorMessage, e);
            }
        }
    }

    private void notifyExitListeners(int exitCode) {
        for (NodeProcessListener listener : processListeners) {
            listener.onNodeExit(exitCode);
        }
    }

    private void notifyReadyListeners() {
        for (NodeProcessListener listener : processListeners) {
            listener.onNodeReady(port);
        }
    }

    private Path extractResourceToTempFile(String resourcePath) throws IOException {
        Path tempFile = Files.createTempFile("node-script-", ".js");
        try (InputStream in = getClass().getResourceAsStream(resourcePath)) {
            if (in == null) {
                throw new IOException("Resource not found: " + resourcePath);
            }
            Files.copy(in, tempFile, StandardCopyOption.REPLACE_EXISTING);
        }
        tempFile.toFile().deleteOnExit(); // Ensure the temp file is deleted on JVM exit
        return tempFile;
    }

    private int findAvailablePort() throws IOException {
        ServerSocket serverSocket = null;
        try {
            // Binding to port 0 lets the OS pick a free ephemeral port
            serverSocket = new ServerSocket(0);
            return serverSocket.getLocalPort();
        } finally {
            if (serverSocket != null) {
                try {
                    serverSocket.close();
                } catch (IOException e) {
                    logger.debug("Failed to close ServerSocket", e);
                }
            }
        }
    }

    public interface NodeProcessListener {
        void onNodeExit(int exitCode);

        void onNodeReady(int port);
    }

    public enum ServiceState {
        /**
         * The service is up and ready.
         */
        READY,

        /**
         * The service is in the process of starting but not yet ready.
         */
        STARTING,

        /**
         * The service is in the process of shutting down, so it shouldn't be restarted.
         */
        SHUTTING_DOWN
    }
}

@@ -0,0 +1,298 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.client;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.lang.reflect.Type;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.eclipse.jdt.annotation.Nullable;
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;
import org.eclipse.jetty.client.api.Response;
import org.eclipse.jetty.client.util.InputStreamResponseListener;
import org.eclipse.jetty.http.HttpMethod;
import org.eclipse.jetty.http.HttpStatus;
import org.openhab.core.OpenHAB;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;

/**
 * Manages the Node.js runtime for the Matter binding.
 *
 * This class provides methods for checking the system installed version of Node.js,
 * downloading and extracting the latest version of Node.js, and finding the Node.js
 * executable.
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
class NodeJSRuntimeManager {
    private final Logger logger = LoggerFactory.getLogger(NodeJSRuntimeManager.class);

    private static final String NODE_BASE_VERSION = "v22";
    private static final String NODE_DEFAULT_VERSION = "v22.12.0";
    private static final String NODE_MIN_VERSION = "v18.0.0";

    private static final String NODE_INDEX_URL = "https://nodejs.org/dist/index.json";

    private static final String BASE_URL = "https://nodejs.org/dist/";
    private static final String CACHE_DIR = Paths
            .get(OpenHAB.getUserDataFolder(), "cache", "org.openhab.binding.matter", "node_cache").toString();

    private String platform = "";
    private String arch = "";
    private String nodeExecutable = "";
    private final HttpClient client;

    public NodeJSRuntimeManager(HttpClient client) {
        this.client = client;
        detectPlatformAndArch();
    }

    public String getNodePath() throws IOException {
        // Check if the system installed node is at least the minimum required version
        if (checkSystemInstalledVersion(NODE_MIN_VERSION)) {
            logger.debug("Using system installed node");
            return nodeExecutable;
        }

        // Download the latest version of Node.js if not already installed
        String version = getLatestVersion();
        String cacheDir = CACHE_DIR + File.separator + platform + "-" + arch + File.separator + version;
        Path nodePath = findNodeExecutable(cacheDir, version);

        if (nodePath == null) {
            downloadAndExtract(cacheDir, version);
            nodePath = findNodeExecutable(cacheDir, version);
            if (nodePath == null) {
                throw new IOException("Unable to locate Node.js executable after download and extraction");
            }
        }

        return nodePath.toString();
    }

    private void detectPlatformAndArch() {
        String os = Optional.ofNullable(System.getProperty("os.name")).orElse("unknown").toLowerCase();
        String arch = Optional.ofNullable(System.getProperty("os.arch")).orElse("unknown").toLowerCase();

        if (os.contains("win")) {
            platform = "win";
            nodeExecutable = "node.exe";
        } else if (os.contains("mac")) {
            platform = "darwin";
            nodeExecutable = "node";
        } else if (os.contains("nux")) {
            platform = "linux";
            nodeExecutable = "node";
        } else {
            throw new UnsupportedOperationException("Unsupported operating system");
        }

        if (arch.contains("amd64") || arch.contains("x86_64")) {
            this.arch = "x64";
        } else if (arch.contains("aarch64") || arch.contains("arm64")) {
            this.arch = "arm64";
        } else if (arch.contains("arm")) {
            this.arch = "armv7l";
        } else {
            throw new UnsupportedOperationException("Unsupported architecture");
        }
    }

    private String getLatestVersion() {
        try {
            ContentResponse response = client.newRequest(NODE_INDEX_URL).method(HttpMethod.GET).send();
            String json = response.getContentAsString();
            Gson gson = new Gson();
            Type listType = new TypeToken<List<NodeVersion>>() {
            }.getType();
            List<NodeVersion> versions = gson.fromJson(json, listType);
            if (versions != null) {
                // Compare version components numerically; a plain string comparison would
                // incorrectly sort e.g. v22.9.0 after v22.12.0
                NodeVersion latest = versions.stream().filter(v -> v.version.startsWith(NODE_BASE_VERSION + "."))
                        .max(Comparator.comparing((NodeVersion v) -> v.version,
                                NodeJSRuntimeManager::compareVersions))
                        .orElse(null);
                if (latest != null) {
                    return latest.version;
                } else {
                    logger.debug("Could not find latest version of Node.js, using default version: {}",
                            NODE_DEFAULT_VERSION);
                }
            }
        } catch (Exception e) {
            logger.debug("Could not fetch latest version of Node.js, using default version: {}", NODE_DEFAULT_VERSION,
                    e);
        }
        return NODE_DEFAULT_VERSION;
    }

    private static int compareVersions(String a, String b) {
        String[] as = a.replaceFirst("^v", "").split("\\.");
        String[] bs = b.replaceFirst("^v", "").split("\\.");
        for (int i = 0; i < Math.max(as.length, bs.length); i++) {
            int ai = i < as.length ? Integer.parseInt(as[i]) : 0;
            int bi = i < bs.length ? Integer.parseInt(bs[i]) : 0;
            if (ai != bi) {
                return Integer.compare(ai, bi);
            }
        }
        return 0;
    }

    private @Nullable Path findNodeExecutable(String cacheDir, String version) throws IOException {
        Path rootDir = Paths.get(cacheDir);
        if (!Files.exists(rootDir)) {
            return null;
        }

        try (DirectoryStream<Path> stream = Files.newDirectoryStream(rootDir)) {
            for (Path path : stream) {
                if (Files.isDirectory(path) && path.getFileName().toString().startsWith("node-" + version)) {
                    Path execPath = path.resolve("bin").resolve(nodeExecutable);
                    if (Files.exists(execPath)) {
                        return execPath;
                    }

                    // Windows does not have a 'bin' directory
                    execPath = path.resolve(nodeExecutable);
                    if (Files.exists(execPath)) {
                        return execPath;
                    }
                }
            }
        }
        return null;
    }

    private void downloadAndExtract(String cacheDir, String version) throws IOException {
        String fileName = "node-" + version + "-" + platform + "-" + arch
                + ("win".equals(platform) ? ".zip" : ".tar.gz");
        String downloadUrl = BASE_URL + version + "/" + fileName;

        Path downloadPath = Paths.get(cacheDir, fileName);
        Files.createDirectories(downloadPath.getParent());

        logger.info("Downloading Node.js from: {}", downloadUrl);
        try {
            InputStreamResponseListener listener = new InputStreamResponseListener();
            client.newRequest(downloadUrl).method(HttpMethod.GET).send(listener);
            Response response = listener.get(5, TimeUnit.SECONDS);

            if (response.getStatus() == HttpStatus.OK_200) {
                try (InputStream responseContent = listener.getInputStream()) {
                    Files.copy(responseContent, downloadPath, StandardCopyOption.REPLACE_EXISTING);
                }
            } else {
                throw new IOException("Failed to download Node.js: HTTP " + response.getStatus());
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IOException("Download interrupted", e);
        } catch (TimeoutException | ExecutionException e) {
            throw new IOException("Failed to download Node.js", e);
        }

        logger.debug("Extracting Node.js");
        if ("win".equals(platform)) {
            unzip(downloadPath.toString(), cacheDir);
        } else {
            untar(downloadPath.toString(), cacheDir);
        }

        Files.delete(downloadPath);
    }

    private void unzip(String zipFilePath, String destDir) throws IOException {
        try (ZipInputStream zis = new ZipInputStream(new FileInputStream(zipFilePath))) {
            ZipEntry zipEntry;
            while ((zipEntry = zis.getNextEntry()) != null) {
                // Guard against zip-slip: every entry must resolve inside the destination directory
                Path newPath = Paths.get(destDir, zipEntry.getName()).normalize();
                if (!newPath.startsWith(Paths.get(destDir).normalize())) {
                    throw new IOException("Zip entry outside of target directory: " + zipEntry.getName());
                }
                if (zipEntry.isDirectory()) {
                    Files.createDirectories(newPath);
                } else {
                    Files.createDirectories(newPath.getParent());
                    Files.copy(zis, newPath, StandardCopyOption.REPLACE_EXISTING);
|
||||
}
|
||||
zis.closeEntry();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
private void untar(String tarFilePath, String destDir) throws IOException {
|
||||
ProcessBuilder pb = new ProcessBuilder("tar", "-xzf", tarFilePath, "-C", destDir);
|
||||
Process p = pb.start();
|
||||
try {
|
||||
p.waitFor();
|
||||
} catch (InterruptedException e) {
|
||||
Thread.currentThread().interrupt();
|
||||
throw new IOException("Interrupted while extracting tar file", e);
|
||||
}
|
||||
}
|
||||
|
||||
private boolean checkSystemInstalledVersion(String requiredVersion) {
|
||||
try {
|
||||
Process process = new ProcessBuilder(nodeExecutable, "--version").start();
|
||||
try (BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()))) {
|
||||
String versionLine = reader.readLine();
|
||||
if (versionLine == null || !versionLine.startsWith("v")) {
|
||||
logger.debug("unexpected node output {}", versionLine);
|
||||
return false;
|
||||
}
|
||||
logger.debug("node found {}", versionLine);
|
||||
String currentVersion = versionLine.substring(1); // Remove the leading 'v'
|
||||
return compareVersions(currentVersion, requiredVersion) >= 0;
|
||||
}
|
||||
} catch (Exception e) {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
private int compareVersions(String version1, String version2) {
|
||||
if (version1.indexOf("v") == 0) {
|
||||
version1 = version1.substring(1);
|
||||
}
|
||||
if (version2.indexOf("v") == 0) {
|
||||
version2 = version2.substring(1);
|
||||
}
|
||||
String[] parts1 = version1.split("\\.");
|
||||
String[] parts2 = version2.split("\\.");
|
||||
|
||||
int length = Math.max(parts1.length, parts2.length);
|
||||
for (int i = 0; i < length; i++) {
|
||||
int v1 = i < parts1.length ? Integer.parseInt(parts1[i]) : 0;
|
||||
int v2 = i < parts2.length ? Integer.parseInt(parts2[i]) : 0;
|
||||
|
||||
if (v1 != v2) {
|
||||
return Integer.compare(v1, v2);
|
||||
}
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static class NodeVersion {
|
||||
public String version = "";
|
||||
|
||||
@Override
|
||||
public String toString() {
|
||||
return version;
|
||||
}
|
||||
}
|
||||
}
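The `compareVersions` helper above compares dotted version strings numerically, segment by segment, padding missing segments with zero; this is why `18.3` correctly sorts below `18.10`, where a plain string comparison would not. A self-contained sketch of the same approach (the class name `VersionCompare` is hypothetical, for illustration only):

```java
// Hypothetical standalone sketch of the numeric dotted-version comparison
// used by compareVersions above; not part of the binding itself.
public class VersionCompare {

    // Compares "major.minor.patch" style versions numerically. A leading 'v'
    // is stripped and a missing segment is treated as 0, so "18" == "18.0.0".
    public static int compare(String a, String b) {
        if (a.startsWith("v")) {
            a = a.substring(1);
        }
        if (b.startsWith("v")) {
            b = b.substring(1);
        }
        String[] pa = a.split("\\.");
        String[] pb = b.split("\\.");
        int len = Math.max(pa.length, pb.length);
        for (int i = 0; i < len; i++) {
            int va = i < pa.length ? Integer.parseInt(pa[i]) : 0;
            int vb = i < pb.length ? Integer.parseInt(pb[i]) : 0;
            if (va != vb) {
                return Integer.compare(va, vb);
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(compare("18.3", "18.10"));   // negative: 18.3 is older
        System.out.println(compare("v18.2.0", "18.2")); // zero: equal after zero-padding
    }
}
```

Note that this only handles purely numeric segments; pre-release suffixes such as `18.1.0-rc1` would throw a `NumberFormatException`, which is acceptable here because the Node.js release index only publishes numeric versions.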
@@ -0,0 +1,27 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.client;

import org.eclipse.jdt.annotation.NonNullByDefault;
import org.openhab.binding.matter.internal.client.dto.ws.NodeStateMessage;

/**
 * A listener for node state changes
 *
 * @author Dan Cunningham - Initial contribution
 */
@NonNullByDefault
public interface NodeStateListener {

    void onEvent(NodeStateMessage message);
}
@@ -0,0 +1,29 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.client.dto;

import java.util.List;
import java.util.Map;

import org.openhab.binding.matter.internal.client.dto.cluster.gen.BaseCluster;

/**
 * Represents a Matter endpoint
 *
 * @author Dan Cunningham - Initial contribution
 */
public class Endpoint {
    public Integer number;
    public Map<String, BaseCluster> clusters;
    public List<Endpoint> children;
}
@@ -0,0 +1,25 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.client.dto;

import java.math.BigInteger;

/**
 * Represents a Matter node
 *
 * @author Dan Cunningham - Initial contribution
 */
public class Node {
    public BigInteger id;
    public Endpoint rootEndpoint;
}
@@ -0,0 +1,23 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.client.dto;

/**
 * Represents the pairing codes for a Matter device
 *
 * @author Dan Cunningham - Initial contribution
 */
public class PairingCodes {
    public String manualPairingCode;
    public String qrPairingCode;
}
@@ -0,0 +1,75 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */
package org.openhab.binding.matter.internal.client.dto.cluster;

import java.util.Collections;
import java.util.Map;

import org.eclipse.jdt.annotation.Nullable;

/**
 * The {@link ClusterCommand}
 *
 * @author Dan Cunningham - Initial contribution
 */
public class ClusterCommand {
    public String commandName;
    public Map<String, Object> args;

    /**
     * @param commandName the name of the command to invoke
     * @param args the command arguments
     */
    public ClusterCommand(String commandName, Map<String, Object> args) {
        this.commandName = commandName;
        this.args = args;
    }

    public ClusterCommand(String commandName) {
        this.commandName = commandName;
        this.args = Collections.emptyMap();
    }

    @Override
    public boolean equals(@Nullable Object obj) {
        if (this == obj) {
            return true;
        }
        if (obj == null || getClass() != obj.getClass()) {
            return false;
        }
        ClusterCommand other = (ClusterCommand) obj;
        if (commandName == null) {
            if (other.commandName != null) {
                return false;
            }
        } else if (!commandName.equals(other.commandName)) {
            return false;
        }
        if (args == null) {
            return other.args == null;
        }
        return args.equals(other.args);
    }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + ((commandName == null) ? 0 : commandName.hashCode());
        result = prime * result + ((args == null) ? 0 : args.hashCode());
        return result;
    }
}
|
|
@ -0,0 +1,747 @@
|
|||
/*
|
||||
* Copyright (c) 2010-2025 Contributors to the openHAB project
|
||||
*
|
||||
* See the NOTICE file(s) distributed with this work for additional
|
||||
* information.
|
||||
*
|
||||
* This program and the accompanying materials are made available under the
|
||||
* terms of the Eclipse Public License 2.0 which is available at
|
||||
* http://www.eclipse.org/legal/epl-2.0
|
||||
*
|
||||
* SPDX-License-Identifier: EPL-2.0
|
||||
*/
|
||||
|
||||
// AUTO-GENERATED, DO NOT EDIT!
|
||||
|
||||
package org.openhab.binding.matter.internal.client.dto.cluster.gen;
|
||||
|
||||
import java.math.BigInteger;
|
||||
import java.util.LinkedHashMap;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
|
||||
import org.eclipse.jdt.annotation.NonNull;
|
||||
import org.openhab.binding.matter.internal.client.dto.cluster.ClusterCommand;
|
||||
|
||||
/**
|
||||
* AccessControl
|
||||
*
|
||||
* @author Dan Cunningham - Initial contribution
|
||||
*/
|
||||
public class AccessControlCluster extends BaseCluster {
|
||||
|
||||
public static final int CLUSTER_ID = 0x001F;
|
||||
public static final String CLUSTER_NAME = "AccessControl";
|
||||
public static final String CLUSTER_PREFIX = "accessControl";
|
||||
public static final String ATTRIBUTE_CLUSTER_REVISION = "clusterRevision";
|
||||
public static final String ATTRIBUTE_FEATURE_MAP = "featureMap";
|
||||
public static final String ATTRIBUTE_ACL = "acl";
|
||||
public static final String ATTRIBUTE_EXTENSION = "extension";
|
||||
public static final String ATTRIBUTE_SUBJECTS_PER_ACCESS_CONTROL_ENTRY = "subjectsPerAccessControlEntry";
|
||||
public static final String ATTRIBUTE_TARGETS_PER_ACCESS_CONTROL_ENTRY = "targetsPerAccessControlEntry";
|
||||
public static final String ATTRIBUTE_ACCESS_CONTROL_ENTRIES_PER_FABRIC = "accessControlEntriesPerFabric";
|
||||
public static final String ATTRIBUTE_COMMISSIONING_ARL = "commissioningArl";
|
||||
public static final String ATTRIBUTE_ARL = "arl";
|
||||
|
||||
public Integer clusterRevision; // 65533 ClusterRevision
|
||||
public FeatureMap featureMap; // 65532 FeatureMap
|
||||
/**
|
||||
* An attempt to add an Access Control Entry when no more entries are available shall result in a RESOURCE_EXHAUSTED
|
||||
* error being reported and the ACL attribute shall NOT have the entry added to it. See access control limits.
|
||||
* See the AccessControlEntriesPerFabric attribute for the actual value of the number of entries per fabric
|
||||
* supported by the server.
|
||||
* Each Access Control Entry codifies a single grant of privilege on this Node, and is used by the Access Control
|
||||
* Privilege Granting algorithm to determine if a subject has privilege to interact with targets on the Node.
|
||||
*/
|
||||
public List<AccessControlEntryStruct> acl; // 0 list RW F A
|
||||
/**
|
||||
* If present, the Access Control Extensions may be used by Administrators to store arbitrary data related to
|
||||
* fabric’s Access Control Entries.
|
||||
* The Access Control Extension list shall support a single extension entry per supported fabric.
|
||||
*/
|
||||
public List<AccessControlExtensionStruct> extension; // 1 list RW F A
|
||||
/**
|
||||
* This attribute shall provide the minimum number of Subjects per entry that are supported by this server.
|
||||
* Since reducing this value over time may invalidate ACL entries already written, this value shall NOT decrease
|
||||
* across time as software updates occur that could impact this value. If this is a concern for a given
|
||||
* implementation, it is recommended to only use the minimum value required and avoid reporting a higher value than
|
||||
* the required minimum.
|
||||
*/
|
||||
public Integer subjectsPerAccessControlEntry; // 2 uint16 R V
|
||||
/**
|
||||
* This attribute shall provide the minimum number of Targets per entry that are supported by this server.
|
||||
* Since reducing this value over time may invalidate ACL entries already written, this value shall NOT decrease
|
||||
* across time as software updates occur that could impact this value. If this is a concern for a given
|
||||
* implementation, it is recommended to only use the minimum value required and avoid reporting a higher value than
|
||||
* the required minimum.
|
||||
*/
|
||||
public Integer targetsPerAccessControlEntry; // 3 uint16 R V
|
||||
/**
|
||||
* This attribute shall provide the minimum number of ACL Entries per fabric that are supported by this server.
|
||||
* Since reducing this value over time may invalidate ACL entries already written, this value shall NOT decrease
|
||||
* across time as software updates occur that could impact this value. If this is a concern for a given
|
||||
* implementation, it is recommended to only use the minimum value required and avoid reporting a higher value than
|
||||
* the required minimum.
|
||||
*/
|
||||
public Integer accessControlEntriesPerFabric; // 4 uint16 R V
|
||||
/**
|
||||
* This attribute shall provide the set of CommissioningAccessRestrictionEntryStruct applied during commissioning on
|
||||
* a managed device.
|
||||
* When present, the CommissioningARL attribute shall indicate the access restrictions applying during
|
||||
* commissioning.
|
||||
* Attempts to access data model elements described by an entry in the CommissioningARL attribute during
|
||||
* commissioning shall result in an error of ACCESS_RESTRICTED. See Access Control Model for more information about
|
||||
* the features related to controlling access to a Node’s Endpoint Clusters ("Targets" hereafter) from
|
||||
* other Nodes.
|
||||
* See Section 9.10.4.2.1, “Managed Device Feature Usage Restrictions” for limitations on the use of access
|
||||
* restrictions.
|
||||
*/
|
||||
public List<CommissioningAccessRestrictionEntryStruct> commissioningArl; // 5 list R V
|
||||
/**
|
||||
* This attribute shall provide the set of AccessRestrictionEntryStruct applied to the associated fabric on a
|
||||
* managed device.
|
||||
* When present, the ARL attribute shall indicate the access restrictions applying to the accessing fabric. In
|
||||
* contrast, the CommissioningARL attribute indicates the accessing restrictions that apply when there is no
|
||||
* accessing fabric, such as during commissioning.
|
||||
* The access restrictions are externally added/removed based on the particular relationship the device hosting this
|
||||
* server has with external entities such as its owner, external service provider, or end-user.
|
||||
* Attempts to access data model elements described by an entry in the ARL attribute for the accessing fabric shall
|
||||
* result in an error of ACCESS_RESTRICTED. See Access Control Model for more information about the features related
|
||||
* to controlling access to a Node’s Endpoint Clusters ("Targets" hereafter) from other Nodes.
|
||||
* See Section 9.10.4.2.1, “Managed Device Feature Usage Restrictions” for limitations on the use of access
|
||||
* restrictions.
|
||||
*/
|
||||
public List<AccessRestrictionEntryStruct> arl; // 6 list R F V
|
||||
// Structs
|
||||
|
||||
/**
|
||||
* The cluster shall generate AccessControlEntryChanged events whenever its ACL attribute data is changed by an
|
||||
* Administrator.
|
||||
* • Each added entry shall generate an event with ChangeType Added.
|
||||
* • Each changed entry shall generate an event with ChangeType Changed.
|
||||
* • Each removed entry shall generate an event with ChangeType Removed.
|
||||
*/
|
||||
public class AccessControlEntryChanged {
|
||||
/**
|
||||
* The Node ID of the Administrator that made the change, if the change occurred via a CASE session.
|
||||
* Exactly one of AdminNodeID and AdminPasscodeID shall be set, depending on whether the change occurred via a
|
||||
* CASE or PASE session; the other shall be null.
|
||||
*/
|
||||
public BigInteger adminNodeId; // node-id
|
||||
/**
|
||||
* The Passcode ID of the Administrator that made the change, if the change occurred via a PASE session.
|
||||
* Non-zero values are reserved for future use (see PasscodeId generation in PBKDFParamRequest).
|
||||
* Exactly one of AdminNodeID and AdminPasscodeID shall be set, depending on whether the change occurred via a
|
||||
* CASE or PASE session; the other shall be null.
|
||||
*/
|
||||
public Integer adminPasscodeId; // uint16
|
||||
/**
|
||||
* The type of change as appropriate.
|
||||
*/
|
||||
public ChangeTypeEnum changeType; // ChangeTypeEnum
|
||||
/**
|
||||
* The latest value of the changed entry.
|
||||
* This field SHOULD be set if resources are adequate for it; otherwise it shall be set to NULL if resources are
|
||||
* scarce.
|
||||
*/
|
||||
public AccessControlEntryStruct latestValue; // AccessControlEntryStruct
|
||||
public Integer fabricIndex; // FabricIndex
|
||||
|
||||
public AccessControlEntryChanged(BigInteger adminNodeId, Integer adminPasscodeId, ChangeTypeEnum changeType,
|
||||
AccessControlEntryStruct latestValue, Integer fabricIndex) {
|
||||
this.adminNodeId = adminNodeId;
|
||||
this.adminPasscodeId = adminPasscodeId;
|
||||
this.changeType = changeType;
|
||||
this.latestValue = latestValue;
|
||||
this.fabricIndex = fabricIndex;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* The cluster shall generate AccessControlExtensionChanged events whenever its extension attribute data is changed
|
||||
* by an Administrator.
|
||||
* • Each added extension shall generate an event with ChangeType Added.
|
||||
* • Each changed extension shall generate an event with ChangeType Changed.
|
||||
* • Each removed extension shall generate an event with ChangeType Removed.
|
||||
*/
|
||||
public class AccessControlExtensionChanged {
|
||||
/**
|
||||
* The Node ID of the Administrator that made the change, if the change occurred via a CASE session.
|
||||
* Exactly one of AdminNodeID and AdminPasscodeID shall be set, depending on whether the change occurred via a
|
||||
* CASE or PASE session; the other shall be null.
|
||||
*/
|
||||
public BigInteger adminNodeId; // node-id
|
||||
/**
|
||||
* The Passcode ID of the Administrator that made the change, if the change occurred via a PASE session.
|
||||
* Non-zero values are reserved for future use (see PasscodeId generation in PBKDFParamRequest).
|
||||
* Exactly one of AdminNodeID and AdminPasscodeID shall be set, depending on whether the change occurred via a
|
||||
* CASE or PASE session; the other shall be null.
|
||||
*/
|
||||
public Integer adminPasscodeId; // uint16
|
||||
/**
|
||||
* The type of change as appropriate.
|
||||
*/
|
||||
public ChangeTypeEnum changeType; // ChangeTypeEnum
|
||||
/**
|
||||
* The latest value of the changed extension.
|
||||
* This field SHOULD be set if resources are adequate for it; otherwise it shall be set to NULL if resources are
|
||||
* scarce.
|
||||
*/
|
||||
public AccessControlExtensionStruct latestValue; // AccessControlExtensionStruct
|
||||
public Integer fabricIndex; // FabricIndex
|
||||
|
||||
public AccessControlExtensionChanged(BigInteger adminNodeId, Integer adminPasscodeId, ChangeTypeEnum changeType,
|
||||
AccessControlExtensionStruct latestValue, Integer fabricIndex) {
|
||||
this.adminNodeId = adminNodeId;
|
||||
this.adminPasscodeId = adminPasscodeId;
|
||||
this.changeType = changeType;
|
||||
this.latestValue = latestValue;
|
||||
this.fabricIndex = fabricIndex;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* The cluster shall generate a FabricRestrictionReviewUpdate event to indicate completion of a fabric restriction
|
||||
* review. Due to the requirement to generate this event within a bound time frame of successful receipt of the
|
||||
* ReviewFabricRestrictions command, this event may include additional steps that the client may present to the user
|
||||
* in order to help the user locate the user interface for the Managed Device feature.
|
||||
*/
|
||||
public class FabricRestrictionReviewUpdate {
|
||||
/**
|
||||
* This field shall indicate the Token that can be used to correlate a ReviewFabricRestrictionsResponse with a
|
||||
* FabricRestrictionReviewUpdate event.
|
||||
*/
|
||||
public BigInteger token; // uint64
|
||||
/**
|
||||
* This field shall provide human readable text that may be displayed to the user to help them locate the user
|
||||
* interface for managing access restrictions for each fabric.
|
||||
* A device SHOULD implement the Localization Configuration Cluster when it has no other means to determine the
|
||||
* locale to use for this text.
|
||||
* Examples include "Please try again and immediately access device display for further instructions."
|
||||
* or "Please check email associated with your Acme account."
|
||||
*/
|
||||
public String instruction; // string
|
||||
/**
|
||||
* This field shall indicate the URL for the service associated with the device maker which the user can visit
|
||||
* to manage fabric limitations. The syntax of this field shall follow the syntax as specified in RFC 1738 and
|
||||
* shall use the https scheme for internet-hosted URLs.
|
||||
* • The URL may embed the token, fabric index, fabric vendor, or other information transparently in order to
|
||||
* pass context about the originating ReviewFabricRestrictions command to the service associated with the URL.
|
||||
* The service associated with the device vendor may perform vendor ID verification on the fabric from which the
|
||||
* ReviewFabricRestrictions command originated.
|
||||
* • If the device grants the request, the ARL attribute in the Access Control Cluster shall be updated to
|
||||
* reflect the new access rights and a successful response shall be returned to the device making the request
|
||||
* using the MTaer field of the callbackUrl. If the request is denied, the ARL attribute shall remain unchanged
|
||||
* and a failure response shall be returned to the device making the request using the MTaer field of the
|
||||
* callbackUrl.
|
||||
* • The device using this mechanism shall provide a service at the URL that can accept requests for additional
|
||||
* access and return responses indicating whether the requests were granted or denied.
|
||||
* • This URL will typically lead to a server which (e.g. by looking at the User-Agent) redirects the user to
|
||||
* allow viewing, downloading, installing or using a manufacturer-provided means for guiding the user through
|
||||
* the process to review and approve or deny the request. The device manufacturer may choose to use a
|
||||
* constructed URL which is valid in a HTTP GET request (i.e. dedicated for the product) such as, for example,
|
||||
* https://domain.example/arl-app?vid=FFF1& pid=1234. If a client follows or launches the
|
||||
* ARLRequestFlowUrl, it shall expand it as described in Section 9.10.9.3.4, “ARLRequestFlowUrl format”.
|
||||
* • A manufacturer contemplating using this flow should realize that
|
||||
* ◦ This flow typically requires internet access to access the URL, and access extension may fail when internet
|
||||
* connectivity is not available.
|
||||
* ◦ If the flow prefers to redirect the user to an app which is available on popular platforms, it SHOULD also
|
||||
* provide a fallback option such as a web browser interface to ensure users can complete access extension.
|
||||
* ### ARLRequestFlowUrl format
|
||||
* The ARLRequestFlowUrl shall contain a query component (see RFC 3986 section 3.4) composed of one or more
|
||||
* key-value pairs:
|
||||
* • The query shall use the & delimiter between key/value pairs.
|
||||
* • The key-value pairs shall in the format name=<value> where name is the key name, and
|
||||
* <value> is the contents of the value encoded with proper URL-encoded escaping.
|
||||
* • If key MTcu is present, it shall have a value of "_" (i.e. MTcu=_). This is the
|
||||
* "callback URL (CallbackUrl) placeholder".
|
||||
* • Any key whose name begins with MT not mentioned in the previous bullets shall be reserved for future use by
|
||||
* this specification. Manufacturers shall NOT include query keys starting with MT in the ARLRequestFlowUrl
|
||||
* unless they are referenced by a version of this specification.
|
||||
* Any other element in the ARLRequestFlowUrl query field not covered by the above rules, as well as the
|
||||
* fragment field (if present), shall remain including the order of query key/value pairs present.
|
||||
* Once the URL is obtained, it shall be expanded to form a final URL (ExpandedARLRequestFlowUrl) by proceeding
|
||||
* with the following substitution algorithm on the original ARLRequestFlowUrl:
|
||||
* 1. If key MTcu is present, compute the CallbackUrl desired (see Section 9.10.9.3.5, “CallbackUrl format for
|
||||
* ARL Request Flow response”), and substitute the placeholder value "_" (i.e. in MTcu=_) in the
|
||||
* ARLRequestFlowUrl with the desired contents, encoded with proper URL-encoded escaping (see RFC 3986 section
|
||||
* 2).
|
||||
* The final URL after expansion (ExpandedARLRequestFlowUrl) shall be the one to follow, rather than the
|
||||
* original value obtained from the FabricRestrictionReviewUpdate event.
|
||||
* ### CallbackUrl format for ARL Request Flow response
|
||||
* If a CallbackUrl field (i.e. MTcu=) query field placeholder is present in the ARLRequestFlowUrl, the
|
||||
* client may replace the placeholder value "_" in the ExpandedARLRequestFlowUrl with a URL that the
|
||||
* manufacturer flow can use to make a smooth return to the client when the ARL flow has terminated.
|
||||
* This URL field may contain a query component (see RFC 3986 section 3.4). If a query is present, it shall be
|
||||
* composed of one or more key-value pairs:
|
||||
* • The query shall use the & delimiter between key/value pairs.
|
||||
* • The key-value pairs shall follow the format name=<value> where name is the key name, and
|
||||
* <value> is the contents of the value encoded with proper URL-encoded escaping.
|
||||
* • If key MTaer is present, it shall have a value of "_" (i.e. MTaer=_). This is the
|
||||
* placeholder for a "access extension response" provided by the manufacturer flow to the client. The
|
||||
* manufacturer flow shall replace this placeholder with the final status of the access extension request, which
|
||||
* shall be formatted following Expansion of CallbackUrl by the manufacturer custom flow and encoded with proper
|
||||
* URL-encoded escaping.
|
||||
* • Any key whose name begins with MT not mentioned in the previous bullets shall be reserved for future use by
|
||||
* this specification.
|
||||
* Any other element in the CallbackUrl query field not covered by the above rules, as well as the fragment
|
||||
* field (if present), shall remain as provided by the client through embedding within the
|
||||
* ExpandedARLRequestFlowUrl, including the order of query key/value pairs present.
|
||||
* Once the CallbackUrl is obtained by the manufacturer flow, it may be expanded to form a final
|
||||
* ExpandedARLRequestCallbackUrl URL to be used by proceeding with the following substitution algorithm on the
|
||||
* provided CallbackUrl:
|
||||
* • If key MTaer is present, the manufacturer custom flow having received the initial query containing the
|
||||
* CallbackUrl shall substitute the placeholder value "_" (i.e. in MTaer=_) in the CallbackUrl
|
||||
* with the final status of the access extension request flow which shall be one of the following. Any value
|
||||
* returned in the MTaer field not listed above shall be considered an error and shall be treated as
|
||||
* GeneralFailure.
|
||||
* ◦ Success - The flow completed successfully and the ARL attribute was updated. The client may now read the
|
||||
* ARL attribute to determine the new access restrictions.
|
||||
* ◦ NoChange - The ARL attribute was already listing minimum restrictions for the requesting fabric.
|
||||
* ◦ GeneralFailure - The flow failed for an unspecified reason.
|
||||
* ◦ FlowAuthFailure - The user failed to authenticate to the flow.
|
||||
* ◦ NotFound - Access extension failed because the target fabric was not found.
* A manufacturer custom flow having received an ExpandedARLRequestFlowUrl SHOULD attempt to open the
* ExpandedARLRequestCallbackUrl, on completion of the request, if an ExpandedARLRequestCallbackUrl was computed
* from the CallbackUrl and opening such a URL is supported.
* ### Examples of ARLRequestFlowUrl URLs
* Below are some examples of valid ExpandedARLRequestFlowUrl for several valid values of ARLRequestFlowUrl, as
* well as some examples of invalid values of ARLRequestFlowUrl:
* • Invalid URL with no query string: http scheme is not allowed:
* ◦ http://company.domain.example/matter/arl/vFFF1p1234
* • Valid URL:
* ◦ https://company.domain.example/matter/arl/vFFF1p1234
* • Valid URL, CallbackUrl requested:
* ◦ Before expansion:
* https://company.domain.example/matter/arl?vid=FFF1&pid=1234&MTcu=_
* ◦ After expansion:
* https://company.domain.example/matter/arl?vid=FFF1&pid=1234&MTcu=https%3A%2F%2Fc
* lient.domain.example%2Fcb%3Ftoken%3DmAsJ6_vqbr-vjDiG_w%253D%253D%26MTaer%3D_
* ◦ The ExpandedARLRequestFlowUrl URL contains:
* ▪ A CallbackUrl with a client-provided arbitrary token= key/value pair and the MTaer= key/value
* pair place-holder to indicate support for a return access extension completion status:
* https://client.domain.example/cb?token=mAsJ6_vqbr-vjDiG_w%3D%3D&MTaer=_
* ▪ After expansion of the CallbackUrl (MTcu key) into an ExpandedCallbackUrl, with an example return access
* extension completion status of Success, the ExpandedARLRequestCallbackUrl would be:
* https://client.domain.example/cb?token=mAsJ6_vqbr-vjDiG_w%3D%3D&MTaer=Success
* Note that the MTcu key/value pair was initially provided URL-encoded within the ExpandedARLRequestFlowUrl URL
* and the MTaer=_ key/value pair placeholder now contains a substituted returned completion status.
* • Invalid URL, due to the MTza=79 key/value pair using an MT-prefixed key reserved for future use:
* ◦ https://company.domain.example/matter/arl?vid=FFF1&pid=1234&MTop=_&MTza=79
*/
public String arlRequestFlowUrl; // string
public Integer fabricIndex; // FabricIndex

public FabricRestrictionReviewUpdate(BigInteger token, String instruction, String arlRequestFlowUrl,
        Integer fabricIndex) {
    this.token = token;
    this.instruction = instruction;
    this.arlRequestFlowUrl = arlRequestFlowUrl;
    this.fabricIndex = fabricIndex;
}
}

public class AccessControlTargetStruct {
public Integer cluster; // cluster-id
public Integer endpoint; // endpoint-no
public Integer deviceType; // devtype-id

public AccessControlTargetStruct(Integer cluster, Integer endpoint, Integer deviceType) {
    this.cluster = cluster;
    this.endpoint = endpoint;
    this.deviceType = deviceType;
}
}

public class AccessControlEntryStruct {
/**
* The privilege field shall specify the level of privilege granted by this Access Control Entry.
* Each privilege builds upon its predecessor, expanding the set of actions that can be performed upon a Node.
* Administer is the highest privilege, and is special as it pertains to the administration of privileges
* itself, via the Access Control Cluster.
* When a Node is granted a particular privilege, it is also implicitly granted all logically lower privilege
* levels as well. The following diagram illustrates how the higher privilege levels subsume the lower privilege
* levels:
* Individual clusters shall define whether attributes are readable, writable, or both readable and writable.
* Clusters also shall define which privilege is minimally required to be able to perform a particular read or
* write action on those attributes, or invoke particular commands. Device type specifications may further
* restrict the privilege required.
* The Access Control Cluster shall require the Administer privilege to observe and modify the Access Control
* Cluster itself. The Administer privilege shall NOT be used on Access Control Entries which use the Group auth
* mode.
*/
public AccessControlEntryPrivilegeEnum privilege; // AccessControlEntryPrivilegeEnum
/**
* The AuthMode field shall specify the authentication mode required by this Access Control Entry.
*/
public AccessControlEntryAuthModeEnum authMode; // AccessControlEntryAuthModeEnum
/**
* The subjects field shall specify a list of Subject IDs, to which this Access Control Entry grants access.
* Device types may impose additional constraints on the minimum number of subjects per Access Control Entry.
* An attempt to create an entry with more subjects than the node can support shall result in a
* RESOURCE_EXHAUSTED error and the entry shall NOT be created.
* ### Subject ID shall be of type uint64 with semantics depending on the entry’s AuthMode as follows:
* An empty subjects list indicates a wildcard; that is, this entry shall grant access to any Node that
* successfully authenticates via AuthMode. The subjects list shall NOT be empty if the entry’s AuthMode is
* PASE.
* The PASE AuthMode is reserved for future use (see Section 6.6.2.9, “Bootstrapping of the Access Control
* Cluster”). An attempt to write an entry with AuthMode set to PASE shall fail with a status code of
* CONSTRAINT_ERROR.
* For PASE authentication, the Passcode ID identifies the required passcode verifier, and shall be 0 for the
* default commissioning passcode.
* For CASE authentication, the Subject ID is a distinguished name within the Operational Certificate shared
* during CASE session establishment, the type of which is determined by its range to be one of:
* • a Node ID, which identifies the required source node directly (by ID)
* • a CASE Authenticated Tag, which identifies the required source node indirectly (by tag)
* For Group authentication, the Group ID identifies the required group, as defined in the Group Key Management
* Cluster.
*/
public List<BigInteger> subjects; // list
/**
* The targets field shall specify a list of AccessControlTargetStruct, which define the clusters on this Node
* to which this Access Control Entry grants access.
* Device types may impose additional constraints on the minimum number of targets per Access Control Entry.
* An attempt to create an entry with more targets than the node can support shall result in a
* RESOURCE_EXHAUSTED error and the entry shall NOT be created.
* A single target shall contain at least one field (Cluster, Endpoint, or DeviceType), and shall NOT contain
* both an Endpoint field and a DeviceType field.
* A target grants access based on the presence of fields as follows:
* An empty targets list indicates a wildcard: that is, this entry shall grant access to all cluster instances
* on all endpoints on this Node.
*/
public List<AccessControlTargetStruct> targets; // list
public Integer fabricIndex; // FabricIndex

public AccessControlEntryStruct(AccessControlEntryPrivilegeEnum privilege,
        AccessControlEntryAuthModeEnum authMode, List<BigInteger> subjects,
        List<AccessControlTargetStruct> targets, Integer fabricIndex) {
    this.privilege = privilege;
    this.authMode = authMode;
    this.subjects = subjects;
    this.targets = targets;
    this.fabricIndex = fabricIndex;
}
}

public class AccessControlExtensionStruct {
/**
* This field may be used by manufacturers to store arbitrary TLV-encoded data related to a fabric’s Access
* Control Entries.
* The contents shall consist of a top-level anonymous list; each list element shall include a profile-specific
* tag encoded in fully-qualified form.
* Administrators may iterate over this list of elements, and interpret selected elements at their discretion.
* The content of each element is not specified, but may be coordinated among manufacturers at their discretion.
*/
public OctetString data; // octstr
public Integer fabricIndex; // FabricIndex

public AccessControlExtensionStruct(OctetString data, Integer fabricIndex) {
    this.data = data;
    this.fabricIndex = fabricIndex;
}
}

/**
* This structure describes an access restriction that would be applied to a specific data model element on a given
* endpoint/cluster pair (see AccessRestrictionEntryStruct).
*/
public class AccessRestrictionStruct {
/**
* This field shall indicate the type of restriction, for example, AttributeAccessForbidden.
*/
public AccessRestrictionTypeEnum type; // AccessRestrictionTypeEnum
/**
* This field shall indicate the element Manufacturer Extensible Identifier (MEI) associated with the element
* type subject to the access restriction, based upon the AccessRestrictionTypeEnum. When the Type is
* AttributeAccessForbidden or AttributeWriteForbidden, this value shall be considered of type attrib-id (i.e.
* an attribute identifier). When the Type is CommandForbidden, this value shall be considered of type
* command-id (i.e. a command identifier). When the Type is EventForbidden, this value shall be considered of
* type event-id (i.e. an event identifier).
* A null value shall indicate the wildcard value for the given value of Type (i.e. all elements associated with
* the Type under the associated endpoint and cluster for the containing AccessRestrictionEntryStruct).
*/
public Integer id; // uint32

public AccessRestrictionStruct(AccessRestrictionTypeEnum type, Integer id) {
    this.type = type;
    this.id = id;
}
}

/**
* This structure describes a current access restriction on the fabric.
*/
public class AccessRestrictionEntryStruct {
/**
* This field shall indicate the endpoint having associated access restrictions scoped to the associated fabric
* of the list containing the entry.
*/
public Integer endpoint; // endpoint-no
/**
* This field shall indicate the cluster having associated access restrictions under the entry’s Endpoint,
* scoped to the associated fabric of the list containing the entry.
*/
public Integer cluster; // cluster-id
/**
* This field shall indicate the set of restrictions applying to the Cluster under the given Endpoint, scoped to
* the associated fabric of the list containing the entry.
* This list shall NOT be empty.
*/
public List<AccessRestrictionStruct> restrictions; // list
public Integer fabricIndex; // FabricIndex

public AccessRestrictionEntryStruct(Integer endpoint, Integer cluster,
        List<AccessRestrictionStruct> restrictions, Integer fabricIndex) {
    this.endpoint = endpoint;
    this.cluster = cluster;
    this.restrictions = restrictions;
    this.fabricIndex = fabricIndex;
}
}

/**
* This structure describes a current access restriction when there is no accessing fabric.
*/
public class CommissioningAccessRestrictionEntryStruct {
/**
* This field shall indicate the endpoint having associated access restrictions scoped to the associated fabric
* of the list containing the entry.
*/
public Integer endpoint; // endpoint-no
/**
* This field shall indicate the cluster having associated access restrictions under the entry’s Endpoint,
* scoped to the associated fabric of the list containing the entry.
*/
public Integer cluster; // cluster-id
/**
* This field shall indicate the set of restrictions applying to the Cluster under the given Endpoint, scoped to
* the associated fabric of the list containing the entry.
* This list shall NOT be empty.
*/
public List<AccessRestrictionStruct> restrictions; // list

public CommissioningAccessRestrictionEntryStruct(Integer endpoint, Integer cluster,
        List<AccessRestrictionStruct> restrictions) {
    this.endpoint = endpoint;
    this.cluster = cluster;
    this.restrictions = restrictions;
}
}

// Enums
public enum ChangeTypeEnum implements MatterEnum {
CHANGED(0, "Changed"),
ADDED(1, "Added"),
REMOVED(2, "Removed");

public final Integer value;
public final String label;

private ChangeTypeEnum(Integer value, String label) {
    this.value = value;
    this.label = label;
}

@Override
public Integer getValue() {
    return value;
}

@Override
public String getLabel() {
    return label;
}
}

/**
* ### Proxy View Value
* ### This value implicitly grants View privileges
*/
public enum AccessControlEntryPrivilegeEnum implements MatterEnum {
VIEW(1, "View"),
PROXY_VIEW(2, "Proxy View"),
OPERATE(3, "Operate"),
MANAGE(4, "Manage"),
ADMINISTER(5, "Administer");

public final Integer value;
public final String label;

private AccessControlEntryPrivilegeEnum(Integer value, String label) {
    this.value = value;
    this.label = label;
}

@Override
public Integer getValue() {
    return value;
}

@Override
public String getLabel() {
    return label;
}
}

public enum AccessRestrictionTypeEnum implements MatterEnum {
ATTRIBUTE_ACCESS_FORBIDDEN(0, "Attribute Access Forbidden"),
ATTRIBUTE_WRITE_FORBIDDEN(1, "Attribute Write Forbidden"),
COMMAND_FORBIDDEN(2, "Command Forbidden"),
EVENT_FORBIDDEN(3, "Event Forbidden");

public final Integer value;
public final String label;

private AccessRestrictionTypeEnum(Integer value, String label) {
    this.value = value;
    this.label = label;
}

@Override
public Integer getValue() {
    return value;
}

@Override
public String getLabel() {
    return label;
}
}

public enum AccessControlEntryAuthModeEnum implements MatterEnum {
PASE(1, "Pase"),
CASE(2, "Case"),
GROUP(3, "Group");

public final Integer value;
public final String label;

private AccessControlEntryAuthModeEnum(Integer value, String label) {
    this.value = value;
    this.label = label;
}

@Override
public Integer getValue() {
    return value;
}

@Override
public String getLabel() {
    return label;
}
}

// Bitmaps
public static class FeatureMap {
/**
*
* This feature indicates the device supports the ACL Extension attribute.
*/
public boolean extension;
/**
*
* This feature is for a device that is managed by a service associated with the device vendor and which imposes
* default access restrictions upon each new fabric added to it. This could arise, for example, if the device is
* managed by a service provider under contract to an end-user, in such a way that the manager of the device
* does not unconditionally grant universal access to all of a device’s functionality, even for fabric
* administrators. For example, many Home Routers are managed by an Internet Service Provider (a service), and
* these services often have a policy that requires them to obtain user consent before certain administrative
* functions can be delegated to a third party (e.g., a fabric Administrator). These restrictions are expressed
* using an Access Restriction List (ARL).
* The purpose of this feature on the Access Control cluster is to indicate to a fabric Administrator that
* access by it to specific attributes, commands and/or events for specific clusters is currently prohibited.
* Attempts to access these restricted data model elements shall result in an error of ACCESS_RESTRICTED.
* A device that implements this feature shall have a mechanism to honor the ReviewFabricRestrictions command,
* such as user interfaces or service interactions associated with a service provider or the device
* manufacturer, which allows the owner (or subscriber) to manage access restrictions for each fabric. The user
* interface design, which includes the way restrictions are organized and presented to the user, is not
* specified, but SHOULD be usable by non-expert end-users from common mobile devices, personal computers, or an
* on-device user interface.
* Controllers and clients SHOULD incorporate generic handling of the ACCESS_RESTRICTED error code, when it
* appears in allowed contexts, in order to gracefully handle situations where this feature is encountered.
* Device vendors that adopt this feature SHOULD be judicious in its use given the risk of unexpected behavior
* in controllers and clients.
* For certification testing, a device that implements this feature shall provide a way for all restrictions to
* be removed.
* The ARL attribute provides the set of restrictions currently applied to this fabric.
* The ReviewFabricRestrictions command provides a way for the fabric Administrator to request that the server
* triggers a review of the current fabric restrictions, by involving external entities such as end-users, or
* other services associated with the manager of the device hosting the server. This review process may involve
* communication between external services and the user, and may take an unpredictable amount of time to
* complete since an end-user may need to visit some resources, such as a mobile application or web site. A
* FabricRestrictionReviewUpdate event will be generated by the device within a predictable time period of the
* ReviewFabricRestrictionsResponse (see ReviewFabricRestrictions for specification of this time period), and
* this event can be correlated with the ReviewFabricRestrictionsResponse using a token provided in both. The
* device may provide instructions or a Redirect URL in the FabricRestrictionReviewUpdate event in order to help
* the user access the features required for managing per-fabric restrictions.
* See Section 6.6.2, “Model” for a description of how access control is impacted by the ARL attribute.
* ### Managed Device Feature Usage Restrictions
* Use of this feature shall be limited to the mandatory clusters of endpoints having a device type that
* explicitly permits its use in the Device Library Specification. As a reminder, the device types associated
* with an endpoint are listed in the Descriptor cluster of the endpoint.
* In addition, use of this feature shall NOT restrict the following clusters on any endpoint:
* 1. the Descriptor Cluster (0x001D)
* 2. the Binding Cluster (0x001E)
* 3. the Network Commissioning Cluster (0x0031)
* 4. the Identify Cluster (0x0003)
* 5. the Groups Cluster (0x0004)
* In addition, use of this feature shall NOT restrict the global attributes of any cluster.
* Because ARLs cannot be used to restrict root node access or access to any clusters required for
* commissioning, administrators may determine the current restrictions of the ARL at any point, including
* during commissioning after joining the fabric.
*/
public boolean managedDevice;

public FeatureMap(boolean extension, boolean managedDevice) {
    this.extension = extension;
    this.managedDevice = managedDevice;
}
}

public AccessControlCluster(BigInteger nodeId, int endpointId) {
    super(nodeId, endpointId, 31, "AccessControl");
}

protected AccessControlCluster(BigInteger nodeId, int endpointId, int clusterId, String clusterName) {
    super(nodeId, endpointId, clusterId, clusterName);
}

// commands
/**
* This command signals to the service associated with the device vendor that the fabric administrator would like a
* review of the current restrictions on the accessing fabric. This command includes an optional list of ARL entries
* that the fabric administrator would like removed.
* In response, a ReviewFabricRestrictionsResponse is sent which contains a token that can be used to correlate a
* review request with a FabricRestrictionReviewUpdate event.
* Within 1 hour of the ReviewFabricRestrictionsResponse, the FabricRestrictionReviewUpdate event shall be
* generated, in order to indicate completion of the review and any additional steps required by the user for the
* review.
* A review may include obtaining consent from the user, which can take time. For example, the user may need to
* respond to an email or a push notification.
* The ARL attribute may change at any time due to actions taken by the user, or the service associated with the
* device vendor.
*/
public static ClusterCommand reviewFabricRestrictions(List<CommissioningAccessRestrictionEntryStruct> arl) {
    Map<String, Object> map = new LinkedHashMap<>();
    if (arl != null) {
        map.put("arl", arl);
    }
    return new ClusterCommand("reviewFabricRestrictions", map);
}

@Override
public @NonNull String toString() {
    String str = "";
    str += "clusterRevision : " + clusterRevision + "\n";
    str += "featureMap : " + featureMap + "\n";
    str += "acl : " + acl + "\n";
    str += "extension : " + extension + "\n";
    str += "subjectsPerAccessControlEntry : " + subjectsPerAccessControlEntry + "\n";
    str += "targetsPerAccessControlEntry : " + targetsPerAccessControlEntry + "\n";
    str += "accessControlEntriesPerFabric : " + accessControlEntriesPerFabric + "\n";
    str += "commissioningArl : " + commissioningArl + "\n";
    str += "arl : " + arl + "\n";
    return str;
}
}
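The CallbackUrl handling in the FabricRestrictionReviewUpdate documentation above (percent-encoding the CallbackUrl into the MTcu= value, then substituting the MTaer=_ placeholder with the completion status) can be sketched with plain JDK calls. This is an illustrative helper, not part of the generated binding; it reproduces the worked example values from the doc comment.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class ArlUrlExpansion {
    // Percent-encode a CallbackUrl so it can be embedded as the MTcu= value
    // of an ExpandedARLRequestFlowUrl, as in the examples above.
    static String encodeForMtcu(String callbackUrl) {
        return URLEncoder.encode(callbackUrl, StandardCharsets.UTF_8);
    }

    // Substitute the MTaer=_ placeholder in a CallbackUrl with the returned
    // access extension completion status (e.g. "Success").
    static String expandCallbackUrl(String callbackUrl, String status) {
        return callbackUrl.replace("MTaer=_", "MTaer=" + status);
    }

    public static void main(String[] args) {
        String cb = "https://client.domain.example/cb?token=mAsJ6_vqbr-vjDiG_w%3D%3D&MTaer=_";
        // Matches the MTcu= value shown in the "After expansion" example above
        System.out.println(encodeForMtcu(cb));
        // Matches the ExpandedARLRequestCallbackUrl example above
        System.out.println(expandCallbackUrl(cb, "Success"));
    }
}
```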
@ -0,0 +1,173 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */

// AUTO-GENERATED, DO NOT EDIT!

package org.openhab.binding.matter.internal.client.dto.cluster.gen;

import java.math.BigInteger;
import java.util.LinkedHashMap;
import java.util.Map;

import org.eclipse.jdt.annotation.NonNull;
import org.openhab.binding.matter.internal.client.dto.cluster.ClusterCommand;

/**
 * AccountLogin
 *
 * @author Dan Cunningham - Initial contribution
 */
public class AccountLoginCluster extends BaseCluster {

public static final int CLUSTER_ID = 0x050E;
public static final String CLUSTER_NAME = "AccountLogin";
public static final String CLUSTER_PREFIX = "accountLogin";
public static final String ATTRIBUTE_CLUSTER_REVISION = "clusterRevision";

public Integer clusterRevision; // 65533 ClusterRevision
// Structs

/**
* This event can be used by the Content App to indicate that the current user has logged out. In response to this
* event, the Fabric Admin shall remove access to this Content App by the specified Node. If no Node is provided,
* then the Fabric Admin shall remove access to all non-Admin Nodes.
*/
public class LoggedOut {
/**
* This field shall provide the Node ID corresponding to the user account that has logged out, if that Node ID
* is available. If it is NOT available, this field shall NOT be present in the event.
*/
public BigInteger node; // node-id

public LoggedOut(BigInteger node) {
    this.node = node;
}
}

public AccountLoginCluster(BigInteger nodeId, int endpointId) {
    super(nodeId, endpointId, 1294, "AccountLogin");
}

protected AccountLoginCluster(BigInteger nodeId, int endpointId, int clusterId, String clusterName) {
    super(nodeId, endpointId, clusterId, clusterName);
}

// commands
/**
* The purpose of this command is to determine if the active user account of the given Content App matches the
* active user account of a given Commissionee, and when it does, return a Setup PIN code which can be used for
* password-authenticated session establishment (PASE) with the Commissionee.
* For example, a Video Player with a Content App Platform may invoke this command on one of its Content App
* endpoints to facilitate commissioning of a Phone App made by the same vendor as the Content App. If the accounts
* match, then the Content App may return a setup code that can be used by the Video Player to commission the Phone
* App without requiring the user to physically input a setup code.
* The account match is determined by the Content App using a method which is outside the scope of this
* specification and will typically involve a central service which is in communication with both the Content App
* and the Commissionee. The GetSetupPIN command is needed in order to provide the Commissioner/Admin with a Setup
* PIN when this Commissioner/Admin is operated by a different vendor from the Content App.
* This method is used to facilitate Setup PIN exchange (for PASE) between Commissioner and Commissionee when the
* same user account is active on both nodes. With this method, the Content App satisfies proof of possession
* related to commissioning by requiring the same user account to be active on both Commissionee and Content App,
* while the Commissioner/Admin ensures user consent by prompting the user prior to invocation of the command.
* Upon receipt of this command, the Content App checks if the account associated with the Temporary Account
* Identifier sent by the client is the same account that is active on itself. If the accounts are the same, then
* the Content App returns the GetSetupPIN Response which includes a Setup PIN that may be used for PASE with the
* Commissionee.
* The Temporary Account Identifier for a Commissionee may be populated with the Rotating ID field of the client’s
* commissionable node advertisement (see Rotating Device Identifier section in [MatterCore]) encoded as an octet
* string where the octets of the Rotating Device Identifier are encoded as 2-character sequences by representing
* each octet’s value as a 2-digit hexadecimal number, using uppercase letters.
* The Setup PIN is a character string so that it can accommodate different future formats, including alpha-numeric
* encodings. For a Commissionee it shall be populated with the Manual Pairing Code (see Manual Pairing Code section
* in [MatterCore]) encoded as a string (11 characters) or the Passcode portion of the Manual Pairing Code (when
* less than 11 characters).
* The server shall implement rate limiting to prevent brute force attacks. No more than 10 unique requests in a 10
* minute period shall be allowed; a command response status of FAILURE should be sent for additional commands received
* within the 10 minute period. Because access to this command is limited to nodes with Admin-level access, and the
* user is prompted for consent prior to Commissioning, there are in place multiple obstacles to successfully
* mounting a brute force attack. A Content App that supports this command shall ensure that the Temporary Account
* Identifier used by its clients is not valid for more than 10 minutes.
*/
public static ClusterCommand getSetupPin(String tempAccountIdentifier) {
    Map<String, Object> map = new LinkedHashMap<>();
    if (tempAccountIdentifier != null) {
        map.put("tempAccountIdentifier", tempAccountIdentifier);
    }
    return new ClusterCommand("getSetupPin", map);
}

/**
* The purpose of this command is to allow the Content App to assume the user account of a given Commissionee by
* leveraging the Setup PIN code input by the user during the commissioning process.
* For example, a Video Player with a Content App Platform may invoke this command on one of its Content App
* endpoints after commissioning of a Phone App made by the same vendor as the Content App has completed. The
* Content App may determine whether the Temporary Account Identifier maps to an account with a corresponding Setup
* PIN and, if so, it may automatically log in to the account for the corresponding user. The end result is that a
* user performs commissioning of a Phone App to a Video Player by inputting the Setup PIN for the Phone App into
* the Video Player UX. Once commissioning has completed, the Video Player invokes this command to allow the
* corresponding Content App to assume the same user account as the Phone App.
* The verification of the Setup PIN for the given Temporary Account Identifier is performed by the Content App using a
* method which is outside the scope of this specification and will typically involve a central service which is in
* communication with both the Content App and the Commissionee. Implementations of such a service should impose
* aggressive timeouts for any mapping of Temporary Account Identifier to Setup PIN in order to prevent accidental
* login due to delayed invocation.
* Upon receipt, the Content App checks if the account associated with the client’s Temp Account Identifier has a
* current active Setup PIN with the given value. If the Setup PIN is valid for the user account associated with the
* Temp Account Identifier, then the Content App may make that user account active.
* The Temporary Account Identifier for a Commissionee may be populated with the Rotating ID field of the client’s
* commissionable node advertisement encoded as an octet string where the octets of the Rotating Device Identifier
* are encoded as 2-character sequences by representing each octet’s value as a 2-digit hexadecimal number, using
* uppercase letters.
* The Setup PIN for a Commissionee may be populated with the Manual Pairing Code encoded as a string of decimal
* numbers (11 characters) or the Passcode portion of the Manual Pairing Code encoded as a string of decimal numbers
* (8 characters).
* The server shall implement rate limiting to prevent brute force attacks. No more than 10 unique requests in a 10
* minute period shall be allowed; a command response status of FAILURE should be sent for additional commands received
* within the 10 minute period. Because access to this command is limited to nodes with Admin-level access, and the
* user is involved when obtaining the SetupPIN, there are in place multiple obstacles to successfully mounting a
* brute force attack. A Content App that supports this command shall ensure that the Temporary Account Identifier
* used by its clients is not valid for more than 10 minutes.
*/
public static ClusterCommand login(String tempAccountIdentifier, String setupPin, BigInteger node) {
    Map<String, Object> map = new LinkedHashMap<>();
    if (tempAccountIdentifier != null) {
        map.put("tempAccountIdentifier", tempAccountIdentifier);
    }
    if (setupPin != null) {
        map.put("setupPin", setupPin);
    }
    if (node != null) {
        map.put("node", node);
    }
    return new ClusterCommand("login", map);
}

/**
* The purpose of this command is to instruct the Content App to clear the current user account. This command SHOULD
* be used by clients of a Content App to indicate the end of a user session.
*/
public static ClusterCommand logout(BigInteger node) {
    Map<String, Object> map = new LinkedHashMap<>();
    if (node != null) {
        map.put("node", node);
    }
    return new ClusterCommand("logout", map);
}

@Override
public @NonNull String toString() {
    String str = "";
    str += "clusterRevision : " + clusterRevision + "\n";
    return str;
}
}
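The Temporary Account Identifier encoding described for GetSetupPIN and Login above (each octet of the Rotating Device Identifier rendered as a 2-digit uppercase hexadecimal pair) can be sketched as follows. The class and method names are illustrative only and are not part of the generated binding.

```java
public class TempAccountIdentifier {
    // Render each octet of a Rotating Device Identifier as a 2-digit
    // uppercase hex pair, per the GetSetupPIN documentation above.
    static String fromRotatingId(byte[] rotatingId) {
        StringBuilder sb = new StringBuilder(rotatingId.length * 2);
        for (byte b : rotatingId) {
            // Mask to 0..255 so negative byte values format correctly
            sb.append(String.format("%02X", b & 0xFF));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // A hypothetical 4-octet rotating identifier
        byte[] rotatingId = { 0x0F, (byte) 0xA2, 0x00, 0x7B };
        System.out.println(fromRotatingId(rotatingId)); // 0FA2007B
    }
}
```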
@ -0,0 +1,614 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */

// AUTO-GENERATED, DO NOT EDIT!

package org.openhab.binding.matter.internal.client.dto.cluster.gen;

import java.math.BigInteger;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

import org.eclipse.jdt.annotation.NonNull;
import org.openhab.binding.matter.internal.client.dto.cluster.ClusterCommand;

/**
 * Actions
 *
 * @author Dan Cunningham - Initial contribution
 */
public class ActionsCluster extends BaseCluster {

public static final int CLUSTER_ID = 0x0025;
public static final String CLUSTER_NAME = "Actions";
public static final String CLUSTER_PREFIX = "actions";
public static final String ATTRIBUTE_CLUSTER_REVISION = "clusterRevision";
public static final String ATTRIBUTE_ACTION_LIST = "actionList";
public static final String ATTRIBUTE_ENDPOINT_LISTS = "endpointLists";
public static final String ATTRIBUTE_SETUP_URL = "setupUrl";

public Integer clusterRevision; // 65533 ClusterRevision
/**
* The ActionList attribute holds the list of actions. Each entry shall have a unique ActionID, and its
* EndpointListID shall exist in the EndpointLists attribute.
*/
public List<ActionStruct> actionList; // 0 list R V
/**
* The EndpointLists attribute holds the list of endpoint lists. Each entry shall have a unique EndpointListID.
*/
public List<EndpointListStruct> endpointLists; // 1 list R V
/**
* The SetupURL attribute (when provided) shall indicate a URL; its syntax shall follow the syntax as specified in
|
||||
* RFC 1738, max. 512 ASCII characters and shall use the https scheme. The location referenced by this URL shall
|
||||
* provide additional information for the actions provided:
|
||||
* • When used without suffix, it shall provide information about the various actions which the cluster provides.
|
||||
* ◦ Example: SetupURL could take the value of example://Actions or https://domain.example/ Matter/bridgev1/Actions
|
||||
* for this generic case (access generic info how to use actions provided by this cluster).
|
||||
* • When used with a suffix of "/?a=" and the decimal value of ActionID for one of the actions, it
|
||||
* may provide information about that particular action. This could be a deeplink to manufacturer-app/website
|
||||
* (associated somehow to the server node) with the information/edit-screen for this action so that the user can
|
||||
* view and update details of the action, e.g. edit the scene, or change the wake-up experience time period.
|
||||
* ◦ Example of SetupURL with suffix added: example://Actions/?a=12345 or
|
||||
* https://domain.example/Matter/bridgev1/Actions/?a=12345 for linking to specific info/editing of the action
|
||||
* with ActionID 0x3039.
|
||||
*/
|
||||
public String setupUrl; // 2 string R V
|
||||
// Structs
|
||||
|
||||
/**
|
||||
* This event shall be generated when there is a change in the State of an ActionID during the execution of an
|
||||
* action and the most recent command using that ActionID used an InvokeID data field.
|
||||
* It provides feedback to the client about the progress of the action.
|
||||
* Example: When InstantActionWithTransition is invoked (with an InvokeID data field), two StateChanged events will
|
||||
* be generated:
|
||||
* • one when the transition starts (NewState=Active)
|
||||
* • one when the transition completed (NewState=Inactive)
|
||||
*/
|
||||
public class StateChanged {
|
||||
/**
|
||||
* This field shall be set to the ActionID of the action which has changed state.
|
||||
*/
|
||||
public Integer actionId; // uint16
|
||||
/**
|
||||
* This field shall be set to the InvokeID which was provided to the most recent command referencing this
|
||||
* ActionID.
|
||||
*/
|
||||
public Integer invokeId; // uint32
|
||||
/**
|
||||
* This field shall be set to state that the action has changed to.
|
||||
*/
|
||||
public ActionStateEnum newState; // ActionStateEnum
|
||||
|
||||
public StateChanged(Integer actionId, Integer invokeId, ActionStateEnum newState) {
|
||||
this.actionId = actionId;
|
||||
this.invokeId = invokeId;
|
||||
this.newState = newState;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* This event shall be generated when there is some error which prevents the action from its normal planned
|
||||
* execution and the most recent command using that ActionID used an InvokeID data field.
|
||||
* It provides feedback to the client about the non-successful progress of the action.
|
||||
* Example: When InstantActionWithTransition is invoked (with an InvokeID data field), and another controller
|
||||
* changes the state of one or more of the involved endpoints during the transition, thus interrupting the
|
||||
* transition triggered by the action, two events would be generated:
|
||||
* • StateChanged when the transition starts (NewState=Active)
|
||||
* • ActionFailed when the interrupting command occurs (NewState=Inactive, Error=interrupted)
|
||||
* Example: When InstantActionWithTransition is invoked (with an InvokeID data field = 1), and the same client
|
||||
* invokes an InstantAction with (the same or another ActionId and) InvokeID = 2, and this second command
|
||||
* interrupts the transition triggered by the first command, these events would be generated:
|
||||
* • StateChanged (InvokeID=1, NewState=Active) when the transition starts
|
||||
* • ActionFailed (InvokeID=2, NewState=Inactive, Error=interrupted) when the second command
|
||||
* interrupts the transition
|
||||
* • StateChanged (InvokeID=2, NewState=Inactive) upon the execution of the action for the second command
|
||||
*/
|
||||
public class ActionFailed {
|
||||
/**
|
||||
* This field shall be set to the ActionID of the action which encountered an error.
|
||||
*/
|
||||
public Integer actionId; // uint16
|
||||
/**
|
||||
* This field shall be set to the InvokeID which was provided to the most recent command referencing this
|
||||
* ActionID.
|
||||
*/
|
||||
public Integer invokeId; // uint32
|
||||
/**
|
||||
* This field shall be set to state that the action is in at the time of generating the event.
|
||||
*/
|
||||
public ActionStateEnum newState; // ActionStateEnum
|
||||
/**
|
||||
* This field shall be set to indicate the reason for non-successful progress of the action.
|
||||
*/
|
||||
public ActionErrorEnum error; // ActionErrorEnum
|
||||
|
||||
public ActionFailed(Integer actionId, Integer invokeId, ActionStateEnum newState, ActionErrorEnum error) {
|
||||
this.actionId = actionId;
|
||||
this.invokeId = invokeId;
|
||||
this.newState = newState;
|
||||
this.error = error;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* This data type holds the details of a single action, and contains the data fields below.
|
||||
*/
|
||||
public class ActionStruct {
|
||||
/**
|
||||
* This field shall provide an unique identifier used to identify an action.
|
||||
*/
|
||||
public Integer actionId; // uint16
|
||||
/**
|
||||
* This field shall indicate the name (as assigned by the user or automatically by the server) associated with
|
||||
* this action. This can be used for identifying the action to the user by the client. Example: "my
|
||||
* colorful scene".
|
||||
*/
|
||||
public String name; // string
|
||||
/**
|
||||
* This field shall indicate the type of action. The value of Type of an action, along with its
|
||||
* SupportedCommands can be used by the client in its UX or logic to determine how to present or use such
|
||||
* action. See ActionTypeEnum for details and examples.
|
||||
*/
|
||||
public ActionTypeEnum type; // ActionTypeEnum
|
||||
/**
|
||||
* This field shall provide a reference to the associated endpoint list, which specifies the endpoints on this
|
||||
* Node which will be impacted by this ActionID.
|
||||
*/
|
||||
public Integer endpointListId; // uint16
|
||||
/**
|
||||
* This field is a bitmap which shall be used to indicate which of the cluster’s commands are supported for this
|
||||
* particular action, with a bit set to 1 for each supported command according to the table below. Other bits
|
||||
* shall be set to 0.
|
||||
*/
|
||||
public CommandBits supportedCommands; // CommandBits
|
||||
/**
|
||||
* This field shall indicate the current state of this action.
|
||||
*/
|
||||
public ActionStateEnum state; // ActionStateEnum
|
||||
|
||||
public ActionStruct(Integer actionId, String name, ActionTypeEnum type, Integer endpointListId,
|
||||
CommandBits supportedCommands, ActionStateEnum state) {
|
||||
this.actionId = actionId;
|
||||
this.name = name;
|
||||
this.type = type;
|
||||
this.endpointListId = endpointListId;
|
||||
this.supportedCommands = supportedCommands;
|
||||
this.state = state;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* This data type holds the details of a single endpoint list, which relates to a set of endpoints that have some
|
||||
* logical relation, and contains the data fields below.
|
||||
*/
|
||||
public class EndpointListStruct {
|
||||
/**
|
||||
* This field shall provide an unique identifier used to identify the endpoint list.
|
||||
*/
|
||||
public Integer endpointListId; // uint16
|
||||
/**
|
||||
* This field shall indicate the name (as assigned by the user or automatically by the server) associated with
|
||||
* the set of endpoints in this list. This can be used for identifying the action to the user by the client.
|
||||
* Example: "living room".
|
||||
*/
|
||||
public String name; // string
|
||||
/**
|
||||
* This field shall indicate the type of endpoint list, see EndpointListTypeEnum.
|
||||
*/
|
||||
public EndpointListTypeEnum type; // EndpointListTypeEnum
|
||||
/**
|
||||
* This field shall provide a list of endpoint numbers.
|
||||
*/
|
||||
public List<Integer> endpoints; // list
|
||||
|
||||
public EndpointListStruct(Integer endpointListId, String name, EndpointListTypeEnum type,
|
||||
List<Integer> endpoints) {
|
||||
this.endpointListId = endpointListId;
|
||||
this.name = name;
|
||||
this.type = type;
|
||||
this.endpoints = endpoints;
|
||||
}
|
||||
}
|
||||
|
||||
// Enums
|
||||
public enum ActionTypeEnum implements MatterEnum {
|
||||
OTHER(0, "Other"),
|
||||
SCENE(1, "Scene"),
|
||||
SEQUENCE(2, "Sequence"),
|
||||
AUTOMATION(3, "Automation"),
|
||||
EXCEPTION(4, "Exception"),
|
||||
NOTIFICATION(5, "Notification"),
|
||||
ALARM(6, "Alarm");
|
||||
|
||||
public final Integer value;
|
||||
public final String label;
|
||||
|
||||
private ActionTypeEnum(Integer value, String label) {
|
||||
this.value = value;
|
||||
this.label = label;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Integer getValue() {
|
||||
return value;
|
||||
}
|
||||
|
||||
@Override
|
||||
public String getLabel() {
|
||||
return label;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Note that some of these states are applicable only for certain actions, as determined by their SupportedCommands.
|
||||
*/
|
||||
public enum ActionStateEnum implements MatterEnum {
|
||||
INACTIVE(0, "Inactive"),
|
||||
ACTIVE(1, "Active"),
|
||||
PAUSED(2, "Paused"),
|
||||
DISABLED(3, "Disabled");
|
||||
|
||||
public final Integer value;
|
||||
public final String label;
|
||||
|
||||
private ActionStateEnum(Integer value, String label) {
|
||||
this.value = value;
|
||||
this.label = label;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Integer getValue() {
|
||||
return value;
|
||||
}
|
||||
|
||||
@Override
|
||||
public String getLabel() {
|
||||
return label;
|
||||
}
|
||||
}
|
||||
|
||||
public enum ActionErrorEnum implements MatterEnum {
|
||||
UNKNOWN(0, "Unknown"),
|
||||
INTERRUPTED(1, "Interrupted");
|
||||
|
||||
public final Integer value;
|
||||
public final String label;
|
||||
|
||||
private ActionErrorEnum(Integer value, String label) {
|
||||
this.value = value;
|
||||
this.label = label;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Integer getValue() {
|
||||
return value;
|
||||
}
|
||||
|
||||
@Override
|
||||
public String getLabel() {
|
||||
return label;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* The Room and Zone values are provided for the cases where a user (or the system on behalf of the user) has
|
||||
* created logical grouping of the endpoints (e.g. bridged devices) based on location.
|
||||
*/
|
||||
public enum EndpointListTypeEnum implements MatterEnum {
|
||||
OTHER(0, "Other"),
|
||||
ROOM(1, "Room"),
|
||||
ZONE(2, "Zone");
|
||||
|
||||
public final Integer value;
|
||||
public final String label;
|
||||
|
||||
private EndpointListTypeEnum(Integer value, String label) {
|
||||
this.value = value;
|
||||
this.label = label;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Integer getValue() {
|
||||
return value;
|
||||
}
|
||||
|
||||
@Override
|
||||
public String getLabel() {
|
||||
return label;
|
||||
}
|
||||
}
|
||||
|
||||
// Bitmaps
|
||||
/**
|
||||
* Note - The bit allocation of this bitmap shall follow the ID’s of the Commands of this cluster.
|
||||
*/
|
||||
public static class CommandBits {
|
||||
public boolean instantAction;
|
||||
public boolean instantActionWithTransition;
|
||||
public boolean startAction;
|
||||
public boolean startActionWithDuration;
|
||||
public boolean stopAction;
|
||||
public boolean pauseAction;
|
||||
public boolean pauseActionWithDuration;
|
||||
public boolean resumeAction;
|
||||
public boolean enableAction;
|
||||
public boolean enableActionWithDuration;
|
||||
public boolean disableAction;
|
||||
public boolean disableActionWithDuration;
|
||||
|
||||
public CommandBits(boolean instantAction, boolean instantActionWithTransition, boolean startAction,
|
||||
boolean startActionWithDuration, boolean stopAction, boolean pauseAction,
|
||||
boolean pauseActionWithDuration, boolean resumeAction, boolean enableAction,
|
||||
boolean enableActionWithDuration, boolean disableAction, boolean disableActionWithDuration) {
|
||||
this.instantAction = instantAction;
|
||||
this.instantActionWithTransition = instantActionWithTransition;
|
||||
this.startAction = startAction;
|
||||
this.startActionWithDuration = startActionWithDuration;
|
||||
this.stopAction = stopAction;
|
||||
this.pauseAction = pauseAction;
|
||||
this.pauseActionWithDuration = pauseActionWithDuration;
|
||||
this.resumeAction = resumeAction;
|
||||
this.enableAction = enableAction;
|
||||
this.enableActionWithDuration = enableActionWithDuration;
|
||||
this.disableAction = disableAction;
|
||||
this.disableActionWithDuration = disableActionWithDuration;
|
||||
}
|
||||
}
|
||||
|
||||
public ActionsCluster(BigInteger nodeId, int endpointId) {
|
||||
super(nodeId, endpointId, 37, "Actions");
|
||||
}
|
||||
|
||||
protected ActionsCluster(BigInteger nodeId, int endpointId, int clusterId, String clusterName) {
|
||||
super(nodeId, endpointId, clusterId, clusterName);
|
||||
}
|
||||
|
||||
// commands
|
||||
/**
|
||||
* This command triggers an action (state change) on the involved endpoints, in a "fire and forget"
|
||||
* manner. Afterwards, the action’s state shall be Inactive.
|
||||
* Example: recall a scene on a number of lights.
|
||||
*/
|
||||
public static ClusterCommand instantAction(Integer actionId, Integer invokeId) {
|
||||
Map<String, Object> map = new LinkedHashMap<>();
|
||||
if (actionId != null) {
|
||||
map.put("actionId", actionId);
|
||||
}
|
||||
if (invokeId != null) {
|
||||
map.put("invokeId", invokeId);
|
||||
}
|
||||
return new ClusterCommand("instantAction", map);
|
||||
}
|
||||
|
||||
/**
|
||||
* It is recommended that, where possible (e.g., it is not possible for attributes with Boolean data type), a
|
||||
* gradual transition SHOULD take place from the old to the new state over this time period. However, the exact
|
||||
* transition is manufacturer dependent.
|
||||
* This command triggers an action (state change) on the involved endpoints, with a specified time to transition
|
||||
* from the current state to the new state. During the transition, the action’s state shall be Active. Afterwards,
|
||||
* the action’s state shall be Inactive.
|
||||
* Example: recall a scene on a number of lights, with a specified transition time.
|
||||
*/
|
||||
public static ClusterCommand instantActionWithTransition(Integer actionId, Integer invokeId,
|
||||
Integer transitionTime) {
|
||||
Map<String, Object> map = new LinkedHashMap<>();
|
||||
if (actionId != null) {
|
||||
map.put("actionId", actionId);
|
||||
}
|
||||
if (invokeId != null) {
|
||||
map.put("invokeId", invokeId);
|
||||
}
|
||||
if (transitionTime != null) {
|
||||
map.put("transitionTime", transitionTime);
|
||||
}
|
||||
return new ClusterCommand("instantActionWithTransition", map);
|
||||
}
|
||||
|
||||
/**
|
||||
* This command triggers the commencement of an action on the involved endpoints. Afterwards, the action’s state
|
||||
* shall be Active.
|
||||
* Example: start a dynamic lighting pattern (such as gradually rotating the colors around the setpoints of the
|
||||
* scene) on a set of lights.
|
||||
* Example: start a sequence of events such as a wake-up experience involving lights moving through several
|
||||
* brightness/color combinations and the window covering gradually opening.
|
||||
*/
|
||||
public static ClusterCommand startAction(Integer actionId, Integer invokeId) {
|
||||
Map<String, Object> map = new LinkedHashMap<>();
|
||||
if (actionId != null) {
|
||||
map.put("actionId", actionId);
|
||||
}
|
||||
if (invokeId != null) {
|
||||
map.put("invokeId", invokeId);
|
||||
}
|
||||
return new ClusterCommand("startAction", map);
|
||||
}
|
||||
|
||||
/**
|
||||
* This command triggers the commencement of an action on the involved endpoints, and shall change the action’s
|
||||
* state to Active. After the specified Duration, the action will stop, and the action’s state shall change to
|
||||
* Inactive.
|
||||
* Example: start a dynamic lighting pattern (such as gradually rotating the colors around the setpoints of the
|
||||
* scene) on a set of lights for 1 hour (Duration=3600).
|
||||
*/
|
||||
public static ClusterCommand startActionWithDuration(Integer actionId, Integer invokeId, Integer duration) {
|
||||
Map<String, Object> map = new LinkedHashMap<>();
|
||||
if (actionId != null) {
|
||||
map.put("actionId", actionId);
|
||||
}
|
||||
if (invokeId != null) {
|
||||
map.put("invokeId", invokeId);
|
||||
}
|
||||
if (duration != null) {
|
||||
map.put("duration", duration);
|
||||
}
|
||||
return new ClusterCommand("startActionWithDuration", map);
|
||||
}
|
||||
|
||||
/**
|
||||
* This command stops the ongoing action on the involved endpoints. Afterwards, the action’s state shall be
|
||||
* Inactive.
|
||||
* Example: stop a dynamic lighting pattern which was previously started with StartAction.
|
||||
*/
|
||||
public static ClusterCommand stopAction(Integer actionId, Integer invokeId) {
|
||||
Map<String, Object> map = new LinkedHashMap<>();
|
||||
if (actionId != null) {
|
||||
map.put("actionId", actionId);
|
||||
}
|
||||
if (invokeId != null) {
|
||||
map.put("invokeId", invokeId);
|
||||
}
|
||||
return new ClusterCommand("stopAction", map);
|
||||
}
|
||||
|
||||
/**
|
||||
* This command pauses an ongoing action, and shall change the action’s state to Paused.
|
||||
* Example: pause a dynamic lighting effect (the lights stay at their current color) which was previously started
|
||||
* with StartAction.
|
||||
*/
|
||||
public static ClusterCommand pauseAction(Integer actionId, Integer invokeId) {
|
||||
Map<String, Object> map = new LinkedHashMap<>();
|
||||
if (actionId != null) {
|
||||
map.put("actionId", actionId);
|
||||
}
|
||||
if (invokeId != null) {
|
||||
map.put("invokeId", invokeId);
|
||||
}
|
||||
return new ClusterCommand("pauseAction", map);
|
||||
}
|
||||
|
||||
/**
|
||||
* This command pauses an ongoing action, and shall change the action’s state to Paused. After the specified
|
||||
* Duration, the ongoing action will be automatically resumed. which shall change the action’s state to Active.
|
||||
* Example: pause a dynamic lighting effect (the lights stay at their current color) for 10 minutes
|
||||
* (Duration=600).
|
||||
* The difference between Pause/Resume and Disable/Enable is on the one hand semantic (the former is more of a
|
||||
* transitionary nature while the latter is more permanent) and on the other hand these can be implemented slightly
|
||||
* differently in the implementation of the action (e.g. a Pause would be automatically resumed after some hours or
|
||||
* during a nightly reset, while an Disable would remain in effect until explicitly enabled again).
|
||||
*/
|
||||
public static ClusterCommand pauseActionWithDuration(Integer actionId, Integer invokeId, Integer duration) {
|
||||
Map<String, Object> map = new LinkedHashMap<>();
|
||||
if (actionId != null) {
|
||||
map.put("actionId", actionId);
|
||||
}
|
||||
if (invokeId != null) {
|
||||
map.put("invokeId", invokeId);
|
||||
}
|
||||
if (duration != null) {
|
||||
map.put("duration", duration);
|
||||
}
|
||||
return new ClusterCommand("pauseActionWithDuration", map);
|
||||
}
|
||||
|
||||
/**
|
||||
* This command resumes a previously paused action, and shall change the action’s state to Active.
|
||||
* The difference between ResumeAction and StartAction is that ResumeAction will continue the action from the state
|
||||
* where it was paused, while StartAction will start the action from the beginning.
|
||||
* Example: resume a dynamic lighting effect (the lights' colors will change gradually, continuing from the
|
||||
* point they were paused).
|
||||
*/
|
||||
public static ClusterCommand resumeAction(Integer actionId, Integer invokeId) {
|
||||
Map<String, Object> map = new LinkedHashMap<>();
|
||||
if (actionId != null) {
|
||||
map.put("actionId", actionId);
|
||||
}
|
||||
if (invokeId != null) {
|
||||
map.put("invokeId", invokeId);
|
||||
}
|
||||
return new ClusterCommand("resumeAction", map);
|
||||
}
|
||||
|
||||
/**
|
||||
* This command enables a certain action or automation. Afterwards, the action’s state shall be Active.
|
||||
* Example: enable a motion sensor to control the lights in an area.
|
||||
*/
|
||||
public static ClusterCommand enableAction(Integer actionId, Integer invokeId) {
|
||||
Map<String, Object> map = new LinkedHashMap<>();
|
||||
if (actionId != null) {
|
||||
map.put("actionId", actionId);
|
||||
}
|
||||
if (invokeId != null) {
|
||||
map.put("invokeId", invokeId);
|
||||
}
|
||||
return new ClusterCommand("enableAction", map);
|
||||
}
|
||||
|
||||
/**
|
||||
* This command enables a certain action or automation, and shall change the action’s state to be Active. After the
|
||||
* specified Duration, the action or automation will stop, and the action’s state shall change to Disabled.
|
||||
* Example: enable a "presence mimicking" behavior for the lights in your home during a vacation; the
|
||||
* Duration field is used to indicated the length of your absence from home. After that period, the presence
|
||||
* mimicking behavior will no longer control these lights.
|
||||
*/
|
||||
public static ClusterCommand enableActionWithDuration(Integer actionId, Integer invokeId, Integer duration) {
|
||||
Map<String, Object> map = new LinkedHashMap<>();
|
||||
if (actionId != null) {
|
||||
map.put("actionId", actionId);
|
||||
}
|
||||
if (invokeId != null) {
|
||||
map.put("invokeId", invokeId);
|
||||
}
|
||||
if (duration != null) {
|
||||
map.put("duration", duration);
|
||||
}
|
||||
return new ClusterCommand("enableActionWithDuration", map);
|
||||
}
|
||||
|
||||
/**
|
||||
* This command disables a certain action or automation, and shall change the action’s state to Inactive.
|
||||
* Example: disable a motion sensor to no longer control the lights in an area.
|
||||
*/
|
||||
public static ClusterCommand disableAction(Integer actionId, Integer invokeId) {
|
||||
Map<String, Object> map = new LinkedHashMap<>();
|
||||
if (actionId != null) {
|
||||
map.put("actionId", actionId);
|
||||
}
|
||||
if (invokeId != null) {
|
||||
map.put("invokeId", invokeId);
|
||||
}
|
||||
return new ClusterCommand("disableAction", map);
|
||||
}
|
||||
|
||||
/**
|
||||
* This command disables a certain action or automation, and shall change the action’s state to Disabled. After the
|
||||
* specified Duration, the action or automation will re-start, and the action’s state shall change to either
|
||||
* Inactive or Active, depending on the actions (see examples 4 and 6).
|
||||
* Example: disable a "wakeup" experience for a period of 1 week when going on holiday (to prevent them
|
||||
* from turning on in the morning while you’re not at home). After this period, the wakeup experience will control
|
||||
* the lights as before.
|
||||
*/
|
||||
public static ClusterCommand disableActionWithDuration(Integer actionId, Integer invokeId, Integer duration) {
|
||||
Map<String, Object> map = new LinkedHashMap<>();
|
||||
if (actionId != null) {
|
||||
map.put("actionId", actionId);
|
||||
}
|
||||
if (invokeId != null) {
|
||||
map.put("invokeId", invokeId);
|
||||
}
|
||||
if (duration != null) {
|
||||
map.put("duration", duration);
|
||||
}
|
||||
return new ClusterCommand("disableActionWithDuration", map);
|
||||
}
|
||||
|
||||
@Override
|
||||
public @NonNull String toString() {
|
||||
String str = "";
|
||||
str += "clusterRevision : " + clusterRevision + "\n";
|
||||
str += "actionList : " + actionList + "\n";
|
||||
str += "endpointLists : " + endpointLists + "\n";
|
||||
str += "setupUrl : " + setupUrl + "\n";
|
||||
return str;
|
||||
}
|
||||
}
|
|
@@ -0,0 +1,47 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */

// AUTO-GENERATED, DO NOT EDIT!

package org.openhab.binding.matter.internal.client.dto.cluster.gen;

import java.math.BigInteger;

import org.eclipse.jdt.annotation.NonNull;

/**
 * ActivatedCarbonFilterMonitoring
 *
 * @author Dan Cunningham - Initial contribution
 */
public class ActivatedCarbonFilterMonitoringCluster extends ResourceMonitoringCluster {

    public static final int CLUSTER_ID = 0x0072;
    public static final String CLUSTER_NAME = "ActivatedCarbonFilterMonitoring";
    public static final String CLUSTER_PREFIX = "activatedCarbonFilterMonitoring";

    public ActivatedCarbonFilterMonitoringCluster(BigInteger nodeId, int endpointId) {
        super(nodeId, endpointId, 114, "ActivatedCarbonFilterMonitoring");
    }

    protected ActivatedCarbonFilterMonitoringCluster(BigInteger nodeId, int endpointId, int clusterId,
            String clusterName) {
        super(nodeId, endpointId, clusterId, clusterName);
    }

    @Override
    public @NonNull String toString() {
        String str = "";
        return str;
    }
}
@@ -0,0 +1,235 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */

// AUTO-GENERATED, DO NOT EDIT!

package org.openhab.binding.matter.internal.client.dto.cluster.gen;

import java.math.BigInteger;
import java.util.LinkedHashMap;
import java.util.Map;

import org.eclipse.jdt.annotation.NonNull;
import org.openhab.binding.matter.internal.client.dto.cluster.ClusterCommand;

/**
 * AdministratorCommissioning
 *
 * @author Dan Cunningham - Initial contribution
 */
public class AdministratorCommissioningCluster extends BaseCluster {

    public static final int CLUSTER_ID = 0x003C;
    public static final String CLUSTER_NAME = "AdministratorCommissioning";
    public static final String CLUSTER_PREFIX = "administratorCommissioning";
    public static final String ATTRIBUTE_CLUSTER_REVISION = "clusterRevision";
    public static final String ATTRIBUTE_FEATURE_MAP = "featureMap";
    public static final String ATTRIBUTE_WINDOW_STATUS = "windowStatus";
    public static final String ATTRIBUTE_ADMIN_FABRIC_INDEX = "adminFabricIndex";
    public static final String ATTRIBUTE_ADMIN_VENDOR_ID = "adminVendorId";

    public Integer clusterRevision; // 65533 ClusterRevision
    public FeatureMap featureMap; // 65532 FeatureMap
    /**
     * Indicates whether a new Commissioning window has been opened by an Administrator, using either the
     * OpenCommissioningWindow command or the OpenBasicCommissioningWindow command.
     * This attribute shall revert to WindowNotOpen upon expiry of a commissioning window.
     * > [!NOTE]
     * > An initial commissioning window is not opened using either the OpenCommissioningWindow command or the
     * OpenBasicCommissioningWindow command, and therefore this attribute shall be set to WindowNotOpen on initial
     * commissioning.
     */
    public CommissioningWindowStatusEnum windowStatus; // 0 CommissioningWindowStatusEnum R V
    /**
     * When the WindowStatus attribute is not set to WindowNotOpen, this attribute shall indicate the FabricIndex
     * associated with the Fabric scoping of the Administrator that opened the window. This may be used to
     * cross-reference in the Fabrics attribute of the Node Operational Credentials cluster.
     * If, during an open commissioning window, the fabric for the Administrator that opened the window is removed, then
     * this attribute shall be set to null.
     * When the WindowStatus attribute is set to WindowNotOpen, this attribute shall be set to null.
     */
    public Integer adminFabricIndex; // 1 fabric-idx R V
    /**
     * When the WindowStatus attribute is not set to WindowNotOpen, this attribute shall indicate the Vendor ID
     * associated with the Fabric scoping of the Administrator that opened the window. This field shall match the
     * VendorID field of the Fabrics attribute list entry associated with the Administrator having opened the window, at
     * the time of window opening. If the fabric for the Administrator that opened the window is removed from the node
     * while the commissioning window is still open, this attribute shall NOT be updated.
     * When the WindowStatus attribute is set to WindowNotOpen, this attribute shall be set to null.
     */
    public Integer adminVendorId; // 2 vendor-id R V

    // Enums
    public enum CommissioningWindowStatusEnum implements MatterEnum {
        WINDOW_NOT_OPEN(0, "Window Not Open"),
        ENHANCED_WINDOW_OPEN(1, "Enhanced Window Open"),
        BASIC_WINDOW_OPEN(2, "Basic Window Open");

        public final Integer value;
        public final String label;

        private CommissioningWindowStatusEnum(Integer value, String label) {
            this.value = value;
            this.label = label;
        }

        @Override
        public Integer getValue() {
            return value;
        }

        @Override
        public String getLabel() {
            return label;
        }
    }

    public enum StatusCodeEnum implements MatterEnum {
        BUSY(2, "Busy"),
        PAKE_PARAMETER_ERROR(3, "Pake Parameter Error"),
        WINDOW_NOT_OPEN(4, "Window Not Open");

        public final Integer value;
        public final String label;

        private StatusCodeEnum(Integer value, String label) {
            this.value = value;
            this.label = label;
        }

        @Override
        public Integer getValue() {
            return value;
        }

        @Override
        public String getLabel() {
            return label;
        }
    }

    // Bitmaps
    public static class FeatureMap {
        /**
         * Node supports Basic Commissioning Method.
         */
        public boolean basic;

        public FeatureMap(boolean basic) {
            this.basic = basic;
        }
    }

    public AdministratorCommissioningCluster(BigInteger nodeId, int endpointId) {
        super(nodeId, endpointId, 60, "AdministratorCommissioning");
    }

    protected AdministratorCommissioningCluster(BigInteger nodeId, int endpointId, int clusterId, String clusterName) {
        super(nodeId, endpointId, clusterId, clusterName);
    }

    // commands
    /**
     * This command is used by a current Administrator to instruct a Node to go into commissioning mode. The Enhanced
     * Commissioning Method specifies a window of time during which an already commissioned Node accepts PASE sessions.
     * The current Administrator MUST specify a timeout value for the duration of the OpenCommissioningWindow command.
     * When the OpenCommissioningWindow command expires or commissioning completes, the Node shall remove the Passcode
     * by deleting the PAKE passcode verifier as well as stop publishing the DNS-SD record corresponding to this command
|
||||
* as described in Section 4.3.1, “Commissionable Node Discovery”. The commissioning into a new Fabric completes
|
||||
* when the Node successfully receives a CommissioningComplete command, see Section 5.5, “Commissioning Flows”.
|
||||
* The parameters for OpenCommissioningWindow command are as follows:
|
||||
* A current Administrator may invoke this command to put a node in commissioning mode for the next Administrator.
|
||||
* On completion, the command shall return a cluster specific status code from the Section 11.19.6, “Status Codes”
|
||||
* below reflecting success or reasons for failure of the operation. The new Administrator shall discover the Node
|
||||
* on the IP network using DNS-based Service Discovery (DNS-SD) for commissioning.
|
||||
* If any format or validity errors related to the PAKEPasscodeVerifier, Iterations or Salt arguments arise, this
|
||||
* command shall fail with a cluster specific status code of PAKEParameterError.
|
||||
* If a commissioning window is already currently open, this command shall fail with a cluster specific status code
|
||||
* of Busy.
|
||||
* If the fail-safe timer is currently armed, this command shall fail with a cluster specific status code of Busy,
|
||||
* since it is likely that concurrent commissioning operations from multiple separate Commissioners are about to
|
||||
* take place.
|
||||
* In case of any other parameter error, this command shall fail with a status code of COMMAND_INVALID.
|
||||
*/
|
||||
public static ClusterCommand openCommissioningWindow(Integer commissioningTimeout, OctetString pakePasscodeVerifier,
|
||||
Integer discriminator, Integer iterations, OctetString salt) {
|
||||
Map<String, Object> map = new LinkedHashMap<>();
|
||||
if (commissioningTimeout != null) {
|
||||
map.put("commissioningTimeout", commissioningTimeout);
|
||||
}
|
||||
if (pakePasscodeVerifier != null) {
|
||||
map.put("pakePasscodeVerifier", pakePasscodeVerifier);
|
||||
}
|
||||
if (discriminator != null) {
|
||||
map.put("discriminator", discriminator);
|
||||
}
|
||||
if (iterations != null) {
|
||||
map.put("iterations", iterations);
|
||||
}
|
||||
if (salt != null) {
|
||||
map.put("salt", salt);
|
||||
}
|
||||
return new ClusterCommand("openCommissioningWindow", map);
|
||||
}
|
||||
|
||||
/**
|
||||
* This command may be used by a current Administrator to instruct a Node to go into commissioning mode, if the node
|
||||
* supports the Basic Commissioning Method. The Basic Commissioning Method specifies a window of time during which
|
||||
* an already commissioned Node accepts PASE sessions. The current Administrator shall specify a timeout value for
|
||||
* the duration of the OpenBasicCommissioningWindow command.
|
||||
* If a commissioning window is already currently open, this command shall fail with a cluster specific status code
|
||||
* of Busy.
|
||||
* If the fail-safe timer is currently armed, this command shall fail with a cluster specific status code of Busy,
|
||||
* since it is likely that concurrent commissioning operations from multiple separate Commissioners are about to
|
||||
* take place.
|
||||
* In case of any other parameter error, this command shall fail with a status code of COMMAND_INVALID.
|
||||
* The commissioning into a new Fabric completes when the Node successfully receives a CommissioningComplete
|
||||
* command, see Section 5.5, “Commissioning Flows”. The new Administrator shall discover the Node on the IP network
|
||||
* using DNS-based Service Discovery (DNS-SD) for commissioning.
|
||||
*/
|
||||
public static ClusterCommand openBasicCommissioningWindow(Integer commissioningTimeout) {
|
||||
Map<String, Object> map = new LinkedHashMap<>();
|
||||
if (commissioningTimeout != null) {
|
||||
map.put("commissioningTimeout", commissioningTimeout);
|
||||
}
|
||||
return new ClusterCommand("openBasicCommissioningWindow", map);
|
||||
}
|
||||
|
||||
/**
|
||||
* This command is used by a current Administrator to instruct a Node to revoke any active OpenCommissioningWindow
|
||||
* or OpenBasicCommissioningWindow command. This is an idempotent command and the Node shall (for ECM) delete the
|
||||
* temporary PAKEPasscodeVerifier and associated data, and stop publishing the DNS-SD record associated with the
|
||||
* OpenCommissioningWindow or OpenBasicCommissioningWindow command, see Section 4.3.1, “Commissionable Node
|
||||
* Discovery”.
|
||||
* If no commissioning window was open at time of receipt, this command shall fail with a cluster specific status
|
||||
* code of WindowNotOpen.
|
||||
* If the commissioning window was open and the fail-safe was armed when this command is received, the device shall
|
||||
* immediately expire the fail-safe and perform the cleanup steps outlined in Section 11.10.7.2.2, “Behavior on
|
||||
* expiry of Fail-Safe timer”.
|
||||
*/
|
||||
public static ClusterCommand revokeCommissioning() {
|
||||
return new ClusterCommand("revokeCommissioning");
|
||||
}
|
||||
|
||||
@Override
|
||||
public @NonNull String toString() {
|
||||
String str = "";
|
||||
str += "clusterRevision : " + clusterRevision + "\n";
|
||||
str += "featureMap : " + featureMap + "\n";
|
||||
str += "windowStatus : " + windowStatus + "\n";
|
||||
str += "adminFabricIndex : " + adminFabricIndex + "\n";
|
||||
str += "adminVendorId : " + adminVendorId + "\n";
|
||||
return str;
|
||||
}
|
||||
}
|
@ -0,0 +1,123 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */

// AUTO-GENERATED, DO NOT EDIT!

package org.openhab.binding.matter.internal.client.dto.cluster.gen;

import java.math.BigInteger;

import org.eclipse.jdt.annotation.NonNull;

/**
 * AirQuality
 *
 * @author Dan Cunningham - Initial contribution
 */
public class AirQualityCluster extends BaseCluster {

    public static final int CLUSTER_ID = 0x005B;
    public static final String CLUSTER_NAME = "AirQuality";
    public static final String CLUSTER_PREFIX = "airQuality";
    public static final String ATTRIBUTE_CLUSTER_REVISION = "clusterRevision";
    public static final String ATTRIBUTE_FEATURE_MAP = "featureMap";
    public static final String ATTRIBUTE_AIR_QUALITY = "airQuality";

    public Integer clusterRevision; // 65533 ClusterRevision
    public FeatureMap featureMap; // 65532 FeatureMap
    /**
     * Indicates a value from AirQualityEnum that is indicative of the currently measured air quality.
     */
    public AirQualityEnum airQuality; // 0 AirQualityEnum R V

    // Enums
    /**
     * The AirQualityEnum provides a representation of the quality of the analyzed air. It is up to the device
     * manufacturer to determine the mapping between the measured values and their corresponding enumeration values.
     */
    public enum AirQualityEnum implements MatterEnum {
        UNKNOWN(0, "Unknown"),
        GOOD(1, "Good"),
        FAIR(2, "Fair"),
        MODERATE(3, "Moderate"),
        POOR(4, "Poor"),
        VERY_POOR(5, "Very Poor"),
        EXTREMELY_POOR(6, "Extremely Poor");

        public final Integer value;
        public final String label;

        private AirQualityEnum(Integer value, String label) {
            this.value = value;
            this.label = label;
        }

        @Override
        public Integer getValue() {
            return value;
        }

        @Override
        public String getLabel() {
            return label;
        }
    }

    // Bitmaps
    public static class FeatureMap {
        /**
         * Cluster supports the Fair air quality level
         */
        public boolean fair;
        /**
         * Cluster supports the Moderate air quality level
         */
        public boolean moderate;
        /**
         * Cluster supports the Very poor air quality level
         */
        public boolean veryPoor;
        /**
         * Cluster supports the Extremely poor air quality level
         */
        public boolean extremelyPoor;

        public FeatureMap(boolean fair, boolean moderate, boolean veryPoor, boolean extremelyPoor) {
            this.fair = fair;
            this.moderate = moderate;
            this.veryPoor = veryPoor;
            this.extremelyPoor = extremelyPoor;
        }
    }

    public AirQualityCluster(BigInteger nodeId, int endpointId) {
        super(nodeId, endpointId, 91, "AirQuality");
    }

    protected AirQualityCluster(BigInteger nodeId, int endpointId, int clusterId, String clusterName) {
        super(nodeId, endpointId, clusterId, clusterName);
    }

    @Override
    public @NonNull String toString() {
        String str = "";
        str += "clusterRevision : " + clusterRevision + "\n";
        str += "featureMap : " + featureMap + "\n";
        str += "airQuality : " + airQuality + "\n";
        return str;
    }
}
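Generated enums in these clusters carry both a numeric wire value and a human-readable label through the MatterEnum interface. Mapping a raw attribute integer back to an enum constant can be sketched as below; the fromValue helper and the fallback to UNKNOWN are illustrative assumptions for this sketch, not necessarily how the binding resolves values.

```java
public class EnumDecodeSketch {
    // Mirrors the generated enum shape (wire value + display label)
    enum AirQualityEnum {
        UNKNOWN(0, "Unknown"), GOOD(1, "Good"), FAIR(2, "Fair"), MODERATE(3, "Moderate"),
        POOR(4, "Poor"), VERY_POOR(5, "Very Poor"), EXTREMELY_POOR(6, "Extremely Poor");

        final int value;
        final String label;

        AirQualityEnum(int value, String label) {
            this.value = value;
            this.label = label;
        }

        // Hypothetical lookup helper: a linear scan over values() is fine for
        // enums this small; falls back to UNKNOWN rather than throwing on an
        // unrecognized wire value
        static AirQualityEnum fromValue(int raw) {
            for (AirQualityEnum e : values()) {
                if (e.value == raw) {
                    return e;
                }
            }
            return UNKNOWN;
        }
    }

    public static void main(String[] args) {
        System.out.println(AirQualityEnum.fromValue(5).label); // prints "Very Poor"
    }
}
```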
@ -0,0 +1,158 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */

// AUTO-GENERATED, DO NOT EDIT!

package org.openhab.binding.matter.internal.client.dto.cluster.gen;

import java.math.BigInteger;
import java.util.LinkedHashMap;
import java.util.Map;

import org.eclipse.jdt.annotation.NonNull;
import org.openhab.binding.matter.internal.client.dto.cluster.ClusterCommand;

/**
 * AlarmBase
 *
 * @author Dan Cunningham - Initial contribution
 */
public abstract class AlarmBaseCluster extends BaseCluster {

    public static final String CLUSTER_NAME = "AlarmBase";
    public static final String CLUSTER_PREFIX = "alarmBase";
    public static final String ATTRIBUTE_CLUSTER_REVISION = "clusterRevision";
    public static final String ATTRIBUTE_FEATURE_MAP = "featureMap";
    public static final String ATTRIBUTE_MASK = "mask";
    public static final String ATTRIBUTE_LATCH = "latch";
    public static final String ATTRIBUTE_STATE = "state";
    public static final String ATTRIBUTE_SUPPORTED = "supported";

    public Integer clusterRevision; // 65533 ClusterRevision
    public FeatureMap featureMap; // 65532 FeatureMap
    /**
     * Indicates a bitmap where each bit set in the Mask attribute corresponds to an alarm that shall be enabled.
     */
    public AlarmBitmap mask; // 0 AlarmBitmap R V
    /**
     * Indicates a bitmap where each bit set in the Latch attribute shall indicate that the corresponding alarm will be
     * latched when set, and will not reset to inactive when the underlying condition which caused the alarm is no
     * longer present, and so requires an explicit reset using the Reset command.
     */
    public AlarmBitmap latch; // 1 AlarmBitmap R V
    /**
     * Indicates a bitmap where each bit shall represent the state of an alarm. The value of true means the alarm is
     * active, otherwise the alarm is inactive.
     */
    public AlarmBitmap state; // 2 AlarmBitmap R V
    /**
     * Indicates a bitmap where each bit shall represent whether or not an alarm is supported. The value of true means
     * the alarm is supported, otherwise the alarm is not supported.
     * If an alarm is not supported, the corresponding bit in Mask, Latch, and State shall be false.
     */
    public AlarmBitmap supported; // 3 AlarmBitmap R V

    // Structs
    /**
     * This event shall be generated when one or more alarms change state, and shall have these fields:
     */
    public class Notify {
        /**
         * This field shall indicate those alarms that have become active.
         */
        public AlarmBitmap active; // AlarmBitmap
        /**
         * This field shall indicate those alarms that have become inactive.
         */
        public AlarmBitmap inactive; // AlarmBitmap
        /**
         * This field shall be a copy of the new State attribute value that resulted in the event being generated. That
         * is, this field shall have all the bits in Active set and shall NOT have any of the bits in Inactive set.
         */
        public AlarmBitmap state; // AlarmBitmap
        /**
         * This field shall be a copy of the Mask attribute when this event was generated.
         */
        public AlarmBitmap mask; // AlarmBitmap

        public Notify(AlarmBitmap active, AlarmBitmap inactive, AlarmBitmap state, AlarmBitmap mask) {
            this.active = active;
            this.inactive = inactive;
            this.state = state;
            this.mask = mask;
        }
    }

    // Bitmaps
    /**
     * This data type shall be a map32 with values defined by the derived cluster. The meaning of each bit position
     * shall be consistent for all attributes in a derived cluster. That is, if bit 0 is defined for an alarm, the
     * Latch, State, and Supported information for that alarm are also bit 0.
     */
    public static class AlarmBitmap {
        public AlarmBitmap() {
        }
    }

    public static class FeatureMap {
        /**
         * This feature indicates that alarms can be reset via the Reset command.
         */
        public boolean reset;

        public FeatureMap(boolean reset) {
            this.reset = reset;
        }
    }

    protected AlarmBaseCluster(BigInteger nodeId, int endpointId, int clusterId, String clusterName) {
        super(nodeId, endpointId, clusterId, clusterName);
    }

    // commands
    /**
     * This command resets active and latched alarms (if possible). Any generated Notify event shall contain fields that
     * represent the state of the server after the command has been processed.
     */
    public static ClusterCommand reset(AlarmBitmap alarms) {
        Map<String, Object> map = new LinkedHashMap<>();
        if (alarms != null) {
            map.put("alarms", alarms);
        }
        return new ClusterCommand("reset", map);
    }

    /**
     * This command allows a client to request that an alarm be enabled or suppressed at the server.
     */
    public static ClusterCommand modifyEnabledAlarms(AlarmBitmap mask) {
        Map<String, Object> map = new LinkedHashMap<>();
        if (mask != null) {
            map.put("mask", mask);
        }
        return new ClusterCommand("modifyEnabledAlarms", map);
    }

    @Override
    public @NonNull String toString() {
        String str = "";
        str += "clusterRevision : " + clusterRevision + "\n";
        str += "featureMap : " + featureMap + "\n";
        str += "mask : " + mask + "\n";
        str += "latch : " + latch + "\n";
        str += "state : " + state + "\n";
        str += "supported : " + supported + "\n";
        return str;
    }
}
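The AlarmBitmap above is a map32 whose bit positions are aligned across the Mask, Latch, State, and Supported attributes, and the attribute javadoc states that an unsupported alarm must read false in the other three. That invariant can be illustrated with plain int bitmasks; the helper name below is hypothetical and not part of the binding.

```java
public class AlarmBitmapSketch {
    // Spec rule illustrated: bits not set in Supported must be clear in
    // Mask, Latch, and State (helper name is hypothetical)
    static boolean consistentWithSupported(int bits, int supported) {
        return (bits & ~supported) == 0;
    }

    public static void main(String[] args) {
        int supported = 0b0101; // alarms 0 and 2 exist on this device
        int mask = 0b0001;      // only alarm 0 is enabled
        int state = 0b0001;     // alarm 0 is currently active

        System.out.println(consistentWithSupported(mask, supported));   // prints "true"
        System.out.println(consistentWithSupported(state, supported));  // prints "true"
        // Alarm 1 is not supported, so a set bit 1 would violate the rule
        System.out.println(consistentWithSupported(0b0010, supported)); // prints "false"
    }
}
```

Because bit positions are shared, testing `state & (1 << n)` answers "is alarm n active" with the same n used for Mask, Latch, and Supported.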
@ -0,0 +1,157 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */

// AUTO-GENERATED, DO NOT EDIT!

package org.openhab.binding.matter.internal.client.dto.cluster.gen;

import java.math.BigInteger;
import java.util.List;

import org.eclipse.jdt.annotation.NonNull;

/**
 * ApplicationBasic
 *
 * @author Dan Cunningham - Initial contribution
 */
public class ApplicationBasicCluster extends BaseCluster {

    public static final int CLUSTER_ID = 0x050D;
    public static final String CLUSTER_NAME = "ApplicationBasic";
    public static final String CLUSTER_PREFIX = "applicationBasic";
    public static final String ATTRIBUTE_CLUSTER_REVISION = "clusterRevision";
    public static final String ATTRIBUTE_VENDOR_NAME = "vendorName";
    public static final String ATTRIBUTE_VENDOR_ID = "vendorId";
    public static final String ATTRIBUTE_APPLICATION_NAME = "applicationName";
    public static final String ATTRIBUTE_PRODUCT_ID = "productId";
    public static final String ATTRIBUTE_APPLICATION = "application";
    public static final String ATTRIBUTE_STATUS = "status";
    public static final String ATTRIBUTE_APPLICATION_VERSION = "applicationVersion";
    public static final String ATTRIBUTE_ALLOWED_VENDOR_LIST = "allowedVendorList";

    public Integer clusterRevision; // 65533 ClusterRevision
    /**
     * This attribute shall specify a human readable (displayable) name of the vendor for the Content App.
     */
    public String vendorName; // 0 string R V
    /**
     * This attribute, if present, shall specify the Connectivity Standards Alliance assigned Vendor ID for the Content
     * App.
     */
    public Integer vendorId; // 1 vendor-id R V
    /**
     * This attribute shall specify a human readable (displayable) name of the Content App assigned by the vendor. For
     * example, "NPR On Demand". The maximum length of the ApplicationName attribute is 256 bytes of UTF-8
     * characters.
     */
    public String applicationName; // 2 string R V
    /**
     * This attribute, if present, shall specify a numeric ID assigned by the vendor to identify a specific Content App
     * made by them. If the Content App is certified by the Connectivity Standards Alliance, then this would be the
     * Product ID as specified by the vendor for the certification.
     */
    public Integer productId; // 3 uint16 R V
    /**
     * This attribute shall specify a Content App which consists of an Application ID using a specified catalog.
     */
    public ApplicationStruct application; // 4 ApplicationStruct R V
    /**
     * This attribute shall specify the current running status of the application.
     */
    public ApplicationStatusEnum status; // 5 ApplicationStatusEnum R V
    /**
     * This attribute shall specify a human readable (displayable) version of the Content App assigned by the vendor.
     * The maximum length of the ApplicationVersion attribute is 32 bytes of UTF-8 characters.
     */
    public String applicationVersion; // 6 string R V
    /**
     * This attribute is a list of vendor IDs. Each entry is a vendor-id.
     */
    public List<Integer> allowedVendorList; // 7 list R A

    // Structs
    /**
     * This indicates a global identifier for an Application given a catalog.
     */
    public class ApplicationStruct {
        /**
         * This field shall indicate the Connectivity Standards Alliance issued vendor ID for the catalog. The DIAL
         * registry shall use value 0x0000.
         * It is assumed that Content App Platform providers (see Video Player Architecture section in [MatterDevLib])
         * will have their own catalog vendor ID (set to their own Vendor ID) and will assign an ApplicationID to each
         * Content App.
         */
        public Integer catalogVendorId; // uint16
        /**
         * This field shall indicate the application identifier, expressed as a string, such as "123456-5433",
         * "PruneVideo" or "Company X". This field shall be unique within a catalog.
         * For the DIAL registry catalog, this value shall be the DIAL prefix.
         */
        public String applicationId; // string

        public ApplicationStruct(Integer catalogVendorId, String applicationId) {
            this.catalogVendorId = catalogVendorId;
            this.applicationId = applicationId;
        }
    }

    // Enums
    public enum ApplicationStatusEnum implements MatterEnum {
        STOPPED(0, "Stopped"),
        ACTIVE_VISIBLE_FOCUS(1, "Active Visible Focus"),
        ACTIVE_HIDDEN(2, "Active Hidden"),
        ACTIVE_VISIBLE_NOT_FOCUS(3, "Active Visible Not Focus");

        public final Integer value;
        public final String label;

        private ApplicationStatusEnum(Integer value, String label) {
            this.value = value;
            this.label = label;
        }

        @Override
        public Integer getValue() {
            return value;
        }

        @Override
        public String getLabel() {
            return label;
        }
    }

    public ApplicationBasicCluster(BigInteger nodeId, int endpointId) {
        super(nodeId, endpointId, 1293, "ApplicationBasic");
    }

    protected ApplicationBasicCluster(BigInteger nodeId, int endpointId, int clusterId, String clusterName) {
        super(nodeId, endpointId, clusterId, clusterName);
    }

    @Override
    public @NonNull String toString() {
        String str = "";
        str += "clusterRevision : " + clusterRevision + "\n";
        str += "vendorName : " + vendorName + "\n";
        str += "vendorId : " + vendorId + "\n";
        str += "applicationName : " + applicationName + "\n";
        str += "productId : " + productId + "\n";
        str += "application : " + application + "\n";
        str += "status : " + status + "\n";
        str += "applicationVersion : " + applicationVersion + "\n";
        str += "allowedVendorList : " + allowedVendorList + "\n";
        return str;
    }
}
@ -0,0 +1,213 @@
/*
|
||||
* Copyright (c) 2010-2025 Contributors to the openHAB project
|
||||
*
|
||||
* See the NOTICE file(s) distributed with this work for additional
|
||||
* information.
|
||||
*
|
||||
* This program and the accompanying materials are made available under the
|
||||
* terms of the Eclipse Public License 2.0 which is available at
|
||||
* http://www.eclipse.org/legal/epl-2.0
|
||||
*
|
||||
* SPDX-License-Identifier: EPL-2.0
|
||||
*/
|
||||
|
||||
// AUTO-GENERATED, DO NOT EDIT!
|
||||
|
||||
package org.openhab.binding.matter.internal.client.dto.cluster.gen;
|
||||
|
||||
import java.math.BigInteger;
|
||||
import java.util.LinkedHashMap;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
|
||||
import org.eclipse.jdt.annotation.NonNull;
|
||||
import org.openhab.binding.matter.internal.client.dto.cluster.ClusterCommand;
|
||||
|
||||
/**
|
||||
* ApplicationLauncher
|
||||
*
|
||||
* @author Dan Cunningham - Initial contribution
|
||||
*/
|
||||
public class ApplicationLauncherCluster extends BaseCluster {
|
||||
|
||||
public static final int CLUSTER_ID = 0x050C;
|
||||
public static final String CLUSTER_NAME = "ApplicationLauncher";
|
||||
public static final String CLUSTER_PREFIX = "applicationLauncher";
|
||||
public static final String ATTRIBUTE_CLUSTER_REVISION = "clusterRevision";
|
||||
public static final String ATTRIBUTE_FEATURE_MAP = "featureMap";
|
||||
public static final String ATTRIBUTE_CATALOG_LIST = "catalogList";
|
||||
public static final String ATTRIBUTE_CURRENT_APP = "currentApp";
|
||||
|
||||
public Integer clusterRevision; // 65533 ClusterRevision
|
||||
public FeatureMap featureMap; // 65532 FeatureMap
|
||||
/**
|
||||
* This attribute shall specify the list of supported application catalogs, where each entry in the list is the
|
||||
* CSA-issued vendor ID for the catalog. The DIAL registry (see [DIAL Registry]) shall use value 0x0000.
|
||||
* It is expected that Content App Platform providers will have their own catalog vendor ID (set to their own Vendor
|
||||
* ID) and will assign an ApplicationID to each Content App.
|
||||
*/
|
||||
public List<Integer> catalogList; // 0 list R V
|
||||
/**
|
||||
* This attribute shall specify the current in-focus application, identified using an Application ID, catalog vendor
|
||||
* ID and the corresponding endpoint number when the application is represented by a Content App endpoint. A null
|
||||
* shall be used to indicate there is no current in-focus application.
|
||||
*/
|
||||
public ApplicationEPStruct currentApp; // 1 ApplicationEPStruct R V
|
||||
// Structs
|
||||
|
||||
/**
|
||||
* This indicates a global identifier for an Application given a catalog.
|
||||
*/
|
||||
public class ApplicationStruct {
|
||||
/**
|
||||
* This field shall indicate the CSA-issued vendor ID for the catalog. The DIAL registry shall use value 0x0000.
|
||||
* Content App Platform providers will have their own catalog vendor ID (set to their own Vendor ID) and will
|
||||
* assign an ApplicationID to each Content App.
|
||||
*/
|
||||
public Integer catalogVendorId; // uint16
|
||||
/**
|
||||
* This field shall indicate the application identifier, expressed as a string, such as "PruneVideo"
|
||||
* or "Company X". This field shall be unique within a catalog.
|
||||
* For the DIAL registry catalog, this value shall be the DIAL prefix (see [DIAL Registry]).
|
||||
*/
|
||||
public String applicationId; // string
|
||||
|
||||
public ApplicationStruct(Integer catalogVendorId, String applicationId) {
|
||||
this.catalogVendorId = catalogVendorId;
|
||||
this.applicationId = applicationId;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* This specifies an app along with its corresponding endpoint.
|
||||
*/
|
||||
public class ApplicationEPStruct {
|
||||
public ApplicationStruct application; // ApplicationStruct
|
||||
public Integer endpoint; // endpoint-no
|
||||
|
||||
public ApplicationEPStruct(ApplicationStruct application, Integer endpoint) {
|
||||
this.application = application;
|
||||
this.endpoint = endpoint;
|
||||
}
|
||||
}
|
||||
|
||||
// Enums
|
||||
public enum StatusEnum implements MatterEnum {
|
||||
SUCCESS(0, "Success"),
|
||||
APP_NOT_AVAILABLE(1, "App Not Available"),
|
||||
SYSTEM_BUSY(2, "System Busy"),
|
||||
PENDING_USER_APPROVAL(3, "Pending User Approval"),
|
||||
DOWNLOADING(4, "Downloading"),
|
||||
INSTALLING(5, "Installing");
|
||||
|
||||
public final Integer value;
|
||||
public final String label;
|
||||
|
||||
private StatusEnum(Integer value, String label) {
|
||||
this.value = value;
|
||||
this.label = label;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Integer getValue() {
|
||||
return value;
|
||||
}
|
||||
|
||||
@Override
|
||||
public String getLabel() {
|
||||
return label;
|
||||
}
|
||||
}
|
||||
|
||||
// Bitmaps
|
||||
public static class FeatureMap {
|
||||
/**
|
||||
*
|
||||
* Support for attributes and commands required for endpoint to support launching any application within the
|
||||
* supported application catalogs
|
||||
*/
|
||||
public boolean applicationPlatform;
|
||||
|
||||
public FeatureMap(boolean applicationPlatform) {
|
||||
this.applicationPlatform = applicationPlatform;
|
||||
}
|
||||
}
|
||||
|
||||
public ApplicationLauncherCluster(BigInteger nodeId, int endpointId) {
|
||||
super(nodeId, endpointId, 1292, "ApplicationLauncher");
|
||||
}
|
||||
|
||||
protected ApplicationLauncherCluster(BigInteger nodeId, int endpointId, int clusterId, String clusterName) {
|
||||
super(nodeId, endpointId, clusterId, clusterName);
|
||||
}
|
||||
|
||||
// commands
|
||||
/**
|
||||
     * Upon receipt of this command, the server shall launch the application with optional data. The application shall
     * be either
     * • the specified application, if the Application Platform feature is supported;
     * • otherwise the application corresponding to the endpoint.
     * The endpoint shall launch and bring to foreground the requisite application if the application is not already
     * launched and in foreground. The Status attribute shall be updated to ActiveVisibleFocus on the Application Basic
     * cluster of the Endpoint corresponding to the launched application. The Status attribute shall be updated on any
     * other application whose Status may have changed as a result of this command. The CurrentApp attribute, if
     * supported, shall be updated to reflect the new application in the foreground.
     * This command returns a Launcher Response.
     */
    public static ClusterCommand launchApp(ApplicationStruct application, OctetString data) {
        Map<String, Object> map = new LinkedHashMap<>();
        if (application != null) {
            map.put("application", application);
        }
        if (data != null) {
            map.put("data", data);
        }
        return new ClusterCommand("launchApp", map);
    }

    /**
     * Upon receipt of this command, the server shall stop the application if it is running. The application shall be
     * either
     * • the specified application, if the Application Platform feature is supported;
     * • otherwise the application corresponding to the endpoint.
     * The Status attribute shall be updated to Stopped on the Application Basic cluster of the Endpoint corresponding
     * to the stopped application. The Status attribute shall be updated on any other application whose Status may have
     * changed as a result of this command.
     * This command returns a Launcher Response.
     */
    public static ClusterCommand stopApp(ApplicationStruct application) {
        Map<String, Object> map = new LinkedHashMap<>();
        if (application != null) {
            map.put("application", application);
        }
        return new ClusterCommand("stopApp", map);
    }

    /**
     * Upon receipt of this command, the server shall hide the application. The application shall be either
     * • the specified application, if the Application Platform feature is supported;
     * • otherwise the application corresponding to the endpoint.
     * The endpoint may decide to stop the application based on manufacturer specific behavior or resource constraints
     * if any. The Status attribute shall be updated to ActiveHidden or Stopped, depending on the action taken, on the
     * Application Basic cluster of the Endpoint corresponding to the application on which the action was taken. The
     * Status attribute shall be updated on any other application whose Status may have changed as a result of this
     * command. This command returns a Launcher Response.
     */
    public static ClusterCommand hideApp(ApplicationStruct application) {
        Map<String, Object> map = new LinkedHashMap<>();
        if (application != null) {
            map.put("application", application);
        }
        return new ClusterCommand("hideApp", map);
    }

    @Override
    public @NonNull String toString() {
        String str = "";
        str += "clusterRevision : " + clusterRevision + "\n";
        str += "featureMap : " + featureMap + "\n";
        str += "catalogList : " + catalogList + "\n";
        str += "currentApp : " + currentApp + "\n";
        return str;
    }
}
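The generated command factories above (launchApp, stopApp, hideApp) all follow the same null-guarded payload pattern: optional fields are added to the payload map only when non-null, so absent fields are omitted from the serialized command. A standalone sketch of that pattern using only JDK types (the `CommandMapSketch` class and `buildPayload` helper are illustrative, not part of the binding):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of the null-guarded command-payload pattern used by the
// generated launchApp/stopApp/hideApp factories; not binding code.
public class CommandMapSketch {
    static Map<String, Object> buildPayload(String application, String data) {
        Map<String, Object> map = new LinkedHashMap<>();
        if (application != null) {
            map.put("application", application);
        }
        if (data != null) {
            map.put("data", data);
        }
        return map;
    }

    public static void main(String[] args) {
        // With data omitted, only the application key ends up in the payload.
        Map<String, Object> payload = buildPayload("catalog-app", null);
        System.out.println(payload.containsKey("application")); // true
        System.out.println(payload.containsKey("data")); // false
    }
}
```

Because a LinkedHashMap preserves insertion order, the serialized fields also appear in the order the generator emits them.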
@@ -0,0 +1,169 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */

// AUTO-GENERATED, DO NOT EDIT!

package org.openhab.binding.matter.internal.client.dto.cluster.gen;

import java.math.BigInteger;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

import org.eclipse.jdt.annotation.NonNull;
import org.openhab.binding.matter.internal.client.dto.cluster.ClusterCommand;

/**
 * AudioOutput
 *
 * @author Dan Cunningham - Initial contribution
 */
public class AudioOutputCluster extends BaseCluster {

    public static final int CLUSTER_ID = 0x050B;
    public static final String CLUSTER_NAME = "AudioOutput";
    public static final String CLUSTER_PREFIX = "audioOutput";
    public static final String ATTRIBUTE_CLUSTER_REVISION = "clusterRevision";
    public static final String ATTRIBUTE_FEATURE_MAP = "featureMap";
    public static final String ATTRIBUTE_OUTPUT_LIST = "outputList";
    public static final String ATTRIBUTE_CURRENT_OUTPUT = "currentOutput";

    public Integer clusterRevision; // 65533 ClusterRevision
    public FeatureMap featureMap; // 65532 FeatureMap
    /**
     * This attribute provides the list of outputs supported by the device.
     */
    public List<OutputInfoStruct> outputList; // 0 list R V
    /**
     * This attribute contains the value of the index field of the currently selected OutputInfoStruct.
     */
    public Integer currentOutput; // 1 uint8 R V

    // Structs

    /**
     * This contains information about an output.
     */
    public class OutputInfoStruct {
        /**
         * This field shall indicate the unique index into the list of outputs.
         */
        public Integer index; // uint8
        /**
         * This field shall indicate the type of output.
         */
        public OutputTypeEnum outputType; // OutputTypeEnum
        /**
         * The device defined and user editable output name, such as “Soundbar”, “Speakers”. This field may be blank,
         * but SHOULD be provided when known.
         */
        public String name; // string

        public OutputInfoStruct(Integer index, OutputTypeEnum outputType, String name) {
            this.index = index;
            this.outputType = outputType;
            this.name = name;
        }
    }

    // Enums
    /**
     * The type of output, expressed as an enum, with the following values:
     */
    public enum OutputTypeEnum implements MatterEnum {
        HDMI(0, "Hdmi"),
        BT(1, "Bt"),
        OPTICAL(2, "Optical"),
        HEADPHONE(3, "Headphone"),
        INTERNAL(4, "Internal"),
        OTHER(5, "Other");

        public final Integer value;
        public final String label;

        private OutputTypeEnum(Integer value, String label) {
            this.value = value;
            this.label = label;
        }

        @Override
        public Integer getValue() {
            return value;
        }

        @Override
        public String getLabel() {
            return label;
        }
    }

    // Bitmaps
    public static class FeatureMap {
        /**
         * Supports updates to output names
         */
        public boolean nameUpdates;

        public FeatureMap(boolean nameUpdates) {
            this.nameUpdates = nameUpdates;
        }
    }

    public AudioOutputCluster(BigInteger nodeId, int endpointId) {
        super(nodeId, endpointId, 1291, "AudioOutput");
    }

    protected AudioOutputCluster(BigInteger nodeId, int endpointId, int clusterId, String clusterName) {
        super(nodeId, endpointId, clusterId, clusterName);
    }

    // commands
    /**
     * Upon receipt, this shall change the output on the device to the output at a specific index in the Output List.
     * Note that when the current output is set to an output of type HDMI, adjustments to volume via a Speaker endpoint
     * on the same node may cause HDMI volume up/down commands to be sent to the given HDMI output.
     */
    public static ClusterCommand selectOutput(Integer index) {
        Map<String, Object> map = new LinkedHashMap<>();
        if (index != null) {
            map.put("index", index);
        }
        return new ClusterCommand("selectOutput", map);
    }

    /**
     * Upon receipt, this shall rename the output at a specific index in the Output List.
     * Updates to the output name shall appear in the device’s settings menus. Name updates may automatically be sent to
     * the actual device to which the output connects.
     */
    public static ClusterCommand renameOutput(Integer index, String name) {
        Map<String, Object> map = new LinkedHashMap<>();
        if (index != null) {
            map.put("index", index);
        }
        if (name != null) {
            map.put("name", name);
        }
        return new ClusterCommand("renameOutput", map);
    }

    @Override
    public @NonNull String toString() {
        String str = "";
        str += "clusterRevision : " + clusterRevision + "\n";
        str += "featureMap : " + featureMap + "\n";
        str += "outputList : " + outputList + "\n";
        str += "currentOutput : " + currentOutput + "\n";
        return str;
    }
}
@@ -0,0 +1,203 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */

// AUTO-GENERATED, DO NOT EDIT!

package org.openhab.binding.matter.internal.client.dto.cluster.gen;

import java.math.BigInteger;

import org.eclipse.jdt.annotation.NonNull;

/**
 * BallastConfiguration
 *
 * @author Dan Cunningham - Initial contribution
 */
public class BallastConfigurationCluster extends BaseCluster {

    public static final int CLUSTER_ID = 0x0301;
    public static final String CLUSTER_NAME = "BallastConfiguration";
    public static final String CLUSTER_PREFIX = "ballastConfiguration";
    public static final String ATTRIBUTE_CLUSTER_REVISION = "clusterRevision";
    public static final String ATTRIBUTE_PHYSICAL_MIN_LEVEL = "physicalMinLevel";
    public static final String ATTRIBUTE_PHYSICAL_MAX_LEVEL = "physicalMaxLevel";
    public static final String ATTRIBUTE_BALLAST_STATUS = "ballastStatus";
    public static final String ATTRIBUTE_MIN_LEVEL = "minLevel";
    public static final String ATTRIBUTE_MAX_LEVEL = "maxLevel";
    public static final String ATTRIBUTE_INTRINSIC_BALLAST_FACTOR = "intrinsicBallastFactor";
    public static final String ATTRIBUTE_BALLAST_FACTOR_ADJUSTMENT = "ballastFactorAdjustment";
    public static final String ATTRIBUTE_LAMP_QUANTITY = "lampQuantity";
    public static final String ATTRIBUTE_LAMP_TYPE = "lampType";
    public static final String ATTRIBUTE_LAMP_MANUFACTURER = "lampManufacturer";
    public static final String ATTRIBUTE_LAMP_RATED_HOURS = "lampRatedHours";
    public static final String ATTRIBUTE_LAMP_BURN_HOURS = "lampBurnHours";
    public static final String ATTRIBUTE_LAMP_ALARM_MODE = "lampAlarmMode";
    public static final String ATTRIBUTE_LAMP_BURN_HOURS_TRIP_POINT = "lampBurnHoursTripPoint";

    public Integer clusterRevision; // 65533 ClusterRevision
    /**
     * This attribute shall specify the minimum light output the ballast can achieve according to the dimming light
     * curve (see Dimming Curve).
     */
    public Integer physicalMinLevel; // 0 uint8 R V
    /**
     * This attribute shall specify the maximum light output the ballast can achieve according to the dimming light
     * curve (see Dimming Curve).
     */
    public Integer physicalMaxLevel; // 1 uint8 R V
    /**
     * This attribute shall specify the status of various aspects of the ballast or the connected lights, see
     * BallastStatusBitmap.
     */
    public BallastStatusBitmap ballastStatus; // 2 BallastStatusBitmap R V
    /**
     * This attribute shall specify the light output of the ballast according to the dimming light curve (see Dimming
     * Curve) when the Level Control Cluster’s CurrentLevel attribute equals to 1 (and the On/Off Cluster’s OnOff
     * attribute equals to TRUE).
     * The value of this attribute shall be both greater than or equal to PhysicalMinLevel and less than or equal to
     * MaxLevel. If an attempt is made to set this attribute to a level where these conditions are not met, a response
     * shall be returned with status code set to CONSTRAINT_ERROR, and the level shall NOT be set.
     */
    public Integer minLevel; // 16 uint8 RW VM
    /**
     * This attribute shall specify the light output of the ballast according to the dimming light curve (see Dimming
     * Curve) when the Level Control Cluster’s CurrentLevel attribute equals to 254 (and the On/Off Cluster’s OnOff
     * attribute equals to TRUE).
     * The value of this attribute shall be both less than or equal to PhysicalMaxLevel and greater than or equal to
     * MinLevel. If an attempt is made to set this attribute to a level where these conditions are not met, a response
     * shall be returned with status code set to CONSTRAINT_ERROR, and the level shall NOT be set.
     */
    public Integer maxLevel; // 17 uint8 RW VM
    /**
     * This attribute shall specify the ballast factor, as a percentage, of the ballast/lamp combination, prior to any
     * adjustment.
     * A value of null indicates an invalid value.
     */
    public Integer intrinsicBallastFactor; // 20 uint8 RW VM
    /**
     * This attribute shall specify the multiplication factor, as a percentage, to be applied to the configured light
     * output of the lamps. A typical use for this attribute is to compensate for reduction in efficiency over the
     * lifetime of a lamp.
     * The light output is given by:
     * actual light output = configured light output x BallastFactorAdjustment / 100%
     * The range for this attribute is manufacturer dependent. If an attempt is made to set this attribute to a level
     * that cannot be supported, a response shall be returned with status code set to CONSTRAINT_ERROR, and the level
     * shall NOT be changed. The value of null indicates that ballast factor scaling is not in use.
     */
    public Integer ballastFactorAdjustment; // 21 uint8 RW VM
    /**
     * This attribute shall specify the number of lamps connected to this ballast. (Note 1: this number does not take
     * into account whether lamps are actually in their sockets or not).
     */
    public Integer lampQuantity; // 32 uint8 R V
    /**
     * This attribute shall specify the type of lamps (including their wattage) connected to the ballast.
     */
    public String lampType; // 48 string RW VM
    /**
     * This attribute shall specify the name of the manufacturer of the currently connected lamps.
     */
    public String lampManufacturer; // 49 string RW VM
    /**
     * This attribute shall specify the number of hours of use the lamps are rated for by the manufacturer.
     * A value of null indicates an invalid or unknown time.
     */
    public Integer lampRatedHours; // 50 uint24 RW VM
    /**
     * This attribute shall specify the length of time, in hours, the currently connected lamps have been operated,
     * cumulative since the last re-lamping. Burn hours shall NOT be accumulated if the lamps are off.
     * This attribute SHOULD be reset to zero (e.g., remotely) when the lamps are changed. If partially used lamps are
     * connected, LampBurnHours SHOULD be updated to reflect the burn hours of the lamps.
     * A value of null indicates an invalid or unknown time.
     */
    public Integer lampBurnHours; // 51 uint24 RW VM
    /**
     * This attribute shall specify which attributes may cause an alarm notification to be generated. A '1' in each bit
     * position means that its associated attribute is able to generate an alarm.
     */
    public LampAlarmModeBitmap lampAlarmMode; // 52 LampAlarmModeBitmap RW VM
    /**
     * This attribute shall specify the number of hours the LampBurnHours attribute may reach before an alarm is
     * generated.
     * If the Alarms cluster is not present on the same device this attribute is not used and thus may be omitted (see
     * Dependencies).
     * The Alarm Code field included in the generated alarm shall be 0x01.
     * If this attribute has the value of null, then this alarm shall NOT be generated.
     */
    public Integer lampBurnHoursTripPoint; // 53 uint24 RW VM

    // Bitmaps
    public static class BallastStatusBitmap {
        /**
         * Operational state of the ballast.
         * This bit shall indicate whether the ballast is operational.
         * • 0 = The ballast is fully operational
         * • 1 = The ballast is not fully operational
         */
        public boolean ballastNonOperational;
        /**
         * Operational state of the lamps.
         * This bit shall indicate whether all lamps are operational.
         * • 0 = All lamps are operational
         * • 1 = One or more lamp is not in its socket or is faulty
         */
        public boolean lampFailure;

        public BallastStatusBitmap(boolean ballastNonOperational, boolean lampFailure) {
            this.ballastNonOperational = ballastNonOperational;
            this.lampFailure = lampFailure;
        }
    }

    public static class LampAlarmModeBitmap {
        /**
         * State of LampBurnHours alarm generation
         * This bit shall indicate that the LampBurnHours attribute may generate an alarm.
         */
        public boolean lampBurnHours;

        public LampAlarmModeBitmap(boolean lampBurnHours) {
            this.lampBurnHours = lampBurnHours;
        }
    }

    public BallastConfigurationCluster(BigInteger nodeId, int endpointId) {
        super(nodeId, endpointId, 769, "BallastConfiguration");
    }

    protected BallastConfigurationCluster(BigInteger nodeId, int endpointId, int clusterId, String clusterName) {
        super(nodeId, endpointId, clusterId, clusterName);
    }

    @Override
    public @NonNull String toString() {
        String str = "";
        str += "clusterRevision : " + clusterRevision + "\n";
        str += "physicalMinLevel : " + physicalMinLevel + "\n";
        str += "physicalMaxLevel : " + physicalMaxLevel + "\n";
        str += "ballastStatus : " + ballastStatus + "\n";
        str += "minLevel : " + minLevel + "\n";
        str += "maxLevel : " + maxLevel + "\n";
        str += "intrinsicBallastFactor : " + intrinsicBallastFactor + "\n";
        str += "ballastFactorAdjustment : " + ballastFactorAdjustment + "\n";
        str += "lampQuantity : " + lampQuantity + "\n";
        str += "lampType : " + lampType + "\n";
        str += "lampManufacturer : " + lampManufacturer + "\n";
        str += "lampRatedHours : " + lampRatedHours + "\n";
        str += "lampBurnHours : " + lampBurnHours + "\n";
        str += "lampAlarmMode : " + lampAlarmMode + "\n";
        str += "lampBurnHoursTripPoint : " + lampBurnHoursTripPoint + "\n";
        return str;
    }
}
@@ -0,0 +1,404 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */

// AUTO-GENERATED, DO NOT EDIT!

package org.openhab.binding.matter.internal.client.dto.cluster.gen;

import java.math.BigInteger;
import java.util.List;

import org.eclipse.jdt.annotation.NonNull;

import com.google.gson.Gson;

/**
 * BaseCluster
 *
 * @author Dan Cunningham - Initial contribution
 */
public class BaseCluster {

    protected static final Gson GSON = new Gson();
    public BigInteger nodeId;
    public int endpointId;
    public int id;
    public String name;

    public interface MatterEnum {
        Integer getValue();

        String getLabel();

        public static <E extends MatterEnum> E fromValue(Class<E> enumClass, int value) {
            E[] constants = enumClass.getEnumConstants();
            if (constants != null) {
                for (E enumConstant : constants) {
                    if (enumConstant != null) {
                        if (enumConstant.getValue().equals(value)) {
                            return enumConstant;
                        }
                    }
                }
            }
            throw new IllegalArgumentException("Unknown value: " + value);
        }
    }
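MatterEnum.fromValue is how the binding decodes a raw wire integer back into a generated enum constant. A self-contained sketch of the same lookup (the `MatterEnumSketch` class and its `OutputType` enum are illustrative stand-ins, not the generated types):

```java
// Illustrative sketch of the MatterEnum.fromValue lookup; the nested
// OutputType enum mimics a generated enum such as OutputTypeEnum.
public class MatterEnumSketch {
    interface MatterEnum {
        Integer getValue();

        String getLabel();

        // Linear scan over the enum constants, matching on the wire value.
        static <E extends MatterEnum> E fromValue(Class<E> enumClass, int value) {
            for (E constant : enumClass.getEnumConstants()) {
                if (constant.getValue().equals(value)) {
                    return constant;
                }
            }
            throw new IllegalArgumentException("Unknown value: " + value);
        }
    }

    enum OutputType implements MatterEnum {
        HDMI(0, "Hdmi"),
        OPTICAL(2, "Optical");

        final Integer value;
        final String label;

        OutputType(Integer value, String label) {
            this.value = value;
            this.label = label;
        }

        public Integer getValue() {
            return value;
        }

        public String getLabel() {
            return label;
        }
    }

    public static void main(String[] args) {
        // Decodes the raw wire value 2 back to its enum constant.
        System.out.println(MatterEnum.fromValue(OutputType.class, 2)); // OPTICAL
    }
}
```

An unknown wire value throws IllegalArgumentException, which surfaces malformed or out-of-spec attribute data early instead of propagating a null.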
    public BaseCluster(BigInteger nodeId, int endpointId, int clusterId, String clusterName) {
        this.nodeId = nodeId;
        this.endpointId = endpointId;
        this.id = clusterId;
        this.name = clusterName;
    }

    public static class OctetString {
        public byte[] value;

        public OctetString(byte[] value) {
            this.value = value;
        }

        public OctetString(String hexString) {
            int length = hexString.length();
            value = new byte[length / 2];
            for (int i = 0; i < length; i += 2) {
                value[i / 2] = (byte) ((Character.digit(hexString.charAt(i), 16) << 4)
                        + Character.digit(hexString.charAt(i + 1), 16));
            }
        }

        public @NonNull String toHexString() {
            StringBuilder hexString = new StringBuilder();
            for (byte b : value) {
                String hex = Integer.toHexString(0xFF & b);
                if (hex.length() == 1) {
                    hexString.append('0');
                }
                hexString.append(hex);
            }
            return hexString.toString();
        }

        public @NonNull String toString() {
            return toHexString();
        }
    }
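OctetString converts Matter octet-string payloads between a lowercase hex representation and raw bytes. A standalone sketch of that round-trip with the conversion logic copied out so it runs without the binding on the classpath (the `OctetStringSketch` class name and its helpers are illustrative):

```java
// Illustrative sketch of the hex round-trip implemented by OctetString.
public class OctetStringSketch {
    // Parses two hex digits per byte, as OctetString(String) does.
    static byte[] fromHex(String hex) {
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < hex.length(); i += 2) {
            out[i / 2] = (byte) ((Character.digit(hex.charAt(i), 16) << 4)
                    + Character.digit(hex.charAt(i + 1), 16));
        }
        return out;
    }

    // Emits zero-padded lowercase hex, as toHexString() does.
    static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) {
            sb.append(String.format("%02x", 0xFF & b));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Round-trips an octet-string payload through bytes and back.
        System.out.println(toHex(fromHex("0a1bff"))); // 0a1bff
    }
}
```

The `0xFF & b` mask is what keeps negative Java bytes (such as 0xff) from widening to a sign-extended int before formatting.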
    // Structs
    public class AtomicAttributeStatusStruct {
        public Integer attributeId; // attrib-id
        public Integer statusCode; // status

        public AtomicAttributeStatusStruct(Integer attributeId, Integer statusCode) {
            this.attributeId = attributeId;
            this.statusCode = statusCode;
        }
    }

    public class MeasurementAccuracyRangeStruct {
        public BigInteger rangeMin; // int64
        public BigInteger rangeMax; // int64
        public Integer percentMax; // percent100ths
        public Integer percentMin; // percent100ths
        public Integer percentTypical; // percent100ths
        public BigInteger fixedMax; // uint64
        public BigInteger fixedMin; // uint64
        public BigInteger fixedTypical; // uint64

        public MeasurementAccuracyRangeStruct(BigInteger rangeMin, BigInteger rangeMax, Integer percentMax,
                Integer percentMin, Integer percentTypical, BigInteger fixedMax, BigInteger fixedMin,
                BigInteger fixedTypical) {
            this.rangeMin = rangeMin;
            this.rangeMax = rangeMax;
            this.percentMax = percentMax;
            this.percentMin = percentMin;
            this.percentTypical = percentTypical;
            this.fixedMax = fixedMax;
            this.fixedMin = fixedMin;
            this.fixedTypical = fixedTypical;
        }
    }

    public class MeasurementAccuracyStruct {
        public MeasurementTypeEnum measurementType; // MeasurementTypeEnum
        public Boolean measured; // bool
        public BigInteger minMeasuredValue; // int64
        public BigInteger maxMeasuredValue; // int64
        public List<MeasurementAccuracyRangeStruct> accuracyRanges; // list

        public MeasurementAccuracyStruct(MeasurementTypeEnum measurementType, Boolean measured,
                BigInteger minMeasuredValue, BigInteger maxMeasuredValue,
                List<MeasurementAccuracyRangeStruct> accuracyRanges) {
            this.measurementType = measurementType;
            this.measured = measured;
            this.minMeasuredValue = minMeasuredValue;
            this.maxMeasuredValue = maxMeasuredValue;
            this.accuracyRanges = accuracyRanges;
        }
    }

    public class Date {
        public Integer year; // uint8
        public Integer month; // uint8
        public Integer day; // uint8
        public Integer dayOfWeek; // uint8

        public Date(Integer year, Integer month, Integer day, Integer dayOfWeek) {
            this.year = year;
            this.month = month;
            this.day = day;
            this.dayOfWeek = dayOfWeek;
        }
    }

    public class Locationdesc {
        public String locationName; // string
        public Integer floorNumber; // int16
        public Integer areaType; // tag

        public Locationdesc(String locationName, Integer floorNumber, Integer areaType) {
            this.locationName = locationName;
            this.floorNumber = floorNumber;
            this.areaType = areaType;
        }
    }

    public class Semtag {
        public Integer mfgCode; // vendor-id
        public Integer namespaceId; // namespace
        public Integer tag; // tag
        public String label; // string

        public Semtag(Integer mfgCode, Integer namespaceId, Integer tag, String label) {
            this.mfgCode = mfgCode;
            this.namespaceId = namespaceId;
            this.tag = tag;
            this.label = label;
        }
    }

    public class Tod {
        public Integer hours; // uint8
        public Integer minutes; // uint8
        public Integer seconds; // uint8
        public Integer hundredths; // uint8

        public Tod(Integer hours, Integer minutes, Integer seconds, Integer hundredths) {
            this.hours = hours;
            this.minutes = minutes;
            this.seconds = seconds;
            this.hundredths = hundredths;
        }
    }

    // Enums
    public enum AtomicRequestTypeEnum implements MatterEnum {
        BEGIN_WRITE(0, "BeginWrite"),
        COMMIT_WRITE(1, "CommitWrite"),
        ROLLBACK_WRITE(2, "RollbackWrite");

        public final Integer value;
        public final String label;

        private AtomicRequestTypeEnum(Integer value, String label) {
            this.value = value;
            this.label = label;
        }

        @Override
        public Integer getValue() {
            return value;
        }

        @Override
        public String getLabel() {
            return label;
        }
    }

    public enum MeasurementTypeEnum implements MatterEnum {
        UNSPECIFIED(0, "Unspecified"),
        VOLTAGE(1, "Voltage"),
        ACTIVE_CURRENT(2, "ActiveCurrent"),
        REACTIVE_CURRENT(3, "ReactiveCurrent"),
        APPARENT_CURRENT(4, "ApparentCurrent"),
        ACTIVE_POWER(5, "ActivePower"),
        REACTIVE_POWER(6, "ReactivePower"),
        APPARENT_POWER(7, "ApparentPower"),
        RMS_VOLTAGE(8, "RmsVoltage"),
        RMS_CURRENT(9, "RmsCurrent"),
        RMS_POWER(10, "RmsPower"),
        FREQUENCY(11, "Frequency"),
        POWER_FACTOR(12, "PowerFactor"),
        NEUTRAL_CURRENT(13, "NeutralCurrent"),
        ELECTRICAL_ENERGY(14, "ElectricalEnergy");

        public final Integer value;
        public final String label;

        private MeasurementTypeEnum(Integer value, String label) {
            this.value = value;
            this.label = label;
        }

        @Override
        public Integer getValue() {
            return value;
        }

        @Override
        public String getLabel() {
            return label;
        }
    }

    public enum SoftwareVersionCertificationStatusEnum implements MatterEnum {
        DEV_TEST(0, "DevTest"),
        PROVISIONAL(1, "Provisional"),
        CERTIFIED(2, "Certified"),
        REVOKED(3, "Revoked");

        public final Integer value;
        public final String label;

        private SoftwareVersionCertificationStatusEnum(Integer value, String label) {
            this.value = value;
            this.label = label;
        }

        @Override
        public Integer getValue() {
            return value;
        }

        @Override
        public String getLabel() {
            return label;
        }
    }

    public enum Priority implements MatterEnum {
        DEBUG(0, "Debug"),
        INFO(1, "Info"),
        CRITICAL(2, "Critical");

        public final Integer value;
        public final String label;

        private Priority(Integer value, String label) {
            this.value = value;
            this.label = label;
        }

        @Override
        public Integer getValue() {
            return value;
        }

        @Override
        public String getLabel() {
            return label;
        }
    }

    public enum Status implements MatterEnum {
        SUCCESS(0, "Success"),
        FAILURE(1, "Failure"),
        INVALID_SUBSCRIPTION(125, "InvalidSubscription"),
        UNSUPPORTED_ACCESS(126, "UnsupportedAccess"),
        UNSUPPORTED_ENDPOINT(127, "UnsupportedEndpoint"),
        INVALID_ACTION(128, "InvalidAction"),
        UNSUPPORTED_COMMAND(129, "UnsupportedCommand"),
        INVALID_COMMAND(133, "InvalidCommand"),
        UNSUPPORTED_ATTRIBUTE(134, "UnsupportedAttribute"),
        CONSTRAINT_ERROR(135, "ConstraintError"),
        UNSUPPORTED_WRITE(136, "UnsupportedWrite"),
        RESOURCE_EXHAUSTED(137, "ResourceExhausted"),
        NOT_FOUND(139, "NotFound"),
        UNREPORTABLE_ATTRIBUTE(140, "UnreportableAttribute"),
        INVALID_DATA_TYPE(141, "InvalidDataType"),
        UNSUPPORTED_READ(143, "UnsupportedRead"),
        DATA_VERSION_MISMATCH(146, "DataVersionMismatch"),
        TIMEOUT(148, "Timeout"),
        UNSUPPORTED_NODE(155, "UnsupportedNode"),
        BUSY(156, "Busy"),
        ACCESS_RESTRICTED(157, "AccessRestricted"),
        UNSUPPORTED_CLUSTER(195, "UnsupportedCluster"),
        NO_UPSTREAM_SUBSCRIPTION(197, "NoUpstreamSubscription"),
        NEEDS_TIMED_INTERACTION(198, "NeedsTimedInteraction"),
        UNSUPPORTED_EVENT(199, "UnsupportedEvent"),
        PATHS_EXHAUSTED(200, "PathsExhausted"),
        TIMED_REQUEST_MISMATCH(201, "TimedRequestMismatch"),
        FAILSAFE_REQUIRED(202, "FailsafeRequired"),
        INVALID_IN_STATE(203, "InvalidInState"),
        NO_COMMAND_RESPONSE(204, "NoCommandResponse"),
        TERMS_AND_CONDITIONS_CHANGED(205, "TermsAndConditionsChanged"),
        MAINTENANCE_REQUIRED(206, "MaintenanceRequired");

        public final Integer value;
        public final String label;

        private Status(Integer value, String label) {
            this.value = value;
            this.label = label;
        }

        @Override
        public Integer getValue() {
            return value;
        }

        @Override
        public String getLabel() {
            return label;
        }
    }

    // Bitmaps
    public static class WildcardPathFlagsBitmap {
        public boolean wildcardSkipRootNode;
        public boolean wildcardSkipGlobalAttributes;
        public boolean wildcardSkipAttributeList;
        public boolean reserved;
        public boolean wildcardSkipCommandLists;
        public boolean wildcardSkipCustomElements;
        public boolean wildcardSkipFixedAttributes;
        public boolean wildcardSkipChangesOmittedAttributes;
        public boolean wildcardSkipDiagnosticsClusters;

        public WildcardPathFlagsBitmap(boolean wildcardSkipRootNode, boolean wildcardSkipGlobalAttributes,
                boolean wildcardSkipAttributeList, boolean reserved, boolean wildcardSkipCommandLists,
                boolean wildcardSkipCustomElements, boolean wildcardSkipFixedAttributes,
                boolean wildcardSkipChangesOmittedAttributes, boolean wildcardSkipDiagnosticsClusters) {
            this.wildcardSkipRootNode = wildcardSkipRootNode;
            this.wildcardSkipGlobalAttributes = wildcardSkipGlobalAttributes;
            this.wildcardSkipAttributeList = wildcardSkipAttributeList;
            this.reserved = reserved;
            this.wildcardSkipCommandLists = wildcardSkipCommandLists;
            this.wildcardSkipCustomElements = wildcardSkipCustomElements;
            this.wildcardSkipFixedAttributes = wildcardSkipFixedAttributes;
            this.wildcardSkipChangesOmittedAttributes = wildcardSkipChangesOmittedAttributes;
            this.wildcardSkipDiagnosticsClusters = wildcardSkipDiagnosticsClusters;
        }
    }

    public static class FeatureMap {
        public List<Boolean> map;

        public FeatureMap(List<Boolean> map) {
            this.map = map;
        }
    }
}
@@ -0,0 +1,459 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */

// AUTO-GENERATED, DO NOT EDIT!

package org.openhab.binding.matter.internal.client.dto.cluster.gen;

import java.math.BigInteger;

import org.eclipse.jdt.annotation.NonNull;

/**
 * BasicInformation
 *
 * @author Dan Cunningham - Initial contribution
 */
public class BasicInformationCluster extends BaseCluster {

    public static final int CLUSTER_ID = 0x0028;
    public static final String CLUSTER_NAME = "BasicInformation";
    public static final String CLUSTER_PREFIX = "basicInformation";
    public static final String ATTRIBUTE_CLUSTER_REVISION = "clusterRevision";
    public static final String ATTRIBUTE_DATA_MODEL_REVISION = "dataModelRevision";
    public static final String ATTRIBUTE_VENDOR_NAME = "vendorName";
    public static final String ATTRIBUTE_VENDOR_ID = "vendorId";
    public static final String ATTRIBUTE_PRODUCT_NAME = "productName";
    public static final String ATTRIBUTE_PRODUCT_ID = "productId";
    public static final String ATTRIBUTE_NODE_LABEL = "nodeLabel";
    public static final String ATTRIBUTE_LOCATION = "location";
    public static final String ATTRIBUTE_HARDWARE_VERSION = "hardwareVersion";
    public static final String ATTRIBUTE_HARDWARE_VERSION_STRING = "hardwareVersionString";
    public static final String ATTRIBUTE_SOFTWARE_VERSION = "softwareVersion";
    public static final String ATTRIBUTE_SOFTWARE_VERSION_STRING = "softwareVersionString";
    public static final String ATTRIBUTE_MANUFACTURING_DATE = "manufacturingDate";
    public static final String ATTRIBUTE_PART_NUMBER = "partNumber";
    public static final String ATTRIBUTE_PRODUCT_URL = "productUrl";
    public static final String ATTRIBUTE_PRODUCT_LABEL = "productLabel";
    public static final String ATTRIBUTE_SERIAL_NUMBER = "serialNumber";
    public static final String ATTRIBUTE_LOCAL_CONFIG_DISABLED = "localConfigDisabled";
    public static final String ATTRIBUTE_REACHABLE = "reachable";
    public static final String ATTRIBUTE_UNIQUE_ID = "uniqueId";
    public static final String ATTRIBUTE_CAPABILITY_MINIMA = "capabilityMinima";
    public static final String ATTRIBUTE_PRODUCT_APPEARANCE = "productAppearance";
|
||||
public static final String ATTRIBUTE_SPECIFICATION_VERSION = "specificationVersion";
|
||||
public static final String ATTRIBUTE_MAX_PATHS_PER_INVOKE = "maxPathsPerInvoke";
|
||||
|
||||
public Integer clusterRevision; // 65533 ClusterRevision
|
||||
/**
|
||||
* This attribute shall be set to the revision number of the Data Model against which the Node is certified. The
|
||||
* value of this attribute shall be one of the valid values listed in Section 7.1.1, “Revision History”.
|
||||
*/
|
||||
public Integer dataModelRevision; // 0 uint16 R V
|
||||
/**
|
||||
* This attribute shall specify a human readable (displayable) name of the vendor for the Node.
|
||||
*/
|
||||
public String vendorName; // 1 string R V
|
||||
/**
|
||||
* This attribute shall specify the Vendor ID.
|
||||
*/
|
||||
public Integer vendorId; // 2 vendor-id R V
|
||||
/**
|
||||
* This attribute shall specify a human readable (displayable) name of the model for the Node such as the model
|
||||
* number (or other identifier) assigned by the vendor.
|
||||
*/
|
||||
public String productName; // 3 string R V
|
||||
/**
|
||||
* This attribute shall specify the Product ID assigned by the vendor that is unique to the specific product of the
|
||||
* Node.
|
||||
*/
|
||||
public Integer productId; // 4 uint16 R V
|
||||
/**
|
||||
* Indicates a user defined name for the Node. This attribute SHOULD be set during initial commissioning and may be
|
||||
* updated by further reconfigurations.
|
||||
*/
|
||||
public String nodeLabel; // 5 string RW VM
|
||||
/**
|
||||
* This attribute shall be an ISO 3166-1 alpha-2 code to represent the country, dependent territory, or special area
|
||||
* of geographic interest in which the Node is located at the time of the attribute being set. This attribute shall
|
||||
* be set during initial commissioning (unless already set) and may be updated by further reconfigurations. This
|
||||
* attribute may affect some regulatory aspects of the Node’s operation, such as radio transmission power levels in
|
||||
* given spectrum allocation bands if technologies where this is applicable are used. The Location’s region code
|
||||
* shall be interpreted in a case-insensitive manner. If the Node cannot understand the location code with which it
|
||||
* was configured, or the location code has not yet been configured, it shall configure itself in a region-agnostic
|
||||
* manner as determined by the vendor, avoiding region-specific assumptions as much as is practical. The special
|
||||
* value XX shall indicate that region-agnostic mode is used.
|
||||
*/
|
||||
public String location; // 6 string RW VA
|
||||
/**
|
||||
* This attribute shall specify the version number of the hardware of the Node. The meaning of its value, and the
|
||||
* versioning scheme, are vendor defined.
|
||||
*/
|
||||
public Integer hardwareVersion; // 7 uint16 R V
|
||||
/**
|
||||
* This attribute shall specify the version number of the hardware of the Node. The meaning of its value, and the
|
||||
* versioning scheme, are vendor defined. The HardwareVersionString attribute shall be used to provide a more
|
||||
* user-friendly value than that represented by the HardwareVersion attribute.
|
||||
*/
|
||||
public String hardwareVersionString; // 8 string R V
|
||||
/**
|
||||
* This attribute shall contain the current version number for the software running on this Node. The version number
|
||||
* can be compared using a total ordering to determine if a version is logically newer than another one. A larger
|
||||
* value of SoftwareVersion is newer than a lower value, from the perspective of software updates (see Section
|
||||
* 11.20.3.3, “Availability of Software Images”). Nodes may query this field to determine the currently running
|
||||
* version of software on another given Node.
|
||||
*/
|
||||
public Integer softwareVersion; // 9 uint32 R V
|
||||
/**
|
||||
* This attribute shall contain a current human-readable representation for the software running on the Node. This
|
||||
* version information may be conveyed to users. The maximum length of the SoftwareVersionString attribute is 64
|
||||
* bytes of UTF-8 characters. The contents SHOULD only use simple 7-bit ASCII alphanumeric and punctuation
|
||||
* characters, so as to simplify the conveyance of the value to a variety of cultures.
|
||||
* Examples of version strings include "1.0", "1.2.3456", "1.2-2",
|
||||
* "1.0b123", "1.2_3".
|
||||
*/
|
||||
public String softwareVersionString; // 10 string R V
|
||||
/**
|
||||
* This attribute shall specify the date that the Node was manufactured. The first 8 characters shall specify the
|
||||
* date of manufacture of the Node in international date notation according to ISO 8601, i.e., YYYYMMDD, e.g.,
|
||||
* 20060814. The final 8 characters may include country, factory, line, shift or other related information at the
|
||||
* option of the vendor. The format of this information is vendor defined.
|
||||
*/
|
||||
public String manufacturingDate; // 11 string R V
|
||||
/**
|
||||
* This attribute shall specify a human-readable (displayable) vendor assigned part number for the Node whose
|
||||
* meaning and numbering scheme is vendor defined.
|
||||
* Multiple products (and hence PartNumbers) can share a ProductID. For instance, there may be different packaging
|
||||
* (with different PartNumbers) for different regions; also different colors of a product might share the ProductID
|
||||
* but may have a different PartNumber.
|
||||
*/
|
||||
public String partNumber; // 12 string R V
|
||||
/**
|
||||
* This attribute shall specify a link to a product specific web page. The specified URL SHOULD resolve to a
|
||||
* maintained web page available for the lifetime of the product. The syntax of this attribute shall follow the
|
||||
* syntax as specified in RFC 1738 and shall use the https scheme. The maximum length of this attribute is 256 ASCII
|
||||
* characters.
|
||||
*/
|
||||
public String productUrl; // 13 string R V
|
||||
/**
|
||||
* This attribute shall specify a vendor specific human readable (displayable) product label. The ProductLabel
|
||||
* attribute may be used to provide a more user-friendly value than that represented by the ProductName attribute.
|
||||
* The ProductLabel attribute SHOULD NOT include the name of the vendor as defined within the VendorName attribute.
|
||||
*/
|
||||
public String productLabel; // 14 string R V
|
||||
/**
|
||||
* This attribute shall specify a human readable (displayable) serial number.
|
||||
*/
|
||||
public String serialNumber; // 15 string R V
|
||||
/**
|
||||
* This attribute shall allow a local Node configuration to be disabled. When this attribute is set to True the Node
|
||||
* shall disable the ability to configure the Node through an on-Node user interface. The value of the
|
||||
* LocalConfigDisabled attribute shall NOT in any way modify, disable, or otherwise affect the user’s ability to
|
||||
* trigger a factory reset on the Node.
|
||||
*/
|
||||
public Boolean localConfigDisabled; // 16 bool RW VM
|
||||
/**
|
||||
* This attribute (when used) shall indicate whether the Node can be reached. For a native Node this is implicitly
|
||||
* True (and its use is optional).
|
||||
* Its main use case is in the derived Bridged Device Basic Information cluster where it is used to indicate whether
|
||||
* the bridged device is reachable by the bridge over the non-native network.
|
||||
*/
|
||||
public Boolean reachable; // 17 bool R V
|
||||
/**
|
||||
* Indicates a unique identifier for the device, which is constructed in a manufacturer specific manner.
|
||||
* It may be constructed using a permanent device identifier (such as device MAC address) as basis. In order to
|
||||
* prevent tracking,
|
||||
* • it SHOULD NOT be identical to (or easily derived from) such permanent device identifier
|
||||
* • it shall be updated when the device is factory reset
|
||||
* • it shall NOT be identical to the SerialNumber attribute
|
||||
* • it shall NOT be printed on the product or delivered with the product
|
||||
* The value does not need to be human readable, since it is intended for machine to machine (M2M) communication.
|
||||
* > [!NOTE]
|
||||
* > The conformance of the UniqueID attribute was optional in cluster revisions prior to revision 4.
|
||||
* This UniqueID attribute shall NOT be the same as the Persistent Unique ID which is used in the Rotating Device
|
||||
* Identifier mechanism.
|
||||
*/
|
||||
public String uniqueId; // 18 string R V
|
||||
/**
|
||||
* This attribute shall provide the minimum guaranteed value for some system-wide resource capabilities that are not
|
||||
* otherwise cluster-specific and do not appear elsewhere. This attribute may be used by clients to optimize
|
||||
* communication with Nodes by allowing them to use more than the strict minimum values required by this
|
||||
* specification, wherever available.
|
||||
* The values supported by the server in reality may be larger than the values provided in this attribute, such as
|
||||
* if a server is not resource-constrained at all. However, clients SHOULD only rely on the amounts provided in this
|
||||
* attribute.
|
||||
* Note that since the fixed values within this attribute may change over time, both increasing and decreasing, as
|
||||
* software versions change for a given Node, clients SHOULD take care not to assume forever unchanging values and
|
||||
* SHOULD NOT cache this value permanently at Commissioning time.
|
||||
*/
|
||||
public CapabilityMinimaStruct capabilityMinima; // 19 CapabilityMinimaStruct R V
|
||||
/**
|
||||
* This attribute shall provide information about the appearance of the product, which could be useful to a user
|
||||
* trying to locate or identify the node.
|
||||
*/
|
||||
public ProductAppearanceStruct productAppearance; // 20 ProductAppearanceStruct R V
|
||||
/**
|
||||
* This attribute shall contain the current version number for the specification version this Node was certified
|
||||
* against. The version number can be compared using a total ordering to determine if a version is logically newer
|
||||
* than another one. A larger value of SpecificationVersion is newer than a lower value.
|
||||
* Nodes may query this field to determine the currently supported version of the specification on another given
|
||||
* Node.
|
||||
* The format of this number is segmented as its four component bytes. Bit positions for the fields are as follows:
|
||||
* For example, a SpecificationVersion value of 0x0102AA00 is composed of 4 version components, representing a
|
||||
* version 1.2.170.0.
|
||||
* In the example above:
|
||||
* • Major version is the uppermost byte (0x01).
|
||||
* • Minor version is the following byte (0x02).
|
||||
* • Patch version is 170/0xAA.
|
||||
* • Reserved1 value is 0.
|
||||
* The initial revision (1.0) of this specification (1.0) was 0x01000000. Matter Spring 2024 release (1.3) was
|
||||
* 0x01030000.
|
||||
* If the SpecificationVersion is absent or zero, such as in Basic Information cluster revisions prior to Revision
|
||||
* 3, the specification version cannot be properly inferred unless other heuristics are employed.
|
||||
* Comparison of SpecificationVersion shall always include the total value over 32 bits, without masking reserved
|
||||
* parts.
|
||||
*/
|
||||
public Integer specificationVersion; // 21 uint32 R V
|
||||
/**
|
||||
* Indicates the maximum number of elements in a single InvokeRequests list (see Section 8.8.2, “Invoke Request
|
||||
* Action”) that the Node is able to process. Note that since this attribute may change over time, both increasing
|
||||
* and decreasing, as software versions change for a given Node, clients SHOULD take care not to assume forever
|
||||
* unchanging values and SHOULD NOT cache this value permanently at Commissioning time.
|
||||
* If the MaxPathsPerInvoke attribute is absent or zero, such as in Basic Information cluster revisions prior to
|
||||
* Revision 3, clients shall assume a value of 1.
|
||||
*/
|
||||
public Integer maxPathsPerInvoke; // 22 uint16 R V
|
||||
// Structs
|
||||
|
||||
/**
|
||||
* The StartUp event shall be generated by a Node as soon as reasonable after completing a boot or reboot process.
|
||||
* The StartUp event SHOULD be the first Data Model event recorded by the Node after it completes a boot or reboot
|
||||
* process.
|
||||
*/
|
||||
public class StartUp {
|
||||
/**
|
||||
* This field shall be set to the same value as the one available in the SoftwareVersion attribute.
|
||||
*/
|
||||
public Integer softwareVersion; // uint32
|
||||
|
||||
public StartUp(Integer softwareVersion) {
|
||||
this.softwareVersion = softwareVersion;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* The ShutDown event SHOULD be generated by a Node prior to any orderly shutdown sequence on a best-effort basis.
|
||||
* When a ShutDown event is generated, it SHOULD be the last Data Model event recorded by the Node. This event
|
||||
* SHOULD be delivered urgently to current subscribers on a best-effort basis. Any subsequent incoming interactions
|
||||
* to the Node may be dropped until the completion of a future boot or reboot process.
|
||||
*/
|
||||
public class ShutDown {
|
||||
public ShutDown() {
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* The Leave event SHOULD be generated by a Node prior to permanently leaving a given Fabric, such as when the
|
||||
* RemoveFabric command is invoked for a given fabric, or triggered by factory reset or some other manufacturer
|
||||
* specific action to disable or reset the operational data in the Node. When a Leave event is generated, it SHOULD
|
||||
* be assumed that the fabric recorded in the event is no longer usable, and subsequent interactions targeting that
|
||||
* fabric will most likely fail.
|
||||
* Upon receipt of Leave Event on a subscription, the receiving Node may update other nodes in the fabric by
|
||||
* removing related bindings, access control list entries and other data referencing the leaving Node.
|
||||
*/
|
||||
public class Leave {
|
||||
/**
|
||||
* This field shall contain the local Fabric Index of the fabric which the node is about to leave.
|
||||
*/
|
||||
public Integer fabricIndex; // fabric-idx
|
||||
|
||||
public Leave(Integer fabricIndex) {
|
||||
this.fabricIndex = fabricIndex;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* This event shall be supported if and only if the Reachable attribute is supported.
|
||||
* This event (when supported) shall be generated when there is a change in the Reachable attribute.
|
||||
* Its main use case is in the derived Bridged Device Basic Information cluster.
|
||||
*/
|
||||
public class ReachableChanged {
|
||||
/**
|
||||
* This field shall indicate the value of the Reachable attribute after it was changed.
|
||||
*/
|
||||
public Boolean reachableNewValue; // bool
|
||||
|
||||
public ReachableChanged(Boolean reachableNewValue) {
|
||||
this.reachableNewValue = reachableNewValue;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* This structure provides a description of the product’s appearance.
|
||||
*/
|
||||
public class ProductAppearanceStruct {
|
||||
/**
|
||||
* This field shall indicate the visible finish of the product.
|
||||
*/
|
||||
public ProductFinishEnum finish; // ProductFinishEnum
|
||||
/**
|
||||
* This field indicates the representative color of the visible parts of the product. If the product has no
|
||||
* representative color, the field shall be null.
|
||||
*/
|
||||
public ColorEnum primaryColor; // ColorEnum
|
||||
|
||||
public ProductAppearanceStruct(ProductFinishEnum finish, ColorEnum primaryColor) {
|
||||
this.finish = finish;
|
||||
this.primaryColor = primaryColor;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* This structure provides constant values related to overall global capabilities of this Node, that are not
|
||||
* cluster-specific.
|
||||
*/
|
||||
public class CapabilityMinimaStruct {
|
||||
/**
|
||||
* This field shall indicate the actual minimum number of concurrent CASE sessions that are supported per
|
||||
* fabric.
|
||||
* This value shall NOT be smaller than the required minimum indicated in Section 4.14.2.8, “Minimal Number of
|
||||
* CASE Sessions”.
|
||||
*/
|
||||
public Integer caseSessionsPerFabric; // uint16
|
||||
/**
|
||||
* This field shall indicate the actual minimum number of concurrent subscriptions supported per fabric.
|
||||
* This value shall NOT be smaller than the required minimum indicated in Section 8.5.1, “Subscribe
|
||||
* Transaction”.
|
||||
*/
|
||||
public Integer subscriptionsPerFabric; // uint16
|
||||
|
||||
public CapabilityMinimaStruct(Integer caseSessionsPerFabric, Integer subscriptionsPerFabric) {
|
||||
this.caseSessionsPerFabric = caseSessionsPerFabric;
|
||||
this.subscriptionsPerFabric = subscriptionsPerFabric;
|
||||
}
|
||||
}
|
||||
|
||||
// Enums
|
||||
/**
|
||||
* The data type of ProductFinishEnum is derived from enum8.
|
||||
*/
|
||||
public enum ProductFinishEnum implements MatterEnum {
|
||||
OTHER(0, "Other"),
|
||||
MATTE(1, "Matte"),
|
||||
SATIN(2, "Satin"),
|
||||
POLISHED(3, "Polished"),
|
||||
RUGGED(4, "Rugged"),
|
||||
FABRIC(5, "Fabric");
|
||||
|
||||
public final Integer value;
|
||||
public final String label;
|
||||
|
||||
private ProductFinishEnum(Integer value, String label) {
|
||||
this.value = value;
|
||||
this.label = label;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Integer getValue() {
|
||||
return value;
|
||||
}
|
||||
|
||||
@Override
|
||||
public String getLabel() {
|
||||
return label;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* The data type of ColorEnum is derived from enum8.
|
||||
*/
|
||||
public enum ColorEnum implements MatterEnum {
|
||||
BLACK(0, "Black"),
|
||||
NAVY(1, "Navy"),
|
||||
GREEN(2, "Green"),
|
||||
TEAL(3, "Teal"),
|
||||
MAROON(4, "Maroon"),
|
||||
PURPLE(5, "Purple"),
|
||||
OLIVE(6, "Olive"),
|
||||
GRAY(7, "Gray"),
|
||||
BLUE(8, "Blue"),
|
||||
LIME(9, "Lime"),
|
||||
AQUA(10, "Aqua"),
|
||||
RED(11, "Red"),
|
||||
FUCHSIA(12, "Fuchsia"),
|
||||
YELLOW(13, "Yellow"),
|
||||
WHITE(14, "White"),
|
||||
NICKEL(15, "Nickel"),
|
||||
CHROME(16, "Chrome"),
|
||||
BRASS(17, "Brass"),
|
||||
COPPER(18, "Copper"),
|
||||
SILVER(19, "Silver"),
|
||||
GOLD(20, "Gold");
|
||||
|
||||
public final Integer value;
|
||||
public final String label;
|
||||
|
||||
private ColorEnum(Integer value, String label) {
|
||||
this.value = value;
|
||||
this.label = label;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Integer getValue() {
|
||||
return value;
|
||||
}
|
||||
|
||||
@Override
|
||||
public String getLabel() {
|
||||
return label;
|
||||
}
|
||||
}
|
||||
|
||||
public BasicInformationCluster(BigInteger nodeId, int endpointId) {
|
||||
super(nodeId, endpointId, 40, "BasicInformation");
|
||||
}
|
||||
|
||||
protected BasicInformationCluster(BigInteger nodeId, int endpointId, int clusterId, String clusterName) {
|
||||
super(nodeId, endpointId, clusterId, clusterName);
|
||||
}
|
||||
|
||||
@Override
|
||||
public @NonNull String toString() {
|
||||
String str = "";
|
||||
str += "clusterRevision : " + clusterRevision + "\n";
|
||||
str += "dataModelRevision : " + dataModelRevision + "\n";
|
||||
str += "vendorName : " + vendorName + "\n";
|
||||
str += "vendorId : " + vendorId + "\n";
|
||||
str += "productName : " + productName + "\n";
|
||||
str += "productId : " + productId + "\n";
|
||||
str += "nodeLabel : " + nodeLabel + "\n";
|
||||
str += "location : " + location + "\n";
|
||||
str += "hardwareVersion : " + hardwareVersion + "\n";
|
||||
str += "hardwareVersionString : " + hardwareVersionString + "\n";
|
||||
str += "softwareVersion : " + softwareVersion + "\n";
|
||||
str += "softwareVersionString : " + softwareVersionString + "\n";
|
||||
str += "manufacturingDate : " + manufacturingDate + "\n";
|
||||
str += "partNumber : " + partNumber + "\n";
|
||||
str += "productUrl : " + productUrl + "\n";
|
||||
str += "productLabel : " + productLabel + "\n";
|
||||
str += "serialNumber : " + serialNumber + "\n";
|
||||
str += "localConfigDisabled : " + localConfigDisabled + "\n";
|
||||
str += "reachable : " + reachable + "\n";
|
||||
str += "uniqueId : " + uniqueId + "\n";
|
||||
str += "capabilityMinima : " + capabilityMinima + "\n";
|
||||
str += "productAppearance : " + productAppearance + "\n";
|
||||
str += "specificationVersion : " + specificationVersion + "\n";
|
||||
str += "maxPathsPerInvoke : " + maxPathsPerInvoke + "\n";
|
||||
return str;
|
||||
}
|
||||
}
|
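The SpecificationVersion Javadoc above documents a four-byte layout (major, minor, patch, reserved). The arithmetic can be sketched as below; this decoder is not part of the generated binding code, just an illustration of the documented byte layout, with a hypothetical class name:

```java
// Illustrative decoder for the SpecificationVersion byte layout documented in
// the BasicInformation cluster Javadoc. NOT part of the generated binding.
public final class SpecificationVersionDecoder {

    private SpecificationVersionDecoder() {
    }

    public static String decode(int specificationVersion) {
        int major = (specificationVersion >>> 24) & 0xFF;     // uppermost byte
        int minor = (specificationVersion >>> 16) & 0xFF;     // next byte
        int patch = (specificationVersion >>> 8) & 0xFF;      // next byte
        int reserved1 = specificationVersion & 0xFF;          // lowest byte
        return major + "." + minor + "." + patch + "." + reserved1;
    }

    public static void main(String[] args) {
        // 0x0102AA00 represents version 1.2.170.0 per the cluster documentation
        System.out.println(decode(0x0102AA00));
    }
}
```

Note that, as the Javadoc states, ordering comparisons should use the full 32-bit value without masking the reserved byte; the split is only for display.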
@@ -0,0 +1,90 @@
/*
 * Copyright (c) 2010-2025 Contributors to the openHAB project
 *
 * See the NOTICE file(s) distributed with this work for additional
 * information.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License 2.0 which is available at
 * http://www.eclipse.org/legal/epl-2.0
 *
 * SPDX-License-Identifier: EPL-2.0
 */

// AUTO-GENERATED, DO NOT EDIT!

package org.openhab.binding.matter.internal.client.dto.cluster.gen;

import java.math.BigInteger;
import java.util.List;

import org.eclipse.jdt.annotation.NonNull;

/**
 * Binding
 *
 * @author Dan Cunningham - Initial contribution
 */
public class BindingCluster extends BaseCluster {

    public static final int CLUSTER_ID = 0x001E;
    public static final String CLUSTER_NAME = "Binding";
    public static final String CLUSTER_PREFIX = "binding";
    public static final String ATTRIBUTE_CLUSTER_REVISION = "clusterRevision";
    public static final String ATTRIBUTE_BINDING = "binding";

    public Integer clusterRevision; // 65533 ClusterRevision
    /**
     * Each entry shall represent a binding.
     */
    public List<TargetStruct> binding; // 0 list RW F VM
    // Structs

    public class TargetStruct {
        /**
         * This field is the remote target node ID. If the Endpoint field is present, this field shall be present.
         */
        public BigInteger node; // node-id
        /**
         * This field is the target group ID that represents remote endpoints. If the Endpoint field is present, this
         * field shall NOT be present.
         */
        public Integer group; // group-id
        /**
         * This field is the remote endpoint that the local endpoint is bound to. If the Group field is present, this
         * field shall NOT be present.
         */
        public Integer endpoint; // endpoint-no
        /**
         * This field is the cluster ID (client & server) on the local and target endpoint(s). If this field is
         * present, the client cluster shall also exist on this endpoint (with this Binding cluster). If this field is
         * present, the target shall be this cluster on the target endpoint(s).
         */
        public Integer cluster; // cluster-id
        public Integer fabricIndex; // FabricIndex

        public TargetStruct(BigInteger node, Integer group, Integer endpoint, Integer cluster, Integer fabricIndex) {
            this.node = node;
            this.group = group;
            this.endpoint = endpoint;
            this.cluster = cluster;
            this.fabricIndex = fabricIndex;
        }
    }

    public BindingCluster(BigInteger nodeId, int endpointId) {
        super(nodeId, endpointId, 30, "Binding");
    }

    protected BindingCluster(BigInteger nodeId, int endpointId, int clusterId, String clusterName) {
        super(nodeId, endpointId, clusterId, clusterName);
    }

    @Override
    public @NonNull String toString() {
        String str = "";
        str += "clusterRevision : " + clusterRevision + "\n";
        str += "binding : " + binding + "\n";
        return str;
    }
}
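The TargetStruct Javadoc above encodes a mutual-exclusivity rule: an entry targets either a unicast node (Node plus Endpoint, no Group) or a group (Group, no Node or Endpoint). A minimal sketch of that rule, using a hypothetical helper class whose parameters mirror the TargetStruct fields (it is not part of the binding):

```java
// Sketch of the TargetStruct validity rule from the Binding cluster Javadoc:
// a binding entry is either unicast (node + endpoint, no group) or
// group-cast (group only). Hypothetical helper, NOT part of the binding.
public final class BindingTargetCheck {

    private BindingTargetCheck() {
    }

    public static boolean isValid(Long node, Integer group, Integer endpoint) {
        boolean unicast = node != null && endpoint != null && group == null;
        boolean groupcast = group != null && node == null && endpoint == null;
        return unicast || groupcast;
    }

    public static void main(String[] args) {
        System.out.println(isValid(12L, null, 1));       // unicast binding
        System.out.println(isValid(null, 0x2003, null)); // group binding
        System.out.println(isValid(12L, 0x2003, 1));     // invalid: both forms
    }
}
```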