Updated external content (Jenkins build 1276)

pull/1969/head
openHAB Build Server 2022-12-18 20:23:11 +00:00
parent 7b4cb802a5
commit 673f5efcf2
6 changed files with 11 additions and 13 deletions


@@ -16,8 +16,6 @@ install: auto
This binding integrates [Shelly devices](https://shelly.cloud) developed by Allterco.
![shelly 1](https://shop.shelly.cloud/image/cache/catalog/shelly_1/s1_x1-80x80.jpg) ![shelly dimmer 2](https://shop.shelly.cloud/image/cache/catalog/shelly_dimmer2/shelly_dimmer2_x1-80x80.jpg) ![shelly vintage a60](https://shop.shelly.cloud/image/cache/catalog/shelly_vintage/shelly_vintage_A60-80x80.jpg) ![shelly plug s](https://shop.shelly.cloud/image/cache/catalog/shelly_plug_s/s_plug_s_x1-80x80.jpg) ![shelly button 1](https://shop.shelly.cloud/image/cache/catalog/shelly_button1/shelly_button1_x1-80x80.jpg) ![shelly gas eu](https://shop.shelly.cloud/image/cache/catalog/shelly_gas/shelly_gas_eu-80x80.jpg) ![shelly ht](https://shop.shelly.cloud/image/cache/catalog/shelly_ht/s_ht_x1-80x80.jpg)
Allterco provides a rich set of smart home devices. All of them are Wi-Fi enabled (2.4 GHz, IPv4 only) and provide a documented API.
The binding is officially acknowledged by Allterco, which lists openHAB as a reference and directly supports the openHAB community.
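The documented device API can be exercised directly over HTTP. The Python sketch below builds the Gen1-style `/status` and `/relay/0` URLs; the device address is an assumption, and newer Plus/Gen2 devices use a different RPC API, so treat this as a minimal illustration rather than the binding's own mechanism.

```python
# Hedged sketch: talking to a Gen1 Shelly device's documented HTTP API.
# The device IP is an assumption; adjust it for your network.
import json
import urllib.request

DEVICE_IP = "192.168.1.50"  # assumed address of a Shelly device on the LAN

def status_url(ip: str) -> str:
    """Build the Gen1 /status endpoint URL, which returns a JSON status report."""
    return f"http://{ip}/status"

def relay_url(ip: str, channel: int = 0, turn: str = "on") -> str:
    """Build the Gen1 relay-control URL (e.g. for a Shelly 1)."""
    return f"http://{ip}/relay/{channel}?turn={turn}"

def read_status(ip: str) -> dict:
    """Fetch and decode the device's JSON status (requires network access)."""
    with urllib.request.urlopen(status_url(ip), timeout=5) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Only print the URLs here; calling read_status() needs a real device.
    print(status_url(DEVICE_IP))
    print(relay_url(DEVICE_IP))
```

This direct access is mainly useful for verifying network reachability before setting up the binding.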


@@ -3,7 +3,7 @@ id: cometvisu
label: CometVisu Backend for openHAB
title: CometVisu Backend for openHAB - UIs
type: ui
description: "This adds a backend for the web-based visualization CometVisu <https://www.cometvisu.org>."
since: 3x
logo: images/addons/cometvisu.png
install: auto
@@ -15,7 +15,7 @@ install: auto
# CometVisu Backend for openHAB
This adds a backend for the web-based visualization CometVisu <https://www.cometvisu.org>.
CometVisu is a highly customizable visualization that runs in any browser.
Unlike the browser-based UIs in openHAB, CometVisu does not rely on sitemaps.
The layout is defined with an XML-based configuration file, which allows a multi-column layout.
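To give an idea of that configuration file, here is a hedged sketch of what a minimal one-page config might look like; the element names and the `OH:switch` transform are best-effort assumptions and may differ from the current CometVisu schema:

```xml
<!-- Hedged sketch of a minimal CometVisu config; element and attribute
     names are illustrative and should be checked against the schema. -->
<pages>
  <page name="Living room">
    <switch>
      <label>Ceiling light</label>
      <address transform="OH:switch" mode="readwrite">Light_Ceiling</address>
    </switch>
  </page>
</pages>
```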
@@ -332,10 +332,10 @@ Something like that could help to improve the sitemap->config generation.
Some screenshots can be found here:
- <https://knx-user-forum.de/forum/supportforen/cometvisu/26809-allgemeine-frage-wie-sieht-eure-cv-startseite-aus>
## Links
* German CometVisu Support Forum: <https://knx-user-forum.de/forum/supportforen/cometvisu>
* User documentation for the CometVisu: <https://www.cometvisu.org/>
* GitHub project page of the CometVisu: <https://github.com/CometVisu/CometVisu>


@@ -21,8 +21,8 @@ It consists of:
- a machine-learning natural language processor based on [Apache OpenNLP](https://opennlp.apache.org) for intent classification and entity extraction (thanks to [nlp-intent-toolkit](https://github.com/mlehman/nlp-intent-toolkit));
- a modular intent-based skill system with learning data provisioning (basic skills to retrieve item statuses and historical data and to send basic commands are built in, but more can be injected by other bundles via OSGi dependency injection);
- a fully-featured responsive card-based user interface built with the [Quasar Framework](https://quasar.dev/) and its companion REST API to interact with the bot;
- an openHAB [Human Language Interpreter](https://www.openhab.org/docs/configuration/multimedia.html#human-language-interpreter) - this means once the natural language answers expand to more than "here's what I found" and "there you go", you will eventually be able to ask HABot questions and give it orders without a visual UI when coupled with speech-to-text and text-to-speech engines in openHAB, for instance to build a privacy-focused, specialized voice assistant.
It is another step toward a full, open-source, privacy-focused, integrated natural language processing toolchain for your openHAB smart home.
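As a rough illustration of the companion REST API, the Python sketch below builds a plain-text chat request for the bot; the `/rest/habot/chat` endpoint, the payload shape, and the local server URL are all assumptions to verify against your own instance.

```python
# Hedged sketch: sending a natural-language query to HABot's companion REST API.
# The endpoint path and plain-text payload are assumptions; check your instance.
import urllib.request

OPENHAB_URL = "http://localhost:8080"  # assumed local openHAB instance

def chat_request(base_url: str, query: str) -> urllib.request.Request:
    """Build a POST request carrying the plain-text query for the bot."""
    return urllib.request.Request(
        f"{base_url}/rest/habot/chat",
        data=query.encode("utf-8"),
        headers={"Content-Type": "text/plain"},
        method="POST",
    )

def ask(base_url: str, query: str) -> str:
    """Send the query and return the bot's raw answer (needs a running server)."""
    with urllib.request.urlopen(chat_request(base_url, query), timeout=10) as resp:
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    # Only show the request that would be sent; ask() needs a live server.
    print(chat_request(OPENHAB_URL, "what is the temperature in the kitchen?").full_url)
```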


@@ -126,7 +126,7 @@ Modifications to the dashboard are not saved automatically, use the **Save** but
When a dashboard is running, widgets can be interacted with, and server-sent events are received when items' states are updated, so widgets update automatically in HABPanel.
The icons in the top-right corner perform the following:
- the **speech balloon** activates the speech recognition feature and sends the results as text to openHAB's default human language interpreter. This implies [some configuration on the server]({{base}}/configuration/multimedia.html#human-language-interpreter), and this icon might not be displayed if the browser doesn't support voice recognition ([mainly only in Chrome and other WebKit variants currently](https://caniuse.com/#feat=speech-recognition){:target="_blank"}). It can also be configured in the panel configuration to appear at the bottom of the screen;
- the **refresh** button forces HABPanel to retrieve the current state of all items;
- the **fullscreen** button tells the browser to go fullscreen, if supported.
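Conceptually, the speech balloon hands the recognized sentence to the default human language interpreter, much like a plain-text POST against the openHAB REST API. In the Python sketch below, the `/rest/voice/interpreters` endpoint and the local server URL are assumptions to check against your server's REST API documentation.

```python
# Hedged sketch: sending recognized text to openHAB's default human language
# interpreter over REST. Endpoint path and server URL are assumptions.
import urllib.request

def interpret_request(base_url: str, text: str) -> urllib.request.Request:
    """Build the POST that hands a sentence to the default interpreter."""
    return urllib.request.Request(
        f"{base_url}/rest/voice/interpreters",
        data=text.encode("utf-8"),
        headers={"Content-Type": "text/plain"},
        method="POST",
    )

if __name__ == "__main__":
    # Only show the request that would be sent; executing it needs a live server.
    req = interpret_request("http://localhost:8080", "turn on the kitchen light")
    print(req.full_url)
```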


@@ -52,12 +52,12 @@ Displays markers on a map
</PropBlock>
<PropBlock type="TEXT" name="tileLayerProvider" label="Provider for the background tiles">
<PropDescription>
The provider of tiles to use for the background of the map. Use one from <a class="external text-color-blue" target="_blank" href="https://leaflet-extras.github.io/leaflet-providers/preview/">Leaflet Providers</a>. Some providers will not work until you set options, like access tokens, in the <code>tileLayerProviderOptions</code> parameter (in Code view). See <a class="external text-color-blue" target="_blank" href="https://github.com/leaflet-extras/leaflet-providers#providers-requiring-registration">here</a> for more info. The default is CartoDB, the variant depending on the dark mode setting.
</PropDescription>
</PropBlock>
<PropBlock type="TEXT" name="overlayTileLayerProvider" label="Provider for the overlay tiles">
<PropDescription>
The provider of tiles to use for the overlay layer above the background of the map. Use one from <a class="external text-color-blue" target="_blank" href="https://leaflet-extras.github.io/leaflet-providers/preview/">Leaflet Providers</a>. Some providers will not work until you set options, like access tokens, in the <code>overlayTileLayerProviderOptions</code> parameter (in Code view). See <a class="external text-color-blue" target="_blank" href="https://github.com/leaflet-extras/leaflet-providers#providers-requiring-registration">here</a> for more info.
</PropDescription>
</PropBlock>
</PropGroup>
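For illustration, these properties might be set like this in the widget's YAML code view; apart from the three parameters documented above, the provider names and surrounding structure below are assumptions:

```yaml
# Hedged sketch: configuring tile providers in the YAML code view.
# Only tileLayerProvider, tileLayerProviderOptions and
# overlayTileLayerProvider come from the reference above; the rest is
# illustrative.
config:
  label: Site map
  tileLayerProvider: Stadia.AlidadeSmooth   # a Leaflet Providers name (example)
  tileLayerProviderOptions:
    apiKey: YOUR_API_KEY                    # some providers require registration
  overlayTileLayerProvider: OpenRailwayMap  # optional overlay layer (example)
```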


@@ -126,7 +126,7 @@ Modifications to the dashboard are not saved automatically, use the **Save** but
When a dashboard is running, widgets can be interacted with, and server-sent events are received when items' states are updated, so widgets update automatically in HABPanel.
The icons in the top-right corner perform the following:
- the **speech balloon** activates the speech recognition feature and sends the results as text to openHAB's default human language interpreter. This implies [some configuration on the server]({{base}}/configuration/multimedia.html#human-language-interpreter), and this icon might not be displayed if the browser doesn't support voice recognition ([mainly only in Chrome and other WebKit variants currently](https://caniuse.com/#feat=speech-recognition){:target="_blank"}). It can also be configured in the panel configuration to appear at the bottom of the screen;
- the **refresh** button forces HABPanel to retrieve the current state of all items;
- the **fullscreen** button tells the browser to go fullscreen, if supported.