Fixing typos (#5315)

* Fix typos

* Bump hugo to latest version v0.122.0

---------

Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>
Andreas Deininger 2024-02-05 17:51:51 +01:00 committed by GitHub
parent e66563946c
commit 476a73e95e
56 changed files with 70 additions and 70 deletions

View File

@ -49,7 +49,7 @@ jobs:
deploy:
docker:
- image: cimg/go:1.21.5
- image: cimg/go:1.21.6
steps:
- checkout
- restore_cache:

View File

@ -656,7 +656,7 @@ Truncated markdown content here.
### Expandable accordion content blocks
Use the `{{% expand "Item label" %}}` shortcode to create expandable, accordion-style content blocks.
Each expandable block needs a label that users can click to expand or collpase the content block.
Each expandable block needs a label that users can click to expand or collapse the content block.
Pass the label as a string to the shortcode.
```md
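<!-- Assumed illustration; the hunk is truncated here. A typical use of the shortcode: -->
{{% expand "Item label" %}}
Content that stays hidden until the label is clicked.
{{% /expand %}}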

View File

@ -38,7 +38,7 @@ including our GPG key, can be found at https://www.influxdata.com/how-to-report-
yarn install
```
_**Note:** The most recent version of Hugo tested with this documentation is **0.121.2**._
_**Note:** The most recent version of Hugo tested with this documentation is **0.122.0**._
3. To generate the API docs, see [api-docs/README.md](api-docs/README.md).

View File

@ -24,7 +24,7 @@ function datePart(date) {
return {year: year, month: month, day: day}
}
////////////////////////// SESSION / COOKIE MANAGMENT //////////////////////////
////////////////////////// SESSION / COOKIE MANAGEMENT //////////////////////////
cookieID = 'influxdb_get_started_date'

View File

@ -1,4 +1,4 @@
// Styles for accordian-like expandable content blocks
// Styles for accordion-like expandable content blocks
.expand-wrapper {
margin: 2rem 0 2rem;

View File

@ -33,7 +33,7 @@
"layouts/code-controls",
"layouts/v3-wayfinding";
// Import Product-specifc color schemes
// Import Product-specific color schemes
@import "product-overrides/telegraf",
"product-overrides/chronograf",
"product-overrides/kapacitor";

View File

@ -194,7 +194,7 @@ Important SuperAdmin behaviors:
#### All New Users are SuperAdmins configuration option
By default, the **Config** setting for "**All new users are SuperAdmins"** is **On**. Any user with SuperAdmin permission can toggle this under the **Admin > Chronograf > Organizations** tab. If this setting is **On**, any new user (who is created or who authenticates) will_ automatically have SuperAdmin permisison. If this setting is **Off**, any new user (who is created or who authenticates) will _not_ have SuperAdmin permisison unless they are explicitly granted it later by another user with SuperAdmin permission.
By default, the **Config** setting for "**All new users are SuperAdmins"** is **On**. Any user with SuperAdmin permission can toggle this under the **Admin > Chronograf > Organizations** tab. If this setting is **On**, any new user (who is created or who authenticates) will_ automatically have SuperAdmin permission. If this setting is **Off**, any new user (who is created or who authenticates) will _not_ have SuperAdmin permission unless they are explicitly granted it later by another user with SuperAdmin permission.
### Create users

View File

@ -16,13 +16,13 @@ Chronograf lets you manage Flux and InfluxQL queries using the Queries page.
2. Click on **InfluxDB**.
3. Click the **Queries** tab to go to the Queries Page.
The first column lists all the databases in your Influx instance and the queries running on that database appear in the Query column. The Duration column depicts the duration of your query and the Status column shows the status of each query. The refresh rate in the upper righthand corner can be set to a vareity of refresh rates using the dropdown menu.
The first column lists all the databases in your Influx instance and the queries running on that database appear in the Query column. The Duration column depicts the duration of your query and the Status column shows the status of each query. The refresh rate in the upper righthand corner can be set to a variety of refresh rates using the dropdown menu.
### Kill a running query
1. Open Chronograf in your web browser and select **Admin {{< icon "crown" >}}** in the sidebar.
2. Click on **InfluxDB**.
3. Click the **Queries** tab to go to the Queries Page. You will see a list of databases on the quereis running on them. Locate the query you want to kill.
3. Click the **Queries** tab to go to the Queries Page. You will see a list of databases on the queries running on them. Locate the query you want to kill.
4. Got to the **Status** column.
5. Hover over **running**. A red box with **Kill** will appear.
6. Click on the **Kill** box and a **Confirm** box will appear. Click on **Confirm** to kill the query.

View File

@ -80,9 +80,9 @@ Below are the options and how they appear in the log table:
| Severity Format | Display |
| --------------- |:------- |
| Dot | <img src="/img/chronograf/1-6-logs-serverity-fmt-dot.png" alt="Log severity format 'Dot'" style="display:inline;max-height:24px;"/> |
| Dot + Text | <img src="/img/chronograf/1-6-logs-serverity-fmt-dot-text.png" alt="Log severity format 'Dot + Text'" style="display:inline;max-height:24px;"/> |
| Text | <img src="/img/chronograf/1-6-logs-serverity-fmt-text.png" alt="Log severity format 'Text'" style="display:inline;max-height:24px;"/> |
| Dot | <img src="/img/chronograf/1-6-logs-severity-fmt-dot.png" alt="Log severity format 'Dot'" style="display:inline;max-height:24px;"/> |
| Dot + Text | <img src="/img/chronograf/1-6-logs-severity-fmt-dot-text.png" alt="Log severity format 'Dot + Text'" style="display:inline;max-height:24px;"/> |
| Text | <img src="/img/chronograf/1-6-logs-severity-fmt-text.png" alt="Log severity format 'Text'" style="display:inline;max-height:24px;"/> |
### Truncate or wrap log messages
By default, text in Log Viewer columns is truncated if it exceeds the column width. You can choose to wrap the text instead to display the full content of each cell.

View File

@ -169,7 +169,7 @@ IFS=$'\n'; for i in $(influx -format csv -username $INFLUXUSER -password $INFLUX
InfluxDB subscription configuration options are available in the `[subscriber]`
section of the `influxdb.conf`.
In order to use subcriptions, the `enabled` option in the `[subscriber]` section must be set to `true`.
In order to use subscriptions, the `enabled` option in the `[subscriber]` section must be set to `true`.
Below is an example `influxdb.conf` subscriber configuration:
```toml
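# A minimal sketch of the truncated example; only `enabled = true` is stated in
# the surrounding text, the rest of the section is assumed.
[subscriber]
  enabled = true
  # http-timeout = "30s"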

View File

@ -236,7 +236,7 @@ Filter data by measurement regular expression.
Filter data by tag key regular expression.
##### [ `-tag-value-filter <regular_expresssion>` ]
##### [ `-tag-value-filter <regular_expression>` ]
Filter data by tag value regular expression.

View File

@ -30,7 +30,7 @@ Each property can have a different value type.
## Record syntax
A **record** literal contains a set of key-value pairs (properties) enclosed in curly brackets (`{}`).
Properties are comma-delimitted.
Properties are comma-delimited.
**Property keys must be strings** and can optionally be enclosed in double quotes (`"`).
If a property key contains whitespace characters or only numeric characters,
you must enclose the property key in double quotes.
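For illustration (assumed, not part of the file being edited), a record literal that mixes unquoted keys with keys that must be double-quoted:

```js
{plant: "rubber", "plant location": "greenhouse 2", "0001": true}
```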

View File

@ -60,7 +60,7 @@ Array of values to convert. Default is the piped-forward array (`<-`).
## Examples
### Convert an array of floats to usigned integers
### Convert an array of floats to unsigned integers
```js
import "experimental/array"

View File

@ -2,7 +2,7 @@
title: http.endpoint() function
description: >
`http.endpoint()` iterates over input data and sends a single POST request per input row to
a specficied URL.
a specified URL.
menu:
flux_v0_ref:
name: http.endpoint
@ -29,7 +29,7 @@ Fluxdoc syntax: https://github.com/influxdata/flux/blob/master/docs/fluxdoc.md
------------------------------------------------------------------------------->
`http.endpoint()` iterates over input data and sends a single POST request per input row to
a specficied URL.
a specified URL.
This function is designed to be used with `monitor.notify()`.

View File

@ -76,7 +76,7 @@ The returned record is included in the final output.
In a left outer join, `l` is guaranteed to not be a default record, but `r` may be a
default record. Because `r` is more likely to contain null values, the output record
is built almost entirely from proprties of `l`, with the exception of `v_right`, which
is built almost entirely from properties of `l`, with the exception of `v_right`, which
we expect to sometimes be null.
For more information about the behavior of outer joins, see the [Outer joins](/flux/v0/stdlib/join/#outer-joins)

View File

@ -76,7 +76,7 @@ The returned record is included in the final output.
In a right outer join, `r` is guaranteed to not be a default record, but `l` may be a
default record. Because `l` is more likely to contain null values, the output record
is built almost entirely from proprties of `r`, with the exception of `v_left`, which
is built almost entirely from properties of `r`, with the exception of `v_left`, which
we expect to sometimes be null.
For more information about the behavior of outer joins, see the [Outer joins](/flux/v0/stdlib/join/#outer-joins)

View File

@ -208,7 +208,7 @@ join.tables(
The next example is nearly identical to the [previous example](#perform-a-left-outer-join),
but uses the `right` join method. With this method, `r` is guaranteed to not be a default
record, but `l` may be a default record. Because `l` is more likely to contain null values,
the output record is built almost entirely from proprties of `r`, with the exception of
the output record is built almost entirely from properties of `r`, with the exception of
`v_left`, which we expect to sometimes be null.
```js

View File

@ -64,7 +64,7 @@ math.sqrtpi
- **math.log2e** represents the base 2 logarithm of **e** (`math.e`).
- **math.maxfloat** represents the maximum float value.
- **math.maxint** represents the maximum integer value (`2^63 - 1`).
- **math.maxuint** representes the maximum unsigned integer value (`2^64 - 1`).
- **math.maxuint** represents the maximum unsigned integer value (`2^64 - 1`).
- **math.minint** represents the minimum integer value (`-2^63`).
- **math.phi** represents the [Golden Ratio](https://www.britannica.com/science/golden-ratio).
- **math.pi** represents pi (π).

View File

@ -50,7 +50,7 @@ y-coordinate to use in the operation.
### x
({{< req >}})
x-corrdinate to use in the operation.
x-coordinate to use in the operation.

View File

@ -55,7 +55,7 @@ y-value to use in the operation.
## Examples
- [Return the maximum difference between two values](#return-the-maximum-difference-betwee-two-values)
- [Return the maximum difference between two values](#return-the-maximum-difference-between-two-values)
- [Use math.dim in map](#use-mathdim-in-map)
### Return the maximum difference between two values

View File

@ -50,7 +50,7 @@ is the value used in the evaluation.
### sign
({{< req >}})
is the sign used in the eveluation.
is the sign used in the evaluation.

View File

@ -54,7 +54,7 @@ When enabled, results include a table with the following columns:
- **TotalDuration**: total query duration in nanoseconds.
- **CompileDuration**: number of nanoseconds spent compiling the query.
- **QueueDuration**: number of nanoseconds spent queueing.
- **RequeueDuration**: number fo nanoseconds spent requeueing.
- **RequeueDuration**: number of nanoseconds spent requeueing.
- **PlanDuration**: number of nanoseconds spent planning the query.
- **ExecuteDuration**: number of nanoseconds spent executing the query.
- **Concurrency**: number of goroutines allocated to process the query.

View File

@ -482,7 +482,7 @@ SHOW DATABASES
<!-- ### SHOW FIELD KEY CARDINALITY
Estimates or counts exactly the cardinality of the field key set for the curren
Estimates or counts exactly the cardinality of the field key set for the current
database unless a database is specified using the `ON <database>` option.
{{% note %}}

View File

@ -50,7 +50,7 @@ Use `HOLT_WINTERS()` to:
- Compare predicted values with actual values to detect anomalies in your data
```sql
HOLT_WINTERS[_WITH_FIT](aggregrate_expression, N, S)
HOLT_WINTERS[_WITH_FIT](aggregate_expression, N, S)
```
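A hedged usage sketch (measurement, field, and interval are placeholders; `HOLT_WINTERS()` operates on data aggregated by a `GROUP BY time()` interval):

```sql
SELECT HOLT_WINTERS(FIRST("water_level"), 10, 4)
FROM "h2o_feet"
GROUP BY time(6h)
```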
#### Arguments {#arguments-holt-winters}

View File

@ -482,7 +482,7 @@ SHOW DATABASES
<!-- ### SHOW FIELD KEY CARDINALITY
Estimates or counts exactly the cardinality of the field key set for the curren
Estimates or counts exactly the cardinality of the field key set for the current
database unless a database is specified using the `ON <database>` option.
{{% note %}}

View File

@ -50,7 +50,7 @@ Use `HOLT_WINTERS()` to:
- Compare predicted values with actual values to detect anomalies in your data
```sql
HOLT_WINTERS[_WITH_FIT](aggregrate_expression, N, S)
HOLT_WINTERS[_WITH_FIT](aggregate_expression, N, S)
```
#### Arguments {#arguments-holt-winters}

View File

@ -311,7 +311,7 @@ consider doing one of the following:
--rate-limit "5MB/5min"
```
- Include `--start` and `--end` flags with `influxd inpsect export-lp` to limit
- Include `--start` and `--end` flags with `influxd inspect export-lp` to limit
exported data by time and then sequentially write the consecutive time ranges.
```sh
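# Assumed invocation; the bucket ID and paths are placeholders.
influxd inspect export-lp \
  --bucket-id 12ab34cd56ef7890 \
  --engine-path ~/.influxdbv2/engine \
  --output-path /tmp/january.lp \
  --start 2023-01-01T00:00:00Z \
  --end 2023-01-31T23:59:59Z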

View File

@ -482,7 +482,7 @@ SHOW DATABASES
<!-- ### SHOW FIELD KEY CARDINALITY
Estimates or counts exactly the cardinality of the field key set for the curren
Estimates or counts exactly the cardinality of the field key set for the current
database unless a database is specified using the `ON <database>` option.
{{% note %}}

View File

@ -50,7 +50,7 @@ Use `HOLT_WINTERS()` to:
- Compare predicted values with actual values to detect anomalies in your data
```sql
HOLT_WINTERS[_WITH_FIT](aggregrate_expression, N, S)
HOLT_WINTERS[_WITH_FIT](aggregate_expression, N, S)
```
#### Arguments {#arguments-holt-winters}

View File

@ -167,7 +167,7 @@ IFS=$'\n'; for i in $(influx -format csv -username $INFLUXUSER -password $INFLUX
InfluxDB subscription configuration options are available in the `[subscriber]`
section of the `influxdb.conf`.
In order to use subcriptions, the `enabled` option in the `[subscriber]` section must be set to `true`.
In order to use subscriptions, the `enabled` option in the `[subscriber]` section must be set to `true`.
Below is an example `influxdb.conf` subscriber configuration:
```toml

View File

@ -236,7 +236,7 @@ Filter data by measurement regular expression.
Filter data by tag key regular expression.
##### [ `-tag-value-filter <regular_expresssion>` ]
##### [ `-tag-value-filter <regular_expression>` ]
Filter data by tag value regular expression.

View File

@ -27,7 +27,7 @@ The storage engine includes the following components:
* [Write Ahead Log (WAL)](#write-ahead-log-wal)
* [Cache](#cache)
* [Time-Structed Merge Tree (TSM)](#time-structured-merge-tree-tsm)
* [Time-Structured Merge Tree (TSM)](#time-structured-merge-tree-tsm)
* [Time Series Index (TSI)](#time-series-index-tsi)
## Writing data from API to disk

View File

@ -245,7 +245,7 @@ Append **uinteger separators** to the `long` datatype annotation with a colon (`
For example:
```
#datatype "usignedLong:.,"
#datatype "unsignedLong:.,"
```
{{% note %}}

View File

@ -40,7 +40,7 @@ section(s) of your [Kapacitor configuration file](/kapacitor/v1/administration/c
- [Specify your InfluxDB URL](#specify-your-influxdb-url)
- [Provide InfluxDB authentication credentials](#provide-influxdb-authentication-credentials)
- [Disable InfluxDB subcriptions](#disable-influxdb-subscriptions)
- [Disable InfluxDB subscriptions](#disable-influxdb-subscriptions)
### Specify your InfluxDB URL
Provide your InfluxDB URL in the `[[influxdb]].urls` configuration option.
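A minimal sketch of the corresponding kapacitor.conf section (the URL is a placeholder; only `urls` is named by the text above, the other keys are assumed):

```toml
[[influxdb]]
  enabled = true
  name = "localhost"
  urls = ["http://localhost:8086"]
```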

View File

@ -326,7 +326,7 @@ If the host machine is busy, it may take awhile to log alerts.
{{% /note %}}
6. (Optional) Modify the task to be really sensitive to ensure the alerts are working.
In the TICKscript, change the lamda function `.crit(lambda: "usage_idle" < 70)` to `.crit(lambda: "usage_idle" < 100)`, and run the `define` command with just the `TASK_NAME` and `-tick` arguments:
In the TICKscript, change the lambda function `.crit(lambda: "usage_idle" < 70)` to `.crit(lambda: "usage_idle" < 100)`, and run the `define` command with just the `TASK_NAME` and `-tick` arguments:
```bash
kapacitor define cpu_alert -tick cpu_alert.tick

View File

@ -450,7 +450,7 @@ The `Combine` and `Flatten` nodes previously operated (erroneously) across batch
- Force tar owner/group to be `root`.
- Fixed install/remove of Kapacitor on non-systemd Debian/Ubuntu systems.
- Fixed packaging to not enable services on RHEL systems.
- Fixed issues with recusive symlinks on systemd systems.
- Fixed issues with recursive symlinks on systemd systems.
- Fixed invalid default MQTT config.
---

View File

@ -5,7 +5,7 @@ description: >
The aggregate event handler allows you to aggregate alerts messages over a specified interval. This page includes aggregate options and usage examples.
menu:
kapacitor_v1:
name: Aggregrate
name: Aggregate
weight: 100
parent: Event handlers
aliases:

View File

@ -279,7 +279,7 @@ stream
.measurement('errors')
.groupBy('500')
|alert()
.info(lamda: 'count' > 0)
.info(lambda: 'count' > 0)
.noRecoveries()
.topic('500-errors')
```

View File

@ -297,7 +297,7 @@ stream
.measurement('errors')
.groupBy('500')
|alert()
.info(lamda: 'count' > 0)
.info(lambda: 'count' > 0)
.noRecoveries()
.topic('500-errors')
```

View File

@ -647,7 +647,7 @@ Filter expression for resetting the INFO alert level to lower level.
alert.infoReset(value ast.LambdaNode)
// Example
alert.infoReset(lamda: 'usage_idle' > 60)
alert.infoReset(lambda: 'usage_idle' > 60)
```
### Inhibit

View File

@ -535,7 +535,7 @@ In Example 19 above, the `float` conversion function is used to ensure that the
<!-- issue 1244 -->
When writing floating point values in messages, or to InfluxDB, it might be helpful to specify the decimal precision in order to make the values more readable or better comparable. For example, in the `messsage()` method of an `alert` node it is possible to "pipe" a value to a `printf` statement.
When writing floating point values in messages, or to InfluxDB, it might be helpful to specify the decimal precision in order to make the values more readable or better comparable. For example, in the `message()` method of an `alert` node it is possible to "pipe" a value to a `printf` statement.
```js
|alert()
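    // Assumed continuation of this truncated example: pipe the field value
    // through printf to fix the decimal precision in the alert message.
    .message('CPU idle: {{ index .Fields "usage_idle" | printf "%0.2f" }}%')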

View File

@ -60,17 +60,17 @@ To add a Kapacitor instance to Chronograf:
"Active Kapacitor" heading, click **Add Config**.
The Configure Kapacitor page loads with default settings.
<img src="/img/kapacitor/1-4-chrono-configuration02.png" alt="conifguration-new" style="max-width: 100%;"/>
<img src="/img/kapacitor/1-4-chrono-configuration02.png" alt="configuration-new" style="max-width: 100%;"/>
3. In the grouping "Connection Details" set the values for Kapacitor URL and a
Name for this Kapacitor, also add username and password credentials if necessary.
<img src="/img/kapacitor/1-4-chrono-configuration03.png" alt="conifguration-details" style="max-width: 306px;"/>
<img src="/img/kapacitor/1-4-chrono-configuration03.png" alt="configuration-details" style="max-width: 306px;"/>
4. Click the **Connect** button. If the "Connection Details" are correct a success
message is displayed and a new section will appear "Configure Alert Endpoints".
<img src="/img/kapacitor/1-4-chrono-configuration04.png" alt="conifguration-success" style="max-width: 100%;" />
<img src="/img/kapacitor/1-4-chrono-configuration04.png" alt="configuration-success" style="max-width: 100%;" />
5. If a third party alert service or SMTP is used, update, the third party
settings in the "Configure Alert Endpoints" section.
@ -78,7 +78,7 @@ To add a Kapacitor instance to Chronograf:
6. Return to the "Configuration" page by clicking on the **Configuration** icon once more.
The new Kapacitor instance should be listed under the "Active Kapacitor" heading.
<img src="/img/kapacitor/1-4-chrono-configuration05.png" alt="conifguration-review" style="max-width: 100%;" />
<img src="/img/kapacitor/1-4-chrono-configuration05.png" alt="configuration-review" style="max-width: 100%;" />
### Managing Kapacitor from Chronograf

View File

@ -30,7 +30,7 @@ The diagram below outlines the infrastructure for discovering and scraping data
**Image 1 &ndash; Scrapping and Discovery work flow**
<img src="/img/kapacitor/1-4-pull-metrics.png" alt="conifguration-open" style="max-width:100%;" />
<img src="/img/kapacitor/1-4-pull-metrics.png" alt="configuration-open" style="max-width:100%;" />
1. First, Kapacitor implements the discovery process to identify the available targets in your infrastructure.
It requests that information at regular intervals and receives that information from an [authority](#available-discoverers).

View File

@ -83,7 +83,7 @@ cpu:cpu=cpu3,host=localhost OK cpu:cpu=cpu3,host=localhost is OK
```
{{% note %}}
If the error message `unkown topic: "cpu"` is returned, please be aware,
If the error message `unknown topic: "cpu"` is returned, please be aware,
that topics are created only when needed, as such if the task has not triggered an alert yet, the topic will not exist.
If this error about the topic not existing is returned, then, try and cause an alert to be triggered.
Either change the thresholds on the task or create some cpu load.

View File

@ -183,7 +183,7 @@ This is a list of known headers and the corresponding values for
In this configuration mode, you explicitly specify the field and tags you want
to scrape from your data.
A configuration can contain muliple _xpath_ subsections (for example, the file plugin
A configuration can contain multiple _xpath_ subsections (for example, the file plugin
to process the xml-string multiple times). Consult the [XPath syntax][xpath] and
the [underlying library's functions][xpath lib] for details and help regarding
XPath queries. Consider using an XPath tester such as [xpather.com][xpather] or

View File

@ -183,7 +183,7 @@ This is a list of known headers and the corresponding values for
In this configuration mode, you explicitly specify the field and tags you want
to scrape from your data.
A configuration can contain muliple _xpath_ subsections (for example, the file plugin
A configuration can contain multiple _xpath_ subsections (for example, the file plugin
to process the xml-string multiple times). Consult the [XPath syntax][xpath] and
the [underlying library's functions][xpath lib] for details and help regarding
XPath queries. Consider using an XPath tester such as [xpather.com][xpather] or

View File

@ -183,7 +183,7 @@ This is a list of known headers and the corresponding values for
In this configuration mode, you explicitly specify the field and tags you want
to scrape from your data.
A configuration can contain muliple _xpath_ subsections (for example, the file plugin
A configuration can contain multiple _xpath_ subsections (for example, the file plugin
to process the xml-string multiple times). Consult the [XPath syntax][xpath] and
the [underlying library's functions][xpath lib] for details and help regarding
XPath queries. Consider using an XPath tester such as [xpather.com][xpather] or

View File

@ -1355,7 +1355,7 @@ Telegraf without having to paste in sample configurations from each plugin's REA
- Remove signed MacOS artifacts.
- Run `go mod tidy`.
- Fix `prometheusremotewrite` wrong timestamp unit.
- Fix sudden close ccaused by OPC UA inpu.
- Fix sudden close caused by OPC UA input.
- Update `containerd` to 1.5.9.
- Update `go-sensu` to v2.12.0.
- Update `gosmi` from v0.4.3 to v0.4.4.
@ -1645,7 +1645,7 @@ Telegraf without having to paste in sample configurations from each plugin's REA
- Update `containerd/containerd` module to 1.5.9.
### Input plugin updates
- Execd (`execd`): Resolve a Promethues text format parsing error.
- Execd (`execd`): Resolve a Prometheus text format parsing error.
- IPset (`ipset`): Prevent panic from occurring after startup.
- OPC-UA (`opc_ua`): Fix issue where fields were being duplicated.
- HTTP (`http`): Prevent server side error message.
@ -4550,7 +4550,7 @@ for details about the mapping.
- Allow iptable entries with trailing text.
- Sanitize password from couchbase metric.
- Converge to typed value in prometheus output.
- Skip compilcation of logparser and tail on solaris.
- Skip compilation of logparser and tail on solaris.
- Discard logging from tail library.
- Remove log message on ping timeout.
- Don't retry points beyond retention policy.
@ -4996,7 +4996,7 @@ consistent with the behavior of `collection_jitter`.
- Add support for Tengine.
- Logparser input plugin for parsing grok-style log patterns.
- ElasticSearch: now supports connecting to ElasticSearch via SSL.
- Add graylog input pluging.
- Add graylog input plugin.
- Consul input plugin.
- conntrack input plugin.
- vmstat input plugin.

View File

@ -1,6 +1,6 @@
# Notification data structure
#
# - id: unqiue ID for notification, cannot start with digit, no spaces, a-z and 0-9
# - id: unique ID for notification, cannot start with digit, no spaces, a-z and 0-9
# level: note or warn
# scope:
# - list of URL paths to show notification on, no scope shows everywhere
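#
# Illustrative entry (assumed; the file is truncated here and real entries may
# carry additional fields):
# - id: example-note
#   level: note
#   scope:
#     - /influxdb/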

View File

@ -289,7 +289,7 @@ input:
- name: Cisco GNMI Telemetry
id: cisco_telemetry_gnmi
description: |
> The `inputs.cisco_telementry_gnmi` plugin was renamed to `inputs.gmni`
> The `inputs.cisco_telemetry_gnmi` plugin was renamed to `inputs.gmni`
in **Telegraf 1.15.0** to better reflect its general support for gNMI devices.
See the [gNMI plugin](#input-cisco_telemetry_gnmi).
@ -2150,7 +2150,7 @@ input:
id: win_perf_counters
description: |
The Windows Performance Counters input plugin reads Performance Counters on the
Windows operating sytem. **Windows only**.
Windows operating system. **Windows only**.
introduced: 0.10.2
tags: [windows, systems]
@ -2796,7 +2796,7 @@ aggregator:
id: minmax
description: |
The MinMax aggregator plugin aggregates `min` and `max` values of each field it sees,
emitting the aggregrate every period seconds.
emitting the aggregate every period seconds.
introduced: 1.1.0
tags: [linux, macos, windows]

View File

@ -44,7 +44,7 @@
</label>
</li>
</ul>
<div class="higlight">
<div class="highlight">
<pre id="group-by-example" class="chroma">
data
<span class="nx">|></span> group(columns<span class="nx">:</span> [<span class="s2">"_measurement"</span>, <span class="s2">"loc"</span>, <span class="s2">"sensorID"</span>, <span class="s2">"_field"</span>])</pre>

View File

Binary image file (before: 2.4 KiB, after: 2.4 KiB)

View File

Binary image file (before: 1.3 KiB, after: 1.3 KiB)

View File

Binary image file (before: 1.9 KiB, after: 1.9 KiB)

View File

@ -5,7 +5,7 @@
# source: https://github.com/mrbaseman/parse_yaml.git
#
###############################################################################
# Parses a YAML file and outputs variable assigments. Can optionally accept a
# Parses a YAML file and outputs variable assignments. Can optionally accept a
# variable name prefix and a variable name separator
#
# Usage:
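#   (Hypothetical invocation; the prefix and separator arguments are optional.)
#     eval $(parse_yaml config.yml "CONF_")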

View File

@ -633,9 +633,9 @@ http2-wrapper@^2.1.10:
resolve-alpn "^1.2.0"
hugo-extended@>=0.101.0:
version "0.121.2"
resolved "https://registry.yarnpkg.com/hugo-extended/-/hugo-extended-0.121.2.tgz#b8013d3a9b2c676ebc2210429ea0b08f13ee362d"
integrity sha512-kb5XX5b9COxI88PDoH8+n4nmt/pe8ylyFLIBSdCaAGfH9/fEvk88Bv2MabyDCwHOGIAa8M6WpwulJG007vXwWg==
version "0.122.0"
resolved "https://registry.yarnpkg.com/hugo-extended/-/hugo-extended-0.122.0.tgz#80ed2fcb165cf4a809230cb0ab3b2756a45402a6"
integrity sha512-f9kPVSKxk5mq62wmw1tbhg5CV7n93Tbt7jZoy+C3yfRlEZhGqBlxaEJ3MeeNoilz3IPy5STHB7R0Bdhuap7mHA==
dependencies:
careful-downloader "^3.0.0"
log-symbols "^5.1.0"
@ -1163,9 +1163,9 @@ spdx-correct@^3.0.0:
spdx-license-ids "^3.0.0"
spdx-exceptions@^2.1.0:
version "2.3.0"
resolved "https://registry.yarnpkg.com/spdx-exceptions/-/spdx-exceptions-2.3.0.tgz#3f28ce1a77a00372683eade4a433183527a2163d"
integrity sha512-/tTrYOC7PPI1nUAgx34hUpqXuyJG+DTHJTnIULG4rDygi4xu/tfgmq1e1cIRwRzwZgo4NLySi+ricLkZkw4i5A==
version "2.4.0"
resolved "https://registry.yarnpkg.com/spdx-exceptions/-/spdx-exceptions-2.4.0.tgz#c07a4ede25b16e4f78e6707bbd84b15a45c19c1b"
integrity sha512-hcjppoJ68fhxA/cjbN4T8N6uCUejN8yFw69ttpqtBeCbF3u13n7mb31NB9jKwGTTWWnt9IbRA/mf1FprYS8wfw==
spdx-expression-parse@^3.0.0:
version "3.0.1"