Merge branch 'master' of github.com:influxdata/docs-v2

pull/4797/head
Scott Anderson 2023-03-13 15:24:59 -06:00
commit bade01e7ab
31 changed files with 238 additions and 331 deletions

View File

@ -15,7 +15,7 @@ Use the `POST` request method and include the following in your request:
|:----------- |:---------- |
| Organization | Use the `org` query parameter in your request URL. |
| Bucket | Use the `bucket` query parameter in your request URL. |
| Precision | Use the [`precision`](/influxdb/cloud/write-data/developer-tools/line-protocol/#timestamp-precision) query parameter in your request URL. Default is `ns`. |
| Precision | Use the [`precision`](/influxdb/cloud/reference/glossary/#precision) query parameter in your request URL. Default is `ns`. |
| API token | Use the `Authorization: Token YOUR_API_TOKEN` header. |
| Line protocol | Pass as plain text in your request body. |
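For illustration, a minimal sketch of such a request in Node.js 18+ (the organization, bucket, and token values below are hypothetical placeholders, and `fetch` is the Node.js built-in):

```js
// A minimal sketch (not from this page): write line protocol with the
// InfluxDB v2 write API. Replace the placeholder org, bucket, and token.
const writeExample = async () => {
  const url = `${process.env.INFLUX_URL}/api/v2/write?org=YOUR_ORG&bucket=YOUR_BUCKET&precision=ns`
  const response = await fetch(url, {
    method: 'POST',
    headers: {
      Authorization: 'Token YOUR_API_TOKEN',
      'Content-Type': 'text/plain; charset=utf-8',
    },
    body: 'home,room=kitchen temp=22.3', // line protocol payload
  })
  console.log(response.status) // 204 indicates the write succeeded
}

writeExample()
```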

View File

@ -14,7 +14,7 @@ aliases:
Use the [InfluxDB JavaScript client library](https://github.com/influxdata/influxdb-client-js) in a Node.js environment to query InfluxDB.
The following example sends a Flux query to an InfluxDB bucket and outputs rows from an observable table.
The following example sends a Flux query to an InfluxDB bucket and outputs rows as a JavaScript _asynchronous iterable_ object.
## Before you begin
@ -56,25 +56,21 @@ The following example sends a Flux query to an InfluxDB bucket and outputs rows
```
Replace *`YOUR_BUCKET`* with the name of your InfluxDB bucket.
4. Use the `queryRows()` method of the query client to query InfluxDB.
`queryRows()` takes a Flux query and an [RxJS **Observer**](http://reactivex.io/rxjs/manual/overview.html#observer) object.
The client returns [table](/{{% latest "influxdb" %}}/reference/syntax/annotated-csv/#tables) metadata and rows as an [RxJS **Observable**](http://reactivex.io/rxjs/manual/overview.html#observable).
`queryRows()` subscribes your observer to the observable.
Finally, the observer logs the rows from the response to the terminal.
4. Use the `iterateRows()` method of the query client to query InfluxDB.
`iterateRows()` takes a Flux query and returns the [table](/{{% latest "influxdb" %}}/reference/syntax/annotated-csv/#tables) of metadata and rows as an asynchronous iterable (`AsyncIterable<Row>`).
The following example shows how to write an asynchronous function that uses the `iterateRows()` method to query a bucket and uses the JavaScript `for await...of` statement to iterate over the query results:
```js
const observer = {
next(row, tableMeta) {
const o = tableMeta.toObject(row)
const myQuery = async () => {
for await (const {values, tableMeta} of queryApi.iterateRows(fluxQuery)) {
const o = tableMeta.toObject(values)
console.log(
`${o._time} ${o._measurement} in '${o.location}' (${o.sensor_id}): ${o._field}=${o._value}`
)
}
}
queryApi.queryRows(fluxQuery, observer)
myQuery()
```
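Step 4 assumes `queryApi` and `fluxQuery` were created in the earlier steps (elided from this diff). A minimal sketch of that setup, with placeholder connection values, might look like:

```js
import {InfluxDB} from '@influxdata/influxdb-client'

// Placeholder connection settings; substitute your own values.
const url = 'http://localhost:8086'
const token = 'YOUR_API_TOKEN'
const org = 'YOUR_ORG'

const queryApi = new InfluxDB({url, token}).getQueryApi(org)
const fluxQuery = 'from(bucket:"YOUR_BUCKET") |> range(start: -1h)'
```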
### Complete example

View File

@ -14,7 +14,7 @@ aliases:
Use the [InfluxDB JavaScript client library](https://github.com/influxdata/influxdb-client-js) in a Node.js environment to query InfluxDB.
The following example sends a Flux query to an InfluxDB bucket and outputs rows from an observable table.
The following example sends a Flux query to an InfluxDB bucket and outputs rows as a JavaScript _asynchronous iterable_ object.
## Before you begin
@ -56,24 +56,20 @@ The following example sends a Flux query to an InfluxDB bucket and outputs rows
```
Replace *`YOUR_BUCKET`* with the name of your InfluxDB bucket.
4. Use the `queryRows()` method of the query client to query InfluxDB.
`queryRows()` takes a Flux query and an [RxJS **Observer**](http://reactivex.io/rxjs/manual/overview.html#observer) object.
The client returns [table](/{{% latest "influxdb" %}}/reference/syntax/annotated-csv/#tables) metadata and rows as an [RxJS **Observable**](http://reactivex.io/rxjs/manual/overview.html#observable).
`queryRows()` subscribes your observer to the observable.
Finally, the observer logs the rows from the response to the terminal.
4. Use the `iterateRows()` method of the query client to query InfluxDB.
`iterateRows()` takes a Flux query and returns the [table](/{{% latest "influxdb" %}}/reference/syntax/annotated-csv/#tables) metadata and rows as an asynchronous iterable (`AsyncIterable<Row>`).
```js
const observer = {
next(row, tableMeta) {
const o = tableMeta.toObject(row)
const myQuery = async () => {
for await (const {values, tableMeta} of queryApi.iterateRows(fluxQuery)) {
const o = tableMeta.toObject(values)
console.log(
`${o._time} ${o._measurement} in '${o.location}' (${o.sensor_id}): ${o._field}=${o._value}`
)
}
}
queryApi.queryRows(fluxQuery, observer)
myQuery()
```
### Complete example

View File

@ -14,7 +14,7 @@ aliases:
Use the [InfluxDB JavaScript client library](https://github.com/influxdata/influxdb-client-js) in a Node.js environment to query InfluxDB.
The following example sends a Flux query to an InfluxDB bucket and outputs rows from an observable table.
The following example sends a Flux query to an InfluxDB bucket and outputs rows as a JavaScript _asynchronous iterable_ object.
## Before you begin
@ -56,24 +56,20 @@ The following example sends a Flux query to an InfluxDB bucket and outputs rows
```
Replace *`YOUR_BUCKET`* with the name of your InfluxDB bucket.
4. Use the `queryRows()` method of the query client to query InfluxDB.
`queryRows()` takes a Flux query and an [RxJS **Observer**](http://reactivex.io/rxjs/manual/overview.html#observer) object.
The client returns [table](/{{% latest "influxdb" %}}/reference/syntax/annotated-csv/#tables) metadata and rows as an [RxJS **Observable**](http://reactivex.io/rxjs/manual/overview.html#observable).
`queryRows()` subscribes your observer to the observable.
Finally, the observer logs the rows from the response to the terminal.
4. Use the `iterateRows()` method of the query client to query InfluxDB.
`iterateRows()` takes a Flux query and returns the [table](/{{% latest "influxdb" %}}/reference/syntax/annotated-csv/#tables) metadata and rows as an asynchronous iterable (`AsyncIterable<Row>`).
```js
const observer = {
next(row, tableMeta) {
const o = tableMeta.toObject(row)
const myQuery = async () => {
for await (const {values, tableMeta} of queryApi.iterateRows(fluxQuery)) {
const o = tableMeta.toObject(values)
console.log(
`${o._time} ${o._measurement} in '${o.location}' (${o.sensor_id}): ${o._field}=${o._value}`
)
}
}
queryApi.queryRows(fluxQuery, observer)
myQuery()
```
### Complete example

View File

@ -249,9 +249,7 @@ const influxdb = new InfluxDB({url: process.env.INFLUX_URL, token: process.env.I
|> last()`
const devices = {}
return await new Promise((resolve, reject) => {
queryApi.queryRows(fluxQuery, {
next(row, tableMeta) {
for await (const {values: row, tableMeta} of queryApi.iterateRows(fluxQuery)) {
const o = tableMeta.toObject(row)
const deviceId = o.deviceId
if (!deviceId) {
@ -262,13 +260,9 @@ const influxdb = new InfluxDB({url: process.env.INFLUX_URL, token: process.env.I
if (!device.updatedAt || device.updatedAt < o._time) {
device.updatedAt = o._time
}
},
error: reject,
complete() {
resolve(devices)
},
})
})
}
return devices
}
```
@ -284,26 +278,17 @@ for registered devices, processes the data, and returns a Promise with the resul
If you invoke the function as `getDevices()` (without a _`deviceId`_),
it retrieves all `deviceauth` points and returns a Promise with `{ DEVICE_ID: ROW_DATA }`.
To send the query and process results, the `getDevices(deviceId)` function uses the `QueryAPI queryRows(query, consumer)` method.
`queryRows` executes the `query` and provides the Annotated CSV result as an Observable to the `consumer`.
`queryRows` has the following TypeScript signature:
To send the query and process results, the `getDevices(deviceId)` function uses the asynchronous `QueryAPI iterateRows(query)` method.
`iterateRows` executes the `query` and provides the Annotated CSV result as an AsyncIterable.
`iterateRows` has the following TypeScript signature:
```ts
queryRows(
query: string | ParameterizedQuery,
consumer: FluxResultObserver<string[]>
): void
iterateRows(
query: string | ParameterizedQuery
): AsyncIterable<Row>
```
{{% caption %}}[@influxdata/influxdb-client-js QueryAPI](https://github.com/influxdata/influxdb-client-js/blob/3db2942432b993048d152e0d0e8ec8499eedfa60/packages/core/src/QueryApi.ts){{% /caption %}}
The `consumer` that you provide must implement the [`FluxResultObserver` interface](https://github.com/influxdata/influxdb-client-js/blob/3db2942432b993048d152e0d0e8ec8499eedfa60/packages/core/src/results/FluxResultObserver.ts) and provide the following callback functions:
- `next(row, tableMeta)`: processes the next row and table metadata--for example, to prepare the response.
- `error(error)`: receives and handles errors--for example, by rejecting the Promise.
- `complete()`: signals when all rows have been consumed--for example, by resolving the Promise.
To learn more about Observers, see the [RxJS Guide](https://rxjs.dev/guide/observer).
{{% caption %}}[@influxdata/influxdb-client-js QueryAPI](https://github.com/influxdata/influxdb-client-js/blob/af7cf3b6c1003ff0400e91bcb6a0b860668d6458/packages/core/src/QueryApi.ts){{% /caption %}}
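One practical consequence of the change: the `error` and `complete` callbacks of the old observer map onto ordinary JavaScript control flow with `iterateRows`. A hedged sketch (the function name and filtering logic below are illustrative, not from the project):

```js
const getDevicesSafely = async (fluxQuery) => {
  const devices = {}
  try {
    // The loop ends normally when all rows are consumed
    // (the role of the old complete() callback).
    for await (const {values, tableMeta} of queryApi.iterateRows(fluxQuery)) {
      const o = tableMeta.toObject(values)
      if (o.deviceId) {
        devices[o.deviceId] = o
      }
    }
  } catch (error) {
    // Query or transport failures surface here
    // (the role of the old error() callback).
    console.error(error)
    throw error
  }
  return devices
}
```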
## Create the API to register devices

View File

@ -259,9 +259,7 @@ const influxdb = new InfluxDB({url: process.env.INFLUX_URL, token: process.env.I
|> last()`
const devices = {}
return await new Promise((resolve, reject) => {
queryApi.queryRows(fluxQuery, {
next(row, tableMeta) {
for await (const {values: row, tableMeta} of queryApi.iterateRows(fluxQuery)) {
const o = tableMeta.toObject(row)
const deviceId = o.deviceId
if (!deviceId) {
@ -272,13 +270,9 @@ const influxdb = new InfluxDB({url: process.env.INFLUX_URL, token: process.env.I
if (!device.updatedAt || device.updatedAt < o._time) {
device.updatedAt = o._time
}
},
error: reject,
complete() {
resolve(devices)
},
})
})
}
return devices
}
```
@ -294,26 +288,17 @@ for registered devices, processes the data, and returns a Promise with the resul
If you invoke the function as `getDevices()` (without a _`deviceId`_),
it retrieves all `deviceauth` points and returns a Promise with `{ DEVICE_ID: ROW_DATA }`.
To send the query and process results, the `getDevices(deviceId)` function uses the `QueryAPI queryRows(query, consumer)` method.
`queryRows` executes the `query` and provides the Annotated CSV result as an Observable to the `consumer`.
`queryRows` has the following TypeScript signature:
To send the query and process results, the `getDevices(deviceId)` function uses the asynchronous `QueryAPI iterateRows(query)` method.
`iterateRows` executes the `query` and provides the Annotated CSV result as an AsyncIterable.
`iterateRows` has the following TypeScript signature:
```ts
queryRows(
query: string | ParameterizedQuery,
consumer: FluxResultObserver<string[]>
): void
iterateRows(
query: string | ParameterizedQuery
): AsyncIterable<Row>
```
{{% caption %}}[@influxdata/influxdb-client-js QueryAPI](https://github.com/influxdata/influxdb-client-js/blob/3db2942432b993048d152e0d0e8ec8499eedfa60/packages/core/src/QueryApi.ts){{% /caption %}}
The `consumer` that you provide must implement the [`FluxResultObserver` interface](https://github.com/influxdata/influxdb-client-js/blob/3db2942432b993048d152e0d0e8ec8499eedfa60/packages/core/src/results/FluxResultObserver.ts) and provide the following callback functions:
- `next(row, tableMeta)`: processes the next row and table metadata--for example, to prepare the response.
- `error(error)`: receives and handles errors--for example, by rejecting the Promise.
- `complete()`: signals when all rows have been consumed--for example, by resolving the Promise.
To learn more about Observers, see the [RxJS Guide](https://rxjs.dev/guide/observer).
{{% caption %}}[@influxdata/influxdb-client-js QueryAPI](https://github.com/influxdata/influxdb-client-js/blob/af7cf3b6c1003ff0400e91bcb6a0b860668d6458/packages/core/src/QueryApi.ts){{% /caption %}}
## Create the API to register devices

View File

@ -15,7 +15,7 @@ Use the `POST` request method and include the following in your request:
|:----------- |:---------- |
| Organization | Use the `org` query parameter in your request URL. |
| Bucket | Use the `bucket` query parameter in your request URL. |
| Timestamp precision | Use the [`precision`](/influxdb/v2.3/write-data/developer-tools/line-protocol/#timestamp-precision) query parameter in your request URL. Default is `ns`. |
| Timestamp precision | Use the [`precision`](/influxdb/v2.3/reference/glossary/#precision) query parameter in your request URL. Default is `ns`. |
| API token | Use the `Authorization: Token YOUR_API_TOKEN` header. |
| Line protocol | Pass as plain text in your request body. |

View File

@ -14,7 +14,7 @@ aliases:
Use the [InfluxDB JavaScript client library](https://github.com/influxdata/influxdb-client-js) in a Node.js environment to query InfluxDB.
The following example sends a Flux query to an InfluxDB bucket and outputs rows from an observable table.
The following example sends a Flux query to an InfluxDB bucket and outputs rows as a JavaScript _asynchronous iterable_ object.
## Before you begin
@ -56,24 +56,20 @@ The following example sends a Flux query to an InfluxDB bucket and outputs rows
```
Replace *`YOUR_BUCKET`* with the name of your InfluxDB bucket.
4. Use the `queryRows()` method of the query client to query InfluxDB.
`queryRows()` takes a Flux query and an [RxJS **Observer**](http://reactivex.io/rxjs/manual/overview.html#observer) object.
The client returns [table](/{{% latest "influxdb" %}}/reference/syntax/annotated-csv/#tables) metadata and rows as an [RxJS **Observable**](http://reactivex.io/rxjs/manual/overview.html#observable).
`queryRows()` subscribes your observer to the observable.
Finally, the observer logs the rows from the response to the terminal.
4. Use the `iterateRows()` method of the query client to query InfluxDB.
`iterateRows()` takes a Flux query and returns the [table](/{{% latest "influxdb" %}}/reference/syntax/annotated-csv/#tables) metadata and rows as an asynchronous iterable (`AsyncIterable<Row>`).
```js
const observer = {
next(row, tableMeta) {
const o = tableMeta.toObject(row)
const myQuery = async () => {
for await (const {values, tableMeta} of queryApi.iterateRows(fluxQuery)) {
const o = tableMeta.toObject(values)
console.log(
`${o._time} ${o._measurement} in '${o.location}' (${o.sensor_id}): ${o._field}=${o._value}`
)
}
}
queryApi.queryRows(fluxQuery, observer)
myQuery()
```
### Complete example

View File

@ -259,26 +259,20 @@ const influxdb = new InfluxDB({url: process.env.INFLUX_URL, token: process.env.I
|> last()`
const devices = {}
return await new Promise((resolve, reject) => {
queryApi.queryRows(fluxQuery, {
next(row, tableMeta) {
const o = tableMeta.toObject(row)
for await (const {values, tableMeta} of queryApi.iterateRows(fluxQuery)) {
const o = tableMeta.toObject(values)
const deviceId = o.deviceId
if (!deviceId) {
return
continue
}
const device = devices[deviceId] || (devices[deviceId] = {deviceId})
device[o._field] = o._value
if (!device.updatedAt || device.updatedAt < o._time) {
device.updatedAt = o._time
}
},
error: reject,
complete() {
resolve(devices)
},
})
})
}
return devices
}
```
@ -294,26 +288,17 @@ for registered devices, processes the data, and returns a Promise with the resul
If you invoke the function as `getDevices()` (without a _`deviceId`_),
it retrieves all `deviceauth` points and returns a Promise with `{ DEVICE_ID: ROW_DATA }`.
To send the query and process results, the `getDevices(deviceId)` function uses the `QueryAPI queryRows(query, consumer)` method.
`queryRows` executes the `query` and provides the Annotated CSV result as an Observable to the `consumer`.
`queryRows` has the following TypeScript signature:
To send the query and process results, the `getDevices(deviceId)` function uses the asynchronous `QueryAPI iterateRows(query)` method.
`iterateRows` executes the `query` and provides the Annotated CSV result as an AsyncIterable.
`iterateRows` has the following TypeScript signature:
```ts
queryRows(
query: string | ParameterizedQuery,
consumer: FluxResultObserver<string[]>
): void
iterateRows(
query: string | ParameterizedQuery
): AsyncIterable<Row>
```
{{% caption %}}[@influxdata/influxdb-client-js QueryAPI](https://github.com/influxdata/influxdb-client-js/blob/3db2942432b993048d152e0d0e8ec8499eedfa60/packages/core/src/QueryApi.ts){{% /caption %}}
The `consumer` that you provide must implement the [`FluxResultObserver` interface](https://github.com/influxdata/influxdb-client-js/blob/3db2942432b993048d152e0d0e8ec8499eedfa60/packages/core/src/results/FluxResultObserver.ts) and provide the following callback functions:
- `next(row, tableMeta)`: processes the next row and table metadata--for example, to prepare the response.
- `error(error)`: receives and handles errors--for example, by rejecting the Promise.
- `complete()`: signals when all rows have been consumed--for example, by resolving the Promise.
To learn more about Observers, see the [RxJS Guide](https://rxjs.dev/guide/observer).
{{% caption %}}[@influxdata/influxdb-client-js QueryAPI](https://github.com/influxdata/influxdb-client-js/blob/af7cf3b6c1003ff0400e91bcb6a0b860668d6458/packages/core/src/QueryApi.ts){{% /caption %}}
## Create the API to register devices

View File

@ -15,7 +15,7 @@ Use the `POST` request method and include the following in your request:
|:----------- |:---------- |
| Organization | Use the `org` query parameter in your request URL. |
| Bucket | Use the `bucket` query parameter in your request URL. |
| Timestamp precision | Use the [`precision`](/influxdb/v2.4/write-data/developer-tools/line-protocol/#timestamp-precision) query parameter in your request URL. Default is `ns`. |
| Timestamp precision | Use the [`precision`](/influxdb/v2.4/reference/glossary/#precision) query parameter in your request URL. Default is `ns`. |
| API token | Use the `Authorization: Token YOUR_API_TOKEN` header. |
| Line protocol | Pass as plain text in your request body. |

View File

@ -15,7 +15,7 @@ Use the `POST` request method and include the following in your request:
|:----------- |:---------- |
| Organization | Use the `org` query parameter in your request URL. |
| Bucket | Use the `bucket` query parameter in your request URL. |
| Timestamp precision | Use the [`precision`](/influxdb/v2.5/write-data/developer-tools/line-protocol/#timestamp-precision) query parameter in your request URL. Default is `ns`. |
| Timestamp precision | Use the [`precision`](/influxdb/v2.5/reference/glossary/#precision) query parameter in your request URL. Default is `ns`. |
| API token | Use the `Authorization: Token YOUR_API_TOKEN` header. |
| Line protocol | Pass as plain text in your request body. |

View File

@ -15,7 +15,7 @@ Use the `POST` request method and include the following in your request:
|:----------- |:---------- |
| Organization | Use the `org` query parameter in your request URL. |
| Bucket | Use the `bucket` query parameter in your request URL. |
| Timestamp precision | Use the [`precision`](/influxdb/v2.6/write-data/developer-tools/line-protocol/#timestamp-precision) query parameter in your request URL. Default is `ns`. |
| Timestamp precision | Use the [`precision`](/influxdb/v2.6/reference/glossary/#precision) query parameter in your request URL. Default is `ns`. |
| API token | Use the `Authorization: Token YOUR_API_TOKEN` header. |
| Line protocol | Pass as plain text in your request body. |

View File

@ -8,10 +8,10 @@ menu:
parent: Input data formats
---
The grok data format parses line delimited data using a regular expression-like
The grok data format parses line-delimited data using a regular expression-like
language.
If you need to become familiar with grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
For an introduction to grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
in the Logstash documentation. The grok parser uses a slightly modified version of logstash "grok"
patterns, using the format:
@ -65,12 +65,11 @@ See https://golang.org/pkg/time/#Parse for more details.
Telegraf has many of its own [built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/parsers/grok/influx_patterns.go),
as well as support for most of
[logstash's builtin patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns.
[Logstash's core patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns).
_Golang regular expressions do not support lookahead or lookbehind.
logstash patterns that depend on these are not supported._
Logstash patterns that depend on these are not supported._
If you need help building patterns to match your logs, the
[Grok Debugger application](https://grokdebug.herokuapp.com) might be helpful.
If you need help building patterns to match your logs, [Grok Constructor](https://grokconstructor.appspot.com/) might be helpful.
## Configuration
@ -168,8 +167,8 @@ grok will offset the timestamp accordingly.
When saving patterns to the configuration file, keep in mind the different TOML
[string](https://github.com/toml-lang/toml#string) types and the escaping
rules for each. These escaping rules must be applied in addition to the
escaping required by the grok syntax. Using the Multi-line line literal
syntax with `'''` may be useful.
escaping required by the grok syntax. Using the TOML multi-line literal
syntax (`'''`) may be useful.
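For example, a multi-line custom pattern can be stored without doubling backslashes by placing it in a TOML literal string. A generic sketch of a Telegraf grok parser configuration (the file path and pattern values are illustrative):

```toml
[[inputs.file]]
  files = ["/var/log/app.log"]
  data_format = "grok"
  grok_patterns = ["%{CUSTOM_LOG}"]
  # The ''' literal string keeps backslashes and quotes verbatim,
  # so only grok-level escaping is needed.
  grok_custom_patterns = '''
CUSTOM_LOG %{TIMESTAMP_ISO8601:timestamp:ts-"2006-01-02 15:04:05"} %{WORD:level:tag} %{GREEDYDATA:message}
'''
```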
The following config examples will parse this input file:

View File

@ -8,10 +8,10 @@ menu:
parent: Input data formats
---
The grok data format parses line delimited data using a regular expression-like
The grok data format parses line-delimited data using a regular expression-like
language.
If you need to become familiar with grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
For an introduction to grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
in the Logstash documentation. The grok parser uses a slightly modified version of logstash "grok"
patterns, using the format:
@ -65,12 +65,11 @@ See https://golang.org/pkg/time/#Parse for more details.
Telegraf has many of its own [built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/parsers/grok/influx_patterns.go),
as well as support for most of
[logstash's builtin patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns.
[Logstash's core patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns).
_Golang regular expressions do not support lookahead or lookbehind.
logstash patterns that depend on these are not supported._
Logstash patterns that depend on these are not supported._
If you need help building patterns to match your logs, the
[Grok Debugger application](https://grokdebug.herokuapp.com) might be helpful.
If you need help building patterns to match your logs, [Grok Constructor](https://grokconstructor.appspot.com/) might be helpful.
## Configuration
@ -168,8 +167,8 @@ grok will offset the timestamp accordingly.
When saving patterns to the configuration file, keep in mind the different TOML
[string](https://github.com/toml-lang/toml#string) types and the escaping
rules for each. These escaping rules must be applied in addition to the
escaping required by the grok syntax. Using the Multi-line line literal
syntax with `'''` may be useful.
escaping required by the grok syntax. Using the TOML multi-line literal
syntax (`'''`) may be useful.
The following config examples will parse this input file:

View File

@ -8,10 +8,10 @@ menu:
parent: Input data formats
---
The grok data format parses line delimited data using a regular expression-like
The grok data format parses line-delimited data using a regular expression-like
language.
If you need to become familiar with grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
For an introduction to grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
in the Logstash documentation. The grok parser uses a slightly modified version of logstash "grok"
patterns, using the format:
@ -65,12 +65,11 @@ See https://golang.org/pkg/time/#Parse for more details.
Telegraf has many of its own [built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/parsers/grok/influx_patterns.go),
as well as support for most of
[logstash's builtin patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns.
[Logstash's core patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns).
_Golang regular expressions do not support lookahead or lookbehind.
logstash patterns that depend on these are not supported._
Logstash patterns that depend on these are not supported._
If you need help building patterns to match your logs, the
[Grok Debugger application](https://grokdebug.herokuapp.com) might be helpful.
If you need help building patterns to match your logs, [Grok Constructor](https://grokconstructor.appspot.com/) might be helpful.
## Configuration
@ -168,8 +167,8 @@ grok will offset the timestamp accordingly.
When saving patterns to the configuration file, keep in mind the different TOML
[string](https://github.com/toml-lang/toml#string) types and the escaping
rules for each. These escaping rules must be applied in addition to the
escaping required by the grok syntax. Using the Multi-line line literal
syntax with `'''` may be useful.
escaping required by the grok syntax. Using the TOML multi-line literal
syntax (`'''`) may be useful.
The following config examples will parse this input file:

View File

@ -8,10 +8,10 @@ menu:
parent: Input data formats
---
The grok data format parses line delimited data using a regular expression-like
The grok data format parses line-delimited data using a regular expression-like
language.
If you need to become familiar with grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
For an introduction to grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
in the Logstash documentation. The grok parser uses a slightly modified version of logstash "grok"
patterns, using the format:
@ -65,12 +65,11 @@ See https://golang.org/pkg/time/#Parse for more details.
Telegraf has many of its own [built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/parsers/grok/influx_patterns.go),
as well as support for most of
[logstash's builtin patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns.
[Logstash's core patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns).
_Golang regular expressions do not support lookahead or lookbehind.
logstash patterns that depend on these are not supported._
Logstash patterns that depend on these are not supported._
If you need help building patterns to match your logs, the
[Grok Debugger application](https://grokdebug.herokuapp.com) might be helpful.
If you need help building patterns to match your logs, [Grok Constructor](https://grokconstructor.appspot.com/) might be helpful.
## Configuration
@ -168,8 +167,8 @@ grok will offset the timestamp accordingly.
When saving patterns to the configuration file, keep in mind the different TOML
[string](https://github.com/toml-lang/toml#string) types and the escaping
rules for each. These escaping rules must be applied in addition to the
escaping required by the grok syntax. Using the Multi-line line literal
syntax with `'''` may be useful.
escaping required by the grok syntax. Using the TOML multi-line literal
syntax (`'''`) may be useful.
The following config examples will parse this input file:

View File

@ -8,10 +8,10 @@ menu:
parent: Input data formats (parsers)
---
The grok data format parses line delimited data using a regular expression-like
The grok data format parses line-delimited data using a regular expression-like
language.
If you need to become familiar with grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
For an introduction to grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
in the Logstash documentation. The grok parser uses a slightly modified version of logstash "grok"
patterns, using the format:
@ -65,12 +65,11 @@ See https://golang.org/pkg/time/#Parse for more details.
Telegraf has many of its own [built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/parsers/grok/influx_patterns.go),
as well as support for most of
[logstash's builtin patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns.
[Logstash's core patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns).
_Golang regular expressions do not support lookahead or lookbehind.
logstash patterns that depend on these are not supported._
Logstash patterns that depend on these are not supported._
If you need help building patterns to match your logs, the
[Grok Debugger application](https://grokdebug.herokuapp.com) might be helpful.
If you need help building patterns to match your logs, [Grok Constructor](https://grokconstructor.appspot.com/) might be helpful.
## Configuration
@ -168,8 +167,8 @@ grok will offset the timestamp accordingly.
When saving patterns to the configuration file, keep in mind the different TOML
[string](https://github.com/toml-lang/toml#string) types and the escaping
rules for each. These escaping rules must be applied in addition to the
escaping required by the grok syntax. Using the Multi-line line literal
syntax with `'''` may be useful.
escaping required by the grok syntax. Using the TOML multi-line literal
syntax (`'''`) may be useful.
The following config examples will parse this input file:

View File

@ -8,10 +8,10 @@ menu:
parent: Input data formats
---
The grok data format parses line delimited data using a regular expression-like
The grok data format parses line-delimited data using a regular expression-like
language.
If you need to become familiar with grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
For an introduction to grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
in the Logstash documentation. The grok parser uses a slightly modified version of logstash "grok"
patterns, using the format:
@ -65,12 +65,11 @@ See https://golang.org/pkg/time/#Parse for more details.
Telegraf has many of its own [built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/parsers/grok/influx_patterns.go),
as well as support for most of
[logstash's builtin patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns.
[Logstash's core patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns).
_Golang regular expressions do not support lookahead or lookbehind.
logstash patterns that depend on these are not supported._
Logstash patterns that depend on these are not supported._
If you need help building patterns to match your logs, the
[Grok Debugger application](https://grokdebug.herokuapp.com) might be helpful.
If you need help building patterns to match your logs, [Grok Constructor](https://grokconstructor.appspot.com/) might be helpful.
## Configuration
@ -168,8 +167,8 @@ grok will offset the timestamp accordingly.
When saving patterns to the configuration file, keep in mind the different TOML
[string](https://github.com/toml-lang/toml#string) types and the escaping
rules for each. These escaping rules must be applied in addition to the
escaping required by the grok syntax. Using the Multi-line line literal
syntax with `'''` may be useful.
escaping required by the grok syntax. Using the TOML multi-line literal
syntax (`'''`) may be useful.
The following config examples will parse this input file:

View File

@ -8,10 +8,10 @@ menu:
parent: Input data formats
---
The grok data format parses line delimited data using a regular expression-like
The grok data format parses line-delimited data using a regular expression-like
language.
If you need to become familiar with grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
For an introduction to grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
in the Logstash documentation. The grok parser uses a slightly modified version of logstash "grok"
patterns, using the format:
@ -65,12 +65,11 @@ See https://golang.org/pkg/time/#Parse for more details.
Telegraf has many of its own [built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/parsers/grok/influx_patterns.go),
as well as support for most of
[logstash's builtin patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns.
[Logstash's core patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns).
_Golang regular expressions do not support lookahead or lookbehind.
logstash patterns that depend on these are not supported._
Logstash patterns that depend on these are not supported._
If you need help building patterns to match your logs, the
[Grok Debugger application](https://grokdebug.herokuapp.com) might be helpful.
If you need help building patterns to match your logs, [Grok Constructor](https://grokconstructor.appspot.com/) might be helpful.
## Configuration
@ -168,8 +167,8 @@ grok will offset the timestamp accordingly.
When saving patterns to the configuration file, keep in mind the different TOML
[string](https://github.com/toml-lang/toml#string) types and the escaping
rules for each. These escaping rules must be applied in addition to the
escaping required by the grok syntax. Using the Multi-line line literal
syntax with `'''` may be useful.
escaping required by the grok syntax. Using the TOML multi-line literal
syntax (`'''`) may be useful.
The following config examples will parse this input file:

View File

@ -8,10 +8,10 @@ menu:
parent: Input data formats
---
The grok data format parses line delimited data using a regular expression-like
The grok data format parses line-delimited data using a regular expression-like
language.
If you need to become familiar with grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
For an introduction to grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
in the Logstash documentation. The grok parser uses a slightly modified version of logstash "grok"
patterns, using the format:
@ -65,12 +65,11 @@ See https://golang.org/pkg/time/#Parse for more details.
Telegraf has many of its own [built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/parsers/grok/influx_patterns.go),
as well as support for most of
[logstash's builtin patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns.
[Logstash's core patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns).
_Golang regular expressions do not support lookahead or lookbehind.
logstash patterns that depend on these are not supported._
Logstash patterns that depend on these are not supported._
If you need help building patterns to match your logs, the
[Grok Debugger application](https://grokdebug.herokuapp.com) might be helpful.
If you need help building patterns to match your logs, [Grok Constructor](https://grokconstructor.appspot.com/) might be helpful.
## Configuration
@ -168,8 +167,8 @@ grok will offset the timestamp accordingly.
When saving patterns to the configuration file, keep in mind the different TOML
[string](https://github.com/toml-lang/toml#string) types and the escaping
rules for each. These escaping rules must be applied in addition to the
escaping required by the grok syntax. Using the Multi-line line literal
syntax with `'''` may be useful.
escaping required by the grok syntax. Using the TOML multi-line literal
syntax (`'''`) may be useful.
The following config examples will parse this input file:

View File

@ -8,10 +8,10 @@ menu:
parent: Input data formats
---
The grok data format parses line delimited data using a regular expression-like
The grok data format parses line-delimited data using a regular expression-like
language.
If you need to become familiar with grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
For an introduction to grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
in the Logstash documentation. The grok parser uses a slightly modified version of logstash "grok"
patterns, using the format:
@ -65,12 +65,11 @@ See https://golang.org/pkg/time/#Parse for more details.
Telegraf has many of its own [built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/parsers/grok/influx_patterns.go),
as well as support for most of
[logstash's builtin patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns.
[Logstash's core patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns).
_Golang regular expressions do not support lookahead or lookbehind.
logstash patterns that depend on these are not supported._
Logstash patterns that depend on these are not supported._
If you need help building patterns to match your logs, the
[Grok Debugger application](https://grokdebug.herokuapp.com) might be helpful.
If you need help building patterns to match your logs, [Grok Constructor](https://grokconstructor.appspot.com/) might be helpful.
## Configuration
@ -168,8 +167,8 @@ grok will offset the timestamp accordingly.
When saving patterns to the configuration file, keep in mind the different TOML
[string](https://github.com/toml-lang/toml#string) types and the escaping
rules for each. These escaping rules must be applied in addition to the
escaping required by the grok syntax. Using the Multi-line line literal
syntax with `'''` may be useful.
escaping required by the grok syntax. Using the TOML multi-line literal
syntax (`'''`) may be useful.
The following config examples will parse this input file:

View File

@ -8,10 +8,10 @@ menu:
parent: Input data formats
---
The grok data format parses line delimited data using a regular expression-like
The grok data format parses line-delimited data using a regular expression-like
language.
If you need to become familiar with grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
For an introduction to grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
in the Logstash documentation. The grok parser uses a slightly modified version of logstash "grok"
patterns, using the format:
@ -65,12 +65,11 @@ See https://golang.org/pkg/time/#Parse for more details.
Telegraf has many of its own [built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/parsers/grok/influx_patterns.go),
as well as support for most of
[logstash's builtin patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns.
[Logstash's core patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns).
_Golang regular expressions do not support lookahead or lookbehind.
logstash patterns that depend on these are not supported._
Logstash patterns that depend on these are not supported._
If you need help building patterns to match your logs, the
[Grok Debugger application](https://grokdebug.herokuapp.com) might be helpful.
If you need help building patterns to match your logs, [Grok Constructor](https://grokconstructor.appspot.com/) might be helpful.
## Configuration
@ -168,8 +167,8 @@ grok will offset the timestamp accordingly.
When saving patterns to the configuration file, keep in mind the different TOML
[string](https://github.com/toml-lang/toml#string) types and the escaping
rules for each. These escaping rules must be applied in addition to the
escaping required by the grok syntax. Using the Multi-line line literal
syntax with `'''` may be useful.
escaping required by the grok syntax. Using the TOML multi-line literal
syntax (`'''`) may be useful.
The following config examples will parse this input file:

View File

@ -8,10 +8,10 @@ menu:
parent: Input data formats
---
The grok data format parses line delimited data using a regular expression-like
The grok data format parses line-delimited data using a regular expression-like
language.
If you need to become familiar with grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
For an introduction to grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
in the Logstash documentation. The grok parser uses a slightly modified version of logstash "grok"
patterns, using the format:
@ -65,12 +65,11 @@ See https://golang.org/pkg/time/#Parse for more details.
Telegraf has many of its own [built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/parsers/grok/influx_patterns.go),
as well as support for most of
[logstash's builtin patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns.
[Logstash's core patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns).
_Golang regular expressions do not support lookahead or lookbehind.
logstash patterns that depend on these are not supported._
Logstash patterns that depend on these are not supported._
If you need help building patterns to match your logs, the
[Grok Debugger application](https://grokdebug.herokuapp.com) might be helpful.
If you need help building patterns to match your logs, [Grok Constructor](https://grokconstructor.appspot.com/) might be helpful.
## Configuration
@ -168,8 +167,8 @@ grok will offset the timestamp accordingly.
When saving patterns to the configuration file, keep in mind the different TOML
[string](https://github.com/toml-lang/toml#string) types and the escaping
rules for each. These escaping rules must be applied in addition to the
escaping required by the grok syntax. Using the Multi-line line literal
syntax with `'''` may be useful.
escaping required by the grok syntax. Using the TOML multi-line literal
syntax (`'''`) may be useful.
The following config examples will parse this input file:

View File

@ -9,10 +9,10 @@ menu:
parent: Input data formats
---
The grok data format parses line delimited data using a regular expression-like
The grok data format parses line-delimited data using a regular expression-like
language.
If you need to become familiar with grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
For an introduction to grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
in the Logstash documentation. The grok parser uses a slightly modified version of logstash "grok"
patterns, using the format:
@ -66,12 +66,11 @@ See https://golang.org/pkg/time/#Parse for more details.
Telegraf has many of its own [built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/parsers/grok/influx_patterns.go),
as well as support for most of
[logstash's builtin patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns.
[Logstash's core patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns).
_Golang regular expressions do not support lookahead or lookbehind.
logstash patterns that depend on these are not supported._
Logstash patterns that depend on these are not supported._
If you need help building patterns to match your logs, the
[Grok Debugger application](https://grokdebug.herokuapp.com) might be helpful.
If you need help building patterns to match your logs, [Grok Constructor](https://grokconstructor.appspot.com/) might be helpful.
## Configuration
@ -169,8 +168,8 @@ grok will offset the timestamp accordingly.
When saving patterns to the configuration file, keep in mind the different TOML
[string](https://github.com/toml-lang/toml#string) types and the escaping
rules for each. These escaping rules must be applied in addition to the
escaping required by the grok syntax. Using the Multi-line line literal
syntax with `'''` may be useful.
escaping required by the grok syntax. Using the TOML multi-line literal
syntax (`'''`) may be useful.
The following config examples will parse this input file:

View File

@ -9,10 +9,10 @@ menu:
parent: Input data formats
---
The grok data format parses line delimited data using a regular expression-like
The grok data format parses line-delimited data using a regular expression-like
language.
If you need to become familiar with grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
For an introduction to grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
in the Logstash documentation. The grok parser uses a slightly modified version of logstash "grok"
patterns, using the format:
@ -66,12 +66,11 @@ See https://golang.org/pkg/time/#Parse for more details.
Telegraf has many of its own [built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/parsers/grok/influx_patterns.go),
as well as support for most of
[logstash's builtin patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns.
[Logstash's core patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns).
_Golang regular expressions do not support lookahead or lookbehind.
logstash patterns that depend on these are not supported._
Logstash patterns that depend on these are not supported._
If you need help building patterns to match your logs, the
[Grok Debugger application](https://grokdebug.herokuapp.com) might be helpful.
If you need help building patterns to match your logs, [Grok Constructor](https://grokconstructor.appspot.com/) might be helpful.
## Configuration
@ -169,8 +168,8 @@ grok will offset the timestamp accordingly.
When saving patterns to the configuration file, keep in mind the different TOML
[string](https://github.com/toml-lang/toml#string) types and the escaping
rules for each. These escaping rules must be applied in addition to the
escaping required by the grok syntax. Using the Multi-line line literal
syntax with `'''` may be useful.
escaping required by the grok syntax. Using the TOML multi-line literal
syntax (`'''`) may be useful.
The following config examples will parse this input file:

View File

@ -9,10 +9,10 @@ menu:
parent: Input data formats
---
The grok data format parses line delimited data using a regular expression-like
The grok data format parses line-delimited data using a regular expression-like
language.
If you need to become familiar with grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
For an introduction to grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
in the Logstash documentation. The grok parser uses a slightly modified version of logstash "grok"
patterns, using the format:
@ -66,12 +66,11 @@ See https://golang.org/pkg/time/#Parse for more details.
Telegraf has many of its own [built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/parsers/grok/influx_patterns.go),
as well as support for most of
[logstash's builtin patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns.
[Logstash's core patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns).
_Golang regular expressions do not support lookahead or lookbehind.
logstash patterns that depend on these are not supported._
Logstash patterns that depend on these are not supported._
If you need help building patterns to match your logs, the
[Grok Debugger application](https://grokdebug.herokuapp.com) might be helpful.
If you need help building patterns to match your logs, [Grok Constructor](https://grokconstructor.appspot.com/) might be helpful.
## Configuration
@ -169,8 +168,8 @@ grok will offset the timestamp accordingly.
When saving patterns to the configuration file, keep in mind the different TOML
[string](https://github.com/toml-lang/toml#string) types and the escaping
rules for each. These escaping rules must be applied in addition to the
escaping required by the grok syntax. Using the Multi-line line literal
syntax with `'''` may be useful.
escaping required by the grok syntax. Using the TOML multi-line literal
syntax (`'''`) may be useful.
The following config examples will parse this input file:

View File

@ -9,10 +9,10 @@ menu:
parent: Input data formats
---
The grok data format parses line delimited data using a regular expression-like
The grok data format parses line-delimited data using a regular expression-like
language.
If you need to become familiar with grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
For an introduction to grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
in the Logstash documentation. The grok parser uses a slightly modified version of logstash "grok"
patterns, using the format:
@ -66,12 +66,11 @@ See https://golang.org/pkg/time/#Parse for more details.
Telegraf has many of its own [built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/parsers/grok/influx_patterns.go),
as well as support for most of
[logstash's builtin patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns.
[Logstash's core patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns).
_Golang regular expressions do not support lookahead or lookbehind.
logstash patterns that depend on these are not supported._
Logstash patterns that depend on these are not supported._
If you need help building patterns to match your logs, the
[Grok Debugger application](https://grokdebug.herokuapp.com) might be helpful.
If you need help building patterns to match your logs, [Grok Constructor](https://grokconstructor.appspot.com/) might be helpful.
## Configuration
@ -169,8 +168,8 @@ grok will offset the timestamp accordingly.
When saving patterns to the configuration file, keep in mind the different TOML
[string](https://github.com/toml-lang/toml#string) types and the escaping
rules for each. These escaping rules must be applied in addition to the
escaping required by the grok syntax. Using the Multi-line line literal
syntax with `'''` may be useful.
escaping required by the grok syntax. Using the TOML multi-line literal
syntax (`'''`) may be useful.
The following config examples will parse this input file:

View File

@ -9,10 +9,10 @@ menu:
parent: Input data formats
---
The grok data format parses line delimited data using a regular expression-like
The grok data format parses line-delimited data using a regular expression-like
language.
If you need to become familiar with grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
For an introduction to grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
in the Logstash documentation. The grok parser uses a slightly modified version of logstash "grok"
patterns, using the format:
@ -66,12 +66,11 @@ See https://golang.org/pkg/time/#Parse for more details.
Telegraf has many of its own [built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/parsers/grok/influx_patterns.go),
as well as support for most of
[logstash's builtin patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns.
[Logstash's core patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns).
_Golang regular expressions do not support lookahead or lookbehind.
logstash patterns that depend on these are not supported._
Logstash patterns that depend on these are not supported._
If you need help building patterns to match your logs, the
[Grok Debugger application](https://grokdebug.herokuapp.com) might be helpful.
If you need help building patterns to match your logs, [Grok Constructor](https://grokconstructor.appspot.com/) might be helpful.
## Configuration
@ -169,8 +168,8 @@ grok will offset the timestamp accordingly.
When saving patterns to the configuration file, keep in mind the different TOML
[string](https://github.com/toml-lang/toml#string) types and the escaping
rules for each. These escaping rules must be applied in addition to the
escaping required by the grok syntax. Using the Multi-line line literal
syntax with `'''` may be useful.
escaping required by the grok syntax. Using the TOML multi-line literal
syntax (`'''`) may be useful.
The following config examples will parse this input file:

View File

@ -8,10 +8,10 @@ menu:
parent: Input data formats
---
The grok data format parses line delimited data using a regular expression-like
The grok data format parses line-delimited data using a regular expression-like
language.
If you need to become familiar with grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
For an introduction to grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
in the Logstash documentation. The grok parser uses a slightly modified version of logstash "grok"
patterns, using the format:
@ -65,12 +65,11 @@ See https://golang.org/pkg/time/#Parse for more details.
Telegraf has many of its own [built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/parsers/grok/influx_patterns.go),
as well as support for most of
[logstash's builtin patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns.
[Logstash's core patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns).
_Golang regular expressions do not support lookahead or lookbehind.
logstash patterns that depend on these are not supported._
Logstash patterns that depend on these are not supported._
If you need help building patterns to match your logs, the
[Grok Debugger application](https://grokdebug.herokuapp.com) might be helpful.
If you need help building patterns to match your logs, [Grok Constructor](https://grokconstructor.appspot.com/) might be helpful.
## Configuration
@ -168,8 +167,8 @@ grok will offset the timestamp accordingly.
When saving patterns to the configuration file, keep in mind the different TOML
[string](https://github.com/toml-lang/toml#string) types and the escaping
rules for each. These escaping rules must be applied in addition to the
escaping required by the grok syntax. Using the Multi-line line literal
syntax with `'''` may be useful.
escaping required by the grok syntax. Using the TOML multi-line literal
syntax (`'''`) may be useful.
The following config examples will parse this input file:

View File

@ -23,9 +23,7 @@ const INFLUX_BUCKET_AUTH = process.env.INFLUX_BUCKET_AUTH
|> last()`
const devices = {}
console.log(`*** QUERY *** \n ${fluxQuery}`)
return await new Promise((resolve, reject) => {
queryApi.queryRows(fluxQuery, {
next(row, tableMeta) {
for await (const {values: row, tableMeta} of queryApi.iterateRows(fluxQuery)) {
const o = tableMeta.toObject(row)
const deviceId = o.deviceId
if (!deviceId) {
@ -36,14 +34,8 @@ const INFLUX_BUCKET_AUTH = process.env.INFLUX_BUCKET_AUTH
if (!device.updatedAt || device.updatedAt < o._time) {
device.updatedAt = o._time
}
},
error: reject,
complete() {
console.log(JSON.stringify(devices))
resolve(devices)
},
})
})
}
return devices
}

View File

@ -21,21 +21,14 @@ const queryApi = new InfluxDB({url, token}).getQueryApi(org)
/** To avoid SQL injection, use a string literal for the query. */
const fluxQuery = 'from(bucket:"air_sensor") |> range(start: 0) |> filter(fn: (r) => r._measurement == "temperature")'
const fluxObserver = {
next(row, tableMeta) {
const o = tableMeta.toObject(row)
const myQuery = async () => {
for await (const {values, tableMeta} of queryApi.iterateRows(fluxQuery)) {
const o = tableMeta.toObject(values)
console.log(
`${o._time} ${o._measurement} in ${o.region} (${o.sensor_id}): ${o._field}=${o._value}`
`${o._time} ${o._measurement} in '${o.location}' (${o.sensor_id}): ${o._field}=${o._value}`
)
},
error(error) {
console.error(error)
console.log('\nFinished ERROR')
},
complete() {
console.log('\nFinished SUCCESS')
}
}
/** Execute a query and receive line table metadata and rows. */
queryApi.queryRows(fluxQuery, fluxObserver)
myQuery()
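Because `myQuery` is `async`, a failed query rejects the Promise it returns; attaching a handler (a small addition, not part of the original example) avoids an unhandled rejection:

```js
// Report query failures and exit with a non-zero status.
myQuery().catch((error) => {
  console.error(error)
  process.exit(1)
})
```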