Merge branch 'master' into fix-hardware-sizing-link
commit 1908ea4dde
@@ -2395,11 +2395,6 @@ components:
 - invalid
 readOnly: true
 type: string
-maxLength:
-description: Max length in bytes for a body of line-protocol.
-format: int32
-readOnly: true
-type: integer
 message:
 description: Message is a human-readable message.
 readOnly: true
@@ -2407,7 +2402,6 @@ components:
 required:
 - code
 - message
-- maxLength
 Link:
 description: URI of resource.
 format: uri
@@ -9989,7 +9983,7 @@ paths:
 description: Unexpected error
 summary: List scripts
 tags:
-- Invokable Scripts
+- Invocable Scripts
 post:
 operationId: PostScripts
 requestBody:
@@ -10011,7 +10005,7 @@ paths:
 description: Unexpected error
 summary: Create a script
 tags:
-- Invokable Scripts
+- Invocable Scripts
 /api/v2/scripts/{scriptID}:
 delete:
 description: Deletes a script and all associated records.
@@ -10031,9 +10025,9 @@ paths:
 description: Unexpected error
 summary: Delete a script
 tags:
-- Invokable Scripts
+- Invocable Scripts
 get:
-description: Uses script ID to retrieve details of an invokable script.
+description: Uses script ID to retrieve details of an invocable script.
 operationId: GetScriptsID
 parameters:
 - description: The script ID.
@@ -10054,10 +10048,10 @@ paths:
 description: Unexpected error
 summary: Retrieve a script
 tags:
-- Invokable Scripts
+- Invocable Scripts
 patch:
 description: >
-Updates properties (`name`, `description`, and `script`) of an invokable
+Updates properties (`name`, `description`, and `script`) of an invocable
 script.
 operationId: PatchScriptsID
 parameters:
@@ -10086,7 +10080,7 @@ paths:
 description: Unexpected error
 summary: Update a script
 tags:
-- Invokable Scripts
+- Invocable Scripts
 /api/v2/scripts/{scriptID}/invoke:
 post:
 description: >-
@@ -10116,7 +10110,7 @@ paths:
 description: Unexpected error
 summary: Invoke a script
 tags:
-- Invokable Scripts
+- Invocable Scripts
 /api/v2/setup:
 get:
 description: >-
@@ -12127,6 +12121,10 @@ paths:
 format.
 
+
+InfluxDB Cloud enforces rate and size limits different from InfluxDB
+OSS. For details, see Responses.
+
 
 For more information and examples, see the following:
 
 - [Write data with the InfluxDB
@@ -12289,10 +12287,19 @@ paths:
 `bucket`, and name.
 '413':
 content:
+application/json:
+examples:
+dataExceedsSizeLimitOSS:
+summary: InfluxDB OSS response
+value: >
+{"code":"request too large","message":"unable to read data:
+points batch is too large"}
+schema:
+$ref: '#/components/schemas/LineProtocolLengthError'
 text/html:
 examples:
 dataExceedsSizeLimit:
-summary: Cloud response
+summary: InfluxDB Cloud response
 value: |
 <html>
 <head><title>413 Request Entity Too Large</title></head>
@@ -12304,13 +12311,21 @@ paths:
 </html>
 schema:
 type: string
-description: >-
-Request entity too large. The payload exceeded the 50MB size limit.
-InfluxDB rejected the batch and did not write any data.
+description: >
+The request payload is too large. InfluxDB rejected the batch and
+did not write any data.
+
+#### InfluxDB Cloud:
+- returns this error if the payload exceeds the 50MB size limit.
+- returns `Content-Type: text/html` for this error.
+
+#### InfluxDB OSS:
+- returns this error only if the [Go (golang) `ioutil.ReadAll()`](https://pkg.go.dev/io/ioutil#ReadAll) function raises an error.
+- returns `Content-Type: application/json` for this error.
 '429':
 description: >-
-The token is temporarily over quota. The Retry-After header
-describes when to try the write again.
+InfluxDB Cloud only. The token is temporarily over quota. The
+Retry-After header describes when to try the write again.
 headers:
 Retry-After:
 description: >-
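The `429` description above says the Retry-After header tells clients when to retry the write. A minimal shell sketch of parsing that header out of a captured response; the response text here is a stand-in for what `curl -i` on a rate-limited write might return, not real server output:

```shell
# Extract the Retry-After value from a captured HTTP response.
# The response below is synthetic; header matching is case-insensitive.
response='HTTP/2 429
retry-after: 30

{"code":"too many requests","message":"org over quota"}'

retry_after=$(printf '%s\n' "$response" \
  | awk 'tolower($1) == "retry-after:" { print $2 }' \
  | tr -d '\r')

echo "retry in ${retry_after}s"   # a real client would sleep, then retry
```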
@@ -12387,7 +12402,7 @@ tags:
 - DBRPs
 - Delete
 - DemoDataBuckets
-- Invokable Scripts
+- Invocable Scripts
 - Labels
 - Limits
 - NotificationEndpoints
@@ -12519,7 +12534,7 @@ x-tagGroups:
 - DBRPs
 - Delete
 - DemoDataBuckets
-- Invokable Scripts
+- Invocable Scripts
 - Labels
 - Limits
 - NotificationEndpoints
@@ -2294,11 +2294,6 @@ components:
 - invalid
 readOnly: true
 type: string
-maxLength:
-description: Max length in bytes for a body of line-protocol.
-format: int32
-readOnly: true
-type: integer
 message:
 description: Message is a human-readable message.
 readOnly: true
@@ -2306,7 +2301,6 @@ components:
 required:
 - code
 - message
-- maxLength
 Link:
 description: URI of resource.
 format: uri
@@ -12717,7 +12711,8 @@ paths:
 
 To write data into InfluxDB, you need the following:
 
-- **organization** – _See [View
+
+- **organization name or ID** – _See [View
 organizations](https://docs.influxdata.com/influxdb/v2.1/organizations/view-orgs/#view-your-organization-id)
 for instructions on viewing your organization ID._
 
@@ -12736,6 +12731,10 @@ paths:
 format.
 
+
+InfluxDB Cloud enforces rate and size limits different from InfluxDB
+OSS. For details, see Responses.
+
 
 For more information and examples, see the following:
 
 - [Write data with the InfluxDB
@@ -12757,8 +12756,8 @@ paths:
 schema:
 default: identity
 description: >-
-The header value specifies that the line protocol in the request
-body is encoded with gzip or not encoded with identity.
+The content coding. Use `gzip` for compressed data or `identity`
+for unmodified, uncompressed data.
 enum:
 - gzip
 - identity
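The content-coding description above maps directly onto a write request. A sketch with curl; the host, `ORG`, `BUCKET`, and `TOKEN` values are placeholders, so the final request is expected to fail unless an InfluxDB instance is actually listening:

```shell
# Gzip a small line-protocol batch and declare the coding with
# Content-Encoding: gzip; use identity (or omit the header) for
# uncompressed bodies. All names below are placeholders.
printf 'mem,host=host1 used_percent=23.4 1641024000000000000\n' > /tmp/points.lp
gzip -c /tmp/points.lp > /tmp/points.lp.gz

curl --silent --request POST \
  "http://localhost:8086/api/v2/write?org=ORG&bucket=BUCKET&precision=ns" \
  --header "Authorization: Token TOKEN" \
  --header "Content-Encoding: gzip" \
  --data-binary @/tmp/points.lp.gz || true
```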
@@ -12848,9 +12847,7 @@ paths:
 application/json:
 examples:
 measurementSchemaFieldTypeConflict:
-summary: >-
-Example of a field type conflict thrown by an explicit
-bucket schema
+summary: Field type conflict thrown by an explicit bucket schema
 value:
 code: invalid
 message: >-
@@ -12901,13 +12898,52 @@ paths:
 '413':
+content:
+application/json:
+examples:
+dataExceedsSizeLimitOSS:
+summary: InfluxDB OSS response
+value: >
+{"code":"request too large","message":"unable to read data:
+points batch is too large"}
+schema:
+$ref: '#/components/schemas/LineProtocolLengthError'
+text/html:
+examples:
+dataExceedsSizeLimit:
+summary: InfluxDB Cloud response
+value: |
+<html>
+<head><title>413 Request Entity Too Large</title></head>
+<body>
+<center><h1>413 Request Entity Too Large</h1></center>
+<hr>
+<center>nginx</center>
+</body>
+</html>
+schema:
+type: string
 description: >
-All request data was rejected and not written. InfluxDB OSS only
-returns this error if the [Go (golang)
-`ioutil.ReadAll()`](https://pkg.go.dev/io/ioutil#ReadAll) function
-raises an error.
+The request payload is too large. InfluxDB rejected the batch and
+did not write any data.
+
+#### InfluxDB Cloud:
+- returns this error if the payload exceeds the 50MB size limit.
+- returns `Content-Type: text/html` for this error.
+
+#### InfluxDB OSS:
+- returns this error only if the [Go (golang) `ioutil.ReadAll()`](https://pkg.go.dev/io/ioutil#ReadAll) function raises an error.
+- returns `Content-Type: application/json` for this error.
 '429':
 description: >-
 InfluxDB Cloud only. The token is temporarily over quota. The
 Retry-After header describes when to try the write again.
 headers:
 Retry-After:
 description: >-
+A non-negative decimal integer indicating the seconds to delay
+after the response is received.
+schema:
+format: int32
+type: integer
+'500':
+content:
+application/json:
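Given the 50 MB limit called out in the `413` description, a client can pre-check batch size before sending. A sketch over placeholder data:

```shell
# Compare an uncompressed line-protocol batch against the 50 MB
# payload limit described for InfluxDB Cloud's 413 response.
limit=$((50 * 1024 * 1024))

# Placeholder batch with two sample points.
printf 'cpu,host=a usage=64 1641024000000000000\n' >  /tmp/batch.lp
printf 'cpu,host=b usage=23 1641024000000000000\n' >> /tmp/batch.lp

size=$(wc -c < /tmp/batch.lp | tr -d ' ')
if [ "$size" -gt "$limit" ]; then
  echo "batch too large (${size} bytes); split it before writing"
else
  echo "batch ok (${size} bytes)"
fi
```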
@@ -29,7 +29,7 @@ paths:
 schema:
 type: string
 required: true
-description: The bucket to write to. If the specified bucket does not exist, a bucket is created with a default 3 day retention policy.
+description: The bucket to write to. If none exist a bucket will be created with a default 3 day retention policy.
 - in: query
 name: rp
 schema:
@@ -54,7 +54,7 @@ paths:
 "204":
 description: Write data is correctly formatted and accepted for writing to the bucket.
 "400":
-description: Line protocol was not in correct format, and no points were written. Response can be used to determine the first malformed line in the line-protocol body. All data in body was rejected and not written.
+description: Line protocol poorly formed and no points were written. Response can be used to determine the first malformed line in the body line-protocol. All data in body was rejected and not written.
 content:
 application/json:
 schema:
@@ -104,7 +104,7 @@ Use dot or bracket notation to reference the variable key inside of the `v` reco
 
 ```js
 from(bucket: v.bucket)
-|> range(start: v.timeRangeStart, stop: v.timeRangeStart)
+|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
 |> filter(fn: (r) => r._field == v["Field key"])
 |> aggregateWindow(every: v.windowPeriod, fn: v.aggregateFunction)
 ```
@@ -15,6 +15,12 @@ Use fine-grained authorization (FGA) in InfluxDB Enterprise to control user acce
 
 You must have [admin permissions](/influxdb/v1.7/administration/authentication_and_authorization/#admin-user-management) to set up FGA.
 
+{{% warn %}}
+#### FGA does not apply to Flux
+FGA does not restrict actions performed by Flux queries (both read and write).
+If using FGA, we recommend [disabling Flux](/enterprise_influxdb/v{{< current-version >}}/flux/installation/).
+{{% /warn %}}
+
 ## Set up fine-grained authorization
 
 1. [Enable authentication](/influxdb/v1.7/administration/authentication_and_authorization/#set-up-authentication) in your InfluxDB configuration file.
@@ -17,6 +17,12 @@ Use fine-grained authorization (FGA) in InfluxDB Enterprise to control user acce
 
 You must have [admin permissions](/influxdb/v1.8/administration/authentication_and_authorization/#admin-user-management) to set up FGA.
 
+{{% warn %}}
+#### FGA does not apply to Flux
+FGA does not restrict actions performed by Flux queries (both read and write).
+If using FGA, we recommend [disabling Flux](/enterprise_influxdb/v{{< current-version >}}/flux/installation/).
+{{% /warn %}}
+
 ## Set up fine-grained authorization
 
 1. [Enable authentication](/influxdb/v1.8/administration/authentication_and_authorization/#set-up-authentication) in your InfluxDB configuration file.
@@ -10,7 +10,7 @@ weight: 42
 ---
 
 - [Overview](#overview)
-- [API examples](#user-and-privilege-management-over-the-influxd-meta-api)
+- [API examples](#user-and-privilege-management-over-the-influxdb-enterprise-meta-api)
 
 ## Overview
 
@@ -130,6 +130,179 @@ See [Structured logging](/enterprise_influxdb/v1.9/administration/logs/#structur
 
 ## Tracing
 
-Logging has been enhanced, starting in InfluxDB 1.5, to provide tracing of important InfluxDB operations. Tracing is useful for error reporting and discovering performance bottlenecks.
+Logging has been enhanced to provide tracing of important InfluxDB operations.
+Tracing is useful for error reporting and discovering performance bottlenecks.
 
 See [Tracing](/enterprise_influxdb/v1.9/administration/logs/#tracing) in the InfluxDB OSS documentation.
+
+### Logging keys used in tracing
+
+#### Tracing identifier key
+
+The `trace_id` key specifies a unique identifier for a specific instance of a trace.
+You can use this key to filter and correlate all related log entries for an operation.
+
+All operation traces include consistent starting and ending log entries, with the same message (`msg`) describing the operation (e.g., "TSM compaction"), but adding the appropriate `op_event` context (either `start` or `end`).
+For an example, see [Finding all trace log entries for an InfluxDB operation](#finding-all-trace-log-entries-for-an-influxdb-operation).
+
+**Example:** `trace_id=06R0P94G000`
+
+#### Operation keys
+
+The following operation keys identify an operation's name, the start and end timestamps, and the elapsed execution time.
+
+##### `op_name`
+Unique identifier for an operation.
+You can filter on all operations of a specific name.
+
+**Example:** `op_name=tsm1_compact_group`
+
+##### `op_event`
+Specifies the start and end of an event.
+The two possible values, `(start)` or `(end)`, are used to indicate when an operation started or ended.
+For example, you can grep by values in `op_name` AND `op_event` to find all starting operation log entries.
+For an example of this, see [Finding all starting log entries](#finding-all-starting-operation-log-entries).
+
+**Example:** `op_event=start`
+
+##### `op_elapsed`
+Duration of the operation execution.
+Logged with the ending trace log entry.
+Valid duration units are `ns`, `µs`, `ms`, and `s`.
+
+**Example:** `op_elapsed=352ms`
+
+#### Log identifier context key
+
+The log identifier key (`log_id`) lets you easily identify _every_ log entry for a single execution of an `influxd` process.
+There are other ways a log file could be split by a single execution, but the consistent `log_id` eases the searching of log aggregation services.
+
+**Example:** `log_id=06QknqtW000`
+
+#### Database context keys
+
+- **db\_instance**: Database name
+- **db\_rp**: Retention policy name
+- **db\_shard\_id**: Shard identifier
+- **db\_shard\_group**: Shard group identifier
+
+### Tooling
+
+Here are a couple of popular tools available for processing and filtering log files output in `logfmt` or `json` formats.
+
+#### hutils
+
+The [hutils](https://blog.heroku.com/hutils-explore-your-structured-data-logs) utility collection, provided by Heroku, provides tools for working with `logfmt`-encoded logs, including:
+
+- **lcut**: Extracts values from a `logfmt` trace based on a specified field name.
+- **lfmt**: Prettifies `logfmt` lines as they emerge from a stream, and highlights their key sections.
+- **ltap**: Accesses messages from log providers in a consistent way to allow easy parsing by other utilities that operate on `logfmt` traces.
+- **lviz**: Visualizes `logfmt` output by building a tree out of a dataset, combining common sets of key-value pairs into shared parent nodes.
+
+#### lnav (Log File Navigator)
+
+The [lnav (Log File Navigator)](http://lnav.org) is an advanced log file viewer useful for watching and analyzing your log files from a terminal.
+The lnav viewer provides a single log view, automatic log format detection, filtering, timeline view, pretty-print view, and querying logs using SQL.
+
+### Operations
+
+The following operations, listed by their operation name (`op_name`), are traced in InfluxDB internal logs and available for use without changes in logging level.
+
+#### Initial opening of data files
+
+The `tsdb_open` operation traces include all events related to the initial opening of the `tsdb_store`.
+
+#### Retention policy shard deletions
+
+The `retention.delete_check` operation includes all shard deletions related to the retention policy.
+
+#### TSM snapshotting in-memory cache to disk
+
+The `tsm1_cache_snapshot` operation represents the snapshotting of the TSM in-memory cache to disk.
+
+#### TSM compaction strategies
+
+The `tsm1_compact_group` operation includes all trace log entries related to TSM compaction strategies and displays the related TSM compaction strategy keys:
+
+- **tsm1\_strategy**: level or full
+- **tsm1\_level**: 1, 2, or 3
+- **tsm\_optimize**: true or false
+
+#### Series file compactions
+
+The `series_partition_compaction` operation includes all trace log entries related to series file compactions.
+
+#### Continuous query execution (if logging enabled)
+
+The `continuous_querier_execute` operation includes all continuous query executions, if logging is enabled.
+
+#### TSI log file compaction
+
+The `tsi1_compact_log_file` operation includes all trace log entries related to log file compactions.
+
+#### TSI level compaction
+
+The `tsi1_compact_to_level` operation includes all trace log entries for TSI level compactions.
+
+### Tracing examples
+
+#### Finding all trace log entries for an InfluxDB operation
+
+In the example below, you can see the log entries for all trace operations related to a "TSM compaction" process.
+Note that the initial entry shows the message "TSM compaction (start)" and the final entry displays the message "TSM compaction (end)".
+
+{{% note %}}
+Log entries were grepped using the `trace_id` value, and then the specified key values were displayed using `lcut` (an `hutils` tool).
+{{% /note %}}
+
+```
+$ grep "06QW92x0000" influxd.log | lcut ts lvl msg strategy level
+2018-02-21T20:18:56.880065Z info TSM compaction (start) full
+2018-02-21T20:18:56.880162Z info Beginning compaction full
+2018-02-21T20:18:56.880185Z info Compacting file full
+2018-02-21T20:18:56.880211Z info Compacting file full
+2018-02-21T20:18:56.880226Z info Compacting file full
+2018-02-21T20:18:56.880254Z info Compacting file full
+2018-02-21T20:19:03.928640Z info Compacted file full
+2018-02-21T20:19:03.928687Z info Finished compacting files full
+2018-02-21T20:19:03.928707Z info TSM compaction (end) full
+```
+
+#### Finding all starting operation log entries
+
+To find all starting operation log entries, you can grep by values in `op_name` AND `op_event`.
+In the following example, the grep returned 101 entries, so the result below only displays the first entry.
+In the example result entry, the timestamp, level, strategy, trace_id, op_name, and op_event values are included.
+
+```
+$ grep -F 'op_name=tsm1_compact_group' influxd.log | grep -F 'op_event=start'
+ts=2018-02-21T20:16:16.709953Z lvl=info msg="TSM compaction" log_id=06QVNNCG000 engine=tsm1 level=1 strategy=level trace_id=06QV~HHG000 op_name=tsm1_compact_group op_event=start
+...
+```
+
+Using the `lcut` utility (in hutils), the following command uses the previous `grep` command, but adds an `lcut` command to only display the keys and their values for keys that are not identical in all of the entries.
+The following example includes 19 examples of unique log entries displaying selected keys: `ts`, `strategy`, `level`, and `trace_id`.
+
+```
+$ grep -F 'op_name=tsm1_compact_group' influxd.log | grep -F 'op_event=start' | lcut ts strategy level trace_id | sort -u
+2018-02-21T20:16:16.709953Z level 1 06QV~HHG000
+2018-02-21T20:16:40.707452Z level 1 06QW0k0l000
+2018-02-21T20:17:04.711519Z level 1 06QW2Cml000
+2018-02-21T20:17:05.708227Z level 2 06QW2Gg0000
+2018-02-21T20:17:29.707245Z level 1 06QW3jQl000
+2018-02-21T20:17:53.711948Z level 1 06QW5CBl000
+2018-02-21T20:18:17.711688Z level 1 06QW6ewl000
+2018-02-21T20:18:56.880065Z full 06QW92x0000
+2018-02-21T20:20:46.202368Z level 3 06QWFizW000
+2018-02-21T20:21:25.292557Z level 1 06QWI6g0000
+2018-02-21T20:21:49.294272Z level 1 06QWJ_RW000
+2018-02-21T20:22:13.292489Z level 1 06QWL2B0000
+2018-02-21T20:22:37.292431Z level 1 06QWMVw0000
+2018-02-21T20:22:38.293320Z level 2 06QWMZqG000
+2018-02-21T20:23:01.293690Z level 1 06QWNygG000
+2018-02-21T20:23:25.292956Z level 1 06QWPRR0000
+2018-02-21T20:24:33.291664Z full 06QWTa2l000
+2018-02-21T21:12:08.017055Z full 06QZBpKG000
+2018-02-21T21:12:08.478200Z full 06QZBr7W000
+```
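The tracing keys documented above lend themselves to quick shell extraction. A sketch over a synthetic logfmt line (not captured `influxd` output):

```shell
# Pull trace_id and op_elapsed out of a logfmt-style entry using the
# key names documented above. The sample line and its values are synthetic.
line='ts=2018-02-21T20:19:03.928707Z lvl=info msg="TSM compaction" trace_id=06QW92x0000 op_name=tsm1_compact_group op_event=end op_elapsed=7048ms'

trace_id=$(printf '%s\n' "$line" | grep -o 'trace_id=[^ ]*' | cut -d= -f2)
elapsed=$(printf '%s\n' "$line" | grep -o 'op_elapsed=[^ ]*' | cut -d= -f2)

echo "trace ${trace_id} took ${elapsed}"
```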
@@ -16,10 +16,18 @@ related:
 
 Use fine-grained authorization (FGA) in InfluxDB Enterprise to control user access at the database, measurement, and series levels.
 
-> **Note:** InfluxDB OSS controls access at the database level only.
+{{% note %}}
+**Note:** InfluxDB OSS controls access at the database level only.
+{{% /note %}}
 
 You must have [admin permissions](/enterprise_influxdb/v1.9/administration/authentication_and_authorization/#admin-user-management) to set up FGA.
 
+{{% warn %}}
+#### FGA does not apply to Flux
+FGA does not restrict actions performed by Flux queries (both read and write).
+If using FGA, we recommend [disabling Flux](/enterprise_influxdb/v{{< current-version >}}/flux/installation/).
+{{% /warn %}}
+
 ## Set up fine-grained authorization
 
 1. [Enable authentication](/enterprise_influxdb/v1.9/administration/authentication_and_authorization/#set-up-authentication) in your InfluxDB configuration file.
@@ -34,19 +42,25 @@ You must have [admin permissions](/enterprise_influxdb/v1.9/administration/authe
 
 3. Ensure that you can access the **meta node** API (port 8091 by default).
 
-> In a typical cluster configuration, the HTTP ports for data nodes
-> (8086 by default) are exposed to clients but the meta node HTTP ports are not.
-> You may need to work with your network administrator to gain access to the meta node HTTP ports.
+{{% note %}}
+In a typical cluster configuration, the HTTP ports for data nodes
+(8086 by default) are exposed to clients but the meta node HTTP ports are not.
+You may need to work with your network administrator to gain access to the meta node HTTP ports.
+{{% /note %}}
 
 4. _(Optional)_ [Create roles](#manage-roles).
 Roles let you grant permissions to groups of users assigned to each role.
 
-> For an overview of how users and roles work in InfluxDB Enterprise, see [InfluxDB Enterprise users](/enterprise_influxdb/v1.9/features/users/).
+{{% note %}}
+For an overview of how users and roles work in InfluxDB Enterprise, see [InfluxDB Enterprise users](/enterprise_influxdb/v1.9/features/users/).
+{{% /note %}}
 
 5. [Set up restrictions](#manage-restrictions).
 Restrictions apply to all non-admin users.
 
-> Permissions (currently "read" and "write") may be restricted independently depending on the scenario.
+{{% note %}}
+Permissions (currently "read" and "write") may be restricted independently depending on the scenario.
+{{% /note %}}
 
 7. [Set up grants](#manage-grants) to remove restrictions for specified users and roles.
 
@@ -48,7 +48,7 @@ time(v: "2021-01-01T00:00:00Z")
 
 #### Convert an integer to a time value
 ```js
-int(v: 1609459200000000000)
+time(v: 1609459200000000000)
 
 // Returns 2021-01-01T00:00:00Z
 ```
@@ -4,6 +4,7 @@ seotitle: Change your InfluxDB Cloud password
 description: >
 To update your InfluxDB Cloud password, click the **Forgot Password** link on
 the [InfluxDB Cloud login page](https://cloud2.influxdata.com/login).
+Passwords must be at least 8 characters in length, and must not contain common words, personal information, or previous passwords.
 menu:
 influxdb_cloud:
 name: Change your password
@@ -17,3 +18,13 @@ To change or reset your InfluxDB Cloud password:
 2. Open the **InfluxCloud: Password Change Requested** email sent to the email
 address associated with your InfluxDB Cloud account, click the **Reset Password**
 button, and then enter and confirm a new password.
+
+### Password requirements
+
+Passwords must meet the following requirements:
+
+- Must be longer than 8 characters.
+- Must not contain personal information.
+- Must not be a common or previous password.
+
+These requirements follow the National Institute of Standards and Technology (NIST) standards for 2021.
@@ -77,7 +77,7 @@ The Usage-Based Plan uses the following pricing vectors to calculate InfluxDB Cl
 - Each individual operation—including queries, tasks, alerts, notifications, and Data Explorer activity—is one billable query operation.
 - Refreshing a dashboard with multiple cells will incur multiple query operations.
 - Failed operations aren’t counted.
-- **Data In** is the amount of data you’re writing into InfluxDB (measured in MB/second).
+- **Data In** is the amount of data you’re writing into InfluxDB (measured in MB).
 - **Storage** is the amount of data you’re storing in InfluxDB (measured in GB/hour).
 
 Discover how to [manage InfluxDB Cloud billing](/influxdb/cloud/account-management/billing/).
@@ -41,7 +41,7 @@ influx auth create [flags]
 | | `--read-orgs` | Grant permission to read organizations | | |
 | | `--read-tasks` | Grant permission to read tasks | | |
 | | `--read-telegrafs` | Grant permission to read Telegraf configurations | | |
-| | `--read-user` | Grant permission to read organization users | | |
+| | `--read-users` | Grant permission to read organization users | | |
 | | `--skip-verify` | Skip TLS certificate verification | | `INFLUX_SKIP_VERIFY` |
 | `-t` | `--token` | API token | string | `INFLUX_TOKEN` |
 | `-u` | `--user` | Username | string | |
@@ -55,7 +55,7 @@ influx auth create [flags]
 | | `--write-orgs` | Grant permission to create and update organizations | | |
 | | `--write-tasks` | Grant permission to create and update tasks | | |
 | | `--write-telegrafs` | Grant permission to create and update Telegraf configurations | | |
-| | `--write-user` | Grant permission to create and update organization users | | |
+| | `--write-users` | Grant permission to create and update organization users | | |
 
 ## Examples
 
@@ -100,7 +100,7 @@ influx auth create \
 --read-orgs \
 --read-tasks \
 --read-telegrafs \
---read-user \
+--read-users \
 --write-buckets \
 --write-checks \
 --write-dashboards \
@@ -110,7 +110,7 @@ influx auth create \
 --write-orgs \
 --write-tasks \
 --write-telegrafs \
---write-user
+--write-users
 ```
 
 ### Create an API token with read and write access to specific buckets
@@ -136,5 +136,5 @@ influx auth create \
 --read-orgs \
 --read-tasks \
 --read-telegrafs \
---read-user
+--read-users
 ```
@@ -154,7 +154,7 @@ brew list | grep influxdb-cli
 sudo cp influxdb2-client-{{< latest-patch cli=true >}}-linux-amd64/influx /usr/local/bin/
 
 # arm
-sudo cp influxdb2-client-{{< latest-patch cli=true >}}-linux-amd64/influx /usr/local/bin/
+sudo cp influxdb2-client-{{< latest-patch cli=true >}}-linux-arm64/influx /usr/local/bin/
 ```
 
 If you do not move the `influx` binary into your `$PATH`, prefix the executable