Merge branch 'master' of github.com:influxdata/docs-v2
commit 12ebe95cce
@@ -27,7 +27,7 @@ Depending on the volume of data to be protected and your application requirement
 - [Backup and restore utilities](#backup-and-restore-utilities) — For most applications
 - [Exporting and importing data](#exporting-and-importing-data) — For large datasets
 
-> **Note:** Use the [`backup` and `restore` utilities (InfluxDB OSS 1.5 and later)](/{{< latest "influxdb" "v1" >}}/administration/backup_and_restore/) to:
+> **Note:** Use the [`backup` and `restore` utilities (InfluxDB OSS 1.5 and later)](/enterprise_influxdb/v1.9/administration/backup-and-restore/) to:
 >
 > - Restore InfluxDB Enterprise backup files to InfluxDB OSS instances.
 > - Back up InfluxDB OSS data that can be restored in InfluxDB Enterprise clusters.
@@ -429,13 +429,13 @@ As an alternative to the standard backup and restore utilities, use the InfluxDB
 
 ### Exporting data
 
-Use the [`influx_inspect export` command](/{{< latest "influxdb" "v1" >}}/tools/influx_inspect#export) to export data in line protocol format from your InfluxDB Enterprise cluster. Options include:
+Use the [`influx_inspect export` command](/enterprise_influxdb/v1.9/tools/influx_inspect#export) to export data in line protocol format from your InfluxDB Enterprise cluster. Options include:
 
 - Exporting all, or specific, databases
 - Filtering with starting and ending timestamps
 - Using gzip compression for smaller files and faster exports
 
-For details on optional settings and usage, see [`influx_inspect export` command](/{{< latest "influxdb" "v1" >}}/tools/influx_inspect#export).
+For details on optional settings and usage, see [`influx_inspect export` command](/enterprise_influxdb/v1.9/tools/influx_inspect#export).
 
 In the following example, the export is filtered to include only one day of data and compressed for optimal speed and file size.
 
@@ -445,7 +445,7 @@ influx_inspect export -database myDB -compress -start 2019-05-19T00:00:00.000Z -
 
 ### Importing data
 
-After exporting the data in line protocol format, you can import the data using the [`influx -import` CLI command](/{{< latest "influxdb" "v1" >}}/tools/use-influx/#import).
+After exporting the data in line protocol format, you can import the data using the [`influx -import` CLI command](/enterprise_influxdb/v1.9/tools/influx-cli/use-influx/#import-data-from-a-file-with--import).
 
 In the following example, the compressed data file is imported into the specified database.
 
@@ -453,7 +453,7 @@ In the following example, the compressed data file is imported into the specified
 influx -import -database myDB -compress
 ```
 
-For details on using the `influx -import` command, see [Import data from a file with -import](/{{< latest "influxdb" "v1" >}}/tools/use-influx/#import-data-from-a-file-with-import).
+For details on using the `influx -import` command, see [Import data from a file with -import](/enterprise_influxdb/v1.9/tools/influx-cli/use-influx/#import-data-from-a-file-with--import).
 
 ### Example
 
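A runnable sketch of the export/import round trip shown in the hunks above. The gzip framing is real, but the `# DDL`/`# DML` header layout and the `myDB` contents are illustrative assumptions about what `influx_inspect export -compress` emits, not taken from the diff:

```python
import gzip

# Illustrative only: a tiny line-protocol file shaped like what
# `influx_inspect export -compress` produces (header layout assumed).
export_text = (
    "# DDL\n"
    "CREATE DATABASE myDB\n"
    "# DML\n"
    "# CONTEXT-DATABASE:myDB\n"
    "h2o_feet,location=coyote_creek water_level=8.12 1566000000000000000\n"
)

# Write the gzip-compressed export file, as -compress does.
with gzip.open("myDB.lp.gz", "wt") as f:
    f.write(export_text)

# Reading it back yields the original line protocol, which the
# import step would replay into the target database.
with gzip.open("myDB.lp.gz", "rt") as f:
    restored = f.read()

print(restored == export_text)
```

The point of the round trip: compression changes only the on-disk framing, so the import side sees exactly the line protocol the export side wrote.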
@@ -20,7 +20,7 @@ aliases:
 The default port that runs the InfluxDB HTTP service.
 It is used for the primary public write and query API.
 Clients include the CLI, Chronograf, InfluxDB client libraries, Grafana, curl, or anything that wants to write and read time series data to and from InfluxDB.
-[Configure this port](/enterprise_influxdb/v1.9/administration/config-data-nodes/#bind-address-8088)
+[Configure this port](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#bind-address--8086)
 in the data node configuration file.
 
 _See also: [API Reference](/enterprise_influxdb/v1.9/tools/api/)._
@@ -34,12 +34,12 @@ It's also used by meta nodes for cluster-type operations (e.g., tell a data node
 
 This is the default port used for RPC calls used for inter-node communication and by the CLI for backup and restore operations
 (`influxd backup` and `influxd restore`).
-[Configure this port](/enterprise_influxdb/v1.9/administration/config/#bind-address-127-0-0-1-8088)
+[Configure this port](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#bind-address--8088)
 in the configuration file.
 
 This port should not be exposed outside the cluster.
 
-_See also: [Backup and Restore](/enterprise_influxdb/v1.9/administration/backup_and_restore/)._
+_See also: [Back up and restore](/enterprise_influxdb/v1.9/administration/backup-and-restore/)._
 
 ### 8089
 
@@ -72,7 +72,7 @@ in the configuration file.
 ### 4242
 
 The default port that runs the OpenTSDB service.
-[Enable and configure this port](/enterprise_influxdb/v1.9/administration/config#bind-address-4242)
+[Enable and configure this port](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#opentsdb-settings)
 in the configuration file.
 
 **Resources** [OpenTSDB README](https://github.com/influxdata/influxdb/tree/1.8/services/opentsdb/README.md)
@@ -80,7 +80,7 @@ in the configuration file.
 ### 8089
 
 The default port that runs the UDP service.
-[Enable and configure this port](/enterprise_influxdb/v1.9/administration/config#bind-address-8089)
+[Enable and configure this port](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#udp-settings)
 in the configuration file.
 
 **Resources** [UDP README](https://github.com/influxdata/influxdb/tree/1.8/services/udp/README.md)
@@ -88,7 +88,7 @@ in the configuration file.
 ### 25826
 
 The default port that runs the Collectd service.
-[Enable and configure this port](/enterprise_influxdb/v1.9/administration/config#bind-address-25826)
+[Enable and configure this port](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#collectd-settings)
 in the configuration file.
 
 **Resources** [Collectd README](https://github.com/influxdata/influxdb/tree/1.8/services/collectd/README.md)
@@ -11,7 +11,7 @@ related:
 - /enterprise_influxdb/v1.9/guides/fine-grained-authorization/
 - /{{< latest "chronograf" >}}/administration/managing-influxdb-users/
 aliases:
-- enterprise_influxdb/v1.9/administration/authentication_and_authorization/
+- /enterprise_influxdb/v1.9/administration/authentication_and_authorization/
 ---
 
 This document covers setting up and managing authentication and authorization in InfluxDB Enterprise.
@@ -23,7 +23,7 @@ HTTP, HTTPS, or UDP in [line protocol](/enterprise_influxdb/v1.9/write_protocols
 the InfluxDB subscriber service creates multiple "writers" ([goroutines](https://golangbot.com/goroutines/))
 which send writes to the subscription endpoints.
 
-_The number of writer goroutines is defined by the [`write-concurrency`](/enterprise_influxdb/v1.9/administration/config#write-concurrency-40) configuration._
+_The number of writer goroutines is defined by the [`write-concurrency`](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#write-concurrency-40) configuration._
 
 As writes occur in InfluxDB, each subscription writer sends the written data to the
 specified subscription endpoints.
@@ -182,7 +182,7 @@ Below is an example `influxdb.conf` subscriber configuration:
 write-buffer-size = 1000
 ```
 
-_**Descriptions of `[subscriber]` configuration options are available in the [Configuring InfluxDB](/enterprise_influxdb/v1.9/administration/config#subscription-settings) documentation.**_
+_**Descriptions of `[subscriber]` configuration options are available in the [data node configuration](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#subscription-settings) documentation.**_
 
 ## Troubleshooting
 
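The subscriber model in the hunks above amounts to a bounded buffer (`write-buffer-size`) drained by `write-concurrency` writer goroutines. A rough Python sketch of that fan-out, purely illustrative and not the actual service code (the queue-and-workers shape is an assumption about the design, with threads standing in for goroutines):

```python
import queue
import threading

WRITE_CONCURRENCY = 4     # number of writer workers ("goroutines" in the real service)
WRITE_BUFFER_SIZE = 1000  # bounded buffer of writes awaiting a free writer

buf = queue.Queue(maxsize=WRITE_BUFFER_SIZE)
sent = []                 # stand-in for data POSTed to subscription endpoints

def writer():
    while True:
        point = buf.get()
        if point is None:      # sentinel: shut this worker down
            break
        sent.append(point)     # a real writer would send this to an endpoint

workers = [threading.Thread(target=writer) for _ in range(WRITE_CONCURRENCY)]
for w in workers:
    w.start()

for point in range(10):        # ten incoming writes fan out across the workers
    buf.put(point)
for _ in workers:              # one sentinel per worker
    buf.put(None)
for w in workers:
    w.join()

print(len(sent))
```

The bounded queue is what makes the buffer-size setting matter: once it fills, producers block until a writer catches up.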
@@ -65,13 +65,13 @@ Deletes sent to the Cache will clear out the given key or the specific time range
 
 The Cache exposes a few controls for snapshotting behavior.
 The two most important controls are the memory limits.
-There is a lower bound, [`cache-snapshot-memory-size`](/enterprise_influxdb/v1.9/administration/config#cache-snapshot-memory-size-25m), which when exceeded will trigger a snapshot to TSM files and remove the corresponding WAL segments.
-There is also an upper bound, [`cache-max-memory-size`](/enterprise_influxdb/v1.9/administration/config#cache-max-memory-size-1g), which when exceeded will cause the Cache to reject new writes.
+There is a lower bound, [`cache-snapshot-memory-size`](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#cache-snapshot-memory-size--25m), which when exceeded will trigger a snapshot to TSM files and remove the corresponding WAL segments.
+There is also an upper bound, [`cache-max-memory-size`](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes#cache-max-memory-size-1g), which when exceeded will cause the Cache to reject new writes.
 These configurations are useful to prevent out of memory situations and to apply back pressure to clients writing data faster than the instance can persist it.
 The checks for memory thresholds occur on every write.
 
 The other snapshot controls are time based.
-The idle threshold, [`cache-snapshot-write-cold-duration`](/enterprise_influxdb/v1.9/administration/config#cache-snapshot-write-cold-duration-10m), forces the Cache to snapshot to TSM files if it hasn't received a write within the specified interval.
+The idle threshold, [`cache-snapshot-write-cold-duration`](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes#cache-snapshot-write-cold-duration--10m), forces the Cache to snapshot to TSM files if it hasn't received a write within the specified interval.
 
 The in-memory Cache is recreated on restart by re-reading the WAL files on disk.
 
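The two memory bounds described in this hunk amount to a per-write threshold check. A minimal sketch of that logic (illustrative only, not the TSM engine code; the default sizes come from the config anchors in the hunk):

```python
CACHE_SNAPSHOT_MEMORY_SIZE = 25 * 1024**2  # lower bound (25m): snapshot to TSM
CACHE_MAX_MEMORY_SIZE = 1024**3            # upper bound (1g): reject new writes

def cache_write(cache_bytes, write_bytes):
    """Threshold check performed on every write (illustrative)."""
    new_size = cache_bytes + write_bytes
    if new_size > CACHE_MAX_MEMORY_SIZE:
        return "reject"    # back pressure: cache maximum memory size exceeded
    if new_size > CACHE_SNAPSHOT_MEMORY_SIZE:
        return "snapshot"  # write TSM files and drop the matching WAL segments
    return "accept"

print(cache_write(0, 1024), cache_write(30 * 1024**2, 1024), cache_write(1024**3, 1))
```

A small write into an empty cache is accepted, one that lands above the lower bound triggers a snapshot, and one that would exceed the upper bound is rejected.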
@@ -28,13 +28,9 @@ index-version = "tsi1"
 
 ### InfluxDB Enterprise
 
-- To convert your data nodes to support TSI, see [Upgrade InfluxDB Enterprise clusters](/enterprise_influxdb/v1.8/administration/upgrading/).
+- To convert your data nodes to support TSI, see [Upgrade InfluxDB Enterprise clusters](/enterprise_influxdb/v1.9/administration/upgrading/).
 
-- For detail on configuration, see [Configure InfluxDB Enterprise clusters](/enterprise_influxdb/v1.8/administration/configuration/).
-
-### InfluxDB OSS
-
-- For detail on configuration, see [Configuring InfluxDB OSS](/enterprise_influxdb/v1.9/administration/config/).
+- For detail on configuration, see [Configure InfluxDB Enterprise clusters](/enterprise_influxdb/v1.9/administration/configuration/).
 
 ## Tooling
 
@@ -214,7 +214,7 @@ data that reside in an RP other than the `DEFAULT` RP.
 Between checks, `orders` may have data that are older than two hours.
 The rate at which InfluxDB checks to enforce an RP is a configurable setting,
 see
-[Database Configuration](/enterprise_influxdb/v1.9/administration/config#check-interval-30m0s).
+[Database Configuration](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#check-interval--30m0s).
 
 Using a combination of RPs and CQs, we've successfully set up our database to
 automatically keep the high precision raw data for a limited time, create lower
@@ -852,7 +852,7 @@ To change a CQ, you must `DROP` and re`CREATE` it with the updated settings.
 ### Continuous query statistics
 
 If `query-stats-enabled` is set to `true` in your `influxdb.conf` or using the `INFLUXDB_CONTINUOUS_QUERIES_QUERY_STATS_ENABLED` environment variable, data will be written to `_internal` with information about when continuous queries ran and their duration.
-Information about CQ configuration settings is available in the [Configuration](/enterprise_influxdb/v1.9/administration/config/#continuous-queries-settings) documentation.
+Information about CQ configuration settings is available in the [Configuration](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#continuous-queries-settings) documentation.
 
 > **Note:** `_internal` houses internal system data and is meant for internal use.
 The structure of and data stored in `_internal` can change at any time.
@@ -87,7 +87,7 @@ If you attempt to create a database that already exists, InfluxDB does nothing a
 ```
 
 The query creates a database called `NOAA_water_database`.
-[By default](/enterprise_influxdb/v1.9/administration/config/#retention-autocreate-true), InfluxDB also creates the `autogen` retention policy and associates it with the `NOAA_water_database`.
+[By default](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#retention-autocreate--true), InfluxDB also creates the `autogen` retention policy and associates it with the `NOAA_water_database`.
 
 ##### Create a database with a specific retention policy
 
@@ -229,7 +229,7 @@ exist.
 
 The following sections cover how to create, alter, and delete retention policies.
 Note that when you create a database, InfluxDB automatically creates a retention policy named `autogen` which has infinite retention.
-You may disable its auto-creation in the [configuration file](/enterprise_influxdb/v1.9/administration/config/#metastore-settings).
+You may disable its auto-creation in the [configuration file](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#retention-autocreate--true).
 
 ### Create retention policies with CREATE RETENTION POLICY
 
@@ -251,7 +251,7 @@ duration is `INF`.
 ##### `REPLICATION`
 
 - The `REPLICATION` clause determines how many independent copies of each point
-are stored in the [cluster](/enterprise_influxdb/v1.9/high_availability/clusters/).
+are stored in the cluster.
 
 - By default, the replication factor `n` usually equals the number of data nodes. However, if you have four or more data nodes, the default replication factor `n` is 3.
 
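The default replication factor rule stated in this hunk (`n` equals the data-node count, capped at 3 once you have four or more nodes) pins down as a small helper — the function name is hypothetical, for illustration only:

```python
def default_replication_factor(data_nodes: int) -> int:
    """Default `n`: the data-node count, but 3 once you have 4+ nodes."""
    return 3 if data_nodes >= 4 else data_nodes

# One copy per node up to three nodes; capped at three copies after that.
print([default_replication_factor(n) for n in (1, 2, 3, 4, 8)])
```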
@@ -63,7 +63,7 @@ From your terminal, download the text file that contains the data in [line proto
 curl https://s3.amazonaws.com/noaa.water-database/NOAA_data.txt -o NOAA_data.txt
 ```
 
-Write the data to InfluxDB via the [CLI](../../tools/use-influx/):
+Write the data to InfluxDB via the [`influx` CLI](/enterprise_influxdb/v1.9/tools/influx-cli/use-influx/):
 ```
 influx -import -path=NOAA_data.txt -precision=s -database=NOAA_water_database
 ```
@@ -113,7 +113,7 @@ time level description location water_level
 ```
 
 ### Data sources and things to note
-The sample data is publicly available data from the [National Oceanic and Atmospheric Administration’s (NOAA) Center for Operational Oceanographic Products and Services](http://tidesandcurrents.noaa.gov/stations.html?type=Water+Levels).
+The sample data is publicly available data from the [National Oceanic and Atmospheric Administration’s (NOAA) Center for Operational Oceanographic Products and Services](https://tidesandcurrents.noaa.gov/map/index.html?type=Water+Levels).
 The data include 15,258 observations of water levels (ft) collected every six minutes at two stations (Santa Monica, CA (ID 9410840) and Coyote Creek, CA (ID 9414575)) over the period from August 18, 2015 through September 18, 2015.
 
 Note that the measurements `average_temperature`, `h2o_pH`, `h2o_quality`, and `h2o_temperature` contain fictional data.
@@ -23,7 +23,7 @@ HTTP endpoints to InfluxDB:
 * `/api/v1/prom/read`
 * `/api/v1/prom/write`
 
-Additionally, there is a [`/metrics` endpoint](/enterprise_influxdb/v1.9/administration/server_monitoring/#influxdb-metrics-http-endpoint) configured to produce default Go metrics in Prometheus metrics format.
+Additionally, there is a [`/metrics` endpoint](/enterprise_influxdb/v1.9/administration/monitor/server_monitoring/#influxdb-metrics-http-endpoint) configured to produce default Go metrics in Prometheus metrics format.
 
 ### Create a target database
 
@@ -89,7 +89,7 @@ made to match the InfluxDB data structure:
 * Prometheus labels become InfluxDB tags.
 * All `# HELP` and `# TYPE` lines are ignored.
 * [v1.8.6 and later] Prometheus remote write endpoint drops unsupported Prometheus values (`NaN`, `-Inf`, and `+Inf`) rather than reject the entire batch.
-* If [write trace logging is enabled (`[http] write-tracing = true`)](/enterprise_influxdb/v1.9/administration/config/#write-tracing-false), then summaries of dropped values are logged.
+* If [write trace logging is enabled (`[http] write-tracing = true`)](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#write-tracing--false), then summaries of dropped values are logged.
 * If a batch of values contains values that are subsequently dropped, HTTP status code `204` is returned.
 
 ### Example: Parse Prometheus to InfluxDB
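The mapping rules in this hunk — labels become tags, and unsupported values are dropped rather than failing the batch — might look like the following illustrative converter. The `value` field name and the exact line-protocol layout are assumptions for the sketch, not taken from the diff:

```python
import math

def prom_sample_to_line(name, labels, value, ts_ns):
    """Map one Prometheus sample to InfluxDB line protocol.

    Labels become tags; unsupported values (NaN, -Inf, +Inf)
    are dropped (returned as None) instead of rejecting the batch.
    """
    if math.isnan(value) or math.isinf(value):
        return None  # dropped; the rest of the batch is still accepted
    tags = ",".join(f"{k}={v}" for k, v in sorted(labels.items()))
    return f"{name},{tags} value={value} {ts_ns}"

print(prom_sample_to_line("up", {"job": "node", "host": "a"}, 1.0, 1566000000000000000))
print(prom_sample_to_line("up", {"job": "node"}, float("nan"), 1566000000000000000))
```

The first sample survives the mapping; the NaN sample is silently dropped, matching the v1.8.6+ behavior described above.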
@@ -17,7 +17,7 @@ It uses HTTP response codes, HTTP authentication, JWT Tokens, and basic authenti
 
 The following sections assume your InfluxDB instance is running on `localhost`
 port `8086` and HTTPS is not enabled.
-Those settings [are configurable](/enterprise_influxdb/v1.9/administration/config/#http-endpoints-settings).
+Those settings [are configurable](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#http-endpoints-settings).
 
 - [InfluxDB 2.0 API compatibility endpoints](#influxdb-20-api-compatibility-endpoints)
 - [InfluxDB 1.x HTTP endpoints](#influxdb-1x-http-endpoints)
@@ -427,7 +427,8 @@ A successful [`CREATE DATABASE` query](/enterprise_influxdb/v1.9/query_language/
 | u=\<username> | Optional if you haven't [enabled authentication](/enterprise_influxdb/v1.9/administration/authentication_and_authorization/#set-up-authentication). Required if you've enabled authentication.* | Sets the username for authentication if you've enabled authentication. The user must have read access to the database. Use with the query string parameter `p`. |
 
 \* InfluxDB does not truncate the number of rows returned for requests without the `chunked` parameter.
-That behavior is configurable; see the [`max-row-limit`](/enterprise_influxdb/v1.9/administration/config/#max-row-limit-0) configuration option for more information.
+That behavior is configurable; see the [`max-row-limit`](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#max-row-limit--0)
+configuration option for more information.
 
 \** The InfluxDB API also supports basic authentication.
 Use basic authentication if you've [enabled authentication](/enterprise_influxdb/v1.9/administration/authentication_and_authorization/#set-up-authentication)
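The `max-row-limit` behavior referenced in this hunk — non-chunked results are truncated only when the limit is non-zero, and the default `0` disables truncation — can be sketched as (illustrative helper, not the actual query engine):

```python
def apply_row_limit(rows, max_row_limit=0, chunked=False):
    """Truncate a non-chunked result when max-row-limit is non-zero.

    The default of 0 means no truncation, matching the documented
    default behavior; chunked responses are never truncated here.
    """
    if chunked or max_row_limit == 0:
        return rows
    return rows[:max_row_limit]

print(apply_row_limit(list(range(5)), max_row_limit=0))  # default: all rows
print(apply_row_limit(list(range(5)), max_row_limit=3))  # truncated to 3
```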
@@ -950,7 +951,7 @@ Errors are returned in JSON.
 | 400 Bad Request | Unacceptable request. Can occur with an InfluxDB line protocol syntax error or if a user attempts to write values to a field that previously accepted a different value type. The returned JSON offers further information. |
 | 401 Unauthorized | Unacceptable request. Can occur with invalid authentication credentials. |
 | 404 Not Found | Unacceptable request. Can occur if a user attempts to write to a database that does not exist. The returned JSON offers further information. |
-| 413 Request Entity Too Large | Unacceptable request. It will occur if the payload of the POST request is bigger than the maximum size allowed. See [`max-body-size`](/enterprise_influxdb/v1.9/administration/config/#max-body-size-25000000) parameter for more details.
+| 413 Request Entity Too Large | Unacceptable request. It will occur if the payload of the POST request is bigger than the maximum size allowed. See the [`max-body-size`](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#max-body-size--25000000) parameter for more details.
 | 500 Internal Server Error | The system is overloaded or significantly impaired. Can occur if a user attempts to write to a retention policy that does not exist. The returned JSON offers further information. |
 
 #### Examples
@@ -77,7 +77,7 @@ The size of the batches written to the index. Default value is `10000`.
 ##### `[ -concurrency ]`
 
 The number of workers to dedicate to shard index building.
-Defaults to [`GOMAXPROCS`](/enterprise_influxdb/v1.9/administration/config#gomaxprocs-environment-variable) value.
+Defaults to [`GOMAXPROCS`](/enterprise_influxdb/v1.9/administration/configure/configuration/#gomaxprocs-environment-variable) value.
 
 ##### `[ -database <db_name> ]`
 
@@ -37,7 +37,8 @@ The `max series per database exceeded` error occurs when a write causes the
 number of [series](/enterprise_influxdb/v1.9/concepts/glossary/#series) in a database to
 exceed the maximum allowable series per database.
 The maximum allowable series per database is controlled by the
-`max-series-per-database` setting in the `[data]` section of the configuration
+[`max-series-per-database`](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#max-series-per-database--1000000)
+setting in the `[data]` section of the configuration
 file.
 
 The information in the `< >` shows the measurement and the tag set of the series
@@ -46,9 +47,6 @@ that exceeded `max-series-per-database`.
 By default `max-series-per-database` is set to one million.
 Changing the setting to `0` allows an unlimited number of series per database.
 
-**Resources:**
-[Database Configuration](/enterprise_influxdb/v1.9/administration/config/#max-series-per-database-1000000)
-
 ## `error parsing query: found < >, expected identifier at line < >, char < >`
 
 ### InfluxQL syntax
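The limit described in the two hunks above — a series is identified by its measurement and tag set, `max-series-per-database` defaults to one million, and `0` means unlimited — can be sketched as follows (illustrative only, not the real index code; the function name is hypothetical):

```python
def add_series(existing, series_key, max_series_per_database=1_000_000):
    """Admit a write's series key unless it would exceed the limit.

    A setting of 0 means unlimited, matching the documented behavior.
    """
    if series_key in existing:
        return True  # writes to an existing series never trip the limit
    if max_series_per_database and len(existing) >= max_series_per_database:
        raise RuntimeError(f"max series per database exceeded: <{series_key}>")
    existing.add(series_key)
    return True

db = set()
add_series(db, "h2o_feet,location=santa_monica", max_series_per_database=2)
add_series(db, "h2o_feet,location=coyote_creek", max_series_per_database=2)
# A third distinct series key would now raise, while re-writing either
# existing key would not.
```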
@@ -326,7 +324,7 @@ The maximum valid timestamp is `9223372036854775806` or `2262-04-11T23:47:16.854
 
 The `cache maximum memory size exceeded` error occurs when the cached
 memory size increases beyond the
-[`cache-max-memory-size` setting](/enterprise_influxdb/v1.9/administration/config/#cache-max-memory-size-1g)
+[`cache-max-memory-size` setting](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#cache-max-memory-size--1g)
 in the configuration file.
 
 By default, `cache-max-memory-size` is set to 512mb.
@@ -335,12 +333,9 @@ or for datasets with higher [series cardinality](/enterprise_influxdb/v1.9/conce
 If you have lots of RAM you could set it to `0` to disable the cached memory
 limit and never get this error.
 You can also examine the `memBytes` field in the `cache` measurement in the
-[`_internal` database](/enterprise_influxdb/v1.9/administration/server_monitoring/#internal-monitoring)
+[`_internal` database](/enterprise_influxdb/v1.9/administration/monitor/server_monitoring/#internal-monitoring)
 to get a sense of how big the caches are in memory.
 
-**Resources:**
-[Database Configuration](/enterprise_influxdb/v1.9/administration/config/)
-
 ## `already killed`
 
 The `already killed` error occurs when a query has already been killed, but
@@ -1262,7 +1262,7 @@ The default shard group duration is one week and if your data cover several hund
 Having an extremely high number of shards is inefficient for InfluxDB.
 Increase the shard group duration for your data’s retention policy with the [`ALTER RETENTION POLICY` query](/enterprise_influxdb/v1.9/query_language/manage-database/#modify-retention-policies-with-alter-retention-policy).
 
-Second, temporarily lowering the [`cache-snapshot-write-cold-duration` configuration setting](enterprise_influxdb/v1.9/administration/config-data-nodes/#cache-snapshot-write-cold-duration--10m).
+Second, temporarily lower the [`cache-snapshot-write-cold-duration` configuration setting](/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#cache-snapshot-write-cold-duration--10m).
 If you’re writing a lot of historical data, the default setting (`10m`) can cause the system to hold all of your data in cache for every shard.
 Temporarily lowering the `cache-snapshot-write-cold-duration` setting to `10s` while you write the historical data makes the process more efficient.
 ## Where can I find InfluxDB Enterprise logs?
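To see why the default one-week shard group duration is a problem for data spanning several hundred years, a quick arithmetic check (illustrative only; real shard counts also depend on retention policies and cluster layout):

```python
import math

def shard_group_count(retention_days, shard_group_days=7):
    """Approximate shard groups needed to cover a retention span."""
    return math.ceil(retention_days / shard_group_days)

# 200 years of data at the default one-week shard group duration
# yields on the order of ten thousand shard groups:
print(shard_group_count(200 * 365))
```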
@@ -63,7 +63,7 @@ until the query record is cleared from memory.
 #### Syntax
 
 Where `qid` is the query ID, displayed in the
-[`SHOW QUERIES`](/eneterprise_influxdb/v1.9/troubleshooting/query_management/influxql_query_management/#list-currently-running-queries-with-show-queries) output:
+[`SHOW QUERIES`](/enterprise_influxdb/v1.9/troubleshooting/query_management/influxql_query_management/#list-currently-running-queries-with-show-queries) output:
 
 ```sql
 KILL QUERY <qid>
@@ -39,7 +39,7 @@ To send notifications about changes in your data, start by creating a notificati
 - For Slack, create an [Incoming WebHook](https://api.slack.com/incoming-webhooks#posting_with_webhooks) in Slack, and then enter your webhook URL in the **Slack Incoming WebHook URL** field.
 
 - For PagerDuty:
-  - [Create a new service](https://support.pagerduty.com/docs/services-and-integrations#section-create-a-new-service), [add an integration for your service](https://support.pagerduty.com/docs/services-and-integrations#section-add-integrations-to-an-existing-service), and then enter the PagerDuty integration key for your new service in the **Routing Key** field.
+  - [Create a new service](https://support.pagerduty.com/docs/services-and-integrations#section-create-a-new-service), [add an Events API V2 integration for your service](https://support.pagerduty.com/docs/services-and-integrations#section-add-integrations-to-an-existing-service), and then enter the PagerDuty integration key for your new service in the **Routing Key** field.
   - The **Client URL** provides a useful link in your PagerDuty notification. Enter any URL that you'd like to use to investigate issues. This URL is sent as the `client_url` property in the PagerDuty trigger event. By default, the **Client URL** is set to your Monitoring & Alerting History page, and the following is included in the PagerDuty trigger event:
 
 ```json