diff --git a/content/enterprise_influxdb/v1/features/_index.md b/content/enterprise_influxdb/v1/features/_index.md
index 063d0479d..cd398a4ec 100644
--- a/content/enterprise_influxdb/v1/features/_index.md
+++ b/content/enterprise_influxdb/v1/features/_index.md
@@ -29,7 +29,7 @@ Certain configurations (e.g., 3 meta and 2 data node) provide high-availability
 while making certain tradeoffs in query performance when compared to a single node.
 Further increasing the number of nodes can improve performance in both respects.
-For example, a cluster with 4 data nodes and a [replication factor](https://docs.influxdata.com/enterprise_influxdb/v1/concepts/glossary/#replication-factor)
+For example, a cluster with 4 data nodes and a [replication factor](/enterprise_influxdb/v1/concepts/glossary/#replication-factor)
 of 2 can support a higher volume of write traffic than a single node could.
 It can also support a higher *query* workload, as the data is replicated
 in two locations. Performance of the queries may be on par with a single
diff --git a/content/enterprise_influxdb/v1/flux/guides/scalar-values.md b/content/enterprise_influxdb/v1/flux/guides/scalar-values.md
index 529bd2ef5..605336a02 100644
--- a/content/enterprise_influxdb/v1/flux/guides/scalar-values.md
+++ b/content/enterprise_influxdb/v1/flux/guides/scalar-values.md
@@ -217,7 +217,7 @@ The temperature was ${string(v: lastReported._value)}°F."
 The following sample data set represents fictional temperature metrics
 collected from three locations.
-It's formatted in [annotated CSV](https://v2.docs.influxdata.com/v2.0/reference/syntax/annotated-csv/) and imported
+It's formatted in [annotated CSV](/influxdb/v2/reference/syntax/annotated-csv/) and imported
 into the Flux query using the [`csv.from()` function](/flux/v0/stdlib/csv/from/).

 Place the following at the beginning of your query to use the sample data:
diff --git a/content/enterprise_influxdb/v1/guides/hardware_sizing.md b/content/enterprise_influxdb/v1/guides/hardware_sizing.md
index b969143be..6f6336705 100644
--- a/content/enterprise_influxdb/v1/guides/hardware_sizing.md
+++ b/content/enterprise_influxdb/v1/guides/hardware_sizing.md
@@ -18,7 +18,7 @@ Review configuration and hardware guidelines for InfluxDB Enterprise:
 * [Recommended cluster configurations](#recommended-cluster-configurations)
 * [Storage: type, amount, and configuration](#storage-type-amount-and-configuration)

-For InfluxDB OSS instances, see [OSS hardware sizing guidelines](https://docs.influxdata.com/influxdb/v1/guides/hardware_sizing/).
+For InfluxDB OSS instances, see [OSS hardware sizing guidelines](/influxdb/v1/guides/hardware_sizing/).

 > **Disclaimer:** Your numbers may vary from recommended guidelines. Guidelines provide
 estimated benchmarks for implementing the most performant system for your business.
diff --git a/content/influxdb/cloud/account-management/billing.md b/content/influxdb/cloud/account-management/billing.md
index 3034fcae8..07c3b2a53 100644
--- a/content/influxdb/cloud/account-management/billing.md
+++ b/content/influxdb/cloud/account-management/billing.md
@@ -103,7 +103,7 @@ If you exceed your plan's [adjustable quotas or limits](/influxdb/cloud/account-
 If you exceed the series cardinality limit, InfluxDB adds a rate limit event
 warning on the **Usage** page, and begins to reject write requests with new series.
 To start processing write requests again, do the following as needed:
-- **Series cardinality limits**: If you exceed the series cardinality limit, see how to [resolve high series cardinality](https://docs.influxdata.com/influxdb/v2/write-data/best-practices/resolve-high-cardinality/).
+- **Series cardinality limits**: If you exceed the series cardinality limit, see how to [resolve high series cardinality](/influxdb/v2/write-data/best-practices/resolve-high-cardinality/).
 - **Free plan**: To raise rate limits, [upgrade to a Usage-based Plan](#upgrade-to-usage-based-plan).

 #### Write and query limits (HTTP response code)
diff --git a/content/influxdb/v1/flux/guides/scalar-values.md b/content/influxdb/v1/flux/guides/scalar-values.md
index b1ac4fc3b..b2b6ccaa1 100644
--- a/content/influxdb/v1/flux/guides/scalar-values.md
+++ b/content/influxdb/v1/flux/guides/scalar-values.md
@@ -232,7 +232,7 @@ The temperature was ${string(v: lastReported._value)}°F."
 The following sample data set represents fictional temperature metrics
 collected from three locations.
-It's formatted in [annotated CSV](https://v2.docs.influxdata.com/v2.0/reference/syntax/annotated-csv/) and imported
+It's formatted in [annotated CSV](/influxdb/v2/reference/syntax/annotated-csv/) and imported
 into the Flux query using the [`csv.from()` function](/flux/v0/stdlib/csv/from/).

 Place the following at the beginning of your query to use the sample data:
diff --git a/content/influxdb/v1/tools/api.md b/content/influxdb/v1/tools/api.md
index eb5d1fa24..cbb7fb70a 100644
--- a/content/influxdb/v1/tools/api.md
+++ b/content/influxdb/v1/tools/api.md
@@ -20,7 +20,7 @@ Responses use standard HTTP response codes and JSON format.
 To send API requests, you can use the [InfluxDB v1 client libraries](/influxdb/v1/tools/api_client_libraries/),
 the [InfluxDB v2 client libraries](/influxdb/v1/tools/api_client_libraries/),
-[Telegraf](https://docs.influxdata.com/telegraf/v1/),
+[Telegraf](/telegraf/v1/),
 or the client of your choice.

 {{% note %}}
diff --git a/content/influxdb/v2/reference/release-notes/influxdb.md b/content/influxdb/v2/reference/release-notes/influxdb.md
index 9633439d6..2de95e4ce 100644
--- a/content/influxdb/v2/reference/release-notes/influxdb.md
+++ b/content/influxdb/v2/reference/release-notes/influxdb.md
@@ -643,7 +643,7 @@ to migrate InfluxDB key-value metadata schemas to earlier 2.x versions when nece

 #### Telegraf

-- Add the following new [Telegraf plugins](https://docs.influxdata.com/telegraf/v1/plugins/) to the Load Data page:
+- Add the following new [Telegraf plugins](/telegraf/v1/plugins/) to the Load Data page:
   - Alibaba (Aliyun) CloudMonitor Service Statistics (`aliyuncms`)
   - AMD ROCm System Management Interface (SMI) (`amd_rocm_smi`)
   - Counter-Strike: Global Offensive (CS:GO) (`csgo`)
diff --git a/content/influxdb3/cloud-dedicated/admin/monitor-your-cluster.md b/content/influxdb3/cloud-dedicated/admin/monitor-your-cluster.md
index 27ccd7c2a..d4ae8c139 100644
--- a/content/influxdb3/cloud-dedicated/admin/monitor-your-cluster.md
+++ b/content/influxdb3/cloud-dedicated/admin/monitor-your-cluster.md
@@ -328,7 +328,7 @@ following levels:

 - **L3**: 4 L2 files compacted together

 Parquet files store data partitioned by time and optionally tags
-_(see [Manage data partition](https://docs.influxdata.com/influxdb3/cloud-dedicated/admin/custom-partitions/))_.
+_(see [Manage data partition](/influxdb3/cloud-dedicated/admin/custom-partitions/))_.
 After four L0 files accumulate for a partition, they're eligible for compaction.
 If the compactor is keeping up with the incoming write load, all compaction
 events have exactly four files.
diff --git a/content/influxdb3/cloud-dedicated/reference/internals/security.md b/content/influxdb3/cloud-dedicated/reference/internals/security.md
index e5fa1e943..b1303af8b 100644
--- a/content/influxdb3/cloud-dedicated/reference/internals/security.md
+++ b/content/influxdb3/cloud-dedicated/reference/internals/security.md
@@ -67,7 +67,7 @@ by periodically creating, recording, and writing test data into test buckets.
 The service periodically executes queries to ensure the data hasn't been lost or corrupted.
 A separate instance of this service lives within each {{% product-name %}} cluster.
 Additionally, the service creates out-of-band backups in
-[line protocol](https://docs.influxdata.com/influxdb/cloud/reference/syntax/line-protocol/),
+[line protocol](/influxdb/cloud/reference/syntax/line-protocol/),
 and ensures the backup data matches the data on disk.

 ## Cloud infrastructure
diff --git a/content/influxdb3/clustered/reference/internals/security.md b/content/influxdb3/clustered/reference/internals/security.md
index c12776c27..38765b03e 100644
--- a/content/influxdb3/clustered/reference/internals/security.md
+++ b/content/influxdb3/clustered/reference/internals/security.md
@@ -62,7 +62,7 @@ by periodically creating, recording, and writing test data into test buckets.
 The service periodically executes queries to ensure the data hasn't been lost or corrupted.
 A separate instance of this service lives within each InfluxDB cluster.
 Additionally, the service creates out-of-band backups in
-[line protocol](https://docs.influxdata.com/influxdb/cloud/reference/syntax/line-protocol/),
+[line protocol](/influxdb/cloud/reference/syntax/line-protocol/),
 and ensures the backup data matches the data on disk.

 ## Cloud infrastructure
@@ -229,7 +229,7 @@ User accounts can be created by InfluxData on the InfluxDB Clustered system via
 User accounts can create database tokens with data read and/or write permissions.
 API requests from custom applications require a database token with sufficient permissions.
 For more information on the types of tokens and ways to create them, see
-[Manage tokens](https://docs.influxdata.com/influxdb3/clustered/admin/tokens/).
+[Manage tokens](/influxdb3/clustered/admin/tokens/).

 ### Role-based access controls (RBAC)
diff --git a/content/influxdb3/clustered/reference/release-notes/clustered.md b/content/influxdb3/clustered/reference/release-notes/clustered.md
index 0973ace20..e81fca982 100644
--- a/content/influxdb3/clustered/reference/release-notes/clustered.md
+++ b/content/influxdb3/clustered/reference/release-notes/clustered.md
@@ -1419,7 +1419,7 @@ This now enables the use of Azure blob storage.

 The "Install InfluxDB Clustered" instructions (formerly known as "GETTING_STARTED")
 are now available on the public
-[InfluxDB Clustered documentation](https://docs.influxdata.com/influxdb3/clustered/install/).
+[InfluxDB Clustered documentation](/influxdb3/clustered/install/).
 The `example-customer.yml` (also known as `myinfluxdb.yml`) example configuration
 file still lives in the release bundle alongside the `RELEASE_NOTES`.
diff --git a/content/kapacitor/v1/reference/about_the_project/release-notes.md b/content/kapacitor/v1/reference/about_the_project/release-notes.md
index 567d00928..2ae546029 100644
--- a/content/kapacitor/v1/reference/about_the_project/release-notes.md
+++ b/content/kapacitor/v1/reference/about_the_project/release-notes.md
@@ -162,7 +162,7 @@ aliases:
 - Add new `auto-attributes` configuration option to BigPanda node.
 - Ability to add new headers to HTTP posts directly in `env var` config.
 - `Topic queue length` is now configurable. This allows you to set a `topic-buffer-length` parameter in the Kapacitor config file in the
-[alert](https://docs.influxdata.com/kapacitor/v1/administration/configuration/#alert) section. The default is 5000. Minimum length
+[alert](/kapacitor/v1/administration/configuration/#alert) section. The default is 5000. Minimum length
 is 1000.
 - Add new `address template` to email alert. Email addresses no longer need to be hardcoded; can be derived directly from data.
diff --git a/content/platform/monitoring/influxdata-platform/tools/measurements-internal.md b/content/platform/monitoring/influxdata-platform/tools/measurements-internal.md
index 8a027b218..2b4669c0e 100644
--- a/content/platform/monitoring/influxdata-platform/tools/measurements-internal.md
+++ b/content/platform/monitoring/influxdata-platform/tools/measurements-internal.md
@@ -444,7 +444,7 @@ Tracks the disk usage of all hinted handoff queues for a given node (not the byt
 a lag occurs between when bytes are processed and when they're removed from the disk.

 `queueTotalSize` is used to determine when a node's hinted handoff queue has reached the
-maximum size configured in the [hinted-handoff max-size](https://docs.influxdata.com/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#max-size) parameter.
+maximum size configured in the [hinted-handoff max-size](/enterprise_influxdb/v1/administration/configure/config-data-nodes/#max-size) parameter.

 ---
diff --git a/content/resources/how-to-guides/alert-to-zapier.md b/content/resources/how-to-guides/alert-to-zapier.md
index 17d5c6633..529530b43 100644
--- a/content/resources/how-to-guides/alert-to-zapier.md
+++ b/content/resources/how-to-guides/alert-to-zapier.md
@@ -40,11 +40,11 @@ Zapier, August 2022

 ## Create an InfluxDB check

-[Create an InfluxDB check](https://docs.influxdata.com/influxdb/cloud/monitor-alert/checks/create) to query and alert on a metric you want to monitor.
+[Create an InfluxDB check](/influxdb/cloud/monitor-alert/checks/create) to query and alert on a metric you want to monitor.
 Use a default **threshold** check as the task.
 _It is possible to use your own task written in Flux code, but for this guide, use the InfluxDB UI to create the check._

-Once the check is completed, [create a notification endpoint](https://docs.influxdata.com/influxdb/cloud/monitor-alert/notification-endpoints/create/). Select **HTTP** as an endpoint.
+Once the check is completed, [create a notification endpoint](/influxdb/cloud/monitor-alert/notification-endpoints/create/). Select **HTTP** as an endpoint.

 {{< img-hd src="static/img/resources/notification-endpoint.png" alt="Create a check" />}}
 {{% caption %}}
diff --git a/content/resources/how-to-guides/state-changes-across-task-executions.md b/content/resources/how-to-guides/state-changes-across-task-executions.md
index 31ca6ea4f..ef2cf5858 100644
--- a/content/resources/how-to-guides/state-changes-across-task-executions.md
+++ b/content/resources/how-to-guides/state-changes-across-task-executions.md
@@ -38,7 +38,7 @@ Create a task where you:

 1. Import packages and define task options and secrets. Import the following packages:
    - [Flux Telegram package](/flux/v0/stdlib/contrib/sranka/telegram/): This package
    - [Flux InfluxDB secrets package](/flux/v0/stdlib/influxdata/influxdb/secrets/): This package contains the [secrets.get()](/flux/v0/stdlib/influxdata/influxdb/secrets/get/) function which allows you to retrieve secrets from the InfluxDB secret store. Learn how to [manage secrets](/influxdb/v2/admin/secrets/) in InfluxDB to use this package.
-   - [Flux InfluxDB monitoring package](https://docs.influxdata.com/flux/v0/stdlib/influxdata/influxdb/monitor/): This package contains functions and tools for monitoring your data.
+   - [Flux InfluxDB monitoring package](/flux/v0/stdlib/influxdata/influxdb/monitor/): This package contains functions and tools for monitoring your data.

 ```js
diff --git a/content/shared/influxdb3-query-guides/execute-queries/influxdb3-api.md b/content/shared/influxdb3-query-guides/execute-queries/influxdb3-api.md
index ef8f0c6a1..fa18d6586 100644
--- a/content/shared/influxdb3-query-guides/execute-queries/influxdb3-api.md
+++ b/content/shared/influxdb3-query-guides/execute-queries/influxdb3-api.md
@@ -62,7 +62,7 @@ about your database server and table schemas in {{% product-name %}}.
 > In examples, tables with `"table_name":"system_` are user-created tables for CPU, memory, disk,
 > network, and other resource statistics collected and written
 > by the user--for example, using the `psutil` Python library or
-> [Telegraf](https://docs.influxdata.com/telegraf/v1/get-started/) to collect
+> [Telegraf](/telegraf/v1/get-started/) to collect
 > and write system metrics to an InfluxDB 3 database.

 ##### Show tables
diff --git a/content/telegraf/v1/data_formats/input/prometheus-remote-write.md b/content/telegraf/v1/data_formats/input/prometheus-remote-write.md
index ec6f018b0..33620938e 100644
--- a/content/telegraf/v1/data_formats/input/prometheus-remote-write.md
+++ b/content/telegraf/v1/data_formats/input/prometheus-remote-write.md
@@ -64,6 +64,6 @@ prompb.WriteRequest{
 prometheus_remote_write,instance=localhost:9090,job=prometheus,quantile=0.99 go_gc_duration_seconds=4.63 1614889298859000000
 ```

-## For alignment with the [InfluxDB v1.x Prometheus Remote Write Spec](https://docs.influxdata.com/influxdb/v1/supported_protocols/prometheus/#how-prometheus-metrics-are-parsed-in-influxdb)
+## For alignment with the [InfluxDB v1.x Prometheus Remote Write Spec](/influxdb/v1/supported_protocols/prometheus/#how-prometheus-metrics-are-parsed-in-influxdb)

 - Use the [Starlark processor rename prometheus remote write script](https://github.com/influxdata/telegraf/blob/master/plugins/processors/starlark/testdata/rename_prometheus_remote_write.star) to rename the measurement name to the fieldname and rename the fieldname to value.
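Most of the hunks above follow one mechanical pattern: strip the `https://docs.influxdata.com` host from a Markdown link target and keep the path. A minimal sketch of how that class of rewrite could be automated is below; the `relativize` helper and its regex are illustrative, not part of the docs repository's tooling. Note that it deliberately leaves links on other hosts untouched, since rewrites such as `v2.docs.influxdata.com/v2.0/...` → `/influxdb/v2/...` or the `v1.9` → `v1` version bump seen above remap the path and need case-by-case handling.

```python
import re

# Matches a Markdown link target that is an absolute docs.influxdata.com URL,
# capturing the path (including any #fragment), e.g.
# (https://docs.influxdata.com/telegraf/v1/) -> captures "/telegraf/v1/"
ABSOLUTE_LINK = re.compile(r"\(https://docs\.influxdata\.com(/[^)\s]*)\)")


def relativize(markdown: str) -> str:
    """Rewrite absolute docs.influxdata.com link targets to root-relative paths."""
    return ABSOLUTE_LINK.sub(r"(\1)", markdown)
```

Targets that are already root-relative, or that live on a different host (such as `v2.docs.influxdata.com` or `github.com`), pass through unchanged.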