fix: links containing hostname

pull/6339/head
Jason Stirnaman 2025-08-24 21:32:26 -05:00
parent 942c76d0c8
commit f587fbaf48
15 changed files with 17 additions and 17 deletions

View File

@@ -29,7 +29,7 @@ Certain configurations (e.g., 3 meta and 2 data node) provide high-availability
while making certain tradeoffs in query performance when compared to a single node.
Further increasing the number of nodes can improve performance in both respects.
-For example, a cluster with 4 data nodes and a [replication factor](https://docs.influxdata.com/enterprise_influxdb/v1/concepts/glossary/#replication-factor)
+For example, a cluster with 4 data nodes and a [replication factor](/enterprise_influxdb/v1/concepts/glossary/#replication-factor)
of 2 can support a higher volume of write traffic than a single node could.
It can also support a higher *query* workload, as the data is replicated
in two locations. Performance of the queries may be on par with a single
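
For reference, a minimal sketch of setting a replication factor of 2 through an InfluxQL retention policy, using the v1 `/query` endpoint (the node address and database name are placeholders):

```python
import requests

# Placeholders: a data node reachable at localhost:8086 and an existing
# database named "mydb".
INFLUX_URL = "http://localhost:8086"

# Keep two copies of each shard so either replica can serve reads and writes.
resp = requests.post(
    f"{INFLUX_URL}/query",
    params={"q": 'CREATE RETENTION POLICY "rf2" ON "mydb" DURATION 30d REPLICATION 2 DEFAULT'},
)
resp.raise_for_status()
print(resp.json())
```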

View File

@@ -18,7 +18,7 @@ Review configuration and hardware guidelines for InfluxDB Enterprise:
* [Recommended cluster configurations](#recommended-cluster-configurations)
* [Storage: type, amount, and configuration](#storage-type-amount-and-configuration)
-For InfluxDB OSS instances, see [OSS hardware sizing guidelines](https://docs.influxdata.com/influxdb/v1/guides/hardware_sizing/).
+For InfluxDB OSS instances, see [OSS hardware sizing guidelines](/influxdb/v1/guides/hardware_sizing/).
> **Disclaimer:** Your numbers may vary from recommended guidelines. Guidelines provide estimated benchmarks for implementing the most performant system for your business.

View File

@@ -103,7 +103,7 @@ If you exceed your plan's [adjustable quotas or limits](/influxdb/cloud/account-
If you exceed the series cardinality limit, InfluxDB adds a rate limit event warning on the **Usage** page, and begins to reject write requests with new series. To start processing write requests again, do the following as needed:
-- **Series cardinality limits**: If you exceed the series cardinality limit, see how to [resolve high series cardinality](https://docs.influxdata.com/influxdb/v2/write-data/best-practices/resolve-high-cardinality/).
+- **Series cardinality limits**: If you exceed the series cardinality limit, see how to [resolve high series cardinality](/influxdb/v2/write-data/best-practices/resolve-high-cardinality/).
- **Free plan**: To raise rate limits, [upgrade to a Usage-based Plan](#upgrade-to-usage-based-plan).
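
To watch cardinality before the limit is reached, here is a hedged sketch using the `influxdb-client` Python library and the Flux `influxdb.cardinality()` function (the URL, token, org, and bucket names are placeholders):

```python
from influxdb_client import InfluxDBClient

# Placeholders: substitute your Cloud region URL, API token, org, and bucket.
client = InfluxDBClient(url="https://us-east-1-1.aws.cloud2.influxdata.com",
                        token="MY_API_TOKEN", org="my-org")

flux = '''
import "influxdata/influxdb"

influxdb.cardinality(bucket: "my-bucket", start: -30d)
'''

# Returns a single table whose value is the bucket's series cardinality.
for table in client.query_api().query(flux):
    for record in table.records:
        print("series cardinality:", record.get_value())

client.close()
```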
#### Write and query limits (HTTP response code)

View File

@@ -20,7 +20,7 @@ Responses use standard HTTP response codes and JSON format.
To send API requests, you can use
the [InfluxDB v1 client libraries](/influxdb/v1/tools/api_client_libraries/),
the [InfluxDB v2 client libraries](/influxdb/v1/tools/api_client_libraries/),
-[Telegraf](https://docs.influxdata.com/telegraf/v1/),
+[Telegraf](/telegraf/v1/),
or the client of your choice.
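
For example, a minimal sketch using the InfluxDB v1 Python client library (the connection details, database, and measurement below are placeholders):

```python
from influxdb import InfluxDBClient

# Placeholder connection details for a local InfluxDB 1.x instance.
client = InfluxDBClient(host="localhost", port=8086, database="mydb")
client.create_database("mydb")

# Write a point, then read it back with InfluxQL.
client.write_points([
    {
        "measurement": "cpu",
        "tags": {"host": "server01"},
        "fields": {"usage_idle": 98.2},
    }
])
result = client.query('SELECT "usage_idle" FROM "cpu" WHERE "host" = \'server01\'')
print(list(result.get_points()))
```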
{{% note %}}

View File

@@ -643,7 +643,7 @@ to migrate InfluxDB key-value metadata schemas to earlier 2.x versions when nece
#### Telegraf
-- Add the following new [Telegraf plugins](https://docs.influxdata.com/telegraf/v1/plugins/) to the Load Data page:
+- Add the following new [Telegraf plugins](/telegraf/v1/plugins/) to the Load Data page:
- Alibaba (Aliyun) CloudMonitor Service Statistics (`aliyuncms`)
- AMD ROCm System Management Interface (SMI) (`amd_rocm_smi`)
- Counter-Strike: Global Offensive (CS:GO) (`csgo`)

View File

@@ -328,7 +328,7 @@ following levels:
- **L3**: 4 L2 files compacted together
Parquet files store data partitioned by time and optionally tags
-_(see [Manage data partition](https://docs.influxdata.com/influxdb3/cloud-dedicated/admin/custom-partitions/))_.
+_(see [Manage data partition](/influxdb3/cloud-dedicated/admin/custom-partitions/))_.
After four L0 files accumulate for a partition, they're eligible for compaction.
If the compactor is keeping up with the incoming write load, all compaction
events have exactly four files.
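
As a purely illustrative sketch (not the compactor's actual code), the grouping rule described above, where four files at one level combine into one file at the next, can be modeled like this:

```python
# Illustrative only: once four files accumulate at a level, they compact
# into a single file one level up.
def compact(levels: dict, level: str, next_level: str) -> None:
    while len(levels[level]) >= 4:
        batch, levels[level] = levels[level][:4], levels[level][4:]
        levels[next_level].append(f"{next_level}({'+'.join(batch)})")

levels = {"L0": [f"f{i}" for i in range(16)], "L1": [], "L2": [], "L3": []}
compact(levels, "L0", "L1")   # 16 L0 files -> 4 L1 files
compact(levels, "L1", "L2")   # 4 L1 files  -> 1 L2 file
compact(levels, "L2", "L3")   # only 1 L2 file, so nothing to do yet
print({k: len(v) for k, v in levels.items()})
```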

View File

@@ -67,7 +67,7 @@ by periodically creating, recording, and writing test data into test buckets.
The service periodically executes queries to ensure the data hasn't been lost or corrupted.
A separate instance of this service lives within each {{% product-name %}} cluster.
Additionally, the service creates out-of-band backups in
-[line protocol](https://docs.influxdata.com/influxdb/cloud/reference/syntax/line-protocol/),
+[line protocol](/influxdb/cloud/reference/syntax/line-protocol/),
and ensures the backup data matches the data on disk.
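
For context, a minimal illustrative sketch of what a line protocol record for such test data could look like (the measurement, tag, and field names are made up, and no escaping is handled):

```python
import time

def to_line_protocol(measurement: str, tags: dict, fields: dict, ts_ns: int) -> str:
    # Line protocol layout: <measurement>,<tags> <fields> <timestamp>
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

line = to_line_protocol(
    "integrity_check",                    # hypothetical test measurement
    {"cluster": "cluster01"},             # hypothetical tag
    {"expected": 42.0, "observed": 42.0}, # hypothetical fields
    time.time_ns(),
)
print(line)
# integrity_check,cluster=cluster01 expected=42.0,observed=42.0 1724540000000000000
```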
## Cloud infrastructure

View File

@@ -62,7 +62,7 @@ by periodically creating, recording, and writing test data into test buckets.
The service periodically executes queries to ensure the data hasn't been lost or corrupted.
A separate instance of this service lives within each InfluxDB cluster.
Additionally, the service creates out-of-band backups in
-[line protocol](https://docs.influxdata.com/influxdb/cloud/reference/syntax/line-protocol/),
+[line protocol](/influxdb/cloud/reference/syntax/line-protocol/),
and ensures the backup data matches the data on disk.
## Cloud infrastructure
@@ -229,7 +229,7 @@ User accounts can be created by InfluxData on the InfluxDB Clustered system via
User accounts can create database tokens with data read and/or write permissions.
API requests from custom applications require a database token with sufficient permissions.
For more information on the types of tokens and ways to create them, see
-[Manage tokens](https://docs.influxdata.com/influxdb3/clustered/admin/tokens/).
+[Manage tokens](/influxdb3/clustered/admin/tokens/).
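
As a hedged sketch of such a request (the cluster host, database name, and token are placeholders, and the v2-compatible write path is assumed):

```python
import os
import requests

# Placeholders: your cluster host, target database, and a database token
# with write permission for that database.
HOST = "https://cluster-host.example.com"
DATABASE = "mydb"
TOKEN = os.environ["DATABASE_TOKEN"]

resp = requests.post(
    f"{HOST}/api/v2/write",
    params={"bucket": DATABASE, "precision": "ns"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    data="home,room=kitchen temp=21.5",
)
resp.raise_for_status()  # expect 204 No Content on success
```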
### Role-based access controls (RBAC)

View File

@@ -1419,7 +1419,7 @@ This now enables the use of Azure blob storage.
The "Install InfluxDB Clustered" instructions (formerly known as "GETTING_STARTED")
are now available on the public
-[InfluxDB Clustered documentation](https://docs.influxdata.com/influxdb3/clustered/install/).
+[InfluxDB Clustered documentation](/influxdb3/clustered/install/).
The `example-customer.yml` (also known as `myinfluxdb.yml`) example
configuration file still lives in the release bundle alongside the `RELEASE_NOTES`.

View File

@@ -162,7 +162,7 @@ aliases:
- Add new `auto-attributes` configuration option to BigPanda node.
- Ability to add new headers to HTTP posts directly in `env var` config.
- `Topic queue length` is now configurable. This allows you to set a `topic-buffer-length` parameter in the Kapacitor config file in the
-[alert](https://docs.influxdata.com/kapacitor/v1/administration/configuration/#alert) section. The default is 5000. Minimum length
+[alert](/kapacitor/v1/administration/configuration/#alert) section. The default is 5000. Minimum length
is 1000.
- Add new `address template` to email alert. Email addresses no longer need to be hardcoded; they can be derived directly from the data.

View File

@ -444,7 +444,7 @@ Tracks the disk usage of all hinted handoff queues for a given node (not the byt
a lag occurs between when bytes are processed and when they're removed from the disk.
`queueTotalSize` is used to determine when a node's hinted handoff queue has reached the
-maximum size configured in the [hinted-handoff max-size](https://docs.influxdata.com/enterprise_influxdb/v1.9/administration/configure/config-data-nodes/#max-size) parameter.
+maximum size configured in the [hinted-handoff max-size](/enterprise_influxdb/v1/administration/configure/config-data-nodes/#max-size) parameter.
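
A hedged sketch of checking that metric from the `_internal` monitoring database (the node address is a placeholder, and the `hh` measurement name is an assumption about where these node-level statistics are recorded):

```python
import requests

# Placeholder: the HTTP address of one data node in the cluster.
NODE = "http://data-node-1:8086"

# Assumption: node-level hinted handoff statistics, including queueTotalSize,
# live in the "hh" measurement of _internal; adjust if your stats differ.
query = 'SELECT last("queueTotalSize") FROM "hh" GROUP BY "hostname"'
resp = requests.get(f"{NODE}/query", params={"db": "_internal", "q": query})
resp.raise_for_status()
print(resp.json())
```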
---

View File

@@ -40,11 +40,11 @@ Zapier, August 2022
## Create an InfluxDB check
-[Create an InfluxDB check](https://docs.influxdata.com/influxdb/cloud/monitor-alert/checks/create) to query and alert on a metric you want to monitor.
+[Create an InfluxDB check](/influxdb/cloud/monitor-alert/checks/create) to query and alert on a metric you want to monitor.
Use a default **threshold** check as the task.
_It is possible to use your own task written in Flux code, but for this guide, use the InfluxDB UI to create the check._
-Once the check is completed, [create a notification endpoint](https://docs.influxdata.com/influxdb/cloud/monitor-alert/notification-endpoints/create/). Select **HTTP** as an endpoint.
+Once the check is completed, [create a notification endpoint](/influxdb/cloud/monitor-alert/notification-endpoints/create/). Select **HTTP** as an endpoint.
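
If you prefer to script this step rather than use the UI, a hedged sketch against the v2 API's `/api/v2/notificationEndpoints` route (the URL, token, org ID, and Zapier webhook URL are placeholders):

```python
import requests

# Placeholders: your Cloud region URL, an API token, your org ID, and the
# Zapier webhook URL generated for your Zap.
INFLUX_URL = "https://us-east-1-1.aws.cloud2.influxdata.com"
TOKEN = "MY_API_TOKEN"
ORG_ID = "MY_ORG_ID"

resp = requests.post(
    f"{INFLUX_URL}/api/v2/notificationEndpoints",
    headers={"Authorization": f"Token {TOKEN}"},
    json={
        "name": "zapier-webhook",
        "orgID": ORG_ID,
        "type": "http",
        "method": "POST",
        "authMethod": "none",
        "url": "https://hooks.zapier.com/hooks/catch/XXXX/XXXX/",
    },
)
resp.raise_for_status()
print(resp.json()["id"])
```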
{{< img-hd src="static/img/resources/notification-endpoint.png" alt="Create a check" />}}
{{% caption %}}

View File

@@ -38,7 +38,7 @@ Create a task where you:
1. Import packages and define task options and secrets. Import the following packages:
- [Flux Telegram package](/flux/v0/stdlib/contrib/sranka/telegram/): This package contains the [telegram.message()](/flux/v0/stdlib/contrib/sranka/telegram/message/) function, which sends a single message to a Telegram channel using the Telegram Bot API.
- [Flux InfluxDB secrets package](/flux/v0/stdlib/influxdata/influxdb/secrets/): This package contains the [secrets.get()](/flux/v0/stdlib/influxdata/influxdb/secrets/get/) function which allows you to retrieve secrets from the InfluxDB secret store. Learn how to [manage secrets](/influxdb/v2/admin/secrets/) in InfluxDB to use this package.
-- [Flux InfluxDB monitoring package](https://docs.influxdata.com/flux/v0/stdlib/influxdata/influxdb/monitor/): This package contains functions and tools for monitoring your data.
+- [Flux InfluxDB monitoring package](/flux/v0/stdlib/influxdata/influxdb/monitor/): This package contains functions and tools for monitoring your data.
```js

View File

@@ -62,7 +62,7 @@ about your database server and table schemas in {{% product-name %}}.
> In examples, tables with `"table_name":"system_` are user-created tables for CPU, memory, disk,
> network, and other resource statistics collected and written
> by the user--for example, using the `psutil` Python library or
-> [Telegraf](https://docs.influxdata.com/telegraf/v1/get-started/) to collect
+> [Telegraf](/telegraf/v1/get-started/) to collect
> and write system metrics to an InfluxDB 3 database.
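
As a hedged sketch of that pattern (the host, database, and token are placeholders, the `system_cpu` table name is only an example, and the v2-compatible write path is assumed), collecting one CPU sample with `psutil`:

```python
import os
import time

import psutil
import requests

# Placeholders: your InfluxDB 3 host, target database, and a token with
# write permission.
HOST = "https://cluster-id.a.influxdb.io"
DATABASE = "mydb"
TOKEN = os.environ["INFLUX_TOKEN"]

cpu = psutil.cpu_percent(interval=1)
line = f"system_cpu,host=server01 usage_percent={cpu} {time.time_ns()}"

resp = requests.post(
    f"{HOST}/api/v2/write",
    params={"bucket": DATABASE, "precision": "ns"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    data=line,
)
resp.raise_for_status()
```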
##### Show tables

View File

@@ -64,6 +64,6 @@ prompb.WriteRequest{
prometheus_remote_write,instance=localhost:9090,job=prometheus,quantile=0.99 go_gc_duration_seconds=4.63 1614889298859000000
```
-## For alignment with the [InfluxDB v1.x Prometheus Remote Write Spec](https://docs.influxdata.com/influxdb/v1/supported_protocols/prometheus/#how-prometheus-metrics-are-parsed-in-influxdb)
+## For alignment with the [InfluxDB v1.x Prometheus Remote Write Spec](/influxdb/v1/supported_protocols/prometheus/#how-prometheus-metrics-are-parsed-in-influxdb)
- Use the [Starlark processor rename prometheus remote write script](https://github.com/influxdata/telegraf/blob/master/plugins/processors/starlark/testdata/rename_prometheus_remote_write.star) to rename the measurement to the field name and rename the field name to `value`.
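
To make the intended rename concrete, here is an illustrative Python sketch of the same transformation on one record (this is not the Starlark script itself, and it assumes a single field per line):

```python
# Input: measurement is "prometheus_remote_write" and the Prometheus metric
# name is the field key. Output: the metric name becomes the measurement and
# the field key becomes "value", matching the v1.x parsing behavior.
def rename(line: str) -> str:
    head, fields, ts = line.rsplit(" ", 2)  # measurement+tags, fields, timestamp
    _, tags = head.split(",", 1)            # drop "prometheus_remote_write"
    field_key, field_val = fields.split("=", 1)
    return f"{field_key},{tags} value={field_val} {ts}"

src = ("prometheus_remote_write,instance=localhost:9090,job=prometheus,"
       "quantile=0.99 go_gc_duration_seconds=4.63 1614889298859000000")
print(rename(src))
# go_gc_duration_seconds,instance=localhost:9090,job=prometheus,quantile=0.99 value=4.63 1614889298859000000
```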