Merge branch 'master' into fix-lefthook-patterns

pull/6123/head
Jason Stirnaman 2025-07-07 16:04:57 -05:00 committed by GitHub
commit 1d67b79f03
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
354 changed files with 95758 additions and 9338 deletions


@ -1,4 +1,4 @@
version: 2
version: 2.1
jobs:
build:
docker:
@ -41,7 +41,7 @@ jobs:
- /home/circleci/bin
- run:
name: Hugo Build
command: npx hugo --logLevel info --minify --destination workspace/public
command: yarn hugo --environment production --logLevel info --gc --destination workspace/public
- persist_to_workspace:
root: workspace
paths:
@ -68,7 +68,6 @@ jobs:
when: on_success
workflows:
version: 2
build:
jobs:
- build

44
.context/README.md Normal file

@ -0,0 +1,44 @@
# Context Files for LLMs and AI Tools
This directory contains plans, reports, and other context files that are:
- Used to provide context to LLMs during development
- Not committed to the repository
- Possibly transient or belonging in other repositories
## Directory Structure
- `plans/` - Documentation plans and roadmaps
- `reports/` - Generated reports and analyses
- `research/` - Research notes and findings
- `templates/` - Reusable templates for Claude interactions
## Usage
Place files here that you want to reference--for example, using @ mentions in Claude--such as:
- Documentation planning documents
- API migration guides
- Performance reports
- Architecture decisions
## Example Structure
```
.context/
├── plans/
│ ├── v3.2-release-plan.md
│ └── api-migration-guide.md
├── reports/
│ ├── weekly-progress-2025-07.md
│ └── pr-summary-2025-06.md
├── research/
│ └── competitor-analysis.md
└── templates/
└── release-notes-template.md
```
## Best Practices
1. Use descriptive filenames that indicate the content and date
2. Keep files organized in appropriate subdirectories
3. Consider using date prefixes for time-sensitive content (e.g., `2025-07-01-meeting-notes.md`)
4. Remove outdated files periodically to keep the context relevant


@ -1,13 +1,16 @@
# GitHub Copilot Instructions for InfluxData Documentation
# Instructions for InfluxData Documentation
## Purpose and scope
GitHub Copilot should help document InfluxData products by creating clear, accurate technical content with proper code examples, frontmatter, and formatting.
Help document InfluxData products by creating clear, accurate technical content with proper code examples, frontmatter, and formatting.
## Documentation structure
- **Product version data**: `/data/products.yml`
- **Products**:
- **InfluxData products**:
- InfluxDB 3 Explorer
- Documentation source path: `/content/influxdb3/explorer`
- Published for the web: https://docs.influxdata.com/influxdb3/explorer/
- InfluxDB 3 Core
- Documentation source path: `/content/influxdb3/core`
- Published for the web: https://docs.influxdata.com/influxdb3/core/
@ -92,7 +95,8 @@ GitHub Copilot should help document InfluxData products by creating clear, accur
## Markdown and shortcodes
- Include proper frontmatter for each page:
- Include proper frontmatter for Markdown pages in `content/**/*.md` (except for
shared content files in `content/shared/`):
```yaml
title: # Page title (h1)
@ -180,3 +184,17 @@ Table: keys: [_start, _stop, _field, _measurement]
## Related repositories
- **Internal documentation assistance requests**: https://github.com/influxdata/DAR/issues
## Additional instruction files
For specific workflows and content types, also refer to:
- **InfluxDB 3 code placeholders**: `.github/instructions/influxdb3-code-placeholders.instructions.md` - Guidelines for placeholder formatting, descriptions, and shortcode usage in InfluxDB 3 documentation
- **Contributing guidelines**: `.github/instructions/contributing.instructions.md` - Detailed style guidelines, shortcode usage, frontmatter requirements, and development workflows
- **Content-specific instructions**: Check `.github/instructions/` directory for specialized guidelines covering specific documentation patterns and requirements
## Integration with specialized instructions
When working on InfluxDB 3 documentation (Core/Enterprise), prioritize the placeholder guidelines from `influxdb3-code-placeholders.instructions.md`.
For general documentation structure, shortcodes, and development workflows, follow the comprehensive guidelines in `contributing.instructions.md`.

File diff suppressed because it is too large

9
.gitignore vendored

@ -15,13 +15,18 @@ node_modules
!telegraf-build/templates
!telegraf-build/scripts
!telegraf-build/README.md
/cypress/downloads
/cypress/downloads/*
/cypress/screenshots/*
/cypress/videos/*
test-results.xml
/influxdb3cli-build-scripts/content
.vscode/*
!.vscode/launch.json
.idea
**/config.toml
package-lock.json
tmp
tmp
# Context files for LLMs and AI tools
.context/*
!.context/README.md


@ -3,3 +3,4 @@
**/.svn
**/.hg
**/node_modules
assets/jsconfig.json

47
.vscode/launch.json vendored Normal file

@ -0,0 +1,47 @@
{
"version": "0.2.0",
"configurations": [
{
"name": "Debug JS (debug-helpers)",
"type": "chrome",
"request": "launch",
"url": "http://localhost:1313",
"webRoot": "${workspaceFolder}",
"skipFiles": [
"<node_internals>/**"
],
"sourceMaps": false,
"trace": true,
"smartStep": false
},
{
"name": "Debug JS (source maps)",
"type": "chrome",
"request": "launch",
"url": "http://localhost:1313",
"webRoot": "${workspaceFolder}",
"sourceMaps": true,
"sourceMapPathOverrides": {
"*": "${webRoot}/assets/js/*",
"main.js": "${webRoot}/assets/js/main.js",
"page-context.js": "${webRoot}/assets/js/page-context.js",
"ask-ai-trigger.js": "${webRoot}/assets/js/ask-ai-trigger.js",
"ask-ai.js": "${webRoot}/assets/js/ask-ai.js",
"utils/*": "${webRoot}/assets/js/utils/*",
"services/*": "${webRoot}/assets/js/services/*"
},
"skipFiles": [
"<node_internals>/**",
"node_modules/**",
"chrome-extension://**"
],
"trace": true,
"smartStep": true,
"disableNetworkCache": true,
"userDataDir": "${workspaceFolder}/.vscode/chrome-user-data",
"runtimeArgs": [
"--disable-features=VizDisplayCompositor"
]
}
]
}

25
CLAUDE.md Normal file

@ -0,0 +1,25 @@
# Instructions for InfluxData Documentation
## Purpose and scope
Claude should help document InfluxData products by creating clear, accurate technical content with proper code examples, frontmatter, and formatting.
## Project overview
See @README.md
## Available NPM commands
@package.json
## Instructions for contributing
See @.github/copilot-instructions.md for style guidelines and
product-specific documentation paths and URLs managed in this project.
See @.github/instructions/contributing.instructions.md for contributing
information including using shortcodes and running tests.
See @.github/instructions/influxdb3-code-placeholders.instructions.md for using
placeholders in code samples and CLI commands.


@ -363,6 +363,9 @@ list_query_example:# Code examples included with article descriptions in childre
# References to examples in data/query_examples
canonical: # Path to canonical page, overrides auto-gen'd canonical URL
v2: # Path to v2 equivalent page
alt_links: # Alternate pages in other products/versions for cross-product navigation
cloud-dedicated: /influxdb3/cloud-dedicated/path/to/page/
core: /influxdb3/core/path/to/page/
prepend: # Prepend markdown content to an article (especially powerful with cascade)
block: # (Optional) Wrap content in a block style (note, warn, cloud)
content: # Content to prepend to article
@ -454,6 +457,29 @@ add the following frontmatter to the 1.x page:
v2: /influxdb/v2.0/get-started/
```
### Alternative links for cross-product navigation
Use the `alt_links` frontmatter to specify equivalent pages in other InfluxDB products--for
example, when a page exists at a different path in another version or when the
feature doesn't exist in that product.
This enables the product switcher to navigate users to the corresponding page when they
switch between products. If a page doesn't exist in another product (for example, an
Enterprise-only feature), point to the nearest parent page if relevant.
```yaml
alt_links:
cloud-dedicated: /influxdb3/cloud-dedicated/admin/tokens/create-token/
cloud-serverless: /influxdb3/cloud-serverless/admin/tokens/create-token/
core: /influxdb3/core/reference/cli/influxdb3/update/ # Points to parent if exact page doesn't exist
```
Supported product keys for InfluxDB 3:
- `core`
- `enterprise`
- `cloud-serverless`
- `cloud-dedicated`
- `clustered`
### Prepend and append content to a page
Use the `prepend` and `append` frontmatter to add content to the top or bottom of a page.
@ -1667,7 +1693,7 @@ The shortcode takes a regular expression for matching placeholder names.
Use the `code-placeholder-key` shortcode to format the placeholder names in
text that describes the placeholder--for example:
```
```markdown
{{% code-placeholders "DATABASE_NAME|USERNAME|PASSWORD_OR_TOKEN|API_TOKEN|exampleuser@influxdata.com" %}}
```sh
curl --request POST http://localhost:8086/write?db=DATABASE_NAME \
@ -1691,3 +1717,83 @@ InfluxDB API documentation when documentation is deployed.
Redoc generates HTML documentation using the InfluxDB `swagger.yml`.
For more information about generating InfluxDB API documentation, see the
[API Documentation README](https://github.com/influxdata/docs-v2/tree/master/api-docs#readme).
## JavaScript in the documentation UI
The InfluxData documentation UI uses JavaScript with ES6+ syntax and
`assets/js/main.js` as the entry point to import modules from
`assets/js`.
Only `assets/js/main.js` should be imported in HTML files.
`assets/js/main.js` registers components and initializes them on page load.
If you're adding UI functionality that requires JavaScript, follow these steps:
1. In your HTML file, add a `data-component` attribute to the element that
should be initialized by your JavaScript code. For example:
```html
<div data-component="my-component"></div>
```
2. Following the component pattern, create a single-purpose JavaScript module
(`assets/js/components/my-component.js`)
that exports a single function that receives the component element and initializes it.
3. In `assets/js/main.js`, import the module and register the component to ensure
the component is initialized on page load.
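A minimal sketch of this registration flow, assuming invented function names for illustration (the repo's actual helpers in `assets/js/main.js` may differ):

```javascript
// Hypothetical sketch of the data-component pattern described above.
// Real modules live in assets/js/components/ and are registered in main.js.

const componentRegistry = new Map();

// main.js: map a data-component name to its initializer
function registerComponent(name, initFn) {
  componentRegistry.set(name, initFn);
}

// main.js: on page load, initialize every element that declares a component
function initComponents(elements) {
  for (const el of elements) {
    const init = componentRegistry.get(el.dataset.component);
    if (init) init(el);
  }
}

// components/my-component.js: single-purpose initializer for one component
function initMyComponent(el) {
  el.initialized = true;
}

registerComponent('my-component', initMyComponent);
```

In the real entry point, `initComponents` would run against `document.querySelectorAll('[data-component]')` once the page loads.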
### Debugging JavaScript
To debug JavaScript code used in the InfluxData documentation UI, choose one of the following methods:
- Use source maps and the Chrome DevTools debugger.
- Use debug helpers that provide breakpoints and console logging as a workaround or alternative for using source maps and the Chrome DevTools debugger.
#### Using source maps and Chrome DevTools debugger
1. In VS Code, select Run > Start Debugging.
2. Select the "Debug JS (source maps)" configuration.
3. Click the play button to start the debugger.
4. Set breakpoints in the JavaScript source files--files in the
`assets/js/ns-hugo-imp:` namespace--in the
VS Code editor or in the Chrome Developer Tools Sources panel:
- In the VS Code Debugger panel > "Loaded Scripts" section, find the
`assets/js/ns-hugo-imp:` namespace.
- In the Chrome Developer Tools Sources panel, expand
`js/ns-hugo-imp:/<YOUR_WORKSPACE_ROOT>/assets/js/`.
#### Using debug helpers
1. In your JavaScript module, import debug helpers from `assets/js/utils/debug-helpers.js`.
These helpers provide breakpoints and console logging without requiring source maps.
2. Insert debug statements by calling the helper functions in your code--for example:
```js
import { debugLog, debugBreak, debugInspect } from './utils/debug-helpers.js';
const data = debugInspect(someData, 'Data');
debugLog('Processing data', 'myFunction');
function processData() {
// Add a breakpoint that works with DevTools
debugBreak();
// Your existing code...
}
```
3. Start Hugo in development mode--for example:
```bash
yarn hugo server
```
4. In VS Code, go to Run > Start Debugging, and select the "Debug JS (debug-helpers)" configuration.
VS Code uses the configuration in `launch.json` to launch the site in Chrome
and attach the debugger to the Developer Tools console.
Make sure to remove the debug statements before merging your changes.
The debug helpers are designed to be used in development and should not be used in production.
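As a rough idea of what such helpers can look like--a sketch only; the actual implementations in `assets/js/utils/debug-helpers.js` may differ:

```javascript
// Hypothetical minimal debug helpers, assumed for illustration.

function debugLog(message, context = '') {
  // Prefixed log line so debug output is easy to find and strip later.
  console.log(`[debug${context ? `:${context}` : ''}] ${message}`);
}

function debugBreak() {
  // Pauses execution when DevTools (or an attached debugger) is open;
  // a no-op otherwise.
  debugger; // eslint-disable-line no-debugger
}

function debugInspect(value, label = 'value') {
  // Log a labeled snapshot, then pass the value through so the call
  // can wrap expressions inline.
  debugLog(`${label}: ${JSON.stringify(value)}`);
  return value;
}
```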


@ -62,7 +62,7 @@ function showHelp {
subcommand=$1
case "$subcommand" in
cloud-dedicated-v2|cloud-dedicated-management|cloud-serverless-v2|clustered-v2|cloud-v2|v2|v1-compat|core-v3|enterprise-v3|all)
cloud-dedicated-v2|cloud-dedicated-management|cloud-serverless-v2|clustered-management|clustered-v2|cloud-v2|v2|v1-compat|core-v3|enterprise-v3|all)
product=$1
shift
@ -187,6 +187,22 @@ function updateCloudServerlessV2 {
postProcess $outFile 'influxdb3/cloud-serverless/.config.yml' v2@2
}
function updateClusteredManagement {
outFile="influxdb3/clustered/management/openapi.yml"
if [[ -z "$baseUrl" ]];
then
echo "Using existing $outFile"
else
# Clone influxdata/granite and fetch the latest openapi.yaml file.
echo "Fetching the latest openapi.yaml file from influxdata/granite"
tmp_dir=$(mktemp -d)
git clone --depth 1 --branch main https://github.com/influxdata/granite.git "$tmp_dir"
cp "$tmp_dir/openapi.yaml" "$outFile"
rm -rf "$tmp_dir"
fi
postProcess $outFile 'influxdb3/clustered/.config.yml' management@0
}
function updateClusteredV2 {
outFile="influxdb3/clustered/v2/ref.yml"
if [[ -z "$baseUrl" ]];
@ -278,6 +294,9 @@ then
elif [ "$product" = "cloud-serverless-v2" ];
then
updateCloudServerlessV2
elif [ "$product" = "clustered-management" ];
then
updateClusteredManagement
elif [ "$product" = "clustered-v2" ];
then
updateClusteredV2
@ -305,6 +324,6 @@ then
updateOSSV2
updateV1Compat
else
echo "Provide a product argument: cloud-v2, cloud-serverless-v2, cloud-dedicated-v2, cloud-dedicated-management, clustered-v2, core-v3, enterprise-v3, v2, v1-compat, or all."
echo "Provide a product argument: cloud-v2, cloud-serverless-v2, cloud-dedicated-v2, cloud-dedicated-management, clustered-management, clustered-v2, core-v3, enterprise-v3, v2, v1-compat, or all."
showHelp
fi


@ -10,7 +10,5 @@ apis:
root: v2/ref.yml
x-influxdata-docs-aliases:
- /influxdb/v2/api/
v1-compatibility@2:
root: v1-compatibility/swaggerV1Compat.yml
x-influxdata-docs-aliases:
- /influxdb/v2/api/v1-compatibility/
- /influxdb/v2/api/v1/


@ -6,5 +6,6 @@
- Headers
- Pagination
- Response codes
- Compatibility endpoints
- name: All endpoints
tags: []


@ -58,6 +58,7 @@ tags:
- [Manage API tokens](/influxdb/v2/security/tokens/)
- [Assign a token to a specific user](/influxdb/v2/security/tokens/create-token/)
name: Authorizations (API tokens)
- name: Authorizations (v1-compatible)
- name: Backup
- description: |
Store your data in InfluxDB [buckets](/influxdb/v2/reference/glossary/#bucket).
@ -88,6 +89,15 @@ tags:
| `orgID` | 16-byte string | The organization ID ([find your organization](/influxdb/v2/organizations/view-orgs/). |
name: Common parameters
x-traitTag: true
- name: Compatibility endpoints
description: |
InfluxDB v2 provides a v1-compatible API for backward compatibility with InfluxDB 1.x clients and integrations.
Use these endpoints with InfluxDB 1.x client libraries and third-party integrations such as Grafana, Telegraf, and other tools designed for InfluxDB 1.x. The compatibility layer maps InfluxDB 1.x concepts (databases, retention policies) to InfluxDB v2 resources (buckets, organizations) through database retention policy (DBRP) mappings.
- [Write data (v1-compatible)](#tag/Write-data-(v1-compatible))
- [Query data using InfluxQL (v1-compatible)](#tag/Query-data-(v1-compatible))
- [Manage v1-compatible users and permissions](#tag/Authorizations-(v1-compatible))
- name: Config
- name: Dashboards
- name: Data I/O endpoints
@ -99,7 +109,7 @@ tags:
databases and retention policies are mapped to buckets using the
database and retention policy (DBRP) mapping service.
The DBRP mapping service uses the database and retention policy
specified in 1.x compatibility API requests to route operations to a bucket.
specified in v1 compatibility API requests to route operations to a bucket.
### Related guides
@ -139,9 +149,6 @@ tags:
x-traitTag: true
- name: Health
- name: Labels
- name: Legacy Authorizations
- name: Legacy Query
- name: Legacy Write
- name: Metrics
- name: NotificationEndpoints
- name: NotificationRules
@ -194,6 +201,7 @@ tags:
- description: |
Retrieve data, analyze queries, and get query suggestions.
name: Query
- name: Query data (v1-compatible)
- description: |
See the [**API Quick Start**](/influxdb/v2/api-guide/api_intro/)
to get up and running authenticating with tokens, writing to buckets, and querying data.
@ -314,6 +322,7 @@ tags:
- description: |
Write time series data to [buckets](/influxdb/v2/reference/glossary/#bucket).
name: Write
- name: Write data (v1-compatible)
paths:
/api/v2:
get:
@ -12756,7 +12765,7 @@ paths:
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: '#/components/schemas/Error'
description: The request was well-formed, but some or all the points were rejected due to semantic errors--for example, schema conflicts or retention policy violations. Error message contains details for one or more rejected points.
'429':
description: |
@ -12869,7 +12878,7 @@ paths:
description: Unexpected error
summary: List all legacy authorizations
tags:
- Legacy Authorizations
- Authorizations (v1-compatible)
post:
description: |
Creates a legacy authorization and returns the legacy authorization.
@ -12932,7 +12941,7 @@ paths:
description: Unexpected error
summary: Create a legacy authorization
tags:
- Legacy Authorizations
- Authorizations (v1-compatible)
servers:
- url: /private
/legacy/authorizations/{authID}:
@ -12954,7 +12963,7 @@ paths:
description: Unexpected error
summary: Delete a legacy authorization
tags:
- Legacy Authorizations
- Authorizations (v1-compatible)
get:
operationId: GetLegacyAuthorizationsID
parameters:
@ -12977,7 +12986,7 @@ paths:
description: Unexpected error
summary: Retrieve a legacy authorization
tags:
- Legacy Authorizations
- Authorizations (v1-compatible)
patch:
operationId: PatchLegacyAuthorizationsID
parameters:
@ -13007,7 +13016,7 @@ paths:
description: Unexpected error
summary: Update a legacy authorization to be active or inactive
tags:
- Legacy Authorizations
- Authorizations (v1-compatible)
servers:
- url: /private
/legacy/authorizations/{authID}/password:
@ -13040,94 +13049,29 @@ paths:
description: Unexpected error
summary: Set a legacy authorization password
tags:
- Legacy Authorizations
- Authorizations (v1-compatible)
servers:
- url: /private
/query:
get:
description: Queries InfluxDB using InfluxQL.
summary: Execute InfluxQL query (v1-compatible)
description: |
Executes an InfluxQL query to retrieve data from the specified database.
This endpoint is compatible with InfluxDB 1.x client libraries and third-party integrations such as Grafana.
Use query parameters to specify the database and the InfluxQL query.
operationId: GetLegacyQuery
parameters:
- $ref: '#/components/parameters/TraceSpan'
- in: header
name: Accept
schema:
default: application/json
description: |
Media type that the client can understand.
**Note**: With `application/csv`, query results include [**unix timestamps**](/influxdb/v2/reference/glossary/#unix-timestamp) instead of [RFC3339 timestamps](/influxdb/v2/reference/glossary/#rfc3339-timestamp).
enum:
- application/json
- application/csv
- text/csv
- application/x-msgpack
type: string
- description: The content encoding (usually a compression algorithm) that the client can understand.
in: header
name: Accept-Encoding
schema:
default: identity
description: The content coding. Use `gzip` for compressed data or `identity` for unmodified, uncompressed data.
enum:
- gzip
- identity
type: string
- in: header
name: Content-Type
schema:
enum:
- application/json
type: string
- description: The InfluxDB 1.x username to authenticate the request.
in: query
name: u
schema:
type: string
- description: The InfluxDB 1.x password to authenticate the request.
in: query
name: p
schema:
type: string
- description: |
The database to query data from.
This is mapped to an InfluxDB [bucket](/influxdb/v2/reference/glossary/#bucket).
For more information, see [Database and retention policy mapping](/influxdb/v2/api/influxdb-1x/dbrp/).
in: query
name: db
required: true
schema:
type: string
- description: |
The retention policy to query data from.
This is mapped to an InfluxDB [bucket](/influxdb/v2/reference/glossary/#bucket).
For more information, see [Database and retention policy mapping](/influxdb/v2/api/influxdb-1x/dbrp/).
in: query
name: rp
schema:
type: string
- description: The InfluxQL query to execute. To execute multiple queries, delimit queries with a semicolon (`;`).
in: query
name: q
required: true
schema:
type: string
- description: |
A unix timestamp precision.
Formats timestamps as [unix (epoch) timestamps](/influxdb/v2/reference/glossary/#unix-timestamp) with the specified precision
instead of [RFC3339 timestamps](/influxdb/v2/reference/glossary/#rfc3339-timestamp) with nanosecond precision.
in: query
name: epoch
schema:
enum:
- ns
- u
- µ
- ms
- s
- m
- h
type: string
- $ref: '#/components/parameters/AuthV1Username'
- $ref: '#/components/parameters/AuthV1Password'
- $ref: '#/components/parameters/Accept'
- $ref: '#/components/parameters/AcceptEncoding'
- $ref: '#/components/parameters/Content-Type'
- $ref: '#/components/parameters/V1Database'
- $ref: '#/components/parameters/V1RetentionPolicy'
- $ref: '#/components/parameters/V1Epoch'
- $ref: '#/components/parameters/V1Query'
responses:
'200':
content:
@ -13191,19 +13135,87 @@ paths:
schema:
$ref: '#/components/schemas/Error'
description: Error processing query
summary: Query with the 1.x compatibility API
tags:
- Legacy Query
- Query data (v1-compatible)
post:
operationId: PostQueryV1
summary: Execute InfluxQL query (v1-compatible)
description: |
Executes an InfluxQL query to retrieve data from the specified database.
This endpoint is compatible with InfluxDB 1.x client libraries and third-party integrations such as Grafana.
Use query parameters to specify the database and the InfluxQL query.
tags:
- Query data (v1-compatible)
requestBody:
description: InfluxQL query to execute.
content:
text/plain:
schema:
type: string
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthV1Username'
- $ref: '#/components/parameters/AuthV1Password'
- $ref: '#/components/parameters/Accept'
- $ref: '#/components/parameters/AcceptEncoding'
- $ref: '#/components/parameters/Content-Type'
- $ref: '#/components/parameters/V1Database'
- $ref: '#/components/parameters/V1RetentionPolicy'
- $ref: '#/components/parameters/V1Epoch'
responses:
'200':
description: Query results
headers:
Content-Encoding:
description: The Content-Encoding entity header is used to compress the media-type. When present, its value indicates which encodings were applied to the entity-body
schema:
type: string
description: Indicates whether the response body is gzip-encoded (`gzip`) or unencoded (`identity`).
default: identity
enum:
- gzip
- identity
Trace-Id:
description: The Trace-Id header reports the request's trace ID, if one was generated.
schema:
type: string
description: Specifies the request's trace ID.
content:
application/csv:
schema:
$ref: '#/components/schemas/InfluxqlCsvResponse'
application/json:
schema:
$ref: '#/components/schemas/InfluxqlJsonResponse'
text/csv:
schema:
$ref: '#/components/schemas/InfluxqlCsvResponse'
examples:
influxql-chunk_size_2:
value: |
{"results":[{"statement_id":0,"series":[{"name":"mymeas","columns":["time","myfield","mytag"],"values":[["2016-05-19T18:37:55Z",90,"1"],["2016-05-19T18:37:56Z",90,"1"]],"partial":true}],"partial":true}]}
{"results":[{"statement_id":0,"series":[{"name":"mymeas","columns":["time","myfield","mytag"],"values":[["2016-05-19T18:37:57Z",90,"1"],["2016-05-19T18:37:58Z",90,"1"]]}]}]}
application/x-msgpack:
schema:
type: string
format: binary
'429':
description: Token is temporarily over quota. The Retry-After header describes when to try the read again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
default:
description: Error processing query
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
/write:
post:
description: |-
Writes line protocol to the specified bucket.
This endpoint provides backward compatibility for InfluxDB 1.x write workloads using tools such as InfluxDB 1.x client libraries, the Telegraf `outputs.influxdb` output plugin, or third-party tools.
Use this endpoint to send data in [line protocol](https://docs.influxdata.com/influxdb/v2/reference/syntax/line-protocol/) format to InfluxDB.
Use query parameters to specify options for writing data.
operationId: PostLegacyWrite
parameters:
- $ref: '#/components/parameters/TraceSpan'
@ -13281,7 +13293,7 @@ paths:
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: '#/components/schemas/Error'
description: The request was well-formed, but some or all the points were rejected due to semantic errors--for example, schema conflicts or retention policy violations. Error message contains details for one or more rejected points.
'429':
description: Token is temporarily over quota. The Retry-After header describes when to try the write again.
@ -13305,9 +13317,31 @@ paths:
schema:
$ref: '#/components/schemas/Error'
description: Internal server error
summary: Write time series data into InfluxDB in a V1-compatible format
summary: Write data using a v1-compatible request
description: |
Writes data in [line protocol](/influxdb/v2/reference/syntax/line-protocol/) syntax to the specified bucket using a v1-compatible request.
This endpoint provides backward compatibility for InfluxDB 1.x write workloads using tools such as InfluxDB 1.x client libraries, the Telegraf `outputs.influxdb` output plugin, or third-party tools.
Use query parameters to specify options for writing data.
#### InfluxDB Cloud
- Validates and queues the request.
- Handles the write asynchronously--the write might not have completed yet.
- Returns a `Retry-After` header that describes when to try the write again.
#### InfluxDB OSS v2
- Validates the request and handles the write synchronously.
- If all points were written successfully, responds with an HTTP `2xx` status code.
- If any points were rejected, responds with HTTP `4xx` status code and details about the problem.
#### Related guides
- [Write data with the InfluxDB API](/influxdb/v2/write-data/developer-tools/api)
tags:
- Legacy Write
- Write data (v1-compatible)
components:
examples:
AuthorizationPostRequest:
@ -13412,6 +13446,96 @@ components:
required: false
schema:
type: string
Accept:
in: header
name: Accept
schema:
default: application/json
description: |
Media type that the client can understand.
**Note**: With `application/csv`, query results include [**unix timestamps**](/influxdb/v2/reference/glossary/#unix-timestamp) instead of [RFC3339 timestamps](/influxdb/v2/reference/glossary/#rfc3339-timestamp).
enum:
- application/json
- application/csv
- text/csv
- application/x-msgpack
type: string
AcceptEncoding:
description: The content encoding (usually a compression algorithm) that the client can understand.
in: header
name: Accept-Encoding
schema:
default: identity
description: The content coding. Use `gzip` for compressed data or `identity` for unmodified, uncompressed data.
enum:
- gzip
- identity
type: string
Content-Type:
in: header
name: Content-Type
schema:
enum:
- application/json
type: string
AuthV1Username:
description: |
The InfluxDB 1.x username to authenticate the request.
If you provide an API token as the password, `u` is required, but can be any value.
in: query
name: u
schema:
type: string
AuthV1Password:
description: The InfluxDB 1.x password to authenticate the request.
in: query
name: p
schema:
type: string
V1Database:
description: |
The database to query data from.
This is mapped to an InfluxDB [bucket](/influxdb/v2/reference/glossary/#bucket).
For more information, see [Database and retention policy mapping](/influxdb/v2/api/influxdb-1x/dbrp/).
in: query
name: db
required: true
schema:
type: string
V1RetentionPolicy:
description: |
The retention policy to query data from.
This is mapped to an InfluxDB [bucket](/influxdb/v2/reference/glossary/#bucket).
For more information, see [Database and retention policy mapping](/influxdb/v2/api/influxdb-1x/dbrp/).
in: query
name: rp
schema:
type: string
V1Query:
description: The InfluxQL query to execute. To execute multiple queries, delimit queries with a semicolon (`;`).
in: query
name: q
required: true
schema:
type: string
V1Epoch:
description: |
A unix timestamp precision.
Formats timestamps as [unix (epoch) timestamps](/influxdb/v2/reference/glossary/#unix-timestamp) with the specified precision
instead of [RFC3339 timestamps](/influxdb/v2/reference/glossary/#rfc3339-timestamp) with nanosecond precision.
in: query
name: epoch
schema:
enum:
- ns
- u
- µ
- ms
- s
- m
- h
type: string
responses:
AuthorizationError:
content:
@ -20058,13 +20182,16 @@ x-tagGroups:
- Headers
- Pagination
- Response codes
- Compatibility endpoints
- name: All endpoints
tags:
- Authorizations (API tokens)
- Authorizations (v1-compatible)
- Backup
- Buckets
- Cells
- Checks
- Compatibility endpoints
- Config
- Dashboards
- DBRPs
@ -20072,15 +20199,13 @@ x-tagGroups:
- Delete
- Health
- Labels
- Legacy Authorizations
- Legacy Query
- Legacy Write
- Metrics
- NotificationEndpoints
- NotificationRules
- Organizations
- Ping
- Query
- Query data (v1-compatible)
- Ready
- RemoteConnections
- Replications
@ -20102,3 +20227,4 @@ x-tagGroups:
- Variables
- Views
- Write
- Write data (v1-compatible)
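The v1-compatible query parameters defined in this spec (`db`, `rp`, `q`, `epoch`, plus `u`/`p` for credentials) can be assembled into a `/query` request URL--a hedged sketch; the helper name is invented for illustration:

```javascript
// Hypothetical helper that builds a v1-compatible /query URL from the
// parameters documented in the spec (db and q are required; rp and epoch
// are optional).
function buildV1QueryUrl(baseUrl, { db, q, rp, epoch }) {
  if (!db || !q) throw new Error('db and q are required');
  const params = new URLSearchParams({ db, q });
  if (rp) params.set('rp', rp);
  if (epoch) params.set('epoch', epoch);
  return `${baseUrl}/query?${params.toString()}`;
}
```

For example, `buildV1QueryUrl('http://localhost:8086', { db: 'mydb', q: 'SELECT * FROM mymeas' })` produces a URL a v1 client could request with its token in the `Authorization` header or `u`/`p` parameters.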


@ -1,6 +1,6 @@
- name: Using the Management API
tags:
- Authentication
- Examples
- Quickstart
- name: All endpoints
tags: []


@ -7,10 +7,10 @@ info:
This documentation is generated from the
InfluxDB OpenAPI specification.
version: ''
license:
name: MIT
url: https://opensource.org/licenses/MIT
version: ''
contact:
name: InfluxData
url: https://www.influxdata.com
@ -31,7 +31,7 @@ tags:
- name: Authentication
x-traitTag: true
description: |
The InfluxDB Management API endpoints require the following credentials:
With InfluxDB 3 Cloud Dedicated, the InfluxDB Management API endpoints require the following credentials:
- `ACCOUNT_ID`: The ID of the [account](/influxdb3/cloud-dedicated/get-started/setup/#request-an-influxdb-cloud-dedicated-cluster) that the cluster belongs to. To view account ID and cluster ID, [list cluster details](/influxdb3/cloud-dedicated/admin/clusters/list/#detailed-output-in-json).
- `CLUSTER_ID`: The ID of the [cluster](/influxdb3/cloud-dedicated/get-started/setup/#request-an-influxdb-cloud-dedicated-cluster) that you want to manage. To view account ID and cluster ID, [list cluster details](/influxdb3/cloud-dedicated/admin/clusters/list/#detailed-output-in-json).
@ -45,7 +45,7 @@ tags:
description: Manage database read/write tokens for a cluster
- name: Databases
description: Manage databases for a cluster
- name: Example
- name: Quickstart
x-traitTag: true
description: |
The following example script shows how to use `curl` to make database and token management requests:
@ -630,7 +630,7 @@ paths:
maxTables: 300
maxColumnsPerTable: 150
retentionPeriod: 600000000000
maxTablsOnly:
maxTablesOnly:
summary: Update Max Tables Only
value:
maxTables: 300
@ -681,7 +681,7 @@ paths:
maxTables: 300
maxColumnsPerTable: 150
retentionPeriod: 600000000000
maxTablsOnly:
maxTablesOnly:
summary: Update Max Tables Only
value:
accountId: 11111111-1111-4111-8111-111111111111
@ -975,6 +975,10 @@ paths:
$ref: '#/components/schemas/DatabaseTokenPermissions'
createdAt:
$ref: '#/components/schemas/DatabaseTokenCreatedAt'
expiresAt:
$ref: '#/components/schemas/DatabaseTokenExpiresAt'
revokedAt:
$ref: '#/components/schemas/DatabaseTokenRevokedAt'
required:
- accountId
- clusterId
@ -1078,6 +1082,8 @@ paths:
$ref: '#/components/schemas/DatabaseTokenDescription'
permissions:
$ref: '#/components/schemas/DatabaseTokenPermissions'
expiresAt:
$ref: '#/components/schemas/DatabaseTokenExpiresAt'
required:
- description
examples:
@ -1127,6 +1133,10 @@ paths:
$ref: '#/components/schemas/DatabaseTokenCreatedAt'
accessToken:
$ref: '#/components/schemas/DatabaseTokenAccessToken'
expiresAt:
$ref: '#/components/schemas/DatabaseTokenExpiresAt'
revokedAt:
$ref: '#/components/schemas/DatabaseTokenRevokedAt'
required:
- accountId
- clusterId
@ -1270,6 +1280,10 @@ paths:
$ref: '#/components/schemas/DatabaseTokenPermissions'
createdAt:
$ref: '#/components/schemas/DatabaseTokenCreatedAt'
expiresAt:
$ref: '#/components/schemas/DatabaseTokenExpiresAt'
revokedAt:
$ref: '#/components/schemas/DatabaseTokenRevokedAt'
required:
- accountId
- clusterId
@ -1427,6 +1441,10 @@ paths:
$ref: '#/components/schemas/DatabaseTokenPermissions'
createdAt:
$ref: '#/components/schemas/DatabaseTokenCreatedAt'
expiresAt:
$ref: '#/components/schemas/DatabaseTokenExpiresAt'
revokedAt:
$ref: '#/components/schemas/DatabaseTokenRevokedAt'
required:
- accountId
- clusterId
@ -1876,6 +1894,18 @@ components:
examples:
- '2023-12-21T17:32:28.000Z'
- '2024-03-02T04:20:19.000Z'
DatabaseTokenExpiresAt:
description: |
The date and time that the database token expires, if applicable
Uses RFC3339 format
$ref: '#/components/schemas/DateTimeRfc3339'
DatabaseTokenRevokedAt:
description: |
The date and time that the database token was revoked, if applicable
Uses RFC3339 format
$ref: '#/components/schemas/DateTimeRfc3339'
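The new `expiresAt` and `revokedAt` fields are optional RFC 3339 timestamps. A sketch of checking them on a token response body (field names come from the schemas above; the values and the `is_active` helper are hypothetical):

```python
from datetime import datetime, timezone

# A database-token response body using the new optional timestamp fields.
token = {
    "description": "read-only token",
    "createdAt": "2023-12-21T17:32:28.000Z",
    "expiresAt": "2024-03-02T04:20:19.000Z",  # optional, RFC 3339
    "revokedAt": None,                        # optional, RFC 3339 when set
}

def is_active(token: dict, now: datetime) -> bool:
    """A token is active if it is not revoked and not past its expiry."""
    if token.get("revokedAt"):
        return False
    expires = token.get("expiresAt")
    if expires:
        return now < datetime.fromisoformat(expires.replace("Z", "+00:00"))
    return True

print(is_active(token, datetime(2024, 1, 1, tzinfo=timezone.utc)))  # True
```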
DatabaseTokenAccessToken:
description: |
The access token that can be used to authenticate query and write requests to the cluster
@ -1986,7 +2016,7 @@ x-tagGroups:
- name: Using the Management API
tags:
- Authentication
- Examples
- Quickstart
- name: All endpoints
tags:
- Database tokens

View File

@ -6,6 +6,8 @@ extends:
x-influxdata-product-name: InfluxDB 3 Clustered
apis:
management@0:
root: management/openapi.yml
v2@2:
root: v2/ref.yml
x-influxdata-docs-aliases:

View File

@ -0,0 +1,15 @@
title: InfluxDB 3 Clustered Management API
x-influxdata-short-title: Management API
description: |
The Management API for InfluxDB 3 Clustered provides a programmatic interface for managing an InfluxDB 3 cluster.
The Management API lets you integrate functions such as creating and managing databases, permissions, and tokens into your workflow or application.
This documentation is generated from the
InfluxDB 3 Management API OpenAPI specification.
license:
name: MIT
url: 'https://opensource.org/licenses/MIT'
contact:
name: InfluxData
url: https://www.influxdata.com
email: support@influxdata.com

View File

@ -0,0 +1,8 @@
- url: 'https://{baseurl}/api/v0'
description: InfluxDB 3 Clustered Management API URL
variables:
baseurl:
enum:
- 'console.influxdata.com'
default: 'console.influxdata.com'
description: InfluxDB 3 Clustered Console URL

View File

@ -0,0 +1,6 @@
- name: Using the Management API
tags:
- Authentication
- Quickstart
- name: All endpoints
tags: []

File diff suppressed because it is too large

View File

@ -922,9 +922,25 @@ paths:
summary: Delete a database
description: |
Soft deletes a database.
The database is scheduled for deletion and unavailable for querying.
The database is scheduled for deletion and unavailable for querying.
Use the `hard_delete_at` parameter to schedule a hard deletion.
parameters:
- $ref: '#/components/parameters/db'
- name: hard_delete_at
in: query
required: false
schema:
type: string
format: date-time
description: |
Schedule the database for hard deletion at the specified time.
If not provided, the database will be soft deleted.
Use ISO 8601 date-time format (for example, "2025-12-31T23:59:59Z").
#### Deleting a database cannot be undone
Deleting a database is a destructive action.
Once a database is deleted, data stored in that database cannot be recovered.
responses:
'200':
description: Success. Database deleted.
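A sketch of building the delete request with the new `hard_delete_at` query parameter (the host value is hypothetical; the `db` parameter name and the date-time format follow the hunk above):

```python
from urllib.parse import urlencode
from typing import Optional

HOST = "http://localhost:8181"  # hypothetical InfluxDB 3 host

def delete_database_url(db: str, hard_delete_at: Optional[str] = None) -> str:
    """Build the DELETE /api/v3/configure/database URL.

    Without hard_delete_at the delete is a soft delete; with it, the
    database is scheduled for hard deletion at the given ISO 8601 time.
    """
    params = {"db": db}
    if hard_delete_at:
        params["hard_delete_at"] = hard_delete_at
    return f"{HOST}/api/v3/configure/database?{urlencode(params)}"

print(delete_database_url("mydb", "2025-12-31T23:59:59Z"))
```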
@ -961,7 +977,13 @@ paths:
summary: Delete a table
description: |
Soft deletes a table.
The table is scheduled for deletion and unavailable for querying.
The table is scheduled for deletion and unavailable for querying.
Use the `hard_delete_at` parameter to schedule a hard deletion.
#### Deleting a table cannot be undone
Deleting a table is a destructive action.
Once a table is deleted, data stored in that table cannot be recovered.
parameters:
- $ref: '#/components/parameters/db'
- name: table
@ -969,6 +991,16 @@ paths:
required: true
schema:
type: string
- name: hard_delete_at
in: query
required: false
schema:
type: string
format: date-time
description: |
Schedule the table for hard deletion at the specified time.
If not provided, the table will be soft deleted.
Use ISO 8601 format (for example, "2025-12-31T23:59:59Z").
responses:
'200':
description: Success (no content). The table has been deleted.
@ -1078,7 +1110,7 @@ paths:
In `"cron:CRON_EXPRESSION"`, `CRON_EXPRESSION` uses extended 6-field cron format.
The cron expression `0 0 6 * * 1-5` means the trigger will run at 6:00 AM every weekday (Monday to Friday).
value:
db: DATABASE_NAME
db: mydb
plugin_filename: schedule.py
trigger_name: schedule_cron_trigger
trigger_specification: cron:0 0 6 * * 1-5
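The cron trigger example above can be assembled into a request body like this (the values are the example values from the hunk; the endpoint it would be POSTed to is not shown in this hunk and is assumed):

```python
import json

# Request body for creating a scheduled processing engine trigger.
trigger = {
    "db": "mydb",
    "plugin_filename": "schedule.py",
    "trigger_name": "schedule_cron_trigger",
    # Extended 6-field cron: 6:00 AM every weekday (Monday-Friday)
    "trigger_specification": "cron:0 0 6 * * 1-5",
}
body = json.dumps(trigger)
print(body)
```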
@ -1136,7 +1168,7 @@ paths:
db: mydb
plugin_filename: request.py
trigger_name: hello_world_trigger
trigger_specification: path:hello-world
trigger_specification: request:hello-world
cron_friday_afternoon:
summary: Cron trigger for Friday afternoons
description: |
@ -1365,16 +1397,16 @@ paths:
description: Plugin not enabled.
tags:
- Processing engine
/api/v3/engine/{plugin_path}:
/api/v3/engine/{request_path}:
parameters:
- name: plugin_path
- name: request_path
description: |
The path configured in the request trigger specification "path:<plugin_path>"` for the plugin.
The path configured in the request trigger specification for the plugin.
For example, if you define a trigger with the following:
```json
trigger-spec: "path:hello-world"
trigger_specification: "request:hello-world"
```
then, the HTTP API exposes the following plugin endpoint:
@ -1390,7 +1422,7 @@ paths:
operationId: GetProcessingEnginePluginRequest
summary: On Request processing engine plugin request
description: |
Executes the On Request processing engine plugin specified in `<plugin_path>`.
Executes the On Request processing engine plugin specified in the trigger's `plugin_filename`.
The request can include request headers, query string parameters, and a request body, which InfluxDB passes to the plugin.
An On Request plugin implements the following signature:
@ -1417,7 +1449,7 @@ paths:
operationId: PostProcessingEnginePluginRequest
summary: On Request processing engine plugin request
description: |
Executes the On Request processing engine plugin specified in `<plugin_path>`.
Executes the On Request processing engine plugin specified in the trigger's `plugin_filename`.
The request can include request headers, query string parameters, and a request body, which InfluxDB passes to the plugin.
An On Request plugin implements the following signature:
@ -1868,7 +1900,7 @@ components:
`schedule.py` or `endpoints/report.py`.
The path can be absolute or relative to the `--plugins-dir` directory configured when starting InfluxDB 3.
The plugin file must implement the trigger interface associated with the trigger's specification (`trigger_spec`).
The plugin file must implement the trigger interface associated with the trigger's specification.
trigger_name:
type: string
trigger_specification:
@ -1911,12 +1943,12 @@ components:
- `table:TABLE_NAME` - Triggers on write events to a specific table
### On-demand triggers
Format: `path:ENDPOINT_NAME`
Format: `request:REQUEST_PATH`
Creates an HTTP endpoint `/api/v3/engine/ENDPOINT_NAME` for manual invocation:
- `path:hello-world` - Creates endpoint `/api/v3/engine/hello-world`
- `path:data-export` - Creates endpoint `/api/v3/engine/data-export`
pattern: ^(cron:[0-9 *,/-]+|every:[0-9]+[smhd]|all_tables|table:[a-zA-Z_][a-zA-Z0-9_]*|path:[a-zA-Z0-9_-]+)$
Creates an HTTP endpoint `/api/v3/engine/REQUEST_PATH` for manual invocation:
- `request:hello-world` - Creates endpoint `/api/v3/engine/hello-world`
- `request:data-export` - Creates endpoint `/api/v3/engine/data-export`
pattern: ^(cron:[0-9 *,/-]+|every:[0-9]+[smhd]|all_tables|table:[a-zA-Z_][a-zA-Z0-9_]*|request:[a-zA-Z0-9_-]+)$
example: cron:0 0 6 * * 1-5
trigger_arguments:
type: object
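Given the new `request:REQUEST_PATH` format, the exposed `/api/v3/engine/` endpoint can be derived mechanically. A sketch using the `pattern` regex copied from the schema above (the helper function itself is hypothetical):

```python
import re
from typing import Optional

# The trigger_specification pattern from the schema above.
SPEC_PATTERN = re.compile(
    r"^(cron:[0-9 *,/-]+|every:[0-9]+[smhd]|all_tables"
    r"|table:[a-zA-Z_][a-zA-Z0-9_]*|request:[a-zA-Z0-9_-]+)$"
)

def engine_endpoint(trigger_spec: str) -> Optional[str]:
    """Return the /api/v3/engine path for a request trigger, else None."""
    if not SPEC_PATTERN.match(trigger_spec):
        raise ValueError(f"invalid trigger_specification: {trigger_spec}")
    if trigger_spec.startswith("request:"):
        return "/api/v3/engine/" + trigger_spec.split(":", 1)[1]
    return None

print(engine_endpoint("request:hello-world"))  # /api/v3/engine/hello-world
```

Note that the old `path:` prefix no longer matches the updated pattern, which is the point of this change.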
@ -2013,6 +2045,65 @@ components:
- m
- h
type: string
UpdateDatabaseRequest:
type: object
properties:
retention_period:
type: string
description: |
The retention period for the database. Specifies how long data should be retained.
Use duration format (for example, "1d", "1h", "30m", "7d").
example: "7d"
description: Request schema for updating database configuration.
UpdateTableRequest:
type: object
properties:
db:
type: string
description: The name of the database containing the table.
table:
type: string
description: The name of the table to update.
retention_period:
type: string
description: |
The retention period for the table. Specifies how long data in this table should be retained.
Use duration format (for example, "1d", "1h", "30m", "7d").
example: "30d"
required:
- db
- table
description: Request schema for updating table configuration.
LicenseResponse:
type: object
properties:
license_type:
type: string
description: The type of license (for example, "enterprise", "trial").
example: "enterprise"
expires_at:
type: string
format: date-time
description: The expiration date of the license in ISO 8601 format.
example: "2025-12-31T23:59:59Z"
features:
type: array
items:
type: string
description: List of features enabled by the license.
example:
- "clustering"
- "processing_engine"
- "advanced_auth"
status:
type: string
enum:
- "active"
- "expired"
- "invalid"
description: The current status of the license.
example: "active"
description: Response schema for license information.
responses:
Unauthorized:
description: Unauthorized access.

View File

@ -922,9 +922,25 @@ paths:
summary: Delete a database
description: |
Soft deletes a database.
The database is scheduled for deletion and unavailable for querying.
The database is scheduled for deletion and unavailable for querying.
Use the `hard_delete_at` parameter to schedule a hard deletion.
parameters:
- $ref: '#/components/parameters/db'
- name: hard_delete_at
in: query
required: false
schema:
type: string
format: date-time
description: |
Schedule the database for hard deletion at the specified time.
If not provided, the database will be soft deleted.
Use ISO 8601 date-time format (for example, "2025-12-31T23:59:59Z").
#### Deleting a database cannot be undone
Deleting a database is a destructive action.
Once a database is deleted, data stored in that database cannot be recovered.
responses:
'200':
description: Success. Database deleted.
@ -961,7 +977,13 @@ paths:
summary: Delete a table
description: |
Soft deletes a table.
The table is scheduled for deletion and unavailable for querying.
The table is scheduled for deletion and unavailable for querying.
Use the `hard_delete_at` parameter to schedule a hard deletion.
#### Deleting a table cannot be undone
Deleting a table is a destructive action.
Once a table is deleted, data stored in that table cannot be recovered.
parameters:
- $ref: '#/components/parameters/db'
- name: table
@ -969,6 +991,16 @@ paths:
required: true
schema:
type: string
- name: hard_delete_at
in: query
required: false
schema:
type: string
format: date-time
description: |
Schedule the table for hard deletion at the specified time.
If not provided, the table will be soft deleted.
Use ISO 8601 format (for example, "2025-12-31T23:59:59Z").
responses:
'200':
description: Success (no content). The table has been deleted.
@ -978,6 +1010,77 @@ paths:
description: Table not found.
tags:
- Table
patch:
operationId: PatchConfigureTable
summary: Update a table
description: |
Updates table configuration, such as retention period.
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/UpdateTableRequest'
responses:
'200':
description: Success. The table has been updated.
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
'404':
description: Table not found.
tags:
- Table
/api/v3/configure/database/{db}:
patch:
operationId: PatchConfigureDatabase
summary: Update a database
description: |
Updates database configuration, such as retention period.
parameters:
- name: db
in: path
required: true
schema:
type: string
description: The name of the database to update.
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/UpdateDatabaseRequest'
responses:
'200':
description: Success. The database has been updated.
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
'404':
description: Database not found.
tags:
- Database
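A sketch of the new `PatchConfigureDatabase` call (host and database name are hypothetical; the path and the `retention_period` duration format come from the hunks above):

```python
import json

# PATCH /api/v3/configure/database/{db} with an UpdateDatabaseRequest body.
HOST = "http://localhost:8181"  # hypothetical
db = "mydb"

url = f"{HOST}/api/v3/configure/database/{db}"
# Duration format, for example "1d", "1h", "30m", "7d"
body = json.dumps({"retention_period": "7d"})
print("PATCH", url, body)
```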
/api/v3/show/license:
get:
operationId: GetShowLicense
summary: Show license information
description: |
Retrieves information about the current InfluxDB 3 Enterprise license.
responses:
'200':
description: Success. The response body contains license information.
content:
application/json:
schema:
$ref: '#/components/schemas/LicenseResponse'
'401':
$ref: '#/components/responses/Unauthorized'
'403':
description: Access denied.
tags:
- Server information
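A response body matching the `LicenseResponse` schema for the new `GET /api/v3/show/license` endpoint looks like this (the values are the schema's own examples):

```python
import json

# Parse a LicenseResponse body as returned by GET /api/v3/show/license.
response = json.loads("""
{
  "license_type": "enterprise",
  "expires_at": "2025-12-31T23:59:59Z",
  "features": ["clustering", "processing_engine", "advanced_auth"],
  "status": "active"
}
""")

# status is constrained to the schema's enum values
assert response["status"] in ("active", "expired", "invalid")
print(response["license_type"], response["expires_at"])
```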
/api/v3/configure/distinct_cache:
post:
operationId: PostConfigureDistinctCache
@ -1136,7 +1239,7 @@ paths:
db: mydb
plugin_filename: request.py
trigger_name: hello_world_trigger
trigger_specification: path:hello-world
trigger_specification: request:hello-world
cron_friday_afternoon:
summary: Cron trigger for Friday afternoons
description: |
@ -1365,16 +1468,16 @@ paths:
description: Plugin not enabled.
tags:
- Processing engine
/api/v3/engine/{plugin_path}:
/api/v3/engine/{request_path}:
parameters:
- name: plugin_path
- name: request_path
description: |
The path configured in the request trigger specification "path:<plugin_path>"` for the plugin.
The path configured in the request trigger specification for the plugin.
For example, if you define a trigger with the following:
```json
trigger-spec: "path:hello-world"
trigger_specification: "request:hello-world"
```
then, the HTTP API exposes the following plugin endpoint:
@ -1390,7 +1493,7 @@ paths:
operationId: GetProcessingEnginePluginRequest
summary: On Request processing engine plugin request
description: |
Executes the On Request processing engine plugin specified in `<plugin_path>`.
Executes the On Request processing engine plugin specified in the trigger's `plugin_filename`.
The request can include request headers, query string parameters, and a request body, which InfluxDB passes to the plugin.
An On Request plugin implements the following signature:
@ -1417,7 +1520,7 @@ paths:
operationId: PostProcessingEnginePluginRequest
summary: On Request processing engine plugin request
description: |
Executes the On Request processing engine plugin specified in `<plugin_path>`.
Executes the On Request processing engine plugin specified in the trigger's `plugin_filename`.
The request can include request headers, query string parameters, and a request body, which InfluxDB passes to the plugin.
An On Request plugin implements the following signature:
@ -1812,6 +1915,16 @@ components:
properties:
db:
type: string
pattern: '^[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9]$|^[a-zA-Z0-9]$'
description: |-
The database name. Database names cannot contain underscores (_).
Names must start and end with alphanumeric characters and can contain hyphens (-) in the middle.
retention_period:
type: string
description: |-
The retention period for the database. Specifies how long data should be retained.
Use duration format (for example, "1d", "1h", "30m", "7d").
example: "7d"
required:
- db
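The new `db` name pattern above can be checked directly; the regex is copied verbatim from the `CreateDatabaseRequest` schema (names start and end with an alphanumeric character, may contain hyphens in the middle, and cannot contain underscores):

```python
import re

# The db name pattern from CreateDatabaseRequest.
DB_NAME = re.compile(r"^[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9]$|^[a-zA-Z0-9]$")

for name in ("mydb", "my-db", "a", "my_db", "-mydb"):
    print(name, bool(DB_NAME.match(name)))
```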
CreateTableRequest:
@ -1843,6 +1956,12 @@ components:
required:
- name
- type
retention_period:
type: string
description: |-
The retention period for the table. Specifies how long data in this table should be retained.
Use duration format (for example, "1d", "1h", "30m", "7d").
example: "30d"
required:
- db
- table
@ -1929,11 +2048,10 @@ components:
`schedule.py` or `endpoints/report.py`.
The path can be absolute or relative to the `--plugins-dir` directory configured when starting InfluxDB 3.
The plugin file must implement the trigger interface associated with the trigger's specification (`trigger_spec`).
The plugin file must implement the trigger interface associated with the trigger's specification.
trigger_name:
type: string
trigger_specification:
type: string
description: |
Specifies when and how the processing engine trigger should be invoked.
@ -1972,12 +2090,12 @@ components:
- `table:TABLE_NAME` - Triggers on write events to a specific table
### On-demand triggers
Format: `path:ENDPOINT_NAME`
Format: `request:REQUEST_PATH`
Creates an HTTP endpoint `/api/v3/engine/ENDPOINT_NAME` for manual invocation:
- `path:hello-world` - Creates endpoint `/api/v3/engine/hello-world`
- `path:data-export` - Creates endpoint `/api/v3/engine/data-export`
pattern: ^(cron:[0-9 *,/-]+|every:[0-9]+[smhd]|all_tables|table:[a-zA-Z_][a-zA-Z0-9_]*|path:[a-zA-Z0-9_-]+)$
Creates an HTTP endpoint `/api/v3/engine/REQUEST_PATH` for manual invocation:
- `request:hello-world` - Creates endpoint `/api/v3/engine/hello-world`
- `request:data-export` - Creates endpoint `/api/v3/engine/data-export`
pattern: ^(cron:[0-9 *,/-]+|every:[0-9]+[smhd]|all_tables|table:[a-zA-Z_][a-zA-Z0-9_]*|request:[a-zA-Z0-9_-]+)$
example: cron:0 0 6 * * 1-5
trigger_arguments:
type: object
@ -2074,6 +2192,65 @@ components:
- m
- h
type: string
UpdateDatabaseRequest:
type: object
properties:
retention_period:
type: string
description: |
The retention period for the database. Specifies how long data should be retained.
Use duration format (for example, "1d", "1h", "30m", "7d").
example: "7d"
description: Request schema for updating database configuration.
UpdateTableRequest:
type: object
properties:
db:
type: string
description: The name of the database containing the table.
table:
type: string
description: The name of the table to update.
retention_period:
type: string
description: |
The retention period for the table. Specifies how long data in this table should be retained.
Use duration format (for example, "1d", "1h", "30m", "7d").
example: "30d"
required:
- db
- table
description: Request schema for updating table configuration.
LicenseResponse:
type: object
properties:
license_type:
type: string
description: The type of license (for example, "enterprise", "trial").
example: "enterprise"
expires_at:
type: string
format: date-time
description: The expiration date of the license in ISO 8601 format.
example: "2025-12-31T23:59:59Z"
features:
type: array
items:
type: string
description: List of features enabled by the license.
example:
- "clustering"
- "processing_engine"
- "advanced_auth"
status:
type: string
enum:
- "active"
- "expired"
- "invalid"
description: The current status of the license.
example: "active"
description: Response schema for license information.
responses:
Unauthorized:
description: Unauthorized access.

View File

@ -2,7 +2,7 @@
///////////////// Preferred Client Library programming language ///////////////
////////////////////////////////////////////////////////////////////////////////
import { activateTabs, updateBtnURLs } from './tabbed-content.js';
import { getPreference, setPreference } from './local-storage.js';
import { getPreference, setPreference } from './services/local-storage.js';
function getVisitedApiLib() {
const path = window.location.pathname.match(

View File

@ -8,29 +8,31 @@ function setUser(userid, email) {
window[NAMESPACE] = {
user: {
uniqueClientId: userid,
email: email,
}
}
email: email,
},
};
}
// Initialize the chat widget
function initializeChat({onChatLoad, chatAttributes}) {
/* See https://docs.kapa.ai/integrations/website-widget/configuration for
function initializeChat({ onChatLoad, chatAttributes }) {
/* See https://docs.kapa.ai/integrations/website-widget/configuration for
* available configuration options.
* All values are strings.
*/
// If you make changes to data attributes here, you also need to port the changes to the api-docs/template.hbs API reference template.
// If you make changes to data attributes here, you also need to
// port the changes to the api-docs/template.hbs API reference template.
const requiredAttributes = {
websiteId: 'a02bca75-1dd3-411e-95c0-79ee1139be4d',
projectName: 'InfluxDB',
projectColor: '#020a47',
projectLogo: '/img/influx-logo-cubo-white.png',
}
};
const optionalAttributes = {
modalDisclaimer: 'This AI can access [documentation for InfluxDB, clients, and related tools](https://docs.influxdata.com). Information you submit is used in accordance with our [Privacy Policy](https://www.influxdata.com/legal/privacy-policy/).',
modalExampleQuestions: 'Use Python to write data to InfluxDB 3,How do I query using SQL?,How do I use MQTT with Telegraf?',
modalDisclaimer:
'This AI can access [documentation for InfluxDB, clients, and related tools](https://docs.influxdata.com). Information you submit is used in accordance with our [Privacy Policy](https://www.influxdata.com/legal/privacy-policy/).',
modalExampleQuestions:
'Use Python to write data to InfluxDB 3,How do I query using SQL?,How do I use MQTT with Telegraf?',
buttonHide: 'true',
exampleQuestionButtonWidth: 'auto',
modalOpenOnCommandK: 'true',
@ -52,28 +54,32 @@ function initializeChat({onChatLoad, chatAttributes}) {
modalHeaderBorderBottom: 'none',
modalTitleColor: '#fff',
modalTitleFontSize: '1.25rem',
}
};
const scriptUrl = 'https://widget.kapa.ai/kapa-widget.bundle.js';
const script = document.createElement('script');
script.async = true;
script.src = scriptUrl;
script.onload = function() {
script.onload = function () {
onChatLoad();
window.influxdatadocs.AskAI = AskAI;
};
script.onerror = function() {
script.onerror = function () {
console.error('Error loading AI chat widget script');
};
const dataset = {...requiredAttributes, ...optionalAttributes, ...chatAttributes};
Object.keys(dataset).forEach(key => {
// Assign dataset attributes from the object
const dataset = {
...requiredAttributes,
...optionalAttributes,
...chatAttributes,
};
Object.keys(dataset).forEach((key) => {
// Assign dataset attributes from the object
script.dataset[key] = dataset[key];
});
// Check for an existing script element to remove
const oldScript= document.querySelector(`script[src="${scriptUrl}"]`);
const oldScript = document.querySelector(`script[src="${scriptUrl}"]`);
if (oldScript) {
oldScript.remove();
}
@ -82,22 +88,21 @@ function initializeChat({onChatLoad, chatAttributes}) {
function getProductExampleQuestions() {
const questions = productData?.product?.ai_sample_questions;
return questions?.join(',') || '';
return questions?.join(',') || '';
}
/**
/**
* chatParams: specify custom (for example, page-specific) attribute values for the chat, pass the dataset key-values (collected in ...chatParams). See https://docs.kapa.ai/integrations/website-widget/configuration for available configuration options.
* onChatLoad: function to call when the chat widget has loaded
* userid: optional, a unique user ID for the user (not currently used for public docs)
*/
*/
export default function AskAI({ userid, email, onChatLoad, ...chatParams }) {
const modalExampleQuestions = getProductExampleQuestions();
const chatAttributes = {
...(modalExampleQuestions && { modalExampleQuestions }),
...chatParams,
}
initializeChat({onChatLoad, chatAttributes});
};
initializeChat({ onChatLoad, chatAttributes });
if (userid) {
setUser(userid, email);

View File

@ -1,8 +1,9 @@
import $ from 'jquery';
import { context } from './page-context.js';
function initialize() {
var codeBlockSelector = '.article--content pre';
var codeBlocks = $(codeBlockSelector);
var $codeBlocks = $(codeBlockSelector);
var appendHTML = `
<div class="code-controls">
@ -15,7 +16,7 @@ function initialize() {
`;
// Wrap all codeblocks with a new 'codeblock' div
$(codeBlocks).each(function () {
$codeBlocks.each(function () {
$(this).wrap("<div class='codeblock'></div>");
});
@ -68,7 +69,94 @@ function initialize() {
// Trigger copy failure state lifecycle
$('.copy-code').click(function () {
let text = $(this).closest('.code-controls').prevAll('pre:has(code)')[0].innerText;
let codeElement = $(this)
.closest('.code-controls')
.prevAll('pre:has(code)')[0];
let text = codeElement.innerText;
// Extract additional code block information
const codeBlockInfo = extractCodeBlockInfo(codeElement);
// Add Google Analytics event tracking
const currentUrl = new URL(window.location.href);
// Determine which tracking parameter to add based on product context
switch (context) {
case 'cloud':
currentUrl.searchParams.set('dl', 'cloud');
break;
case 'core':
/** Track using the same value used by www.influxdata.com pages */
currentUrl.searchParams.set('dl', 'oss3');
break;
case 'enterprise':
/** Track using the same value used by www.influxdata.com pages */
currentUrl.searchParams.set('dl', 'enterprise');
break;
case 'serverless':
currentUrl.searchParams.set('dl', 'serverless');
break;
case 'dedicated':
currentUrl.searchParams.set('dl', 'dedicated');
break;
case 'clustered':
currentUrl.searchParams.set('dl', 'clustered');
break;
case 'oss/enterprise':
currentUrl.searchParams.set('dl', 'oss');
break;
case 'other':
default:
// No tracking parameter for other/unknown products
break;
}
// Add code block specific tracking parameters
if (codeBlockInfo.language) {
currentUrl.searchParams.set('code_lang', codeBlockInfo.language);
}
if (codeBlockInfo.lineCount) {
currentUrl.searchParams.set('code_lines', codeBlockInfo.lineCount);
}
if (codeBlockInfo.hasPlaceholders) {
currentUrl.searchParams.set('has_placeholders', 'true');
}
if (codeBlockInfo.blockType) {
currentUrl.searchParams.set('code_type', codeBlockInfo.blockType);
}
if (codeBlockInfo.sectionTitle) {
currentUrl.searchParams.set(
'section',
encodeURIComponent(codeBlockInfo.sectionTitle)
);
}
if (codeBlockInfo.firstLine) {
currentUrl.searchParams.set(
'first_line',
encodeURIComponent(codeBlockInfo.firstLine.substring(0, 100))
);
}
// Update browser history without triggering page reload
if (window.history && window.history.replaceState) {
window.history.replaceState(null, '', currentUrl.toString());
}
// Send custom Google Analytics event if gtag is available
if (typeof window.gtag !== 'undefined') {
window.gtag('event', 'code_copy', {
language: codeBlockInfo.language,
line_count: codeBlockInfo.lineCount,
has_placeholders: codeBlockInfo.hasPlaceholders,
dl: codeBlockInfo.dl || null,
section_title: codeBlockInfo.sectionTitle,
first_line: codeBlockInfo.firstLine
? codeBlockInfo.firstLine.substring(0, 100)
: null,
product: context,
});
}
const copyContent = async () => {
try {
@ -82,6 +170,71 @@ function initialize() {
copyContent();
});
/**
* Extract contextual information about a code block
* @param {HTMLElement} codeElement - The code block element
* @returns {Object} Information about the code block
*/
function extractCodeBlockInfo(codeElement) {
const codeTag = codeElement.querySelector('code');
const info = {
language: null,
lineCount: 0,
hasPlaceholders: false,
blockType: 'code',
dl: null, // Download script type
sectionTitle: null,
firstLine: null,
};
// Extract language from class attribute
if (codeTag && codeTag.className) {
const langMatch = codeTag.className.match(
/language-(\w+)|hljs-(\w+)|(\w+)/
);
if (langMatch) {
info.language = langMatch[1] || langMatch[2] || langMatch[3];
}
}
// Count lines
const text = codeElement.innerText || '';
const lines = text.split('\n');
info.lineCount = lines.length;
// Get first non-empty line
info.firstLine = lines.find((line) => line.trim() !== '') || null;
// Check for placeholders (common patterns)
info.hasPlaceholders =
/\b[A-Z_]{2,}\b|\{\{[^}]+\}\}|\$\{[^}]+\}|<[^>]+>/.test(text);
// Determine if this is a download script
if (text.includes('https://www.influxdata.com/d/install_influxdb3.sh')) {
if (text.includes('install_influxdb3.sh enterprise')) {
info.dl = 'enterprise';
} else {
info.dl = 'oss3';
}
} else if (text.includes('docker pull influxdb:3-enterprise')) {
info.dl = 'enterprise';
} else if (text.includes('docker pull influxdb3-core')) {
info.dl = 'oss3';
}
// Find nearest section heading
let element = codeElement;
while (element && element !== document.body) {
element = element.previousElementSibling || element.parentElement;
if (element && element.tagName && /^H[1-6]$/.test(element.tagName)) {
info.sectionTitle = element.textContent.trim();
break;
}
}
return info;
}
/////////////////////////////// FULL WINDOW CODE ///////////////////////////////
/*
@ -90,7 +243,10 @@ Disable scrolling on the body.
Disable user selection on everything but the fullscreen codeblock.
*/
$('.fullscreen-toggle').click(function () {
var code = $(this).closest('.code-controls').prevAll('pre:has(code)').clone();
var code = $(this)
.closest('.code-controls')
.prevAll('pre:has(code)')
.clone();
$('#fullscreen-code-placeholder').replaceWith(code[0]);
$('body').css('overflow', 'hidden');

View File

@ -0,0 +1,78 @@
// Memoize the mermaid module import
let mermaidPromise = null;
export default function Diagram({ component }) {
// Import mermaid.js module (memoized)
if (!mermaidPromise) {
mermaidPromise = import('mermaid');
}
mermaidPromise
.then(({ default: mermaid }) => {
// Configure mermaid with InfluxData theming
mermaid.initialize({
startOnLoad: false, // We'll manually call run()
theme: document.body.classList.contains('dark-theme')
? 'dark'
: 'default',
themeVariables: {
fontFamily: 'Proxima Nova',
fontSize: '16px',
lineColor: '#22ADF6',
primaryColor: '#22ADF6',
primaryTextColor: '#545454',
secondaryColor: '#05CE78',
tertiaryColor: '#f4f5f5',
},
securityLevel: 'loose', // Required for interactive diagrams
logLevel: 'error',
});
// Process the specific diagram component
try {
mermaid.run({ nodes: [component] });
} catch (error) {
console.error('Mermaid diagram rendering error:', error);
}
// Store reference to mermaid for theme switching
if (!window.mermaidInstances) {
window.mermaidInstances = new Map();
}
window.mermaidInstances.set(component, mermaid);
})
.catch((error) => {
console.error('Failed to load Mermaid library:', error);
});
// Listen for theme changes to refresh diagrams
const observer = new MutationObserver((mutations) => {
mutations.forEach((mutation) => {
if (
mutation.attributeName === 'class' &&
document.body.classList.contains('dark-theme') !== window.isDarkTheme
) {
window.isDarkTheme = document.body.classList.contains('dark-theme');
// Reload this specific diagram with new theme
if (window.mermaidInstances?.has(component)) {
const mermaid = window.mermaidInstances.get(component);
mermaid.initialize({
theme: window.isDarkTheme ? 'dark' : 'default',
});
mermaid.run({ nodes: [component] });
}
}
});
});
// Watch for theme changes on body element
observer.observe(document.body, { attributes: true });
// Return cleanup function to be called when component is destroyed
return () => {
observer.disconnect();
if (window.mermaidInstances?.has(component)) {
window.mermaidInstances.delete(component);
}
};
}

View File

@ -0,0 +1,180 @@
/**
* DocSearch component for InfluxData documentation
* Handles asynchronous loading and initialization of Algolia DocSearch
*/
const debug = false; // Set to true for debugging output
export default function DocSearch({ component }) {
// Store configuration from component data attributes
const config = {
apiKey: component.getAttribute('data-api-key'),
appId: component.getAttribute('data-app-id'),
indexName: component.getAttribute('data-index-name'),
inputSelector: component.getAttribute('data-input-selector'),
searchTag: component.getAttribute('data-search-tag'),
includeFlux: component.getAttribute('data-include-flux') === 'true',
includeResources:
component.getAttribute('data-include-resources') === 'true',
debug: component.getAttribute('data-debug') === 'true',
};
// Initialize global object to track DocSearch state
window.InfluxDocs = window.InfluxDocs || {};
window.InfluxDocs.search = {
initialized: false,
options: config,
};
// Load DocSearch asynchronously
function loadDocSearch() {
if (debug) {
console.log('Loading DocSearch script...');
}
const script = document.createElement('script');
script.src =
'https://cdn.jsdelivr.net/npm/docsearch.js@2/dist/cdn/docsearch.min.js';
script.async = true;
script.onload = initializeDocSearch;
document.body.appendChild(script);
}
// Initialize DocSearch after script loads
function initializeDocSearch() {
if (debug) {
console.log('Initializing DocSearch...');
}
const multiVersion = ['influxdb'];
// Use object-based lookups instead of conditionals for version and product names
// These can be replaced with data from productData in the future
// Version display name mappings
const versionDisplayNames = {
cloud: 'Cloud (TSM)',
core: 'Core',
enterprise: 'Enterprise',
'cloud-serverless': 'Cloud Serverless',
'cloud-dedicated': 'Cloud Dedicated',
clustered: 'Clustered',
explorer: 'Explorer',
};
// Product display name mappings
const productDisplayNames = {
influxdb: 'InfluxDB',
influxdb3: 'InfluxDB 3',
explorer: 'InfluxDB 3 Explorer',
enterprise_influxdb: 'InfluxDB Enterprise',
flux: 'Flux',
telegraf: 'Telegraf',
chronograf: 'Chronograf',
kapacitor: 'Kapacitor',
platform: 'InfluxData Platform',
resources: 'Additional Resources',
};
// Initialize DocSearch with configuration
window.docsearch({
apiKey: config.apiKey,
appId: config.appId,
indexName: config.indexName,
inputSelector: config.inputSelector,
debug: config.debug,
transformData: function (hits) {
// Format version using object lookup instead of if-else chain
function fmtVersion(version, productKey) {
if (version == null) {
return '';
} else if (versionDisplayNames[version]) {
return versionDisplayNames[version];
} else if (multiVersion.includes(productKey)) {
return version;
} else {
return '';
}
}
hits.forEach((hit) => {
const pathData = new URL(hit.url).pathname
.split('/')
.filter((n) => n);
const product = productDisplayNames[pathData[0]] || pathData[0];
const version = fmtVersion(pathData[1], pathData[0]);
hit.product = product;
hit.version = version;
hit.hierarchy.lvl0 =
hit.hierarchy.lvl0 +
` <span class=\"search-product-version\">${product} ${version}</span>`;
hit._highlightResult.hierarchy.lvl0.value =
hit._highlightResult.hierarchy.lvl0.value +
` <span class=\"search-product-version\">${product} ${version}</span>`;
});
return hits;
},
algoliaOptions: {
hitsPerPage: 10,
facetFilters: buildFacetFilters(config),
},
autocompleteOptions: {
templates: {
header:
'<div class="search-all-content"><a href="https:\/\/support.influxdata.com" target="_blank">Search all InfluxData content <span class="icon-arrow-up-right"></span></a></div>',
empty:
'<div class="search-no-results"><p>Not finding what you\'re looking for?</p> <a href="https:\/\/support.influxdata.com" target="_blank">Search all InfluxData content <span class="icon-arrow-up-right"></span></a></div>',
},
},
});
// Mark DocSearch as initialized
window.InfluxDocs.search.initialized = true;
// Dispatch event for other components to know DocSearch is ready
window.dispatchEvent(new CustomEvent('docsearch-initialized'));
}
/**
 * Helper function to build facet filters based on config
 * - Uses nested arrays to match the original template structure
 *   (in Algolia facetFilters syntax, entries in a nested array are
 *   combined with OR; top-level entries are combined with AND)
 * - Includes space after colon in filter expressions
 */
function buildFacetFilters(config) {
if (!config.searchTag) {
return ['latest:true'];
} else if (config.includeFlux) {
// Return a nested array to match original template structure
// Note the space after each colon
return [
[
'searchTag: ' + config.searchTag,
'flux:true',
'resources: ' + config.includeResources,
],
];
} else {
// Return a nested array to match original template structure
// Note the space after each colon
return [
[
'searchTag: ' + config.searchTag,
'resources: ' + config.includeResources,
],
];
}
}
// Load DocSearch when page is idle or after a slight delay
if ('requestIdleCallback' in window) {
requestIdleCallback(loadDocSearch);
} else {
setTimeout(loadDocSearch, 500);
}
// Return cleanup function
return function cleanup() {
// Clean up any event listeners if needed
if (debug) {
console.log('DocSearch component cleanup');
}
};
}
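The object-based lookup used in `transformData` above can be exercised in isolation. The mappings and URL below are illustrative stand-ins, not values from the real docs index:

```javascript
// Map URL path segments to display names instead of chaining if/else.
const productDisplayNames = {
  influxdb: 'InfluxDB',
  telegraf: 'Telegraf',
};

const versionDisplayNames = {
  'cloud-serverless': 'Cloud Serverless',
  clustered: 'Clustered',
};

// Products whose second path segment is a plain version number (for example, v2).
const multiVersion = ['influxdb'];

function formatHit(url) {
  // Split the pathname into non-empty segments: ['influxdb', 'v2', ...]
  const pathData = new URL(url).pathname.split('/').filter((n) => n);
  const product = productDisplayNames[pathData[0]] || pathData[0];
  const versionKey = pathData[1];
  const version =
    versionDisplayNames[versionKey] ||
    (multiVersion.includes(pathData[0]) ? versionKey : '');
  return { product, version };
}

console.log(formatHit('https://docs.example.com/influxdb/v2/write-data/'));
// { product: 'InfluxDB', version: 'v2' }
```

Unknown keys fall through to the raw path segment, which mirrors the `|| pathData[0]` fallback in the component.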

View File

@ -0,0 +1,6 @@
import SearchInteractions from '../utils/search-interactions.js';
export default function SidebarSearch({ component }) {
const searchInput = component.querySelector('.sidebar--search-field');
SearchInteractions({ searchInput });
}

View File

@ -1,7 +1,7 @@
import $ from 'jquery';
import { Datepicker } from 'vanillajs-datepicker';
import { toggleModal } from './modals.js';
import * as localStorage from './services/local-storage.js';
// Placeholder start date used in InfluxDB custom timestamps
const defaultStartDate = '2022-01-01';
@ -53,65 +53,65 @@ function timeToUnixSeconds(time) {
return unixSeconds;
}
// Default time values in getting started sample data
const defaultTimes = [
{
rfc3339: `${defaultStartDate}T08:00:00Z`,
unix: timeToUnixSeconds(`${defaultStartDate}T08:00:00Z`),
}, // 1641024000
{
rfc3339: `${defaultStartDate}T09:00:00Z`,
unix: timeToUnixSeconds(`${defaultStartDate}T09:00:00Z`),
}, // 1641027600
{
rfc3339: `${defaultStartDate}T10:00:00Z`,
unix: timeToUnixSeconds(`${defaultStartDate}T10:00:00Z`),
}, // 1641031200
{
rfc3339: `${defaultStartDate}T11:00:00Z`,
unix: timeToUnixSeconds(`${defaultStartDate}T11:00:00Z`),
}, // 1641034800
{
rfc3339: `${defaultStartDate}T12:00:00Z`,
unix: timeToUnixSeconds(`${defaultStartDate}T12:00:00Z`),
}, // 1641038400
{
rfc3339: `${defaultStartDate}T13:00:00Z`,
unix: timeToUnixSeconds(`${defaultStartDate}T13:00:00Z`),
}, // 1641042000
{
rfc3339: `${defaultStartDate}T14:00:00Z`,
unix: timeToUnixSeconds(`${defaultStartDate}T14:00:00Z`),
}, // 1641045600
{
rfc3339: `${defaultStartDate}T15:00:00Z`,
unix: timeToUnixSeconds(`${defaultStartDate}T15:00:00Z`),
}, // 1641049200
{
rfc3339: `${defaultStartDate}T16:00:00Z`,
unix: timeToUnixSeconds(`${defaultStartDate}T16:00:00Z`),
}, // 1641052800
{
rfc3339: `${defaultStartDate}T17:00:00Z`,
unix: timeToUnixSeconds(`${defaultStartDate}T17:00:00Z`),
}, // 1641056400
{
rfc3339: `${defaultStartDate}T18:00:00Z`,
unix: timeToUnixSeconds(`${defaultStartDate}T18:00:00Z`),
}, // 1641060000
{
rfc3339: `${defaultStartDate}T19:00:00Z`,
unix: timeToUnixSeconds(`${defaultStartDate}T19:00:00Z`),
}, // 1641063600
{
rfc3339: `${defaultStartDate}T20:00:00Z`,
unix: timeToUnixSeconds(`${defaultStartDate}T20:00:00Z`),
}, // 1641067200
];
function updateTimestamps(newStartDate, seedTimes = defaultTimes) {
// Update the times array with replacement times
const times = seedTimes.map((x) => {
var newStartTimestamp = x.rfc3339.replace(/^.*T/, newStartDate + 'T');
return {
@ -178,7 +178,7 @@ function updateTimestamps (newStartDate, seedTimes=defaultTimes) {
/////////////////////// MODAL INTERACTIONS / DATE PICKER ///////////////////////
function CustomTimeTrigger({ component }) {
const $component = $(component);
$component
.find('a[data-action="open"]:first')
@ -212,7 +212,7 @@ function CustomTimeTrigger({component}) {
if (newDate != undefined) {
newDate = formatDate(newDate);
// Update the last updated timestamps with the new date
// and reassign the updated times.
updatedTimes = updateTimestamps(newDate, updatedTimes);
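The date swap at the heart of `updateTimestamps` replaces everything before the `T` in an RFC 3339 string while keeping the time of day. A minimal sketch of that step (the helper name is illustrative):

```javascript
// Replace the date portion of an RFC 3339 timestamp, keeping the time.
function withNewDate(rfc3339, newStartDate) {
  return rfc3339.replace(/^.*T/, newStartDate + 'T');
}

console.log(withNewDate('2022-01-01T08:00:00Z', '2024-06-15'));
// 2024-06-15T08:00:00Z
```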

View File

@ -1,30 +1,54 @@
import $ from 'jquery';
var date = new Date();
var currentTimestamp = date.toISOString().replace(/^(.*)(\.\d+)(Z)/, '$1$3'); // 2023-01-01T12:34:56Z
// Microsecond offset appended to the current time string for formatting purposes
const MICROSECOND_OFFSET = '084216';
var currentTime =
date.toISOString().replace(/(^.*T)(.*)(Z)/, '$2') + MICROSECOND_OFFSET; // 12:34:56.000084216
function currentDate(offset = 0, trimTime = false) {
let outputDate = new Date(date);
outputDate.setDate(outputDate.getDate() + offset);
if (trimTime) {
return outputDate.toISOString().replace(/T.*$/, ''); // 2023-01-01
} else {
return outputDate.toISOString().replace(/T.*$/, 'T00:00:00Z'); // 2023-01-01T00:00:00Z
}
}
function enterpriseEOLDate() {
const monthNames = [
'January',
'February',
'March',
'April',
'May',
'June',
'July',
'August',
'September',
'October',
'November',
'December',
];
var inTwoYears = new Date(date);
inTwoYears.setFullYear(inTwoYears.getFullYear() + 2);
let earliestEOL = new Date(inTwoYears);
return `${monthNames[earliestEOL.getMonth()]} ${earliestEOL.getDate()}, ${earliestEOL.getFullYear()}`;
}
function initialize() {
$('span.current-timestamp').text(currentTimestamp);
$('span.current-time').text(currentTime);
$('span.enterprise-eol-date').text(enterpriseEOLDate());
$('span.current-date').each(function () {
var dayOffset = parseInt($(this).attr('offset'));
var trimTime = $(this).attr('trim-time') === 'true';
$(this).text(currentDate(dayOffset, trimTime));
});
}
export { initialize };
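The two timestamp regexes above can be checked against a fixed ISO string; the microsecond suffix is the same formatting-only padding as `MICROSECOND_OFFSET`:

```javascript
const iso = '2023-01-01T12:34:56.789Z';

// Drop fractional seconds from an ISO timestamp.
const timestamp = iso.replace(/^(.*)(\.\d+)(Z)/, '$1$3');

// Keep only the time-of-day and append a fixed microsecond suffix.
const MICROSECOND_OFFSET = '084216';
const time = iso.replace(/(^.*T)(.*)(Z)/, '$2') + MICROSECOND_OFFSET;

console.log(timestamp); // 2023-01-01T12:34:56Z
console.log(time); // 12:34:56.789084216
```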

View File

@ -2,37 +2,24 @@
This feature is designed to call out new features added to the documentation
CSS is required for the callout bubble to determine look and position, but the
element must have the `callout` class and a unique id.
Callouts are treated as notifications and use the LocalStorage notification API.
*/
import $ from 'jquery';
import * as LocalStorageAPI from './services/local-storage.js';
// Get notification ID
function getCalloutID(el) {
return $(el).attr('id');
}
// Show the url feature callouts on page load
export default function FeatureCallout({ component }) {
const calloutID = getCalloutID($(component));
if (!LocalStorageAPI.notificationIsRead(calloutID, 'callout')) {
$(`#${calloutID}.feature-callout`)
.fadeIn(300)
.removeClass('start-position');
}
}

View File

@ -1,49 +1,148 @@
import $ from 'jquery';
// Sample data
let data = [
[
{
_time: '2021-01-01T00:00:00Z',
_measurement: 'example',
loc: 'rm1',
sensorID: 'A123',
_field: 'temp',
_value: 110.3,
},
{
_time: '2021-01-01T00:01:00Z',
_measurement: 'example',
loc: 'rm1',
sensorID: 'A123',
_field: 'temp',
_value: 112.5,
},
{
_time: '2021-01-01T00:02:00Z',
_measurement: 'example',
loc: 'rm1',
sensorID: 'A123',
_field: 'temp',
_value: 111.9,
},
],
[
{
_time: '2021-01-01T00:00:00Z',
_measurement: 'example',
loc: 'rm1',
sensorID: 'A123',
_field: 'hum',
_value: 73.4,
},
{
_time: '2021-01-01T00:01:00Z',
_measurement: 'example',
loc: 'rm1',
sensorID: 'A123',
_field: 'hum',
_value: 73.7,
},
{
_time: '2021-01-01T00:02:00Z',
_measurement: 'example',
loc: 'rm1',
sensorID: 'A123',
_field: 'hum',
_value: 75.1,
},
],
[
{
_time: '2021-01-01T00:00:00Z',
_measurement: 'example',
loc: 'rm2',
sensorID: 'B456',
_field: 'temp',
_value: 108.2,
},
{
_time: '2021-01-01T00:01:00Z',
_measurement: 'example',
loc: 'rm2',
sensorID: 'B456',
_field: 'temp',
_value: 108.5,
},
{
_time: '2021-01-01T00:02:00Z',
_measurement: 'example',
loc: 'rm2',
sensorID: 'B456',
_field: 'temp',
_value: 109.6,
},
],
[
{
_time: '2021-01-01T00:00:00Z',
_measurement: 'example',
loc: 'rm2',
sensorID: 'B456',
_field: 'hum',
_value: 71.8,
},
{
_time: '2021-01-01T00:01:00Z',
_measurement: 'example',
loc: 'rm2',
sensorID: 'B456',
_field: 'hum',
_value: 72.3,
},
{
_time: '2021-01-01T00:02:00Z',
_measurement: 'example',
loc: 'rm2',
sensorID: 'B456',
_field: 'hum',
_value: 72.1,
},
],
];
// Default group key
let groupKey = ['_measurement', 'loc', 'sensorID', '_field'];
export default function FluxGroupKeysDemo({ component }) {
$('.column-list label').click(function () {
toggleCheckbox($(this));
groupKey = getChecked(component);
groupData();
buildGroupExample(component);
});
// Group and render tables on load
groupData();
}
// Build a table group (group key and table) using an array of objects
function buildTable(inputData) {
// Build the group key string
function wrapString(column, value) {
var stringColumns = ['_measurement', 'loc', 'sensorID', '_field'];
if (stringColumns.includes(column)) {
return '"' + value + '"';
} else {
return value;
}
}
var groupKeyString =
'Group key instance = [' +
groupKey
.map((column) => column + ': ' + wrapString(column, inputData[0][column]))
.join(', ') +
']';
var groupKeyLabel = document.createElement('p');
groupKeyLabel.className = 'table-group-key';
groupKeyLabel.innerHTML = groupKeyString;
// Extract column headers
var columns = [];
@ -54,56 +153,57 @@ function buildTable(inputData) {
}
}
}
// Create the table element
const table = document.createElement('table');
// Create the table header
for (let i = 0; i < columns.length; i++) {
var header = table.createTHead();
var th = document.createElement('th');
th.innerHTML = columns[i];
if (groupKey.includes(columns[i])) {
th.className = 'grouped-by';
}
header.appendChild(th);
}
// Add inputData to the HTML table
for (let i = 0; i < inputData.length; i++) {
let tr = table.insertRow(-1);
for (let j = 0; j < columns.length; j++) {
var td = tr.insertCell(-1);
td.innerHTML = inputData[i][columns[j]];
// Highlight the value if column is part of the group key
if (groupKey.includes(columns[j])) {
td.className = 'grouped-by';
}
}
}
// Create a table group with group key and table
var tableGroup = document.createElement('div');
tableGroup.innerHTML += groupKeyLabel.outerHTML + table.outerHTML;
return tableGroup;
}
// Clear and rebuild all HTML tables
function buildTables(data) {
let tablesElement = $('#flux-group-keys-demo #grouped-tables');
let existingTables = tablesElement[0];
while (existingTables.firstChild) {
existingTables.removeChild(existingTables.firstChild);
}
for (let i = 0; i < data.length; i++) {
var table = buildTable(data[i]);
tablesElement.append(table);
}
}
// Group data based on the group key and output new tables
function groupData() {
let groupedData = data.flat();
function groupBy(array, f) {
var groups = {};
@ -114,20 +214,19 @@ function groupData() {
});
return Object.keys(groups).map(function (group) {
return groups[group];
});
}
groupedData = groupBy(groupedData, function (r) {
return groupKey.map((v) => r[v]);
});
buildTables(groupedData);
}
// Get selected column names
function getChecked(component) {
// Get selected column names
var checkboxes = $(component).find('input[type=checkbox]');
var checked = [];
for (var i = 0; i < checkboxes.length; i++) {
var checkbox = checkboxes[i];
@ -141,17 +240,12 @@ function toggleCheckbox(element) {
}
// Build example group function
function buildGroupExample(component) {
var columnCollection = getChecked(component)
.map((i) => '<span class=\"s2\">"' + i + '"</span>')
.join(', ');
$('pre#group-by-example')[0].innerHTML =
"data\n <span class='nx'>|></span> group(columns<span class='nx'>:</span> [" +
columnCollection +
'])';
}
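The grouping step in `groupData` keys each row by the values of its group-key columns. The same technique can be sketched with plain array operations (the sample rows below are illustrative):

```javascript
// Group flat rows into tables keyed by the values of the group-key columns.
function groupBy(rows, keyFn) {
  const groups = {};
  rows.forEach((row) => {
    // Serialize the key so rows with equal column values share a bucket
    const key = JSON.stringify(keyFn(row));
    (groups[key] = groups[key] || []).push(row);
  });
  return Object.values(groups);
}

const groupKey = ['loc', '_field'];
const rows = [
  { loc: 'rm1', _field: 'temp', _value: 110.3 },
  { loc: 'rm1', _field: 'temp', _value: 112.5 },
  { loc: 'rm2', _field: 'temp', _value: 108.2 },
];

const tables = groupBy(rows, (r) => groupKey.map((c) => r[c]));
console.log(tables.length); // 2 tables: rm1/temp and rm2/temp
```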

View File

@ -1,22 +0,0 @@
$('.exp-btn').click(function() {
var targetBtnElement = $(this).parent()
$('.exp-btn > p', targetBtnElement).fadeOut(100);
setTimeout(function() {
$('.exp-btn-links', targetBtnElement).fadeIn(200)
$('.exp-btn', targetBtnElement).addClass('open');
$('.close-btn', targetBtnElement).fadeIn(200);
}, 100);
})
$('.close-btn').click(function() {
var targetBtnElement = $(this).parent().parent()
$('.exp-btn-links', targetBtnElement).fadeOut(100)
$('.exp-btn', targetBtnElement).removeClass('open');
$(this).fadeOut(100);
setTimeout(function() {
$('p', targetBtnElement).fadeIn(100);
}, 100);
})
/////////////////////////////// EXPANDING BUTTONS //////////////////////////////

View File

@ -1 +0,0 @@
export * from './main.js';

View File

@ -3,7 +3,6 @@
///////////////////////// INFLUXDB URL PREFERENCE /////////////////////////////
////////////////////////////////////////////////////////////////////////////////
*/
import {
DEFAULT_STORAGE_URLS,
getPreference,
@ -12,15 +11,18 @@ import {
removeInfluxDBUrl,
getInfluxDBUrl,
getInfluxDBUrls,
} from './services/local-storage.js';
import $ from 'jquery';
import { context as PRODUCT_CONTEXT, referrerHost } from './page-context.js';
import { influxdbUrls } from './services/influxdb-urls.js';
import { delay } from './helpers.js';
import { toggleModal } from './modals.js';
let CLOUD_URLS = [];
if (influxdbUrls?.cloud) {
CLOUD_URLS = Object.values(influxdbUrls.cloud.providers).flatMap((provider) =>
provider.regions?.map((region) => region.url)
);
}
export { CLOUD_URLS };
@ -28,7 +30,7 @@ export function InfluxDBUrl() {
const UNIQUE_URL_PRODUCTS = ['dedicated', 'clustered'];
const IS_UNIQUE_URL_PRODUCT = UNIQUE_URL_PRODUCTS.includes(PRODUCT_CONTEXT);
// Add actual cloud URLs as needed
const elementSelector = '.article--content pre:not(.preserve)';
///////////////////// Stored preference management ///////////////////////
@ -118,11 +120,12 @@ export function InfluxDBUrl() {
});
}
// Retrieve the currently selected URLs from the urls local storage object.
function getUrls() {
const { cloud, oss, core, enterprise, serverless, dedicated, clustered } =
getInfluxDBUrls();
return { oss, cloud, core, enterprise, serverless, dedicated, clustered };
}
// Retrieve the previously selected URLs from the urls local storage object.
// This is used to update URLs whenever you switch between browser tabs.
@ -289,15 +292,17 @@ export function InfluxDBUrl() {
}
// Append the URL selector button to each codeblock containing a placeholder URL
function appendUrlSelector(
urls = {
cloud: '',
oss: '',
core: '',
enterprise: '',
serverless: '',
dedicated: '',
clustered: '',
}
) {
const appendToUrls = Object.values(urls);
const getBtnText = (context) => {
@ -315,7 +320,7 @@ export function InfluxDBUrl() {
return contextText[context];
};
appendToUrls.forEach(function (url) {
$(elementSelector).each(function () {
var code = $(this).html();
if (code.includes(url)) {
@ -330,20 +335,32 @@ export function InfluxDBUrl() {
});
}
////////////////////////////////////////////////////////////////////////////
////////////////// Initialize InfluxDB URL interactions ////////////////////
////////////////////////////////////////////////////////////////////////////
// Add the preserve tag to code blocks that shouldn't be updated
addPreserve();
const { cloud, oss, core, enterprise, serverless, dedicated, clustered } =
DEFAULT_STORAGE_URLS;
// Append URL selector buttons to code blocks
appendUrlSelector({
cloud,
oss,
core,
enterprise,
serverless,
dedicated,
clustered,
});
// Update URLs on load
updateUrls(
{ cloud, oss, core, enterprise, serverless, dedicated, clustered },
getUrls()
);
// Set active radio button on page load
setRadioButtons(getUrls());
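The `CLOUD_URLS` flattening above can be exercised with a minimal stand-in for the `influxdbUrls` service. The object shape is inferred from the code; the provider names and URLs below are invented for the sketch:

```javascript
// Flatten provider regions into a single list of cloud URLs.
const influxdbUrls = {
  cloud: {
    providers: {
      aws: { regions: [{ url: 'https://aws.example.influxdata.com' }] },
      gcp: { regions: [{ url: 'https://gcp.example.influxdata.com' }] },
    },
  },
};

let CLOUD_URLS = [];
if (influxdbUrls?.cloud) {
  CLOUD_URLS = Object.values(influxdbUrls.cloud.providers).flatMap(
    (provider) => provider.regions?.map((region) => region.url)
  );
}

console.log(CLOUD_URLS.length); // 2
```

Note that `flatMap` discards the per-provider nesting, so the result is one flat array of region URLs.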

View File

@ -1,41 +1,58 @@
// Dynamically update keybindings or hotkeys
import { getPlatform } from './utils/user-agent-platform.js';
import $ from 'jquery';
/**
* Adds OS-specific class to component
* @param {string} osClass - OS-specific class to add
* @param {Object} options - Component options
* @param {jQuery} options.$component - jQuery element reference
*/
function addOSClass(osClass, { $component }) {
$component.addClass(osClass);
}
/**
* Updates keybinding display based on detected platform
* @param {Object} options - Component options
* @param {jQuery} options.$component - jQuery element reference
* @param {string} options.platform - Detected platform
*/
function updateKeyBindings({ $component, platform }) {
const osx = $component.data('osx');
const linux = $component.data('linux');
const win = $component.data('win');
let keybind;
if (platform === 'other') {
if (win !== linux) {
keybind =
`<code class="osx">${osx}</code> for macOS, ` +
`<code>${linux}</code> for Linux, ` +
`and <code>${win}</code> for Windows`;
} else {
keybind =
`<code>${linux}</code> for Linux and Windows and ` +
`<code class="osx">${osx}</code> for macOS`;
}
} else {
keybind = `<code>${$component.data(platform)}</code>`;
}
$component.html(keybind);
}
/**
* Initialize and render platform-specific keybindings
* @param {Object} options - Component options
* @param {HTMLElement} options.component - DOM element
* @returns {void}
*/
export default function KeyBinding({ component }) {
// Initialize keybindings
const platform = getPlatform();
const $component = $(component);
addOSClass(platform, { $component });
updateKeyBindings({ $component, platform });
}
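The platform detection now imported from `utils/user-agent-platform.js` presumably keeps the same regex tests as the code it replaces. A standalone sketch that takes the platform string as an argument (rather than reading `navigator.platform` directly, an assumption made here for testability):

```javascript
// Classify a navigator.platform-style string into an OS bucket.
function getPlatform(navigatorPlatform) {
  if (/Mac/.test(navigatorPlatform)) return 'osx';
  if (/Win/.test(navigatorPlatform)) return 'win';
  if (/Linux/.test(navigatorPlatform)) return 'linux';
  return 'other';
}

console.log(getPlatform('MacIntel')); // osx
console.log(getPlatform('Win32')); // win
```

The `'other'` bucket is what triggers the multi-OS keybinding text in `updateKeyBindings`.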

View File

@ -1,11 +1,15 @@
import $ from 'jquery';
// Count tag elements
function countTag(tag) {
return $(".visible[data-tags*='" + tag + "']").length;
}
function getFilterCounts($labels) {
$labels.each(function () {
var tagName = $('input', this)
.attr('name')
.replace(/[\W/]+/, '-');
var tagCount = countTag(tagName);
$(this).attr('data-count', '(' + tagCount + ')');
if (tagCount <= 0) {
@ -13,38 +17,58 @@ function getFilterCounts() {
} else {
$(this).fadeTo(400, 1.0);
}
});
}
/** TODO: Include the data source value as an additional attribute
 * in the HTML and pass it into the component, which would let us use selectors
 * for only the source items and let us have more than one
 * list filter component per page without conflicts */
export default function ListFilters({ component }) {
const $component = $(component);
const $labels = $component.find('label');
const $inputs = $component.find('input');
getFilterCounts($labels);
$inputs.click(function () {
// List of tags to hide
var tagArray = $component
.find('input:checkbox:checked')
.map(function () {
return $(this).attr('name').replace(/[\W]+/, '-');
})
.get();
// List of tags to restore
var restoreArray = $component
.find('input:checkbox:not(:checked)')
.map(function () {
return $(this).attr('name').replace(/[\W]+/, '-');
})
.get();
// Actions for filter select
if ($(this).is(':checked')) {
$.each(tagArray, function (index, value) {
$(".filter-item.visible:not([data-tags~='" + value + "'])")
.removeClass('visible')
.fadeOut();
});
} else {
$.each(restoreArray, function (index, value) {
$(".filter-item:not(.visible)[data-tags~='" + value + "']")
.addClass('visible')
.fadeIn();
});
$.each(tagArray, function (index, value) {
$(".filter-item.visible:not([data-tags~='" + value + "'])")
.removeClass('visible')
.hide();
});
}
// Refresh filter count
getFilterCounts()
});
// Refresh filter count
getFilterCounts($labels);
});
}
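Side note for reviewers: both the old and new handlers derive filter tags by sanitizing each checkbox's `name` attribute. A minimal sketch of that rule (the `tagFromName` helper is illustrative, not part of the PR) — note that `/[\W]+/` without the `g` flag only collapses the first run of non-word characters:

```javascript
function tagFromName(name) {
  // Replace the first run of non-word characters with a hyphen
  // (no global flag, so later runs are left untouched)
  return name.replace(/[\W]+/, '-');
}

console.log(tagFromName('flux query')); // "flux-query"
console.log(tagFromName('data type (int)')); // "data-type (int)"
```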

View File

@ -1,7 +1,7 @@
// assets/js/main.js
// If you need to pass parameters from the calling Hugo page, you can import them here like so:
// import * as pageParams from '@params';
// Import dependencies that we still need to load in the global scope
import $ from 'jquery';
/** Import modules that are not components.
* TODO: Refactor these into single-purpose component modules.
@ -9,9 +9,10 @@
import * as apiLibs from './api-libs.js';
import * as codeControls from './code-controls.js';
import * as contentInteractions from './content-interactions.js';
import * as datetime from './datetime.js';
import { delay } from './helpers.js';
import { InfluxDBUrl } from './influxdb-url.js';
import * as localStorage from './services/local-storage.js';
import * as modals from './modals.js';
import * as notifications from './notifications.js';
import * as pageContext from './page-context.js';
@ -29,8 +30,17 @@ import * as v3Wayfinding from './v3-wayfinding.js';
import AskAITrigger from './ask-ai-trigger.js';
import CodePlaceholder from './code-placeholders.js';
import { CustomTimeTrigger } from './custom-timestamps.js';
import Diagram from './components/diagram.js';
import DocSearch from './components/doc-search.js';
import FeatureCallout from './feature-callouts.js';
import FluxGroupKeysDemo from './flux-group-keys.js';
import FluxInfluxDBVersionsTrigger from './flux-influxdb-versions.js';
import KeyBinding from './keybindings.js';
import ListFilters from './list-filters.js';
import ProductSelector from './version-selector.js';
import ReleaseToc from './release-toc.js';
import { SearchButton } from './search-button.js';
import SidebarSearch from './components/sidebar-search.js';
import { SidebarToggle } from './sidebar-toggle.js';
import Theme from './theme.js';
import ThemeSwitch from './theme-switch.js';
@ -49,11 +59,20 @@ const componentRegistry = {
'ask-ai-trigger': AskAITrigger,
'code-placeholder': CodePlaceholder,
'custom-time-trigger': CustomTimeTrigger,
diagram: Diagram,
'doc-search': DocSearch,
'feature-callout': FeatureCallout,
'flux-group-keys-demo': FluxGroupKeysDemo,
'flux-influxdb-versions-trigger': FluxInfluxDBVersionsTrigger,
keybinding: KeyBinding,
'list-filters': ListFilters,
'product-selector': ProductSelector,
'release-toc': ReleaseToc,
'search-button': SearchButton,
'sidebar-search': SidebarSearch,
'sidebar-toggle': SidebarToggle,
theme: Theme,
'theme-switch': ThemeSwitch,
};
/**
@ -71,7 +90,12 @@ function initGlobals() {
window.influxdatadocs.pageContext = pageContext;
window.influxdatadocs.toggleModal = modals.toggleModal;
window.influxdatadocs.componentRegistry = componentRegistry;
// Re-export jQuery to global namespace for legacy scripts
if (typeof window.jQuery === 'undefined') {
window.jQuery = window.$ = $;
}
return window.influxdatadocs;
}
@ -81,32 +105,35 @@ function initGlobals() {
*/
function initComponents(globals) {
const components = document.querySelectorAll('[data-component]');
components.forEach((component) => {
const componentName = component.getAttribute('data-component');
const ComponentConstructor = componentRegistry[componentName];
if (ComponentConstructor) {
// Initialize the component and register its constructor in the global namespace
try {
const instance = ComponentConstructor({ component });
globals[componentName] = ComponentConstructor;
// Optionally store component instances for future reference
if (!globals.instances) {
globals.instances = {};
}
if (!globals.instances[componentName]) {
globals.instances[componentName] = [];
}
globals.instances[componentName].push({
element: component,
instance,
});
} catch (error) {
console.error(
`Error initializing component "${componentName}":`,
error
);
}
} else {
console.warn(`Unknown component: "${componentName}"`);
@ -122,6 +149,7 @@ function initModules() {
apiLibs.initialize();
codeControls.initialize();
contentInteractions.initialize();
datetime.initialize();
InfluxDBUrl();
notifications.initialize();
pageFeedback.initialize();
@ -135,10 +163,10 @@ function initModules() {
function init() {
// Initialize global namespace and expose core modules
const globals = initGlobals();
// Initialize non-component UI modules
initModules();
// Initialize components from registry
initComponents(globals);
}
@ -147,4 +175,4 @@ function init() {
document.addEventListener('DOMContentLoaded', init);
// Export public API
export { initGlobals, componentRegistry };
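The registry above drives a simple lookup: each element with a `data-component` attribute is matched to a constructor by name. A stripped-down sketch of that dispatch (the registry contents and element shape are simplified placeholders, not the PR's exact code):

```javascript
// Minimal stand-in for componentRegistry
const registry = {
  'release-toc': ({ component }) => `initialized ${component.dataset.component}`,
};

// Look up and invoke the constructor for one element
function initFromRegistry(el) {
  const name = el.dataset.component;
  const Ctor = registry[name];
  return Ctor ? Ctor({ component: el }) : null;
}

const el = { dataset: { component: 'release-toc' } };
console.log(initFromRegistry(el)); // "initialized release-toc"
console.log(initFromRegistry({ dataset: { component: 'unknown' } })); // null
```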

View File

@ -1,34 +1,80 @@
/** This module retrieves browser context information and site data for the
* current page, version, and product.
*/
import { products } from './services/influxdata-products.js';
import { influxdbUrls } from './services/influxdb-urls.js';
function getCurrentProductData() {
const path = window.location.pathname;
const mappings = [
{
pattern: /\/influxdb\/cloud\//,
product: products.cloud,
urls: influxdbUrls.influxdb_cloud,
},
{
pattern: /\/influxdb3\/core/,
product: products.influxdb3_core,
urls: influxdbUrls.core,
},
{
pattern: /\/influxdb3\/enterprise/,
product: products.influxdb3_enterprise,
urls: influxdbUrls.enterprise,
},
{
pattern: /\/influxdb3\/cloud-serverless/,
product: products.influxdb3_cloud_serverless,
urls: influxdbUrls.cloud,
},
{
pattern: /\/influxdb3\/cloud-dedicated/,
product: products.influxdb3_cloud_dedicated,
urls: influxdbUrls.dedicated,
},
{
pattern: /\/influxdb3\/clustered/,
product: products.influxdb3_clustered,
urls: influxdbUrls.clustered,
},
{
pattern: /\/enterprise_v1\//,
product: products.enterprise_influxdb,
urls: influxdbUrls.oss,
},
{
pattern: /\/influxdb.*v1\//,
product: products.influxdb,
urls: influxdbUrls.oss,
},
{
pattern: /\/influxdb.*v2\//,
product: products.influxdb,
urls: influxdbUrls.oss,
},
{
pattern: /\/kapacitor\//,
product: products.kapacitor,
urls: influxdbUrls.oss,
},
{
pattern: /\/telegraf\//,
product: products.telegraf,
urls: influxdbUrls.oss,
},
{
pattern: /\/chronograf\//,
product: products.chronograf,
urls: influxdbUrls.oss,
},
{ pattern: /\/flux\//, product: products.flux, urls: influxdbUrls.oss },
];
for (const { pattern, product, urls } of mappings) {
if (pattern.test(path)) {
return {
product: product || 'unknown',
urls: urls || {},
};
}
}
@ -36,7 +82,8 @@ function getCurrentProductData() {
return { product: 'other', urls: {} };
}
// Return the page context
// (cloud, serverless, oss/enterprise, dedicated, clustered, other)
function getContext() {
if (/\/influxdb\/cloud\//.test(window.location.pathname)) {
return 'cloud';
@ -78,8 +125,12 @@ const context = getContext(),
protocol = location.protocol,
referrer = document.referrer === '' ? 'direct' : document.referrer,
referrerHost = getReferrerHost(),
// TODO: Verify this works since the addition of InfluxDB 3 naming
// and the Core and Enterprise versions.
version =
/^v\d/.test(pathArr[1]) || pathArr[1]?.includes('cloud')
? pathArr[1].replace(/^v/, '')
: 'n/a';
export {
context,
@ -92,4 +143,4 @@ export {
referrer,
referrerHost,
version,
};
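The reworked `version` expression is easier to sanity-check in isolation. A sketch assuming `pathArr` holds the URL path segments with the product name at index 0 (the function name is illustrative):

```javascript
function versionFromSegments(segments) {
  const seg = segments[1];
  // A segment like "v2.7", or one containing "cloud", is a version;
  // anything else maps to "n/a"
  return /^v\d/.test(seg) || seg?.includes('cloud')
    ? seg.replace(/^v/, '')
    : 'n/a';
}

console.log(versionFromSegments(['influxdb', 'v2.7'])); // "2.7"
console.log(versionFromSegments(['influxdb', 'cloud'])); // "cloud"
console.log(versionFromSegments(['telegraf'])); // "n/a"
```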

View File

@ -1,26 +1,67 @@
/////////////////////////// Table of Contents Script ///////////////////////////
/*
* This script is used to generate a table of contents for the
* release notes pages.
*/
export default function ReleaseToc({ component }) {
// Get all h2 elements that are not checkpoint-releases
const releases = Array.from(document.querySelectorAll('h2')).filter(
(el) => !el.id.match(/checkpoint-releases/)
);
// Extract data about each release from the array of releases
const releaseData = releases.map((el) => ({
name: el.textContent,
id: el.id,
class: el.getAttribute('class'),
date: el.getAttribute('date'),
}));
// Build the release table of contents
const releaseTocUl = component.querySelector('#release-toc ul');
releaseData.forEach((release) => {
releaseTocUl.appendChild(getReleaseItem(release));
});
/*
* This script is used to expand the release notes table of contents by the
* number specified in the `show` attribute of `ul.release-list`.
* Once all the release items are visible, the "Show More" button is hidden.
*/
const showMoreBtn = component.querySelector('.show-more');
if (showMoreBtn) {
showMoreBtn.addEventListener('click', function () {
const itemHeight = 1.885; // Item height in rem
const releaseNum = releaseData.length;
const maxHeight = releaseNum * itemHeight;
const releaseList = document.getElementById('release-list');
const releaseIncrement = Number(releaseList.getAttribute('show'));
const currentHeightMatch = releaseList.style.height.match(/\d+\.?\d+/);
const currentHeight = currentHeightMatch
? Number(currentHeightMatch[0])
: 0;
const potentialHeight = currentHeight + releaseIncrement * itemHeight;
const newHeight =
potentialHeight > maxHeight ? maxHeight : potentialHeight;
releaseList.style.height = `${newHeight}rem`;
if (newHeight >= maxHeight) {
// Simple fade out
showMoreBtn.style.transition = 'opacity 0.1s';
showMoreBtn.style.opacity = 0;
setTimeout(() => {
showMoreBtn.style.display = 'none';
}, 100);
}
});
}
}
// Use release data to generate a list item for each release
function getReleaseItem(releaseData) {
const li = document.createElement('li');
if (releaseData.class !== null) {
li.className = releaseData.class;
}
@ -28,42 +69,3 @@ function getReleaseItem(releaseData) {
li.setAttribute('date', releaseData.date);
return li;
}
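The show-more handler grows the list height by `show`-increments of a fixed item height and caps it at the full-list height. That math, isolated (the function name is illustrative, not from the PR):

```javascript
function nextHeight(currentHeight, increment, releaseCount, itemHeight = 1.885) {
  const maxHeight = releaseCount * itemHeight; // height of the fully expanded list
  const potential = currentHeight + increment * itemHeight;
  return Math.min(potential, maxHeight);
}

console.log(nextHeight(0, 10, 25)); // grows by 10 items' worth (~18.85rem)
console.log(nextHeight(37.7, 10, 25)); // capped at the full-list height (~47.125rem)
```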

View File

@ -1,10 +0,0 @@
// Fade content wrapper when focusing on search input
$('#algolia-search-input').focus(function() {
$('.content-wrapper').fadeTo(300, .35);
})
// Hide search dropdown when leaving search input
$('#algolia-search-input').blur(function() {
$('.content-wrapper').fadeTo(200, 1);
$('.ds-dropdown-menu').hide();
})

View File

@ -0,0 +1,3 @@
import { products as productsParam } from '@params';
export const products = productsParam || {};

View File

@ -0,0 +1,3 @@
import { influxdb_urls as influxdbUrlsParam } from '@params';
export const influxdbUrls = influxdbUrlsParam || {};

View File

@ -10,7 +10,8 @@
- messages: Messages (data/notifications.yaml) that have been seen (array)
- callouts: Feature callouts that have been seen (array)
*/
import { influxdbUrls } from './influxdb-urls.js';
// Prefix for all InfluxData docs local storage
const storagePrefix = 'influxdata_docs_';
@ -82,14 +83,12 @@ function getPreferences() {
//////////// MANAGE INFLUXDATA DOCS URLS IN LOCAL STORAGE //////////////////////
////////////////////////////////////////////////////////////////////////////////
const defaultUrls = {};
Object.entries(influxdbUrls).forEach(([product, { providers }]) => {
defaultUrls[product] =
providers.filter((provider) => provider.name === 'Default')[0]?.regions[0]
?.url || 'https://cloud2.influxdata.com';
});
export const DEFAULT_STORAGE_URLS = {
oss: defaultUrls.oss,
@ -177,7 +176,10 @@ const defaultNotificationsObj = {
function getNotifications() {
// Initialize notifications data if it doesn't already exist
if (localStorage.getItem(notificationStorageKey) === null) {
initializeStorageItem(
'notifications',
JSON.stringify(defaultNotificationsObj)
);
}
// Retrieve and parse the notifications data as JSON
@ -221,7 +223,10 @@ function setNotificationAsRead(notificationID, notificationType) {
readNotifications.push(notificationID);
notificationsObj[notificationType + 's'] = readNotifications;
localStorage.setItem(
notificationStorageKey,
JSON.stringify(notificationsObj)
);
}
// Export functions as a module and make the file backwards compatible for non-module environments until all remaining dependent scripts are ported to modules
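The new default-URL derivation reduces to a small lookup with a hard-coded fallback. A sketch for spot-checking (the helper name is illustrative; the fallback URL is the one hard-coded above):

```javascript
function defaultUrlFor(providers, fallback = 'https://cloud2.influxdata.com') {
  // First region of the provider named "Default", else the fallback
  return (
    providers.filter((provider) => provider.name === 'Default')[0]?.regions[0]
      ?.url || fallback
  );
}

console.log(
  defaultUrlFor([
    { name: 'Default', regions: [{ url: 'http://localhost:8086' }] },
  ])
); // "http://localhost:8086"
console.log(defaultUrlFor([])); // "https://cloud2.influxdata.com"
```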

View File

@ -3,7 +3,7 @@
http://www.thesitewizard.com/javascripts/change-style-sheets.shtml
*/
import * as localStorage from './services/local-storage.js';
// *** TO BE CUSTOMISED ***
var sidebar_state_preference_name = 'sidebar_state';

View File

@ -1,20 +1,21 @@
import Theme from './theme.js';
export default function ThemeSwitch({ component }) {
if (component === undefined) {
component = document;
}
component.querySelectorAll('.theme-switch-light').forEach((button) => {
button.addEventListener('click', function (event) {
event.preventDefault();
Theme({ component, style: 'light-theme' });
});
});
component.querySelectorAll('.theme-switch-dark').forEach((button) => {
button.addEventListener('click', function (event) {
event.preventDefault();
Theme({ component, style: 'dark-theme' });
});
});
}

View File

@ -1,4 +1,4 @@
import { getPreference, setPreference } from './services/local-storage.js';
const PROPS = {
style_preference_name: 'theme',
@ -6,19 +6,22 @@ const PROPS = {
style_domain: 'docs.influxdata.com',
};
function getPreferredTheme() {
return `${getPreference(PROPS.style_preference_name)}-theme`;
}
function switchStyle({ styles_element, css_title }) {
// Disable all other theme stylesheets
styles_element
.querySelectorAll('link[rel*="stylesheet"][title*="theme"]')
.forEach(function (link) {
link.disabled = true;
});
// Enable the stylesheet with the specified title
const link = styles_element.querySelector(
`link[rel*="stylesheet"][title="${css_title}"]`
);
link && (link.disabled = false);
setPreference(PROPS.style_preference_name, css_title.replace(/-theme/, ''));
@ -38,5 +41,4 @@ export default function Theme({ component, style }) {
if (component.dataset?.themeCallback === 'setVisibility') {
setVisibility(component);
}
}
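The preference string and the stylesheet title differ only by a `-theme` suffix, appended in `getPreferredTheme` and stripped again in `switchStyle`. The round-trip, as a sketch:

```javascript
const toCssTitle = (pref) => `${pref}-theme`; // as in getPreferredTheme
const toPref = (cssTitle) => cssTitle.replace(/-theme/, ''); // as in switchStyle

console.log(toCssTitle('dark')); // "dark-theme"
console.log(toPref(toCssTitle('dark'))); // "dark"
```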

View File

@ -0,0 +1,38 @@
/**
* Helper functions for debugging without source maps
* Example usage:
* In your code, you can use these functions like this:
* ```javascript
* import { debugLog, debugBreak, debugInspect } from './debug-helpers.js';
*
* const data = debugInspect(someData, 'Data');
* debugLog('Processing data', 'myFunction');
*
* function processData() {
* // Add a breakpoint that works with DevTools
* debugBreak();
*
* // Your existing code...
* }
* ```
*
* @fileoverview DEVELOPMENT USE ONLY - Functions should not be committed to production
*/
/* eslint-disable no-debugger */
/* eslint-disable-next-line */
// NOTE: These functions are detected by ESLint rules to prevent committing debug code
export function debugLog(message, context = '') {
const contextStr = context ? `[${context}]` : '';
console.log(`DEBUG${contextStr}: ${message}`);
}
export function debugBreak() {
debugger;
}
export function debugInspect(value, label = 'Inspect') {
console.log(`DEBUG[${label}]:`, value);
return value;
}
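Because `debugInspect` returns its argument, it can wrap an expression in place without restructuring the surrounding code:

```javascript
function debugInspect(value, label = 'Inspect') {
  console.log(`DEBUG[${label}]:`, value);
  return value; // pass-through, so the call can sit inside an expression
}

const total = debugInspect(2 + 3, 'sum') * 2;
console.log(total); // 10
```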

View File

@ -0,0 +1,107 @@
/**
* Manages search interactions for DocSearch integration
* Uses MutationObserver to watch for dropdown creation
*/
export default function SearchInteractions({ searchInput }) {
const contentWrapper = document.querySelector('.content-wrapper');
let observer = null;
let dropdownObserver = null;
let dropdownMenu = null;
const debug = false; // Set to true for debugging logs
// Fade content wrapper when focusing on search input
function handleFocus() {
contentWrapper.style.opacity = '0.35';
contentWrapper.style.transition = 'opacity 300ms';
}
// Hide search dropdown when leaving search input
function handleBlur(event) {
// Only process blur if not clicking within dropdown
const relatedTarget = event.relatedTarget;
if (
relatedTarget &&
(relatedTarget.closest('.algolia-autocomplete') ||
relatedTarget.closest('.ds-dropdown-menu'))
) {
return;
}
contentWrapper.style.opacity = '1';
contentWrapper.style.transition = 'opacity 200ms';
// Hide dropdown if it exists
if (dropdownMenu) {
dropdownMenu.style.display = 'none';
}
}
// Add event listeners
searchInput.addEventListener('focus', handleFocus);
searchInput.addEventListener('blur', handleBlur);
// Use MutationObserver to detect when dropdown is added to the DOM
observer = new MutationObserver((mutations) => {
for (const mutation of mutations) {
if (mutation.type === 'childList') {
const newDropdown = document.querySelector(
'.ds-dropdown-menu:not([data-monitored])'
);
if (newDropdown) {
// Save reference to dropdown
dropdownMenu = newDropdown;
newDropdown.setAttribute('data-monitored', 'true');
// Monitor dropdown removal/display changes
dropdownObserver = new MutationObserver((dropdownMutations) => {
for (const dropdownMutation of dropdownMutations) {
if (debug) {
if (
dropdownMutation.type === 'attributes' &&
dropdownMutation.attributeName === 'style'
) {
console.log(
'Dropdown style changed:',
dropdownMenu.style.display
);
}
}
}
});
// Observe changes to dropdown attributes (like style)
dropdownObserver.observe(dropdownMenu, {
attributes: true,
attributeFilter: ['style'],
});
// Add event listeners to keep dropdown open when interacted with
dropdownMenu.addEventListener('mousedown', (e) => {
// Prevent blur on searchInput when clicking in dropdown
e.preventDefault();
});
}
}
}
});
// Start observing the document body for dropdown creation
observer.observe(document.body, {
childList: true,
subtree: true,
});
// Return cleanup function
return function cleanup() {
searchInput.removeEventListener('focus', handleFocus);
searchInput.removeEventListener('blur', handleBlur);
if (observer) {
observer.disconnect();
}
if (dropdownObserver) {
dropdownObserver.disconnect();
}
};
}

View File

@ -0,0 +1,35 @@
/**
* Platform detection utility functions
* Provides methods for detecting user's operating system
*/
/**
* Detects user's operating system using modern techniques
* Falls back to userAgent parsing when newer APIs aren't available
* @returns {string} Operating system identifier ("osx", "win", "linux", or "other")
*/
export function getPlatform() {
// Try to use modern User-Agent Client Hints API first (Chrome 89+, Edge 89+)
if (navigator.userAgentData && navigator.userAgentData.platform) {
const platform = navigator.userAgentData.platform.toLowerCase();
if (platform.includes('mac')) return 'osx';
if (platform.includes('win')) return 'win';
if (platform.includes('linux')) return 'linux';
}
// Fall back to userAgent string parsing
const userAgent = navigator.userAgent.toLowerCase();
if (
userAgent.includes('mac') ||
userAgent.includes('iphone') ||
userAgent.includes('ipad')
)
return 'osx';
if (userAgent.includes('win')) return 'win';
if (userAgent.includes('linux') || userAgent.includes('android'))
return 'linux';
return 'other';
}
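The userAgent fallback is a pure string mapping, so it can be spot-checked without a `navigator` object (the function name is illustrative):

```javascript
function platformFromUserAgent(ua) {
  const s = ua.toLowerCase();
  // Check mac-family strings first so "Macintosh" wins over other matches
  if (s.includes('mac') || s.includes('iphone') || s.includes('ipad'))
    return 'osx';
  if (s.includes('win')) return 'win';
  if (s.includes('linux') || s.includes('android')) return 'linux';
  return 'other';
}

console.log(platformFromUserAgent('Mozilla/5.0 (Windows NT 10.0; Win64)')); // "win"
console.log(platformFromUserAgent('Mozilla/5.0 (X11; Linux x86_64)')); // "linux"
console.log(platformFromUserAgent('curl/8.0')); // "other"
```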

View File

@ -1,6 +1,14 @@
import { CLOUD_URLS } from './influxdb-url.js';
import * as localStorage from './services/local-storage.js';
import {
context,
host,
hostname,
path,
protocol,
referrer,
referrerHost,
} from './page-context.js';
/**
* Builds a referrer whitelist array that includes the current page host and all
@ -69,8 +77,6 @@ function setWayfindingInputState() {
}
function submitWayfindingData(engine, action) {
// Build lp using page data and engine data
const lp = `ioxwayfinding,host=${hostname},path=${path},referrer=${referrer},engine=${engine} action="${action}"`;
@ -81,10 +87,7 @@ function submitWayfindingData(engine, action) {
'https://j32dswat7l.execute-api.us-east-1.amazonaws.com/prod/wayfinding'
);
xhr.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
xhr.setRequestHeader('Access-Control-Allow-Origin', `${protocol}//${host}`);
xhr.setRequestHeader('Content-Type', 'text/plain; charset=utf-8');
xhr.setRequestHeader('Accept', 'application/json');
xhr.send(lp);

View File

@ -1,19 +1,21 @@
export default function ProductSelector({ component }) {
  // Select the product dropdown and dropdown items
  const productDropdown = component.querySelector('#product-dropdown');
  const dropdownItems = component.querySelector('#dropdown-items');

  // Expand the menu on click
  if (productDropdown) {
    productDropdown.addEventListener('click', function () {
      productDropdown.classList.toggle('open');
      dropdownItems.classList.toggle('open');
    });
  }

  // Close the dropdown by clicking anywhere else
  document.addEventListener('click', function (e) {
    // Check if the click was outside of the '.product-list' container
    if (!e.target.closest('.product-list')) {
      dropdownItems.classList.remove('open');
    }
  });
}

View File

@ -0,0 +1,18 @@
/*
Datetime Components
----------------------------------------------
*/
.current-timestamp,
.current-date,
.current-time,
.enterprise-eol-date {
color: $current-timestamp-color;
display: inline-block;
font-family: $proxima;
white-space: nowrap;
}
.nowrap {
white-space: nowrap;
}

View File

@ -97,4 +97,4 @@ blockquote {
"blocks/important",
"blocks/warning",
"blocks/caution",
"blocks/special-state";

View File

@ -16,6 +16,10 @@
background: $article-code-bg !important;
font-size: .85em;
font-weight: $medium;
p {
background: $article-bg !important;
}
}
.node {

View File

@ -34,5 +34,10 @@
vertical-align: middle;
}
}
// Remove max-width when only one button is present
&:only-child {
max-width: none;
}
}
}

View File

@ -1,10 +1,10 @@
.block.special-state {
@include gradient($grad-burningDusk);
padding: 4px;
border: none;
border-radius: 25px !important;
.state-content {
background: $article-bg;
border-radius: 21px;
padding: calc(1.65rem - 4px) calc(2rem - 4px) calc(.1rem + 4px) calc(2rem - 4px);

View File

@ -23,6 +23,7 @@
"layouts/syntax-highlighting",
"layouts/algolia-search-overrides",
"layouts/landing",
"layouts/datetime",
"layouts/error-page",
"layouts/footer-widgets",
"layouts/modals",

View File

@ -203,6 +203,12 @@ $article-btn-text-hover: $g20-white;
$article-nav-icon-bg: $g5-pepper;
$article-nav-acct-bg: $g3-castle;
// Datetime shortcode colors
$current-timestamp-color: $g15-platinum;
$current-date-color: $g15-platinum;
$current-time-color: $g15-platinum;
$enterprise-eol-date-color: $g15-platinum;
// Error Page Colors
$error-page-btn: $b-pool;
$error-page-btn-text: $g20-white;

View File

@ -203,6 +203,12 @@ $article-btn-text-hover: $g20-white !default;
$article-nav-icon-bg: $g6-smoke !default;
$article-nav-acct-bg: $g5-pepper !default;
// Datetime Colors
$current-timestamp-color: $article-text !default;
$current-date-color: $article-text !default;
$current-time-color: $article-text !default;
$enterprise-eol-date-color: $article-text !default;
// Error Page Colors
$error-page-btn: $b-pool !default;
$error-page-btn-text: $g20-white !default;

View File

@ -23,6 +23,7 @@ export { buildContributingInstructions };
/** Build instructions from CONTRIBUTING.md
* This script reads CONTRIBUTING.md, formats it appropriately,
* and saves it to .github/instructions/contributing.instructions.md
* Includes optimization to reduce file size for better performance
*/
function buildContributingInstructions() {
// Paths
@ -41,16 +42,19 @@ function buildContributingInstructions() {
// Read the CONTRIBUTING.md file
let content = fs.readFileSync(contributingPath, 'utf8');
// Optimize content by removing less critical sections for Copilot
content = optimizeContentForContext(content);
// Format the content for Copilot instructions with applyTo attribute
content = `---
applyTo: "content/**/*.md, layouts/**/*.html"
---
# Contributing instructions for InfluxData Documentation
## Purpose and scope
Help document InfluxData products
by creating clear, accurate technical content with proper
code examples, frontmatter, shortcodes, and formatting.
@ -59,7 +63,17 @@ ${content}`;
// Write the formatted content to the instructions file
fs.writeFileSync(instructionsPath, content);
const fileSize = fs.statSync(instructionsPath).size;
const sizeInKB = (fileSize / 1024).toFixed(1);
console.log(
`✅ Generated instructions at ${instructionsPath} (${sizeInKB}KB)`
);
if (fileSize > 40000) {
console.warn(
`⚠️ Instructions file is large (${sizeInKB}KB > 40KB) and may impact performance`
);
}
// Add the file to git if it has changed
try {
@ -74,3 +88,58 @@ ${content}`;
console.warn('⚠️ Could not add instructions file to git:', error.message);
}
}
/**
* Optimize content for Copilot by removing or condensing less critical sections
* while preserving essential documentation guidance
*/
function optimizeContentForContext(content) {
// Remove or condense sections that are less relevant for context assistance
const sectionsToRemove = [
// Installation and setup sections (less relevant for writing docs)
/### Install project dependencies[\s\S]*?(?=\n##|\n###|$)/g,
/### Install Node\.js dependencies[\s\S]*?(?=\n##|\n###|$)/g,
/### Install Docker[\s\S]*?(?=\n##|\n###|$)/g,
/#### Build the test dependency image[\s\S]*?(?=\n##|\n###|$)/g,
/### Install Visual Studio Code extensions[\s\S]*?(?=\n##|\n###|$)/g,
/### Run the documentation locally[\s\S]*?(?=\n##|\n###|$)/g,
// Testing and CI/CD sections (important but can be condensed)
/### Set up test scripts and credentials[\s\S]*?(?=\n##|\n###|$)/g,
/#### Test shell and python code blocks[\s\S]*?(?=\n##|\n###|$)/g,
/#### Troubleshoot tests[\s\S]*?(?=\n##|\n###|$)/g,
/### Pytest collected 0 items[\s\S]*?(?=\n##|\n###|$)/g,
// Long code examples that can be referenced elsewhere
/```[\s\S]{500,}?```/g,
// Repetitive examples
/#### Example[\s\S]*?(?=\n####|\n###|\n##|$)/g,
];
// Remove identified sections
sectionsToRemove.forEach((regex) => {
content = content.replace(regex, '');
});
// Condense whitespace
content = content.replace(/\n{3,}/g, '\n\n');
// Remove HTML comments
content = content.replace(/<!--[\s\S]*?-->/g, '');
// Shorten repetitive content
content = content.replace(/(\{%[^%]+%\})[\s\S]*?\1/g, (match) => {
// If it's a long repeated pattern, show it once with a note
if (match.length > 200) {
const firstOccurrence = match.split('\n\n')[0];
return (
firstOccurrence +
'\n\n[Similar patterns apply - see full CONTRIBUTING.md for complete examples]'
);
}
return match;
});
return content;
}
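Each stripping regex consumes from its heading up to (but not including) the next heading. One of the patterns from `sectionsToRemove` applied to a tiny sample, for illustration:

```javascript
const sample = '### Install Docker\nSteps here.\n\n## Style guide\nKeep this.';
// Same pattern as in sectionsToRemove above
const out = sample.replace(/### Install Docker[\s\S]*?(?=\n##|\n###|$)/g, '');
console.log(JSON.stringify(out)); // "\n## Style guide\nKeep this."
```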

View File

@ -303,14 +303,47 @@ services:
container_name: influxdb3-core
image: influxdb:3-core
ports:
- 8282:8181
command:
- influxdb3
- serve
- --node-id=node0
- --log-filter=debug
- --object-store=file
- --data-dir=/var/lib/influxdb3/data
- --plugin-dir=/var/lib/influxdb3/plugins
volumes:
- type: bind
source: test/.influxdb3/core/data
target: /var/lib/influxdb3/data
- type: bind
source: test/.influxdb3/core/plugins
target: /var/lib/influxdb3/plugins
influxdb3-enterprise:
container_name: influxdb3-enterprise
image: influxdb:3-enterprise
ports:
- 8181:8181
# Change the INFLUXDB3_LICENSE_EMAIL environment variable to your email address. You can also set it in a `.env` file in the same directory as this compose file. Docker Compose automatically loads the .env file.
# The license email option is only used the first time you run the container; you can't change the license email after the first run.
# The server stores the license in the data directory in the object store and the license is associated with the cluster ID and email.
command:
- influxdb3
- serve
- --node-id=node0
- --cluster-id=cluster0
- --log-filter=debug
- --object-store=file
- --data-dir=/var/lib/influxdb3/data
- --plugin-dir=/var/lib/influxdb3/plugins
- --license-email=${INFLUXDB3_LICENSE_EMAIL}
volumes:
- type: bind
source: test/.influxdb3/enterprise/data
target: /var/lib/influxdb3/data
- type: bind
source: test/.influxdb3/enterprise/plugins
target: /var/lib/influxdb3/plugins
telegraf-pytest:
container_name: telegraf-pytest
image: influxdata/docs-pytest
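The `INFLUXDB3_LICENSE_EMAIL` variable referenced in the comments above can be supplied through an `.env` file in the same directory as the compose file; a sketch with a placeholder address:

```bash
# .env — Docker Compose loads this file automatically from the compose directory.
INFLUXDB3_LICENSE_EMAIL=you@example.com
```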

View File

@ -1,2 +0,0 @@
import:
- hugo.yml

View File

@ -1,4 +1,4 @@
baseURL: 'https://docs.influxdata.com/'
baseURL: https://docs.influxdata.com/
languageCode: en-us
title: InfluxDB Documentation
@ -49,21 +49,52 @@ privacy:
youtube:
disable: false
privacyEnhanced: true
outputFormats:
json:
mediaType: application/json
baseName: pages
isPlainText: true
# Asset processing configuration for development
build:
# Ensure Hugo correctly processes JavaScript modules
jsConfig:
nodeEnv: "development"
# Development asset processing
writeStats: false
useResourceCacheWhen: "fallback"
noJSConfigInAssets: false
# Asset processing configuration
assetDir: "assets"
module:
mounts:
- source: assets
target: assets
- source: node_modules
target: assets/node_modules
target: assets/node_modules
# Environment parameters
params:
env: development
environment: development
# Configure the server for development
server:
port: 1313
baseURL: 'http://localhost:1313/'
watchChanges: true
disableLiveReload: false
# Ignore specific warning logs
ignoreLogs:
- warning-goldmark-raw-html
# Disable minification for development
minify:
disableJS: true
disableCSS: true
disableHTML: true
minifyOutput: false

View File

@ -0,0 +1,40 @@
# Production overrides for CI/CD builds
baseURL: 'https://docs.influxdata.com/'
# Production environment parameters
params:
env: production
environment: production
# Enable minification for production
minify:
disableJS: false
disableCSS: false
disableHTML: false
minifyOutput: true
# Production asset processing
build:
writeStats: false
useResourceCacheWhen: "fallback"
buildOptions:
sourcemap: false
target: "es2015"
# Asset processing configuration
assetDir: "assets"
# Mount assets for production
module:
mounts:
- source: assets
target: assets
- source: node_modules
target: assets/node_modules
# Disable development server settings
server: {}
# Suppress the warning mentioned in the error
ignoreLogs:
- 'warning-goldmark-raw-html'

View File

@ -0,0 +1,17 @@
build:
writeStats: false
useResourceCacheWhen: "fallback"
buildOptions:
sourcemap: false
target: "es2015"
minify:
disableJS: false
disableCSS: false
disableHTML: false
minifyOutput: true
params:
env: production
environment: production
server:
  disableLiveReload: true

config/staging/hugo.yml Normal file
View File

@ -0,0 +1,19 @@
baseURL: https://test2.docs.influxdata.com/
build:
writeStats: false
useResourceCacheWhen: "fallback"
buildOptions:
sourcemap: false
target: "es2015"
minify:
disableJS: false
disableCSS: false
disableHTML: false
minifyOutput: true
params:
env: staging
environment: staging
server:
  disableLiveReload: true

View File

@ -1,20 +0,0 @@
baseURL: 'http://localhost:1315/'
server:
port: 1315
# Override settings for testing
buildFuture: true
# Configure what content is built in testing env
params:
environment: testing
buildTestContent: true
# Keep your shared content exclusions
ignoreFiles:
- "content/shared/.*"
# Ignore specific warning logs
ignoreLogs:
- warning-goldmark-raw-html

View File

@ -120,13 +120,13 @@ You can view the file [here](https://github.com/influxdb/influxdb/blob/master/sc
InfluxDB 1.5 introduces the option to log HTTP request traffic separately from the other InfluxDB log output. When HTTP request logging is enabled, the HTTP logs are intermingled by default with internal InfluxDB logging. By redirecting the HTTP request log entries to a separate file, both log files are easier to read, monitor, and debug.
See [Redirecting HTTP request logging](/enterprise_influxdb/v1/administration/logs/#redirecting-http-access-logging) in the InfluxDB OSS documentation.
For more information, see the [InfluxDB OSS v1 HTTP access logging documentation](/influxdb/v1/administration/logs/#http-access-logging).
## Structured logging
With InfluxDB 1.5, structured logging is supported, enabling machine-readable and more developer-friendly log output formats. The two new structured log formats, `logfmt` and `json`, provide easier filtering and searching with external tools and simplify integration of InfluxDB logs with Splunk, Papertrail, Elasticsearch, and other third-party tools.
See [Structured logging](/enterprise_influxdb/v1/administration/logs/#structured-logging) in the InfluxDB OSS documentation.
For more information, see the [InfluxDB OSS v1 structured logging documentation](/influxdb/v1/administration/logs/#structured-logging).
## Tracing

View File

@ -11,6 +11,10 @@ menu:
name: Install
weight: 103
parent: Introduction
related:
- /enterprise_influxdb/v1/introduction/installation/docker/
- /enterprise_influxdb/v1/introduction/installation/single-server/
- /enterprise_influxdb/v1/introduction/installation/fips-compliant/
---
Complete the following steps to install an InfluxDB Enterprise cluster in your own environment:
@ -19,8 +23,4 @@ Complete the following steps to install an InfluxDB Enterprise cluster in your o
2. [Install InfluxDB data nodes](/enterprise_influxdb/v1/introduction/installation/data_node_installation/)
3. [Install Chronograf](/enterprise_influxdb/v1/introduction/installation/chrono_install/)
{{< influxdbu title="Installing InfluxDB Enterprise" summary="Learn about InfluxDB architecture and how to install InfluxDB Enterprise with step-by-step instructions." action="Take the course" link="https://university.influxdata.com/courses/installing-influxdb-enterprise-tutorial/" >}}
#### Other installation options
- [Install InfluxDB Enterprise on a single server](/enterprise_influxdb/v1/introduction/installation/single-server/)
- [Federal Information Processing Standards (FIPS)-compliant InfluxDB Enterprise](/enterprise_influxdb/v1/introduction/installation/fips-compliant/)
{{< influxdbu title="Installing InfluxDB Enterprise" summary="Learn about InfluxDB architecture and how to install InfluxDB Enterprise with step-by-step instructions." action="Take the course" link="https://university.influxdata.com/courses/installing-influxdb-enterprise-tutorial/" >}}

View File

@ -327,7 +327,7 @@ influxdb 2706 0.2 7.0 571008 35376 ? Sl 15:37 0:16 /usr/bin/influx
```
If you do not see the expected output, the process is either not launching or is exiting prematurely.
Check the [logs](/enterprise_influxdb/v1/administration/logs/)
Check the [logs](/enterprise_influxdb/v1/administration/monitor/logs/)
for error messages and verify the previous setup steps are complete.
If you see the expected output, repeat for the remaining data nodes.
@ -395,6 +395,10 @@ to the cluster.
{{% /expand %}}
{{< /expand-wrapper >}}
## Docker installation
For Docker-based installations, see [Install and run InfluxDB v1 Enterprise with Docker](/enterprise_influxdb/v1/introduction/installation/docker/) for complete instructions on setting up data nodes using Docker images.
## Next steps
Once your data nodes are part of your cluster, do the following:

View File

@ -0,0 +1,238 @@
---
title: Install and run InfluxDB v1 Enterprise with Docker
description: Install and run InfluxDB v1 Enterprise using Docker images for meta nodes and data nodes.
menu:
enterprise_influxdb_v1:
name: Install with Docker
weight: 30
parent: Install
related:
- /enterprise_influxdb/v1/introduction/installation/docker/docker-troubleshooting/
---
InfluxDB v1 Enterprise provides Docker images for both meta nodes and data nodes to simplify cluster deployment and management.
Using Docker allows you to quickly set up and run InfluxDB Enterprise clusters with consistent configurations.
> [!Important]
> #### Enterprise license required
> You must have a valid license to run InfluxDB Enterprise.
> Contact <sales@influxdata.com> for licensing information or obtain a 14-day demo license via the [InfluxDB Enterprise portal](https://portal.influxdata.com/users/new).
## Docker image variants
InfluxDB Enterprise provides two specialized Docker images:
- **`influxdb:meta`**: Enterprise meta node package for clustering
- **`influxdb:data`**: Enterprise data node package for clustering
## Requirements
- [Docker](https://docs.docker.com/get-docker/) installed and running
- Valid [InfluxData license key](#enterprise-license-required)
- Network connectivity between nodes
- At least 3 meta nodes (odd number recommended)
- At least 2 data nodes
## Set up an InfluxDB Enterprise cluster with Docker
### 1. Create a Docker network
Create a custom Docker network to allow communication between meta and data nodes:
```bash
docker network create influxdb
```
### 2. Start meta nodes
Start three meta nodes using the `influxdb:meta` image.
Each meta node requires a unique hostname and the Enterprise license key:
```bash
# Start first meta node
docker run -d \
--name=influxdb-meta-0 \
--network=influxdb \
-h influxdb-meta-0 \
-e INFLUXDB_ENTERPRISE_LICENSE_KEY=your-license-key \
influxdb:meta
# Start second meta node
docker run -d \
--name=influxdb-meta-1 \
--network=influxdb \
-h influxdb-meta-1 \
-e INFLUXDB_ENTERPRISE_LICENSE_KEY=your-license-key \
influxdb:meta
# Start third meta node
docker run -d \
--name=influxdb-meta-2 \
--network=influxdb \
-h influxdb-meta-2 \
-e INFLUXDB_ENTERPRISE_LICENSE_KEY=your-license-key \
influxdb:meta
```
### 3. Configure meta nodes to know each other
From the first meta node, add the other meta nodes to the cluster:
```bash
# Add the second meta node
docker exec influxdb-meta-0 \
influxd-ctl add-meta influxdb-meta-1:8091
# Add the third meta node
docker exec influxdb-meta-0 \
influxd-ctl add-meta influxdb-meta-2:8091
```
### 4. Start data nodes
Start two or more data nodes using the `influxdb:data` image:
```bash
# Start first data node
docker run -d \
--name=influxdb-data-0 \
--network=influxdb \
-h influxdb-data-0 \
-e INFLUXDB_ENTERPRISE_LICENSE_KEY=your-license-key \
influxdb:data
# Start second data node
docker run -d \
--name=influxdb-data-1 \
--network=influxdb \
-h influxdb-data-1 \
-e INFLUXDB_ENTERPRISE_LICENSE_KEY=your-license-key \
influxdb:data
```
### 5. Add data nodes to the cluster
From the first meta node, register each data node with the cluster:
```bash
# Add first data node
docker exec influxdb-meta-0 \
influxd-ctl add-data influxdb-data-0:8088
# Add second data node
docker exec influxdb-meta-0 \
influxd-ctl add-data influxdb-data-1:8088
```
### 6. Verify the cluster
Check that all nodes are properly added to the cluster:
```bash
docker exec influxdb-meta-0 influxd-ctl show
```
Expected output:
```
Data Nodes
==========
ID TCP Address Version
4 influxdb-data-0:8088 1.x.x-cX.X.X
5 influxdb-data-1:8088 1.x.x-cX.X.X
Meta Nodes
==========
TCP Address Version
influxdb-meta-0:8091 1.x.x-cX.X.X
influxdb-meta-1:8091 1.x.x-cX.X.X
influxdb-meta-2:8091 1.x.x-cX.X.X
```
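In scripts, you can sanity-check this output by counting registered nodes on their cluster ports. The following is a sketch that uses the sample output above; in practice you would pipe from `docker exec influxdb-meta-0 influxd-ctl show`:

```bash
# Sample `influxd-ctl show` output captured from the step above.
show_output='Data Nodes
==========
ID   TCP Address            Version
4    influxdb-data-0:8088   1.x.x-cX.X.X
5    influxdb-data-1:8088   1.x.x-cX.X.X

Meta Nodes
==========
TCP Address            Version
influxdb-meta-0:8091   1.x.x-cX.X.X
influxdb-meta-1:8091   1.x.x-cX.X.X
influxdb-meta-2:8091   1.x.x-cX.X.X'

# Count registered nodes by their cluster ports.
data_nodes=$(printf '%s\n' "$show_output" | grep -c ':8088')
meta_nodes=$(printf '%s\n' "$show_output" | grep -c ':8091')
echo "$data_nodes data nodes, $meta_nodes meta nodes"
# 2 data nodes, 3 meta nodes
```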
## Configuration options
### Using environment variables
You can configure {{% product-name %}} using environment variables with the format `INFLUXDB_<SECTION>_<NAME>`.
Common environment variables:
- `INFLUXDB_REPORTING_DISABLED=true`
- `INFLUXDB_META_DIR=/path/to/metadir`
- `INFLUXDB_ENTERPRISE_REGISTRATION_ENABLED=true`
- `INFLUXDB_ENTERPRISE_LICENSE_KEY=your-license-key`
For all available environment variables, see how to [Configure Enterprise](/enterprise_influxdb/v1/administration/configure/).
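The mapping from a config file option to its environment variable name (uppercase the `[section]` and option names, join them with underscores, and prefix `INFLUXDB_`) can be sketched in the shell; `[meta]`/`dir` here is an illustrative pair:

```bash
# Derive the env var name for the "dir" option in the [meta] section.
section=meta
option=dir
var="INFLUXDB_$(printf '%s_%s' "$section" "$option" | tr '[:lower:]' '[:upper:]' | tr '-' '_')"
echo "$var"
# INFLUXDB_META_DIR
```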
### Using configuration files
You can also mount custom configuration files:
```bash
# Mount custom meta configuration
docker run -d \
--name=influxdb-meta-0 \
--network=influxdb \
-h influxdb-meta-0 \
-v /path/to/influxdb-meta.conf:/etc/influxdb/influxdb-meta.conf \
-e INFLUXDB_ENTERPRISE_LICENSE_KEY=your-license-key \
influxdb:meta
# Mount custom data configuration
docker run -d \
--name=influxdb-data-0 \
--network=influxdb \
-h influxdb-data-0 \
-v /path/to/influxdb.conf:/etc/influxdb/influxdb.conf \
-e INFLUXDB_ENTERPRISE_LICENSE_KEY=your-license-key \
influxdb:data
```
## Exposing ports
To access your InfluxDB Enterprise cluster from outside Docker, expose the necessary ports:
```bash
# Data node with HTTP API port exposed
docker run -d \
--name=influxdb-data-0 \
--network=influxdb \
-h influxdb-data-0 \
-p 8086:8086 \
-e INFLUXDB_ENTERPRISE_LICENSE_KEY=your-license-key \
influxdb:data
```
## Persistent data storage
To persist data beyond container lifecycles, mount volumes:
```bash
# Meta node with persistent storage
docker run -d \
--name=influxdb-meta-0 \
--network=influxdb \
-h influxdb-meta-0 \
-v influxdb-meta-0-data:/var/lib/influxdb \
-e INFLUXDB_ENTERPRISE_LICENSE_KEY=your-license-key \
influxdb:meta
# Data node with persistent storage
docker run -d \
--name=influxdb-data-0 \
--network=influxdb \
-h influxdb-data-0 \
-v influxdb-data-0-data:/var/lib/influxdb \
-e INFLUXDB_ENTERPRISE_LICENSE_KEY=your-license-key \
influxdb:data
```
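The `docker run` commands above can also be expressed as a Compose file. The following is a minimal sketch, not a complete deployment: it shows one meta node and one data node with named volumes, the service and volume names are illustrative, and the `influxd-ctl add-meta`/`add-data` registration steps still need to be run with `docker exec`:

```yaml
# Hypothetical Compose sketch; add more meta and data services as needed.
services:
  influxdb-meta-0:
    image: influxdb:meta
    container_name: influxdb-meta-0
    hostname: influxdb-meta-0
    networks:
      - influxdb
    environment:
      - INFLUXDB_ENTERPRISE_LICENSE_KEY=${INFLUXDB_ENTERPRISE_LICENSE_KEY}
    volumes:
      - influxdb-meta-0-data:/var/lib/influxdb
  influxdb-data-0:
    image: influxdb:data
    container_name: influxdb-data-0
    hostname: influxdb-data-0
    networks:
      - influxdb
    ports:
      - "8086:8086"
    environment:
      - INFLUXDB_ENTERPRISE_LICENSE_KEY=${INFLUXDB_ENTERPRISE_LICENSE_KEY}
    volumes:
      - influxdb-data-0-data:/var/lib/influxdb

networks:
  influxdb:

volumes:
  influxdb-meta-0-data:
  influxdb-data-0-data:
```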
## Next steps
Once your InfluxDB Enterprise cluster is running:
1. [Set up authentication and authorization](/enterprise_influxdb/v1/administration/configure/security/authentication/) for your cluster.
2. [Enable TLS encryption](/enterprise_influxdb/v1/guides/enable-tls/) for secure communication.
3. [Install and set up Chronograf](/enterprise_influxdb/v1/introduction/installation/chrono_install) for cluster management and visualization.
4. Configure your load balancer to send client traffic to data nodes. For more information, see [Data node installation](/enterprise_influxdb/v1/introduction/installation/data_node_installation/).
5. [Monitor your cluster](/enterprise_influxdb/v1/administration/monitor/) for performance and reliability.
6. [Write data with the InfluxDB API](/enterprise_influxdb/v1/guides/write_data/).
7. [Query data with the InfluxDB API](/enterprise_influxdb/v1/guides/query_data/).

View File

@ -0,0 +1,226 @@
---
title: Docker troubleshooting for InfluxDB v1 Enterprise
description: Common Docker-specific issues and solutions for InfluxDB v1 Enterprise deployments.
menu:
enterprise_influxdb_v1:
name: Docker troubleshooting
weight: 35
parent: Install with Docker
related:
- /enterprise_influxdb/v1/introduction/installation/docker/
- /enterprise_influxdb/v1/troubleshooting/
- /enterprise_influxdb/v1/administration/monitor/logs/
---
This guide covers common Docker-specific issues and solutions when running InfluxDB v1 Enterprise in containers.
## Common Docker issues
### License key issues
#### Problem: Container fails to start with license error
**Symptoms:**
```
license key verification failed
```
**Solution:**
1. Verify your license key is valid and not expired
2. Ensure the license key environment variable is set correctly:
```bash
-e INFLUXDB_ENTERPRISE_LICENSE_KEY=your-actual-license-key
```
3. If nodes cannot reach `portal.influxdata.com`, use a license file instead:
```bash
-v /path/to/license.json:/etc/influxdb/license.json
-e INFLUXDB_ENTERPRISE_LICENSE_PATH=/etc/influxdb/license.json
```
### Network connectivity issues
#### Problem: Nodes cannot communicate with each other
**Symptoms:**
- Meta nodes fail to join cluster
- Data nodes cannot connect to meta nodes
- `influxd-ctl show` shows missing nodes
**Solution:**
1. Ensure all containers are on the same Docker network:
```bash
docker network create influxdb
# Add --network=influxdb to all container runs
```
2. Use container hostnames consistently:
```bash
# Use hostname (-h) that matches container name
-h influxdb-meta-0 --name=influxdb-meta-0
```
3. Verify network connectivity between containers:
```bash
docker exec influxdb-meta-0 ping influxdb-meta-1
```
#### Problem: Cannot access InfluxDB from host machine
**Symptoms:**
- Connection refused when trying to connect to InfluxDB API
- Client tools cannot reach the database
**Solution:**
Expose the HTTP API port (8086) when starting data nodes:
```bash
docker run -d \
--name=influxdb-data-0 \
--network=influxdb \
-h influxdb-data-0 \
-p 8086:8086 \
-e INFLUXDB_ENTERPRISE_LICENSE_KEY=your-license-key \
influxdb:data
```
### Configuration issues
#### Problem: Custom configuration not being applied
**Symptoms:**
- Environment variables ignored
- Configuration file changes not taking effect
**Solution:**
1. For environment variables, use the correct format `INFLUXDB_<SECTION>_<NAME>`:
```bash
# Correct
-e INFLUXDB_REPORTING_DISABLED=true
-e INFLUXDB_META_DIR=/custom/meta/dir
# Incorrect
-e REPORTING_DISABLED=true
```
2. For configuration files, ensure proper mounting:
```bash
# Mount config file correctly
-v /host/path/influxdb.conf:/etc/influxdb/influxdb.conf
```
3. Verify file permissions on mounted configuration files:
```bash
# Config files should be readable by influxdb user (uid 1000)
chown 1000:1000 /host/path/influxdb.conf
chmod 644 /host/path/influxdb.conf
```
### Data persistence issues
#### Problem: Data lost when container restarts
**Symptoms:**
- Databases and data disappear after container restart
- Cluster state not preserved
**Solution:**
Mount data directories as volumes:
```bash
# For meta nodes
-v influxdb-meta-0-data:/var/lib/influxdb
# For data nodes
-v influxdb-data-0-data:/var/lib/influxdb
```
### Resource and performance issues
#### Problem: Containers running out of memory
**Symptoms:**
- Containers being killed by Docker
- OOMKilled status in `docker ps`
**Solution:**
1. Increase memory limits:
```bash
--memory=4g --memory-swap=8g
```
2. Monitor memory usage:
```bash
docker stats influxdb-data-0
```
3. Optimize InfluxDB configuration for available resources.
#### Problem: Poor performance in containerized environment
**Solution:**
1. Ensure adequate CPU and memory allocation
2. Use appropriate Docker storage drivers
3. Consider host networking for high-throughput scenarios:
```bash
--network=host
```
## Debugging commands
### Check container logs
```bash
# View container logs
docker logs influxdb-meta-0
docker logs influxdb-data-0
# Follow logs in real-time
docker logs -f influxdb-meta-0
```
### Verify cluster status
```bash
# Check cluster status from any meta node
docker exec influxdb-meta-0 influxd-ctl show
# Check individual node status
docker exec influxdb-meta-0 influxd-ctl show-shards
```
### Network troubleshooting
```bash
# Test connectivity between containers
docker exec influxdb-meta-0 ping influxdb-data-0
docker exec influxdb-meta-0 telnet influxdb-data-0 8088
# Check which ports are listening
docker exec influxdb-meta-0 netstat -tlnp
```
### Configuration verification
```bash
# Check effective configuration
docker exec influxdb-meta-0 cat /etc/influxdb/influxdb-meta.conf
docker exec influxdb-data-0 cat /etc/influxdb/influxdb.conf
# Verify environment variables
docker exec influxdb-meta-0 env | grep INFLUXDB
```
## Best practices for Docker deployments
1. **Use specific image tags** instead of `latest` for production deployments
2. **Implement health checks** to monitor container status
3. **Use Docker Compose** for complex multi-container setups
4. **Mount volumes** for data persistence
5. **Set resource limits** to prevent resource exhaustion
6. **Use secrets management** for license keys in production
7. **Implement proper logging** and monitoring
8. **Back up data volumes** regularly
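As an example of item 2, a health check can be declared for a data node in Compose. This is a sketch, assuming the v1 `/ping` endpoint is reachable on port 8086 inside the container and `curl` is available in the image:

```yaml
# Hypothetical healthcheck fragment for a data node service.
services:
  influxdb-data-0:
    image: influxdb:data
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:8086/ping || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3
```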
## Getting additional help
If you continue to experience issues:
1. Check the [general troubleshooting guide](/enterprise_influxdb/v1/troubleshooting/)
2. Review [InfluxDB Enterprise logs](/enterprise_influxdb/v1/administration/monitor/logs/)
3. Contact [InfluxData support](https://support.influxdata.com/) with:
- Docker version and configuration
- Container logs
- Cluster status output
- Network configuration details

View File

@ -365,6 +365,10 @@ the cluster._
{{% /expand %}}
{{< /expand-wrapper >}}
## Docker installation
For Docker-based installations, see [Install and run InfluxDB v1 Enterprise with Docker](/enterprise_influxdb/v1/introduction/installation/docker/) for complete instructions on setting up meta nodes using Docker images.
After your meta nodes are part of your cluster,
[install data nodes](/enterprise_influxdb/v1/introduction/installation/data_node_installation/).

View File

@ -475,7 +475,7 @@ sudo systemctl start influxdb
```
If you do not see the expected output, the process is either not launching or is exiting prematurely.
Check the [logs](/enterprise_influxdb/v1/administration/logs/)
Check the [logs](/enterprise_influxdb/v1/administration/monitor/logs/)
for error messages and verify the previous setup steps are complete.
5. **Use `influxd-ctl` to add the data process to the InfluxDB Enterprise "cluster"**:
@ -542,9 +542,7 @@ For Chronograf installation instructions, see
[Install Chronograf](/chronograf/v1/introduction/installation/).
## Next steps
- Add more users if necessary.
See [Manage users and permissions](/enterprise_influxdb/v1/administration/manage/users-and-permissions/)
for more information.
- [Enable TLS](/enterprise_influxdb/v1/guides/enable-tls/).
- [Write data with the InfluxDB API](/enterprise_influxdb/v1/guides/write_data/).
- [Query data with the InfluxDB API](/enterprise_influxdb/v1/guides/query_data/).
- For information about adding users, see [Manage users and permissions](/enterprise_influxdb/v1/administration/manage/users-and-permissions/)
- [Enable TLS](/enterprise_influxdb/v1/guides/enable-tls/)
- [Write data with the InfluxDB API](/enterprise_influxdb/v1/guides/write_data/)
- [Query data with the InfluxDB API](/enterprise_influxdb/v1/guides/query_data/)

View File

@ -81,7 +81,7 @@ influxd-ctl backup /path/to/backup-dir
### Perform a full backup
```sh
influxd-ctl backup -full /path/to/backup-dir
influxd-ctl backup -strategy full /path/to/backup-dir
```
### Estimate the size of a backup

View File

@ -1267,3 +1267,106 @@ This is small tab 2.4 content.
{{% /tab-content %}}
{{< /tabs-wrapper >}}
## Group key demo
Used to demonstrate Flux group keys.
{{< tabs-wrapper >}}
{{% tabs "small" %}}
[Input](#)
[Output](#)
<span class="tab-view-output">Click to view output</span>
{{% /tabs %}}
{{% tab-content %}}
The following data is output from the last `filter()` and piped forward into `group()`:
> [!Note]
> `_start` and `_stop` columns have been omitted.
{{% flux/group-key "[_measurement=home, room=Kitchen, _field=hum]" true %}}
| _time | _measurement | room | _field | _value |
| :------------------- | :----------- | :---------- | :----- | :----- |
| 2022-01-01T08:00:00Z | home | Kitchen | hum | 35.9 |
| 2022-01-01T09:00:00Z | home | Kitchen | hum | 36.2 |
| 2022-01-01T10:00:00Z | home | Kitchen | hum | 36.1 |
{{% flux/group-key "[_measurement=home, room=Living Room, _field=hum]" true %}}
| _time | _measurement | room | _field | _value |
| :------------------- | :----------- | :---------- | :----- | :----- |
| 2022-01-01T08:00:00Z | home | Living Room | hum | 35.9 |
| 2022-01-01T09:00:00Z | home | Living Room | hum | 35.9 |
| 2022-01-01T10:00:00Z | home | Living Room | hum | 36 |
{{% flux/group-key "[_measurement=home, room=Kitchen, _field=temp]" true %}}
| _time | _measurement | room | _field | _value |
| :------------------- | :----------- | :---------- | :----- | :----- |
| 2022-01-01T08:00:00Z | home | Kitchen | temp | 21 |
| 2022-01-01T09:00:00Z | home | Kitchen | temp | 23 |
| 2022-01-01T10:00:00Z | home | Kitchen | temp | 22.7 |
{{% flux/group-key "[_measurement=home, room=Living Room, _field=temp]" true %}}
| _time | _measurement | room | _field | _value |
| :------------------- | :----------- | :---------- | :----- | :----- |
| 2022-01-01T08:00:00Z | home | Living Room | temp | 21.1 |
| 2022-01-01T09:00:00Z | home | Living Room | temp | 21.4 |
| 2022-01-01T10:00:00Z | home | Living Room | temp | 21.8 |
{{% /tab-content %}}
{{% tab-content %}}
When grouped by `_field`, all rows with the `temp` field will be in one table
and all the rows with the `hum` field will be in another.
`_measurement` and `room` columns no longer affect how rows are grouped.
> [!Note]
> `_start` and `_stop` columns have been omitted.
{{% flux/group-key "[_field=hum]" true %}}
| _time | _measurement | room | _field | _value |
| :------------------- | :----------- | :---------- | :----- | :----- |
| 2022-01-01T08:00:00Z | home | Kitchen | hum | 35.9 |
| 2022-01-01T09:00:00Z | home | Kitchen | hum | 36.2 |
| 2022-01-01T10:00:00Z | home | Kitchen | hum | 36.1 |
| 2022-01-01T08:00:00Z | home | Living Room | hum | 35.9 |
| 2022-01-01T09:00:00Z | home | Living Room | hum | 35.9 |
| 2022-01-01T10:00:00Z | home | Living Room | hum | 36 |
{{% flux/group-key "[_field=temp]" true %}}
| _time | _measurement | room | _field | _value |
| :------------------- | :----------- | :---------- | :----- | :----- |
| 2022-01-01T08:00:00Z | home | Kitchen | temp | 21 |
| 2022-01-01T09:00:00Z | home | Kitchen | temp | 23 |
| 2022-01-01T10:00:00Z | home | Kitchen | temp | 22.7 |
| 2022-01-01T08:00:00Z | home | Living Room | temp | 21.1 |
| 2022-01-01T09:00:00Z | home | Living Room | temp | 21.4 |
| 2022-01-01T10:00:00Z | home | Living Room | temp | 21.8 |
{{% /tab-content %}}
{{< /tabs-wrapper >}}
## datetime/current-timestamp shortcode
### Default usage
{{< datetime/current-timestamp >}}
### Format YYYY-MM-DD HH:mm:ss
{{< datetime/current-timestamp format="YYYY-MM-DD HH:mm:ss" >}}
### Format with UTC timezone
{{< datetime/current-timestamp format="YYYY-MM-DD HH:mm:ss" timezone="UTC" >}}
### Format with America/New_York timezone
{{< datetime/current-timestamp format="YYYY-MM-DD HH:mm:ss" timezone="America/New_York" >}}

View File

@ -1,19 +1,9 @@
---
title: Get started with InfluxDB OSS
description: Get started with InfluxDB OSS.
# v2.0 alias below routes old external links here temporarily.
description: Get started with InfluxDB OSS. Learn how to create databases, write data, and query your time series data.
aliases:
- /influxdb/v1/introduction/getting_started/
- /influxdb/v1/introduction/getting-started/
- /influxdb/v2/introduction/getting-started/
- /influxdb/v2/introduction/getting-started/
- /influxdb/v2/introduction/getting_started/
- /influxdb/v2/introduction/getting_started/
- /influxdb/v2/introduction/getting_started/
- /influxdb/v2/introduction/getting_started/
- /influxdb/v2/introduction/getting_started/
- /influxdb/v2/introduction/getting-started/
menu:
influxdb_v1:
name: Get started with InfluxDB
@ -23,21 +13,29 @@ alt_links:
v2: /influxdb/v2/get-started/
---
With InfluxDB open source (OSS) [installed](/influxdb/v1/introduction/installation), you're ready to start doing some awesome things.
In this section we'll use the `influx` [command line interface](/influxdb/v1/tools/shell/) (CLI), which is included in all
InfluxDB packages and is a lightweight and simple way to interact with the database.
The CLI communicates with InfluxDB directly by making requests to the InfluxDB API over port `8086` by default.
With InfluxDB open source (OSS) [installed](/influxdb/v1/introduction/installation), you're ready to start working with time series data.
This guide uses the `influx` [command line interface](/influxdb/v1/tools/shell/) (CLI), which is included with InfluxDB
and provides direct access to the database.
The CLI communicates with InfluxDB through the HTTP API on port `8086`.
> **Note:** The database can also be used by making raw HTTP requests.
See [Writing Data](/influxdb/v1/guides/writing_data/) and [Querying Data](/influxdb/v1/guides/querying_data/)
for examples with the `curl` application.
> [!Tip]
> **Docker users**: Access the CLI from your container using:
> ```bash
> docker exec -it <container-name> influx
> ```
> [!Note]
> #### Directly access the API
> You can also interact with InfluxDB using the HTTP API directly.
> See [Writing Data](/influxdb/v1/guides/writing_data/) and [Querying Data](/influxdb/v1/guides/querying_data/) for examples using `curl`.
## Creating a database
If you've installed InfluxDB locally, the `influx` command should be available via the command line.
Executing `influx` will start the CLI and automatically connect to the local InfluxDB instance
(assuming you have already started the server with `service influxdb start` or by running `influxd` directly).
The output should look like this:
After installing InfluxDB locally, the `influx` command is available from your terminal.
Running `influx` starts the CLI and connects to your local InfluxDB instance
(ensure InfluxDB is running with `service influxdb start` or `influxd`).
To start the CLI and connect to the local InfluxDB instance, run the following command.
The [`-precision` argument](/influxdb/v1/tools/shell/#influx-arguments) specifies the format and precision of any returned timestamps.
```bash
$ influx -precision rfc3339
@ -46,15 +44,12 @@ InfluxDB shell {{< latest-patch >}}
>
```
> **Notes:**
>
* The InfluxDB API runs on port `8086` by default.
Therefore, `influx` will connect to port `8086` and `localhost` by default.
If you need to alter these defaults, run `influx --help`.
* The [`-precision` argument](/influxdb/v1/tools/shell/#influx-arguments) specifies the format/precision of any returned timestamps.
In the example above, `rfc3339` tells InfluxDB to return timestamps in [RFC3339 format](https://www.ietf.org/rfc/rfc3339.txt) (`YYYY-MM-DDTHH:MM:SS.nnnnnnnnnZ`).
The `influx` CLI connects to port `localhost:8086` (the default).
The timestamp precision `rfc3339` tells InfluxDB to return timestamps in [RFC3339 format](https://www.ietf.org/rfc/rfc3339.txt) (`YYYY-MM-DDTHH:MM:SS.nnnnnnnnnZ`).
The command line is now ready to take input in the form of the Influx Query Language (a.k.a InfluxQL) statements.
To view available options for customizing CLI connection parameters or other settings, run `influx --help` in your terminal.
The command line is ready to take input in the form of the Influx Query Language (InfluxQL) statements.
To exit the InfluxQL shell, type `exit` and hit return.
A fresh install of InfluxDB has no databases (apart from the system `_internal`),
@ -75,7 +70,6 @@ Throughout this guide, we'll use the database name `mydb`:
> **Note:** After hitting enter, a new prompt appears and nothing else is displayed.
In the CLI, this means the statement was executed and there were no errors to display.
There will always be an error displayed if something went wrong.
No news is good news!
Now that the `mydb` database is created, we'll use the `SHOW DATABASES` statement
to display all existing databases:
@ -204,6 +198,30 @@ including support for Go-style regex. For example:
> SELECT * FROM "cpu_load_short" WHERE "value" > 0.9
```
## Using the HTTP API
You can also interact with InfluxDB using HTTP requests with tools like `curl`:
### Create a database
```bash
curl -G http://localhost:8086/query --data-urlencode "q=CREATE DATABASE mydb"
```
### Write data
```bash
curl -i -XPOST 'http://localhost:8086/write?db=mydb' \
--data-binary 'cpu,host=serverA,region=us_west value=0.64'
```
### Query data
```bash
curl -G 'http://localhost:8086/query?pretty=true' \
--data-urlencode "db=mydb" \
--data-urlencode "q=SELECT * FROM cpu"
```
## Next steps
This is all you need to know to write data into InfluxDB and query it back.
To learn more about the InfluxDB write protocol,
check out the guide on [Writing Data](/influxdb/v1/guides/writing_data/).

View File

@ -24,6 +24,7 @@ By default, InfluxDB uses the following network ports:
- TCP port `8086` is available for client-server communication using the InfluxDB API.
- TCP port `8088` is available for the RPC service to perform back up and restore operations.
- TCP port `2003` is available for the Graphite protocol (when enabled).
In addition to the ports above, InfluxDB also offers multiple plugins that may
require [custom ports](/influxdb/v1/administration/ports/).
@ -51,10 +52,11 @@ you may want to check out our
[SLES & openSUSE](#)
[FreeBSD/PC-BSD](#)
[macOS](#)
[Docker](#)
{{% /tabs %}}
{{% tab-content %}}
For instructions on how to install the Debian package from a file,
see the
[downloads page](https://influxdata.com/downloads/).
Debian and Ubuntu users can install the latest stable version of InfluxDB using the
@ -194,6 +196,28 @@ InfluxDB v{{< latest-patch version="1.8" >}} (git: unknown unknown)
{{% /note %}}
{{% /tab-content %}}
{{% tab-content %}}
Use Docker to run InfluxDB v1 in a container.
For comprehensive Docker installation instructions, configuration options, and initialization features, see:
**[Install and run with Docker](/influxdb/v1/introduction/install/docker/)**
Quick start:
```bash
# Pull the latest InfluxDB v1.x image
docker pull influxdb:{{< latest-patch version="1" >}}
# Start InfluxDB with persistent storage
docker run -p 8086:8086 \
-v $PWD/data:/var/lib/influxdb \
influxdb:{{< latest-patch version="1" >}}
```
{{% /tab-content %}}
{{< /tabs-wrapper >}}
@ -274,6 +298,12 @@ For example:
InfluxDB first checks for the `-config` option and then for the environment
variable.
### Configuring InfluxDB with Docker
For detailed Docker configuration instructions including environment variables, configuration files, initialization options, and examples, see:
**[Install and run with Docker](/influxdb/v1/introduction/install/docker/)**
See the [Configuration](/influxdb/v1/administration/config/) documentation for more information.
### Data and WAL directory permissions

View File

@ -0,0 +1,157 @@
---
title: Install and run InfluxDB using Docker
description: >
Install and run InfluxDB OSS v1.x using Docker. Configure and operate InfluxDB in a Docker container.
menu:
influxdb_v1:
name: Use Docker
weight: 60
parent: Install InfluxDB
related:
- /influxdb/v1/introduction/install/, Install InfluxDB OSS v1
- /influxdb/v1/introduction/get-started/, Get started with InfluxDB OSS v1
- /influxdb/v1/administration/authentication_and_authorization/, Authentication and authorization in InfluxDB OSS v1
- /influxdb/v1/guides/write_data/, Write data to InfluxDB OSS v1
- /influxdb/v1/guides/query_data/, Query data in InfluxDB OSS v1
- /influxdb/v1/administration/config/, Configure InfluxDB OSS v1
alt_links:
core: /influxdb3/core/install/
v2: /influxdb/v2/install/use-docker-compose/
---
Install and run InfluxDB OSS v1.x using Docker containers.
This guide covers Docker installation, configuration, and initialization options.
## Install and run InfluxDB
### Pull the InfluxDB v1.x image
```bash
docker pull influxdb:{{< latest-patch version="1" >}}
```
### Start InfluxDB
Start a basic InfluxDB container with persistent storage:
```bash
docker run -p 8086:8086 \
-v $PWD/data:/var/lib/influxdb \
influxdb:{{< latest-patch version="1" >}}
```
InfluxDB is now running and available at <http://localhost:8086>.
## Configure InfluxDB
### Using environment variables
Configure InfluxDB settings using environment variables:
```bash
docker run -p 8086:8086 \
-v $PWD/data:/var/lib/influxdb \
-e INFLUXDB_REPORTING_DISABLED=true \
-e INFLUXDB_HTTP_AUTH_ENABLED=true \
-e INFLUXDB_HTTP_LOG_ENABLED=true \
influxdb:{{< latest-patch version="1" >}}
```
### Using a configuration file
Generate a default configuration file:
```bash
docker run --rm influxdb:{{< latest-patch version="1" >}} influxd config > influxdb.conf
```
Start InfluxDB with your custom configuration:
```bash
docker run -p 8086:8086 \
-v $PWD/influxdb.conf:/etc/influxdb/influxdb.conf:ro \
-v $PWD/data:/var/lib/influxdb \
influxdb:{{< latest-patch version="1" >}}
```
## Initialize InfluxDB
### Automatic initialization (for development)
> [!Warning]
> Automatic initialization with InfluxDB v1 is not recommended for production.
> Use this approach only for development and testing.
Automatically create a database and admin user on first startup:
```bash
docker run -p 8086:8086 \
-v $PWD/data:/var/lib/influxdb \
-e INFLUXDB_DB=mydb \
-e INFLUXDB_HTTP_AUTH_ENABLED=true \
-e INFLUXDB_ADMIN_USER=admin \
-e INFLUXDB_ADMIN_PASSWORD=supersecretpassword \
influxdb:{{< latest-patch version="1" >}}
```
Environment variables for user creation:
- `INFLUXDB_USER`: Create a user with no privileges
- `INFLUXDB_USER_PASSWORD`: Password for the user
- `INFLUXDB_READ_USER`: Create a user who can read from `INFLUXDB_DB`
- `INFLUXDB_READ_USER_PASSWORD`: Password for the read user
- `INFLUXDB_WRITE_USER`: Create a user who can write to `INFLUXDB_DB`
- `INFLUXDB_WRITE_USER_PASSWORD`: Password for the write user
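For example, the read and write user variables can be combined with `INFLUXDB_DB` in a single `docker run` (a sketch; user names and passwords are placeholders):

```bash
docker run -p 8086:8086 \
-v $PWD/data:/var/lib/influxdb \
-e INFLUXDB_DB=mydb \
-e INFLUXDB_HTTP_AUTH_ENABLED=true \
-e INFLUXDB_ADMIN_USER=admin \
-e INFLUXDB_ADMIN_PASSWORD=supersecretpassword \
-e INFLUXDB_WRITE_USER=telegraf \
-e INFLUXDB_WRITE_USER_PASSWORD=telegraf-secret \
-e INFLUXDB_READ_USER=grafana \
-e INFLUXDB_READ_USER_PASSWORD=grafana-secret \
influxdb:{{< latest-patch version="1" >}}
```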
### Custom initialization scripts
InfluxDB v1.x Docker containers support custom initialization scripts for testing scenarios:
Create an initialization script (`init-scripts/setup.iql`):
```sql
CREATE DATABASE sensors;
CREATE DATABASE logs;
CREATE USER "telegraf" WITH PASSWORD 'secret123';
GRANT WRITE ON "sensors" TO "telegraf";
CREATE USER "grafana" WITH PASSWORD 'secret456';
GRANT READ ON "sensors" TO "grafana";
GRANT READ ON "logs" TO "grafana";
CREATE RETENTION POLICY "one_week" ON "sensors" DURATION 1w REPLICATION 1 DEFAULT;
```
Run with initialization scripts:
```bash
docker run -p 8086:8086 \
-v $PWD/data:/var/lib/influxdb \
-v $PWD/init-scripts:/docker-entrypoint-initdb.d \
influxdb:{{< latest-patch version="1" >}}
```
Supported script types:
- Shell scripts (`.sh`)
- InfluxDB query language files (`.iql`)
> [!Important]
> Initialization scripts only run on first startup when the data directory is empty.
> Scripts execute in alphabetical order based on filename.
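Because scripts run alphabetically, a numeric prefix is an easy way to control execution order (the file names here are illustrative):

```shell
# Prefix init scripts with numbers to control execution order.
mkdir -p init-scripts
touch init-scripts/01-create-databases.iql \
      init-scripts/02-create-users.iql \
      init-scripts/03-seed-data.sh
ls init-scripts
```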
## Access the InfluxDB CLI
To access the InfluxDB command line interface from within the Docker container:
```bash
docker exec -it <container-name> influx
```
Replace `<container-name>` with your InfluxDB container name or ID.
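To run a single InfluxQL statement without opening an interactive session, you can pass `-execute` to the CLI (a sketch using the same placeholder):

```bash
docker exec <container-name> influx -execute 'SHOW DATABASES'
```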
## Next steps
Once you have InfluxDB running in Docker, see the [Get started guide](/influxdb/v1/introduction/get-started/) to:
- Create databases
- Write and query data
- Learn InfluxQL basics

View File

@ -34,8 +34,8 @@ and visit the `/docs` endpoint in a browser ([localhost:8086/docs](http://localh
## InfluxDB v1 compatibility API documentation
The InfluxDB v2 API includes [InfluxDB v1 compatibility endpoints and authentication](/influxdb/v2/api-guide/influxdb-1x/)
that work with InfluxDB 1.x client libraries and third-party integrations like
[Grafana](https://grafana.com) and others.
<a class="btn" href="/influxdb/v2/api/v2/#tag/Compatibility-endpoints">View full v1 compatibility API documentation</a>

View File

@ -14,4 +14,5 @@ source: /shared/influxdb-v2/api-guide/api_intro.md
---
<!-- The content for this file is located at
// SOURCE content/shared/influxdb-v2/api-guide/api_intro.md
-->

View File

@ -18,4 +18,5 @@ source: /shared/influxdb-v2/api-guide/influxdb-1x/_index.md
---
<!-- The content for this file is located at
// SOURCE content/shared/influxdb-v2/api-guide/influxdb-1x/_index.md
-->

View File

@ -14,17 +14,15 @@ list_code_example: |
<span class="api get">GET</span> http://localhost:8086/query
</pre>
related:
- /influxdb/v2/query-data/influxql
aliases:
- /influxdb/v2/reference/api/influxdb-1x/query/
---
The `/query` 1.x compatibility endpoint queries InfluxDB {{< current-version >}} using **InfluxQL**.
Send an InfluxQL query in an HTTP `GET` or `POST` request to query data from the `/query` endpoint.
<pre>
<span class="api get">GET</span> http://localhost:8086/query
</pre>
The `/query` compatibility endpoint uses the **database** and **retention policy**
specified in the query request to map the request to an InfluxDB bucket.
@ -32,31 +30,32 @@ For more information, see [Database and retention policy mapping](/influxdb/v2/r
{{% show-in "cloud,cloud-serverless" %}}
> [!Note]
> If you have an existing bucket that doesn't follow the **database/retention-policy** naming convention,
> you **must** [manually create a database and retention policy mapping](/influxdb/v2/query-data/influxql/dbrp/#create-dbrp-mappings)
> to query that bucket with the `/query` compatibility API.
{{% /show-in %}}
## Authentication
Use one of the following authentication methods:
- the 2.x `Authorization: Token` scheme in the header
- the v1-compatible `u` and `p` query string parameters
- the v1-compatible `Basic` authentication scheme in the header

For more information, see [Authentication for the 1.x compatibility API](/influxdb/v2/api-guide/influxdb-1x/).
## Query string parameters
### u
(Optional) The 1.x **username** to authenticate the request.
If you provide an API token as the password, `u` is required, but can be any value.
_See [query string authentication](/influxdb/v2/reference/api/influxdb-1x/#query-string-authentication)._
### p
(Optional) The 1.x **password** or the 2.x API token to authenticate the request.
_See [query string authentication](/influxdb/v2/reference/api/influxdb-1x/#query-string-authentication)._
### db
@ -94,61 +93,65 @@ The following precisions are available:
- [Return query results with millisecond Unix timestamps](#return-query-results-with-millisecond-unix-timestamps)
- [Execute InfluxQL queries from a file](#execute-influxql-queries-from-a-file)
{{% code-placeholders "INFLUX_USERNAME|INFLUX_PASSWORD_OR_TOKEN|API_TOKEN" %}}
##### Query using basic authentication
The following example:
- sends a `GET` request to the `/query` endpoint
- uses the `Authorization` header with the `Basic` scheme (compatible with InfluxDB 1.x) to provide username and password credentials
- uses the default retention policy for the database
{{% show-in "v2" %}}
<!--pytest.mark.skip-->

```sh
##############################################################################
# Use Basic authentication with an
# InfluxDB v1-compatible username and password
# to query the InfluxDB 1.x compatibility API.
#
# INFLUX_USERNAME: your v1-compatible username.
# INFLUX_PASSWORD_OR_TOKEN: your API token or v1-compatible password.
##############################################################################
curl --get "http://{{< influxdb/host >}}/query" \
--user "INFLUX_USERNAME":"INFLUX_PASSWORD_OR_TOKEN" \
--data-urlencode "db=BUCKET_NAME" \
--data-urlencode "q=SELECT * FROM cpu_usage"
```
{{% /show-in %}}
{{% show-in "cloud,cloud-serverless" %}}
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[curl](#curl)
[Node.js](#nodejs)
{{% /code-tabs %}}
{{% code-tab-content %}}
<!--pytest.mark.skip-->
```sh
{{% get-shared-text "api/v1-compat/auth/cloud/basic-auth.sh" %}}
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
{{% get-shared-text "api/v1-compat/auth/cloud/basic-auth.js" %}}
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
{{% /show-in %}}
##### Query using an HTTP POST request
```bash
curl \
--request POST \
"http://{{< influxdb/host >}}/query?db=DATABASE_NAME&rp=RETENTION_POLICY" \
--user "INFLUX_USERNAME":"INFLUX_PASSWORD_OR_TOKEN" \
--header "Content-Type: application/vnd.influxql" \
--data "SELECT * FROM cpu_usage WHERE time > now() - 1h"
```
##### Query a non-default retention policy
The following example:
- sends a `GET` request to the `/query` endpoint
- uses the `Authorization` header with the `Token` scheme (compatible with InfluxDB 2.x) to provide the API token
- queries a custom retention policy mapped for the database
<!--test:setup
```sh
service influxdb start && \
@ -162,43 +165,56 @@ influx setup \
-->
```sh
curl --get http://{{< influxdb/host >}}/query \
--header "Authorization: Token API_TOKEN" \
--data-urlencode "db=DATABASE_NAME" \
--data-urlencode "rp=RETENTION_POLICY_NAME" \
--data-urlencode "q=SELECT used_percent FROM mem WHERE host=host1"
```
##### Execute multiple queries
```sh
curl --get http://{{< influxdb/host >}}/query \
--header "Authorization: Token API_TOKEN" \
--data-urlencode "db=DATABASE_NAME" \
--data-urlencode "q=SELECT * FROM mem WHERE host=host1;SELECT mean(used_percent) FROM mem WHERE host=host1 GROUP BY time(10m)"
```
##### Return query results with millisecond Unix timestamps
```sh
curl --get http://{{< influxdb/host >}}/query \
--header "Authorization: Token API_TOKEN" \
--data-urlencode "db=DATABASE_NAME" \
--data-urlencode "rp=RETENTION_POLICY_NAME" \
--data-urlencode "q=SELECT used_percent FROM mem WHERE host=host1" \
--data-urlencode "epoch=ms"
```
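If you need to convert a millisecond epoch value from the response back to RFC3339, divide by 1,000 and format the result. A quick sketch with GNU `date` (the example value is arbitrary):

```shell
# Convert a millisecond Unix timestamp (epoch=ms) to RFC3339 (GNU date).
ms=1704067200000
date -u -d "@$((ms / 1000))" +%Y-%m-%dT%H:%M:%SZ
# → 2024-01-01T00:00:00Z
```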
##### Execute InfluxQL queries from a file
```sh
curl --get http://{{< influxdb/host >}}/query \
--header "Authorization: Token API_TOKEN" \
--data-urlencode "db=DATABASE_NAME" \
--data-urlencode "q@path/to/influxql.txt"
```
##### Return a gzip-compressed response
```sh
curl --get http://{{< influxdb/host >}}/query \
--header 'Accept-Encoding: gzip' \
--header "Authorization: Token API_TOKEN" \
--data-urlencode "db=DATABASE_NAME" \
--data-urlencode "q=SELECT used_percent FROM mem WHERE host=host1"
```
{{% /code-placeholders %}}
Replace the following:
- {{% code-placeholder-key %}}`API_TOKEN`{{% /code-placeholder-key %}}: your InfluxDB [API token](/influxdb/v2/admin/tokens/)
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to query.
In InfluxDB 2.x, databases and retention policies map to [buckets](/influxdb/v2/admin/buckets/).
- {{% code-placeholder-key %}}`RETENTION_POLICY_NAME`{{% /code-placeholder-key %}}: the name of the retention policy to query.
In InfluxDB 2.x, databases and retention policies map to [buckets](/influxdb/v2/admin/buckets/).
_For more information about the database and retention policy mapping, see [Database and retention policy mapping](/influxdb/v2/reference/api/influxdb-1x/dbrp)._

View File

@ -12,4 +12,5 @@ source: /shared/influxdb-v2/api-guide/tutorials/_index.md
---
<!-- The content for this file is located at
// SOURCE content/shared/influxdb-v2/api-guide/tutorials/_index.md
-->

View File

@ -328,6 +328,6 @@ which requires authentication.
**For these external clients to work with InfluxDB {{< current-version >}}:**
1. [Manually create a v1-compatible authorization](/influxdb/v2/upgrade/v1-to-v2/manual-upgrade/#create-a-1x-compatible-authorization).
2. Update the client configuration to use the username and password associated
with your v1-compatible authorization.

View File

@ -3,7 +3,7 @@ title: Manually upgrade from InfluxDB 1.x to 2.7
list_title: Manually upgrade from 1.x to 2.7
description: >
To manually upgrade from InfluxDB 1.x to InfluxDB 2.7, migrate data, create
v1-compatible authorizations, and create database and retention policy
(DBRP) mappings.
menu:
influxdb_v2:

View File

@ -13,6 +13,8 @@ related:
- /influxdb/v2/reference/cli/influx/config/
- /influxdb/v2/reference/cli/influx/
- /influxdb/v2/admin/tokens/
alt_links:
v1: /influxdb/v1/introduction/install/docker/
---
Use Docker Compose to install and set up InfluxDB v2, the time series platform

View File

@ -11,4 +11,5 @@ source: /shared/influxdb-v2/query-data/execute-queries/influx-api.md
---
<!-- The content for this file is located at
// SOURCE content/shared/influxdb-v2/query-data/execute-queries/influx-api.md
-->

View File

@ -1,8 +1,8 @@
---
title: Query data with InfluxQL
description: >
Use the InfluxDB v1 `/query` compatibility endpoint
to query data in InfluxDB v2 using InfluxQL.
weight: 102
influxdb/v2/tags: [influxql, query]
menu:

View File

@ -26,7 +26,7 @@ The URL in the examples depends on the version and location of your InfluxDB {{<
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[cURL](#curl)
[Node.js](#nodejs)
{{% /code-tabs %}}
{{% code-tab-content %}}

View File

@ -10,7 +10,8 @@ menu:
parent: Write data
influxdb/v2/tags: [write, line protocol, errors]
related:
- /influxdb/v2/api/v2/#operation/PostLegacyWrite, InfluxDB API /write endpoint
- /influxdb/v2/api/v2/#operation/PostWrite, InfluxDB API /api/v2/write endpoint
- /influxdb/v2/reference/internals
- /influxdb/v2/reference/cli/influx/write
source: /shared/influxdb-v2/write-data/troubleshoot.md

View File

@ -33,17 +33,19 @@ or the [Management HTTP API](/influxdb3/cloud-dedicated/api/management/)
to delete a database from your {{< product-name omit=" Clustered" >}} cluster.
> [!Warning]
>
> #### Deleting a database cannot be undone
>
> Once a database is deleted, data stored in that database cannot be recovered.
>
> #### Wait before writing to a new database with the same name
>
> After deleting a database from your {{% product-name omit=" Clustered" %}}
> cluster, you can reuse the name to create a new database, but **wait two to
> three minutes** after deleting the previous database before writing to the new
> database to allow write caches to clear.
>
> #### Tokens still grant access to databases with the same name
>
> [Database tokens](/influxdb3/cloud-dedicated/admin/tokens/database/) are associated to
> databases by name. If you create a new database with the same name, tokens
> that granted access to the deleted database will also grant access to the new
> database.
{{< tabs-wrapper >}}
{{% tabs %}}

View File

@ -0,0 +1,58 @@
---
title: Rename a database
description: >
Use the [`influxctl database rename` command](/influxdb3/cloud-dedicated/reference/cli/influxctl/database/rename/)
to rename a database in your {{< product-name omit=" Cluster" >}} cluster.
menu:
influxdb3_cloud_dedicated:
parent: Manage databases
weight: 202
list_code_example: |
##### CLI
```sh
influxctl database rename <DATABASE_NAME> <NEW_DATABASE_NAME>
```
related:
- /influxdb3/cloud-dedicated/reference/cli/influxctl/database/rename/
- /influxdb3/cloud-dedicated/admin/tokens/database/create/
---
Use the [`influxctl database rename` command](/influxdb3/cloud-dedicated/reference/cli/influxctl/database/rename/)
to rename a database in your {{< product-name omit=" Cluster" >}} cluster.
> [!Note]
> Renaming a database does not change the database ID, modify data in the database,
> or update [database tokens](/influxdb3/cloud-dedicated/admin/tokens/database/).
> After renaming a database, any existing database tokens will stop working and you
> must create new tokens with permissions for the renamed database.
## Rename a database using the influxctl CLI
{{% code-placeholders "DATABASE_NAME|NEW_DATABASE_NAME" %}}
```sh
influxctl database rename DATABASE_NAME NEW_DATABASE_NAME
```
{{% /code-placeholders %}}
Replace the following:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: Current name of the database to rename
- {{% code-placeholder-key %}}`NEW_DATABASE_NAME`{{% /code-placeholder-key %}}: New name for the database
## Update database tokens after renaming
After renaming a database, existing database tokens will no longer work because
they reference the old database name. Do the following:
1. [Create new database tokens](/influxdb3/cloud-dedicated/admin/tokens/database/create/)
with permissions for the renamed database.
2. Update your applications and clients to use the new tokens.
3. [Delete the old database tokens](/influxdb3/cloud-dedicated/admin/tokens/database/delete/)
that reference the old database name.
{{% note %}}
#### Renamed database retains its ID
The database ID remains the same after renaming. When you list databases,
you'll see the new name associated with the original database ID.
{{% /note %}}

View File

@ -0,0 +1,70 @@
---
title: Undelete a database
description: >
Use the [`influxctl database undelete` command](/influxdb3/cloud-dedicated/reference/cli/influxctl/database/undelete/)
to restore a previously deleted database in your {{< product-name omit=" Cluster" >}} cluster.
menu:
influxdb3_cloud_dedicated:
parent: Manage databases
weight: 204
list_code_example: |
```sh
influxctl database undelete <DATABASE_NAME>
```
related:
- /influxdb3/cloud-dedicated/reference/cli/influxctl/database/undelete/
- /influxdb3/cloud-dedicated/admin/databases/delete/
- /influxdb3/cloud-dedicated/admin/tokens/database/create/
---
Use the [`influxctl database undelete` command](/influxdb3/cloud-dedicated/reference/cli/influxctl/database/undelete/)
to restore a previously deleted database in your {{< product-name omit=" Cluster" >}} cluster.
> [!Important]
> To undelete a database:
>
> - The database name must match the name of the deleted database.
> - A new database with the same name cannot already exist.
> - You must have appropriate permissions to manage databases.
When you undelete a database, it is restored with the same retention period,
table limits, and column limits as when it was deleted.
> [!Warning]
> Databases can only be undeleted for
> {{% show-in "cloud-dedicated" %}}approximately 14 days{{% /show-in %}}{{% show-in "clustered" %}}a configurable "hard-delete" grace period{{% /show-in %}}
> after they are deleted.
> After this grace period, all Parquet files associated with the deleted database
> are permanently removed and the database cannot be undeleted.
## Undelete a database using the influxctl CLI
{{% code-placeholders "DATABASE_NAME" %}}
```sh
influxctl database undelete DATABASE_NAME
```
{{% /code-placeholders %}}
Replace the following:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
Name of the deleted database to restore
## Recreate tokens for the database
After successfully undeleting a database:
1. **Verify the database was restored** by [listing all databases](/influxdb3/cloud-dedicated/admin/databases/list/).
2. **Create new database tokens.**
Tokens that existed before the database was deleted are not restored.
[Create new database tokens](/influxdb3/cloud-dedicated/admin/tokens/database/create/)
with appropriate permissions for the restored database.
3. **Update your applications** to use the new database tokens.
{{% note %}}
#### Undeleted databases retain their original configuration
When a database is undeleted, it retains the same database ID, retention period,
and table/column limits it had before deletion. However, database tokens are not
restored and must be recreated.
{{% /note %}}

View File

@ -0,0 +1,53 @@
---
title: Delete a table
description: >
Use the Admin UI or the [`influxctl table delete` command](/influxdb3/cloud-dedicated/reference/cli/influxctl/table/delete/)
to delete a table from a database in your {{< product-name omit=" Cluster" >}} cluster.
menu:
influxdb3_cloud_dedicated:
parent: Manage tables
weight: 203
list_code_example: |
```sh
influxctl table delete <DATABASE_NAME> <TABLE_NAME>
```
related:
- /influxdb3/cloud-dedicated/reference/cli/influxctl/table/delete/
---
Use the Admin UI or the [`influxctl table delete` command](/influxdb3/cloud-dedicated/reference/cli/influxctl/table/delete/)
to delete a table from a database in your {{< product-name omit=" Cluster" >}} cluster.
> [!Warning]
> Deleting a table is irreversible. Once a table is deleted, all data stored in
> that table is permanently removed and cannot be recovered.
Provide the following arguments:
- **Database name**: Name of the database that contains the table to delete
- **Table name**: Name of the table to delete
{{% code-placeholders "DATABASE_NAME|TABLE_NAME" %}}
```sh
influxctl table delete DATABASE_NAME TABLE_NAME
```
{{% /code-placeholders %}}
Replace the following:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: Name of the database that contains the table to delete
- {{% code-placeholder-key %}}`TABLE_NAME`{{% /code-placeholder-key %}}: Name of the table to delete
When prompted, enter `y` to confirm the deletion.
{{% note %}}
#### Wait before reusing a deleted table name
After deleting a table, wait a few minutes before attempting to create a new
table with the same name to ensure the deletion process has fully completed.
{{% product-name %}} creates tables implicitly using table names specified in
line protocol written to the databases. To prevent the deleted table from being
immediately recreated by incoming write requests, pause all write requests to
the table before deleting it.
{{% /note %}}

View File

@ -1,7 +1,8 @@
---
title: List tables
description: >
Use the Admin UI, the [`influxctl table list` command](/influxdb3/cloud-dedicated/reference/cli/influxctl/table/list/),
the [`SHOW TABLES` SQL statement](/influxdb3/cloud-dedicated/query-data/sql/explore-schema/#list-measurements-in-a-database),
or the [`SHOW MEASUREMENTS` InfluxQL statement](/influxdb3/cloud-dedicated/query-data/influxql/explore-schema/#list-measurements-in-a-database)
to list tables in a database.
menu:
@ -9,23 +10,30 @@ menu:
parent: Manage tables
weight: 201
list_code_example: |
##### CLI
```sh
influxctl table list <DATABASE_NAME>
```
##### SQL
```sql
SHOW TABLES
```
##### InfluxQL
```sql
SHOW MEASUREMENTS
```
related:
- /influxdb3/cloud-dedicated/reference/cli/influxctl/table/list/
- /influxdb3/cloud-dedicated/query-data/sql/explore-schema/
- /influxdb3/cloud-dedicated/query-data/influxql/explore-schema/
---
Use the Admin UI, the [`influxctl table list` command](/influxdb3/cloud-dedicated/reference/cli/influxctl/table/list/),
the [`SHOW TABLES` SQL statement](/influxdb3/cloud-dedicated/query-data/sql/explore-schema/#list-measurements-in-a-database),
or the [`SHOW MEASUREMENTS` InfluxQL statement](/influxdb3/cloud-dedicated/query-data/influxql/explore-schema/#list-measurements-in-a-database)
to list tables in a database.
@ -36,9 +44,11 @@ to list tables in a database.
{{% tabs %}}
[Admin UI](#admin-ui)
[influxctl](#influxctl)
[SQL & InfluxQL](#sql--influxql)
{{% /tabs %}}
{{% tab-content %}}
<!------------------------------- BEGIN ADMIN UI ------------------------------>
The InfluxDB Cloud Dedicated administrative UI includes a portal for managing
tables. You can view the list of tables associated with a database and
their details, including:
@ -47,48 +57,94 @@ their details, including:
- Table ID
- Table size (in bytes)
1. To access the {{< product-name >}} Admin UI, visit the following URL in your browser:

   <pre>
   <a href="https://console.influxdata.com">https://console.influxdata.com</a>
   </pre>

2. Use the credentials provided by InfluxData to log into the Admin UI.
   If you don't have login credentials, [contact InfluxData support](https://support.influxdata.com).

   After you log in, the Account Management portal displays [account information](/influxdb3/cloud-dedicated/admin/account/)
   and lists all clusters associated with your account.

3. In the cluster list, find the cluster that contains the database and table.
   You can **Search** for clusters by name or ID to filter the list and use the
   sort button and column headers to sort the list.
4. Click the cluster row to view the list of databases associated with the cluster.
5. In the database list, find the database that contains the table.
   You can **Search** for databases by name or ID to filter the list and use
   the sort button and column headers to sort the list.
6. Click the database row to view the list of tables associated with the database.
7. The table list displays the following table details:
   - Name
   - Table ID
   - Table size (in bytes)
8. You can **Search** for tables by name or ID to filter the list and use the
   sort button and column headers to sort the list.
<!-------------------------------- END ADMIN UI ------------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!------------------------------ BEGIN INFLUXCTL ------------------------------>
Use the [`influxctl table list` command](/influxdb3/cloud-dedicated/reference/cli/influxctl/table/list/)
to list all tables in a database in your {{< product-name omit=" Cluster" >}} cluster.
{{% code-placeholders "DATABASE_NAME" %}}
<!-- pytest.mark.skip -->
```bash
influxctl table list DATABASE_NAME
```
{{% /code-placeholders %}}
Replace the following:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
Name of the database containing the tables to list
### Output formats
The `influxctl table list` command supports the following output formats:
- `table` (default): Human-readable table format
- `json`: JSON format for programmatic use
Use the `--format` flag to specify the output format:
{{% code-placeholders "DATABASE_NAME" %}}
```sh
influxctl table list --format json DATABASE_NAME
```
{{% /code-placeholders %}}
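
The JSON output is convenient to post-process with standard tools such as `jq`.
The following sketch extracts table names from sample output--the JSON shape and
field names below are hypothetical, so inspect your actual
`influxctl table list --format json` output for the real structure:

```bash
# Hypothetical JSON shaped like table list output;
# field names are illustrative, not authoritative.
sample='[{"name":"home","id":"123"},{"name":"sensors","id":"456"}]'

# Extract just the table names with jq
echo "$sample" | jq -r '.[].name'
```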
<!------------------------------- END INFLUXCTL ------------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!----------------------------- BEGIN SQL/INFLUXQL ---------------------------->
## List tables with the influxctl query command
To list tables using SQL or InfluxQL, use the `influxctl query` command to pass
the appropriate statement.
### SQL
```sql
SHOW TABLES
```
### InfluxQL
```sql
SHOW MEASUREMENTS
```
Provide the following with your command:
- **Database token**: [Database token](/influxdb3/cloud-dedicated/admin/tokens/#database-tokens)
- **Database name**: Name of the database to query. Uses the `database` setting
from the [`influxctl` connection profile](/influxdb3/cloud-dedicated/reference/cli/influxctl/#configure-connection-profiles)
or the `--database` command flag.
- **SQL query**: SQL query with the `SHOW TABLES` statement or InfluxQL query with the `SHOW MEASUREMENTS` statement.
{{% code-placeholders "DATABASE_(TOKEN|NAME)" %}}
##### SQL
<!-- pytest.mark.skip -->
```bash
influxctl query \
--token DATABASE_TOKEN \
--database DATABASE_NAME \
"SHOW TABLES"
```
##### InfluxQL
<!-- pytest.mark.skip -->
```bash
influxctl query \
--token DATABASE_TOKEN \
--database DATABASE_NAME \
--language influxql \
"SHOW MEASUREMENTS"
```
{{% /code-placeholders %}}
Replace the following:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
Name of the database to query
<!------------------------------ END SQL/INFLUXQL ----------------------------->
{{% /tab-content %}}
{{< /tabs-wrapper >}}

title: influxctl database
description: >
The `influxctl database` command and its subcommands manage databases in an
{{% product-name omit=" Clustered" %}} cluster.
menu:
influxdb3_cloud_dedicated:
parent: influxctl
weight: 201
source: /shared/influxctl/database/_index.md
---
The `influxctl database` command and its subcommands manage databases in an
InfluxDB Cloud Dedicated cluster.
## Usage
```sh
influxctl database [subcommand] [flags]
```
## Subcommands
| Subcommand | Description |
| :--------------------------------------------------------------------------- | :------------------ |
| [create](/influxdb3/cloud-dedicated/reference/cli/influxctl/database/create/) | Create a database |
| [delete](/influxdb3/cloud-dedicated/reference/cli/influxctl/database/delete/) | Delete a database |
| [list](/influxdb3/cloud-dedicated/reference/cli/influxctl/database/list/) | List databases |
| [update](/influxdb3/cloud-dedicated/reference/cli/influxctl/database/update/) | Update a database |
| help, h | Output command help |
## Flags
| Flag | | Description |
| :--- | :------- | :------------------ |
| `-h` | `--help` | Output command help |
<!-- //SOURCE content/shared/influxctl/database/_index.md -->

---
title: influxctl database create
description: >
The `influxctl database create` command creates a new database in an
{{% product-name omit=" Clustered" %}} cluster.
menu:
influxdb3_cloud_dedicated:
parent: influxctl database
weight: 301
related:
- /influxdb3/cloud-dedicated/admin/custom-partitions/define-custom-partitions/
- /influxdb3/cloud-dedicated/admin/custom-partitions/partition-templates/
source: /shared/influxctl/database/create.md
---
The `influxctl database create` command creates a new database with a specified
retention period in an {{< product-name omit=" Clustered" >}} cluster.
The retention period defines the maximum age of data retained in the database,
based on the timestamp of the data.
The retention period value is a duration made up of a numeric value plus a
duration unit--for example, `30d` means 30 days.
A zero-duration retention period is infinite and data does not expire.
The retention period value cannot be negative or contain whitespace.
{{< flex >}}
{{% flex-content "half" %}}
##### Valid duration units include
- **m**: minute
- **h**: hour
- **d**: day
- **w**: week
- **mo**: month
- **y**: year
{{% /flex-content %}}
{{% flex-content "half" %}}
##### Example retention period values
- `0d`: infinite/none
- `3d`: 3 days
- `6w`: 6 weeks
- `1mo`: 1 month (30 days)
- `1y`: 1 year
- `30d30d`: 60 days
- `2.5d`: 60 hours
{{% /flex-content %}}
{{< /flex >}}
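
The last two example values work because duration values concatenate and accept
fractional numbers. The arithmetic behind them can be checked directly (a
sketch of the math only, not of `influxctl` itself):

```bash
# 30d30d: concatenated durations add together, so 30 days + 30 days = 60 days
echo $(( 30 + 30 ))

# 2.5d: fractional durations are allowed, so 2.5 days * 24 = 60 hours
awk 'BEGIN { print 2.5 * 24 }'
```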
#### Custom partitioning
You can override the default partition template (`%Y-%m-%d`) of the database
with the `--template-tag`, `--template-tag-bucket`, and `--template-timeformat`
flags when you create the database.
Provide a time format using [Rust strftime](/influxdb3/cloud-dedicated/admin/custom-partitions/partition-templates/#time-part-templates), partition by a specific tag, or partition tag values
into a specified number of "buckets."
Each of these can be used as part of the partition template.
Be sure to follow [partitioning best practices](/influxdb3/cloud-dedicated/admin/custom-partitions/best-practices/).
> [!Note]
> #### Always provide a time format when using custom partitioning
>
> If defining a custom partition template for your database with any of the
> `--template-*` flags, always include the `--template-timeformat` flag with a
> time format to use in your partition template.
> Otherwise, InfluxDB omits time from the partition template and won't compact partitions.
> [!Warning]
> #### Wait before writing to a new database with the same name as a deleted database
>
> After deleting a database from your {{% product-name omit=" Clustered" %}}
> cluster, you can reuse the name to create a new database, but **wait two to
> three minutes** after deleting the previous database before writing to the new
> database to allow write caches to clear.
## Usage
<!--Skip tests for database create and delete: namespaces aren't reusable-->
<!--pytest.mark.skip-->
```sh
influxctl database create [flags] <DATABASE_NAME>
```
## Arguments
| Argument | Description |
| :---------------- | :--------------------- |
| **DATABASE_NAME** | InfluxDB database name |
## Flags
| Flag | | Description |
| :--- | :---------------------- | :--------------------------------------------------------------------------------------------------------------------------------------- |
| | `--retention-period` | [Database retention period](/influxdb3/cloud-dedicated/admin/databases/#retention-periods) (default is `0s`, infinite) |
| | `--max-tables` | [Maximum tables per database](/influxdb3/cloud-dedicated/admin/databases/#table-limit) (default is 500, `0` uses default) |
| | `--max-columns` | [Maximum columns per table](/influxdb3/cloud-dedicated/admin/databases/#column-limit) (default is 250, `0` uses default) |
| | `--template-tag` | Tag to add to partition template (can include multiple of this flag) |
| | `--template-tag-bucket` | Tag and number of buckets to partition tag values into separated by a comma--for example: `tag1,100` (can include multiple of this flag) |
| | `--template-timeformat` | Timestamp format for partition template (default is `%Y-%m-%d`) |
| `-h` | `--help` | Output command help |
{{% caption %}}
_Also see [`influxctl` global flags](/influxdb3/cloud-dedicated/reference/cli/influxctl/#global-flags)._
{{% /caption %}}
## Examples
- [Create a database with an infinite retention period](#create-a-database-with-an-infinite-retention-period)
- [Create a database with a 30-day retention period](#create-a-database-with-a-30-day-retention-period)
- [Create a database with non-default table and column limits](#create-a-database-with-non-default-table-and-column-limits)
- [Create a database with a custom partition template](#create-a-database-with-a-custom-partition-template)
### Create a database with an infinite retention period
<!--Skip tests for database create and delete: namespaces aren't reusable-->
<!--pytest.mark.skip-->
```sh
influxctl database create mydb
```
### Create a database with a 30-day retention period
<!--Skip tests for database create and delete: namespaces aren't reusable-->
<!--pytest.mark.skip-->
```sh
influxctl database create \
--retention-period 30d \
mydb
```
### Create a database with non-default table and column limits
<!--Skip tests for database create and delete: namespaces aren't reusable-->
<!--pytest.mark.skip-->
```sh
influxctl database create \
--max-tables 200 \
--max-columns 150 \
mydb
```
### Create a database with a custom partition template
The following example creates a new `mydb` database and applies a partition
template that partitions by two tags (`room` and `sensor-type`) and by day using
the time format `%Y-%m-%d`:
<!--Skip tests for database create and delete: namespaces aren't reusable-->
<!--pytest.mark.skip-->
```sh
influxctl database create \
--template-tag room \
--template-tag sensor-type \
--template-tag-bucket customerID,1000 \
--template-timeformat '%Y-%m-%d' \
mydb
```
_For more information about custom partitioning, see
[Manage data partitioning](/influxdb3/cloud-dedicated/admin/custom-partitions/)._
{{% expand "View command updates" %}}
#### v2.7.0 {date="2024-03-26"}
- Introduce the `--template-tag-bucket` flag to group tag values into buckets
and partition by each tag bucket.
#### v2.5.0 {date="2024-03-04"}
- Introduce the `--template-tag` and `--template-timeformat` flags that define
a custom partition template for a database.
{{% /expand %}}
<!-- //SOURCE content/shared/influxctl/database/create.md -->

---
title: influxctl database delete
description: >
The `influxctl database delete` command deletes a database from an
{{% product-name omit=" Clustered" %}} cluster.
menu:
influxdb3_cloud_dedicated:
parent: influxctl database
weight: 301
source: /shared/influxctl/database/delete.md
---
The `influxctl database delete` command deletes a database from an
{{< product-name omit=" Clustered" >}} cluster.
## Usage
<!--Skip tests for database create and delete: namespaces aren't reusable-->
<!--pytest.mark.skip-->
```sh
influxctl database delete [command options] [--force] <DATABASE_NAME> [<DATABASE_NAME_N>...]
```
> [!Warning]
> #### Cannot be undone
>
> Deleting a database is a destructive action that cannot be undone.
>
> #### Wait before writing to a new database with the same name
>
> After deleting a database from your {{% product-name omit=" Clustered" %}}
> cluster, you can reuse the name to create a new database, but **wait two to
> three minutes** after deleting the previous database before writing to the new
> database to allow write caches to clear.
## Arguments
| Argument | Description |
| :---------------- | :----------------------------- |
| **DATABASE_NAME** | Name of the database to delete |
## Flags
| Flag | | Description |
| :--- | :-------- | :---------------------------------------------------------- |
| | `--force` | Do not prompt for confirmation to delete (default is false) |
| `-h` | `--help` | Output command help |
{{% caption %}}
_Also see [`influxctl` global flags](/influxdb3/cloud-dedicated/reference/cli/influxctl/#global-flags)._
{{% /caption %}}
## Examples
##### Delete a database named "mydb"
<!--Skip tests for database create and delete: namespaces aren't reusable-->
<!--pytest.mark.skip-->
```sh
influxctl database delete mydb
```
##### Delete multiple databases
<!--Skip tests for database create and delete: namespaces aren't reusable-->
<!--pytest.mark.skip-->
```sh
influxctl database delete mydb1 mydb2
```
<!-- //SOURCE content/shared/influxctl/database/delete.md -->
