Merge branch 'master' into copilot/create-validation-change-pr

Jason Stirnaman 2026-03-26 09:36:06 -05:00 committed by GitHub
commit 945cdc83f6
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
476 changed files with 5785 additions and 1051 deletions


@@ -13,7 +13,7 @@ set -euo pipefail
# --minAlertLevel=suggestion \
# --config=content/influxdb/cloud-dedicated/.vale.ini
VALE_VERSION="3.13.1"
VALE_VERSION="3.14.1"
VALE_MAJOR_MIN=3
if command -v vale &>/dev/null; then


@@ -61,6 +61,15 @@ You are an expert InfluxDB v1 technical writer with deep knowledge of InfluxData
5. **Apply Standards:** Ensure compliance with style guidelines and documentation conventions
6. **Cross-Reference:** Verify consistency with related documentation and product variants
## Release Documentation Workflow
**Always create separate PRs for OSS v1 and Enterprise v1 releases.**
- **OSS v1:** Publish immediately when the release tag is available on GitHub (`https://github.com/influxdata/influxdb/releases/tag/v1.x.x`).
- **Enterprise v1:** Publish only after the release artifact is generally available (GA) in the InfluxData portal. Create the PR as a **draft** until the v1 codeowner signals readiness (e.g., applies a release label).
- **`data/products.yml`:** Split version bumps per product. The OSS PR bumps `influxdb.latest_patches.v1`; the Enterprise PR bumps `enterprise_influxdb.latest_patches.v1`.
- **PR template:** Use `.github/pull_request_template/influxdb_v1_release.md` and select the appropriate release type (OSS or Enterprise).
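The per-product split described above can be illustrated with a hypothetical excerpt of `data/products.yml` (only the keys named in this workflow are shown; the surrounding structure and version numbers are assumed):

```yaml
# Hypothetical excerpt of data/products.yml, for illustration only.
influxdb:
  latest_patches:
    v1: 1.12.3          # bumped by the OSS v1 release PR only
enterprise_influxdb:
  latest_patches:
    v1: 1.12.3          # bumped by the Enterprise v1 release PR only
```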
## Quality Assurance
- All code examples must be testable and include proper pytest-codeblocks annotations


@@ -222,6 +222,29 @@ influxdb3_core, influxdb3_enterprise, telegraf
/influxdb3/core, /influxdb3/enterprise, /telegraf
```
## v1 Release Workflow
**InfluxDB v1 releases require separate PRs for OSS and Enterprise.**
1. **OSS PR** — publish immediately when the GitHub release tag is available.
2. **Enterprise PR** — create as a draft; merge only after the v1 codeowner signals readiness (e.g., applies a release label) and the release artifact is GA in the InfluxData portal.
Each PR should bump only its own product version in `data/products.yml`:
- OSS: `influxdb > latest_patches > v1`
- Enterprise: `enterprise_influxdb > latest_patches > v1`
Use the PR template `.github/pull_request_template/influxdb_v1_release.md` and select the appropriate release type.
### Examples for v1
```bash
# Generate OSS v1 release notes
docs release-notes v1.12.2 v1.12.3 --repos ~/github/influxdata/influxdb
# Generate Enterprise v1 release notes (separate PR)
# Use the Enterprise changelog at https://dl.influxdata.com/enterprise/nightlies/master/CHANGELOG.md
```
## Related
- **docs-cli-workflow** skill - When to use CLI tools


@@ -364,7 +364,7 @@ The documentation MCP server is hosted at `https://influxdb-docs.mcp.kapa.ai`—
Already configured in [`.mcp.json`](/.mcp.json). Two server entries are available:
- **`influxdb-docs`** (API key) — Set `INFLUXDATA_DOCS_KAPA_API_KEY` env var. 60 req/min.
- **`influxdb-docs-oauth`** (OAuth) — No setup. Authenticates via Google on first use. 40 req/hr, 200 req/day.
- **`influxdb-docs-oauth`** (OAuth) — No setup. Authenticates via Google or GitHub on first use. 40 req/hr, 200 req/day.
### Available Tool
@@ -534,7 +534,7 @@ touch content/influxdb3/enterprise/path/to/file.md
**Troubleshooting steps:**
- **API key auth** (`influxdb-docs`): Verify `INFLUXDATA_DOCS_KAPA_API_KEY` is set. Rate limit: 60 req/min.
- **OAuth auth** (`influxdb-docs-oauth`): Sign in with Google on first use. Rate limits: 40 req/hr, 200 req/day.
- **OAuth auth** (`influxdb-docs-oauth`): Sign in with Google or GitHub on first use. Rate limits: 40 req/hr, 200 req/day.
- Verify your network allows connections to `*.kapa.ai`
- Check if you've exceeded rate limits (wait and retry)

.gitattributes (vendored, new file, +1 line)

@@ -0,0 +1 @@
.github/workflows/*.lock.yml linguist-generated=true merge=ours


@@ -1,27 +1,37 @@
## InfluxDB v1 Release Documentation
**Release Version:** v1.x.x
**Release Type:** [ ] OSS [ ] Enterprise [ ] Both
**Release Version:** v1.x.x
**Release Type:** [ ] OSS [ ] Enterprise
> [!Important]
> **Always create separate PRs for OSS and Enterprise releases.**
> OSS can publish immediately when the GitHub release tag is available.
> Enterprise must wait until the release artifact is GA in the InfluxData portal.
> Never combine both products in a single release PR.
### Description
Brief description of the release and documentation changes.
### Pre-merge Gate (Enterprise only)
- [ ] **Confirm release artifact is GA in the InfluxData portal**
- [ ] **v1 codeowner has signaled readiness** (e.g., applied a release label)
### Release Documentation Checklist
#### Release Notes
- [ ] Generate release notes from changelog
- [ ] OSS: Use commit messages from GitHub release tag `https://github.com/influxdata/influxdb/releases/tag/v1.x.x`
- [ ] Enterprise: Use `https://dl.influxdata.com/enterprise/nightlies/master/CHANGELOG.md`
- [ ] **Note**: For Enterprise releases, include important updates, features, and fixes from the corresponding OSS tag
- OSS: Use commit messages from GitHub release tag `https://github.com/influxdata/influxdb/releases/tag/v1.x.x`
- Enterprise: Use `https://dl.influxdata.com/enterprise/nightlies/master/CHANGELOG.md`
- **Note**: For Enterprise releases, include important updates, features, and fixes from the corresponding OSS tag
- [ ] Update release notes in appropriate location
- [ ] OSS: `/content/influxdb/v1/about_the_project/releasenotes-changelog.md`
- [ ] Enterprise: `/content/enterprise_influxdb/v1/about-the-project/release-notes.md`
- OSS: `content/influxdb/v1/about_the_project/release-notes.md`
- Enterprise: `content/enterprise_influxdb/v1/about-the-project/release-notes.md`
- [ ] Ensure release notes follow documentation formatting standards
#### Version Updates
- [ ] Update patch version in `/data/products.yml`
- [ ] OSS: `influxdb > v1 > latest`
- [ ] Enterprise: `enterprise_influxdb > v1 > latest`
- [ ] Update patch version in `data/products.yml` (**only for this product**)
- OSS: `influxdb > latest_patches > v1`
- Enterprise: `enterprise_influxdb > latest_patches > v1`
- [ ] Update version references in documentation
- [ ] Installation guides
- [ ] Docker documentation
@@ -37,8 +47,9 @@ Brief description of the release and documentation changes.
#### Testing
- [ ] Build documentation locally and verify changes render correctly
- [ ] Test all updated links
- [ ] Run link validation: `yarn test:links content/influxdb/v1/**/*.md`
- [ ] Run link validation: `yarn test:links content/enterprise_influxdb/v1/**/*.md`
- [ ] Run link validation for the product being released:
- OSS: `yarn test:links content/influxdb/v1/**/*.md`
- Enterprise: `yarn test:links content/enterprise_influxdb/v1/**/*.md`
### Related Resources
- DAR Issue: #
@@ -50,6 +61,3 @@ Brief description of the release and documentation changes.
- [ ] Verify documentation is deployed to production
- [ ] Announce in #docs channel
- [ ] Close related DAR issue(s)
---
**Note:** For Enterprise releases, ensure you have access to the Enterprise changelog and coordinate with the release team for timing.


@@ -35,10 +35,10 @@ if (!/^origin\/[a-zA-Z0-9._\/-]+$/.test(BASE_REF)) {
*/
function getAllChangedFiles() {
try {
const output = execSync(
`git diff --name-only ${BASE_REF}...HEAD`,
{ encoding: 'utf-8', stdio: ['pipe', 'pipe', 'pipe'] }
);
const output = execSync(`git diff --name-only ${BASE_REF}...HEAD`, {
encoding: 'utf-8',
stdio: ['pipe', 'pipe', 'pipe'],
});
return output.trim().split('\n').filter(Boolean);
} catch (err) {
console.error(`Error detecting changes: ${err.message}`);
@@ -53,11 +53,13 @@ function getAllChangedFiles() {
*/
function categorizeChanges(files) {
return {
content: files.filter(f => f.startsWith('content/') && f.endsWith('.md')),
layouts: files.filter(f => f.startsWith('layouts/')),
assets: files.filter(f => f.startsWith('assets/')),
data: files.filter(f => f.startsWith('data/')),
apiDocs: files.filter(f => f.startsWith('api-docs/') || f.startsWith('openapi/')),
content: files.filter((f) => f.startsWith('content/') && f.endsWith('.md')),
layouts: files.filter((f) => f.startsWith('layouts/')),
assets: files.filter((f) => f.startsWith('assets/')),
data: files.filter((f) => f.startsWith('data/')),
apiDocs: files.filter(
(f) => f.startsWith('api-docs/') || f.startsWith('openapi/')
),
};
}
@@ -127,7 +129,7 @@ function main() {
const htmlPaths = mapContentToPublic(expandedContent, 'public');
// Convert HTML paths to URL paths
pagesToDeploy = Array.from(htmlPaths).map(htmlPath => {
pagesToDeploy = Array.from(htmlPaths).map((htmlPath) => {
return '/' + htmlPath.replace('public/', '').replace('/index.html', '/');
});
console.log(` Found ${pagesToDeploy.length} affected pages\n`);
@@ -135,34 +137,53 @@
// Strategy 2: Layout/asset changes - parse URLs from PR body
if (hasLayoutChanges) {
console.log('🎨 Layout/asset changes detected, checking PR description for URLs...');
console.log(
'🎨 Layout/asset changes detected, checking PR description for URLs...'
);
// Auto-detect home page when the root template changes
if (changes.layouts.includes('layouts/index.html')) {
pagesToDeploy = [...new Set([...pagesToDeploy, '/'])];
console.log(
' 🏠 Home page template (layouts/index.html) changed - auto-adding / to preview pages'
);
}
const prUrls = extractDocsUrls(PR_BODY);
if (prUrls.length > 0) {
console.log(` Found ${prUrls.length} URLs in PR description`);
// Merge with content pages (deduplicate)
pagesToDeploy = [...new Set([...pagesToDeploy, ...prUrls])];
} else if (changes.content.length === 0) {
// No content changes AND no URLs specified - need author input
console.log(' ⚠️ No URLs found in PR description - author input needed');
} else if (pagesToDeploy.length === 0) {
// No content changes, no auto-detected pages, and no URLs specified - need author input
console.log(
' ⚠️ No URLs found in PR description - author input needed'
);
setOutput('pages-to-deploy', '[]');
setOutput('has-layout-changes', 'true');
setOutput('needs-author-input', 'true');
setOutput('change-summary', 'Layout/asset changes detected - please specify pages to preview');
setOutput(
'change-summary',
'Layout/asset changes detected - please specify pages to preview'
);
return;
}
}
// Apply page limit
if (pagesToDeploy.length > MAX_PAGES) {
console.log(`⚠️ Limiting preview to ${MAX_PAGES} pages (found ${pagesToDeploy.length})`);
console.log(
`⚠️ Limiting preview to ${MAX_PAGES} pages (found ${pagesToDeploy.length})`
);
pagesToDeploy = pagesToDeploy.slice(0, MAX_PAGES);
}
// Generate summary
const summary = pagesToDeploy.length > 0
? `${pagesToDeploy.length} page(s) will be previewed`
: 'No pages to preview';
const summary =
pagesToDeploy.length > 0
? `${pagesToDeploy.length} page(s) will be previewed`
: 'No pages to preview';
console.log(`\n${summary}`);
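The `categorizeChanges()` helper reformatted in this hunk can be exercised standalone; the sketch below copies its filter logic and runs it on a hypothetical changed-file list:

```javascript
// Standalone sketch of categorizeChanges() as shown in the diff above;
// the input file list is hypothetical.
function categorizeChanges(files) {
  return {
    content: files.filter((f) => f.startsWith('content/') && f.endsWith('.md')),
    layouts: files.filter((f) => f.startsWith('layouts/')),
    assets: files.filter((f) => f.startsWith('assets/')),
    data: files.filter((f) => f.startsWith('data/')),
    apiDocs: files.filter(
      (f) => f.startsWith('api-docs/') || f.startsWith('openapi/')
    ),
  };
}

const changes = categorizeChanges([
  'content/influxdb3/core/get-started/_index.md',
  'layouts/index.html',
  'data/products.yml',
]);
console.log(changes);
```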


@@ -63,6 +63,9 @@ function isValidUrlPath(path) {
// Must start with /
if (!path.startsWith('/')) return false;
// Allow root path (docs home page at /)
if (path === '/') return true;
// Must start with known product prefix (loaded from products.yml)
const validPrefixes = PRODUCT_NAMESPACES.map((ns) => `/${ns}/`);
@@ -101,7 +104,8 @@ export function extractDocsUrls(text) {
// Pattern 1: Full production URLs
// https://docs.influxdata.com/influxdb3/core/get-started/
const prodUrlPattern = /https?:\/\/docs\.influxdata\.com(\/[^\s)\]>"']+)/g;
// https://docs.influxdata.com/ (home page)
const prodUrlPattern = /https?:\/\/docs\.influxdata\.com(\/[^\s)\]>"']*)/g;
let match;
while ((match = prodUrlPattern.exec(text)) !== null) {
const path = normalizeUrlPath(match[1]);
@@ -112,7 +116,8 @@ export function extractDocsUrls(text) {
// Pattern 2: Localhost dev URLs
// http://localhost:1313/influxdb3/core/
const localUrlPattern = /https?:\/\/localhost:\d+(\/[^\s)\]>"']+)/g;
// http://localhost:1313/ (home page)
const localUrlPattern = /https?:\/\/localhost:\d+(\/[^\s)\]>"']*)/g;
while ((match = localUrlPattern.exec(text)) !== null) {
const path = normalizeUrlPath(match[1]);
if (isValidUrlPath(path)) {
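The pattern change in this file is small but behavioral: switching the captured path's quantifier from `+` to `*` lets the bare docs home page match. A quick check (sample text is hypothetical):

```javascript
// The updated pattern from the diff: '*' instead of '+' after the
// character class, so a bare trailing '/' (the home page) now matches.
const prodUrlPattern = /https?:\/\/docs\.influxdata\.com(\/[^\s)\]>"']*)/g;
const text =
  'Preview https://docs.influxdata.com/ and https://docs.influxdata.com/influxdb3/core/';
const paths = [...text.matchAll(prodUrlPattern)].map((m) => m[1]);
console.log(paths);
```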


@@ -10,6 +10,7 @@
*/
import { appendFileSync } from 'fs';
import { execSync } from 'child_process';
import {
getChangedContentFiles,
mapContentToPublic,
@@ -27,11 +28,33 @@ if (!/^origin\/[a-zA-Z0-9._/-]+$/.test(BASE_REF)) {
const changed = getChangedContentFiles(BASE_REF);
const htmlPaths = mapContentToPublic(changed, 'public');
const urls = Array.from(htmlPaths)
const contentUrls = Array.from(htmlPaths)
.sort()
.map((p) => '/' + p.replace(/^public\//, '').replace(/\/index\.html$/, '/'))
.slice(0, MAX_PAGES);
// Check if the home page template changed (layouts/index.html → /)
let homePageUrls = [];
try {
const homePageChanged = execSync(
`git diff --name-only ${BASE_REF}...HEAD -- layouts/index.html`,
{ encoding: 'utf-8', stdio: ['pipe', 'pipe', 'pipe'] }
).trim();
if (homePageChanged) {
homePageUrls = ['/'];
console.log(
'Home page template (layouts/index.html) changed - adding / to review URLs'
);
}
} catch {
// Ignore errors - fall back to content-only URLs
}
const urls = [...new Set([...homePageUrls, ...contentUrls])].slice(
0,
MAX_PAGES
);
appendFileSync(GITHUB_OUTPUT, `urls=${JSON.stringify(urls)}\n`);
appendFileSync(GITHUB_OUTPUT, `url-count=${urls.length}\n`);
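The merge-dedup-cap step added here is a standard Set idiom; a minimal sketch with hypothetical URLs and a small limit:

```javascript
// Merge the home-page URL with content URLs, deduplicate via Set
// (which preserves insertion order), then enforce the page cap.
const MAX_PAGES = 3; // hypothetical limit for illustration
const homePageUrls = ['/'];
const contentUrls = ['/', '/influxdb3/core/', '/telegraf/v1/', '/influxdb/v1/'];
const urls = [...new Set([...homePageUrls, ...contentUrls])].slice(0, MAX_PAGES);
console.log(urls);
```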


@@ -145,7 +145,11 @@ test('Special characters: backticks are delimiters', () => {
// This prevents command substitution injection
const text = '/influxdb3/`whoami`/';
const result = extractDocsUrls(text);
assertEquals(result, ['/influxdb3/'], 'Should truncate at backtick delimiter');
assertEquals(
result,
['/influxdb3/'],
'Should truncate at backtick delimiter'
);
});
test('Special characters: single quotes truncate at extraction', () => {
@@ -257,31 +261,51 @@ test('Normalization: removes query string', () => {
test('Normalization: strips wildcard from path', () => {
const text = '/influxdb3/enterprise/*';
const result = extractDocsUrls(text);
assertEquals(result, ['/influxdb3/enterprise/'], 'Should strip wildcard character');
assertEquals(
result,
['/influxdb3/enterprise/'],
'Should strip wildcard character'
);
});
test('Normalization: strips wildcard in middle of path', () => {
const text = '/influxdb3/*/admin/';
const result = extractDocsUrls(text);
assertEquals(result, ['/influxdb3/admin/'], 'Should strip wildcard from middle of path');
assertEquals(
result,
['/influxdb3/admin/'],
'Should strip wildcard from middle of path'
);
});
test('Normalization: strips multiple wildcards', () => {
const text = '/influxdb3/*/admin/*';
const result = extractDocsUrls(text);
assertEquals(result, ['/influxdb3/admin/'], 'Should strip all wildcard characters');
assertEquals(
result,
['/influxdb3/admin/'],
'Should strip all wildcard characters'
);
});
test('Wildcard in markdown-style notation', () => {
const text = '**InfluxDB 3 Enterprise pages** (`/influxdb3/enterprise/*`)';
const result = extractDocsUrls(text);
assertEquals(result, ['/influxdb3/enterprise/'], 'Should extract and normalize path with wildcard in backticks');
assertEquals(
result,
['/influxdb3/enterprise/'],
'Should extract and normalize path with wildcard in backticks'
);
});
test('Wildcard in parentheses', () => {
const text = 'Affects pages under (/influxdb3/enterprise/*)';
const result = extractDocsUrls(text);
assertEquals(result, ['/influxdb3/enterprise/'], 'Should extract and normalize path with wildcard in parentheses');
assertEquals(
result,
['/influxdb3/enterprise/'],
'Should extract and normalize path with wildcard in parentheses'
);
});
// Test deduplication
@@ -360,6 +384,31 @@ test('BASE_REF: rejects without origin/ prefix', () => {
assertEquals(isValid, false, 'Should require origin/ prefix');
});
// Home page URL support
test('Home page: production URL https://docs.influxdata.com/', () => {
const text = 'Preview: https://docs.influxdata.com/';
const result = extractDocsUrls(text);
assertEquals(result, ['/'], 'Should extract root path for docs home page');
});
test('Home page: localhost URL http://localhost:1313/', () => {
const text = 'Testing at http://localhost:1313/';
const result = extractDocsUrls(text);
assertEquals(result, ['/'], 'Should extract root path from localhost URL');
});
test('Home page: relative root path / in text', () => {
// Relative '/' alone is not extractable by the relative pattern (requires product prefix),
// but full URLs with / path are supported
const text = 'https://docs.influxdata.com/ and /influxdb3/core/';
const result = extractDocsUrls(text);
assertEquals(
result.sort(),
['/', '/influxdb3/core/'].sort(),
'Should extract both root path and product path'
);
});
// Print summary
console.log('\n=== Test Summary ===');
console.log(`Total: ${totalTests}`);

.github/workflows/daily-repo-status.lock.yml (generated, vendored, new file, +1031 lines)

File diff suppressed because it is too large.

.github/workflows/daily-repo-status.md (vendored, new file, +58 lines)

@@ -0,0 +1,58 @@
---
description: |
This workflow creates daily repo status reports. It gathers recent repository
activity (issues, PRs, discussions, releases, code changes) and generates
engaging GitHub issues with productivity insights, community highlights,
and project recommendations.
on:
schedule: daily
workflow_dispatch:
permissions:
contents: read
issues: read
pull-requests: read
network: defaults
tools:
github:
# If in a public repo, setting `lockdown: false` allows
# reading issues, pull requests and comments from 3rd-parties
# If in a private repo this has no particular effect.
lockdown: false
safe-outputs:
mentions: false
allowed-github-references: []
create-issue:
title-prefix: "[repo-status] "
labels: [report, daily-status]
close-older-issues: true
source: githubnext/agentics/workflows/daily-repo-status.md@9a76aba267225767b9b2e1623188d11ed9b58f11
engine: copilot
---
# Daily Repo Status
Create an upbeat daily status report for the repo as a GitHub issue.
## What to include
- Recent repository activity (issues, PRs, discussions, releases, code changes)
- Progress tracking, goal reminders and highlights
- Project status and recommendations
- Actionable next steps for maintainers
## Style
- Be positive, encouraging, and helpful 🌟
- Use emojis moderately for engagement
- Keep it concise - adjust length based on actual activity
## Process
1. Gather recent activity from the repository
2. Study the repository, its issues and its pull requests
3. Create a new GitHub issue with your findings and insights


@@ -11,7 +11,7 @@
}
},
"influxdb-docs-oauth": {
"comment": "Hosted InfluxDB documentation search (OAuth). No API key needed--authenticates via Google OAuth on first use. Rate limits: 40 req/hr, 200 req/day.",
"comment": "Hosted InfluxDB documentation search (OAuth). No API key needed--authenticates via Google or GitHub OAuth on first use. Rate limits: 40 req/hr, 200 req/day.",
"type": "sse",
"url": "https://influxdb-docs.mcp.kapa.ai"
},


@@ -7,10 +7,11 @@ function initialize() {
var appendHTML = `
<div class="code-controls">
<span class="code-controls-toggle"><span class='cf-icon More'></span></span>
<ul class="code-control-options">
<li class='copy-code'><span class='cf-icon Duplicate_New'></span> <span class="message">Copy</span></li>
<li class='fullscreen-toggle'><span class='cf-icon ExpandB'></span> Fill window</li>
<button class="code-controls-toggle" aria-label="Code block options" aria-expanded="false"><span class='cf-icon More'></span></button>
<ul class="code-control-options" role="menu">
<li role="none"><button role="menuitem" class='copy-code'><span class='cf-icon Duplicate_New'></span> <span class="message">Copy</span></button></li>
<li role="none"><button role="menuitem" class='ask-ai-code'><span class='cf-icon Chat'></span> Ask AI</button></li>
<li role="none"><button role="menuitem" class='fullscreen-toggle'><span class='cf-icon ExpandB'></span> Fill window</button></li>
</ul>
</div>
`;
@@ -27,12 +28,17 @@ function initialize() {
// Click outside of the code-controls to close them
$(document).click(function () {
$('.code-controls').removeClass('open');
$('.code-controls.open').each(function () {
$(this).removeClass('open');
$(this).find('.code-controls-toggle').attr('aria-expanded', 'false');
});
});
// Click the code controls toggle to open code controls
$('.code-controls-toggle').click(function () {
$(this).parent('.code-controls').toggleClass('open');
var $controls = $(this).parent('.code-controls');
var isOpen = $controls.toggleClass('open').hasClass('open');
$(this).attr('aria-expanded', String(isOpen));
});
// Stop event propagation for clicks inside of the code-controls div
@@ -235,6 +241,34 @@ function initialize() {
return info;
}
////////////////////////////////// ASK AI ////////////////////////////////////
// Build a query from the code block and open Kapa via the ask-ai-open contract
$('.ask-ai-code').click(function () {
var codeElement = $(this)
.closest('.code-controls')
.prevAll('pre:has(code)')[0];
if (!codeElement) return;
var code = codeElement.innerText.trim();
// Use the data-ask-ai-query attribute if the template provided one,
// otherwise build a generic query from the code content
var query =
$(codeElement).attr('data-ask-ai-query') ||
'Explain this code:\n```\n' + code.substring(0, 500) + '\n```';
// Delegate to the global ask-ai-open handler by synthesizing a click.
// Use native .click() instead of jQuery .trigger() so the event
// reaches the native document.addEventListener in ask-ai-trigger.js.
// No href — prevents scroll-to-top when the native click fires.
var triggerEl = document.createElement('a');
triggerEl.className = 'ask-ai-open';
triggerEl.dataset.query = query;
document.body.appendChild(triggerEl);
triggerEl.click();
triggerEl.remove();
});
/////////////////////////////// FULL WINDOW CODE ///////////////////////////////
/*


@@ -0,0 +1,92 @@
/**
* Highlights Telegraf Controller dynamic values in code blocks.
*
* Wraps three pattern types in styled <span> elements:
* - Parameters: &{name} or &{name:default}
* - Environment variables: ${VAR_NAME}
* - Secrets: @{store:secret_name}
*
* Applied to code blocks with class="tc-dynamic-values" via
* the data-component="tc-dynamic-values" attribute set by
* the render-codeblock hook.
*/
const PATTERNS = [
{ regex: /&\{[^}]+\}/g, className: 'param' },
{ regex: /\$\{[^}]+\}/g, className: 'env' },
{ regex: /@\{[^:]+:[^}]+\}/g, className: 'secret' },
];
/**
* Walk all text nodes inside the given element and wrap matches
* in <span class="tc-dynamic-value {type}"> elements.
*/
function highlightDynamicValues(codeEl) {
const walker = document.createTreeWalker(codeEl, NodeFilter.SHOW_TEXT);
const textNodes = [];
while (walker.nextNode()) {
textNodes.push(walker.currentNode);
}
for (const node of textNodes) {
const text = node.textContent;
let hasMatch = false;
for (const { regex } of PATTERNS) {
regex.lastIndex = 0;
if (regex.test(text)) {
hasMatch = true;
break;
}
}
if (!hasMatch) continue;
const fragment = document.createDocumentFragment();
let remaining = text;
while (remaining.length > 0) {
let earliestMatch = null;
let earliestIndex = remaining.length;
let matchedPattern = null;
for (const pattern of PATTERNS) {
pattern.regex.lastIndex = 0;
const match = pattern.regex.exec(remaining);
if (match && match.index < earliestIndex) {
earliestMatch = match;
earliestIndex = match.index;
matchedPattern = pattern;
}
}
if (!earliestMatch) {
fragment.appendChild(document.createTextNode(remaining));
break;
}
if (earliestIndex > 0) {
fragment.appendChild(
document.createTextNode(remaining.slice(0, earliestIndex))
);
}
const span = document.createElement('span');
span.className = `tc-dynamic-value ${matchedPattern.className}`;
span.textContent = earliestMatch[0];
fragment.appendChild(span);
remaining = remaining.slice(earliestIndex + earliestMatch[0].length);
}
node.parentNode.replaceChild(fragment, node);
}
}
export default function TcDynamicValues({ component }) {
const codeEl = component.querySelector('code');
if (codeEl) {
highlightDynamicValues(codeEl);
}
}
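The three token patterns at the top of this new file can be sanity-checked against a sample line (the config line below is hypothetical):

```javascript
// The PATTERNS table from tc-dynamic-values.js, applied to one sample line.
const PATTERNS = [
  { regex: /&\{[^}]+\}/g, className: 'param' },
  { regex: /\$\{[^}]+\}/g, className: 'env' },
  { regex: /@\{[^:]+:[^}]+\}/g, className: 'secret' },
];
const line = 'url = &{host:localhost} token = ${INFLUX_TOKEN} pass = @{vault:db_pass}';
const found = {};
for (const { regex, className } of PATTERNS) {
  regex.lastIndex = 0; // reset shared global-regex state before each match
  found[className] = line.match(regex) || [];
}
console.log(found);
```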


@@ -122,7 +122,7 @@ function expandAccordions() {
// Expand accordions on load based on URL anchor
function openAccordionByHash() {
var anchor = window.location.hash;
var anchor = window.location.hash.split('?')[0];
function expandElement() {
if ($(anchor).parents('.expand').length > 0) {
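The one-line change in this hunk matters because a fragment such as `#setup?ref=tc` is not a valid selector; splitting on `?` recovers the usable anchor (example values hypothetical):

```javascript
// Strip a query-style tail from a URL fragment before using it as a
// selector, mirroring the change in openAccordionByHash().
const hash = '#setup?ref=tc';
const anchor = hash.split('?')[0];
console.log(anchor);
```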


@@ -19,6 +19,7 @@ import * as pageContext from './page-context.js';
import * as pageFeedback from './page-feedback.js';
import * as tabbedContent from './tabbed-content.js';
import * as v3Wayfinding from './v3-wayfinding.js';
import * as tcDownloads from './tc-downloads.js';
/** Import component modules
* The component pattern organizes JavaScript, CSS, and HTML for a specific UI element or interaction:
@@ -44,6 +45,7 @@ import ReleaseToc from './release-toc.js';
import { SearchButton } from './search-button.js';
import SidebarSearch from './components/sidebar-search.js';
import { SidebarToggle } from './sidebar-toggle.js';
import TcDynamicValues from './components/tc-dynamic-values.js';
import Theme from './theme.js';
import ThemeSwitch from './theme-switch.js';
@@ -75,6 +77,7 @@ const componentRegistry = {
'search-button': SearchButton,
'sidebar-search': SidebarSearch,
'sidebar-toggle': SidebarToggle,
'tc-dynamic-values': TcDynamicValues,
theme: Theme,
'theme-switch': ThemeSwitch,
};
@@ -162,6 +165,7 @@ function initModules() {
pageFeedback.initialize();
tabbedContent.initialize();
v3Wayfinding.initialize();
tcDownloads.initialize();
}
/**


@@ -117,7 +117,10 @@ function getInfluxDBUrls() {
initializeStorageItem('urls', JSON.stringify(DEFAULT_STORAGE_URLS));
}
return JSON.parse(localStorage.getItem(urlStorageKey));
const storedUrls = JSON.parse(localStorage.getItem(urlStorageKey));
// Backfill any new default keys missing from stored data (e.g., when new
// products like core/enterprise are added after a user's first visit).
return { ...DEFAULT_STORAGE_URLS, ...storedUrls };
}
// Get the current or previous URL for a specific product or a custom url
@@ -131,8 +134,8 @@ function getInfluxDBUrl(product) {
const urlsString = localStorage.getItem(urlStorageKey);
const urlsObj = JSON.parse(urlsString);
// Return the URL of the specified product
return urlsObj[product];
// Return the URL of the specified product, falling back to the default
return urlsObj[product] ?? DEFAULT_STORAGE_URLS[product];
}
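The spread-merge backfill added in this hunk keeps stored user values while picking up keys introduced after the user's first visit; a minimal sketch with hypothetical defaults:

```javascript
// Defaults spread first, stored values second: user overrides win,
// but newly introduced default keys are backfilled.
const DEFAULT_STORAGE_URLS = {
  oss: 'http://localhost:8086', // hypothetical default
  core: 'http://localhost:8181', // hypothetical key added in a later release
};
const storedUrls = { oss: 'https://my-influxdb.example.com' }; // saved before 'core' existed
const merged = { ...DEFAULT_STORAGE_URLS, ...storedUrls };
console.log(merged);
```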
/*

assets/js/tc-downloads.js (new file, +221 lines)

@@ -0,0 +1,221 @@
////////////////////////////////////////////////////////////////////////////////
///////////////// Telegraf Controller gated downloads module ////////////////////
////////////////////////////////////////////////////////////////////////////////
import { toggleModal } from './modals.js';
const STORAGE_KEY = 'influxdata_docs_tc_dl';
const QUERY_PARAM = 'ref';
const QUERY_VALUE = 'tc';
// ─── localStorage helpers ───────────────────────────────────────────────────
function setDownloadKey() {
localStorage.setItem(STORAGE_KEY, 'true');
}
function hasDownloadKey() {
return localStorage.getItem(STORAGE_KEY) === 'true';
}
// ─── Query param helpers ────────────────────────────────────────────────────
function hasRefParam() {
// Check query string first (?ref=tc before the hash)
const params = new URLSearchParams(window.location.search);
if (params.get(QUERY_PARAM) === QUERY_VALUE) return true;
// Also check inside the fragment (#heading?ref=tc)
const hash = window.location.hash;
const qIndex = hash.indexOf('?');
if (qIndex !== -1) {
const hashParams = new URLSearchParams(hash.substring(qIndex));
if (hashParams.get(QUERY_PARAM) === QUERY_VALUE) return true;
}
return false;
}
function stripRefParam() {
const url = new URL(window.location.href);
// Remove from query string
url.searchParams.delete(QUERY_PARAM);
// Remove from fragment if present (#heading?ref=tc → #heading)
let hash = url.hash;
const qIndex = hash.indexOf('?');
if (qIndex !== -1) {
const hashBase = hash.substring(0, qIndex);
const hashParams = new URLSearchParams(hash.substring(qIndex));
hashParams.delete(QUERY_PARAM);
const remaining = hashParams.toString();
hash = remaining ? `${hashBase}?${remaining}` : hashBase;
}
window.history.replaceState({}, '', url.pathname + url.search + hash);
}
// ─── Download link rendering ────────────────────────────────────────────────
function renderDownloadLinks(container, data) {
const version = data.version;
const platforms = data.platforms;
let html = '<div class="tc-downloads-container">';
platforms.forEach((platform) => {
html += `<h3>${platform.name}</h3>`;
html +=
'<p class="tc-version">' +
`<em>Telegraf Controller v${version}</em>` +
'</p>';
html += '<div class="tc-build-table">';
platform.builds.forEach((build) => {
const link =
`<a href="${build.url}"` +
' class="btn tc-download-link download"' +
` download>${platform.name}` +
` (${build.arch})</a>`;
const sha =
`<code>sha256:${build.sha256}</code>` +
'<button class="tc-copy-sha"' +
` data-sha="${build.sha256}">` +
'&#59693;</button>';
html +=
'<div class="tc-build-row">' +
`<div class="tc-build-download">${link}</div>` +
`<div class="tc-build-sha">${sha}</div>` +
'</div>';
});
html += '</div>';
});
container.innerHTML = html;
}
// ─── Clipboard copy ─────────────────────────────────────────────────────────
function copyToClipboard(sha, button) {
if (navigator.clipboard && navigator.clipboard.writeText) {
navigator.clipboard.writeText(sha).then(() => {
showCopiedFeedback(button);
});
} else {
// Fallback for older browsers
const textArea = document.createElement('textarea');
textArea.value = sha;
textArea.style.position = 'fixed';
textArea.style.opacity = '0';
document.body.appendChild(textArea);
textArea.select();
document.execCommand('copy');
document.body.removeChild(textArea);
showCopiedFeedback(button);
}
}
function showCopiedFeedback(button) {
const original = button.innerHTML;
button.innerHTML = '&#59671;';
setTimeout(() => {
button.innerHTML = original;
}, 2000);
}
// ─── Marketo form ───────────────────────────────────────────────────────────
function initMarketoForm() {
/* global MktoForms2 */
if (typeof MktoForms2 === 'undefined') {
console.error('tc-downloads: MktoForms2 not loaded');
return;
}
MktoForms2.setOptions({
formXDPath: '/rs/972-GDU-533/images/marketo-xdframe-relative.html',
});
MktoForms2.loadForm(
'https://get.influxdata.com',
'972-GDU-533',
3195,
function (form) {
form.addHiddenFields({ mkto_content_name: 'Telegraf Enterprise Alpha' });
form.onSuccess(function () {
setDownloadKey();
toggleModal();
// Redirect to self with ?ref=tc to trigger downloads on reload
const url = new URL(window.location.href);
url.searchParams.set(QUERY_PARAM, QUERY_VALUE);
window.location.href = url.toString();
// Prevent Marketo's default redirect
return false;
});
}
);
}
// ─── View state management ──────────────────────────────────────────────────
function showDownloads(area) {
const btn = area.querySelector('#tc-download-btn');
const linksContainer = area.querySelector('#tc-downloads-links');
if (!linksContainer) return;
// Parse download data from the JSON data attribute
const rawData = linksContainer.getAttribute('data-downloads');
if (!rawData) return;
let data;
try {
data = JSON.parse(atob(rawData));
} catch (e) {
console.error('tc-downloads: failed to parse download data', e);
return;
}
// Hide the download button
if (btn) btn.style.display = 'none';
// Render download links and show the container
renderDownloadLinks(linksContainer, data);
linksContainer.style.display = 'block';
}
// ─── Initialize ─────────────────────────────────────────────────────────────
function initialize() {
// 1. Handle ?ref=tc query param on any page
if (hasRefParam()) {
setDownloadKey();
stripRefParam();
}
const area = document.getElementById('tc-downloads-area');
if (!area) return; // No shortcode on this page — no-op
// 2. Check localStorage and show appropriate view
if (hasDownloadKey()) {
showDownloads(area);
}
// 3. Initialize Marketo form
initMarketoForm();
// 4. Delegated click handler for SHA copy buttons
area.addEventListener('click', function (e) {
const copyBtn = e.target.closest('.tc-copy-sha');
if (copyBtn) {
const sha = copyBtn.getAttribute('data-sha');
if (sha) copyToClipboard(sha, copyBtn);
}
});
}
export { initialize };

View File

@ -216,6 +216,7 @@
"article/tabbed-content",
"article/tables",
"article/tags",
"article/tc-downloads",
"article/telegraf-plugins",
"article/title",
"article/truncate",

View File

@ -16,10 +16,12 @@
opacity: .5;
transition: opacity .2s;
border-radius: $radius;
border: none;
background: none;
line-height: 0;
cursor: pointer;
cursor: pointer;
&:hover {
&:hover, &:focus-visible {
opacity: 1;
background-color: rgba($article-text, .1);
backdrop-filter: blur(15px);
@ -35,21 +37,26 @@
backdrop-filter: blur(15px);
display: none;
li {
button {
display: block;
width: 100%;
text-align: left;
margin: 0;
padding: .4rem .5rem .6rem;
border: none;
background: none;
border-radius: $radius;
color: $article-bold;
font-size: .87rem;
line-height: 0;
cursor: pointer;
cursor: pointer;
&:hover {background-color: rgba($article-text, .07)}
&.copy-code, &.fullscreen-toggle {
.cf-icon {margin-right: .35rem;}
&:hover, &:focus-visible {
background-color: rgba($article-text, .07);
}
.cf-icon {margin-right: .35rem;}
&.copy-code {
.message {
text-shadow: 0px 0px 8px rgba($article-text, 0);
@ -69,6 +76,8 @@
}
}
}
li {margin: 0;}
}
&.open {

View File

@ -278,8 +278,8 @@
position: relative;
overflow: hidden;
display: flex;
flex-direction: row;
align-items: center;
flex-direction: column;
// align-items: center;
justify-content: space-between;
.bg-overlay {
@ -302,9 +302,6 @@
}
ul.product-links {
padding-left: 0;
margin: 0 3rem 0 2rem;
list-style: none;
li:not(:last-child) {margin-bottom: .35rem;}

View File

@ -135,7 +135,8 @@
@import "modals/url-selector";
@import "modals/page-feedback";
@import "modals/flux-versions";
@import "modals/_influxdb-gs-datepicker"
@import "modals/_influxdb-gs-datepicker";
@import "modals/tc-downloads";
}

View File

@ -0,0 +1,104 @@
/////////////////// Styles for inline TC download links ////////////////////////
#tc-downloads-area {
margin: 0 0 2rem;
#tc-download-btn {
display: inline-block;
}
.tc-version {
font-size: 1rem;
color: rgba($article-text, .6);
margin-bottom: .5rem;
}
.tc-build-table {
margin-bottom: 1rem;
}
.tc-build-row {
display: flex;
align-items: center;
border-bottom: 1px solid $article-hr;
&:first-child {
border-top: 1px solid $article-hr;
}
}
.tc-build-download {
flex: 1 1 auto;
margin-right: 1rem;
}
.tc-download-link {
font-size: 1rem;
padding: .35rem 1rem;
white-space: nowrap;
}
.tc-build-sha {
flex: 1 1 auto;
display: flex;
justify-content: flex-end;
gap: .1rem;
min-width: 0;
max-width: 25rem;
code {
font-size: .8rem;
padding: .15rem .65rem;
color: $article-code;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
.tc-copy-sha {
flex-shrink: 0;
background: $article-code-bg;
border: none;
border-radius: $radius;
padding: .2rem .6rem;
font-family: 'icomoon-v4';
font-size: .9rem;
color: rgba($article-code, .85);
cursor: pointer;
transition: color .2s;
&:hover {
color: $article-code-link-hover;
}
}
}
}
//////////////////////////////// MEDIA QUERIES /////////////////////////////////
@include media(small) {
#tc-downloads-area {
.tc-build-row {
flex-direction: column;
align-items: flex-start;
gap: .5rem;
}
.tc-build-download {
margin-right: 0;
width: 100%;
}
.tc-download-link {
width: 100%;
text-align: center;
}
.tc-build-sha {
width: 100%;
max-width: 100%;
margin-bottom: .5rem;
}
}
}

View File

@ -0,0 +1,226 @@
////////////////////// Styles for the TC downloads modal ////////////////////////
#tc-downloads {
// Reset Marketo's inline styles and defaults ────────────────────────────
.mktoForm {
width: 100% !important;
font-family: $proxima !important;
font-size: 1rem !important;
color: $article-text !important;
padding: 0 !important;
}
// Hide Marketo's offset/gutter spacers
.mktoOffset,
.mktoGutter {
display: none !important;
}
// Form layout: 2-column grid for first 4 fields
.mktoForm {
display: grid !important;
grid-template-columns: 1fr 1fr;
gap: 0 1.75rem;
}
// Visible field rows (First Name, Last Name, Company, Job Title)
// occupy one grid cell each; pairs share a row automatically
.mktoFormRow {
margin-bottom: .5rem;
}
// Hidden field rows collapse so they don't disrupt the grid
.mktoFormRow:has(input[type='hidden']) {
display: none;
}
// Email, Privacy, and Submit span full width
.mktoFormRow:has(.mktoEmailField),
.mktoFormRow:has(.mktoCheckboxList),
.mktoButtonRow {
grid-column: 1 / -1;
}
.mktoFieldDescriptor,
.mktoFieldWrap {
width: 100% !important;
margin-bottom: 0 !important;
}
// Labels
.mktoLabel {
display: flex !important;
align-items: baseline;
width: 100% !important;
font-family: $proxima !important;
font-weight: $medium !important;
font-size: .9rem !important;
color: $article-bold !important;
padding: .5rem 0 .1rem !important;
}
.mktoAsterix {
order: 1;
color: #e85b5b !important;
float: none !important;
padding-left: .15rem;
}
// Text inputs
.mktoField.mktoTextField,
.mktoField.mktoEmailField {
width: 100% !important;
font-family: $proxima !important;
font-weight: $medium !important;
font-size: 1rem !important;
background: rgba($article-text, .06) !important;
border-radius: $radius !important;
border: 1px solid rgba($article-text, .06) !important;
padding: .5em !important;
color: $article-text !important;
transition-property: border;
transition-duration: .2s;
box-shadow: none !important;
&:focus {
outline: none !important;
border-color: $sidebar-search-highlight !important;
}
&::placeholder {
color: rgba($sidebar-search-text, .45) !important;
font-weight: normal !important;
font-style: italic !important;
}
}
// Checkbox / privacy consent
.mktoFormRow:has(.mktoCheckboxList) .mktoAsterix {
display: none !important;
}
.mktoCheckboxList {
width: 100% !important;
label {
font-family: $proxima !important;
font-size: .85rem !important;
line-height: 1.4 !important;
color: rgba($article-text, .7) !important;
&::after {
content: '*';
color: #e85b5b;
font-weight: $medium;
font-size: .95rem;
font-style: normal;
}
a {
color: $article-link !important;
font-weight: $medium;
text-decoration: none;
transition: color .2s;
&:hover {
color: $article-link-hover !important;
}
}
}
input[type='checkbox'] {
margin: .2rem .65rem 0 0;
}
}
// Submit button
.mktoButtonRow {
margin-top: 1rem;
display: flex;
justify-content: flex-end;
}
.mktoButtonWrap {
margin-left: 0 !important;
}
.mktoButton {
@include gradient($article-btn-gradient);
border: none !important;
border-radius: $radius !important;
padding: .65rem 1.5rem !important;
font-family: $proxima !important;
font-weight: $medium !important;
font-size: 1rem !important;
color: $g20-white !important;
cursor: pointer;
transition: opacity .2s;
&:hover {
@include gradient($article-btn-gradient-hover);
}
}
// Validation errors
// Marketo positions errors absolutely; make them flow inline instead
.mktoFieldWrap {
position: relative;
}
.mktoError {
position: relative !important;
bottom: auto !important;
left: auto !important;
right: auto !important;
pointer-events: none;
.mktoErrorArrow {
display: none !important;
}
.mktoErrorMsg {
font-family: $proxima !important;
font-size: .8rem !important;
max-width: 100% !important;
background: none !important;
border: none !important;
color: #e85b5b !important;
padding: .15rem 0 0 !important;
box-shadow: none !important;
text-shadow: none !important;
}
}
// Custom error message
.tc-form-error {
margin: .75rem 0;
padding: .5rem .75rem;
background: rgba(#e85b5b, .1);
border: 1px solid rgba(#e85b5b, .3);
border-radius: $radius;
color: #e85b5b;
font-size: .9rem;
}
// Clear floats
.mktoClear {
clear: both;
}
}
//////////////////////////////// MEDIA QUERIES /////////////////////////////////
@include media(small) {
#tc-downloads {
.mktoForm {
grid-template-columns: 1fr;
}
.mktoFormRow:has(.mktoEmailField),
.mktoFormRow:has(.mktoCheckboxList),
.mktoButtonRow {
grid-column: auto;
}
}
}

View File

@ -289,8 +289,8 @@ Run the query on any data node for each retention policy and database.
Here, we use InfluxDB's [CLI](/enterprise_influxdb/v1/tools/influx-cli/use-influx/) to execute the query:
```
> ALTER RETENTION POLICY "<retention_policy_name>" ON "<database_name>" REPLICATION 3
>
ALTER RETENTION POLICY "<retention_policy_name>" ON "<database_name>" REPLICATION 3
```
A successful `ALTER RETENTION POLICY` query returns no results.

View File

@ -124,11 +124,11 @@ CREATE USER <username> WITH PASSWORD '<password>'
###### CLI example
```js
> CREATE USER todd WITH PASSWORD 'influxdb41yf3'
> CREATE USER alice WITH PASSWORD 'wonder\'land'
> CREATE USER "rachel_smith" WITH PASSWORD 'asdf1234!'
> CREATE USER "monitoring-robot" WITH PASSWORD 'XXXXX'
> CREATE USER "$savyadmin" WITH PASSWORD 'm3tr1cL0v3r'
CREATE USER todd WITH PASSWORD 'influxdb41yf3'
CREATE USER alice WITH PASSWORD 'wonder\'land'
CREATE USER "rachel_smith" WITH PASSWORD 'asdf1234!'
CREATE USER "monitoring-robot" WITH PASSWORD 'XXXXX'
CREATE USER "$savyadmin" WITH PASSWORD 'm3tr1cL0v3r'
```
{{% note %}}
@ -169,13 +169,13 @@ CLI examples:
`GRANT` `READ` access to `todd` on the `NOAA_water_database` database:
```sql
> GRANT READ ON "NOAA_water_database" TO "todd"
GRANT READ ON "NOAA_water_database" TO "todd"
```
`GRANT` `ALL` access to `todd` on the `NOAA_water_database` database:
```sql
> GRANT ALL ON "NOAA_water_database" TO "todd"
GRANT ALL ON "NOAA_water_database" TO "todd"
```
##### `REVOKE` `READ`, `WRITE`, or `ALL` database privileges from an existing user
@ -189,13 +189,13 @@ CLI examples:
`REVOKE` `ALL` privileges from `todd` on the `NOAA_water_database` database:
```sql
> REVOKE ALL ON "NOAA_water_database" FROM "todd"
REVOKE ALL ON "NOAA_water_database" FROM "todd"
```
`REVOKE` `WRITE` privileges from `todd` on the `NOAA_water_database` database:
```sql
> REVOKE WRITE ON "NOAA_water_database" FROM "todd"
REVOKE WRITE ON "NOAA_water_database" FROM "todd"
```
{{% note %}}
@ -230,7 +230,7 @@ SET PASSWORD FOR <username> = '<password>'
CLI example:
```sql
> SET PASSWORD FOR "todd" = 'password4todd'
SET PASSWORD FOR "todd" = 'password4todd'
```
{{% note %}}
@ -250,6 +250,6 @@ DROP USER <username>
CLI example:
```sql
> DROP USER "todd"
DROP USER "todd"
```

View File

@ -28,9 +28,9 @@ For example, simple addition:
Assign an expression to a variable using the assignment operator, `=`.
```js
> s = "this is a string"
> i = 1 // an integer
> f = 2.0 // a floating point number
s = "this is a string"
i = 1 // an integer
f = 2.0 // a floating point number
```
Type the name of a variable to print its value:
@ -48,7 +48,7 @@ this is a string
Flux also supports records. Each value in a record can be a different data type.
```js
> o = {name:"Jim", age: 42, "favorite color": "red"}
o = {name:"Jim", age: 42, "favorite color": "red"}
```
Use **dot notation** to access the properties of a record:
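For instance, a sketch of property access on the record `o` defined above (bracket notation is needed for keys that contain spaces):

```js
o.name              // "Jim"
o["favorite color"] // "red"
```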

View File

@ -70,7 +70,7 @@ the CQ has no `FOR` clause.
#### 1. Create the database
```sql
> CREATE DATABASE "food_data"
CREATE DATABASE "food_data"
```
#### 2. Create a two-hour `DEFAULT` retention policy
@ -85,7 +85,7 @@ Use the
statement to create a `DEFAULT` RP:
```sql
> CREATE RETENTION POLICY "two_hours" ON "food_data" DURATION 2h REPLICATION 1 DEFAULT
CREATE RETENTION POLICY "two_hours" ON "food_data" DURATION 2h REPLICATION 1 DEFAULT
```
That query creates an RP called `two_hours` that exists in the database
@ -116,7 +116,7 @@ Use the
statement to create a non-`DEFAULT` retention policy:
```sql
> CREATE RETENTION POLICY "a_year" ON "food_data" DURATION 52w REPLICATION 1
CREATE RETENTION POLICY "a_year" ON "food_data" DURATION 52w REPLICATION 1
```
That query creates a retention policy (RP) called `a_year` that exists in the database

View File

@ -839,8 +839,7 @@ DROP CONTINUOUS QUERY <cq_name> ON <database_name>
Drop the `idle_hands` CQ from the `telegraf` database:
```sql
> DROP CONTINUOUS QUERY "idle_hands" ON "telegraf"`
>
DROP CONTINUOUS QUERY "idle_hands" ON "telegraf"
```
### Altering continuous queries

View File

@ -380,8 +380,7 @@ The following query returns no data because it specifies a single tag key (`loca
the `SELECT` clause:
```sql
> SELECT "location" FROM "h2o_feet"
>
SELECT "location" FROM "h2o_feet"
```
To return any data associated with the `location` tag key, the query's `SELECT`
@ -597,7 +596,7 @@ separating logic with parentheses.
#### Select data that have specific timestamps
```sql
> SELECT * FROM "h2o_feet" WHERE time > now() - 7d
SELECT * FROM "h2o_feet" WHERE time > now() - 7d
```
The query returns data from the `h2o_feet` measurement that have [timestamps](/enterprise_influxdb/v1/concepts/glossary/#timestamp)
@ -1592,8 +1591,8 @@ the query's time range.
Note that `fill(800)` has no effect on the query results.
```sql
> SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location" = 'coyote_creek' AND time >= '2015-09-18T22:00:00Z' AND time <= '2015-09-18T22:18:00Z' GROUP BY time(12m) fill(800)
>
SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location" = 'coyote_creek' AND time >= '2015-09-18T22:00:00Z' AND time <= '2015-09-18T22:18:00Z' GROUP BY time(12m) fill(800)
```
##### Queries with `fill(previous)` when the previous result falls outside the query's time range
@ -2639,7 +2638,7 @@ The whitespace between `-` or `+` and the [duration literal](/enterprise_influxd
#### Specify a time range with relative time
```sql
> SELECT "water_level" FROM "h2o_feet" WHERE time > now() - 1h
SELECT "water_level" FROM "h2o_feet" WHERE time > now() - 1h
```
The query returns data with timestamps that occur within the past hour.
@ -2686,7 +2685,7 @@ a `GROUP BY time()` clause must provide an alternative upper bound in the
Use the [CLI](/enterprise_influxdb/v1/tools/influx-cli/use-influx/) to write a point to the `NOAA_water_database` that occurs after `now()`:
```sql
> INSERT h2o_feet,location=santa_monica water_level=3.1 1587074400000000000
INSERT h2o_feet,location=santa_monica water_level=3.1 1587074400000000000
```
Run a `GROUP BY time()` query that covers data with timestamps between
@ -2722,8 +2721,8 @@ the lower bound to `now()` such that the query's time range is between
`now()` and `now()`:
```sql
> SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location"='santa_monica' AND time >= now() GROUP BY time(12m) fill(none)
>
SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location"='santa_monica' AND time >= now() GROUP BY time(12m) fill(none)
```
### Configuring the returned timestamps
@ -2831,8 +2830,8 @@ includes an `m` and `water_level` is greater than three.
#### Use a regular expression to specify a tag with no value in the WHERE clause
```sql
> SELECT * FROM "h2o_feet" WHERE "location" !~ /./
>
SELECT * FROM "h2o_feet" WHERE "location" !~ /./
```
The query selects all data from the `h2o_feet` measurement where the `location`
@ -2989,8 +2988,8 @@ The query returns the integer form of `water_level`'s float [field values](/ente
#### Cast float field values to strings (this functionality is not supported)
```sql
> SELECT "water_level"::string FROM "h2o_feet" LIMIT 4
>
SELECT "water_level"::string FROM "h2o_feet" LIMIT 4
```
The query returns no data as casting a float field value to a string is not

View File

@ -87,8 +87,8 @@ If you attempt to create a database that already exists, InfluxDB does nothing a
##### Create a database
```
> CREATE DATABASE "NOAA_water_database"
>
CREATE DATABASE "NOAA_water_database"
```
The query creates a database called `NOAA_water_database`.
@ -97,8 +97,8 @@ The query creates a database called `NOAA_water_database`.
##### Create a database with a specific retention policy
```
> CREATE DATABASE "NOAA_water_database" WITH DURATION 3d REPLICATION 1 SHARD DURATION 1h NAME "liquid"
>
CREATE DATABASE "NOAA_water_database" WITH DURATION 3d REPLICATION 1 SHARD DURATION 1h NAME "liquid"
```
The query creates a database called `NOAA_water_database`.
@ -114,8 +114,8 @@ DROP DATABASE <database_name>
Drop the database NOAA_water_database:
```bash
> DROP DATABASE "NOAA_water_database"
>
DROP DATABASE "NOAA_water_database"
```
A successful `DROP DATABASE` query returns an empty result.
@ -135,19 +135,19 @@ DROP SERIES FROM <measurement_name[,measurement_name]> WHERE <tag_key>='<tag_val
Drop all series from a single measurement:
```sql
> DROP SERIES FROM "h2o_feet"
DROP SERIES FROM "h2o_feet"
```
Drop series with a specific tag pair from a single measurement:
```sql
> DROP SERIES FROM "h2o_feet" WHERE "location" = 'santa_monica'
DROP SERIES FROM "h2o_feet" WHERE "location" = 'santa_monica'
```
Drop all points in the series that have a specific tag pair from all measurements in the database:
```sql
> DROP SERIES WHERE "location" = 'santa_monica'
DROP SERIES WHERE "location" = 'santa_monica'
```
A successful `DROP SERIES` query returns an empty result.
@ -168,25 +168,25 @@ DELETE FROM <measurement_name> WHERE [<tag_key>='<tag_value>'] | [<time interval
Delete all data associated with the measurement `h2o_feet`:
```sql
> DELETE FROM "h2o_feet"
DELETE FROM "h2o_feet"
```
Delete all data associated with the measurement `h2o_quality` and where the tag `randtag` equals `3`:
```sql
> DELETE FROM "h2o_quality" WHERE "randtag" = '3'
DELETE FROM "h2o_quality" WHERE "randtag" = '3'
```
Delete all data in the database that occur before January 01, 2020:
```sql
> DELETE WHERE time < '2020-01-01'
DELETE WHERE time < '2020-01-01'
```
Delete all data associated with the measurement `h2o_feet` in retention policy `one_day`:
```sql
> DELETE FROM "one_day"."h2o_feet"
DELETE FROM "one_day"."h2o_feet"
```
A successful `DELETE` query returns an empty result.
@ -216,7 +216,7 @@ DROP MEASUREMENT <measurement_name>
Delete the measurement `h2o_feet`:
```sql
> DROP MEASUREMENT "h2o_feet"
DROP MEASUREMENT "h2o_feet"
```
> **Note:** `DROP MEASUREMENT` drops all data and series in the measurement.
@ -238,9 +238,9 @@ DROP SHARD <shard_id_number>
```
Delete the shard with the id `1`:
```
> DROP SHARD 1
>
```sql
DROP SHARD 1
```
A successful `DROP SHARD` query returns an empty result.
@ -345,9 +345,9 @@ This setting is optional.
##### Create a retention policy
```
> CREATE RETENTION POLICY "one_day_only" ON "NOAA_water_database" DURATION 1d REPLICATION 1
>
```sql
CREATE RETENTION POLICY "one_day_only" ON "NOAA_water_database" DURATION 1d REPLICATION 1
```
The query creates a retention policy called `one_day_only` for the database
`NOAA_water_database` with a one day duration and a replication factor of one.
@ -355,8 +355,8 @@ The query creates a retention policy called `one_day_only` for the database
##### Create a DEFAULT retention policy
```sql
> CREATE RETENTION POLICY "one_day_only" ON "NOAA_water_database" DURATION 23h60m REPLICATION 1 DEFAULT
>
CREATE RETENTION POLICY "one_day_only" ON "NOAA_water_database" DURATION 23h60m REPLICATION 1 DEFAULT
```
The query creates the same retention policy as the one in the example above, but
@ -381,14 +381,14 @@ ALTER RETENTION POLICY <retention_policy_name> ON <database_name> [DURATION <dur
First, create the retention policy `what_is_time` with a `DURATION` of two days:
```sql
> CREATE RETENTION POLICY "what_is_time" ON "NOAA_water_database" DURATION 2d REPLICATION 1
>
CREATE RETENTION POLICY "what_is_time" ON "NOAA_water_database" DURATION 2d REPLICATION 1
```
Modify `what_is_time` to have a three week `DURATION`, a two hour shard group duration, and make it the `DEFAULT` retention policy for `NOAA_water_database`.
```sql
> ALTER RETENTION POLICY "what_is_time" ON "NOAA_water_database" DURATION 3w SHARD DURATION 2h DEFAULT
>
ALTER RETENTION POLICY "what_is_time" ON "NOAA_water_database" DURATION 3w SHARD DURATION 2h DEFAULT
```
In the last example, `what_is_time` retains its original replication factor of 1.
@ -407,9 +407,9 @@ DROP RETENTION POLICY <retention_policy_name> ON <database_name>
```
Delete the retention policy `what_is_time` in the `NOAA_water_database` database:
```bash
> DROP RETENTION POLICY "what_is_time" ON "NOAA_water_database"
>
```sql
DROP RETENTION POLICY "what_is_time" ON "NOAA_water_database"
```
A successful `DROP RETENTION POLICY` query returns an empty result.

View File

@ -50,9 +50,9 @@ digits, or underscores and do not begin with a digit.
Throughout the query language exploration, we'll use the database name `NOAA_water_database`:
```
> CREATE DATABASE NOAA_water_database
> exit
```sql
CREATE DATABASE NOAA_water_database
exit
```
### Download and write the data to InfluxDB

View File

@ -636,7 +636,7 @@ Executes the specified SELECT statement and returns data on the query performanc
For example, executing the following statement:
```sql
> explain analyze select mean(usage_steal) from cpu where time >= '2018-02-22T00:00:00Z' and time < '2018-02-22T12:00:00Z'
explain analyze select mean(usage_steal) from cpu where time >= '2018-02-22T00:00:00Z' and time < '2018-02-22T12:00:00Z'
```
May produce an output similar to the following:

View File

@ -407,8 +407,8 @@ Use `insert into <retention policy> <line protocol>` to write data to a specific
Write data to a single field in the measurement `treasures` with the tag `captain_id = pirate_king`.
`influx` automatically writes the point to the database's `DEFAULT` retention policy.
```
> INSERT treasures,captain_id=pirate_king value=2
>
INSERT treasures,captain_id=pirate_king value=2
```
Write the same point to the already-existing retention policy `oneday`:

View File

@ -100,7 +100,7 @@ In Query 1, the field key `duration` is an InfluxQL Keyword.
Double quote `duration` to avoid the error:
```sql
> SELECT "duration" FROM runs
SELECT "duration" FROM runs
```
*Query 2:*
@ -114,7 +114,7 @@ In Query 2, the retention policy name `limit` is an InfluxQL Keyword.
Double quote `limit` to avoid the error:
```sql
> CREATE RETENTION POLICY "limit" ON telegraf DURATION 1d REPLICATION 1
CREATE RETENTION POLICY "limit" ON telegraf DURATION 1d REPLICATION 1
```
While using double quotes is an acceptable workaround, we recommend that you avoid using InfluxQL keywords as identifiers for simplicity's sake.
@ -141,7 +141,7 @@ The `CREATE USER` statement requires single quotation marks around the password
string:
```sql
> CREATE USER penelope WITH PASSWORD 'timeseries4dayz'
CREATE USER penelope WITH PASSWORD 'timeseries4dayz'
```
Note that you should not include the single quotes when authenticating requests.
@ -257,7 +257,7 @@ Replace the timestamp with a UNIX timestamp to avoid the error and successfully
write the point to InfluxDB:
```sql
> INSERT pineapple,fresh=true value=1 1439938800000000000
INSERT pineapple,fresh=true value=1 1439938800000000000
```
### InfluxDB line protocol syntax
@ -283,7 +283,7 @@ InfluxDB assumes that the `value=9` field is the timestamp and returns an error.
Use a comma instead of a space between the measurement and tag to avoid the error:
```sql
> INSERT hens,location=2 value=9
INSERT hens,location=2 value=9
```
*Write 2*
@ -300,7 +300,7 @@ InfluxDB assumes that the `happy=3` field is the timestamp and returns an error.
Use a comma instead of a space between the two fields to avoid the error:
```sql
> INSERT cows,name=daisy milk_prod=3,happy=3
INSERT cows,name=daisy milk_prod=3,happy=3
```
**Resources:**

View File

@ -469,7 +469,7 @@ SELECT MEAN("dogs" - "cats") from "pet_daycare"
Instead, use a subquery to get the same result:
```sql
> SELECT MEAN("difference") FROM (SELECT "dogs" - "cat" AS "difference" FROM "pet_daycare")
SELECT MEAN("difference") FROM (SELECT "dogs" - "cats" AS "difference" FROM "pet_daycare")
```
See the
@ -753,10 +753,10 @@ In the following example, the first query covers data with timestamps between
`2015-09-18T21:30:00Z` and `now()`.
The second query covers data with timestamps between `2015-09-18T21:30:00Z` and 180 weeks from `now()`.
```
> SELECT MEAN("boards") FROM "hillvalley" WHERE time >= '2015-09-18T21:30:00Z' GROUP BY time(12m) fill(none)
SELECT MEAN("boards") FROM "hillvalley" WHERE time >= '2015-09-18T21:30:00Z' GROUP BY time(12m) fill(none)
> SELECT MEAN("boards") FROM "hillvalley" WHERE time >= '2015-09-18T21:30:00Z' AND time <= now() + 180w GROUP BY time(12m) fill(none)
SELECT MEAN("boards") FROM "hillvalley" WHERE time >= '2015-09-18T21:30:00Z' AND time <= now() + 180w GROUP BY time(12m) fill(none)
```
Note that the `WHERE` clause must provide an alternative **upper** bound to
@ -765,8 +765,8 @@ the lower bound to `now()` such that the query's time range is between
`now()` and `now()`:
```sql
> SELECT MEAN("boards") FROM "hillvalley" WHERE time >= now() GROUP BY time(12m) fill(none)
>
SELECT MEAN("boards") FROM "hillvalley" WHERE time >= now() GROUP BY time(12m) fill(none)
```
For more on time syntax in queries, see [Data Exploration](/enterprise_influxdb/v1/query_language/explore-data/#time-syntax).
@ -856,8 +856,8 @@ time count
We [create](/enterprise_influxdb/v1/query_language/manage-database/#create-retention-policies-with-create-retention-policy) a new `DEFAULT` RP (`two_hour`) and perform the same query:
```sql
> SELECT count(flounders) FROM fleeting
>
SELECT count(flounders) FROM fleeting
```
To query the old data, we must specify the old `DEFAULT` RP by fully qualifying `fleeting`:
@ -879,8 +879,8 @@ with time intervals.
Example:
```sql
> SELECT * FROM "absolutismus" WHERE time = '2016-07-31T20:07:00Z' OR time = '2016-07-31T23:07:17Z'
>
SELECT * FROM "absolutismus" WHERE time = '2016-07-31T20:07:00Z' OR time = '2016-07-31T23:07:17Z'
```
{{% warn %}} [GitHub Issue #7530](https://github.com/influxdata/influxdb/issues/7530)

View File

@ -75,7 +75,7 @@ To learn how field value type discrepancies can affect `SELECT *` queries, see
#### Write the field value `-1.234456e+78` as a float to InfluxDB
```sql
> INSERT mymeas value=-1.234456e+78
INSERT mymeas value=-1.234456e+78
```
InfluxDB supports field values specified in scientific notation.
@ -83,25 +83,25 @@ InfluxDB supports field values specified in scientific notation.
#### Write a field value `1.0` as a float to InfluxDB
```sql
> INSERT mymeas value=1.0
INSERT mymeas value=1.0
```
#### Write the field value `1` as a float to InfluxDB
```sql
> INSERT mymeas value=1
INSERT mymeas value=1
```
#### Write the field value `1` as an integer to InfluxDB
```sql
> INSERT mymeas value=1i
INSERT mymeas value=1i
```
#### Write the field value `stringing along` as a string to InfluxDB
```sql
> INSERT mymeas value="stringing along"
INSERT mymeas value="stringing along"
```
Always double quote string field values. More on quoting [below](#quoting).
@ -109,14 +109,14 @@ Always double quote string field values. More on quoting [below](#quoting).
#### Write the field value `true` as a Boolean to InfluxDB
```sql
> INSERT mymeas value=true
INSERT mymeas value=true
```
Do not quote Boolean field values.
The following statement writes `true` as a string field value to InfluxDB:
```sql
> INSERT mymeas value="true"
INSERT mymeas value="true"
```
#### Attempt to write a string to a field that previously accepted floats
@ -132,9 +132,9 @@ ERR: {"error":"field type conflict: input field \"value\" on measurement \"mymea
If the timestamps on the float and string are not stored in the same shard:
```sql
> INSERT mymeas value=3 1465934559000000000
> INSERT mymeas value="stringing along" 1466625759000000000
>
INSERT mymeas value=3 1465934559000000000
INSERT mymeas value="stringing along" 1466625759000000000
```
## Quoting, special characters, and additional naming guidelines
@ -231,7 +231,7 @@ You do not need to escape other special characters.
##### Write a point with special characters
```sql
> INSERT "measurement\ with\ quo⚡es\ and\ emoji",tag\ key\ with\ sp🚀ces=tag\,value\,with"commas" field_k\ey="string field value, only \" need be esc🍭ped"
INSERT "measurement\ with\ quo⚡es\ and\ emoji",tag\ key\ with\ sp🚀ces=tag\,value\,with"commas" field_k\ey="string field value, only \" need be esc🍭ped"
```
The system writes a point where the measurement is `"measurement with quo⚡es and emoji"`, the tag key is `tag key with sp🚀ces`, the

View File

@ -245,9 +245,9 @@ But, writing an integer to a field that previously accepted floats succeeds if
InfluxDB stores the integer in a new shard:
```sql
> INSERT weather,location=us-midwest temperature=82 1465839830100400200
> INSERT weather,location=us-midwest temperature=81i 1467154750000000000
>
INSERT weather,location=us-midwest temperature=82 1465839830100400200
INSERT weather,location=us-midwest temperature=81i 1467154750000000000
```
See

View File

@ -14,6 +14,50 @@ alt_links:
---
## v1.12.3 {date="2026-01-12"}
### Features
- Add [`https-insecure-certificate` configuration option](/influxdb/v1/administration/config/#https-insecure-certificate)
to skip file permission checking for TLS certificate and private key files.
- Add [`advanced-expiration` TLS configuration option](/influxdb/v1/administration/config/#advanced-expiration)
to configure how far in advance to log warnings about TLS certificate expiration.
- Add TLS certificate reloading on `SIGHUP`.
- Add `config` and `cq` (continuous query) diagnostics to the `/debug/vars` endpoint.
- Improve dropped point logging.
- Show user when displaying or logging queries.
- Add `time_format` parameter for the HTTP API.
- Use dynamic logging levels (`zap.AtomicLevel`).
- Report user query bytes.
### Bug fixes
- Fix `FUTURE LIMIT` and `PAST LIMIT`
[clause order](/influxdb/v1/query_language/manage-database/#future-limit)
in retention policy statements.
- Add locking in `ClearBadShardList`.
- Stop noisy logging about phantom shards that do not belong to a node.
- Resolve `RLock()` leakage in `Store.DeleteSeries()`.
- Fix condition check for optimization of array cursor (tsm1).
- Run `init.sh` `buildtsi` as `influxdb` user.
- Reduce unnecessary purger operations and logging.
- Sort files for adjacency testing.
- Fix operator in host detection (systemd).
- Use correct path in open WAL error message.
- Handle nested low-level files in compaction.
- Correct error logic for writing empty index files.
- Reduce lock contention and races in purger.
- Fix bug with authorizer leakage in `SHOW QUERIES`.
- Rename compact throughput logging keys.
- Fix `https-insecure-certificate` not handled properly in httpd.
- Prevent level regression when compacting mixed-level TSM files.
### Other
- Update Go to 1.24.13.
---
## v1.12.2 {date="2025-09-15"}
### Features
@ -340,7 +384,7 @@ reporting an earlier error.
- Use latest version of InfluxQL package.
- Add `-lponly` flag to [`influx export`](/influxdb/v2/reference/cli/influx/export/) sub-command.
- Add the ability to [track number of values](/platform/monitoring/influxdata-platform/tools/measurements-internal/#valueswrittenok) written via the [/debug/vars HTTP endpoint](/influxdb/v1/tools/api/#debug-vars-http-endpoint).
- Add the ability to [track number of values](/platform/monitoring/influxdata-platform/tools/measurements-internal/#valueswrittenok) written via the [`/debug/vars` HTTP endpoint](/influxdb/v1/tools/api/#debugvars-http-endpoint).
- Update UUID library from [github.com/satori/go.uuid](https://github.com/satori/go.uuid) to [github.com/gofrs/uuid](https://github.com/gofrs/uuid).
### Bug fixes
@ -637,7 +681,7 @@ Support for the Flux language and queries has been added in this release. To beg
- Enable Flux using the new configuration setting
[`[http] flux-enabled = true`](/influxdb/v1/administration/config/#flux-enabled).
- Use the new [`influx -type=flux`](/influxdb/v1/tools/shell/#type) option to enable the Flux REPL shell for creating Flux queries.
- Use the new [`influx -type=flux`](/influxdb/v1/tools/influx-cli/) option to enable the Flux REPL shell for creating Flux queries.
- Read about Flux and the Flux language, enabling Flux, or jump into the getting started and other guides.
#### Time Series Index (TSI) query performance and throughputs improvements

View File

@ -355,12 +355,12 @@ CREATE USER <username> WITH PASSWORD '<password>'
###### CLI example
```js
> CREATE USER todd WITH PASSWORD 'influxdb41yf3'
> CREATE USER alice WITH PASSWORD 'wonder\'land'
> CREATE USER "rachel_smith" WITH PASSWORD 'asdf1234!'
> CREATE USER "monitoring-robot" WITH PASSWORD 'XXXXX'
> CREATE USER "$savyadmin" WITH PASSWORD 'm3tr1cL0v3r'
>
CREATE USER todd WITH PASSWORD 'influxdb41yf3'
CREATE USER alice WITH PASSWORD 'wonder\'land'
CREATE USER "rachel_smith" WITH PASSWORD 'asdf1234!'
CREATE USER "monitoring-robot" WITH PASSWORD 'XXXXX'
CREATE USER "$savyadmin" WITH PASSWORD 'm3tr1cL0v3r'
```
> [!Important]
@ -397,15 +397,15 @@ CLI examples:
`GRANT` `READ` access to `todd` on the `NOAA_water_database` database:
```sql
> GRANT READ ON "NOAA_water_database" TO "todd"
>
GRANT READ ON "NOAA_water_database" TO "todd"
```
`GRANT` `ALL` access to `todd` on the `NOAA_water_database` database:
```sql
> GRANT ALL ON "NOAA_water_database" TO "todd"
>
GRANT ALL ON "NOAA_water_database" TO "todd"
```
##### `REVOKE` `READ`, `WRITE`, or `ALL` database privileges from an existing user
@ -419,15 +419,15 @@ CLI examples:
`REVOKE` `ALL` privileges from `todd` on the `NOAA_water_database` database:
```sql
> REVOKE ALL ON "NOAA_water_database" FROM "todd"
>
REVOKE ALL ON "NOAA_water_database" FROM "todd"
```
`REVOKE` `WRITE` privileges from `todd` on the `NOAA_water_database` database:
```sql
> REVOKE WRITE ON "NOAA_water_database" FROM "todd"
>
REVOKE WRITE ON "NOAA_water_database" FROM "todd"
```
> **Note:** If a user with `ALL` privileges has `WRITE` privileges revoked, they are left with `READ` privileges, and vice versa.
@ -460,8 +460,8 @@ SET PASSWORD FOR <username> = '<password>'
CLI example:
```sql
> SET PASSWORD FOR "todd" = 'influxdb4ever'
>
SET PASSWORD FOR "todd" = 'influxdb4ever'
```
> [!Note]
@ -480,8 +480,8 @@ DROP USER <username>
CLI example:
```sql
> DROP USER "todd"
>
DROP USER "todd"
```
## Authentication and authorization HTTP errors

View File

@ -933,7 +933,7 @@ effect if [`auth-enabled`](#auth-enabled) is set to `false`.
User-supplied [HTTP response headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers).
Configure this section to return
[security headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers#Security)
[security headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers#security)
such as `X-Frame-Options` or `Content Security Policy` where needed.
Example:
@ -964,9 +964,16 @@ specified, the `httpd` service will try to load the private key from the
`https-certificate` file. If a separate `https-private-key` file is specified,
the `httpd` service will load the private key from the `https-private-key` file.
**Default**: `""`
**Default**: `""`
**Environment variable**: `INFLUXDB_HTTP_HTTPS_PRIVATE_KEY`
#### https-insecure-certificate {metadata="v1.12.3+"}
Skips file permission checking for `https-certificate` and `https-private-key` when `true`.
**Default**: `false`
**Environment variable**: `INFLUXDB_HTTP_HTTPS_INSECURE_CERTIFICATE`
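For example, a minimal `[http]` configuration snippet that enables this option might look like the following (a sketch; the certificate and key paths are placeholders):

```toml
[http]
  https-enabled = true
  https-certificate = "/etc/ssl/influxdb.pem"
  https-private-key = "/etc/ssl/influxdb-key.pem"
  # Skip file permission checks for the certificate and key files
  https-insecure-certificate = true
```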
#### shared-secret
The shared secret used to validate public API requests using JWT tokens.
@ -1638,5 +1645,12 @@ include: `tls1.0`, `tls1.1`, `tls1.2`, and `tls1.3`. If not specified,
In this example, `tls1.3` specifies the maximum version as TLS 1.3, which is
consistent with the behavior of previous InfluxDB releases.
**Default**: `tls1.3`
**Default**: `tls1.3`
**Environment variable**: `INFLUXDB_TLS_MAX_VERSION`
#### advanced-expiration {metadata="v1.12.3+"}
Sets how far in advance to log warnings about TLS certificate expiration.
**Default**: `5d`
**Environment variable**: `INFLUXDB_TLS_ADVANCED_EXPIRATION`
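For example, to start logging expiration warnings 10 days before a certificate expires (a sketch, assuming the `[tls]` configuration section):

```toml
[tls]
  # Warn in the logs starting 10 days before a TLS certificate expires
  advanced-expiration = "10d"
```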

View File

@ -54,9 +54,9 @@ For example, simple addition:
Assign an expression to a variable using the assignment operator, `=`.
```js
> s = "this is a string"
> i = 1 // an integer
> f = 2.0 // a floating point number
s = "this is a string"
i = 1 // an integer
f = 2.0 // a floating point number
```
Type the name of a variable to print its value:
@ -74,7 +74,7 @@ this is a string
Flux also supports records. Each value in a record can be a different data type.
```js
> o = {name:"Jim", age: 42, "favorite color": "red"}
o = {name:"Jim", age: 42, "favorite color": "red"}
```
Use **dot notation** to access a property of a record:
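A sketch, using the record `o` defined above:

```js
o.name
```

This returns the value associated with the `name` key, `Jim`.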

View File

@ -72,7 +72,7 @@ the CQ has no `FOR` clause.
#### 1. Create the database
```sql
> CREATE DATABASE "food_data"
CREATE DATABASE "food_data"
```
#### 2. Create a two-hour `DEFAULT` retention policy
@ -87,7 +87,7 @@ Use the
statement to create a `DEFAULT` RP:
```sql
> CREATE RETENTION POLICY "two_hours" ON "food_data" DURATION 2h REPLICATION 1 DEFAULT
CREATE RETENTION POLICY "two_hours" ON "food_data" DURATION 2h REPLICATION 1 DEFAULT
```
That query creates an RP called `two_hours` that exists in the database
@ -118,7 +118,7 @@ Use the
statement to create a non-`DEFAULT` retention policy:
```sql
> CREATE RETENTION POLICY "a_year" ON "food_data" DURATION 52w REPLICATION 1
CREATE RETENTION POLICY "a_year" ON "food_data" DURATION 52w REPLICATION 1
```
That query creates a retention policy (RP) called `a_year` that exists in the database

View File

@ -63,8 +63,8 @@ digits, or underscores and do not begin with a digit.
Throughout this guide, we'll use the database name `mydb`:
```sql
> CREATE DATABASE mydb
>
CREATE DATABASE mydb
```
> **Note:** After hitting enter, a new prompt appears and nothing else is displayed.
@ -141,8 +141,8 @@ temperature,machine=unit42,type=assembly external=25,internal=37 143406746700000
To insert a single time series data point into InfluxDB using the CLI, enter `INSERT` followed by a point:
```sql
> INSERT cpu,host=serverA,region=us_west value=0.64
>
INSERT cpu,host=serverA,region=us_west value=0.64
```
A point with the measurement name of `cpu` and tags `host` and `region` has now been written to the database, with the measured `value` of `0.64`.
@ -166,8 +166,8 @@ That means your timestamp will be different.
Let's try storing another type of data, with two fields in the same measurement:
```sql
> INSERT temperature,machine=unit42,type=assembly external=25,internal=37
>
INSERT temperature,machine=unit42,type=assembly external=25,internal=37
```
To return all fields and tags with a query, you can use the `*` operator:

View File

@ -841,8 +841,8 @@ DROP CONTINUOUS QUERY <cq_name> ON <database_name>
Drop the `idle_hands` CQ from the `telegraf` database:
```sql
> DROP CONTINUOUS QUERY "idle_hands" ON "telegraf"`
>
DROP CONTINUOUS QUERY "idle_hands" ON "telegraf"
```
### Altering continuous queries

View File

@ -382,8 +382,8 @@ The following query returns no data because it specifies a single tag key (`loca
the `SELECT` clause:
```sql
> SELECT "location" FROM "h2o_feet"
>
SELECT "location" FROM "h2o_feet"
```
To return any data associated with the `location` tag key, the query's `SELECT`
@ -599,7 +599,7 @@ separating logic with parentheses.
#### Select data that have specific timestamps
```sql
> SELECT * FROM "h2o_feet" WHERE time > now() - 7d
SELECT * FROM "h2o_feet" WHERE time > now() - 7d
```
The query returns data from the `h2o_feet` measurement that have [timestamps](/influxdb/v1/concepts/glossary/#timestamp)
@ -1594,8 +1594,8 @@ the query's time range.
Note that `fill(800)` has no effect on the query results.
```sql
> SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location" = 'coyote_creek' AND time >= '2015-09-18T22:00:00Z' AND time <= '2015-09-18T22:18:00Z' GROUP BY time(12m) fill(800)
>
SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location" = 'coyote_creek' AND time >= '2015-09-18T22:00:00Z' AND time <= '2015-09-18T22:18:00Z' GROUP BY time(12m) fill(800)
```
##### Queries with `fill(previous)` when the previous result falls outside the query's time range
@ -2646,7 +2646,7 @@ The whitespace between `-` or `+` and the [duration literal](/influxdb/v1/query_
#### Specify a time range with relative time
```sql
> SELECT "water_level" FROM "h2o_feet" WHERE time > now() - 1h
SELECT "water_level" FROM "h2o_feet" WHERE time > now() - 1h
```
The query returns data with timestamps that occur within the past hour.
@ -2693,7 +2693,7 @@ a `GROUP BY time()` clause must provide an alternative upper bound in the
Use the [CLI](/influxdb/v1/tools/shell/) to write a point to the `NOAA_water_database` that occurs after `now()`:
```sql
> INSERT h2o_feet,location=santa_monica water_level=3.1 1587074400000000000
INSERT h2o_feet,location=santa_monica water_level=3.1 1587074400000000000
```
Run a `GROUP BY time()` query that covers data with timestamps between
@ -2729,8 +2729,8 @@ the lower bound to `now()` such that the query's time range is between
`now()` and `now()`:
```sql
> SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location"='santa_monica' AND time >= now() GROUP BY time(12m) fill(none)
>
SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location"='santa_monica' AND time >= now() GROUP BY time(12m) fill(none)
```
### Configuring the returned timestamps
@ -2838,8 +2838,8 @@ includes an `m` and `water_level` is greater than three.
#### Use a regular expression to specify a tag with no value in the WHERE clause
```sql
> SELECT * FROM "h2o_feet" WHERE "location" !~ /./
>
SELECT * FROM "h2o_feet" WHERE "location" !~ /./
```
The query selects all data from the `h2o_feet` measurement where the `location`
@ -2996,8 +2996,8 @@ The query returns the integer form of `water_level`'s float [field values](/infl
#### Cast float field values to strings (this functionality is not supported)
```sql
> SELECT "water_level"::string FROM "h2o_feet" LIMIT 4
>
SELECT "water_level"::string FROM "h2o_feet" LIMIT 4
```
The query returns no data as casting a float field value to a string is not

View File

@ -62,15 +62,15 @@ Creates a new database.
#### Syntax
```sql
CREATE DATABASE <database_name> [WITH [DURATION <duration>] [REPLICATION <n>] [SHARD DURATION <duration>] [PAST LIMIT <duration>] [FUTURE LIMIT <duration>] [NAME <retention-policy-name>]]
CREATE DATABASE <database_name> [WITH [DURATION <duration>] [REPLICATION <n>] [SHARD DURATION <duration>] [FUTURE LIMIT <duration>] [PAST LIMIT <duration>] [NAME <retention-policy-name>]]
```
#### Description of syntax
`CREATE DATABASE` requires a database [name](/influxdb/v1/troubleshooting/frequently-asked-questions/#what-words-and-characters-should-i-avoid-when-writing-data-to-influxdb).
The `WITH`, `DURATION`, `REPLICATION`, `SHARD DURATION`, `PAST LIMIT`,
`FUTURE LIMIT`, and `NAME` clauses are optional and create a single
The `WITH`, `DURATION`, `REPLICATION`, `SHARD DURATION`, `FUTURE LIMIT`,
`PAST LIMIT`, and `NAME` clauses are optional and create a single
[retention policy](/influxdb/v1/concepts/glossary/#retention-policy-rp)
associated with the created database.
If you do not specify one of the clauses after `WITH`, the relevant behavior
@ -87,8 +87,8 @@ If you attempt to create a database that already exists, InfluxDB does nothing a
##### Create a database
```
> CREATE DATABASE "NOAA_water_database"
>
CREATE DATABASE "NOAA_water_database"
```
The query creates a database called `NOAA_water_database`.
@ -97,8 +97,8 @@ The query creates a database called `NOAA_water_database`.
##### Create a database with a specific retention policy
```
> CREATE DATABASE "NOAA_water_database" WITH DURATION 3d REPLICATION 1 SHARD DURATION 1h NAME "liquid"
>
CREATE DATABASE "NOAA_water_database" WITH DURATION 3d REPLICATION 1 SHARD DURATION 1h NAME "liquid"
```
The query creates a database called `NOAA_water_database`.
@ -114,8 +114,8 @@ DROP DATABASE <database_name>
Drop the database NOAA_water_database:
```bash
> DROP DATABASE "NOAA_water_database"
>
DROP DATABASE "NOAA_water_database"
```
A successful `DROP DATABASE` query returns an empty result.
@ -135,19 +135,19 @@ DROP SERIES FROM <measurement_name[,measurement_name]> WHERE <tag_key>='<tag_val
Drop all series from a single measurement:
```sql
> DROP SERIES FROM "h2o_feet"
DROP SERIES FROM "h2o_feet"
```
Drop series with a specific tag pair from a single measurement:
```sql
> DROP SERIES FROM "h2o_feet" WHERE "location" = 'santa_monica'
DROP SERIES FROM "h2o_feet" WHERE "location" = 'santa_monica'
```
Drop all points in the series that have a specific tag pair from all measurements in the database:
```sql
> DROP SERIES WHERE "location" = 'santa_monica'
DROP SERIES WHERE "location" = 'santa_monica'
```
A successful `DROP SERIES` query returns an empty result.
@ -168,25 +168,25 @@ DELETE FROM <measurement_name> WHERE [<tag_key>='<tag_value>'] | [<time interval
Delete all data associated with the measurement `h2o_feet`:
```sql
> DELETE FROM "h2o_feet"
DELETE FROM "h2o_feet"
```
Delete all data associated with the measurement `h2o_quality` and where the tag `randtag` equals `3`:
```sql
> DELETE FROM "h2o_quality" WHERE "randtag" = '3'
DELETE FROM "h2o_quality" WHERE "randtag" = '3'
```
Delete all data in the database that occur before January 01, 2020:
```sql
> DELETE WHERE time < '2020-01-01'
DELETE WHERE time < '2020-01-01'
```
Delete all data associated with the measurement `h2o_feet` in retention policy `one_day`:
```sql
> DELETE FROM "one_day"."h2o_feet"
DELETE FROM "one_day"."h2o_feet"
```
A successful `DELETE` query returns an empty result.
@ -217,7 +217,7 @@ DROP MEASUREMENT <measurement_name>
Delete the measurement `h2o_feet`:
```sql
> DROP MEASUREMENT "h2o_feet"
DROP MEASUREMENT "h2o_feet"
```
> **Note:** `DROP MEASUREMENT` drops all data and series in the measurement.
@ -240,8 +240,8 @@ DROP SHARD <shard_id_number>
Delete the shard with the id `1`:
```
> DROP SHARD 1
>
DROP SHARD 1
```
A successful `DROP SHARD` query returns an empty result.
@ -259,7 +259,7 @@ You may disable its auto-creation in the [configuration file](/influxdb/v1/admin
#### Syntax
```sql
CREATE RETENTION POLICY <retention_policy_name> ON <database_name> DURATION <duration> REPLICATION <n> [SHARD DURATION <duration>] [PAST LIMIT <duration>] [FUTURE LIMIT <duration>] [DEFAULT]
CREATE RETENTION POLICY <retention_policy_name> ON <database_name> DURATION <duration> REPLICATION <n> [SHARD DURATION <duration>] [FUTURE LIMIT <duration>] [PAST LIMIT <duration>] [DEFAULT]
```
#### Description of syntax
@ -307,6 +307,17 @@ See
[Shard group duration management](/influxdb/v1/concepts/schema_and_data_layout/#shard-group-duration-management)
for recommended configurations.
##### `FUTURE LIMIT` {metadata="v1.12.0+"}
The `FUTURE LIMIT` clause defines a time boundary after and relative to _now_
in which points written to the retention policy are accepted. If a point has a
timestamp after the specified boundary, the point is rejected and the write
request returns a partial write error.
For example, if a write request tries to write data to a retention policy with a
`FUTURE LIMIT 6h` and there are points in the request with future timestamps
greater than 6 hours from now, those points are rejected.
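For example, the following statement (the retention policy name is hypothetical) creates a retention policy that rejects points timestamped more than six hours in the future:

```sql
CREATE RETENTION POLICY "recent_writes" ON "NOAA_water_database" DURATION 30d REPLICATION 1 FUTURE LIMIT 6h
```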
##### `PAST LIMIT` {metadata="v1.12.0+"}
The `PAST LIMIT` clause defines a time boundary before and relative to _now_
@ -318,25 +329,6 @@ For example, if a write request tries to write data to a retention policy with a
`PAST LIMIT 6h` and there are points in the request with timestamps older than
6 hours, those points are rejected.
> [!Important]
> `PAST LIMIT` cannot be changed after it is set.
> This will be fixed in a future release.
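For example, the following statement (the retention policy name is hypothetical) creates a retention policy that rejects backfilled points older than six hours:

```sql
CREATE RETENTION POLICY "no_backfill" ON "NOAA_water_database" DURATION 30d REPLICATION 1 PAST LIMIT 6h
```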
##### `FUTURE LIMIT` {metadata="v1.12.0+"}
The `FUTURE LIMIT` clause defines a time boundary after and relative to _now_
in which points written to the retention policy are accepted. If a point has a
timestamp after the specified boundary, the point is rejected and the write
request returns a partial write error.
For example, if a write request tries to write data to a retention policy with a
`FUTURE LIMIT 6h` and there are points in the request with future timestamps
greater than 6 hours from now, those points are rejected.
> [!Important]
> `FUTURE LIMIT` cannot be changed after it is set.
> This will be fixed in a future release.
##### `DEFAULT`
Sets the new retention policy as the default retention policy for the database.
@ -347,8 +339,8 @@ This setting is optional.
##### Create a retention policy
```
> CREATE RETENTION POLICY "one_day_only" ON "NOAA_water_database" DURATION 1d REPLICATION 1
>
CREATE RETENTION POLICY "one_day_only" ON "NOAA_water_database" DURATION 1d REPLICATION 1
```
The query creates a retention policy called `one_day_only` for the database
`NOAA_water_database` with a one day duration and a replication factor of one.
@ -356,8 +348,8 @@ The query creates a retention policy called `one_day_only` for the database
##### Create a DEFAULT retention policy
```sql
> CREATE RETENTION POLICY "one_day_only" ON "NOAA_water_database" DURATION 23h60m REPLICATION 1 DEFAULT
>
CREATE RETENTION POLICY "one_day_only" ON "NOAA_water_database" DURATION 23h60m REPLICATION 1 DEFAULT
```
The query creates the same retention policy as the one in the example above, but
@ -372,24 +364,27 @@ See [Create a database with CREATE DATABASE](/influxdb/v1/query_language/manage-
### Modify retention policies with ALTER RETENTION POLICY
The `ALTER RETENTION POLICY` query takes the following form, where you must declare at least one of the retention policy attributes `DURATION`, `REPLICATION`, `SHARD DURATION`, or `DEFAULT`:
The `ALTER RETENTION POLICY` query takes the following form, where you must declare at least one of the retention policy attributes `DURATION`, `REPLICATION`, `SHARD DURATION`, `FUTURE LIMIT`, `PAST LIMIT`, or `DEFAULT`:
```sql
ALTER RETENTION POLICY <retention_policy_name> ON <database_name> [DURATION <duration>] [REPLICATION <n>] [SHARD DURATION <duration>] [DEFAULT]
ALTER RETENTION POLICY <retention_policy_name> ON <database_name> [DURATION <duration>] [REPLICATION <n>] [SHARD DURATION <duration>] [FUTURE LIMIT <duration>] [PAST LIMIT <duration>] [DEFAULT]
```
{{% warn %}} Replication factors do not serve a purpose with single node instances.
{{% /warn %}}
For information about the `FUTURE LIMIT` and `PAST LIMIT` clauses, see
[CREATE RETENTION POLICY](#create-retention-policies-with-create-retention-policy).
First, create the retention policy `what_is_time` with a `DURATION` of two days:
```sql
> CREATE RETENTION POLICY "what_is_time" ON "NOAA_water_database" DURATION 2d REPLICATION 1
>
CREATE RETENTION POLICY "what_is_time" ON "NOAA_water_database" DURATION 2d REPLICATION 1
```
Modify `what_is_time` to have a three week `DURATION`, a two hour shard group duration, and make it the `DEFAULT` retention policy for `NOAA_water_database`.
```sql
> ALTER RETENTION POLICY "what_is_time" ON "NOAA_water_database" DURATION 3w SHARD DURATION 2h DEFAULT
>
ALTER RETENTION POLICY "what_is_time" ON "NOAA_water_database" DURATION 3w SHARD DURATION 2h DEFAULT
```
In the last example, `what_is_time` retains its original replication factor of 1.
@ -409,8 +404,8 @@ DROP RETENTION POLICY <retention_policy_name> ON <database_name>
Delete the retention policy `what_is_time` in the `NOAA_water_database` database:
```bash
> DROP RETENTION POLICY "what_is_time" ON "NOAA_water_database"
>
DROP RETENTION POLICY "what_is_time" ON "NOAA_water_database"
```
A successful `DROP RETENTION POLICY` query returns an empty result.

View File

@ -53,8 +53,8 @@ digits, or underscores and do not begin with a digit.
Throughout the query language exploration, we'll use the database name `NOAA_water_database`:
```
> CREATE DATABASE NOAA_water_database
> exit
CREATE DATABASE NOAA_water_database
exit
```
### Download and write the data to InfluxDB

View File

@ -1,6 +1,6 @@
---
title: Influx Query Language (InfluxQL) reference
description: List of resources for Influx Query Language (InfluxQL).
description: InfluxQL is a SQL-like query language for interacting with InfluxDB and providing features specific to storing and analyzing time series data.
menu:
influxdb_v1:
name: InfluxQL reference
@ -8,38 +8,32 @@ menu:
parent: InfluxQL
aliases:
- /influxdb/v2/query_language/spec/
related:
- /influxdb/v1/query_language/explore-data/
- /influxdb/v1/query_language/explore-schema/
- /influxdb/v1/query_language/manage-database/
---
## Introduction
InfluxQL is a SQL-like query language for interacting with InfluxDB
and providing features specific to storing and analyzing time series data.
Find Influx Query Language (InfluxQL) definitions and details, including:
- [Notation](/influxdb/v1/query_language/spec/#notation)
- [Query representation](/influxdb/v1/query_language/spec/#query-representation)
- [Identifiers](/influxdb/v1/query_language/spec/#identifiers)
- [Keywords](/influxdb/v1/query_language/spec/#keywords)
- [Literals](/influxdb/v1/query_language/spec/#literals)
- [Queries](/influxdb/v1/query_language/spec/#queries)
- [Statements](/influxdb/v1/query_language/spec/#statements)
- [Clauses](/influxdb/v1/query_language/spec/#clauses)
- [Expressions](/influxdb/v1/query_language/spec/#expressions)
- [Other](/influxdb/v1/query_language/spec/#other)
- [Query engine internals](/influxdb/v1/query_language/spec/#query-engine-internals)
To learn more about InfluxQL, browse the following topics:
- [Explore your data with InfluxQL](/influxdb/v1/query_language/explore-data/)
- [Explore your schema with InfluxQL](/influxdb/v1/query_language/explore-schema/)
- [Database management](/influxdb/v1/query_language/manage-database/)
- [Authentication and authorization](/influxdb/v1/administration/authentication_and_authorization/).
InfluxQL is a SQL-like query language for interacting with InfluxDB and providing features specific to storing and analyzing time series data.
- [Notation](#notation)
- [Query representation](#query-representation)
- [Identifiers](#identifiers)
- [Keywords](#keywords)
- [Literals](#literals)
- [Queries](#queries)
- [Statements](#statements)
- [Clauses](#clauses)
- [Expressions](#expressions)
- [Comments](#comments)
- [Other](#other)
- [Query engine internals](#query-engine-internals)
## Notation
The syntax is specified using Extended Backus-Naur Form ("EBNF").
EBNF is the same notation used in the [Go](http://golang.org) programming language specification, which can be found [here](https://golang.org/ref/spec).
Not so coincidentally, InfluxDB is written in Go.
EBNF is the same notation used in the [Go programming language specification](https://golang.org/ref/spec).
```
Production = production_name "=" [ Expression ] "." .
@ -91,7 +85,7 @@ The rules:
- double quoted identifiers can contain any unicode character other than a new line
- double quoted identifiers can contain escaped `"` characters (i.e., `\"`)
- double quoted identifiers can contain InfluxQL [keywords](/influxdb/v1/query_language/spec/#keywords)
- double quoted identifiers can contain InfluxQL [keywords](#keywords)
- unquoted identifiers must start with an upper or lowercase ASCII character or "_"
- unquoted identifiers may contain only ASCII letters, decimal digits, and "_"
@ -129,7 +123,7 @@ SUBSCRIPTIONS TAG TO USER USERS VALUES
WHERE WITH WRITE
```
If you use an InfluxQL keywords as an
If you use an InfluxQL keyword as an
[identifier](/influxdb/v1/concepts/glossary/#identifier) you will need to
double quote that identifier in every query.
@ -145,7 +139,7 @@ In those cases, `time` does not require double quotes in queries.
`time` cannot be a [field key](/influxdb/v1/concepts/glossary/#field-key) or
[tag key](/influxdb/v1/concepts/glossary/#tag-key);
InfluxDB rejects writes with `time` as a field key or tag key and returns an error.
See [Frequently Asked Questions](/influxdb/v1/troubleshooting/frequently-asked-questions/#time) for more information.
For more information, see [Frequently Asked Questions](/influxdb/v1/troubleshooting/frequently-asked-questions/#time).
## Literals
@ -229,19 +223,22 @@ regex_lit = "/" { unicode_char } "/" .
`=~` matches against
`!~` doesn't match against
InfluxQL supports using regular expressions when specifying:
- [field keys](/influxdb/v1/concepts/glossary/#field-key) and [tag keys](/influxdb/v1/concepts/glossary/#tag-key) in the [`SELECT` clause](/influxdb/v1/query_language/explore-data/#the-basic-select-statement)
- [measurements](/influxdb/v1/concepts/glossary/#measurement) in the [`FROM` clause](/influxdb/v1/query_language/explore-data/#the-basic-select-statement)
- [tag values](/influxdb/v1/concepts/glossary/#tag-value) and string [field values](/influxdb/v1/concepts/glossary/#field-value) in the [`WHERE` clause](/influxdb/v1/query_language/explore-data/#the-where-clause).
- [tag keys](/influxdb/v1/concepts/glossary/#tag-key) in the [`GROUP BY` clause](/influxdb/v1/query_language/explore-data/#group-by-tags)
> [!Note]
> InfluxQL supports using regular expressions when specifying:
> #### Regular expressions and non-string field values
>
> * [field keys](/influxdb/v1/concepts/glossary/#field-key) and [tag keys](/influxdb/v1/concepts/glossary/#tag-key) in the [`SELECT` clause](/influxdb/v1/query_language/explore-data/#the-basic-select-statement)
> * [measurements](/influxdb/v1/concepts/glossary/#measurement) in the [`FROM` clause](/influxdb/v1/query_language/explore-data/#the-basic-select-statement)
> * [tag values](/influxdb/v1/concepts/glossary/#tag-value) and string [field values](/influxdb/v1/concepts/glossary/#field-value) in the [`WHERE` clause](/influxdb/v1/query_language/explore-data/#the-where-clause).
> * [tag keys](/influxdb/v1/concepts/glossary/#tag-key) in the [`GROUP BY` clause](/influxdb/v1/query_language/explore-data/#group-by-tags)
>
>Currently, InfluxQL does not support using regular expressions to match
>non-string field values in the
>`WHERE` clause,
>[databases](/influxdb/v1/concepts/glossary/#database), and
>[retention polices](/influxdb/v1/concepts/glossary/#retention-policy-rp).
> Currently, InfluxQL does not support using regular expressions to match
> non-string field values in the
> `WHERE` clause,
> [databases](/influxdb/v1/concepts/glossary/#database), and
> [retention policies](/influxdb/v1/concepts/glossary/#retention-policy-rp).
## Queries
@ -306,6 +303,8 @@ alter_retention_policy_stmt = "ALTER RETENTION POLICY" policy_name on_clause
retention_policy_option
[ retention_policy_option ]
[ retention_policy_option ]
[ retention_policy_option ]
[ retention_policy_option ]
[ retention_policy_option ] .
```
@ -318,6 +317,9 @@ ALTER RETENTION POLICY "1h.cpu" ON "mydb" DEFAULT
-- Change duration and replication factor.
-- REPLICATION (replication factor) not valid for OSS instances.
ALTER RETENTION POLICY "policy1" ON "somedb" DURATION 1h REPLICATION 4
-- Change future and past limits.
ALTER RETENTION POLICY "policy1" ON "somedb" FUTURE LIMIT 6h PAST LIMIT 6h
```
### CREATE CONTINUOUS QUERY
@ -378,12 +380,15 @@ create_database_stmt = "CREATE DATABASE" db_name
[ retention_policy_duration ]
[ retention_policy_replication ]
[ retention_policy_shard_group_duration ]
[ retention_past_limit ]
[ retention_future_limit ]
[ retention_past_limit ]
[ retention_policy_name ]
] .
```
> [!Note]
> When using both `FUTURE LIMIT` and `PAST LIMIT` clauses, `FUTURE LIMIT` must appear before `PAST LIMIT`.
> [!Warning]
> Replication factors do not serve a purpose with single node instances.
@ -402,8 +407,8 @@ CREATE DATABASE "bar" WITH DURATION 1d REPLICATION 1 SHARD DURATION 30m NAME "my
CREATE DATABASE "mydb" WITH NAME "myrp"
-- Create a database called bar with a new retention policy named "myrp", and
-- specify the duration, past and future limits, and name of that retention policy
CREATE DATABASE "bar" WITH DURATION 1d PAST LIMIT 6h FUTURE LIMIT 6h NAME "myrp"
-- specify the duration, future and past limits, and name of that retention policy
CREATE DATABASE "bar" WITH DURATION 1d FUTURE LIMIT 6h PAST LIMIT 6h NAME "myrp"
```
### CREATE RETENTION POLICY
@ -413,11 +418,14 @@ create_retention_policy_stmt = "CREATE RETENTION POLICY" policy_name on_clause
retention_policy_duration
retention_policy_replication
[ retention_policy_shard_group_duration ]
[ retention_past_limit ]
[ retention_future_limit ]
[ retention_past_limit ]
[ "DEFAULT" ] .
```
> [!Note]
> When using both `FUTURE LIMIT` and `PAST LIMIT` clauses, `FUTURE LIMIT` must appear before `PAST LIMIT`.
> [!Warning]
> Replication factors do not serve a purpose with single node instances.
@ -433,8 +441,8 @@ CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 60m REPLICATION 2 DEFA
-- Create a retention policy and specify the shard group duration.
CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 60m REPLICATION 2 SHARD DURATION 30m
-- Create a retention policy and specify past and future limits.
CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 12h PAST LIMIT 6h FUTURE LIMIT 6h
-- Create a retention policy and specify future and past limits.
CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 12h FUTURE LIMIT 6h PAST LIMIT 6h
```
### CREATE SUBSCRIPTION
@ -629,12 +637,12 @@ SIZE OF BLOCKS: 931
### EXPLAIN ANALYZE
Executes the specified SELECT statement and returns data on the query performance and storage during runtime, visualized as a tree. Use this statement to analyze query performance and storage, including [execution time](#execution-time) and [planning time](#planning-time), and the [iterator type](#iterator-type) and [cursor type](#cursor-type).
Executes the specified SELECT statement and returns data on the query performance and storage during runtime, visualized as a tree. Use this statement to analyze query performance and storage, including [execution time](#execution_time) and [planning time](#planning_time), and the [iterator type](#iterator-type) and [cursor type](#cursor-type).
For example, executing the following statement:
```sql
> explain analyze select mean(usage_steal) from cpu where time >= '2018-02-22T00:00:00Z' and time < '2018-02-22T12:00:00Z'
explain analyze select mean(usage_steal) from cpu where time >= '2018-02-22T00:00:00Z' and time < '2018-02-22T12:00:00Z'
```
May produce an output similar to the following:
@ -725,7 +733,8 @@ For more information about storage blocks, see [TSM files](/influxdb/v1/concepts
### GRANT
> **NOTE:** Users can be granted privileges on databases that do not yet exist.
> [!Note]
> Users can be granted privileges on databases that do not yet exist.
```
grant_stmt = "GRANT" privilege [ on_clause ] to_clause .
@ -743,20 +752,17 @@ GRANT READ ON "mydb" TO "jdoe"
### KILL QUERY
Stop currently-running query.
Stop a currently-running query.
```sql
KILL QUERY <query_id>
```
```
kill_query_statement = "KILL QUERY" query_id .
```
Where `query_id` is the query ID, displayed in the [`SHOW QUERIES`](/influxdb/v1/troubleshooting/query_management/#list-currently-running-queries-with-show-queries) output as `qid`.
> ***InfluxDB Enterprise clusters:*** To kill queries on a cluster, you need to specify the query ID (qid) and the TCP host (for example, `myhost:8088`),
> available in the `SHOW QUERIES` output.
>
> ```sql
> KILL QUERY <qid> ON "<host>"
> ```
Replace `query_id` with your query ID from [`SHOW QUERIES`](/influxdb/v1/troubleshooting/query_management/#list-currently-running-queries-with-show-queries), output as `qid`.
#### Examples
@ -765,11 +771,6 @@ KILL QUERY <qid> ON "<host>"
KILL QUERY 36
```
```sql
-- kill query on InfluxDB Enterprise cluster
KILL QUERY 53 ON "myhost:8088"
```
### REVOKE
```sql
@ -912,7 +913,7 @@ show_grants_stmt = "SHOW GRANTS FOR" user_name .
SHOW GRANTS FOR "jdoe"
```
#### SHOW MEASUREMENT CARDINALITY
### SHOW MEASUREMENT CARDINALITY
Estimates or counts exactly the cardinality of the measurement set for the current database unless a database is specified using the `ON <database>` option.
@ -999,10 +1000,11 @@ Estimates or counts exactly the cardinality of the series for the current databa
[Series cardinality](/influxdb/v1/concepts/glossary/#series-cardinality) is the major factor that affects RAM requirements. For more information, see:
- [When do I need more RAM?](/influxdb/v1/guides/hardware_sizing/#when-do-i-need-more-ram) in [Hardware Sizing Guidelines](/influxdb/v1/guides/hardware_sizing/)
- [Hardware Sizing Guidelines](/influxdb/v1/guides/hardware_sizing/)
- [Don't have too many series](/influxdb/v1/concepts/schema_and_data_layout/#avoid-too-many-series)
> **Note:** `ON <database>`, `FROM <sources>`, `WITH KEY = <key>`, `WHERE <condition>`, `GROUP BY <dimensions>`, and `LIMIT/OFFSET` clauses are optional.
> [!Note]
> `ON <database>`, `FROM <sources>`, `WITH KEY = <key>`, `WHERE <condition>`, `GROUP BY <dimensions>`, and `LIMIT/OFFSET` clauses are optional.
> When using these query clauses, the query falls back to an exact count.
> Filtering by `time` is not supported in the `WHERE` clause.
@ -1069,26 +1071,18 @@ id database retention_policy shard_group start_time end_time
Returns detailed statistics on the available (enabled) components of an InfluxDB node.
Statistics returned by `SHOW STATS` are stored in memory and reset to zero when the node is restarted,
but `SHOW STATS` is triggered every 10 seconds to populate the `_internal` database.
The `SHOW STATS` command does not list index memory usage --
use the [`SHOW STATS FOR 'indexes'`](#show-stats-for-indexes) command.
For more information on using the `SHOW STATS` command, see [Using the SHOW STATS command to monitor InfluxDB](/platform/monitoring/tools/show-stats/).
```
show_stats_stmt = "SHOW STATS [ FOR '<component>' | 'indexes' ]"
```
#### `SHOW STATS`
- The `SHOW STATS` command does not list index memory usage -- use the [`SHOW STATS FOR 'indexes'`](#show-stats-for-indexes) command.
- Statistics returned by `SHOW STATS` are stored in memory and reset to zero when the node is restarted, but `SHOW STATS` is triggered every 10 seconds to populate the `_internal` database.
#### `SHOW STATS FOR <component>`
- For the specified component (\<component\>), the command returns available statistics.
- For the `runtime` component, the command returns an overview of memory usage by the InfluxDB system, using the [Go runtime](https://golang.org/pkg/runtime/) package.
#### `SHOW STATS FOR 'indexes'`
- Returns an estimate of memory use of all indexes. Index memory use is not reported with `SHOW STATS` because it is a potentially expensive operation.
#### Example
```sql
@ -1098,7 +1092,6 @@ name: runtime
Alloc Frees HeapAlloc HeapIdle HeapInUse HeapObjects HeapReleased HeapSys Lookups Mallocs NumGC NumGoroutine PauseTotalNs Sys TotalAlloc
4136056 6684537 4136056 34586624 5816320 49412 0 40402944 110 6733949 83 44 36083006 46692600 439945704
name: graphite
tags: proto=tcp
batches_tx bytes_rx connections_active connections_handled points_rx points_tx
@ -1106,6 +1099,17 @@ batches_tx bytes_rx connections_active connections_handled
159 3999750 0 1 158110 158110
```
### SHOW STATS FOR <component>
For the specified component (\<component\>), the command returns available statistics.
For the `runtime` component, the command returns an overview of memory usage by the InfluxDB system,
using the [Go runtime](https://golang.org/pkg/runtime/) package.
### SHOW STATS FOR 'indexes'
Returns an estimate of memory use of all indexes.
Index memory use is not reported with `SHOW STATS` because it is a potentially expensive operation.
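For example, the following statement returns the index memory estimate (the output columns and values vary by deployment):

```sql
SHOW STATS FOR 'indexes'
```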
### SHOW SUBSCRIPTIONS
```
@ -1118,11 +1122,12 @@ show_subscriptions_stmt = "SHOW SUBSCRIPTIONS" .
SHOW SUBSCRIPTIONS
```
#### SHOW TAG KEY CARDINALITY
### SHOW TAG KEY CARDINALITY
Estimates or counts exactly the cardinality of the tag key set on the current database unless a database is specified using the `ON <database>` option.
> **Note:** `ON <database>`, `FROM <sources>`, `WITH KEY = <key>`, `WHERE <condition>`, `GROUP BY <dimensions>`, and `LIMIT/OFFSET` clauses are optional.
> [!Note]
> `ON <database>`, `FROM <sources>`, `WITH KEY = <key>`, `WHERE <condition>`, `GROUP BY <dimensions>`, and `LIMIT/OFFSET` clauses are optional.
> When using these query clauses, the query falls back to an exact count.
> Filtering by `time` in the `WHERE` clause is supported only when TSI (Time Series Index) is enabled.
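For example, the following statement (the database name is illustrative) estimates the tag key cardinality for a specific database:

```sql
-- Estimate the number of distinct tag keys in the mydb database
SHOW TAG KEY CARDINALITY ON "mydb"
```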
@ -1190,11 +1195,12 @@ SHOW TAG VALUES WITH KEY !~ /.*c.*/
SHOW TAG VALUES FROM "cpu" WITH KEY IN ("region", "host") WHERE "service" = 'redis'
```
#### SHOW TAG VALUES CARDINALITY
### SHOW TAG VALUES CARDINALITY
Estimates or counts exactly the cardinality of tag key values for the specified tag key on the current database unless a database is specified using the `ON <database>` option.
> **Note:** `ON <database>`, `FROM <sources>`, `WITH KEY = <key>`, `WHERE <condition>`, `GROUP BY <dimensions>`, and `LIMIT/OFFSET` clauses are optional.
> [!Note]
> `ON <database>`, `FROM <sources>`, `WITH KEY = <key>`, `WHERE <condition>`, `GROUP BY <dimensions>`, and `LIMIT/OFFSET` clauses are optional.
> When using these query clauses, the query falls back to an exact count.
> Filtering by `time` is only supported when TSI (Time Series Index) is enabled.
@ -1274,6 +1280,15 @@ unary_expr = "(" expr ")" | var_ref | time_lit | string_lit | int_lit |
float_lit | bool_lit | duration_lit | regex_lit .
```
## Comments
Use comments with InfluxQL statements to describe your queries.
- A single-line comment begins with two hyphens (`--`) and ends where InfluxDB detects a line break.
This comment type cannot span several lines.
- A multi-line comment begins with `/*` and ends with `*/`. This comment type can span several lines.
Multi-line comments do not support nested multi-line comments.
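Both comment types can accompany a statement, for example (the measurement name is illustrative):

```sql
-- Single-line comment describing the query
SELECT "water_level" FROM "h2o_feet" LIMIT 1
/* Multi-line comment
   spanning several lines */
```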
## Other
```
@ -1321,6 +1336,8 @@ retention_policy = identifier .
retention_policy_option = retention_policy_duration |
retention_policy_replication |
retention_policy_shard_group_duration |
retention_future_limit |
retention_past_limit |
"DEFAULT" .
retention_policy_duration = "DURATION" duration_lit .
@ -1329,6 +1346,10 @@ retention_policy_replication = "REPLICATION" int_lit .
retention_policy_shard_group_duration = "SHARD DURATION" duration_lit .
retention_future_limit = "FUTURE LIMIT" duration_lit .
retention_past_limit = "PAST LIMIT" duration_lit .
retention_policy_name = "NAME" identifier .
series_id = int_lit .
@ -1350,15 +1371,6 @@ user_name = identifier .
var_ref = measurement .
```
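As a sketch of the `FUTURE LIMIT` and `PAST LIMIT` options shown in the grammar above (the database and policy names are illustrative):

```sql
-- Reject points more than 1d in the future or 6h in the past
CREATE RETENTION POLICY "one_year" ON "mydb" DURATION 52w REPLICATION 1 FUTURE LIMIT 1d PAST LIMIT 6h
```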
### Comments
Use comments with InfluxQL statements to describe your queries.
- A single line comment begins with two hyphens (`--`) and ends where InfluxDB detects a line break.
This comment type cannot span several lines.
- A multi-line comment begins with `/*` and ends with `*/`. This comment type can span several lines.
Multi-line comments do not support nested multi-line comments.
## Query Engine Internals
Once you understand the language itself, it's important to know how these
@ -1458,7 +1470,7 @@ iterator.
### Built-in iterators
There are many helper iterators that let us build queries:
{{% product-name %}} provides many helper iterators for building queries:
- Merge Iterator - This iterator combines one or more iterators into a single
new iterator of the same type. This iterator guarantees that all points

View File

@ -427,8 +427,8 @@ Use `insert into <retention policy> <line protocol>` to write data to a specific
Write data to a single field in the measurement `treasures` with the tag `captain_id = pirate_king`.
`influx` automatically writes the point to the database's `DEFAULT` retention policy.
```
> INSERT treasures,captain_id=pirate_king value=2
>
INSERT treasures,captain_id=pirate_king value=2
```
Write the same point to the already-existing retention policy `oneday`:

View File

@ -101,7 +101,7 @@ In Query 1, the field key `duration` is an InfluxQL Keyword.
Double quote `duration` to avoid the error:
```sql
> SELECT "duration" FROM runs
SELECT "duration" FROM runs
```
*Query 2:*
@ -115,7 +115,7 @@ In Query 2, the retention policy name `limit` is an InfluxQL Keyword.
Double quote `limit` to avoid the error:
```sql
> CREATE RETENTION POLICY "limit" ON telegraf DURATION 1d REPLICATION 1
CREATE RETENTION POLICY "limit" ON telegraf DURATION 1d REPLICATION 1
```
While using double quotes is an acceptable workaround, we recommend that you avoid using InfluxQL keywords as identifiers for simplicity's sake.
@ -142,7 +142,7 @@ The `CREATE USER` statement requires single quotation marks around the password
string:
```sql
> CREATE USER penelope WITH PASSWORD 'timeseries4dayz'
CREATE USER penelope WITH PASSWORD 'timeseries4dayz'
```
Note that you should not include the single quotes when authenticating requests.
@ -258,7 +258,7 @@ Replace the timestamp with a UNIX timestamp to avoid the error and successfully
write the point to InfluxDB:
```sql
> INSERT pineapple,fresh=true value=1 1439938800000000000
INSERT pineapple,fresh=true value=1 1439938800000000000
```
### InfluxDB line protocol syntax
@ -284,7 +284,7 @@ InfluxDB assumes that the `value=9` field is the timestamp and returns an error.
Use a comma instead of a space between the measurement and tag to avoid the error:
```sql
> INSERT hens,location=2 value=9
INSERT hens,location=2 value=9
```
*Write 2*
@ -301,7 +301,7 @@ InfluxDB assumes that the `happy=3` field is the timestamp and returns an error.
Use a comma instead of a space between the two fields to avoid the error:
```sql
> INSERT cows,name=daisy milk_prod=3,happy=3
INSERT cows,name=daisy milk_prod=3,happy=3
```
**Resources:**

View File

@ -451,7 +451,7 @@ SELECT MEAN("dogs" - "cats") from "pet_daycare"
Instead, use a subquery to get the same result:
```sql
> SELECT MEAN("difference") FROM (SELECT "dogs" - "cat" AS "difference" FROM "pet_daycare")
SELECT MEAN("difference") FROM (SELECT "dogs" - "cat" AS "difference" FROM "pet_daycare")
```
See the
@ -740,9 +740,9 @@ In the following example, the first query covers data with timestamps between
The second query covers data with timestamps between `2015-09-18T21:30:00Z` and 180 weeks from `now()`.
```sql
> SELECT MEAN("boards") FROM "hillvalley" WHERE time >= '2015-09-18T21:30:00Z' GROUP BY time(12m) fill(none)
SELECT MEAN("boards") FROM "hillvalley" WHERE time >= '2015-09-18T21:30:00Z' GROUP BY time(12m) fill(none)
> SELECT MEAN("boards") FROM "hillvalley" WHERE time >= '2015-09-18T21:30:00Z' AND time <= now() + 180w GROUP BY time(12m) fill(none)
SELECT MEAN("boards") FROM "hillvalley" WHERE time >= '2015-09-18T21:30:00Z' AND time <= now() + 180w GROUP BY time(12m) fill(none)
```
Note that the `WHERE` clause must provide an alternative **upper** bound to
@ -751,8 +751,8 @@ the lower bound to `now()` such that the query's time range is between
`now()` and `now()`:
```sql
> SELECT MEAN("boards") FROM "hillvalley" WHERE time >= now() GROUP BY time(12m) fill(none)
>
SELECT MEAN("boards") FROM "hillvalley" WHERE time >= now() GROUP BY time(12m) fill(none)
```
For more on time syntax in queries, see [Data Exploration](/influxdb/v1/query_language/explore-data/#time-syntax).
@ -843,8 +843,8 @@ time count
We [create](/influxdb/v1/query_language/manage-database/#create-retention-policies-with-create-retention-policy) a new `DEFAULT` RP (`two_hour`) and perform the same query:
```sql
> SELECT count(flounders) FROM fleeting
>
SELECT count(flounders) FROM fleeting
```
To query the old data, we must specify the old `DEFAULT` RP by fully qualifying `fleeting`:
@ -866,8 +866,8 @@ with time intervals.
Example:
```sql
> SELECT * FROM "absolutismus" WHERE time = '2016-07-31T20:07:00Z' OR time = '2016-07-31T23:07:17Z'
>
SELECT * FROM "absolutismus" WHERE time = '2016-07-31T20:07:00Z' OR time = '2016-07-31T23:07:17Z'
```
{{% warn %}} [GitHub Issue #7530](https://github.com/influxdata/influxdb/issues/7530)

View File

@ -73,7 +73,7 @@ To learn how field value type discrepancies can affect `SELECT *` queries, see
#### Write the field value `-1.234456e+78` as a float to InfluxDB
```sql
> INSERT mymeas value=-1.234456e+78
INSERT mymeas value=-1.234456e+78
```
InfluxDB supports field values specified in scientific notation.
@ -81,25 +81,25 @@ InfluxDB supports field values specified in scientific notation.
#### Write a field value `1.0` as a float to InfluxDB
```sql
> INSERT mymeas value=1.0
INSERT mymeas value=1.0
```
#### Write the field value `1` as a float to InfluxDB
```sql
> INSERT mymeas value=1
INSERT mymeas value=1
```
#### Write the field value `1` as an integer to InfluxDB
```sql
> INSERT mymeas value=1i
INSERT mymeas value=1i
```
#### Write the field value `stringing along` as a string to InfluxDB
```sql
> INSERT mymeas value="stringing along"
INSERT mymeas value="stringing along"
```
Always double quote string field values. More on quoting [below](#quoting).
@ -107,14 +107,14 @@ Always double quote string field values. More on quoting [below](#quoting).
#### Write the field value `true` as a Boolean to InfluxDB
```sql
> INSERT mymeas value=true
INSERT mymeas value=true
```
Do not quote Boolean field values.
The following statement writes `true` as a string field value to InfluxDB:
```sql
> INSERT mymeas value="true"
INSERT mymeas value="true"
```
#### Attempt to write a string to a field that previously accepted floats
@ -130,9 +130,9 @@ ERR: {"error":"field type conflict: input field \"value\" on measurement \"mymea
If the timestamps on the float and string are not stored in the same shard:
```sql
> INSERT mymeas value=3 1465934559000000000
> INSERT mymeas value="stringing along" 1466625759000000000
>
INSERT mymeas value=3 1465934559000000000
INSERT mymeas value="stringing along" 1466625759000000000
```
## Quoting, special characters, and additional naming guidelines
@ -233,7 +233,7 @@ You do not need to escape other special characters.
##### Write a point with special characters
```sql
> INSERT "measurement\ with\ quo⚡es\ and\ emoji",tag\ key\ with\ sp🚀ces=tag\,value\,with"commas" field_k\ey="string field value, only \" need be esc🍭ped"
INSERT "measurement\ with\ quo⚡es\ and\ emoji",tag\ key\ with\ sp🚀ces=tag\,value\,with"commas" field_k\ey="string field value, only \" need be esc🍭ped"
```
The system writes a point where the measurement is `"measurement with quo⚡es and emoji"`, the tag key is `tag key with sp🚀ces`, the

View File

@ -278,9 +278,9 @@ But, writing an integer to a field that previously accepted floats succeeds if
InfluxDB stores the integer in a new shard:
```sql
> INSERT weather,location=us-midwest temperature=82 1465839830100400200
> INSERT weather,location=us-midwest temperature=81i 1467154750000000000
>
INSERT weather,location=us-midwest temperature=82 1465839830100400200
INSERT weather,location=us-midwest temperature=81i 1467154750000000000
```
See

View File

@ -151,8 +151,8 @@ If using an admin user for visualization or Chronograf administrative functions,
<!--pytest.mark.skip-->
```bash
> CREATE USER <username> WITH PASSWORD '<password>'
> GRANT READ ON <database> TO "<username>"
CREATE USER <username> WITH PASSWORD '<password>'
GRANT READ ON <database> TO "<username>"
```
InfluxDB {{< current-version >}} only grants admin privileges to the primary user

View File

@ -15,7 +15,7 @@ aliases:
- /influxdb3/cloud-dedicated/admin/clusters/list/
---
Use the Admin UI or the [`influxctl cluster list` CLI command](/influxdb3/cloud-dedicated/reference/cli/influxctl/list/)
Use the Admin UI or the [`influxctl cluster list` CLI command](/influxdb3/cloud-dedicated/reference/cli/influxctl/cluster/list/)
to view information about all {{< product-name omit=" Clustered" >}} clusters associated with your account ID.
{{< tabs-wrapper >}}

View File

@ -27,7 +27,7 @@ Use visualization tools to query data stored in {{% product-name %}} with SQL.
The following visualization tools support querying InfluxDB with SQL:
- [Grafana](/influxdb3/cloud-dedicated/process-data/visualize/grafana/)
- [Power BI](/influxdb3/cloud-dedicated/process-data/visualize/powerbi/)
- [Power BI](/influxdb3/cloud-dedicated/visualize-data/powerbi/)
- [Superset](/influxdb3/cloud-dedicated/process-data/visualize/superset/)
- [Tableau](/influxdb3/cloud-dedicated/process-data/visualize/tableau/)

View File

@ -27,7 +27,7 @@ Use visualization tools to query data stored in {{% product-name %}}.
The following visualization tools support querying InfluxDB with SQL:
- [Grafana](/influxdb3/cloud-serverless/process-data/visualize/grafana/)
- [Power BI](/influxdb3/cloud-serverless/process-data/visualize/powerbi/)
- [Power BI](/influxdb3/cloud-serverless/visualize-data/powerbi/)
- [Superset](/influxdb3/cloud-serverless/process-data/visualize/superset/)
- [Tableau](/influxdb3/cloud-serverless/process-data/visualize/tableau/)

View File

@ -27,7 +27,7 @@ Use visualization tools to query data stored in {{% product-name %}} with SQL.
The following visualization tools support querying InfluxDB with SQL:
- [Grafana](/influxdb3/clustered/process-data/visualize/grafana/)
- [Power BI](/influxdb3/clustered/process-data/visualize/powerbi/)
- [Power BI](/influxdb3/clustered/visualize-data/powerbi/)
- [Superset](/influxdb3/clustered/process-data/visualize/superset/)
- [Tableau](/influxdb3/clustered/process-data/visualize/tableau/)

View File

@ -385,7 +385,7 @@ a `GROUP BY time()` clause must provide an alternative upper bound in the
Use the [CLI](/enterprise_influxdb/v1/tools/influx-cli/use-influx/) to write a point to the `noaa` database that occurs after `now()`:
```sql
> INSERT h2o_feet,location=santa_monica water_level=3.1 1587074400000000000
INSERT h2o_feet,location=santa_monica water_level=3.1 1587074400000000000
```
Run a `GROUP BY time()` query that covers data with timestamps between

View File

@ -44,8 +44,8 @@ INSERT INTO mydb example-m,tag1=value1 field1=1i 1640995200000000000
The following example uses the [InfluxQL shell](/influxdb/version/tools/influxql-shell).
```sql
> USE mydb
> INSERT example-m,tag1=value1 field1=1i 1640995200000000000
USE mydb
INSERT example-m,tag1=value1 field1=1i 1640995200000000000
```
## Delete series with DELETE

View File

@ -324,7 +324,7 @@ Executes the specified SELECT statement and returns data on the query performanc
For example, executing the following statement:
```sql
> explain analyze select mean(usage_steal) from cpu where time >= '2018-02-22T00:00:00Z' and time < '2018-02-22T12:00:00Z'
explain analyze select mean(usage_steal) from cpu where time >= '2018-02-22T00:00:00Z' and time < '2018-02-22T12:00:00Z'
```
May produce an output similar to the following:

View File

@ -22,7 +22,7 @@ The documentation MCP server is a hosted service—you don't need to install or
Add the server URL to your AI assistant's MCP configuration.
> [!Note]
> On first use, you'll be prompted to sign in with Google.
> On first use, you'll be prompted to sign in with a Google or GitHub account.
> This authentication is used only for rate limiting—no personal data is collected.
**MCP server URL:**
@ -168,23 +168,26 @@ The InfluxDB documentation search tools will be available in your OpenCode sessi
## Authentication and rate limits
When you connect to the documentation MCP server for the first time, a Google sign-in
window opens to complete an OAuth/OpenID Connect login.
When you connect to the documentation MCP server for the first time, a sign-in
window opens where you can choose to authenticate with a **Google** or **GitHub** account.
The hosted MCP server:
The hosted MCP server uses your account only to generate a stable, opaque user ID
for rate limiting—no personal data is collected:
- Requests only the `openid` scope from Google
- Receives an ID token (JWT) containing a stable, opaque user ID
- Does not request `email` or `profile` scopes—your name, email address, and other
personal data are not collected
- **Google**: Requests only the `openid` scope. Does not request `email` or `profile`
scopes—your name, email address, and other personal data are not collected.
- **GitHub**: Requests no OAuth scopes. With no scopes requested, GitHub grants
read-only access to public profile information only. The server does not access
repositories, organizations, email addresses, or other GitHub data.
The anonymous Google ID enforces per-user rate limits to prevent abuse:
The anonymous user ID enforces per-user rate limits to prevent abuse:
- **40 requests** per user per hour
- **200 requests** per user per day
> [!Tip]
> On Google's consent screen, this appears as "Associate you with your personal info on Google."
> If you sign in with Google, the consent screen may display
> "Associate you with your personal info on Google."
> This is Google's generic wording for the `openid` scope—it means the app can recognize
> that the same Google account is signing in again.
> It does not grant access to your email, name, contacts, or other data.

View File

@ -382,7 +382,7 @@ The documentation MCP server is a hosted service—you don't need to install or
Add the server URL to your AI assistant's MCP configuration.
> [!Note]
> On first use, you'll be prompted to sign in with Google.
> On first use, you'll be prompted to sign in with a Google or GitHub account.
> This authentication is used only for rate limiting—no personal data is collected.
**MCP server URL:**
@ -528,23 +528,26 @@ The InfluxDB documentation search tools will be available in your OpenCode sessi
### Authentication and rate limits
When you connect to the documentation MCP server for the first time, a Google sign-in
window opens to complete an OAuth/OpenID Connect login.
When you connect to the documentation MCP server for the first time, a sign-in
window opens where you can choose to authenticate with a **Google** or **GitHub** account.
The hosted MCP server:
The hosted MCP server uses your account only to generate a stable, opaque user ID
for rate limiting—no personal data is collected:
- Requests only the `openid` scope from Google
- Receives an ID token (JWT) containing a stable, opaque user ID
- Does not request `email` or `profile` scopes—your name, email address, and other
personal data are not collected
- **Google**: Requests only the `openid` scope. Does not request `email` or `profile`
scopes—your name, email address, and other personal data are not collected.
- **GitHub**: Requests no OAuth scopes. With no scopes requested, GitHub grants
read-only access to public profile information only. The server does not access
repositories, organizations, email addresses, or other GitHub data.
The anonymous Google ID enforces per-user rate limits to prevent abuse:
The anonymous user ID enforces per-user rate limits to prevent abuse:
- **40 requests** per user per hour
- **200 requests** per user per day
> [!Tip]
> On Google's consent screen, this appears as "Associate you with your personal info on Google."
> If you sign in with Google, the consent screen may display
> "Associate you with your personal info on Google."
> This is Google's generic wording for the `openid` scope—it means the app can recognize
> that the same Google account is signing in again.
> It does not grant access to your email, name, contacts, or other data.

View File

@ -291,7 +291,7 @@ Enables the PachaTree storage engine.
| influxdb3 serve option | Environment variable |
| :--------------------- | :----------------------------- |
| `--use-pacha-tree` | `INFLUXDB3_USE_PACHA_TREE` |
| `--use-pacha-tree` | `INFLUXDB3_ENTERPRISE_USE_PACHA_TREE` |
***

View File

@ -331,7 +331,7 @@ Executes the specified `SELECT` statement and returns data about the query perfo
For example, if you execute the following statement:
```sql
> explain analyze select mean(usage_steal) from cpu where time >= '2018-02-22T00:00:00Z' and time < '2018-02-22T12:00:00Z'
explain analyze select mean(usage_steal) from cpu where time >= '2018-02-22T00:00:00Z' and time < '2018-02-22T12:00:00Z'
```
The output is similar to the following:

View File

@ -41,7 +41,9 @@ The following heartbeat plugin configuration options are available:
- **url**: _({{% req %}})_ URL of heartbeat endpoint.
- **instance_id**: _({{% req %}})_ Unique identifier for the Telegraf instance
or agent (also known as the agent ID).
- **token**: Authorization token for the heartbeat endpoint
- **token**: _({{% req text="Required with auth enabled" %}})_
{{% product-name %}} API token for the heartbeat endpoint.
The token must have **write** permissions on the **Heartbeat** API.
- **interval**: Interval for sending heartbeat messages. Default is `1m` (every minute).
- **include**: Information to include in the heartbeat message.
Available options are:
@ -56,12 +58,14 @@ The following heartbeat plugin configuration options are available:
### Example heartbeat output plugin
The following is an example heartbeat output plugin configuration that uses
an `agent_id` [configuration parameter](#) to specify the `instance_id`.
an `agent_id` [configuration parameter](/telegraf/controller/configs/dynamic-values/#parameters)
to specify the `instance_id`.
```toml
```toml { .tc-dynamic-values }
[[outputs.heartbeat]]
url = "http://telegraf_controller.example.com/agents/heartbeat"
instance_id = "&{agent_id}"
token = "${INFLUX_TOKEN}"
interval = "1m"
include = ["hostname", "statistics", "configs"]
@ -69,6 +73,17 @@ an `agent_id` [configuration parameter](#) to specify the `instance_id`.
User-Agent = "telegraf"
```
> [!Important]
> #### Authorize heartbeats using an API token
>
> If {{% product-name %}} requires authorization on the **Heartbeat** API,
> include the `token` option in your heartbeat plugin configuration.
> Provide a {{% product-name %}} token with **write** permissions on the
> **Heartbeat** API.
>
> We recommend defining the `INFLUX_TOKEN` environment variable when starting
> Telegraf and using that to define the token in your heartbeat plugin.
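As a minimal sketch of that recommendation (the token value and config path are placeholders, not from this page), export the variable before starting Telegraf:

```shell
# Export the API token so Telegraf can substitute ${INFLUX_TOKEN} in the
# heartbeat plugin configuration at startup.
export INFLUX_TOKEN="example-token"

# Confirm the variable is set before launching Telegraf:
test -n "$INFLUX_TOKEN" && echo "token set"

# Start Telegraf (path is an assumption for this sketch):
# telegraf --config /etc/telegraf/telegraf.conf
```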
## Verify a new agent
1. Open {{% product-name %}} and go to **Agents**.

View File

@ -1,24 +1,135 @@
---
title: Set agent statuses
description: >
Understand how {{% product-name %}} receives and displays agent statuses from
the heartbeat output plugin.
Configure agent status evaluation using CEL expressions in the Telegraf
heartbeat output plugin and view statuses in {{% product-name %}}.
menu:
telegraf_controller:
name: Set agent statuses
parent: Manage agents
weight: 104
related:
- /telegraf/controller/reference/agent-status-eval/, Agent status evaluation reference
- /telegraf/controller/agents/reporting-rules/
- /telegraf/v1/output-plugins/heartbeat/, Heartbeat output plugin
---
Agent statuses come from the Telegraf heartbeat output plugin and are sent with
each heartbeat request.
The plugin reports an `ok` status.
Agent statuses reflect the health of a Telegraf instance based on runtime data.
The Telegraf [heartbeat output plugin](/telegraf/v1/output-plugins/heartbeat/)
evaluates [Common Expression Language (CEL)](/telegraf/controller/reference/agent-status-eval/)
expressions against agent metrics, error counts, and plugin statistics to
determine the status sent with each heartbeat.
<!-- TODO: Update version to 1.38.2 after it's released -->
> [!Note]
> A future Telegraf release will let you configure logic that sets the status value.
{{% product-name %}} also applies reporting rules to detect stale agents.
If an agent does not send a heartbeat within the rule's threshold, Controller
marks the agent as **Not Reporting** until it resumes sending heartbeats.
> #### Requires Telegraf v1.38.0+
>
> Agent status evaluation in the Heartbeat output plugin requires Telegraf
> v1.38.0+.
> [!Warning]
> #### Heartbeat output plugin panic in Telegraf v1.38.0
>
> Telegraf v1.38.0 introduced a panic in the Heartbeat output plugin that
> prevents Telegraf from starting when the plugin is enabled. Telegraf v1.38.2
> will include a fix, but in the meantime, to use the Heartbeat output plugin,
> do one of the following:
>
> - Revert back to Telegraf v1.37.x _(Recommended)_
> - Use a Telegraf nightly build
> - Build Telegraf from source
## Status values
{{% product-name %}} displays the following agent statuses:
| Status | Source | Description |
| :---------------- | :------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **Ok** | Heartbeat plugin | The agent is healthy. Set when the `ok` CEL expression evaluates to `true`. |
| **Warn** | Heartbeat plugin | The agent has a potential issue. Set when the `warn` CEL expression evaluates to `true`. |
| **Fail** | Heartbeat plugin | The agent has a critical problem. Set when the `fail` CEL expression evaluates to `true`. |
| **Undefined** | Heartbeat plugin | No expression matched and the `default` is set to `undefined`, or the `initial` status is `undefined`. |
| **Not Reporting** | {{% product-name %}} | The agent has not sent a heartbeat within the [reporting rule](/telegraf/controller/agents/reporting-rules/) threshold. {{% product-name %}} applies this status automatically. |
## How status evaluation works
You define CEL expressions for `ok`, `warn`, and `fail` in the
`[outputs.heartbeat.status]` section of your heartbeat plugin configuration.
Telegraf evaluates expressions in a configurable order and assigns the status
of the first expression that evaluates to `true`.
For full details on evaluation flow, configuration options, and available
variables and functions, see the
[Agent status evaluation reference](/telegraf/controller/reference/agent-status-eval/).
## Configure agent statuses
To configure status evaluation, add `"status"` to the `include` list in your
heartbeat plugin configuration and define CEL expressions in the
`[outputs.heartbeat.status]` section.
### Example: Basic health check
Report `ok` when metrics are flowing.
If no metrics arrive, fall back to the `fail` status.
```toml { .tc-dynamic-values }
[[outputs.heartbeat]]
url = "http://telegraf_controller.example.com/agents/heartbeat"
instance_id = "&{agent_id}"
token = "${INFLUX_TOKEN}"
interval = "1m"
include = ["hostname", "statistics", "configs", "logs", "status"]
[outputs.heartbeat.status]
ok = "metrics > 0"
default = "fail"
```
### Example: Error-based status
Warn when errors are logged, fail when the error count is high.
```toml { .tc-dynamic-values }
[[outputs.heartbeat]]
url = "http://telegraf_controller.example.com/agents/heartbeat"
instance_id = "&{agent_id}"
token = "${INFLUX_TOKEN}"
interval = "1m"
include = ["hostname", "statistics", "configs", "logs", "status"]
[outputs.heartbeat.status]
ok = "log_errors == 0 && log_warnings == 0"
warn = "log_errors > 0"
fail = "log_errors > 10"
order = ["fail", "warn", "ok"]
default = "ok"
```
### Example: Composite condition
Combine error count and buffer pressure signals.
```toml { .tc-dynamic-values }
[[outputs.heartbeat]]
url = "http://telegraf_controller.example.com/agents/heartbeat"
instance_id = "&{agent_id}"
token = "${INFLUX_TOKEN}"
interval = "1m"
include = ["hostname", "statistics", "configs", "logs", "status"]
[outputs.heartbeat.status]
ok = "metrics > 0 && log_errors == 0"
warn = "log_errors > 0 || (has(outputs.influxdb_v2) && outputs.influxdb_v2.exists(o, o.buffer_fullness > 0.8))"
fail = "log_errors > 5 && has(outputs.influxdb_v2) && outputs.influxdb_v2.exists(o, o.buffer_fullness > 0.9)"
order = ["fail", "warn", "ok"]
default = "ok"
```
For more examples including buffer health, plugin-specific checks, and
time-based expressions, see
[CEL expression examples](/telegraf/controller/reference/agent-status-eval/examples/).
## View an agent's status

View File

@ -65,17 +65,29 @@ configuration with a [Telegraf heartbeat output plugin](/telegraf/v1/output-plug
This plugin reports agent information back to the {{% product-name %}} heartbeat
API and lets you monitor the health of your deployed Telegraf agents.
```toml
```toml { .tc-dynamic-values }
[[outputs.heartbeat]]
url = "http://localhost:8000/agents/heartbeat"
instance_id = "&{agent_id}"
interval = "1m"
include = ["hostname", "statistics", "configs"]
token = "${INFLUX_TOKEN}"
```
To monitor agents with {{% product-name %}}, include a heartbeat plugin in
your Telegraf configurations.
> [!Important]
> #### Authorize heartbeats using an API token
>
> If {{% product-name %}} requires authorization on the **Heartbeat** API,
> include the `token` option in your heartbeat plugin configuration.
> Provide a {{% product-name %}} token with **write** permissions on the
> **Heartbeat** API.
>
> We recommend defining the `INFLUX_TOKEN` environment variable when starting
> Telegraf and using that to define the token in your heartbeat plugin.
## Next steps
- Use [dynamic values](/telegraf/controller/configs/dynamic-values/)
View File
@ -46,8 +46,7 @@ requesting the configuration from {{% product-name %}}.
### Use parameters in Telegraf configurations
{{% telegraf/dynamic-values %}}
```toml
```toml { .tc-dynamic-values }
[[outputs.influxdb_v2]]
# Parameter with a default value
urls = ["&{db_host:https://localhost:8181}"]
@ -56,7 +55,6 @@ requesting the configuration from {{% product-name %}}.
# Required parameter without a default value
instance_id = "&{agent_id}"
```
{{% /telegraf/dynamic-values %}}
The example above uses two parameters:
@ -117,15 +115,13 @@ For more information about Telegraf environment variable syntax, see
### Use environment variables in Telegraf configurations
{{% telegraf/dynamic-values %}}
```toml
```toml { .tc-dynamic-values }
[[inputs.http]]
urls = ["${API_ENDPOINT:-http://localhost:8080}/metrics"]
[inputs.http.headers]
Authorization = "Bearer ${AUTH_TOKEN}"
```
{{% /telegraf/dynamic-values %}}
The example above uses two environment variables:
@ -150,8 +146,7 @@ telegraf \
Use secrets for credentials or tokens you do not want to store in plain text.
Secrets require a secret store and its corresponding `secretstores` plugin.
{{% telegraf/dynamic-values %}}
```toml
```toml { .tc-dynamic-values }
# Configure a secret store plugin
[[secretstores.vault]]
id = "my_vault"
@ -164,7 +159,6 @@ Secrets require a secret store and its corresponding `secretstores` plugin.
host = "my_influxdb.com:8181"
token = "@{my_vault:influx_token}"
```
{{% /telegraf/dynamic-values %}}
For more information about Telegraf secrets and secret stores, see
[Telegraf configuration options—Secret stores](/telegraf/v1/configuration/#secret-stores).
View File
@ -40,6 +40,24 @@ telegraf \
Telegraf retrieves and validates the configuration from {{% product-name %}}
and then starts the `telegraf` process using the loaded configuration.
### Retrieve a configuration with authorization enabled
If {{% product-name %}} is configured to require authentication on the **Configs**
API, define the `INFLUX_TOKEN` environment variable to authorize Telegraf
to retrieve a configuration:
<!--pytest.mark.skip-->
```bash { placeholders="YOUR_TC_API_TOKEN" }
export INFLUX_TOKEN=YOUR_TC_API_TOKEN
telegraf \
  --config "http://telegraf_controller.example.com/api/configs/xxxxxx/toml"
```
Replace {{% code-placeholder-key %}}`YOUR_TC_API_TOKEN`{{% /code-placeholder-key %}}
with your {{% product-name %}} API token. This token must have **read**
permissions on the **Configs** API.
## Set dynamic values
Telegraf and {{% product-name %}} let you
@ -58,13 +76,11 @@ values—for example:
##### Configuration TOML with a parameter
{{% telegraf/dynamic-values %}}
```toml
```toml { .tc-dynamic-values }
[[outputs.heartbeat]]
instance_id = "&{agent_id}"
# ...
```
{{% /telegraf/dynamic-values %}}
##### Set the parameter value in the configuration URL
@ -88,15 +104,13 @@ starting Telegraf—for example:
##### Configuration TOML with an environment variable
{{% telegraf/dynamic-values %}}
```toml
```toml { .tc-dynamic-values }
[[inputs.http]]
urls = ["http://localhost:8080/metrics"]
[inputs.http.headers]
Authorization = "Bearer ${AUTH_TOKEN}"
```
{{% /telegraf/dynamic-values %}}
##### Set the environment variable before starting Telegraf
@ -135,21 +149,50 @@ parameters, environment variables, auto-update functionality, and Telegraf
{{< img-hd src="/img/telegraf/controller-command-builder.png" alt="Build Telegraf commands with Telegraf Controller" />}}
4. Define dynamic values and select options for your command:
4. _Optional_: To download a configuration and run it from your local filesystem
rather than having Telegraf retrieve it directly from {{% product-name %}},
enable the **Use local configuration file** option.
See more information [below](#download-a-configuration-to-your-local-filesystem).
5. Define dynamic values and select options for your command:
- Set environment variable values
- Set parameter values
- Enable automatic configuration updates and specify the check interval
- Add label selectors to run certain plugins based on configuration labels
5. Click **Copy Commands** to copy the contents of the codeblock to your clipboard.
6. Click **Copy Commands** to copy the contents of the codeblock to your clipboard.
The tool provides commands for Linux, macOS, and Windows (PowerShell).
> [!Warning]
> #### Some browsers restrict copying to clipboard
>
> Your browser may not allow the **Copy Commands** button to copy to your
> clipboard under the following conditions:
>
> - You're using an IP or domain name other than `0.0.0.0` or `localhost` and
> - You're using HTTP, not HTTPS
### Download a configuration to your local filesystem
With the **Use local configuration file** option enabled in the command builder,
{{% product-name %}} lets you configure the directory path and file name to use
for the configuration.
1. Define dynamic values and select options for your command:
- Set file details
- Set environment variable values
- Set parameter values
- Enable automatic configuration updates and specify the check strategy
- Add label selectors to run certain plugins based on configuration labels
2. Click **Download Config** to download the configuration to your local machine.
   The downloaded TOML file uses the file name specified in the
**File Details** tab and includes all the specified parameter replacements.
3. Click **Copy Commands** to copy the contents of the codeblock to your clipboard.
The tool provides commands for Linux, macOS, and Windows (PowerShell).
See [information about copying to your clipboard](#some-browsers-restrict-copying-to-clipboard).
{{< img-hd src="/img/telegraf/controller-command-builder-dl.png" alt="Telegraf Controller command builder" />}}
View File
@ -18,6 +18,7 @@ configurations, monitoring agents, and organizing plugins.
- [Download and install {{% product-name %}}](#download-and-install-telegraf-controller)
- [Set up your database](#set-up-your-database)
- [Configure {{% product-name %}}](#configure-telegraf-controller)
- [Set up the owner account](#set-up-the-owner-account)
- [Access {{% product-name %}}](#access-telegraf-controller)
## System Requirements
@ -75,15 +76,7 @@ $env:TELEGRAF_CONTROLLER_EULA="accept"
1. **Download the {{% product-name %}} executable.**
> [!Note]
> #### Contact InfluxData for download
>
> If you are currently participating in the {{% product-name %}} private alpha,
> send your operating system and architecture to InfluxData and we will
> provide you with the appropriate {{% product-name %}} executable.
>
> If you are not currently in the private alpha and would like to be,
> [request early access](https://www.influxdata.com/products/telegraf-enterprise).
{{< telegraf/tc-downloads >}}
2. **Install {{% product-name %}}**.
@ -508,6 +501,93 @@ $env:TELEGRAF_CONTROLLER_EULA=accept
{{% /tab-content %}}
{{< /tabs-wrapper >}}
## Set up the owner account
The first time you access {{% product-name %}}, you need to create an owner account.
The owner has full administrative access to the application, including the
ability to manage users, configurations, and agents.
You can create the owner account using one of four methods:
- [Interactive CLI setup](#interactive-cli-setup) when starting the application
- [Environment variables](#environment-variable-setup) set before starting the application
- [Command line flags](#command-line-flag-setup) passed when starting the application
- [Web interface setup](#web-interface-setup) after starting the application
### Interactive CLI setup
When you start {{% product-name %}} in interactive mode (the default) and no
owner account exists, {{% product-name %}} prompts you for the owner username,
email address, and password.
### Environment variable setup
You can configure the owner account by setting environment variables before
starting {{% product-name %}}.
This method is useful for automated deployments and containerized environments.
| Environment variable | Description |
| :------------------- | :------------------ |
| `OWNER_EMAIL` | Owner email address |
| `OWNER_USERNAME` | Owner username |
| `OWNER_PASSWORD` | Owner password |
Set all three environment variables and then start the application:
```bash
export OWNER_EMAIL="admin@example.com"
export OWNER_USERNAME="admin"
export OWNER_PASSWORD="secure-password-here"
./telegraf-controller
```
> [!Note]
> If an owner account already exists, {{% product-name %}} ignores these
> environment variables.
> [!Important]
> If an administrator account already exists with the specified username,
> that account is promoted to owner.
### Command line flag setup
You can also pass owner account details as command line flags when starting
{{% product-name %}}.
| Flag | Description |
|:-------------------------|:-----------------------|
| `--owner-email=EMAIL` | Owner email address |
| `--owner-username=NAME` | Owner username |
| `--owner-password=PASS` | Owner password |
Pass all three flags when starting the application:
```bash
./telegraf-controller \
--owner-email="admin@example.com" \
--owner-username="admin" \
--owner-password="secure-password-here"
```
> [!Tip]
> Command line flags take precedence over environment variables.
> If you set both, {{% product-name %}} uses the values from the command line flags.
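
The precedence described above can be sketched as follows. This is a simplified model, not Telegraf Controller's actual resolution code; the function name is illustrative:

```python
import os

# Simplified precedence model for an owner-account setting:
# a command line flag value wins over an environment variable.
def resolve_owner_setting(flag_value, env_name):
    if flag_value is not None:
        return flag_value
    return os.environ.get(env_name)

os.environ["OWNER_USERNAME"] = "env-admin"
print(resolve_owner_setting("flag-admin", "OWNER_USERNAME"))  # flag-admin
print(resolve_owner_setting(None, "OWNER_USERNAME"))          # env-admin
```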
### Web interface setup
If no owner account exists when you start {{% product-name %}} in non-interactive
mode, the web interface displays a setup page where you can create one.
1. Navigate to the [{{% product-name %}} URL](#access-telegraf-controller) in your browser.
2. Fill in the **Username**, **Email**, and **Password** fields.
3. Click **Create Account**.
{{< img-hd src="/img/telegraf/controller-setup-owner-account.png" alt="Owner account setup page" />}}
For more information about user roles and permissions, see
[Authorization](/telegraf/controller/reference/authorization/).
## Access {{% product-name %}}
Once started, access the {{% product-name %}} web interface at
View File
@ -0,0 +1,97 @@
---
title: Agent status evaluation
description: >
Reference documentation for Common Expression Language (CEL) expressions used
to evaluate Telegraf agent status.
menu:
telegraf_controller:
name: Agent status evaluation
parent: Reference
weight: 107
related:
- /telegraf/controller/agents/status/
- /telegraf/v1/output-plugins/heartbeat/
---
The Telegraf [heartbeat output plugin](/telegraf/v1/output-plugins/heartbeat/)
uses CEL expressions to evaluate agent status based on runtime data such as
metric counts, error rates, and plugin statistics.
[CEL (Common Expression Language)](https://cel.dev) is a lightweight expression
language designed for evaluating simple conditions.
## How status evaluation works
You define CEL expressions for three status levels in the
`[outputs.heartbeat.status]` section of your Telegraf configuration:
- **ok** — The agent is healthy.
- **warn** — The agent has a potential issue.
- **fail** — The agent has a critical problem.
Each expression is a CEL program that returns a boolean value.
Telegraf evaluates expressions in a configurable order (default:
`ok`, `warn`, `fail`) and assigns the status of the **first expression that
evaluates to `true`**.
If no expression evaluates to `true`, the `default` status is used
(default: `"ok"`).
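
The first-match logic can be sketched in Python. This is a simplified model for illustration, not the plugin's implementation; expression outcomes are precomputed booleans rather than evaluated CEL:

```python
# Simplified model of heartbeat status evaluation: walk the configured
# order and return the first status whose expression evaluated to true;
# if none matched, return the default. Statuses omitted from the order
# are never checked, mirroring the plugin's documented behavior.
def evaluate_status(results, order=("ok", "warn", "fail"), default="ok"):
    """results maps status name -> bool (the CEL expression outcome)."""
    for status in order:
        if results.get(status, False):
            return status
    return default

# With order ["fail", "warn", "ok"], a matching "warn" wins even
# though "ok" would also match.
print(evaluate_status({"ok": True, "warn": True, "fail": False},
                      order=("fail", "warn", "ok")))  # warn
```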
### Initial status
Use the `initial` setting to define a status before the first Telegraf flush
cycle.
If `initial` is not set or is empty, Telegraf evaluates the status expressions
immediately, even before the first flush.
### Evaluation order
The `order` setting controls which expressions are evaluated and in what
sequence.
> [!Note]
> If you omit a status from the `order` list, its expression is **not
> evaluated**.
## Configuration reference
Configure status evaluation in the `[outputs.heartbeat.status]` section of the
heartbeat output plugin.
You must include `"status"` in the `include` list for status evaluation to take
effect.
```toml
[[outputs.heartbeat]]
url = "http://telegraf_controller.example.com/agents/heartbeat"
instance_id = "agent-123"
interval = "1m"
include = ["hostname", "statistics", "status"]
[outputs.heartbeat.status]
## CEL expressions that return a boolean.
## The first expression that evaluates to true sets the status.
ok = "metrics > 0"
warn = "log_errors > 0"
fail = "log_errors > 10"
## Evaluation order (default: ["ok", "warn", "fail"])
order = ["ok", "warn", "fail"]
## Default status when no expression matches
## Options: "ok", "warn", "fail", "undefined"
default = "ok"
## Initial status before the first flush cycle
## Options: "ok", "warn", "fail", "undefined", ""
# initial = ""
```
| Option | Type | Default | Description |
|:-------|:-----|:--------|:------------|
| `ok` | string (CEL) | `"false"` | Expression that, when `true`, sets status to **ok**. |
| `warn` | string (CEL) | `"false"` | Expression that, when `true`, sets status to **warn**. |
| `fail` | string (CEL) | `"false"` | Expression that, when `true`, sets status to **fail**. |
| `order` | list of strings | `["ok", "warn", "fail"]` | Order in which expressions are evaluated. |
| `default` | string | `"ok"` | Status used when no expression evaluates to `true`. Options: `ok`, `warn`, `fail`, `undefined`. |
| `initial` | string | `""` | Status before the first flush. Options: `ok`, `warn`, `fail`, `undefined`, `""` (empty = evaluate expressions). |
{{< children hlevel="h2" >}}
View File
@ -0,0 +1,257 @@
---
title: CEL expression examples
description: >
Real-world examples of CEL expressions for evaluating Telegraf agent status.
menu:
telegraf_controller:
name: Examples
parent: Agent status evaluation
weight: 203
related:
- /telegraf/controller/agents/status/
- /telegraf/controller/reference/agent-status-eval/variables/
- /telegraf/controller/reference/agent-status-eval/functions/
---
Each example includes a scenario description, the CEL expression, a full
heartbeat plugin configuration block, and an explanation.
For the full list of available variables and functions, see:
- [CEL variables](/telegraf/controller/reference/agent-status-eval/variables/)
- [CEL functions and operators](/telegraf/controller/reference/agent-status-eval/functions/)
## Basic health check
**Scenario:** Report `ok` when Telegraf is actively processing metrics.
Fall back to the default status (`fail`) when no expression matches, meaning
the agent is considered healthy only while metrics are flowing.
**Expression:**
```js
ok = "metrics > 0"
```
**Configuration:**
```toml
[[outputs.heartbeat]]
url = "http://telegraf_controller.example.com/agents/heartbeat"
instance_id = "agent-123"
interval = "1m"
include = ["hostname", "statistics", "configs", "logs", "status"]
[outputs.heartbeat.status]
ok = "metrics > 0"
default = "fail"
```
**How it works:** If the heartbeat plugin received metrics since the last
heartbeat, the status is `ok`.
If no metrics arrived, no expression matches and the `default` status of `fail`
is used, indicating the agent is not processing data.
## Error rate monitoring
**Scenario:** Warn when any errors are logged and fail when the error count is
high.
**Expressions:**
```js
warn = "log_errors > 0"
fail = "log_errors > 10"
```
**Configuration:**
```toml
[[outputs.heartbeat]]
url = "http://telegraf_controller.example.com/agents/heartbeat"
instance_id = "agent-123"
interval = "1m"
include = ["hostname", "statistics", "configs", "logs", "status"]
[outputs.heartbeat.status]
ok = "log_errors == 0 && log_warnings == 0"
warn = "log_errors > 0"
fail = "log_errors > 10"
order = ["fail", "warn", "ok"]
default = "ok"
```
**How it works:** Expressions are evaluated in `fail`, `warn`, `ok` order.
If more than 10 errors occurred since the last heartbeat, the status is `fail`.
If 1-10 errors occurred, the status is `warn`.
If no errors or warnings occurred, the status is `ok`.
## Buffer health
**Scenario:** Warn when any output plugin's buffer exceeds 80% fullness,
indicating potential data backpressure, and fail when fullness exceeds 95%.
**Expression:**
```js
warn = "outputs.influxdb_v2.exists(o, o.buffer_fullness > 0.8)"
fail = "outputs.influxdb_v2.exists(o, o.buffer_fullness > 0.95)"
```
**Configuration:**
```toml
[[outputs.heartbeat]]
url = "http://telegraf_controller.example.com/agents/heartbeat"
instance_id = "agent-123"
interval = "1m"
include = ["hostname", "statistics", "configs", "logs", "status"]
[outputs.heartbeat.status]
ok = "metrics > 0"
warn = "outputs.influxdb_v2.exists(o, o.buffer_fullness > 0.8)"
fail = "outputs.influxdb_v2.exists(o, o.buffer_fullness > 0.95)"
order = ["fail", "warn", "ok"]
default = "ok"
```
**How it works:** The `influxdb_v2` key in the `outputs` map contains a list
of all configured `influxdb_v2` output plugin instances.
The `exists()` function iterates over all instances and returns `true` if any
instance's `buffer_fullness` exceeds the threshold.
At 95% fullness, the status is `fail`; at 80%, `warn`; otherwise `ok`.
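
In Python terms, `exists()` behaves like `any()` over the list of plugin instances. The following is a rough model of the buffer check, with instance dictionaries standing in for the CEL plugin statistics:

```python
# Rough model of the CEL buffer check: exists() acts like any()
# over the list of influxdb_v2 output instances.
def buffer_status(influxdb_v2_instances):
    fullness = [i["buffer_fullness"] for i in influxdb_v2_instances]
    if any(f > 0.95 for f in fullness):
        return "fail"
    if any(f > 0.8 for f in fullness):
        return "warn"
    return "ok"

# One healthy instance and one at 90% fullness: warn wins.
print(buffer_status([{"buffer_fullness": 0.5},
                     {"buffer_fullness": 0.9}]))  # warn
```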
## Plugin-specific checks
**Scenario:** Monitor a specific input plugin for collection errors and use
safe access patterns to avoid errors when the plugin is not configured.
**Expression:**
```js
warn = "has(inputs.cpu) && inputs.cpu.exists(i, i.errors > 0)"
fail = "has(inputs.cpu) && inputs.cpu.exists(i, i.startup_errors > 0)"
```
**Configuration:**
```toml
[[outputs.heartbeat]]
url = "http://telegraf_controller.example.com/agents/heartbeat"
instance_id = "agent-123"
interval = "1m"
include = ["hostname", "statistics", "configs", "logs", "status"]
[outputs.heartbeat.status]
ok = "metrics > 0"
warn = "has(inputs.cpu) && inputs.cpu.exists(i, i.errors > 0)"
fail = "has(inputs.cpu) && inputs.cpu.exists(i, i.startup_errors > 0)"
order = ["fail", "warn", "ok"]
default = "ok"
```
**How it works:** The `has()` function checks if the `cpu` key exists in the
`inputs` map before attempting to access it.
This prevents evaluation errors when the plugin is not configured.
If the plugin has startup errors, the status is `fail`.
If it has collection errors, the status is `warn`.
## Composite conditions
**Scenario:** Combine multiple signals to detect a degraded agent — high error
count combined with output buffer pressure.
**Expression:**
```js
fail = "log_errors > 5 && has(outputs.influxdb_v2) && outputs.influxdb_v2.exists(o, o.buffer_fullness > 0.9)"
```
**Configuration:**
```toml
[[outputs.heartbeat]]
url = "http://telegraf_controller.example.com/agents/heartbeat"
instance_id = "agent-123"
interval = "1m"
include = ["hostname", "statistics", "configs", "logs", "status"]
[outputs.heartbeat.status]
ok = "metrics > 0 && log_errors == 0"
warn = "log_errors > 0 || (has(outputs.influxdb_v2) && outputs.influxdb_v2.exists(o, o.buffer_fullness > 0.8))"
fail = "log_errors > 5 && has(outputs.influxdb_v2) && outputs.influxdb_v2.exists(o, o.buffer_fullness > 0.9)"
order = ["fail", "warn", "ok"]
default = "ok"
```
**How it works:** The `fail` expression requires **both** a high error count
**and** buffer pressure to trigger.
The `warn` expression uses `||` to trigger on **either** condition independently.
This layered approach avoids false alarms from transient spikes in a single
metric.
## Time-based expressions
**Scenario:** Warn when the time since the last successful heartbeat exceeds a
threshold, indicating potential connectivity or performance issues.
**Expression:**
```js
warn = "now() - last_update > duration('10m')"
fail = "now() - last_update > duration('30m')"
```
**Configuration:**
```toml
[[outputs.heartbeat]]
url = "http://telegraf_controller.example.com/agents/heartbeat"
instance_id = "agent-123"
interval = "1m"
include = ["hostname", "statistics", "configs", "logs", "status"]
[outputs.heartbeat.status]
ok = "metrics > 0"
warn = "now() - last_update > duration('10m')"
fail = "now() - last_update > duration('30m')"
order = ["fail", "warn", "ok"]
default = "undefined"
initial = "undefined"
```
**How it works:** The `now()` function returns the current time and
`last_update` is the timestamp of the last successful heartbeat.
Subtracting them produces a duration that can be compared against a threshold.
The `initial` status is set to `undefined` so new agents don't immediately show
a stale-data warning before their first successful heartbeat.
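
The threshold logic reduces to comparing the heartbeat age against two durations. A minimal Python sketch of that comparison, using the 10m/30m thresholds from the example above:

```python
from datetime import datetime, timedelta, timezone

# Simplified model of the staleness check: compare the time since
# the last heartbeat against warn (10m) and fail (30m) thresholds.
def staleness_status(last_update, now=None):
    now = now or datetime.now(timezone.utc)
    age = now - last_update
    if age > timedelta(minutes=30):
        return "fail"
    if age > timedelta(minutes=10):
        return "warn"
    return "ok"

now = datetime.now(timezone.utc)
print(staleness_status(now - timedelta(minutes=15), now))  # warn
```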
## Custom evaluation order
**Scenario:** Use fail-first evaluation to prioritize detecting critical issues
before checking for healthy status.
**Configuration:**
```toml
[[outputs.heartbeat]]
url = "http://telegraf_controller.example.com/agents/heartbeat"
instance_id = "agent-123"
interval = "1m"
include = ["hostname", "statistics", "configs", "logs", "status"]
[outputs.heartbeat.status]
ok = "metrics > 0 && log_errors == 0"
warn = "log_errors > 0"
fail = "log_errors > 10 || agent.metrics_dropped > 100"
order = ["fail", "warn", "ok"]
default = "undefined"
```
**How it works:** By setting `order = ["fail", "warn", "ok"]`, the most severe
conditions are checked first.
If the agent has more than 10 logged errors or has dropped more than 100
metrics, the status is `fail` — regardless of whether the `ok` or `warn`
expression would also match.
This is the recommended order for production deployments where early detection
of critical issues is important.
View File
@ -0,0 +1,120 @@
---
title: CEL functions and operators
description: >
Reference for functions and operators available in CEL expressions used to
evaluate Telegraf agent status.
menu:
telegraf_controller:
name: Functions
parent: Agent status evaluation
weight: 202
---
CEL expressions for agent status evaluation support built-in CEL operators and
the following function libraries.
## Time functions
### `now()`
Returns the current time.
Use with `last_update` to calculate durations or detect stale data.
```js
// True if more than 10 minutes since last heartbeat
now() - last_update > duration('10m')
```
```js
// True if more than 5 minutes since last heartbeat
now() - last_update > duration('5m')
```
## Math functions
Math functions from the
[CEL math library](https://github.com/google/cel-go/blob/master/ext/README.md#math)
are available for numeric calculations.
### Commonly used functions
| Function | Description | Example |
|:---------|:------------|:--------|
| `math.greatest(a, b, ...)` | Returns the greatest value. | `math.greatest(log_errors, log_warnings)` |
| `math.least(a, b, ...)` | Returns the least value. | `math.least(agent.metrics_gathered, 1000)` |
### Example
```js
// Warn if either errors or warnings exceed a threshold
math.greatest(log_errors, log_warnings) > 5
```
## String functions
String functions from the
[CEL strings library](https://github.com/google/cel-go/blob/master/ext/README.md#strings)
are available for string operations.
These are useful when checking plugin `alias` or `id` fields.
### Example
```js
// Check if any input plugin has an alias containing "critical"
inputs.cpu.exists(i, has(i.alias) && i.alias.contains("critical"))
```
## Encoding functions
Encoding functions from the
[CEL encoder library](https://github.com/google/cel-go/blob/master/ext/README.md#encoders)
are available for encoding and decoding values.
## Operators
CEL supports standard operators for building expressions.
### Comparison operators
| Operator | Description | Example |
|:---------|:------------|:--------|
| `==` | Equal | `metrics == 0` |
| `!=` | Not equal | `log_errors != 0` |
| `<` | Less than | `agent.metrics_gathered < 100` |
| `<=` | Less than or equal | `buffer_fullness <= 0.5` |
| `>` | Greater than | `log_errors > 10` |
| `>=` | Greater than or equal | `metrics >= 1000` |
### Logical operators
| Operator | Description | Example |
|:---------|:------------|:--------|
| `&&` | Logical AND | `log_errors > 0 && metrics == 0` |
| `\|\|` | Logical OR | `log_errors > 10 \|\| log_warnings > 50` |
| `!` | Logical NOT | `!(metrics > 0)` |
### Arithmetic operators
| Operator | Description | Example |
|:---------|:------------|:--------|
| `+` | Addition | `log_errors + log_warnings` |
| `-` | Subtraction | `agent.metrics_gathered - agent.metrics_dropped` |
| `*` | Multiplication | `log_errors * 2` |
| `/` | Division | `agent.metrics_dropped / agent.metrics_gathered` |
| `%` | Modulo | `metrics % 100` |
### Ternary operator
```js
// Conditional expression
log_errors > 10 ? true : false
```
### List operations
| Function | Description | Example |
|:---------|:------------|:--------|
| `exists(var, condition)` | True if any element matches. | `inputs.cpu.exists(i, i.errors > 0)` |
| `all(var, condition)` | True if all elements match. | `outputs.influxdb_v2.all(o, o.errors == 0)` |
| `size()` | Number of elements. | `inputs.cpu.size() > 0` |
| `has()` | True if a field or key exists. | `has(inputs.cpu)` |
View File
@ -0,0 +1,150 @@
---
title: CEL variables
description: >
Reference for variables available in CEL expressions used to evaluate
Telegraf agent status in {{% product-name %}}.
menu:
telegraf_controller:
name: Variables
parent: Agent status evaluation
weight: 201
---
CEL expressions for agent status evaluation have access to variables that
represent data collected by Telegraf since the last successful heartbeat message
(unless noted otherwise).
## Top-level variables
| Variable | Type | Description |
| :------------- | :--- | :---------------------------------------------------------------------------------------------------- |
| `metrics` | int | Number of metrics arriving at the heartbeat output plugin. |
| `log_errors` | int | Number of errors logged by the Telegraf instance. |
| `log_warnings` | int | Number of warnings logged by the Telegraf instance. |
| `last_update` | time | Timestamp of the last successful heartbeat message. Use with `now()` to calculate durations or rates. |
| `agent` | map | Agent-level statistics. See [Agent statistics](#agent-statistics). |
| `inputs` | map | Input plugin statistics. See [Input plugin statistics](#input-plugin-statistics-inputs). |
| `outputs` | map | Output plugin statistics. See [Output plugin statistics](#output-plugin-statistics-outputs). |
## Agent statistics
The `agent` variable is a map containing aggregate statistics for the entire
Telegraf instance.
These fields correspond to the `internal_agent` metric from the
Telegraf [internal input plugin](/telegraf/v1/plugins/#input-internal).
| Field | Type | Description |
| :----------------------- | :--- | :-------------------------------------------------- |
| `agent.metrics_written` | int | Total metrics written by all output plugins. |
| `agent.metrics_rejected` | int | Total metrics rejected by all output plugins. |
| `agent.metrics_dropped` | int | Total metrics dropped by all output plugins. |
| `agent.metrics_gathered` | int | Total metrics collected by all input plugins. |
| `agent.gather_errors` | int | Total collection errors across all input plugins. |
| `agent.gather_timeouts` | int | Total collection timeouts across all input plugins. |
### Example
```js
agent.gather_errors > 0
```
## Input plugin statistics (`inputs`)
The `inputs` variable is a map where each key is a plugin type (for example,
`cpu` for `inputs.cpu`) and the value is a **list** of plugin instances.
Each entry in the list represents one configured instance of that plugin type.
These fields correspond to the `internal_gather` metric from the Telegraf
[internal input plugin](/telegraf/v1/plugins/#input-internal).
| Field | Type | Description |
| :----------------- | :----- | :---------------------------------------------------------------------------------------- |
| `id` | string | Unique plugin identifier. |
| `alias` | string | Alias set for the plugin. Only exists if an alias is defined in the plugin configuration. |
| `errors` | int | Collection errors for this plugin instance. |
| `metrics_gathered` | int | Number of metrics collected by this instance. |
| `gather_time_ns` | int | Time spent gathering metrics, in nanoseconds. |
| `gather_timeouts` | int | Number of timeouts during metric collection. |
| `startup_errors` | int | Number of times the plugin failed to start. |
### Access patterns
Access a specific plugin type and iterate over its instances:
```js
// Check if any cpu input instance has errors
inputs.cpu.exists(i, i.errors > 0)
```
```js
// Access the first instance of the cpu input
inputs.cpu[0].metrics_gathered
```
Use `has()` to safely check if a plugin type exists before accessing it:
```js
// Safe access — returns false if no cpu input is configured
has(inputs.cpu) && inputs.cpu.exists(i, i.errors > 0)
```
## Output plugin statistics (`outputs`)
The `outputs` variable is a map with the same structure as `inputs`.
Each key is a plugin type (for example, `influxdb_v3` for `outputs.influxdb_v3`)
and the value is a list of plugin instances.
These fields correspond to the `internal_write` metric from the Telegraf
[internal input plugin](/telegraf/v1/plugins/#input-internal).
| Field | Type | Description |
| :----------------- | :----- | :------------------------------------------------------------------------------------------------------- |
| `id` | string | Unique plugin identifier. |
| `alias` | string | Alias set for the plugin. Only exists if an alias is defined in the plugin configuration. |
| `errors` | int | Write errors for this plugin instance. |
| `metrics_filtered` | int | Number of metrics filtered by the output. |
| `write_time_ns` | int | Time spent writing metrics, in nanoseconds. |
| `startup_errors` | int | Number of times the plugin failed to start. |
| `metrics_added` | int | Number of metrics added to the output buffer. |
| `metrics_written` | int | Number of metrics written to the output destination. |
| `metrics_rejected` | int | Number of metrics rejected by the service or serialization. |
| `metrics_dropped` | int | Number of metrics dropped (for example, due to buffer fullness). |
| `buffer_size` | int | Current number of metrics in the output buffer. |
| `buffer_limit` | int | Capacity of the output buffer. Irrelevant for disk-based buffers. |
| `buffer_fullness` | float | Ratio of metrics in the buffer to capacity. Can exceed `1.0` (greater than 100%) for disk-based buffers. |
### Access patterns
```js
// Access the first instance of the InfluxDB v3 output plugin
outputs.influxdb_v3[0].metrics_written
```
```js
// Check if any InfluxDB v3 output has write errors
outputs.influxdb_v3.exists(o, o.errors > 0)
```
```js
// Check buffer fullness across all instances of an output
outputs.influxdb_v3.exists(o, o.buffer_fullness > 0.8)
```
Use `has()` to safely check if a plugin type exists before accessing it:
```js
// Safe access — returns false if no influxdb_v3 output is configured
has(outputs.influxdb_v3) && outputs.influxdb_v3.exists(o, o.errors > 0)
```
## Accumulation behavior
Unless noted otherwise, all variable values are **accumulated since the last
successful heartbeat message**.
Use the `last_update` variable with `now()` to calculate rates — for example:
```js
// True if the error rate exceeds 1 error per minute
log_errors > 0 && (now() - last_update).getMinutes() > 0
  && log_errors / (now() - last_update).getMinutes() > 1
```
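
Because the counters accumulate between heartbeats, a rate is simply the counter divided by the elapsed time. A Python sketch of the same per-minute error-rate check (function name and threshold are illustrative):

```python
from datetime import datetime, timedelta, timezone

# Sketch of the rate calculation: log_errors accumulates since the
# last successful heartbeat, so dividing by the elapsed minutes
# yields an errors-per-minute rate.
def error_rate_exceeds(log_errors, last_update, threshold_per_min=1.0,
                       now=None):
    now = now or datetime.now(timezone.utc)
    minutes = (now - last_update).total_seconds() / 60
    if log_errors == 0 or minutes <= 0:
        return False
    return log_errors / minutes > threshold_per_min

now = datetime.now(timezone.utc)
# 5 errors over 2 minutes = 2.5 errors/minute, above the threshold.
print(error_rate_exceeds(5, now - timedelta(minutes=2), now=now))  # True
```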

View File
---
title: Authorization
description: >
Understand how authentication and authorization work in Telegraf Controller,
including user roles, API tokens, and endpoint security.
menu:
telegraf_controller:
name: Authorization
parent: Reference
weight: 106
related:
- /telegraf/controller/users/
- /telegraf/controller/tokens/
- /telegraf/controller/settings/
---
{{% product-name %}} uses session-based authentication for the web UI and
token-based authentication for API and Telegraf agent requests.
Both mechanisms work together to control who can access the system and what
actions they can perform.
## User roles
{{% product-name %}} enforces a four-tier role hierarchy.
Each role inherits the permissions of the roles below it, and higher roles
unlock additional administrative capabilities.
| Role | Description |
| :-------------- | :------------------------------------------------------------------------------------------------------------------- |
| **Owner** | Full system access. Manages users, tokens, and settings. |
| **Administrator** | Full system access. Same capabilities as the owner except cannot transfer ownership. |
| **Manager** | Manages configurations, agents, labels, and reporting rules. Manages own API tokens. Cannot manage users or settings. |
| **Viewer** | Read-only access to configurations, agents, labels, and reporting rules. Cannot manage tokens, users, or settings. |
Only one owner can exist at a time.
The owner account is created during initial setup and cannot be deleted.
If you need to change the owner, the current owner must transfer ownership to
another user.
> [!Tip]
> To change the owner of your {{% product-name %}} instance, see [Transfer ownership](/telegraf/controller/users/transfer-ownership/).
## API tokens
API tokens authenticate programmatic API requests and Telegraf agent connections
to {{% product-name %}}.
Each token is scoped to the user who created it.
The token's effective permissions are restricted to the creating user's role---a
token cannot exceed the permissions of its owner.
If a user's role changes to one with fewer permissions, all of that user's
existing tokens are automatically restricted or revoked to match the new role.
Tokens use the `tc-apiv1_` prefix, making them easy to identify in configuration
files and scripts.
> [!Important]
> A token value is shown only once at the time of creation.
> Store it in a secure location immediately---you cannot retrieve it later.
## Endpoint authentication
By default, {{% product-name %}} requires authentication for API endpoints.
Administrators can selectively require authentication for individual endpoint
groups:
- **Agents** --- agent management endpoints
- **Configs** --- configuration management endpoints
- **Labels** --- label management endpoints
- **Reporting rules** --- reporting rule management endpoints
- **Heartbeat** --- agent heartbeat endpoints
When authentication is enabled for an endpoint group, every request to that
group must include a valid API token or an active session.
> [!Note]
> To configure which endpoint groups require authentication, see
> [Manage settings](/telegraf/controller/settings/).
@ -0,0 +1,217 @@
---
title: Telegraf Controller release notes
description: >
Important features, bug fixes, and changes in Telegraf Controller releases.
menu:
telegraf_controller:
name: Release notes
parent: Reference
weight: 101
---
## v0.0.5-beta {date="2026-03-26"}
<!-- This link should only appear on the latest version; update and move it with new versions. -->
[Download Telegraf Controller v0.0.5-beta](/telegraf/controller/install/#download-and-install-telegraf-controller)
### Important changes
This release introduces user and account management, API token authentication,
and configurable authentication options.
By default, authentication is required to interact with all API endpoints.
If you have agents reading configurations from and reporting heartbeats
to {{% product-name %}}, they will begin to fail with authorization errors.
**To avoid agent authorization errors:**
1. Temporarily disable authentication on the **Heartbeat** and **Configs** APIs.
You can use either the `--disable-auth-endpoints` command flag or the
`DISABLED_AUTH_ENDPOINTS` environment variable when starting
{{% product-name %}}.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Command flags](#)
[Environment Variables](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
<!--pytest.mark.skip-->
```bash
telegraf_controller --disable-auth-endpoints=configs,heartbeat
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
<!--pytest.mark.skip-->
```bash
export DISABLED_AUTH_ENDPOINTS="configs,heartbeat"
telegraf_controller
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
2. [Create an API token](/telegraf/controller/tokens/create/) with read
permissions on the **Configs** API and write permissions on the
**Heartbeat** API.
3. Use the `INFLUX_TOKEN` environment variable to define the `token` option
in your heartbeat output plugin configuration:
```toml { .tc-dynamic-values }
[[outputs.heartbeat]]
# ...
token = "${INFLUX_TOKEN}"
```
4. Define the `INFLUX_TOKEN` environment variable in your Telegraf
environment:
<!--pytest.mark.skip-->
```bash {placeholders="YOUR_TELEGRAF_CONTROLLER_TOKEN"}
export INFLUX_TOKEN=YOUR_TELEGRAF_CONTROLLER_TOKEN
telegraf --config "https://localhost:8888/api/configs/..."
```
Replace {{% code-placeholder-key %}}`YOUR_TELEGRAF_CONTROLLER_TOKEN`{{% /code-placeholder-key %}}
with your {{% product-name %}} API token.
> [!Important]
> Use the `INFLUX_TOKEN` environment variable specifically.
> When this variable is present, Telegraf uses it to set the token in the
> `Authorization` header when requesting the configuration.
5. Navigate to the **Settings** page in {{% product-name %}} and re-enable
   authentication on the **Configs** and **Heartbeat** APIs. Save your changes.
### Features
- Add user authentication and session management with login and setup pages.
- Add user management with invite system, password reset, and password
complexity validation.
- Add token management with create workflow and management pages.
- Add account management page with ownership transfer flow.
- Add settings page.
- Add application version retrieval and display.
- Enhance Heartbeat plugin with logs, status configurations, and agent
status checks.
- Add dynamic parsing component support for Exec and Google Cloud PubSub Push plugins.
- Add plugin support to the Telegraf Builder UI:
- Aerospike (`inputs.aerospike`)
- Alibaba Cloud Monitor Service (Aliyun) (`inputs.aliyuncms`)
- Amazon Elastic Container Service (`inputs.ecs`)
- AMD ROCm System Management Interface (SMI) (`inputs.amd_rocm_smi`)
- AMQP Consumer (`inputs.amqp_consumer`)
- Apache (`inputs.apache`)
- APC UPSD (`inputs.apcupsd`)
- Apache Aurora (`inputs.aurora`)
- Azure Queue Storage (`inputs.azure_storage_queue`)
- Bcache (`inputs.bcache`)
- Beanstalkd (`inputs.beanstalkd`)
- Beat (`inputs.beat`)
- BIND 9 Nameserver (`inputs.bind`)
- Bond (`inputs.bond`)
- Burrow (`inputs.burrow`)
- Ceph Storage (`inputs.ceph`)
- chrony (`inputs.chrony`)
- Cisco Model-Driven Telemetry (MDT) (`inputs.cisco_telemetry_mdt`)
- ClickHouse (`inputs.clickhouse`)
- Google Cloud PubSub Push (`inputs.cloud_pubsub_push`)
- Amazon CloudWatch Metric Streams (`inputs.cloudwatch_metric_streams`)
- Netfilter Conntrack (`inputs.conntrack`)
- Hashicorp Consul (`inputs.consul`)
- Hashicorp Consul Agent (`inputs.consul_agent`)
- Bosch Rexroth ctrlX Data Layer (`inputs.ctrlx_datalayer`)
- Mesosphere Distributed Cloud OS (`inputs.dcos`)
- Device Mapper Cache (`inputs.dmcache`)
- Data Plane Development Kit (DPDK) (`inputs.dpdk`)
- Elasticsearch (`inputs.elasticsearch`)
- Ethtool (`inputs.ethtool`)
- Exec (`inputs.exec`)
- Fibaro (`inputs.fibaro`)
- File (`inputs.file`)
- Filecount (`inputs.filecount`)
- File statistics (`inputs.filestat`)
- Fireboard (`inputs.fireboard`)
- AWS Data Firehose (`inputs.firehose`)
- Fluentd (`inputs.fluentd`)
- Fritzbox (`inputs.fritzbox`)
- GitHub (`inputs.github`)
- gNMI (gRPC Network Management Interface) (`inputs.gnmi`)
- Google Cloud Storage (`inputs.google_cloud_storage`)
- GrayLog (`inputs.graylog`)
- HAProxy (`inputs.haproxy`)
- HDDtemp (`inputs.hddtemp`)
- HTTP (`inputs.http`)
- HTTP Listener v2 (`inputs.http_listener_v2`)
- HueBridge (`inputs.huebridge`)
- Hugepages (`inputs.hugepages`)
- Icinga2 (`inputs.icinga2`)
- InfiniBand (`inputs.infiniband`)
- InfluxDB (`inputs.influxdb`)
- InfluxDB Listener (`inputs.influxdb_listener`)
- InfluxDB V2 Listener (`inputs.influxdb_v2_listener`)
- Intel Baseband Accelerator (`inputs.intel_baseband`)
- Intel® Dynamic Load Balancer (`inputs.intel_dlb`)
- Intel® Platform Monitoring Technology (`inputs.intel_pmt`)
### Bug fixes
- Fix default Heartbeat plugin configuration and environment variable exports.
---
## v0.0.4-alpha {date="2026-02-05"}
### Features
- Require InfluxData EULA acceptance before starting the server.
- Add plugin support to the Telegraf Builder UI and TOML parser:
- ActiveMQ (`inputs.activemq`)
- Vault (`secretstores.vault`)
- All parsers
- All serializers
- Add support for custom logs directory.
- Reduce binary size.
### Bug fixes
- Fix question mark position in deletion popup.
---
## v0.0.3-alpha {date="2026-01-14"}
### Features
- Add linux-arm64 binary support.
- Add build validation for missing plugins.
- Add local file handling for configurations.
---
## v0.0.2-alpha {date="2026-01-13"}
### Features
- Identify external configurations for Telegraf agents.
- Add SSL support for backend connections.
- Add health check status API endpoint.
- Add `Last-Modified` header to GET TOML API response and remove duplicate
protocol handling.
- Compile native Rust NAPI server for heartbeat service.
### Bug fixes
- Fix default parsing unit to use seconds.
- Fix command line string generation.
---
## v0.0.1-alpha {date="2026-01-01"}
_Initial alpha build of Telegraf Controller._
@ -0,0 +1,143 @@
---
title: Manage settings
description: >
Configure authentication requirements, login security, and password
policies in Telegraf Controller.
menu:
telegraf_controller:
name: Manage settings
weight: 9
---
Owners and administrators can configure authentication, login security, and
password requirements for {{% product-name %}}.
Navigate to the **Settings** page from the left navigation menu to view and
modify these settings.
{{< img-hd src="/img/telegraf/controller-settings.png" alt="Telegraf Controller settings page" />}}
## Require authentication per endpoint
{{% product-name %}} organizes API endpoints into groups.
Authentication can be required or disabled for each group independently, giving
you fine-grained control over which resources require credentials.
| Endpoint group | Covers |
| :---------------- | :------------------------------ |
| `agents` | Agent monitoring and management |
| `configs` | Configuration management |
| `labels` | Label management |
| `reporting-rules` | Reporting rule management |
| `heartbeat` | Agent heartbeat requests |
When authentication is disabled for a group, anyone with network access can use
those endpoints without an API token.
When enabled, requests require valid authentication.
> [!Note]
> By default, authentication is required for all endpoints.
To toggle authentication for endpoint groups:
1. Navigate to the **Settings** page.
2. Toggle authentication on or off for each endpoint group.
3. Click **Save**.
> [!Warning]
> Disabling authentication for endpoints means anyone with network access to
> {{% product-name %}} can access those resources without credentials.
### Environment variable and CLI flag
You can configure disabled authentication endpoints at startup using the
`DISABLED_AUTH_ENDPOINTS` environment variable or the `--disable-auth-endpoints`
CLI flag.
The value is a comma-separated list of endpoint groups, or `"*"` to disable
authentication for all endpoints.
```bash
# Disable auth for agents and heartbeat only
export DISABLED_AUTH_ENDPOINTS="agents,heartbeat"
# Disable auth for all endpoints
export DISABLED_AUTH_ENDPOINTS="*"
```
Using the CLI flag:
```bash
# Disable auth for agents and heartbeat only
./telegraf_controller --disable-auth-endpoints=agents,heartbeat
# Disable auth for all endpoints
./telegraf_controller --disable-auth-endpoints="*"
```
These values are used as initial defaults when {{% product-name %}} creates its settings record for the first time.
After that, changes made through the **Settings** page take precedence.
## Login security
### Login attempts
You can configure the number of failed login attempts allowed before an account is locked out.
The default threshold is 5 attempts, with a minimum of 1.
To change the login attempt threshold:
1. Navigate to the **Settings** page.
2. Update the **Login attempts** value.
3. Click **Save**.
### Login lockout
When a user exceeds the failed attempt threshold, their account is locked for a configurable duration.
The default lockout duration is 15 minutes, with a minimum of 1 minute.
The lockout clears automatically after the configured duration has elapsed.
To change the lockout duration:
1. Navigate to the **Settings** page.
2. Update the **Login lockout duration** value.
3. Click **Save**.
> [!Tip]
> If a user is locked out, an owner or administrator can [reset their password](/telegraf/controller/users/update/#reset-a-users-password) to unlock the account.
### Password complexity requirements
{{% product-name %}} provides three password complexity levels that apply to all
password operations, including initial setup, password changes, password resets,
and invite completion.
| Level | Min length | Uppercase* | Lowercase* | Digits* | Special characters* |
| :--------- | :--------: | :--------: | :--------: | :-----: | :-----------------: |
| **Low** | 8 | No | No | No | No |
| **Medium** | 10 | Yes | Yes | Yes | No |
| **High** | 12 | Yes | Yes | Yes | Yes |
{{% caption %}}
\* **Yes** indicates the password must include at least one character of that type.
{{% /caption %}}
To change the password complexity level:
1. Navigate to the **Settings** page.
2. Select the desired **Password complexity** level.
3. Click **Save**.
> [!Note]
> Changing the password complexity level does not affect existing passwords. The new requirements apply only when users set or change their passwords.
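As a rough local sanity check, the **High** policy can be sketched in shell (this mirrors the table above; it is not the server's actual validator):

```bash
# Check a candidate password against the "High" policy: 12+ characters with
# at least one uppercase, lowercase, digit, and special character.
pw='Example#Pass12'   # candidate password (example value)
ok=1
[ ${#pw} -ge 12 ] || ok=0                          # minimum length
case "$pw" in *[A-Z]*) ;; *) ok=0 ;; esac          # uppercase
case "$pw" in *[a-z]*) ;; *) ok=0 ;; esac          # lowercase
case "$pw" in *[0-9]*) ;; *) ok=0 ;; esac          # digit
case "$pw" in *[!A-Za-z0-9]*) ;; *) ok=0 ;; esac   # special character
echo "$ok"   # 1 if the password satisfies the policy
```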
### Environment variables
You can set initial defaults for login security settings using environment variables.
These values are applied when {{% product-name %}} initializes its settings for the first time.
Changes made on the **Settings** page override initialized settings.
| Environment variable | Description | Default |
| :----------------------- | :----------------------------------------- | :-----: |
| `LOGIN_LOCKOUT_ATTEMPTS` | Failed attempts before lockout | `5` |
| `LOGIN_LOCKOUT_MINUTES` | Minutes to lock account | `15` |
| `PASSWORD_COMPLEXITY` | Complexity level (`low`, `medium`, `high`) | `low` |
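For example, to initialize stricter defaults before the first startup (the values and binary path below are illustrative):

```bash
# Initial login-security defaults, read only when settings are first initialized
export LOGIN_LOCKOUT_ATTEMPTS=3    # lock after 3 failed attempts
export LOGIN_LOCKOUT_MINUTES=30    # keep the account locked for 30 minutes
export PASSWORD_COMPLEXITY=high    # require 12+ characters of all types

# Then start the server so it picks up these defaults:
# ./telegraf_controller
```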
@ -0,0 +1,69 @@
---
title: Manage API tokens
description: >
Create and manage API tokens for authenticating API requests and
Telegraf agent connections to Telegraf Controller.
menu:
telegraf_controller:
name: Manage API tokens
weight: 8
cascade:
related:
- /telegraf/controller/reference/authorization/
---
API tokens authenticate requests to the {{% product-name %}} API and Telegraf agent connections.
Use tokens to authorize Telegraf agents, heartbeat requests, and external API clients.
## Token format
All API tokens use the `tc-apiv1_` prefix, making them easy to identify in
configuration files and scripts.
The full token value is displayed only once at the time of creation and cannot be retrieved later.
Copy and store the token in a secure location immediately after creating it.
> [!Important]
> #### Raw token strings are not stored
>
> Tokens are stored as a cryptographic hash. The original value is never saved.
> If you lose a token, you must revoke it and create a new one.
## Token permissions
Each token is scoped to a specific user.
Token permissions are restricted to the permissions allowed by the user's role.
A token cannot exceed the permissions of the user it belongs to.
When you create a token, you can set custom permissions to restrict the token's
access below your full role permissions.
This lets you issue narrowly scoped tokens for specific tasks, such as a token
that can only register agents or a token limited to read-only access.
## Token states
Tokens exist in one of two states:
- **Active** --- The token can be used for authentication.
- **Revoked** --- The token is permanently disabled but the record is retained
  for auditing purposes.
Revoking a token is irreversible.
Any agent or client using a revoked token immediately loses access.
## Token visibility
Your role determines which tokens you can view and manage:
| Role | Token visibility |
|:------------------|:----------------------------------|
| **Owner** | All tokens across all users |
| **Administrator** | All tokens across all users |
| **Manager** | Only their own tokens |
| **Viewer** | Cannot manage tokens |
> [!Note]
> **Owner** and **Administrator** users can revoke any token in the organization,
> including tokens belonging to other users.
{{< children hlevel="h2" >}}
@ -0,0 +1,63 @@
---
title: Create an API token
description: >
Create a new API token for authenticating with the Telegraf Controller API.
menu:
telegraf_controller:
name: Create a token
parent: Manage API tokens
weight: 101
---
Create a new API token to authenticate requests to the {{% product-name %}} API.
Tokens let you grant scoped access to external tools, scripts, and services without sharing your login credentials.
> [!Important]
> #### Required permissions
>
> You must have an **Owner**, **Administrator**, or **Manager** role assigned to
> your account.
## Create a token
1. Navigate to the **API Tokens** page.
2. Click **Create Token**.
3. Enter a **Description** for the token that identifies where or how the token
will be used.
4. _(Optional)_ Set an **Expiration** date.
Tokens without an expiration date remain active indefinitely.
5. _(Optional)_ Set **Custom permissions** to restrict the token's access below
your role's full permissions.
See [Custom permissions](#custom-permissions) for details.
6. Click **Create**.
{{< img-hd src="/img/telegraf/controller-create-token.png" alt="Telegraf Controller create token form" />}}
> [!Important]
> #### Copy and store your token
>
> Copy your API token immediately after creation.
> The full token value is only displayed once and cannot be retrieved later.
## Custom permissions
When you set custom permissions on a token, {{% product-name %}} intersects
those permissions with your role's existing permissions.
This means you can use custom permissions to narrow a token's access, but you
cannot create a token with more access than your role allows.
For example, if you have the **Manager** role, you cannot create a token with
user management permissions.
The resulting token will only include the permissions that overlap with what
your role grants.
Custom permissions are useful when you want to issue a token for a specific task,
such as read-only access to configurations, without exposing the full scope of
your role.
## If you lose a token
If you lose or forget a token value, you cannot recover it.
Revoke the lost token and create a new one to restore access.
For instructions on revoking a token, see [Revoke an API token](/telegraf/controller/tokens/revoke/).
@ -0,0 +1,60 @@
---
title: Delete a token
description: >
Permanently delete an API token from Telegraf Controller.
menu:
telegraf_controller:
name: Delete a token
parent: Manage API tokens
weight: 105
---
Deleting a token immediately and permanently removes it so it cannot be used
for authentication.
Unlike revocation, deletion removes all data associated with the token,
including its history.
> [!Warning]
> #### Deleting an API token cannot be undone
>
> Deleting a token is permanent and cannot be undone. Any agents or clients
> using this token will lose access immediately.
## Delete versus revoke
{{% product-name %}} supports two ways to remove a token from active use:
**deletion** and **revocation**.
- **Deleted** tokens are permanently removed from the system.
No record of the token is retained after deletion.
- **Revoked** tokens remain visible in the token list with a **Revoked** status.
This provides an audit trail showing when the token was created and when it
was disabled. Revoked tokens cannot be used for authentication.
Use revoke when you want to disable a token but maintain an audit trail.
Use delete when you want to completely remove the token and its record from the system.
For more information about revoking a token, see
[Revoke a token](/telegraf/controller/tokens/revoke/).
## Delete a token
1. Navigate to the **API Tokens** page or open the token's detail view.
2. Click **Delete** to initiate the deletion. If you're on the token detail
   page, select the **Manage** tab to reveal the **Delete** action.
3. In the confirmation dialog, confirm that you want to permanently delete the token.
Once confirmed, the token is immediately deleted. Any agent or integration
that relies on the deleted token will no longer be able to authenticate with
{{% product-name %}}.
## Bulk delete tokens
You can delete multiple tokens at once from the **API Tokens** page.
1. On the **API Tokens** page, select the checkboxes next to each token you want to delete.
2. Click the **Delete** option in the bulk actions bar.
3. In the confirmation dialog, review the number of tokens to be deleted and confirm.
All selected tokens are permanently removed and immediately invalidated.
Verify that no active agents depend on the selected tokens before confirming the
bulk deletion.
@ -0,0 +1,64 @@
---
title: Reassign a token
description: >
Reassign an API token from one user to another in Telegraf Controller.
menu:
telegraf_controller:
name: Reassign a token
parent: Manage API tokens
weight: 103
---
Reassigning an API token from one user to another in Telegraf Controller lets
you transfer ownership of that token to another user without disrupting any
external clients using the token.
> [!Important]
> #### Required permissions
>
> To reassign an API token, you must have the **Owner** or **Administrator**
> role in {{% product-name %}}.
## Reassign a token
You can reassign an individual token from one user to another directly from the
token's detail view or the tokens list.
1. In {{% product-name %}}, navigate to the **API Tokens** page or open the
detail page for the token you want to reassign.
2. Click **Reassign** on the token you want to transfer. If you're on the token
   detail page, select the **Manage** tab to reveal the **Reassign** action.
3. In the dialog that appears, select the target user you want to assign the
token to.
4. Click **Confirm** to complete the reassignment.
> [!Important]
> When you reassign a token, its permissions are automatically restricted to
> match the target user's role. For example, a token with full access reassigned
> to a Viewer becomes a read-only token.
## Bulk reassign
If you need to reassign multiple tokens at once, use the bulk reassign option.
1. On the **API Tokens** page, select the checkboxes next to the tokens you want
to reassign.
2. Click the **Reassign** option in the bulk actions bar.
3. Select the target user you want to assign the selected tokens to.
4. Click **Confirm** to reassign all selected tokens.
The same permission restriction applies during bulk reassignment. Each token's
permissions are adjusted to align with the target user's role.
## When to reassign
Reassigning tokens lets you transfer ownership without revoking and recreating
tokens. This is useful in several common scenarios:
- **Offboarding a user**: A user is leaving the organization and their tokens
should continue working under another account.
Reassigning ensures active integrations are not disrupted.
- **Reorganizing responsibilities**: Team members are shifting roles or
responsibilities and token ownership should reflect the new structure.
- **Consolidating ownership after role changes**: After updating user roles, you
may want to consolidate tokens under a single account to simplify token management.
@ -0,0 +1,61 @@
---
title: Revoke a token
description: >
Revoke an API token to immediately prevent its use while keeping
the token record for auditing.
menu:
telegraf_controller:
name: Revoke a token
parent: Manage API tokens
weight: 104
---
Revoking a token immediately prevents it from being used for authentication
while keeping the token record in the system for auditing purposes.
Unlike deletion, revocation preserves a full history of the token, including
when it was created and when it was disabled.
## Revoke versus delete
{{% product-name %}} supports two ways to remove a token from active use:
**revocation** and **deletion**.
- **Revoked** tokens remain visible in the token list with a **Revoked** status.
This provides an audit trail showing when the token was created and when it
was disabled. Revoked tokens cannot be used for authentication.
- **Deleted** tokens are permanently removed from the system.
No record of the token is retained after deletion.
Use revoke when you want to disable a token but maintain an audit trail.
Use delete when you want to completely remove the token and its record from the system.
For more information about deleting a token, see
[Delete a token](/telegraf/controller/tokens/delete/).
## Revoke a token
1. Navigate to the **API Tokens** page, or open the token's detail view.
2. Click **Revoke**. If you're on the token detail page, select the **Manage**
   tab to reveal the **Revoke** action.
3. Confirm the revocation in the dialog.
The token status changes to **Revoked** and any requests that use the token are
immediately rejected.
> [!Note]
> #### You cannot reactivate a revoked token
>
> Revocation is permanent.
> If you need to restore access, create a new token.
> See [Create a token](/telegraf/controller/tokens/create/).
## Bulk revoke
To revoke multiple tokens at once:
1. On the **API Tokens** page, select the tokens you want to revoke.
2. Click **Revoke** in the bulk actions bar.
3. Confirm the revocation in the dialog.
All selected tokens are immediately revoked and can no longer be used for
authentication.
@ -0,0 +1,81 @@
---
title: Use API tokens
description: >
Use API tokens to authenticate Telegraf agents, heartbeat requests,
and external API clients with Telegraf Controller.
menu:
telegraf_controller:
name: Use tokens
parent: Manage API tokens
weight: 102
---
API tokens authenticate requests to {{% product-name %}}.
Use tokens to connect Telegraf agents, authorize heartbeat reporting, and
integrate external API clients.
## With Telegraf agents
Configure your Telegraf agent to include an API token when retrieving
configurations and reporting heartbeats to {{% product-name %}}.
Telegraf agents require API tokens with the following permissions:
- **Configs**: Read
- **Heartbeat**: Write
### Use the INFLUX_TOKEN environment variable
When retrieving a configuration from a URL, Telegraf sends an `Authorization`
header only when it detects the `INFLUX_TOKEN` environment variable. To authorize Telegraf
to retrieve a configuration from {{% product-name %}}, define the `INFLUX_TOKEN`
environment variable:
<!--pytest.mark.skip-->
```bash { placeholders="YOUR_TC_API_TOKEN" }
export INFLUX_TOKEN=YOUR_TC_API_TOKEN
telegraf \
  --config "http://telegraf_controller.example.com/api/configs/xxxxxx/toml"
```
Replace {{% code-placeholder-key %}}`YOUR_TC_API_TOKEN`{{% /code-placeholder-key %}}
with your {{% product-name %}} API token.
### For heartbeat requests
Telegraf uses the [Heartbeat output plugin](/telegraf/v1/output-plugins/heartbeat/)
to send heartbeats to {{% product-name %}}.
Use the `INFLUX_TOKEN` environment variable to define the `token` option in your
heartbeat plugin configuration.
Telegraf uses the environment variable value defined when starting Telegraf.
```toml { .tc-dynamic-values }
[[outputs.heartbeat]]
url = "http://telegraf_controller.example.com/agents/heartbeat"
instance_id = "&{agent_id}"
interval = "1m"
include = ["hostname", "statistics", "configs"]
token = "${INFLUX_TOKEN}"
```
When authentication is required for the heartbeat endpoint, agents must include
a valid token with each heartbeat request.
If a heartbeat request is missing a token or includes an invalid token,
{{% product-name %}} rejects the request and the agent's status is not updated.
## With external API clients
Include the token in the `Authorization` header when making API requests to
{{% product-name %}}:
```
Authorization: Bearer tc-apiv1_<token>
```
The token's permissions determine which API endpoints and operations are accessible.
Requests made with a token that lacks the required permissions are rejected with an authorization error.
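For example, with curl (the host, path, and token value below are placeholders):

```bash
TOKEN="tc-apiv1_your_token_here"               # placeholder token value
AUTH_HEADER="Authorization: Bearer ${TOKEN}"

# List configurations using the token (example host and endpoint):
# curl -s -H "$AUTH_HEADER" "http://telegraf_controller.example.com/api/configs"
```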
> [!Note]
> If authentication is disabled for an endpoint group in **Settings**, requests to those endpoints do not require a token.
> See [Settings](/telegraf/controller/settings/#require-authentication-per-endpoint) for details on configuring authentication requirements per endpoint.
@ -0,0 +1,46 @@
---
title: Manage users
description: >
Manage user accounts in Telegraf Controller, including creating, updating,
disabling, and deleting users.
menu:
telegraf_controller:
name: Manage users
weight: 7
cascade:
related:
- /telegraf/controller/reference/authorization/
---
Users are accounts that can log into the {{% product-name %}} web interface and
interact with the system based on their assigned role.
You can create, update, disable, and delete users to control who has access to
your {{% product-name %}} instance.
## User states
Each user account is in one of the following states:
- **Active** --- The user can log in and perform actions based on their assigned
role.
- **Disabled** --- The user cannot log in. Existing API tokens remain associated
with the account but are unusable while the user is disabled.
- **Locked** --- A temporary state triggered by too many failed login attempts.
The lock clears automatically after the configured lockout period. See the
[Settings](/telegraf/controller/settings/) page for configuration options.
## User roles
{{% product-name %}} supports four roles with different levels of access:
| Role | Access level |
|:------------------|:--------------------------------------------------------------------|
| **Owner** | Full access. Manages users, tokens, and settings. |
| **Administrator** | Full access except ownership transfer. |
| **Manager** | Manages configs, agents, labels, reporting rules, and own tokens. |
| **Viewer** | Read-only access. |
For more details about roles and permissions, see
[Authorization](/telegraf/controller/reference/authorization/).
{{< children hlevel="h2" >}}
@ -0,0 +1,54 @@
---
title: Manage your account
description: >
Update your username, email address, and password in Telegraf Controller.
menu:
telegraf_controller:
name: Manage your account
parent: Manage users
weight: 101
---
Any authenticated user can update their own account details from the account page.
Use the account page to change your username, email address, or password at any time.
{{< img-hd src="/img/telegraf/controller-account-page.png" alt="Telegraf Controller account page" />}}
## Update your username
Your username is your display name throughout {{% product-name %}}.
Each username must be unique across the system.
1. Click your profile icon in the top-right corner and select **Account**.
2. In the **Username** field, enter your new username.
3. Click **Save**.
If the username you entered is already taken, {{% product-name %}} displays an
error. Choose a different username and try again.
## Update your email address
Each email address must be unique and in a valid format.
1. Click your profile icon in the top-right corner and select **Account**.
2. In the **Email** field, enter your new email address.
3. Click **Save**.
If the email address is already associated with another account or is not in a
valid format, {{% product-name %}} displays an error.
Correct the email address and try again.
## Update your password
To change your password, you must provide your current password along with the
new one.
1. Click your profile icon in the top-right corner and select **Account**.
2. In the **Current Password** field, enter your existing password.
3. In the **New Password** field, enter your new password.
4. In the **Confirm Password** field, re-enter the new password.
5. Click **Save**.
> [!Note]
> Your new password must meet the password complexity requirements configured by your administrator.
> For more information, see [Password complexity requirements](/telegraf/controller/settings/#password-complexity-requirements).

View File

@ -0,0 +1,48 @@
---
title: Delete a user
description: >
Permanently delete a user account and all associated API tokens from
Telegraf Controller.
menu:
telegraf_controller:
name: Delete a user
parent: Manage users
weight: 106
---
> [!Warning]
> #### Deleting a user cannot be undone
>
> Deleting a user is permanent and cannot be undone.
> All of the user's API tokens are also deleted.
## What deletion removes
When you delete a user from {{% product-name %}}, the following are permanently
removed:
- User account and credentials
- All API tokens owned by the user
- All active sessions
## Delete a user
1. In the {{% product-name %}} UI, navigate to **Users** and click the user you
want to delete to open their detail page.
2. Click **Delete User**.
3. In the confirmation dialog, confirm the deletion.
The user is immediately removed and can no longer authenticate with
{{% product-name %}}.
## Restrictions
- You cannot delete your own account.
- You cannot delete the owner -- you must
[transfer ownership](/telegraf/controller/users/transfer-ownership/) first.
- Only the owner can delete administrator accounts.
> [!Tip]
> If you're unsure whether to delete a user, consider
> [disabling them](/telegraf/controller/users/disable/) first.
> Disabled accounts can be re-enabled later.

View File

@ -0,0 +1,41 @@
---
title: Disable a user
description: >
Disable a user account to prevent login without deleting the account
or its associated tokens.
menu:
telegraf_controller:
name: Disable a user
parent: Manage users
weight: 105
---
Disabling a user prevents them from logging in without permanently deleting their account or tokens.
This is useful when you want to temporarily revoke access or are unsure whether to delete the account.
## What disabling does
When you disable a user account in {{% product-name %}}:
- The user cannot log in to the web interface.
- All active sessions are destroyed immediately.
- Existing API tokens remain in the system but cannot be used for authentication
while the user is disabled.
- The user's data (account details, token records) is preserved.
## Disable a user
1. Navigate to the user's detail page.
2. Toggle the user's status to **Disabled** (or click the **Disable** option).
3. Confirm the action.
> [!Note]
> You cannot disable your own account or the **Owner** account.
## Re-enable a user
1. Navigate to the disabled user's detail page.
2. Toggle the user's status to **Active** (or click the **Enable** option).
Once re-enabled, the user can log in immediately with their existing credentials.
Their API tokens also become usable again.

View File

@ -0,0 +1,76 @@
---
title: Invite a new user
description: >
Invite new users to Telegraf Controller by generating an invite link with
a pre-assigned role.
menu:
telegraf_controller:
name: Invite a new user
parent: Manage users
weight: 102
---
Owners and administrators can invite new users to {{% product-name %}} by
generating an invite link with a pre-assigned role and expiration.
The invited user opens the link, sets a password, and their account is
immediately active.
> [!Note]
> You must have the **Owner** or **Administrator** role to create invites.
## Create an invite
1. Navigate to the **Users** page.
2. Click the {{% icon "plus" %}} **Invite User** button.
3. Enter a **Username** for the new user (3--50 characters).
4. Enter the user's **Email** address.
5. Select a **Role** for the new user:
- **Administrator** -- full access to all resources and user management.
- **Manager** -- can manage configurations, agents, and labels but cannot
manage users.
- **Viewer** -- read-only access to all resources.
6. Set the invite **Expiration** in hours. The default is 72 hours. Valid
values range from 1 to 720 hours (30 days).
7. Click **Create Invite**.
{{< img-hd src="/img/telegraf/controller-invite-user.png" alt="Telegraf Controller invite user form" />}}
> [!Note]
> You cannot invite a user with the **Owner** role. To make someone the owner,
> first invite them as an **Administrator**, then
> [transfer ownership](/telegraf/controller/users/transfer-ownership/).
## Share the invite link
After creating the invite, {{% product-name %}} displays a unique invite link.
Copy the link and share it with the user through your preferred communication
channel (email, chat, etc.).
The link expires after the duration you configured. Once expired, the link can
no longer be used and you must create a new invite.
## Accept an invite
The invited user completes the following steps to activate their account:
1. Open the invite link in a browser.
2. Set a password that meets the configured complexity requirements.
3. Click **Create Account**.
The account activates immediately and the user is automatically logged in with
the role assigned during the invite.
## Manage pending invites
You can view and manage all pending invites from the **Users** page.
Pending invites appear in a separate list above active users.
To revoke a pending invite before it is used:
1. Navigate to the **Users** page.
2. Locate the pending invite you want to remove.
3. Click the **Delete** button next to the invite.
4. Confirm the deletion when prompted.
Deleting a pending invite invalidates the invite link. The invited user can no
longer use it to create an account.

View File

@ -0,0 +1,57 @@
---
title: Transfer ownership
description: >
Transfer the Telegraf Controller owner role to another administrator.
menu:
telegraf_controller:
name: Transfer ownership
parent: Manage users
weight: 104
---
The **Owner** role grants full administrative access to {{% product-name %}},
including the ability to manage all users, tokens, and settings. Only one owner
can exist at a time. The current owner can transfer ownership to any active
administrator.
## Prerequisites and restrictions
- Only the current **Owner** can transfer ownership.
- The target user must have the **Administrator** role and be in an active state.
- If the target user is a **Manager** or **Viewer**, you must first promote them
to **Administrator**. See
[Change a user's role](/telegraf/controller/users/update/#change-a-users-role).
- You cannot transfer ownership to yourself.
## Transfer the owner role
1. Navigate to the **Users** page or the target user's detail page.
2. Choose the target **Administrator** from the list (if not already selected).
3. Select the **Make Owner** option. (On the user detail page, open the
   **Manage** tab to reveal the **Make Owner** option.)
4. Confirm the username of the user you want to transfer ownership to and click
**Transfer Ownership**.
{{< img-hd src="/img/telegraf/controller-transfer-ownership.png" alt="Telegraf Controller transfer ownership confirmation" />}}
## What happens during transfer
When you confirm the transfer, {{% product-name %}} performs an atomic operation
that updates both accounts simultaneously:
- The current owner is demoted to **Administrator**.
- The target user is promoted to **Owner**.
- Both users' sessions are destroyed -- both must log in again.
- The operation is atomic: both changes succeed together or neither takes effect.
> [!Tip]
> #### Coordinate ownership transfers
>
> Coordinate with the target user before transferring ownership. Both accounts
> are logged out immediately after the transfer completes.

> [!Warning]
> #### You cannot reclaim the Owner role yourself
>
> Once transferred, you cannot reclaim the **Owner** role yourself. The new
> owner must transfer it back to you.

View File

@ -0,0 +1,84 @@
---
title: Update users
description: >
Reset user passwords, change user roles, and manage user accounts in
Telegraf Controller.
menu:
telegraf_controller:
name: Update users
parent: Manage users
weight: 103
---
Owners and administrators can reset passwords and change roles for other users in {{% product-name %}}.
These actions help maintain account security and ensure users have the appropriate level of access.
## Reset a user's password
When a user forgets their password or needs a credential refresh, you can
generate a time-limited reset link for them.
> [!Note]
> You must have the **Owner** or **Administrator** role to reset passwords.
> Only the **Owner** can reset **Administrator** passwords.
### Generate a password reset link
1. Navigate to the user's detail page.
2. Click **Reset Password**.
3. Set the link expiration. The default is 24 hours, but you can configure it from 1 to 720 hours.
4. Click **Generate Link** to create the reset link.
5. Copy the generated reset link and share it with the user through a secure channel.
### Complete a password reset
After receiving a reset link, the user completes the following steps:
1. Open the reset link in a browser.
2. Enter a new password that meets the complexity requirements.
3. Click **Submit** to save the new password.
> [!Note]
> The user is not automatically logged in after resetting their password.
> They must log in with their new credentials.
### Emergency owner password reset
If the owner account is locked out or the owner has forgotten their password,
you can reset it using environment variables.
1. Set the following environment variables:
- `RESET_OWNER_PASSWORD=true`
- `OWNER_PASSWORD` to the desired new password
2. Restart the {{% product-name %}} application.
3. Log in with the new password.
4. Remove the `RESET_OWNER_PASSWORD` and `OWNER_PASSWORD` environment variables.
> [!Warning]
> Remove the `RESET_OWNER_PASSWORD` and `OWNER_PASSWORD` environment variables after successfully logging in. Leaving them set causes the password to reset on every application restart.
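For reference, the reset variables can be supplied through your process manager. The following sketch shows a hypothetical systemd drop-in; the unit name `telegraf-controller.service`, the file path, and the password value are assumptions -- adapt them to your deployment, and delete the drop-in after logging in.

```ini
# /etc/systemd/system/telegraf-controller.service.d/reset-owner.conf
# Hypothetical drop-in; unit name and password are example values.
[Service]
Environment=RESET_OWNER_PASSWORD=true
Environment=OWNER_PASSWORD=TempOwnerPass123!
```

After creating the drop-in, run `systemctl daemon-reload` and restart the service. Once you have logged in with the new password, delete this file, reload, and restart again so the reset does not repeat.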
## Change a user's role
You can promote or demote users by changing their assigned role.
> [!Note]
> You must have the **Owner** or **Administrator** role to change a user's role.
> Only the **Owner** can change a user's role to **Administrator**.
1. Navigate to the user's detail page.
2. Select the user's new role.
3. Confirm the change when prompted.
The following restrictions apply to role changes:
- You cannot assign the **Owner** role directly. To make a user the owner,
the current owner must [transfer ownership](/telegraf/controller/users/transfer-ownership/).
> [!Important]
> #### Side effects of changing a user's role
>
> - The user's API token permissions are clamped to match the new role.
>   If the new role cannot manage tokens (such as **Viewer**), all active tokens
>   are revoked.
> - The user's active sessions are destroyed. They must log in again to continue
> using {{% product-name %}}.

View File

@ -10,7 +10,7 @@ introduced: "v1.5.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
- /telegraf/v1/configure_plugins/
- https://github.com/influxdata/telegraf/tree/v1.38.0/plugins/aggregators/basicstats/README.md, Basic Statistics Plugin Source
- https://github.com/influxdata/telegraf/tree/v1.38.1/plugins/aggregators/basicstats/README.md, Basic Statistics Plugin Source
---
# Basic Statistics Aggregator Plugin

View File

@ -10,7 +10,7 @@ introduced: "v1.18.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
- /telegraf/v1/configure_plugins/
- https://github.com/influxdata/telegraf/tree/v1.38.0/plugins/aggregators/derivative/README.md, Derivative Plugin Source
- https://github.com/influxdata/telegraf/tree/v1.38.1/plugins/aggregators/derivative/README.md, Derivative Plugin Source
---
# Derivative Aggregator Plugin

View File

@ -10,7 +10,7 @@ introduced: "v1.11.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
- /telegraf/v1/configure_plugins/
- https://github.com/influxdata/telegraf/tree/v1.38.0/plugins/aggregators/final/README.md, Final Plugin Source
- https://github.com/influxdata/telegraf/tree/v1.38.1/plugins/aggregators/final/README.md, Final Plugin Source
---
# Final Aggregator Plugin

View File

@ -10,7 +10,7 @@ introduced: "v1.4.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
- /telegraf/v1/configure_plugins/
- https://github.com/influxdata/telegraf/tree/v1.38.0/plugins/aggregators/histogram/README.md, Histogram Plugin Source
- https://github.com/influxdata/telegraf/tree/v1.38.1/plugins/aggregators/histogram/README.md, Histogram Plugin Source
---
# Histogram Aggregator Plugin

View File

@ -10,7 +10,7 @@ introduced: "v1.13.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
- /telegraf/v1/configure_plugins/
- https://github.com/influxdata/telegraf/tree/v1.38.0/plugins/aggregators/merge/README.md, Merge Plugin Source
- https://github.com/influxdata/telegraf/tree/v1.38.1/plugins/aggregators/merge/README.md, Merge Plugin Source
---
# Merge Aggregator Plugin

View File

@ -10,7 +10,7 @@ introduced: "v1.1.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
- /telegraf/v1/configure_plugins/
- https://github.com/influxdata/telegraf/tree/v1.38.0/plugins/aggregators/minmax/README.md, Minimum-Maximum Plugin Source
- https://github.com/influxdata/telegraf/tree/v1.38.1/plugins/aggregators/minmax/README.md, Minimum-Maximum Plugin Source
---
# Minimum-Maximum Aggregator Plugin

View File

@ -10,7 +10,7 @@ introduced: "v1.18.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
- /telegraf/v1/configure_plugins/
- https://github.com/influxdata/telegraf/tree/v1.38.0/plugins/aggregators/quantile/README.md, Quantile Plugin Source
- https://github.com/influxdata/telegraf/tree/v1.38.1/plugins/aggregators/quantile/README.md, Quantile Plugin Source
---
# Quantile Aggregator Plugin

Some files were not shown because too many files have changed in this diff.