Merge branch 'master' into copilot/fix-broken-docker-link
commit 21c9c9f191
@@ -13,7 +13,7 @@ set -euo pipefail
 # --minAlertLevel=suggestion \
 # --config=content/influxdb/cloud-dedicated/.vale.ini

-VALE_VERSION="3.13.1"
+VALE_VERSION="3.14.0"
 VALE_MAJOR_MIN=3

 if command -v vale &>/dev/null; then
@@ -0,0 +1,82 @@
---
name: doc-review-agent
description: |
  Diff-only PR review agent for documentation changes. Reviews Markdown
  changes against style guide, frontmatter rules, shortcode syntax, and
  documentation standards. Available for local Claude Code review sessions.
model: sonnet
---

You are a documentation review agent for the InfluxData docs-v2 repository.
Your job is to review PR diffs for documentation quality issues. You review
Markdown source only — visual/rendered review is handled separately by Copilot.

## Review Scope

Check the PR diff for these categories. Reference the linked docs for
detailed rules — do not invent rules that aren't documented.

### 1. Frontmatter

Rules: [DOCS-FRONTMATTER.md](../../DOCS-FRONTMATTER.md)

- `title` and `description` are required on every page
- `menu` structure matches the product's menu key
- `weight` is present and uses the correct range (1-99, 101-199, etc.)
- `source` paths for shared content point to valid `/shared/` paths
- No duplicate or conflicting frontmatter keys
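A hypothetical example of frontmatter that would pass these checks (the field values are illustrative, not taken from a real page):

```yaml
---
title: Write data
description: Write time series data to InfluxDB 3 Core.
menu:
  influxdb3_core:
    name: Write data
weight: 102
---
```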
### 2. Shortcode Syntax

Rules: [DOCS-SHORTCODES.md](../../DOCS-SHORTCODES.md)

- Shortcodes use correct opening/closing syntax (`{{< >}}` vs `{{% %}}`
  depending on whether inner content is Markdown)
- Required parameters are present
- Closing tags match opening tags
- Callouts use GitHub-style syntax: `> [!Note]`, `> [!Warning]`, etc.

### 3. Semantic Line Feeds

Rules: [DOCS-CONTRIBUTING.md](../../DOCS-CONTRIBUTING.md)

- One sentence per line
- Long sentences should be on their own line, not concatenated

### 4. Heading Hierarchy

- No h1 headings in content (h1 comes from `title` frontmatter)
- Headings don't skip levels (h2 → h4 without h3)

### 5. Terminology and Product Names

- Use official product names: "InfluxDB 3 Core", "InfluxDB 3 Enterprise",
  "InfluxDB Cloud Serverless", "InfluxDB Cloud Dedicated", etc.
- Don't mix v2/v3 terminology in v3 docs (e.g., "bucket" in Core docs)
- Version references match the content path

### 6. Links

- Internal links use relative paths or Hugo `relref` shortcodes
- No hardcoded `docs.influxdata.com` links in content files
- Anchor links match actual heading IDs

### 7. Shared Content

- `source:` frontmatter points to an existing shared file path
- Shared files don't contain frontmatter (only content)
- Changes to shared content are intentional (affects multiple products)

## Output Format

Follow the shared review comment format, severity definitions, and label
mapping in
[.github/templates/review-comment.md](../../.github/templates/review-comment.md).

## What NOT to Review

- Rendered HTML appearance (Copilot handles this)
- Code correctness inside code blocks (pytest handles this)
- Link validity (link-checker workflow handles this)
- Vale style linting (Vale handles this)
- Files outside the diff
@ -0,0 +1,72 @@
|
|||
---
|
||||
name: doc-triage-agent
|
||||
description: |
|
||||
Triage agent for documentation issues and PRs. Applies product labels,
|
||||
assesses priority, and determines readiness for automated workflows.
|
||||
Uses data/products.yml as the single source of truth for path-to-product
|
||||
mapping.
|
||||
model: sonnet
|
||||
---
|
||||
|
||||
You are a documentation triage agent for the InfluxData docs-v2 repository.
|
||||
Your job is to label, prioritize, and route issues and PRs for the
|
||||
documentation team.
|
||||
|
||||
## Label Taxonomy
|
||||
|
||||
Apply labels using the definitions in these source files:
|
||||
|
||||
- **Product labels** (`product:*`): Read
|
||||
[data/products.yml](../../data/products.yml) — match changed file paths
|
||||
against each product's `content_path`, apply `product:{label_group}`.
|
||||
Apply all matching labels. For shared content, apply `product:shared` plus
|
||||
labels for all products that reference the shared file.
|
||||
- **Non-product labels**: Read
|
||||
[data/labels.yml](../../data/labels.yml) for all source, waiting, workflow,
|
||||
and review label names and descriptions.
|
||||
- **Review labels** (`review:*`): Defined in `data/labels.yml` but applied
|
||||
only by the doc-review workflow, not during triage.
|
||||
|
||||
## Priority Assessment
|
||||
|
||||
Assess priority based on:
|
||||
|
||||
1. **Product tier:** InfluxDB 3 Core/Enterprise > Cloud Dedicated/Serverless > v2 > v1
|
||||
2. **Issue type:** Incorrect information > missing content > style issues
|
||||
3. **Scope:** Security/data-loss implications > functional docs > reference docs
|
||||
4. **Staleness:** Issues with `waiting:*` labels older than 14 days should be
|
||||
escalated or re-triaged
|
||||
|
||||
## Decision Logic
|
||||
|
||||
### When to apply `agent-ready`
|
||||
|
||||
Apply when ALL of these are true:
|
||||
- The issue has clear, actionable requirements
|
||||
- No external dependencies (no `waiting:*` labels)
|
||||
- The fix is within the documentation scope (not a product bug)
|
||||
- Product labels are applied (agent needs to know which content to modify)
|
||||
|
||||
### When to apply `waiting:*`
|
||||
|
||||
Apply when the issue:
|
||||
- References undocumented API behavior → `waiting:engineering`
|
||||
- Requires a product decision about feature naming or scope → `waiting:product`
|
||||
- Needs clarification from the reporter about expected behavior → add a comment asking, don't apply waiting
|
||||
|
||||
### When to apply `review:needs-human`
|
||||
|
||||
Apply during triage only if:
|
||||
- The issue involves complex cross-product implications
|
||||
- The content change could affect shared content used by many products
|
||||
- The issue requires domain expertise the agent doesn't have
|
||||
|
||||
## Triage Workflow
|
||||
|
||||
1. Read the issue/PR title and body
|
||||
2. Identify affected products from content paths or mentions
|
||||
3. Apply product labels
|
||||
4. Apply source label if applicable
|
||||
5. Assess whether the issue is ready for agent work
|
||||
6. Apply `agent-ready` or `waiting:*` as appropriate
|
||||
7. Post a brief triage comment summarizing the labeling decision
|
||||
|
|
@@ -61,6 +61,15 @@ You are an expert InfluxDB v1 technical writer with deep knowledge of InfluxData
 5. **Apply Standards:** Ensure compliance with style guidelines and documentation conventions
 6. **Cross-Reference:** Verify consistency with related documentation and product variants

+## Release Documentation Workflow
+
+**Always create separate PRs for OSS v1 and Enterprise v1 releases.**
+
+- **OSS v1:** Publish immediately when the release tag is available on GitHub (`https://github.com/influxdata/influxdb/releases/tag/v1.x.x`).
+- **Enterprise v1:** Publish only after the release artifact is generally available (GA) in the InfluxData portal. Create the PR as a **draft** until the v1 codeowner signals readiness (e.g., applies a release label).
+- **`data/products.yml`:** Split version bumps per product. The OSS PR bumps `influxdb.latest_patches.v1`; the Enterprise PR bumps `enterprise_influxdb.latest_patches.v1`.
+- **PR template:** Use `.github/pull_request_template/influxdb_v1_release.md` and select the appropriate release type (OSS or Enterprise).
+
 ## Quality Assurance

 - All code examples must be testable and include proper pytest-codeblocks annotations
@@ -222,6 +222,29 @@ influxdb3_core, influxdb3_enterprise, telegraf
 /influxdb3/core, /influxdb3/enterprise, /telegraf
 ```

+## v1 Release Workflow
+
+**InfluxDB v1 releases require separate PRs for OSS and Enterprise.**
+
+1. **OSS PR** — publish immediately when the GitHub release tag is available.
+2. **Enterprise PR** — create as a draft; merge only after the v1 codeowner signals readiness (e.g., applies a release label) and the release artifact is GA in the InfluxData portal.
+
+Each PR should bump only its own product version in `data/products.yml`:
+- OSS: `influxdb > latest_patches > v1`
+- Enterprise: `enterprise_influxdb > latest_patches > v1`
+
+Use the PR template `.github/pull_request_template/influxdb_v1_release.md` and select the appropriate release type.
+
+### Examples for v1
+
+```bash
+# Generate OSS v1 release notes
+docs release-notes v1.12.2 v1.12.3 --repos ~/github/influxdata/influxdb
+
+# Generate Enterprise v1 release notes (separate PR)
+# Use the Enterprise changelog at https://dl.influxdata.com/enterprise/nightlies/master/CHANGELOG.md
+```
+
 ## Related

 - **docs-cli-workflow** skill - When to use CLI tools
@@ -1,51 +1,51 @@
{
  "permissions": {
    "allow": [
      "Bash(.ci/vale/vale.sh:*)",
      "Bash(npm:*)",
      "Bash(yarn:*)",
      "Bash(pnpm:*)",
      "Bash(npx:*)",
      "Bash(node:*)",
      "Bash(python:*)",
      "Bash(python3:*)",
      "Bash(pip:*)",
      "Bash(poetry:*)",
      "Bash(make:*)",
      "Bash(cargo:*)",
      "Bash(go:*)",
      "Bash(curl:*)",
      "Bash(gh:*)",
      "Bash(hugo:*)",
      "Bash(htmlq:*)",
      "Bash(jq:*)",
      "Bash(yq:*)",
      "Bash(mkdir:*)",
      "Bash(cat:*)",
      "Bash(ls:*)",
      "Bash(echo:*)",
      "Bash(rg:*)",
      "Bash(grep:*)",
      "Bash(find:*)",
      "Bash(bash:*)",
      "Bash(wc:*)",
      "Bash(sort:*)",
      "Bash(uniq:*)",
      "Bash(head:*)",
      "Bash(tail:*)",
      "Bash(awk:*)",
      "Bash(touch:*)",
      "Bash(docker:*)",
      "Edit",
      "Read",
      "Write",
      "Grep",
      "Glob",
      "LS",
      "Skill(superpowers:brainstorming)",
      "Skill(superpowers:brainstorming:*)",
      "mcp__acp__Bash"
    ],
"deny": [
|
||||
"Read(./.env)",
|
||||
"Read(./.env.*)",
|
||||
|
|
@ -58,5 +58,8 @@
|
|||
"Bash(rm:*)",
|
||||
"Read(/tmp)"
|
||||
]
|
||||
},
|
||||
"enabledPlugins": {
|
||||
"github@claude-plugins-official": true
|
||||
}
|
||||
}
|
||||
@@ -359,33 +359,12 @@ Use the Documentation MCP Server when the information here is inconclusive, when

 ### Setup

-The documentation MCP server is hosted—no local installation required. Add the server URL to your AI assistant's MCP configuration.
+The documentation MCP server is hosted at `https://influxdb-docs.mcp.kapa.ai`—no local installation required.

-**MCP server URL:**
+Already configured in [`.mcp.json`](/.mcp.json). Two server entries are available:

-```text
-https://influxdb-docs.mcp.kapa.ai
-```
-
-**Claude Desktop configuration** (Settings > Developer):
-
-```json
-{
-  "mcpServers": {
-    "influxdb-docs": {
-      "url": "https://influxdb-docs.mcp.kapa.ai"
-    }
-  }
-}
-```
-
 For other AI assistants see the [InfluxDB documentation MCP server guide](/influxdb3/core/admin/mcp-server/)
 and verify the MCP configuration options and syntax for a specific AI assistant.

-**Rate limits** (per Google OAuth user):
-
-- 40 requests per hour
-- 200 requests per day
+- **`influxdb-docs`** (API key) — Set `INFLUXDATA_DOCS_KAPA_API_KEY` env var. 60 req/min.
+- **`influxdb-docs-oauth`** (OAuth) — No setup. Authenticates via Google or GitHub on first use. 40 req/hr, 200 req/day.

 ### Available Tool
@@ -552,17 +531,12 @@ touch content/influxdb3/enterprise/path/to/file.md

 ### MCP Server Not Responding

-The hosted MCP server (`https://influxdb-docs.mcp.kapa.ai`) requires:
-
-1. **Google OAuth authentication** - On first use, sign in with Google
-2. **Rate limits** - 40 requests/hour, 200 requests/day per user
-
 **Troubleshooting steps:**

-- Verify your AI assistant has the MCP server URL configured correctly
-- Check if you've exceeded rate limits (wait an hour or until the next day)
-- Try re-authenticating by clearing your OAuth session
-- Ensure your network allows connections to `*.kapa.ai`
+- **API key auth** (`influxdb-docs`): Verify `INFLUXDATA_DOCS_KAPA_API_KEY` is set. Rate limit: 60 req/min.
+- **OAuth auth** (`influxdb-docs-oauth`): Sign in with Google or GitHub on first use. Rate limits: 40 req/hr, 200 req/day.
+- Verify your network allows connections to `*.kapa.ai`
+- Check if you've exceeded rate limits (wait and retry)

 ### Cypress Tests Fail
@@ -299,15 +299,42 @@ echo "systemd" >> .ci/vale/styles/config/vocabularies/InfluxDataDocs/accept.txt

 ### Creating a Product-Specific Override

+> [!Important]
+> Product-specific `.vale.ini` files must include the same disabled rules as the
+> root `.vale.ini`. Rules disabled in the root config are **not** inherited by
+> product-specific configs. Omitting them re-enables the rules for those products.
+> For example, omitting `Google.Units = NO` causes duration literals like `7d`,
+> `24h` to be flagged as errors in product-specific linting runs.
+
 ```bash
 # 1. Create product-specific .vale.ini
 cat > content/influxdb3/cloud-dedicated/.vale.ini << 'EOF'
 StylesPath = ../../../.ci/vale/styles
-MinAlertLevel = error
+MinAlertLevel = warning
 Vocab = InfluxDataDocs

 Packages = Google, write-good, Hugo

 [*.md]
 BasedOnStyles = Vale, InfluxDataDocs, Google, write-good

+# These rules must be disabled in every product .vale.ini, same as the root .vale.ini.
+Google.Acronyms = NO
+Google.DateFormat = NO
+Google.Ellipses = NO
+Google.Headings = NO
+Google.WordList = NO
+# Disable Google.Units in favor of InfluxDataDocs.Units which only checks byte
+# units (GB, TB, etc). Duration literals (30d, 24h, 1h) are valid InfluxDB syntax.
+Google.Units = NO
+Vale.Spelling = NO
+Vale.Terms = NO
+write-good.TooWordy = NO
+
 TokenIgnores = /[a-zA-Z0-9/_\-\.]+, \
                https?://[^\s\)\]>"]+, \
                `[^`]+`

 # Product-specific overrides
 InfluxDataDocs.Branding = YES
 EOF
 ```
@@ -0,0 +1 @@
.github/workflows/*.lock.yml linguist-generated=true merge=ours
@@ -0,0 +1,704 @@
# Doc Review Pipeline — Implementation Plan

**Status:** Complete — all phases implemented and tested
**Repository:** influxdata/docs-v2
**Author:** Triage agent (Claude Code)
**Date:** 2026-02-28

---

## Table of Contents

1. [Goal](#goal)
2. [What Already Exists](#what-already-exists)
3. [Architecture Overview](#architecture-overview)
4. [Phase 1: Label System Overhaul](#phase-1-label-system-overhaul)
5. [Phase 2: Doc Review Workflow](#phase-2-doc-review-workflow)
6. [Phase 3: Documentation and Agent Instructions](#phase-3-documentation-and-agent-instructions)
7. [Future Phases (Not In Scope)](#future-phases-not-in-scope)
8. [Decisions (Resolved)](#decisions-resolved)
9. [Risk Assessment](#risk-assessment)

---
## Goal

Build two interconnected systems:

1. **Label system** — An automation-driven label taxonomy that supports
   cross-repo automation, agentic workflows, and human-in-the-loop review.
2. **Doc review pipeline** — A GitHub Actions workflow that automates
   documentation PR review using Copilot for both code review (diff-based,
   using auto-loaded instruction files) and visual review (rendered HTML
   at preview URLs), with rendered-page verification that catches issues
   invisible in the Markdown source.

The pipeline catches issues only visible in rendered output — expanded
shortcodes, broken layouts, incorrect product names — by having Copilot
analyze the rendered HTML of deployed preview pages.

---
## What Already Exists

### Infrastructure

| Component | Location | Notes |
|-----------|----------|-------|
| PR preview deployment | `.github/workflows/pr-preview.yml` | Builds Hugo site, deploys to `gh-pages` branch at `influxdata.github.io/docs-v2/pr-preview/pr-{N}/` |
| Changed file detection | `.github/scripts/detect-preview-pages.js` | Detects changed files, maps content to public URLs, handles shared content |
| Content-to-URL mapping | `scripts/lib/content-utils.js` | `getChangedContentFiles()`, `mapContentToPublic()`, `expandSharedContentChanges()` |
| Screenshot tooling | `scripts/puppeteer/screenshot.js` | Puppeteer-based screenshot utility (already a dependency) |
| Playwright | `package.json` | Already a dependency (`^1.58.1`) |
| Claude agent instructions | `CLAUDE.md`, `AGENTS.md`, `.claude/` | Review criteria, style guide, skills, commands |
| Copilot instructions | `.github/copilot-instructions.md` | Style guide, repo structure, patterns |
| Copilot pattern instructions | `.github/instructions/` | Auto-loaded by Copilot based on changed file patterns |
| Auto-labeling (path-based) | Not yet implemented | Needed for Phase 1 |
| Link checker workflow | `.github/workflows/pr-link-check.yml` | Validates links on PR changes |
| Sync plugins workflow | `.github/workflows/sync-plugins.yml` | Issue-triggered workflow pattern to follow |
| Audit documentation workflow | `.github/workflows/audit-documentation.yml` | Creates issues from audit results |

### Labels (Current State)

The repo has 30+ labels with inconsistent naming patterns and significant
overlap. Product labels use long names (`InfluxDB 3 Core and Enterprise`),
workflow states are minimal (`release:pending` is the only actively used one),
and there is no agent-readiness or blocking-state taxonomy.

---
## Architecture Overview

```
PR opened/updated (content paths)
        │
        ├──────────────────────────────┐
        ▼                              ▼
┌─ Job 1: Resolve URLs ────┐  ┌─ Job 2: Copilot Code Review ───┐
│ resolve-review-urls.js   │  │ gh pr edit --add-reviewer      │
│ changed files → URLs     │  │   copilot-reviews              │
│ Output: url list         │  │ Uses .github/instructions/     │
└──────────┬───────────────┘  │ for auto-loaded review rules   │
           │                  └──────────────┬─────────────────┘
           ▼                                 │
┌─ Job 3: Copilot Visual Review ────────┐    │
│ Wait for preview deployment           │    │
│ Post preview URLs + review prompt     │    │
│ @copilot analyzes rendered HTML       │    │
│ Checks: layout, shortcodes, 404s      │    │
└──────────────┬────────────────────────┘    │
               │                             │
               ▼                             ▼
        Human reviews what remains
```

**Job 2 (Copilot code review) runs in parallel with Jobs 1→3** — it uses
GitHub's native Copilot reviewer, which analyzes the PR diff using
auto-loaded instruction files from `.github/instructions/`.

---
## Phase 1: Label System Overhaul

### Rationale

The label system is a prerequisite for agentic workflows. Agents need clear
signals about issue readiness (`agent-ready`), blocking states
(`waiting:engineering`, `waiting:product`), and product scope
(`product:v3-monolith`, `product:v3-distributed`).
Consistent label patterns also enable GitHub API queries for dashboards and
automation.

### 1.1 — Label taxonomy

> [!Note]
> The tables below are a planning snapshot. The authoritative
> definitions live in `data/labels.yml` (non-product labels) and
> `data/products.yml` (product labels). See `.github/LABEL_GUIDE.md` for
> the current index.

**24 labels organized into 6 categories:**

#### Product labels (11) — Color: `#FFA500` (orange)

| Label | Description |
|-------|-------------|
| `product:v3-monolith` | InfluxDB 3 Core and Enterprise (single-node / clusterable) |
| `product:v3-distributed` | InfluxDB 3 Cloud Serverless, Cloud Dedicated, Clustered |
| `product:v2` | InfluxDB v2 (Cloud, OSS) |
| `product:v1` | InfluxDB v1 OSS |
| `product:v1-enterprise` | InfluxDB Enterprise v1 |
| `product:telegraf` | Telegraf documentation |
| `product:chronograf` | Chronograf documentation |
| `product:kapacitor` | Kapacitor documentation |
| `product:flux` | Flux language documentation |
| `product:explorer` | InfluxDB 3 Explorer |
| `product:shared` | Shared content across products |

#### Source tracking labels (4) — Color: `#9370DB` (purple)

| Label | Description |
|-------|-------------|
| `source:auto-detected` | Created by change detection within this repo |
| `source:dar` | Generated by DAR pipeline (issue analysis → draft) |
| `source:sync` | Synced from an external repository |
| `source:manual` | Human-created issue |

#### Waiting states (2) — Color: `#FF8C00` (orange)

| Label | Description |
|-------|-------------|
| `waiting:engineering` | Waiting for engineer confirmation |
| `waiting:product` | Waiting for product/PM decision |

#### Workflow states (2) — Color: `#00FF00` / `#1E90FF`

| Label | Description |
|-------|-------------|
| `agent-ready` | Agent can work on this autonomously |
| `skip-review` | Skip automated doc review pipeline |

> [!Note]
> Human codeowner approval uses GitHub's native PR review mechanism (CODEOWNERS file), not a label. The `review:*` labels below are applied **manually** after reviewing Copilot feedback.

#### Review outcome labels (3) — Color: `#28A745` / `#DC3545` / `#FFC107`

| Label | Description |
|-------|-------------|
| `review:approved` | Review passed — no blocking issues found |
| `review:changes-requested` | Review found blocking issues |
| `review:needs-human` | Review inconclusive, needs human |

> [!Note]
> All labels use colons (`:`) as separators for consistency. The `review:*` labels
> are mutually exclusive. They are applied manually after review — the CI workflow
> does not manage labels. Copilot code review uses GitHub's native "Comment"
> review type.

#### Existing labels to keep (renamed) (2)

| Old Name | New Name | Description |
|----------|----------|-------------|
| `AI assistant tooling` | `ai:tooling` | Related to AI assistant infrastructure |
| `ci:testing-and-validation` | `ci:testing` | CI/testing infrastructure |
### 1.2 — Migration scripts

Create migration scripts in `helper-scripts/label-migration/`:

- **`create-labels.sh`** — Creates all new labels using `gh label create --force` (idempotent)
- **`migrate-labels.sh`** — Migrates existing issues from old labels to new labels using `gh issue edit`
- **`delete-labels.sh`** — Deletes old labels (requires interactive confirmation)
- **`README.md`** — Execution order, prerequisites, rollback instructions

**Migration mapping:**

| Old Label | New Label |
|-----------|-----------|
| `InfluxDB 3 Core and Enterprise` | `product:v3-monolith` |
| `InfluxDB v3` | `product:v3-monolith` (review individually — some may be distributed) |
| `Processing engine` | `product:v3-monolith` |
| `InfluxDB v2` | `product:v2` |
| `InfluxDB v1` | `product:v1` |
| `Enterprise 1.x` | `product:v1-enterprise` |
| `Chronograf 1.x` | `product:chronograf` |
| `Kapacitor` | `product:kapacitor` |
| `Flux` | `product:flux` |
| `InfluxDB 3 Explorer` | `product:explorer` |
| `Pending Release` | `release:pending` |
| `release/influxdb3` | `release:pending` |
| `sync-plugin-docs` | `source:sync` |

> [!Important]
> **Workflow updates required:**
> The `sync-plugin-docs` label is used in GitHub Actions workflows. After migrating this label to `source:sync`, update the following files:
> - `.github/workflows/sync-plugins.yml` (lines 28, 173, 421)
> - `.github/ISSUE_TEMPLATE/sync-plugin-docs.yml` (line 4)
>
> Update all references from `sync-plugin-docs` to `source:sync` so the plugin sync automation continues to work after the label migration.

> [!Note]
> `release:pending` is an existing workflow state label that we are keeping as-is.
> The migration scripts **must ensure** this label exists (create it if missing) and **must not** delete it in the cleanup step.

**Labels to delete after migration:**
`bug`, `priority`, `documentation`, `Proposal`, `Research Phase`,
`ready-for-collaboration`, `ui`, `javascript`, `dependencies`,
`integration-demo-blog`, `API`, `Docker`, `Grafana`, `Ask AI`,
plus all old product labels listed above.

**Execution:**

1. Run `create-labels.sh` (safe, idempotent)
2. Run `migrate-labels.sh`
3. Human verifies a sample of issues
4. Run `delete-labels.sh` (destructive, requires confirmation)
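A minimal sketch of what `create-labels.sh` could look like, using only the `gh label create --force` call named above. The `DRY_RUN` guard and the abbreviated label list are illustrative additions, not part of the plan — the real script would cover all 24 labels:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Dry-run by default so the generated gh commands can be reviewed first.
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "gh $*"
  else
    gh "$@"
  fi
}

# name|color|description — a few entries from the taxonomy above.
while IFS='|' read -r name color description; do
  # --force makes the call idempotent: existing labels are updated in place.
  run label create "$name" --color "$color" --description "$description" --force
done <<'EOF'
product:v3-monolith|FFA500|InfluxDB 3 Core and Enterprise (single-node / clusterable)
product:v2|FFA500|InfluxDB v2 (Cloud, OSS)
agent-ready|00FF00|Agent can work on this autonomously
EOF
```

Run with `DRY_RUN=0` to actually create the labels once the output looks right.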
### 1.3 — Auto-labeling workflow

**File:** `.github/workflows/auto-label.yml`

**Trigger:** `pull_request: [opened, synchronize]`

**Logic:**

- List changed files via `github.rest.pulls.listFiles()`
- Read `data/products.yml` for path-to-label mappings (single source of truth):
  - Each product entry has `content_path` and `label_group` fields
  - Match file paths against `content/{content_path}/` → `product:{label_group}`
  - Example: `content/influxdb3/core/` matches `content_path: influxdb3/core`,
    `label_group: v3-monolith` → applies `product:v3-monolith`
- Shared content handling:
  - `content/shared/` changes apply the `product:shared` label
  - Additionally expand shared content to affected products using
    `expandSharedContentChanges()` from `scripts/lib/content-utils.js`
  - Apply all affected product labels (additive)
- Multi-product PRs: apply all matching `product:*` labels (additive)
- Only add labels that are not already present (idempotent)
- Runs as `actions/github-script@v7`
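The path-matching step above can be sketched as a pure function. The `products` shape here mirrors the `content_path`/`label_group` fields described in the logic, but the entries are assumptions for illustration, not the actual contents of `data/products.yml` (and shared-content expansion is omitted):

```javascript
// Map changed file paths to product labels using content_path → label_group.
function resolveProductLabels(changedFiles, products) {
  const labels = new Set();
  for (const file of changedFiles) {
    if (file.startsWith('content/shared/')) {
      labels.add('product:shared');
      continue;
    }
    for (const { content_path, label_group } of products) {
      if (file.startsWith(`content/${content_path}/`)) {
        labels.add(`product:${label_group}`);
      }
    }
  }
  // Sorted, deduplicated, additive label set.
  return [...labels].sort();
}

// Example: a PR touching Core content and a shared file.
const products = [
  { content_path: 'influxdb3/core', label_group: 'v3-monolith' },
  { content_path: 'influxdb/v2', label_group: 'v2' },
];
const labels = resolveProductLabels(
  ['content/influxdb3/core/write-data/_index.md', 'content/shared/foo.md'],
  products
);
console.log(labels); // ['product:shared', 'product:v3-monolith']
```

Inside the workflow, the returned set would be diffed against the PR's current labels before calling the GitHub API, keeping the job idempotent.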
---
## Phase 2: Doc Review Workflow

### 2.1 — Workflow file

**File:** `.github/workflows/doc-review.yml`

**Trigger:**

```yaml
on:
  pull_request:
    types: [opened, synchronize, ready_for_review]
    paths:
      - 'content/**'
      - 'layouts/**'
      - 'assets/**'
      - 'data/**'
```

**Permissions:** `contents: read`, `pull-requests: write`

**Concurrency:** `group: doc-review-${{ github.event.number }}`, `cancel-in-progress: true`

**Skip conditions:** Draft PRs, fork PRs, and PRs with a `skip-review` label (a new label added in Phase 1 via the label migration scripts).
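The skip conditions could be expressed as a job-level `if:` expression — a sketch only; the exact expression depends on how the workflow is finally written:

```yaml
jobs:
  doc-review:
    if: >-
      github.event.pull_request.draft == false &&
      github.event.pull_request.head.repo.fork == false &&
      !contains(github.event.pull_request.labels.*.name, 'skip-review')
```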
### 2.2 — Job 1: Resolve URLs

**Purpose:** Map changed files to preview URLs.

**Implementation:**

- Reuse the existing `detect-preview-pages.js` script and `content-utils.js` library
- Same logic as `pr-preview.yml` Job 1, but output a JSON artifact instead of deploying
- Output format: `[{"file": "content/influxdb3/core/write-data/_index.md", "url": "/influxdb3/core/write-data/"}]`
- Upload as `urls.json` workflow artifact

**Key detail:** This job runs `getChangedContentFiles()` and `mapContentToPublic()`
from `scripts/lib/content-utils.js`, which already handles shared content
expansion (if `content/shared/foo.md` changes, all pages with
`source: /shared/foo.md` are included).
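The file-to-URL mapping can be sketched as follows. This is a simplification of what `mapContentToPublic()` presumably does, not its actual implementation — the real logic in `scripts/lib/content-utils.js` also handles shared-content expansion and product-specific edge cases:

```javascript
// Map a content file path to its public URL path.
// content/influxdb3/core/write-data/_index.md → /influxdb3/core/write-data/
function contentPathToUrl(file) {
  if (!file.startsWith('content/')) return null; // not a content file
  let p = file.slice('content/'.length);
  if (p.endsWith('/_index.md')) {
    p = p.slice(0, -'_index.md'.length); // section page → directory URL
  } else if (p.endsWith('.md')) {
    p = p.slice(0, -'.md'.length) + '/'; // regular page → trailing slash
  }
  return '/' + p;
}

// Building one entry of the urls.json artifact:
const entry = {
  file: 'content/influxdb3/core/write-data/_index.md',
  url: contentPathToUrl('content/influxdb3/core/write-data/_index.md'),
};
console.log(JSON.stringify([entry]));
```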
||||
### 2.3 — Job 2: Copilot Code Review
|
||||
|
||||
**Purpose:** Review Markdown changes against the style guide and documentation
|
||||
standards using GitHub's native Copilot code review. Visual review of rendered
|
||||
pages is handled separately in Job 3.
|
||||
|
||||
**Dependencies:** None beyond the PR itself. This job runs in parallel with
|
||||
Jobs 1→3.
|
||||
|
||||
**Implementation:**
|
||||
- Adds `copilot-reviews` as a PR reviewer via `gh pr edit --add-reviewer`
|
||||
- Copilot automatically reviews the PR diff using instruction files from
|
||||
`.github/instructions/` that are auto-loaded based on changed file patterns
|
||||
- No custom prompt or API key required
|
||||
|
||||
**Review criteria file:** `.github/instructions/content-review.instructions.md`
|
||||
|
||||
This file is auto-loaded by Copilot for PRs that change `content/**/*.md`
|
||||
files. It checks for:
|
||||
|
||||
1. **Frontmatter correctness** — Required fields, menu structure, weights
|
||||
2. **Shortcode syntax** — Correct usage, closing tags, parameters
|
||||
3. **Semantic line feeds** — One sentence per line
|
||||
4. **Heading hierarchy** — No h1 in content (title comes from frontmatter)
|
||||
5. **Product-specific terminology** — Correct product names, versions
|
||||
6. **Link format** — Relative links, proper shortcode links
|
||||
7. **Shared content** — `source:` frontmatter correctness
|
||||
8. **Code blocks** — Language identifiers, line length, long CLI options
|
||||
|
||||
**Severity classification:**
|
||||
- `BLOCKING` — Wrong product names, invalid frontmatter, broken shortcode syntax
|
||||
- `WARNING` — Style inconsistencies, missing semantic line feeds
|
||||
- `INFO` — Suggestions, not problems
|
||||
|
||||
**Output:**
|
||||
- Copilot posts inline review comments using GitHub's native "Comment"
|
||||
review type
|
||||
- `review:*` labels are applied manually by humans after reviewing the
|
||||
Copilot feedback — the workflow does not manage labels
|
||||

### 2.4 — Job 3: Copilot Visual Review (rendered HTML)

**Purpose:** Have Copilot analyze the rendered preview pages to catch visual
and structural issues that are invisible in the Markdown source.

**Dependencies:** Depends on Job 1 (needs the URL list). Must wait for the
`pr-preview.yml` deployment to be live.

**Why Copilot for visual review:**

- Copilot can analyze rendered HTML content at public preview URLs — no
  screenshot capture or image upload required.
- Visual review is a good fit for Copilot because the rendered pages are
  self-contained artifacts (no need to cross-reference repo files).
- Copilot code review (Job 2) handles the diff; visual review catches what
  the diff review cannot.
**Implementation:**

1. **Wait for preview deployment:**
   - Poll `https://influxdata.github.io/docs-v2/pr-preview/pr-{N}/` with
     `curl --head` until it returns 200
   - Timeout: 10 minutes (the preview build takes ~75s plus deploy time)
   - Poll interval: 15 seconds
   - On timeout, skip visual review; Copilot code review (Job 2) still runs

2. **Post preview URLs and trigger Copilot review:**
   - Use `actions/github-script@v7` to post a PR comment listing the preview
     URLs from Job 1, formatted as clickable links
   - Post a follow-up comment tagging `@copilot` with instructions to review
     the rendered pages at the preview URLs. The comment should instruct
     Copilot to check each page for:
     - Raw shortcode syntax visible on the page (`{{<` or `{{%`)
     - Placeholder text that should have been replaced
     - Broken layouts: overlapping text, missing images, collapsed sections
     - Code blocks rendered incorrectly (raw HTML/Markdown fences visible)
     - Correct navigation/sidebar entries
     - Visible 404 or error states
     - Product name inconsistencies in the rendered page header/breadcrumbs
   - The review instruction template is stored in
     `.github/prompts/copilot-visual-review.md` for maintainability
   - The preview URL count is capped at 50 pages (matching `MAX_PAGES` in
     `detect-preview-pages.js`)

3. **Comment upsert pattern:**
   - Visual review comments use a marker-based upsert pattern — the workflow
     updates an existing comment if one with the marker exists, otherwise
     creates a new one. This prevents duplicate comments on `synchronize`
     events.
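The polling loop in step 1 can be sketched in shell (the function name and argument handling are illustrative, not the actual workflow step):

```shell
# Sketch of Job 3's wait step: poll the preview URL until it returns
# HTTP 200, then give up after the timeout so the rest of the workflow
# can continue without the visual review.
wait_for_preview() {
  local url="$1"
  local timeout_secs="${2:-600}"   # 10-minute timeout
  local interval="${3:-15}"        # 15-second poll interval
  local elapsed=0
  while [ "$elapsed" -lt "$timeout_secs" ]; do
    status="$(curl --head --silent --output /dev/null \
      --write-out '%{http_code}' "$url" || echo 000)"
    if [ "$status" = "200" ]; then
      echo "preview live: $url"
      return 0
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  echo "timed out waiting for $url" >&2
  return 1
}
```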
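The upsert pattern in step 3 can be sketched as an `actions/github-script` snippet (the marker string and function name are assumptions, not the actual workflow code):

```javascript
// Marker-based comment upsert: update the marked comment if it exists,
// otherwise create it. `github` and `context` are the objects provided
// by actions/github-script; the marker is illustrative.
const MARKER = '<!-- doc-review:visual -->';

async function upsertComment(github, context, body) {
  const { owner, repo } = context.repo;
  const issue_number = context.issue.number;
  const bodyWithMarker = `${MARKER}\n${body}`;

  // Find an existing comment carrying the marker.
  const { data: comments } = await github.rest.issues.listComments({
    owner, repo, issue_number, per_page: 100,
  });
  const existing = comments.find((c) => c.body && c.body.includes(MARKER));

  if (existing) {
    // Update in place, which prevents duplicates on `synchronize` events.
    await github.rest.issues.updateComment({
      owner, repo, comment_id: existing.id, body: bodyWithMarker,
    });
  } else {
    await github.rest.issues.createComment({
      owner, repo, issue_number, body: bodyWithMarker,
    });
  }
}
```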
### 2.6 — Workflow failure handling

- If preview deployment times out: skip Copilot visual review (Job 3);
  Copilot code review (Job 2) still runs independently. Post a comment
  explaining that visual review was skipped.
- If Copilot does not respond to the `@copilot` mention: the preview URLs
  remain in the comment for human review.
- Never block PR merge on workflow failures — the workflow adds comments
  but does not set required status checks or manage labels.

---

## Phase 3: Documentation and Agent Instructions

### 3.1 — Instruction file architecture

**Principle:** One `CLAUDE.md` that references role-specific files. No per-role
CLAUDE files — Claude Code only reads one `CLAUDE.md` per directory level. The
role context comes from the task prompt (GitHub Actions workflow), not the
config file.

```
CLAUDE.md                                ← lightweight pointer (already exists)
├── references .github/LABEL_GUIDE.md    ← label taxonomy + usage
├── references .claude/agents/           ← role-specific agent instructions
│   ├── doc-triage-agent.md              ← triage + auto-label logic
│   └── doc-review-agent.md              ← local review sessions (Claude Code)
└── references .github/instructions/     ← Copilot auto-loaded instructions
    └── content-review.instructions.md   ← review criteria for content/**/*.md
```

**How review roles are assigned at runtime:**

- **Copilot code review (CI):** GitHub's native reviewer. Auto-loads
  instruction files from `.github/instructions/` based on changed file
  patterns. No custom prompt or API key needed.
- **Copilot visual review (CI):** Triggered by an `@copilot` mention in a PR
  comment with preview URLs and a review template.
- **Claude local review:** Uses `.claude/agents/doc-review-agent.md` for
  local Claude Code sessions. Not used in CI.
- Shared rules (style guide, frontmatter, shortcodes) stay in the existing
  referenced files (`DOCS-CONTRIBUTING.md`, `DOCS-SHORTCODES.md`, and so on).
- No duplication — each instruction file states only what is unique to its
  context.

### 3.2 — Agent instruction files

#### `.claude/agents/doc-triage-agent.md`

Role-specific instructions for issue/PR triage. Contents:

- **Label taxonomy** — Full label list with categories, colors, and descriptions
- **Path-to-product mapping** — Which content paths map to which `product:*` labels
- **Priority rules** — How to assess priority based on product, scope, and issue type
- **Decision logic** — When to apply `agent-ready`, `waiting:*`, and `review:needs-human`
- **Migration context** — Old label → new label mapping (useful during the transition)

This file does NOT duplicate style guide rules. It references
`DOCS-CONTRIBUTING.md` for those.

#### `.claude/agents/doc-review-agent.md`

Role-specific instructions for **local** Claude Code review sessions. This
file is NOT used in CI — the CI review is handled by Copilot using
`.github/instructions/content-review.instructions.md`.

Contents:

- **Review scope** — Markdown diff review only (frontmatter, shortcodes,
  semantic line feeds, heading hierarchy, terminology, links, shared content)
- **Severity classification** — BLOCKING / WARNING / INFO definitions with examples
- **Output format** — Structured review comment template

This file references `DOCS-CONTRIBUTING.md` for style rules and
`DOCS-SHORTCODES.md` for shortcode syntax — it does NOT restate them.

### 3.3 — Label usage guide

**File:** `.github/LABEL_GUIDE.md`

Contents:

- Label categories with descriptions and colors
- Common workflows (issue triage, DAR pipeline, manual work)
- GitHub filter queries for agents and humans
- Auto-labeling behavior reference

### 3.4 — Update existing pointer files

**`CLAUDE.md`** — Add one line to the "Full instruction resources" list:

```markdown
- [.github/LABEL_GUIDE.md](.github/LABEL_GUIDE.md) - Label taxonomy and pipeline usage
```

**`AGENTS.md`** — Add a section referencing the label guide and agent roles:

```markdown
## Doc Review Pipeline

- Label guide: `.github/LABEL_GUIDE.md`
- Triage agent: `.claude/agents/doc-triage-agent.md`
- Review agent: `.claude/agents/doc-review-agent.md`
```

**`.github/copilot-instructions.md`** — Add the label guide to the
"Specialized Resources" table.

These are small additions — no restructuring of existing files.

### 3.5 — Review instruction files

#### `.github/instructions/content-review.instructions.md` (Copilot code review)

Auto-loaded by Copilot for PRs that change `content/**/*.md` files. Contains
the review criteria (frontmatter, shortcodes, heading hierarchy, terminology,
links, code blocks) with severity classification.

This file replaces the original `.github/prompts/doc-review.md` Claude prompt.
The review criteria are the same but delivered through Copilot's native
instruction file mechanism instead of a custom action.

#### `.github/templates/review-comment.md` (shared format)

Shared definitions for severity levels, comment structure, and result → label
mapping. Used by `doc-review-agent.md` (local review sessions) and the
Copilot visual review template.

#### Copilot visual review template

The `@copilot` visual review comment is constructed inline in the
`doc-review.yml` workflow using the review template from
`.github/templates/review-comment.md`. It contains:

- The visual review checklist (raw shortcodes, broken layouts, 404s, etc.)
- Instructions for analyzing the rendered pages at the preview URLs
- Output format guidance (what to flag, severity levels)

---

## Future Phases (Not In Scope)

These are explicitly **not** part of this plan. They are documented here for
context.

### v2 — Screenshot-based visual review

- Add a Playwright screenshot capture script (`.github/scripts/capture-screenshots.js`)
  for design/layout PRs where HTML analysis isn't sufficient.
- Capture full-page PNGs of preview pages and upload them as workflow artifacts.
- Useful for PRs touching `layouts/`, `assets/css/`, or template changes
  where visual regression matters.
- The existing `scripts/puppeteer/screenshot.js` remains for local debugging;
  the CI script should use Playwright for reliability.

### v3 — Stale PR management

- A cron job that scans for stale PRs (drafts older than 3 days with no review
  activity) and pings the author.
- Metrics tracking: the percentage of PRs that pass Copilot review on the
  first attempt.

### v4 — Agent-driven issue resolution

- Auto-assign doc issues to agents based on the `agent-ready` label.
- Claude or Copilot drafts the fix, then the other agent reviews it.
- Closes the loop: issue → draft → review → human approval.

---

## Decisions (Resolved)

### Q1: How should Copilot review rendered pages? — RESOLVED

**Decision:** Copilot reviews rendered HTML at public preview URLs — no
screenshots needed. Job 3 posts preview URLs in a PR comment, then tags
`@copilot` with a review prompt. See section 2.5 for implementation details.

This approach works because:

- Preview pages are publicly accessible at `influxdata.github.io/docs-v2/pr-preview/pr-{N}/`
- Copilot can analyze HTML content at public URLs
- No screenshot capture, image upload, or artifact management is required

Screenshot capture is deferred to Future Phases (v2) for design/layout PRs
where visual regression testing matters.

### Q2: Should the review workflow be a required status check? — RESOLVED

**Decision:** No. Start as advisory (comments only). The workflow posts review
comments but does not set required status checks or manage labels. `review:*`
labels are applied manually after review. Make it required only after the team
confirms the false-positive rate is acceptable (see Future Phases).

### Q3: Should screenshots use Playwright or Puppeteer? — DEFERRED

**Decision:** Deferred to Future Phases (v2). The current implementation
reviews rendered HTML at preview URLs, not screenshots. When screenshot
capture is added later, use Playwright for CI and keep Puppeteer for local
debugging.

### Q4: How to handle the `pr-preview.yml` dependency? — RESOLVED

**Decision:** Option A — poll the preview URL with a timeout. Job 3 polls
`https://influxdata.github.io/docs-v2/pr-preview/pr-{N}/` with `curl --head`
every 15 seconds until it returns 200, with a 10-minute timeout. If the
timeout is reached, skip Copilot visual review; Copilot code review (Job 2)
still runs independently.

Rationale: Polling is simple, self-contained, and resilient, and the URL
pattern is deterministic. Option B (`workflow_run`) adds complexity and
doesn't handle cases where the preview doesn't deploy. Option C (a combined
workflow) makes the workflow too large and eliminates the parallelism benefit.

### Q5: Cost and rate limiting — RESOLVED

**Decision:** Acceptable. Both code review and visual review use the repo's
Copilot allocation. No external API keys or per-call costs.

Mitigations already designed into the workflow:

- The `paths` filter ensures only doc-content PRs trigger the workflow.
- The `skip-review` label allows trivial PRs to opt out.
- A concurrency group cancels in-progress reviews when the PR is updated.
- The preview URL count is capped at 50 pages (matching `MAX_PAGES` in
  `resolve-review-urls.js`).
- Draft and fork PRs are skipped entirely.
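The trigger-level mitigations could look roughly like this in `doc-review.yml` (a hypothetical shape for illustration, not the actual workflow file):

```yaml
# Hypothetical doc-review.yml trigger section showing the paths filter,
# the on-demand dispatch input, and the concurrency group described above.
on:
  pull_request:
    paths:
      - "content/**/*.md"
  workflow_dispatch:
    inputs:
      pr_number:
        required: true

concurrency:
  group: doc-review-${{ github.event.pull_request.number || inputs.pr_number }}
  cancel-in-progress: true
```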
### Q6: Label separator convention — RESOLVED

**Decision:** Use colons (`:`) everywhere. No slashes. One separator for
consistency — expecting humans or agents to infer different semantics from
separator choice is unrealistic. Mutually exclusive behavior (e.g., `review:*`
labels) is enforced in workflow code, not punctuation.

### Q7: Human approval mechanism — RESOLVED

**Decision:** Use GitHub's native PR review system (CODEOWNERS file) for human
approval. No `approval:codeowner` label. The `review:*` labels are exclusively
for automated pipeline outcomes.

### Q8: Product path mapping — RESOLVED

**Decision:** Extend `data/products.yml` with `content_path` and `label_group`
fields. This file becomes the single source of truth for path-to-product
resolution, used by the auto-label workflow, the matrix generator, and
documentation (AGENTS.md). This eliminates duplicated mappings across
multiple files.
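A hypothetical entry illustrating the two new fields (the surrounding keys and exact values are assumptions; only `content_path` and `label_group` come from the decision above):

```yaml
# Hypothetical products.yml entry; other keys elided.
- name: InfluxDB 3 Core
  content_path: content/influxdb3/core
  label_group: v3-monolith
```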
### Q9: `sync-plugin-docs` label migration — RESOLVED

**Decision:** Migrate to `source:sync` (not `source:auto-detected`). Plugin
sync is a distinct operation from change detection. `source:sync` is general
enough to cover future external repo syncs without being hyper-specific.

### Q10: Multi-product and shared content labeling — RESOLVED

**Decision:** Auto-labeling is additive — apply all matching `product:*` labels.
Changes to `content/shared/` get the `product:shared` label plus all expanded
product labels (resolved via `expandSharedContentChanges()`).

---

## Risk Assessment

| Risk | Impact | Mitigation |
|------|--------|------------|
| Preview not deployed in time | Low | 10-minute polling timeout; fall back to code-only review |
| False positives in review | Medium | Start as advisory (not a required check); iterate on instruction files |
| Label migration data loss | Low | Migrate before deleting; human verification gate |
| Copilot visual review misses issues | Medium | Preview URLs remain in the comment for human review; start advisory |
| Copilot code review quality | Medium | Review criteria in `.github/instructions/` can be iterated; local Claude review available as a backup |
| Product mapping drift | Low | Single source of truth in `data/products.yml`; auto-label and matrix-generator both derive from it |

---

## File Summary

Files to create or modify:

| Action | File | Phase | Status |
|--------|------|-------|--------|
| Modify | `data/products.yml` | 1.0 | Done |
| Modify | `data/labels.yml` | 1.1 | Done |
| Create | `helper-scripts/label-migration/create-labels.sh` | 1.2 | Done |
| Create | `helper-scripts/label-migration/migrate-labels.sh` | 1.2 | Done |
| Create | `helper-scripts/label-migration/delete-labels.sh` | 1.2 | Done |
| Create | `helper-scripts/label-migration/README.md` | 1.2 | Done |
| Create | `.github/workflows/auto-label.yml` | 1.3 | Done |
| Create | `.github/workflows/doc-review.yml` | 2.1 | Done |
| Create | `.claude/agents/doc-triage-agent.md` | 3.2 | Done |
| Create | `.claude/agents/doc-review-agent.md` | 3.2 | Done |
| Create | `.github/LABEL_GUIDE.md` | 3.3 | Done |
| Create | `.github/instructions/content-review.instructions.md` | 3.5 | Done |
| Create | `.github/templates/review-comment.md` | 2.5/3.5 | Done |
| Modify | `CLAUDE.md` | 3.4 | Done |
| Modify | `AGENTS.md` | 3.4 | Done |
| Modify | `.github/copilot-instructions.md` | 3.4 | Done |

---

## Implementation Order

1. ~~**Phase 1.0** — Extend `data/products.yml` with `content_path` and `label_group`~~ ✅
2. ~~**Phase 1.1–1.2** — Create label migration scripts~~ ✅
3. ~~**Phase 1.3** — Create auto-label workflow~~ ✅
4. ~~**Execute label migration** — Run scripts, then manual cleanup~~ ✅
5. ~~**Phase 3.2** — Create agent instruction files~~ ✅
6. ~~**Phase 2.1–2.3** — Workflow skeleton + URL resolution + Copilot code review~~ ✅
7. ~~**Phase 2.5** — Copilot visual review job~~ ✅
8. ~~**Phase 3.3–3.5** — Label guide, instruction files, pointer updates~~ ✅
9. ~~**Test end-to-end** — Triggered workflows via `workflow_dispatch` against PR #6890~~ ✅

### End-to-end test results (2026-03-09)

Triggered via `workflow_dispatch` with `pr_number=6890` on branch
`claude/triage-agent-plan-EOY0u`.

| Workflow | Job | Result | Notes |
|----------|-----|--------|-------|
| Auto-label PRs | auto-label | Pass | Loaded 14 path mappings, 0 product labels (correct — no content changes) |
| Doc Review | resolve-urls | Pass | 0 preview URLs (correct — no content changes) |
| Doc Review | copilot-review | Pass | `copilot-reviews` added as reviewer |
| Doc Review | copilot-visual-review | Skipped | Correct — 0 URLs to review |

**Fixes applied during testing:**

- `npm ci` replaced with a targeted `js-yaml` install (the sparse checkout lacks a lock file)
- Added `workflow_dispatch` with a `pr_number` input for on-demand re-runs

**Remaining:** Visual review (Job 3) needs a content-changing PR to fully exercise
the preview URL polling and the Copilot `@copilot` mention flow.

@@ -1,7 +1,7 @@
 name: Sync Plugin Documentation
 description: Request synchronization of plugin documentation from influxdb3_plugins repository
 title: "Sync plugin docs: [PLUGIN_NAMES]"
-labels: ["sync-plugin-docs", "documentation", "automation"]
+labels: ["source:sync", "documentation", "automation"]
 assignees: []
 body:
   - type: markdown

@@ -0,0 +1,100 @@
# Label Guide

Label taxonomy for the docs-v2 repository. Used by automation workflows,
triage agents, and human contributors.

## Label Definitions

- **Product labels** (`product:*`): Derived from
  [data/products.yml](../data/products.yml) — each product's `label_group`
  field determines the label name, and `content_path` determines which files
  trigger it. Applied by the [auto-label workflow](workflows/auto-label.yml).
  Multi-product PRs get all matching labels. Shared content changes get
  `product:shared` plus labels for all products that reference the shared file.

- **Source, waiting, workflow, and review labels**: Defined in
  [data/labels.yml](../data/labels.yml) — names, colors, and descriptions.

- **Review label behavior** (severity levels, result rules, result → label
  mapping): Defined in
  [templates/review-comment.md](templates/review-comment.md).

Human approval uses GitHub's native PR review system (CODEOWNERS), not labels.

## Renamed Labels

| Old Name | New Name |
|----------|----------|
| `AI assistant tooling` | `ai:tooling` |
| `ci:testing-and-validation` | `ci:testing` |
| `design` | `area:site-ui` |
| `InfluxDB Cloud` | `product:v2-cloud` |
| `user feedback` | `source:feedback` |
| `ai:tooling` | `area:agents` |

## Deleted Labels

| Label | Replacement | Reason |
|-------|-------------|--------|
| `Pending PR` | `waiting:pr` | Consolidated into the `waiting:` namespace |
| `broke-link` | `area:links` | Consolidated into the `area:` namespace |
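The renames above map to `gh label edit` calls; a minimal sketch of what a migration script might use (the function name and error handling are illustrative, not the actual `migrate-labels.sh`):

```shell
# Rename a label via the gh CLI; skip quietly if it doesn't exist.
rename_label() {
  local old="$1" new="$2"
  gh label edit "$old" --name "$new" || echo "skipped: $old" >&2
}

# Example calls from the rename table (commented out here):
# rename_label "AI assistant tooling" "ai:tooling"
# rename_label "user feedback" "source:feedback"
```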

## Common Workflows

### Issue triage

1. Read the issue → identify the product(s) → apply `product:*` labels
2. Apply a `source:*` label if applicable
3. Determine readiness → apply `agent-ready` or `waiting:*`

### PR review pipeline

1. PR opened → auto-label applies `product:*` labels
2. The doc review workflow triggers (unless `skip-review` is present)
3. Copilot code review runs on the diff (uses
   [`.github/instructions/`](instructions/) files from the base branch)
4. Copilot visual review checks the rendered preview pages
5. A human reviewer uses GitHub's PR review for final approval

Review labels (`review:*`) are applied manually after review, not by CI.

### GitHub Filter Queries

```
# PRs needing human review
label:review:needs-human is:pr is:open

# Agent-ready issues
label:agent-ready is:issue is:open -label:waiting:engineering -label:waiting:product

# All InfluxDB 3 issues
label:product:v3-monolith,product:v3-distributed is:issue is:open

# Blocked issues
label:waiting:engineering,waiting:product is:issue is:open

# PRs that skipped review
label:skip-review is:pr
```

## Auto-labeling Behavior

The [auto-label workflow](workflows/auto-label.yml) runs on
`pull_request: [opened, synchronize]` and:

- Reads path-to-product mappings from `data/products.yml`
- Matches changed files to product labels
- Expands shared content changes to affected product labels
- Adds labels idempotently (skips labels already present)
- Skips draft and fork PRs
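The matching step above can be sketched as follows, assuming `content_path` and `label_group` fields on each product entry (the function name and entry shape are illustrative, not the actual workflow script):

```javascript
// Match changed files to product labels, plus the additive
// `product:shared` label for shared-content changes.
function matchProductLabels(changedFiles, products) {
  const labels = new Set();
  for (const file of changedFiles) {
    for (const p of products) {
      if (p.content_path && file.startsWith(p.content_path)) {
        labels.add(`product:${p.label_group}`);
      }
    }
    if (file.startsWith('content/shared/')) {
      // Expanded per-product labels are resolved separately.
      labels.add('product:shared');
    }
  }
  return [...labels].sort();
}
```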

## References

- Label definitions: `data/labels.yml`
- Product definitions: `data/products.yml`
- Review comment format: `.github/templates/review-comment.md`
- Auto-label workflow: `.github/workflows/auto-label.yml`
- Doc review workflow: `.github/workflows/doc-review.yml`
- Triage agent: `.claude/agents/doc-triage-agent.md`
- Review agent: `.claude/agents/doc-review-agent.md`
- Migration scripts: `helper-scripts/label-migration/`

@@ -2,261 +2,61 @@

> **For GitHub Copilot and other AI coding agents**
>
> This is the primary instruction file for GitHub Copilot working with the InfluxData documentation site.
>
> **Instruction resources**:
>
> - [.github/agents/copilot-instructions-agent.md](agents/copilot-instructions-agent.md) - **Creating/improving Copilot instructions**
> - [.claude/skills/](../.claude/skills/) - **Detailed workflows** (content editing, testing, InfluxDB setup, templates)
> - [.github/instructions/](instructions/) - **Pattern-specific** (auto-loaded by file type)
> - [.github/agents/](agents/) - **Specialist agents** (TypeScript/Hugo, Copilot management)
> - [AGENTS.md](../AGENTS.md), [CLAUDE.md](../CLAUDE.md) - General AI assistant guides
> - [AGENTS.md](../AGENTS.md) - Shared project guidelines (style, constraints, content structure)
> - [.github/LABEL_GUIDE.md](LABEL_GUIDE.md) - Label taxonomy and review pipeline

## Quick Reference

| Task | Command | Time | Details |
| ---------------- | ----------------------------------------------------- | ------- | ------------------------------------- |
| Install | `CYPRESS_INSTALL_BINARY=0 yarn install` | \~4s | Skip Cypress for CI |
| Build | `npx hugo --quiet` | \~75s | NEVER CANCEL |
| Dev Server | `npx hugo server` | \~92s | Port 1313 |
| Create Docs | `docs create <draft> --products <keys>` | varies | AI-assisted scaffolding |
| Create & Open | `docs create <draft> --products <keys> --open` | instant | Non-blocking (background) |
| Create & Wait | `docs create <draft> --products <keys> --open --wait` | varies | Blocking (interactive) |
| Edit Docs | `docs edit <url>` | instant | Non-blocking (background) |
| Edit Docs (wait) | `docs edit <url> --wait` | varies | Blocking (interactive) |
| List Files | `docs edit <url> --list` | instant | Show files without opening |
| Add Placeholders | `docs placeholders <file>` | instant | Add placeholder syntax to code blocks |
| Audit Docs | `docs audit --products <keys>` | varies | Audit documentation coverage |
| Release Notes | `docs release-notes <v1> <v2> --products <keys>` | varies | Generate release notes from commits |
| Test All | `yarn test:codeblocks:all` | 15-45m | NEVER CANCEL |
| Lint | `yarn lint` | \~1m | Pre-commit checks |

| Task | Command | Time |
| ---------------- | ----------------------------------------------------- | ------- |
| Install | `CYPRESS_INSTALL_BINARY=0 yarn install` | \~4s |
| Build | `npx hugo --quiet` | \~75s |
| Dev Server | `npx hugo server` | \~92s |
| Create Docs | `docs create <draft> --products <keys>` | varies |
| Edit Docs | `docs edit <url>` | instant |
| Add Placeholders | `docs placeholders <file>` | instant |
| Audit Docs | `docs audit --products <keys>` | varies |
| Test All | `yarn test:codeblocks:all` | 15-45m |
| Lint | `yarn lint` | \~1m |

**NEVER CANCEL** Hugo builds (\~75s) or test runs (15-45m).

## CLI Tools

**For when to use CLI vs direct editing**, see [docs-cli-workflow skill](../.claude/skills/docs-cli-workflow/SKILL.md).

```bash
# Create new documentation (AI-assisted scaffolding)
docs create <draft> --products <key-or-path>
docs create <draft> --products influxdb3_core --open        # Non-blocking
docs create <draft> --products influxdb3_core --open --wait # Blocking

# Find and edit documentation by URL
docs edit <url>        # Non-blocking (agent-friendly)
docs edit <url> --list # List files only
docs edit <url> --wait # Wait for editor

# Other tools
docs placeholders <file>     # Add placeholder syntax to code blocks
docs audit --products <keys> # Audit documentation coverage
docs release-notes <v1> <v2> --products <keys>

# Get help
docs --help
docs create --help
docs --help # Full reference
```

**Key points**:

- Accepts both product keys (`influxdb3_core`) and paths (`/influxdb3/core`)
- Non-blocking by default (agent-friendly)
- Use `--wait` for interactive editing
- `--products` and `--repos` are mutually exclusive for audit/release-notes

Non-blocking by default. Use `--wait` for interactive editing.

## Workflows

### Content Editing

- **Content editing**: See [content-editing skill](../.claude/skills/content-editing/SKILL.md)
- **Testing**: See [DOCS-TESTING.md](../DOCS-TESTING.md)
- **Hugo templates**: See [hugo-template-dev skill](../.claude/skills/hugo-template-dev/SKILL.md)

See [content-editing skill](../.claude/skills/content-editing/SKILL.md) for complete workflow:

## Product and Content Paths

- Creating/editing content with CLI
- Shared content management
- Testing and validation

### Testing

See [DOCS-TESTING.md](../DOCS-TESTING.md) and [cypress-e2e-testing skill](../.claude/skills/cypress-e2e-testing/SKILL.md).

Quick tests (NEVER CANCEL long-running):

```bash
yarn test:codeblocks:all # 15-45m
yarn test:links          # 1-5m
yarn lint                # 1m
```

### InfluxDB 3 Setup

See [influxdb3-test-setup skill](../.claude/skills/influxdb3-test-setup/SKILL.md).

Quick setup:

```bash
./test/scripts/init-influxdb3.sh core       # Per-worktree, port 8282
./test/scripts/init-influxdb3.sh enterprise # Shared, port 8181
./test/scripts/init-influxdb3.sh all        # Both
```

### Hugo Template Development

See [hugo-template-dev skill](../.claude/skills/hugo-template-dev/SKILL.md) for template syntax, data access, and testing strategies.

## Repository Structure

### Content Organization

- **InfluxDB 3**: `/content/influxdb3/` (core, enterprise, cloud-dedicated, cloud-serverless, clustered, explorer)
- **InfluxDB v2**: `/content/influxdb/` (v2, cloud)
- **InfluxDB v1**: `/content/influxdb/v1`
- **InfluxDB Enterprise (v1)**: `/content/enterprise_influxdb/v1/`
- **Telegraf**: `/content/telegraf/v1/`
- **Kapacitor**: `/content/kapacitor/`
- **Chronograf**: `/content/chronograf/`
- **Flux**: `/content/flux/`
- **Examples**: `/content/example.md` (comprehensive shortcode reference)
- **Shared content**: `/content/shared/`

### Key Files

- **Config**: `/config/_default/`, `package.json`, `compose.yaml`, `lefthook.yml`
- **Testing**: `cypress.config.js`, `pytest.ini`, `.vale.ini`
- **Assets**: `/assets/` (JS, CSS), `/layouts/` (templates), `/data/` (YAML/JSON)
- **Build output**: `/public/` (\~529MB, gitignored)

## Technology Stack

- **Hugo** - Static site generator
- **Node.js/Yarn** - Package management
- **Testing**: Pytest, Cypress, link-checker, Vale
- **Tools**: Docker, ESLint, Prettier, Lefthook
## Common Issues

### Network Restrictions

Commands that may fail in restricted environments:

- Docker builds (external repos)
- `docker compose up local-dev` (Alpine packages)
- Cypress installation (use `CYPRESS_INSTALL_BINARY=0`)

### Pre-commit Validation

```bash
# Quick validation before commits
yarn prettier --write "**/*.{css,js,ts,jsx,tsx}"
yarn eslint assets/js/**/*.js
npx hugo --quiet
```

## Documentation Coverage

- **InfluxDB 3**: Core, Enterprise, Cloud (Dedicated/Serverless), Clustered, Explorer, plugins
- **InfluxDB v2/v1**: OSS, Cloud, Enterprise
- **Tools**: Telegraf, Kapacitor, Chronograf, Flux
- **API Reference**: All InfluxDB editions

Defined in [data/products.yml](../data/products.yml).
## Content Guidelines
|
||||
|
||||
**Style guide**: Google Developer Documentation Style Guide\
|
||||
**Voice**: Active, present tense, second person\
|
||||
**Line breaks**: Semantic line feeds (one sentence per line)\
|
||||
**Files**: lowercase-with-hyphens.md
|
||||
- [DOCS-CONTRIBUTING.md](../DOCS-CONTRIBUTING.md) - Style, workflow, commit format
|
||||
- [DOCS-SHORTCODES.md](../DOCS-SHORTCODES.md) - Shortcode reference
|
||||
- [DOCS-FRONTMATTER.md](../DOCS-FRONTMATTER.md) - Frontmatter reference
|
||||
- [content/example.md](../content/example.md) - Working shortcode examples
|
||||
|
||||
### Quick Shortcodes

````markdown
# Callouts (GitHub-style alerts)
> [!Note] / [!Warning] / [!Tip] / [!Important] / [!Caution]

# Required elements
{{< req >}}
{{< req type="key" >}}

# Code placeholders
```sh { placeholders="DATABASE_NAME|API_TOKEN" }
curl https://example.com/api?db=DATABASE_NAME
```
````

**Complete reference**: [DOCS-SHORTCODES.md](../DOCS-SHORTCODES.md)
### Required Frontmatter

```yaml
title: # Required
description: # Required
menu:
  product_menu_key:
    name: # Optional
    parent: # Optional
weight: # Required: 1-99, 101-199, 201-299...
```

**Shared content**: Add `source: /shared/path/to/file.md`

**Complete reference**: [DOCS-FRONTMATTER.md](../DOCS-FRONTMATTER.md)
### Resources

- [DOCS-CONTRIBUTING.md](../DOCS-CONTRIBUTING.md) - Workflow & guidelines
- [DOCS-SHORTCODES.md](../DOCS-SHORTCODES.md) - Complete shortcodes
- [DOCS-FRONTMATTER.md](../DOCS-FRONTMATTER.md) - Complete metadata
- [DOCS-TESTING.md](../DOCS-TESTING.md) - Testing procedures
- [content/example.md](../content/example.md) - Working examples
## Troubleshooting

| Issue | Solution |
| ------------------------ | ---------------------------------------------------------------- |
| Pytest collected 0 items | Use `python` not `py` for language identifier |
| Hugo build errors | Check `/config/_default/` |
| Docker build fails | Expected in restricted networks - use local Hugo |
| Cypress install fails | Use `CYPRESS_INSTALL_BINARY=0 yarn install` |
| Link validation slow | Test specific files: `yarn test:links content/file.md` |
| Vale "0 errors in stdin" | File is outside repo - Vale Docker can only access repo files |
| Vale false positives | Add terms to `.ci/vale/styles/InfluxDataDocs/Terms/ignore.txt` |
| Vale duration warnings | Duration literals (`30d`) are valid - check InfluxDataDocs.Units |
## Specialized Instructions

### File Pattern-Specific Instructions

Auto-loaded by GitHub Copilot based on changed files:

| Pattern | File | Description |
| ------------------------ | ----------------------------------------------------------------- | ------------------------------------------------ |
| `content/**/*.md` | [content.instructions.md](instructions/content.instructions.md) | Content file guidelines, frontmatter, shortcodes |
| `content/**/*.md` | [content-review.instructions.md](instructions/content-review.instructions.md) | Review criteria for content changes |
| `layouts/**/*.html` | [layouts.instructions.md](instructions/layouts.instructions.md) | Shortcode implementation patterns and testing |
| `api-docs/**/*.yml` | [api-docs.instructions.md](instructions/api-docs.instructions.md) | OpenAPI spec workflow |
| `assets/js/**/*.{js,ts}` | [assets.instructions.md](instructions/assets.instructions.md) | TypeScript/JavaScript and CSS development |
### Specialized Resources

**Custom Agents** (`.github/agents/`):

- [typescript-hugo-agent.md](agents/typescript-hugo-agent.md) - TypeScript/Hugo development
- [copilot-instructions-agent.md](agents/copilot-instructions-agent.md) - Managing Copilot instructions

**Claude Skills** (`.claude/skills/` - detailed workflows):

- [content-editing](../.claude/skills/content-editing/SKILL.md) - Complete content workflow
- [docs-cli-workflow](../.claude/skills/docs-cli-workflow/SKILL.md) - CLI decision guidance
- [cypress-e2e-testing](../.claude/skills/cypress-e2e-testing/SKILL.md) - E2E testing
- [hugo-template-dev](../.claude/skills/hugo-template-dev/SKILL.md) - Hugo templates
- [influxdb3-test-setup](../.claude/skills/influxdb3-test-setup/SKILL.md) - InfluxDB 3 setup
- [vale-linting](../.claude/skills/vale-linting/SKILL.md) - Vale configuration and debugging

**Documentation**:

- [DOCS-TESTING.md](../DOCS-TESTING.md) - Testing procedures
- [DOCS-CONTRIBUTING.md](../DOCS-CONTRIBUTING.md) - Contribution guidelines
- [DOCS-FRONTMATTER.md](../DOCS-FRONTMATTER.md) - Frontmatter reference
- [DOCS-SHORTCODES.md](../DOCS-SHORTCODES.md) - Shortcodes reference

## Important Notes

- This is a large site (5,359+ pages) with complex build processes
- **NEVER CANCEL** long-running operations (Hugo builds, tests)
- Set appropriate timeouts: Hugo build (180s+), tests (30+ minutes)
@@ -0,0 +1,76 @@
---
applyTo: "content/**/*.md"
---

# Content Review Criteria

Review documentation changes against these rules. Only flag issues you are
confident about. Reference the linked docs for detailed rules.

## Frontmatter

Rules: [DOCS-FRONTMATTER.md](../../DOCS-FRONTMATTER.md)

- `title` and `description` are required on every page
- `menu` structure matches the product's menu key
- `weight` is present for pages in navigation
- `source` paths point to valid `/shared/` paths
- No duplicate or conflicting frontmatter keys
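A minimal frontmatter sketch that satisfies these rules (the menu key, parent, and values here are illustrative, not taken from a real page):

```yaml
title: Manage databases
description: Create, list, and delete databases.
menu:
  influxdb3_core: # Example product menu key
    parent: Administer
weight: 101
```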

## Shortcode Syntax

Rules: [DOCS-SHORTCODES.md](../../DOCS-SHORTCODES.md)

- `{{< >}}` for HTML output, `{{% %}}` for Markdown-processed content
- Closing tags match opening tags
- Required parameters are present
- Callouts use GitHub-style syntax: `> [!Note]`, `> [!Warning]`, etc.
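A quick sketch of the bracket distinction (the shortcode names here are illustrative; see [DOCS-SHORTCODES.md](../../DOCS-SHORTCODES.md) for the actual inventory):

```markdown
{{< req >}}

{{% note %}}
Inner content here is processed as **Markdown** before rendering.
{{% /note %}}

> [!Warning]
> Callouts use GitHub-style alert syntax rather than a shortcode.
```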

## Heading Hierarchy

- No h1 headings in content (h1 comes from `title` frontmatter)
- Headings don't skip levels (h2 -> h4 without h3)

## Semantic Line Feeds

Rules: [DOCS-CONTRIBUTING.md](../../DOCS-CONTRIBUTING.md)

- One sentence per line (better diffs)
- Long sentences on their own line, not concatenated
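For example, a two-sentence paragraph wrapped with semantic line feeds still renders as one paragraph:

```markdown
Semantic line feeds put each sentence on its own line.
A review diff then shows one changed sentence as one changed line.
```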

## Terminology and Product Names

Products defined in [data/products.yml](../../data/products.yml):

- Use official names: "InfluxDB 3 Core", "InfluxDB 3 Enterprise",
  "InfluxDB Cloud Serverless", "InfluxDB Cloud Dedicated"
- Don't mix v2/v3 terminology (e.g., "bucket" in v3 Core docs)
- Version references match the content path

## Links

- Internal links use relative paths or Hugo `relref` shortcodes
- No hardcoded `docs.influxdata.com` links in content files
- Anchor links match actual heading IDs

## Code Blocks

- Use `python` not `py` for language identifiers (pytest requirement)
- Long options in CLI examples (`--output` not `-o`)
- Keep lines within 80 characters
- Include language identifier on fenced code blocks

## Shared Content

- `source:` frontmatter points to an existing shared file
- Shared files don't contain frontmatter (only content)
- Changes to shared content affect multiple products — flag if unintentional

## Severity

- **BLOCKING**: Broken rendering, wrong product names, missing required
  frontmatter, malformed shortcodes, h1 in content body
- **WARNING**: Missing semantic line feeds, skipped heading levels, missing
  `weight`, long CLI options not used
- **INFO**: Suggestions, code block missing language identifier, opportunities
  to use shared content
@@ -0,0 +1,34 @@
# Visual Review Prompt

Review the rendered documentation pages at the preview URLs listed below.
Check each page for visual and structural issues that are invisible in the
Markdown source.

## Checklist

For each preview URL, verify:

- [ ] **No raw shortcodes** — No `{{<` or `{{%` syntax visible on the page
- [ ] **No placeholder text** — No `PLACEHOLDER`, `TODO`, `FIXME`, or
  template variables visible in rendered content
- [ ] **Layout intact** — No overlapping text, missing images, or collapsed
  sections
- [ ] **Code blocks render correctly** — No raw HTML fences or Markdown
  syntax visible inside code blocks
- [ ] **Product names correct** — Page header, breadcrumbs, and sidebar show
  the correct product name
- [ ] **No 404s or errors** — Page loads without error states
- [ ] **Navigation correct** — Sidebar entries link to the right pages and
  the page appears in the expected location

## Output

Follow the shared review comment format, severity definitions, and label
mapping in
[templates/review-comment.md](../templates/review-comment.md).

Adapt the "Files Reviewed" section to list preview URLs instead of file
paths.

## Preview URLs
@@ -1,27 +1,37 @@
## InfluxDB v1 Release Documentation

**Release Version:** v1.x.x
**Release Type:** [ ] OSS [ ] Enterprise

> [!Important]
> **Always create separate PRs for OSS and Enterprise releases.**
> OSS can publish immediately when the GitHub release tag is available.
> Enterprise must wait until the release artifact is GA in the InfluxData portal.
> Never combine both products in a single release PR.

### Description

Brief description of the release and documentation changes.

### Pre-merge Gate (Enterprise only)

- [ ] **Confirm release artifact is GA in the InfluxData portal**
- [ ] **v1 codeowner has signaled readiness** (e.g., applied a release label)

### Release Documentation Checklist

#### Release Notes

- [ ] Generate release notes from changelog
  - OSS: Use commit messages from GitHub release tag `https://github.com/influxdata/influxdb/releases/tag/v1.x.x`
  - Enterprise: Use `https://dl.influxdata.com/enterprise/nightlies/master/CHANGELOG.md`
  - **Note**: For Enterprise releases, include important updates, features, and fixes from the corresponding OSS tag
- [ ] Update release notes in appropriate location
  - OSS: `content/influxdb/v1/about_the_project/release-notes.md`
  - Enterprise: `content/enterprise_influxdb/v1/about-the-project/release-notes.md`
- [ ] Ensure release notes follow documentation formatting standards

#### Version Updates

- [ ] Update patch version in `data/products.yml` (**only for this product**)
  - OSS: `influxdb > latest_patches > v1`
  - Enterprise: `enterprise_influxdb > latest_patches > v1`
- [ ] Update version references in documentation
  - [ ] Installation guides
  - [ ] Docker documentation
@@ -37,8 +47,9 @@ Brief description of the release and documentation changes.
#### Testing

- [ ] Build documentation locally and verify changes render correctly
- [ ] Test all updated links
- [ ] Run link validation for the product being released:
  - OSS: `yarn test:links content/influxdb/v1/**/*.md`
  - Enterprise: `yarn test:links content/enterprise_influxdb/v1/**/*.md`

### Related Resources

- DAR Issue: #
@@ -50,6 +61,3 @@ Brief description of the release and documentation changes.
- [ ] Verify documentation is deployed to production
- [ ] Announce in #docs channel
- [ ] Close related DAR issue(s)
@@ -35,10 +35,10 @@ if (!/^origin\/[a-zA-Z0-9._\/-]+$/.test(BASE_REF)) {
 */
function getAllChangedFiles() {
  try {
    const output = execSync(`git diff --name-only ${BASE_REF}...HEAD`, {
      encoding: 'utf-8',
      stdio: ['pipe', 'pipe', 'pipe'],
    });
    return output.trim().split('\n').filter(Boolean);
  } catch (err) {
    console.error(`Error detecting changes: ${err.message}`);
@@ -53,11 +53,13 @@
 */
function categorizeChanges(files) {
  return {
    content: files.filter((f) => f.startsWith('content/') && f.endsWith('.md')),
    layouts: files.filter((f) => f.startsWith('layouts/')),
    assets: files.filter((f) => f.startsWith('assets/')),
    data: files.filter((f) => f.startsWith('data/')),
    apiDocs: files.filter(
      (f) => f.startsWith('api-docs/') || f.startsWith('openapi/')
    ),
  };
}
@@ -127,7 +129,7 @@ function main() {
  const htmlPaths = mapContentToPublic(expandedContent, 'public');

  // Convert HTML paths to URL paths
  pagesToDeploy = Array.from(htmlPaths).map((htmlPath) => {
    return '/' + htmlPath.replace('public/', '').replace('/index.html', '/');
  });
  console.log(` Found ${pagesToDeploy.length} affected pages\n`);
@@ -135,34 +137,53 @@
  // Strategy 2: Layout/asset changes - parse URLs from PR body
  if (hasLayoutChanges) {
    console.log(
      '🎨 Layout/asset changes detected, checking PR description for URLs...'
    );

    // Auto-detect home page when the root template changes
    if (changes.layouts.includes('layouts/index.html')) {
      pagesToDeploy = [...new Set([...pagesToDeploy, '/'])];
      console.log(
        ' 🏠 Home page template (layouts/index.html) changed - auto-adding / to preview pages'
      );
    }

    const prUrls = extractDocsUrls(PR_BODY);

    if (prUrls.length > 0) {
      console.log(` Found ${prUrls.length} URLs in PR description`);
      // Merge with content pages (deduplicate)
      pagesToDeploy = [...new Set([...pagesToDeploy, ...prUrls])];
    } else if (pagesToDeploy.length === 0) {
      // No content changes, no auto-detected pages, and no URLs specified - need author input
      console.log(
        ' ⚠️ No URLs found in PR description - author input needed'
      );
      setOutput('pages-to-deploy', '[]');
      setOutput('has-layout-changes', 'true');
      setOutput('needs-author-input', 'true');
      setOutput(
        'change-summary',
        'Layout/asset changes detected - please specify pages to preview'
      );
      return;
    }
  }

  // Apply page limit
  if (pagesToDeploy.length > MAX_PAGES) {
    console.log(
      `⚠️ Limiting preview to ${MAX_PAGES} pages (found ${pagesToDeploy.length})`
    );
    pagesToDeploy = pagesToDeploy.slice(0, MAX_PAGES);
  }

  // Generate summary
  const summary =
    pagesToDeploy.length > 0
      ? `${pagesToDeploy.length} page(s) will be previewed`
      : 'No pages to preview';

  console.log(`\n✅ ${summary}`);
@@ -63,6 +63,9 @@ function isValidUrlPath(path) {
  // Must start with /
  if (!path.startsWith('/')) return false;

  // Allow root path (docs home page at /)
  if (path === '/') return true;

  // Must start with known product prefix (loaded from products.yml)
  const validPrefixes = PRODUCT_NAMESPACES.map((ns) => `/${ns}/`);
@@ -101,7 +104,8 @@ export function extractDocsUrls(text) {
  // Pattern 1: Full production URLs
  // https://docs.influxdata.com/influxdb3/core/get-started/
  // https://docs.influxdata.com/ (home page)
  const prodUrlPattern = /https?:\/\/docs\.influxdata\.com(\/[^\s)\]>"']*)/g;
  let match;
  while ((match = prodUrlPattern.exec(text)) !== null) {
    const path = normalizeUrlPath(match[1]);
@@ -112,7 +116,8 @@ export function extractDocsUrls(text) {
  // Pattern 2: Localhost dev URLs
  // http://localhost:1313/influxdb3/core/
  // http://localhost:1313/ (home page)
  const localUrlPattern = /https?:\/\/localhost:\d+(\/[^\s)\]>"']*)/g;
  while ((match = localUrlPattern.exec(text)) !== null) {
    const path = normalizeUrlPath(match[1]);
    if (isValidUrlPath(path)) {
@@ -0,0 +1,61 @@
/**
 * Resolve Review URLs
 *
 * Maps changed content files to URL paths for the doc-review workflow.
 * Reuses the same content-utils functions as detect-preview-pages.js.
 *
 * Outputs (for GitHub Actions):
 * - urls: JSON array of URL paths
 * - url-count: Number of URLs
 */

import { appendFileSync } from 'fs';
import { execSync } from 'child_process';
import {
  getChangedContentFiles,
  mapContentToPublic,
} from '../../scripts/lib/content-utils.js';

const GITHUB_OUTPUT = process.env.GITHUB_OUTPUT || '/dev/stdout';
const BASE_REF = process.env.BASE_REF || 'origin/master';
const MAX_PAGES = 50;

if (!/^origin\/[a-zA-Z0-9._/-]+$/.test(BASE_REF)) {
  console.error(`Invalid BASE_REF: ${BASE_REF}`);
  process.exit(1);
}

const changed = getChangedContentFiles(BASE_REF);
const htmlPaths = mapContentToPublic(changed, 'public');

const contentUrls = Array.from(htmlPaths)
  .sort()
  .map((p) => '/' + p.replace(/^public\//, '').replace(/\/index\.html$/, '/'))
  .slice(0, MAX_PAGES);

// Check if the home page template changed (layouts/index.html → /)
let homePageUrls = [];
try {
  const homePageChanged = execSync(
    `git diff --name-only ${BASE_REF}...HEAD -- layouts/index.html`,
    { encoding: 'utf-8', stdio: ['pipe', 'pipe', 'pipe'] }
  ).trim();
  if (homePageChanged) {
    homePageUrls = ['/'];
    console.log(
      'Home page template (layouts/index.html) changed - adding / to review URLs'
    );
  }
} catch {
  // Ignore errors - fall back to content-only URLs
}

const urls = [...new Set([...homePageUrls, ...contentUrls])].slice(
  0,
  MAX_PAGES
);

appendFileSync(GITHUB_OUTPUT, `urls=${JSON.stringify(urls)}\n`);
appendFileSync(GITHUB_OUTPUT, `url-count=${urls.length}\n`);

console.log(`Detected ${urls.length} preview URLs`);
@@ -145,7 +145,11 @@ test('Special characters: backticks are delimiters', () => {
  // This prevents command substitution injection
  const text = '/influxdb3/`whoami`/';
  const result = extractDocsUrls(text);
  assertEquals(
    result,
    ['/influxdb3/'],
    'Should truncate at backtick delimiter'
  );
});

test('Special characters: single quotes truncate at extraction', () => {
@@ -257,31 +261,51 @@ test('Normalization: removes query string', () => {
test('Normalization: strips wildcard from path', () => {
  const text = '/influxdb3/enterprise/*';
  const result = extractDocsUrls(text);
  assertEquals(
    result,
    ['/influxdb3/enterprise/'],
    'Should strip wildcard character'
  );
});

test('Normalization: strips wildcard in middle of path', () => {
  const text = '/influxdb3/*/admin/';
  const result = extractDocsUrls(text);
  assertEquals(
    result,
    ['/influxdb3/admin/'],
    'Should strip wildcard from middle of path'
  );
});

test('Normalization: strips multiple wildcards', () => {
  const text = '/influxdb3/*/admin/*';
  const result = extractDocsUrls(text);
  assertEquals(
    result,
    ['/influxdb3/admin/'],
    'Should strip all wildcard characters'
  );
});

test('Wildcard in markdown-style notation', () => {
  const text = '**InfluxDB 3 Enterprise pages** (`/influxdb3/enterprise/*`)';
  const result = extractDocsUrls(text);
  assertEquals(
    result,
    ['/influxdb3/enterprise/'],
    'Should extract and normalize path with wildcard in backticks'
  );
});

test('Wildcard in parentheses', () => {
  const text = 'Affects pages under (/influxdb3/enterprise/*)';
  const result = extractDocsUrls(text);
  assertEquals(
    result,
    ['/influxdb3/enterprise/'],
    'Should extract and normalize path with wildcard in parentheses'
  );
});

// Test deduplication
@@ -360,6 +384,31 @@ test('BASE_REF: rejects without origin/ prefix', () => {
  assertEquals(isValid, false, 'Should require origin/ prefix');
});

// Home page URL support
test('Home page: production URL https://docs.influxdata.com/', () => {
  const text = 'Preview: https://docs.influxdata.com/';
  const result = extractDocsUrls(text);
  assertEquals(result, ['/'], 'Should extract root path for docs home page');
});

test('Home page: localhost URL http://localhost:1313/', () => {
  const text = 'Testing at http://localhost:1313/';
  const result = extractDocsUrls(text);
  assertEquals(result, ['/'], 'Should extract root path from localhost URL');
});

test('Home page: relative root path / in text', () => {
  // Relative '/' alone is not extractable by the relative pattern (requires product prefix),
  // but full URLs with / path are supported
  const text = 'https://docs.influxdata.com/ and /influxdb3/core/';
  const result = extractDocsUrls(text);
  assertEquals(
    result.sort(),
    ['/', '/influxdb3/core/'].sort(),
    'Should extract both root path and product path'
  );
});

// Print summary
console.log('\n=== Test Summary ===');
console.log(`Total: ${totalTests}`);
@@ -0,0 +1,104 @@
/**
 * Workflow Utilities
 *
 * Canonical import for GitHub Actions workflow scripts. Re-exports shared
 * utilities from scripts/lib/ and adds workflow-specific helpers.
 *
 * Usage from github-script inline steps:
 *
 *   const utils = await import(`${process.cwd()}/.github/scripts/workflow-utils.js`);
 *   const pathToLabel = await utils.getProductLabelMap();
 *   const labels = utils.matchFilesToLabels(changedFiles, pathToLabel);
 *
 * Usage from .github/scripts/ ESM modules:
 *
 *   import { getProductLabelMap, findPagesReferencingSharedContent } from './workflow-utils.js';
 */

import { readFileSync } from 'fs';
import { findPagesReferencingSharedContent } from '../../scripts/lib/content-utils.js';

// --- Re-export content utilities ---
export {
  findPagesReferencingSharedContent,
  expandSharedContentChanges,
  getChangedContentFiles,
  mapContentToPublic,
  categorizeContentFiles,
  getSourceFromFrontmatter,
} from '../../scripts/lib/content-utils.js';

/**
 * Build a Map of content path prefixes to product label names
 * by reading data/products.yml.
 *
 * Requires `js-yaml` to be installed (e.g., `npm install js-yaml`).
 *
 * @param {string} [productsPath='data/products.yml'] - Path to products.yml
 * @returns {Promise<Map<string, string>>} Map of "content/{path}/" → "product:{label_group}"
 */
export async function getProductLabelMap(productsPath = 'data/products.yml') {
  const { load } = await import('js-yaml');
  const products = load(readFileSync(productsPath, 'utf8'));
  const pathToLabel = new Map();

  for (const product of Object.values(products)) {
    const cp = product.content_path;
    const lg = product.label_group;
    if (!cp || !lg) continue;

    if (typeof cp === 'string' && typeof lg === 'string') {
      pathToLabel.set(`content/${cp}/`, `product:${lg}`);
    } else if (typeof cp === 'object' && typeof lg === 'object') {
      for (const version of Object.keys(cp)) {
        if (lg[version]) {
          pathToLabel.set(`content/${cp[version]}/`, `product:${lg[version]}`);
        }
      }
    }
  }

  return pathToLabel;
}

/**
 * Match a list of file paths against the product label map.
 * For shared content files, expands to find affected products.
 *
 * @param {string[]} files - Changed file paths
 * @param {Map<string, string>} pathToLabel - From getProductLabelMap()
 * @returns {Set<string>} Set of label names to apply
 */
export function matchFilesToLabels(files, pathToLabel) {
  const labels = new Set();

  for (const file of files) {
    if (file.startsWith('content/shared/')) {
      labels.add('product:shared');

      try {
        const referencingPages = findPagesReferencingSharedContent(file);
        for (const page of referencingPages) {
          for (const [prefix, label] of pathToLabel) {
            if (page.startsWith(prefix)) {
              labels.add(label);
              break;
            }
          }
        }
      } catch {
        // Shared content expansion failed — product:shared still applied
      }
      continue;
    }

    for (const [prefix, label] of pathToLabel) {
      if (file.startsWith(prefix)) {
        labels.add(label);
        break;
      }
    }
  }

  return labels;
}
@@ -0,0 +1,98 @@
# Review Comment Format
|
||||
|
||||
Shared definitions for severity levels, comment structure, and result → label
|
||||
mapping. Used by doc-review-agent.md (local review sessions) and
|
||||
copilot-visual-review.md (rendered page review).
|
||||
|
||||
## Severity Levels
|
||||
|
||||
### BLOCKING
|
||||
|
||||
Issues that will cause incorrect rendering, broken pages, or misleading
|
||||
content. These must be fixed before merge.
|
||||
|
||||
Examples:
|
||||
- Missing required frontmatter (`title`, `description`)
|
||||
- Unclosed or malformed shortcode tags
|
||||
- Wrong product name in content (e.g., "InfluxDB 3" in v2 docs)
|
||||
- Broken `source:` path for shared content
|
||||
- h1 heading in content body
|
||||
- Raw shortcode syntax visible on rendered page (`{{<` or `{{%`)
|
||||
- 404 errors on preview pages
|
||||
- Wrong product name in header or breadcrumbs
|
||||
|
||||
### WARNING
|
||||
|
||||
Style issues or minor visual problems that should be fixed but don't break
|
||||
functionality or correctness.
|
||||
|
||||
Examples:
|
||||
- Missing semantic line feeds (multiple sentences on one line)
|
||||
- Heading level skipped (h2 → h4)
|
||||
- Long option not used in CLI examples (`-o` instead of `--output`)
|
||||
- Missing `weight` in frontmatter
|
||||
- Minor layout issues (overlapping text, collapsed sections)
|
||||
- Missing images
|
||||
- Placeholder text visible (`TODO`, `FIXME`)
|
||||
|
||||
### INFO
|
||||
|
||||
Suggestions and observations. Not problems.
|
||||
|
||||
Examples:
|
||||
- Opportunity to use a shared content file
|
||||
- Unusually long page that could be split
|
||||
- Code block missing language identifier
|
||||
- Cosmetic improvements
|
||||
|
||||
## Comment Structure
|
||||
|
||||
Post a single review comment on the PR with this structure:
|
||||
|
||||
```markdown
|
||||
## Doc Review Summary
|
||||
|
||||
**Result:** APPROVED | CHANGES REQUESTED | NEEDS HUMAN REVIEW
|
||||
|
||||
### Issues Found
|
||||
|
||||
#### BLOCKING
|
||||
|
||||
- **file:line** — Description of the issue
|
||||
- Suggested fix: ...
|
||||
|
||||
#### WARNING
|
||||
|
||||
- **file:line** — Description of the issue
|
||||
|
||||
#### INFO
|
||||
|
||||
- **file:line** — Observation
|
||||
|
||||
### Files Reviewed
|
||||
|
||||
- `path/to/file.md` — Brief summary of changes
|
||||
```
|
||||
|
||||

Adapt the "Files Reviewed" section to the review context:

- **Source review:** list file paths from the diff
- **Visual review (Copilot):** list preview URLs instead of file paths

## Result Rules

- Zero BLOCKING issues → **APPROVED**
- Any BLOCKING issues → **CHANGES REQUESTED**
- Cannot determine severity or diff is ambiguous → **NEEDS HUMAN REVIEW**
- Only report issues you are confident about. Do not guess.
- Group issues by file when multiple issues exist in the same file.
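
The result rules above can be sketched as a small helper (illustrative only; the agent applies these rules in prose, not through this exact code):

```javascript
// Sketch of the result rules. Issue objects are assumed to look like
// { severity: 'BLOCKING' | 'WARNING' | 'INFO' | null, file, line };
// a null severity models an issue whose severity could not be determined.
function reviewResult(issues) {
  if (issues.some((i) => i.severity == null)) {
    return 'NEEDS HUMAN REVIEW';
  }
  if (issues.some((i) => i.severity === 'BLOCKING')) {
    return 'CHANGES REQUESTED';
  }
  return 'APPROVED';
}
```

Note that the ambiguity check runs first: a single unclassifiable issue escalates the whole review to a human even if blocking issues are also present.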

## Result → Label Mapping

| Result | Label |
|--------|-------|
| APPROVED | `review:approved` |
| CHANGES REQUESTED | `review:changes-requested` |
| NEEDS HUMAN REVIEW | `review:needs-human` |

Labels are mutually exclusive. Apply manually after review — Copilot code
review uses GitHub's native "Comment" review type and does not manage labels.
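
Because the three `review:*` labels are mutually exclusive, applying one implies removing the others. A minimal sketch of that bookkeeping (function and variable names are illustrative, not part of the repo):

```javascript
const REVIEW_LABELS = [
  'review:approved',
  'review:changes-requested',
  'review:needs-human',
];

// Given the review label to apply and the labels already on the PR,
// compute which review:* labels to remove and whether the target
// label still needs to be added. Non-review labels are left alone.
function reviewLabelChanges(target, existing) {
  const toRemove = existing.filter(
    (l) => REVIEW_LABELS.includes(l) && l !== target
  );
  const toAdd = existing.includes(target) ? [] : [target];
  return { toAdd, toRemove };
}
```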

@@ -0,0 +1,122 @@
name: Auto-label PRs

on:
  pull_request:
    types: [opened, synchronize]
  workflow_dispatch:
    inputs:
      pr_number:
        description: 'PR number to label'
        required: true
        type: number

permissions: {}

concurrency:
  group: auto-label-${{ github.event.number || inputs.pr_number }}
  cancel-in-progress: true

jobs:
  auto-label:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    # Skip draft PRs and fork PRs (workflow_dispatch always runs)
    if: |
      github.event_name == 'workflow_dispatch' ||
      (!github.event.pull_request.draft &&
      github.event.pull_request.head.repo.full_name == github.repository)
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4.3.1
        with:
          persist-credentials: false
          sparse-checkout: |
            content
            data/products.yml
            scripts/lib/content-utils.js
            .github/scripts/workflow-utils.js
            package.json
          sparse-checkout-cone-mode: false

      - uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
        with:
          node-version: 22

      - name: Install js-yaml
        run: npm install --no-save --ignore-scripts --no-package-lock --legacy-peer-deps js-yaml

      - name: Apply product labels
        uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7.1.0
        with:
          script: |
            const {
              getProductLabelMap,
              matchFilesToLabels,
            } = await import(
              `${process.cwd()}/.github/scripts/workflow-utils.js`
            );

            const prNumber =
              context.issue.number ||
              Number('${{ inputs.pr_number }}');

            if (!prNumber) {
              core.setFailed('No PR number available');
              return;
            }

            // --- Build path-to-label mapping from products.yml ---
            const pathToLabel = await getProductLabelMap();
            core.info(
              `Loaded ${pathToLabel.size} path-to-label mappings from products.yml`
            );

            // --- Get changed files from the PR (paginated) ---
            const files = await github.paginate(
              github.rest.pulls.listFiles,
              {
                owner: context.repo.owner,
                repo: context.repo.repo,
                pull_number: prNumber,
                per_page: 100,
              }
            );

            const changedFiles = files.map(f => f.filename);
            core.info(`PR has ${changedFiles.length} changed files`);

            // --- Match files to product labels ---
            const labelsToAdd = matchFilesToLabels(changedFiles, pathToLabel);

            if (labelsToAdd.size === 0) {
              core.info('No product labels to add');
              return;
            }

            // --- Get existing PR labels to avoid duplicates ---
            const { data: prData } = await github.rest.issues.get({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: prNumber,
            });

            const existingLabels = new Set(prData.labels.map(l => l.name));
            const newLabels = [...labelsToAdd].filter(
              l => !existingLabels.has(l)
            );

            if (newLabels.length === 0) {
              core.info('All matching labels already present');
              return;
            }

            // --- Apply labels ---
            await github.rest.issues.addLabels({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: prNumber,
              labels: newLabels,
            });

            core.info(`Added labels: ${newLabels.join(', ')}`);
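
The `getProductLabelMap` and `matchFilesToLabels` helpers above live in `.github/scripts/workflow-utils.js`, which is not part of this diff. A plausible sketch of the matching step, assuming the map keys are content-path prefixes (an assumption, since the real implementation is not shown):

```javascript
// Hypothetical sketch: match changed file paths against a Map of
// path prefixes -> label names, returning the Set of labels to apply.
function matchFilesToLabels(changedFiles, pathToLabel) {
  const labels = new Set();
  for (const file of changedFiles) {
    for (const [prefix, label] of pathToLabel) {
      if (file.startsWith(prefix)) {
        labels.add(label);
      }
    }
  }
  return labels;
}
```

Returning a `Set` matches how the workflow script uses the result (`labelsToAdd.size`, spread into an array).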

File diff suppressed because it is too large

@@ -0,0 +1,58 @@
---
description: |
  This workflow creates daily repo status reports. It gathers recent repository
  activity (issues, PRs, discussions, releases, code changes) and generates
  engaging GitHub issues with productivity insights, community highlights,
  and project recommendations.

on:
  schedule: daily
  workflow_dispatch:

permissions:
  contents: read
  issues: read
  pull-requests: read

network: defaults

tools:
  github:
    # In a public repo, setting `lockdown: false` allows reading
    # issues, pull requests, and comments from third parties.
    # In a private repo this has no particular effect.
    lockdown: false

safe-outputs:
  mentions: false
  allowed-github-references: []
  create-issue:
    title-prefix: "[repo-status] "
    labels: [report, daily-status]
    close-older-issues: true
source: githubnext/agentics/workflows/daily-repo-status.md@9a76aba267225767b9b2e1623188d11ed9b58f11
engine: copilot
---

# Daily Repo Status

Create an upbeat daily status report for the repo as a GitHub issue.

## What to include

- Recent repository activity (issues, PRs, discussions, releases, code changes)
- Progress tracking, goal reminders, and highlights
- Project status and recommendations
- Actionable next steps for maintainers

## Style

- Be positive, encouraging, and helpful 🌟
- Use emojis moderately for engagement
- Keep it concise; adjust length based on actual activity

## Process

1. Gather recent activity from the repository
2. Study the repository, its issues, and its pull requests
3. Create a new GitHub issue with your findings and insights

@@ -0,0 +1,280 @@
name: Doc Review

on:
  pull_request:
    types: [opened, synchronize, ready_for_review]
    paths:
      - 'content/**'
      - 'layouts/**'
      - 'assets/**'
      - 'data/**'
  workflow_dispatch:
    inputs:
      pr_number:
        description: 'PR number to review'
        required: true
        type: number

permissions: {}

concurrency:
  group: doc-review-${{ github.event.number || inputs.pr_number }}
  cancel-in-progress: true

jobs:
  # -----------------------------------------------------------------
  # Job 1: Resolve preview URLs from changed content files
  # -----------------------------------------------------------------
  resolve-urls:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: read
    if: |
      github.event_name == 'workflow_dispatch' ||
      (!github.event.pull_request.draft &&
      github.event.pull_request.head.repo.full_name == github.repository &&
      !contains(github.event.pull_request.labels.*.name, 'skip-review'))
    outputs:
      urls: ${{ steps.detect.outputs.urls }}
      url-count: ${{ steps.detect.outputs.url-count }}
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4.3.1
        with:
          persist-credentials: false
          fetch-depth: 0
          sparse-checkout: |
            content
            data/products.yml
            scripts/lib/content-utils.js
            .github/scripts/resolve-review-urls.js
            package.json
          sparse-checkout-cone-mode: false

      - uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
        with:
          node-version: 22

      - name: Resolve base ref
        id: base
        env:
          GH_TOKEN: ${{ github.token }}
          PR_NUMBER: ${{ github.event.pull_request.number || inputs.pr_number }}
        run: |
          if [ -n "${{ github.base_ref }}" ]; then
            echo "ref=origin/${{ github.base_ref }}" >> "$GITHUB_OUTPUT"
          else
            BASE=$(gh pr view "$PR_NUMBER" --repo "${{ github.repository }}" --json baseRefName -q .baseRefName)
            git fetch origin "$BASE"
            echo "ref=origin/$BASE" >> "$GITHUB_OUTPUT"
          fi

      - name: Detect changed pages
        id: detect
        env:
          BASE_REF: ${{ steps.base.outputs.ref }}
        run: node .github/scripts/resolve-review-urls.js

  # -----------------------------------------------------------------
  # Job 2: Copilot code review (runs in parallel with Job 1)
  # -----------------------------------------------------------------
  copilot-review:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    if: |
      github.event_name == 'workflow_dispatch' ||
      (!github.event.pull_request.draft &&
      github.event.pull_request.head.repo.full_name == github.repository &&
      !contains(github.event.pull_request.labels.*.name, 'skip-review'))
    steps:
      - name: Request Copilot review
        uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7.1.0
        env:
          PR_NUMBER: ${{ github.event.pull_request.number || inputs.pr_number }}
        with:
          script: |
            const prNumber = context.issue.number || Number(process.env.PR_NUMBER);
            try {
              await github.rest.pulls.requestReviewers({
                owner: context.repo.owner,
                repo: context.repo.repo,
                pull_number: prNumber,
                reviewers: ['copilot-pull-request-reviewer'],
              });
              core.info('Copilot code review requested successfully');
            } catch (error) {
              core.warning(`Could not request Copilot review: ${error.message}`);
              core.warning(
                'To enable automatic Copilot reviews, configure a repository ruleset: ' +
                'Settings → Rules → Rulesets → "Automatically request Copilot code review"'
              );
            }

  # -----------------------------------------------------------------
  # Job 3: Copilot visual review (depends on Job 1 for URLs)
  # -----------------------------------------------------------------
  copilot-visual-review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    needs: resolve-urls
    if: needs.resolve-urls.result == 'success' && fromJson(needs.resolve-urls.outputs.url-count) > 0
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4.3.1
        with:
          persist-credentials: false
          sparse-checkout: .github/prompts/copilot-visual-review.md
          sparse-checkout-cone-mode: false

      - name: Wait for preview deployment
        id: wait
        env:
          PR_NUMBER: ${{ github.event.pull_request.number || inputs.pr_number }}
        run: |
          PREVIEW_URL="https://influxdata.github.io/docs-v2/pr-preview/pr-${PR_NUMBER}/"
          TIMEOUT=600  # 10 minutes
          INTERVAL=15
          ELAPSED=0

          echo "Waiting for preview at ${PREVIEW_URL}"

          while [ "$ELAPSED" -lt "$TIMEOUT" ]; do
            STATUS=$(curl -s -o /dev/null -L -w "%{http_code}" "$PREVIEW_URL" || echo "000")
            if [ "$STATUS" = "200" ]; then
              echo "Preview is live"
              echo "available=true" >> "$GITHUB_OUTPUT"
              exit 0
            fi
            echo "Status: ${STATUS} (${ELAPSED}s / ${TIMEOUT}s)"
            sleep "$INTERVAL"
            ELAPSED=$((ELAPSED + INTERVAL))
          done

          echo "Preview deployment timed out after ${TIMEOUT}s"
          echo "available=false" >> "$GITHUB_OUTPUT"

      - name: Post visual review request
        if: steps.wait.outputs.available == 'true'
        uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7.1.0
        env:
          PREVIEW_URLS: ${{ needs.resolve-urls.outputs.urls }}
          PR_NUMBER: ${{ github.event.pull_request.number || inputs.pr_number }}
        with:
          script: |
            const fs = require('fs');

            let urls;
            try {
              urls = JSON.parse(process.env.PREVIEW_URLS);
            } catch (e) {
              core.warning(`Failed to parse PREVIEW_URLS: ${e.message}`);
              return;
            }

            const prNumber = context.issue.number || Number(process.env.PR_NUMBER);
            const previewBase = `https://influxdata.github.io/docs-v2/pr-preview/pr-${prNumber}`;

            // Build preview URL list
            const urlList = urls
              .map(u => `- [${u}](${previewBase}${u})`)
              .join('\n');

            // Read the Copilot visual review template
            const template = fs.readFileSync(
              '.github/prompts/copilot-visual-review.md',
              'utf8'
            );

            const marker = '<!-- doc-review-visual -->';
            const body = [
              marker,
              '## Preview Pages for Review',
              '',
              `${urls.length} page(s) changed in this PR:`,
              '',
              '<details>',
              '<summary>Preview URLs</summary>',
              '',
              urlList,
              '',
              '</details>',
              '',
              '---',
              '',
              `@github-copilot please review the preview pages listed above using the template below:`,
              '',
              template.trim(),
              '',
              urlList,
            ].join('\n');

            // Update existing comment or create new one
            const { data: comments } = await github.rest.issues.listComments({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: prNumber,
            });
            const existing = comments.find(c => c.body.includes(marker));

            if (existing) {
              await github.rest.issues.updateComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                comment_id: existing.id,
                body,
              });
            } else {
              await github.rest.issues.createComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: prNumber,
                body,
              });
            }

            core.info(`Posted visual review request with ${urls.length} URLs`);

      - name: Post timeout notice
        if: steps.wait.outputs.available == 'false'
        uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7.1.0
        env:
          PR_NUMBER: ${{ github.event.pull_request.number || inputs.pr_number }}
        with:
          script: |
            const prNumber = context.issue.number || Number(process.env.PR_NUMBER);
            const marker = '<!-- doc-review-visual-timeout -->';
            const body = [
              marker,
              '## Visual Review Skipped',
              '',
              'The PR preview deployment did not become available within 10 minutes.',
              'Visual review was skipped. The Copilot code review (Job 2) still ran.',
              '',
              'To trigger visual review manually, re-run this workflow after the',
              'preview is deployed.',
            ].join('\n');

            const { data: comments } = await github.rest.issues.listComments({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: prNumber,
            });
            const existing = comments.find(c => c.body.includes(marker));

            if (existing) {
              await github.rest.issues.updateComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                comment_id: existing.id,
                body,
              });
            } else {
              await github.rest.issues.createComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: prNumber,
                body,
              });
            }
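
The `resolve-review-urls.js` script referenced by Job 1 is not shown in this diff. A hypothetical sketch of the path-to-URL mapping it implies, based on how Job 3 concatenates `previewBase` with each URL (the `_index.md` convention is the standard Hugo section-page rule, assumed here rather than taken from the script):

```javascript
// Hypothetical sketch: map a changed Markdown content file to the
// site path appended to the pr-preview base URL.
// e.g. content/influxdb3/core/admin/_index.md -> /influxdb3/core/admin/
function contentPathToUrl(filePath) {
  if (!filePath.startsWith('content/') || !filePath.endsWith('.md')) {
    return null; // not a rendered content page
  }
  // Drop the leading "content" segment and the ".md" extension.
  let p = filePath.slice('content'.length, -'.md'.length);
  if (p.endsWith('/_index')) {
    // Section pages render at the directory path.
    p = p.slice(0, -'_index'.length);
  } else {
    p = p + '/';
  }
  return p;
}
```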

@@ -2,7 +2,7 @@ name: PR Preview
on:
  pull_request:
    types: [opened, reopened, synchronize, closed]
    types: [opened, reopened, synchronize, closed, ready_for_review]
    paths:
      - 'content/**'
      - 'layouts/**'

@@ -139,6 +139,8 @@ jobs:
      - name: Deploy preview
        if: steps.detect.outputs.pages-to-deploy != '[]'
        id: deploy-preview
        continue-on-error: true
        uses: rossjrw/pr-preview-action@v1.4.8
        with:
          source-dir: ./preview-staging

@@ -146,8 +148,27 @@
          umbrella-dir: pr-preview
          action: deploy

      - name: Post success comment
      - name: Validate preview deployment
        if: steps.detect.outputs.pages-to-deploy != '[]'
        id: validate-deploy
        run: |
          DEPLOY_OUTCOME="${{ steps.deploy-preview.outcome }}"
          DEPLOY_URL="${{ steps.deploy-preview.outputs.deployment-url }}"

          if [ -z "$DEPLOY_URL" ]; then
            echo "Deployment step did not produce a preview URL. Failing preview job."
            exit 1
          fi

          if [ "$DEPLOY_OUTCOME" != "success" ]; then
            echo "Deployment reported outcome: $DEPLOY_OUTCOME"
            echo "Preview URL exists; treating as transient post-deploy comment error."
          fi

          echo "status=ok" >> "$GITHUB_OUTPUT"

      - name: Post success comment
        if: steps.detect.outputs.pages-to-deploy != '[]' && steps.validate-deploy.outputs.status == 'ok'
        uses: actions/github-script@v7
        with:
          script: |

@@ -25,7 +25,7 @@ jobs:
    # Only run on issues with sync-plugin-docs label or manual dispatch
    if: |
      github.event_name == 'workflow_dispatch' ||
      (github.event_name == 'issues' && contains(github.event.issue.labels.*.name, 'sync-plugin-docs'))
      (github.event_name == 'issues' && contains(github.event.issue.labels.*.name, 'source:sync'))

    steps:
      - name: Parse issue inputs

@@ -170,7 +170,7 @@ jobs:

            repo: context.repo.repo,
            issue_number: parseInt(issueNumber),
            state: 'closed',
            labels: ['sync-plugin-docs', 'validation-failed']
            labels: ['source:sync', 'validation-failed']
          });
        }

@@ -418,7 +418,7 @@ jobs:

            repo: context.repo.repo,
            issue_number: ${{ steps.inputs.outputs.issue_number }},
            state: 'closed',
            labels: ['sync-plugin-docs', 'completed']
            labels: ['source:sync', 'completed']
          });

      - name: Report failure

25
.mcp.json

@@ -1,20 +1,33 @@
{
  "$schema": "https://raw.githubusercontent.com/modelcontextprotocol/modelcontextprotocol/refs/heads/main/schema/2025-06-18/schema.json",
  "description": "InfluxData documentation assistance via MCP server - Node.js execution",
  "description": "InfluxData documentation assistance via MCP servers",
  "mcpServers": {
    "influxdb-docs": {
      "comment": "Hosted InfluxDB documentation search. Uses API key auth (set INFLUXDATA_DOCS_KAPA_API_KEY env var). Get your key from the Kapa dashboard. Rate limits: 60 req/min.",
      "type": "sse",
      "url": "https://influxdb-docs.mcp.kapa.ai",
      "headers": {
        "Authorization": "Bearer ${INFLUXDATA_DOCS_KAPA_API_KEY}"
      }
    },
    "influxdb-docs-oauth": {
      "comment": "Hosted InfluxDB documentation search (OAuth). No API key needed--authenticates via Google or GitHub OAuth on first use. Rate limits: 40 req/hr, 200 req/day.",
      "type": "sse",
      "url": "https://influxdb-docs.mcp.kapa.ai"
    },
    "influxdata": {
      "comment": "Use Node to run Docs MCP. To install and setup, see https://github.com/influxdata/docs-mcp-server",
      "comment": "Local Docs MCP server (optional). To install and set up, see https://github.com/influxdata/docs-mcp-server. NOTE: uses deprecated endpoints--pending update.",
      "type": "stdio",
      "command": "node",
      "args": [
        "${DOCS_MCP_SERVER_PATH}/dist/index.js"
      ],
      "env": {
        "DOCS_API_KEY_FILE": "${DOCS_API_KEY_FILE:-$HOME/.env.docs-kapa-api-key}",
        "DOCS_MODE": "external-only",
        "MCP_LOG_LEVEL": "${MCP_LOG_LEVEL:-info}",
        "INFLUXDATA_DOCS_API_KEY_FILE": "${INFLUXDATA_DOCS_API_KEY_FILE:-$HOME/.env.docs-kapa-api-key}",
        "INFLUXDATA_DOCS_MODE": "external-only",
        "INFLUXDATA_DOCS_LOG_LEVEL": "${INFLUXDATA_DOCS_LOG_LEVEL:-info}",
        "NODE_ENV": "${NODE_ENV:-production}"
      }
    }
  }
}

219
AGENTS.md

@@ -1,33 +1,21 @@
# InfluxData Documentation (docs-v2)

> **For general AI assistants (Claude, ChatGPT, Gemini, etc.)**
>
> This guide provides comprehensive instructions for AI assistants helping with the InfluxData documentation repository. It focuses on content creation, writing workflows, and style guidelines.
>
> **Shared project guidelines for all AI assistants**
>
> **Other instruction resources**:
> - [.github/copilot-instructions.md](.github/copilot-instructions.md) - For GitHub Copilot (focused on coding and automation)
> - [CLAUDE.md](CLAUDE.md) - For Claude with MCP (minimal pointer)
> - [.github/copilot-instructions.md](.github/copilot-instructions.md) - GitHub Copilot (CLI tools, workflows, repo structure)
> - [CLAUDE.md](CLAUDE.md) - Claude with MCP (pointer file)
> - [.claude/](.claude/) - Claude MCP configuration (commands, agents, skills)
> - [.github/instructions/](.github/instructions/) - File pattern-specific instructions

## Project Overview
## Commands

This repository powers [docs.influxdata.com](https://docs.influxdata.com), a Hugo-based static documentation site covering InfluxDB 3, InfluxDB v2/v1, Telegraf, and related products.

**Key Characteristics:**
- **Scale**: 5,359+ pages
- **Build time**: ~75 seconds (NEVER cancel Hugo builds)
- **Tech stack**: Hugo, Node.js, Docker, Vale, Pytest, Cypress
- **Test time**: 15-45 minutes for full code block tests

## Quick Commands

| Task | Command | Time |
|------|---------|------|
| Install dependencies | `CYPRESS_INSTALL_BINARY=0 yarn install` | ~4s |
| Build site | `npx hugo --quiet` | ~75s |
| Dev server | `npx hugo server` | ~92s |
| Test code blocks | `yarn test:codeblocks:all` | 15-45m |

| Task | Command | Notes |
|------|---------|-------|
| Install | `CYPRESS_INSTALL_BINARY=0 yarn install` | ~4s |
| Build | `npx hugo --quiet` | ~75s — **NEVER CANCEL** |
| Dev server | `npx hugo server` | ~92s, port 1313 |
| Test code blocks | `yarn test:codeblocks:all` | 15-45m — **NEVER CANCEL** |
| Lint | `yarn lint` | ~1m |

## Repository Structure

@@ -43,7 +31,7 @@ docs-v2/

│   └── example.md       # Shortcode testing playground
├── layouts/             # Hugo templates and shortcodes
├── assets/              # JS, CSS, TypeScript
├── api-docs/            # OpenAPI specifications
├── api-docs/            # InfluxDB OpenAPI specifications, API reference documentation generation scripts
├── data/                # YAML/JSON data files
├── public/              # Build output (gitignored, ~529MB)
└── .github/

@@ -52,16 +40,16 @@ docs-v2/

**Content Paths**: See [copilot-instructions.md](.github/copilot-instructions.md#content-organization)

## Documentation MCP Server

A hosted MCP server provides semantic search over all InfluxDB documentation.
Use it to verify technical accuracy, check API syntax, and find related docs.

See the [InfluxDB documentation MCP server guide](https://docs.influxdata.com/influxdb3/core/admin/mcp-server/) for setup instructions.

## Common Workflows

### Editing a page in your browser

1. Navigate to the desired page on [docs.influxdata.com](https://docs.influxdata.com)
2. Click the "Edit this page" link at the bottom
3. Make changes in the GitHub web editor
4. Commit changes via a pull request

### Creating/Editing Content Manually
### Creating/Editing Content

**Frontmatter** (page metadata):
```yaml

@@ -107,134 +95,83 @@ yarn test:links content/influxdb3/core/**/*.md

**📖 Complete Reference**: [DOCS-TESTING.md](DOCS-TESTING.md)

### Committing Changes

**Commit Message Format**:
```
type(scope): description

Examples:
- fix(enterprise): correct Docker environment variable
- feat(influxdb3): add new plugin documentation
- docs(core): update configuration examples
```

**Types**: `fix`, `feat`, `style`, `refactor`, `test`, `chore`

**Scopes**: `enterprise`, `influxdb3`, `core`, `cloud`, `telegraf`, etc.

**Pre-commit hooks** run automatically (Vale, Prettier, tests). Skip with:
```bash
git commit -m "message" --no-verify
```

**📖 Complete Reference**: [DOCS-CONTRIBUTING.md](DOCS-CONTRIBUTING.md#commit-guidelines)

## Constraints

- **NEVER cancel** Hugo builds (~75s) or test runs (15-45m) — the site has 5,359+ pages
- Set timeouts: Hugo 180s+, tests 30m+
- Use `python` not `py` for code block language identifiers (pytest won't collect `py` blocks)
- Shared content files (`content/shared/`) have no frontmatter — the consuming page provides it
- Product names and versions come from `data/products.yml` (single source of truth)
- Commit format: `type(scope): description` — see [DOCS-CONTRIBUTING.md](DOCS-CONTRIBUTING.md#commit-guidelines)
- Network-restricted environments: Cypress (`CYPRESS_INSTALL_BINARY=0`), Docker builds, and Alpine packages may fail

## Style Rules

Follows [Google Developer Documentation Style Guide](https://developers.google.com/style) with these project-specific additions:

- **Semantic line feeds** — one sentence per line (better diffs)
- **No h1 in content** — `title` frontmatter auto-generates h1
- Active voice, present tense, second person
- Long options in CLI examples (`--output` not `-o`)
- Code blocks within 80 characters

## Content Structure
## Key Patterns

**Required frontmatter**: `title`, `description`, `menu`, `weight`
— see [DOCS-FRONTMATTER.md](DOCS-FRONTMATTER.md)
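
A minimal frontmatter sketch consistent with the required fields listed above (the title, menu key, parent, and weight values are illustrative examples, not taken from this diff):

```yaml
title: Manage databases
description: Create, list, and delete databases.
menu:
  influxdb3_core:
    parent: Administer InfluxDB
weight: 101
```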
|
||||
### Content Organization
|
||||
**Shared content**: `source: /shared/path/to/content.md`
|
||||
— shared files use `{{% show-in %}}` / `{{% hide-in %}}` for product-specific content
|
||||
|
||||
- **Product versions**: Managed in `/data/products.yml`
|
||||
- **Semantic line feeds**: One sentence per line for better diffs
|
||||
- **Heading hierarchy**: Use h2-h6 only (h1 auto-generated from frontmatter)
|
||||
- **Image naming**: `project/version-context-description.png`
|
||||
**Shortcodes**: Callouts use `> [!Note]` / `> [!Warning]` syntax
|
||||
— see [DOCS-SHORTCODES.md](DOCS-SHORTCODES.md) and [content/example.md](content/example.md)
|
||||
|
||||
### Code Examples
|
||||
## Product Content Paths
|
||||
|
||||
**Testable code blocks** (pytest):
|
||||
```python
|
||||
print("Hello, world!")
|
||||
```
|
||||
Canonical paths from `data/products.yml`:
|
||||
|
||||
<!--pytest-codeblocks:expected-output-->
|
||||
| Product | Content Path |
|
||||
|---------|-------------|
|
||||
| InfluxDB 3 Core | `content/influxdb3/core/` |
|
||||
| InfluxDB 3 Enterprise | `content/influxdb3/enterprise/` |
|
||||
| InfluxDB 3 Explorer | `content/influxdb3/explorer/` |
|
||||
| InfluxDB Cloud Serverless | `content/influxdb3/cloud-serverless/` |
|
||||
| InfluxDB Cloud Dedicated | `content/influxdb3/cloud-dedicated/` |
|
||||
| InfluxDB Clustered | `content/influxdb3/clustered/` |
|
||||
| InfluxDB OSS v2 | `content/influxdb/v2/` |
|
||||
| InfluxDB OSS v1 | `content/influxdb/v1/` |
|
||||
| InfluxDB Cloud (TSM) | `content/influxdb/cloud/` |
|
||||
| InfluxDB Enterprise v1 | `content/enterprise_influxdb/` |
|
||||
| Telegraf | `content/telegraf/` |
|
||||
| Chronograf | `content/chronograf/` |
|
||||
| Kapacitor | `content/kapacitor/` |
|
||||
| Flux | `content/flux/` |
|
||||
| Shared content | `content/shared/` |
|
||||
|
||||
```
|
||||
Hello, world!
|
||||
```
|
||||
## Doc Review Pipeline
|
||||
|
||||
**Language identifiers**: Use `python` not `py`, `bash` not `sh` (for pytest collection)
|
||||
Automated PR review for documentation changes.
|
||||
See [.github/LABEL_GUIDE.md](.github/LABEL_GUIDE.md) for the label taxonomy.
|
||||
|
||||
### API Documentation
|
||||
| Resource | Path |
|
||||
|----------|------|
|
||||
| Label guide | [.github/LABEL_GUIDE.md](.github/LABEL_GUIDE.md) |
|
||||
| Triage agent | [.claude/agents/doc-triage-agent.md](.claude/agents/doc-triage-agent.md) |
|
||||
| Content review instructions | [.github/instructions/content-review.instructions.md](.github/instructions/content-review.instructions.md) |
|
||||
| Review agent (local) | [.claude/agents/doc-review-agent.md](.claude/agents/doc-review-agent.md) |
|
||||
| Auto-label workflow | [.github/workflows/auto-label.yml](.github/workflows/auto-label.yml) |
|
||||
| Doc review workflow | [.github/workflows/doc-review.yml](.github/workflows/doc-review.yml) |
|
||||
|
||||
- **Location**: `/api-docs/` directory
|
||||
- **Format**: OpenAPI 3.0 YAML
|
||||
- **Generation**: Uses Redoc + custom processing
|
||||
- **📖 Workflow**: [api-docs/README.md](api-docs/README.md)
|
||||
|
||||
### JavaScript/TypeScript
|
||||
|
||||
- **Entry point**: `assets/js/main.js`
|
||||
- **Pattern**: Component-based with `data-component` attributes
|
||||
- **Debugging**: Source maps or debug helpers available
|
||||
- **📖 Details**: [DOCS-CONTRIBUTING.md](DOCS-CONTRIBUTING.md#javascript-in-the-documentation-ui)
|
||||
|
||||
## Important Constraints
|
||||
|
||||
### Performance
|
||||
- **NEVER cancel Hugo builds** - they take ~75s normally
|
||||
- **NEVER cancel test runs** - code block tests take 15-45 minutes
|
||||
- **Set timeouts**: Hugo (180s+), tests (30+ minutes)
|
||||
|
||||
### Style Guidelines
|
||||
- Use Google Developer Documentation style
|
||||
- Active voice, present tense, second person for instructions
|
||||
- No emojis unless explicitly requested
|
||||
- Use long options in CLI examples (`--option` vs `-o`)
|
||||
- Format code blocks within 80 characters
|
||||
|
||||
### Network Restrictions
|
||||
Some operations may fail in restricted environments:
|
||||
- Docker builds requiring external repos
|
||||
- `docker compose up local-dev` (Alpine packages)
|
||||
- Cypress installation (use `CYPRESS_INSTALL_BINARY=0`)
|
||||
|
||||
## Documentation References
|
||||
## Reference
|
||||
|
||||
| Document | Purpose |
|
||||
|----------|---------|
|
||||
| [DOCS-CONTRIBUTING.md](DOCS-CONTRIBUTING.md) | Contribution workflow, style guidelines |
|
||||
| [DOCS-TESTING.md](DOCS-TESTING.md) | Testing procedures (code blocks, links, linting) |
|
||||
| [DOCS-CONTRIBUTING.md](DOCS-CONTRIBUTING.md) | Style guidelines, commit format, contribution workflow |
|
||||
| [DOCS-TESTING.md](DOCS-TESTING.md) | Code block testing, link validation, Vale linting |
|
||||
| [DOCS-SHORTCODES.md](DOCS-SHORTCODES.md) | Complete shortcode reference |
|
||||
| [DOCS-FRONTMATTER.md](DOCS-FRONTMATTER.md) | Complete frontmatter field reference |
|
||||
| [.github/copilot-instructions.md](.github/copilot-instructions.md) | Primary AI assistant instructions |
|
||||
| [api-docs/README.md](api-docs/README.md) | API documentation workflow |
|
||||
| [content/example.md](content/example.md) | Live shortcode examples for testing |
|
||||
|
||||
## Specialized Topics
|
||||
|
||||
### Working with Specific Products
|
||||
|
||||
| Product | Content Path | Special Notes |
|
||||
|---------|-------------|---------------|
|
||||
| InfluxDB 3 Core | `/content/influxdb3/core/` | Latest architecture |
|
||||
| InfluxDB 3 Enterprise | `/content/influxdb3/enterprise/` | Core + licensed features, clustered |
|
||||
| InfluxDB Cloud Dedicated | `/content/influxdb3/cloud-dedicated/`, `/content/influxdb3/cloud-serverless/` | Managed and distributed |
|
||||
| InfluxDB Clustered | `/content/influxdb3/clustered/` | Self-managed and distributed |
|
||||
| InfluxDB Cloud | `/content/influxdb/cloud/` | Legacy but active |
|
||||
| InfluxDB v2 | `/content/influxdb/v2/` | Legacy but active |
|
||||
| InfluxDB Enterprise v1 | `/content/enterprise_influxdb/v1/` | Legacy but active enterprise, clustered |
|
||||
|
||||
### Advanced Tasks

- **Vale configuration**: `.ci/vale/styles/` for custom rules
- **Link checking**: Uses custom `link-checker` binary
- **Docker testing**: `compose.yaml` defines test services
- **Lefthook**: Git hooks configuration in `lefthook.yml`
||||
## Troubleshooting

| Issue | Solution |
|-------|----------|
| Pytest collected 0 items | Use `python` not `py` for code block language |
| Hugo build errors | Check `/config/_default/` configuration |
| Link validation slow | Test specific files: `yarn test:links content/file.md` |
| Vale errors | Check `.ci/vale/styles/config/vocabularies` |
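The first row above is easy to misread: the code block test runner only collects fenced blocks whose language is spelled out as `python`, not the `py` shorthand. A minimal sketch of a collectable block (the block content is illustrative):

````markdown
```python
print("This block is collected by the code block tests")
```
````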
||||
## Critical Reminders

1. **Be a critical thinking partner** - Challenge assumptions, identify issues
2. **Test before committing** - Run relevant tests locally
3. **Reference, don't duplicate** - Link to detailed docs instead of copying
4. **Respect build times** - Don't cancel long-running operations
5. **Follow conventions** - Use established patterns for consistency
| [content/example.md](content/example.md) | Live shortcode examples |
| [.github/copilot-instructions.md](.github/copilot-instructions.md) | CLI tools, repo structure, workflows |
| [.github/LABEL_GUIDE.md](.github/LABEL_GUIDE.md) | Label taxonomy and review pipeline |
CLAUDE.md

@@ -6,12 +6,23 @@
>
> **Full instruction resources**:
> - [.github/copilot-instructions.md](.github/copilot-instructions.md) - For GitHub Copilot (technical setup, automation)
> - [AGENTS.md](AGENTS.md) - Shared project guidelines (style, constraints, content structure)
> - [.github/LABEL_GUIDE.md](.github/LABEL_GUIDE.md) - Label taxonomy and pipeline usage
> - [.claude/](.claude/) - Claude MCP configuration directory with:
>   - Custom commands in `.claude/commands/`
>   - Specialized agents in `.claude/agents/`
>   - Custom skills in `.claude/skills/`

## Documentation MCP server

This repo includes [`.mcp.json`](.mcp.json) with a hosted InfluxDB documentation search server.
Use it to verify technical accuracy, check API syntax, and find related docs.

- **`influxdb-docs`** — API key auth. Set `INFLUXDATA_DOCS_KAPA_API_KEY` env var before launching Claude Code.
- **`influxdb-docs-oauth`** — OAuth fallback. No setup needed.

See [content-editing skill](.claude/skills/content-editing/SKILL.md#part-4-fact-checking-with-the-documentation-mcp-server) for usage details.

## Purpose and scope

Claude should help document InfluxData products by creating clear, accurate technical content with proper code examples, frontmatter, and formatting.
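A sketch of the API-key setup named above. The key value is a placeholder, not a real credential, and the launch command is shown only as a comment because the exact CLI invocation is not specified here:

```shell
# Placeholder value; substitute your real Kapa API key.
export INFLUXDATA_DOCS_KAPA_API_KEY="YOUR_KEY_HERE"
# Launch Claude Code from this same shell so it inherits the variable,
# for example:
#   claude
echo "key set: ${INFLUXDATA_DOCS_KAPA_API_KEY:+yes}"
```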
@@ -70,7 +70,7 @@ function generateHtml {
  local specbundle=redoc-static_index.html
  # Define the temporary file for the Hugo template and Redoc HTML.
  local tmpfile="${productVersion}-${api}_index.tmp"

  echo "Bundling $specPath"

  # Use npx to install and run the specified version of redoc-cli.

@@ -83,9 +83,9 @@ function generateHtml {
    --title="$title" \
    --options.sortPropsAlphabetically \
    --options.menuToggle \
    --options.hideDownloadButton \
    --options.hideHostname \
    --options.noAutoAuth \
    --output=$specbundle \
    --templateOptions.description="$shortDescription" \
    --templateOptions.product="$productVersion" \
@@ -7,7 +7,7 @@ x-influxdata-product-name: InfluxDB 3 Core

apis:
  v3@3:
    root: v3/influxdb3-core-openapi.yaml
x-influxdata-docs-aliases:
  - /influxdb3/core/api/
  - /influxdb3/core/api/v1/
@@ -21,10 +21,7 @@ description: |
  - `/`: Compatibility endpoints for InfluxDB v1 workloads and clients
  - `/api/v2/write`: Compatibility endpoint for InfluxDB v2 workloads and clients

  [Download the OpenAPI specification](/openapi/influxdb3-core-openapi.yaml)
license:
  name: MIT
  url: 'https://opensource.org/licenses/MIT'
@@ -7,7 +7,7 @@ x-influxdata-product-name: InfluxDB 3 Enterprise

apis:
  v3@3:
    root: v3/influxdb3-enterprise-openapi.yaml
x-influxdata-docs-aliases:
  - /influxdb3/enterprise/api/
  - /influxdb3/enterprise/api/v1/
@@ -21,10 +21,7 @@ description: |
  - `/`: Compatibility endpoints for InfluxDB v1 workloads and clients
  - `/api/v2/write`: Compatibility endpoint for InfluxDB v2 workloads and clients

  [Download the OpenAPI specification](/openapi/influxdb3-enterprise-openapi.yaml)
license:
  name: MIT
  url: 'https://opensource.org/licenses/MIT'
@@ -7,10 +7,11 @@ function initialize() {

  var appendHTML = `
  <div class="code-controls">
    <button class="code-controls-toggle" aria-label="Code block options" aria-expanded="false"><span class='cf-icon More'></span></button>
    <ul class="code-control-options" role="menu">
      <li role="none"><button role="menuitem" class='copy-code'><span class='cf-icon Duplicate_New'></span> <span class="message">Copy</span></button></li>
      <li role="none"><button role="menuitem" class='ask-ai-code'><span class='cf-icon Chat'></span> Ask AI</button></li>
      <li role="none"><button role="menuitem" class='fullscreen-toggle'><span class='cf-icon ExpandB'></span> Fill window</button></li>
    </ul>
  </div>
  `;
@@ -27,12 +28,17 @@ function initialize() {

  // Click outside of the code-controls to close them
  $(document).click(function () {
    $('.code-controls.open').each(function () {
      $(this).removeClass('open');
      $(this).find('.code-controls-toggle').attr('aria-expanded', 'false');
    });
  });

  // Click the code controls toggle to open code controls
  $('.code-controls-toggle').click(function () {
    var $controls = $(this).parent('.code-controls');
    var isOpen = $controls.toggleClass('open').hasClass('open');
    $(this).attr('aria-expanded', String(isOpen));
  });

  // Stop event propagation for clicks inside of the code-controls div
@@ -235,6 +241,34 @@ function initialize() {
    return info;
  }

  ////////////////////////////////// ASK AI ////////////////////////////////////

  // Build a query from the code block and open Kapa via the ask-ai-open contract
  $('.ask-ai-code').click(function () {
    var codeElement = $(this)
      .closest('.code-controls')
      .prevAll('pre:has(code)')[0];
    if (!codeElement) return;

    var code = codeElement.innerText.trim();
    // Use the data-ask-ai-query attribute if the template provided one,
    // otherwise build a generic query from the code content
    var query =
      $(codeElement).attr('data-ask-ai-query') ||
      'Explain this code:\n```\n' + code.substring(0, 500) + '\n```';

    // Delegate to the global ask-ai-open handler by synthesizing a click.
    // Use native .click() instead of jQuery .trigger() so the event
    // reaches the native document.addEventListener in ask-ai-trigger.js.
    // No href — prevents scroll-to-top when the native click fires.
    var triggerEl = document.createElement('a');
    triggerEl.className = 'ask-ai-open';
    triggerEl.dataset.query = query;
    document.body.appendChild(triggerEl);
    triggerEl.click();
    triggerEl.remove();
  });

  /////////////////////////////// FULL WINDOW CODE ///////////////////////////////

  /*
@@ -117,7 +117,10 @@ function getInfluxDBUrls() {
    initializeStorageItem('urls', JSON.stringify(DEFAULT_STORAGE_URLS));
  }

  const storedUrls = JSON.parse(localStorage.getItem(urlStorageKey));
  // Backfill any new default keys missing from stored data (e.g., when new
  // products like core/enterprise are added after a user's first visit).
  return { ...DEFAULT_STORAGE_URLS, ...storedUrls };
}

// Get the current or previous URL for a specific product or a custom url
@@ -131,8 +134,8 @@ function getInfluxDBUrl(product) {
  const urlsString = localStorage.getItem(urlStorageKey);
  const urlsObj = JSON.parse(urlsString);

  // Return the URL of the specified product, falling back to the default
  return urlsObj[product] ?? DEFAULT_STORAGE_URLS[product];
}

/*
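The two hunks above share one idea: stored URL data can predate newer product keys, so reads must backfill from defaults. A minimal sketch outside the browser, with stand-in values instead of real `localStorage` data:

```javascript
// Stand-in defaults; the real values come from DEFAULT_STORAGE_URLS.
const DEFAULT_STORAGE_URLS = {
  oss: 'http://localhost:8086',
  cloud: 'https://cloud2.influxdata.com',
  core: 'http://localhost:8181', // key added after some users first visited
};

// Stored data from a first visit made before `core` existed:
const storedUrls = {
  oss: 'http://my-host:8086',
  cloud: 'https://cloud2.influxdata.com',
};

// Spread merge: defaults first, stored values second, so user overrides
// win and missing keys are backfilled.
const urls = { ...DEFAULT_STORAGE_URLS, ...storedUrls };
console.log(urls.oss);  // the user's override survives
console.log(urls.core); // the new key is backfilled from defaults

// Nullish coalescing gives the same per-product fallback on direct lookup:
const url = storedUrls['core'] ?? DEFAULT_STORAGE_URLS['core'];
console.log(url);
```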
@@ -16,10 +16,12 @@
  opacity: .5;
  transition: opacity .2s;
  border-radius: $radius;
  border: none;
  background: none;
  line-height: 0;
  cursor: pointer;

  &:hover, &:focus-visible {
    opacity: 1;
    background-color: rgba($article-text, .1);
    backdrop-filter: blur(15px);
@@ -35,21 +37,26 @@
  backdrop-filter: blur(15px);
  display: none;

  button {
    display: block;
    width: 100%;
    text-align: left;
    margin: 0;
    padding: .4rem .5rem .6rem;
    border: none;
    background: none;
    border-radius: $radius;
    color: $article-bold;
    font-size: .87rem;
    line-height: 0;
    cursor: pointer;

    &:hover, &:focus-visible {
      background-color: rgba($article-text, .07);
    }

    .cf-icon {margin-right: .35rem;}

    &.copy-code {
      .message {
        text-shadow: 0px 0px 8px rgba($article-text, 0);
@@ -69,6 +76,8 @@
      }
    }
  }

  li {margin: 0;}
}

&.open {
@@ -289,8 +289,8 @@ Run the query on any data node for each retention policy and database.
Here, we use InfluxDB's [CLI](/enterprise_influxdb/v1/tools/influx-cli/use-influx/) to execute the query:

```
ALTER RETENTION POLICY "<retention_policy_name>" ON "<database_name>" REPLICATION 3
```

A successful `ALTER RETENTION POLICY` query returns no results.
@@ -124,11 +124,11 @@ CREATE USER <username> WITH PASSWORD '<password>'

###### CLI example

```sql
CREATE USER todd WITH PASSWORD 'influxdb41yf3'
CREATE USER alice WITH PASSWORD 'wonder\'land'
CREATE USER "rachel_smith" WITH PASSWORD 'asdf1234!'
CREATE USER "monitoring-robot" WITH PASSWORD 'XXXXX'
CREATE USER "$savyadmin" WITH PASSWORD 'm3tr1cL0v3r'
```

{{% note %}}
@@ -169,13 +169,13 @@ CLI examples:

`GRANT` `READ` access to `todd` on the `NOAA_water_database` database:

```sql
GRANT READ ON "NOAA_water_database" TO "todd"
```

`GRANT` `ALL` access to `todd` on the `NOAA_water_database` database:

```sql
GRANT ALL ON "NOAA_water_database" TO "todd"
```

##### `REVOKE` `READ`, `WRITE`, or `ALL` database privileges from an existing user
@@ -189,13 +189,13 @@ CLI examples:

`REVOKE` `ALL` privileges from `todd` on the `NOAA_water_database` database:

```sql
REVOKE ALL ON "NOAA_water_database" FROM "todd"
```

`REVOKE` `WRITE` privileges from `todd` on the `NOAA_water_database` database:

```sql
REVOKE WRITE ON "NOAA_water_database" FROM "todd"
```

{{% note %}}
@@ -230,7 +230,7 @@ SET PASSWORD FOR <username> = '<password>'

CLI example:

```sql
SET PASSWORD FOR "todd" = 'password4todd'
```

{{% note %}}
@@ -250,6 +250,6 @@ DROP USER <username>

CLI example:

```sql
DROP USER "todd"
```
@@ -28,9 +28,9 @@ For example, simple addition:

Assign an expression to a variable using the assignment operator, `=`.

```js
s = "this is a string"
i = 1 // an integer
f = 2.0 // a floating point number
```

Type the name of a variable to print its value:
@@ -48,7 +48,7 @@ this is a string

Flux also supports records. Each value in a record can be a different data type.

```js
o = {name:"Jim", age: 42, "favorite color": "red"}
```

Use **dot notation** to access the properties of a record:
@@ -70,7 +70,7 @@ the CQ has no `FOR` clause.

#### 1. Create the database

```sql
CREATE DATABASE "food_data"
```

#### 2. Create a two-hour `DEFAULT` retention policy
@@ -85,7 +85,7 @@ Use the
statement to create a `DEFAULT` RP:

```sql
CREATE RETENTION POLICY "two_hours" ON "food_data" DURATION 2h REPLICATION 1 DEFAULT
```

That query creates an RP called `two_hours` that exists in the database
@@ -116,7 +116,7 @@ Use the
statement to create a non-`DEFAULT` retention policy:

```sql
CREATE RETENTION POLICY "a_year" ON "food_data" DURATION 52w REPLICATION 1
```

That query creates a retention policy (RP) called `a_year` that exists in the database
@@ -839,8 +839,7 @@ DROP CONTINUOUS QUERY <cq_name> ON <database_name>

Drop the `idle_hands` CQ from the `telegraf` database:

```sql
DROP CONTINUOUS QUERY "idle_hands" ON "telegraf"
```

### Altering continuous queries
@@ -380,8 +380,7 @@ The following query returns no data because it specifies a single tag key (`loca
the `SELECT` clause:

```sql
SELECT "location" FROM "h2o_feet"
```

To return any data associated with the `location` tag key, the query's `SELECT`
@@ -597,7 +596,7 @@ separating logic with parentheses.

#### Select data that have specific timestamps

```sql
SELECT * FROM "h2o_feet" WHERE time > now() - 7d
```

The query returns data from the `h2o_feet` measurement that have [timestamps](/enterprise_influxdb/v1/concepts/glossary/#timestamp)
@@ -1592,8 +1591,8 @@ the query's time range.

Note that `fill(800)` has no effect on the query results.

```sql
SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location" = 'coyote_creek' AND time >= '2015-09-18T22:00:00Z' AND time <= '2015-09-18T22:18:00Z' GROUP BY time(12m) fill(800)
```

##### Queries with `fill(previous)` when the previous result falls outside the query's time range
@@ -2639,7 +2638,7 @@ The whitespace between `-` or `+` and the [duration literal](/enterprise_influxd

#### Specify a time range with relative time

```sql
SELECT "water_level" FROM "h2o_feet" WHERE time > now() - 1h
```

The query returns data with timestamps that occur within the past hour.
@@ -2686,7 +2685,7 @@ a `GROUP BY time()` clause must provide an alternative upper bound in the

Use the [CLI](/enterprise_influxdb/v1/tools/influx-cli/use-influx/) to write a point to the `NOAA_water_database` that occurs after `now()`:

```sql
INSERT h2o_feet,location=santa_monica water_level=3.1 1587074400000000000
```

Run a `GROUP BY time()` query that covers data with timestamps between
@@ -2722,8 +2721,8 @@ the lower bound to `now()` such that the query's time range is between
`now()` and `now()`:

```sql
SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location"='santa_monica' AND time >= now() GROUP BY time(12m) fill(none)
```

### Configuring the returned timestamps
@@ -2831,8 +2830,8 @@ includes an `m` and `water_level` is greater than three.

#### Use a regular expression to specify a tag with no value in the WHERE clause

```sql
SELECT * FROM "h2o_feet" WHERE "location" !~ /./
```

The query selects all data from the `h2o_feet` measurement where the `location`
@@ -2989,8 +2988,8 @@ The query returns the integer form of `water_level`'s float [field values](/ente

#### Cast float field values to strings (this functionality is not supported)

```sql
SELECT "water_level"::string FROM "h2o_feet" LIMIT 4
```

The query returns no data as casting a float field value to a string is not
@@ -87,8 +87,8 @@ If you attempt to create a database that already exists, InfluxDB does nothing a

##### Create a database

```
CREATE DATABASE "NOAA_water_database"
```

The query creates a database called `NOAA_water_database`.
@@ -97,8 +97,8 @@ The query creates a database called `NOAA_water_database`.

##### Create a database with a specific retention policy

```
CREATE DATABASE "NOAA_water_database" WITH DURATION 3d REPLICATION 1 SHARD DURATION 1h NAME "liquid"
```

The query creates a database called `NOAA_water_database`.
@@ -114,8 +114,8 @@ DROP DATABASE <database_name>

Drop the database NOAA_water_database:

```sql
DROP DATABASE "NOAA_water_database"
```

A successful `DROP DATABASE` query returns an empty result.
@@ -135,19 +135,19 @@ DROP SERIES FROM <measurement_name[,measurement_name]> WHERE <tag_key>='<tag_val

Drop all series from a single measurement:

```sql
DROP SERIES FROM "h2o_feet"
```

Drop series with a specific tag pair from a single measurement:

```sql
DROP SERIES FROM "h2o_feet" WHERE "location" = 'santa_monica'
```

Drop all points in the series that have a specific tag pair from all measurements in the database:

```sql
DROP SERIES WHERE "location" = 'santa_monica'
```

A successful `DROP SERIES` query returns an empty result.
@@ -168,25 +168,25 @@ DELETE FROM <measurement_name> WHERE [<tag_key>='<tag_value>'] | [<time interval

Delete all data associated with the measurement `h2o_feet`:

```sql
DELETE FROM "h2o_feet"
```

Delete all data associated with the measurement `h2o_quality` and where the tag `randtag` equals `3`:

```sql
DELETE FROM "h2o_quality" WHERE "randtag" = '3'
```

Delete all data in the database that occur before January 01, 2020:

```sql
DELETE WHERE time < '2020-01-01'
```

Delete all data associated with the measurement `h2o_feet` in retention policy `one_day`:

```sql
DELETE FROM "one_day"."h2o_feet"
```

A successful `DELETE` query returns an empty result.
@@ -216,7 +216,7 @@ DROP MEASUREMENT <measurement_name>

Delete the measurement `h2o_feet`:

```sql
DROP MEASUREMENT "h2o_feet"
```

> **Note:** `DROP MEASUREMENT` drops all data and series in the measurement.
@@ -238,9 +238,9 @@ DROP SHARD <shard_id_number>
```

Delete the shard with the id `1`:

```sql
DROP SHARD 1
```

A successful `DROP SHARD` query returns an empty result.
@@ -345,9 +345,9 @@ This setting is optional.

##### Create a retention policy

```sql
CREATE RETENTION POLICY "one_day_only" ON "NOAA_water_database" DURATION 1d REPLICATION 1
```

The query creates a retention policy called `one_day_only` for the database
`NOAA_water_database` with a one day duration and a replication factor of one.
@@ -355,8 +355,8 @@ The query creates a retention policy called `one_day_only` for the database

##### Create a DEFAULT retention policy

```sql
CREATE RETENTION POLICY "one_day_only" ON "NOAA_water_database" DURATION 23h60m REPLICATION 1 DEFAULT
```

The query creates the same retention policy as the one in the example above, but
@@ -381,14 +381,14 @@ ALTER RETENTION POLICY <retention_policy_name> ON <database_name> [DURATION <dur

First, create the retention policy `what_is_time` with a `DURATION` of two days:

```sql
CREATE RETENTION POLICY "what_is_time" ON "NOAA_water_database" DURATION 2d REPLICATION 1
```

Modify `what_is_time` to have a three week `DURATION`, a two hour shard group duration, and make it the `DEFAULT` retention policy for `NOAA_water_database`.

```sql
ALTER RETENTION POLICY "what_is_time" ON "NOAA_water_database" DURATION 3w SHARD DURATION 2h DEFAULT
```

In the last example, `what_is_time` retains its original replication factor of 1.
@@ -407,9 +407,9 @@ DROP RETENTION POLICY <retention_policy_name> ON <database_name>
```

Delete the retention policy `what_is_time` in the `NOAA_water_database` database:

```sql
DROP RETENTION POLICY "what_is_time" ON "NOAA_water_database"
```

A successful `DROP RETENTION POLICY` query returns an empty result.
@@ -50,9 +50,9 @@ digits, or underscores and do not begin with a digit.

Throughout the query language exploration, we'll use the database name `NOAA_water_database`:

```sql
CREATE DATABASE NOAA_water_database
exit
```

### Download and write the data to InfluxDB
@@ -636,7 +636,7 @@ Executes the specified SELECT statement and returns data on the query performanc

For example, executing the following statement:

```sql
explain analyze select mean(usage_steal) from cpu where time >= '2018-02-22T00:00:00Z' and time < '2018-02-22T12:00:00Z'
```

May produce an output similar to the following:
@@ -407,8 +407,8 @@ Use `insert into <retention policy> <line protocol>` to write data to a specific

Write data to a single field in the measurement `treasures` with the tag `captain_id = pirate_king`.
`influx` automatically writes the point to the database's `DEFAULT` retention policy.

```
INSERT treasures,captain_id=pirate_king value=2
```

Write the same point to the already-existing retention policy `oneday`:
@@ -100,7 +100,7 @@ In Query 1, the field key `duration` is an InfluxQL Keyword.

Double quote `duration` to avoid the error:

```sql
SELECT "duration" FROM runs
```

*Query 2:*
@@ -114,7 +114,7 @@ In Query 2, the retention policy name `limit` is an InfluxQL Keyword.

Double quote `limit` to avoid the error:

```sql
CREATE RETENTION POLICY "limit" ON telegraf DURATION 1d REPLICATION 1
```

While using double quotes is an acceptable workaround, we recommend that you avoid using InfluxQL keywords as identifiers for simplicity's sake.
@@ -141,7 +141,7 @@ The `CREATE USER` statement requires single quotation marks around the password
string:

```sql
CREATE USER penelope WITH PASSWORD 'timeseries4dayz'
```

Note that you should not include the single quotes when authenticating requests.
@@ -257,7 +257,7 @@ Replace the timestamp with a UNIX timestamp to avoid the error and successfully
write the point to InfluxDB:

```sql
INSERT pineapple,fresh=true value=1 1439938800000000000
```

### InfluxDB line protocol syntax
@@ -283,7 +283,7 @@ InfluxDB assumes that the `value=9` field is the timestamp and returns an error.

Use a comma instead of a space between the measurement and tag to avoid the error:

```sql
INSERT hens,location=2 value=9
```

*Write 2*
@@ -300,7 +300,7 @@ InfluxDB assumes that the `happy=3` field is the timestamp and returns an error.

Use a comma instead of a space between the two fields to avoid the error:

```sql
INSERT cows,name=daisy milk_prod=3,happy=3
```

**Resources:**
@@ -469,7 +469,7 @@ SELECT MEAN("dogs" - "cats") from "pet_daycare"

Instead, use a subquery to get the same result:

```sql
SELECT MEAN("difference") FROM (SELECT "dogs" - "cats" AS "difference" FROM "pet_daycare")
```

See the
@@ -753,10 +753,10 @@ In the following example, the first query covers data with timestamps between
`2015-09-18T21:30:00Z` and `now()`.
The second query covers data with timestamps between `2015-09-18T21:30:00Z` and 180 weeks from `now()`.

```
SELECT MEAN("boards") FROM "hillvalley" WHERE time >= '2015-09-18T21:30:00Z' GROUP BY time(12m) fill(none)

SELECT MEAN("boards") FROM "hillvalley" WHERE time >= '2015-09-18T21:30:00Z' AND time <= now() + 180w GROUP BY time(12m) fill(none)
```

Note that the `WHERE` clause must provide an alternative **upper** bound to
@@ -765,8 +765,8 @@ the lower bound to `now()` such that the query's time range is between
`now()` and `now()`:

```sql
SELECT MEAN("boards") FROM "hillvalley" WHERE time >= now() GROUP BY time(12m) fill(none)
```

For more on time syntax in queries, see [Data Exploration](/enterprise_influxdb/v1/query_language/explore-data/#time-syntax).
@@ -856,8 +856,8 @@ time count
 We [create](/enterprise_influxdb/v1/query_language/manage-database/#create-retention-policies-with-create-retention-policy) a new `DEFAULT` RP (`two_hour`) and perform the same query:
 
 ```sql
-> SELECT count(flounders) FROM fleeting
->
+SELECT count(flounders) FROM fleeting
+
 ```
 
 To query the old data, we must specify the old `DEFAULT` RP by fully qualifying `fleeting`:

@@ -879,8 +879,8 @@ with time intervals.
 Example:
 
 ```sql
-> SELECT * FROM "absolutismus" WHERE time = '2016-07-31T20:07:00Z' OR time = '2016-07-31T23:07:17Z'
->
+SELECT * FROM "absolutismus" WHERE time = '2016-07-31T20:07:00Z' OR time = '2016-07-31T23:07:17Z'
+
 ```
 
 {{% warn %}} [GitHub Issue #7530](https://github.com/influxdata/influxdb/issues/7530)

@@ -75,7 +75,7 @@ To learn how field value type discrepancies can affect `SELECT *` queries, see
 #### Write the field value `-1.234456e+78` as a float to InfluxDB
 
 ```sql
-> INSERT mymeas value=-1.234456e+78
+INSERT mymeas value=-1.234456e+78
 ```
 
 InfluxDB supports field values specified in scientific notation.

@@ -83,25 +83,25 @@ InfluxDB supports field values specified in scientific notation.
 #### Write a field value `1.0` as a float to InfluxDB
 
 ```sql
-> INSERT mymeas value=1.0
+INSERT mymeas value=1.0
 ```
 
 #### Write the field value `1` as a float to InfluxDB
 
 ```sql
-> INSERT mymeas value=1
+INSERT mymeas value=1
 ```
 
 #### Write the field value `1` as an integer to InfluxDB
 
 ```sql
-> INSERT mymeas value=1i
+INSERT mymeas value=1i
 ```
 
 #### Write the field value `stringing along` as a string to InfluxDB
 
 ```sql
-> INSERT mymeas value="stringing along"
+INSERT mymeas value="stringing along"
 ```
 
 Always double quote string field values. More on quoting [below](#quoting).

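The field-value typing rules shown above (bare numbers are floats, a trailing `i` marks integers, strings are double quoted, Booleans are unquoted) can be sketched as a small formatter. This is a hypothetical helper for illustration only, not part of any InfluxDB client library:

```python
# Hypothetical illustration of InfluxDB line-protocol field-value typing;
# real client libraries handle this internally.
def format_field_value(value, as_integer=False):
    """Render a Python value as an InfluxDB line-protocol field value."""
    if isinstance(value, bool):
        return "true" if value else "false"   # Booleans are never quoted
    if isinstance(value, int) and as_integer:
        return f"{value}i"                    # trailing `i` marks an integer
    if isinstance(value, (int, float)):
        return repr(float(value))             # bare numbers are stored as floats
    # strings are double quoted; embedded double quotes are escaped
    return '"{}"'.format(str(value).replace('"', '\\"'))

print(format_field_value(1))                   # 1.0  -> stored as a float
print(format_field_value(1, as_integer=True))  # 1i   -> stored as an integer
print(format_field_value("stringing along"))   # "stringing along"
print(format_field_value(True))                # true
```

Note the asymmetry the examples rely on: `1` and `1.0` both become floats unless the integer marker is requested explicitly.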
@@ -109,14 +109,14 @@ Always double quote string field values. More on quoting [below](#quoting).
 #### Write the field value `true` as a Boolean to InfluxDB
 
 ```sql
-> INSERT mymeas value=true
+INSERT mymeas value=true
 ```
 
 Do not quote Boolean field values.
 The following statement writes `true` as a string field value to InfluxDB:
 
 ```sql
-> INSERT mymeas value="true"
+INSERT mymeas value="true"
 ```
 
 #### Attempt to write a string to a field that previously accepted floats

@@ -132,9 +132,9 @@ ERR: {"error":"field type conflict: input field \"value\" on measurement \"mymea
 If the timestamps on the float and string are not stored in the same shard:
 
 ```sql
-> INSERT mymeas value=3 1465934559000000000
-> INSERT mymeas value="stringing along" 1466625759000000000
->
+INSERT mymeas value=3 1465934559000000000
+INSERT mymeas value="stringing along" 1466625759000000000
+
 ```
 
 ## Quoting, special characters, and additional naming guidelines

@@ -231,7 +231,7 @@ You do not need to escape other special characters.
 ##### Write a point with special characters
 
 ```sql
-> INSERT "measurement\ with\ quo⚡️es\ and\ emoji",tag\ key\ with\ sp🚀ces=tag\,value\,with"commas" field_k\ey="string field value, only \" need be esc🍭ped"
+INSERT "measurement\ with\ quo⚡️es\ and\ emoji",tag\ key\ with\ sp🚀ces=tag\,value\,with"commas" field_k\ey="string field value, only \" need be esc🍭ped"
 ```
 
 The system writes a point where the measurement is `"measurement with quo⚡️es and emoji"`, the tag key is `tag key with sp🚀ces`, the

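The escaping rules that the point above exercises can be sketched as two helpers. These are hypothetical illustrations (real client libraries such as influxdb-client perform this escaping internally); tag keys and tag values escape commas, equals signs, and spaces, while string field values escape only double quotes:

```python
# Hypothetical sketch of line-protocol escaping rules; illustrative only.
def escape_tag(s):
    """Escape commas, equals signs, and spaces in tag keys and tag values."""
    for ch in (",", "=", " "):
        s = s.replace(ch, "\\" + ch)
    return s

def escape_string_field(s):
    """In string field values, only double quotes need escaping."""
    return '"' + s.replace('"', '\\"') + '"'

print(escape_tag("tag key with spaces"))            # tag\ key\ with\ spaces
print(escape_string_field('only " needs escaping')) # "only \" needs escaping"
```

Measurement names follow a similar rule but escape only commas and spaces.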
@@ -245,9 +245,9 @@ But, writing an integer to a field that previously accepted floats succeeds if
 InfluxDB stores the integer in a new shard:
 
 ```sql
-> INSERT weather,location=us-midwest temperature=82 1465839830100400200
-> INSERT weather,location=us-midwest temperature=81i 1467154750000000000
->
+INSERT weather,location=us-midwest temperature=82 1465839830100400200
+INSERT weather,location=us-midwest temperature=81i 1467154750000000000
+
 ```
 
 See

@@ -14,6 +14,50 @@ alt_links:
 ---
 
+## v1.12.3 {date="2026-01-12"}
+
+### Features
+
+- Add [`https-insecure-certificate` configuration option](/influxdb/v1/administration/config/#https-insecure-certificate)
+  to skip file permission checking for TLS certificate and private key files.
+- Add [`advanced-expiration` TLS configuration option](/influxdb/v1/administration/config/#advanced-expiration)
+  to configure how far in advance to log warnings about TLS certificate expiration.
+- Add TLS certificate reloading on `SIGHUP`.
+- Add `config` and `cq` (continuous query) diagnostics to the `/debug/vars` endpoint.
+- Improve dropped point logging.
+- Show user when displaying or logging queries.
+- Add `time_format` parameter for the HTTP API.
+- Use dynamic logging levels (`zap.AtomicLevel`).
+- Report user query bytes.
+
+### Bug fixes
+
+- Fix `FUTURE LIMIT` and `PAST LIMIT`
+  [clause order](/influxdb/v1/query_language/manage-database/#future-limit)
+  in retention policy statements.
+- Add locking in `ClearBadShardList`.
+- Stop noisy logging about phantom shards that do not belong to a node.
+- Resolve `RLock()` leakage in `Store.DeleteSeries()`.
+- Fix condition check for optimization of array cursor (tsm1).
+- Run `init.sh` `buildtsi` as `influxdb` user.
+- Reduce unnecessary purger operations and logging.
+- Sort files for adjacency testing.
+- Fix operator in host detection (systemd).
+- Use correct path in open WAL error message.
+- Handle nested low-level files in compaction.
+- Correct error logic for writing empty index files.
+- Reduce lock contention and races in purger.
+- Fix bug with authorizer leakage in `SHOW QUERIES`.
+- Rename compact throughput logging keys.
+- Fix `https-insecure-certificate` not handled properly in httpd.
+- Prevent level regression when compacting mixed-level TSM files.
+
+### Other
+
+- Update Go to 1.24.13.
+
+---
+
 ## v1.12.2 {date="2025-09-15"}
 
 ### Features

@@ -340,7 +384,7 @@ reporting an earlier error.
 
 - Use latest version of InfluxQL package.
 - Add `-lponly` flag to [`influx export`](/influxdb/v2/reference/cli/influx/export/) sub-command.
-- Add the ability to [track number of values](/platform/monitoring/influxdata-platform/tools/measurements-internal/#valueswrittenok) written via the [/debug/vars HTTP endpoint](/influxdb/v1/tools/api/#debug-vars-http-endpoint).
+- Add the ability to [track number of values](/platform/monitoring/influxdata-platform/tools/measurements-internal/#valueswrittenok) written via the [`/debug/vars` HTTP endpoint](/influxdb/v1/tools/api/#debugvars-http-endpoint).
 - Update UUID library from [github.com/satori/go.uuid](https://github.com/satori/go.uuid) to [github.com/gofrs/uuid](https://github.com/gofrs/uuid).
 
 ### Bug fixes

@@ -637,7 +681,7 @@ Support for the Flux language and queries has been added in this release. To beg
 
 - Enable Flux using the new configuration setting
   [`[http] flux-enabled = true`](/influxdb/v1/administration/config/#flux-enabled).
-- Use the new [`influx -type=flux`](/influxdb/v1/tools/shell/#type) option to enable the Flux REPL shell for creating Flux queries.
+- Use the new [`influx -type=flux`](/influxdb/v1/tools/influx-cli/) option to enable the Flux REPL shell for creating Flux queries.
- Read about Flux and the Flux language, enabling Flux, or jump into the getting started and other guides.
 
 #### Time Series Index (TSI) query performance and throughput improvements

@@ -355,12 +355,12 @@ CREATE USER <username> WITH PASSWORD '<password>'
 
 ###### CLI example
 ```js
-> CREATE USER todd WITH PASSWORD 'influxdb41yf3'
-> CREATE USER alice WITH PASSWORD 'wonder\'land'
-> CREATE USER "rachel_smith" WITH PASSWORD 'asdf1234!'
-> CREATE USER "monitoring-robot" WITH PASSWORD 'XXXXX'
-> CREATE USER "$savyadmin" WITH PASSWORD 'm3tr1cL0v3r'
->
+CREATE USER todd WITH PASSWORD 'influxdb41yf3'
+CREATE USER alice WITH PASSWORD 'wonder\'land'
+CREATE USER "rachel_smith" WITH PASSWORD 'asdf1234!'
+CREATE USER "monitoring-robot" WITH PASSWORD 'XXXXX'
+CREATE USER "$savyadmin" WITH PASSWORD 'm3tr1cL0v3r'
+
 ```
 
 > [!Important]

@@ -397,15 +397,15 @@ CLI examples:
 `GRANT` `READ` access to `todd` on the `NOAA_water_database` database:
 
 ```sql
-> GRANT READ ON "NOAA_water_database" TO "todd"
->
+GRANT READ ON "NOAA_water_database" TO "todd"
+
 ```
 
 `GRANT` `ALL` access to `todd` on the `NOAA_water_database` database:
 
 ```sql
-> GRANT ALL ON "NOAA_water_database" TO "todd"
->
+GRANT ALL ON "NOAA_water_database" TO "todd"
+
 ```
 
 ##### `REVOKE` `READ`, `WRITE`, or `ALL` database privileges from an existing user

@@ -419,15 +419,15 @@ CLI examples:
 `REVOKE` `ALL` privileges from `todd` on the `NOAA_water_database` database:
 
 ```sql
-> REVOKE ALL ON "NOAA_water_database" FROM "todd"
->
+REVOKE ALL ON "NOAA_water_database" FROM "todd"
+
 ```
 
 `REVOKE` `WRITE` privileges from `todd` on the `NOAA_water_database` database:
 
 ```sql
-> REVOKE WRITE ON "NOAA_water_database" FROM "todd"
->
+REVOKE WRITE ON "NOAA_water_database" FROM "todd"
+
 ```
 
 >**Note:** If a user with `ALL` privileges has `WRITE` privileges revoked, they are left with `READ` privileges, and vice versa.

@@ -460,8 +460,8 @@ SET PASSWORD FOR <username> = '<password>'
 CLI example:
 
 ```sql
-> SET PASSWORD FOR "todd" = 'influxdb4ever'
->
+SET PASSWORD FOR "todd" = 'influxdb4ever'
+
 ```
 
 > [!Note]

@@ -480,8 +480,8 @@ DROP USER <username>
 CLI example:
 
 ```sql
-> DROP USER "todd"
->
+DROP USER "todd"
+
 ```
 
 ## Authentication and authorization HTTP errors

@@ -933,7 +933,7 @@ effect if [`auth-enabled`](#auth-enabled) is set to `false`.
 
 User-supplied [HTTP response headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers).
 Configure this section to return
-[security headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers#Security)
+[security headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers#security)
 such as `X-Frame-Options` or `Content Security Policy` where needed.
 
 Example:

@@ -964,9 +964,16 @@ specified, the `httpd` service will try to load the private key from the
 `https-certificate` file. If a separate `https-private-key` file is specified,
 the `httpd` service will load the private key from the `https-private-key` file.
 
 **Default**: `""`  
 **Environment variable**: `INFLUXDB_HTTP_HTTPS_PRIVATE_KEY`
 
+#### https-insecure-certificate {metadata="v1.12.3+"}
+
+Skips file permission checking for `https-certificate` and `https-private-key` when `true`.
+
+**Default**: `false`  
+**Environment variable**: `INFLUXDB_HTTP_HTTPS_INSECURE_CERTIFICATE`
+
 #### shared-secret
 
 The shared secret used to validate public API requests using JWT tokens.

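Assuming the standard `[http]` section layout of `influxdb.conf`, the new option would sit alongside the existing TLS settings like this (a sketch; the file paths are placeholders, not values from the source):

```toml
[http]
  https-enabled = true
  https-certificate = "/etc/ssl/influxdb.pem"        # placeholder path
  https-private-key = "/etc/ssl/influxdb-key.pem"    # placeholder path
  # New in v1.12.3: skip file permission checks on the certificate and key files
  https-insecure-certificate = true
```

The equivalent environment-variable form is `INFLUXDB_HTTP_HTTPS_INSECURE_CERTIFICATE=true`.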
@@ -1638,5 +1645,12 @@ include: `tls1.0`, `tls1.1`, `tls1.2`, and `tls1.3`. If not specified,
 In this example, `tls1.3` specifies the maximum version as TLS 1.3, which is
 consistent with the behavior of previous InfluxDB releases.
 
 **Default**: `tls1.3`  
 **Environment variable**: `INFLUXDB_TLS_MAX_VERSION`
+
+#### advanced-expiration {metadata="v1.12.3+"}
+
+Sets how far in advance to log warnings about TLS certificate expiration.
+
+**Default**: `5d`  
+**Environment variable**: `INFLUXDB_TLS_ADVANCED_EXPIRATION`

@@ -54,9 +54,9 @@ For example, simple addition:
 Assign an expression to a variable using the assignment operator, `=`.
 
 ```js
-> s = "this is a string"
-> i = 1 // an integer
-> f = 2.0 // a floating point number
+s = "this is a string"
+i = 1 // an integer
+f = 2.0 // a floating point number
 ```
 
 Type the name of a variable to print its value:

@@ -74,7 +74,7 @@ this is a string
 Flux also supports records. Each value in a record can be a different data type.
 
 ```js
-> o = {name:"Jim", age: 42, "favorite color": "red"}
+o = {name:"Jim", age: 42, "favorite color": "red"}
 ```
 
 Use **dot notation** to access a property of a record:

@@ -72,7 +72,7 @@ the CQ has no `FOR` clause.
 #### 1. Create the database
 
 ```sql
-> CREATE DATABASE "food_data"
+CREATE DATABASE "food_data"
 ```
 
 #### 2. Create a two-hour `DEFAULT` retention policy

@@ -87,7 +87,7 @@ Use the
 statement to create a `DEFAULT` RP:
 
 ```sql
-> CREATE RETENTION POLICY "two_hours" ON "food_data" DURATION 2h REPLICATION 1 DEFAULT
+CREATE RETENTION POLICY "two_hours" ON "food_data" DURATION 2h REPLICATION 1 DEFAULT
 ```
 
 That query creates an RP called `two_hours` that exists in the database

@@ -118,7 +118,7 @@ Use the
 statement to create a non-`DEFAULT` retention policy:
 
 ```sql
-> CREATE RETENTION POLICY "a_year" ON "food_data" DURATION 52w REPLICATION 1
+CREATE RETENTION POLICY "a_year" ON "food_data" DURATION 52w REPLICATION 1
 ```
 
 That query creates a retention policy (RP) called `a_year` that exists in the database

@@ -63,8 +63,8 @@ digits, or underscores and do not begin with a digit.
 Throughout this guide, we'll use the database name `mydb`:
 
 ```sql
-> CREATE DATABASE mydb
->
+CREATE DATABASE mydb
+
 ```
 
 > **Note:** After hitting enter, a new prompt appears and nothing else is displayed.

@@ -141,8 +141,8 @@ temperature,machine=unit42,type=assembly external=25,internal=37 143406746700000
 To insert a single time series data point into InfluxDB using the CLI, enter `INSERT` followed by a point:
 
 ```sql
-> INSERT cpu,host=serverA,region=us_west value=0.64
->
+INSERT cpu,host=serverA,region=us_west value=0.64
+
 ```
 
 A point with the measurement name of `cpu` and tags `host` and `region` has now been written to the database, with the measured `value` of `0.64`.

@@ -166,8 +166,8 @@ That means your timestamp will be different.
 Let's try storing another type of data, with two fields in the same measurement:
 
 ```sql
-> INSERT temperature,machine=unit42,type=assembly external=25,internal=37
->
+INSERT temperature,machine=unit42,type=assembly external=25,internal=37
+
 ```
 
 To return all fields and tags with a query, you can use the `*` operator:

@@ -841,8 +841,8 @@ DROP CONTINUOUS QUERY <cq_name> ON <database_name>
 Drop the `idle_hands` CQ from the `telegraf` database:
 
 ```sql
-> DROP CONTINUOUS QUERY "idle_hands" ON "telegraf"`
->
+DROP CONTINUOUS QUERY "idle_hands" ON "telegraf"
+
 ```
 
 ### Altering continuous queries

@@ -382,8 +382,8 @@ The following query returns no data because it specifies a single tag key (`loca
 the `SELECT` clause:
 
 ```sql
-> SELECT "location" FROM "h2o_feet"
->
+SELECT "location" FROM "h2o_feet"
+
 ```
 
 To return any data associated with the `location` tag key, the query's `SELECT`

@@ -599,7 +599,7 @@ separating logic with parentheses.
 #### Select data that have specific timestamps
 
 ```sql
-> SELECT * FROM "h2o_feet" WHERE time > now() - 7d
+SELECT * FROM "h2o_feet" WHERE time > now() - 7d
 ```
 
 The query returns data from the `h2o_feet` measurement that have [timestamps](/influxdb/v1/concepts/glossary/#timestamp)

@@ -1594,8 +1594,8 @@ the query's time range.
 Note that `fill(800)` has no effect on the query results.
 
 ```sql
-> SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location" = 'coyote_creek' AND time >= '2015-09-18T22:00:00Z' AND time <= '2015-09-18T22:18:00Z' GROUP BY time(12m) fill(800)
->
+SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location" = 'coyote_creek' AND time >= '2015-09-18T22:00:00Z' AND time <= '2015-09-18T22:18:00Z' GROUP BY time(12m) fill(800)
+
 ```
 
 ##### Queries with `fill(previous)` when the previous result falls outside the query's time range

@@ -2646,7 +2646,7 @@ The whitespace between `-` or `+` and the [duration literal](/influxdb/v1/query_
 #### Specify a time range with relative time
 
 ```sql
-> SELECT "water_level" FROM "h2o_feet" WHERE time > now() - 1h
+SELECT "water_level" FROM "h2o_feet" WHERE time > now() - 1h
 ```
 
 The query returns data with timestamps that occur within the past hour.

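The relative-time arithmetic that `now() - 1h` performs can be mirrored with ordinary datetime math. A minimal sketch in Python, purely illustrative (the fixed `now` value is an assumption for the example):

```python
# Illustrative equivalent of the lower bound computed by `WHERE time > now() - 1h`.
from datetime import datetime, timedelta, timezone

now = datetime(2015, 9, 18, 22, 0, tzinfo=timezone.utc)  # stand-in for now()
lower_bound = now - timedelta(hours=1)                   # now() - 1h

print(lower_bound.isoformat())  # 2015-09-18T21:00:00+00:00
```

Points with timestamps greater than `lower_bound` fall inside the query's time range.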
@@ -2693,7 +2693,7 @@ a `GROUP BY time()` clause must provide an alternative upper bound in the
 Use the [CLI](/influxdb/v1/tools/shell/) to write a point to the `NOAA_water_database` that occurs after `now()`:
 
 ```sql
-> INSERT h2o_feet,location=santa_monica water_level=3.1 1587074400000000000
+INSERT h2o_feet,location=santa_monica water_level=3.1 1587074400000000000
 ```
 
 Run a `GROUP BY time()` query that covers data with timestamps between

@@ -2729,8 +2729,8 @@ the lower bound to `now()` such that the query's time range is between
 `now()` and `now()`:
 
 ```sql
-> SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location"='santa_monica' AND time >= now() GROUP BY time(12m) fill(none)
->
+SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location"='santa_monica' AND time >= now() GROUP BY time(12m) fill(none)
+
 ```
 
 ### Configuring the returned timestamps

@@ -2838,8 +2838,8 @@ includes an `m` and `water_level` is greater than three.
 #### Use a regular expression to specify a tag with no value in the WHERE clause
 
 ```sql
-> SELECT * FROM "h2o_feet" WHERE "location" !~ /./
->
+SELECT * FROM "h2o_feet" WHERE "location" !~ /./
+
 ```
 
 The query selects all data from the `h2o_feet` measurement where the `location`

@@ -2996,8 +2996,8 @@ The query returns the integer form of `water_level`'s float [field values](/infl
 #### Cast float field values to strings (this functionality is not supported)
 
 ```sql
-> SELECT "water_level"::string FROM "h2o_feet" LIMIT 4
->
+SELECT "water_level"::string FROM "h2o_feet" LIMIT 4
+
 ```
 
 The query returns no data as casting a float field value to a string is not

@@ -62,15 +62,15 @@ Creates a new database.
 #### Syntax
 
 ```sql
-CREATE DATABASE <database_name> [WITH [DURATION <duration>] [REPLICATION <n>] [SHARD DURATION <duration>] [PAST LIMIT <duration>] [FUTURE LIMIT <duration>] [NAME <retention-policy-name>]]
+CREATE DATABASE <database_name> [WITH [DURATION <duration>] [REPLICATION <n>] [SHARD DURATION <duration>] [FUTURE LIMIT <duration>] [PAST LIMIT <duration>] [NAME <retention-policy-name>]]
 ```
 
 #### Description of syntax
 
 `CREATE DATABASE` requires a database [name](/influxdb/v1/troubleshooting/frequently-asked-questions/#what-words-and-characters-should-i-avoid-when-writing-data-to-influxdb).
 
-The `WITH`, `DURATION`, `REPLICATION`, `SHARD DURATION`, `PAST LIMIT`,
-`FUTURE LIMIT`, and `NAME` clauses are optional and create a single
+The `WITH`, `DURATION`, `REPLICATION`, `SHARD DURATION`, `FUTURE LIMIT`,
+`PAST LIMIT`, and `NAME` clauses are optional and create a single
 [retention policy](/influxdb/v1/concepts/glossary/#retention-policy-rp)
 associated with the created database.
 If you do not specify one of the clauses after `WITH`, the relevant behavior

@@ -87,8 +87,8 @@ If you attempt to create a database that already exists, InfluxDB does nothing a
 ##### Create a database
 
 ```
-> CREATE DATABASE "NOAA_water_database"
->
+CREATE DATABASE "NOAA_water_database"
+
 ```
 
 The query creates a database called `NOAA_water_database`.

@@ -97,8 +97,8 @@ The query creates a database called `NOAA_water_database`.
 ##### Create a database with a specific retention policy
 
 ```
-> CREATE DATABASE "NOAA_water_database" WITH DURATION 3d REPLICATION 1 SHARD DURATION 1h NAME "liquid"
->
+CREATE DATABASE "NOAA_water_database" WITH DURATION 3d REPLICATION 1 SHARD DURATION 1h NAME "liquid"
+
 ```
 
 The query creates a database called `NOAA_water_database`.

@@ -114,8 +114,8 @@ DROP DATABASE <database_name>
 
 Drop the database NOAA_water_database:
 ```bash
-> DROP DATABASE "NOAA_water_database"
->
+DROP DATABASE "NOAA_water_database"
+
 ```
 
 A successful `DROP DATABASE` query returns an empty result.

@@ -135,19 +135,19 @@ DROP SERIES FROM <measurement_name[,measurement_name]> WHERE <tag_key>='<tag_val
 Drop all series from a single measurement:
 
 ```sql
-> DROP SERIES FROM "h2o_feet"
+DROP SERIES FROM "h2o_feet"
 ```
 
 Drop series with a specific tag pair from a single measurement:
 
 ```sql
-> DROP SERIES FROM "h2o_feet" WHERE "location" = 'santa_monica'
+DROP SERIES FROM "h2o_feet" WHERE "location" = 'santa_monica'
 ```
 
 Drop all points in the series that have a specific tag pair from all measurements in the database:
 
 ```sql
-> DROP SERIES WHERE "location" = 'santa_monica'
+DROP SERIES WHERE "location" = 'santa_monica'
 ```
 
 A successful `DROP SERIES` query returns an empty result.

@@ -168,25 +168,25 @@ DELETE FROM <measurement_name> WHERE [<tag_key>='<tag_value>'] | [<time interval
 Delete all data associated with the measurement `h2o_feet`:
 
 ```sql
-> DELETE FROM "h2o_feet"
+DELETE FROM "h2o_feet"
 ```
 
 Delete all data associated with the measurement `h2o_quality` and where the tag `randtag` equals `3`:
 
 ```sql
-> DELETE FROM "h2o_quality" WHERE "randtag" = '3'
+DELETE FROM "h2o_quality" WHERE "randtag" = '3'
 ```
 
 Delete all data in the database that occur before January 01, 2020:
 
 ```sql
-> DELETE WHERE time < '2020-01-01'
+DELETE WHERE time < '2020-01-01'
 ```
 
 Delete all data associated with the measurement `h2o_feet` in retention policy `one_day`:
 
 ```sql
-> DELETE FROM "one_day"."h2o_feet"
+DELETE FROM "one_day"."h2o_feet"
 ```
 
 A successful `DELETE` query returns an empty result.

@@ -217,7 +217,7 @@ DROP MEASUREMENT <measurement_name>
 
 Delete the measurement `h2o_feet`:
 ```sql
-> DROP MEASUREMENT "h2o_feet"
+DROP MEASUREMENT "h2o_feet"
 ```
 
 > **Note:** `DROP MEASUREMENT` drops all data and series in the measurement.

@@ -240,8 +240,8 @@ DROP SHARD <shard_id_number>
 
 Delete the shard with the id `1`:
 ```
-> DROP SHARD 1
->
+DROP SHARD 1
+
 ```
 
 A successful `DROP SHARD` query returns an empty result.

@@ -259,7 +259,7 @@ You may disable its auto-creation in the [configuration file](/influxdb/v1/admin
 #### Syntax
 
 ```sql
-CREATE RETENTION POLICY <retention_policy_name> ON <database_name> DURATION <duration> REPLICATION <n> [SHARD DURATION <duration>] [PAST LIMIT <duration>] [FUTURE LIMIT <duration>] [DEFAULT]
+CREATE RETENTION POLICY <retention_policy_name> ON <database_name> DURATION <duration> REPLICATION <n> [SHARD DURATION <duration>] [FUTURE LIMIT <duration>] [PAST LIMIT <duration>] [DEFAULT]
 ```
 
 #### Description of syntax

@@ -307,6 +307,17 @@ See
 [Shard group duration management](/influxdb/v1/concepts/schema_and_data_layout/#shard-group-duration-management)
 for recommended configurations.
 
+##### `FUTURE LIMIT` {metadata="v1.12.0+"}
+
+The `FUTURE LIMIT` clause defines a time boundary after and relative to _now_
+in which points written to the retention policy are accepted. If a point has a
+timestamp after the specified boundary, the point is rejected and the write
+request returns a partial write error.
+
+For example, if a write request tries to write data to a retention policy with a
+`FUTURE LIMIT 6h` and there are points in the request with future timestamps
+greater than 6 hours from now, those points are rejected.
+
 ##### `PAST LIMIT` {metadata="v1.12.0+"}
 
 The `PAST LIMIT` clause defines a time boundary before and relative to _now_

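The boundary check that `FUTURE LIMIT` and `PAST LIMIT` describe can be sketched in a few lines. This is a hypothetical illustration of the acceptance rule only; InfluxDB performs this validation server-side:

```python
# Illustrative sketch of the PAST LIMIT / FUTURE LIMIT acceptance rule.
from datetime import datetime, timedelta, timezone

def point_accepted(ts, now, past_limit=None, future_limit=None):
    """Return True if a point timestamp falls inside the retention policy's limits."""
    if past_limit is not None and ts < now - past_limit:
        return False   # older than PAST LIMIT -> rejected (partial write error)
    if future_limit is not None and ts > now + future_limit:
        return False   # further ahead than FUTURE LIMIT -> rejected
    return True

now = datetime(2025, 9, 18, tzinfo=timezone.utc)  # stand-in for now()
limit = timedelta(hours=6)
print(point_accepted(now + timedelta(hours=7), now, future_limit=limit))  # False
print(point_accepted(now - timedelta(hours=3), now, past_limit=limit))    # True
```

Rejected points produce a partial write error for the request; accepted points in the same request are still written.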
@@ -318,25 +329,6 @@ For example, if a write request tries to write data to a retention policy with a
 `PAST LIMIT 6h` and there are points in the request with timestamps older than
 6 hours, those points are rejected.
 
-> [!Important]
-> `PAST LIMIT` cannot be changed after it is set.
-> This will be fixed in a future release.
-
-##### `FUTURE LIMIT` {metadata="v1.12.0+"}
-
-The `FUTURE LIMIT` clause defines a time boundary after and relative to _now_
-in which points written to the retention policy are accepted. If a point has a
-timestamp after the specified boundary, the point is rejected and the write
-request returns a partial write error.
-
-For example, if a write request tries to write data to a retention policy with a
-`FUTURE LIMIT 6h` and there are points in the request with future timestamps
-greater than 6 hours from now, those points are rejected.
-
-> [!Important]
-> `FUTURE LIMIT` cannot be changed after it is set.
-> This will be fixed in a future release.
-
 ##### `DEFAULT`
 
 Sets the new retention policy as the default retention policy for the database.

@@ -347,8 +339,8 @@ This setting is optional.
 ##### Create a retention policy
 
 ```
-> CREATE RETENTION POLICY "one_day_only" ON "NOAA_water_database" DURATION 1d REPLICATION 1
->
+CREATE RETENTION POLICY "one_day_only" ON "NOAA_water_database" DURATION 1d REPLICATION 1
+
 ```
 The query creates a retention policy called `one_day_only` for the database
 `NOAA_water_database` with a one day duration and a replication factor of one.

@@ -356,8 +348,8 @@ The query creates a retention policy called `one_day_only` for the database
 ##### Create a DEFAULT retention policy
 
 ```sql
-> CREATE RETENTION POLICY "one_day_only" ON "NOAA_water_database" DURATION 23h60m REPLICATION 1 DEFAULT
->
+CREATE RETENTION POLICY "one_day_only" ON "NOAA_water_database" DURATION 23h60m REPLICATION 1 DEFAULT
+
 ```
 
 The query creates the same retention policy as the one in the example above, but

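The `23h60m` duration literal above sums its components, so it equals one day. A hypothetical parser sketch makes the arithmetic explicit (InfluxQL's actual parser also supports sub-second units, which this sketch omits):

```python
# Hypothetical parser for InfluxQL-style duration literals; illustrative only.
import re
from datetime import timedelta

UNITS = {"w": "weeks", "d": "days", "h": "hours", "m": "minutes", "s": "seconds"}

def parse_duration(literal):
    """Sum each <number><unit> component of a duration literal like `23h60m`."""
    total = timedelta()
    for value, unit in re.findall(r"(\d+)([wdhms])", literal):
        total += timedelta(**{UNITS[unit]: int(value)})
    return total

print(parse_duration("23h60m") == timedelta(days=1))  # True: 23h + 60m = 24h
print(parse_duration("52w"))                          # roughly one year
```

This is why `DURATION 23h60m` and `DURATION 1d` describe the same retention window.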
@@ -372,24 +364,27 @@ See [Create a database with CREATE DATABASE](/influxdb/v1/query_language/manage-
 
 ### Modify retention policies with ALTER RETENTION POLICY
 
-The `ALTER RETENTION POLICY` query takes the following form, where you must declare at least one of the retention policy attributes `DURATION`, `REPLICATION`, `SHARD DURATION`, or `DEFAULT`:
+The `ALTER RETENTION POLICY` query takes the following form, where you must declare at least one of the retention policy attributes `DURATION`, `REPLICATION`, `SHARD DURATION`, `FUTURE LIMIT`, `PAST LIMIT`, or `DEFAULT`:
 ```sql
-ALTER RETENTION POLICY <retention_policy_name> ON <database_name> [DURATION <duration>] [REPLICATION <n>] [SHARD DURATION <duration>] [DEFAULT]
+ALTER RETENTION POLICY <retention_policy_name> ON <database_name> [DURATION <duration>] [REPLICATION <n>] [SHARD DURATION <duration>] [FUTURE LIMIT <duration>] [PAST LIMIT <duration>] [DEFAULT]
 ```
 
 {{% warn %}} Replication factors do not serve a purpose with single node instances.
 {{% /warn %}}
 
+For information about the `FUTURE LIMIT` and `PAST LIMIT` clauses, see
+[CREATE RETENTION POLICY](#create-retention-policies-with-create-retention-policy).
+
 First, create the retention policy `what_is_time` with a `DURATION` of two days:
 ```sql
-> CREATE RETENTION POLICY "what_is_time" ON "NOAA_water_database" DURATION 2d REPLICATION 1
->
+CREATE RETENTION POLICY "what_is_time" ON "NOAA_water_database" DURATION 2d REPLICATION 1
+
 ```
 
 Modify `what_is_time` to have a three week `DURATION`, a two hour shard group duration, and make it the `DEFAULT` retention policy for `NOAA_water_database`.
 ```sql
-> ALTER RETENTION POLICY "what_is_time" ON "NOAA_water_database" DURATION 3w SHARD DURATION 2h DEFAULT
->
+ALTER RETENTION POLICY "what_is_time" ON "NOAA_water_database" DURATION 3w SHARD DURATION 2h DEFAULT
+
 ```
 In the last example, `what_is_time` retains its original replication factor of 1.

@@ -409,8 +404,8 @@ DROP RETENTION POLICY <retention_policy_name> ON <database_name>
 
 Delete the retention policy `what_is_time` in the `NOAA_water_database` database:
 ```bash
-> DROP RETENTION POLICY "what_is_time" ON "NOAA_water_database"
->
+DROP RETENTION POLICY "what_is_time" ON "NOAA_water_database"
+
 ```
 
 A successful `DROP RETENTION POLICY` query returns an empty result.

@@ -53,8 +53,8 @@ digits, or underscores and do not begin with a digit.
 Throughout the query language exploration, we'll use the database name `NOAA_water_database`:
 
 ```
-> CREATE DATABASE NOAA_water_database
-> exit
+CREATE DATABASE NOAA_water_database
+exit
 ```
 
 ### Download and write the data to InfluxDB

@ -1,6 +1,6 @@

---
title: Influx Query Language (InfluxQL) reference
description: InfluxQL is a SQL-like query language for interacting with InfluxDB and providing features specific to storing and analyzing time series data.
menu:
  influxdb_v1:
    name: InfluxQL reference
@ -8,38 +8,32 @@ menu:

    parent: InfluxQL
aliases:
  - /influxdb/v2/query_language/spec/
related:
  - /influxdb/v1/query_language/explore-data/
  - /influxdb/v1/query_language/explore-schema/
  - /influxdb/v1/query_language/manage-database/
---

## Introduction

InfluxQL is a SQL-like query language for interacting with InfluxDB
and providing features specific to storing and analyzing time series data.

Find Influx Query Language (InfluxQL) definitions and details, including:

- [Notation](#notation)
- [Query representation](#query-representation)
- [Identifiers](#identifiers)
- [Keywords](#keywords)
- [Literals](#literals)
- [Queries](#queries)
- [Statements](#statements)
- [Clauses](#clauses)
- [Expressions](#expressions)
- [Comments](#comments)
- [Other](#other)
- [Query engine internals](#query-engine-internals)

To learn more about InfluxQL, browse the following topics:

- [Explore your data with InfluxQL](/influxdb/v1/query_language/explore-data/)
- [Explore your schema with InfluxQL](/influxdb/v1/query_language/explore-schema/)
- [Database management](/influxdb/v1/query_language/manage-database/)
- [Authentication and authorization](/influxdb/v1/administration/authentication_and_authorization/)

## Notation

The syntax is specified using Extended Backus-Naur Form ("EBNF").
EBNF is the same notation used in the [Go programming language specification](https://golang.org/ref/spec).

```
Production = production_name "=" [ Expression ] "." .
@ -91,7 +85,7 @@ The rules:

- double quoted identifiers can contain any unicode character other than a new line
- double quoted identifiers can contain escaped `"` characters (i.e., `\"`)
- double quoted identifiers can contain InfluxQL [keywords](#keywords)
- unquoted identifiers must start with an upper or lowercase ASCII character or "_"
- unquoted identifiers may contain only ASCII letters, decimal digits, and "_"

@ -129,7 +123,7 @@ SUBSCRIPTIONS TAG TO USER USERS VALUES

WHERE WITH WRITE
```

If you use an InfluxQL keyword as an
[identifier](/influxdb/v1/concepts/glossary/#identifier) you will need to
double quote that identifier in every query.

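For example, `DURATION` is an InfluxQL keyword, so a field key named `duration` must be double quoted in every query (the measurement name is illustrative):

```sql
-- "duration" is a keyword used as a field key, so it is double quoted
SELECT "duration" FROM "runs"
```
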
@ -145,7 +139,7 @@ In those cases, `time` does not require double quotes in queries.

`time` cannot be a [field key](/influxdb/v1/concepts/glossary/#field-key) or
[tag key](/influxdb/v1/concepts/glossary/#tag-key);
InfluxDB rejects writes with `time` as a field key or tag key and returns an error.
For more information, see [Frequently Asked Questions](/influxdb/v1/troubleshooting/frequently-asked-questions/#time).

## Literals

@ -229,19 +223,22 @@ regex_lit = "/" { unicode_char } "/" .

`=~` matches against
`!~` doesn't match against

InfluxQL supports using regular expressions when specifying:

- [field keys](/influxdb/v1/concepts/glossary/#field-key) and [tag keys](/influxdb/v1/concepts/glossary/#tag-key) in the [`SELECT` clause](/influxdb/v1/query_language/explore-data/#the-basic-select-statement)
- [measurements](/influxdb/v1/concepts/glossary/#measurement) in the [`FROM` clause](/influxdb/v1/query_language/explore-data/#the-basic-select-statement)
- [tag values](/influxdb/v1/concepts/glossary/#tag-value) and string [field values](/influxdb/v1/concepts/glossary/#field-value) in the [`WHERE` clause](/influxdb/v1/query_language/explore-data/#the-where-clause)
- [tag keys](/influxdb/v1/concepts/glossary/#tag-key) in the [`GROUP BY` clause](/influxdb/v1/query_language/explore-data/#group-by-tags)

> [!Note]
> #### Regular expressions and non-string field values
>
> Currently, InfluxQL does not support using regular expressions to match
> non-string field values in the
> `WHERE` clause,
> [databases](/influxdb/v1/concepts/glossary/#database), and
> [retention policies](/influxdb/v1/concepts/glossary/#retention-policy-rp).

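For example, the following sketch (measurement and tag names are illustrative) uses regular expressions in both the `FROM` and `WHERE` clauses:

```sql
-- Query every measurement that starts with "h2o" where the location tag
-- value matches the regular expression
SELECT MEAN("water_level") FROM /^h2o/ WHERE "location" =~ /santa.*/
```
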
## Queries
@ -306,6 +303,8 @@ alter_retention_policy_stmt = "ALTER RETENTION POLICY" policy_name on_clause

                              retention_policy_option
                              [ retention_policy_option ]
                              [ retention_policy_option ]
                              [ retention_policy_option ]
                              [ retention_policy_option ]
                              [ retention_policy_option ] .
```

@ -318,6 +317,9 @@ ALTER RETENTION POLICY "1h.cpu" ON "mydb" DEFAULT

-- Change duration and replication factor.
-- REPLICATION (replication factor) not valid for OSS instances.
ALTER RETENTION POLICY "policy1" ON "somedb" DURATION 1h REPLICATION 4

-- Change future and past limits.
ALTER RETENTION POLICY "policy1" ON "somedb" FUTURE LIMIT 6h PAST LIMIT 6h
```

### CREATE CONTINUOUS QUERY

@ -378,12 +380,15 @@ create_database_stmt = "CREATE DATABASE" db_name

                       [ retention_policy_duration ]
                       [ retention_policy_replication ]
                       [ retention_policy_shard_group_duration ]
                       [ retention_future_limit ]
                       [ retention_past_limit ]
                       [ retention_policy_name ]
                     ] .
```

> [!Note]
> When using both `FUTURE LIMIT` and `PAST LIMIT` clauses, `FUTURE LIMIT` must appear before `PAST LIMIT`.

> [!Warning]
> Replication factors do not serve a purpose with single node instances.

@ -402,8 +407,8 @@ CREATE DATABASE "bar" WITH DURATION 1d REPLICATION 1 SHARD DURATION 30m NAME "my

CREATE DATABASE "mydb" WITH NAME "myrp"

-- Create a database called bar with a new retention policy named "myrp", and
-- specify the duration, future and past limits, and name of that retention policy
CREATE DATABASE "bar" WITH DURATION 1d FUTURE LIMIT 6h PAST LIMIT 6h NAME "myrp"
```

### CREATE RETENTION POLICY

@ -413,11 +418,14 @@ create_retention_policy_stmt = "CREATE RETENTION POLICY" policy_name on_clause

                               retention_policy_duration
                               retention_policy_replication
                               [ retention_policy_shard_group_duration ]
                               [ retention_future_limit ]
                               [ retention_past_limit ]
                               [ "DEFAULT" ] .
```

> [!Note]
> When using both `FUTURE LIMIT` and `PAST LIMIT` clauses, `FUTURE LIMIT` must appear before `PAST LIMIT`.

> [!Warning]
> Replication factors do not serve a purpose with single node instances.

@ -433,8 +441,8 @@ CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 60m REPLICATION 2 DEFA

-- Create a retention policy and specify the shard group duration.
CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 60m REPLICATION 2 SHARD DURATION 30m

-- Create a retention policy and specify future and past limits.
CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 12h FUTURE LIMIT 6h PAST LIMIT 6h
```

### CREATE SUBSCRIPTION

@ -629,12 +637,12 @@ SIZE OF BLOCKS: 931

### EXPLAIN ANALYZE

Executes the specified SELECT statement and returns data on the query performance and storage during runtime, visualized as a tree. Use this statement to analyze query performance and storage, including [execution time](#execution-time) and [planning time](#planning-time), and the [iterator type](#iterator-type) and [cursor type](#cursor-type).

For example, executing the following statement:

```sql
explain analyze select mean(usage_steal) from cpu where time >= '2018-02-22T00:00:00Z' and time < '2018-02-22T12:00:00Z'
```

May produce an output similar to the following:

@ -725,7 +733,8 @@ For more information about storage blocks, see [TSM files](/influxdb/v1/concepts

### GRANT

> [!Note]
> Users can be granted privileges on databases that do not yet exist.

```
grant_stmt = "GRANT" privilege [ on_clause ] to_clause .

@ -743,20 +752,17 @@ GRANT READ ON "mydb" TO "jdoe"

### KILL QUERY

Stop a currently-running query.

```sql
KILL QUERY <query_id>
```

```
kill_query_statement = "KILL QUERY" query_id .
```

Replace `query_id` with your query ID from [`SHOW QUERIES`](/influxdb/v1/troubleshooting/query_management/#list-currently-running-queries-with-show-queries), output as `qid`.

> ***InfluxDB Enterprise clusters:*** To kill queries on a cluster, you need to specify the query ID (qid) and the TCP host (for example, `myhost:8088`),
> available in the `SHOW QUERIES` output.
>
> ```sql
> KILL QUERY <qid> ON "<host>"
> ```

#### Examples

@ -765,11 +771,6 @@ KILL QUERY <qid> ON "<host>"

KILL QUERY 36
```

### REVOKE

```sql
@ -912,7 +913,7 @@ show_grants_stmt = "SHOW GRANTS FOR" user_name .

SHOW GRANTS FOR "jdoe"
```

### SHOW MEASUREMENT CARDINALITY

Estimates or counts exactly the cardinality of the measurement set for the current database unless a database is specified using the `ON <database>` option.

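A minimal sketch (the database name is illustrative):

```sql
-- Estimate the measurement cardinality for a specific database
SHOW MEASUREMENT CARDINALITY ON "NOAA_water_database"
```
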
@ -999,10 +1000,11 @@ Estimates or counts exactly the cardinality of the series for the current databa

[Series cardinality](/influxdb/v1/concepts/glossary/#series-cardinality) is the major factor that affects RAM requirements. For more information, see:

- [Hardware Sizing Guidelines](/influxdb/v1/guides/hardware_sizing/)
- [Don't have too many series](/influxdb/v1/concepts/schema_and_data_layout/#avoid-too-many-series)

> [!Note]
> `ON <database>`, `FROM <sources>`, `WITH KEY = <key>`, `WHERE <condition>`, `GROUP BY <dimensions>`, and `LIMIT/OFFSET` clauses are optional.
> When using these query clauses, the query falls back to an exact count.
> Filtering by `time` is not supported in the `WHERE` clause.

@ -1069,26 +1071,18 @@ id database retention_policy shard_group start_time end_time

Returns detailed statistics on available components of an InfluxDB node and available (enabled) components.

Statistics returned by `SHOW STATS` are stored in memory and reset to zero when the node is restarted,
but `SHOW STATS` is triggered every 10 seconds to populate the `_internal` database.

The `SHOW STATS` command does not list index memory usage --
use the [`SHOW STATS FOR 'indexes'`](#show-stats-for-indexes) command.

For more information on using the `SHOW STATS` command, see [Using the SHOW STATS command to monitor InfluxDB](/platform/monitoring/tools/show-stats/).

```
show_stats_stmt = "SHOW STATS [ FOR '<component>' | 'indexes' ]"
```

#### Example

```sql
@ -1098,7 +1092,6 @@ name: runtime

Alloc   Frees   HeapAlloc   HeapIdle   HeapInUse   HeapObjects   HeapReleased   HeapSys   Lookups   Mallocs   NumGC   NumGoroutine   PauseTotalNs   Sys   TotalAlloc
4136056 6684537 4136056     34586624   5816320     49412         0              40402944  110       6733949   83      44             36083006       46692600 439945704

name: graphite
tags: proto=tcp
batches_tx   bytes_rx   connections_active   connections_handled   points_rx   points_tx

@ -1106,6 +1099,17 @@ batches_tx bytes_rx connections_active connections_handled

159          3999750    0                    1                     158110      158110
```

### SHOW STATS FOR <component>

For the specified component (\<component\>), the command returns available statistics.
For the `runtime` component, the command returns an overview of memory usage by the InfluxDB system,
using the [Go runtime](https://golang.org/pkg/runtime/) package.

### SHOW STATS FOR 'indexes'

Returns an estimate of memory use of all indexes.
Index memory use is not reported with `SHOW STATS` because it is a potentially expensive operation.

### SHOW SUBSCRIPTIONS

```
@ -1118,11 +1122,12 @@ show_subscriptions_stmt = "SHOW SUBSCRIPTIONS" .

SHOW SUBSCRIPTIONS
```

### SHOW TAG KEY CARDINALITY

Estimates or counts exactly the cardinality of the tag key set on the current database unless a database is specified using the `ON <database>` option.

> [!Note]
> `ON <database>`, `FROM <sources>`, `WITH KEY = <key>`, `WHERE <condition>`, `GROUP BY <dimensions>`, and `LIMIT/OFFSET` clauses are optional.
> When using these query clauses, the query falls back to an exact count.
> Filtering by `time` is only supported when TSI (Time Series Index) is enabled and `time` is not supported in the `WHERE` clause.

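A minimal sketch (the database name is illustrative):

```sql
-- Estimate the tag key cardinality for a specific database
SHOW TAG KEY CARDINALITY ON "NOAA_water_database"
```
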
@ -1190,11 +1195,12 @@ SHOW TAG VALUES WITH KEY !~ /.*c.*/

SHOW TAG VALUES FROM "cpu" WITH KEY IN ("region", "host") WHERE "service" = 'redis'
```

### SHOW TAG VALUES CARDINALITY

Estimates or counts exactly the cardinality of tag key values for the specified tag key on the current database unless a database is specified using the `ON <database>` option.

> [!Note]
> `ON <database>`, `FROM <sources>`, `WITH KEY = <key>`, `WHERE <condition>`, `GROUP BY <dimensions>`, and `LIMIT/OFFSET` clauses are optional.
> When using these query clauses, the query falls back to an exact count.
> Filtering by `time` is only supported when TSI (Time Series Index) is enabled.

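A minimal sketch (the tag key is illustrative):

```sql
-- Estimate the cardinality of values for the "host" tag key
SHOW TAG VALUES CARDINALITY WITH KEY = "host"
```
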
@ -1274,6 +1280,15 @@ unary_expr = "(" expr ")" | var_ref | time_lit | string_lit | int_lit |

             float_lit | bool_lit | duration_lit | regex_lit .
```

## Comments

Use comments with InfluxQL statements to describe your queries.

- A single line comment begins with two hyphens (`--`) and ends where InfluxDB detects a line break.
  This comment type cannot span several lines.
- A multi-line comment begins with `/*` and ends with `*/`. This comment type can span several lines.
  Multi-line comments do not support nested multi-line comments.

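Both comment types can be sketched as follows (the queries themselves are illustrative):

```sql
-- A single line comment describing the query below
SELECT MEAN("water_level") FROM "h2o_feet"

/* A multi-line comment can describe
   a query across several lines. */
SELECT COUNT("water_level") FROM "h2o_feet"
```
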
## Other

```
@ -1321,6 +1336,8 @@ retention_policy = identifier .

retention_policy_option = retention_policy_duration |
                          retention_policy_replication |
                          retention_policy_shard_group_duration |
                          retention_future_limit |
                          retention_past_limit |
                          "DEFAULT" .

retention_policy_duration = "DURATION" duration_lit .

@ -1329,6 +1346,10 @@ retention_policy_replication = "REPLICATION" int_lit .

retention_policy_shard_group_duration = "SHARD DURATION" duration_lit .

retention_future_limit = "FUTURE LIMIT" duration_lit .

retention_past_limit = "PAST LIMIT" duration_lit .

retention_policy_name = "NAME" identifier .

series_id = int_lit .

@ -1350,15 +1371,6 @@ user_name = identifier .

var_ref = measurement .
```

## Query Engine Internals

Once you understand the language itself, it's important to know how these

@ -1458,7 +1470,7 @@ iterator.

### Built-in iterators

{{% product-name %}} provides many helper iterators for building queries:

- Merge Iterator - This iterator combines one or more iterators into a single
  new iterator of the same type. This iterator guarantees that all points

@ -427,8 +427,8 @@ Use `insert into <retention policy> <line protocol>` to write data to a specific

Write data to a single field in the measurement `treasures` with the tag `captain_id = pirate_king`.
`influx` automatically writes the point to the database's `DEFAULT` retention policy.

```
INSERT treasures,captain_id=pirate_king value=2
```

Write the same point to the already-existing retention policy `oneday`:

@ -101,7 +101,7 @@ In Query 1, the field key `duration` is an InfluxQL Keyword.

Double quote `duration` to avoid the error:

```sql
SELECT "duration" FROM runs
```

*Query 2:*

@ -115,7 +115,7 @@ In Query 2, the retention policy name `limit` is an InfluxQL Keyword.

Double quote `limit` to avoid the error:

```sql
CREATE RETENTION POLICY "limit" ON telegraf DURATION 1d REPLICATION 1
```

While using double quotes is an acceptable workaround, we recommend that you avoid using InfluxQL keywords as identifiers for simplicity's sake.

@ -142,7 +142,7 @@ The `CREATE USER` statement requires single quotation marks around the password

string:

```sql
CREATE USER penelope WITH PASSWORD 'timeseries4dayz'
```

Note that you should not include the single quotes when authenticating requests.

@ -258,7 +258,7 @@ Replace the timestamp with a UNIX timestamp to avoid the error and successfully

write the point to InfluxDB:

```sql
INSERT pineapple,fresh=true value=1 1439938800000000000
```

### InfluxDB line protocol syntax

@ -284,7 +284,7 @@ InfluxDB assumes that the `value=9` field is the timestamp and returns an error.

Use a comma instead of a space between the measurement and tag to avoid the error:

```sql
INSERT hens,location=2 value=9
```

*Write 2*

@ -301,7 +301,7 @@ InfluxDB assumes that the `happy=3` field is the timestamp and returns an error.

Use a comma instead of a space between the two fields to avoid the error:

```sql
INSERT cows,name=daisy milk_prod=3,happy=3
```

**Resources:**

@ -451,7 +451,7 @@ SELECT MEAN("dogs" - "cats") from "pet_daycare"

Instead, use a subquery to get the same result:

```sql
SELECT MEAN("difference") FROM (SELECT "dogs" - "cat" AS "difference" FROM "pet_daycare")
```

See the

@ -740,9 +740,9 @@ In the following example, the first query covers data with timestamps between

The second query covers data with timestamps between `2015-09-18T21:30:00Z` and 180 weeks from `now()`.

```sql
SELECT MEAN("boards") FROM "hillvalley" WHERE time >= '2015-09-18T21:30:00Z' GROUP BY time(12m) fill(none)

SELECT MEAN("boards") FROM "hillvalley" WHERE time >= '2015-09-18T21:30:00Z' AND time <= now() + 180w GROUP BY time(12m) fill(none)
```

Note that the `WHERE` clause must provide an alternative **upper** bound to

@ -751,8 +751,8 @@ the lower bound to `now()` such that the query's time range is between

`now()` and `now()`:

```sql
SELECT MEAN("boards") FROM "hillvalley" WHERE time >= now() GROUP BY time(12m) fill(none)
```

For more on time syntax in queries, see [Data Exploration](/influxdb/v1/query_language/explore-data/#time-syntax).

@ -843,8 +843,8 @@ time count

We [create](/influxdb/v1/query_language/manage-database/#create-retention-policies-with-create-retention-policy) a new `DEFAULT` RP (`two_hour`) and perform the same query:

```sql
SELECT count(flounders) FROM fleeting
```

To query the old data, we must specify the old `DEFAULT` RP by fully qualifying `fleeting`:

@ -866,8 +866,8 @@ with time intervals.

Example:

```sql
SELECT * FROM "absolutismus" WHERE time = '2016-07-31T20:07:00Z' OR time = '2016-07-31T23:07:17Z'
```

{{% warn %}} [GitHub Issue #7530](https://github.com/influxdata/influxdb/issues/7530)

@ -73,7 +73,7 @@ To learn how field value type discrepancies can affect `SELECT *` queries, see

#### Write the field value `-1.234456e+78` as a float to InfluxDB

```sql
INSERT mymeas value=-1.234456e+78
```

InfluxDB supports field values specified in scientific notation.

@ -81,25 +81,25 @@ InfluxDB supports field values specified in scientific notation.

#### Write a field value `1.0` as a float to InfluxDB

```sql
INSERT mymeas value=1.0
```

#### Write the field value `1` as a float to InfluxDB

```sql
INSERT mymeas value=1
```

#### Write the field value `1` as an integer to InfluxDB

```sql
INSERT mymeas value=1i
```

#### Write the field value `stringing along` as a string to InfluxDB

```sql
INSERT mymeas value="stringing along"
```

Always double quote string field values. More on quoting [below](#quoting).

@ -107,14 +107,14 @@ Always double quote string field values. More on quoting [below](#quoting).

#### Write the field value `true` as a Boolean to InfluxDB

```sql
INSERT mymeas value=true
```

Do not quote Boolean field values.
The following statement writes `true` as a string field value to InfluxDB:

```sql
INSERT mymeas value="true"
```

#### Attempt to write a string to a field that previously accepted floats

@ -130,9 +130,9 @@ ERR: {"error":"field type conflict: input field \"value\" on measurement \"mymea

If the timestamps on the float and string are not stored in the same shard:

```sql
INSERT mymeas value=3 1465934559000000000
INSERT mymeas value="stringing along" 1466625759000000000
```

## Quoting, special characters, and additional naming guidelines

@ -233,7 +233,7 @@ You do not need to escape other special characters.

##### Write a point with special characters

```sql
INSERT "measurement\ with\ quo⚡️es\ and\ emoji",tag\ key\ with\ sp🚀ces=tag\,value\,with"commas" field_k\ey="string field value, only \" need be esc🍭ped"
```

The system writes a point where the measurement is `"measurement with quo⚡️es and emoji"`, the tag key is `tag key with sp🚀ces`, the

@ -278,9 +278,9 @@ But, writing an integer to a field that previously accepted floats succeeds if

InfluxDB stores the integer in a new shard:

```sql
INSERT weather,location=us-midwest temperature=82 1465839830100400200
INSERT weather,location=us-midwest temperature=81i 1467154750000000000
```

See

@@ -14,4 +14,22 @@ Google.DateFormat = NO
 Google.Ellipses = NO
 Google.Headings = NO
 Google.WordList = NO
-Vale.Spelling = NO
+# Disable Google.Units in favor of InfluxDataDocs.Units which only checks byte
+# units (GB, TB, etc). Duration literals (30d, 24h, 1h) are valid InfluxDB syntax.
+Google.Units = NO
+Vale.Spelling = NO
+# Disable Vale.Terms - the vocabulary-based substitution rule creates too many
+# false positives from URLs, file paths, and code. The accepted terms in
+# accept.txt still work for spelling checks via InfluxDataDocs.Spelling.
+Vale.Terms = NO
+# Disable write-good.TooWordy - flags legitimate technical terms like
+# "aggregate", "expiration", "multiple", "However" that are standard in
+# database documentation.
+write-good.TooWordy = NO
+
+# Ignore URL paths like /api/v3/..., /cli/..., /influxdb3/...
+# Ignore full URLs like https://example.com/...
+# Ignore inline code in frontmatter (description fields, etc.)
+TokenIgnores = /[a-zA-Z0-9/_\-\.]+, \
+  https?://[^\s\)\]>"]+, \
+  `[^`]+`
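The three `TokenIgnores` patterns added above can be sanity-checked outside Vale. A minimal sketch, assuming Python's `re` module behaves the same as Vale's regex engine for these simple character-class patterns:

```python
import re

# The three TokenIgnores patterns from the .vale.ini change above.
patterns = [
    r"/[a-zA-Z0-9/_\-\.]+",      # URL paths like /influxdb3/core/...
    r"https?://[^\s\)\]>\"]+",   # full URLs
    r"`[^`]+`",                  # inline code spans
]

def is_ignored(token: str) -> bool:
    """Return True if any TokenIgnores pattern fully matches the token."""
    return any(re.fullmatch(p, token) for p in patterns)

print(is_ignored("/influxdb3/core/reference/config-options/"))  # True
print(is_ignored("https://example.com/docs"))                   # True
print(is_ignored("`influxdb3 serve`"))                          # True
print(is_ignored("plain prose"))                                # False
```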
@@ -151,8 +151,8 @@ If using an admin user for visualization or Chronograf administrative functions,
 
 <!--pytest.mark.skip-->
 ```bash
-> CREATE USER <username> WITH PASSWORD '<password>'
-> GRANT READ ON <database> TO "<username>"
+CREATE USER <username> WITH PASSWORD '<password>'
+GRANT READ ON <database> TO "<username>"
 ```
 
 InfluxDB {{< current-version >}} only grants admin privileges to the primary user
@@ -14,4 +14,22 @@ Google.DateFormat = NO
 Google.Ellipses = NO
 Google.Headings = NO
 Google.WordList = NO
-Vale.Spelling = NO
+# Disable Google.Units in favor of InfluxDataDocs.Units which only checks byte
+# units (GB, TB, etc). Duration literals (30d, 24h, 1h) are valid InfluxDB syntax.
+Google.Units = NO
+Vale.Spelling = NO
+# Disable Vale.Terms - the vocabulary-based substitution rule creates too many
+# false positives from URLs, file paths, and code. The accepted terms in
+# accept.txt still work for spelling checks via InfluxDataDocs.Spelling.
+Vale.Terms = NO
+# Disable write-good.TooWordy - flags legitimate technical terms like
+# "aggregate", "expiration", "multiple", "However" that are standard in
+# database documentation.
+write-good.TooWordy = NO
+
+# Ignore URL paths like /api/v3/..., /cli/..., /influxdb3/...
+# Ignore full URLs like https://example.com/...
+# Ignore inline code in frontmatter (description fields, etc.)
+TokenIgnores = /[a-zA-Z0-9/_\-\.]+, \
+  https?://[^\s\)\]>"]+, \
+  `[^`]+`
@@ -15,7 +15,7 @@ aliases:
 - /influxdb3/cloud-dedicated/admin/clusters/list/
 ---
 
-Use the Admin UI or the [`influxctl cluster list` CLI command](/influxdb3/cloud-dedicated/reference/cli/influxctl/list/)
+Use the Admin UI or the [`influxctl cluster list` CLI command](/influxdb3/cloud-dedicated/reference/cli/influxctl/cluster/list/)
 to view information about all {{< product-name omit=" Clustered" >}} clusters associated with your account ID.
 
 {{< tabs-wrapper >}}
@@ -27,7 +27,7 @@ Use visualization tools to query data stored in {{% product-name %}} with SQL.
 The following visualization tools support querying InfluxDB with SQL:
 
 - [Grafana](/influxdb3/cloud-dedicated/process-data/visualize/grafana/)
-- [Power BI](/influxdb3/cloud-dedicated/process-data/visualize/powerbi/)
+- [Power BI](/influxdb3/cloud-dedicated/visualize-data/powerbi/)
 - [Superset](/influxdb3/cloud-dedicated/process-data/visualize/superset/)
 - [Tableau](/influxdb3/cloud-dedicated/process-data/visualize/tableau/)
 
@@ -14,4 +14,22 @@ Google.DateFormat = NO
 Google.Ellipses = NO
 Google.Headings = NO
 Google.WordList = NO
-Vale.Spelling = NO
+# Disable Google.Units in favor of InfluxDataDocs.Units which only checks byte
+# units (GB, TB, etc). Duration literals (30d, 24h, 1h) are valid InfluxDB syntax.
+Google.Units = NO
+Vale.Spelling = NO
+# Disable Vale.Terms - the vocabulary-based substitution rule creates too many
+# false positives from URLs, file paths, and code. The accepted terms in
+# accept.txt still work for spelling checks via InfluxDataDocs.Spelling.
+Vale.Terms = NO
+# Disable write-good.TooWordy - flags legitimate technical terms like
+# "aggregate", "expiration", "multiple", "However" that are standard in
+# database documentation.
+write-good.TooWordy = NO
+
+# Ignore URL paths like /api/v3/..., /cli/..., /influxdb3/...
+# Ignore full URLs like https://example.com/...
+# Ignore inline code in frontmatter (description fields, etc.)
+TokenIgnores = /[a-zA-Z0-9/_\-\.]+, \
+  https?://[^\s\)\]>"]+, \
+  `[^`]+`
@@ -27,7 +27,7 @@ Use visualization tools to query data stored in {{% product-name %}}.
 The following visualization tools support querying InfluxDB with SQL:
 
 - [Grafana](/influxdb3/cloud-serverless/process-data/visualize/grafana/)
-- [Power BI](/influxdb3/cloud-serverless/process-data/visualize/powerbi/)
+- [Power BI](/influxdb3/cloud-serverless/visualize-data/powerbi/)
 - [Superset](/influxdb3/cloud-serverless/process-data/visualize/superset/)
 - [Tableau](/influxdb3/cloud-serverless/process-data/visualize/tableau/)
 
@@ -14,4 +14,22 @@ Google.DateFormat = NO
 Google.Ellipses = NO
 Google.Headings = NO
 Google.WordList = NO
-Vale.Spelling = NO
+# Disable Google.Units in favor of InfluxDataDocs.Units which only checks byte
+# units (GB, TB, etc). Duration literals (30d, 24h, 1h) are valid InfluxDB syntax.
+Google.Units = NO
+Vale.Spelling = NO
+# Disable Vale.Terms - the vocabulary-based substitution rule creates too many
+# false positives from URLs, file paths, and code. The accepted terms in
+# accept.txt still work for spelling checks via InfluxDataDocs.Spelling.
+Vale.Terms = NO
+# Disable write-good.TooWordy - flags legitimate technical terms like
+# "aggregate", "expiration", "multiple", "However" that are standard in
+# database documentation.
+write-good.TooWordy = NO
+
+# Ignore URL paths like /api/v3/..., /cli/..., /influxdb3/...
+# Ignore full URLs like https://example.com/...
+# Ignore inline code in frontmatter (description fields, etc.)
+TokenIgnores = /[a-zA-Z0-9/_\-\.]+, \
+  https?://[^\s\)\]>"]+, \
+  `[^`]+`
@@ -27,7 +27,7 @@ Use visualization tools to query data stored in {{% product-name %}} with SQL.
 The following visualization tools support querying InfluxDB with SQL:
 
 - [Grafana](/influxdb3/clustered/process-data/visualize/grafana/)
-- [Power BI](/influxdb3/clustered/process-data/visualize/powerbi/)
+- [Power BI](/influxdb3/clustered/visualize-data/powerbi/)
 - [Superset](/influxdb3/clustered/process-data/visualize/superset/)
 - [Tableau](/influxdb3/clustered/process-data/visualize/tableau/)
 
@@ -19,4 +19,22 @@ Google.DateFormat = NO
 Google.Ellipses = NO
 Google.Headings = NO
 Google.WordList = NO
-Vale.Spelling = NO
+# Disable Google.Units in favor of InfluxDataDocs.Units which only checks byte
+# units (GB, TB, etc). Duration literals (30d, 24h, 1h) are valid InfluxDB syntax.
+Google.Units = NO
+Vale.Spelling = NO
+# Disable Vale.Terms - the vocabulary-based substitution rule creates too many
+# false positives from URLs, file paths, and code. The accepted terms in
+# accept.txt still work for spelling checks via InfluxDataDocs.Spelling.
+Vale.Terms = NO
+# Disable write-good.TooWordy - flags legitimate technical terms like
+# "aggregate", "expiration", "multiple", "However" that are standard in
+# database documentation.
+write-good.TooWordy = NO
+
+# Ignore URL paths like /api/v3/..., /cli/..., /influxdb3/...
+# Ignore full URLs like https://example.com/...
+# Ignore inline code in frontmatter (description fields, etc.)
+TokenIgnores = /[a-zA-Z0-9/_\-\.]+, \
+  https?://[^\s\)\]>"]+, \
+  `[^`]+`
@@ -65,7 +65,7 @@ influxdb3 serve [OPTIONS]
 | | `--aws-session-token` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-session-token)_ |
 | | `--aws-skip-signature` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-skip-signature)_ |
 | | `--azure-allow-http` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-allow-http)_ |
-| | `--azure-endpoint` | _See [configuration options](/influxdb3/enterprise/reference/config-options/##azure-endpoint)_ |
+| | `--azure-endpoint` | _See [configuration options](/influxdb3/core/reference/config-options/#azure-endpoint)_ |
 | | `--azure-storage-access-key` | _See [configuration options](/influxdb3/core/reference/config-options/#azure-storage-access-key)_ |
 | | `--azure-storage-account` | _See [configuration options](/influxdb3/core/reference/config-options/#azure-storage-account)_ |
 | | `--bucket` | _See [configuration options](/influxdb3/core/reference/config-options/#bucket)_ |
@@ -73,14 +73,14 @@ influxdb3 serve [OPTIONS]
 | | `--datafusion-config` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-config)_ |
 | | `--datafusion-max-parquet-fanout` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-max-parquet-fanout)_ |
 | | `--datafusion-num-threads` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-num-threads)_ |
-| | `--datafusion-runtime-disable-lifo-slot` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-disable-lifo-slot)_ |
-| | `--datafusion-runtime-event-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-event-interval)_ |
-| | `--datafusion-runtime-global-queue-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-global-queue-interval)_ |
-| | `--datafusion-runtime-max-blocking-threads` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-max-blocking-threads)_ |
-| | `--datafusion-runtime-max-io-events-per-tick` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-max-io-events-per-tick)_ |
-| | `--datafusion-runtime-thread-keep-alive` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-thread-keep-alive)_ |
-| | `--datafusion-runtime-thread-priority` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-thread-priority)_ |
-| | `--datafusion-runtime-type` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-type)_ |
+| | `--datafusion-runtime-disable-lifo-slot` | Development-only Tokio runtime configuration |
+| | `--datafusion-runtime-event-interval` | Development-only Tokio runtime configuration |
+| | `--datafusion-runtime-global-queue-interval` | Development-only Tokio runtime configuration |
+| | `--datafusion-runtime-max-blocking-threads` | Development-only Tokio runtime configuration |
+| | `--datafusion-runtime-max-io-events-per-tick` | Development-only Tokio runtime configuration |
+| | `--datafusion-runtime-thread-keep-alive` | Development-only Tokio runtime configuration |
+| | `--datafusion-runtime-thread-priority` | Development-only Tokio runtime configuration |
+| | `--datafusion-runtime-type` | Development-only Tokio runtime configuration |
 | | `--datafusion-use-cached-parquet-loader` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-use-cached-parquet-loader)_ |
 | | `--delete-grace-period` | _See [configuration options](/influxdb3/core/reference/config-options/#delete-grace-period)_ |
 | | `--disable-authz` | _See [configuration options](/influxdb3/core/reference/config-options/#disable-authz)_ |
@@ -120,7 +120,7 @@ influxdb3 serve [OPTIONS]
 | | `--table-index-cache-concurrency-limit` | _See [configuration options](/influxdb3/core/reference/config-options/#table-index-cache-concurrency-limit)_ |
 | | `--table-index-cache-max-entries` | _See [configuration options](/influxdb3/core/reference/config-options/#table-index-cache-max-entries)_ |
 | | `--tcp-listener-file-path` | _See [configuration options](/influxdb3/core/reference/config-options/#tcp-listener-file-path)_ |
-| | `--telemetry-disable-upload` | _See [configuration options](/influxdb3/core/reference/config-options/#telemetry-disable-upload)_ |
+| | `--telemetry-disable-upload` | _See [configuration options](/influxdb3/core/reference/config-options/#disable-telemetry-upload)_ |
 | | `--telemetry-endpoint` | _See [configuration options](/influxdb3/core/reference/config-options/#telemetry-endpoint)_ |
 | | `--tls-cert` | _See [configuration options](/influxdb3/core/reference/config-options/#tls-cert)_ |
 | | `--tls-key` | _See [configuration options](/influxdb3/core/reference/config-options/#tls-key)_ |
@@ -64,7 +64,7 @@ influxdb3 serve [OPTIONS]
 | | `--aws-session-token` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-session-token)_ |
 | | `--aws-skip-signature` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-skip-signature)_ |
 | | `--azure-allow-http` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-allow-http)_ |
-| | `--azure-endpoint` | _See [configuration options](/influxdb3/enterprise/reference/config-options/##azure-endpoint)_ |
+| | `--azure-endpoint` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-endpoint)_ |
 | | `--azure-storage-access-key` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-storage-access-key)_ |
 | | `--azure-storage-account` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-storage-account)_ |
 | | `--bucket` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#bucket)_ |
@@ -75,19 +75,18 @@ influxdb3 serve [OPTIONS]
 | | `--compaction-gen2-duration` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-gen2-duration)_ |
 | | `--compaction-max-num-files-per-plan` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-max-num-files-per-plan)_ |
 | | `--compaction-multipliers` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-multipliers)_ |
 | | `--compaction-row-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-row-limit)_ |
 | | `--data-dir` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#data-dir)_ |
 | | `--datafusion-config` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-config)_ |
 | | `--datafusion-max-parquet-fanout` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-max-parquet-fanout)_ |
-| | `--datafusion-num-threads` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-num-threads)_ |
-| | `--datafusion-runtime-disable-lifo-slot` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-disable-lifo-slot)_ |
-| | `--datafusion-runtime-event-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-event-interval)_ |
-| | `--datafusion-runtime-global-queue-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-global-queue-interval)_ |
-| | `--datafusion-runtime-max-blocking-threads` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-max-blocking-threads)_ |
-| | `--datafusion-runtime-max-io-events-per-tick` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-max-io-events-per-tick)_ |
-| | `--datafusion-runtime-thread-keep-alive` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-thread-keep-alive)_ |
-| | `--datafusion-runtime-thread-priority` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-thread-priority)_ |
-| | `--datafusion-runtime-type` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-type)_ |
+| | `--datafusion-runtime-disable-lifo-slot` | Development-only Tokio runtime configuration |
+| | `--datafusion-runtime-event-interval` | Development-only Tokio runtime configuration |
+| | `--datafusion-runtime-global-queue-interval` | Development-only Tokio runtime configuration |
+| | `--datafusion-runtime-max-blocking-threads` | Development-only Tokio runtime configuration |
+| | `--datafusion-runtime-max-io-events-per-tick` | Development-only Tokio runtime configuration |
+| | `--datafusion-runtime-thread-keep-alive` | Development-only Tokio runtime configuration |
+| | `--datafusion-runtime-thread-priority` | Development-only Tokio runtime configuration |
+| | `--datafusion-runtime-type` | Development-only Tokio runtime configuration |
 | | `--datafusion-use-cached-parquet-loader` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-use-cached-parquet-loader)_ |
 | | `--delete-grace-period` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#delete-grace-period)_ |
 | | `--disable-authz` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#disable-authz)_ |
@@ -115,7 +114,7 @@ influxdb3 serve [OPTIONS]
 | | `--node-id` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#node-id)_ |
 | | `--node-id-from-env` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#node-id-from-env)_ |
 | | `--num-cores` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#num-cores)_ |
-| | `--num-datafusion-threads` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#num-datafusion-threads)_ |
+| | `--num-datafusion-threads` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-num-threads)_ |
 | | `--num-database-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#num-database-limit)_ |
 | | `--num-table-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#num-table-limit)_ |
 | | `--num-total-columns-per-table-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#num-total-columns-per-table-limit)_ |
@@ -142,7 +141,7 @@ influxdb3 serve [OPTIONS]
 | | `--table-index-cache-concurrency-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#table-index-cache-concurrency-limit)_ |
 | | `--table-index-cache-max-entries` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#table-index-cache-max-entries)_ |
 | | `--tcp-listener-file-path` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#tcp-listener-file-path)_ |
-| | `--telemetry-disable-upload` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#telemetry-disable-upload)_ |
+| | `--telemetry-disable-upload` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#disable-telemetry-upload)_ |
 | | `--telemetry-endpoint` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#telemetry-endpoint)_ |
 | | `--tls-cert` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#tls-cert)_ |
 | | `--tls-key` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#tls-key)_ |
@@ -9,6 +9,12 @@ aliases:
 - /kapacitor/v1/about_the_project/releasenotes-changelog/
 ---
 
+## v1.8.3 {date="2026-03-03"}
+
+### Dependency updates
+
+- Upgrade Go to 1.25.7.
+
 ## v1.8.2 {date="2025-09-29"}
 
 ### Features
@@ -385,7 +385,7 @@ a `GROUP BY time()` clause must provide an alternative upper bound in the
 Use the [CLI](/enterprise_influxdb/v1/tools/influx-cli/use-influx/) to write a point to the `noaa` database that occurs after `now()`:
 
 ```sql
-> INSERT h2o_feet,location=santa_monica water_level=3.1 1587074400000000000
+INSERT h2o_feet,location=santa_monica water_level=3.1 1587074400000000000
 ```
 
 Run a `GROUP BY time()` query that covers data with timestamps between
@@ -44,8 +44,8 @@ INSERT INTO mydb example-m,tag1=value1 field1=1i 1640995200000000000
 The following example uses the [InfluxQL shell](/influxdb/version/tools/influxql-shell).
 
 ```sql
-> USE mydb
-> INSERT example-m,tag1=value1 field1=1i 1640995200000000000
+USE mydb
+INSERT example-m,tag1=value1 field1=1i 1640995200000000000
 ```
 
 ## Delete series with DELETE
@@ -324,7 +324,7 @@ Executes the specified SELECT statement and returns data on the query performanc
 For example, executing the following statement:
 
 ```sql
-> explain analyze select mean(usage_steal) from cpu where time >= '2018-02-22T00:00:00Z' and time < '2018-02-22T12:00:00Z'
+explain analyze select mean(usage_steal) from cpu where time >= '2018-02-22T00:00:00Z' and time < '2018-02-22T12:00:00Z'
 ```
 
 May produce an output similar to the following:
@@ -22,7 +22,7 @@ The documentation MCP server is a hosted service—you don't need to install or
 Add the server URL to your AI assistant's MCP configuration.
 
 > [!Note]
-> On first use, you'll be prompted to sign in with Google.
+> On first use, you'll be prompted to sign in with a Google or GitHub account.
 > This authentication is used only for rate limiting—no personal data is collected.
 
 **MCP server URL:**
@@ -168,23 +168,26 @@ The InfluxDB documentation search tools will be available in your OpenCode sessi
 
 ## Authentication and rate limits
 
-When you connect to the documentation MCP server for the first time, a Google sign-in
-window opens to complete an OAuth/OpenID Connect login.
+When you connect to the documentation MCP server for the first time, a sign-in
+window opens where you can choose to authenticate with a **Google** or **GitHub** account.
 
-The hosted MCP server:
+The hosted MCP server uses your account only to generate a stable, opaque user ID
+for rate limiting—no personal data is collected:
 
-- Requests only the `openid` scope from Google
-- Receives an ID token (JWT) containing a stable, opaque user ID
-- Does not request `email` or `profile` scopes—your name, email address, and other
-  personal data are not collected
+- **Google**: Requests only the `openid` scope. Does not request `email` or `profile`
+  scopes—your name, email address, and other personal data are not collected.
+- **GitHub**: Requests no OAuth scopes. With no scopes requested, GitHub grants
+  read-only access to public profile information only. The server does not access
+  repositories, organizations, email addresses, or other GitHub data.
 
-The anonymous Google ID enforces per-user rate limits to prevent abuse:
+The anonymous user ID enforces per-user rate limits to prevent abuse:
 
 - **40 requests** per user per hour
 - **200 requests** per user per day
 
 > [!Tip]
-> On Google's consent screen, this appears as "Associate you with your personal info on Google."
+> If you sign in with Google, the consent screen may display
+> "Associate you with your personal info on Google."
 > This is Google's generic wording for the `openid` scope—it means the app can recognize
 > that the same Google account is signing in again.
 > It does not grant access to your email, name, contacts, or other data.
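The documented budgets — 40 requests per hour and 200 per day — describe a classic multi-window limiter. The server's actual implementation isn't shown in these docs; the following is only an illustrative sliding-window sketch of how such budgets combine:

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Illustrative per-user limiter with the documented budgets:
    40 requests/hour and 200 requests/day. Not the server's actual code."""

    def __init__(self, hourly=40, daily=200):
        # (window seconds, max requests, timestamps of recent hits)
        self.limits = [(3600, hourly, deque()), (86400, daily, deque())]

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of each window, then check limits.
        for window, limit, hits in self.limits:
            while hits and now - hits[0] >= window:
                hits.popleft()
            if len(hits) >= limit:
                return False
        # Record the request only if every window has budget left.
        for _, _, hits in self.limits:
            hits.append(now)
        return True

limiter = SlidingWindowLimiter()
results = [limiter.allow(now=i) for i in range(41)]  # 41 requests in 41 seconds
print(results.count(True))  # 40 — the 41st request exceeds the hourly budget
```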
@@ -382,7 +382,7 @@ The documentation MCP server is a hosted service—you don't need to install or
 Add the server URL to your AI assistant's MCP configuration.
 
 > [!Note]
-> On first use, you'll be prompted to sign in with Google.
+> On first use, you'll be prompted to sign in with a Google or GitHub account.
 > This authentication is used only for rate limiting—no personal data is collected.
 
 **MCP server URL:**
@@ -528,23 +528,26 @@ The InfluxDB documentation search tools will be available in your OpenCode sessi
 
 ### Authentication and rate limits
 
-When you connect to the documentation MCP server for the first time, a Google sign-in
-window opens to complete an OAuth/OpenID Connect login.
+When you connect to the documentation MCP server for the first time, a sign-in
+window opens where you can choose to authenticate with a **Google** or **GitHub** account.
 
-The hosted MCP server:
+The hosted MCP server uses your account only to generate a stable, opaque user ID
+for rate limiting—no personal data is collected:
 
-- Requests only the `openid` scope from Google
-- Receives an ID token (JWT) containing a stable, opaque user ID
-- Does not request `email` or `profile` scopes—your name, email address, and other
-  personal data are not collected
+- **Google**: Requests only the `openid` scope. Does not request `email` or `profile`
+  scopes—your name, email address, and other personal data are not collected.
+- **GitHub**: Requests no OAuth scopes. With no scopes requested, GitHub grants
+  read-only access to public profile information only. The server does not access
+  repositories, organizations, email addresses, or other GitHub data.
 
-The anonymous Google ID enforces per-user rate limits to prevent abuse:
+The anonymous user ID enforces per-user rate limits to prevent abuse:
 
 - **40 requests** per user per hour
 - **200 requests** per user per day
 
 > [!Tip]
-> On Google's consent screen, this appears as "Associate you with your personal info on Google."
+> If you sign in with Google, the consent screen may display
+> "Associate you with your personal info on Google."
 > This is Google's generic wording for the `openid` scope—it means the app can recognize
 > that the same Google account is signing in again.
 > It does not grant access to your email, name, contacts, or other data.
@@ -12,6 +12,7 @@ based on your workload characteristics.
 {{% /show-in %}}
 - [Memory tuning](#memory-tuning)
 - [Advanced tuning options](#advanced-tuning-options)
+- [Startup optimization](#startup-optimization)
 - [Monitoring and validation](#monitoring-and-validation)
 - [Common performance issues](#common-performance-issues-1)
 
@@ -577,6 +578,53 @@ For all available configuration options, see:
 - [CLI serve command reference](/influxdb3/version/reference/cli/influxdb3/serve/)
 - [Configuration options](/influxdb3/version/reference/config-options/)
 
+## Startup optimization
+
+Server startup time scales with the number of
+[snapshots](/influxdb3/version/admin/backup-restore/#file-structure)
+stored in the object store.
+Snapshots accumulate over time and are not automatically deleted.
+
+Without checkpointing, the server loads individual snapshots on startup.
+The number of snapshots is determined by the lookback window
+([`gen1-lookback-duration`](/influxdb3/version/reference/config-options/#gen1-lookback-duration),
+default 1 month) divided by
+[`gen1-duration`](/influxdb3/version/reference/config-options/#gen1-duration)
+(default 10 minutes), with a minimum of 100.
+With default settings, a long-running server can accumulate up to ~4,320
+snapshots, causing slow restarts.
+
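The ~4,320 figure in the added text follows directly from the defaults, assuming a 30-day month:

```python
# Default lookback window: 1 month (~30 days); default gen1-duration: 10 minutes.
lookback_minutes = 30 * 24 * 60      # 43,200 minutes in a 30-day month
gen1_duration_minutes = 10

# Snapshot count = lookback / gen1-duration, floored at the documented minimum of 100.
snapshots = max(lookback_minutes // gen1_duration_minutes, 100)
print(snapshots)  # 4320
```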
+Two configuration options reduce startup time:
+
+- [`--checkpoint-interval`](/influxdb3/version/reference/config-options/#checkpoint-interval)--
+  periodically consolidates snapshot metadata into monthly checkpoints.
+  On startup, the server loads one to two checkpoints per calendar month,
+  then loads only snapshots created since the last checkpoint.
+- [`--gen1-lookback-duration`](/influxdb3/version/reference/config-options/#gen1-lookback-duration)--
+  limits how far back the server loads gen1 file index metadata on startup.
+  Files outside this window still exist in object storage but are not indexed.
+
+> [!Note]
+> Enabling checkpointing does not delete old snapshots.
+> They remain in object storage but are no longer needed for startup.
+
+### Recommended checkpoint intervals
+
+| Scenario | Recommended interval |
+| :------- | :------------------- |
+| Production servers | `1h` |
+| Development / testing | `10m` |
+
+### Enable checkpoint creation
+
+<!-- pytest.mark.skip -->
+```bash
+influxdb3 serve --checkpoint-interval 1h
+```
+
+For all checkpoint configuration options, see
+[checkpoint-interval](/influxdb3/version/reference/config-options/#checkpoint-interval).
+
 ## Monitoring and validation
 
 ### Monitor thread utilization
@@ -1227,6 +1227,7 @@ percentage (portion of available memory) or absolute value in MB--for example: `
 
 ### Write-Ahead Log (WAL)
 
+- [checkpoint-interval](#checkpoint-interval)
 - [wal-flush-interval](#wal-flush-interval)
 - [wal-snapshot-size](#wal-snapshot-size)
 - [wal-max-write-buffer-size](#wal-max-write-buffer-size)
@@ -1234,6 +1235,46 @@ percentage (portion of available memory) or absolute value in MB--for example: `

- [wal-replay-fail-on-error](#wal-replay-fail-on-error)
- [wal-replay-concurrency-limit](#wal-replay-concurrency-limit)

#### checkpoint-interval {#checkpoint-interval metadata="v3.8.2+"}

Sets the interval for consolidating
[snapshots](/influxdb3/version/admin/backup-restore/#file-structure) into
monthly checkpoints for faster server startup.
Snapshots accumulate in object storage over time and are not automatically deleted.

Without checkpointing, the server loads individual snapshots on startup.
The number of snapshots is determined by the lookback window
([`gen1-lookback-duration`](#gen1-lookback-duration), default 1 month)
divided by [`gen1-duration`](#gen1-duration) (default 10 minutes),
with a minimum of 100.
With default settings, that can be up to ~4,320 snapshots.
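The snapshot-count arithmetic above can be checked with a quick sketch (assuming a 30-day month, which is where the ~4,320 figure comes from):

```python
from datetime import timedelta

# Documented defaults (assumed here): gen1-lookback-duration ~ 1 month,
# gen1-duration = 10 minutes, with a floor of 100 snapshots.
gen1_lookback = timedelta(days=30)
gen1_duration = timedelta(minutes=10)

# Startup snapshot count: lookback window divided by gen1 duration
snapshots = max(100, int(gen1_lookback / gen1_duration))
print(snapshots)  # 4320
```

Shorter lookback windows or longer gen1 durations reduce the count, but never below the 100-snapshot floor.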

With checkpointing enabled, the server periodically consolidates snapshot
metadata into checkpoints in object storage.
On startup, the server loads one to two checkpoints per calendar month,
then loads only snapshots created since the last checkpoint.
Enabling checkpointing does not delete old snapshots.

Up to 10 checkpoints load concurrently during startup.
The server retains two checkpoints per calendar month and handles month rollovers automatically.

Accepts a [duration](/influxdb3/version/reference/glossary/#duration) value--for example: `1h`, `30m`, `10m`.

**Default:** _Not set (disabled)_

| influxdb3 serve option | Environment variable |
| :---------------------- | :------------------------------ |
| `--checkpoint-interval` | `INFLUXDB3_CHECKPOINT_INTERVAL` |

##### Example

<!-- pytest.mark.skip -->
```bash
influxdb3 serve --checkpoint-interval 1h
```
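Per the option/environment-variable mapping above, the same setting can presumably also be supplied through the environment rather than the CLI flag (a sketch; verify against your deployment):

```shell
# Set the checkpoint interval via the documented environment variable
# instead of the --checkpoint-interval flag (assumed equivalent)
export INFLUXDB3_CHECKPOINT_INTERVAL=1h
influxdb3 serve
```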

***

#### wal-flush-interval

Specifies the interval to flush buffered data to a WAL file. Writes that wait

@@ -1,5 +1,4 @@
-Use tools like the {{% show-in "cloud-dedicated,clustered" %}}`influxctl`{{% /show-in %}}{{% show-in "cloud-serverless" %}}`influx`{{% /show-in %}}{{% show-in "core,enterprise" %}}`influxdb3`{{% /show-in %}}
-CLI, Telegraf, and InfluxDB client libraries
+Use tools like the {{% show-in "cloud-dedicated,clustered" %}}`influxctl`{{% /show-in %}}{{% show-in "cloud-serverless" %}}`influx`{{% /show-in %}}{{% show-in "core,enterprise" %}}`influxdb3`{{% /show-in %}} CLI, Telegraf, and InfluxDB client libraries
 to write time series data to {{< product-name >}}.
 [line protocol](#line-protocol)
 is the text-based format used to write data to InfluxDB.
@@ -331,7 +331,7 @@ Executes the specified `SELECT` statement and returns data about the query perfo
 For example, if you execute the following statement:

 ```sql
-> explain analyze select mean(usage_steal) from cpu where time >= '2018-02-22T00:00:00Z' and time < '2018-02-22T12:00:00Z'
+explain analyze select mean(usage_steal) from cpu where time >= '2018-02-22T00:00:00Z' and time < '2018-02-22T12:00:00Z'
 ```

 The output is similar to the following:
@@ -6,6 +6,27 @@
> All updates to Core are automatically included in Enterprise.
> The Enterprise sections below only list updates exclusive to Enterprise.

## v3.8.4 {date="2026-03-10"}

### Core

No changes in this release.
Core remains on v3.8.3.

### Enterprise

#### Security

- **Read and write tokens can no longer delete databases**: Authorization now evaluates both the HTTP method and the request path. Previously, tokens with read or write access to a database could also issue delete requests.

#### Bug fixes

- **Stale compactor blocking startup**: Fixed an issue where stopped (stale) compactor entries in the catalog prevented new compactor nodes from starting. Enterprise now only considers currently running compactor nodes for conflict checks.

- **WAL replay**: Fixed an issue where combined-mode deployments silently ignored the `--wal-replay-concurrency-limit` flag and always used serial replay (concurrency of 1). The flag is now respected.

- Other bug fixes and performance improvements.

## v3.8.3 {date="2026-02-24"}

### Core
@@ -40,6 +61,7 @@

- **`_internal` database default retention**: The `_internal` system database now defaults to a 7-day retention period (previously infinite). Only admin tokens can modify retention on the `_internal` database.

- **Snapshot checkpointing for faster startup**: Use the new [`--checkpoint-interval`](/influxdb3/version/reference/config-options/#checkpoint-interval) serve option to periodically consolidate snapshots into monthly checkpoints. On startup, the server loads one to two checkpoints per calendar month instead of thousands of individual snapshots, reducing startup time for long-running servers.

#### Bug fixes
@@ -427,9 +449,9 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat

 ## v3.1.0 {date="2025-05-29"}

-**Core**: revision 482dd8aac580c04f37e8713a8fffae89ae8bc264
+**Core**: revision `482dd8aac580c04f37e8713a8fffae89ae8bc264`

-**Enterprise**: revision 2cb23cf32b67f9f0d0803e31b356813a1a151b00
+**Enterprise**: revision `2cb23cf32b67f9f0d0803e31b356813a1a151b00`

 ### Core
|
@ -10,7 +10,7 @@ introduced: "v1.5.0"
|
|||
os_support: "freebsd, linux, macos, solaris, windows"
|
||||
related:
|
||||
- /telegraf/v1/configure_plugins/
|
||||
- https://github.com/influxdata/telegraf/tree/v1.37.3/plugins/aggregators/basicstats/README.md, Basic Statistics Plugin Source
|
||||
- https://github.com/influxdata/telegraf/tree/v1.38.1/plugins/aggregators/basicstats/README.md, Basic Statistics Plugin Source
|
||||
---
|
||||
|
||||
# Basic Statistics Aggregator Plugin
|
||||
|
|
|
|||
|
|
@@ -10,7 +10,7 @@ introduced: "v1.18.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.37.3/plugins/aggregators/derivative/README.md, Derivative Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.38.1/plugins/aggregators/derivative/README.md, Derivative Plugin Source
 ---

 # Derivative Aggregator Plugin
@@ -10,7 +10,7 @@ introduced: "v1.11.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.37.3/plugins/aggregators/final/README.md, Final Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.38.1/plugins/aggregators/final/README.md, Final Plugin Source
 ---

 # Final Aggregator Plugin
@@ -10,7 +10,7 @@ introduced: "v1.4.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.37.3/plugins/aggregators/histogram/README.md, Histogram Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.38.1/plugins/aggregators/histogram/README.md, Histogram Plugin Source
 ---

 # Histogram Aggregator Plugin
@@ -10,7 +10,7 @@ introduced: "v1.13.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.37.3/plugins/aggregators/merge/README.md, Merge Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.38.1/plugins/aggregators/merge/README.md, Merge Plugin Source
 ---

 # Merge Aggregator Plugin
@@ -10,7 +10,7 @@ introduced: "v1.1.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.37.3/plugins/aggregators/minmax/README.md, Minimum-Maximum Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.38.1/plugins/aggregators/minmax/README.md, Minimum-Maximum Plugin Source
 ---

 # Minimum-Maximum Aggregator Plugin
Some files were not shown because too many files have changed in this diff.