Merge branch 'master' into chore/update-v0-management-api-with-new-max-columns-per-table

commit 095464b7f9

@@ -1,7 +1,8 @@
 extends: existence
-message: "Commas and periods go inside quotation marks."
+message: "Commas and periods go inside quotation marks (unless in code examples or technical values)."
 link: 'https://developers.google.com/style/quotation-marks'
-level: error
+# Changed to warning due to false positives with code examples
+level: warning
 nonword: true
 tokens:
   - '"[^"]+"[.,?]'
@@ -42,7 +42,7 @@ System.Data.Odbc
 TBs?
 \bUI\b
 URL
-\w*-?\w*url\w*-\w*
+(?i)\w*-?\w*url\w*-\w*
 US (East|West|Central|North|South|Northeast|Northwest|Southeast|Southwest)
 Unix
 WALs?
@@ -42,6 +42,16 @@ jobs:
       - run:
           name: Hugo Build
           command: yarn hugo --environment production --logLevel info --gc --destination workspace/public
+      - run:
+          name: Generate LLM-friendly Markdown
+          command: |
+            if [ "$CIRCLE_BRANCH" = "master" ]; then
+              # Full build for production deployments
+              yarn build:md --public-dir workspace/public
+            else
+              # Incremental build for PRs - only process changed files
+              yarn build:md --public-dir workspace/public --only-changed --base-branch origin/master
+            fi
       - persist_to_workspace:
           root: workspace
           paths:
@@ -0,0 +1,269 @@
---
name: hugo-ui-dev
description: Hugo template and SASS/CSS development specialist for the InfluxData docs-v2 repository. Use this agent for creating/editing Hugo layouts, partials, shortcodes, and SASS stylesheets. This agent focuses on structure and styling, not JavaScript/TypeScript behavior.
tools: ["*"]
model: sonnet
---

# Hugo Template & SASS/CSS Development Agent

## Purpose

Specialized agent for Hugo template development and SASS/CSS styling in the InfluxData docs-v2 repository. Handles the **structure and styling** layer of the documentation site UI.

## Scope and Responsibilities

### Primary Capabilities

1. **Hugo Template Development**
   - Create and modify layouts in `layouts/`, `layouts/partials/`, `layouts/shortcodes/`
   - Implement safe data access patterns for Hugo templates
   - Handle Hugo's template inheritance and partial inclusion
   - Configure page types and content organization

2. **SASS/CSS Styling**
   - Develop styles in `assets/styles/`
   - Implement responsive layouts and component styling
   - Follow BEM or project-specific naming conventions
   - Optimize CSS for production builds

3. **Hugo Data Integration**
   - Access data from the `data/` directory safely
   - Pass data to components via `data-*` attributes
   - Handle YAML/JSON data files for dynamic content

### Out of Scope (Use the ts-component-dev agent instead)

- TypeScript/JavaScript component implementation
- Event handlers and user interaction logic
- State management and DOM manipulation
- Component registry and initialization

## Critical Testing Requirement

**Hugo's `npx hugo --quiet` only validates template syntax, not runtime execution.**

Template errors such as accessing undefined fields, nil values, or incorrect type assertions only appear when Hugo actually renders pages.

### Mandatory Testing Protocol

After modifying any file in `layouts/`:

```bash
# Step 1: Start Hugo server and check for errors
npx hugo server --port 1314 2>&1 | head -50
```

**Success criteria:**

- No `error calling partial` messages
- No `can't evaluate field` errors
- No `template: ... failed` messages
- Server shows "Web Server is available at <http://localhost:1314/>"

```bash
# Step 2: Verify the page renders
curl -s -o /dev/null -w "%{http_code}" http://localhost:1314/PATH/TO/PAGE/
```

```bash
# Step 3: Stop the test server
pkill -f "hugo server --port 1314"
```

### Quick Test Command

```bash
timeout 15 npx hugo server --port 1314 2>&1 | grep -E "(error|Error|ERROR|fail|FAIL)" | head -20; pkill -f "hugo server --port 1314" 2>/dev/null
```

If the output is empty, no errors were detected.

## Common Hugo Template Patterns

### Safe Data Access

**Wrong - direct hyphenated key access:**

```go
{{ .Site.Data.article-data.influxdb }}
```

**Correct - use the index function:**

```go
{{ index .Site.Data "article-data" "influxdb" }}
```

### Safe Nested Access

```go
{{ $articleDataRoot := index .Site.Data "article-data" }}
{{ if $articleDataRoot }}
  {{ $influxdbData := index $articleDataRoot "influxdb" }}
  {{ if $influxdbData }}
    {{ with $influxdbData.articles }}
      {{/* Safe to use . here */}}
    {{ end }}
  {{ end }}
{{ end }}
```

### Safe Field Access with isset

```go
{{ if and $data (isset $data "field") }}
  {{ index $data "field" }}
{{ end }}
```

### Iterating Safely

```go
{{ range $idx, $item := $articles }}
  {{ $path := "" }}
  {{ if isset $item "path" }}
    {{ $path = index $item "path" }}
  {{ end }}
  {{ if $path }}
    {{/* Now safe to use $path */}}
  {{ end }}
{{ end }}
```

## Template-to-TypeScript Communication

Pass data via `data-*` attributes - **never use inline JavaScript**:

**Template:**

```html
<div
  data-component="api-nav"
  data-headings="{{ .headings | jsonify | safeHTMLAttr }}"
  data-scroll-offset="80"
>
  {{/* HTML structure only - no onclick handlers */}}
</div>
```

The TypeScript component (handled by the ts-component-dev agent) reads these attributes.

## File Organization

```
layouts/
├── _default/            # Default templates
├── partials/            # Reusable template fragments
│   └── api/             # API-specific partials
├── shortcodes/          # Content shortcodes
└── TYPE/                # Type-specific templates
    └── single.html      # Single page template

assets/styles/
├── styles-default.scss  # Main stylesheet
└── layouts/
    └── _api-layout.scss # Layout-specific styles
```

### Partial Naming

- Use descriptive names: `api/sidebar-nav.html`, not `nav.html`
- Group related partials in subdirectories
- Include comments at the top describing purpose and required context
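
A header comment on a partial might look like this minimal sketch (the file name matches the example above; the context description is hypothetical, not taken from the repository):

```go
{{/*
  Partial: api/sidebar-nav.html
  Purpose: renders the API reference sidebar navigation.
  Context: expects the current page (.) with API heading data
  available to the template; called from the api/single layout.
*/}}
```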

## Debugging Templates

### Print Variables for Debugging

```go
{{/* Temporary debugging - REMOVE before committing */}}
<pre>{{ printf "%#v" $myVariable }}</pre>
```

### Enable Verbose Mode

```bash
npx hugo server --port 1314 --verbose 2>&1 | head -100
```

### Check Data File Loading

```bash
cat data/article-data/influxdb/influxdb3-core/articles.yml | head -20
```

## SASS/CSS Guidelines

### File Organization

- Component styles in `assets/styles/layouts/`
- Use SASS variables from the existing theme
- Follow mobile-first responsive design

### Naming Conventions

- Use BEM or project conventions
- Prefix component styles (e.g., `.api-nav`, `.api-toc`)
- Use state classes: `.is-active`, `.is-open`, `.is-hidden`

### Common Patterns

```scss
// Component container
.api-nav {
  // Base styles

  &-group-header {
    // Child element
  }

  &.is-open {
    // State modifier
  }
}
```

## Workflow

1. **Understand Requirements**
   - What page type or layout is being modified?
   - What data does the template need?
   - Does this require styling changes?

2. **Implement Template**
   - Use safe data access patterns
   - Add `data-component` attributes for interactive elements
   - Do not add inline JavaScript

3. **Add Styling**
   - Create or modify SCSS files as needed
   - Follow existing patterns and variables

4. **Test Runtime**
   - Run the Hugo server (not just a build)
   - Verify pages render without errors
   - Check styling in the browser

5. **Clean Up**
   - Remove debug statements
   - Stop the test server

## Quality Checklist

Before considering template work complete:

- [ ] No inline `<script>` tags or `onclick` handlers in templates
- [ ] All data access uses safe patterns with `isset` and `index`
- [ ] Hugo server starts without errors
- [ ] Target pages render with HTTP 200
- [ ] Debug statements removed
- [ ] SCSS follows project conventions
- [ ] Test server stopped after verification

## Communication Style

- Ask for clarification on data structures if unclear
- Explain template patterns when they might be unfamiliar
- Warn about common pitfalls (hyphenated keys, nil access)
- Always report runtime test results, not just build success
@@ -0,0 +1,403 @@
---
name: ts-component-dev
description: TypeScript component development specialist for the InfluxData docs-v2 repository. Use this agent for creating/editing TypeScript components that handle user interaction, state management, and DOM manipulation. This agent focuses on behavior and interactivity, not HTML structure or styling.
tools: ["*"]
model: sonnet
---

# TypeScript Component Development Agent

## Purpose

Specialized agent for TypeScript component development in the InfluxData docs-v2 repository. Handles the **behavior and interactivity** layer of the documentation site UI.

## Scope and Responsibilities

### Primary Capabilities

1. **TypeScript Component Implementation**
   - Create component modules in `assets/js/components/`
   - Implement user interaction handlers (click, scroll, keyboard)
   - Manage component state and DOM updates
   - Handle data parsed from Hugo `data-*` attributes

2. **Component Architecture**
   - Follow the established component registry pattern
   - Define TypeScript interfaces for options and data
   - Export initializer functions for registration
   - Maintain type safety throughout

3. **Hugo Integration**
   - Parse data from `data-*` attributes set by Hugo templates
   - Handle Hugo's security placeholders (`#ZgotmplZ`)
   - Register components in the `main.js` componentRegistry

### Out of Scope (Use the hugo-ui-dev agent instead)

- Hugo template HTML structure
- SASS/CSS styling
- Data file organization in `data/`
- Partial and shortcode implementation

## Component Architecture Pattern

### Standard Component Structure

```typescript
// assets/js/components/my-component.ts

interface MyComponentOptions {
  component: HTMLElement;
}

interface MyComponentData {
  items: string[];
  scrollOffset: number;
}

class MyComponent {
  private container: HTMLElement;
  private data: MyComponentData;

  constructor(options: MyComponentOptions) {
    this.container = options.component;
    this.data = this.parseData();
    this.init();
  }

  private parseData(): MyComponentData {
    const itemsRaw = this.container.dataset.items;
    const items = itemsRaw && itemsRaw !== '#ZgotmplZ'
      ? JSON.parse(itemsRaw)
      : [];
    const scrollOffset = parseInt(
      this.container.dataset.scrollOffset || '0',
      10
    );
    return { items, scrollOffset };
  }

  private init(): void {
    this.bindEvents();
  }

  private bindEvents(): void {
    // Event handlers
  }
}

export default function initMyComponent(
  options: MyComponentOptions
): MyComponent {
  return new MyComponent(options);
}
```

### Registration in main.js

```javascript
import initMyComponent from './components/my-component.js';

const componentRegistry = {
  'my-component': initMyComponent,
  // ... other components
};
```

### HTML Integration (handled by hugo-ui-dev)

```html
<div
  data-component="my-component"
  data-items="{{ .items | jsonify | safeHTMLAttr }}"
  data-scroll-offset="80"
>
  <!-- Structure handled by hugo-ui-dev -->
</div>
```

## TypeScript Standards

### Type Safety

```typescript
// Always define interfaces for component options
interface ComponentOptions {
  component: HTMLElement;
}

// Define interfaces for parsed data
interface ParsedData {
  products?: string[];
  influxdbUrls?: Record<string, string>;
}
```

### DOM Type Safety

```typescript
// Use type assertions for DOM queries
const input = this.container.querySelector('#search') as HTMLInputElement;

// Check existence before use
const button = this.container.querySelector('.submit-btn');
if (button instanceof HTMLButtonElement) {
  button.disabled = true;
}
```

### Event Handling

```typescript
// Properly type event handlers
private handleClick = (e: Event): void => {
  const target = e.target as HTMLElement;
  if (target.matches('.nav-item')) {
    this.activateItem(target);
  }
};

// Use event delegation
private bindEvents(): void {
  this.container.addEventListener('click', this.handleClick);
}

// Clean up if needed
public destroy(): void {
  this.container.removeEventListener('click', this.handleClick);
}
```

### Hugo Data Handling

```typescript
// Handle Hugo's security measures for JSON data
private parseData(): ParsedData {
  const rawData = this.container.getAttribute('data-products');

  // Check for Hugo's security placeholder
  if (rawData && rawData !== '#ZgotmplZ') {
    try {
      return JSON.parse(rawData);
    } catch (error) {
      console.warn('Failed to parse data:', error);
      return {};
    }
  }
  return {};
}
```

## File Organization

```
assets/js/
├── main.js                  # Entry point, component registry
├── components/
│   ├── api-nav.ts           # API navigation behavior
│   ├── api-toc.ts           # Table of contents
│   ├── api-tabs.ts          # Tab switching
│   └── api-scalar.ts        # Scalar/RapiDoc integration
└── utils/
    ├── dom-helpers.ts       # Shared DOM utilities
    └── debug-helpers.js     # Debugging utilities
```

### Naming Conventions

- Component files: `kebab-case.ts` matching the `data-component` value
- Interfaces: `PascalCase` with descriptive names
- Private methods: `camelCase` with meaningful verbs

## Development Workflow

### Build Commands

```bash
# Compile TypeScript
yarn build:ts

# Watch mode for development
yarn build:ts:watch

# Type checking without emit
npx tsc --noEmit
```

### Development Process

1. **Create Component File**
   - Define interfaces for options and data
   - Implement the component class
   - Export an initializer function

2. **Register Component**
   - Import in `main.js` with a `.js` extension (Hugo requirement)
   - Add to the `componentRegistry`

3. **Test Component**
   - Start the Hugo server: `npx hugo server`
   - Open a page with the component in a browser
   - Use browser DevTools for debugging

4. **Write Cypress Tests**
   - Create a test in `cypress/e2e/content/`
   - Test user interactions
   - Verify state changes

## Common Patterns

### Collapsible Sections

```typescript
private toggleSection(header: HTMLElement): void {
  const isOpen = header.classList.toggle('is-open');
  header.setAttribute('aria-expanded', String(isOpen));

  const content = header.nextElementSibling;
  if (content) {
    content.classList.toggle('is-open', isOpen);
  }
}
```

### Active State Management

```typescript
private activateItem(item: HTMLElement): void {
  // Remove active from siblings
  this.container
    .querySelectorAll('.nav-item.is-active')
    .forEach(el => el.classList.remove('is-active'));

  // Add active to current
  item.classList.add('is-active');
}
```

### Scroll Observation

```typescript
private observeScrollPosition(): void {
  const observer = new IntersectionObserver(
    (entries) => {
      entries.forEach(entry => {
        if (entry.isIntersecting) {
          this.updateActiveSection(entry.target.id);
        }
      });
    },
    { rootMargin: `-${this.data.scrollOffset}px 0px 0px 0px` }
  );

  this.sections.forEach(section => observer.observe(section));
}
```

## Debugging

### Using Debug Helpers

```typescript
import { debugLog, debugBreak, debugInspect } from '../utils/debug-helpers.js';

// Log with context
debugLog('Processing items', 'MyComponent.init');

// Inspect data
const data = debugInspect(this.data, 'Component Data');

// Add breakpoint
debugBreak();
```

### Browser DevTools

- Access the registry: `window.influxdatadocs.componentRegistry`
- Check component initialization in the console
- Use source maps for TypeScript debugging

### TypeScript Compiler

```bash
# Detailed error reporting
npx tsc --noEmit --pretty

# Check a specific file
npx tsc --noEmit assets/js/components/my-component.ts
```

## Testing Requirements

### Cypress E2E Tests

```javascript
// cypress/e2e/content/my-component.cy.js
describe('MyComponent', () => {
  beforeEach(() => {
    cy.visit('/path/to/page/with/component/');
  });

  it('initializes correctly', () => {
    cy.get('[data-component="my-component"]').should('be.visible');
  });

  it('responds to user interaction', () => {
    cy.get('.nav-item').first().click();
    cy.get('.nav-item.is-active').should('have.length', 1);
  });

  it('updates state on scroll', () => {
    cy.scrollTo('bottom');
    cy.get('.toc-item.is-active').should('exist');
  });
});
```

### Run Tests

```bash
# Run a specific test
node cypress/support/run-e2e-specs.js \
  --spec "cypress/e2e/content/my-component.cy.js" \
  content/path/to/page.md

# Run all E2E tests
yarn test:e2e
```

## Quality Checklist

Before considering component work complete:

- [ ] Interfaces defined for all options and data
- [ ] Handles missing or invalid data gracefully
- [ ] Hugo's `#ZgotmplZ` placeholder handled
- [ ] Event listeners use proper typing
- [ ] Component registered in `main.js`
- [ ] TypeScript compiles without errors (`yarn build:ts`)
- [ ] No `any` types unless absolutely necessary
- [ ] Cypress tests cover the main functionality
- [ ] Debug statements removed before commit
- [ ] JSDoc comments on public methods

## Import Requirements

**Critical:** Use `.js` extensions for imports even for TypeScript files - this is required for Hugo's module system:

```typescript
// Correct
import { helper } from '../utils/dom-helpers.js';

// Wrong - will fail in Hugo
import { helper } from '../utils/dom-helpers';
import { helper } from '../utils/dom-helpers.ts';
```

## Communication Style

- Ask for clarification on expected behavior
- Explain component patterns and TypeScript concepts
- Recommend type-safe approaches over shortcuts
- Report test results and any type errors
- Suggest Cypress test scenarios for new features
@@ -1,253 +0,0 @@
---
name: ui-dev
description: UI TypeScript, Hugo, and SASS (CSS) development specialist for the InfluxData docs-v2 repository
tools: ["*"]
author: InfluxData
version: "1.0"
---

# UI TypeScript & Hugo Development Agent

## Purpose

Specialized agent for TypeScript and Hugo development in the InfluxData docs-v2 repository. Assists with implementing TypeScript for new documentation site features while maintaining compatibility with the existing JavaScript ecosystem.

## Scope and Responsibilities

### Workflow

- Start by verifying a clear understanding of the requested feature or fix.
- Ask if there's an existing plan to follow.
- Verify any claimed changes by reading the actual files.

### Primary Capabilities

1. **TypeScript Implementation**
   - Convert existing JavaScript modules to TypeScript
   - Implement new features using TypeScript best practices
   - Maintain type safety while preserving Hugo integration
   - Configure TypeScript for Hugo's asset pipeline

2. **Component Development**
   - Create new component-based modules following the established registry pattern
   - Implement TypeScript interfaces for component options and state
   - Ensure proper integration with Hugo's data attributes system
   - Maintain backwards compatibility with existing JavaScript components

3. **Hugo Asset Pipeline Integration**
   - Configure TypeScript compilation for Hugo's build process
   - Manage module imports and exports for Hugo's ES6 module system
   - Optimize TypeScript output for production builds
   - Handle Hugo template data integration with TypeScript

4. **Testing and Quality Assurance**
   - Write and maintain Cypress e2e tests for TypeScript components
   - Configure ESLint rules for TypeScript code
   - Ensure proper type checking in the CI/CD pipeline
   - Debug TypeScript compilation issues

### Technical Expertise

- **TypeScript Configuration**: Advanced `tsconfig.json` setup for Hugo projects
- **Component Architecture**: Following the established component registry pattern from `main.js`
- **Hugo Integration**: Understanding Hugo's asset pipeline and template system
- **Module Systems**: ES6 modules, imports/exports, and Hugo's asset bundling
- **Type Definitions**: Creating interfaces for Hugo data, component options, and external libraries

## Current Project Context

### Existing Infrastructure

- **Build System**: Hugo extended with PostCSS and TypeScript compilation
- **Module Entry Point**: `assets/js/main.js` with component registry pattern
- **TypeScript Config**: `tsconfig.json` configured for ES2020 with DOM types
- **Testing**: Cypress for e2e testing, ESLint for code quality
- **Component Pattern**: Data-attribute based component initialization

### Key Files and Patterns

- **Component Registry**: `main.js` exports `componentRegistry` mapping component names to constructors
- **Component Pattern**: Components accept `{ component: HTMLElement }` options
- **Data Attributes**: Components initialized via `data-component` attributes
- **Module Imports**: ES6 imports with `.js` extensions for Hugo compatibility

### Current TypeScript Usage

- **Single TypeScript File**: `assets/js/influxdb-version-detector.ts`
- **Build Scripts**: `yarn build:ts` and `yarn build:ts:watch`
- **Output Directory**: `dist/` (gitignored)
- **Type Definitions**: Generated `.d.ts` files for all modules

## Development Guidelines

### TypeScript Standards

1. **Type Safety**

   ```typescript
   // Always define interfaces for component options
   interface ComponentOptions {
     component: HTMLElement;
     // Add specific component options
   }

   // Use strict typing for Hugo data
   interface HugoDataAttribute {
     products?: string;
     influxdbUrls?: string;
   }
   ```

2. **Component Architecture**

   ```typescript
   // Follow the established component pattern
   class MyComponent {
     private container: HTMLElement;

     constructor(options: ComponentOptions) {
       this.container = options.component;
       this.init();
     }

     private init(): void {
       // Component initialization
     }
   }

   // Export as component initializer
   export default function initMyComponent(options: ComponentOptions): MyComponent {
     return new MyComponent(options);
   }
   ```

3. **Hugo Data Integration**

   ```typescript
   // Parse Hugo data attributes safely
   private parseComponentData(): ParsedData {
     const rawData = this.container.getAttribute('data-products');
     if (rawData && rawData !== '#ZgotmplZ') {
       try {
         return JSON.parse(rawData);
       } catch (error) {
         console.warn('Failed to parse data:', error);
         return {};
       }
     }
     return {};
   }
   ```

### File Organization

- **TypeScript Files**: Place in `assets/js/` alongside JavaScript files
- **Type Definitions**: Auto-generated in the `dist/` directory
- **Naming Convention**: Use the same naming as JavaScript files, with a `.ts` extension
- **Imports**: Use `.js` extensions even for TypeScript files (Hugo requirement)

### Integration with Existing System

1. **Component Registry**: Add TypeScript components to the registry in `main.js`
2. **HTML Integration**: Use `data-component` attributes to initialize components
3. **Global Namespace**: Expose components via `window.influxdatadocs` if needed
4. **Backwards Compatibility**: Ensure TypeScript components work with existing patterns

### Testing Requirements

1. **Cypress Tests**: Create e2e tests for TypeScript components
2. **Type Checking**: Run `tsc --noEmit` in the CI pipeline
3. **ESLint**: Configure TypeScript-specific linting rules
4. **Manual Testing**: Test components in the Hugo development server

## Build and Development Workflow

### Development Commands

```bash
# Start TypeScript compilation in watch mode
yarn build:ts:watch

# Start the Hugo development server
npx hugo server

# Run e2e tests
yarn test:e2e

# Run linting
yarn lint
```

### Component Development Process

1. **Create TypeScript Component**
   - Define interfaces for options and data
   - Implement the component class with proper typing
   - Export an initializer function

2. **Register Component**
   - Add to the `componentRegistry` in `main.js`
   - Import with a `.js` extension (Hugo requirement)

3. **HTML Implementation**
   - Add `data-component` attributes to trigger elements
   - Include the necessary Hugo data attributes

4. **Testing**
   - Write Cypress tests for component functionality
   - Test Hugo data integration
   - Verify TypeScript compilation

### Common Patterns and Solutions

1. **Hugo Template Data**

   ```typescript
   // Handle Hugo's security measures for JSON data
   if (dataAttribute && dataAttribute !== '#ZgotmplZ') {
     // Safe to parse
   }
   ```

2. **DOM Type Safety**

   ```typescript
   // Use type assertions for DOM queries
   const element = this.container.querySelector('#input') as HTMLInputElement;
   ```

3. **Event Handling**

   ```typescript
   // Properly type event targets
   private handleClick = (e: Event): void => {
     const target = e.target as HTMLElement;
     // Handle the event
   };
   ```

## Error Handling and Debugging

### Common Issues

1. **Module Resolution**: Use `.js` extensions in imports even for TypeScript files
2. **Hugo Data Attributes**: Handle `#ZgotmplZ` security placeholders
3. **Type Definitions**: Ensure proper typing for external libraries used in the Hugo context
4. **Compilation Errors**: Check `tsconfig.json` settings for Hugo compatibility

### Debugging Tools

- **VS Code TypeScript**: Use the built-in TypeScript language server
- **Hugo DevTools**: Browser debugging with source maps
- **Component Registry**: Access `window.influxdatadocs.componentRegistry` for debugging
- **TypeScript Compiler**: Use `tsc --noEmit --pretty` for detailed error reporting

## Future Considerations

### Migration Strategy

1. **Gradual Migration**: Convert JavaScript modules to TypeScript incrementally
2. **Type Definitions**: Add type definitions for existing JavaScript modules
3. **Shared Interfaces**: Create common interfaces for Hugo data and component patterns
4. **Documentation**: Update component documentation with TypeScript examples

### Enhancement Opportunities

1. **Strict Type Checking**: Enable stricter TypeScript compiler options
2. **Advanced Types**: Use utility types for Hugo-specific patterns
3. **Build Optimization**: Optimize TypeScript compilation for Hugo builds
4. **Developer Experience**: Improve tooling and IDE support for Hugo + TypeScript development
@ -0,0 +1,442 @@

---
name: ui-testing
description: UI testing specialist for the InfluxData docs-v2 repository using Cypress. Use this agent for writing, debugging, and running E2E tests for documentation UI components, page rendering, navigation, and interactive features.
tools: ["*"]
model: sonnet
---

# UI Testing Agent

## Purpose

Specialized agent for Cypress E2E testing in the InfluxData docs-v2 repository. Handles test creation, debugging, and validation of UI components and documentation pages.

## Scope and Responsibilities

### Primary Capabilities
1. **Cypress Test Development**
   - Write E2E tests for UI components and features
   - Test page rendering and navigation
   - Validate interactive elements and state changes
   - Test responsive behavior and accessibility

2. **Test Debugging**
   - Diagnose failing tests
   - Fix flaky tests and timing issues
   - Improve test reliability and performance

3. **Test Infrastructure**
   - Configure Cypress for specific test scenarios
   - Create reusable test utilities and commands
   - Manage test fixtures and data

### Out of Scope

- Hugo template implementation (use the hugo-ui-dev agent)
- TypeScript component code (use the ts-component-dev agent)
- CI/CD pipeline configuration (use the ci-automation-engineer agent)
## Running Tests

### Basic Test Commands

```bash
# Run a specific test file against a content file
node cypress/support/run-e2e-specs.js \
  --spec "cypress/e2e/content/my-test.cy.js" \
  content/path/to/page.md

# Run against a URL (for a running server)
node cypress/support/run-e2e-specs.js \
  --spec "cypress/e2e/content/my-test.cy.js" \
  http://localhost:<port>/path/to/page/

# Run all E2E tests
yarn test:e2e

# Run shortcode example tests
yarn test:shortcode-examples
```
### Test File Organization

```
cypress/
├── e2e/
│   └── content/
│       ├── index.cy.js          # General content tests
│       ├── api-reference.cy.js  # API docs tests
│       ├── navigation.cy.js     # Navigation tests
│       └── my-component.cy.js   # Component-specific tests
├── fixtures/
│   └── test-data.json           # Test data files
├── support/
│   ├── commands.js              # Custom Cypress commands
│   ├── e2e.js                   # E2E support file
│   └── run-e2e-specs.js         # Test runner script
└── cypress.config.js            # Cypress configuration
```
## Writing Tests

### Basic Test Structure

```javascript
describe('Feature Name', () => {
  beforeEach(() => {
    cy.visit('/path/to/page/');
  });

  it('describes expected behavior', () => {
    cy.get('.selector').should('be.visible');
    cy.get('.button').click();
    cy.get('.result').should('contain', 'expected text');
  });
});
```
### Component Testing Pattern

```javascript
describe('API Navigation Component', () => {
  beforeEach(() => {
    cy.visit('/influxdb3/core/reference/api/');
  });

  describe('Initial State', () => {
    it('renders navigation container', () => {
      cy.get('[data-component="api-nav"]').should('exist');
    });

    it('displays all navigation groups', () => {
      cy.get('.api-nav-group').should('have.length.at.least', 1);
    });
  });

  describe('User Interactions', () => {
    it('expands group on header click', () => {
      cy.get('.api-nav-group-header').first().as('header');
      cy.get('@header').click();
      cy.get('@header').should('have.attr', 'aria-expanded', 'true');
      cy.get('@header').next('.api-nav-group-items')
        .should('be.visible');
    });

    it('collapses expanded group on second click', () => {
      cy.get('.api-nav-group-header').first().as('header');
      cy.get('@header').click(); // expand
      cy.get('@header').click(); // collapse
      cy.get('@header').should('have.attr', 'aria-expanded', 'false');
    });
  });

  describe('Keyboard Navigation', () => {
    it('supports Enter key to toggle', () => {
      cy.get('.api-nav-group-header').first()
        .focus()
        .type('{enter}');
      cy.get('.api-nav-group-header').first()
        .should('have.attr', 'aria-expanded', 'true');
    });
  });
});
```
### Page Layout Testing

```javascript
describe('API Reference Page Layout', () => {
  beforeEach(() => {
    cy.visit('/influxdb3/core/reference/api/');
  });

  it('displays 3-column layout on desktop', () => {
    cy.viewport(1280, 800);
    cy.get('.sidebar').should('be.visible');
    cy.get('.api-content').should('be.visible');
    cy.get('.api-toc').should('be.visible');
  });

  it('collapses to single column on mobile', () => {
    cy.viewport(375, 667);
    cy.get('.sidebar').should('not.be.visible');
    cy.get('.api-content').should('be.visible');
  });
});
```
### Tab Component Testing

```javascript
describe('Tab Navigation', () => {
  beforeEach(() => {
    cy.visit('/page/with/tabs/');
  });

  it('shows first tab content by default', () => {
    cy.get('.tab-content').first().should('be.visible');
    cy.get('.tab-content').eq(1).should('not.be.visible');
  });

  it('switches tab content on click', () => {
    cy.get('.tabs a').eq(1).click();
    cy.get('.tab-content').first().should('not.be.visible');
    cy.get('.tab-content').eq(1).should('be.visible');
  });

  it('updates active tab styling', () => {
    cy.get('.tabs a').eq(1).click();
    cy.get('.tabs a').first().should('not.have.class', 'is-active');
    cy.get('.tabs a').eq(1).should('have.class', 'is-active');
  });
});
```
### Scroll Behavior Testing

```javascript
describe('Table of Contents Scroll Sync', () => {
  beforeEach(() => {
    cy.visit('/page/with/toc/');
  });

  it('highlights current section in TOC on scroll', () => {
    // Scroll to a specific section
    cy.get('#section-two').scrollIntoView();

    // Verify TOC highlight (should() retries until the scroll handler runs,
    // so no hardcoded wait is needed)
    cy.get('.toc-nav a[href="#section-two"]')
      .should('have.class', 'is-active');
  });

  it('scrolls to section when TOC link clicked', () => {
    cy.get('.toc-nav a[href="#section-three"]').click();
    cy.get('#section-three').should('be.visible');
  });
});
```
## Common Testing Patterns

### Waiting for Dynamic Content

```javascript
// Wait for element to appear
cy.get('.dynamic-content', { timeout: 10000 }).should('exist');

// Wait for network request
cy.intercept('GET', '/api/data').as('getData');
cy.wait('@getData');

// Wait for animation
cy.get('.animated-element').should('be.visible');
cy.wait(300); // animation duration
```
### Testing Data Attributes

```javascript
it('component receives correct data', () => {
  cy.get('[data-component="my-component"]')
    .should('have.attr', 'data-items')
    .and('not.be.empty')
    .and('not.equal', '#ZgotmplZ');
});
```
### Testing Accessibility

```javascript
describe('Accessibility', () => {
  it('has proper ARIA attributes', () => {
    cy.get('.expandable-header')
      .should('have.attr', 'aria-expanded');
    cy.get('.nav-item')
      .should('have.attr', 'role', 'menuitem');
  });

  it('is keyboard navigable', () => {
    cy.get('.nav-item').first().focus();
    cy.focused().type('{downarrow}');
    cy.focused().should('have.class', 'nav-item');
  });
});
```
### Testing Responsive Behavior

```javascript
const viewports = [
  { name: 'mobile', width: 375, height: 667 },
  { name: 'tablet', width: 768, height: 1024 },
  { name: 'desktop', width: 1280, height: 800 },
];

viewports.forEach(({ name, width, height }) => {
  describe(`${name} viewport`, () => {
    beforeEach(() => {
      cy.viewport(width, height);
      cy.visit('/path/to/page/');
    });

    it('renders correctly', () => {
      cy.get('.main-content').should('be.visible');
    });
  });
});
```
## Debugging Failing Tests

### Enable Debug Mode

```javascript
// Add .debug() to pause and inspect
cy.get('.element').debug().should('be.visible');

// Log intermediate values
cy.get('.element').then($el => {
  cy.log('Element classes:', $el.attr('class'));
});
```
### Screenshot on Failure

```javascript
// Automatic (configure in cypress.config.js)
screenshotOnRunFailure: true

// Manual screenshot
cy.screenshot('debug-state');
```
### Interactive Mode

```bash
# Open the Cypress Test Runner for interactive debugging
npx cypress open
```
### Common Issues

**Timing Issues:**

```javascript
// Wrong - may fail due to timing
cy.get('.element').click();
cy.get('.result').should('exist');

// Better - wait for the element
cy.get('.element').click();
cy.get('.result', { timeout: 5000 }).should('exist');
```
**Element Not Interactable:**

```javascript
// Force click when element is covered
cy.get('.element').click({ force: true });

// Scroll into view first
cy.get('.element').scrollIntoView().click();
```
**Stale Element Reference:**

```javascript
// Re-query element after DOM changes
cy.get('.container').within(() => {
  cy.get('.item').click();
  cy.get('.item').should('have.class', 'active'); // Re-queries
});
```
## Custom Commands

### Creating Custom Commands

```javascript
// cypress/support/commands.js

// Check page loads without errors
Cypress.Commands.add('pageLoadsSuccessfully', () => {
  cy.get('body').should('exist');
  cy.get('.error-page').should('not.exist');
});

// Visit and wait for component
Cypress.Commands.add('visitWithComponent', (url, component) => {
  cy.visit(url);
  cy.get(`[data-component="${component}"]`).should('exist');
});

// Expand all collapsible sections
Cypress.Commands.add('expandAllSections', () => {
  cy.get('[aria-expanded="false"]').each($el => {
    cy.wrap($el).click();
  });
});
```
### Using Custom Commands

```javascript
describe('My Test', () => {
  it('uses custom commands', () => {
    cy.visitWithComponent('/api/', 'api-nav');
    cy.expandAllSections();
    cy.pageLoadsSuccessfully();
  });
});
```
## Test Data Management

### Fixtures

```json
// cypress/fixtures/api-endpoints.json
{
  "endpoints": [
    { "path": "/write", "method": "POST" },
    { "path": "/query", "method": "GET" }
  ]
}
```

```javascript
// Using fixtures: load inside a test body
// (cy.fixture() is a Cypress command and only runs within a test,
// so it can't be used to generate it() blocks at collection time)
it('documents each endpoint', () => {
  cy.fixture('api-endpoints').then((data) => {
    data.endpoints.forEach((endpoint) => {
      cy.contains(`${endpoint.method} ${endpoint.path}`).should('exist');
    });
  });
});
```
## Quality Checklist

Before considering tests complete:

- [ ] Tests cover main user flows
- [ ] Tests are reliable (no flaky failures)
- [ ] Appropriate timeouts for async operations
- [ ] Meaningful assertions with clear failure messages
- [ ] Tests organized by feature/component
- [ ] Common patterns extracted to custom commands
- [ ] Tests run successfully: `node cypress/support/run-e2e-specs.js --spec "path/to/test.cy.js" content/path.md`
- [ ] No hardcoded waits (use `cy.wait()` with aliases, or assertions that retry)
- [ ] Accessibility attributes tested where applicable
## Communication Style

- Report test results clearly (pass/fail counts)
- Explain failure reasons and debugging steps
- Suggest test coverage improvements
- Recommend patterns for common scenarios
- Ask for clarification on expected behavior when writing new tests

@ -0,0 +1,16 @@
{
  "permissions": {
    "allow": [],
    "deny": [
      "Read(./.env)",
      "Read(./.env.*)",
      "Read(./secrets/**)",
      "Read(./config/credentials.json)",
      "Read(./build)"
    ],
    "ask": [
      "Bash(git push:*)"
    ]
  }
}
@ -0,0 +1,133 @@

---
name: docs-cli-workflow
description: Guides when to use docs create/edit CLI tools versus direct file editing for InfluxData documentation.
author: InfluxData
version: "1.0"
---

# docs CLI Workflow Guidance

## Purpose

Help recognize when to suggest the `docs create` or `docs edit` CLI tools instead of direct file editing.
These tools provide scaffolding, context gathering, and education about conventions that direct editing misses.
## When This Skill Applies

Check for these trigger keywords in user messages:

- "new page", "new doc", "create documentation", "add a page"
- "edit this URL", "edit <https://docs>", "update this page" (with a URL)
- "document this feature", "write docs for"
- "I have a draft", "from this draft"
- Any docs.influxdata.com URL

**Skip this skill when:**

- User provides an explicit file path (e.g., "fix typo in content/influxdb3/...")
- Small fixes (typos, broken links)
- User says "just edit it" or similar
- Frontmatter-only changes
## Decision: Which Tool to Suggest

### Suggest `docs create` when

| Trigger                                | Why CLI is better                                         |
| -------------------------------------- | --------------------------------------------------------- |
| Content targets multiple products      | CLI scaffolds shared content pattern automatically        |
| User unsure where page should live     | CLI analyzes structure, suggests location                 |
| Draft references existing docs         | CLI extracts links, provides context to avoid duplication |
| User seems unfamiliar with conventions | CLI prompt includes style guide, shortcode examples       |
| Complex new feature documentation      | CLI gathers product metadata, version info                |

### Suggest `docs edit` when

| Trigger                                | Why CLI is better                                      |
| -------------------------------------- | ------------------------------------------------------ |
| User provides docs.influxdata.com URL  | CLI finds source file(s) including shared content      |
| User doesn't know source file location | CLI maps URL to file path(s)                           |
| Page uses shared content               | CLI identifies both frontmatter file AND shared source |

### Edit directly when

| Scenario                         | Why direct is fine              |
| -------------------------------- | ------------------------------- |
| User provides explicit file path | They already know where to edit |
| Small typo/link fixes            | CLI overhead not worth it       |
| User says "just edit it"         | Explicit preference to skip CLI |
| Frontmatter-only changes         | No content generation needed    |
## How to Suggest

When a trigger is detected, present a concise recommendation and wait for confirmation.

### For `docs create`

```
I'd recommend using the docs CLI for this:

npx docs create <draft-path> --products <product>

**Why**: [1-2 sentences explaining the specific benefit]

Options:
1. **Use CLI** - I'll run the command and guide you through product selection
2. **Edit directly** - Skip the CLI, I'll create/edit files manually

Which do you prefer?
```

### For `docs edit`

```
I can use the docs CLI to find the source files for this page:

npx docs edit <url>

**Why**: [1-2 sentences explaining the benefit]

Options:
1. **Use CLI** - I'll find and open the relevant files
2. **I know the file** - Tell me the path and I'll edit directly

Which do you prefer?
```
### Key principles

- Show the actual command (educational)
- Explain *why* for this specific case
- Always offer the direct alternative
- Keep it brief (4-6 lines max)
- **Wait for user confirmation before running**

## Edge Cases

| Situation                           | Behavior                                                 |
| ----------------------------------- | -------------------------------------------------------- |
| Already in a `docs create` workflow | Don't re-suggest                                         |
| URL points to non-existent page     | Suggest `docs create --url <url>` instead of `docs edit` |
| User provides both URL and draft    | Suggest `docs create --url <url> --from-draft <draft>`   |
| User declines CLI twice in session  | Stop suggesting, respect preference                      |
## After User Confirms

Run the appropriate command and let the CLI handle the rest.
No additional guidance needed—the CLI manages product selection, file generation, and context gathering.

## CLI Reference

```bash
# Create new documentation from a draft
npx docs create <draft-path> --products <product-key>

# Create at a specific URL location
npx docs create --url <url> --from-draft <draft-path>

# Find and open files for an existing page
npx docs edit <url>
npx docs edit --list <url>  # List files without opening
```

For full CLI documentation, run `npx docs --help`.
@ -0,0 +1,555 @@

---
name: hugo-template-dev
description: Hugo template development skill for InfluxData docs-v2. Enforces proper build and runtime testing to catch template errors that build-only validation misses.
author: InfluxData
version: "1.0"
---
# Hugo Template Development Skill

## Purpose

This skill enforces proper Hugo template development practices, including **mandatory runtime testing** to catch errors that static builds miss.

## Critical Testing Requirement

**Hugo's `npx hugo --quiet` only validates template syntax, not runtime execution.**

Template errors like accessing undefined fields, nil values, or incorrect type assertions only appear when Hugo actually renders pages. You MUST test templates by running the server.
## Mandatory Testing Protocol

### For ANY Hugo Template Change

After modifying files in `layouts/`, `layouts/partials/`, or `layouts/shortcodes/`:

```bash
# Step 1: Start Hugo server and capture output
npx hugo server --port 1314 2>&1 | head -50
```

**Success criteria:**

- No `error calling partial` messages
- No `can't evaluate field` errors
- No `template: ... failed` messages
- Server shows "Web Server is available at http://localhost:1314/"

**If errors appear:** Fix the template and repeat Step 1 before proceeding.

```bash
# Step 2: Verify the page renders (only after Step 1 passes)
curl -s -o /dev/null -w "%{http_code}" http://localhost:1314/PATH/TO/PAGE/
```

**Expected:** HTTP 200 status code

```bash
# Step 3: Stop the test server
pkill -f "hugo server --port 1314"
```
### Quick Test Command

Use this one-liner to test and get immediate feedback:

```bash
timeout 15 npx hugo server --port 1314 2>&1 | grep -E "(error|Error|ERROR|fail|FAIL)" | head -20; pkill -f "hugo server --port 1314" 2>/dev/null
```

If the output is empty, no errors were detected.
## Common Hugo Template Errors

### 1. Accessing Hyphenated Keys

**Wrong:**

```go
{{ .Site.Data.article-data.influxdb }}
```

**Correct:**

```go
{{ index .Site.Data "article-data" "influxdb" }}
```
### 2. Nil Field Access

**Wrong:**

```go
{{ range $articles }}
  {{ .path }} {{/* Fails if item is nil or wrong type */}}
{{ end }}
```

**Correct:**

```go
{{ range $articles }}
  {{ if . }}
    {{ with index . "path" }}
      {{ . }}
    {{ end }}
  {{ end }}
{{ end }}
```
### 3. Type Assertion on Interface{}

**Wrong:**

```go
{{ range $data }}
  {{ .fields.menuName }}
{{ end }}
```

**Correct:**

```go
{{ range $data }}
  {{ if isset . "fields" }}
    {{ $fields := index . "fields" }}
    {{ if isset $fields "menuName" }}
      {{ index $fields "menuName" }}
    {{ end }}
  {{ end }}
{{ end }}
```
### 4. Empty Map vs Nil Check

**Problem:** Hugo's `{{ if . }}` passes for empty maps `{}`:

```go
{{/* This doesn't catch empty maps */}}
{{ if $data }}
  {{ .field }} {{/* Still fails if $data is {} */}}
{{ end }}
```

**Solution:** Check for specific keys:

```go
{{ if and $data (isset $data "field") }}
  {{ index $data "field" }}
{{ end }}
```
## Hugo Data Access Patterns

### Safe Nested Access

```go
{{/* Build up access with nil checks at each level */}}
{{ $articleDataRoot := index .Site.Data "article-data" }}
{{ if $articleDataRoot }}
  {{ $influxdbData := index $articleDataRoot "influxdb" }}
  {{ if $influxdbData }}
    {{ $productData := index $influxdbData $dataKey }}
    {{ if $productData }}
      {{ with $productData.articles }}
        {{/* Safe to use . here */}}
      {{ end }}
    {{ end }}
  {{ end }}
{{ end }}
```
### Iterating Over Data Safely

```go
{{ range $idx, $item := $articles }}
  {{/* Declare variables with defaults */}}
  {{ $path := "" }}
  {{ $name := "" }}

  {{/* Safely extract values */}}
  {{ if isset $item "path" }}
    {{ $path = index $item "path" }}
  {{ end }}

  {{ if $path }}
    {{/* Now safe to use $path */}}
  {{ end }}
{{ end }}
```
## File Organization

### Layouts Directory Structure

```
layouts/
├── _default/        # Default templates
├── partials/        # Reusable template fragments
│   └── api/         # API-specific partials
├── shortcodes/      # Content shortcodes
└── TYPE/            # Type-specific templates (api/, etc.)
    └── single.html  # Single page template
```
### Partial Naming

- Use descriptive names: `api/sidebar-nav.html`, not `nav.html`
- Group related partials in subdirectories
- Include comments at the top describing purpose and required context
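For example, a header comment for a partial might look like this (a sketch; the partial name, context fields, and example call are illustrative, not from the repo):

```go
{{/*
  partials/api/sidebar-nav.html
  Purpose: Renders grouped navigation links for API reference pages.
  Context: expects a dict with "articles" (slice) and "currentPath" (string).
  Example: {{ partial "api/sidebar-nav.html" (dict "articles" $articles "currentPath" .RelPermalink) }}
*/}}
```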
## Separation of Concerns: Templates vs TypeScript

**Principle:** Hugo templates handle structure and data binding. TypeScript handles behavior and interactivity.

### What Goes Where

| Concern          | Location                    | Example                             |
| ---------------- | --------------------------- | ----------------------------------- |
| HTML structure   | `layouts/**/*.html`         | Navigation markup, tab containers   |
| Data binding     | `layouts/**/*.html`         | `{{ .Title }}`, `{{ range .Data }}` |
| Static styling   | `assets/styles/**/*.scss`   | Layout, colors, typography          |
| User interaction | `assets/js/components/*.ts` | Click handlers, scroll behavior     |
| State management | `assets/js/components/*.ts` | Active tabs, collapsed sections     |
| DOM manipulation | `assets/js/components/*.ts` | Show/hide, class toggling           |
### Anti-Pattern: Inline JavaScript in Templates

**Wrong - JavaScript mixed with template:**

```html
{{/* DON'T DO THIS */}}
<nav class="api-nav">
  {{ range $articles }}
    <button onclick="toggleSection('{{ .id }}')">{{ .name }}</button>
  {{ end }}
</nav>

<script>
  function toggleSection(id) {
    document.getElementById(id).classList.toggle('is-open');
  }
</script>
```
**Correct - Clean separation:**

Template (`layouts/partials/api/sidebar-nav.html`):

```html
<nav class="api-nav" data-component="api-nav">
  {{ range $articles }}
    <button class="api-nav-group-header" aria-expanded="false">
      {{ .name }}
    </button>
    <ul class="api-nav-group-items">
      {{/* items */}}
    </ul>
  {{ end }}
</nav>
```
TypeScript (`assets/js/components/api-nav.ts`):

```typescript
interface ApiNavOptions {
  component: HTMLElement;
}

export default function initApiNav({ component }: ApiNavOptions): void {
  const headers = component.querySelectorAll('.api-nav-group-header');

  headers.forEach((header) => {
    header.addEventListener('click', () => {
      const isOpen = header.classList.toggle('is-open');
      header.setAttribute('aria-expanded', String(isOpen));
      header.nextElementSibling?.classList.toggle('is-open', isOpen);
    });
  });
}
```
Register in `main.js`:

```javascript
import initApiNav from './components/api-nav.js';

const componentRegistry = {
  'api-nav': initApiNav,
  // ... other components
};
```
### Data Passing Pattern

Pass Hugo data to TypeScript via `data-*` attributes:

Template:

```html
<div
  data-component="api-toc"
  data-headings="{{ .headings | jsonify | safeHTMLAttr }}"
  data-scroll-offset="80"
>
</div>
```
TypeScript:

```typescript
interface TocOptions {
  component: HTMLElement;
}

interface TocData {
  headings: string[];
  scrollOffset: number;
}

function parseData(component: HTMLElement): TocData {
  const headingsRaw = component.dataset.headings;
  const headings = headingsRaw ? JSON.parse(headingsRaw) : [];
  const scrollOffset = parseInt(component.dataset.scrollOffset || '0', 10);

  return { headings, scrollOffset };
}

export default function initApiToc({ component }: TocOptions): void {
  const data = parseData(component);
  // Use data.headings and data.scrollOffset
}
```
### Minimal Inline Scripts (Exception)

The **only** acceptable inline scripts are minimal initialization snippets that MUST run before component registration:

```html
{{/* Acceptable: Critical path, no logic, runs immediately */}}
<script>
  document.documentElement.dataset.theme =
    localStorage.getItem('theme') || 'light';
</script>
```

Everything else belongs in `assets/js/`.
### File Organization for Components

```
assets/
├── js/
│   ├── main.js              # Entry point, component registry
│   ├── components/
│   │   ├── api-nav.ts       # API navigation behavior
│   │   ├── api-toc.ts       # Table of contents
│   │   ├── api-tabs.ts      # Tab switching (if needed beyond CSS)
│   │   └── api-scalar.ts    # Scalar/RapiDoc integration
│   └── utils/
│       └── dom-helpers.ts   # Shared DOM utilities
└── styles/
    └── layouts/
        └── _api-layout.scss # API-specific styles
```
### TypeScript Component Checklist

When creating a new interactive feature:

1. [ ] Create a TypeScript file in `assets/js/components/`
2. [ ] Define an interface for component options
3. [ ] Export a default initializer function
4. [ ] Register in the `main.js` componentRegistry
5. [ ] Add a `data-component` attribute to the HTML element
6. [ ] Pass data via `data-*` attributes (not inline JS)
7. [ ] Write Cypress tests for the component
8. [ ] **NO inline `<script>` tags in templates**
## Debugging Templates

### Enable Verbose Mode

```bash
npx hugo server --port 1314 --verbose 2>&1 | head -100
```

### Print Variables for Debugging

```go
{{/* Temporary debugging - REMOVE before committing */}}
<pre>{{ printf "%#v" $myVariable }}</pre>
```
### Check Data File Loading

```bash
# Verify data files exist and are valid YAML
head -20 data/article-data/influxdb/influxdb3-core/articles.yml
```
## Integration with CI/CD

### Pre-commit Hook (Recommended)

Add to `.lefthook.yml` or your pre-commit configuration:

```yaml
pre-commit:
  commands:
    hugo-template-test:
      glob: "layouts/**/*.html"
      run: |
        # timeout stops the server after 20s, so no separate pkill is needed
        if timeout 20 npx hugo server --port 1314 2>&1 | grep -E "error|Error"; then
          exit 1
        fi
```
### GitHub Actions Workflow

```yaml
- name: Test Hugo templates
  run: |
    npx hugo server --port 1314 &
    sleep 10
    curl -f http://localhost:1314/ || exit 1
    pkill -f hugo
```
### Cypress E2E Tests for UI Features

After template changes that affect UI functionality, run Cypress tests to verify:

**Run a specific test file against a content page:**

```bash
node cypress/support/run-e2e-specs.js \
  --spec "cypress/e2e/content/api-reference.cy.js" \
  content/influxdb3/core/reference/api/_index.md
```

**Run tests against a URL (for a deployed or running server):**

```bash
node cypress/support/run-e2e-specs.js \
  --spec "cypress/e2e/content/api-reference.cy.js" \
  http://localhost:<port>/influxdb3/core/reference/api/
```
**Example Cypress test structure for the API reference:**

```javascript
// cypress/e2e/content/api-reference.cy.js
describe('API Reference Documentation', () => {
  beforeEach(() => {
    cy.visit('/influxdb3/core/reference/api/');
  });

  it('displays 3-column layout with sidebar, content, and TOC', () => {
    cy.get('.sidebar').should('be.visible');
    cy.get('.api-content').should('be.visible');
    cy.get('.api-toc').should('be.visible');
  });

  it('switches tabs correctly', () => {
    cy.get('.tabs a').contains('Authentication').click();
    cy.get('.tab-content').contains('Bearer Token').should('be.visible');
  });

  it('displays API navigation in sidebar', () => {
    cy.get('.api-nav').should('be.visible');
    cy.get('.api-nav').contains('API v3');
  });

  it('TOC updates highlight on scroll', () => {
    cy.get('.api-toc-nav a').first().click();
    cy.get('.api-toc-nav a.is-active').should('exist');
  });
});
```
**Check for JavaScript console errors (a common pattern for feature development):**

```javascript
// cypress/e2e/content/my-component.cy.js
describe('My Component', () => {
  it('should not throw JavaScript console errors', () => {
    cy.visit('/path/to/page/');

    // Wait for the component to initialize
    cy.get('[data-component="my-component"]', { timeout: 5000 })
      .should('be.visible');

    cy.window().then((win) => {
      const logs = [];
      const originalError = win.console.error;

      // Intercept console.error calls
      win.console.error = (...args) => {
        logs.push(args.join(' '));
        originalError.apply(win.console, args);
      };

      // Allow time for async operations
      cy.wait(2000);

      cy.then(() => {
        // Filter for relevant errors (customize for your component)
        const relevantErrors = logs.filter(
          (log) =>
            log.includes('my-component') ||
            log.includes('Failed to parse') ||
            log.includes('is not a function')
        );
        expect(relevantErrors).to.have.length(0);
      });
    });
  });
});
```
This pattern is especially useful for catching:

- TypeScript/JavaScript runtime errors in components
- JSON parsing failures from `data-*` attributes
- Undefined function calls from missing imports
- Template data binding issues that only manifest at runtime

**Integrate Cypress into the development workflow:**

1. Create a test file in `cypress/e2e/content/` for your feature
2. Run tests after template changes to verify UI behavior
3. Include test execution in the PR checklist

**Quick Cypress commands:**

| Purpose                | Command                                                                                     |
| ---------------------- | ------------------------------------------------------------------------------------------- |
| Run specific spec      | `node cypress/support/run-e2e-specs.js --spec "path/to/spec.cy.js" content/path/to/page.md` |
| Run all E2E tests      | `yarn test:e2e`                                                                             |
| Run shortcode examples | `yarn test:shortcode-examples`                                                              |
## Quick Reference

| Action                    | Command                                                              |
| ------------------------- | -------------------------------------------------------------------- |
| Test templates (runtime)  | `npx hugo server --port 1314 2>&1 \| head -50`                       |
| Build only (insufficient) | `npx hugo --quiet`                                                   |
| Check specific page       | `curl -s -o /dev/null -w "%{http_code}" http://localhost:1314/path/` |
| Stop test server          | `pkill -f "hugo server --port 1314"`                                 |
| Debug data access         | `<pre>{{ printf "%#v" $var }}</pre>`                                 |

## Remember

1. **Never trust `npx hugo --quiet` alone** - it only checks syntax
2. **Always run the server** to test template changes
3. **Check error output first** before declaring success
4. **Use `isset` and `index`** for safe data access
5. **Hyphenated keys require the `index` function** - dot notation fails
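
Point 5 deserves an example: Hugo's dot notation can't parse map keys that contain hyphens, so access them with `index`, guarded by `isset`. A minimal sketch - the data file and key names here are hypothetical:

```go
{{/* Hypothetical data file: data/products.yml with an "influxdb3-core" key */}}
{{ $products := .Site.Data.products }}
{{ if isset $products "influxdb3-core" }}
  {{/* .Site.Data.products.influxdb3-core would fail to parse */}}
  {{ $core := index $products "influxdb3-core" }}
  <p>{{ $core.name }}</p>
{{ end }}
```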
## Related Agents

This skill focuses on Hugo template development practices. For specialized tasks, use:

- **hugo-ui-dev** - Hugo templates and SASS/CSS styling
- **ts-component-dev** - TypeScript component behavior and interactivity
- **ui-testing** - Cypress E2E testing for UI components
@@ -95,7 +95,7 @@ jobs:
            curl -L -H "Accept: application/vnd.github+json" \
              -H "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" \
              -o link-checker-info.json \
              "https://api.github.com/repos/influxdata/docs-v2/releases/tags/link-checker-v1.2.4"
              "https://api.github.com/repos/influxdata/docs-v2/releases/tags/link-checker-v1.2.5"

            # Extract download URL for linux binary
            DOWNLOAD_URL=$(jq -r '.assets[] | select(.name | test("link-checker.*linux")) | .url' link-checker-info.json)
@@ -38,6 +38,7 @@ tmp

# TypeScript build output
**/dist/
**/dist-lambda/

# User context files for AI assistant tools
.context/*

@@ -45,3 +46,17 @@ tmp

# External repos
.ext/*

# Lambda deployment artifacts
deploy/llm-markdown/lambda-edge/markdown-generator/*.zip
deploy/llm-markdown/lambda-edge/markdown-generator/package-lock.json
deploy/llm-markdown/lambda-edge/markdown-generator/.package-tmp/
deploy/llm-markdown/lambda-edge/markdown-generator/yarn.lock
deploy/llm-markdown/lambda-edge/markdown-generator/config.json

# JavaScript/TypeScript build artifacts
*.tsbuildinfo
*.d.ts
*.d.ts.map
*.js.map
.eslintcache
@@ -3,4 +3,7 @@
**/.svn
**/.hg
**/node_modules
assets/jsconfig.json

# Markdown files - Prettier insists on escaping common formatting
**.md
@@ -4,5 +4,5 @@ routes:
    headers:
      Cache-Control: "max-age=630720000, no-transform, public"
    gzip: true
  - route: "^.+\\.(html|xml|json|js)$"
  - route: "^.+\\.(html|xml|json|js|md)$"
    gzip: true
@@ -195,6 +195,78 @@ source: /shared/path/to/content.md

For complete details including examples and best practices, see the [Source section in DOCS-FRONTMATTER.md](DOCS-FRONTMATTER.md#source).

### Excluding Internal Flags from Documentation Audits

When documenting CLI commands, you may encounter internal flags or variable names from the source code that aren't intended for end-user documentation.
Use structured HTML comments to mark these flags as excluded from documentation audits.

#### Standard exclude comment format

Place the comment immediately before the **Options** section in CLI reference documentation:

```markdown
## Options

<!--docs:exclude
--internal-flag-name: reason for exclusion
--another-internal-flag: reason for exclusion
-->

| Option | Description |
```

#### When to exclude flags

Exclude flags and options that are:

- **Internal variable names**: Source code argument names that aren't exposed as CLI flags
- **Hidden test flags**: Development or testing flags not meant for production use
- **Deprecated aliases**: Old flag names that are maintained for backward compatibility but shouldn't be documented
- **Implementation details**: Flags that expose internal implementation details

#### Examples

**Example 1: Internal variable name**

```markdown
## Options

<!--docs:exclude
--database-name: internal variable, use positional <DATABASE_NAME>
-->
```

**Example 2: Multiple exclusions**

```markdown
## Options

<!--docs:exclude
--table-name: internal variable, use positional <TABLE_NAME>
--trigger-name: internal variable, use positional <TRIGGER_NAME>
-->
```

**Example 3: Hidden test flags**

```markdown
## Options

<!--docs:exclude
--test-mode: hidden test flag, not for production use
--serve-invocation-method: internal implementation detail
-->
```

#### Audit tool behavior

Documentation audit tools should:

1. Parse HTML comments with the `docs:exclude` identifier
2. Extract flag names and exclusion reasons using the pattern `--flag-name: reason`
3. Skip reporting "missing documentation" for excluded flags
4. Support both single-line and multi-line exclusion lists
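
The parsing steps above can be sketched in a few lines. This is an illustrative sketch only, not the actual audit tooling; the comment format is the one documented here:

```javascript
// Sketch: extract excluded flags from docs:exclude HTML comments.
// Illustrative only - not the real audit tool.
function parseExcludedFlags(markdown) {
  const excluded = new Map();
  // Match every <!--docs:exclude ... --> block, including multi-line blocks
  const blocks = markdown.match(/<!--docs:exclude([\s\S]*?)-->/g) || [];
  for (const block of blocks) {
    // Each entry follows the documented pattern: --flag-name: reason
    for (const [, flag, reason] of block.matchAll(/^(--[\w-]+):\s*(.+)$/gm)) {
      excluded.set(flag, reason.trim());
    }
  }
  return excluded;
}
```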
<!-- agent:instruct: essential -->
### Common Shortcodes Reference
@@ -0,0 +1,352 @@
# Deploying InfluxData Documentation

This guide covers deploying the docs-v2 site to staging and production environments, as well as LLM markdown generation.

## Table of Contents

- [Staging Deployment](#staging-deployment)
- [Production Deployment](#production-deployment)
- [LLM Markdown Generation](#llm-markdown-generation)
- [Testing and Validation](#testing-and-validation)
- [Troubleshooting](#troubleshooting)

## Staging Deployment

Staging deployments are manual and run locally with your AWS credentials.

### Prerequisites

1. **AWS Credentials** - Configure the AWS CLI with appropriate permissions:

   ```bash
   aws configure
   ```

2. **s3deploy** - Install the s3deploy binary:

   ```bash
   ./deploy/ci-install-s3deploy.sh
   ```

3. **Environment Variables** - Set the required variables:

   ```bash
   export STAGING_BUCKET="test2.docs.influxdata.com"
   export AWS_REGION="us-east-1"
   export STAGING_CF_DISTRIBUTION_ID="E1XXXXXXXXXX" # Optional
   ```

### Deploy to Staging

Use the staging deployment script:

```bash
yarn deploy:staging
```

Or run the script directly:

```bash
./scripts/deploy-staging.sh
```

### What the Script Does

1. **Builds the Hugo site** with staging configuration (`config/staging/hugo.yml`)
2. **Generates LLM-friendly Markdown** (`yarn build:md`)
3. **Uploads to S3** using s3deploy
4. **Invalidates the CloudFront cache** (if `STAGING_CF_DISTRIBUTION_ID` is set)

### Optional Environment Variables

Skip specific steps for faster iteration:

```bash
# Skip Hugo build (use existing public/)
export SKIP_BUILD=true

# Skip markdown generation
export SKIP_MARKDOWN=true

# Build only (no S3 upload)
export SKIP_DEPLOY=true
```

### Example: Test Markdown Generation Only

```bash
SKIP_DEPLOY=true ./scripts/deploy-staging.sh
```

## Production Deployment

Production deployments are **automatic** via CircleCI when merging to `master`.

### Workflow

1. **Build Job** (`.circleci/config.yml`):
   - Installs dependencies
   - Builds the Hugo site with the production config
   - Generates LLM-friendly Markdown (`yarn build:md`)
   - Persists the workspace for the deploy job

2. **Deploy Job**:
   - Attaches the workspace
   - Uploads to S3 using s3deploy
   - Invalidates the CloudFront cache
   - Posts a success notification to Slack

### Environment Variables (CircleCI)

Production deployment requires the following environment variables set in CircleCI:

- `BUCKET` - Production S3 bucket name
- `REGION` - AWS region
- `CF_DISTRIBUTION_ID` - CloudFront distribution ID
- `SLACK_WEBHOOK_URL` - Slack notification webhook

### Trigger Production Deploy

```bash
git push origin master
```

CircleCI automatically builds and deploys.

## LLM Markdown Generation

Both staging and production deployments generate LLM-friendly Markdown files at build time.

### Output Files

The build generates two types of markdown files in `public/`:

1. **Single-page markdown** (`index.md`)
   - Individual page content with frontmatter
   - Contains: title, description, URL, product, version, token estimate

2. **Section bundles** (`index.section.md`)
   - Aggregated section with all child pages
   - Includes the child page list in frontmatter
   - Optimized for LLM context windows

### Generation Script

```bash
# Generate all markdown
yarn build:md

# Generate for a specific path
node scripts/build-llm-markdown.js --path influxdb3/core/get-started

# Limit the number of files (for testing)
node scripts/build-llm-markdown.js --limit 100
```

### Configuration

Edit `scripts/build-llm-markdown.js` to adjust:

```javascript
// Skip files smaller than this (Hugo alias redirects)
const MIN_HTML_SIZE_BYTES = 1024;

// Token estimation ratio
const CHARS_PER_TOKEN = 4;

// Concurrency (workers)
const CONCURRENCY = process.env.CI ? 10 : 20;
```
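
The `estimated_tokens` value in the generated frontmatter is derived from the character count using the `CHARS_PER_TOKEN` ratio above. A minimal sketch of that estimate - the rounding choice here is an assumption, not necessarily what the script does:

```javascript
// Sketch: estimate LLM tokens from rendered markdown length,
// using the ~4 chars/token heuristic from the configuration above.
const CHARS_PER_TOKEN = 4;

function estimateTokens(markdown) {
  return Math.ceil(markdown.length / CHARS_PER_TOKEN);
}
```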

### Performance

- **Speed**: ~105 seconds for 5,000 pages + 500 sections
- **Memory**: ~300MB peak (safe for 2GB CircleCI)
- **Rate**: ~23 files/second with memory-bounded parallelism

## Making Deployment Changes

### During Initial Implementation

If you're making changes that affect `yarn build` commands or `.circleci/config.yml`:

1. **Read the surrounding context** in the CI file
2. **Notice** flags, such as `--destination workspace/public` on the Hugo build
3. **Ask**: "Does the build script need to know about environment details, for example, do paths differ between production and staging?"

### Recommended Prompt for Future Similar Work

> "This script will run in CI. Let me read the CI configuration to understand the build environment and directory structure before finalizing the implementation."

## Summary of Recommendations

| Strategy                           | Implementation                       | Effort |
| ---------------------------------- | ------------------------------------ | ------ |
| Read CI config before implementing | Process/habit change                 | Low    |
| Test on a feature branch first     | Push and watch CI before merging     | Low    |
| Add a CI validation step           | Add a file count check in config.yml | Low    |

## Testing and Validation

### Local Testing

Test markdown generation locally before deploying:

```bash
# Prerequisites
yarn install
yarn build:ts
npx hugo --quiet

# Generate markdown for testing
yarn build:md

# Generate markdown for a specific path
node scripts/build-llm-markdown.js --path influxdb3/core/get-started --limit 10

# Run validation tests
node cypress/support/run-e2e-specs.js \
  --spec "cypress/e2e/content/markdown-content-validation.cy.js"
```

### Validation Checks

The Cypress tests validate:

- ✅ No raw Hugo shortcodes (`{{< >}}` or `{{% %}}`)
- ✅ No HTML comments
- ✅ Proper YAML frontmatter with required fields
- ✅ UI elements removed (feedback forms, navigation)
- ✅ GitHub-style callouts (Note, Warning, etc.)
- ✅ Properly formatted tables, lists, and code blocks
- ✅ Product context metadata
- ✅ Clean link formatting

See [DOCS-TESTING.md](DOCS-TESTING.md) for comprehensive testing documentation.

## Troubleshooting

### s3deploy Not Found

Install the s3deploy binary:

```bash
./deploy/ci-install-s3deploy.sh
```

Verify the installation:

```bash
s3deploy -version
```

### Missing Environment Variables

Check that the required variables are set:

```bash
echo $STAGING_BUCKET
echo $AWS_REGION
```

Set them if missing:

```bash
export STAGING_BUCKET="test2.docs.influxdata.com"
export AWS_REGION="us-east-1"
```

### AWS Permission Errors

Ensure your AWS credentials have the required permissions:

- `s3:PutObject` - Upload files to S3
- `s3:DeleteObject` - Delete old files from S3
- `cloudfront:CreateInvalidation` - Invalidate the cache

Check your AWS profile:

```bash
aws sts get-caller-identity
```

### Hugo Build Fails

Check for:

- Missing dependencies (`yarn install`)
- TypeScript compilation errors (`yarn build:ts`)
- Invalid Hugo configuration

Build Hugo separately to isolate the issue:

```bash
yarn hugo --environment staging
```

### Markdown Generation Fails

Check that:

- The Hugo build completed successfully
- TypeScript is compiled (`yarn build:ts`)
- Sufficient memory is available

Test markdown generation separately:

```bash
yarn build:md --limit 10
```

### CloudFront Cache Not Invalidating

If you see stale content after deployment:

1. Check that `STAGING_CF_DISTRIBUTION_ID` is set correctly
2. Verify that your AWS credentials have the `cloudfront:CreateInvalidation` permission
3. Invalidate manually:

   ```bash
   aws cloudfront create-invalidation \
     --distribution-id E1XXXXXXXXXX \
     --paths "/*"
   ```

### Deployment Timing Out

For large deployments:

1. **Skip markdown generation** if unchanged:

   ```bash
   SKIP_MARKDOWN=true ./scripts/deploy-staging.sh
   ```

2. **Use s3deploy's incremental upload**:
   - s3deploy only uploads changed files
   - The first deploy is slower; subsequent deploys are faster

3. **Check network speed**:
   - Large uploads require good bandwidth
   - Consider deploying from an AWS region closer to the S3 bucket

## Deployment Checklist

### Before Deploying to Staging

- [ ] Run tests locally (`yarn lint`)
- [ ] Build Hugo successfully (`yarn hugo --environment staging`)
- [ ] Generate markdown successfully (`yarn build:md`)
- [ ] Set staging environment variables
- [ ] Have AWS credentials configured

### Before Merging to Master (Production)

- [ ] Test on staging first
- [ ] Verify LLM markdown quality
- [ ] Check for broken links (`yarn test:links`)
- [ ] Run code block tests (`yarn test:codeblocks:all`)
- [ ] Review CircleCI configuration changes
- [ ] Ensure all tests pass

## Related Documentation

- [Contributing Guide](DOCS-CONTRIBUTING.md)
- [Testing Guide](DOCS-TESTING.md)
- [CircleCI Configuration](.circleci/config.yml)
- [S3 Deploy Configuration](.s3deploy.yml)

DOCS-TESTING.md
@@ -11,12 +11,13 @@ This guide covers all testing procedures for the InfluxData documentation, inclu

## Test Types Overview

| Test Type | Purpose | Command |
|-----------|---------|---------|
| **Code blocks** | Validate shell/Python code examples | `yarn test:codeblocks:all` |
| **Link validation** | Check internal/external links | `yarn test:links` |
| **Style linting** | Enforce writing standards | `docker compose run -T vale` |
| **E2E tests** | UI and functionality testing | `yarn test:e2e` |
| Test Type               | Purpose                             | Command                      |
| ----------------------- | ----------------------------------- | ---------------------------- |
| **Code blocks**         | Validate shell/Python code examples | `yarn test:codeblocks:all`   |
| **Link validation**     | Check internal/external links       | `yarn test:links`            |
| **Style linting**       | Enforce writing standards           | `docker compose run -T vale` |
| **Markdown generation** | Generate LLM-friendly Markdown      | `yarn build:md`              |
| **E2E tests**           | UI and functionality testing        | `yarn test:e2e`              |

## Code Block Testing
@@ -70,7 +71,8 @@ See `./test/src/prepare-content.sh` for the full list of variables you may need.

For influxctl commands to run in tests, move or copy your `config.toml` file to the `./test` directory.

> [!Warning]
> \[!Warning]
>
> - The database you configure in `.env.test` and any written data may be deleted during test runs
> - Don't add your `.env.test` files to Git. Git is configured to ignore `.env*` files to prevent accidentally committing credentials
@@ -111,6 +113,7 @@ pytest-codeblocks has features for skipping tests and marking blocks as failed.
#### "Pytest collected 0 items"

Potential causes:

- Check test discovery options in `pytest.ini`
- Use `python` (not `py`) for Python code block language identifiers:

  ```python
@@ -121,6 +124,215 @@ Potential causes:
  # This is ignored
  ```

## LLM-Friendly Markdown Generation

The documentation includes tooling to generate LLM-friendly Markdown versions of documentation pages, both locally via CLI and on-demand via Lambda@Edge in production.

### Quick Start

```bash
# Prerequisites (run once)
yarn install
yarn build:ts
npx hugo --quiet

# Generate Markdown
node scripts/html-to-markdown.js --path influxdb3/core/get-started --limit 10

# Validate generated Markdown
node cypress/support/run-e2e-specs.js \
  --spec "cypress/e2e/content/markdown-content-validation.cy.js"
```

### Comprehensive Documentation

For complete documentation including prerequisites, usage examples, output formats, frontmatter structure, troubleshooting, and architecture details, see the inline documentation:

```bash
# View the first 150 lines in the terminal
head -150 scripts/html-to-markdown.js
```

The script documentation includes:

- Prerequisites and setup steps
- Command-line options and examples
- Output file types (single page vs section aggregation)
- Frontmatter structure for both output types
- Testing procedures
- Common issues and solutions
- Architecture overview
- Related files

### Related Files

- **CLI tool**: `scripts/html-to-markdown.js` - Comprehensive inline documentation
- **Core logic**: `scripts/lib/markdown-converter.js` - Shared conversion library
- **Lambda handler**: `deploy/llm-markdown/lambda-edge/markdown-generator/index.js` - Production deployment
- **Lambda docs**: `deploy/llm-markdown/README.md` - Deployment guide
- **Cypress tests**: `cypress/e2e/content/markdown-content-validation.cy.js` - Validation tests

### Frontmatter Structure

All generated markdown files include structured YAML frontmatter:

```yaml
---
title: Page Title
description: Page description for SEO
url: /influxdb3/core/get-started/
product: InfluxDB 3 Core
version: core
date: 2024-01-15T00:00:00Z
lastmod: 2024-11-20T00:00:00Z
type: page
estimated_tokens: 2500
---
```

Section pages include additional fields:

```yaml
---
type: section
pages: 4
child_pages:
  - title: Set up InfluxDB 3 Core
    url: /influxdb3/core/get-started/setup/
  - title: Write data
    url: /influxdb3/core/get-started/write/
---
```

### Testing Generated Markdown

#### Manual Testing

```bash
# Generate markdown with verbose output
node scripts/html-to-markdown.js --path influxdb3/core/get-started --limit 2 --verbose

# Check that files were created
ls -la public/influxdb3/core/get-started/*.md

# View generated content
cat public/influxdb3/core/get-started/index.md

# Check frontmatter
head -20 public/influxdb3/core/get-started/index.md
```

#### Automated Testing with Cypress

The repository includes comprehensive Cypress tests for markdown validation:

```bash
# Run all markdown validation tests
node cypress/support/run-e2e-specs.js --spec "cypress/e2e/content/markdown-content-validation.cy.js"

# Test a specific content file
node cypress/support/run-e2e-specs.js \
  --spec "cypress/e2e/content/markdown-content-validation.cy.js" \
  content/influxdb3/core/query-data/execute-queries/_index.md
```

The Cypress tests validate:

- ✅ No raw Hugo shortcodes (`{{< >}}` or `{{% %}}`)
- ✅ No HTML comments
- ✅ Proper YAML frontmatter with required fields
- ✅ UI elements removed (feedback forms, navigation)
- ✅ GitHub-style callouts (Note, Warning, etc.)
- ✅ Properly formatted tables, lists, and code blocks
- ✅ Product context metadata
- ✅ Clean link formatting
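
The first three checks can be sketched as plain string tests. This is illustrative only; the real assertions live in `cypress/e2e/content/markdown-content-validation.cy.js`:

```javascript
// Sketch: residue checks for generated markdown, outside Cypress.
// Illustrative only - the real assertions live in the Cypress spec.
function findConversionResidue(markdown) {
  const issues = [];
  // Raw Hugo shortcode delimiters should never survive conversion
  if (/\{\{[<%]/.test(markdown)) {
    issues.push('raw Hugo shortcode delimiters ({{< or {{%)');
  }
  // HTML comments should be stripped from the output
  if (/<!--[\s\S]*?-->/.test(markdown)) {
    issues.push('HTML comments');
  }
  // Output must open with a YAML frontmatter block
  if (!/^---\n[\s\S]+?\n---/.test(markdown)) {
    issues.push('missing YAML frontmatter');
  }
  return issues;
}
```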

### Common Issues and Solutions

#### Issue: "No article content found" warnings

**Cause**: The page doesn't have an `<article class="article--content">` element (common for index/list pages).

**Solution**: This is normal behavior. The converter skips pages without article content. To verify:

```bash
# Check HTML structure
grep -l 'article--content' public/path/to/page/index.html
```

#### Issue: "Cannot find module" errors

**Cause**: TypeScript is not compiled (`product-mappings.js` is missing).

**Solution**: Build TypeScript first:

```bash
yarn build:ts
ls -la dist/utils/product-mappings.js
```

#### Issue: Memory issues when processing all files

**Cause**: Attempting to process thousands of pages at once.

**Solution**: Use the `--limit` flag to process in batches:

```bash
# Process 1000 files at a time
node scripts/html-to-markdown.js --limit 1000
```

#### Issue: Missing or incorrect product detection

**Cause**: Product mappings are not up to date, or the path doesn't match known patterns.

**Solution**:

1. Rebuild TypeScript: `yarn build:ts`
2. Check product mappings in `assets/js/utils/product-mappings.ts`
3. Add new product paths if needed

### Validation Checklist

Before committing markdown generation changes:

- [ ] Run TypeScript build: `yarn build:ts`
- [ ] Build Hugo site: `npx hugo --quiet`
- [ ] Generate markdown for affected paths
- [ ] Run Cypress validation tests
- [ ] Manually check sample output files:
  - [ ] Frontmatter is valid YAML
  - [ ] No shortcode remnants (`{{<`, `{{%`)
  - [ ] No HTML comments (`<!--`, `-->`)
  - [ ] Product context is correct
  - [ ] Links are properly formatted
  - [ ] Code blocks have language identifiers
  - [ ] Tables render correctly

### Architecture

The markdown generation uses a shared library architecture:

```
docs-v2/
├── scripts/
│   ├── html-to-markdown.js        # CLI wrapper (filesystem operations)
│   └── lib/
│       └── markdown-converter.js  # Core conversion logic (shared library)
├── dist/
│   └── utils/
│       └── product-mappings.js    # Product detection (compiled from TS)
└── public/                        # Generated HTML + Markdown files
```

The shared library (`scripts/lib/markdown-converter.js`) is:

- Used by local markdown generation scripts
- Imported by the docs-tooling Lambda@Edge for on-demand generation
- Tested independently with isolated conversion logic

For deployment details, see [deploy/lambda-edge/markdown-generator/README.md](deploy/lambda-edge/markdown-generator/README.md).

## Link Validation with Link-Checker

Link validation uses the `link-checker` tool to validate internal and external links in documentation files.
@@ -158,8 +370,8 @@ chmod +x link-checker
./link-checker --version
```

> [!Note]
> Pre-built binaries are currently Linux x86_64 only. For macOS development, use Option 1 to build from source.
> \[!Note]
> Pre-built binaries are currently Linux x86\_64 only. For macOS development, use Option 1 to build from source.

```bash
# Clone and build link-checker
@@ -188,11 +400,11 @@ cp target/release/link-checker /usr/local/bin/
curl -L -H "Authorization: Bearer $(gh auth token)" \
  -o link-checker-linux-x86_64 \
  "https://github.com/influxdata/docs-tooling/releases/download/link-checker-v1.2.x/link-checker-linux-x86_64"

curl -L -H "Authorization: Bearer $(gh auth token)" \
  -o checksums.txt \
  "https://github.com/influxdata/docs-tooling/releases/download/link-checker-v1.2.x/checksums.txt"

# Create docs-v2 release
gh release create \
  --repo influxdata/docs-v2 \
@@ -209,7 +421,7 @@ cp target/release/link-checker /usr/local/bin/
sed -i 's/link-checker-v[0-9.]*/link-checker-v1.2.x/' .github/workflows/pr-link-check.yml
```

> [!Note]
> \[!Note]
> The manual distribution is required because docs-tooling is a private repository and the default GitHub token doesn't have cross-repository access for private repos.

#### Core Commands
@@ -230,6 +442,7 @@ link-checker config
The link-checker automatically handles relative link resolution based on the input type:

**Local Files → Local Resolution**

```bash
# When checking local files, relative links resolve to the local filesystem
link-checker check public/influxdb3/core/admin/scale-cluster/index.html
@@ -238,6 +451,7 @@ link-checker check public/influxdb3/core/admin/scale-cluster/index.html
```

**URLs → Production Resolution**

```bash
# When checking URLs, relative links resolve to the production site
link-checker check https://docs.influxdata.com/influxdb3/core/admin/scale-cluster/
@ -246,6 +460,7 @@ link-checker check https://docs.influxdata.com/influxdb3/core/admin/scale-cluste
|
|||
```
|
||||
|
||||
**Why This Matters**
|
||||
|
||||
- **Testing new content**: Tag pages generated locally will be found when testing local files
|
||||
- **Production validation**: Production URLs validate against the live site
|
||||
- **No false positives**: New content won't appear broken when testing locally before deployment
|
||||
|
|
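The two resolution modes above can be modeled with standard URL joining. The sketch below is illustrative only — the helper name and the `file://` base convention are assumptions, not the link-checker's actual implementation:

```python
from urllib.parse import urljoin

def resolve_link(relative_link: str, source: str) -> str:
    """Resolve a relative link against the base it was found in.

    For URLs, the base is the production page; for local files, the
    base is modeled here as a file:// URL under the public/ directory.
    """
    if source.startswith("http"):
        base = source
    else:
        base = "file:///" + source.lstrip("/")
    return urljoin(base, relative_link)
```

For example, `../scale-cluster/` found on a production admin page resolves to the sibling production page, while the same link in a local `public/` file resolves to a local path — which is why new tag pages are found locally before deployment.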
@ -321,6 +536,7 @@ The docs-v2 repository includes automated link checking for pull requests:
- **Results reporting**: Broken links reported as GitHub annotations with detailed summaries

The workflow automatically:

1. Detects content changes in PRs using GitHub Files API
2. Downloads latest link-checker binary from docs-v2 releases
3. Builds Hugo site and maps changed content to public HTML files

@ -405,6 +621,7 @@ docs-v2 uses [Lefthook](https://github.com/evilmartians/lefthook) to manage Git
### What Runs Automatically

When you run `git commit`, Git runs:

- **Vale**: Style linting (if configured)
- **Prettier**: Code formatting
- **Cypress**: Link validation tests

@ -459,6 +676,7 @@ For JavaScript code in the documentation UI (`assets/js`):
```

3. Start Hugo: `yarn hugo server`
4. In VS Code, select "Debug JS (debug-helpers)" configuration

Remember to remove debug statements before committing.

@ -490,6 +708,18 @@ yarn test:codeblocks:stop-monitors
- Format code to fit within 80 characters
- Use long options in command-line examples (`--option` vs `-o`)

### Markdown Generation

- Build Hugo site before generating markdown: `npx hugo --quiet`
- Compile TypeScript before generation: `yarn build:ts`
- Test on small subsets first using `--limit` flag
- Use `--verbose` flag to debug conversion issues
- Always run Cypress validation tests after generation
- Check sample output manually for quality
- Verify shortcodes are evaluated (no `{{<` or `{{%` in output)
- Ensure UI elements are removed (no "Copy page", "Was this helpful?")
- Test both single pages (`index.md`) and section pages (`index.section.md`)
### Link Validation

- Test links regularly, especially after content restructuring

@ -511,9 +741,14 @@ yarn test:codeblocks:stop-monitors
- **Scripts**: `.github/scripts/` directory
- **Test data**: `./test/` directory
- **Vale config**: `.ci/vale/styles/`
- **Markdown generation**:
  - `scripts/html-to-markdown.js` - CLI wrapper
  - `scripts/lib/markdown-converter.js` - Core conversion library
  - `deploy/lambda-edge/markdown-generator/` - Lambda deployment
  - `cypress/e2e/content/markdown-content-validation.cy.js` - Validation tests

## Getting Help

- **GitHub Issues**: [docs-v2 issues](https://github.com/influxdata/docs-v2/issues)
- **Good first issues**: [good-first-issue label](https://github.com/influxdata/docs-v2/issues?q=is%3Aissue+is%3Aopen+label%3Agood-first-issue)
- **InfluxData CLA**: [Sign here](https://www.influxdata.com/legal/cla/) for substantial contributions
@ -130,21 +130,9 @@ paths:
schema:
$ref: '#/components/schemas/LineProtocolLengthError'
'429':
description: Token is temporarily over quota. The Retry-After header describes when to try the write again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
description: Token is temporarily over quota or ingesters are resource constrained.
'503':
description: Server is temporarily unavailable to accept writes. The Retry-After header describes when to try the write again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
description: Server is temporarily unavailable to accept writes due to too many concurrent requests or insufficient healthy ingesters.
default:
description: Internal server error
content:

@ -293,13 +281,7 @@ paths:
type: string
format: binary
'429':
description: Token is temporarily over quota. The Retry-After header describes when to try the read again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
description: Token is temporarily over quota or the querier is resource constrained.
default:
description: Error processing query
content:

@ -479,13 +461,7 @@ paths:
type: string
format: binary
'429':
description: Token is temporarily over quota. The Retry-After header describes when to try the read again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
description: Token is temporarily over quota or queriers are resource constrained.
default:
description: Error processing query
content:
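The `429` and `503` responses above define `Retry-After` as a non-negative decimal integer of seconds. A minimal client-side sketch of honoring it — `send` is a hypothetical callable standing in for an actual HTTP request, not part of any official client:

```python
import time

def retry_delay(headers: dict, default: float = 1.0) -> float:
    """Return the delay in seconds suggested by a Retry-After header.

    The spec defines Retry-After as a non-negative decimal integer
    of seconds; fall back to a default when the header is absent.
    """
    value = headers.get("Retry-After")
    if value is None:
        return default
    return max(0, int(value))

def request_with_retry(send, max_attempts: int = 3) -> int:
    """Call send() -> (status, headers) and retry on 429/503."""
    status, headers = send()
    for _ in range(max_attempts - 1):
        if status not in (429, 503):
            break
        time.sleep(retry_delay(headers))  # honor server-suggested delay
        status, headers = send()
    return status
```

This sketch assumes integer header values only; production clients may also need to handle HTTP-date forms of `Retry-After`.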
@ -423,15 +423,8 @@ paths:
description: |
Service unavailable.

- Returns this error if
the server is temporarily unavailable to accept writes.
- Returns a `Retry-After` header that describes when to try the write again.
headers:
Retry-After:
description: Non-negative decimal integer indicating seconds to wait before retrying the request.
schema:
format: int32
type: integer
- Returns this error if the server is temporarily unavailable to accept writes due to concurrent request limits or insufficient healthy ingesters.

default:
$ref: '#/components/responses/GeneralServerError'
summary: Write data

@ -562,18 +555,10 @@ paths:
type: string
'429':
description: |
#### InfluxDB Cloud:
- returns this error if a **read** or **write** request exceeds your
plan's [adjustable service quotas](/influxdb3/cloud-dedicated/account-management/limits/#adjustable-service-quotas)
or if a **delete** request exceeds the maximum
[global limit](/influxdb3/cloud-dedicated/account-management/limits/#global-limits)
- returns `Retry-After` header that describes when to try the write again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
format: int32
type: integer
Too many requests.

- Returns this error if a **read** or **write** request exceeds rate
limits or if queriers or ingesters are resource constrained.
default:
content:
application/json:

@ -719,21 +704,9 @@ paths:

The response body contains details about the [rejected points](/influxdb3/cloud-dedicated/write-data/troubleshoot/#troubleshoot-rejected-points).
'429':
description: Token is temporarily over quota. The Retry-After header describes when to try the write again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
format: int32
type: integer
description: Token is temporarily over quota or ingesters are resource constrained.
'503':
description: Server is temporarily unavailable to accept writes. The Retry-After header describes when to try the write again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
format: int32
type: integer
description: Server is temporarily unavailable to accept writes due to too many concurrent requests or insufficient healthy ingesters.
default:
content:
application/json:
@ -130,21 +130,9 @@ paths:
schema:
$ref: '#/components/schemas/LineProtocolLengthError'
'429':
description: Token is temporarily over quota. The Retry-After header describes when to try the write again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
description: Token is temporarily over quota or ingesters are resource constrained.
'503':
description: Server is temporarily unavailable to accept writes. The Retry-After header describes when to try the write again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
description: Server is temporarily unavailable to accept writes due to too many concurrent requests or insufficient healthy ingesters.
default:
description: Internal server error
content:

@ -274,13 +262,7 @@ paths:
type: string
format: binary
'429':
description: Token is temporarily over quota. The Retry-After header describes when to try the read again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
description: Token is temporarily over quota or the querier is resource constrained.
default:
description: Error processing query
content:

@ -441,13 +423,7 @@ paths:
type: string
format: binary
'429':
description: Token is temporarily over quota. The Retry-After header describes when to try the read again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
description: Token is temporarily over quota or queriers are resource constrained.
default:
description: Error processing query
content:
@ -419,27 +419,15 @@ paths:
'429':
description: |
Too many requests.
headers:
Retry-After:
description: Non-negative decimal integer indicating seconds to wait before retrying the request.
schema:
format: int32
type: integer

- Returns this error if ingesters are resource constrained.
'500':
$ref: '#/components/responses/InternalServerError'
'503':
description: |
Service unavailable.

- Returns this error if
the server is temporarily unavailable to accept writes.
- Returns a `Retry-After` header that describes when to try the write again.
headers:
Retry-After:
description: Non-negative decimal integer indicating seconds to wait before retrying the request.
schema:
format: int32
type: integer
- Returns this error if the server is temporarily unavailable to accept writes due to concurrent request limits or insufficient healthy ingesters.
default:
$ref: '#/components/responses/GeneralServerError'
summary: Write data

@ -570,13 +558,9 @@ paths:
type: string
'429':
description: |
Token is temporarily over quota. The Retry-After header describes when to try the write again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
format: int32
type: integer
Too many requests.

- Returns this error if queriers are resource constrained.
default:
content:
application/json:

@ -678,21 +662,9 @@ paths:
$ref: '#/components/schemas/LineProtocolLengthError'
description: Write has been rejected because the payload is too large. Error message returns max size supported. All data in body was rejected and not written.
'429':
description: Token is temporarily over quota. The Retry-After header describes when to try the write again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
format: int32
type: integer
description: Too many requests. The service may be temporarily unavailable or ingesters are resource constrained.
'503':
description: Server is temporarily unavailable to accept writes. The Retry-After header describes when to try the write again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
format: int32
type: integer
description: Server is temporarily unavailable to accept writes due to too many concurrent requests or insufficient healthy ingesters.
default:
content:
application/json:
@ -17,8 +17,8 @@ description: |
- Perform administrative tasks and access system information

The API includes endpoints under the following paths:
- `/api/v3`: InfluxDB 3 Core native endpoints
- `/`: Compatibility endpoints for InfluxDB v1 workloads and clients
- `/api/v2/write`: Compatibility endpoint for InfluxDB v2 workloads and clients

<!-- TODO: verify where to host the spec that users can download.
@ -346,8 +346,8 @@ paths:
- Compatibility endpoints
- Write data
x-influxdata-guides:
- title: "Use compatibility APIs to write data"
href: "/influxdb3/core/write-data/http-api/compatibility-apis/"
- title: Use compatibility APIs to write data
href: /influxdb3/core/write-data/http-api/compatibility-apis/
/api/v2/write:
post:
operationId: PostV2Write

@ -439,21 +439,21 @@ paths:
- Compatibility endpoints
- Write data
x-influxdata-guides:
- title: "Use compatibility APIs to write data"
href: "/influxdb3/core/write-data/http-api/compatibility-apis/"
- title: Use compatibility APIs to write data
href: /influxdb3/core/write-data/http-api/compatibility-apis/
/api/v3/write_lp:
post:
operationId: PostWriteLP
summary: Write line protocol
description: |
Writes line protocol to the specified database.

This is the native InfluxDB 3 Core write endpoint that provides enhanced control
over write behavior with advanced parameters for high-performance and fault-tolerant operations.

Use this endpoint to send data in [line protocol](/influxdb3/core/reference/syntax/line-protocol/) format to InfluxDB.
Use query parameters to specify options for writing data.

#### Features

- **Partial writes**: Use `accept_partial=true` to allow partial success when some lines in a batch fail

@ -471,7 +471,7 @@ paths:
- Larger timestamps → Nanosecond precision (no conversion needed)

#### Related

- [Use the InfluxDB v3 write_lp API to write data](/influxdb3/core/write-data/http-api/v3-write-lp/)
parameters:
- $ref: '#/components/parameters/dbWriteParam'
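The precision note above ("larger timestamps → nanosecond precision") implies inferring precision from timestamp magnitude. A rough sketch of that idea — the digit thresholds here are assumptions for present-era timestamps, not the server's documented cutoffs:

```python
def infer_precision(ts: int) -> str:
    """Guess a Unix timestamp's precision from its magnitude.

    Assumes the timestamp is near the present era (~2025), so
    second-precision values have about 10 digits, millisecond
    values about 13, microsecond values about 16, and anything
    larger is already in nanoseconds (no conversion needed).
    """
    digits = len(str(abs(ts)))
    if digits <= 10:
        return "s"
    if digits <= 13:
        return "ms"
    if digits <= 16:
        return "us"
    return "ns"
```

This heuristic breaks for timestamps far in the past or future, which is why explicit precision parameters are generally safer.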
@ -863,8 +863,8 @@ paths:
- Query data
- Compatibility endpoints
x-influxdata-guides:
- title: "Use the InfluxDB v1 HTTP query API and InfluxQL to query data"
href: "/influxdb3/core/query-data/execute-queries/influxdb-v1-api/"
- title: Use the InfluxDB v1 HTTP query API and InfluxQL to query data
href: /influxdb3/core/query-data/execute-queries/influxdb-v1-api/
post:
operationId: PostExecuteV1Query
summary: Execute InfluxQL query (v1-compatible)

@ -982,8 +982,8 @@ paths:
- Query data
- Compatibility endpoints
x-influxdata-guides:
- title: "Use the InfluxDB v1 HTTP query API and InfluxQL to query data"
href: "/influxdb3/core/query-data/execute-queries/influxdb-v1-api/"
- title: Use the InfluxDB v1 HTTP query API and InfluxQL to query data
href: /influxdb3/core/query-data/execute-queries/influxdb-v1-api/
/health:
get:
operationId: GetHealth

@ -1099,7 +1099,7 @@ paths:
Use ISO 8601 date-time format (for example, "2025-12-31T23:59:59Z").

#### Deleting a database cannot be undone

Deleting a database is a destructive action.
Once a database is deleted, data stored in that database cannot be recovered.
responses:

@ -1111,6 +1111,24 @@ paths:
description: Database not found.
tags:
- Database
/api/v3/configure/database/retention_period:
delete:
operationId: DeleteDatabaseRetentionPeriod
summary: Remove database retention period
description: |
Removes the retention period from a database, setting it to infinite retention.
Data in the database will not expire based on time.
parameters:
- $ref: '#/components/parameters/db'
responses:
'200':
description: Success. Retention period removed from database.
'401':
$ref: '#/components/responses/Unauthorized'
'404':
description: Database not found.
tags:
- Database
/api/v3/configure/table:
post:
operationId: PostConfigureTable

@ -1142,7 +1160,7 @@ paths:
Use the `hard_delete_at` parameter to schedule a hard deletion.

#### Deleting a table cannot be undone

Deleting a table is a destructive action.
Once a table is deleted, data stored in that table cannot be recovered.
parameters:

@ -1224,7 +1242,7 @@ paths:
description: Cache not found.
tags:
- Cache data
- Table
/api/v3/configure/last_cache:
post:
operationId: PostConfigureLastCache

@ -1714,6 +1732,93 @@ paths:
tags:
- Authentication
- Token
/api/v3/configure/token:
delete:
operationId: DeleteToken
summary: Delete token
description: |
Deletes a token.
parameters:
- name: id
in: query
required: true
schema:
type: string
description: |
The ID of the token to delete.
responses:
'204':
description: Success. The token has been deleted.
'401':
$ref: '#/components/responses/Unauthorized'
'404':
description: Token not found.
tags:
- Authentication
- Token
/api/v3/configure/token/named_admin:
post:
operationId: PostCreateNamedAdminToken
summary: Create named admin token
description: |
Creates a named admin token.
A named admin token is an admin token with a specific name identifier.
parameters:
- name: name
in: query
required: true
schema:
type: string
description: |
The name for the admin token.
responses:
'201':
description: |
Success. The named admin token has been created.
The response body contains the token string and metadata.
content:
application/json:
schema:
$ref: '#/components/schemas/AdminTokenObject'
'401':
$ref: '#/components/responses/Unauthorized'
'409':
description: A token with this name already exists.
tags:
- Authentication
- Token
/api/v3/plugins/files:
put:
operationId: PutPluginFile
summary: Update plugin file
description: |
Updates a plugin file in the plugin directory.
x-security-note: Requires an admin token
responses:
'204':
description: Success. The plugin file has been updated.
'401':
$ref: '#/components/responses/Unauthorized'
'403':
description: Forbidden. Admin token required.
tags:
- Processing engine
/api/v3/plugins/directory:
put:
operationId: PutPluginDirectory
summary: Update plugin directory
description: |
Updates the plugin directory configuration.
x-security-note: Requires an admin token
responses:
'204':
description: Success. The plugin directory has been updated.
'401':
$ref: '#/components/responses/Unauthorized'
'403':
description: Forbidden. Admin token required.
tags:
- Processing engine
components:
parameters:
AcceptQueryHeader:

@ -2167,7 +2272,7 @@ components:
- `every:1w` - Every week
- `every:1M` - Every month
- `every:1y` - Every year

**Maximum interval**: 1 year

### Table-based triggers

@ -2285,7 +2390,7 @@ components:
description: |
The retention period for the database. Specifies how long data should be retained.
Use duration format (for example, "1d", "1h", "30m", "7d").
example: "7d"
example: 7d
description: Request schema for updating database configuration.
UpdateTableRequest:
type: object
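Retention periods above use duration strings such as "1d", "1h", "30m", and "7d". A small parser for just the units shown in those examples — the API may accept more units than this sketch assumes:

```python
import re

# Assumed unit set, based only on the documented examples
# ("1d", "1h", "30m", "7d"); not an exhaustive list.
UNIT_SECONDS = {"m": 60, "h": 3600, "d": 86400}

def parse_retention(period: str) -> int:
    """Convert a duration string such as "7d" into seconds."""
    match = re.fullmatch(r"(\d+)([mhd])", period)
    if match is None:
        raise ValueError(f"unsupported duration: {period!r}")
    value, unit = match.groups()
    return int(value) * UNIT_SECONDS[unit]
```

For example, "7d" works out to 604,800 seconds.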
@ -2301,7 +2406,7 @@ components:
description: |
The retention period for the table. Specifies how long data in this table should be retained.
Use duration format (for example, "1d", "1h", "30m", "7d").
example: "30d"
example: 30d
required:
- db
- table

@ -2312,29 +2417,29 @@ components:
license_type:
type: string
description: The type of license (for example, "enterprise", "trial").
example: "enterprise"
example: enterprise
expires_at:
type: string
format: date-time
description: The expiration date of the license in ISO 8601 format.
example: "2025-12-31T23:59:59Z"
example: '2025-12-31T23:59:59Z'
features:
type: array
items:
type: string
description: List of features enabled by the license.
example:
- "clustering"
- "processing_engine"
- "advanced_auth"
- clustering
- processing_engine
- advanced_auth
status:
type: string
enum:
- "active"
- "expired"
- "invalid"
- active
- expired
- invalid
description: The current status of the license.
example: "active"
example: active
description: Response schema for license information.
responses:
Unauthorized:

@ -2377,7 +2482,7 @@ components:
schema:
type: string
format: uuid
example: "01234567-89ab-cdef-0123-456789abcdef"
example: 01234567-89ab-cdef-0123-456789abcdef
securitySchemes:
BasicAuthentication:
type: http
@ -17,8 +17,8 @@ description: |
- Perform administrative tasks and access system information

The API includes endpoints under the following paths:
- `/api/v3`: InfluxDB 3 Enterprise native endpoints
- `/`: Compatibility endpoints for InfluxDB v1 workloads and clients
- `/api/v2/write`: Compatibility endpoint for InfluxDB v2 workloads and clients

<!-- TODO: verify where to host the spec that users can download.
@ -13,8 +13,8 @@ info:
|
|||
- Perform administrative tasks and access system information
|
||||
|
||||
The API includes endpoints under the following paths:
|
||||
- `/api/v3`: InfluxDB 3 Enterprise native endpoints
|
||||
- `/`: Compatibility endpoints for InfluxDB v1 workloads and clients
|
||||
- `/api/v3`: InfluxDB 3 Enterprise native endpoints
|
||||
- `/`: Compatibility endpoints for InfluxDB v1 workloads and clients
|
||||
- `/api/v2/write`: Compatibility endpoint for InfluxDB v2 workloads and clients
|
||||
|
||||
<!-- TODO: verify where to host the spec that users can download.
|
||||
|
|
@ -346,8 +346,8 @@ paths:
|
|||
- Compatibility endpoints
|
||||
- Write data
|
||||
x-influxdata-guides:
|
||||
- title: "Use compatibility APIs to write data"
|
||||
href: "/influxdb3/enterprise/write-data/http-api/compatibility-apis/"
|
||||
- title: Use compatibility APIs to write data
|
||||
href: /influxdb3/enterprise/write-data/http-api/compatibility-apis/
|
||||
/api/v2/write:
|
||||
post:
|
||||
operationId: PostV2Write
|
||||
|
|
@ -439,21 +439,21 @@ paths:
|
|||
- Compatibility endpoints
|
||||
- Write data
|
||||
x-influxdata-guides:
|
||||
- title: "Use compatibility APIs to write data"
|
||||
href: "/influxdb3/enterprise/write-data/http-api/compatibility-apis/"
|
||||
- title: Use compatibility APIs to write data
|
||||
href: /influxdb3/enterprise/write-data/http-api/compatibility-apis/
|
||||
/api/v3/write_lp:
|
||||
post:
|
||||
operationId: PostWriteLP
|
||||
summary: Write line protocol
|
||||
description: |
|
||||
Writes line protocol to the specified database.
|
||||
|
||||
|
||||
This is the native InfluxDB 3 Enterprise write endpoint that provides enhanced control
|
||||
over write behavior with advanced parameters for high-performance and fault-tolerant operations.
|
||||
|
||||
Use this endpoint to send data in [line protocol](/influxdb3/enterprise/reference/syntax/line-protocol/) format to InfluxDB.
|
||||
Use query parameters to specify options for writing data.
|
||||
|
||||
|
||||
#### Features
|
||||
|
||||
- **Partial writes**: Use `accept_partial=true` to allow partial success when some lines in a batch fail
|
||||
|
|
@ -471,7 +471,7 @@ paths:
|
|||
- Larger timestamps → Nanosecond precision (no conversion needed)
|
||||
|
||||
#### Related
|
||||
|
||||
|
||||
- [Use the InfluxDB v3 write_lp API to write data](/influxdb3/enterprise/write-data/http-api/v3-write-lp/)
|
||||
parameters:
|
||||
- $ref: '#/components/parameters/dbWriteParam'
|
||||
|
|
@ -863,8 +863,8 @@ paths:
|
|||
- Query data
|
||||
- Compatibility endpoints
|
||||
x-influxdata-guides:
|
||||
- title: "Use the InfluxDB v1 HTTP query API and InfluxQL to query data"
|
||||
href: "/influxdb3/enterprise/query-data/execute-queries/influxdb-v1-api/"
|
||||
- title: Use the InfluxDB v1 HTTP query API and InfluxQL to query data
|
||||
href: /influxdb3/enterprise/query-data/execute-queries/influxdb-v1-api/
|
||||
post:
|
||||
operationId: PostExecuteV1Query
|
||||
summary: Execute InfluxQL query (v1-compatible)
|
||||
|
|
@ -982,8 +982,8 @@ paths:
|
|||
- Query data
|
||||
- Compatibility endpoints
|
||||
x-influxdata-guides:
|
||||
- title: "Use the InfluxDB v1 HTTP query API and InfluxQL to query data"
|
||||
href: "/influxdb3/enterprise/query-data/execute-queries/influxdb-v1-api/"
|
||||
- title: Use the InfluxDB v1 HTTP query API and InfluxQL to query data
|
||||
href: /influxdb3/enterprise/query-data/execute-queries/influxdb-v1-api/
|
||||
/health:
|
||||
get:
|
||||
operationId: GetHealth
|
||||
|
|
@ -1099,7 +1099,7 @@ paths:
|
|||
Use ISO 8601 date-time format (for example, "2025-12-31T23:59:59Z").
|
||||
|
||||
#### Deleting a database cannot be undone
|
||||
|
||||
|
||||
Deleting a database is a destructive action.
|
||||
Once a database is deleted, data stored in that database cannot be recovered.
|
||||
responses:
|
||||
|
|
@ -1111,6 +1111,23 @@ paths:
|
|||
description: Database not found.
|
||||
tags:
|
||||
- Database
|
||||
/api/v3/configure/database/retention_period:
|
||||
delete:
|
||||
operationId: DeleteDatabaseRetentionPeriod
|
||||
summary: Remove database retention period
|
||||
description: |
|
||||
Removes the retention period from a database, setting it to infinite retention.
|
||||
parameters:
|
||||
- $ref: '#/components/parameters/db'
|
||||
responses:
|
||||
'204':
|
||||
description: Success. The database retention period has been removed.
|
||||
'401':
|
||||
$ref: '#/components/responses/Unauthorized'
|
||||
'404':
|
||||
description: Database not found.
|
||||
tags:
|
||||
- Database
|
||||
/api/v3/configure/table:
|
||||
post:
|
||||
operationId: PostConfigureTable
|
||||
|
|
@ -1142,7 +1159,7 @@ paths:
|
|||
Use the `hard_delete_at` parameter to schedule a hard deletion.
|
||||
|
||||
#### Deleting a table cannot be undone
|
||||
|
||||
|
||||
Deleting a table is a destructive action.
|
||||
Once a table is deleted, data stored in that table cannot be recovered.
|
||||
parameters:
|
||||
|
|
@ -1295,7 +1312,7 @@ paths:
|
|||
description: Cache not found.
|
||||
tags:
|
||||
- Cache data
|
||||
- Table
|
||||
- Table
|
||||
/api/v3/configure/last_cache:
|
||||
post:
|
||||
operationId: PostConfigureLastCache
|
||||
|
|
@ -1808,6 +1825,91 @@ paths:
|
|||
tags:
|
||||
- Authentication
|
||||
- Token
|
||||
/api/v3/configure/token:
|
||||
delete:
|
||||
operationId: DeleteToken
|
||||
summary: Delete token
|
||||
description: |
|
||||
Deletes a token.
|
||||
parameters:
|
||||
- name: id
|
||||
in: query
|
||||
required: true
|
||||
schema:
|
||||
type: string
|
||||
description: The ID of the token to delete.
|
||||
responses:
|
||||
'204':
|
||||
description: Success. The token has been deleted.
|
||||
'401':
|
||||
$ref: '#/components/responses/Unauthorized'
|
||||
'404':
|
||||
description: Token not found.
|
||||
tags:
|
||||
- Authentication
|
||||
- Token
|
||||
/api/v3/configure/token/named_admin:
|
||||
post:
|
||||
operationId: PostCreateNamedAdminToken
|
||||
summary: Create named admin token
|
||||
description: |
|
||||
Creates a named admin token.
|
||||
A named admin token is a special type of admin token with a custom name for identification and management.
|
||||
parameters:
|
||||
- name: name
|
||||
in: query
|
||||
required: true
|
||||
schema:
|
||||
type: string
|
||||
description: The name for the admin token.
|
||||
responses:
|
||||
'201':
|
||||
description: |
|
||||
Success. The named admin token has been created.
|
||||
The response body contains the token string and metadata.
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: '#/components/schemas/AdminTokenObject'
|
||||
'401':
|
||||
$ref: '#/components/responses/Unauthorized'
|
||||
'409':
|
||||
description: A token with this name already exists.
|
||||
tags:
|
||||
- Authentication
|
||||
- Token
|
||||
  /api/v3/plugins/files:
    put:
      operationId: PutPluginFile
      summary: Update plugin file
      description: |
        Updates a plugin file in the plugin directory.
      x-security-note: Requires an admin token
      responses:
        '204':
          description: Success. The plugin file has been updated.
        '401':
          $ref: '#/components/responses/Unauthorized'
        '403':
          description: Forbidden. Admin token required.
      tags:
        - Processing engine
  /api/v3/plugins/directory:
    put:
      operationId: PutPluginDirectory
      summary: Update plugin directory
      description: |
        Updates the plugin directory configuration.
      x-security-note: Requires an admin token
      responses:
        '204':
          description: Success. The plugin directory has been updated.
        '401':
          $ref: '#/components/responses/Unauthorized'
        '403':
          description: Forbidden. Admin token required.
      tags:
        - Processing engine
components:
  parameters:
    AcceptQueryHeader:
@@ -2142,7 +2244,7 @@ components:
      properties:
        db:
          type: string
          pattern: '^[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9]$|^[a-zA-Z0-9]$'
          pattern: ^[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9]$|^[a-zA-Z0-9]$
          description: |-
            The database name. Database names cannot contain underscores (_).
            Names must start and end with alphanumeric characters and can contain hyphens (-) in the middle.
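The database-name pattern above can be exercised directly; the regex is copied verbatim from the schema, while the sample names are illustrative:

```typescript
// The spec's db-name pattern: must start and end with an alphanumeric
// character, hyphens allowed in the middle, no underscores; the second
// alternative admits single-character names.
const dbName = /^[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9]$|^[a-zA-Z0-9]$/;

console.log(dbName.test('sensor-data')); // hyphens in the middle are fine
console.log(dbName.test('sensor_data')); // underscores are rejected
console.log(dbName.test('x'));           // single character matches
```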
@@ -2151,7 +2253,7 @@ components:
          description: |-
            The retention period for the database. Specifies how long data should be retained.
            Use duration format (for example, "1d", "1h", "30m", "7d").
          example: "7d"
          example: 7d
      required:
        - db
    CreateTableRequest:
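A sketch of how a client might interpret the duration strings shown above ("1h", "30m", "7d"). The unit-to-milliseconds mapping is an assumption for illustration; the server's accepted units may differ from this subset.

```typescript
// Assumed unit table: minutes, hours, days.
const UNIT_MS: Record<string, number> = { m: 60_000, h: 3_600_000, d: 86_400_000 };

function durationToMs(spec: string): number {
  const match = /^(\d+)([mhd])$/.exec(spec);
  if (!match) throw new Error(`unsupported duration: ${spec}`);
  return Number(match[1]) * UNIT_MS[match[2]];
}

console.log(durationToMs('7d')); // 604800000
```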
@@ -2188,7 +2290,7 @@ components:
          description: |-
            The retention period for the table. Specifies how long data in this table should be retained.
            Use duration format (for example, "1d", "1h", "30m", "7d").
          example: "30d"
          example: 30d
      required:
        - db
        - table
@@ -2314,7 +2416,7 @@ components:
        - `every:1w` - Every week
        - `every:1M` - Every month
        - `every:1y` - Every year

        **Maximum interval**: 1 year

        ### Table-based triggers
@@ -2432,7 +2534,7 @@ components:
          description: |
            The retention period for the database. Specifies how long data should be retained.
            Use duration format (for example, "1d", "1h", "30m", "7d").
          example: "7d"
          example: 7d
      description: Request schema for updating database configuration.
    UpdateTableRequest:
      type: object
@@ -2448,7 +2550,7 @@ components:
          description: |
            The retention period for the table. Specifies how long data in this table should be retained.
            Use duration format (for example, "1d", "1h", "30m", "7d").
          example: "30d"
          example: 30d
      required:
        - db
        - table
@@ -2459,29 +2561,29 @@
        license_type:
          type: string
          description: The type of license (for example, "enterprise", "trial").
          example: "enterprise"
          example: enterprise
        expires_at:
          type: string
          format: date-time
          description: The expiration date of the license in ISO 8601 format.
          example: "2025-12-31T23:59:59Z"
          example: '2025-12-31T23:59:59Z'
        features:
          type: array
          items:
            type: string
          description: List of features enabled by the license.
          example:
            - "clustering"
            - "processing_engine"
            - "advanced_auth"
            - clustering
            - processing_engine
            - advanced_auth
        status:
          type: string
          enum:
            - "active"
            - "expired"
            - "invalid"
            - active
            - expired
            - invalid
          description: The current status of the license.
          example: "active"
          example: active
      description: Response schema for license information.
  responses:
    Unauthorized:
@@ -2524,7 +2626,7 @@ components:
        schema:
          type: string
          format: uuid
          example: "01234567-89ab-cdef-0123-456789abcdef"
          example: 01234567-89ab-cdef-0123-456789abcdef
  securitySchemes:
    BasicAuthentication:
      type: http
@@ -0,0 +1,19 @@
module.exports = RemoveInternalOperations;

/** @type {import('@redocly/openapi-cli').OasDecorator} */
function RemoveInternalOperations() {
  return {
    Operation: {
      leave(operation, ctx) {
        // Redocly's Redoc natively respects x-internal: true.
        // Operations with x-internal: true remain in the bundled spec
        // but are hidden from the generated documentation.
        // This decorator preserves the x-internal marker without modification.
        if (operation['x-internal'] === true) {
          // Keep the operation in the spec with the x-internal flag.
          // No deletion - Redoc will hide it automatically.
        }
      }
    }
  }
}
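A quick check of the decorator's intentionally empty `leave` hook, reproduced inline (types added, unused `ctx` parameter dropped): an operation marked `x-internal: true` passes through the bundle untouched.

```typescript
type Operation = Record<string, unknown>;

// Inline reproduction of the decorator above for demonstration.
function RemoveInternalOperations() {
  return {
    Operation: {
      leave(operation: Operation) {
        if (operation['x-internal'] === true) {
          // Intentionally empty: Redoc hides x-internal operations itself.
        }
      },
    },
  };
}

const op: Operation = { operationId: 'InternalOnly', 'x-internal': true };
RemoveInternalOperations().Operation.leave(op);
console.log('x-internal' in op); // true - the operation survives bundling
```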
@@ -2,6 +2,7 @@ const {info, servers, tagGroups} = require('./docs-content.cjs');
const ReportTags = require('./rules/report-tags.cjs');
const ValidateServersUrl = require('./rules/validate-servers-url.cjs');
const RemovePrivatePaths = require('./decorators/paths/remove-private-paths.cjs');
const RemoveInternalOperations = require('./decorators/operations/remove-internal-operations.cjs');
const ReplaceShortcodes = require('./decorators/replace-shortcodes.cjs');
const SetInfo = require('./decorators/set-info.cjs');
const DeleteServers = require('./decorators/servers/delete-servers.cjs');
@@ -26,6 +27,7 @@ const decorators = {
  'set-servers': () => SetServers(servers()),
  'delete-servers': DeleteServers,
  'remove-private-paths': RemovePrivatePaths,
  'remove-internal-operations': RemoveInternalOperations,
  'strip-version-prefix': StripVersionPrefix,
  'strip-trailing-slash': StripTrailingSlash,
  'set-info': () => SetInfo(info()),
@@ -46,6 +48,7 @@ module.exports = {
    'docs/set-servers': 'error',
    'docs/delete-servers': 'error',
    'docs/remove-private-paths': 'error',
    'docs/remove-internal-operations': 'error',
    'docs/strip-version-prefix': 'error',
    'docs/strip-trailing-slash': 'error',
    'docs/set-info': 'error',
@@ -160,14 +160,21 @@ function getVersionSpecificConfig(configKey: string): unknown {
  // Try version-specific config first (e.g., ai_sample_questions__v1)
  if (version && version !== 'n/a') {
    const versionKey = `${configKey}__v${version}`;
    const versionConfig = productData?.product?.[versionKey];
    if (versionConfig) {
      return versionConfig;
    const product = productData?.product;
    if (product && typeof product === 'object' && !Array.isArray(product)) {
      const versionConfig = product[versionKey];
      if (versionConfig) {
        return versionConfig;
      }
    }
  }

  // Fall back to default config
  return productData?.product?.[configKey];
  const product = productData?.product;
  if (product && typeof product === 'object' && !Array.isArray(product)) {
    return product[configKey];
  }
  return undefined;
}

function getProductExampleQuestions(): string {
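The lookup order in the diff above can be sketched in isolation: try the versioned key `<configKey>__v<version>` first, then fall back to the bare key. The product object below is hypothetical sample data, not taken from the repository.

```typescript
// Simplified version of the versioned-config lookup.
function lookupConfig(
  product: Record<string, unknown>,
  configKey: string,
  version?: string
): unknown {
  if (version && version !== 'n/a') {
    const versioned = product[`${configKey}__v${version}`];
    if (versioned) return versioned;
  }
  return product[configKey];
}

// Hypothetical product data illustrating the fallback behavior.
const product = {
  ai_sample_questions: ['default'],
  ai_sample_questions__v1: ['v1-specific'],
};

console.log(lookupConfig(product, 'ai_sample_questions', '1')); // -> ['v1-specific']
console.log(lookupConfig(product, 'ai_sample_questions'));      // -> ['default']
```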
@ -0,0 +1,666 @@
|
|||
/**
|
||||
* Format Selector Component
|
||||
*
|
||||
* Provides a dropdown menu for users and AI agents to access documentation
|
||||
* in different formats (Markdown for LLMs, ChatGPT/Claude integration, MCP servers).
|
||||
*
|
||||
* FEATURES:
|
||||
* - Copy page/section as Markdown to clipboard
|
||||
* - Open page in ChatGPT or Claude with context
|
||||
* - Connect to MCP servers (Cursor, VS Code) - future enhancement
|
||||
* - Adaptive UI for leaf nodes (single pages) vs branch nodes (sections)
|
||||
* - Smart section download for large sections (>10 pages)
|
||||
*
|
||||
* UI PATTERN:
|
||||
* Matches Mintlify's format selector with dark dropdown, icons, and sublabels.
|
||||
* See `.context/Screenshot 2025-11-13 at 11.39.13 AM.png` for reference.
|
||||
*/
|
||||
|
||||
interface FormatSelectorConfig {
|
||||
pageType: 'leaf' | 'branch'; // Leaf = single page, Branch = section with children
|
||||
markdownUrl: string;
|
||||
sectionMarkdownUrl?: string; // For branch nodes - aggregated content
|
||||
markdownContent?: string; // For clipboard copy (lazy-loaded)
|
||||
pageTitle: string;
|
||||
pageUrl: string;
|
||||
|
||||
// For branch nodes (sections)
|
||||
childPageCount?: number;
|
||||
estimatedTokens?: number;
|
||||
sectionDownloadUrl?: string;
|
||||
|
||||
// AI integration URLs
|
||||
chatGptUrl: string;
|
||||
claudeUrl: string;
|
||||
|
||||
// Future MCP server links
|
||||
mcpCursorUrl?: string;
|
||||
mcpVSCodeUrl?: string;
|
||||
}
|
||||
|
||||
interface FormatSelectorOption {
|
||||
label: string;
|
||||
sublabel: string;
|
||||
icon: string; // SVG icon name or class
|
||||
action: () => void;
|
||||
href?: string; // For external links
|
||||
target?: string; // '_blank' for external links
|
||||
external: boolean; // Shows ↗ arrow
|
||||
visible: boolean; // Conditional display based on pageType/size
|
||||
dataAttribute: string; // For testing (e.g., 'copy-page', 'open-chatgpt')
|
||||
}
|
||||
|
||||
interface ComponentOptions {
|
||||
component: HTMLElement;
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialize format selector component
|
||||
* @param {ComponentOptions} options - Component configuration
|
||||
*/
|
||||
export default function FormatSelector(options: ComponentOptions) {
|
||||
const { component } = options;
|
||||
|
||||
// State
|
||||
let isOpen = false;
|
||||
let config: FormatSelectorConfig = {
|
||||
pageType: 'leaf',
|
||||
markdownUrl: '',
|
||||
pageTitle: '',
|
||||
pageUrl: '',
|
||||
chatGptUrl: '',
|
||||
claudeUrl: '',
|
||||
};
|
||||
|
||||
// DOM elements
|
||||
const button = component.querySelector('button') as HTMLButtonElement;
|
||||
const dropdownMenu = component.querySelector(
|
||||
'[data-dropdown-menu]'
|
||||
) as HTMLElement;
|
||||
|
||||
if (!button || !dropdownMenu) {
|
||||
console.error('Format selector: Missing required elements');
|
||||
return;
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialize component config from page context and data attributes
|
||||
*/
|
||||
function initConfig(): void {
|
||||
// page-context exports individual properties, not a detect() function
|
||||
const currentUrl = window.location.href;
|
||||
const currentPath = window.location.pathname;
|
||||
|
||||
// Determine page type (leaf vs branch)
|
||||
const childCount = parseInt(component.dataset.childCount || '0', 10);
|
||||
const pageType: 'leaf' | 'branch' = childCount > 0 ? 'branch' : 'leaf';
|
||||
|
||||
// Construct markdown URL
|
||||
// Hugo generates markdown files as index.md in directories matching the URL path
|
||||
let markdownUrl = currentPath;
|
||||
if (!markdownUrl.endsWith('.md')) {
|
||||
// Ensure path ends with /
|
||||
if (!markdownUrl.endsWith('/')) {
|
||||
markdownUrl += '/';
|
||||
}
|
||||
// Append index.md
|
||||
markdownUrl += 'index.md';
|
||||
}
|
||||
|
||||
// Construct section markdown URL (for branch pages only)
|
||||
let sectionMarkdownUrl: string | undefined;
|
||||
if (pageType === 'branch') {
|
||||
sectionMarkdownUrl = markdownUrl.replace('index.md', 'index.section.md');
|
||||
}
|
||||
|
||||
// Get page title from meta or h1
|
||||
const pageTitle =
|
||||
document
|
||||
.querySelector('meta[property="og:title"]')
|
||||
?.getAttribute('content') ||
|
||||
document.querySelector('h1')?.textContent ||
|
||||
document.title;
|
||||
|
||||
config = {
|
||||
pageType,
|
||||
markdownUrl,
|
||||
sectionMarkdownUrl,
|
||||
pageTitle,
|
||||
pageUrl: currentUrl,
|
||||
childPageCount: childCount,
|
||||
estimatedTokens: parseInt(component.dataset.estimatedTokens || '0', 10),
|
||||
sectionDownloadUrl: component.dataset.sectionDownloadUrl,
|
||||
|
||||
// AI integration URLs
|
||||
chatGptUrl: generateChatGPTUrl(pageTitle, currentUrl, markdownUrl),
|
||||
claudeUrl: generateClaudeUrl(pageTitle, currentUrl, markdownUrl),
|
||||
|
||||
// Future MCP server links
|
||||
mcpCursorUrl: component.dataset.mcpCursorUrl,
|
||||
mcpVSCodeUrl: component.dataset.mcpVSCodeUrl,
|
||||
};
|
||||
|
||||
// Update button label based on page type
|
||||
updateButtonLabel();
|
||||
}
|
||||
|
||||
/**
|
||||
* Update button label: "Copy page for AI" vs "Copy section for AI"
|
||||
*/
|
||||
function updateButtonLabel(): void {
|
||||
const label =
|
||||
config.pageType === 'leaf' ? 'Copy page for AI' : 'Copy section for AI';
|
||||
const buttonText = button.querySelector('[data-button-text]');
|
||||
if (buttonText) {
|
||||
buttonText.textContent = label;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Generate ChatGPT share URL with page context
|
||||
*/
|
||||
function generateChatGPTUrl(
|
||||
title: string,
|
||||
pageUrl: string,
|
||||
markdownUrl: string
|
||||
): string {
|
||||
// ChatGPT share URL pattern (as of 2025)
|
||||
// This may need updating based on ChatGPT's URL scheme
|
||||
const baseUrl = 'https://chatgpt.com';
|
||||
const markdownFullUrl = `${window.location.origin}${markdownUrl}`;
|
||||
const prompt = `Read from ${markdownFullUrl} so I can ask questions about it.`;
|
||||
return `${baseUrl}/?q=${encodeURIComponent(prompt)}`;
|
||||
}
|
||||
|
||||
/**
|
||||
* Generate Claude share URL with page context
|
||||
*/
|
||||
function generateClaudeUrl(
|
||||
title: string,
|
||||
pageUrl: string,
|
||||
markdownUrl: string
|
||||
): string {
|
||||
// Claude.ai share URL pattern (as of 2025)
|
||||
const baseUrl = 'https://claude.ai/new';
|
||||
const markdownFullUrl = `${window.location.origin}${markdownUrl}`;
|
||||
const prompt = `Read from ${markdownFullUrl} so I can ask questions about it.`;
|
||||
return `${baseUrl}?q=${encodeURIComponent(prompt)}`;
|
||||
}
|
||||
|
||||
/**
|
||||
* Fetch markdown content for clipboard copy
|
||||
*/
|
||||
async function fetchMarkdownContent(): Promise<string> {
|
||||
try {
|
||||
const response = await fetch(config.markdownUrl);
|
||||
if (!response.ok) {
|
||||
throw new Error(`Failed to fetch Markdown: ${response.statusText}`);
|
||||
}
|
||||
return await response.text();
|
||||
} catch (error) {
|
||||
console.error('Error fetching Markdown content:', error);
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Copy content to clipboard
|
||||
*/
|
||||
async function copyToClipboard(text: string): Promise<void> {
|
||||
try {
|
||||
await navigator.clipboard.writeText(text);
|
||||
showNotification('Copied to clipboard!', 'success');
|
||||
} catch (error) {
|
||||
console.error('Failed to copy to clipboard:', error);
|
||||
showNotification('Failed to copy to clipboard', 'error');
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Show notification (integrates with existing notifications module)
|
||||
*/
|
||||
function showNotification(message: string, type: 'success' | 'error'): void {
|
||||
// TODO: Integrate with existing notifications module
|
||||
// For now, use a simple console log
|
||||
console.log(`[${type.toUpperCase()}] ${message}`);
|
||||
|
||||
// Optionally add a simple visual notification
|
||||
const notification = document.createElement('div');
|
||||
notification.textContent = message;
|
||||
notification.style.cssText = `
|
||||
position: fixed;
|
||||
bottom: 20px;
|
||||
right: 20px;
|
||||
padding: 12px 20px;
|
||||
background: ${type === 'success' ? '#10b981' : '#ef4444'};
|
||||
color: white;
|
||||
border-radius: 6px;
|
||||
z-index: 10000;
|
||||
font-size: 14px;
|
||||
`;
|
||||
document.body.appendChild(notification);
|
||||
setTimeout(() => notification.remove(), 3000);
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle copy page action
|
||||
*/
|
||||
async function handleCopyPage(): Promise<void> {
|
||||
try {
|
||||
const markdown = await fetchMarkdownContent();
|
||||
await copyToClipboard(markdown);
|
||||
closeDropdown();
|
||||
} catch (error) {
|
||||
console.error('Failed to copy page:', error);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle copy section action (aggregates child pages)
|
||||
*/
|
||||
async function handleCopySection(): Promise<void> {
|
||||
try {
|
||||
// Fetch aggregated section markdown (includes all child pages)
|
||||
const url = config.sectionMarkdownUrl || config.markdownUrl;
|
||||
const response = await fetch(url);
|
||||
|
||||
if (!response.ok) {
|
||||
throw new Error(
|
||||
`Failed to fetch section markdown: ${response.statusText}`
|
||||
);
|
||||
}
|
||||
|
||||
const markdown = await response.text();
|
||||
await copyToClipboard(markdown);
|
||||
showNotification('Section copied to clipboard', 'success');
|
||||
closeDropdown();
|
||||
} catch (error) {
|
||||
console.error('Failed to copy section:', error);
|
||||
showNotification('Failed to copy section', 'error');
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle download page action (for single pages)
|
||||
* Commented out - not needed right now
|
||||
*/
|
||||
/*
|
||||
function handleDownloadPage(): void {
|
||||
// Trigger download of current page as markdown
|
||||
window.open(config.markdownUrl, '_self');
|
||||
closeDropdown();
|
||||
}
|
||||
*/
|
||||
|
||||
/**
|
||||
* Handle download section action
|
||||
* Commented out - not yet implemented
|
||||
*/
|
||||
/*
|
||||
function handleDownloadSection(): void {
|
||||
if (config.sectionDownloadUrl) {
|
||||
window.open(config.sectionDownloadUrl, '_self');
|
||||
closeDropdown();
|
||||
}
|
||||
}
|
||||
*/
|
||||
|
||||
/**
|
||||
* Handle external link action
|
||||
*/
|
||||
function handleExternalLink(url: string): void {
|
||||
window.open(url, '_blank', 'noopener,noreferrer');
|
||||
closeDropdown();
|
||||
}
|
||||
|
||||
/**
|
||||
* Build dropdown options based on config
|
||||
*/
|
||||
function buildOptions(): FormatSelectorOption[] {
|
||||
const options: FormatSelectorOption[] = [];
|
||||
|
||||
// Option 1: Copy page/section
|
||||
if (config.pageType === 'leaf') {
|
||||
options.push({
|
||||
label: 'Copy page for AI',
|
||||
sublabel: 'Clean Markdown optimized for AI assistants',
|
||||
icon: 'document',
|
||||
action: handleCopyPage,
|
||||
external: false,
|
||||
visible: true,
|
||||
dataAttribute: 'copy-page',
|
||||
});
|
||||
} else {
|
||||
options.push({
|
||||
label: 'Copy section for AI',
|
||||
sublabel: `${config.childPageCount} pages combined as clean Markdown for AI assistants`,
|
||||
icon: 'document',
|
||||
action: handleCopySection,
|
||||
external: false,
|
||||
visible: true,
|
||||
dataAttribute: 'copy-section',
|
||||
});
|
||||
}
|
||||
|
||||
// Option 1b: Download page (for leaf nodes)
|
||||
// Removed - not needed right now
|
||||
/*
|
||||
if (config.pageType === 'leaf' && config.markdownUrl) {
|
||||
options.push({
|
||||
label: 'Download page',
|
||||
sublabel: 'Download page as Markdown file',
|
||||
icon: 'download',
|
||||
action: handleDownloadPage,
|
||||
external: false,
|
||||
visible: true,
|
||||
dataAttribute: 'download-page',
|
||||
});
|
||||
}
|
||||
*/
|
||||
|
||||
// Option 2: Open in ChatGPT
|
||||
options.push({
|
||||
label: 'Open in ChatGPT',
|
||||
sublabel: 'Ask questions about this page',
|
||||
icon: 'chatgpt',
|
||||
action: () => handleExternalLink(config.chatGptUrl),
|
||||
href: config.chatGptUrl,
|
||||
target: '_blank',
|
||||
external: true,
|
||||
visible: true,
|
||||
dataAttribute: 'open-chatgpt',
|
||||
});
|
||||
|
||||
// Option 3: Open in Claude
|
||||
options.push({
|
||||
label: 'Open in Claude',
|
||||
sublabel: 'Ask questions about this page',
|
||||
icon: 'claude',
|
||||
action: () => handleExternalLink(config.claudeUrl),
|
||||
href: config.claudeUrl,
|
||||
target: '_blank',
|
||||
external: true,
|
||||
visible: true,
|
||||
dataAttribute: 'open-claude',
|
||||
});
|
||||
|
||||
// Future: Download section option
|
||||
// Commented out - not yet implemented
|
||||
/*
|
||||
if (config.pageType === 'branch') {
|
||||
const shouldShowDownload =
|
||||
(config.childPageCount && config.childPageCount > 10) ||
|
||||
(config.estimatedTokens && config.estimatedTokens >= 50000);
|
||||
|
||||
if (shouldShowDownload && config.sectionDownloadUrl) {
|
||||
options.push({
|
||||
label: 'Download section',
|
||||
sublabel: `Download all ${config.childPageCount} pages (.zip with /md and /txt folders)`,
|
||||
icon: 'download',
|
||||
action: handleDownloadSection,
|
||||
external: false,
|
||||
visible: true,
|
||||
dataAttribute: 'download-section',
|
||||
});
|
||||
}
|
||||
}
|
||||
*/
|
||||
|
||||
// Future: MCP server options
|
||||
// Commented out for now - will be implemented as future enhancement
|
||||
/*
|
||||
if (config.mcpCursorUrl) {
|
||||
options.push({
|
||||
label: 'Connect to Cursor',
|
||||
sublabel: 'Install MCP Server on Cursor',
|
||||
icon: 'cursor',
|
||||
action: () => handleExternalLink(config.mcpCursorUrl!),
|
||||
href: config.mcpCursorUrl,
|
||||
target: '_blank',
|
||||
external: true,
|
||||
visible: true,
|
||||
dataAttribute: 'connect-cursor',
|
||||
});
|
||||
}
|
||||
|
||||
if (config.mcpVSCodeUrl) {
|
||||
options.push({
|
||||
label: 'Connect to VS Code',
|
||||
sublabel: 'Install MCP Server on VS Code',
|
||||
icon: 'vscode',
|
||||
action: () => handleExternalLink(config.mcpVSCodeUrl!),
|
||||
href: config.mcpVSCodeUrl,
|
||||
target: '_blank',
|
||||
external: true,
|
||||
visible: true,
|
||||
dataAttribute: 'connect-vscode',
|
||||
});
|
||||
}
|
||||
*/
|
||||
|
||||
return options.filter((opt) => opt.visible);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get SVG icon for option
|
||||
*/
|
||||
function getIconSVG(iconName: string): string {
|
||||
const icons: Record<string, string> = {
|
||||
document: `<svg viewBox="0 0 20 20" fill="none" xmlns="http://www.w3.org/2000/svg">
|
||||
<path d="M6 2C4.89543 2 4 2.89543 4 4V16C4 17.1046 4.89543 18 6 18H14C15.1046 18 16 17.1046 16 16V7L11 2H6Z" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/>
|
||||
<path d="M11 2V7H16" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/>
|
||||
</svg>`,
|
||||
chatgpt: `<svg viewBox="0 0 721 721" fill="none" xmlns="http://www.w3.org/2000/svg">
|
||||
<g clip-path="url(#clip0_chatgpt)">
|
||||
<path d="M304.246 294.611V249.028C304.246 245.189 305.687 242.309 309.044 240.392L400.692 187.612C413.167 180.415 428.042 177.058 443.394 177.058C500.971 177.058 537.44 221.682 537.44 269.182C537.44 272.54 537.44 276.379 536.959 280.218L441.954 224.558C436.197 221.201 430.437 221.201 424.68 224.558L304.246 294.611ZM518.245 472.145V363.224C518.245 356.505 515.364 351.707 509.608 348.349L389.174 278.296L428.519 255.743C431.877 253.826 434.757 253.826 438.115 255.743L529.762 308.523C556.154 323.879 573.905 356.505 573.905 388.171C573.905 424.636 552.315 458.225 518.245 472.141V472.145ZM275.937 376.182L236.592 353.152C233.235 351.235 231.794 348.354 231.794 344.515V238.956C231.794 187.617 271.139 148.749 324.4 148.749C344.555 148.749 363.264 155.468 379.102 167.463L284.578 222.164C278.822 225.521 275.942 230.319 275.942 237.039V376.186L275.937 376.182ZM360.626 425.122L304.246 393.455V326.283L360.626 294.616L417.002 326.283V393.455L360.626 425.122ZM396.852 570.989C376.698 570.989 357.989 564.27 342.151 552.276L436.674 497.574C442.431 494.217 445.311 489.419 445.311 482.699V343.552L485.138 366.582C488.495 368.499 489.936 371.379 489.936 375.219V480.778C489.936 532.117 450.109 570.985 396.852 570.985V570.989ZM283.134 463.99L191.486 411.211C165.094 395.854 147.343 363.229 147.343 331.562C147.343 294.616 169.415 261.509 203.48 247.593V356.991C203.48 363.71 206.361 368.508 212.117 371.866L332.074 441.437L292.729 463.99C289.372 465.907 286.491 465.907 283.134 463.99ZM277.859 542.68C223.639 542.68 183.813 501.895 183.813 451.514C183.813 447.675 184.294 443.836 184.771 439.997L279.295 494.698C285.051 498.056 290.812 498.056 296.568 494.698L417.002 425.127V470.71C417.002 474.549 415.562 477.429 412.204 479.346L320.557 532.126C308.081 539.323 293.206 542.68 277.854 542.68H277.859ZM396.852 599.776C454.911 599.776 503.37 558.513 514.41 503.812C568.149 489.896 602.696 439.515 602.696 388.176C602.696 354.587 588.303 321.962 562.392 298.45C564.791 288.373 566.231 278.296 566.231 
268.224C566.231 199.611 510.571 148.267 446.274 148.267C433.322 148.267 420.846 150.184 408.37 154.505C386.775 133.392 357.026 119.958 324.4 119.958C266.342 119.958 217.883 161.22 206.843 215.921C153.104 229.837 118.557 280.218 118.557 331.557C118.557 365.146 132.95 397.771 158.861 421.283C156.462 431.36 155.022 441.437 155.022 451.51C155.022 520.123 210.682 571.466 274.978 571.466C287.931 571.466 300.407 569.549 312.883 565.228C334.473 586.341 364.222 599.776 396.852 599.776Z" fill="currentColor"/>
|
||||
</g>
|
||||
<defs>
|
||||
<clipPath id="clip0_chatgpt">
|
||||
<rect width="720" height="720" fill="white" transform="translate(0.607 0.1)"/>
|
||||
</clipPath>
|
||||
</defs>
|
||||
</svg>`,
|
||||
claude: `<svg viewBox="0 0 250 251" fill="none" xmlns="http://www.w3.org/2000/svg">
|
||||
<path d="M49.0541 166.749L98.2432 139.166L99.0541 136.75L98.2432 135.405H95.8108L87.5676 134.903L59.4595 134.151L35.1351 133.148L11.4865 131.894L5.54054 130.64L0 123.243L0.540541 119.607L5.54054 116.222L12.7027 116.849L28.5135 117.977L52.2973 119.607L69.4595 120.61L95 123.243H99.0541L99.5946 121.613L98.2432 120.61L97.1622 119.607L72.5676 102.932L45.9459 85.3796L32.027 75.2242L24.5946 70.0837L20.8108 65.3195L19.1892 54.7879L25.9459 47.2653L35.1351 47.8922L37.4324 48.5191L46.7568 55.6655L66.6216 71.0868L92.5676 90.1439L96.3514 93.2783L97.875 92.25L98.1081 91.5231L96.3514 88.6394L82.2973 63.1881L67.2973 37.2352L60.5405 26.4529L58.7838 20.0587C58.1033 17.3753 57.7027 15.1553 57.7027 12.4107L65.4054 1.87914L69.7297 0.5L80.1351 1.87914L84.4595 5.64042L90.9459 20.4348L101.351 43.6294L117.568 75.2242L122.297 84.6274L124.865 93.2783L125.811 95.9112H127.432V94.4067L128.784 76.6033L131.216 54.7879L133.649 26.7036L134.459 18.8049L138.378 9.27633L146.216 4.13591L152.297 7.01956L157.297 14.166L156.622 18.8049L153.649 38.1128L147.838 68.3285L144.054 88.6394H146.216L148.784 86.0065L159.054 72.4659L176.216 50.9012L183.784 42.3756L192.703 32.9724L198.378 28.4589H209.189L217.027 40.2442L213.514 52.4057L202.432 66.4478L193.243 78.3586L180.068 96.011L171.892 110.204L172.625 111.375L174.595 111.207L204.324 104.813L220.405 101.929L239.595 98.6695L248.243 102.682L249.189 106.819L245.811 115.219L225.27 120.234L201.216 125.124L165.397 133.556L165 133.875L165.468 134.569L181.622 136.032L188.514 136.408H205.405L236.892 138.79L245.135 144.181L250 150.826L249.189 155.966L236.486 162.361L219.459 158.349L179.595 148.82L165.946 145.435H164.054V146.563L175.405 157.722L196.351 176.528L222.432 200.851L223.784 206.869L220.405 211.633L216.892 211.132L193.919 193.83L185 186.057L165 169.131H163.649V170.886L168.243 177.656L192.703 214.392L193.919 225.676L192.162 229.311L185.811 231.568L178.919 230.314L164.459 210.129L149.73 187.561L137.838 167.25L136.402 168.157L129.324 243.73L126.081 247.616L118.514 
250.5L112.162 245.736L108.784 237.962L112.162 222.541L116.216 202.481L119.459 186.558L122.432 166.749L124.248 160.131L124.088 159.688L122.637 159.932L107.703 180.415L85 211.132L67.027 230.314L62.7027 232.07L55.2703 228.183L55.9459 221.287L60.1351 215.144L85 183.549L100 163.865L109.668 152.566L109.573 150.932L109.04 150.886L42.973 193.955L31.2162 195.46L26.0811 190.696L26.7568 182.922L29.1892 180.415L49.0541 166.749Z" fill="currentColor"/>
|
||||
</svg>`,
|
||||
download: `<svg viewBox="0 0 20 20" fill="none" xmlns="http://www.w3.org/2000/svg">
|
||||
<path d="M10 3V13M10 13L14 9M10 13L6 9" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/>
|
||||
<path d="M3 15V16C3 16.5523 3.44772 17 4 17H16C16.5523 17 17 16.5523 17 16V15" stroke-width="1.5" stroke-linecap="round"/>
|
||||
</svg>`,
|
||||
cursor: `<svg viewBox="0 0 20 20" fill="none" xmlns="http://www.w3.org/2000/svg">
|
||||
<path d="M3 3L17 10L10 12L8 17L3 3Z" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/>
|
||||
</svg>`,
|
||||
vscode: `<svg viewBox="0 0 20 20" fill="none" xmlns="http://www.w3.org/2000/svg">
|
||||
<path d="M14 3L6 10L3 7L14 3Z" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/>
|
||||
<path d="M14 17L6 10L3 13L14 17Z" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/>
|
||||
<path d="M14 3V17" stroke-width="1.5" stroke-linecap="round"/>
|
||||
</svg>`,
|
||||
};
|
||||
return icons[iconName] || icons.document;
|
||||
}
|
||||
|
||||
/**
|
||||
* Render dropdown options
|
||||
*/
|
||||
function renderOptions(): void {
|
||||
const options = buildOptions();
|
||||
dropdownMenu.innerHTML = '';
|
||||
|
||||
options.forEach((option) => {
|
||||
const optionEl = document.createElement(option.href ? 'a' : 'button');
|
||||
optionEl.classList.add('format-selector__option');
|
||||
optionEl.setAttribute('data-option', option.dataAttribute);
|
||||
|
||||
if (option.href) {
|
||||
(optionEl as HTMLAnchorElement).href = option.href;
|
||||
if (option.target) {
|
||||
(optionEl as HTMLAnchorElement).target = option.target;
|
||||
(optionEl as HTMLAnchorElement).rel = 'noopener noreferrer';
|
||||
}
|
||||
}
|
||||
|
||||
optionEl.innerHTML = `
|
||||
<span class="format-selector__icon">
|
||||
${getIconSVG(option.icon)}
|
||||
</span>
|
||||
<span class="format-selector__label-group">
|
||||
<span class="format-selector__label">
|
||||
${option.label}
|
||||
${option.external ? '<span class="format-selector__external">↗</span>' : ''}
|
||||
</span>
|
||||
<span class="format-selector__sublabel">${option.sublabel}</span>
|
||||
</span>
|
||||
`;
|
||||
|
||||
optionEl.addEventListener('click', (e) => {
|
||||
if (!option.href) {
|
||||
e.preventDefault();
|
||||
option.action();
|
||||
}
|
||||
});
|
||||
|
||||
dropdownMenu.appendChild(optionEl);
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Position dropdown relative to button using fixed positioning
|
||||
* Ensures dropdown stays within viewport bounds
|
||||
*/
|
||||
function positionDropdown(): void {
|
||||
const buttonRect = button.getBoundingClientRect();
|
||||
const dropdownWidth = dropdownMenu.offsetWidth;
|
||||
const viewportWidth = window.innerWidth;
|
||||
const padding = 8; // Minimum padding from viewport edge
|
||||
|
||||
// Always position dropdown below button with 8px gap
|
||||
dropdownMenu.style.top = `${buttonRect.bottom + 8}px`;
|
||||
|
||||
// Calculate ideal left position (right-aligned with button)
|
||||
let leftPos = buttonRect.right - dropdownWidth;
|
||||
|
||||
// Ensure dropdown doesn't go off the left edge
|
||||
if (leftPos < padding) {
|
||||
leftPos = padding;
|
||||
}
|
||||
|
||||
// Ensure dropdown doesn't go off the right edge
|
||||
if (leftPos + dropdownWidth > viewportWidth - padding) {
|
||||
leftPos = viewportWidth - dropdownWidth - padding;
|
||||
}
|
||||
|
||||
dropdownMenu.style.left = `${leftPos}px`;
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle resize events to reposition dropdown
|
||||
*/
|
||||
function handleResize(): void {
|
||||
if (isOpen) {
|
||||
positionDropdown();
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Open dropdown
|
||||
*/
|
||||
function openDropdown(): void {
|
||||
isOpen = true;
|
||||
dropdownMenu.classList.add('is-open');
|
||||
button.setAttribute('aria-expanded', 'true');
|
||||
|
||||
// Position dropdown relative to button
|
||||
positionDropdown();
|
||||
|
||||
// Add listeners for repositioning and closing
|
||||
setTimeout(() => {
|
||||
document.addEventListener('click', handleClickOutside);
|
||||
}, 0);
|
||||
window.addEventListener('resize', handleResize);
|
||||
window.addEventListener('scroll', handleResize, true); // Capture scroll on any element
|
||||
}
|
||||
|
||||
/**
|
||||
* Close dropdown
|
||||
*/
|
||||
function closeDropdown(): void {
|
||||
isOpen = false;
|
||||
dropdownMenu.classList.remove('is-open');
|
||||
button.setAttribute('aria-expanded', 'false');
|
||||
document.removeEventListener('click', handleClickOutside);
|
||||
window.removeEventListener('resize', handleResize);
|
||||
window.removeEventListener('scroll', handleResize, true);
|
||||
}
|
||||
|
||||
/**
|
||||
* Toggle dropdown
|
||||
*/
|
||||
function toggleDropdown(): void {
|
||||
if (isOpen) {
|
||||
closeDropdown();
|
||||
} else {
|
||||
openDropdown();
|
||||
}
|
||||
}
|
||||
|
||||
  /**
   * Handle click outside dropdown
   */
  function handleClickOutside(event: Event): void {
    if (!component.contains(event.target as Node)) {
      closeDropdown();
    }
  }
  /**
   * Handle button click
   */
  function handleButtonClick(event: Event): void {
    event.preventDefault();
    event.stopPropagation();
    toggleDropdown();
  }
  /**
   * Handle escape key
   */
  function handleKeyDown(event: KeyboardEvent): void {
    if (event.key === 'Escape' && isOpen) {
      closeDropdown();
      button.focus();
    }
  }
  /**
   * Initialize component
   */
  function init(): void {
    // Initialize config
    initConfig();

    // Render options
    renderOptions();

    // Add event listeners
    button.addEventListener('click', handleButtonClick);
    document.addEventListener('keydown', handleKeyDown);

    // Set initial ARIA attributes
    button.setAttribute('aria-expanded', 'false');
    button.setAttribute('aria-haspopup', 'true');
    dropdownMenu.setAttribute('role', 'menu');
  }

  // Initialize on load
  init();

  // Expose for debugging
  return {
    get config() {
      return config;
    },
    openDropdown,
    closeDropdown,
    renderOptions,
  };
}
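The left-edge/right-edge clamping done in `positionDropdown` above can be sketched as a pure helper (illustrative only; `clampLeft` is not part of the component):

```typescript
// Sketch of the viewport clamping in positionDropdown above:
// right-align the menu with the button, then keep it inside the
// viewport with `padding` pixels of margin on both edges.
function clampLeft(
  buttonRight: number,
  dropdownWidth: number,
  viewportWidth: number,
  padding: number
): number {
  let leftPos = buttonRight - dropdownWidth;
  if (leftPos < padding) {
    leftPos = padding;
  }
  if (leftPos + dropdownWidth > viewportWidth - padding) {
    leftPos = viewportWidth - dropdownWidth - padding;
  }
  return leftPos;
}

console.log(clampLeft(900, 300, 1000, 8)); // 600 (fits as-is)
console.log(clampLeft(200, 300, 1000, 8)); // 8 (clamped to left padding)
```

Because both conditions are re-checked on every call, the same helper covers resize and scroll repositioning.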
@@ -35,6 +35,7 @@ import DocSearch from './components/doc-search.js';
import FeatureCallout from './feature-callouts.js';
import FluxGroupKeysDemo from './flux-group-keys.js';
import FluxInfluxDBVersionsTrigger from './flux-influxdb-versions.js';
import FormatSelector from './components/format-selector.ts';
import InfluxDBVersionDetector from './influxdb-version-detector.ts';
import KeyBinding from './keybindings.js';
import ListFilters from './list-filters.js';

@@ -65,6 +66,7 @@ const componentRegistry = {
  'feature-callout': FeatureCallout,
  'flux-group-keys-demo': FluxGroupKeysDemo,
  'flux-influxdb-versions-trigger': FluxInfluxDBVersionsTrigger,
  'format-selector': FormatSelector,
  'influxdb-version-detector': InfluxDBVersionDetector,
  keybinding: KeyBinding,
  'list-filters': ListFilters,
@@ -1,12 +1,32 @@
/** This module retrieves browser context information and site data for the
/**
 * This module retrieves browser context information and site data for the
 * current page, version, and product.
 */
import { products } from './services/influxdata-products.js';
import { influxdbUrls } from './services/influxdb-urls.js';
import { getProductKeyFromPath } from './utils/product-mappings.js';

function getCurrentProductData() {
/**
 * Product data return type
 */
interface ProductDataResult {
  product: string | Record<string, unknown>;
  urls: Record<string, unknown>;
}

/**
 * Get current product data based on URL path
 */
function getCurrentProductData(): ProductDataResult {
  const path = window.location.pathname;
  const mappings = [

  interface ProductMapping {
    pattern: RegExp;
    product: Record<string, unknown> | string;
    urls: Record<string, unknown>;
  }

  const mappings: ProductMapping[] = [
    {
      pattern: /\/influxdb\/cloud\//,
      product: products.influxdb_cloud,
@@ -87,57 +107,58 @@ function getCurrentProductData() {
  return { product: 'other', urls: {} };
}

// Return the page context
// (cloud, serverless, oss/enterprise, dedicated, clustered, explorer, other)
function getContext() {
  if (/\/influxdb\/cloud\//.test(window.location.pathname)) {
    return 'cloud';
  } else if (/\/influxdb3\/core/.test(window.location.pathname)) {
    return 'core';
  } else if (/\/influxdb3\/enterprise/.test(window.location.pathname)) {
    return 'enterprise';
  } else if (/\/influxdb3\/cloud-serverless/.test(window.location.pathname)) {
    return 'serverless';
  } else if (/\/influxdb3\/cloud-dedicated/.test(window.location.pathname)) {
    return 'dedicated';
  } else if (/\/influxdb3\/clustered/.test(window.location.pathname)) {
    return 'clustered';
  } else if (/\/influxdb3\/explorer/.test(window.location.pathname)) {
    return 'explorer';
  } else if (
    /\/(enterprise_|influxdb).*\/v[1-2]\//.test(window.location.pathname)
  ) {
    return 'oss/enterprise';
  } else {
    return 'other';
  }
/**
 * Return the page context
 * (cloud, serverless, oss/enterprise, dedicated, clustered, core, enterprise, other)
 * Uses shared product key detection for consistency
 */
function getContext(): string {
  const productKey = getProductKeyFromPath(window.location.pathname);

  // Map product keys to context strings
  const contextMap: Record<string, string> = {
    influxdb_cloud: 'cloud',
    influxdb3_core: 'core',
    influxdb3_enterprise: 'enterprise',
    influxdb3_cloud_serverless: 'serverless',
    influxdb3_cloud_dedicated: 'dedicated',
    influxdb3_clustered: 'clustered',
    enterprise_influxdb: 'oss/enterprise',
    influxdb: 'oss/enterprise',
  };

  return contextMap[productKey || ''] || 'other';
}

// Store the host value for the current page
const currentPageHost = window.location.href.match(/^(?:[^/]*\/){2}[^/]+/g)[0];
const currentPageHost =
  window.location.href.match(/^(?:[^/]*\/){2}[^/]+/g)?.[0] || '';

function getReferrerHost() {
/**
 * Get referrer host from document.referrer
 */
function getReferrerHost(): string {
  // Extract the protocol and hostname of referrer
  const referrerMatch = document.referrer.match(/^(?:[^/]*\/){2}[^/]+/g);
  return referrerMatch ? referrerMatch[0] : '';
}

const context = getContext(),
  host = currentPageHost,
  hostname = location.hostname,
  path = location.pathname,
  pathArr = location.pathname.split('/').slice(1, -1),
  product = pathArr[0],
  productData = getCurrentProductData(),
  protocol = location.protocol,
  referrer = document.referrer === '' ? 'direct' : document.referrer,
  referrerHost = getReferrerHost(),
  // TODO: Verify this works since the addition of InfluxDB 3 naming
  // and the Core and Enterprise versions.
  version =
    /^v\d/.test(pathArr[1]) || pathArr[1]?.includes('cloud')
      ? pathArr[1].replace(/^v/, '')
      : 'n/a';
const context = getContext();
const host = currentPageHost;
const hostname = location.hostname;
const path = location.pathname;
const pathArr = location.pathname.split('/').slice(1, -1);
const product = pathArr[0];
const productData = getCurrentProductData();
const protocol = location.protocol;
const referrer = document.referrer === '' ? 'direct' : document.referrer;
const referrerHost = getReferrerHost();
// TODO: Verify this works since the addition of InfluxDB 3 naming
// and the Core and Enterprise versions.
const version =
  /^v\d/.test(pathArr[1]) || pathArr[1]?.includes('cloud')
    ? pathArr[1].replace(/^v/, '')
    : 'n/a';

export {
  context,
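The refactor above replaces the if/else chain with a table lookup keyed by product. A minimal standalone sketch of the same shape (URL patterns and map abbreviated for illustration; the real code imports `getProductKeyFromPath`):

```typescript
// Standalone sketch of the table-lookup refactor above.
// The pattern and context maps are abbreviated; the real module
// derives the product key via getProductKeyFromPath.
const URL_PATTERNS: Record<string, string> = {
  '/influxdb3/core/': 'influxdb3_core',
  '/influxdb/cloud/': 'influxdb_cloud',
};

const CONTEXT_MAP: Record<string, string> = {
  influxdb3_core: 'core',
  influxdb_cloud: 'cloud',
};

function getContext(path: string): string {
  const key = Object.entries(URL_PATTERNS).find(([pattern]) =>
    path.includes(pattern)
  )?.[1];
  return CONTEXT_MAP[key || ''] || 'other';
}

console.log(getContext('/influxdb3/core/get-started/')); // 'core'
console.log(getContext('/random/page/')); // 'other'
```

Adding a product then means adding one map entry rather than another `else if` branch.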
@@ -0,0 +1,126 @@
/**
 * Node.js module shim for TypeScript code that runs in both browser and Node.js
 *
 * This utility provides conditional imports for Node.js-only modules, allowing
 * TypeScript files to be bundled for the browser (via Hugo/esbuild) while still
 * working in Node.js environments.
 *
 * @module utils/node-shim
 */

/**
 * Detect if running in Node.js vs browser environment
 */
export const isNode =
  typeof process !== 'undefined' &&
  process.versions != null &&
  process.versions.node != null;

/**
 * Node.js module references (lazily loaded in Node.js environment)
 */
export interface NodeModules {
  fileURLToPath: (url: string) => string;
  dirname: (path: string) => string;
  join: (...paths: string[]) => string;
  readFileSync: (path: string, encoding: BufferEncoding) => string;
  existsSync: (path: string) => boolean;
  yaml: { load: (content: string) => unknown };
}

let nodeModulesCache: NodeModules | undefined;
/**
 * Lazy load Node.js modules (only when running in Node.js)
 *
 * This function dynamically imports Node.js built-in modules (`url`, `path`, `fs`)
 * and third-party modules (`js-yaml`) only when called in a Node.js environment.
 * In browser environments, this returns undefined and the imports are tree-shaken out.
 *
 * @returns Promise resolving to NodeModules or undefined
 *
 * @example
 * ```typescript
 * import { loadNodeModules, isNode } from './utils/node-shim.js';
 *
 * async function readConfig() {
 *   if (!isNode) return null;
 *
 *   const nodeModules = await loadNodeModules();
 *   if (!nodeModules) return null;
 *
 *   const configPath = nodeModules.join(__dirname, 'config.yml');
 *   if (nodeModules.existsSync(configPath)) {
 *     const content = nodeModules.readFileSync(configPath, 'utf8');
 *     return nodeModules.yaml.load(content);
 *   }
 * }
 * ```
 */
export async function loadNodeModules(): Promise<NodeModules | undefined> {
  // Early return for browser - this branch will be eliminated by tree-shaking
  if (!isNode) {
    return undefined;
  }

  // Return cached modules if already loaded
  if (nodeModulesCache) {
    return nodeModulesCache;
  }

  // This code path is never reached in browser builds due to isNode check above
  // The dynamic imports will be tree-shaken out by esbuild
  try {
    // Use Function constructor to hide imports from static analysis
    // This prevents esbuild from trying to resolve them during browser builds
    const loadModule = new Function('moduleName', 'return import(moduleName)');

    const [urlModule, pathModule, fsModule, yamlModule] = await Promise.all([
      loadModule('url'),
      loadModule('path'),
      loadModule('fs'),
      loadModule('js-yaml'),
    ]);

    nodeModulesCache = {
      fileURLToPath: urlModule.fileURLToPath,
      dirname: pathModule.dirname,
      join: pathModule.join,
      readFileSync: fsModule.readFileSync,
      existsSync: fsModule.existsSync,
      yaml: yamlModule.default as { load: (content: string) => unknown },
    };

    return nodeModulesCache;
  } catch (err) {
    if (err instanceof Error) {
      console.warn('Failed to load Node.js modules:', err.message);
    }
    return undefined;
  }
}
/**
 * Get the directory path of the current module (Node.js only)
 *
 * @param importMetaUrl - import.meta.url from the calling module
 * @returns Directory path or undefined if not in Node.js
 *
 * @example
 * ```typescript
 * import { getModuleDir } from './utils/node-shim.js';
 *
 * const moduleDir = await getModuleDir(import.meta.url);
 * ```
 */
export async function getModuleDir(
  importMetaUrl: string
): Promise<string | undefined> {
  const nodeModules = await loadNodeModules();
  if (!nodeModules) {
    return undefined;
  }

  const filename = nodeModules.fileURLToPath(importMetaUrl);
  return nodeModules.dirname(filename);
}
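The environment check the shim hinges on is small enough to exercise standalone; this is a copy of the `isNode` expression above, with the defensive `!= null` checks that tolerate partial `process` shims:

```typescript
// Copy of the isNode expression from node-shim above.
// `process.versions.node` only exists under a real Node.js runtime;
// the loose `!= null` checks also reject undefined.
const isNode =
  typeof process !== 'undefined' &&
  process.versions != null &&
  process.versions.node != null;

console.log(isNode); // true under Node.js, false in a browser
```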
@@ -0,0 +1,234 @@
/**
 * Shared product mapping and detection utilities
 *
 * This module provides URL-to-product mapping for both browser and Node.js environments.
 * In Node.js, it reads from data/products.yml. In browser, it uses fallback mappings.
 *
 * @module utils/product-mappings
 */

import { isNode, loadNodeModules } from './node-shim.js';

/**
 * Product information interface
 */
export interface ProductInfo {
  /** Full product display name */
  name: string;
  /** Product version or context identifier */
  version: string;
}

/**
 * Full product data from products.yml
 */
export interface ProductData {
  name: string;
  altname?: string;
  namespace: string;
  menu_category?: string;
  versions?: string[];
  list_order?: number;
  latest?: string;
  latest_patch?: string;
  latest_patches?: Record<string, string>;
  latest_cli?: string | Record<string, string>;
  placeholder_host?: string;
  link?: string;
  succeeded_by?: string;
  detector_config?: {
    query_languages?: Record<string, unknown>;
    characteristics?: string[];
    detection?: {
      ping_headers?: Record<string, string>;
      url_contains?: string[];
    };
  };
  ai_sample_questions?: string[];
}

/**
 * Products YAML data structure
 */
type ProductsData = Record<string, ProductData>;

let productsData: ProductsData | null = null;
/**
 * Load products data from data/products.yml (Node.js only)
 */
async function loadProductsData(): Promise<ProductsData | null> {
  if (!isNode) {
    return null;
  }

  if (productsData) {
    return productsData;
  }

  try {
    // Lazy load Node.js modules using shared shim
    const nodeModules = await loadNodeModules();
    if (!nodeModules) {
      return null;
    }

    const __filename = nodeModules.fileURLToPath(import.meta.url);
    const __dirname = nodeModules.dirname(__filename);
    const productsPath = nodeModules.join(
      __dirname,
      '../../../data/products.yml'
    );

    if (nodeModules.existsSync(productsPath)) {
      const fileContents = nodeModules.readFileSync(productsPath, 'utf8');
      productsData = nodeModules.yaml.load(fileContents) as ProductsData;
      return productsData;
    }
  } catch (err) {
    if (err instanceof Error) {
      console.warn('Could not load products.yml:', err.message);
    }
  }

  return null;
}
/**
 * URL pattern to product key mapping
 * Used for quick lookups based on URL path
 */
const URL_PATTERN_MAP: Record<string, string> = {
  '/influxdb3/core/': 'influxdb3_core',
  '/influxdb3/enterprise/': 'influxdb3_enterprise',
  '/influxdb3/cloud-dedicated/': 'influxdb3_cloud_dedicated',
  '/influxdb3/cloud-serverless/': 'influxdb3_cloud_serverless',
  '/influxdb3/clustered/': 'influxdb3_clustered',
  '/influxdb3/explorer/': 'influxdb3_explorer',
  '/influxdb/cloud/': 'influxdb_cloud',
  '/influxdb/v2': 'influxdb',
  '/influxdb/v1': 'influxdb',
  '/enterprise_influxdb/': 'enterprise_influxdb',
  '/telegraf/': 'telegraf',
  '/chronograf/': 'chronograf',
  '/kapacitor/': 'kapacitor',
  '/flux/': 'flux',
};

/**
 * Get the product key from a URL path
 *
 * @param path - URL path (e.g., '/influxdb3/core/get-started/')
 * @returns Product key (e.g., 'influxdb3_core') or null
 */
export function getProductKeyFromPath(path: string): string | null {
  for (const [pattern, key] of Object.entries(URL_PATTERN_MAP)) {
    if (path.includes(pattern)) {
      return key;
    }
  }
  return null;
}

// Fallback product mappings (used in browser and as fallback in Node.js)
const PRODUCT_FALLBACK_MAP: Record<string, ProductInfo> = {
  influxdb3_core: { name: 'InfluxDB 3 Core', version: 'core' },
  influxdb3_enterprise: {
    name: 'InfluxDB 3 Enterprise',
    version: 'enterprise',
  },
  influxdb3_cloud_dedicated: {
    name: 'InfluxDB Cloud Dedicated',
    version: 'cloud-dedicated',
  },
  influxdb3_cloud_serverless: {
    name: 'InfluxDB Cloud Serverless',
    version: 'cloud-serverless',
  },
  influxdb3_clustered: { name: 'InfluxDB Clustered', version: 'clustered' },
  influxdb3_explorer: { name: 'InfluxDB 3 Explorer', version: 'explorer' },
  influxdb_cloud: { name: 'InfluxDB Cloud (TSM)', version: 'cloud' },
  influxdb: { name: 'InfluxDB', version: 'v1' }, // Will be refined below
  enterprise_influxdb: { name: 'InfluxDB Enterprise v1', version: 'v1' },
  telegraf: { name: 'Telegraf', version: 'v1' },
  chronograf: { name: 'Chronograf', version: 'v1' },
  kapacitor: { name: 'Kapacitor', version: 'v1' },
  flux: { name: 'Flux', version: 'v0' },
};
/**
 * Get product information from a URL path (synchronous)
 * Returns simplified product info with name and version
 *
 * @param path - URL path to check (e.g., '/influxdb3/core/get-started/')
 * @returns Product info or null if no match
 *
 * @example
 * ```typescript
 * const product = getProductFromPath('/influxdb3/core/admin/');
 * // Returns: { name: 'InfluxDB 3 Core', version: 'core' }
 * ```
 */
export function getProductFromPath(path: string): ProductInfo | null {
  const productKey = getProductKeyFromPath(path);
  if (!productKey) {
    return null;
  }

  // If we have cached YAML data (Node.js), use it
  if (productsData && productsData[productKey]) {
    const product = productsData[productKey];
    return {
      name: product.name,
      version: product.latest || product.versions?.[0] || 'unknown',
    };
  }

  // Use fallback map
  const fallbackInfo = PRODUCT_FALLBACK_MAP[productKey];
  if (!fallbackInfo) {
    return null;
  }

  // Handle influxdb product which can be v1 or v2
  if (productKey === 'influxdb') {
    return {
      name: path.includes('/v2') ? 'InfluxDB OSS v2' : 'InfluxDB OSS v1',
      version: path.includes('/v2') ? 'v2' : 'v1',
    };
  }

  return fallbackInfo;
}

/**
 * Initialize product data from YAML (Node.js only, async)
 * Call this in Node.js scripts to load product data before using getProductFromPath
 */
export async function initializeProductData(): Promise<void> {
  if (isNode && !productsData) {
    await loadProductsData();
  }
}

/**
 * Get full product data from products.yml (Node.js only)
 * Note: Call initializeProductData() first to load the YAML data
 *
 * @param productKey - Product key (e.g., 'influxdb3_core')
 * @returns Full product data object or null
 */
export function getProductData(productKey: string): ProductData | null {
  if (!isNode) {
    console.warn('getProductData() is only available in Node.js environment');
    return null;
  }

  // Use cached data (requires initializeProductData() to have been called)
  return productsData?.[productKey] || null;
}

/**
 * Export URL pattern map for external use
 */
export { URL_PATTERN_MAP };
@@ -0,0 +1,243 @@
/**
 * Format Selector Component Styles
 *
 * Dropdown menu for accessing documentation in LLM-friendly formats.
 * Uses theme colors to match light/dark modes.
 */

.format-selector {
  position: relative;
  display: inline-flex;
  align-items: center;
  margin-left: auto; // Right-align in title container
  margin-top: 0.5rem;

  // Position near article title
  .title & {
    margin-left: auto;
  }
}

.format-selector__button {
  display: inline-flex;
  align-items: center;
  gap: 0.5rem;
  padding: 0.5rem 0.75rem;
  background: $sidebar-search-bg;
  color: $article-text;
  border: 1px solid $nav-border;
  border-radius: $radius;
  font-size: 14px;
  font-weight: 500;
  line-height: 1;
  cursor: pointer;
  transition: all 0.2s ease;
  white-space: nowrap;
  box-shadow: 2px 2px 6px $sidebar-search-shadow;

  &:hover {
    border-color: $sidebar-search-highlight;
    box-shadow: 1px 1px 10px rgba($sidebar-search-highlight, .5);
  }

  &:focus {
    outline: 2px solid $sidebar-search-highlight;
    outline-offset: 2px;
  }

  &[aria-expanded='true'] {
    border-color: $sidebar-search-highlight;

    .format-selector__button-arrow svg {
      transform: rotate(180deg);
    }
  }
}
.format-selector__button-icon {
  display: inline-flex;
  align-items: center;
  justify-content: center;
  width: 16px;
  height: 16px;

  svg {
    width: 100%;
    height: 100%;
    color: $nav-item;
  }
}

.format-selector__button-text {
  font-size: 14px;
  font-weight: 500;
}

.format-selector__button-arrow {
  display: inline-flex;
  align-items: center;
  justify-content: center;
  width: 12px;
  height: 12px;
  margin-left: 0.25rem;

  svg {
    width: 100%;
    height: 100%;
    transition: transform 0.2s ease;
  }
}

// Dropdown menu
.format-selector__dropdown {
  position: fixed; // Use fixed to break out of parent stacking context
  // Position will be calculated by JavaScript to align with button
  min-width: 280px;
  max-width: 320px;
  background: $article-bg;
  border: 1px solid $nav-border;
  border-radius: 8px;
  box-shadow: 2px 2px 6px $article-shadow;
  padding: 0.5rem;
  z-index: 10000; // Higher than sidebar and other elements
  opacity: 0;
  visibility: hidden;
  transform: translateY(-8px);
  transition: all 0.2s ease;
  pointer-events: none;

  &.is-open {
    opacity: 1;
    visibility: visible;
    transform: translateY(0);
    pointer-events: auto;
  }
}
// Dropdown options (buttons and links)
.format-selector__option {
  display: flex;
  align-items: flex-start;
  gap: 0.75rem;
  width: 100%;
  padding: 0.75rem;
  background: transparent;
  color: $article-text;
  border: none;
  border-radius: $radius;
  text-align: left;
  text-decoration: none;
  cursor: pointer;
  transition: background 0.15s ease;

  &:hover {
    background: $sidebar-search-bg;
    color: $nav-item-hover;
  }

  &:focus {
    outline: 2px solid $sidebar-search-highlight;
    outline-offset: -2px;
  }

  &:not(:last-child) {
    margin-bottom: 0.25rem;
  }
}

.format-selector__icon {
  display: inline-flex;
  align-items: center;
  justify-content: center;
  width: 20px;
  height: 20px;
  flex-shrink: 0;
  margin-top: 2px; // Align with first line of text

  svg {
    width: 100%;
    height: 100%;

    // Support both stroke and fill-based icons
    stroke: $nav-item;

    // For fill-based icons (like OpenAI Blossom), use currentColor
    [fill]:not([fill="none"]):not([fill="white"]) {
      fill: $nav-item;
    }
  }
}
.format-selector__label-group {
  display: flex;
  flex-direction: column;
  gap: 0.25rem;
  flex: 1;
  min-width: 0; // Allow text truncation
}

.format-selector__label {
  display: flex;
  align-items: center;
  gap: 0.5rem;
  font-size: 14px;
  font-weight: 500;
  line-height: 1.3;
  color: $article-text;
}

.format-selector__external {
  display: inline-flex;
  align-items: center;
  font-size: 12px;
  color: $nav-item;
  margin-left: 0.25rem;
  opacity: 0.7;
}

.format-selector__sublabel {
  font-size: 12px;
  line-height: 1.4;
  color: $nav-item;
}

// Responsive adjustments
@media (max-width: 768px) {
  .format-selector {
    // Stack vertically on mobile
    margin-left: 0;
    margin-top: 1rem;
  }

  .format-selector__dropdown {
    right: auto;
    left: 0;
    min-width: 100%;
    max-width: 100%;
  }
}

// Theme styles are now automatically handled by SCSS variables
// that switch based on the active theme (light/dark)

// Ensure dropdown appears above other content
.format-selector__dropdown {
  isolation: isolate;
}

// Animation for notification (temporary toast)
@keyframes slideInUp {
  from {
    transform: translateY(100%);
    opacity: 0;
  }
  to {
    transform: translateY(0);
    opacity: 1;
  }
}

// Add smooth transitions
* {
  box-sizing: border-box;
}
@@ -2,7 +2,7 @@
  display: flex;
  flex-direction: row;
  position: relative;
  overflow: hidden;
  overflow: visible; // Changed from hidden to allow format-selector dropdown
  border-radius: $radius 0 0 $radius;
  min-height: 700px;
  @include gradient($landing-artwork-gradient);

@@ -35,5 +35,6 @@
  "layouts/v3-wayfinding";

// Import Components
@import "components/influxdb-version-detector";
@import "components/influxdb-version-detector",
  "components/format-selector";
@@ -55,6 +55,24 @@ outputFormats:
    mediaType: application/json
    baseName: pages
    isPlainText: true
  llmstxt:
    mediaType: text/plain
    baseName: llms
    isPlainText: true
    notAlternative: true
    permalinkable: true
    suffixes:
      - txt

outputs:
  page:
    - HTML
  section:
    - HTML
    # llmstxt disabled for sections - using .md files via Lambda@Edge instead
  home:
    - HTML
    - llmstxt # Root /llms.txt for AI agent discovery

# Asset processing configuration for development
build:
@@ -425,7 +425,7 @@ Environmental variable: `INFLUXDB_DATA_CACHE_MAX_CONCURRENT_COMPACTIONS`

#### compact-throughput

Default is `50331648`.
Default is `"48m"`.

The maximum number of bytes per second TSM compactions write to disk. Default is `"48m"` (48 million).
Note that short bursts are allowed to happen at a possibly larger value, set by `compact-throughput-burst`.
@@ -435,7 +435,7 @@ Environment variable: `INFLUXDB_DATA_COMPACT_THROUGHPUT`

#### compact-throughput-burst

Default is `50331648`.
Default is `"48m"`.

The maximum number of bytes per second TSM compactions write to disk during brief bursts. Default is `"48m"` (48 million).
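The old and new defaults above name the same limit: `50331648` bytes is exactly `"48m"` when the `m` suffix means a binary megabyte (48 × 1024 × 1024). A hedged sketch of that equivalence (this parser is illustrative, not InfluxDB's actual config parser):

```typescript
// Illustration of the size-string equivalence above.
// Assumes binary suffixes (k = 1024, m = 1024 * 1024), which is what
// makes "48m" equal the old raw default of 50331648 bytes.
function parseSize(value: string): number {
  const match = /^(\d+)([km]?)$/i.exec(value);
  if (!match) {
    throw new Error(`invalid size: ${value}`);
  }
  const units: Record<string, number> = { '': 1, k: 1024, m: 1024 * 1024 };
  return Number(match[1]) * units[match[2].toLowerCase()];
}

console.log(parseSize('48m')); // 50331648
```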
@@ -453,9 +453,9 @@ Environment variable: `INFLUXDB_DATA_COMPACT_FULL_WRITE_COLD_DURATION`

Default is `10000`.

The number of points per block to use during aggressive compaction. There are
certain cases where TSM files do not get fully compacted. This adjusts an
internal parameter to help ensure these files do get fully compacted.
The number of points per block to use during aggressive compaction. In certain
cases, TSM files do not get fully compacted. This adjusts an internal parameter
to help ensure these files do get fully compacted.

Environment variable: `INFLUXDB_DATA_AGGRESSIVE_POINTS_PER_BLOCK`
@@ -841,7 +841,7 @@ Environment variable: `INFLUXDB_HINTED_HANDOFF_RETRY_RATE_LIMIT`

Default is `"1s"`.

The time period after which the hinted handoff retries a write after the write fails. There is an exponential back-off, which starts at 1 second and increases with each failure until it reaches `retry-max-interval`. Retries will then occur at the `retry-max-interval`. Once there is a successful retry, the waiting period will be reset to the `retry-interval`.
The time period after which the hinted handoff retries a write after the write fails. An exponential back-off starts at 1 second and increases with each failure until it reaches `retry-max-interval`. Retries then occur at the `retry-max-interval`. Once there is a successful retry, the waiting period is reset to the `retry-interval`.

Environment variable: `INFLUXDB_HINTED_HANDOFF_RETRY_INTERVAL`
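The back-off schedule described above can be sketched as follows (the doubling factor is an assumption for illustration; the documented behavior is only that the wait grows from `retry-interval` to a cap of `retry-max-interval`):

```typescript
// Hypothetical sketch of the hinted handoff back-off above: the wait
// starts at retry-interval, grows on each failure (doubling assumed),
// and is capped at retry-max-interval. A successful retry resets it.
function retryWaits(
  retryIntervalMs: number,
  retryMaxIntervalMs: number,
  failures: number
): number[] {
  const waits: number[] = [];
  let wait = retryIntervalMs;
  for (let i = 0; i < failures; i++) {
    waits.push(wait);
    wait = Math.min(wait * 2, retryMaxIntervalMs);
  }
  return waits;
}

// With retry-interval = "1s" and retry-max-interval = "10s":
console.log(retryWaits(1000, 10000, 6)); // [1000, 2000, 4000, 8000, 10000, 10000]
```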
@@ -115,7 +115,7 @@ Uncommented settings override the internal defaults.
Note that the local configuration file does not need to include every
configuration setting.

There are two ways to launch InfluxDB with your configuration file:
Use one of the following methods to start InfluxDB with your configuration file:

- Point the process to the configuration file by using the `-config`
  option. For example:

@@ -430,9 +430,9 @@ brief bursts.

#### aggressive-points-per-block {metadata="v1.12.0+"}

The number of points per block to use during aggressive compaction. There are
certain cases where TSM files do not get fully compacted. This adjusts an
internal parameter to help ensure these files do get fully compacted.
The number of points per block to use during aggressive compaction. In certain
cases, TSM files do not get fully compacted. This adjusts an internal parameter
to help ensure these files do get fully compacted.

**Default**: `10000`
**Environment variable**: `INFLUXDB_AGGRESSIVE_POINTS_PER_BLOCK`
@@ -194,6 +194,7 @@ To configure InfluxDB, use the following configuration options when starting the
- [tls-strict-ciphers](#tls-strict-ciphers)
- [tracing-type](#tracing-type)
- [ui-disabled](#ui-disabled)
- [use-hashed-tokens](#use-hashed-tokens)
- [vault-addr](#vault-addr)
- [vault-cacert](#vault-cacert)
- [vault-capath](#vault-capath)

@@ -3470,6 +3471,61 @@ ui-disabled = true

---
### use-hashed-tokens
Enable storing hashed API tokens on disk. Hashed tokens are disabled by default in version 2.8 and will be enabled by default in a future version.

Storing hashed tokens increases security by storing API tokens as hashes on disk. When enabled, all unhashed tokens are converted to hashed tokens on every startup, leaving no unhashed tokens on disk. Newly created tokens are also stored as hashes. Lost tokens must be replaced when token hashing is enabled because the hashing prevents them from being recovered.

If token hashing is disabled after being enabled, any hashed tokens on disk remain as hashed tokens. Newly created tokens are stored unhashed when token hashing is disabled. Hashed tokens on disk remain valid and usable even with token hashing disabled.

Hashed token support is available in versions 2.8.0 and newer. Downgrading to an older version after enabling hashed tokens is not recommended because the downgrade process deletes all stored hashed tokens, all of which must then be replaced.

**Default:** `false`

| influxd flag          | Environment variable        | Configuration key   |
| :-------------------- | :-------------------------- | :------------------ |
| `--use-hashed-tokens` | `INFLUXD_USE_HASHED_TOKENS` | `use-hashed-tokens` |

###### influxd flag
<!--pytest.mark.skip-->

```sh
influxd --use-hashed-tokens
```

###### Environment variable
```sh
export INFLUXD_USE_HASHED_TOKENS=true
```

###### Configuration file
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[YAML](#)
[TOML](#)
[JSON](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```yml
use-hashed-tokens: true
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```toml
use-hashed-tokens = true
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```json
{
  "use-hashed-tokens": true
}
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
||||
---
|
||||
|
||||
### vault-addr
|
||||
Specifies the address of the Vault server expressed as a URL and port.
|
||||
For example: `https://127.0.0.1:8200/`.
|
||||
|
|
|
|||
|
|
@ -21,7 +21,7 @@ alt_links:
  cloud: /influxdb/cloud/tools/grafana/
  core: /influxdb3/core/visualize-data/grafana/
  enterprise: /influxdb3/enterprise/visualize-data/grafana/
source: /content/shared/v3-process-data/visualize/grafana.md
source: /shared/v3-process-data/visualize/grafana.md
---

<!-- SOURCE: /content/shared/v3-process-data/visualize/grafana.md -->
<!-- SOURCE: /shared/v3-process-data/visualize/grafana.md -->
@ -0,0 +1,37 @@
---
title: Compare values in SQL queries
seotitle: Compare values across rows in SQL queries
description: >
  Use SQL window functions to compare values across different rows in your
  time series data. Learn how to calculate differences, percentage changes,
  and compare values at specific time intervals.
menu:
  influxdb3_cloud_dedicated:
    name: Compare values
    parent: Query with SQL
    identifier: query-sql-compare-values
weight: 205
influxdb3/cloud-dedicated/tags: [query, sql, window functions]
related:
  - /influxdb3/cloud-dedicated/reference/sql/functions/window/
  - /influxdb3/cloud-dedicated/query-data/sql/aggregate-select/
list_code_example: |
  ##### Calculate difference from previous value
  ```sql
  SELECT
    time,
    room,
    temp,
    temp - LAG(temp) OVER (
      PARTITION BY room
      ORDER BY time
    ) AS temp_change
  FROM home
  ORDER BY room, time
  ```
source: /shared/influxdb3-query-guides/sql/compare-values.md
---

<!--
//SOURCE content/shared/influxdb3-query-guides/sql/compare-values.md
-->
@ -21,7 +21,7 @@ alt_links:
  cloud: /influxdb/cloud/tools/grafana/
  core: /influxdb3/core/visualize-data/grafana/
  enterprise: /influxdb3/enterprise/visualize-data/grafana/
source: /content/shared/v3-process-data/visualize/grafana.md
source: /shared/v3-process-data/visualize/grafana.md
---

<!-- SOURCE: /content/shared/v3-process-data/visualize/grafana.md -->
<!-- SOURCE: /shared/v3-process-data/visualize/grafana.md -->
@ -0,0 +1,37 @@
---
title: Compare values in SQL queries
seotitle: Compare values across rows in SQL queries
description: >
  Use SQL window functions to compare values across different rows in your
  time series data. Learn how to calculate differences, percentage changes,
  and compare values at specific time intervals.
menu:
  influxdb3_cloud_serverless:
    name: Compare values
    parent: Query with SQL
    identifier: query-sql-compare-values
weight: 205
influxdb3/cloud-serverless/tags: [query, sql, window functions]
related:
  - /influxdb3/cloud-serverless/reference/sql/functions/window/
  - /influxdb3/cloud-serverless/query-data/sql/aggregate-select/
list_code_example: |
  ##### Calculate difference from previous value
  ```sql
  SELECT
    time,
    room,
    temp,
    temp - LAG(temp) OVER (
      PARTITION BY room
      ORDER BY time
    ) AS temp_change
  FROM home
  ORDER BY room, time
  ```
source: /shared/influxdb3-query-guides/sql/compare-values.md
---

<!--
//SOURCE content/shared/influxdb3-query-guides/sql/compare-values.md
-->
@ -57,11 +57,7 @@ See how to [optimize queries](/influxdb3/cloud-serverless/query-data/troubleshoo

## Analyze your queries

Use the following tools to retrieve system query information, analyze query execution,
and find performance bottlenecks:

- [Analyze a query plan](/influxdb3/cloud-serverless/query-data/troubleshoot-and-optimize/analyze-query-plan/)
- [Retrieve `system.queries` information for a query](/influxdb3/cloud-serverless/query-data/troubleshoot-and-optimize/system-information/)
Learn how to [analyze a query plan](/influxdb3/cloud-serverless/query-data/troubleshoot-and-optimize/analyze-query-plan/) to understand query execution and find performance bottlenecks.

#### Request help to troubleshoot queries
@ -21,7 +21,7 @@ alt_links:
  cloud: /influxdb/cloud/tools/grafana/
  core: /influxdb3/core/visualize-data/grafana/
  enterprise: /influxdb3/enterprise/visualize-data/grafana/
source: /content/shared/v3-process-data/visualize/grafana.md
source: /shared/v3-process-data/visualize/grafana.md
---

<!-- SOURCE: /content/shared/v3-process-data/visualize/grafana.md -->
<!-- SOURCE: /shared/v3-process-data/visualize/grafana.md -->
@ -0,0 +1,37 @@
---
title: Compare values in SQL queries
seotitle: Compare values across rows in SQL queries
description: >
  Use SQL window functions to compare values across different rows in your
  time series data. Learn how to calculate differences, percentage changes,
  and compare values at specific time intervals.
menu:
  influxdb3_clustered:
    name: Compare values
    parent: Query with SQL
    identifier: query-sql-compare-values
weight: 205
influxdb3/clustered/tags: [query, sql, window functions]
related:
  - /influxdb3/clustered/reference/sql/functions/window/
  - /influxdb3/clustered/query-data/sql/aggregate-select/
list_code_example: |
  ##### Calculate difference from previous value
  ```sql
  SELECT
    time,
    room,
    temp,
    temp - LAG(temp) OVER (
      PARTITION BY room
      ORDER BY time
    ) AS temp_change
  FROM home
  ORDER BY room, time
  ```
source: /shared/influxdb3-query-guides/sql/compare-values.md
---

<!--
//SOURCE content/shared/influxdb3-query-guides/sql/compare-values.md
-->
@ -82,14 +82,18 @@ spec:

Tables can now be renamed and undeleted with [influxctl v2.10.5](https://docs.influxdata.com/influxdb3/clustered/reference/release-notes/influxctl/#2105) or later.

To enable hard delete of soft-deleted namespaces:
- Set `INFLUXDB_IOX_ENABLE_NAMESPACE_ROW_DELETION` to `true`.
- If needed, adjust how long a namespace remains soft-deleted (and eligible for undeletion) by setting `INFLUXDB_IOX_GC_NAMESPACE_CUTOFF` (default: `14d`).
- If needed, adjust how long the garbage collector should sleep between runs of the namespace deletion task with `INFLUXDB_IOX_GC_NAMESPACE_SLEEP_INTERVAL`. The default is `24h`, which should be suitable for ongoing cleanup, but if there is a backlog of soft-deleted namespaces to clean up, you may want to run this more frequently until the garbage collector has caught up.
- If needed, adjust the maximum number of namespaces that get hard deleted in one run of the namespace deletion task with `INFLUXDB_IOX_GC_NAMESPACE_LIMIT`. The default is `1000`, which should be suitable for ongoing cleanup, but if you have a large number of namespaces and you're running the task very frequently, you may need to lower this to delete fewer records per run if each individual run is timing out.
To enable hard delete of soft-deleted databases:

To enable hard delete of soft-deleted tables in active namespaces (soft-deleted tables in soft-deleted namespaces get cleaned up when the namespace gets cleaned up):
- Set `INFLUXDB_IOX_ENABLE_TABLE_ROW_DELETION` to `true`, and if needed, adjust these settings that work in the same way as the corresponding namespace flags:
> [!Note]
> In {{% product-name %}}, "namespace" in environment variable names refers to a database.

- Set `INFLUXDB_IOX_ENABLE_NAMESPACE_ROW_DELETION` to `true`.
- If needed, adjust how long a database remains soft-deleted (and eligible for undeletion) by setting `INFLUXDB_IOX_GC_NAMESPACE_CUTOFF` (default: `14d`).
- If needed, adjust how long the garbage collector should sleep between runs of the database deletion task with `INFLUXDB_IOX_GC_NAMESPACE_SLEEP_INTERVAL`. The default is `24h`, which should be suitable for ongoing cleanup, but if there is a backlog of soft-deleted databases to clean up, you may want to run this more frequently until the garbage collector has caught up.
- If needed, adjust the maximum number of databases that get hard deleted in one run of the database deletion task with `INFLUXDB_IOX_GC_NAMESPACE_LIMIT`. The default is `1000`, which should be suitable for ongoing cleanup, but if you have a large number of databases and you're running the task very frequently, you may need to lower this to delete fewer records per run if each individual run is timing out.

To enable hard delete of soft-deleted tables in active databases (soft-deleted tables in soft-deleted databases get cleaned up when the database gets cleaned up):
- Set `INFLUXDB_IOX_ENABLE_TABLE_ROW_DELETION` to `true`, and if needed, adjust these settings that work in the same way as the corresponding database flags:
  - `INFLUXDB_IOX_GC_TABLE_CUTOFF` (default: `14d`)
  - `INFLUXDB_IOX_GC_TABLE_SLEEP_INTERVAL` (default: `24h`)
  - `INFLUXDB_IOX_GC_TABLE_LIMIT` (default: `1000`)
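The garbage-collector settings above can be summarized as plain environment variables. This is a hypothetical sketch using the documented defaults; in a real cluster you would set these on the garbage collector service through your cluster configuration, not in an interactive shell.

```shell
# Enable hard delete of soft-deleted databases ("namespaces" in variable
# names) and tables, with the documented default tunables.
export INFLUXDB_IOX_ENABLE_NAMESPACE_ROW_DELETION=true
export INFLUXDB_IOX_GC_NAMESPACE_CUTOFF=14d          # soft-delete grace period
export INFLUXDB_IOX_GC_NAMESPACE_SLEEP_INTERVAL=24h  # pause between GC runs
export INFLUXDB_IOX_GC_NAMESPACE_LIMIT=1000          # max hard deletes per run

export INFLUXDB_IOX_ENABLE_TABLE_ROW_DELETION=true
export INFLUXDB_IOX_GC_TABLE_CUTOFF=14d
export INFLUXDB_IOX_GC_TABLE_SLEEP_INTERVAL=24h
export INFLUXDB_IOX_GC_TABLE_LIMIT=1000
```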
@ -15,7 +15,6 @@ related:
source: /shared/influxdb3-get-started/setup.md
---

<!--
The content of this page is at
<!--
// SOURCE content/shared/influxdb3-get-started/setup.md
-->
@ -0,0 +1,37 @@
---
title: Compare values in SQL queries
seotitle: Compare values across rows in SQL queries
description: >
  Use SQL window functions to compare values across different rows in your
  time series data. Learn how to calculate differences, percentage changes,
  and compare values at specific time intervals.
menu:
  influxdb3_core:
    name: Compare values
    parent: Query with SQL
    identifier: query-sql-compare-values
weight: 205
influxdb3/core/tags: [query, sql, window functions]
related:
  - /influxdb3/core/reference/sql/functions/window/
  - /influxdb3/core/query-data/sql/aggregate-select/
list_code_example: |
  ##### Calculate difference from previous value
  ```sql
  SELECT
    time,
    room,
    temp,
    temp - LAG(temp) OVER (
      PARTITION BY room
      ORDER BY time
    ) AS temp_change
  FROM home
  ORDER BY room, time
  ```
source: /shared/influxdb3-query-guides/sql/compare-values.md
---

<!--
//SOURCE content/shared/influxdb3-query-guides/sql/compare-values.md
-->
@ -42,6 +42,11 @@ influxdb3 serve [OPTIONS]

## Options

<!--docs:exclude
--serve-invocation-method: internal implementation detail
--test-mode: hidden test flag, not for production use
-->

| Option           |                                                       | Description                                                                                                                 |
| :--------------- | :---------------------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------- |
| {{< req "\*" >}} | `--node-id` | _See [configuration options](/influxdb3/core/reference/config-options/#node-id)_ |
@ -19,7 +19,7 @@ alt_links:
  v1: /influxdb/v1/tools/grafana/
  v2: /influxdb/v2/tools/grafana/
  cloud: /influxdb/cloud/tools/grafana/
source: /content/shared/v3-process-data/visualize/grafana.md
source: /shared/v3-process-data/visualize/grafana.md
---

<!-- SOURCE: /content/shared/v3-process-data/visualize/grafana.md -->
<!-- SOURCE: /shared/v3-process-data/visualize/grafana.md -->
@ -15,7 +15,6 @@ related:
source: /shared/influxdb3-get-started/setup.md
---

<!--
The content of this page is at
<!--
// SOURCE content/shared/influxdb3-get-started/setup.md
-->
@ -0,0 +1,37 @@
---
title: Compare values in SQL queries
seotitle: Compare values across rows in SQL queries
description: >
  Use SQL window functions to compare values across different rows in your
  time series data. Learn how to calculate differences, percentage changes,
  and compare values at specific time intervals.
menu:
  influxdb3_enterprise:
    name: Compare values
    parent: Query with SQL
    identifier: query-sql-compare-values
weight: 205
influxdb3/enterprise/tags: [query, sql, window functions]
related:
  - /influxdb3/enterprise/reference/sql/functions/window/
  - /influxdb3/enterprise/query-data/sql/aggregate-select/
list_code_example: |
  ##### Calculate difference from previous value
  ```sql
  SELECT
    time,
    room,
    temp,
    temp - LAG(temp) OVER (
      PARTITION BY room
      ORDER BY time
    ) AS temp_change
  FROM home
  ORDER BY room, time
  ```
source: /shared/influxdb3-query-guides/sql/compare-values.md
---

<!--
//SOURCE content/shared/influxdb3-query-guides/sql/compare-values.md
-->
@ -43,6 +43,11 @@ influxdb3 serve [OPTIONS]

## Options

<!--docs:exclude
--serve-invocation-method: internal implementation detail
--test-mode: hidden test flag, not for production use
-->

| Option           |                                                       | Description                                                                                                                       |
| :--------------- | :---------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------- |
| | `--admin-token-recovery-http-bind` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#admin-token-recovery-http-bind)_ |
@ -19,7 +19,7 @@ alt_links:
  v1: /influxdb/v1/tools/grafana/
  v2: /influxdb/v2/tools/grafana/
  cloud: /influxdb/cloud/tools/grafana/
source: /content/shared/v3-process-data/visualize/grafana.md
source: /shared/v3-process-data/visualize/grafana.md
---

<!-- SOURCE: /content/shared/v3-process-data/visualize/grafana.md -->
<!-- SOURCE: /shared/v3-process-data/visualize/grafana.md -->
@ -17,6 +17,8 @@ Use [Docker](https://docker.com) to install and run **InfluxDB 3 Explorer**.
- [Persist data across restarts](#persist-data-across-restarts)
- [Pre-configure InfluxDB connections](#pre-configure-influxdb-connections)
- [Enable TLS/SSL (HTTPS)](#enable-tlsssl-https)
  - [TLS and certificate verification options](#tls-and-certificate-verification-options)
  - [Use self-signed certificates](#use-self-signed-certificates)
- [Choose operational mode](#choose-operational-mode)
- [Advanced configuration](#advanced-configuration)
  - [Environment variables](#environment-variables)
@ -347,6 +349,104 @@ To enable TLS/SSL for secure connections:
> [!Note]
> The nginx web server automatically detects and uses certificate files in the mounted path.

#### TLS and certificate verification options

Use the following environment variables to configure TLS and certificate verification:

- `NODE_EXTRA_CA_CERTS` - Path to custom CA certificate file inside container (recommended).

  This option adds an intermediate or custom CA certificate to the Node.js trusted certificate store
  and is required when InfluxDB uses certificates signed by an internal or private CA.

  - **Format**: PEM format certificate file
  - **Example**: `-e NODE_EXTRA_CA_CERTS=/ca-certs/ca-bundle.crt`

  > [!Note]
  > This is the native Node.js environment variable for custom CAs.

- `CA_CERT_PATH` - Alternative to `NODE_EXTRA_CA_CERTS` (convenience alias)
  - **Example**: `-e CA_CERT_PATH=/ca-certs/ca-bundle.crt`

> [!Note]
> Use either `NODE_EXTRA_CA_CERTS` or `CA_CERT_PATH`, not both. `CA_CERT_PATH` aliases `NODE_EXTRA_CA_CERTS`.

#### Use self-signed certificates

To configure Explorer to trust self-signed or custom CA certificates when connecting to InfluxDB:

1. **Create a directory for CA certificates:**

   ```bash
   mkdir -p ./ca-certs
   ```

2. **Copy your CA certificate to the directory:**

   ```bash
   cp /path/to/your-ca.pem ./ca-certs/
   ```

3. **Mount the CA certificate directory and set the `NODE_EXTRA_CA_CERTS` environment variable:**

   {{< expand-wrapper >}}
   {{% expand "View example Docker configuration for self-signed certificates" %}}

   {{< code-tabs-wrapper >}}
   {{% code-tabs %}}
   [Docker](#)
   [Docker Compose](#)
   {{% /code-tabs %}}

   {{% code-tab-content %}}
   {{< code-callout "NODE_EXTRA_CA_CERTS" >}}
   ```bash
   docker run --detach \
     --name influxdb3-explorer \
     --restart unless-stopped \
     --publish 8888:443 \
     --volume $(pwd)/db:/db:rw \
     --volume $(pwd)/config:/app-root/config:ro \
     --volume $(pwd)/ssl:/etc/nginx/ssl:ro \
     --volume $(pwd)/ca-certs:/ca-certs:ro \
     --env SESSION_SECRET_KEY=your-secure-secret-key-here \
     --env NODE_EXTRA_CA_CERTS=/ca-certs/your-ca.pem \
     influxdata/influxdb3-ui:{{% latest-patch %}} \
     --mode=admin
   ```
   {{< /code-callout >}}
   {{% /code-tab-content %}}

   {{% code-tab-content %}}
   {{< code-callout "NODE_EXTRA_CA_CERTS" >}}
   ```yaml
   # docker-compose.yml
   version: '3.8'

   services:
     explorer:
       image: influxdata/influxdb3-ui:{{% latest-patch %}}
       container_name: influxdb3-explorer
       pull_policy: always
       command: ["--mode=admin"]
       ports:
         - "8888:443"
       volumes:
         - ./db:/db:rw
         - ./config:/app-root/config:ro
         - ./ssl:/etc/nginx/ssl:ro
         - ./ca-certs:/ca-certs:ro
       environment:
         SESSION_SECRET_KEY: ${SESSION_SECRET_KEY:-your-secure-secret-key-here}
         NODE_EXTRA_CA_CERTS: /ca-certs/your-ca.pem
       restart: unless-stopped
   ```
   {{< /code-callout >}}
   {{% /code-tab-content %}}
   {{< /code-tabs-wrapper >}}

   {{% /expand %}}
   {{< /expand-wrapper >}}

### Choose operational mode

{{% product-name %}} supports two operational modes:
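Before mounting a CA file as in the steps above, it can help to confirm the file is valid PEM. A hypothetical sketch using `openssl` (the throwaway self-signed CA generated here stands in for your real `your-ca.pem`):

```shell
# Generate a throwaway self-signed CA certificate for illustration,
# then confirm it parses as an X.509 certificate before mounting it.
mkdir -p ./ca-certs
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=example-internal-ca" \
  -keyout ./ca-certs/your-ca.key -out ./ca-certs/your-ca.pem
openssl x509 -in ./ca-certs/your-ca.pem -noout -subject
```

If the last command fails, the file is not a usable PEM certificate and Node.js will ignore it.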
@ -410,6 +510,8 @@ services:
| `DATABASE_URL` | `/db/sqlite.db` | Path to SQLite database inside container |
| `SSL_CERT_PATH` | `/etc/nginx/ssl/cert.pem` | Path to SSL certificate file |
| `SSL_KEY_PATH` | `/etc/nginx/ssl/key.pem` | Path to SSL private key file |
| `NODE_EXTRA_CA_CERTS` | _(none)_ | Path to custom CA certificate file (PEM format) for trusting self-signed or internal CA certificates |
| `CA_CERT_PATH` | _(none)_ | Alias for `NODE_EXTRA_CA_CERTS` |

> [!Important]
> Always set `SESSION_SECRET_KEY` in production to persist user sessions across container restarts.
@ -426,6 +528,7 @@ services:
| `/db` | SQLite database storage | 700 | No (but recommended) |
| `/app-root/config` | Connection configuration | 755 | No |
| `/etc/nginx/ssl` | TLS/SSL certificates | 755 | Only for HTTPS |
| `/ca-certs` | Custom CA certificates | 755 | Only for self-signed certificates |

### Port reference
@ -527,7 +630,7 @@ docker-compose up -d
{{% code-tab-content %}}
```bash
docker run --rm \
  --name influxdb3-explorer-dev \
  --name influxdb3-explorer \
  --publish 8888:80 \
  influxdata/influxdb3-ui:{{% latest-patch %}}
```
@ -541,9 +644,10 @@ version: '3.8'
services:
  explorer:
    image: influxdata/influxdb3-ui:{{% latest-patch %}}
    container_name: influxdb3-explorer-dev
    container_name: influxdb3-explorer
    ports:
      - "8888:80"
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
@ -70,7 +70,7 @@ curl -i http://localhost:8181/ping

Look for:

- `x-influxdb-version`: Version number (for example, `3.6.0`)
- `x-influxdb-version`: Version number (for example, {{% latest-patch %}})
- `x-influxdb-build`: `Core`

{{% /show-in %}}
@ -99,9 +99,10 @@ curl -i http://localhost:8181/ping

Look for:

- `x-influxdb-version`: Version number (for example, `3.6.0`)
- `x-influxdb-version`: Version number (for example, {{% latest-patch %}})
- `x-influxdb-build`: `Enterprise`

{{% /show-in %}}

{{% show-in "clustered" %}}
|
|||
|
||||
{{% /show-in %}}
|
||||
|
||||
{{% hide-in "v2,cloud,v1" %}}
|
||||
> [!Note]
|
||||
>
|
||||
> #### SQL version() function
|
||||
>
|
||||
> The SQL `version()` function returns the
|
||||
> [DataFusion](https://datafusion.apache.org/) query engine version, not the
|
||||
> InfluxDB product version. Use the methods above to identify your InfluxDB
|
||||
> version.
|
||||
{{% /hide-in %}}
|
||||
|
||||
{{% show-in "v2" %}}
|
||||
|
||||
### InfluxDB OSS v2 detection
|
||||
|
|
@ -272,6 +284,15 @@ Look for:
|
|||
- `x-influxdb-version`: Version number (for example, `3.6.0`)
|
||||
- `x-influxdb-build`: `Core` or `Enterprise`
|
||||
|
||||
> [!Note]
|
||||
>
|
||||
> #### SQL version() function
|
||||
>
|
||||
> The SQL `version()` function returns the
|
||||
> [DataFusion](https://datafusion.apache.org/) query engine version, not the
|
||||
> InfluxDB product version. Use the methods above to identify your InfluxDB
|
||||
> version.
|
||||
|
||||
{{% /tab-content %}}
|
||||
|
||||
{{< /tabs-wrapper >}}
|
||||
|
|
|
|||
|
|
@ -1,3 +1,70 @@
## 2.12.0 {date="2025-12-09"}

### Features

- Add 'influxdata-archive-keyring' as a suggested package to simplify future repository key rotations for the end user
- Add a new `--perf-debug` flag to the `query` command that outputs performance statistics and gRPC response trailers instead of query results

  Example output for `--perf-debug`:

  ```
  $ ./influxctl query --perf-debug --format table --token REDACTED --database testdb --language influxql "SELECT SUM(i), non_negative_difference(SUM(i)) as diff_i FROM data WHERE time > '2025-11-07T01:20:00Z' AND time < '2025-11-07T03:00:00Z' AND runid = '540cd752bb6411f0a23e30894adea878' GROUP BY time(5m)"
  +--------------------------+----------+
  | Metric                   | Value    |
  +--------------------------+----------+
  | Client Duration          | 1.222 s  |
  | Output Rows              | 20       |
  | Output Size              | 647 B    |
  +--------------------------+----------+
  | Compute Duration         | 37.2 ms  |
  | Execution Duration       | 243.8 ms |
  | Ingester Latency Data    | 0        |
  | Ingester Latency Plan    | 0        |
  | Ingester Partition Count | 0        |
  | Ingester Response        | 0 B      |
  | Ingester Response Rows   | 0        |
  | Max Memory               | 70 KiB   |
  | Parquet Files            | 1        |
  | Partitions               | 1        |
  | Planning Duration        | 9.6 ms   |
  | Queue Duration           | 286.6 µs |
  +--------------------------+----------+

  $ ./influxctl query --perf-debug --format json --token REDACTED --database testdb --language influxql "SELECT SUM(i), non_negative_difference(SUM(i)) as diff_i FROM data WHERE time > '2025-11-07T01:20:00Z' AND time < '2025-11-07T03:00:00Z' AND runid = '540cd752bb6411f0a23e30894adea878' GROUP BY time(5m)"
  {
    "client_duration_secs": 1.101,
    "compute_duration_secs": 0.037,
    "execution_duration_secs": 0.247,
    "ingester_latency_data": 0,
    "ingester_latency_plan": 0,
    "ingester_partition_count": 0,
    "ingester_response_bytes": 0,
    "ingester_response_rows": 0,
    "max_memory_bytes": 71744,
    "output_bytes": 647,
    "output_rows": 20,
    "parquet_files": 1,
    "partitions": 1,
    "planning_duration_secs": 0.009,
    "queue_duration_secs": 0
  }
  ```

### Dependency updates

- Update Go to 1.25.5.
- Update `github.com/containerd/containerd` from 1.7.27 to 1.7.29
- Update `github.com/go-git/go-git/v5` from 5.16.3 to 5.16.4
- Update `github.com/jedib0t/go-pretty/v6` from 6.6.8 to 6.7.5
- Update `github.com/ovechkin-dm/mockio/v2` from 2.0.3 to 2.0.4
- Update `go.uber.org/zap` from 1.27.0 to 1.27.1
- Update `golang.org/x/crypto` from 0.43.0 to 0.45.0
- Update `golang.org/x/mod` from 0.29.0 to 0.30.0
- Update `golang.org/x/oauth2` from 0.32.0 to 0.33.0
- Update `google.golang.org/grpc` from 1.76.0 to 1.77.0

---

## 2.11.0 {date="2025-10-17"}

### Features
@ -997,16 +997,17 @@ Specifies the maximum number of messages sent to a Jaeger service per second.
- [datafusion-max-parquet-fanout](#datafusion-max-parquet-fanout)
- [datafusion-use-cached-parquet-loader](#datafusion-use-cached-parquet-loader)
- [datafusion-config](#datafusion-config)
<!-- DEV-ONLY FLAGS: DO NOT DOCUMENT IN PRODUCTION - TOKIO RUNTIME FLAGS
- datafusion-runtime-type
- datafusion-runtime-disable-lifo-slot
- datafusion-runtime-event-interval
- datafusion-runtime-global-queue-interval
- datafusion-runtime-max-blocking-threads
- datafusion-runtime-max-io-events-per-tick
- datafusion-runtime-thread-keep-alive
- datafusion-runtime-thread-priority
END DEV-ONLY FLAGS -->

<!--docs:exclude
--datafusion-runtime-type: development-only Tokio runtime configuration
--datafusion-runtime-disable-lifo-slot: development-only Tokio runtime configuration
--datafusion-runtime-event-interval: development-only Tokio runtime configuration
--datafusion-runtime-global-queue-interval: development-only Tokio runtime configuration
--datafusion-runtime-max-blocking-threads: development-only Tokio runtime configuration
--datafusion-runtime-max-io-events-per-tick: development-only Tokio runtime configuration
--datafusion-runtime-thread-keep-alive: development-only Tokio runtime configuration
--datafusion-runtime-thread-priority: development-only Tokio runtime configuration
-->

#### datafusion-num-threads
@ -1891,10 +1892,10 @@ Sets the default duration for hard deletion of data.

### Telemetry

- [telemetry-disable-upload](#telemetry-disable-upload)
- [disable-telemetry-upload](#disable-telemetry-upload)
- [telemetry-endpoint](#telemetry-endpoint)

#### telemetry-disable-upload
#### disable-telemetry-upload

Disables the upload of telemetry data to InfluxData.
@ -1902,7 +1903,7 @@ Disables the upload of telemetry data to InfluxData.

| influxdb3 serve option       | Environment variable                 |
| :--------------------------- | :----------------------------------- |
| `--telemetry-disable-upload` | `INFLUXDB3_TELEMETRY_DISABLE_UPLOAD` |
| `--disable-telemetry-upload` | `INFLUXDB3_TELEMETRY_DISABLE_UPLOAD` |

***
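As a sketch of the renamed option in practice (note that the environment variable keeps its original name even though the flag was renamed):

```shell
# Disable telemetry upload via the environment variable from the table above.
export INFLUXDB3_TELEMETRY_DISABLE_UPLOAD=true
echo "$INFLUXDB3_TELEMETRY_DISABLE_UPLOAD"   # prints: true
# Equivalent flag form (not run here): influxdb3 serve --disable-telemetry-upload ...
```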
@ -19,6 +19,10 @@ You can also set the database name using the `INFLUXDB3_DATABASE_NAME` environme

## Options

<!--docs:exclude
--database-name: internal variable, use positional <DATABASE_NAME>
-->

| Option |                      | Description                                                                                                                                       |
| :----- | :------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------ |
| `-H` | `--host` | Host URL of the running {{< product-name >}} server (default is `http://127.0.0.1:8181`) |
@ -23,6 +23,9 @@ influxdb3 create table [OPTIONS] \
- **TABLE_NAME**: The name of the table to create.

## Options
<!--docs:exclude
--table-name: internal variable, use positional <TABLE_NAME>
-->

{{% hide-in "enterprise" %}}
| Option | | Description |
@ -21,6 +21,10 @@ influxdb3 create trigger [OPTIONS] \

## Options

<!--docs:exclude
--trigger-name: internal variable, use positional <TRIGGER_NAME>
-->

| Option |                     | Description                                                                                               |
| :----- | :------------------ | :-------------------------------------------------------------------------------------------------------- |
| `-H` | `--host` | Host URL of the running {{< product-name >}} server (default is `http://127.0.0.1:8181`) |
@ -17,6 +17,10 @@ influxdb3 delete database [OPTIONS] <DATABASE_NAME>

## Options

<!--docs:exclude
--database-name: internal variable, use positional <DATABASE_NAME>
-->

| Option |               | Description                                                                               |
| :----- | :------------ | :---------------------------------------------------------------------------------------- |
| `-H` | `--host` | Host URL of the running {{< product-name >}} server (default is `http://127.0.0.1:8181`) |
@ -15,6 +15,10 @@ influxdb3 delete table [OPTIONS] --database <DATABASE_NAME> <TABLE_NAME>

## Options

<!--docs:exclude
--table-name: internal variable, use positional <TABLE_NAME>
-->

| Option |               | Description                                                                               |
| :----- | :------------ | :---------------------------------------------------------------------------------------- |
| `-H` | `--host` | Host URL of the running {{< product-name >}} server (default is `http://127.0.0.1:8181`) |

@ -15,6 +15,10 @@ influxdb3 delete trigger [OPTIONS] --database <DATABASE_NAME> <TRIGGER_NAME>

## Options

<!--docs:exclude
--trigger-name: internal variable, use positional <TRIGGER_NAME>
-->

| Option | | Description |
| :----- | :----------- | :--------------------------------------------------------------------------------------- |
| `-H` | `--host` | Host URL of the running {{< product-name >}} server (default is `http://127.0.0.1:8181`) |

@ -13,6 +13,10 @@ influxdb3 install package [OPTIONS] [PACKAGES]...

## Options

<!--docs:exclude
--packages: internal variable, use positional [PACKAGES]...
-->

| Option | Description | Default | Environment Variable |
| :---------------------------------------------- | :------------------------------------------------------------------ | :---------------------- | :-------------------------- |
| `-H`, `--host <HOST_URL>` | The host URL of the running {{< product-name >}} server | `http://127.0.0.1:8181` | `INFLUXDB3_HOST_URL` |

@ -496,12 +496,10 @@ influxdb3 create token --admin

{{% /code-tab-content %}}
{{% code-tab-content %}}

{{% code-placeholders "CONTAINER_NAME" %}}
```bash
```bash { placeholders="CONTAINER_NAME" }
# With Docker — in a new terminal:
docker exec -it CONTAINER_NAME influxdb3 create token --admin
```
{{% /code-placeholders %}}

Replace {{% code-placeholder-key %}}`CONTAINER_NAME`{{% /code-placeholder-key %}} with the name of your running Docker container.

@ -523,48 +521,58 @@ such as {{% show-in "enterprise" %}}creating additional tokens, {{% /show-in %}}

performing administrative tasks{{% show-in "enterprise" %}},{{% /show-in %}}
and writing and querying data.

#### Authorize CLI commands

Use one of the following methods to provide your token and authenticate `influxdb3` CLI commands.

In your command, replace {{% code-placeholder-key %}}`YOUR_AUTH_TOKEN`{{% /code-placeholder-key %}} with your token string (for example, the [operator token](#create-an-operator-token) from the previous step).

##### Set an environment variable (recommended)

{{< tabs-wrapper >}}
{{% tabs %}}
[Environment variable (recommended)](#)
[Command option](#)
[macOS and Linux](#)
[PowerShell](#)
[CMD](#)
{{% /tabs %}}
{{% tab-content %}}

Set the `INFLUXDB3_AUTH_TOKEN` environment variable to have the CLI use your
token automatically:

{{% code-placeholders "YOUR_AUTH_TOKEN" %}}
```bash
```bash { placeholders="YOUR_AUTH_TOKEN" }
export INFLUXDB3_AUTH_TOKEN=YOUR_AUTH_TOKEN
```
{{% /code-placeholders %}}

{{% /tab-content %}}
{{% tab-content %}}

Include the `--token` option with CLI commands:

{{% code-placeholders "YOUR_AUTH_TOKEN" %}}
```bash
influxdb3 show databases --token YOUR_AUTH_TOKEN
```powershell { placeholders="YOUR_AUTH_TOKEN" }
$env:INFLUXDB3_AUTH_TOKEN = "YOUR_AUTH_TOKEN"
```

{{% /tab-content %}}
{{% tab-content %}}

```cmd { placeholders="YOUR_AUTH_TOKEN" }
set INFLUXDB3_AUTH_TOKEN=YOUR_AUTH_TOKEN
# Make sure to include a space character at the end of this command.
```
{{% /code-placeholders %}}

{{% /tab-content %}}
{{< /tabs-wrapper >}}

##### Use the `--token` option

```bash { placeholders="YOUR_AUTH_TOKEN" }
influxdb3 show databases --token YOUR_AUTH_TOKEN
```

#### Authorize HTTP API requests

For HTTP API requests, include your token in the `Authorization` header--for example:

{{% code-placeholders "YOUR_AUTH_TOKEN" %}}
```bash
```bash { placeholders="YOUR_AUTH_TOKEN" }
curl "http://{{< influxdb/host >}}/api/v3/configure/database" \
  --header "Authorization: Bearer YOUR_AUTH_TOKEN"
```
{{% /code-placeholders %}}

#### Learn more about tokens and permissions

@ -0,0 +1,329 @@

Use [SQL window functions](/influxdb3/version/reference/sql/functions/window/) to compare values across different rows in your time series data.
Window functions like [`LAG`](/influxdb3/version/reference/sql/functions/window/#lag) and [`LEAD`](/influxdb3/version/reference/sql/functions/window/#lead) let you access values from previous or subsequent rows without using self-joins, making it easy to calculate changes over time.

Common use cases for comparing values include:

- Calculating the difference between the current value and a previous value
- Computing rate of change or percentage change
- Detecting significant changes or anomalies
- Comparing values at specific time intervals
- Handling counter metrics that reset to zero

**To compare values across rows:**

1. Use a [window function](/influxdb3/version/reference/sql/functions/window/) such as `LAG` or `LEAD` with an `OVER` clause.
2. Include a `PARTITION BY` clause to group data by tags (like `room` or `sensor_id`).
3. Include an `ORDER BY` clause to define the order for comparisons (typically by `time`).
4. Use arithmetic operators to calculate differences, ratios, or percentage changes.
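
The steps above can be sketched as a minimal query skeleton. The measurement and column names (`example`, `sensor_id`, `value`) are placeholders for your own schema:

```sql
SELECT
  time,
  sensor_id,
  value,
  value - LAG(value, 1) OVER (  -- step 1: window function with an OVER clause
    PARTITION BY sensor_id      -- step 2: group rows by tag
    ORDER BY time               -- step 3: define the comparison order
  ) AS change                   -- step 4: arithmetic on current vs. previous value
FROM example
```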

## Examples of comparing values

> [!Note]
> #### Sample data
>
> The following examples use the
> {{% influxdb3/home-sample-link %}}.
> To run the example queries and return results,
> [write the sample data](/influxdb3/version/reference/sample-data/#write-home-sensor-data-to-influxdb)
> to your {{% product-name %}} database before running the example queries.

- [Calculate the difference from the previous value](#calculate-the-difference-from-the-previous-value)
- [Calculate the percentage change](#calculate-the-percentage-change)
- [Compare values at regular intervals](#compare-values-at-regular-intervals)
- [Compare values with exact time offsets](#compare-values-with-exact-time-offsets)
- [Handle counter metrics and resets](#handle-counter-metrics-and-resets)
- [Calculate non-negative differences (counter rate)](#calculate-non-negative-differences-counter-rate)
- [Calculate cumulative counter increase](#calculate-cumulative-counter-increase)
- [Aggregate counter increases by time interval](#aggregate-counter-increases-by-time-interval)

### Calculate the difference from the previous value

Use the `LAG` function to access the value from the previous row and calculate the difference.
This is useful for detecting changes over time.

{{% influxdb/custom-timestamps %}}

```sql
SELECT
  time,
  room,
  temp,
  temp - LAG(temp, 1) OVER (
    PARTITION BY room
    ORDER BY time
  ) AS temp_change
FROM home
WHERE
  time >= '2022-01-01T08:00:00Z'
  AND time < '2022-01-01T11:00:00Z'
ORDER BY room, time
```

| time | room | temp | temp_change |
|:--------------------|:------------|-----:|------------:|
| 2022-01-01T08:00:00 | Kitchen | 21.0 | NULL |
| 2022-01-01T09:00:00 | Kitchen | 23.0 | 2.0 |
| 2022-01-01T10:00:00 | Kitchen | 22.7 | -0.3 |
| 2022-01-01T08:00:00 | Living Room | 21.1 | NULL |
| 2022-01-01T09:00:00 | Living Room | 21.4 | 0.3 |
| 2022-01-01T10:00:00 | Living Room | 21.8 | 0.4 |

{{% /influxdb/custom-timestamps %}}

The first row in each partition returns `NULL` for `temp_change` because there's no previous value.
To use a default value instead of `NULL`, provide a third argument to `LAG`:

```sql
LAG(temp, 1, 0) -- Returns 0 if no previous value exists
```
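
`LEAD` works the same way in the opposite direction, returning the value from a following row. For example, this variation pairs each reading with the next one:

```sql
SELECT
  time,
  room,
  temp,
  LEAD(temp, 1) OVER (
    PARTITION BY room
    ORDER BY time
  ) AS next_temp
FROM home
ORDER BY room, time
```

The last row in each partition returns `NULL` for `next_temp` because there's no subsequent value.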

### Calculate the percentage change

Calculate the percentage change between the current value and a previous value by dividing the difference by the previous value.

{{% influxdb/custom-timestamps %}}

```sql
SELECT
  time,
  room,
  temp,
  ROUND(
    ((temp - LAG(temp, 1) OVER (PARTITION BY room ORDER BY time)) /
     LAG(temp, 1) OVER (PARTITION BY room ORDER BY time)) * 100,
    2
  ) AS percent_change
FROM home
WHERE
  time >= '2022-01-01T08:00:00Z'
  AND time < '2022-01-01T11:00:00Z'
ORDER BY room, time
```

| time | room | temp | percent_change |
|:--------------------|:------------|-----:|---------------:|
| 2022-01-01T08:00:00 | Kitchen | 21.0 | NULL |
| 2022-01-01T09:00:00 | Kitchen | 23.0 | 9.52 |
| 2022-01-01T10:00:00 | Kitchen | 22.7 | -1.30 |
| 2022-01-01T08:00:00 | Living Room | 21.1 | NULL |
| 2022-01-01T09:00:00 | Living Room | 21.4 | 1.42 |
| 2022-01-01T10:00:00 | Living Room | 21.8 | 1.87 |

{{% /influxdb/custom-timestamps %}}
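
Temperatures in this dataset are never zero, but for metrics where the previous value can be `0`, the division can fail or produce an infinite result. Wrapping the divisor in `NULLIF` returns `NULL` instead--a sketch of the adjusted expression:

```sql
ROUND(
  ((temp - LAG(temp, 1) OVER (PARTITION BY room ORDER BY time)) /
   NULLIF(LAG(temp, 1) OVER (PARTITION BY room ORDER BY time), 0)) * 100,
  2
) AS percent_change
```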

### Compare values at regular intervals

For regularly spaced time series data (like hourly readings), use `LAG` with an offset parameter to compare values from a specific number of rows back.

The following query compares each temperature reading with the reading from one hour earlier (assuming hourly data):

{{% influxdb/custom-timestamps %}}

```sql
SELECT
  time,
  room,
  temp,
  LAG(temp, 1) OVER (
    PARTITION BY room
    ORDER BY time
  ) AS temp_1h_ago,
  temp - LAG(temp, 1) OVER (
    PARTITION BY room
    ORDER BY time
  ) AS hourly_change
FROM home
WHERE
  time >= '2022-01-01T08:00:00Z'
  AND time < '2022-01-01T12:00:00Z'
ORDER BY room, time
```

| time | room | temp | temp_1h_ago | hourly_change |
|:--------------------|:------------|-----:|------------:|--------------:|
| 2022-01-01T08:00:00 | Kitchen | 21.0 | NULL | NULL |
| 2022-01-01T09:00:00 | Kitchen | 23.0 | 21.0 | 2.0 |
| 2022-01-01T10:00:00 | Kitchen | 22.7 | 23.0 | -0.3 |
| 2022-01-01T11:00:00 | Kitchen | 22.4 | 22.7 | -0.3 |
| 2022-01-01T08:00:00 | Living Room | 21.1 | NULL | NULL |
| 2022-01-01T09:00:00 | Living Room | 21.4 | 21.1 | 0.3 |
| 2022-01-01T10:00:00 | Living Room | 21.8 | 21.4 | 0.4 |
| 2022-01-01T11:00:00 | Living Room | 22.2 | 21.8 | 0.4 |

{{% /influxdb/custom-timestamps %}}

### Compare values with exact time offsets

For irregularly spaced time series data or when you need to compare values from an exact time offset (like exactly 1 hour ago, not just the previous row), use a self-join with interval arithmetic.

{{% influxdb/custom-timestamps %}}

```sql
SELECT
  current.time,
  current.room,
  current.temp AS current_temp,
  previous.temp AS temp_1h_ago,
  current.temp - previous.temp AS hourly_diff
FROM home AS current
LEFT JOIN home AS previous
  ON current.room = previous.room
  AND previous.time = current.time - INTERVAL '1 hour'
WHERE
  current.time >= '2022-01-01T08:00:00Z'
  AND current.time < '2022-01-01T12:00:00Z'
ORDER BY current.room, current.time
```

| time | room | current_temp | temp_1h_ago | hourly_diff |
|:--------------------|:------------|-------------:|------------:|------------:|
| 2022-01-01T08:00:00 | Kitchen | 21.0 | NULL | NULL |
| 2022-01-01T09:00:00 | Kitchen | 23.0 | 21.0 | 2.0 |
| 2022-01-01T10:00:00 | Kitchen | 22.7 | 23.0 | -0.3 |
| 2022-01-01T11:00:00 | Kitchen | 22.4 | 22.7 | -0.3 |
| 2022-01-01T08:00:00 | Living Room | 21.1 | NULL | NULL |
| 2022-01-01T09:00:00 | Living Room | 21.4 | 21.1 | 0.3 |
| 2022-01-01T10:00:00 | Living Room | 21.8 | 21.4 | 0.4 |
| 2022-01-01T11:00:00 | Living Room | 22.2 | 21.8 | 0.4 |

{{% /influxdb/custom-timestamps %}}

This self-join approach works when:

- Your data points don't fall at regular intervals
- You need to compare against a specific time offset regardless of when the previous data point occurred
- You want to ensure the comparison is against a value from exactly 1 hour ago (or any other specific interval)
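
If timestamps land near, but not exactly on, the expected offset, the equality join finds no match. One possible workaround (a sketch, assuming a tolerance of about one minute suits your data) is to join on a narrow time range instead:

```sql
SELECT
  current.time,
  current.room,
  current.temp - previous.temp AS hourly_diff
FROM home AS current
LEFT JOIN home AS previous
  ON current.room = previous.room
  AND previous.time BETWEEN current.time - INTERVAL '61 minutes'
                        AND current.time - INTERVAL '59 minutes'
ORDER BY current.room, current.time
```

Keep the tolerance narrow; if more than one previous row falls inside the window, the join emits one output row per match.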

## Handle counter metrics and resets

Counter metrics track cumulative values that increase over time, such as total requests, bytes transferred, or errors.
Unlike gauge metrics (which can go up or down), counters typically only increase, though they may reset to zero when a service restarts.

Use [`GREATEST`](/influxdb3/version/reference/sql/functions/conditional/#greatest) with `LAG` to handle counter resets by treating negative differences as zero.

> [!Note]
> #### InfluxDB 3 SQL and counter metrics
>
> InfluxDB 3 SQL doesn't provide built-in equivalents to Flux's `increase()`
> or InfluxQL's `NON_NEGATIVE_DIFFERENCE()` functions.
> Use the patterns shown below to achieve similar results.

### Calculate non-negative differences (counter rate)

Calculate the increase between consecutive counter readings, treating negative differences (counter resets) as zero.

{{% influxdb/custom-timestamps %}}

```sql
SELECT
  time,
  host,
  requests,
  LAG(requests) OVER (PARTITION BY host ORDER BY time) AS prev_requests,
  GREATEST(
    requests - LAG(requests) OVER (PARTITION BY host ORDER BY time),
    0
  ) AS requests_increase
FROM metrics
WHERE host = 'server1'
ORDER BY time
```

| time | host | requests | prev_requests | requests_increase |
|:--------------------|:--------|---------:|--------------:|------------------:|
| 2024-01-01T00:00:00 | server1 | 1000 | NULL | 0 |
| 2024-01-01T01:00:00 | server1 | 1250 | 1000 | 250 |
| 2024-01-01T02:00:00 | server1 | 1600 | 1250 | 350 |
| 2024-01-01T03:00:00 | server1 | 50 | 1600 | 0 |
| 2024-01-01T04:00:00 | server1 | 300 | 50 | 250 |

{{% /influxdb/custom-timestamps %}}

`LAG(requests)` retrieves the previous counter value, `requests - LAG(requests)` calculates the difference, and `GREATEST(..., 0)` returns 0 for negative differences (counter resets).
`PARTITION BY host` ensures comparisons are only within the same host.
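
To express the increase as a rate, divide it by the number of seconds between readings. A sketch, assuming the readings in this example are exactly one hour (3600 seconds) apart:

```sql
SELECT
  time,
  host,
  GREATEST(
    requests - LAG(requests) OVER (PARTITION BY host ORDER BY time),
    0
  ) / 3600.0 AS requests_per_second
FROM metrics
WHERE host = 'server1'
ORDER BY time
```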

### Calculate cumulative counter increase

Calculate the total increase in a counter over time, handling resets.
Use a Common Table Expression (CTE) to first calculate the differences, then sum them.

{{% influxdb/custom-timestamps %}}

```sql
WITH counter_diffs AS (
  SELECT
    time,
    host,
    requests,
    GREATEST(
      requests - LAG(requests) OVER (PARTITION BY host ORDER BY time),
      0
    ) AS requests_increase
  FROM metrics
  WHERE host = 'server1'
)
SELECT
  time,
  host,
  requests,
  SUM(requests_increase) OVER (PARTITION BY host ORDER BY time) AS cumulative_increase
FROM counter_diffs
ORDER BY time
```

| time | host | requests | cumulative_increase |
|:--------------------|:--------|---------:|--------------------:|
| 2024-01-01T00:00:00 | server1 | 1000 | 0 |
| 2024-01-01T01:00:00 | server1 | 1250 | 250 |
| 2024-01-01T02:00:00 | server1 | 1600 | 600 |
| 2024-01-01T03:00:00 | server1 | 50 | 600 |
| 2024-01-01T04:00:00 | server1 | 300 | 850 |

{{% /influxdb/custom-timestamps %}}

The CTE computes non-negative differences for each row, then `SUM(requests_increase) OVER (...)` creates a running total.
The cumulative increase continues to grow despite the counter reset at 03:00.

### Aggregate counter increases by time interval

Calculate the total increase in a counter for each time interval (for example, hourly totals).

{{% influxdb/custom-timestamps %}}

```sql
WITH counter_diffs AS (
  SELECT
    DATE_BIN(INTERVAL '1 hour', time) AS time_bucket,
    host,
    requests,
    GREATEST(
      requests - LAG(requests) OVER (PARTITION BY host ORDER BY time),
      0
    ) AS requests_increase
  FROM metrics
)
SELECT
  time_bucket,
  host,
  SUM(requests_increase) AS total_increase
FROM counter_diffs
WHERE requests_increase > 0
GROUP BY time_bucket, host
ORDER BY host, time_bucket
```

| time_bucket | host | total_increase |
|:--------------------|:--------|---------------:|
| 2024-01-01T01:00:00 | server1 | 250 |
| 2024-01-01T02:00:00 | server1 | 350 |
| 2024-01-01T04:00:00 | server1 | 250 |
| 2024-01-01T01:00:00 | server2 | 400 |
| 2024-01-01T02:00:00 | server2 | 500 |
| 2024-01-01T03:00:00 | server2 | 300 |
| 2024-01-01T04:00:00 | server2 | 400 |

{{% /influxdb/custom-timestamps %}}

The CTE calculates differences for each row.
`DATE_BIN()` assigns each timestamp to a 1-hour interval, `SUM(requests_increase)` aggregates all increases within each interval, and `WHERE requests_increase > 0` filters out zero increases (first row and counter resets).

@ -42,6 +42,7 @@ how you can opt out.

- **Instance ID**: Unique identifier for the server instance
- **Cluster UUID**: Unique identifier for the cluster
- **Storage type**: Type of object storage being used
- **Invocation**: How the server was started
{{% show-in "core" %}}
- **Product type**: "Core"
{{% /show-in %}}

@ -5,6 +5,7 @@ Learn how to avoid unexpected results and recover from errors when writing to {{

- [Troubleshoot failures](#troubleshoot-failures)
- [Troubleshoot rejected points](#troubleshoot-rejected-points)
- [Report write issues](#report-write-issues)
{{% show-in "cloud-dedicated,clustered" %}}- [Implement an exponential backoff strategy](#implement-an-exponential-backoff-strategy){{% /show-in %}}

## Handle write responses

@ -39,7 +40,7 @@ The `message` property of the response body may contain additional details about

| `404 "Not found"` | A requested **resource type** (for example, "database") and **resource name** | A requested resource wasn't found |
| `422 "Unprocessable Entity"` | `message` contains details about the error | The data isn't allowed (for example, it falls outside of the database's retention period). |
| `500 "Internal server error"` | Empty | Default status for an error |
| `503 "Service unavailable"` | Empty | The server is temporarily unavailable to accept writes. The `Retry-After` header contains the number of seconds to wait before trying the write again. |
| `503 "Service unavailable"` | Empty | The server is temporarily unavailable or the requested service is resource constrained. [Implement an exponential backoff strategy](#implement-an-exponential-backoff-strategy). |
{{% /show-in %}}

{{% show-in "cloud-serverless" %}}

@ -346,3 +347,121 @@ Include the support package when contacting InfluxData support through your stan

- Business context if the issue affects production systems

This comprehensive information will help InfluxData engineers identify root causes and provide targeted solutions for your write issues.

{{% show-in "cloud-dedicated,clustered" %}}
## Implement an exponential backoff strategy

Use exponential backoff with jitter for retrying requests that return `429` or `503`.
This reduces load spikes and avoids thundering-herd problems.

**Recommended parameters**:

- Base delay: 1s
- Multiplier: 2 (double each retry)
- Max delay: 30s
- Max retries: 5 (increase only with care)
- Jitter: use "full jitter" (random between 0 and computed delay)

### Exponential backoff examples

{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[cURL](#)
[Python](#)
[JavaScript](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
<!--------------------------------- BEGIN cURL -------------------------------->
<!--pytest.mark.skip-->
```sh
base=1
max_delay=30
max_retries=5

for attempt in $(seq 0 $max_retries); do
  resp_code=$(curl -s -o /dev/null -w "%{http_code}" --request POST "https://{{< influxdb/host >}}/write?db=DB" ...)
  if [ "$resp_code" -eq 204 ]; then
    echo "Write succeeded"
    break
  fi

  if [ "$resp_code" -ne 429 ] && [ "$resp_code" -ne 503 ]; then
    echo "Non-retryable response: $resp_code"
    break
  fi

  # Compute the exponential delay and apply full jitter
  delay=$(awk -v b=$base -v a=$attempt -v m=$max_delay 'BEGIN{d=b*(2^a); if(d>m) d=m; print d}')
  sleep_seconds=$(awk -v d=$delay 'BEGIN{srand(); printf "%.3f", rand()*d}')
  sleep $sleep_seconds
done
```
<!---------------------------------- END cURL --------------------------------->
{{% /code-tab-content %}}

{{% code-tab-content %}}
<!-------------------------------- BEGIN Python ------------------------------->
<!--pytest.mark.skip-->
```python
import random
import time

import requests

base = 1.0
max_delay = 30.0
max_retries = 5

for attempt in range(max_retries + 1):
    r = requests.post(url, headers=headers, data=body, timeout=10)
    if r.status_code == 204:
        break
    if r.status_code not in (429, 503):
        raise RuntimeError(f"Non-retryable: {r.status_code} {r.text}")

    # Exponential backoff with full jitter
    retry_delay = min(base * (2 ** attempt), max_delay)
    sleep = random.random() * retry_delay  # full jitter
    time.sleep(sleep)
else:
    raise RuntimeError("Max retries exceeded")
```
<!--------------------------------- END Python -------------------------------->
{{% /code-tab-content %}}

{{% code-tab-content %}}
<!------------------------------ BEGIN JavaScript ----------------------------->
<!--pytest.mark.skip-->
```js
const base = 1000;
const maxDelay = 30000;
const maxRetries = 5;

async function sleep(ms) { return new Promise(r => setTimeout(r, ms)); }

for (let attempt = 0; attempt <= maxRetries; attempt++) {
  const res = await fetch(url, { method: 'POST', body });
  if (res.status === 204) break;
  if (![429, 503].includes(res.status)) throw new Error(`Non-retryable ${res.status}`);

  let delay = base * 2 ** attempt;
  delay = Math.min(delay, maxDelay);

  const sleepMs = Math.random() * delay; // full jitter
  await sleep(sleepMs);
}
```
<!------------------------------- END JavaScript ------------------------------>
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}

### Exponential backoff best practices

- Only retry requests that are idempotent or otherwise safe to repeat.
- Retry only for `429` (Too Many Requests) and `503` (Service Unavailable).
- Do not retry on client errors like `400`, `401`, `404`, or `422`.
- Cap the delay with `max_delay` to avoid excessively long waits.
- Limit total retries to avoid infinite loops and provide meaningful errors.
- Log retry attempts and backoff delays for observability and debugging.
- Combine backoff with bounded concurrency to avoid overwhelming the server.

{{% /show-in %}}

@ -268,8 +268,30 @@ GROUP BY _time, room

## version

Returns the version of DataFusion.
Returns the version of [DataFusion](https://datafusion.apache.org/) used by the query engine.

> [!Note]
>
> The `version()` function returns the DataFusion query engine version, not the
> InfluxDB product version.
> To identify your InfluxDB version, see [Identify version](/influxdb3/version/admin/identify-version/).

```sql
version()
```

{{< expand-wrapper >}}
{{% expand "View `version` query example" %}}

```sql
SELECT version()
```

| version() |
| :----------------------------------------- |
| Apache DataFusion 49.0.2, aarch64 on linux |

The output includes the DataFusion version, CPU architecture, and operating system.

{{% /expand %}}
{{< /expand-wrapper >}}

@ -914,6 +914,93 @@ ORDER BY room, time

{{% /influxdb/custom-timestamps %}}

{{% /expand %}}

{{% expand "Calculate the difference from a previous value" %}}

Use `LAG` with arithmetic to calculate the difference between the current value and a previous value.
This is useful for detecting changes over time.

The following query calculates the temperature change from the previous reading:

```sql
SELECT
  time,
  room,
  temp,
  temp - lag(temp, 1) OVER (
    PARTITION BY room
    ORDER BY time
  ) AS temp_change
FROM home
WHERE
  time >= '2022-01-01T08:00:00Z'
  AND time < '2022-01-01T11:00:00Z'
ORDER BY room, time
```

| time | room | temp | temp_change |
|:--------------------|:------------|-----:|------------:|
| 2022-01-01T08:00:00 | Kitchen | 21.0 | NULL |
| 2022-01-01T09:00:00 | Kitchen | 23.0 | 2.0 |
| 2022-01-01T10:00:00 | Kitchen | 22.7 | -0.3 |
| 2022-01-01T08:00:00 | Living Room | 21.1 | NULL |
| 2022-01-01T09:00:00 | Living Room | 21.4 | 0.3 |
| 2022-01-01T10:00:00 | Living Room | 21.8 | 0.4 |

{{% /expand %}}

{{% expand "Calculate the difference from a value at a specific time offset" %}}

For regularly spaced data, use `LAG` with an offset to compare values from a specific time period ago.

The following query compares temperature values that are 1 hour apart (assuming hourly data):

```sql
SELECT
  time,
  room,
  temp,
  lag(temp, 1) OVER (
    PARTITION BY room
    ORDER BY time
  ) AS temp_1h_ago,
  temp - lag(temp, 1) OVER (
    PARTITION BY room
    ORDER BY time
  ) AS hourly_change
FROM home
WHERE
  time >= '2022-01-01T08:00:00Z'
  AND time < '2022-01-01T12:00:00Z'
ORDER BY room, time
```

{{% /expand %}}

{{% expand "Use a self-join for irregularly spaced data" %}}

For irregularly spaced time series data where you need to compare values from an exact time offset (like exactly 1 hour ago), use a self-join with interval arithmetic:

```sql
SELECT
  current.time,
  current.room,
  current.temp AS current_temp,
  previous.temp AS temp_1h_ago,
  current.temp - previous.temp AS hourly_diff
FROM home AS current
LEFT JOIN home AS previous
  ON current.room = previous.room
  AND previous.time = current.time - INTERVAL '1 hour'
WHERE
  current.time >= '2022-01-01T08:00:00Z'
  AND current.time < '2022-01-01T12:00:00Z'
ORDER BY current.room, current.time
```

This approach works when your data points don't fall at regular intervals, or when you need to compare against a specific time offset regardless of when the previous data point occurred.

{{% /expand %}}
{{< /expand-wrapper >}}

@ -1063,21 +1150,21 @@ SELECT

  temp,
  nth_value(temp, 2) OVER (
    PARTITION BY room
  ) AS "2nd_temp"
  ) AS second_temp
FROM home
WHERE
  time >= '2025-02-10T08:00:00Z'
  AND time < '2025-02-10T11:00:00Z'
```

| time | room | temp | 2nd_temp |
| :------------------ | :---------- | ---: | -------: |
| 2025-02-10T08:00:00 | Kitchen | 21.0 | 22.7 |
| 2025-02-10T10:00:00 | Kitchen | 22.7 | 22.7 |
| 2025-02-10T09:00:00 | Kitchen | 23.0 | 22.7 |
| 2025-02-10T08:00:00 | Living Room | 21.1 | 21.8 |
| 2025-02-10T10:00:00 | Living Room | 21.8 | 21.8 |
| 2025-02-10T09:00:00 | Living Room | 21.4 | 21.8 |

| time | room | temp | second_temp |
| :------------------ | :---------- | ---: | ----------: |
| 2025-02-10T08:00:00 | Kitchen | 21.0 | 22.7 |
| 2025-02-10T10:00:00 | Kitchen | 22.7 | 22.7 |
| 2025-02-10T09:00:00 | Kitchen | 23.0 | 22.7 |
| 2025-02-10T08:00:00 | Living Room | 21.1 | 21.8 |
| 2025-02-10T10:00:00 | Living Room | 21.8 | 21.8 |
| 2025-02-10T09:00:00 | Living Room | 21.4 | 21.8 |

{{% /influxdb/custom-timestamps %}}

@ -10,7 +10,7 @@ introduced: "v1.5.0"

os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/aggregators/basicstats/README.md, Basic Statistics Plugin Source
  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/aggregators/basicstats/README.md, Basic Statistics Plugin Source
---

# Basic Statistics Aggregator Plugin

@ -25,10 +25,9 @@ emits these statistical values every `period`.

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and field or create aliases and configure ordering, etc.
See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
Plugins support additional global and plugin configuration settings for tasks
such as modifying metrics, tags, and fields, creating aliases, and configuring
plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins

@ -10,7 +10,7 @@ introduced: "v1.18.0"

os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/aggregators/derivative/README.md, Derivative Plugin Source
  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/aggregators/derivative/README.md, Derivative Plugin Source
---

# Derivative Aggregator Plugin

@ -23,10 +23,9 @@ This plugin computes the derivative for all fields of the aggregated metrics.

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and field or create aliases and configure ordering, etc.
See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
Plugins support additional global and plugin configuration settings for tasks
such as modifying metrics, tags, and fields, creating aliases, and configuring
plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
@@ -10,7 +10,7 @@ introduced: "v1.11.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/aggregators/final/README.md, Final Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/aggregators/final/README.md, Final Plugin Source
 ---

 # Final Aggregator Plugin
@@ -38,10 +38,9 @@ metrics collected at a higher frequency.

 ## Global configuration options <!-- @/docs/includes/plugin_config.md -->

-In addition to the plugin-specific configuration settings, plugins support
-additional global and plugin configuration settings. These settings are used to
-modify metrics, tags, and field or create aliases and configure ordering, etc.
-See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+Plugins support additional global and plugin configuration settings for tasks
+such as modifying metrics, tags, and fields, creating aliases, and configuring
+plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

 [CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
@@ -10,7 +10,7 @@ introduced: "v1.4.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/aggregators/histogram/README.md, Histogram Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/aggregators/histogram/README.md, Histogram Plugin Source
 ---

 # Histogram Aggregator Plugin
@@ -34,10 +34,9 @@ consecutive buckets in the distribution creating a [cumulative histogram](https:

 ## Global configuration options <!-- @/docs/includes/plugin_config.md -->

-In addition to the plugin-specific configuration settings, plugins support
-additional global and plugin configuration settings. These settings are used to
-modify metrics, tags, and field or create aliases and configure ordering, etc.
-See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+Plugins support additional global and plugin configuration settings for tasks
+such as modifying metrics, tags, and fields, creating aliases, and configuring
+plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

 [CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
@@ -10,7 +10,7 @@ introduced: "v1.13.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/aggregators/merge/README.md, Merge Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/aggregators/merge/README.md, Merge Plugin Source
 ---

 # Merge Aggregator Plugin
@@ -28,10 +28,9 @@ measurement, tag set and timestamp.

 ## Global configuration options <!-- @/docs/includes/plugin_config.md -->

-In addition to the plugin-specific configuration settings, plugins support
-additional global and plugin configuration settings. These settings are used to
-modify metrics, tags, and field or create aliases and configure ordering, etc.
-See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+Plugins support additional global and plugin configuration settings for tasks
+such as modifying metrics, tags, and fields, creating aliases, and configuring
+plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

 [CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
@@ -10,7 +10,7 @@ introduced: "v1.1.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/aggregators/minmax/README.md, Minimum-Maximum Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/aggregators/minmax/README.md, Minimum-Maximum Plugin Source
 ---

 # Minimum-Maximum Aggregator Plugin
@@ -25,10 +25,9 @@ and `_max` respectively.

 ## Global configuration options <!-- @/docs/includes/plugin_config.md -->

-In addition to the plugin-specific configuration settings, plugins support
-additional global and plugin configuration settings. These settings are used to
-modify metrics, tags, and field or create aliases and configure ordering, etc.
-See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+Plugins support additional global and plugin configuration settings for tasks
+such as modifying metrics, tags, and fields, creating aliases, and configuring
+plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

 [CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
@@ -10,7 +10,7 @@ introduced: "v1.18.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/aggregators/quantile/README.md, Quantile Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/aggregators/quantile/README.md, Quantile Plugin Source
 ---

 # Quantile Aggregator Plugin
@@ -25,10 +25,9 @@ algorithms are supported with varying accuracy and limitations.

 ## Global configuration options <!-- @/docs/includes/plugin_config.md -->

-In addition to the plugin-specific configuration settings, plugins support
-additional global and plugin configuration settings. These settings are used to
-modify metrics, tags, and field or create aliases and configure ordering, etc.
-See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+Plugins support additional global and plugin configuration settings for tasks
+such as modifying metrics, tags, and fields, creating aliases, and configuring
+plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

 [CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
@@ -82,7 +81,7 @@ For implementation details see the underlying [golang library](https://github.co
 ### exact R7 and R8

 These algorithms compute quantiles as described in [Hyndman & Fan
-(1996)](). The R7 variant is used in Excel and NumPy. The R8
+(1996)](http://www.maths.usyd.edu.au/u/UG/SM/STAT3022/r/current/Misc/Sample%20Quantiles%20in%20Statistical%20Packages.pdf). The R7 variant is used in Excel and NumPy. The R8
 variant is recommended by Hyndman & Fan due to its independence of the
 underlying sample distribution.
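The R7 estimator named in the quantile hunk above is plain linear interpolation over the sorted sample. As a minimal sketch (this is illustrative Python, not the plugin's Go implementation):

```python
def quantile_r7(data, q):
    """Estimate quantile q (0 <= q <= 1) using the R7 rule from
    Hyndman & Fan (1996): linear interpolation between order statistics."""
    xs = sorted(data)
    n = len(xs)
    if n == 1:
        return xs[0]
    h = (n - 1) * q        # fractional position in the sorted sample
    lo = int(h)            # index of the lower order statistic
    frac = h - lo
    if lo + 1 >= n:        # q == 1.0 lands exactly on the maximum
        return xs[-1]
    return xs[lo] + frac * (xs[lo + 1] - xs[lo])

print(quantile_r7([1, 2, 3, 4], 0.5))  # -> 2.5
```

This matches NumPy's default interpolation, which is why the docs note R7 agreement with Excel and NumPy.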
@@ -10,7 +10,7 @@ introduced: "v1.21.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/aggregators/starlark/README.md, Starlark Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/aggregators/starlark/README.md, Starlark Plugin Source
 ---

 # Starlark Aggregator Plugin
@@ -51,10 +51,9 @@ More details on the syntax and available functions can be found in the

 ## Global configuration options <!-- @/docs/includes/plugin_config.md -->

-In addition to the plugin-specific configuration settings, plugins support
-additional global and plugin configuration settings. These settings are used to
-modify metrics, tags, and field or create aliases and configure ordering, etc.
-See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+Plugins support additional global and plugin configuration settings for tasks
+such as modifying metrics, tags, and fields, creating aliases, and configuring
+plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

 [CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
@@ -10,7 +10,7 @@ introduced: "v1.8.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/aggregators/valuecounter/README.md, Value Counter Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/aggregators/valuecounter/README.md, Value Counter Plugin Source
 ---

 # Value Counter Aggregator Plugin
@@ -37,10 +37,9 @@ other categorical values in the defined `period`.

 ## Global configuration options <!-- @/docs/includes/plugin_config.md -->

-In addition to the plugin-specific configuration settings, plugins support
-additional global and plugin configuration settings. These settings are used to
-modify metrics, tags, and field or create aliases and configure ordering, etc.
-See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+Plugins support additional global and plugin configuration settings for tasks
+such as modifying metrics, tags, and fields, creating aliases, and configuring
+plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

 [CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
@@ -7,6 +7,9 @@ menu:
     name: Aggregator and processor plugins
     weight: 50
     parent: Configure plugins
+related:
+  - /telegraf/v1/aggregator-plugins/
+  - /telegraf/v1/processor-plugins/
 ---

@@ -8,6 +8,8 @@ menu:
     name: Input plugins
     weight: 10
     parent: Configure plugins
+related:
+  - /telegraf/v1/input-plugins/
 ---

 Telegraf input plugins are used with the InfluxData time series platform to collect metrics from the system, services, or third-party APIs. All metrics are gathered from the inputs you enable and configure in the [Telegraf configuration file](/telegraf/v1/configuration/).
@@ -1,13 +1,15 @@
 ---
 title: Write data with output plugins
 description: |
-  Output plugins define where Telegraf will deliver the collected metrics.
+  Output plugins define where Telegraf will deliver the collected metrics.
 menu:
   telegraf_v1:
     name: Output plugins
     weight: 20
     parent: Configure plugins
+related:
+  - /telegraf/v1/output-plugins/
 ---
 Output plugins define where Telegraf will deliver the collected metrics. Send metrics to InfluxDB or to a variety of other datastores, services, and message queues, including Graphite, OpenTSDB, Datadog, Librato, Kafka, MQTT, and NSQ.
@@ -10,7 +10,7 @@ introduced: "v1.8.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/inputs/activemq/README.md, ActiveMQ Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/inputs/activemq/README.md, ActiveMQ Plugin Source
 ---

 # ActiveMQ Input Plugin
@@ -26,10 +26,9 @@ This plugin gathers queue, topics and subscribers metrics using the Console API

 ## Global configuration options <!-- @/docs/includes/plugin_config.md -->

-In addition to the plugin-specific configuration settings, plugins support
-additional global and plugin configuration settings. These settings are used to
-modify metrics, tags, and field or create aliases and configure ordering, etc.
-See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+Plugins support additional global and plugin configuration settings for tasks
+such as modifying metrics, tags, and fields, creating aliases, and configuring
+plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

 [CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
@@ -12,7 +12,7 @@ removal: v1.40.0
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/inputs/aerospike/README.md, Aerospike Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/inputs/aerospike/README.md, Aerospike Plugin Source
 ---

 # Aerospike Input Plugin
@@ -47,10 +47,9 @@ in order.

 ## Global configuration options <!-- @/docs/includes/plugin_config.md -->

-In addition to the plugin-specific configuration settings, plugins support
-additional global and plugin configuration settings. These settings are used to
-modify metrics, tags, and field or create aliases and configure ordering, etc.
-See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+Plugins support additional global and plugin configuration settings for tasks
+such as modifying metrics, tags, and fields, creating aliases, and configuring
+plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

 [CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
@@ -68,13 +67,30 @@ See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details
   # password = "pa$$word"

-  ## Optional TLS Config
-  # enable_tls = false
-  # tls_ca = "/etc/telegraf/ca.pem"
-  # tls_cert = "/etc/telegraf/cert.pem"
-  # tls_key = "/etc/telegraf/key.pem"
-  # tls_name = "tlsname"
-  ## If false, skip chain & host verification
-  # insecure_skip_verify = true
+  ## Set to true/false to enforce TLS being enabled/disabled. If not set,
+  ## enable TLS only if any of the other options are specified.
+  # tls_enable =
+  ## Trusted root certificates for server
+  # tls_ca = "/path/to/cafile"
+  ## Used for TLS client certificate authentication
+  # tls_cert = "/path/to/certfile"
+  ## Used for TLS client certificate authentication
+  # tls_key = "/path/to/keyfile"
+  ## Password for the key file if it is encrypted
+  # tls_key_pwd = ""
+  ## Send the specified TLS server name via SNI
+  # tls_server_name = "kubernetes.example.com"
+  ## Minimal TLS version to accept by the client
+  # tls_min_version = "TLS12"
+  ## List of ciphers to accept, by default all secure ciphers will be accepted
+  ## See https://pkg.go.dev/crypto/tls#pkg-constants for supported values.
+  ## Use "all", "secure" and "insecure" to add all support ciphers, secure
+  ## suites or insecure suites respectively.
+  # tls_cipher_suites = ["secure"]
+  ## Renegotiation method, "never", "once" or "freely"
+  # tls_renegotiation_method = "never"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false

   # Feature Options
   # Add namespace variable to limit the namespaces executed on
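The Aerospike hunk above swaps the plugin's ad-hoc TLS settings for Telegraf's common TLS client options. As a hypothetical sketch (the server address and certificate paths are placeholders, not values from this diff), an input section using those options might look like:

```toml
# Illustrative [[inputs.aerospike]] section using the common TLS client
# options shown in the diff; paths and addresses are placeholders.
[[inputs.aerospike]]
  servers = ["localhost:3000"]
  ## Trusted root certificates for server
  tls_ca = "/etc/telegraf/ca.pem"
  ## TLS client certificate authentication
  tls_cert = "/etc/telegraf/cert.pem"
  tls_key = "/etc/telegraf/key.pem"
  ## Keep chain & host verification on outside of test environments
  insecure_skip_verify = false
```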
@@ -10,7 +10,7 @@ introduced: "v1.19.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/inputs/aliyuncms/README.md, Alibaba Cloud Monitor Service (Aliyun) Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/inputs/aliyuncms/README.md, Alibaba Cloud Monitor Service (Aliyun) Plugin Source
 ---

 # Alibaba Cloud Monitor Service (Aliyun) Input Plugin
@@ -46,10 +46,9 @@ to authenticate.

 ## Global configuration options <!-- @/docs/includes/plugin_config.md -->

-In addition to the plugin-specific configuration settings, plugins support
-additional global and plugin configuration settings. These settings are used to
-modify metrics, tags, and field or create aliases and configure ordering, etc.
-See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+Plugins support additional global and plugin configuration settings for tasks
+such as modifying metrics, tags, and fields, creating aliases, and configuring
+plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

 [CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
@@ -10,7 +10,7 @@ introduced: "v1.20.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/inputs/amd_rocm_smi/README.md, AMD ROCm System Management Interface (SMI) Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/inputs/amd_rocm_smi/README.md, AMD ROCm System Management Interface (SMI) Plugin Source
 ---

 # AMD ROCm System Management Interface (SMI) Input Plugin
@@ -19,7 +19,7 @@ This plugin gathers statistics including memory and GPU usage, temperatures
 etc from [AMD ROCm platform](https://rocm.docs.amd.com/) GPUs.

 > [!IMPORTANT]
-> The [`rocm-smi` binary]() is required and needs to be installed on the
+> The [`rocm-smi` binary](https://github.com/RadeonOpenCompute/rocm_smi_lib/tree/master/python_smi_tools) is required and needs to be installed on the
 > system.

 **Introduced in:** Telegraf v1.20.0
@@ -31,10 +31,9 @@ etc from [AMD ROCm platform](https://rocm.docs.amd.com/) GPUs.

 ## Global configuration options <!-- @/docs/includes/plugin_config.md -->

-In addition to the plugin-specific configuration settings, plugins support
-additional global and plugin configuration settings. These settings are used to
-modify metrics, tags, and field or create aliases and configure ordering, etc.
-See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+Plugins support additional global and plugin configuration settings for tasks
+such as modifying metrics, tags, and fields, creating aliases, and configuring
+plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

 [CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
@@ -10,7 +10,7 @@ introduced: "v1.3.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/inputs/amqp_consumer/README.md, AMQP Consumer Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/inputs/amqp_consumer/README.md, AMQP Consumer Plugin Source
 ---

 # AMQP Consumer Input Plugin
@@ -47,10 +47,9 @@ normal plugins:

 ## Global configuration options <!-- @/docs/includes/plugin_config.md -->

-In addition to the plugin-specific configuration settings, plugins support
-additional global and plugin configuration settings. These settings are used to
-modify metrics, tags, and field or create aliases and configure ordering, etc.
-See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+Plugins support additional global and plugin configuration settings for tasks
+such as modifying metrics, tags, and fields, creating aliases, and configuring
+plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

 [CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
@@ -10,13 +10,13 @@ introduced: "v1.8.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/inputs/apache/README.md, Apache Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/inputs/apache/README.md, Apache Plugin Source
 ---

 # Apache Input Plugin

 This plugin collects performance information from [Apache HTTP Servers](https://httpd.apache.org)
-using the [`mod_status` module](). Typically, this module is
+using the [`mod_status` module](https://httpd.apache.org/docs/current/mod/mod_status.html). Typically, this module is
 configured to expose a page at the `/server-status?auto` endpoint the server.

 The [ExtendedStatus option](https://httpd.apache.org/docs/current/mod/core.html#extendedstatus) must be enabled in order to collect
@@ -33,10 +33,9 @@ the [module documentation](https://httpd.apache.org/docs/current/mod/mod_status.

 ## Global configuration options <!-- @/docs/includes/plugin_config.md -->

-In addition to the plugin-specific configuration settings, plugins support
-additional global and plugin configuration settings. These settings are used to
-modify metrics, tags, and field or create aliases and configure ordering, etc.
-See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+Plugins support additional global and plugin configuration settings for tasks
+such as modifying metrics, tags, and fields, creating aliases, and configuring
+plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

 [CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
@@ -10,7 +10,7 @@ introduced: "v1.12.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/inputs/apcupsd/README.md, APC UPSD Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/inputs/apcupsd/README.md, APC UPSD Plugin Source
 ---

 # APC UPSD Input Plugin
@@ -27,10 +27,9 @@ accessible.

 ## Global configuration options <!-- @/docs/includes/plugin_config.md -->

-In addition to the plugin-specific configuration settings, plugins support
-additional global and plugin configuration settings. These settings are used to
-modify metrics, tags, and field or create aliases and configure ordering, etc.
-See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+Plugins support additional global and plugin configuration settings for tasks
+such as modifying metrics, tags, and fields, creating aliases, and configuring
+plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

 [CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
@@ -10,7 +10,7 @@ introduced: "v1.7.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/inputs/aurora/README.md, Apache Aurora Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/inputs/aurora/README.md, Apache Aurora Plugin Source
 ---

 # Apache Aurora Input Plugin
@@ -28,10 +28,9 @@ article.

 ## Global configuration options <!-- @/docs/includes/plugin_config.md -->

-In addition to the plugin-specific configuration settings, plugins support
-additional global and plugin configuration settings. These settings are used to
-modify metrics, tags, and field or create aliases and configure ordering, etc.
-See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+Plugins support additional global and plugin configuration settings for tasks
+such as modifying metrics, tags, and fields, creating aliases, and configuring
+plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

 [CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
@@ -10,7 +10,7 @@ introduced: "v1.25.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/inputs/azure_monitor/README.md, Azure Monitor Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/inputs/azure_monitor/README.md, Azure Monitor Plugin Source
 ---

 # Azure Monitor Input Plugin
@@ -67,10 +67,9 @@ subscription with resource type.

 ## Global configuration options <!-- @/docs/includes/plugin_config.md -->

-In addition to the plugin-specific configuration settings, plugins support
-additional global and plugin configuration settings. These settings are used to
-modify metrics, tags, and field or create aliases and configure ordering, etc.
-See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+Plugins support additional global and plugin configuration settings for tasks
+such as modifying metrics, tags, and fields, creating aliases, and configuring
+plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

 [CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
@@ -10,7 +10,7 @@ introduced: "v1.13.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/inputs/azure_storage_queue/README.md, Azure Queue Storage Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/inputs/azure_storage_queue/README.md, Azure Queue Storage Plugin Source
 ---

 # Azure Queue Storage Input Plugin
@@ -26,10 +26,9 @@ service, storing a large numbers of messages.

 ## Global configuration options <!-- @/docs/includes/plugin_config.md -->

-In addition to the plugin-specific configuration settings, plugins support
-additional global and plugin configuration settings. These settings are used to
-modify metrics, tags, and field or create aliases and configure ordering, etc.
-See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+Plugins support additional global and plugin configuration settings for tasks
+such as modifying metrics, tags, and fields, creating aliases, and configuring
+plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

 [CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
@@ -10,7 +10,7 @@ introduced: "v0.2.0"
 os_support: "linux"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/inputs/bcache/README.md, Bcache Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/inputs/bcache/README.md, Bcache Plugin Source
 ---

 # Bcache Input Plugin
@@ -26,10 +26,9 @@ from the `stats_total` directory and `dirty_data` file.

 ## Global configuration options <!-- @/docs/includes/plugin_config.md -->

-In addition to the plugin-specific configuration settings, plugins support
-additional global and plugin configuration settings. These settings are used to
-modify metrics, tags, and field or create aliases and configure ordering, etc.
-See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+Plugins support additional global and plugin configuration settings for tasks
+such as modifying metrics, tags, and fields, creating aliases, and configuring
+plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

 [CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
@@ -10,7 +10,7 @@ introduced: "v1.8.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/inputs/beanstalkd/README.md, Beanstalkd Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/inputs/beanstalkd/README.md, Beanstalkd Plugin Source
 ---

 # Beanstalkd Input Plugin
@@ -27,10 +27,9 @@ server commands.

 ## Global configuration options <!-- @/docs/includes/plugin_config.md -->

-In addition to the plugin-specific configuration settings, plugins support
-additional global and plugin configuration settings. These settings are used to
-modify metrics, tags, and field or create aliases and configure ordering, etc.
-See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+Plugins support additional global and plugin configuration settings for tasks
+such as modifying metrics, tags, and fields, creating aliases, and configuring
+plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

 [CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
@@ -10,7 +10,7 @@ introduced: "v1.18.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/inputs/beat/README.md, Beat Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/inputs/beat/README.md, Beat Plugin Source
 ---

 # Beat Input Plugin
@@ -26,10 +26,9 @@ to work with Filebeat and Kafkabeat.

 ## Global configuration options <!-- @/docs/includes/plugin_config.md -->

-In addition to the plugin-specific configuration settings, plugins support
-additional global and plugin configuration settings. These settings are used to
-modify metrics, tags, and field or create aliases and configure ordering, etc.
-See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+Plugins support additional global and plugin configuration settings for tasks
+such as modifying metrics, tags, and fields, creating aliases, and configuring
+plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

 [CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
@@ -10,7 +10,7 @@ introduced: "v1.11.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/inputs/bind/README.md, BIND 9 Nameserver Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/inputs/bind/README.md, BIND 9 Nameserver Plugin Source
 ---

 # BIND 9 Nameserver Input Plugin

@@ -38,10 +38,9 @@ distros still do not enable support for JSON statistics in their BIND packages.

 ## Global configuration options <!-- @/docs/includes/plugin_config.md -->

-In addition to the plugin-specific configuration settings, plugins support
-additional global and plugin configuration settings. These settings are used to
-modify metrics, tags, and field or create aliases and configure ordering, etc.
-See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+Plugins support additional global and plugin configuration settings for tasks
+such as modifying metrics, tags, and fields, creating aliases, and configuring
+plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

 [CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins

@@ -10,7 +10,7 @@ introduced: "v1.5.0"
 os_support: "freebsd, linux, macos, solaris, windows"
 related:
   - /telegraf/v1/configure_plugins/
-  - https://github.com/influxdata/telegraf/tree/v1.36.4/plugins/inputs/bond/README.md, Bond Plugin Source
+  - https://github.com/influxdata/telegraf/tree/v1.37.0/plugins/inputs/bond/README.md, Bond Plugin Source
 ---

 # Bond Input Plugin

@@ -24,10 +24,9 @@ slave interfaces using `/proc/net/bonding/*` files.

 ## Global configuration options <!-- @/docs/includes/plugin_config.md -->

-In addition to the plugin-specific configuration settings, plugins support
-additional global and plugin configuration settings. These settings are used to
-modify metrics, tags, and field or create aliases and configure ordering, etc.
-See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+Plugins support additional global and plugin configuration settings for tasks
+such as modifying metrics, tags, and fields, creating aliases, and configuring
+plugin ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

 [CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
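Each of these diffs rewrites the same boilerplate paragraph about Telegraf's global and plugin-level configuration settings. As a rough illustration of what that paragraph refers to, a Telegraf TOML fragment might look like the following (the option names follow Telegraf's documented configuration format; the beanstalkd address and all values are hypothetical):

```toml
# Global agent settings apply to every plugin.
[agent]
  interval = "10s"   # default collection interval for all inputs

# Plugin-level settings available to any input plugin,
# shown here on the beanstalkd input as an example.
[[inputs.beanstalkd]]
  server = "localhost:11300"     # hypothetical beanstalkd address
  alias = "beanstalkd-primary"   # name this plugin instance in logs
  interval = "30s"               # override the global interval
  name_override = "queue_stats"  # rename the emitted measurement

  # Add tags to every metric this plugin instance produces.
  [inputs.beanstalkd.tags]
    env = "staging"
```

Settings such as `alias`, `interval`, `name_override`, and per-plugin `tags` are available to all input plugins, which is why the docs factor the paragraph into a shared include (`@/docs/includes/plugin_config.md`) rather than repeating plugin-specific wording.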
Some files were not shown because too many files have changed in this diff.