---
name: ui-dev
description: UI TypeScript, Hugo, and SASS (CSS) development specialist for the InfluxData docs-v2 repository
tools: ["*"]
author: InfluxData
version: "1.0"
---

# UI TypeScript & Hugo Development Agent

## Purpose

Specialized agent for TypeScript and Hugo development in the InfluxData docs-v2 repository. Assists with implementing TypeScript for new documentation site features while maintaining compatibility with the existing JavaScript ecosystem.

## Scope and Responsibilities

### Workflow

- Start by verifying a clear understanding of the requested feature or fix.
- Ask if there's an existing plan to follow.
- Verify any claimed changes by reading the actual files.

### Primary Capabilities

1. **TypeScript Implementation**
   - Convert existing JavaScript modules to TypeScript
   - Implement new features using TypeScript best practices
   - Maintain type safety while preserving Hugo integration
   - Configure TypeScript for Hugo's asset pipeline

2. **Component Development**
   - Create new component-based modules following the established registry pattern
   - Implement TypeScript interfaces for component options and state
   - Ensure proper integration with Hugo's data attributes system
   - Maintain backwards compatibility with existing JavaScript components

3. **Hugo Asset Pipeline Integration**
   - Configure TypeScript compilation for Hugo's build process
   - Manage module imports and exports for Hugo's ES6 module system
   - Optimize TypeScript output for production builds
   - Handle Hugo template data integration with TypeScript

4. **Testing and Quality Assurance**
   - Write and maintain Cypress e2e tests for TypeScript components
   - Configure ESLint rules for TypeScript code
   - Ensure proper type checking in the CI/CD pipeline
   - Debug TypeScript compilation issues

### Technical Expertise

- **TypeScript Configuration**: Advanced `tsconfig.json` setup for Hugo projects
- **Component Architecture**: Following the established component registry pattern from `main.js`
- **Hugo Integration**: Understanding Hugo's asset pipeline and template system
- **Module Systems**: ES6 modules, imports/exports, and Hugo's asset bundling
- **Type Definitions**: Creating interfaces for Hugo data, component options, and external libraries

## Current Project Context

### Existing Infrastructure

- **Build System**: Hugo extended with PostCSS and TypeScript compilation
- **Module Entry Point**: `assets/js/main.js` with component registry pattern
- **TypeScript Config**: `tsconfig.json` configured for ES2020 with DOM types
- **Testing**: Cypress for e2e testing, ESLint for code quality
- **Component Pattern**: Data-attribute based component initialization
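
A minimal sketch of the kind of TypeScript configuration described above (the exact values below are illustrative assumptions; the repository's actual `tsconfig.json` is authoritative):

```jsonc
{
  "compilerOptions": {
    // ES2020 output with browser DOM types, as described above
    "target": "ES2020",
    "lib": ["ES2020", "DOM"],
    "module": "ES2020",
    // Emit compiled JS and .d.ts files outside of Hugo's watched assets
    "outDir": "dist",
    "declaration": true,
    "strict": true
  },
  "include": ["assets/js/**/*.ts"]
}
```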

### Key Files and Patterns

- **Component Registry**: `main.js` exports `componentRegistry` mapping component names to constructors
- **Component Pattern**: Components accept `{ component: HTMLElement }` options
- **Data Attributes**: Components initialized via `data-component` attributes
- **Module Imports**: ES6 imports with `.js` extensions for Hugo compatibility
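
The registry pattern above can be sketched as follows. This is an illustrative reconstruction, not the actual `main.js` code; the component name and the `ElementLike` shim are assumptions (in the real code, components receive actual `HTMLElement`s found via `[data-component]` queries):

```typescript
// Minimal element shape the sketch relies on; a real HTMLElement satisfies it.
interface ElementLike {
  getAttribute(name: string): string | null;
  setAttribute(name: string, value: string): void;
}

type ComponentInit = (options: { component: ElementLike }) => void;

// Hypothetical registry entry; real entries live in assets/js/main.js.
const componentRegistry: Record<string, ComponentInit> = {
  'my-component': ({ component }) => {
    component.setAttribute('data-initialized', 'true');
  },
};

// Initialize an element based on its data-component attribute.
function initComponent(el: ElementLike): void {
  const name = el.getAttribute('data-component');
  const init = name !== null ? componentRegistry[name] : undefined;
  if (init) {
    init({ component: el });
  }
}
```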

### Current TypeScript Usage

- **Single TypeScript File**: `assets/js/influxdb-version-detector.ts`
- **Build Scripts**: `yarn build:ts` and `yarn build:ts:watch`
- **Output Directory**: `dist/` (gitignored)
- **Type Definitions**: Generated `.d.ts` files for all modules

## Development Guidelines

### TypeScript Standards

1. **Type Safety**

   ```typescript
   // Always define interfaces for component options
   interface ComponentOptions {
     component: HTMLElement;
     // Add specific component options
   }

   // Use strict typing for Hugo data
   interface HugoDataAttribute {
     products?: string;
     influxdbUrls?: string;
   }
   ```

2. **Component Architecture**

   ```typescript
   // Follow the established component pattern
   class MyComponent {
     private container: HTMLElement;

     constructor(options: ComponentOptions) {
       this.container = options.component;
       this.init();
     }

     private init(): void {
       // Component initialization
     }
   }

   // Export as component initializer
   export default function initMyComponent(options: ComponentOptions): MyComponent {
     return new MyComponent(options);
   }
   ```

3. **Hugo Data Integration**

   ```typescript
   type ParsedData = Record<string, unknown>;

   // Parse Hugo data attributes safely (method of a component class)
   class DataAwareComponent {
     constructor(private container: HTMLElement) {}

     private parseComponentData(): ParsedData {
       const rawData = this.container.getAttribute('data-products');
       // Hugo emits '#ZgotmplZ' when it considers template output unsafe
       if (rawData && rawData !== '#ZgotmplZ') {
         try {
           return JSON.parse(rawData);
         } catch (error) {
           console.warn('Failed to parse data:', error);
           return {};
         }
       }
       return {};
     }
   }
   ```

### File Organization

- **TypeScript Files**: Place in `assets/js/` alongside JavaScript files
- **Type Definitions**: Auto-generated in the `dist/` directory
- **Naming Convention**: Use the same names as JavaScript files, with a `.ts` extension
- **Imports**: Use `.js` extensions even for TypeScript files (Hugo requirement)

### Integration with Existing System

1. **Component Registry**: Add TypeScript components to the registry in `main.js`
2. **HTML Integration**: Use `data-component` attributes to initialize components
3. **Global Namespace**: Expose components via `window.influxdatadocs` if needed
4. **Backwards Compatibility**: Ensure TypeScript components work with existing patterns
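
Point 3 can be sketched as follows (the shape of the `influxdatadocs` object is an assumption for illustration):

```typescript
// A stand-in registry; the real one is defined in assets/js/main.js.
const componentRegistry: Record<string, () => string> = {
  'my-component': () => 'ready',
};

// In the browser this global is `window`; globalThis keeps the sketch portable.
type DocsGlobal = { influxdatadocs?: { componentRegistry: typeof componentRegistry } };
const g = globalThis as DocsGlobal;

// Expose the registry so it can be inspected from the DevTools console.
g.influxdatadocs = { componentRegistry };
```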

### Testing Requirements

1. **Cypress Tests**: Create e2e tests for TypeScript components
2. **Type Checking**: Run `tsc --noEmit` in the CI pipeline
3. **ESLint**: Configure TypeScript-specific linting rules
4. **Manual Testing**: Test components in the Hugo development server

## Build and Development Workflow

### Development Commands

```bash
# Start TypeScript compilation in watch mode
yarn build:ts:watch

# Start Hugo development server
npx hugo server

# Run e2e tests
yarn test:e2e

# Run linting
yarn lint
```

### Component Development Process

1. **Create TypeScript Component**
   - Define interfaces for options and data
   - Implement the component class with proper typing
   - Export an initializer function

2. **Register Component**
   - Add to `componentRegistry` in `main.js`
   - Import with the `.js` extension (Hugo requirement)

3. **HTML Implementation**
   - Add a `data-component` attribute to trigger elements
   - Include necessary Hugo data attributes

4. **Testing**
   - Write Cypress tests for component functionality
   - Test Hugo data integration
   - Verify TypeScript compilation

### Common Patterns and Solutions

1. **Hugo Template Data**

   ```typescript
   // Handle Hugo's security placeholder for untrusted JSON data
   if (dataAttribute && dataAttribute !== '#ZgotmplZ') {
     // Safe to parse
   }
   ```

2. **DOM Type Safety**

   ```typescript
   // Use type assertions for DOM queries
   const element = this.container.querySelector('#input') as HTMLInputElement;
   ```

3. **Event Handling**

   ```typescript
   // Properly type event targets (class property arrow function)
   private handleClick = (e: Event): void => {
     const target = e.target as HTMLElement;
     // Handle event
   };
   ```

## Error Handling and Debugging

### Common Issues

1. **Module Resolution**: Use `.js` extensions in imports even for TypeScript files
2. **Hugo Data Attributes**: Handle `#ZgotmplZ` security placeholders
3. **Type Definitions**: Ensure proper typing for external libraries used in the Hugo context
4. **Compilation Errors**: Check `tsconfig.json` settings for Hugo compatibility

### Debugging Tools

- **VS Code TypeScript**: Use the built-in TypeScript language server
- **Browser DevTools**: Debug with source maps
- **Component Registry**: Access `window.influxdatadocs.componentRegistry` for debugging
- **TypeScript Compiler**: Use `tsc --noEmit --pretty` for detailed error reporting

## Future Considerations

### Migration Strategy

1. **Gradual Migration**: Convert JavaScript modules to TypeScript incrementally
2. **Type Definitions**: Add type definitions for existing JavaScript modules
3. **Shared Interfaces**: Create common interfaces for Hugo data and component patterns
4. **Documentation**: Update component documentation with TypeScript examples

### Enhancement Opportunities

1. **Strict Type Checking**: Enable stricter TypeScript compiler options
2. **Advanced Types**: Use utility types for Hugo-specific patterns
3. **Build Optimization**: Optimize TypeScript compilation for Hugo builds
4. **Developer Experience**: Improve tooling and IDE support for Hugo + TypeScript development
# InfluxData Documentation Repository (docs-v2)

Always follow these instructions first and fall back to additional search and context gathering only when the information provided here is incomplete or found to be in error.
This is the primary instruction file for working with the InfluxData documentation site.
For detailed information on specific topics, refer to the specialized instruction files in `.github/instructions/`.

## Quick Reference

| Task | Command | Time | Details |
|------|---------|------|---------|
| Install | `CYPRESS_INSTALL_BINARY=0 yarn install` | ~4s | Skip Cypress for CI |
| Build | `npx hugo --quiet` | ~75s | NEVER CANCEL |
| Dev Server | `npx hugo server` | ~92s | Port 1313 |
| Test All | `yarn test:codeblocks:all` | 15-45m | NEVER CANCEL |
| Lint | `yarn lint` | ~1m | Pre-commit checks |

## Working Effectively

Be a critical thinking partner, provide honest feedback, and identify potential issues.

### Setup Steps

1. Install dependencies (see the Quick Reference table above)
2. Build the static site
3. Start the development server at http://localhost:1313/ (serves 5,359+ pages and 441 static files; auto-rebuilds on file changes)
4. Alternative: Use `docker compose up local-dev` if local setup fails (may itself fail in restricted network environments due to Alpine package manager issues)

### Testing

For comprehensive testing procedures, see [TESTING.md](../TESTING.md).

Quick reference (NEVER CANCEL long-running tests):

- **Code blocks**: `yarn test:codeblocks:all` (15-45 minutes)
- **Links**: `yarn test:links` (1-5 minutes, requires the link-checker binary)
- **Style**: `docker compose run -T vale content/**/*.md` (30-60 seconds)
- **Pre-commit**: `yarn lint` (or skip with `--no-verify`)

#### Code block testing (5-15 minutes per product; set timeout to 30+ minutes)

```bash
# Build the test environment first (~30 seconds; may fail due to network restrictions)
docker build -t influxdata/docs-pytest:latest -f Dockerfile.pytest .

# Test all products (takes 15-45 minutes total)
yarn test:codeblocks:all

# Test specific products
yarn test:codeblocks:cloud
yarn test:codeblocks:v2
yarn test:codeblocks:telegraf
```

#### Link validation (1-5 minutes)

Runs automatically on pull requests.
Requires the **link-checker** binary from the repo release artifacts.

```bash
# Test specific files/products (faster)
# JSON format is required for accurate reporting
link-checker map content/influxdb3/core/**/*.md \
  | link-checker check \
    --config .ci/link-checker/production.lycherc.toml \
    --format json
```

#### Style linting (30-60 seconds)

```bash
# Basic Vale linting
docker compose run -T vale content/**/*.md

# Product-specific linting with custom configurations
docker compose run -T vale --config=content/influxdb3/cloud-dedicated/.vale.ini --minAlertLevel=error content/influxdb3/cloud-dedicated/**/*.md
```

#### JavaScript and CSS linting (5-10 seconds)

```bash
yarn eslint assets/js/**/*.js
yarn prettier --check "**/*.{css,js,ts,jsx,tsx}"
```

#### Pre-commit hooks (run automatically; can be skipped if needed)

```bash
# Run all pre-commit checks manually
yarn lint

# Skip pre-commit hooks if necessary (not recommended)
git commit -m "message" --no-verify
```

### Validation

Test these scenarios after making changes:

```bash
# Start Hugo server
npx hugo server --bind 0.0.0.0 --port 1313

# 1. Server renders pages (check 200 status)
curl -s -o /dev/null -w "%{http_code}" http://localhost:1313/influxdb3/core/
curl -s -o /dev/null -w "%{http_code}" http://localhost:1313/influxdb/v2/
curl -s -o /dev/null -w "%{http_code}" http://localhost:1313/telegraf/v1/

# Verify content contains expected elements
curl -s http://localhost:1313/influxdb3/core/ | grep -i "influxdb"

# 2. Build outputs exist (~529MB)
npx hugo --quiet && du -sh public/
file public/index.html
file public/influxdb3/core/index.html

# 3. Shortcodes work
yarn test:links content/example.md
```

## Repository Structure

### Content Organization

- **Shared content**: `/content/shared/`
- **Examples**: `/content/example.md` (comprehensive shortcode reference)

### Key Files

- **Config**: `/config/_default/`, `package.json`, `compose.yaml`, `lefthook.yml`
- **Testing**: `cypress.config.js`, `pytest.ini`, `.vale.ini`
- **Linting**: `.prettierrc.yaml`, `eslint.config.js`
- **Assets**: `/assets/` (JS, CSS), `/layouts/` (templates), `/data/` (YAML/JSON)
- **Build output**: `/public/` (~529MB, gitignored)
## Technology Stack

- **Hugo** (0.148.2+ extended) - Static site generator
- **Node.js/Yarn** (20.19.4+/1.22.22+) - Package management
- **Testing**: Pytest (pytest-codeblocks), Cypress (e2e), link-checker, Vale (style)
- **Tools**: Docker, ESLint, Prettier, Lefthook (Git hooks)
## Common Issues

### Network Restrictions

Commands that may fail in restricted environments due to external dependency downloads:

- Docker builds (InfluxData and HashiCorp package repositories)
- `docker compose up local-dev` (Alpine package manager)
- Cypress binary installation (use `CYPRESS_INSTALL_BINARY=0`)

Document these limitations but proceed with available functionality.
### Pre-commit Validation

Run these before committing changes:

```bash
# Quick validation before commits
yarn prettier --write "**/*.{css,js,ts,jsx,tsx}"
yarn eslint assets/js/**/*.js

# Test Hugo build
npx hugo --quiet

# Test development server startup
timeout 150 npx hugo server --bind 0.0.0.0 --port 1313 &
sleep 120
curl -s -o /dev/null -w "%{http_code}" http://localhost:1313/
pkill hugo
```
## Documentation Coverage

- **InfluxDB 3**: Core, Enterprise, Cloud Dedicated, Cloud Serverless, Clustered, Explorer, and plugins for Core and Enterprise
- **InfluxDB v2/v1**: OSS, Cloud, Enterprise
- **Tools**: Telegraf, Kapacitor, Chronograf, Flux
- **API Reference**: All InfluxDB editions (`/api-docs/`)
## Content Guidelines

- **Shortcode reference**: `/content/example.md`
- **Product versions**: `/data/products.yml`
- **Query languages**: SQL, InfluxQL, Flux (per product version)
- **Site**: https://docs.influxdata.com

### Important Locations

- **Contributing guide**: `CONTRIBUTING.md`
- **Testing guide**: `TESTING.md`
- **Vale style rules**: `/.ci/vale/styles/`
- **GitHub workflows**: `/.github/workflows/`
- **Test scripts**: `/test/scripts/`
- **Hugo layouts and shortcodes**: `/layouts/`
- **CSS/JS assets**: `/assets/`

### Writing Documentation

For detailed guidelines, see:

- **Frontmatter**: `.github/instructions/content.instructions.md`
- **Shortcodes**: `.github/instructions/shortcodes-reference.instructions.md`
- **Contributing**: `.github/instructions/contributing.instructions.md`

Key shortcodes (see `/content/example.md` for the full reference):

- Notes/warnings (GitHub syntax): `> [!Note]`, `> [!Warning]`
- Tabbed content: `{{< tabs-wrapper >}}`, `{{% tabs %}}`, `{{% tab-content %}}`
- Code examples: `{{< code-tabs-wrapper >}}`, `{{% code-tabs %}}`, `{{% code-tab-content %}}`
- Required elements: `{{< req >}}`
- API endpoints: `{{< api-endpoint >}}`

### Code Examples

Use pytest annotations for testable examples:

```python
print("Hello, world!")
```

```
Hello, world!
```

## Troubleshooting

| Issue | Solution |
|-------|----------|
| Pytest collected 0 items | Use `python` not `py` for the language identifier |
| Hugo build errors | Check `/config/_default/` |
| Docker build fails | Expected in restricted networks - use local Hugo |
| Cypress install fails | Use `CYPRESS_INSTALL_BINARY=0 yarn install` |
| Link validation slow | Test specific files: `yarn test:links content/file.md` |
| Vale errors | Check `.ci/vale/styles/config/vocabularies` |
## Specialized Instructions

For detailed information on specific topics:

| Topic | File | Description |
|-------|------|-------------|
| **Content** | [content.instructions.md](instructions/content.instructions.md) | Frontmatter, metadata, page structure |
| **Shortcodes** | [shortcodes-reference.instructions.md](instructions/shortcodes-reference.instructions.md) | All available Hugo shortcodes |
| **Contributing** | [contributing.instructions.md](instructions/contributing.instructions.md) | Style guide, workflow, CLA |
| **API Docs** | [api-docs.instructions.md](instructions/api-docs.instructions.md) | OpenAPI spec management |
| **Placeholders** | [influxdb3-code-placeholders.instructions.md](instructions/influxdb3-code-placeholders.instructions.md) | InfluxDB 3 code placeholders |
| **Testing** | [TESTING.md](../TESTING.md) | Comprehensive testing procedures |
| **Assets** | [assets.instructions.md](instructions/assets.instructions.md) | JavaScript and CSS development |

## Important Notes

- This is a large site (5,359+ pages) with complex build processes
- **NEVER CANCEL** long-running operations (Hugo builds, tests)
- Set appropriate timeouts: Hugo build (180s+), tests (30+ minutes)
---
applyTo: "api-docs/**/*.md, layouts/**/*.html"
---

# InfluxDB API documentation

To edit the API reference documentation, edit the YAML files in `/api-docs`.

InfluxData uses [Redoc](https://github.com/Redocly/redoc) to generate the full
InfluxDB API documentation when documentation is deployed.
Redoc generates HTML documentation using the InfluxDB `swagger.yml`.
For more information about generating InfluxDB API documentation, see the
[API Documentation README](https://github.com/influxdata/docs-v2/tree/master/api-docs#readme).

## Generate API documentation locally

From the `api-docs` directory:

1. Install dependencies. To generate the API documentation locally, you need [Node.js](https://nodejs.org/en/) and [Yarn](https://yarnpkg.com/getting-started/install) installed.

   ```sh
   yarn install
   ```

2. Run the script to generate the API documentation.

   ```sh
   sh generate-api-docs.sh
   ```
---
applyTo: "assets/**/*.md, layouts/**/*.html"
---

## JavaScript in the documentation UI

The InfluxData documentation UI uses JavaScript with ES6+ syntax and
`assets/js/main.js` as the entry point to import modules from.

1. In your HTML file, add a `data-component` attribute to the element that will
   encapsulate the UI feature and use the JavaScript module.

   ```html
   <div data-component="my-component"></div>
   ```

2. In `assets/js/main.js`, import your module and initialize it on the element.
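
Step 2 might look like the following sketch (the module path and component name are hypothetical; an inline initializer stands in for the imported module so the example is self-contained):

```typescript
// In main.js you would import the module, noting the required .js extension:
// import initMyComponent from './components/my-component.js';

// Inline stand-in for the imported initializer:
const initMyComponent = (options: { component: { id: string } }): string =>
  `initialized ${options.component.id}`;

// Register the component under its data-component value.
const componentRegistry = { 'my-component': initMyComponent };

// main.js then queries document.querySelectorAll('[data-component]') and
// calls the matching initializer for each element it finds.
const result = componentRegistry['my-component']({ component: { id: 'sidebar' } });
```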

## Debugging helpers for JavaScript

In your JavaScript module, import the debug helpers from `assets/js/utils/debug-helpers.js`.

```js
import { debugLog, debugBreak, debugInspect } from './utils/debug-helpers.js';

const data = debugInspect(someData, 'Data');
debugLog('Processing data', 'myFunction');

function processData() {
  // Add a breakpoint that works with DevTools
  debugBreak();

  // Your existing code...
}
```

## Debugging with VS Code

1. Start Hugo in development mode, for example:

   ```bash
   yarn hugo server
   ```

2. In VS Code, go to **Run > Start Debugging** and select the "Debug JS (debug-helpers)" configuration.

Your system uses the configuration in `launch.json` to launch the site in Chrome
and attach the debugger to the Developer Tools console.
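
A minimal sketch of what that `launch.json` entry might contain (the values below are assumptions for illustration; the repository's actual `.vscode/launch.json` is authoritative):

```jsonc
{
  "version": "0.2.0",
  "configurations": [
    {
      // Hypothetical sketch of the configuration named in step 2
      "name": "Debug JS (debug-helpers)",
      "type": "chrome",
      "request": "launch",
      "url": "http://localhost:1313",
      "webRoot": "${workspaceFolder}"
    }
  ]
}
```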

Remove the debug statements before merging your changes.
The debug helpers are designed for development and should not be used in production.

_See full CONTRIBUTING.md for complete details._
---
applyTo: "content/**/*.md, layouts/**/*.html"
---

## Frontmatter Requirements

Documentation pages include frontmatter which specifies information about the page.
Include proper frontmatter for pages in `/content/`, except `/content/shared/`.
Frontmatter populates variables in page templates and the site's navigation menu.

```yaml
title: # Page title (h1)
seotitle: # SEO title
description: # SEO description
menu:
  product_version:
weight: # Page order (1-99, 101-199, etc.)
```

### Complete Frontmatter Reference

```yaml
title: # Title of the page used in the page's h1
seotitle: # Page title used in the html <head> title and used in search engine results
# ... (see the full reference in this file)
```

When building shared content, use the `show-in` and `hide-in` shortcodes to show
or hide blocks of content based on the current InfluxDB product/version.
For more information, see [show-in](#show-in) and [hide-in](#hide-in).

#### Links in shared content

When creating links in shared content files, use `/influxdb3/version/` instead of the `{{% product-key %}}` shortcode.
The keyword `version` gets replaced during the build process with the appropriate product version.

**Use this in shared content:**

```markdown
[Configuration options](/influxdb3/version/reference/config-options/)
[CLI serve command](/influxdb3/version/reference/cli/influxdb3/serve/)
```

**Not this:**

```markdown
[Configuration options](/influxdb3/{{% product-key %}}/reference/config-options/)
[CLI serve command](/influxdb3/{{% product-key %}}/reference/cli/influxdb3/serve/)
```

#### Shortcodes in Markdown files

For the complete shortcodes reference, see `/.github/instructions/shortcodes-reference.instructions.md`.
|
||||
|
||||
### Style Guidelines
|
||||
|
||||
- Follow Google Developer Documentation style guidelines
|
||||
- Use semantic line feeds (one sentence per line)
|
||||
- Format code examples to fit within 80 characters
|
||||
- Use long options in command line examples (`--option` instead of `-o`)
|
||||
- Use GitHub callout syntax for notes and warnings
|
||||
- Image naming: `project/version-context-description.png`
|
||||
|
|

For the linting and tests to run, you need to install:

- **Docker**: For running Vale linter and code block tests
- **VS Code extensions** (optional): For enhanced editing experience

To skip the pre-commit hooks for a commit, use the `--no-verify` flag:

```sh
git commit -m "<COMMIT_MESSAGE>" --no-verify
```

_Some parts of the documentation, such as `./api-docs`, contain Markdown within YAML files._

#### Semantic line feeds

```diff
-Data is taking off. This data is time series. You need a database that specializes in time series. You should check out InfluxDB.
+Data is taking off. This data is time series. You need a database that specializes in time series. You need InfluxDB.
```
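With semantic line feeds, the revised passage above breaks at sentence boundaries--for example:

```markdown
Data is taking off.
This data is time series.
You need a database that specializes in time series.
You need InfluxDB.
```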

### Essential Frontmatter Reference

```yaml
title: # Title of the page used in the page's h1
description: # Page description displayed in search engine results
# ... (see full CONTRIBUTING.md for complete example)
```

_See full CONTRIBUTING.md for complete details._

#### Notes and warnings

```md
> [!Note]
> Insert note markdown content here.

> [!Warning]
> Insert warning markdown content here.

> [!Caution]
> Insert caution markdown content here.

> [!Important]
> Insert important markdown content here.

> [!Tip]
> Insert tip markdown content here.
```

#### Tabbed content

```md
{{< tabs-wrapper >}}

{{% tabs %}}
[Button text for tab 1](#)
[Button text for tab 2](#)
{{% /tabs %}}

{{% tab-content %}}
Markdown content for tab 1.
{{% /tab-content %}}

{{% tab-content %}}
Markdown content for tab 2.
{{% /tab-content %}}

{{< /tabs-wrapper >}}
```

#### Required elements

```md
{{< req >}}
{{< req type="key" >}}

- {{< req "\*" >}} **This element is required**
- {{< req "\*" >}} **This element is also required**
- **This element is NOT required**
```

For the complete shortcodes reference with all available shortcodes, see [Complete Shortcodes Reference](#complete-shortcodes-reference).

---

See content.instructions.md for more details.

### InfluxDB API documentation

docs-v2 includes the InfluxDB API reference documentation in the `/api-docs` directory.
To edit the API documentation, edit the YAML files in `/api-docs`.

InfluxData uses [Redoc](https://github.com/Redocly/redoc) to generate the full
InfluxDB API documentation when documentation is deployed.
Redoc generates HTML documentation using the InfluxDB `swagger.yml`.
For more information about generating InfluxDB API documentation, see the
[API Documentation README](https://github.com/influxdata/docs-v2/tree/master/api-docs#readme).
See api-docs.instructions.md for more details.

---

For comprehensive testing information, including code block testing, link validation, style linting, and advanced testing procedures, see **[TESTING.md](TESTING.md)**.

### Testing Code Blocks

```bash
# Test code blocks
yarn test:codeblocks:all

# Test links
yarn test:links content/influxdb3/core/**/*.md

# Run style linting
docker compose run -T vale content/**/*.md
```

Pre-commit hooks run automatically when you commit changes, testing your staged files with Vale, Prettier, Cypress, and Pytest. To skip hooks if needed, use `git commit --no-verify`.

## Reference Sections

_See full CONTRIBUTING.md for complete details._

### Complete Frontmatter Reference

_For the complete frontmatter reference, see content.instructions.md._

### Complete Shortcodes Reference

_For the complete shortcodes reference, see content.instructions.md._

#### Vale style linting configuration

docs-v2 includes Vale writing style linter configurations to enforce documentation writing style--for example:

```sh
docker compose run -T vale --config=content/influxdb/cloud-dedicated/.vale.ini --minAlertLevel=error content/influxdb/cloud-dedicated/write-data/**/*.md
```

Vale reports issues at three alert levels:

- **Error**: Issues that cause linting to fail and must be fixed
- **Warning**: General style guide rules and best practices
- **Suggestion**: Style preferences that may require refactoring or updates to an exceptions list

#### Configure style rules

_See full CONTRIBUTING.md for complete details._

#### JavaScript in the documentation UI

The InfluxData documentation UI uses JavaScript with ES6+ syntax and
`assets/js/main.js` as the entry point to import modules from
the `assets/js` directory.

1. In your HTML file, add a `data-component` attribute to the element that
   uses the component.

2. Import the debug helpers in your module--for example:

```js
import { debugLog, debugBreak, debugInspect } from './utils/debug-helpers.js';

const data = debugInspect(someData, 'Data');
debugLog('Processing data', 'myFunction');

function processData() {
  // Add a breakpoint that works with DevTools
  debugBreak();

  // Your existing code...
}
```

3. Start Hugo in development mode--for example:

```bash
yarn hugo server
```

4. In VS Code, go to Run > Start Debugging, and select the "Debug JS (debug-helpers)" configuration.

Your system uses the configuration in `launch.json` to launch the site in Chrome
and attach the debugger to the Developer Tools console.

Make sure to remove the debug statements before merging your changes.
The debug helpers are designed to be used in development and should not be used in production.

_See full CONTRIBUTING.md for complete details._

---
mode: 'edit'
applyTo: "content/{influxdb3/core,influxdb3/enterprise,shared/influxdb3*}/**"
---

## Best Practices

- Use UPPERCASE for placeholders to make them easily identifiable
- Don't use pronouns in placeholders (e.g., "your", "this")
- List placeholders in the same order they appear in the code
- Provide clear descriptions including:
  - Expected data type or format
  - Purpose of the value
  - Any constraints or requirements
- Mark optional placeholders as "Optional:" in their descriptions
- Placeholder key descriptions should fit the context of the code snippet
- Include examples for complex formats

## Writing Placeholder Descriptions

Descriptions should follow consistent patterns:

1. **Admin Authentication tokens**:
   - Recommended: "a {{% token-link "admin" %}} for your {{< product-name >}} instance"
   - Avoid: "your token", "the token", "an authorization token"
2. **Database resource tokens**:
   - Recommended: "your {{% token-link "database" %}}"{{% show-in "enterprise" %}} with permissions on the specified database{{% /show-in %}}
   - Avoid: "your token", "the token", "an authorization token"
3. **Database names**:
   - Recommended: "the name of the database to [action]"
   - Avoid: "your database", "the database name"
4. **Conditional content**:
   - Use `{{% show-in "enterprise" %}}` for content specific to enterprise versions
   - Example: "your {{% token-link "database" %}}{{% show-in "enterprise" %}} with permission to query the specified database{{% /show-in %}}"

## Common placeholders for InfluxDB 3

- `AUTH_TOKEN`: your {{% token-link %}}
- `DATABASE_NAME`: the database to use
- `TABLE_NAME`: Name of the table/measurement to query or write to
- `NODE_ID`: Node ID for a specific node in a cluster
- `CLUSTER_ID`: Cluster ID for a specific cluster
- `HOST`: InfluxDB server hostname or URL
- `PORT`: InfluxDB server port (typically 8181)
- `QUERY`: SQL or InfluxQL query string
- `LINE_PROTOCOL`: Line protocol data for writes
- `PLUGIN_FILENAME`: Name of plugin file to use
- `CACHE_NAME`: Name for a new or existing cache

## Hugo shortcodes in Markdown

**Syntax**:

- Use the `placeholders` code block attribute to define placeholder patterns:

  ```<language> { placeholders="<expr>" }
  function sampleCode () {};
  ```

**Old (deprecated) syntax**:

- `{{% code-placeholders "PLACEHOLDER1|PLACEHOLDER2" %}}`
- `{{% /code-placeholders %}}`

**Define a placeholder key (typically following the example)**:

- `{{% code-placeholder-key %}}`: Use this shortcode to define a placeholder key
- `{{% /code-placeholder-key %}}`: Use this shortcode to close the key name
- Follow with a description

## Language-Specific Placeholder Formatting

- **Bash/Shell**: Use uppercase variables with no quotes or prefix

  ```bash { placeholders="DATABASE_NAME" }
  --database DATABASE_NAME
  ```

- **Python**: Use string literals with quotes

  ```python { placeholders="DATABASE_NAME" }
  database_name='DATABASE_NAME'
  ```

- **JSON**: Use key-value pairs with quotes

  ```json { placeholders="DATABASE_NAME" }
  {
    "database": "DATABASE_NAME"
  }
  ```

## Real-World Examples from Documentation

### InfluxDB CLI Commands

This pattern appears frequently in CLI documentation:

```bash { placeholders="DATABASE_NAME|AUTH_TOKEN" }
influxdb3 write \
  --database DATABASE_NAME \
  --token AUTH_TOKEN \
  --precision ns
```

Replace the following placeholders with your values:

- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to write to
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with write permissions on the specified database{{% /show-in %}}

### Complete Shortcodes Reference

influxdata/docs-v2 uses a variety of custom Hugo shortcodes to add functionality.
For more usage examples, see the shortcode test page at `/content/example.md`.

#### Product data

Supported argument values:

```md
{{< influxdb/host "serverless" >}}
```

#### Placeholders in code samples

Use the `placeholders` code block attribute to format placeholders
as text fields that users can populate with their own values.
The attribute takes a regular expression for matching placeholder names.
Use the `code-placeholder-key` shortcode to format the placeholder names in
text that describes the placeholder.

##### Best Practices

- Use UPPERCASE for placeholders to make them easily identifiable
- Don't use pronouns in placeholders (e.g., "your", "this")
- List placeholders in the same order they appear in the code
- Provide clear descriptions including:
  - Expected data type or format
  - Purpose of the value
  - Any constraints or requirements
- Mark optional placeholders as "Optional:" in their descriptions
- Placeholder key descriptions should fit the context of the code snippet
- Include examples for complex formats
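The pattern value behaves like a regular-expression alternation over the rendered code text--a minimal sketch (the pattern and sample command are illustrative, not the site's actual rendering code):

```javascript
// Match placeholder names the way a `placeholders` pattern would
const pattern = /DATABASE_NAME|AUTH_TOKEN/g;

const codeSample = 'influxdb3 write --database DATABASE_NAME --token AUTH_TOKEN';

// Each match is rendered as an editable placeholder field
console.log(codeSample.match(pattern));
// [ 'DATABASE_NAME', 'AUTH_TOKEN' ]
```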

##### Writing Placeholder Descriptions

Descriptions should follow consistent patterns:

1. **Admin Authentication tokens**:
   - Recommended: "a {{% token-link "admin" %}} for your {{< product-name >}} instance"
   - Avoid: "your token", "the token", "an authorization token"
2. **Database resource tokens**:
   - Recommended: "your {{% token-link "database" %}}"{{% show-in "enterprise" %}} with permissions on the specified database{{% /show-in %}}
   - Avoid: "your token", "the token", "an authorization token"
3. **Database names**:
   - Recommended: "the name of the database to [action]"
   - Avoid: "your database", "the database name"
4. **Conditional content**:
   - Use `{{% show-in "enterprise" %}}` for content specific to enterprise versions
   - Example: "your {{% token-link "database" %}}{{% show-in "enterprise" %}} with permission to query the specified database{{% /show-in %}}"

##### Common placeholders for InfluxDB 3

- `AUTH_TOKEN`: your {{% token-link %}}
- `DATABASE_NAME`: the database to use
- `TABLE_NAME`: Name of the table/measurement to query or write to
- `NODE_ID`: Node ID for a specific node in a cluster
- `CLUSTER_ID`: Cluster ID for a specific cluster
- `HOST`: InfluxDB server hostname or URL
- `PORT`: InfluxDB server port (typically 8181)
- `QUERY`: SQL or InfluxQL query string
- `LINE_PROTOCOL`: Line protocol data for writes
- `PLUGIN_FILENAME`: Name of plugin file to use
- `CACHE_NAME`: Name for a new or existing cache

##### Syntax

- `{ placeholders="PATTERN1|PATTERN2" }`: Use this code block attribute to define placeholder patterns
- `{{% code-placeholder-key %}}`: Use this shortcode to define a placeholder key
- `{{% /code-placeholder-key %}}`: Use this shortcode to close the key name

##### Example usage

```sh { placeholders="DATABASE_NAME|USERNAME|PASSWORD_OR_TOKEN|API_TOKEN|exampleuser@influxdata.com" }
curl --request POST http://localhost:8086/write?db=DATABASE_NAME \
  --header "Authorization: Token API_TOKEN" \
  --data-binary @path/to/line-protocol.txt
```

Replace the following:

- {{% code-placeholder-key %}}`USERNAME`{{% /code-placeholder-key %}}: your [InfluxDB 1.x username](/influxdb/v2/reference/api/influxdb-1x/#manage-credentials)
- {{% code-placeholder-key %}}`PASSWORD_OR_TOKEN`{{% /code-placeholder-key %}}: your [InfluxDB 1.x password or InfluxDB API token](/influxdb/v2/reference/api/influxdb-1x/#manage-credentials)
- {{% code-placeholder-key %}}`API_TOKEN`{{% /code-placeholder-key %}}: your [InfluxDB API token](/influxdb/v2/admin/tokens/)

**Old (deprecated) syntax**:

Replace the following syntax with the new `placeholders` syntax shown above.

- `{{% code-placeholders "PLACEHOLDER1|PLACEHOLDER2" %}}`
- `{{% /code-placeholders %}}`
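As a sketch of the migration (the wrapped code block and placeholder name are hypothetical), a page that previously wrapped a code block in shortcodes:

```markdown
{{% code-placeholders "DATABASE_NAME" %}}
(fenced code block containing DATABASE_NAME)
{{% /code-placeholders %}}
```

removes the wrapper and adds `{ placeholders="DATABASE_NAME" }` to the code fence's info string instead.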

## Notes and warnings

```md
> [!Note]
> Insert note markdown content here.

> [!Warning]
> Insert warning markdown content here.

> [!Caution]
> Insert caution markdown content here.

> [!Important]
> Insert important markdown content here.

> [!Tip]
> Insert tip markdown content here.
```

## Required elements

```md
{{< req >}}
{{< req type="key" >}}

- {{< req "\*" >}} **This element is required**
- {{< req "\*" >}} **This element is also required**
- **This element is NOT required**
```

CLAUDE.md

See @TESTING.md for comprehensive testing information, including code block
testing, link validation, style linting, and advanced testing procedures.

See @.github/instructions/shortcodes-reference.instructions.md for detailed
information about shortcodes used in this project.

See @.github/instructions/frontmatter-reference.instructions.md for detailed
information about frontmatter used in this project.

See @.github/instructions/influxdb3-code-placeholders.instructions.md for using
placeholders in code samples and CLI commands.

See @api-docs/README.md for information about the API reference documentation, how to
generate it, and how to contribute to it.

# API reference documentation instructions

See @.github/instructions/api-docs.instructions.md for the complete API reference docs editing guidelines and instructions for generating pages locally.

## JavaScript, TypeScript, and CSS in the documentation UI

See @.github/instructions/assets.instructions.md for the complete JavaScript, TypeScript, and SASS (CSS) development guidelines.

# Frontmatter and Content Instructions

See @.github/instructions/content.instructions.md for the complete frontmatter reference and content guidelines.

---
title: Undelete a table
description: >
  Use the [`influxctl table undelete` command](/influxdb3/cloud-dedicated/reference/cli/influxctl/table/undelete/)
  to restore a previously deleted table in your {{< product-name omit=" Cluster" >}} cluster.
menu:
  influxdb3_cloud_dedicated:
    parent: Manage tables
weight: 204
list_code_example: |
  ```bash { placeholders="DATABASE_NAME|TABLE_ID" }
  influxctl table undelete DATABASE_NAME TABLE_ID
  ```
related:
  - /influxdb3/cloud-dedicated/reference/cli/influxctl/table/undelete/
  - /influxdb3/cloud-dedicated/admin/tables/delete/
  - /influxdb3/cloud-dedicated/admin/tokens/table/create/
---

Use the [`influxctl table undelete` command](/influxdb3/cloud-dedicated/reference/cli/influxctl/table/undelete/)
to restore a previously deleted table in your {{< product-name omit=" Cluster" >}} cluster.

> [!Important]
> To undelete a table:
>
> - A new table with the same name cannot already exist.
> - You must have appropriate permissions to manage databases.

When you undelete a table, it is restored with the same partition template and
other settings as when it was deleted.

> [!Warning]
> Tables can only be undeleted for
> {{% show-in "cloud-dedicated" %}}approximately 14 days{{% /show-in %}}{{% show-in "clustered" %}}a configurable "hard-delete" grace period{{% /show-in %}}
> after they are deleted.
> After this grace period, all Parquet files associated with the deleted table
> are permanently removed and the table cannot be undeleted.

## Undelete a table using the influxctl CLI

```bash { placeholders="DATABASE_NAME|TABLE_ID" }
influxctl table undelete DATABASE_NAME TABLE_ID
```

Replace the following:

- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
  Name of the database associated with the deleted table
- {{% code-placeholder-key %}}`TABLE_ID`{{% /code-placeholder-key %}}:
  ID of the deleted table to restore

> [!Tip]
> #### View deleted table IDs
>
> To view the IDs of deleted tables, use the `influxctl table list` command with
> the `--filter-status=deleted` flag--for example:
>
> <!--pytest.mark.skip-->
>
> ```bash { placeholders="DATABASE_NAME" }
> influxctl table list --filter-status=deleted DATABASE_NAME
> ```
>
> Replace {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}
> with the name of the database associated with the table you want to undelete.

---
title: influxctl table undelete
description: >
  The `influxctl table undelete` command undeletes a previously deleted
  table in an {{% product-name omit=" Clustered" %}} cluster.
menu:
  influxdb3_cloud_dedicated:
    parent: influxctl table
weight: 301
metadata: [influxctl 2.10.4+]
source: /shared/influxctl/table/undelete.md
---

<!-- //SOURCE content/shared/influxctl/table/undelete.md -->

---
title: Undelete a table
description: >
  Use the [`influxctl table undelete` command](/influxdb3/clustered/reference/cli/influxctl/table/undelete/)
  to restore a previously deleted table in your {{< product-name omit=" Cluster" >}} cluster.
menu:
  influxdb3_clustered:
    parent: Manage tables
weight: 204
list_code_example: |
  ```bash { placeholders="DATABASE_NAME|TABLE_ID" }
  influxctl table undelete DATABASE_NAME TABLE_ID
  ```
related:
  - /influxdb3/clustered/reference/cli/influxctl/table/undelete/
  - /influxdb3/clustered/admin/tables/delete/
  - /influxdb3/clustered/admin/tokens/table/create/
draft: true # hide until next clustered release
---

Use the [`influxctl table undelete` command](/influxdb3/clustered/reference/cli/influxctl/table/undelete/)
to restore a previously deleted table in your {{< product-name omit=" Cluster" >}} cluster.

> [!Important]
> To undelete a table:
>
> - A new table with the same name cannot already exist.
> - You must have appropriate permissions to manage databases.

When you undelete a table, it is restored with the same partition template and
other settings as when it was deleted.

> [!Warning]
> Tables can only be undeleted for
> {{% show-in "cloud-dedicated" %}}approximately 14 days{{% /show-in %}}{{% show-in "clustered" %}}a configurable "hard-delete" grace period{{% /show-in %}}
> after they are deleted.
> After this grace period, all Parquet files associated with the deleted table
> are permanently removed and the table cannot be undeleted.

## Undelete a table using the influxctl CLI

```bash { placeholders="DATABASE_NAME|TABLE_ID" }
influxctl table undelete DATABASE_NAME TABLE_ID
```

Replace the following:

- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
  Name of the database associated with the deleted table
- {{% code-placeholder-key %}}`TABLE_ID`{{% /code-placeholder-key %}}:
  ID of the deleted table to restore

> [!Tip]
> #### View deleted table IDs
>
> To view the IDs of deleted tables, use the `influxctl table list` command with
> the `--filter-status=deleted` flag--for example:
>
> <!--pytest.mark.skip-->
>
> ```bash { placeholders="DATABASE_NAME" }
> influxctl table list --filter-status=deleted DATABASE_NAME
> ```
>
> Replace {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}
> with the name of the database associated with the table you want to undelete.

---
title: influxctl table undelete
description: >
  The `influxctl table undelete` command undeletes a previously deleted
  table in an {{% product-name omit=" Clustered" %}} cluster.
menu:
  influxdb3_clustered:
    parent: influxctl table
weight: 301
metadata: [influxctl 2.10.4+]
source: /shared/influxctl/table/undelete.md
draft: true # hide until next clustered release
---

<!-- //SOURCE content/shared/influxctl/table/undelete.md -->

## Flags

| Flag |                   | Description                                                                    |
| :--- | :---------------- | :----------------------------------------------------------------------------- |
|      | `--filter-status` | Only list databases with a specific status (`active` _(default)_ or `deleted`) |
|      | `--format`        | Output format (`table` _(default)_ or `json`)                                  |
| `-h` | `--help`          | Output command help                                                            |

{{% caption %}}
_Also see [`influxctl` global flags](/influxdb3/version/reference/cli/influxctl/#global-flags)._
{{% /caption %}}

## 2.10.5 {date="2025-09-23"}

### Bug Fixes

- Update warnings for the `influxctl database delete` command.

## 2.10.4 {date="2025-09-22"}

### Features

- Add the [`influxctl table undelete` command](/influxdb3/version/reference/cli/influxctl/table/undelete/).
- Add `--filter-status` flag to the
  [`influxctl database list`](/influxdb3/version/reference/cli/influxctl/database/list/)
  and [`influxctl table list`](/influxdb3/version/reference/cli/influxctl/table/list/)
  commands.

### Bug Fixes

- Allow changing only maxTables or maxColumns individually.

### Dependency updates

- Update `github.com/apache/arrow-go/v18` from 18.4.0 to 18.4.1.
- Update `github.com/golang-jwt/jwt/v5` from 5.2.3 to 5.3.0.
- Update `github.com/stretchr/testify` from 1.10.0 to 1.11.1.
- Update `golang.org/x/mod` from 0.26.0 to 0.28.0.
- Update `golang.org/x/oauth2` from 0.30.0 to 0.31.0.
- Update `google.golang.org/grpc` from 1.74.2 to 1.75.1.
- Update `google.golang.org/protobuf` from 1.36.6 to 1.36.9.
- Update `helm.sh/helm/v3` from 3.18.4 to 3.18.5.
- Update IOxProxy Protobuf.
- Update IOxProxy proto to include `UndeleteTable`.
- Upgrade Go to 1.25.1.
- Upgrade `make` dependencies.

## 2.10.3 {date="2025-07-30"}

### Features

## Flags

| Flag |                   | Description                                                                 |
| :--- | :---------------- | :-------------------------------------------------------------------------- |
|      | `--filter-status` | Only list tables with a specific status (`active` _(default)_ or `deleted`) |
|      | `--format`        | Output format (`table` _(default)_ or `json`)                               |
| `-h` | `--help`          | Output command help                                                         |

{{% caption %}}
_Also see [`influxctl` global flags](/influxdb3/version/reference/cli/influxctl/#global-flags)._
{{% /caption %}}

The `influxctl table undelete` command undeletes a previously deleted
table in an {{% product-name omit=" Clustered" %}} cluster and restores the
table with the same partition template and other table settings present when the
table was deleted.

> [!Important]
> The table name must match the name of the deleted table and
> **a new, active table with the same name cannot exist**.

## Usage

<!-- pytest.mark.skip -->

```bash
influxctl table undelete <DATABASE_NAME> <TABLE_ID>
```

## Arguments

| Argument          | Description                                                  |
| :---------------- | :----------------------------------------------------------- |
| **DATABASE_NAME** | The name of the database that contains the table to undelete |
| **TABLE_ID**      | The ID of the table to undelete                              |

> [!Tip]
> #### View deleted table IDs
>
> To view the IDs of deleted tables, use the `influxctl table list` command with
> the `--filter-status=deleted` flag--for example:
>
> <!--pytest.mark.skip-->
>
> ```bash { placeholders="DATABASE_NAME" }
> influxctl table list --filter-status=deleted DATABASE_NAME
> ```
>
> Replace {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}
> with the name of the database associated with the table you want to undelete.

## Flags

| Flag |          | Description         |
| :--- | :------- | :------------------ |
| `-h` | `--help` | Output command help |

{{% caption %}}
_Also see [`influxctl` global flags](/influxdb3/version/reference/cli/influxctl/#global-flags)._
{{% /caption %}}

---
title: "Telegraf Aggregator Plugins"
description: "Telegraf aggregator plugins aggregate data across multiple metrics."
menu:
  telegraf_v1_ref:
    name: Aggregator plugins
    parent: plugins_reference
    identifier: aggregator_plugins_reference
weight: 10
tags: [aggregator-plugins]
---

Telegraf aggregator plugins aggregate data across multiple metrics.
Aggregator plugins create aggregate metrics--for example, by implementing statistical functions such as mean, min, and max.

{{< telegraf/plugins type="aggregator" >}}
@ -0,0 +1,89 @@

---
description: "Telegraf plugin for aggregating metrics using Basic Statistics"
menu:
  telegraf_v1_ref:
    parent: aggregator_plugins_reference
    name: Basic Statistics
    identifier: aggregator-basicstats
tags: [Basic Statistics, "aggregator-plugins", "configuration", "statistics"]
introduced: "v1.5.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/aggregators/basicstats/README.md, Basic Statistics Plugin Source
---

# Basic Statistics Aggregator Plugin

This plugin computes basic statistics such as counts, differences, minima,
maxima, mean values, and non-negative differences for a set of metrics, and
emits these statistical values every `period`.

**Introduced in:** Telegraf v1.5.0
**Tags:** statistics
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Keep the aggregate basicstats of each metric passing through.
[[aggregators.basicstats]]
  ## The period on which to flush & clear the aggregator.
  # period = "30s"

  ## If true, the original metric will be dropped by the
  ## aggregator and will not get sent to the output plugins.
  # drop_original = false

  ## Configures which basic stats to push as fields
  # stats = ["count","min","max","mean","variance","stdev"]
```

- stats
  - If not specified, then `count`, `min`, `max`, `mean`, `stdev`, and `s2` are
    aggregated and pushed as fields. Other fields are not aggregated by default
    to maintain backwards compatibility.
  - If set to an empty array, no stats are aggregated.
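
The default statistics can be illustrated with a short Python sketch. This is an illustration only, not the plugin's Go implementation; `s2` is the sample variance and `stdev` its square root, matching the default field set:

```python
import statistics

def basicstats(values):
    """Compute the default basicstats fields for one metric field."""
    s2 = statistics.variance(values) if len(values) > 1 else 0.0
    return {
        "count": len(values),
        "min": min(values),
        "max": max(values),
        "mean": statistics.mean(values),
        "s2": s2,            # sample variance
        "stdev": s2 ** 0.5,  # standard deviation
    }

# Two load1 samples collected within one period
stats = basicstats([1, 3])
```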

## Measurements & Fields

- measurement1
  - field1_count
  - field1_diff (difference)
  - field1_rate (rate per second)
  - field1_max
  - field1_min
  - field1_mean
  - field1_non_negative_diff (non-negative difference)
  - field1_non_negative_rate (non-negative rate per second)
  - field1_percent_change
  - field1_sum
  - field1_s2 (variance)
  - field1_stdev (standard deviation)
  - field1_interval (interval in nanoseconds)
  - field1_last (last aggregated value)
  - field1_first (first aggregated value)

## Tags

No tags are applied by this aggregator.

## Example Output

```text
system,host=tars load1=1 1475583980000000000
system,host=tars load1=1 1475583990000000000
system,host=tars load1_count=2,load1_diff=0,load1_rate=0,load1_max=1,load1_min=1,load1_mean=1,load1_sum=2,load1_s2=0,load1_stdev=0,load1_interval=10000000000i,load1_last=1 1475584010000000000
system,host=tars load1=1 1475584020000000000
system,host=tars load1=3 1475584030000000000
system,host=tars load1_count=2,load1_diff=2,load1_rate=0.2,load1_max=3,load1_min=1,load1_mean=2,load1_sum=4,load1_s2=2,load1_stdev=1.414162,load1_interval=10000000000i,load1_last=3,load1_first=3 1475584010000000000
```

@ -0,0 +1,146 @@

---
description: "Telegraf plugin for aggregating metrics using Derivative"
menu:
  telegraf_v1_ref:
    parent: aggregator_plugins_reference
    name: Derivative
    identifier: aggregator-derivative
tags: [Derivative, "aggregator-plugins", "configuration", "math"]
introduced: "v1.18.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/aggregators/derivative/README.md, Derivative Plugin Source
---

# Derivative Aggregator Plugin

This plugin computes the derivative for all fields of the aggregated metrics.

**Introduced in:** Telegraf v1.18.0
**Tags:** math
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Calculates a derivative for every field.
[[aggregators.derivative]]
  ## The period in which to flush the aggregator.
  # period = "30s"

  ## Suffix to append for the resulting derivative field.
  # suffix = "_rate"

  ## Field to use for the quotient when computing the derivative.
  ## When using a field as the derivation parameter the name of that field will
  ## be used for the resulting derivative, e.g. *fieldname_by_parameter*.
  ## By default the timestamps of the metrics are used and the suffix is omitted.
  # variable = ""

  ## Maximum number of roll-overs in case only one measurement is found during a period.
  # max_roll_over = 10
```

This aggregator estimates a derivative for each field of a metric that is
contained in both the first and last metric of the aggregation interval.
Without further configuration, the derivative is calculated with respect to
the time difference between these two measurements in seconds.
The following formula is applied for every field:

```text
derivative = (value_last - value_first) / (time_last - time_first)
```

The resulting derivative is named `<fieldname>_rate` if no `suffix` is
configured.

To calculate a derivative for every field, use:

```toml
[[aggregators.derivative]]
  ## Specific Derivative Aggregator Arguments:

  ## Configure a custom derivation variable. Timestamp is used if none is given.
  # variable = ""

  ## Suffix to add to the field name for the derivative name.
  # suffix = "_rate"

  ## Roll-over last measurement to first measurement of next period
  # max_roll_over = 10

  ## General Aggregator Arguments:

  ## Calculate derivative every 30 seconds
  period = "30s"
```

## Time Derivatives

In its default configuration, the plugin determines the first and last
measurement of the period. From these measurements the time difference in
seconds is calculated. This time difference is then used to divide the
difference of each field using the following formula:

```text
derivative = (value_last - value_first) / (time_last - time_first)
```

For each field the derivative is emitted with the naming pattern
`<fieldname>_rate`.
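
As a quick illustration of this formula (Python is used for illustration only, not the plugin's Go implementation), the `bytes_sent_rate` value in the Example Output below follows from the first and last samples of the 20-second period:

```python
def derivative(value_first, value_last, time_first, time_last):
    """Rate of change between the first and last sample of a period,
    with timestamps given in seconds."""
    return (value_last - value_first) / (time_last - time_first)

# bytes_sent goes from 16649 to 87328 over a 20 s period,
# matching bytes_sent_rate=3533.95 in the Example Output.
rate = derivative(16649, 87328, 0, 20)
```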

## Custom Derivation Variable

The plugin supports using a field of the aggregated measurements as the
derivation variable in the denominator. This variable is assumed to be a
monotonically increasing value, in which case the following formula is used:

```text
derivative = (value_last - value_first) / (variable_last - variable_first)
```

**Make sure the specified variable is not filtered and exists in the metrics
passed to this aggregator!**

When using a custom derivation variable, you should change the `suffix` of the
derivative name. See the next section on customizing the derivative name.

| 16 | 4.0 |      |      |      |      |
| 18 | 2.0 |      |      |      |      |
| 20 | 0.0 |      |      |      |      |
|    |     | -1.0 | -1.0 |      |      |

The difference stems from the change of the value between periods, e.g. from 6.0
to 8.0 between the first and second period. Those changes are omitted with
`max_roll_over = 0` but are respected with `max_roll_over = 1`. That there are
no further differences in the calculated derivatives is due to the example data,
which has constant derivatives during the first and last period, even when
including the gap between the periods. Using `max_roll_over` with a value
greater than 0 may be important if you need to detect changes between periods,
e.g. when you have very few measurements in a period or quasi-constant metrics
with only occasional changes.

### Tags

No tags are applied by this aggregator.
Existing tags are passed through the aggregator untouched.

## Example Output

```text
net bytes_recv=15409i,packets_recv=164i,bytes_sent=16649i,packets_sent=120i 1508843640000000000
net bytes_recv=73987i,packets_recv=364i,bytes_sent=87328i,packets_sent=452i 1508843660000000000
net bytes_recv_by_packets_recv=292.89 1508843660000000000
net packets_sent_rate=16.6,bytes_sent_rate=3533.95 1508843660000000000
net bytes_sent_by_packet=292.89 1508843660000000000
```

@ -0,0 +1,107 @@

---
description: "Telegraf plugin for aggregating metrics using Final"
menu:
  telegraf_v1_ref:
    parent: aggregator_plugins_reference
    name: Final
    identifier: aggregator-final
tags: [Final, "aggregator-plugins", "configuration", "sampling"]
introduced: "v1.11.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/aggregators/final/README.md, Final Plugin Source
---

# Final Aggregator Plugin

This plugin emits the last metric of a contiguous series, defined as a
series which receives updates within the time period in `series_timeout`. The
contiguous series may be longer than the time interval defined by `period`.
When a series has not been updated within the `series_timeout`, the last metric
is emitted.

Alternatively, the plugin emits the last metric in the `period` when using the
`periodic` output strategy.

This is useful for getting the final value for data sources that produce
discrete time series, such as procstat, cgroup, or kubernetes, or to downsample
metrics collected at a higher frequency.

> [!NOTE]
> All emitted metrics have `_final` appended to the field name by default.

**Introduced in:** Telegraf v1.11.0
**Tags:** sampling
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Report the final metric of a series
[[aggregators.final]]
  ## The period on which to flush & clear the aggregator.
  # period = "30s"

  ## If true, the original metric will be dropped by the
  ## aggregator and will not get sent to the output plugins.
  # drop_original = false

  ## If false, _final is added to every field name
  # keep_original_field_names = false

  ## The time that a series is not updated until considering it final. Ignored
  ## when output_strategy is "periodic".
  # series_timeout = "5m"

  ## Output strategy, supported values:
  ##   timeout  -- output a metric if no new input arrived for `series_timeout`
  ##   periodic -- output the last received metric every `period`
  # output_strategy = "timeout"
```

### Output strategy

By default (`output_strategy = "timeout"`), the plugin only emits a metric for
the period if the last received one is older than the `series_timeout`. This
does not guarantee a regular output of a `final` metric, e.g. if the
`series_timeout` is a multiple of the gathering interval for an input. In this
case, metrics sporadically arrive during the timeout phase of the period and
emitting the `final` metric is suppressed.

In contrast, `output_strategy = "periodic"` always outputs a `final` metric at
the end of the period, irrespective of when the last metric arrived; the
`series_timeout` is ignored.

## Metrics

Measurement and tags are unchanged, fields are emitted with the suffix
`_final`.

## Example Output

```text
counter,host=bar i_final=3,j_final=6 1554281635115090133
counter,host=foo i_final=3,j_final=6 1554281635112992012
```

Original input:

```text
counter,host=bar i=1,j=4 1554281633101153300
counter,host=foo i=1,j=4 1554281633099323601
counter,host=bar i=2,j=5 1554281634107980073
counter,host=foo i=2,j=5 1554281634105931116
counter,host=bar i=3,j=6 1554281635115090133
counter,host=foo i=3,j=6 1554281635112992012
```

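The selection logic can be sketched as follows; this is a minimal Python illustration (not the plugin's Go implementation) that keeps the newest metric per series and renames the fields:

```python
def final_metrics(metrics):
    """Keep the newest metric per series (series key = measurement + tag set)
    and append `_final` to each field name.
    Each metric is a (series_key, timestamp, fields) tuple."""
    newest = {}
    for series, ts, fields in metrics:
        if series not in newest or ts >= newest[series][0]:
            newest[series] = (ts, fields)
    return {
        series: (ts, {name + "_final": value for name, value in fields.items()})
        for series, (ts, fields) in newest.items()
    }

# The counter,host=bar series from the example input above
inp = [
    ("counter,host=bar", 1554281633101153300, {"i": 1, "j": 4}),
    ("counter,host=bar", 1554281634107980073, {"i": 2, "j": 5}),
    ("counter,host=bar", 1554281635115090133, {"i": 3, "j": 6}),
]
out = final_metrics(inp)
```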
@ -0,0 +1,150 @@

---
description: "Telegraf plugin for aggregating metrics using Histogram"
menu:
  telegraf_v1_ref:
    parent: aggregator_plugins_reference
    name: Histogram
    identifier: aggregator-histogram
tags: [Histogram, "aggregator-plugins", "configuration", "statistics"]
introduced: "v1.4.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/aggregators/histogram/README.md, Histogram Plugin Source
---

# Histogram Aggregator Plugin

This plugin creates histograms containing the counts of field values within the
configured range. The histogram metric is emitted every `period`.

In `cumulative` mode, values added to a bucket are also added to the
consecutive buckets in the distribution, creating a [cumulative histogram](https://en.wikipedia.org/wiki/Histogram#/media/File:Cumulative_vs_normal_histogram.svg).

> [!NOTE]
> By default bucket counts are not reset between periods and will be
> non-strictly increasing while Telegraf is running. This behavior can be
> changed by setting the `reset` parameter.

**Introduced in:** Telegraf v1.4.0
**Tags:** statistics
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Configuration for aggregate histogram metrics
[[aggregators.histogram]]
  ## The period in which to flush the aggregator.
  # period = "30s"

  ## If true, the original metric will be dropped by the
  ## aggregator and will not get sent to the output plugins.
  # drop_original = false

  ## If true, the histogram will be reset on flush instead
  ## of accumulating the results.
  reset = false

  ## Whether bucket values should be accumulated. If set to false, "gt" tag will be added.
  ## Defaults to true.
  cumulative = true

  ## Expiration interval for each histogram. The histogram will be expired if
  ## there are no changes in any buckets for this time interval. 0 == no expiration.
  # expiration_interval = "0m"

  ## If true, aggregated histogram are pushed to output only if it was updated since
  ## previous push. Defaults to false.
  # push_only_on_update = false

  ## Example config that aggregates all fields of the metric.
  # [[aggregators.histogram.config]]
  #   ## Right borders of buckets (with +Inf implicitly added).
  #   buckets = [0.0, 15.6, 34.5, 49.1, 71.5, 80.5, 94.5, 100.0]
  #   ## The name of metric.
  #   measurement_name = "cpu"

  ## Example config that aggregates only specific fields of the metric.
  # [[aggregators.histogram.config]]
  #   ## Right borders of buckets (with +Inf implicitly added).
  #   buckets = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0]
  #   ## The name of metric.
  #   measurement_name = "diskio"
  #   ## The concrete fields of metric
  #   fields = ["io_time", "read_time", "write_time"]
```

The user is responsible for defining the bounds of the histogram buckets as
well as the measurement name and fields to aggregate.

Each histogram config section must contain a `buckets` and `measurement_name`
option. Optionally, if `fields` is set, only the listed fields are
aggregated. If `fields` is not set, all fields are aggregated.

The `buckets` option contains a list of floats which specify the bucket
boundaries. Each float value defines the inclusive upper (right) bound of the
bucket. The `+Inf` bucket is added automatically and does not need to be
defined. (For left boundaries, these specified bucket borders and `-Inf` will
be used.)

## Measurements & Fields

The postfix `bucket` will be added to each field key.

- measurement1
  - field1_bucket
  - field2_bucket

### Tags

- `cumulative = true` (default):
  - `le`: Right bucket border. It means that the metric value is less than or
    equal to the value of this tag. If a metric value is sorted into a bucket,
    it is also sorted into all larger buckets. As a result, the value of
    `<field>_bucket` rises with rising `le` value. When `le` is `+Inf`,
    the bucket value is the count of all metrics, because all metric values are
    less than or equal to positive infinity.
- `cumulative = false`:
  - `gt`: Left bucket border. It means that the metric value is greater than
    (and not equal to) the value of this tag.
  - `le`: Right bucket border. It means that the metric value is less than or
    equal to the value of this tag.
  - As both `gt` and `le` are present, each metric is sorted into exactly
    one bucket.

## Example Output

Let's assume we have the buckets [0, 10, 50, 100] and the following field values
for `usage_idle`: [50, 7, 99, 12]

With `cumulative = true`:

```text
cpu,cpu=cpu1,host=localhost,le=0.0 usage_idle_bucket=0i 1486998330000000000 # none
cpu,cpu=cpu1,host=localhost,le=10.0 usage_idle_bucket=1i 1486998330000000000 # 7
cpu,cpu=cpu1,host=localhost,le=50.0 usage_idle_bucket=2i 1486998330000000000 # 7, 12
cpu,cpu=cpu1,host=localhost,le=100.0 usage_idle_bucket=4i 1486998330000000000 # 7, 12, 50, 99
cpu,cpu=cpu1,host=localhost,le=+Inf usage_idle_bucket=4i 1486998330000000000 # 7, 12, 50, 99
```

With `cumulative = false`:

```text
cpu,cpu=cpu1,host=localhost,gt=-Inf,le=0.0 usage_idle_bucket=0i 1486998330000000000 # none
cpu,cpu=cpu1,host=localhost,gt=0.0,le=10.0 usage_idle_bucket=1i 1486998330000000000 # 7
cpu,cpu=cpu1,host=localhost,gt=10.0,le=50.0 usage_idle_bucket=1i 1486998330000000000 # 12
cpu,cpu=cpu1,host=localhost,gt=50.0,le=100.0 usage_idle_bucket=2i 1486998330000000000 # 50, 99
cpu,cpu=cpu1,host=localhost,gt=100.0,le=+Inf usage_idle_bucket=0i 1486998330000000000 # none
```

@ -0,0 +1,64 @@

---
description: "Telegraf plugin for aggregating metrics using Merge"
menu:
  telegraf_v1_ref:
    parent: aggregator_plugins_reference
    name: Merge
    identifier: aggregator-merge
tags: [Merge, "aggregator-plugins", "configuration", "transformation"]
introduced: "v1.13.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/aggregators/merge/README.md, Merge Plugin Source
---

# Merge Aggregator Plugin

This plugin merges metrics of the same series and timestamp into new metrics
with the super-set of fields. A series here is defined by the metric name and
the tag key-value set.

Use this plugin when fields are split over multiple metrics with the same
measurement, tag set, and timestamp.

**Introduced in:** Telegraf v1.13.0
**Tags:** transformation
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Merge metrics into multifield metrics by series key
[[aggregators.merge]]
  ## General Aggregator Arguments:
  ## The period on which to flush & clear the aggregator.
  # period = "30s"

  ## Precision to round the metric timestamp to
  ## This is useful for cases where metrics to merge arrive within a small
  ## interval and thus vary in timestamp. The timestamp of the resulting metric
  ## is also rounded.
  # round_timestamp_to = "1ns"

  ## If true, the original metric will be dropped by the
  ## aggregator and will not get sent to the output plugins.
  drop_original = true
```

## Example

```diff
- cpu,host=localhost usage_time=42 1567562620000000000
- cpu,host=localhost idle_time=42 1567562620000000000
+ cpu,host=localhost idle_time=42,usage_time=42 1567562620000000000
```

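The merge operation itself amounts to a dictionary union keyed by series and timestamp. A minimal Python sketch (illustration only, not the plugin's Go implementation), using the metrics from the example above:

```python
from collections import defaultdict

def merge_metrics(metrics):
    """Merge metrics that share (measurement, tags, timestamp) into one
    metric carrying the union of their fields."""
    merged = defaultdict(dict)
    for measurement, tags, timestamp, fields in metrics:
        merged[(measurement, tags, timestamp)].update(fields)
    return dict(merged)

inp = [
    ("cpu", (("host", "localhost"),), 1567562620000000000, {"usage_time": 42}),
    ("cpu", (("host", "localhost"),), 1567562620000000000, {"idle_time": 42}),
]
out = merge_metrics(inp)  # one metric with both fields
```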
@ -0,0 +1,71 @@

---
description: "Telegraf plugin for aggregating metrics using Minimum-Maximum"
menu:
  telegraf_v1_ref:
    parent: aggregator_plugins_reference
    name: Minimum-Maximum
    identifier: aggregator-minmax
tags: [Minimum-Maximum, "aggregator-plugins", "configuration", "statistics"]
introduced: "v1.1.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/aggregators/minmax/README.md, Minimum-Maximum Plugin Source
---

# Minimum-Maximum Aggregator Plugin

This plugin aggregates the minimum and maximum values of each field it sees,
emitting the aggregate every `period`, with field names suffixed by `_min`
and `_max` respectively.

**Introduced in:** Telegraf v1.1.0
**Tags:** statistics
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Keep the aggregate min/max of each metric passing through.
[[aggregators.minmax]]
  ## General Aggregator Arguments:
  ## The period on which to flush & clear the aggregator.
  # period = "30s"

  ## If true, the original metric will be dropped by the
  ## aggregator and will not get sent to the output plugins.
  # drop_original = false
```

## Measurements & Fields

- measurement1
  - field1_max
  - field1_min

## Tags

No tags are applied by this aggregator.

## Example Output

```text
system,host=tars load1=1.72 1475583980000000000
system,host=tars load1=1.6 1475583990000000000
system,host=tars load1=1.66 1475584000000000000
system,host=tars load1=1.63 1475584010000000000
system,host=tars load1_max=1.72,load1_min=1.6 1475584010000000000
system,host=tars load1=1.46 1475584020000000000
system,host=tars load1=1.39 1475584030000000000
system,host=tars load1=1.41 1475584040000000000
system,host=tars load1_max=1.46,load1_min=1.39 1475584040000000000
```

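The aggregation amounts to a running min/max per field. As a minimal Python illustration (not the plugin's Go implementation), using the first period from the Example Output above:

```python
def minmax(values, field="load1"):
    """Aggregate a field's min/max over one period, emitted as
    `<field>_min` and `<field>_max`."""
    return {f"{field}_min": min(values), f"{field}_max": max(values)}

# First period: four load1 samples
agg = minmax([1.72, 1.6, 1.66, 1.63])
```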
@ -0,0 +1,160 @@

---
description: "Telegraf plugin for aggregating metrics using Quantile"
menu:
  telegraf_v1_ref:
    parent: aggregator_plugins_reference
    name: Quantile
    identifier: aggregator-quantile
tags: [Quantile, "aggregator-plugins", "configuration", "statistics"]
introduced: "v1.18.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/aggregators/quantile/README.md, Quantile Plugin Source
---

# Quantile Aggregator Plugin

This plugin aggregates each numeric field per metric into the specified
quantiles and emits the quantiles every `period`. Different aggregation
algorithms are supported with varying accuracy and limitations.

**Introduced in:** Telegraf v1.18.0
**Tags:** statistics
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Keep the aggregate quantiles of each metric passing through.
[[aggregators.quantile]]
  ## General Aggregator Arguments:
  ## The period on which to flush & clear the aggregator.
  # period = "30s"

  ## If true, the original metric will be dropped by the
  ## aggregator and will not get sent to the output plugins.
  # drop_original = false

  ## Quantiles to output in the range [0,1]
  # quantiles = [0.25, 0.5, 0.75]

  ## Type of aggregation algorithm
  ## Supported are:
  ##  "t-digest" -- approximation using centroids, can cope with large number of samples
  ##  "exact R7" -- exact computation also used by Excel or NumPy (Hyndman & Fan 1996 R7)
  ##  "exact R8" -- exact computation (Hyndman & Fan 1996 R8)
  ## NOTE: Do not use "exact" algorithms with large number of samples
  ##       to not impair performance or memory consumption!
  # algorithm = "t-digest"

  ## Compression for approximation (t-digest). The value needs to be
  ## greater or equal to 1.0. Smaller values will result in more
  ## performance but less accuracy.
  # compression = 100.0
```

## Algorithm types

### t-digest

Proposed by [Dunning & Ertl (2019)](https://arxiv.org/abs/1902.04023), this type uses a
special data structure to cluster data. These clusters are later used
to approximate the requested quantiles. The bounds of the approximation
can be controlled by the `compression` setting, where smaller values
result in higher performance but less accuracy.

Due to its incremental nature, this algorithm can handle large
numbers of samples efficiently. It is recommended for applications
where exact quantile calculation isn't required.

For implementation details see the underlying [golang library](https://github.com/caio/go-tdigest).

### exact R7 and R8

These algorithms compute quantiles as described in Hyndman & Fan
(1996). The R7 variant is used in Excel and NumPy. The R8
variant is recommended by Hyndman & Fan due to its independence of the
underlying sample distribution.

These algorithms save all data for the aggregation `period`. They require a lot
of memory when used with a large number of series or a large number of
samples. They are slower than the `t-digest` algorithm and are recommended only
for use with a small number of samples and series.
||||
|
||||
## Benchmark (linux/amd64)
|
||||
|
||||
The benchmark was performed by adding 100 metrics with six numeric
|
||||
(and two non-numeric) fields to the aggregator and the derive the aggregation
|
||||
result.
|
||||
|
||||
| algorithm | # quantiles | avg. runtime |
|
||||
| :------------ | -------------:| -------------:|
|
||||
| t-digest | 3 | 376372 ns/op |
|
||||
| exact R7 | 3 | 9782946 ns/op |
|
||||
| exact R8 | 3 | 9158205 ns/op |
|
||||
| t-digest | 100 | 899204 ns/op |
|
||||
| exact R7 | 100 | 7868816 ns/op |
|
||||
| exact R8 | 100 | 8099612 ns/op |
|
||||
|
||||
## Measurements
|
||||
|
||||
Measurement names are passed through this aggregator.
|
||||
|
||||
### Fields
|
||||
|
||||
For all numeric fields (int32/64, uint32/64 and float32/64) new *quantile*
|
||||
fields are aggregated in the form `<fieldname>_<quantile*100>`. Other field
|
||||
types (e.g. boolean, string) are ignored and dropped from the output.
|
||||
|
||||
For example passing in the following metric as *input*:
|
||||
|
||||
- somemetric
|
||||
- average_response_ms (float64)
|
||||
- minimum_response_ms (float64)
|
||||
- maximum_response_ms (float64)
|
||||
- status (string)
|
||||
- ok (boolean)
|
||||
|
||||
and the default setting for `quantiles` you get the following *output*
|
||||
|
||||
- somemetric
  - average_response_ms_025 (float64)
  - average_response_ms_050 (float64)
  - average_response_ms_075 (float64)
  - minimum_response_ms_025 (float64)
  - minimum_response_ms_050 (float64)
  - minimum_response_ms_075 (float64)
  - maximum_response_ms_025 (float64)
  - maximum_response_ms_050 (float64)
  - maximum_response_ms_075 (float64)

The `status` and `ok` fields are dropped because they are not numeric. Note that the number of resulting fields scales with the number of `quantiles` specified.

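The naming scheme can be sketched in a few lines of Python (an illustrative snippet, not the plugin's Go code; the `quantile_field_names` helper and the three-digit zero padding are assumptions based on the output example above):

```python
def quantile_field_names(field, quantiles):
    # Derive output field names in the form <fieldname>_<quantile*100>,
    # zero-padded to three digits as in the example output above.
    return ["%s_%03d" % (field, round(q * 100)) for q in quantiles]

# With the default quantiles [0.25, 0.5, 0.75]:
names = quantile_field_names("average_response_ms", [0.25, 0.5, 0.75])
```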
### Tags

Tags are passed through to the output by this aggregator.

### Example Output

```text
cpu,cpu=cpu-total,host=Hugin usage_user=10.814851731872487,usage_system=2.1679541490155687,usage_irq=1.046598554697342,usage_steal=0,usage_guest_nice=0,usage_idle=85.79616247197244,usage_nice=0,usage_iowait=0,usage_softirq=0.1744330924495688,usage_guest=0 1608288360000000000
cpu,cpu=cpu-total,host=Hugin usage_guest=0,usage_system=2.1601016518428664,usage_iowait=0.02541296060990694,usage_irq=1.0165184243964942,usage_softirq=0.1778907242693666,usage_steal=0,usage_guest_nice=0,usage_user=9.275730622616953,usage_idle=87.34434561626493,usage_nice=0 1608288370000000000
cpu,cpu=cpu-total,host=Hugin usage_idle=85.78199052131747,usage_nice=0,usage_irq=1.0476428036915637,usage_guest=0,usage_guest_nice=0,usage_system=1.995510102269591,usage_iowait=0,usage_softirq=0.1995510102269662,usage_steal=0,usage_user=10.975305562484735 1608288380000000000
cpu,cpu=cpu-total,host=Hugin usage_guest_nice_075=0,usage_user_050=10.814851731872487,usage_guest_075=0,usage_steal_025=0,usage_irq_025=1.031558489546918,usage_irq_075=1.0471206791944527,usage_iowait_025=0,usage_guest_050=0,usage_guest_nice_050=0,usage_nice_075=0,usage_iowait_050=0,usage_system_050=2.1601016518428664,usage_irq_050=1.046598554697342,usage_guest_nice_025=0,usage_idle_050=85.79616247197244,usage_softirq_075=0.1887208672481664,usage_steal_075=0,usage_system_025=2.0778058770562287,usage_system_075=2.1640279004292173,usage_softirq_050=0.1778907242693666,usage_nice_050=0,usage_iowait_075=0.01270648030495347,usage_user_075=10.895078647178611,usage_nice_025=0,usage_steal_050=0,usage_user_025=10.04529117724472,usage_idle_025=85.78907649664495,usage_idle_075=86.57025404411868,usage_softirq_025=0.1761619083594677,usage_guest_025=0 1608288390000000000
```

[tdigest_paper]: https://arxiv.org/abs/1902.04023
[tdigest_lib]: https://github.com/caio/go-tdigest
[hyndman_fan]: http://www.maths.usyd.edu.au/u/UG/SM/STAT3022/r/current/Misc/Sample%20Quantiles%20in%20Statistical%20Packages.pdf
---
description: "Telegraf plugin for aggregating metrics using Starlark"
menu:
  telegraf_v1_ref:
    parent: aggregator_plugins_reference
    name: Starlark
    identifier: aggregator-starlark
tags: [Starlark, "aggregator-plugins", "configuration", "transformation"]
introduced: "v1.21.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/aggregators/starlark/README.md, Starlark Plugin Source
---

# Starlark Aggregator Plugin

This plugin lets you implement a custom aggregator plugin via a [Starlark](https://github.com/google/starlark-go) script.

The Starlark language is a dialect of Python and will be familiar to those who have experience with the Python language. However, there are major differences; existing Python code is unlikely to work unmodified.

> [!NOTE]
> The execution environment is sandboxed; it is not possible to access the
> local filesystem or perform network operations. This is by design of the
> Starlark language as a configuration language.

The Starlark script used by this plugin needs to be composed of the three methods of an aggregator, named `add`, `push` and `reset`.

The `add` method is called as soon as a new metric is added to the plugin, adding the metric to the aggregator. After `period`, the `push` method is called to output the resulting metrics, and finally the aggregation is reset using the `reset` method of the Starlark script.

The Starlark functions can use the global variable `state` to keep aggregation information such as added metrics.

More details on the syntax and available functions can be found in the [Starlark specification](https://github.com/google/starlark-go/blob/d1966c6b9fcd/doc/spec.md).

**Introduced in:** Telegraf v1.21.0
**Tags:** transformation
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support additional global and plugin configuration settings. These settings are used to modify metrics, tags, and fields or create aliases and configure ordering, etc. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Aggregate metrics using a Starlark script
[[aggregators.starlark]]
  ## The Starlark source can be set as a string in this configuration file, or
  ## by referencing a file containing the script. Only one source or script
  ## should be set at once.
  ##
  ## Source of the Starlark script.
  source = '''
state = {}

def add(metric):
  state["last"] = metric

def push():
  return state.get("last")

def reset():
  state.clear()
'''

  ## File containing a Starlark script.
  # script = "/usr/local/bin/myscript.star"

  ## The constants of the Starlark script.
  # [aggregators.starlark.constants]
  #   max_size = 10
  #   threshold = 0.75
  #   default_name = "Julia"
  #   debug_mode = true
```

## Usage

The Starlark code should contain a function called `add` that takes a metric as argument. The function is called with each metric to add and doesn't return anything.

```python
def add(metric):
  state["last"] = metric
```

The Starlark code should also contain a function called `push` that doesn't take any argument. The function is called to compute the aggregation and returns the metrics to push to the accumulator.

```python
def push():
  return state.get("last")
```

The Starlark code should also contain a function called `reset` that doesn't take any argument. The function is called to reset the plugin and doesn't return anything.

```python
def reset():
  state.clear()
```

For a list of available types and functions that can be used in the code, see the [Starlark specification](https://github.com/google/starlark-go/blob/d1966c6b9fcd/doc/spec.md).

## Python Differences

Refer to the *Python Differences* section of the Starlark processor documentation.

## Libraries available

Refer to the *Libraries available* section of the Starlark processor documentation.

## Common Questions

Refer to the *Common Questions* section of the Starlark processor documentation.

## Examples

- minmax
- merge

All examples are in the testdata folder.

Open a Pull Request to add any other useful Starlark examples.
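As a rough sketch of the shape such an example takes, a minimal minmax-style aggregator could look like the following (assumptions: numeric-only fields and metric objects exposing a mutable `fields` dict, as the Telegraf Starlark runtime provides; this is not the testdata example verbatim):

```python
state = {}

def add(metric):
    # Keep the last metric as the output template and track min/max per field.
    state["template"] = metric
    minmax = state.setdefault("minmax", {})
    for name, value in metric.fields.items():
        lo, hi = minmax.get(name, (value, value))
        minmax[name] = (min(lo, value), max(hi, value))

def push():
    # Emit the template enriched with <field>_min and <field>_max fields.
    metric = state.get("template")
    if metric == None:
        return None
    for name, (lo, hi) in state.get("minmax", {}).items():
        metric.fields[name + "_min"] = lo
        metric.fields[name + "_max"] = hi
    return metric

def reset():
    state.clear()
```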
---
description: "Telegraf plugin for aggregating metrics using Value Counter"
menu:
  telegraf_v1_ref:
    parent: aggregator_plugins_reference
    name: Value Counter
    identifier: aggregator-valuecounter
tags: [Value Counter, "aggregator-plugins", "configuration", "statistics"]
introduced: "v1.8.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/aggregators/valuecounter/README.md, Value Counter Plugin Source
---

# Value Counter Aggregator Plugin

This plugin counts the occurrence of unique values in fields and emits the counter once every `period`, with the field names suffixed by the unique value converted to `string`.

> [!NOTE]
> The fields to be counted must be configured using the `fields` setting,
> otherwise no field will be counted and no metric is emitted.

This plugin is useful to, for example, count the occurrences of HTTP status codes or other categorical values in the defined `period`.

> [!IMPORTANT]
> Counting fields with a high number of potential values may produce a
> significant number of new fields and result in increased memory usage.
> Take care to only count fields with a limited set of values.

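The counting behavior can be sketched in a few lines of Python (illustrative only, not the plugin's Go implementation; `count_values` and its arguments are hypothetical names):

```python
from collections import Counter

def count_values(period_metrics, fields):
    # Count each configured field's values over one period,
    # emitting counters named <field>_<value>.
    counts = Counter()
    for field_map in period_metrics:
        for name in fields:
            if name in field_map:
                counts["%s_%s" % (name, field_map[name])] += 1
    return dict(counts)

# Three HTTP responses in one period, counting the "response" field:
result = count_values(
    [{"response": "200"}, {"response": "401"}, {"response": "200"}],
    ["response"],
)
```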
**Introduced in:** Telegraf v1.8.0
**Tags:** statistics
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support additional global and plugin configuration settings. These settings are used to modify metrics, tags, and fields or create aliases and configure ordering, etc. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Count the occurrence of values in fields.
[[aggregators.valuecounter]]
  ## General Aggregator Arguments:
  ## The period on which to flush & clear the aggregator.
  # period = "30s"

  ## If true, the original metric will be dropped by the
  ## aggregator and will not get sent to the output plugins.
  # drop_original = false

  ## The fields for which the values will be counted
  fields = ["status"]
```

### Measurements & Fields

- measurement1
  - field_value1
  - field_value2

### Tags

No tags are applied by this aggregator.

## Example Output

Example for parsing an HTTP access log.

telegraf.conf:

```toml
[[inputs.logparser]]
  files = ["/tmp/tst.log"]
  [inputs.logparser.grok]
    patterns = ['%{DATA:url:tag} %{NUMBER:response:string}']
    measurement = "access"

[[aggregators.valuecounter]]
  namepass = ["access"]
  fields = ["response"]
```

/tmp/tst.log:

```text
/some/path 200
/some/path 401
/some/path 200
```

```text
access,url=/some/path,path=/tmp/tst.log,host=localhost.localdomain response="200" 1511948755991487011
access,url=/some/path,path=/tmp/tst.log,host=localhost.localdomain response="401" 1511948755991522282
access,url=/some/path,path=/tmp/tst.log,host=localhost.localdomain response="200" 1511948755991531697
access,path=/tmp/tst.log,host=localhost.localdomain,url=/some/path response_200=2i,response_401=1i 1511948761000000000
```

---
title: "Telegraf Input Plugins"
description: "Telegraf input plugins collect metrics from the system, services, and third-party APIs."
menu:
  telegraf_v1_ref:
    name: Input plugins
    parent: plugins_reference
    identifier: input_plugins_reference
weight: 10
tags: [input-plugins]
---

Telegraf input plugins collect metrics from the system, services, and third-party APIs.

{{< telegraf/plugins type="input" >}}

---
description: "Telegraf plugin for collecting metrics from ActiveMQ"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: ActiveMQ
    identifier: input-activemq
tags: [ActiveMQ, "input-plugins", "configuration", "messaging"]
introduced: "v1.8.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/activemq/README.md, ActiveMQ Plugin Source
---

# ActiveMQ Input Plugin

This plugin gathers queue, topic and subscriber metrics using the Console API of the [ActiveMQ](https://activemq.apache.org/) message broker daemon.

**Introduced in:** Telegraf v1.8.0
**Tags:** messaging
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support additional global and plugin configuration settings. These settings are used to modify metrics, tags, and fields or create aliases and configure ordering, etc. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Gather ActiveMQ metrics
[[inputs.activemq]]
  ## ActiveMQ WebConsole URL
  url = "http://127.0.0.1:8161"

  ## Credentials for basic HTTP authentication
  # username = "admin"
  # password = "admin"

  ## Required ActiveMQ webadmin root path
  # webadmin = "admin"

  ## Maximum time to receive response.
  # response_timeout = "5s"

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false
```

## Metrics

Every effort was made to preserve the names based on the XML response from the ActiveMQ Console API.

- activemq_queues
  - tags:
    - name
    - source
    - port
  - fields:
    - size
    - consumer_count
    - enqueue_count
    - dequeue_count
- activemq_topics
  - tags:
    - name
    - source
    - port
  - fields:
    - size
    - consumer_count
    - enqueue_count
    - dequeue_count
- activemq_subscribers
  - tags:
    - client_id
    - subscription_name
    - connection_id
    - destination_name
    - selector
    - active
    - source
    - port
  - fields:
    - pending_queue_size
    - dispatched_queue_size
    - dispatched_counter
    - enqueue_counter
    - dequeue_counter

## Example Output

```text
activemq_queues,name=sandra,host=88284b2fe51b,source=localhost,port=8161 consumer_count=0i,enqueue_count=0i,dequeue_count=0i,size=0i 1492610703000000000
activemq_queues,name=Test,host=88284b2fe51b,source=localhost,port=8161 dequeue_count=0i,size=0i,consumer_count=0i,enqueue_count=0i 1492610703000000000
activemq_topics,name=ActiveMQ.Advisory.MasterBroker\ ,host=88284b2fe51b,source=localhost,port=8161 size=0i,consumer_count=0i,enqueue_count=1i,dequeue_count=0i 1492610703000000000
activemq_topics,host=88284b2fe51b,name=AAA,source=localhost,port=8161 size=0i,consumer_count=1i,enqueue_count=0i,dequeue_count=0i 1492610703000000000
activemq_topics,name=ActiveMQ.Advisory.Topic\ ,source=localhost,port=8161,host=88284b2fe51b enqueue_count=1i,dequeue_count=0i,size=0i,consumer_count=0i 1492610703000000000
activemq_topics,name=ActiveMQ.Advisory.Queue\ ,source=localhost,port=8161,host=88284b2fe51b size=0i,consumer_count=0i,enqueue_count=2i,dequeue_count=0i 1492610703000000000
activemq_topics,name=AAAA\ ,host=88284b2fe51b,source=localhost,port=8161 consumer_count=0i,enqueue_count=0i,dequeue_count=0i,size=0i 1492610703000000000
activemq_subscribers,connection_id=NOTSET,destination_name=AAA,source=localhost,port=8161,selector=AA,active=no,host=88284b2fe51b,client_id=AAA,subscription_name=AAA pending_queue_size=0i,dispatched_queue_size=0i,dispatched_counter=0i,enqueue_counter=0i,dequeue_counter=0i 1492610703000000000
```

---
description: "Telegraf plugin for collecting metrics from Alibaba Cloud Monitor Service (Aliyun)"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Alibaba Cloud Monitor Service (Aliyun)
    identifier: input-aliyuncms
tags: [Alibaba Cloud Monitor Service (Aliyun), "input-plugins", "configuration", "cloud"]
introduced: "v1.19.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/aliyuncms/README.md, Alibaba Cloud Monitor Service (Aliyun) Plugin Source
---

# Alibaba Cloud Monitor Service (Aliyun) Input Plugin

This plugin gathers statistics from the [Alibaba / Aliyun cloud monitoring service](https://www.alibabacloud.com). In the following, we use `Aliyun` instead of `Alibaba`, as it's the default naming across the web console and docs.

**Introduced in:** Telegraf v1.19.0
**Tags:** cloud
**OS support:** all

## Aliyun Authentication

This plugin uses an [AccessKey](https://www.alibabacloud.com/help/doc-detail/53045.htm?spm=a2c63.p38356.b99.127.5cba21fdt5MJKr&parentId=28572) credential for authentication with the Aliyun OpenAPI endpoint. The plugin attempts to authenticate in the following order:

1. Ram RoleARN credential if `access_key_id`, `access_key_secret`, `role_arn`, `role_session_name` is specified
2. AccessKey STS token credential if `access_key_id`, `access_key_secret`, `access_key_sts_token` is specified
3. AccessKey credential if `access_key_id`, `access_key_secret` is specified
4. Ecs Ram Role Credential if `role_name` is specified
5. RSA keypair credential if `private_key`, `public_key_id` is specified
6. Environment variables credential
7. Instance metadata credential

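The resolution order above can be sketched as follows (an illustrative Python snippet; `resolve_credential` and the config-dict shape are assumptions for illustration, not plugin code):

```python
def resolve_credential(cfg):
    # Walk the documented credential types in priority order and return the
    # first one whose required settings are all present in the config dict.
    order = [
        ("ram_role_arn", ["access_key_id", "access_key_secret",
                          "role_arn", "role_session_name"]),
        ("sts_token", ["access_key_id", "access_key_secret",
                       "access_key_sts_token"]),
        ("access_key", ["access_key_id", "access_key_secret"]),
        ("ecs_ram_role", ["role_name"]),
        ("rsa_key_pair", ["private_key", "public_key_id"]),
    ]
    for name, keys in order:
        if all(cfg.get(k) for k in keys):
            return name
    # Fall back to environment variables, then instance metadata.
    return "environment_or_instance_metadata"
```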
## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support additional global and plugin configuration settings. These settings are used to modify metrics, tags, and fields or create aliases and configure ordering, etc. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Pull Metric Statistics from Aliyun CMS
[[inputs.aliyuncms]]
  ## Aliyun Credentials
  ## Credentials are loaded in the following order
  ## 1) Ram RoleArn credential
  ## 2) AccessKey STS token credential
  ## 3) AccessKey credential
  ## 4) Ecs Ram Role credential
  ## 5) RSA keypair credential
  ## 6) Environment variables credential
  ## 7) Instance metadata credential

  # access_key_id = ""
  # access_key_secret = ""
  # access_key_sts_token = ""
  # role_arn = ""
  # role_session_name = ""
  # private_key = ""
  # public_key_id = ""
  # role_name = ""

  ## Specify the ali cloud regions to be queried for metric and object discovery
  ## If not set, all supported regions (see below) would be covered; this can
  ## put a significant load on the API, so the recommendation here is to
  ## limit the list as much as possible.
  ## Allowed values: https://www.alibabacloud.com/help/zh/doc-detail/40654.htm
  ## Default supported regions are:
  ##   cn-qingdao,cn-beijing,cn-zhangjiakou,cn-huhehaote,cn-hangzhou,
  ##   cn-shanghai,cn-shenzhen,cn-heyuan,cn-chengdu,cn-hongkong,
  ##   ap-southeast-1,ap-southeast-2,ap-southeast-3,ap-southeast-5,
  ##   ap-south-1,ap-northeast-1,us-west-1,us-east-1,eu-central-1,
  ##   eu-west-1,me-east-1
  ##
  ## From a discovery perspective it sets the scope for object discovery;
  ## the discovered info can be used to enrich the metrics with object
  ## attributes/tags. Discovery is not supported for all projects.
  ## Currently, discovery is supported for the following projects:
  ## - acs_ecs_dashboard
  ## - acs_rds_dashboard
  ## - acs_slb_dashboard
  ## - acs_vpc_eip
  regions = ["cn-hongkong"]

  ## Requested AliyunCMS aggregation Period (required)
  ## The period must be a multiple of 60s and the minimum for AliyunCMS metrics
  ## is 1 minute (60s). However, not all metrics are made available at the
  ## one minute period. Some are collected at 3 minute, 5 minute, or larger
  ## intervals.
  ## See: https://help.aliyun.com/document_detail/51936.html?spm=a2c4g.11186623.2.18.2bc1750eeOw1Pv
  ## Note that if a period is configured that is smaller than the minimum for
  ## a particular metric, that metric will not be returned by Aliyun's
  ## OpenAPI and will not be collected by Telegraf.
  period = "5m"

  ## Collection Delay (required)
  ## The delay must account for metrics availability via the AliyunCMS API.
  delay = "1m"

  ## Recommended: use a metric 'interval' that is a multiple of 'period'
  ## to avoid gaps or overlap in pulled data
  interval = "5m"

  ## Metric Statistic Project (required)
  project = "acs_slb_dashboard"

  ## Maximum requests per second, default value is 200
  ratelimit = 200

  ## How often the discovery API call is executed (default 1m)
  # discovery_interval = "1m"

  ## NOTE: Due to the way TOML is parsed, tables must be at the END of the
  ## plugin definition, otherwise additional config options are read as part of
  ## the table

  ## Metrics to Pull
  ## At least one metrics definition is required
  [[inputs.aliyuncms.metrics]]
    ## Metric names to be requested
    ## Descriptions can be found here (per project):
    ## https://help.aliyun.com/document_detail/28619.html?spm=a2c4g.11186623.6.690.1938ad41wg8QSq
    names = ["InstanceActiveConnection", "InstanceNewConnection"]

    ## Dimension filters for the metric (optional)
    ## This allows getting additional metric dimensions. If a dimension is not
    ## specified it can be returned or the data can be aggregated - it depends
    ## on the particular metric; you can find details here:
    ## https://help.aliyun.com/document_detail/28619.html?spm=a2c4g.11186623.6.690.1938ad41wg8QSq
    ##
    ## Note that by default the dimension filter includes the list of
    ## discovered objects in scope (if discovery is enabled). Values specified
    ## here are added to the list of discovered objects. You can specify
    ## either a single dimension:
    # dimensions = '{"instanceId": "p-example"}'

    ## Or you can specify several dimensions at once:
    # dimensions = '[{"instanceId": "p-example"},{"instanceId": "q-example"}]'

    ## Tag Query Path
    ## The following tags are added by default:
    ## * regionId (if discovery enabled)
    ## * userId
    ## * instanceId
    ## Enrichment tags can be added from discovery (if supported).
    ## Notation is
    ## <measurement_tag_name>:<JMES query path (https://jmespath.org/tutorial.html)>
    ## To figure out which fields are available, consult the
    ## Describe<ObjectType> API per project. For example, for SLB see:
    ## https://api.aliyun.com/#/?product=Slb&version=2014-05-15&api=DescribeLoadBalancers&params={}&tab=MOCK&lang=GO
    # tag_query_path = [
    #   "address:Address",
    #   "name:LoadBalancerName",
    #   "cluster_owner:Tags.Tag[?TagKey=='cs.cluster.name'].TagValue | [0]"
    # ]

    ## Allow metrics without discovery data, if discovery is enabled.
    ## If set to true, metrics without discovery data are emitted, otherwise dropped.
    ## This can help when debugging dimension filters, or with partial coverage
    ## of discovery scope vs monitoring scope.
    # allow_dps_without_discovery = false
```

### Requirements and Terminology

Plugin configuration utilizes [preset metric items references](https://www.alibabacloud.com/help/doc-detail/28619.htm?spm=a2c63.p38356.a3.2.389f233d0kPJn0):

- `discovery_region` must be a valid Aliyun [Region](https://www.alibabacloud.com/help/doc-detail/40654.htm) value
- `period` must be a valid duration value
- `project` must be a preset project value
- `names` must be preset metric names
- `dimensions` must be preset dimension values

## Metrics

Each monitored Aliyun CMS project records a measurement with fields for each available metric statistic. Project and metric names are represented in [snake case](https://en.wikipedia.org/wiki/Snake_case).

- aliyuncms_{project}
  - {metric}_average (metric Average value)
  - {metric}_minimum (metric Minimum value)
  - {metric}_maximum (metric Maximum value)
  - {metric}_value (metric Value value)

## Example Output

```text
aliyuncms_acs_slb_dashboard,instanceId=p-example,regionId=cn-hangzhou,userId=1234567890 latency_average=0.004810798017284538,latency_maximum=0.1100282669067383,latency_minimum=0.0006084442138671875
```

---
description: "Telegraf plugin for collecting metrics from AMD ROCm System Management Interface (SMI)"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: AMD ROCm System Management Interface (SMI)
    identifier: input-amd_rocm_smi
tags: [AMD ROCm System Management Interface (SMI), "input-plugins", "configuration", "hardware", "system"]
introduced: "v1.20.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/amd_rocm_smi/README.md, AMD ROCm System Management Interface (SMI) Plugin Source
---

# AMD ROCm System Management Interface (SMI) Input Plugin

This plugin gathers statistics, including memory and GPU usage, temperatures, etc., from [AMD ROCm platform](https://rocm.docs.amd.com/) GPUs.

> [!IMPORTANT]
> The [`rocm-smi` binary](https://github.com/RadeonOpenCompute/rocm_smi_lib/tree/master/python_smi_tools) is required and needs to be installed on the
> system.

**Introduced in:** Telegraf v1.20.0
**Tags:** hardware, system
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support additional global and plugin configuration settings. These settings are used to modify metrics, tags, and fields or create aliases and configure ordering, etc. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Startup error behavior options

In addition to the plugin-specific and global configuration settings, the plugin supports options for specifying the behavior when experiencing startup errors using the `startup_error_behavior` setting. Available values are:

- `error`: Telegraf will stop and exit in case of startup errors. This is the
  default behavior.
- `ignore`: Telegraf will ignore startup errors for this plugin and disable it,
  but continue processing for all other plugins.
- `retry`: not available for this plugin.

## Configuration

```toml @sample.conf
# Query statistics from AMD Graphics cards using rocm-smi binary
[[inputs.amd_rocm_smi]]
  ## Optional: path to rocm-smi binary, defaults to $PATH via exec.LookPath
  # bin_path = "/opt/rocm/bin/rocm-smi"

  ## Optional: timeout for GPU polling
  # timeout = "5s"
```

## Metrics

- measurement: `amd_rocm_smi`
  - tags
    - `name` (entry name assigned by the rocm-smi executable)
    - `gpu_id` (id of the GPU according to rocm-smi)
    - `gpu_unique_id` (unique id of the GPU)
  - fields
    - `driver_version` (integer)
    - `fan_speed` (integer)
    - `memory_total` (integer, B)
    - `memory_used` (integer, B)
    - `memory_free` (integer, B)
    - `temperature_sensor_edge` (float, Celsius)
    - `temperature_sensor_junction` (float, Celsius)
    - `temperature_sensor_memory` (float, Celsius)
    - `utilization_gpu` (integer, percentage)
    - `utilization_memory` (integer, percentage)
    - `clocks_current_sm` (integer, MHz)
    - `clocks_current_memory` (integer, MHz)
    - `clocks_current_display` (integer, MHz)
    - `clocks_current_fabric` (integer, MHz)
    - `clocks_current_system` (integer, MHz)
    - `power_draw` (float, Watt)
    - `card_series` (string)
    - `card_model` (string)
    - `card_vendor` (string)

## Troubleshooting

Check the full output by running the `rocm-smi` binary manually.

Linux:

```sh
rocm-smi -o -l -m -M -g -c -t -u -i -f -p -P -s -S -v --showreplaycount --showpids --showdriverversion --showmemvendor --showfwinfo --showproductname --showserial --showuniqueid --showbus --showpendingpages --showpagesinfo --showretiredpages --showunreservablepages --showmemuse --showvoltage --showtopo --showtopoweight --showtopohops --showtopotype --showtoponuma --showmeminfo all --json
```

Please include the output of this command if opening a GitHub issue, together with the ROCm version.

## Example Output

```text
amd_rocm_smi,gpu_id=0x6861,gpu_unique_id=0x2150e7d042a1124,host=ali47xl,name=card0 clocks_current_memory=167i,clocks_current_sm=852i,driver_version=51114i,fan_speed=14i,memory_free=17145282560i,memory_total=17163091968i,memory_used=17809408i,power_draw=7,temperature_sensor_edge=28,temperature_sensor_junction=29,temperature_sensor_memory=92,utilization_gpu=0i 1630572551000000000
amd_rocm_smi,gpu_id=0x6861,gpu_unique_id=0x2150e7d042a1124,host=ali47xl,name=card0 clocks_current_memory=167i,clocks_current_sm=852i,driver_version=51114i,fan_speed=14i,memory_free=17145282560i,memory_total=17163091968i,memory_used=17809408i,power_draw=7,temperature_sensor_edge=29,temperature_sensor_junction=30,temperature_sensor_memory=91,utilization_gpu=0i 1630572701000000000
amd_rocm_smi,gpu_id=0x6861,gpu_unique_id=0x2150e7d042a1124,host=ali47xl,name=card0 clocks_current_memory=167i,clocks_current_sm=852i,driver_version=51114i,fan_speed=14i,memory_free=17145282560i,memory_total=17163091968i,memory_used=17809408i,power_draw=7,temperature_sensor_edge=29,temperature_sensor_junction=29,temperature_sensor_memory=92,utilization_gpu=0i 1630572749000000000
```

## Limitations and notices
|
||||
|
||||
Please notice that this plugin has been developed and tested on a limited number
|
||||
of versions and small set of GPUs. Currently the latest ROCm version tested is
|
||||
4.3.0. Notice that depending on the device and driver versions the amount of
|
||||
information provided by `rocm-smi` can vary so that some fields would start/stop
|
||||
appearing in the metrics upon updates. The `rocm-smi` JSON output is not
|
||||
perfectly homogeneous and is possibly changing in the future, hence parsing and
|
||||
unmarshalling can start failing upon updating ROCm.
|
||||
|
||||
Inspired by the current state of the art of the `nvidia-smi` plugin.
|
||||
|
|
@ -0,0 +1,208 @@
|
|||
---
description: "Telegraf plugin for collecting metrics from AMQP Consumer"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: AMQP Consumer
    identifier: input-amqp_consumer
tags: [AMQP Consumer, "input-plugins", "configuration", "messaging"]
introduced: "v1.3.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/amqp_consumer/README.md, AMQP Consumer Plugin Source
---

# AMQP Consumer Input Plugin

This plugin consumes messages from an Advanced Message Queuing Protocol v0.9.1
broker. A prominent implementation of this protocol is [RabbitMQ](https://www.rabbitmq.com).

Metrics are read from a topic exchange using the configured queue and binding
key. The message payloads must be formatted in one of the supported
[data formats](/telegraf/v1/data_formats/input).

For an introduction, check the [AMQP concepts page](https://www.rabbitmq.com/tutorials/amqp-concepts.html) and the
[RabbitMQ getting started guide](https://www.rabbitmq.com/getstarted.html).

**Introduced in:** Telegraf v1.3.0
**Tags:** messaging
**OS support:** all

## Service Input <!-- @/docs/includes/service_input.md -->

This plugin is a service input. Normal plugins gather metrics at the configured
interval. Service plugins start a service that listens and waits for metrics or
events to occur. Service plugins have two key differences from normal plugins:

1. The global or plugin-specific `interval` setting may not apply.
2. The CLI options `--test`, `--test-wait`, and `--once` may not produce
   output for this plugin.

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Startup error behavior options <!-- @/docs/includes/startup_error_behavior.md -->

In addition to the plugin-specific and global configuration settings, the plugin
supports options for specifying the behavior when experiencing startup errors
using the `startup_error_behavior` setting. Available values are:

- `error`: Telegraf will stop and exit in case of startup errors. This is the
  default behavior.
- `ignore`: Telegraf will ignore startup errors for this plugin and disable it,
  but continue processing all other plugins.
- `retry`: Telegraf will try to start up the plugin in every gather or write
  cycle in case of startup errors. The plugin is disabled until
  the startup succeeds.
- `probe`: Telegraf will probe the plugin's function (if possible) and disable
  the plugin if probing fails. If the plugin does not support probing, Telegraf
  behaves as if `ignore` was set instead.
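For example, to let Telegraf start even while the AMQP broker is temporarily unreachable and retry on each cycle, the option can be set directly in the plugin section (a sketch):

```toml
[[inputs.amqp_consumer]]
  brokers = ["amqp://localhost:5672/influxdb"]
  ## Retry plugin startup on every gather/write cycle instead of exiting
  startup_error_behavior = "retry"
```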
## Secret-store support

This plugin supports secrets from secret-stores for the `username` and
`password` options.
See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
to use them.

## Configuration

```toml @sample.conf
# AMQP consumer plugin
[[inputs.amqp_consumer]]
  ## Brokers to consume from. If multiple brokers are specified a random broker
  ## will be selected anytime a connection is established. This can be
  ## helpful for load balancing when not using a dedicated load balancer.
  brokers = ["amqp://localhost:5672/influxdb"]

  ## Authentication credentials for the PLAIN auth_method.
  # username = ""
  # password = ""

  ## Name of the exchange to declare. If unset, no exchange will be declared.
  exchange = "telegraf"

  ## Exchange type; common types are "direct", "fanout", "topic", "header", "x-consistent-hash".
  # exchange_type = "topic"

  ## If true, exchange will be passively declared.
  # exchange_passive = false

  ## Exchange durability can be either "transient" or "durable".
  # exchange_durability = "durable"

  ## Additional exchange arguments.
  # exchange_arguments = { }
  # exchange_arguments = {"hash_property" = "timestamp"}

  ## AMQP queue name.
  queue = "telegraf"

  ## AMQP queue durability can be "transient" or "durable".
  queue_durability = "durable"

  ## If true, queue will be passively declared.
  # queue_passive = false

  ## Additional arguments when consuming from the queue.
  # queue_consume_arguments = { }
  # queue_consume_arguments = {"x-stream-offset" = "first"}

  ## Additional queue arguments.
  # queue_arguments = { }
  # queue_arguments = {"x-max-length" = 100}

  ## A binding between the exchange and queue using this binding key is
  ## created. If unset, no binding is created.
  binding_key = "#"

  ## Maximum number of messages the server should give to the worker.
  # prefetch_count = 50

  ## Max undelivered messages
  ## This plugin uses tracking metrics, which ensure messages are read to
  ## outputs before acknowledging them to the original broker to ensure data
  ## is not lost. This option sets the maximum messages to read from the
  ## broker that have not been written by an output.
  ##
  ## This value needs to be picked with awareness of the agent's
  ## metric_batch_size value as well. Setting max undelivered messages too high
  ## can result in a constant stream of data batches to the output. While
  ## setting it too low may never flush the broker's messages.
  # max_undelivered_messages = 1000

  ## Timeout for establishing the connection to a broker
  # timeout = "30s"

  ## Auth method. PLAIN and EXTERNAL are supported
  ## Using EXTERNAL requires enabling the rabbitmq_auth_mechanism_ssl plugin as
  ## described here: https://www.rabbitmq.com/plugins.html
  # auth_method = "PLAIN"

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

  ## Content encoding for message payloads, can be set to
  ## "gzip", "identity" or "auto"
  ## - Use "gzip" to decode gzip
  ## - Use "identity" to apply no encoding
  ## - Use "auto" to determine the encoding using the ContentEncoding header
  # content_encoding = "identity"

  ## Maximum size of decoded message.
  ## Acceptable units are B, KiB, KB, MiB, MB...
  ## Without quotes and units, interpreted as size in bytes.
  # max_decompression_size = "500MB"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"
```
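As noted in the `max_undelivered_messages` comment above, the value should be chosen together with the agent's `metric_batch_size`. A sketch keeping the two aligned (the values are illustrative starting points, not recommendations):

```toml
[agent]
  ## Batch size used when writing to outputs
  metric_batch_size = 1000

[[inputs.amqp_consumer]]
  brokers = ["amqp://localhost:5672/influxdb"]
  data_format = "influx"
  ## Keep in-flight messages near the batch size so batches flush steadily
  max_undelivered_messages = 1000
```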
## Message acknowledgement behavior

This plugin tracks metrics to report the delivery state to the broker.

Messages are **acknowledged** (ACK) in the broker if they were successfully
parsed and delivered to all corresponding output sinks.

Messages are **not acknowledged** (NACK) if parsing of the messages fails and no
metrics were created. In this case requeueing is disabled, so messages will not
be sent out to any other queue. The message will then be discarded or sent to a
dead-letter exchange, depending on the server configuration. See the
[RabbitMQ documentation](https://www.rabbitmq.com/docs/confirms) for more details.

Messages are **rejected** (REJECT) if the messages were parsed correctly but
could not be delivered, for example due to output-service outages. Requeueing is
disabled in this case and messages will be discarded by the server. See the
[RabbitMQ documentation](https://www.rabbitmq.com/docs/confirms) for more details.
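Because requeueing is disabled, discarded messages are lost unless the queue routes them elsewhere. One way to retain them is a dead-letter exchange configured through the plugin's `queue_arguments` option; a sketch, assuming an exchange named `telegraf-dlx` has already been declared on the broker:

```toml
[[inputs.amqp_consumer]]
  brokers = ["amqp://localhost:5672/influxdb"]
  queue = "telegraf"
  ## Route rejected/NACKed messages to a pre-declared dead-letter exchange
  queue_arguments = {"x-dead-letter-exchange" = "telegraf-dlx"}
  data_format = "influx"
```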
## Metrics

The format of metrics produced by this plugin depends on the content and
data format of received messages.

## Example Output

The example output depends on the content and data format of received messages.
---
description: "Telegraf plugin for collecting metrics from Apache"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Apache
    identifier: input-apache
tags: [Apache, "input-plugins", "configuration", "server", "web"]
introduced: "v1.8.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/apache/README.md, Apache Plugin Source
---

# Apache Input Plugin

This plugin collects performance information from [Apache HTTP Servers](https://httpd.apache.org)
using the [`mod_status` module](https://httpd.apache.org/docs/current/mod/mod_status.html). Typically, this module is
configured to expose a page at the `/server-status?auto` endpoint of the server.

The [ExtendedStatus option](https://httpd.apache.org/docs/current/mod/core.html#extendedstatus) must be enabled in order to collect
all available fields. For information about configuring your server, check
the [module documentation](https://httpd.apache.org/docs/current/mod/mod_status.html).

**Introduced in:** Telegraf v1.8.0
**Tags:** server, web
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Read Apache status information (mod_status)
[[inputs.apache]]
  ## An array of URLs to gather from, must be directed at the machine
  ## readable version of the mod_status page including the auto query string.
  ## Default is "http://localhost/server-status?auto".
  urls = ["http://localhost/server-status?auto"]

  ## Credentials for basic HTTP authentication.
  # username = "myuser"
  # password = "mypassword"

  ## Maximum time to receive response.
  # response_timeout = "5s"

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false
```

## Metrics

- apache
  - BusyWorkers (float)
  - BytesPerReq (float)
  - BytesPerSec (float)
  - ConnsAsyncClosing (float)
  - ConnsAsyncKeepAlive (float)
  - ConnsAsyncWriting (float)
  - ConnsTotal (float)
  - CPUChildrenSystem (float)
  - CPUChildrenUser (float)
  - CPULoad (float)
  - CPUSystem (float)
  - CPUUser (float)
  - IdleWorkers (float)
  - Load1 (float)
  - Load5 (float)
  - Load15 (float)
  - ParentServerConfigGeneration (float)
  - ParentServerMPMGeneration (float)
  - ReqPerSec (float)
  - ServerUptimeSeconds (float)
  - TotalAccesses (float)
  - TotalkBytes (float)
  - Uptime (float)

The following fields are collected from the `Scoreboard`, and represent the
number of requests in the given state:

- apache
  - scboard_closing (float)
  - scboard_dnslookup (float)
  - scboard_finishing (float)
  - scboard_idle_cleanup (float)
  - scboard_keepalive (float)
  - scboard_logging (float)
  - scboard_open (float)
  - scboard_reading (float)
  - scboard_sending (float)
  - scboard_starting (float)
  - scboard_waiting (float)

## Tags

- All measurements have the following tags:
  - port
  - server

## Example Output

```text
apache,port=80,server=debian-stretch-apache BusyWorkers=1,BytesPerReq=0,BytesPerSec=0,CPUChildrenSystem=0,CPUChildrenUser=0,CPULoad=0.00995025,CPUSystem=0.01,CPUUser=0.01,ConnsAsyncClosing=0,ConnsAsyncKeepAlive=0,ConnsAsyncWriting=0,ConnsTotal=0,IdleWorkers=49,Load1=0.01,Load15=0,Load5=0,ParentServerConfigGeneration=3,ParentServerMPMGeneration=2,ReqPerSec=0.00497512,ServerUptimeSeconds=201,TotalAccesses=1,TotalkBytes=0,Uptime=201,scboard_closing=0,scboard_dnslookup=0,scboard_finishing=0,scboard_idle_cleanup=0,scboard_keepalive=0,scboard_logging=0,scboard_open=100,scboard_reading=0,scboard_sending=1,scboard_starting=0,scboard_waiting=49 1502489900000000000
```
---
description: "Telegraf plugin for collecting metrics from APC UPSD"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: APC UPSD
    identifier: input-apcupsd
tags: [APC UPSD, "input-plugins", "configuration", "hardware", "server"]
introduced: "v1.12.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/apcupsd/README.md, APC UPSD Plugin Source
---

# APC UPSD Input Plugin

This plugin gathers data from one or more [apcupsd daemons](https://sourceforge.net/projects/apcupsd/) over
the NIS network protocol. To query a server, the daemon must be running and
accessible.

**Introduced in:** Telegraf v1.12.0
**Tags:** hardware, server
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Monitor APC UPSes connected to apcupsd
[[inputs.apcupsd]]
  ## A list of running apcupsd servers to connect to.
  ## If not provided, defaults to tcp://127.0.0.1:3551
  servers = ["tcp://127.0.0.1:3551"]

  ## Timeout for dialing server.
  timeout = "5s"
```

## Metrics

- apcupsd
  - tags:
    - serial
    - ups_name
    - status (string representing the set status_flags)
    - model
  - fields:
    - status_flags ([status-bits](http://www.apcupsd.org/manual/manual.html#status-bits))
    - input_voltage
    - load_percent
    - battery_charge_percent
    - time_left_ns
    - output_voltage
    - internal_temp
    - battery_voltage
    - input_frequency
    - time_on_battery_ns
    - cumulative_time_on_battery_ns
    - nominal_input_voltage
    - nominal_battery_voltage
    - nominal_power
    - firmware
    - battery_date
    - last_transfer
    - number_transfers

## Example Output

```text
apcupsd,serial=AS1231515,status=ONLINE,ups_name=name1 time_on_battery=0,load_percent=9.7,time_left_minutes=98,output_voltage=230.4,internal_temp=32.4,battery_voltage=27.4,input_frequency=50.2,input_voltage=230.4,battery_charge_percent=100,status_flags=8i 1490035922000000000
```
---
description: "Telegraf plugin for collecting metrics from Azure Monitor"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Azure Monitor
    identifier: input-azure_monitor
tags: [Azure Monitor, "input-plugins", "configuration", "cloud"]
introduced: "v1.25.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/azure_monitor/README.md, Azure Monitor Plugin Source
---

# Azure Monitor Input Plugin

This plugin gathers metrics of Azure resources using the
[Azure Monitor](https://docs.microsoft.com/en-us/azure/azure-monitor) API. The plugin requires a `client_id`,
`client_secret`, and `tenant_id` for authentication via access token. The
`subscription_id` is required for accessing Azure resources.

Check the [supported metrics page](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/metrics-supported) for available resource
types and their metrics.

> [!IMPORTANT]
> The Azure API has a read limit of 12,000 requests per hour. Please make sure
> the total number of metrics you query in the configured interval does not
> exceed this limit.

**Introduced in:** Telegraf v1.25.0
**Tags:** cloud
**OS support:** all

## Property Locations

The `subscription_id` can be found under `Overview > Essentials` in the Azure
portal for your application or service.

The `client_id` and `client_secret` can be obtained by registering an
application under Azure Active Directory.

The `tenant_id` can be found under `Azure Active Directory > Properties`.

The resource target `resource_id` can be found under
`Overview > Essentials > JSON View` in the Azure portal for your
application or service.

The `cloud_option` setting selects the API endpoints to use when collecting
metrics from an Azure sovereign cloud, for example `AzureChina`,
`AzureGovernment`, or `AzurePublic`. The default value is `AzurePublic`.

## Usage

Use `resource_targets` to collect metrics from specific resources using the
resource ID.

Use `resource_group_targets` to collect metrics from resources under a
resource group, filtered by resource type.

Use `subscription_targets` to collect metrics from resources under the
subscription, filtered by resource type.
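For example, a resource target for a single virtual machine might look like the following sketch (the resource group, VM name, and metric name are illustrative; note the resource ID starts at `resourceGroups/`):

```toml
[[inputs.azure_monitor.resource_target]]
  ## Resource ID without the leading "/subscriptions/<SUBSCRIPTION_ID>" segment
  resource_id = "resourceGroups/example-rg/providers/Microsoft.Compute/virtualMachines/example-vm"
  metrics = ["Percentage CPU"]
  aggregations = ["Average", "Maximum"]
```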
## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Gather Azure resources metrics from Azure Monitor API
[[inputs.azure_monitor]]
  # can be found under Overview->Essentials in the Azure portal for your application/service
  subscription_id = "<<SUBSCRIPTION_ID>>"
  # can be obtained by registering an application under Azure Active Directory
  client_id = "<<CLIENT_ID>>"
  # can be obtained by registering an application under Azure Active Directory.
  # If not specified, the Default Azure Credentials chain will be attempted:
  # - Environment credentials (AZURE_*)
  # - Workload Identity in Kubernetes cluster
  # - Managed Identity
  # - Azure CLI auth
  # - Developer Azure CLI auth
  client_secret = "<<CLIENT_SECRET>>"
  # can be found under Azure Active Directory->Properties
  tenant_id = "<<TENANT_ID>>"
  # Define the optional Azure cloud option e.g. AzureChina, AzureGovernment or AzurePublic. The default is AzurePublic.
  # cloud_option = "AzurePublic"

  # resource target #1 to collect metrics from
  [[inputs.azure_monitor.resource_target]]
    # can be found under Overview->Essentials->JSON View in the Azure portal for your application/service
    # must start with 'resourceGroups/...' ('/subscriptions/xxxxxxxx-xxxx-xxxx-xxx-xxxxxxxxxxxx'
    # must be removed from the beginning of Resource ID property value)
    resource_id = "<<RESOURCE_ID>>"
    # the metric names to collect
    # leave the array empty to use all metrics available to this resource
    metrics = [ "<<METRIC>>", "<<METRIC>>" ]
    # metrics aggregation type value to collect
    # can be 'Total', 'Count', 'Average', 'Minimum', 'Maximum'
    # leave the array empty to collect all aggregation types values for each metric
    aggregations = [ "<<AGGREGATION>>", "<<AGGREGATION>>" ]

  # resource target #2 to collect metrics from
  [[inputs.azure_monitor.resource_target]]
    resource_id = "<<RESOURCE_ID>>"
    metrics = [ "<<METRIC>>", "<<METRIC>>" ]
    aggregations = [ "<<AGGREGATION>>", "<<AGGREGATION>>" ]

  # resource group target #1 to collect metrics from resources under it with resource type
  [[inputs.azure_monitor.resource_group_target]]
    # the resource group name
    resource_group = "<<RESOURCE_GROUP_NAME>>"

    # defines the resources to collect metrics from
    [[inputs.azure_monitor.resource_group_target.resource]]
      # the resource type
      resource_type = "<<RESOURCE_TYPE>>"
      metrics = [ "<<METRIC>>", "<<METRIC>>" ]
      aggregations = [ "<<AGGREGATION>>", "<<AGGREGATION>>" ]

    # defines the resources to collect metrics from
    [[inputs.azure_monitor.resource_group_target.resource]]
      resource_type = "<<RESOURCE_TYPE>>"
      metrics = [ "<<METRIC>>", "<<METRIC>>" ]
      aggregations = [ "<<AGGREGATION>>", "<<AGGREGATION>>" ]

  # resource group target #2 to collect metrics from resources under it with resource type
  [[inputs.azure_monitor.resource_group_target]]
    resource_group = "<<RESOURCE_GROUP_NAME>>"

    [[inputs.azure_monitor.resource_group_target.resource]]
      resource_type = "<<RESOURCE_TYPE>>"
      metrics = [ "<<METRIC>>", "<<METRIC>>" ]
      aggregations = [ "<<AGGREGATION>>", "<<AGGREGATION>>" ]

  # subscription target #1 to collect metrics from resources under it with resource type
  [[inputs.azure_monitor.subscription_target]]
    resource_type = "<<RESOURCE_TYPE>>"
    metrics = [ "<<METRIC>>", "<<METRIC>>" ]
    aggregations = [ "<<AGGREGATION>>", "<<AGGREGATION>>" ]

  # subscription target #2 to collect metrics from resources under it with resource type
  [[inputs.azure_monitor.subscription_target]]
    resource_type = "<<RESOURCE_TYPE>>"
    metrics = [ "<<METRIC>>", "<<METRIC>>" ]
    aggregations = [ "<<AGGREGATION>>", "<<AGGREGATION>>" ]
```

## Metrics

- azure_monitor_<<RESOURCE_NAMESPACE>>_<<METRIC_NAME>>
  - fields:
    - total (float64)
    - count (float64)
    - average (float64)
    - minimum (float64)
    - maximum (float64)
  - tags:
    - namespace
    - resource_group
    - resource_name
    - subscription_id
    - resource_region
    - unit
## Example Output
|
||||
|
||||
```text
|
||||
azure_monitor_microsoft_storage_storageaccounts_used_capacity,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=Bytes average=9065573,maximum=9065573,minimum=9065573,timeStamp="2021-11-08T09:52:00Z",total=9065573 1636368744000000000
|
||||
azure_monitor_microsoft_storage_storageaccounts_transactions,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=Count average=1,count=6,maximum=1,minimum=0,timeStamp="2021-11-08T09:52:00Z",total=6 1636368744000000000
|
||||
azure_monitor_microsoft_storage_storageaccounts_ingress,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=Bytes average=5822.333333333333,count=6,maximum=5833,minimum=0,timeStamp="2021-11-08T09:52:00Z",total=34934 1636368744000000000
|
||||
azure_monitor_microsoft_storage_storageaccounts_egress,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=Bytes average=840.1666666666666,count=6,maximum=841,minimum=0,timeStamp="2021-11-08T09:52:00Z",total=5041 1636368744000000000
|
||||
azure_monitor_microsoft_storage_storageaccounts_success_server_latency,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=MilliSeconds average=12.833333333333334,count=6,maximum=30,minimum=8,timeStamp="2021-11-08T09:52:00Z",total=77 1636368744000000000
|
||||
azure_monitor_microsoft_storage_storageaccounts_success_e2e_latency,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=MilliSeconds average=12.833333333333334,count=6,maximum=30,minimum=8,timeStamp="2021-11-08T09:52:00Z",total=77 1636368744000000000
|
||||
azure_monitor_microsoft_storage_storageaccounts_availability,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=Percent average=100,count=6,maximum=100,minimum=100,timeStamp="2021-11-08T09:52:00Z",total=600 1636368744000000000
|
||||
azure_monitor_microsoft_storage_storageaccounts_used_capacity,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=Bytes average=9065573,maximum=9065573,minimum=9065573,timeStamp="2021-11-08T09:52:00Z",total=9065573 1636368745000000000
azure_monitor_microsoft_storage_storageaccounts_transactions,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=Count average=1,count=6,maximum=1,minimum=0,timeStamp="2021-11-08T09:52:00Z",total=6 1636368745000000000
azure_monitor_microsoft_storage_storageaccounts_ingress,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=Bytes average=5822.333333333333,count=6,maximum=5833,minimum=0,timeStamp="2021-11-08T09:52:00Z",total=34934 1636368745000000000
azure_monitor_microsoft_storage_storageaccounts_egress,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=Bytes average=840.1666666666666,count=6,maximum=841,minimum=0,timeStamp="2021-11-08T09:52:00Z",total=5041 1636368745000000000
azure_monitor_microsoft_storage_storageaccounts_success_server_latency,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=MilliSeconds average=12.833333333333334,count=6,maximum=30,minimum=8,timeStamp="2021-11-08T09:52:00Z",total=77 1636368745000000000
azure_monitor_microsoft_storage_storageaccounts_success_e2e_latency,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=MilliSeconds average=12.833333333333334,count=6,maximum=30,minimum=8,timeStamp="2021-11-08T09:52:00Z",total=77 1636368745000000000
azure_monitor_microsoft_storage_storageaccounts_availability,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=Percent average=100,count=6,maximum=100,minimum=100,timeStamp="2021-11-08T09:52:00Z",total=600 1636368745000000000
```

@ -0,0 +1,66 @@
---
description: "Telegraf plugin for collecting metrics from Azure Queue Storage"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Azure Queue Storage
    identifier: input-azure_storage_queue
tags: [Azure Queue Storage, "input-plugins", "configuration", "cloud"]
introduced: "v1.13.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/azure_storage_queue/README.md, Azure Queue Storage Plugin Source
---

# Azure Queue Storage Input Plugin

This plugin gathers queue sizes from the
[Azure Queue Storage](https://learn.microsoft.com/en-us/azure/storage/queues)
service, which stores large numbers of messages.

**Introduced in:** Telegraf v1.13.0
**Tags:** cloud
**OS support:** all

[azure_queues]: https://learn.microsoft.com/en-us/azure/storage/queues

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins

## Configuration

```toml @sample.conf
# Gather Azure Storage Queue metrics
[[inputs.azure_storage_queue]]
  ## Azure Storage Account name and shared access key (required)
  account_name = "mystorageaccount"
  account_key = "storageaccountaccesskey"

  ## Disable peeking age of oldest message (faster)
  # peek_oldest_message_age = true
```

## Metrics

- azure_storage_queues
  - tags:
    - queue
    - account
  - fields:
    - size (integer, count)
    - oldest_message_age_ns (integer, nanoseconds) Age of the message at the
      head of the queue. Requires `peek_oldest_message_age` to be set
      to `true`.

## Example Output

```text
azure_storage_queues,queue=myqueue,account=mystorageaccount oldest_message_age=799714900i,size=7i 1565970503000000000
azure_storage_queues,queue=myemptyqueue,account=mystorageaccount size=0i 1565970502000000000
```

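As an illustrative sketch of how these metrics might be used (assuming they are
written to an InfluxDB 1.x database; the retention and time ranges are
placeholders), a query like the following charts queue depth per queue:

```sql
SELECT max("size") FROM "azure_storage_queues"
WHERE time > now() - 1h
GROUP BY time(5m), "queue"
```
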
@ -0,0 +1,80 @@
---
description: "Telegraf plugin for collecting metrics from Bcache"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Bcache
    identifier: input-bcache
tags: [Bcache, "input-plugins", "configuration", "system"]
introduced: "v0.2.0"
os_support: "linux"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/bcache/README.md, Bcache Plugin Source
---

# Bcache Input Plugin

This plugin gathers statistics for the [block layer cache](https://docs.kernel.org/admin-guide/bcache.html)
from the `stats_total` directory and `dirty_data` file.

**Introduced in:** Telegraf v0.2.0
**Tags:** system
**OS support:** linux

[bcache]: https://docs.kernel.org/admin-guide/bcache.html

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins

## Configuration

```toml @sample.conf
# Read metrics of bcache from stats_total and dirty_data
# This plugin ONLY supports Linux
[[inputs.bcache]]
  ## Bcache sets path
  ## If not specified, the default is:
  bcachePath = "/sys/fs/bcache"

  ## By default, Telegraf gathers stats for all bcache devices.
  ## Setting devices will restrict the stats to the specified
  ## bcache devices.
  bcacheDevs = ["bcache0"]
```

## Metrics

Tags:

- `backing_dev`: device backed by the cache
- `bcache_dev`: device used for caching

Fields:

- `dirty_data`: Amount of dirty data for this backing device in the cache.
  Continuously updated unlike the cache set's version, but may be slightly off.
- `bypassed`: Amount of IO (both reads and writes) that has bypassed the cache
- `cache_bypass_hits`: Hits for IO intended to skip the cache
- `cache_bypass_misses`: Misses for IO intended to skip the cache
- `cache_hits`: Hits per individual IO as bcache sees them; a
  partial hit is counted as a miss.
- `cache_misses`: Misses per individual IO as bcache sees them; a
  partial hit is counted as a miss.
- `cache_hit_ratio`: Hit-to-miss ratio
- `cache_miss_collisions`: Instances where data was going to be inserted into
  the cache from a miss, but raced with a write and the data was already present
  (usually zero, since the synchronization for cache misses was rewritten)
- `cache_readaheads`: Count of times readahead occurred

## Example Output

```text
bcache,backing_dev="md10",bcache_dev="bcache0" dirty_data=11639194i,bypassed=5167704440832i,cache_bypass_hits=146270986i,cache_bypass_misses=0i,cache_hit_ratio=90i,cache_hits=511941651i,cache_miss_collisions=157678i,cache_misses=50647396i,cache_readaheads=0i
```

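Because `cache_hits` and `cache_misses` are monotonically increasing counters,
a rate query is usually more useful than the raw values. The following is an
illustrative sketch, assuming the metrics are stored in InfluxDB 1.x:

```sql
SELECT non_negative_derivative(max("cache_hits"), 1m) AS hits_per_min
FROM "bcache"
WHERE time > now() - 6h
GROUP BY time(5m), "bcache_dev"
```
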
@ -0,0 +1,136 @@
---
description: "Telegraf plugin for collecting metrics from Beanstalkd"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Beanstalkd
    identifier: input-beanstalkd
tags: [Beanstalkd, "input-plugins", "configuration", "messaging"]
introduced: "v1.8.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/beanstalkd/README.md, Beanstalkd Plugin Source
---

# Beanstalkd Input Plugin

This plugin collects server statistics as well as tube statistics from a
[Beanstalkd work queue](https://beanstalkd.github.io/), as reported by the `stats` and `stats-tube`
server commands.

**Introduced in:** Telegraf v1.8.0
**Tags:** messaging
**OS support:** all

[beanstalkd]: https://beanstalkd.github.io/

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins

## Configuration

```toml @sample.conf
# Collects Beanstalkd server and tubes stats
[[inputs.beanstalkd]]
  ## Server to collect data from
  server = "localhost:11300"

  ## List of tubes to gather stats about.
  ## If no tubes are specified, data is gathered for each tube on the
  ## server as reported by the list-tubes command.
  tubes = ["notifications"]
```

## Metrics

See the [Beanstalk protocol doc](https://github.com/beanstalkd/beanstalkd/blob/master/doc/protocol.txt) for a detailed explanation of
the `stats` and `stats-tube` server command output.

[protocol]: https://github.com/beanstalkd/beanstalkd/blob/master/doc/protocol.txt

`beanstalkd_overview` – statistical information about the system as a whole

- fields
  - binlog_current_index
  - binlog_max_size
  - binlog_oldest_index
  - binlog_records_migrated
  - binlog_records_written
  - cmd_bury
  - cmd_delete
  - cmd_ignore
  - cmd_kick
  - cmd_list_tube_used
  - cmd_list_tubes
  - cmd_list_tubes_watched
  - cmd_pause_tube
  - cmd_peek
  - cmd_peek_buried
  - cmd_peek_delayed
  - cmd_peek_ready
  - cmd_put
  - cmd_release
  - cmd_reserve
  - cmd_reserve_with_timeout
  - cmd_stats
  - cmd_stats_job
  - cmd_stats_tube
  - cmd_touch
  - cmd_use
  - cmd_watch
  - current_connections
  - current_jobs_buried
  - current_jobs_delayed
  - current_jobs_ready
  - current_jobs_reserved
  - current_jobs_urgent
  - current_producers
  - current_tubes
  - current_waiting
  - current_workers
  - job_timeouts
  - max_job_size
  - pid
  - rusage_stime
  - rusage_utime
  - total_connections
  - total_jobs
  - uptime
- tags
  - hostname
  - id
  - server (address taken from config)
  - version

`beanstalkd_tube` – statistical information about the specified tube

- fields
  - cmd_delete
  - cmd_pause_tube
  - current_jobs_buried
  - current_jobs_delayed
  - current_jobs_ready
  - current_jobs_reserved
  - current_jobs_urgent
  - current_using
  - current_waiting
  - current_watching
  - pause
  - pause_time_left
  - total_jobs
- tags
  - name
  - server (address taken from config)

## Example Output

```text
beanstalkd_overview,host=server.local,hostname=a2ab22ed12e0,id=232485800aa11b24,server=localhost:11300,version=1.10 cmd_stats_tube=29482i,current_jobs_delayed=0i,current_jobs_urgent=6i,cmd_kick=0i,cmd_stats=7378i,cmd_stats_job=0i,current_waiting=0i,max_job_size=65535i,pid=6i,cmd_bury=0i,cmd_reserve_with_timeout=0i,cmd_touch=0i,current_connections=1i,current_jobs_ready=6i,current_producers=0i,cmd_delete=0i,cmd_list_tubes=7369i,cmd_peek_ready=0i,cmd_put=6i,cmd_use=3i,cmd_watch=0i,current_jobs_reserved=0i,rusage_stime=6.07,cmd_list_tubes_watched=0i,cmd_pause_tube=0i,total_jobs=6i,binlog_records_migrated=0i,cmd_list_tube_used=0i,cmd_peek_delayed=0i,cmd_release=0i,current_jobs_buried=0i,job_timeouts=0i,binlog_current_index=0i,binlog_max_size=10485760i,total_connections=7378i,cmd_peek_buried=0i,cmd_reserve=0i,current_tubes=4i,binlog_records_written=0i,cmd_peek=0i,rusage_utime=1.13,uptime=7099i,binlog_oldest_index=0i,current_workers=0i,cmd_ignore=0i 1528801650000000000
beanstalkd_tube,host=server.local,name=notifications,server=localhost:11300 pause_time_left=0i,current_jobs_buried=0i,current_jobs_delayed=0i,current_jobs_reserved=0i,current_using=0i,current_waiting=0i,pause=0i,total_jobs=3i,cmd_delete=0i,cmd_pause_tube=0i,current_jobs_ready=3i,current_jobs_urgent=3i,current_watching=0i 1528801650000000000
```

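To watch a backlog building up in a tube, a query such as the following plots
ready jobs per tube. This is an illustrative sketch, assuming the metrics land
in an InfluxDB 1.x database:

```sql
SELECT mean("current_jobs_ready") FROM "beanstalkd_tube"
WHERE time > now() - 1h
GROUP BY time(1m), "name"
```
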
@ -0,0 +1,175 @@
---
description: "Telegraf plugin for collecting metrics from Beat"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Beat
    identifier: input-beat
tags: [Beat, "input-plugins", "configuration", "applications"]
introduced: "v1.18.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/beat/README.md, Beat Plugin Source
---

# Beat Input Plugin

This plugin collects metrics from [Beats](https://www.elastic.co/beats) instances. It is known
to work with Filebeat and Kafkabeat.

**Introduced in:** Telegraf v1.18.0
**Tags:** applications
**OS support:** all

[beats]: https://www.elastic.co/beats

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins

## Configuration

```toml @sample.conf
# Read metrics exposed by Beat
[[inputs.beat]]
  ## A URL from which to read Beat-formatted JSON
  ## Default is "http://127.0.0.1:5066".
  url = "http://127.0.0.1:5066"

  ## Enable collection of the listed stats
  ## An empty list means collect all. Available options are currently
  ## "beat", "libbeat", "system" and "filebeat".
  # include = ["beat", "libbeat", "filebeat"]

  ## HTTP method
  # method = "GET"

  ## Optional HTTP headers
  # headers = {"X-Special-Header" = "Special-Value"}

  ## Override HTTP "Host" header
  # host_header = "logstash.example.com"

  ## Timeout for HTTP requests
  # timeout = "5s"

  ## Optional HTTP Basic Auth credentials
  # username = "username"
  # password = "pa$$word"

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false
```

## Metrics

- **beat**
  - Fields:
    - cpu_system_ticks
    - cpu_system_time_ms
    - cpu_total_ticks
    - cpu_total_time_ms
    - cpu_total_value
    - cpu_user_ticks
    - cpu_user_time_ms
    - info_uptime_ms
    - memstats_gc_next
    - memstats_memory_alloc
    - memstats_memory_total
    - memstats_rss
  - Tags:
    - beat_beat
    - beat_host
    - beat_id
    - beat_name
    - beat_version

- **beat_filebeat**
  - Fields:
    - events_active
    - events_added
    - events_done
    - harvester_closed
    - harvester_open_files
    - harvester_running
    - harvester_skipped
    - harvester_started
    - input_log_files_renamed
    - input_log_files_truncated
  - Tags:
    - beat_beat
    - beat_host
    - beat_id
    - beat_name
    - beat_version

- **beat_libbeat**
  - Fields:
    - config_module_running
    - config_module_starts
    - config_module_stops
    - config_reloads
    - output_events_acked
    - output_events_active
    - output_events_batches
    - output_events_dropped
    - output_events_duplicates
    - output_events_failed
    - output_events_total
    - output_type
    - output_read_bytes
    - output_read_errors
    - output_write_bytes
    - output_write_errors
    - outputs_kafka_bytes_read
    - outputs_kafka_bytes_write
    - pipeline_clients
    - pipeline_events_active
    - pipeline_events_dropped
    - pipeline_events_failed
    - pipeline_events_filtered
    - pipeline_events_published
    - pipeline_events_retry
    - pipeline_events_total
    - pipeline_queue_acked
  - Tags:
    - beat_beat
    - beat_host
    - beat_id
    - beat_name
    - beat_version

- **beat_system**
  - Fields:
    - cpu_cores
    - load_1
    - load_15
    - load_5
    - load_norm_1
    - load_norm_15
    - load_norm_5
  - Tags:
    - beat_beat
    - beat_host
    - beat_id
    - beat_name
    - beat_version

## Example Output

```text
beat,beat_beat=filebeat,beat_host=node-6,beat_id=9c1c8697-acb4-4df0-987d-28197814f788,beat_name=node-6-test,beat_version=6.4.2,host=node-6 cpu_system_ticks=656750,cpu_system_time_ms=656750,cpu_total_ticks=5461190,cpu_total_time_ms=5461198,cpu_total_value=5461190,cpu_user_ticks=4804440,cpu_user_time_ms=4804448,info_uptime_ms=342634196,memstats_gc_next=20199584,memstats_memory_alloc=12547424,memstats_memory_total=486296424792,memstats_rss=72552448 1540316047000000000
beat_libbeat,beat_beat=filebeat,beat_host=node-6,beat_id=9c1c8697-acb4-4df0-987d-28197814f788,beat_name=node-6-test,beat_version=6.4.2,host=node-6 config_module_running=0,config_module_starts=0,config_module_stops=0,config_reloads=0,output_events_acked=192404,output_events_active=0,output_events_batches=1607,output_events_dropped=0,output_events_duplicates=0,output_events_failed=0,output_events_total=192404,output_read_bytes=0,output_read_errors=0,output_write_bytes=0,output_write_errors=0,outputs_kafka_bytes_read=1118528,outputs_kafka_bytes_write=48002014,pipeline_clients=1,pipeline_events_active=0,pipeline_events_dropped=0,pipeline_events_failed=0,pipeline_events_filtered=11496,pipeline_events_published=192404,pipeline_events_retry=14,pipeline_events_total=203900,pipeline_queue_acked=192404 1540316047000000000
beat_system,beat_beat=filebeat,beat_host=node-6,beat_id=9c1c8697-acb4-4df0-987d-28197814f788,beat_name=node-6-test,beat_version=6.4.2,host=node-6 cpu_cores=32,load_1=46.08,load_15=49.82,load_5=47.88,load_norm_1=1.44,load_norm_15=1.5569,load_norm_5=1.4963 1540316047000000000
beat_filebeat,beat_beat=filebeat,beat_host=node-6,beat_id=9c1c8697-acb4-4df0-987d-28197814f788,beat_name=node-6-test,beat_version=6.4.2,host=node-6 events_active=0,events_added=3223,events_done=3223,harvester_closed=0,harvester_open_files=0,harvester_running=0,harvester_skipped=0,harvester_started=0,input_log_files_renamed=0,input_log_files_truncated=0 1540320286000000000
```

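As an illustrative sketch (assuming the metrics are stored in InfluxDB 1.x),
dropped output events can be tracked per Beat instance with a rate query:

```sql
SELECT non_negative_derivative(max("output_events_dropped"), 5m)
FROM "beat_libbeat"
WHERE time > now() - 24h
GROUP BY time(5m), "beat_name"
```
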
@ -0,0 +1,172 @@
---
description: "Telegraf plugin for collecting metrics from BIND 9 Nameserver"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: BIND 9 Nameserver
    identifier: input-bind
tags: [BIND 9 Nameserver, "input-plugins", "configuration", "server"]
introduced: "v1.11.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/bind/README.md, BIND 9 Nameserver Plugin Source
---

# BIND 9 Nameserver Input Plugin

This plugin collects metrics from [BIND 9 nameservers](https://www.isc.org/bind) using the XML or
JSON statistics endpoint.

For _XML_, version 2 statistics (BIND 9.6 to 9.9) and version 3 statistics
(BIND 9.9+) are supported. Version 3 statistics are the default and only XML
format in BIND 9.10+.

> [!NOTE]
> For BIND 9.9 to support version 3 statistics, it must be built with the
> `--enable-newstats` compile flag, and the statistics must be specifically
> requested via the correct URL.

For _JSON_, version 1 statistics (BIND 9.10+) are supported. As of this
writing, some distros still do not enable support for JSON statistics in their
BIND packages.

**Introduced in:** Telegraf v1.11.0
**Tags:** server
**OS support:** all

[bind]: https://www.isc.org/bind

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins

## Configuration

```toml @sample.conf
# Read BIND nameserver XML statistics
[[inputs.bind]]
  ## An array of BIND XML statistics URIs to gather stats.
  ## Default is "http://localhost:8053/xml/v3".
  # urls = ["http://localhost:8053/xml/v3"]
  # gather_memory_contexts = false
  # gather_views = false

  ## Report XML v3 counters as integers instead of unsigned for backward
  ## compatibility. Set this to false as soon as possible!
  ## Values are clipped if they exceed the integer range.
  # report_counters_as_int = true

  ## Timeout for HTTP requests made to the BIND nameserver
  # timeout = "4s"
```

- **urls** []string: List of BIND statistics channel URLs to collect from.
  Do not include a trailing slash in the URL.
  Default is `http://localhost:8053/xml/v3`.
- **gather_memory_contexts** bool: Report per-context memory statistics.
- **gather_views** bool: Report per-view query statistics.
- **timeout** string: Timeout for HTTP requests made to the BIND nameserver
  (example: `"4s"`).

The following table summarizes the URL formats to use, depending on your BIND
version and configured statistics channel.

| BIND Version | Statistics Format | Example URL                     |
| ------------ | ----------------- | ------------------------------- |
| 9.6 - 9.8    | XML v2            | `http://localhost:8053`         |
| 9.9          | XML v2            | `http://localhost:8053/xml/v2`  |
| 9.9+         | XML v3            | `http://localhost:8053/xml/v3`  |
| 9.10+        | JSON v1           | `http://localhost:8053/json/v1` |

### Configuration of BIND Daemon

Add the following to your named.conf if running Telegraf on the same host
as the BIND daemon:

```
statistics-channels {
    inet 127.0.0.1 port 8053;
};
```

Alternatively, specify a wildcard address (e.g., 0.0.0.0) or the specific
IP address of an interface to configure the BIND daemon to listen on that
address. Note that you should secure the statistics channel with an ACL if
it is publicly reachable. Consult the BIND Administrator Reference Manual
for more information.
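For example, a listener reachable beyond localhost can be restricted with an
ACL. The following `named.conf` fragment is an illustrative sketch (the
`telegraf-hosts` ACL name and the `10.0.0.0/24` network are placeholders):

```
acl telegraf-hosts {
    127.0.0.1;
    10.0.0.0/24;
};

statistics-channels {
    inet 0.0.0.0 port 8053 allow { telegraf-hosts; };
};
```
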

## Metrics

- bind_counter
  - name=value (multiple)
- bind_memory
  - total_use
  - in_use
  - block_size
  - context_size
  - lost
- bind_memory_context
  - total
  - in_use

## Tags

- All measurements
  - url
  - source
  - port
- bind_counter
  - type
  - view (optional)
- bind_memory_context
  - id
  - name

## Sample Queries

These are some useful queries (to generate dashboards or other charts) to run
against data from this plugin:

```sql
SELECT non_negative_derivative(mean(/^A$|^PTR$/), 5m) FROM bind_counter \
WHERE "url" = 'localhost:8053' AND "type" = 'qtype' AND time > now() - 1h \
GROUP BY time(5m), "type"
```

```text
name: bind_counter
tags: type=qtype
time                non_negative_derivative_A non_negative_derivative_PTR
----                ------------------------- ---------------------------
1553862000000000000 254.99444444430992        1388.311111111194
1553862300000000000 354                       2135.716666666791
1553862600000000000 316.8666666666977         2130.133333333768
1553862900000000000 309.05000000004657        2126.75
1553863200000000000 315.64999999990687        2128.483333332464
1553863500000000000 308.9166666667443         2132.350000000559
1553863800000000000 302.64999999990687        2131.1833333335817
1553864100000000000 310.85000000009313        2132.449999999255
1553864400000000000 314.3666666666977         2136.216666666791
1553864700000000000 303.2333333331626         2133.8166666673496
1553865000000000000 304.93333333334886        2127.333333333023
1553865300000000000 317.93333333334886        2130.3166666664183
1553865600000000000 280.6666666667443         1807.9071428570896
```

## Example Output

Here is example output of this plugin:

```text
bind_memory,host=LAP,port=8053,source=localhost,url=localhost:8053 block_size=12058624i,context_size=4575056i,in_use=4113717i,lost=0i,total_use=16663252i 1554276619000000000
bind_counter,host=LAP,port=8053,source=localhost,type=opcode,url=localhost:8053 IQUERY=0i,NOTIFY=0i,QUERY=9i,STATUS=0i,UPDATE=0i 1554276619000000000
bind_counter,host=LAP,port=8053,source=localhost,type=rcode,url=localhost:8053 17=0i,18=0i,19=0i,20=0i,21=0i,22=0i,BADCOOKIE=0i,BADVERS=0i,FORMERR=0i,NOERROR=7i,NOTAUTH=0i,NOTIMP=0i,NOTZONE=0i,NXDOMAIN=0i,NXRRSET=0i,REFUSED=0i,RESERVED11=0i,RESERVED12=0i,RESERVED13=0i,RESERVED14=0i,RESERVED15=0i,SERVFAIL=2i,YXDOMAIN=0i,YXRRSET=0i 1554276619000000000
bind_counter,host=LAP,port=8053,source=localhost,type=qtype,url=localhost:8053 A=1i,ANY=1i,NS=1i,PTR=5i,SOA=1i 1554276619000000000
bind_counter,host=LAP,port=8053,source=localhost,type=nsstat,url=localhost:8053 AuthQryRej=0i,CookieBadSize=0i,CookieBadTime=0i,CookieIn=9i,CookieMatch=0i,CookieNew=9i,CookieNoMatch=0i,DNS64=0i,ECSOpt=0i,ExpireOpt=0i,KeyTagOpt=0i,NSIDOpt=0i,OtherOpt=0i,QryAuthAns=7i,QryBADCOOKIE=0i,QryDropped=0i,QryDuplicate=0i,QryFORMERR=0i,QryFailure=0i,QryNXDOMAIN=0i,QryNXRedir=0i,QryNXRedirRLookup=0i,QryNoauthAns=0i,QryNxrrset=1i,QryRecursion=2i,QryReferral=0i,QrySERVFAIL=2i,QrySuccess=6i,QryTCP=1i,QryUDP=8i,RPZRewrites=0i,RateDropped=0i,RateSlipped=0i,RecQryRej=0i,RecursClients=0i,ReqBadEDNSVer=0i,ReqBadSIG=0i,ReqEdns0=9i,ReqSIG0=0i,ReqTCP=1i,ReqTSIG=0i,Requestv4=9i,Requestv6=0i,RespEDNS0=9i,RespSIG0=0i,RespTSIG=0i,Response=9i,TruncatedResp=0i,UpdateBadPrereq=0i,UpdateDone=0i,UpdateFail=0i,UpdateFwdFail=0i,UpdateRej=0i,UpdateReqFwd=0i,UpdateRespFwd=0i,XfrRej=0i,XfrReqDone=0i 1554276619000000000
bind_counter,host=LAP,port=8053,source=localhost,type=zonestat,url=localhost:8053 AXFRReqv4=0i,AXFRReqv6=0i,IXFRReqv4=0i,IXFRReqv6=0i,NotifyInv4=0i,NotifyInv6=0i,NotifyOutv4=0i,NotifyOutv6=0i,NotifyRej=0i,SOAOutv4=0i,SOAOutv6=0i,XfrFail=0i,XfrSuccess=0i 1554276619000000000
bind_counter,host=LAP,port=8053,source=localhost,type=sockstat,url=localhost:8053 FDWatchClose=0i,FDwatchConn=0i,FDwatchConnFail=0i,FDwatchRecvErr=0i,FDwatchSendErr=0i,FdwatchBindFail=0i,RawActive=1i,RawClose=0i,RawOpen=1i,RawOpenFail=0i,RawRecvErr=0i,TCP4Accept=6i,TCP4AcceptFail=0i,TCP4Active=9i,TCP4BindFail=0i,TCP4Close=5i,TCP4Conn=0i,TCP4ConnFail=0i,TCP4Open=8i,TCP4OpenFail=0i,TCP4RecvErr=0i,TCP4SendErr=0i,TCP6Accept=0i,TCP6AcceptFail=0i,TCP6Active=2i,TCP6BindFail=0i,TCP6Close=0i,TCP6Conn=0i,TCP6ConnFail=0i,TCP6Open=2i,TCP6OpenFail=0i,TCP6RecvErr=0i,TCP6SendErr=0i,UDP4Active=18i,UDP4BindFail=14i,UDP4Close=14i,UDP4Conn=0i,UDP4ConnFail=0i,UDP4Open=32i,UDP4OpenFail=0i,UDP4RecvErr=0i,UDP4SendErr=0i,UDP6Active=3i,UDP6BindFail=0i,UDP6Close=6i,UDP6Conn=0i,UDP6ConnFail=6i,UDP6Open=9i,UDP6OpenFail=0i,UDP6RecvErr=0i,UDP6SendErr=0i,UnixAccept=0i,UnixAcceptFail=0i,UnixActive=0i,UnixBindFail=0i,UnixClose=0i,UnixConn=0i,UnixConnFail=0i,UnixOpen=0i,UnixOpenFail=0i,UnixRecvErr=0i,UnixSendErr=0i 1554276619000000000
```

@ -0,0 +1,119 @@
|
|||
---
|
||||
description: "Telegraf plugin for collecting metrics from Bond"
|
||||
menu:
|
||||
telegraf_v1_ref:
|
||||
parent: input_plugins_reference
|
||||
name: Bond
|
||||
identifier: input-bond
|
||||
tags: [Bond, "input-plugins", "configuration", "system"]
|
||||
introduced: "v1.5.0"
|
||||
os_support: "freebsd, linux, macos, solaris, windows"
|
||||
related:
|
||||
- /telegraf/v1/configure_plugins/
|
||||
- https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/bond/README.md, Bond Plugin Source
|
||||
---
|
||||
|
||||
# Bond Input Plugin
|
||||
|
||||
This plugin collects metrics for both the network bond interface as well as its
|
||||
slave interfaces using `/proc/net/bonding/*` files.
|
||||
|
||||
**Introduced in:** Telegraf v1.5.0
|
||||
**Tags:** system
|
||||
**OS support:** all
|
||||
|
||||
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
|
||||
|
||||
In addition to the plugin-specific configuration settings, plugins support
|
||||
additional global and plugin configuration settings. These settings are used to
|
||||
modify metrics, tags, and field or create aliases and configure ordering, etc.
|
||||
See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
|
||||
|
||||
[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
|
||||
|
||||
## Configuration
|
||||
|
||||
```toml @sample.conf
|
||||
# Collect bond interface status, slaves statuses and failures count
|
||||
[[inputs.bond]]
|
||||
## Sets 'proc' directory path
|
||||
## If not specified, then default is /proc
|
||||
# host_proc = "/proc"
|
||||
|
||||
## Sets 'sys' directory path
|
||||
## If not specified, then default is /sys
|
||||
# host_sys = "/sys"
|
||||
|
||||
## By default, telegraf gather stats for all bond interfaces
|
||||
## Setting interfaces will restrict the stats to the specified
|
||||
## bond interfaces.
|
||||
# bond_interfaces = ["bond0"]
|
||||
|
||||
## Tries to collect additional bond details from /sys/class/net/{bond}
|
||||
## currently only useful for LACP (mode 4) bonds
|
||||
# collect_sys_details = false
|
||||
```
|
||||
|
||||
## Metrics
|
||||
|
||||
- bond
|
||||
- tags:
|
||||
- `bond`: name of the bond
|
||||
- fields:
|
||||
- `active_slave`: currently active slave interface for active-backup mode
|
||||
- `status`: status of the interface (0: down , 1: up)
|
||||
|
||||
- bond_slave
|
||||
- tags:
|
||||
- `bond`: name of the bond
|
||||
- `interface`: name of the network interface
|
||||
- fields:
|
||||
- `failures`: amount of failures for bond's slave interface
|
||||
- `status`: status of the interface (0: down , 1: up)
|
||||
- `count`: number of slaves attached to bond
|
||||
- `actor_churned (for LACP bonds)`: count for local end of LACP bond flapped
|
||||
- `partner_churned (for LACP bonds)`: count for remote end of LACP bond flapped
|
||||
- `total_churned (for LACP bonds)`: full count of all churn events
|
||||
|
||||
- bond_sys
|
||||
- tags:
|
||||
- `bond`: name of the bond
|
||||
- `mode`: name of the bonding mode
|
||||
- fields:
|
||||
- `slave_count`: number of slaves
|
||||
- `ad_port_count`: number of ports
|
||||
|
||||
## Example Output

Configuration:

```toml
[[inputs.bond]]
  ## Sets 'proc' directory path.
  ## If not specified, the default is /proc.
  host_proc = "/proc"

  ## By default, Telegraf gathers stats for all bond interfaces.
  ## Setting interfaces will restrict the stats to the specified
  ## bond interfaces.
  bond_interfaces = ["bond0", "bond1"]
```

Run:

```bash
telegraf --config telegraf.conf --input-filter bond --test
```

Output:

```text
bond,bond=bond1,host=local active_slave="eth0",status=1i 1509704525000000000
bond_slave,bond=bond1,interface=eth0,host=local status=1i,failures=0i 1509704525000000000
bond_slave,host=local,bond=bond1,interface=eth1 status=1i,failures=0i 1509704525000000000
bond_slave,host=local,bond=bond1 count=2i 1509704525000000000
bond,bond=bond0,host=isvetlov-mac.local status=1i 1509704525000000000
bond_slave,bond=bond0,interface=eth1,host=local status=1i,failures=0i 1509704525000000000
bond_slave,bond=bond0,interface=eth2,host=local status=1i,failures=0i 1509704525000000000
bond_slave,bond=bond0,host=local count=2i 1509704525000000000
```

---
description: "Telegraf plugin for collecting metrics from Burrow"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Burrow
    identifier: input-burrow
tags: [Burrow, "input-plugins", "configuration", "messaging"]
introduced: "v1.7.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/burrow/README.md, Burrow Plugin Source
---

# Burrow Input Plugin

This plugin collects Kafka topic, consumer, and partition status from the
[Burrow](https://github.com/linkedin/Burrow) Kafka consumer lag checking companion
via its [HTTP API](https://github.com/linkedin/Burrow/wiki/HTTP-Endpoint).
Burrow v1.x versions are supported.

**Introduced in:** Telegraf v1.7.0
**Tags:** messaging
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Collect Kafka topics and consumers status from Burrow HTTP API.
[[inputs.burrow]]
  ## Burrow API endpoints in format "schema://host:port".
  ## Default is "http://localhost:8000".
  servers = ["http://localhost:8000"]

  ## Override Burrow API prefix.
  ## Useful when Burrow is behind a reverse proxy.
  # api_prefix = "/v3/kafka"

  ## Maximum time to receive a response.
  # response_timeout = "5s"

  ## Limit per-server concurrent connections.
  ## Useful in case of a large number of topics or consumer groups.
  # concurrent_connections = 20

  ## Filter clusters, default is no filtering.
  ## Values can be specified as glob patterns.
  # clusters_include = []
  # clusters_exclude = []

  ## Filter consumer groups, default is no filtering.
  ## Values can be specified as glob patterns.
  # groups_include = []
  # groups_exclude = []

  ## Filter topics, default is no filtering.
  ## Values can be specified as glob patterns.
  # topics_include = []
  # topics_exclude = []

  ## Credentials for basic HTTP authentication.
  # username = ""
  # password = ""

  ## Optional SSL config
  # ssl_ca = "/etc/telegraf/ca.pem"
  # ssl_cert = "/etc/telegraf/cert.pem"
  # ssl_key = "/etc/telegraf/key.pem"
  # insecure_skip_verify = false
```
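
The `*_include`/`*_exclude` options accept glob patterns. A minimal sketch of the filtering semantics, using Python's `fnmatch` as a stand-in for Telegraf's internal glob matcher (the `selected` helper and the exact include/exclude precedence are illustrative assumptions, not Telegraf's actual code):

```python
from fnmatch import fnmatchcase

def selected(name, include, exclude):
    """Illustrative filter: an empty include list matches everything;
    an exclude match wins over an include match (assumed semantics)."""
    if include and not any(fnmatchcase(name, p) for p in include):
        return False
    return not any(fnmatchcase(name, p) for p in exclude)

groups = ["console-1", "console-2", "billing"]
kept = [g for g in groups if selected(g, ["console-*"], ["*-2"])]
print(kept)  # ['console-1']
```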

## Group/Partition Status mappings

* `OK` = 1
* `NOT_FOUND` = 2
* `WARN` = 3
* `ERR` = 4
* `STOP` = 5
* `STALL` = 6

> Unknown values are mapped to 0.

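The mapping can be sketched as a plain lookup table in which any unrecognized status falls back to 0 (a sketch of the documented behavior, not the plugin's actual Go implementation):

```python
# Status-to-code mapping as documented above; unknown values map to 0.
STATUS_CODES = {
    "OK": 1,
    "NOT_FOUND": 2,
    "WARN": 3,
    "ERR": 4,
    "STOP": 5,
    "STALL": 6,
}

def status_code(status: str) -> int:
    return STATUS_CODES.get(status, 0)

print(status_code("WARN"))     # 3
print(status_code("UNKNOWN"))  # 0
```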
## Metrics

### Fields

* `burrow_group` (one event per consumer group)
  * status (string, see Group/Partition Status mappings)
  * status_code (int, `1..6`, see Group/Partition Status mappings)
  * partition_count (int, `number of partitions`)
  * offset (int64, `total offset of all partitions`)
  * total_lag (int64, `totallag`)
  * lag (int64, `maxlag.current_lag || 0`)
  * timestamp (int64, `end.timestamp`)

* `burrow_partition` (one event per topic partition)
  * status (string, see Group/Partition Status mappings)
  * status_code (int, `1..6`, see Group/Partition Status mappings)
  * lag (int64, `current_lag || 0`)
  * offset (int64, `end.offset`)
  * timestamp (int64, `end.timestamp`)

* `burrow_topic` (one event per topic offset)
  * offset (int64)

### Tags

* `burrow_group`
  * cluster (string)
  * group (string)

* `burrow_partition`
  * cluster (string)
  * group (string)
  * topic (string)
  * partition (int)
  * owner (string)

* `burrow_topic`
  * cluster (string)
  * topic (string)
  * partition (int)

## Example Output

---
description: "Telegraf plugin for collecting metrics from Ceph Storage"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Ceph Storage
    identifier: input-ceph
tags: [Ceph Storage, "input-plugins", "configuration", "system"]
introduced: "v0.13.1"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/ceph/README.md, Ceph Storage Plugin Source
---

# Ceph Storage Input Plugin

This plugin collects performance metrics from MON and OSD nodes in a
[Ceph storage cluster](https://ceph.com). Ceph's own Telegraf support was
introduced in the Ceph v13.x (Mimic) release, where data is sent to a socket
(see the [Ceph Telegraf module documentation](https://docs.ceph.com/en/latest/mgr/telegraf)).

**Introduced in:** Telegraf v0.13.1
**Tags:** system
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Collects performance metrics from the MON, OSD, MDS and RGW nodes
# in a Ceph storage cluster.
[[inputs.ceph]]
  ## This is the recommended interval to poll. Too frequent and you
  ## will lose data points due to timeouts during rebalancing and recovery
  interval = '1m'

  ## All configuration values are optional, defaults are shown below

  ## location of ceph binary
  ceph_binary = "/usr/bin/ceph"

  ## directory in which to look for socket files
  socket_dir = "/var/run/ceph"

  ## prefix of MON and OSD socket files, used to determine socket type
  mon_prefix = "ceph-mon"
  osd_prefix = "ceph-osd"
  mds_prefix = "ceph-mds"
  rgw_prefix = "ceph-client"

  ## suffix used to identify socket files
  socket_suffix = "asok"

  ## Ceph user to authenticate as, ceph will search for the corresponding
  ## keyring e.g. client.admin.keyring in /etc/ceph, or the explicit path
  ## defined in the client section of ceph.conf, for example:
  ##
  ##     [client.telegraf]
  ##         keyring = /etc/ceph/client.telegraf.keyring
  ##
  ## Consult the ceph documentation for more detail on keyring generation.
  ceph_user = "client.admin"

  ## Ceph configuration to use to locate the cluster
  ceph_config = "/etc/ceph/ceph.conf"

  ## Whether to gather statistics via the admin socket
  gather_admin_socket_stats = true

  ## Whether to gather statistics via ceph commands, requires ceph_user
  ## and ceph_config to be specified
  gather_cluster_stats = false
```

## Admin Socket Stats

This gatherer works by scanning the configured socket directory (`socket_dir`)
for OSD, MON, MDS, and RGW socket files. When it finds a MON socket, it runs

```shell
ceph --admin-daemon $file perfcounters_dump
```

For OSDs it runs

```shell
ceph --admin-daemon $file perf dump
```

The resulting JSON is parsed and grouped into collections based on the
top-level key. Top-level keys are used as collection tags, and all
sub-keys are flattened. For example:

```json
{
  "paxos": {
    "refresh": 9363435,
    "refresh_latency": {
      "avgcount": 9363435,
      "sum": 5378.794002000
    }
  }
}
```

This would be parsed into the following metrics, all of which would be tagged
with `collection=paxos`:

- refresh = 9363435
- refresh_latency.avgcount = 9363435
- refresh_latency.sum = 5378.794002000

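The flattening step described above can be sketched as a small recursive helper (illustrative only; the plugin's actual implementation is in Go, and the `flatten` helper here is an assumption about the shape of the transformation):

```python
def flatten(prefix, node, out):
    """Flatten nested dicts into dotted field names, e.g.
    refresh_latency.avgcount. Top-level keys become the
    'collection' tag rather than part of the field name."""
    for key, value in node.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flatten(name, value, out)
        else:
            out[name] = float(value)

perf = {"paxos": {"refresh": 9363435,
                  "refresh_latency": {"avgcount": 9363435,
                                      "sum": 5378.794002}}}
for collection, counters in perf.items():
    fields = {}
    flatten("", counters, fields)
    print(collection, fields)
```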
## Cluster Stats

This gatherer works by invoking ceph commands against the cluster, so it only
requires the ceph client, a valid ceph configuration, and an access key to
function (the ceph_config and ceph_user configuration variables work in
conjunction to specify these prerequisites). It may be run on any server with
access to the cluster. The currently supported commands are:

- ceph status
- ceph df
- ceph osd pool stats

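For illustration, the cluster gatherer's fields come from the JSON form of these commands' output. A hedged sketch of extracting a couple of fields (the sample JSON is abridged and hypothetical, not literal `ceph status` output):

```python
import json

# Abridged, hypothetical sample of `ceph status --format json` output.
sample = json.loads("""
{
  "monmap": {"mons": [{"name": "a"}, {"name": "b"}, {"name": "c"}]},
  "osdmap": {"epoch": 203, "num_osds": 9, "num_in_osds": 8, "num_up_osds": 8}
}
""")

# Map the JSON into measurement/field dicts; values are stored as floats.
metrics = {
    "ceph_monmap": {"num_mons": float(len(sample["monmap"]["mons"]))},
    "ceph_osdmap": {k: float(v) for k, v in sample["osdmap"].items()},
}
print(metrics["ceph_monmap"])  # {'num_mons': 3.0}
```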
## Metrics

### Admin Socket

All fields are collected under the **ceph** measurement and stored as
float64s. For a full list of fields, see the sample perf dumps in ceph_test.go.

All admin measurements will have the following tags:

- type: either 'osd', 'mon', 'mds' or 'rgw' to indicate the queried node type
- id: a unique string identifier, parsed from the socket file name for the node
- collection: the top-level key under which these fields were reported.
  Possible values are:
  - for MON nodes:
    - cluster
    - leveldb
    - mon
    - paxos
    - throttle-mon_client_bytes
    - throttle-mon_daemon_bytes
    - throttle-msgr_dispatch_throttler-mon
  - for OSD nodes:
    - WBThrottle
    - filestore
    - leveldb
    - mutex-FileJournal::completions_lock
    - mutex-FileJournal::finisher_lock
    - mutex-FileJournal::write_lock
    - mutex-FileJournal::writeq_lock
    - mutex-JOS::ApplyManager::apply_lock
    - mutex-JOS::ApplyManager::com_lock
    - mutex-JOS::SubmitManager::lock
    - mutex-WBThrottle::lock
    - objecter
    - osd
    - recoverystate_perf
    - throttle-filestore_bytes
    - throttle-filestore_ops
    - throttle-msgr_dispatch_throttler-client
    - throttle-msgr_dispatch_throttler-cluster
    - throttle-msgr_dispatch_throttler-hb_back_server
    - throttle-msgr_dispatch_throttler-hb_front_serve
    - throttle-msgr_dispatch_throttler-hbclient
    - throttle-msgr_dispatch_throttler-ms_objecter
    - throttle-objecter_bytes
    - throttle-objecter_ops
    - throttle-osd_client_bytes
    - throttle-osd_client_messages
  - for MDS nodes:
    - AsyncMessenger::Worker-0
    - AsyncMessenger::Worker-1
    - AsyncMessenger::Worker-2
    - finisher-PurgeQueue
    - mds
    - mds_cache
    - mds_log
    - mds_mem
    - mds_server
    - mds_sessions
    - objecter
    - purge_queue
    - throttle-msgr_dispatch_throttler-mds
    - throttle-objecter_bytes
    - throttle-objecter_ops
    - throttle-write_buf_throttle
  - for RGW nodes:
    - AsyncMessenger::Worker-0
    - AsyncMessenger::Worker-1
    - AsyncMessenger::Worker-2
    - cct
    - finisher-radosclient
    - mempool
    - objecter
    - rgw
    - simple-throttler
    - throttle-msgr_dispatch_throttler-radosclient
    - throttle-objecter_bytes
    - throttle-objecter_ops
    - throttle-rgw_async_rados_ops

### Cluster

- ceph_fsmap
  - fields:
    - up (float)
    - in (float)
    - max (float)
    - up_standby (float)

- ceph_health
  - fields:
    - status (string)
    - status_code (int)
    - overall_status (string, exists only in ceph <15)

- ceph_monmap
  - fields:
    - num_mons (float)

- ceph_osdmap
  - fields:
    - epoch (float)
    - full (bool, exists only in ceph <15)
    - nearfull (bool, exists only in ceph <15)
    - num_in_osds (float)
    - num_osds (float)
    - num_remapped_pgs (float)
    - num_up_osds (float)

- ceph_pgmap
  - fields:
    - bytes_avail (float)
    - bytes_total (float)
    - bytes_used (float)
    - data_bytes (float)
    - degraded_objects (float)
    - degraded_ratio (float)
    - degraded_total (float)
    - inactive_pgs_ratio (float)
    - num_bytes_recovered (float)
    - num_keys_recovered (float)
    - num_objects (float)
    - num_objects_recovered (float)
    - num_pgs (float)
    - num_pools (float)
    - op_per_sec (float, exists only in ceph <10)
    - read_bytes_sec (float)
    - read_op_per_sec (float)
    - recovering_bytes_per_sec (float)
    - recovering_keys_per_sec (float)
    - recovering_objects_per_sec (float)
    - version (float)
    - write_bytes_sec (float)
    - write_op_per_sec (float)

- ceph_pgmap_state
  - tags:
    - state
  - fields:
    - count (float)

- ceph_usage
  - fields:
    - num_osds (float)
    - num_per_pool_omap_osds (float)
    - num_per_pool_osds (float)
    - total_avail (float, exists only in ceph <0.84)
    - total_avail_bytes (float)
    - total_bytes (float)
    - total_space (float, exists only in ceph <0.84)
    - total_used (float, exists only in ceph <0.84)
    - total_used_bytes (float)
    - total_used_raw_bytes (float)
    - total_used_raw_ratio (float)

- ceph_deviceclass_usage
  - tags:
    - class
  - fields:
    - total_avail_bytes (float)
    - total_bytes (float)
    - total_used_bytes (float)
    - total_used_raw_bytes (float)
    - total_used_raw_ratio (float)

- ceph_pool_usage
  - tags:
    - name
  - fields:
    - bytes_used (float)
    - kb_used (float)
    - max_avail (float)
    - objects (float)
    - percent_used (float)
    - stored (float)

- ceph_pool_stats
  - tags:
    - name
  - fields:
    - degraded_objects (float)
    - degraded_ratio (float)
    - degraded_total (float)
    - num_bytes_recovered (float)
    - num_keys_recovered (float)
    - num_objects_recovered (float)
    - op_per_sec (float, exists only in ceph <10)
    - read_bytes_sec (float)
    - read_op_per_sec (float)
    - recovering_bytes_per_sec (float)
    - recovering_keys_per_sec (float)
    - recovering_objects_per_sec (float)
    - write_bytes_sec (float)
    - write_op_per_sec (float)

## Example Output

Below is an example of cluster stats:

```text
ceph_fsmap,host=ceph in=1,max=1,up=1,up_standby=2 1646782035000000000
ceph_health,host=ceph status="HEALTH_OK",status_code=2 1646782035000000000
ceph_monmap,host=ceph num_mons=3 1646782035000000000
ceph_osdmap,host=ceph epoch=10560,num_in_osds=6,num_osds=6,num_remapped_pgs=0,num_up_osds=6 1646782035000000000
ceph_pgmap,host=ceph bytes_avail=7863124942848,bytes_total=14882929901568,bytes_used=7019804958720,data_bytes=2411111520818,degraded_objects=0,degraded_ratio=0,degraded_total=0,inactive_pgs_ratio=0,num_bytes_recovered=0,num_keys_recovered=0,num_objects=973030,num_objects_recovered=0,num_pgs=233,num_pools=6,read_bytes_sec=7334,read_op_per_sec=2,recovering_bytes_per_sec=0,recovering_keys_per_sec=0,recovering_objects_per_sec=0,version=0,write_bytes_sec=13113085,write_op_per_sec=355 1646782035000000000
ceph_pgmap_state,host=ceph,state=active+clean count=233 1646782035000000000
ceph_usage,host=ceph num_osds=6,num_per_pool_omap_osds=6,num_per_pool_osds=6,total_avail_bytes=7863124942848,total_bytes=14882929901568,total_used_bytes=7019804958720,total_used_raw_bytes=7019804958720,total_used_raw_ratio=0.47166821360588074 1646782035000000000
ceph_deviceclass_usage,class=hdd,host=ceph total_avail_bytes=6078650843136,total_bytes=12002349023232,total_used_bytes=5923698180096,total_used_raw_bytes=5923698180096,total_used_raw_ratio=0.49354490637779236 1646782035000000000
ceph_deviceclass_usage,class=ssd,host=ceph total_avail_bytes=1784474099712,total_bytes=2880580878336,total_used_bytes=1096106778624,total_used_raw_bytes=1096106778624,total_used_raw_ratio=0.3805158734321594 1646782035000000000
ceph_pool_usage,host=ceph,name=Foo bytes_used=2019483848658,kb_used=1972152196,max_avail=1826022621184,objects=161029,percent_used=0.26935243606567383,stored=672915064134 1646782035000000000
ceph_pool_usage,host=ceph,name=Bar_metadata bytes_used=4370899787,kb_used=4268457,max_avail=546501918720,objects=89702,percent_used=0.002658897778019309,stored=1456936576 1646782035000000000
ceph_pool_usage,host=ceph,name=Bar_data bytes_used=3893328740352,kb_used=3802078848,max_avail=1826022621184,objects=518396,percent_used=0.41544806957244873,stored=1292214337536 1646782035000000000
ceph_pool_usage,host=ceph,name=device_health_metrics bytes_used=85289044,kb_used=83291,max_avail=3396406870016,objects=9,percent_used=0.000012555617104226258,stored=42644520 1646782035000000000
ceph_pool_usage,host=ceph,name=Foo_Fast bytes_used=597511814461,kb_used=583507632,max_avail=546501918720,objects=67014,percent_used=0.2671019732952118,stored=199093853972 1646782035000000000
ceph_pool_usage,host=ceph,name=Bar_data_fast bytes_used=490009280512,kb_used=478524688,max_avail=546501918720,objects=136880,percent_used=0.23010368645191193,stored=163047325696 1646782035000000000
ceph_pool_stats,host=ceph,name=Foo degraded_objects=0,degraded_ratio=0,degraded_total=0,num_bytes_recovered=0,num_keys_recovered=0,num_objects_recovered=0,read_bytes_sec=0,read_op_per_sec=0,recovering_bytes_per_sec=0,recovering_keys_per_sec=0,recovering_objects_per_sec=0,write_bytes_sec=27720,write_op_per_sec=4 1646782036000000000
ceph_pool_stats,host=ceph,name=Bar_metadata degraded_objects=0,degraded_ratio=0,degraded_total=0,num_bytes_recovered=0,num_keys_recovered=0,num_objects_recovered=0,read_bytes_sec=9638,read_op_per_sec=3,recovering_bytes_per_sec=0,recovering_keys_per_sec=0,recovering_objects_per_sec=0,write_bytes_sec=11802778,write_op_per_sec=60 1646782036000000000
ceph_pool_stats,host=ceph,name=Bar_data degraded_objects=0,degraded_ratio=0,degraded_total=0,num_bytes_recovered=0,num_keys_recovered=0,num_objects_recovered=0,read_bytes_sec=0,read_op_per_sec=0,recovering_bytes_per_sec=0,recovering_keys_per_sec=0,recovering_objects_per_sec=0,write_bytes_sec=0,write_op_per_sec=104 1646782036000000000
ceph_pool_stats,host=ceph,name=device_health_metrics degraded_objects=0,degraded_ratio=0,degraded_total=0,num_bytes_recovered=0,num_keys_recovered=0,num_objects_recovered=0,read_bytes_sec=0,read_op_per_sec=0,recovering_bytes_per_sec=0,recovering_keys_per_sec=0,recovering_objects_per_sec=0,write_bytes_sec=0,write_op_per_sec=0 1646782036000000000
ceph_pool_stats,host=ceph,name=Foo_Fast degraded_objects=0,degraded_ratio=0,degraded_total=0,num_bytes_recovered=0,num_keys_recovered=0,num_objects_recovered=0,read_bytes_sec=0,read_op_per_sec=0,recovering_bytes_per_sec=0,recovering_keys_per_sec=0,recovering_objects_per_sec=0,write_bytes_sec=11173,write_op_per_sec=1 1646782036000000000
ceph_pool_stats,host=ceph,name=Bar_data_fast degraded_objects=0,degraded_ratio=0,degraded_total=0,num_bytes_recovered=0,num_keys_recovered=0,num_objects_recovered=0,read_bytes_sec=0,read_op_per_sec=0,recovering_bytes_per_sec=0,recovering_keys_per_sec=0,recovering_objects_per_sec=0,write_bytes_sec=2155404,write_op_per_sec=262 1646782036000000000
```

Below is an example of admin socket stats:

```text
ceph,collection=cct,host=stefanmon1,id=stefanmon1,type=monitor total_workers=0,unhealthy_workers=0 1587117563000000000
ceph,collection=mempool,host=stefanmon1,id=stefanmon1,type=monitor bloom_filter_bytes=0,bloom_filter_items=0,bluefs_bytes=0,bluefs_items=0,bluestore_alloc_bytes=0,bluestore_alloc_items=0,bluestore_cache_data_bytes=0,bluestore_cache_data_items=0,bluestore_cache_onode_bytes=0,bluestore_cache_onode_items=0,bluestore_cache_other_bytes=0,bluestore_cache_other_items=0,bluestore_fsck_bytes=0,bluestore_fsck_items=0,bluestore_txc_bytes=0,bluestore_txc_items=0,bluestore_writing_bytes=0,bluestore_writing_deferred_bytes=0,bluestore_writing_deferred_items=0,bluestore_writing_items=0,buffer_anon_bytes=719152,buffer_anon_items=192,buffer_meta_bytes=352,buffer_meta_items=4,mds_co_bytes=0,mds_co_items=0,osd_bytes=0,osd_items=0,osd_mapbl_bytes=0,osd_mapbl_items=0,osd_pglog_bytes=0,osd_pglog_items=0,osdmap_bytes=15872,osdmap_items=138,osdmap_mapping_bytes=63112,osdmap_mapping_items=7626,pgmap_bytes=38680,pgmap_items=477,unittest_1_bytes=0,unittest_1_items=0,unittest_2_bytes=0,unittest_2_items=0 1587117563000000000
ceph,collection=throttle-mon_client_bytes,host=stefanmon1,id=stefanmon1,type=monitor get=1041157,get_or_fail_fail=0,get_or_fail_success=1041157,get_started=0,get_sum=64928901,max=104857600,put=1041157,put_sum=64928901,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117563000000000
ceph,collection=throttle-msgr_dispatch_throttler-mon,host=stefanmon1,id=stefanmon1,type=monitor get=12695426,get_or_fail_fail=0,get_or_fail_success=12695426,get_started=0,get_sum=42542216884,max=104857600,put=12695426,put_sum=42542216884,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117563000000000
ceph,collection=finisher-mon_finisher,host=stefanmon1,id=stefanmon1,type=monitor complete_latency.avgcount=0,complete_latency.avgtime=0,complete_latency.sum=0,queue_len=0 1587117563000000000
ceph,collection=finisher-monstore,host=stefanmon1,id=stefanmon1,type=monitor complete_latency.avgcount=1609831,complete_latency.avgtime=0.015857621,complete_latency.sum=25528.09131035,queue_len=0 1587117563000000000
ceph,collection=mon,host=stefanmon1,id=stefanmon1,type=monitor election_call=25,election_lose=0,election_win=22,num_elections=94,num_sessions=3,session_add=174679,session_rm=439316,session_trim=137 1587117563000000000
ceph,collection=throttle-mon_daemon_bytes,host=stefanmon1,id=stefanmon1,type=monitor get=72697,get_or_fail_fail=0,get_or_fail_success=72697,get_started=0,get_sum=32261199,max=419430400,put=72697,put_sum=32261199,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117563000000000
ceph,collection=rocksdb,host=stefanmon1,id=stefanmon1,type=monitor compact=1,compact_queue_len=0,compact_queue_merge=1,compact_range=19126,get=62449211,get_latency.avgcount=62449211,get_latency.avgtime=0.000022216,get_latency.sum=1387.371811726,rocksdb_write_delay_time.avgcount=0,rocksdb_write_delay_time.avgtime=0,rocksdb_write_delay_time.sum=0,rocksdb_write_memtable_time.avgcount=0,rocksdb_write_memtable_time.avgtime=0,rocksdb_write_memtable_time.sum=0,rocksdb_write_pre_and_post_time.avgcount=0,rocksdb_write_pre_and_post_time.avgtime=0,rocksdb_write_pre_and_post_time.sum=0,rocksdb_write_wal_time.avgcount=0,rocksdb_write_wal_time.avgtime=0,rocksdb_write_wal_time.sum=0,submit_latency.avgcount=0,submit_latency.avgtime=0,submit_latency.sum=0,submit_sync_latency.avgcount=3219961,submit_sync_latency.avgtime=0.007532173,submit_sync_latency.sum=24253.303584224,submit_transaction=0,submit_transaction_sync=3219961 1587117563000000000
ceph,collection=AsyncMessenger::Worker-0,host=stefanmon1,id=stefanmon1,type=monitor msgr_active_connections=148317,msgr_created_connections=162806,msgr_recv_bytes=11557888328,msgr_recv_messages=5113369,msgr_running_fast_dispatch_time=0,msgr_running_recv_time=868.377161686,msgr_running_send_time=1626.525392721,msgr_running_total_time=4222.235694322,msgr_send_bytes=91516226816,msgr_send_messages=6973706 1587117563000000000
ceph,collection=AsyncMessenger::Worker-2,host=stefanmon1,id=stefanmon1,type=monitor msgr_active_connections=146396,msgr_created_connections=159788,msgr_recv_bytes=2162802496,msgr_recv_messages=689168,msgr_running_fast_dispatch_time=0,msgr_running_recv_time=164.148550562,msgr_running_send_time=153.462890368,msgr_running_total_time=644.188791379,msgr_send_bytes=7422484152,msgr_send_messages=749381 1587117563000000000
ceph,collection=cluster,host=stefanmon1,id=stefanmon1,type=monitor num_bytes=5055,num_mon=3,num_mon_quorum=3,num_object=245,num_object_degraded=0,num_object_misplaced=0,num_object_unfound=0,num_osd=9,num_osd_in=8,num_osd_up=8,num_pg=504,num_pg_active=504,num_pg_active_clean=504,num_pg_peering=0,num_pool=17,osd_bytes=858959904768,osd_bytes_avail=849889787904,osd_bytes_used=9070116864,osd_epoch=203 1587117563000000000
ceph,collection=paxos,host=stefanmon1,id=stefanmon1,type=monitor accept_timeout=1,begin=1609847,begin_bytes.avgcount=1609847,begin_bytes.sum=41408662074,begin_keys.avgcount=1609847,begin_keys.sum=4829541,begin_latency.avgcount=1609847,begin_latency.avgtime=0.007213392,begin_latency.sum=11612.457661116,collect=0,collect_bytes.avgcount=0,collect_bytes.sum=0,collect_keys.avgcount=0,collect_keys.sum=0,collect_latency.avgcount=0,collect_latency.avgtime=0,collect_latency.sum=0,collect_timeout=1,collect_uncommitted=17,commit=1609831,commit_bytes.avgcount=1609831,commit_bytes.sum=41087428442,commit_keys.avgcount=1609831,commit_keys.sum=11637931,commit_latency.avgcount=1609831,commit_latency.avgtime=0.006236333,commit_latency.sum=10039.442388355,lease_ack_timeout=0,lease_timeout=0,new_pn=33,new_pn_latency.avgcount=33,new_pn_latency.avgtime=3.844272773,new_pn_latency.sum=126.86100151,refresh=1609856,refresh_latency.avgcount=1609856,refresh_latency.avgtime=0.005900486,refresh_latency.sum=9498.932866761,restart=109,share_state=2,share_state_bytes.avgcount=2,share_state_bytes.sum=39612,share_state_keys.avgcount=2,share_state_keys.sum=2,start_leader=22,start_peon=0,store_state=14,store_state_bytes.avgcount=14,store_state_bytes.sum=51908281,store_state_keys.avgcount=14,store_state_keys.sum=7016,store_state_latency.avgcount=14,store_state_latency.avgtime=11.668377665,store_state_latency.sum=163.357287311 1587117563000000000
ceph,collection=throttle-msgr_dispatch_throttler-mon-mgrc,host=stefanmon1,id=stefanmon1,type=monitor get=13225,get_or_fail_fail=0,get_or_fail_success=13225,get_started=0,get_sum=158700,max=104857600,put=13225,put_sum=158700,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117563000000000
ceph,collection=AsyncMessenger::Worker-1,host=stefanmon1,id=stefanmon1,type=monitor msgr_active_connections=147680,msgr_created_connections=162374,msgr_recv_bytes=29781706740,msgr_recv_messages=7170733,msgr_running_fast_dispatch_time=0,msgr_running_recv_time=1728.559151358,msgr_running_send_time=2086.681244508,msgr_running_total_time=6084.532916585,msgr_send_bytes=94062125718,msgr_send_messages=9161564 1587117563000000000
ceph,collection=throttle-msgr_dispatch_throttler-cluster,host=stefanosd1,id=0,type=osd get=281745,get_or_fail_fail=0,get_or_fail_success=281745,get_started=0,get_sum=446024457,max=104857600,put=281745,put_sum=446024457,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=throttle-bluestore_throttle_bytes,host=stefanosd1,id=0,type=osd get=275707,get_or_fail_fail=0,get_or_fail_success=0,get_started=275707,get_sum=185073179842,max=67108864,put=268870,put_sum=185073179842,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=throttle-msgr_dispatch_throttler-hb_front_server,host=stefanosd1,id=0,type=osd get=2606982,get_or_fail_fail=0,get_or_fail_success=2606982,get_started=0,get_sum=5224391928,max=104857600,put=2606982,put_sum=5224391928,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=rocksdb,host=stefanosd1,id=0,type=osd compact=0,compact_queue_len=0,compact_queue_merge=0,compact_range=0,get=1570,get_latency.avgcount=1570,get_latency.avgtime=0.000051233,get_latency.sum=0.080436788,rocksdb_write_delay_time.avgcount=0,rocksdb_write_delay_time.avgtime=0,rocksdb_write_delay_time.sum=0,rocksdb_write_memtable_time.avgcount=0,rocksdb_write_memtable_time.avgtime=0,rocksdb_write_memtable_time.sum=0,rocksdb_write_pre_and_post_time.avgcount=0,rocksdb_write_pre_and_post_time.avgtime=0,rocksdb_write_pre_and_post_time.sum=0,rocksdb_write_wal_time.avgcount=0,rocksdb_write_wal_time.avgtime=0,rocksdb_write_wal_time.sum=0,submit_latency.avgcount=275707,submit_latency.avgtime=0.000174936,submit_latency.sum=48.231345334,submit_sync_latency.avgcount=268870,submit_sync_latency.avgtime=0.006097313,submit_sync_latency.sum=1639.384555624,submit_transaction=275707,submit_transaction_sync=268870 1587117698000000000
ceph,collection=throttle-msgr_dispatch_throttler-hb_back_server,host=stefanosd1,id=0,type=osd get=2606982,get_or_fail_fail=0,get_or_fail_success=2606982,get_started=0,get_sum=5224391928,max=104857600,put=2606982,put_sum=5224391928,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=throttle-objecter_bytes,host=stefanosd1,id=0,type=osd get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=104857600,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=throttle-msgr_dispatch_throttler-hb_back_client,host=stefanosd1,id=0,type=osd get=2610285,get_or_fail_fail=0,get_or_fail_success=2610285,get_started=0,get_sum=5231011140,max=104857600,put=2610285,put_sum=5231011140,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=AsyncMessenger::Worker-1,host=stefanosd1,id=0,type=osd msgr_active_connections=2093,msgr_created_connections=29142,msgr_recv_bytes=7214238199,msgr_recv_messages=3928206,msgr_running_fast_dispatch_time=171.289615064,msgr_running_recv_time=278.531155966,msgr_running_send_time=489.482588813,msgr_running_total_time=1134.004853662,msgr_send_bytes=9814725232,msgr_send_messages=3814927 1587117698000000000
ceph,collection=throttle-msgr_dispatch_throttler-client,host=stefanosd1,id=0,type=osd get=488206,get_or_fail_fail=0,get_or_fail_success=488206,get_started=0,get_sum=104085134,max=104857600,put=488206,put_sum=104085134,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=finisher-defered_finisher,host=stefanosd1,id=0,type=osd complete_latency.avgcount=0,complete_latency.avgtime=0,complete_latency.sum=0,queue_len=0 1587117698000000000
ceph,collection=recoverystate_perf,host=stefanosd1,id=0,type=osd activating_latency.avgcount=87,activating_latency.avgtime=0.114348341,activating_latency.sum=9.948305683,active_latency.avgcount=25,active_latency.avgtime=1790.961574431,active_latency.sum=44774.039360795,backfilling_latency.avgcount=0,backfilling_latency.avgtime=0,backfilling_latency.sum=0,clean_latency.avgcount=25,clean_latency.avgtime=1790.830827794,clean_latency.sum=44770.770694867,down_latency.avgcount=0,down_latency.avgtime=0,down_latency.sum=0,getinfo_latency.avgcount=141,getinfo_latency.avgtime=0.446233476,getinfo_latency.sum=62.918920183,getlog_latency.avgcount=87,getlog_latency.avgtime=0.007708069,getlog_latency.sum=0.670602073,getmissing_latency.avgcount=87,getmissing_latency.avgtime=0.000077594,getmissing_latency.sum=0.006750701,incomplete_latency.avgcount=0,incomplete_latency.avgtime=0,incomplete_latency.sum=0,initial_latency.avgcount=166,initial_latency.avgtime=0.001313715,initial_latency.sum=0.218076764,notbackfilling_latency.avgcount=0,notbackfilling_latency.avgtime=0,notbackfilling_latency.sum=0,notrecovering_latency.avgcount=0,notrecovering_latency.avgtime=0,notrecovering_latency.sum=0,peering_latency.avgcount=141,peering_latency.avgtime=0.948324273,peering_latency.sum=133.713722563,primary_latency.avgcount=79,primary_latency.avgtime=567.706192991,primary_latency.sum=44848.78924634,recovered_latency.avgcount=87,recovered_latency.avgtime=0.000378284,recovered_latency.sum=0.032910791,recovering_latency.avgcount=2,recovering_latency.avgtime=0.338242008,recovering_latency.sum=0.676484017,replicaactive_latency.avgcount=23,replicaactive_latency.avgtime=1790.893991295,replicaactive_latency.sum=41190.561799786,repnotrecovering_latency.avgcount=25,repnotrecovering_latency.avgtime=1647.627024984,repnotrecovering_latency.sum=41190.675624616,reprecovering_latency.avgcount=2,reprecovering_latency.avgtime=0.311884638,reprecovering_latency.sum=0.623769276,repwaitbackfillreserved_latency.avgcount=0,repwaitbackfillreserved_latency.avgtime=0,repwaitbackfillreserved_latency.sum=0,repwaitrecoveryreserved_latency.avgcount=2,repwaitrecoveryreserved_latency.avgtime=0.000462873,repwaitrecoveryreserved_latency.sum=0.000925746,reset_latency.avgcount=372,reset_latency.avgtime=0.125056393,reset_latency.sum=46.520978537,start_latency.avgcount=372,start_latency.avgtime=0.000109397,start_latency.sum=0.040695881,started_latency.avgcount=206,started_latency.avgtime=418.299777245,started_latency.sum=86169.754112641,stray_latency.avgcount=231,stray_latency.avgtime=0.98203205,stray_latency.sum=226.849403565,waitactingchange_latency.avgcount=0,waitactingchange_latency.avgtime=0,waitactingchange_latency.sum=0,waitlocalbackfillreserved_latency.avgcount=0,waitlocalbackfillreserved_latency.avgtime=0,waitlocalbackfillreserved_latency.sum=0,waitlocalrecoveryreserved_latency.avgcount=2,waitlocalrecoveryreserved_latency.avgtime=0.002802377,waitlocalrecoveryreserved_latency.sum=0.005604755,waitremotebackfillreserved_latency.avgcount=0,waitremotebackfillreserved_latency.avgtime=0,waitremotebackfillreserved_latency.sum=0,waitremoterecoveryreserved_latency.avgcount=2,waitremoterecoveryreserved_latency.avgtime=0.012855439,waitremoterecoveryreserved_latency.sum=0.025710878,waitupthru_latency.avgcount=87,waitupthru_latency.avgtime=0.805727895,waitupthru_latency.sum=70.09832695 1587117698000000000
ceph,collection=cct,host=stefanosd1,id=0,type=osd total_workers=6,unhealthy_workers=0 1587117698000000000
ceph,collection=throttle-msgr_dispatch_throttler-hb_front_client,host=stefanosd1,id=0,type=osd get=2610285,get_or_fail_fail=0,get_or_fail_success=2610285,get_started=0,get_sum=5231011140,max=104857600,put=2610285,put_sum=5231011140,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=bluefs,host=stefanosd1,id=0,type=osd bytes_written_slow=0,bytes_written_sst=9018781,bytes_written_wal=831081573,db_total_bytes=4294967296,db_used_bytes=434110464,files_written_sst=3,files_written_wal=2,gift_bytes=0,log_bytes=134291456,log_compactions=1,logged_bytes=1101668352,max_bytes_db=1234173952,max_bytes_slow=0,max_bytes_wal=0,num_files=11,reclaim_bytes=0,slow_total_bytes=0,slow_used_bytes=0,wal_total_bytes=0,wal_used_bytes=0 1587117698000000000
ceph,collection=mempool,host=stefanosd1,id=0,type=osd bloom_filter_bytes=0,bloom_filter_items=0,bluefs_bytes=10600,bluefs_items=458,bluestore_alloc_bytes=230288,bluestore_alloc_items=28786,bluestore_cache_data_bytes=622592,bluestore_cache_data_items=43,bluestore_cache_onode_bytes=249280,bluestore_cache_onode_items=380,bluestore_cache_other_bytes=192678,bluestore_cache_other_items=20199,bluestore_fsck_bytes=0,bluestore_fsck_items=0,bluestore_txc_bytes=8272,bluestore_txc_items=11,bluestore_writing_bytes=0,bluestore_writing_deferred_bytes=670130,bluestore_writing_deferred_items=176,bluestore_writing_items=0,buffer_anon_bytes=2412465,buffer_anon_items=297,buffer_meta_bytes=5896,buffer_meta_items=67,mds_co_bytes=0,mds_co_items=0,osd_bytes=2124800,osd_items=166,osd_mapbl_bytes=155152,osd_mapbl_items=10,osd_pglog_bytes=3214704,osd_pglog_items=6288,osdmap_bytes=710892,osdmap_items=4426,osdmap_mapping_bytes=0,osdmap_mapping_items=0,pgmap_bytes=0,pgmap_items=0,unittest_1_bytes=0,unittest_1_items=0,unittest_2_bytes=0,unittest_2_items=0 1587117698000000000
ceph,collection=osd,host=stefanosd1,id=0,type=osd agent_evict=0,agent_flush=0,agent_skip=0,agent_wake=0,cached_crc=0,cached_crc_adjusted=0,copyfrom=0,heartbeat_to_peers=7,loadavg=11,map_message_epoch_dups=21,map_message_epochs=40,map_messages=31,messages_delayed_for_map=0,missed_crc=0,numpg=166,numpg_primary=62,numpg_removing=0,numpg_replica=104,numpg_stray=0,object_ctx_cache_hit=476529,object_ctx_cache_total=476536,op=476525,op_before_dequeue_op_lat.avgcount=755708,op_before_dequeue_op_lat.avgtime=0.000205759,op_before_dequeue_op_lat.sum=155.493843473,op_before_queue_op_lat.avgcount=755702,op_before_queue_op_lat.avgtime=0.000047877,op_before_queue_op_lat.sum=36.181069552,op_cache_hit=0,op_in_bytes=0,op_latency.avgcount=476525,op_latency.avgtime=0.000365956,op_latency.sum=174.387387878,op_out_bytes=10882,op_prepare_latency.avgcount=476527,op_prepare_latency.avgtime=0.000205307,op_prepare_latency.sum=97.834380034,op_process_latency.avgcount=476525,op_process_latency.avgtime=0.000139616,op_process_latency.sum=66.530847665,op_r=476521,op_r_latency.avgcount=476521,op_r_latency.avgtime=0.00036559,op_r_latency.sum=174.21148267,op_r_out_bytes=10882,op_r_prepare_latency.avgcount=476523,op_r_prepare_latency.avgtime=0.000205302,op_r_prepare_latency.sum=97.831473175,op_r_process_latency.avgcount=476521,op_r_process_latency.avgtime=0.000139396,op_r_process_latency.sum=66.425498624,op_rw=2,op_rw_in_bytes=0,op_rw_latency.avgcount=2,op_rw_latency.avgtime=0.048818975,op_rw_latency.sum=0.097637951,op_rw_out_bytes=0,op_rw_prepare_latency.avgcount=2,op_rw_prepare_latency.avgtime=0.000467887,op_rw_prepare_latency.sum=0.000935775,op_rw_process_latency.avgcount=2,op_rw_process_latency.avgtime=0.013741256,op_rw_process_latency.sum=0.027482512,op_w=2,op_w_in_bytes=0,op_w_latency.avgcount=2,op_w_latency.avgtime=0.039133628,op_w_latency.sum=0.078267257,op_w_prepare_latency.avgcount=2,op_w_prepare_latency.avgtime=0.000985542,op_w_prepare_latency.sum=0.001971084,op_w_process_latency.avgcount=2,op_w_process_latency.avgtime=0.038933264,op_w_process_latency.sum=0.077866529,op_wip=0,osd_map_bl_cache_hit=22,osd_map_bl_cache_miss=40,osd_map_cache_hit=4570,osd_map_cache_miss=15,osd_map_cache_miss_low=0,osd_map_cache_miss_low_avg.avgcount=0,osd_map_cache_miss_low_avg.sum=0,osd_pg_biginfo=2050,osd_pg_fastinfo=265780,osd_pg_info=274542,osd_tier_flush_lat.avgcount=0,osd_tier_flush_lat.avgtime=0,osd_tier_flush_lat.sum=0,osd_tier_promote_lat.avgcount=0,osd_tier_promote_lat.avgtime=0,osd_tier_promote_lat.sum=0,osd_tier_r_lat.avgcount=0,osd_tier_r_lat.avgtime=0,osd_tier_r_lat.sum=0,pull=0,push=2,push_out_bytes=10,recovery_bytes=10,recovery_ops=2,stat_bytes=107369988096,stat_bytes_avail=106271539200,stat_bytes_used=1098448896,subop=253554,subop_in_bytes=168644225,subop_latency.avgcount=253554,subop_latency.avgtime=0.0073036,subop_latency.sum=1851.857230388,subop_pull=0,subop_pull_latency.avgcount=0,subop_pull_latency.avgtime=0,subop_pull_latency.sum=0,subop_push=0,subop_push_in_bytes=0,subop_push_latency.avgcount=0,subop_push_latency.avgtime=0,subop_push_latency.sum=0,subop_w=253554,subop_w_in_bytes=168644225,subop_w_latency.avgcount=253554,subop_w_latency.avgtime=0.0073036,subop_w_latency.sum=1851.857230388,tier_clean=0,tier_delay=0,tier_dirty=0,tier_evict=0,tier_flush=0,tier_flush_fail=0,tier_promote=0,tier_proxy_read=0,tier_proxy_write=0,tier_try_flush=0,tier_try_flush_fail=0,tier_whiteout=0 1587117698000000000
ceph,collection=throttle-msgr_dispatch_throttler-ms_objecter,host=stefanosd1,id=0,type=osd get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=104857600,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=AsyncMessenger::Worker-2,host=stefanosd1,id=0,type=osd msgr_active_connections=2055,msgr_created_connections=27411,msgr_recv_bytes=6431950009,msgr_recv_messages=3552443,msgr_running_fast_dispatch_time=162.271664213,msgr_running_recv_time=254.307853033,msgr_running_send_time=503.037285799,msgr_running_total_time=1130.21070681,msgr_send_bytes=10865436237,msgr_send_messages=3523374 1587117698000000000
ceph,collection=bluestore,host=stefanosd1,id=0,type=osd bluestore_allocated=24641536,bluestore_blob_split=0,bluestore_blobs=88,bluestore_buffer_bytes=622592,bluestore_buffer_hit_bytes=160578,bluestore_buffer_miss_bytes=540236,bluestore_buffers=43,bluestore_compressed=0,bluestore_compressed_allocated=0,bluestore_compressed_original=0,bluestore_extent_compress=0,bluestore_extents=88,bluestore_fragmentation_micros=1,bluestore_gc_merged=0,bluestore_onode_hits=532102,bluestore_onode_misses=388,bluestore_onode_reshard=0,bluestore_onode_shard_hits=0,bluestore_onode_shard_misses=0,bluestore_onodes=380,bluestore_read_eio=0,bluestore_reads_with_retries=0,bluestore_stored=1987856,bluestore_txc=275707,bluestore_write_big=0,bluestore_write_big_blobs=0,bluestore_write_big_bytes=0,bluestore_write_small=60,bluestore_write_small_bytes=343843,bluestore_write_small_deferred=22,bluestore_write_small_new=38,bluestore_write_small_pre_read=22,bluestore_write_small_unused=0,commit_lat.avgcount=275707,commit_lat.avgtime=0.00699778,commit_lat.sum=1929.337103334,compress_lat.avgcount=0,compress_lat.avgtime=0,compress_lat.sum=0,compress_rejected_count=0,compress_success_count=0,csum_lat.avgcount=67,csum_lat.avgtime=0.000032601,csum_lat.sum=0.002184323,decompress_lat.avgcount=0,decompress_lat.avgtime=0,decompress_lat.sum=0,deferred_write_bytes=0,deferred_write_ops=0,kv_commit_lat.avgcount=268870,kv_commit_lat.avgtime=0.006365428,kv_commit_lat.sum=1711.472749866,kv_final_lat.avgcount=268867,kv_final_lat.avgtime=0.000043227,kv_final_lat.sum=11.622427109,kv_flush_lat.avgcount=268870,kv_flush_lat.avgtime=0.000000223,kv_flush_lat.sum=0.060141588,kv_sync_lat.avgcount=268870,kv_sync_lat.avgtime=0.006365652,kv_sync_lat.sum=1711.532891454,omap_lower_bound_lat.avgcount=2,omap_lower_bound_lat.avgtime=0.000006524,omap_lower_bound_lat.sum=0.000013048,omap_next_lat.avgcount=6704,omap_next_lat.avgtime=0.000004721,omap_next_lat.sum=0.031654097,omap_seek_to_first_lat.avgcount=323,omap_seek_to_first_lat.avgtime=0.00000522,omap_seek_to_first_lat.sum=0.00168614,omap_upper_bound_lat.avgcount=4,omap_upper_bound_lat.avgtime=0.000013086,omap_upper_bound_lat.sum=0.000052344,read_lat.avgcount=227,read_lat.avgtime=0.000699457,read_lat.sum=0.158776879,read_onode_meta_lat.avgcount=311,read_onode_meta_lat.avgtime=0.000072207,read_onode_meta_lat.sum=0.022456667,read_wait_aio_lat.avgcount=84,read_wait_aio_lat.avgtime=0.001556141,read_wait_aio_lat.sum=0.130715885,state_aio_wait_lat.avgcount=275707,state_aio_wait_lat.avgtime=0.000000345,state_aio_wait_lat.sum=0.095246457,state_deferred_aio_wait_lat.avgcount=0,state_deferred_aio_wait_lat.avgtime=0,state_deferred_aio_wait_lat.sum=0,state_deferred_cleanup_lat.avgcount=0,state_deferred_cleanup_lat.avgtime=0,state_deferred_cleanup_lat.sum=0,state_deferred_queued_lat.avgcount=0,state_deferred_queued_lat.avgtime=0,state_deferred_queued_lat.sum=0,state_done_lat.avgcount=275696,state_done_lat.avgtime=0.00000286,state_done_lat.sum=0.788700007,state_finishing_lat.avgcount=275696,state_finishing_lat.avgtime=0.000000302,state_finishing_lat.sum=0.083437168,state_io_done_lat.avgcount=275707,state_io_done_lat.avgtime=0.000001041,state_io_done_lat.sum=0.287025147,state_kv_commiting_lat.avgcount=275707,state_kv_commiting_lat.avgtime=0.006424459,state_kv_commiting_lat.sum=1771.268407864,state_kv_done_lat.avgcount=275707,state_kv_done_lat.avgtime=0.000001627,state_kv_done_lat.sum=0.448805853,state_kv_queued_lat.avgcount=275707,state_kv_queued_lat.avgtime=0.000488565,state_kv_queued_lat.sum=134.7009424,state_prepare_lat.avgcount=275707,state_prepare_lat.avgtime=0.000082464,state_prepare_lat.sum=22.736065534,submit_lat.avgcount=275707,submit_lat.avgtime=0.000120236,submit_lat.sum=33.149934412,throttle_lat.avgcount=275707,throttle_lat.avgtime=0.000001571,throttle_lat.sum=0.433185935,write_pad_bytes=151773,write_penalty_read_ops=0 1587117698000000000
ceph,collection=finisher-objecter-finisher-0,host=stefanosd1,id=0,type=osd complete_latency.avgcount=0,complete_latency.avgtime=0,complete_latency.sum=0,queue_len=0 1587117698000000000
ceph,collection=objecter,host=stefanosd1,id=0,type=osd command_active=0,command_resend=0,command_send=0,linger_active=0,linger_ping=0,linger_resend=0,linger_send=0,map_epoch=203,map_full=0,map_inc=19,omap_del=0,omap_rd=0,omap_wr=0,op=0,op_active=0,op_laggy=0,op_pg=0,op_r=0,op_reply=0,op_resend=0,op_rmw=0,op_send=0,op_send_bytes=0,op_w=0,osd_laggy=0,osd_session_close=0,osd_session_open=0,osd_sessions=0,osdop_append=0,osdop_call=0,osdop_clonerange=0,osdop_cmpxattr=0,osdop_create=0,osdop_delete=0,osdop_getxattr=0,osdop_mapext=0,osdop_notify=0,osdop_other=0,osdop_pgls=0,osdop_pgls_filter=0,osdop_read=0,osdop_resetxattrs=0,osdop_rmxattr=0,osdop_setxattr=0,osdop_sparse_read=0,osdop_src_cmpxattr=0,osdop_stat=0,osdop_truncate=0,osdop_watch=0,osdop_write=0,osdop_writefull=0,osdop_writesame=0,osdop_zero=0,poolop_active=0,poolop_resend=0,poolop_send=0,poolstat_active=0,poolstat_resend=0,poolstat_send=0,statfs_active=0,statfs_resend=0,statfs_send=0 1587117698000000000
ceph,collection=finisher-commit_finisher,host=stefanosd1,id=0,type=osd complete_latency.avgcount=11,complete_latency.avgtime=0.003447516,complete_latency.sum=0.037922681,queue_len=0 1587117698000000000
ceph,collection=throttle-objecter_ops,host=stefanosd1,id=0,type=osd get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=1024,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=AsyncMessenger::Worker-0,host=stefanosd1,id=0,type=osd msgr_active_connections=2128,msgr_created_connections=33685,msgr_recv_bytes=8679123051,msgr_recv_messages=4200356,msgr_running_fast_dispatch_time=151.889337454,msgr_running_recv_time=297.632294886,msgr_running_send_time=599.20020523,msgr_running_total_time=1321.361931202,msgr_send_bytes=11716202897,msgr_send_messages=4347418 1587117698000000000
ceph,collection=throttle-osd_client_bytes,host=stefanosd1,id=0,type=osd get=476554,get_or_fail_fail=0,get_or_fail_success=476554,get_started=0,get_sum=103413728,max=524288000,put=476587,put_sum=103413728,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=throttle-bluestore_throttle_deferred_bytes,host=stefanosd1,id=0,type=osd get=11,get_or_fail_fail=0,get_or_fail_success=11,get_started=0,get_sum=7723117,max=201326592,put=0,put_sum=0,take=0,take_sum=0,val=7723117,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=throttle-msgr_dispatch_throttler-cluster,host=stefanosd1,id=1,type=osd get=860895,get_or_fail_fail=0,get_or_fail_success=860895,get_started=0,get_sum=596482256,max=104857600,put=860895,put_sum=596482256,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=throttle-objecter_ops,host=stefanosd1,id=1,type=osd get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=1024,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=throttle-objecter_bytes,host=stefanosd1,id=1,type=osd get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=104857600,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=finisher-defered_finisher,host=stefanosd1,id=1,type=osd complete_latency.avgcount=0,complete_latency.avgtime=0,complete_latency.sum=0,queue_len=0 1587117698000000000
ceph,collection=osd,host=stefanosd1,id=1,type=osd agent_evict=0,agent_flush=0,agent_skip=0,agent_wake=0,cached_crc=0,cached_crc_adjusted=0,copyfrom=0,heartbeat_to_peers=7,loadavg=11,map_message_epoch_dups=29,map_message_epochs=50,map_messages=39,messages_delayed_for_map=0,missed_crc=0,numpg=188,numpg_primary=71,numpg_removing=0,numpg_replica=117,numpg_stray=0,object_ctx_cache_hit=1349777,object_ctx_cache_total=2934118,op=1319230,op_before_dequeue_op_lat.avgcount=3792053,op_before_dequeue_op_lat.avgtime=0.000405802,op_before_dequeue_op_lat.sum=1538.826381623,op_before_queue_op_lat.avgcount=3778690,op_before_queue_op_lat.avgtime=0.000033273,op_before_queue_op_lat.sum=125.731131596,op_cache_hit=0,op_in_bytes=0,op_latency.avgcount=1319230,op_latency.avgtime=0.002858138,op_latency.sum=3770.541581676,op_out_bytes=1789210,op_prepare_latency.avgcount=1336472,op_prepare_latency.avgtime=0.000279458,op_prepare_latency.sum=373.488913339,op_process_latency.avgcount=1319230,op_process_latency.avgtime=0.002666408,op_process_latency.sum=3517.606407526,op_r=1075394,op_r_latency.avgcount=1075394,op_r_latency.avgtime=0.000303779,op_r_latency.sum=326.682443032,op_r_out_bytes=1789210,op_r_prepare_latency.avgcount=1075394,op_r_prepare_latency.avgtime=0.000171228,op_r_prepare_latency.sum=184.138580631,op_r_process_latency.avgcount=1075394,op_r_process_latency.avgtime=0.00011609,op_r_process_latency.sum=124.842894319,op_rw=243832,op_rw_in_bytes=0,op_rw_latency.avgcount=243832,op_rw_latency.avgtime=0.014123636,op_rw_latency.sum=3443.79445124,op_rw_out_bytes=0,op_rw_prepare_latency.avgcount=261072,op_rw_prepare_latency.avgtime=0.000725265,op_rw_prepare_latency.sum=189.346543463,op_rw_process_latency.avgcount=243832,op_rw_process_latency.avgtime=0.013914089,op_rw_process_latency.sum=3392.700241086,op_w=4,op_w_in_bytes=0,op_w_latency.avgcount=4,op_w_latency.avgtime=0.016171851,op_w_latency.sum=0.064687404,op_w_prepare_latency.avgcount=6,op_w_prepare_latency.avgtime=0.00063154,op_w_prepare_latency.sum=0.003789245,op_w_process_latency.avgcount=4,op_w_process_latency.avgtime=0.01581803,op_w_process_latency.sum=0.063272121,op_wip=0,osd_map_bl_cache_hit=36,osd_map_bl_cache_miss=40,osd_map_cache_hit=5404,osd_map_cache_miss=14,osd_map_cache_miss_low=0,osd_map_cache_miss_low_avg.avgcount=0,osd_map_cache_miss_low_avg.sum=0,osd_pg_biginfo=2333,osd_pg_fastinfo=576157,osd_pg_info=591751,osd_tier_flush_lat.avgcount=0,osd_tier_flush_lat.avgtime=0,osd_tier_flush_lat.sum=0,osd_tier_promote_lat.avgcount=0,osd_tier_promote_lat.avgtime=0,osd_tier_promote_lat.sum=0,osd_tier_r_lat.avgcount=0,osd_tier_r_lat.avgtime=0,osd_tier_r_lat.sum=0,pull=0,push=22,push_out_bytes=0,recovery_bytes=0,recovery_ops=21,stat_bytes=107369988096,stat_bytes_avail=106271997952,stat_bytes_used=1097990144,subop=306946,subop_in_bytes=204236742,subop_latency.avgcount=306946,subop_latency.avgtime=0.006744881,subop_latency.sum=2070.314452989,subop_pull=0,subop_pull_latency.avgcount=0,subop_pull_latency.avgtime=0,subop_pull_latency.sum=0,subop_push=0,subop_push_in_bytes=0,subop_push_latency.avgcount=0,subop_push_latency.avgtime=0,subop_push_latency.sum=0,subop_w=306946,subop_w_in_bytes=204236742,subop_w_latency.avgcount=306946,subop_w_latency.avgtime=0.006744881,subop_w_latency.sum=2070.314452989,tier_clean=0,tier_delay=0,tier_dirty=8,tier_evict=0,tier_flush=0,tier_flush_fail=0,tier_promote=0,tier_proxy_read=0,tier_proxy_write=0,tier_try_flush=0,tier_try_flush_fail=0,tier_whiteout=0 1587117698000000000
ceph,collection=objecter,host=stefanosd1,id=1,type=osd command_active=0,command_resend=0,command_send=0,linger_active=0,linger_ping=0,linger_resend=0,linger_send=0,map_epoch=203,map_full=0,map_inc=19,omap_del=0,omap_rd=0,omap_wr=0,op=0,op_active=0,op_laggy=0,op_pg=0,op_r=0,op_reply=0,op_resend=0,op_rmw=0,op_send=0,op_send_bytes=0,op_w=0,osd_laggy=0,osd_session_close=0,osd_session_open=0,osd_sessions=0,osdop_append=0,osdop_call=0,osdop_clonerange=0,osdop_cmpxattr=0,osdop_create=0,osdop_delete=0,osdop_getxattr=0,osdop_mapext=0,osdop_notify=0,osdop_other=0,osdop_pgls=0,osdop_pgls_filter=0,osdop_read=0,osdop_resetxattrs=0,osdop_rmxattr=0,osdop_setxattr=0,osdop_sparse_read=0,osdop_src_cmpxattr=0,osdop_stat=0,osdop_truncate=0,osdop_watch=0,osdop_write=0,osdop_writefull=0,osdop_writesame=0,osdop_zero=0,poolop_active=0,poolop_resend=0,poolop_send=0,poolstat_active=0,poolstat_resend=0,poolstat_send=0,statfs_active=0,statfs_resend=0,statfs_send=0 1587117698000000000
ceph,collection=AsyncMessenger::Worker-0,host=stefanosd1,id=1,type=osd msgr_active_connections=1356,msgr_created_connections=12290,msgr_recv_bytes=8577187219,msgr_recv_messages=6387040,msgr_running_fast_dispatch_time=475.903632306,msgr_running_recv_time=425.937196699,msgr_running_send_time=783.676217521,msgr_running_total_time=1989.242459076,msgr_send_bytes=12583034449,msgr_send_messages=6074344 1587117698000000000
ceph,collection=bluestore,host=stefanosd1,id=1,type=osd bluestore_allocated=24182784,bluestore_blob_split=0,bluestore_blobs=88,bluestore_buffer_bytes=614400,bluestore_buffer_hit_bytes=142047,bluestore_buffer_miss_bytes=541480,bluestore_buffers=41,bluestore_compressed=0,bluestore_compressed_allocated=0,bluestore_compressed_original=0,bluestore_extent_compress=0,bluestore_extents=88,bluestore_fragmentation_micros=1,bluestore_gc_merged=0,bluestore_onode_hits=1403948,bluestore_onode_misses=1584732,bluestore_onode_reshard=0,bluestore_onode_shard_hits=0,bluestore_onode_shard_misses=0,bluestore_onodes=459,bluestore_read_eio=0,bluestore_reads_with_retries=0,bluestore_stored=1985647,bluestore_txc=593150,bluestore_write_big=0,bluestore_write_big_blobs=0,bluestore_write_big_bytes=0,bluestore_write_small=58,bluestore_write_small_bytes=343091,bluestore_write_small_deferred=20,bluestore_write_small_new=38,bluestore_write_small_pre_read=20,bluestore_write_small_unused=0,commit_lat.avgcount=593150,commit_lat.avgtime=0.006514834,commit_lat.sum=3864.274280733,compress_lat.avgcount=0,compress_lat.avgtime=0,compress_lat.sum=0,compress_rejected_count=0,compress_success_count=0,csum_lat.avgcount=60,csum_lat.avgtime=0.000028258,csum_lat.sum=0.001695512,decompress_lat.avgcount=0,decompress_lat.avgtime=0,decompress_lat.sum=0,deferred_write_bytes=0,deferred_write_ops=0,kv_commit_lat.avgcount=578129,kv_commit_lat.avgtime=0.00570707,kv_commit_lat.sum=3299.423186928,kv_final_lat.avgcount=578124,kv_final_lat.avgtime=0.000042752,kv_final_lat.sum=24.716171934,kv_flush_lat.avgcount=578129,kv_flush_lat.avgtime=0.000000209,kv_flush_lat.sum=0.121169044,kv_sync_lat.avgcount=578129,kv_sync_lat.avgtime=0.00570728,kv_sync_lat.sum=3299.544355972,omap_lower_bound_lat.avgcount=22,omap_lower_bound_lat.avgtime=0.000005979,omap_lower_bound_lat.sum=0.000131539,omap_next_lat.avgcount=13248,omap_next_lat.avgtime=0.000004836,omap_next_lat.sum=0.064077797,omap_seek_to_first_lat.avgcount=525,omap_seek_to_first_lat.avgtime=0.000004906,omap_seek_to_first_lat.sum=0.002575786,omap_upper_bound_lat.avgcount=0,omap_upper_bound_lat.avgtime=0,omap_upper_bound_lat.sum=0,read_lat.avgcount=406,read_lat.avgtime=0.000383254,read_lat.sum=0.155601529,read_onode_meta_lat.avgcount=483,read_onode_meta_lat.avgtime=0.000008805,read_onode_meta_lat.sum=0.004252832,read_wait_aio_lat.avgcount=77,read_wait_aio_lat.avgtime=0.001907361,read_wait_aio_lat.sum=0.146866799,state_aio_wait_lat.avgcount=593150,state_aio_wait_lat.avgtime=0.000000388,state_aio_wait_lat.sum=0.230498048,state_deferred_aio_wait_lat.avgcount=0,state_deferred_aio_wait_lat.avgtime=0,state_deferred_aio_wait_lat.sum=0,state_deferred_cleanup_lat.avgcount=0,state_deferred_cleanup_lat.avgtime=0,state_deferred_cleanup_lat.sum=0,state_deferred_queued_lat.avgcount=0,state_deferred_queued_lat.avgtime=0,state_deferred_queued_lat.sum=0,state_done_lat.avgcount=593140,state_done_lat.avgtime=0.000003048,state_done_lat.sum=1.80789161,state_finishing_lat.avgcount=593140,state_finishing_lat.avgtime=0.000000325,state_finishing_lat.sum=0.192952339,state_io_done_lat.avgcount=593150,state_io_done_lat.avgtime=0.000001202,state_io_done_lat.sum=0.713333116,state_kv_commiting_lat.avgcount=593150,state_kv_commiting_lat.avgtime=0.005788541,state_kv_commiting_lat.sum=3433.473378536,state_kv_done_lat.avgcount=593150,state_kv_done_lat.avgtime=0.000001472,state_kv_done_lat.sum=0.873559611,state_kv_queued_lat.avgcount=593150,state_kv_queued_lat.avgtime=0.000634215,state_kv_queued_lat.sum=376.18491577,state_prepare_lat.avgcount=593150,state_prepare_lat.avgtime=0.000089694,state_prepare_lat.sum=53.202464675,submit_lat.avgcount=593150,submit_lat.avgtime=0.000127856,submit_lat.sum=75.83816759,throttle_lat.avgcount=593150,throttle_lat.avgtime=0.000001726,throttle_lat.sum=1.023832181,write_pad_bytes=144333,write_penalty_read_ops=0 1587117698000000000
ceph,collection=throttle-osd_client_bytes,host=stefanosd1,id=1,type=osd get=2920772,get_or_fail_fail=0,get_or_fail_success=2920772,get_started=0,get_sum=739935873,max=524288000,put=4888498,put_sum=739935873,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=throttle-msgr_dispatch_throttler-hb_front_client,host=stefanosd1,id=1,type=osd get=2605442,get_or_fail_fail=0,get_or_fail_success=2605442,get_started=0,get_sum=5221305768,max=104857600,put=2605442,put_sum=5221305768,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=AsyncMessenger::Worker-2,host=stefanosd1,id=1,type=osd msgr_active_connections=1375,msgr_created_connections=12689,msgr_recv_bytes=6393440855,msgr_recv_messages=3260458,msgr_running_fast_dispatch_time=120.622437418,msgr_running_recv_time=225.24709441,msgr_running_send_time=499.150587343,msgr_running_total_time=1043.340296846,msgr_send_bytes=11134862571,msgr_send_messages=3450760 1587117698000000000
ceph,collection=bluefs,host=stefanosd1,id=1,type=osd bytes_written_slow=0,bytes_written_sst=19824993,bytes_written_wal=1788507023,db_total_bytes=4294967296,db_used_bytes=522190848,files_written_sst=4,files_written_wal=2,gift_bytes=0,log_bytes=1056768,log_compactions=2,logged_bytes=1933271040,max_bytes_db=1483735040,max_bytes_slow=0,max_bytes_wal=0,num_files=12,reclaim_bytes=0,slow_total_bytes=0,slow_used_bytes=0,wal_total_bytes=0,wal_used_bytes=0 1587117698000000000
ceph,collection=throttle-msgr_dispatch_throttler-hb_back_client,host=stefanosd1,id=1,type=osd get=2605442,get_or_fail_fail=0,get_or_fail_success=2605442,get_started=0,get_sum=5221305768,max=104857600,put=2605442,put_sum=5221305768,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=throttle-bluestore_throttle_deferred_bytes,host=stefanosd1,id=1,type=osd get=10,get_or_fail_fail=0,get_or_fail_success=10,get_started=0,get_sum=7052009,max=201326592,put=0,put_sum=0,take=0,take_sum=0,val=7052009,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=rocksdb,host=stefanosd1,id=1,type=osd compact=0,compact_queue_len=0,compact_queue_merge=0,compact_range=0,get=1586061,get_latency.avgcount=1586061,get_latency.avgtime=0.000083009,get_latency.sum=131.658296684,rocksdb_write_delay_time.avgcount=0,rocksdb_write_delay_time.avgtime=0,rocksdb_write_delay_time.sum=0,rocksdb_write_memtable_time.avgcount=0,rocksdb_write_memtable_time.avgtime=0,rocksdb_write_memtable_time.sum=0,rocksdb_write_pre_and_post_time.avgcount=0,rocksdb_write_pre_and_post_time.avgtime=0,rocksdb_write_pre_and_post_time.sum=0,rocksdb_write_wal_time.avgcount=0,rocksdb_write_wal_time.avgtime=0,rocksdb_write_wal_time.sum=0,submit_latency.avgcount=593150,submit_latency.avgtime=0.000172072,submit_latency.sum=102.064900673,submit_sync_latency.avgcount=578129,submit_sync_latency.avgtime=0.005447017,submit_sync_latency.sum=3149.078822012,submit_transaction=593150,submit_transaction_sync=578129 1587117698000000000
ceph,collection=throttle-msgr_dispatch_throttler-hb_back_server,host=stefanosd1,id=1,type=osd get=2607669,get_or_fail_fail=0,get_or_fail_success=2607669,get_started=0,get_sum=5225768676,max=104857600,put=2607669,put_sum=5225768676,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=recoverystate_perf,host=stefanosd1,id=1,type=osd activating_latency.avgcount=104,activating_latency.avgtime=0.071646485,activating_latency.sum=7.451234493,active_latency.avgcount=33,active_latency.avgtime=1734.369034268,active_latency.sum=57234.178130859,backfilling_latency.avgcount=1,backfilling_latency.avgtime=2.598401698,backfilling_latency.sum=2.598401698,clean_latency.avgcount=33,clean_latency.avgtime=1734.213467342,clean_latency.sum=57229.044422292,down_latency.avgcount=0,down_latency.avgtime=0,down_latency.sum=0,getinfo_latency.avgcount=167,getinfo_latency.avgtime=0.373444627,getinfo_latency.sum=62.365252849,getlog_latency.avgcount=105,getlog_latency.avgtime=0.003575062,getlog_latency.sum=0.375381569,getmissing_latency.avgcount=104,getmissing_latency.avgtime=0.000157091,getmissing_latency.sum=0.016337565,incomplete_latency.avgcount=0,incomplete_latency.avgtime=0,incomplete_latency.sum=0,initial_latency.avgcount=188,initial_latency.avgtime=0.001833512,initial_latency.sum=0.344700343,notbackfilling_latency.avgcount=0,notbackfilling_latency.avgtime=0,notbackfilling_latency.sum=0,notrecovering_latency.avgcount=0,notrecovering_latency.avgtime=0,notrecovering_latency.sum=0,peering_latency.avgcount=167,peering_latency.avgtime=1.501818082,peering_latency.sum=250.803619796,primary_latency.avgcount=97,primary_latency.avgtime=591.344286378,primary_latency.sum=57360.395778762,recovered_latency.avgcount=104,recovered_latency.avgtime=0.000291138,recovered_latency.sum=0.030278433,recovering_latency.avgcount=2,recovering_latency.avgtime=0.142378096,recovering_latency.sum=0.284756192,replicaactive_latency.avgcount=32,replicaactive_latency.avgtime=1788.474901442,replicaactive_latency.sum=57231.196846165,repnotrecovering_latency.avgcount=34,repnotrecovering_latency.avgtime=1683.273587087,repnotrecovering_latency.sum=57231.301960987,reprecovering_latency.avgcount=2,reprecovering_latency.avgtime=0.418094818,reprecovering_latency.sum=0.836189637,repwaitbackfillreserved_latency.avgcount=0,repwaitbackfillreserved_latency.avgtime=0,repwaitbackfillreserved_latency.sum=0,repwaitrecoveryreserved_latency.avgcount=2,repwaitrecoveryreserved_latency.avgtime=0.000588413,repwaitrecoveryreserved_latency.sum=0.001176827,reset_latency.avgcount=433,reset_latency.avgtime=0.15669689,reset_latency.sum=67.849753631,start_latency.avgcount=433,start_latency.avgtime=0.000412707,start_latency.sum=0.178702508,started_latency.avgcount=245,started_latency.avgtime=468.419544137,started_latency.sum=114762.788313581,stray_latency.avgcount=266,stray_latency.avgtime=1.489291271,stray_latency.sum=396.151478238,waitactingchange_latency.avgcount=1,waitactingchange_latency.avgtime=0.982689906,waitactingchange_latency.sum=0.982689906,waitlocalbackfillreserved_latency.avgcount=1,waitlocalbackfillreserved_latency.avgtime=0.000542092,waitlocalbackfillreserved_latency.sum=0.000542092,waitlocalrecoveryreserved_latency.avgcount=2,waitlocalrecoveryreserved_latency.avgtime=0.00391669,waitlocalrecoveryreserved_latency.sum=0.007833381,waitremotebackfillreserved_latency.avgcount=1,waitremotebackfillreserved_latency.avgtime=0.003110409,waitremotebackfillreserved_latency.sum=0.003110409,waitremoterecoveryreserved_latency.avgcount=2,waitremoterecoveryreserved_latency.avgtime=0.012229338,waitremoterecoveryreserved_latency.sum=0.024458677,waitupthru_latency.avgcount=104,waitupthru_latency.avgtime=1.807608905,waitupthru_latency.sum=187.991326197 1587117698000000000
ceph,collection=AsyncMessenger::Worker-1,host=stefanosd1,id=1,type=osd msgr_active_connections=1289,msgr_created_connections=9469,msgr_recv_bytes=8348149800,msgr_recv_messages=5048791,msgr_running_fast_dispatch_time=313.754567889,msgr_running_recv_time=372.054833029,msgr_running_send_time=694.900405016,msgr_running_total_time=1656.294769387,msgr_send_bytes=11550148208,msgr_send_messages=5175962 1587117698000000000
ceph,collection=throttle-bluestore_throttle_bytes,host=stefanosd1,id=1,type=osd get=593150,get_or_fail_fail=0,get_or_fail_success=0,get_started=593150,get_sum=398147414260,max=67108864,put=578129,put_sum=398147414260,take=0,take_sum=0,val=0,wait.avgcount=29,wait.avgtime=0.000972655,wait.sum=0.028207005 1587117698000000000
ceph,collection=throttle-msgr_dispatch_throttler-ms_objecter,host=stefanosd1,id=1,type=osd get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=104857600,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=cct,host=stefanosd1,id=1,type=osd total_workers=6,unhealthy_workers=0 1587117698000000000
ceph,collection=mempool,host=stefanosd1,id=1,type=osd bloom_filter_bytes=0,bloom_filter_items=0,bluefs_bytes=13064,bluefs_items=593,bluestore_alloc_bytes=230288,bluestore_alloc_items=28786,bluestore_cache_data_bytes=614400,bluestore_cache_data_items=41,bluestore_cache_onode_bytes=301104,bluestore_cache_onode_items=459,bluestore_cache_other_bytes=230945,bluestore_cache_other_items=26119,bluestore_fsck_bytes=0,bluestore_fsck_items=0,bluestore_txc_bytes=7520,bluestore_txc_items=10,bluestore_writing_bytes=0,bluestore_writing_deferred_bytes=657768,bluestore_writing_deferred_items=172,bluestore_writing_items=0,buffer_anon_bytes=2328515,buffer_anon_items=271,buffer_meta_bytes=5808,buffer_meta_items=66,mds_co_bytes=0,mds_co_items=0,osd_bytes=2406400,osd_items=188,osd_mapbl_bytes=139623,osd_mapbl_items=9,osd_pglog_bytes=6768784,osd_pglog_items=18179,osdmap_bytes=710892,osdmap_items=4426,osdmap_mapping_bytes=0,osdmap_mapping_items=0,pgmap_bytes=0,pgmap_items=0,unittest_1_bytes=0,unittest_1_items=0,unittest_2_bytes=0,unittest_2_items=0 1587117698000000000
ceph,collection=throttle-msgr_dispatch_throttler-client,host=stefanosd1,id=1,type=osd get=2932513,get_or_fail_fail=0,get_or_fail_success=2932513,get_started=0,get_sum=740620215,max=104857600,put=2932513,put_sum=740620215,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=throttle-msgr_dispatch_throttler-hb_front_server,host=stefanosd1,id=1,type=osd get=2607669,get_or_fail_fail=0,get_or_fail_success=2607669,get_started=0,get_sum=5225768676,max=104857600,put=2607669,put_sum=5225768676,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=finisher-commit_finisher,host=stefanosd1,id=1,type=osd complete_latency.avgcount=10,complete_latency.avgtime=0.002884646,complete_latency.sum=0.028846469,queue_len=0 1587117698000000000
ceph,collection=finisher-objecter-finisher-0,host=stefanosd1,id=1,type=osd complete_latency.avgcount=0,complete_latency.avgtime=0,complete_latency.sum=0,queue_len=0 1587117698000000000
ceph,collection=throttle-objecter_bytes,host=stefanosd1,id=2,type=osd get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=104857600,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=finisher-commit_finisher,host=stefanosd1,id=2,type=osd complete_latency.avgcount=11,complete_latency.avgtime=0.002714416,complete_latency.sum=0.029858583,queue_len=0 1587117698000000000
ceph,collection=finisher-defered_finisher,host=stefanosd1,id=2,type=osd complete_latency.avgcount=0,complete_latency.avgtime=0,complete_latency.sum=0,queue_len=0 1587117698000000000
ceph,collection=objecter,host=stefanosd1,id=2,type=osd command_active=0,command_resend=0,command_send=0,linger_active=0,linger_ping=0,linger_resend=0,linger_send=0,map_epoch=203,map_full=0,map_inc=19,omap_del=0,omap_rd=0,omap_wr=0,op=0,op_active=0,op_laggy=0,op_pg=0,op_r=0,op_reply=0,op_resend=0,op_rmw=0,op_send=0,op_send_bytes=0,op_w=0,osd_laggy=0,osd_session_close=0,osd_session_open=0,osd_sessions=0,osdop_append=0,osdop_call=0,osdop_clonerange=0,osdop_cmpxattr=0,osdop_create=0,osdop_delete=0,osdop_getxattr=0,osdop_mapext=0,osdop_notify=0,osdop_other=0,osdop_pgls=0,osdop_pgls_filter=0,osdop_read=0,osdop_resetxattrs=0,osdop_rmxattr=0,osdop_setxattr=0,osdop_sparse_read=0,osdop_src_cmpxattr=0,osdop_stat=0,osdop_truncate=0,osdop_watch=0,osdop_write=0,osdop_writefull=0,osdop_writesame=0,osdop_zero=0,poolop_active=0,poolop_resend=0,poolop_send=0,poolstat_active=0,poolstat_resend=0,poolstat_send=0,statfs_active=0,statfs_resend=0,statfs_send=0 1587117698000000000
ceph,collection=throttle-msgr_dispatch_throttler-hb_back_client,host=stefanosd1,id=2,type=osd get=2607136,get_or_fail_fail=0,get_or_fail_success=2607136,get_started=0,get_sum=5224700544,max=104857600,put=2607136,put_sum=5224700544,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=mempool,host=stefanosd1,id=2,type=osd bloom_filter_bytes=0,bloom_filter_items=0,bluefs_bytes=11624,bluefs_items=522,bluestore_alloc_bytes=230288,bluestore_alloc_items=28786,bluestore_cache_data_bytes=614400,bluestore_cache_data_items=41,bluestore_cache_onode_bytes=228288,bluestore_cache_onode_items=348,bluestore_cache_other_bytes=174158,bluestore_cache_other_items=18527,bluestore_fsck_bytes=0,bluestore_fsck_items=0,bluestore_txc_bytes=8272,bluestore_txc_items=11,bluestore_writing_bytes=0,bluestore_writing_deferred_bytes=670130,bluestore_writing_deferred_items=176,bluestore_writing_items=0,buffer_anon_bytes=2311664,buffer_anon_items=244,buffer_meta_bytes=5456,buffer_meta_items=62,mds_co_bytes=0,mds_co_items=0,osd_bytes=1920000,osd_items=150,osd_mapbl_bytes=155152,osd_mapbl_items=10,osd_pglog_bytes=3393520,osd_pglog_items=9128,osdmap_bytes=710892,osdmap_items=4426,osdmap_mapping_bytes=0,osdmap_mapping_items=0,pgmap_bytes=0,pgmap_items=0,unittest_1_bytes=0,unittest_1_items=0,unittest_2_bytes=0,unittest_2_items=0 1587117698000000000
ceph,collection=osd,host=stefanosd1,id=2,type=osd agent_evict=0,agent_flush=0,agent_skip=0,agent_wake=0,cached_crc=0,cached_crc_adjusted=0,copyfrom=0,heartbeat_to_peers=7,loadavg=11,map_message_epoch_dups=37,map_message_epochs=56,map_messages=37,messages_delayed_for_map=0,missed_crc=0,numpg=150,numpg_primary=59,numpg_removing=0,numpg_replica=91,numpg_stray=0,object_ctx_cache_hit=705923,object_ctx_cache_total=705951,op=690584,op_before_dequeue_op_lat.avgcount=1155697,op_before_dequeue_op_lat.avgtime=0.000217926,op_before_dequeue_op_lat.sum=251.856487141,op_before_queue_op_lat.avgcount=1148445,op_before_queue_op_lat.avgtime=0.000039696,op_before_queue_op_lat.sum=45.589516462,op_cache_hit=0,op_in_bytes=0,op_latency.avgcount=690584,op_latency.avgtime=0.002488685,op_latency.sum=1718.646504654,op_out_bytes=1026000,op_prepare_latency.avgcount=698700,op_prepare_latency.avgtime=0.000300375,op_prepare_latency.sum=209.872029659,op_process_latency.avgcount=690584,op_process_latency.avgtime=0.00230742,op_process_latency.sum=1593.46739165,op_r=548020,op_r_latency.avgcount=548020,op_r_latency.avgtime=0.000298287,op_r_latency.sum=163.467760649,op_r_out_bytes=1026000,op_r_prepare_latency.avgcount=548020,op_r_prepare_latency.avgtime=0.000186359,op_r_prepare_latency.sum=102.128629183,op_r_process_latency.avgcount=548020,op_r_process_latency.avgtime=0.00012716,op_r_process_latency.sum=69.686468884,op_rw=142562,op_rw_in_bytes=0,op_rw_latency.avgcount=142562,op_rw_latency.avgtime=0.010908597,op_rw_latency.sum=1555.151525732,op_rw_out_bytes=0,op_rw_prepare_latency.avgcount=150678,op_rw_prepare_latency.avgtime=0.000715043,op_rw_prepare_latency.sum=107.741399304,op_rw_process_latency.avgcount=142562,op_rw_process_latency.avgtime=0.01068836,op_rw_process_latency.sum=1523.754107887,op_w=2,op_w_in_bytes=0,op_w_latency.avgcount=2,op_w_latency.avgtime=0.013609136,op_w_latency.sum=0.027218273,op_w_prepare_latency.avgcount=2,op_w_prepare_latency.avgtime=0.001000586,op_w_prepare_latency.sum=0.002001172,op_w_process_latency.avgcount=2,op_w_process_latency.avgtime=0.013407439,op_w_process_latency.sum=0.026814879,op_wip=0,osd_map_bl_cache_hit=15,osd_map_bl_cache_miss=41,osd_map_cache_hit=4241,osd_map_cache_miss=14,osd_map_cache_miss_low=0,osd_map_cache_miss_low_avg.avgcount=0,osd_map_cache_miss_low_avg.sum=0,osd_pg_biginfo=1824,osd_pg_fastinfo=285998,osd_pg_info=294869,osd_tier_flush_lat.avgcount=0,osd_tier_flush_lat.avgtime=0,osd_tier_flush_lat.sum=0,osd_tier_promote_lat.avgcount=0,osd_tier_promote_lat.avgtime=0,osd_tier_promote_lat.sum=0,osd_tier_r_lat.avgcount=0,osd_tier_r_lat.avgtime=0,osd_tier_r_lat.sum=0,pull=0,push=1,push_out_bytes=0,recovery_bytes=0,recovery_ops=0,stat_bytes=107369988096,stat_bytes_avail=106271932416,stat_bytes_used=1098055680,subop=134165,subop_in_bytes=89501237,subop_latency.avgcount=134165,subop_latency.avgtime=0.007313523,subop_latency.sum=981.218888627,subop_pull=0,subop_pull_latency.avgcount=0,subop_pull_latency.avgtime=0,subop_pull_latency.sum=0,subop_push=0,subop_push_in_bytes=0,subop_push_latency.avgcount=0,subop_push_latency.avgtime=0,subop_push_latency.sum=0,subop_w=134165,subop_w_in_bytes=89501237,subop_w_latency.avgcount=134165,subop_w_latency.avgtime=0.007313523,subop_w_latency.sum=981.218888627,tier_clean=0,tier_delay=0,tier_dirty=4,tier_evict=0,tier_flush=0,tier_flush_fail=0,tier_promote=0,tier_proxy_read=0,tier_proxy_write=0,tier_try_flush=0,tier_try_flush_fail=0,tier_whiteout=0 1587117698000000000
ceph,collection=AsyncMessenger::Worker-1,host=stefanosd1,id=2,type=osd msgr_active_connections=746,msgr_created_connections=15212,msgr_recv_bytes=8633229006,msgr_recv_messages=4284202,msgr_running_fast_dispatch_time=153.820479102,msgr_running_recv_time=282.031655658,msgr_running_send_time=585.444749736,msgr_running_total_time=1231.431789242,msgr_send_bytes=11962769351,msgr_send_messages=4440622 1587117698000000000
ceph,collection=throttle-msgr_dispatch_throttler-ms_objecter,host=stefanosd1,id=2,type=osd get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=104857600,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=throttle-msgr_dispatch_throttler-hb_front_client,host=stefanosd1,id=2,type=osd get=2607136,get_or_fail_fail=0,get_or_fail_success=2607136,get_started=0,get_sum=5224700544,max=104857600,put=2607136,put_sum=5224700544,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=bluefs,host=stefanosd1,id=2,type=osd bytes_written_slow=0,bytes_written_sst=9065815,bytes_written_wal=901884611,db_total_bytes=4294967296,db_used_bytes=546308096,files_written_sst=3,files_written_wal=2,gift_bytes=0,log_bytes=225726464,log_compactions=1,logged_bytes=1195945984,max_bytes_db=1234173952,max_bytes_slow=0,max_bytes_wal=0,num_files=11,reclaim_bytes=0,slow_total_bytes=0,slow_used_bytes=0,wal_total_bytes=0,wal_used_bytes=0 1587117698000000000
ceph,collection=recoverystate_perf,host=stefanosd1,id=2,type=osd activating_latency.avgcount=88,activating_latency.avgtime=0.086149065,activating_latency.sum=7.581117751,active_latency.avgcount=29,active_latency.avgtime=1790.849396082,active_latency.sum=51934.632486379,backfilling_latency.avgcount=0,backfilling_latency.avgtime=0,backfilling_latency.sum=0,clean_latency.avgcount=29,clean_latency.avgtime=1790.754765195,clean_latency.sum=51931.888190683,down_latency.avgcount=0,down_latency.avgtime=0,down_latency.sum=0,getinfo_latency.avgcount=134,getinfo_latency.avgtime=0.427567953,getinfo_latency.sum=57.294105786,getlog_latency.avgcount=88,getlog_latency.avgtime=0.011810192,getlog_latency.sum=1.03929697,getmissing_latency.avgcount=88,getmissing_latency.avgtime=0.000104598,getmissing_latency.sum=0.009204673,incomplete_latency.avgcount=0,incomplete_latency.avgtime=0,incomplete_latency.sum=0,initial_latency.avgcount=150,initial_latency.avgtime=0.001251361,initial_latency.sum=0.187704197,notbackfilling_latency.avgcount=0,notbackfilling_latency.avgtime=0,notbackfilling_latency.sum=0,notrecovering_latency.avgcount=0,notrecovering_latency.avgtime=0,notrecovering_latency.sum=0,peering_latency.avgcount=134,peering_latency.avgtime=0.998405763,peering_latency.sum=133.786372331,primary_latency.avgcount=75,primary_latency.avgtime=693.473306562,primary_latency.sum=52010.497992212,recovered_latency.avgcount=88,recovered_latency.avgtime=0.000609715,recovered_latency.sum=0.053654964,recovering_latency.avgcount=1,recovering_latency.avgtime=0.100713031,recovering_latency.sum=0.100713031,replicaactive_latency.avgcount=21,replicaactive_latency.avgtime=1790.852354921,replicaactive_latency.sum=37607.89945336,repnotrecovering_latency.avgcount=21,repnotrecovering_latency.avgtime=1790.852315529,repnotrecovering_latency.sum=37607.898626121,reprecovering_latency.avgcount=0,reprecovering_latency.avgtime=0,reprecovering_latency.sum=0,repwaitbackfillreserved_latency.avgcount=0,repwaitbackfillreserved_latency.avgtime=0,repwaitbackfillreserved_latency.sum=0,repwaitrecoveryreserved_latency.avgcount=0,repwaitrecoveryreserved_latency.avgtime=0,repwaitrecoveryreserved_latency.sum=0,reset_latency.avgcount=346,reset_latency.avgtime=0.126826803,reset_latency.sum=43.882073917,start_latency.avgcount=346,start_latency.avgtime=0.000233277,start_latency.sum=0.080713962,started_latency.avgcount=196,started_latency.avgtime=457.885378797,started_latency.sum=89745.534244237,stray_latency.avgcount=212,stray_latency.avgtime=1.013774396,stray_latency.sum=214.920172121,waitactingchange_latency.avgcount=0,waitactingchange_latency.avgtime=0,waitactingchange_latency.sum=0,waitlocalbackfillreserved_latency.avgcount=0,waitlocalbackfillreserved_latency.avgtime=0,waitlocalbackfillreserved_latency.sum=0,waitlocalrecoveryreserved_latency.avgcount=1,waitlocalrecoveryreserved_latency.avgtime=0.001572379,waitlocalrecoveryreserved_latency.sum=0.001572379,waitremotebackfillreserved_latency.avgcount=0,waitremotebackfillreserved_latency.avgtime=0,waitremotebackfillreserved_latency.sum=0,waitremoterecoveryreserved_latency.avgcount=1,waitremoterecoveryreserved_latency.avgtime=0.012729633,waitremoterecoveryreserved_latency.sum=0.012729633,waitupthru_latency.avgcount=88,waitupthru_latency.avgtime=0.857137729,waitupthru_latency.sum=75.428120205 1587117698000000000
ceph,collection=throttle-objecter_ops,host=stefanosd1,id=2,type=osd get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=1024,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=bluestore,host=stefanosd1,id=2,type=osd bluestore_allocated=24248320,bluestore_blob_split=0,bluestore_blobs=83,bluestore_buffer_bytes=614400,bluestore_buffer_hit_bytes=161362,bluestore_buffer_miss_bytes=534799,bluestore_buffers=41,bluestore_compressed=0,bluestore_compressed_allocated=0,bluestore_compressed_original=0,bluestore_extent_compress=0,bluestore_extents=83,bluestore_fragmentation_micros=1,bluestore_gc_merged=0,bluestore_onode_hits=723852,bluestore_onode_misses=364,bluestore_onode_reshard=0,bluestore_onode_shard_hits=0,bluestore_onode_shard_misses=0,bluestore_onodes=348,bluestore_read_eio=0,bluestore_reads_with_retries=0,bluestore_stored=1984402,bluestore_txc=295997,bluestore_write_big=0,bluestore_write_big_blobs=0,bluestore_write_big_bytes=0,bluestore_write_small=60,bluestore_write_small_bytes=343843,bluestore_write_small_deferred=22,bluestore_write_small_new=38,bluestore_write_small_pre_read=22,bluestore_write_small_unused=0,commit_lat.avgcount=295997,commit_lat.avgtime=0.006994931,commit_lat.sum=2070.478673619,compress_lat.avgcount=0,compress_lat.avgtime=0,compress_lat.sum=0,compress_rejected_count=0,compress_success_count=0,csum_lat.avgcount=47,csum_lat.avgtime=0.000034434,csum_lat.sum=0.001618423,decompress_lat.avgcount=0,decompress_lat.avgtime=0,decompress_lat.sum=0,deferred_write_bytes=0,deferred_write_ops=0,kv_commit_lat.avgcount=291889,kv_commit_lat.avgtime=0.006347015,kv_commit_lat.sum=1852.624108527,kv_final_lat.avgcount=291885,kv_final_lat.avgtime=0.00004358,kv_final_lat.sum=12.720529751,kv_flush_lat.avgcount=291889,kv_flush_lat.avgtime=0.000000211,kv_flush_lat.sum=0.061636079,kv_sync_lat.avgcount=291889,kv_sync_lat.avgtime=0.006347227,kv_sync_lat.sum=1852.685744606,omap_lower_bound_lat.avgcount=1,omap_lower_bound_lat.avgtime=0.000004482,omap_lower_bound_lat.sum=0.000004482,omap_next_lat.avgcount=6933,omap_next_lat.avgtime=0.000003956,omap_next_lat.sum=0.027427456,omap_seek_to_first_lat.avgcount=309,omap_seek_to_first_lat.avgtime=0.000005879,omap_seek_to_first_lat.sum=0.001816658,omap_upper_bound_lat.avgcount=0,omap_upper_bound_lat.avgtime=0,omap_upper_bound_lat.sum=0,read_lat.avgcount=229,read_lat.avgtime=0.000394981,read_lat.sum=0.090450704,read_onode_meta_lat.avgcount=295,read_onode_meta_lat.avgtime=0.000016832,read_onode_meta_lat.sum=0.004965516,read_wait_aio_lat.avgcount=66,read_wait_aio_lat.avgtime=0.001237841,read_wait_aio_lat.sum=0.081697561,state_aio_wait_lat.avgcount=295997,state_aio_wait_lat.avgtime=0.000000357,state_aio_wait_lat.sum=0.105827433,state_deferred_aio_wait_lat.avgcount=0,state_deferred_aio_wait_lat.avgtime=0,state_deferred_aio_wait_lat.sum=0,state_deferred_cleanup_lat.avgcount=0,state_deferred_cleanup_lat.avgtime=0,state_deferred_cleanup_lat.sum=0,state_deferred_queued_lat.avgcount=0,state_deferred_queued_lat.avgtime=0,state_deferred_queued_lat.sum=0,state_done_lat.avgcount=295986,state_done_lat.avgtime=0.000003017,state_done_lat.sum=0.893199127,state_finishing_lat.avgcount=295986,state_finishing_lat.avgtime=0.000000306,state_finishing_lat.sum=0.090792683,state_io_done_lat.avgcount=295997,state_io_done_lat.avgtime=0.000001066,state_io_done_lat.sum=0.315577655,state_kv_commiting_lat.avgcount=295997,state_kv_commiting_lat.avgtime=0.006423586,state_kv_commiting_lat.sum=1901.362268572,state_kv_done_lat.avgcount=295997,state_kv_done_lat.avgtime=0.00000155,state_kv_done_lat.sum=0.458963064,state_kv_queued_lat.avgcount=295997,state_kv_queued_lat.avgtime=0.000477234,state_kv_queued_lat.sum=141.260101773,state_prepare_lat.avgcount=295997,state_prepare_lat.avgtime=0.000091806,state_prepare_lat.sum=27.174436583,submit_lat.avgcount=295997,submit_lat.avgtime=0.000135729,submit_lat.sum=40.17557682,throttle_lat.avgcount=295997,throttle_lat.avgtime=0.000002734,throttle_lat.sum=0.809479837,write_pad_bytes=151773,write_penalty_read_ops=0 1587117698000000000
ceph,collection=throttle-bluestore_throttle_bytes,host=stefanosd1,id=2,type=osd get=295997,get_or_fail_fail=0,get_or_fail_success=0,get_started=295997,get_sum=198686579299,max=67108864,put=291889,put_sum=198686579299,take=0,take_sum=0,val=0,wait.avgcount=83,wait.avgtime=0.003670612,wait.sum=0.304660858 1587117698000000000
ceph,collection=throttle-msgr_dispatch_throttler-cluster,host=stefanosd1,id=2,type=osd get=452060,get_or_fail_fail=0,get_or_fail_success=452060,get_started=0,get_sum=269934345,max=104857600,put=452060,put_sum=269934345,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=throttle-bluestore_throttle_deferred_bytes,host=stefanosd1,id=2,type=osd get=11,get_or_fail_fail=0,get_or_fail_success=11,get_started=0,get_sum=7723117,max=201326592,put=0,put_sum=0,take=0,take_sum=0,val=7723117,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=throttle-msgr_dispatch_throttler-hb_front_server,host=stefanosd1,id=2,type=osd get=2607433,get_or_fail_fail=0,get_or_fail_success=2607433,get_started=0,get_sum=5225295732,max=104857600,put=2607433,put_sum=5225295732,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=finisher-objecter-finisher-0,host=stefanosd1,id=2,type=osd complete_latency.avgcount=0,complete_latency.avgtime=0,complete_latency.sum=0,queue_len=0 1587117698000000000
ceph,collection=cct,host=stefanosd1,id=2,type=osd total_workers=6,unhealthy_workers=0 1587117698000000000
ceph,collection=AsyncMessenger::Worker-2,host=stefanosd1,id=2,type=osd msgr_active_connections=670,msgr_created_connections=13455,msgr_recv_bytes=6334605563,msgr_recv_messages=3287843,msgr_running_fast_dispatch_time=137.016615819,msgr_running_recv_time=240.687997039,msgr_running_send_time=471.710658466,msgr_running_total_time=1034.029109337,msgr_send_bytes=9753423475,msgr_send_messages=3439611 1587117698000000000
ceph,collection=throttle-msgr_dispatch_throttler-client,host=stefanosd1,id=2,type=osd get=710355,get_or_fail_fail=0,get_or_fail_success=710355,get_started=0,get_sum=166306283,max=104857600,put=710355,put_sum=166306283,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=throttle-msgr_dispatch_throttler-hb_back_server,host=stefanosd1,id=2,type=osd get=2607433,get_or_fail_fail=0,get_or_fail_success=2607433,get_started=0,get_sum=5225295732,max=104857600,put=2607433,put_sum=5225295732,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=AsyncMessenger::Worker-0,host=stefanosd1,id=2,type=osd msgr_active_connections=705,msgr_created_connections=17953,msgr_recv_bytes=7261438733,msgr_recv_messages=4496034,msgr_running_fast_dispatch_time=254.716476808,msgr_running_recv_time=272.196741555,msgr_running_send_time=571.102924903,msgr_running_total_time=1338.461077493,msgr_send_bytes=10772250508,msgr_send_messages=4192781 1587117698000000000
ceph,collection=rocksdb,host=stefanosd1,id=2,type=osd compact=0,compact_queue_len=0,compact_queue_merge=0,compact_range=0,get=1424,get_latency.avgcount=1424,get_latency.avgtime=0.000030752,get_latency.sum=0.043792142,rocksdb_write_delay_time.avgcount=0,rocksdb_write_delay_time.avgtime=0,rocksdb_write_delay_time.sum=0,rocksdb_write_memtable_time.avgcount=0,rocksdb_write_memtable_time.avgtime=0,rocksdb_write_memtable_time.sum=0,rocksdb_write_pre_and_post_time.avgcount=0,rocksdb_write_pre_and_post_time.avgtime=0,rocksdb_write_pre_and_post_time.sum=0,rocksdb_write_wal_time.avgcount=0,rocksdb_write_wal_time.avgtime=0,rocksdb_write_wal_time.sum=0,submit_latency.avgcount=295997,submit_latency.avgtime=0.000173137,submit_latency.sum=51.248072285,submit_sync_latency.avgcount=291889,submit_sync_latency.avgtime=0.006094397,submit_sync_latency.sum=1778.887521449,submit_transaction=295997,submit_transaction_sync=291889 1587117698000000000
ceph,collection=throttle-osd_client_bytes,host=stefanosd1,id=2,type=osd get=698701,get_or_fail_fail=0,get_or_fail_success=698701,get_started=0,get_sum=165630172,max=524288000,put=920880,put_sum=165630172,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
ceph,collection=mds_sessions,host=stefanmds1,id=stefanmds1,type=mds average_load=0,avg_session_uptime=0,session_add=0,session_count=0,session_remove=0,sessions_open=0,sessions_stale=0,total_load=0 1587117476000000000
ceph,collection=mempool,host=stefanmds1,id=stefanmds1,type=mds bloom_filter_bytes=0,bloom_filter_items=0,bluefs_bytes=0,bluefs_items=0,bluestore_alloc_bytes=0,bluestore_alloc_items=0,bluestore_cache_data_bytes=0,bluestore_cache_data_items=0,bluestore_cache_onode_bytes=0,bluestore_cache_onode_items=0,bluestore_cache_other_bytes=0,bluestore_cache_other_items=0,bluestore_fsck_bytes=0,bluestore_fsck_items=0,bluestore_txc_bytes=0,bluestore_txc_items=0,bluestore_writing_bytes=0,bluestore_writing_deferred_bytes=0,bluestore_writing_deferred_items=0,bluestore_writing_items=0,buffer_anon_bytes=132069,buffer_anon_items=82,buffer_meta_bytes=0,buffer_meta_items=0,mds_co_bytes=44208,mds_co_items=154,osd_bytes=0,osd_items=0,osd_mapbl_bytes=0,osd_mapbl_items=0,osd_pglog_bytes=0,osd_pglog_items=0,osdmap_bytes=16952,osdmap_items=139,osdmap_mapping_bytes=0,osdmap_mapping_items=0,pgmap_bytes=0,pgmap_items=0,unittest_1_bytes=0,unittest_1_items=0,unittest_2_bytes=0,unittest_2_items=0 1587117476000000000
ceph,collection=objecter,host=stefanmds1,id=stefanmds1,type=mds command_active=0,command_resend=0,command_send=0,linger_active=0,linger_ping=0,linger_resend=0,linger_send=0,map_epoch=203,map_full=0,map_inc=1,omap_del=0,omap_rd=28,omap_wr=1,op=33,op_active=0,op_laggy=0,op_pg=0,op_r=26,op_reply=33,op_resend=2,op_rmw=0,op_send=35,op_send_bytes=364,op_w=7,osd_laggy=0,osd_session_close=91462,osd_session_open=91468,osd_sessions=6,osdop_append=0,osdop_call=0,osdop_clonerange=0,osdop_cmpxattr=0,osdop_create=0,osdop_delete=5,osdop_getxattr=14,osdop_mapext=0,osdop_notify=0,osdop_other=0,osdop_pgls=0,osdop_pgls_filter=0,osdop_read=8,osdop_resetxattrs=0,osdop_rmxattr=0,osdop_setxattr=0,osdop_sparse_read=0,osdop_src_cmpxattr=0,osdop_stat=2,osdop_truncate=0,osdop_watch=0,osdop_write=0,osdop_writefull=0,osdop_writesame=0,osdop_zero=1,poolop_active=0,poolop_resend=0,poolop_send=0,poolstat_active=0,poolstat_resend=0,poolstat_send=0,statfs_active=0,statfs_resend=0,statfs_send=0 1587117476000000000
ceph,collection=cct,host=stefanmds1,id=stefanmds1,type=mds total_workers=1,unhealthy_workers=0 1587117476000000000
ceph,collection=mds_server,host=stefanmds1,id=stefanmds1,type=mds cap_revoke_eviction=0,dispatch_client_request=0,dispatch_server_request=0,handle_client_request=0,handle_client_session=0,handle_slave_request=0,req_create_latency.avgcount=0,req_create_latency.avgtime=0,req_create_latency.sum=0,req_getattr_latency.avgcount=0,req_getattr_latency.avgtime=0,req_getattr_latency.sum=0,req_getfilelock_latency.avgcount=0,req_getfilelock_latency.avgtime=0,req_getfilelock_latency.sum=0,req_link_latency.avgcount=0,req_link_latency.avgtime=0,req_link_latency.sum=0,req_lookup_latency.avgcount=0,req_lookup_latency.avgtime=0,req_lookup_latency.sum=0,req_lookuphash_latency.avgcount=0,req_lookuphash_latency.avgtime=0,req_lookuphash_latency.sum=0,req_lookupino_latency.avgcount=0,req_lookupino_latency.avgtime=0,req_lookupino_latency.sum=0,req_lookupname_latency.avgcount=0,req_lookupname_latency.avgtime=0,req_lookupname_latency.sum=0,req_lookupparent_latency.avgcount=0,req_lookupparent_latency.avgtime=0,req_lookupparent_latency.sum=0,req_lookupsnap_latency.avgcount=0,req_lookupsnap_latency.avgtime=0,req_lookupsnap_latency.sum=0,req_lssnap_latency.avgcount=0,req_lssnap_latency.avgtime=0,req_lssnap_latency.sum=0,req_mkdir_latency.avgcount=0,req_mkdir_latency.avgtime=0,req_mkdir_latency.sum=0,req_mknod_latency.avgcount=0,req_mknod_latency.avgtime=0,req_mknod_latency.sum=0,req_mksnap_latency.avgcount=0,req_mksnap_latency.avgtime=0,req_mksnap_latency.sum=0,req_open_latency.avgcount=0,req_open_latency.avgtime=0,req_open_latency.sum=0,req_readdir_latency.avgcount=0,req_readdir_latency.avgtime=0,req_readdir_latency.sum=0,req_rename_latency.avgcount=0,req_rename_latency.avgtime=0,req_rename_latency.sum=0,req_renamesnap_latency.avgcount=0,req_renamesnap_latency.avgtime=0,req_renamesnap_latency.sum=0,req_rmdir_latency.avgcount=0,req_rmdir_latency.avgtime=0,req_rmdir_latency.sum=0,req_rmsnap_latency.avgcount=0,req_rmsnap_latency.avgtime=0,req_rmsnap_latency.sum=0,req_rmxattr_latency.avgcount=0,req_rmxattr_latency.avgtime=0,req_rmxattr_latency.sum=0,req_setattr_latency.avgcount=0,req_setattr_latency.avgtime=0,req_setattr_latency.sum=0,req_setdirlayout_latency.avgcount=0,req_setdirlayout_latency.avgtime=0,req_setdirlayout_latency.sum=0,req_setfilelock_latency.avgcount=0,req_setfilelock_latency.avgtime=0,req_setfilelock_latency.sum=0,req_setlayout_latency.avgcount=0,req_setlayout_latency.avgtime=0,req_setlayout_latency.sum=0,req_setxattr_latency.avgcount=0,req_setxattr_latency.avgtime=0,req_setxattr_latency.sum=0,req_symlink_latency.avgcount=0,req_symlink_latency.avgtime=0,req_symlink_latency.sum=0,req_unlink_latency.avgcount=0,req_unlink_latency.avgtime=0,req_unlink_latency.sum=0 1587117476000000000
ceph,collection=AsyncMessenger::Worker-2,host=stefanmds1,id=stefanmds1,type=mds msgr_active_connections=84,msgr_created_connections=68511,msgr_recv_bytes=238078,msgr_recv_messages=2655,msgr_running_fast_dispatch_time=0.004247777,msgr_running_recv_time=25.369012545,msgr_running_send_time=3.743427461,msgr_running_total_time=130.277111559,msgr_send_bytes=172767043,msgr_send_messages=18172 1587117476000000000
ceph,collection=mds_log,host=stefanmds1,id=stefanmds1,type=mds ev=0,evadd=0,evex=0,evexd=0,evexg=0,evtrm=0,expos=4194304,jlat.avgcount=0,jlat.avgtime=0,jlat.sum=0,rdpos=4194304,replayed=1,seg=1,segadd=0,segex=0,segexd=0,segexg=0,segtrm=0,wrpos=0 1587117476000000000
ceph,collection=AsyncMessenger::Worker-0,host=stefanmds1,id=stefanmds1,type=mds msgr_active_connections=595,msgr_created_connections=943825,msgr_recv_bytes=78618003,msgr_recv_messages=914080,msgr_running_fast_dispatch_time=0.001544386,msgr_running_recv_time=459.627068807,msgr_running_send_time=469.337032316,msgr_running_total_time=2744.084305898,msgr_send_bytes=61684163658,msgr_send_messages=1858008 1587117476000000000
ceph,collection=throttle-msgr_dispatch_throttler-mds,host=stefanmds1,id=stefanmds1,type=mds get=1216458,get_or_fail_fail=0,get_or_fail_success=1216458,get_started=0,get_sum=51976882,max=104857600,put=1216458,put_sum=51976882,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117476000000000
ceph,collection=AsyncMessenger::Worker-1,host=stefanmds1,id=stefanmds1,type=mds msgr_active_connections=226,msgr_created_connections=42679,msgr_recv_bytes=63140151,msgr_recv_messages=299727,msgr_running_fast_dispatch_time=26.316138629,msgr_running_recv_time=36.969916165,msgr_running_send_time=70.457421128,msgr_running_total_time=226.230019936,msgr_send_bytes=193154464,msgr_send_messages=310481 1587117476000000000
ceph,collection=mds,host=stefanmds1,id=stefanmds1,type=mds caps=0,dir_commit=0,dir_fetch=12,dir_merge=0,dir_split=0,exported=0,exported_inodes=0,forward=0,imported=0,imported_inodes=0,inode_max=2147483647,inodes=10,inodes_bottom=3,inodes_expired=0,inodes_pin_tail=0,inodes_pinned=10,inodes_top=7,inodes_with_caps=0,load_cent=0,openino_backtrace_fetch=0,openino_dir_fetch=0,openino_peer_discover=0,q=0,reply=0,reply_latency.avgcount=0,reply_latency.avgtime=0,reply_latency.sum=0,request=0,subtrees=2,traverse=0,traverse_dir_fetch=0,traverse_discover=0,traverse_forward=0,traverse_hit=0,traverse_lock=0,traverse_remote_ino=0 1587117476000000000
ceph,collection=purge_queue,host=stefanmds1,id=stefanmds1,type=mds pq_executed=0,pq_executing=0,pq_executing_ops=0 1587117476000000000
ceph,collection=throttle-write_buf_throttle,host=stefanmds1,id=stefanmds1,type=mds get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=3758096384,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117476000000000
ceph,collection=throttle-write_buf_throttle-0x5624e9377f40,host=stefanmds1,id=stefanmds1,type=mds get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=3758096384,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117476000000000
ceph,collection=mds_cache,host=stefanmds1,id=stefanmds1,type=mds ireq_enqueue_scrub=0,ireq_exportdir=0,ireq_flush=0,ireq_fragmentdir=0,ireq_fragstats=0,ireq_inodestats=0,num_recovering_enqueued=0,num_recovering_prioritized=0,num_recovering_processing=0,num_strays=0,num_strays_delayed=0,num_strays_enqueuing=0,recovery_completed=0,recovery_started=0,strays_created=0,strays_enqueued=0,strays_migrated=0,strays_reintegrated=0 1587117476000000000
ceph,collection=throttle-objecter_bytes,host=stefanmds1,id=stefanmds1,type=mds get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=104857600,put=16,put_sum=1016,take=33,take_sum=1016,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117476000000000
ceph,collection=throttle-objecter_ops,host=stefanmds1,id=stefanmds1,type=mds get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=1024,put=33,put_sum=33,take=33,take_sum=33,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117476000000000
ceph,collection=mds_mem,host=stefanmds1,id=stefanmds1,type=mds cap=0,cap+=0,cap-=0,dir=12,dir+=12,dir-=0,dn=10,dn+=10,dn-=0,heap=322284,ino=13,ino+=13,ino-=0,rss=76032 1587117476000000000
ceph,collection=finisher-PurgeQueue,host=stefanmds1,id=stefanmds1,type=mds complete_latency.avgcount=4,complete_latency.avgtime=0.000176985,complete_latency.sum=0.000707941,queue_len=0 1587117476000000000
ceph,collection=cct,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw total_workers=0,unhealthy_workers=0 1587117156000000000
ceph,collection=throttle-objecter_bytes,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw get=791732,get_or_fail_fail=0,get_or_fail_success=791732,get_started=0,get_sum=0,max=104857600,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117156000000000
ceph,collection=rgw,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw cache_hit=0,cache_miss=791706,failed_req=0,get=0,get_b=0,get_initial_lat.avgcount=0,get_initial_lat.avgtime=0,get_initial_lat.sum=0,keystone_token_cache_hit=0,keystone_token_cache_miss=0,pubsub_event_lost=0,pubsub_event_triggered=0,pubsub_events=0,pubsub_push_failed=0,pubsub_push_ok=0,pubsub_push_pending=0,pubsub_store_fail=0,pubsub_store_ok=0,put=0,put_b=0,put_initial_lat.avgcount=0,put_initial_lat.avgtime=0,put_initial_lat.sum=0,qactive=0,qlen=0,req=791705 1587117156000000000
ceph,collection=throttle-msgr_dispatch_throttler-radosclient,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw get=2697988,get_or_fail_fail=0,get_or_fail_success=2697988,get_started=0,get_sum=444563051,max=104857600,put=2697988,put_sum=444563051,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117156000000000
ceph,collection=finisher-radosclient,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw complete_latency.avgcount=2,complete_latency.avgtime=0.003530161,complete_latency.sum=0.007060323,queue_len=0 1587117156000000000
ceph,collection=throttle-rgw_async_rados_ops,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=64,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117156000000000
ceph,collection=throttle-objecter_ops,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw get=791732,get_or_fail_fail=0,get_or_fail_success=791732,get_started=0,get_sum=791732,max=24576,put=791732,put_sum=791732,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117156000000000
ceph,collection=throttle-objecter_bytes-0x5598969981c0,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw get=1637900,get_or_fail_fail=0,get_or_fail_success=1637900,get_started=0,get_sum=0,max=104857600,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117156000000000
ceph,collection=objecter,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw command_active=0,command_resend=0,command_send=0,linger_active=8,linger_ping=1905736,linger_resend=4,linger_send=13,map_epoch=203,map_full=0,map_inc=17,omap_del=0,omap_rd=0,omap_wr=0,op=2697488,op_active=0,op_laggy=0,op_pg=0,op_r=791730,op_reply=2697476,op_resend=1,op_rmw=0,op_send=2697490,op_send_bytes=362,op_w=1905758,osd_laggy=5,osd_session_close=59558,osd_session_open=59566,osd_sessions=8,osdop_append=0,osdop_call=1,osdop_clonerange=0,osdop_cmpxattr=0,osdop_create=8,osdop_delete=0,osdop_getxattr=0,osdop_mapext=0,osdop_notify=0,osdop_other=791714,osdop_pgls=0,osdop_pgls_filter=0,osdop_read=16,osdop_resetxattrs=0,osdop_rmxattr=0,osdop_setxattr=0,osdop_sparse_read=0,osdop_src_cmpxattr=0,osdop_stat=791706,osdop_truncate=0,osdop_watch=1905750,osdop_write=0,osdop_writefull=0,osdop_writesame=0,osdop_zero=0,poolop_active=0,poolop_resend=0,poolop_send=0,poolstat_active=0,poolstat_resend=0,poolstat_send=0,statfs_active=0,statfs_resend=0,statfs_send=0 1587117156000000000
ceph,collection=AsyncMessenger::Worker-2,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw msgr_active_connections=11,msgr_created_connections=59839,msgr_recv_bytes=342697143,msgr_recv_messages=1441603,msgr_running_fast_dispatch_time=161.807937536,msgr_running_recv_time=118.174064257,msgr_running_send_time=207.679154333,msgr_running_total_time=698.527662129,msgr_send_bytes=530785909,msgr_send_messages=1679950 1587117156000000000
ceph,collection=mempool,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw bloom_filter_bytes=0,bloom_filter_items=0,bluefs_bytes=0,bluefs_items=0,bluestore_alloc_bytes=0,bluestore_alloc_items=0,bluestore_cache_data_bytes=0,bluestore_cache_data_items=0,bluestore_cache_onode_bytes=0,bluestore_cache_onode_items=0,bluestore_cache_other_bytes=0,bluestore_cache_other_items=0,bluestore_fsck_bytes=0,bluestore_fsck_items=0,bluestore_txc_bytes=0,bluestore_txc_items=0,bluestore_writing_bytes=0,bluestore_writing_deferred_bytes=0,bluestore_writing_deferred_items=0,bluestore_writing_items=0,buffer_anon_bytes=225471,buffer_anon_items=163,buffer_meta_bytes=0,buffer_meta_items=0,mds_co_bytes=0,mds_co_items=0,osd_bytes=0,osd_items=0,osd_mapbl_bytes=0,osd_mapbl_items=0,osd_pglog_bytes=0,osd_pglog_items=0,osdmap_bytes=33904,osdmap_items=278,osdmap_mapping_bytes=0,osdmap_mapping_items=0,pgmap_bytes=0,pgmap_items=0,unittest_1_bytes=0,unittest_1_items=0,unittest_2_bytes=0,unittest_2_items=0 1587117156000000000
ceph,collection=throttle-msgr_dispatch_throttler-radosclient-0x559896998120,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw get=1652935,get_or_fail_fail=0,get_or_fail_success=1652935,get_started=0,get_sum=276333029,max=104857600,put=1652935,put_sum=276333029,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117156000000000
ceph,collection=AsyncMessenger::Worker-1,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw msgr_active_connections=17,msgr_created_connections=84859,msgr_recv_bytes=211170759,msgr_recv_messages=922646,msgr_running_fast_dispatch_time=31.487443762,msgr_running_recv_time=83.190789333,msgr_running_send_time=174.670510496,msgr_running_total_time=484.22086275,msgr_send_bytes=1322113179,msgr_send_messages=1636839 1587117156000000000
ceph,collection=finisher-radosclient-0x559896998080,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw complete_latency.avgcount=0,complete_latency.avgtime=0,complete_latency.sum=0,queue_len=0 1587117156000000000
ceph,collection=throttle-objecter_ops-0x559896997b80,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw get=1637900,get_or_fail_fail=0,get_or_fail_success=1637900,get_started=0,get_sum=1637900,max=24576,put=1637900,put_sum=1637900,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117156000000000
ceph,collection=AsyncMessenger::Worker-0,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw msgr_active_connections=18,msgr_created_connections=74757,msgr_recv_bytes=489001094,msgr_recv_messages=1986686,msgr_running_fast_dispatch_time=168.60950961,msgr_running_recv_time=142.903031533,msgr_running_send_time=267.911165712,msgr_running_total_time=824.885614951,msgr_send_bytes=707973504,msgr_send_messages=2463727 1587117156000000000
ceph,collection=objecter-0x559896997720,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw command_active=0,command_resend=0,command_send=0,linger_active=0,linger_ping=0,linger_resend=0,linger_send=0,map_epoch=203,map_full=0,map_inc=8,omap_del=0,omap_rd=0,omap_wr=0,op=1637998,op_active=0,op_laggy=0,op_pg=0,op_r=1062803,op_reply=1637998,op_resend=15,op_rmw=0,op_send=1638013,op_send_bytes=63321099,op_w=575195,osd_laggy=0,osd_session_close=125555,osd_session_open=125563,osd_sessions=8,osdop_append=0,osdop_call=1637886,osdop_clonerange=0,osdop_cmpxattr=0,osdop_create=0,osdop_delete=0,osdop_getxattr=0,osdop_mapext=0,osdop_notify=0,osdop_other=112,osdop_pgls=0,osdop_pgls_filter=0,osdop_read=0,osdop_resetxattrs=0,osdop_rmxattr=0,osdop_setxattr=0,osdop_sparse_read=0,osdop_src_cmpxattr=0,osdop_stat=0,osdop_truncate=0,osdop_watch=0,osdop_write=0,osdop_writefull=0,osdop_writesame=0,osdop_zero=0,poolop_active=0,poolop_resend=0,poolop_send=0,poolstat_active=0,poolstat_resend=0,poolstat_send=0,statfs_active=0,statfs_resend=0,statfs_send=0 1587117156000000000
```
---
description: "Telegraf plugin for collecting metrics from Control Group"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Control Group
    identifier: input-cgroup
tags: [Control Group, "input-plugins", "configuration", "system"]
introduced: "v1.0.0"
os_support: "linux"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/cgroup/README.md, Control Group Plugin Source
---

# Control Group Input Plugin

This plugin gathers statistics per [control group (cgroup)](https://docs.kernel.org/admin-guide/cgroup-v2.html).

> [!NOTE]
> Consider restricting paths to the set of cgroups you are interested in if you
> have a large number of cgroups, to avoid cardinality issues.

The plugin supports the _single value format_ in the form

```text
VAL\n
```

the _new line separated values format_ in the form

```text
VAL0\n
VAL1\n
```

the _space separated values format_ in the form

```text
VAL0 VAL1 ...\n
```

and the _space separated keys and value, separated by new line format_ in the
form

```text
KEY0 ... VAL0\n
KEY1 ... VAL1\n
```
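As a rough illustration (not the plugin's actual Go code, and the field naming here is a simplification, not the plugin's exact scheme), the four formats above can be distinguished and mapped to fields like this:

```python
def parse_cgroup_file(name: str, content: str) -> dict:
    """Map a cgroup stat file's content to field-name/value pairs."""
    lines = [l for l in content.splitlines() if l.strip()]
    fields = {}
    if len(lines) == 1 and len(lines[0].split()) == 1:
        # single value format: "VAL\n"
        fields[name] = int(lines[0])
    elif all(len(l.split()) == 1 for l in lines):
        # new line separated values format: "VAL0\nVAL1\n"
        for i, l in enumerate(lines):
            fields[f"{name}.{i}"] = int(l)
    elif len(lines) == 1:
        # space separated values format: "VAL0 VAL1 ...\n"
        for i, v in enumerate(lines[0].split()):
            fields[f"{name}.{i}"] = int(v)
    else:
        # space separated keys and value format: "KEY0 ... VAL0\n"
        for l in lines:
            *keys, val = l.split()
            fields[f"{name}.{'.'.join(keys)}"] = int(val)
    return fields

print(parse_cgroup_file("memory.stat", "cache 1024\nrss 2048\n"))
```

Real cgroup files may also contain non-numeric values (for example `max`), which this simplified sketch does not handle.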

**Introduced in:** Telegraf v1.0.0

**Tags:** system

**OS support:** linux

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Read specific statistics per cgroup
# This plugin ONLY supports Linux
[[inputs.cgroup]]
  ## Directories in which to look for files, globs are supported.
  ## Consider restricting paths to the set of cgroups you really
  ## want to monitor if you have a large number of cgroups, to avoid
  ## any cardinality issues.
  # paths = [
  #   "/sys/fs/cgroup/memory",
  #   "/sys/fs/cgroup/memory/child1",
  #   "/sys/fs/cgroup/memory/child2/*",
  # ]

  ## cgroup stat fields, as file names, globs are supported.
  ## These file names are appended to each path from above.
  # files = ["memory.*usage*", "memory.limit_in_bytes"]
```

## Metrics

All measurements have the `path` tag.

## Example Output
---
description: "Telegraf plugin for collecting metrics from chrony"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: chrony
    identifier: input-chrony
tags: [chrony, "input-plugins", "configuration", "system"]
introduced: "v0.13.1"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/chrony/README.md, chrony Plugin Source
---

# chrony Input Plugin

This plugin queries metrics from a [chrony NTP server](https://chrony-project.org). For details on
the meaning of the gathered fields, see the [chronyc manual](https://chrony-project.org/doc/4.4/chronyc.html).

**Introduced in:** Telegraf v0.13.1

**Tags:** system

**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Get standard chrony metrics.
[[inputs.chrony]]
  ## Server address of chronyd with address scheme
  ## If empty or not set, the plugin will mimic the behavior of chronyc and
  ## check "unixgram:///run/chrony/chronyd.sock", "udp://127.0.0.1:323"
  ## and "udp://[::1]:323".
  # server = ""

  ## Timeout for establishing the connection
  # timeout = "5s"

  ## Try to resolve received addresses to host-names via DNS lookups
  ## Disabled by default to avoid DNS queries, especially for slow DNS servers.
  # dns_lookup = false

  ## Metrics to query named according to chronyc commands
  ## Available settings are:
  ##   activity    -- number of peers online or offline
  ##   tracking    -- information about the system's clock performance
  ##   serverstats -- chronyd server statistics
  ##   sources     -- extended information about peers
  ##   sourcestats -- statistics on peers
  # metrics = ["tracking"]

  ## Socket group & permissions
  ## If the user requests collecting metrics via unix socket, then it is created
  ## with the following group and permissions.
  # socket_group = "chrony"
  # socket_perms = "0660"
```

## Local socket permissions

To use the Unix socket, Telegraf must be able to talk to it. Ensure that the
telegraf user is a member of the `chrony` group; otherwise Telegraf cannot use
the socket.

The Unix socket is required for the `serverstats` metrics. All other metrics
can be gathered over the UDP connection.
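The requirement follows from the standard Unix permission check, sketched here as a pure function (a simplified illustration; the numeric IDs are made up):

```python
def can_use_socket(mode: int, sock_uid: int, sock_gid: int,
                   uid: int, gids: set) -> bool:
    """Return True if a user with `uid`/`gids` gets read+write on the socket."""
    if uid == 0:                      # root bypasses permission bits
        return True
    if uid == sock_uid:               # owner class
        bits = (mode >> 6) & 0o7
    elif sock_gid in gids:            # group class (e.g. member of "chrony")
        bits = (mode >> 3) & 0o7
    else:                             # other class
        bits = mode & 0o7
    return bits & 0o6 == 0o6          # need both read (4) and write (2)

# chronyd.sock is typically owned by root:chrony with mode 0660, so only
# membership in the chrony group grants access to a non-root telegraf user:
print(can_use_socket(0o660, 0, 120, 998, {120}))   # in the chrony group
print(can_use_socket(0o660, 0, 120, 998, {999}))   # not in the group
```

With mode `0660`, the "other" class has no bits set, which is why group membership is the only path for a non-root Telegraf process.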

## Metrics

- chrony
  - system_time (float, seconds)
  - last_offset (float, seconds)
  - rms_offset (float, seconds)
  - frequency (float, ppm)
  - residual_freq (float, ppm)
  - skew (float, ppm)
  - root_delay (float, seconds)
  - root_dispersion (float, seconds)
  - update_interval (float, seconds)

### Tags

- All measurements have the following tags:
  - reference_id
  - stratum
  - leap_status

## Example Output

```text
chrony,leap_status=not\ synchronized,reference_id=A29FC87B,stratum=3 frequency=-16.000999450683594,last_offset=0.000012651000361074694,residual_freq=0,rms_offset=0.000025576999178156257,root_delay=0.0016550000291317701,root_dispersion=0.00330700003542006,skew=0.006000000052154064,system_time=0.000020389999917824753,update_interval=507.1999816894531 1706271167571675297
```
---
description: "Telegraf plugin for collecting metrics from Cisco Model-Driven Telemetry (MDT)"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Cisco Model-Driven Telemetry (MDT)
    identifier: input-cisco_telemetry_mdt
tags: [Cisco Model-Driven Telemetry (MDT), "input-plugins", "configuration", "applications"]
introduced: "v1.11.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/cisco_telemetry_mdt/README.md, Cisco Model-Driven Telemetry (MDT) Plugin Source
---

# Cisco Model-Driven Telemetry (MDT) Input Plugin

This plugin consumes [Cisco model-driven telemetry (MDT)](https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-9300-series-switches/model-driven-telemetry-wp.html) data from
Cisco IOS XR, IOS XE, and NX-OS platforms via TCP or gRPC. gRPC-based transport
can use TLS for authentication and encryption. Telemetry data is expected to
be GPB-KV (self-describing-gpb) encoded.

The gRPC dialout transport is supported on IOS XR (64-bit) 6.1.x and
later, IOS XE 16.10 and later, and NX-OS 7.x and later platforms. The
TCP dialout transport is supported on IOS XR (32-bit and 64-bit) 6.1.x and
later.

**Introduced in:** Telegraf v1.11.0

**Tags:** applications

**OS support:** all

## Service Input <!-- @/docs/includes/service_input.md -->

This plugin is a service input. Normal plugins gather metrics determined by the
interval setting. Service plugins start a service to listen and wait for
metrics or events to occur. Service plugins have two key differences from
normal plugins:

1. The global or plugin specific `interval` setting may not apply
2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
   output for this plugin

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Cisco model-driven telemetry (MDT) input plugin for IOS XR, IOS XE and NX-OS platforms
[[inputs.cisco_telemetry_mdt]]
  ## Telemetry transport can be "tcp" or "grpc". TLS is only supported when
  ## using the grpc transport.
  transport = "grpc"

  ## Address and port to host telemetry listener
  service_address = ":57000"

  ## gRPC maximum message size; the default is 4MB, increase if needed.
  ## The value is stored as a uint32 and limited to 4294967295.
  max_msg_size = 4000000

  ## Enable TLS; grpc transport only.
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"

  ## Enable TLS client authentication and define allowed CA certificates; grpc
  ## transport only.
  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]

  ## Define (for certain nested telemetry measurements with embedded tags) which fields are tags
  # embedded_tags = ["Cisco-IOS-XR-qos-ma-oper:qos/interface-table/interface/input/service-policy-names/service-policy-instance/statistics/class-stats/class-name"]

  ## Include the delete field in every telemetry message.
  # include_delete_field = false

  ## Specify a custom name for the incoming MDT source field.
  # source_field_name = "mdt_source"

  ## Define aliases to map telemetry encoding paths to simple measurement names
  [inputs.cisco_telemetry_mdt.aliases]
    ifstats = "ietf-interfaces:interfaces-state/interface/statistics"

  ## Define property transformations; refer to the README and
  ## https://pubhub.devnetcloud.com/media/dme-docs-9-3-3/docs/appendix/ for model details.
  [inputs.cisco_telemetry_mdt.dmes]
    # Global property transformation.
    # prop1 = "uint64 to int"
    # prop2 = "uint64 to string"
    # prop3 = "string to uint64"
    # prop4 = "string to int64"
    # prop5 = "string to float64"
    # auto-prop-xfrom = "auto-float-xfrom" # transform any string property holding a float number to type float64
    # Per-path property transformation; Name is the telemetry configuration under
    # sensor-group, path configuration "WORD Distinguished Name".
    # Per-path configuration is preferred, as it avoids property type collisions.
    # dnpath = '{"Name": "show ip route summary","prop": [{"Key": "routes","Value": "string"}, {"Key": "best-paths","Value": "string"}]}'
    # dnpath2 = '{"Name": "show processes cpu","prop": [{"Key": "kernel_percent","Value": "float"}, {"Key": "idle_percent","Value": "float"}, {"Key": "process","Value": "string"}, {"Key": "user_percent","Value": "float"}, {"Key": "onesec","Value": "float"}]}'
    # dnpath3 = '{"Name": "show processes memory physical","prop": [{"Key": "processname","Value": "string"}]}'

  ## Additional gRPC connection settings.
  [inputs.cisco_telemetry_mdt.grpc_enforcement_policy]
    ## Permit gRPC keepalives without calls; set to true if your clients are
    ## sending pings without calls in-flight. This can sometimes happen on IOS-XE
    ## devices where the gRPC connection is left open but subscriptions have been
    ## removed, and adding subsequent subscriptions does not keep a stable session.
    # permit_keepalive_without_calls = false

    ## gRPC minimum timeout between successive pings; decreasing this value may
    ## help if this plugin is closing connections with ENHANCE_YOUR_CALM (too_many_pings).
    # keepalive_minimum_time = "5m"
```

## Metrics

Metrics are named by the encoding path that generated the data, or by the alias
if the `inputs.cisco_telemetry_mdt.aliases` config section is defined.
Metric fields depend on the device type and path.

Tags included in all metrics:

- source
- path
- subscription

Additional tags (such as interface_name) may be included depending on the path.
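The measurement naming amounts to an exact-match lookup of the encoding path in the alias table, falling back to the path itself. An illustrative sketch (not the plugin's Go source), using the alias from the sample configuration:

```python
# Alias table mirroring the [inputs.cisco_telemetry_mdt.aliases] example.
aliases = {
    "ietf-interfaces:interfaces-state/interface/statistics": "ifstats",
}

def measurement_name(encoding_path: str) -> str:
    """Use the alias if one is defined, otherwise fall back to the raw path."""
    return aliases.get(encoding_path, encoding_path)

print(measurement_name("ietf-interfaces:interfaces-state/interface/statistics"))  # -> ifstats
print(measurement_name("some-other:encoding/path"))  # unaliased paths pass through unchanged
```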

## Example Output

```text
ifstats,path=ietf-interfaces:interfaces-state/interface/statistics,host=linux,name=GigabitEthernet2,source=csr1kv,subscription=101 in-unicast-pkts=27i,in-multicast-pkts=0i,discontinuity-time="2019-05-23T07:40:23.000362+00:00",in-octets=5233i,in-errors=0i,out-multicast-pkts=0i,out-discards=0i,in-broadcast-pkts=0i,in-discards=0i,in-unknown-protos=0i,out-unicast-pkts=0i,out-broadcast-pkts=0i,out-octets=0i,out-errors=0i 1559150462624000000
ifstats,path=ietf-interfaces:interfaces-state/interface/statistics,host=linux,name=GigabitEthernet1,source=csr1kv,subscription=101 in-octets=3394770806i,in-broadcast-pkts=0i,in-multicast-pkts=0i,out-broadcast-pkts=0i,in-unknown-protos=0i,out-octets=350212i,in-unicast-pkts=9477273i,in-discards=0i,out-unicast-pkts=2726i,out-discards=0i,discontinuity-time="2019-05-23T07:40:23.000363+00:00",in-errors=30i,out-multicast-pkts=0i,out-errors=0i 1559150462624000000
```

### NX-OS Configuration Example

```text
Requirement       DATA-SOURCE   Configuration
---------------------------------------------
Environment       DME           path sys/ch query-condition query-target=subtree&target-subtree-class=eqptPsuSlot,eqptFtSlot,eqptSupCSlot,eqptPsu,eqptFt,eqptSensor,eqptLCSlot
                  DME           path sys/ch depth 5 (Another configuration option)
Environment       NXAPI         show environment power
                  NXAPI         show environment fan
                  NXAPI         show environment temperature
Interface Stats   DME           path sys/intf query-condition query-target=subtree&target-subtree-class=rmonIfIn,rmonIfOut,rmonIfHCIn,rmonIfHCOut,rmonEtherStats
Interface State   DME           path sys/intf depth unbounded query-condition query-target=subtree&target-subtree-class=l1PhysIf,pcAggrIf,l3EncRtdIf,l3LbRtdIf,ethpmPhysIf
VPC               DME           path sys/vpc query-condition query-target=subtree&target-subtree-class=vpcDom,vpcIf
Resources cpu     DME           path sys/procsys query-condition query-target=subtree&target-subtree-class=procSystem,procSysCore,procSysCpuSummary,procSysCpu,procIdle,procIrq,procKernel,procNice,procSoftirq,procTotal,procUser,procWait,procSysCpuHistory,procSysLoad
Resources Mem     DME           path sys/procsys/sysmem/sysmemused
                                path sys/procsys/sysmem/sysmemusage
                                path sys/procsys/sysmem/sysmemfree
Per Process cpu   DME           path sys/proc depth unbounded query-condition rsp-foreign-subtree=ephemeral
vxlan(svi stats)  DME           path sys/bd query-condition query-target=subtree&target-subtree-class=l2VlanStats
BGP               DME           path sys/bgp query-condition query-target=subtree&target-subtree-class=bgpDom,bgpPeer,bgpPeerAf,bgpDomAf,bgpPeerAfEntry,bgpOperRtctrlL3,bgpOperRttP,bgpOperRttEntry,bgpOperAfCtrl
mac dynamic       DME           path sys/mac query-condition query-target=subtree&target-subtree-class=l2MacAddressTable
bfd               DME           path sys/bfd/inst depth unbounded
lldp              DME           path sys/lldp depth unbounded
urib              DME           path sys/urib depth unbounded query-condition rsp-foreign-subtree=ephemeral
u6rib             DME           path sys/u6rib depth unbounded query-condition rsp-foreign-subtree=ephemeral
multicast flow    DME           path sys/mca/show/flows depth unbounded
multicast stats   DME           path sys/mca/show/stats depth unbounded
multicast igmp    NXAPI         show ip igmp groups vrf all
multicast igmp    NXAPI         show ip igmp interface vrf all
multicast igmp    NXAPI         show ip igmp snooping
multicast igmp    NXAPI         show ip igmp snooping groups
multicast igmp    NXAPI         show ip igmp snooping groups detail
multicast igmp    NXAPI         show ip igmp snooping groups summary
multicast igmp    NXAPI         show ip igmp snooping mrouter
multicast igmp    NXAPI         show ip igmp snooping statistics
multicast pim     NXAPI         show ip pim interface vrf all
multicast pim     NXAPI         show ip pim neighbor vrf all
multicast pim     NXAPI         show ip pim route vrf all
multicast pim     NXAPI         show ip pim rp vrf all
multicast pim     NXAPI         show ip pim statistics vrf all
multicast pim     NXAPI         show ip pim vrf all
microburst        NATIVE        path microburst
```
---
description: "Telegraf plugin for collecting metrics from ClickHouse"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: ClickHouse
    identifier: input-clickhouse
tags: [ClickHouse, "input-plugins", "configuration", "server"]
introduced: "v1.14.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/clickhouse/README.md, ClickHouse Plugin Source
---

# ClickHouse Input Plugin

This plugin gathers statistics data from a [ClickHouse server](https://github.com/ClickHouse/ClickHouse).
Users on ClickHouse Cloud will not see the ZooKeeper metrics, as they may not
have permission to query those tables.

**Introduced in:** Telegraf v1.14.0

**Tags:** server

**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Read metrics from one or many ClickHouse servers
[[inputs.clickhouse]]
  ## Username for authorization on ClickHouse server
  username = "default"

  ## Password for authorization on ClickHouse server
  # password = ""

  ## HTTP(s) timeout while getting metrics values
  ## The timeout includes connection time, any redirects, and reading the
  ## response body.
  # timeout = "5s"

  ## List of servers for metrics scraping
  ## Metrics are scraped via the HTTP(s) ClickHouse interface:
  ## https://clickhouse.tech/docs/en/interfaces/http/
  servers = ["http://127.0.0.1:8123"]

  ## Server Variant
  ## When set to "managed", some queries are excluded from being run. This is
  ## useful for instances hosted in ClickHouse Cloud where certain tables are
  ## not available.
  # variant = "self-hosted"

  ## If "auto_discovery" is "true", the plugin tries to connect to all servers
  ## available in the cluster, using the same credentials described in the
  ## "username" and "password" parameters, and gets the server hostname list
  ## from the "system.clusters" table. See
  ## - https://clickhouse.tech/docs/en/operations/system_tables/#system-clusters
  ## - https://clickhouse.tech/docs/en/operations/server_settings/settings/#server_settings_remote_servers
  ## - https://clickhouse.tech/docs/en/operations/table_engines/distributed/
  ## - https://clickhouse.tech/docs/en/operations/table_engines/replication/#creating-replicated-tables
  # auto_discovery = true

  ## Filter cluster names in "system.clusters" when "auto_discovery" is "true".
  ## When this filter is present, a "WHERE cluster IN (...)" filter is applied.
  ## Use only full cluster names here; regexp and glob filters are not allowed.
  ## For example, for "/etc/clickhouse-server/config.d/remote.xml":
  ## <yandex>
  ##   <remote_servers>
  ##     <my-own-cluster>
  ##       <shard>
  ##         <replica><host>clickhouse-ru-1.local</host><port>9000</port></replica>
  ##         <replica><host>clickhouse-ru-2.local</host><port>9000</port></replica>
  ##       </shard>
  ##       <shard>
  ##         <replica><host>clickhouse-eu-1.local</host><port>9000</port></replica>
  ##         <replica><host>clickhouse-eu-2.local</host><port>9000</port></replica>
  ##       </shard>
  ##     </my-own-cluster>
  ##   </remote_servers>
  ## </yandex>
  ##
  ## example: cluster_include = ["my-own-cluster"]
  # cluster_include = []

  ## Filter cluster names in "system.clusters" when "auto_discovery" is "true".
  ## When this filter is present, a "WHERE cluster NOT IN (...)" filter is
  ## applied.
  ## example: cluster_exclude = ["my-internal-not-discovered-cluster"]
  # cluster_exclude = []

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false
```
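To see what the scraping boils down to, here is a hedged sketch of querying one system table over the ClickHouse HTTP interface (the server address and credentials are assumptions, and the plugin's actual queries differ):

```python
import json
import urllib.parse
import urllib.request

def system_events_url(server: str) -> str:
    """Build an HTTP-interface URL that returns system.events as JSON rows."""
    query = "SELECT event, value FROM system.events FORMAT JSONEachRow"
    return server + "/?" + urllib.parse.urlencode({"query": query})

def parse_rows(body: str) -> dict:
    """Turn JSONEachRow output into a field map, one field per event."""
    fields = {}
    for line in body.splitlines():
        if line.strip():
            row = json.loads(line)
            # UInt64 values may arrive quoted as strings by default.
            fields[row["event"]] = int(row["value"])
    return fields

url = system_events_url("http://127.0.0.1:8123")
# To run against a live server, pass credentials via request headers:
# req = urllib.request.Request(url, headers={"X-ClickHouse-User": "default"})
# print(parse_rows(urllib.request.urlopen(req).read().decode()))
print(parse_rows('{"event":"Query","value":"42"}\n'))  # -> {'Query': 42}
```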
|
||||
|
||||
## Metrics

- clickhouse_events (see [system.events](https://clickhouse.tech/docs/en/operations/system-tables/events/) for details)
  - tags:
    - source (ClickHouse server hostname)
    - cluster (Name of the cluster [optional])
    - shard_num (Shard number in the cluster [optional])
  - fields:
    - all rows from [system.events](https://clickhouse.tech/docs/en/operations/system-tables/events/)

- clickhouse_metrics (see [system.metrics](https://clickhouse.tech/docs/en/operations/system-tables/metrics/) for details)
  - tags:
    - source (ClickHouse server hostname)
    - cluster (Name of the cluster [optional])
    - shard_num (Shard number in the cluster [optional])
  - fields:
    - all rows from [system.metrics](https://clickhouse.tech/docs/en/operations/system-tables/metrics/)

- clickhouse_asynchronous_metrics (see [system.asynchronous_metrics] for details)
  - tags:
    - source (ClickHouse server hostname)
    - cluster (Name of the cluster [optional])
    - shard_num (Shard number in the cluster [optional])
  - fields:
    - all rows from [system.asynchronous_metrics]

- clickhouse_tables
  - tags:
    - source (ClickHouse server hostname)
    - table
    - database
    - cluster (Name of the cluster [optional])
    - shard_num (Shard number in the cluster [optional])
  - fields:
    - bytes
    - parts
    - rows

- clickhouse_zookeeper (see [system.zookeeper](https://clickhouse.tech/docs/en/operations/system-tables/zookeeper/) for details)
  - tags:
    - source (ClickHouse server hostname)
    - cluster (Name of the cluster [optional])
    - shard_num (Shard number in the cluster [optional])
  - fields:
    - root_nodes (count of nodes where path=/)

- clickhouse_replication_queue (see [system.replication_queue] for details)
  - tags:
    - source (ClickHouse server hostname)
    - cluster (Name of the cluster [optional])
    - shard_num (Shard number in the cluster [optional])
  - fields:
    - too_many_tries_replicas (count of replicas that have `num_tries > 1`)

- clickhouse_detached_parts (see [system.detached_parts] for details)
  - tags:
    - source (ClickHouse server hostname)
    - cluster (Name of the cluster [optional])
    - shard_num (Shard number in the cluster [optional])
  - fields:
    - detached_parts (total detached parts across all tables and databases
      from [system.detached_parts])

- clickhouse_dictionaries (see [system.dictionaries](https://clickhouse.tech/docs/en/operations/system-tables/dictionaries/) for details)
  - tags:
    - source (ClickHouse server hostname)
    - cluster (Name of the cluster [optional])
    - shard_num (Shard number in the cluster [optional])
    - dict_origin (XML filename when the dictionary is created from *_dictionary.xml,
      database.table when the dictionary is created from DDL)
  - fields:
    - is_loaded (1 when the dictionary data loaded successfully, 0 when loading failed)
    - bytes_allocated (bytes allocated in RAM after the dictionary is loaded)

- clickhouse_mutations (see [system.mutations](https://clickhouse.tech/docs/en/operations/system-tables/mutations/) for details)
  - tags:
    - source (ClickHouse server hostname)
    - cluster (Name of the cluster [optional])
    - shard_num (Shard number in the cluster [optional])
  - fields:
    - running - gauge showing how many mutations are currently incomplete
    - failed - counter showing the total number of failed mutations since the
      clickhouse-server process started
    - completed - counter showing the total number of successfully finished
      mutations since the clickhouse-server process started

- clickhouse_disks (see [system.disks](https://clickhouse.tech/docs/en/operations/system-tables/disks/) for details)
  - tags:
    - source (ClickHouse server hostname)
    - cluster (Name of the cluster [optional])
    - shard_num (Shard number in the cluster [optional])
    - name (disk name in the storage configuration)
    - path (path to the disk)
  - fields:
    - free_space_percent - 0-100, gauge showing the current percentage of free
      disk space relative to total disk space
    - keep_free_space_percent - 0-100, gauge showing the current percentage of
      required "keep free" disk space relative to total disk space

- clickhouse_processes (see [system.processes](https://clickhouse.tech/docs/en/operations/system-tables/processes/) for details)
  - tags:
    - source (ClickHouse server hostname)
    - cluster (Name of the cluster [optional])
    - shard_num (Shard number in the cluster [optional])
  - fields:
    - percentile_50 - float gauge showing the 50th percentile (quantile 0.5) of
      the `elapsed` field of running processes
    - percentile_90 - float gauge showing the 90th percentile (quantile 0.9) of
      the `elapsed` field of running processes
    - longest_running - float gauge showing the maximum value of the `elapsed`
      field of running processes

- clickhouse_text_log (see [system.text_log] for details)
  - tags:
    - source (ClickHouse server hostname)
    - cluster (Name of the cluster [optional])
    - shard_num (Shard number in the cluster [optional])
    - level (message level; only messages with a level of Notice or lower are
      collected)
  - fields:
    - messages_last_10_min - gauge showing how many messages were collected in
      the last 10 minutes

## Example Output

```text
clickhouse_events,cluster=test_cluster_two_shards_localhost,host=kshvakov,source=localhost,shard_num=1 read_compressed_bytes=212i,arena_alloc_chunks=35i,function_execute=85i,merge_tree_data_writer_rows=3i,rw_lock_acquired_read_locks=421i,file_open=46i,io_buffer_alloc_bytes=86451985i,inserted_bytes=196i,regexp_created=3i,real_time_microseconds=116832i,query=23i,network_receive_elapsed_microseconds=268i,merge_tree_data_writer_compressed_bytes=1080i,arena_alloc_bytes=212992i,disk_write_elapsed_microseconds=556i,inserted_rows=3i,compressed_read_buffer_bytes=81i,read_buffer_from_file_descriptor_read_bytes=148i,write_buffer_from_file_descriptor_write=47i,merge_tree_data_writer_blocks=3i,soft_page_faults=896i,hard_page_faults=7i,select_query=21i,merge_tree_data_writer_uncompressed_bytes=196i,merge_tree_data_writer_blocks_already_sorted=3i,user_time_microseconds=40196i,compressed_read_buffer_blocks=5i,write_buffer_from_file_descriptor_write_bytes=3246i,io_buffer_allocs=296i,created_write_buffer_ordinary=12i,disk_read_elapsed_microseconds=59347044i,network_send_elapsed_microseconds=1538i,context_lock=1040i,insert_query=1i,system_time_microseconds=14582i,read_buffer_from_file_descriptor_read=3i 1569421000000000000
clickhouse_asynchronous_metrics,cluster=test_cluster_two_shards_localhost,host=kshvakov,source=localhost,shard_num=1 jemalloc.metadata_thp=0i,replicas_max_relative_delay=0i,jemalloc.mapped=1803177984i,jemalloc.allocated=1724839256i,jemalloc.background_thread.run_interval=0i,jemalloc.background_thread.num_threads=0i,uncompressed_cache_cells=0i,replicas_max_absolute_delay=0i,mark_cache_bytes=0i,compiled_expression_cache_count=0i,replicas_sum_queue_size=0i,number_of_tables=35i,replicas_max_merges_in_queue=0i,replicas_max_inserts_in_queue=0i,replicas_sum_merges_in_queue=0i,replicas_max_queue_size=0i,mark_cache_files=0i,jemalloc.background_thread.num_runs=0i,jemalloc.active=1726210048i,uptime=158i,jemalloc.retained=380481536i,replicas_sum_inserts_in_queue=0i,uncompressed_cache_bytes=0i,number_of_databases=2i,jemalloc.metadata=9207704i,max_part_count_for_partition=1i,jemalloc.resident=1742442496i 1569421000000000000
clickhouse_metrics,cluster=test_cluster_two_shards_localhost,host=kshvakov,source=localhost,shard_num=1 replicated_send=0i,write=0i,ephemeral_node=0i,zoo_keeper_request=0i,distributed_files_to_insert=0i,replicated_fetch=0i,background_schedule_pool_task=0i,interserver_connection=0i,leader_replica=0i,delayed_inserts=0i,global_thread_active=41i,merge=0i,readonly_replica=0i,memory_tracking_in_background_schedule_pool=0i,memory_tracking_for_merges=0i,zoo_keeper_session=0i,context_lock_wait=0i,storage_buffer_bytes=0i,background_pool_task=0i,send_external_tables=0i,zoo_keeper_watch=0i,part_mutation=0i,disk_space_reserved_for_merge=0i,distributed_send=0i,version_integer=19014003i,local_thread=0i,replicated_checks=0i,memory_tracking=0i,memory_tracking_in_background_processing_pool=0i,leader_election=0i,revision=54425i,open_file_for_read=0i,open_file_for_write=0i,storage_buffer_rows=0i,rw_lock_waiting_readers=0i,rw_lock_waiting_writers=0i,rw_lock_active_writers=0i,local_thread_active=0i,query_preempted=0i,tcp_connection=1i,http_connection=1i,read=2i,query_thread=0i,dict_cache_requests=0i,rw_lock_active_readers=1i,global_thread=43i,query=1i 1569421000000000000
clickhouse_tables,cluster=test_cluster_two_shards_localhost,database=system,host=kshvakov,source=localhost,shard_num=1,table=trace_log bytes=754i,parts=1i,rows=1i 1569421000000000000
clickhouse_tables,cluster=test_cluster_two_shards_localhost,database=default,host=kshvakov,source=localhost,shard_num=1,table=example bytes=326i,parts=2i,rows=2i 1569421000000000000
```

[system.asynchronous_metrics]: https://clickhouse.tech/docs/en/operations/system-tables/asynchronous_metrics/
[system.detached_parts]: https://clickhouse.tech/docs/en/operations/system-tables/detached_parts/
[system.dictionaries]: https://clickhouse.tech/docs/en/operations/system-tables/dictionaries/
[system.disks]: https://clickhouse.tech/docs/en/operations/system-tables/disks/
[system.events]: https://clickhouse.tech/docs/en/operations/system-tables/events/
[system.metrics]: https://clickhouse.tech/docs/en/operations/system-tables/metrics/
[system.mutations]: https://clickhouse.tech/docs/en/operations/system-tables/mutations/
[system.processes]: https://clickhouse.tech/docs/en/operations/system-tables/processes/
[system.replication_queue]: https://clickhouse.com/docs/en/operations/system-tables/replication_queue/
[system.text_log]: https://clickhouse.tech/docs/en/operations/system-tables/text_log/
[system.zookeeper]: https://clickhouse.tech/docs/en/operations/system-tables/zookeeper/

---
description: "Telegraf plugin for collecting metrics from Google Cloud PubSub"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Google Cloud PubSub
    identifier: input-cloud_pubsub
tags: [Google Cloud PubSub, "input-plugins", "configuration", "cloud", "messaging"]
introduced: "v1.10.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/cloud_pubsub/README.md, Google Cloud PubSub Plugin Source
---

# Google Cloud PubSub Input Plugin

This plugin consumes messages from the [Google Cloud PubSub](https://cloud.google.com/pubsub) service
and creates metrics using one of the supported [data formats](/telegraf/v1/data_formats/input).

**Introduced in:** Telegraf v1.10.0
**Tags:** cloud, messaging
**OS support:** all

## Service Input <!-- @/docs/includes/service_input.md -->

This plugin is a service input. Normal plugins gather metrics determined by the
interval setting. Service plugins start a service to listen and wait for
metrics or events to occur. Service plugins have two key differences from
normal plugins:

1. The global or plugin specific `interval` setting may not apply
2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
   output for this plugin

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields or create aliases and configure ordering, etc.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Read metrics from Google PubSub
[[inputs.cloud_pubsub]]
  ## Required. Name of Google Cloud Platform (GCP) Project that owns
  ## the given PubSub subscription.
  project = "my-project"

  ## Required. Name of PubSub subscription to ingest metrics from.
  subscription = "my-subscription"

  ## Required. Data format to consume.
  ## Each data format has its own unique set of configuration options.
  ## Read more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"

  ## Optional. Filepath for GCP credentials JSON file to authorize calls to
  ## PubSub APIs. If not set explicitly, Telegraf will attempt to use
  ## Application Default Credentials, which is preferred.
  # credentials_file = "path/to/my/creds.json"

  ## Optional. Number of seconds to wait before attempting to restart the
  ## PubSub subscription receiver after an unexpected error.
  ## If the streaming pull for a PubSub Subscription fails (receiver),
  ## the agent attempts to restart receiving messages after this many seconds.
  # retry_delay_seconds = 5

  ## Optional. Maximum byte length of a message to consume.
  ## Larger messages are dropped with an error. If less than 0 or unspecified,
  ## treated as no limit.
  # max_message_len = 1000000

  ## Max undelivered messages
  ## This plugin uses tracking metrics, which ensure messages are read to
  ## outputs before acknowledging them to the original broker to ensure data
  ## is not lost. This option sets the maximum messages to read from the
  ## broker that have not been written by an output.
  ##
  ## This value needs to be picked with awareness of the agent's
  ## metric_batch_size value as well. Setting max undelivered messages too high
  ## can result in a constant stream of data batches to the output, while
  ## setting it too low may never flush the broker's messages.
  # max_undelivered_messages = 1000

  ## The following are optional Subscription ReceiveSettings in PubSub.
  ## Read more about these values:
  ## https://godoc.org/cloud.google.com/go/pubsub/v2#ReceiveSettings

  ## Optional. Maximum number of seconds for which a PubSub subscription
  ## should auto-extend the PubSub ACK deadline for each message. If less than
  ## 0, auto-extension is disabled.
  # max_extension = 0

  ## Optional. Maximum number of unprocessed messages in PubSub
  ## (unacknowledged but not yet expired in PubSub).
  ## A value of 0 is treated as the default PubSub value.
  ## Negative values will be treated as unlimited.
  # max_outstanding_messages = 0

  ## Optional. Maximum size in bytes of unprocessed messages in PubSub
  ## (unacknowledged but not yet expired in PubSub).
  ## A value of 0 is treated as the default PubSub value.
  ## Negative values will be treated as unlimited.
  # max_outstanding_bytes = 0

  ## Optional. Max number of goroutines a PubSub Subscription receiver can spawn
  ## to pull messages from PubSub concurrently. This limit applies to each
  ## subscription separately and is treated as the PubSub default if less than
  ## 1. Note this setting does not limit the number of messages that can be
  ## processed concurrently (use "max_outstanding_messages" instead).
  # max_receiver_go_routines = 0

  ## Optional. If true, Telegraf will attempt to base64 decode the
  ## PubSub message data before parsing. Many GCP services that
  ## output JSON to Google PubSub base64-encode the JSON payload.
  # base64_data = false

  ## Content encoding for message payloads, can be set to "gzip" or
  ## "identity" to apply no encoding.
  # content_encoding = "identity"

  ## If content encoding is not "identity", sets the maximum allowed size,
  ## in bytes, for a message payload when it's decompressed. Can be increased
  ## for larger payloads or reduced to protect against decompression bombs.
  ## Acceptable units are B, KiB, KB, MiB, MB...
  # max_decompression_size = "500MB"
```

### Multiple Subscriptions and Topics

This plugin assumes you have already created a PULL subscription for a given
PubSub topic. To learn how to do so, see [how to create a subscription](https://cloud.google.com/pubsub/docs/admin#create_a_pull_subscription).

Each plugin agent can listen to one subscription at a time, so you will
need to run multiple instances of the plugin to pull messages from multiple
subscriptions/topics.
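For instance, pulling from two subscriptions means declaring the plugin twice; a minimal sketch with placeholder project and subscription names:

```toml
# First subscription/topic
[[inputs.cloud_pubsub]]
  project = "my-project"
  subscription = "metrics-subscription-a"
  data_format = "influx"

# Second subscription/topic, handled by a separate plugin instance
[[inputs.cloud_pubsub]]
  project = "my-project"
  subscription = "metrics-subscription-b"
  data_format = "influx"
```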

## Metrics

## Example Output

---
description: "Telegraf plugin for collecting metrics from Google Cloud PubSub Push"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Google Cloud PubSub Push
    identifier: input-cloud_pubsub_push
tags: [Google Cloud PubSub Push, "input-plugins", "configuration", "cloud", "messaging"]
introduced: "v1.10.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/cloud_pubsub_push/README.md, Google Cloud PubSub Push Plugin Source
---

# Google Cloud PubSub Push Input Plugin

This plugin listens for messages sent via an HTTP POST from
[Google Cloud PubSub](https://cloud.google.com/pubsub) and expects messages in Google's Pub/Sub
_JSON format_. The plugin allows Telegraf to serve as an endpoint for the
Pub/Sub push service.

Google's PubSub service will __only__ send over HTTPS/TLS, so this plugin must
sit behind a valid proxy or be configured to use TLS by setting `tls_cert`
and `tls_key` accordingly.

To enable mutually authenticated TLS and authorize client connections by
certificate authority, include a list of allowed CA certificate file names
in `tls_allowed_cacerts`.

**Introduced in:** Telegraf v1.10.0
**Tags:** cloud, messaging
**OS support:** all

## Service Input <!-- @/docs/includes/service_input.md -->

This plugin is a service input. Normal plugins gather metrics determined by the
interval setting. Service plugins start a service to listen and wait for
metrics or events to occur. Service plugins have two key differences from
normal plugins:

1. The global or plugin specific `interval` setting may not apply
2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
   output for this plugin

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields or create aliases and configure ordering, etc.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Google Cloud Pub/Sub Push HTTP listener
[[inputs.cloud_pubsub_push]]
  ## Address and port to host HTTP listener on
  service_address = ":8080"

  ## Application secret to verify messages originate from Cloud Pub/Sub
  # token = ""

  ## Path to listen to.
  # path = "/"

  ## Maximum duration before timing out read of the request
  # read_timeout = "10s"
  ## Maximum duration before timing out write of the response. This should be
  ## set to a value large enough that you can send at least 'metric_batch_size'
  ## number of messages within the duration.
  # write_timeout = "10s"

  ## Maximum allowed http request body size in bytes.
  ## 0 means to use the default of 524,288,000 bytes (500 mebibytes)
  # max_body_size = "500MB"

  ## Whether to add the pubsub metadata, such as message attributes and
  ## subscription as a tag.
  # add_meta = false

  ## Max undelivered messages
  ## This plugin uses tracking metrics, which ensure messages are read to
  ## outputs before acknowledging them to the original broker to ensure data
  ## is not lost. This option sets the maximum messages to read from the
  ## broker that have not been written by an output.
  ##
  ## This value needs to be picked with awareness of the agent's
  ## metric_batch_size value as well. Setting max undelivered messages too high
  ## can result in a constant stream of data batches to the output, while
  ## setting it too low may never flush the broker's messages.
  # max_undelivered_messages = 1000

  ## Set one or more allowed client CA certificate file names to
  ## enable mutually authenticated TLS connections
  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]

  ## Add service certificate and key
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"
```

This plugin assumes you have already created a PUSH subscription for a given
PubSub topic.

## Metrics

## Example Output

---
description: "Telegraf plugin for collecting metrics from Amazon CloudWatch Statistics"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Amazon CloudWatch Statistics
    identifier: input-cloudwatch
tags: [Amazon CloudWatch Statistics, "input-plugins", "configuration", "cloud"]
introduced: "v0.12.1"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/cloudwatch/README.md, Amazon CloudWatch Statistics Plugin Source
---

# Amazon CloudWatch Statistics Input Plugin

This plugin will gather metric statistics from [Amazon CloudWatch](https://aws.amazon.com/cloudwatch).

**Introduced in:** Telegraf v0.12.1
**Tags:** cloud
**OS support:** all

## Amazon Authentication

This plugin uses a credential chain for authentication with the CloudWatch
API endpoint. The plugin attempts to authenticate in the following order:

1. Assumed credentials via STS if the `role_arn` attribute is specified
   (source credentials are evaluated from subsequent rules)
2. Explicit credentials from the `access_key`, `secret_key`, and `token` attributes
3. Shared profile from the `profile` attribute
4. [Environment Variables](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#environment-variables)
5. [Shared Credentials](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#shared-credentials-file)
6. [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)
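As a sketch of the first rule, assuming a role via STS only requires the role ARN (the ARN below is a placeholder; source credentials for the STS call are resolved by the later rules in the chain):

```toml
[[inputs.cloudwatch]]
  region = "us-east-1"
  period = "5m"
  delay = "5m"

  ## Placeholder ARN; the credentials used to call STS come from the
  ## subsequent rules (profile, environment variables, instance profile, ...)
  role_arn = "arn:aws:iam::123456789012:role/telegraf-cloudwatch"
```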

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields or create aliases and configure ordering, etc.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Pull Metric Statistics from Amazon CloudWatch
[[inputs.cloudwatch]]
  ## Amazon Region
  region = "us-east-1"

  ## Amazon Credentials
  ## Credentials are loaded in the following order
  ## 1) Web identity provider credentials via STS if role_arn and
  ##    web_identity_token_file are specified
  ## 2) Assumed credentials via STS if role_arn is specified
  ## 3) explicit credentials from 'access_key' and 'secret_key'
  ## 4) shared profile from 'profile'
  ## 5) environment variables
  ## 6) shared credentials file
  ## 7) EC2 Instance Profile
  # access_key = ""
  # secret_key = ""
  # token = ""
  # role_arn = ""
  # web_identity_token_file = ""
  # role_session_name = ""
  # profile = ""
  # shared_credential_file = ""

  ## If you are using CloudWatch cross-account observability, you can
  ## set IncludeLinkedAccounts to true in a monitoring account
  ## and collect metrics from the linked source accounts
  # include_linked_accounts = false

  ## Endpoint to make request against, the correct endpoint is automatically
  ## determined and this option should only be set if you wish to override the
  ## default.
  ## ex: endpoint_url = "http://localhost:8000"
  # endpoint_url = ""

  ## Set http_proxy
  # use_system_proxy = false
  # http_proxy_url = "http://localhost:8888"

  ## The minimum period for Cloudwatch metrics is 1 minute (60s). However not
  ## all metrics are made available to the 1 minute period. Some are collected
  ## at 3 minute, 5 minute, or larger intervals.
  ## See https://aws.amazon.com/cloudwatch/faqs/#monitoring.
  ## Note that if a period is configured that is smaller than the minimum for a
  ## particular metric, that metric will not be returned by the Cloudwatch API
  ## and will not be collected by Telegraf.
  #
  ## Requested CloudWatch aggregation Period (required)
  ## Must be a multiple of 60s.
  period = "5m"

  ## Collection Delay (required)
  ## Must account for metrics availability via CloudWatch API
  delay = "5m"

  ## Recommended: use metric 'interval' that is a multiple of 'period' to avoid
  ## gaps or overlap in pulled data
  interval = "5m"

  ## Recommended if "delay" and "period" are both within 3 hours of request
  ## time. Invalid values will be ignored. The Recently Active feature will only
  ## poll for CloudWatch ListMetrics values that occurred within the last 3h.
  ## If enabled, it will reduce total API usage of the CloudWatch ListMetrics
  ## API and require less memory to retain.
  ## Do not enable if "period" or "delay" is longer than 3 hours, as it will
  ## not return data more than 3 hours old.
  ## See https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_ListMetrics.html
  # recently_active = "PT3H"

  ## Configure the TTL for the internal cache of metrics.
  # cache_ttl = "1h"

  ## Metric Statistic Namespaces, wildcards are allowed
  # namespaces = ["*"]

  ## Metric Format
  ## This determines the format of the produced metrics. 'sparse', the default,
  ## will produce a unique field for each statistic. 'dense' will report all
  ## statistics in a field called value and have a metric_name tag
  ## defining the name of the statistic. See the plugin README for examples.
  # metric_format = "sparse"

  ## Maximum requests per second. Note that the global default AWS rate limit
  ## is 50 reqs/sec, so if you define multiple namespaces, these should add up
  ## to a maximum of 50.
  ## See http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_limits.html
  # ratelimit = 25

  ## Timeout for http requests made by the cloudwatch client.
  # timeout = "5s"

  ## Batch Size
  ## The size of each batch to send requests to Cloudwatch. 500 is the
  ## suggested largest size. If a request gets too large (413 errors), consider
  ## reducing this amount.
  # batch_size = 500

  ## Namespace-wide statistic filters. These allow fewer queries to be made to
  ## cloudwatch.
  # statistic_include = ["average", "sum", "minimum", "maximum", "sample_count"]
  # statistic_exclude = []

  ## Metrics to Pull
  ## Defaults to all Metrics in Namespace if nothing is provided
  ## Refreshes Namespace available metrics every 1h
  #[[inputs.cloudwatch.metrics]]
  #  names = ["Latency", "RequestCount"]
  #
  #  ## Statistic filters for Metric. These allow for retrieving specific
  #  ## statistics for an individual metric.
  #  # statistic_include = ["average", "sum", "minimum", "maximum", "sample_count"]
  #  # statistic_exclude = []
  #
  #  ## Dimension filters for Metric.
  #  ## All dimensions defined for the metric names must be specified in order
  #  ## to retrieve the metric statistics.
  #  ## 'value' has wildcard / 'glob' matching support such as 'p-*'.
  #  [[inputs.cloudwatch.metrics.dimensions]]
  #    name = "LoadBalancerName"
  #    value = "p-example"
```
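To illustrate the `metric_format` option described above (field names, values, and timestamps below are illustrative, not verbatim plugin output): the default `sparse` format emits one field per statistic, while `dense` emits a single `value` field and moves the statistic name into a `metric_name` tag:

```text
# metric_format = "sparse" (default): one field per statistic
cloudwatch_aws_elb,load_balancer_name=p-example,region=us-east-1 latency_average=0.004,latency_maximum=0.110 1621425000000000000

# metric_format = "dense": one "value" field plus a "metric_name" tag
cloudwatch_aws_elb,load_balancer_name=p-example,region=us-east-1,metric_name=latency_average value=0.004 1621425000000000000
cloudwatch_aws_elb,load_balancer_name=p-example,region=us-east-1,metric_name=latency_maximum value=0.110 1621425000000000000
```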
|
||||
|
||||
Please note, the `namespace` option is deprecated in favor of the `namespaces`
|
||||
list option.
|
||||
|
||||
## Requirements and Terminology
|
||||
|
||||
Plugin Configuration utilizes [CloudWatch concepts](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html) and access
|
||||
pattern to allow monitoring of any CloudWatch Metric.
|
||||
|
||||
- `region` must be a valid AWS [region](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#CloudWatchRegions) value
|
||||
- `period` must be a valid CloudWatch [period](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#CloudWatchPeriods) value
|
||||
- `namespaces` must be a list of valid CloudWatch [namespace](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#Namespace) value(s)
|
||||
- `names` must be valid CloudWatch [metric](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#Metric) names
|
||||
- `dimensions` must be valid CloudWatch [dimension](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#Dimension) name/value pairs
|
||||
|
||||
Omitting or specifying a value of `'*'` for a dimension value configures all
|
||||
available metrics that contain a dimension with the specified name to be
|
||||
retrieved. If specifying >1 dimension, then the metric must contain *all* the
|
||||
configured dimensions where the value of the wildcard dimension is ignored.
|
||||
|
||||
Example:
|
||||
|
||||
```toml
|
||||
[[inputs.cloudwatch]]
|
||||
period = "1m"
|
||||
interval = "5m"
|
||||
|
||||
[[inputs.cloudwatch.metrics]]
|
||||
names = ["Latency"]
|
||||
|
||||
## Dimension filters for Metric (optional)
|
||||
[[inputs.cloudwatch.metrics.dimensions]]
|
||||
name = "LoadBalancerName"
|
||||
value = "p-example"
|
||||
|
||||
[[inputs.cloudwatch.metrics.dimensions]]
|
||||
name = "AvailabilityZone"
|
||||
value = "*"
|
||||
```
|
||||
|
||||
If the following ELBs are available:
|
||||
|
||||
- name: `p-example`, availabilityZone: `us-east-1a`
|
||||
- name: `p-example`, availabilityZone: `us-east-1b`
|
||||
- name: `q-example`, availabilityZone: `us-east-1a`
|
||||
- name: `q-example`, availabilityZone: `us-east-1b`
|
||||
|
||||
Then 2 metrics will be output:
|
||||
|
||||
- name: `p-example`, availabilityZone: `us-east-1a`
|
||||
- name: `p-example`, availabilityZone: `us-east-1b`
|
||||
|
||||
If the `AvailabilityZone` wildcard dimension was omitted, then a single metric
|
||||
(name: `p-example`) would be exported containing the aggregate values of the ELB
|
||||
across availability zones.
|
||||
|
||||
To maximize efficiency and savings, consider making fewer requests by increasing
|
||||
`interval` but keeping `period` at the duration you would like metrics to be
|
||||
reported. The above example will request metrics from Cloudwatch every 5 minutes
|
||||
but will output five metrics timestamped one minute apart.
|
||||
|
||||
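The relationship between `interval` and `period` can be sketched as follows; this is an illustrative Python sketch of the arithmetic, not plugin code:

```python
from datetime import datetime, timedelta

interval = timedelta(minutes=5)  # how often the plugin calls the CloudWatch API
period = timedelta(minutes=1)    # granularity of the statistics returned

# One API request per interval yields interval/period datapoints,
# timestamped one period apart.
start = datetime(2024, 1, 1, 12, 0)  # hypothetical collection start
timestamps = [start + i * period for i in range(int(interval / period))]
print(len(timestamps))  # 5 datapoints per request
```

So a single 5-minute request produces five 1-minute datapoints, which is why the example above outputs five metrics per collection.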
## Restrictions and Limitations

- CloudWatch metrics are not available instantly via the CloudWatch API.
  You should adjust your collection `delay` to account for this lag in metrics
  availability based on your [monitoring subscription level](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html).
- CloudWatch API usage incurs cost - see [GetMetricData Pricing](https://aws.amazon.com/cloudwatch/pricing/).

## Metrics

Each monitored CloudWatch Namespace records a measurement with fields for each
available Metric Statistic. Namespace and Metric names are represented in
[snake case](https://en.wikipedia.org/wiki/Snake_case).

### Sparse Metrics

By default, metrics generated by this plugin are sparse. Use the `metric_format`
option to override this setting.

Sparse metrics produce a set of fields for every AWS Metric:

- cloudwatch_{namespace}
  - Fields
    - {metric}_sum (metric Sum value)
    - {metric}_average (metric Average value)
    - {metric}_minimum (metric Minimum value)
    - {metric}_maximum (metric Maximum value)
    - {metric}_sample_count (metric SampleCount value)

For example:

```text
cloudwatch_aws_usage,class=None,resource=GetSecretValue,service=Secrets\ Manager,type=API call_count_maximum=1,call_count_minimum=1,call_count_sum=8,call_count_sample_count=8,call_count_average=1 1715097720000000000
```

### Dense Metrics

Dense metrics are generated when `metric_format` is set to `dense`.

Dense metrics reuse the same set of fields for every AWS Metric and
differentiate between AWS Metrics using a `metric_name` tag that holds the AWS
Metric name:

- cloudwatch_{namespace}
  - Tags
    - metric_name (AWS Metric name)
  - Fields
    - sum (metric Sum value)
    - average (metric Average value)
    - minimum (metric Minimum value)
    - maximum (metric Maximum value)
    - sample_count (metric SampleCount value)

For example:

```text
cloudwatch_aws_usage,class=None,resource=GetSecretValue,service=Secrets\ Manager,metric_name=call_count,type=API sum=6,sample_count=6,average=1,maximum=1,minimum=1 1715097840000000000
```

### Tags

Each measurement is tagged with the following identifiers to uniquely identify
the associated metric. Tag and Dimension names are represented in
[snake case](https://en.wikipedia.org/wiki/Snake_case).

- All measurements have the following tags:
  - region (CloudWatch Region)
  - {dimension-name} (CloudWatch Dimension value - one per metric dimension)
- If `include_linked_accounts` is set to `true`, the following tag is also provided:
  - account (The ID of the account where the metrics are located.)

## Troubleshooting

You can use the AWS CLI to get a list of available metrics and dimensions:

```shell
aws cloudwatch list-metrics --namespace AWS/EC2 --region us-east-1
aws cloudwatch list-metrics --namespace AWS/EC2 --region us-east-1 --metric-name CPUCreditBalance
```

If the expected metrics are not returned, you can try getting them manually
for a short period of time:

```shell
aws cloudwatch get-metric-data \
  --start-time 2018-07-01T00:00:00Z \
  --end-time 2018-07-01T00:15:00Z \
  --metric-data-queries '[
  {
    "Id": "avgCPUCreditBalance",
    "MetricStat": {
      "Metric": {
        "Namespace": "AWS/EC2",
        "MetricName": "CPUCreditBalance",
        "Dimensions": [
          {
            "Name": "InstanceId",
            "Value": "i-deadbeef"
          }
        ]
      },
      "Period": 300,
      "Stat": "Average"
    },
    "Label": "avgCPUCreditBalance"
  }
]'
```

## Example Output

See the discussion above about sparse vs. dense metrics for more details.

```text
cloudwatch_aws_elb,load_balancer_name=p-example,region=us-east-1 latency_average=0.004810798017284538,latency_maximum=0.1100282669067383,latency_minimum=0.0006084442138671875,latency_sample_count=4029,latency_sum=19.382705211639404 1459542420000000000
```
---
description: "Telegraf plugin for collecting metrics from Amazon CloudWatch Metric Streams"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Amazon CloudWatch Metric Streams
    identifier: input-cloudwatch_metric_streams
tags: [Amazon CloudWatch Metric Streams, "input-plugins", "configuration", "cloud"]
introduced: "v1.24.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/cloudwatch_metric_streams/README.md, Amazon CloudWatch Metric Streams Plugin Source
---

# Amazon CloudWatch Metric Streams Input Plugin

This plugin listens for metrics sent via HTTP by
[CloudWatch metric streams](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Metric-Streams.html) implementing the required
[response specifications](https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html).

> [!IMPORTANT]
> Using this plugin can incur costs; see the _Metric Streams example_ in
> [CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing).

**Introduced in:** Telegraf v1.24.0
**Tags:** cloud
**OS support:** all

## Service Input <!-- @/docs/includes/service_input.md -->

This plugin is a service input. Normal plugins gather metrics determined by the
interval setting. Service plugins start a service to listen and wait for
metrics or events to occur. Service plugins have two key differences from
normal plugins:

1. The global or plugin specific `interval` setting may not apply
2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
   output for this plugin

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# AWS Metric Streams listener
[[inputs.cloudwatch_metric_streams]]
  ## Address and port to host HTTP listener on
  service_address = ":443"

  ## Paths to listen to.
  # paths = ["/telegraf"]

  ## Maximum duration before timing out read of the request
  # read_timeout = "10s"

  ## Maximum duration before timing out write of the response
  # write_timeout = "10s"

  ## Maximum allowed HTTP request body size in bytes.
  ## 0 means to use the default of 524,288,000 bytes (500 mebibytes)
  # max_body_size = "500MB"

  ## Optional access key for Firehose security.
  # access_key = "test-key"

  ## An optional flag to keep Metric Streams metrics compatible with
  ## CloudWatch's API naming
  # api_compatability = false

  ## Set one or more allowed client CA certificate file names to
  ## enable mutually authenticated TLS connections
  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]

  ## Add service certificate and key
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
```

## Troubleshooting

The plugin has its own internal metrics for troubleshooting:

* Requests Received
  * The number of requests received by the listener.
* Writes Served
  * The number of writes served by the listener.
* Bad Requests
  * The number of bad requests, separated by the error code as a tag.
* Request Time
  * The duration of the request measured in ns.
* Age Max
  * The maximum age of a metric in this interval. This is useful for offsetting
    any lag or latency measurements in a metrics pipeline that measures based
    on the timestamp.
* Age Min
  * The minimum age of a metric in this interval.

Specific errors will be logged and an error will be returned to AWS.

For additional help, check the [Firehose troubleshooting](https://docs.aws.amazon.com/firehose/latest/dev/http_troubleshooting.html)
page.

## Metrics

Metrics sent by AWS are Base64-encoded blocks of JSON data.
The JSON block below is the Base64-decoded data in the `data`
field of a `record`.
There can be multiple blocks of JSON for each `data` field,
and there can be multiple `record` fields in a request.

A decoded metric may look like this:

```json
{
  "metric_stream_name": "sandbox-dev-cloudwatch-metric-stream",
  "account_id": "541737779709",
  "region": "us-west-2",
  "namespace": "AWS/EC2",
  "metric_name": "CPUUtilization",
  "dimensions": {
    "InstanceId": "i-0efc7ghy09c123428"
  },
  "timestamp": 1651679580000,
  "value": {
    "max": 10.011666666666667,
    "min": 10.011666666666667,
    "sum": 10.011666666666667,
    "count": 1
  },
  "unit": "Percent"
}
```
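The decoding described above can be sketched in a few lines of Python. This is an illustrative round-trip over a hypothetical single-object payload; real Firehose requests carry multiple records, each with possibly several JSON blocks in its `data` field:

```python
import base64
import json

# Hypothetical metric payload, shaped like the sample JSON above.
payload = {
    "namespace": "AWS/EC2",
    "metric_name": "CPUUtilization",
    "value": {"max": 10.01, "min": 10.01, "sum": 10.01, "count": 1},
}

# Simulate the Base64-encoded "data" field of a Firehose record...
data = base64.b64encode(json.dumps(payload).encode())

# ...and decode it back the way a listener would before emitting metrics.
decoded = json.loads(base64.b64decode(data))
print(decoded["metric_name"])  # CPUUtilization
```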
### Tags

All tags in the `dimensions` list are added as tags to the metric.

The `account_id` and `region` tags are added to each metric as well.

### Measurements and Fields

The metric name is a combination of `namespace` and `metric_name`,
separated by `_` and lowercased.
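For instance, applied to the sample JSON, the naming rule yields the measurement used in the example output; treating `/` in the namespace as another `_` is an assumption consistent with that output:

```python
# Sketch of the documented naming rule: join namespace and metric name with
# "_" and lowercase. Replacing "/" with "_" is assumed from the example output.
def measurement_name(namespace: str, metric_name: str) -> str:
    return f"{namespace}_{metric_name}".replace("/", "_").lower()

print(measurement_name("AWS/EC2", "CPUUtilization"))  # aws_ec2_cpuutilization
```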
The fields are the aggregates in the `value` list.

These fields are optionally renamed to match the CloudWatch API for an easier
transition from the API to Metric Streams. This relies on setting the
`api_compatability` flag in the configuration.

The timestamp applied is the timestamp from the metric, typically 3-5 minutes
older than the time processed due to CloudWatch delays.

## Example Output

Example output based on the above JSON and the `api_compatability` flag:

**Standard Metric Streams format:**

```text
aws_ec2_cpuutilization,accountId=541737779709,region=us-west-2,InstanceId=i-0efc7ghy09c123428 max=10.011666666666667,min=10.011666666666667,sum=10.011666666666667,count=1 1651679580000
```

**API Compatability format:**

```text
aws_ec2_cpuutilization,accountId=541737779709,region=us-west-2,InstanceId=i-0efc7ghy09c123428 maximum=10.011666666666667,minimum=10.011666666666667,sum=10.011666666666667,samplecount=1 1651679580000
```
---
description: "Telegraf plugin for collecting metrics from Netfilter Conntrack"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Netfilter Conntrack
    identifier: input-conntrack
tags: [Netfilter Conntrack, "input-plugins", "configuration", "system"]
introduced: "v1.0.0"
os_support: "linux"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/conntrack/README.md, Netfilter Conntrack Plugin Source
---

# Netfilter Conntrack Input Plugin

This plugin collects metrics from [Netfilter's conntrack tools](https://conntrack-tools.netfilter.org/).
There are two collection mechanisms for this plugin:

1. Extracting information from `/proc/net/stat/nf_conntrack` files if the
   `collect` option is set accordingly, for finding CPU-specific values.
2. Using specific files and directories by specifying the `dirs` option. At
   runtime, conntrack exposes many of those connection statistics within
   `/proc/sys/net`. Depending on your kernel version, these files can be found
   in either `/proc/sys/net/ipv4/netfilter` or `/proc/sys/net/netfilter` and
   will be prefixed with either `ip` or `nf`.

To simplify configuration in a heterogeneous environment, a superset of
directories and filenames can be specified. Any locations that do not exist
are ignored.

**Introduced in:** Telegraf v1.0.0
**Tags:** system
**OS support:** linux

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Collects conntrack stats from the configured directories and files.
# This plugin ONLY supports Linux
[[inputs.conntrack]]
  ## The following defaults would work with multiple versions of conntrack.
  ## Note the nf_ and ip_ filename prefixes are mutually exclusive across
  ## kernel versions, as are the directory locations.

  ## Look through /proc/net/stat/nf_conntrack for these metrics
  ## all - aggregated statistics
  ## percpu - include detailed statistics with cpu tag
  collect = ["all", "percpu"]

  ## User-specified directories and files to look through
  ## Directories to search within for the conntrack files above.
  ## Missing directories will be ignored.
  dirs = ["/proc/sys/net/ipv4/netfilter","/proc/sys/net/netfilter"]

  ## Superset of filenames to look for within the conntrack dirs.
  ## Missing files will be ignored.
  files = ["ip_conntrack_count","ip_conntrack_max",
           "nf_conntrack_count","nf_conntrack_max"]
```
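To check which of the candidate locations exist on your kernel before relying on the plugin, you can probe the same superset yourself. A minimal Python sketch mirroring the "missing locations are ignored" behavior:

```python
import os

# Superset of candidate locations from the sample config above; locations
# that do not exist on the running kernel are simply skipped.
dirs = ["/proc/sys/net/ipv4/netfilter", "/proc/sys/net/netfilter"]
files = ["ip_conntrack_count", "ip_conntrack_max",
         "nf_conntrack_count", "nf_conntrack_max"]

found = [os.path.join(d, f) for d in dirs for f in files
         if os.path.isfile(os.path.join(d, f))]
for path in found:
    with open(path) as fh:
        print(path, fh.read().strip())
```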
## Metrics

A detailed explanation of each field can be found in the
[kernel documentation](https://www.kernel.org/doc/Documentation/networking/nf_conntrack-sysctl.txt).

- conntrack
  - `ip_conntrack_count` `(int, count)`: The number of entries in the conntrack table
  - `ip_conntrack_max` `(int, size)`: The max capacity of the conntrack table
  - `ip_conntrack_buckets` `(int, size)`: The size of the hash table.

With `collect = ["all"]`:

- `entries`: The number of entries in the conntrack table
- `searched`: The number of conntrack table lookups performed
- `found`: The number of searched entries which were successful
- `new`: The number of entries added which were not expected before
- `invalid`: The number of packets seen which can not be tracked
- `ignore`: The number of packets seen which are already connected to an entry
- `delete`: The number of entries which were removed
- `delete_list`: The number of entries which were put to dying list
- `insert`: The number of entries inserted into the list
- `insert_failed`: The number of insertions attempted but failed (duplicate entry)
- `drop`: The number of packets dropped due to conntrack failure
- `early_drop`: The number of dropped entries to make room for new ones, if
  `maxsize` is reached
- `icmp_error`: Subset of invalid. Packets that can't be tracked due to error
- `expect_new`: Entries added after an expectation was already present
- `expect_create`: Expectations added
- `expect_delete`: Expectations deleted
- `search_restart`: Conntrack table lookups restarted due to hashtable resizes

### Tags

Setting `collect = ["percpu"]` includes detailed statistics per CPU thread.

Without `"percpu"`, the `cpu` tag has the value `all`.

## Example Output

```text
conntrack,host=myhost ip_conntrack_count=2,ip_conntrack_max=262144 1461620427667995735
```

With stats:

```text
conntrack,cpu=all,host=localhost delete=0i,delete_list=0i,drop=2i,early_drop=0i,entries=5568i,expect_create=0i,expect_delete=0i,expect_new=0i,found=7i,icmp_error=1962i,ignore=2586413402i,insert=0i,insert_failed=2i,invalid=46853i,new=0i,search_restart=453336i,searched=0i 1615233542000000000
conntrack,host=localhost ip_conntrack_count=464,ip_conntrack_max=262144 1615233542000000000
```
---
description: "Telegraf plugin for collecting metrics from Hashicorp Consul"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Hashicorp Consul
    identifier: input-consul
tags: [Hashicorp Consul, "input-plugins", "configuration", "server"]
introduced: "v1.0.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/consul/README.md, Hashicorp Consul Plugin Source
---

# Hashicorp Consul Input Plugin

This plugin collects statistics about all health checks registered in
[Consul](https://www.consul.io) using the [Consul API](https://www.consul.io/docs/agent/http/health.html#health_state). The plugin does not report any
[telemetry metrics](https://www.consul.io/docs/agent/telemetry.html), but Consul can report those statistics using
the StatsD protocol if needed.

**Introduced in:** Telegraf v1.0.0
**Tags:** server
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Gather health check statuses from services registered in Consul
[[inputs.consul]]
  ## Consul server address
  # address = "localhost:8500"

  ## URI scheme for the Consul server, one of "http", "https"
  # scheme = "http"

  ## Metric version controls the mapping from Consul metrics into
  ## Telegraf metrics. Version 2 moved all fields with string values
  ## to tags.
  ##
  ## example: metric_version = 1; deprecated in 1.16
  ##          metric_version = 2; recommended version
  # metric_version = 1

  ## ACL token used in every request
  # token = ""

  ## HTTP Basic Authentication username and password.
  # username = ""
  # password = ""

  ## Data center to query the health checks from
  # datacenter = ""

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = true

  ## Consul checks' tag splitting
  # When tags are formatted like "key:value" with ":" as a delimiter then
  # they will be split and reported as proper key:value in Telegraf
  # tag_delimiter = ":"
```

## Metrics

### metric_version = 1

- consul_health_checks
  - tags:
    - node (node that check/service is registered on)
    - service_name
    - check_id
  - fields:
    - check_name
    - service_id
    - status
    - passing (integer)
    - critical (integer)
    - warning (integer)

### metric_version = 2

- consul_health_checks
  - tags:
    - node (node that check/service is registered on)
    - service_name
    - check_id
    - check_name
    - service_id
    - status
  - fields:
    - passing (integer)
    - critical (integer)
    - warning (integer)

`passing`, `critical`, and `warning` are integer representations of the health
check state. A value of `1` means the health check was in that state at the
time of the sample. `status` is a string representation of the same state.
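The one-hot encoding described above can be sketched as follows; `state_fields` is a hypothetical helper for illustration, not part of the plugin:

```python
# One-hot encode a health check state into the three integer fields,
# mirroring the documented behavior (exactly one field is 1 per sample).
def state_fields(status: str) -> dict:
    return {s: int(s == status) for s in ("passing", "critical", "warning")}

print(state_fields("critical"))  # {'passing': 0, 'critical': 1, 'warning': 0}
```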
## Example Output

```text
consul_health_checks,host=wolfpit,node=consul-server-node,check_id="serfHealth" check_name="Serf Health Status",service_id="",status="passing",passing=1i,critical=0i,warning=0i 1464698464486439902
consul_health_checks,host=wolfpit,node=consul-server-node,service_name=www.example.com,check_id="service:www-example-com.test01" check_name="Service 'www.example.com' check",service_id="www-example-com.test01",status="critical",passing=0i,critical=1i,warning=0i 1464698464486519036
```

---
description: "Telegraf plugin for collecting metrics from Hashicorp Consul Agent"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Hashicorp Consul Agent
    identifier: input-consul_agent
tags: [Hashicorp Consul Agent, "input-plugins", "configuration", "server"]
introduced: "v1.22.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/consul_agent/README.md, Hashicorp Consul Agent Plugin Source
---

# Hashicorp Consul Agent Input Plugin

This plugin collects metrics from a [Consul agent](https://developer.hashicorp.com/consul/commands/agent). Telegraf may be
present on every node and connect to the agent locally. Tested on Consul v1.10.

**Introduced in:** Telegraf v1.22.0
**Tags:** server
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Read metrics from the Consul Agent API
[[inputs.consul_agent]]
  ## URL for the Consul agent
  # url = "http://127.0.0.1:8500"

  ## Use auth token for authorization.
  ## If both are set, an error is thrown.
  ## If both are empty, no token will be used.
  # token_file = "/path/to/auth/token"
  ## OR
  # token = "a1234567-40c7-9048-7bae-378687048181"

  ## Set timeout (default 5 seconds)
  # timeout = "5s"

  ## Optional TLS Config
  # tls_ca = /path/to/cafile
  # tls_cert = /path/to/certfile
  # tls_key = /path/to/keyfile
```

## Metrics

Consul collects various metrics. For details, see
[Consul's documentation](https://www.consul.io/api/agent#view-metrics).

## Example Output

---
description: "Telegraf plugin for collecting metrics from Couchbase"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Couchbase
    identifier: input-couchbase
tags: [Couchbase, "input-plugins", "configuration", "server"]
introduced: "v0.12.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/couchbase/README.md, Couchbase Plugin Source
---

# Couchbase Input Plugin

This plugin collects metrics from [Couchbase](https://www.couchbase.com/), a distributed NoSQL
database. Metrics are collected for each node, as well as detailed metrics for
each bucket, for a given Couchbase server.

**Introduced in:** Telegraf v0.12.0
**Tags:** server
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Read per-node and per-bucket metrics from Couchbase
[[inputs.couchbase]]
  ## specify servers via a url matching:
  ##  [protocol://]()@address[:port]
  ##  e.g.
  ##    http://couchbase-0.example.com/
  ##    http://admin:secret@couchbase-0.example.com:8091/
  ##
  ## If no servers are specified, then localhost is used as the host.
  ## If no protocol is specified, HTTP is used.
  ## If no port is specified, 8091 is used.
  servers = ["http://localhost:8091"]

  ## Filter bucket fields to include only here.
  # bucket_stats_included = ["quota_percent_used", "ops_per_sec", "disk_fetches", "item_count", "disk_used", "data_used", "mem_used"]

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification (defaults to false)
  ## If set to false, tls_cert and tls_key are required
  # insecure_skip_verify = false

  ## Whether to collect cluster-wide bucket statistics
  ## It is recommended to disable this in favor of node_stats
  ## to get a better view of the cluster.
  # cluster_bucket_stats = true

  ## Whether to collect bucket stats for each individual node
  # node_bucket_stats = false

  ## List of additional stats to collect, choose from:
  ## * autofailover
  # additional_stats = []
```

## Metrics

### couchbase_node

Tags:

- cluster: sanitized string from the `servers` configuration field,
  e.g.: `http://user:password@couchbase-0.example.com:8091/endpoint` becomes
  `http://couchbase-0.example.com:8091/endpoint`
- hostname: Couchbase's name for the node and port, e.g., `172.16.10.187:8091`

Fields:

- memory_free (unit: bytes, example: 23181365248.0)
- memory_total (unit: bytes, example: 64424656896.0)
### couchbase_autofailover
|
||||
|
||||
Tags:
|
||||
|
||||
- cluster: sanitized string from `servers` configuration field
|
||||
e.g.: `http://user:password@couchbase-0.example.com:8091/endpoint` becomes
|
||||
`http://couchbase-0.example.com:8091/endpoint`
|
||||
|
||||
Fields:
|
||||
|
||||
- count (unit: int, example: 1)
|
||||
- enabled (unit: bool, example: true)
|
||||
- max_count (unit: int, example: 2)
|
||||
- timeout (unit: int, example: 72)
|
||||
|
||||
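The credential-stripping applied to the `cluster` tag can be sketched in a few lines of Python (illustrative only, not the plugin's actual Go code):

```python
from urllib.parse import urlparse, urlunparse

def sanitize_cluster(url: str) -> str:
    """Drop any user:password@ prefix from a server URL, as the cluster tag does."""
    parts = urlparse(url)
    netloc = parts.netloc.rsplit("@", 1)[-1]  # strip credentials if present
    return urlunparse(parts._replace(netloc=netloc))

print(sanitize_cluster("http://user:password@couchbase-0.example.com:8091/endpoint"))
# http://couchbase-0.example.com:8091/endpoint
```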
### couchbase_bucket and couchbase_node_bucket

Tags:

- cluster: the server URL as given in the `servers` configuration field,
  e.g., `http://couchbase-0.example.com/`
- bucket: the name of the couchbase bucket, e.g., `blastro-df`
- hostname: the hostname of the node the bucket metrics were collected
  from, e.g., `172.16.10.187:8091` (only present in `couchbase_node_bucket`)

Default bucket fields:

- quota_percent_used (unit: percent, example: 68.85424936294555)
- ops_per_sec (unit: count, example: 5686.789686789687)
- disk_fetches (unit: count, example: 0.0)
- item_count (unit: count, example: 943239752.0)
- disk_used (unit: bytes, example: 409178772321.0)
- data_used (unit: bytes, example: 212179309111.0)
- mem_used (unit: bytes, example: 202156957464.0)

Additional fields that can be configured with the `bucket_stats_included`
option:

- couch_total_disk_size
- couch_docs_fragmentation
- couch_views_fragmentation
- hit_ratio
- ep_cache_miss_rate
- ep_resident_items_rate
- vb_avg_active_queue_age
- vb_avg_replica_queue_age
- vb_avg_pending_queue_age
- vb_avg_total_queue_age
- vb_active_resident_items_ratio
- vb_replica_resident_items_ratio
- vb_pending_resident_items_ratio
- avg_disk_update_time
- avg_disk_commit_time
- avg_bg_wait_time
- avg_active_timestamp_drift
- avg_replica_timestamp_drift
- ep_dcp_views+indexes_count
- ep_dcp_views+indexes_items_remaining
- ep_dcp_views+indexes_producer_count
- ep_dcp_views+indexes_total_backlog_size
- ep_dcp_views+indexes_items_sent
- ep_dcp_views+indexes_total_bytes
- ep_dcp_views+indexes_backoff
- bg_wait_count
- bg_wait_total
- bytes_read
- bytes_written
- cas_badval
- cas_hits
- cas_misses
- cmd_get
- cmd_lookup
- cmd_set
- couch_docs_actual_disk_size
- couch_docs_data_size
- couch_docs_disk_size
- couch_spatial_data_size
- couch_spatial_disk_size
- couch_spatial_ops
- couch_views_actual_disk_size
- couch_views_data_size
- couch_views_disk_size
- couch_views_ops
- curr_connections
- curr_items
- curr_items_tot
- decr_hits
- decr_misses
- delete_hits
- delete_misses
- disk_commit_count
- disk_commit_total
- disk_update_count
- disk_update_total
- disk_write_queue
- ep_active_ahead_exceptions
- ep_active_hlc_drift
- ep_active_hlc_drift_count
- ep_bg_fetched
- ep_clock_cas_drift_threshold_exceeded
- ep_data_read_failed
- ep_data_write_failed
- ep_dcp_2i_backoff
- ep_dcp_2i_count
- ep_dcp_2i_items_remaining
- ep_dcp_2i_items_sent
- ep_dcp_2i_producer_count
- ep_dcp_2i_total_backlog_size
- ep_dcp_2i_total_bytes
- ep_dcp_cbas_backoff
- ep_dcp_cbas_count
- ep_dcp_cbas_items_remaining
- ep_dcp_cbas_items_sent
- ep_dcp_cbas_producer_count
- ep_dcp_cbas_total_backlog_size
- ep_dcp_cbas_total_bytes
- ep_dcp_eventing_backoff
- ep_dcp_eventing_count
- ep_dcp_eventing_items_remaining
- ep_dcp_eventing_items_sent
- ep_dcp_eventing_producer_count
- ep_dcp_eventing_total_backlog_size
- ep_dcp_eventing_total_bytes
- ep_dcp_fts_backoff
- ep_dcp_fts_count
- ep_dcp_fts_items_remaining
- ep_dcp_fts_items_sent
- ep_dcp_fts_producer_count
- ep_dcp_fts_total_backlog_size
- ep_dcp_fts_total_bytes
- ep_dcp_other_backoff
- ep_dcp_other_count
- ep_dcp_other_items_remaining
- ep_dcp_other_items_sent
- ep_dcp_other_producer_count
- ep_dcp_other_total_backlog_size
- ep_dcp_other_total_bytes
- ep_dcp_replica_backoff
- ep_dcp_replica_count
- ep_dcp_replica_items_remaining
- ep_dcp_replica_items_sent
- ep_dcp_replica_producer_count
- ep_dcp_replica_total_backlog_size
- ep_dcp_replica_total_bytes
- ep_dcp_views_backoff
- ep_dcp_views_count
- ep_dcp_views_items_remaining
- ep_dcp_views_items_sent
- ep_dcp_views_producer_count
- ep_dcp_views_total_backlog_size
- ep_dcp_views_total_bytes
- ep_dcp_xdcr_backoff
- ep_dcp_xdcr_count
- ep_dcp_xdcr_items_remaining
- ep_dcp_xdcr_items_sent
- ep_dcp_xdcr_producer_count
- ep_dcp_xdcr_total_backlog_size
- ep_dcp_xdcr_total_bytes
- ep_diskqueue_drain
- ep_diskqueue_fill
- ep_diskqueue_items
- ep_flusher_todo
- ep_item_commit_failed
- ep_kv_size
- ep_max_size
- ep_mem_high_wat
- ep_mem_low_wat
- ep_meta_data_memory
- ep_num_non_resident
- ep_num_ops_del_meta
- ep_num_ops_del_ret_meta
- ep_num_ops_get_meta
- ep_num_ops_set_meta
- ep_num_ops_set_ret_meta
- ep_num_value_ejects
- ep_oom_errors
- ep_ops_create
- ep_ops_update
- ep_overhead
- ep_queue_size
- ep_replica_ahead_exceptions
- ep_replica_hlc_drift
- ep_replica_hlc_drift_count
- ep_tmp_oom_errors
- ep_vb_total
- evictions
- get_hits
- get_misses
- incr_hits
- incr_misses
- mem_used
- misses
- ops
- timestamp
- vb_active_eject
- vb_active_itm_memory
- vb_active_meta_data_memory
- vb_active_num
- vb_active_num_non_resident
- vb_active_ops_create
- vb_active_ops_update
- vb_active_queue_age
- vb_active_queue_drain
- vb_active_queue_fill
- vb_active_queue_size
- vb_active_sync_write_aborted_count
- vb_active_sync_write_accepted_count
- vb_active_sync_write_committed_count
- vb_pending_curr_items
- vb_pending_eject
- vb_pending_itm_memory
- vb_pending_meta_data_memory
- vb_pending_num
- vb_pending_num_non_resident
- vb_pending_ops_create
- vb_pending_ops_update
- vb_pending_queue_age
- vb_pending_queue_drain
- vb_pending_queue_fill
- vb_pending_queue_size
- vb_replica_curr_items
- vb_replica_eject
- vb_replica_itm_memory
- vb_replica_meta_data_memory
- vb_replica_num
- vb_replica_num_non_resident
- vb_replica_ops_create
- vb_replica_ops_update
- vb_replica_queue_age
- vb_replica_queue_drain
- vb_replica_queue_fill
- vb_replica_queue_size
- vb_total_queue_age
- xdc_ops
- allocstall
- cpu_cores_available
- cpu_irq_rate
- cpu_stolen_rate
- cpu_sys_rate
- cpu_user_rate
- cpu_utilization_rate
- hibernated_requests
- hibernated_waked
- mem_actual_free
- mem_actual_used
- mem_free
- mem_limit
- mem_total
- mem_used_sys
- odp_report_failed
- rest_requests
- swap_total
- swap_used

## Example Output

```text
couchbase_node,cluster=http://localhost:8091/,hostname=172.17.0.2:8091 memory_free=7705575424,memory_total=16558182400 1547829754000000000
couchbase_bucket,bucket=beer-sample,cluster=http://localhost:8091/ quota_percent_used=27.09285736083984,ops_per_sec=0,disk_fetches=0,item_count=7303,disk_used=21662946,data_used=9325087,mem_used=28408920 1547829754000000000
```

---
description: "Telegraf plugin for collecting metrics from Apache CouchDB"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Apache CouchDB
    identifier: input-couchdb
tags: [Apache CouchDB, "input-plugins", "configuration", "server"]
introduced: "v0.10.3"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/couchdb/README.md, Apache CouchDB Plugin Source
---

# Apache CouchDB Input Plugin

This plugin gathers metrics from [Apache CouchDB](https://couchdb.apache.org/) instances using the
[stats](http://docs.couchdb.org/en/1.6.1/api/server/common.html?highlight=stats#get--_stats) endpoint.

**Introduced in:** Telegraf v0.10.3
**Tags:** server
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Read CouchDB Stats from one or more servers
[[inputs.couchdb]]
  ## Works with CouchDB stats endpoints out of the box
  ## Multiple Hosts from which to read CouchDB stats:
  hosts = ["http://localhost:8086/_stats"]

  ## Use HTTP Basic Authentication.
  # basic_username = "telegraf"
  # basic_password = "p@ssw0rd"
```

## Metrics

Statistics specific to the internals of CouchDB:

- couchdb_auth_cache_misses
- couchdb_database_writes
- couchdb_open_databases
- couchdb_auth_cache_hits
- couchdb_request_time
- couchdb_database_reads
- couchdb_open_os_files

Statistics of HTTP requests by method:

- httpd_request_methods_put
- httpd_request_methods_get
- httpd_request_methods_copy
- httpd_request_methods_delete
- httpd_request_methods_post
- httpd_request_methods_head

Statistics of HTTP requests by response code:

- httpd_status_codes_200
- httpd_status_codes_201
- httpd_status_codes_202
- httpd_status_codes_301
- httpd_status_codes_304
- httpd_status_codes_400
- httpd_status_codes_401
- httpd_status_codes_403
- httpd_status_codes_404
- httpd_status_codes_405
- httpd_status_codes_409
- httpd_status_codes_412
- httpd_status_codes_500

httpd statistics:

- httpd_clients_requesting_changes
- httpd_temporary_view_reads
- httpd_requests
- httpd_bulk_requests
- httpd_view_reads

## Tags

- server (URL of the CouchDB `_stats` endpoint)

## Example Output

### CouchDB 2.0 and later

```text
couchdb,server=http://couchdb22:5984/_node/_local/_stats couchdb_auth_cache_hits_value=0,httpd_request_methods_delete_value=0,couchdb_auth_cache_misses_value=0,httpd_request_methods_get_value=42,httpd_status_codes_304_value=0,httpd_status_codes_400_value=0,httpd_request_methods_head_value=0,httpd_status_codes_201_value=0,couchdb_database_reads_value=0,httpd_request_methods_copy_value=0,couchdb_request_time_max=0,httpd_status_codes_200_value=42,httpd_status_codes_301_value=0,couchdb_open_os_files_value=2,httpd_request_methods_put_value=0,httpd_request_methods_post_value=0,httpd_status_codes_202_value=0,httpd_status_codes_403_value=0,httpd_status_codes_409_value=0,couchdb_database_writes_value=0,couchdb_request_time_min=0,httpd_status_codes_412_value=0,httpd_status_codes_500_value=0,httpd_status_codes_401_value=0,httpd_status_codes_404_value=0,httpd_status_codes_405_value=0,couchdb_open_databases_value=0 1536707179000000000
```

### Before CouchDB 2.0

```text
couchdb,server=http://couchdb16:5984/_stats couchdb_request_time_sum=96,httpd_status_codes_200_sum=37,httpd_status_codes_200_min=0,httpd_requests_mean=0.005,httpd_requests_min=0,couchdb_request_time_stddev=3.833,couchdb_request_time_min=1,httpd_request_methods_get_stddev=0.073,httpd_request_methods_get_min=0,httpd_status_codes_200_mean=0.005,httpd_status_codes_200_max=1,httpd_requests_sum=37,couchdb_request_time_current=96,httpd_request_methods_get_sum=37,httpd_request_methods_get_mean=0.005,httpd_request_methods_get_max=1,httpd_status_codes_200_stddev=0.073,couchdb_request_time_mean=2.595,couchdb_request_time_max=25,httpd_request_methods_get_current=37,httpd_status_codes_200_current=37,httpd_requests_current=37,httpd_requests_stddev=0.073,httpd_requests_max=1 1536707179000000000
```
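The field names in the outputs above come from flattening CouchDB's nested stats JSON, joining nested keys with underscores. A rough illustrative sketch of that naming rule (not the plugin's actual code, input data made up):

```python
def flatten(stats: dict, prefix: str = "") -> dict:
    """Join nested JSON keys with '_' to form Telegraf-style field names."""
    fields = {}
    for key, value in stats.items():
        name = f"{prefix}_{key}" if prefix else key
        if isinstance(value, dict):
            fields.update(flatten(value, name))  # recurse into nested objects
        else:
            fields[name] = value
    return fields

sample = {"couchdb": {"request_time": {"min": 1, "max": 25}}}
print(flatten(sample))
# {'couchdb_request_time_min': 1, 'couchdb_request_time_max': 25}
```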
---
description: "Telegraf plugin for collecting metrics from CPU"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: CPU
    identifier: input-cpu
tags: [CPU, "input-plugins", "configuration", "system"]
introduced: "v0.1.5"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/cpu/README.md, CPU Plugin Source
---

# CPU Input Plugin

This plugin gathers metrics about the system's CPUs.

**Introduced in:** Telegraf v0.1.5
**Tags:** system
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Read metrics about cpu usage
[[inputs.cpu]]
  ## Whether to report per-cpu stats or not
  percpu = true
  ## Whether to report total system cpu stats or not
  totalcpu = true
  ## If true, collect raw CPU time metrics
  collect_cpu_time = false
  ## If true, compute and report the sum of all non-idle CPU states
  ## NOTE: The resulting 'time_active' field INCLUDES 'iowait'!
  report_active = false
  ## If true and the info is available then add core_id and physical_id tags
  core_tags = false
```

## Metrics

On Linux, consult `man proc` for details on the meanings of these values.

- cpu
  - tags:
    - cpu (CPU ID or `cpu-total`)
  - fields:
    - time_user (float)
    - time_system (float)
    - time_idle (float)
    - time_active (float)
    - time_nice (float)
    - time_iowait (float)
    - time_irq (float)
    - time_softirq (float)
    - time_steal (float)
    - time_guest (float)
    - time_guest_nice (float)
    - usage_user (float, percent)
    - usage_system (float, percent)
    - usage_idle (float, percent)
    - usage_active (float)
    - usage_nice (float, percent)
    - usage_iowait (float, percent)
    - usage_irq (float, percent)
    - usage_softirq (float, percent)
    - usage_steal (float, percent)
    - usage_guest (float, percent)
    - usage_guest_nice (float, percent)

## Troubleshooting

On Linux systems, the `/proc/stat` file is used to gather CPU times.
Percentages are based on the last two samples.
The `core_id` and `physical_id` tags are read from `/proc/cpuinfo` on Linux systems.

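Since percentages are derived from the deltas between the last two samples, the computation can be sketched as follows (illustrative only, with made-up counter values; the real plugin reads all `/proc/stat` states):

```python
def usage_percent(prev: dict, curr: dict) -> dict:
    """Per-state CPU usage percentage between two /proc/stat samples."""
    total_delta = sum(curr.values()) - sum(prev.values())
    return {
        state: 100.0 * (curr[state] - prev[state]) / total_delta
        for state in curr
    }

prev = {"user": 100.0, "system": 50.0, "idle": 850.0}
curr = {"user": 130.0, "system": 60.0, "idle": 910.0}
print(usage_percent(prev, curr))
# {'user': 30.0, 'system': 10.0, 'idle': 60.0}
```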
## Example Output

```text
cpu,cpu=cpu0,host=loaner time_active=202224.15999999992,time_guest=30250.35,time_guest_nice=0,time_idle=1527035.04,time_iowait=1352,time_irq=0,time_nice=169.28,time_softirq=6281.4,time_steal=0,time_system=40097.14,time_user=154324.34 1568760922000000000
cpu,cpu=cpu0,host=loaner usage_active=31.249999981810106,usage_guest=2.083333333080696,usage_guest_nice=0,usage_idle=68.7500000181899,usage_iowait=0,usage_irq=0,usage_nice=0,usage_softirq=0,usage_steal=0,usage_system=4.166666666161392,usage_user=25.000000002273737 1568760922000000000
cpu,cpu=cpu1,host=loaner time_active=201890.02000000002,time_guest=30508.41,time_guest_nice=0,time_idle=264641.18,time_iowait=210.44,time_irq=0,time_nice=181.75,time_softirq=4537.88,time_steal=0,time_system=39480.7,time_user=157479.25 1568760922000000000
cpu,cpu=cpu1,host=loaner usage_active=12.500000010610771,usage_guest=2.0833333328280585,usage_guest_nice=0,usage_idle=87.49999998938922,usage_iowait=0,usage_irq=0,usage_nice=0,usage_softirq=2.0833333332070145,usage_steal=0,usage_system=4.166666665656117,usage_user=4.166666666414029 1568760922000000000
cpu,cpu=cpu2,host=loaner time_active=201382.78999999998,time_guest=30325.8,time_guest_nice=0,time_idle=264686.63,time_iowait=202.77,time_irq=0,time_nice=162.81,time_softirq=3378.34,time_steal=0,time_system=39270.59,time_user=158368.28 1568760922000000000
cpu,cpu=cpu2,host=loaner usage_active=15.999999993480742,usage_guest=1.9999999999126885,usage_guest_nice=0,usage_idle=84.00000000651926,usage_iowait=0,usage_irq=0,usage_nice=0,usage_softirq=2.0000000002764864,usage_steal=0,usage_system=3.999999999825377,usage_user=7.999999998923158 1568760922000000000
cpu,cpu=cpu3,host=loaner time_active=198953.51000000007,time_guest=30344.43,time_guest_nice=0,time_idle=265504.09,time_iowait=187.64,time_irq=0,time_nice=197.47,time_softirq=2301.47,time_steal=0,time_system=39313.73,time_user=156953.2 1568760922000000000
cpu,cpu=cpu3,host=loaner usage_active=10.41666667424579,usage_guest=0,usage_guest_nice=0,usage_idle=89.58333332575421,usage_iowait=0,usage_irq=0,usage_nice=0,usage_softirq=0,usage_steal=0,usage_system=4.166666666666667,usage_user=6.249999998484175 1568760922000000000
cpu,cpu=cpu-total,host=loaner time_active=804450.5299999998,time_guest=121429,time_guest_nice=0,time_idle=2321866.96,time_iowait=1952.86,time_irq=0,time_nice=711.32,time_softirq=16499.1,time_steal=0,time_system=158162.17,time_user=627125.08 1568760922000000000
cpu,cpu=cpu-total,host=loaner usage_active=17.616580305880305,usage_guest=1.036269430422946,usage_guest_nice=0,usage_idle=82.3834196941197,usage_iowait=0,usage_irq=0,usage_nice=0,usage_softirq=1.0362694300459534,usage_steal=0,usage_system=4.145077721691784,usage_user=11.398963731636465 1568760922000000000
```

---
description: "Telegraf plugin for collecting metrics from Counter-Strike Global Offensive (CSGO)"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Counter-Strike Global Offensive (CSGO)
    identifier: input-csgo
tags: [Counter-Strike Global Offensive (CSGO), "input-plugins", "configuration", "server"]
introduced: "v1.18.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/csgo/README.md, Counter-Strike Global Offensive (CSGO) Plugin Source
---

# Counter-Strike: Global Offensive (CSGO) Input Plugin

This plugin gathers metrics from [Counter-Strike: Global Offensive](https://www.counter-strike.net/)
servers.

**Introduced in:** Telegraf v1.18.0
**Tags:** server
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Fetch metrics from a CSGO SRCDS
[[inputs.csgo]]
  ## Specify servers using the following format:
  ##    servers = [
  ##      ["ip1:port1", "rcon_password1"],
  ##      ["ip2:port2", "rcon_password2"],
  ##    ]
  #
  ## If no servers are specified, no data will be collected
  servers = []
```

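A filled-in configuration for two servers might look like the following sketch (addresses and RCON passwords are placeholders):

```toml
[[inputs.csgo]]
  ## Placeholder addresses and RCON passwords
  servers = [
    ["192.168.1.10:27015", "rcon_password1"],
    ["192.168.1.11:27015", "rcon_password2"],
  ]
```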
## Metrics

The plugin retrieves the output of the `stats` command that is executed via
rcon.

If no servers are specified, no data will be collected.

- csgo
  - tags:
    - host
  - fields:
    - cpu (float)
    - net_in (float)
    - net_out (float)
    - uptime_minutes (float)
    - maps (float)
    - fps (float)
    - players (float)
    - sv_ms (float)
    - variance_ms (float)
    - tick_ms (float)

## Example Output

---
description: "Telegraf plugin for collecting metrics from Bosch Rexroth ctrlX Data Layer"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Bosch Rexroth ctrlX Data Layer
    identifier: input-ctrlx_datalayer
tags: [Bosch Rexroth ctrlX Data Layer, "input-plugins", "configuration", "iot", "messaging"]
introduced: "v1.27.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/ctrlx_datalayer/README.md, Bosch Rexroth ctrlX Data Layer Plugin Source
---

# Bosch Rexroth ctrlX Data Layer Input Plugin

This plugin gathers data from the [ctrlX Data Layer](https://ctrlx-automation.com), a communication
middleware running on Bosch Rexroth [ctrlX CORE devices](https://ctrlx-core.com). The
platform is used for professional automation applications such as industrial
automation, building automation, robotics, and IoT gateways, or as a classical PLC.

**Introduced in:** Telegraf v1.27.0
**Tags:** iot, messaging
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# A ctrlX Data Layer server sent event input plugin
[[inputs.ctrlx_datalayer]]
  ## Hostname or IP address of the ctrlX CORE Data Layer server
  ##  example: server = "localhost"        # Telegraf is running directly on the device
  ##           server = "192.168.1.1"      # Connect to ctrlX CORE remote via IP
  ##           server = "host.example.com" # Connect to ctrlX CORE remote via hostname
  ##           server = "10.0.2.2:8443"    # Connect to ctrlX CORE Virtual from development environment
  server = "localhost"

  ## Authentication credentials
  username = "boschrexroth"
  password = "boschrexroth"

  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

  ## Timeout for HTTP requests. (default: "10s")
  # timeout = "10s"

  ## Create a ctrlX Data Layer subscription.
  ## It is possible to define multiple subscriptions per host. Each subscription can have its own
  ## sampling properties and a list of nodes to subscribe to.
  ## All subscriptions share the same credentials.
  [[inputs.ctrlx_datalayer.subscription]]
    ## The name of the measurement. (default: "ctrlx")
    measurement = "memory"

    ## Configure the ctrlX Data Layer nodes which should be subscribed.
    ## address - node address in ctrlX Data Layer (mandatory)
    ## name    - field name to use in the output (optional, default: base name of address)
    ## tags    - extra node tags to be added to the output metric (optional)
    ## Note:
    ## Use either the inline notation or the bracketed notation, not both.
    ## The tags property is only supported in bracketed notation due to toml parser restrictions
    ## Examples:
    ## Inline notation
    nodes=[
      {name="available", address="framework/metrics/system/memavailable-mb"},
      {name="used", address="framework/metrics/system/memused-mb"},
    ]
    ## Bracketed notation
    # [[inputs.ctrlx_datalayer.subscription.nodes]]
    #   name    = "available"
    #   address = "framework/metrics/system/memavailable-mb"
    #   ## Define extra tags related to node to be added to the output metric (optional)
    #   [inputs.ctrlx_datalayer.subscription.nodes.tags]
    #     node_tag1 = "node_tag1"
    #     node_tag2 = "node_tag2"
    # [[inputs.ctrlx_datalayer.subscription.nodes]]
    #   name    = "used"
    #   address = "framework/metrics/system/memused-mb"

    ## The switch "output_json_string" enables output of the measurement as json.
    ## That way it can be used in a subsequent processor plugin, e.g. "Starlark Processor Plugin".
    # output_json_string = false

    ## Define extra tags related to subscription to be added to the output metric (optional)
    # [inputs.ctrlx_datalayer.subscription.tags]
    #   subscription_tag1 = "subscription_tag1"
    #   subscription_tag2 = "subscription_tag2"

    ## The interval in which messages shall be sent by the ctrlX Data Layer to this plugin. (default: 1s)
    ## Higher values reduce load on network by queuing samples on server side and sending as a single TCP packet.
    # publish_interval = "1s"

    ## The interval a "keepalive" message is sent if no change of data occurs. (default: 60s)
    ## Only used internally to detect broken network connections.
    # keep_alive_interval = "60s"

    ## The interval an "error" message is sent if an error was received from a node. (default: 10s)
    ## Higher values reduce load on output target and network in case of errors by limiting frequency of error messages.
    # error_interval = "10s"

    ## The interval that defines the fastest rate at which the node values should be sampled and values captured. (default: 1s)
    ## The sampling frequency should be adjusted to the dynamics of the signal to be sampled.
    ## Higher sampling frequencies increase load on ctrlX Data Layer.
    ## The sampling frequency can be higher than the publish interval. Captured samples are put in a queue and sent in publish interval.
    ## Note: The minimum sampling interval can be overruled by a global setting in the ctrlX Data Layer configuration ('datalayer/subscriptions/settings').
    # sampling_interval = "1s"

    ## The requested size of the node value queue. (default: 10)
    ## Relevant if more values are captured than can be sent.
    # queue_size = 10

    ## The behaviour of the queue if it is full. (default: "DiscardOldest")
    ## Possible values:
    ## - "DiscardOldest"
    ##   The oldest value gets deleted from the queue when it is full.
    ## - "DiscardNewest"
    ##   The newest value gets deleted from the queue when it is full.
    # queue_behaviour = "DiscardOldest"

    ## The filter when a new value will be sampled. (default: 0.0)
    ## Calculation rule: If (abs(lastCapturedValue - newValue) > dead_band_value) capture(newValue).
    # dead_band_value = 0.0

    ## The conditions on which a sample should be captured and thus will be sent as a message. (default: "StatusValue")
    ## Possible values:
    ## - "Status"
    ##   Capture the value only, when the state of the node changes from or to error state. Value changes are ignored.
    ## - "StatusValue"
    ##   Capture when the value changes or the node changes from or to error state.
    ##   See also 'dead_band_value' for what is considered as a value change.
    ## - "StatusValueTimestamp":
    ##   Capture even if the value is the same, but the timestamp of the value is newer.
    ##   Note: This might lead to high load on the network because every sample will be sent as a message
    ##         even if the value of the node did not change.
    # value_change = "StatusValue"
```

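The dead-band rule quoted in the sample (`If (abs(lastCapturedValue - newValue) > dead_band_value) capture(newValue)`) can be sketched as:

```python
def should_capture(last_captured: float, new_value: float, dead_band: float = 0.0) -> bool:
    """Return True when the change exceeds the dead band and the sample
    should be captured (illustrative sketch of the rule above)."""
    return abs(last_captured - new_value) > dead_band

print(should_capture(10.0, 10.4, dead_band=0.5))  # False: change within dead band
print(should_capture(10.0, 10.6, dead_band=0.5))  # True: change exceeds dead band
```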
## Metrics

All measurements are tagged with the server address of the device and the
corresponding node address as defined in the ctrlX Data Layer.

- measurement name
  - tags:
    - `source` (ctrlX Data Layer server where the metrics are gathered from)
    - `node` (address of the ctrlX Data Layer node)
  - fields:
    - `{name}` (for nodes with simple data types)
    - `{name}_{index}` (for nodes with array data types)
    - `{name}_{jsonflat.key}` (for nodes with object data types)

### Output Format

The switch `output_json_string` determines the format of the output metric.

#### Output default format

With the default output format

```toml
output_json_string=false
```

the output is formatted automatically as follows, depending on the data type:

##### Simple data type

The value is passed as-is to a metric with the pattern:

```text
{name}={value}
```

Simple data types of ctrlX Data Layer:

```text
bool8,int8,uint8,int16,uint16,int32,uint32,int64,uint64,float,double,string,timestamp
```

##### Array data type

Every value in the array is passed to a metric with the pattern:

```text
{name}_{index}={value[index]}
```

Example:

```text
myarray=[1,2,3] -> myarray_1=1, myarray_2=2, myarray_3=3
```

Array data types of ctrlX Data Layer:

```text
arbool8,arint8,aruint8,arint16,aruint16,arint32,aruint32,arint64,aruint64,arfloat,ardouble,arstring,artimestamp
```

##### Object data type (JSON)
|
||||
|
||||
Every value of the flattened json is passed to a metric with pattern:
|
||||
|
||||
```text
|
||||
{name}_{jsonflat.key}={jsonflat.value}
|
||||
```
|
||||
|
||||
example:
|
||||
|
||||
```text
|
||||
myobj={"a":1,"b":2,"c":{"d": 3}} -> myobj_a=1, myobj_b=2, myobj_c_d=3
|
||||
```
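The flattening of nested objects into `{name}_{jsonflat.key}` field names can be sketched as follows. This is an illustration of the naming pattern only, not the plugin's actual implementation (which is written in Go); the function name `flatten` is hypothetical:

```python
def flatten(name, value):
    """Illustrate the {name}_{jsonflat.key} naming pattern described above."""
    if isinstance(value, dict):
        fields = {}
        for key, inner in value.items():
            # Nested keys are joined to the parent name with an underscore.
            fields.update(flatten(f"{name}_{key}", inner))
        return fields
    # A leaf value becomes a single field.
    return {name: value}

print(flatten("myobj", {"a": 1, "b": 2, "c": {"d": 3}}))
# -> {'myobj_a': 1, 'myobj_b': 2, 'myobj_c_d': 3}
```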
#### Output JSON format

With the JSON output format

```toml
output_json_string=true
```

the output is formatted as a JSON string:

```text
{name}="{value}"
```

Examples:

```text
input=true -> output="true"
```

```text
input=[1,2,3] -> output="[1,2,3]"
```

```text
input={"x":4720,"y":9440,"z":14160} -> output="{\"x\":4720,\"y\":9440,\"z\":14160}"
```
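The escaped string values in the examples above are ordinary JSON serialization. A quick Python illustration (the plugin itself is written in Go; the value names here are made up):

```python
import json

# Hypothetical node values of each kind: simple, array, and object.
values = {
    "simple": True,
    "array": [1, 2, 3],
    "object": {"x": 4720, "y": 9440, "z": 14160},
}

# With output_json_string=true, every value becomes a single JSON string
# field regardless of its original type.
fields = {name: json.dumps(v, separators=(",", ":")) for name, v in values.items()}

print(fields["simple"])  # -> true
print(fields["array"])   # -> [1,2,3]
print(fields["object"])  # -> {"x":4720,"y":9440,"z":14160}
```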
The JSON output string can be passed to a processor plugin for transformation,
for example the [Parser Processor Plugin](../../processors/parser/README.md)
or the [Starlark Processor Plugin](../../processors/starlark/README.md).

Example:

```toml
[[inputs.ctrlx_datalayer.subscription]]
  measurement = "osci"
  nodes = [
    {address="oscilloscope/instances/Osci_PLC/rec-values/allsignals"},
  ]
  output_json_string = true

[[processors.starlark]]
  namepass = [
    'osci',
  ]
  script = "oscilloscope.star"
```
## Troubleshooting

This plugin was contributed by
[Bosch Rexroth](https://www.boschrexroth.com).
For questions regarding ctrlX AUTOMATION and this plugin, feel free to check
out and be part of the
[ctrlX AUTOMATION Community](https://ctrlx-automation.com/community)
to get additional support or leave some ideas and feedback.

Also, join the
[InfluxData Community Slack](https://influxdata.com/slack) or
[InfluxData Community Page](https://community.influxdata.com/)
if you have questions or comments for the Telegraf engineering teams.

## Example Output

The plugin handles simple, array, and object (JSON) data types.

### Example with simple data type

Configuration:

```toml
[[inputs.ctrlx_datalayer.subscription]]
  measurement="memory"
  [inputs.ctrlx_datalayer.subscription.tags]
    sub_tag1="memory_tag1"
    sub_tag2="memory_tag2"

  [[inputs.ctrlx_datalayer.subscription.nodes]]
    name="available"
    address="framework/metrics/system/memavailable-mb"
    [inputs.ctrlx_datalayer.subscription.nodes.tags]
      node_tag1="memory_available_tag1"
      node_tag2="memory_available_tag2"

  [[inputs.ctrlx_datalayer.subscription.nodes]]
    name="used"
    address="framework/metrics/system/memused-mb"
    [inputs.ctrlx_datalayer.subscription.nodes.tags]
      node_tag1="memory_used_node_tag1"
      node_tag2="memory_used_node_tag2"
```

Source:

```json
"framework/metrics/system/memavailable-mb" : 365.93359375
"framework/metrics/system/memused-mb" : 567.67578125
```

Metrics:

```text
memory,source=192.168.1.1,host=host.example.com,node=framework/metrics/system/memavailable-mb,node_tag1=memory_available_tag1,node_tag2=memory_available_tag2,sub_tag1=memory_tag1,sub_tag2=memory_tag2 available=365.93359375 1680093310249627400
memory,source=192.168.1.1,host=host.example.com,node=framework/metrics/system/memused-mb,node_tag1=memory_used_node_tag1,node_tag2=memory_used_node_tag2,sub_tag1=memory_tag1,sub_tag2=memory_tag2 used=567.67578125 1680093310249667600
```

### Example with array data type

Configuration:

```toml
[[inputs.ctrlx_datalayer.subscription]]
  measurement="array"
  nodes=[
    { name="ar_uint8", address="alldata/dynamic/array-of-uint8"},
    { name="ar_bool8", address="alldata/dynamic/array-of-bool8"},
  ]
```

Source:

```json
"alldata/dynamic/array-of-bool8" : [true, false, true]
"alldata/dynamic/array-of-uint8" : [0, 255]
```

Metrics:

```text
array,source=192.168.1.1,host=host.example.com,node=alldata/dynamic/array-of-bool8 ar_bool8_0=true,ar_bool8_1=false,ar_bool8_2=true 1680095727347018800
array,source=192.168.1.1,host=host.example.com,node=alldata/dynamic/array-of-uint8 ar_uint8_0=0,ar_uint8_1=255 1680095727347223300
```

### Example with object data type (JSON)

Configuration:

```toml
[[inputs.ctrlx_datalayer.subscription]]
  measurement="motion"
  nodes=[
    {name="linear", address="motion/axs/Axis_1/state/values/actual"},
    {name="rotational", address="motion/axs/Axis_2/state/values/actual"},
  ]
```

Source:

```json
"motion/axs/Axis_1/state/values/actual" : {"actualPos":65.249329860957,"actualVel":5,"actualAcc":0,"actualTorque":0,"distLeft":0,"actualPosUnit":"mm","actualVelUnit":"mm/min","actualAccUnit":"m/s^2","actualTorqueUnit":"Nm","distLeftUnit":"mm"}
"motion/axs/Axis_2/state/values/actual" : {"actualPos":120,"actualVel":0,"actualAcc":0,"actualTorque":0,"distLeft":0,"actualPosUnit":"deg","actualVelUnit":"rpm","actualAccUnit":"rad/s^2","actualTorqueUnit":"Nm","distLeftUnit":"deg"}
```

Metrics:

```text
motion,source=192.168.1.1,host=host.example.com,node=motion/axs/Axis_1/state/values/actual linear_actualVel=5,linear_distLeftUnit="mm",linear_actualAcc=0,linear_distLeft=0,linear_actualPosUnit="mm",linear_actualAccUnit="m/s^2",linear_actualTorqueUnit="Nm",linear_actualPos=65.249329860957,linear_actualVelUnit="mm/min",linear_actualTorque=0 1680258290342523500
motion,source=192.168.1.1,host=host.example.com,node=motion/axs/Axis_2/state/values/actual rotational_distLeft=0,rotational_actualVelUnit="rpm",rotational_actualAccUnit="rad/s^2",rotational_distLeftUnit="deg",rotational_actualPos=120,rotational_actualVel=0,rotational_actualAcc=0,rotational_actualPosUnit="deg",rotational_actualTorqueUnit="Nm",rotational_actualTorque=0 1680258290342538100
```

If `output_json_string` is set in the configuration:

```toml
output_json_string = true
```

then the metrics are generated like this:

```text
motion,source=192.168.1.1,host=host.example.com,node=motion/axs/Axis_1/state/values/actual linear="{\"actualAcc\":0,\"actualAccUnit\":\"m/s^2\",\"actualPos\":65.249329860957,\"actualPosUnit\":\"mm\",\"actualTorque\":0,\"actualTorqueUnit\":\"Nm\",\"actualVel\":5,\"actualVelUnit\":\"mm/min\",\"distLeft\":0,\"distLeftUnit\":\"mm\"}" 1680258290342523500
motion,source=192.168.1.1,host=host.example.com,node=motion/axs/Axis_2/state/values/actual rotational="{\"actualAcc\":0,\"actualAccUnit\":\"rad/s^2\",\"actualPos\":120,\"actualPosUnit\":\"deg\",\"actualTorque\":0,\"actualTorqueUnit\":\"Nm\",\"actualVel\":0,\"actualVelUnit\":\"rpm\",\"distLeft\":0,\"distLeftUnit\":\"deg\"}" 1680258290342538100
```

---
description: "Telegraf plugin for collecting metrics from Mesosphere Distributed Cloud OS"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Mesosphere Distributed Cloud OS
    identifier: input-dcos
tags: [Mesosphere Distributed Cloud OS, "input-plugins", "configuration", "containers"]
introduced: "v1.5.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/dcos/README.md, Mesosphere Distributed Cloud OS Plugin Source
---

# Mesosphere Distributed Cloud OS Input Plugin

This input plugin gathers metrics from a [Distributed Cloud OS](https://dcos.io/) cluster's
[metrics component](https://docs.mesosphere.com/1.10/metrics/).

> [!WARNING]
> Depending on the workload of your DC/OS cluster, this plugin can quickly
> create a high number of series which, when unchecked, can cause high load on
> your database!

**Introduced in:** Telegraf v1.5.0
**Tags:** containers
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Input plugin for DC/OS metrics
[[inputs.dcos]]
  ## The DC/OS cluster URL.
  cluster_url = "https://dcos-master-1"

  ## The ID of the service account.
  service_account_id = "telegraf"
  ## The private key file for the service account.
  service_account_private_key = "/etc/telegraf/telegraf-sa-key.pem"

  ## Path containing login token. If set, will read on every gather.
  # token_file = "/home/dcos/.dcos/token"

  ## In all filter options if both include and exclude are empty all items
  ## will be collected. Arrays may contain glob patterns.
  ##
  ## Node IDs to collect metrics from. If a node is excluded, no metrics will
  ## be collected for its containers or apps.
  # node_include = []
  # node_exclude = []
  ## Container IDs to collect container metrics from.
  # container_include = []
  # container_exclude = []
  ## Container IDs to collect app metrics from.
  # app_include = []
  # app_exclude = []

  ## Maximum concurrent connections to the cluster.
  # max_connections = 10
  ## Maximum time to receive a response from cluster.
  # response_timeout = "20s"

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## If false, skip chain & host verification
  # insecure_skip_verify = true

  ## Recommended filtering to reduce series cardinality.
  # [inputs.dcos.tagdrop]
  #   path = ["/var/lib/mesos/slave/slaves/*"]
```
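The include/exclude semantics described in the sample config (empty lists collect everything; arrays may contain glob patterns) can be sketched as below. This mirrors the documented behavior rather than Telegraf's actual Go implementation, and it assumes exclusion takes precedence over inclusion, which the text above does not spell out:

```python
from fnmatch import fnmatch

def should_collect(item, include, exclude):
    """Return True if `item` passes the include/exclude glob filters.

    Documented rule: when both lists are empty, every item is collected.
    Assumption: an exclude match always wins.
    """
    if any(fnmatch(item, pattern) for pattern in exclude):
        return False
    if not include:
        return True
    return any(fnmatch(item, pattern) for pattern in include)

# All nodes are collected when no filters are configured:
print(should_collect("node-1234", include=[], exclude=[]))          # -> True
# Glob patterns select groups of node IDs:
print(should_collect("node-1234", include=["node-*"], exclude=[]))  # -> True
print(should_collect("agent-7", include=["node-*"], exclude=[]))    # -> False
```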

### Enterprise Authentication

When using Enterprise DC/OS, it is recommended to use a service account to
authenticate with the cluster.

The plugin requires the following permissions:

```text
dcos:adminrouter:ops:system-metrics full
dcos:adminrouter:ops:mesos full
```

Follow the directions to [create a service account and assign permissions](https://docs.mesosphere.com/1.10/security/service-auth/custom-service-auth/).

Quick configuration using the Enterprise CLI:

```text
dcos security org service-accounts keypair telegraf-sa-key.pem telegraf-sa-cert.pem
dcos security org service-accounts create -p telegraf-sa-cert.pem -d "Telegraf DC/OS input plugin" telegraf
dcos security org users grant telegraf dcos:adminrouter:ops:system-metrics full
dcos security org users grant telegraf dcos:adminrouter:ops:mesos full
```

### Open Source Authentication

Open Source DC/OS does not provide service accounts. Instead, you can use one
of the following options:

1. [Disable authentication](https://dcos.io/docs/1.10/security/managing-authentication/#authentication-opt-out)
2. Use the `token_file` parameter to read an authentication token from a file.

The `token_file` can then be kept current by using the DC/OS CLI to log in
periodically. The CLI can log in for at most XXX days; you will need to ensure
the CLI performs a new login before this time expires.

```shell
dcos auth login --username foo --password bar
dcos config show core.dcos_acs_token > ~/.dcos/token
```

Another option to create a `token_file` is to generate a token using the
cluster secret. This allows you to set the expiration date manually or even
create a never-expiring token. However, if the cluster secret or the token is
compromised it cannot be revoked, and recovery may require a full reinstall of
the cluster. For more information on this technique, see
[this blog post](https://medium.com/@richardgirges/authenticating-open-source-dc-os-with-third-party-services-125fa33a5add).

### Series Cardinality Mitigation

- Use measurement filtering to exclude unnecessary tags.
- Write to a database with an appropriate
  [retention policy](https://docs.influxdata.com/influxdb/latest/guides/downsampling_and_retention/).
- Consider using the
  [Time Series Index](https://docs.influxdata.com/influxdb/latest/concepts/time-series-index/).
- Monitor your databases'
  [series cardinality](https://docs.influxdata.com/influxdb/latest/query_language/spec/#show-cardinality).

## Metrics

Consult the [Metrics Reference](https://docs.mesosphere.com/1.10/metrics/reference/)
for details about field interpretation.

- dcos_node
  - tags:
    - cluster
    - hostname
    - path (filesystem fields only)
    - interface (network fields only)
  - fields:
    - system_uptime (float)
    - cpu_cores (float)
    - cpu_total (float)
    - cpu_user (float)
    - cpu_system (float)
    - cpu_idle (float)
    - cpu_wait (float)
    - load_1min (float)
    - load_5min (float)
    - load_15min (float)
    - filesystem_capacity_total_bytes (int)
    - filesystem_capacity_used_bytes (int)
    - filesystem_capacity_free_bytes (int)
    - filesystem_inode_total (float)
    - filesystem_inode_used (float)
    - filesystem_inode_free (float)
    - memory_total_bytes (int)
    - memory_free_bytes (int)
    - memory_buffers_bytes (int)
    - memory_cached_bytes (int)
    - swap_total_bytes (int)
    - swap_free_bytes (int)
    - swap_used_bytes (int)
    - network_in_bytes (int)
    - network_out_bytes (int)
    - network_in_packets (float)
    - network_out_packets (float)
    - network_in_dropped (float)
    - network_out_dropped (float)
    - network_in_errors (float)
    - network_out_errors (float)
    - process_count (float)

- dcos_container
  - tags:
    - cluster
    - hostname
    - container_id
    - task_name
  - fields:
    - cpus_limit (float)
    - cpus_system_time (float)
    - cpus_throttled_time (float)
    - cpus_user_time (float)
    - disk_limit_bytes (int)
    - disk_used_bytes (int)
    - mem_limit_bytes (int)
    - mem_total_bytes (int)
    - net_rx_bytes (int)
    - net_rx_dropped (float)
    - net_rx_errors (float)
    - net_rx_packets (float)
    - net_tx_bytes (int)
    - net_tx_dropped (float)
    - net_tx_errors (float)
    - net_tx_packets (float)

- dcos_app
  - tags:
    - cluster
    - hostname
    - container_id
    - task_name
  - fields:
    - fields are application specific

## Example Output

```text
dcos_node,cluster=enterprise,hostname=192.168.122.18,path=/boot filesystem_capacity_free_bytes=918188032i,filesystem_capacity_total_bytes=1063256064i,filesystem_capacity_used_bytes=145068032i,filesystem_inode_free=523958,filesystem_inode_total=524288,filesystem_inode_used=330 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=dummy0 network_in_bytes=0i,network_in_dropped=0,network_in_errors=0,network_in_packets=0,network_out_bytes=0i,network_out_dropped=0,network_out_errors=0,network_out_packets=0 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=docker0 network_in_bytes=0i,network_in_dropped=0,network_in_errors=0,network_in_packets=0,network_out_bytes=0i,network_out_dropped=0,network_out_errors=0,network_out_packets=0 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18 cpu_cores=2,cpu_idle=81.62,cpu_system=4.19,cpu_total=13.670000000000002,cpu_user=9.48,cpu_wait=0,load_15min=0.7,load_1min=0.22,load_5min=0.6,memory_buffers_bytes=970752i,memory_cached_bytes=1830473728i,memory_free_bytes=1178636288i,memory_total_bytes=3975073792i,process_count=198,swap_free_bytes=859828224i,swap_total_bytes=859828224i,swap_used_bytes=0i,system_uptime=18874 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=lo network_in_bytes=1090992450i,network_in_dropped=0,network_in_errors=0,network_in_packets=1546938,network_out_bytes=1090992450i,network_out_dropped=0,network_out_errors=0,network_out_packets=1546938 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,path=/ filesystem_capacity_free_bytes=1668378624i,filesystem_capacity_total_bytes=6641680384i,filesystem_capacity_used_bytes=4973301760i,filesystem_inode_free=3107856,filesystem_inode_total=3248128,filesystem_inode_used=140272 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=minuteman network_in_bytes=0i,network_in_dropped=0,network_in_errors=0,network_in_packets=0,network_out_bytes=210i,network_out_dropped=0,network_out_errors=0,network_out_packets=3 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=eth0 network_in_bytes=539886216i,network_in_dropped=1,network_in_errors=0,network_in_packets=979808,network_out_bytes=112395836i,network_out_dropped=0,network_out_errors=0,network_out_packets=891239 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=spartan network_in_bytes=0i,network_in_dropped=0,network_in_errors=0,network_in_packets=0,network_out_bytes=210i,network_out_dropped=0,network_out_errors=0,network_out_packets=3 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,path=/var/lib/docker/overlay filesystem_capacity_free_bytes=1668378624i,filesystem_capacity_total_bytes=6641680384i,filesystem_capacity_used_bytes=4973301760i,filesystem_inode_free=3107856,filesystem_inode_total=3248128,filesystem_inode_used=140272 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=vtep1024 network_in_bytes=0i,network_in_dropped=0,network_in_errors=0,network_in_packets=0,network_out_bytes=0i,network_out_dropped=0,network_out_errors=0,network_out_packets=0 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,path=/var/lib/docker/plugins filesystem_capacity_free_bytes=1668378624i,filesystem_capacity_total_bytes=6641680384i,filesystem_capacity_used_bytes=4973301760i,filesystem_inode_free=3107856,filesystem_inode_total=3248128,filesystem_inode_used=140272 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=d-dcos network_in_bytes=0i,network_in_dropped=0,network_in_errors=0,network_in_packets=0,network_out_bytes=0i,network_out_dropped=0,network_out_errors=0,network_out_packets=0 1511859222000000000
dcos_app,cluster=enterprise,container_id=9a78d34a-3bbf-467e-81cf-a57737f154ee,hostname=192.168.122.18 container_received_bytes_per_sec=0,container_throttled_bytes_per_sec=0 1511859222000000000
dcos_container,cluster=enterprise,container_id=cbf19b77-3b8d-4bcf-b81f-824b67279629,hostname=192.168.122.18 cpus_limit=0.3,cpus_system_time=307.31,cpus_throttled_time=102.029930607,cpus_user_time=268.57,disk_limit_bytes=268435456i,disk_used_bytes=30953472i,mem_limit_bytes=570425344i,mem_total_bytes=13316096i,net_rx_bytes=0i,net_rx_dropped=0,net_rx_errors=0,net_rx_packets=0,net_tx_bytes=0i,net_tx_dropped=0,net_tx_errors=0,net_tx_packets=0 1511859222000000000
dcos_app,cluster=enterprise,container_id=cbf19b77-3b8d-4bcf-b81f-824b67279629,hostname=192.168.122.18 container_received_bytes_per_sec=0,container_throttled_bytes_per_sec=0 1511859222000000000
dcos_container,cluster=enterprise,container_id=5725e219-f66e-40a8-b3ab-519d85f4c4dc,hostname=192.168.122.18,task_name=hello-world cpus_limit=0.6,cpus_system_time=25.6,cpus_throttled_time=327.977109217,cpus_user_time=566.54,disk_limit_bytes=0i,disk_used_bytes=0i,mem_limit_bytes=1107296256i,mem_total_bytes=335941632i,net_rx_bytes=0i,net_rx_dropped=0,net_rx_errors=0,net_rx_packets=0,net_tx_bytes=0i,net_tx_dropped=0,net_tx_errors=0,net_tx_packets=0 1511859222000000000
dcos_app,cluster=enterprise,container_id=5725e219-f66e-40a8-b3ab-519d85f4c4dc,hostname=192.168.122.18 container_received_bytes_per_sec=0,container_throttled_bytes_per_sec=0 1511859222000000000
dcos_app,cluster=enterprise,container_id=c76e1488-4fb7-4010-a4cf-25725f8173f9,hostname=192.168.122.18 container_received_bytes_per_sec=0,container_throttled_bytes_per_sec=0 1511859222000000000
dcos_container,cluster=enterprise,container_id=cbe0b2f9-061f-44ac-8f15-4844229e8231,hostname=192.168.122.18,task_name=telegraf cpus_limit=0.2,cpus_system_time=8.109999999,cpus_throttled_time=93.183916045,cpus_user_time=17.97,disk_limit_bytes=0i,disk_used_bytes=0i,mem_limit_bytes=167772160i,mem_total_bytes=0i,net_rx_bytes=0i,net_rx_dropped=0,net_rx_errors=0,net_rx_packets=0,net_tx_bytes=0i,net_tx_dropped=0,net_tx_errors=0,net_tx_packets=0 1511859222000000000
dcos_container,cluster=enterprise,container_id=b64115de-3d2a-431d-a805-76e7c46453f1,hostname=192.168.122.18 cpus_limit=0.2,cpus_system_time=2.69,cpus_throttled_time=20.064861214,cpus_user_time=6.56,disk_limit_bytes=268435456i,disk_used_bytes=29360128i,mem_limit_bytes=297795584i,mem_total_bytes=13733888i,net_rx_bytes=0i,net_rx_dropped=0,net_rx_errors=0,net_rx_packets=0,net_tx_bytes=0i,net_tx_dropped=0,net_tx_errors=0,net_tx_packets=0 1511859222000000000
dcos_app,cluster=enterprise,container_id=b64115de-3d2a-431d-a805-76e7c46453f1,hostname=192.168.122.18 container_received_bytes_per_sec=0,container_throttled_bytes_per_sec=0 1511859222000000000
```

---
description: "Telegraf plugin for collecting metrics from Directory Monitor"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Directory Monitor
    identifier: input-directory_monitor
tags: [Directory Monitor, "input-plugins", "configuration", "system"]
introduced: "v1.18.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/directory_monitor/README.md, Directory Monitor Plugin Source
---

# Directory Monitor Input Plugin

This plugin monitors a single directory (traversing sub-directories) and
processes each file placed in the directory. The plugin gathers all files in
the directory at the configured interval and parses the ones that haven't been
picked up yet.

> [!NOTE]
> Files should not be used by another process or the plugin may fail.
> Furthermore, files should not be written _live_ to the monitored directory.
> If you absolutely must write files directly, they must be guaranteed to finish
> writing before `directory_duration_threshold`.

**Introduced in:** Telegraf v1.18.0
**Tags:** system
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Ingests files in a directory and then moves them to a target directory.
[[inputs.directory_monitor]]
  ## The directory to monitor and read files from (including sub-directories if "recursive" is true).
  directory = ""
  #
  ## The directory to move finished files to (maintaining directory hierarchy from source).
  finished_directory = ""
  #
  ## Setting recursive to true will make the plugin recursively walk the directory and process all sub-directories.
  # recursive = false
  #
  ## The directory to move files to upon file error.
  ## If not provided, erroring files will stay in the monitored directory.
  # error_directory = ""
  #
  ## The amount of time a file is allowed to sit in the directory before it is picked up.
  ## This time can generally be low, but if you choose to have a very large file written to the directory and it's potentially slow,
  ## set this higher so that the plugin will wait until the file is fully copied to the directory.
  # directory_duration_threshold = "50ms"
  #
  ## A list of the only file names to monitor, if necessary. Supports regex. If left blank, all files are ingested.
  # files_to_monitor = ["^.*\\.csv"]
  #
  ## A list of files to ignore, if necessary. Supports regex.
  # files_to_ignore = [".DS_Store"]
  #
  ## Maximum lines of the file to process that have not yet been written by the
  ## output. For best throughput set to the size of the output's metric_buffer_limit.
  ## Warning: setting this number higher than the output's metric_buffer_limit can cause dropped metrics.
  # max_buffered_metrics = 10000
  #
  ## The maximum amount of file paths to queue up for processing at once, before waiting until files are processed to find more files.
  ## Lowering this value will result in *slightly* less memory use, with a potential sacrifice in speed efficiency, if absolutely necessary.
  # file_queue_size = 100000
  #
  ## Name a tag containing the name of the file the data was parsed from. Leave empty
  ## to disable. Be cautious when file name variation is high; this can increase the cardinality
  ## significantly. Read more about cardinality here:
  ## https://docs.influxdata.com/influxdb/cloud/reference/glossary/#series-cardinality
  # file_tag = ""
  #
  ## Specify if the file can be read completely at once or if it needs to be read line by line (default).
  ## Possible values: "line-by-line", "at-once"
  # parse_method = "line-by-line"
  #
  ## The data format to be read from the files.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"
```
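The `files_to_monitor` / `files_to_ignore` options above take regular expressions. A minimal sketch of how such filters behave, as an illustration only (the plugin is written in Go, and the assumption here that ignore patterns take precedence is not stated in the config comments):

```python
import re

def accept_file(filename, files_to_monitor, files_to_ignore):
    """Apply the documented filters: ignore patterns drop a file; monitor
    patterns (when given) must match; an empty monitor list ingests all files."""
    if any(re.search(p, filename) for p in files_to_ignore):
        return False
    if not files_to_monitor:
        return True
    return any(re.search(p, filename) for p in files_to_monitor)

print(accept_file("metrics.csv", [r"^.*\.csv"], [".DS_Store"]))  # -> True
print(accept_file(".DS_Store", [r"^.*\.csv"], [".DS_Store"]))    # -> False
print(accept_file("notes.txt", [r"^.*\.csv"], [".DS_Store"]))    # -> False
```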

## Metrics

The format of the metrics produced by this plugin depends on the content and
data format of the file.

When the [internal](/telegraf/v1/plugins/#input-internal) input is enabled:

- internal_directory_monitor
  - fields:
    - files_processed - How many files have been processed (counter)
    - files_dropped - How many files have been dropped (counter)
- internal_directory_monitor
  - tags:
    - directory - The monitored directory
  - fields:
    - files_processed_per_dir - How many files have been processed (counter)
    - files_dropped_per_dir - How many files have been dropped (counter)
    - files_queue_per_dir - How many files are waiting to be processed (gauge)

## Example Output

The metrics produced by this plugin depend on the content and data format of
the ingested files.
---
description: "Telegraf plugin for collecting metrics from Disk"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Disk
    identifier: input-disk
tags: [Disk, "input-plugins", "configuration", "system"]
introduced: "v0.1.1"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/disk/README.md, Disk Plugin Source
---

# Disk Input Plugin

This plugin gathers metrics about disk usage.

> [!NOTE]
> The `used_percent` field is calculated by `used / (used + free)` and _not_
> `used / total` as the Unix `df` command does. See
> [wikipedia - df](https://en.wikipedia.org/wiki/Df_(Unix)) for more details.
|
||||
|
||||
**Introduced in:** Telegraf v0.1.1
|
||||
**Tags:** system
**OS support:** all

[wiki_df]: https://en.wikipedia.org/wiki/Df_(Unix)

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Read metrics about disk usage by mount point
[[inputs.disk]]
  ## By default, stats will be gathered for all mount points.
  ## Setting mount_points will restrict the stats to only the specified mount points.
  # mount_points = ["/"]

  ## Ignore mount points by filesystem type.
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]

  ## Ignore mount points by mount options.
  ## The 'mount' command reports options of all mounts in parentheses.
  ## Bind mounts can be ignored with the special 'bind' option.
  # ignore_mount_opts = []
```

### Docker container

To monitor the Docker engine host from within a container, you need to mount
the host's filesystem into the container and set the `HOST_PROC` environment
variable to the location of the `/proc` filesystem. If desired, you can also
set the `HOST_MOUNT_PREFIX` environment variable to the prefix containing the
`/proc` directory; when present, this prefix is stripped from the reported
`path` tag.

```shell
docker run -v /:/hostfs:ro -e HOST_MOUNT_PREFIX=/hostfs -e HOST_PROC=/hostfs/proc telegraf
```

## Metrics

- disk
  - tags:
    - fstype (filesystem type)
    - device (device file)
    - path (mount point path)
    - mode (whether the mount is rw or ro)
    - label (devicemapper labels, only if present)
  - fields:
    - free (integer, bytes)
    - total (integer, bytes)
    - used (integer, bytes)
    - used_percent (float, percent)
    - inodes_free (integer, files)
    - inodes_total (integer, files)
    - inodes_used (integer, files)
    - inodes_used_percent (float, percent)
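Note that `used_percent` is not simply `used / total`: on filesystems with reserved blocks it is computed from the space visible to the user. A minimal sketch, assuming the `used / (used + free)` formula (the values from the `hfs` line in the Example Output section are consistent with this assumption):

```python
def used_percent(used: int, free: int) -> float:
    """Approximate the disk plugin's used_percent field.

    Assumption: used_percent = used / (used + free) * 100, which excludes
    filesystem-reserved blocks and so can differ from used / total * 100.
    """
    return used / (used + free) * 100

# Values from the hfs line in the Example Output section:
print(round(used_percent(100418957312, 398407520256), 6))  # 20.13104
```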
## Troubleshooting

On Linux, the list of disks is taken from the `/proc/self/mounts` file and a
[statfs] call is made on the second column. If any expected filesystems are
missing, ensure that the `telegraf` user can read these files:

```shell
$ sudo -u telegraf cat /proc/self/mounts | grep sda2
/dev/sda2 /home ext4 rw,relatime,data=ordered 0 0
$ sudo -u telegraf stat /home
```

It may be desirable to use POSIX ACLs to provide additional access:

```shell
sudo setfacl -R -m u:telegraf:X /var/lib/docker/volumes/
```

## Example Output

```text
disk,fstype=hfs,mode=ro,path=/ free=398407520256i,inodes_free=97267461i,inodes_total=121847806i,inodes_used=24580345i,total=499088621568i,used=100418957312i,used_percent=20.131039916242397,inodes_used_percent=20.1729894 1453832006274071563
disk,fstype=devfs,mode=rw,path=/dev free=0i,inodes_free=0i,inodes_total=628i,inodes_used=628i,total=185856i,used=185856i,used_percent=100,inodes_used_percent=100 1453832006274137913
disk,fstype=autofs,mode=rw,path=/net free=0i,inodes_free=0i,inodes_total=0i,inodes_used=0i,total=0i,used=0i,used_percent=0,inodes_used_percent=0 1453832006274157077
disk,fstype=autofs,mode=rw,path=/home free=0i,inodes_free=0i,inodes_total=0i,inodes_used=0i,total=0i,used=0i,used_percent=0,inodes_used_percent=0 1453832006274169688
disk,device=dm-1,fstype=xfs,label=lvg-lv,mode=rw,path=/mnt inodes_free=8388605i,inodes_used=3i,total=17112760320i,free=16959598592i,used=153161728i,used_percent=0.8950147441789215,inodes_total=8388608i,inodes_used_percent=0.0017530778 1677001387000000000
```

[statfs]: http://man7.org/linux/man-pages/man2/statfs.2.html
---
description: "Telegraf plugin for collecting metrics from DiskIO"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: DiskIO
    identifier: input-diskio
tags: [DiskIO, "input-plugins", "configuration", "system"]
introduced: "v0.10.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/diskio/README.md, DiskIO Plugin Source
---

# DiskIO Input Plugin

This plugin gathers metrics about disk traffic and timing.

**Introduced in:** Telegraf v0.10.0
**Tags:** system
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Read metrics about disk IO by device
[[inputs.diskio]]
  ## Devices to collect stats for
  ## Wildcards are supported except for disk synonyms like '/dev/disk/by-id'.
  ## ex. devices = ["sda", "sdb", "vd*", "/dev/disk/by-id/nvme-eui.00123deadc0de123"]
  # devices = ["*"]

  ## Skip gathering of the disk's serial numbers.
  # skip_serial_number = true

  ## Device metadata tags to add on systems supporting it (Linux only)
  ## Use 'udevadm info -q property -n <device>' to get a list of properties.
  ## Note: Most, but not all, udev properties can be accessed this way. Properties
  ## that are currently inaccessible include DEVTYPE, DEVNAME, and DEVPATH.
  # device_tags = ["ID_FS_TYPE", "ID_FS_USAGE"]

  ## Using the same metadata source as device_tags, you can also customize the
  ## name of the device via templates.
  ## The 'name_templates' parameter is a list of templates to try and apply to
  ## the device. The template may contain variables in the form of '$PROPERTY' or
  ## '${PROPERTY}'. The first template which does not contain any variables not
  ## present for the device is used as the device name tag.
  ## The typical use case is for LVM volumes, to get the VG/LV name instead of
  ## the near-meaningless DM-0 name.
  # name_templates = ["$ID_FS_LABEL","$DM_VG_NAME/$DM_LV_NAME"]
```

### Docker container

To monitor the Docker engine host from within a container, you need to
mount the host's filesystem into the container and set the `HOST_PROC`
environment variable to the location of the `/proc` filesystem. Additionally,
privileged mode is required to provide access to `/dev`.

If you are using the `device_tags` or `name_templates` options, you also need
to bind mount `/run/udev` into the container.

```shell
docker run --privileged -v /:/hostfs:ro -v /run/udev:/run/udev:ro -e HOST_PROC=/hostfs/proc telegraf
```

## Metrics

- diskio
  - tags:
    - name (device name)
    - serial (device serial number)
  - fields:
    - reads (integer, counter)
    - writes (integer, counter)
    - read_bytes (integer, counter, bytes)
    - write_bytes (integer, counter, bytes)
    - read_time (integer, counter, milliseconds)
    - write_time (integer, counter, milliseconds)
    - io_time (integer, counter, milliseconds)
    - weighted_io_time (integer, counter, milliseconds)
    - iops_in_progress (integer, gauge)
    - merged_reads (integer, counter)
    - merged_writes (integer, counter)
    - io_util (float64, gauge, percent)
    - io_await (float64, gauge, milliseconds)
    - io_svctm (float64, gauge, milliseconds)

On Linux, these values correspond to the values in [`/proc/diskstats`][1] and
[`/sys/block/<dev>/stat`][2].

[1]: https://www.kernel.org/doc/Documentation/ABI/testing/procfs-diskstats
[2]: https://www.kernel.org/doc/Documentation/block/stat.txt

### `reads` & `writes`

These values increment when an I/O request completes.

### `read_bytes` & `write_bytes`

These values count the number of bytes read from or written to this
block device.

### `read_time` & `write_time`

These values count the number of milliseconds that I/O requests have
waited on this block device. If there are multiple I/O requests waiting,
these values will increase at a rate greater than 1000/second; for
example, if 60 read requests wait for an average of 30 ms, the read_time
field will increase by 60 * 30 = 1800.
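Because these fields are monotonically increasing counters, per-interval averages are derived from deltas between two samples. A minimal sketch (an illustration of the idea behind an `io_await`-style average, not the plugin's actual code):

```python
def io_await_ms(prev: dict, curr: dict) -> float:
    """Average wait per completed I/O between two counter samples (ms).

    Sketch: increase in time spent waiting (read_time + write_time)
    divided by the number of I/Os completed (reads + writes) in the interval.
    """
    waited = (curr["read_time"] - prev["read_time"]) + (curr["write_time"] - prev["write_time"])
    completed = (curr["reads"] - prev["reads"]) + (curr["writes"] - prev["writes"])
    return waited / completed if completed else 0.0

prev = {"read_time": 1000, "write_time": 500, "reads": 100, "writes": 50}
# 60 reads waiting an average of 30 ms adds 60 * 30 = 1800 to read_time:
curr = {"read_time": 2800, "write_time": 500, "reads": 160, "writes": 50}
print(io_await_ms(prev, curr))  # 30.0
```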
### `io_time`

This value counts the number of milliseconds during which the device has
had I/O requests queued.

### `weighted_io_time`

This value counts the number of milliseconds that I/O requests have waited
on this block device. If there are multiple I/O requests waiting, this
value will increase as the product of the number of milliseconds times the
number of requests waiting (see `read_time` above for an example).

### `iops_in_progress`

This value counts the number of I/O requests that have been issued to
the device driver but have not yet completed. It does not include I/O
requests that are in the queue but not yet issued to the device driver.

### `merged_reads` & `merged_writes`

Reads and writes which are adjacent to each other may be merged for
efficiency. Thus two 4K reads may become one 8K read before it is
ultimately handed to the disk, and so it will be counted (and queued)
as only one I/O. These fields let you know how often this was done.

### `io_await`

The average time per I/O operation (ms).

### `io_svctm`

The service time per I/O operation, excluding wait time (ms).

### `io_util`

The percentage of time the disk was active (%).

## Sample Queries

### Calculate percent IO utilization per disk and host

```sql
SELECT non_negative_derivative(last("io_time"),1ms) FROM "diskio" WHERE time > now() - 30m GROUP BY "host","name",time(60s)
```

### Calculate average queue depth

`iops_in_progress` will give you an instantaneous value. This will give you the
average between polling intervals.

```sql
SELECT non_negative_derivative(last("weighted_io_time"),1ms) from "diskio" WHERE time > now() - 30m GROUP BY "host","name",time(60s)
```

## Example Output

```text
diskio,name=sda1 merged_reads=0i,reads=2353i,writes=10i,write_bytes=2117632i,write_time=49i,io_time=1271i,weighted_io_time=1350i,read_bytes=31350272i,read_time=1303i,iops_in_progress=0i,merged_writes=0i 1578326400000000000
diskio,name=centos/var_log reads=1063077i,writes=591025i,read_bytes=139325491712i,write_bytes=144233131520i,read_time=650221i,write_time=24368817i,io_time=852490i,weighted_io_time=25037394i,iops_in_progress=1i,merged_reads=0i,merged_writes=0i 1578326400000000000
diskio,name=sda write_time=49i,io_time=1317i,weighted_io_time=1404i,reads=2495i,read_time=1357i,write_bytes=2117632i,iops_in_progress=0i,merged_reads=0i,merged_writes=0i,writes=10i,read_bytes=38956544i 1578326400000000000
```

```text
diskio,name=sda io_await=0.3317307692307692,io_svctm=0.07692307692307693,io_util=0.5329780146568954 1578326400000000000
diskio,name=sda1 io_await=0.3317307692307692,io_svctm=0.07692307692307693,io_util=0.5329780146568954 1578326400000000000
diskio,name=sda2 io_await=0.3317307692307692,io_svctm=0.07692307692307693,io_util=0.5329780146568954 1578326400000000000
```
---
description: "Telegraf plugin for collecting metrics from Disque"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Disque
    identifier: input-disque
tags: [Disque, "input-plugins", "configuration", "messaging"]
introduced: "v0.10.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/disque/README.md, Disque Plugin Source
---

# Disque Input Plugin

This plugin gathers data from a [Disque](https://github.com/antirez/disque)
instance, an experimental distributed, in-memory message broker.

**Introduced in:** Telegraf v0.10.0
**Tags:** messaging
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Read metrics from one or many disque servers
[[inputs.disque]]
  ## An array of URIs to gather stats about. Specify an IP or hostname
  ## with an optional port and password.
  ## ie disque://localhost, disque://10.10.3.33:18832, 10.0.0.1:10000, etc.
  ## If no servers are specified, then localhost is used as the host.
  servers = ["localhost"]
```

## Metrics

- disque
  - disque_host
  - uptime_in_seconds
  - connected_clients
  - blocked_clients
  - used_memory
  - used_memory_rss
  - used_memory_peak
  - total_connections_received
  - total_commands_processed
  - instantaneous_ops_per_sec
  - latest_fork_usec
  - mem_fragmentation_ratio
  - used_cpu_sys
  - used_cpu_user
  - used_cpu_sys_children
  - used_cpu_user_children
  - registered_jobs
  - registered_queues
## Example Output
---
description: "Telegraf plugin for collecting metrics from Device Mapper Cache"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Device Mapper Cache
    identifier: input-dmcache
tags: [Device Mapper Cache, "input-plugins", "configuration", "system"]
introduced: "v1.3.0"
os_support: "linux"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/dmcache/README.md, Device Mapper Cache Plugin Source
---

# Device Mapper Cache Input Plugin

This plugin provides native collection of dmsetup-based statistics for
[dm-cache](https://docs.kernel.org/admin-guide/device-mapper/cache.html).

> [!NOTE]
> This plugin requires super-user permissions! Please make sure Telegraf is
> able to run `sudo /sbin/dmsetup status --target cache` without requiring a
> password.

**Introduced in:** Telegraf v1.3.0
**Tags:** system
**OS support:** linux

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Provide a native collection for dmsetup based statistics for dm-cache
# This plugin ONLY supports Linux
[[inputs.dmcache]]
  ## Whether to report per-device stats or not
  per_device = true
```

## Metrics

- dmcache
  - length
  - target
  - metadata_blocksize
  - metadata_used
  - metadata_total
  - cache_blocksize
  - cache_used
  - cache_total
  - read_hits
  - read_misses
  - write_hits
  - write_misses
  - demotions
  - promotions
  - dirty

## Tags

- All measurements have the following tags:
  - device

## Example Output

```text
dmcache,device=example cache_blocksize=0i,read_hits=995134034411520i,read_misses=916807089127424i,write_hits=195107267543040i,metadata_used=12861440i,write_misses=563725346013184i,promotions=3265223720960i,dirty=0i,metadata_blocksize=0i,cache_used=1099511627776i,cache_total=0i,length=0i,metadata_total=1073741824i,demotions=3265223720960i 1491482035000000000
```
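The example output above is InfluxDB line protocol: a measurement name, comma-separated tags, a space, comma-separated fields (integers suffixed with `i`), a space, and a nanosecond timestamp. A minimal parsing sketch for lines in this shape (a simplification that ignores escaping and string field values):

```python
def parse_line(line: str):
    """Parse a simple line-protocol line into (measurement, tags, fields, timestamp).

    Simplified sketch: assumes no escaped commas/spaces and only integer
    ('...i' suffix) or float field values.
    """
    head, field_part, ts = line.rsplit(" ", 2)
    measurement, *tag_pairs = head.split(",")
    tags = dict(pair.split("=", 1) for pair in tag_pairs)
    fields = {}
    for pair in field_part.split(","):
        key, value = pair.split("=", 1)
        fields[key] = int(value[:-1]) if value.endswith("i") else float(value)
    return measurement, tags, fields, int(ts)

sample = "dmcache,device=example metadata_used=12861440i,metadata_total=1073741824i,dirty=0i 1491482035000000000"
name, tags, fields, ts = parse_line(sample)
print(name, tags["device"], fields["metadata_total"])
```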
---
description: "Telegraf plugin for collecting metrics from DNS Query"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: DNS Query
    identifier: input-dns_query
tags: [DNS Query, "input-plugins", "configuration", "network", "system"]
introduced: "v1.4.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/dns_query/README.md, DNS Query Plugin Source
---

# DNS Query Input Plugin

This plugin gathers information about DNS queries such as response time and
result codes.

**Introduced in:** Telegraf v1.4.0
**Tags:** system, network
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Query given DNS server and gives statistics
[[inputs.dns_query]]
  ## servers to query
  servers = ["8.8.8.8"]

  ## Network is the network protocol name.
  # network = "udp"

  ## Domains or subdomains to query.
  # domains = ["."]

  ## Query record type.
  ## Possible values: A, AAAA, CNAME, MX, NS, PTR, TXT, SOA, SPF, SRV.
  # record_type = "A"

  ## DNS server port.
  # port = 53

  ## Query timeout
  # timeout = "2s"

  ## Include the specified additional properties in the resulting metric.
  ## The following values are supported:
  ##   "first_ip" -- return IP of the first A and AAAA answer
  ##   "all_ips"  -- return IPs of all A and AAAA answers
  # include_fields = []
```

## Metrics

- dns_query
  - tags:
    - server
    - domain
    - record_type
    - result
    - rcode
  - fields:
    - query_time_ms (float)
    - result_code (int, success = 0, timeout = 1, error = 2)
    - rcode_value (int)

## Rcode Descriptions

| rcode_value | rcode     | Description                       |
|-------------|-----------|-----------------------------------|
| 0           | NoError   | No Error                          |
| 1           | FormErr   | Format Error                      |
| 2           | ServFail  | Server Failure                    |
| 3           | NXDomain  | Non-Existent Domain               |
| 4           | NotImp    | Not Implemented                   |
| 5           | Refused   | Query Refused                     |
| 6           | YXDomain  | Name Exists when it should not    |
| 7           | YXRRSet   | RR Set Exists when it should not  |
| 8           | NXRRSet   | RR Set that should exist does not |
| 9           | NotAuth   | Server Not Authoritative for zone |
| 10          | NotZone   | Name not contained in zone        |
| 16          | BADSIG    | TSIG Signature Failure            |
| 16          | BADVERS   | Bad OPT Version                   |
| 17          | BADKEY    | Key not recognized                |
| 18          | BADTIME   | Signature out of time window      |
| 19          | BADMODE   | Bad TKEY Mode                     |
| 20          | BADNAME   | Duplicate key name                |
| 21          | BADALG    | Algorithm not supported           |
| 22          | BADTRUNC  | Bad Truncation                    |
| 23          | BADCOOKIE | Bad/missing Server Cookie         |
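The `rcode` tag and the `rcode_value` field are two views of the same DNS response code, so the table can be expressed directly as a lookup (values taken from the table; note that 16 is used for both BADSIG and BADVERS, so only one can appear in a plain mapping):

```python
# DNS response codes as reported in the rcode tag / rcode_value field.
RCODES = {
    0: "NoError", 1: "FormErr", 2: "ServFail", 3: "NXDomain",
    4: "NotImp", 5: "Refused", 6: "YXDomain", 7: "YXRRSet",
    8: "NXRRSet", 9: "NotAuth", 10: "NotZone", 16: "BADVERS",
    17: "BADKEY", 18: "BADTIME", 19: "BADMODE", 20: "BADNAME",
    21: "BADALG", 22: "BADTRUNC", 23: "BADCOOKIE",
}

print(RCODES[3])  # NXDomain
```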
## Example Output

```text
dns_query,domain=google.com,rcode=NOERROR,record_type=A,result=success,server=127.0.0.1 rcode_value=0i,result_code=0i,query_time_ms=0.13746 1550020750001000000
```
---
description: "Telegraf plugin for collecting metrics from Docker"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Docker
    identifier: input-docker
tags: [Docker, "input-plugins", "configuration", "containers"]
introduced: "v0.1.9"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/docker/README.md, Docker Plugin Source
---

# Docker Input Plugin

This plugin uses the [Docker Engine API](https://docs.docker.com/engine/api) to
gather metrics on running Docker containers.

> [!NOTE]
> Make sure Telegraf has sufficient permissions to access the configured
> endpoint.

**Introduced in:** Telegraf v0.1.9
**Tags:** containers
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Read metrics about docker containers
[[inputs.docker]]
  ## Docker Endpoint
  ##   To use TCP, set endpoint = "tcp://[ip]:[port]"
  ##   To use environment variables (ie, docker-machine), set endpoint = "ENV"
  endpoint = "unix:///var/run/docker.sock"

  ## Set to true to collect Swarm metrics (desired_replicas, running_replicas)
  ## Note: configure this in one of the manager nodes in a Swarm cluster;
  ## configuring in multiple Swarm managers results in duplication of metrics.
  gather_services = false

  ## Set the source tag for the metrics to the container ID hostname, eg first 12 chars
  source_tag = false

  ## Containers to include and exclude. Collect all if empty. Globs accepted.
  container_name_include = []
  container_name_exclude = []

  ## Container states to include and exclude. Globs accepted.
  ## When empty, only containers in the "running" state will be captured.
  ## example: container_state_include = ["created", "restarting", "running", "removing", "paused", "exited", "dead"]
  ## example: container_state_exclude = ["created", "restarting", "running", "removing", "paused", "exited", "dead"]
  # container_state_include = []
  # container_state_exclude = []

  ## Objects to include for disk usage query
  ## Allowed values are "container", "image", "volume"
  ## When empty, disk usage is excluded
  storage_objects = []

  ## Timeout for docker list, info, and stats commands
  timeout = "5s"

  ## Specifies for which classes a per-device metric should be issued
  ## Possible values are 'cpu' (cpu0, cpu1, ...), 'blkio' (8:0, 8:1, ...) and 'network' (eth0, eth1, ...)
  # perdevice_include = ["cpu"]

  ## Specifies for which classes a total metric should be issued. Total is an aggregate of the 'perdevice_include' values.
  ## Possible values are 'cpu', 'blkio' and 'network'
  ## Total 'cpu' is reported directly by the Docker daemon, and 'network' and 'blkio' totals are aggregated by this plugin.
  # total_include = ["cpu", "blkio", "network"]

  ## docker labels to include and exclude as tags. Globs accepted.
  ## Note that an empty array for both will include all labels as tags
  docker_label_include = []
  docker_label_exclude = []

  ## Which environment variables should we use as a tag
  tag_env = ["JAVA_HOME", "HEAP_SIZE"]

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false
```

### Environment Configuration

When using the `"ENV"` endpoint, the connection is configured using the
[CLI Docker environment variables][3].

[3]: https://godoc.org/github.com/moby/moby/client#NewEnvClient

### Security

Giving telegraf access to the Docker daemon expands the
[attack surface](https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface)
that could result in an attacker gaining root access to a machine. This is
especially relevant if the telegraf configuration can be changed by untrusted
users.

### Docker Daemon Permissions

Typically, telegraf must be given permission to access the docker daemon unix
socket when using the default endpoint. This can be done by adding the
`telegraf` unix user (created when installing a Telegraf package) to the
`docker` unix group with the following command:

```shell
sudo usermod -aG docker telegraf
```

If telegraf is run within a container, the unix socket needs to be exposed
within the telegraf container. This can be done in the docker CLI by adding
the option `-v /var/run/docker.sock:/var/run/docker.sock`, or by adding the
following lines to the telegraf container definition in a docker compose file.
Additionally, the `telegraf` user in the container must be assigned the
`docker` group ID from the host:

```yaml
user: telegraf:<host_docker_gid>
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
```

### source tag

Selecting the container measurements can be tricky if you have many containers
with the same name. To alleviate this issue, set the following value to
`true`:

```toml
source_tag = true
```

This will cause all measurements to have the `source` tag set to the first 12
characters of the container ID. The first 12 characters is the common hostname
for containers that have no explicit hostname set, as defined by docker.
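The `source` tag is simply a truncation of the full container ID; for example (container ID taken from the Example Output section):

```python
# Docker uses the first 12 characters of the container ID as the default
# container hostname, and the plugin uses the same prefix for the source tag.
container_id = "adc4ba9593871bf2ab95f3ffde70d1b638b897bb225d21c2c9c84226a10a8cf4"
source = container_id[:12]
print(source)  # adc4ba959387
```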
|
||||
|
||||
### Kubernetes Labels
|
||||
|
||||
Kubernetes may add many labels to your containers, if they are not needed you
|
||||
may prefer to exclude them:
|
||||
|
||||
```json
|
||||
docker_label_exclude = ["annotation.kubernetes*"]
|
||||
```
|
||||
|
||||
### Docker-compose Labels
|
||||
|
||||
Docker-compose will add labels to your containers. You can limit restrict labels
|
||||
to selected ones, e.g.
|
||||
|
||||
```json
|
||||
docker_label_include = [
|
||||
"com.docker.compose.config-hash",
|
||||
"com.docker.compose.container-number",
|
||||
"com.docker.compose.oneoff",
|
||||
"com.docker.compose.project",
|
||||
"com.docker.compose.service",
|
||||
]
|
||||
```
|
||||
|
||||
## Metrics
|
||||
|
||||
- docker
|
||||
- tags:
|
||||
- unit
|
||||
- engine_host
|
||||
- server_version
|
||||
- fields:
|
||||
- n_used_file_descriptors
|
||||
- n_cpus
|
||||
- n_containers
|
||||
- n_containers_running
|
||||
- n_containers_stopped
|
||||
- n_containers_paused
|
||||
- n_images
|
||||
- n_goroutines
|
||||
- n_listener_events
|
||||
- memory_total
|
||||
- pool_blocksize (requires devicemapper storage driver) (deprecated see: `docker_devicemapper`)
|
||||
|
||||
The `docker_data` and `docker_metadata` measurements are available only for
|
||||
some storage drivers such as devicemapper.
|
||||
|
||||
- docker_data (deprecated see: `docker_devicemapper`)
|
||||
- tags:
|
||||
- unit
|
||||
- engine_host
|
||||
- server_version
|
||||
- fields:
|
||||
- available
|
||||
- total
|
||||
- used
|
||||
|
||||
- docker_metadata (deprecated see: `docker_devicemapper`)
|
||||
- tags:
|
||||
- unit
|
||||
- engine_host
|
||||
- server_version
|
||||
- fields:
|
||||
- available
|
||||
- total
|
||||
- used
|
||||
|
||||
The above measurements for the devicemapper storage driver can now be found in
|
||||
the new `docker_devicemapper` measurement
|
||||
|
||||
- docker_devicemapper
|
||||
- tags:
|
||||
- engine_host
|
||||
- server_version
|
||||
- pool_name
|
||||
- fields:
|
||||
- pool_blocksize_bytes
|
||||
- data_space_used_bytes
|
||||
- data_space_total_bytes
|
||||
- data_space_available_bytes
|
||||
- metadata_space_used_bytes
|
||||
- metadata_space_total_bytes
|
||||
- metadata_space_available_bytes
|
||||
- thin_pool_minimum_free_space_bytes
|
||||
|
||||
- docker_container_mem
|
||||
- tags:
|
||||
- engine_host
|
||||
- server_version
|
||||
- container_image
|
||||
- container_name
|
||||
- container_status
|
||||
- container_version
|
||||
- fields:
|
||||
- total_pgmajfault
|
||||
- cache
|
||||
- mapped_file
|
||||
- total_inactive_file
|
||||
- pgpgout
|
||||
- rss
|
||||
- total_mapped_file
|
||||
- writeback
|
||||
- unevictable
|
||||
- pgpgin
|
||||
- total_unevictable
|
||||
- pgmajfault
|
||||
- total_rss
|
||||
- total_rss_huge
|
||||
- total_writeback
|
||||
- total_inactive_anon
|
||||
- rss_huge
|
||||
- hierarchical_memory_limit
|
||||
- total_pgfault
|
||||
- total_active_file
|
||||
- active_anon
|
||||
- total_active_anon
|
||||
- total_pgpgout
|
||||
- total_cache
|
||||
- inactive_anon
|
||||
- active_file
|
||||
- pgfault
|
||||
- inactive_file
|
||||
- total_pgpgin
|
||||
- max_usage
|
||||
- usage
|
||||
- failcnt
|
||||
- limit
|
||||
- container_id
|
||||
|
||||
- docker_container_cpu
|
||||
- tags:
|
||||
- engine_host
|
||||
- server_version
|
||||
- container_image
|
||||
- container_name
|
||||
- container_status
|
||||
- container_version
|
||||
- cpu
|
||||
- fields:
|
||||
- throttling_periods
|
||||
- throttling_throttled_periods
|
||||
- throttling_throttled_time
|
||||
- usage_in_kernelmode
|
||||
- usage_in_usermode
|
||||
- usage_system
|
||||
- usage_total
|
||||
- usage_percent
|
||||
- container_id
|
||||
|
||||
- docker_container_net
|
||||
- tags:
|
||||
- engine_host
|
||||
- server_version
|
||||
- container_image
|
||||
- container_name
|
||||
- container_status
|
||||
- container_version
|
||||
- network
|
||||
- fields:
|
||||
- rx_dropped
|
||||
- rx_bytes
|
||||
- rx_errors
|
||||
- tx_packets
|
||||
- tx_dropped
|
||||
- rx_packets
|
||||
- tx_errors
|
||||
- tx_bytes
|
||||
- container_id
|
||||
|
||||
- docker_container_blkio
|
||||
- tags:
|
||||
- engine_host
|
||||
- server_version
|
||||
- container_image
|
||||
- container_name
|
||||
- container_status
|
||||
- container_version
|
||||
- device
|
||||
- fields:
|
||||
- io_service_bytes_recursive_async
|
||||
- io_service_bytes_recursive_read
|
||||
- io_service_bytes_recursive_sync
|
||||
- io_service_bytes_recursive_total
|
||||
- io_service_bytes_recursive_write
|
||||
- io_serviced_recursive_async
|
||||
- io_serviced_recursive_read
|
||||
- io_serviced_recursive_sync
|
||||
- io_serviced_recursive_total
|
||||
- io_serviced_recursive_write
|
||||
- container_id
|
||||
|
||||
The `docker_container_health` measurement reports a container's
[HEALTHCHECK](https://docs.docker.com/engine/reference/builder/#healthcheck)
status, if one is configured.

- docker_container_health (container must use the HEALTHCHECK)
  - tags:
    - engine_host
    - server_version
    - container_image
    - container_name
    - container_status
    - container_version
  - fields:
    - health_status (string)
    - failing_streak (integer)
- docker_container_status
  - tags:
    - engine_host
    - server_version
    - container_image
    - container_name
    - container_status
    - container_version
  - fields:
    - container_id
    - oomkilled (boolean)
    - pid (integer)
    - exitcode (integer)
    - started_at (integer)
    - finished_at (integer)
    - uptime_ns (integer)

- docker_swarm
  - tags:
    - service_id
    - service_name
    - service_mode
  - fields:
    - tasks_desired
    - tasks_running

- docker_disk_usage
  - tags:
    - engine_host
    - server_version
    - container_name
    - container_image
    - container_version
    - image_id
    - image_name
    - image_version
    - volume_name
  - fields:
    - size_rw
    - size_root_fs
    - size
    - shared_size
## Example Output

```text
docker,engine_host=debian-stretch-docker,server_version=17.09.0-ce n_containers=6i,n_containers_paused=0i,n_containers_running=1i,n_containers_stopped=5i,n_cpus=2i,n_goroutines=41i,n_images=2i,n_listener_events=0i,n_used_file_descriptors=27i 1524002041000000000
docker,engine_host=debian-stretch-docker,server_version=17.09.0-ce,unit=bytes memory_total=2101661696i 1524002041000000000
docker_container_mem,container_image=telegraf,container_name=zen_ritchie,container_status=running,container_version=unknown,engine_host=debian-stretch-docker,server_version=17.09.0-ce active_anon=8327168i,active_file=2314240i,cache=27402240i,container_id="adc4ba9593871bf2ab95f3ffde70d1b638b897bb225d21c2c9c84226a10a8cf4",hierarchical_memory_limit=9223372036854771712i,inactive_anon=0i,inactive_file=25088000i,limit=2101661696i,mapped_file=20582400i,max_usage=36646912i,pgfault=4193i,pgmajfault=214i,pgpgin=9243i,pgpgout=520i,rss=8327168i,rss_huge=0i,total_active_anon=8327168i,total_active_file=2314240i,total_cache=27402240i,total_inactive_anon=0i,total_inactive_file=25088000i,total_mapped_file=20582400i,total_pgfault=4193i,total_pgmajfault=214i,total_pgpgin=9243i,total_pgpgout=520i,total_rss=8327168i,total_rss_huge=0i,total_unevictable=0i,total_writeback=0i,unevictable=0i,usage=36528128i,usage_percent=0.4342225020025297,writeback=0i 1524002042000000000
docker_container_cpu,container_image=telegraf,container_name=zen_ritchie,container_status=running,container_version=unknown,cpu=cpu-total,engine_host=debian-stretch-docker,server_version=17.09.0-ce container_id="adc4ba9593871bf2ab95f3ffde70d1b638b897bb225d21c2c9c84226a10a8cf4",throttling_periods=0i,throttling_throttled_periods=0i,throttling_throttled_time=0i,usage_in_kernelmode=40000000i,usage_in_usermode=100000000i,usage_percent=0,usage_system=6394210000000i,usage_total=117319068i 1524002042000000000
docker_container_cpu,container_image=telegraf,container_name=zen_ritchie,container_status=running,container_version=unknown,cpu=cpu0,engine_host=debian-stretch-docker,server_version=17.09.0-ce container_id="adc4ba9593871bf2ab95f3ffde70d1b638b897bb225d21c2c9c84226a10a8cf4",usage_total=20825265i 1524002042000000000
docker_container_cpu,container_image=telegraf,container_name=zen_ritchie,container_status=running,container_version=unknown,cpu=cpu1,engine_host=debian-stretch-docker,server_version=17.09.0-ce container_id="adc4ba9593871bf2ab95f3ffde70d1b638b897bb225d21c2c9c84226a10a8cf4",usage_total=96493803i 1524002042000000000
docker_container_net,container_image=telegraf,container_name=zen_ritchie,container_status=running,container_version=unknown,engine_host=debian-stretch-docker,network=eth0,server_version=17.09.0-ce container_id="adc4ba9593871bf2ab95f3ffde70d1b638b897bb225d21c2c9c84226a10a8cf4",rx_bytes=1576i,rx_dropped=0i,rx_errors=0i,rx_packets=20i,tx_bytes=0i,tx_dropped=0i,tx_errors=0i,tx_packets=0i 1524002042000000000
docker_container_blkio,container_image=telegraf,container_name=zen_ritchie,container_status=running,container_version=unknown,device=254:0,engine_host=debian-stretch-docker,server_version=17.09.0-ce container_id="adc4ba9593871bf2ab95f3ffde70d1b638b897bb225d21c2c9c84226a10a8cf4",io_service_bytes_recursive_async=27398144i,io_service_bytes_recursive_read=27398144i,io_service_bytes_recursive_sync=0i,io_service_bytes_recursive_total=27398144i,io_service_bytes_recursive_write=0i,io_serviced_recursive_async=529i,io_serviced_recursive_read=529i,io_serviced_recursive_sync=0i,io_serviced_recursive_total=529i,io_serviced_recursive_write=0i 1524002042000000000
docker_container_health,container_image=telegraf,container_name=zen_ritchie,container_status=running,container_version=unknown,engine_host=debian-stretch-docker,server_version=17.09.0-ce failing_streak=0i,health_status="healthy" 1524007529000000000
docker_swarm,service_id=xaup2o9krw36j2dy1mjx1arjw,service_mode=replicated,service_name=test tasks_desired=3,tasks_running=3 1508968160000000000
docker_disk_usage,engine_host=docker-desktop,server_version=24.0.5 layers_size=17654519107i 1695742041000000000
docker_disk_usage,container_image=influxdb,container_name=frosty_wright,container_version=1.8,engine_host=docker-desktop,server_version=24.0.5 size_root_fs=286593526i,size_rw=538i 1695742041000000000
docker_disk_usage,engine_host=docker-desktop,image_id=7f4a1cc74046,image_name=telegraf,image_version=latest,server_version=24.0.5 shared_size=0i,size=425484494i 1695742041000000000
docker_disk_usage,engine_host=docker-desktop,server_version=24.0.5,volume_name=docker_influxdb-data size=91989940i 1695742041000000000
```
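The example lines above are InfluxDB line protocol: a measurement with comma-separated tags, a space, comma-separated fields, a space, and a nanosecond timestamp. As a rough illustration of that structure, here is a minimal Python sketch that splits one of the example lines apart; it deliberately ignores line protocol's escaping rules and commas inside quoted strings, so it only suits simple lines like these.

```python
def parse_line(line: str):
    """Naive line protocol parser: measurement,tags fields timestamp.
    Ignores escaping and commas inside quoted strings."""
    head, field_set, ts = line.rsplit(" ", 2)
    measurement, *tag_pairs = head.split(",")
    tags = dict(p.split("=", 1) for p in tag_pairs)
    fields = {}
    for pair in field_set.split(","):
        key, val = pair.split("=", 1)
        if val.endswith("i"):          # integer fields carry an "i" suffix
            fields[key] = int(val[:-1])
        elif val.startswith('"'):      # string fields are double-quoted
            fields[key] = val.strip('"')
        else:
            fields[key] = float(val)
    return measurement, tags, fields, int(ts)

m, tags, fields, ts = parse_line(
    "docker,engine_host=debian-stretch-docker,server_version=17.09.0-ce "
    "n_containers=6i,n_cpus=2i 1524002041000000000"
)
```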
---
description: "Telegraf plugin for collecting metrics from Docker Log"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Docker Log
    identifier: input-docker_log
tags: [Docker Log, "input-plugins", "configuration", "containers", "logging"]
introduced: "v1.12.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/docker_log/README.md, Docker Log Plugin Source
---
# Docker Log Input Plugin

This plugin uses the [Docker Engine API](https://docs.docker.com/engine/api) to gather logs from
running Docker containers.

> [!NOTE]
> This plugin works only for containers using the `local`, `json-file`, or
> `journald` logging driver. Make sure Telegraf has sufficient permissions to
> access the configured endpoint.

**Introduced in:** Telegraf v1.12.0
**Tags:** containers, logging
**OS support:** all
## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
## Configuration

```toml @sample.conf
# Read logging output from the Docker engine
[[inputs.docker_log]]
  ## Docker Endpoint
  ##   To use TCP, set endpoint = "tcp://[ip]:[port]"
  ##   To use environment variables (ie, docker-machine), set endpoint = "ENV"
  # endpoint = "unix:///var/run/docker.sock"

  ## When true, container logs are read from the beginning; otherwise reading
  ## begins at the end of the log. If state-persistence is enabled for Telegraf,
  ## reading continues at the last previously processed timestamp.
  # from_beginning = false

  ## Timeout for Docker API calls.
  # timeout = "5s"

  ## Containers to include and exclude. Globs accepted.
  ## Note that an empty array for both will include all containers
  # container_name_include = []
  # container_name_exclude = []

  ## Container states to include and exclude. Globs accepted.
  ## When empty only containers in the "running" state will be captured.
  # container_state_include = []
  # container_state_exclude = []

  ## Docker labels to include and exclude as tags. Globs accepted.
  ## Note that an empty array for both will include all labels as tags
  # docker_label_include = []
  # docker_label_exclude = []

  ## Set the source tag for the metrics to the container ID hostname,
  ## e.g. the first 12 chars
  source_tag = false

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false
```
### Environment Configuration

When using the `"ENV"` endpoint, the connection is configured using the
[CLI Docker environment variables](https://godoc.org/github.com/moby/moby/client#NewEnvClient).
## Source Tag

Selecting containers can be tricky if you have many containers with the same
name. To alleviate this issue, set `source_tag` to `true`:

```toml
source_tag = true
```

This sets the `source` tag on all data points to the first 12 characters of the
container ID. The first 12 characters are the common hostname for containers
that have no explicit hostname set, as defined by Docker.
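The derivation of the `source` tag value is just a prefix of the full container ID, the same short ID Docker shows in `docker ps`. A trivial sketch (the sample ID is taken from the example output elsewhere in this page):

```python
def source_tag(container_id: str) -> str:
    """Docker's short ID (the default container hostname) is the
    first 12 hex characters of the full 64-character ID."""
    return container_id[:12]

tag = source_tag("adc4ba9593871bf2ab95f3ffde70d1b638b897bb225d21c2c9c84226a10a8cf4")
```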
## Metrics

- docker_log
  - tags:
    - container_image
    - container_version
    - container_name
    - stream (stdout, stderr, or tty)
    - source
  - fields:
    - container_id
    - message
## Example Output

```text
docker_log,container_image=telegraf,container_name=sharp_bell,container_version=alpine,stream=stderr container_id="371ee5d3e58726112f499be62cddef800138ca72bbba635ed2015fbf475b1023",message="2019-06-19T03:11:11Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:\"371ee5d3e587\", Flush Interval:10s" 1560913872000000000
docker_log,container_image=telegraf,container_name=sharp_bell,container_version=alpine,stream=stderr container_id="371ee5d3e58726112f499be62cddef800138ca72bbba635ed2015fbf475b1023",message="2019-06-19T03:11:11Z I! Tags enabled: host=371ee5d3e587" 1560913872000000000
docker_log,container_image=telegraf,container_name=sharp_bell,container_version=alpine,stream=stderr container_id="371ee5d3e58726112f499be62cddef800138ca72bbba635ed2015fbf475b1023",message="2019-06-19T03:11:11Z I! Loaded outputs: file" 1560913872000000000
docker_log,container_image=telegraf,container_name=sharp_bell,container_version=alpine,stream=stderr container_id="371ee5d3e58726112f499be62cddef800138ca72bbba635ed2015fbf475b1023",message="2019-06-19T03:11:11Z I! Loaded processors:" 1560913872000000000
docker_log,container_image=telegraf,container_name=sharp_bell,container_version=alpine,stream=stderr container_id="371ee5d3e58726112f499be62cddef800138ca72bbba635ed2015fbf475b1023",message="2019-06-19T03:11:11Z I! Loaded aggregators:" 1560913872000000000
docker_log,container_image=telegraf,container_name=sharp_bell,container_version=alpine,stream=stderr container_id="371ee5d3e58726112f499be62cddef800138ca72bbba635ed2015fbf475b1023",message="2019-06-19T03:11:11Z I! Loaded inputs: net" 1560913872000000000
docker_log,container_image=telegraf,container_name=sharp_bell,container_version=alpine,stream=stderr container_id="371ee5d3e58726112f499be62cddef800138ca72bbba635ed2015fbf475b1023",message="2019-06-19T03:11:11Z I! Using config file: /etc/telegraf/telegraf.conf" 1560913872000000000
docker_log,container_image=telegraf,container_name=sharp_bell,container_version=alpine,stream=stderr container_id="371ee5d3e58726112f499be62cddef800138ca72bbba635ed2015fbf475b1023",message="2019-06-19T03:11:11Z I! Starting Telegraf 1.10.4" 1560913872000000000
```
---
description: "Telegraf plugin for collecting metrics from Dovecot"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Dovecot
    identifier: input-dovecot
tags: [Dovecot, "input-plugins", "configuration", "server"]
introduced: "v0.10.3"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/dovecot/README.md, Dovecot Plugin Source
---
# Dovecot Input Plugin

This plugin uses the Dovecot [v2.1 stats protocol](https://doc.dovecot.org/configuration_manual/stats/old_statistics/#old-statistics) to gather
metrics about configured domains of [Dovecot](https://www.dovecot.org/) servers. You can use this
plugin on Dovecot versions up to and including v2.3.x.

> [!IMPORTANT]
> Dovecot v2.4+ removed the old protocol, so this plugin does not work with
> those versions. Use Dovecot's [OpenMetrics exporter](https://doc.dovecot.org/latest/core/config/statistics.html#openmetrics) in combination with
> the [http input plugin](/telegraf/v1/plugins/#input-http) and the `openmetrics` data format for newer
> versions of Dovecot.

**Introduced in:** Telegraf v0.10.3
**Tags:** server
**OS support:** all
## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
## Configuration

```toml @sample.conf
# Read metrics about dovecot servers
[[inputs.dovecot]]
  ## Specify dovecot servers via an address:port list
  ##  e.g.
  ##    localhost:24242
  ## or as a UDS socket
  ##  e.g.
  ##    /var/run/dovecot/old-stats
  ##
  ## If no servers are specified, then localhost is used as the host.
  servers = ["localhost:24242"]

  ## Type is one of "user", "domain", "ip", or "global"
  type = "global"

  ## Wildcard matches like "*.com". An empty string "" is same as "*"
  ## If type = "ip" filters should be <IP/network>
  filters = [""]
```
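Under the hood, the old stats interface is a simple line-based exchange over the socket: the client sends an `EXPORT` command for a type, and the server answers with tab-separated rows, a header line of field names followed by one value row per entry (or a single row for `global`). A hedged sketch of parsing such a response; the payload shape and the field subset shown are illustrative assumptions, not captured server output:

```python
def parse_old_stats(payload: str):
    """Parse a tab-separated old-stats EXPORT-style response: the first
    line holds field names, each following line one row of values."""
    lines = [ln for ln in payload.splitlines() if ln]
    header = lines[0].split("\t")
    return [dict(zip(header, row.split("\t"))) for row in lines[1:]]

# Illustrative payload shaped like a reply for type = "global":
rows = parse_old_stats("num_logins\tnum_cmds\tuser_cpu\n174827\t917469\t219337.48\n")
```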
## Metrics

- dovecot
  - tags:
    - server (hostname)
    - type (query type)
    - ip (ip addr)
    - user (username)
    - domain (domain name)
  - fields:
    - reset_timestamp (string)
    - last_update (string)
    - num_logins (integer)
    - num_cmds (integer)
    - num_connected_sessions (integer)
    - user_cpu (float)
    - sys_cpu (float)
    - clock_time (float)
    - min_faults (integer)
    - maj_faults (integer)
    - vol_cs (integer)
    - invol_cs (integer)
    - disk_input (integer)
    - disk_output (integer)
    - read_count (integer)
    - read_bytes (integer)
    - write_count (integer)
    - write_bytes (integer)
    - mail_lookup_path (integer)
    - mail_lookup_attr (integer)
    - mail_read_count (integer)
    - mail_read_bytes (integer)
    - mail_cache_hits (integer)
## Example Output

```text
dovecot,server=dovecot-1.domain.test,type=global clock_time=101196971074203.94,disk_input=6493168218112i,disk_output=17978638815232i,invol_cs=1198855447i,last_update="2016-04-08 11:04:13.000379245 +0200 CEST",mail_cache_hits=68192209i,mail_lookup_attr=0i,mail_lookup_path=653861i,mail_read_bytes=86705151847i,mail_read_count=566125i,maj_faults=17208i,min_faults=1286179702i,num_cmds=917469i,num_connected_sessions=8896i,num_logins=174827i,read_bytes=30327690466186i,read_count=1772396430i,reset_timestamp="2016-04-08 10:28:45 +0200 CEST",sys_cpu=157965.692,user_cpu=219337.48,vol_cs=2827615787i,write_bytes=17150837661940i,write_count=992653220i 1460106266642153907
```
---
description: "Telegraf plugin for collecting metrics from Data Plane Development Kit (DPDK)"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Data Plane Development Kit (DPDK)
    identifier: input-dpdk
tags: [Data Plane Development Kit (DPDK), "input-plugins", "configuration", "applications", "network"]
introduced: "v1.19.0"
os_support: "linux"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/dpdk/README.md, Data Plane Development Kit (DPDK) Plugin Source
---
# Data Plane Development Kit (DPDK) Input Plugin

This plugin collects metrics exposed by applications built with the
[Data Plane Development Kit](https://www.dpdk.org), an extensive set of open
source libraries designed for accelerating packet processing workloads.

> [!NOTE]
> Since DPDK will most likely run with root privileges, the telemetry socket
> exposed by DPDK will also require root access. Adjust permissions
> accordingly.

Refer to the [Telemetry User Guide](https://doc.dpdk.org/guides/howto/telemetry.html) for details and examples on how
to use DPDK in your application.

> [!IMPORTANT]
> This plugin uses the `v2` interface to read telemetry data from applications
> and requires DPDK `v20.05` or higher. Some metrics might require later
> versions. The recommended version, especially in conjunction with the
> `in_memory` option, is `DPDK 21.11.2` or higher.

**Introduced in:** Telegraf v1.19.0
**Tags:** applications, network
**OS support:** linux
## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
## Configuration

```toml @sample.conf
# Reads metrics from DPDK applications using v2 telemetry interface.
# This plugin ONLY supports Linux
[[inputs.dpdk]]
  ## Path to DPDK telemetry socket. This shall point to v2 version of DPDK
  ## telemetry interface.
  # socket_path = "/var/run/dpdk/rte/dpdk_telemetry.v2"

  ## Duration that defines how long the connected socket client will wait for
  ## a response before terminating connection.
  ## This includes both writing to and reading from socket. Since it's local
  ## socket access to a fast packet processing application, the timeout should
  ## be sufficient for most users.
  ## Setting the value to 0 disables the timeout (not recommended)
  # socket_access_timeout = "200ms"

  ## Enables telemetry data collection for selected device types.
  ## Adding "ethdev" enables collection of telemetry from DPDK NICs (stats, xstats, link_status, info).
  ## Adding "rawdev" enables collection of telemetry from DPDK Raw Devices (xstats).
  # device_types = ["ethdev"]

  ## List of custom, application-specific telemetry commands to query
  ## The list of available commands depends on the application deployed.
  ## Applications can register their own commands via telemetry library API
  ## https://doc.dpdk.org/guides/prog_guide/telemetry_lib.html#registering-commands
  ## For L3 Forwarding with Power Management Sample Application this could be:
  ##   additional_commands = ["/l3fwd-power/stats"]
  # additional_commands = []

  ## List of plugin options.
  ## Supported options:
  ##  - "in_memory" option enables reading for multiple sockets when a dpdk application is running with --in-memory option.
  ##    When option is enabled plugin will try to find additional socket paths related to provided socket_path.
  ##    Details: https://doc.dpdk.org/guides/howto/telemetry.html#connecting-to-different-dpdk-processes
  # plugin_options = ["in_memory"]

  ## Specifies plugin behavior regarding unreachable socket (which might not have been initialized yet).
  ## Available choices:
  ##   - error: Telegraf will return an error during the startup and gather phases if socket is unreachable
  ##   - ignore: Telegraf will ignore error regarding unreachable socket on both startup and gather
  # unreachable_socket_behavior = "error"

  ## List of metadata fields which will be added to every metric produced by the plugin.
  ## Supported options:
  ##  - "pid" - exposes PID of DPDK process. Example: pid=2179660i
  ##  - "version" - exposes version of DPDK. Example: version="DPDK 21.11.2"
  # metadata_fields = ["pid", "version"]

  ## Allows turning off collecting data for individual "ethdev" commands.
  ## Remove "/ethdev/link_status" from list to gather link status metrics.
  [inputs.dpdk.ethdev]
    exclude_commands = ["/ethdev/link_status"]

  ## When running multiple instances of the plugin it's recommended to add a
  ## unique tag to each instance to identify metrics exposed by an instance
  ## of DPDK application. This is useful when multiple DPDK apps run on a
  ## single host.
  ## [inputs.dpdk.tags]
  ##   dpdk_instance = "my-fwd-app"
```
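To see the raw data the plugin reads, you can query the telemetry socket yourself. The sketch below follows the v2 protocol described in the DPDK Telemetry User Guide: on connect the socket sends an initial JSON banner (version, pid, max_output_len), then answers each command with a JSON object keyed by the command name. The socket path default and the 16 KiB buffer are assumptions for illustration; the offline demo at the end only exercises the reply parsing:

```python
import json
import socket

def parse_reply(raw: bytes):
    """v2 telemetry replies are JSON objects keyed by the command name."""
    obj = json.loads(raw.decode())
    ((command, data),) = obj.items()
    return command, data

def query(cmd: str, path: str = "/var/run/dpdk/rte/dpdk_telemetry.v2"):
    """Connect to the telemetry socket, discard the connection banner,
    then send one command and parse the JSON reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        s.recv(16384)            # initial banner: version, pid, max_output_len
        s.sendall(cmd.encode())
        return parse_reply(s.recv(16384))

# Offline demonstration of reply parsing (no socket needed):
command, ports = parse_reply(b'{"/ethdev/list": [0, 1]}')
```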
This plugin offers multiple configuration options; review the examples below
for additional usage information.
### Example: Minimal Configuration for NIC metrics

This configuration allows getting metrics for all devices reported via the
`/ethdev/list` command:

* `/ethdev/info` - device information: name, MAC address, buffers size, etc.
  (since `DPDK 21.11`)
* `/ethdev/stats` - basic device statistics (since `DPDK 20.11`)
* `/ethdev/xstats` - extended device statistics
* `/ethdev/link_status` - up/down link status

```toml
[[inputs.dpdk]]
  device_types = ["ethdev"]
```

Since this configuration will query `/ethdev/link_status`, it's recommended to
increase the timeout to `socket_access_timeout = "10s"` and, if needed, the
plugin collection interval.
### Example: Excluding NIC link status from being collected

Checking link status may take more time to complete, depending on the
underlying implementation. This configuration can be used to exclude that
telemetry command to allow a faster response for metrics.

```toml
[[inputs.dpdk]]
  device_types = ["ethdev"]

  [inputs.dpdk.ethdev]
    exclude_commands = ["/ethdev/link_status"]
```
A separate plugin instance with higher timeout settings can be used to get
`/ethdev/link_status` independently. Consult the "Independent NIC link status
configuration" and "Getting metrics from multiple DPDK instances on same host"
examples for further details.
### Example: Independent NIC link status configuration

This configuration allows getting `/ethdev/link_status` using a separate
configuration with a higher timeout.

```toml
[[inputs.dpdk]]
  interval = "30s"
  socket_access_timeout = "10s"
  device_types = ["ethdev"]

  [inputs.dpdk.ethdev]
    exclude_commands = ["/ethdev/info", "/ethdev/stats", "/ethdev/xstats"]
```
### Example: Getting application-specific metrics

This configuration allows reading custom metrics exposed by applications. The
example telemetry command was obtained from the
[L3 Forwarding with Power Management Sample Application](https://doc.dpdk.org/guides/sample_app_ug/l3_forward_power_man.html).

```toml
[[inputs.dpdk]]
  device_types = ["ethdev"]
  additional_commands = ["/l3fwd-power/stats"]

  [inputs.dpdk.ethdev]
    exclude_commands = ["/ethdev/link_status"]
```
Command entries specified in `additional_commands` should match the DPDK
command format:

* Command entry format: either `command` or `command,params` for commands that
  expect parameters, where a comma (`,`) separates the command from the params.
* Command entry length (command with params) should be `< 1024` characters.
* Command length (without params) should be `< 56` characters.
* Commands have to start with `/`.
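Since invalid entries prevent the plugin from starting, it can help to check them before deploying a configuration. This is a sketch of the validation the rules above imply, not the plugin's actual code:

```python
def valid_command_entry(entry: str) -> bool:
    """Check an additional_commands entry against the documented rules:
    entry < 1024 chars, command part < 56 chars, command starts with '/'."""
    if not entry or len(entry) >= 1024:
        return False
    command = entry.split(",", 1)[0]   # params follow the first comma
    return command.startswith("/") and len(command) < 56

ok = valid_command_entry("/l3fwd-power/stats")
bad = valid_command_entry("l3fwd-power/stats")   # missing leading slash
```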
Providing invalid commands will prevent the plugin from starting. Additional
commands allow duplicates, but duplicates are removed during execution, so each
command is executed only once during each metric gathering interval.
### Example: Getting metrics from multiple DPDK instances on same host

This configuration allows getting metrics from two separate applications
exposing their telemetry interfaces via separate sockets. For each plugin
instance, a unique tag `[inputs.dpdk.tags]` allows distinguishing between them.

```toml
# Instance #1 - L3 Forwarding with Power Management Application
[[inputs.dpdk]]
  socket_path = "/var/run/dpdk/rte/l3fwd-power_telemetry.v2"
  device_types = ["ethdev"]
  additional_commands = ["/l3fwd-power/stats"]

  [inputs.dpdk.ethdev]
    exclude_commands = ["/ethdev/link_status"]

  [inputs.dpdk.tags]
    dpdk_instance = "l3fwd-power"

# Instance #2 - L2 Forwarding with Intel Cache Allocation Technology (CAT)
# Application
[[inputs.dpdk]]
  socket_path = "/var/run/dpdk/rte/l2fwd-cat_telemetry.v2"
  device_types = ["ethdev"]

  [inputs.dpdk.ethdev]
    exclude_commands = ["/ethdev/link_status"]

  [inputs.dpdk.tags]
    dpdk_instance = "l2fwd-cat"
```
This uses Telegraf's standard capability of adding custom tags to an input
plugin's measurements.
## Metrics

The DPDK socket accepts `command,params` requests and returns metric data in
JSON format. All metrics from the DPDK socket are flattened using Telegraf's
JSON flattener and tagged with a set of tags that identifies the querying
hierarchy:

```text
dpdk,host=dpdk-host,dpdk_instance=l3fwd-power,command=/ethdev/stats,params=0 [fields] [timestamp]
```
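Flattening joins nested object keys and array indices into underscore-separated field names, which is how per-queue arrays in `/ethdev/stats` replies appear as `q_ipackets_0`, `q_ipackets_1`, and so on in the example output below. A minimal sketch of this behavior (not Telegraf's actual implementation):

```python
def flatten(value, prefix=""):
    """Flatten nested dicts/lists into field names joined by underscores."""
    fields = {}
    if isinstance(value, dict):
        for key, sub in value.items():
            name = f"{prefix}_{key}" if prefix else key
            fields.update(flatten(sub, name))
    elif isinstance(value, list):
        for i, sub in enumerate(value):
            fields.update(flatten(sub, f"{prefix}_{i}"))
    else:
        fields[prefix] = value
    return fields

fields = flatten({"ipackets": 98, "q_ipackets": [98, 0]})
```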
| Tag | Description |
|-----|-------------|
| `host` | hostname of the machine (consult [Telegraf Agent configuration](https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md#agent) for additional details) |
| `dpdk_instance` | custom tag from `[inputs.dpdk.tags]` (optional) |
| `command` | executed command (without params) |
| `params` | command parameter, e.g. for `/ethdev/stats` it is the ID of the NIC as exposed by `/ethdev/list`. For a DPDK app that uses 2 NICs the metrics will output e.g. `params=0`, `params=1`. |
When running the plugin configuration below...

```toml
[[inputs.dpdk]]
  device_types = ["ethdev"]
  additional_commands = ["/l3fwd-power/stats"]
  metadata_fields = []

  [inputs.dpdk.tags]
    dpdk_instance = "l3fwd-power"
```

...the expected output for the `dpdk` plugin instance running on a host named
`dpdk-host` is:
```text
dpdk,command=/ethdev/info,dpdk_instance=l3fwd-power,host=dpdk-host,params=0 all_multicast=0,dev_configured=1,dev_flags=74,dev_started=1,ethdev_rss_hf=0,lro=0,mac_addr="E4:3D:1A:DD:13:31",mtu=1500,name="0000:ca:00.1",nb_rx_queues=1,nb_tx_queues=1,numa_node=1,port_id=0,promiscuous=1,rx_mbuf_alloc_fail=0,rx_mbuf_size_min=2176,rx_offloads=0,rxq_state_0=1,scattered_rx=0,state=1,tx_offloads=65536,txq_state_0=1 1659017414000000000
dpdk,command=/ethdev/stats,dpdk_instance=l3fwd-power,host=dpdk-host,params=0 q_opackets_0=0,q_ipackets_5=0,q_errors_11=0,ierrors=0,q_obytes_5=0,q_obytes_10=0,q_opackets_10=0,q_ipackets_4=0,q_ipackets_7=0,q_ipackets_15=0,q_ibytes_5=0,q_ibytes_6=0,q_ibytes_9=0,obytes=0,q_opackets_1=0,q_opackets_11=0,q_obytes_7=0,q_errors_5=0,q_errors_10=0,q_ibytes_4=0,q_obytes_6=0,q_errors_1=0,q_opackets_5=0,q_errors_3=0,q_errors_12=0,q_ipackets_11=0,q_ipackets_12=0,q_obytes_14=0,q_opackets_15=0,q_obytes_2=0,q_errors_8=0,q_opackets_12=0,q_errors_0=0,q_errors_9=0,q_opackets_14=0,q_ibytes_3=0,q_ibytes_15=0,q_ipackets_13=0,q_ipackets_14=0,q_obytes_3=0,q_errors_13=0,q_opackets_3=0,q_ibytes_0=7092,q_ibytes_2=0,q_ibytes_8=0,q_ipackets_8=0,q_ipackets_10=0,q_obytes_4=0,q_ibytes_10=0,q_ibytes_13=0,q_ibytes_1=0,q_ibytes_12=0,opackets=0,q_obytes_1=0,q_errors_15=0,q_opackets_2=0,oerrors=0,rx_nombuf=0,q_opackets_8=0,q_ibytes_11=0,q_ipackets_3=0,q_obytes_0=0,q_obytes_12=0,q_obytes_11=0,q_obytes_13=0,q_errors_6=0,q_ipackets_1=0,q_ipackets_6=0,q_ipackets_9=0,q_obytes_15=0,q_opackets_7=0,q_ibytes_14=0,ipackets=98,q_ipackets_2=0,q_opackets_6=0,q_ibytes_7=0,imissed=0,q_opackets_4=0,q_opackets_9=0,q_obytes_8=0,q_obytes_9=0,q_errors_4=0,q_errors_14=0,q_opackets_13=0,ibytes=7092,q_ipackets_0=98,q_errors_2=0,q_errors_7=0 1606310780000000000
dpdk,command=/ethdev/stats,dpdk_instance=l3fwd-power,host=dpdk-host,params=1 q_opackets_0=0,q_ipackets_5=0,q_errors_11=0,ierrors=0,q_obytes_5=0,q_obytes_10=0,q_opackets_10=0,q_ipackets_4=0,q_ipackets_7=0,q_ipackets_15=0,q_ibytes_5=0,q_ibytes_6=0,q_ibytes_9=0,obytes=0,q_opackets_1=0,q_opackets_11=0,q_obytes_7=0,q_errors_5=0,q_errors_10=0,q_ibytes_4=0,q_obytes_6=0,q_errors_1=0,q_opackets_5=0,q_errors_3=0,q_errors_12=0,q_ipackets_11=0,q_ipackets_12=0,q_obytes_14=0,q_opackets_15=0,q_obytes_2=0,q_errors_8=0,q_opackets_12=0,q_errors_0=0,q_errors_9=0,q_opackets_14=0,q_ibytes_3=0,q_ibytes_15=0,q_ipackets_13=0,q_ipackets_14=0,q_obytes_3=0,q_errors_13=0,q_opackets_3=0,q_ibytes_0=7092,q_ibytes_2=0,q_ibytes_8=0,q_ipackets_8=0,q_ipackets_10=0,q_obytes_4=0,q_ibytes_10=0,q_ibytes_13=0,q_ibytes_1=0,q_ibytes_12=0,opackets=0,q_obytes_1=0,q_errors_15=0,q_opackets_2=0,oerrors=0,rx_nombuf=0,q_opackets_8=0,q_ibytes_11=0,q_ipackets_3=0,q_obytes_0=0,q_obytes_12=0,q_obytes_11=0,q_obytes_13=0,q_errors_6=0,q_ipackets_1=0,q_ipackets_6=0,q_ipackets_9=0,q_obytes_15=0,q_opackets_7=0,q_ibytes_14=0,ipackets=98,q_ipackets_2=0,q_opackets_6=0,q_ibytes_7=0,imissed=0,q_opackets_4=0,q_opackets_9=0,q_obytes_8=0,q_obytes_9=0,q_errors_4=0,q_errors_14=0,q_opackets_13=0,ibytes=7092,q_ipackets_0=98,q_errors_2=0,q_errors_7=0 1606310780000000000
dpdk,command=/ethdev/xstats,dpdk_instance=l3fwd-power,host=dpdk-host,params=0 out_octets_encrypted=0,rx_fcoe_mbuf_allocation_errors=0,tx_q1packets=0,rx_priority0_xoff_packets=0,rx_priority7_xoff_packets=0,rx_errors=0,mac_remote_errors=0,in_pkts_invalid=0,tx_priority3_xoff_packets=0,tx_errors=0,rx_fcoe_bytes=0,rx_flow_control_xon_packets=0,rx_priority4_xoff_packets=0,tx_priority2_xoff_packets=0,rx_illegal_byte_errors=0,rx_xoff_packets=0,rx_management_packets=0,rx_priority7_dropped=0,rx_priority4_dropped=0,in_pkts_unchecked=0,rx_error_bytes=0,rx_size_256_to_511_packets=0,tx_priority4_xoff_packets=0,rx_priority6_xon_packets=0,tx_priority4_xon_to_xoff_packets=0,in_pkts_delayed=0,rx_priority0_mbuf_allocation_errors=0,out_octets_protected=0,tx_priority7_xon_to_xoff_packets=0,tx_priority1_xon_to_xoff_packets=0,rx_fcoe_no_direct_data_placement_ext_buff=0,tx_priority6_xon_to_xoff_packets=0,flow_director_filter_add_errors=0,rx_total_packets=99,rx_crc_errors=0,flow_director_filter_remove_errors=0,rx_missed_errors=0,tx_size_64_packets=0,rx_priority3_dropped=0,flow_director_matched_filters=0,tx_priority2_xon_to_xoff_packets=0,rx_priority1_xon_packets=0,rx_size_65_to_127_packets=99,rx_fragment_errors=0,in_pkts_notusingsa=0,rx_q0bytes=7162,rx_fcoe_dropped=0,rx_priority1_dropped=0,rx_fcoe_packets=0,rx_priority5_xoff_packets=0,out_pkts_protected=0,tx_total_packets=0,rx_priority2_dropped=0,in_pkts_late=0,tx_q1bytes=0,in_pkts_badtag=0,rx_multicast_packets=99,rx_priority6_xoff_packets=0,tx_flow_control_xoff_packets=0,rx_flow_control_xoff_packets=0,rx_priority0_xon_packets=0,in_pkts_untagged=0,tx_fcoe_packets=0,rx_priority7_mbuf_allocation_errors=0,tx_priority0_xon_to_xoff_packets=0,tx_priority5_xon_to_xoff_packets=0,tx_flow_control_xon_packets=0,tx_q0packets=0,tx_xoff_packets=0,rx_size_512_to_1023_packets=0,rx_priority3_xon_packets=0,rx_q0errors=0,rx_oversize_errors=0,tx_priority4_xon_packets=0,tx_priority5_xoff_packets=0,rx_priority5_xon_packets=0,rx_total_missed_packets=0,rx_priority4_mbuf_allocation_errors=0,tx_priority1_xon_packets=0,tx_management_packets=0,rx_priority5_mbuf_allocation_errors=0,rx_fcoe_no_direct_data_placement=0,rx_undersize_errors=0,tx_priority1_xoff_packets=0,rx_q0packets=99,tx_q2packets=0,tx_priority6_xon_packets=0,rx_good_packets=99,tx_priority5_xon_packets=0,tx_size_256_to_511_packets=0,rx_priority6_dropped=0,rx_broadcast_packets=0,tx_size_512_to_1023_packets=0,tx_priority3_xon_to_xoff_packets=0,in_pkts_unknownsci=0,in_octets_validated=0,tx_priority6_xoff_packets=0,tx_priority7_xoff_packets=0,rx_jabber_errors=0,tx_priority7_xon_packets=0,tx_priority0_xon_packets=0,in_pkts_unusedsa=0,tx_priority0_xoff_packets=0,mac_local_errors=33,rx_total_bytes=7162,in_pkts_notvalid=0,rx_length_errors=0,in_octets_decrypted=0,rx_size_128_to_255_packets=0,rx_good_bytes=7162,tx_size_65_to_127_packets=0,rx_mac_short_packet_dropped=0,tx_size_1024_to_max_packets=0,rx_priority2_mbuf_allocation_errors=0,flow_director_added_filters=0,tx_multicast_packets=0,rx_fcoe_crc_errors=0,rx_priority1_xoff_packets=0,flow_director_missed_filters=0,rx_xon_packets=0,tx_size_128_to_255_packets=0,out_pkts_encrypted=0,rx_priority4_xon_packets=0,rx_priority0_dropped=0,rx_size_1024_to_max_packets=0,tx_good_bytes=0,rx_management_dropped=0,rx_mbuf_allocation_errors=0,tx_xon_packets=0,rx_priority3_xoff_packets=0,tx_good_packets=0,tx_fcoe_bytes=0,rx_priority6_mbuf_allocation_errors=0,rx_priority2_xon_packets=0,tx_broadcast_packets=0,tx_q2bytes=0,rx_priority7_xon_packets=0,out_pkts_untagged=0,rx_priority2_xoff_packets=0,rx_priority1_mbuf_allocation_errors=0,tx_q0bytes=0,rx_size_64_packets=0,rx_priority5_dropped=0,tx_priority2_xon_packets=0,in_pkts_nosci=0,flow_director_removed_filters=0,in_pkts_ok=0,rx_l3_l4_xsum_error=0,rx_priority3_mbuf_allocation_errors=0,tx_priority3_xon_packets=0 1606310780000000000
dpdk,command=/ethdev/xstats,dpdk_instance=l3fwd-power,host=dpdk-host,params=1 tx_priority5_xoff_packets=0,in_pkts_unknownsci=0,tx_q0packets=0,tx_total_packets=0,rx_crc_errors=0,rx_priority4_xoff_packets=0,rx_priority5_dropped=0,tx_size_65_to_127_packets=0,rx_good_packets=98,tx_priority6_xoff_packets=0,tx_fcoe_bytes=0,out_octets_protected=0,out_pkts_encrypted=0,rx_priority1_xon_packets=0,tx_size_128_to_255_packets=0,rx_flow_control_xoff_packets=0,rx_priority7_xoff_packets=0,tx_priority0_xon_to_xoff_packets=0,rx_broadcast_packets=0,tx_priority1_xon_packets=0,rx_xon_packets=0,rx_fragment_errors=0,tx_flow_control_xoff_packets=0,tx_q0bytes=0,out_pkts_untagged=0,rx_priority4_xon_packets=0,tx_priority5_xon_packets=0,rx_priority1_xoff_packets=0,rx_good_bytes=7092,rx_priority4_mbuf_allocation_errors=0,in_octets_decrypted=0,tx_priority2_xon_to_xoff_packets=0,rx_priority3_dropped=0,tx_multicast_packets=0,mac_local_errors=33,in_pkts_ok=0,rx_illegal_byte_errors=0,rx_xoff_packets=0,rx_q0errors=0,flow_director_added_filters=0,rx_size_256_to_511_packets=0,rx_priority3_xon_packets=0,rx_l3_l4_xsum_error=0,rx_priority6_dropped=0,in_pkts_notvalid=0,rx_size_64_packets=0,tx_management_packets=0,rx_length_errors=0,tx_priority7_xon_to_xoff_packets=0,rx_mbuf_allocation_errors=0,rx_missed_errors=0,rx_priority1_mbuf_allocation_errors=0,rx_fcoe_no_direct_data_placement=0,tx_priority3_xoff_packets=0,in_pkts_delayed=0,tx_errors=0,rx_size_512_to_1023_packets=0,tx_priority4_xon_packets=0,rx_q0bytes=7092,in_pkts_unchecked=0,tx_size_512_to_1023_packets=0,rx_fcoe_packets=0,in_pkts_nosci=0,rx_priority6_mbuf_allocation_errors=0,rx_priority1_dropped=0,tx_q2packets=0,rx_priority7_dropped=0,tx_size_1024_to_max_packets=0,rx_management_packets=0,rx_multicast_packets=98,rx_total_bytes=7092,mac_remote_errors=0,tx_priority3_xon_packets=0,rx_priority2_mbuf_allocation_errors=0,rx_priority5_mbuf_allocation_errors=0,tx_q2bytes=0,rx_size_128_to_255_packets=0,in_pkts_badtag=0,out_pkts_protected=0,rx_management_dropped=0,rx_fcoe_bytes=0,flow_director_removed_filters=0,tx_priority2_xoff_packets=0,rx_fcoe_crc_errors=0,rx_priority0_mbuf_allocation_errors=0,rx_priority0_xon_packets=0,rx_fcoe_dropped=0,tx_priority1_xon_to_xoff_packets=0,rx_size_65_to_127_packets=98,rx_q0packets=98,tx_priority0_xoff_packets=0,rx_priority6_xon_packets=0,rx_total_packets=98,rx_undersize_errors=0,flow_director_missed_filters=0,rx_jabber_errors=0,in_pkts_invalid=0,in_pkts_late=0,rx_priority5_xon_packets=0,tx_priority4_xoff_packets=0,out_octets_encrypted=0,tx_q1packets=0,rx_priority5_xoff_packets=0,rx_priority6_xoff_packets=0,rx_errors=0,in_octets_validated=0,rx_priority3_xoff_packets=0,tx_priority4_xon_to_xoff_packets=0,tx_priority5_xon_to_xoff_packets=0,tx_flow_control_xon_packets=0,rx_priority0_dropped=0,flow_director_filter_add_errors=0,tx_q1bytes=0,tx_priority6_xon_to_xoff_packets=0,flow_director_matched_filters=0,tx_priority2_xon_packets=0,rx_fcoe_mbuf_allocation_errors=0,rx_priority2_xoff_packets=0,tx_priority7_xoff_packets=0,rx_priority0_xoff_packets=0,rx_oversize_errors=0,in_pkts_notusingsa=0,tx_size_64_packets=0,rx_size_1024_to_max_packets=0,tx_priority6_xon_packets=0,rx_priority2_dropped=0,rx_priority4_dropped=0,rx_priority7_mbuf_allocation_errors=0,rx_flow_control_xon_packets=0,tx_good_bytes=0,tx_priority3_xon_to_xoff_packets=0,rx_total_missed_packets=0,rx_error_bytes=0,tx_priority7_xon_packets=0,rx_mac_short_packet_dropped=0,tx_priority1_xoff_packets=0,tx_good_packets=0,tx_broadcast_packets=0,tx_xon_packets=0,in_pkts_unusedsa=0,rx_priority2_xon_packets=0,in_pkts_untagged=0,tx_fcoe_packets=0,flow_director_filter_remove_errors=0,rx_priority3_mbuf_allocation_errors=0,tx_priority0_xon_packets=0,rx_priority7_xon_packets=0,rx_fcoe_no_direct_data_placement_ext_buff=0,tx_xoff_packets=0,tx_size_256_to_511_packets=0 1606310780000000000
dpdk,command=/ethdev/link_status,dpdk_instance=l3fwd-power,host=dpdk-host,params=0 status="UP",link_status=1,speed=10000,duplex="full-duplex" 1606310780000000000
dpdk,command=/ethdev/link_status,dpdk_instance=l3fwd-power,host=dpdk-host,params=1 status="UP",link_status=1,speed=10000,duplex="full-duplex" 1606310780000000000
dpdk,command=/l3fwd-power/stats,dpdk_instance=l3fwd-power,host=dpdk-host empty_poll=49506395979901,full_poll=0,busy_percent=0 1606310780000000000
```

When running the plugin configuration below...

```toml
[[inputs.dpdk]]
  interval = "30s"
  socket_access_timeout = "10s"
  device_types = ["ethdev"]
  metadata_fields = ["version", "pid"]
  plugin_options = ["in_memory"]

  [inputs.dpdk.ethdev]
    exclude_commands = ["/ethdev/info", "/ethdev/stats", "/ethdev/xstats"]
```

Expected output for a `dpdk` plugin instance running with the `link_status`
command and all metadata fields enabled. Additionally, a `link_status` field is
exposed to represent the string value of the `status` field (`DOWN`=0, `UP`=1):

```text
dpdk,command=/ethdev/link_status,host=dpdk-host,params=0 pid=100988i,version="DPDK 21.11.2",status="DOWN",link_status=0i 1660295749000000000
dpdk,command=/ethdev/link_status,host=dpdk-host,params=0 pid=2401624i,version="DPDK 21.11.2",status="UP",link_status=1i 1660295749000000000
```
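
The `status`-to-`link_status` convention above (`DOWN`=0, `UP`=1) can be sketched in Python. This is a minimal illustration of the documented mapping, not the plugin's actual Go implementation:

```python
# Map the textual ethdev link status to the numeric link_status field,
# following the documented convention: DOWN=0, UP=1.
LINK_STATUS = {"DOWN": 0, "UP": 1}

def link_status_field(status: str) -> int:
    """Return the numeric link_status value for a status string."""
    return LINK_STATUS[status.upper()]
```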
---
description: "Telegraf plugin for collecting metrics from Amazon Elastic Container Service"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Amazon Elastic Container Service
    identifier: input-ecs
tags: [Amazon Elastic Container Service, "input-plugins", "configuration", "cloud"]
introduced: "v1.11.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/ecs/README.md, Amazon Elastic Container Service Plugin Source
---

# Amazon Elastic Container Service Input Plugin

This plugin gathers statistics on running containers in a Task from the
[Amazon Elastic Container Service](https://aws.amazon.com/ecs/) using the [Amazon ECS metadata](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-metadata-endpoint.html)
and the [v2](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-metadata-endpoint-v2.html) or [v3](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-metadata-endpoint-v3.html) statistics API endpoints.

> [!IMPORTANT]
> The Telegraf container must be run in the same Task as the workload it is
> inspecting.

The amazon-ecs-agent (though it _is_ a container running on the host) is not
present in the metadata/stats endpoints.

**Introduced in:** Telegraf v1.11.0
**Tags:** cloud
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [Configuration options](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Read metrics about ECS containers
[[inputs.ecs]]
  ## ECS metadata url.
  ## Metadata v2 API is used if set explicitly. Otherwise,
  ## v3 metadata endpoint API is used if available.
  # endpoint_url = ""

  ## Containers to include and exclude. Globs accepted.
  ## Note that an empty array for both will include all containers
  # container_name_include = []
  # container_name_exclude = []

  ## Container states to include and exclude. Globs accepted.
  ## When empty only containers in the "RUNNING" state will be captured.
  ## Possible values are "NONE", "PULLED", "CREATED", "RUNNING",
  ## "RESOURCES_PROVISIONED", "STOPPED".
  # container_status_include = []
  # container_status_exclude = []

  ## ECS labels to include and exclude as tags. Globs accepted.
  ## Note that an empty array for both will include all labels as tags
  ecs_label_include = [ "com.amazonaws.ecs.*" ]
  ecs_label_exclude = []

  ## Timeout for queries.
  # timeout = "5s"
```
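
The include/exclude semantics described above (empty arrays match everything; globs accepted) can be sketched as follows. This is an illustrative approximation of filter behavior, not the plugin's actual code, and it assumes exclude patterns take priority over include patterns:

```python
from fnmatch import fnmatch

def container_passes(name: str, include: list[str], exclude: list[str]) -> bool:
    """Return True if a container name passes the include/exclude globs.

    An empty include list matches all names; an exclude match always drops
    the container (assumed priority, for illustration only).
    """
    included = not include or any(fnmatch(name, pat) for pat in include)
    excluded = any(fnmatch(name, pat) for pat in exclude)
    return included and not excluded

# Example: include nginx containers but drop the internal pause container
print(container_passes("nginx-web", ["nginx*"], ["~internal~ecs~pause"]))  # True
```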

## Configuration (enforce v2 metadata)

```toml
# Read metrics about ECS containers
[[inputs.ecs]]
  ## ECS metadata url.
  ## Metadata v2 API is used if set explicitly. Otherwise,
  ## v3 metadata endpoint API is used if available.
  endpoint_url = "http://169.254.170.2"

  ## Containers to include and exclude. Globs accepted.
  ## Note that an empty array for both will include all containers
  # container_name_include = []
  # container_name_exclude = []

  ## Container states to include and exclude. Globs accepted.
  ## When empty only containers in the "RUNNING" state will be captured.
  ## Possible values are "NONE", "PULLED", "CREATED", "RUNNING",
  ## "RESOURCES_PROVISIONED", "STOPPED".
  # container_status_include = []
  # container_status_exclude = []

  ## ECS labels to include and exclude as tags. Globs accepted.
  ## Note that an empty array for both will include all labels as tags
  ecs_label_include = [ "com.amazonaws.ecs.*" ]
  ecs_label_exclude = []

  ## Timeout for queries.
  # timeout = "5s"
```
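
The v2/v3 selection rule in the comments above can be sketched as follows. `ECS_CONTAINER_METADATA_URI` is the environment variable the ECS v3 metadata endpoint documents; the exact resolution order here is an assumption inferred from the documented behavior, not the plugin's actual code:

```python
import os

V2_DEFAULT = "http://169.254.170.2"  # well-known ECS v2 metadata address

def resolve_endpoint(endpoint_url: str = "") -> str:
    """Pick the metadata endpoint: an explicit v2 URL wins, else the v3 env var."""
    if endpoint_url:
        return endpoint_url  # v2 API, used when set explicitly
    v3 = os.environ.get("ECS_CONTAINER_METADATA_URI")
    if v3:
        return v3  # v3 API base URI, injected by the ECS agent
    return V2_DEFAULT  # fall back to the v2 default (assumption)
```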

## Metrics

- ecs_task
  - tags:
    - cluster
    - task_arn
    - family
    - revision
    - id
    - name
  - fields:
    - desired_status (string)
    - known_status (string)
    - limit_cpu (float)
    - limit_mem (float)

- ecs_container_mem
  - tags:
    - cluster
    - task_arn
    - family
    - revision
    - id
    - name
  - fields:
    - container_id
    - active_anon
    - active_file
    - cache
    - hierarchical_memory_limit
    - inactive_anon
    - inactive_file
    - mapped_file
    - pgfault
    - pgmajfault
    - pgpgin
    - pgpgout
    - rss
    - rss_huge
    - total_active_anon
    - total_active_file
    - total_cache
    - total_inactive_anon
    - total_inactive_file
    - total_mapped_file
    - total_pgfault
    - total_pgmajfault
    - total_pgpgin
    - total_pgpgout
    - total_rss
    - total_rss_huge
    - total_unevictable
    - total_writeback
    - unevictable
    - writeback
    - fail_count
    - limit
    - max_usage
    - usage
    - usage_percent

- ecs_container_cpu
  - tags:
    - cluster
    - task_arn
    - family
    - revision
    - id
    - name
    - cpu
  - fields:
    - container_id
    - usage_total
    - usage_in_usermode
    - usage_in_kernelmode
    - usage_system
    - throttling_periods
    - throttling_throttled_periods
    - throttling_throttled_time
    - usage_percent
    - usage_total

- ecs_container_net
  - tags:
    - cluster
    - task_arn
    - family
    - revision
    - id
    - name
    - network
  - fields:
    - container_id
    - rx_packets
    - rx_dropped
    - rx_bytes
    - rx_errors
    - tx_packets
    - tx_dropped
    - tx_bytes
    - tx_errors

- ecs_container_blkio
  - tags:
    - cluster
    - task_arn
    - family
    - revision
    - id
    - name
    - device
  - fields:
    - container_id
    - io_service_bytes_recursive_async
    - io_service_bytes_recursive_read
    - io_service_bytes_recursive_sync
    - io_service_bytes_recursive_total
    - io_service_bytes_recursive_write
    - io_serviced_recursive_async
    - io_serviced_recursive_read
    - io_serviced_recursive_sync
    - io_serviced_recursive_total
    - io_serviced_recursive_write

- ecs_container_meta
  - tags:
    - cluster
    - task_arn
    - family
    - revision
    - id
    - name
  - fields:
    - container_id
    - docker_name
    - image
    - image_id
    - desired_status
    - known_status
    - limit_cpu
    - limit_mem
    - created_at
    - started_at
    - type

## Example Output

```text
ecs_task,cluster=test,family=nginx,host=c4b301d4a123,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a desired_status="RUNNING",known_status="RUNNING",limit_cpu=0.5,limit_mem=512 1542641488000000000
ecs_container_mem,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a active_anon=40960i,active_file=8192i,cache=790528i,pgpgin=1243i,total_pgfault=1298i,total_rss=40960i,limit=1033658368i,max_usage=4825088i,hierarchical_memory_limit=536870912i,rss=40960i,total_active_file=8192i,total_mapped_file=618496i,usage_percent=0.05349543109392212,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",pgfault=1298i,pgmajfault=6i,pgpgout=1040i,total_active_anon=40960i,total_inactive_file=782336i,total_pgpgin=1243i,usage=552960i,inactive_file=782336i,mapped_file=618496i,total_cache=790528i,total_pgpgout=1040i 1542642001000000000
ecs_container_cpu,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,cpu=cpu-total,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a usage_in_kernelmode=0i,throttling_throttled_periods=0i,throttling_periods=0i,throttling_throttled_time=0i,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",usage_percent=0,usage_total=26426156i,usage_in_usermode=20000000i,usage_system=2336100000000i 1542642001000000000
ecs_container_cpu,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,cpu=cpu0,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",usage_total=26426156i 1542642001000000000
ecs_container_net,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,network=eth0,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a rx_errors=0i,rx_packets=36i,tx_errors=0i,tx_bytes=648i,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",rx_dropped=0i,rx_bytes=5338i,tx_packets=8i,tx_dropped=0i 1542642001000000000
ecs_container_net,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,network=eth5,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a rx_errors=0i,tx_packets=9i,rx_packets=26i,tx_errors=0i,rx_bytes=4641i,tx_dropped=0i,tx_bytes=690i,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",rx_dropped=0i 1542642001000000000
ecs_container_net,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,network=total,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a rx_dropped=0i,rx_bytes=9979i,rx_errors=0i,rx_packets=62i,tx_bytes=1338i,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",tx_packets=17i,tx_dropped=0i,tx_errors=0i 1542642001000000000
ecs_container_blkio,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,device=253:1,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a io_service_bytes_recursive_sync=790528i,io_service_bytes_recursive_total=790528i,io_serviced_recursive_sync=10i,io_serviced_recursive_write=0i,io_serviced_recursive_async=0i,io_serviced_recursive_total=10i,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",io_service_bytes_recursive_read=790528i,io_service_bytes_recursive_write=0i,io_service_bytes_recursive_async=0i,io_serviced_recursive_read=10i 1542642001000000000
ecs_container_blkio,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,device=253:2,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a io_service_bytes_recursive_sync=790528i,io_service_bytes_recursive_total=790528i,io_serviced_recursive_async=0i,io_serviced_recursive_total=10i,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",io_service_bytes_recursive_read=790528i,io_service_bytes_recursive_write=0i,io_service_bytes_recursive_async=0i,io_serviced_recursive_read=10i,io_serviced_recursive_write=0i,io_serviced_recursive_sync=10i 1542642001000000000
ecs_container_blkio,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,device=253:4,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a io_service_bytes_recursive_write=0i,io_service_bytes_recursive_sync=790528i,io_service_bytes_recursive_async=0i,io_service_bytes_recursive_total=790528i,io_serviced_recursive_async=0i,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",io_service_bytes_recursive_read=790528i,io_serviced_recursive_read=10i,io_serviced_recursive_write=0i,io_serviced_recursive_sync=10i,io_serviced_recursive_total=10i 1542642001000000000
ecs_container_blkio,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,device=202:26368,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a io_serviced_recursive_read=10i,io_serviced_recursive_write=0i,io_serviced_recursive_sync=10i,io_serviced_recursive_async=0i,io_serviced_recursive_total=10i,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",io_service_bytes_recursive_sync=790528i,io_service_bytes_recursive_total=790528i,io_service_bytes_recursive_async=0i,io_service_bytes_recursive_read=790528i,io_service_bytes_recursive_write=0i 1542642001000000000
ecs_container_blkio,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,device=total,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a io_serviced_recursive_async=0i,io_serviced_recursive_read=40i,io_serviced_recursive_sync=40i,io_serviced_recursive_write=0i,io_serviced_recursive_total=40i,io_service_bytes_recursive_read=3162112i,io_service_bytes_recursive_write=0i,io_service_bytes_recursive_async=0i,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",io_service_bytes_recursive_sync=3162112i,io_service_bytes_recursive_total=3162112i 1542642001000000000
ecs_container_meta,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a limit_mem=0,type="CNI_PAUSE",container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",docker_name="ecs-nginx-2-internalecspause",limit_cpu=0,known_status="RESOURCES_PROVISIONED",image="amazon/amazon-ecs-pause:0.1.0",image_id="",desired_status="RESOURCES_PROVISIONED" 1542642001000000000
```
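
In the output above, the `network=total` series is the per-interface sum (for example `rx_bytes`: 5338 + 4641 = 9979). A quick sketch of that aggregation, assuming each counter is simply summed across interfaces:

```python
# Per-interface counters taken from the example output above
interfaces = {
    "eth0": {"rx_bytes": 5338, "rx_packets": 36, "tx_bytes": 648, "tx_packets": 8},
    "eth5": {"rx_bytes": 4641, "rx_packets": 26, "tx_bytes": 690, "tx_packets": 9},
}

def network_total(ifaces: dict) -> dict:
    """Sum each counter across interfaces to build the network=total series."""
    total: dict = {}
    for counters in ifaces.values():
        for field, value in counters.items():
            total[field] = total.get(field, 0) + value
    return total

print(network_total(interfaces))
# {'rx_bytes': 9979, 'rx_packets': 62, 'tx_bytes': 1338, 'tx_packets': 17}
```

The sums match the `network=total` line in the example output.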
---
description: "Telegraf plugin for collecting metrics from Elasticsearch"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Elasticsearch
    identifier: input-elasticsearch
tags: [Elasticsearch, "input-plugins", "configuration", "server"]
introduced: "v0.1.5"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/elasticsearch/README.md, Elasticsearch Plugin Source
---

# Elasticsearch Input Plugin

This plugin queries endpoints of an [Elasticsearch](https://www.elastic.co/) instance to obtain
[node statistics](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html) and optionally [cluster health](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html)
metrics.
Additionally, the plugin is able to query [cluster](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-stats.html) and
[indices and shard](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-stats.html) statistics for the master node.

> [!NOTE]
> Specific statistics information can change between Elasticsearch versions. In
> general, this plugin attempts to stay as version-generic as possible by
> tagging high-level categories only and creating unique field names from
> whatever statistics names are provided at the mid-low level.

**Introduced in:** Telegraf v0.1.5
**Tags:** server
**OS support:** all

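The flattening strategy described in the note above (high-level category tags plus unique field names built from nested statistic names) can be sketched like this. The underscore separator and exact naming are assumptions for illustration, not the plugin's actual implementation:

```python
def flatten_stats(stats: dict, prefix: str = "") -> dict:
    """Flatten nested node-stats JSON into unique underscore-joined field names."""
    flat = {}
    for key, value in stats.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            # Recurse, carrying the parent key so field names stay unique
            flat.update(flatten_stats(value, name + "_"))
        else:
            flat[name] = value
    return flat

# A fragment shaped like the "jvm" section of the nodes stats API response
sample = {"mem": {"heap_used_in_bytes": 12345, "heap_max_in_bytes": 67890}}
print(flatten_stats(sample))
# {'mem_heap_used_in_bytes': 12345, 'mem_heap_max_in_bytes': 67890}
```
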
## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [Configuration options](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Read stats from one or more Elasticsearch servers or clusters
[[inputs.elasticsearch]]
  ## Specify a list of one or more Elasticsearch servers.
  ## You can add a username and password to your url to use basic authentication:
  ## servers = ["http://user:pass@localhost:9200"]
  servers = ["http://localhost:9200"]

  ## HTTP headers to send with each request
  # headers = { "X-Custom-Header" = "Custom" }

  ## When local is true (the default), the node will read only its own stats.
  ## Set local to false when you want to read the node stats from all nodes
  ## of the cluster.
  local = true

  ## Set cluster_health to true when you want to obtain cluster health stats
  cluster_health = false

  ## Adjust cluster_health_level when you want to obtain detailed health stats
  ## The options are
  ##  - indices (default)
  ##  - cluster
  # cluster_health_level = "indices"

  ## Set cluster_stats to true when you want to obtain cluster stats.
  cluster_stats = false

  ## Only gather cluster_stats from the master node.
  ## This requires local = true.
  cluster_stats_only_from_master = true

  ## Gather stats from the enrich API
  # enrich_stats = false

  ## Indices to collect; can be one or more index names or _all.
  ## Use of wildcards is allowed. Use a wildcard at the end to retrieve index
  ## names that end with a changing value, like a date.
  indices_include = ["_all"]

  ## One of "shards", "cluster", "indices"
  ## Currently only "shards" is implemented
  indices_level = "shards"

  ## node_stats is a list of sub-stats that you want to have gathered.
  ## Valid options are "indices", "os", "process", "jvm", "thread_pool",
  ## "fs", "transport", "http", "breaker". By default, all stats are gathered.
  # node_stats = ["jvm", "http"]

  ## HTTP Basic Authentication username and password.
  # username = ""
  # password = ""

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

  ## If 'use_system_proxy' is set to true, Telegraf will check env vars such as
  ## HTTP_PROXY, HTTPS_PROXY, and NO_PROXY (or their lowercase counterparts).
  ## If 'use_system_proxy' is set to false (default) and 'http_proxy_url' is
  ## provided, Telegraf will use the specified URL as HTTP proxy.
  # use_system_proxy = false
  # http_proxy_url = "http://localhost:8888"

  ## Sets the number of most recent indices to return for indices that are
  ## configured with a date-stamped suffix. Each 'indices_include' entry
  ## ending with a wildcard (*) or glob matching pattern will group together
  ## all indices that match it, and sort them by the date or number after
  ## the wildcard. Metrics are then gathered for only the
  ## 'num_most_recent_indices' most recent indices.
  # num_most_recent_indices = 0
```
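The `num_most_recent_indices` behavior described in the comments above can be sketched as follows. This is a simplified illustration of the documented grouping and sorting, not the plugin's actual Go implementation:

```python
import fnmatch

def most_recent_indices(all_indices, pattern, num_most_recent):
    """Keep only the N most recent indices matching a trailing-wildcard pattern.

    Indices matching the glob pattern are grouped together and sorted by the
    suffix after the fixed prefix; zero-padded date suffixes like YYYY.MM.DD
    sort chronologically in lexicographic order.
    """
    matched = [name for name in all_indices if fnmatch.fnmatch(name, pattern)]
    matched.sort()
    if num_most_recent <= 0:
        return matched  # 0 disables the limit
    return matched[-num_most_recent:]

indices = ["logs-2024.01.01", "logs-2024.01.02", "logs-2024.01.03", "metrics-a"]
print(most_recent_indices(indices, "logs-*", 2))
# → ['logs-2024.01.02', 'logs-2024.01.03']
```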
## Metrics

Emitted when `cluster_health = true`:

- elasticsearch_cluster_health
  - tags:
    - name
  - fields:
    - active_primary_shards (integer)
    - active_shards (integer)
    - active_shards_percent_as_number (float)
    - delayed_unassigned_shards (integer)
    - initializing_shards (integer)
    - number_of_data_nodes (integer)
    - number_of_in_flight_fetch (integer)
    - number_of_nodes (integer)
    - number_of_pending_tasks (integer)
    - relocating_shards (integer)
    - status (string, one of green, yellow or red)
    - status_code (integer, green = 1, yellow = 2, red = 3)
    - task_max_waiting_in_queue_millis (integer)
    - timed_out (boolean)
    - unassigned_shards (integer)
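The `status_code` field is a numeric encoding of the textual `status` field, which makes thresholds and alerts easy to express. The documented mapping can be written as this small sketch:

```python
# Numeric encoding of cluster health status, as documented for status_code.
STATUS_CODE = {"green": 1, "yellow": 2, "red": 3}

def status_code(status: str) -> int:
    """Map an Elasticsearch health status string to its numeric code."""
    return STATUS_CODE[status.lower()]

print(status_code("yellow"))
# → 2
```

For example, an alert on degraded health simply checks `status_code >= 2`.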

Emitted when `cluster_health = true` and `cluster_health_level = "indices"`:

- elasticsearch_cluster_health_indices
  - tags:
    - index
    - name
  - fields:
    - active_primary_shards (integer)
    - active_shards (integer)
    - initializing_shards (integer)
    - number_of_replicas (integer)
    - number_of_shards (integer)
    - relocating_shards (integer)
    - status (string, one of green, yellow or red)
    - status_code (integer, green = 1, yellow = 2, red = 3)
    - unassigned_shards (integer)

Emitted when `cluster_stats = true`:

- elasticsearch_clusterstats_indices
  - tags:
    - cluster_name
    - node_name
    - status
  - fields:
    - completion_size_in_bytes (float)
    - count (float)
    - docs_count (float)
    - docs_deleted (float)
    - fielddata_evictions (float)
    - fielddata_memory_size_in_bytes (float)
    - query_cache_cache_count (float)
    - query_cache_cache_size (float)
    - query_cache_evictions (float)
    - query_cache_hit_count (float)
    - query_cache_memory_size_in_bytes (float)
    - query_cache_miss_count (float)
    - query_cache_total_count (float)
    - segments_count (float)
    - segments_doc_values_memory_in_bytes (float)
    - segments_fixed_bit_set_memory_in_bytes (float)
    - segments_index_writer_memory_in_bytes (float)
    - segments_max_unsafe_auto_id_timestamp (float)
    - segments_memory_in_bytes (float)
    - segments_norms_memory_in_bytes (float)
    - segments_points_memory_in_bytes (float)
    - segments_stored_fields_memory_in_bytes (float)
    - segments_term_vectors_memory_in_bytes (float)
    - segments_terms_memory_in_bytes (float)
    - segments_version_map_memory_in_bytes (float)
    - shards_index_primaries_avg (float)
    - shards_index_primaries_max (float)
    - shards_index_primaries_min (float)
    - shards_index_replication_avg (float)
    - shards_index_replication_max (float)
    - shards_index_replication_min (float)
    - shards_index_shards_avg (float)
    - shards_index_shards_max (float)
    - shards_index_shards_min (float)
    - shards_primaries (float)
    - shards_replication (float)
    - shards_total (float)
    - store_size_in_bytes (float)

- elasticsearch_clusterstats_nodes
  - tags:
    - cluster_name
    - node_name
    - status
  - fields:
    - count_coordinating_only (float)
    - count_data (float)
    - count_ingest (float)
    - count_master (float)
    - count_total (float)
    - fs_available_in_bytes (float)
    - fs_free_in_bytes (float)
    - fs_total_in_bytes (float)
    - jvm_max_uptime_in_millis (float)
    - jvm_mem_heap_max_in_bytes (float)
    - jvm_mem_heap_used_in_bytes (float)
    - jvm_threads (float)
    - jvm_versions_0_count (float)
    - jvm_versions_0_version (string)
    - jvm_versions_0_vm_name (string)
    - jvm_versions_0_vm_vendor (string)
    - jvm_versions_0_vm_version (string)
    - network_types_http_types_security4 (float)
    - network_types_transport_types_security4 (float)
    - os_allocated_processors (float)
    - os_available_processors (float)
    - os_mem_free_in_bytes (float)
    - os_mem_free_percent (float)
    - os_mem_total_in_bytes (float)
    - os_mem_used_in_bytes (float)
    - os_mem_used_percent (float)
    - os_names_0_count (float)
    - os_names_0_name (string)
    - os_pretty_names_0_count (float)
    - os_pretty_names_0_pretty_name (string)
    - process_cpu_percent (float)
    - process_open_file_descriptors_avg (float)
    - process_open_file_descriptors_max (float)
    - process_open_file_descriptors_min (float)
    - versions_0 (string)

Emitted when the appropriate `node_stats` options are set:

- elasticsearch_transport
  - tags:
    - cluster_name
    - node_attribute_ml.enabled
    - node_attribute_ml.machine_memory
    - node_attribute_ml.max_open_jobs
    - node_attribute_xpack.installed
    - node_host
    - node_id
    - node_name
  - fields:
    - rx_count (float)
    - rx_size_in_bytes (float)
    - server_open (float)
    - tx_count (float)
    - tx_size_in_bytes (float)

- elasticsearch_breakers
  - tags:
    - cluster_name
    - node_attribute_ml.enabled
    - node_attribute_ml.machine_memory
    - node_attribute_ml.max_open_jobs
    - node_attribute_xpack.installed
    - node_host
    - node_id
    - node_name
  - fields:
    - accounting_estimated_size_in_bytes (float)
    - accounting_limit_size_in_bytes (float)
    - accounting_overhead (float)
    - accounting_tripped (float)
    - fielddata_estimated_size_in_bytes (float)
    - fielddata_limit_size_in_bytes (float)
    - fielddata_overhead (float)
    - fielddata_tripped (float)
    - in_flight_requests_estimated_size_in_bytes (float)
    - in_flight_requests_limit_size_in_bytes (float)
    - in_flight_requests_overhead (float)
    - in_flight_requests_tripped (float)
    - parent_estimated_size_in_bytes (float)
    - parent_limit_size_in_bytes (float)
    - parent_overhead (float)
    - parent_tripped (float)
    - request_estimated_size_in_bytes (float)
    - request_limit_size_in_bytes (float)
    - request_overhead (float)
    - request_tripped (float)

- elasticsearch_fs
  - tags:
    - cluster_name
    - node_attribute_ml.enabled
    - node_attribute_ml.machine_memory
    - node_attribute_ml.max_open_jobs
    - node_attribute_xpack.installed
    - node_host
    - node_id
    - node_name
  - fields:
    - data_0_available_in_bytes (float)
    - data_0_free_in_bytes (float)
    - data_0_total_in_bytes (float)
    - io_stats_devices_0_operations (float)
    - io_stats_devices_0_read_kilobytes (float)
    - io_stats_devices_0_read_operations (float)
    - io_stats_devices_0_write_kilobytes (float)
    - io_stats_devices_0_write_operations (float)
    - io_stats_total_operations (float)
    - io_stats_total_read_kilobytes (float)
    - io_stats_total_read_operations (float)
    - io_stats_total_write_kilobytes (float)
    - io_stats_total_write_operations (float)
    - timestamp (float)
    - total_available_in_bytes (float)
    - total_free_in_bytes (float)
    - total_total_in_bytes (float)

- elasticsearch_http
  - tags:
    - cluster_name
    - node_attribute_ml.enabled
    - node_attribute_ml.machine_memory
    - node_attribute_ml.max_open_jobs
    - node_attribute_xpack.installed
    - node_host
    - node_id
    - node_name
  - fields:
    - current_open (float)
    - total_opened (float)

- elasticsearch_indices
  - tags:
    - cluster_name
    - node_attribute_ml.enabled
    - node_attribute_ml.machine_memory
    - node_attribute_ml.max_open_jobs
    - node_attribute_xpack.installed
    - node_host
    - node_id
    - node_name
  - fields:
    - completion_size_in_bytes (float)
    - docs_count (float)
    - docs_deleted (float)
    - fielddata_evictions (float)
    - fielddata_memory_size_in_bytes (float)
    - flush_periodic (float)
    - flush_total (float)
    - flush_total_time_in_millis (float)
    - get_current (float)
    - get_exists_time_in_millis (float)
    - get_exists_total (float)
    - get_missing_time_in_millis (float)
    - get_missing_total (float)
    - get_time_in_millis (float)
    - get_total (float)
    - indexing_delete_current (float)
    - indexing_delete_time_in_millis (float)
    - indexing_delete_total (float)
    - indexing_index_current (float)
    - indexing_index_failed (float)
    - indexing_index_time_in_millis (float)
    - indexing_index_total (float)
    - indexing_noop_update_total (float)
    - indexing_throttle_time_in_millis (float)
    - merges_current (float)
    - merges_current_docs (float)
    - merges_current_size_in_bytes (float)
    - merges_total (float)
    - merges_total_auto_throttle_in_bytes (float)
    - merges_total_docs (float)
    - merges_total_size_in_bytes (float)
    - merges_total_stopped_time_in_millis (float)
    - merges_total_throttled_time_in_millis (float)
    - merges_total_time_in_millis (float)
    - query_cache_cache_count (float)
    - query_cache_cache_size (float)
    - query_cache_evictions (float)
    - query_cache_hit_count (float)
    - query_cache_memory_size_in_bytes (float)
    - query_cache_miss_count (float)
    - query_cache_total_count (float)
    - recovery_current_as_source (float)
    - recovery_current_as_target (float)
    - recovery_throttle_time_in_millis (float)
    - refresh_listeners (float)
    - refresh_total (float)
    - refresh_total_time_in_millis (float)
    - request_cache_evictions (float)
    - request_cache_hit_count (float)
    - request_cache_memory_size_in_bytes (float)
    - request_cache_miss_count (float)
    - search_fetch_current (float)
    - search_fetch_time_in_millis (float)
    - search_fetch_total (float)
    - search_open_contexts (float)
    - search_query_current (float)
    - search_query_time_in_millis (float)
    - search_query_total (float)
    - search_scroll_current (float)
    - search_scroll_time_in_millis (float)
    - search_scroll_total (float)
    - search_suggest_current (float)
    - search_suggest_time_in_millis (float)
    - search_suggest_total (float)
    - segments_count (float)
    - segments_doc_values_memory_in_bytes (float)
    - segments_fixed_bit_set_memory_in_bytes (float)
    - segments_index_writer_memory_in_bytes (float)
    - segments_max_unsafe_auto_id_timestamp (float)
    - segments_memory_in_bytes (float)
    - segments_norms_memory_in_bytes (float)
    - segments_points_memory_in_bytes (float)
    - segments_stored_fields_memory_in_bytes (float)
    - segments_term_vectors_memory_in_bytes (float)
    - segments_terms_memory_in_bytes (float)
    - segments_version_map_memory_in_bytes (float)
    - store_size_in_bytes (float)
    - translog_earliest_last_modified_age (float)
    - translog_operations (float)
    - translog_size_in_bytes (float)
    - translog_uncommitted_operations (float)
    - translog_uncommitted_size_in_bytes (float)
    - warmer_current (float)
    - warmer_total (float)
    - warmer_total_time_in_millis (float)

- elasticsearch_jvm
  - tags:
    - cluster_name
    - node_attribute_ml.enabled
    - node_attribute_ml.machine_memory
    - node_attribute_ml.max_open_jobs
    - node_attribute_xpack.installed
    - node_host
    - node_id
    - node_name
  - fields:
    - buffer_pools_direct_count (float)
    - buffer_pools_direct_total_capacity_in_bytes (float)
    - buffer_pools_direct_used_in_bytes (float)
    - buffer_pools_mapped_count (float)
    - buffer_pools_mapped_total_capacity_in_bytes (float)
    - buffer_pools_mapped_used_in_bytes (float)
    - classes_current_loaded_count (float)
    - classes_total_loaded_count (float)
    - classes_total_unloaded_count (float)
    - gc_collectors_old_collection_count (float)
    - gc_collectors_old_collection_time_in_millis (float)
    - gc_collectors_young_collection_count (float)
    - gc_collectors_young_collection_time_in_millis (float)
    - mem_heap_committed_in_bytes (float)
    - mem_heap_max_in_bytes (float)
    - mem_heap_used_in_bytes (float)
    - mem_heap_used_percent (float)
    - mem_non_heap_committed_in_bytes (float)
    - mem_non_heap_used_in_bytes (float)
    - mem_pools_old_max_in_bytes (float)
    - mem_pools_old_peak_max_in_bytes (float)
    - mem_pools_old_peak_used_in_bytes (float)
    - mem_pools_old_used_in_bytes (float)
    - mem_pools_survivor_max_in_bytes (float)
    - mem_pools_survivor_peak_max_in_bytes (float)
    - mem_pools_survivor_peak_used_in_bytes (float)
    - mem_pools_survivor_used_in_bytes (float)
    - mem_pools_young_max_in_bytes (float)
    - mem_pools_young_peak_max_in_bytes (float)
    - mem_pools_young_peak_used_in_bytes (float)
    - mem_pools_young_used_in_bytes (float)
    - threads_count (float)
    - threads_peak_count (float)
    - timestamp (float)
    - uptime_in_millis (float)

- elasticsearch_os
  - tags:
    - cluster_name
    - node_attribute_ml.enabled
    - node_attribute_ml.machine_memory
    - node_attribute_ml.max_open_jobs
    - node_attribute_xpack.installed
    - node_host
    - node_id
    - node_name
  - fields:
    - cgroup_cpu_cfs_period_micros (float)
    - cgroup_cpu_cfs_quota_micros (float)
    - cgroup_cpu_stat_number_of_elapsed_periods (float)
    - cgroup_cpu_stat_number_of_times_throttled (float)
    - cgroup_cpu_stat_time_throttled_nanos (float)
    - cgroup_cpuacct_usage_nanos (float)
    - cpu_load_average_15m (float)
    - cpu_load_average_1m (float)
    - cpu_load_average_5m (float)
    - cpu_percent (float)
    - mem_free_in_bytes (float)
    - mem_free_percent (float)
    - mem_total_in_bytes (float)
    - mem_used_in_bytes (float)
    - mem_used_percent (float)
    - swap_free_in_bytes (float)
    - swap_total_in_bytes (float)
    - swap_used_in_bytes (float)
    - timestamp (float)

- elasticsearch_process
  - tags:
    - cluster_name
    - node_attribute_ml.enabled
    - node_attribute_ml.machine_memory
    - node_attribute_ml.max_open_jobs
    - node_attribute_xpack.installed
    - node_host
    - node_id
    - node_name
  - fields:
    - cpu_percent (float)
    - cpu_total_in_millis (float)
    - max_file_descriptors (float)
    - mem_total_virtual_in_bytes (float)
    - open_file_descriptors (float)
    - timestamp (float)

- elasticsearch_thread_pool
  - tags:
    - cluster_name
    - node_attribute_ml.enabled
    - node_attribute_ml.machine_memory
    - node_attribute_ml.max_open_jobs
    - node_attribute_xpack.installed
    - node_host
    - node_id
    - node_name
  - fields:
    - analyze_active (float)
    - analyze_completed (float)
    - analyze_largest (float)
    - analyze_queue (float)
    - analyze_rejected (float)
    - analyze_threads (float)
    - ccr_active (float)
    - ccr_completed (float)
    - ccr_largest (float)
    - ccr_queue (float)
    - ccr_rejected (float)
    - ccr_threads (float)
    - fetch_shard_started_active (float)
    - fetch_shard_started_completed (float)
    - fetch_shard_started_largest (float)
    - fetch_shard_started_queue (float)
    - fetch_shard_started_rejected (float)
    - fetch_shard_started_threads (float)
    - fetch_shard_store_active (float)
    - fetch_shard_store_completed (float)
    - fetch_shard_store_largest (float)
    - fetch_shard_store_queue (float)
    - fetch_shard_store_rejected (float)
    - fetch_shard_store_threads (float)
    - flush_active (float)
    - flush_completed (float)
    - flush_largest (float)
    - flush_queue (float)
    - flush_rejected (float)
    - flush_threads (float)
    - force_merge_active (float)
    - force_merge_completed (float)
    - force_merge_largest (float)
    - force_merge_queue (float)
    - force_merge_rejected (float)
    - force_merge_threads (float)
    - generic_active (float)
    - generic_completed (float)
    - generic_largest (float)
    - generic_queue (float)
    - generic_rejected (float)
    - generic_threads (float)
    - get_active (float)
    - get_completed (float)
    - get_largest (float)
    - get_queue (float)
    - get_rejected (float)
    - get_threads (float)
    - index_active (float)
    - index_completed (float)
    - index_largest (float)
    - index_queue (float)
    - index_rejected (float)
    - index_threads (float)
    - listener_active (float)
    - listener_completed (float)
    - listener_largest (float)
    - listener_queue (float)
    - listener_rejected (float)
    - listener_threads (float)
    - management_active (float)
    - management_completed (float)
    - management_largest (float)
    - management_queue (float)
    - management_rejected (float)
    - management_threads (float)
    - ml_autodetect_active (float)
    - ml_autodetect_completed (float)
    - ml_autodetect_largest (float)
    - ml_autodetect_queue (float)
    - ml_autodetect_rejected (float)
    - ml_autodetect_threads (float)
    - ml_datafeed_active (float)
    - ml_datafeed_completed (float)
    - ml_datafeed_largest (float)
    - ml_datafeed_queue (float)
    - ml_datafeed_rejected (float)
    - ml_datafeed_threads (float)
    - ml_utility_active (float)
    - ml_utility_completed (float)
    - ml_utility_largest (float)
    - ml_utility_queue (float)
    - ml_utility_rejected (float)
    - ml_utility_threads (float)
    - refresh_active (float)
    - refresh_completed (float)
    - refresh_largest (float)
    - refresh_queue (float)
    - refresh_rejected (float)
    - refresh_threads (float)
    - rollup_indexing_active (float)
    - rollup_indexing_completed (float)
    - rollup_indexing_largest (float)
    - rollup_indexing_queue (float)
    - rollup_indexing_rejected (float)
    - rollup_indexing_threads (float)
    - search_active (float)
    - search_completed (float)
    - search_largest (float)
    - search_queue (float)
    - search_rejected (float)
    - search_threads (float)
    - search_throttled_active (float)
    - search_throttled_completed (float)
    - search_throttled_largest (float)
    - search_throttled_queue (float)
    - search_throttled_rejected (float)
    - search_throttled_threads (float)
    - security-token-key_active (float)
    - security-token-key_completed (float)
    - security-token-key_largest (float)
    - security-token-key_queue (float)
    - security-token-key_rejected (float)
    - security-token-key_threads (float)
    - snapshot_active (float)
    - snapshot_completed (float)
    - snapshot_largest (float)
    - snapshot_queue (float)
    - snapshot_rejected (float)
    - snapshot_threads (float)
    - warmer_active (float)
    - warmer_completed (float)
    - warmer_largest (float)
    - warmer_queue (float)
    - warmer_rejected (float)
    - warmer_threads (float)
    - watcher_active (float)
    - watcher_completed (float)
    - watcher_largest (float)
    - watcher_queue (float)
    - watcher_rejected (float)
    - watcher_threads (float)
    - write_active (float)
    - write_completed (float)
    - write_largest (float)
    - write_queue (float)
    - write_rejected (float)
    - write_threads (float)
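Field names such as `jvm_mem_heap_used_in_bytes` or `versions_0_count` in the lists above come from flattening the nested JSON returned by the stats APIs: object keys are joined with underscores and array elements get their index as a path component. A rough sketch of that flattening (an illustration, not the plugin's actual Go code):

```python
def flatten(obj, prefix=""):
    """Flatten nested dicts/lists into underscore-joined field names."""
    fields = {}
    if isinstance(obj, dict):
        items = obj.items()
    elif isinstance(obj, list):
        # Array elements are addressed by index, e.g. versions_0
        items = ((str(i), v) for i, v in enumerate(obj))
    else:
        fields[prefix] = obj  # leaf value
        return fields
    for key, value in items:
        name = f"{prefix}_{key}" if prefix else key
        fields.update(flatten(value, name))
    return fields

stats = {"mem": {"heap_used_in_bytes": 123}, "versions": [{"count": 2}]}
print(flatten(stats))
# → {'mem_heap_used_in_bytes': 123, 'versions_0_count': 2}
```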

Emitted when the appropriate `indices_stats` options are set:

- elasticsearch_indices_stats_(primaries|total)
  - tags:
    - index_name
  - fields:
    - completion_size_in_bytes (float)
    - docs_count (float)
    - docs_deleted (float)
    - fielddata_evictions (float)
    - fielddata_memory_size_in_bytes (float)
    - flush_periodic (float)
    - flush_total (float)
    - flush_total_time_in_millis (float)
    - get_current (float)
    - get_exists_time_in_millis (float)
    - get_exists_total (float)
    - get_missing_time_in_millis (float)
    - get_missing_total (float)
    - get_time_in_millis (float)
    - get_total (float)
    - indexing_delete_current (float)
    - indexing_delete_time_in_millis (float)
    - indexing_delete_total (float)
    - indexing_index_current (float)
    - indexing_index_failed (float)
    - indexing_index_time_in_millis (float)
    - indexing_index_total (float)
    - indexing_is_throttled (float)
    - indexing_noop_update_total (float)
    - indexing_throttle_time_in_millis (float)
    - merges_current (float)
    - merges_current_docs (float)
    - merges_current_size_in_bytes (float)
    - merges_total (float)
    - merges_total_auto_throttle_in_bytes (float)
    - merges_total_docs (float)
    - merges_total_size_in_bytes (float)
    - merges_total_stopped_time_in_millis (float)
    - merges_total_throttled_time_in_millis (float)
    - merges_total_time_in_millis (float)
    - query_cache_cache_count (float)
    - query_cache_cache_size (float)
    - query_cache_evictions (float)
    - query_cache_hit_count (float)
    - query_cache_memory_size_in_bytes (float)
    - query_cache_miss_count (float)
    - query_cache_total_count (float)
    - recovery_current_as_source (float)
    - recovery_current_as_target (float)
    - recovery_throttle_time_in_millis (float)
    - refresh_external_total (float)
    - refresh_external_total_time_in_millis (float)
    - refresh_listeners (float)
    - refresh_total (float)
    - refresh_total_time_in_millis (float)
    - request_cache_evictions (float)
    - request_cache_hit_count (float)
    - request_cache_memory_size_in_bytes (float)
    - request_cache_miss_count (float)
    - search_fetch_current (float)
    - search_fetch_time_in_millis (float)
    - search_fetch_total (float)
    - search_open_contexts (float)
    - search_query_current (float)
    - search_query_time_in_millis (float)
    - search_query_total (float)
    - search_scroll_current (float)
    - search_scroll_time_in_millis (float)
    - search_scroll_total (float)
    - search_suggest_current (float)
    - search_suggest_time_in_millis (float)
    - search_suggest_total (float)
    - segments_count (float)
    - segments_doc_values_memory_in_bytes (float)
    - segments_fixed_bit_set_memory_in_bytes (float)
    - segments_index_writer_memory_in_bytes (float)
    - segments_max_unsafe_auto_id_timestamp (float)
    - segments_memory_in_bytes (float)
    - segments_norms_memory_in_bytes (float)
    - segments_points_memory_in_bytes (float)
    - segments_stored_fields_memory_in_bytes (float)
    - segments_term_vectors_memory_in_bytes (float)
    - segments_terms_memory_in_bytes (float)
    - segments_version_map_memory_in_bytes (float)
    - store_size_in_bytes (float)
    - translog_earliest_last_modified_age (float)
    - translog_operations (float)
    - translog_size_in_bytes (float)
    - translog_uncommitted_operations (float)
    - translog_uncommitted_size_in_bytes (float)
    - warmer_current (float)
    - warmer_total (float)
    - warmer_total_time_in_millis (float)

Emitted when the appropriate `shards_stats` options are set:

- elasticsearch_indices_stats_shards_total
  - fields:
    - failed (float)
    - successful (float)
    - total (float)

- elasticsearch_indices_stats_shards
  - tags:
    - index_name
    - node_name
    - shard_name
    - type
  - fields:
    - commit_generation (float)
    - commit_num_docs (float)
    - completion_size_in_bytes (float)
    - docs_count (float)
    - docs_deleted (float)
    - fielddata_evictions (float)
    - fielddata_memory_size_in_bytes (float)
    - flush_periodic (float)
    - flush_total (float)
    - flush_total_time_in_millis (float)
    - get_current (float)
    - get_exists_time_in_millis (float)
    - get_exists_total (float)
    - get_missing_time_in_millis (float)
    - get_missing_total (float)
    - get_time_in_millis (float)
    - get_total (float)
    - indexing_delete_current (float)
    - indexing_delete_time_in_millis (float)
    - indexing_delete_total (float)
    - indexing_index_current (float)
    - indexing_index_failed (float)
    - indexing_index_time_in_millis (float)
    - indexing_index_total (float)
    - indexing_is_throttled (bool)
    - indexing_noop_update_total (float)
    - indexing_throttle_time_in_millis (float)
    - merges_current (float)
    - merges_current_docs (float)
    - merges_current_size_in_bytes (float)
    - merges_total (float)
    - merges_total_auto_throttle_in_bytes (float)
    - merges_total_docs (float)
    - merges_total_size_in_bytes (float)
    - merges_total_stopped_time_in_millis (float)
    - merges_total_throttled_time_in_millis (float)
    - merges_total_time_in_millis (float)
    - query_cache_cache_count (float)
    - query_cache_cache_size (float)
    - query_cache_evictions (float)
    - query_cache_hit_count (float)
    - query_cache_memory_size_in_bytes (float)
    - query_cache_miss_count (float)
    - query_cache_total_count (float)
    - recovery_current_as_source (float)
    - recovery_current_as_target (float)
    - recovery_throttle_time_in_millis (float)
    - refresh_external_total (float)
    - refresh_external_total_time_in_millis (float)
    - refresh_listeners (float)
    - refresh_total (float)
    - refresh_total_time_in_millis (float)
    - request_cache_evictions (float)
    - request_cache_hit_count (float)
    - request_cache_memory_size_in_bytes (float)
    - request_cache_miss_count (float)
    - retention_leases_primary_term (float)
    - retention_leases_version (float)
    - routing_state (int)
      (UNASSIGNED = 1, INITIALIZING = 2, STARTED = 3, RELOCATING = 4, other = 0)
    - search_fetch_current (float)
    - search_fetch_time_in_millis (float)
    - search_fetch_total (float)
    - search_open_contexts (float)
    - search_query_current (float)
    - search_query_time_in_millis (float)
    - search_query_total (float)
    - search_scroll_current (float)
    - search_scroll_time_in_millis (float)
    - search_scroll_total (float)
    - search_suggest_current (float)
    - search_suggest_time_in_millis (float)
    - search_suggest_total (float)
    - segments_count (float)
    - segments_doc_values_memory_in_bytes (float)
    - segments_fixed_bit_set_memory_in_bytes (float)
    - segments_index_writer_memory_in_bytes (float)
    - segments_max_unsafe_auto_id_timestamp (float)
    - segments_memory_in_bytes (float)
    - segments_norms_memory_in_bytes (float)
    - segments_points_memory_in_bytes (float)
    - segments_stored_fields_memory_in_bytes (float)
    - segments_term_vectors_memory_in_bytes (float)
    - segments_terms_memory_in_bytes (float)
    - segments_version_map_memory_in_bytes (float)
    - seq_no_global_checkpoint (float)
    - seq_no_local_checkpoint (float)
    - seq_no_max_seq_no (float)
    - shard_path_is_custom_data_path (bool)
    - store_size_in_bytes (float)
    - translog_earliest_last_modified_age (float)
    - translog_operations (float)
    - translog_size_in_bytes (float)
    - translog_uncommitted_operations (float)
    - translog_uncommitted_size_in_bytes (float)
    - warmer_current (float)
    - warmer_total (float)
    - warmer_total_time_in_millis (float)

## Example Output

---
description: "Telegraf plugin for collecting metrics from Elasticsearch Query"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Elasticsearch Query
    identifier: input-elasticsearch_query
tags: [Elasticsearch Query, "input-plugins", "configuration", "datastore"]
introduced: "v1.20.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/elasticsearch_query/README.md, Elasticsearch Query Plugin Source
---

# Elasticsearch Query Input Plugin

This plugin queries an [Elasticsearch](https://www.elastic.co/) instance to
obtain metrics from data stored in the cluster. The plugin supports counting
the number of hits for a search query, calculating statistics for numeric
fields (filtered by a query and aggregated per tag), and counting the number
of terms for a particular field.

> [!IMPORTANT]
> This plugin supports Elasticsearch 5.x and 6.x but is known to break on 7.x
> or higher.

**Introduced in:** Telegraf v1.20.0
**Tags:** datastore
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->
|
||||
|
||||
In addition to the plugin-specific configuration settings, plugins support
|
||||
additional global and plugin configuration settings. These settings are used to
|
||||
modify metrics, tags, and field or create aliases and configure ordering, etc.
|
||||
See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
|
||||
|
||||
[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
|
||||
|
||||
## Configuration
|
||||
|
||||
```toml @sample.conf
|
||||
# Derive metrics from aggregating Elasticsearch query results
|
||||
[[inputs.elasticsearch_query]]
|
||||
## The full HTTP endpoint URL for your Elasticsearch instance
|
||||
## Multiple urls can be specified as part of the same cluster,
|
||||
## this means that only ONE of the urls will be written to each interval.
|
||||
urls = [ "http://node1.es.example.com:9200" ] # required.
|
||||
|
||||
## Elasticsearch client timeout, defaults to "5s".
|
||||
# timeout = "5s"
|
||||
|
||||
## Set to true to ask Elasticsearch a list of all cluster nodes,
|
||||
## thus it is not necessary to list all nodes in the urls config option
|
||||
# enable_sniffer = false
|
||||
|
||||
## Set the interval to check if the Elasticsearch nodes are available
|
||||
## This option is only used if enable_sniffer is also set (0s to disable it)
|
||||
# health_check_interval = "10s"
|
||||
|
||||
## HTTP basic authentication details (eg. when using x-pack)
|
||||
# username = "telegraf"
|
||||
# password = "mypassword"
|
||||
|
||||
## Optional TLS Config
|
||||
# tls_ca = "/etc/telegraf/ca.pem"
|
||||
# tls_cert = "/etc/telegraf/cert.pem"
|
||||
# tls_key = "/etc/telegraf/key.pem"
|
||||
## Use TLS but skip chain & host verification
|
||||
# insecure_skip_verify = false
|
||||
|
||||
## If 'use_system_proxy' is set to true, Telegraf will check env vars such as
|
||||
## HTTP_PROXY, HTTPS_PROXY, and NO_PROXY (or their lowercase counterparts).
|
||||
## If 'use_system_proxy' is set to false (default) and 'http_proxy_url' is
|
||||
## provided, Telegraf will use the specified URL as HTTP proxy.
|
||||
# use_system_proxy = false
|
||||
# http_proxy_url = "http://localhost:8888"
|
||||
|
||||
[[inputs.elasticsearch_query.aggregation]]
|
||||
## measurement name for the results of the aggregation query
|
||||
measurement_name = "measurement"
|
||||
|
||||
## Elasticsearch indexes to query (accept wildcards).
|
||||
index = "index-*"
|
||||
|
||||
## The date/time field in the Elasticsearch index (mandatory).
|
||||
date_field = "@timestamp"
|
||||
|
||||
## If the field used for the date/time field in Elasticsearch is also using
|
||||
## a custom date/time format it may be required to provide the format to
|
||||
## correctly parse the field.
|
||||
##
|
||||
## If using one of the built in elasticsearch formats this is not required.
|
||||
# date_field_custom_format = ""
|
||||
|
||||
## Time window to query (eg. "1m" to query documents from last minute).
|
||||
## Normally should be set to same as collection interval
|
||||
query_period = "1m"
|
||||
|
||||
## Lucene query to filter results
|
||||
# filter_query = "*"
|
||||
|
||||
## Fields to aggregate values (must be numeric fields)
|
||||
# metric_fields = ["metric"]
|
||||
|
||||
## Aggregation function to use on the metric fields
|
||||
## Must be set if 'metric_fields' is set
|
||||
## Valid values are: avg, sum, min, max, sum
|
||||
# metric_function = "avg"
|
||||
|
||||
## Fields to be used as tags
|
||||
## Must be text, non-analyzed fields. Metric aggregations are performed
|
||||
## per tag
|
||||
# tags = ["field.keyword", "field2.keyword"]
|
||||
|
||||
## Set to true to not ignore documents when the tag(s) above are missing
|
||||
# include_missing_tag = false
|
||||
|
||||
## String value of the tag when the tag does not exist
|
||||
## Used when include_missing_tag is true
|
||||
# missing_tag_value = "null"
|
||||
```
|
||||
|
||||
## Examples
|
||||
|
||||
Please note that the `[[inputs.elasticsearch_query]]` is still required for all
|
||||
of the examples below.
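
For instance, a complete configuration combining the parent table with one of the aggregation examples below might look like the following sketch (the endpoint URL is a placeholder):

```toml
[[inputs.elasticsearch_query]]
  ## Placeholder endpoint; replace with your Elasticsearch instance
  urls = ["http://localhost:9200"]

  [[inputs.elasticsearch_query.aggregation]]
    measurement_name = "http_logs"
    index = "my-index-*"
    date_field = "@timestamp"
    query_period = "1m"
```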

### Search the average response time, per URI and per response status code

```toml
[[inputs.elasticsearch_query.aggregation]]
  measurement_name = "http_logs"
  index = "my-index-*"
  filter_query = "*"
  metric_fields = ["response_time"]
  metric_function = "avg"
  tags = ["URI.keyword", "response.keyword"]
  include_missing_tag = true
  missing_tag_value = "null"
  date_field = "@timestamp"
  query_period = "1m"
```

### Search the maximum response time per method and per URI

```toml
[[inputs.elasticsearch_query.aggregation]]
  measurement_name = "http_logs"
  index = "my-index-*"
  filter_query = "*"
  metric_fields = ["response_time"]
  metric_function = "max"
  tags = ["method.keyword", "URI.keyword"]
  include_missing_tag = false
  missing_tag_value = "null"
  date_field = "@timestamp"
  query_period = "1m"
```

### Search number of documents matching a filter query in all indices

```toml
[[inputs.elasticsearch_query.aggregation]]
  measurement_name = "http_logs"
  index = "*"
  filter_query = "product_1 AND HEAD"
  query_period = "1m"
  date_field = "@timestamp"
```

### Search number of documents matching a filter query, returning per response status code

```toml
[[inputs.elasticsearch_query.aggregation]]
  measurement_name = "http_logs"
  index = "*"
  filter_query = "downloads"
  tags = ["response.keyword"]
  include_missing_tag = false
  date_field = "@timestamp"
  query_period = "1m"
```

### Required parameters

- `measurement_name`: The target measurement in which to store the results of
  the aggregation query.
- `index`: The index name to query on Elasticsearch
- `query_period`: The time window to query (eg. "1m" to query documents from
  the last minute). Normally this should be set to the same value as the
  collection interval.
- `date_field`: The date/time field in the Elasticsearch index

### Optional parameters

- `date_field_custom_format`: Not needed if using one of the built-in date/time
  formats of Elasticsearch, but may be required if using a custom date/time
  format. The format syntax uses the [Joda date format](https://www.elastic.co/guide/en/elasticsearch/reference/6.8/search-aggregations-bucket-daterange-aggregation.html#date-format-pattern).
- `filter_query`: Lucene query to filter the results (default: "\*")
- `metric_fields`: The list of fields to perform metric aggregation (these must
  be indexed as numeric fields)
- `metric_function`: The single-value metric aggregation function to be performed
  on the `metric_fields` defined. Currently supported aggregations are "avg",
  "min", "max", "sum" (see the [aggregation docs](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics.html)).
- `tags`: The list of fields to be used as tags (these must be indexed as
  non-analyzed fields). A "terms aggregation" will be done per tag defined
- `include_missing_tag`: Set to true to not ignore documents where the tag(s)
  specified above do not exist. (If false, documents without the specified tag
  field will be ignored in `doc_count` and in the metric aggregation)
- `missing_tag_value`: The value of the tag that will be set for documents in
  which the tag field does not exist. Only used when `include_missing_tag` is
  set to `true`.

## Metrics

## Example Output

---
description: "Telegraf plugin for collecting metrics from Ethtool"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Ethtool
    identifier: input-ethtool
tags: [Ethtool, "input-plugins", "configuration", "network", "system"]
introduced: "v1.13.0"
os_support: "linux"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/ethtool/README.md, Ethtool Plugin Source
---

# Ethtool Input Plugin

This plugin collects ethernet device statistics. The available information
strongly depends on the network device and driver.

**Introduced in:** Telegraf v1.13.0
**Tags:** system, network
**OS support:** linux

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Returns ethtool statistics for given interfaces
# This plugin ONLY supports Linux
[[inputs.ethtool]]
  ## List of interfaces to pull metrics for
  # interface_include = ["eth0"]

  ## List of interfaces to ignore when pulling metrics.
  # interface_exclude = ["eth1"]

  ## Plugin behavior for downed interfaces
  ## Available choices:
  ##   - expose: collect & report metrics for down interfaces
  ##   - skip: ignore interfaces that are marked down
  # down_interfaces = "expose"

  ## Reading statistics from interfaces in additional namespaces is also
  ## supported, so long as the namespaces are named (have a symlink in
  ## /var/run/netns). The telegraf process will also need the CAP_SYS_ADMIN
  ## permission.
  ## By default, only the current namespace will be used. For additional
  ## namespace support, at least one of `namespace_include` and
  ## `namespace_exclude` must be provided.
  ## To include all namespaces, set `namespace_include` to `["*"]`.
  ## The initial namespace (if anonymous) can be specified with the empty
  ## string ("").

  ## List of namespaces to pull metrics for
  # namespace_include = []

  ## List of namespaces to ignore when pulling metrics.
  # namespace_exclude = []

  ## Some drivers declare statistics with extra whitespace, different spacing,
  ## and mixed cases. This list, when enabled, can be used to clean the keys.
  ## Here are the current possible normalizations:
  ##  * snakecase: converts fooBarBaz to foo_bar_baz
  ##  * trim: removes leading and trailing whitespace
  ##  * lower: changes all capitalized letters to lowercase
  ##  * underscore: replaces spaces with underscores
  # normalize_keys = ["snakecase", "trim", "lower", "underscore"]
```

Interfaces can be included or ignored using:

- `interface_include`
- `interface_exclude`

Note that loopback interfaces will be automatically ignored.
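
For example, to collect statistics only from `eth0` while explicitly excluding another interface (the interface names here are illustrative), the options can be combined as follows:

```toml
[[inputs.ethtool]]
  ## Collect only eth0; eth1 is excluded even if it matches an include pattern
  interface_include = ["eth0"]
  interface_exclude = ["eth1"]
```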

## Namespaces

Metrics from interfaces in additional namespaces will be retrieved if either
`namespace_include` or `namespace_exclude` is configured (to a non-empty list).
This requires `CAP_SYS_ADMIN` permissions to switch namespaces, which can be
granted to Telegraf in several ways. The two recommended ways are listed below:

### Using systemd capabilities

If you are using systemd to run Telegraf, you may run
`systemctl edit telegraf.service` and add the following:

```text
[Service]
AmbientCapabilities=CAP_SYS_ADMIN
```

### Configuring executable capabilities

If you are not using systemd to run Telegraf, you can configure the Telegraf
executable to have `CAP_SYS_ADMIN` when run.

```sh
sudo setcap CAP_SYS_ADMIN+epi $(which telegraf)
```

N.B.: This capability is a filesystem attribute on the binary itself. The
attribute needs to be re-applied if the Telegraf binary is rotated (e.g. on
installation of a new Telegraf version from the system package manager).

## Metrics

Metrics are dependent on the network device and driver.

## Example Output

```text
ethtool,driver=igb,host=test01,interface=mgmt0 tx_queue_1_packets=280782i,rx_queue_5_csum_err=0i,tx_queue_4_restart=0i,tx_multicast=7i,tx_queue_1_bytes=39674885i,rx_queue_2_alloc_failed=0i,tx_queue_5_packets=173970i,tx_single_coll_ok=0i,rx_queue_1_drops=0i,tx_queue_2_restart=0i,tx_aborted_errors=0i,rx_queue_6_csum_err=0i,tx_queue_5_restart=0i,tx_queue_4_bytes=64810835i,tx_abort_late_coll=0i,tx_queue_4_packets=109102i,os2bmc_tx_by_bmc=0i,tx_bytes=427527435i,tx_queue_7_packets=66665i,dropped_smbus=0i,rx_queue_0_csum_err=0i,tx_flow_control_xoff=0i,rx_packets=25926536i,rx_queue_7_csum_err=0i,rx_queue_3_bytes=84326060i,rx_multicast=83771i,rx_queue_4_alloc_failed=0i,rx_queue_3_drops=0i,rx_queue_3_csum_err=0i,rx_errors=0i,tx_errors=0i,tx_queue_6_packets=183236i,rx_broadcast=24378893i,rx_queue_7_packets=88680i,tx_dropped=0i,rx_frame_errors=0i,tx_queue_3_packets=161045i,tx_packets=1257017i,rx_queue_1_csum_err=0i,tx_window_errors=0i,tx_dma_out_of_sync=0i,rx_length_errors=0i,rx_queue_5_drops=0i,tx_timeout_count=0i,rx_queue_4_csum_err=0i,rx_flow_control_xon=0i,tx_heartbeat_errors=0i,tx_flow_control_xon=0i,collisions=0i,tx_queue_0_bytes=29465801i,rx_queue_6_drops=0i,rx_queue_0_alloc_failed=0i,tx_queue_1_restart=0i,rx_queue_0_drops=0i,tx_broadcast=9i,tx_carrier_errors=0i,tx_queue_7_bytes=13777515i,tx_queue_7_restart=0i,rx_queue_5_bytes=50732006i,rx_queue_7_bytes=35744457i,tx_deferred_ok=0i,tx_multi_coll_ok=0i,rx_crc_errors=0i,rx_fifo_errors=0i,rx_queue_6_alloc_failed=0i,tx_queue_2_packets=175206i,tx_queue_0_packets=107011i,rx_queue_4_bytes=201364548i,rx_queue_6_packets=372573i,os2bmc_rx_by_host=0i,multicast=83771i,rx_queue_4_drops=0i,rx_queue_5_packets=130535i,rx_queue_6_bytes=139488035i,tx_fifo_errors=0i,tx_queue_5_bytes=84899130i,rx_queue_0_packets=24529563i,rx_queue_3_alloc_failed=0i,rx_queue_7_drops=0i,tx_queue_6_bytes=96288614i,tx_queue_2_bytes=22132949i,tx_tcp_seg_failed=0i,rx_queue_1_bytes=246703840i,rx_queue_0_bytes=1506870738i,tx_queue_0_restart=0i,rx_queue_2_bytes=111344804i,tx_tcp_seg_good=0i,tx_queue_3_restart=0i,rx_no_buffer_count=0i,rx_smbus=0i,rx_queue_1_packets=273865i,rx_over_errors=0i,os2bmc_tx_by_host=0i,rx_queue_1_alloc_failed=0i,rx_queue_7_alloc_failed=0i,rx_short_length_errors=0i,tx_hwtstamp_timeouts=0i,tx_queue_6_restart=0i,rx_queue_2_packets=207136i,tx_queue_3_bytes=70391970i,rx_queue_3_packets=112007i,rx_queue_4_packets=212177i,tx_smbus=0i,rx_long_byte_count=2480280632i,rx_queue_2_csum_err=0i,rx_missed_errors=0i,rx_bytes=2480280632i,rx_queue_5_alloc_failed=0i,rx_queue_2_drops=0i,os2bmc_rx_by_bmc=0i,rx_align_errors=0i,rx_long_length_errors=0i,interface_up=1i,rx_hwtstamp_cleared=0i,rx_flow_control_xoff=0i,speed=1000i,link=1i,duplex=1i,autoneg=1i 1564658080000000000
ethtool,driver=igb,host=test02,interface=mgmt0 rx_queue_2_bytes=111344804i,tx_queue_3_bytes=70439858i,multicast=83771i,rx_broadcast=24378975i,tx_queue_0_packets=107011i,rx_queue_6_alloc_failed=0i,rx_queue_6_drops=0i,rx_hwtstamp_cleared=0i,tx_window_errors=0i,tx_tcp_seg_good=0i,rx_queue_1_drops=0i,tx_queue_1_restart=0i,rx_queue_7_csum_err=0i,rx_no_buffer_count=0i,tx_queue_1_bytes=39675245i,tx_queue_5_bytes=84899130i,tx_broadcast=9i,rx_queue_1_csum_err=0i,tx_flow_control_xoff=0i,rx_queue_6_csum_err=0i,tx_timeout_count=0i,os2bmc_tx_by_bmc=0i,rx_queue_6_packets=372577i,rx_queue_0_alloc_failed=0i,tx_flow_control_xon=0i,rx_queue_2_drops=0i,tx_queue_2_packets=175206i,rx_queue_3_csum_err=0i,tx_abort_late_coll=0i,tx_queue_5_restart=0i,tx_dropped=0i,rx_queue_2_alloc_failed=0i,tx_multi_coll_ok=0i,rx_queue_1_packets=273865i,rx_flow_control_xon=0i,tx_single_coll_ok=0i,rx_length_errors=0i,rx_queue_7_bytes=35744457i,rx_queue_4_alloc_failed=0i,rx_queue_6_bytes=139488395i,rx_queue_2_csum_err=0i,rx_long_byte_count=2480288216i,rx_queue_1_alloc_failed=0i,tx_queue_0_restart=0i,rx_queue_0_csum_err=0i,tx_queue_2_bytes=22132949i,rx_queue_5_drops=0i,tx_dma_out_of_sync=0i,rx_queue_3_drops=0i,rx_queue_4_packets=212177i,tx_queue_6_restart=0i,rx_packets=25926650i,rx_queue_7_packets=88680i,rx_frame_errors=0i,rx_queue_3_bytes=84326060i,rx_short_length_errors=0i,tx_queue_7_bytes=13777515i,rx_queue_3_alloc_failed=0i,tx_queue_6_packets=183236i,rx_queue_0_drops=0i,rx_multicast=83771i,rx_queue_2_packets=207136i,rx_queue_5_csum_err=0i,rx_queue_5_packets=130535i,rx_queue_7_alloc_failed=0i,tx_smbus=0i,tx_queue_3_packets=161081i,rx_queue_7_drops=0i,tx_queue_2_restart=0i,tx_multicast=7i,tx_fifo_errors=0i,tx_queue_3_restart=0i,rx_long_length_errors=0i,tx_queue_6_bytes=96288614i,tx_queue_1_packets=280786i,tx_tcp_seg_failed=0i,rx_align_errors=0i,tx_errors=0i,rx_crc_errors=0i,rx_queue_0_packets=24529673i,rx_flow_control_xoff=0i,tx_queue_0_bytes=29465801i,rx_over_errors=0i,rx_queue_4_drops=0i,os2bmc_rx_by_bmc=0i,rx_smbus=0i,dropped_smbus=0i,tx_hwtstamp_timeouts=0i,rx_errors=0i,tx_queue_4_packets=109102i,tx_carrier_errors=0i,tx_queue_4_bytes=64810835i,tx_queue_4_restart=0i,rx_queue_4_csum_err=0i,tx_queue_7_packets=66665i,tx_aborted_errors=0i,rx_missed_errors=0i,tx_bytes=427575843i,collisions=0i,rx_queue_1_bytes=246703840i,rx_queue_5_bytes=50732006i,rx_bytes=2480288216i,os2bmc_rx_by_host=0i,rx_queue_5_alloc_failed=0i,rx_queue_3_packets=112007i,tx_deferred_ok=0i,os2bmc_tx_by_host=0i,tx_heartbeat_errors=0i,rx_queue_0_bytes=1506877506i,tx_queue_7_restart=0i,tx_packets=1257057i,rx_queue_4_bytes=201364548i,interface_up=0i,rx_fifo_errors=0i,tx_queue_5_packets=173970i,speed=1000i,link=1i,duplex=1i,autoneg=1i 1564658090000000000
```

---
description: "Telegraf plugin for collecting metrics from Azure Event Hub Consumer"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Azure Event Hub Consumer
    identifier: input-eventhub_consumer
tags: [Azure Event Hub Consumer, "input-plugins", "configuration", "iot", "messaging"]
introduced: "v1.14.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/eventhub_consumer/README.md, Azure Event Hub Consumer Plugin Source
---

# Azure Event Hub Consumer Input Plugin

This plugin consumes messages from [Azure Event Hubs](https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-about) and
[Azure IoT Hub](https://azure.microsoft.com/en-us/products/iot-hub) instances.

**Introduced in:** Telegraf v1.14.0
**Tags:** iot, messaging
**OS support:** all

## IoT Hub Setup

The main focus for development of this plugin is Azure IoT Hub:

1. Create an Azure IoT Hub by following any of the guides provided here: [Azure
   IoT Hub](https://docs.microsoft.com/en-us/azure/iot-hub/)
2. Create a device, for example a [simulated Raspberry
   Pi](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-raspberry-pi-web-simulator-get-started)
3. The connection string needed for the plugin is located under *Shared access
   policies*; both the *iothubowner* and *service* policies should work
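
With the connection string in hand, a minimal configuration sketch might look like the following (the `connection_string` value is a placeholder; the real value is copied from the *Shared access policies* page and, for an IoT Hub, must include an `EntityPath`):

```toml
[[inputs.eventhub_consumer]]
  ## Placeholder value; replace with the connection string from your hub
  connection_string = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=service;SharedAccessKey=<key>;EntityPath=<name>"
  data_format = "influx"
```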

## Service Input <!-- @/docs/includes/service_input.md -->

This plugin is a service input. Normal plugins gather metrics determined by the
interval setting. Service plugins start a service to listen and wait for
metrics or events to occur. Service plugins have two key differences from
normal plugins:

1. The global or plugin specific `interval` setting may not apply
2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
   output for this plugin

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Azure Event Hubs service input plugin
[[inputs.eventhub_consumer]]
  ## The default behavior is to create a new Event Hub client from environment variables.
  ## This requires one of the following sets of environment variables to be set:
  ##
  ## 1) Expected Environment Variables:
  ##    - "EVENTHUB_CONNECTION_STRING"
  ##
  ## 2) Expected Environment Variables:
  ##    - "EVENTHUB_NAMESPACE"
  ##    - "EVENTHUB_NAME"
  ##    - "EVENTHUB_KEY_NAME"
  ##    - "EVENTHUB_KEY_VALUE"
  ##
  ## 3) Expected Environment Variables:
  ##    - "EVENTHUB_NAMESPACE"
  ##    - "EVENTHUB_NAME"
  ##    - "AZURE_TENANT_ID"
  ##    - "AZURE_CLIENT_ID"
  ##    - "AZURE_CLIENT_SECRET"

  ## Uncommenting the option below will create an Event Hub client based solely on the connection string.
  ## This can either be the associated environment variable or hard coded directly.
  ## If this option is uncommented, environment variables will be ignored.
  ## Connection string should contain EventHubName (EntityPath)
  # connection_string = ""

  ## Set persistence directory to a valid folder to use a file persister instead of an in-memory persister
  # persistence_dir = ""

  ## Change the default consumer group
  # consumer_group = ""

  ## By default the event hub receives all messages present on the broker, alternative modes can be set below.
  ## The timestamp should be in https://github.com/toml-lang/toml#offset-date-time format (RFC 3339).
  ## The 2 options below only apply if no valid persister is read from memory or file (e.g. first run).
  # from_timestamp =
  # latest = true

  ## Set a custom prefetch count for the receiver(s)
  # prefetch_count = 1000

  ## Add an epoch to the receiver(s)
  # epoch = 0

  ## Change to set a custom user agent, "telegraf" is used by default
  # user_agent = "telegraf"

  ## To consume from a specific partition, set the partition_ids option.
  ## An empty array will result in receiving from all partitions.
  # partition_ids = ["0","1"]

  ## Max undelivered messages
  ## This plugin uses tracking metrics, which ensure messages are read to
  ## outputs before acknowledging them to the original broker to ensure data
  ## is not lost. This option sets the maximum messages to read from the
  ## broker that have not been written by an output.
  ##
  ## This value needs to be picked with awareness of the agent's
  ## metric_batch_size value as well. Setting max undelivered messages too high
  ## can result in a constant stream of data batches to the output. While
  ## setting it too low may never flush the broker's messages.
  # max_undelivered_messages = 1000

  ## Set either option below to true to use a system property as timestamp.
  ## You have the choice between EnqueuedTime and IoTHubEnqueuedTime.
  ## It is recommended to use this setting when the data itself has no timestamp.
  # enqueued_time_as_ts = true
  # iot_hub_enqueued_time_as_ts = true

  ## Tags or fields to create from keys present in the application property bag.
  ## These could for example be set by message enrichments in Azure IoT Hub.
  # application_property_tags = []
  # application_property_fields = []

  ## Tag or field name to use for metadata
  ## By default all metadata is disabled
  # sequence_number_field = "SequenceNumber"
  # enqueued_time_field = "EnqueuedTime"
  # offset_field = "Offset"
  # partition_id_tag = "PartitionID"
  # partition_key_tag = "PartitionKey"
  # iot_hub_device_connection_id_tag = "IoTHubDeviceConnectionID"
  # iot_hub_auth_generation_id_tag = "IoTHubAuthGenerationID"
  # iot_hub_connection_auth_method_tag = "IoTHubConnectionAuthMethod"
  # iot_hub_connection_module_id_tag = "IoTHubConnectionModuleID"
  # iot_hub_enqueued_time_field = "IoTHubEnqueuedTime"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"
```

### Environment Variables

See the [full documentation of the available environment variables](https://github.com/Azure/azure-event-hubs-go#environment-variables).

## Metrics

## Example Output

---
description: "Telegraf plugin for collecting metrics from Exec"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Exec
    identifier: input-exec
tags: [Exec, "input-plugins", "configuration", "system"]
introduced: "v0.1.5"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/exec/README.md, Exec Plugin Source
---

# Exec Input Plugin

This plugin executes the given `commands` on every interval and parses metrics
from their output in any one of the supported [data formats](/telegraf/v1/data_formats/input).
This plugin can be used to poll for custom metrics from any source.

**Introduced in:** Telegraf v0.1.5
**Tags:** system
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Read metrics from one or more commands that can output to stdout
[[inputs.exec]]
  ## Commands array
  commands = []

  ## Environment variables
  ## Array of "key=value" pairs to pass as environment variables
  ## e.g. "KEY=value", "USERNAME=John Doe",
  ## "LD_LIBRARY_PATH=/opt/custom/lib64:/usr/local/libs"
  # environment = []

  ## Timeout for each command to complete.
  # timeout = "5s"

  ## Measurement name suffix
  ## Used for separating different commands
  # name_suffix = ""

  ## Ignore Error Code
  ## If set to true, a non-zero error code is not considered an error and the
  ## plugin will continue to parse the output.
  # ignore_error = false

  ## Data format
  ## By default, exec expects JSON. This was done for historical reasons and is
  ## different than other inputs that use the influx line protocol. Each data
  ## format has its own unique set of configuration options, read more about
  ## them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  # data_format = "json"
```

Glob patterns in the `commands` option are matched on every run, so adding new
scripts that match the pattern will cause them to be picked up immediately.
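
For example, a glob pattern can be used to run every matching script in a directory (the path shown is illustrative):

```toml
[[inputs.exec]]
  ## Every script matching the glob is executed each interval; scripts added
  ## to the directory later are picked up automatically on the next run.
  commands = ["/etc/telegraf/scripts/*.sh"]
  timeout = "5s"
  data_format = "influx"
```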

## Example

This script produces static values; since no timestamp is specified, the values
are recorded at the current time. Ensure that int values are followed with `i`
for proper parsing.

```sh
#!/bin/sh
echo 'example,tag1=a,tag2=b i=42i,j=43i,k=44i'
```

It can be paired with the following configuration and will be run at the
`interval` of the agent.

```toml
[[inputs.exec]]
  commands = ["sh /tmp/test.sh"]
  timeout = "5s"
  data_format = "influx"
```

## Common Issues

### My script works when I run it by hand, but not when Telegraf is running as a service

This may be related to the Telegraf service running as a different user. The
official packages run Telegraf as the `telegraf` user and group on Linux
systems.

### With PowerShell on Windows, the output of the script appears to be truncated

You may need to set a variable in your script to increase the number of columns
available for output:

```shell
$host.UI.RawUI.BufferSize = new-object System.Management.Automation.Host.Size(1024,50)
```

## Metrics

## Example Output

---
|
||||
description: "Telegraf plugin for collecting metrics from Execd"
|
||||
menu:
|
||||
telegraf_v1_ref:
|
||||
parent: input_plugins_reference
|
||||
name: Execd
|
||||
identifier: input-execd
|
||||
tags: [Execd, "input-plugins", "configuration", "system"]
|
||||
introduced: "v1.14.0"
|
||||
os_support: "freebsd, linux, macos, solaris, windows"
|
||||
related:
|
||||
- /telegraf/v1/configure_plugins/
|
||||
- https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/execd/README.md, Execd Plugin Source
|
||||
---
|
||||
|
||||
# Execd Input Plugin
|
||||
|
||||
This plugin runs the given external program as a long-running daemon and collects
|
||||
the metrics in one of the supported [data formats](/telegraf/v1/data_formats/input) on the
|
||||
process's `stdout`. The program is expected to stay running and output data
|
||||
when receiving the configured `signal`.
|
||||
|
||||
The `stderr` output of the process will be relayed to Telegraf's logging
|
||||
facilities and will be logged as _error_ by default. However, you can log to
|
||||
other levels by prefixing your message with `E!` for error, `W!` for warning,
|
||||
`I!` for info, `D!` for debugging and `T!` for trace levels followed by a space
|
||||
and the actual message. For example outputting `I! A log message` will create a
|
||||
`info` log line in your Telegraf logging output.
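
As a sketch of this convention (the metric and messages are illustrative, not
from a real plugin), a program can interleave metrics on `stdout` with
level-prefixed log lines on `stderr`:

```shell
#!/bin/sh
# Metrics go to stdout; diagnostics go to stderr with a level prefix that
# Telegraf maps to its own log levels.
echo "I! starting collection" >&2        # relayed as an info log line
echo "example,tag1=a value=42i"          # parsed as a metric
echo "W! sensor value may be stale" >&2  # relayed as a warning log line
```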
|
||||
|
||||
**Introduced in:** Telegraf v1.14.0
|
||||
**Tags:** system
|
||||
**OS support:** all
|
||||
|
||||
[data_formats]: /docs/DATA_FORMATS_INPUT.md
|
||||
|
||||
## Service Input <!-- @/docs/includes/service_input.md -->
|
||||
|
||||
This plugin is a service input. Normal plugins gather metrics determined by the
|
||||
interval setting. Service plugins start a service to listen and wait for
|
||||
metrics or events to occur. Service plugins have two key differences from
|
||||
normal plugins:
|
||||
|
||||
1. The global or plugin specific `interval` setting may not apply
|
||||
2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
|
||||
output for this plugin
|
||||
|
||||
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
|
||||
|
||||
In addition to the plugin-specific configuration settings, plugins support
|
||||
additional global and plugin configuration settings. These settings are used to
|
||||
modify metrics, tags, and field or create aliases and configure ordering, etc.
|
||||
See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
|
||||
|
||||
[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
|
||||
|
||||
## Configuration
|
||||
|
||||
```toml @sample.conf
|
||||
# Run executable as long-running input plugin
|
||||
[[inputs.execd]]
|
||||
## One program to run as daemon.
|
||||
## NOTE: process and each argument should each be their own string
|
||||
command = ["telegraf-smartctl", "-d", "/dev/sda"]
|
||||
|
||||
## Environment variables
|
||||
## Array of "key=value" pairs to pass as environment variables
|
||||
## e.g. "KEY=value", "USERNAME=John Doe",
|
||||
## "LD_LIBRARY_PATH=/opt/custom/lib64:/usr/local/libs"
|
||||
# environment = []
|
||||
|
||||
## Define how the process is signaled on each collection interval.
|
||||
## Valid values are:
|
||||
## "none" : Do not signal anything. (Recommended for service inputs)
|
||||
## The process must output metrics by itself.
|
||||
## "STDIN" : Send a newline on STDIN. (Recommended for gather inputs)
|
||||
## "SIGHUP" : Send a HUP signal. Not available on Windows. (not recommended)
|
||||
## "SIGUSR1" : Send a USR1 signal. Not available on Windows.
|
||||
## "SIGUSR2" : Send a USR2 signal. Not available on Windows.
|
||||
# signal = "none"
|
||||
|
||||
## Delay before the process is restarted after an unexpected termination
|
||||
# restart_delay = "10s"
|
||||
|
||||
## Buffer size used to read from the command output stream
|
||||
## Optional parameter. Default is 64 Kib, minimum is 16 bytes
|
||||
# buffer_size = "64Kib"
|
||||
|
||||
## Disable automatic restart of the program and stop if the program exits
|
||||
## with an error (i.e. non-zero error code)
|
||||
# stop_on_error = false
|
||||
|
||||
## Data format to consume.
|
||||
## Each data format has its own unique set of configuration options, read
|
||||
## more about them here:
|
||||
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
|
||||
# data_format = "influx"
|
||||
```
|
||||
|
||||
## Example
|
||||
|
||||
See the examples directory for basic examples in different languages expecting
|
||||
various signals from Telegraf:
|
||||
|
||||
- Go: Example expects `signal = "SIGHUP"`
|
||||
- Python: Example expects `signal = "none"`
|
||||
- Ruby: Example expects `signal = "none"`
|
||||
- shell: Example expects `signal = "STDIN"`
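
For instance, a minimal shell daemon matching the `signal = "STDIN"` pattern
could look like the following sketch (the metric name and field are
illustrative):

```shell
#!/bin/sh
# With signal = "STDIN", Telegraf writes a newline to this process's stdin on
# every collection interval; we answer each request with one metric in influx
# line protocol and keep running until stdin closes.
while read -r _; do
  echo "example_daemon,source=demo requests=1i"
done
```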
|
||||
|
||||
## Metrics
|
||||
|
||||
Varies depending on the users data.
|
||||
|
||||
## Example Output
|
||||
|
||||
Varies depending on the users data.
|
||||
|
|
@@ -0,0 +1,103 @@
---
description: "Telegraf plugin for collecting metrics from Fail2ban"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Fail2ban
    identifier: input-fail2ban
tags: [Fail2ban, "input-plugins", "configuration", "network", "system"]
introduced: "v1.4.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/fail2ban/README.md, Fail2ban Plugin Source
---

# Fail2ban Input Plugin

This plugin gathers the count of failed and banned IP addresses using
[fail2ban](https://www.fail2ban.org) by running the `fail2ban-client` command.

> [!NOTE]
> The `fail2ban-client` requires root access, so please make sure to either
> allow Telegraf to run that command using `sudo` without a password or to run
> Telegraf as root (not recommended).

**Introduced in:** Telegraf v1.4.0
**Tags:** network, system
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Read metrics from fail2ban.
[[inputs.fail2ban]]
  ## Use sudo to run fail2ban-client
  # use_sudo = false

  ## Use the given socket instead of the default one
  # socket = "/var/run/fail2ban/fail2ban.sock"
```

## Using sudo

Make sure to set `use_sudo = true` in your configuration file.

You will also need to update your sudoers file. It is recommended to modify a
file in the `/etc/sudoers.d` directory using `visudo`:

```bash
sudo visudo -f /etc/sudoers.d/telegraf
```

Add the following lines to the file. These commands allow the `telegraf` user
to call `fail2ban-client` without needing to provide a password and disable
logging of the call in `auth.log`. Consult `man 8 visudo` and `man 5 sudoers`
for details.

```text
Cmnd_Alias FAIL2BAN = /usr/bin/fail2ban-client status, /usr/bin/fail2ban-client status *
telegraf ALL=(root) NOEXEC: NOPASSWD: FAIL2BAN
Defaults!FAIL2BAN !logfile, !syslog, !pam_session
```

## Metrics

- fail2ban
  - tags:
    - jail
  - fields:
    - failed (integer, count)
    - banned (integer, count)

## Example Output

```text
fail2ban,jail=sshd failed=5i,banned=2i 1495868667000000000
```

### Execute the binary directly

```shell
# fail2ban-client status sshd
Status for the jail: sshd
|- Filter
|  |- Currently failed: 5
|  |- Total failed:     20
|  `- File list:        /var/log/secure
`- Actions
   |- Currently banned: 2
   |- Total banned:     10
   `- Banned IP list:   192.168.0.1 192.168.0.2
```

@@ -0,0 +1,93 @@
---
description: "Telegraf plugin for collecting metrics from Fibaro"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Fibaro
    identifier: input-fibaro
tags: [Fibaro, "input-plugins", "configuration", "iot"]
introduced: "v1.7.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/fibaro/README.md, Fibaro Plugin Source
---

# Fibaro Input Plugin

This plugin gathers data from devices connected to a
[Fibaro](https://www.fibaro.com) controller. Values can be true (1) or false
(0) for switches, a percentage for dimmers, a temperature, etc. Both
_Home Center 2_ and _Home Center 3_ devices are supported.

**Introduced in:** Telegraf v1.7.0
**Tags:** iot
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Read devices value(s) from a Fibaro controller
[[inputs.fibaro]]
  ## Required Fibaro controller address/hostname.
  ## Note: at the time of writing this plugin, Fibaro only implemented http - no https available
  url = "http://<controller>:80"

  ## Required credentials to access the API (http://<controller>/api/<component>)
  username = "<username>"
  password = "<password>"

  ## Amount of time allowed to complete the HTTP request
  # timeout = "5s"

  ## Fibaro Device Type
  ## By default, this plugin will attempt to read using the HC2 API. For HC3
  ## devices, set this to "HC3"
  # device_type = "HC2"
```

## Metrics

- fibaro
  - tags:
    - deviceId (device id)
    - section (section name)
    - room (room name)
    - name (device name)
    - type (device type)
  - fields:
    - batteryLevel (float, when available from device)
    - energy (float, when available from device)
    - power (float, when available from device)
    - value (float)
    - value2 (float, when available from device)

## Example Output

```text
fibaro,deviceId=9,host=vm1,name=Fenêtre\ haute,room=Cuisine,section=Cuisine,type=com.fibaro.FGRM222 energy=2.04,power=0.7,value=99,value2=99 1529996807000000000
fibaro,deviceId=10,host=vm1,name=Escaliers,room=Dégagement,section=Pièces\ communes,type=com.fibaro.binarySwitch value=0 1529996807000000000
fibaro,deviceId=13,host=vm1,name=Porte\ fenêtre,room=Salon,section=Pièces\ communes,type=com.fibaro.FGRM222 energy=4.33,power=0.7,value=99,value2=99 1529996807000000000
fibaro,deviceId=21,host=vm1,name=LED\ îlot\ central,room=Cuisine,section=Cuisine,type=com.fibaro.binarySwitch value=0 1529996807000000000
fibaro,deviceId=90,host=vm1,name=Détérioration,room=Entrée,section=Pièces\ communes,type=com.fibaro.heatDetector value=0 1529996807000000000
fibaro,deviceId=163,host=vm1,name=Température,room=Cave,section=Cave,type=com.fibaro.temperatureSensor value=21.62 1529996807000000000
fibaro,deviceId=191,host=vm1,name=Présence,room=Garde-manger,section=Cuisine,type=com.fibaro.FGMS001 value=1 1529996807000000000
fibaro,deviceId=193,host=vm1,name=Luminosité,room=Garde-manger,section=Cuisine,type=com.fibaro.lightSensor value=195 1529996807000000000
fibaro,deviceId=200,host=vm1,name=Etat,room=Garage,section=Extérieur,type=com.fibaro.doorSensor value=0 1529996807000000000
fibaro,deviceId=220,host=vm1,name=CO2\ (ppm),room=Salon,section=Pièces\ communes,type=com.fibaro.multilevelSensor value=536 1529996807000000000
fibaro,deviceId=221,host=vm1,name=Humidité\ (%),room=Salon,section=Pièces\ communes,type=com.fibaro.humiditySensor value=61 1529996807000000000
fibaro,deviceId=222,host=vm1,name=Pression\ (mb),room=Salon,section=Pièces\ communes,type=com.fibaro.multilevelSensor value=1013.7 1529996807000000000
fibaro,deviceId=223,host=vm1,name=Bruit\ (db),room=Salon,section=Pièces\ communes,type=com.fibaro.multilevelSensor value=44 1529996807000000000
fibaro,deviceId=248,host=vm1,name=Température,room=Garage,section=Extérieur,type=com.fibaro.temperatureSensor batteryLevel=85,value=10.8 1529996807000000000
```

@@ -0,0 +1,84 @@
---
description: "Telegraf plugin for collecting metrics from File"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: File
    identifier: input-file
tags: [File, "input-plugins", "configuration", "system"]
introduced: "v1.8.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/file/README.md, File Plugin Source
---

# File Input Plugin

This plugin reads the __complete__ contents of the configured files on
__every__ interval. The file content is split line-wise and parsed according to
one of the supported [data formats](/telegraf/v1/data_formats/input).

> [!TIP]
> If you wish to only process newly appended lines, use the
> [tail](/telegraf/v1/plugins/#input-tail) input plugin instead.

**Introduced in:** Telegraf v1.8.0
**Tags:** system
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Parse a complete file each interval
[[inputs.file]]
  ## Files to parse each interval. Accept standard unix glob matching rules,
  ## as well as ** to match recursive files and directories.
  files = ["/tmp/metrics.out"]

  ## Character encoding to use when interpreting the file contents. Invalid
  ## characters are replaced using the unicode replacement character. When set
  ## to the empty string the data is not decoded to text.
  ## ex: character_encoding = "utf-8"
  ##     character_encoding = "utf-16le"
  ##     character_encoding = "utf-16be"
  ##     character_encoding = ""
  # character_encoding = ""

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"

  ## Please use caution when using the following options: when file name
  ## variation is high, this can increase the cardinality significantly. Read
  ## more about cardinality here:
  ## https://docs.influxdata.com/influxdb/cloud/reference/glossary/#series-cardinality

  ## Name of tag to store the name of the file. Disabled if not set.
  # file_tag = ""

  ## Name of tag to store the absolute path and name of the file. Disabled if
  ## not set.
  # file_path_tag = ""
```

## Metrics

The format of metrics produced by this plugin depends on the content and data
format of the file.

## Example Output

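
To exercise the sample configuration above end to end, you can write an influx
line-protocol record to the path it names (`/tmp/metrics.out`; the metric
itself is illustrative) and confirm what the plugin would re-read in full on
each interval:

```shell
# Write a single line-protocol record to the file named in the sample config,
# then print it back, which is what the file input would parse each interval.
printf 'example,source=file value=42i\n' > /tmp/metrics.out
cat /tmp/metrics.out   # -> example,source=file value=42i
```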
@@ -0,0 +1,86 @@
---
description: "Telegraf plugin for collecting metrics from Filecount"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Filecount
    identifier: input-filecount
tags: [Filecount, "input-plugins", "configuration", "system"]
introduced: "v1.8.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/filecount/README.md, Filecount Plugin Source
---

# Filecount Input Plugin

This plugin reports the number and total size of files in specified directories.

**Introduced in:** Telegraf v1.8.0
**Tags:** system
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Count files in a directory
[[inputs.filecount]]
  ## Directories to gather stats about.
  ## This accepts standard unix glob matching rules, but with the addition of
  ## ** as a "super asterisk". For example:
  ##   /var/log/**  -> recursively find all directories in /var/log and count files in each directory
  ##   /var/log/*/* -> find all directories with a parent dir in /var/log and count files in each directory
  ##   /var/log     -> count all files in /var/log and all of its subdirectories
  directories = ["/var/cache/apt", "/tmp"]

  ## Only count files that match the name pattern. Defaults to "*".
  name = "*"

  ## Count files in subdirectories. Defaults to true.
  recursive = true

  ## Only count regular files. Defaults to true.
  regular_only = true

  ## Follow all symlinks while walking the directory tree. Defaults to false.
  follow_symlinks = false

  ## Only count files that are at least this size. If size is
  ## a negative number, only count files that are smaller than the
  ## absolute value of size. Acceptable units are B, KiB, MiB, KB, ...
  ## Without quotes and units, interpreted as size in bytes.
  size = "0B"

  ## Only count files that have not been touched for at least this
  ## duration. If mtime is negative, only count files that have been
  ## touched in this duration. Defaults to "0s".
  mtime = "0s"
```

## Metrics

- filecount
  - tags:
    - directory (the directory path)
  - fields:
    - count (integer)
    - size_bytes (integer)
    - oldest_file_timestamp (int, unix time nanoseconds)
    - newest_file_timestamp (int, unix time nanoseconds)

## Example Output

```text
filecount,directory=/var/cache/apt count=7i,size_bytes=7438336i,oldest_file_timestamp=1507152973123456789i,newest_file_timestamp=1507152973123456789i 1530034445000000000
filecount,directory=/tmp count=17i,size_bytes=28934786i,oldest_file_timestamp=1507152973123456789i,newest_file_timestamp=1507152973123456789i 1530034445000000000
```

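
As a rough shell analogue of the `count` field computed with the defaults above
(`recursive = true`, `regular_only = true`), assuming a POSIX shell with
`find(1)` available:

```shell
# Approximate filecount's "count" for a directory: regular files only,
# recursing into subdirectories. Directory and file names are illustrative.
dir=$(mktemp -d)
printf 'abc' > "$dir/a.log"
printf 'de'  > "$dir/b.log"
count=$(find "$dir" -type f | wc -l | tr -d ' ')
echo "count=$count"   # -> count=2
```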
@@ -0,0 +1,68 @@
---
description: "Telegraf plugin for collecting metrics from File statistics"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: File statistics
    identifier: input-filestat
tags: [File statistics, "input-plugins", "configuration", "system"]
introduced: "v0.13.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/filestat/README.md, File statistics Plugin Source
---

# File statistics Input Plugin

This plugin gathers metrics about file existence, size, and other file
statistics.

**Introduced in:** Telegraf v0.13.0
**Tags:** system
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Read stats about given file(s)
[[inputs.filestat]]
  ## Files to gather stats about.
  ## These accept standard unix glob matching rules, but with the addition of
  ## ** as a "super asterisk". See https://github.com/gobwas/glob.
  files = ["/etc/telegraf/telegraf.conf", "/var/log/**.log"]

  ## If true, read the entire file and calculate an md5 checksum.
  md5 = false
```

## Metrics

### Measurements & Fields

- filestat
  - exists (int, 0 | 1)
  - size_bytes (int, bytes)
  - modification_time (int, unix time nanoseconds)
  - md5 (optional, string)

### Tags

- All measurements have the following tags:
  - file (the path to the file, as specified in the config)

## Example Output

```text
filestat,file=/tmp/foo/bar,host=tyrion exists=0i 1507218518192154351
filestat,file=/Users/sparrc/ws/telegraf.conf,host=tyrion exists=1i,size=47894i,modification_time=1507152973123456789i 1507218518192154351
```

@@ -0,0 +1,95 @@
---
description: "Telegraf plugin for collecting metrics from Fireboard"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Fireboard
    identifier: input-fireboard
tags: [Fireboard, "input-plugins", "configuration", "iot"]
introduced: "v1.12.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/fireboard/README.md, Fireboard Plugin Source
---

# Fireboard Input Plugin

This plugin gathers real-time temperature data from
[fireboard](https://www.fireboard.com) thermometers.

> [!NOTE]
> You will need to sign up for the
> [Fireboard REST API](https://docs.fireboard.io/reference/restapi.html) in
> order to use this plugin.

**Introduced in:** Telegraf v1.12.0
**Tags:** iot
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Read real time temps from fireboard.io servers
[[inputs.fireboard]]
  ## Specify auth token for your account
  auth_token = "invalidAuthToken"
  ## You can override the fireboard server URL if necessary
  # url = "https://fireboard.io/api/v1/devices.json"
  ## You can set a different http_timeout if you need to
  ## You should set a string using a number and time indicator
  ## for example "12s" for 12 seconds.
  # http_timeout = "4s"
```

### auth_token

In lieu of requiring a username and password, this plugin requires an
authentication token that you can generate using the
[Fireboard REST API](https://docs.fireboard.io/reference/restapi.html#Authentication).

### url

While there should be no reason to override the URL, the option is available
in case Fireboard changes their site, etc.

### http_timeout

If you need to increase the HTTP timeout, you can do so here. You can set this
value in seconds. The default value is four (4) seconds.

## Metrics

The Fireboard REST API docs have good examples of the data that is available;
currently this input only returns the real-time temperatures. Temperature
values are included if they are less than a minute old.

- fireboard
  - tags:
    - channel
    - scale (Celsius; Fahrenheit)
    - title (name of the Fireboard)
    - uuid (UUID of the Fireboard)
  - fields:
    - temperature (float, unit)

## Example Output

This section shows example output in Line Protocol format. You can often use
`telegraf --input-filter <plugin-name> --test` or use the `file` output to get
this information.

```text
fireboard,channel=2,host=patas-mbp,scale=Fahrenheit,title=telegraf-FireBoard,uuid=b55e766c-b308-49b5-93a4-df89fe31efd0 temperature=78.2 1561690040000000000
```

@@ -0,0 +1,137 @@
---
description: "Telegraf plugin for collecting metrics from AWS Data Firehose"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: AWS Data Firehose
    identifier: input-firehose
tags: [AWS Data Firehose, "input-plugins", "configuration", "cloud", "messaging"]
introduced: "v1.34.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/firehose/README.md, AWS Data Firehose Plugin Source
---

# AWS Data Firehose Input Plugin

This plugin listens for metrics sent via HTTP from
[AWS Data Firehose](https://aws.amazon.com/de/firehose/) in one of the
supported [data formats](/telegraf/v1/data_formats/input). The plugin strictly
follows the request-response schema as described in the official
[documentation](https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html).

**Introduced in:** Telegraf v1.34.0
**Tags:** cloud, messaging
**OS support:** all

## Service Input <!-- @/docs/includes/service_input.md -->

This plugin is a service input. Normal plugins gather metrics determined by the
interval setting. Service plugins start a service to listen and wait for
metrics or events to occur. Service plugins have two key differences from
normal plugins:

1. The global or plugin-specific `interval` setting may not apply.
2. The CLI options `--test`, `--test-wait`, and `--once` may not produce
   output for this plugin.

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, or to create aliases and configure ordering.
See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# AWS Data Firehose listener
[[inputs.firehose]]
  ## Address and port to host HTTP listener on
  service_address = ":8080"

  ## Paths to listen to.
  # paths = ["/telegraf"]

  ## maximum duration before timing out read of the request
  # read_timeout = "5s"
  ## maximum duration before timing out write of the response
  # write_timeout = "5s"

  ## Set one or more allowed client CA certificate file names to
  ## enable mutually authenticated TLS connections
  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]

  ## Add service certificate and key
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"

  ## Minimal TLS version accepted by the server
  # tls_min_version = "TLS12"

  ## Optional access key to accept for authentication.
  ## AWS Data Firehose uses the "x-amz-firehose-access-key" header to set the access key.
  ## If no access_key is provided (default), authentication is completely disabled and
  ## this plugin will accept all requests, ignoring the provided access key in the request!
  # access_key = "foobar"

  ## Optional setting to add parameters as tags
  ## If the http header "x-amz-firehose-common-attributes" is not present on the
  ## request, no corresponding tag will be added. The header value should be
  ## JSON and should follow the schema as described in the official documentation:
  ## https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html#requestformat
  # parameter_tags = ["env"]

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  # data_format = "influx"
```

## Metrics

Metrics are collected from the `records.[*].data` field in the request body.
The data must be base64 encoded and may be sent in any supported
[data format](/telegraf/v1/data_formats/input).

## Example Output

When run with this configuration:

```toml
[[inputs.firehose]]
  service_address = ":8080"
  paths = ["/telegraf"]
  data_format = "value"
  data_type = "string"
```

the following curl command:

```sh
curl -i -XPOST 'localhost:8080/telegraf' \
  --header 'x-amz-firehose-request-id: ed4acda5-034f-9f42-bba1-f29aea6d7d8f' \
  --header 'Content-Type: application/json' \
  --data '{
    "requestId": "ed4acda5-034f-9f42-bba1-f29aea6d7d8f",
    "timestamp": 1578090901599,
    "records": [
      {
        "data": "aGVsbG8gd29ybGQK" // "hello world"
      }
    ]
  }'
```

produces:

```text
firehose,firehose_http_path=/telegraf value="hello world" 1725001851000000000
```

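
Since each `records.[*].data` value must be base64-encoded, a record payload
can be prepared and inspected like this (the line-protocol metric is
illustrative; `base64 -d` is the GNU coreutils decode flag):

```shell
# Base64-encode an influx line-protocol record, as AWS Data Firehose would
# deliver it in records[*].data, then decode it back to verify round-tripping.
payload=$(printf 'example,source=firehose value=42i' | base64)
echo "$payload"
printf '%s' "$payload" | base64 -d
```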
@ -0,0 +1,103 @@
|
|||
---
description: "Telegraf plugin for collecting metrics from Fluentd"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Fluentd
    identifier: input-fluentd
tags: [Fluentd, "input-plugins", "configuration", "server"]
introduced: "v1.4.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/fluentd/README.md, Fluentd Plugin Source
---

# Fluentd Input Plugin

This plugin gathers internal metrics of a [fluentd](https://www.fluentd.org/) instance, as
provided by fluentd's [monitor agent plugin](https://docs.fluentd.org/input/monitor_agent).
Only data provided by the `/api/plugins.json` resource is gathered; `/api/config.json` is
not covered.

> [!IMPORTANT]
> This plugin might produce high-cardinality series because the `plugin_id`
> value is randomized after each restart of fluentd. If your fluentd restarts
> frequently, consider adding the `@id` parameter to each plugin in your
> fluentd configuration to reduce series cardinality.
> See [fluentd's documentation](https://docs.fluentd.org/configuration/config-file#common-plugin-parameter) for details.

**Introduced in:** Telegraf v1.4.0
**Tags:** server
**OS support:** all

[fluentd]: https://www.fluentd.org/
[monitor_agent]: https://docs.fluentd.org/input/monitor_agent
[docs]: https://docs.fluentd.org/configuration/config-file#common-plugin-parameter

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins

## Configuration

```toml @sample.conf
# Read metrics exposed by fluentd in_monitor plugin
[[inputs.fluentd]]
  ## This plugin reads information exposed by fluentd (using /api/plugins.json endpoint).
  ##
  ## Endpoint:
  ## - only one URI is allowed
  ## - https is not supported
  endpoint = "http://localhost:24220/api/plugins.json"

  ## Define which plugins have to be excluded (based on "type" field - e.g. monitor_agent)
  exclude = [
    "monitor_agent",
    "dummy",
  ]
```
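The `exclude` list drops plugins by their `type` field before any metrics are emitted. Roughly (a sketch of the filtering idea, not the plugin's actual code, using a hypothetical `/api/plugins.json` response):

```python
import json

# Hypothetical monitor_agent response (shape based on /api/plugins.json)
response = json.loads("""{
  "plugins": [
    {"plugin_id": "object:9f748c", "type": "dummy", "retry_count": 0},
    {"plugin_id": "object:820190", "type": "monitor_agent", "retry_count": 0},
    {"plugin_id": "object:c5e054", "type": "stdout", "retry_count": 0}
  ]
}""")

exclude = ["monitor_agent", "dummy"]

# Keep only plugins whose "type" is not in the exclude list
kept = [p for p in response["plugins"] if p["type"] not in exclude]
print([p["type"] for p in kept])  # ['stdout']
```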

## Metrics

### Measurements & Fields

Fields may vary depending on the plugin type.

- fluentd
  - retry_count (float, unit)
  - buffer_queue_length (float, unit)
  - buffer_total_queued_size (float, unit)
  - rollback_count (float, unit)
  - flush_time_count (float, unit)
  - slow_flush_count (float, unit)
  - emit_count (float, unit)
  - emit_records (float, unit)
  - emit_size (float, unit)
  - write_count (float, unit)
  - buffer_stage_length (float, unit)
  - buffer_queue_byte_size (float, unit)
  - buffer_stage_byte_size (float, unit)
  - buffer_available_buffer_space_ratios (float, unit)

### Tags

- All measurements have the following tags:
  - plugin_id (unique plugin id)
  - plugin_type (type of the plugin, e.g. s3)
  - plugin_category (plugin category, e.g. output)

## Example Output

```text
fluentd,host=T440s,plugin_id=object:9f748c,plugin_category=input,plugin_type=dummy buffer_total_queued_size=0,buffer_queue_length=0,retry_count=0 1492006105000000000
fluentd,plugin_category=input,plugin_type=dummy,host=T440s,plugin_id=object:8da98c buffer_queue_length=0,retry_count=0,buffer_total_queued_size=0 1492006105000000000
fluentd,plugin_id=object:820190,plugin_category=input,plugin_type=monitor_agent,host=T440s retry_count=0,buffer_total_queued_size=0,buffer_queue_length=0 1492006105000000000
fluentd,plugin_id=object:c5e054,plugin_category=output,plugin_type=stdout,host=T440s buffer_queue_length=0,retry_count=0,buffer_total_queued_size=0 1492006105000000000
fluentd,plugin_type=s3,host=T440s,plugin_id=object:bd7a90,plugin_category=output buffer_queue_length=0,retry_count=0,buffer_total_queued_size=0 1492006105000000000
fluentd,plugin_id=output_td,plugin_category=output,plugin_type=tdlog,host=T440s buffer_available_buffer_space_ratios=100,buffer_queue_byte_size=0,buffer_queue_length=0,buffer_stage_byte_size=0,buffer_stage_length=0,buffer_total_queued_size=0,emit_count=0,emit_records=0,flush_time_count=0,retry_count=0,rollback_count=0,slow_flush_count=0,write_count=0 1651474085000000000
```
---
description: "Telegraf plugin for collecting metrics from Fritzbox"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Fritzbox
    identifier: input-fritzbox
tags: [Fritzbox, "input-plugins", "configuration", "iot", "network"]
introduced: "v1.35.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/fritzbox/README.md, Fritzbox Plugin Source
---

# Fritzbox Input Plugin

This plugin gathers status information from [AVM](https://en.avm.de/) devices (routers,
repeaters, etc.) using the device's [TR-064](https://avm.de/service/schnittstellen/) interface.

**Introduced in:** Telegraf v1.35.0
**Tags:** network, iot
**OS support:** all

[avm]: https://en.avm.de/
[tr064]: https://avm.de/service/schnittstellen/

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins

## Configuration

```toml @sample.conf
# Gather fritzbox status
[[inputs.fritzbox]]
  ## URLs of the devices to query including login credentials
  urls = [ "http://user:password@fritz.box:49000/" ]

  ## The information to collect (see README for further details).
  # collect = [
  #   "device",
  #   "wan",
  #   "ppp",
  #   "dsl",
  #   "wlan",
  # ]

  ## The http timeout to use.
  # timeout = "10s"

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  # tls_key_pwd = "secret"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false
```

### Collect options

The following collect options are available:

`device`: Collect device information such as model name, software version,
and uptime for the configured devices. Will create `fritzbox_device` metrics.

`wan`: Collect generic WAN connection status such as bit rates and transferred
bytes for the configured devices. Will create `fritzbox_wan` metrics.

`ppp`: Collect PPP connection parameters such as bit rates and uptime for the
configured devices. Will create `fritzbox_ppp` metrics.

`dsl`: Collect DSL line status and statistics for the configured devices.
Will create `fritzbox_dsl` metrics.

`wlan`: Collect status and the number of associated devices for all WLANs.
Will create `fritzbox_wlan` metrics.

`hosts`: Collect detailed information about the mesh network, including
connected nodes, their role in the network, and their connection bandwidth.
Will create `fritzbox_hosts` metrics.

> [!NOTE]
> Collecting `hosts` metrics is time consuming and generates very detailed
> data. If you activate this option, consider increasing the plugin's query
> interval to avoid interval overruns and to minimize the amount of collected
> data.

## Metrics

By default, field names are directly derived from the corresponding TR-064
interface specification.

- `fritzbox_device`
  - tags
    - `source` - The name of the device this metric has been queried from
    - `service` - The service id used to query this metric
  - fields
    - `uptime` (uint) - Device's uptime in seconds.
    - `model_name` (string) - Device's model name.
    - `serial_number` (string) - Device's serial number.
    - `hardware_version` (string) - Device's hardware version.
    - `software_version` (string) - Device's software version.
- `fritzbox_wan`
  - tags
    - `source` - The name of the device this metric has been queried from
    - `service` - The service id used to query this metric
  - fields
    - `layer1_upstream_max_bit_rate` (uint) - The WAN interface's maximum upstream bit rate (bits/sec)
    - `layer1_downstream_max_bit_rate` (uint) - The WAN interface's maximum downstream bit rate (bits/sec)
    - `upstream_current_max_speed` (uint) - The WAN interface's current maximum upstream transfer rate (bytes/sec)
    - `downstream_current_max_speed` (uint) - The WAN interface's current maximum downstream data rate (bytes/sec)
    - `total_bytes_sent` (uint) - The total number of bytes sent via the WAN interface (bytes)
    - `total_bytes_received` (uint) - The total number of bytes received via the WAN interface (bytes)
- `fritzbox_ppp`
  - tags
    - `source` - The name of the device this metric has been queried from
    - `service` - The service id used to query this metric
  - fields
    - `uptime` (uint) - The current uptime of the PPP connection in seconds
    - `upstream_max_bit_rate` (uint) - The current maximum upstream bit rate negotiated for the PPP connection (bits/sec)
    - `downstream_max_bit_rate` (uint) - The current maximum downstream bit rate negotiated for the PPP connection (bits/sec)
- `fritzbox_dsl`
  - tags
    - `source` - The name of the device this metric has been queried from
    - `service` - The service id used to query this metric
    - `status` - The status of the DSL line (Up or Down)
  - fields
    - `upstream_curr_rate` (uint) - Current DSL upstream rate (kilobits/sec)
    - `downstream_curr_rate` (uint) - Current DSL downstream rate (kilobits/sec)
    - `upstream_max_rate` (uint) - Maximum DSL upstream rate (kilobits/sec)
    - `downstream_max_rate` (uint) - Maximum DSL downstream rate (kilobits/sec)
    - `upstream_noise_margin` (uint) - Upstream noise margin (dB)
    - `downstream_noise_margin` (uint) - Downstream noise margin (dB)
    - `upstream_attenuation` (uint) - Upstream attenuation (dB)
    - `downstream_attenuation` (uint) - Downstream attenuation (dB)
    - `upstream_power` (uint) - Upstream power
    - `downstream_power` (uint) - Downstream power
    - `receive_blocks` (uint) - Received blocks
    - `transmit_blocks` (uint) - Transmitted blocks
    - `cell_delin` (uint) - Cell delineation count
    - `link_retrain` (uint) - Link retrains
    - `init_errors` (uint) - Initialization errors
    - `init_timeouts` (uint) - Initialization timeouts
    - `loss_of_framing` (uint) - Loss of frame errors
    - `errored_secs` (uint) - Continuous seconds with errors
    - `severly_errored_secs` (uint) - Continuous seconds with severe errors
    - `fec_errors` (uint) - Local (Modem) FEC (Forward Error Correction) errors
    - `atuc_fec_errors` (uint) - Remote (DSLAM) FEC (Forward Error Correction) errors
    - `hec_errors` (uint) - Local (Modem) HEC (Header Error Control) errors
    - `atuc_hec_errors` (uint) - Remote (DSLAM) HEC (Header Error Control) errors
    - `crc_errors` (uint) - Local (Modem) CRC (Cyclic Redundancy Check) errors
    - `atuc_crc_errors` (uint) - Remote (DSLAM) CRC (Cyclic Redundancy Check) errors
- `fritzbox_wlan`
  - tags
    - `source` - The name of the device this metric has been queried from
    - `service` - The service id used to query this metric
    - `wlan` - The WLAN SSID (name)
    - `channel` - The channel used by this WLAN
    - `band` - The band (in MHz) used by this WLAN
    - `status` - The status of the WLAN (Up or Down)
  - fields
    - `total_associations` (uint) - The number of devices connected to this WLAN.
- `fritzbox_hosts`
  - tags
    - `source` - The name of the device this metric has been queried from
    - `service` - The service id used to query this metric
    - `node` - The name of the node connected to the mesh network
    - `node_role` - The node's role ("master" = mesh master, "slave" = mesh slave, "client") in the network
    - `node_ap` - The name of the access point this node is connected to
    - `node_ap_role` - The access point's role ("master" = mesh master, "slave" = mesh slave, never "client") in the network
    - `link_type` - The link type ("WLAN" or "LAN") of the peer connection
    - `link_name` - The link name of the connection
  - fields
    - `max_data_rate_tx` (uint) - The connection's maximum transmit rate (kilobits/sec)
    - `max_data_rate_rx` (uint) - The connection's maximum receive rate (kilobits/sec)
    - `cur_data_rate_tx` (uint) - The connection's current transmit rate (kilobits/sec)
    - `cur_data_rate_rx` (uint) - The connection's current receive rate (kilobits/sec)

## Example Output

```text
fritzbox_device,service=DeviceInfo1,source=fritz.box uptime=2058438i,model_name="Mock 1234",serial_number="123456789",hardware_version="Mock 1234",software_version="1.02.03" 1737003520174438000

fritzbox_wan,service=WANCommonInterfaceConfig1,source=fritz.box layer1_upstream_max_bit_rate=48816000i,layer1_downstream_max_bit_rate=253247000i,upstream_current_max_speed=511831i,downstream_current_max_speed=1304268i,total_bytes_sent=129497283207i,total_bytes_received=554484531337i 1737003587690504000

fritzbox_ppp,service=WANPPPConnection1,source=fritz.box uptime=369434i,upstream_max_bit_rate=44213433i,downstream_max_bit_rate=68038668i 1737003622308149000

fritzbox_dsl,service=WANDSLInterfaceConfig1,source=fritz.box,status=Up downstream_curr_rate=249065i,downstream_max_rate=249065i,downstream_power=513i,init_timeouts=0i,atuc_crc_errors=13i,errored_secs=25i,atuc_hec_errors=0i,upstream_noise_margin=80i,downstream_noise_margin=60i,downstream_attenuation=140i,receive_blocks=490282831i,transmit_blocks=254577751i,init_errors=0i,crc_errors=53i,fec_errors=0i,hec_errors=0i,upstream_max_rate=48873i,upstream_attenuation=80i,upstream_power=498i,cell_delin=0i,link_retrain=2i,loss_of_framing=0i,upstream_curr_rate=46719i,severly_errored_secs=0i,atuc_fec_errors=0i 1737003645769642000

fritzbox_wlan,band=2400,channel=13,service=WLANConfiguration1,source=fritz.box,ssid=MOCK1234,status=Up total_associations=11i 1737003673561198000

fritzbox_hosts,node=device#17,node_ap=device#1,node_ap_role=master,node_role=slave,link_name=AP:2G:0,link_type=WLAN,service=Hosts1,source=fritz.box cur_data_rate_tx=216000i,cur_data_rate_rx=216000i,max_data_rate_tx=216000i,max_data_rate_rx=216000i 1737003707257394000
fritzbox_hosts,node=device#24,node_ap=device#17,node_ap_role=slave,node_role=client,link_name=LAN:1,link_type=LAN,service=Hosts1,source=fritz.box max_data_rate_tx=1000000i,max_data_rate_rx=1000000i,cur_data_rate_tx=0i,cur_data_rate_rx=0i 1737003707257248000
```
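Note the mixed units in `fritzbox_wan`: the `layer1_*_bit_rate` fields are reported in bits/sec, while the `*_current_max_speed` fields are bytes/sec, so comparing them requires a conversion. A quick sketch using the sample values above:

```python
# Values from the fritzbox_wan sample output above
layer1_downstream_max_bit_rate = 253_247_000  # bits/sec (link capacity)
downstream_current_max_speed = 1_304_268      # bytes/sec (current rate)

# Convert bytes/sec to bits/sec before comparing against the link capacity
current_bits = downstream_current_max_speed * 8
utilization = current_bits / layer1_downstream_max_bit_rate
print(f"{utilization:.1%}")  # roughly 4.1%
```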
---
description: "Telegraf plugin for collecting metrics from GitHub"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: GitHub
    identifier: input-github
tags: [GitHub, "input-plugins", "configuration", "applications"]
introduced: "v1.11.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/github/README.md, GitHub Plugin Source
---

# GitHub Input Plugin

This plugin gathers information from projects and repositories hosted on
[GitHub](https://www.github.com).

> [!NOTE]
> Telegraf also contains the [webhook input plugin](/telegraf/v1/plugins/#input-webhooks), which can be used
> as an alternative method for collecting repository information.

**Introduced in:** Telegraf v1.11.0
**Tags:** applications
**OS support:** all

[github]: https://www.github.com
[webhook]: /plugins/inputs/webhooks/github

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins

## Configuration

```toml @sample.conf
# Gather repository information from GitHub hosted repositories.
[[inputs.github]]
  ## List of repositories to monitor
  repositories = [
    "influxdata/telegraf",
    "influxdata/influxdb"
  ]

  ## GitHub API access token. Unauthenticated requests are limited to 60 per hour.
  # access_token = ""

  ## GitHub API enterprise URL. GitHub Enterprise accounts must specify their base URL.
  # enterprise_base_url = ""

  ## Timeout for HTTP requests.
  # http_timeout = "5s"

  ## List of additional fields to query.
  ## NOTE: Getting those fields might involve issuing additional API calls, so please
  ## make sure you do not exceed the rate limit of GitHub.
  ##
  ## Available fields are:
  ##   - pull-requests -- number of open and closed pull requests (2 API calls per repository)
  # additional_fields = []
```
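As a back-of-the-envelope budget check: each gather issues roughly one API call per repository, plus two more per repository when `pull-requests` is in `additional_fields` (exact call counts may differ; this is a sketch of the estimate, not the plugin's accounting):

```python
# Hypothetical gather budget: 1 base call per repo, +2 if pull-requests enabled
repos = ["influxdata/telegraf", "influxdata/influxdb"]
pull_requests_enabled = True
gathers_per_hour = 60  # e.g. one gather per minute

calls_per_gather = len(repos) * (1 + (2 if pull_requests_enabled else 0))
calls_per_hour = calls_per_gather * gathers_per_hour
print(calls_per_gather, calls_per_hour)  # 6 360

# 360 calls/hour far exceeds the unauthenticated limit of 60/hour, so either
# set an access_token or increase the collection interval.
```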

## Metrics

- github_repository
  - tags:
    - name - The repository name
    - owner - The owner of the repository
    - language - The primary language of the repository
    - license - The license set for the repository
  - fields:
    - forks (int)
    - open_issues (int)
    - networks (int)
    - size (int)
    - subscribers (int)
    - stars (int)
    - watchers (int)

When the [internal](/telegraf/v1/plugins/#input-internal) input is enabled:

- internal_github
  - tags:
    - access_token - An obfuscated reference to the access token, or "Unauthenticated"
  - fields:
    - limit - How many requests you are limited to (per hour)
    - remaining - How many requests you have remaining (per hour)
    - blocks - How many requests have been blocked due to rate limiting

When specifying `additional_fields`, the plugin collects the specified
properties. **NOTE:** Querying these additional fields might require additional
API calls. Make sure you don't exceed GitHub's rate limit by specifying too
many additional fields. The available options, the API calls they require, and
the resulting fields are listed below:

- "pull-requests" (2 API calls per repository)
  - fields:
    - open_pull_requests (int)
    - closed_pull_requests (int)

## Example Output

```text
github_repository,language=Go,license=MIT\ License,name=telegraf,owner=influxdata forks=2679i,networks=2679i,open_issues=794i,size=23263i,stars=7091i,subscribers=316i,watchers=7091i 1563901372000000000
internal_github,access_token=Unauthenticated closed_pull_requests=3522i,rate_limit_remaining=59i,rate_limit_limit=60i,rate_limit_blocks=0i,open_pull_requests=260i 1552653551000000000
```

[internal]: /plugins/inputs/internal
---
description: "Telegraf plugin for collecting metrics from gNMI (gRPC Network Management Interface)"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: gNMI (gRPC Network Management Interface)
    identifier: input-gnmi
tags: [gNMI (gRPC Network Management Interface), "input-plugins", "configuration", "network"]
introduced: "v1.15.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/gnmi/README.md, gNMI (gRPC Network Management Interface) Plugin Source
---

# gNMI (gRPC Network Management Interface) Input Plugin

This plugin consumes telemetry data based on [gNMI](https://github.com/openconfig/reference/blob/master/rpc/gnmi/gnmi-specification.md) subscriptions. TLS is
supported for authentication and encryption. This plugin is vendor-agnostic and
is supported on any platform that supports the gNMI specification.

For Cisco devices, the plugin has been optimized to support gNMI telemetry as
produced by Cisco IOS XR (64-bit) version 6.5.1, Cisco NX-OS 9.3, and
Cisco IOS XE 16.12 and later.

**Introduced in:** Telegraf v1.15.0
**Tags:** network
**OS support:** all

[gnmi]: https://github.com/openconfig/reference/blob/master/rpc/gnmi/gnmi-specification.md

## Service Input <!-- @/docs/includes/service_input.md -->

This plugin is a service input. Normal plugins gather metrics determined by the
interval setting. Service plugins start a service to listen and wait for
metrics or events to occur. Service plugins have two key differences from
normal plugins:

1. The global or plugin-specific `interval` setting may not apply
2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
   output for this plugin

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields, create aliases, and configure ordering.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins

## Secret-store support

This plugin supports secrets from secret-stores for the `username` and
`password` options. See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more
details on how to use them.

[SECRETSTORE]: ../../../docs/CONFIGURATION.md#secret-store-secrets

## Configuration
```toml @sample.conf
# gNMI telemetry input plugin
[[inputs.gnmi]]
  ## Address and port of the gNMI GRPC server
  addresses = ["10.49.234.114:57777"]

  ## define credentials
  username = "cisco"
  password = "cisco"

  ## gNMI encoding requested (one of: "proto", "json", "json_ietf", "bytes")
  # encoding = "proto"

  ## redial in case of failures after
  # redial = "10s"

  ## gRPC Keepalive settings
  ## See https://pkg.go.dev/google.golang.org/grpc/keepalive
  ## The client will ping the server to see if the transport is still alive if it has
  ## not seen any activity for the given time.
  ## If not set, none of the keep-alive settings (including those below) will be applied.
  ## If set below 10 seconds, the gRPC library will enforce a minimum value of 10s instead.
  # keepalive_time = ""

  ## Timeout for seeing any activity after the keep-alive probe was
  ## sent. If no activity is seen the connection is closed.
  # keepalive_timeout = ""

  ## gRPC Maximum Message Size
  # max_msg_size = "4MB"

  ## Subtree depth for depth extension (disabled if < 1)
  ## see https://github.com/openconfig/reference/blob/master/rpc/gnmi/gnmi-depth.md
  # depth = 0

  ## Enable to get the canonical path as field-name
  # canonical_field_names = false

  ## Remove leading slashes and dots in field-name
  # trim_field_names = false

  ## Only receive updates for the state, also suppresses receiving the initial state
  # updates_only = false

  ## Enforces the namespace of the first element as origin for aliases and
  ## response paths, required for backward compatibility.
  ## NOTE: Set to 'false' if possible but be aware that this might change the path tag!
  # enforce_first_namespace_as_origin = true

  ## Guess the path-tag if an update does not contain a prefix-path
  ## Supported values are
  ##   none         -- do not add a 'path' tag
  ##   common path  -- use the common path elements of all fields in an update
  ##   subscription -- use the subscription path
  # path_guessing_strategy = "none"

  ## Prefix tags from path keys with the path element
  # prefix_tag_key_with_path = false

  ## Optional client-side TLS to authenticate the device
  ## Set to true/false to enforce TLS being enabled/disabled. If not set,
  ## enable TLS only if any of the other options are specified.
  # tls_enable =
  ## Trusted root certificates for server
  # tls_ca = "/path/to/cafile"
  ## Used for TLS client certificate authentication
  # tls_cert = "/path/to/certfile"
  ## Used for TLS client certificate authentication
  # tls_key = "/path/to/keyfile"
  ## Password for the key file if it is encrypted
  # tls_key_pwd = ""
  ## Send the specified TLS server name via SNI
  # tls_server_name = "kubernetes.example.com"
  ## Minimal TLS version to accept by the client
  # tls_min_version = "TLS12"
  ## List of ciphers to accept, by default all secure ciphers will be accepted
  ## See https://pkg.go.dev/crypto/tls#pkg-constants for supported values.
  ## Use "all", "secure" and "insecure" to add all supported ciphers, secure
  ## suites or insecure suites respectively.
  # tls_cipher_suites = ["secure"]
  ## Renegotiation method, "never", "once" or "freely"
  # tls_renegotiation_method = "never"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

  ## gNMI subscription prefix (optional, can usually be left empty)
  ## See: https://github.com/openconfig/reference/blob/master/rpc/gnmi/gnmi-specification.md#222-paths
  # origin = ""
  # prefix = ""
  # target = ""

  ## Vendor specific options
  ## This defines what vendor specific options to load.
  ## * Juniper Header Extension (juniper_header): some sensors are directly managed by
  ##   Linecard, which adds the Juniper GNMI Header Extension. Enabling this
  ##   allows the decoding of the Extension header if present. Currently this knob
  ##   adds component, component_id & sub_component_id as additional tags
  # vendor_specific = []

  ## YANG model paths for decoding IETF JSON payloads
  ## Model files are loaded recursively from the given directories. Disabled if
  ## no models are specified.
  # yang_model_paths = []

  ## Define additional aliases to map encoding paths to measurement names
  # [inputs.gnmi.aliases]
  #   ifcounters = "openconfig:/interfaces/interface/state/counters"

  [[inputs.gnmi.subscription]]
    ## Name of the measurement that will be emitted
    name = "ifcounters"

    ## Origin and path of the subscription
    ## See: https://github.com/openconfig/reference/blob/master/rpc/gnmi/gnmi-specification.md#222-paths
    ##
    ## origin usually refers to a (YANG) data model implemented by the device
    ## and path to a specific substructure inside it that should be subscribed
    ## to (similar to an XPath). YANG models can be found e.g. here:
    ## https://github.com/YangModels/yang/tree/master/vendor/cisco/xr
    origin = "openconfig-interfaces"
    path = "/interfaces/interface/state/counters"

    ## Subscription mode ("target_defined", "sample", "on_change") and interval
    subscription_mode = "sample"
    sample_interval = "10s"

    ## Suppress redundant transmissions when measured values are unchanged
    # suppress_redundant = false

    ## If suppression is enabled, send updates at least every X seconds anyway
    # heartbeat_interval = "60s"

  ## Tag subscriptions are applied as tags to other subscriptions.
  # [[inputs.gnmi.tag_subscription]]
  #  ## When applying this value as a tag to other metrics, use this tag name
  #  name = "descr"
  #
  #  ## All other subscription fields are as normal
  #  origin = "openconfig-interfaces"
  #  path = "/interfaces/interface/state"
  #  subscription_mode = "on_change"
  #
  #  ## Match strategy to use for the tag.
  #  ## Tags are only applied for metrics of the same address. The following
  #  ## settings are valid:
  #  ##   unconditional -- always match
  #  ##   name          -- match by the "name" key
  #  ##                    This resembles the previous 'tag-only' behavior.
  #  ##   elements      -- match by the keys in the path filtered by the path
  #  ##                    parts specified in `elements` below
  #  ## By default, 'elements' is used if the 'elements' option is provided,
  #  ## otherwise match by 'name'.
  #  # match = ""
  #
  #  ## For the 'elements' match strategy, at least one path-element name must
  #  ## be supplied containing at least one key to match on. Multiple path
  #  ## elements can be specified in any order. All given keys must be equal
  #  ## for a match.
  #  # elements = ["description", "interface"]
```

## Metrics

Each configured subscription will emit a different measurement. Each leaf in a
gNMI SubscribeResponse Update message will produce a field reading in the
measurement. gNMI PathElement keys for leaves will attach tags to the field(s).
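Conceptually, keys embedded in a path element become tags on the emitted fields. A simplified sketch of that mapping on a string-form path (not the plugin's actual protobuf handling):

```python
import re

def path_to_tags(path: str):
    """Split a gNMI-style path into its bare path and key/value tags.

    Keys appear as [name=value] inside path elements, e.g.
    /interfaces/interface[name=Ethernet1]/state/counters
    """
    tags = {}
    def grab(match):
        key, value = match.group(1), match.group(2)
        tags[key] = value
        return ""  # strip the [key=value] part from the path
    bare = re.sub(r"\[([^=\]]+)=([^\]]+)\]", grab, path)
    return bare, tags

bare, tags = path_to_tags("/interfaces/interface[name=GigabitEthernet0/0/0/0]/state/counters")
print(bare)  # /interfaces/interface/state/counters
print(tags)  # {'name': 'GigabitEthernet0/0/0/0'}
```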
|
||||
|
||||
## Example Output
|
||||
|
||||
```text
|
||||
ifcounters,path=openconfig-interfaces:/interfaces/interface/state/counters,host=linux,name=MgmtEth0/RP0/CPU0/0,source=10.49.234.115,descr/description=Foo in-multicast-pkts=0i,out-multicast-pkts=0i,out-errors=0i,out-discards=0i,in-broadcast-pkts=0i,out-broadcast-pkts=0i,in-discards=0i,in-unknown-protos=0i,in-errors=0i,out-unicast-pkts=0i,in-octets=0i,out-octets=0i,last-clear="2019-05-22T16:53:21Z",in-unicast-pkts=0i 1559145777425000000
|
||||
ifcounters,path=openconfig-interfaces:/interfaces/interface/state/counters,host=linux,name=GigabitEthernet0/0/0/0,source=10.49.234.115,descr/description=Bar out-multicast-pkts=0i,out-broadcast-pkts=0i,in-errors=0i,out-errors=0i,in-discards=0i,out-octets=0i,in-unknown-protos=0i,in-unicast-pkts=0i,in-octets=0i,in-multicast-pkts=0i,in-broadcast-pkts=0i,last-clear="2019-05-22T16:54:50Z",out-unicast-pkts=0i,out-discards=0i 1559145777425000000
|
||||
```
|
||||
|
||||
## Troubleshooting

### Empty metric-name warning

Some devices (e.g. Juniper) report spurious data with response paths not
corresponding to any subscription. In those cases, Telegraf cannot determine
the metric name for the response, and you get an *empty metric-name warning*.

For example, if you subscribe to `/junos/system/linecard/cpu/memory`, the
corresponding response might arrive with the path
`/components/component/properties/property/...`. To avoid those issues, you
can manually map the response to a metric name using the `aliases` option:

```toml
[[inputs.gnmi]]
  addresses = ["..."]

  [inputs.gnmi.aliases]
    memory = "/components"

[[inputs.gnmi.subscription]]
  name = "memory"
  origin = "openconfig"
  path = "/junos/system/linecard/cpu/memory"
  subscription_mode = "sample"
  sample_interval = "60s"
```

If this does *not* solve the issue, please follow the warning instructions and
open an issue with the response, your configuration, and the metric you expect.

### Missing `path` tag

Some devices (e.g. Arista) omit the prefix and specify the path in the update
if there is only one value reported. This leads to a missing `path` tag in
the resulting metrics. In those cases, you should set `path_guessing_strategy`
to `subscription` to use the subscription path as the `path` tag.

Other devices might omit the prefix in updates altogether. Here, setting
`path_guessing_strategy` to `common path` can help infer the `path` tag by
using the part of the path that is common to all values in the update.

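The option sits at the plugin level. A minimal sketch, assuming a placeholder
device address, might look like:

```toml
[[inputs.gnmi]]
  ## Placeholder device address; replace with your target.
  addresses = ["10.0.0.1:57400"]

  ## Use the subscription path as the `path` tag when the device omits
  ## the prefix in its updates.
  path_guessing_strategy = "subscription"
```
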
### TLS handshake failure

When receiving an error like

```text
2024-01-01T00:00:00Z E! [inputs.gnmi] Error in plugin: failed to setup subscription: rpc error: code = Unavailable desc = connection error: desc = "transport: authentication handshake failed: remote error: tls: handshake failure"
```

this might be due to an insecure TLS configuration in the GNMI server. Please
check the minimum TLS version provided by the server as well as the cipher
suites used. You might want to use the `tls_min_version` or
`tls_cipher_suites` settings, respectively, to work around the issue. Please
be careful not to undermine the security of the connection between the plugin
and the device!

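A relaxed client-side configuration can be sketched as follows; the address
and the exact TLS values are placeholders, and which ones are needed depends
on what your device actually supports:

```toml
[[inputs.gnmi]]
  ## Placeholder device address; replace with your target.
  addresses = ["10.0.0.1:57400"]

  ## Allow an older TLS version if the device does not support TLS 1.2+.
  tls_min_version = "TLS11"
  ## Accept all supported cipher suites, including insecure ones.
  tls_cipher_suites = ["all"]
```

Relaxing these settings weakens the connection's security, so prefer fixing
the device-side TLS configuration where possible.
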
---
description: "Telegraf plugin for collecting metrics from Google Cloud Storage"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: Google Cloud Storage
    identifier: input-google_cloud_storage
tags: [Google Cloud Storage, "input-plugins", "configuration", "cloud", "datastore"]
introduced: "v1.25.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/google_cloud_storage/README.md, Google Cloud Storage Plugin Source
---

# Google Cloud Storage Input Plugin

This plugin will collect metrics from the given [Google Cloud Storage](https://cloud.google.com/storage)
buckets in any of the supported [data formats](/telegraf/v1/data_formats/input).

**Introduced in:** Telegraf v1.25.0
**Tags:** cloud, datastore
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields or create aliases and configure ordering, etc.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Gather metrics by iterating the files located on a Cloud Storage Bucket.
[[inputs.google_cloud_storage]]
  ## Required. Name of Cloud Storage bucket to ingest metrics from.
  bucket = "my-bucket"

  ## Optional. Prefix of Cloud Storage bucket keys to list metrics from.
  # key_prefix = "my-bucket"

  ## Key that will store the offsets in order to pick up where the ingestion was left.
  offset_key = "offset_key"

  ## Maximum number of objects to process per iteration.
  objects_per_iteration = 10

  ## Required. Data format to consume.
  ## Each data format has its own unique set of configuration options.
  ## Read more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"

  ## Optional. Filepath for GCP credentials JSON file to authorize calls to
  ## Google Cloud Storage APIs. If not set explicitly, Telegraf will attempt to use
  ## Application Default Credentials, which is preferred.
  # credentials_file = "path/to/my/creds.json"
```

## Metrics

Measurements will reside in Google Cloud Storage in the format specified, for
example:

```json
{
  "metrics": [
    {
      "fields": {
        "cosine": 10,
        "sine": -1.0975806427415925e-12
      },
      "name": "cpu",
      "tags": {
        "datacenter": "us-east-1",
        "host": "localhost"
      },
      "timestamp": 1604148850990
    }
  ]
}
```

when the [data format](/telegraf/v1/data_formats/input) is set to `json`.

## Example Output

```text
google_cloud_storage,datacenter=us-east-1,host=localhost cosine=10,sine=-1.0975806427415925e-12 1604148850990000000
```

---
description: "Telegraf plugin for collecting metrics from GrayLog"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: GrayLog
    identifier: input-graylog
tags: [GrayLog, "input-plugins", "configuration", "logging"]
introduced: "v1.0.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/graylog/README.md, GrayLog Plugin Source
---

# GrayLog Input Plugin

This plugin collects data from [Graylog servers](https://graylog.org/), currently supporting
two types of endpoints: `multiple`
(e.g. `http://<host>:9000/api/system/metrics/multiple`) and `namespace`
(e.g. `http://<host>:9000/api/system/metrics/namespace/{namespace}`).

Multiple endpoints can be queried, and mixing `multiple` and several
`namespace` endpoints is possible. Check `http://<host>:9000/api/api-browser`
for the full list of available endpoints.

> [!NOTE]
> When specifying a `namespace` endpoint without an actual namespace, the
> metrics array will be ignored.

**Introduced in:** Telegraf v1.0.0
**Tags:** logging
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields or create aliases and configure ordering, etc.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Read flattened metrics from one or more GrayLog HTTP endpoints
[[inputs.graylog]]
  ## API endpoint, currently supported API:
  ##
  ##   - multiple  (e.g. http://<host>:9000/api/system/metrics/multiple)
  ##   - namespace (e.g. http://<host>:9000/api/system/metrics/namespace/{namespace})
  ##
  ## For namespace endpoint, the metrics array will be ignored for that call.
  ## Endpoint can contain namespace and multiple type calls.
  ##
  ## Please check http://[graylog-server-ip]:9000/api/api-browser for the full
  ## list of endpoints
  servers = [
    "http://[graylog-server-ip]:9000/api/system/metrics/multiple",
  ]

  ## Set timeout (default 5 seconds)
  # timeout = "5s"

  ## Metrics list
  ## List of metrics can be found on Graylog webservice documentation.
  ## Or by hitting the web service api at:
  ## http://[graylog-host]:9000/api/system/metrics
  metrics = [
    "jvm.cl.loaded",
    "jvm.memory.pools.Metaspace.committed"
  ]

  ## Username and password
  username = ""
  password = ""

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false
```

Please refer to the GrayLog metrics API browser for the full list of metric
endpoints: `http://host:9000/api/api-browser`

## Metrics

## Example Output

---
description: "Telegraf plugin for collecting metrics from HAProxy"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: HAProxy
    identifier: input-haproxy
tags: [HAProxy, "input-plugins", "configuration", "network", "server"]
introduced: "v0.1.5"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/haproxy/README.md, HAProxy Plugin Source
---

# HAProxy Input Plugin

This plugin gathers statistics of [HAProxy](http://www.haproxy.org/) servers using sockets or
the HTTP protocol.

**Introduced in:** Telegraf v0.1.5
**Tags:** network, server
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields or create aliases and configure ordering, etc.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Read metrics of HAProxy, via stats socket or http endpoints
[[inputs.haproxy]]
  ## List of stats endpoints. Metrics can be collected from both http and socket
  ## endpoints. Examples of valid endpoints:
  ##   - http://myhaproxy.com:1936/haproxy?stats
  ##   - https://myhaproxy.com:8000/stats
  ##   - socket:/run/haproxy/admin.sock
  ##   - /run/haproxy/*.sock
  ##   - tcp://127.0.0.1:1936
  ##
  ## Server addresses not starting with 'http://', 'https://', 'tcp://' will be
  ## treated as possible sockets. When specifying local socket, glob patterns are
  ## supported.
  servers = ["http://myhaproxy.com:1936/haproxy?stats"]

  ## By default, some of the fields are renamed from what haproxy calls them.
  ## Setting this option to true results in the plugin keeping the original
  ## field names.
  # keep_field_names = false

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false
```

### HAProxy Configuration

The following information may be useful when getting started, but please
consult the HAProxy documentation for complete and up-to-date instructions.

The [`stats enable`](https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-stats%20enable)
option can be used to add unauthenticated access over HTTP using the default
settings. To enable the unix socket, begin by reading about the
[`stats socket`](https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#3.1-stats%20socket)
option.

### servers

Server addresses must explicitly start with 'http' if you wish to use the
HAProxy status page. Otherwise, addresses will be assumed to be a UNIX socket
and any protocol (if present) will be discarded.

When using socket names, wildcard expansion is supported so the plugin can
gather stats from multiple sockets at once.

To use HTTP Basic Auth, add the username and password in the userinfo section
of the URL: `http://user:password@1.2.3.4/haproxy?stats`. The credentials are
sent via the `Authorization` header and not using the request URL.

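Putting these together, a configuration mixing an authenticated status page
with a local socket glob might look like the following sketch (host, port, and
credentials are placeholders):

```toml
[[inputs.haproxy]]
  servers = [
    ## HTTP status page with Basic Auth credentials in the userinfo section
    "http://user:password@1.2.3.4/haproxy?stats",
    ## Glob pattern matching several local admin sockets
    "/run/haproxy/*.sock",
  ]
```
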
### keep_field_names

By default, some of the fields are renamed from what HAProxy calls them.
Setting the `keep_field_names` parameter to `true` will result in the plugin
keeping the original field names.

The following renames are made:

- `pxname` -> `proxy`
- `svname` -> `sv`
- `act` -> `active_servers`
- `bck` -> `backup_servers`
- `cli_abrt` -> `cli_abort`
- `srv_abrt` -> `srv_abort`
- `hrsp_1xx` -> `http_response.1xx`
- `hrsp_2xx` -> `http_response.2xx`
- `hrsp_3xx` -> `http_response.3xx`
- `hrsp_4xx` -> `http_response.4xx`
- `hrsp_5xx` -> `http_response.5xx`
- `hrsp_other` -> `http_response.other`

## Metrics

For more details about collected metrics, refer to the
[HAProxy CSV format documentation](https://cbonte.github.io/haproxy-dconv/1.8/management.html#9.1).

- haproxy
  - tags:
    - `server` - address of the server data was gathered from
    - `proxy` - proxy name
    - `sv` - service name
    - `type` - proxy session type
  - fields:
    - `status` (string)
    - `check_status` (string)
    - `last_chk` (string)
    - `mode` (string)
    - `tracked` (string)
    - `agent_status` (string)
    - `last_agt` (string)
    - `addr` (string)
    - `cookie` (string)
    - `lastsess` (int)
    - **all other stats** (int)

## Example Output

```text
haproxy,server=/run/haproxy/admin.sock,proxy=public,sv=FRONTEND,type=frontend http_response.other=0i,req_rate_max=1i,comp_byp=0i,status="OPEN",rate_lim=0i,dses=0i,req_rate=0i,comp_rsp=0i,bout=9287i,comp_in=0i,mode="http",smax=1i,slim=2000i,http_response.1xx=0i,conn_rate=0i,dreq=0i,ereq=0i,iid=2i,rate_max=1i,http_response.2xx=1i,comp_out=0i,intercepted=1i,stot=2i,pid=1i,http_response.5xx=1i,http_response.3xx=0i,http_response.4xx=0i,conn_rate_max=1i,conn_tot=2i,dcon=0i,bin=294i,rate=0i,sid=0i,req_tot=2i,scur=0i,dresp=0i 1513293519000000000
```

---
description: "Telegraf plugin for collecting metrics from HDDtemp"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: HDDtemp
    identifier: input-hddtemp
tags: [HDDtemp, "input-plugins", "configuration", "hardware", "system"]
introduced: "v1.0.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/hddtemp/README.md, HDDtemp Plugin Source
---

# HDDtemp Input Plugin

This plugin reads data from a [hddtemp](https://savannah.nongnu.org/projects/hddtemp/) daemon.

> [!IMPORTANT]
> This plugin requires `hddtemp` to be installed and running as a daemon.

As the upstream project is not actively maintained anymore and various
distributions (e.g. Debian Bookworm and later) don't ship packages for
`hddtemp` anymore, the binary might not be available (e.g. in Ubuntu 22.04 or
later).

> [!TIP]
> As an alternative, consider using the [smartctl](/telegraf/v1/plugins/#input-smartctl)
> plugin, which relies on SMART information, or the
> [sensors](/telegraf/v1/plugins/#input-sensors) plugin to retrieve the
> temperature data of your hard drive.

**Introduced in:** Telegraf v1.0.0
**Tags:** hardware, system
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields or create aliases and configure ordering, etc.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration

```toml @sample.conf
# Monitor disks' temperatures using hddtemp
[[inputs.hddtemp]]
  ## By default, Telegraf gathers temperature data from all disks detected by
  ## hddtemp.
  ##
  ## Only collect temperatures from the selected disks.
  ##
  ## A * as the device name will return the temperature values of all disks.
  ##
  # address = "127.0.0.1:7634"
  # devices = ["sda", "*"]
```

## Metrics

- hddtemp
  - tags:
    - device
    - model
    - unit
    - status
    - source
  - fields:
    - temperature

## Example Output

```text
hddtemp,source=server1,unit=C,status=,device=sdb,model=WDC\ WD740GD-00FLA1 temperature=43i 1481655647000000000
hddtemp,device=sdc,model=SAMSUNG\ HD103UI,unit=C,source=server1,status= temperature=38i 1481655647000000000
hddtemp,device=sdd,model=SAMSUNG\ HD103UI,unit=C,source=server1,status= temperature=36i 1481655647000000000
```

---
description: "Telegraf plugin for collecting metrics from HTTP"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: HTTP
    identifier: input-http
tags: [HTTP, "input-plugins", "configuration", "applications", "server"]
introduced: "v1.6.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/http/README.md, HTTP Plugin Source
---

# HTTP Input Plugin

This plugin collects metrics from one or more HTTP endpoints providing data in
one of the supported [data formats](/telegraf/v1/data_formats/input).

**Introduced in:** Telegraf v1.6.0
**Tags:** applications, server
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields or create aliases and configure ordering, etc.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Secret-store support

This plugin supports secrets from secret-stores for the `username`, `password`,
`token`, `headers`, and `cookie_auth_headers` options.
See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
to use them.

## Configuration

```toml @sample.conf
# Read formatted metrics from one or more HTTP endpoints
[[inputs.http]]
  ## One or more URLs from which to read formatted metrics.
  urls = [
    "http://localhost/metrics",
    "http+unix:///run/user/420/podman/podman.sock:/d/v4.0.0/libpod/pods/json"
  ]

  ## HTTP method
  # method = "GET"

  ## Optional HTTP headers
  # headers = {"X-Special-Header" = "Special-Value"}

  ## HTTP entity-body to send with POST/PUT requests.
  # body = ""

  ## HTTP Content-Encoding for write request body, can be set to "gzip" to
  ## compress body or "identity" to apply no encoding.
  # content_encoding = "identity"

  ## Optional Bearer token settings to use for the API calls.
  ## Use either the token itself or the token file if you need a token.
  # token = "eyJhbGc...Qssw5c"
  # token_file = "/path/to/file"

  ## Optional HTTP Basic Auth Credentials
  # username = "username"
  # password = "pa$$word"

  ## OAuth2 Client Credentials. The options 'client_id', 'client_secret',
  ## and 'token_url' are required to use OAuth2.
  # client_id = "clientid"
  # client_secret = "secret"
  # token_url = "https://identityprovider/oauth2/v1/token"
  # scopes = ["urn:opc:idm:__myscopes__"]

  ## HTTP Proxy support
  # use_system_proxy = false
  # http_proxy_url = ""

  ## Optional TLS Config
  ## Set to true/false to enforce TLS being enabled/disabled. If not set,
  ## enable TLS only if any of the other options are specified.
  # tls_enable =
  ## Trusted root certificates for server
  # tls_ca = "/path/to/cafile"
  ## Used for TLS client certificate authentication
  # tls_cert = "/path/to/certfile"
  ## Used for TLS client certificate authentication
  # tls_key = "/path/to/keyfile"
  ## Password for the key file if it is encrypted
  # tls_key_pwd = ""
  ## Send the specified TLS server name via SNI
  # tls_server_name = "kubernetes.example.com"
  ## Minimal TLS version to accept by the client
  # tls_min_version = "TLS12"
  ## List of ciphers to accept, by default all secure ciphers will be accepted
  ## See https://pkg.go.dev/crypto/tls#pkg-constants for supported values.
  ## Use "all", "secure" and "insecure" to add all supported ciphers, secure
  ## suites or insecure suites respectively.
  # tls_cipher_suites = ["secure"]
  ## Renegotiation method, "never", "once" or "freely"
  # tls_renegotiation_method = "never"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

  ## Optional Cookie authentication
  # cookie_auth_url = "https://localhost/authMe"
  # cookie_auth_method = "POST"
  # cookie_auth_username = "username"
  # cookie_auth_password = "pa$$word"
  # cookie_auth_headers = { Content-Type = "application/json", X-MY-HEADER = "hello" }
  # cookie_auth_body = '{"username": "user", "password": "pa$$word", "authenticate": "me"}'
  ## cookie_auth_renewal not set or set to "0" will auth once and never renew the cookie
  # cookie_auth_renewal = "5m"

  ## Amount of time allowed to complete the HTTP request
  # timeout = "5s"

  ## List of success status codes
  # success_status_codes = [200]

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  # data_format = "influx"
```

HTTP requests over Unix domain sockets can be specified via the `http+unix` or
`https+unix` schemes. Request URLs should have the following form:

```text
http+unix:///path/to/service.sock:/api/endpoint
```

Note: The path to the Unix domain socket and the request endpoint are separated
by a colon (":").

|
||||
|
||||
This example output was taken from [this instructional article](https://docs.influxdata.com/telegraf/v1/configure_plugins/input_plugins/using_http/).
|
||||
|
||||
[1]: https://docs.influxdata.com/telegraf/v1/configure_plugins/input_plugins/using_http/
|
||||
|
||||
```text
|
||||
citibike,station_id=4703 eightd_has_available_keys=false,is_installed=1,is_renting=1,is_returning=1,legacy_id="4703",num_bikes_available=6,num_bikes_disabled=2,num_docks_available=26,num_docks_disabled=0,num_ebikes_available=0,station_status="active" 1641505084000000000
|
||||
citibike,station_id=4704 eightd_has_available_keys=false,is_installed=1,is_renting=1,is_returning=1,legacy_id="4704",num_bikes_available=10,num_bikes_disabled=2,num_docks_available=36,num_docks_disabled=0,num_ebikes_available=0,station_status="active" 1641505084000000000
|
||||
citibike,station_id=4711 eightd_has_available_keys=false,is_installed=1,is_renting=1,is_returning=1,legacy_id="4711",num_bikes_available=9,num_bikes_disabled=0,num_docks_available=36,num_docks_disabled=0,num_ebikes_available=1,station_status="active" 1641505084000000000
|
||||
```
|
||||
|
||||
## Metrics

The metrics collected by this input plugin will depend on the configured
`data_format` and the payload returned by the HTTP endpoint(s).

The default values below are added if the input format does not specify a value:

- http
  - tags:
    - url

## Optional Cookie Authentication Settings

The optional Cookie Authentication Settings will retrieve a cookie from the
given authorization endpoint, and use it in subsequent API requests. This is
useful for services that do not provide OAuth or Basic Auth authentication,
e.g. the [Tesla Powerwall API](https://www.tesla.com/support/energy/powerwall/own/monitoring-from-home-network), which uses a Cookie Auth Body to retrieve
an authorization cookie. The Cookie Auth Renewal interval will renew the
authorization by retrieving a new cookie at the given interval.

---
description: "Telegraf plugin for collecting metrics from HTTP Listener v2"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: HTTP Listener v2
    identifier: input-http_listener_v2
tags: [HTTP Listener v2, "input-plugins", "configuration", "server"]
introduced: "v1.9.0"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/http_listener_v2/README.md, HTTP Listener v2 Plugin Source
---

# HTTP Listener v2 Input Plugin

This plugin listens for metrics sent via HTTP in any of the supported
[data formats](/telegraf/v1/data_formats/input).

> [!NOTE]
> If you would like Telegraf to act as a proxy/relay for InfluxDB v1 or
> InfluxDB v2, it is recommended to use the
> [influxdb_listener](/telegraf/v1/plugins/#input-influxdb_listener) or
> [influxdb_v2_listener](/telegraf/v1/plugins/#input-influxdb_v2_listener)
> plugin instead.

**Introduced in:** Telegraf v1.9.0
**Tags:** server
**OS support:** all

## Service Input <!-- @/docs/includes/service_input.md -->

This plugin is a service input. Normal plugins gather metrics determined by the
interval setting. Service plugins start a service to listen and wait for
metrics or events to occur. Service plugins have two key differences from
normal plugins:

1. The global or plugin-specific `interval` setting may not apply
2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
   output for this plugin

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields or create aliases and configure ordering, etc.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Configuration
|
||||
|
||||
```toml @sample.conf
|
||||
# Generic HTTP write listener
[[inputs.http_listener_v2]]
  ## Address to host HTTP listener on
  ## Can be prefixed by protocol, "tcp" or "unix"; if not provided, defaults to tcp.
  ## If the unix network type is provided, it should be followed by the absolute path to the unix socket.
  service_address = "tcp://:8080"
  ## service_address = "tcp://:8443"
  ## service_address = "unix:///tmp/telegraf.sock"

  ## Permission for unix sockets (only available for unix sockets)
  ## This setting may not be respected by some platforms. To safely restrict
  ## permissions it is recommended to place the socket into a previously
  ## created directory with the desired permissions.
  ##   ex: socket_mode = "777"
  # socket_mode = ""

  ## Paths to listen to.
  # paths = ["/telegraf"]

  ## Save path as http_listener_v2_path tag if set to true
  # path_tag = false

  ## HTTP methods to accept.
  # methods = ["POST", "PUT"]

  ## Optional HTTP headers
  ## These headers are applied to the server that is listening for HTTP
  ## requests and included in responses.
  # http_headers = {"HTTP_HEADER" = "HTTP_VALUE"}

  ## HTTP Return Success Code
  ## This is the HTTP code that will be returned on success
  # http_success_code = 204

  ## Maximum duration before timing out the read of the request
  # read_timeout = "10s"
  ## Maximum duration before timing out the write of the response
  # write_timeout = "10s"

  ## Maximum allowed HTTP request body size in bytes.
  ## 0 means to use the default of 524,288,000 bytes (500 mebibytes)
  # max_body_size = "500MB"

  ## Part of the request to consume. Available options are "body" and "query".
  # data_source = "body"

  ## Set one or more allowed client CA certificate file names to
  ## enable mutually authenticated TLS connections
  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]

  ## Add service certificate and key
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"

  ## Minimal TLS version accepted by the server
  # tls_min_version = "TLS12"

  ## Optional username and password to accept for HTTP basic authentication.
  ## You probably want to make sure you have TLS configured above for this.
  # basic_username = "foobar"
  # basic_password = "barfoo"

  ## Optional setting to map HTTP headers into tags
  ## If the HTTP header is not present on the request, no corresponding tag will be added
  ## If multiple instances of the HTTP header are present, only the first value will be used
  # http_header_tags = {"HTTP_HEADER" = "TAG_NAME"}

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"
```

## Metrics

Metrics are collected from the part of the request specified by the
`data_source` param and are parsed depending on the value of `data_format`.

## Example Output

## Troubleshooting

Send Line Protocol:

```shell
curl -i -XPOST 'http://localhost:8080/telegraf' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000'
```

Send JSON:

```shell
curl -i -XPOST 'http://localhost:8080/telegraf' --data-binary '{"value1": 42, "value2": 42}'
```
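
The JSON example is only parsed if the listener is configured with a matching parser; the sample configuration uses `data_format = "influx"`. A minimal sketch of a JSON-parsing listener (same address and path assumed):

```toml
[[inputs.http_listener_v2]]
  service_address = "tcp://:8080"
  paths = ["/telegraf"]
  ## Parse the request body as JSON instead of line protocol
  data_format = "json"
```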

Send query params:

```shell
curl -i -XGET 'http://localhost:8080/telegraf?host=server01&value=0.42'
```
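
The curl examples above can also be scripted; a minimal Python sketch, assuming the listener runs with the sample configuration above (`tcp://:8080`, path `/telegraf`, `data_format = "influx"`):

```python
import time
from urllib import request

def line_protocol(measurement, tags, fields, ts_ns=None):
    """Build one InfluxDB line protocol record.

    Numeric formatting is simplified: float fields only; integer fields
    would need an "i" suffix, and special characters would need escaping.
    """
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    ts = ts_ns if ts_ns is not None else time.time_ns()
    return f"{measurement},{tag_str} {field_str} {ts}"

def post_metric(url, line):
    """POST a line protocol record to an http_listener_v2 endpoint."""
    req = request.Request(url, data=line.encode(), method="POST")
    with request.urlopen(req) as resp:
        return resp.status  # 204 with the default http_success_code

line = line_protocol(
    "cpu_load_short",
    tags={"host": "server01", "region": "us-west"},
    fields={"value": 0.64},
    ts_ns=1434055562000000000,
)
# post_metric("http://localhost:8080/telegraf", line)  # requires a running listener
```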
---
description: "Telegraf plugin for collecting metrics from HTTP Response"
menu:
  telegraf_v1_ref:
    parent: input_plugins_reference
    name: HTTP Response
    identifier: input-http_response
tags: [HTTP Response, "input-plugins", "configuration", "server"]
introduced: "v0.12.1"
os_support: "freebsd, linux, macos, solaris, windows"
related:
  - /telegraf/v1/configure_plugins/
  - https://github.com/influxdata/telegraf/tree/v1.36.1/plugins/inputs/http_response/README.md, HTTP Response Plugin Source
---

# HTTP Response Input Plugin

This plugin generates metrics from HTTP responses including the status code and
response statistics.

**Introduced in:** Telegraf v0.12.1
**Tags:** server
**OS support:** all

## Global configuration options <!-- @/docs/includes/plugin_config.md -->

In addition to the plugin-specific configuration settings, plugins support
additional global and plugin configuration settings. These settings are used to
modify metrics, tags, and fields or create aliases and configure ordering, etc.
See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.

## Secret-store support

This plugin supports secrets from secret-stores for the `username` and
`password` options.
See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
to use them.
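
For example, the credentials can be resolved from a configured secret store at runtime instead of being stored in plain text (the store ID `mystore` and the secret names are placeholders):

```toml
[[inputs.http_response]]
  urls = ["https://example.com"]
  ## Resolved from the secret store with ID "mystore"
  username = "@{mystore:http_user}"
  password = "@{mystore:http_password}"
```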

## Configuration

```toml @sample.conf
# HTTP/HTTPS request given an address a method and a timeout
[[inputs.http_response]]
  ## List of urls to query.
  # urls = ["http://localhost"]

  ## Set http_proxy.
  ## Telegraf uses the system-wide proxy settings if it is not set.
  # http_proxy = "http://localhost:8888"

  ## Set response_timeout (default 5 seconds)
  # response_timeout = "5s"

  ## HTTP Request Method
  # method = "GET"

  ## Whether to follow redirects from the server (defaults to false)
  # follow_redirects = false

  ## Optional file with Bearer token
  ## file content is added as an Authorization header
  # bearer_token = "/path/to/file"

  ## Optional HTTP Basic Auth Credentials
  # username = "username"
  # password = "pa$$word"

  ## Optional HTTP Request Body
  # body = '''
  # {'fake':'data'}
  # '''

  ## Optional HTTP Request Body Form
  ## Key value pairs to encode and set at URL form. Can be used with the POST
  ## method + application/x-www-form-urlencoded content type to replicate the
  ## POSTFORM method.
  # body_form = { "key": "value" }

  ## Optional name of the field that will contain the body of the response.
  ## By default it is set to an empty string indicating that the body's
  ## content won't be added
  # response_body_field = ''

  ## Maximum allowed HTTP response body size in bytes.
  ## 0 means to use the default of 32MiB.
  ## If the response body size exceeds this limit a "body_read_error" will
  ## be raised.
  # response_body_max_size = "32MiB"

  ## Optional substring or regex match in body of the response (case sensitive)
  # response_string_match = "\"service_status\": \"up\""
  # response_string_match = "ok"
  # response_string_match = "\".*_status\".?:.?\"up\""

  ## Expected response status code.
  ## The status code of the response is compared to this value. If they match,
  ## the field "response_status_code_match" will be 1, otherwise it will be 0.
  ## If the expected status code is 0, the check is disabled and the field
  ## won't be added.
  # response_status_code = 0

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false
  ## Use the given name as the SNI server name on each URL
  # tls_server_name = ""
  ## TLS renegotiation method, choose from "never", "once", "freely"
  # tls_renegotiation_method = "never"

  ## HTTP Request Headers (all values must be strings)
  # [inputs.http_response.headers]
  #   Host = "github.com"

  ## Optional setting to map response HTTP headers into tags
  ## If the HTTP header is not present on the request, no corresponding tag will
  ## be added. If multiple instances of the HTTP header are present, only the
  ## first value will be used.
  # http_header_tags = {"HTTP_HEADER" = "TAG_NAME"}

  ## Interface to use when dialing an address
  # interface = "eth0"

  ## Optional Cookie authentication
  # cookie_auth_url = "https://localhost/authMe"
  # cookie_auth_method = "POST"
  # cookie_auth_username = "username"
  # cookie_auth_password = "pa$$word"
  # cookie_auth_body = '{"username": "user", "password": "pa$$word", "authenticate": "me"}'
  ## cookie_auth_renewal not set or set to "0" will auth once and never renew the cookie
  # cookie_auth_renewal = "5m"
```
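
`response_string_match` is evaluated as a regular expression against the response body, and the outcome is reported in the `response_string_match` field; a rough Python analogy (Telegraf itself uses Go's RE2 regex engine, so Python-only regex syntax won't carry over):

```python
import re

def response_string_match(body: str, pattern: str) -> int:
    """Return 1 if the (case-sensitive) regex matches anywhere in the body, else 0."""
    return 1 if re.search(pattern, body) else 0

# Matches the first sample pattern from the configuration above
print(response_string_match('{"service_status": "up"}', '"service_status": "up"'))  # 1
```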

## Metrics

- http_response
  - tags:
    - server (target URL)
    - method (request method)
    - status_code (response status code)
    - result (see below)
  - fields:
    - response_time (float, seconds)
    - content_length (int, response body length)
    - response_string_match (int, 0 = mismatch, 1 = match)
    - response_status_code_match (int, 0 = mismatch, 1 = match)
    - http_response_code (int, response status code)
    - result_type (string, same value as the `result` tag)
    - result_code (int, see below)

The `result` tag and `result_code` field values are determined as follows:

|Tag value |Field value |Description|
|---|---|---|
|success | 0 |The HTTP request completed|
|response_string_mismatch | 1 |The option `response_string_match` was used, and the body of the response didn't match the regex|
|body_read_error | 2 |The body of the response couldn't be read. Or the option `response_body_field` was used and the content of the response body was not valid utf-8. Or the size of the body of the response exceeded the `response_body_max_size`|
|connection_failed | 3 |Catch all for any network error not specifically handled by the plugin|
|timeout | 4 |The plugin timed out while awaiting the HTTP connection to complete|
|dns_error | 5 |There was a DNS error while attempting to connect to the host|
|response_status_code_mismatch | 6 |The option `response_status_code_match` was used, and the status code of the response didn't match the value.|

## Example Output

```text
http_response,method=GET,result=success,server=http://github.com,status_code=200 content_length=87878i,http_response_code=200i,response_time=0.937655534,result_code=0i,result_type="success" 1565839598000000000
```

## Optional Cookie Authentication Settings

The optional Cookie Authentication Settings will retrieve a cookie from the
given authorization endpoint, and use it in subsequent API requests. This is
useful for services that do not provide OAuth or Basic Auth authentication,
e.g. the [Tesla Powerwall API](https://www.tesla.com/support/energy/powerwall/own/monitoring-from-home-network), which uses a Cookie Auth Body to retrieve
an authorization cookie. The Cookie Auth Renewal interval will renew the
authorization by retrieving a new cookie at the given interval.