chore(ci): Improve CI lint and test runners

- Reconfigures prettier linting.
- Adds .editorconfig to help with consistent editor settings
- Refactors test runs:
  - Removes test configuration from compose.yaml (not suited for this use case).
  - Splits test runner into test content setup and pytest that can be run separately or together (and with other test runners in the future).
  - Configuration is in Dockerfiles and command line (`.lintstagedrc.mjs`)
- Updates CONTRIBUTING.md
- Updates client library write examples in cloud-dedicated and clustered.
pull/5503/head
Jason Stirnaman 2024-06-21 18:41:07 -05:00
parent 37dd3eaa8d
commit 5c74f013a1
30 changed files with 2945 additions and 2261 deletions

.editorconfig Normal file

@ -0,0 +1,6 @@
charset = utf-8
insert_final_newline = true
end_of_line = lf
indent_style = space
indent_size = 2
max_line_length = 80

.gitignore vendored

@ -3,6 +3,8 @@
public
.*.swp
node_modules
.config*
**/.env*
*.log
/resources
.hugo_build.lock
@ -10,4 +12,6 @@ node_modules
/api-docs/redoc-static.html*
.vscode/*
.idea
config.toml
package-lock.json
tmp


@ -1,2 +1 @@
-npx lint-staged --relative --verbose
-yarn run test
+npx lint-staged --relative

.lintstagedrc.mjs Normal file

@ -0,0 +1,44 @@
// Lint-staged configuration. This file must export a lint-staged configuration object.
function lintStagedContent(paths, productPath) {
const name = `staged-${productPath.replace(/\//g, '-')}`;
return [
`prettier --write ${paths.join(' ')}`,
`docker build . -f Dockerfile.tests -t influxdata-docs/tests:latest`,
// Remove any existing test container.
`docker rm -f ${name} || true`,
`docker run --name ${name} --mount type=volume,target=/app/content --mount type=bind,src=./content,dst=/src/content --mount type=bind,src=./static/downloads,dst=/app/data influxdata-docs/tests --files "${paths.join(' ')}"`,
`docker build . -f Dockerfile.pytest -t influxdata-docs/pytest:latest`,
// Run test runners. If tests fail, the container will be removed,
// but the "test-" container will remain until the next run.
`docker run --env-file ${productPath}/.env.test --volumes-from ${name} --rm influxdata-docs/pytest --codeblocks ${productPath}/`
];
}
export default {
"*.{js,css}": paths => `prettier --write ${paths.join(' ')}`,
// Don't let prettier check or write Markdown files for now;
// it indents code blocks within list items, which breaks Hugo's rendering.
// "*.md": paths => `prettier --check ${paths.join(' ')}`,
"content/influxdb/cloud-dedicated/**/*.md":
paths => lintStagedContent(paths, 'content/influxdb/cloud-dedicated'),
"content/influxdb/clustered/**/*.md":
paths => lintStagedContent(paths, 'content/influxdb/clustered'),
// "content/influxdb/cloud-serverless/**/*.md": "docker compose run -T lint --config=content/influxdb/cloud-serverless/.vale.ini --minAlertLevel=error",
// "content/influxdb/clustered/**/*.md": "docker compose run -T lint --config=content/influxdb/clustered/.vale.ini --minAlertLevel=error",
// "content/influxdb/{cloud,v2,telegraf}/**/*.md": "docker compose run -T lint --config=.vale.ini --minAlertLevel=error"
}
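For illustration, the per-product pipeline above expands to a command sequence like the following sketch. The container name is derived from the product path; the `docker` steps are shown as comments because they depend on the repo's Dockerfiles and staged file list:

```shell
# Sketch of what lint-staged runs for files staged under
# content/influxdb/cloud-dedicated, per the config above.
productPath="content/influxdb/cloud-dedicated"
name="staged-${productPath//\//-}"   # replace "/" with "-"

# prettier --write <staged files>
# docker build . -f Dockerfile.tests -t influxdata-docs/tests:latest
# docker rm -f "$name" || true
# docker run --name "$name" ... influxdata-docs/tests --files "<staged files>"
# docker build . -f Dockerfile.pytest -t influxdata-docs/pytest:latest
# docker run --env-file "$productPath/.env.test" --volumes-from "$name" --rm \
#   influxdata-docs/pytest --codeblocks "$productPath/"

echo "$name"
# → staged-content-influxdb-cloud-dedicated
```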

.prettierignore Normal file

@ -0,0 +1,5 @@
# Ignore Prettier checking for files
**/.git
**/.svn
**/.hg
**/node_modules


@ -1,4 +1,14 @@
# ~/.prettierrc.yaml
printWidth: 80
semi: true
singleQuote: true
tabWidth: 2
trailingComma: "es5"
useTabs: false
overrides:
- files:
- "*.md"
- "*.markdown"
options:
proseWrap: "preserve"
# Prettier also uses settings, such as indent, specified in .editorconfig


@ -1,6 +1,7 @@
# Contributing to InfluxData Documentation
## Sign the InfluxData CLA
The InfluxData Contributor License Agreement (CLA) is part of the legal framework
for the open source ecosystem that protects both you and InfluxData.
To make substantial contributions to InfluxData documentation, first sign the InfluxData CLA.
@ -10,77 +11,176 @@ What constitutes a "substantial" change is at the discretion of InfluxData docum
_**Note:** Typo and broken link fixes are greatly appreciated and do not require signing the CLA._
_If you're new to contributing or you're looking for an easy update, see [`docs-v2` good-first-issues](https://github.com/influxdata/docs-v2/issues?q=is%3Aissue+is%3Aopen+label%3Agood-first-issue)._
## Make suggested updates
### Fork and clone InfluxData Documentation Repository
[Fork this repository](https://help.github.com/articles/fork-a-repo/) and
[clone it](https://help.github.com/articles/cloning-a-repository/) to your local machine.
## Install project dependencies
docs-v2 automatically runs formatting and linting (Markdown, JS, and CSS) and code block tests for staged files that you try to commit.
For the linting and tests to run, you need to install Docker and Node.js
dependencies.
**Note:**
We strongly recommend running linting and tests, but you can skip them
(and avoid installing dependencies)
by including the `--no-verify` flag with your commit--for example, enter the following command in your terminal:
```sh
git commit -m "<COMMIT_MESSAGE>" --no-verify
```
### Install Node.js dependencies
To install dependencies listed in package.json:
1. Install [Node.js](https://nodejs.org/en) for your system.
2. Install [Yarn](https://yarnpkg.com/getting-started/install) for your system.
3. Run `yarn` to install dependencies (including Hugo).
`package.json` contains dependencies for linting and running Git hooks.
- **[husky](https://github.com/typicode/husky)**: manages Git hooks, including the pre-commit hook for linting and testing
- **[lint-staged](https://github.com/lint-staged/lint-staged)**: passes staged files to commands
- **[prettier](https://prettier.io/docs/en/)**: formats code, including Markdown, according to style rules for consistency
### Install Docker
Install [Docker](https://docs.docker.com/get-docker/) for your system.
docs-v2 includes Docker configurations (`compose.yaml` and Dockerfiles) for running the Vale style linter and tests for code blocks (Shell, Bash, and Python) in Markdown files.
### Run the documentation locally (optional)
To run the documentation locally, follow the instructions provided in the README.
### Install and run Vale
Use the [Vale](https://vale.sh/) style linter for spellchecking and enforcing style guidelines.
The docs-v2 `package.json` includes a Vale dependency that installs the Vale binary when you run `yarn`.
After you use `yarn` to install Vale, you can run `npx vale` to execute Vale commands.
_To install Vale globally or use a different package manager, follow the [Vale CLI installation](https://vale.sh/docs/vale-cli/installation/) for your system._
## Lint and test your changes
### Automatic pre-commit checks
docs-v2 uses Husky to manage Git hook scripts.
When you try to commit your changes (for example, `git commit`), Git runs
scripts configured in `.husky/pre-commit`, including linting and tests for your staged files.
### Skip pre-commit hooks
**We strongly recommend running linting and tests**, but you can skip them
(and avoid installing dependencies)
by including the `--no-verify` flag with your commit--for example, enter the following command in your terminal:
```sh
git commit -m "<COMMIT_MESSAGE>" --no-verify
```
For more options, see the [Husky documentation](https://typicode.github.io/husky/how-to.html#skipping-git-hooks).
### Pre-commit linting and testing
When you try to commit your changes using `git commit` or your editor,
the project automatically runs pre-commit checks for spelling, punctuation,
and style on your staged files.
The pre-commit hook calls [`lint-staged`](https://github.com/lint-staged/lint-staged) using the configuration in `package.json`.
To run `lint-staged` scripts manually (without committing), enter the following
command in your terminal:
```sh
npx lint-staged --relative --verbose
```
The pre-commit linting configuration checks for _error-level_ problems.
An error-level rule violation fails the commit and you must
fix the problems before you can commit your changes.
If an error doesn't warrant a fix (for example, a term that should be allowed),
you can override the check and try the commit again or you can edit the linter
style rules to permanently allow the content. See **Configure style rules**.
### Vale style linting
docs-v2 includes Vale configurations that enforce documentation writing style rules, guidelines, branding, and vocabulary terms.
To run Vale, use the Vale extension for your editor or the included Docker configuration.
For example, the following command runs Vale in a container and lints `*.md` (Markdown) files in the path `./content/influxdb/cloud-dedicated/write-data/` using the specified configuration for `cloud-dedicated`:
```sh
docker compose run -T vale --config=content/influxdb/cloud-dedicated/.vale.ini --minAlertLevel=error content/influxdb/cloud-dedicated/write-data/**/*.md
```
The output contains error-level style alerts for the Markdown content.
**Note**: We strongly recommend running Vale, but it's not included in the
[docs-v2 pre-commit hooks](#automatic-pre-commit-checks) for now.
You can include it in your own Git hooks.
If a file contains style, spelling, or punctuation problems,
the Vale linter can raise one of the following alert levels:
- **Error**:
- Problems that can cause content to render incorrectly
- Violations of branding guidelines or trademark guidelines
- Rejected vocabulary terms
- **Warning**: General style guide rules and best practices
- **Suggestion**: Style preferences that may require refactoring or updates to an exceptions list
### Integrate Vale with your editor
To integrate Vale with VSCode:
1. Install the [Vale VSCode](https://marketplace.visualstudio.com/items?itemName=ChrisChinchilla.vale-vscode) extension.
2. In the extension settings, set the `Vale:Vale CLI:Path` value to the path of your Vale binary (`${workspaceFolder}/node_modules/.bin/vale` for Yarn-installed Vale).
To use with an editor other than VSCode, see the [Vale integration guide](https://vale.sh/docs/integrations/guide/).
#### Lint product directories
The `docs-v2` repository includes a shell script that lints product directories using the `InfluxDataDocs` style rules and product-specific vocabularies, and then generates a report.
To run the script, enter the following command in your terminal:
```sh
sh .ci/vale/vale.sh
```
#### Configure style rules
The `docs-v2` repository contains `.vale.ini` files that configure a custom `InfluxDataDocs` style with spelling and style rules.
When you run `vale <file path>` (from the CLI or an editor extension), it searches for a `.vale.ini` file in the directory of the file being linted.
`docs-v2` style rules are located at `.ci/vale/styles/`.
The easiest way to add accepted or rejected spellings is to enter your terms (or regular expression patterns) into the Vocabulary files at `.ci/vale/styles/config/vocabularies`.
To add accepted or rejected terms for specific products, configure a style for the product and include a `Branding.yml` configuration. As an example, see `content/influxdb/cloud-dedicated/.vale.ini` and `.ci/vale/styles/Cloud-Dedicated/Branding.yml`.
To learn more about configuration and rules, see [Vale configuration](https://vale.sh/docs/topics/config).
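Vale vocabulary files are plain text with one term or regular expression per line (`accept.txt` for allowed terms, `reject.txt` for rejected ones). A minimal sketch, writing to a temporary directory rather than the real vocabulary path, with example terms that are illustrative only:

```shell
# Sketch: stage a Vale-style vocabulary in a temp dir (the real files live
# under .ci/vale/styles/config/vocabularies/<Vocabulary>/).
vocab="$(mktemp -d)/InfluxDataDocs"
mkdir -p "$vocab"
# One term or regex per line; (?i) makes a pattern case-insensitive.
printf '%s\n' 'InfluxDB' '(?i)lint-staged' > "$vocab/accept.txt"
printf '%s\n' 'time series database' > "$vocab/reject.txt"
cat "$vocab/accept.txt"
```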
### Make your changes
Make your suggested changes, making sure to follow the [style and formatting guidelines](#style--formatting) outlined below.
### Submit a pull request
Push your changes up to your forked repository, then [create a new pull request](https://help.github.com/articles/creating-a-pull-request/).
## Style & Formatting
### Markdown
Most docs-v2 documentation content uses [Markdown](https://en.wikipedia.org/wiki/Markdown).
_Some parts of the documentation, such as `./api-docs`, contain Markdown within YAML and rely on additional tooling._
### Semantic line feeds
Use [semantic line feeds](http://rhodesmill.org/brandon/2012/one-sentence-per-line/).
Separating each sentence with a new line makes it easy to parse diffs with the human eye.
**Diff without semantic line feeds:**
```diff
-Data is taking off. This data is time series. You need a database that specializes in time series. You should check out InfluxDB.
+Data is taking off. This data is time series. You need a database that specializes in time series. You need InfluxDB.
```
**Diff with semantic line feeds:**
```diff
Data is taking off.
This data is time series.
You need a database that specializes in time series.
-You should check out InfluxDB.
+You need InfluxDB.
```
### Article headings
Use only h2-h6 headings in markdown content.
h1 headings act as the page title and are populated automatically from the `title` frontmatter.
h2-h6 headings act as section headings.
### Image naming conventions
Save images using the following naming format: `project/version-context-description.png`.
For example, `influxdb/2-0-visualizations-line-graph.png` or `influxdb/2-0-tasks-add-new.png`.
Specify a version other than 2.0 only if the image is specific to that version.
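The convention can be sketched as a tiny helper (hypothetical, not part of the repo, shown for illustration only):

```shell
# Hypothetical helper: compose an image filename following
# the project/version-context-description.png convention.
image_name() {
  printf '%s/%s-%s-%s.png' "$1" "$2" "$3" "$4"
}

image_name influxdb 2-0 visualizations line-graph
# → influxdb/2-0-visualizations-line-graph.png
```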
## Page frontmatter
Every documentation page includes frontmatter which specifies information about the page.
Frontmatter populates variables in page templates and the site's navigation menu.
@ -121,7 +224,7 @@ external_url: # Used in children shortcode type="list" for page links that are e
list_image: # Image included with article descriptions in children type="articles" shortcode
list_note: # Used in children shortcode type="list" to add a small note next to listed links
list_code_example: # Code example included with article descriptions in children type="articles" shortcode
list_query_example: # Code examples included with article descriptions in children type="articles" shortcode,
# References to examples in data/query_examples
canonical: # Path to canonical page, overrides auto-gen'd canonical URL
v2: # Path to v2 equivalent page
@ -138,22 +241,27 @@ updated_in: # Product and version the referenced feature was updated in (display
### Title usage
##### `title`
The `title` frontmatter populates each page's HTML `h1` heading tag.
It shouldn't be overly long, but should set the context for users coming from outside sources.
##### `seotitle`
The `seotitle` frontmatter populates each page's HTML `title` attribute.
Search engines use this in search results (not the page's h1) and therefore it should be keyword optimized.
##### `list_title`
The `list_title` frontmatter determines an article title when in a list generated
by the [`{{< children >}}` shortcode](#generate-a-list-of-children-articles).
##### `menu > name`
The `name` attribute under the `menu` frontmatter determines the text used in each page's link in the site navigation.
It should be short and assume the context of its parent if it has one.
#### Page Weights
To ensure pages are sorted both by weight and their depth in the directory
structure, pages should be weighted in "levels."
All top level pages are weighted 1-99.
@ -163,6 +271,7 @@ Then 201-299 and so on.
_**Note:** `_index.md` files should be weighted one level up from the other `.md` files in the same directory._
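The weighting scheme can be expressed as a small sketch (an assumed helper, not in the repo; it follows the stated pattern of 1-99 for level 1 and 201-299 for level 3, which implies 101-199 in between):

```shell
# Assumed helper: page weight range for nesting level N,
# per the leveling scheme above (level 1 → 1-99, level 3 → 201-299).
weight_range() {
  echo "$(( ($1 - 1) * 100 + 1 ))-$(( $1 * 100 - 1 ))"
}

weight_range 1   # → 1-99
weight_range 3   # → 201-299
```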
### Related content
Use the `related` frontmatter to include links to specific articles at the bottom of an article.
- If the page exists inside of this documentation, just include the path to the page.
@ -181,6 +290,7 @@ related:
```
### Canonical URLs
Search engines use canonical URLs to accurately rank pages with similar or identical content.
The `canonical` HTML meta tag identifies which page should be used as the source of truth.
@ -200,6 +310,7 @@ canonical: /{{< latest "influxdb" "v2" >}}/path/to/canonical/doc/
```
### v2 equivalent documentation
To display a notice on a 1.x page that links to an equivalent 2.0 page,
add the following frontmatter to the 1.x page:
@ -208,6 +319,7 @@ v2: /influxdb/v2.0/get-started/
```
### Prepend and append content to a page
Use the `prepend` and `append` frontmatter to add content to the top or bottom of a page.
Each has the following fields:
@ -235,6 +347,7 @@ cascade:
```
### Cascade
To automatically apply frontmatter to a page and all of its children, use the
[`cascade` frontmatter](https://gohugo.io/content-management/front-matter/#front-matter-cascade)
built into Hugo.
@ -253,6 +366,7 @@ those frontmatter keys. Frontmatter defined on the page overrides frontmatter
## Shortcodes
### Notes and warnings
Shortcodes are available for formatting notes and warnings in each article:
```md
@ -266,6 +380,7 @@ Insert warning markdown content here.
```
### Enterprise Content
For content that relates specifically to InfluxDB Enterprise, use the `{{% enterprise %}}` shortcode.
```md
@ -275,6 +390,7 @@ Insert enterprise-specific markdown content here.
```
#### Enterprise name
The name used to refer to InfluxData's enterprise offering is subject to change.
To facilitate easy updates in the future, use the `enterprise-name` shortcode
when referencing the enterprise product.
@ -288,6 +404,7 @@ This is content that references {{< enterprise-name "short" >}}.
Product names are stored in `data/products.yml`.
#### Enterprise link
References to InfluxDB Enterprise are often accompanied with a link to a page where
visitors can get more information about the Enterprise offering.
This link is subject to change.
@ -299,6 +416,7 @@ Find more info [here][{{< enterprise-link >}}]
```
### InfluxDB Cloud Content
For sections of content that relate specifically to InfluxDB Cloud, use the `{{% cloud %}}` shortcode.
```md
@ -308,6 +426,7 @@ Insert cloud-specific markdown content here.
```
#### InfluxDB Cloud name
The name used to refer to InfluxData's cloud offering is subject to change.
To facilitate easy updates in the future, use the `cloud-name` shortcode when
referencing the cloud product.
@ -321,6 +440,7 @@ This is content that references {{< cloud-name "short" >}}.
Product names are stored in `data/products.yml`.
#### InfluxDB Cloud link
References to InfluxDB Cloud are often accompanied with a link to a page where
visitors can get more information.
This link is subject to change.
@ -332,6 +452,7 @@ Find more info [here][{{< cloud-link >}}]
```
### Latest links
Each InfluxData project has a different "latest" version.
Use the `{{< latest >}}` shortcode to populate link paths with the latest version
for the specified project.
@ -365,6 +486,7 @@ Use the following for project names:
```
### Latest patch version
Use the `{{< latest-patch >}}` shortcode to add the latest patch version of a product.
By default, this shortcode parses the product and minor version from the URL.
To specify a specific product and minor version, use the `product` and `version` arguments.
@ -379,6 +501,7 @@ Easier to maintain being you update the version number in the `data/products.yml
```
### Latest influx CLI version
Use the `{{< latest-cli >}}` shortcode to add the latest version of the `influx`
CLI supported by the minor version of InfluxDB.
By default, this shortcode parses the minor version from the URL.
@ -392,6 +515,7 @@ Maintain CLI version numbers in the `data/products.yml` file instead of updating
```
### API endpoint
Use the `{{< api-endpoint >}}` shortcode to generate a code block that contains
a colored request method, a specified API endpoint, and an optional link to
the API reference documentation.
@ -420,6 +544,7 @@ Provide the following arguments:
```
### Tabbed Content
To create "tabbed" content (content that changes based on a user's selection), use the following three shortcodes in combination:
`{{< tabs-wrapper >}}`
@ -453,6 +578,7 @@ This shortcode must be closed with `{{% /tab-content %}}`.
**Note**: The `%` characters used in this shortcode indicate that the contents should be processed as Markdown.
#### Example tabbed content group
```md
{{< tabs-wrapper >}}
@ -473,6 +599,7 @@ Markdown content for tab 2.
```
#### Tabbed code blocks
Shortcodes are also available for tabbed code blocks primarily used to give users
the option to choose between different languages and syntax.
The shortcode structure is the same as above, but the shortcode names are different:
@ -481,7 +608,7 @@ The shortcode structure is the same as above, but the shortcode names are differ
`{{% code-tabs %}}`
`{{% code-tab-content %}}`
````md
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
@ -490,6 +617,7 @@ The shortcode structure is the same as above, but the shortcode names are differ
{{% /code-tabs %}}
{{% code-tab-content %}}
```js
data = from(bucket: "example-bucket")
|> range(start: -15m)
@ -498,18 +626,21 @@ data = from(bucket: "example-bucket")
r._field == "used_percent"
)
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sql
SELECT "used_percent"
FROM "telegraf"."autogen"."mem"
WHERE time > now() - 15m
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
````
#### Link to tabbed content
@ -522,6 +653,7 @@ For example:
```
### Required elements
Use the `{{< req >}}` shortcode to identify required elements in documentation with
orange text and/or asterisks. By default, the shortcode outputs the text, "Required," but
you can customize the text by passing a string argument with the shortcode.
@ -546,8 +678,9 @@ customize the text of the required message.
```
#### Required elements in a list
When identifying required elements in a list, use `{{< req type="key" >}}` to generate
a "\* Required" key before the list. For required elements in the list, include
{{< req "\*" >}} before the text of the list item. For example:
```md
@ -559,6 +692,7 @@ a "* Required" key before the list. For required elements in the list, include
```
#### Change color of required text
Use the `color` argument to change the color of required text.
The following colors are available:
@ -571,6 +705,7 @@ The following colors are available:
```
### Page navigation buttons
Use the `{{< page-nav >}}` shortcode to add page navigation buttons to a page.
These are useful for guiding users through a set of docs that should be read in sequential order.
The shortcode has the following parameters:
@ -587,16 +722,20 @@ document, but you can use `prevText` and `nextText` to override button text.
```md
<!-- Simple example -->
{{< page-nav prev="/path/to/prev/" next="/path/to/next" >}}

<!-- Override button text -->
{{< page-nav prev="/path/to/prev/" prevText="Previous" next="/path/to/next" nextText="Next" >}}

<!-- Add currently selected tab to button link -->
{{< page-nav prev="/path/to/prev/" next="/path/to/next" keepTab=true >}}
```
### Keybinds
Use the `{{< keybind >}}` shortcode to include OS-specific keybindings/hotkeys.
The following parameters are available:
@ -608,16 +747,20 @@ The following parameters are available:
```md
<!-- Provide keybinding for one OS and another for all others -->
{{< keybind mac="⇧⌘P" other="Ctrl+Shift+P" >}}
<!-- Provide a keybind for all OSs -->
{{< keybind all="Ctrl+Shift+P" >}}
<!-- Provide unique keybindings for each OS -->
{{< keybind mac="⇧⌘P" linux="Ctrl+Shift+P" win="Ctrl+Shift+Alt+P" >}}
```
### Diagrams
Use the `{{< diagram >}}` shortcode to dynamically build diagrams.
The shortcode uses [mermaid.js](https://github.com/mermaid-js/mermaid) to convert
simple text into SVG diagrams.
@ -626,28 +769,32 @@ For information about the syntax, see the [mermaid.js documentation](https://mer
```md
{{< diagram >}}
flowchart TB
This --> That
That --> There
{{< /diagram >}}
```
### File system diagrams
Use the `{{< filesystem-diagram >}}` shortcode to create a styled file system
diagram using a Markdown unordered list.
##### Example filesystem diagram shortcode
```md
{{< filesystem-diagram >}}
- Dir1/
- Dir2/
- ChildDir/
- Child
- Child
- Dir3/
{{< /filesystem-diagram >}}
```
### High-resolution images
In many cases, screenshots included in the docs are taken from high-resolution (retina) screens.
Because of this, the actual pixel dimension is 2x larger than it needs to be and is rendered 2x bigger than it should be.
The following shortcode automatically sets a fixed width on the image using half of its actual pixel dimension.
@ -659,12 +806,14 @@ cause by browser image resizing.
```
###### Notes
- This should only be used on screenshots taken from high-resolution screens.
- The `src` should be relative to the `static` directory.
- Image widths are limited to the width of the article content container and will scale accordingly,
even with the `width` explicitly set.
### Truncated content blocks
In some cases, it may be appropriate to shorten or truncate blocks of content.
Use cases include long examples of output data or tall images.
The following shortcode truncates blocks of content and allows users to opt into
@ -677,6 +826,7 @@ Truncated markdown content here.
```
### Expandable accordion content blocks
Use the `{{% expand "Item label" %}}` shortcode to create expandable, accordion-style content blocks.
Each expandable block needs a label that users can click to expand or collapse the content block.
Pass the label as a string to the shortcode.
@ -711,6 +861,7 @@ Markdown content associated with label 2.
```
### Captions
Use the `{{% caption %}}` shortcode to add captions to images and code blocks.
Captions are styled with a smaller font size, italic text, slight transparency,
and appear directly under the previous image or code block.
@ -722,6 +873,7 @@ Markdown content for the caption.
```
### Generate a list of children articles
Section landing pages often contain just a list of articles with links and descriptions for each.
This can be cumbersome to maintain as content is added.
To automate the listing of articles in a section, use the `{{< children >}}` shortcode.
@ -735,7 +887,9 @@ or only "page" articles (those with no children) using the `show` argument:
```md
{{< children show="sections" >}}
<!-- OR -->
{{< children show="pages" >}}
```
@ -757,6 +911,7 @@ The following list types are available:
- **functions:** a special use-case designed for listing Flux functions.
#### Include a "Read more" link
To include a "Read more" link with each child summary, set `readmore=true`.
_Only the `articles` list type supports "Read more" links._
@ -765,6 +920,7 @@ _Only the `articles` list type supports "Read more" links._
```
#### Include a horizontal rule
To include a horizontal rule after each child summary, set `hr=true`.
_Only the `articles` list type supports horizontal rules._
@ -773,30 +929,34 @@ _Only the `articles` list type supports horizontal rules._
```
#### Include a code example with a child summary
Use the `list_code_example` frontmatter to provide a code example with an article
in an articles list.
````yaml
list_code_example: |
```sh
This is a code example
```
````
#### Organize and include native code examples
To include text from a file in `/shared/text/`, use the
`{{< get-shared-text >}}` shortcode and provide the relative path and filename.
This is useful for maintaining and referencing sample code variants in their
native file formats.
1. Store code examples in their native formats at `/shared/text/`.
```md
/shared/text/example1/example.js
/shared/text/example1/example.py
```
2. Include the files--for example, in code tabs:
````md
{{% code-tabs-wrapper %}}
{{% code-tabs %}}
@ -804,23 +964,28 @@ This is useful for maintaining and referencing sample code variants in their
[Python](#py)
{{% /code-tabs %}}
{{% code-tab-content %}}
```js
{{< get-shared-text "example1/example.js" >}}
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```py
{{< get-shared-text "example1/example.py" >}}
```
{{% /code-tab-content %}}
{{% /code-tabs-wrapper %}}
````
#### Include specific files from the same directory
To include the text from one file in another file in the same
directory, use the `{{< get-leaf-text >}}` shortcode.
The directory that contains both files must be a
Hugo [_Leaf Bundle_](https://gohugo.io/content-management/page-bundles/#leaf-bundles),
a directory that doesn't have any child directories.
In the following example, `api` is a leaf bundle. `content` isn't.
@ -829,26 +994,30 @@ In the following example, `api` is a leaf bundle. `content` isn't.
content
|
|--- api
| query.pdmc
| query.sh
| _index.md
```
##### query.pdmc
```md
# Query examples
```
##### query.sh
```md
curl https://localhost:8086/query
```
To include `query.sh` and `query.pdmc` in `api/_index.md`, use the following code:
````md
{{< get-leaf-text "query.pdmc" >}}
# Curl example
```sh
{{< get-leaf-text "query.sh" >}}
```
@ -858,6 +1027,7 @@ Avoid using the following file extensions when naming included text files since
`.ad`, `.adoc`, `.asciidoc`, `.htm`, `.html`, `.markdown`, `.md`, `.mdown`, `.mmark`, `.pandoc`, `.pdc`, `.org`, or `.rst`.
#### Reference a query example in children
To include a query example with the children in your list, update `data/query_examples.yml`
with the example code, input, and output, and use the `list_query_example`
frontmatter to reference the corresponding example.
@ -867,11 +1037,12 @@ list_query_example: cumulative_sum
```
#### Children frontmatter
Each children list `type` uses [frontmatter properties](#page-frontmatter) when generating the list of articles.
The following table shows which children types use which frontmatter properties:
| Frontmatter | articles | list | functions |
| :------------------- | :------: | :--: | :-------: |
| `list_title` | ✓ | ✓ | ✓ |
| `description` | ✓ | | |
| `external_url` | ✓ | ✓ | |
@ -881,6 +1052,7 @@ The following table shows which children types use which frontmatter properties:
| `list_query_example` | ✓ | | |
### Inline icons
The `icon` shortcode allows you to inject icons in paragraph text.
It's meant to clarify references to specific elements in the InfluxDB user interface.
This shortcode supports Clockface (the UI) v2 and v3.
@ -955,6 +1127,7 @@ Below is a list of available icons (some are aliases):
- x
### InfluxDB UI left navigation icons
In many cases, documentation references an item in the left nav of the InfluxDB UI.
Provide a visual example of the navigation item using the `nav-icon` shortcode.
This shortcode supports Clockface (the UI) v2 and v3.
@ -978,6 +1151,7 @@ The following case insensitive values are supported:
- feedback
### Flexbox-formatted content blocks
CSS Flexbox formatting lets you create columns in article content that adjust and
flow based on the viewable width.
In article content, this helps if you have narrow tables that could be displayed
@ -1010,6 +1184,7 @@ The following options are available:
- quarter
### Tooltips
Use the `{{< tooltip >}}` shortcode to add tooltips to text.
The **first** argument is the text shown in the tooltip.
The **second** argument is the highlighted text that triggers the tooltip.
@ -1022,6 +1197,7 @@ The rendered output is "I like butterflies" with "butterflies" highlighted.
When you hover over "butterflies," a tooltip appears with the text: "Butterflies are awesome!"
### Flux sample data tables
The Flux `sample` package provides basic sample datasets that can be used to
illustrate how Flux functions work. To quickly display one of the raw sample
datasets, use the `{{% flux/sample %}}` shortcode.
@ -1030,6 +1206,7 @@ The `flux/sample` shortcode has the following arguments that can be specified
by name or positionally.
#### set
Sample dataset to output. Use either `set` argument name or provide the set
as the first argument. The following sets are available:
@ -1041,33 +1218,41 @@ as the first argument. The following sets are available:
- numericBool
#### includeNull
Specify whether to include _null_ values in the dataset.
Use either `includeNull` argument name or provide the boolean value as the second argument.
#### includeRange
Specify whether to include time range columns (`_start` and `_stop`) in the dataset.
This is only recommended when showing how functions that require a time range
(such as `window()`) operate on input data.
Use either `includeRange` argument name or provide the boolean value as the third argument.
##### Example Flux sample data shortcodes
```md
<!-- No arguments, defaults to "float" set without nulls -->
{{% flux/sample %}}
<!-- Output the "string" set without nulls or time range columns -->
{{% flux/sample set="string" includeNull=false %}}
<!-- Output the "int" set with nulls but without time range columns -->
{{% flux/sample "int" true %}}
<!-- Output the "int" set with nulls and time range columns -->
<!-- The following shortcode examples render the same -->
{{% flux/sample set="int" includeNull=true includeRange=true %}}
{{% flux/sample "int" true true %}}
```
### Duplicate OSS content in Cloud
Docs for InfluxDB OSS and InfluxDB Cloud share a majority of content.
To prevent duplication of content between versions, use the following shortcodes:
@ -1076,12 +1261,14 @@ To prevent duplication of content between versions, use the following shortcodes
- `{{% cloud-only %}}`
#### duplicate-oss
The `{{< duplicate-oss >}}` shortcode copies the page content of the file located
at the identical file path in the most recent InfluxDB OSS version.
The Cloud version of this markdown file should contain the frontmatter required
for all pages, but the body content should just be the `{{< duplicate-oss >}}` shortcode.
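For example, a hypothetical Cloud page that mirrors its OSS counterpart contains only the required frontmatter and the shortcode (titles and menu names here are illustrative):

```md
---
title: Create a bucket
description: Create a bucket to store time series data.
menu:
  influxdb_cloud:
    name: Create a bucket
---

{{< duplicate-oss >}}
```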
#### oss-only
Wrap content that should only appear in the OSS version of the doc with the `{{% oss-only %}}` shortcode.
Use the shortcode on both inline and content blocks:
@ -1123,6 +1310,7 @@ This is necessary to get the first sentence/paragraph to render correctly.
```
#### cloud-only
Wrap content that should only appear in the Cloud version of the doc with the `{{% cloud-only %}}` shortcode.
Use the shortcode on both inline and content blocks:
@ -1164,6 +1352,7 @@ This is necessary to get the first sentence/paragraph to render correctly.
```
#### All-Caps
Clockface v3 introduces many buttons with text formatted as all-caps.
Use the `{{< caps >}}` shortcode to format text to match those buttons.
@ -1172,20 +1361,24 @@ Click {{< caps >}}Add Data{{< /caps >}}
```
#### Code callouts
Use the `{{< code-callout >}}` shortcode to highlight and emphasize a specific
piece of code (for example, a variable, placeholder, or value) in a code block.
Provide the string to highlight in the code block.
Include a language identifier on the code block to properly style the called out code.
~~~md
````md
{{< code-callout "03a2bbf46249a000" >}}
```sh
http://localhost:8086/orgs/03a2bbf46249a000/...
```
{{< /code-callout >}}
~~~
````
#### InfluxDB University banners
Use the `{{< influxdbu >}}` shortcode to add an InfluxDB University banner that
points to the InfluxDB University site or a specific course.
Use the default banner template, a predefined course template, or fully customize
@ -1199,10 +1392,12 @@ the content of the banner.
{{< influxdbu "influxdb-101" >}}
<!-- Custom banner -->
{{< influxdbu title="Course title" summary="Short course summary." action="Take the course" link="https://university.influxdata.com/" >}}
{{< influxdbu title="Course title" summary="Short course summary." action="Take
the course" link="https://university.influxdata.com/" >}}
```
##### Course templates
Use one of the following course templates:
- influxdb-101
@ -1210,6 +1405,7 @@ Use one of the following course templates:
- flux-103
##### Custom banner content
Use the following shortcode parameters to customize the content of the InfluxDB
University banner:
@ -1219,6 +1415,7 @@ University banner:
- **link**: URL the button links to
### Reference content
The InfluxDB documentation is "task-based," meaning content primarily focuses on
what a user is **doing**, not what they are **using**.
However, there is a need to document tools and other things that don't necessarily
@ -1242,6 +1439,7 @@ menu:
```
## InfluxDB URLs
When a user selects an InfluxDB product and region, example URLs in code blocks
throughout the documentation are updated to match their product and region.
InfluxDB URLs are configured in `/data/influxdb_urls.yml`.
@ -1251,7 +1449,7 @@ Use this URL in all code examples that should be updated with a selected provide
For example:
~~~
````
```sh
# This URL will get updated
http://localhost:8086
@ -1259,37 +1457,40 @@ http://localhost:8086
# This URL will NOT get updated
http://example.com
```
~~~
````
If the user selects the **US West (Oregon)** region, all occurrences of `http://localhost:8086`
in code blocks will get updated to `https://us-west-2-1.aws.cloud2.influxdata.com`.
### Exempt URLs from getting updated
To exempt a code block from being updated, include the `{{< keep-url >}}` shortcode
just before the code block.
~~~
````
{{< keep-url >}}
```
// This URL won't get updated
http://localhost:8086
```
~~~
````
### Code examples only supported in InfluxDB Cloud
Some functionality is only supported in InfluxDB Cloud and code examples should
only use InfluxDB Cloud URLs. In these cases, use `https://cloud2.influxdata.com`
as the placeholder in the code block. It will get updated on page load and when
users select a Cloud region in the URL select modal.
~~~
````
```sh
# This URL will get updated
https://cloud2.influxdata.com
```
~~~
````
### Automatically populate InfluxDB host placeholder
The InfluxDB host placeholder that gets replaced by custom domains differs
between each InfluxDB product/version.
Use the `influxdb/host` shortcode to automatically render the correct
@ -1313,6 +1514,7 @@ Supported argument values:
```
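For example, embedding the shortcode in a code block renders the host for the reader's selected product. The endpoint and `BUCKET_NAME` placeholder below are illustrative:

````md
```sh
curl --request POST "https://{{< influxdb/host >}}/api/v2/write?bucket=BUCKET_NAME"
```
````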
## New Versions of InfluxDB
Version bumps occur regularly in the documentation.
Each minor version has its own directory with unique content.
Patch versions within a minor version are updated in place.
@ -1321,17 +1523,20 @@ To add a new minor version, go through the steps below.
_This example assumes v2.0 is the most recent version and v2.1 is the new version._
1. Ensure your `master` branch is up to date:
```sh
git checkout master
git pull
```
2. Create a new branch for the new minor version:
```sh
git checkout -b influxdb-2.1
```
3. Duplicate the most recent version's content directory:
```sh
# From the root of the project
cp content/influxdb/v2.0 content/influxdb/v2.1
@ -1355,6 +1560,7 @@ _This example assumes v2.0 is the most recent version and v2.1 is the new versio
```
6. Update the `latest_version` in `data/products.yml`:
```yaml
latest_version: v2.1
```
@ -1370,6 +1576,7 @@ Once the necessary changes are in place and the new version is released,
merge the new branch into `master`.
## InfluxDB API documentation
InfluxData uses [Redoc](https://github.com/Redocly/redoc) to generate the full
InfluxDB API documentation when documentation is deployed.
Redoc generates HTML documentation using the InfluxDB `swagger.yml`.
Dockerfile.pytest Normal file
@ -0,0 +1,54 @@
FROM golang:latest
RUN apt-get update && apt-get upgrade -y && apt-get install -y \
curl \
git \
gpg \
jq \
maven \
nodejs \
npm \
python3 \
python3-pip \
python3-venv \
wget
RUN ln -s /usr/bin/python3 /usr/bin/python
# Create a virtual environment for Python to avoid conflicts with the system Python and having to use the --break-system-packages flag when installing packages with pip.
RUN python -m venv /opt/venv
# Enable venv
ENV PATH="/opt/venv/bin:$PATH"
# Prevents Python from writing pyc files.
ENV PYTHONDONTWRITEBYTECODE=1
# Prevents Python from buffering stdout and stderr to avoid situations where
# the application crashes without emitting any logs due to buffering.
ENV PYTHONUNBUFFERED=1
WORKDIR /app
# Some Python test dependencies (pytest-dotenv and pytest-codeblocks) aren't
# available as packages in apt-cache, so use pip to download dependencies in a
# separate step and use Docker's caching.
COPY ./test/src/pytest.ini pytest.ini
COPY ./test/src/requirements.txt requirements.txt
RUN pip install -Ur requirements.txt
# The virtual environment is already activated by the ENV PATH update above;
# sourcing the activate script in a RUN step wouldn't persist across layers.
### Install InfluxDB clients for testing
# Install InfluxDB keys to verify client installs.
# Follow the install instructions (https://docs.influxdata.com/telegraf/v1/install/?t=curl), except for sudo (which isn't available in Docker).
# influxdata-archive_compat.key GPG fingerprint:
# 9D53 9D90 D332 8DC7 D6C8 D3B9 D8FF 8E1F 7DF8 B07E
ADD https://repos.influxdata.com/influxdata-archive_compat.key ./influxdata-archive_compat.key
RUN echo '393e8779c89ac8d958f81f942f9ad7fb82a25e133faddaf92e15b16e6ac9ce4c influxdata-archive_compat.key' | sha256sum -c && cat influxdata-archive_compat.key | gpg --dearmor | tee /etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg > /dev/null
RUN echo 'deb [signed-by=/etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg] https://repos.influxdata.com/debian stable main' | tee /etc/apt/sources.list.d/influxdata.list
# Install InfluxDB clients to use in tests.
RUN apt-get update && apt-get -y install telegraf influxdb2-cli influxctl
COPY --chmod=755 ./test/config.toml /root/.config/influxctl/config.toml
### End InfluxDB client installs
ENTRYPOINT [ "pytest" ]
CMD [ "" ]
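The `sha256sum -c` step above pins the downloaded key to a known digest before trusting it. The same pattern works for any downloaded file; here is a standalone sketch using placeholder file contents:

```sh
# Create a placeholder file standing in for a downloaded key.
printf 'example key data\n' > influxdata-archive_compat.key

# Compute the digest; normally you'd copy this from the vendor's docs.
expected=$(sha256sum influxdata-archive_compat.key | cut -d ' ' -f 1)

# Verify: sha256sum -c reads "<digest>  <filename>" pairs and checks each file.
echo "${expected}  influxdata-archive_compat.key" | sha256sum -c -
# influxdata-archive_compat.key: OK
```

If the file's contents change after the digest is recorded, `sha256sum -c` fails and the build stops, which is exactly the behavior the Dockerfile relies on.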
Dockerfile.tests Normal file
@ -0,0 +1,18 @@
# syntax=docker/dockerfile:1.2
# The Dockerfile 1.2 syntax enables BuildKit features like cache mounts and
# inline mounts--temporary mounts that are only available during the build
# step, not at runtime. The syntax directive must be the first line of the file.
# A slim Python base image keeps this content-preparation environment small.
FROM python:3.9-slim
# Install the necessary packages for the test environment.
RUN apt-get update && apt-get upgrade -y && apt-get install -y \
rsync
COPY --chmod=755 ./test/src/parse_yaml.sh /usr/local/bin/parse_yaml
COPY --chmod=755 ./test/src/prepare-content.sh /usr/local/bin/prepare-content
COPY ./data/products.yml /app/appdata/products.yml
WORKDIR /src
ENTRYPOINT [ "prepare-content" ]
# The default command is an empty string so that all command-line arguments
# pass through to the entrypoint.
CMD [ "" ]
@ -1,27 +1,39 @@
# This is a Docker Compose file for the InfluxData documentation site.
## Run documentation tests for code samples.
name: influxdata-docs
volumes:
test-content:
services:
test:
image: docs-v2-tests
container_name: docs-v2-tests
markdownlint:
image: davidanson/markdownlint-cli2:v0.13.0
container_name: markdownlint
profiles:
- test
- ci
- lint
volumes:
- type: bind
source: ./test
target: /usr/src/app/test
- type: bind
source: ./data
target: /usr/src/app/test/data
- type: bind
source: ./static/downloads
target: /usr/src/app/test/tmp/data
source: .
target: /workdir
working_dir: /workdir
build:
context: .
dockerfile: test.Dockerfile
args:
- SOURCE_DIR=test
- DOCKER_IMAGE=docs-v2-tests
vale:
image: jdkato/vale:latest
container_name: vale
profiles:
- ci
- lint
volumes:
- type: bind
source: .
target: /workdir
working_dir: /workdir
entrypoint: ["/bin/vale"]
build:
context: .
dockerfile_inline: |
  FROM jdkato/vale:latest
  COPY .ci /src/.ci
  COPY **/.vale.ini /src/
## Run InfluxData documentation with the hugo development server on port 1313.
## For more information about the hugomods/hugo image, see
## https://docker.hugomods.com/docs/development/docker-compose/
@ -10,16 +10,16 @@ weight: 3
influxdb/cloud-dedicated/tags: [get-started]
---
{{% product-name %}} is the platform purpose-built to collect, store, and
query time series data.
It is powered by the InfluxDB 3.0 storage engine which provides a number of
benefits including nearly unlimited series cardinality, improved query performance,
and interoperability with widely used data processing tools and platforms.
InfluxDB is the platform purpose-built to collect, store, and query
time series data.
{{% product-name %}} is powered by the InfluxDB 3.0 storage engine, which
provides nearly unlimited series cardinality,
improved query performance, and interoperability with widely used data
processing tools and platforms.
**Time series data** is a sequence of data points indexed in time order.
Data points typically consist of successive measurements made from the same
source and are used to track changes over time.
Examples of time series data include:
**Time series data** is a sequence of data points indexed in time order. Data
points typically consist of successive measurements made from the same source
and are used to track changes over time. Examples of time series data include:
- Industrial sensor data
- Server performance metrics
@ -28,14 +28,14 @@ Examples of time series data include:
- Rainfall measurements
- Stock prices
This multi-part tutorial walks you through writing time series data to {{% product-name %}},
querying, and then visualizing that data.
This multi-part tutorial walks you through writing time series data to
{{% product-name %}}, querying, and then visualizing that data.
## Key concepts before you get started
Before you get started using InfluxDB, it's important to understand how time series
data is organized and stored in InfluxDB and some key definitions that are used
throughout this documentation.
Before you get started using InfluxDB, it's important to understand how time
series data is organized and stored in InfluxDB and some key definitions that
are used throughout this documentation.
- [Data organization](#data-organization)
- [Schema on write](#schema-on-write)
@ -44,43 +44,53 @@ throughout this documentation.
### Data organization
The {{% product-name %}} data model organizes time series data into databases
and measurements.
and tables.
A database can contain multiple measurements.
Measurements contain multiple tags and fields.
A database can contain multiple tables.
Tables contain multiple tags and fields.
- **Database**: Named location where time series data is stored.
A database can contain multiple _measurements_.
- **Measurement**: Logical grouping for time series data.
All _points_ in a given measurement should have the same _tags_.
A measurement contains multiple _tags_ and _fields_.
- **Tags**: Key-value pairs that provide metadata for each point--for example,
something to identify the source or context of the data like host,
location, station, etc.
Tag values may be null.
- **Fields**: Key-value pairs with values that change over time--for example,
temperature, pressure, stock price, etc.
Field values may be null, but at least one field value is not null on any given row.
- **Timestamp**: Timestamp associated with the data.
When stored on disk and queried, all data is ordered by time.
A timestamp is never null.
- **Database**: A named location where time series data is stored in _tables_.
_Database_ is synonymous with _bucket_ in InfluxDB Cloud Serverless and InfluxDB TSM.
- **Table**: A logical grouping for time series data. All _points_ in a given
table should have the same _tags_. A table contains _tags_ and
_fields_. _Table_ is synonymous with _measurement_ in InfluxDB Cloud
Serverless and InfluxDB TSM.
- **Tags**: Key-value pairs that provide metadata for each point--for
example, something to identify the source or context of the data like
host, location, station, etc. Tag values may be null.
- **Fields**: Key-value pairs with values that change over time--for
example, temperature, pressure, stock price, etc. Field values may be
null, but at least one field value is not null on any given row.
- **Timestamp**: Timestamp associated with the data. When stored on disk and
queried, all data is ordered by time. A timestamp is never null.
{{% note %}}
#### What about buckets and measurements?
If coming from InfluxDB Cloud Serverless or InfluxDB powered by the TSM storage
engine, you're likely familiar with the concepts _bucket_ and _measurement_.
_Bucket_ in TSM or InfluxDB Cloud Serverless is synonymous with _database_ in
{{% product-name %}}. _Measurement_ in TSM or InfluxDB Cloud Serverless is
synonymous with _table_ in {{% product-name %}}.
{{% /note %}}
### Schema on write
When using InfluxDB, you define your schema as you write your data.
You don't need to create measurements (equivalent to a relational table) or
explicitly define the schema of the measurement.
Measurement schemas are defined by the schema of data as it is written to the measurement.
As you write data to InfluxDB, the data defines the table schema. You don't need
to create tables or explicitly define the table schema.
### Important definitions
The following definitions are important to understand when using InfluxDB:
- **Point**: Single data record identified by its _measurement, tag keys, tag values, field key, and timestamp_.
- **Series**: A group of points with the same _measurement, tag keys and values, and field key_.
- **Primary key**: Columns used to uniquely identify each row in a table.
Rows are uniquely identified by their _timestamp and tag set_.
A row's primary key _tag set_ does not include tags with null values.
- **Point**: Single data record identified by its _measurement, tag keys, tag
values, field key, and timestamp_.
- **Series**: A group of points with the same _measurement, tag keys and values,
and field key_.
- **Primary key**: Columns used to uniquely identify each row in a table. Rows
are uniquely identified by their _timestamp and tag set_. A row's primary key
_tag set_ does not include tags with null values.
##### Example InfluxDB query results
@ -88,8 +98,8 @@ The following definitions are important to understand when using InfluxDB:
## Tools to use
The following table compares tools that you can use to interact with {{% product-name %}}.
This tutorial covers many of the recommended tools.
The following table compares tools that you can use to interact with
{{% product-name %}}. This tutorial covers many of the recommended tools.
| Tool | Administration | Write | Query |
| :-------------------------------------------------------------------------------------------------- | :----------------------: | :----------------------: | :----------------------: |
@ -114,39 +124,52 @@ This tutorial covers many of the recommended tools.
{{< /caption >}}
{{% warn %}}
Avoid using the `influx` CLI with {{% product-name %}}.
While it may coincidentally work, it isn't supported.
Avoid using the `influx` CLI with {{% product-name %}}. While it
may coincidentally work, it isn't supported.
{{% /warn %}}
### `influxctl` CLI
The [`influxctl` command line interface (CLI)](/influxdb/cloud-dedicated/reference/cli/influxctl/)
The
[`influxctl` command line interface (CLI)](/influxdb/cloud-dedicated/reference/cli/influxctl/)
writes, queries, and performs administrative tasks, such as managing databases
and authorization tokens in a cluster.
### `influx3` data CLI
The [`influx3` data CLI](/influxdb/cloud-dedicated/get-started/query/?t=influx3+CLI#execute-an-sql-query) is a community-maintained tool that lets you write and query data in {{% product-name %}} from a command line.
It uses the HTTP API to write data and uses Flight gRPC to query data.
The
[`influx3` data CLI](/influxdb/cloud-dedicated/get-started/query/?t=influx3+CLI#execute-an-sql-query)
is a community-maintained tool that lets you write and query data in
{{% product-name %}} from a command line. It uses the HTTP API to write data and
uses Flight gRPC to query data.
### InfluxDB HTTP API
The [InfluxDB HTTP API](/influxdb/v2/reference/api/) provides a simple way to let you manage {{% product-name %}} and write and query data using HTTP(S) clients.
Examples in this tutorial use cURL, but any HTTP(S) client will work.
The [InfluxDB HTTP API](/influxdb/v2/reference/api/) provides a simple way to
let you manage {{% product-name %}} and write and query data using HTTP(S)
clients. Examples in this tutorial use cURL, but any HTTP(S) client will work.
The `/write` and `/query` v1-compatible endpoints work with the username/password authentication schemes and existing InfluxDB 1.x tools and code.
The `/api/v2/write` v2-compatible endpoint works with existing InfluxDB 2.x tools and code.
The `/write` and `/query` v1-compatible endpoints work with the
username/password authentication schemes and existing InfluxDB 1.x tools and
code. The `/api/v2/write` v2-compatible endpoint works with existing InfluxDB
2.x tools and code.
### InfluxDB client libraries
InfluxDB client libraries are community-maintained, language-specific clients that interact with InfluxDB APIs.
InfluxDB client libraries are community-maintained, language-specific clients
that interact with InfluxDB APIs.
[InfluxDB v3 client libraries](/influxdb/cloud-dedicated/reference/client-libraries/v3/) are the recommended client libraries for writing and querying data in {{% product-name %}}.
They use the HTTP API to write data and use Flight gRPC to query data.
[InfluxDB v3 client libraries](/influxdb/cloud-dedicated/reference/client-libraries/v3/)
are the recommended client libraries for writing and querying data in
{{% product-name %}}. They use the HTTP API to write data and use InfluxDB's
Flight gRPC API to query data.
[InfluxDB v2 client libraries](/influxdb/cloud-dedicated/reference/client-libraries/v2/) can use `/api/v2` HTTP endpoints to manage resources such as buckets and API tokens, and write data in {{% product-name %}}.
[InfluxDB v2 client libraries](/influxdb/cloud-dedicated/reference/client-libraries/v2/)
can use `/api/v2` HTTP endpoints to manage resources such as buckets and API
tokens, and write data in {{% product-name %}}.
[InfluxDB v1 client libraries](/influxdb/cloud-dedicated/reference/client-libraries/v1/) can write data to {{% product-name %}}.
[InfluxDB v1 client libraries](/influxdb/cloud-dedicated/reference/client-libraries/v1/)
can write data to {{% product-name %}}.
## Authorization
@ -158,13 +181,14 @@ There are two types of tokens:
- **Database token**: A token that grants read and write access to InfluxDB
databases.
- **Management token**: A short-lived (1 hour) [Auth0 token](#) used to
administer your InfluxDB cluster.
These are generated by the `influxctl` CLI and do not require any direct management.
Management tokens authorize a user to perform tasks related to:
administer your InfluxDB cluster. These are generated by the `influxctl` CLI
and do not require any direct management. Management tokens authorize a user
to perform tasks related to:
- Account management
- Database management
- Database token management
- Pricing
<!-- - Infrastructure management -->
{{< page-nav next="/influxdb/cloud-dedicated/get-started/setup/" >}}
File diff suppressed because it is too large
@ -1,7 +1,8 @@
---
title: Use InfluxDB client libraries to write line protocol data
description: >
Use InfluxDB API clients to write line protocol data to InfluxDB Cloud Dedicated.
Use InfluxDB API clients to write points as line protocol data to InfluxDB
Cloud Dedicated.
menu:
influxdb_cloud_dedicated:
name: Use client libraries
@ -13,23 +14,35 @@ related:
- /influxdb/cloud-dedicated/get-started/write/
---
Use InfluxDB client libraries to build line protocol, and then write it to an
InfluxDB database.
Use InfluxDB client libraries to build time series points, and then write them
as line protocol to an {{% product-name %}} database.
- [Construct line protocol](#construct-line-protocol)
- [Example home schema](#example-home-schema)
- [Set up your project](#set-up-your-project)
- [Construct points and write line protocol](#construct-points-and-write-line-protocol)
- [Run the example](#run-the-example)
- [Home sensor data line protocol](#home-sensor-data-line-protocol)
## Construct line protocol
With a [basic understanding of line protocol](/influxdb/cloud-dedicated/write-data/line-protocol/),
you can now construct line protocol and write data to InfluxDB.
Consider a use case where you collect data from sensors in your home.
Each sensor collects temperature, humidity, and carbon monoxide readings.
With a
[basic understanding of line protocol](/influxdb/cloud-dedicated/write-data/line-protocol/),
you can construct line protocol data and write it to InfluxDB.
All InfluxDB client libraries write data in line protocol format to InfluxDB.
Client library `write` methods let you provide data as raw line protocol or as
`Point` objects that the client library converts to line protocol. If your
program creates the data you write to InfluxDB, use the client library `Point`
interface to take advantage of type safety in your program.
### Example home schema
Consider a use case where you collect data from sensors in your home. Each
sensor collects temperature, humidity, and carbon monoxide readings.
To collect this data, use the following schema:
<!-- vale InfluxDataDocs.v3Schema = NO -->
- **measurement**: `home`
- **tags**
- `room`: Living Room or Kitchen
@ -39,12 +52,16 @@ To collect this data, use the following schema:
- `co`: carbon monoxide in parts per million (integer)
- **timestamp**: Unix timestamp in _second_ precision
The following example shows how to construct and write points that follow this schema.
<!-- vale InfluxDataDocs.v3Schema = YES -->
The following example shows how to construct and write points that follow the
`home` schema.
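To make the serialization concrete, here is a minimal standard-library sketch of how one point from the `home` schema maps to line protocol. This is an illustration only, not a client library API, and it omits line protocol's escaping rules for spaces and commas as well as string-field quoting:

```python
def to_line_protocol(measurement, tags, fields, timestamp):
    """Serialize one point: measurement,tag_set field_set timestamp."""
    tag_set = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))

    def fmt(value):
        if isinstance(value, bool):  # check bool before int: bool subclasses int
            return "true" if value else "false"
        if isinstance(value, int):
            return f"{value}i"  # integer fields carry an "i" suffix
        return str(value)

    field_set = ",".join(f"{k}={fmt(v)}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_set} {field_set} {timestamp}"

point = to_line_protocol(
    "home",
    {"room": "Kitchen"},
    {"temp": 72.0, "hum": 20.2, "co": 9},
    1641024000,  # Unix timestamp in second precision
)
print(point)
# home,room=Kitchen co=9i,hum=20.2,temp=72.0 1641024000
```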
## Set up your project
The examples in this guide assume you followed [Set up InfluxDB](/influxdb/cloud-dedicated/get-started/setup/)
and [Write data set up](/influxdb/cloud-dedicated/get-started/write/#set-up-your-project-and-credentials)
The examples in this guide assume you followed
[Set up InfluxDB](/influxdb/cloud-dedicated/get-started/setup/) and
[Write data set up](/influxdb/cloud-dedicated/get-started/write/#set-up-your-project-and-credentials)
instructions in [Get started](/influxdb/cloud-dedicated/get-started/).
After setting up InfluxDB and your project, you should have the following:
@ -57,213 +74,335 @@ After setting up InfluxDB and your project, you should have the following:
- A directory for your project.
- Credentials stored as environment variables or in a project configuration file--for example, a `.env` ("dotenv") file.
- Credentials stored as environment variables or in a project configuration
file--for example, a `.env` ("dotenv") file.
- Client libraries installed for writing data to InfluxDB.
The following example shows how to construct `Point` objects that follow the [example `home` schema](#example-home-schema), and then write the points as line protocol to an
{{% product-name %}} database.
The following example shows how to construct `Point` objects that follow the
[example `home` schema](#example-home-schema), and then write the data as line
protocol to an {{% product-name %}} database.
The examples use InfluxDB v3 client libraries. For examples using InfluxDB v2
client libraries to write data to InfluxDB v3, see
[InfluxDB v2 clients](/influxdb/cloud-dedicated/reference/client-libraries/v2/).
{{< tabs-wrapper >}}
{{% tabs %}}
<!-- prettier-ignore -->
[Go](#)
[Node.js](#)
[Python](#)
{{% /tabs %}}
{{% tab-content %}}
The following steps set up a Go project using the
[InfluxDB v3 Go client](https://github.com/InfluxCommunity/influxdb3-go/):
<!-- BEGIN GO PROJECT SETUP -->
1. Install [Go 1.13 or later](https://golang.org/doc/install).
2. Inside of your project directory, install the client package to your project dependencies.
1. Create a directory for your Go module and change to the directory--for
example:
```sh
go get github.com/influxdata/influxdb-client-go/v2
mkdir iot-starter-go && cd $_
```
1. Initialize a Go module--for example:
```sh
go mod init iot-starter
```
1. Install [`influxdb3-go`](https://github.com/InfluxCommunity/influxdb3-go/),
which provides the InfluxDB `influxdb3` Go client library module.
```sh
go get github.com/InfluxCommunity/influxdb3-go
```
<!-- END GO SETUP PROJECT -->
{{% /tab-content %}}
{{% tab-content %}}
{{% /tab-content %}} {{% tab-content %}}
<!-- BEGIN NODE.JS PROJECT SETUP -->
Inside of your project directory, install the `@influxdata/influxdb-client` InfluxDB v2 JavaScript client library.
The following steps set up a JavaScript project using the
[InfluxDB v3 JavaScript client](https://github.com/InfluxCommunity/influxdb3-js/).
```sh
npm install --save @influxdata/influxdb-client
```
1. Install [Node.js](https://nodejs.org/en/download/).
<!-- END NODE.JS SETUP PROJECT -->
{{% /tab-content %}}
{{% tab-content %}}
<!-- BEGIN PYTHON SETUP PROJECT -->
1. **Optional, but recommended**: Use [`venv`](https://docs.python.org/3/library/venv.html) or [`conda`](https://docs.continuum.io/anaconda/install/) to activate a virtual environment for installing and executing code--for example:
Inside of your project directory, enter the following command using `venv` to create and activate a virtual environment for the project:
1. Create a directory for your JavaScript project and change to the
directory--for example:
```sh
python3 -m venv envs/env1 && source ./envs/env1/bin/activate
mkdir -p iot-starter-js && cd $_
```
2. Install the [`influxdb3-python`](https://github.com/InfluxCommunity/influxdb3-python), which provides the InfluxDB `influxdb_client_3` Python client library module and also installs the [`pyarrow` package](https://arrow.apache.org/docs/python/index.html) for working with Arrow data.
1. Initialize a project--for example, using `npm`:
<!-- pytest.mark.skip -->
```sh
npm init
```
1. Install the `@influxdata/influxdb3-client` InfluxDB v3 JavaScript client
library.
```sh
npm install @influxdata/influxdb3-client
```
<!-- END NODE.JS SETUP PROJECT -->
{{% /tab-content %}} {{% tab-content %}}
<!-- BEGIN PYTHON SETUP PROJECT -->
The following steps set up a Python project using the
[InfluxDB v3 Python client](https://github.com/InfluxCommunity/influxdb3-python/):
1. Install [Python](https://www.python.org/downloads/).
1. Inside of your project directory, create a directory for your Python module
and change to the module directory--for example:
```sh
mkdir -p iot-starter-py && cd $_
```
1. **Optional, but recommended**: Use
[`venv`](https://docs.python.org/3/library/venv.html) or
[`conda`](https://docs.continuum.io/anaconda/install/) to activate a virtual
environment for installing and executing code--for example, enter the
following command using `venv` to create and activate a virtual environment
for the project:
```bash
python3 -m venv envs/iot-starter && source ./envs/iot-starter/bin/activate
```
1. Install
[`influxdb3-python`](https://github.com/InfluxCommunity/influxdb3-python),
which provides the InfluxDB `influxdb_client_3` Python client library module
and also installs the
[`pyarrow` package](https://arrow.apache.org/docs/python/index.html) for
working with Arrow data.
```sh
pip install influxdb3-python
```
<!-- END PYTHON SETUP PROJECT -->
{{% /tab-content %}}
{{< /tabs-wrapper >}}
## Construct points and write line protocol
Client libraries provide one or more `Point` constructor methods. Some libraries
support language-native data structures, such as Go's `struct`, for creating
points.
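The pattern these constructors follow can be sketched language-agnostically. The class below is a hypothetical illustration of a fluent `Point` builder, not any client library's actual API:

```python
class Point:
    """Minimal sketch of a fluent Point builder (hypothetical, not a real client API)."""

    def __init__(self, measurement):
        self.measurement = measurement
        self.tags = {}
        self.fields = {}
        self.timestamp = None

    def tag(self, key, value):
        self.tags[key] = value
        return self  # returning self enables method chaining

    def field(self, key, value):
        self.fields[key] = value
        return self

    def time(self, ts):
        self.timestamp = ts
        return self

point = (
    Point("home")
    .tag("room", "Kitchen")
    .field("temp", 72.0)
    .field("co", 9)
    .time(1641024000)
)
print(point.tags, point.fields)
# {'room': 'Kitchen'} {'temp': 72.0, 'co': 9}
```

Real client libraries layer serialization and batching on top of a builder like this before sending the data to the write endpoint.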
{{< tabs-wrapper >}}
{{% tabs %}}
<!-- prettier-ignore -->
[Go](#)
[Node.js](#)
[Python](#)
{{% /tabs %}}
{{% tab-content %}}
<!-- BEGIN GO SETUP SAMPLE -->
1. Create a file for your module--for example: `main.go`.
1. In `main.go`, enter the following sample code:
   ```go
   package main

   import (
     "context"
     "fmt"
     "os"
     "time"

     "github.com/InfluxCommunity/influxdb3-go/influxdb3"
     "github.com/influxdata/line-protocol/v2/lineprotocol"
   )

   func Write() error {
     url := os.Getenv("INFLUX_HOST")
     token := os.Getenv("INFLUX_TOKEN")
     database := os.Getenv("INFLUX_DATABASE")

     // To instantiate a client, call New() with InfluxDB credentials.
     client, err := influxdb3.New(influxdb3.ClientConfig{
       Host:     url,
       Token:    token,
       Database: database,
     })
     if err != nil {
       return err
     }

     /** Use a deferred function to ensure the client is closed when the
      * function returns.
      **/
     defer func(client *influxdb3.Client) {
       err := client.Close()
       if err != nil {
         panic(err)
       }
     }(client)

     /** Use the NewPoint method to construct a point.
      * NewPoint(measurement, tags map, fields map, time)
      **/
     point := influxdb3.NewPoint("home",
       map[string]string{
         "room": "Living Room",
       },
       map[string]any{
         "temp": 24.5,
         "hum":  40.5,
         "co":   15},
       time.Now(),
     )

     /** Use the NewPointWithMeasurement method to construct a point with
      * method chaining.
      **/
     point2 := influxdb3.NewPointWithMeasurement("home").
       SetTag("room", "Living Room").
       SetField("temp", 23.5).
       SetField("hum", 38.0).
       SetField("co", 16).
       SetTimestamp(time.Now())

     fmt.Println("Writing points")
     points := []*influxdb3.Point{point, point2}

     /** Write points to InfluxDB.
      * You can specify WriteOptions, such as Gzip threshold,
      * default tags, and timestamp precision. Default precision is lineprotocol.Nanosecond.
      **/
     err = client.WritePoints(context.Background(), points,
       influxdb3.WithPrecision(lineprotocol.Second))
     return err
   }

   func main() {
     if err := Write(); err != nil {
       fmt.Fprintln(os.Stderr, err)
     }
   }
   ```
<!-- END GO SETUP SAMPLE -->
1. To run the module and write the data to your {{% product-name %}} database,
enter the following command in your terminal:
<!-- pytest.mark.skip -->
```sh
go run main.go
```
<!-- END GO SAMPLE -->
{{% /tab-content %}} {{% tab-content %}}
<!-- BEGIN NODE.JS SETUP SAMPLE -->
1. Create a file for your module--for example: `write-points.js`.
1. In `write-points.js`, enter the following sample code:
   ```js
   // write-points.js
   import { InfluxDBClient, Point } from '@influxdata/influxdb3-client';

   /**
    * Set InfluxDB credentials.
    */
   const host = process.env.INFLUX_HOST ?? '';
   const database = process.env.INFLUX_DATABASE;
   const token = process.env.INFLUX_TOKEN;

   /**
    * Write line protocol to InfluxDB using the JavaScript client library.
    */
   export async function writePoints() {
     /**
      * Instantiate an InfluxDBClient.
      * Provide the host URL and the database token.
      */
     const client = new InfluxDBClient({ host, token });

     /** Use the fluent interface with chained methods to construct Points. */
     const point = Point.measurement('home')
       .setTag('room', 'Living Room')
       .setFloatField('temp', 22.2)
       .setFloatField('hum', 35.5)
       .setIntegerField('co', 7)
       .setTimestamp(new Date().getTime() / 1000);

     const point2 = Point.measurement('home')
       .setTag('room', 'Kitchen')
       .setFloatField('temp', 21.0)
       .setFloatField('hum', 35.9)
       .setIntegerField('co', 0)
       .setTimestamp(new Date().getTime() / 1000);

     /** Write points to InfluxDB.
      * The write method accepts an array of points, the target database, and
      * an optional configuration object.
      * You can specify WriteOptions, such as Gzip threshold, default tags,
      * and timestamp precision. Default precision is lineprotocol.Nanosecond.
      **/
     try {
       await client.write([point, point2], database, '', { precision: 's' });
       console.log('Data has been written successfully!');
     } catch (error) {
       console.error(`Error writing data to InfluxDB: ${error.body}`);
     }

     client.close();
   }

   writePoints();
   ```
<!-- END NODE.JS SETUP SAMPLE -->
1. To run the module and write the data to your {{< product-name >}} database,
   enter the following command in your terminal:

   <!-- pytest.mark.skip -->

   ```sh
   node write-points.js
   ```

<!-- END NODE.JS SAMPLE -->
{{% /tab-content %}} {{% tab-content %}}

<!-- BEGIN PYTHON SETUP SAMPLE -->

1. Create a file for your module--for example: `write-points.py`.
1. In `write-points.py`, enter the following sample code to write data in
   batching mode:
   ```python
   import os
   from influxdb_client_3 import (
     InfluxDBClient3, InfluxDBError, Point, WritePrecision,
     WriteOptions, write_client_options)

   host = os.getenv('INFLUX_HOST')
   token = os.getenv('INFLUX_TOKEN')
   database = os.getenv('INFLUX_DATABASE')

   # Create an array of points with tags and fields.
   points = [Point("home")
             .tag("room", "Kitchen")
             .field("temp", 72.0)
             .field("hum", 20.2)
             .field("co", 9)]

   # With batching mode, define callbacks to execute after a successful or
   # failed write request.
   # Callback methods receive the configuration and data sent in the request.
   def success(self, data: str):
       print(f"Successfully wrote batch: data: {data}")

   def error(self, data: str, exception: InfluxDBError):
       print(f"Failed writing batch: config: {self}, data: {data} due: {exception}")

   def retry(self, data: str, exception: InfluxDBError):
       print(f"Failed retry writing batch: config: {self}, data: {data} retry: {exception}")

   # Configure batching options for the write client.
   write_options = WriteOptions()

   wco = write_client_options(success_callback=success,
                              error_callback=error,
                              retry_callback=retry,
                              write_options=write_options)

   # Instantiate a synchronous instance of the client with your
   # InfluxDB credentials and write options, such as Gzip threshold, default tags,
   # and timestamp precision. Default precision is nanosecond ('ns').
   with InfluxDBClient3(host=host,
                        token=token,
                        database=database,
                        write_client_options=wco) as client:

       client.write(points, write_precision='s')
   ```

1. To run the module and write the data to your {{< product-name >}} database,
   enter the following command in your terminal:

   <!-- pytest.mark.skip -->

   ```sh
   python write-points.py
   ```

<!-- END PYTHON SETUP PROJECT -->
{{% /tab-content %}} {{< /tabs-wrapper >}}
The sample code does the following:
<!-- vale InfluxDataDocs.v3Schema = NO -->
1. Instantiates a client configured with the InfluxDB URL and API token.
1. Constructs `home`
[measurement](/influxdb/cloud-dedicated/reference/glossary/#measurement)
`Point` objects.
1. Sends data as line protocol format to InfluxDB and waits for the response.
1. If the write succeeds, logs the success message to stdout; otherwise, logs
the failure message and error details.
1. Closes the client to release resources.
## Run the example
To run the sample and write the data to your {{% product-name %}} database, enter the following command in your terminal:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Go](#)
[Node.js](#)
[Python](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
<!-- BEGIN GO RUN EXAMPLE -->
```sh
go run main.go
```
<!-- END GO RUN EXAMPLE -->
{{% /code-tab-content %}}
{{% code-tab-content %}}
<!-- BEGIN NODE.JS RUN EXAMPLE -->
```sh
node write-points.js
```
<!-- END NODE.JS RUN EXAMPLE -->
{{% /code-tab-content %}}
{{% code-tab-content %}}
<!-- BEGIN PYTHON RUN EXAMPLE -->
```sh
python write-points.py
```
<!-- END PYTHON RUN EXAMPLE -->
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
The example logs the point as line protocol to stdout, and then writes the point to the database.
The line protocol is similar to the following:
### Home sensor data line protocol
```sh
home,room=Kitchen co=9i,hum=20.2,temp=72 1641024000
```
<!-- vale InfluxDataDocs.v3Schema = YES -->


weight: 3
influxdb/clustered/tags: [get-started]
---
InfluxDB is the platform purpose-built to collect, store, and query
time series data.
{{% product-name %}} is powered by the InfluxDB 3.0 storage engine, which
provides nearly unlimited series cardinality,
improved query performance, and interoperability with widely used data
processing tools and platforms.
**Time series data** is a sequence of data points indexed in time order. Data
points typically consist of successive measurements made from the same source
and are used to track changes over time. Examples of time series data include:
- Industrial sensor data
- Server performance metrics
### Data organization
The {{% product-name %}} data model organizes time series data into databases
and tables.
A database can contain multiple tables.
Tables contain multiple tags and fields.
- **Database**: A named location where time series data is stored in _tables_.
_Database_ is synonymous with _bucket_ in InfluxDB Cloud Serverless and InfluxDB TSM.
- **Table**: A logical grouping for time series data. All _points_ in a given
table should have the same _tags_. A table contains _tags_ and
_fields_. _Table_ is synonymous with _measurement_ in InfluxDB Cloud
Serverless and InfluxDB TSM.
- **Tags**: Key-value pairs that provide metadata for each point--for
example, something to identify the source or context of the data like
host, location, station, etc. Tag values may be null.
- **Fields**: Key-value pairs with values that change over time--for
example, temperature, pressure, stock price, etc. Field values may be
null, but at least one field value is not null on any given row.
- **Timestamp**: Timestamp associated with the data. When stored on disk and
queried, all data is ordered by time. A timestamp is never null.
{{% note %}}
#### What about buckets and measurements?
If coming from InfluxDB Cloud Serverless or InfluxDB powered by the TSM storage engine, you're likely familiar
with the concepts _bucket_ and _measurement_.
_Bucket_ in TSM or InfluxDB Cloud Serverless is synonymous with
_database_ in {{% product-name %}}.
_Measurement_ in TSM or InfluxDB Cloud Serverless is synonymous with
_table_ in {{% product-name %}}.
{{% /note %}}
### Schema on write
As you write data to InfluxDB, the data defines the table schema.
You don't need to create tables or
explicitly define the table schema.
### Important definitions
### `influxctl` admin CLI
The [`influxctl` command line interface (CLI)](/influxdb/clustered/reference/cli/influxctl/)
writes, queries, and performs administrative tasks, such as managing databases
and authorization tokens in a cluster.
The `/api/v2/write` v2-compatible endpoint works with existing InfluxDB 2.x tools.
InfluxDB client libraries are community-maintained, language-specific clients that interact with InfluxDB APIs.
[InfluxDB v3 client libraries](/influxdb/clustered/reference/client-libraries/v3/) are the recommended client libraries for writing and querying data in {{% product-name %}}.
They use the HTTP API to write data and use InfluxDB's Flight gRPC API to query data.
[InfluxDB v2 client libraries](/influxdb/clustered/reference/client-libraries/v2/) can use `/api/v2` HTTP endpoints to manage resources such as buckets and API tokens, and write data in {{% product-name %}}.


related:
- /telegraf/v1/
---
This tutorial walks you through the fundamentals of creating **line protocol**
data and writing it to InfluxDB.
InfluxDB provides many different options for ingesting or writing data,
including the following:
- InfluxDB HTTP API (v1 and v2)
- Telegraf
- `influx3` data CLI
- InfluxDB client libraries
If using tools like Telegraf or InfluxDB client libraries, they can build the
line protocol for you, but it's good to understand how line protocol works.
## Line protocol
All data written to InfluxDB is written using **line protocol**, a text-based
format that lets you provide the necessary information to write a data point to
InfluxDB. _This tutorial covers the basics of line protocol, but for detailed
information, see the
[Line protocol reference](/influxdb/clustered/reference/syntax/line-protocol/)._
### Line protocol elements
Each line of line protocol contains the following elements:
{{< req type="key" >}}
- {{< req "\*" >}} **measurement**: A string that identifies the
[table](/influxdb/clustered/reference/glossary/#table) to store the data in.
- **tag set**: Comma-delimited list of key value pairs, each representing a tag.
Tag keys and values are unquoted strings. _Spaces, commas, and equal characters must be escaped._
- {{< req "\*" >}} **field set**: Comma-delimited list of key value pairs, each representing a field.
#### Line protocol element parsing
<!-- vale InfluxDataDocs.v3Schema = NO -->
- **measurement**: Everything before the _first unescaped comma before the first whitespace_.
- **tag set**: Key-value pairs between the _first unescaped comma_ and the _first unescaped whitespace_.
- **field set**: Key-value pairs between the _first and second unescaped whitespaces_.
- **timestamp**: Integer value after the _second unescaped whitespace_.
- Lines are separated by the newline character (`\n`).
Line protocol is whitespace sensitive.
<!-- vale InfluxDataDocs.v3Schema = YES -->
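To make the parsing rules concrete, the following Python sketch (an illustration of the rules above, not InfluxDB's actual parser) splits a simple line that contains no escaped characters into its elements:

```python
# Split a simple line of line protocol (no escaped characters) into
# measurement, tag set, field set, and timestamp.
line = "home,room=Kitchen temp=72.0,hum=20.2,co=9i 1641024000"

# measurement and tag set come before the first whitespace;
# field set sits between the whitespaces; timestamp comes last.
measurement_and_tags, field_set, timestamp = line.split(" ")
measurement, *tag_pairs = measurement_and_tags.split(",")
tags = dict(pair.split("=") for pair in tag_pairs)
fields = dict(pair.split("=") for pair in field_set.split(","))

print(measurement, tags, fields, timestamp)
# home {'room': 'Kitchen'} {'temp': '72.0', 'hum': '20.2', 'co': '9i'} 1641024000
```

A real parser must also handle escaped spaces, commas, and equals signs, which this sketch ignores.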
---
Consider a use case where you collect data from sensors in your home.
Each sensor collects temperature, humidity, and carbon monoxide readings.
To collect this data, use the following schema:
<!-- vale InfluxDataDocs.v3Schema = NO -->
- **measurement**: `home`
- **tags**
- `room`: Living Room or Kitchen
- **fields**
  - `temp`: temperature in °C (float)
- `hum`: percent humidity (float)
- `co`: carbon monoxide in parts per million (integer)
- **timestamp**: Unix timestamp in _second_ precision
<!-- vale InfluxDataDocs.v3Schema = YES -->
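Encoded with this schema, a single Kitchen reading from the sample data is one line of line protocol:

```sh
home,room=Kitchen temp=22.7,hum=36.5,co=26i 1641067200
```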
Data is collected hourly beginning at
{{% influxdb/custom-timestamps-span %}}**2022-01-01T08:00:00Z (UTC)** until **2022-01-01T20:00:00Z (UTC)**{{% /influxdb/custom-timestamps-span %}}.
{{% /note %}}
{{< tabs-wrapper >}}
{{% tabs %}}
[influxctl CLI](#)
[Telegraf](#)
[C#](#)
[Java](#)
{{% /tabs %}}
{{% tab-content %}}
<!---------------------------- BEGIN INFLUXCTL CLI CONTENT ---------------------------->
Use the [`influxctl write` command](/influxdb/clustered/reference/cli/influxctl/write/)
Provide the following:
{{% influxdb/custom-timestamps %}}
{{% code-placeholders "get-started" %}}
```sh
influxctl write \
--database get-started \
home,room=Kitchen temp=22.7,hum=36.5,co=26i 1641067200'
```
{{% /code-placeholders %}}
{{% /influxdb/custom-timestamps %}}
<!----------------------------- END INFLUXCTL CLI CONTENT ----------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!------------------------------- BEGIN TELEGRAF CONTENT ------------------------------>
{{% influxdb/custom-timestamps %}}
Use [Telegraf](/telegraf/v1/) to consume line protocol,
EOF
```
3. Run the following command to generate a Telegraf configuration file
(`./telegraf.conf`) that enables the `inputs.file` and `outputs.influxdb_v2`
plugins:
```sh
telegraf --sample-config \
     --input-filter file \
     --output-filter influxdb_v2 \
     > telegraf.conf
   ```
4. In your editor, open `./telegraf.conf` and configure the following:
- **`file` input plugin**: In the `[[inputs.file]].files` list, replace
`"/tmp/metrics.out"` with your sample data filename. If Telegraf can't
find a file when started, it stops processing and exits.
```toml
[[inputs.file]]
  files = ["home.lp"]
```
<!--test
```bash
echo '[[inputs.file]]' > telegraf.conf
echo ' ## Files to parse each interval. Accept standard unix glob matching rules,' >> telegraf.conf
echo ' ## as well as ** to match recursive files and directories.' >> telegraf.conf
echo ' files = ["home.lp"]' >> telegraf.conf
```
-->
<!--test
```bash
echo '[[outputs.influxdb_v2]]' >> telegraf.conf
echo ' # InfluxDB cluster URL' >> telegraf.conf
echo ' urls = ["${INFLUX_HOST}"]' >> telegraf.conf
echo '' >> telegraf.conf
echo ' # INFLUX_TOKEN is an environment variable you assigned to your database token' >> telegraf.conf
echo ' token = "${INFLUX_TOKEN}"' >> telegraf.conf
echo '' >> telegraf.conf
echo ' # An empty string (InfluxDB ignores this parameter)' >> telegraf.conf
echo ' organization = ""' >> telegraf.conf
echo '' >> telegraf.conf
echo ' # Database name' >> telegraf.conf
echo ' bucket = "get-started"' >> telegraf.conf
```
-->
The example configuration uses the following InfluxDB credentials:
- **`urls`**: an array containing your **`INFLUX_HOST`** environment
variable
- **`token`**: your **`INFLUX_TOKEN`** environment variable
- **`organization`**: an empty string (InfluxDB ignores this parameter)
- **`bucket`**: the name of the database to write to
5. To write the data, start the `telegraf` daemon with the following options:
- `--config`: Specifies the path of the configuration file.
- `--once`: Runs a single Telegraf collection cycle for the configured
inputs and outputs, and then exits.
Enter the following command in your terminal:
Telegraf and its plugins provide many options for reading and writing data.
To learn more, see how to [use Telegraf to write data](/influxdb/clustered/write-data/use-telegraf/).
{{% /influxdb/custom-timestamps %}}
<!------------------------------- END TELEGRAF CONTENT ------------------------------>
{{% /tab-content %}}
{{% tab-content %}}
<!----------------------------- BEGIN v1 API CONTENT ----------------------------->
{{% influxdb/custom-timestamps %}}
Write data with your existing workloads that already use the InfluxDB v1 `/write` API endpoint.
{{% note %}}
If migrating data from InfluxDB 1.x, see the
[Migrate data from InfluxDB 1.x to InfluxDB {{% product-name %}}](/influxdb/clustered/guides/migrate-data/migrate-1x-to-clustered/) guide.
{{% /note %}}
To write data to InfluxDB using the
[InfluxDB v1 HTTP API](/influxdb/clustered/reference/api/), send a
request to the
[InfluxDB API `/write` endpoint](/influxdb/clustered/api/#operation/PostLegacyWrite) using the `POST` request method.
{{% api-endpoint endpoint="https://{{< influxdb/host >}}/write" method="post" api-ref="/influxdb/clustered/api/#operation/PostLegacyWrite"%}}
The following example uses cURL and the InfluxDB v1 API to write line protocol
to InfluxDB:
{{% code-placeholders "DATABASE_TOKEN" %}}
```sh
response=$(curl --silent --write-out "%{response_code}:%{errormsg}" \
"https://{{< influxdb/host >}}/write?db=get-started&precision=s" \
--header "Authorization: Bearer DATABASE_TOKEN" \
--header "Content-type: text/plain; charset=utf-8" \
home,room=Living\ Room temp=22.5,hum=36.3,co=14i 1641063600
home,room=Kitchen temp=23.1,hum=36.6,co=22i 1641063600
home,room=Living\ Room temp=22.2,hum=36.4,co=17i 1641067200
home,room=Kitchen temp=22.7,hum=36.5,co=26i 1641067200
"
")
# Format the response code and error message output.
response_code=${response%%:*}
errormsg=${response#*:}
# Remove whitespace from errormsg
errormsg=$(echo "${errormsg}" | tr -d '[:space:]')
echo "$response_code"
if [[ $errormsg ]]; then
echo "$errormsg"
fi
```
{{% /code-placeholders %}}
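The response handling above relies on Bash parameter expansion to split the `code:message` pair that cURL's `--write-out` emits. A minimal standalone sketch of the same pattern:

```shell
#!/usr/bin/env bash
# Split a "code:message" pair the way the cURL example does.
response="204:"

response_code=${response%%:*}  # remove the longest ':'-suffix -> status code
errormsg=${response#*:}        # remove the shortest ':'-prefix -> message

# Strip whitespace from the message.
errormsg=$(echo "${errormsg}" | tr -d '[:space:]')

echo "$response_code"          # prints: 204
if [[ $errormsg ]]; then
  echo "$errormsg"
fi
```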
If successful, the output is an HTTP `204 No Content` status code.
<!--pytest-codeblocks:expected-output-->
```
204
```
{{% /influxdb/custom-timestamps %}}
<!------------------------------ END v1 API CONTENT ------------------------------>
{{% /tab-content %}}
{{% tab-content %}}
<!----------------------------- BEGIN v2 API CONTENT ----------------------------->
{{% influxdb/custom-timestamps %}}
To write data to InfluxDB using the
[InfluxDB v2 HTTP API](/influxdb/clustered/reference/api/), send a request
to the InfluxDB API `/api/v2/write` endpoint using the `POST` request method.
{{< api-endpoint endpoint="https://{{< influxdb/host >}}/api/v2/write" method="post" api-ref="/influxdb/clustered/api/#operation/PostWrite" >}}
{{< api-endpoint endpoint="https://{{< influxdb/host >}}/api/v2/write"
method="post" api-ref="/influxdb/clustered/api/#operation/PostWrite" >}}
Include the following with your request:
<!-- vale InfluxDataDocs.v3Schema = NO -->
- **Headers**:
- **Authorization**: Bearer <INFLUX_TOKEN>
- **Content-Type**: text/plain; charset=utf-8
- **Query parameters**:
- **bucket**: InfluxDB database name
- **precision**: [timestamp precision](/influxdb/clustered/reference/glossary/#timestamp-precision) (default is `ns`)
- **Request body**: Line protocol as plain text
<!-- vale InfluxDataDocs.v3Schema = YES -->
{{% note %}}
The {{% product-name %}} v2 API `/api/v2/write` endpoint supports
`Bearer` and `Token` authorization schemes and you can use either scheme to pass
a database token in your request.
For more information about HTTP API token
schemes, see how to
[authenticate API requests](/influxdb/clustered/guides/api-compatibility/v2/).
{{% /note %}}
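For example, with `DATABASE_TOKEN` as a placeholder for your token, either of the following request headers authenticates with the v2 endpoint:

```sh
Authorization: Bearer DATABASE_TOKEN
Authorization: Token DATABASE_TOKEN
```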
The following example uses cURL and the InfluxDB v2 API to write line protocol
to InfluxDB:
{{% code-placeholders "DATABASE_TOKEN"%}}
```sh
response=$(curl --silent --write-out "%{response_code}:%{errormsg}" \
"https://{{< influxdb/host >}}/api/v2/write?bucket=get-started&precision=s" \
--header "Authorization: Bearer DATABASE_TOKEN" \
--header "Content-Type: text/plain; charset=utf-8" \
home,room=Living\ Room temp=22.5,hum=36.3,co=14i 1641063600
home,room=Kitchen temp=23.1,hum=36.6,co=22i 1641063600
home,room=Living\ Room temp=22.2,hum=36.4,co=17i 1641067200
home,room=Kitchen temp=22.7,hum=36.5,co=26i 1641067200
"
")
# Format the response code and error message output.
response_code=${response%%:*}
errormsg=${response#*:}
# Remove whitespace from errormsg
errormsg=$(echo "${errormsg}" | tr -d '[:space:]')
echo "$response_code"
if [[ $errormsg ]]; then
echo "$errormsg"
fi
```
{{% /code-placeholders %}}
If successful, the output is an HTTP `204 No Content` status code.
<!--pytest-codeblocks:expected-output-->
```
204
```
{{% /influxdb/custom-timestamps %}}
<!------------------------------ END v2 API CONTENT ------------------------------>
{{% /tab-content %}}
{{% tab-content %}}
<!---------------------------- BEGIN PYTHON CONTENT --------------------------->
{{% influxdb/custom-timestamps %}}
To write data to {{% product-name %}} using Python, use the
   ```sh
pip install influxdb3-python
```
The `influxdb3-python` package provides the `influxdb_client_3` module and
also installs the
[`pyarrow` package](https://arrow.apache.org/docs/python/index.html) for
working with Arrow data returned from queries.
5. In your terminal or editor, create a new file for your code--for example:
`write.py`.
<!--pytest-codeblocks:cont-->
The sample does the following:
1. Imports the `InfluxDBClient3` object from the `influxdb_client_3` module.
2. Calls the `InfluxDBClient3()` constructor to instantiate an InfluxDB
client configured with the following credentials:
- **`host`**: {{% product-name omit=" Clustered" %}} cluster hostname (URL
without protocol or trailing slash)
- **`org`**: an empty or arbitrary string (InfluxDB ignores this
parameter)
- **`token`**: a
[database token](/influxdb/clustered/admin/tokens/#database-tokens)
with write access to the specified database. _Store this in a secret
store or environment variable to avoid exposing the raw token string._
- **`database`**: the name of the {{% product-name %}} database to write
to
3. Defines a list of line protocol strings where each string represents a
data record.
4. Calls the `client.write()` method with the line protocol record list and
write options.
**Because the timestamps in the sample line protocol are in second
precision, the example passes the `write_precision='s'` option to set the
[timestamp precision](/influxdb/clustered/reference/glossary/#timestamp-precision)
to seconds.**
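Put together, the client call described above can be sketched as follows. This is a minimal illustration rather than the full tutorial sample, and the `INFLUX_HOST`, `INFLUX_TOKEN`, and `INFLUX_DATABASE` environment variable names are assumptions carried over from the setup step:

```python
import os

# Line protocol records with second-precision Unix timestamps.
records = [
    "home,room=Living\\ Room temp=21.1,hum=35.9,co=0i 1641024000",
    "home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000",
]

def write_records(records):
    # Imported here so the sketch reads top to bottom; normally place
    # this import at the top of the module.
    from influxdb_client_3 import InfluxDBClient3

    client = InfluxDBClient3(
        host=os.environ["INFLUX_HOST"],
        token=os.environ["INFLUX_TOKEN"],
        database=os.environ["INFLUX_DATABASE"],
    )
    # write_precision='s' because the timestamps above are in seconds.
    client.write(records, write_precision="s")
    client.close()
```

Reading the token from an environment variable keeps the raw string out of your source, as recommended above.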
7. To execute the module and write line protocol to your {{% product-name %}}
database, enter the following command in your terminal:
<!--pytest.mark.skip-->
@ -667,15 +743,19 @@ dependencies to your current project.
{{% /influxdb/custom-timestamps %}}
<!----------------------------- END PYTHON CONTENT ---------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!----------------------------- BEGIN GO CONTENT ------------------------------>
{{% influxdb/custom-timestamps %}}
To write data to {{% product-name %}} using Go, use the InfluxDB v3
[influxdb3-go client library package](https://github.com/InfluxCommunity/influxdb3-go).
1. Inside of your project directory, create a new module directory and navigate
into it.
<!--
Using bash here is required when running with pytest.
@ -694,7 +774,8 @@ InfluxDB v3 [influxdb3-go client library package](https://github.com/InfluxCommu
go mod init influxdb_go_client
```
3. In your terminal or editor, create a new file for your code--for example:
`write.go`.
<!--pytest-codeblocks:cont-->
@ -794,31 +875,38 @@ InfluxDB v3 [influxdb3-go client library package](https://github.com/InfluxCommu
The sample does the following:
1. Imports required packages.
2. Defines a `WriteLineProtocol()` function that does the following:
1. To instantiate the client, calls the
`influxdb3.New(influxdb3.ClientConfig)` function and passes the
following:
- **`Host`**: the {{% product-name omit=" Clustered" %}} cluster URL
- **`Database`**: The name of your {{% product-name %}} database
- **`Token`**: a
[database token](/influxdb/clustered/admin/tokens/#database-tokens)
with _write_ access to the specified database. _Store this in a
secret store or environment variable to avoid exposing the raw
token string._
- **`WriteOptions`**: `influxdb3.WriteOptions` options for writing
to InfluxDB.
**Because the timestamps in the sample line protocol are in second
precision, the example passes the `Precision: lineprotocol.Second`
option to set the
[timestamp precision](/influxdb/clustered/reference/glossary/#timestamp-precision)
to seconds.**
2. Defines a deferred function that closes the client when the function
returns.
3. Defines an array of line protocol strings where each string
represents a data record.
4. Iterates through the array of line protocol and calls the write
client's `Write()` method to write each line of line protocol
separately to InfluxDB.
5. In your editor, create a `main.go` file and enter the following sample code
that calls the `WriteLineProtocol()` function:
```go
package main
@ -829,7 +917,9 @@ InfluxDB v3 [influxdb3-go client library package](https://github.com/InfluxCommu
}
```
6. In your terminal, enter the following command to install the packages listed
in `imports`, build the `influxdb_go_client` module, and execute the
`main()` function:
<!--pytest.mark.skip-->
@ -840,21 +930,28 @@ InfluxDB v3 [influxdb3-go client library package](https://github.com/InfluxCommu
The program writes the line protocol to your {{% product-name %}} database.
{{% /influxdb/custom-timestamps %}}
<!------------------------------- END GO CONTENT ------------------------------>
{{% /tab-content %}}
{{% tab-content %}}
{{% influxdb/custom-timestamps %}}
<!---------------------------- BEGIN NODE.JS CONTENT --------------------------->
1. If you haven't already, follow the instructions for
[Downloading and installing Node.js and npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm)
for your system.
2. In your terminal, enter the following command to create a
`influxdb_js_client` directory for your project:
```bash
mkdir influxdb_js_client && cd influxdb_js_client
```
3. Inside of `influxdb_js_client`, enter the following command to initialize a
package. This example configures the package to use
[ECMAScript modules (ESM)](https://nodejs.org/api/packages.html#modules-loaders).
<!--pytest-codeblocks:cont-->
@ -862,7 +959,8 @@ InfluxDB v3 [influxdb3-go client library package](https://github.com/InfluxCommu
npm init -y; npm pkg set type="module"
```
4. Install the `@influxdata/influxdb3-client` JavaScript client library as a
dependency to your project.
<!--pytest-codeblocks:cont-->
@ -882,13 +980,13 @@ InfluxDB v3 [influxdb3-go client library package](https://github.com/InfluxCommu
```js
// write.js
import { InfluxDBClient } from '@influxdata/influxdb3-client';
/**
* Set InfluxDB credentials.
*/
const host = 'https://cluster-id.influxdb.io';
const database = 'get-started';
/**
* INFLUX_TOKEN is an environment variable you assigned to your
* WRITE token value.
@ -941,16 +1039,17 @@ InfluxDB v3 [influxdb3-go client library package](https://github.com/InfluxCommu
* for all the records.
*/
const writePromises = records.map((record) => {
return client.write(record, database, '', { precision: 's' }).then(
() => `Data has been written successfully: ${record}`,
() => `Failed writing data: ${record}`
);
});
/**
* Wait for all the write promises to settle, and then output the results.
*/
const writeResults = await Promise.allSettled(writePromises);
writeResults.forEach((write) => console.log(write.value));
/** Close the client to release resources. */
await client.close();
@ -960,35 +1059,48 @@ InfluxDB v3 [influxdb3-go client library package](https://github.com/InfluxCommu
The sample code does the following:
1. Imports the `InfluxDBClient` class.
2. Calls the `new InfluxDBClient()` constructor and passes a
`ClientOptions` object to instantiate a client configured with InfluxDB
credentials.
- **`host`**: your {{% product-name omit=" Clustered" %}} cluster URL
- **`token`**: a
[database token](/influxdb/clustered/admin/tokens/#database-tokens)
with _write_ access to the specified database. _Store this in a secret
store or environment variable to avoid exposing the raw token string._
3. Defines a list of line protocol strings where each string represents a
data record.
4. Calls the client's `write()` method for each record, defines the success
or failure message to return, and collects the pending promises into the
`writePromises` array. Each call to `write()` passes the following
arguments:
- **`record`**: the line protocol record
- **`database`**: the name of the {{% product-name %}} database to write
to
- **`{precision}`**: a `WriteOptions` object that sets the `precision`
value.
**Because the timestamps in the sample line protocol are in second
precision, the example passes `s` as the `precision` value to set the
write
[timestamp precision](/influxdb/clustered/reference/glossary/#timestamp-precision)
to seconds.**
5. Calls `Promise.allSettled()` with the promises array to pause execution
until the promises have completed, and then assigns the array containing
success and failure messages to a `writeResults` constant.
6. Iterates over and prints the messages in `writeResults`.
7. Closes the client to release resources.
7. In your terminal or editor, create an `index.js` file.
8. Inside of `index.js`, enter the following sample code to import and call
`writeLineProtocol()`:
```js
// index.js
import { writeLineProtocol } from './write.js';
/**
* Execute the client functions.
@ -1010,13 +1122,21 @@ InfluxDB v3 [influxdb3-go client library package](https://github.com/InfluxCommu
```
{{% /influxdb/custom-timestamps %}}
<!---------------------------- END NODE.JS CONTENT --------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!---------------------------- BEGIN C# CONTENT --------------------------->
{{% influxdb/custom-timestamps %}}
1. If you haven't already, follow the
[Microsoft.com download instructions](https://dotnet.microsoft.com/en-us/download)
to install .NET and the `dotnet` CLI.
2. In your terminal, create an executable C# project using the .NET **console**
template.
<!--pytest.mark.skip-->
@ -1032,7 +1152,8 @@ InfluxDB v3 [influxdb3-go client library package](https://github.com/InfluxCommu
cd influxdb_csharp_client
```
4. Run the following command to install the latest version of the InfluxDB v3
C# client library.
<!--pytest.mark.skip-->
@ -1040,7 +1161,8 @@ InfluxDB v3 [influxdb3-go client library package](https://github.com/InfluxCommu
dotnet add package InfluxDB3.Client
```
5. In your editor, create a `Write.cs` file and enter the following sample
code:
```c#
// Write.cs
@ -1125,25 +1247,34 @@ InfluxDB v3 [influxdb3-go client library package](https://github.com/InfluxCommu
The sample does the following:
1. Calls the `new InfluxDBClient()` constructor to instantiate a client
configured with InfluxDB credentials.
- **`host`**: your {{% product-name omit=" Clustered" %}} cluster URL
- **`database`**: the name of the {{% product-name %}} database to write
to
- **`token`**: a
[database token](/influxdb/clustered/admin/tokens/#database-tokens)
with _write_ access to the specified database. _Store this in a secret
store or environment variable to avoid exposing the raw token string._
_Instantiating the client with the `using` statement ensures that the
client is disposed of when it's no longer needed._
2. Defines an array of line protocol strings where each string represents a
data record.
3. Calls the client's `WriteRecordAsync()` method to write each line
protocol record to InfluxDB.
**Because the timestamps in the sample line protocol are in second
precision, the example passes the
[`WritePrecision.S` enum value](https://github.com/InfluxCommunity/influxdb3-csharp/blob/main/Client/Write/WritePrecision.cs)
to the `precision:` option to set
the [timestamp precision](/influxdb/clustered/reference/glossary/#timestamp-precision)
to seconds.**
6. In your editor, open the `Program.cs` file and replace its contents with the
following:
```c#
// Program.cs
@ -1162,30 +1293,36 @@ InfluxDB v3 [influxdb3-go client library package](https://github.com/InfluxCommu
}
```
The `Program` class shares the same `InfluxDBv3` namespace as the `Write`
class you defined in the preceding step and defines a `Main()` function that
calls `Write.WriteLineProtocol()`. The `dotnet` CLI recognizes
`Program.Main()` as the entry point for your program.
7. To build and execute the program and write the line protocol to your
{{% product-name %}} database, enter the following command in your terminal:
<!--pytest.mark.skip-->
```sh
dotnet run
```
<!---------------------------- END C# CONTENT --------------------------->
{{% /influxdb/custom-timestamps %}}
{{% /tab-content %}}
{{% tab-content %}}
{{% influxdb/custom-timestamps %}}
<!---------------------------- BEGIN JAVA CONTENT --------------------------->
_This tutorial assumes you're using Maven version 3.9 and Java version 15 or later._
1. If you haven't already, follow the instructions to download and install the
[Java JDK](https://www.oracle.com/java/technologies/downloads/) and
[Maven](https://maven.apache.org/download.cgi) for your system.
2. In your terminal or editor, use Maven to generate a project--for example:
```bash
mvn org.apache.maven.plugins:maven-archetype-plugin:3.1.2:generate \
-DarchetypeArtifactId="maven-archetype-quickstart" \
-DarchetypeGroupId="org.apache.maven.archetypes" -DarchetypeVersion="1.4" \
@ -1194,9 +1331,11 @@ _The tutorial assumes using Maven version 3.9 and Java version >= 15._
```
Maven creates the `<artifactId>` directory (`./influxdb_java_client`) that
contains a `pom.xml` and scaffolding for your
`com.influxdbv3.influxdb_java_client` Java application.
3. In your terminal or editor, change into the `./influxdb_java_client`
directory--for example:
<!--pytest-codeblocks:cont-->
@ -1204,7 +1343,8 @@ _The tutorial assumes using Maven version 3.9 and Java version >= 15._
cd ./influxdb_java_client
```
4. In your editor, open the `pom.xml` Maven configuration file and add the
`com.influxdb.influxdb3-java` client library into `dependencies`.
```pom
...
@ -1218,7 +1358,9 @@ _The tutorial assumes using Maven version 3.9 and Java version >= 15._
...
</dependencies>
```
5. To check your `pom.xml` for problems, run Maven's `validate` command--for example,
enter the following in your terminal:
<!--pytest.mark.skip-->
@ -1226,7 +1368,9 @@ _The tutorial assumes using Maven version 3.9 and Java version >= 15._
mvn validate
```
6. In your editor, navigate to the
`./influxdb_java_client/src/main/java/com/influxdbv3` directory and create a
`Write.java` file.
7. In `Write.java`, enter the following sample code:
```java
@ -1328,19 +1472,27 @@ _The tutorial assumes using Maven version 3.9 and Java version >= 15._
with InfluxDB credentials.
- **`host`**: your {{% product-name omit=" Clustered" %}} cluster URL
- **`database`**: the name of the {{% product-name %}} database to write
to
- **`token`**: a
[database token](/influxdb/clustered/admin/tokens/#database-tokens)
with _write_ access to the specified database. _Store this in a secret
store or environment variable to avoid exposing the raw token string._
3. Defines a list of line protocol strings where each string represents a
data record.
4. Calls the client's `writeRecord()` method to write each record
separately to InfluxDB.
**Because the timestamps in the sample line protocol are in second
precision, the example passes the
[`WritePrecision.S` enum value](https://github.com/InfluxCommunity/influxdb3-java/blob/main/src/main/java/com/influxdb/v3/client/write/WritePrecision.java)
as the `precision` argument to set the write
[timestamp precision](/influxdb/clustered/reference/glossary/#timestamp-precision)
to seconds.**
8. In your editor, open the `App.java` file (created by Maven) and replace its
contents with the following sample code:
```java
// App.java
@ -1364,9 +1516,12 @@ _The tutorial assumes using Maven version 3.9 and Java version >= 15._
}
```
- The `App` class and `Write` class are part of the same `com.influxdbv3`
package (your project **groupId**).
- `App` defines a `main()` function that calls `Write.writeLineProtocol()`.
9. In your terminal or editor, use Maven to install dependencies and compile
the project code--for example:
<!--pytest.mark.skip-->
@ -1374,19 +1529,23 @@ _The tutorial assumes using Maven version 3.9 and Java version >= 15._
mvn compile
```
10. In your terminal or editor, execute `App.main()` to write to InfluxDB--for
example, using Maven:
<!--pytest.mark.skip-->
```sh
mvn exec:java -Dexec.mainClass="com.influxdbv3.App"
```
<!---------------------------- END JAVA CONTENT --------------------------->
{{% /influxdb/custom-timestamps %}}
{{% /tab-content %}}
{{< /tabs-wrapper >}}
If successful, the output is the success message; otherwise, error details and
the failure message.
{{< expand-wrapper >}}
{{% expand "View the written data" %}}
@ -1425,7 +1584,7 @@ If successful, the output is the success message; otherwise, error details and t
{{% /expand %}}
{{< /expand-wrapper >}}
**Congratulations!** You've written data to InfluxDB.
Next, learn how to query your data.
{{< page-nav prev="/influxdb/clustered/get-started/setup/" next="/influxdb/clustered/get-started/query/" keepTab=true >}}
@ -1,375 +0,0 @@
---
title: Use InfluxDB client libraries to write line protocol data
description: >
Use InfluxDB API clients to write line protocol data to InfluxDB Clustered.
menu:
influxdb_clustered:
name: Use client libraries
parent: Write line protocol
identifier: write-client-libs
weight: 103
related:
- /influxdb/clustered/reference/syntax/line-protocol/
- /influxdb/clustered/get-started/write/
---
Use InfluxDB client libraries to build line protocol, and then write it to an
InfluxDB database.
- [Construct line protocol](#construct-line-protocol)
- [Set up your project](#set-up-your-project)
- [Construct points and write line protocol](#construct-points-and-write-line-protocol)
- [Run the example](#run-the-example)
- [Home sensor data line protocol](#home-sensor-data-line-protocol)
## Construct line protocol
With a [basic understanding of line protocol](/influxdb/clustered/write-data/line-protocol/),
you can now construct line protocol and write data to InfluxDB.
Consider a use case where you collect data from sensors in your home.
Each sensor collects temperature, humidity, and carbon monoxide readings.
To collect this data, use the following schema:
- **measurement**: `home`
- **tags**
- `room`: Living Room or Kitchen
- **fields**
- `temp`: temperature in °C (float)
- `hum`: percent humidity (float)
- `co`: carbon monoxide in parts per million (integer)
- **timestamp**: Unix timestamp in _second_ precision
The following example shows how to construct and write points that follow this schema.
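For instance, one reading in this schema serializes to the line protocol string below. The helper is a plain-Python sketch for illustration only; the client libraries in this guide build the same strings for you from `Point` objects:

```python
def to_line_protocol(room: str, temp: float, hum: float, co: int, ts: int) -> str:
    """Serialize one home-sensor reading using the schema above."""
    # Line protocol requires escaping spaces in tag values.
    tag_value = room.replace(" ", "\\ ")
    # co gets an 'i' suffix to mark it as an integer field;
    # ts is a Unix timestamp in second precision.
    return f"home,room={tag_value} temp={temp},hum={hum},co={co}i {ts}"

print(to_line_protocol("Living Room", 21.1, 35.9, 0, 1641024000))
# home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000
```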
## Set up your project
The examples in this guide assume you followed [Set up InfluxDB](/influxdb/clustered/get-started/setup/)
and [Write data set up](/influxdb/clustered/get-started/write/#set-up-your-project-and-credentials)
instructions in [Get started](/influxdb/clustered/get-started/).
After setting up InfluxDB and your project, you should have the following:
- {{< product-name >}} credentials:
- [Database](/influxdb/clustered/admin/databases/)
- [Database token](/influxdb/clustered/admin/tokens/#database-tokens)
- Cluster hostname
- A directory for your project.
- Credentials stored as environment variables or in a project configuration file--for example, a `.env` ("dotenv") file.
- Client libraries installed for writing data to InfluxDB.
The following example shows how to construct `Point` objects that follow the [example `home` schema](#construct-line-protocol), and then write the points as line protocol to an
{{% product-name %}} database.
{{< tabs-wrapper >}}
{{% tabs %}}
[Go](#)
[Node.js](#)
[Python](#)
{{% /tabs %}}
{{% tab-content %}}
<!-- BEGIN GO PROJECT SETUP -->
1. Install [Go 1.13 or later](https://golang.org/doc/install).
2. Inside of your project directory, install the client package to your project dependencies.
```sh
go get github.com/influxdata/influxdb-client-go/v2
```
<!-- END GO SETUP PROJECT -->
{{% /tab-content %}}
{{% tab-content %}}
<!-- BEGIN NODE.JS PROJECT SETUP -->
Inside of your project directory, install the `@influxdata/influxdb-client` InfluxDB v2 JavaScript client library.
```sh
npm install --save @influxdata/influxdb-client
```
<!-- END NODE.JS SETUP PROJECT -->
{{% /tab-content %}}
{{% tab-content %}}
<!-- BEGIN PYTHON SETUP PROJECT -->
1. **Optional, but recommended**: Use [`venv`](https://docs.python.org/3/library/venv.html) or [`conda`](https://docs.continuum.io/anaconda/install/) to create and activate a virtual environment for installing and executing code--for example:
Inside of your project directory, enter the following command using `venv` to create and activate a virtual environment for the project:
```sh
python3 -m venv envs/env1 && source ./envs/env1/bin/activate
```
2. Install the [`influxdb3-python` package](https://github.com/InfluxCommunity/influxdb3-python), which provides the InfluxDB `influxdb_client_3` Python client library module and also installs the [`pyarrow` package](https://arrow.apache.org/docs/python/index.html) for working with Arrow data.
```sh
pip install influxdb3-python
```
<!-- END PYTHON SETUP PROJECT -->
{{% /tab-content %}}
{{< /tabs-wrapper >}}
## Construct points and write line protocol
{{< tabs-wrapper >}}
{{% tabs %}}
[Go](#)
[Node.js](#)
[Python](#)
{{% /tabs %}}
{{% tab-content %}}
<!-- BEGIN GO SETUP SAMPLE -->
1. Create a file for your module--for example: `write-point.go`.
2. In `write-point.go`, enter the following sample code:
```go
package main
import (
"os"
"time"
"fmt"
"github.com/influxdata/influxdb-client-go/v2"
)
func main() {
// Set a log level constant
const debugLevel uint = 4
/**
* Define options for the client.
* Instantiate the client with the following arguments:
* - An object containing InfluxDB URL and token credentials.
* - Write options for batch size and timestamp precision.
**/
clientOptions := influxdb2.DefaultOptions().
SetBatchSize(20).
SetLogLevel(debugLevel).
SetPrecision(time.Second)
client := influxdb2.NewClientWithOptions(os.Getenv("INFLUX_URL"),
os.Getenv("INFLUX_TOKEN"),
clientOptions)
/**
* Create an asynchronous, non-blocking write client.
* Provide your InfluxDB org and database as arguments
**/
writeAPI := client.WriteAPI(os.Getenv("INFLUX_ORG"), "get-started")
// Get the errors channel for the asynchronous write client.
errorsCh := writeAPI.Errors()
/** Create a point.
* Provide measurement, tags, and fields as arguments.
**/
p := influxdb2.NewPointWithMeasurement("home").
AddTag("room", "Kitchen").
AddField("temp", 72.0).
AddField("hum", 20.2).
AddField("co", 9).
SetTime(time.Now())
// Define a proc for handling errors.
go func() {
for err := range errorsCh {
fmt.Printf("write error: %s\n", err.Error())
}
}()
// Write the point asynchronously
writeAPI.WritePoint(p)
// Send pending writes from the buffer to the database.
writeAPI.Flush()
// Ensure background processes finish and release resources.
client.Close()
}
```
<!-- END GO SETUP SAMPLE -->
{{% /tab-content %}}
{{% tab-content %}}
<!-- BEGIN NODE.JS SETUP SAMPLE -->
1. Create a file for your module--for example: `write-point.js`.
2. In `write-point.js`, enter the following sample code:
```js
'use strict'
/** @module write
* Use the JavaScript client library for Node.js to create a point and write it to InfluxDB
**/
import {InfluxDB, Point} from '@influxdata/influxdb-client'
/** Get credentials from the environment **/
const url = process.env.INFLUX_URL
const token = process.env.INFLUX_TOKEN
const org = process.env.INFLUX_ORG
/**
* Instantiate a client with a configuration object
* that contains your InfluxDB URL and token.
**/
const influxDB = new InfluxDB({url, token})
/**
* Create a write client configured to write to the database.
* Provide your InfluxDB org and database.
**/
const writeApi = influxDB.getWriteApi(org, 'get-started')
/**
* Create a point and add tags and fields.
* To add a field, call the field method for your data type.
**/
const point1 = new Point('home')
.tag('room', 'Kitchen')
.floatField('temp', 72.0)
.floatField('hum', 20.2)
.intField('co', 9)
console.log(` ${point1}`)
/**
* Add the point to the batch.
**/
writeApi.writePoint(point1)
/**
* Flush pending writes in the batch from the buffer and close the write client.
**/
writeApi.close().then(() => {
console.log('WRITE FINISHED')
})
```
<!-- END NODE.JS SETUP SAMPLE -->
{{% /tab-content %}}
{{% tab-content %}}
<!-- BEGIN PYTHON SETUP SAMPLE -->
1. Create a file for your module--for example: `write-point.py`.
2. In `write-point.py`, enter the following sample code to write data in batching mode:
```python
import os
from influxdb_client_3 import (InfluxDBClient3, InfluxDBError, Point, WriteOptions, write_client_options)
# Create an array of points with tags and fields.
points = [Point("home")
.tag("room", "Kitchen")
.field("temp", 25.3)
.field('hum', 20.2)
.field('co', 9)]
# With batching mode, define callbacks to execute after a successful or failed write request.
# Callback methods receive the configuration and data sent in the request.
def success(self, data: str):
print(f"Successfully wrote batch: data: {data}")
def error(self, data: str, exception: InfluxDBError):
print(f"Failed writing batch: config: {self}, data: {data} due: {exception}")
def retry(self, data: str, exception: InfluxDBError):
print(f"Failed retry writing batch: config: {self}, data: {data} retry: {exception}")
# Configure options for batch writing.
write_options = WriteOptions(batch_size=500,
flush_interval=10_000,
jitter_interval=2_000,
retry_interval=5_000,
max_retries=5,
max_retry_delay=30_000,
exponential_base=2)
# Create an options dict that sets callbacks and WriteOptions.
wco = write_client_options(success_callback=success,
error_callback=error,
retry_callback=retry,
write_options=write_options)
# Instantiate a synchronous instance of the client with your
# InfluxDB credentials and write options.
with InfluxDBClient3(host=os.getenv('INFLUX_HOST'),
token=os.getenv('INFLUX_TOKEN'),
database=os.getenv('INFLUX_DATABASE'),
write_client_options=wco) as client:
client.write(points, write_precision='s')
```
<!-- END PYTHON SETUP PROJECT -->
{{% /tab-content %}}
{{< /tabs-wrapper >}}
The sample code does the following:
1. Instantiates a client configured with the InfluxDB URL and API token.
2. Uses the client to instantiate a **write client** with credentials.
3. Constructs a `Point` object with the [measurement](/influxdb/clustered/reference/glossary/#measurement) name (`"home"`).
4. Adds a tag and fields to the point.
5. Adds the point to a batch to be written to the database.
6. Sends the batch to InfluxDB and waits for the response.
7. Executes callbacks for the response, flushes the write buffer, and releases resources.
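The `WriteOptions` shown above configure an exponential retry backoff. The delay schedule they imply can be sketched as follows (an illustration of the parameters only--the client also applies a random jitter, omitted here):

```python
def retry_delays(retry_interval=5_000, exponential_base=2,
                 max_retry_delay=30_000, max_retries=5):
    """Approximate the delay (in milliseconds) before each retry attempt."""
    delays = []
    for attempt in range(max_retries):
        delay = retry_interval * (exponential_base ** attempt)
        # Delays grow exponentially but are capped at max_retry_delay.
        delays.append(min(delay, max_retry_delay))
    return delays

print(retry_delays())  # [5000, 10000, 20000, 30000, 30000]
```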
## Run the example
To run the sample and write the data to your InfluxDB Clustered database, enter the following command in your terminal:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Go](#)
[Node.js](#)
[Python](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
<!-- BEGIN GO RUN EXAMPLE -->
```sh
go run write-point.go
```
<!-- END GO RUN EXAMPLE -->
{{% /code-tab-content %}}
{{% code-tab-content %}}
<!-- BEGIN NODE.JS RUN EXAMPLE -->
```sh
node write-point.js
```
<!-- END NODE.JS RUN EXAMPLE -->
{{% /code-tab-content %}}
{{% code-tab-content %}}
<!-- BEGIN PYTHON RUN EXAMPLE -->
```sh
python write-point.py
```
<!-- END PYTHON RUN EXAMPLE -->
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
The example logs the point as line protocol to stdout, and then writes the point to the database.
The line protocol is similar to the following:
### Home sensor data line protocol
```sh
home,room=Kitchen co=9i,hum=20.2,temp=72 1641024000
```
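The trailing `1641024000` is a Unix timestamp in second precision. You can verify the instant it represents--for example, in Python:

```python
from datetime import datetime, timezone

ts = 1641024000  # seconds since the Unix epoch, as written in the sample point
print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
# 2022-01-01T08:00:00+00:00
```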


@ -1,165 +0,0 @@
---
title: Use the influxctl CLI to write line protocol data
description: >
Use the [`influxctl` CLI](/influxdb/clustered/reference/cli/influxctl/)
to write line protocol data to InfluxDB Clustered.
menu:
influxdb_clustered:
name: Use the influxctl CLI
parent: Write line protocol
identifier: write-influxctl
weight: 101
related:
- /influxdb/clustered/reference/cli/influxctl/write/
- /influxdb/clustered/reference/syntax/line-protocol/
- /influxdb/clustered/get-started/write/
---
Use the [`influxctl` CLI](/influxdb/clustered/reference/cli/influxctl/)
to write line protocol data to {{< product-name >}}.
- [Construct line protocol](#construct-line-protocol)
- [Write the line protocol to InfluxDB](#write-the-line-protocol-to-influxdb)
## Construct line protocol
With a [basic understanding of line protocol](/influxdb/clustered/write-data/line-protocol/),
you can now construct line protocol and write data to InfluxDB.
Consider a use case where you collect data from sensors in your home.
Each sensor collects temperature, humidity, and carbon monoxide readings.
To collect this data, use the following schema:
- **measurement**: `home`
- **tags**
- `room`: Living Room or Kitchen
- **fields**
- `temp`: temperature in °C (float)
- `hum`: percent humidity (float)
- `co`: carbon monoxide in parts per million (integer)
- **timestamp**: Unix timestamp in _second_ precision
The following line protocol represents the schema described above:
```
home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000
home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000
home,room=Living\ Room temp=21.4,hum=35.9,co=0i 1641027600
home,room=Kitchen temp=23.0,hum=36.2,co=0i 1641027600
home,room=Living\ Room temp=21.8,hum=36.0,co=0i 1641031200
home,room=Kitchen temp=22.7,hum=36.1,co=0i 1641031200
home,room=Living\ Room temp=22.2,hum=36.0,co=0i 1641034800
home,room=Kitchen temp=22.4,hum=36.0,co=0i 1641034800
home,room=Living\ Room temp=22.2,hum=35.9,co=0i 1641038400
home,room=Kitchen temp=22.5,hum=36.0,co=0i 1641038400
home,room=Living\ Room temp=22.4,hum=36.0,co=0i 1641042000
home,room=Kitchen temp=22.8,hum=36.5,co=1i 1641042000
```
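Note the escaped space in `Living\ Room`: spaces, commas, and equals signs in tag values must be escaped with a backslash. A quick sketch of that rule in Python (a hypothetical helper, not part of `influxctl` or any client library):

```python
def escape_tag_value(value: str) -> str:
    """Escape characters that are special in line protocol tag values."""
    for ch in (',', '=', ' '):
        value = value.replace(ch, '\\' + ch)
    return value

print(escape_tag_value('Living Room'))  # Living\ Room
```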
For this tutorial, you can pass this line protocol directly to the
`influxctl write` command as a string or via `stdin`, or you can save it to a
file and read it from the file.
## Write the line protocol to InfluxDB
Use the [`influxctl write` command](/influxdb/clustered/reference/cli/influxctl/write/)
to write the [home sensor sample data](#home-sensor-data-line-protocol) to your
{{< product-name omit=" Clustered" >}} cluster.
Provide the following:
- The [database](/influxdb/clustered/admin/databases/) name using the `--database` flag
- A [database token](/influxdb/clustered/admin/tokens/#database-tokens) (with write permissions
on the target database) using the `--token` flag
- The timestamp precision as seconds (`s`) using the `--precision` flag
- [Line protocol](#construct-line-protocol).
Pass the line protocol in one of the following ways:
- a string on the command line
- a path to a file that contains the line protocol
- a single dash (`-`) to read the line protocol from stdin
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[string](#)
[file](#)
[stdin](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
{{% influxdb/custom-timestamps %}}
{{% code-placeholders "DATABASE_(NAME|TOKEN)|(LINE_PROTOCOL_FILEPATH)" %}}
```sh
influxctl write \
--database DATABASE_NAME \
--token DATABASE_TOKEN \
--precision s \
'home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000
home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000
home,room=Living\ Room temp=21.4,hum=35.9,co=0i 1641027600
home,room=Kitchen temp=23.0,hum=36.2,co=0i 1641027600
home,room=Living\ Room temp=21.8,hum=36.0,co=0i 1641031200
home,room=Kitchen temp=22.7,hum=36.1,co=0i 1641031200
home,room=Living\ Room temp=22.2,hum=36.0,co=0i 1641034800
home,room=Kitchen temp=22.4,hum=36.0,co=0i 1641034800
home,room=Living\ Room temp=22.2,hum=35.9,co=0i 1641038400
home,room=Kitchen temp=22.5,hum=36.0,co=0i 1641038400
home,room=Living\ Room temp=22.4,hum=36.0,co=0i 1641042000
home,room=Kitchen temp=22.8,hum=36.5,co=1i 1641042000'
```
{{% /code-placeholders %}}
{{% /influxdb/custom-timestamps %}}
Replace the following:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
Name of the database to write to.
- {{% code-placeholder-key %}}`DATABASE_TOKEN`{{% /code-placeholder-key %}}:
Database token with write permissions on the target database.
{{% /code-tab-content %}}
{{% code-tab-content %}}
{{% code-placeholders "DATABASE_(NAME|TOKEN)|(LINE_PROTOCOL_FILEPATH)" %}}
```sh
influxctl write \
--database DATABASE_NAME \
--token DATABASE_TOKEN \
--precision s \
LINE_PROTOCOL_FILEPATH
```
{{% /code-placeholders %}}
Replace the following:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
Name of the database to write to.
- {{% code-placeholder-key %}}`DATABASE_TOKEN`{{% /code-placeholder-key %}}:
Database token with write permissions on the target database.
- {{% code-placeholder-key %}}`LINE_PROTOCOL_FILEPATH`{{% /code-placeholder-key %}}:
File path to the file containing the line protocol. Can be an absolute file path
or relative to the current working directory.
{{% /code-tab-content %}}
{{% code-tab-content %}}
{{% code-placeholders "DATABASE_(NAME|TOKEN)|(LINE_PROTOCOL_FILEPATH)" %}}
```sh
cat LINE_PROTOCOL_FILEPATH | influxctl write \
--database DATABASE_NAME \
--token DATABASE_TOKEN \
--precision s \
-
```
{{% /code-placeholders %}}
Replace the following:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
Name of the database to write to.
- {{% code-placeholder-key %}}`DATABASE_TOKEN`{{% /code-placeholder-key %}}:
Database token with write permissions on the target database.
- {{% code-placeholder-key %}}`LINE_PROTOCOL_FILEPATH`{{% /code-placeholder-key %}}:
File path to the file containing the line protocol. Can be an absolute file path
or relative to the current working directory.
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}


@ -43,7 +43,7 @@ Each line of line protocol contains the following elements:
{{< req type="key" >}}
- {{< req "\*" >}} **measurement**: String that identifies the [measurement](/influxdb/clustered/reference/glossary/#measurement) to store the data in.
- {{< req "\*" >}} **measurement**: A string that identifies the [table](/influxdb/clustered/reference/glossary/#table) to store the data in.
- **tag set**: Comma-delimited list of key value pairs, each representing a tag.
Tag keys and values are unquoted strings. _Spaces, commas, and equal characters must be escaped._
- {{< req "\*" >}} **field set**: Comma-delimited list of key value pairs, each representing a field.


@ -0,0 +1,463 @@
---
title: Use InfluxDB client libraries to write line protocol data
description: >
Use InfluxDB API clients to write points as line protocol data to InfluxDB
Clustered.
menu:
influxdb_clustered:
name: Use client libraries
parent: Write line protocol
identifier: write-client-libs
weight: 103
related:
- /influxdb/clustered/reference/syntax/line-protocol/
- /influxdb/clustered/get-started/write/
---
Use InfluxDB client libraries to build time series points, and then write them
as line protocol to an {{% product-name %}} database.
- [Construct line protocol](#construct-line-protocol)
- [Example home schema](#example-home-schema)
- [Set up your project](#set-up-your-project)
- [Construct points and write line protocol](#construct-points-and-write-line-protocol)
## Construct line protocol
With a
[basic understanding of line protocol](/influxdb/clustered/write-data/line-protocol/),
you can construct line protocol data and write it to InfluxDB.
All InfluxDB client libraries write data in line protocol format to InfluxDB.
Client library `write` methods let you provide data as raw line protocol or as
`Point` objects that the client library converts to line protocol. If your
program creates the data you write to InfluxDB, use the client library `Point`
interface to take advantage of type safety in your program.
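To see why the two input forms are interchangeable, here is a minimal sketch of how point components reduce to a raw line protocol string (a hypothetical serializer that omits escaping--not the client library's implementation):

```python
def to_line_protocol(measurement, tags, fields, timestamp):
    """Serialize one row to line protocol (simplified: no escaping; ints get an 'i' suffix)."""
    tag_set = ','.join(f'{k}={v}' for k, v in tags.items())

    def fmt(value):
        # Integer fields carry an 'i' suffix in line protocol; floats do not.
        return f'{value}i' if isinstance(value, int) else str(value)

    field_set = ','.join(f'{k}={fmt(v)}' for k, v in fields.items())
    return f'{measurement},{tag_set} {field_set} {timestamp}'

line = to_line_protocol('home', {'room': 'Kitchen'},
                        {'temp': 72.0, 'hum': 20.2, 'co': 9}, 1641024000)
print(line)  # home,room=Kitchen temp=72.0,hum=20.2,co=9i 1641024000
```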
### Example home schema
Consider a use case where you collect data from sensors in your home. Each
sensor collects temperature, humidity, and carbon monoxide readings.
To collect this data, use the following schema:
<!-- vale InfluxDataDocs.v3Schema = NO -->
- **measurement**: `home`
- **tags**
- `room`: Living Room or Kitchen
- **fields**
- `temp`: temperature in °C (float)
- `hum`: percent humidity (float)
- `co`: carbon monoxide in parts per million (integer)
- **timestamp**: Unix timestamp in _second_ precision
<!-- vale InfluxDataDocs.v3Schema = YES -->
The following example shows how to construct and write points that follow the
`home` schema.
## Set up your project
The examples in this guide assume you followed
[Set up InfluxDB](/influxdb/clustered/get-started/setup/) and
[Write data set up](/influxdb/clustered/get-started/write/#set-up-your-project-and-credentials)
instructions in [Get started](/influxdb/clustered/get-started/).
After setting up InfluxDB and your project, you should have the following:
- {{< product-name >}} credentials:
- [Database](/influxdb/clustered/admin/databases/)
- [Database token](/influxdb/clustered/admin/tokens/#database-tokens)
- Cluster hostname
- A directory for your project.
- Credentials stored as environment variables or in a project configuration
file--for example, a `.env` ("dotenv") file.
- Client libraries installed for writing data to InfluxDB.
The following example shows how to construct `Point` objects that follow the
[example `home` schema](#example-home-schema), and then write the data as line
protocol to an {{% product-name %}} database.
The examples use InfluxDB v3 client libraries. For examples using InfluxDB v2
client libraries to write data to InfluxDB v3, see
[InfluxDB v2 clients](/influxdb/clustered/reference/client-libraries/v2/).
{{< tabs-wrapper >}} {{% tabs %}} [Go](#) [Node.js](#) [Python](#) {{% /tabs %}}
{{% tab-content %}}
The following steps set up a Go project using the
[InfluxDB v3 Go client](https://github.com/InfluxCommunity/influxdb3-go/):
<!-- BEGIN GO PROJECT SETUP -->
1. Install [Go 1.13 or later](https://golang.org/doc/install).
1. Create a directory for your Go module and change to the directory--for
example:
```sh
mkdir iot-starter-go && cd $_
```
1. Initialize a Go module--for example:
```sh
go mod init iot-starter
```
1. Install [`influxdb3-go`](https://github.com/InfluxCommunity/influxdb3-go/),
which provides the InfluxDB `influxdb3` Go client library module.
```sh
go get github.com/InfluxCommunity/influxdb3-go
```
<!-- END GO SETUP PROJECT -->
{{% /tab-content %}} {{% tab-content %}}
<!-- BEGIN NODE.JS PROJECT SETUP -->
The following steps set up a JavaScript project using the
[InfluxDB v3 JavaScript client](https://github.com/InfluxCommunity/influxdb3-js/).
1. Install [Node.js](https://nodejs.org/en/download/).
1. Create a directory for your JavaScript project and change to the
directory--for example:
```sh
mkdir -p iot-starter-js && cd $_
```
1. Initialize a project--for example, using `npm`:
<!-- pytest.mark.skip -->
```sh
npm init
```
1. Install the `@influxdata/influxdb3-client` InfluxDB v3 JavaScript client
library.
```sh
npm install @influxdata/influxdb3-client
```
<!-- END NODE.JS SETUP PROJECT -->
{{% /tab-content %}} {{% tab-content %}}
<!-- BEGIN PYTHON SETUP PROJECT -->
The following steps set up a Python project using the
[InfluxDB v3 Python client](https://github.com/InfluxCommunity/influxdb3-python/):
1. Install [Python](https://www.python.org/downloads/).
1. Inside of your project directory, create a directory for your Python module
and change to the module directory--for example:
```sh
mkdir -p iot-starter-py && cd $_
```
1. **Optional, but recommended**: Use
[`venv`](https://docs.python.org/3/library/venv.html) or
[`conda`](https://docs.continuum.io/anaconda/install/) to activate a virtual
environment for installing and executing code--for example, enter the
following command using `venv` to create and activate a virtual environment
for the project:
```bash
python3 -m venv envs/iot-starter && source ./envs/iot-starter/bin/activate
```
1. Install
[`influxdb3-python`](https://github.com/InfluxCommunity/influxdb3-python),
which provides the InfluxDB `influxdb_client_3` Python client library module
and also installs the
[`pyarrow` package](https://arrow.apache.org/docs/python/index.html) for
working with Arrow data.
```sh
pip install influxdb3-python
```
<!-- END PYTHON SETUP PROJECT -->
{{% /tab-content %}} {{< /tabs-wrapper >}}
## Construct points and write line protocol
Client libraries provide one or more `Point` constructor methods. Some libraries
support language-native data structures, such as Go's `struct`, for creating
points.
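The idea of mapping a language-native structure onto a point can be sketched by splitting a record into the tag and field maps a `Point` constructor expects (the `SensorReading` type here is hypothetical, not part of any client library):

```python
from dataclasses import dataclass, asdict

@dataclass
class SensorReading:
    room: str
    temp: float
    hum: float
    co: int

def to_point_args(reading: SensorReading):
    """Split a reading into the measurement, tag map, and field map for a Point."""
    data = asdict(reading)
    tags = {'room': data.pop('room')}  # 'room' identifies the series, so it becomes a tag
    return 'home', tags, data          # the remaining values become fields

measurement, tags, fields = to_point_args(SensorReading('Kitchen', 22.4, 36.0, 0))
print(measurement, tags, fields)
```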
{{< tabs-wrapper >}} {{% tabs %}} [Go](#) [Node.js](#) [Python](#) {{% /tabs %}}
{{% tab-content %}}
<!-- BEGIN GO SETUP SAMPLE -->
1. Create a file for your module--for example: `main.go`.
1. In `main.go`, enter the following sample code:
```go
package main
import (
"context"
"os"
"fmt"
"time"
"github.com/InfluxCommunity/influxdb3-go/influxdb3"
"github.com/influxdata/line-protocol/v2/lineprotocol"
)
func Write() error {
url := os.Getenv("INFLUX_HOST")
token := os.Getenv("INFLUX_TOKEN")
database := os.Getenv("INFLUX_DATABASE")
// To instantiate a client, call New() with InfluxDB credentials.
client, err := influxdb3.New(influxdb3.ClientConfig{
Host: url,
Token: token,
Database: database,
})
if err != nil {
return err
}
/** Use a deferred function to ensure the client is closed when the
* function returns.
**/
defer func (client *influxdb3.Client) {
err = client.Close()
if err != nil {
panic(err)
}
}(client)
/** Use the NewPoint method to construct a point.
* NewPoint(measurement, tags map, fields map, time)
**/
point := influxdb3.NewPoint("home",
map[string]string{
"room": "Living Room",
},
map[string]any{
"temp": 24.5,
"hum": 40.5,
"co": int64(15)},
time.Now(),
)
/** Use the NewPointWithMeasurement method to construct a point with
* method chaining.
**/
point2 := influxdb3.NewPointWithMeasurement("home").
SetTag("room", "Living Room").
SetField("temp", 23.5).
SetField("hum", 38.0).
SetField("co", int64(16)).
SetTimestamp(time.Now())
fmt.Println("Writing points")
points := []*influxdb3.Point{point, point2}
/** Write points to InfluxDB.
* You can specify WriteOptions, such as Gzip threshold,
* default tags, and timestamp precision. Default precision is lineprotocol.Nanosecond
**/
err = client.WritePoints(context.Background(), points,
influxdb3.WithPrecision(lineprotocol.Second))
return err
}
func main() {
Write()
}
```
1. To run the module and write the data to your {{% product-name %}} database,
enter the following command in your terminal:
<!-- pytest.mark.skip -->
```sh
go run main.go
```
<!-- END GO SAMPLE -->
{{% /tab-content %}} {{% tab-content %}}
<!-- BEGIN NODE.JS SETUP SAMPLE -->
1. Create a file for your module--for example: `write-points.js`.
1. In `write-points.js`, enter the following sample code:
```js
// write-points.js
import { InfluxDBClient, Point } from '@influxdata/influxdb3-client';
/**
* Set InfluxDB credentials.
*/
const host = process.env.INFLUX_HOST ?? '';
const database = process.env.INFLUX_DATABASE;
const token = process.env.INFLUX_TOKEN;
/**
* Write line protocol to InfluxDB using the JavaScript client library.
*/
export async function writePoints() {
/**
* Instantiate an InfluxDBClient.
* Provide the host URL and the database token.
*/
const client = new InfluxDBClient({ host, token });
/** Use the fluent interface with chained methods to construct Points. */
const point = Point.measurement('home')
.setTag('room', 'Living Room')
.setFloatField('temp', 22.2)
.setFloatField('hum', 35.5)
.setIntegerField('co', 7)
.setTimestamp(new Date().getTime() / 1000);
const point2 = Point.measurement('home')
.setTag('room', 'Kitchen')
.setFloatField('temp', 21.0)
.setFloatField('hum', 35.9)
.setIntegerField('co', 0)
.setTimestamp(new Date().getTime() / 1000);
/** Write points to InfluxDB.
* The write method accepts an array of points, the target database, and
* an optional configuration object.
* You can specify WriteOptions, such as Gzip threshold, default tags,
* and timestamp precision. Default precision is lineprotocol.Nanosecond
**/
try {
await client.write([point, point2], database, '', { precision: 's' });
console.log('Data has been written successfully!');
} catch (error) {
console.error(`Error writing data to InfluxDB: ${error.body}`);
}
client.close();
}
writePoints();
```
1. To run the module and write the data to your {{< product-name >}} database,
enter the following command in your terminal:
<!-- pytest.mark.skip -->
```sh
node write-points.js
```
<!-- END NODE.JS SAMPLE -->
{{% /tab-content %}} {{% tab-content %}}
<!-- BEGIN PYTHON SETUP SAMPLE -->
1. Create a file for your module--for example: `write-points.py`.
1. In `write-points.py`, enter the following sample code to write data in
batching mode:
```python
import os
from influxdb_client_3 import (
InfluxDBClient3, InfluxDBError, Point, WritePrecision,
WriteOptions, write_client_options)
host = os.getenv('INFLUX_HOST')
token = os.getenv('INFLUX_TOKEN')
database = os.getenv('INFLUX_DATABASE')
# Create an array of points with tags and fields.
points = [Point("home")
.tag("room", "Kitchen")
.field("temp", 25.3)
.field('hum', 20.2)
.field('co', 9)]
# With batching mode, define callbacks to execute after a successful or
# failed write request.
# Callback methods receive the configuration and data sent in the request.
def success(self, data: str):
print(f"Successfully wrote batch: data: {data}")
def error(self, data: str, exception: InfluxDBError):
print(f"Failed writing batch: config: {self}, data: {data} due: {exception}")
def retry(self, data: str, exception: InfluxDBError):
print(f"Failed retry writing batch: config: {self}, data: {data} retry: {exception}")
# Configure options for batch writing.
write_options = WriteOptions(batch_size=500,
flush_interval=10_000,
jitter_interval=2_000,
retry_interval=5_000,
max_retries=5,
max_retry_delay=30_000,
exponential_base=2)
# Create an options dict that sets callbacks and WriteOptions.
wco = write_client_options(success_callback=success,
error_callback=error,
retry_callback=retry,
write_options=write_options)
# Instantiate a synchronous instance of the client with your
# InfluxDB credentials and write options, such as Gzip threshold, default tags,
# and timestamp precision. Default precision is nanosecond ('ns').
with InfluxDBClient3(host=host,
token=token,
database=database,
write_client_options=wco) as client:
client.write(points, write_precision='s')
```
1. To run the module and write the data to your {{< product-name >}} database,
enter the following command in your terminal:
<!-- pytest.mark.skip -->
```sh
python write-points.py
```
<!-- END PYTHON SETUP PROJECT -->
{{% /tab-content %}} {{< /tabs-wrapper >}}
The sample code does the following:
<!-- vale InfluxDataDocs.v3Schema = NO -->
1. Instantiates a client configured with the InfluxDB URL and API token.
1. Constructs `home`
[measurement](/influxdb/clustered/reference/glossary/#measurement)
`Point` objects.
1. Sends data as line protocol format to InfluxDB and waits for the response.
1. If the write succeeds, logs the success message to stdout; otherwise, logs
the failure message and error details.
1. Closes the client to release resources.
<!-- vale InfluxDataDocs.v3Schema = YES -->
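Stripped of client specifics, the batching callbacks above follow a simple pattern: the writer sends a batch and routes the outcome to the matching callback. A minimal sketch of that control flow (an illustration, not the library's implementation):

```python
def write_batch(batch, send, on_success, on_error):
    """Send a batch and route the outcome to the matching callback."""
    try:
        send(batch)
    except Exception as exc:
        on_error(batch, exc)
    else:
        on_success(batch)

results = []
write_batch(['home,room=Kitchen temp=25.3 1641024000'],
            send=lambda batch: None,  # stand-in for the HTTP write request
            on_success=lambda batch: results.append(('ok', batch)),
            on_error=lambda batch, exc: results.append(('err', batch)))
print(results)
```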


@ -5,7 +5,7 @@
"description": "InfluxDB documentation",
"license": "MIT",
"devDependencies": {
"@vvago/vale": "^3.0.7",
"@vvago/vale": "^3.4.2",
"autoprefixer": ">=10.2.5",
"hugo-extended": ">=0.101.0",
"husky": "^9.0.11",
@ -20,13 +20,14 @@
},
"scripts": {
"prepare": "husky",
"test": "./test.sh"
"lint-vale": ".ci/vale/vale.sh",
"lint-staged": "lint-staged --relative"
},
"lint-staged": {
"*.{js,css,md}": "prettier --write",
"content/influxdb/cloud-dedicated/**/*.md": "npx vale --config=content/influxdb/cloud-dedicated/.vale.ini --minAlertLevel=error --output=line",
"content/influxdb/cloud-serverless/**/*.md": "npx vale --config=content/influxdb/cloud-serverless/.vale.ini --minAlertLevel=error --output=line",
"content/influxdb/clustered/**/*.md": "npx vale --config=content/influxdb/clustered/.vale.ini --minAlertLevel=error --output=line",
"content/influxdb/{cloud,v2,telegraf}/**/*.md": "npx vale --config=.vale.ini --minAlertLevel=error --output=line"
}
"main": "index.js",
"module": "main.js",
"directories": {
"test": "test"
},
"keywords": [],
"author": ""
}


@ -1,94 +0,0 @@
# If you need more help, visit the Dockerfile reference guide at
# https://docs.docker.com/engine/reference/builder/
# Starting from a Go base image is easier than setting up the Go environment later.
FROM golang:latest
RUN apt-get update && apt-get upgrade -y && apt-get install -y \
curl \
git \
gpg \
jq \
maven \
nodejs \
npm \
wget
# Install test runner dependencies
RUN apt-get install -y \
python3 \
python3-pip \
python3-venv
RUN ln -s /usr/bin/python3 /usr/bin/python
# Create a virtual environment for Python to avoid conflicts with the system Python and having to use the --break-system-packages flag when installing packages with pip.
RUN python -m venv /opt/venv
# Enable venv
ENV PATH="/opt/venv/bin:$PATH"
# Prevents Python from writing pyc files.
ENV PYTHONDONTWRITEBYTECODE=1
# Keeps Python from buffering stdout and stderr to avoid situations where
# the application crashes without emitting any logs due to buffering.
ENV PYTHONUNBUFFERED=1
# RUN --mount=type=cache,target=/root/.cache/node_modules \
# --mount=type=bind,source=package.json,target=package.json \
# npm install
# Copy docs test directory to the image.
WORKDIR /usr/src/app
RUN chmod -R 755 .
ARG SOURCE_DIR
COPY data ./data
# Install parse_yaml.sh and parse YAML config files into dotenv files to be used by tests.
RUN /bin/bash -c 'curl -sO https://raw.githubusercontent.com/mrbaseman/parse_yaml/master/src/parse_yaml.sh'
RUN /bin/bash -c 'source ./parse_yaml.sh && parse_yaml ./data/products.yml > .env.products'
COPY test ./test
WORKDIR /usr/src/app/test
COPY shared/fixtures ./tmp/data
# Some Python test dependencies (pytest-dotenv and pytest-codeblocks) aren't
# available as packages in apt-cache, so use pip to download dependencies in a
# separate step and use Docker's caching.
# Leverage a cache mount to /root/.cache/pip to speed up subsequent builds.
# Leverage a bind mount to requirements.txt to avoid having to copy them into
# this layer.
RUN --mount=type=cache,target=/root/.cache/pip \
--mount=type=bind,source=test/requirements.txt,target=requirements.txt \
pip install -Ur requirements.txt
COPY test/setup/run-tests.sh /usr/local/bin/run-tests.sh
RUN chmod +x /usr/local/bin/run-tests.sh
# Install Telegraf for use in tests.
# Follow the install instructions (https://docs.influxdata.com/telegraf/v1/install/?t=curl), except for sudo (which isn't available in Docker).
# influxdata-archive_compat.key GPG fingerprint:
# 9D53 9D90 D332 8DC7 D6C8 D3B9 D8FF 8E1F 7DF8 B07E
RUN wget -q https://repos.influxdata.com/influxdata-archive_compat.key
RUN echo '393e8779c89ac8d958f81f942f9ad7fb82a25e133faddaf92e15b16e6ac9ce4c influxdata-archive_compat.key' | sha256sum -c && cat influxdata-archive_compat.key | gpg --dearmor | tee /etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg > /dev/null
RUN echo 'deb [signed-by=/etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg] https://repos.influxdata.com/debian stable main' | tee /etc/apt/sources.list.d/influxdata.list
RUN apt-get update && apt-get install -y telegraf
# Install influx v2 Cloud CLI for use in tests.
# Follow the install instructions(https://portal.influxdata.com/downloads/), except for sudo (which isn't available in Docker).
# influxdata-archive_compat.key GPG fingerprint:
# 9D53 9D90 D332 8DC7 D6C8 D3B9 D8FF 8E1F 7DF8 B07E
RUN wget -q https://repos.influxdata.com/influxdata-archive_compat.key
RUN echo '393e8779c89ac8d958f81f942f9ad7fb82a25e133faddaf92e15b16e6ac9ce4c influxdata-archive_compat.key' | sha256sum -c && cat influxdata-archive_compat.key | gpg --dearmor | tee /etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg > /dev/null
RUN echo 'deb [signed-by=/etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg] https://repos.influxdata.com/debian stable main' | tee /etc/apt/sources.list.d/influxdata.list
RUN apt-get update && apt-get install -y influxdb2-cli
ENV TEMP_DIR=./tmp
ENTRYPOINT [ "run-tests.sh" ]
CMD [""]

test.sh

@ -1,66 +0,0 @@
#! /bin/bash
# Path: test.sh
# Description:
# This script copies content files for testing and runs tests on those temporary copies.
# The temporary files are shared between the host and the Docker container
# using a bind mount configured in compose.yaml.
#
# Docker compose now has an experimental file watch feature
# (https://docs.docker.com/compose/file-watch/) that is likely preferable to the
# strategy here.
#
# Usage:
# The default behavior is to test all *.md files that have been added or modified in the current branch, effectively:
#
# `git diff --name-only --diff-filter=AM --relative master | grep -E '\.md$' | ./test.sh`
#
# To specify files to test, in your terminal command line, pass a file pattern as the only argument to the script--for example:
#
# sh test.sh ./content/**/*.md
##
paths="$1"
target=./test/tmp
testrun=./test/.test-run.txt
mkdir -p "$target"
cat /dev/null > "$testrun"
rm -rf "$target"/*
# Check if the user provided a path to copy.
if [ -z "$paths" ]; then
echo "No path provided. Running tests for *.md files that have been added or modified in the current branch."
paths=$(git diff --name-only --diff-filter=AM HEAD | \
grep -E '\.md$')
if [ -z "$paths" ]; then
echo "No added or modified .md files found in the current branch."
exit 1
fi
else
paths=$(find "$paths" -type f -name '*.md')
fi
# Log the list of files to be tested and copy them to the test directory.
echo "$paths" >> "$testrun"
echo "$paths" | rsync -arv --files-from=- . "$target"
# Build or rebuild a service if the Dockerfile or build directory have changed, and then run the tests.
docker compose up test
# Troubleshoot tests
# If you want to examine files or run commands for debugging tests,
# start the container and use `exec` to open an interactive shell--for example:
# docker compose run -it --entrypoint=/bin/bash test
# To build and run a new container and debug test failures, use `docker compose run` which runs a one-off command in a new container. Pass additional flags to be used by the container's entrypoint and the test runners it executes--for example:
# docker compose run --rm test -v
# docker compose run --rm test --entrypoint /bin/bash
# Or, pass the flags in the compose file--for example:
# services:
# test:
# build:...
# command: ["-vv"]


@ -8,9 +8,10 @@
**/__pycache__
**/.venv
**/.classpath
**/.config.toml
**/.dockerignore
**/.env
**/.env.influxdbv3
**/.env.*
**/.git
**/.gitignore
**/.project
@ -23,6 +24,7 @@
**/*.jfm
**/bin
**/charts
**/config.toml
**/docker-compose*
**/compose*
**/Dockerfile*

test/.gitignore vendored

@ -1,8 +1,11 @@
/target
/Cargo.lock
config.toml
content
node_modules
tmp
.config*
.env*
**/.env.test
.pytest_cache
.test-run.txt


@ -1,116 +0,0 @@
#!/bin/bash
# This script runs tests for the InfluxDB documentation.
# It is designed to run in a Docker container, where it substitutes placeholder
# values in code samples before running the test suite.
# Function to check if an option is present in the arguments
has_option() {
local target="$1"
shift
for arg in "$@"; do
if [ "$arg" == "$target" ]; then
return 0
fi
done
return 1
}
verbose=0
# Check if "--option" is present in the CMD arguments
if has_option "-v" "$@"; then
verbose=1
echo "Using verbose mode..."
fi
BASE_DIR=$(pwd)
cd $TEMP_DIR
for file in `find . -type f \( -iname '*.md' \)` ; do
if [ -f "$file" ]; then
echo "PRETEST: substituting values in $file"
# Replaces placeholder values with environment variable references.
# Non-language-specific replacements.
sed -i 's|https:\/\/{{< influxdb/host >}}|$INFLUX_HOST|g;
' $file
# Python-specific replacements.
# Use f-strings to identify placeholders in Python while also keeping valid syntax if
# the user replaces the value.
# Remember to import os for your example code.
sed -i 's/f"DATABASE_TOKEN"/os.getenv("INFLUX_TOKEN")/g;
s/f"API_TOKEN"/os.getenv("INFLUX_TOKEN")/g;
s/f"BUCKET_NAME"/os.getenv("INFLUX_DATABASE")/g;
s/f"DATABASE_NAME"/os.getenv("INFLUX_DATABASE")/g;
s|f"{{< influxdb/host >}}"|os.getenv("INFLUX_HOSTNAME")|g;
s|f"RETENTION_POLICY_NAME\|RETENTION_POLICY"|"autogen"|g;
' $file
# Shell-specific replacements.
## In JSON Heredoc
sed -i 's|"orgID": "ORG_ID"|"orgID": "$INFLUX_ORG"|g;
s|"name": "BUCKET_NAME"|"name": "$INFLUX_DATABASE"|g;' \
$file
sed -i 's/API_TOKEN/$INFLUX_TOKEN/g;
s/ORG_ID/$INFLUX_ORG/g;
s/DATABASE_TOKEN/$INFLUX_TOKEN/g;
s/--bucket-id BUCKET_ID/--bucket-id $INFLUX_BUCKET_ID/g;
s/BUCKET_NAME/$INFLUX_DATABASE/g;
s/DATABASE_NAME/$INFLUX_DATABASE/g;
s/--id DBRP_ID/--id $INFLUX_DBRP_ID/g;
s/get-started/$INFLUX_DATABASE/g;
s/RETENTION_POLICY_NAME\|RETENTION_POLICY/$INFLUX_RETENTION_POLICY/g;
s/CONFIG_NAME/CONFIG_$(shuf -i 0-100 -n1)/g;' \
$file
# v2-specific replacements.
sed -i 's|https:\/\/us-west-2-1.aws.cloud2.influxdata.com|$INFLUX_HOST|g;
s|{{< latest-patch >}}|${influxdb_latest_patches_v2}|g;
s|{{< latest-patch cli=true >}}|${influxdb_latest_cli_v2}|g;' \
$file
# Skip package manager commands.
sed -i 's|sudo dpkg.*$||g;
s|sudo yum.*$||g;' \
$file
# Environment-specific replacements.
sed -i 's|sudo ||g;' \
$file
fi
if [ $verbose -eq 1 ]; then
echo "FILE CONTENTS:"
cat $file
fi
done
# Miscellaneous test setup.
# For macOS samples.
mkdir -p ~/Downloads && rm -rf ~/Downloads/*
# Clean up installed files from previous runs.
gpg -q --batch --yes --delete-key D8FF8E1F7DF8B07E > /dev/null 2>&1
# Activate the Python virtual environment configured in the Dockerfile.
. /opt/venv/bin/activate
# List installed Python dependencies.
pip list
# Run test commands with options provided in the CMD of the Dockerfile.
# pytest rootdir is the directory where pytest.ini is located (/test).
if [ -d ./content/influxdb/cloud-dedicated/ ]; then
echo "Running content/influxdb/cloud-dedicated tests..."
pytest --codeblocks --envfile $BASE_DIR/.env.dedicated ./content/influxdb/cloud-dedicated/ $@
fi
if [ -d ./content/influxdb/cloud-serverless/ ]; then
echo "Running content/influxdb/cloud-serverless tests..."
pytest --codeblocks --envfile $BASE_DIR/.env.serverless ./content/influxdb/cloud-serverless/ $@
fi
if [ -d ./content/telegraf/ ]; then
echo "Running content/telegraf tests..."
pytest --codeblocks --envfile $BASE_DIR/.env.telegraf ./content/telegraf/ $@
fi
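The sed-based placeholder substitution above can be sketched in isolation. The sample file and client code below are hypothetical, not part of the docs content; the sed patterns are the same ones the script applies to Python code samples.

```shell
# Hypothetical, self-contained demo of the Python-specific substitutions:
# f-string placeholders in a docs sample become os.getenv() lookups.
sample=$(mktemp)
cat > "$sample" <<'EOF'
client = InfluxDBClient3(token=f"DATABASE_TOKEN", database=f"DATABASE_NAME")
EOF

# Same patterns as the script's Python-specific sed block (GNU sed).
sed -i 's/f"DATABASE_TOKEN"/os.getenv("INFLUX_TOKEN")/g;
        s/f"DATABASE_NAME"/os.getenv("INFLUX_DATABASE")/g;' "$sample"

cat "$sample"
# client = InfluxDBClient3(token=os.getenv("INFLUX_TOKEN"), database=os.getenv("INFLUX_DATABASE"))
```

Because the replacement keeps valid Python syntax, the sample runs whether the reader pastes in a literal value or the test harness injects one from the environment.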

test/src/prepare-content.sh Normal file

@@ -0,0 +1,105 @@
#!/bin/bash
# This script is used to run tests for the InfluxDB documentation.
# The script is designed to be run in a Docker container. It is used to substitute placeholder values in test files.
TEST_CONTENT="/app/content"
function substitute_placeholders {
find "$TEST_CONTENT" -type f \( -iname '*.md' \) | while read -r file; do
if [ -f "$file" ]; then
# echo "PRETEST: substituting values in $file"
# Replaces placeholder values with environment variable references.
# Non-language-specific replacements.
sed -i 's|https:\/\/{{< influxdb/host >}}|$INFLUX_HOST|g;
' "$file"
# Python-specific replacements.
# Use f-strings to identify placeholders in Python while also keeping valid syntax if
# the user replaces the value.
# Remember to import os for your example code.
sed -i 's/f"DATABASE_TOKEN"/os.getenv("INFLUX_TOKEN")/g;
s/f"API_TOKEN"/os.getenv("INFLUX_TOKEN")/g;
s/f"BUCKET_NAME"/os.getenv("INFLUX_DATABASE")/g;
s/f"DATABASE_NAME"/os.getenv("INFLUX_DATABASE")/g;
s|f"{{< influxdb/host >}}"|os.getenv("INFLUX_HOSTNAME")|g;
s|f"RETENTION_POLICY_NAME\|RETENTION_POLICY"|"autogen"|g;
' "$file"
# Shell-specific replacements.
## In JSON Heredoc
sed -i 's|"orgID": "ORG_ID"|"orgID": "$INFLUX_ORG"|g;
s|"name": "BUCKET_NAME"|"name": "$INFLUX_DATABASE"|g;' \
"$file"
sed -i 's/API_TOKEN/$INFLUX_TOKEN/g;
s/ORG_ID/$INFLUX_ORG/g;
s/DATABASE_TOKEN/$INFLUX_TOKEN/g;
s/--bucket-id BUCKET_ID/--bucket-id $INFLUX_BUCKET_ID/g;
s/BUCKET_NAME/$INFLUX_DATABASE/g;
s/DATABASE_NAME/$INFLUX_DATABASE/g;
s/--id DBRP_ID/--id $INFLUX_DBRP_ID/g;
s/get-started/$INFLUX_DATABASE/g;
s/RETENTION_POLICY_NAME\|RETENTION_POLICY/$INFLUX_RETENTION_POLICY/g;
s/CONFIG_NAME/CONFIG_$(shuf -i 0-100 -n1)/g;' \
"$file"
# v2-specific replacements.
sed -i 's|https:\/\/us-west-2-1.aws.cloud2.influxdata.com|$INFLUX_HOST|g;
s|{{< latest-patch >}}|${influxdb_latest_patches_v2}|g;
s|{{< latest-patch cli=true >}}|${influxdb_latest_cli_v2}|g;' \
"$file"
# Skip package manager commands.
sed -i 's|sudo dpkg.*$||g;
s|sudo yum.*$||g;' \
"$file"
# Environment-specific replacements.
sed -i 's|sudo ||g;' \
"$file"
fi
done
}
setup() {
# Parse YAML config files into dotenv files to be used by tests.
parse_yaml /app/appdata/products.yml > /app/appdata/.env.products
# Miscellaneous test setup.
# For macOS samples.
mkdir -p ~/Downloads && rm -rf ~/Downloads/*
}
prepare_tests() {
TEST_FILES="$*"
# Remove files from the previous run.
rm -rf "$TEST_CONTENT"/*
# Copy the test files to the target directory while preserving the directory structure.
for FILE in $TEST_FILES; do
# Create the parent directories of the destination file
#mkdir -p "$(dirname "$TEST_TARGET/$FILE")"
# Copy the file
rsync -avz --relative --log-file=./test.log "$FILE" /app/
done
substitute_placeholders
}
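The `rsync --relative` call in `prepare_tests` is what preserves the source directory structure under the destination. A small hypothetical demo (temporary paths made up for illustration):

```shell
# Demo of rsync --relative: the full relative source path is recreated
# under the destination, not just the file's basename.
command -v rsync >/dev/null 2>&1 || { echo "rsync not available; skipping"; exit 0; }
src=$(mktemp -d) && dst=$(mktemp -d)
mkdir -p "$src/content/influxdb"
echo "sample" > "$src/content/influxdb/example.md"
cd "$src"
rsync -a --relative "content/influxdb/example.md" "$dst/"
# The file lands at $dst/content/influxdb/example.md, not $dst/example.md.
ls "$dst/content/influxdb"
```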
# If arguments were passed and the first argument is not --files, run the
# command. This is useful for running "/bin/bash" to debug the container.
# If --files is passed, prepare all remaining arguments as test files.
# Otherwise (no arguments), run the setup function and return existing files
# to be tested.
if [ "$1" != "--files" ]; then
echo "Executing $0 without --files argument."
"$@"
fi
if [ "$1" == "--files" ]; then
shift
prepare_tests "$@"
fi
setup
# Return new or existing files to be tested.
find "$TEST_CONTENT" -type f -name '*.md'
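The entrypoint's three-way argument handling can be sketched with a simplified, hypothetical dispatch function (the echo strings stand in for the real actions):

```shell
# Simplified sketch of prepare-content.sh's argument dispatch:
# --files prepares the listed files; any other argument is run as a command
# (e.g. /bin/bash for debugging); setup always runs afterward.
dispatch() {
  if [ "$1" != "--files" ]; then
    echo "run: $*"        # stands in for: "$@"
  fi
  if [ "$1" = "--files" ]; then
    shift
    echo "prepare: $*"    # stands in for: prepare_tests "$@"
  fi
  echo "setup"            # stands in for: setup; find "$TEST_CONTENT" ...
}

dispatch --files a.md b.md
# prepare: a.md b.md
# setup
```

Note that in the real script, `setup` and the final `find` run even after an arbitrary command finishes, since the first branch does not exit.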

yarn.lock

File diff suppressed because it is too large