diff --git a/content/v2.0/collect-data/_index.md b/content/v2.0/collect-data/_index.md
index a08bd6e83..5ea1521d4 100644
--- a/content/v2.0/collect-data/_index.md
+++ b/content/v2.0/collect-data/_index.md
@@ -1,5 +1,6 @@
 ---
-title: Collect data with InfluxDB
+title: Collect data with InfluxDB 2.0
+weight: 2
 description: >
   InfluxDB provides multiple ways to collect time series data including using
   Telegraf, using InfluxDB's built-in data scraper, and using line protocol.
@@ -9,4 +10,8 @@ menu:
     name: Collect data
 ---
 
-_More information about collecting data with InfluxDB v2.0 is coming shortly._
+The following guides give you a taste of how to use the InfluxDB user
+interface (UI) to collect data into InfluxDB 2.0 buckets and display that data
+in dashboards.
+
+{{< children >}}
diff --git a/content/v2.0/collect-data/advanced-telegraf.md b/content/v2.0/collect-data/advanced-telegraf.md
new file mode 100644
index 000000000..8b0a66a99
--- /dev/null
+++ b/content/v2.0/collect-data/advanced-telegraf.md
@@ -0,0 +1,62 @@
---
title: Create a Telegraf configuration
weight: 103
seotitle: Create a Telegraf configuration
description: >
  Use the InfluxDB UI to create Telegraf configurations for collecting metrics.
menu:
  v2_0:
    name: Create a Telegraf configuration
    parent: Collect data
---

{{% note %}}
* Telegraf 1.9.2 or later is required.
* Telegraf 1.9.x is required to use the `https://` option.
* All Telegraf plugins are supported, but only a subset can be configured in the InfluxDB UI.
* If you already have a Telegraf agent (v1.8 or later) running, you can enable the InfluxDB v2 output plugin to "dual land" data in both your existing InfluxDB 1.x instance and your InfluxDB 2.0 instance.
{{% /note %}}

## Create a Telegraf configuration

Follow the steps below to use the InfluxDB UI to create a Telegraf configuration for collecting time series data.

1. Open a web browser to access the InfluxDB 2.0 user interface
   ([localhost:9999](http://localhost:9999)). The **Getting started with InfluxDB 2.0** screen appears.
2. To access the **Telegraf Configurations** page, use either of the following paths:
   * Click **Organizations** in the navigation bar on the far left of the page, click an organization, and then click the **Telegraf** tab.
   * Click **Configure a Data Collector** and then select the **Telegraf** tab.
3. Click **Create Configuration**. The **Data Loading** page appears with the heading "Select Telegraf Plugins to add to your bucket."
4. Select your predefined **Bucket**, select one or more of the available options (**System**, **Docker**, **Kubernetes**, **NGINX**, or **Redis**), and then click **Continue**. The **Plugins to Configure** list appears.
5. Review the list of **Plugins to Configure** for any configuration requirements.
   * Plugins listed with a green check in front of them require no additional configuration.
   * To configure a plugin or access plugin documentation, click the plugin name.
   * Click **Continue** repeatedly to cycle through the information for each plugin and then continue to the next step, or click **Skip to Verify** to proceed directly to the next step.
6. On the **Listen for Telegraf Data** page, complete the three steps to install Telegraf, configure your API token, and start Telegraf on your local instance (a sketch of these commands appears after these steps).
   1. Install the latest Telegraf version.
      * See the note above for specifics about supported versions.
      * Download the latest Telegraf version from the [InfluxData Downloads](https://portal.influxdata.com/downloads/) page.
   2. Configure your API token as an environment variable.
      * The API token grants Telegraf access to your InfluxDB 2.0 instance.
      * Copy the code from this page and run it in a terminal window to set an environment variable with your token.
   3. Start the Telegraf service.
      * Copy the code from this page and run it in a terminal window.
      * When Telegraf starts with the `-config` flag pointing to the provided URL, it downloads the configuration file generated by InfluxDB 2.0 and runs using that configuration.
7. Verify that you have completed the steps correctly by clicking **Listen for Data**. (If you don't see this button, scroll down within the frame or enlarge your browser window.) A **Connection Found!** message appears.
8. Click **Finish**. Your configuration name and the associated bucket name appear in the list of Telegraf connections.
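
For reference, the commands you copy in step 6 look roughly like the following sketch. The exact values are generated for you on the **Listen for Telegraf Data** page; the token and configuration ID below are placeholders you must replace with the values shown in the UI, and a local InfluxDB 2.0 instance at `localhost:9999` is assumed.

```sh
# Make your API token available to Telegraf. The generated Telegraf
# configuration references it as $INFLUX_TOKEN.
export INFLUX_TOKEN=<your-influxdb-auth-token>

# Start Telegraf with the configuration generated by InfluxDB 2.0.
# The exact URL, including the configuration ID, is shown in the UI.
telegraf -config http://localhost:9999/api/v2/telegrafs/<your-telegraf-config-id>
```
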
You have now configured Telegraf plugins that collect data and write it to your InfluxDB buckets.

## Next steps

Now that you have data ready for exploration, you can:

* **Query data.** To get started querying the data stored in InfluxDB buckets using the InfluxDB user interface (UI) and the `influx` command line interface (CLI), see [Query data in InfluxDB](/v2.0/query-data).

* **Process data.** To learn about creating tasks for processing and analyzing data, see [Process data with InfluxDB tasks](/v2.0/process-data).

* **Visualize data.** To learn how to build dashboards for visualizing your data, see [Visualize data with the InfluxDB UI](/v2.0/visualize-data).
diff --git a/content/v2.0/collect-data/scraper-metrics-endpoint.md b/content/v2.0/collect-data/scraper-metrics-endpoint.md
new file mode 100644
index 000000000..1346dcec7
--- /dev/null
+++ b/content/v2.0/collect-data/scraper-metrics-endpoint.md
@@ -0,0 +1,41 @@
---
title: Create a scraper
weight: 102
seotitle: Create a scraper
description: >
  Use the InfluxDB UI to configure a scraper for collecting metrics from InfluxDB instances or third-party systems.
menu:
  v2_0:
    name: Create a scraper
    parent: Collect data
---

An InfluxDB scraper collects data from specified targets at regular intervals and then writes the scraped data to a bucket. Scrapers can collect data from any accessible data source that exposes it in the [Prometheus data format](https://prometheus.io/docs/instrumenting/exposition_formats/), which InfluxDB supports.

To quickly create a scraper in InfluxDB 2.0, use the InfluxDB 2.0 user interface (UI) to specify the target URL and the bucket to store the data in. The scraped data is collected in the Prometheus data format and then transformed to match the InfluxDB data structure in the bucket.

## Use the InfluxDB UI to create a scraper

Follow the steps below to configure an InfluxDB scraper. The steps use the InfluxDB
`/metrics` HTTP endpoint as an example. This endpoint provides InfluxDB-specific metrics in the Prometheus data format.

1. Open a web browser to access the InfluxDB 2.0 user interface
   ([localhost:9999](http://localhost:9999)). The **Getting started with InfluxDB 2.0** screen appears.
2. In the navigation bar on the left, click **Organizations** and then click the name of your organization. The **Organization** page appears for the selected organization.
3. Click the **Scrapers** tab.
   Any existing scrapers appear in a list that shows the **URL** and **BUCKET** for each.
4. Click **Create Scraper**. The **Data Loading** page appears with **Add Scraper Target** options for defining a scraper.
5. From the **Bucket** listing, select the bucket to store the collected data in.
6. Enter the **Target URL** of the Prometheus `/metrics` HTTP endpoint. The default URL value is `http://localhost:9999/metrics`.
7. Click **Finish**. Your new scraper appears in the scraper listing, displaying the values you specified for the **URL** and the **BUCKET**.

The new scraper is now collecting data into the InfluxDB bucket you specified.
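
To preview what the scraper collects, you can request the target endpoint directly. The sketch below is only an illustration: it assumes InfluxDB 2.0 is running locally on port 9999, and the metric names and values in the sample output vary by InfluxDB version.

```sh
# Request the scrape target to see the raw Prometheus-formatted data.
curl http://localhost:9999/metrics

# The response is plain text in the Prometheus exposition format,
# with lines similar to the following:
# # HELP go_goroutines Number of goroutines that currently exist.
# # TYPE go_goroutines gauge
# go_goroutines 42
```

The scraper converts these Prometheus metrics into the InfluxDB data structure when it writes them to the selected bucket.
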
## Next steps

Now that you have data ready to be explored, you can:

* **Query data.** To get started querying the data stored in InfluxDB buckets using the InfluxDB user interface (UI) and the `influx` command line interface (CLI), see [Query data in InfluxDB](/v2.0/query-data).

* **Process data.** To learn about creating tasks for processing and analyzing data, see [Process data with InfluxDB tasks](/v2.0/process-data).

* **Visualize data.** To learn how to build dashboards for visualizing your data, see [Visualize data with the InfluxDB UI](/v2.0/visualize-data).
diff --git a/content/v2.0/collect-data/scraper-quickstart.md b/content/v2.0/collect-data/scraper-quickstart.md
new file mode 100644
index 000000000..01e550015
--- /dev/null
+++ b/content/v2.0/collect-data/scraper-quickstart.md
@@ -0,0 +1,56 @@
---
title: Quick start to data collection
weight: 101
seotitle: Quick start to data collection
description: >
  Use Quick Start to create a scraper that collects InfluxDB metrics into a bucket.
menu:
  v2_0:
    name: Quick start
    parent: Collect data
---

{{% note %}}
The steps below use a page that appears only after you complete the initial configuration described in [Set up InfluxDB](/v2.0/get-started/#setup-influxdb). After you click one of the three options, the page is no longer available.

If you missed the chance to select Quick Start, or you want to learn how to configure a scraper yourself, see [Create a scraper](/v2.0/collect-data/scraper-metrics-endpoint/).
{{% /note %}}

## Use Quick Start to collect InfluxDB metrics

When you start InfluxDB 2.0 for the first time, you are guided through configuring a user, an organization, and a bucket (see [Set up InfluxDB](/v2.0/get-started/#setup-influxdb)). After you complete the setup, the next page displays "Let's start collecting data!" and three options.

On this page, click **Quick Start**.
The following message briefly appears in a pop-up alert:

"The InfluxDB Scraper has been configured for http://localhost:9999/metrics."

Behind the scenes, here's what happened:

1. InfluxDB 2.0 configured a scraper named "InfluxDB Scraper."

   * The target URL points to the `/metrics` HTTP endpoint of your
     local InfluxDB instance: `http://localhost:9999/metrics`. This endpoint exposes metrics about your InfluxDB instance in the [Prometheus data format](https://prometheus.io/docs/instrumenting/exposition_formats/).
   * InfluxDB stores the scraped data in the default bucket created during [the initial setup procedure](/v2.0/get-started/#setup-influxdb).

2. The InfluxDB Scraper immediately began collecting InfluxDB data and
   writing it into your bucket.
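
If you'd like to confirm that the scraped metrics are reaching the bucket, one option is a quick query from the command line. This is a sketch only: it assumes the `influx` CLI is installed and authenticated with your API token, `example-org` and `example-bucket` are placeholders for the organization and bucket you created during setup, and flags can vary slightly between InfluxDB 2.0 releases.

```sh
# List a few of the most recent records written by the InfluxDB Scraper.
influx query -o example-org 'from(bucket: "example-bucket") |> range(start: -5m) |> limit(n: 5)'
```
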
To see a sample of the InfluxDB metrics exposed in the Prometheus text-based format, use either of the following methods:

* In a web browser, open the InfluxDB Scraper URL ([http://localhost:9999/metrics](http://localhost:9999/metrics)).

* In a terminal window, run the following cURL command:

  ```sh
  curl http://localhost:9999/metrics
  ```

## Next steps

Now that you have data ready for exploration, you can:

* **Query data.** To get started querying the data stored in InfluxDB buckets using the InfluxDB user interface (UI) and the `influx` command line interface (CLI), see [Query data in InfluxDB](/v2.0/query-data).

* **Process data.** To learn about creating tasks for processing and analyzing data, see [Process data with InfluxDB tasks](/v2.0/process-data).

* **Visualize data.** To learn how to build dashboards for visualizing your data, see [Visualize data with the InfluxDB UI](/v2.0/visualize-data).