Add passive node (#4008)
* add initial passive node description
* update glossary with passive node
* added updates
* updated passive node definition and show command
* made requested changes
* apply review suggestions to content/enterprise_influxdb/v1.9/concepts/glossary.md, content/enterprise_influxdb/v1.9/features/clustering-features.md, and content/enterprise_influxdb/v1.9/tools/influxd-ctl.md

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>
parent 5d40d8e3b8
commit ee02f88a71
@ -9,79 +9,11 @@ menu:
parent: Concepts
---

## data node

A node that runs the data service.

For high availability, installations must have at least two data nodes.
The number of data nodes in your cluster must be the same as your highest
replication factor.
Any replication factor greater than two gives you additional fault tolerance and
query capacity within the cluster.

Data node sizes will depend on your needs.
The Amazon EC2 m4.large or m4.xlarge are good starting points.

Related entries: [data service](#data-service), [replication factor](#replication-factor)

## data service

Stores all time series data and handles all writes and queries.

Related entries: [data node](#data-node)

## meta node

A node that runs the meta service.

For high availability, installations must have three meta nodes.
Meta nodes can be very modestly sized instances like an EC2 t2.micro or even a
nano.
For additional fault tolerance, installations may use five meta nodes; the
number of meta nodes must be an odd number.

Related entries: [meta service](#meta-service)

## meta service

The consistent data store that keeps state about the cluster, including which
servers, databases, users, continuous queries, retention policies, subscriptions,
and blocks of time exist.

Related entries: [meta node](#meta-node)

## replication factor

The attribute of the retention policy that determines how many copies of the
data are stored in the cluster.
InfluxDB replicates data across `N` data nodes, where `N` is the replication
factor.

To maintain data availability for queries, the replication factor should be less
than or equal to the number of data nodes in the cluster:

* Data is fully available when the replication factor is greater than the
number of unavailable data nodes.
* Data may be unavailable when the replication factor is less than the number of
unavailable data nodes.

Any replication factor greater than two gives you additional fault tolerance and
query capacity within the cluster.

## web console

Legacy user interface for InfluxDB Enterprise.

This interface has been deprecated; we recommend using [Chronograf](/{{< latest "chronograf" >}}/introduction/).

If you are transitioning from the Enterprise Web Console to Chronograf, see how to [transition from the InfluxDB Web Admin Interface](/chronograf/v1.7/guides/transition-web-admin-interface/).

<!-- --- -->

## aggregation

An InfluxQL function that returns an aggregated value across a set of points.
For a complete list of the available and upcoming aggregations,
see [InfluxQL functions](/enterprise_influxdb/v1.9/query_language/functions/#aggregations).

Related entries: [function](#function), [selector](#selector), [transformation](#transformation)
@ -107,6 +39,27 @@ See [Continuous Queries](/enterprise_influxdb/v1.9/query_language/continuous_que

Related entries: [function](#function)

## data node

A node that runs the data service.

For high availability, installations must have at least two data nodes.
The number of data nodes in your cluster must be the same as your highest
replication factor.
Any replication factor greater than two gives you additional fault tolerance and
query capacity within the cluster.

Data node sizes will depend on your needs.
The Amazon EC2 m4.large or m4.xlarge are good starting points.

Related entries: [data service](#data-service), [replication factor](#replication-factor)

## data service

Stores all time series data and handles all writes and queries.

Related entries: [data node](#data-node)

## database

A logical container for users, retention policies, continuous queries, and time series data.
@ -191,6 +144,26 @@ Measurements are strings.

Related entries: [field](#field), [series](#series)

## meta node

A node that runs the meta service.

For high availability, installations must have three meta nodes.
Meta nodes can be very modestly sized instances like an EC2 t2.micro or even a
nano.
For additional fault tolerance, installations may use five meta nodes; the
number of meta nodes must be an odd number.

Related entries: [meta service](#meta-service)

## meta service

The consistent data store that keeps state about the cluster, including which
servers, databases, users, continuous queries, retention policies, subscriptions,
and blocks of time exist.

Related entries: [meta node](#meta-node)

## metastore

Contains internal information about the status of the system.
@ -198,9 +171,6 @@ The metastore contains the user information, databases, retention policies, shar

Related entries: [database](#database), [retention policy](#retention-policy-rp), [user](#user)

<!--
## permission
-->

## node

An independent `influxd` process.
@ -211,6 +181,15 @@ Related entries: [server](#server)

The local server's nanosecond timestamp.

## passive node (experimental)

Passive nodes act as load balancers: they accept write calls, perform shard lookup and RPC calls on active data nodes, and distribute writes to active data nodes. They do not own shards or store data locally.

**Note:** This is an experimental feature.

<!--
## permission
-->

## point

In InfluxDB, a point represents a single data record, similar to a row in a SQL database table. Each point:
@ -238,12 +217,23 @@ Related entries: [point](#point), [schema](#schema), [values per second](#values
An operation that retrieves data from InfluxDB.
See [Data Exploration](/enterprise_influxdb/v1.9/query_language/explore-data/), [Schema Exploration](/enterprise_influxdb/v1.9/query_language/explore-schema/), [Database Management](/enterprise_influxdb/v1.9/query_language/manage-database/).
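For example, a query can be issued from the `influx` CLI; the database and measurement names below are hypothetical:

```bash
# Return the last hour of idle-CPU values from a hypothetical "telegraf" database
influx -database 'telegraf' -execute 'SELECT "usage_idle" FROM "cpu" WHERE time > now() - 1h'
```

Running this requires a reachable InfluxDB instance; substitute your own database and measurement names.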
## replication factor (RF)

The attribute of the retention policy that determines how many copies of data to
concurrently store (or retain) in the cluster. Replicating copies ensures that
data is available when one or more data nodes are unavailable.
InfluxDB replicates data across `N` data nodes, where `N` is the replication
factor.

For three nodes or less, the default replication factor equals the number of data nodes.
For more than three nodes, the default replication factor is 3. To change the default
replication factor, specify the replication factor `n` in the retention policy.
To maintain data availability for queries, the replication factor should be less
than or equal to the number of data nodes in the cluster:

* Data is fully available when the replication factor is greater than the
number of unavailable data nodes.
* Data may be unavailable when the replication factor is less than the number of
unavailable data nodes.

Any replication factor greater than two gives you additional fault tolerance and
query capacity within the cluster.

Related entries: [duration](#duration), [node](#node),
[retention policy](#retention-policy-rp)
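As a sketch (the database and policy names are hypothetical), the replication factor is set with the `REPLICATION` clause of an InfluxQL retention policy statement:

```bash
# Create a retention policy on a hypothetical database "mydb" that keeps
# two copies of each point in the cluster for four weeks
influx -execute 'CREATE RETENTION POLICY "rf_two" ON "mydb" DURATION 4w REPLICATION 2'
```

The statement must be run against a live cluster; adjust the duration and replication values to your own availability needs.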
@ -457,11 +447,10 @@ Points in the WAL can be queried, and they persist through a system reboot. On p

Related entries: [tsm](#tsm-time-structured-merge-tree)

<!--
## web console

Legacy user interface for InfluxDB Enterprise.

This interface has been deprecated. We recommend using [Chronograf](/{{< latest "chronograf" >}}/introduction/).

## shard

## shard group
-->
If you are transitioning from the Enterprise Web Console to Chronograf, see how to [transition from the InfluxDB Web Admin Interface](/chronograf/v1.7/guides/transition-web-admin-interface/).
@ -133,3 +133,20 @@ InfluxDB Enterprise clusters support backup and restore functionality starting w
version 0.7.1.
See [Backup and restore](/enterprise_influxdb/v1.9/administration/backup-and-restore/) for
more information.

## Passive node setup (experimental)

Passive nodes act as load balancers: they accept write calls, perform shard lookup and RPC calls on active data nodes, and distribute writes to active data nodes. They do not own shards or store data locally.

Use this feature when you have a replication factor (RF) of 2 or more and your CPU usage is consistently above 80 percent. The passive feature lets you scale a cluster when you can no longer scale vertically, and it is especially useful if you experience a large amount of hinted handoff growth. The passive node writes the hinted handoff queue to its own disk, and then communicates periodically with the appropriate node until it can send the queue contents there.

Best practices when using an active-passive node setup:

- Use when you have a large cluster setup, generally 8 or more nodes.
- Keep the ratio of active to passive nodes between 1:1 and 2:1.
- Passive nodes should receive all writes.

For more information, see how to [add a passive node to a cluster](/enterprise_influxdb/v1.9/tools/influxd-ctl/#add-a-passive-node-to-the-cluster).

{{% note %}}
**Note:** This feature is experimental and available only in InfluxDB Enterprise.
{{% /note %}}
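As a sketch of the recommended ingest path (the hostname, database, and measurement below are hypothetical), clients write to the passive node's HTTP endpoint and let it forward the data:

```bash
# Send all ingest to the passive node, which looks up the target shards
# and distributes the writes to the active data nodes
curl -XPOST "http://cluster-passive-node-01:8086/write?db=mydb" \
  --data-binary 'cpu,host=server-01 usage_idle=92.6'
```

This uses the standard InfluxDB 1.x `/write` endpoint and line protocol; only the target hostname changes when a passive node fronts the cluster.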
@ -149,9 +149,9 @@ If authentication is enabled and the `influxd-ctl` command provides the incorrec
Error: authorization failed.
```

### Commands

### `add-data`

Adds a data node to a cluster.
By default, `influxd-ctl` adds the specified data node to the local meta node's cluster.
@ -165,7 +165,14 @@ add-data <data-node-TCP-bind-address>

Resources: [Installation](/enterprise_influxdb/v1.9/installation/data_node_installation/)

##### Arguments

Optional arguments are in brackets.

##### `[ -p ]`

Add a passive node to an Enterprise cluster.

### Examples

###### Add a data node to a cluster using the local meta node
@ -189,6 +196,13 @@ $ influxd-ctl -bind cluster-meta-node-01:8091 add-data cluster-data-node:8088
Added data node 3 at cluster-data-node:8088
```

###### Add a passive node to a cluster

**Passive nodes** act as load balancers: they accept write calls, perform shard lookup and RPC calls on active data nodes, and distribute writes to active data nodes. They do not own shards or store data locally. If you are using passive nodes, they should be the write endpoint for all data ingest. A cluster can have multiple passive nodes.

```bash
influxd-ctl add-data -p <passive-data-node-TCP-bind-address>
```
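A concrete invocation might look like the following; the hostname is hypothetical:

```bash
# Register a passive data node with the local meta node's cluster
influxd-ctl add-data -p cluster-data-node-04:8088
```

Run this from a meta node (or pass `-bind` to target one); the node then appears with `Passive` set to `true` in `influxd-ctl show` output.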

### `add-meta`

Adds a meta node to a cluster.
@ -1083,6 +1097,18 @@ cluster-node-01:8091 1.9.x-c1.9.x {}
cluster-node-02:8091 1.9.x-c1.9.x {}
cluster-node-03:8091 1.9.x-c1.9.x {}
```

##### Show active and passive data nodes in a cluster

In this example, the `show` command output displays that the cluster includes a passive data node.

```bash
Data Nodes
==========
ID   TCP Address             Version        Labels   Passive
4    cluster-node_0_1:8088   1.9.6-c1.9.6   {}       false
5    cluster-node_1_1:8088   1.9.6-c1.9.6   {}       true
6    cluster-node_2_1:8088   1.9.6-c1.9.6   {}       false
```

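Output of this kind comes from running the `show` command against any meta node in the cluster, for example:

```bash
influxd-ctl show
```

The exact node IDs, addresses, and versions depend on your cluster; the `Passive` column only appears when the feature is in use.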
### `show-shards`