diff --git a/assets/styles/layouts/_top-nav.scss b/assets/styles/layouts/_top-nav.scss
index 6784f969e..838e3dee6 100644
--- a/assets/styles/layouts/_top-nav.scss
+++ b/assets/styles/layouts/_top-nav.scss
@@ -47,7 +47,7 @@
display: inline-block;
position: relative;
align-self: flex-start;
- margin-left: .5rem;
+ margin-left: .25rem;
color: $g20-white;
height: 2rem;
@include gradient($version-selector-gradient);
diff --git a/content/enterprise_influxdb/v1.5/_index.md b/content/enterprise_influxdb/v1.5/_index.md
new file mode 100644
index 000000000..30a8de87c
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/_index.md
@@ -0,0 +1,36 @@
+---
+title: InfluxDB Enterprise 1.5 documentation
+description: Technical documentation for InfluxDB Enterprise, which adds clustering, high availability, fine-grained authorization, and more to InfluxDB OSS. Documentation includes release notes, what's new, guides, concepts, features, and administration.
+aliases:
+ - /enterprise/v1.5/
+
+menu:
+ enterprise_influxdb:
+ name: v1.5
+ identifier: enterprise_influxdb_1_5
+ weight: 9
+---
+
+InfluxDB Enterprise offers highly scalable clusters on your infrastructure,
+along with a management UI.
+Use InfluxDB Enterprise to:
+
+* Monitor your cluster
+* Manage queries
+* Manage users
+* Explore and visualize your data
+
+If you're interested in working with InfluxDB Enterprise, visit
+[InfluxPortal](https://portal.influxdata.com/) to sign up, get a license key,
+and get started!
diff --git a/content/enterprise_influxdb/v1.5/about-the-project/_index.md b/content/enterprise_influxdb/v1.5/about-the-project/_index.md
new file mode 100644
index 000000000..9b5f1b018
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/about-the-project/_index.md
@@ -0,0 +1,42 @@
+---
+title: About the InfluxDB Enterprise project
+menu:
+ enterprise_influxdb_1_5:
+ name: About the project
+ weight: 10
+---
+
+## [InfluxDB Enterprise release notes](/enterprise_influxdb/v1.5/about-the-project/release-notes-changelog/)
+
+The [InfluxDB Enterprise release notes](/enterprise_influxdb/v1.5/about-the-project/release-notes-changelog/) include details about features, bug fixes, and breaking changes for current and earlier InfluxDB Enterprise releases.
+
+## [InfluxData Software License Subscription Agreement (SLSA)](https://www.influxdata.com/legal/slsa/)
+
+InfluxDB Enterprise is available with a commercial license based on the [InfluxData Software License Subscription Agreement (SLSA)](https://www.influxdata.com/legal/slsa/). [Contact sales for more information](https://www.influxdata.com/contact-sales/).
+
+## Third party software
+
+InfluxData products contain third-party software, that is, copyrighted, patented, or otherwise legally protected
+software of third parties that is incorporated into InfluxData products.
+
+Third party suppliers make no representation nor warranty with respect to such third party software or any portion thereof.
+Third party suppliers assume no liability for any claim that might arise with respect to such third party software, nor for a
+customer’s use of or inability to use the third party software.
+
+In addition to [third party software incorporated in InfluxDB](http://docs.influxdata.com/influxdb/v1.5/about_the_project/#third_party), InfluxDB Enterprise incorporates the following additional third party software:
+
+| Third Party / Open Source Software - Description | License Type |
+| ---------------------------------------- | ---------------------------------------- |
+| [Go language library for exporting performance and runtime metrics to external metrics systems (i.e., statsite, statsd)](https://github.com/armon/go-metrics) (armon/go-metrics) | [MIT](https://github.com/armon/go-metrics/blob/master/LICENSE) |
+| [Golang implementation of JavaScript Object Signing and Encryption (JOSE)](https://github.com/dvsekhvalnov/jose2go) (dvsekhvalnov/jose2go) | [MIT](https://github.com/dvsekhvalnov/jose2go/blob/master/LICENSE) |
+| [Collection of useful handlers for the Go net/http package](https://github.com/gorilla/handlers) (gorilla/handlers) | [BSD-2](https://github.com/gorilla/handlers/blob/master/LICENSE) |
+| [A powerful URL router and dispatcher for golang](https://github.com/gorilla/mux) (gorilla/mux) | [BSD-3](https://github.com/gorilla/mux/blob/master/LICENSE) |
+| [Golang connection multiplexing library](https://github.com/hashicorp/yamux/) (hashicorp/yamux) | [Mozilla 2.0](https://github.com/hashicorp/yamux/blob/master/LICENSE) |
+| [Codec - a high performance and feature-rich Idiomatic encode/decode and rpc library for msgpack and Binc](https://github.com/hashicorp/go-msgpack) (hashicorp/go-msgpack) | [BSD-3](https://github.com/hashicorp/go-msgpack/blob/master/LICENSE) |
+| [Go language implementation of the Raft consensus protocol](https://github.com/hashicorp/raft) (hashicorp/raft) | [Mozilla 2.0](https://github.com/hashicorp/raft/blob/master/LICENSE) |
+| [Raft backend implementation using BoltDB](https://github.com/hashicorp/raft-boltdb) (hashicorp/raft-boltdb) | [Mozilla 2.0](https://github.com/hashicorp/raft-boltdb/blob/master/LICENSE) |
+| [Pretty printing for Go values](https://github.com/kr/pretty) (kr/pretty) | [MIT](https://github.com/kr/pretty/blob/master/License) |
+| [Miscellaneous functions for formatting text](https://github.com/kr/text) (kr/text) | [MIT](https://github.com/kr/text/blob/main/License) |
+| [Some helpful packages for writing Go apps](https://github.com/markbates/going) (markbates/going) | [MIT](https://github.com/markbates/going/blob/master/LICENSE.txt) |
+
+***Thanks to the open source community for your contributions!***
diff --git a/content/enterprise_influxdb/v1.5/about-the-project/release-notes-changelog.md b/content/enterprise_influxdb/v1.5/about-the-project/release-notes-changelog.md
new file mode 100644
index 000000000..20d81b386
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/about-the-project/release-notes-changelog.md
@@ -0,0 +1,543 @@
+---
+title: InfluxDB Enterprise 1.5 release notes
+
+menu:
+ enterprise_influxdb_1_5:
+ name: Release notes
+ weight: 10
+ parent: About the project
+---
+
+## v1.5.5 [2018-12-19]
+
+This release builds off of the InfluxDB OSS 1.5.5 release. Please see the [InfluxDB OSS release notes](/influxdb/v1.5/about_the_project/releasenotes-changelog/) for more information about the InfluxDB OSS release.
+
+## v1.5.4 [2018-06-21]
+
+This release builds off of the InfluxDB OSS 1.5.4 release. Please see the [InfluxDB OSS release notes](/influxdb/v1.5/about_the_project/releasenotes-changelog/) for more information about the InfluxDB OSS release.
+
+## v1.5.3 [2018-05-25]
+
+This release builds off of the InfluxDB OSS 1.5.3 release. Please see the [InfluxDB OSS release notes](/influxdb/v1.5/about_the_project/releasenotes-changelog/) for more information about the InfluxDB OSS release.
+
+### Features
+
+* Include the query task status in the show queries output.
+* [1.5] Add hh writeBlocked counter.
+
+### Bug fixes
+
+* Hinted-handoff: enforce max queue size per peer node.
+* TSM files not closed when shard deleted.
+
+
+## v1.5.2 [2018-04-12]
+
+This release builds off of the InfluxDB OSS 1.5.2 release. Please see the [InfluxDB OSS release notes](/influxdb/v1.5/about_the_project/releasenotes-changelog/) for more information about the InfluxDB OSS release.
+
+### Bug fixes
+
+* Running backup snapshot with client's retryWithBackoff function.
+* Ensure that conditions are encoded correctly even if the AST is not properly formed.
+
+## v1.5.1 [2018-03-20]
+
+This release builds off of the InfluxDB OSS 1.5.1 release. There are no Enterprise-specific changes.
+Please see the [InfluxDB OSS release notes](/influxdb/v1.5/about_the_project/releasenotes-changelog/) for more information about the InfluxDB OSS release.
+
+## v1.5.0 [2018-03-06]
+
+> ***Note:*** This release builds off of the 1.5 release of InfluxDB OSS. Please see the [InfluxDB OSS release
+> notes](https://docs.influxdata.com/influxdb/v1.5/about_the_project/releasenotes-changelog/) for more information about the InfluxDB OSS release.
+
+For highlights of the InfluxDB 1.5 release, see [What's new in InfluxDB 1.5](/influxdb/v1.5/about_the_project/whats_new/).
+
+### Breaking changes
+
+The default logging format has been changed. See [Logging and tracing in InfluxDB](/influxdb/v1.5/administration/logs/) for details.
+
+### Features
+
+* Add `LastModified` fields to shard RPC calls.
+* As of OSS 1.5 backup/restore interoperability is confirmed.
+* Make InfluxDB Enterprise use OSS digests.
+* Move digest to its own package.
+* Implement distributed cardinality estimation.
+* Add logging configuration to the configuration files.
+* Add AE `/repair` endpoint and update Swagger doc.
+* Update logging calls to take advantage of structured logging.
+* Use actual URL when logging anonymous stats start.
+* Fix auth failures on backup/restore.
+* Add support for passive nodes
+* Implement explain plan for remote nodes.
+* Add message pack format for query responses.
+* Teach show tag values to respect FGA
+* Address deadlock in meta server on 1.3.6
+* Add time support to `SHOW TAG VALUES`
+* Add distributed `SHOW TAG KEYS` with time support
+
+### Bug fixes
+
+* Fix errors occurring when policy or shard keys are missing from the manifest when limited is set to true.
+* Fix spurious `rpc error: i/o deadline exceeded` errors.
+* Elide `stream closed` error from logs and handle `io.EOF` as remote iterator interrupt.
+* Discard remote iterators that label their type as unknown.
+* Do not queue partial write errors to hinted handoff.
+* Segfault in `digest.merge`
+* Meta Node CPU pegged on idle cluster.
+* Data race on `(meta.UserInfo).acl`.
+* Fix wildcard when one shard has no data for a measurement with partial replication.
+* Add `X-Influxdb-Build` to http response headers so users can identify if a response is from an InfluxDB OSS or InfluxDB Enterprise service.
+* Ensure that permissions cannot be set on non-existent databases.
+* Switch back to using `cluster-tracing` config option to enable meta HTTP request logging.
+* `influxd-ctl restore -newdb` can't restore data.
+* Close connection for remote iterators after EOF to avoid writer hanging indefinitely.
+* Data race reading `Len()` in connection pool.
+* Use InfluxData fork of `yamux`. This update reduces overall memory usage when streaming large amounts of data.
+* Fix group by marshaling in the IteratorOptions.
+* Meta service data race.
+* Read for the interrupt signal from the stream before creating the iterators.
+* Show retention policies requires the `createdatabase` permission
+* Handle UTF files with a byte order mark when reading the configuration files.
+* Remove the pidfile after the server has exited.
+* Resend authentication credentials on redirect.
+* Updated yamux resolves race condition when SYN is successfully sent and a write timeout occurs.
+* Fix no license message.
+
+## v1.3.9 [2018-01-19]
+
+### Upgrading -- for users of the TSI preview
+
+If you have been using the TSI preview with 1.3.6 or earlier 1.3.x releases, you will need to follow the upgrade steps to continue using the TSI preview. Unfortunately, these steps cannot be executed while the cluster is operating --
+so it will require downtime.
+
+### Bugfixes
+
+* Elide `stream closed` error from logs and handle `io.EOF` as remote iterator interrupt.
+* Fix spurious `rpc error: i/o deadline exceeded` errors
+* Discard remote iterators that label their type as unknown.
+* Do not queue `partial write` errors to hinted handoff.
+
+## v1.3.8 [2017-12-04]
+
+### Upgrading -- for users of the TSI preview
+
+If you have been using the TSI preview with 1.3.6 or earlier 1.3.x releases, you will need to follow the upgrade steps to continue using the TSI preview. Unfortunately, these steps cannot be executed while the cluster is operating -- so it will require downtime.
+
+### Bug fixes
+
+- Updated `yamux` resolves race condition when SYN is successfully sent and a write timeout occurs.
+- Resend authentication credentials on redirect.
+- Fix wildcard when one shard has no data for a measurement with partial replication.
+- Fix spurious `rpc error: i/o deadline exceeded` errors.
+
+## v1.3.7 [2017-10-26]
+
+### Upgrading -- for users of the TSI preview
+
+The 1.3.7 release resolves a defect that created duplicate tag values in TSI indexes. See issues
+[#8995](https://github.com/influxdata/influxdb/pull/8995) and [#8998](https://github.com/influxdata/influxdb/pull/8998).
+However, upgrading to 1.3.7 causes compactions to fail; see [Issue #9025](https://github.com/influxdata/influxdb/issues/9025).
+We will provide a utility that will allow TSI indexes to be rebuilt,
+resolving the corruption possible in releases prior to 1.3.7. If you are using the TSI preview,
+**you should not upgrade to 1.3.7 until this utility is available**.
+We will update this release note with operational steps once the utility is available.
+
+#### Bug fixes
+
+ - Read for the interrupt signal from the stream before creating the iterators.
+ - Address Deadlock issue in meta server on 1.3.6
+ - Fix logger panic associated with anti-entropy service and manually removed shards.
+
+## v1.3.6 [2017-09-28]
+
+### Bug fixes
+
+- Fix "group by" marshaling in the IteratorOptions.
+- Address meta service data race condition.
+- Fix race condition when writing points to remote nodes.
+- Use InfluxData fork of yamux. This update reduces overall memory usage when streaming large amounts of data.
+ Contributed back to the yamux project via: https://github.com/hashicorp/yamux/pull/50
+- Address data race reading Len() in connection pool.
+
+## v1.3.5 [2017-08-29]
+
+This release builds off of the 1.3.5 release of OSS InfluxDB.
+Please see the OSS [release notes](/influxdb/v1.3/about_the_project/releasenotes-changelog/#v1-3-5-2017-08-29) for more information about the OSS releases.
+
+## v1.3.4 [2017-08-23]
+
+This release builds off of the 1.3.4 release of OSS InfluxDB. Please see the [OSS release notes](https://docs.influxdata.com/influxdb/v1.3/about_the_project/releasenotes-changelog/) for more information about the OSS releases.
+
+### Bug fixes
+
+- Close connection for remote iterators after EOF to avoid writer hanging indefinitely
+
+## v1.3.3 [2017-08-10]
+
+This release builds off of the 1.3.3 release of OSS InfluxDB. Please see the [OSS release notes](https://docs.influxdata.com/influxdb/v1.3/about_the_project/releasenotes-changelog/) for more information about the OSS releases.
+
+### Bug fixes
+
+- Connections are not closed when `CreateRemoteIterator` RPC returns no iterators, resolved memory leak
+
+## v1.3.2 [2017-08-04]
+
+### Bug fixes
+
+- `influxd-ctl restore -newdb` unable to restore data.
+- Improve performance of `SHOW TAG VALUES`.
+- Show a subset of config settings in `SHOW DIAGNOSTICS`.
+- Switch back to using cluster-tracing config option to enable meta HTTP request logging.
+- Fix remove-data error.
+
+## v1.3.1 [2017-07-20]
+
+#### Bug fixes
+
+- Show a subset of config settings in SHOW DIAGNOSTICS.
+- Switch back to using cluster-tracing config option to enable meta HTTP request logging.
+- Fix remove-data error.
+
+## v1.3.0 [2017-06-21]
+
+### Configuration changes
+
+#### `[cluster]` Section
+
+* `max-remote-write-connections` is deprecated and can be removed.
+* NEW: `pool-max-idle-streams` and `pool-max-idle-time` configure the RPC connection pool.
+ See `config.sample.toml` for descriptions of these new options.
+
+### Removals
+
+The admin UI is removed and unusable in this release. The `[admin]` configuration section will be ignored.
+
+#### Features
+
+- Allow non-admin users to execute SHOW DATABASES
+- Add default config path search for influxd-meta.
+- Reduce cost of admin user check for clusters with large numbers of users.
+- Store HH segments by node and shard
+- Remove references to the admin console.
+- Refactor RPC connection pool to multiplex multiple streams over single connection.
+- Report RPC connection pool statistics.
+
+#### Bug fixes
+
+- Fix security escalation bug in subscription management.
+- Certain permissions should not be allowed at the database context.
+- Make the time in `influxd-ctl`'s `copy-shard-status` argument human readable.
+- Fix `influxd-ctl remove-data -force`.
+- Ensure replaced data node correctly joins meta cluster.
+- Delay metadata restriction on restore.
+- Writing points outside of retention policy does not return error
+- Decrement internal database's replication factor when a node is removed.
+
+## v1.2.5 [2017-05-16]
+
+This release builds off of the 1.2.4 release of OSS InfluxDB.
+Please see the OSS [release notes](/influxdb/v1.3/about_the_project/releasenotes-changelog/#v1-2-4-2017-05-08) for more information about the OSS releases.
+
+#### Bug fixes
+
+- Fix issue where the [`ALTER RETENTION POLICY` query](/influxdb/v1.3/query_language/database_management/#modify-retention-policies-with-alter-retention-policy) does not update the default retention policy.
+- Hinted-handoff: remote write errors containing `partial write` are considered droppable.
+- Fix the broken `influxd-ctl remove-data -force` command.
+- Fix security escalation bug in subscription management.
+- Prevent certain user permissions from having a database-specific scope.
+- Reduce the cost of the admin user check for clusters with large numbers of users.
+- Fix hinted-handoff remote write batching.
+
+## v1.2.2 [2017-03-15]
+
+This release builds off of the 1.2.1 release of OSS InfluxDB.
+Please see the OSS [release notes](https://github.com/influxdata/influxdb/blob/1.2/CHANGELOG.md#v121-2017-03-08) for more information about the OSS release.
+
+### Configuration changes
+
+The following configuration settings may need to be changed before [upgrading](/enterprise_influxdb/v1.3/administration/upgrading/) to 1.2.2 from prior versions.
+
+#### shard-writer-timeout
+
+We've removed the data node's `shard-writer-timeout` configuration option from the `[cluster]` section.
+As of version 1.2.2, the system sets `shard-writer-timeout` internally.
+The configuration option can be removed from the [data node configuration file](/enterprise_influxdb/v1.3/administration/configuration/#data-node-configuration).
+
+#### retention-autocreate
+
+In versions 1.2.0 and 1.2.1, the `retention-autocreate` setting appears in both the meta node and data node configuration files.
+To disable retention policy auto-creation, users on version 1.2.0 and 1.2.1 must set `retention-autocreate` to `false` in both the meta node and data node configuration files.
+
+In version 1.2.2, we’ve removed the `retention-autocreate` setting from the data node configuration file.
+As of version 1.2.2, users may remove `retention-autocreate` from the data node configuration file.
+To disable retention policy auto-creation, set `retention-autocreate` to `false` in the meta node configuration file only.
+
+This change only affects users who have disabled the `retention-autocreate` option and have installed version 1.2.0 or 1.2.1.
+
+#### Bug fixes
+
+##### Backup and restore
+
+
+- Prevent the `shard not found` error by making [backups](/enterprise_influxdb/v1.3/guides/backup-and-restore/#backup) skip empty shards
+- Prevent the `shard not found` error by making [restore](/enterprise_influxdb/v1.3/guides/backup-and-restore/#restore) handle empty shards
+- Ensure that restores from an incremental backup correctly handle file paths
+- Allow incremental backups with restrictions (for example, those using the `-db` or `-rp` flags) to be stored in the same directory
+- Support restores on meta nodes that are not the raft leader
+
+##### Hinted handoff
+
+
+- Fix issue where dropped writes were not recorded when the [hinted handoff](/enterprise_influxdb/v1.3/concepts/clustering/#hinted-handoff) queue reached the maximum size
+- Prevent the hinted handoff from becoming blocked if it encounters field type errors
+
+##### Other
+
+
+- Return partial results for the [`SHOW TAG VALUES` query](/influxdb/v1.3/query_language/schema_exploration/#show-tag-values) even if the cluster includes an unreachable data node
+- Return partial results for the [`SHOW MEASUREMENTS` query](/influxdb/v1.3/query_language/schema_exploration/#show-measurements) even if the cluster includes an unreachable data node
+- Prevent a panic when the system fails to process points
+- Ensure that cluster hostnames can be case insensitive
+- Update the `retryCAS` code to wait for a newer snapshot before retrying
+- Serialize access to the meta client and meta store to prevent raft log buildup
+- Remove sysvinit package dependency for RPM packages
+- Make the default retention policy creation an atomic process instead of a two-step process
+- Prevent `influxd-ctl`'s [`join` argument](/enterprise_influxdb/v1.3/features/cluster-commands/#join) from completing a join when the command also specifies the help flag (`-h`)
+- Fix the `influxd-ctl`'s [force removal](/enterprise_influxdb/v1.3/features/cluster-commands/#remove-meta) of meta nodes
+- Update the meta node and data node sample configuration files
+
+## v1.2.1 [2017-01-25]
+
+#### Cluster-specific bug fixes
+
+- Fix panic: Slice bounds out of range
+ Fix how the system removes expired shards.
+- Remove misplaced newlines from cluster logs
+
+## v1.2.0 [2017-01-24]
+
+This release builds off of the 1.2.0 release of OSS InfluxDB.
+Please see the OSS [release notes](https://github.com/influxdata/influxdb/blob/1.2/CHANGELOG.md#v120-2017-01-24) for more information about the OSS release.
+
+### Upgrading
+
+* The `retention-autocreate` configuration option has moved from the meta node configuration file to the [data node configuration file](/enterprise_influxdb/v1.3/administration/configuration/#retention-autocreate-true).
+To disable the auto-creation of retention policies, set `retention-autocreate` to `false` in your data node configuration files.
+* The previously deprecated `influxd-ctl force-leave` command has been removed. The replacement command to remove a meta node which is never coming back online is [`influxd-ctl remove-meta -force`](/enterprise_influxdb/v1.3/features/cluster-commands/).
+
+#### Cluster-specific Features
+
+- Improve the meta store: any meta store changes are done via a compare and swap
+- Add support for [incremental backups](/enterprise_influxdb/v1.3/guides/backup-and-restore/)
+- Automatically remove any deleted shard groups from the data store
+- Uncomment the section headers in the default [configuration file](/enterprise_influxdb/v1.3/administration/configuration/)
+- Add InfluxQL support for [subqueries](/influxdb/v1.3/query_language/data_exploration/#subqueries)
+
+#### Cluster-specific bug fixes
+
+- Update dependencies with Godeps
+- Fix a data race in meta client
+- Ensure that the system removes the relevant [user permissions and roles](/enterprise_influxdb/v1.3/features/users/) when a database is dropped
+- Fix a couple typos in demo [configuration file](/enterprise_influxdb/v1.3/administration/configuration/)
+- Make optional the version protobuf field for the meta store
+- Remove the override of GOMAXPROCS
+- Remove an unused configuration option (`dir`) from the backend
+- Fix a panic around processing remote writes
+- Return an error if a remote write has a field conflict
+- Drop points in the hinted handoff that (1) have field conflict errors (2) have [`max-values-per-tag`](/influxdb/v1.3/administration/config/#max-values-per-tag-100000) errors
+- Remove the deprecated `influxd-ctl force-leave` command
+- Fix issue where CQs would stop running if the first meta node in the cluster stops
+- Fix logging in the meta httpd handler service
+- Fix issue where subscriptions send duplicate data for [Continuous Query](/influxdb/v1.3/query_language/continuous_queries/) results
+- Fix the output for `influxd-ctl show-shards`
+- Send the correct RPC response for `ExecuteStatementRequestMessage`
+
+## v1.1.5 [2017-04-28]
+
+### Bug fixes
+
+- Prevent certain user permissions from having a database-specific scope.
+- Fix security escalation bug in subscription management.
+
+## v1.1.3 [2017-02-27]
+
+This release incorporates the changes in the 1.1.4 release of OSS InfluxDB.
+Please see the OSS [changelog](https://github.com/influxdata/influxdb/blob/v1.1.4/CHANGELOG.md) for more information about the OSS release.
+
+### Bug fixes
+
+- Delay when a node listens for network connections until after all requisite services are running. This prevents queries to the cluster from failing unnecessarily.
+- Allow users to set the `GOMAXPROCS` environment variable.
+
+## v1.1.2 [internal]
+
+This release was an internal release only.
+It incorporates the changes in the 1.1.3 release of OSS InfluxDB.
+Please see the OSS [changelog](https://github.com/influxdata/influxdb/blob/v1.1.3/CHANGELOG.md) for more information about the OSS release.
+
+## v1.1.1 [2016-12-06]
+
+This release builds off of the 1.1.1 release of OSS InfluxDB.
+Please see the OSS [release notes](https://github.com/influxdata/influxdb/blob/1.1/CHANGELOG.md#v111-2016-12-06) for more information about the OSS release.
+
+This release is built with Go (golang) 1.7.4.
+It resolves a security vulnerability reported in Go (golang) version 1.7.3 which impacts all
+users currently running on the macOS platform, powered by the Darwin operating system.
+
+#### Cluster-specific bug fixes
+
+- Fix hinted-handoff issue: Fix record size larger than max size
+ If a Hinted Handoff write appended a block that was larger than the maximum file size, the queue would get stuck because the maximum size was not updated. When reading the block back out during processing, the system would return an error because the block size was larger than the file size -- which indicates a corrupted block.
+
+## v1.1.0 [2016-11-14]
+
+This release builds off of the 1.1.0 release of InfluxDB OSS.
+Please see the OSS [release notes](https://github.com/influxdata/influxdb/blob/1.1/CHANGELOG.md#v110-2016-11-14) for more information about the OSS release.
+
+### Upgrading
+
+* The 1.1.0 release of OSS InfluxDB has some important [configuration changes](https://github.com/influxdata/influxdb/blob/1.1/CHANGELOG.md#configuration-changes) that may affect existing clusters.
+* The `influxd-ctl join` command has been renamed to `influxd-ctl add-meta`. If you have existing scripts that use `influxd-ctl join`, they will need to use `influxd-ctl add-meta` or be updated to use the new cluster setup command.
+
+#### Cluster setup
+
+The `influxd-ctl join` command has been changed to simplify cluster setups. To join a node to a cluster, you can run `influxd-ctl join <meta-address>`, and we will attempt to detect and add any meta or data node process running on the hosts automatically. The previous `join` command exists as `add-meta` now. If it's the first node of a cluster, the meta address argument is optional.
+
+#### Logging
+
+Switches to journald logging on systemd systems. Logs are no longer sent to `/var/log/influxdb` on systemd systems.
+
+#### Cluster-specific features
+
+- Add a configuration option for setting gossiping frequency on data nodes
+- Allow for detailed insight into the Hinted Handoff queue size by adding `queueBytes` to the hh\_processor statistics
+- Add authentication to the meta service API
+- Update Go (golang) dependencies: Fix Go Vet and update circle Go Vet command
+- Simplify the process for joining nodes to a cluster
+- Include the node's version number in the `influxd-ctl show` output
+- Return an error if there are additional arguments after `influxd-ctl show`
+ Fixes any confusion between the correct command for showing detailed shard information (`influxd-ctl show-shards`) and the incorrect command (`influxd-ctl show shards`)
+
+#### Cluster-specific bug fixes
+
+- Return an error if getting the latest snapshot takes longer than 30 seconds
+- Remove any expired shards from the `/show-shards` output
+- Respect the [`pprof-enabled` configuration setting](/enterprise_influxdb/v1.3/administration/configuration/#pprof-enabled-true) and enable it by default on meta nodes
+- Respect the [`pprof-enabled` configuration setting](/enterprise_influxdb/v1.3/administration/configuration/#pprof-enabled-true-1) on data nodes
+- Use the data reference instead of `Clone()` during read-only operations for performance purposes
+- Prevent the system from double-collecting cluster statistics
+- Ensure that the meta API redirects to the cluster leader when it gets the `ErrNotLeader` error
+- Don't overwrite cluster users with existing OSS InfluxDB users when migrating an OSS instance into a cluster
+- Fix a data race in the raft store
+- Allow large segment files (> 10MB) in the Hinted Handoff
+- Prevent `copy-shard` from retrying if the `copy-shard` command was killed
+- Prevent a hanging `influxd-ctl add-data` command by making data nodes check for meta nodes before they join a cluster
+
+## v1.0.4 [2016-10-19]
+
+#### Cluster-specific bug fixes
+
+- Respect the [Hinted Handoff settings](/enterprise_influxdb/v1.3/administration/configuration/#hinted-handoff) in the configuration file
+- Fix expanding regular expressions when all shards do not exist on the node that's handling the request
+
+## v1.0.3 [2016-10-07]
+
+#### Cluster-specific bug fixes
+
+- Fix a panic in the Hinted Handoff: `lastModified`
+
+## v1.0.2 [2016-10-06]
+
+This release builds off of the 1.0.2 release of OSS InfluxDB. Please see the OSS [release notes](https://github.com/influxdata/influxdb/blob/1.0/CHANGELOG.md#v102-2016-10-05) for more information about the OSS release.
+
+#### Cluster-specific bug fixes
+
+- Prevent double read-lock in the meta client
+- Fix a panic around a corrupt block in Hinted Handoff
+- Fix issue where `systemctl enable` would throw an error if the symlink already exists
+
+## v1.0.1 [2016-09-28]
+
+This release builds off of the 1.0.1 release of OSS InfluxDB.
+Please see the OSS [release notes](https://github.com/influxdata/influxdb/blob/1.0/CHANGELOG.md#v101-2016-09-26)
+for more information about the OSS release.
+
+#### Cluster-specific bug fixes
+
+* Balance shards correctly with a restore
+* Fix a panic in the Hinted Handoff: `runtime error: invalid memory address or nil pointer dereference`
+* Ensure meta node redirects to leader when removing data node
+* Fix a panic in the Hinted Handoff: `runtime error: makeslice: len out of range`
+* Update the data node configuration file so that only the minimum configuration options are uncommented
+
+## v1.0.0 [2016-09-07]
+
+This release builds off of the 1.0.0 release of OSS InfluxDB.
+Please see the OSS [release notes](https://github.com/influxdata/influxdb/blob/1.0/CHANGELOG.md#v100-2016-09-07) for more information about the OSS release.
+
+Breaking Changes:
+
+* The keywords `IF`, `EXISTS`, and `NOT` were removed for this release. This means you no longer need to specify `IF NOT EXISTS` for `DROP DATABASE` or `IF EXISTS` for `CREATE DATABASE`. Using these keywords will return a query error.
+* `max-series-per-database` was added with a default of 1M but can be disabled by setting it to `0`. Existing databases with series that exceed this limit will continue to load, but writes that would create new series will fail.
+
+### Hinted handoff
+
+A number of changes to hinted handoff are included in this release:
+
+* Truncating only the corrupt block in a corrupted segment to minimize data loss
+* Immediately queue writes in hinted handoff if there are still writes pending to prevent inconsistencies in shards
+* Remove hinted handoff queues when data nodes are removed to eliminate manual cleanup tasks
+
+### Performance
+
+* `SHOW MEASUREMENTS` and `SHOW TAG VALUES` have been optimized to work better for multiple nodes and shards
+* `DROP` and `DELETE` statements run in parallel and more efficiently and should not leave the system in an inconsistent state
+
+### Security
+
+The Cluster API used by `influxd-ctl` can now be protected with SSL certs.
+
+### Cluster management
+
+Data nodes that can no longer be restarted can now be forcefully removed from the cluster using `influxd-ctl remove-data -force <tcp-addr>`. This should only be run if a graceful removal is not possible.
+
+Backup and restore has been updated to fix issues and refine existing capabilities.
+
+#### Cluster-specific features
+
+- Add the Users method to control client
+- Add a `-force` option to the `influxd-ctl remove-data` command
+- Disable the logging of `stats` service queries
+- Optimize the `SHOW MEASUREMENTS` and `SHOW TAG VALUES` queries
+- Update the Go (golang) package library dependencies
+- Minimize the amount of data-loss in a corrupted Hinted Handoff file by truncating only the last corrupted segment instead of the entire file
+- Log a write error when the Hinted Handoff queue is full for a node
+- Remove Hinted Handoff queues on data nodes when the target data nodes are removed from the cluster
+- Add unit testing around restore in the meta store
+- Add full TLS support to the cluster API, including the use of self-signed certificates
+- Improve backup/restore to allow for partial restores to a different cluster or to a database with a different database name
+- Update the shard group creation logic to be balanced
+- Keep raft log to a minimum to prevent replaying large raft logs on startup
+
+#### Cluster-specific bug fixes
+
+- Remove bad connections from the meta executor connection pool
+- Fix a panic in the meta store
+- Fix a panic caused when a shard group is not found
+- Fix a corrupted Hinted Handoff
+- Ensure that any imported OSS admin users have all privileges in the cluster
+- Ensure that `max-select-series` is respected
+- Handle the `peer already known` error
+- Fix Hinted handoff panic around segment size check
+- Drop Hinted Handoff writes if they contain field type inconsistencies
+
+
+# Web Console
+
+## DEPRECATED: Enterprise Web Console
+
+The Enterprise Web Console has officially been deprecated and will be eliminated entirely by the end of 2017.
+No additional features will be added and no additional bug fix releases are planned.
+
+For browser-based access to InfluxDB Enterprise, [Chronograf](/chronograf/latest/introduction) is now the recommended tool to use.
diff --git a/content/enterprise_influxdb/v1.5/administration/_index.md b/content/enterprise_influxdb/v1.5/administration/_index.md
new file mode 100644
index 000000000..1b0dcaa2d
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/administration/_index.md
@@ -0,0 +1,22 @@
+---
+title: Administering InfluxDB Enterprise
+description: This section includes technical documentation on InfluxDB Enterprise administration, including backup and restore, configuration, logs, security, and upgrading.
+menu:
+ enterprise_influxdb_1_5:
+ name: Administration
+ weight: 70
+---
+
+## [Configuring InfluxDB Enterprise](/enterprise_influxdb/v1.5/administration/configuration/)
+
+## [Upgrading InfluxDB Enterprise clusters](/enterprise_influxdb/v1.5/administration/upgrading/)
+
+## [Cluster management utilities](/enterprise_influxdb/v1.5/administration/cluster-commands/)
+
+## [Backing up and restoring in InfluxDB Enterprise](/enterprise_influxdb/v1.5/administration/backup-and-restore/)
+
+## [Logging and tracing in InfluxDB Enterprise](/enterprise_influxdb/v1.5/administration/logs/)
+
+## [Host renaming in InfluxDB Enterprise](/enterprise_influxdb/v1.5/administration/renaming/)
+
+## [Managing InfluxDB Enterprise security](/enterprise_influxdb/v1.5/administration/security/)
diff --git a/content/enterprise_influxdb/v1.5/administration/anti-entropy.md b/content/enterprise_influxdb/v1.5/administration/anti-entropy.md
new file mode 100644
index 000000000..ec45e2c4d
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/administration/anti-entropy.md
@@ -0,0 +1,48 @@
+---
+title: Anti-entropy service in InfluxDB Enterprise
+aliases:
+ - /enterprise_influxdb/v1.5/guides/anti-entropy/
+menu:
+ enterprise_influxdb_1_5:
+ name: Anti-entropy service
+ weight: 40
+ parent: Administration
+---
+
+## Introduction
+
+The anti-entropy service tries to ensure that each data node has all the shards that it needs according to the meta store.
+This guide covers some of the basic situations where the anti-entropy service takes effect.
+
+## Concepts
+
+The anti-entropy service examines each node to see whether it has all the shards that the meta store says it should have,
+and if any shards are missing, the service will copy existing shards from owners to the node that is missing the shard.
+
+By default, the service checks every 30 seconds, as configured in the [`anti-entropy.check-interval`](/enterprise_influxdb/v1.5/administration/configuration/#check-interval-30s) setting.
+
+The anti-entropy service can only address missing shards when there is at least one copy of the shard available.
+In other words, so long as new and healthy nodes are introduced, a replication factor of 2 can recover from one missing node, a replication factor of 3 can recover from two missing nodes, and so on.
+A replication factor of 1 cannot be recovered by the anti-entropy service if the shard goes missing.
+
+## Configuration
+
+Anti-entropy configuration options are available in the [`[anti-entropy]`](/enterprise_influxdb/v1.5/administration/configuration/#anti-entropy) section of your `influxdb.conf`.
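+
+As a rough sketch, the relevant section of a data node's `influxdb.conf` might look like the following. Only `check-interval` is described on this page; treat any other option names as assumptions to confirm against the configuration reference.
+
+```
+[anti-entropy]
+  # How often each data node is checked for missing shards (30s is the default noted above).
+  check-interval = "30s"
+```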
+
+## Scenarios
+
+This section covers some of the common use cases for the anti-entropy service.
+
+### Scenario 1: Replacing an unresponsive data node
+
+If a data node suddenly disappears, e.g. due to a catastrophic hardware failure, as soon as a new data node is online, the anti-entropy service will copy the correct shards to the new replacement node. The time it takes for the copying to complete is determined by the number of shards to be copied and how much data is stored in each.
+
+*View the [Replacing Data Nodes](/enterprise_influxdb/v1.5/guides/replacing-nodes/#replacing-data-nodes-in-an-influxdb-enterprise-cluster) documentation for instructions on replacing data nodes in your InfluxDB Enterprise cluster.*
+
+### Scenario 2: Replacing a machine that is running a data node
+
+Perhaps you are replacing a machine that is being decommissioned, upgrading hardware, or something else entirely.
+The anti-entropy service will automatically copy shards to the new machines.
+
+Once you have successfully run the `influxd-ctl update-data` command, you are free to shut down the retired node without causing any interruption to the cluster.
+The anti-entropy process will continue copying the appropriate shards from the remaining replicas in the cluster.
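+
+For illustration, a hedged sketch of the command involved, assuming `data-node-old` and `data-node-new` are placeholder hostnames for the retired and replacement machines:
+
+```
+# Tell the cluster that the data node has moved to the new machine.
+influxd-ctl update-data data-node-old:8088 data-node-new:8088
+```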
diff --git a/content/enterprise_influxdb/v1.5/administration/backup-and-restore.md b/content/enterprise_influxdb/v1.5/administration/backup-and-restore.md
new file mode 100644
index 000000000..ce6cf81cb
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/administration/backup-and-restore.md
@@ -0,0 +1,377 @@
+---
+title: Backing up and restoring in InfluxDB Enterprise
+description: Overview and use of backup and restore utilities in InfluxDB Enterprise.
+aliases:
+ - /enterprise/v1.5/guides/backup-and-restore/
+menu:
+ enterprise_influxdb_1_5:
+ name: Backing up and restoring
+ weight: 40
+ parent: Administration
+---
+
+## Overview
+
+The primary use cases for backup and restore are:
+
+* Disaster recovery
+* Debugging
+* Restoring clusters to a consistent state
+
+InfluxDB Enterprise supports backing up and restoring data in a cluster, a single database, a single database and retention policy, and a
+single [shard](/influxdb/v1.5/concepts/glossary/#shard).
+
+> **Note:** You can use the [new `backup` and `restore` utilities in InfluxDB OSS 1.5](/influxdb/v1.5/administration/backup_and_restore/) to:
+> * Restore InfluxDB Enterprise 1.5 backup files to InfluxDB OSS 1.5.
+> * Back up InfluxDB OSS 1.5 data that can be restored in InfluxDB Enterprise 1.5.
+
+## Backup
+
+A backup creates a copy of the [metastore](/influxdb/v1.5/concepts/glossary/#metastore) and [shard](/influxdb/v1.5/concepts/glossary/#shard) data at that point in time and stores the copy in the specified directory.
+All backups also include a manifest, a JSON file describing what was collected during the backup.
+The filenames reflect the UTC timestamp of when the backup was created, for example:
+
+* Metastore backup: `20060102T150405Z.meta`
+* Shard data backup: `20060102T150405Z.<shard_id>.tar.gz`
+* Manifest: `20060102T150405Z.manifest`
+
+Backups can be full (using the `-full` flag) or incremental (default).
+Incremental backups create a copy of the metastore and shard data that have changed since the last incremental backup.
+If there are no existing incremental backups, the system automatically performs a complete backup.
+
+Restoring a full backup and restoring an incremental backup require different syntax.
+To prevent issues with [restore](#restore), keep full backups and incremental backups in separate directories.
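+
+For example, one way to keep the two backup types apart (the directory names are only illustrative and must already exist):
+
+```
+# Incremental backups accumulate in one directory...
+influxd-ctl backup ./backups/incremental
+
+# ...while full backups are written to a separate directory.
+influxd-ctl backup -full ./backups/full
+```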
+
+### Syntax
+
+```
+influxd-ctl [ global-options ] backup [ arguments ] <path-to-backup-directory>
+```
+
+#### Global options
+
+Please see the [influxd-ctl documentation](/enterprise_influxdb/v1.5/administration/cluster-commands/#global-options)
+for a complete list of the `influxd-ctl` global options.
+
+#### Arguments
+
+Optional arguments are in brackets.
+
+##### [ `-db <db_name>` ]
+
+The name of the database to back up.
+
+##### [ `-from <data-node-TCP-address>` ]
+
+The data node TCP address to prefer when backing up.
+
+##### [ `-full` ]
+
+The flag to perform a full backup.
+
+##### [ `-rp <rp_name>` ]
+
+The name of the single retention policy to back up (must specify `-db` with `-rp`).
+
+##### [ `-shard <shard_id>` ]
+
+The identifier of the shard to back up.
+
+#### Examples
+
+In this example, the `backup` command is used to store the following incremental backups in different directories.
+The first `backup` command specifies `-db myfirstdb` and the second `backup` command specifies
+different arguments: `-db myfirstdb` and `-rp autogen`.
+```
+influxd-ctl backup -db myfirstdb ./myfirstdb-allrp-backup
+
+influxd-ctl backup -db myfirstdb -rp autogen ./myfirstdb-autogen-backup
+```
+
+Store the following incremental backups in the same directory.
+Both backups specify the same `-db` argument and the same database.
+```
+influxd-ctl backup -db myfirstdb ./myfirstdb-allrp-backup
+
+influxd-ctl backup -db myfirstdb ./myfirstdb-allrp-backup
+```
+
+### Examples
+
+#### Performing an incremental backup
+
+In this example, the `backup` command creates an incremental backup in the current directory.
+If there are any existing backups in the current directory, the system performs an incremental backup.
+If there are no existing backups in the current directory, the system backs up all of the data in InfluxDB.
+
+```
+influxd-ctl backup .
+```
+
+Output:
+```
+$ influxd-ctl backup .
+Backing up meta data... Done. 421 bytes transferred
+Backing up node 7ba671c7644b:8088, db telegraf, rp autogen, shard 4... Done. Backed up in 903.539567ms, 307712 bytes transferred
+Backing up node bf5a5f73bad8:8088, db _internal, rp monitor, shard 1... Done. Backed up in 138.694402ms, 53760 bytes transferred
+Backing up node 9bf0fa0c302a:8088, db _internal, rp monitor, shard 2... Done. Backed up in 101.791148ms, 40448 bytes transferred
+Backing up node 7ba671c7644b:8088, db _internal, rp monitor, shard 3... Done. Backed up in 144.477159ms, 39424 bytes transferred
+Backed up to . in 1.293710883s, transferred 441765 bytes
+$ ls
+20160803T222310Z.manifest 20160803T222310Z.s1.tar.gz 20160803T222310Z.s3.tar.gz
+20160803T222310Z.meta 20160803T222310Z.s2.tar.gz 20160803T222310Z.s4.tar.gz
+```
+
+#### Performing a full backup
+
+In this example, the `backup` command creates a full backup in a specific directory.
+The directory must already exist.
+
+```
+influxd-ctl backup -full <path-to-backup-directory>
+```
+
+Output:
+```
+$ influxd-ctl backup -full backup_dir
+Backing up meta data... Done. 481 bytes transferred
+Backing up node :8088, db _internal, rp monitor, shard 1... Done. Backed up in 33.207375ms, 238080 bytes transferred
+Backing up node :8088, db telegraf, rp autogen, shard 2... Done. Backed up in 15.184391ms, 95232 bytes transferred
+Backed up to backup_dir in 51.388233ms, transferred 333793 bytes
+~# ls backup_dir
+20170130T184058Z.manifest
+20170130T184058Z.meta
+20170130T184058Z.s1.tar.gz
+20170130T184058Z.s2.tar.gz
+```
+
+#### Performing an incremental backup on a single database
+
+In this example, the `backup` command is used to point at a remote meta server and back up only one database into a given directory that must already exist.
+
+```
+influxd-ctl -bind <meta-node-address>:8091 backup -db <db_name> <path-to-backup-directory>
+```
+
+Output:
+
+```
+$ influxd-ctl -bind 2a1b7a338184:8091 backup -db telegraf ./telegrafbackup
+Backing up meta data... Done. 318 bytes transferred
+Backing up node 7ba671c7644b:8088, db telegraf, rp autogen, shard 4... Done. Backed up in 997.168449ms, 399872 bytes transferred
+Backed up to ./telegrafbackup in 1.002358077s, transferred 400190 bytes
+$ ls ./telegrafbackup
+20160803T222811Z.manifest 20160803T222811Z.meta 20160803T222811Z.s4.tar.gz
+```
+
+## Restore
+
+Restore a backup to an existing cluster or a new cluster.
+By default, a restore writes to databases using the backed-up data's [replication factor](/influxdb/v1.5/concepts/glossary/#replication-factor).
+An alternate replication factor can be specified with the `-newrf` argument when restoring a single database.
+Restore supports both full backups and incremental backups, but the syntax for
+a restore differs depending on the backup type.
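+
+For example, a sketch of restoring a single database from an incremental backup with a reduced replication factor (the database and directory names are placeholders):
+
+```
+influxd-ctl restore -db mydb -newrf 1 ./mydb-incremental-backup/
+```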
+
+> #### Restores from an existing cluster to a new cluster
+Restores from an existing cluster to a new cluster restore the existing cluster's
+[users](/influxdb/v1.5/concepts/glossary/#user), roles,
+[databases](/influxdb/v1.5/concepts/glossary/#database), and
+[continuous queries](/influxdb/v1.5/concepts/glossary/#continuous-query-cq) to
+the new cluster.
+>
+They do not restore Kapacitor [subscriptions](/influxdb/v1.5/concepts/glossary/#subscription).
+In addition, restores to a new cluster drop any data in the new cluster's
+`_internal` database and begin writing to that database anew.
+The restore does not write the existing cluster's `_internal` database to
+the new cluster.
+
+### Syntax
+
+The `restore` command syntax differs depending on whether you are restoring from a full backup or from an incremental backup.
+
+#### Restoring from full backups
+
+Use the syntax below to restore a backup that you made with the `-full` flag.
+Restore the `-full` backup to a new cluster or an existing cluster.
+Note that the existing cluster must contain no data in the affected databases.*
+Performing a restore from a `-full` backup requires the `-full` flag and the path to the full backup's manifest file.
+
+```
+influxd-ctl [ global-options ] restore [ arguments ] -full <path-to-manifest-file>
+```
+
+\* The existing cluster can have data in the `_internal` database, the database
+that the system creates by default.
+The system automatically drops the `_internal` database when it performs a
+complete restore.
+
+#### Restoring from incremental backups
+
+Use the syntax below to restore an incremental backup to a new cluster or an existing cluster.
+Note that the existing cluster must contain no data in the affected databases.*
+Performing a restore from an incremental backup requires the path to the incremental backup's directory.
+
+```
+influxd-ctl [ global-options ] restore [ arguments ] <path-to-backup-directory>
+```
+
+\* The existing cluster can have data in the `_internal` database, the database
+that the system creates by default.
+The system automatically drops the `_internal` database when it performs a complete restore.
+
+#### Global options
+
+Please see the [influxd-ctl documentation](/enterprise_influxdb/v1.5/administration/cluster-commands/#global-options)
+for a complete list of the `influxd-ctl` global options.
+
+#### Arguments
+
+Optional arguments are in brackets.
+
+##### [ `-db <db_name>` ]
+
+The name of the database to restore.
+
+##### [ `-list` ]
+
+The flag to show the contents of the backup.
+
+##### [ `-newdb <newdb_name>` ]
+
+The name of the new database to restore to (must specify with `-db`).
+
+##### [ `-newrf <new_replication_factor>` ]
+
+The new replication factor to restore to (this is capped to the number of data nodes in the cluster).
+
+##### [ `-newrp <newrp_name>` ]
+
+The name of the new retention policy to restore to (must specify with `-rp`).
+
+##### [ `-rp <rp_name>` ]
+
+The name of the single retention policy to restore.
+
+##### [ `-shard <shard_id>` ]
+
+The identifier of the shard to restore.
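+
+To inspect what a backup contains before restoring it, you can combine the `-list` flag above with a backup location; a hedged sketch (the directory name is a placeholder):
+
+```
+influxd-ctl restore -list ./my-incremental-backup/
+```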
+
+### Examples
+
+#### Restoring from an incremental backup
+
+```
+influxd-ctl restore <path-to-backup-directory>
+```
+
+Output:
+
+```
+$ influxd-ctl restore my-incremental-backup/
+Using backup directory: my-incremental-backup/
+Using meta backup: 20170130T231333Z.meta
+Restoring meta data... Done. Restored in 21.373019ms, 1 shards mapped
+Restoring db telegraf, rp autogen, shard 2 to shard 2...
+Copying data to :8088... Copying data to :8088... Done. Restored shard 2 into shard 2 in 61.046571ms, 588800 bytes transferred
+Restored from my-incremental-backup/ in 83.892591ms, transferred 588800 bytes
+```
+
+#### Restoring from a full backup
+
+In this example, the `restore` command uses the `-full` option to restore a full backup.
+
+```
+influxd-ctl restore -full <path-to-manifest-file>
+```
+
+Output:
+
+```
+$ influxd-ctl restore -full my-full-backup/20170131T020341Z.manifest
+Using manifest: my-full-backup/20170131T020341Z.manifest
+Restoring meta data... Done. Restored in 9.585639ms, 1 shards mapped
+Restoring db telegraf, rp autogen, shard 2 to shard 2...
+Copying data to :8088... Copying data to :8088... Done. Restored shard 2 into shard 2 in 48.095082ms, 569344 bytes transferred
+Restored from my-full-backup in 58.58301ms, transferred 569344 bytes
+```
+
+#### Restoring from an incremental backup for a single database and giving the database a new name
+
+```
+influxd-ctl restore -db <db_name> -newdb <newdb_name> <path-to-backup-directory>
+```
+
+Output:
+
+```
+$ influxd-ctl restore -db telegraf -newdb restored_telegraf my-incremental-backup/
+Using backup directory: my-incremental-backup/
+Using meta backup: 20170130T231333Z.meta
+Restoring meta data... Done. Restored in 8.119655ms, 1 shards mapped
+Restoring db telegraf, rp autogen, shard 2 to shard 4...
+Copying data to :8088... Copying data to :8088... Done. Restored shard 2 into shard 4 in 57.89687ms, 588800 bytes transferred
+Restored from my-incremental-backup/ in 66.715524ms, transferred 588800 bytes
+```
+
+#### Restoring from an incremental backup for a database and merging that database into an existing database
+
+In this example, your `telegraf` database was mistakenly dropped, but you have a recent backup, so you've only lost a small amount of data.
+
+If [Telegraf](/telegraf/v1.5/) is still running, it will recreate the `telegraf` database shortly after the database is dropped.
+You might try to restore your `telegraf` backup directly, only to find that you can't:
+
+```
+$ influxd-ctl restore -db telegraf my-incremental-backup/
+Using backup directory: my-incremental-backup/
+Using meta backup: 20170130T231333Z.meta
+Restoring meta data... Error.
+restore: operation exited with error: problem setting snapshot: database already exists
+```
+
+To work around this, you can restore your `telegraf` backup into a new database by specifying the `-db` flag for the source and the `-newdb` flag for the new destination.
+
+```
+$ influxd-ctl restore -db telegraf -newdb restored_telegraf my-incremental-backup/
+Using backup directory: my-incremental-backup/
+Using meta backup: 20170130T231333Z.meta
+Restoring meta data... Done. Restored in 19.915242ms, 1 shards mapped
+Restoring db telegraf, rp autogen, shard 2 to shard 7...
+Copying data to :8088... Copying data to :8088... Done. Restored shard 2 into shard 7 in 36.417682ms, 588800 bytes transferred
+Restored from my-incremental-backup/ in 56.623615ms, transferred 588800 bytes
+```
+
+Then, in the [`influx` client](/influxdb/v1.5/tools/shell/), use an [`INTO` query](/influxdb/v1.5/query_language/data_exploration/#the-into-clause) to copy the data from the new database into the existing `telegraf` database.
+
+```
+$ influx
+> USE restored_telegraf
+Using database restored_telegraf
+> SELECT * INTO telegraf..:MEASUREMENT FROM /.*/ GROUP BY *
+name: result
+------------
+time written
+1970-01-01T00:00:00Z 471
+```
+
+### Common issues when using `restore`
+
+#### Restoring writes information not part of the original backup
+
+If a [restore from an incremental backup](#restoring-from-incremental-backups) does not limit the restore to the same database, retention policy, and shard specified by the backup command, the restore may appear to restore information that was not part of the original backup.
+Backups consist of a shard data backup and a metastore backup.
+The **shard data backup** contains the actual time series data: the measurements, tags, fields, and so on.
+The **metastore backup** contains user information, database names, retention policy names, shard metadata, continuous queries, and subscriptions.
+
+When the system creates a backup, the backup includes:
+
+* the relevant shard data determined by the specified backup arguments.
+* all of the metastore information in the cluster regardless of the specified backup arguments.
+
+Because a backup always includes the complete metastore information, a restore that doesn't include the same arguments specified by the backup command may appear to restore data that were not targeted by the original backup.
+The unintended data, however, include only the metastore information, not the shard data associated with that metastore information.
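+
+To limit this confusion, one approach is to restore with the same scoping arguments used for the backup; a sketch with placeholder names, assuming the target cluster has no existing data in the affected database:
+
+```
+influxd-ctl backup -db mydb -rp autogen ./mydb-autogen-backup
+influxd-ctl restore -db mydb -rp autogen ./mydb-autogen-backup/
+```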
+
+#### Restore backups created before version 1.2.0
+
+InfluxDB Enterprise introduced incremental backups in version 1.2.0.
+To restore a backup created prior to version 1.2.0, use the syntax
+for [restoring from a full backup](#restoring-from-full-backups).
diff --git a/content/enterprise_influxdb/v1.5/administration/cluster-commands.md b/content/enterprise_influxdb/v1.5/administration/cluster-commands.md
new file mode 100644
index 000000000..a4a942979
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/administration/cluster-commands.md
@@ -0,0 +1,997 @@
+---
+title: InfluxDB Enterprise cluster management utilities
+description: Use the "influxd-ctl" and "influx" command line tools to interact with your InfluxDB Enterprise cluster and data.
+aliases:
+ - /enterprise/v1.5/features/cluster-commands/
+menu:
+ enterprise_influxdb_1_5:
+ name: Cluster management utilities
+ weight: 30
+ parent: Administration
+---
+
+InfluxDB Enterprise includes two utilities for interacting with and managing your clusters. The [`influxd-ctl`](#influxd-ctl-cluster-management-utility) utility provides commands for managing your InfluxDB Enterprise clusters. The [`influx` command line interface](#influx-command-line-interface-cli) is used for interacting with and managing your data.
+
+#### Content
+
+* [`influxd-ctl` cluster management utility](#influxd-ctl-cluster-management-utility)
+ * [Syntax](#syntax)
+ * [Global options](#global-options)
+ * [`-auth-type`](#auth-type-none-basic-jwt)
+ * [`-bind`](#bind-hostname-port)
+ * [`-bind-tls`](#bind-tls)
+ * [`-config`](#config-path-to-configuration-file)
+ * [`-pwd`](#pwd-password)
+ * [`-k`](#k)
+ * [`-secret`](#secret-jwt-shared-secret)
+ * [`-user`](#user-username)
+ * [Commands](#commands)
+ * [`add-data`](#add-data)
+ * [`add-meta`](#add-meta)
+ * [`backup`](#backup)
+ * [`copy-shard`](#copy-shard)
+ * [`copy-shard-status`](#copy-shard-status)
+ * [`join`](#join)
+ * [`kill-copy-shard`](#kill-copy-shard)
+ * [`leave`](#leave)
+ * [`remove-data`](#remove-data)
+ * [`remove-meta`](#remove-meta)
+ * [`remove-shard`](#remove-shard)
+ * [`restore`](#restore)
+ * [`show`](#show)
+ * [`show-shards`](#show-shards)
+ * [`update-data`](#update-data)
+ * [`token`](#token)
+ * [`truncate-shards`](#truncate-shards)
+* [`influx` command line interface (CLI)](#influx-command-line-interface-cli)
+
+
+## `influxd-ctl` cluster management utility
+
+Use the `influxd-ctl` cluster management utility to manage your cluster nodes, back up and restore data, and rebalance clusters.
+The `influxd-ctl` utility is available on all [meta nodes](/enterprise_influxdb/v1.5/concepts/glossary/#meta-node).
+
+### Syntax
+
+```
+influxd-ctl [ global-options ] [ arguments ]
+```
+
+### Global options
+
+Optional arguments are in brackets.
+
+#### `[ -auth-type [ none | basic | jwt ] ]`
+
+Specify the type of authentication to use. Default value is `none`.
+
+#### `[ -bind <hostname>:<port> ]`
+
+Specify the bind HTTP address of a meta node to connect to. Default value is `localhost:8091`.
+
+#### `[ -bind-tls ]`
+
+Use TLS. If you have enabled HTTPS, you MUST use this argument in order to connect to the meta node.
+
+#### `[ -config '<path-to-configuration-file>' ]`
+
+Specify the path to the configuration file.
+
+#### `[ -pwd <password> ]`
+
+Specify the user’s password. This argument is ignored if `-auth-type basic` isn’t specified.
+
+#### `[ -k ]`
+
+Skip certificate verification; use this argument with a self-signed certificate. `-k` is ignored if `-bind-tls` isn't specified.
+
+#### `[ -secret <JWT-shared-secret> ]`
+
+Specify the JSON Web Token (JWT) shared secret. This argument is ignored if `-auth-type jwt` isn't specified.
+
+#### `[ -user <username> ]`
+
+Specify the user’s username. This argument is ignored if `-auth-type basic` isn’t specified.
+
+### Examples
+
+The following examples use the `influxd-ctl` utility's [`show` option](#show).
+
+#### Binding to a remote meta node
+
+```
+$ influxd-ctl -bind meta-node-02:8091 show
+```
+
+The `influxd-ctl` utility binds to the meta node with the hostname `meta-node-02` at port `8091`.
+By default, the tool binds to the meta node with the hostname `localhost` at port `8091`.
+
+#### Authenticating with JWT
+
+```
+$ influxd-ctl -auth-type jwt -secret oatclusters show
+```
+The `influxd-ctl` utility uses JWT authentication with the shared secret `oatclusters`.
+
+If authentication is enabled in the cluster's [meta node configuration files](/enterprise_influxdb/v1.5/administration/configuration/#auth-enabled-false) and [data node configuration files](/enterprise_influxdb/v1.5/administration/configuration/#meta-auth-enabled-false) and the `influxd-ctl` command does not include authentication details, the system returns:
+
+```
+Error: unable to parse authentication credentials.
+```
+
+If authentication is enabled and the `influxd-ctl` command provides the incorrect shared secret, the system returns:
+
+```
+Error: signature is invalid.
+```
+
+#### Authenticating with basic authentication
+
+To authenticate a user with basic authentication, use the `-auth-type basic` option on the `influxd-ctl` utility, along with the `-user` and `-pwd` options.
+
+In the following example, the `influxd-ctl` utility uses basic authentication for a cluster user.
+
+```
+$ influxd-ctl -auth-type basic -user admini -pwd mouse show
+```
+
+If authentication is enabled in the cluster's [meta node configuration files](/enterprise_influxdb/v1.5/administration/configuration/#auth-enabled-false) and [data node configuration files](/enterprise_influxdb/v1.5/administration/configuration/#meta-auth-enabled-false) and the `influxd-ctl` command does not include authentication details, the system returns:
+
+```
+Error: unable to parse authentication credentials.
+```
+
+If authentication is enabled and the `influxd-ctl` command provides the incorrect username or password, the system returns:
+
+```
+Error: authorization failed.
+```
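+
+#### Connecting to a cluster over TLS with a self-signed certificate
+
+If HTTPS is enabled on the meta nodes with a self-signed certificate, combine the [`-bind-tls`](#bind-tls) and [`-k`](#k) options described above. The following is a minimal sketch; adjust the bind address for your cluster.
+
+```
+$ influxd-ctl -bind-tls -k show
+```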
+
+## Commands
+
+### `add-data`
+
+Adds a data node to a cluster.
+By default, `influxd-ctl` adds the specified data node to the local meta node's cluster.
+Use `add-data` instead of the [`join` argument](#join) when performing a [production installation](/enterprise_influxdb/v1.5/production_installation/data_node_installation/) of an InfluxEnterprise cluster.
+
+#### Syntax
+
+```
+influxd-ctl add-data <data-node-TCP-bind-address>
+```
+
+Resources: [Production installation](/enterprise_influxdb/v1.5/production_installation/data_node_installation/)
+
+#### Examples
+
+##### Adding a data node to a cluster using the local meta node
+
+In the following example, the `add-data` command contacts the local meta node running at `localhost:8091` and adds a data node to that meta node's cluster.
+The data node has the hostname `cluster-data-node` and runs on port `8088`.
+
+```
+$ influxd-ctl add-data cluster-data-node:8088
+
+Added data node 3 at cluster-data-node:8088
+```
+
+##### Adding a data node to a cluster using a remote meta node
+
+In the following example, the command contacts the meta node running at `cluster-meta-node-01:8091` and adds a data node to that meta node's cluster.
+The data node has the hostname `cluster-data-node` and runs on port `8088`.
+
+```
+$ influxd-ctl -bind cluster-meta-node-01:8091 add-data cluster-data-node:8088
+
+Added data node 3 at cluster-data-node:8088
+```
+
+### `add-meta`
+
+Adds a meta node to a cluster.
+By default, `influxd-ctl` adds the specified meta node to the local meta node's cluster.
+Use `add-meta` instead of the [`join` argument](#join) when performing a [Production Installation](/enterprise_influxdb/v1.5/production_installation/meta_node_installation/) of an InfluxEnterprise cluster.
+
+Resources: [Production installation](/enterprise_influxdb/v1.5/production_installation/data_node_installation/)
+
+#### Syntax
+
+```
+influxd-ctl add-meta <meta-node-TCP-bind-address>
+```
+
+#### Examples
+
+##### Adding a meta node to a cluster using the local meta node
+
+In the following example, the `add-meta` command contacts the local meta node running at `localhost:8091` and adds a meta node to that local meta node's cluster.
+The added meta node has the hostname `cluster-meta-node-03` and runs on port `8091`.
+
+```
+$ influxd-ctl add-meta cluster-meta-node-03:8091
+
+Added meta node 3 at cluster-meta-node-03:8091
+```
+
+##### Adding a meta node to a cluster using a remote meta node
+
+In the following example, the `add-meta` command contacts the meta node running at `cluster-meta-node-01:8091` and adds a meta node to that meta node's cluster.
+The added meta node has the hostname `cluster-meta-node-03` and runs on port `8091`.
+
+```
+$ influxd-ctl -bind cluster-meta-node-01:8091 add-meta cluster-meta-node-03:8091
+
+Added meta node 3 at cluster-meta-node-03:8091
+```
+
+### `backup`
+
+Creates a backup of a cluster's [metastore](/influxdb/v1.5/concepts/glossary/#metastore) and [shard](/influxdb/v1.5/concepts/glossary/#shard) data at that point in time and stores the copy in the specified directory.
+Backups are incremental by default; they create a copy of the metastore and shard data that have changed since the previous incremental backup.
+If there are no existing incremental backups, the system automatically performs a complete backup.
+
+#### Syntax
+
+```
+influxd-ctl backup [ -db <db_name> | -from <data-node-TCP-address> | -full | -rp <rp_name> | -shard <shard_ID> ] <backup-directory>
+```
+#### Arguments
+
+Optional arguments are in brackets.
+
+##### [ `-db <db_name>` ]
+
+The name of the single database to back up.
+
+##### [ `-from <data-node-TCP-address>` ]
+
+The TCP address of the target data node.
+
+##### [ `-full` ]
+
+Flag to perform a [full](/enterprise_influxdb/v1.5/administration/backup-and-restore/#backup) backup.
+
+##### [ `-rp <rp_name>` ]
+
+The name of the [retention policy](/influxdb/v1.5/concepts/glossary/#retention-policy-rp) to back up (requires the `-db` flag).
+
+##### [ `-shard <shard_ID>` ]
+
+The identifier of the shard to back up.
+
+> Restoring a `-full` backup and restoring an incremental backup require different syntax.
+To prevent issues with [`restore`](#restore), keep `-full` backups and incremental backups in separate directories.
+
+Resources: [Backing up and restoring in InfluxDB Enterprise](/enterprise_influxdb/v1.5/administration/backup-and-restore/)
+
+#### Examples
+
+##### Performing an incremental backup
+
+In the following example, the command performs an incremental backup and stores it in the current directory.
+If there are any existing backups in the current directory, the system performs an incremental backup.
+If there aren’t any existing backups in the current directory, the system performs a complete backup of the cluster.
+
+```
+$ influxd-ctl backup .
+```
+
+Output:
+```
+Backing up meta data... Done. 421 bytes transferred
+Backing up node cluster-data-node:8088, db telegraf, rp autogen, shard 4... Done. Backed up in 903.539567ms, 307712 bytes transferred
+Backing up node cluster-data-node:8088, db _internal, rp monitor, shard 1... Done. Backed up in 138.694402ms, 53760 bytes transferred
+Backing up node cluster-data-node:8088, db _internal, rp monitor, shard 2... Done. Backed up in 101.791148ms, 40448 bytes transferred
+Backing up node cluster-data-node:8088, db _internal, rp monitor, shard 3... Done. Backed up in 144.477159ms, 39424 bytes transferred
+Backed up to . in 1.293710883s, transferred 441765 bytes
+
+$ ls
+20160803T222310Z.manifest 20160803T222310Z.s1.tar.gz 20160803T222310Z.s3.tar.gz
+20160803T222310Z.meta 20160803T222310Z.s2.tar.gz 20160803T222310Z.s4.tar.gz
+```
+
+##### Performing a full backup
+
+In the following example, the `backup` command performs a full backup of the cluster and stores the backup in the existing directory `backup_dir`.
+
+```
+$ influxd-ctl backup -full backup_dir
+```
+
+Output:
+
+```
+Backing up meta data... Done. 481 bytes transferred
+Backing up node cluster-data-node:8088, db _internal, rp monitor, shard 1... Done. Backed up in 33.207375ms, 238080 bytes transferred
+Backing up node cluster-data-node:8088, db telegraf, rp autogen, shard 2... Done. Backed up in 15.184391ms, 95232 bytes transferred
+Backed up to backup_dir in 51.388233ms, transferred 333793 bytes
+
+~# ls backup_dir
+20170130T184058Z.manifest
+20170130T184058Z.meta
+20170130T184058Z.s1.tar.gz
+20170130T184058Z.s2.tar.gz
+```
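+
+##### Performing a backup of a single database and retention policy
+
+The `-db` and `-rp` arguments limit the backup to a single retention policy. The following command is a sketch only; it assumes a database named `telegraf`, its `autogen` retention policy, and an existing target directory named `telegraf_backup`.
+
+```
+$ influxd-ctl backup -db telegraf -rp autogen ./telegraf_backup
+```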
+
+### `copy-shard`
+
+Copies a [shard](/influxdb/v1.5/concepts/glossary/#shard) from a source data node to a destination data node.
+
+#### Syntax
+
+```
+influxd-ctl copy-shard <source-TCP-address> <destination-TCP-address> <shard-ID>
+```
+
+Resources: [Rebalancing InfluxDB Enterprise clusters](/enterprise_influxdb/v1.5/guides/rebalance/)
+
+#### Examples
+
+##### Copying a shard from one data node to another data node
+
+In the following example, the `copy-shard` command copies the shard with the id `22` from the data node running at `cluster-data-node-01:8088` to the data node running at `cluster-data-node-02:8088`.
+
+```
+$ influxd-ctl copy-shard cluster-data-node-01:8088 cluster-data-node-02:8088 22
+
+Copied shard 22 from cluster-data-node-01:8088 to cluster-data-node-02:8088
+```
+
+### `copy-shard-status`
+
+Shows all in-progress [copy shard](#copy-shard) operations, including the shard's source node, destination node, database, [retention policy](/influxdb/v1.5/concepts/glossary/#retention-policy-rp), shard ID, total size, current size, and the operation's start time.
+
+#### Syntax
+
+```
+influxd-ctl copy-shard-status
+```
+
+#### Examples
+
+##### Displaying all in-progress copy-shard operations
+
+In this example, the `copy-shard-status` command returns one in-progress copy-shard operation.
+The system is copying shard `34` from `cluster-data-node-02:8088` to `cluster-data-node-03:8088`.
+Shard `34` is associated with the `telegraf` database and the `autogen` retention policy.
+The `TotalSize` and `CurrentSize` columns are reported in bytes.
+
+```
+$ influxd-ctl copy-shard-status
+
+Source Dest Database Policy ShardID TotalSize CurrentSize StartedAt
+cluster-data-node-02:8088 cluster-data-node-03:8088 telegraf autogen 34 119624324 119624324 2017-06-22 23:45:09.470696179 +0000 UTC
+```
+
+### `join`
+
+Joins a meta node and/or data node to a cluster.
+By default, `influxd-ctl` joins the local meta node and/or data node into a new cluster.
+Use `join` instead of the [`add-meta`](#add-meta) or [`add-data`](#add-data) arguments when performing a [QuickStart Installation](/enterprise_influxdb/v1.5/quickstart_installation/cluster_installation/) of an InfluxEnterprise cluster.
+
+#### Syntax
+
+```
+influxd-ctl join [-v] [<meta-node-HTTP-bind-address>]
+```
+
+#### Arguments
+
+Optional arguments are in brackets.
+
+##### [ `-v` ]
+
+Flag to print verbose information about the join.
+
+##### [ `<meta-node-HTTP-bind-address>` ]
+
+Address of a meta node in an existing cluster.
+Use this argument to add the un-joined meta node and/or data node to an existing cluster.
+
+Resources: [QuickStart installation](/enterprise_influxdb/v1.5/quickstart_installation/cluster_installation/)
+
+#### Examples
+
+##### Joining a meta and data node into a cluster
+
+In this example, the `join` command joins the meta node running at `cluster-node-03:8091` and the data node running at `cluster-node-03:8088` into a new cluster.
+
+```
+$ influxd-ctl join
+
+Joining meta node at localhost:8091
+Searching for meta node on cluster-node-03:8091...
+Searching for data node on cluster-node-03:8088...
+
+Successfully created cluster
+
+ * Added meta node 1 at cluster-node-03:8091
+ * Added data node 2 at cluster-node-03:8088
+
+ To join additional nodes to this cluster, run the following command:
+
+ influxd-ctl join cluster-node-03:8091
+```
+
+##### Joining a meta and data node to an existing cluster
+
+The command joins the meta node running at `cluster-node-03:8091` and the data node running at `cluster-node-03:8088` to an existing cluster.
+The existing cluster includes the meta node running at `cluster-meta-node-02:8091`.
+
+```
+$ influxd-ctl join cluster-meta-node-02:8091
+
+Joining meta node at cluster-meta-node-02:8091
+Searching for meta node on cluster-node-03:8091...
+Searching for data node on cluster-node-03:8088...
+
+Successfully joined cluster
+
+ * Added meta node 3 at cluster-node-03:8091
+ * Added data node 4 at cluster-node-03:8088
+```
+
+##### Joining a meta node to an existing cluster
+
+The command joins the meta node running at `cluster-meta-node-03:8091` to an existing cluster.
+The existing cluster includes the meta node running at `cluster-meta-node-02:8091`.
+The system doesn't join a data node to the cluster because it doesn't find a data node at `cluster-meta-node-03:8088`.
+
+```
+$ influxd-ctl join cluster-meta-node-02:8091
+
+Joining meta node at cluster-meta-node-02:8091
+Searching for meta node on cluster-meta-node-03:8091...
+Searching for data node on cluster-meta-node-03:8088...
+
+Successfully joined cluster
+
+ * Added meta node 18 at cluster-meta-node-03:8091
+ * No data node added. Run with -v to see more information
+```
+
+##### Joining a meta node to an existing cluster and showing detailed information about the join
+
+The command joins the meta node running at `cluster-meta-node-03:8091` to an existing cluster.
+The existing cluster includes the meta node running at `cluster-meta-node-02:8091`.
+The `-v` argument prints detailed information about the join.
+
+```
+$ influxd-ctl join -v meta-node-02:8091
+
+Joining meta node at meta-node-02:8091
+Searching for meta node on meta-node-03:8091...
+Searching for data node on data-node-03:8088...
+
+No data node found on data-node-03:8091!
+
+ If a data node is running on this host,
+ you may need to add it manually using the following command:
+
+ influxd-ctl -bind meta-node-02:8091 add-data
+
+ Common problems:
+
+ * The influxd process is using a non-standard port (default 8088).
+ * The influxd process is not running. Check the logs for startup errors.
+
+Successfully joined cluster
+
+ * Added meta node 18 at meta-node-03:8091
+ * No data node added. Run with -v to see more information
+```
+
+### `kill-copy-shard`
+
+Aborts an in-progress [`copy-shard`](#copy-shard) command.
+
+#### Syntax
+
+```
+influxd-ctl kill-copy-shard <source-TCP-address> <destination-TCP-address> <shard-ID>
+```
+
+#### Examples
+
+##### Stopping an in-progress `copy-shard` command
+
+In this example, the `kill-copy-shard` command aborts the `copy-shard` command that was copying shard `39` from `cluster-data-node-02:8088` to `cluster-data-node-03:8088`.
+
+```
+$ influxd-ctl kill-copy-shard cluster-data-node-02:8088 cluster-data-node-03:8088 39
+
+Killed shard copy 39 from cluster-data-node-02:8088 to cluster-data-node-03:8088
+```
+
+### `leave`
+
+Removes a meta node and/or data node from the cluster.
+Use `leave` instead of the [`remove-meta`](#remove-meta) and [`remove-data`](#remove-data) arguments if you set up your InfluxEnterprise cluster with the [QuickStart Installation](/enterprise_influxdb/v1.5/quickstart_installation/cluster_installation/) process.
+
+{{% warn %}}The `leave` argument is destructive; it erases all metastore information from meta nodes and all data from data nodes.
+Use `leave` only if you want to *permanently* remove a node from a cluster.
+{{% /warn %}}
+
+#### Syntax
+
+```
+influxd-ctl leave [-y]
+```
+
+#### Arguments
+
+Optional arguments are in brackets.
+
+##### [ `-y` ]
+
+Assume yes (`y`) to all prompts.
+
+#### Examples
+
+##### Removing a meta and data node from a cluster
+
+In this example, the `leave` command removes the meta node running at `cluster-node-03:8091` and the data node running at `cluster-node-03:8088` from an existing cluster.
+Here, we respond yes (`y`) to the two prompts that ask if we'd like to remove the data node and if we'd like to remove the meta node from the cluster.
+
+```
+$ influxd-ctl leave
+
+Searching for data node on cluster-node-03:8088...
+Remove data node cluster-node-03:8088 from the cluster [y/N]: y
+Removed cluster-node-03:8088 from the cluster
+Searching for meta node on cluster-node-03:8091...
+Remove meta node cluster-node-03:8091 from the cluster [y/N]: y
+
+Successfully left cluster
+
+ * Removed data node cluster-node-03:8088 from cluster
+ * Removed meta node cluster-node-03:8091 from cluster
+```
+
+##### Removing a meta and data node from a cluster and assuming yes to all prompts
+
+In this example, the `leave` command removes the meta node running at `cluster-node-03:8091` and the data node running at `cluster-node-03:8088` from an existing cluster.
+Because we specify the `-y` flag, the system assumes that we'd like to remove both the data node and meta node from the cluster and does not prompt us for responses.
+
+```
+$ influxd-ctl leave -y
+
+Searching for data node on cluster-node-03:8088...
+Removed cluster-node-03:8088 from the cluster
+Searching for meta node on cluster-node-03:8091...
+
+Successfully left cluster
+
+ * Removed data node cluster-node-03:8088 from cluster
+ * Removed meta node cluster-node-03:8091 from cluster
+```
+
+##### Removing a meta node from a cluster
+
+In this example, the `leave` command removes the meta node running at `cluster-meta-node-03:8091` from an existing cluster.
+The system doesn't remove a data node from the cluster because it doesn't find a data node running at `cluster-meta-node-03:8088`.
+
+```
+$ influxd-ctl leave
+
+Searching for data node on cluster-meta-node-03:8088...
+ * No data node found.
+Searching for meta node on cluster-meta-node-03:8091...
+Remove meta node cluster-meta-node-03:8091 from the cluster [y/N]: y
+
+Successfully left cluster
+
+ * No data node removed from cluster
+ * Removed meta node cluster-meta-node-03:8091 from cluster
+```
+
+### `remove-data`
+
+Removes a data node from a cluster.
+Use `remove-data` instead of the [`leave`](#leave) argument if you set up your InfluxEnterprise cluster with the [Production Installation](/enterprise_influxdb/v1.5/production_installation/) process.
+
+{{% warn %}}The `remove-data` argument is destructive; it erases all data from the specified data node.
+Use `remove-data` only if you want to *permanently* remove a data node from a cluster.
+{{% /warn %}}
+
+#### Syntax
+
+```
+influxd-ctl remove-data [ -force ] <data-node-TCP-bind-address>
+```
+
+#### Arguments
+
+Optional arguments are in brackets.
+
+##### [ `-force` ]
+
+Flag to force the removal of the data node.
+Use `-force` if the data node process is not running.
+
+#### Examples
+
+##### Removing a data node from a cluster
+
+In this example, the `remove-data` command removes a data node running at `cluster-data-node-03:8088` from an existing cluster.
+
+```
+~# influxd-ctl remove-data cluster-data-node-03:8088
+Removed data node at cluster-data-node-03:8088
+```
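+
+##### Forcefully removing a data node whose process is not running
+
+If the data node process is no longer running, add the `-force` flag described above. The following is a sketch of the command only; output is omitted.
+
+```
+$ influxd-ctl remove-data -force cluster-data-node-03:8088
+```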
+
+### `remove-meta`
+
+Removes a meta node from the cluster.
+Use `remove-meta` instead of the [`leave`](#leave) command if you set up your InfluxEnterprise cluster with the [Production Installation](/enterprise_influxdb/v1.5/production_installation/) process.
+
+{{% warn %}}The `remove-meta` argument is destructive; it erases all metastore information from the specified meta node.
+Use `remove-meta` only if you want to *permanently* remove a meta node from a cluster.
+{{% /warn %}}
+
+#### Syntax
+
+```
+influxd-ctl remove-meta [ -force | -tcpAddr <meta-node-TCP-bind-address> | -y ] <meta-node-HTTP-bind-address>
+```
+
+#### Arguments
+
+Optional arguments are in brackets.
+
+##### [ `-force` ]
+
+Flag to force the removal of the meta node.
+Use `-force` if the meta node process is not running and the node is unreachable and unrecoverable.
+If a meta node restarts after being `-force` removed, it may interfere with the cluster.
+This argument requires the `-tcpAddr` argument.
+
+##### [ `-tcpAddr <meta-node-TCP-bind-address>` ]
+
+The TCP address of the meta node to remove from the cluster.
+Use this argument with the `-force` argument.
+
+##### [ `-y` ]
+
+Flag to assume `Yes` to all prompts.
+
+#### Examples
+
+##### Removing a meta node from a cluster
+
+In this example, the `remove-meta` command removes the meta node at `cluster-meta-node-02:8091` from an existing cluster.
+In the example, we respond yes (`y`) to the prompt that asks if we'd like to remove the meta node from the cluster.
+
+```
+$ influxd-ctl remove-meta cluster-meta-node-02:8091
+
+Remove cluster-meta-node-02:8091 from the cluster [y/N]: y
+
+Removed meta node at cluster-meta-node-02:8091
+```
+
+##### Forcefully removing an unresponsive meta node from a cluster
+
+In this example, the `remove-meta` command forcefully removes the meta node running at the TCP address `cluster-meta-node-02:8089` and HTTP address `cluster-meta-node-02:8091` from the cluster.
+In the example, we respond yes (`y`) to the prompt that asks if we'd like to force remove the meta node from the cluster.
+Note that if the meta node at `cluster-meta-node-02:8091` restarts, it may interfere with the cluster.
+Only perform a force removal of a meta node if the node is not reachable and unrecoverable.
+
+```
+$ influxd-ctl remove-meta -force -tcpAddr cluster-meta-node-02:8089 cluster-meta-node-02:8091
+
+Force remove cluster-meta-node-02:8091 from the cluster [y/N]:y
+
+Removed meta node at cluster-meta-node-02:8091
+```
+
+### `remove-shard`
+
+Removes a shard from a data node.
+Removing a shard is an irrecoverable, destructive action; please be cautious with this command.
+
+#### Syntax
+
+```
+influxd-ctl remove-shard <data-node-TCP-bind-address> <shard-ID>
+```
+
+Resources: [Cluster Rebalance](/enterprise_influxdb/v1.5/guides/rebalance/)
+
+#### Examples
+
+##### Removing a shard from a running data node
+
+In this example, the `remove-shard` command removes shard `31` from the data node running at `cluster-data-node-02:8088`.
+
+```
+~# influxd-ctl remove-shard cluster-data-node-02:8088 31
+
+Removed shard 31 from cluster-data-node-02:8088
+```
+
+### `restore`
+
+Restore a [backup](#backup) to an existing cluster or a new cluster.
+
+> **Note:** The existing cluster must contain no data in the databases affected by the restore.
+
+Restore supports both full backups and incremental backups; the syntax for a restore differs depending on the backup type.
+
+#### Syntax
+
+```
+influxd-ctl restore [ -db <db_name> | -full | -list | -newdb <newdb_name> | -newrf <newrf_integer> | -newrp <newrp_name> | -rp <rp_name> | -shard <shard_ID> ] ( <path-to-backup-manifest-file> | <path-to-backup-directory> )
+```
+
+The `restore` command must specify either the `path-to-backup-manifest-file` or the `path-to-backup-directory`.
+If the restore uses the `-full` argument, specify the `path-to-backup-manifest-file`.
+If the restore doesn't use the `-full` argument, specify the `path-to-backup-directory`.
+
+
+#### Arguments
+
+Optional arguments are in brackets.
+
+##### [ `-db <db_name>` ]
+
+The name of the single database to restore.
+
+##### [ `-full` ]
+
+Flag to restore a backup that was created with the `-full` flag.
+A restore command with the `-full` flag requires the `path-to-backup-manifest-file`.
+
+##### [ `-list` ]
+
+Flag to show the contents of the backup.
+
+##### [ `-newdb <newdb_name>` ]
+
+The name of the new database to restore to (must specify with `-db`).
+
+##### [ `-newrf <newrf_integer>` ]
+
+The integer of the new [replication factor](/influxdb/v1.5/concepts/glossary/#replication-factor) to restore to (this is capped to the number of data nodes in the cluster).
+
+##### [ `-newrp <newrp_name>` ]
+
+The name of the new [retention policy](/influxdb/v1.5/concepts/glossary/#retention-policy-rp) to restore to (must specify with `-rp`).
+
+##### [ `-rp <rp_name>` ]
+
+The name of the retention policy to restore.
+
+##### [ `-shard <shard_ID>` ]
+
+The identifier of the [shard](/influxdb/v1.5/concepts/glossary/#shard) to restore.
+
+Resources: [Backing up and restoring in InfluxDB Enterprise](/enterprise_influxdb/v1.5/administration/backup-and-restore/#restore)
+
+
+#### Examples
+
+##### Restoring from an incremental backup
+
+In this example, the `restore` command restores an incremental backup stored in the `my-incremental-backup/` directory.
+
+```
+$ influxd-ctl restore my-incremental-backup/
+
+Using backup directory: my-incremental-backup/
+Using meta backup: 20170130T231333Z.meta
+Restoring meta data... Done. Restored in 21.373019ms, 1 shards mapped
+Restoring db telegraf, rp autogen, shard 2 to shard 2...
+Copying data to :8088... Copying data to :8088... Done. Restored shard 2 into shard 2 in 61.046571ms, 588800 bytes transferred
+Restored from my-incremental-backup/ in 83.892591ms, transferred 588800 bytes
+```
+
+##### Restoring from a full backup
+
+In this example, the `restore` command is used to restore a full backup that includes the manifest file at `my-full-backup/20170131T020341Z.manifest`.
+
+```
+$ influxd-ctl restore -full my-full-backup/20170131T020341Z.manifest
+
+Using manifest: my-full-backup/20170131T020341Z.manifest
+Restoring meta data... Done. Restored in 9.585639ms, 1 shards mapped
+Restoring db telegraf, rp autogen, shard 2 to shard 2...
+Copying data to :8088... Copying data to :8088... Done. Restored shard 2 into shard 2 in 48.095082ms, 569344 bytes transferred
+Restored from my-full-backup in 58.58301ms, transferred 569344 bytes
+```
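+
+##### Restoring a database into a new database
+
+To restore a single database under a different name, combine the `-db` and `-newdb` arguments. The following is a sketch only; it assumes the incremental backup directory from the earlier example and a hypothetical target database named `telegraf_copy`.
+
+```
+$ influxd-ctl restore -db telegraf -newdb telegraf_copy my-incremental-backup/
+```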
+
+### `show`
+
+Shows all [meta nodes](/enterprise_influxdb/v1.5/concepts/glossary/#meta-node) and [data nodes](/enterprise_influxdb/v1.5/concepts/glossary/#data-node) that are part of the cluster.
+The output includes the InfluxDB Enterprise version number.
+
+#### Syntax
+
+```
+influxd-ctl show
+```
+
+#### Examples
+
+##### Showing all meta and data nodes in a cluster
+
+In this example, the `show` command output displays that the cluster includes three meta nodes and two data nodes.
+Every node is using InfluxDB Enterprise `1.3.x-c1.3.x`.
+
+```
+$ influxd-ctl show
+
+Data Nodes
+==========
+ID TCP Address Version
+2 cluster-node-01:8088 1.3.x-c1.3.x
+4 cluster-node-02:8088 1.3.x-c1.3.x
+
+Meta Nodes
+==========
+TCP Address Version
+cluster-node-01:8091 1.3.x-c1.3.x
+cluster-node-02:8091 1.3.x-c1.3.x
+cluster-node-03:8091 1.3.x-c1.3.x
+```
+
+### `show-shards`
+
+Outputs details about existing [shards](/influxdb/v1.5/concepts/glossary/#shard) of the cluster, including shard ID, database, [retention policy](/influxdb/v1.5/concepts/glossary/#retention-policy-rp), desired replicas, [shard group](/influxdb/v1.5/concepts/glossary/#shard-group), starting timestamp, ending timestamp, expiration timestamp, and [data node](/enterprise_influxdb/v1.5/concepts/glossary/#data-node) owners.
+
+```
+influxd-ctl show-shards
+```
+
+#### Examples
+
+##### Showing the existing shards in a cluster
+
+In this example, the `show-shards` output shows that there are two shards in the cluster.
+The first shard has an id of `51` and it's in the `telegraf` database and the `autogen` retention policy.
+The desired number of copies for shard `51` is `2` and it belongs to shard group `37`.
+The data in shard `51` cover the time range between `2017-03-13T00:00:00Z` and `2017-03-20T00:00:00Z`, and the shard has no expiry time; `telegraf`'s `autogen` retention policy has an infinite duration so the system never removes shard `51`.
+Finally, shard `51` appears on two data nodes: `cluster-data-node-01:8088` and `cluster-data-node-03:8088`.
+
+```
+$ influxd-ctl show-shards
+
+Shards
+==========
+ID Database Retention Policy Desired Replicas Shard Group Start End Expires Owners
+51 telegraf autogen 2 37 2017-03-13T00:00:00Z 2017-03-20T00:00:00Z [{26 cluster-data-node-01:8088} {33 cluster-data-node-03:8088}]
+52 telegraf autogen 2 37 2017-03-13T00:00:00Z 2017-03-20T00:00:00Z [{5 cluster-data-node-02:8088} {26 cluster-data-node-01:8088}]
+```
+
+### `update-data`
+
+Updates a data node's address in the [meta store](/enterprise_influxdb/v1.5/concepts/glossary/#meta-service).
+
+#### Syntax
+
+```
+influxd-ctl update-data
+```
+
+#### Examples
+
+##### Updating a data node hostname
+
+In this example, the `update-data` command updates the address for data node `26` from `cluster-node-01:8088` to `cluster-data-node-01:8088`.
+
+```
+$ influxd-ctl update-data cluster-node-01:8088 cluster-data-node-01:8088
+
+updated data node 26 to cluster-data-node-01:8088
+```
+
+### `token`
+
+Generates a signed JSON Web Token (JWT).
+The `token` argument only works when using JWT authentication in the cluster and when using the [`-auth-type jwt`](#auth-type-none-basic-jwt) and [`-secret <JWT-shared-secret>`](#secret-jwt-shared-secret) arguments.
+
+#### Syntax
+
+```
+influxd-ctl token [-exp <duration>]
+```
+
+#### Arguments
+
+Optional arguments are in brackets.
+
+##### [ `-exp <duration>` ]
+
+The time after which the token expires.
+By default, the token expires after one minute.
+
+#### Examples
+
+##### Creating a signed JWT token
+
+In this example, the `token` command returns a signed JWT token.
+
+```
+$ influxd-ctl -auth-type jwt -secret oatclusters token
+
+hereistokenisitgoodandsoareyoufriend.timingisaficklefriendbutwherewouldwebewithoutit.timingthentimeseriesgood-wevemadetheleap-nowletsgetdownanddataandqueryallourheartsout
+```
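+
+##### Creating a signed JWT token with a custom expiration
+
+To generate a token that remains valid for longer than the default one minute, add the `-exp` argument. This sketch reuses the shared secret from the previous example; the returned token is omitted.
+
+```
+$ influxd-ctl -auth-type jwt -secret oatclusters token -exp 5m
+```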
+
+##### Attempting to create a signed JWT token with basic authentication
+
+In this example, the `token` command returns an error because the command doesn't use JWT authentication.
+
+```
+$ influxd-ctl -auth-type basic -user admini -pwd mouse token
+
+token: tokens can only be created when using bearer authentication
+```
+
+### `truncate-shards`
+
+Truncates hot [shards](/influxdb/v1.5/concepts/glossary/#shard), that is, shards that cover the time range that includes the current time ([`now()`](/influxdb/v1.5/concepts/glossary/#now)).
+The `truncate-shards` command creates a new shard and the system writes all new points to that shard.
+
+#### Syntax
+
+```
+influxd-ctl truncate-shards [-delay <duration>]
+```
+
+#### Arguments
+
+Optional arguments are in brackets.
+
+##### [ `-delay <duration>` ]
+
+The duration to wait after [`now()`](/influxdb/v1.5/concepts/glossary/#now) before truncating shards.
+By default, the tool sets the delay to one minute.
+The `duration` is an integer followed by a [duration unit](/influxdb/v1.5/query_language/spec/#durations).
+
+Resources: [Cluster rebalancing](/enterprise_influxdb/v1.5/guides/rebalance/)
+
+#### Examples
+
+##### Truncating shards with the default delay time
+
+In this example, after running the `truncate-shards` command and waiting one minute, the output of the [`show-shards` command](#show-shards) shows that the system truncated shard `51` (truncated shards have an asterisk (`*`) on the timestamp in the `End` column) and created the new shard with the id `54`.
+
+```
+$ influxd-ctl truncate-shards
+
+Truncated shards.
+
+$ influxd-ctl show-shards
+
+Shards
+==========
+ID Database Retention Policy Desired Replicas Shard Group Start End Expires Owners
+51 telegraf autogen 2 37 2017-03-13T00:00:00Z 2017-03-13T20:40:15.753443255Z* [{26 cluster-data-node-01:8088} {33 cluster-data-node-03:8088}]
+54 telegraf autogen 2 38 2017-03-13T00:00:00Z 2017-03-20T00:00:00Z [{26 cluster-data-node-01:8088} {33 cluster-data-node-03:8088}]
+```
+
+##### Truncating shards with a user-provided delay duration
+
+In this example, after running the `truncate-shards` command and waiting three minutes, the output of the [`show-shards` command](#show-shards) shows that the system truncated shard `54` (truncated shards have an asterisk (`*`) on the timestamp in the `End` column) and created the new shard with the id `58`.
+
+```
+$ influxd-ctl truncate-shards -delay 3m
+
+Truncated shards.
+
+$ influxd-ctl show-shards
+
+Shards
+==========
+ID Database Retention Policy Desired Replicas Shard Group Start End Expires Owners
+54 telegraf autogen 2 38 2017-03-13T00:00:00Z 2017-03-13T20:59:14.665827038Z* [{26 cluster-data-node-01:8088} {33 cluster-data-node-03:8088}]
+58 telegraf autogen 2 40 2017-03-13T00:00:00Z 2017-03-20T00:00:00Z [{26 cluster-data-node-01:8088} {33 cluster-data-node-03:8088}]
+```
+
+## `influx` command line interface (CLI)
+
+Use the `influx` command line interface (CLI) to write data to your cluster, query data interactively, and view query output in different formats.
+The `influx` CLI is available on all [data nodes](/enterprise_influxdb/v1.5/concepts/glossary/#data-node).
+
+See [InfluxDB command line interface (CLI/shell)](/influxdb/v1.5/tools/shell/) in the InfluxDB OSS documentation for details on using the `influx` command line interface utility.
diff --git a/content/enterprise_influxdb/v1.5/administration/configuration.md b/content/enterprise_influxdb/v1.5/administration/configuration.md
new file mode 100644
index 000000000..fbe53e237
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/administration/configuration.md
@@ -0,0 +1,1070 @@
+---
+title: Configuring InfluxDB Enterprise
+aliases:
+ - /enterprise/v1.5/administration/configuration/
+menu:
+ enterprise_influxdb_1_5:
+ name: Configuring
+ weight: 10
+ parent: Administration
+---
+
+#### Content
+
+* [Using configuration files](#using-configuration-files)
+* [Meta node configuration sections](#meta-node-configuration)
+ * [Global settings](#global-settings)
+ * [[enterprise]](#enterprise)
+ * [[meta]](#meta)
+* [Data node configuration sections](#data-node-configuration)
+ * [Global settings](#global-settings-1)
+ * [[enterprise]](#enterprise-1)
+ * [[meta]](#meta-1)
+ * [[data]](#data)
+ * [[cluster]](#cluster)
+ * [[retention]](#retention)
+ * [[shard-precreation]](#shard-precreation)
+ * [[monitor]](#monitor)
+ * [[subscriber]](#subscriber)
+ * [[http]](#http)
+ * [[graphite]](#graphite)
+ * [[collectd]](#collectd)
+ * [[opentsdb]](#opentsdb)
+ * [[udp]](#udp)
+ * [[continuous-queries]](#continuous-queries)
+ * [[hinted-handoff]](#hinted-handoff)
+ * [[anti-entropy]](#anti-entropy)
+
+
+# Using configuration files
+
+#### Print a default configuration file
+
+The following commands print out a TOML-formatted configuration with all available settings set to their default values.
+
+Meta configuration:
+```
+influxd-meta config
+```
+
+Data configuration:
+```
+influxd config
+```
+
+#### Create a configuration file
+
+On POSIX systems, generate a new configuration file by redirecting the output
+of the command to a file.
+
+New meta configuration file:
+```
+influxd-meta config > /etc/influxdb/influxdb-meta-generated.conf
+```
+
+New data configuration file:
+```
+influxd config > /etc/influxdb/influxdb-generated.conf
+```
+
+Preserve custom settings from older configuration files when generating a new
+configuration file with the `-config` option.
+For example, this overwrites any default configuration settings in the output
+file (`/etc/influxdb/influxdb.conf.new`) with the configuration settings from
+the file (`/etc/influxdb/influxdb.conf.old`) passed to `-config`:
+
+```
+influxd config -config /etc/influxdb/influxdb.conf.old > /etc/influxdb/influxdb.conf.new
+```
+
+#### Launch the process with a configuration file
+
+There are two ways to launch the meta or data processes using your customized
+configuration file.
+
+* Point the process to the desired configuration file with the `-config` option.
+
+ To start the meta node process with `/etc/influxdb/influxdb-meta-generate.conf`:
+
+ `influxd-meta -config /etc/influxdb/influxdb-meta-generate.conf`
+
+ To start the data node process with `/etc/influxdb/influxdb-generated.conf`:
+
+ `influxd -config /etc/influxdb/influxdb-generated.conf`
+
+* Set the environment variable `INFLUXDB_CONFIG_PATH` to the path of your
+configuration file and start the process.
+
+ To set the `INFLUXDB_CONFIG_PATH` environment variable and launch the data
+ process using `INFLUXDB_CONFIG_PATH` for the configuration file path:
+
+ ```bash
+ export INFLUXDB_CONFIG_PATH=/root/influxdb.generated.conf
+ echo $INFLUXDB_CONFIG_PATH
+ /root/influxdb.generated.conf
+ influxd
+ ```
+
+If set, the command line `-config` path overrides any environment variable path.
+If you do not supply a configuration file, InfluxDB uses an internal default
+configuration (equivalent to the output of `influxd config` and `influxd-meta
+config`).
+
+{{% warn %}} Note for 1.3: if no configuration file is specified, the `influxd-meta` binary checks the `INFLUXDB_META_CONFIG_PATH` environment variable.
+If that environment variable is set, its value is used as the configuration file path.
+If it is unset, the binary checks the `~/.influxdb` and `/etc/influxdb` folders for an `influxdb-meta.conf` file and loads the first one it finds as the configuration file.
+
+This matches the behavior that the open source and data node versions of InfluxDB already follow.
+{{% /warn %}}
+
+### Environment variables
+
+All configuration settings can be specified in the configuration file or in
+environment variables.
+Environment variables override the equivalent settings in the configuration
+file.
+If a configuration option is not specified in either the configuration file
+or in an environment variable, InfluxDB uses its internal default
+configuration.
+
+In the sections below we name the relevant environment variable in the
+description for the configuration setting.
+Environment variables can be set in `/etc/default/influxdb-meta` and
+`/etc/default/influxdb`.
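+
+For example, a data node override file might contain entries like the following. The variable names come from the settings documented below; the values are illustrative only.
+
+```
+# /etc/default/influxdb
+INFLUXDB_HOSTNAME=data-node-01
+INFLUXDB_DATA_WAL_FSYNC_DELAY=10ms
+```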
+
+> **Note:**
+To set or override settings in a config section that allows multiple
+configurations (any section with [[`double_brackets`]] in the header supports
+multiple configurations), the desired configuration must be specified by ordinal
+number.
+For example, for the first set of `[[graphite]]` environment variables,
+prefix the configuration setting name in the environment variable with the
+relevant position number (in this case: `0`):
+>
+ INFLUXDB_GRAPHITE_0_BATCH_PENDING
+ INFLUXDB_GRAPHITE_0_BATCH_SIZE
+ INFLUXDB_GRAPHITE_0_BATCH_TIMEOUT
+ INFLUXDB_GRAPHITE_0_BIND_ADDRESS
+ INFLUXDB_GRAPHITE_0_CONSISTENCY_LEVEL
+ INFLUXDB_GRAPHITE_0_DATABASE
+ INFLUXDB_GRAPHITE_0_ENABLED
+ INFLUXDB_GRAPHITE_0_PROTOCOL
+ INFLUXDB_GRAPHITE_0_RETENTION_POLICY
+ INFLUXDB_GRAPHITE_0_SEPARATOR
+ INFLUXDB_GRAPHITE_0_TAGS
+ INFLUXDB_GRAPHITE_0_TEMPLATES
+ INFLUXDB_GRAPHITE_0_UDP_READ_BUFFER
+>
+For the Nth Graphite configuration in the configuration file, the relevant
+environment variables would be of the form `INFLUXDB_GRAPHITE_(N-1)_BATCH_PENDING`.
+For each section of the configuration file the numbering restarts at zero.
+
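+For instance, a hypothetical second `[[graphite]]` listener (ordinal `1`) could be overridden with variables such as the following; the values are illustrative only.
+
+```
+INFLUXDB_GRAPHITE_1_ENABLED=true
+INFLUXDB_GRAPHITE_1_BIND_ADDRESS=:2004
+INFLUXDB_GRAPHITE_1_DATABASE=graphite_two
+```
+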
+
+
+# Meta node configuration
+
+## Global settings
+
+### reporting-disabled = false
+
+InfluxData, the company, relies on reported data from running nodes primarily to
+track the adoption rates of different InfluxDB versions.
+These data help InfluxData support the continuing development of InfluxDB.
+
+The `reporting-disabled` option toggles the reporting of data every 24 hours to
+`usage.influxdata.com`.
+Each report includes a randomly-generated identifier, OS, architecture,
+InfluxDB version, and the number of databases, measurements, and unique series.
+Setting this option to `true` will disable reporting.
+
+> **Note:** No data from user databases are ever transmitted.
+
+### bind-address = ""
+This setting is not intended for use.
+It will be removed in future versions.
+
+### hostname = ""
+
+The hostname of the [meta node](/enterprise_influxdb/v1.5/concepts/glossary/#meta-node).
+This must be resolvable and reachable by all other members of the cluster.
+
+Environment variable: `INFLUXDB_HOSTNAME`
+
+## [enterprise]
+
+The `[enterprise]` section contains the parameters for the meta node's
+registration with the [InfluxEnterprise License Portal](https://portal.influxdata.com/).
+
+### license-key = ""
+
+The license key created for you on [InfluxPortal](https://portal.influxdata.com).
+The meta node transmits the license key to [portal.influxdata.com](https://portal.influxdata.com) over port 80 or port 443 and receives a temporary JSON license file in return.
+The server caches the license file locally.
+You must use the [`license-path` setting](#license-path) if your server cannot communicate with [https://portal.influxdata.com](https://portal.influxdata.com).
+
+Use the same key for all nodes in the same cluster.
+{{% warn %}}The `license-key` and `license-path` settings are mutually exclusive and one must remain set to the empty string.
+{{% /warn %}}
+
+We recommend performing rolling restarts on the nodes after updating the license key.
+Restart one meta, data, or Enterprise service at a time and wait for it to come back
+up successfully. The cluster should remain unaffected as long as there are two or more
+data nodes and only one node is restarting at a time.
+
+Environment variable: `INFLUXDB_ENTERPRISE_LICENSE_KEY`
+
+### license-path = ""
+
+The local path to the permanent JSON license file that you received from InfluxData
+for instances that do not have access to the internet.
+Contact [sales@influxdb.com](mailto:sales@influxdb.com) if a license file is required.
+
+The license file should be saved on every server in the cluster, including Meta,
+Data, and Enterprise nodes. The file contains the JSON-formatted license, and must
+be readable by the influxdb user. Each server in the cluster independently verifies
+its license.
+
+{{% warn %}}
+The `license-key` and `license-path` settings are mutually exclusive and one must remain set to the empty string.
+{{% /warn %}}
+
+We recommend performing rolling restarts on the nodes after updating the license file.
+Restart one meta, data, or Enterprise service at a time and wait for it to come back
+up successfully. The cluster should remain unaffected as long as there are two or more
+data nodes and only one node is restarting at a time.
+
+Environment variable: `INFLUXDB_ENTERPRISE_LICENSE_PATH`
+
+## [meta]
+
+### dir = "/var/lib/influxdb/meta"
+
+The location of the meta directory which stores the [metastore](/influxdb/v1.5/concepts/glossary/#metastore).
+
+Environment variable: `INFLUXDB_META_DIR`
+
+### bind-address = ":8089"
+
+The bind address/port for meta node communication.
+For simplicity we recommend using the same port on all meta nodes, but this
+is not necessary.
+
+Environment variable: `INFLUXDB_META_BIND_ADDRESS`
+
+### auth-enabled = false
+
+Set to `true` to enable authentication.
+Meta nodes support JWT authentication and Basic authentication.
+For JWT authentication, also see the [`shared-secret`](#shared-secret) and [`internal-shared-secret`](#internal-shared-secret) configuration settings.
+
+If set to `true`, also set the [`meta-auth-enabled` option](#meta-auth-enabled-false) to `true` in the `[meta]` section of the data node configuration file.
+
+Environment variable: `INFLUXDB_META_AUTH_ENABLED`
+
+### http-bind-address = ":8091"
+
+The port used by the [`influxd-ctl` tool](/enterprise_influxdb/v1.5/administration/cluster-commands/) and by data nodes to access the
+meta APIs.
+For simplicity we recommend using the same port on all meta nodes, but this
+is not necessary.
+
+Environment variable: `INFLUXDB_META_HTTP_BIND_ADDRESS`
+
+### https-enabled = false
+
+Set to `true` if using HTTPS over the `8091` API port.
+Currently, the `8089` and `8088` ports do not support TLS.
+
+Environment variable: `INFLUXDB_META_HTTPS_ENABLED`
+
+### https-certificate = ""
+
+The path of the certificate file.
+This is required if [`https-enabled`](#https-enabled-false) is set to `true`.
+
+Environment variable: `INFLUXDB_META_HTTPS_CERTIFICATE`
+
+### https-private-key = ""
+
+The path of the private key file.
+
+Environment variable: `INFLUXDB_META_HTTPS_PRIVATE_KEY`
+
+### https-insecure-tls = false
+
+Set to `true` to allow insecure HTTPS connections to meta nodes.
+Use this setting when testing with self-signed certificates.
+
+Environment variable: `INFLUXDB_META_HTTPS_INSECURE_TLS`
+
+### gossip-frequency = "5s"
+
+The frequency at which meta nodes communicate the cluster membership state.
+
+Environment variable: `INFLUXDB_META_GOSSIP_FREQUENCY`
+
+### announcement-expiration = "30s"
+
+The rate at which the results of `influxd-ctl show` are updated when a meta
+node leaves the cluster.
+Note that in version 1.0, configuring this setting provides no change from the
+user's perspective.
+
+Environment variable: `INFLUXDB_META_ANNOUNCEMENT_EXPIRATION`
+
+### retention-autocreate = true
+
+Automatically creates a default [retention policy](/influxdb/v1.5/concepts/glossary/#retention-policy-rp) (RP) when the system creates a database.
+The default RP (`autogen`) has an infinite duration, a shard group duration of seven days, and a replication factor set to the number of data nodes in the cluster.
+The system targets the `autogen` RP when a write or query does not specify an RP.
+Set this option to `false` to prevent the system from creating the `autogen` RP when the system creates a database.
+
+Environment variable: `INFLUXDB_META_RETENTION_AUTOCREATE`
+
+### election-timeout = "1s"
+
+The duration a Raft candidate spends in the candidate state without a leader
+before it starts an election.
+The election timeout is slightly randomized on each Raft node each time it is called:
+an additional jitter of between zero and the `election-timeout` duration is added.
+The default setting should work for most systems.
+
+Environment variable: `INFLUXDB_META_ELECTION_TIMEOUT`
+
+### heartbeat-timeout = "1s"
+
+The heartbeat timeout is the amount of time a Raft follower remains in the
+follower state without a leader before it starts an election.
+Clusters with high latency between nodes may want to increase this parameter to
+avoid unnecessary Raft elections.
+
+Environment variable: `INFLUXDB_META_HEARTBEAT_TIMEOUT`
+
+### leader-lease-timeout = "500ms"
+
+The leader lease timeout is the amount of time a Raft leader will remain leader
+if it does not hear from a majority of nodes.
+After the timeout the leader steps down to the follower state.
+Clusters with high latency between nodes may want to increase this parameter to
+avoid unnecessary Raft elections.
+
+Environment variable: `INFLUXDB_META_LEADER_LEASE_TIMEOUT`
+
+### consensus-timeout = "30s"
+
+Environment variable: `INFLUXDB_META_CONSENSUS_TIMEOUT`
+
+### commit-timeout = "50ms"
+
+The commit timeout is the amount of time a Raft node will tolerate between
+commands before issuing a heartbeat to tell the leader it is alive.
+The default setting should work for most systems.
+
+Environment variable: `INFLUXDB_META_COMMIT_TIMEOUT`
+
+### cluster-tracing = false
+
+Cluster tracing toggles the logging of Raft logs on Raft nodes.
+Enable this setting when debugging Raft consensus issues.
+
+Environment variable: `INFLUXDB_META_CLUSTER_TRACING`
+
+### logging-enabled = true
+
+Meta logging toggles the logging of messages from the meta service.
+
+Environment variable: `INFLUXDB_META_LOGGING_ENABLED`
+
+### pprof-enabled = true
+
+Enable the `/net/http/pprof` endpoint. Useful for troubleshooting and monitoring.
+
+Environment variable: `INFLUXDB_HTTP_PPROF_ENABLED`
+
+### debug-pprof-enabled = false
+
+Enable the default `/net/http/pprof` endpoint and bind against `localhost:6060`. Useful for debugging startup performance issues.
+
+### lease-duration = "1m0s"
+
+The default duration of the leases that data nodes acquire from the meta nodes.
+Leases automatically expire after the `lease-duration` is met.
+
+Leases ensure that only one data node is running something at a given time.
+For example, [Continuous Queries](/influxdb/v1.5/concepts/glossary/#continuous-query-cq)
+(CQ) use a lease so that all data nodes aren't running the same CQs at once.
+
+Environment variable: `INFLUXDB_META_LEASE_DURATION`
+
+### shared-secret = ""
+The shared secret used by the API for JWT authentication.
+Set [`auth-enabled`](#auth-enabled-false) to `true` if using this option.
+
+Environment variable: `INFLUXDB_META_SHARED_SECRET`
+
+### internal-shared-secret = ""
+The shared secret used by the internal API for JWT authentication.
+Set [`auth-enabled`](#auth-enabled-false) to `true` if using this option.
+
+Environment variable: `INFLUXDB_META_INTERNAL_SHARED_SECRET`
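+
+For example, enabling JWT authentication requires matching settings in the meta node and data node configuration files. The following is a sketch only; the secrets are placeholder values.
+
+Meta node configuration file:
+
+```
+[meta]
+  auth-enabled = true
+  shared-secret = "api-jwt-secret"
+  internal-shared-secret = "internal-jwt-secret"
+```
+
+Data node configuration file:
+
+```
+[meta]
+  meta-auth-enabled = true
+  meta-internal-shared-secret = "internal-jwt-secret"
+```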
+
+
+
+# Data node configuration
+
+The InfluxDB Enterprise data node configuration settings overlap significantly
+with the settings in InfluxDB OSS.
+Where possible, the following sections link to the [configuration documentation](/influxdb/v1.5/administration/config/)
+for InfluxDB's OSS.
+
+> **Note:**
+The system has internal defaults for every configuration file setting.
+View the default settings with the `influxd config` command.
+The local configuration file (`/etc/influxdb/influxdb.conf`) overrides any
+internal defaults but the configuration file does not need to include
+every configuration setting.
+Starting with version 1.0.1, most of the settings in the local configuration
+file are commented out.
+All commented-out settings will be determined by the internal defaults.
+
+## Global settings
+
+### reporting-disabled = false
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#reporting-disabled-false).
+
+### bind-address = ":8088"
+
+The bind address to use for the RPC service for [backup and restore](/enterprise_influxdb/v1.5/administration/backup-and-restore/).
+
+Environment variable: `INFLUXDB_BIND_ADDRESS`
+
+### hostname = "localhost"
+
+The hostname of the [data node](/enterprise_influxdb/v1.5/concepts/glossary/#data-node).
+
+Environment variable: `INFLUXDB_HOSTNAME`
+
+### gossip-frequency = "3s"
+
+How often to update the cluster with this node's internal status.
+
+Environment variable: `INFLUXDB_GOSSIP_FREQUENCY`
+
+## [enterprise]
+
+The `[enterprise]` section contains the parameters for the meta node's
+registration with the [InfluxEnterprise License Portal](https://portal.influxdata.com/).
+
+### license-key = ""
+
+The license key created for you on [InfluxPortal](https://portal.influxdata.com).
+The meta node transmits the license key to [portal.influxdata.com](https://portal.influxdata.com) over port 80 or port 443 and receives a temporary JSON license file in return.
+The server caches the license file locally.
+The data process will only function for a limited time without a valid license file.
+You must use the [`license-path` setting](#license-path-1) if your server cannot communicate with [https://portal.influxdata.com](https://portal.influxdata.com).
+
+{{% warn %}}
+Use the same key for all nodes in the same cluster. The `license-key` and `license-path` settings are
+mutually exclusive and one must remain set to the empty string.
+{{% /warn %}}
+
+We recommend performing rolling restarts on the nodes after updating the license key.
+Restart one meta, data, or Enterprise service at a time and wait for it to come back
+up successfully. The cluster should remain unaffected as long as there are two or more
+data nodes and only one node is restarting at a time.
+
+Environment variable: `INFLUXDB_ENTERPRISE_LICENSE_KEY`
+
+### license-path = ""
+
+The local path to the permanent JSON license file that you received from InfluxData
+for instances that do not have access to the internet.
+The data process will only function for a limited time without a valid license file.
+Contact [sales@influxdb.com](mailto:sales@influxdb.com) if a license file is required.
+
+The license file should be saved on every server in the cluster, including Meta,
+Data, and Enterprise nodes. The file contains the JSON-formatted license, and must
+be readable by the influxdb user. Each server in the cluster independently verifies
+its license. We recommend performing rolling restarts on the nodes after updating the
+license file. Restart one meta, data, or Enterprise service at a time and
+wait for it to come back up successfully. The cluster should remain unaffected
+as long as there are two or more data nodes and only one node is restarting at a time.
+
+{{% warn %}}
+Use the same license file for all nodes in the same cluster. The `license-key` and `license-path` settings are mutually exclusive and one must remain set to the empty string.
+{{% /warn %}}
+
+Environment variable: `INFLUXDB_ENTERPRISE_LICENSE_PATH`
+
+## [meta]
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#metastore-settings-meta).
+
+### dir = "/var/lib/influxdb/meta"
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#dir-var-lib-influxdb-meta).
+Note that data nodes do require a local meta directory.
+
+Environment variable: `INFLUXDB_META_DIR`
+
+### meta-tls-enabled = false
+
+Set to `true` if [`https-enabled`](#https-enabled-false) is set to `true`.
+
+Environment variable: `INFLUXDB_META_META_TLS_ENABLED`
+
+### meta-insecure-tls = false
+
+Set to `true` to allow the data node to accept self-signed certificates if
+[`https-enabled`](#https-enabled-false) is set to `true`.
+
+Environment variable: `INFLUXDB_META_META_INSECURE_TLS`
+
+### meta-auth-enabled = false
+
+Set to `true` if [`auth-enabled`](#auth-enabled-false) is set to `true` in the meta node configuration files.
+For JWT authentication, also see the [`meta-internal-shared-secret`](#meta-internal-shared-secret) configuration option.
+
+Environment variable: `INFLUXDB_META_META_AUTH_ENABLED`
+
+### meta-internal-shared-secret = ""
+
+The shared secret used by the internal API for JWT authentication.
+Set to the [`internal-shared-secret`](#internal-shared-secret) specified in the meta node configuration file.
+
+Environment variable: `INFLUXDB_META_META_INTERNAL_SHARED_SECRET`
+
+### retention-autocreate = true
+
+Automatically creates a default [retention policy](/influxdb/v1.5/concepts/glossary/#retention-policy-rp) (RP) when the system creates a database.
+The default RP (`autogen`) has an infinite duration, a shard group duration of seven days, and a replication factor set to the number of data nodes in the cluster.
+The system targets the `autogen` RP when a write or query does not specify an RP.
+Set this option to `false` to prevent the system from creating the `autogen` RP when the system creates a database.
+
+Environment variable: `INFLUXDB_META_RETENTION_AUTOCREATE`
+
+### logging-enabled = true
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#logging-enabled-true).
+
+Environment variable: `INFLUXDB_META_LOGGING_ENABLED`
+
+## [data]
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#data-settings-data).
+
+### dir = "/var/lib/influxdb/data"
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#dir-var-lib-influxdb-data).
+
+Environment variable: `INFLUXDB_DATA_DIR`
+
+### wal-dir = "/var/lib/influxdb/wal"
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#wal-dir-var-lib-influxdb-wal).
+
+Environment variable: `INFLUXDB_DATA_WAL_DIR`
+
+### wal-fsync-delay = "0s"
+
+The amount of time that a write waits before fsyncing. Use a duration greater than 0 to batch
+up multiple fsync calls. This is useful for slower disks or when experiencing WAL write contention.
+A value of 0s fsyncs every write to the WAL. We recommend values in the range of 0ms-100ms for non-SSD disks.
+
+Environment variable: `INFLUXDB_DATA_WAL_FSYNC_DELAY`
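+
+For example, on a node with non-SSD disks, the setting might be adjusted in the `[data]` section as follows; the value is illustrative only.
+
+```
+[data]
+  wal-fsync-delay = "50ms"
+```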
+
+### query-log-enabled = true
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#query-log-enabled-true).
+
+Environment variable: `INFLUXDB_DATA_QUERY_LOG_ENABLED`
+
+### cache-max-memory-size = 1073741824
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#cache-max-memory-size-1073741824).
+
+Environment variable: `INFLUXDB_DATA_CACHE_MAX_MEMORY_SIZE`
+
+### cache-snapshot-memory-size = 26214400
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#cache-snapshot-memory-size-26214400).
+
+Environment variable: `INFLUXDB_DATA_CACHE_SNAPSHOT_MEMORY_SIZE`
+
+### cache-snapshot-write-cold-duration = "10m0s"
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#cache-snapshot-write-cold-duration-10m).
+
+Environment variable: `INFLUXDB_DATA_CACHE_SNAPSHOT_WRITE_COLD_DURATION`
+
+### compact-full-write-cold-duration = "4h0m0s"
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#compact-full-write-cold-duration-4h).
+
+Environment variable: `INFLUXDB_DATA_COMPACT_FULL_WRITE_COLD_DURATION`
+
+### max-series-per-database = 1000000
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#max-series-per-database-1000000).
+
+Environment variable: `INFLUXDB_DATA_MAX_SERIES_PER_DATABASE`
+
+### max-values-per-tag = 100000
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#max-values-per-tag-100000).
+
+Environment variable: `INFLUXDB_DATA_MAX_VALUES_PER_TAG`
+
+### index-version = "inmem"
+
+The type of shard index to use for new shards. The default (`inmem`) is to use an in-memory index that is
+recreated at startup. A value of `tsi1` will use a disk-based index that supports higher cardinality datasets.
+Value should be enclosed in double quotes.
+
+Environment variable: `INFLUXDB_DATA_INDEX_VERSION`
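+
+For example, to have newly created shards use the disk-based TSI index, the `[data]` section might look like this (note the double quotes around the value):
+
+```
+[data]
+  # Use the TSI disk-based index for new shards.
+  index-version = "tsi1"
+```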
+
+### max-concurrent-compactions = 0
+
+The maximum number of concurrent full and level compactions.
+The default value of `0` results in 50% of the CPU cores being used for compactions at runtime.
+With the default setting, at most 4 cores will be used. If explicitly set, the number of cores used for compaction is limited to the specified value.
+This setting does not apply to cache snapshotting.
+
+Environment variable: `INFLUXDB_DATA_MAX_CONCURRENT_COMPACTIONS`
+
+### trace-logging-enabled = false
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#trace-logging-enabled-false).
+
+Environment variable: `INFLUXDB_DATA_TRACE_LOGGING_ENABLED`
+
+## [cluster]
+
+Controls how data is shared across shards and the options for [query
+management](/influxdb/v1.5/troubleshooting/query_management/).
+
+### dial-timeout = "1s"
+
+The duration for which the meta node waits for a connection to a remote data
+node before the meta node attempts to connect to a different remote data node.
+This setting applies to queries only.
+
+Environment variable: `INFLUXDB_CLUSTER_DIAL_TIMEOUT`
+
+### shard-reader-timeout = "0"
+
+The maximum time a query connection can take to return its response before the
+system returns an error.
+
+Environment variable: `INFLUXDB_CLUSTER_SHARD_READER_TIMEOUT`
+
+### cluster-tracing = false
+
+Set to `true` to enable logging of cluster communications.
+Enable this setting when troubleshooting connectivity issues between data nodes.
+
+Environment variable: `INFLUXDB_CLUSTER_CLUSTER_TRACING`
+
+### write-timeout = "10s"
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#write-timeout-10s).
+
+Environment variable: `INFLUXDB_CLUSTER_WRITE_TIMEOUT`
+
+### max-concurrent-queries = 0
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#max-concurrent-queries-0).
+
+Environment variable: `INFLUXDB_CLUSTER_MAX_CONCURRENT_QUERIES`
+
+### query-timeout = "0s"
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#query-timeout-0s).
+
+Environment variable: `INFLUXDB_CLUSTER_QUERY_TIMEOUT`
+
+### log-queries-after = "0s"
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#log-queries-after-0s).
+
+Environment variable: `INFLUXDB_CLUSTER_LOG_QUERIES_AFTER`
+
+### max-select-point = 0
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#max-select-point-0).
+
+Environment variable: `INFLUXDB_CLUSTER_MAX_SELECT_POINT`
+
+### max-select-series = 0
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#max-select-series-0).
+
+Environment variable: `INFLUXDB_CLUSTER_MAX_SELECT_SERIES`
+
+### max-select-buckets = 0
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#max-select-buckets-0).
+
+Environment variable: `INFLUXDB_CLUSTER_MAX_SELECT_BUCKETS`
+
+### pool-max-idle-time = "60s"
+
+The maximum time a stream can remain idle in the connection pool before it is reaped.
+
+Environment variable: `INFLUXDB_CLUSTER_POOL_MAX_IDLE_TIME`
+
+### pool-max-idle-streams = 100
+
+The maximum number of streams that can be idle in a pool, per node. The number of active streams can exceed the maximum, but they will not return to the pool when released.
+
+Environment variable: `INFLUXDB_CLUSTER_POOL_MAX_IDLE_STREAMS`
+
+## [retention]
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#retention-policy-settings-retention).
+
+### enabled = true
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#enabled-true).
+
+Environment variable: `INFLUXDB_RETENTION_ENABLED`
+
+### check-interval = "30m0s"
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#check-interval-30m0s).
+
+Environment variable: `INFLUXDB_RETENTION_CHECK_INTERVAL`
+
+## [shard-precreation]
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#shard-precreation-settings-shard-precreation).
+
+### enabled = true
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#enabled-true-1).
+
+Environment variable: `INFLUXDB_SHARD_PRECREATION_ENABLED`
+
+### check-interval = "10m"
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#check-interval-10m).
+
+Environment variable: `INFLUXDB_SHARD_PRECREATION_CHECK_INTERVAL`
+
+### advance-period = "30m"
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#advance-period-30m).
+
+Environment variable: `INFLUXDB_SHARD_PRECREATION_ADVANCE_PERIOD`
+
+## [monitor]
+
+By default, InfluxDB writes system monitoring data to the `_internal` database. If that database does not exist, InfluxDB creates it automatically. The `DEFAULT` retention policy on the `_internal` database is seven days. To use a retention policy other than the seven-day default, you must [create](/influxdb/v1.5/query_language/database_management/#retention-policy-management) it.
+
+For InfluxDB Enterprise production systems, InfluxData recommends including a dedicated InfluxDB (OSS) monitoring instance for monitoring InfluxEnterprise cluster nodes.
+
+* On the dedicated InfluxDB monitoring instance, set `store-enabled = false` to avoid potential performance and storage issues.
+* On each InfluxDB cluster node, install a Telegraf input plugin and Telegraf output plugin configured to report data to the dedicated InfluxDB monitoring instance.
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#monitoring-settings-monitor).
+
+### store-enabled = true
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#store-enabled-true).
+
+Environment variable: `INFLUXDB_MONITOR_STORE_ENABLED`
+
+### store-database = "\_internal"
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#store-database-internal).
+
+Environment variable: `INFLUXDB_MONITOR_STORE_DATABASE`
+
+### store-interval = "10s"
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#store-interval-10s).
+
+Environment variable: `INFLUXDB_MONITOR_STORE_INTERVAL`
+
+### remote-collect-interval = "10s"
+
+Environment variable: `INFLUXDB_MONITOR_REMOTE_COLLECT_INTERVAL`
+
+## [subscriber]
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#subscription-settings-subscriber).
+
+### enabled = true
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#enabled-true-3).
+
+Environment variable: `INFLUXDB_SUBSCRIBER_ENABLED`
+
+### http-timeout = "30s"
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#http-timeout-30s).
+
+Environment variable: `INFLUXDB_SUBSCRIBER_HTTP_TIMEOUT`
+
+### insecure-skip-verify = false
+Allows insecure HTTPS connections to subscribers.
+Use this option when testing with self-signed certificates.
+
+Environment variable: `INFLUXDB_SUBSCRIBER_INSECURE_SKIP_VERIFY`
+
+### ca-certs = ""
+The path to the PEM-encoded CA certificates file. If empty, the default system certificates are used.
+
+Environment variable: `INFLUXDB_SUBSCRIBER_CA_CERTS`
+
+### write-concurrency = 40
+The number of writer Goroutines processing the write channel.
+
+Environment variable: `INFLUXDB_SUBSCRIBER_WRITE_CONCURRENCY`
+
+### write-buffer-size = 1000
+The number of in-flight writes buffered in the write channel.
+
+Environment variable: `INFLUXDB_SUBSCRIBER_WRITE_BUFFER_SIZE`
+
+## [http]
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#http-endpoint-settings-http).
+
+### enabled = true
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#enabled-true-4).
+
+Environment variable: `INFLUXDB_HTTP_ENABLED`
+
+### bind-address = ":8086"
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#bind-address-8086).
+
+Environment variable: `INFLUXDB_HTTP_BIND_ADDRESS`
+
+### auth-enabled = false
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#auth-enabled-false).
+
+Environment variable: `INFLUXDB_HTTP_AUTH_ENABLED`
+
+### log-enabled = true
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#log-enabled-true).
+
+Environment variable: `INFLUXDB_HTTP_LOG_ENABLED`
+
+### write-tracing = false
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#write-tracing-false).
+
+Environment variable: `INFLUXDB_HTTP_WRITE_TRACING`
+
+### pprof-enabled = true
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#pprof-enabled-true).
+
+Environment variable: `INFLUXDB_HTTP_PPROF_ENABLED`
+
+### https-enabled = false
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#https-enabled-false).
+
+Environment variable: `INFLUXDB_HTTP_HTTPS_ENABLED`
+
+### https-certificate = "/etc/ssl/influxdb.pem"
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#https-certificate-etc-ssl-influxdb-pem).
+
+Environment variable: `INFLUXDB_HTTP_HTTPS_CERTIFICATE`
+
+### https-private-key = ""
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#https-private-key).
+
+Environment variable: `INFLUXDB_HTTP_HTTPS_PRIVATE_KEY`
+
+### max-row-limit = 0
+
+This limits the number of rows that can be returned in a non-chunked query.
+The default setting (`0`) allows for an unlimited number of rows.
+InfluxDB includes a `"partial":true` tag in the response body if query results exceed the `max-row-limit` setting.
+
+Environment variable: `INFLUXDB_HTTP_MAX_ROW_LIMIT`
+
+### max-connection-limit = 0
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#max-connection-limit-0).
+
+Environment variable: `INFLUXDB_HTTP_MAX_CONNECTION_LIMIT`
+
+### shared-secret = ""
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#shared-secret).
+
+This setting is required and must match on each data node if the cluster is using the InfluxEnterprise Web Console.
+
+Environment variable: `INFLUXDB_HTTP_SHARED_SECRET`
+
+### realm = "InfluxDB"
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#realm-influxdb).
+
+Environment variable: `INFLUXDB_HTTP_REALM`
+
+### unix-socket-enabled = false
+Set to `true` to enable the HTTP service over a UNIX domain socket.
+
+Environment variable: `INFLUXDB_HTTP_UNIX_SOCKET_ENABLED`
+
+### bind-socket = "/var/run/influxdb.sock"
+The path of the UNIX domain socket.
+
+Environment variable: `INFLUXDB_HTTP_BIND_SOCKET`
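+
+As an illustration, assuming a `curl` build with UNIX socket support, you could ping the HTTP service over the socket like this:
+
+```bash
+# Hypothetical example: hit the /ping endpoint over the UNIX domain socket.
+curl --unix-socket /var/run/influxdb.sock http://localhost/ping
+```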
+
+## [[graphite]]
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#graphite-settings-graphite).
+
+## [[collectd]]
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#collectd-settings-collectd).
+
+## [[opentsdb]]
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#opentsdb-settings-opentsdb).
+
+## [[udp]]
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#udp-settings-udp).
+
+## [continuous_queries]
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#continuous-queries-settings-continuous-queries).
+
+### log-enabled = true
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#log-enabled-true-1).
+
+Environment variable: `INFLUXDB_CONTINUOUS_QUERIES_LOG_ENABLED`
+
+### enabled = true
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#enabled-true-5).
+
+Environment variable: `INFLUXDB_CONTINUOUS_QUERIES_ENABLED`
+
+### run-interval = "1s"
+
+See the [InfluxDB OSS documentation](/influxdb/v1.5/administration/config/#run-interval-1s).
+
+Environment variable: `INFLUXDB_CONTINUOUS_QUERIES_RUN_INTERVAL`
+
+## [hinted-handoff]
+
+Controls the hinted handoff feature, which allows data nodes to temporarily cache writes destined for another data node when that data node is unreachable.
+
+### batch-size = 512000
+
+The maximum number of bytes to write to a shard in a single request.
+
+Environment variable: `INFLUXDB_HINTED_HANDOFF_BATCH_SIZE`
+
+### dir = "/var/lib/influxdb/hh"
+
+The hinted handoff directory where the durable queue will be stored on disk.
+
+Environment variable: `INFLUXDB_HINTED_HANDOFF_DIR`
+
+### enabled = true
+
+Set to `false` to disable hinted handoff.
+Disabling hinted handoff is not recommended and can lead to data loss if another data node is unreachable for any length of time.
+
+Environment variable: `INFLUXDB_HINTED_HANDOFF_ENABLED`
+
+### max-size = 10737418240
+
+The maximum size of the hinted handoff queue.
+Each queue is for one and only one other data node in the cluster.
+If there are N data nodes in the cluster, each data node may have up to N-1 hinted handoff queues.
+
+Environment variable: `INFLUXDB_HINTED_HANDOFF_MAX_SIZE`
+
+### max-age = "168h0m0s"
+
+The time writes sit in the queue before they are purged. The time is determined by how long the batch has been in the queue, not by the timestamps in the data.
+If another data node is unreachable for more than the `max-age` it can lead to data loss.
+
+Environment variable: `INFLUXDB_HINTED_HANDOFF_MAX_AGE`
+
+### retry-concurrency = 20
+
+The maximum number of hinted handoff blocks that the source data node attempts to write to each destination data node.
+Hinted handoff blocks are sets of data that belong to the same shard and have the same destination data node.
+
+If `retry-concurrency` is 20 and the source data node's hinted handoff has 25 blocks for destination data node A, then the source data node attempts to concurrently write 20 blocks to node A.
+If `retry-concurrency` is 20 and the source data node's hinted handoff has 25 blocks for destination data node A and 30 blocks for destination data node B, then the source data node attempts to concurrently write 20 blocks to node A and 20 blocks to node B.
+If the source data node successfully writes 20 blocks to a destination data node, it continues to write the remaining hinted handoff data to that destination node in sets of 20 blocks.
+
+If the source data node successfully writes data to destination data nodes, a higher `retry-concurrency` setting can accelerate the rate at which the source data node empties its hinted handoff queue.
+Note that increasing `retry-concurrency` also increases network traffic.
+
+Environment variable: `INFLUXDB_HINTED_HANDOFF_RETRY_CONCURRENCY`
+
+### retry-rate-limit = 0
+
+The rate (in bytes per second) at which the hinted handoff retries writes.
+The `retry-rate-limit` option is no longer in use and will be removed from the configuration file in a future release.
+Changing the `retry-rate-limit` setting has no effect on your cluster.
+
+Environment variable: `INFLUXDB_HINTED_HANDOFF_RETRY_RATE_LIMIT`
+
+### retry-interval = "1s"
+
+The time period after which the hinted handoff retries a write after the write fails.
+
+Environment variable: `INFLUXDB_HINTED_HANDOFF_RETRY_INTERVAL`
+
+### retry-max-interval = "10s"
+
+The maximum interval after which the hinted handoff retries a write after the write fails.
+The `retry-max-interval` option is no longer in use and will be removed from the configuration file in a future release.
+Changing the `retry-max-interval` setting has no effect on your cluster.
+
+Environment variable: `INFLUXDB_HINTED_HANDOFF_RETRY_MAX_INTERVAL`
+
+### purge-interval = "1m0s"
+
+The interval at which InfluxDB checks for and purges hinted handoff data older than `max-age`.
+
+Environment variable: `INFLUXDB_HINTED_HANDOFF_PURGE_INTERVAL`
+
+## [anti-entropy]
+
+### enabled = true
+
+Set to `true` to enable the anti-entropy service.
+
+Environment variable: `INFLUXDB_ANTI_ENTROPY_ENABLED`
+
+### check-interval = "30s"
+
+The interval at which the anti-entropy check runs on each data node.
+
+Environment variable: `INFLUXDB_ANTI_ENTROPY_CHECK_INTERVAL`
+
+### max-fetch = 10
+
+The maximum number of shards that a single data node will copy or repair in parallel.
+
+Environment variable: `INFLUXDB_ANTI_ENTROPY_MAX_FETCH`
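+
+Taken together, the `[anti-entropy]` section with the default values described above looks like this:
+
+```
+[anti-entropy]
+  enabled = true
+  check-interval = "30s"
+  max-fetch = 10
+```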
+
+
diff --git a/content/enterprise_influxdb/v1.5/administration/logs.md b/content/enterprise_influxdb/v1.5/administration/logs.md
new file mode 100644
index 000000000..88c457856
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/administration/logs.md
@@ -0,0 +1,132 @@
+---
+title: Logging and tracing in InfluxDB Enterprise
+
+menu:
+ enterprise_influxdb_1_5:
+ name: Logging and tracing
+ weight: 60
+ parent: Administration
+---
+
+
+* [Logging locations](#logging-locations)
+* [Redirecting HTTP request logging](#redirecting-http-request-logging)
+* [Structured logging](#structured-logging)
+* [Tracing](#tracing)
+
+
+InfluxDB writes log output, by default, to `stderr`.
+Depending on your use case, this log information can be written to another location.
+Some service managers may override this default.
+
+## Logging locations
+
+### Running InfluxDB directly
+
+If you run InfluxDB directly, using `influxd`, all logs will be written to `stderr`.
+You may redirect this log output as you would any output to `stderr` like so:
+
+```bash
+influxd-meta 2>$HOME/my_log_file # Meta nodes
+influxd 2>$HOME/my_log_file # Data nodes
+influx-enterprise 2>$HOME/my_log_file # Enterprise Web
+```
+
+### Launched as a service
+
+#### sysvinit
+
+If InfluxDB was installed using a pre-built package, and then launched
+as a service, `stderr` is redirected to
+`/var/log/influxdb/<node-type>.log`, and all log data will be written to
+that file. You can override this location by setting the variable
+`STDERR` in the file `/etc/default/<node-type>`.
+
+For example, if on a data node `/etc/default/influxdb` contains:
+
+```bash
+STDERR=/dev/null
+```
+
+all log data will be discarded. You can similarly direct output to
+`stdout` by setting `STDOUT` in the same file. Output to `stdout` is
+sent to `/dev/null` by default when InfluxDB is launched as a service.
+
+InfluxDB must be restarted to pick up any changes to `/etc/default/<node-type>`.
+
+
+##### Meta nodes
+
+For meta nodes, the `<node-type>` is `influxdb-meta`.
+The default log file is `/var/log/influxdb/influxdb-meta.log`.
+The service configuration file is `/etc/default/influxdb-meta`.
+
+##### Data nodes
+
+For data nodes, the `<node-type>` is `influxdb`.
+The default log file is `/var/log/influxdb/influxdb.log`.
+The service configuration file is `/etc/default/influxdb`.
+
+##### Enterprise Web
+
+For Enterprise Web nodes, the `<node-type>` is `influx-enterprise`.
+The default log file is `/var/log/influxdb/influx-enterprise.log`.
+The service configuration file is `/etc/default/influx-enterprise`.
+
+#### systemd
+
+Starting with version 1.0, InfluxDB on systemd systems no longer
+writes files to `/var/log/influxdb/<node-type>.log` by default, and now uses the
+system configured default for logging (usually `journald`). On most
+systems, the logs will be directed to the systemd journal and can be
+accessed with the command:
+
+```
+sudo journalctl -u <node-type>.service
+```
+
+Please consult the systemd journald documentation for configuring
+journald.
+
+##### Meta nodes
+
+For meta nodes, the `<node-type>` is `influxdb-meta`.
+The default log command is `sudo journalctl -u influxdb-meta.service`.
+The service configuration file is `/etc/default/influxdb-meta`.
+
+##### Data nodes
+
+For data nodes, the `<node-type>` is `influxdb`.
+The default log command is `sudo journalctl -u influxdb.service`.
+The service configuration file is `/etc/default/influxdb`.
+
+##### Enterprise Web
+
+For Enterprise Web nodes, the `<node-type>` is `influx-enterprise`.
+The default log command is `sudo journalctl -u influx-enterprise.service`.
+The service configuration file is `/etc/default/influx-enterprise`.
+
+### Using logrotate
+
+You can use [logrotate](http://manpages.ubuntu.com/manpages/cosmic/en/man8/logrotate.8.html)
+to rotate the log files generated by InfluxDB on systems where logs are written to flat files.
+If using the package install on a sysvinit system, the config file for logrotate is installed in `/etc/logrotate.d`.
+You can view the file [here](https://github.com/influxdb/influxdb/blob/master/scripts/logrotate).
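+
+If you direct logs to a flat file yourself, a minimal logrotate stanza might look like the following (an illustrative sketch, not the packaged configuration; the path assumes the default data node log file):
+
+```
+/var/log/influxdb/influxdb.log {
+    daily
+    rotate 7
+    missingok
+    notifempty
+    compress
+    copytruncate
+}
+```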
+
+## Redirecting HTTP request logging
+
+InfluxDB 1.5 introduces the option to log HTTP request traffic separately from the other InfluxDB log output. When HTTP request logging is enabled, the HTTP logs are intermingled by default with internal InfluxDB logging. By redirecting the HTTP request log entries to a separate file, both log files are easier to read, monitor, and debug.
+
+See [Redirecting HTTP request logging](/influxdb/v1.5/administration/logs/#redirecting-http-request-logging) in the InfluxDB OSS documentation.
+
+## Structured logging
+
+InfluxDB 1.5 adds support for structured logging, which enables machine-readable and more developer-friendly log output formats. The two new structured log formats, `logfmt` and `json`, provide easier filtering and searching with external tools and simplify integration of InfluxDB logs with Splunk, Papertrail, Elasticsearch, and other third-party tools.
+
+See [Structured logging](/influxdb/v1.5/administration/logs/#structured-logging) in the InfluxDB OSS documentation.
+
+## Tracing
+
+Logging has been enhanced, starting in InfluxDB 1.5, to provide tracing of important InfluxDB operations. Tracing is useful for error reporting and discovering performance bottlenecks.
+
+See [Tracing](/influxdb/v1.5/administration/logs/#tracing) in the InfluxDB OSS documentation.
diff --git a/content/enterprise_influxdb/v1.5/administration/renaming.md b/content/enterprise_influxdb/v1.5/administration/renaming.md
new file mode 100644
index 000000000..064ee6c19
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/administration/renaming.md
@@ -0,0 +1,51 @@
+---
+title: Host renaming in InfluxDB Enterprise
+aliases:
+ - /enterprise/v1.5/administration/renaming/
+menu:
+ enterprise_influxdb_1_5:
+ name: Host renaming
+ weight: 60
+ parent: Administration
+---
+
+## Host renaming
+
+The following instructions allow you to rename a host within your InfluxDB Enterprise instance.
+
+First, suspend write and query activity to the cluster.
+
+### Meta node:
+- Find the meta node leader with `curl localhost:8091/status`. The `leader` field in the JSON output reports the leader meta node. We will start with the two meta nodes that are not the leader.
+- On a non-leader meta node, run `influxd-ctl remove-meta`. Once removed, confirm by running `influxd-ctl show` on the meta leader.
+- Stop the meta service on the removed node and edit its configuration file (`/etc/influxdb/influxdb-meta.conf`) to set the new `hostname`.
+- Update the operating system's hostname if needed, and apply any DNS changes.
+- Start the meta service.
+- On the meta leader, add the meta node with the new hostname using `influxd-ctl add-meta newmetanode:8091`. Confirm with `influxd-ctl show`.
+- Repeat for the second meta node.
+- Once the two non-leaders are updated, stop the leader and wait for another meta node to become the leader (check with `curl localhost:8091/status`).
+- Repeat the process for the last meta node (the former leader). A condensed command sketch follows this list.
+
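+The sequence for a single non-leader meta node looks roughly like this (hostnames are hypothetical):
+
+```bash
+curl localhost:8091/status                   # identify the current meta leader
+influxd-ctl remove-meta old-meta-02:8091     # remove the meta node being renamed (run on a non-leader meta node)
+# ...rename the host, update /etc/influxdb/influxdb-meta.conf, restart the meta service...
+influxd-ctl add-meta new-meta-02:8091        # run on the meta leader
+influxd-ctl show                             # confirm the new hostname appears
+```
+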
+Intermediate verification:
+- Verify the state of the cluster with `influxd-ctl show`. Every node must report its version for the cluster to be considered healthy.
+- Verify there is a meta leader with `curl localhost:8091/status` and that all meta nodes list the rest in the output.
+- Restart all data nodes one by one. Verify that `/var/lib/influxdb/meta/client.json` on all data nodes references the new meta names.
+- Verify the `show shards` output lists all shards and node ownership as expected.
+- Verify that the cluster is functionally healthy and responds to writes and queries.
+
+### Data node:
+- Find the meta node leader with `curl localhost:8091/status`. The `leader` field in the JSON output reports the leader meta node.
+- Stop the service on the data node you want to rename. Edit its configuration file (`/etc/influxdb/influxdb.conf`) to set the new `hostname`.
+- Update the operating system's hostname if needed, and apply any DNS changes.
+- Start the data service. Errors will be logged until it is added to the cluster again.
+- On the meta node leader, run `influxd-ctl update-data oldname:8088 newname:8088`. Upon success, you will see a message confirming that the data node was updated to `newname:8088`.
+- Verify with `influxd-ctl show` on the meta node leader. Verify there are no errors in the logs of the updated data node and other data nodes. Restart the service on the updated data node. Verify writes, replication, and queries work as expected.
+- Repeat on the remaining data nodes. Remember to execute the `update-data` command only from the meta leader. A condensed command sketch follows this list.
+
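+A condensed command sketch for one data node (hostnames are hypothetical):
+
+```bash
+# After stopping the service, renaming the host, and updating influxdb.conf:
+influxd-ctl update-data oldname:8088 newname:8088   # run on the meta leader
+influxd-ctl show                                    # verify the new data node address
+```
+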
+Final verification:
+- Verify the state of the cluster with `influxd-ctl show`. Every node must report its version for the cluster to be considered healthy.
+- Verify the `show shards` output lists all shards and node ownership as expected.
+- Verify meta queries work (for example, `SHOW MEASUREMENTS` on a database).
+- Verify data can be queried successfully.
+
+Once you've performed the verification steps, resume write and query activity.
diff --git a/content/enterprise_influxdb/v1.5/administration/security.md b/content/enterprise_influxdb/v1.5/administration/security.md
new file mode 100644
index 000000000..40e063922
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/administration/security.md
@@ -0,0 +1,51 @@
+---
+title: Managing InfluxDB Enterprise security
+menu:
+ enterprise_influxdb_1_5:
+ name: Managing security
+ weight: 70
+ parent: Administration
+---
+
+Some customers may choose to install InfluxDB Enterprise with public internet access; however, doing so can inadvertently expose your data and invite unwelcome attacks on your database.
+Check out the sections below for how to protect the data in your InfluxDB Enterprise instance.
+
+## Enabling authentication
+
+Password protect your InfluxDB Enterprise instance to keep any unauthorized individuals
+from accessing your data.
+
+Resources:
+[Set up Authentication](/influxdb/v1.5/query_language/authentication_and_authorization/#set-up-authentication)
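+
+For example, after enabling `auth-enabled = true` in the `[http]` section, you can create an initial admin user with an InfluxQL statement like the following (the username and password are placeholders):
+
+```
+CREATE USER "admin" WITH PASSWORD 'changeme' WITH ALL PRIVILEGES
+```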
+
+## Managing users and permissions
+
+Restrict access by creating individual users and assigning them relevant
+read and/or write permissions.
+
+Resources:
+[User types and privileges](/influxdb/v1.5/query_language/authentication_and_authorization/#user-types-and-privileges),
+[User management commands](/influxdb/v1.5/query_language/authentication_and_authorization/#user-management-commands),
+[Fine-grained authorization](/enterprise_influxdb/v1.5/guides/fine-grained-authorization/)
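+
+For example, a user who only needs to read from a single database can be limited to exactly that (names are placeholders):
+
+```
+CREATE USER "telegraf_ro" WITH PASSWORD 'changeme'
+GRANT READ ON "telegraf" TO "telegraf_ro"
+```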
+
+## Enabling HTTPS
+
+Using HTTPS secures the communication between clients and the InfluxDB server and, in
+some cases, verifies the authenticity of the InfluxDB server to clients (bi-directional authentication).
+Communication between the meta nodes and the data nodes is also secured via HTTPS.
+
+Resources:
+[Enabling HTTPS](/enterprise_influxdb/v1.5/guides/https_setup/)
+
+## Securing your host
+
+### Ports
+For InfluxDB Enterprise data nodes, close all ports on each host except for port `8086`.
+You can also use a proxy to port `8086`. By default, data nodes and meta nodes communicate with each other over ports `8088`, `8089`, and `8091`.
+
+For InfluxDB Enterprise, [backing up and restoring](/enterprise_influxdb/v1.5/administration/backup-and-restore/) is performed from the meta nodes.
+
+
+### AWS Recommendations
+
+We recommend implementing on-disk encryption; InfluxDB does not offer built-in support to encrypt the data.
diff --git a/content/enterprise_influxdb/v1.5/administration/upgrading.md b/content/enterprise_influxdb/v1.5/administration/upgrading.md
new file mode 100644
index 000000000..3d7deecd0
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/administration/upgrading.md
@@ -0,0 +1,234 @@
+---
+title: Upgrading InfluxDB Enterprise clusters
+aliases:
+ - /enterprise/v1.5/administration/upgrading/
+menu:
+ enterprise_influxdb_1_5:
+ weight: 20
+ parent: Administration
+---
+
+## Upgrading to InfluxDB Enterprise 1.5.4
+
+Version 1.5 includes the first official Time Series Index (TSI) release. Although you can install without enabling TSI, you are encouraged to begin leveraging the advantages that TSI indexing offers.
+
+## Upgrading InfluxDB Enterprise 1.3.x-1.5.x clusters to 1.5.4 (rolling upgrade)
+
+### Step 0: Back up your cluster before upgrading to version 1.5.4.
+
+Create a full backup of your InfluxDB Enterprise cluster before performing an upgrade.
+If you have incremental backups created as part of your standard operating procedures, make sure to
+trigger a final incremental backup before proceeding with the upgrade.
+
+> ***Note:*** For information on performing a final incremental backup or a full backup,
+> see the InfluxDB Enterprise [Backup and restore](/enterprise_influxdb/v1.5/administration/backup-and-restore/) documentation.
+
+## Upgrading meta nodes
+
+Follow these steps to upgrade all meta nodes in your InfluxDB Enterprise cluster. Ensure that the meta cluster is healthy before proceeding to the data nodes.
+
+### Step 1: Download the 1.5.4 meta node package.
+
+#### Meta node package download
+**Ubuntu & Debian (64-bit)**
+```
+wget https://dl.influxdata.com/enterprise/releases/influxdb-meta_1.5.4-c1.5.4_amd64.deb
+```
+
+**RedHat & CentOS (64-bit)**
+```
+wget https://dl.influxdata.com/enterprise/releases/influxdb-meta-1.5.4_c1.5.4.x86_64.rpm
+```
+
+### Step 2: Install the 1.5.4 meta nodes package.
+
+#### Meta node package install
+
+##### Ubuntu & Debian (64-bit)
+
+```
+sudo dpkg -i influxdb-meta_1.5.4-c1.5.4_amd64.deb
+```
+
+##### RedHat & CentOS (64-bit)
+
+```
+sudo yum localinstall influxdb-meta-1.5.4_c1.5.4.x86_64.rpm
+```
+
+### Step 3: Restart the `influxdb-meta` service.
+
+#### Meta node restart
+
+##### sysvinit systems
+
+```
+service influxdb-meta restart
+```
+##### systemd systems
+
+```
+sudo systemctl restart influxdb-meta
+```
+
+### Step 4: Confirm the upgrade.
+
+After performing the upgrade on ALL meta nodes, check your node version numbers using the
+`influxd-ctl show` command.
+The [`influxd-ctl` utility](/enterprise_influxdb/v1.5/administration/cluster-commands/) is available on all meta nodes.
+
+```
+~# influxd-ctl show
+
+Data Nodes
+==========
+ID TCP Address Version
+4 rk-upgrading-01:8088 1.3.x_c1.3.y
+5 rk-upgrading-02:8088 1.3.x_c1.3.y
+6 rk-upgrading-03:8088 1.3.x_c1.3.y
+
+Meta Nodes
+==========
+TCP Address Version
+rk-upgrading-01:8091 1.5.4_c1.5.4 # 1.5.4_c1.5.4 = 👍
+rk-upgrading-02:8091 1.5.4_c1.5.4
+rk-upgrading-03:8091 1.5.4_c1.5.4
+```
+
+## Upgrading data nodes
+
+Repeat the following steps for each data node in your InfluxDB Enterprise cluster.
+
+### Step 1: Download the 1.5.4 data node package.
+
+#### Data node package download
+
+##### Ubuntu & Debian (64-bit)
+
+```
+wget https://dl.influxdata.com/enterprise/releases/influxdb-data_1.5.4-c1.5.4_amd64.deb
+```
+
+##### RedHat & CentOS (64-bit)
+
+```
+wget https://dl.influxdata.com/enterprise/releases/influxdb-data-1.5.4_c1.5.4.x86_64.rpm
+```
+
+### Step 2: Remove the data node from the load balancer.
+
+To avoid downtime and allow for a smooth transition, remove the data node you are upgrading from your
+load balancer **before** performing the remaining steps.
+
+### Step 3: Install the 1.5.4 data node packages.
+
+#### Data node package install
+
+When you run the install command, your terminal asks if you want to keep your
+current configuration file or overwrite your current configuration file with the file for version 1.5.4.
+
+Keep your current configuration file by entering `N` or `O`.
+The configuration file will be updated with the necessary changes for version 1.5.4 in the next step.
+
+**Ubuntu & Debian (64-bit)**
+```
+sudo dpkg -i influxdb-data_1.5.4-c1.5.4_amd64.deb
+```
+
+**RedHat & CentOS (64-bit)**
+```
+sudo yum localinstall influxdb-data-1.5.4_c1.5.4.x86_64.rpm
+```
+
+### Step 4: Update the data node configuration file.
+
+**Add:**
+
+* If enabling TSI: [index-version = "tsi1"](/enterprise_influxdb/v1.5/administration/configuration/#index-version-inmem) to the `[data]` section.
+* If not enabling TSI: [index-version = "inmem"](/enterprise_influxdb/v1.5/administration/configuration/#index-version-inmem) to the `[data]` section.
+ - Use `tsi1` for the Time Series Index (TSI); set the value to `inmem` to use the TSM in-memory index.
+* [wal-fsync-delay = "0s"](/enterprise_influxdb/v1.5/administration/configuration/#wal-fsync-delay-0s) to the `[data]` section
+* [max-concurrent-compactions = 0](/enterprise_influxdb/v1.5/administration/configuration/#max-concurrent-compactions-0) to the `[data]` section
+* [pool-max-idle-streams = 100](/enterprise_influxdb/v1.5/administration/configuration/#pool-max-idle-streams-100) to the `[cluster]` section
+* [pool-max-idle-time = "1m0s"](/enterprise_influxdb/v1.5/administration/configuration/#pool-max-idle-time-60s) to the `[cluster]` section
+* the [[anti-entropy]](/enterprise_influxdb/v1.5/administration/configuration/#anti-entropy) section:
+```
+[anti-entropy]
+ enabled = true
+ check-interval = "30s"
+ max-fetch = 10
+```
+**Remove:**
+
+* `max-remote-write-connections` from the `[cluster]` section
+* `[admin]` section
+
+**Update:**
+
+* [cache-max-memory-size](/enterprise_influxdb/v1.5/administration/configuration/#cache-max-memory-size-1073741824) to `1073741824` in the `[data]` section
+
+The new configuration options are set to the default settings.
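+
+After these edits, the affected sections of a data node configuration file might look like the following sketch (values shown are the defaults listed above; `index-version` depends on whether you enable TSI):
+
+```
+[data]
+  index-version = "inmem"
+  wal-fsync-delay = "0s"
+  cache-max-memory-size = 1073741824
+  max-concurrent-compactions = 0
+
+[cluster]
+  pool-max-idle-streams = 100
+  pool-max-idle-time = "1m0s"
+
+[anti-entropy]
+  enabled = true
+  check-interval = "30s"
+  max-fetch = 10
+```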
+
+### Step 5: [For TSI Preview instances only] Prepare your node to support Time Series Index (TSI).
+
+1. Delete all existing TSM-based shard `index` directories.
+
+- Remove the existing `index` directories to ensure there are no incompatible index files.
+- By default, the `index` directories are located at `/<shard_ID>/index` (e.g., `/2/index`).
+
+2. Convert existing TSM-based shards (or rebuild TSI Preview shards) to support TSI.
+
+ - When TSI is enabled, new shards use the TSI index. Existing shards must be converted to support TSI.
+ - Run the [`influx_inspect buildtsi`](/influxdb/v1.5/tools/influx_inspect#buildtsi) command to convert existing TSM-based shards (or rebuild TSI Preview shards) to support TSI.
+
+> **Note:** Run the `buildtsi` command using the user account that you are going to run the database as,
+> or ensure that the permissions match afterward.
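+
+A typical invocation, assuming the package-default data and WAL directories and the `influxdb` service user, might look like this:
+
+```bash
+# Run as the same user the database runs as; adjust paths for your install.
+sudo -u influxdb influx_inspect buildtsi -datadir /var/lib/influxdb/data -waldir /var/lib/influxdb/wal
+```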
+
+### Step 6: Restart the `influxdb` service.
+
+#### Restart data node
+
+##### sysvinit systems
+
+```
+service influxdb restart
+```
+##### systemd systems
+
+```
+sudo systemctl restart influxdb
+```
+
+### Step 7: Add the data node back into the load balancer.
+
+Add the data node back into the load balancer to allow it to serve reads and writes.
+
+If this is the last data node to be upgraded, proceed to Step 8.
+Otherwise, return to Step 1 of [Upgrading data nodes](#upgrading-data-nodes) and repeat the process for the remaining data nodes.
+
+### Step 8: Confirm the upgrade.
+
+Your cluster is now upgraded to InfluxDB Enterprise 1.5.
+Check your node version numbers using the `influxd-ctl show` command.
+The [`influxd-ctl`](/enterprise_influxdb/v1.5/administration/cluster-commands/) utility is available on all meta nodes.
+
+```
+~# influxd-ctl show
+
+Data Nodes
+==========
+ID TCP Address Version
+4 rk-upgrading-01:8088 1.5.4_c1.5.4 # 1.5.4_c1.5.4 = 👍
+5 rk-upgrading-02:8088 1.5.4_c1.5.4
+6 rk-upgrading-03:8088 1.5.4_c1.5.4
+
+Meta Nodes
+==========
+TCP Address Version
+rk-upgrading-01:8091 1.5.4_c1.5.4
+rk-upgrading-02:8091 1.5.4_c1.5.4
+rk-upgrading-03:8091 1.5.4_c1.5.4
+```
+
+If you have any issues upgrading your cluster, please do not hesitate to contact support at the email address
+provided to you when you received your InfluxEnterprise license.
diff --git a/content/enterprise_influxdb/v1.5/concepts/_index.md b/content/enterprise_influxdb/v1.5/concepts/_index.md
new file mode 100644
index 000000000..e3e94309a
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/concepts/_index.md
@@ -0,0 +1,12 @@
+---
+title: InfluxDB Enterprise concepts
+aliases:
+ - /enterprise/v1.5/concepts/
+menu:
+ enterprise_influxdb_1_5:
+ name: Concepts
+ weight: 50
+---
+
+## [Clustering](/enterprise_influxdb/v1.5/concepts/clustering)
+## [Glossary](/enterprise_influxdb/v1.5/concepts/glossary/)
diff --git a/content/enterprise_influxdb/v1.5/concepts/clustering.md b/content/enterprise_influxdb/v1.5/concepts/clustering.md
new file mode 100644
index 000000000..41abaad17
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/concepts/clustering.md
@@ -0,0 +1,149 @@
+---
+title: Clustering in InfluxDB Enterprise
+aliases:
+ - /enterprise/v1.5/concepts/clustering/
+menu:
+ enterprise_influxdb_1_5:
+ name: Clustering
+ weight: 10
+ parent: Concepts
+---
+
+This document describes in detail how clustering works in InfluxEnterprise. It starts with a high level description of the different components of a cluster and then delves into the implementation details.
+
+## Architectural overview
+
+An InfluxEnterprise installation consists of three separate software processes: Data nodes, Meta nodes, and the Enterprise Web server. To run an InfluxDB cluster, only the meta and data nodes are required. Communication within a cluster looks like this:
+
+```text
+ ┌───────┐ ┌───────┐
+ │ │ │ │
+ │ Meta1 │◀───▶│ Meta2 │
+ │ │ │ │
+ └───────┘ └───────┘
+ ▲ ▲
+ │ │
+ │ ┌───────┐ │
+ │ │ │ │
+ └─▶│ Meta3 │◀─┘
+ │ │
+ └───────┘
+
+─────────────────────────────────
+ ╲│╱ ╲│╱
+ ┌────┘ └──────┐
+ │ │
+ ┌───────┐ ┌───────┐
+ │ │ │ │
+ │ Data1 │◀────────▶│ Data2 │
+ │ │ │ │
+ └───────┘ └───────┘
+```
+
+The meta nodes communicate with each other via a TCP protocol and the Raft consensus protocol that all use port `8089` by default. This port must be reachable between the meta nodes. The meta nodes also expose an HTTP API bound to port `8091` by default that the `influxd-ctl` command uses.
+
+Data nodes communicate with each other through a TCP protocol that is bound to port `8088`. Data nodes communicate with the meta nodes through their HTTP API bound to `8091`. These ports must be reachable between the meta and data nodes.
+
+Within a cluster, all meta nodes must communicate with all other meta nodes. All data nodes must communicate with all other data nodes and all meta nodes.
+
+The meta nodes keep a consistent view of the metadata that describes the cluster. The meta cluster uses the [HashiCorp implementation of Raft](https://github.com/hashicorp/raft) as the underlying consensus protocol. This is the same Raft implementation used in Consul.
+
+The data nodes replicate data and query each other via a Protobuf protocol over TCP. Details on replication and querying are covered later in this document.
+
+## Where data lives
+
+The meta and data nodes are each responsible for different parts of the database.
+
+### Meta nodes
+
+Meta nodes hold all of the following meta data:
+
+* all nodes in the cluster and their role
+* all databases and retention policies that exist in the cluster
+* all shards and shard groups, and on what nodes they exist
+* cluster users and their permissions
+* all continuous queries
+
+The meta nodes keep this data in the Raft database on disk, backed by BoltDB. By default the Raft database is `/var/lib/influxdb/meta/raft.db`.
+
+### Data nodes
+
+Data nodes hold all of the raw time series data and metadata, including:
+
+* measurements
+* tag keys and values
+* field keys and values
+
+On disk, the data is always organized by `<database>/<retention_policy>/<shard_ID>`. By default the parent directory is `/var/lib/influxdb/data`.
+
+> **Note:** Meta nodes only require the `/meta` directory, but Data nodes require all four subdirectories of `/var/lib/influxdb/`: `/meta`, `/data`, `/wal`, and `/hh`.
+
+## Optimal server counts
+
+When creating a cluster you'll need to choose how many meta and data nodes to configure and connect. You can think of InfluxEnterprise as two separate clusters that communicate with each other: a cluster of meta nodes and one of data nodes. The number of meta nodes is driven by the number of meta node failures they need to be able to handle, while the number of data nodes scales based on your storage and query needs.
+
+The consensus protocol requires a quorum to perform any operation, so there should always be an odd number of meta nodes. For almost all use cases, 3 meta nodes is the correct number, and such a cluster operates normally even with the permanent loss of 1 meta node.
+
+If you were to create a cluster with 4 meta nodes, it can still only survive the loss of 1 node. Losing a second node means the remaining two nodes can only gather two votes out of a possible four, which does not achieve a majority consensus. Since a cluster of 3 meta nodes can also survive the loss of a single meta node, adding the fourth node achieves no extra redundancy and only complicates cluster maintenance. At higher numbers of meta nodes the communication overhead increases exponentially, so configurations of 5 or more are not recommended unless the cluster will frequently lose meta nodes.
+
+Data nodes hold the actual time series data. The minimum number of data nodes to run is 1 and can scale up from there. **Generally, you'll want to run a number of data nodes that is evenly divisible by your replication factor.** For instance, if you have a replication factor of 2, you'll want to run 2, 4, 6, 8, 10, etc. data nodes.
+
+## Chronograf
+
+[Chronograf](/chronograf/latest/introduction/getting-started/) is the user interface component of InfluxData’s TICK stack.
+It makes owning the monitoring and alerting for your infrastructure easy to set up and maintain.
+It talks directly to the data and meta nodes over their HTTP protocols, which are bound by default to ports `8086` for data nodes and port `8091` for meta nodes.
+
+## Writes in a cluster
+
+This section describes how writes in a cluster work. We'll work through some examples using a cluster of four data nodes: `A`, `B`, `C`, and `D`. Assume that we have a retention policy with a replication factor of 2 with shard durations of 1 day.
+
+### Shard groups
+
+The cluster creates shards within a shard group to maximize the number of data nodes utilized. If there are N data nodes in the cluster and the replication factor is X, then N/X shards are created in each shard group, discarding any fractions.
+
+This means that a new shard group gets created for each day of data that gets written in. Within each shard group 2 shards are created. Because of the replication factor of 2, each of those two shards is copied to 2 servers. For example, we have a shard group for `2016-09-19` that has two shards `1` and `2`. Shard `1` is replicated to servers `A` and `B` while shard `2` is copied to servers `C` and `D`.
+
+When a write comes in with values that have a timestamp in `2016-09-19` the cluster must first determine which shard within the shard group should receive the write. This is done by taking a hash of the `measurement` + sorted `tagset` (the metaseries) and bucketing into the correct shard. In Go this looks like:
+
+```go
+// key is measurement + tagset
+// shardGroup is the group for the values based on timestamp
+// hash with fnv and then bucket
+shard := shardGroup.shards[fnv.New64a(key) % len(shardGroup.Shards)]
+```
+
+There are multiple implications to this scheme for determining where data lives in a cluster. First, for any given metaseries all data on any given day exists in a single shard, and thus only on those servers hosting a copy of that shard. Second, once a shard group is created, adding new servers to the cluster won't scale out write capacity for that shard group. The replication is fixed when the shard group is created.
+
+However, there is a method for expanding writes in the current shard group (i.e. today) when growing a cluster. The current shard group can be truncated to stop at the current time using `influxd-ctl truncate-shards`. This immediately closes the current shard group, forcing a new shard group to be created. That new shard group inherits the latest retention policy and data node changes and then copies itself appropriately to the newly available data nodes. Run `influxd-ctl truncate-shards help` for more information on the command.
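+
+A minimal invocation, run with the `influxd-ctl` utility from a meta node, looks like this:
+
+```bash
+# Close the current shard group so new shards are created across the expanded cluster.
+influxd-ctl truncate-shards
+```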
+
+### Write consistency
+
+Each request to the HTTP API can specify the consistency level via the `consistency` query parameter. For this example let's assume that an HTTP write is being sent to server `D` and the data belongs in shard `1`. The write needs to be replicated to the owners of shard `1`: data nodes `A` and `B`. When the write comes into `D`, that node determines from its local cache of the metastore that the write needs to be replicated to `A` and `B`, and it immediately tries to write to both. The subsequent behavior depends on the consistency level chosen:
+
+* `any` - return success to the client as soon as any node has responded with a write success, or the receiving node has written the data to its hinted handoff queue. In our example, if `A` or `B` return a successful write response to `D`, or if `D` has cached the write in its local hinted handoff, `D` returns a write success to the client.
+* `one` - return success to the client as soon as any node has responded with a write success, but not if the write is only in hinted handoff. In our example, if `A` or `B` return a successful write response to `D`, `D` returns a write success to the client. If `D` could not send the data to either `A` or `B` but instead put the data in hinted handoff, `D` returns a write failure to the client. Note that this means writes may return a failure and yet the data may eventually persist successfully when hinted handoff drains.
+* `quorum` - return success when a majority of nodes return success. This option is only useful if the replication factor is greater than 2, otherwise it is equivalent to `all`. In our example, if both `A` and `B` return a successful write response to `D`, `D` returns a write success to the client. If either `A` or `B` does not return success, then a majority of nodes have not successfully persisted the write and `D` returns a write failure to the client. If we assume for a moment the data were bound for three nodes, `A`, `B`, and `C`, then if any two of those nodes respond with a write success, `D` returns a write success to the client. If one or fewer nodes respond with a success, `D` returns a write failure to the client. Note that this means writes may return a failure and yet the data may eventually persist successfully when hinted handoff drains.
+* `all` - return success only when all nodes return success. In our example, if both `A` and `B` return a successful write response to `D`, `D` returns a write success to the client. If either `A` or `B` does not return success, then `D` returns a write failure to the client. If we again assume three destination nodes `A`, `B`, and `C`, then if all three nodes respond with a write success, `D` returns a write success to the client. Otherwise, `D` returns a write failure to the client. Note that this means writes may return a failure and yet the data may eventually persist successfully when hinted handoff drains.
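+
+For example, a write request specifying `quorum` consistency might look like this (the database name and point are hypothetical):
+
+```bash
+curl -XPOST "http://localhost:8086/write?db=mydb&consistency=quorum" \
+  --data-binary 'cpu,host=server01 value=0.64'
+```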
+
+The important thing to note is how failures are handled. In the case of failures, the database uses the hinted handoff system.
+
+### Hinted handoff
+
+Hinted handoff is how InfluxEnterprise deals with data node outages while writes are happening. Hinted handoff is essentially a durable, disk-based queue. When writing at `any`, `one`, or `quorum` consistency, hinted handoff is used when one or more replicas return an error after a success has already been returned to the client. When writing at `all` consistency, writes cannot return success unless all nodes return success. Temporarily stalled or failed writes may still go to the hinted handoff queues, but the cluster would have already returned a failure response to the write. The receiving node creates a separate queue on disk for each data node (and shard) it cannot reach.
+
+Let's again use the example of a write coming to `D` that should go to shard `1` on `A` and `B`. If we specified a consistency level of `one` and node `A` returns success, `D` immediately returns success to the client even though the write to `B` is still in progress.
+
+Now let's assume that `B` returns an error. Node `D` then puts the write into its hinted handoff queue for shard `1` on node `B`. In the background, node `D` continues to attempt to empty the hinted handoff queue by writing the data to node `B`. The configuration file has settings for the maximum size and age of data in hinted handoff queues.
+
+If a data node is restarted, it checks for pending writes in the hinted handoff queues and resumes attempts to replicate them. The important thing to note is that the hinted handoff queue is durable and does survive a process restart.
+
+When restarting nodes within an active cluster, during upgrades or maintenance, for example, other nodes in the cluster store hinted handoff writes destined for the offline node and replicate them when the node is available again. Thus, a healthy cluster should have enough resource headroom on each data node to handle the burst of hinted handoff writes following a node outage. The returning node needs to handle both the steady state traffic and the queued hinted handoff writes from other nodes, meaning its write traffic will have a significant spike following any outage of more than a few seconds, until the hinted handoff queue drains.
+
+If a node with pending hinted handoff writes for another data node receives a write destined for that node, it adds the write to the end of the hinted handoff queue rather than attempting a direct write. This ensures that data nodes receive data in mostly chronological order and prevents unnecessary connection attempts while the other node is offline.
+
+## Queries in a cluster
+
+Queries in a cluster are distributed based on the time range being queried and the replication factor of the data. For example if the retention policy has a replication factor of 4, the coordinating data node receiving the query randomly picks any of the 4 data nodes that store a replica of the shard(s) to receive the query. If we assume that the system has shard durations of one day, then for each day of time covered by a query the coordinating node selects one data node to receive the query for that day.
+
+The coordinating node executes and fulfills the query locally whenever possible. If a query must scan multiple shard groups (multiple days in our example above), the coordinating node forwards queries to other nodes for the shard(s) it does not have locally. The queries are forwarded in parallel while the coordinating node scans its own local data. The queries are distributed to as many nodes as required to query each shard group once. As the results come back from each data node, the coordinating data node combines them into the final result that gets returned to the user.
diff --git a/content/enterprise_influxdb/v1.5/concepts/glossary.md b/content/enterprise_influxdb/v1.5/concepts/glossary.md
new file mode 100644
index 000000000..7660bb5ac
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/concepts/glossary.md
@@ -0,0 +1,77 @@
+---
+title: InfluxDB Enterprise glossary of terms
+aliases:
+ - /enterprise/v1.5/concepts/glossary/
+menu:
+ enterprise_influxdb_1_5:
+ name: Glossary of terms
+ weight: 20
+ parent: Concepts
+---
+
+## data node
+
+A node that runs the data service.
+
+For high availability, installations must have at least two data nodes.
+The number of data nodes in your cluster must be the same as your highest
+replication factor.
+Any replication factor greater than two gives you additional fault tolerance and
+query capacity within the cluster.
+
+Data node sizes will depend on your needs.
+The Amazon EC2 m4.large or m4.xlarge are good starting points.
+
+Related entries: [data service](#data-service), [replication factor](#replication-factor)
+
+## data service
+
+Stores all time series data and handles all writes and queries.
+
+Related entries: [data node](#data-node)
+
+## meta node
+
+A node that runs the meta service.
+
+For high availability, installations must have three meta nodes.
+Meta nodes can be very modestly sized instances like an EC2 t2.micro or even a
+nano.
+For additional fault tolerance installations may use five meta nodes; the
+number of meta nodes must be an odd number.
+
+Related entries: [meta service](#meta-service)
+
+## meta service
+
+The consistent data store that keeps state about the cluster, including which
+servers, databases, users, continuous queries, retention policies, subscriptions,
+and blocks of time exist.
+
+Related entries: [meta node](#meta-node)
+
+## replication factor
+
+The attribute of the retention policy that determines how many copies of the
+data are stored in the cluster.
+InfluxDB replicates data across `N` data nodes, where `N` is the replication
+factor.
+
+To maintain data availability for queries, the replication factor should be less
+than or equal to the number of data nodes in the cluster:
+
+* Data is fully available when the replication factor is greater than the
+number of unavailable data nodes.
+* Data may be unavailable when the replication factor is less than the number of
+unavailable data nodes.
+
+Any replication factor greater than two gives you additional fault tolerance and
+query capacity within the cluster.
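+
+For example, the following InfluxQL statement (database and policy names are placeholders) creates a retention policy that stores two copies of the data:
+
+```
+CREATE RETENTION POLICY "two_copies" ON "mydb" DURATION 30d REPLICATION 2
+```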
+
+## web console
+
+Legacy user interface for the InfluxEnterprise cluster.
+
+The web console has been deprecated; the suggestion is to use [Chronograf](/chronograf/latest/introduction/) instead.
+
+If you are transitioning from the Enterprise Web Console to Chronograf, a helpful [transition guide](/chronograf/latest/guides/transition-web-admin-interface/) is available.
diff --git a/content/enterprise_influxdb/v1.5/features/_index.md b/content/enterprise_influxdb/v1.5/features/_index.md
new file mode 100644
index 000000000..8bf60b6a4
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/features/_index.md
@@ -0,0 +1,15 @@
+---
+title: InfluxDB Enterprise features
+aliases:
+ - /enterprise/v1.5/features/
+menu:
+ enterprise_influxdb_1_5:
+ name: Enterprise features
+ weight: 60
+---
+
+## [InfluxDB Enterprise users](/enterprise_influxdb/v1.5/features/users/)
+
+## [Clustering features](/enterprise_influxdb/v1.5/features/clustering-features/)
+
+The [Clustering features](/enterprise_influxdb/v1.5/features/clustering-features/) section covers topics important to InfluxDB Enterprise clusters.
diff --git a/content/enterprise_influxdb/v1.5/features/clustering-features.md b/content/enterprise_influxdb/v1.5/features/clustering-features.md
new file mode 100644
index 000000000..1c2a9ff16
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/features/clustering-features.md
@@ -0,0 +1,109 @@
+---
+title: InfluxDB Enterprise cluster features
+description: InfluxDB Enterprise cluster features, including entitlements, query management, subscriptions, continuous queries, conversion from InfluxDB OSS to InfluxDB Enterprise clusters, and more.
+aliases:
+ - /enterprise/v1.5/features/clustering-features/
+menu:
+ enterprise_influxdb_1_5:
+ name: Cluster features
+ weight: 20
+ parent: Enterprise features
+---
+
+## Entitlements
+
+A valid license key is required in order to start `influxd-meta` or `influxd`.
+License keys restrict the number of data nodes that can be added to a cluster as well as the number of CPU cores a data node can use.
+Without a valid license, the process will abort startup.
+
+## Query management
+
+Query management works cluster-wide. Specifically, `SHOW QUERIES` and `KILL QUERY <ID> ON "<host>"` can be run on any data node. `SHOW QUERIES` reports all queries running across the cluster and the node on which each query is running.
+`KILL QUERY` can abort queries running on the local node or any other remote data node. For details on using `SHOW QUERIES` and `KILL QUERY` on InfluxDB Enterprise clusters,
+see [Query Management](/influxdb/v1.5/troubleshooting/query_management/).
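+
+For example (the query ID and data node address are placeholders):
+
+```
+SHOW QUERIES
+KILL QUERY 36 ON "data-node-01:8088"
+```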
+
+## Subscriptions
+
+Subscriptions used by Kapacitor work in a cluster. Writes to any node will be forwarded to subscribers across all supported subscription protocols.
+
+## Continuous queries
+
+### Configuration and operational considerations on a cluster
+
+It is important to understand how to configure InfluxDB Enterprise and how this impacts the Continuous Queries (CQ) engine’s behavior:
+
+- **Data node configuration** `[continuous queries]`
+[run-interval](/enterprise_influxdb/v1.5/administration/configuration#run-interval-1s)
+-- The interval at which InfluxDB checks to see if a CQ needs to run. Set this option to the lowest interval
+at which your CQs run. For example, if your most frequent CQ runs every minute, set run-interval to 1m.
+- **Meta node configuration** `[meta]`
+[lease-duration](/enterprise_influxdb/v1.5/administration/configuration#lease-duration-1m0s)
+-- The default duration of the leases that data nodes acquire from the meta nodes. Leases automatically expire after the
+lease-duration is met. Leases ensure that only one data node is running something at a given time. For example, Continuous
+Queries use a lease so that all data nodes aren’t running the same CQs at once.
+- **Execution time of CQs** – CQs are sequentially executed. Depending on the amount of work that they need to accomplish
+in order to complete, the configuration parameters mentioned above can have an impact on the observed behavior of CQs.
+
+The CQ service is running on every node, but only a single node is granted exclusive access to execute CQs at any one time.
+However, every time the `run-interval` elapses (and assuming a node isn't currently executing CQs), a node attempts to
+acquire the CQ lease. By default the `run-interval` is one second – so the data nodes are aggressively checking to see
+if they can acquire the lease. On clusters where all CQs execute in an amount of time less than `lease-duration`
+(default is 1m), there's a good chance that the first data node to acquire the lease will still hold the lease when
+the `run-interval` elapses. Other nodes will be denied the lease and when the node holding the lease requests it again,
+the lease is renewed with the expiration extended to `lease-duration`. So in a typical situation, we observe that a
+single data node acquires the CQ lease and holds on to it. It effectively becomes the executor of CQs until it is
+recycled (for any reason).
+
+Now consider the following case: the CQs take longer to execute than the `lease-duration`. When the lease expires,
+~1 second later another data node requests and is granted the lease. The original holder of the lease is still busily
+working through the list of CQs it was originally handed, while the data node now holding the lease begins
+executing CQs from the top of the list.
+
+Based on this scenario, it may appear that CQs are “executing in parallel” because multiple data nodes are
+essentially “rolling” sequentially through the registered CQs and the lease is rolling from node to node.
+The “long pole” here is effectively your most complex CQ – and it likely means that at some point all nodes
+are attempting to execute that same complex CQ (and likely competing for resources as they overwrite points
+generated by that query on each node that is executing it --- likely with some phased offset).
+
+To avoid this behavior (which is desirable, because it reduces the overall load on your cluster),
+set the `lease-duration` to a value greater than the aggregate execution time of ALL the CQs that you are running.
+
+Based on the current way in which CQs are configured to execute, the way to address parallelism is by using
+Kapacitor for the more complex CQs that you are attempting to run.
+[See Kapacitor as a continuous query engine](/kapacitor/v1.4/guides/continuous_queries/).
+However, you can keep the simpler, highly performant CQs within the database,
+as long as the lease duration is greater than their aggregate execution time so that
+“extra” load is not unnecessarily introduced on your cluster.
+
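+As a sketch, if all of your CQs together take roughly 90 seconds to run, settings along the following lines keep the lease on a single node (the values are illustrative, not recommendations):
+
+```
+# Data node configuration (/etc/influxdb/influxdb.conf)
+[continuous_queries]
+  # Match the interval of your most frequent CQ.
+  run-interval = "1m"
+
+# Meta node configuration (/etc/influxdb/influxdb-meta.conf)
+[meta]
+  # Greater than the aggregate execution time of all CQs.
+  lease-duration = "5m0s"
+```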
+
+## `/debug/pprof` endpoints
+
+Meta nodes expose the `/debug/pprof` endpoints for profiling and troubleshooting.
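+
+For example, assuming a meta node listening on the default port and the standard Go pprof routes, you could pull a heap profile with something like:
+
+```
+curl -o heap.pprof "http://localhost:8091/debug/pprof/heap"
+```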
+
+## Shard movement
+
+* [Copy Shard](/enterprise_influxdb/v1.5/administration/cluster-commands/#copy-shard) support - copy a shard from one node to another
+* [Copy Shard Status](/enterprise_influxdb/v1.5/administration/cluster-commands/#copy-shard-status) - query the status of a copy shard request
+* [Kill Copy Shard](/enterprise_influxdb/v1.5/administration/cluster-commands/#kill-copy-shard) - kill a running shard copy
+* [Remove Shard](/enterprise_influxdb/v1.5/administration/cluster-commands/#remove-shard) - remove a shard from a node (this deletes data)
+* [Truncate Shards](/enterprise_influxdb/v1.5/administration/cluster-commands/#truncate-shards) - truncate all active shard groups and start new shards immediately (This is useful when adding nodes or changing replication factors.)
+
+This functionality is exposed via an API on the meta service and through [`influxd-ctl` sub-commands](/enterprise_influxdb/v1.5/administration/cluster-commands/).
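+
+For example, copying shard `22` between two data nodes and then checking on the copy might look like the following (the shard ID and hostnames are hypothetical):
+
+```
+influxd-ctl copy-shard enterprise-data-01:8088 enterprise-data-03:8088 22
+influxd-ctl copy-shard-status
+```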
+
+## InfluxDB OSS conversion to InfluxDB Enterprise clusters
+
+Importing an InfluxDB OSS single server as the first data node is supported.
+
+See [OSS to cluster migration](/enterprise_influxdb/v1.5/guides/migration/) for
+step-by-step instructions.
+
+## Query routing
+
+The query engine skips failed nodes that hold a shard needed for queries.
+If there is a replica on another node, it will retry on that node.
+
+## Backing up and restoring
+
+InfluxDB Enterprise clusters support backup and restore functionality.
+See [Backing up and restoring in InfluxDB Enterprise](/enterprise_influxdb/v1.5/administration/backup-and-restore/) for
+more information.
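+
+For example, a backup of a single database can be taken from a meta node with something like the following (the database name and target directory are hypothetical):
+
+```
+influxd-ctl backup -db telegraf ./telegraf-backup
+```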
diff --git a/content/enterprise_influxdb/v1.5/features/users.md b/content/enterprise_influxdb/v1.5/features/users.md
new file mode 100644
index 000000000..fd77583f8
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/features/users.md
@@ -0,0 +1,159 @@
+---
+title: Managing InfluxDB Enterprise users
+aliases:
+ - /enterprise/v1.5/features/users/
+menu:
+ enterprise_influxdb_1_5:
+ weight: 0
+ parent: Enterprise features
+---
+
+InfluxDB Enterprise users have functions that are either specific to the web
+console or specific to the cluster:
+```
+Users Cluster Permissions
+
+Penelope
+ O
+ \|/
+ | ----------------------> Dev Account --------> Manage Queries
+ / \ --------> Monitor
+ --------> Add/Remove Nodes
+Jim
+ O
+ \|/
+ | ----------------------> Marketing Account ---> View Admin
+ / \ ---> Graph Role ---> Read
+ ---> View Chronograf
+```
+
+## Cluster user information
+In the cluster, individual users are assigned to an account.
+Cluster accounts have permissions and roles.
+
+In the diagram above, Penelope is assigned to the Dev Account and
+Jim is assigned to the Marketing Account.
+The Dev Account includes the permissions to manage queries, monitor the
+cluster, and add/remove nodes from the cluster.
+The Marketing Account includes the permission to view and edit the admin screens
+as well as the Graph Role which contains the permissions to read data and
+view Chronograf.
+
+### Roles
+Roles are groups of permissions.
+A single role can belong to several cluster accounts.
+
+InfluxEnterprise clusters have two built-in roles:
+
+#### Global Admin
+
+The Global Admin role has all 16 [cluster permissions](#permissions).
+
+#### Admin
+
+The Admin role has all [cluster permissions](#permissions) except for the
+permissions to:
+
+* Add/Remove Nodes
+* Copy Shard
+* Manage Shards
+* Rebalance
+
+### Permissions
+InfluxEnterprise clusters have 16 permissions:
+
+#### View Admin
+Permission to view or edit admin screens.
+#### View Chronograf
+Permission to use Chronograf tools.
+#### Create Databases
+Permission to create databases.
+#### Create Users & Roles
+Permission to create users and roles.
+#### Add/Remove Nodes
+Permission to add/remove nodes from a cluster.
+#### Drop Databases
+Permission to drop databases.
+#### Drop Data
+Permission to drop measurements and series.
+#### Read
+Permission to read data.
+#### Write
+Permission to write data.
+#### Rebalance
+Permission to rebalance a cluster.
+#### Manage Shards
+Permission to copy and delete shards.
+#### Manage Continuous Queries
+Permission to create, show, and drop continuous queries.
+#### Manage Queries
+Permission to show and kill queries.
+#### Manage Subscriptions
+Permission to show, add, and drop subscriptions.
+#### Monitor
+Permission to show stats and diagnostics.
+#### Copy Shard
+Permission to copy shards.
+
+### Permission to Statement
+The following table describes the permissions required to execute each associated database statement.
+
+|Permission|Statement|
+|---|---|
+|CreateDatabasePermission|AlterRetentionPolicyStatement, CreateDatabaseStatement, CreateRetentionPolicyStatement, ShowRetentionPoliciesStatement|
+|ManageContinuousQueryPermission|CreateContinuousQueryStatement, DropContinuousQueryStatement, ShowContinuousQueriesStatement|
+|ManageSubscriptionPermission|CreateSubscriptionStatement, DropSubscriptionStatement, ShowSubscriptionsStatement|
+|CreateUserAndRolePermission|CreateUserStatement, DropUserStatement, GrantAdminStatement, GrantStatement, RevokeAdminStatement, RevokeStatement, SetPasswordUserStatement, ShowGrantsForUserStatement, ShowUsersStatement|
+|DropDataPermission|DeleteSeriesStatement, DeleteStatement, DropMeasurementStatement, DropSeriesStatement|
+|DropDatabasePermission|DropDatabaseStatement, DropRetentionPolicyStatement|
+|ManageShardPermission|DropShardStatement,ShowShardGroupsStatement, ShowShardsStatement|
+|ManageQueryPermission|KillQueryStatement, ShowQueriesStatement|
+|MonitorPermission|ShowDiagnosticsStatement, ShowStatsStatement|
+|ReadDataPermission|ShowFieldKeysStatement, ShowMeasurementsStatement, ShowSeriesStatement, ShowTagKeysStatement, ShowTagValuesStatement, ShowRetentionPoliciesStatement|
+|NoPermissions|ShowDatabasesStatement|
+|Determined by type of select statement|SelectStatement|
+
+### Statement to Permission
+The following table describes database statements and the permissions required to execute them. It also describes whether these permissions apply just to InfluxDB (Database) or InfluxEnterprise (Cluster).
+
+|Statement|Permissions|Scope|
+|---|---|---|
+|AlterRetentionPolicyStatement|CreateDatabasePermission|Database|
+|CreateContinuousQueryStatement|ManageContinuousQueryPermission|Database|
+|CreateDatabaseStatement|CreateDatabasePermission|Cluster|
+|CreateRetentionPolicyStatement|CreateDatabasePermission|Database|
+|CreateSubscriptionStatement|ManageSubscriptionPermission|Database|
+|CreateUserStatement|CreateUserAndRolePermission|Database|
+|DeleteSeriesStatement|DropDataPermission|Database|
+|DeleteStatement|DropDataPermission|Database|
+|DropContinuousQueryStatement|ManageContinuousQueryPermission|Database|
+|DropDatabaseStatement|DropDatabasePermission|Cluster|
+|DropMeasurementStatement|DropDataPermission|Database|
+|DropRetentionPolicyStatement|DropDatabasePermission|Database|
+|DropSeriesStatement|DropDataPermission|Database|
+|DropShardStatement|ManageShardPermission|Cluster|
+|DropSubscriptionStatement|ManageSubscriptionPermission|Database|
+|DropUserStatement|CreateUserAndRolePermission|Database|
+|GrantAdminStatement|CreateUserAndRolePermission|Database|
+|GrantStatement|CreateUserAndRolePermission|Database|
+|KillQueryStatement|ManageQueryPermission|Database|
+|RevokeAdminStatement|CreateUserAndRolePermission|Database|
+|RevokeStatement|CreateUserAndRolePermission|Database|
+|SelectStatement|Determined by type of select statement|n/a|
+|SetPasswordUserStatement|CreateUserAndRolePermission|Database|
+|ShowContinuousQueriesStatement|ManageContinuousQueryPermission|Database|
+|ShowDatabasesStatement|NoPermissions|Cluster (the user's grants determine which databases are returned in the results)|
+|ShowDiagnosticsStatement|MonitorPermission|Database|
+|ShowFieldKeysStatement|ReadDataPermission|Database|
+|ShowGrantsForUserStatement|CreateUserAndRolePermission|Database|
+|ShowMeasurementsStatement|ReadDataPermission|Database|
+|ShowQueriesStatement|ManageQueryPermission|Database|
+|ShowRetentionPoliciesStatement|CreateDatabasePermission|Database|
+|ShowSeriesStatement|ReadDataPermission|Database|
+|ShowShardGroupsStatement|ManageShardPermission|Cluster|
+|ShowShardsStatement|ManageShardPermission|Cluster|
+|ShowStatsStatement|MonitorPermission|Database|
+|ShowSubscriptionsStatement|ManageSubscriptionPermission|Database|
+|ShowTagKeysStatement|ReadDataPermission|Database|
+|ShowTagValuesStatement|ReadDataPermission|Database|
+|ShowUsersStatement|CreateUserAndRolePermission|Database|
diff --git a/content/enterprise_influxdb/v1.5/guides/_index.md b/content/enterprise_influxdb/v1.5/guides/_index.md
new file mode 100644
index 000000000..1c527a8cc
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/guides/_index.md
@@ -0,0 +1,15 @@
+---
+title: InfluxDB Enterprise guides and tutorials
+aliases:
+ - /enterprise/v1.5/guides/
+menu:
+ enterprise_influxdb_1_5:
+ name: Guides
+ weight: 60
+---
+## [Anti-entropy service in InfluxDB Enterprise](/enterprise_influxdb/v1.5/guides/anti-entropy/)
+## [Backing up and restoring in InfluxDB Enterprise](/enterprise_influxdb/v1.5/administration/backup-and-restore/)
+## [Fine-grained authorization in InfluxDB Enterprise](/enterprise_influxdb/v1.5/guides/fine-grained-authorization/)
+## [Migrating InfluxDB OSS instances to InfluxDB Enterprise clusters](/enterprise_influxdb/v1.5/guides/migration/)
+## [Rebalancing InfluxDB Enterprise clusters](/enterprise_influxdb/v1.5/guides/rebalance/)
+## [SMTP server setup](/enterprise_influxdb/v1.5/guides/smtp-server/)
diff --git a/content/enterprise_influxdb/v1.5/guides/fine-grained-authorization.md b/content/enterprise_influxdb/v1.5/guides/fine-grained-authorization.md
new file mode 100644
index 000000000..7cda24c73
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/guides/fine-grained-authorization.md
@@ -0,0 +1,393 @@
+---
+title: Fine-grained authorization in InfluxDB Enterprise
+aliases:
+ - /docs/v1.5/administration/fga
+menu:
+ enterprise_influxdb_1_5:
+ name: Fine-grained authorization
+ weight: 10
+ parent: Guides
+---
+
+## Controlling access to data with InfluxDB Enterprise's fine-grained authorization
+
+In InfluxDB OSS, access control operates only at a database level.
+In InfluxDB Enterprise, fine-grained authorization can be used to control access at a measurement or series level.
+
+### Concepts
+
+To use fine-grained authorization (FGA), you must first [enable authentication](/influxdb/v1.5/query_language/authentication_and_authorization/#set-up-authentication) in your configuration file.
+Then the admin user needs to create users through the query API and grant those users explicit read and/or write privileges per database.
+So far, this is the same as how you would configure authorization on an InfluxDB OSS instance.
+
+To continue setting up fine-grained authorization, the admin user must first set _restrictions_ which define a combination of database, measurement, and tags which cannot be accessed without an explicit _grant_.
+A _grant_ enables access to series that were previously restricted.
+
+Restrictions limit access to the series that match the database, measurement, and tags specified.
+The different access permissions (currently just "read" and "write") can be restricted independently depending on the scenario.
+Grants will allow access, according to the listed permissions, to restricted series for the users and roles specified.
+Users are the same as the users created in InfluxQL, and [roles](/enterprise_influxdb/v1.5/features/users/#cluster-user-information), an InfluxDB Enterprise feature, are created separately through the Meta HTTP API.
+
+### Modifying grants and restrictions
+
+To configure FGA, you will need access to the meta nodes' HTTP ports (which run on port 8091 by default).
+Note that in a typical cluster configuration, the data nodes' HTTP ports (8086 by default) are exposed to clients but the meta nodes' HTTP ports are not.
+You may need to work with your network administrator to gain access to the meta nodes' HTTP ports.
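+
+As a quick check that you can reach a meta node's HTTP port before configuring FGA, a request like the following should return a response (the hostname is hypothetical; add `https` and `-k` if TLS with a self-signed certificate is enabled):
+
+```
+curl -s -L "http://enterprise-meta-01:8091/status"
+```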
+
+### Scenario: partitioning access within a single measurement via users
+
+We'll assume a schema of a database named `datacenters`, one measurement named `network` with a tag of `dc=east` or `dc=west`, and two fields, `bytes_in` and `bytes_out`.
+Suppose you want to make sure that the client in the east datacenter can't read or write the west datacenter's metrics, and vice versa.
+
+First, as an administrator, you would create the database and users and standard grants with InfluxQL queries:
+
+```
+CREATE DATABASE datacenters
+
+CREATE USER east WITH PASSWORD 'east'
+GRANT ALL ON datacenters TO east
+
+CREATE USER west WITH PASSWORD 'west'
+GRANT ALL ON datacenters TO west
+```
+
+At this point, the east and west users have unrestricted read and write access to the `datacenters` database.
+We'll need to decide what restrictions to apply in order to limit their access.
+
+#### Restrictions
+
+##### Restriction option 1: the entire database
+
+Restricting the entire database is a simple option, and in most cases it is the simplest option to reason about.
+Moreover, because this is a very general restriction, it will have minimal impact on performance.
+
+Assuming the meta node is running its HTTP service on localhost on the default port, you can run
+
+```
+curl -L -XPOST "http://localhost:8091/influxdb/v2/acl/restrictions" \
+ -H "Content-Type: application/json" \
+ --data-binary '{
+ "database": {"match": "exact", "value": "datacenters"},
+ "permissions": ["read", "write"]
+ }'
+```
+
+After applying this restriction and before applying any grants, the east and west users will not be authorized to read from or write to the database.
+
+##### Restriction option 2: one measurement within the database
+
+Restricting a single measurement will disallow reads and writes within that measurement, but access to other measurements within the database will be decided by standard permissions.
+
+```
+curl -L -XPOST "http://localhost:8091/influxdb/v2/acl/restrictions" \
+ -H "Content-Type: application/json" \
+ --data-binary '{
+ "database": {"match": "exact", "value": "datacenters"},
+ "measurement": {"match": "exact", "value": "network"},
+ "permissions": ["read", "write"]
+ }'
+```
+
+Compared to the previous approach of restricting the entire database, this only restricts access to the measurement `network`.
+In this state, the east and west users are free to read from and write to any measurement in the database `datacenters` besides `network`.
+
+##### Restriction option 3: specific series in a database
+
+The most fine-grained restriction option is to restrict specific tags in a measurement and database.
+
+```
+for region in east west; do
+ curl -L -XPOST "http://localhost:8091/influxdb/v2/acl/restrictions" \
+ -H "Content-Type: application/json" \
+ --data-binary '{
+ "database": {"match": "exact", "value": "datacenters"},
+ "measurement": {"match": "exact", "value": "network"},
+ "tags": [{"match": "exact", "key": "dc", "value": "'$region'"}],
+ "permissions": ["read", "write"]
+ }'
+done
+```
+
+This configuration would allow reads and writes from any measurement in `datacenters`; and when the measurement is `network`, it would only restrict when there is a tag `dc=east` or `dc=west`.
+This is probably not what you want, as it would allow writes to `network` without tags or writes to `network` with a tag key of `dc` and a tag value of anything but `east` or `west`.
+
+##### Restriction summary
+
+These options were simple matchers on exact patterns.
+Remember that you will achieve the best performance by having few, broad restrictions as opposed to many narrow restrictions.
+
+We only used the matcher `exact` above, but you can also match with `prefix` if you want to restrict based on a common prefix on your database, measurements, or tags.
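+
+Assuming the ACL endpoints also accept `GET`, you can list the restrictions currently in place to double-check your configuration; a request along these lines should return them:
+
+```
+curl -L -XGET "http://localhost:8091/influxdb/v2/acl/restrictions"
+```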
+
+#### Grants
+
+Now that you've applied your restrictions that apply to all users, you must apply grants to allow selected users to bypass the restrictions.
+The structure of a POST body for a grant is identical to the POST body for a restriction, but with the addition of a `users` array.
+
+##### Grant option 1: the entire database
+
+This offers no guarantee that the users will write to the correct measurement or use the correct tags.
+
+```
+curl -s -L -XPOST "http://localhost:8091/influxdb/v2/acl/grants" \
+ -H "Content-Type: application/json" \
+ --data-binary '{
+ "database": {"match": "exact", "value": "datacenters"},
+ "permissions": ["read", "write"],
+ "users": [{"name": "east"}, {"name": "west"}]
+ }'
+```
+
+##### Grant option 2: one measurement within the database
+
+This guarantees that the users will only have access to the `network` measurement but it still does not guarantee that they will use the correct tags.
+
+```
+curl -s -L -XPOST "http://localhost:8091/influxdb/v2/acl/grants" \
+ -H "Content-Type: application/json" \
+ --data-binary '{
+ "database": {"match": "exact", "value": "datacenters"},
+ "measurement": {"match": "exact", "value": "network"},
+ "permissions": ["read", "write"],
+ "users": [{"name": "east"}, {"name": "west"}]
+ }'
+```
+
+##### Grant option 3: specific tags on a database
+
+This guarantees that the users will only have access to data with the corresponding `dc` tag but it does not guarantee that they will use the `network` measurement.
+
+```
+curl -s -L -XPOST "http://localhost:8091/influxdb/v2/acl/grants" \
+ -H "Content-Type: application/json" \
+ --data-binary '{
+ "database": {"match": "exact", "value": "datacenters"},
+ "tags": [{"match": "exact", "key": "dc", "value": "east"}],
+ "permissions": ["read", "write"],
+ "users": [{"name": "east"}]
+ }'
+curl -s -L -XPOST "http://localhost:8091/influxdb/v2/acl/grants" \
+ -H "Content-Type: application/json" \
+ --data-binary '{
+ "database": {"match": "exact", "value": "datacenters"},
+ "tags": [{"match": "exact", "key": "dc", "value": "west"}],
+ "permissions": ["read", "write"],
+ "users": [{"name": "west"}]
+ }'
+```
+
+##### Grant option 4: specific series within the database
+
+To guarantee that both users only have access to the `network` measurement and that the east user uses the tag `dc=east` and the west user uses the tag `dc=west`, we need to make two separate grant calls:
+
+```
+curl -s -L -XPOST "http://localhost:8091/influxdb/v2/acl/grants" \
+ -H "Content-Type: application/json" \
+ --data-binary '{
+ "database": {"match": "exact", "value": "datacenters"},
+ "measurement": {"match": "exact", "value": "network"},
+ "tags": [{"match": "exact", "key": "dc", "value": "east"}],
+ "permissions": ["read", "write"],
+ "users": [{"name": "east"}]
+ }'
+curl -s -L -XPOST "http://localhost:8091/influxdb/v2/acl/grants" \
+ -H "Content-Type: application/json" \
+ --data-binary '{
+ "database": {"match": "exact", "value": "datacenters"},
+ "measurement": {"match": "exact", "value": "network"},
+ "tags": [{"match": "exact", "key": "dc", "value": "west"}],
+ "permissions": ["read", "write"],
+ "users": [{"name": "west"}]
+ }'
+```
+
+Now, when the east user writes to the `network` measurement, the write must include the tag `dc=east`, and when the west user writes to `network`, the write must include the tag `dc=west`.
+Note that this is only the requirement of the presence of that tag; `dc=east,foo=bar` will also be accepted.
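+
+For example, with these grants in place, a write like the following from the `east` user in the [CLI](/influxdb/v1.5/tools/shell/) would be accepted, while the same point written by the `west` user would be rejected (the field values are illustrative):
+
+```
+> INSERT network,dc=east bytes_in=100,bytes_out=230
+```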
+
+### Scenario: partitioning access via roles
+
+Suppose that we have many individuals who need to write to our `datacenters` database in the previous example.
+We wouldn't want them to all share one set of login credentials.
+We can instead use _roles_, which associate a set of users with a set of permissions.
+
+We'll assume that we now have many users on the east and west teams, and we'll have an `ops` user who needs full access to data from both the east and west datacenters.
+We will only create one user each for east and west, but the process would be the same for any number of users.
+
+First we will set up the users.
+
+```
+CREATE DATABASE datacenters
+
+CREATE USER e001 WITH PASSWORD 'e001'
+CREATE USER w001 WITH PASSWORD 'w001'
+CREATE USER ops WITH PASSWORD 'ops'
+```
+
+#### Creating the roles
+
+We want one role for full access to any point in `datacenters` with the tag `dc=east` and another role for the tag `dc=west`.
+
+First, we initialize the roles.
+
+```
+curl -s -L -XPOST "http://localhost:8091/role" \
+ -H "Content-Type: application/json" \
+ --data-binary '{
+ "action": "create",
+ "role": {
+ "name": "east"
+ }
+ }'
+curl -s -L -XPOST "http://localhost:8091/role" \
+ -H "Content-Type: application/json" \
+ --data-binary '{
+ "action": "create",
+ "role": {
+ "name": "west"
+ }
+ }'
+```
+
+Next, let's specify that anyone belonging to those roles has general read and write access to the `datacenters` database.
+
+```
+curl -s -L -XPOST "http://localhost:8091/role" \
+ -H "Content-Type: application/json" \
+ --data-binary '{
+ "action": "add-permissions",
+ "role": {
+ "name": "east",
+ "permissions": {
+ "datacenters": ["ReadData", "WriteData"]
+ }
+ }
+ }'
+
+curl -s -L -XPOST "http://localhost:8091/role" \
+ -H "Content-Type: application/json" \
+ --data-binary '{
+ "action": "add-permissions",
+ "role": {
+ "name": "west",
+ "permissions": {
+ "datacenters": ["ReadData", "WriteData"]
+ }
+ }
+ }'
+```
+
+Next, we need to associate users to the roles.
+The `east` role gets the user from the east team, the `west` role gets the user from the west team, and both roles get the `ops` user.
+
+```
+curl -s -L -XPOST "http://localhost:8091/role" \
+ -H "Content-Type: application/json" \
+ --data-binary '{
+ "action": "add-users",
+ "role": {
+ "name": "east",
+ "users": ["e001", "ops"]
+ }
+ }'
+curl -s -L -XPOST "http://localhost:8091/role" \
+ -H "Content-Type: application/json" \
+ --data-binary '{
+ "action": "add-users",
+ "role": {
+ "name": "west",
+ "users": ["w001", "ops"]
+ }
+ }'
+```
+
+#### Restrictions
+
+Please refer to the previous scenario for directions on how to set up restrictions.
+
+#### Grants and roles
+
+Grants for a role function the same as grants for a user.
+Instead of using the key `users` to refer to users, use the key `roles` to refer to roles.
+
+##### Grant option 1: the entire database
+
+This offers no guarantee that the users in the roles will write to the correct measurement or use the correct tags.
+
+```
+curl -s -L -XPOST "http://localhost:8091/influxdb/v2/acl/grants" \
+ -H "Content-Type: application/json" \
+ --data-binary '{
+ "database": {"match": "exact", "value": "datacenters"},
+ "permissions": ["read", "write"],
+ "roles": [{"name": "east"}, {"name": "west"}]
+ }'
+```
+
+##### Grant option 2: one measurement within the database
+
+This guarantees that the users in the roles will only have access to the `network` measurement but it still does not guarantee that they will use the correct tags.
+
+```
+curl -s -L -XPOST "http://localhost:8091/influxdb/v2/acl/grants" \
+ -H "Content-Type: application/json" \
+ --data-binary '{
+ "database": {"match": "exact", "value": "datacenters"},
+ "measurement": {"match": "exact", "value": "network"},
+ "permissions": ["read", "write"],
+ "roles": [{"name": "east"}, {"name": "west"}]
+ }'
+```
+
+##### Grant option 3: specific tags on a database
+
+This guarantees that the users in the roles will only have access to data with the corresponding `dc` tag.
+They will have access to any measurement in the `datacenters` database.
+
+```
+curl -s -L -XPOST "http://localhost:8091/influxdb/v2/acl/grants" \
+ -H "Content-Type: application/json" \
+ --data-binary '{
+ "database": {"match": "exact", "value": "datacenters"},
+ "tags": [{"match": "exact", "key": "dc", "value": "east"}],
+ "permissions": ["read", "write"],
+ "roles": [{"name": "east"}]
+ }'
+curl -s -L -XPOST "http://localhost:8091/influxdb/v2/acl/grants" \
+ -H "Content-Type: application/json" \
+ --data-binary '{
+ "database": {"match": "exact", "value": "datacenters"},
+ "tags": [{"match": "exact", "key": "dc", "value": "west"}],
+ "permissions": ["read", "write"],
+ "roles": [{"name": "west"}]
+ }'
+```
+
+##### Grant option 4: specific series within the database
+
+To guarantee that both roles only have access to the `network` measurement and that the east user uses the tag `dc=east` and the west user uses the tag `dc=west`, we need to make two separate grant calls:
+
+```
+curl -s -L -XPOST "http://localhost:8091/influxdb/v2/acl/grants" \
+ -H "Content-Type: application/json" \
+ --data-binary '{
+ "database": {"match": "exact", "value": "datacenters"},
+ "measurement": {"match": "exact", "value": "network"},
+ "tags": [{"match": "exact", "key": "dc", "value": "east"}],
+ "permissions": ["read", "write"],
+ "roles": [{"name": "east"}]
+ }'
+curl -s -L -XPOST "http://localhost:8091/influxdb/v2/acl/grants" \
+ -H "Content-Type: application/json" \
+ --data-binary '{
+ "database": {"match": "exact", "value": "datacenters"},
+ "measurement": {"match": "exact", "value": "network"},
+ "tags": [{"match": "exact", "key": "dc", "value": "west"}],
+ "permissions": ["read", "write"],
+ "roles": [{"name": "west"}]
+ }'
+```
+
+Now, when a user in the east role writes to the `network` measurement, the write must include the tag `dc=east`, and when a user in the west role writes to `network`, the write must include the tag `dc=west`.
+Note that this is only the requirement of the presence of that tag; `dc=east,foo=bar` will also be accepted.
+
+If a user is in both the east and west roles, they must write points with either `dc=east` or `dc=west`.
+When they query data, they will be able to read points tagged with `dc=east` or `dc=west`.
diff --git a/content/enterprise_influxdb/v1.5/guides/https_setup.md b/content/enterprise_influxdb/v1.5/guides/https_setup.md
new file mode 100644
index 000000000..7424c85b2
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/guides/https_setup.md
@@ -0,0 +1,367 @@
+---
+title: Enabling HTTPS for InfluxDB Enterprise
+menu:
+ enterprise_influxdb_1_5:
+ name: Enabling HTTPS
+ weight: 100
+ parent: Guides
+---
+
+This guide describes how to enable HTTPS for InfluxDB Enterprise.
+Setting up HTTPS secures the communication between clients and the InfluxEnterprise server
+and, in some cases, verifies the authenticity of the InfluxEnterprise server to clients.
+
+If you plan on sending requests to InfluxEnterprise over a network, we
+[strongly recommend](/enterprise_influxdb/v1.5/administration/security/)
+that you set up HTTPS.
+
+## Requirements
+
+To set up HTTPS with InfluxEnterprise, you'll need an existing or new InfluxEnterprise instance
+and a Transport Layer Security (TLS) certificate (also known as a Secure Sockets Layer (SSL) certificate).
+InfluxEnterprise supports three types of TLS/SSL certificates:
+
+* **Single domain certificates signed by a [Certificate Authority](https://en.wikipedia.org/wiki/Certificate_authority)**
+
+ These certificates provide cryptographic security to HTTPS requests and allow clients to verify the identity of the InfluxEnterprise server.
+ With this certificate option, every InfluxEnterprise instance requires a unique single domain certificate.
+
+* **Wildcard certificates signed by a Certificate Authority**
+
+ These certificates provide cryptographic security to HTTPS requests and allow clients to verify the identity of the InfluxDB server.
+ Wildcard certificates can be used across multiple InfluxEnterprise instances on different servers.
+
+* **Self-signed certificates**
+
+ Self-signed certificates are not signed by a CA and you can [generate](#step-1-generate-a-self-signed-certificate) them on your own machine.
+ Unlike CA-signed certificates, self-signed certificates only provide cryptographic security to HTTPS requests.
+ They do not allow clients to verify the identity of the InfluxDB server.
+ We recommend using a self-signed certificate if you are unable to obtain a CA-signed certificate.
+ With this certificate option, every InfluxEnterprise instance requires a unique self-signed certificate.
+
+Regardless of your certificate's type, InfluxEnterprise supports certificates composed of
+a private key file (`.key`) and a signed certificate file (`.crt`) pair, as well as certificates
+that combine the private key file and the signed certificate file into a single bundled file (`.pem`).
+
+The following two sections outline how to set up HTTPS with InfluxEnterprise [using a CA-signed
+certificate](#setup-https-with-a-ca-signed-certificate) and [using a self-signed certificate](#setup-https-with-a-self-signed-certificate)
+on Ubuntu 16.04.
+Specific steps may be different for other operating systems.
+
+## Setup HTTPS with a CA-Signed Certificate
+
+#### Step 1: Install the SSL/TLS certificate in each Data Node
+
+Place the private key file (`.key`) and the signed certificate file (`.crt`)
+or the single bundled file (`.pem`) in the `/etc/ssl` directory of each Data Node.
+
+#### Step 2: Ensure file permissions for each Data Node
+Certificate files require read and write access by the `root` user.
+Ensure that you have the correct file permissions in each Data Node by running the following
+commands:
+
+```
+sudo chown root:root /etc/ssl/<CA-certificate-file>
+sudo chmod 644 /etc/ssl/<CA-certificate-file>
+sudo chmod 600 /etc/ssl/<private-key-file>
+```
+#### Step 3: Enable HTTPS within the configuration file for each Meta Node
+
+HTTPS is disabled by default.
+Enable HTTPS for each Meta Node within the `[meta]` section of the configuration file (`/etc/influxdb/influxdb-meta.conf`) by setting:
+
+* `https-enabled` to `true`
+* `https-certificate` to `/etc/ssl/<signed-certificate-file>.crt` (or to `/etc/ssl/<bundled-certificate-file>.pem`)
+* `https-private-key` to `/etc/ssl/<private-key-file>.key` (or to `/etc/ssl/<bundled-certificate-file>.pem`)
+
+```
+[meta]
+
+ [...]
+
+ # Determines whether HTTPS is enabled.
+ https-enabled = true
+
+ [...]
+
+ # The SSL certificate to use when HTTPS is enabled.
+ https-certificate = "<bundled-certificate-file>.pem"
+
+ # Use a separate private key location.
+ https-private-key = "<bundled-certificate-file>.pem"
+```
+
+#### Step 4: Enable HTTPS within the configuration file for each Data Node
+
+HTTPS is disabled by default. There are two sets of configuration changes required.
+
+First, enable HTTPS for each Data Node within the `[http]` section of the configuration file (`/etc/influxdb/influxdb.conf`) by setting:
+
+* `https-enabled` to `true`
+* `https-certificate` to `/etc/ssl/<signed-certificate-file>.crt` (or to `/etc/ssl/<bundled-certificate-file>.pem`)
+* `https-private-key` to `/etc/ssl/<private-key-file>.key` (or to `/etc/ssl/<bundled-certificate-file>.pem`)
+
+```
+[http]
+
+ [...]
+
+ # Determines whether HTTPS is enabled.
+ https-enabled = true
+
+ [...]
+
+ # The SSL certificate to use when HTTPS is enabled.
+ https-certificate = "<bundled-certificate-file>.pem"
+
+ # Use a separate private key location.
+ https-private-key = "<bundled-certificate-file>.pem"
+```
+
+Second, configure the Data Nodes to use HTTPS when communicating with the Meta Nodes within the `[meta]` section of the configuration file (`/etc/influxdb/influxdb.conf`) by setting:
+
+* `meta-tls-enabled` to `true`
+
+```
+[meta]
+
+ [...]
+ meta-tls-enabled = true
+```
+
+#### Step 5: Restart InfluxEnterprise
+
+Restart the InfluxEnterprise meta node processes for the configuration changes to take effect:
+```
+sudo systemctl restart influxdb-meta
+```
+
+Restart the InfluxEnterprise data node processes for the configuration changes to take effect:
+```
+sudo systemctl restart influxdb
+```
+
+#### Step 6: Verify the HTTPS Setup
+
+Verify that HTTPS is working on the meta nodes by using `influxd-ctl`.
+```
+influxd-ctl -bind-tls show
+```
+{{% warn %}}
+ Once you have enabled HTTPS, you MUST use `-bind-tls` in order for influxd-ctl to connect to the meta node.
+{{% /warn %}}
+
+A successful connection returns output which should resemble the following:
+```
+Data Nodes
+==========
+ID TCP Address Version
+4 enterprise-data-01:8088 1.x.y-c1.x.y
+5 enterprise-data-02:8088 1.x.y-c1.x.y
+
+Meta Nodes
+==========
+TCP Address Version
+enterprise-meta-01:8091 1.x.y-c1.x.z
+enterprise-meta-02:8091 1.x.y-c1.x.z
+enterprise-meta-03:8091 1.x.y-c1.x.z
+```
+
+
+Next, verify that HTTPS is working by connecting to InfluxEnterprise with the [CLI tool](/influxdb/v1.5/tools/shell/):
+```
+influx -ssl -host <domain_name>.com
+```
+
+A successful connection returns the following:
+```
+Connected to https://<domain_name>.com:8086 version 1.x.y
+InfluxDB shell version: 1.x.y
+>
+```
+
+That's it! You've successfully set up HTTPS with InfluxEnterprise.
+
+## Setup HTTPS with a Self-Signed Certificate
+
+#### Step 1: Generate a self-signed certificate
+
+The following command generates a private key file (`.key`) and a self-signed
+certificate file (`.crt`) which remain valid for the specified `NUMBER_OF_DAYS`.
+It outputs those files to InfluxEnterprise's default certificate file paths and gives them
+the required permissions.
+
+```
+sudo openssl req -x509 -nodes -newkey rsa:2048 -keyout /etc/ssl/influxdb-selfsigned.key -out /etc/ssl/influxdb-selfsigned.crt -days <NUMBER_OF_DAYS>
+```
+
+When you execute the command, it will prompt you for more information.
+You can choose to fill out that information or leave it blank;
+both actions generate valid certificate files.
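+
+Optionally, you can confirm the subject and validity window of the generated certificate before continuing:
+
+```
+openssl x509 -in /etc/ssl/influxdb-selfsigned.crt -noout -subject -dates
+```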
+
+#### Step 2: Enable HTTPS within the configuration file for each Meta Node
+
+HTTPS is disabled by default.
+Enable HTTPS for each Meta Node within the `[meta]` section of the configuration file (`/etc/influxdb/influxdb-meta.conf`) by setting:
+
+* `https-enabled` to `true`
+* `https-certificate` to `/etc/ssl/influxdb-selfsigned.crt`
+* `https-private-key` to `/etc/ssl/influxdb-selfsigned.key`
+* `https-insecure-tls` to `true` to indicate a self-signed key
+
+
+```
+[meta]
+
+ [...]
+
+ # Determines whether HTTPS is enabled.
+ https-enabled = true
+
+ [...]
+
+ # The SSL certificate to use when HTTPS is enabled.
+ https-certificate = "/etc/ssl/influxdb-selfsigned.crt"
+
+ # Use a separate private key location.
+ https-private-key = "/etc/ssl/influxdb-selfsigned.key"
+
+ # For self-signed key
+ https-insecure-tls = true
+```
+
+#### Step 3: Enable HTTPS within the configuration file for each Data Node
+
+HTTPS is disabled by default. There are two sets of configuration changes required.
+
+First, enable HTTPS for each Data Node within the `[http]` section of the configuration file (`/etc/influxdb/influxdb.conf`) by setting:
+
+* `https-enabled` to `true`
+* `https-certificate` to `/etc/ssl/influxdb-selfsigned.crt`
+* `https-private-key` to `/etc/ssl/influxdb-selfsigned.key`
+
+```
+[http]
+
+ [...]
+
+ # Determines whether HTTPS is enabled.
+ https-enabled = true
+
+ [...]
+
+ # The SSL certificate to use when HTTPS is enabled.
+ https-certificate = "/etc/ssl/influxdb-selfsigned.crt"
+
+ # Use a separate private key location.
+ https-private-key = "/etc/ssl/influxdb-selfsigned.key"
+```
+
+Second, configure the Data Nodes to use HTTPS when communicating with the Meta Nodes within the `[meta]` section of the configuration file (`/etc/influxdb/influxdb.conf`) by setting:
+
+* `meta-tls-enabled` to `true`
+* `meta-insecure-tls` to `true` to indicate a self-signed key
+
+```
+[meta]
+
+ [...]
+ meta-tls-enabled = true
+
+ #for self-signed key
+ meta-insecure-tls = true
+```
+
+#### Step 4: Restart InfluxEnterprise
+
+Restart the InfluxEnterprise meta node processes for the configuration changes to take effect:
+```
+sudo systemctl restart influxdb-meta
+```
+
+Restart the InfluxEnterprise data node processes for the configuration changes to take effect:
+```
+sudo systemctl restart influxdb
+```
+
+#### Step 5: Verify the HTTPS Setup
+
+Verify that HTTPS is working on the meta nodes by using `influxd-ctl`.
+```
+influxd-ctl -bind-tls -k show
+```
+{{% warn %}}
+ Once you have enabled HTTPS, you MUST use `-bind-tls` in order for influxd-ctl to connect to the meta node. Because the cert is self-signed, you MUST also use the `-k` option. This skips certificate verification.
+{{% /warn %}}
+
+A successful connection returns output which should resemble the following:
+```
+Data Nodes
+==========
+ID TCP Address Version
+4 enterprise-data-01:8088 1.x.y-c1.x.y
+5 enterprise-data-02:8088 1.x.y-c1.x.y
+
+Meta Nodes
+==========
+TCP Address Version
+enterprise-meta-01:8091 1.x.y-c1.x.z
+enterprise-meta-02:8091 1.x.y-c1.x.z
+enterprise-meta-03:8091 1.x.y-c1.x.z
+```
+
+
+Next, verify that HTTPS is working by connecting to InfluxEnterprise with the [CLI tool](/influxdb/v1.5/tools/shell/):
+```
+influx -ssl -unsafeSsl -host <domain_name>.com
+```
+
+A successful connection returns the following:
+```
+Connected to https://<domain_name>.com:8086 version 1.x.y
+InfluxDB shell version: 1.x.y
+>
+```
+
+That's it! You've successfully set up HTTPS with InfluxEnterprise.
+
+
+## Connect Telegraf to a secured InfluxEnterprise instance
+
+Connecting [Telegraf](/telegraf/v1.5/) to an InfluxEnterprise instance that's using
+HTTPS requires some additional steps.
+
+In Telegraf's configuration file (`/etc/telegraf/telegraf.conf`), under the OUTPUT PLUGINS section, edit the `urls`
+setting to indicate `https` instead of `http` and change `localhost` to the
+relevant domain name.
+> The best practice in terms of security is to transfer the certificate to the client and make it trusted (for example, by putting it in the OS certificate store or using the `ssl_ca` option). The alternative is to sign the certificate using an internal CA and then trust the CA certificate.
+
+
+If you're using a self-signed certificate, uncomment the `insecure_skip_verify`
+setting and set it to `true`.
+```
+ ###############################################################################
+ # OUTPUT PLUGINS #
+ ###############################################################################
+
+ # Configuration for influxdb server to send metrics to
+ [[outputs.influxdb]]
+ ## The full HTTP or UDP endpoint URL for your InfluxEnterprise instance.
+ ## Multiple urls can be specified as part of the same cluster,
+ ## this means that only ONE of the urls will be written to each interval.
+ # urls = ["udp://localhost:8089"] # UDP endpoint example
+ urls = ["https://<domain_name>.com:8086"]
+
+ [...]
+
+ ## Optional SSL Config
+ [...]
+ insecure_skip_verify = true # <-- Update only if you're using a self-signed certificate
+```
+
+Next, restart Telegraf and you're all set!
+```
+sudo systemctl restart telegraf
+```
diff --git a/content/enterprise_influxdb/v1.5/guides/migration.md b/content/enterprise_influxdb/v1.5/guides/migration.md
new file mode 100644
index 000000000..15c96b6f4
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/guides/migration.md
@@ -0,0 +1,214 @@
+---
+title: Migrating InfluxDB OSS instances to InfluxDB Enterprise clusters
+aliases:
+ - /enterprise/v1.5/guides/migration/
+menu:
+ enterprise_influxdb_1_5:
+ name: Migrating InfluxDB OSS to clusters
+ weight: 10
+ parent: Guides
+---
+
+{{% warn %}}
+## IMPORTANT
+Due to a known issue in InfluxDB, attempts to upgrade an InfluxDB OSS instance to
+InfluxDB Enterprise will fail.
+A fix is in place and will be released with InfluxDB v1.7.10.
+Until InfluxDB v1.7.10 is released, **DO NOT** attempt to migrate an InfluxDB OSS
+instance to InfluxDB Enterprise by following the steps in this guide.
+
+We will update this guide to reflect the new upgrade process after the release of InfluxDB 1.7.10.
+{{% /warn %}}
+
+
diff --git a/content/enterprise_influxdb/v1.5/guides/rebalance.md b/content/enterprise_influxdb/v1.5/guides/rebalance.md
new file mode 100644
index 000000000..5efcbcde6
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/guides/rebalance.md
@@ -0,0 +1,416 @@
+---
+title: Rebalancing InfluxDB Enterprise clusters
+aliases:
+ - /enterprise/v1.5/guides/rebalance/
+menu:
+ enterprise_influxdb_1_5:
+ name: Rebalancing clusters
+ weight: 19
+ parent: Guides
+---
+
+## Introduction
+
+This guide describes how to manually rebalance an InfluxDB Enterprise cluster.
+Rebalancing a cluster involves two primary goals:
+
+* Evenly distribute
+[shards](/influxdb/v1.5/concepts/glossary/#shard) across all data nodes in the
+cluster
+* Ensure that every
+shard is on N number of nodes, where N is determined by the retention policy's
+[replication factor](/influxdb/v1.5/concepts/glossary/#replication-factor)
+
+Rebalancing a cluster is essential for cluster health.
+Perform a rebalance if you add a new data node to your cluster.
+The proper rebalance path depends on the purpose of the new data node.
+If you added a data node to expand the disk size of the cluster or increase
+write throughput, follow the steps in
+[Rebalance Procedure 1](#rebalance-procedure-1-rebalance-a-cluster-to-create-space).
+If you added a data node to increase data availability for queries and query
+throughput, follow the steps in
+[Rebalance Procedure 2](#rebalance-procedure-2-rebalance-a-cluster-to-increase-availability).
+
+### Requirements
+
+The following sections assume that you already added a new data node to the
+cluster, and they use the
+[`influxd-ctl` tool](/enterprise_influxdb/v1.5/administration/cluster-commands/) available on
+all meta nodes.
+
+Before you begin, stop writing historical data to InfluxDB.
+Historical data has a timestamp that occurs in the past (not real-time data).
+Performing a rebalance while writing historical data can lead to data loss.
+
+## Rebalance Procedure 1: Rebalance a cluster to create space
+
+For demonstration purposes, the next steps assume that you added a third
+data node to a previously two-data-node cluster that has a
+[replication factor](/influxdb/v1.5/concepts/glossary/#replication-factor) of
+two.
+This rebalance procedure is applicable for different cluster sizes and
+replication factors, but some of the specific, user-provided values will depend
+on that cluster size.
+
+Rebalance Procedure 1 focuses on how to rebalance a cluster after adding a
+data node to expand the total disk capacity of the cluster.
+In the next steps, you will safely move shards from one of the two original data
+nodes to the new data node.
+
+### Step 1: Truncate Hot Shards
+
+Hot shards are shards that are currently receiving writes.
+Performing any action on a hot shard can lead to data inconsistency within the
+cluster which requires manual intervention from the user.
+
+To prevent data inconsistency, truncate hot shards before moving any shards
+across data nodes.
+The command below creates a new hot shard which is automatically distributed
+across all data nodes in the cluster, and the system writes all new points to
+that shard.
+All previous writes are now stored in cold shards.
+
+```
+influxd-ctl truncate-shards
+```
+
+The expected output of this command is:
+
+```
+Truncated shards.
+```
+
+Once you truncate the shards, you can work on redistributing the cold shards
+without the threat of data inconsistency in the cluster.
+Any hot or new shards are now evenly distributed across the cluster and require
+no further intervention.
+
+### Step 2: Identify Cold Shards
+
+In this step, you identify the cold shards that you will copy to the new data node
+and remove from one of the original two data nodes.
+
+The following command lists every shard in our cluster:
+
+```
+influxd-ctl show-shards
+```
+
+The expected output is similar to the items in the codeblock below:
+
+```
+Shards
+==========
+ID Database Retention Policy Desired Replicas [...] End Owners
+21 telegraf autogen 2 [...] 2017-01-26T18:00:00Z [{4 enterprise-data-01:8088} {5 enterprise-data-02:8088}]
+22 telegraf autogen 2 [...] 2017-01-26T18:05:36.418734949Z* [{4 enterprise-data-01:8088} {5 enterprise-data-02:8088}]
+24 telegraf autogen 2 [...] 2017-01-26T19:00:00Z [{5 enterprise-data-02:8088} {6 enterprise-data-03:8088}]
+```
+
+The sample output includes three shards.
+The first two shards are cold shards.
+The timestamp in the `End` column occurs in the past (assume that the current
+time is just after `2017-01-26T18:05:36.418734949Z`), and the shards' owners
+are the two original data nodes: `enterprise-data-01:8088` and
+`enterprise-data-02:8088`.
+The second shard is the truncated shard; truncated shards have an asterisk (`*`)
+on the timestamp in the `End` column.
+
+The third shard is the newly-created hot shard; the timestamp in the `End`
+column is in the future (again, assume that the current time is just after
+`2017-01-26T18:05:36.418734949Z`), and the shard's owners include one of the
+original data nodes (`enterprise-data-02:8088`) and the new data node
+(`enterprise-data-03:8088`).
+That hot shard and any subsequent shards require no attention during
+the rebalance process.
+
+Identify the cold shards that you'd like to move from one of the original two
+data nodes to the new data node.
+Take note of the cold shard's `ID` (for example: `22`) and the TCP address of
+one of its owners in the `Owners` column (for example:
+`enterprise-data-01:8088`).
+
+> **Note:**
+>
+Use the following command string to determine the size of the shards in
+your cluster:
+>
+ find /var/lib/influxdb/data/ -mindepth 3 -type d -exec du -h {} \;
+>
+In general, we recommend moving larger shards to the new data node to increase the
+available disk space on the original data nodes.
+Users should note that moving shards will impact network traffic.
+
+### Step 3: Copy Cold Shards
+
+Next, copy the relevant cold shards to the new data node with the syntax below.
+Repeat this command for every cold shard that you'd like to move to the
+new data node.
+
+```
+influxd-ctl copy-shard <source_TCP_address> <destination_TCP_address> <shard_ID>
+```
+
+Where `source_TCP_address` is the address that you noted in step 2,
+`destination_TCP_address` is the TCP address of the new data node, and `shard_ID`
+is the ID of the shard that you noted in step 2.
+
+The expected output of the command is:
+```
+Copied shard <shard_ID> from <source_TCP_address> to <destination_TCP_address>
+```
+
+### Step 4: Confirm the Copied Shards
+
+Confirm that the TCP address of the new data node appears in the `Owners` column
+for every copied shard:
+
+```
+influxd-ctl show-shards
+```
+
+The expected output shows that the copied shard now has three owners:
+```
+Shards
+==========
+ID Database Retention Policy Desired Replicas [...] End Owners
+22 telegraf autogen 2 [...] 2017-01-26T18:05:36.418734949Z* [{4 enterprise-data-01:8088} {5 enterprise-data-02:8088} {6 enterprise-data-03:8088}]
+```
+
+In addition, verify that the copied shards appear in the new data node's shard
+directory and match the shards in the source data node's shard directory.
+Shards are located in
+`/var/lib/influxdb/data/<database>/<retention_policy>/<shard_ID>`.
+
+Here's an example of the correct output for shard `22`:
+```
+# On the source data node (enterprise-data-01)
+
+~# ls /var/lib/influxdb/data/telegraf/autogen/22
+000000001-000000001.tsm # 👍
+
+# On the new data node (enterprise-data-03)
+
+~# ls /var/lib/influxdb/data/telegraf/autogen/22
+000000001-000000001.tsm # 👍
+```
+
+It is essential that every copied shard appears on the new data node both
+in the `influxd-ctl show-shards` output and in the shard directory.
+If a shard does not pass both of the tests above, please repeat step 3.
+
+### Step 5: Remove Unnecessary Cold Shards
+
+Next, remove the copied shard from the original data node with the command below.
+Repeat this command for every cold shard that you'd like to remove from one of
+the original data nodes.
+**Removing a shard is an irrecoverable, destructive action; please be
+cautious with this command.**
+
+```
+influxd-ctl remove-shard <source_TCP_address> <shard_ID>
+```
+
+Where `source_TCP_address` is the TCP address of the original data node and
+`shard_ID` is the ID of the shard that you noted in step 2.
+
+The expected output of the command is:
+```
+Removed shard <shard_ID> from <source_TCP_address>
+```
+
+### Step 6: Confirm the Rebalance
+
+For every relevant shard, confirm that the TCP address of the new data node and
+only one of the original data nodes appears in the `Owners` column:
+
+```
+influxd-ctl show-shards
+```
+
+The expected output shows that the copied shard now has only two owners:
+```
+Shards
+==========
+ID Database Retention Policy Desired Replicas [...] End Owners
+22 telegraf autogen 2 [...] 2017-01-26T18:05:36.418734949Z* [{5 enterprise-data-02:8088} {6 enterprise-data-03:8088}]
+```
+
+That's it.
+You've successfully rebalanced your cluster; you expanded the available disk
+size on the original data nodes and increased the cluster's write throughput.
+
+## Rebalance Procedure 2: Rebalance a cluster to increase availability
+
+For demonstration purposes, the next steps assume that you added a third
+data node to a previously two-data-node cluster that has a
+[replication factor](/influxdb/v1.5/concepts/glossary/#replication-factor) of
+two.
+This rebalance procedure is applicable for different cluster sizes and
+replication factors, but some of the specific, user-provided values will depend
+on that cluster size.
+
+Rebalance Procedure 2 focuses on how to rebalance a cluster to improve availability
+and query throughput.
+In the next steps, you will increase the retention policy's replication factor and
+safely copy shards from one of the two original data nodes to the new data node.
+
+### Step 1: Update the Retention Policy
+
+[Update](/influxdb/v1.5/query_language/database_management/#modify-retention-policies-with-alter-retention-policy)
+every retention policy to have a replication factor of three.
+This step ensures that the system automatically distributes all newly-created
+shards across the three data nodes in the cluster.
+
+The following query increases the replication factor to three.
+Run the query on any data node for each retention policy and database.
+Here, we use InfluxDB's [CLI](/influxdb/v1.5/tools/shell/) to execute the query:
+
+```
+> ALTER RETENTION POLICY "<retention_policy_name>" ON "<database_name>" REPLICATION 3
+>
+```
+
+A successful `ALTER RETENTION POLICY` query returns no results.
+Use the
+[`SHOW RETENTION POLICIES` query](/influxdb/v1.5/query_language/schema_exploration/#show-retention-policies)
+to verify the new replication factor.
+
+Example:
+```
+> SHOW RETENTION POLICIES ON "telegraf"
+
+name duration shardGroupDuration replicaN default
+---- -------- ------------------ -------- -------
+autogen 0s 1h0m0s 3 #👍 true
+```
+
+### Step 2: Truncate Hot Shards
+
+Hot shards are shards that are currently receiving writes.
+Performing any action on a hot shard can lead to data inconsistency within the
+cluster which requires manual intervention from the user.
+
+To prevent data inconsistency, truncate hot shards before copying any shards
+to the new data node.
+The command below creates a new hot shard which is automatically distributed
+across the three data nodes in the cluster, and the system writes all new points
+to that shard.
+All previous writes are now stored in cold shards.
+
+```
+influxd-ctl truncate-shards
+```
+
+The expected output of this command is:
+
+```
+Truncated shards.
+```
+
+Once you truncate the shards, you can work on distributing the cold shards
+without the threat of data inconsistency in the cluster.
+Any hot or new shards are now automatically distributed across the cluster and
+require no further intervention.
+
+### Step 3: Identify Cold Shards
+
+In this step, you identify the cold shards that you will copy to the new data node.
+
+The following command lists every shard in your cluster:
+
+```
+influxd-ctl show-shards
+```
+
+The expected output is similar to the items in the codeblock below:
+
+```
+Shards
+==========
+ID Database Retention Policy Desired Replicas [...] End Owners
+21 telegraf autogen 3 [...] 2017-01-26T18:00:00Z [{4 enterprise-data-01:8088} {5 enterprise-data-02:8088}]
+22 telegraf autogen 3 [...] 2017-01-26T18:05:36.418734949Z* [{4 enterprise-data-01:8088} {5 enterprise-data-02:8088}]
+24 telegraf autogen 3 [...] 2017-01-26T19:00:00Z [{4 enterprise-data-01:8088} {5 enterprise-data-02:8088} {6 enterprise-data-03:8088}]
+```
+
+The sample output includes three shards.
+The first two shards are cold shards.
+The timestamp in the `End` column occurs in the past (assume that the current
+time is just after `2017-01-26T18:05:36.418734949Z`), and the shards' owners
+are the two original data nodes: `enterprise-data-01:8088` and
+`enterprise-data-02:8088`.
+The second shard is the truncated shard; truncated shards have an asterisk (`*`)
+on the timestamp in the `End` column.
+
+The third shard is the newly-created hot shard; the timestamp in the `End`
+column is in the future (again, assume that the current time is just after
+`2017-01-26T18:05:36.418734949Z`), and the shard's owners include all three
+data nodes: `enterprise-data-01:8088`, `enterprise-data-02:8088`, and
+`enterprise-data-03:8088`.
+That hot shard and any subsequent shards require no attention during
+the rebalance process.
+
+Identify the cold shards that you'd like to copy from one of the original two
+data nodes to the new data node.
+Take note of the cold shard's `ID` (for example: `22`) and the TCP address of
+one of its owners in the `Owners` column (for example:
+`enterprise-data-01:8088`).
+
+### Step 4: Copy Cold Shards
+
+Next, copy the relevant cold shards to the new data node with the syntax below.
+Repeat this command for every cold shard that you'd like to move to the
+new data node.
+
+```
+influxd-ctl copy-shard <source_TCP_address> <destination_TCP_address> <shard_ID>
+```
+
+Where `source_TCP_address` is the address that you noted in step 3,
+`destination_TCP_address` is the TCP address of the new data node, and `shard_ID`
+is the ID of the shard that you noted in step 3.
+
+The expected output of the command is:
+```
+Copied shard <shard_ID> from <source_TCP_address> to <destination_TCP_address>
+```
+
+### Step 5: Confirm the Rebalance
+
+Confirm that the TCP address of the new data node appears in the `Owners` column
+for every copied shard:
+
+```
+influxd-ctl show-shards
+```
+
+The expected output shows that the copied shard now has three owners:
+```
+Shards
+==========
+ID Database Retention Policy Desired Replicas [...] End Owners
+22 telegraf autogen 3 [...] 2017-01-26T18:05:36.418734949Z* [{4 enterprise-data-01:8088} {5 enterprise-data-02:8088} {6 enterprise-data-03:8088}]
+```
+
+In addition, verify that the copied shards appear in the new data node's shard
+directory and match the shards in the source data node's shard directory.
+Shards are located in
+`/var/lib/influxdb/data/<database>/<retention_policy>/<shard_ID>`.
+
+Here's an example of the correct output for shard `22`:
+```
+# On the source data node (enterprise-data-01)
+
+~# ls /var/lib/influxdb/data/telegraf/autogen/22
+000000001-000000001.tsm # 👍
+
+# On the new data node (enterprise-data-03)
+
+~# ls /var/lib/influxdb/data/telegraf/autogen/22
+000000001-000000001.tsm # 👍
+```
+
+That's it.
+You've successfully rebalanced your cluster, increasing both data availability
+and query throughput.
diff --git a/content/enterprise_influxdb/v1.5/guides/replacing-nodes.md b/content/enterprise_influxdb/v1.5/guides/replacing-nodes.md
new file mode 100644
index 000000000..79e09a4d1
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/guides/replacing-nodes.md
@@ -0,0 +1,409 @@
+---
+title: Replacing InfluxDB Enterprise cluster meta nodes and data nodes
+
+menu:
+ enterprise_influxdb_1_5:
+ name: Replacing cluster nodes
+ weight: 10
+ parent: Guides
+---
+
+## Introduction
+
+Nodes in an InfluxDB Enterprise cluster may need to be replaced at some point due to hardware needs, hardware issues, or something else entirely.
+This guide outlines processes for replacing both meta nodes and data nodes in an InfluxDB Enterprise cluster.
+
+## Concepts
+Meta nodes manage and monitor both the uptime of nodes in the cluster and the distribution of [shards](/influxdb/v1.5/concepts/glossary/#shard) among nodes in the cluster.
+They hold information about which data nodes own which shards; information on which the
+[anti-entropy](/enterprise_influxdb/v1.5/administration/anti-entropy/) (AE) process depends.
+
+Data nodes hold raw time-series data and metadata. Data shards are both distributed and replicated across data nodes in the cluster. The AE process runs on data nodes and references the shard information stored in the meta nodes to ensure each data node has the shards it needs.
+
+`influxd-ctl` is a CLI included in each meta node and is used to manage your InfluxDB Enterprise cluster.
+
+## Scenarios
+
+### Replacing nodes in clusters with security enabled
+Many InfluxDB Enterprise clusters are configured with security enabled, forcing secure TLS encryption between all nodes in the cluster.
+Both `influxd-ctl` and `curl`, the command line tools used when replacing nodes, have options that facilitate the use of TLS.
+
+#### `influxd-ctl -bind-tls`
+In order to manage your cluster over TLS, pass the `-bind-tls` flag with any `influxd-ctl` command.
+
+> If using a self-signed certificate, pass the `-k` flag to skip certificate verification.
+
+```bash
+# Pattern
+influxd-ctl -bind-tls [-k] <command>
+
+# Example
+influxd-ctl -bind-tls remove-meta enterprise-meta-02:8091
+```
+
+#### `curl -k`
+`curl` natively supports TLS/SSL connections, but if using a self-signed certificate, pass the `-k`/`--insecure` flag to allow for "insecure" SSL connections.
+
+> Self-signed certificates are considered "insecure" due to their lack of a valid chain of authority. However, data is still encrypted when using self-signed certificates.
+
+```bash
+# Pattern
+curl [-k, --insecure] <url>
+
+# Example
+curl -k https://localhost:8091/status
+```
+
+### Replacing meta nodes in a functional cluster
+If all meta nodes in the cluster are fully functional, simply follow the steps for [replacing meta nodes](#replacing-meta-nodes-in-an-influxdb-enterprise-cluster).
+
+### Replacing an unresponsive meta node
+If replacing a meta node that is either unreachable or unrecoverable, you need to forcefully remove it from the meta cluster. Instructions for forcefully removing meta nodes are provided in [step 2.2](#2-2-remove-the-non-leader-meta-node) of the [replacing meta nodes](#replacing-meta-nodes-in-an-influxdb-enterprise-cluster) process.
+
+### Replacing responsive and unresponsive data nodes in a cluster
+The process of replacing both responsive and unresponsive data nodes is the same. Simply follow the instructions for [replacing data nodes](#replacing-data-nodes-in-an-influxdb-enterprise-cluster).
+
+### Reconnecting a data node with a failed disk
+A disk drive failing is never a good thing, but it does happen, and when it does,
+all shards on that node are lost.
+
+Often in this scenario, rather than replacing the entire host, you just need to replace the disk.
+Host information remains the same, but once the `influxd` process starts again, it doesn't know
+to communicate with the meta nodes, so the AE process can't start the shard-sync process.
+
+To resolve this, log in to a meta node and use the `update-data` command
+to [update the failed data node to itself](#2-replace-the-old-data-node-with-the-new-data-node).
+
+```bash
+# Pattern
+influxd-ctl update-data <data_node_TCP_address> <data_node_TCP_address>
+
+# Example
+influxd-ctl update-data enterprise-data-01:8088 enterprise-data-01:8088
+```
+
+This will connect the `influxd` process running on the newly replaced disk to the cluster.
+The AE process will detect the missing shards and begin to sync data from other
+shards in the same shard group.
+
+
+## Replacing meta nodes in an InfluxDB Enterprise cluster
+
+[Meta nodes](/enterprise_influxdb/v1.5/concepts/clustering/#meta-nodes) together form a [Raft](https://raft.github.io/) cluster in which nodes elect a leader through consensus vote.
+The leader oversees the management of the meta cluster, so it is important to replace non-leader nodes before the leader node.
+The process for replacing meta nodes is as follows:
+
+1. [Identify the leader node](#1-identify-the-leader-node)
+2. [Replace all non-leader nodes](#2-replace-all-non-leader-nodes)
+ 2.1. [Provision a new meta node](#2-1-provision-a-new-meta-node)
+ 2.2. [Remove the non-leader meta node](#2-2-remove-the-non-leader-meta-node)
+ 2.3. [Add the new meta node](#2-3-add-the-new-meta-node)
+ 2.4. [Confirm the meta node was added](#2-4-confirm-the-meta-node-was-added)
+ 2.5. [Remove and replace all other non-leader meta nodes](#2-5-remove-and-replace-all-other-non-leader-meta-nodes)
+3. [Replace the leader node](#3-replace-the-leader-node)
+ 3.1. [Kill the meta process on the leader node](#3-1-kill-the-meta-process-on-the-leader-node)
+ 3.2. [Remove and replace the old leader node](#3-2-remove-and-replace-the-old-leader-node)
+
+### 1. Identify the leader node
+
+Log into any of your meta nodes and run the following:
+
+```bash
+curl -s localhost:8091/status | jq
+```
+
+> Piping the command into `jq` is optional, but does make the JSON output easier to read.
+
+The output will include information about the current meta node, the leader of the meta cluster, and a list of "peers" in the meta cluster.
+
+```json
+{
+ "nodeType": "meta",
+ "leader": "enterprise-meta-01:8089",
+ "httpAddr": "enterprise-meta-01:8091",
+ "raftAddr": "enterprise-meta-01:8089",
+ "peers": [
+ "enterprise-meta-01:8089",
+ "enterprise-meta-02:8089",
+ "enterprise-meta-03:8089"
+ ]
+}
+```
+
+Identify the `leader` of the cluster. When replacing nodes in a cluster, non-leader nodes should be replaced _before_ the leader node.
+
+### 2. Replace all non-leader nodes
+
+#### 2.1. Provision a new meta node
+[Provision and start a new meta node](/enterprise_influxdb/v1.5/production_installation/meta_node_installation/), but **do not** add it to the cluster yet.
+For this guide, the new meta node's hostname will be `enterprise-meta-04`.
+
+#### 2.2. Remove the non-leader meta node
+Now remove the non-leader node you are replacing by using the `influxd-ctl remove-meta` command and the TCP address of the meta node (ex. `enterprise-meta-02:8091`):
+
+```bash
+# Pattern
+influxd-ctl remove-meta <meta_node_TCP_address>
+
+# Example
+influxd-ctl remove-meta enterprise-meta-02:8091
+```
+
+> Only use `remove-meta` if you want to permanently remove a meta node from a cluster.
+
+
+
+> **For unresponsive or unrecoverable meta nodes:**
+
+>If the meta process is not running on the node you are trying to remove or the node is neither reachable nor recoverable, use the `-force` flag.
+When forcefully removing a meta node, you must also pass the `-tcpAddr` flag with the TCP and HTTP bind addresses of the node you are removing.
+
+>```bash
+# Pattern
+influxd-ctl remove-meta -force -tcpAddr <meta_node_TCP_bind_address> <meta_node_HTTP_bind_address>
+
+# Example
+influxd-ctl remove-meta -force -tcpAddr enterprise-meta-02:8089 enterprise-meta-02:8091
+```
+
+#### 2.3. Add the new meta node
+Once the non-leader meta node has been removed, use `influxd-ctl add-meta` to replace it with the new meta node:
+
+```bash
+# Pattern
+influxd-ctl add-meta <new_meta_node_TCP_address>
+
+# Example
+influxd-ctl add-meta enterprise-meta-04:8091
+```
+
+You can also add a meta node remotely through another meta node:
+
+```bash
+# Pattern
+influxd-ctl -bind <remote_meta_node_TCP_address> add-meta <new_meta_node_TCP_address>
+
+# Example
+influxd-ctl -bind enterprise-meta-node-01:8091 add-meta enterprise-meta-node-04:8091
+```
+
+>This command contacts the meta node running at `enterprise-meta-node-01:8091` and adds a meta node to that meta node’s cluster.
+The added meta node has the hostname `enterprise-meta-node-04` and runs on port `8091`.
+
+#### 2.4. Confirm the meta node was added
+Confirm the new meta-node has been added by running:
+
+```bash
+influxd-ctl show
+```
+
+The new meta node should appear in the output:
+
+```bash
+Data Nodes
+==========
+ID TCP Address Version
+4 enterprise-data-01:8088 1.5.x-c1.5.x
+5 enterprise-data-02:8088 1.5.x-c1.5.x
+
+Meta Nodes
+==========
+TCP Address Version
+enterprise-meta-01:8091 1.5.x-c1.5.x
+enterprise-meta-03:8091 1.5.x-c1.5.x
+enterprise-meta-04:8091 1.5.x-c1.5.x # <-- The newly added meta node
+```
+
+#### 2.5. Remove and replace all other non-leader meta nodes
+**If replacing only one meta node, no further action is required.**
+If replacing others, repeat steps [2.1-2.4](#2-1-provision-a-new-meta-node) for all non-leader meta nodes one at a time.
+
+### 3. Replace the leader node
+As non-leader meta nodes are removed and replaced, the leader node oversees the replication of data to each of the new meta nodes.
+Leave the leader up and running until at least two of the new meta nodes are up, running and healthy.
+
+#### 3.1. Kill the meta process on the leader node
+Log into the leader meta node and kill the meta process.
+
+```bash
+# List the running processes and get the
+# PID of the 'influxd-meta' process
+ps aux
+
+# Kill the 'influxd-meta' process
+kill <PID>
+```
+
+Once killed, the meta cluster will elect a new leader using the [raft consensus algorithm](https://raft.github.io/).
+Confirm the new leader by running:
+
+```bash
+curl localhost:8091/status | jq
+```
+
+#### 3.2. Remove and replace the old leader node
+Remove the old leader node and replace it by following steps [2.1-2.4](#2-1-provision-a-new-meta-node).
+The minimum number of meta nodes you should have in your cluster is 3.
+
+## Replacing data nodes in an InfluxDB Enterprise cluster
+
+[Data nodes](/enterprise_influxdb/v1.5/concepts/clustering/#data-nodes) house all raw time series data and metadata.
+The process of replacing data nodes is as follows:
+
+1. [Provision a new data node](#1-provision-a-new-data-node)
+2. [Replace the old data node with the new data node](#2-replace-the-old-data-node-with-the-new-data-node)
+3. [Confirm the data node was added](#3-confirm-the-data-node-was-added)
+4. [Check the copy-shard-status](#4-check-the-copy-shard-status)
+
+### 1. Provision a new data node
+
+[Provision and start a new data node](/enterprise_influxdb/v1.5/production_installation/data_node_installation/), but **do not** add it to your cluster yet.
+
+### 2. Replace the old data node with the new data node
+Log into any of your cluster's meta nodes and use `influxd-ctl update-data` to replace the old data node with the new data node:
+
+```bash
+# Pattern
+influxd-ctl update-data <old_data_node_TCP_address> <new_data_node_TCP_address>
+
+# Example
+influxd-ctl update-data enterprise-data-01:8088 enterprise-data-03:8088
+```
+
+### 3. Confirm the data node was added
+
+Confirm the new data node has been added by running:
+
+```bash
+influxd-ctl show
+```
+
+The new data node should appear in the output:
+
+```bash
+Data Nodes
+==========
+ID TCP Address Version
+4 enterprise-data-03:8088 1.5.x-c1.5.x # <-- The newly added data node
+5 enterprise-data-02:8088 1.5.x-c1.5.x
+
+Meta Nodes
+==========
+TCP Address Version
+enterprise-meta-01:8091 1.5.x-c1.5.x
+enterprise-meta-02:8091 1.5.x-c1.5.x
+enterprise-meta-03:8091 1.5.x-c1.5.x
+```
+
+Inspect your cluster's shard distribution with `influxd-ctl show-shards`.
+Shards will immediately reflect the new address of the node.
+
+```bash
+influxd-ctl show-shards
+
+Shards
+==========
+ID Database Retention Policy Desired Replicas Shard Group Start End Expires Owners
+3 telegraf autogen 2 2 2018-03-19T00:00:00Z 2018-03-26T00:00:00Z [{5 enterprise-data-02:8088} {4 enterprise-data-03:8088}]
+1 _internal monitor 2 1 2018-03-22T00:00:00Z 2018-03-23T00:00:00Z 2018-03-30T00:00:00Z [{5 enterprise-data-02:8088}]
+2 _internal monitor 2 1 2018-03-22T00:00:00Z 2018-03-23T00:00:00Z 2018-03-30T00:00:00Z [{4 enterprise-data-03:8088}]
+4 _internal monitor 2 3 2018-03-23T00:00:00Z 2018-03-24T00:00:00Z 2018-03-01T00:00:00Z [{5 enterprise-data-02:8088}]
+5 _internal monitor 2 3 2018-03-23T00:00:00Z 2018-03-24T00:00:00Z 2018-03-01T00:00:00Z [{4 enterprise-data-03:8088}]
+6 foo autogen 2 4 2018-03-19T00:00:00Z 2018-03-26T00:00:00Z [{5 enterprise-data-02:8088} {4 enterprise-data-03:8088}]
+```
+
+Within the duration defined by [`anti-entropy.check-interval`](/enterprise_influxdb/v1.5/administration/configuration/#check-interval-30s),
+the AE service will begin copying shards from other shard owners to the new node.
+The time it takes for copying to complete is determined by the number of shards copied and how much data is stored in each.
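+
+For reference, the relevant data node configuration setting looks like the following (a sketch showing the default value; adjust the interval to suit your environment):
+
+```toml
+[anti-entropy]
+  # How often the anti-entropy service checks for missing or inconsistent shards
+  check-interval = "30s"
+```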
+
+### 4. Check the `copy-shard-status`
+Check on the status of the copy-shard process with:
+
+```bash
+influxd-ctl copy-shard-status
+```
+
+The output will show all currently running copy-shard processes.
+
+```
+Source Dest Database Policy ShardID TotalSize CurrentSize StartedAt
+enterprise-data-02:8088 enterprise-data-03:8088 telegraf autogen 3 119624324 119624324 2018-04-17 23:45:09.470696179 +0000 UTC
+```
+
+> **Important:** If replacing other data nodes in the cluster, make sure shards are completely copied from nodes in the same shard group before replacing the other nodes.
+View the [Anti-entropy](/enterprise_influxdb/v1.5/administration/anti-entropy/#concepts) documentation for important information regarding anti-entropy and your database's replication factor.
+
+
+## Troubleshooting
+
+### Cluster commands result in timeout without error
+In some cases, commands used to add or remove nodes from your cluster
+timeout, but don't return an error.
+
+```
+add-data: operation timed out with error:
+```
+
+#### Check your InfluxDB user permissions
+In order to add or remove nodes to or from a cluster, your user must have `AddRemoveNode` permissions.
+Attempting to manage cluster nodes without the appropriate permissions results
+in a timeout with no accompanying error.
+
+To check user permissions, log in to one of your meta nodes and `curl` the `/user` API endpoint:
+
+```bash
+curl localhost:8091/user
+```
+
+You can also check the permissions of a specific user by passing the username with the `name` parameter:
+
+```bash
+# Pattern
+curl localhost:8091/user?name=<username>
+
+# Example
+curl localhost:8091/user?name=bob
+```
+
+The JSON output will include user information and permissions:
+
+```json
+"users": [
+ {
+ "name": "bob",
+ "hash": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
+ "permissions": {
+ "": [
+ "ViewAdmin",
+ "ViewChronograf",
+ "CreateDatabase",
+ "CreateUserAndRole",
+ "DropDatabase",
+ "DropData",
+ "ReadData",
+ "WriteData",
+ "ManageShard",
+ "ManageContinuousQuery",
+ "ManageQuery",
+ "ManageSubscription",
+ "Monitor"
+ ]
+ }
+ }
+]
+```
+
+_In the output above, `bob` does not have the required `AddRemoveNode` permissions
+and would not be able to add or remove nodes from the cluster._
+
+#### Check the network connection between nodes
+Something may be interrupting the network connection between nodes.
+To check, `ping` the server or node you're trying to add or remove.
+If the ping is unsuccessful, something in the network is preventing communication.
+
+```bash
+ping enterprise-data-03
+```
+
+_If pings are unsuccessful, be sure to ping from other meta nodes as well to determine
+if the communication issues are unique to specific nodes._
diff --git a/content/enterprise_influxdb/v1.5/guides/smtp-server.md b/content/enterprise_influxdb/v1.5/guides/smtp-server.md
new file mode 100644
index 000000000..f45688ca9
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/guides/smtp-server.md
@@ -0,0 +1,40 @@
+---
+title: Setup of SMTP server for InfluxDB Enterprise
+aliases:
+ - /enterprise/v1.5/guides/smtp-server/
+menu:
+ enterprise_influxdb_1_5:
+ name: SMTP server setup
+ weight: 20
+ parent: Guides
+---
+
+InfluxDB Enterprise requires a functioning SMTP server to invite users to the console.
+If you’re working on Ubuntu 14.04 and are looking for an SMTP server to use for
+development purposes, the following steps will get you up and running with [MailCatcher](https://mailcatcher.me/).
+
+Note that MailCatcher will NOT send actual emails; it merely captures email
+traffic from the cluster and allows you to view it in a browser.
+If you want to invite other users you must set up an actual email server that the InfluxDB Enterprise process can use.
+
+#### 1. Install the relevant packages on the server running the InfluxDB Enterprise Web Console
+```
+$ sudo apt-add-repository ppa:brightbox/ruby-ng
+$ sudo apt-get update
+$ sudo apt-get install ruby2.2 ruby2.2-dev build-essential libsqlite3-dev
+$ sudo gem install mailcatcher
+```
+#### 2. Start MailCatcher
+```
+$ mailcatcher --ip=0.0.0.0 --http-ip=0.0.0.0
+```
+#### 3. Update the InfluxDB Enterprise configuration file
+
+In `/etc/influx-enterprise/influx-enterprise.conf`, update the port setting in
+the `[smtp]` section to `1025`.
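+
+For example, the relevant section might look like the following (a sketch; the `host` value shown is an assumption for a MailCatcher instance running on the same server):
+
+```
+[smtp]
+  host = "localhost"
+  port = 1025
+```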
+
+#### 4. Restart the InfluxDB Enterprise Web Console
+```
+$ service influx-enterprise restart
+```
+View emails at `<server_IP>:1080`.
diff --git a/content/enterprise_influxdb/v1.5/introduction/_index.md b/content/enterprise_influxdb/v1.5/introduction/_index.md
new file mode 100644
index 000000000..4062a21c5
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/introduction/_index.md
@@ -0,0 +1,27 @@
+---
+title: Introducing InfluxDB Enterprise
+description: Covers introductory information on downloading, installing, and getting started with InfluxDB Enterprise.
+
+aliases:
+ - /enterprise/v1.5/introduction/
+
+menu:
+ enterprise_influxdb_1_5:
+ name: Introduction
+ weight: 20
+---
+
+The introductory documentation includes all the information you need to get up
+and running with InfluxDB Enterprise.
+
+## [Downloading InfluxDB Enterprise](/enterprise_influxdb/v1.5/introduction/download/)
+
+See [Downloading InfluxDB Enterprise](/enterprise_influxdb/v1.5/introduction/download/) for information on obtaining license keys and download URLs for InfluxDB Enterprise.
+
+## [Installation options](/enterprise_influxdb/v1.5/introduction/installation_guidelines/)
+
+See [Installation options](/enterprise_influxdb/v1.5/introduction/installation_guidelines/) to learn about installing InfluxDB Enterprise using the QuickStart and Production installation options.
+
+## [Getting started with InfluxDB Enterprise](/enterprise_influxdb/v1.5/introduction/getting-started/)
+
+See [Getting started with InfluxDB Enterprise](/enterprise_influxdb/v1.5/introduction/getting-started/) to begin exploring and using InfluxDB Enterprise.
diff --git a/content/enterprise_influxdb/v1.5/introduction/download.md b/content/enterprise_influxdb/v1.5/introduction/download.md
new file mode 100644
index 000000000..43dcc213f
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/introduction/download.md
@@ -0,0 +1,17 @@
+---
+title: InfluxDB Enterprise downloads
+
+aliases:
+ - /enterprise/v1.5/introduction/download/
+
+menu:
+ enterprise_influxdb_1_5:
+ name: Downloads
+ weight: 0
+ parent: Introduction
+---
+
+Please visit [InfluxPortal](https://portal.influxdata.com/) to get a license key and download URLs.
+Also see the [Installation](/enterprise_influxdb/v1.5/introduction/meta_node_installation/) documentation to access the downloads.
+You must have a valid license to run a cluster.
+InfluxPortal offers 14-day Demo licenses on sign-up.
diff --git a/content/enterprise_influxdb/v1.5/introduction/getting-started.md b/content/enterprise_influxdb/v1.5/introduction/getting-started.md
new file mode 100644
index 000000000..45e438a4d
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/introduction/getting-started.md
@@ -0,0 +1,20 @@
+---
+title: Getting started with InfluxDB Enterprise
+aliases:
+ - /enterprise_influxdb/v1.5/introduction/getting_started/
+ - /enterprise/v1.5/introduction/getting_started/
+menu:
+ enterprise_influxdb_1_5:
+ name: Getting started
+ weight: 40
+ parent: Introduction
+---
+
+Now that you successfully [installed and set up](/enterprise_influxdb/v1.5/introduction/meta_node_installation/) InfluxDB Enterprise, you can configure Chronograf for [monitoring InfluxDB Enterprise clusters](/chronograf/latest/guides/monitor-an-influxenterprise-cluster/).
+
+See [Getting started with Chronograf](/chronograf/latest/introduction/getting-started/) to learn more about using Chronograf with the InfluxData time series platform.
+
+
+### Where to from here?
+
+Check out [InfluxDB Enterprise features](/enterprise_influxdb/v1.5/features/) to learn about additional InfluxDB features that are unique to InfluxDB Enterprise.
diff --git a/content/enterprise_influxdb/v1.5/introduction/installation_guidelines.md b/content/enterprise_influxdb/v1.5/introduction/installation_guidelines.md
new file mode 100644
index 000000000..60cf3f241
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/introduction/installation_guidelines.md
@@ -0,0 +1,94 @@
+---
+title: Installation options (⏰ Please Read!)
+aliases:
+ - /enterprise_influxdb/v1.5/introduction/meta_node_installation/
+ - /enterprise_influxdb/v1.5/introduction/data_node_installation/
+ - /chronograf/latest/introduction/installation
+ - /enterprise/v1.5/introduction/installation_guidelines/
+menu:
+ enterprise_influxdb_1_5:
+ name: Installing options - QuickStart or Production
+ weight: 20
+ parent: Introduction
+---
+
+Please review the sections below before you begin working with InfluxDB Enterprise.
+
+## Which installation is right for me?
+
+Two options are described for installing InfluxDB Enterprise.
+
+The [QuickStart installation](/enterprise_influxdb/v1.5/quickstart_installation/) process is intended for users looking to quickly get up and running with InfluxDB Enterprise and for users who want to evaluate it.
+The QuickStart installation process **is not** intended for use
+in a production environment.
+
+The [Production installation](/enterprise_influxdb/v1.5/production_installation/) process is recommended for users intending to deploy the InfluxDB Enterprise installation in a production environment.
+
+> **Note:** If you install InfluxDB Enterprise with the QuickStart installation process you will need to reinstall InfluxDB Enterprise with the Production installation process before using the product in a production environment.
+
+## Requirements for InfluxDB Enterprise clusters
+
+For an overview of the architecture and concepts in an InfluxDB Enterprise Cluster, review [Clustering Guide](/enterprise_influxdb/v1.5/concepts/clustering/).
+
+For clusters using a license key and not a license file, all nodes must be able to contact `portal.influxdata.com`
+via port `80` or port `443`. Nodes that go more than four hours without connectivity to the Portal may experience license issues.
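+
+To confirm Portal connectivity from a node, you can issue a simple HTTP request (a hedged example; any HTTP client works):
+
+```
+curl -sI https://portal.influxdata.com
+```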
+
+### Frequently Overlooked Requirements
+
+The following are the most frequently overlooked requirements when installing a cluster.
+
+#### Ensure connectivity between machines
+
+All nodes in the cluster must be able to resolve each other by hostname or IP,
+whichever is used in the configuration files.
+
+For simplicity, ensure that all nodes can reach all other nodes on ports `8086`, `8088`, `8089`, and `8091`.
+If you alter the default ports in the configuration file(s), ensure the configured ports are open between the nodes.
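+
+One quick way to verify port-level connectivity between nodes is with `netcat` (an illustrative example; the hostname and tool availability are assumptions):
+
+```
+for port in 8086 8088 8089 8091; do nc -vz enterprise-data-01 $port; done
+```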
+
+#### Synchronize time between hosts
+
+InfluxEnterprise uses hosts' local time in UTC to assign timestamps to data and for coordination purposes.
+Use the Network Time Protocol (NTP) to synchronize time between hosts.
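+
+For example, on Debian or Ubuntu hosts you might install and verify NTP as follows (a sketch; package names and commands vary by distribution):
+
+```
+sudo apt-get install ntp
+# Confirm the host is synchronizing with its NTP peers
+ntpq -p
+```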
+
+#### Use SSDs
+
+Clusters require sustained availability of 1000-2000 IOPS from the attached storage.
+SANs must guarantee that at least 1000 IOPS are always available to InfluxDB Enterprise
+nodes; otherwise, they may not be sufficient.
+SSDs are strongly recommended, and we have had no reports of IOPS contention from any customers running on SSDs.
+
+#### Use three and only three meta nodes
+
+Although technically the cluster can function with any number of meta nodes, the best practice is to ALWAYS have an odd number of meta nodes.
+This allows the meta nodes to reach consensus.
+An even number of meta nodes cannot achieve consensus because there can be no "deciding vote" cast between the nodes if they disagree.
+
+Therefore, the minimum number of meta nodes for a high availability (HA) installation is three. The typical HA installation for InfluxDB Enterprise deploys three meta nodes.
+
+Aside from three being a magic number, a three meta node cluster can tolerate the permanent loss of a single meta node with no degradation in any function or performance.
+A replacement meta node can be added to restore the cluster to full redundancy.
+A three meta node cluster that loses two meta nodes will still be able to handle
+basic writes and queries, but no new shards, databases, users, etc. can be created.
+
+Running a cluster with five meta nodes does allow for the permanent loss of
+two meta nodes without impact on the cluster, but it doubles the
+Raft communication overhead.
+
+#### Meta and data nodes are fully independent
+
+Meta nodes run the Raft consensus protocol together, and manage the metastore of
+all shared cluster information: cluster nodes, databases, retention policies,
+shard groups, users, continuous queries, and subscriptions.
+
+Data nodes store the shard groups and respond to queries.
+They request metastore information from the meta group as needed.
+
+There is no requirement at all for there to be a meta process on a data node,
+or for there to be a meta process per data node.
+Three meta nodes is enough for an arbitrary number of data nodes, and for best
+redundancy, all nodes should run on independent servers.
+
+#### Install Chronograf last
+
+Chronograf should not be installed or configured until the
+InfluxDB Enterprise cluster is fully functional.
diff --git a/content/enterprise_influxdb/v1.5/production_installation/_index.md b/content/enterprise_influxdb/v1.5/production_installation/_index.md
new file mode 100644
index 000000000..e0d239bdb
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/production_installation/_index.md
@@ -0,0 +1,27 @@
+---
+title: Production installation
+aliases:
+ - /enterprise/v1.5/production_installation/
+menu:
+ enterprise_influxdb_1_5:
+ weight: 40
+---
+
+The Production Installation process is designed for users looking to deploy
+InfluxEnterprise in a production environment.
+
+If you wish to evaluate InfluxEnterprise in a non-production
+environment, feel free to follow the instructions outlined in the
+[QuickStart installation](/enterprise_influxdb/v1.5/quickstart_installation) section.
+Please note that if you install InfluxDB Enterprise with the QuickStart Installation process you
+will need to reinstall InfluxDB Enterprise with the Production Installation
+process before using the product in a production environment.
+
+
+## Production installation
+
+Follow the links below to get up and running with InfluxEnterprise.
+
+### [Step 1 - Meta node installation](/enterprise_influxdb/v1.5/production_installation/meta_node_installation/)
+### [Step 2 - Data node installation](/enterprise_influxdb/v1.5/production_installation/data_node_installation/)
+### [Step 3 - Chronograf installation](/enterprise_influxdb/v1.5/production_installation/chrono_install/)
diff --git a/content/enterprise_influxdb/v1.5/production_installation/chrono_install.md b/content/enterprise_influxdb/v1.5/production_installation/chrono_install.md
new file mode 100644
index 000000000..4f30eb3ca
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/production_installation/chrono_install.md
@@ -0,0 +1,15 @@
+---
+title: Step 3 - Installing Chronograf
+aliases:
+ - /enterprise/v1.5/production_installation/chrono_install/
+menu:
+ enterprise_influxdb_1_5:
+ weight: 20
+ parent: Production installation
+ identifier: chrono_install
+---
+
+Now that you've installed the meta nodes and data nodes, you are ready to install Chronograf
+to provide you with a user interface to access the InfluxDB Enterprise instance.
+
+[Installation instruction for Chronograf](/chronograf/latest/introduction/installation/)
diff --git a/content/enterprise_influxdb/v1.5/production_installation/data_node_installation.md b/content/enterprise_influxdb/v1.5/production_installation/data_node_installation.md
new file mode 100644
index 000000000..ba5729ea3
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/production_installation/data_node_installation.md
@@ -0,0 +1,245 @@
+---
+title: Step 2 - Installing InfluxDB Enterprise data nodes
+aliases:
+ - /enterprise/v1.5/production_installation/data_node_installation/
+menu:
+ enterprise_influxdb_1_5:
+ weight: 20
+ parent: Production installation
+---
+
+InfluxDB Enterprise offers highly scalable clusters on your infrastructure
+and a management UI for working with clusters.
+The next steps will get you up and running with the second essential component of
+your InfluxDB Enterprise cluster: the data nodes.
+
+If you have not set up your meta nodes, go to
+[Installing InfluxDB Enterprise meta nodes](/enterprise_influxdb/v1.5/production_installation/meta_node_installation/).
+Do not proceed unless you have finished installing your meta nodes.
+
+
+## Data node setup description and requirements
+
+The Production Installation process sets up two [data nodes](/enterprise_influxdb/v1.5/concepts/glossary#data-node), each running on a dedicated server.
+You **must** have a minimum of two data nodes in a cluster; InfluxDB Enterprise requires at least two data nodes for high availability and redundancy.
+
+> **Note:** Although there is no requirement that each data node runs on a dedicated
+server, InfluxData recommends this for production installations.
+
+See [Clustering in InfluxDB Enterprise](/enterprise_influxdb/v1.5/concepts/clustering/#optimal-server-counts)
+for more on cluster architecture.
+
+## Other requirements
+
+### License key or file
+
+InfluxDB Enterprise requires a license key **OR** a license file to run.
+Your license key is available at [InfluxPortal](https://portal.influxdata.com/licenses).
+Contact support at the email we provided at signup to receive a license file.
+License files are required only if the nodes in your cluster cannot reach
+`portal.influxdata.com` on port `80` or `443`.
+
+### Networking
+
+Data nodes communicate over ports `8088`, `8089`, and `8091`.
+
+For licensing purposes, data nodes must also be able to reach `portal.influxdata.com`
+on port `80` or `443`.
+If the data nodes cannot reach `portal.influxdata.com` on port `80` or `443`,
+set the `license-path` setting instead of the `license-key` setting in the data node configuration file.
+
+#### Load balancer
+
+InfluxDB Enterprise does not function as a load balancer.
+You will need to configure your own load balancer to send client traffic to the
+data nodes on port `8086` (the default port for the [HTTP API](/influxdb/v1.5/tools/api/)).
+
+## Data node setup
+
+### Step 1: Modify the `/etc/hosts` file
+
+Add your servers' hostnames and IP addresses to **each** cluster server's `/etc/hosts`
+file (the hostnames below are representative).
+
+```
+<Data_1_IP_address> enterprise-data-01
+<Data_2_IP_address> enterprise-data-02
+```
+
+> **Verification steps:**
+>
+Before proceeding with the installation, verify on each meta node and data node server that the other
+servers are resolvable. Here is an example set of shell commands using `ping`:
+>
+ ping -qc 1 enterprise-meta-01
+ ping -qc 1 enterprise-meta-02
+ ping -qc 1 enterprise-meta-03
+ ping -qc 1 enterprise-data-01
+ ping -qc 1 enterprise-data-02
+
+If there are any connectivity issues resolve them before proceeding with the
+installation.
+A healthy cluster requires that every meta and data node can communicate
+with every other meta and data node.
+
+### Step 2: Set up, configure, and start the data services
+
+Perform the following steps on each data server.
+
+### 2.1 Download and install the data service
+
+#### Ubuntu & Debian (64-bit)
+
+```
+wget https://dl.influxdata.com/enterprise/releases/influxdb-data_1.5.4-c1.5.4_amd64.deb
+sudo dpkg -i influxdb-data_1.5.4-c1.5.4_amd64.deb
+```
+
+#### RedHat & CentOS (64-bit)
+
+```
+wget https://dl.influxdata.com/enterprise/releases/influxdb-data-1.5.4_c1.5.4.x86_64.rpm
+sudo yum localinstall influxdb-data-1.5.4_c1.5.4.x86_64.rpm
+```
+
+### 2.2 Edit the configuration file
+
+In the `/etc/influxdb/influxdb.conf` file, complete these steps:
+
+1. Uncomment `hostname` at the top of the file and set it to the full hostname of the data node
+2. Uncomment `auth-enabled` in the `[http]` section and set it to `true`
+3. Uncomment `shared-secret` in the `[http]` section and set it to a long pass phrase that will be used to sign tokens for intra-cluster communication. This value must be the same for all data nodes.
+4. Set `license-key` in the `[enterprise]` section to the license key you received on InfluxPortal **or** `license-path` in the `[enterprise]` section to the local path to the JSON license file you received from InfluxData.
+
+{{% warn %}}
+The `license-key` and `license-path` settings are mutually exclusive and one must remain set to the empty string.
+{{% /warn %}}
+
+```toml
+# Change this option to true to disable reporting.
+# reporting-disabled = false
+# bind-address = ":8088"
+hostname=""
+
+[enterprise]
+ # license-key and license-path are mutually exclusive, use only one and leave the other blank
+ license-key = "" # Mutually exclusive with license-path
+
+ # The path to a valid license file. license-key and license-path are mutually exclusive,
+ # use only one and leave the other blank.
+ license-path = "/path/to/readable/JSON.license.file" # Mutually exclusive with license-key
+
+[meta]
+ # Where the cluster metadata is stored
+ dir = "/var/lib/influxdb/meta" # data nodes do require a local meta directory
+
+[...]
+
+[http]
+ # Determines whether HTTP endpoint is enabled.
+ # enabled = true
+
+ # The bind address used by the HTTP service.
+ # bind-address = ":8086"
+
+ # Determines whether HTTP authentication is enabled.
+ auth-enabled = true # Recommended, but not required
+
+[...]
+
+ # The JWT auth shared secret to validate requests using JSON web tokens.
+ shared-secret = "long pass phrase used for signing tokens"
+```
+
+### 2.3 Start the data service
+
+On `sysvinit` systems, enter:
+
+```bash
+service influxdb start
+```
+
+On `systemd` systems, enter:
+
+```bash
+sudo systemctl start influxdb
+```
+
+#### Verify the data service is running
+
+Check to see that the service is running by entering:
+
+```
+ ps aux | grep -v grep | grep influxdb
+```
+
+You should see output similar to:
+
+```
+influxdb 2706 0.2 7.0 571008 35376 ? Sl 15:37 0:16 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
+```
+
+If you do not see the expected output, the process is either not launching or is exiting prematurely. Check the [logs](/enterprise_influxdb/v1.5/administration/logs/) for error messages and verify the previous setup steps are complete.
+
+If you see the expected output, repeat for the remaining data nodes.
+After all of your data nodes have been installed, configured, and launched, move on to the next section to join the data nodes to the cluster.
+
+## Join the data nodes to the cluster
+
+{{% warn %}}Join your data nodes to the cluster only when you are adding a brand new node,
+either during the initial creation of your cluster or when growing the number of data nodes.
+If you are replacing an existing data node using the `influxd-ctl update-data` command, skip the rest of this guide.
+{{% /warn %}}
+
+Run the following commands on one of the meta nodes that you set up during
+[meta node installation](/enterprise_influxdb/v1.5/production_installation/meta_node_installation/):
+
+```
+influxd-ctl add-data enterprise-data-01:8088
+
+influxd-ctl add-data enterprise-data-02:8088
+```
+
+The expected output is:
+```
+Added data node y at enterprise-data-0x:8088
+```
+
+Run the `add-data` command only once for each data node you are joining to the cluster.
+
+## Verify your data nodes installation
+
+Issue the following command on any meta node:
+
+```
+influxd-ctl show
+```
+The expected output is:
+
+```
+Data Nodes
+==========
+ID TCP Address Version
+4 enterprise-data-01:8088 1.5.4-c1.5.4
+5 enterprise-data-02:8088 1.5.4-c1.5.4
+
+
+Meta Nodes
+==========
+TCP Address Version
+enterprise-meta-01:8091 1.5.4-c1.5.4
+enterprise-meta-02:8091 1.5.4-c1.5.4
+enterprise-meta-03:8091 1.5.4-c1.5.4
+```
+
+The output should list every data node that was added to the cluster.
+The first data node added should have `ID=N`, where `N` is equal to one plus the number of meta nodes.
+In a standard three meta node cluster, the first data node should have `ID=4`.
+Subsequently added data nodes should have monotonically increasing IDs.
+If not, there may be artifacts of a previous cluster in the metastore.
+
+If you do not see your data nodes in the output, retry adding them
+to the cluster.
+
+Once your data nodes are part of your cluster, proceed to [installing Chronograf](/enterprise_influxdb/v1.5/production_installation/chrono_install).
diff --git a/content/enterprise_influxdb/v1.5/production_installation/meta_node_installation.md b/content/enterprise_influxdb/v1.5/production_installation/meta_node_installation.md
new file mode 100644
index 000000000..dd0d4c3e1
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/production_installation/meta_node_installation.md
@@ -0,0 +1,218 @@
+---
+title: Step 1 - Installing InfluxDB Enterprise meta nodes
+aliases:
+ - /enterprise/v1.5/production_installation/meta_node_installation/
+
+menu:
+ enterprise_influxdb_1_5:
+ weight: 10
+ parent: Production installation
+---
+
+InfluxDB Enterprise offers highly scalable clusters on your infrastructure
+and a management user interface ([using Chronograf](https://docs.influxdata.com/chronograf/latest)) for working with clusters.
+The Production Installation process is designed for users looking to
+deploy InfluxDB Enterprise in a production environment.
+The following steps will get you up and running with the first essential component of
+your InfluxDB Enterprise cluster: the meta nodes.
+
+> If you wish to evaluate InfluxDB Enterprise in a non-production
+environment, feel free to follow the instructions outlined in the
+[QuickStart installation](/enterprise_influxdb/v1.5/quickstart_installation) section.
+Please note that if you install InfluxDB Enterprise with the QuickStart Installation process you
+will need to reinstall InfluxDB Enterprise with the Production Installation
+process before using the product in a production environment.
+
+
+## Meta node setup description and requirements
+
+The Production Installation process sets up three [meta nodes](/enterprise_influxdb/v1.5/concepts/glossary/#meta-node), each running on a dedicated server.
+
+You **must** have a minimum of three meta nodes in a cluster.
+InfluxDB Enterprise clusters require at least three meta nodes, and an **odd number**
+of meta nodes, for high availability and redundancy.
+We do not recommend having more than three meta nodes unless your servers,
+or the communication between the servers, have chronic reliability issues.
+
+> **Note:** While there is no requirement for each meta node to run on its own server, deploying
+multiple meta nodes on the same server creates a larger point of potential failure if that node is unresponsive. InfluxData recommends deploying meta nodes on servers with relatively small footprints.
+
+See [Clustering in InfluxDB Enterprise](/enterprise_influxdb/v1.5/concepts/clustering#optimal-server-counts)
+for more on cluster architecture.
+
+## Other requirements
+
+### License key or file
+
+InfluxDB Enterprise requires a license key **OR** a license file to run.
+Your license key is available at [InfluxPortal](https://portal.influxdata.com/licenses).
+Contact support at the email we provided at signup to receive a license file.
+License files are required only if the nodes in your cluster cannot reach
+`portal.influxdata.com` on port `80` or `443`.
+
+### Ports
+
+Meta nodes communicate over ports `8088`, `8089`, and `8091`.
+
+For licensing purposes, meta nodes must also be able to reach `portal.influxdata.com`
+on port `80` or `443`.
+If the meta nodes cannot reach `portal.influxdata.com` on port `80` or `443`,
+you'll need to set the `license-path` setting instead of the `license-key`
+setting in the meta node configuration file.
+
+
+## Meta node setup
+
+### Step 1: Modify the `/etc/hosts` file
+
+Add your servers' hostnames and IP addresses to **each** cluster server's `/etc/hosts`
+file (the hostnames below are representative).
+
+```
+<Meta_1_IP_address> enterprise-meta-01
+<Meta_2_IP_address> enterprise-meta-02
+<Meta_3_IP_address> enterprise-meta-03
+```
+
+#### Verify the meta nodes are resolvable
+
+Before proceeding with the installation, verify on each server that the other
+servers are resolvable. Here is an example set of shell commands using `ping`:
+
+```
+ping -qc 1 enterprise-meta-01
+ping -qc 1 enterprise-meta-02
+ping -qc 1 enterprise-meta-03
+```
+
+If there are any connectivity issues resolve them before proceeding with the
+installation.
+A healthy cluster requires that every meta node can communicate with every other
+meta node.
+
+### Step 2: Set up, configure, and start the meta node services
+
+Complete the following steps for each meta node server.
+
+#### 2.1: Download and install the meta node services
+
+##### Ubuntu & Debian (64-bit)
+
+```
+wget https://dl.influxdata.com/enterprise/releases/influxdb-meta_1.5.4-c1.5.4_amd64.deb
+sudo dpkg -i influxdb-meta_1.5.4-c1.5.4_amd64.deb
+```
+
+##### RedHat and CentOS (64-bit)
+
+```
+wget https://dl.influxdata.com/enterprise/releases/influxdb-meta-1.5.4_c1.5.4.x86_64.rpm
+sudo yum localinstall influxdb-meta-1.5.4_c1.5.4.x86_64.rpm
+```
+
+#### 2.2: Edit the configuration file
+
+In `/etc/influxdb/influxdb-meta.conf`:
+
+* Uncomment and set `hostname` to the hostname of the meta node.
+* Set `license-key` in the `[enterprise]` section to the license key you received on InfluxPortal **OR** `license-path` in the `[enterprise]` section to the local path to the JSON license file you received from InfluxData.
+
+{{% warn %}}
+The `license-key` and `license-path` settings are mutually exclusive and one must remain set to the empty string.
+{{% /warn %}}
+
+```
+# Hostname advertised by this host for remote addresses. This must be resolvable by all
+# other nodes in the cluster
+hostname=""
+
+[enterprise]
+ # license-key and license-path are mutually exclusive, use only one and leave the other blank
+ license-key = "" # Mutually exclusive with license-path
+
+ # license-key and license-path are mutually exclusive, use only one and leave the other blank
+ license-path = "/path/to/readable/JSON.license.file" # Mutually exclusive with license-key
+```
+
+#### 2.3: Start the meta service
+
+On `sysvinit` systems, run:
+
+```
+service influxdb-meta start
+```
+
+On `systemd` systems, run:
+
+```
+sudo systemctl start influxdb-meta
+```
+
+##### Verify the meta node service started
+
+Check to see that the service is running by entering:
+
+```
+ps aux | grep -v grep | grep influxdb-meta
+```
+
+You should see output similar to:
+
+```
+influxdb 3207 0.8 4.4 483000 22168 ? Ssl 17:05 0:08 /usr/bin/influxd-meta -config /etc/influxdb/influxdb-meta.conf
+```
+
+> **Note:** A cluster with only one meta node is **not recommended** for
+production environments.
+You can start the cluster with a single meta node using the `-single-server` flag when starting the single meta node.
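+
+For example, a single meta node could be started directly with the flag (a sketch for testing only; in practice, add the flag to the service's startup options):
+
+```
+influxd-meta -config /etc/influxdb/influxdb-meta.conf -single-server
+```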
+
+### Join the meta nodes to the cluster
+
+From one meta node, and one meta node only, join all meta nodes, including the local one.
+In our example, run the following from `enterprise-meta-01`:
+
+```
+influxd-ctl add-meta enterprise-meta-01:8091
+
+influxd-ctl add-meta enterprise-meta-02:8091
+
+influxd-ctl add-meta enterprise-meta-03:8091
+```
+
+> **Note:** Specify the hostname of the meta node during the join process.
+Do not specify `localhost`, which can cause cluster connection issues.
+
+The expected output is:
+
+```
+Added meta node x at enterprise-meta-0x:8091
+```
+
+## Verify your meta nodes installation
+
+Issue the following command on any meta node:
+
+```
+influxd-ctl show
+```
+
+The expected output is:
+
+```
+Data Nodes
+==========
+ID TCP Address Version
+
+Meta Nodes
+==========
+TCP Address Version
+enterprise-meta-01:8091 1.5.4-c1.5.4
+enterprise-meta-02:8091 1.5.4-c1.5.4
+enterprise-meta-03:8091 1.5.4-c1.5.4
+```
+
+Your cluster must have at least three meta nodes.
+If you do not see your meta nodes in the output, retry adding them to
+the cluster.
+
+Once your meta nodes are part of your cluster, you can proceed to [installing your data nodes](/enterprise_influxdb/v1.5/production_installation/data_node_installation/).
diff --git a/content/enterprise_influxdb/v1.5/quickstart_installation/_index.md b/content/enterprise_influxdb/v1.5/quickstart_installation/_index.md
new file mode 100644
index 000000000..5ee91e0e3
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/quickstart_installation/_index.md
@@ -0,0 +1,27 @@
+---
+title: QuickStart installation
+aliases:
+ - /enterprise/v1.5/quickstart_installation/
+menu:
+ enterprise_influxdb_1_5:
+ weight: 30
+---
+
+The QuickStart installation process is designed for users looking to quickly
+get up and running with InfluxEnterprise and for users who are looking to
+evaluate the product.
+
+The QuickStart installation process **is not** designed for use
+in a production environment.
+Follow the instructions outlined in the [Production installation](/enterprise_influxdb/v1.5/production_installation/) section
+if you wish to use InfluxDB Enterprise in a production environment.
+Please note that if you install InfluxEnterprise with the QuickStart Installation process you
+will need to reinstall InfluxDB Enterprise with the Production Installation
+process before using the product in a production environment.
+
+## QuickStart installation
+
+Follow the links below to get up and running with InfluxEnterprise.
+
+### [Step 1 - Installing an InfluxDB Enterprise cluster](/enterprise_influxdb/v1.5/quickstart_installation/cluster_installation/)
+### [Step 2 - Installing Chronograf](/enterprise_influxdb/v1.5/quickstart_installation/chrono_install/)
diff --git a/content/enterprise_influxdb/v1.5/quickstart_installation/chrono_install.md b/content/enterprise_influxdb/v1.5/quickstart_installation/chrono_install.md
new file mode 100644
index 000000000..735411538
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/quickstart_installation/chrono_install.md
@@ -0,0 +1,14 @@
+---
+title: Step 2 - Installing Chronograf
+aliases:
+ - /enterprise/v1.5/production_installation/chrono_install/
+menu:
+ enterprise_influxdb_1_5:
+ weight: 20
+ parent: QuickStart installation
+---
+
+Now that you've installed the meta nodes and data nodes, you are ready to install Chronograf
+to provide you with a user interface to access the InfluxDB Enterprise instance.
+
+[Installing Chronograf](/chronograf/latest/introduction/installation/)
diff --git a/content/enterprise_influxdb/v1.5/quickstart_installation/cluster_installation.md b/content/enterprise_influxdb/v1.5/quickstart_installation/cluster_installation.md
new file mode 100644
index 000000000..489e500a9
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/quickstart_installation/cluster_installation.md
@@ -0,0 +1,364 @@
+---
+title: Step 1 - Installing an InfluxDB Enterprise cluster
+aliases:
+ - /enterprise/v1.5/quickstart_installation/cluster_installation/
+
+menu:
+ enterprise_influxdb_1_5:
+ weight: 10
+ parent: QuickStart installation
+---
+
+InfluxDB Enterprise offers highly scalable clusters on your infrastructure
+and a management UI for working with clusters.
+The QuickStart Installation process will get you up and running with your
+InfluxDB Enterprise cluster.
+
+> The QuickStart installation process **is not** designed for use
+in a production environment.
+Follow the instructions outlined in the [Production installation](/enterprise_influxdb/v1.5/production_installation/) section
+if you wish to use InfluxDB Enterprise in a production environment.
+Please note that if you install InfluxDB Enterprise with the QuickStart Installation process you
+will need to reinstall InfluxDB Enterprise with the Production Installation
+process before using the product in a production environment.
+
+## Setup description and requirements
+
+### Setup description
+
+The QuickStart installation process sets up an InfluxDB Enterprise cluster on three servers.
+Each server is a [meta node](/enterprise_influxdb/v1.5/concepts/glossary/#meta-node) and
+a [data node](/enterprise_influxdb/v1.5/concepts/glossary/#data-node), that is, each server
+runs both the [meta service](/enterprise_influxdb/v1.5/concepts/glossary/#meta-service)
+and the [data service](/enterprise_influxdb/v1.5/concepts/glossary/#data-service).
+
+### Requirements
+
+#### License key or file
+
+InfluxDB Enterprise requires a license key **OR** a license file to run.
+Your license key is available at [InfluxPortal](https://portal.influxdata.com/licenses).
+Contact support at the email we provided at signup to receive a license file.
+License files are required only if the nodes in your cluster cannot reach
+`portal.influxdata.com` on port `80` or `443`.
+
+#### Networking
+
+Meta nodes and data nodes communicate over ports `8088`,
+`8089`, and `8091`.
+
+For licensing purposes, meta nodes and data nodes must also be able to reach `portal.influxdata.com`
+on port `80` or `443`.
+If the nodes cannot reach `portal.influxdata.com` on port `80` or `443`,
+you'll need to set the `license-path` setting instead of the `license-key`
+setting in the meta node and data node configuration files.
+
+#### Load balancer
+
+InfluxEnterprise does not function as a load balancer.
+You will need to configure your own load balancer to send client traffic to the
+data nodes on port `8086` (the default port for the [HTTP API](/influxdb/v1.5/tools/api/)).
+
+## Step 1: Modify the `/etc/hosts` file in each of your servers
+
+Add your three servers' hostnames and IP addresses to **each** of your server's `/etc/hosts`
+file.
+
+The hostnames below are representative:
+
+```
+<Server_1_IP_Address> quickstart-cluster-01
+<Server_2_IP_Address> quickstart-cluster-02
+<Server_3_IP_Address> quickstart-cluster-03
+```
+
+> **Verification steps:**
+>
+Before proceeding with the installation, verify on each server that the other
+servers are resolvable. Here is an example set of shell commands using `ping` and the
+output for `quickstart-cluster-01`:
+>
+
+```
+ ping -qc 1 quickstart-cluster-01
+ ping -qc 1 quickstart-cluster-02
+ ping -qc 1 quickstart-cluster-03
+
+ > ping -qc 1 quickstart-cluster-01
+ PING quickstart-cluster-01 (Server_1_IP_Address) 56(84) bytes of data.
+
+ --- quickstart-cluster-01 ping statistics ---
+ 1 packets transmitted, 1 received, 0% packet loss, time 0ms
+ rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms
+```
+
+If there are any connectivity issues please resolve them before proceeding with the
+installation.
+A healthy cluster requires that every meta node and data node can communicate with every other
+meta node and data node.
+
+## Step 2: Set up the meta nodes
+
+Perform the following steps on all three servers.
+
+### I. Download and install the meta service
+
+
+#### Ubuntu & Debian (64-bit)
+
+```
+wget https://dl.influxdata.com/enterprise/releases/influxdb-meta_1.5.4-c1.5.4_amd64.deb
+sudo dpkg -i influxdb-meta_1.5.4-c1.5.4_amd64.deb
+```
+#### RedHat & CentOS (64-bit)
+
+```
+wget https://dl.influxdata.com/enterprise/releases/influxdb-meta-1.5.4_c1.5.4.x86_64.rpm
+sudo yum localinstall influxdb-meta-1.5.4_c1.5.4.x86_64.rpm
+```
+
+### II. Edit the meta service configuration file
+
+In `/etc/influxdb/influxdb-meta.conf`:
+
+* Uncomment and set `hostname` to the full hostname of the meta node.
+* Set `license-key` in the `[enterprise]` section to the license key you received on InfluxPortal **OR** `license-path` in the `[enterprise]` section to the local path to the JSON license file you received from InfluxData.
+
+{{% warn %}}
+The `license-key` and `license-path` settings are mutually exclusive and one must remain set to the empty string.
+{{% /warn %}}
+
+```
+# Hostname advertised by this host for remote addresses. This must be resolvable by all
+# other nodes in the cluster
+hostname=""
+
+[enterprise]
+ # license-key and license-path are mutually exclusive, use only one and leave the other blank
+ license-key = "" # Mutually exclusive with license-path
+
+ # license-key and license-path are mutually exclusive, use only one and leave the other blank
+ license-path = "/path/to/readable/JSON.license.file" # Mutually exclusive with license-key
+```
+
+> **Note:** The `hostname` in the configuration file must match the `hostname` in your server's `/etc/hosts` file.
+
+### III. Start the meta service
+
+On sysvinit systems, enter:
+```
+service influxdb-meta start
+```
+
+On systemd systems, enter:
+```
+sudo systemctl start influxdb-meta
+```
+
+> **Verification steps:**
+>
+Check to see that the process is running by entering:
+>
+ ps aux | grep -v grep | grep influxdb-meta
+>
+You should see output similar to:
+>
+ influxdb 3207 0.8 4.4 483000 22168 ? Ssl 17:05 0:08 /usr/bin/influxd-meta -config /etc/influxdb/influxdb-meta.conf
+
+## Step 3: Set up the data nodes
+
+Perform the following steps on all three servers.
+
+### I. Download and install the data service
+
+#### Ubuntu & Debian (64-bit)
+
+```
+wget https://dl.influxdata.com/enterprise/releases/influxdb-data_1.5.4-c1.5.4_amd64.deb
+sudo dpkg -i influxdb-data_1.5.4-c1.5.4_amd64.deb
+```
+#### RedHat & CentOS (64-bit)
+
+```
+wget https://dl.influxdata.com/enterprise/releases/influxdb-data-1.5.4_c1.5.4.x86_64.rpm
+sudo yum localinstall influxdb-data-1.5.4_c1.5.4.x86_64.rpm
+```
+
+### II. Edit the data service configuration file
+
+First, in `/etc/influxdb/influxdb.conf`, uncomment:
+
+* `hostname` at the top of the file and set it to the full hostname of the data node
+* `auth-enabled` in the `[http]` section and set it to `true`
+* `shared-secret` in the `[http]` section and set it to a long pass phrase that will be used to sign tokens for intra-cluster communication. This value needs to be consistent across all data nodes.
+
+> **Note:** When you enable authentication, InfluxDB only executes HTTP requests that are sent with valid credentials.
+See the [authentication section](/influxdb/latest/administration/authentication_and_authorization/#authentication) for more information.
+
+Second, in `/etc/influxdb/influxdb.conf`, set:
+
+`license-key` in the `[enterprise]` section to the license key you received on InfluxPortal **OR** `license-path` in the `[enterprise]` section to the local path to the JSON license file you received from InfluxData.
+
+{{% warn %}}
+The `license-key` and `license-path` settings are mutually exclusive and one must remain set to the empty string.
+{{% /warn %}}
+
+```toml
+# Change this option to true to disable reporting.
+# reporting-disabled = false
+# bind-address = ":8088"
+hostname=""
+
+[enterprise]
+ # license-key and license-path are mutually exclusive, use only one and leave the other blank
+ license-key = "" # Mutually exclusive with license-path
+
+ # The path to a valid license file. license-key and license-path are mutually exclusive,
+ # use only one and leave the other blank.
+ license-path = "/path/to/readable/JSON.license.file" # Mutually exclusive with license-key
+
+[meta]
+ # Where the cluster metadata is stored
+ dir = "/var/lib/influxdb/meta" # data nodes do require a local meta directory
+
+[...]
+
+[http]
+ # Determines whether HTTP endpoint is enabled.
+ # enabled = true
+
+ # The bind address used by the HTTP service.
+ # bind-address = ":8086"
+
+ # Determines whether HTTP authentication is enabled.
+ auth-enabled = true # Recommended, but not required
+
+[...]
+
+ # The JWT auth shared secret to validate requests using JSON web tokens.
+ shared-secret = "long pass phrase used for signing tokens"
+```
+> **Note:** The `hostname` in the configuration file must match the `hostname` in your server's `/etc/hosts` file.
+
+
+### III. Start the data service
+On sysvinit systems, enter:
+```
+service influxdb start
+```
+
+On systemd systems, enter:
+```
+sudo systemctl start influxdb
+```
+
+> **Verification steps:**
+>
+> Check to see that the process is running by entering:
+>
+>     ps aux | grep -v grep | grep influxdb
+>
+> You should see output similar to:
+>
+>     influxdb  2706  0.2  7.0 571008 35376 ?   Sl   15:37   0:16 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
+>
+> If you do not see the expected output, the process is either not launching or is exiting prematurely. Check the [logs](/enterprise_influxdb/v1.5/administration/logs/) for error messages and verify the previous setup steps are complete.
+
+## Step 4: Join the nodes to the cluster
+
+### I. Join the first server to the cluster
+On the first server (`quickstart-cluster-01`), join its meta node and data node
+to the cluster by entering:
+```
+influxd-ctl join
+```
+
+The expected output is:
+```
+Joining meta node at localhost:8091
+Searching for meta node on quickstart-cluster-01:8091...
+Searching for data node on quickstart-cluster-01:8088...
+
+Successfully created cluster
+
+ * Added meta node 1 at quickstart-cluster-01:8091
+ * Added data node 2 at quickstart-cluster-01:8088
+
+ To join additional nodes to this cluster, run the following command:
+
+ influxd-ctl join quickstart-cluster-01:8091
+```
+
+>**Note:** `influxd-ctl` takes the flag `-v` as an option to print verbose information about the join.
+The flag must be right after the influxd-ctl join command:
+`influxd-ctl join -v quickstart-cluster-01:8091`
+
+>To confirm that the node was successfully joined, run `influxd-ctl show` and verify that the node's hostname shows in the output.
+
+### II. Join the second server to the cluster
+On the second server (`quickstart-cluster-02`), join its meta node and data node
+to the cluster by entering:
+```
+influxd-ctl join quickstart-cluster-01:8091
+```
+
+The expected output is:
+```
+Joining meta node at quickstart-cluster-01:8091
+Searching for meta node on quickstart-cluster-02:8091...
+Searching for data node on quickstart-cluster-02:8088...
+
+Successfully joined cluster
+
+ * Added meta node 3 at quickstart-cluster-02:8091
+ * Added data node 4 at quickstart-cluster-02:8088
+```
+
+### III. Join the third server to the cluster
+On the third server (`quickstart-cluster-03`), join its meta node and data node
+to the cluster by entering:
+```
+influxd-ctl join quickstart-cluster-01:8091
+```
+
+The expected output is:
+```
+Joining meta node at quickstart-cluster-01:8091
+Searching for meta node on quickstart-cluster-03:8091...
+Searching for data node on quickstart-cluster-03:8088...
+
+Successfully joined cluster
+
+ * Added meta node 5 at quickstart-cluster-03:8091
+ * Added data node 6 at quickstart-cluster-03:8088
+```
+
+### IV. Verify your cluster
+On any server, enter:
+```
+influxd-ctl show
+```
+
+The expected output is:
+```
+Data Nodes
+==========
+ID TCP Address Version
+2 quickstart-cluster-01:8088 1.5.4-c1.5.4
+4 quickstart-cluster-02:8088 1.5.4-c1.5.4
+6 quickstart-cluster-03:8088 1.5.4-c1.5.4
+
+Meta Nodes
+==========
+TCP Address Version
+quickstart-cluster-01:8091 1.5.4-c1.5.4
+quickstart-cluster-02:8091 1.5.4-c1.5.4
+quickstart-cluster-03:8091 1.5.4-c1.5.4
+```
+
+Your InfluxDB Enterprise cluster should have three data nodes and three meta nodes.
+If you do not see your meta or data nodes in the output, please retry
+adding them to the cluster.
+
+Once all of your nodes are joined to the cluster, move on to the [next step](/enterprise_influxdb/v1.5/quickstart_installation/chrono_install)
+in the QuickStart installation to set up Chronograf.
diff --git a/content/enterprise_influxdb/v1.5/troubleshooting/_index.md b/content/enterprise_influxdb/v1.5/troubleshooting/_index.md
new file mode 100644
index 000000000..cf878cf32
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/troubleshooting/_index.md
@@ -0,0 +1,14 @@
+---
+title: Troubleshooting InfluxDB Enterprise
+
+aliases:
+ - /enterprise/v1.5/troubleshooting/
+menu:
+ enterprise_influxdb_1_5:
+ name: Troubleshooting
+ weight: 90
+---
+
+## [Frequently asked questions](/enterprise_influxdb/v1.5/troubleshooting/frequently_asked_questions/)
+
+## [Reporting issues](/enterprise_influxdb/v1.5/troubleshooting/reporting-issues/)
diff --git a/content/enterprise_influxdb/v1.5/troubleshooting/frequently_asked_questions.md b/content/enterprise_influxdb/v1.5/troubleshooting/frequently_asked_questions.md
new file mode 100644
index 000000000..9e5b5a1e2
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/troubleshooting/frequently_asked_questions.md
@@ -0,0 +1,99 @@
+---
+title: InfluxDB Enterprise frequently asked questions (FAQ)
+aliases:
+ - /enterprise_influxdb/v1.5/troubleshooting/frequently-asked-questions/
+ - /enterprise/v1.5/troubleshooting/frequently_asked_questions/
+ - /enterprise_influxdb/v1.5/introduction/meta_node_installation/
+menu:
+ enterprise_influxdb_1_5:
+ name: Frequently asked questions (FAQ)
+ weight: 10
+ parent: Troubleshooting
+---
+
+**Known issues**
+
+
+**Log errors**
+
+* [Why am I seeing a `503 Service Unavailable` error in my meta node logs?](#why-am-i-seeing-a-503-service-unavailable-error-in-my-meta-node-logs)
+* [Why am I seeing a `409` error in some of my data node logs?](#why-am-i-seeing-a-409-error-in-some-of-my-data-node-logs)
+* [Why am I seeing `hinted handoff queue not empty` errors in my data node logs?](#why-am-i-seeing-hinted-handoff-queue-not-empty-errors-in-my-data-node-logs)
+* [Why am I seeing `error writing count stats ...: partial write` errors in my data node logs?](#why-am-i-seeing-error-writing-count-stats-partial-write-errors-in-my-data-node-logs)
+* [Why am I seeing `queue is full` errors in my data node logs?](#why-am-i-seeing-queue-is-full-errors-in-my-data-node-logs)
+* [Why am I seeing `unable to determine if "hostname" is a meta node` when I try to add a meta node with `influxd-ctl join`?](#why-am-i-seeing-unable-to-determine-if-hostname-is-a-meta-node-when-i-try-to-add-a-meta-node-with-influxd-ctl-join)
+
+
+**Other**
+
+* [Where can I find InfluxDB Enterprise logs?](#where-can-i-find-influxdb-enterprise-logs)
+
+## Where can I find InfluxDB Enterprise logs?
+
+On systemd operating systems, service logs can be accessed using the `journalctl` command.
+
+Meta: `journalctl -u influxdb-meta`
+
+Data: `journalctl -u influxdb`
+
+Enterprise console: `journalctl -u influx-enterprise`
+
+The `journalctl` output can be redirected to print the logs to a text file. With systemd, log retention depends on the system's journald settings.
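+
+For example, to save the data node service log to a file when gathering information for a support ticket (the output file name is only an example):
+
+```
+journalctl -u influxdb > influxdb-data.log
+```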
+
+## Why am I seeing a `503 Service Unavailable` error in my meta node logs?
+
+This is the expected behavior if you haven't joined the meta node to the
+cluster.
+The `503` errors should stop showing up in the logs once you
+[join](/enterprise_influxdb/v1.5/production_installation/meta_node_installation/#join-the-meta-nodes-to-the-cluster)
+the meta node to the cluster.
+
+## Why am I seeing a `409` error in some of my data node logs?
+
+When you create a
+[Continuous Query (CQ)](/influxdb/v1.5/concepts/glossary/#continuous-query-cq)
+on your cluster every data node will ask for the CQ lease.
+Only one data node can accept the lease.
+That data node will have a `200` in its logs.
+All other data nodes will be denied the lease and have a `409` in their logs.
+This is the expected behavior.
+
+Log output for a data node that is denied the lease:
+```
+[meta-http] 2016/09/19 09:08:53 172.31.4.132 - - [19/Sep/2016:09:08:53 +0000] GET /lease?name=continuous_querier&node_id=5 HTTP/1.2 409 105 - InfluxDB Meta Client b00e4943-7e48-11e6-86a6-000000000000 380.542µs
+```
+Log output for the data node that accepts the lease:
+```
+[meta-http] 2016/09/19 09:08:54 172.31.12.27 - - [19/Sep/2016:09:08:54 +0000] GET /lease?name=continuous_querier&node_id=0 HTTP/1.2 200 105 - InfluxDB Meta Client b05a3861-7e48-11e6-86a7-000000000000 8.87547ms
+```
+
+## Why am I seeing `hinted handoff queue not empty` errors in my data node logs?
+
+```
+[write] 2016/10/18 10:35:21 write failed for shard 2382 on node 4: hinted handoff queue not empty
+```
+
+This error is informational only and does not necessarily indicate a problem in the cluster. It indicates that the node handling the write request currently has data in its local [hinted handoff](/enterprise_influxdb/v1.5/concepts/clustering/#hinted-handoff) queue for the destination node. Coordinating nodes will not attempt direct writes to other nodes until the hinted handoff queue for the destination node has fully drained. New data is instead appended to the hinted handoff queue. This helps data arrive in chronological order for consistency of graphs and alerts and also prevents unnecessary failed connection attempts between the data nodes. Until the hinted handoff queue is empty this message will continue to display in the logs. Monitor the size of the hinted handoff queues with `ls -lRh /var/lib/influxdb/hh` to ensure that they are decreasing in size.
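+
+One way to watch the queue sizes drain over time, assuming the default hinted handoff directory, is:
+
+```
+watch -n 10 'du -sh /var/lib/influxdb/hh/*'
+```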
+
+Note that for some [write consistency](/enterprise_influxdb/v1.5/concepts/clustering/#write-consistency) settings, InfluxDB may return a write error (500) for the write attempt, even if the points are successfully queued in hinted handoff. Some write clients may attempt to resend those points, leading to duplicate points being added to the hinted handoff queue and lengthening the time it takes for the queue to drain. If the queues are not draining, consider temporarily downgrading the write consistency setting, or pause retries on the write clients until the hinted handoff queues fully drain.
+
+## Why am I seeing `error writing count stats ...: partial write` errors in my data node logs?
+
+```
+[stats] 2016/10/18 10:35:21 error writing count stats for FOO_grafana: partial write
+```
+
+The `_internal` database collects per-node and cluster-wide information about the InfluxDB Enterprise cluster. The cluster metrics are replicated to other nodes using `consistency=all`. For a [write consistency](/enterprise_influxdb/v1.5/concepts/clustering/#write-consistency) of `all`, InfluxDB returns a write error (500) for the write attempt even if the points are successfully queued in hinted handoff. Thus, if there are points still in hinted handoff, the `_internal` writes will fail the consistency check and log the error, even though the data is in the durable hinted handoff queue and should eventually persist.
+
+
+## Why am I seeing `queue is full` errors in my data node logs?
+
+This error indicates that the coordinating node that received the write cannot add the incoming write to the hinted handoff queue for the destination node because it would exceed the maximum size of the queue. This error typically indicates a catastrophic condition for the cluster: one data node may have been offline or unable to accept writes for an extended duration.
+
+The controlling configuration settings are in the `[hinted-handoff]` section of the data node configuration file. `max-size` is the total size in bytes per hinted handoff queue. When `max-size` is exceeded, all new writes for that node are rejected until the queue drops below `max-size`. `max-age` is the maximum length of time a point will persist in the queue. Once this limit has been reached, points expire from the queue. The age is calculated from the write time of the point, not the timestamp of the point.
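+
+To check the current limits on a data node, one option is to print the relevant section of the configuration file (the path below is the default):
+
+```
+grep -A 12 "\[hinted-handoff\]" /etc/influxdb/influxdb.conf
+```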
+
+## Why am I seeing `unable to determine if "hostname" is a meta node` when I try to add a meta node with `influxd-ctl join`?
+
+Meta nodes use the `/status` endpoint to determine the current state of another meta node. A healthy meta node that is ready to join the cluster will respond with a `200` HTTP response code and a JSON string with the following format (assuming the default ports):
+
+`{"nodeType":"meta","leader":"","httpAddr":":8091","raftAddr":":8089","peers":null}`
+
+If you are getting an error message while attempting to `influxd-ctl join` a new meta node, it means that the JSON string returned from the `/status` endpoint is incorrect. This generally indicates that the meta node configuration file is incomplete or incorrect. Inspect the HTTP response with `curl -v "http://<hostname>:8091/status"` and make sure that the `hostname`, the `bind-address`, and the `http-bind-address` are correctly populated. Also check the `license-key` or `license-path` in the configuration file of the meta nodes. Finally, make sure that you specify the `http-bind-address` port in the join command, for example: `influxd-ctl join hostname:8091`.
diff --git a/content/enterprise_influxdb/v1.5/troubleshooting/reporting-issues.md b/content/enterprise_influxdb/v1.5/troubleshooting/reporting-issues.md
new file mode 100644
index 000000000..59a2e298b
--- /dev/null
+++ b/content/enterprise_influxdb/v1.5/troubleshooting/reporting-issues.md
@@ -0,0 +1,24 @@
+---
+title: Reporting issues with InfluxDB Enterprise
+aliases:
+ - /enterprise/v1.5/troubleshooting/reporting-issues/
+menu:
+ enterprise_influxdb_1_5:
+ name: Reporting issues
+ weight: 20
+ parent: Troubleshooting
+---
+
+If you have problems with the InfluxDB Enterprise product, please contact support
+using the email address provided to you at signup.
+This ensures your support tickets are routed directly to our private release
+support team.
+
+Please include the following in your email:
+
+* the version of InfluxDB Enterprise, e.g. 1.5.x-c1.5.x (and include your value for x)
+* the version of Telegraf or Kapacitor, if applicable
+* what you expected to happen
+* what did happen
+* query output, logs, screenshots, and any other helpful diagnostic information
+* the results of the `SHOW DIAGNOSTICS` and `SHOW STATS` queries (one way to capture them is shown below)
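+
+A minimal way to capture that output from the command line, assuming the `influx` CLI can reach a local data node (add `-username` and `-password` if authentication is enabled):
+
+```
+influx -execute 'SHOW DIAGNOSTICS' > diagnostics.txt
+influx -execute 'SHOW STATS' > stats.txt
+```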
diff --git a/content/enterprise_influxdb/v1.6/_index.md b/content/enterprise_influxdb/v1.6/_index.md
new file mode 100644
index 000000000..d9b0b1d1c
--- /dev/null
+++ b/content/enterprise_influxdb/v1.6/_index.md
@@ -0,0 +1,36 @@
+---
+title: InfluxDB Enterprise 1.6 documentation
+description: Technical documentation for InfluxDB Enterprise, which adds clustering, high availability, fine-grained authorization, and more to InfluxDB OSS. Documentation includes release notes, what's new, guides, concepts, features, and administration.
+aliases:
+ - /enterprise/v1.6/
+
+menu:
+ enterprise_influxdb:
+ name: v1.6
+ identifier: enterprise_influxdb_1_6
+ weight: 8
+---
+
+InfluxDB Enterprise offers highly scalable InfluxDB Enterprise clusters on your infrastructure
+with a management UI.
+Use InfluxDB Enterprise to:
+
+* Monitor your cluster
+
+ 
+
+* Manage queries
+
+ 
+
+* Manage users
+
+ 
+
+* Explore and visualize your data
+
+ 
+
+If you're interested in working with InfluxDB Enterprise, visit
+[InfluxPortal](https://portal.influxdata.com/) to sign up, get a license key,
+and get started!
diff --git a/content/enterprise_influxdb/v1.6/about-the-project/_index.md b/content/enterprise_influxdb/v1.6/about-the-project/_index.md
new file mode 100644
index 000000000..f928eab11
--- /dev/null
+++ b/content/enterprise_influxdb/v1.6/about-the-project/_index.md
@@ -0,0 +1,44 @@
+---
+title: About the project
+menu:
+ enterprise_influxdb_1_6:
+ weight: 10
+---
+
+## [Release notes/changelog](/enterprise_influxdb/v1.6/about-the-project/release-notes-changelog/)
+
+
+
+## [Commercial license](https://www.influxdata.com/legal/slsa/)
+
+InfluxDB Enterprise is available with a commercial license. [Contact sales for more information](https://www.influxdata.com/contact-sales/).
+
+## Third party software
+
+InfluxData products contain third party software, which means the copyrighted, patented, or otherwise legally protected
+software of third parties, that is incorporated in InfluxData products.
+
+Third party suppliers make no representation nor warranty with respect to such third party software or any portion thereof.
+Third party suppliers assume no liability for any claim that might arise with respect to such third party software, nor for a
+customer’s use of or inability to use the third party software.
+
+In addition to [third party software incorporated in InfluxDB](http://docs.influxdata.com/influxdb/v1.6/about_the_project/#third_party), InfluxDB Enterprise incorporates the following additional third party software:
+
+| Third Party / Open Source Software - Description | License Type |
+| ---------------------------------------- | ---------------------------------------- |
+| [Go language library for exporting performance and runtime metrics to external metrics systems (i.e., statsite, statsd)](https://github.com/armon/go-metrics) (armon/go-metrics) | [MIT](https://github.com/armon/go-metrics/blob/master/LICENSE) |
+| [Golang implementation of JavaScript Object](https://github.com/dvsekhvalnov/jose2go) (dvsekhvalnov/jose2go) | [MIT](https://github.com/dvsekhvalnov/jose2go/blob/master/LICENSE) |
+| [Collection of useful handlers for Go net/http package ](https://github.com/gorilla/handlers) (gorilla/handlers) | [BSD-2](https://github.com/gorilla/handlers/blob/master/LICENSE) |
+| [A powerful URL router and dispatcher for golang](https://github.com/gorilla/mux) (gorilla/mux) | [BSD-3](https://github.com/gorilla/mux/blob/master/LICENSE) |
+| [Golang connection multiplexing library](https://github.com/hashicorp/yamux/) (hashicorp/yamux) | [Mozilla 2.0](https://github.com/hashicorp/yamux/blob/master/LICENSE) |
+| [Codec - a high performance and feature-rich Idiomatic encode/decode and rpc library for msgpack and Binc](https://github.com/hashicorp/go-msgpack) (hashicorp/go-msgpack) | [BSD-3](https://github.com/hashicorp/go-msgpack/blob/master/LICENSE) |
+| [Go language implementation of the Raft consensus protocol](https://github.com/hashicorp/raft) (hashicorp/raft) | [Mozilla 2.0](https://github.com/hashicorp/raft/blob/master/LICENSE) |
+| [Raft backend implementation using BoltDB](https://github.com/hashicorp/raft-boltdb) (hashicorp/raft-boltdb) | [Mozilla 2.0](https://github.com/hashicorp/raft-boltdb/blob/master/LICENSE) |
+| [Pretty printing for Go values](https://github.com/kr/pretty) (kr/pretty) | [MIT](https://github.com/kr/pretty/blob/master/License) |
+| [Miscellaneous functions for formatting text](https://github.com/kr/text) (kr/text) | [MIT](https://github.com/kr/text/blob/main/License) |
+| [Some helpful packages for writing Go apps](https://github.com/markbates/going) (markbates/going) | [MIT](https://github.com/markbates/going/blob/master/LICENSE.txt) |
+| [Basic LDAP v3 functionality for the Go programming language](https://github.com/mark-rushakoff/ldapserver) (mark-rushakoff/ldapserver) | [BSD-3-Clause](https://github.com/markbates/going/blob/master/LICENSE) |
+| [Basic LDAP v3 functionality for the Go programming language](https://github.com/go-ldap/ldap) (go-ldap/ldap) | [MIT](https://github.com/go-ldap/ldap/blob/master/LICENSE) |
+| [ASN1 BER Encoding / Decoding Library for the Go programming language](https://github.com/go-asn1-ber/asn1-ber) (go-asn1-ber/asn1-ber) | [MIT](https://github.com/go-asn1-ber/asn1-ber/blob/master/LICENSE) |
+
+***Thanks to the open source community for your contributions!***
diff --git a/content/enterprise_influxdb/v1.6/about-the-project/release-notes-changelog.md b/content/enterprise_influxdb/v1.6/about-the-project/release-notes-changelog.md
new file mode 100644
index 000000000..960c4076b
--- /dev/null
+++ b/content/enterprise_influxdb/v1.6/about-the-project/release-notes-changelog.md
@@ -0,0 +1,629 @@
+---
+title: InfluxDB Enterprise 1.6 release notes
+
+menu:
+ enterprise_influxdb_1_6:
+ name: Release notes
+ weight: 10
+ parent: About the project
+---
+
+## v1.6.6 [2019-02-28]
+
+This release only includes the OSS InfluxDB 1.6.6 changes (no Enterprise-specific changes).
+
+## v1.6.5 [2019-01-10]
+
+This release builds off of the InfluxDB OSS 1.6.5 release. Please see the [InfluxDB OSS release notes](/influxdb/v1.6/about_the_project/releasenotes-changelog/) for more information about the InfluxDB OSS release.
+
+## v1.6.4 [2018-10-23]
+
+This release builds off of the InfluxDB OSS 1.6.0 through 1.6.4 releases. For details about changes incorporated from InfluxDB OSS releases, see the [InfluxDB OSS release notes](/influxdb/v1.6/about_the_project/releasenotes-changelog/).
+
+### Breaking changes
+
+#### Require `internal-shared-secret` if meta auth enabled
+
+If `[meta] auth-enabled` is set to `true`, the `[meta] internal-shared-secret` value must be set in the configuration.
+If it is not set, an error will be logged and `influxd-meta` will not start.
+* Previously, authentication could be enabled without setting an `internal-shared-secret`. The security risk was that an unset (empty) value could be used for the `internal-shared-secret`, seriously weakening the JWT authentication used for intra-node communication.
+
+#### Review production installation configurations
+
+The [Production Installation](/enterprise_influxdb/v1.6/install-and-deploy/production_installation/)
+documentation has been updated to fix errors in configuration settings, including changing `shared-secret` to `internal-shared-secret` and adding missing steps for configuration settings of data nodes and meta nodes. All Enterprise users should review their current configurations to ensure that the configuration settings properly enable JWT authentication for intra-node communication.
+
+The following summarizes the expected settings for proper configuration of JWT authentication for intra-node communication:
+
+##### Data node configuration files (`influxdb.conf`)
+
+**[http] section**
+
+* `auth-enabled = true`
+ - Enables authentication. Default value is false.
+
+**[meta] section**
+
+* `meta-auth-enabled = true`
+  - Must match the meta nodes' `[meta] auth-enabled` setting.
+* `meta-internal-shared-secret = ""`
+  - Must be the same pass phrase as the meta nodes' `[meta] internal-shared-secret` setting.
+  - Used by the internal API for JWT authentication. Default value is `""`.
+  - A long pass phrase is recommended for stronger security.
+
+##### Meta node configuration files (`meta-influxdb.conf`)
+
+**[meta] section**
+
+* `auth-enabled = true`
+  - Enables authentication. Default value is `false`.
+* `internal-shared-secret = ""`
+  - Must be the same pass phrase as the data nodes' `[meta] meta-internal-shared-secret` setting.
+  - Used by the internal API for JWT authentication. Default value is `""`.
+  - A long pass phrase is recommended for stronger security.
+
+>**Note:** To provide encrypted intra-node communication, you must enable HTTPS. Although the JWT signature is encrypted, the payload of a JWT token is only encoded, not encrypted.
+
+### Bug fixes
+
+* Only map shards that are reported ready.
+* Fix data race when shards are deleted and created concurrently.
+* Reject `influxd-ctl update-data` from one existing host to another.
+* Require `internal-shared-secret` if meta auth enabled.
+
+## v1.6.2 [2018-08-27]
+
+This release builds off of the InfluxDB OSS 1.6.0 through 1.6.2 releases. For details about changes incorporated from InfluxDB OSS releases, see the [InfluxDB OSS release notes](/influxdb/v1.6/about_the_project/releasenotes-changelog/).
+
+### Features
+
+* Update Go runtime to `1.10`.
+* Provide configurable TLS security options.
+* Add LDAP functionality for authorization and authentication.
+* Anti-Entropy (AE): add ability to repair shards.
+* Anti-Entropy (AE): improve swagger doc for `/status` endpoint.
+* Include the query task status in the show queries output.
+
+#### Bug fixes
+
+* TSM files not closed when shard is deleted.
+* Ensure shards are not queued to copy if a remote node is unavailable.
+* Ensure the hinted handoff (hh) queue makes forward progress when segment errors occur.
+* Add hinted handoff (hh) queue back pressure.
+
+## v1.5.5 [2018-12-19]
+
+This release builds off of the InfluxDB OSS 1.5.5 release. Please see the [InfluxDB OSS release notes](/influxdb/v1.5/about_the_project/releasenotes-changelog/) for more information about the InfluxDB OSS release.
+
+## v1.5.4 [2018-06-21]
+
+This release builds off of the InfluxDB OSS 1.5.4 release. Please see the [InfluxDB OSS release notes](/influxdb/v1.5/about_the_project/releasenotes-changelog/) for more information about the InfluxDB OSS release.
+
+## v1.5.3 [2018-05-25]
+
+This release builds off of the InfluxDB OSS 1.5.3 release. Please see the [InfluxDB OSS release notes](/influxdb/v1.5/about_the_project/releasenotes-changelog/) for more information about the InfluxDB OSS release.
+
+### Features
+
+* Include the query task status in the show queries output.
+* Add hh writeBlocked counter.
+
+### Bug fixes
+
+* Hinted-handoff: enforce max queue size per peer node.
+* TSM files not closed when shard deleted.
+
+
+## v1.5.2 [2018-04-12]
+
+This release builds off of the InfluxDB OSS 1.5.2 release. Please see the [InfluxDB OSS release notes](/influxdb/v1.5/about_the_project/releasenotes-changelog/) for more information about the InfluxDB OSS release.
+
+### Bug fixes
+
+* Running backup snapshot with client's retryWithBackoff function.
+* Ensure that conditions are encoded correctly even if the AST is not properly formed.
+
+## v1.5.1 [2018-03-20]
+
+This release builds off of the InfluxDB OSS 1.5.1 release. There are no Enterprise-specific changes.
+Please see the [InfluxDB OSS release notes](/influxdb/v1.6/about_the_project/releasenotes-changelog/) for more information about the InfluxDB OSS release.
+
+## v1.5.0 [2018-03-06]
+
+> ***Note:*** This release builds off of the 1.5 release of InfluxDB OSS. Please see the [InfluxDB OSS release
+> notes](https://docs.influxdata.com/influxdb/v1.6/about_the_project/releasenotes-changelog/) for more information about the InfluxDB OSS release.
+
+For highlights of the InfluxDB 1.5 release, see [What's new in InfluxDB 1.5](/influxdb/v1.5/about_the_project/whats_new/).
+
+### Breaking changes
+
+The default logging format has been changed. See [Logging and tracing in InfluxDB](/influxdb/v1.6/administration/logs/) for details.
+
+### Features
+
+* Add `LastModified` fields to shard RPC calls.
+* As of OSS 1.5 backup/restore interoperability is confirmed.
+* Make InfluxDB Enterprise use OSS digests.
+* Move digest to its own package.
+* Implement distributed cardinality estimation.
+* Add logging configuration to the configuration files.
+* Add AE `/repair` endpoint and update Swagger doc.
+* Update logging calls to take advantage of structured logging.
+* Use actual URL when logging anonymous stats start.
+* Fix auth failures on backup/restore.
+* Add support for passive nodes
+* Implement explain plan for remote nodes.
+* Add message pack format for query responses.
+* Teach show tag values to respect FGA
+* Address deadlock in meta server on 1.3.6
+* Add time support to `SHOW TAG VALUES`
+* Add distributed `SHOW TAG KEYS` with time support
+
+### Bug fixes
+
+* Fix errors occurring when policy or shard keys are missing from the manifest when limited is set to true.
+* Fix spurious `rpc error: i/o deadline exceeded` errors.
+* Elide `stream closed` error from logs and handle `io.EOF` as remote iterator interrupt.
+* Discard remote iterators that label their type as unknown.
+* Do not queue partial write errors to hinted handoff.
+* Segfault in `digest.merge`
+* Meta Node CPU pegged on idle cluster.
+* Data race on `(meta.UserInfo).acl`
+* Fix wildcard when one shard has no data for a measurement with partial replication.
+* Add `X-Influxdb-Build` to http response headers so users can identify if a response is from an InfluxDB OSS or InfluxDB Enterprise service.
+* Ensure that permissions cannot be set on non-existent databases.
+* Switch back to using `cluster-tracing` config option to enable meta HTTP request logging.
+* `influxd-ctl restore -newdb` can't restore data.
+* Close connection for remote iterators after EOF to avoid writer hanging indefinitely.
+* Data race reading `Len()` in connection pool.
+* Use InfluxData fork of `yamux`. This update reduces overall memory usage when streaming large amounts of data.
+* Fix group by marshaling in the IteratorOptions.
+* Meta service data race.
+* Read for the interrupt signal from the stream before creating the iterators.
+* Show retention policies requires the `createdatabase` permission
+* Handle UTF files with a byte order mark when reading the configuration files.
+* Remove the pidfile after the server has exited.
+* Resend authentication credentials on redirect.
+* Updated yamux resolves race condition when SYN is successfully sent and a write timeout occurs.
+* Fix no license message.
+
+## v1.3.9 [2018-01-19]
+
+### Upgrading -- for users of the TSI preview
+
+If you have been using the TSI preview with 1.3.6 or earlier 1.3.x releases, you will need to follow the upgrade steps to continue using the TSI preview. Unfortunately, these steps cannot be executed while the cluster is operating --
+so it will require downtime.
+
+### Bugfixes
+
+* Elide `stream closed` error from logs and handle `io.EOF` as remote iterator interrupt.
+* Fix spurious `rpc error: i/o deadline exceeded` errors
+* Discard remote iterators that label their type as unknown.
+* Do not queue `partial write` errors to hinted handoff.
+
+## v1.3.8 [2017-12-04]
+
+### Upgrading -- for users of the TSI preview
+
+If you have been using the TSI preview with 1.3.6 or earlier 1.3.x releases, you will need to follow the upgrade steps to continue using the TSI preview. Unfortunately, these steps cannot be executed while the cluster is operating -- so it will require downtime.
+
+### Bugfixes
+
+- Updated `yamux` resolves race condition when SYN is successfully sent and a write timeout occurs.
+- Resend authentication credentials on redirect.
+- Fix wildcard when one shard has no data for a measurement with partial replication.
+- Fix spurious `rpc error: i/o deadline exceeded` errors.
+
+## v1.3.7 [2017-10-26]
+
+### Upgrading -- for users of the TSI preview
+
+The 1.3.7 release resolves a defect that created duplicate tag values in TSI indexes. See Issues
+[#8995](https://github.com/influxdata/influxdb/pull/8995) and [#8998](https://github.com/influxdata/influxdb/pull/8998).
+However, upgrading to 1.3.7 can cause compactions to fail; see [Issue #9025](https://github.com/influxdata/influxdb/issues/9025).
+We will provide a utility that will allow TSI indexes to be rebuilt,
+resolving the corruption possible in releases prior to 1.3.7. If you are using the TSI preview,
+**you should not upgrade to 1.3.7 until this utility is available**.
+We will update this release note with operational steps once the utility is available.
+
+#### Bugfixes
+
+ - Read for the interrupt signal from the stream before creating the iterators.
+ - Address deadlock issue in meta server on 1.3.6.
+ - Fix logger panic associated with anti-entropy service and manually removed shards.
+
+## v1.3.6 [2017-09-28]
+
+### Bugfixes
+
+- Fix "group by" marshaling in the IteratorOptions.
+- Address meta service data race condition.
+- Fix race condition when writing points to remote nodes.
+- Use InfluxData fork of yamux. This update reduces overall memory usage when streaming large amounts of data.
+ Contributed back to the yamux project via: https://github.com/hashicorp/yamux/pull/50
+- Address data race reading Len() in connection pool.
+
+## v1.3.5 [2017-08-29]
+
+This release builds off of the 1.3.5 release of OSS InfluxDB.
+Please see the OSS [release notes](/influxdb/v1.3/about_the_project/releasenotes-changelog/#v1-3-5-2017-08-29) for more information about the OSS releases.
+
+## v1.3.4 [2017-08-23]
+
+This release builds off of the 1.3.4 release of OSS InfluxDB. Please see the [OSS release notes](https://docs.influxdata.com/influxdb/v1.3/about_the_project/releasenotes-changelog/) for more information about the OSS releases.
+
+### Bugfixes
+
+- Close connection for remote iterators after EOF to avoid writer hanging indefinitely
+
+## v1.3.3 [2017-08-10]
+
+This release builds off of the 1.3.3 release of OSS InfluxDB. Please see the [OSS release notes](https://docs.influxdata.com/influxdb/v1.3/about_the_project/releasenotes-changelog/) for more information about the OSS releases.
+
+### Bugfixes
+
+- Connections are not closed when `CreateRemoteIterator` RPC returns no iterators, resolved memory leak
+
+## v1.3.2 [2017-08-04]
+
+### Bug fixes
+
+- `influxd-ctl restore -newdb` unable to restore data.
+- Improve performance of `SHOW TAG VALUES`.
+- Show a subset of config settings in `SHOW DIAGNOSTICS`.
+- Switch back to using cluster-tracing config option to enable meta HTTP request logging.
+- Fix remove-data error.
+
+## v1.3.1 [2017-07-20]
+
+#### Bug fixes
+
+- Show a subset of config settings in SHOW DIAGNOSTICS.
+- Switch back to using cluster-tracing config option to enable meta HTTP request logging.
+- Fix remove-data error.
+
+## v1.3.0 [2017-06-21]
+
+### Configuration Changes
+
+#### `[cluster]` Section
+
+* `max-remote-write-connections` is deprecated and can be removed.
+* NEW: `pool-max-idle-streams` and `pool-max-idle-time` configure the RPC connection pool.
+ See `config.sample.toml` for descriptions of these new options.
+
+### Removals
+
+The admin UI is removed and unusable in this release. The `[admin]` configuration section will be ignored.
+
+#### Features
+
+- Allow non-admin users to execute SHOW DATABASES
+- Add default config path search for influxd-meta.
+- Reduce cost of admin user check for clusters with large numbers of users.
+- Store HH segments by node and shard
+- Remove references to the admin console.
+- Refactor RPC connection pool to multiplex multiple streams over single connection.
+- Report RPC connection pool statistics.
+
+#### Bug fixes
+
+- Fix security escalation bug in subscription management.
+- Certain permissions should not be allowed at the database context.
+- Make the time in `influxd-ctl`'s `copy-shard-status` argument human readable.
+- Fix `influxd-ctl remove-data -force`.
+- Ensure replaced data node correctly joins meta cluster.
+- Delay metadata restriction on restore.
+- Writing points outside of retention policy does not return error
+- Decrement internal database's replication factor when a node is removed.
+
+## v1.2.5 [2017-05-16]
+
+This release builds off of the 1.2.4 release of OSS InfluxDB.
+Please see the OSS [release notes](/influxdb/v1.3/about_the_project/releasenotes-changelog/#v1-2-4-2017-05-08) for more information about the OSS releases.
+
+#### Bug fixes
+
+- Fix issue where the [`ALTER RETENTION POLICY` query](/influxdb/v1.3/query_language/database_management/#modify-retention-policies-with-alter-retention-policy) does not update the default retention policy.
+- Hinted-handoff: remote write errors containing `partial write` are considered droppable.
+- Fix the broken `influxd-ctl remove-data -force` command.
+- Fix security escalation bug in subscription management.
+- Prevent certain user permissions from having a database-specific scope.
+- Reduce the cost of the admin user check for clusters with large numbers of users.
+- Fix hinted-handoff remote write batching.
+
+## v1.2.2 [2017-03-15]
+
+This release builds off of the 1.2.1 release of OSS InfluxDB.
+Please see the OSS [release notes](https://github.com/influxdata/influxdb/blob/1.2/CHANGELOG.md#v121-2017-03-08) for more information about the OSS release.
+
+### Configuration Changes
+
+The following configuration settings may need to be changed before [upgrading](/enterprise_influxdb/v1.3/administration/upgrading/) to 1.2.2 from prior versions.
+
+#### shard-writer-timeout
+
+We've removed the data node's `shard-writer-timeout` configuration option from the `[cluster]` section.
+As of version 1.2.2, the system sets `shard-writer-timeout` internally.
+The configuration option can be removed from the [data node configuration file](/enterprise_influxdb/v1.3/administration/configuration/#data-node-configuration).
+
+#### retention-autocreate
+
+In versions 1.2.0 and 1.2.1, the `retention-autocreate` setting appears in both the meta node and data node configuration files.
+To disable retention policy auto-creation, users on version 1.2.0 and 1.2.1 must set `retention-autocreate` to `false` in both the meta node and data node configuration files.
+
+In version 1.2.2, we’ve removed the `retention-autocreate` setting from the data node configuration file.
+As of version 1.2.2, users may remove `retention-autocreate` from the data node configuration file.
+To disable retention policy auto-creation, set `retention-autocreate` to `false` in the meta node configuration file only.
+
+This change only affects users who have disabled the `retention-autocreate` option and have installed version 1.2.0 or 1.2.1.
+
+#### Bug fixes
+
+##### Backup and Restore
+
+
+- Prevent the `shard not found` error by making [backups](/enterprise_influxdb/v1.3/guides/backup-and-restore/#backup) skip empty shards
+- Prevent the `shard not found` error by making [restore](/enterprise_influxdb/v1.3/guides/backup-and-restore/#restore) handle empty shards
+- Ensure that restores from an incremental backup correctly handle file paths
+- Allow incremental backups with restrictions (for example, those using the `-db` or `-rp` flags) to be stored in the same directory
+- Support restores on meta nodes that are not the raft leader
+
+##### Hinted handoff
+
+
+- Fix issue where dropped writes were not recorded when the [hinted handoff](/enterprise_influxdb/v1.3/concepts/clustering/#hinted-handoff) queue reached the maximum size
+- Prevent the hinted handoff from becoming blocked if it encounters field type errors
+
+##### Other
+
+
+- Return partial results for the [`SHOW TAG VALUES` query](/influxdb/v1.3/query_language/schema_exploration/#show-tag-values) even if the cluster includes an unreachable data node
+- Return partial results for the [`SHOW MEASUREMENTS` query](/influxdb/v1.3/query_language/schema_exploration/#show-measurements) even if the cluster includes an unreachable data node
+- Prevent a panic when the system fails to process points
+- Ensure that cluster hostnames can be case insensitive
+- Update the `retryCAS` code to wait for a newer snapshot before retrying
+- Serialize access to the meta client and meta store to prevent raft log buildup
+- Remove sysvinit package dependency for RPM packages
+- Make the default retention policy creation an atomic process instead of a two-step process
+- Prevent `influxd-ctl`'s [`join` argument](/enterprise_influxdb/v1.3/features/cluster-commands/#join) from completing a join when the command also specifies the help flag (`-h`)
+- Fix the `influxd-ctl`'s [force removal](/enterprise_influxdb/v1.3/features/cluster-commands/#remove-meta) of meta nodes
+- Update the meta node and data node sample configuration files
+
+## v1.2.1 [2017-01-25]
+
+#### Cluster-specific Bugfixes
+
+- Fix panic: slice bounds out of range.
+  Fixes how the system removes expired shards.
+- Remove misplaced newlines from cluster logs.
+
+## v1.2.0 [2017-01-24]
+
+This release builds off of the 1.2.0 release of OSS InfluxDB.
+Please see the OSS [release notes](https://github.com/influxdata/influxdb/blob/1.2/CHANGELOG.md#v120-2017-01-24) for more information about the OSS release.
+
+### Upgrading
+
+* The `retention-autocreate` configuration option has moved from the meta node configuration file to the [data node configuration file](/enterprise_influxdb/v1.3/administration/configuration/#retention-autocreate-true).
+To disable the auto-creation of retention policies, set `retention-autocreate` to `false` in your data node configuration files.
+* The previously deprecated `influxd-ctl force-leave` command has been removed. The replacement command to remove a meta node which is never coming back online is [`influxd-ctl remove-meta -force`](/enterprise_influxdb/v1.3/features/cluster-commands/).
+
+#### Cluster-specific Features
+
+- Improve the meta store: any meta store changes are done via a compare and swap
+- Add support for [incremental backups](/enterprise_influxdb/v1.3/guides/backup-and-restore/)
+- Automatically remove any deleted shard groups from the data store
+- Uncomment the section headers in the default [configuration file](/enterprise_influxdb/v1.3/administration/configuration/)
+- Add InfluxQL support for [subqueries](/influxdb/v1.3/query_language/data_exploration/#subqueries)
+
+#### Cluster-specific Bugfixes
+
+- Update dependencies with Godeps
+- Fix a data race in meta client
+- Ensure that the system removes the relevant [user permissions and roles](/enterprise_influxdb/v1.3/features/users/) when a database is dropped
+- Fix a couple typos in demo [configuration file](/enterprise_influxdb/v1.3/administration/configuration/)
+- Make optional the version protobuf field for the meta store
+- Remove the override of GOMAXPROCS
+- Remove an unused configuration option (`dir`) from the backend
+- Fix a panic around processing remote writes
+- Return an error if a remote write has a field conflict
+- Drop points in the hinted handoff that (1) have field conflict errors (2) have [`max-values-per-tag`](/influxdb/v1.3/administration/config/#max-values-per-tag-100000) errors
+- Remove the deprecated `influxd-ctl force-leave` command
+- Fix issue where CQs would stop running if the first meta node in the cluster stops
+- Fix logging in the meta httpd handler service
+- Fix issue where subscriptions send duplicate data for [Continuous Query](/influxdb/v1.3/query_language/continuous_queries/) results
+- Fix the output for `influxd-ctl show-shards`
+- Send the correct RPC response for `ExecuteStatementRequestMessage`
+
+## v1.1.5 [2017-04-28]
+
+### Bug fixes
+
+- Prevent certain user permissions from having a database-specific scope.
+- Fix security escalation bug in subscription management.
+
+## v1.1.3 [2017-02-27]
+
+This release incorporates the changes in the 1.1.4 release of OSS InfluxDB.
+Please see the OSS [changelog](https://github.com/influxdata/influxdb/blob/v1.1.4/CHANGELOG.md) for more information about the OSS release.
+
+### Bug fixes
+
+- Delay when a node listens for network connections until after all requisite services are running. This prevents queries to the cluster from failing unnecessarily.
+- Allow users to set the `GOMAXPROCS` environment variable.
+
+## v1.1.2 [internal]
+
+This release was an internal release only.
+It incorporates the changes in the 1.1.3 release of OSS InfluxDB.
+Please see the OSS [changelog](https://github.com/influxdata/influxdb/blob/v1.1.3/CHANGELOG.md) for more information about the OSS release.
+
+## v1.1.1 [2016-12-06]
+
+This release builds off of the 1.1.1 release of OSS InfluxDB.
+Please see the OSS [release notes](https://github.com/influxdata/influxdb/blob/1.1/CHANGELOG.md#v111-2016-12-06) for more information about the OSS release.
+
+This release is built with Go (golang) 1.7.4.
+It resolves a security vulnerability reported in Go (golang) version 1.7.3 which impacts all
+users currently running on the macOS platform, powered by the Darwin operating system.
+
+#### Cluster-specific bug fixes
+
+- Fix hinted-handoff issue: record size larger than max size.
+  If a Hinted Handoff write appended a block that was larger than the maximum file size, the queue would get stuck because the maximum size was not updated. When reading the block back out during processing, the system would return an error because the block size was larger than the file size -- which indicates a corrupted block.
+
+## v1.1.0 [2016-11-14]
+
+This release builds off of the 1.1.0 release of InfluxDB OSS.
+Please see the OSS [release notes](https://github.com/influxdata/influxdb/blob/1.1/CHANGELOG.md#v110-2016-11-14) for more information about the OSS release.
+
+### Upgrading
+
+* The 1.1.0 release of OSS InfluxDB has some important [configuration changes](https://github.com/influxdata/influxdb/blob/1.1/CHANGELOG.md#configuration-changes) that may affect existing clusters.
+* The `influxd-ctl join` command has been renamed to `influxd-ctl add-meta`. If you have existing scripts that use `influxd-ctl join`, they will need to use `influxd-ctl add-meta` or be updated to use the new cluster setup command.
+
+#### Cluster setup
+
+The `influxd-ctl join` command has been changed to simplify cluster setups. To join a node to a cluster, you can run `influxd-ctl join <meta-node-address>`, and we will attempt to detect and add any meta or data node process running on the hosts automatically. The previous `join` command exists as `add-meta` now. If it's the first node of a cluster, the meta address argument is optional.
+
+#### Logging
+
+Switches to journald logging for on systemd systems. Logs are no longer sent to `/var/log/influxdb` on systemd systems.
+
+#### Cluster-specific features
+
+- Add a configuration option for setting gossiping frequency on data nodes
+- Allow for detailed insight into the Hinted Handoff queue size by adding `queueBytes` to the hh\_processor statistics
+- Add authentication to the meta service API
+- Update Go (golang) dependencies: Fix Go Vet and update circle Go Vet command
+- Simplify the process for joining nodes to a cluster
+- Include the node's version number in the `influxd-ctl show` output
+- Return an error if there are additional arguments after `influxd-ctl show`.
+  Fixes any confusion between the correct command for showing detailed shard information (`influxd-ctl show-shards`) and the incorrect command (`influxd-ctl show shards`)
+
+#### Cluster-specific bug fixes
+
+- Return an error if getting latest snapshot takes longer than 30 seconds
+- Remove any expired shards from the `/show-shards` output
+- Respect the [`pprof-enabled` configuration setting](/enterprise_influxdb/v1.3/administration/configuration/#pprof-enabled-true) and enable it by default on meta nodes
+- Respect the [`pprof-enabled` configuration setting](/enterprise_influxdb/v1.3/administration/configuration/#pprof-enabled-true-1) on data nodes
+- Use the data reference instead of `Clone()` during read-only operations for performance purposes
+- Prevent the system from double-collecting cluster statistics
+- Ensure that the meta API redirects to the cluster leader when it gets the `ErrNotLeader` error
+- Don't overwrite cluster users with existing OSS InfluxDB users when migrating an OSS instance into a cluster
+- Fix a data race in the raft store
+- Allow large segment files (> 10MB) in the Hinted Handoff
+- Prevent `copy-shard` from retrying if the `copy-shard` command was killed
+- Prevent a hanging `influxd-ctl add-data` command by making data nodes check for meta nodes before they join a cluster
+
+## v1.0.4 [2016-10-19]
+
+#### Cluster-specific bug fixes
+
+- Respect the [Hinted Handoff settings](/enterprise_influxdb/v1.3/administration/configuration/#hinted-handoff) in the configuration file
+- Fix expanding regular expressions when all shards do not exist on node that's handling the request
+
+## v1.0.3 [2016-10-07]
+
+#### Cluster-specific bug fixes
+
+- Fix a panic in the Hinted Handoff: `lastModified`
+
+## v1.0.2 [2016-10-06]
+
+This release builds off of the 1.0.2 release of OSS InfluxDB. Please see the OSS [release notes](https://github.com/influxdata/influxdb/blob/1.0/CHANGELOG.md#v102-2016-10-05) for more information about the OSS release.
+
+#### Cluster-specific bug fixes
+
+- Prevent double read-lock in the meta client
+- Fix a panic around a corrupt block in Hinted Handoff
+- Fix issue where `systemctl enable` would throw an error if the symlink already exists
+
+## v1.0.1 [2016-09-28]
+
+This release builds off of the 1.0.1 release of OSS InfluxDB.
+Please see the OSS [release notes](https://github.com/influxdata/influxdb/blob/1.0/CHANGELOG.md#v101-2016-09-26)
+for more information about the OSS release.
+
+#### Cluster-specific bug fixes
+
+* Balance shards correctly with a restore
+* Fix a panic in the Hinted Handoff: `runtime error: invalid memory address or nil pointer dereference`
+* Ensure meta node redirects to leader when removing data node
+* Fix a panic in the Hinted Handoff: `runtime error: makeslice: len out of range`
+* Update the data node configuration file so that only the minimum configuration options are uncommented
+
+## v1.0.0 [2016-09-07]
+
+This release builds off of the 1.0.0 release of OSS InfluxDB.
+Please see the OSS [release notes](https://github.com/influxdata/influxdb/blob/1.0/CHANGELOG.md#v100-2016-09-07) for more information about the OSS release.
+
+Breaking Changes:
+
+* The keywords `IF`, `EXISTS`, and `NOT` were removed for this release. This means you no longer need to specify `IF NOT EXISTS` for `DROP DATABASE` or `IF EXISTS` for `CREATE DATABASE`. Using these keywords will return a query error.
+* `max-series-per-database` was added with a default of 1M but can be disabled by setting it to `0`. Existing databases with series that exceed this limit will continue to load, but writes that would create new series will fail.
+
+### Hinted handoff
+
+A number of changes to hinted handoff are included in this release:
+
+* Truncating only the corrupt block in a corrupted segment to minimize data loss
+* Immediately queue writes in hinted handoff if there are still writes pending to prevent inconsistencies in shards
+* Remove hinted handoff queues when data nodes are removed to eliminate manual cleanup tasks
+
+### Performance
+
+* `SHOW MEASUREMENTS` and `SHOW TAG VALUES` have been optimized to work better for multiple nodes and shards
+* `DROP` and `DELETE` statements run in parallel and more efficiently and should not leave the system in an inconsistent state
+
+### Security
+
+The Cluster API used by `influxd-ctl` can now be protected with SSL certs.
+
+### Cluster management
+
+Data nodes that can no longer be restarted can now be forcefully removed from the cluster using `influxd-ctl remove-data -force <data-node-TCP-bind-address>`. This should only be run if a graceful removal is not possible.
+
+Backup and restore has been updated to fix issues and refine existing capabilities.
+
+#### Cluster-specific features
+
+- Add the Users method to control client
+- Add a `-force` option to the `influxd-ctl remove-data` command
+- Disable the logging of `stats` service queries
+- Optimize the `SHOW MEASUREMENTS` and `SHOW TAG VALUES` queries
+- Update the Go (golang) package library dependencies
+- Minimize the amount of data-loss in a corrupted Hinted Handoff file by truncating only the last corrupted segment instead of the entire file
+- Log a write error when the Hinted Handoff queue is full for a node
+- Remove Hinted Handoff queues on data nodes when the target data nodes are removed from the cluster
+- Add unit testing around restore in the meta store
+- Add full TLS support to the cluster API, including the use of self-signed certificates
+- Improve backup/restore to allow for partial restores to a different cluster or to a database with a different database name
+- Update the shard group creation logic to be balanced
+- Keep raft log to a minimum to prevent replaying large raft logs on startup
+
+#### Cluster-specific bug fixes
+
+- Remove bad connections from the meta executor connection pool
+- Fix a panic in the meta store
+- Fix a panic caused when a shard group is not found
+- Fix a corrupted Hinted Handoff
+- Ensure that any imported OSS admin users have all privileges in the cluster
+- Ensure that `max-select-series` is respected
+- Handle the `peer already known` error
+- Fix Hinted handoff panic around segment size check
+- Drop Hinted Handoff writes if they contain field type inconsistencies
+
+
+# Web Console
+
+## DEPRECATED: Enterprise Web Console
+
+The Enterprise Web Console has officially been deprecated and will be eliminated entirely by the end of 2017.
+No additional features will be added and no additional bug fix releases are planned.
+
+For browser-based access to InfluxDB Enterprise, [Chronograf](/chronograf/latest/introduction) is now the recommended tool to use.
diff --git a/content/enterprise_influxdb/v1.6/administration/_index.md b/content/enterprise_influxdb/v1.6/administration/_index.md
new file mode 100644
index 000000000..976592834
--- /dev/null
+++ b/content/enterprise_influxdb/v1.6/administration/_index.md
@@ -0,0 +1,48 @@
+---
+title: Administering InfluxDB Enterprise
+description: Covers InfluxDB Enterprise administration, including backup and restore, configuration, logs, security, and upgrading.
+menu:
+ enterprise_influxdb_1_6:
+ name: Administration
+ weight: 70
+---
+
+## [Configuring InfluxDB Enterprise](/enterprise_influxdb/v1.6/administration/configuration/)
+
+[Configuring InfluxDB Enterprise](/enterprise_influxdb/v1.6/administration/configuration/) covers the InfluxDB Enterprise configuration settings, including global options, meta node options, and data node options.
+
+## [Data node configurations](/enterprise_influxdb/v1.6/administration/config-data-nodes/)
+
+The [Data node configurations](/enterprise_influxdb/v1.6/administration/config-data-nodes/) includes listings and descriptions of all data node configurations.
+
+## [Meta node configurations](/enterprise_influxdb/v1.6/administration/config-meta-nodes/)
+
+The [Meta node configurations](/enterprise_influxdb/v1.6/administration/config-meta-nodes/) includes listings and descriptions of all meta node configurations.
+
+## [Authentication and authorization](/influxdb/v1.6/administration/authentication_and_authorization/)
+
+See [Authentication and authorization](/influxdb/v1.6/administration/authentication_and_authorization/) in the InfluxDB OSS documentation for details on
+
+* how to
+[set up authentication](/influxdb/v1.6/administration/authentication_and_authorization/#set-up-authentication)
+
+* how to
+[authenticate requests](/influxdb/v1.6/administration/authentication_and_authorization/#authenticate-requests) in InfluxDB.
+
+* descriptions of the different
+[user types](/influxdb/v1.6/administration/authentication_and_authorization/#user-types-and-privileges) and the InfluxQL for
+[managing database users](/influxdb/v1.6/administration/authentication_and_authorization/#user-management-commands).
+
+## [Configuring LDAP authentication](/enterprise_influxdb/v1.6/administration/ldap/)
+
+## [Upgrading InfluxDB Enterprise clusters](/enterprise_influxdb/v1.6/administration/upgrading/)
+
+## [Backing up and restoring in InfluxDB Enterprise](/enterprise_influxdb/v1.6/administration/backup-and-restore/)
+
+## [Logging and tracing in InfluxDB Enterprise](/enterprise_influxdb/v1.6/administration/logs/)
+
+[Logging and tracing in InfluxDB Enterprise](/enterprise_influxdb/v1.6/administration/logs/) covers logging locations, redirecting HTTP request logging, structured logging, and tracing.
+
+## [Host renaming in InfluxDB Enterprise](/enterprise_influxdb/v1.6/administration/renaming/)
+
+## [Managing security in InfluxDB Enterprise](/enterprise_influxdb/v1.6/administration/security/)
diff --git a/content/enterprise_influxdb/v1.6/administration/anti-entropy.md b/content/enterprise_influxdb/v1.6/administration/anti-entropy.md
new file mode 100644
index 000000000..3c705044b
--- /dev/null
+++ b/content/enterprise_influxdb/v1.6/administration/anti-entropy.md
@@ -0,0 +1,336 @@
+---
+title: Anti-entropy service in InfluxDB Enterprise
+aliases:
+ - /enterprise_influxdb/v1.6/guides/anti-entropy/
+menu:
+ enterprise_influxdb_1_6:
+ name: Anti-entropy service
+ weight: 40
+ parent: Administration
+---
+
+## Introduction
+
+Shard entropy refers to inconsistency among shards in a shard group.
+This can be due to the "eventually consistent" nature of data stored in InfluxDB
+Enterprise clusters or due to missing or unreachable shards.
+The anti-entropy (AE) service ensures that each data node has all the shards it
+owns according to the metastore and that all shards in a shard group are consistent.
+Missing shards are automatically repaired without operator intervention while
+out-of-sync shards can be manually queued for repair.
+This guide covers how AE works and some of the basic situations where it takes effect.
+
+## Concepts
+
+The anti-entropy service is part of the `influxd` process running on each data node
+that ensures the node has all the shards the metastore says it owns and
+that those shards are in sync with others in the same shard group.
+If any shards are missing, the AE service will copy existing shards from other shard owners.
+If data inconsistencies are detected among shards in a shard group, you can
+[invoke the AE process](#command-line-tools-for-managing-entropy) and queue the
+out-of-sync shards for repair.
+In the repair process, AE will sync the necessary updates from other shards
+in the same shard group.
+
+By default, the service checks every 5 minutes, as configured in the [`anti-entropy.check-interval`](/enterprise_influxdb/v1.6/administration/config-data-nodes/#check-interval-5m) setting.
+
+The anti-entropy service can only address missing or inconsistent shards when
+there is at least one copy of the shard available.
+In other words, as long as new and healthy nodes are introduced, a replication
+factor of 2 can recover from one missing or inconsistent node;
+a replication factor of 3 can recover from two missing or inconsistent nodes, and so on.
+Shards with a replication factor of 1 cannot be repaired by the anti-entropy service.
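+
+The replication factor is a property of the [retention policy](/influxdb/v1.6/concepts/glossary/#retention-policy-rp) that owns the shards.
+As a quick sketch (the database and policy names below are only examples), the following InfluxQL creates a retention policy whose shards have two copies and can therefore be repaired by the AE service:
+
+```
+CREATE RETENTION POLICY "two_copies" ON "telegraf" DURATION 4w REPLICATION 2
+```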
+
+## Symptoms of entropy
+The AE process automatically detects and fixes missing shards, but shard inconsistencies
+must be [manually detected and queued for repair](#detecting-and-repairing-entropy).
+There are symptoms of entropy that, if seen, would indicate an entropy repair is necessary.
+
+### Different results for the same query
+When running queries against an InfluxDB Enterprise cluster, each query may be routed to a different data node.
+If entropy affects data within the queried range, the same query will return different
+results depending on which node it is run against.
+
+_**Query attempt 1**_
+```
+SELECT mean("usage_idle") FROM "cpu" WHERE time > '2018-06-06T18:00:00Z' AND time < '2018-06-06T18:15:00Z' GROUP BY time(3m) FILL(0)
+
+name: cpu
+time mean
+---- ----
+1528308000000000000 99.11867392974537
+1528308180000000000 99.15410822137049
+1528308360000000000 99.14927494363032
+1528308540000000000 99.1980535465783
+1528308720000000000 99.18584290492262
+```
+
+_**Query attempt 2**_
+```
+SELECT mean("usage_idle") FROM "cpu" WHERE time > '2018-06-06T18:00:00Z' AND time < '2018-06-06T18:15:00Z' GROUP BY time(3m) FILL(0)
+
+name: cpu
+time mean
+---- ----
+1528308000000000000 99.11867392974537
+1528308180000000000 0
+1528308360000000000 0
+1528308540000000000 0
+1528308720000000000 99.18584290492262
+```
+
+This indicates that data is missing in the queried time range and entropy is present.
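+
+One way to confirm this is to run the identical query directly against each data node and compare the results.
+The following is a minimal sketch using the [`influx` CLI](/influxdb/v1.6/tools/shell/); the hostnames `data-node-01` and `data-node-02` and the `telegraf` database are placeholders for your own cluster:
+
+```bash
+# Run the same query against two different data nodes and compare the output.
+influx -host data-node-01 -database telegraf -execute \
+  "SELECT mean(usage_idle) FROM cpu WHERE time > '2018-06-06T18:00:00Z' AND time < '2018-06-06T18:15:00Z' GROUP BY time(3m) FILL(0)"
+
+influx -host data-node-02 -database telegraf -execute \
+  "SELECT mean(usage_idle) FROM cpu WHERE time > '2018-06-06T18:00:00Z' AND time < '2018-06-06T18:15:00Z' GROUP BY time(3m) FILL(0)"
+```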
+
+### Flapping dashboards
+A "flapping" dashboard is one whose visualizations change each time the data is refreshed,
+because each refresh may pull from a different node, including a node with entropy (inconsistent data).
+It is the visual manifestation of getting [different results from the same query](#different-results-for-the-same-query).
+
+
+
+## Technical details
+
+### Detecting entropy
+The AE service runs on each data node and periodically checks its shards' statuses
+relative to the next data node in the ownership list.
+It does this by creating a "digest" or summary of data in the shards on the node.
+
+As an example, assume there are two data nodes in your cluster: `node1` and `node2`.
+Both `node1` and `node2` own `shard1` so `shard1` is replicated across each.
+
+When a status check runs, `node1` will ask `node2` when `shard1` was last modified.
+If the reported modification time is different than it was in the previous check,
+`node1` will ask `node2` for a new digest of `shard1`.
+`node1` then checks for differences (performs a "diff") between `node2`'s `shard1` digest and its own local digest for `shard1`.
+If there's a difference, `shard1` is flagged as having entropy.
+
+### Repairing entropy
+If during a status check a node determines the next node is completely missing a shard,
+it immediately adds the missing shard to the repair queue.
+A background routine monitors the queue and begins the repair process as new shards are added to it.
+Repair requests are pulled from the queue by the background process and repaired using a `copy shard` operation.
+
+> Currently, shards that are present on both nodes but contain different data are not automatically queued for repair.
+> A user must make the request with the `influxd-ctl entropy repair <shardID>` command.
+> For more information, see [Detecting and repairing entropy](#detecting-and-repairing-entropy) below.
+
+Using `node1` and `node2` from the example [above](#detecting-entropy) – `node1` asks `node2` for a digest of `shard1`.
+`node1` diffs its own local `shard1` digest and `node2`'s `shard1` digest,
+then creates a new digest containing only the differences (the diff digest).
+The diff digest is used to create a patch containing only the data `node2` is missing.
+`node1` sends the patch to `node2` and instructs it to apply it.
+Once `node2` finishes applying the patch, it queues a repair for `shard1` locally.
+
+The "node-to-node" shard repair continues until it runs on every data node that owns the shard in need of repair.
+
+### Repair order
+Repairs between shard owners happen in a deterministic order.
+This doesn't mean repairs always start on node 1 and then follow a specific node order.
+Repairs are viewed at the shard level.
+Each shard has a list of owners and the repairs for a particular shard will happen
+in a deterministic order among its owners.
+
+When the AE service on any data node receives a repair request for a shard, it determines which
+owner node is the first in the deterministic order and forwards the request to that node.
+The request is now queued on the first owner.
+
+The first owner's repair processor pulls the request from the queue, detects the differences
+between its local copy of the shard and the copy of the same shard on the next
+owner in the deterministic order, then generates a patch from that difference.
+The first owner then makes an RPC call to the next owner instructing it to apply
+the patch to its copy of the shard.
+
+Once the next owner has successfully applied the patch, it adds that shard to its AE repair queue.
+A list of "visited" nodes follows the repair through the list of owners.
+Each owner will check the list to detect when the repair has cycled through all owners,
+at which point the repair is finished.
+
+### Hot shards
+The AE service does its best to avoid hot shards (shards that are currently receiving writes)
+because they change quickly.
+While write replication between shard owner nodes (with a
+[replication factor](/influxdb/v1.6/concepts/glossary/#replication-factor)
+greater than 1) typically happens in milliseconds, this slight difference is
+still enough to cause the appearance of entropy where there is none.
+AE is designed and intended to repair cold shards.
+
+This can sometimes have some unexpected effects. For example:
+
+* A shard goes cold.
+* AE detects entropy.
+* Entropy is reported by the AE `/status` API or by the `influxd-ctl entropy show` command.
+* Shard takes a write, gets compacted, or something else causes it to go hot.
+ _These actions are out of AE's control._
+* A repair is requested, but ignored because the shard is now hot.
+
+In the scenario above, you will have to periodically request a repair of the shard
+until it shows as queued, shows as being repaired, or no longer appears in the list of shards with entropy.
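+
+A minimal sketch of such a retry is shown below; the shard ID `21179` is hypothetical and would come from the `influxd-ctl entropy show` output:
+
+```bash
+# Re-queue the repair (queuing is idempotent, so repeating the request is safe),
+# then check whether the shard is now queued or still only listed with entropy.
+influxd-ctl entropy repair 21179
+influxd-ctl entropy show
+```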
+
+## Configuration
+
+Anti-entropy configuration options are available in the [`[anti-entropy]`](/enterprise_influxdb/v1.6/administration/config-data-nodes#anti-entropy) section of your `influxdb.conf`.
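+
+For reference, a minimal sketch of that section is shown below; only the settings mentioned in this guide are included, so consult the linked configuration reference for the full list of options and their defaults:
+
+```toml
+[anti-entropy]
+  # Enables the anti-entropy service on the data node.
+  enabled = true
+  # How often each data node checks its shards; 5m is the default described above.
+  check-interval = "5m"
+```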
+
+## Command line tools for managing entropy
+The `influxd-ctl entropy` command enables you to manage entropy among shards in a cluster.
+It includes the following subcommands:
+
+#### `show`
+Lists shards that are in an inconsistent state and in need of repair as well as
+shards currently in the repair queue.
+
+```bash
+influxd-ctl entropy show
+```
+
+#### `repair`
+Queues a shard for repair.
+It requires a shard ID, which is provided in the [`show`](#show) output.
+
+```bash
+influxd-ctl entropy repair <shardID>
+```
+
+Repairing entropy in a shard is an asynchronous operation.
+This command will return quickly as it only adds a shard to the repair queue.
+Queuing shards for repair is idempotent.
+There is no harm in making multiple requests to repair the same shard even if
+it is already queued, currently being repaired, or not in need of repair.
+
+#### `kill-repair`
+Removes a shard from the repair queue.
+It requires a shard ID, which is provided in the [`show`](#show) output.
+
+```bash
+influxd-ctl entropy kill-repair <shardID>
+```
+
+This only applies to shards in the repair queue.
+It does not cancel repairs on nodes that are in the process of being repaired.
+Once a repair has started, requests to cancel it are ignored.
+
+> Stopping an entropy repair operation for a **missing** shard is not currently supported.
+> It may be possible to stop repairs for missing shards with the
+> [`influxd-ctl kill-copy-shard`](/enterprise_influxdb/v1.6/administration/cluster-commands/#kill-copy-shard) command.
+
+
+## Scenarios
+
+This section covers some of the common use cases for the anti-entropy service.
+
+### Detecting and repairing entropy
+Periodically, you may want to see if shards in your cluster have entropy or are
+inconsistent with other shards in the shard group.
+Use the `influxd-ctl entropy show` command to list all shards with detected entropy:
+
+```
+influxd-ctl entropy show
+
+Entropy
+==========
+ID Database Retention Policy Start End Expires Status
+21179 statsdb 1hour 2017-10-09 00:00:00 +0000 UTC 2017-10-16 00:00:00 +0000 UTC 2018-10-22 00:00:00 +0000 UTC diff
+25165 statsdb 1hour 2017-11-20 00:00:00 +0000 UTC 2017-11-27 00:00:00 +0000 UTC 2018-12-03 00:00:00 +0000 UTC diff
+```
+
+Then use the `influxd-ctl entropy repair` command to add the shards with entropy
+to the repair queue:
+
+```
+influxd-ctl entropy repair 21179
+
+Repair Shard 21179 queued
+
+influxd-ctl entropy repair 25165
+
+Repair Shard 25165 queued
+```
+
+Check on the status of the repair queue with the `influxd-ctl entropy show` command:
+
+```
+influxd-ctl entropy show
+
+Entropy
+==========
+ID Database Retention Policy Start End Expires Status
+21179 statsdb 1hour 2017-10-09 00:00:00 +0000 UTC 2017-10-16 00:00:00 +0000 UTC 2018-10-22 00:00:00 +0000 UTC diff
+25165 statsdb 1hour 2017-11-20 00:00:00 +0000 UTC 2017-11-27 00:00:00 +0000 UTC 2018-12-03 00:00:00 +0000 UTC diff
+
+Queued Shards: [21179 25165]
+```
+
+### Replacing an unresponsive data node
+
+If a data node suddenly disappears due to a catastrophic hardware failure or for any other reason, as soon as a new data node is online, the anti-entropy service will copy the correct shards to the new replacement node. The time it takes for the copying to complete is determined by the number of shards to be copied and how much data is stored in each.
+
+*View the [Replacing Data Nodes](/enterprise_influxdb/v1.6/guides/replacing-nodes/#replacing-data-nodes-in-an-influxdb-enterprise-cluster) documentation for instructions on replacing data nodes in your InfluxDB Enterprise cluster.*
+
+### Replacing a machine that is running a data node
+
+You may need to replace a machine that is being decommissioned, upgrade hardware, or swap out a data node for some other reason.
+The anti-entropy service will automatically copy shards to the new machine.
+
+Once you have successfully run the `influxd-ctl update-data` command, you are free
+to shut down the retired node without causing any interruption to the cluster.
+The anti-entropy process will continue copying the appropriate shards from the
+remaining replicas in the cluster.
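+
+As a sketch, `update-data` takes the retired data node's TCP bind address followed by the new node's address; both hostnames below are placeholders:
+
+```bash
+# Point the cluster at the replacement machine (example addresses).
+influxd-ctl update-data retired-data-node:8088 new-data-node:8088
+```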
+
+### Fixing entropy in active shards
+In rare cases, the currently active shard, or the shard to which new data is
+currently being written, may contain inconsistent data.
+Because the AE process can't repair hot shards, you must first stop writes to the active
+shard using the [`influxd-ctl truncate-shards` command](/enterprise_influxdb/v1.6/administration/cluster-commands/#truncate-shards),
+then add the inconsistent shard to the entropy repair queue:
+
+```bash
+# Truncate hot shards
+influxd-ctl truncate-shards
+
+# Show shards with entropy
+influxd-ctl entropy show
+
+Entropy
+==========
+ID Database Retention Policy Start End Expires Status
+21179 statsdb 1hour 2018-06-06 12:00:00 +0000 UTC 2018-06-06 23:44:12 +0000 UTC 2018-12-06 00:00:00 +0000 UTC diff
+
+# Add the inconsistent shard to the repair queue
+influxd-ctl entropy repair 21179
+```
+
+## Troubleshooting
+
+### Queued repairs are not being processed
+The primary reason a repair in the repair queue isn't being processed is that
+the shard went "hot" after the repair was queued.
+AE can only repair cold shards, or shards that are not currently being written to.
+If the shard is hot, AE will wait until it goes cold again before performing the repair.
+
+If the shard is "old" and writes to it are part of a backfill process, you simply
+have to wait until the backfill process is finished. If the shard is the active
+shard, you can run `influxd-ctl truncate-shards` to stop writes to active shards. This process is
+outlined [above](#fixing-entropy-in-active-shards).
+
+### AE log messages
+Below are common messages output by AE along with what they mean.
+
+#### `Checking status`
+Indicates that the AE process has begun the [status check process](#detecting-entropy).
+
+#### `Skipped shards`
+Indicates that the AE process has skipped a status check on shards because they are currently [hot](#hot-shards).
+
+
+## Changes to the AE Service in v1.6
+
+- New `entropy` command in the `influxd-ctl` cluster management utility that
+ includes `show`, `repair`, and `kill-repair` subcommands.
+- New `/repair` API _(Documentation Coming)_.
+- New `/cancel-repair` API _(Documentation Coming)_.
+- Updated `/status` API that now includes a list of shards waiting in the repair
+ queue and a list of shards currently being repaired.
+- New [repair order](#repair-order).
+- Repairs are now "push" instead of "pull".
+ In v1.5, repairs of missing shards were done with a "pull" of the shard from another node.
+ The AE service would notice a shard missing and choose another owner to copy from.
+ In 1.6, it happens in the deterministic order described [above](#repair-order).
diff --git a/content/enterprise_influxdb/v1.6/administration/backup-and-restore.md b/content/enterprise_influxdb/v1.6/administration/backup-and-restore.md
new file mode 100644
index 000000000..f54babfa3
--- /dev/null
+++ b/content/enterprise_influxdb/v1.6/administration/backup-and-restore.md
@@ -0,0 +1,338 @@
+---
+title: Backing up and restoring in InfluxDB Enterprise
+aliases:
+ - /enterprise/v1.6/guides/backup-and-restore/
+menu:
+ enterprise_influxdb_1_6:
+ name: Backing up and restoring
+ weight: 30
+ parent: Administration
+---
+
+## Overview
+
+The primary use cases for backup and restore are:
+
+* Disaster recovery
+* Debugging
+* Restoring clusters to a consistent state
+
+InfluxDB Enterprise supports backing up and restoring data in a cluster, a single database, a single database and retention policy, and
+a single [shard](/influxdb/v1.6/concepts/glossary/#shard).
+
+> **Note:** You can use the [new `backup` and `restore` utilities in InfluxDB OSS 1.5](/influxdb/v1.5/administration/backup_and_restore/) to:
+> * Restore InfluxDB Enterprise 1.5 backup files to InfluxDB OSS 1.5.
+> * Back up InfluxDB OSS 1.5 data that can be restored in InfluxDB Enterprise 1.5.
+
+## Backup
+
+A backup creates a copy of the [metastore](/influxdb/v1.6/concepts/glossary/#metastore) and [shard](/influxdb/v1.6/concepts/glossary/#shard) data at that point in time and stores the copy in the specified directory.
+All backups also include a manifest, a JSON file describing what was collected during the backup.
+The filenames reflect the UTC timestamp of when the backup was created, for example:
+
+* Metastore backup: `20060102T150405Z.meta`
+* Shard data backup: `20060102T150405Z.<shard_ID>.tar.gz`
+* Manifest: `20060102T150405Z.manifest`
+
+Backups can be full (using the `-full` flag) or incremental, and they are incremental by default.
+Incremental backups create a copy of the metastore and shard data that have changed since the last incremental backup.
+If there are no existing incremental backups, the system automatically performs a complete backup.
+
+Restoring a `-full` backup and restoring an incremental backup require different syntax.
+To prevent issues with [restore](#restore), keep `-full` backups and incremental backups in separate directories.
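+
+For example, you might keep each backup type in its own directory (the paths below are only examples):
+
+```
+mkdir -p ./backups/incremental ./backups/full
+
+# Incremental backups go in one directory...
+influxd-ctl backup ./backups/incremental
+
+# ...and full backups in another.
+influxd-ctl backup -full ./backups/full
+```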
+
+### Syntax
+
+```
+influxd-ctl [global-options] backup [backup-options] <path-to-backup-directory>
+```
+
+#### Global options:
+
+Please see the [influxd-ctl documentation](/enterprise_influxdb/v1.6/administration/cluster-commands/#global-options)
+for a complete list of the global `influxd-ctl` options.
+
+#### Backup options:
+
+* `-db <db_name>`: the name of the single database to back up
+* `-from <data-node-TCP-address>`: the data node TCP address to prefer when backing up
+* `-full`: perform a full backup
+* `-rp <rp_name>`: the name of the single retention policy to back up (must specify `-db` with `-rp`)
+* `-shard <shard_ID>`: the ID of the single shard to back up
+
+#### Examples
+
+Store the following incremental backups in different directories.
+The first backup specifies `-db myfirstdb` and the second backup specifies
+different options: `-db myfirstdb` and `-rp autogen`.
+```
+influxd-ctl backup -db myfirstdb ./myfirstdb-allrp-backup
+
+influxd-ctl backup -db myfirstdb -rp autogen ./myfirstdb-autogen-backup
+```
+
+Store the following incremental backups in the same directory.
+Both backups specify the same `-db` flag and the same database.
+```
+influxd-ctl backup -db myfirstdb ./myfirstdb-allrp-backup
+
+influxd-ctl backup -db myfirstdb ./myfirstdb-allrp-backup
+```
+
+### Examples
+
+#### Example 1: Perform an incremental backup
+
+Perform an incremental backup into the current directory with the command below.
+If there are any existing backups in the current directory, the system performs an incremental backup.
+If there aren't any existing backups in the current directory, the system performs a backup of all data in InfluxDB.
+```
+influxd-ctl backup .
+```
+
+Output:
+```
+$ influxd-ctl backup .
+Backing up meta data... Done. 421 bytes transferred
+Backing up node 7ba671c7644b:8088, db telegraf, rp autogen, shard 4... Done. Backed up in 903.539567ms, 307712 bytes transferred
+Backing up node bf5a5f73bad8:8088, db _internal, rp monitor, shard 1... Done. Backed up in 138.694402ms, 53760 bytes transferred
+Backing up node 9bf0fa0c302a:8088, db _internal, rp monitor, shard 2... Done. Backed up in 101.791148ms, 40448 bytes transferred
+Backing up node 7ba671c7644b:8088, db _internal, rp monitor, shard 3... Done. Backed up in 144.477159ms, 39424 bytes transferred
+Backed up to . in 1.293710883s, transferred 441765 bytes
+$ ls
+20160803T222310Z.manifest 20160803T222310Z.s1.tar.gz 20160803T222310Z.s3.tar.gz
+20160803T222310Z.meta 20160803T222310Z.s2.tar.gz 20160803T222310Z.s4.tar.gz
+```
+#### Example 2: Perform a full backup
+
+Perform a full backup into a specific directory with the command below.
+The directory must already exist.
+
+```
+influxd-ctl backup -full <path-to-backup-directory>
+```
+
+Output:
+```
+$ influxd-ctl backup -full backup_dir
+Backing up meta data... Done. 481 bytes transferred
+Backing up node :8088, db _internal, rp monitor, shard 1... Done. Backed up in 33.207375ms, 238080 bytes transferred
+Backing up node :8088, db telegraf, rp autogen, shard 2... Done. Backed up in 15.184391ms, 95232 bytes transferred
+Backed up to backup_dir in 51.388233ms, transferred 333793 bytes
+~# ls backup_dir
+20170130T184058Z.manifest
+20170130T184058Z.meta
+20170130T184058Z.s1.tar.gz
+20170130T184058Z.s2.tar.gz
+```
+
+#### Example 3: Perform an incremental backup on a single database
+
+Point at a remote meta server and back up only one database into a given directory (the directory must already exist):
+```
+influxd-ctl -bind <metahost>:8091 backup -db <db_name> <path-to-backup-directory>
+```
+
+Output:
+```
+$ influxd-ctl -bind 2a1b7a338184:8091 backup -db telegraf ./telegrafbackup
+Backing up meta data... Done. 318 bytes transferred
+Backing up node 7ba671c7644b:8088, db telegraf, rp autogen, shard 4... Done. Backed up in 997.168449ms, 399872 bytes transferred
+Backed up to ./telegrafbackup in 1.002358077s, transferred 400190 bytes
+$ ls ./telegrafbackup
+20160803T222811Z.manifest 20160803T222811Z.meta 20160803T222811Z.s4.tar.gz
+```
+
+## Restore
+
+Restore a backup to an existing cluster or a new cluster.
+By default, a restore writes to databases using the backed-up data's [replication factor](/influxdb/v1.6/concepts/glossary/#replication-factor).
+An alternate replication factor can be specified with the `-newrf` flag when restoring a single database.
+Restore supports both `-full` backups and incremental backups; the syntax for
+a restore differs depending on the backup type.
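+
+For example, to restore a single database with a different replication factor, you might combine the `-db` and `-newrf` flags as sketched below (the database name and backup path are placeholders):
+
+```
+influxd-ctl restore -db telegraf -newrf 2 my-incremental-backup/
+```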
+
+> #### Restores from an existing cluster to a new cluster
+Restores from an existing cluster to a new cluster restore the existing cluster's
+[users](/influxdb/v1.6/concepts/glossary/#user), roles,
+[databases](/influxdb/v1.6/concepts/glossary/#database), and
+[continuous queries](/influxdb/v1.6/concepts/glossary/#continuous-query-cq) to
+the new cluster.
+>
+They do not restore Kapacitor [subscriptions](/influxdb/v1.6/concepts/glossary/#subscription).
+In addition, restores to a new cluster drop any data in the new cluster's
+`_internal` database and begin writing to that database anew.
+The restore does not write the existing cluster's `_internal` database to
+the new cluster.
+
+### Syntax for a restore from an incremental backup
+Use the syntax below to restore an incremental backup to a new cluster or an existing cluster.
+Note that the existing cluster must contain no data in the affected databases.*
+Performing a restore from an incremental backup requires the path to the incremental backup's directory.
+
+```
+influxd-ctl [global-options] restore [restore-options] <path-to-backup-directory>
+```
+
+\* The existing cluster can have data in the `_internal` database, the database
+that the system creates by default.
+The system automatically drops the `_internal` database when it performs a complete restore.
+
+#### Global options:
+
+Please see the [influxd-ctl documentation](/enterprise_influxdb/v1.6/administration/cluster-commands/#global-options)
+for a complete list of the global `influxd-ctl` options.
+
+#### Restore options:
+
+* `-db <db_name>`: the name of the single database to restore
+* `-list`: shows the contents of the backup
+* `-newdb <newdb_name>`: the name of the new database to restore to (must specify with `-db`)
+* `-newrf <newrf_integer>`: the new replication factor to restore to (this is capped to the number of data nodes in the cluster)
+* `-newrp <newrp_name>`: the name of the new retention policy to restore to (must specify with `-rp`)
+* `-rp <rp_name>`: the name of the single retention policy to restore
+* `-shard <shard_ID>`: the shard ID to restore
+
+### Syntax for a restore from a full backup
+Use the syntax below to restore a backup that you made with the `-full` flag.
+Restore the `-full` backup to a new cluster or an existing cluster.
+Note that the existing cluster must contain no data in the affected databases.*
+Performing a restore from a `-full` backup requires the `-full` flag and the path to the full backup's manifest file.
+
+```
+influxd-ctl [global-options] restore [restore-options] -full <path-to-manifest-file>
+```
+
+\* The existing cluster can have data in the `_internal` database, the database
+that the system creates by default.
+The system automatically drops the `_internal` database when it performs a
+complete restore.
+
+#### Global options:
+
+Please see the [influxd-ctl documentation](/enterprise_influxdb/v1.6/administration/cluster-commands/#global-options)
+for a complete list of the global `influxd-ctl` options.
+
+#### Restore options:
+
+* `-db <db_name>`: the name of the single database to restore
+* `-list`: shows the contents of the backup
+* `-newdb <newdb_name>`: the name of the new database to restore to (must specify with `-db`)
+* `-newrf <newrf_integer>`: the new replication factor to restore to (this is capped to the number of data nodes in the cluster)
+* `-newrp <newrp_name>`: the name of the new retention policy to restore to (must specify with `-rp`)
+* `-rp <rp_name>`: the name of the single retention policy to restore
+* `-shard <shard_ID>`: the shard ID to restore
+
+### Examples
+
+#### Example 1: Perform a restore from an incremental backup
+
+```
+influxd-ctl restore <path-to-backup-directory>
+```
+
+Output:
+```
+$ influxd-ctl restore my-incremental-backup/
+Using backup directory: my-incremental-backup/
+Using meta backup: 20170130T231333Z.meta
+Restoring meta data... Done. Restored in 21.373019ms, 1 shards mapped
+Restoring db telegraf, rp autogen, shard 2 to shard 2...
+Copying data to :8088... Copying data to :8088... Done. Restored shard 2 into shard 2 in 61.046571ms, 588800 bytes transferred
+Restored from my-incremental-backup/ in 83.892591ms, transferred 588800 bytes
+```
+
+#### Example 2: Perform a restore from a `-full` backup
+
+```
+influxd-ctl restore -full <path-to-manifest-file>
+```
+
+Output:
+```
+$ influxd-ctl restore -full my-full-backup/20170131T020341Z.manifest
+Using manifest: my-full-backup/20170131T020341Z.manifest
+Restoring meta data... Done. Restored in 9.585639ms, 1 shards mapped
+Restoring db telegraf, rp autogen, shard 2 to shard 2...
+Copying data to :8088... Copying data to :8088... Done. Restored shard 2 into shard 2 in 48.095082ms, 569344 bytes transferred
+Restored from my-full-backup in 58.58301ms, transferred 569344 bytes
+```
+
+#### Example 3: Perform a restore from an incremental backup for a single database and give the database a new name
+
+```
+influxd-ctl restore -db <db_name> -newdb <newdb_name> <path-to-backup-directory>
+```
+
+Output:
+```
+$ influxd-ctl restore -db telegraf -newdb restored_telegraf my-incremental-backup/
+Using backup directory: my-incremental-backup/
+Using meta backup: 20170130T231333Z.meta
+Restoring meta data... Done. Restored in 8.119655ms, 1 shards mapped
+Restoring db telegraf, rp autogen, shard 2 to shard 4...
+Copying data to :8088... Copying data to :8088... Done. Restored shard 2 into shard 4 in 57.89687ms, 588800 bytes transferred
+Restored from my-incremental-backup/ in 66.715524ms, transferred 588800 bytes
+```
+
+#### Example 4: Perform a restore from an incremental backup for a database and merge that database into an existing database
+
+Your `telegraf` database was mistakenly dropped, but you have a recent backup so you've only lost a small amount of data.
+
+If [Telegraf](/telegraf/v1.7/) is still running, it will recreate the `telegraf` database shortly after the database is dropped.
+You might try to directly restore your `telegraf` backup just to find that you can't restore:
+
+```
+$ influxd-ctl restore -db telegraf my-incremental-backup/
+Using backup directory: my-incremental-backup/
+Using meta backup: 20170130T231333Z.meta
+Restoring meta data... Error.
+restore: operation exited with error: problem setting snapshot: database already exists
+```
+
+To work around this, you can restore your telegraf backup into a new database by specifying the `-db` flag for the source and the `-newdb` flag for the new destination:
+
+```
+$ influxd-ctl restore -db telegraf -newdb restored_telegraf my-incremental-backup/
+Using backup directory: my-incremental-backup/
+Using meta backup: 20170130T231333Z.meta
+Restoring meta data... Done. Restored in 19.915242ms, 1 shards mapped
+Restoring db telegraf, rp autogen, shard 2 to shard 7...
+Copying data to :8088... Copying data to :8088... Done. Restored shard 2 into shard 7 in 36.417682ms, 588800 bytes transferred
+Restored from my-incremental-backup/ in 56.623615ms, transferred 588800 bytes
+```
+
+Then, in the [`influx` client](/influxdb/v1.6/tools/shell/), use an [`INTO` query](/influxdb/v1.6/query_language/data_exploration/#the-into-clause) to copy the data from the new database into the existing `telegraf` database:
+
+```
+$ influx
+> USE restored_telegraf
+Using database restored_telegraf
+> SELECT * INTO telegraf..:MEASUREMENT FROM /.*/ GROUP BY *
+name: result
+------------
+time written
+1970-01-01T00:00:00Z 471
+```
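+
+Once you have verified that the data has been copied into `telegraf`, you can optionally drop the temporary database from the `influx` shell:
+
+```
+> DROP DATABASE restored_telegraf
+```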
+
+### Common issues with restore
+
+#### Issue 1: Restore writes information not part of the original backup
+
+If a [restore from an incremental backup](#syntax-for-a-restore-from-an-incremental-backup) does not limit the restore to the same database, retention policy, and shard specified by the backup command, the restore may appear to restore information that was not part of the original backup.
+Backups consist of a shard data backup and a metastore backup.
+The **shard data backup** contains the actual time series data: the measurements, tags, fields, and so on.
+The **metastore backup** contains user information, database names, retention policy names, shard metadata, continuous queries, and subscriptions.
+
+When the system creates a backup, the backup includes:
+
+* the relevant shard data determined by the specified backup options
+* all of the metastore information in the cluster regardless of the specified backup options
+
+Because a backup always includes the complete metastore information, a restore that doesn't include the same options specified by the backup command may appear to restore data that were not targeted by the original backup.
+The unintended data, however, include only the metastore information, not the shard data associated with that metastore information.
+
+#### Issue 2: Restore a backup created prior to version 1.2.0
+
+InfluxDB Enterprise introduced incremental backups in version 1.2.0.
+To restore a backup created prior to version 1.2.0, be sure to follow the syntax
+for [restoring from a full backup](#syntax-for-a-restore-from-a-full-backup).
diff --git a/content/enterprise_influxdb/v1.6/administration/cluster-commands.md b/content/enterprise_influxdb/v1.6/administration/cluster-commands.md
new file mode 100644
index 000000000..c79ff390a
--- /dev/null
+++ b/content/enterprise_influxdb/v1.6/administration/cluster-commands.md
@@ -0,0 +1,1064 @@
+---
+title: InfluxDB Enterprise cluster management utilities
+description: Use the "influxd-ctl" and "influx" command line tools to interact with your InfluxDB Enterprise cluster and data.
+aliases:
+ - /enterprise/v1.6/features/cluster-commands/
+menu:
+ enterprise_influxdb_1_6:
+ name: Cluster management utilities
+ weight: 30
+ parent: Administration
+---
+
+InfluxDB Enterprise includes two utilities for interacting with and managing your clusters.
+The [`influxd-ctl`](#influxd-ctl-cluster-management-utility) utility provides commands
+for managing your InfluxDB Enterprise clusters.
+The [`influx` command line interface](#influx-command-line-interface-cli) is used
+for interacting with and managing your data.
+
+#### Content
+
+* [`influxd-ctl` cluster management utility](#influxd-ctl-cluster-management-utility)
+ * [Syntax](#syntax)
+ * [Global options](#global-options)
+ * [`-auth-type`](#auth-type-none-basic-jwt)
+ * [`-bind-tls`](#bind-tls)
+ * [`-config`](#config-path-to-configuration-file)
+ * [`-pwd`](#pwd-password)
+ * [`-k`](#k)
+ * [`-secret`](#secret-jwt-shared-secret)
+ * [`-user`](#user-username)
+ * [Commands](#commands)
+ * [`add-data`](#add-data)
+ * [`add-meta`](#add-meta)
+ * [`backup`](#backup)
+ * [`copy-shard`](#copy-shard)
+ * [`copy-shard-status`](#copy-shard-status)
+ * [`entropy`](#entropy)
+ * [`join`](#join)
+ * [`kill-copy-shard`](#kill-copy-shard)
+ * [`leave`](#leave)
+ * [`remove-data`](#remove-data)
+ * [`remove-meta`](#remove-meta)
+ * [`remove-shard`](#remove-shard)
+ * [`restore`](#restore)
+ * [`show`](#show)
+ * [`show-shards`](#show-shards)
+ * [`update-data`](#update-data)
+ * [`token`](#token)
+ * [`truncate-shards`](#truncate-shards)
+* [`influx` command line interface (CLI)](#influx-command-line-interface-cli)
+
+
+## `influxd-ctl` cluster management utility
+
+Use the `influxd-ctl` cluster management utility to manage your cluster nodes, back up and restore data, and rebalance clusters.
+The `influxd-ctl` utility is available on all [meta nodes](/enterprise_influxdb/v1.6/concepts/glossary/#meta-node).
+
+### Syntax
+
+```
+influxd-ctl [ global-options ] <command> [ arguments ]
+```
+
+### Global options
+
+Optional arguments are in brackets.
+
+#### `[ -auth-type [ none | basic | jwt ] ]`
+
+Specify the type of authentication to use. Default value is `none`.
+
+#### [ -bind <hostname>:<port> ]
+
+Specify the bind HTTP address of a meta node to connect to. Default value is `localhost:8091`.
+
+#### `[ -bind-tls ]`
+
+Use TLS. If you have enabled HTTPS, you MUST use this argument in order to connect to the meta node.
+
+#### [ -config '<path-to-configuration-file>' ]
+
+Specify the path to the configuration file.
+
+#### [ -pwd <password> ]
+
+Specify the user’s password. This argument is ignored if `-auth-type basic` isn’t specified.
+
+#### `[ -k ]`
+
+Skip certificate verification; use this argument with a self-signed certificate. `-k` is ignored if `-bind-tls` isn't specified.
+
+#### [ -secret <JWT-shared-secret> ]
+
+Specify the JSON Web Token (JWT) shared secret. This argument is ignored if `-auth-type jwt` isn't specified.
+
+#### [ -user <username> ]
+
+Specify the user’s username. This argument is ignored if `-auth-type basic` isn’t specified.
+
+### Examples
+
+The following examples use the `influxd-ctl` utility's [`show` option](#show).
+
+#### Binding to a remote meta node
+
+```
+$ influxd-ctl -bind meta-node-02:8091 show
+```
+
+The `influxd-ctl` utility binds to the meta node with the hostname `meta-node-02` at port `8091`.
+By default, the tool binds to the meta node with the hostname `localhost` at port `8091`.
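+
+#### Connecting to a meta node over TLS
+
+If HTTPS is enabled on the meta nodes, the `-bind-tls` flag is required; with a self-signed certificate you would typically also pass `-k`. A sketch of such an invocation:
+
+```
+$ influxd-ctl -bind-tls -k show
+```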
+
+#### Authenticating with JWT
+
+```
+$ influxd-ctl -auth-type jwt -secret oatclusters show
+```
+The `influxd-ctl` utility uses JWT authentication with the shared secret `oatclusters`.
+
+If authentication is enabled in the cluster's [meta node configuration files](/enterprise_influxdb/v1.6/administration/config-meta-nodes#auth-enabled-false) and [data node configuration files](/enterprise_influxdb/v1.6/administration/config-data-nodes#meta-auth-enabled-false) and the `influxd-ctl` command does not include authentication details, the system returns:
+
+```
+Error: unable to parse authentication credentials.
+```
+
+If authentication is enabled and the `influxd-ctl` command provides the incorrect shared secret, the system returns:
+
+```
+Error: signature is invalid.
+```
+
+#### Authenticating with basic authentication
+
+To authenticate a user with basic authentication, use the `-auth-type basic` option on the `influxd-ctl` utility, along with the `-user` and `-pwd` options.
+
+In the following example, the `influxd-ctl` utility uses basic authentication for a cluster user.
+
+```
+$ influxd-ctl -auth-type basic -user admini -pwd mouse show
+```
+
+If authentication is enabled in the cluster's [meta node configuration files](/enterprise_influxdb/v1.6/administration/config-meta-nodes#auth-enabled-false) and [data node configuration files](/enterprise_influxdb/v1.6/administration/config-data-nodes#meta-auth-enabled-false) and the `influxd-ctl` command does not include authentication details, the system returns:
+
+```
+Error: unable to parse authentication credentials.
+```
+
+If authentication is enabled and the `influxd-ctl` command provides the incorrect username or password, the system returns:
+
+```
+Error: authorization failed.
+```
+
+## Commands
+
+### `add-data`
+
+Adds a data node to a cluster.
+By default, `influxd-ctl` adds the specified data node to the local meta node's cluster.
+Use `add-data` instead of the [`join` argument](#join) when performing a [production installation](/enterprise_influxdb/v1.6/install-and-deploy/production_installation/data_node_installation/) of an InfluxDB Enterprise cluster.
+
+#### Syntax
+
+```
+influxd-ctl add-data <data-node-TCP-bind-address>
+```
+
+Resources: [Production installation](/enterprise_influxdb/v1.6/install-and-deploy/production_installation/data_node_installation/)
+
+#### Examples
+
+##### Adding a data node to a cluster using the local meta node
+
+In the following example, the `add-data` command contacts the local meta node running at `localhost:8091` and adds a data node to that meta node's cluster.
+The data node has the hostname `cluster-data-node` and runs on port `8088`.
+
+```
+$ influxd-ctl add-data cluster-data-node:8088
+
+Added data node 3 at cluster-data-node:8088
+```
+
+##### Adding a data node to a cluster using a remote meta node
+
+In the following example, the command contacts the meta node running at `cluster-meta-node-01:8091` and adds a data node to that meta node's cluster.
+The data node has the hostname `cluster-data-node` and runs on port `8088`.
+
+```
+$ influxd-ctl -bind cluster-meta-node-01:8091 add-data cluster-data-node:8088
+
+Added data node 3 at cluster-data-node:8088
+```
+
+### `add-meta`
+
+Adds a meta node to a cluster.
+By default, `influxd-ctl` adds the specified meta node to the local meta node's cluster.
+Use `add-meta` instead of the [`join` argument](#join) when performing a [Production Installation](/enterprise_influxdb/v1.6/install-and-deploy/production_installation/meta_node_installation/) of an InfluxDB Enterprise cluster.
+
+Resources: [Production installation](/enterprise_influxdb/v1.6/install-and-deploy/production_installation/data_node_installation/)
+
+#### Syntax
+
+```
+influxd-ctl add-meta <meta-node-TCP-bind-address>
+```
+
+#### Examples
+
+##### Adding a meta node to a cluster using the local meta node
+
+In the following example, the `add-meta` command contacts the local meta node running at `localhost:8091` and adds a meta node to that local meta node's cluster.
+The added meta node has the hostname `cluster-meta-node-03` and runs on port `8091`.
+
+```
+$ influxd-ctl add-meta cluster-meta-node-03:8091
+
+Added meta node 3 at cluster-meta-node:8091
+```
+
+##### Adding a meta node to a cluster using a remote meta node
+
+In the following example, the `add-meta` command contacts the meta node running at `cluster-meta-node-01:8091` and adds a meta node to that meta node's cluster.
+The added meta node has the hostname `cluster-meta-node-03` and runs on port `8091`.
+
+```
+$ influxd-ctl -bind cluster-meta-node-01:8091 add-meta cluster-meta-node-03:8091
+
+Added meta node 3 at cluster-meta-node-03:8091
+```
+
+### `backup`
+
+Creates a backup of a cluster's [metastore](/influxdb/v1.6/concepts/glossary/#metastore) and [shard](/influxdb/v1.6/concepts/glossary/#shard) data at that point in time and stores the copy in the specified directory.
+Backups are incremental by default; they create a copy of the metastore and shard data that have changed since the previous incremental backup.
+If there are no existing incremental backups, the system automatically performs a complete backup.
+
+#### Syntax
+
+```
+influxd-ctl backup [ -db <db_name> | -from <data-node-TCP-address> | -full | -rp <rp_name> | -shard <shard_ID> ] <backup-dir>
+```
+##### Arguments
+
+Optional arguments are in brackets.
+
+#### [ `-db <db_name>` ]
+
+Name of the single database to back up.
+
+#### [ `-from <data-node-TCP-address>` ]
+
+TCP address of the target data node.
+
+#### [ `-full` ]
+
+Perform a [full](/enterprise_influxdb/v1.6/administration/backup-and-restore/#backup) backup.
+
+#### [ `-rp <rp_name>` ]
+
+Name of the single [retention policy](/influxdb/v1.6/concepts/glossary/#retention-policy-rp) to back up (requires the `-db` flag).
+
+#### [ `-shard <shard_ID>` ]
+
+Identifier of the shard to back up.
+
+> Restoring a `-full` backup and restoring an incremental backup require different syntax.
+To prevent issues with [`restore`](#restore), keep `-full` backups and incremental backups in separate directories.
+
+Resources: [Backing up and restoring in InfluxDB Enterprise](/enterprise_influxdb/v1.6/administration/backup-and-restore/)
+
+#### Examples
+
+##### Performing an incremental backup
+
+In the following example, the command performs an incremental backup and stores it in the current directory.
+If there are any existing backups in the current directory, the system performs an incremental backup.
+If there aren’t any existing backups in the current directory, the system performs a complete backup of the cluster.
+
+```
+$ influxd-ctl backup .
+```
+
+Output:
+```
+Backing up meta data... Done. 421 bytes transferred
+Backing up node cluster-data-node:8088, db telegraf, rp autogen, shard 4... Done. Backed up in 903.539567ms, 307712 bytes transferred
+Backing up node cluster-data-node:8088, db _internal, rp monitor, shard 1... Done. Backed up in 138.694402ms, 53760 bytes transferred
+Backing up node cluster-data-node:8088, db _internal, rp monitor, shard 2... Done. Backed up in 101.791148ms, 40448 bytes transferred
+Backing up node cluster-data-node:8088, db _internal, rp monitor, shard 3... Done. Backed up in 144.477159ms, 39424 bytes transferred
+Backed up to . in 1.293710883s, transferred 441765 bytes
+
+$ ls
+20160803T222310Z.manifest 20160803T222310Z.s1.tar.gz 20160803T222310Z.s3.tar.gz
+20160803T222310Z.meta 20160803T222310Z.s2.tar.gz 20160803T222310Z.s4.tar.gz
+```
+
+##### Performing a full backup
+
+In the following example, the `backup` command performs a full backup of the cluster and stores the backup in the existing directory `backup_dir`.
+
+```
+$ influxd-ctl backup -full backup_dir
+```
+
+Output:
+
+```
+Backing up meta data... Done. 481 bytes transferred
+Backing up node cluster-data-node:8088, db _internal, rp monitor, shard 1... Done. Backed up in 33.207375ms, 238080 bytes transferred
+Backing up node cluster-data-node:8088, db telegraf, rp autogen, shard 2... Done. Backed up in 15.184391ms, 95232 bytes transferred
+Backed up to backup_dir in 51.388233ms, transferred 333793 bytes
+
+~# ls backup_dir
+20170130T184058Z.manifest
+20170130T184058Z.meta
+20170130T184058Z.s1.tar.gz
+20170130T184058Z.s2.tar.gz
+```
+
+### `copy-shard`
+
+Copies a [shard](/influxdb/v1.6/concepts/glossary/#shard) from a source data node to a destination data node.
+
+#### Syntax
+
+```
+influxd-ctl copy-shard <source-TCP-address> <destination-TCP-address> <shard-ID>