Commit Graph

420 Commits (0341bc35328c22b6bb4965413e4cbb61ab602da1)

Author SHA1 Message Date
Paul Dix 0341bc3532 Update meta client and retention service.
* Remove VisitRetentionPolicies from meta client.
* Update retention enforcer to run on every data node.
2016-01-21 15:28:33 -05:00
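
The per-node enforcement described above is easy to picture as a ticker loop on each data node. Below is a minimal, hypothetical sketch of such an enforcer; the `MetaClient` interface and the `ShardsToExpire` and `DeleteShard` names are illustrative assumptions, not InfluxDB's actual API.

    // Hypothetical sketch of a retention enforcer that runs on every
    // data node, polling a metadata snapshot instead of relying on a
    // VisitRetentionPolicies-style callback. Names are illustrative.
    package retention

    import "time"

    // ShardInfo describes one shard and when its data expires.
    type ShardInfo struct {
        ID        uint64
        ExpiredAt time.Time
    }

    // MetaClient is the small surface the enforcer needs.
    type MetaClient interface {
        ShardsToExpire(now time.Time) []ShardInfo
    }

    // Enforcer periodically deletes expired shards on the local node.
    type Enforcer struct {
        Meta        MetaClient
        Interval    time.Duration
        DeleteShard func(id uint64) error
        done        chan struct{}
    }

    func (e *Enforcer) run() {
        ticker := time.NewTicker(e.Interval)
        defer ticker.Stop()
        for {
            select {
            case <-ticker.C:
                for _, sh := range e.Meta.ShardsToExpire(time.Now().UTC()) {
                    _ = e.DeleteShard(sh.ID) // real code would log errors
                }
            case <-e.done:
                return
            }
        }
    }
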
Paul Dix 70de1a7690 Update meta service/client and shard precreator.
* Wire up DataNode(id uint64).
* Remove IsLeader test on precreator.
* Clean up error in client if the server returns a non-200 on get snapshot.
2016-01-21 15:28:33 -05:00
Paul Dix 9ea8ff357e Wire up meta service and client delete data node 2016-01-21 15:28:33 -05:00
Paul Dix 7b71b66e31 Update meta service, meta client, and httpd handler
* Improve the ping endpoint so that it can optionally check for leader agreement across all meta servers
* Add Ping method to the meta client
* Fix ClusterID tests
* Remove WaitForLeader from meta client and remove unnecessary references to it
2016-01-21 15:28:33 -05:00
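
The optional leader-agreement check can be sketched as an HTTP handler that compares each meta server's view of the leader. This is a hypothetical illustration only; the `store` interface, the `all` query parameter, and the method names are assumptions, not the actual handler.

    // Hypothetical sketch of a ping handler that can optionally verify
    // leader agreement across all meta servers.
    package meta

    import "net/http"

    type store interface {
        Leader() string                       // leader as seen by this node
        Peers() []string                      // addresses of all meta servers
        LeaderOf(peer string) (string, error) // leader as seen by a peer
    }

    func pingHandler(s store) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            // Plain ping: just confirm this server is up.
            if r.URL.Query().Get("all") == "" {
                w.WriteHeader(http.StatusOK)
                return
            }
            // Verbose ping: every meta server must agree on the leader.
            leader := s.Leader()
            for _, p := range s.Peers() {
                l, err := s.LeaderOf(p)
                if err != nil || l != leader {
                    http.Error(w, "meta servers disagree on leader",
                        http.StatusInternalServerError)
                    return
                }
            }
            w.WriteHeader(http.StatusOK)
        }
    }
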
Paul Dix 101f93f1db Add meta service test to ensure cluster id persisted 2016-01-21 15:28:33 -05:00
Paul Dix 2f07fe88ca Update meta client to use data method to protect cache 2016-01-21 15:28:33 -05:00
Paul Dix 101ab32571 Fix meta-service for server integration tests
* Updated CreateShardGroup to not return an error if the shard group already exists, so it's idempotent
* Removed the old test making sure you can't delete the default RP. You can delete it now; there was no reason to disallow it.
* Wired up the UpdateRetentionPolicy functionality
2016-01-21 15:28:33 -05:00
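
The idempotent CreateShardGroup behavior amounts to "return the existing group instead of an error", so retries are harmless. A minimal sketch of the pattern, with illustrative types rather than the real meta package:

    // Hypothetical sketch of the idempotent-create pattern.
    package meta

    import "time"

    type ShardGroupInfo struct {
        ID        uint64
        StartTime time.Time
        EndTime   time.Time
    }

    type Data struct {
        groups []ShardGroupInfo
    }

    // shardGroupFor returns the group covering t, if any.
    func (d *Data) shardGroupFor(t time.Time) *ShardGroupInfo {
        for i := range d.groups {
            g := &d.groups[i]
            if !t.Before(g.StartTime) && t.Before(g.EndTime) {
                return g
            }
        }
        return nil
    }

    // CreateShardGroup is idempotent: creating a group that already
    // covers the timestamp returns the existing group, not an error.
    func (d *Data) CreateShardGroup(t time.Time, dur time.Duration) *ShardGroupInfo {
        if g := d.shardGroupFor(t); g != nil {
            return g
        }
        start := t.Truncate(dur)
        d.groups = append(d.groups, ShardGroupInfo{
            ID:        uint64(len(d.groups) + 1),
            StartTime: start,
            EndTime:   start.Add(dur),
        })
        return &d.groups[len(d.groups)-1]
    }
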
Cory LaNou 2715d5ef72 add clusterID and tests 2016-01-21 15:28:33 -05:00
Paul Dix fb9181d240 Fix meta-service build 2016-01-21 15:28:33 -05:00
Paul Dix bfcf5d63ce Clean up meta service close. 2016-01-21 15:28:33 -05:00
David Norton f23fea81b3 take rlock and grab ref to data 2016-01-21 15:28:33 -05:00
David Norton d1fcf1f7a1 wire up meta client shard methods 2016-01-21 15:28:33 -05:00
Paul Dix f385945058 Update Server to work with new metaservice/client 2016-01-21 15:28:33 -05:00
Cory LaNou d0cad8a022 add subscription meta client test 2016-01-21 15:28:33 -05:00
Cory LaNou 853f4bf70e add continuous query meta client tests 2016-01-21 15:28:33 -05:00
Cory LaNou a41222befb add continuous query/subscription methods to meta client 2016-01-21 15:28:33 -05:00
Cory LaNou 7c41c0e02f add user tests for meta client 2016-01-21 15:28:33 -05:00
Paul Dix 9fd9a666bf Add CreateDataNode to meta client/service 2016-01-21 15:28:33 -05:00
Cory LaNou 53042ac56d bringing back client user methods 2016-01-21 15:28:33 -05:00
Paul Dix 13e32f6880 Update close handling on meta service 2016-01-21 15:28:33 -05:00
Paul Dix d2e3cf519c Cleanup host/port in meta service 2016-01-21 15:28:33 -05:00
Paul Dix e906107bea Update meta service to handle host names
This ensures that the meta service will gracefully handle host name changes in a single-server configuration.

It also changes the raft setup to use the user-specified bind address (and thus hostname) instead of pulling it off the listener, which returns the IP. This will enable users to have hostnames listed instead of IPs in the metastore, making it easier to read. This also means that underlying IPs can change without causing problems in a cluster.
2016-01-21 15:28:32 -05:00
Paul Dix 0f36fbe5ce Add comment to client 2016-01-21 15:28:32 -05:00
Paul Dix 1632980eb8 Cleanup PrintLns in meta client 2016-01-21 15:28:32 -05:00
Paul Dix eda4a6eda0 Wire up meta service and client recovery.
* Increase sleep on error in client exec in case a server went down, so we don't max out retries before a new leader gets elected
* Update and add close logic to the service, handler, raft state, and the client
2016-01-21 15:28:32 -05:00
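
This recovery behavior (back off on error so retries outlast a leader election, bounded by a max retry count) pairs with the nearby "cycle to next server" and "enforce max retries" commits. A hypothetical sketch of such a loop; `maxRetries`, `errSleep`, and `exec` are invented names, not the real client:

    // Hypothetical sketch of the meta client's retry loop.
    package meta

    import (
        "fmt"
        "time"
    )

    const (
        maxRetries = 10
        errSleep   = 100 * time.Millisecond
    )

    type Client struct {
        servers []string
        index   int // current server
    }

    func (c *Client) exec(do func(server string) error) error {
        var lastErr error
        for i := 0; i < maxRetries; i++ {
            if lastErr = do(c.servers[c.index]); lastErr == nil {
                return nil
            }
            // Cycle to the next server and sleep briefly so we don't
            // exhaust retries before a new leader gets elected.
            c.index = (c.index + 1) % len(c.servers)
            time.Sleep(errSleep)
        }
        return fmt.Errorf("max retries exceeded: %v", lastErr)
    }
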
David Norton 5c20e16406 wire up some RP stuff in meta client / service 2016-01-21 15:28:32 -05:00
David Norton c84e9b38d0 fix unit tests after backing out proto change 2016-01-21 15:28:32 -05:00
David Norton f91fd0b8ae back out proto struct changes 2016-01-21 15:28:32 -05:00
Paul Dix e9e63b573b Cycle to next server on failure in meta client 2016-01-21 15:28:32 -05:00
Paul Dix 1e63fa4e2c Enforce max retries on meta client 2016-01-21 15:28:32 -05:00
Paul Dix 90a08154c5 Wire up redirects to execute against raft leader 2016-01-21 15:28:32 -05:00
David Norton c7721c8948 don't clone database infos in client 2016-01-21 15:28:32 -05:00
David Norton 6561b702b8 remove commented out test code 2016-01-21 15:28:32 -05:00
David Norton f80f860ee5 temporarily rename statement_executor_test.go 2016-01-21 15:28:32 -05:00
David Norton 1d6878c37c wire up some meta client funcs and tests 2016-01-21 15:28:32 -05:00
Paul Dix c9d82ad0ad Wire up meta service functionality
* Add dir, hostname, and bind address to the top-level config since they apply to services other than meta
* Add enabled flags to example toml for data and meta services
* Wire up add/remove raft peers and meta servers to meta service
* Update DROP SERVER to be either DROP META SERVER or DROP DATA SERVER
* Bring over statement executor from old meta package
* Start meta service client implementation
* Update meta service test to use the client
* Wire up node ID/meta server storage information
2016-01-21 15:28:32 -05:00
David Norton 688bc7a2f1 fix go vet error 2016-01-21 15:28:32 -05:00
Cory LaNou d69c5f853f set store peers when starting up from config 2016-01-21 15:28:32 -05:00
David Norton 79d81a2448 add meta service tests & bug fixes 2016-01-21 15:28:32 -05:00
Cory LaNou 9ec7a710c9 some misc refactoring on influxd startup 2016-01-21 15:28:32 -05:00
Cory LaNou 8d878fff91 buildable meta -> services/meta 2016-01-21 15:28:32 -05:00
David Norton bf0b477a0b set raftState on the store 2016-01-21 15:28:32 -05:00
Cory LaNou d3ab0b5ae6 buildable again, lots of WIP 2016-01-21 15:28:32 -05:00
Cory LaNou b0d0668138 wip 2016-01-21 15:28:32 -05:00
David Norton 94b05404dc remove cors from handler 2016-01-21 15:28:31 -05:00
David Norton 169c6a5dfa store and handler to interface 2016-01-21 15:28:31 -05:00
David Norton 9f93f0b84a convert to AfterIndex 2016-01-21 15:28:31 -05:00
David Norton 05da43d9f6 rough out meta service 2016-01-21 15:28:31 -05:00
Paul Dix 59fbd371fc Implement backup/restore for TSM.
This changes backup and restore to work for TSM. It breaks them for b1 and bz1, but since those engines are being removed, that's OK.

The backup runs against any host that is specified and can back up the metastore, a database, a specific retention policy, or a specific shard. It can also take incremental backups with the `since` flag, which will only back up TSM files that have been created since that timestamp.

The backup is safe to run online. However, shards that are still hot for writes won't be able to create new TSM files while the backup for that single shard runs. If the backup isn't too large and the write throughput isn't too high, this shouldn't be a problem, since the writes will just go into the WAL cache.
2015-12-30 18:06:50 -05:00
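
The `since` behavior reduces to filtering each shard's TSM files by when they were created. A minimal sketch, under the assumption that file modification time stands in for creation time; `tsmFilesSince` is an invented helper, not the real backup code:

    // Hypothetical sketch of the incremental-backup file filter.
    package backup

    import (
        "os"
        "path/filepath"
        "strings"
        "time"
    )

    // tsmFilesSince returns the TSM files in dir modified after since.
    func tsmFilesSince(dir string, since time.Time) ([]string, error) {
        var files []string
        err := filepath.Walk(dir, func(path string, fi os.FileInfo, err error) error {
            if err != nil {
                return err
            }
            if fi.IsDir() || !strings.HasSuffix(path, ".tsm") {
                return nil
            }
            // Only include files newer than the `since` timestamp.
            if fi.ModTime().After(since) {
                files = append(files, path)
            }
            return nil
        })
        return files, err
    }
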
Jonathan A. Sternberg 5d4ecf853c Add continuous query option for customizing resampling
This makes the following syntax possible:

    CREATE CONTINUOUS QUERY mycq ON mydb
        RESAMPLE EVERY 1m FOR 1h
        BEGIN
          SELECT mean(value) INTO cpu_mean FROM cpu GROUP BY time(5m)
        END

The RESAMPLE option customizes how often an interval will be
resampled and for how long. The interval is customized with EVERY.
Any interval within the resampling duration that falls on a multiple
of the resample interval will be updated with the new results from
the query.

The duration is customized with FOR. This determines how long an
interval will participate in resampling.

Both options are optional. If RESAMPLE is in the syntax, at least one of
the two needs to be given. The default for both is the interval of the
continuous query.

The service also improves tracking of the last run time and the logic
of when a query for an interval should be run. When determining the
oldest interval to run for a query, the continuous query service
computes what would have been the optimal time to perform the next
query based on the last run time. It then uses this time, together
with the resample duration, to determine the oldest interval that
should be run, and resamples all intervals between that time and the
current time. This avoids potentially forgetting about the last run
in an interval if the continuous query service gets delayed for some
reason.

This removes the previous config options for customizing continuous
queries, since they are no longer relevant, and adds a new option for
customizing the run interval. The run interval determines how often
the continuous query service polls for when it should execute a
query. This option defaults to 1s, but can be set to 1m if the
greatest common divisor of all continuous queries' intervals is a
higher value (like 1m).
2015-12-28 16:43:49 -05:00
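
The scheduling logic described above can be sketched in two small pieces: deciding whether a query is due (driven by EVERY) and computing the window of GROUP BY time() buckets to recompute (driven by FOR). All names below are illustrative assumptions, not the actual continuous query service:

    // Hypothetical sketch of the resampling schedule computation.
    package cq

    import "time"

    type ResampleOptions struct {
        Every time.Duration // how often intervals are recomputed
        For   time.Duration // how long an interval keeps resampling
    }

    // shouldRun reports whether the query is due, given its last run:
    // the optimal next run is one EVERY after the last run, aligned
    // to the EVERY boundary.
    func shouldRun(last, now time.Time, opt ResampleOptions) bool {
        next := last.Truncate(opt.Every).Add(opt.Every)
        return !now.Before(next)
    }

    // window returns the [start, end) range of GROUP BY time() buckets
    // to recompute: everything from now-FOR up to the current bucket,
    // so a delayed service still resamples the intervals it missed.
    func window(now time.Time, group time.Duration, opt ResampleOptions) (time.Time, time.Time) {
        end := now.Truncate(group)
        start := now.Add(-opt.For).Truncate(group)
        return start, end
    }
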