Jason Wilder
eb1cd44b8d
Log write errors
...
Since the client only receives a "write failed" or "partial write" error
message, log additional context on the server.
2015-06-09 14:49:22 -06:00
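To illustrate the idea in the commit above, here is a minimal Go sketch, with hypothetical names (`writePoints`, `ErrPartialWrite`), of returning a generic error to the client while logging the detailed cause server-side:

```go
package cluster

import (
	"errors"
	"log"
)

// ErrPartialWrite is the generic error the client sees (assumed name).
var ErrPartialWrite = errors.New("partial write")

// writePoints attempts the write and logs the detailed failure server-side,
// while the caller still only receives the generic error value.
func writePoints(db, rp string, write func() error, logger *log.Logger) error {
	if err := write(); err != nil {
		logger.Printf("write failed for database %q, retention policy %q: %v", db, rp, err)
		return ErrPartialWrite
	}
	return nil
}
```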
Jason Wilder
5e515fbeda
Don't log EOF as an error
...
It's expected when a client disconnects
2015-06-08 16:39:39 -06:00
Jason Wilder
8323d6aa9e
Log when TCP clients connect/disconnect
2015-06-08 16:39:02 -06:00
Jason Wilder
8cbda9694e
Ensure unusable connections get closed
...
Fixes a bug where marking a connection as unusable didn't prevent it
from being checked back into the pool.
2015-06-08 11:26:56 -06:00
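A minimal sketch of the fix described above, using hypothetical `pooledConn`/`connPool` types rather than the actual pool package: a connection flagged unusable is closed instead of being checked back in.

```go
package pool

import (
	"net"
	"sync"
)

// pooledConn wraps a net.Conn and records whether it may be reused.
type pooledConn struct {
	net.Conn
	mu       sync.Mutex
	unusable bool
	pool     *connPool
}

// MarkUnusable flags the connection so it is closed instead of reused.
func (c *pooledConn) MarkUnusable() {
	c.mu.Lock()
	c.unusable = true
	c.mu.Unlock()
}

// Close either returns the connection to the pool or, if it was marked
// unusable, closes the underlying connection for real.
func (c *pooledConn) Close() error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.unusable {
		return c.Conn.Close() // never check an unusable conn back in
	}
	return c.pool.put(c.Conn)
}

// connPool is a fixed-capacity pool of healthy connections.
type connPool struct {
	conns chan net.Conn
}

// put returns a healthy connection to the pool, closing it if the pool is full.
func (p *connPool) put(conn net.Conn) error {
	select {
	case p.conns <- conn:
		return nil
	default:
		return conn.Close()
	}
}
```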
Jason Wilder
0c6ea32540
Use read locks instead of a write lock for connection pool checkout
2015-06-08 11:21:07 -06:00
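The commit above swaps a write lock for a read lock on the hot checkout path. A sketch of the pattern, with assumed names (`ShardWriter.pool`, `Pool`), looks like this:

```go
package cluster

import "sync"

// Pool is an assumed minimal connection-pool interface.
type Pool interface {
	Get() (interface{}, error)
}

type ShardWriter struct {
	mu    sync.RWMutex
	pools map[uint64]Pool // keyed by node ID
}

// pool returns the connection pool for nodeID, if one exists.
// This is the common path, so it only takes the read lock.
func (w *ShardWriter) pool(nodeID uint64) (Pool, bool) {
	w.mu.RLock() // readers don't block each other
	defer w.mu.RUnlock()
	p, ok := w.pools[nodeID]
	return p, ok
}

// setPool registers a pool for nodeID; this is the rare path and takes the write lock.
func (w *ShardWriter) setPool(nodeID uint64, p Pool) {
	w.mu.Lock()
	defer w.mu.Unlock()
	if w.pools == nil {
		w.pools = make(map[uint64]Pool)
	}
	w.pools[nodeID] = p
}
```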
Ben Johnson
6e40f869fe
Fix formatting directive.
2015-06-05 23:06:52 -06:00
Ben Johnson
617e214a49
Add remote write logging.
2015-06-05 22:49:03 -06:00
Ben Johnson
607c352412
Add remote write logging.
2015-06-05 22:34:30 -06:00
Jason Wilder
1024965db7
Create shard received from cluster writer
2015-06-05 22:16:51 -06:00
Jason Wilder
1638ff8b6c
Handle nil node returned from meta store in shard writer
2015-06-05 22:16:51 -06:00
Jason Wilder
75b72c60fe
Add hinted handoff service
...
The hinted handoff service will queue a write to a remote node if
that write fails and periodically retry the write.
2015-06-05 22:16:51 -06:00
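A minimal sketch of the queue-and-retry behavior described above, with hypothetical types (`Service`, `Enqueue`, `writeFn`); it is not the actual hinted handoff service:

```go
package hh

import (
	"sync"
	"time"
)

type write struct {
	nodeID uint64
	data   []byte
}

type Service struct {
	mu       sync.Mutex
	queue    []write
	interval time.Duration
	writeFn  func(nodeID uint64, data []byte) error // performs the remote write
	done     chan struct{}
}

// Enqueue stores a failed write for later retry.
func (s *Service) Enqueue(nodeID uint64, data []byte) {
	s.mu.Lock()
	s.queue = append(s.queue, write{nodeID, data})
	s.mu.Unlock()
}

// retryLoop periodically replays queued writes, re-queuing the ones that still fail.
func (s *Service) retryLoop() {
	ticker := time.NewTicker(s.interval)
	defer ticker.Stop()
	for {
		select {
		case <-s.done:
			return
		case <-ticker.C:
			s.mu.Lock()
			pending := s.queue
			s.queue = nil
			s.mu.Unlock()

			var failed []write
			for _, w := range pending {
				if err := s.writeFn(w.nodeID, w.data); err != nil {
					failed = append(failed, w) // node still unreachable; keep it queued
				}
			}

			s.mu.Lock()
			s.queue = append(failed, s.queue...)
			s.mu.Unlock()
		}
	}
}
```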
Ben Johnson
fb06549552
remove bind address from cluster config
2015-06-05 17:07:54 -06:00
Ben Johnson
abbcf15bb2
integrate mux into influxd cluster service
2015-06-05 17:02:32 -06:00
Ben Johnson
5a5c077790
refactor cluster to use mux
2015-06-05 16:54:12 -06:00
Ben Johnson
b925e1c1af
Multi-node clustering.
...
This commit adds the ability to cluster multiple nodes together to share
the same metadata through raft consensus.
2015-06-05 14:41:19 -06:00
Cory LaNou
21af1ded6b
messages over 1gb are probably not valid
2015-06-04 19:40:48 -06:00
Cory LaNou
5c52c4cda1
add ability to set logger for testing
2015-06-03 09:58:39 -06:00
Jason Wilder
156e7df346
Rename PointsWrite.Store to TSDBStore
...
Matches MetaStore naming convention better.
2015-06-02 14:47:59 -06:00
Jason Wilder
3957e096f8
Remove ownerID from protobufs
...
Not needed since the node that processes the request is the owner.
2015-06-02 14:45:52 -06:00
Jason Wilder
e400e8f2d6
Use default retention policy if not specified during writes
2015-06-01 17:16:44 -06:00
Jason Wilder
497cd506f9
Remove temporary INFLUXDB_ALPHA write path enable flag
...
Real thing exists now.
2015-06-01 16:45:08 -06:00
Cory LaNou
17bdf1c114
get both json/line protocol endpoints working
2015-06-01 12:35:57 -06:00
Cory LaNou
3597565955
reading and writing yo!
2015-06-01 11:59:58 -06:00
Ben Johnson
bf823d9887
Integrating cmd/influxd/run.
2015-05-30 14:06:36 -06:00
Ben Johnson
e1fc0958e7
Rename cluster.Coordinator to cluster.PointsWriter.
2015-05-30 14:05:27 -06:00
Ben Johnson
c916256ac9
Rename cluster.Writer to cluster.ShardWriter.
2015-05-30 14:05:27 -06:00
Ben Johnson
8c8a55a737
Removed 'failed' from test suite.
2015-05-30 08:59:27 -06:00
Ben Johnson
cdc5a47efa
Clean up influxdb.
2015-05-30 08:14:10 -06:00
Ben Johnson
9d4527071e
Refactor run command.
2015-05-29 14:59:57 -06:00
Ben Johnson
df1aeee70a
WIP
2015-05-29 14:56:30 -06:00
Ben Johnson
736875b858
Integrate meta package.
2015-05-29 14:54:04 -06:00
Cory LaNou
03975a8ac0
remove retries from dial
2015-05-28 09:57:42 -06:00
Cory LaNou
04fd69b6ab
use correct pool package. Mark connections unusable on error
2015-05-27 15:25:08 -06:00
Cory LaNou
51dbb171ee
rearrange some tests, add some timeout testing
2015-05-27 15:13:31 -06:00
Cory LaNou
1ac46b56b8
adding timeouts and deadlines
2015-05-27 14:40:19 -06:00
Cory LaNou
5c1d407d5e
pool map key is now nodeID, always get a fresh nodeInfo when dialing
2015-05-27 13:18:03 -06:00
Cory LaNou
b699938bdb
make the cluster listener an Opener
2015-05-27 10:30:52 -06:00
Cory LaNou
4da0e9a93c
close client pool
2015-05-27 10:06:04 -06:00
Cory LaNou
1228de4e7c
move tcp to cluster
2015-05-27 10:02:38 -06:00
Jason Wilder
85f59d696b
Create and open shards on-demand
...
Uses a structure like:
/root/
    /db1/rp1/1
            /2
    /db2/rp2/3
If a write is assigned to a shard on the local node but the shard
has not been created, create it when the write returns an error
and retry the write.
2015-05-26 16:38:45 -06:00
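A sketch of the create-on-demand flow described above, assuming a hypothetical `Store` interface and an `ErrShardNotFound` sentinel: attempt the local write, create the shard on a not-found error, then retry once.

```go
package cluster

import "errors"

// ErrShardNotFound is the assumed sentinel returned when a shard hasn't been created yet.
var ErrShardNotFound = errors.New("shard not found")

// Store is an assumed minimal interface over the local TSDB store.
type Store interface {
	WriteToShard(shardID uint64, points []byte) error
	CreateShard(db, rp string, shardID uint64) error
}

// writeLocal writes points to a local shard, creating the shard on first use
// (e.g. under /root/<db>/<rp>/<shardID>) and retrying the write once.
func writeLocal(store Store, db, rp string, shardID uint64, points []byte) error {
	err := store.WriteToShard(shardID, points)
	if !errors.Is(err, ErrShardNotFound) {
		return err // success, or an unrelated failure
	}
	// Shard doesn't exist yet; create it and retry the write once.
	if err := store.CreateShard(db, rp, shardID); err != nil {
		return err
	}
	return store.WriteToShard(shardID, points)
}
```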
Jason Wilder
cb23a7a297
Fix breakage from rename of data to cluster package
2015-05-26 15:42:44 -06:00
Paul Dix
01618dc143
Move data.Node to tsdb.Store. Move data to cluster.
2015-05-26 15:56:54 -04:00
Ben Johnson
f52c85a8a1
Query engine and parser integration into root pkg.
2014-11-09 19:55:53 -07:00
Ben Johnson
b78d4f1329
Add basic query code into the database.
2014-11-06 20:18:36 -05:00
Ben Johnson
9d1464813a
Merge branch 'master' of https://github.com/influxdb/influxdb into streaming-raft
...
Conflicts:
Makefile.in
_vendor/raft/server.go
_vendor/raft/snapshot.go
_vendor/raft/snapshot_test.go
admin/http_server.go
admin/http_server_test.go
api/graphite/api.go
api/http/series_writer.go
cluster/cluster_configuration.go
cluster/cluster_server.go
cluster/nil_processor.go
cluster/shard_space.go
cmd/influxd/main.go
common/helpers.go
configuration/configuration.go
configuration/configuration_test.go
coordinator/protobuf_server.go
coordinator/raft_server.go
datastore/point_iterator.go
datastore/shard.go
datastore/storage_key.go
engine/aggregator_engine.go
engine/arithmetic_operators.go
parser/group_by.go
parser/query_api.go
response_channel.go
server/server.go
2014-11-06 01:20:36 -05:00
John Shahid
1f5f5cb789
Don't write points if they are too old
...
Background of the bug: Prior to this patch we actually tried writing
points that were older than the retention period of the shard. This
caused a race condition when writing points to a shard that was being
dropped, which happens frequently if the user is (accidentally) loading
old data. This is demonstrated in the test in this commit. The bug was
previously addressed in #985, but it turns out that fix wasn't enough:
a user reported in #1078 that some shards were left behind and never
deleted.
While a shard is being dropped, more write requests can come in and
reach `cluster/shard.go:195`, which causes the datastore to create a
shard on disk that isn't tracked anywhere in the metadata. That shard
lives forever and is never deleted. This fix addresses the issue by
refusing to write old points. Some edge cases remain in the current
implementation, but it is at least not as bad as current master.
Close #1078
2014-11-03 17:28:47 -05:00
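A sketch of the guard introduced above, with assumed `Point` and `filterOldPoints` names: points older than the retention cutoff are dropped before they can reach a shard that is being removed.

```go
package cluster

import "time"

// Point is a stand-in for a parsed data point (assumed type).
type Point struct {
	Time time.Time
}

// filterOldPoints keeps only points newer than the retention cutoff.
func filterOldPoints(points []Point, retention time.Duration, now time.Time) []Point {
	cutoff := now.Add(-retention)
	kept := points[:0]
	for _, p := range points {
		if p.Time.After(cutoff) {
			kept = append(kept, p)
		}
	}
	return kept
}
```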
Ben Johnson
31f981e804
Refactoring common, cluster, and protobuf.
2014-10-31 19:31:19 -06:00
John Shahid
60c98e519e
Remove some nonsense
2014-10-30 14:16:46 -04:00
David Norton
9786d31db3
Add func to get str desc of Processor chain
...
Close #1068
2014-10-29 16:39:04 -04:00
John Shahid
c265f1f588
Delete shards only after making sure no one has a reference to them.
...
* shard_datastore.go(DeleteShard): Check the reference count of the
shard and mark it for deletion if there are still references out
there. Otherwise, delete the shard immediately. Also refactor the
deletion code into deleteShard(), see below.
* shard_datastore.go(ReturnShard): Check to see if the shard is marked
for deletion.
* shard_datastore.go(deleteShard): Refactor the code that used to be in
DeleteShard into its own method. Use `closeShard` instead of doing the
cleanup ourselves.
2014-10-24 16:36:45 -04:00
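A sketch of the reference-counting scheme the commit above describes, with hypothetical field names: `DeleteShard` defers removal while references are outstanding, and `ReturnShard` completes a pending deletion when the last reference comes back.

```go
package datastore

import "sync"

type shardEntry struct {
	refs       int // incremented wherever the shard is handed out (elided here)
	markDelete bool
}

type ShardDatastore struct {
	mu     sync.Mutex
	shards map[uint32]*shardEntry
}

// DeleteShard removes the shard immediately if nothing references it,
// otherwise marks it so the last ReturnShard call deletes it.
func (d *ShardDatastore) DeleteShard(id uint32) {
	d.mu.Lock()
	defer d.mu.Unlock()
	e, ok := d.shards[id]
	if !ok {
		return
	}
	if e.refs > 0 {
		e.markDelete = true // delete later, when the last reference is returned
		return
	}
	d.deleteShard(id)
}

// ReturnShard drops one reference and finishes a pending deletion if needed.
func (d *ShardDatastore) ReturnShard(id uint32) {
	d.mu.Lock()
	defer d.mu.Unlock()
	e, ok := d.shards[id]
	if !ok {
		return
	}
	e.refs--
	if e.refs == 0 && e.markDelete {
		d.deleteShard(id)
	}
}

// deleteShard removes the shard from the map; closing and on-disk cleanup elided.
func (d *ShardDatastore) deleteShard(id uint32) {
	delete(d.shards, id)
}
```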