Uses a structure like:

    /root/
      /db1/rp1/1
              /2
      /db2/rp2/3
If a write is assigned to a shard on the local node but the shard
has not been created, create it when the write returns an error
and retry the write.
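A rough sketch of that create-on-miss retry, assuming a hypothetical `datastore` interface and `shardPath` helper rather than the actual InfluxDB types:

```go
package main

import (
	"errors"
	"fmt"
	"path/filepath"
)

// errShardNotFound stands in for whatever error the datastore returns when
// the shard directory does not exist yet.
var errShardNotFound = errors.New("shard not found")

// datastore is an illustrative stand-in for the real shard datastore.
type datastore interface {
	Write(shardId uint32, data []byte) error
	CreateShard(shardId uint32) error
}

// shardPath maps a shard onto the on-disk layout /root/<db>/<rp>/<shard id>.
func shardPath(root, db, rp string, shardId uint32) string {
	return filepath.Join(root, db, rp, fmt.Sprint(shardId))
}

// writeToLocalShard lazily creates the shard and retries the write once if
// the first attempt fails because the shard has not been created yet.
func writeToLocalShard(ds datastore, shardId uint32, data []byte) error {
	err := ds.Write(shardId, data)
	if err != errShardNotFound {
		return err
	}
	if err := ds.CreateShard(shardId); err != nil {
		return err
	}
	return ds.Write(shardId, data) // retry once now that the shard exists
}

func main() {
	fmt.Println(shardPath("/root", "db1", "rp1", 2)) // /root/db1/rp1/2
}
```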
Background of the bug: Prior to this patch we actually tried writing points
that were older than the retention period of the shard. This caused a race
condition when writing points to a shard that is being dropped, which will
happen frequently if the user is (accidentally) loading old data. This is
demonstrated in the test in this commit. This bug was previously addressed
in #985, but it turns out the fix for #985 wasn't enough: a user reported in
#1078 that some shards were left behind and never deleted.
It turns out that while a shard is being dropped, more write requests can
come in and end up on line `cluster/shard.go:195`, which causes the
datastore to create a shard on disk that isn't tracked anywhere in the
metadata. That shard will live forever and never get deleted. This fix
addresses the issue by not writing old points in the first place. There are
still some edge cases with the current implementation, but it is at least
not as bad as current master.
Close#1078
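A minimal sketch of the idea behind the fix, using illustrative `point` and `shard` types: points outside the shard's time range are filtered out up front instead of being handed to a shard that may be in the middle of being dropped:

```go
package main

import (
	"fmt"
	"time"
)

// point and shard are illustrative stand-ins for the real types.
type point struct {
	timestamp time.Time
	value     float64
}

type shard struct {
	startTime time.Time
	endTime   time.Time
}

// filterWritablePoints keeps only the points that fall inside the shard's
// time range; anything older is dropped rather than racing with the
// retention enforcement that may be dropping the shard.
func filterWritablePoints(s shard, points []point) []point {
	writable := points[:0]
	for _, p := range points {
		if !p.timestamp.Before(s.startTime) && p.timestamp.Before(s.endTime) {
			writable = append(writable, p)
		}
	}
	return writable
}

func main() {
	s := shard{
		startTime: time.Now().Add(-24 * time.Hour),
		endTime:   time.Now(),
	}
	pts := []point{
		{timestamp: time.Now().Add(-48 * time.Hour), value: 1}, // too old, dropped
		{timestamp: time.Now().Add(-1 * time.Hour), value: 2},  // kept
	}
	fmt.Println(len(filterWritablePoints(s, pts))) // 1
}
```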
* shard_datastore.go(DeleteShard): Check the reference count of the
shard and mark it for deletion if there are still references to it out
there; otherwise, delete the shard immediately (a sketch of this scheme
follows the list). Also refactor the deletion code into deleteShard(),
see below.
* shard_datastore.go(ReturnShard): Check whether the shard is marked
for deletion.
* shard_datastore.go(deleteShard): Refactor the code that used to be in
DeleteShard into its own method. Use `closeShard` instead of doing the
cleanup ourselves.
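A minimal sketch of that reference-counting scheme; the names below are made up and only stand in for the real ShardDatastore internals:

```go
package main

import (
	"fmt"
	"sync"
)

// refCountedShards is an illustrative sketch of the reference counting
// described above, not the actual ShardDatastore implementation.
type refCountedShards struct {
	mu       sync.Mutex
	refs     map[uint32]int
	toDelete map[uint32]bool
}

// DeleteShard deletes the shard immediately if nobody holds a reference,
// otherwise it only marks the shard for deletion.
func (s *refCountedShards) DeleteShard(id uint32) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.refs[id] > 0 {
		s.toDelete[id] = true // delete later, when the last reference is returned
		return
	}
	s.deleteShard(id)
}

// ReturnShard drops one reference and performs the deferred deletion if the
// shard was marked while it was still in use.
func (s *refCountedShards) ReturnShard(id uint32) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.refs[id]--
	if s.refs[id] <= 0 && s.toDelete[id] {
		s.deleteShard(id)
	}
}

// deleteShard does the actual cleanup (close the shard, remove it from disk).
func (s *refCountedShards) deleteShard(id uint32) {
	delete(s.refs, id)
	delete(s.toDelete, id)
	fmt.Println("deleted shard", id)
}

func main() {
	s := &refCountedShards{refs: map[uint32]int{1: 1}, toDelete: map[uint32]bool{}}
	s.DeleteShard(1) // still referenced: only marked
	s.ReturnShard(1) // last reference returned: deleted now
}
```

Deferring the actual cleanup until the last reference is returned avoids deleting files out from under an in-flight query or write.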
This was causing InfluxDB to create a new shard in the grafana db every
ten minutes. Also we talked about getting rid of this feature a while
ago, so here we go.
Fix#954
Fixes#853. Close#854. Previously, there was an unprotected endpoint in
raft to return the cluster config that would include user hashes. This
endpoint is useful for debugging purposes so I restructured it and moved
it to the API. It ensures the requesting user is a cluster admin.
Cluster config will now return all of the cluster state including
servers, CQs, shards, etc.
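A hypothetical sketch of the admin check; the handler path, the admin lookup, and the response shape are illustrative, not the actual API code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// isClusterAdmin is a placeholder for the real authentication; the actual
// handler checks the authenticated user's cluster-admin status.
func isClusterAdmin(r *http.Request) bool {
	return r.URL.Query().Get("u") == "root" // illustrative only
}

// clusterConfigHandler returns the cluster state (servers, shards, CQs, ...)
// but only to cluster admins, so user hashes are never exposed on an
// unauthenticated endpoint.
func clusterConfigHandler(w http.ResponseWriter, r *http.Request) {
	if !isClusterAdmin(r) {
		http.Error(w, "only cluster admins may read the cluster config", http.StatusUnauthorized)
		return
	}
	config := map[string]interface{}{
		"servers": []string{"localhost:8086"},
		"shards":  []int{1, 2, 3},
	}
	json.NewEncoder(w).Encode(config)
}

func main() {
	http.HandleFunc("/cluster/configuration", clusterConfigHandler)
	fmt.Println(http.ListenAndServe(":8086", nil))
}
```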
Fixes#867. Close#927. Updated the lexer and parser, and added code to the
coordinator to include the shard spaces in the results if requested. The
user can now list series together with their shard spaces:
`list series include spaces`
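For example, the query could be issued over the 0.8-style HTTP query endpoint (the database name and credentials below are placeholders):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Build the query string; "mydb" and "root"/"root" are placeholders.
	q := url.Values{}
	q.Set("u", "root")
	q.Set("p", "root")
	q.Set("q", "list series include spaces")

	resp, err := http.Get("http://localhost:8086/db/mydb/series?" + q.Encode())
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // series names annotated with their shard space
}
```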
This will help users recover from #886. It's dangerous functionality because it only changes the metadata. Will document and tell people to use with caution.
Fixes#886. Shard spaces would not have compiled regexes when the server was restarted and the cluster config was pulled from a raft snapshot. A call to MatchSeries would then reset the regex for the shard space. BAAAAAD.
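A minimal sketch of the idea behind the fix, with an illustrative `shardSpace` type: compile the regex on first use instead of assuming the compiled form survived (de)serialization:

```go
package main

import (
	"fmt"
	"regexp"
)

// shardSpace is an illustrative stand-in: the compiled regex is not part of
// the serialized cluster config, so it is nil after recovering from a raft
// snapshot and has to be (re)compiled before matching.
type shardSpace struct {
	Regex    string
	compiled *regexp.Regexp
}

// MatchesSeries compiles the regex on first use instead of assuming it
// survived serialization.
func (s *shardSpace) MatchesSeries(name string) bool {
	if s.compiled == nil {
		re, err := regexp.Compile(s.Regex)
		if err != nil {
			return false
		}
		s.compiled = re
	}
	return s.compiled.MatchString(name)
}

func main() {
	s := &shardSpace{Regex: "^cpu\\..*"} // as if loaded from a snapshot: compiled is nil
	fmt.Println(s.MatchesSeries("cpu.load"))  // true
	fmt.Println(s.MatchesSeries("mem.usage")) // false
}
```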
This was a problem with 0.7.x upgrades to 0.8 where there was a raft snapshot. The recovery method assumed certain structures would be there and they weren't.
Fix#791 - Removed load database config options from the daemon. Created an API endpoint and updated test.
Fix#745 - Added definition of continuous queries to load database config.
Close#792
This commit fixes two bugs (a sketch of both fixes follows the list):
* Don't try to parse the "inf" retention policy when creating a shard
space. This caused a panic when a shard space was created with an
infinite retention. Fix#774
* `getExpiredShards()` used the shard duration to determine which shards
are expired, but it should use the shard retention duration instead.
Close#769
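A rough sketch of both fixes, with illustrative helpers (`parseRetention`, `isExpired`) standing in for the real parser and `getExpiredShards()`:

```go
package main

import (
	"fmt"
	"time"
)

// parseRetention treats "inf" as "keep forever" instead of handing it to
// time.ParseDuration, which would fail (the real parser also understands
// suffixes like "d"; this sketch only handles the "inf" case).
func parseRetention(s string) (time.Duration, error) {
	if s == "inf" {
		return 0, nil // zero means no expiration in this sketch
	}
	return time.ParseDuration(s)
}

// isExpired uses the retention duration (how long data is kept), not the
// shard duration (how much time one shard covers), to decide expiry.
func isExpired(shardEnd time.Time, retention time.Duration, now time.Time) bool {
	if retention == 0 {
		return false // infinite retention never expires
	}
	return shardEnd.Before(now.Add(-retention))
}

func main() {
	r, _ := parseRetention("inf")
	fmt.Println(isExpired(time.Now().Add(-30*24*time.Hour), r, time.Now())) // false: kept forever

	r, _ = parseRetention("168h") // one week
	fmt.Println(isExpired(time.Now().Add(-30*24*time.Hour), r, time.Now())) // true: older than retention
}
```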