Background of the bug: Prior to this patch we actually tried writing
points that were older than the retention period of the shard. This
caused a race condition when it came to writing points to a shard that is
being dropped, which will happen frequently if the user is (accidentally)
loading old data. This is demonstrated in the test in this commit. This
bug was previously addressed in #985. It turns out the fix for #985 wasn't
enough. A user reported in #1078 that some shards are left behind and
not deleted.
It turns out that while the shard is being dropped, more write
requests could come in and end up on line `cluster/shard.go:195`, which
causes the datastore to create a shard on disk that isn't tracked
anywhere in the metadata. This shard will live forever and never get
deleted. This fix addresses the issue by refusing to write old points in
the first place. There are still some edge cases with the current
implementation, but it is at least not as bad as current master.
Close #1078
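A minimal sketch of the idea, assuming hypothetical names (`Point`, `filterExpiredPoints`) rather than the actual coordinator code: points older than the retention window are dropped before they can reach a shard that may be in the middle of being deleted.

```go
package coordinator

import "time"

// Point is a simplified stand-in for the real point type.
type Point struct {
	Timestamp time.Time
	Value     float64
}

// filterExpiredPoints drops points whose timestamps fall outside the
// retention window, so a write can never recreate a shard that is
// concurrently being dropped.
func filterExpiredPoints(points []Point, retention time.Duration, now time.Time) []Point {
	cutoff := now.Add(-retention)
	kept := make([]Point, 0, len(points))
	for _, p := range points {
		if p.Timestamp.Before(cutoff) {
			// Older than the retention period: writing it would create
			// an untracked shard on disk, so silently drop it.
			continue
		}
		kept = append(kept, p)
	}
	return kept
}
```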
This reverts commit 49c49d818c.
In order to use rocksdb we need g++ >= 4.7 (which has support for
C++11). It's too expensive to build g++ every time. Upgrading the
Ubuntu distro used by Travis seems to be coming soon, so I'll shelve this
for now until travis-ci/travis-ci#2046 is resolved.
* shard_datastore.go(DeleteShard): Check the reference count of the
shard and mark it for deletion if there are still more references out
there. Otherwise, delete the shard immediately. Also refactor the
deletion code into deleteShard(), see below.
* shard_datastore.go(ReturnShard): Check whether the shard is marked
for deletion.
* shard_datastore.go(deleteShard): Refactor the code that used to be in
DeleteShard into its own method. Use `closeShard` instead of doing the
cleanup ourselves. A sketch of this scheme follows the list.
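The sketch below illustrates the reference-counting scheme described above; the `shardRef` struct, field names, and method bodies are illustrative assumptions, not the actual shard_datastore.go API.

```go
package datastore

import "sync"

// shardRef tracks how many callers currently hold the shard and whether a
// delete was requested while it was still in use.
type shardRef struct {
	refcount        int
	deleteOnRelease bool
}

type ShardDatastore struct {
	mu     sync.Mutex
	shards map[uint32]*shardRef
}

// DeleteShard deletes the shard immediately if nobody holds a reference;
// otherwise it only marks the shard so the last ReturnShard deletes it.
func (ds *ShardDatastore) DeleteShard(id uint32) {
	ds.mu.Lock()
	defer ds.mu.Unlock()
	s, ok := ds.shards[id]
	if !ok {
		return
	}
	if s.refcount > 0 {
		s.deleteOnRelease = true
		return
	}
	ds.deleteShard(id)
}

// ReturnShard releases one reference and performs the deferred deletion
// if the shard was marked while it was still in use.
func (ds *ShardDatastore) ReturnShard(id uint32) {
	ds.mu.Lock()
	defer ds.mu.Unlock()
	s, ok := ds.shards[id]
	if !ok {
		return
	}
	if s.refcount--; s.refcount == 0 && s.deleteOnRelease {
		ds.deleteShard(id)
	}
}

// deleteShard does the actual cleanup (close the shard, remove on-disk
// data); callers must hold ds.mu.
func (ds *ShardDatastore) deleteShard(id uint32) {
	delete(ds.shards, id)
	// closeShard(id) and filesystem cleanup would go here.
}
```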
Refactor integration/data_test.go to use Test* names for the tests
and remove the TestAll test case, which uses reflection to iterate
over all test functions. Also, change the two SingleServerSuite
test functions in this file to DataTestSuite functions.
The DataTestSuite now conforms to standard Go / gocheck test
conventions. Individual tests can be run. Groups of tests can be
run by specifying patterns. E.g.,
make integration_test only=DataTestSuite
...will run all tests in data_test.go. Or,
make integration_test only=DataTestSuite.Test.*Histogram
...will run all histogram related tests.
See the gocheck documentation for further details.
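For reference, this is roughly the gocheck wiring the suite ends up with; the import path, the `TestMedianAggregator` name, and the assertion body are assumptions for illustration only.

```go
package integration

import (
	"testing"

	. "launchpad.net/gocheck"
)

// Hook gocheck into "go test" once per package.
func Test(t *testing.T) { TestingT(t) }

// DataTestSuite groups the data tests; each exported Test* method is an
// individually runnable test.
type DataTestSuite struct{}

var _ = Suite(&DataTestSuite{})

// TestMedianAggregator is a placeholder name; real tests follow the same
// shape and can be selected by name pattern from the make target.
func (s *DataTestSuite) TestMedianAggregator(c *C) {
	c.Assert(1+1, Equals, 2)
}
```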
Conflicts:
integration/data_test.go
Fixes #853. Close #854. Previously, there was an unprotected endpoint in
raft to return the cluster config, which would include user hashes. This
endpoint is useful for debugging purposes, so I restructured it and moved
it to the API. It ensures the requesting user is a cluster admin.
Cluster config will now return all of the cluster state, including
servers, CQs, shards, etc.
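A rough sketch of the protected endpoint, assuming hypothetical names (`clusterConfigHandler`, `isClusterAdmin`, `loadClusterConfig`) rather than the actual API server code: the handler rejects non-admins and serves the cluster state without user hashes.

```go
package api

import (
	"encoding/json"
	"net/http"
)

// ClusterConfig is a trimmed-down stand-in for the real cluster state
// (servers, continuous queries, shards, ...) with no user hashes.
type ClusterConfig struct {
	Servers           []string `json:"servers"`
	ContinuousQueries []string `json:"continuousQueries"`
	ShardIds          []uint32 `json:"shardIds"`
}

// clusterConfigHandler only serves the cluster state to cluster admins.
func clusterConfigHandler(w http.ResponseWriter, r *http.Request) {
	if !isClusterAdmin(r) {
		http.Error(w, "unauthorized", http.StatusUnauthorized)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(loadClusterConfig())
}

// isClusterAdmin and loadClusterConfig stand in for the real auth check
// and metadata lookup.
func isClusterAdmin(r *http.Request) bool { return false }
func loadClusterConfig() ClusterConfig    { return ClusterConfig{} }
```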
Conflicts:
integration/single_server_test.go
Fixes #867. Close #927. Updated the lexer and parser, and added code to
the coordinator to include shard spaces if requested. Now the user can
list series along with their shard spaces: `list series include spaces`
Combine the three separate loops for DB creation, running setup
functions, and running tests into one loop. Add a DB delete at the
end of each test for cleanup.
This groups output for each test together in one place. It also has
the advantage of not running all DB creations and setup functions until
they're needed.
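A minimal sketch of the single-loop structure, using a hypothetical `testCase` type and callbacks rather than the actual integration harness:

```go
package integration

import "fmt"

// testCase bundles a test with its optional setup function.
type testCase struct {
	name  string
	setup func(db string) error
	run   func(db string) error
}

// runAll creates the database, runs setup, runs the test, and deletes the
// database again, all inside a single loop per test.
func runAll(cases []testCase, createDB, deleteDB func(string) error) error {
	for _, tc := range cases {
		db := "db_" + tc.name
		// Create the database and run setup only when the test is about
		// to execute, so its output stays grouped in one place.
		if err := createDB(db); err != nil {
			return fmt.Errorf("%s: create: %v", tc.name, err)
		}
		if tc.setup != nil {
			if err := tc.setup(db); err != nil {
				return fmt.Errorf("%s: setup: %v", tc.name, err)
			}
		}
		runErr := tc.run(db)
		// Always drop the database afterwards so no state leaks between tests.
		if err := deleteDB(db); err != nil && runErr == nil {
			runErr = err
		}
		if runErr != nil {
			return fmt.Errorf("%s: %v", tc.name, runErr)
		}
	}
	return nil
}
```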