This commit limits queries to processing only one shard at a time.
However, within a shard, multiple series can still be processed in
parallel. Shard iterators are lazily instantiated during query
execution to limit the amount of memory a given query uses.
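The pattern might look roughly like the following Go sketch, where point, shard and lazyShardIterator are illustrative stand-ins rather than the actual storage types:

```go
// Sketch: iterate shards one at a time, creating each shard's iterator
// only when the previous shard has been fully consumed, so at most one
// shard's iterator is held in memory per query.
package main

import "fmt"

// point and shard are stand-ins for the real storage types.
type point struct {
	time  int64
	value float64
}

type shard struct {
	id     uint64
	points []point
}

// open simulates building an iterator for a single shard. In the real
// engine this is the expensive step we want to defer.
func (s *shard) open() []point { return s.points }

// lazyShardIterator walks shards sequentially; within a shard, the real
// engine may still fan out across series in parallel.
type lazyShardIterator struct {
	shards  []shard
	current []point // points from the shard currently open
	shardID int
	pos     int
}

func (it *lazyShardIterator) next() (point, bool) {
	for {
		if it.pos < len(it.current) {
			p := it.current[it.pos]
			it.pos++
			return p, true
		}
		if it.shardID >= len(it.shards) {
			return point{}, false
		}
		// Lazily instantiate the next shard's iterator.
		it.current = it.shards[it.shardID].open()
		it.shardID++
		it.pos = 0
	}
}

func main() {
	it := &lazyShardIterator{shards: []shard{
		{id: 1, points: []point{{time: 1, value: 1.5}}},
		{id: 2, points: []point{{time: 2, value: 2.5}}},
	}}
	for p, ok := it.next(); ok; p, ok = it.next() {
		fmt.Printf("t=%d v=%v\n", p.time, p.value)
	}
}
```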
The highest time representable in nanoseconds is used as the exclusive
upper bound of a time range, so the maximum allowed point time needs to be
one less than the largest value representable by an int64. Otherwise a
point written at that exact time would be lost.
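A small Go illustration of the off-by-one reasoning; the constant name maxNanoTime and the helper inRange are only for illustration:

```go
package main

import (
	"fmt"
	"math"
)

// If a query's time range is half-open, [min, max+1), then a point stored
// at math.MaxInt64 could never be selected, because max+1 would overflow.
// Capping the largest writable timestamp at math.MaxInt64-1 keeps max+1
// representable, so the very last point is never lost.
const maxNanoTime = int64(math.MaxInt64) - 1

func inRange(t, min, max int64) bool {
	// Exclusive upper bound: max must be representable as max+1.
	return t >= min && t < max+1
}

func main() {
	fmt.Println(inRange(maxNanoTime, 0, maxNanoTime)) // true: the point is not lost
}
```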
This previously worked in the open source version because the timestamp
used to find a shard was truncated by the retention policy, so the lookup
time never hit this edge case: it never landed exactly on the truncation
boundary. Since that point didn't really belong in that shard group and
was only placed there by mistake, it's best to fix this bug so that the
timestamp used to create a shard group is also capable of retrieving it.
The automatically generated retention policy is now named "autogen"
instead of "default". The old name was ambiguous: when we told a user to
check the default retention policy, it was unclear whether we meant the
database's default retention policy (which can be changed) or the
retention policy literally named "default".
The default retention policy is now also configurable through the
configuration file, so an administrator can customize what they think the
default should be.
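A rough sketch of how such a configurable name could be consumed; the DefaultRetentionPolicyName field and the fallback logic are assumptions for illustration, not necessarily the shipped config option:

```go
package main

import "fmt"

// Hypothetical config struct: the retention policy that is auto-created
// with a new database can be renamed by the administrator.
type metaConfig struct {
	RetentionAutoCreate        bool
	DefaultRetentionPolicyName string // e.g. set in the configuration file
}

// defaultRetentionPolicyName falls back to "autogen" when the operator
// has not overridden the name.
func defaultRetentionPolicyName(c metaConfig) string {
	if c.DefaultRetentionPolicyName != "" {
		return c.DefaultRetentionPolicyName
	}
	return "autogen"
}

func main() {
	fmt.Println(defaultRetentionPolicyName(metaConfig{}))                                      // autogen
	fmt.Println(defaultRetentionPolicyName(metaConfig{DefaultRetentionPolicyName: "default"})) // default
}
```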
Fixes #3733.
Partially fixes #6094.
Prior to this, passing the same query and CQ name to a CREATE CONTINUOUS
QUERY command would return an error, which meant the command did not
behave in a similar way to other commands.
Now, when running the command with the same CQ name and query string, no
error will be returned. Note that this change does not parse the query; it
simply compares a normalised query string to the existing one on the CQ.
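A rough sketch of that comparison, where normalizeQuery is an illustrative stand-in rather than the actual helper used:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeQuery collapses whitespace and trims a trailing semicolon so
// that superficially different but equivalent CQ definitions compare
// equal. The query is not parsed.
func normalizeQuery(q string) string {
	q = strings.TrimSuffix(strings.TrimSpace(q), ";")
	return strings.Join(strings.Fields(q), " ")
}

func main() {
	existing := "SELECT mean(value) INTO avg_cpu FROM cpu GROUP BY time(1m)"
	incoming := "SELECT  mean(value) INTO avg_cpu FROM cpu GROUP BY time(1m);"

	if normalizeQuery(existing) == normalizeQuery(incoming) {
		fmt.Println("same CQ: treat the command as a no-op")
	} else {
		fmt.Println("different query for an existing CQ name: return an error")
	}
}
```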
Allows configuration of the shard group duration at database creation
time, and at retention policy create/alter time.
Query examples:
```
CREATE DATABASE testdb WITH DURATION 90d SHARD DURATION 30m NAME rp_testdb
CREATE RETENTION POLICY rp_testdb2 ON testdb DURATION INF REPLICATION 1 SHARD DURATION 30m
ALTER RETENTION POLICY rp_testdb2 ON testdb SHARD DURATION 1h
```
This can be useful for long-duration retention policies that hold a lot of
data, where splitting the data into smaller shards relieves memory pressure.
Fixes #5612, #5573 and #5518.
Using the MetaExecuter, queries that need to run on both data nodes
and optionally the meta store will be executed across all data nodes
in the cluster.
Fixes #5680.
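A simplified sketch of that fan-out pattern; dataNode, execute and executeOnAll are placeholders rather than the real cluster types:

```go
package main

import (
	"fmt"
	"sync"
)

// dataNode is a stand-in for a remote data node connection.
type dataNode struct {
	id uint64
}

// execute pretends to run the statement remotely and return its rows.
func (n dataNode) execute(stmt string) []string {
	return []string{fmt.Sprintf("node %d: result for %q", n.id, stmt)}
}

// executeOnAll runs a statement concurrently on every data node in the
// cluster and collects the per-node results for merging.
func executeOnAll(nodes []dataNode, stmt string) []string {
	var (
		mu      sync.Mutex
		results []string
		wg      sync.WaitGroup
	)
	for _, n := range nodes {
		wg.Add(1)
		go func(n dataNode) {
			defer wg.Done()
			rows := n.execute(stmt)
			mu.Lock()
			results = append(results, rows...)
			mu.Unlock()
		}(n)
	}
	wg.Wait()
	return results
}

func main() {
	nodes := []dataNode{{id: 1}, {id: 2}, {id: 3}}
	for _, r := range executeOnAll(nodes, "SHOW MEASUREMENTS") {
		fmt.Println(r)
	}
}
```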
When dropping a data node, the following will now happen in the Meta
Store:
1) If any shards no longer have any owners (because the data node being
   dropped was their only owner), they will be reassigned a new owner
   from within their respective shard group.
2) If a shard group no longer has any shards/data nodes, it will be
   marked as deleted.
When a shard is assigned a new owner, the data node with the fewest shards
in the shard group will be selected as the new owner.
Finally, checking the validity of a data node's ID now happens in the
Meta store, rather than in the state machine.
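A sketch of that selection rule, using simplified stand-in types rather than the meta store's actual structs:

```go
package main

import "fmt"

// shardInfo and shardGroup are simplified stand-ins for the meta data.
type shardInfo struct {
	id     uint64
	owners []uint64 // data node IDs
}

type shardGroup struct {
	shards []shardInfo
}

// pickNewOwner returns the candidate data node that currently owns the
// fewest shards within the group, so reassigned shards spread evenly.
func pickNewOwner(g shardGroup, candidates []uint64) uint64 {
	counts := make(map[uint64]int, len(candidates))
	for _, id := range candidates {
		counts[id] = 0
	}
	for _, s := range g.shards {
		for _, owner := range s.owners {
			if _, ok := counts[owner]; ok {
				counts[owner]++
			}
		}
	}
	best, bestCount := candidates[0], counts[candidates[0]]
	for _, id := range candidates[1:] {
		if counts[id] < bestCount {
			best, bestCount = id, counts[id]
		}
	}
	return best
}

func main() {
	g := shardGroup{shards: []shardInfo{
		{id: 1, owners: []uint64{2, 3}},
		{id: 2, owners: []uint64{3}},
	}}
	// Node 2 owns one shard and node 3 owns two, so node 2 is chosen.
	fmt.Println(pickNewOwner(g, []uint64{2, 3}))
}
```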
* Updated CreateShardGroup to not return an error if the shard group already exists, so it's idempotent (see the sketch after this list)
* Removed the old test making sure you can't delete the default RP. You can delete it now; there was no reason to disallow it.
* Wired up the UpdateRetentionPolicy functionality
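A minimal sketch of the idempotent CreateShardGroup behaviour mentioned in the first bullet, with illustrative types and an assumed truncation rule:

```go
package main

import (
	"fmt"
	"time"
)

type shardGroup struct {
	id         uint64
	start, end time.Time
}

type retentionPolicy struct {
	groups        []shardGroup
	shardDuration time.Duration
	nextID        uint64
}

// createShardGroup returns the existing group covering t instead of an
// error, so repeated calls for the same timestamp are idempotent.
func (rp *retentionPolicy) createShardGroup(t time.Time) shardGroup {
	for _, g := range rp.groups {
		if !t.Before(g.start) && t.Before(g.end) {
			return g // already exists: no error
		}
	}
	start := t.Truncate(rp.shardDuration)
	rp.nextID++
	g := shardGroup{id: rp.nextID, start: start, end: start.Add(rp.shardDuration)}
	rp.groups = append(rp.groups, g)
	return g
}

func main() {
	rp := &retentionPolicy{shardDuration: time.Hour}
	t := time.Date(2016, 3, 1, 10, 30, 0, 0, time.UTC)
	first := rp.createShardGroup(t)
	second := rp.createShardGroup(t)
	fmt.Println(first.id == second.id, len(rp.groups)) // true 1
}
```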
This ensures that the meta service will gracefully handle host name changes in a single-server configuration.
It also changes the raft setup to use the user-specified bind address (and thus the hostname) instead of pulling the address off the listener, which returns the IP. This will enable users to have hostnames listed instead of IPs in the meta store, making it easier to read. It also means that underlying IPs can change without causing problems in a cluster.
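A small Go illustration of the difference between the two address sources; the configured address here is only an example:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Suppose the operator configured this bind address in the config file.
	configured := "localhost:0"

	ln, err := net.Listen("tcp", configured)
	if err != nil {
		fmt.Println("listen failed:", err)
		return
	}
	defer ln.Close()

	// The listener reports the resolved IP (e.g. 127.0.0.1:54321), so
	// advertising ln.Addr() would put IPs rather than hostnames into the
	// meta store. Advertising the configured address keeps the hostname
	// and lets the underlying IP change without breaking the cluster.
	fmt.Println("configured bind address:", configured)
	fmt.Println("listener address:       ", ln.Addr().String())
}
```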
* Add dir, hostname, and bind address to top level config since it applies to services other than meta
* Add enabled flags to example toml for data and meta services
* Wire up add/remove raft peers and meta servers to meta service
* Update DROP SERVER to be either DROP META SERVER or DROP DATA SERVER
* Bring over statement executor from old meta package
* Start meta service client implementation
* Update meta service test to use the client
* Wire up node ID/meta server storage information