Partially addresses #6094.
Previously, when creating a retention policy, only the name was
considered when deciding whether the policy already existed. This meant that
adding a second policy with the same name but a different duration or
replication factor returned the original policy and no error.
This commit fixes that and ensures that name, duration, and replication
factor are all considered.
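As a rough illustration of the new check, with simplified, hypothetical types (`RetentionPolicyInfo` here is a stand-in for the real meta-store type, and `errRetentionPolicyConflict` is an invented name):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// Hypothetical, simplified stand-in for the meta-store's policy type.
type RetentionPolicyInfo struct {
	Name     string
	Duration time.Duration
	ReplicaN int
}

var errRetentionPolicyConflict = errors.New("retention policy already exists with different duration or replication factor")

// createRetentionPolicy illustrates the new check: a policy only counts as
// "already existing" when the name, duration, and replication factor all match.
func createRetentionPolicy(policies map[string]RetentionPolicyInfo, rp RetentionPolicyInfo) error {
	if existing, ok := policies[rp.Name]; ok {
		if existing.Duration != rp.Duration || existing.ReplicaN != rp.ReplicaN {
			return errRetentionPolicyConflict
		}
		return nil // identical policy; the create is idempotent
	}
	policies[rp.Name] = rp
	return nil
}

func main() {
	policies := map[string]RetentionPolicyInfo{}
	_ = createRetentionPolicy(policies, RetentionPolicyInfo{"rp0", time.Hour, 1})
	// Same name, different duration: now an error instead of silently
	// returning the original policy.
	err := createRetentionPolicy(policies, RetentionPolicyInfo{"rp0", 2 * time.Hour, 1})
	fmt.Println(err)
}
```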
This fixes a couple of issues with starting meta-only nodes.
1. We were always calling CreateDataNode regardless of whether the
node was running data services. We now only call it when the node is
data enabled (see the sketch after this list).
2. node.json was created alongside creating the data node. Since
we no longer create a data node, the file was never written. There
wasn't a simple way to do this in one place, so it is now handled
when creating either a meta or a data node. Since the ID assigned
to the node is the same regardless of role, this works in all combinations
of roles.
3. JoinMetaServer didn't return the ID of the joining node, which
created races when multiple nodes were joining. The join call now
returns that information to the caller.
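A rough sketch of the corrected startup path from items 1 and 2; `createMetaNode`, `createDataNode`, and `saveNodeFile` are hypothetical stand-ins for the real helpers:

```go
package main

import "fmt"

// Hypothetical stand-ins for the real startup helpers.
func createMetaNode() (uint64, error) { return 1, nil }
func createDataNode() (uint64, error) { return 1, nil }
func saveNodeFile(id uint64) error {
	fmt.Println("wrote node.json with id", id)
	return nil
}

// initNode sketches the corrected flow: CreateDataNode is only called when
// data services are enabled, and node.json is written for either role.
// The node ID is the same regardless of role, so one save works for
// meta-only, data-only, and combined nodes.
func initNode(metaEnabled, dataEnabled bool) error {
	var id uint64
	var err error
	if metaEnabled {
		if id, err = createMetaNode(); err != nil {
			return err
		}
	}
	if dataEnabled {
		if id, err = createDataNode(); err != nil {
			return err
		}
	}
	return saveNodeFile(id)
}

func main() {
	_ = initNode(true, false) // a meta-only node still gets node.json
}
```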
Fixes #5754.
Fixes #5653 and #5394.
Previously, dropping a retention policy did not propagate to local TSDB
shards; the retention policy would just be removed from the Meta Store.
This PR ensures that data associated with a retention policy is
removed when the retention policy is dropped.
It also cleans up a couple of other methods in `tsdb`, including removing
the requirement to provide (redundant) shard IDs when deleting databases.
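A minimal sketch of the local cleanup that now accompanies a DROP RETENTION POLICY, assuming the usual `<data-dir>/<database>/<rp>/<shard>` on-disk layout; the path handling is illustrative, not the exact `tsdb` code:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// deleteRetentionPolicy removes the shard data stored locally for a
// retention policy. Shard directories live under
// <data-dir>/<database>/<rp>, so removing that subtree deletes the data.
func deleteRetentionPolicy(dataDir, database, rp string) error {
	path := filepath.Join(dataDir, database, rp)
	if _, err := os.Stat(path); os.IsNotExist(err) {
		return nil // nothing stored locally for this policy
	}
	return os.RemoveAll(path)
}

func main() {
	if err := deleteRetentionPolicy("/var/lib/influxdb/data", "db0", "rp0"); err != nil {
		fmt.Println("drop failed:", err)
	}
}
```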
Dropping a meta node that had already been removed from the config
would fail because the raft.RemovePeer call would return an error
that the address was unknown. This change skips calling RemovePeer
if the peer doesn't exist.
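The shape of the guard, as a sketch; `raftRemovePeer` stands in for the raft library's peer-removal call:

```go
package main

import "fmt"

// raftRemovePeer stands in for the raft library's RemovePeer call.
func raftRemovePeer(addr string) error {
	fmt.Println("removing peer", addr)
	return nil
}

// removePeerIfPresent only calls RemovePeer when the address is still in
// the peer set, so dropping a node that was already removed from the
// config is a no-op instead of an "unknown peer" error.
func removePeerIfPresent(peers []string, addr string) error {
	for _, p := range peers {
		if p == addr {
			return raftRemovePeer(addr)
		}
	}
	return nil // peer already gone; nothing to do
}

func main() {
	_ = removePeerIfPresent([]string{"host1:8088"}, "host2:8088")
}
```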
Dropping a non-existent ID would hang for 10 seconds because
meta.Client's retryUntilExec didn't differentiate between command errors
and redirect errors. In this case, the command would return an error,
but we'd try 10 more times and ultimately give up and return the error.
We now return immediately if the command returned an error, because
retrying it will not succeed.
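A sketch of the distinction; `errRedirect` is a hypothetical stand-in for the client's "not the leader, try another server" condition:

```go
package main

import (
	"errors"
	"fmt"
)

// errRedirect stands in for the "not the leader" condition; the real
// client's error handling differs in detail.
var errRedirect = errors.New("redirect: not the leader")

// retryUntilExec sketches the fix: a redirect means we should retry
// against another meta server, but an error returned by the command
// itself will not succeed on retry, so it is returned immediately
// instead of retrying for the full timeout.
func retryUntilExec(exec func() error, maxAttempts int) error {
	var err error
	for i := 0; i < maxAttempts; i++ {
		err = exec()
		if err == nil {
			return nil
		}
		if !errors.Is(err, errRedirect) {
			return err // command error: retrying cannot help
		}
		// Redirect error: fall through and try the next server.
	}
	return err
}

func main() {
	err := retryUntilExec(func() error { return errors.New("node id not found") }, 10)
	fmt.Println(err) // returns on the first attempt instead of hanging
}
```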
Finally, the join loop had no delay and would immediately try to join
the other nodes hundreds of times a second. We now pause for a second once
we've tried every node at least once.
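The backoff in miniature; `tryJoin` is a hypothetical callback standing in for the real join attempt:

```go
package main

import (
	"fmt"
	"time"
)

// joinLoop sketches the added delay: after each full pass over the peer
// list without success, sleep for a second rather than spinning through
// the addresses hundreds of times per second.
func joinLoop(peers []string, tryJoin func(addr string) bool) {
	for {
		for _, addr := range peers {
			if tryJoin(addr) {
				return
			}
		}
		// Every peer has been tried at least once; pause before retrying.
		time.Sleep(time.Second)
	}
}

func main() {
	attempts := 0
	joinLoop([]string{"meta1:8091", "meta2:8091"}, func(addr string) bool {
		attempts++
		fmt.Println("trying", addr)
		return attempts >= 3 // pretend the third attempt succeeds
	})
}
```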
Previously, meta.Client would drop the default retention policy when
trying to create a database with a retention policy. The RPC has now
been modified to include the desired retention policy in the
CreateDatabase command and to use that retention policy
instead of the default configuration when one is provided.
This also lowers the number of RPC calls for
CreateDatabaseWithRetentionPolicy to only a single RPC call instead of
two.
Protections have also been added so that creating a retention policy with
different parameters returns an error, just as if you had tried to
modify the retention policy separately.
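A sketch of the single-RPC flow, with simplified, hypothetical command and spec types (the real RPC uses protobuf-encoded commands):

```go
package main

import (
	"fmt"
	"time"
)

// Hypothetical, simplified stand-ins for the RPC command types.
type retentionPolicySpec struct {
	Name     string
	Duration time.Duration
	ReplicaN int
}

type createDatabaseCommand struct {
	Name            string
	RetentionPolicy *retentionPolicySpec // nil means "use the default policy"
}

// createDatabaseWithRetentionPolicy sketches the reduced flow: the desired
// retention policy rides along in the CreateDatabase command, so one RPC
// creates both, instead of one call for the database and a second for the
// policy.
func createDatabaseWithRetentionPolicy(exec func(createDatabaseCommand) error, db string, rp retentionPolicySpec) error {
	return exec(createDatabaseCommand{Name: db, RetentionPolicy: &rp})
}

func main() {
	exec := func(c createDatabaseCommand) error {
		fmt.Printf("one RPC: create %q with policy %+v\n", c.Name, *c.RetentionPolicy)
		return nil
	}
	_ = createDatabaseWithRetentionPolicy(exec, "db0", retentionPolicySpec{"rp0", time.Hour, 1})
}
```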
Fixes #5696.
This removes the MetaServers property from node.json to eliminate one
of the four places those addresses are stored on disk. We always use
the values that come through the config (via file, env var or -join arg).
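The resolution order in sketch form; the precedence and the `INFLUXDB_META_JOIN` variable name are illustrative assumptions, not the exact configuration code:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// resolveJoinAddrs sketches where meta server addresses now come from:
// only the config file, the environment, or the -join flag, never
// node.json. Here the flag beats the env var, which beats the file.
func resolveJoinAddrs(fromFile []string, envVar, joinFlag string) []string {
	if joinFlag != "" {
		return strings.Split(joinFlag, ",")
	}
	if v := os.Getenv(envVar); v != "" {
		return strings.Split(v, ",")
	}
	return fromFile
}

func main() {
	fmt.Println(resolveJoinAddrs([]string{"meta1:8091"}, "INFLUXDB_META_JOIN", "meta2:8091,meta3:8091"))
}
```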
Possible fix for #5437. meta.Client.RetentionPolicy acquired a read-lock and
then called Database, which called data(), which acquired a read-lock again.
If a write lock was requested between these two read-locks (likely by Authenticate),
the writer would block on the first read-lock, and the second read-lock would
block behind the waiting writer, causing a deadlock.
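The pattern in miniature, with simplified stand-ins for meta.Client's types: sync.RWMutex blocks new readers while a writer is waiting, which is what makes the nested RLock fatal. The sketch shows the shape of the fix, taking the snapshot once instead of re-locking in the callee:

```go
package main

import (
	"fmt"
	"sync"
)

// Simplified stand-in for meta.Client's cached cluster metadata.
type data struct {
	policies map[string]string // database -> default retention policy
}

type client struct {
	mu        sync.RWMutex
	cacheData data
}

// data returns a snapshot under the read-lock. Safe on its own, but calling
// it while already holding mu.RLock is the deadlock: a writer arriving
// between the two RLocks waits on the first, and the second waits on the
// writer.
func (c *client) data() data {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.cacheData
}

// RetentionPolicy sketches the fix: take the snapshot once via data() and
// work on it, rather than locking and then calling a method that locks again.
func (c *client) RetentionPolicy(db string) (string, error) {
	d := c.data() // single lock acquisition; no nested RLock
	rp, ok := d.policies[db]
	if !ok {
		return "", fmt.Errorf("database not found: %s", db)
	}
	return rp, nil
}

func main() {
	c := &client{cacheData: data{policies: map[string]string{"db0": "rp0"}}}
	fmt.Println(c.RetentionPolicy("db0"))
}
```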
* Improve the ping endpoint so that it can optionally check for leader agreement across all meta servers
* Add Ping method to the meta client
* Fix ClusterID tests
* Remove WaitForLeader from meta client and remove unnecessary references to it
* Updated CreateShardGroup to not return an error if the group already exists, so it's idempotent (see the sketch after this list)
* Removed old test making sure you can't delete the default RP. You can delete it now; there was no reason to disallow it.
* Wired up the UpdateRetentionPolicy functionality
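A sketch of the idempotent CreateShardGroup behavior noted above; the daily group window and the in-memory map are illustrative simplifications:

```go
package main

import (
	"fmt"
	"time"
)

// Hypothetical, simplified shard-group lookup keyed by window start time.
type shardGroup struct{ start time.Time }

var groups = map[time.Time]shardGroup{}

// createShardGroup returns the existing group when one already covers the
// timestamp, instead of an error, so concurrent or repeated creates for the
// same window all succeed.
func createShardGroup(t time.Time) shardGroup {
	start := t.Truncate(24 * time.Hour) // illustrative group window
	if g, ok := groups[start]; ok {
		return g // already exists: idempotent, not an error
	}
	g := shardGroup{start: start}
	groups[start] = g
	return g
}

func main() {
	a := createShardGroup(time.Now())
	b := createShardGroup(time.Now()) // second call returns the same group
	fmt.Println(a.start.Equal(b.start))
}
```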