Previously, meta.Client would drop the default retention policy when
trying to create a database with a retention policy. The RPC has now
been modified to include the desired retention policy in the
CreateDatabase command and to use that retention policy instead of the
default configuration when one is provided.
This also reduces CreateDatabaseWithRetentionPolicy from two RPC calls
to a single one.
Protections have also been added so that creating a retention policy
with different parameters returns an error, just as attempting to
modify the retention policy separately would.
Fixes #5696.
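A minimal sketch of the parameter check described above, using illustrative local types rather than the real meta structures (the field set compared and the error name are assumptions):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// Illustrative stand-in for the retention policy info carried in the command;
// these fields are assumptions, not the exact meta package definitions.
type retentionPolicy struct {
	Name     string
	Duration time.Duration
	ReplicaN int
}

var errRetentionPolicyConflict = errors.New("retention policy already exists with different parameters")

// checkRetentionPolicy mirrors the new protection: a CreateDatabase that names
// an existing retention policy succeeds only if the parameters match; otherwise
// it errors, just as a separate modification attempt would.
func checkRetentionPolicy(existing, requested *retentionPolicy) error {
	if existing == nil {
		return nil // no existing policy, nothing to conflict with
	}
	if existing.Duration != requested.Duration || existing.ReplicaN != requested.ReplicaN {
		return errRetentionPolicyConflict
	}
	return nil
}

func main() {
	have := &retentionPolicy{Name: "default", Duration: 24 * time.Hour, ReplicaN: 1}
	want := &retentionPolicy{Name: "default", Duration: 48 * time.Hour, ReplicaN: 1}
	fmt.Println(checkRetentionPolicy(have, want)) // durations differ -> error
}
```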
This removes the MetaServers property from node.json to eliminate one
of the four places meta server addresses are stored on disk. We always
use the values that come through the config (via the config file, an
environment variable, or the -join argument).
I was trying to create a Diagnostics Client in the tsdb package, but
IIRC importing `monitor` caused an import cycle of:
tsdb -> monitor -> cluster -> tsdb.
Moving Diagnostics to its own package will allow further use of
diagnostics.Client without running into import cycles.
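For reference, roughly the shape a standalone diagnostics package enables; the exact types are illustrative, the point being that it imports nothing from tsdb, monitor, or cluster, so any of them can depend on it without creating a cycle:

```go
// Package diagnostics (sketch): a small, dependency-free home for the client
// interface so that tsdb can implement it without importing monitor.
package diagnostics

// Client is implemented by anything that can report diagnostics.
type Client interface {
	Diagnostics() (*Diagnostics, error)
}

// Diagnostics is a simple tabular payload of diagnostic data.
type Diagnostics struct {
	Columns []string
	Rows    [][]interface{}
}
```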
Meta HTTP commands are cluster-level requests and were showing up in
the main log, creating a lot of noise. Switch them to use the
ClusterTracing config option, which is disabled by default.
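Sketch of where the option lives; the struct shape and toml key below are assumptions about the meta service config, shown only to illustrate that this logging is now opt-in:

```go
package meta

// Config (sketch): ClusterTracing gates logging of cluster-level (meta HTTP)
// requests so they stay out of the main log unless explicitly enabled.
type Config struct {
	ClusterTracing bool `toml:"cluster-tracing"`
}

// NewConfig returns a config with cluster tracing disabled by default.
func NewConfig() *Config {
	return &Config{ClusterTracing: false}
}
```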
Previously, the lease redirect was invalid, causing anything relying
on a lease for execution (e.g. continuous queries) to cease functioning.
The name/nodeid URL param parsing has been moved to the top of the
handler so the options can be forwarded on to the real leader.
X-Github-Closes: #5592
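The reordering looks roughly like the sketch below; isLeader and leaderAddr stand in for whatever the real handler consults, and the leader-side lease logic is elided:

```go
package meta

import (
	"fmt"
	"net/http"
	"net/url"
)

// leaseHandler (sketch): parse the lease options first so that when this node
// is not the leader, the redirect to the real leader keeps name and nodeid
// intact instead of pointing at a bare /lease URL.
func leaseHandler(isLeader func() bool, leaderAddr func() string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		name := r.URL.Query().Get("name")
		nodeID := r.URL.Query().Get("nodeid")
		if name == "" || nodeID == "" {
			http.Error(w, "lease name and nodeid required", http.StatusBadRequest)
			return
		}

		if !isLeader() {
			// Forward the options along with the redirect.
			target := fmt.Sprintf("http://%s/lease?name=%s&nodeid=%s",
				leaderAddr(), url.QueryEscape(name), url.QueryEscape(nodeID))
			http.Redirect(w, r, target, http.StatusTemporaryRedirect)
			return
		}

		// Leader path: acquire or renew the lease for (name, nodeID) here.
		fmt.Fprintf(w, "lease %q held by node %s\n", name, nodeID)
	}
}
```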
* pass the configured precision string to point parsing
* add a Precision option to the UDP config (sketch below)
* default the configured precision to nanoseconds, matching what ParsePoints currently assumes
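A sketch of the new option; the surrounding fields, toml keys, and the defaulting helper are assumptions about the actual UDP service config:

```go
package udp

// Config (sketch): the new precision option for the UDP listener.
type Config struct {
	BindAddress string `toml:"bind-address"`
	Database    string `toml:"database"`

	// Precision of timestamps on incoming points ("n", "u", "ms", "s", "m", "h").
	// The value is passed through to point parsing instead of always assuming
	// nanoseconds.
	Precision string `toml:"precision"`
}

// WithDefaults fills in nanosecond precision when none is configured.
func (c Config) WithDefaults() Config {
	if c.Precision == "" {
		c.Precision = "n"
	}
	return c
}
```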
Possible fix for #5437. meta.Client.RetentionPolicy acquired a read lock and
then called Database, which called data(), which acquired a read lock again.
If a write lock was requested between these two read locks (likely by
Authenticate), the write lock would block waiting on the first read lock, and
the second read lock would block behind the pending write lock, causing a
deadlock.
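A minimal reproduction of that locking pattern, with a bare sync.RWMutex standing in for the client's lock; run as-is, the Go runtime reports "all goroutines are asleep - deadlock!":

```go
package main

import (
	"sync"
	"time"
)

var mu sync.RWMutex

func retentionPolicy() {
	mu.RLock() // first read lock (RetentionPolicy)
	defer mu.RUnlock()
	time.Sleep(100 * time.Millisecond) // give the writer time to queue up
	database()
}

func database() {
	mu.RLock() // nested read lock (Database -> data()); blocks behind the pending writer
	defer mu.RUnlock()
}

func authenticate() {
	mu.Lock() // write lock requested between the two read locks
	defer mu.Unlock()
}

func main() {
	go retentionPolicy()
	time.Sleep(10 * time.Millisecond)
	authenticate() // writer waits on the reader, the reader's nested RLock waits on the writer
}
```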
Previously if you issued a CQ with a resample interval higher than the
query interval, such as the following:
CREATE CONTINUOUS QUERY cq ON db
RESAMPLE EVERY 4m
BEGIN
SELECT mean(value) INTO cpu_mean FROM cpu GROUP BY time(2m)
END
This resulted in strange behavior because the FOR value defaulted to
the GROUP BY interval while the minimum time that had to pass before a
CQ ran was the resample interval, so the CQ wouldn't run over the
appropriate intervals even if you set the resample duration to a higher
value.
This tweaks the CQ runner to set the minimum interval before a bucket
becomes capable of running to the lower of the query interval or the
resample interval instead of always using the resample interval.
It also sets the default resample duration to be the higher value of the
query interval or the resample interval so the above query gets a
default of 4m instead of 2m and will execute 2 queries every 4 minutes.
If you manually set the resample duration to a lower value than the
resample interval, the old behavior will still happen and should be
considered an error.
This also makes the parser return an error when a continuous query is
created with a resample duration below the resample interval or query
interval (whichever is higher).
Fixes #5286.
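The two rules above, written out as a rough Go sketch; the function names and exact semantics are illustrative, not the CQ service's actual code:

```go
package main

import (
	"fmt"
	"time"
)

// interval is the GROUP BY time() interval and every is RESAMPLE EVERY.

// minRunInterval is how much time must pass before a bucket may run:
// the lower of the query interval and the resample interval.
func minRunInterval(interval, every time.Duration) time.Duration {
	if every != 0 && every < interval {
		return every
	}
	return interval
}

// defaultResampleFor is the default FOR duration when none is given:
// the higher of the query interval and the resample interval.
func defaultResampleFor(interval, every time.Duration) time.Duration {
	if every > interval {
		return every
	}
	return interval
}

func main() {
	interval := 2 * time.Minute // GROUP BY time(2m)
	every := 4 * time.Minute    // RESAMPLE EVERY 4m
	fmt.Println(minRunInterval(interval, every))     // 2m: buckets stay eligible at the query interval
	fmt.Println(defaultResampleFor(interval, every)) // 4m: each run covers two 2m intervals
}
```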
If a bind-address of :8088 is used, cluster nodes cannot connect to
those nodes because there is no hostname portion of the address. When
we see a bind-address without a hostname and no hostname is specified
in the config, use the OS hostname, falling back to localhost if that
lookup fails.
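The defaulting rule amounts to something like the sketch below; the function name and signature are illustrative:

```go
package main

import (
	"fmt"
	"net"
	"os"
)

// defaultHost (sketch): when the configured bind address has no host portion
// (e.g. ":8088") and no hostname is set elsewhere in the config, use the OS
// hostname, falling back to "localhost".
func defaultHost(bindAddr, configuredHostname string) (string, error) {
	host, port, err := net.SplitHostPort(bindAddr)
	if err != nil {
		return "", err
	}
	if host != "" {
		return bindAddr, nil // already has a host portion
	}
	if configuredHostname != "" {
		host = configuredHostname
	} else if h, err := os.Hostname(); err == nil && h != "" {
		host = h
	} else {
		host = "localhost"
	}
	return net.JoinHostPort(host, port), nil
}

func main() {
	addr, _ := defaultHost(":8088", "")
	fmt.Println(addr) // e.g. "myhost:8088", or "localhost:8088" if the OS lookup fails
}
```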
* Improve the ping endpoint so that it can optionally check for leader agreement across all meta servers (sketch below)
* Add Ping method to the meta client
* Fix ClusterID tests
* Remove WaitForLeader from meta client and remove unnecessary references to it
* Updated CreateShardGroup to not return an error if it already exists so it's idempotent
* Removed old test making sure you can't delete the default RP. You can delete it now, there was no reason to disallow it.
* Wired up the UpdateRetentionPolicy functionality
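The ping improvement in the first bullet boils down to the idea sketched here: a plain ping answers immediately, while the optional check asks every meta server who it thinks the leader is and fails unless they agree. The query parameter, helpers, and status codes are assumptions, not the actual handler:

```go
package meta

import (
	"fmt"
	"net/http"
)

// servePingSketch: "?all=true" triggers the leader-agreement check across the
// given meta servers; otherwise it is a simple liveness ping.
func servePingSketch(w http.ResponseWriter, r *http.Request, leaderOf func(server string) (string, error), servers []string) {
	if r.URL.Query().Get("all") == "" {
		w.WriteHeader(http.StatusNoContent)
		return
	}

	var leader string
	for _, s := range servers {
		l, err := leaderOf(s)
		if err != nil {
			http.Error(w, fmt.Sprintf("could not reach %s: %v", s, err), http.StatusInternalServerError)
			return
		}
		if leader == "" {
			leader = l
		} else if l != leader {
			http.Error(w, "meta servers do not agree on a leader", http.StatusInternalServerError)
			return
		}
	}
	w.WriteHeader(http.StatusNoContent)
}
```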
This ensures that the meta service will gracefully handle host name changes in a single server configuration.
It also changes the raft setup to use the user-specified bind address (and thus hostname) instead of pulling it off the listener, which returns the IP. This will let users have hostnames listed instead of IPs in the meta store, making it easier to read. It also means that underlying IPs can change without causing problems in a cluster.
* Increase the sleep on error in client exec so that if a server goes down we don't max out retries before a new leader gets elected
* Update and add close logic to service, handler, raft state, and the client
* Add dir, hostname, and bind address to top level config since they apply to services other than meta (sketch below)
* Add enabled flags to example toml for data and meta services
* Wire up add/remove raft peers and meta servers to meta service
* Update DROP SERVER to be either DROP META SERVER or DROP DATA SERVER
* Bring over statement executor from old meta package
* Start meta service client implementation
* Update meta service test to use the client
* Wire up node ID/meta server storage information
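A rough sketch of the config shape the dir/hostname/bind-address and enabled-flag bullets describe; the struct layout and toml keys are assumptions, shown only to indicate which settings sit at the top level and where the new flags live:

```go
package config

// Config (sketch): top-level settings shared by services beyond meta, plus the
// new per-service enabled flags for the meta and data services.
type Config struct {
	Dir         string `toml:"dir"`          // base data directory
	Hostname    string `toml:"hostname"`     // hostname advertised to the rest of the cluster
	BindAddress string `toml:"bind-address"` // shared bind address for cluster services

	Meta struct {
		Enabled bool `toml:"enabled"`
	} `toml:"meta"`

	Data struct {
		Enabled bool `toml:"enabled"`
	} `toml:"data"`
}
```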