NewTestConfig() would enable the broker and data nodes, so running
influxd w/o a config file would start both. If you ran influxd w/ a
config file but did not explicitly set Data.Enabled or Broker.Enabled,
the server would not start. This is not intuitive when moving to a
config file setup.
Instead, the broker and data nodes are now enabled when a config file
is used (just as w/o one), and they must be explicitly disabled to run
in a data-only or broker-only mode. This helps w/ backwards
compatibility for existing config files.
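A minimal sketch of the enabled-by-default semantics; the Config/NewConfig names here are illustrative stand-ins, not the actual influxd types:
```go
package main

import "fmt"

// Config is an illustrative stand-in for the influxd config, not the real type.
type Config struct {
	Broker struct{ Enabled bool }
	Data   struct{ Enabled bool }
}

// NewConfig returns a config with both node types enabled, matching the
// behavior w/ and w/o a config file; a config file must now explicitly set
// Enabled = false to run a single-role node.
func NewConfig() *Config {
	c := &Config{}
	c.Broker.Enabled = true
	c.Data.Enabled = true
	return c
}

func main() {
	c := NewConfig()
	// A config file that omits Broker.Enabled / Data.Enabled leaves both true.
	fmt.Println(c.Broker.Enabled, c.Data.Enabled) // true true
}
```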
If a node is restarted and it has already joined the cluster, the
join URLs are ignored; a log message notes that they are being ignored
and that the existing cluster state will be used.
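A minimal sketch of that restart check, assuming hypothetical names (hasExistingClusterState, opts.JoinURLs) rather than the actual influxd identifiers:
```go
package main

import "log"

type options struct {
	JoinURLs []string
}

// hasExistingClusterState stands in for whatever check detects that this node
// already holds cluster metadata from a previous run.
func hasExistingClusterState(path string) bool { return true }

func main() {
	opts := options{JoinURLs: []string{"http://10.0.0.1:8086"}}
	if hasExistingClusterState("/var/opt/influxdb") && len(opts.JoinURLs) > 0 {
		// Log and ignore: the existing cluster state wins over the join URLs.
		log.Printf("join URLs %v ignored: using existing cluster state", opts.JoinURLs)
		opts.JoinURLs = nil
	}
}
```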
When starting multiple servers concurrently, they can race to connect
to each other. This change retries failed join attempts to make
cluster setup easier.
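A rough sketch of the retry, assuming hypothetical joinWithRetry/joinCluster helpers and illustrative retry parameters:
```go
package main

import (
	"fmt"
	"log"
	"time"
)

// joinCluster stands in for the real join request; it is only a stub here.
func joinCluster(joinURL string) error { return fmt.Errorf("connection refused") }

// joinWithRetry retries the join so that concurrently started nodes do not
// fail permanently just because a peer is not listening yet.
func joinWithRetry(joinURL string) error {
	var err error
	for i := 0; i < 10; i++ {
		if err = joinCluster(joinURL); err == nil {
			return nil
		}
		log.Printf("join to %s failed: %v, retrying...", joinURL, err)
		time.Sleep(time.Second)
	}
	return fmt.Errorf("unable to join cluster at %s: %v", joinURL, err)
}

func main() {
	if err := joinWithRetry("http://10.0.0.2:8086"); err != nil {
		log.Fatal(err)
	}
}
```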
This removes all join URLs from the config. To join a node to a
cluster, the URL of another member of the cluster should be passed
on the command line w/ the -join flag. The join URL can now point to
any node, regardless of whether that node is a broker-only or
data-only node. At join time, the receiving node will redirect the
request to a valid broker or data node if it cannot handle the request
itself.
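A hedged sketch of the joining side; the -join flag comes from the text above, but the /join endpoint path, request shape, and addresses are illustrative, not the real influxd API:
```go
package main

import (
	"flag"
	"fmt"
	"log"
	"net/http"
)

func main() {
	joinURL := flag.String("join", "", "URL of any existing cluster member")
	flag.Parse()
	if *joinURL == "" {
		log.Fatal("no join URL given")
	}

	// http.Client follows redirects by default, so it does not matter whether
	// the URL points at a broker-only or data-only node: the receiving node
	// can redirect to a peer that is able to handle the request.
	resp, err := http.Get(*joinURL + "/join")
	if err != nil {
		log.Fatalf("join failed: %v", err)
	}
	defer resp.Body.Close()
	fmt.Println("join response:", resp.Status)
}
```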
How a cluster is set up has changed and this test is failing w/
```
panic: assert failed: invalid initial server id: 2 [recovered]
```
There is an existing multi-node test w/ a broker and two data
nodes, so we're still covering this case and will need to come
back to it.
To add a new data node, it currently needs both a broker
and another data node to join. This temporarily adds
a JoinURLs option to the Data node section so a
standalone data node can be created, but the intent is
that this option will be removed.
Ideally, the join URL could point to either a data node
or a broker and the node would get the required URLs from that
host, but that is not possible currently.
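A rough sketch of the temporary option; the exact field name, type, and TOML key are assumptions, not the actual influxd config:
```go
package main

import "fmt"

// DataConfig is an illustrative stand-in for the Data node config section.
type DataConfig struct {
	Enabled bool `toml:"enabled"`
	// JoinURLs is the temporary escape hatch described above: it lets a
	// standalone data node be pointed at a broker directly, until the join
	// URL can be any node.
	JoinURLs string `toml:"join-urls"`
}

func main() {
	c := DataConfig{Enabled: true, JoinURLs: "http://10.0.0.1:8086"}
	fmt.Printf("%+v\n", c)
}
```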
The previous behavior caused "[srvr]" to print out during usage, e.g. in
`influxd help run`:
```
[srvr] 2015/04/06 11:58:04 usage: run [flags]
run starts the broker and data node server....
```
When a data node started up, the broker URLs were not set before
they were actually used. The call to client.Open() in turn triggers
the raft streamer and heartbeat, which try to connect to the broker.
If those started before the subsequent client.SetURLs() call, you
would see the following error in the logs at startup:
```
[messaging] 2015/04/01 11:59:22 reconnecting to broker: url={ <nil> /messaging/messages index=2&streaming=true&topicID=0 }, err=Get /messaging/messages?index=2&streaming=true&topicID=0: unsupported protocol scheme ""
```
Fixing this race uncovered another bug: the join URLs would be
cleared the first time the broker was started. In this case, the
join URLs should be left alone since they were already set properly
w/ SetURLs().
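A minimal sketch of the ordering fix, with an illustrative Client type standing in for the actual messaging client:
```go
package main

import (
	"log"
	"net/url"
	"sync"
)

// Client is an illustrative stand-in for the messaging client, not the real type.
type Client struct {
	mu   sync.Mutex
	urls []url.URL
}

// SetURLs records the broker URLs the client should connect to.
func (c *Client) SetURLs(urls []url.URL) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.urls = append([]url.URL(nil), urls...)
}

// Open starts background connections; if it runs before SetURLs, the
// goroutines dial an empty URL ("unsupported protocol scheme").
func (c *Client) Open() error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if len(c.urls) == 0 {
		log.Println("warning: opening client with no broker URLs")
	}
	// ... start raft streamer and heartbeat goroutines here ...
	return nil
}

func main() {
	u, _ := url.Parse("http://10.0.0.1:8086")
	c := &Client{}
	c.SetURLs([]url.URL{*u}) // set the broker URLs first ...
	if err := c.Open(); err != nil { // ... then open the client
		log.Fatal(err)
	}
}
```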
Fixes #2152
This is a prerequisite for #1934. When running separate
broker and data nodes, you currently need to know what role
a host is performing. This complicates cluster setup in
that you must configure separate broker URLs and data node
URLs.
This change allows a broker-only node to redirect data node endpoints
to a valid data node, and a data-only node to redirect broker
endpoints to a valid broker.
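A hedged sketch of the redirect behavior, using an illustrative handler, endpoint path, and peer address rather than the real influxd routes:
```go
package main

import (
	"log"
	"net/http"
)

// redirectToDataNode is what a broker-only node might do when it receives a
// request meant for a data node: send the client to a peer that can serve it.
func redirectToDataNode(dataNodeURL string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		// 307 asks the client to preserve the method and body, so writes are
		// redirected intact.
		http.Redirect(w, r, dataNodeURL+r.URL.Path, http.StatusTemporaryRedirect)
	}
}

func main() {
	// Assume this process is a broker-only node and 10.0.0.2 is a known data node.
	http.Handle("/write", redirectToDataNode("http://10.0.0.2:8086"))
	log.Fatal(http.ListenAndServe(":8086", nil))
}
```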
Refactored the query engine to use a different processing pipeline for raw queries. This enables queries that have a large offset to avoid keeping everything in memory. It also makes it so that queries against raw data that have a limit will only process up to that limit and then bail out.
Raw data queries will only read up to a certain point in the map phase before yielding to the engine for further processing.
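A simplified sketch of the idea, with illustrative types and chunk size standing in for the actual query engine: the map phase reads in bounded chunks, points before the offset are discarded rather than buffered, and reading stops once the limit is satisfied.
```go
package main

import "fmt"

type point struct {
	time  int64
	value float64
}

// mapRawChunk reads at most chunkSize points starting at cursor, so a raw
// query never pulls the whole series into memory at once.
func mapRawChunk(series []point, cursor, chunkSize int) ([]point, int) {
	end := cursor + chunkSize
	if end > len(series) {
		end = len(series)
	}
	return series[cursor:end], end
}

func main() {
	series := make([]point, 1000)
	limit, offset := 10, 500

	var out []point
	skipped, cursor := 0, 0
	for len(out) < limit && cursor < len(series) {
		var chunk []point
		chunk, cursor = mapRawChunk(series, cursor, 100)
		for _, p := range chunk {
			if skipped < offset { // points before the offset are discarded, not kept
				skipped++
				continue
			}
			if len(out) < limit {
				out = append(out, p)
			}
		}
	}
	// The engine bails out here: only offset+limit points were ever read.
	fmt.Println("returned", len(out), "points after reading", cursor)
}
```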
Fixes #2029 and fixes #2030