It is now possible to compare tags with fields, and it is also now
possible to compare tags with other tags. Previously, it was only
possible to compare fields with fields and tags with a string or a
regex.
Fixes #3371.
This commit makes a number of performance improvements to
reduce allocations during query execution. Several objects
and buffers are now reused across the components to avoid
allocations.
Previously a simple `count(value)` query across 1M points
would require 26,000+ allocations. After the changes in
this commit that number has been reduced to 88.
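For illustration only (not the actual engine code), the reuse pattern
looks roughly like this: the iterator owns a single point buffer and
hands the same pointer back on every call instead of allocating a fresh
point per row.
```
package main

import "fmt"

// FloatPoint is a stand-in for the engine's point type; the real type
// carries more fields.
type FloatPoint struct {
	Time  int64
	Value float64
}

// countIterator demonstrates the reuse pattern: one FloatPoint is
// allocated per iterator and overwritten on each Next() call rather
// than allocating a fresh point for every value.
type countIterator struct {
	n     int64
	done  bool
	point FloatPoint // reused buffer
}

func (itr *countIterator) Next() *FloatPoint {
	if itr.done {
		return nil
	}
	itr.done = true
	itr.point.Time = 0
	itr.point.Value = float64(itr.n)
	return &itr.point // callers must copy if they need to retain the value
}

func main() {
	itr := &countIterator{n: 1000000}
	for p := itr.Next(); p != nil; p = itr.Next() {
		fmt.Println(p.Value)
	}
}
```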
The tsdb package can't have a dependency on the meta package so it takes
a slice of uint64 types. The clustering implementation needs the full
ShardInfo to know the shard owners though, so a different implementation
needs to be used by clustering.
The `*tsdb.Store` type gets wrapped in the cluster package so it can
implement the `IteratorCreator` function without having a dependency on
the meta package.
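A rough sketch of that wrapping, with all names and signatures
simplified for illustration rather than taken from the real
tsdb/cluster/meta packages:
```
package main

import "fmt"

// IteratorCreator is a placeholder for the query engine interface the
// store has to satisfy; the real one creates iterators for a SELECT.
type IteratorCreator interface{}

// tsdbStore stands in for *tsdb.Store: it only understands raw shard
// IDs (uint64), so it never needs to import the meta package.
type tsdbStore struct{}

func (s *tsdbStore) IteratorCreator(shardIDs []uint64) IteratorCreator {
	fmt.Println("creating iterators for shards", shardIDs)
	return nil
}

// ShardInfo stands in for meta.ShardInfo, which also knows the shard owners.
type ShardInfo struct {
	ID     uint64
	Owners []string
}

// clusterStore is the wrapper that lives in the cluster package, where a
// dependency on the meta package is allowed. It translates full
// ShardInfo values down to the plain IDs the tsdb store expects.
type clusterStore struct {
	store *tsdbStore
}

func (c *clusterStore) IteratorCreator(shards []ShardInfo) IteratorCreator {
	ids := make([]uint64, 0, len(shards))
	for _, si := range shards {
		ids = append(ids, si.ID)
	}
	return c.store.IteratorCreator(ids)
}

func main() {
	cs := &clusterStore{store: &tsdbStore{}}
	cs.IteratorCreator([]ShardInfo{{ID: 1, Owners: []string{"node1"}}, {ID: 2}})
}
```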
The QueryExecutor had a lot of dead code made obsolete by the query
engine refactor that has now been removed. The TSDBStore interface has
also been cleaned up so we can have multiple implementations of this
(such as a local and remote version).
A StatementExecutor interface has been created for adding custom
functionality to the QueryExecutor that may not be available in the open
source version. The QueryExecutor delegates all statement execution to
the StatementExecutor and only keeps track of housekeeping. Implementing
additional queries is as simple as wrapping the
cluster.StatementExecutor struct or replacing it with something
completely different.
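A boiled-down sketch of the delegation (types and signatures are
illustrative, not the real influxql/cluster ones):
```
package main

import (
	"errors"
	"fmt"
)

// Statement and Result are simplified stand-ins for influxql types.
type Statement string
type Result struct{ Err error }

// StatementExecutor is the extension point: another build can wrap or
// replace this to add statements the open source executor doesn't know
// about. (Simplified signature for illustration.)
type StatementExecutor interface {
	ExecuteStatement(stmt Statement) *Result
}

// QueryExecutor only does housekeeping (tracking running queries,
// limits, timeouts) and delegates the actual execution.
type QueryExecutor struct {
	StatementExecutor StatementExecutor
	running           int
}

func (e *QueryExecutor) ExecuteQuery(stmts []Statement) []*Result {
	e.running++ // housekeeping only; real code tracks IDs, durations, etc.
	defer func() { e.running-- }()

	results := make([]*Result, 0, len(stmts))
	for _, stmt := range stmts {
		results = append(results, e.StatementExecutor.ExecuteStatement(stmt))
	}
	return results
}

// defaultExecutor plays the role of the base statement executor.
type defaultExecutor struct{}

func (defaultExecutor) ExecuteStatement(stmt Statement) *Result {
	return &Result{Err: errors.New("unsupported statement: " + string(stmt))}
}

// wrappedExecutor adds a custom statement and falls back to the inner one.
type wrappedExecutor struct{ inner StatementExecutor }

func (w wrappedExecutor) ExecuteStatement(stmt Statement) *Result {
	if stmt == "SHOW SOMETHING CUSTOM" {
		return &Result{}
	}
	return w.inner.ExecuteStatement(stmt)
}

func main() {
	e := &QueryExecutor{StatementExecutor: wrappedExecutor{inner: defaultExecutor{}}}
	for _, r := range e.ExecuteQuery([]Statement{"SHOW SOMETHING CUSTOM", "SELECT 1"}) {
		fmt.Println(r.Err)
	}
}
```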
The PointsWriter in the QueryExecutor has been changed to a simple
interface that implements the one method needed by the query executor.
This is to allow different PointsWriter implementations to be used by
the QueryExecutor. It has also been moved into the StatementExecutor
instead.
The TSDBStore interface has now been modified to contain the code for
creating an IteratorCreator. This is so the underlying TSDBStore can
implement different ways of accessing the underlying shards rather than
always having to access each shard individually (such as batch
requests).
Remove the show servers handling. This isn't a valid command in the open
source version of InfluxDB anymore.
The QueryManager interface is now built into QueryExecutor and is no
longer necessary. The StatementExecutor and QueryExecutor split allows
task management to much more easily be built into QueryExecutor rather
than as a separate struct.
After adding type-switches to the tsm1 packages, the custom
implementation found in the conversion tool broke. This change uses
tsm1.NewValue() instead of a custom implementation.
This change also ensures that the tsm1.Value interface can only be
implemented internally to allow for the optimized type-switch based
encoding.
The simple moving average will gradually emit points instead of waiting
until the end. This should apply to derivative and difference in the
future too.
Fixes #6112.
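A minimal sketch of the streaming behavior, assuming a plain float
window rather than the engine's point types:
```
package main

import "fmt"

// movingAverage emits a point as soon as the window is full rather than
// buffering the whole series and emitting everything at the end.
type movingAverage struct {
	window []float64
	size   int
	sum    float64
	pos    int
	count  int
}

func newMovingAverage(n int) *movingAverage {
	return &movingAverage{window: make([]float64, n), size: n}
}

// Push adds a value and returns (avg, true) once at least n values have
// been seen, so results stream out gradually.
func (m *movingAverage) Push(v float64) (float64, bool) {
	m.sum -= m.window[m.pos]
	m.window[m.pos] = v
	m.sum += v
	m.pos = (m.pos + 1) % m.size
	if m.count < m.size {
		m.count++
	}
	if m.count < m.size {
		return 0, false
	}
	return m.sum / float64(m.size), true
}

func main() {
	ma := newMovingAverage(3)
	for _, v := range []float64{2, 4, 6, 8, 10} {
		if avg, ok := ma.Push(v); ok {
			fmt.Println(avg) // 4, 6, 8
		}
	}
}
```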
Related to #6140, but won't actually fix that problem. It will correctly
stop new queries from being started during shutdown and will send the
interrupt signal to queries during shutdown.
Since the interrupt signal is asynchronous, there isn't currently a way
to wait for the queries to complete themselves before shutting down the
engine.
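The asynchronous interrupt can be pictured as a closed channel that
running queries poll between units of work; a sketch, not the engine's
exact mechanism:
```
package main

import (
	"fmt"
	"time"
)

// runQuery simulates a long-running query that checks an interrupt
// channel between units of work and stops as soon as it is closed.
func runQuery(id int, interrupt <-chan struct{}) {
	for {
		select {
		case <-interrupt:
			fmt.Printf("query %d interrupted\n", id)
			return
		default:
			time.Sleep(10 * time.Millisecond) // pretend to do work
		}
	}
}

func main() {
	interrupt := make(chan struct{})
	for i := 0; i < 3; i++ {
		go runQuery(i, interrupt)
	}

	time.Sleep(50 * time.Millisecond)
	close(interrupt) // broadcast the interrupt to every query

	// Because the signal is asynchronous, there is no join here; waiting
	// for the queries to finish would need something like a WaitGroup.
	time.Sleep(50 * time.Millisecond)
}
```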
The difference function is implemented very similarly to how derivative
is implemented. It is an aggregate function that acts over the entire
aggregate window. This function will also have the same problems that
derivative has with getting values from the previous interval or point.
This will be fixed separately as part of #5943.
Fixes #1825.
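A minimal sketch of the aggregation itself, ignoring timestamps and the
engine's point types:
```
package main

import "fmt"

// difference returns v[i] - v[i-1] for each consecutive pair, consuming
// the whole window at once like the derivative-style aggregates. Like
// derivative, it cannot see the last value of the previous interval,
// which is the shared limitation mentioned above.
func difference(values []float64) []float64 {
	if len(values) < 2 {
		return nil
	}
	out := make([]float64, 0, len(values)-1)
	for i := 1; i < len(values); i++ {
		out = append(out, values[i]-values[i-1])
	}
	return out
}

func main() {
	fmt.Println(difference([]float64{10, 12, 9, 15})) // [2 -3 6]
}
```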
Allows configuration of shard group duration at database creation, and retention
policy create/alter time.
Query examples:
```
CREATE DATABASE testdb WITH DURATION 90d SHARD DURATION 30m NAME rp_testdb
CREATE RETENTION POLICY rp_testdb2 ON testdb DURATION INF REPLICATION 1 SHARD DURATION 30m
ALTER RETENTION POLICY rp_testdb2 ON testdb SHARD DURATION 1h
```
This can be useful for long duration retention policies with lots of data, where
splitting into smaller shards can relieve memory pressure.
This commit adds a configurable limit to the number of series that
can be returned from a `SELECT` statement. The limit is checked
immediately after planning and is determined by the use of iterator
stats.
Fixes #6076.
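Conceptually the check looks like the following sketch; the stat and
option names here are illustrative, not the exact configuration keys:
```
package main

import "fmt"

// IteratorStats is a stand-in for the stats the planner exposes;
// SeriesN is the number of series the planned iterators will read.
type IteratorStats struct{ SeriesN int }

// checkSeriesLimit sketches the post-planning check: if the planned
// iterators cover more series than the configured limit, fail the query
// before it starts executing.
func checkSeriesLimit(stats IteratorStats, maxSelectSeries int) error {
	if maxSelectSeries > 0 && stats.SeriesN > maxSelectSeries {
		return fmt.Errorf("max select series count exceeded: %d > %d",
			stats.SeriesN, maxSelectSeries)
	}
	return nil
}

func main() {
	if err := checkSeriesLimit(IteratorStats{SeriesN: 120000}, 100000); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("query allowed")
}
```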
If an OR was used, merging filters between different expressions would
not work correctly. If one side had a set of series ids with a
condition and the other side had no series ids associated with the
expression, all of the series from the side with a condition would have
the condition ignored. Instead of defaulting a non-existent series
filter to true, it should be false, and the evaluation of the side
that does exist should take care of determining if the series id
should be included or not. The AND condition used false correctly, so it
did not have to be changed.
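A boiled-down sketch of the corrected OR merge, with filters shown as
strings instead of real influxql expressions:
```
package main

import "fmt"

// mergeOrFilters merges per-series filter expressions for the two sides
// of an OR. Filters are plain strings here purely for illustration.
func mergeOrFilters(lhs, rhs map[uint64]string) map[uint64]string {
	out := make(map[uint64]string)
	for id, lf := range lhs {
		if rf, ok := rhs[id]; ok {
			out[id] = "(" + lf + ") OR (" + rf + ")"
			continue
		}
		// The other side has no entry for this series. Treating the
		// missing filter as "true" would make the OR unconditionally true
		// and drop the existing condition (the old bug); treating it as
		// "false" leaves the existing condition in charge.
		out[id] = lf
	}
	for id, rf := range rhs {
		if _, ok := lhs[id]; !ok {
			out[id] = rf
		}
	}
	return out
}

func main() {
	lhs := map[uint64]string{1: `value > 10`}
	rhs := map[uint64]string{2: `value < 0`}
	fmt.Println(mergeOrFilters(lhs, rhs)) // map[1:value > 10 2:value < 0]
}
```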
If a tag did not exist and `!=` or `!~` were used, it would return false
even though neither a field nor a tag equaled those values. This has
now been modified to return the correct series ids and the correct
condition.
Also fixed a panic that would occur when a tag caused a field access to
become unnecessary. The filter using the field access still got created
and used even though it was unnecessary, resulting in an attempted
access to a non-initialized map.
Fixes #5152 and a bunch of other miscellaneous issues.
The currently running queries can be listed with the command
`SHOW QUERIES` and it will display the current commands that have been
run, the database they were run against, and how long they have been
running.
These were all b1/bz1 settings that no longer have any effect:
- {Default,}MaxWALSize
- {Default,}WALFlushInterval
- {Default,}WALPartitionFlushDelay
- {Default,WAL}ReadySeriesSize
- {Default,WAL}CompactionThreshold
- {Default,WAL}MaxSeriesSize
- {Default,WAL}FlushColdInterval
- {Default,WAL}PartitionSizeThreshold
Numbers in the query without any decimal will now be emitted as integers
instead and be parsed as an IntegerLiteral. This ensures we keep the
original context that a query was issued with and allows us to behave
more similarly to how programming languages typically handle floats and
ints.
This adds functionality for dealing with integers promoting to floats in
the various places where math is used.
Fixes #5744 and #5629.
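A sketch of the promotion rule, using a stand-in literal type rather
than the real influxql AST:
```
package main

import "fmt"

// Number is a stand-in for a parsed literal that remembers whether the
// query author wrote an integer or a float.
type Number struct {
	IsInt bool
	Int   int64
	Float float64
}

// add shows the promotion rule: integer op integer stays an integer, but
// as soon as one side is a float the other side is promoted.
func add(a, b Number) Number {
	if a.IsInt && b.IsInt {
		return Number{IsInt: true, Int: a.Int + b.Int}
	}
	af, bf := a.Float, b.Float
	if a.IsInt {
		af = float64(a.Int)
	}
	if b.IsInt {
		bf = float64(b.Int)
	}
	return Number{Float: af + bf}
}

func main() {
	fmt.Println(add(Number{IsInt: true, Int: 2}, Number{IsInt: true, Int: 3})) // {true 5 0}: stays an integer
	fmt.Println(add(Number{IsInt: true, Int: 2}, Number{Float: 0.5}))          // {false 0 2.5}: promoted to float
}
```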
Normalize the time for the distinct() call to either be at the beginning
of the group by interval or the start time similar to every other call.
The timestamp previously just showed the first time found and didn't
make a lot of sense in the context of what the function was supposed to
do.
Fixes #6040.
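The normalization amounts to truncating the point's timestamp to its
interval start; a sketch in nanoseconds:
```
package main

import "fmt"

// normalizeTime truncates a point's timestamp to the start of its
// GROUP BY interval, which is what distinct() now reports instead of
// the first time it happened to see.
func normalizeTime(t, startTime, interval int64) int64 {
	if interval <= 0 {
		return startTime // no GROUP BY time(): use the query start time
	}
	return t - ((t - startTime) % interval)
}

func main() {
	const minute = int64(60e9)
	// A point 90s into the query lands in the [60s, 120s) bucket.
	fmt.Println(normalizeTime(90*int64(1e9), 0, minute) / int64(1e9)) // 60
}
```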
0.11 no longer uses some files from 0.10. The code was a little
too aggressive and removed these files, which would break rolling back
to 0.10 if necessary. Since shards must be migrated to tsm before
upgrading to 0.11 and a user might not know they still have old shard
formats, they would not be able to revert back to 0.10 and migrate
them.
Also adds uptime stats to usage data.
`SHOW TAG VALUES` output has been modified to print the measurement name
for every measurement and to return the output in two columns: key and
value. An example output might be:
> SHOW TAG VALUES WITH KEY IN (host, region)
name: cpu
---------
key     value
host    server01
region  useast

name: mem
---------
key     value
host    server02
region  useast
`measurementsByExpr` has been taught how to handle reserved keys (ones
with an underscore at the beginning) to allow reusing that function and
skipping over expressions that don't matter to the call.
Fixes #5593.
This should fix #5865.
This commit also removes the dependency on the influxql package constants
that were used to write b1 and bz1 files and have changed since the
release of 0.10.
- Removed a lot of unused code
- Consolidated types
- Improved allocations for converting b1 shards
- Eliminated allocations when sorting cursors
- Eliminated allocations from removing NaN and Infinity values;
  they are now removed by the cursors
- Separated out the stats from the conversion tracker
- Removed allocations from shard reader buffers
- Improved logic for shard reader Next()/Read()
Related to #4098
Lint cmd/influxd/
* Errors cannot end with punctuation
* Better comment for exported method
* Better control flow when return is present
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
Linted cmd/influx_tsm
* Added comments to exported fields
* Removed punctuation at the end of errors
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
Linted cmd/influx_tsm/b1 and cmd/influx_tsm/bz1
* Added comments to exported fields
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
Linted cmd/influx_tsm/tsdb
* Added comments to exported fields
* range k, _ := can be written as range k :=
* removed else when return is present
* Added consistency to receiver names in methods
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
Fix typos
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
Fixes #5612, #5573, and #5518.
Using the MetaExecuter, queries that need to run on both data nodes
and optionally the meta store will be executed across all data nodes
in the cluster.
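The fan-out shape is roughly the following sketch; the node type and
RPC call are placeholders, not the real MetaExecuter API:
```
package main

import (
	"fmt"
	"sync"
)

// dataNode is a placeholder for a remote data node.
type dataNode struct{ addr string }

// execute stands in for the remote RPC to one data node.
func (n dataNode) execute(stmt string) (string, error) {
	return "ok from " + n.addr, nil
}

// executeOnAllDataNodes sends the statement to every data node
// concurrently and collects the per-node results (or errors).
func executeOnAllDataNodes(nodes []dataNode, stmt string) []string {
	results := make([]string, len(nodes))
	var wg sync.WaitGroup
	for i, n := range nodes {
		wg.Add(1)
		go func(i int, n dataNode) {
			defer wg.Done()
			res, err := n.execute(stmt)
			if err != nil {
				res = "error from " + n.addr + ": " + err.Error()
			}
			results[i] = res
		}(i, n)
	}
	wg.Wait()
	return results
}

func main() {
	nodes := []dataNode{{addr: "data1:8088"}, {addr: "data2:8088"}}
	fmt.Println(executeOnAllDataNodes(nodes, "DROP MEASUREMENT cpu"))
}
```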
This fixes a regression introduced in #5757 due to the node.ID getting
assigned by both the meta and data services. When both roles are active,
the data CreateDataNode path was not getting called because a node ID was
already assigned.
This fixes the issue by seeing if a DataNode already exists for our node
ID, and if it does not, we create one.
This fixes a couple of issues with starting meta-only nodes.
1. We were always calling CreateDataNode regardless of whether the
node is running data services. We now only call that when the node is
data enabled.
2. The node.json was created alongside creating the data node. Since
we are no longer creating a data node, this didn't happen anymore. There
wasn't a simple way to do this in one place, so it's now handled
when creating either a meta or a data node. Since the ID assigned
to the node is the same regardless of role, this works in all combinations
of roles.
3. The JoinMetaServer didn't return the ID of the joining node which
created some races when multiple nodes were joining. The join call now
returns that information to the caller.
Fixes #5754.
The join option was incorrectly exposed on the meta config. It should
be at the top level as a string and propagate down to the meta config
as a slice.
This fixes several issues related to the bind address and hostname:
* Allows bind addresses where a hostname or IP is not specified to
work correctly and bind to all interfaces by default.
* Fixes the top-level "hostname" config option to allow overriding
all bind address hostnames. This allows a node to advertise a different
hostname than what is defined in the bind address setting.
* Adds the -hostname command-line option back to allow specifying
both -join and -hostname as command-line flags.
* Enforces a configuration precedence and overriding ability, defined
as: the config file is overridden by env vars, which are overridden by
command-line flags (see the sketch below).
Fixes #5670, #5671.
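The precedence for a single setting can be sketched as follows; the
environment variable and flag names are only illustrative:
```
package main

import (
	"flag"
	"fmt"
	"os"
)

// resolveHostname applies the precedence described above for one
// setting: the config file value is the baseline, an environment
// variable overrides it, and a command-line flag overrides both.
func resolveHostname(fromConfigFile string) string {
	hostname := fromConfigFile

	if env := os.Getenv("INFLUXDB_HOSTNAME"); env != "" {
		hostname = env // env var beats the config file
	}

	flagHostname := flag.String("hostname", "", "hostname to advertise")
	flag.Parse()
	if *flagHostname != "" {
		hostname = *flagHostname // flag beats everything
	}
	return hostname
}

func main() {
	fmt.Println(resolveHostname("from-config-file"))
}
```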
The name of the column will be every measurement located inside of the
math expression in the order they are encountered within the
expression.
Also handle `*influxql.ParenExpr` in the function
`(*influxql.Field).Name()`.
Fixes #5730.
Previously, meta.Client would drop the default retention policy when
trying to create a database with a retention policy. The RPC has now
been modified to include the desired retention policy in the
CreateDatabase command and have it use that retention policy information
instead of the default configuration when provided.
This also lowers the number of RPC calls for
CreateDatabaseWithRetentionPolicy to only a single RPC call instead of
two.
Protections have also been included so creating a retention policy with
different parameters will return an error, similar to what happens if
you try to modify the retention policy separately.
Fixes #5696.
A case (#5606) was found where a lot of data unexpectedly disappeared from a database
following a TSM conversion.
The proximate cause was an inconsistency between the root Bolt DB bucket list
and the metadata in the "series" bucket of the same shard. There were apparently valid
series in Bolt DB buckets that were no longer referenced by the metadata
in the "series" bucket - so-called orphaned series. Since the conversion
process only iterated across the series found in the metadata, it
caused the orphaned series to be removed from the converted shards. This resulted in the
unexpected removal of data from the TSM shards that had previously been accessible
(despite the metadata inconsistency) in the b1 shards.
The root cause of the metadata inconsistency in the case above was a failure, in versions prior
to v0.9.3 (actually 3348dab), to update the "series" bucket with series that had been created in
previous shards during the life of the same influxd process instance.
This fix is required to avoid data loss during TSM conversions for shards that were created with
versions of influx that did not include 3348dab (e.g. prior to v0.9.3).
Analysis-by: Jon Seymour <jon@wildducktheories.com>
This removes the MetaServers property from node.json to eliminate one
of the four places those addresses are stored on disk. We always use
the values that come through the config (via file, env var or -join arg).
top() and bottom() point ordering was incorrect and used an inefficient
sorting method. It has now been updated to use a heap, and ordering is
done by value first and time second (with earlier times always
taking priority).
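A sketch of the heap-based selection for top(), keeping the N largest
values and letting earlier timestamps win ties (simplified types, not
the engine's):
```
package main

import (
	"container/heap"
	"fmt"
)

type point struct {
	Time  int64
	Value float64
}

// topHeap is a min-heap ordered by value first and time second, so
// keeping the heap at size N evicts the smallest values, and on value
// ties the later timestamp is evicted first.
type topHeap []point

func (h topHeap) Len() int      { return len(h) }
func (h topHeap) Swap(i, j int) { h[i], h[j] = h[j], h[i] }
func (h topHeap) Less(i, j int) bool {
	if h[i].Value != h[j].Value {
		return h[i].Value < h[j].Value
	}
	return h[i].Time > h[j].Time // later times are evicted first on ties
}
func (h *topHeap) Push(x interface{}) { *h = append(*h, x.(point)) }
func (h *topHeap) Pop() interface{} {
	old := *h
	p := old[len(old)-1]
	*h = old[:len(old)-1]
	return p
}

// topN keeps the N largest points using the heap instead of sorting
// the whole input.
func topN(points []point, n int) []point {
	h := &topHeap{}
	heap.Init(h)
	for _, p := range points {
		heap.Push(h, p)
		if h.Len() > n {
			heap.Pop(h) // drop the current smallest
		}
	}
	out := make([]point, 0, n)
	for h.Len() > 0 {
		out = append(out, heap.Pop(h).(point))
	}
	return out // ascending by value; reverse for display if needed
}

func main() {
	pts := []point{{10, 5}, {20, 5}, {30, 2}, {40, 1}}
	fmt.Println(topN(pts, 1)) // [{10 5}]: on a tie, the earlier time is kept
}
```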
Removed unit tests that tested using `time` inside of the query to get
the real time instead of the interval time, since only the default
behavior is allowed now. We will have another mechanism to get the real
time during an interval, but the current method is deprecated.
The top() and bottom() methods now have integer support.
Previously, the time values returned for the selectors equaled the
actual point's time. We have decided to have the time always be the
interval time and to add another feature later that can return the
selected point's time.
The history file is cleared before WriteHistory is called after each
command/exit() to prevent exponential file growth.
This commit addresses issue #5436; please see the PR for the full explanation.
Under highly concurrent write load, the coordinating node would
create a connection to any other node that is part of the replica
group. Since each connection can be expensive, OOM situations could
occur because there was no bound on the number of new connections
that would be created. If writes on a remote node were slow, connections
could pile up and exacerbate the problem.
This switches the pool to be bounded and uses a blocking checkout
with a timeout. If a connection is available, it's returned immediately.
If the pool still has room for more connections, it will create one if needed.
Otherwise, the call will block until a connection becomes available or
the timeout expires. In the case of a timeout, it is propagated back up
to the PointsWriter, which determines what to return to the client.
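A sketch of the bounded, blocking checkout described above (simplified;
the real pool carries more bookkeeping):
```
package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

// boundedPool represents idle connections and remaining capacity as two
// buffered channels whose combined capacity is the pool size, so at
// most `size` connections ever exist.
type boundedPool struct {
	conns   chan net.Conn // idle connections ready for checkout
	slots   chan struct{} // remaining capacity for new connections
	dial    func() (net.Conn, error)
	timeout time.Duration
}

func newBoundedPool(size int, timeout time.Duration, dial func() (net.Conn, error)) *boundedPool {
	p := &boundedPool{
		conns:   make(chan net.Conn, size),
		slots:   make(chan struct{}, size),
		dial:    dial,
		timeout: timeout,
	}
	for i := 0; i < size; i++ {
		p.slots <- struct{}{}
	}
	return p
}

// Get returns an idle connection if one is available, dials a new one
// if the pool still has room, and otherwise blocks until either a
// connection is returned or the timeout fires. The timeout error
// propagates up so the caller can decide what to report to the client.
func (p *boundedPool) Get() (net.Conn, error) {
	select {
	case c := <-p.conns:
		return c, nil
	case <-p.slots:
		c, err := p.dial()
		if err != nil {
			p.slots <- struct{}{} // give the capacity back on a failed dial
			return nil, err
		}
		return c, nil
	case <-time.After(p.timeout):
		return nil, errors.New("connection pool: checkout timed out")
	}
}

// Put returns a connection for reuse.
func (p *boundedPool) Put(c net.Conn) { p.conns <- c }

func main() {
	pool := newBoundedPool(2, 100*time.Millisecond, func() (net.Conn, error) {
		return net.Dial("tcp", "127.0.0.1:8088") // placeholder address
	})
	if _, err := pool.Get(); err != nil {
		fmt.Println(err)
	}
}
```
Returning a timeout error, rather than blocking forever or dialing
without bound, is what lets the write path report something sensible to
the client instead of accumulating connections under load.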
If a bind-address of :8088 is used, cluster nodes cannot
connect to those nodes because there is no hostname portion
of the address. When we see a bind-address without a hostname, we now
use the OS hostname, or localhost if that fails, unless a hostname is
already specified in the config.
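A sketch of the defaulting logic, using only the standard library; the
function is illustrative rather than the actual server code:
```
package main

import (
	"fmt"
	"net"
	"os"
)

// defaultHost fills in a missing host portion of a bind address such as
// ":8088" so other cluster members have something they can connect to.
// A hostname already set in the config (passed in here) wins; otherwise
// the OS hostname is used, falling back to localhost.
func defaultHost(configHostname, bindAddr string) (string, error) {
	host, port, err := net.SplitHostPort(bindAddr)
	if err != nil {
		return "", err
	}
	if host != "" {
		return bindAddr, nil // already has a host portion
	}
	if configHostname != "" {
		return net.JoinHostPort(configHostname, port), nil
	}
	if h, err := os.Hostname(); err == nil && h != "" {
		return net.JoinHostPort(h, port), nil
	}
	return net.JoinHostPort("localhost", port), nil
}

func main() {
	addr, _ := defaultHost("", ":8088")
	fmt.Println(addr) // e.g. "myhost:8088"
}
```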
* Improve the ping endpoint so that it can optionally check for leader agreement across all meta servers
* Add Ping method to the meta client
* Fix ClusterID tests
* Remove WaitForLeader from meta client and remove unnecessary references to it
* Updated CreateShardGroup to not return an error if it already exists so it's idempotent
* Removed old test making sure you can't delete the default RP. You can delete it now, there was no reason to disallow it.
* Wired up the UpdateRetentionPolicy functionality
* Add dir, hostname, and bind address to top level config since it applies to services other than meta
* Add enabled flags to example toml for data and meta services
* Wire up add/remove raft peers and meta servers to meta service
* Update DROP SERVER to be either DROP META SERVER or DROP DATA SERVER
* Bring over statement executor from old meta package
* Start meta service client implementation
* Update meta service test to use the client
* Wire up node ID/meta server storage information
This changes backup and restore to work for TSM. It breaks it for b1 and bz1, but since those are getting removed it's ok.
The backup runs against any host that is specified and can back up either the metastore, a database, a specific retention policy, or a specific shard. It can also take incremental backups with the `since` flag, which will only back up TSM files that have been created since that timestamp.
The backup is safe to run online. However, shards that are still hot for writes won't be able to create new TSM files while the backup for that single shard runs. If the backup isn't too large and the write throughput isn't too high, this shouldn't be a problem since the writes will just go into the WAL cache.
One of the first unit tests in the cli tests called the Run method.
Since the Run method called os.Exit, it reported the unit tests as
succeeded. When parallel is set to 1, this skips _all_ unit tests after
the first one. When parallel is set to a higher value, unit tests run by
other processes still get run.
This changes the Run method to return an error (if one occurred). This
error can then be printed out and a bad exit status can be used to exit
the program from the main program instead. That causes the unit tests
to run correctly regardless of how many parallel processes are running.
Also added an additional option to the CLI called `IgnoreSignals`. If
this is set to true, then signals are not registered with the process.
Setting signals doesn't really work in unit tests so it's good to ensure
they don't get set in the first place.
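A sketch of the resulting shape: Run reports an error, and only main
decides the exit status (names simplified from the real CLI type):
```
package main

import (
	"errors"
	"fmt"
	"os"
)

// CommandLine is a stand-in for the CLI type. IgnoreSignals mirrors the
// new option: tests set it so no signal handlers are registered.
type CommandLine struct {
	IgnoreSignals bool
}

// registerSignals stands in for hooking up SIGINT/SIGTERM handling.
func registerSignals() {}

// Run no longer calls os.Exit; it just reports what went wrong so tests
// can call it directly and check the error.
func (c *CommandLine) Run(args ...string) error {
	if !c.IgnoreSignals {
		registerSignals()
	}
	if len(args) == 0 {
		return errors.New("no command given")
	}
	return nil
}

// main is the only place that decides the process exit status.
func main() {
	c := &CommandLine{}
	if err := c.Run(os.Args[1:]...); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```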
In addition to fixing the influx cli tests, this adds a mock client to
the cli test for Use. PR #5183 added a validation for `use` to only be
able to select public databases so `_internal` couldn't be chosen. To
implement this, the `SHOW DATABASES` command was used by the internal
client.
Some of the unit tests in `cli_test.go` don't set the client to
anything. `TestParseCommand_Use` previously didn't, but now it needs to
have a client in the unit test with an empty test server.
This has a few changes in it (unfortunately). The main change is to run compactions
concurrently. While implementing this, a few query and performance bugs showed up that
are also fixed by this commit.
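The concurrency shape is roughly the following sketch, with a semaphore
bounding how many compactions run at once; it is not the tsm1 compactor
itself:
```
package main

import (
	"fmt"
	"sync"
)

// compactConcurrently runs one compaction per group at the same time,
// bounded by maxConcurrent so the disks aren't overwhelmed.
func compactConcurrently(groups []string, maxConcurrent int) {
	sem := make(chan struct{}, maxConcurrent)
	var wg sync.WaitGroup
	for _, g := range groups {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot
		go func(group string) {
			defer wg.Done()
			defer func() { <-sem }()         // release the slot
			fmt.Println("compacting", group) // stands in for the real compaction
		}(g)
	}
	wg.Wait()
}

func main() {
	compactConcurrently([]string{"level1", "level2", "full"}, 2)
}
```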
Changed non-interactive mode to send everything through the CLI's parser the same way the interactive mode works.
Added multiline support for -execute flag.
Server registration and stats reporting has been removed from what was
once http://enterprise.influxdata.com. The app that lived there now
runs at http://usage.influxdata.com, so that the subdomain can
eventually be repurposed. Because we also want to repurpose the
`enterprise-client` repo, we have also renamed that to `usage-client`.
InfluxDB no longer needs the `registration` service, since all of
the endpoints it communicates with simply discard the data provided to
them.
Add StressTest type and auxiliary interfaces
Add config structs
Move generator to config
Add utility methods used in stress
Add basic components for a stress test
Add touches
Add configuration options
Add unified results handlers
Add final print out of results
Add Success function to response type
Add query support
Send query results
Add comments to run.go
Change Basic to BasicWriter
Add basic query
Add incomplete README
Abstract out response handling
Change plugin to basic
Add responseHandler type
Add additional parameter to Query function
Add todo comments and cleanup main
Lower hard coded value
Add flag for profiling
Fix race condition
Wait at the right place
Change point from struct to interface
Improve generic write throughput
Reorganize
Fastest State
Add toml config
Add test server
Add basic working version of config file
Move config file logic into its own file
Fix broken config file
Add query count to stress config
Add support for concurrency and batch interval
Reorder config option
Remove unneeded init
Remove old stress package
Move new stress code into stress directory
Rework influx_stress tool
Do something reasonable if no config is given
Remove unneeded comments
Add tests for stress package
Add comments and reorganize code
Add more comments
Count lines posted correctly
Add NewConfig method
Fix style issues
Add backticks to flag description
Fix grammar
Remove `StartTimer` calls where appropriate
Fix comment language
Change Reader to Querier
Reorder defer
Fix issues brought up by golint
Add more comments
Add more detailed Readme
Increase counter appropriately
Add return errors where appropriate
Add test coverage
Move `now()` from QueryClient to QueryGenerator
This change ensures that if there are any fields in the WHERE clause of
an aggregate that are different from the fields in the SELECT clause,
the cursors also decode those fields. Otherwise queries of the
form 'SELECT f(w) FROM x WHERE y=z' will return incorrect results.
Fixes issue #4701.
Match the info provided by the `influx --help` output, and add a
history command.
Reverted the description for the pretty command, plus minor edits.
Removed duplication of command names.
Signed-off-by: Anes Hasicic <anes.hasicic@gmail.com>
When unpacking the meta, the Store `Addr` is built
against the hostname and the `bind-address` port.
We can use this resolved address for the `RemoteAddr`
as well since, according to the clustering docs, the
hostname "must be resolved by all members in the cluster".