Previously, we needed a write lock on the cache because it was the
only lock we had available to guard updates to entry.values and
entry.needSort.
However, now that we have an entry-scoped lock for this purpose, the
cache write lock is no longer required. Since merged() modifies neither
the .store nor c.snapshot.sort, there is no need for a write lock on
the cache to protect the cache itself.
So we don't need to escalate here - we simply rely on the entry lock
to protect the entries we are iterating over.
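For reference, a minimal sketch of the locking scheme this relies on,
using simplified stand-in types rather than the actual tsm1 cache and
entry:

    package cachesketch

    import "sync"

    // entry guards its own values and needSort with an entry-scoped lock.
    type entry struct {
        mu       sync.RWMutex
        values   []float64
        needSort bool
    }

    // cache only needs its read lock to look entries up; no write lock
    // is taken while iterating.
    type cache struct {
        mu    sync.RWMutex // guards the store map itself
        store map[string]*entry
    }

    func (c *cache) merged(key string) []float64 {
        c.mu.RLock()
        e := c.store[key]
        c.mu.RUnlock()
        if e == nil {
            return nil
        }
        e.mu.RLock()
        defer e.mu.RUnlock()
        out := make([]float64, len(e.values))
        copy(out, e.values)
        return out
    }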
Signed-off-by: Jon Seymour <jon@wildducktheories.com>
Based on @jwilder's alternative to the 'dirty' slice that featured
in previous iterations of this fix.
Suggested-by: Jason Wilder <jason@influxdb.com>
Signed-off-by: Jon Seymour <jon@wildducktheories.com>
Currently two compactors can execute Engine.WriteSnapshot at once.
This isn't thread safe since both threads want to make modifications to
Cache.snapshot at the same time.
This commit introduces a lock which is acquired during Snapshot() and
released during ClearSnapshot(), ensuring that at most one thread
executes within Engine.WriteSnapshot() at once.
To ensure that we always release this lock but only release the
snapshot resources on a successful commit, ClearSnapshot() now accepts
a boolean indicating whether the write succeeded, and it is guaranteed
to be called whenever Snapshot() has been called.
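A rough sketch of the intended lock hand-off, with a stripped-down
stand-in for the real Cache:

    package snapshotlock

    import "sync"

    // Cache shows only the lock hand-off: Snapshot acquires snapshotMu,
    // ClearSnapshot always releases it, and the snapshot data is
    // discarded only on success.
    type Cache struct {
        mu         sync.RWMutex
        snapshotMu sync.Mutex // held from Snapshot() until ClearSnapshot()
        store      map[string][]float64
        snapshot   map[string][]float64
    }

    func (c *Cache) Snapshot() map[string][]float64 {
        c.snapshotMu.Lock() // at most one WriteSnapshot in flight
        c.mu.Lock()
        defer c.mu.Unlock()
        c.snapshot = c.store
        c.store = make(map[string][]float64)
        return c.snapshot
    }

    func (c *Cache) ClearSnapshot(success bool) {
        defer c.snapshotMu.Unlock() // always release, even on failure
        c.mu.Lock()
        defer c.mu.Unlock()
        if success {
            c.snapshot = nil // drop snapshot resources only on a successful commit
        }
    }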
Signed-off-by: Jon Seymour <jon@wildducktheories.com>
There are two tests that show two different vulnerabilities.
One test shows that Cache.Deduplicate modifies entries in a snapshot's
store without a lock while correctly locked cache readers are
deduplicating those same entries.
A second test shows that two threads executing the methods that
Engine.WriteSnapshot calls cause concurrent, unsynchronized mutating
access to the snapshot's store and entries.
The tests fail at this commit and are fixed by subsequent commits.
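A hedged sketch of the shape of such a test, using a fake cache type
rather than the real Engine/Cache (run with go test -race):

    package snapshotrace

    import (
        "sync"
        "testing"
    )

    // fakeCache stands in for the real cache with just enough state for
    // the race detector to flag unsynchronized snapshot access.
    type fakeCache struct {
        store    map[string][]float64
        snapshot map[string][]float64
    }

    func (c *fakeCache) Snapshot() map[string][]float64 {
        c.snapshot = c.store
        c.store = make(map[string][]float64)
        return c.snapshot
    }

    func (c *fakeCache) ClearSnapshot(success bool) {
        if success {
            c.snapshot = nil
        }
    }

    // TestWriteSnapshot_Concurrent drives the snapshot path from two
    // goroutines at once; go test -race surfaces the data race.
    func TestWriteSnapshot_Concurrent(t *testing.T) {
        c := &fakeCache{store: map[string][]float64{"cpu": {1, 2, 3}}}
        var wg sync.WaitGroup
        for i := 0; i < 2; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                c.Snapshot()
                c.ClearSnapshot(true)
            }()
        }
        wg.Wait()
    }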
Signed-off-by: Jon Seymour <jon@wildducktheories.com>
Related to #4098
Lint cmd/influxd/
* Errors cannot end with punctuation
* Better comment for exported method
* Better control flow when return is present (see the example below)
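For illustration, example code (not the actual diff) showing these
rules:

    package lintexample

    import "errors"

    // ErrDatabaseRequired is returned when no database name is supplied.
    // Lint: exported identifiers carry a comment and error strings end
    // without punctuation.
    var ErrDatabaseRequired = errors.New("database name required")

    // selectDatabase illustrates the control-flow rule: when the if
    // branch returns, no else branch follows.
    func selectDatabase(name string) (string, error) {
        if name == "" {
            return "", ErrDatabaseRequired
        }
        return name, nil
    }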
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
Linted cmd/influx_tsm
* Added comments to exported fields
* Removed punctuation at the end of errors
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
Linted cmd/influx_tsm/b1 and cmd/influx_tsm/bz1
* Added comments to exported fields
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
Linted cmd/influx_tsm/tsdb
* Added comments to exported fields
* for k, _ := range can be written as for k := range (illustrated below)
* Removed else when return is present
* Added consistency to receiver names in methods
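Example code (not the actual diff) illustrating the two
simplifications:

    package lintexample

    // tagKeys illustrates both changes: the early return needs no else
    // branch, and for k, _ := range m becomes for k := range m.
    func tagKeys(m map[string]struct{}) []string {
        if len(m) == 0 {
            return nil
        }
        keys := make([]string, 0, len(m))
        for k := range m { // was: for k, _ := range m
            keys = append(keys, k)
        }
        return keys
    }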
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
Fix typos
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
Previously, CQs with the same name would be stored under the same key
in the last run map. This caused only one of the CQs to run, because after the
first one ran it would update the last run time for all CQs with the
same name.
Add the database name to the CQ ID in the last run map to differentiate
between CQs in different databases.
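A minimal sketch of the keying change, with illustrative names rather
than the real continuous querier types:

    package cqsketch

    import "time"

    // lastRuns tracks the last execution time per continuous query.
    var lastRuns = map[string]time.Time{}

    // cqKey builds the last-run key from the database name and the CQ
    // name, so same-named CQs in different databases no longer collide.
    func cqKey(database, name string) string {
        return database + ":" + name
    }

    // markRun records a run for one CQ without touching a same-named CQ
    // in another database.
    func markRun(database, name string, now time.Time) {
        lastRuns[cqKey(database, name)] = now
    }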
Fixes #5814.
Fixes #5612, #5573 and #5518.
Using the MetaExecuter, queries that need to run on both data nodes
and optionally the meta store will be executed across all data nodes
in the cluster.
Fix for #5804.
The commit for #5789 rendered the semantics of the snapshotCount
statistic useless. This commit restores semantics that give this
statistic diagnostic value.
Signed-off-by: Jon Seymour <jon@wildducktheories.com>
The Cache had support for taking multiple snapshots to support writing
multiple snapshots to TSM files concurrently if that happened to be
a bottleneck. In practice, this is never a bottleneck and we only
run one snapshotting goroutine continuously per shard, which has worked
well for all workloads.
The multiple snapshot support introduces some unhandled failure scenarios
where WAL segments could be removed without being written to TSM files. If
a snapshot compaction fails to write due to transient disk errors, subsequent
snapshots will continue, but the failed one will not be retried. When the
subsequent ones succeed, all closed WAL segments are removed, causing data
loss.
This change simplifies the snapshotting capability to ensure that there is only
ever one snapshot. If one fails, the next snapshot will update the existing
snapshot and retry all of old and new data.
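A rough sketch of the intended behaviour, with illustrative stand-in
types rather than the real engine code:

    package snapshotretry

    // snapshotter is a stand-in for the cache: Snapshot returns the
    // pending snapshot, extended with any new data, and ClearSnapshot
    // discards it only when told the write succeeded.
    type snapshotter interface {
        Snapshot() map[string][]float64
        ClearSnapshot(success bool)
    }

    type engine struct {
        cache            snapshotter
        writeTSM         func(map[string][]float64) error
        removeClosedWALs func()
    }

    // writeSnapshot retries all snapshot data until a write succeeds
    // and only then removes the closed WAL segments.
    func (e *engine) writeSnapshot() error {
        snap := e.cache.Snapshot()
        if err := e.writeTSM(snap); err != nil {
            e.cache.ClearSnapshot(false) // keep the data; the next attempt retries it
            return err
        }
        e.cache.ClearSnapshot(true) // committed: snapshot and closed WAL segments can go
        e.removeClosedWALs()
        return nil
    }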
Fixes #5686
This fixes a regression introduced in #5757 due to the node.ID getting
assigned by both the meta and data services. When both roles are active,
the data CreateDataNode path was not getting called because a node ID was
already assigned.
This fixes the issue by seeing if a DataNode already exists for our node
ID, and if it does not, we create one.
The dimensions array in `RewriteWildcards` gets emptied by an earlier
section of the code, which then iterates over that empty slice to
append it to the list of dimensions.
That makes the loop dead code that can never be hit.
Also improve the efficiency of this method by not creating a new slice
when there are no wildcards. We already check at the beginning of the
function if there is a wildcard out of necessity. There's no point in
making a new slice and copying the contents if we know that there will
be no wildcards to expand.
It also improves memory efficiency by assuming that if a wildcard
exists, there is only one and the pre-allocated slice can take advantage
of that. If there are multiple wildcards, then a new slice will have to
be created in the middle of the loop to raise the capacity.
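A hedged sketch of that allocation strategy, on plain string slices
rather than the real influxql AST types:

    package wildcardsketch

    // expandFields rewrites a field list assuming at most one wildcard
    // is present. With no wildcard the original slice is returned
    // untouched; with one wildcard the pre-sized result never
    // reallocates; only a second wildcard forces append to grow the
    // slice mid-loop.
    func expandFields(fields, expansion []string) []string {
        wildcards := 0
        for _, f := range fields {
            if f == "*" {
                wildcards++
            }
        }
        if wildcards == 0 {
            return fields // no new slice, no copying
        }
        out := make([]string, 0, len(fields)-1+len(expansion))
        for _, f := range fields {
            if f == "*" {
                out = append(out, expansion...)
                continue
            }
            out = append(out, f)
        }
        return out
    }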
Fixes #5680.
When dropping a data node, the following will now happen on the
Meta Store.
1) If any shards no longer have any owners (because the data node
being dropped is the only owner), they will be reassigned a
new owner from within their respective shard group.
2) If a shard group no longer has any shards/data nodes, it will
   be marked as deleted.
When a shard is being assigned a new owner a data node with the fewest
number of shards in the shard group will be selected as the new owner.
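A minimal sketch of the owner-selection rule, with hypothetical types
standing in for the meta store's:

    package ownersketch

    // pickNewOwner returns the remaining data node in the shard group
    // that owns the fewest shards; it becomes the new owner of the
    // orphaned shard.
    func pickNewOwner(candidates []uint64, shardCounts map[uint64]int) (uint64, bool) {
        var best uint64
        found := false
        for _, id := range candidates {
            if !found || shardCounts[id] < shardCounts[best] {
                best, found = id, true
            }
        }
        return best, found
    }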
Finally, checking the validity of a data node's ID now happens in the
Meta store, rather than in the state machine.
When a wildcard is specified for the field but not the dimensions, the
dimensions get added to the list of fields as part of
`RewriteWildcards()`.
But when a dimension was given with no wildcard, the dimension didn't
get removed from the wildcard expansion in the fields section. This
teaches the rewriter to exclude explicitly included dimensions from
being expanded as a field. Now, when a measurement has one tag named
host and a field named value, this statement:
SELECT * FROM cpu GROUP BY host
Would expand to this:
SELECT value FROM cpu GROUP BY host
Instead of this:
SELECT host, value FROM cpu GROUP BY host
If you want the latter behavior, you can include it like this:
SELECT host, * FROM cpu GROUP BY host
Fixes #5770.
This fixes a couple of issues with starting meta-only nodes.
1. We were always calling CreateDataNode regardless of whether the
node is running data services. We now only call it when the node is
data enabled.
2. The node.json file was created alongside creating the data node. Since
we are no longer creating a data node, this stopped happening. There
wasn't a simple way to do this in one place, so it is now handled
when creating either a meta or a data node. Since the ID assigned to
the node is the same regardless of role, this works in all combinations
of roles.
3. JoinMetaServer didn't return the ID of the joining node, which
created some races when multiple nodes were joining. The join call now
returns that information to the caller.
Fixes #5754
The join option was incorrectly exposed on the meta config. It should
be at the top level as a string and propagate down to the meta config
as a slice.
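A hedged sketch of that propagation, with illustrative field names:

    package configsketch

    import "strings"

    // MetaConfig receives the join addresses as a slice.
    type MetaConfig struct {
        JoinPeers []string
    }

    // Config exposes join at the top level as a comma-separated string.
    type Config struct {
        Join string // e.g. "host1:8091,host2:8091"
        Meta MetaConfig
    }

    // ApplyJoin propagates the top-level string down to the meta config.
    func (c *Config) ApplyJoin() {
        if c.Join == "" {
            return
        }
        c.Meta.JoinPeers = strings.Split(c.Join, ",")
    }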