This more accurately shows whether a query has been killed. Instead of
automatically removing a killed query from the query table, even though
goroutines and iterators may still be open, it now marks the process as
killed. This should allow us to more accurately determine whether a
query has stalled and is still consuming resources on the server.
This is related to #8848, but not directly connected.
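A minimal sketch of the idea, with `QueryTask`, `TaskManager`, and the `queries` map as hypothetical stand-ins for the real query-tracking types:

```go
package query

import (
	"fmt"
	"sync"
)

// QueryTask is a hypothetical entry in the query table.
type QueryTask struct {
	query  string
	killed bool // set when the query is killed; the entry stays in the table
}

// TaskManager is a hypothetical stand-in for the real query tracker.
type TaskManager struct {
	mu      sync.Mutex
	queries map[uint64]*QueryTask
}

// KillQuery marks the task as killed but keeps it registered, so open
// goroutines and iterators remain visible until they actually finish.
func (m *TaskManager) KillQuery(id uint64) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	task, ok := m.queries[id]
	if !ok {
		return fmt.Errorf("no such query id: %d", id)
	}
	task.killed = true
	return nil
}

// DetachQuery removes the entry once every resource has been released.
func (m *TaskManager) DetachQuery(id uint64) {
	m.mu.Lock()
	defer m.mu.Unlock()
	delete(m.queries, id)
}
```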
This instructs the kernel that it can release memory used by mmap'd
TSM files when they are not actively being used. If the mappings are
used again, the kernel will fault the pages back in. On Linux, this
causes RES memory to drop immediately when it runs.
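A minimal sketch of the hint using `golang.org/x/sys/unix`, assuming the mapping is held as a `[]byte`; the actual call site in the TSM code may differ:

```go
//go:build linux

package tsm

import (
	"log"

	"golang.org/x/sys/unix"
)

// adviseDontNeed hints to the kernel that it may reclaim the pages
// backing the mmap'd TSM region. Because the mapping is file-backed,
// a later access simply faults the pages back in from disk.
func adviseDontNeed(mmap []byte) {
	if err := unix.Madvise(mmap, unix.MADV_DONTNEED); err != nil {
		log.Printf("madvise: %v", err)
	}
}
```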
Originally, casting was performed inside the query engine, especially
for call iterators. Currently, the engine takes care of all casting, so
we only need to normalize the iterator types for type safety rather
than for functional reasons.
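As a rough illustration of what normalizing an iterator type can look like, with hypothetical `Iterator`, `FloatIterator`, and `IntegerIterator` interfaces standing in for the engine's real types:

```go
package query

// Hypothetical iterator interfaces standing in for the engine's types.
type Iterator interface{ Close() error }

type FloatIterator interface {
	Iterator
	NextFloat() (float64, bool)
}

type IntegerIterator interface {
	Iterator
	NextInteger() (int64, bool)
}

// integerToFloatIterator adapts an IntegerIterator to a FloatIterator.
type integerToFloatIterator struct{ IntegerIterator }

func (itr *integerToFloatIterator) NextFloat() (float64, bool) {
	v, ok := itr.NextInteger()
	return float64(v), ok
}

// normalizeFloat ensures the iterator satisfies FloatIterator so that
// downstream code can rely on one type. The values themselves are
// already cast by the engine; this exists purely for type safety.
func normalizeFloat(itr Iterator) FloatIterator {
	switch itr := itr.(type) {
	case FloatIterator:
		return itr
	case IntegerIterator:
		return &integerToFloatIterator{itr}
	default:
		return nil // unsupported input type
	}
}
```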
Removing this code: code coverage showed that it was not hit when run
against the actual server. I ran the `tests` package and collected
coverage of the `query` package while the tests in that package ran.
When using `SHOW MEASUREMENTS ON <db>`, the required permissions didn't
take `<db>` into account and would only use the `db` parameter from the
HTTP request.
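A sketch of the intended resolution order, with hypothetical names rather than the real handler code:

```go
package meta

// databaseFor resolves which database permissions should be checked
// against: the statement's ON clause wins over the HTTP ?db= parameter.
// Both arguments are hypothetical; the real handler pulls these from
// the parsed statement and the request.
func databaseFor(stmtDatabase, httpDB string) string {
	if stmtDatabase != "" {
		return stmtDatabase // SHOW MEASUREMENTS ON <db>
	}
	return httpDB
}
```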
Added several Docker-based scripts that can be used to build the OSS binaries and the Debian and CentOS packages in a clean, standardized environment. Supports Windows, OSX, and Linux on AMD64, as well as multiple chip architectures on Linux, including i386, ARMv5, ARMv6, and ARMv8. Will also package the source code with all necessary dependencies and run basic unit tests in the same environment.
When an integer cannot be parsed, we attempt to parse it as a uint64.
If that succeeds, we have an unsigned literal that can then be used to
compare unsigned values.
This approach lets us avoid introducing new syntax to the language and
continue just doing the right thing at the right time. But, it also
delegates a lot of the heavy lifting to implicit casting in Reduce and
Eval.
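A minimal sketch of the fallback using `strconv` directly; the real parser's error handling and literal AST types differ:

```go
package main

import (
	"fmt"
	"strconv"
)

// parseIntegerLiteral tries a signed parse first and falls back to an
// unsigned one, mirroring the described behavior. The interface{}
// return is illustrative; the real parser produces literal nodes.
func parseIntegerLiteral(s string) (interface{}, error) {
	if v, err := strconv.ParseInt(s, 10, 64); err == nil {
		return v, nil // ordinary signed integer literal
	}
	// Out of range for int64: try uint64 and emit an unsigned literal.
	if v, err := strconv.ParseUint(s, 10, 64); err == nil {
		return v, nil
	}
	return nil, fmt.Errorf("unable to parse integer literal %q", s)
}

func main() {
	// 9223372036854775808 is one past math.MaxInt64, so it parses as uint64.
	fmt.Println(parseIntegerLiteral("9223372036854775808"))
}
```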
All time ranges are combined with AND regardless of context, and
regardless of whether doing so makes any logical sense. That was the
previous behavior and, unfortunately, a lot of people rely on it.
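For illustration, combining two time conditions with AND amounts to intersecting their ranges; a sketch with a hypothetical `TimeRange` type:

```go
package query

import "time"

// TimeRange is a hypothetical closed-open [Min, Max) window, used only
// to illustrate the semantics described above.
type TimeRange struct {
	Min, Max time.Time
}

// intersect combines two ranges as if their conditions were ANDed: the
// later lower bound and the earlier upper bound win. If the ranges do
// not overlap, the result is empty (Min after Max), which is what the
// "regardless of whether it makes logical sense" behavior produces.
func intersect(a, b TimeRange) TimeRange {
	out := a
	if b.Min.After(out.Min) {
		out.Min = b.Min
	}
	if b.Max.Before(out.Max) {
		out.Max = b.Max
	}
	return out
}
```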
A cold shard that suddenly receives a lot of writes could build up a
very large cache that takes a long time to snapshot, or hit the cache's
maximum memory limit more quickly. This re-enables compactions during
writes when necessary, so we don't have to wait for the shard monitor
goroutine to re-enable them.
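A rough sketch of the write-path check, with hypothetical `Engine`, `Cache`, and `Point` types standing in for the real storage-engine ones:

```go
package tsm

import "sync"

// Point, Cache, and Engine are hypothetical stand-ins; only the
// control flow matters here.
type Point struct{}

type Cache struct{}

func (c *Cache) WriteMulti(points []Point) error { return nil }

type Engine struct {
	mu                 sync.Mutex
	compactionsEnabled bool
	cache              *Cache
}

func (e *Engine) enableLevelCompactions() {
	// Restart the background compaction goroutines (elided).
}

// WritePoints re-enables compactions inline when they were disabled
// while the shard was cold, instead of waiting for the shard monitor
// goroutine to notice the new write load.
func (e *Engine) WritePoints(points []Point) error {
	e.mu.Lock()
	if !e.compactionsEnabled {
		e.enableLevelCompactions()
		e.compactionsEnabled = true
	}
	e.mu.Unlock()
	return e.cache.WriteMulti(points)
}
```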