This adds a sleep so that the parquet cache has a little time to
populate before we make another request to the query buffer. Without
the sleep, the cache sometimes has not populated yet and we hit a race
condition where the new request actually goes to object store. This is
fine in practice, because filling the cache would take time in
production as well. I haven't seen the test fail since adding this, but
the race is hard to trigger in the first place and does not happen all
that often in practice.
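As a rough illustration (the helper names here are hypothetical, not
the actual test code), the change amounts to pausing between the two
requests:

```rust
use std::time::Duration;

// Stand-in for issuing a request to the server under test.
async fn run_query(_sql: &str) { /* ... */ }

async fn query_hits_parquet_cache() {
    run_query("SELECT * FROM cpu").await; // first hit populates the cache

    // Give the parquet cache a moment to fill; without this the second
    // request can race ahead and read from object store directly.
    tokio::time::sleep(Duration::from_millis(500)).await;

    run_query("SELECT * FROM cpu").await; // should now be served from cache
}
```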
When starting up a new cluster in Enterprise we might have multiple
nodes starting at the same time. This can leave us with multiple
catalogs whose in-memory representations have different UUIDs.
For example:
- Let's say we have node0 and node1
- node0 and node1 start at the same time and both check object storage
to see if there is a catalog to load
- They both see there is no catalog
- They both create a new one by generating a UUID and persisting it to
object storage
- Whichever is written second now has the correct UUID in its in-memory
representation, while the other will likely not have the correct one
until it is restarted
This in practice isn't an issue today, as Trevor notes in
https://github.com/influxdata/influxdb_pro/issues/600, but it could be
once we start using `--cluster-id` for licensing purposes. To prevent
it, we make the write to object storage use a Put mode that fails if
the object already exists; the node that lost the race then just loads
the other's catalog.
For example, if node1 wins the race, then node0 will load the catalog
created by node1 and use that UUID instead.
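A minimal sketch of that create-if-absent write, assuming the
`object_store` crate's `PutMode::Create` (the function and variable
names are illustrative):

```rust
use object_store::{path::Path, Error, ObjectStore, PutMode, PutOptions, PutPayload};

// Returns Ok(true) if this node won the race and persisted its catalog,
// Ok(false) if another node got there first and we should load theirs.
async fn persist_new_catalog(
    store: &dyn ObjectStore,
    location: &Path,
    catalog_bytes: PutPayload,
) -> object_store::Result<bool> {
    let opts = PutOptions {
        mode: PutMode::Create, // fail if the object already exists
        ..Default::default()
    };
    match store.put_opts(location, catalog_bytes, opts).await {
        Ok(_) => Ok(true),
        Err(Error::AlreadyExists { .. }) => Ok(false),
        Err(e) => Err(e),
    }
}
```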
I have not included a test for this, as it depends on triggering a race
condition: we could never really be sure the race was exercised, and we
rely on the underlying object store to handle the conflict for us. It
is also unlikely to happen in the first place, since this code path
only runs when a new cluster is initialized for the first time.
This creates a CatalogUpdateMessage type that is used to send
CatalogUpdates; this type performs the send on the oneshot Sender so
that the consumer of the message does not need to do so.
Subscribers to the catalog get a CatalogSubscription, which uses the
CatalogUpdateMessage type to ACK the message broadcast from the catalog.
This means that catalog message broadcast can fail, but this commit does
not provide any means of rolling back a catalog update.
A test was added to check that the broadcast and ACK flow works.
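A sketch of the shape this takes (the types here are illustrative
stand-ins, not the actual influxdb3 definitions):

```rust
use tokio::sync::{mpsc, oneshot};

// Stand-in for the real catalog update payload.
struct CatalogUpdate;

// The message owns the oneshot sender and performs the ACK itself, so
// the consumer never handles the sender directly.
struct CatalogUpdateMessage {
    update: CatalogUpdate,
    ack: oneshot::Sender<()>,
}

impl CatalogUpdateMessage {
    fn ack(self) {
        let _ = self.ack.send(()); // catalog may have stopped waiting
    }
}

// Catalog side: send the update, then wait for the subscriber's ACK.
// This await is where a broadcast can fail.
async fn send_update(
    tx: &mpsc::Sender<CatalogUpdateMessage>,
    update: CatalogUpdate,
) -> Result<(), &'static str> {
    let (ack, ack_rx) = oneshot::channel();
    tx.send(CatalogUpdateMessage { update, ack })
        .await
        .map_err(|_| "subscriber dropped")?;
    ack_rx.await.map_err(|_| "subscriber dropped before ACK")
}
```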
* feat(python): update to python-build-standalone 3.13.2
References:
- https://github.com/influxdata/influxdb/issues/26044
* fix: update fetch-python-standalone.bash to properly set 'executable'
* fix: use PYO3_CONFIG_FILE to find PYTHONHOME.
* fix: add comment about PYO3_CONFIG_FILE.
* fix: remove ensure_pyo3().
* fix: add some sleep so catalog is updated.
---------
Co-authored-by: Jackson Newhouse <jnewhouse@influxdata.com>
* refactor: use repository in catalog
The catalog was refactored to use identifiers on everything, and store
everything in a consistent structure. This structure makes use of the
`Repository` type that holds a `SerdeVecMap` of Id to Resource, along
with the next Id, and a bi-map of Id to resource name.
The `Repository` type is used at each level of the catalog where a
resource is stored.
This simplifies the repeated logic for snapshotting, inserting, and
updating resources in the catalog, as well as the accessor methods for
getting a resource by id or name and mapping names to ids and vice versa.
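In simplified form, the shape is roughly the following (plain
`HashMap`s stand in for the real `SerdeVecMap` and bi-map types):

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Simplified sketch of the Repository described above.
struct Repository<Id, Resource> {
    /// Id -> resource (the real type preserves insertion order).
    repo: HashMap<Id, Resource>,
    /// Next id to assign on insert.
    next_id: Id,
    /// Bi-directional id <-> name mapping.
    id_to_name: HashMap<Id, String>,
    name_to_id: HashMap<String, Id>,
}

impl<Id: Hash + Eq + Copy, Resource> Repository<Id, Resource> {
    fn by_id(&self, id: &Id) -> Option<&Resource> {
        self.repo.get(id)
    }

    fn by_name(&self, name: &str) -> Option<&Resource> {
        self.name_to_id.get(name).and_then(|id| self.repo.get(id))
    }

    fn name_for(&self, id: &Id) -> Option<&str> {
        self.id_to_name.get(id).map(String::as_str)
    }
}
```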
In addition, the process for catalog batch verification and permit was
altered so that the permit process induces a retry if the catalog was
updated while the catalog batch function was producing the batch, i.e., if
the catalog sequence incremented while the caller was waiting for a permit.
This eliminated the need for verifying the catalog batch after it had been
generated, and allows for a single path to apply a catalog batch after it
has been persisted to object store.
This assumes that the generation of the catalog batch implies validity.
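A pseudocode-level sketch of that flow (all names are hypothetical; the
stubs exist only to make the shape concrete):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use tokio::sync::{Mutex, MutexGuard};

struct CatalogBatch;
struct Update;

// Minimal stand-in: a sequence number plus a single-permit lock.
struct Catalog {
    sequence: AtomicU64,
    permit: Mutex<()>,
}

impl Catalog {
    fn sequence_number(&self) -> u64 {
        self.sequence.load(Ordering::SeqCst)
    }
    fn catalog_batch(&self, _update: &Update) -> CatalogBatch {
        CatalogBatch
    }
    async fn permit(&self) -> MutexGuard<'_, ()> {
        self.permit.lock().await
    }
}

// Build the batch, take the permit, and retry from scratch if the
// sequence moved in the meantime; once it matches, the batch is valid
// by construction and can be persisted and applied down a single path.
async fn update_catalog(catalog: &Catalog, update: Update) -> CatalogBatch {
    loop {
        let seq = catalog.sequence_number();
        let batch = catalog.catalog_batch(&update);
        let permit = catalog.permit().await;
        if catalog.sequence_number() != seq {
            drop(permit); // catalog changed underneath us: rebuild
            continue;
        }
        // persist to object store, then apply, while holding the permit
        return batch;
    }
}
```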
Irrelevant tests were removed.
The last and distinct caches now rely more heavily on Ids, though the
processing engine still needs to switch over to using Ids for
starting/stopping triggers.
* fix(circleci): add librt.so to list of acceptable libraries
* feat(circleci): check for glibc portability
* fix(circleci): remove rpm --nodeps workaround in rpm validate
Now that we have glibc portability in rpm builds, we no longer need the
'rpm --nodeps' workaround and can go back to 'yum localinstall'.
Closes #26011
* chore: update README_processing_engine.md for glibc portability
Continuing our work of creating versioned files before Beta, this commit
adds a PersistedSnapshotVersion which is used at the boundary of
serializing and deserializing so that we can easily upgrade to newer
versions and handle old ones without breaking things for users.
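A sketch of the pattern (the real field and variant names in influxdb3
may differ):

```rust
use serde::{Deserialize, Serialize};

// Stand-in for the actual snapshot contents.
#[derive(Serialize, Deserialize)]
struct PersistedSnapshot {
    // ... snapshot fields ...
}

// Versioned wrapper used only at the serialize/deserialize boundary;
// adding a V2 later means adding a variant and an upgrade path, without
// breaking files already persisted as V1.
#[derive(Serialize, Deserialize)]
#[serde(tag = "version")]
enum PersistedSnapshotVersion {
    #[serde(rename = "1")]
    V1(PersistedSnapshot),
}
```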
This commit restores the old behavior we had where new tags can be added
to a schema. To do this we made tags nullable, which brings us in line
with our other products. These changes were made in this PR:
https://github.com/influxdata/influxdb3_core/pull/41.
Changes to accomplish this new behavior were:
- Queries no longer return an empty string for null tags; instead they
are returned as null or, in many formats, not at all.
- References to v1 for parsing and validating lines were removed as we
only have one path for doing so these days shared amongst all the
write_lp endpoints.
- We fixed failing tests that expected adding new tags to fail or that
depended on that behavior indirectly
- Tests had their snapshot files updated to reflect that tags are
nullable by default
- Behavior for making a schema and checking whether a column can be null
was updated in a separate repo and integrated here
- The series_key is updated whenever we get a new tag added to the
schema
- New tests were added to show that you can add a new tag and that the
series key is updated as part of that
With the above changes, users can once again add tags as they would
expect, especially with the v1 and v2 APIs and Telegraf plugins.
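To make the schema-level change concrete, here is a sketch at the Arrow
level (assuming the `arrow` crate; tags are dictionary-encoded strings
in this codebase, and the nullability flag is the piece that changed):

```rust
use arrow::datatypes::{DataType, Field, Schema, TimeUnit};

// A measurement schema where the tag column is nullable: rows written
// before the "host" tag existed simply read back as null.
fn example_schema() -> Schema {
    let tag_type =
        DataType::Dictionary(Box::new(DataType::Int32), Box::new(DataType::Utf8));
    Schema::new(vec![
        Field::new("host", tag_type, true), // nullable tag
        Field::new("usage", DataType::Float64, true),
        Field::new("time", DataType::Timestamp(TimeUnit::Nanosecond, None), false),
    ])
}
```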
* chore: clean up runtime code for python setup
ccd5d22aab introduced working but temporary code for setting up the
python runtime environment. This cleans that up:
* refactor various find_python() functionality into virtualenv.rs
* refactor PYTHONHOME calculation to virtualenv.rs:find_python_install()
* adjust init_pyo3() to temporarily set PYTHONHOME based on
virtualenv.rs:find_python_install() as this is the only place it is
needed (indeed, venv activation scripts try to remove it)
Importantly, virtualenv.rs:find_python_install() tries to find the
python build standalone runtime based on a few heuristics. This function
could be improved in the fullness of time to, e.g., be configured via a
build parameter.
Also, virtualenv.rs:find_python() can be used pre and post venv
activation. Before entering the venv, it will use find_python_install()
which is useful for things like setting up the initial venv. After
entering the venv, it will honor VIRTUAL_ENV (as set by
virtualenv.rs:initialize_venv()) to find python, which is important for
installing packages with pip and having them installed into the venv.
* chore: update README_processing_engine.md for default venv
* chore: add bug reference for venv migrations with python minor releases
* chore: add security URLs to README_processing_engine.md
* chore: find_python_home() returns Option<PathBuf>. Thanks Jackson Newhouse
* fix: manually set sys.prefix, exec_prefix and sys.path
When activating a venv in the shell, sys.base_prefix and
sys.base_exec_prefix should be set to the installation location while
sys.prefix and sys.exec_prefix should be set to the venv dir.
Unfortunately, when initialize_venv() and init_pyo3() are called, we
can't use Py_InitializeFromConfig() to set any of these and certain
platforms are unable to find python-build-standalone. For now, we'll
temporarily set PYTHONHOME in init_pyo3() to the installation location
to make python work. By setting it at this point in the code, sys.prefix
and sys.exec_prefix end up also being set to the installation location,
which is fine when not under a venv, but is different from when entering
a venv.
To address this, in PYTHON_INIT.call_once() and when VIRTUAL_ENV is set
(which initialize_venv() will have set at this point), manually set
sys.prefix and sys.exec_prefix to what is in VIRTUAL_ENV.
Similarly, when activating a venv in the shell, sys.path is appended to
have the venv's site-packages dir. Previously we were setting PYTHONPATH
in initialize_venv() which ensures that the venv's site-packages dir is
in sys.path, but this ends up having the venv's site-packages dir first
in sys.path. To correct this, don't set PYTHONPATH any more and instead
adjust PYTHON_INIT.call_once() to append the venv's site-packages dir to
sys.path when VIRTUAL_ENV is set.
Finally, when exiting init_pyo3(), unconditionally unset PYTHONHOME when
VIRTUAL_ENV is set (like activation scripts do) and restore/unset when
it isn't.
Prior to these changes, all targets incorrectly had the venv's
site-packages first in sys.path and OSX and Windows additionally had an
incorrect sys.prefix and sys.exec_prefix. With these initialization
changes in place, the runtime environment for the plugins is much closer
to that of a shell activated venv.
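A sketch of those fixups (pyo3 0.20-era API; the venv path handling is
illustrative, not the actual virtualenv.rs code):

```rust
use pyo3::prelude::*;
use pyo3::types::PyList;

// After interpreter init, when VIRTUAL_ENV is set: point prefix and
// exec_prefix at the venv (as shell activation would) and append the
// venv's site-packages to sys.path rather than prepending it.
fn fixup_sys_for_venv(py: Python<'_>, venv_dir: &str) -> PyResult<()> {
    let sys = py.import("sys")?;
    sys.setattr("prefix", venv_dir)?;
    sys.setattr("exec_prefix", venv_dir)?;
    let path: &PyList = sys.getattr("path")?.downcast()?;
    // Unix layout shown; Windows uses Lib\site-packages instead.
    path.append(format!("{venv_dir}/lib/python3.13/site-packages"))?;
    Ok(())
}
```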
Removed all of the shellcheck complaints that were showing up in my
editor.
Where possible, I fixed them directly; for
https://www.shellcheck.net/wiki/SC2059 in the install scripts, I told
shellcheck to ignore it.
Happy to go and fix that lint too; I just wasn't sure if we prefer the
current approach, since it doesn't seem to be causing any problems.