* feat(paging): add support for `after` id parameter in find options
* chore(http): update swagger to reflect `after` query parameter in list buckets
* chore(changelog): update changelog to reflect `after` query parameter in list buckets
* chore(tenant): update tenant storage tests for paginating with `after`
The `buckets()` command used a bucket lookup that wrapped the
`FindBuckets` API, but it did not use the pagination aspect of that API
correctly. When the underlying implementation was changed to a version
that correctly implemented pagination, the `buckets()` query command
broke. Since it was the query that used the API incorrectly, rather
than a regression in the `FindBuckets` implementation, this fixes the
usage to paginate correctly.
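For illustration, here is a minimal sketch of cursor-style pagination driven by an `after` ID, along the lines of the fixed usage; the `FindOptions` and `BucketService` shapes below are simplified assumptions, not the exact influxdb signatures.

```go
package example

import "context"

type ID uint64

type Bucket struct {
	ID   ID
	Name string
}

// FindOptions carries paging state; After resumes the listing after
// the bucket with that ID.
type FindOptions struct {
	Limit int
	After *ID
}

type BucketService interface {
	FindBuckets(ctx context.Context, opt FindOptions) ([]*Bucket, error)
}

// allBuckets pages through every bucket rather than assuming a single
// call returns the full set, which was the incorrect usage described above.
func allBuckets(ctx context.Context, svc BucketService) ([]*Bucket, error) {
	const pageSize = 20
	var (
		out   []*Bucket
		after *ID
	)
	for {
		page, err := svc.FindBuckets(ctx, FindOptions{Limit: pageSize, After: after})
		if err != nil {
			return nil, err
		}
		out = append(out, page...)
		if len(page) < pageSize {
			return out, nil // a short page means there are no more results
		}
		last := page[len(page)-1].ID
		after = &last
	}
}
```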
* refactor: migrator and introduce Store.(Create|Delete)Bucket
feat: kvmigration internal utility to create / manage kv store migrations (sketched below)
fix: ensure migrations are applied in all test cases
* chore: update kv and migration documentation
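As a rough illustration of the kvmigration utility mentioned above, a migration can pair an Up step with a Down step over the new Store.(Create|Delete)Bucket methods; the Migration and Store shapes here are assumptions for the sketch, not the actual internal API.

```go
package example

import "context"

// Store exposes the bucket-level operations introduced in this refactor.
type Store interface {
	CreateBucket(ctx context.Context, bucket []byte) error
	DeleteBucket(ctx context.Context, bucket []byte) error
}

// Migration is a single reversible step applied to the kv store.
type Migration struct {
	Name string
	Up   func(ctx context.Context, s Store) error
	Down func(ctx context.Context, s Store) error
}

// CreateBuckets returns a migration that creates the named buckets on Up
// and deletes them again on Down.
func CreateBuckets(name string, buckets ...[]byte) Migration {
	return Migration{
		Name: name,
		Up: func(ctx context.Context, s Store) error {
			for _, b := range buckets {
				if err := s.CreateBucket(ctx, b); err != nil {
					return err
				}
			}
			return nil
		},
		Down: func(ctx context.Context, s Store) error {
			for _, b := range buckets {
				if err := s.DeleteBucket(ctx, b); err != nil {
					return err
				}
			}
			return nil
		},
	}
}
```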
Tenant bucket lookup needs to route an action to storage so that it can
use indexes. I added an error condition to the storage code to ensure
this was working, but the tests did not exercise that piece of code.
Both have been remedied.
* feat(tenant): add service functions and business logic
Built on top of the crud layer of the system, we now have additional service logic.
The addition of a service layer should allow us to verify functionality similar to the kv system; a sketch of the layering follows.
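In this layering, the service wraps the crud-level store and adds business logic such as validation before delegating; the interfaces below are illustrative assumptions rather than the exact tenant package types.

```go
package example

import (
	"context"
	"errors"
)

type Bucket struct {
	ID    uint64
	OrgID uint64
	Name  string
}

// BucketStore is the crud layer: raw reads and writes against the kv store.
type BucketStore interface {
	CreateBucket(ctx context.Context, b *Bucket) error
	FindBucketByID(ctx context.Context, id uint64) (*Bucket, error)
}

// BucketService layers business logic on top of the store; this is the
// piece whose behavior can be verified against the kv system.
type BucketService struct {
	store BucketStore
}

func (s *BucketService) CreateBucket(ctx context.Context, b *Bucket) error {
	// Service-level validation that the raw crud layer does not perform.
	if b.Name == "" {
		return errors.New("bucket name is required")
	}
	if b.OrgID == 0 {
		return errors.New("bucket requires an organization ID")
	}
	return s.store.CreateBucket(ctx, b)
}
```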
At times, snowflake ID generation would create org and bucket IDs with
characters that had special meaning for the storage engine.
The storage engine concatenates the org and bucket bytes together into a
single 128-bit value. That value is used in the old measurement
section. Measurement was transformed into the `_measurement` tag.
However, certain properties of the older measurement data location
are still required for the org/bucket bytes: they cannot contain
commas, spaces, or backslashes.
This PR puts a specific ID generator in place during the creation of
orgs and buckets. The IDs are just random numbers, but with each
of the restricted characters incremented by one. While this changes the
entropy distribution somewhat, it does not matter much for our
purposes.
... because org and bucket IDs are now checked transactionally for
previous existence in the key-value stores. If an ID already exists,
we retry generation, up to 100 times.
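A sketch of the two ideas combined, assuming hypothetical helper names: each byte of a random ID has the storage-reserved characters (comma, space, backslash) incremented past, and generation is retried when the ID already exists.

```go
package example

import (
	"errors"
	"math/rand"
)

// sanitize bumps any byte that the storage engine reserves, since org and
// bucket IDs are concatenated into a single 128-bit measurement-style key.
func sanitize(id uint64) uint64 {
	var out uint64
	for i := 7; i >= 0; i-- {
		c := byte(id >> (uint(i) * 8))
		switch c {
		case ',', ' ', '\\':
			c++ // step off the reserved character
		}
		out = out<<8 | uint64(c)
	}
	return out
}

var errIDExhausted = errors.New("unable to generate a unique ID after 100 attempts")

// newOrgBucketID keeps generating sanitized random IDs until one is not
// already present in the store, giving up after 100 attempts.
func newOrgBucketID(exists func(uint64) bool) (uint64, error) {
	for i := 0; i < 100; i++ {
		if id := sanitize(rand.Uint64()); !exists(id) {
			return id, nil
		}
	}
	return 0, errIDExhausted
}
```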
I did this with a dumb editor macro, so some comments changed too.
Also rename the root package from platform to influxdb.
In the interest of minimizing risk, every file importing the root package
now aliases it to "platform" so that no changes beyond imports were
necessary in those files.
Lastly, replace the old platform module with the local path /dev/null so
that nobody can accidentally reintroduce a platform dependency while
migrating platform code to influxdb.
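The module change described above would look roughly like the following in go.mod; the module paths are written out here as an assumption based on the package names in the text.

```
module github.com/influxdata/influxdb

// Point the old root module at an unusable local path so a platform
// dependency cannot sneak back in while the migration is in progress.
replace github.com/influxdata/platform => /dev/null
```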
* fix(testing): compare expected error messages against actual
* remove nonsense
* remove nonsense
* add expected error message for bucket not found
* oops
* add types to bucket service tests
* add type to bucket cmd interface
* bucket type needs to be defined in JSON for POST creation
* rip out bucket type stuff
* remove type from bucket tests
* add InternalBucketID helper fn
* remove more code
* remove org from internal bucket ID