- Ideally, we would use the soon-to-be open source plutonium client to interact with Influx Enterprise. This would mean that this application could be entirely open source. (We should check with Todd and Nate.)
- However, if in the future we want to deliver a closed source version, we'll use the open source version as a library. The open source library will define certain routes (/users, /whatever); the closed source version will either override those routes or add new ones. In other words, the closed source version is simply additional or modified routes on the same server (see the sketch after this list).
- Survey Jason and Nathaniel about their experience with closed source.
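A rough sketch of how that layering could look in Go; every name below (`AddOpenRoutes`, `AddEnterpriseRoutes`, the routes themselves) is illustrative, not an existing API:

```go
// Sketch of the "open source library + closed source overlay" idea.
// Every name below is illustrative; none of this is an existing API.
package main

import (
	"fmt"
	"log"
	"net/http"
)

// AddOpenRoutes stands in for the open source library: it registers the
// baseline routes every build ships with.
func AddOpenRoutes(mux *http.ServeMux) {
	mux.HandleFunc("/users", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "open source /users handler")
	})
}

// AddEnterpriseRoutes stands in for the closed source build: it layers
// additional routes on top of the open source set.
func AddEnterpriseRoutes(mux *http.ServeMux) {
	mux.HandleFunc("/clusters", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "closed source /clusters handler")
	})
}

func main() {
	mux := http.NewServeMux()
	AddOpenRoutes(mux)
	AddEnterpriseRoutes(mux)
	log.Fatal(http.ListenAndServe(":8888", mux))
}
```

Note that `http.ServeMux` panics if the same pattern is registered twice, so actually overriding an open source route (rather than only adding new ones) would need a router that allows re-registration.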
The JavaScript build will be decoupled from the Go build process.
Asset compilation will happen during the backend-server build.
This allows the front-end team to swap in a mocked, auto-generated Swagger backend for testing and development.
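For illustration, one way the compiled assets could end up served by the backend-server, assuming the Webpack output is embedded into the Go binary at build time (the `ui/build` path and the use of the standard library's `embed` package are assumptions, not decisions):

```go
// Illustration only: serve the Webpack output that was compiled into the
// backend-server binary.
package server

import (
	"embed"
	"io/fs"
	"net/http"
)

// Assumption: Webpack writes the compiled front-end to ui/build before `go build` runs.
//
//go:embed ui/build
var assets embed.FS

// Assets returns an http.Handler that serves the compiled front-end,
// so a single binary carries both the API and the static assets.
func Assets() http.Handler {
	sub, err := fs.Sub(assets, "ui/build")
	if err != nil {
		panic(err)
	}
	return http.FileServer(http.FS(sub))
}
```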
##### JavaScript
Webpack
Static asset compilation during backend-server build.
##### Go
We'll use GDM as the vendoring solution to maintain consistency with other pieces of the TICK stack.
*Future work*: switch to the community vendoring solution once it matures.
### API
#### REST
We'll use a Swagger interface definition to specify the API and validate JSON. The goal is to emphasize designing to an interface, facilitating parallel development.
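As an illustration of designing to the interface, a backend handler might look like the sketch below; the `/users` operation and its fields are hypothetical, and the Swagger definition (not this struct) would remain the source of truth:

```go
// Illustrative only: a handler written against a hypothetical Swagger-defined
// POST /users operation.
package server

import (
	"encoding/json"
	"net/http"
)

// newUser mirrors the request schema the interface definition would declare.
type newUser struct {
	Name  string `json:"name"`
	Email string `json:"email"`
}

func postUser(w http.ResponseWriter, r *http.Request) {
	var u newUser
	dec := json.NewDecoder(r.Body)
	dec.DisallowUnknownFields() // reject fields the interface definition doesn't allow
	if err := dec.Decode(&u); err != nil || u.Name == "" {
		http.Error(w, "request body does not match the API definition", http.StatusBadRequest)
		return
	}
	w.WriteHeader(http.StatusCreated)
}
```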
#### Query Proxy
The query proxy is a special endpoint for querying Influx.
Our approach differs from Enterprise 1.0 and Grafana 2.0, both of which use a GET and pass query parameters to the backend.
Ours will be a POST that receives a JSON object defining additional metadata about the query (a sketch follows the feature list below).
Features would include:
1. Load balancing against all data nodes in the cluster.
2. Formatting the output results to be simple to use in the frontend.
3. Decimating the results to minimize network traffic.
4. Using prepared queries to move the query window.
5. Allowing different response protocols (HTTP GET, WebSocket, etc.).
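A minimal sketch of what the proxy could look like, assuming a JSON body roughly like `proxyRequest` below; the field names, the round-robin choice, and the pass-through forwarding are placeholders for discussion, not decisions:

```go
// A minimal sketch only: field names, routing, and error handling are all
// placeholders, not an agreed-upon design.
package server

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"sync/atomic"
)

// proxyRequest is the JSON object POSTed to the proxy; everything beyond
// Query is the "additional metadata" mentioned above.
type proxyRequest struct {
	Query     string `json:"query"`                // raw InfluxQL
	Database  string `json:"db,omitempty"`
	RP        string `json:"rp,omitempty"`
	MaxPoints int    `json:"max_points,omitempty"` // decimation hint (feature 3)
}

type queryProxy struct {
	dataNodes []string // e.g. ["http://data1:8086", "http://data2:8086"]
	next      uint64   // round-robin counter
}

func (p *queryProxy) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	var req proxyRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// Feature 1: load balance across all data nodes in the cluster.
	node := p.dataNodes[atomic.AddUint64(&p.next, 1)%uint64(len(p.dataNodes))]

	// Forward to the chosen node's /query endpoint. Reshaping the results for
	// the frontend and decimating them (features 2 and 3) would happen here
	// before the response is written back out.
	v := url.Values{"q": {req.Query}, "db": {req.Database}, "rp": {req.RP}}
	resp, err := http.Get(fmt.Sprintf("%s/query?%s", node, v.Encode()))
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()

	w.Header().Set("Content-Type", resp.Header.Get("Content-Type"))
	io.Copy(w, resp.Body)
}
```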
#### Install telegraf on host
1. User clicks on an icon that represents their system (e.g. Red Hat).
2. User fills out a form that includes the information needed to configure telegraf.
- influx url
- influx authentication
- does telegraf have a shared-secret JWT?
- let's talk to Nathaniel about this, with regard to how it worked with Kapacitor.
3. User gets a download command (sh?). This command has enough to start telegraf for Docker container monitoring (v1.1) and send the data to Influx. (A sketch of generating this command follows the list.)
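A sketch of how the download command in step 3 might be generated from the form in step 2; the script URL and flag names are made up, and real credentials would need more careful handling:

```go
// Sketch only: the install script URL and flags below are placeholders.
package server

import "fmt"

// telegrafForm holds the fields from step 2 of the flow above.
type telegrafForm struct {
	InfluxURL string // e.g. https://myinflux:8086
	Username  string
	Password  string
}

// installCommand renders the sh one-liner from step 3: enough to start
// telegraf with Docker container monitoring and point it at the user's Influx.
func installCommand(f telegrafForm) string {
	return fmt.Sprintf(
		"curl -sL https://example.com/install-telegraf.sh | sh -s -- --influx-url %q --user %q --pass %q --input docker",
		f.InfluxURL, f.Username, f.Password,
	)
}
```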
- Question: how do we remove machines? (e.g. I don't want to see my testing mac laptop anymore)
- Could use retention policies (fast)
- testing rp
- production rp
- Could use namespaced databases
- We should talk to the people working on the new series index to help us handle the problem of paging off old/inactive instances gracefully.
1. `SHOW SERIES WHERE time > now() - 1w` or `SHOW TAG VALUES WITH KEY = "host" WHERE time > now() - 1w` gets all host names for the last week.
   - Performance??
2. `DROP SERIES WHERE "host" = 'machine1'` removes a machine's data.
3. We could have a machine endpoint allowing GET/DELETE (see the sketch after this list).
4. We want to filter machines by the times they were active and when they first showed up.
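A sketch of the machine endpoint from item 3, assuming hosts are listed with `SHOW TAG VALUES` and removed with `DROP SERIES`; the `influxQuerier` interface is hypothetical, and a real handler must escape the host value rather than interpolating it:

```go
// Sketch of a machine endpoint (item 3 above); nothing here is a final API.
package server

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// influxQuerier stands in for whatever client we use to run InfluxQL.
type influxQuerier interface {
	Query(q string) ([]string, error) // returns a single column of values
}

type machineHandler struct {
	db influxQuerier
}

func (h *machineHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	switch r.Method {
	case http.MethodGet:
		// Hosts seen in the last week (item 1 above).
		hosts, err := h.db.Query(`SHOW TAG VALUES WITH KEY = "host" WHERE time > now() - 1w`)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		json.NewEncoder(w).Encode(hosts)
	case http.MethodDelete:
		// Remove a machine we no longer want to see (item 2 above).
		// NOTE: interpolation is unsafe; a real implementation must escape host.
		host := r.URL.Query().Get("host")
		if _, err := h.db.Query(fmt.Sprintf(`DROP SERIES WHERE "host" = '%s'`, host)); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusNoContent)
	default:
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
	}
}
```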
#### Update telegraf configuration on host
Use confd for telegraf configuration?
Survey Prometheus's service discovery methodology and compare it to our telegraf design (stand-alone service or built into telegraf).