Merge branch 'master' into feature/structured-queries

Conflicts:
	Godeps
	LICENSE_OF_DEPENDENCIES.md
	server/mux.go
	server/routes.go
	ui/.eslintrc
Chris Goller 2017-04-07 16:06:24 -05:00
commit d2c7c74238
523 changed files with 33862 additions and 12572 deletions

.gitattributes (new file)

@ -0,0 +1 @@
CHANGELOG.md merge=union
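
The `merge=union` attribute makes git resolve conflicting CHANGELOG.md hunks by keeping the lines from both sides instead of writing conflict markers. A minimal sketch of the effect (branch names and entries are illustrative, not part of this change), assuming this `.gitattributes` line is already committed on both branches:

```
# Both branches append a different entry to CHANGELOG.md
git checkout -b feature
echo "- feature entry" >> CHANGELOG.md
git commit -am "add feature changelog entry"

git checkout master
echo "- fix entry" >> CHANGELOG.md
git commit -am "add fix changelog entry"

# With merge=union declared in .gitattributes, the merge keeps both
# new lines rather than stopping on <<<<<<< / >>>>>>> conflict markers.
git merge feature
```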

.gitignore

@ -11,3 +11,9 @@ chronograf.db
chronograf-v1.db
npm-debug.log
.vscode
.DS_Store
.godep
.jsdep
.jssrc
.dev-jssrc
.bindata

CHANGELOG.md

@ -1,10 +1,157 @@
## v1.2.0 [unreleased]
### Upcoming Bug Fixes
### Bug Fixes
1. [#1104](https://github.com/influxdata/chronograf/pull/1104): Fix windows hosts on host list
1. [#1125](https://github.com/influxdata/chronograf/pull/1125): Fix visualizations not showing graph name
1. [#1133](https://github.com/influxdata/chronograf/issues/1133): Fix Enterprise Kapacitor authentication.
1. [#1142](https://github.com/influxdata/chronograf/issues/1142): Fix Kapacitor Telegram config to display correct disableNotification setting
1. [#1097](https://github.com/influxdata/chronograf/issues/1097): Fix broken graph spinner in the Data Explorer & Dashboard Cell Edit
1. [#1106](https://github.com/influxdata/chronograf/issues/1106): Fix obscured legends in dashboards
1. [#1051](https://github.com/influxdata/chronograf/issues/1051): Exit presentation mode when using the browser back button
1. [#1123](https://github.com/influxdata/chronograf/issues/1123): Widen single column results in data explorer
1. [#1164](https://github.com/influxdata/chronograf/pull/1164): Restore ability to save raw queries to a Dashboard Cell
1. [#1115](https://github.com/influxdata/chronograf/pull/1115): Fix Basepath issue where content would fail to render under certain circumstances
1. [#1173](https://github.com/influxdata/chronograf/pull/1173): Fix saving email in Kapacitor alerts
1. [#1178](https://github.com/influxdata/chronograf/pull/1178): Repair DataExplorer+CellEditorOverlay's QueryBuilder in Safari
1. [#979](https://github.com/influxdata/chronograf/issues/979): Fix empty tags for non-default retention policies
1. [#1179](https://github.com/influxdata/chronograf/pull/1179): Admin Databases Page will render a database without retention policies
1. [#1128](https://github.com/influxdata/chronograf/pull/1128): No more ghost dashboards 👻
1. [#1189](https://github.com/influxdata/chronograf/pull/1189): Clicking inside the graph header edit box will no longer blur the field. Use the Escape key for that behavior instead.
1. [#1193](https://github.com/influxdata/chronograf/issues/1193): Fix missing quoting of raw InfluxQL fields that contain function names
1. [#1195](https://github.com/influxdata/chronograf/issues/1195): Chronograf was not redirecting with authentication for the Influx Enterprise Meta service
1. [#1095](https://github.com/influxdata/chronograf/pull/1095): Make logout button display again
1. [#1209](https://github.com/influxdata/chronograf/pull/1209): HipChat Kapacitor config now uses only the subdomain instead of asking for the entire HipChat URL.
1. [#1219](https://github.com/influxdata/chronograf/pull/1219): Update query for default cell in new dashboard
1. [#1206](https://github.com/influxdata/chronograf/issues/1206): Chronograf now correctly proxies to Kapacitor instances behind a proxy using vhost.
### Upcoming Features
### Features
1. [#1112](https://github.com/influxdata/chronograf/pull/1112): Add ability to delete a dashboard
1. [#1120](https://github.com/influxdata/chronograf/pull/1120): Allow users to update user passwords.
1. [#1129](https://github.com/influxdata/chronograf/pull/1129): Allow InfluxDB and Kapacitor configuration via ENV vars or CLI options
1. [#1130](https://github.com/influxdata/chronograf/pull/1130): Add loading spinner to Alert History page.
1. [#1168](https://github.com/influxdata/chronograf/issues/1168): Expand support for --basepath on some load balancers
1. [#1113](https://github.com/influxdata/chronograf/issues/1113): Add Slack channel per Kapacitor alert.
1. [#1095](https://github.com/influxdata/chronograf/pull/1095): Add new auth duration CLI option; add client heartbeat
1. [#1207](https://github.com/influxdata/chronograf/pull/1207): Add support for custom OAuth2 providers
1. [#1212](https://github.com/influxdata/chronograf/issues/1212): Add query templates and loading animation to the RawQueryEditor
1. [#1221](https://github.com/influxdata/chronograf/issues/1221): More sensible Cell and Dashboard defaults
### Upcoming UI Improvements
### UI Improvements
1. [#1101](https://github.com/influxdata/chronograf/pull/1101): Compress InfluxQL responses with gzip
1. [#1132](https://github.com/influxdata/chronograf/pull/1132): All sidebar items show activity with a blue strip
1. [#1135](https://github.com/influxdata/chronograf/pull/1135): Clarify Kapacitor Alert configuration for Telegram
1. [#1137](https://github.com/influxdata/chronograf/pull/1137): Clarify Kapacitor Alert configuration for HipChat
1. [#1079](https://github.com/influxdata/chronograf/issues/1079): Remove series highlighting in line graphs
1. [#1124](https://github.com/influxdata/chronograf/pull/1124): Polished dashboard cell drag interaction, use Hover-To-Reveal UI pattern in all tables, Source Indicator & Graph Tips are no longer misleading, and aesthetic improvements to the DB Management page
1. [#1187](https://github.com/influxdata/chronograf/pull/1187): Replace Kill Query confirmation modal with ConfirmButtons
1. [#1185](https://github.com/influxdata/chronograf/pull/1185): Alphabetically sort Admin Database Page
1. [#1199](https://github.com/influxdata/chronograf/pull/1199): Move Rename Cell functionality to ContextMenu dropdown
1. [#1222](https://github.com/influxdata/chronograf/pull/1222): Isolate cell repositioning to just those affected by adding a new cell
## v1.2.0-beta7 [2017-03-28]
### Bug Fixes
1. [#1008](https://github.com/influxdata/chronograf/issues/1008): Fix unexpected redirection to create sources page when deleting a source
1. [#1067](https://github.com/influxdata/chronograf/issues/1067): Fix issue creating retention policies
1. [#1068](https://github.com/influxdata/chronograf/issues/1068): Fix issue deleting databases
1. [#1078](https://github.com/influxdata/chronograf/issues/1078): Fix cell resizing in dashboards
1. [#1070](https://github.com/influxdata/chronograf/issues/1070): Save GROUP BY tag(s) clauses on dashboards
1. [#1086](https://github.com/influxdata/chronograf/issues/1086): Fix validation for deleting databases
### Features
### UI Improvements
1. [#1092](https://github.com/influxdata/chronograf/pull/1092): Persist and render Dashboard Cell groupby queries
### UI Improvements
## v1.2.0-beta6 [2017-03-24]
### Bug Fixes
1. [#1065](https://github.com/influxdata/chronograf/pull/1065): Add functionality to the `save` and `cancel` buttons on editable dashboards
2. [#1069](https://github.com/influxdata/chronograf/pull/1069): Make graphs on pre-created dashboards un-editable
3. [#1085](https://github.com/influxdata/chronograf/pull/1085): Make graphs resizable again
4. [#1087](https://github.com/influxdata/chronograf/pull/1087): Hosts page now displays proper loading, host count, and error messages.
### Features
1. [#1056](https://github.com/influxdata/chronograf/pull/1056): Add ability to add a dashboard cell
2. [#1020](https://github.com/influxdata/chronograf/pull/1020): Allow users to edit cell names on dashboards
3. [#1015](https://github.com/influxdata/chronograf/pull/1015): Add ability to edit a dashboard cell
4. [#832](https://github.com/influxdata/chronograf/issues/832): Add a database and retention policy management page
5. [#1035](https://github.com/influxdata/chronograf/pull/1035): Add ability to move and edit queries between raw InfluxQL mode and Query Builder mode
### UI Improvements
## v1.2.0-beta5 [2017-03-10]
### Bug Fixes
1. [#936](https://github.com/influxdata/chronograf/pull/936): Fix leaking sockets for InfluxQL queries
2. [#967](https://github.com/influxdata/chronograf/pull/967): Fix flash of empty graph on auto-refresh when no results were previously returned from a query
3. [#968](https://github.com/influxdata/chronograf/issues/968): Fix wrong database used in dashboards
### Features
1. [#993](https://github.com/influxdata/chronograf/pull/993): Add Admin page for managing users, roles, and permissions for [OSS InfluxDB](https://github.com/influxdata/influxdb) and InfluxData's [Enterprise](https://docs.influxdata.com/enterprise/v1.2/) product
2. [#993](https://github.com/influxdata/chronograf/pull/993): Add Query Management features including the ability to view active queries and stop queries
### UI Improvements
1. [#989](https://github.com/influxdata/chronograf/pull/989): Add a canned dashboard for Mesos
2. [#993](https://github.com/influxdata/chronograf/pull/993): Improve the multi-select dropdown
3. [#993](https://github.com/influxdata/chronograf/pull/993): Provide better error information to users
## v1.2.0-beta4 [2017-02-24]
### Bug Fixes
1. [#882](https://github.com/influxdata/chronograf/pull/882): Fix y-axis graph padding
2. [#907](https://github.com/influxdata/chronograf/pull/907): Fix react-router warning
3. [#926](https://github.com/influxdata/chronograf/pull/926): Fix Kapacitor RuleGraph display
### Features
1. [#873](https://github.com/influxdata/chronograf/pull/873): Add [TLS](https://github.com/influxdata/chronograf/blob/master/docs/tls.md) support
2. [#885](https://github.com/influxdata/chronograf/issues/885): Add presentation mode to the dashboard page
3. [#891](https://github.com/influxdata/chronograf/issues/891): Make dashboard visualizations draggable
4. [#892](https://github.com/influxdata/chronograf/issues/891): Make dashboard visualizations resizable
5. [#893](https://github.com/influxdata/chronograf/issues/893): Persist dashboard visualization position
6. [#922](https://github.com/influxdata/chronograf/issues/922): Additional OAuth2 support for [Heroku](https://github.com/influxdata/chronograf/blob/master/docs/auth.md#heroku) and [Google](https://github.com/influxdata/chronograf/blob/master/docs/auth.md#google)
7. [#781](https://github.com/influxdata/chronograf/issues/781): Add global auto-refresh dropdown to all graph dashboards
### UI Improvements
1. [#905](https://github.com/influxdata/chronograf/pull/905): Make scroll bar thumb element bigger
2. [#917](https://github.com/influxdata/chronograf/pull/917): Simplify the sidebar
3. [#920](https://github.com/influxdata/chronograf/pull/920): Display stacked and step plot graph types
4. [#851](https://github.com/influxdata/chronograf/pull/851): Add configuration for [InfluxEnterprise](https://portal.influxdata.com/) meta nodes
5. [#916](https://github.com/influxdata/chronograf/pull/916): Dynamically scale font size based on resolution
## v1.2.0-beta3 [2017-02-15]
### Bug Fixes
1. [#879](https://github.com/influxdata/chronograf/pull/879): Fix several Kapacitor configuration page state bugs: [#875](https://github.com/influxdata/chronograf/issues/875), [#876](https://github.com/influxdata/chronograf/issues/876), [#878](https://github.com/influxdata/chronograf/issues/878)
2. [#872](https://github.com/influxdata/chronograf/pull/872): Fix incorrect data source response
### Features
1. [#896](https://github.com/influxdata/chronograf/pull/896): Add more Docker stats
## v1.2.0-beta2 [2017-02-10]
### Bug Fixes
1. [#865](https://github.com/influxdata/chronograf/issues/865): Support string field comparisons in Kapacitor rules in the Chronograf UI
### Features
1. [#838](https://github.com/influxdata/chronograf/issues/838): Add [detail node](https://docs.influxdata.com/kapacitor/latest/nodes/alert_node/#details) to Kapacitor alerts
2. [#847](https://github.com/influxdata/chronograf/issues/847): Enable and disable Kapacitor alerts from the alert manager page
3. [#853](https://github.com/influxdata/chronograf/issues/853): Update builds to use yarn over npm install
4. [#860](https://github.com/influxdata/chronograf/issues/860): Add gzip encoding and caching of static assets to server
5. [#864](https://github.com/influxdata/chronograf/issues/864): Add support to Kapacitor rule alert configuration for:
- HTTP
- TCP
- Exec
- SMTP
- Alerta
### UI Improvements
1. [#822](https://github.com/influxdata/chronograf/issues/822): Simplify and improve the layout of the Data Explorer
- The Data Explorer's intention and purpose have always been the ad hoc and ephemeral exploration of your schema and data.
The concept of `Exploration` sessions and `Panels` betrayed this initial intention and turned the Data Explorer into a
"poor man's" dashboarding tool, which in turn introduced complexity in both the code and the UI. In the future, saving,
manipulating, and viewing multiple visualizations will be done more efficiently and effectively in our dashboarding solution.
## v1.2.0-beta1 [2017-01-27]

Godeps

@ -1,3 +1,4 @@
github.com/NYTimes/gziphandler 6710af535839f57c687b62c4c23d649f9545d885
github.com/Sirupsen/logrus 3ec0642a7fb6488f65b06f9040adc67e3990296a
github.com/boltdb/bolt 5cc10bbbc5c141029940133bb33c9e969512a698
github.com/bouk/httprouter ee8b3818a7f51fbc94cc709b5744b52c2c725e91
@ -6,8 +7,8 @@ github.com/elazarl/go-bindata-assetfs 9a6736ed45b44bf3835afeebb3034b57ed329f3e
github.com/gogo/protobuf 6abcf94fd4c97dcb423fdafd42fe9f96ca7e421b
github.com/google/go-github 1bc362c7737e51014af7299e016444b654095ad9
github.com/google/go-querystring 9235644dd9e52eeae6fa48efd539fdc351a0af53
github.com/influxdata/influxdb 98e19f257c33ceb8e1717fa57236de473054641e
github.com/influxdata/kapacitor 0eb8c348b210dd3d32cb240a417f9e6ded1b691d
github.com/influxdata/influxdb af72d9b0e4ebe95be30e89b160f43eabaf0529ed
github.com/influxdata/kapacitor 5408057e5a3493d3b5bd38d5d535ea45b587f8ff
github.com/influxdata/usage-client 6d3895376368aa52a3a81d2a16e90f0f52371967
github.com/jessevdk/go-flags 4cc2832a6e6d1d3b815e2b9d544b2a4dfb3ce8fa
github.com/satori/go.uuid b061729afc07e77a8aa4fad0a2fd840958f1942a
@ -17,3 +18,4 @@ github.com/uber-go/atomic 3b8db5e93c4c02efbc313e17b2e796b0914a01fb
go.uber.org/zap a5783ee4b216a927da8f839c45cfbf9d694e1467
golang.org/x/net 749a502dd1eaf3e5bfd4f8956748c502357c0bbe
golang.org/x/oauth2 1e695b1c8febf17aad3bfa7bf0a819ef94b98ad5
google.golang.org/api bc20c61134e1d25265dd60049f5735381e79b631

LICENSE

@ -17,12 +17,11 @@ You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
InfluxData Inc.
150 Spear Street
Suite 1750
San Francisco, CA 94105
contact@influxdata.com
799 Market Street, Suite 400
San Francisco, CA 94103
contact@influxdata.com
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007

LICENSE_OF_DEPENDENCIES.md

@ -1,4 +1,5 @@
### Go
* github.com/NYTimes/gziphandler [APACHE-2.0](https://github.com/NYTimes/gziphandler/blob/master/LICENSE.md)
* github.com/Sirupsen/logrus [MIT](https://github.com/Sirupsen/logrus/blob/master/LICENSE)
* github.com/boltdb/bolt [MIT](https://github.com/boltdb/bolt/blob/master/LICENSE)
* github.com/bouk/httprouter [BSD](https://github.com/bouk/httprouter/blob/master/LICENSE)
@ -7,6 +8,7 @@
* github.com/gogo/protobuf [BSD](https://github.com/gogo/protobuf/blob/master/LICENSE)
* github.com/google/go-github [BSD](https://github.com/google/go-github/blob/master/LICENSE)
* github.com/google/go-querystring [BSD](https://github.com/google/go-querystring/blob/master/LICENSE)
* github.com/influxdata/influxdb [MIT](https://github.com/influxdata/influxdb/blob/master/LICENSE)
* github.com/influxdata/kapacitor [MIT](https://github.com/influxdata/kapacitor/blob/master/LICENSE)
* github.com/influxdata/usage-client [MIT](https://github.com/influxdata/usage-client/blob/master/LICENSE.TXT)
* github.com/jessevdk/go-flags [BSD](https://github.com/jessevdk/go-flags/blob/master/LICENSE)
@ -16,8 +18,7 @@
* golang.org/x/net [BSD](https://github.com/golang/net/blob/master/LICENSE)
* golang.org/x/oauth2 [BSD](https://github.com/golang/oauth2/blob/master/LICENSE)
* github.com/influxdata/influxdb [MIT](https://github.com/influxdata/influxdb/blob/master/LICENSE)
* go.uber.org/zap [MIT](https://github.com/uber-go/zap/blob/master/LICENSE.txt)
* github.com/uber-go/atomic [MIT](https://github.com/uber-go/atomic/blob/master/LICENSE.txt)
* google.golang.org/api/oauth2/v2 [BSD](https://github.com/google/google-api-go-client/blob/master/LICENSE)
### Javascript
* Base64 0.2.1 [WTFPL](http://github.com/davidchambers/Base64.js)
@ -891,6 +892,7 @@
* rimraf 2.5.3 [ISC](http://github.com/isaacs/rimraf)
* rimraf 2.5.4 [ISC](http://github.com/isaacs/rimraf)
* ripemd160 0.2.0 [Unknown](https://github.com/cryptocoinjs/ripemd160)
* rome 2.1.22 [MIT](https://github.com/bevacqua/rome)
* run-async 0.1.0 [MIT](http://github.com/SBoudrias/run-async)
* rx-lite 3.1.2 [Apache License](https://github.com/Reactive-Extensions/RxJS)
* samsam 1.1.2 [BSD](https://github.com/busterjs/samsam)

Makefile

@ -1,20 +1,42 @@
.PHONY: assets dep clean test gotest gotestrace jstest run run-dev ctags continuous
VERSION ?= $(shell git describe --always --tags)
COMMIT ?= $(shell git rev-parse --short=8 HEAD)
GDM := $(shell command -v gdm 2> /dev/null)
GOBINDATA := $(shell go list -f {{.Root}} github.com/jteeuwen/go-bindata 2> /dev/null)
YARN := $(shell command -v yarn 2> /dev/null)
SOURCES := $(shell find . -name '*.go')
SOURCES := $(shell find . -name '*.go' ! -name '*_gen.go')
UISOURCES := $(shell find ui -type f -not \( -path ui/build/\* -o -path ui/node_modules/\* -prune \) )
LDFLAGS=-ldflags "-s -X main.version=${VERSION} -X main.commit=${COMMIT}"
BINARY=chronograf
default: dep build
.DEFAULT_GOAL := all
all: dep build
build: assets ${BINARY}
dev: dev-assets ${BINARY}
dev: dep dev-assets ${BINARY}
${BINARY}: $(SOURCES)
${BINARY}: $(SOURCES) .bindata .jsdep .godep
go build -o ${BINARY} ${LDFLAGS} ./cmd/chronograf/main.go
define CHRONOGIRAFFE
._ o o
\_`-)|_
,"" _\_
," ## | 0 0.
," ## ,-\__ `.
," / `--._;)
," ## /
," ## /
endef
export CHRONOGIRAFFE
chronogiraffe: ${BINARY}
@echo "$$CHRONOGIRAFFE"
docker-${BINARY}: $(SOURCES)
CGO_ENABLED=0 GOOS=linux go build -installsuffix cgo -o ${BINARY} ${LDFLAGS} \
./cmd/chronograf/main.go
@ -22,30 +44,51 @@ docker-${BINARY}: $(SOURCES)
docker: dep assets docker-${BINARY}
docker build -t chronograf .
assets: js bindata
assets: .jssrc .bindata
dev-assets: dev-js bindata
dev-assets: .dev-jssrc .bindata
bindata:
.bindata: server/swagger_gen.go canned/bin_gen.go dist/dist_gen.go
@touch .bindata
dist/dist_gen.go: $(UISOURCES)
go generate -x ./dist
go generate -x ./canned
server/swagger_gen.go: server/swagger.json
go generate -x ./server
js:
canned/bin_gen.go: canned/*.json
go generate -x ./canned
.jssrc: $(UISOURCES)
cd ui && npm run build
@touch .jssrc
dev-js:
.dev-jssrc: $(UISOURCES)
cd ui && npm run build:dev
@touch .dev-jssrc
dep: jsdep godep
dep: .jsdep .godep
godep:
.godep: Godeps
ifndef GDM
@echo "Installing GDM"
go get github.com/sparrc/gdm
gdm restore
endif
ifndef GOBINDATA
@echo "Installing go-bindata"
go get -u github.com/jteeuwen/go-bindata/...
endif
gdm restore
@touch .godep
jsdep:
cd ui && npm install
.jsdep: ui/yarn.lock
ifndef YARN
$(error Please install yarn 0.19.1+)
else
cd ui && yarn --no-progress --no-emoji
@touch .jsdep
endif
gen: bolt/internal/internal.proto
go generate -x ./bolt/internal
@ -64,12 +107,18 @@ jstest:
run: ${BINARY}
./chronograf
run-dev: ${BINARY}
./chronograf -d
run-dev: chronogiraffe
./chronograf -d --log-level=debug
clean:
if [ -f ${BINARY} ] ; then rm ${BINARY} ; fi
cd ui && npm run clean
cd ui && rm -rf node_modules
rm -f dist/dist_gen.go canned/bin_gen.go server/swagger_gen.go
@rm -f .godep .jsdep .jssrc .dev-jssrc .bindata
.PHONY: clean test jstest gotest run
continuous:
while true; do if fswatch -e "\.git" -r --one-event .; then echo "#-> Starting build: `date`"; make dev; pkill -9 chronograf; make run-dev & echo "#-> Build complete."; fi; sleep 0.5; done
ctags:
ctags -R --languages="Go" --exclude=.git --exclude=ui .

README.md

@ -2,7 +2,9 @@
Chronograf is an open-source web application written in Go and React.js that provides the tools to visualize your monitoring data and easily create alerting and automation rules.
![Chronograf](https://github.com/influxdata/chronograf/blob/master/docs/images/overview-readme.png)
<p align="left">
<img src="https://github.com/influxdata/chronograf/blob/master/docs/images/overview-readme.png"/>
</p>
## Features
@ -17,27 +19,28 @@ Chronograf is an open-source web application written in Go and React.js that pro
Chronograf's [pre-canned dashboards](https://github.com/influxdata/chronograf/tree/master/canned) for the supported [Telegraf](https://github.com/influxdata/telegraf) input plugins.
Currently, Chronograf offers dashboard templates for the following Telegraf input plugins:
* Apache
* Consul
* Docker
* Elastic
* [Apache](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/apache)
* [Consul](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/consul)
* [Docker](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/docker)
* [Elastic](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/elasticsearch)
* etcd
* HAProxy
* [HAProxy](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/haproxy)
* IIS
* InfluxDB
* Kubernetes
* Memcached
* MongoDB
* MySQL
* [InfluxDB](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/influxdb)
* [Kubernetes](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/kubernetes)
* [Memcached](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/memcached)
* [Mesos](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/mesos)
* [MongoDB](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/mongodb)
* [MySQL](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/mysql)
* Network
* NGINX
* NSQ
* Ping
* PostgreSQL
* [NGINX](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/nginx)
* [NSQ](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/nsq)
* [Ping](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/ping)
* [PostgreSQL](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/postgresql)
* Processes
* RabbitMQ
* Redis
* Riak
* [RabbitMQ](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/rabbitmq)
* [Redis](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/redis)
* [Riak](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/riak)
* [System](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/system/SYSTEM_README.md)
* [CPU](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/system/CPU_README.md)
* [Disk](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/system/DISK_README.md)
@ -47,8 +50,8 @@ Currently, Chronograf offers dashboard templates for the following Telegraf inpu
* [Netstat](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/system/NETSTAT_README.md)
* [Processes](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/system/PROCESSES_README.md)
* [Procstat](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/procstat/README.md)
* Varnish
* Windows Performance Counters
* [Varnish](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/varnish)
* [Windows Performance Counters](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/win_perf_counters)
> Note: If a `telegraf` instance isn't running the `system` and `cpu` plugins, the canned dashboards from that instance won't be generated.
@ -59,31 +62,12 @@ Chronograf's graphing tool that allows you to dig in and create personalized vis
* Generate [InfluxQL](https://docs.influxdata.com/influxdb/latest/query_language/) statements with the query builder
* Generate and edit [InfluxQL](https://docs.influxdata.com/influxdb/latest/query_language/) statements with the raw query editor
* Create visualizations and view query results in tabular format
* Manage visualizations with exploration sessions
### Dashboards
While there is an API and presentation layer for dashboards released in version 1.2.0-beta1, it is not recommended that you try to use Chronograf as a general-purpose dashboard solution. The visualization and editing work is under way and will be in a future release. Meanwhile, if you would like to try it out, you can use `curl` or other HTTP tools to push dashboard definitions directly to the API. If you do so, they should be shown when selected in the application.
Version 1.2.0-beta6 introduces a UI for creating and editing dashboards. The dashboards support several visualization types including line graphs, stacked graphs, step plots, single statistic graphs, and line-single-statistic graphs.
Example:
```
curl -X POST -H "Content-Type: application/json" -d '{
"cells": [
{
"queries": [
{
"label": "%",
"query": "SELECT mean(\"usage_user\") AS \"usage_user\" FROM \"cpu\"",
"wheres": [],
"groupbys": []
}
],
"type": "line"
}
],
"name": "dashboard name"
}' "http://localhost:8888/chronograf/v1/dashboards"
```
This feature is new in version 1.2.0-beta6. We recommend using dashboards in a non-production environment only. Please see the [known issues](#known-issues) section for known bugs, and should you come across any bugs or unexpected behavior, please open [an issue](https://github.com/influxdata/chronograf/issues/new). We appreciate the feedback!
### Kapacitor UI
@ -92,26 +76,41 @@ A UI for [Kapacitor](https://github.com/influxdata/kapacitor) alert creation and
* Simply generate threshold, relative, and deadman alerts
* Preview data and alert boundaries while creating an alert
* Configure alert destinations - Currently, Chronograf supports sending alerts to:
* HipChat
* OpsGenie
* PagerDuty
* Sensu
* Slack
* SMTP
* Talk
* Telegram
* VictorOps
* [Alerta](https://docs.influxdata.com/kapacitor/latest/nodes/alert_node/#alerta)
* [Exec](https://docs.influxdata.com/kapacitor/latest/nodes/alert_node/#exec)
* [HipChat](https://docs.influxdata.com/kapacitor/latest/nodes/alert_node/#hipchat)
* [HTTP/Post](https://docs.influxdata.com/kapacitor/latest/nodes/alert_node/#post)
* [OpsGenie](https://docs.influxdata.com/kapacitor/latest/nodes/alert_node/#opsgenie)
* [PagerDuty](https://docs.influxdata.com/kapacitor/latest/nodes/alert_node/#pagerduty)
* [Sensu](https://docs.influxdata.com/kapacitor/latest/nodes/alert_node/#sensu)
* [Slack](https://docs.influxdata.com/kapacitor/latest/nodes/alert_node/#slack)
* [SMTP/Email](https://docs.influxdata.com/kapacitor/latest/nodes/alert_node/#email)
* [Talk](https://docs.influxdata.com/kapacitor/latest/nodes/alert_node/#talk)
* [Telegram](https://docs.influxdata.com/kapacitor/latest/nodes/alert_node/#telegram)
* [TCP](https://docs.influxdata.com/kapacitor/latest/nodes/alert_node/#tcp)
* [VictorOps](https://docs.influxdata.com/kapacitor/latest/nodes/alert_node/#victorops)
* View all active alerts at a glance on the alerting dashboard
* Enable and disable existing alert rules with the check of a box
### GitHub OAuth Login
### User and Query Management
Manage users, roles, and permissions for [OSS InfluxDB](https://github.com/influxdata/influxdb) and InfluxData's [Enterprise](https://docs.influxdata.com/enterprise/v1.2/) product.
View actively running queries and stop expensive queries on the Query Management page.
These features are new as of Chronograf version 1.2.0-beta5. We recommend using them in a non-production environment only. Should you come across any bugs or unexpected behavior, please open [an issue](https://github.com/influxdata/chronograf/issues/new). We appreciate the feedback as we work to finalize and improve the user and query management features!
### TLS/HTTPS Support
See [Chronograf with TLS](https://github.com/influxdata/chronograf/blob/master/docs/tls.md) for more information.
### OAuth Login
See [Chronograf with OAuth 2.0](https://github.com/influxdata/chronograf/blob/master/docs/auth.md) for more information.
### Advanced Routing
Change the default root path of the Chronograf server with the `—basepath` option.
Change the default root path of the Chronograf server with the `--basepath` option.
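As an illustrative sketch only (the sub-path value and the reverse-proxy setup are assumptions, not part of this change), serving Chronograf under a sub-path might look like:

```
# Serve the Chronograf UI and API under /chronograf/ instead of /;
# a reverse proxy (e.g. nginx) would then route /chronograf/ here.
chronograf --basepath=/chronograf
```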
## Versions
Chronograf v1.2.0-beta1 is a beta release.
Chronograf v1.2.0-beta7 is a beta release.
We will be iterating quickly based on user feedback and recommend using the [nightly builds](https://www.influxdata.com/downloads/) for the time being.
Spotted a bug or have a feature request?
@ -121,7 +120,7 @@ Please open [an issue](https://github.com/influxdata/chronograf/issues/new)!
The Chronograf team has identified and is working on the following issues:
* Currently, Chronograf requires users to run Telegraf's [CPU](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/system/CPU_README.md) and [system](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/system/SYSTEM_README.md) plugins to ensure that all Apps appear on the [HOST LIST](https://github.com/influxdata/chronograf/blob/master/docs/GETTING_STARTED.md#host-list) page.
* Chronograf requires users to run Telegraf's [CPU](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/system/CPU_README.md) and [system](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/system/SYSTEM_README.md) plugins to ensure that all Apps appear on the [HOST LIST](https://github.com/influxdata/chronograf/blob/master/docs/GETTING_STARTED.md#host-list) page.
## Installation
@ -133,6 +132,9 @@ We recommend installing Chronograf using one of the [pre-built packages](https:/
* `systemctl start chronograf` if you have installed Chronograf using an official Debian or RPM package, and are running a distro with `systemd`. For example, Ubuntu 15 or later.
* `$GOPATH/bin/chronograf` if you have built Chronograf from source.
By default, chronograf runs on port `8888`.
### With Docker
To get started right away with Docker, you can pull down our latest alpha:
@ -142,11 +144,12 @@ docker pull quay.io/influxdb/chronograf:latest
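A minimal run sketch, assuming the default port `8888` and a host directory for persisting the bolt database (the in-container data path is an assumption, not stated in this README):

```
docker run -d -p 8888:8888 \
  -v /path/on/host:/var/lib/chronograf \
  quay.io/influxdb/chronograf:latest
```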
### From Source
* Chronograf works with go 1.7.x, node 6.x/7.x, and npm 3.x.
* Chronograf works with go 1.7.x, node 6.x/7.x, and yarn 0.18+.
* Chronograf requires [Kapacitor](https://github.com/influxdata/kapacitor) 1.1.x+ to create and store alerts.
1. [Install Go](https://golang.org/doc/install)
1. [Install Node and NPM](https://nodejs.org/en/download/)
1. [Install yarn](https://yarnpkg.com/docs/install)
1. [Setup your GOPATH](https://golang.org/doc/code.html#GOPATH)
1. Run `go get github.com/influxdata/chronograf`
1. Run `cd $GOPATH/src/github.com/influxdata/chronograf`

RoleClusterAccounts.js (deleted)

@ -1,133 +0,0 @@
import React, {PropTypes} from 'react';
import {Link} from 'react-router';
const RoleClusterAccounts = React.createClass({
propTypes: {
clusterID: PropTypes.string.isRequired,
users: PropTypes.arrayOf(PropTypes.string.isRequired),
onRemoveClusterAccount: PropTypes.func.isRequired,
},
getDefaultProps() {
return {users: []};
},
getInitialState() {
return {
searchText: '',
};
},
handleSearch(searchText) {
this.setState({searchText});
},
handleRemoveClusterAccount(user) {
this.props.onRemoveClusterAccount(user);
},
render() {
const users = this.props.users.filter((user) => {
const name = user.toLowerCase();
const searchText = this.state.searchText.toLowerCase();
return name.indexOf(searchText) > -1;
});
return (
<div className="panel panel-default">
<div className="panel-body cluster-accounts">
{this.props.users.length ? <SearchBar onSearch={this.handleSearch} searchText={this.state.searchText} /> : null}
{this.props.users.length ? (
<TableBody
users={users}
clusterID={this.props.clusterID}
onRemoveClusterAccount={this.handleRemoveClusterAccount}
/>
) : (
<div className="generic-empty-state">
<span className="icon alert-triangle"></span>
<h4>No Cluster Accounts found</h4>
</div>
)}
</div>
</div>
);
},
});
const TableBody = React.createClass({
propTypes: {
users: PropTypes.arrayOf(PropTypes.string.isRequired),
clusterID: PropTypes.string.isRequired,
onRemoveClusterAccount: PropTypes.func.isRequired,
},
render() {
return (
<table className="table v-center users-table">
<tbody>
<tr>
<th></th>
<th>Username</th>
<th></th>
</tr>
{this.props.users.map((user) => {
return (
<tr key={user} data-row-id={user}>
<td></td>
<td>
<Link to={`/clusters/${this.props.clusterID}/accounts/${user}`} className="btn btn-xs btn-link">
{user}
</Link>
</td>
<td>
<button
title="Remove cluster account from role"
onClick={() => this.props.onRemoveClusterAccount(user)}
type="button"
data-toggle="modal"
data-target="#removeAccountFromRoleModal"
className="btn btn-sm btn-link-danger"
>
Remove From Role
</button>
</td>
</tr>
);
})}
</tbody>
</table>
);
},
});
const SearchBar = React.createClass({
propTypes: {
onSearch: PropTypes.func.isRequired,
searchText: PropTypes.string.isRequired,
},
handleChange() {
this.props.onSearch(this._searchText.value);
},
render() {
return (
<div className="users__search-widget input-group">
<div className="input-group-addon">
<span className="icon search" aria-hidden="true"></span>
</div>
<input
type="text"
className="form-control"
placeholder="Find User"
value={this.props.searchText}
ref={(ref) => this._searchText = ref}
onChange={this.handleChange}
/>
</div>
);
},
});
export default RoleClusterAccounts;

RoleHeader.js (deleted)

@ -1,74 +0,0 @@
import React, {PropTypes} from 'react';
import {Link} from 'react-router';
const RoleHeader = React.createClass({
propTypes: {
selectedRole: PropTypes.shape(),
roles: PropTypes.arrayOf(PropTypes.shape({
name: PropTypes.string,
})).isRequired,
clusterID: PropTypes.string.isRequired,
activeTab: PropTypes.string,
},
getDefaultProps() {
return {
selectedRole: '',
};
},
render() {
return (
<div className="enterprise-header">
<div className="enterprise-header__container">
<div className="enterprise-header__left">
<div className="dropdown minimal-dropdown">
<button className="dropdown-toggle" type="button" id="roleSelection" data-toggle="dropdown">
<span className="button-text">{this.props.selectedRole.name}</span>
<span className="caret"></span>
</button>
<ul className="dropdown-menu" aria-labelledby="dropdownMenu1">
{this.props.roles.map((role) => (
<li key={role.name}>
<Link to={`/clusters/${this.props.clusterID}/roles/${encodeURIComponent(role.name)}`} className="role-option">
{role.name}
</Link>
</li>
))}
<li role="separator" className="divider"></li>
<li>
<Link to={`/clusters/${this.props.clusterID}/roles`} className="role-option">
All Roles
</Link>
</li>
</ul>
</div>
</div>
<div className="enterprise-header__right">
<button className="btn btn-sm btn-default" data-toggle="modal" data-target="#deleteRoleModal">Delete Role</button>
{this.props.activeTab === 'Permissions' ? (
<button
className="btn btn-sm btn-primary"
data-toggle="modal"
data-target="#addPermissionModal"
>
Add Permission
</button>
) : null}
{this.props.activeTab === 'Cluster Accounts' ? (
<button
className="btn btn-sm btn-primary"
data-toggle="modal"
data-target="#addClusterAccountModal"
>
Add Cluster Account
</button>
) : null}
</div>
</div>
</div>
);
},
});
export default RoleHeader;

RolePage.js (deleted)

@ -1,110 +0,0 @@
import React, {PropTypes} from 'react';
import {Tab, Tabs, TabPanel, TabPanels, TabList} from 'src/shared/components/Tabs';
import RoleHeader from '../components/RoleHeader';
import RoleClusterAccounts from '../components/RoleClusterAccounts';
import PermissionsTable from 'src/shared/components/PermissionsTable';
import AddPermissionModal from 'src/shared/components/AddPermissionModal';
import AddClusterAccountModal from '../components/modals/AddClusterAccountModal';
import DeleteRoleModal from '../components/modals/DeleteRoleModal';
const {arrayOf, string, shape, func} = PropTypes;
const TABS = ['Permissions', 'Cluster Accounts'];
const RolePage = React.createClass({
propTypes: {
// All permissions to populate the "Add permission" modal
allPermissions: arrayOf(shape({
displayName: string.isRequired,
name: string.isRequired,
description: string.isRequired,
})),
// All roles to populate the navigation dropdown
roles: arrayOf(shape({})),
role: shape({
id: string,
name: string.isRequired,
permissions: arrayOf(shape({
displayName: string.isRequired,
name: string.isRequired,
description: string.isRequired,
resources: arrayOf(string.isRequired).isRequired,
})),
}),
databases: arrayOf(string.isRequired),
clusterID: string.isRequired,
roleSlug: string.isRequired,
onRemoveClusterAccount: func.isRequired,
onDeleteRole: func.isRequired,
onAddPermission: func.isRequired,
onAddClusterAccount: func.isRequired,
onRemovePermission: func.isRequired,
},
getInitialState() {
return {activeTab: TABS[0]};
},
handleActivateTab(activeIndex) {
this.setState({activeTab: TABS[activeIndex]});
},
render() {
const {role, roles, allPermissions, databases, clusterID,
onDeleteRole, onRemoveClusterAccount, onAddPermission, onRemovePermission, onAddClusterAccount} = this.props;
return (
<div id="role-edit-page" className="js-role-edit">
<RoleHeader
roles={roles}
selectedRole={role}
clusterID={clusterID}
activeTab={this.state.activeTab}
/>
<div className="container-fluid">
<div className="row">
<div className="col-md-12">
<Tabs onSelect={this.handleActivateTab}>
<TabList>
<Tab>{TABS[0]}</Tab>
<Tab>{TABS[1]}</Tab>
</TabList>
<TabPanels>
<TabPanel>
<PermissionsTable
permissions={role.permissions}
showAddResource={true}
onRemovePermission={onRemovePermission}
/>
</TabPanel>
<TabPanel>
<RoleClusterAccounts
clusterID={clusterID}
users={role.users}
onRemoveClusterAccount={onRemoveClusterAccount}
/>
</TabPanel>
</TabPanels>
</Tabs>
</div>
</div>
</div>
<DeleteRoleModal onDeleteRole={onDeleteRole} roleName={role.name} />
<AddPermissionModal
permissions={allPermissions}
activeCluster={clusterID}
databases={databases}
onAddPermission={onAddPermission}
/>
<AddClusterAccountModal
clusterID={clusterID}
onAddClusterAccount={onAddClusterAccount}
roleClusterAccounts={role.users}
role={role}
/>
</div>
);
},
});
export default RolePage;

RolesPage.js (deleted)

@ -1,55 +0,0 @@
import React, {PropTypes} from 'react';
import CreateRoleModal from './modals/CreateRoleModal';
import RolePanels from 'src/shared/components/RolePanels';
const RolesPage = React.createClass({
propTypes: {
roles: PropTypes.arrayOf(PropTypes.shape({
name: PropTypes.string,
users: PropTypes.arrayOf(PropTypes.string),
permissions: PropTypes.arrayOf(PropTypes.shape({
name: PropTypes.string.isRequired,
displayName: PropTypes.string.isRequired,
description: PropTypes.string.isRequired,
resources: PropTypes.arrayOf(PropTypes.string.isRequired).isRequired,
})),
})).isRequired,
onCreateRole: PropTypes.func.isRequired,
clusterID: PropTypes.string.isRequired,
},
handleCreateRole(roleName) {
this.props.onCreateRole(roleName);
},
render() {
return (
<div id="role-index-page">
<div className="enterprise-header">
<div className="enterprise-header__container">
<div className="enterprise-header__left">
<h1>Access Control</h1>
</div>
<div className="enterprise-header__right">
<button className="btn btn-sm btn-primary" data-toggle="modal" data-target="#createRoleModal">Create Role</button>
</div>
</div>
</div>
<div className="container-fluid">
<div className="row">
<div className="col-md-12">
<h3 className="deluxe fake-panel-title match-search">All Roles</h3>
<div className="panel-group sub-page roles" id="role-page" role="tablist">
<RolePanels roles={this.props.roles} clusterID={this.props.clusterID} showUserCount={true} />
</div>
</div>
</div>
</div>
<CreateRoleModal onConfirm={this.handleCreateRole} />
</div>
);
},
});
export default RolesPage;

AddClusterAccountModal.js (deleted)

@ -1,110 +0,0 @@
import React, {PropTypes} from 'react';
import AddClusterAccounts from 'src/shared/components/AddClusterAccounts';
import {getClusterAccounts} from 'src/shared/apis';
const {arrayOf, func, string, shape} = PropTypes;
// Allows a user to add a cluster account to a role. Very similar to other features
// (e.g. adding cluster accounts to a web user), the main difference being that
// we'll only give users the option to select users from the active cluster instead of
// from all clusters.
const AddClusterAccountModal = React.createClass({
propTypes: {
clusterID: string.isRequired,
onAddClusterAccount: func.isRequired,
// Cluster accounts that already belong to a role so we can filter
// the list of available options.
roleClusterAccounts: arrayOf(string),
role: shape({
name: PropTypes.string,
}),
},
getDefaultProps() {
return {roleClusterAccounts: []};
},
getInitialState() {
return {
selectedAccount: null,
clusterAccounts: [],
error: null,
isFetching: true,
};
},
componentDidMount() {
getClusterAccounts(this.props.clusterID).then((resp) => {
this.setState({clusterAccounts: resp.data.users});
}).catch(() => {
this.setState({error: 'An error occurred.'});
}).then(() => {
this.setState({isFetching: false});
});
},
handleSubmit(e) {
e.preventDefault();
this.props.onAddClusterAccount(this.state.selectedAccount);
$('#addClusterAccountModal').modal('hide'); // eslint-disable-line no-undef
},
handleSelectClusterAccount({accountName}) {
this.setState({
selectedAccount: accountName,
});
},
render() {
if (this.state.isFetching) {
return null;
}
const {role} = this.props;
// Temporary hack while https://github.com/influxdata/enterprise/issues/948 is resolved.
// We want to use the /api/int/v1/clusters endpoint and just pick the
// Essentially we're taking the raw output from /user and morphing it into what the `AddClusterAccounts`
// modal expects (a cluster with fields defined by the enterprise web database)
const availableClusterAccounts = this.state.clusterAccounts.filter((account) => {
return !this.props.roleClusterAccounts.includes(account.name);
});
const cluster = {
id: 0, // Only used as a `key` prop
cluster_users: availableClusterAccounts,
cluster_id: this.props.clusterID,
};
return (
<div className="modal fade in" id="addClusterAccountModal" tabIndex="-1" role="dialog">
<div className="modal-dialog">
<div className="modal-content">
<div className="modal-header">
<button type="button" className="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">×</span>
</button>
<h4 className="modal-title">Add Cluster Account to <strong>{role.name}</strong></h4>
</div>
<form onSubmit={this.handleSubmit}>
<div className="modal-body">
<div className="alert alert-info">
<span className="icon star"></span>
<p><strong>NOTE:</strong> Cluster Accounts added to a Role inherit all the permissions associated with that Role.</p>
</div>
<AddClusterAccounts
clusters={[cluster]}
onSelectClusterAccount={this.handleSelectClusterAccount}
/>
</div>
<div className="modal-footer">
<button className="btn btn-default" data-dismiss="modal">Cancel</button>
<input disabled={!this.state.selectedAccount} className="btn btn-success" type="submit" value="Confirm" />
</div>
</form>
</div>
</div>
</div>
);
},
});
export default AddClusterAccountModal;

CreateRoleModal.js (deleted)

@ -1,51 +0,0 @@
import React, {PropTypes} from 'react';
const CreateRoleModal = React.createClass({
propTypes: {
onConfirm: PropTypes.func.isRequired,
},
handleSubmit(e) {
e.preventDefault();
if (this.roleName.value === '') {
return;
}
this.props.onConfirm(this.roleName.value);
this.roleName.value = '';
$('#createRoleModal').modal('hide'); // eslint-disable-line no-undef
},
render() {
return (
<div className="modal fade in" id="createRoleModal" tabIndex="-1" role="dialog">
<div className="modal-dialog">
<div className="modal-content">
<div className="modal-header">
<button type="button" className="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">×</span>
</button>
<h4 className="modal-title">Create Role</h4>
</div>
<form onSubmit={this.handleSubmit}>
<div className="modal-body">
<div className="form-grid padding-top">
<div className="form-group col-md-8 col-md-offset-2">
<label htmlFor="roleName" className="sr-only">Name this Role</label>
<input ref={(n) => this.roleName = n} name="roleName" type="text" className="form-control input-lg" id="roleName" placeholder="Name this Role" required={true} />
</div>
</div>
</div>
<div className="modal-footer">
<button type="button" className="btn btn-default" data-dismiss="modal">Cancel</button>
<input className="btn btn-success" type="submit" value="Create" />
</div>
</form>
</div>
</div>
</div>
);
},
});
export default CreateRoleModal;

DeleteRoleModal.js (deleted)

@ -1,40 +0,0 @@
import React, {PropTypes} from 'react';
const {string, func} = PropTypes;
const DeleteRoleModal = React.createClass({
propTypes: {
roleName: string.isRequired,
onDeleteRole: func.isRequired,
},
handleConfirm() {
$('#deleteRoleModal').modal('hide'); // eslint-disable-line no-undef
this.props.onDeleteRole();
},
render() {
return (
<div className="modal fade" id="deleteRoleModal" tabIndex="-1" role="dialog">
<div className="modal-dialog">
<div className="modal-content">
<div className="modal-header">
<button type="button" className="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">×</span>
</button>
<h4 className="modal-title">
Are you sure you want to delete <strong>{this.props.roleName}</strong>?
</h4>
</div>
<div className="modal-footer">
<button className="btn btn-default" type="button" data-dismiss="modal">Cancel</button>
<button onClick={this.handleConfirm} className="btn btn-danger" value="Delete">Delete</button>
</div>
</div>
</div>
</div>
);
},
});
export default DeleteRoleModal;

RolePageContainer.js (deleted)

@ -1,192 +0,0 @@
import React, {PropTypes} from 'react';
import {withRouter} from 'react-router';
import RolePage from '../components/RolePage';
import {showDatabases} from 'src/shared/apis/metaQuery';
import showDatabasesParser from 'shared/parsing/showDatabases';
import {buildRoles, buildAllPermissions} from 'src/shared/presenters';
import {
getRoles,
removeAccountsFromRole,
addAccountsToRole,
deleteRole,
addPermissionToRole,
removePermissionFromRole,
} from 'src/shared/apis';
import _ from 'lodash';
export const RolePageContainer = React.createClass({
propTypes: {
params: PropTypes.shape({
clusterID: PropTypes.string.isRequired,
roleSlug: PropTypes.string.isRequired,
}).isRequired,
router: React.PropTypes.shape({
push: React.PropTypes.func.isRequired,
}).isRequired,
addFlashMessage: PropTypes.func,
dataNodes: PropTypes.arrayOf(PropTypes.string.isRequired),
},
getInitialState() {
return {
role: {},
roles: [],
databases: [],
isFetching: true,
};
},
componentDidMount() {
const {clusterID, roleSlug} = this.props.params;
this.getRole(clusterID, roleSlug);
},
componentWillReceiveProps(nextProps) {
if (this.props.params.roleSlug !== nextProps.params.roleSlug) {
this.setState(this.getInitialState());
this.getRole(nextProps.params.clusterID, nextProps.params.roleSlug);
}
},
getRole(clusterID, roleName) {
this.setState({isFetching: true});
Promise.all([
getRoles(clusterID, roleName),
showDatabases(this.props.dataNodes, this.props.params.clusterID),
]).then(([rolesResp, dbResp]) => {
// Fetch databases for adding permissions/resources
const {errors, databases} = showDatabasesParser(dbResp.data);
if (errors.length) {
this.props.addFlashMessage({
type: 'error',
text: `InfluxDB error: ${errors[0]}`,
});
}
const roles = buildRoles(rolesResp.data.roles);
const activeRole = roles.find(role => role.name === roleName);
this.setState({
role: activeRole,
roles,
databases,
isFetching: false,
});
}).catch(err => {
this.setState({isFetching: false});
console.error(err.toString()); // eslint-disable-line no-console
this.props.addFlashMessage({
type: 'error',
text: `Unable to fetch role! Please try refreshing the page.`,
});
});
},
handleRemoveClusterAccount(username) {
const {clusterID, roleSlug} = this.props.params;
removeAccountsFromRole(clusterID, roleSlug, [username]).then(() => {
this.setState({
role: Object.assign({}, this.state.role, {
users: _.reject(this.state.role.users, (user) => user === username),
}),
});
this.props.addFlashMessage({
type: 'success',
text: 'Cluster account removed from role!',
});
}).catch(err => {
this.addErrorNotification(err);
});
},
handleDeleteRole() {
const {clusterID, roleSlug} = this.props.params;
deleteRole(clusterID, roleSlug).then(() => {
// TODO: add success notification when we're implementing them higher in the tree.
// Right now the notification just gets swallowed when we transition to a new route.
this.props.router.push(`/roles`);
}).catch(err => {
console.error(err.toString()); // eslint-disable-line no-console
this.addErrorNotification(err);
});
},
handleAddPermission(permission) {
const {clusterID, roleSlug} = this.props.params;
addPermissionToRole(clusterID, roleSlug, permission).then(() => {
this.getRole(clusterID, roleSlug);
}).then(() => {
this.props.addFlashMessage({
type: 'success',
text: 'Added permission to role!',
});
}).catch(err => {
this.addErrorNotification(err);
});
},
handleRemovePermission(permission) {
const {clusterID, roleSlug} = this.props.params;
removePermissionFromRole(clusterID, roleSlug, permission).then(() => {
this.setState({
role: Object.assign({}, this.state.role, {
permissions: _.reject(this.state.role.permissions, (p) => p.name === permission.name),
}),
});
this.props.addFlashMessage({
type: 'success',
text: 'Removed permission from role!',
});
}).catch(err => {
this.addErrorNotification(err);
});
},
handleAddClusterAccount(clusterAccountName) {
const {clusterID, roleSlug} = this.props.params;
addAccountsToRole(clusterID, roleSlug, [clusterAccountName]).then(() => {
this.getRole(clusterID, roleSlug);
}).then(() => {
this.props.addFlashMessage({
type: 'success',
text: 'Added cluster account to role!',
});
}).catch(err => {
this.addErrorNotification(err);
});
},
addErrorNotification(err) {
const text = _.result(err, ['response', 'data', 'error', 'toString'], 'An error occurred.');
this.props.addFlashMessage({
type: 'error',
text,
});
},
render() {
if (this.state.isFetching) {
return <div className="page-spinner" />;
}
const {clusterID, roleSlug} = this.props.params;
const {role, roles, databases} = this.state;
return (
<RolePage
role={role}
roles={roles}
allPermissions={buildAllPermissions()}
databases={databases}
roleSlug={roleSlug}
clusterID={clusterID}
onRemoveClusterAccount={this.handleRemoveClusterAccount}
onDeleteRole={this.handleDeleteRole}
onAddPermission={this.handleAddPermission}
onRemovePermission={this.handleRemovePermission}
onAddClusterAccount={this.handleAddClusterAccount}
/>
);
},
});
export default withRouter(RolePageContainer);

RolesPageContainer.js (deleted)

@ -1,71 +0,0 @@
import React, {PropTypes} from 'react';
import {getRoles, createRole} from 'src/shared/apis';
import {buildRoles} from 'src/shared/presenters';
import RolesPage from '../components/RolesPage';
import _ from 'lodash';
export const RolesPageContainer = React.createClass({
propTypes: {
params: PropTypes.shape({
clusterID: PropTypes.string.isRequired,
}).isRequired,
addFlashMessage: PropTypes.func,
},
getInitialState() {
return {
roles: [],
};
},
componentDidMount() {
this.fetchRoles();
},
fetchRoles() {
getRoles(this.props.params.clusterID).then((resp) => {
this.setState({
roles: buildRoles(resp.data.roles),
});
}).catch((err) => {
console.error(err.toString()); // eslint-disable-line no-console
this.props.addFlashMessage({
type: 'error',
text: `Unable to fetch roles! Please try refreshing the page.`,
});
});
},
handleCreateRole(roleName) {
createRole(this.props.params.clusterID, roleName)
// TODO: this should be an optimistic update, but we can't guarantee that we'll
// get an error when a user tries to make a duplicate role (we don't want to
// display a role twice). See https://github.com/influxdata/plutonium/issues/538
.then(this.fetchRoles)
.then(() => {
this.props.addFlashMessage({
type: 'success',
text: 'Role created!',
});
})
.catch((err) => {
const text = _.result(err, ['response', 'data', 'error', 'toString'], 'An error occurred.');
this.props.addFlashMessage({
type: 'error',
text,
});
});
},
render() {
return (
<RolesPage
roles={this.state.roles}
onCreateRole={this.handleCreateRole}
clusterID={this.props.params.clusterID}
/>
);
},
});
export default RolesPageContainer;

index.js (deleted)

@ -1,3 +0,0 @@
import RolesPageContainer from './containers/RolesPageContainer';
import RolePageContainer from './containers/RolePageContainer';
export {RolesPageContainer, RolePageContainer};

AddRoleModal.js (deleted)

@ -1,156 +0,0 @@
import React, {PropTypes} from 'react';
const {shape, string, arrayOf, func} = PropTypes;
const AddRoleModal = React.createClass({
propTypes: {
account: shape({
name: string.isRequired,
hash: string,
permissions: arrayOf(shape({
name: string.isRequired,
displayName: string.isRequired,
description: string.isRequired,
resources: arrayOf(string.isRequired).isRequired,
})).isRequired,
roles: arrayOf(shape({
name: string.isRequired,
users: arrayOf(string.isRequired).isRequired,
permissions: arrayOf(shape({
name: string.isRequired,
displayName: string.isRequired,
description: string.isRequired,
resources: arrayOf(string.isRequired).isRequired,
})).isRequired,
})).isRequired,
}),
roles: arrayOf(shape({
name: string.isRequired,
users: arrayOf(string.isRequired).isRequired,
permissions: arrayOf(shape({
name: string.isRequired,
displayName: string.isRequired,
description: string.isRequired,
resources: arrayOf(string.isRequired).isRequired,
})).isRequired,
})),
onAddRoleToAccount: func.isRequired,
},
getInitialState() {
return {
selectedRole: this.props.roles[0],
};
},
handleChangeRole(e) {
this.setState({selectedRole: this.props.roles.find((role) => role.name === e.target.value)});
},
handleSubmit(e) {
e.preventDefault();
$('#addRoleModal').modal('hide'); // eslint-disable-line no-undef
this.props.onAddRoleToAccount(this.state.selectedRole);
},
render() {
const {account, roles} = this.props;
const {selectedRole} = this.state;
if (!roles.length) {
return (
<div className="modal fade" id="addRoleModal" tabIndex="-1" role="dialog">
<div className="modal-dialog modal-lg">
<div className="modal-content">
<div className="modal-header">
<h4>This cluster account already belongs to all roles.</h4>
</div>
</div>
</div>
</div>
);
}
return (
<div className="modal fade" id="addRoleModal" tabIndex="-1" role="dialog">
<form onSubmit={this.handleSubmit} className="form">
<div className="modal-dialog modal-lg">
<div className="modal-content">
<div className="modal-header">
<button type="button" className="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">×</span>
</button>
<h4 className="modal-title">Add <strong>{account.name}</strong> to a new Role</h4>
</div>
<div className="modal-body">
<div className="row">
<div className="col-xs-6 col-xs-offset-3">
<label htmlFor="roles-select">Available Roles</label>
<select id="roles-select" onChange={this.handleChangeRole} value={selectedRole.name} className="form-control input-lg" name="roleName">
{roles.map((role) => {
return <option key={role.name} >{role.name}</option>;
})}
</select>
<br/>
</div>
<div className="col-xs-10 col-xs-offset-1">
<h4>Permissions</h4>
<div className="well well-white">
{this.renderRoleTable()}
</div>
</div>
</div>
</div>
<div className="modal-footer">
<button className="btn btn-default" data-dismiss="modal">Cancel</button>
<input className="btn btn-success" type="submit" value="Add to Role" />
</div>
</div>
</div>
</form>
</div>
);
},
renderRoleTable() {
return (
<table className="table permissions-table">
<tbody>
{this.renderPermissions()}
</tbody>
</table>
);
},
renderPermissions() {
const role = this.state.selectedRole;
if (!role.permissions.length) {
return (
<tr className="role-row">
<td>
<div className="generic-empty-state">
<span className="icon alert-triangle"></span>
<h4>This Role has no Permissions</h4>
</div>
</td>
</tr>
);
}
return role.permissions.map((p) => {
return (
<tr key={p.name} className="role-row">
<td>{p.displayName}</td>
<td>
{p.resources.map((resource, i) => (
<div key={i} className="pill">{resource === '' ? 'All Databases' : resource}</div>
))}
</td>
</tr>
);
});
},
});
export default AddRoleModal;

AttachWebUsers.js (deleted)

@ -1,83 +0,0 @@
import React, {PropTypes} from 'react';
const AttachWebUsers = React.createClass({
propTypes: {
users: PropTypes.arrayOf(PropTypes.shape()).isRequired,
account: PropTypes.string.isRequired,
onConfirm: PropTypes.func.isRequired,
},
getInitialState() {
return {
selectedUsers: [],
};
},
handleConfirm() {
$('#addWebUsers').modal('hide'); // eslint-disable-line no-undef
this.props.onConfirm(this.state.selectedUsers);
// uncheck all the boxes?
},
handleSelection(e) {
const checked = e.target.checked;
const id = parseInt(e.target.dataset.id, 10);
const user = this.props.users.find((u) => u.id === id);
const newSelectedUsers = this.state.selectedUsers.slice(0);
if (checked) {
newSelectedUsers.push(user);
} else {
const userIndex = newSelectedUsers.findIndex(u => u.id === id);
newSelectedUsers.splice(userIndex, 1);
}
this.setState({selectedUsers: newSelectedUsers});
},
render() {
return (
<div className="modal fade" id="addWebUsers" tabIndex="-1" role="dialog">
<div className="modal-dialog">
<div className="modal-content">
<div className="modal-header">
<button type="button" className="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">×</span>
</button>
<h4 className="modal-title">
Link Web Users to <strong>{this.props.account}</strong>
</h4>
</div>
<div className="row">
<div className="col-xs-10 col-xs-offset-1">
<h4>Web Users</h4>
<div className="well well-white">
<table className="table v-center">
<tbody>
{ // TODO: style this and make it select / collect users
this.props.users.map((u) => {
return (
<tr key={u.name}>
<td><input onChange={this.handleSelection} data-id={u.id} type="checkbox" /></td>
<td>{u.name}</td>
</tr>
);
})
}
</tbody>
</table>
</div>
</div>
</div>
<div className="modal-footer">
<button className="btn btn-default" type="button" data-dismiss="modal">Cancel</button>
<button onClick={this.handleConfirm} className="btn btn-success">Link Users</button>
</div>
</div>
</div>
</div>
);
},
});
export default AttachWebUsers;

View File

@ -1,93 +0,0 @@
import React, {PropTypes} from 'react';
const {string, func, bool} = PropTypes;
const ClusterAccountDetails = React.createClass({
propTypes: {
name: string.isRequired,
onUpdatePassword: func.isRequired,
showDelete: bool,
},
getDefaultProps() {
return {
showDelete: true,
};
},
getInitialState() {
return {
passwordsMatch: true,
};
},
handleSubmit(e) {
e.preventDefault();
const password = this.password.value;
const confirmation = this.confirmation.value;
const passwordsMatch = password === confirmation;
if (!passwordsMatch) {
return this.setState({passwordsMatch});
}
this.props.onUpdatePassword(password);
},
render() {
return (
<div id="settings-page">
<div className="panel panel-default">
<div className="panel-body">
<form onSubmit={this.handleSubmit}>
{this.renderPasswordMismatch()}
<div className="form-group col-sm-12">
<label htmlFor="name">Name</label>
<input disabled={true} className="form-control input-lg" type="text" id="name" name="name" value={this.props.name}/>
</div>
<div className="form-group col-sm-6">
<label htmlFor="password">Password</label>
<input ref={(password) => this.password = password} className="form-control input-lg" type="password" id="password" name="password"/>
</div>
<div className="form-group col-sm-6">
<label htmlFor="password-confirmation">Confirm Password</label>
<input ref={(confirmation) => this.confirmation = confirmation} className="form-control input-lg" type="password" id="password-confirmation" name="confirmation"/>
</div>
<div className="form-group col-sm-6 col-sm-offset-3">
<button className="btn btn-next btn-success btn-lg btn-block" type="submit">Reset Password</button>
</div>
</form>
</div>
</div>
{this.props.showDelete ? (
<div className="panel panel-default delete-account">
<div className="panel-body">
<div className="col-sm-3">
<button
className="btn btn-next btn-danger btn-lg"
type="submit"
data-toggle="modal"
data-target="#deleteClusterAccountModal">
Delete Account
</button>
</div>
<div className="col-sm-9">
<h4>Delete this cluster account</h4>
<p>Beware! We won't be able to recover a cluster account once you've deleted it.</p>
</div>
</div>
</div>
) : null}
</div>
);
},
renderPasswordMismatch() {
if (this.state.passwordsMatch) {
return null;
}
return <div>Passwords do not match</div>;
},
});
export default ClusterAccountDetails;

View File

@ -1,238 +0,0 @@
import React, {PropTypes} from 'react';
import RolePanels from 'src/shared/components/RolePanels';
import PermissionsTable from 'src/shared/components/PermissionsTable';
import UsersTable from 'shared/components/UsersTable';
import ClusterAccountDetails from '../components/ClusterAccountDetails';
import AddRoleModal from '../components/AddRoleModal';
import AddPermissionModal from 'shared/components/AddPermissionModal';
import AttachWebUsers from '../components/AttachWebUsersModal';
import RemoveAccountFromRoleModal from '../components/RemoveAccountFromRoleModal';
import RemoveWebUserModal from '../components/RemoveUserFromAccountModal';
import DeleteClusterAccountModal from '../components/DeleteClusterAccountModal';
import {Tab, TabList, TabPanels, TabPanel, Tabs} from 'shared/components/Tabs';
const {shape, string, func, arrayOf, number, bool} = PropTypes;
const TABS = ['Roles', 'Permissions', 'Account Details', 'Web Users'];
export const ClusterAccountEditPage = React.createClass({
propTypes: {
// All permissions to populate the "Add permission" modal
allPermissions: arrayOf(shape({
displayName: string.isRequired,
name: string.isRequired,
description: string.isRequired,
})),
clusterID: string.isRequired,
accountID: string.isRequired,
account: shape({
name: string.isRequired,
hash: string,
permissions: arrayOf(shape({
name: string.isRequired,
displayName: string.isRequired,
description: string.isRequired,
resources: arrayOf(string.isRequired).isRequired,
})).isRequired,
roles: arrayOf(shape({
name: string.isRequired,
users: arrayOf(string.isRequired).isRequired,
permissions: arrayOf(shape({
name: string.isRequired,
displayName: string.isRequired,
description: string.isRequired,
resources: arrayOf(string.isRequired).isRequired,
})).isRequired,
})).isRequired,
}),
roles: arrayOf(shape({
name: string.isRequired,
users: arrayOf(string.isRequired).isRequired,
permissions: arrayOf(shape({
name: string.isRequired,
displayName: string.isRequired,
description: string.isRequired,
resources: arrayOf(string.isRequired).isRequired,
})).isRequired,
})),
databases: arrayOf(string.isRequired),
assignedWebUsers: arrayOf(shape({
id: number.isRequired,
name: string.isRequired,
email: string.isRequired,
admin: bool.isRequired,
})),
unassignedWebUsers: arrayOf(shape({
id: number.isRequired,
name: string.isRequired,
email: string.isRequired,
admin: bool.isRequired,
})),
me: shape(),
onUpdatePassword: func.isRequired,
onRemoveAccountFromRole: func.isRequired,
onRemoveWebUserFromAccount: func.isRequired,
onAddRoleToAccount: func.isRequired,
onAddPermission: func.isRequired,
onRemovePermission: func.isRequired,
onAddWebUsersToAccount: func.isRequired,
onDeleteAccount: func.isRequired,
},
getInitialState() {
return {
roleToRemove: {},
userToRemove: {},
activeTab: TABS[0],
};
},
handleActivateTab(activeIndex) {
this.setState({activeTab: TABS[activeIndex]});
},
handleRemoveAccountFromRole(role) {
this.setState({roleToRemove: role});
},
handleUserToRemove(userToRemove) {
this.setState({userToRemove});
},
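// Roles the account does not yet belong to; they populate the Add Role modal below.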
getUnassignedRoles() {
return this.props.roles.filter(role => {
return !this.props.account.roles.map(r => r.name).includes(role.name);
});
},
render() {
const {clusterID, accountID, account, databases, onAddPermission, me,
assignedWebUsers, unassignedWebUsers, onAddWebUsersToAccount, onRemovePermission, onDeleteAccount} = this.props;
if (!account || !Object.keys(me).length) {
return null; // TODO: 404?
}
return (
<div id="user-edit-page">
<div className="enterprise-header">
<div className="enterprise-header__container">
<div className="enterprise-header__left">
<h1>
{accountID}&nbsp;<span className="label label-warning">Cluster Account</span>
</h1>
</div>
{this.renderActions()}
</div>
</div>
<div className="container-fluid">
<div className="row">
<div className="col-md-12">
<Tabs onSelect={this.handleActivateTab}>
<TabList>
{TABS.map(tab => <Tab key={tab}>{tab}</Tab>)}
</TabList>
<TabPanels>
<TabPanel>
<RolePanels
roles={account.roles}
clusterID={clusterID}
onRemoveAccountFromRole={this.handleRemoveAccountFromRole}
/>
</TabPanel>
<TabPanel>
<PermissionsTable permissions={account.permissions} onRemovePermission={onRemovePermission} />
</TabPanel>
<TabPanel>
<ClusterAccountDetails
showDelete={me.cluster_links.every(cl => cl.cluster_user !== account.name)}
name={account.name}
onUpdatePassword={this.props.onUpdatePassword}
/>
</TabPanel>
<TabPanel>
<div className="panel panel-default">
<div className="panel-body">
<UsersTable
onUserToDelete={this.handleUserToRemove}
activeCluster={clusterID}
users={assignedWebUsers}
me={me}
deleteText="Unlink" />
</div>
</div>
</TabPanel>
</TabPanels>
</Tabs>
</div>
</div>
</div>
<AddPermissionModal
activeCluster={clusterID}
permissions={this.props.allPermissions}
databases={databases}
onAddPermission={onAddPermission}
/>
<RemoveAccountFromRoleModal
roleName={this.state.roleToRemove.name}
onConfirm={() => this.props.onRemoveAccountFromRole(this.state.roleToRemove)}
/>
<AddRoleModal
account={account}
roles={this.getUnassignedRoles()}
onAddRoleToAccount={this.props.onAddRoleToAccount}
/>
<RemoveWebUserModal
account={accountID}
onRemoveWebUser={() => this.props.onRemoveWebUserFromAccount(this.state.userToRemove)}
user={this.state.userToRemove.name}
/>
<AttachWebUsers
account={accountID}
users={unassignedWebUsers}
onConfirm={onAddWebUsersToAccount}
/>
<DeleteClusterAccountModal
account={account}
webUsers={assignedWebUsers}
onConfirm={onDeleteAccount}
/>
</div>
);
},
renderActions() {
const {activeTab} = this.state;
return (
<div className="enterprise-header__right">
{activeTab === 'Roles' ? (
<button
className="btn btn-sm btn-primary"
data-toggle="modal"
data-target="#addRoleModal">
Add to Role
</button>
) : null}
{activeTab === 'Permissions' ? (
<button
className="btn btn-sm btn-primary"
data-toggle="modal"
data-target="#addPermissionModal">
Add Permissions
</button>
) : null}
{activeTab === 'Web Users' ? (
<button
className="btn btn-sm btn-primary"
data-toggle="modal"
data-target="#addWebUsers">
Link Web Users
</button>
) : null}
</div>
);
},
});
export default ClusterAccountEditPage;

View File

@ -1,48 +0,0 @@
import React, {PropTypes} from 'react';
import PageHeader from '../components/PageHeader';
import ClusterAccountsTable from '../components/ClusterAccountsTable';
const ClusterAccountsPage = React.createClass({
propTypes: {
clusterID: PropTypes.string.isRequired,
users: PropTypes.arrayOf(PropTypes.shape({
name: PropTypes.string,
roles: PropTypes.arrayOf(PropTypes.shape({
name: PropTypes.string.isRequired,
})),
})),
roles: PropTypes.arrayOf(PropTypes.shape({
name: PropTypes.string,
})),
onDeleteAccount: PropTypes.func.isRequired,
onCreateAccount: PropTypes.func.isRequired,
me: PropTypes.shape(),
},
render() {
const {clusterID, users, roles, onCreateAccount, me} = this.props;
return (
<div id="cluster-accounts-page" data-cluster-id={clusterID}>
<PageHeader
roles={roles}
activeCluster={clusterID}
onCreateAccount={onCreateAccount} />
<div className="container-fluid">
<div className="row">
<div className="col-md-12">
<ClusterAccountsTable
users={users}
clusterID={clusterID}
onDeleteAccount={this.props.onDeleteAccount}
me={me}
/>
</div>
</div>
</div>
</div>
);
},
});
export default ClusterAccountsPage;

View File

@ -1,161 +0,0 @@
import React, {PropTypes} from 'react';
import {Link} from 'react-router';
const ClusterAccountsTable = React.createClass({
propTypes: {
clusterID: PropTypes.string.isRequired,
users: PropTypes.arrayOf(PropTypes.shape({
name: PropTypes.string.isRequired,
roles: PropTypes.arrayOf(PropTypes.shape({
name: PropTypes.string.isRequired,
})).isRequired,
})),
onDeleteAccount: PropTypes.func.isRequired,
me: PropTypes.shape(),
},
getInitialState() {
return {
searchText: '',
};
},
handleSearch(searchText) {
this.setState({searchText});
},
handleDeleteAccount(user) {
this.props.onDeleteAccount(user);
},
render() {
const users = this.props.users.filter((user) => {
const name = user.name.toLowerCase();
const searchText = this.state.searchText.toLowerCase();
return name.indexOf(searchText) > -1;
});
return (
<div className="panel panel-minimal">
<div className="panel-heading u-flex u-jc-space-between u-ai-center">
<h2 className="panel-title">Cluster Accounts</h2>
<SearchBar onSearch={this.handleSearch} searchText={this.state.searchText} />
</div>
<div className="panel-body">
<TableBody
users={users}
clusterID={this.props.clusterID}
onDeleteAccount={this.handleDeleteAccount}
me={this.props.me}
/>
</div>
</div>
);
},
});
const TableBody = React.createClass({
propTypes: {
users: PropTypes.arrayOf(PropTypes.shape({
name: PropTypes.string.isRequired,
roles: PropTypes.arrayOf(PropTypes.shape({
name: PropTypes.string.isRequired,
})).isRequired,
})),
clusterID: PropTypes.string.isRequired,
onDeleteAccount: PropTypes.func.isRequired,
me: PropTypes.shape(),
},
render() {
if (!this.props.users.length) {
return (
<div className="generic-empty-state">
<span className="icon alert-triangle"></span>
<h4>No Cluster Accounts</h4>
</div>
);
}
return (
<table className="table v-center users-table">
<tbody>
<tr>
<th>Username</th>
<th>Roles</th>
<th></th>
</tr>
{this.props.users.map((user) => {
return (
<tr key={user.name} data-test="user-row">
<td>
<Link to={`/clusters/${this.props.clusterID}/accounts/${user.name}`} >
{user.name}
</Link>
</td>
<td>{user.roles.map((r) => r.name).join(', ')}</td>
<td>
{this.renderDeleteAccount(user)}
</td>
</tr>
);
})}
</tbody>
</table>
);
},
renderDeleteAccount(clusterAccount) {
const currentUserIsAssociatedWithAccount = this.props.me.cluster_links.some(cl => (
cl.cluster_user === clusterAccount.name
));
const title = currentUserIsAssociatedWithAccount ?
'You can\'t remove a cluster account that you are associated with.'
: 'Delete cluster account';
return (
<button
onClick={() => this.props.onDeleteAccount(clusterAccount)}
title={title}
type="button"
data-toggle="modal"
data-target="#deleteClusterAccountModal"
className="btn btn-sm btn-link"
disabled={currentUserIsAssociatedWithAccount}>
Delete
</button>
);
},
});
const SearchBar = React.createClass({
propTypes: {
onSearch: PropTypes.func.isRequired,
searchText: PropTypes.string.isRequired,
},
handleChange() {
this.props.onSearch(this._searchText.value);
},
render() {
return (
<div className="users__search-widget input-group">
<div className="input-group-addon">
<span className="icon search" aria-hidden="true"></span>
</div>
<input
type="text"
className="form-control"
placeholder="Find User"
value={this.props.searchText}
ref={(ref) => this._searchText = ref}
onChange={this.handleChange}
/>
</div>
);
},
});
export default ClusterAccountsTable;

View File

@ -1,68 +0,0 @@
import React, {PropTypes} from 'react';
const CreateAccountModal = React.createClass({
propTypes: {
onCreateAccount: PropTypes.func.isRequired,
roles: PropTypes.arrayOf(PropTypes.shape({
name: PropTypes.string,
})),
},
handleConfirm(e) {
e.preventDefault();
const name = this.name.value;
const password = this.password.value;
const role = this.accountRole.value;
$('#createAccountModal').modal('hide'); // eslint-disable-line no-undef
this.props.onCreateAccount(name, password, role);
},
render() {
return (
<div className="modal fade" id="createAccountModal" tabIndex="-1" role="dialog">
<div className="modal-dialog">
<div className="modal-content">
<div className="modal-header">
<button type="button" className="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">×</span>
</button>
<h4 className="modal-title">Create Cluster Account</h4>
</div>
<form onSubmit={this.handleConfirm} data-test="cluster-account-form">
<div className="modal-body">
<div className="row">
<div className="form-group col-xs-6">
<label htmlFor="account-name">Username</label>
<input ref={(r) => this.name = r} className="form-control" type="text" id="account-name" data-test="account-name" required={true} />
</div>
<div className="form-group col-xs-6">
<label htmlFor="account-password">Password</label>
<input ref={(r) => this.password = r} className="form-control" type="password" id="account-password" data-test="account-password" required={true} />
</div>
</div>
<div className="row">
<div className="form-group col-xs-6">
<label htmlFor="account-role">Role</label>
<select ref={(r) => this.accountRole = r} id="account-role" className="form-control input-lg">
{this.props.roles.map((r, i) => {
return <option key={i}>{r.name}</option>;
})}
</select>
</div>
</div>
</div>
<div className="modal-footer">
<button className="btn btn-default" type="button" data-dismiss="modal">Cancel</button>
<button className="btn btn-danger js-delete-users" type="submit">Create Account</button>
</div>
</form>
</div>
</div>
</div>
);
},
});
export default CreateAccountModal;

View File

@ -1,51 +0,0 @@
import React, {PropTypes} from 'react';
const DeleteClusterAccountModal = React.createClass({
propTypes: {
onConfirm: PropTypes.func.isRequired,
account: PropTypes.shape({
name: PropTypes.string,
}),
webUsers: PropTypes.arrayOf(PropTypes.shape()), // TODO
},
handleConfirm() {
this.props.onConfirm();
},
render() {
return (
<div className="modal fade" id="deleteClusterAccountModal" tabIndex="-1" role="dialog">
<div className="modal-dialog">
<div className="modal-content">
<div className="modal-header">
<button type="button" className="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">×</span>
</button>
<h4 className="modal-title">{`Are you sure you want to delete ${this.props.account && this.props.account.name}?`}</h4>
</div>
{this.props.webUsers.length ? (
<div className="modal-body">
<h5>
The following web users are associated with this cluster account and will need to be reassigned
to another cluster account to continue using many of EnterpriseWeb's features:
</h5>
<ul>
{this.props.webUsers.map(webUser => {
return <li key={webUser.id}>{webUser.email}</li>;
})}
</ul>
</div>
) : null}
<div className="modal-footer">
<button className="btn btn-default" type="button" data-dismiss="modal">Cancel</button>
<button className="btn btn-danger js-delete-users" onClick={this.handleConfirm} type="button" data-dismiss="modal">Confirm</button>
</div>
</div>
</div>
</div>
);
},
});
export default DeleteClusterAccountModal;

View File

@ -1,37 +0,0 @@
import React, {PropTypes} from 'react';
const DeleteUserModal = React.createClass({
propTypes: {
onConfirm: PropTypes.func.isRequired,
user: PropTypes.shape({
name: PropTypes.string,
}),
},
handleConfirm() {
this.props.onConfirm(this.props.user);
},
render() {
return (
<div className="modal fade" id="deleteUserModal" tabIndex="-1" role="dialog">
<div className="modal-dialog">
<div className="modal-content">
<div className="modal-header">
<button type="button" className="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">×</span>
</button>
<h4 className="modal-title">{this.props.user ? `Are you sure you want to delete ${this.props.user.name}?` : 'Are you sure?'}</h4>
</div>
<div className="modal-footer">
<button className="btn btn-default" type="button" data-dismiss="modal">Cancel</button>
<button className="btn btn-danger js-delete-users" onClick={this.handleConfirm} type="button" data-dismiss="modal">Delete</button>
</div>
</div>
</div>
</div>
);
},
});
export default DeleteUserModal;

View File

@ -1,37 +0,0 @@
import React, {PropTypes} from 'react';
import CreateAccountModal from './CreateAccountModal';
const Header = React.createClass({
propTypes: {
onCreateAccount: PropTypes.func,
roles: PropTypes.arrayOf(PropTypes.shape({
name: PropTypes.string,
})),
},
render() {
const {roles, onCreateAccount} = this.props;
return (
<div id="cluster-accounts-page">
<div className="enterprise-header">
<div className="enterprise-header__container">
<div className="enterprise-header__left">
<h1>
Access Control
</h1>
</div>
<div className="enterprise-header__right">
<button className="btn btn-sm btn-primary" data-toggle="modal" data-target="#createAccountModal" data-test="create-cluster-account">
Create Cluster Account
</button>
</div>
</div>
</div>
<CreateAccountModal roles={roles} onCreateAccount={onCreateAccount} />
</div>
);
},
});
export default Header;

View File

@ -1,38 +0,0 @@
import React, {PropTypes} from 'react';
const RemoveAccountFromRoleModal = React.createClass({
propTypes: {
roleName: PropTypes.string,
onConfirm: PropTypes.func.isRequired,
},
handleConfirm() {
$('#removeAccountFromRoleModal').modal('hide'); // eslint-disable-line no-undef
this.props.onConfirm();
},
render() {
return (
<div className="modal fade" id="removeAccountFromRoleModal" tabIndex="-1" role="dialog">
<div className="modal-dialog">
<div className="modal-content">
<div className="modal-header">
<button type="button" className="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">×</span>
</button>
<h4 className="modal-title">
Are you sure you want to remove <strong>{this.props.roleName}</strong> from this cluster account?
</h4>
</div>
<div className="modal-footer">
<button className="btn btn-default" type="button" data-dismiss="modal">Cancel</button>
<button onClick={this.handleConfirm} className="btn btn-danger" value="Remove">Remove</button>
</div>
</div>
</div>
</div>
);
},
});
export default RemoveAccountFromRoleModal;

View File

@ -1,43 +0,0 @@
import React, {PropTypes} from 'react';
const {string, func} = PropTypes;
const RemoveWebUserModal = React.createClass({
propTypes: {
user: string,
onRemoveWebUser: func.isRequired,
account: string.isRequired,
},
handleConfirm() {
$('#deleteUsersModal').modal('hide'); // eslint-disable-line no-undef
this.props.onRemoveWebUser();
},
render() {
return (
<div className="modal fade" id="deleteUsersModal" tabIndex="-1" role="dialog">
<div className="modal-dialog">
<div className="modal-content">
<div className="modal-header">
<button type="button" className="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">×</span>
</button>
<h4 className="modal-title">
Are you sure you want to remove
<strong> {this.props.user} </strong> from
<strong> {this.props.account} </strong> ?
</h4>
</div>
<div className="modal-footer">
<button className="btn btn-default" type="button" data-dismiss="modal">Cancel</button>
<button onClick={this.handleConfirm} className="btn btn-danger">Remove</button>
</div>
</div>
</div>
</div>
);
},
});
export default RemoveWebUserModal;

View File

@ -1,278 +0,0 @@
import React, {PropTypes} from 'react';
import _ from 'lodash';
import {withRouter} from 'react-router';
import ClusterAccountEditPage from '../components/ClusterAccountEditPage';
import {buildClusterAccounts, buildRoles, buildAllPermissions, buildPermission} from 'src/shared/presenters';
import {showDatabases} from 'src/shared/apis/metaQuery';
import showDatabasesParser from 'shared/parsing/showDatabases';
import {
addPermissionToAccount,
removePermissionFromAccount,
deleteUserClusterLink,
getUserClusterLinks,
getClusterAccount,
getWebUsers,
getRoles,
addWebUsersToClusterAccount,
updateClusterAccountPassword,
removeAccountsFromRole,
addAccountsToRole,
meShow,
deleteClusterAccount,
getWebUsersByClusterAccount,
} from 'shared/apis';
const {shape, string, func, arrayOf} = PropTypes;
export const ClusterAccountContainer = React.createClass({
propTypes: {
dataNodes: arrayOf(string.isRequired),
params: shape({
clusterID: string.isRequired,
accountID: string.isRequired,
}).isRequired,
router: shape({
push: func.isRequired,
}).isRequired,
addFlashMessage: func,
},
getInitialState() {
return {
account: null,
roles: [],
databases: [],
assignedWebUsers: [],
unassignedWebUsers: [],
me: {},
};
},
componentDidMount() {
const {accountID, clusterID} = this.props.params;
const {dataNodes} = this.props;
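// Fetch the cluster account, roles, databases, linked web users, all web users, and the current user in parallel.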
Promise.all([
getClusterAccount(clusterID, accountID),
getRoles(clusterID),
showDatabases(dataNodes, clusterID),
getWebUsersByClusterAccount(clusterID, accountID),
getWebUsers(clusterID),
meShow(),
]).then(([
{data: {users}},
{data: {roles}},
{data: dbs},
{data: assignedWebUsers},
{data: allUsers},
{data: me},
]) => {
const account = buildClusterAccounts(users, roles)[0];
const presentedRoles = buildRoles(roles);
this.setState({
account,
assignedWebUsers,
roles: presentedRoles,
databases: showDatabasesParser(dbs).databases,
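// Web users not yet linked to this cluster account; they populate the "Link Web Users" modal.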
unassignedWebUsers: _.differenceBy(allUsers, assignedWebUsers, (u) => u.id),
me,
});
}).catch(err => {
this.props.addFlashMessage({
type: 'error',
text: `An error occurred. Please try refreshing the page. ${err.message}`,
});
});
},
handleUpdatePassword(password) {
updateClusterAccountPassword(this.props.params.clusterID, this.state.account.name, password).then(() => {
this.props.addFlashMessage({
type: 'success',
text: 'Password successfully updated :)',
});
}).catch(() => {
this.props.addFlashMessage({
type: 'error',
text: 'There was a problem updating password :(',
});
});
},
handleAddPermission({name, resources}) {
const {clusterID} = this.props.params;
const {account} = this.state;
addPermissionToAccount(clusterID, account.name, name, resources).then(() => {
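// Keep the existing list if the account already holds this permission; otherwise append the new one.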
const newPermissions = account.permissions.map(p => p.name).includes(name) ?
account.permissions
: account.permissions.concat(buildPermission(name, resources));
this.setState({
account: Object.assign({}, account, {permissions: newPermissions}),
}, () => {
this.props.addFlashMessage({
type: 'success',
text: 'Permission successfully added :)',
});
});
}).catch(() => {
this.props.addFlashMessage({
type: 'error',
text: 'There was a problem adding the permission :(',
});
});
},
handleRemovePermission(permission) {
const {clusterID} = this.props.params;
const {account} = this.state;
removePermissionFromAccount(clusterID, account.name, permission).then(() => {
this.setState({
account: Object.assign({}, this.state.account, {
permissions: _.reject(this.state.account.permissions, (p) => p.name === permission.name),
}),
});
this.props.addFlashMessage({
type: 'success',
text: 'Removed permission from cluster account!',
});
}).catch(err => {
const text = _.result(err, ['response', 'data', 'error'], 'An error occurred.');
this.props.addFlashMessage({
type: 'error',
text,
});
});
},
handleRemoveAccountFromRole(role) {
const {clusterID, accountID} = this.props.params;
removeAccountsFromRole(clusterID, role.name, [accountID]).then(() => {
this.setState({
account: Object.assign({}, this.state.account, {
roles: this.state.account.roles.filter(r => r.name !== role.name),
}),
});
this.props.addFlashMessage({
type: 'success',
text: 'Cluster account removed from role!',
});
}).catch(err => {
this.props.addFlashMessage({
type: 'error',
text: `An error occurred. ${err.message}.`,
});
});
},
handleRemoveWebUserFromAccount(user) {
const {clusterID} = this.props.params;
// TODO: update this process to just include a call to
// deleteUserClusterLinkByUserID which is currently in development
getUserClusterLinks(clusterID).then(({data}) => {
const clusterLinkToDelete = data.find((cl) => cl.cluster_id === clusterID && cl.user_id === user.id);
deleteUserClusterLink(clusterID, clusterLinkToDelete.id).then(() => {
this.setState({assignedWebUsers: this.state.assignedWebUsers.filter(u => u.id !== user.id)});
this.props.addFlashMessage({
type: 'success',
text: `${user.name} removed from this cluster account`,
});
}).catch((err) => {
console.error(err); // eslint-disable-line no-console
this.props.addFlashMessage({
type: 'error',
text: 'Something went wrong while removing this user',
});
});
});
},
handleAddRoleToAccount(role) {
const {clusterID, accountID} = this.props.params;
addAccountsToRole(clusterID, role.name, [accountID]).then(() => {
this.setState({
account: Object.assign({}, this.state.account, {
roles: this.state.account.roles.concat(role),
}),
});
this.props.addFlashMessage({
type: 'success',
text: 'Cluster account added to role!',
});
}).catch(err => {
this.props.addFlashMessage({
type: 'error',
text: `An error occurred. ${err.message}.`,
});
});
},
handleAddWebUsersToAccount(users) {
const {clusterID, accountID} = this.props.params;
const userIDs = users.map((u) => {
return {
user_id: u.id,
};
});
addWebUsersToClusterAccount(clusterID, accountID, userIDs).then(() => {
this.setState({assignedWebUsers: this.state.assignedWebUsers.concat(users)});
this.props.addFlashMessage({
type: 'success',
text: `Web users added to ${accountID}`,
});
}).catch((err) => {
console.error(err); // eslint-disable-line no-console
this.props.addFlashMessage({
type: 'error',
text: `Something went wrong`,
});
});
},
handleDeleteAccount() {
const {clusterID, accountID} = this.props.params;
deleteClusterAccount(clusterID, accountID).then(() => {
this.props.router.push(`/accounts`);
this.props.addFlashMessage({
type: 'success',
text: 'Cluster account deleted!',
});
}).catch(err => {
this.props.addFlashMessage({
type: 'error',
text: `An error occurred. ${err.message}.`,
});
});
},
render() {
const {clusterID, accountID} = this.props.params;
const {account, databases, roles, me} = this.state;
return (
<ClusterAccountEditPage
clusterID={clusterID}
accountID={accountID}
databases={databases}
account={account}
roles={roles}
assignedWebUsers={this.state.assignedWebUsers}
unassignedWebUsers={this.state.unassignedWebUsers}
allPermissions={buildAllPermissions()}
me={me}
onAddPermission={this.handleAddPermission}
onRemovePermission={this.handleRemovePermission}
onUpdatePassword={this.handleUpdatePassword}
onRemoveAccountFromRole={this.handleRemoveAccountFromRole}
onRemoveWebUserFromAccount={this.handleRemoveWebUserFromAccount}
onAddRoleToAccount={this.handleAddRoleToAccount}
onAddWebUsersToAccount={this.handleAddWebUsersToAccount}
onDeleteAccount={this.handleDeleteAccount}
/>
);
},
});
export default withRouter(ClusterAccountContainer);

View File

@ -1,157 +0,0 @@
import React, {PropTypes} from 'react';
import ClusterAccountsPage from '../components/ClusterAccountsPage';
import DeleteClusterAccountModal from '../components/DeleteClusterAccountModal';
import {buildClusterAccounts} from 'src/shared/presenters';
import {
getClusterAccounts,
getRoles,
deleteClusterAccount,
getWebUsersByClusterAccount,
meShow,
addUsersToRole,
createClusterAccount,
} from 'src/shared/apis';
import _ from 'lodash';
export const ClusterAccountsPageContainer = React.createClass({
propTypes: {
params: PropTypes.shape({
clusterID: PropTypes.string.isRequired,
}).isRequired,
addFlashMessage: PropTypes.func.isRequired,
},
getInitialState() {
return {
users: [],
roles: [],
// List of associated web users to display when deleting a cluster account.
webUsers: [],
// This is an unfortunate solution to using bootstrap to open modals.
// The modal will have already been rendered in this component by the
// time a user chooses "Remove" from one of the rows in the users table.
userToDelete: null,
};
},
componentDidMount() {
const {clusterID} = this.props.params;
Promise.all([
getClusterAccounts(clusterID),
getRoles(clusterID),
meShow(),
]).then(([accountsResp, rolesResp, me]) => {
this.setState({
users: buildClusterAccounts(accountsResp.data.users, rolesResp.data.roles),
roles: rolesResp.data.roles,
me: me.data,
});
});
},
// Ensures the modal will remove the correct user. TODO: our own modals
handleDeleteAccount(account) {
getWebUsersByClusterAccount(this.props.params.clusterID, account.name).then(resp => {
this.setState({
webUsers: resp.data,
userToDelete: account,
});
}).catch(err => {
console.error(err.toString()); // eslint-disable-line no-console
this.props.addFlashMessage({
type: 'error',
text: 'An error occurred while trying to remove a cluster account.',
});
});
},
handleDeleteConfirm() {
const {name} = this.state.userToDelete;
deleteClusterAccount(this.props.params.clusterID, name).then(() => {
this.props.addFlashMessage({
type: 'success',
text: 'Cluster account deleted!',
});
this.setState({
users: _.reject(this.state.users, (user) => user.name === name),
});
}).catch((err) => {
console.error(err.toString()); // eslint-disable-line no-console
this.props.addFlashMessage({
type: 'error',
text: 'An error occurred while trying to remove a cluster account.',
});
});
},
handleCreateAccount(name, password, roleName) {
const {clusterID} = this.props.params;
const {users, roles} = this.state;
createClusterAccount(clusterID, name, password).then(() => {
addUsersToRole(clusterID, roleName, [name]).then(() => {
this.props.addFlashMessage({
type: 'success',
text: `User ${name} added with the ${roleName} role`,
});
// add user to role
const newRoles = roles.map((role) => {
if (role.name !== roleName) {
return role;
}
return Object.assign({}, role, {
users: role.users ? role.users.concat(name) : [name],
});
});
const newUser = buildClusterAccounts([{name}], newRoles);
this.setState({
roles: newRoles,
users: users.concat(newUser),
});
}).catch((err) => {
console.error(err.toString()); // eslint-disable-line no-console
this.props.addFlashMessage({
type: 'error',
text: `An error occurred while assigning ${name} to the ${roleName} role`,
});
});
}).catch((err) => {
const msg = _.get(err, 'response.data.error', '');
console.error(err.toString()); // eslint-disable-line no-console
this.props.addFlashMessage({
type: 'error',
text: `An error occurred creating user ${name}. ${msg}`,
});
});
},
render() {
const {clusterID} = this.props.params;
const {users, me, roles} = this.state;
return (
<div>
<ClusterAccountsPage
users={users}
roles={roles}
clusterID={clusterID}
onDeleteAccount={this.handleDeleteAccount}
onCreateAccount={this.handleCreateAccount}
me={me}
/>
<DeleteClusterAccountModal
account={this.state.userToDelete}
webUsers={this.state.webUsers}
onConfirm={this.handleDeleteConfirm}
/>
</div>
);
},
});
export default ClusterAccountsPageContainer;

View File

@ -1,4 +0,0 @@
import ClusterAccountsPage from './containers/ClusterAccountsPageContainer';
import ClusterAccountPage from './containers/ClusterAccountContainer';
export {ClusterAccountsPage, ClusterAccountPage};

View File

@ -1,97 +0,0 @@
import React, {PropTypes} from 'react';
const {arrayOf, number, func} = PropTypes;
const CreateDatabase = React.createClass({
propTypes: {
replicationFactors: arrayOf(number.isRequired).isRequired,
onCreateDatabase: func.isRequired,
},
getInitialState() {
return {
rpName: '',
database: '',
duration: '24h',
replicaN: '1',
};
},
handleRpNameChange(e) {
this.setState({rpName: e.target.value});
},
handleDatabaseNameChange(e) {
this.setState({database: e.target.value});
},
handleSelectDuration(e) {
this.setState({duration: e.target.value});
},
handleSelectReplicaN(e) {
this.setState({replicaN: e.target.value});
},
handleSubmit() {
const {rpName, database, duration, replicaN} = this.state;
this.props.onCreateDatabase({rpName, database, duration, replicaN});
},
render() {
const {database, rpName, duration, replicaN} = this.state;
return (
<div className="modal fade" id="dbModal" tabIndex="-1" role="dialog" aria-labelledby="myModalLabel">
<form data-remote="true" onSubmit={this.handleSubmit} >
<div className="modal-dialog" role="document">
<div className="modal-content">
<div className="modal-header">
<button type="button" className="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">×</span>
</button>
<h4 className="modal-title" id="myModalLabel">Create Database</h4>
</div>
<div className="modal-body">
<div id="form-errors"></div>
<div className="form-group col-sm-12">
<label htmlFor="name">Database Name</label>
<input onChange={this.handleDatabaseNameChange} value={database} required={true} className="form-control input-lg" type="text" id="name" name="name"/>
</div>
<div className="form-group col-sm-6">
<label htmlFor="retention-policy">Retention Policy Name</label>
<input onChange={this.handleRpNameChange} value={rpName} required={true} className="form-control input-lg" type="text" id="retention-policy" name="retention-policy"/>
</div>
<div className="form-group col-sm-3">
<label htmlFor="duration" data-toggle="tooltip" data-placement="top" title="How long InfluxDB stores data">Duration</label>
<select onChange={this.handleSelectDuration} defaultValue={duration} className="form-control input-lg" name="duration" id="exampleSelect" required={true}>
<option value="24h">1 Day</option>
<option value="168h">7 Days</option>
<option value="720h">30 Days</option>
<option value="8670h">365 Days</option>
</select>
</div>
<div className="form-group col-sm-3">
<label htmlFor="replication-factor" data-toggle="tooltip" data-placement="top" title="How many copies of the data InfluxDB stores">Replication Factor</label>
<select onChange={this.handleSelectReplicaN} defaultValue={replicaN} className="form-control input-lg" name="replication-factor" id="replication-factor" required={true}>
{
this.props.replicationFactors.map((rp) => {
return <option key={rp}>{rp}</option>;
})
}
</select>
</div>
</div>
<div className="modal-footer">
<button type="button" className="btn btn-default" data-dismiss="modal">Cancel</button>
<button type="submit" className="btn btn-success">Create</button>
</div>
</div>
</div>
</form>
</div>
);
},
});
export default CreateDatabase;

View File

@ -1,141 +0,0 @@
import React, {PropTypes} from 'react';
import {Link} from 'react-router';
import CreateDatabase from './CreateDatabase';
const {number, string, shape, arrayOf, func} = PropTypes;
const DatabaseManager = React.createClass({
propTypes: {
database: string.isRequired,
databases: arrayOf(shape({})).isRequired,
dbStats: shape({
diskBytes: string.isRequired,
numMeasurements: number.isRequired,
numSeries: number.isRequired,
}),
users: arrayOf(shape({
id: number,
name: string.isRequired,
roles: string.isRequired,
})).isRequired,
queries: arrayOf(string).isRequired,
replicationFactors: arrayOf(number).isRequired,
onClickDatabase: func.isRequired,
onCreateDatabase: func.isRequired,
},
render() {
const {database, databases, dbStats, queries, users,
replicationFactors, onClickDatabase, onCreateDatabase} = this.props;
return (
<div className="page-wrapper database-manager">
<div className="enterprise-header">
<div className="enterprise-header__container">
<div className="enterprise-header__left">
<div className="dropdown minimal-dropdown">
<button className="dropdown-toggle" type="button" id="dropdownMenu1" data-toggle="dropdown" aria-haspopup="true" aria-expanded="true">
<span className="button-text">{database}</span>
<span className="caret"></span>
</button>
<ul className="dropdown-menu" aria-labelledby="dropdownMenu1">
{
databases.map((db) => {
return <li onClick={() => onClickDatabase(db.Name)} key={db.Name}><Link to={`/databases/manager/${db.Name}`}>{db.Name}</Link></li>;
})
}
</ul>
</div>
</div>
<div className="enterprise-header__right">
<button className="btn btn-sm btn-primary" data-toggle="modal" data-target="#dbModal">Create Database</button>
</div>
</div>
</div>
<div className="container-fluid">
<div className="row">
<div className="col-sm-12 col-md-4">
<div className="panel panel-minimal">
<div className="panel-heading">
<h2 className="panel-title">Database Stats</h2>
</div>
<div className="panel-body">
<div className="db-manager-stats">
<div>
<h4>{dbStats.diskBytes}</h4>
<p>On Disk</p>
</div>
<div>
<h4>{dbStats.numMeasurements}</h4>
<p>Measurements</p>
</div>
<div>
<h4>{dbStats.numSeries}</h4>
<p>Series</p>
</div>
</div>
</div>
</div>
</div>
<div className="col-sm-12 col-md-8">
<div className="panel panel-minimal">
<div className="panel-heading">
<h2 className="panel-title">Users</h2>
</div>
<div className="panel-body">
<table className="table v-center margin-bottom-zero">
<thead>
<tr>
<th>Name</th>
<th>Role</th>
</tr>
</thead>
<tbody>
{
users.map((user) => {
return (
<tr key={user.name}>
<td><Link title="Manage Access" to={`/accounts/${user.name}`}>{user.name}</Link></td>
<td>{user.roles}</td>
</tr>
);
})
}
</tbody>
</table>
</div>
</div>
</div>
</div>
<div className="row">
<div className="col-md-12">
<div className="panel panel-minimal">
<div className="panel-heading">
<h2 className="panel-title">Continuous Queries Associated</h2>
</div>
<div className="panel-body continuous-queries">
{
queries.length ? queries.map((query, i) => <pre key={i}><code>{query}</code></pre>) :
(
<div className="continuous-queries__empty">
<img src="/assets/images/continuous-query-empty.svg" />
<h4>No queries to display</h4>
</div>
)
}
</div>
</div>
</div>
</div>
</div>
<CreateDatabase onCreateDatabase={onCreateDatabase} replicationFactors={replicationFactors}/>
</div>
);
},
});
export default DatabaseManager;

View File

@ -1,77 +0,0 @@
import React, {PropTypes} from 'react';
import {getDatabaseManager, createDatabase} from 'shared/apis/index';
import DatabaseManager from '../components/DatabaseManager';
const {shape, string} = PropTypes;
const DatabaseManagerApp = React.createClass({
propTypes: {
params: shape({
clusterID: string.isRequired,
database: string.isRequired,
}).isRequired,
},
componentDidMount() {
this.getData();
},
getInitialState() {
return {
databases: [],
dbStats: {
diskBytes: '',
numMeasurements: 0,
numSeries: 0,
},
users: [],
queries: [],
replicationFactors: [],
selectedDatabase: null,
};
},
getData(selectedDatabase) {
const {clusterID, database} = this.props.params;
getDatabaseManager(clusterID, selectedDatabase || database)
.then(({data}) => {
this.setState({
databases: data.databases,
dbStats: data.databaseStats,
users: data.users,
queries: data.queries || [],
replicationFactors: data.replicationFactors,
});
});
},
handleClickDatabase(selectedDatabase) {
this.getData(selectedDatabase);
this.setState({selectedDatabase});
},
handleCreateDatabase(db) {
createDatabase(db);
},
render() {
const {databases, dbStats, queries, users, replicationFactors} = this.state;
const {clusterID, database} = this.props.params;
return (
<DatabaseManager
clusterID={clusterID}
database={database}
databases={databases}
dbStats={dbStats}
queries={queries}
users={users}
replicationFactors={replicationFactors}
onClickDatabase={this.handleClickDatabase}
onCreateDatabase={this.handleCreateDatabase}
/>
);
},
});
export default DatabaseManagerApp;

View File

@ -1,2 +0,0 @@
import DatabaseManagerApp from './containers/DatabaseManagerApp';
export default DatabaseManagerApp;

View File

@ -1,187 +0,0 @@
import React, {PropTypes} from 'react';
import flatten from 'lodash/flatten';
import reject from 'lodash/reject';
import uniqBy from 'lodash/uniqBy';
import {
showDatabases,
showQueries,
killQuery,
} from 'shared/apis/metaQuery';
import showDatabasesParser from 'shared/parsing/showDatabases';
import showQueriesParser from 'shared/parsing/showQueries';
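// Coarse ranking of query duration strings (e.g. "30s", "1m30s", "2h5m10s");
// a higher magnitude indicates a longer-running query.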
const times = [
{test: /ns/, magnitude: 0},
{test: /^\d*u/, magnitude: 1},
{test: /^\d*ms/, magnitude: 2},
{test: /^\d*s/, magnitude: 3},
{test: /^\d*m\d*s/, magnitude: 4},
{test: /^\d*h\d*m\d*s/, magnitude: 5},
];
export const QueriesPage = React.createClass({
propTypes: {
dataNodes: PropTypes.arrayOf(PropTypes.string.isRequired).isRequired,
addFlashMessage: PropTypes.func,
params: PropTypes.shape({
clusterID: PropTypes.string.isRequired,
}),
},
getInitialState() {
return {
queries: [],
queryIDToKill: null,
};
},
componentDidMount() {
this.updateQueries();
const updateInterval = 5000;
this.intervalID = setInterval(this.updateQueries, updateInterval);
},
componentWillUnmount() {
clearInterval(this.intervalID);
},
updateQueries() {
const {dataNodes, addFlashMessage, params} = this.props;
showDatabases(dataNodes, params.clusterID).then((resp) => {
const {databases, errors} = showDatabasesParser(resp.data);
if (errors.length) {
errors.forEach((message) => addFlashMessage({type: 'error', text: message}));
return;
}
const fetches = databases.map((db) => showQueries(dataNodes, db, params.clusterID));
Promise.all(fetches).then((queryResponses) => {
const allQueries = [];
queryResponses.forEach((queryResponse) => {
const result = showQueriesParser(queryResponse.data);
if (result.errors.length) {
result.errors.forEach((message) => this.props.addFlashMessage({type: 'error', text: message}));
}
allQueries.push(...result.queries);
});
const queries = uniqBy(flatten(allQueries), (q) => q.id);
// sorting queries by magnitude, so generally longer queries will appear atop the list
const sortedQueries = queries.sort((a, b) => {
const aTime = times.find((t) => a.duration.match(t.test));
const bTime = times.find((t) => b.duration.match(t.test));
// A comparator must return a number; sort by magnitude descending so longer-running queries come first.
return bTime.magnitude - aTime.magnitude;
});
this.setState({
queries: sortedQueries,
});
});
});
},
render() {
const {queries} = this.state;
return (
<div>
<div className="enterprise-header">
<div className="enterprise-header__container">
<div className="enterprise-header__left">
<h1>
Queries
</h1>
</div>
</div>
</div>
<div className="container-fluid">
<div className="row">
<div className="col-md-12">
<div className="panel panel-minimal">
<div className="panel-body">
<table className="table v-center">
<thead>
<tr>
<th>Database</th>
<th>Query</th>
<th>Running</th>
<th></th>
</tr>
</thead>
<tbody>
{queries.map((q) => {
return (
<tr key={q.id}>
<td>{q.database}</td>
<td><code>{q.query}</code></td>
<td>{q.duration}</td>
<td className="text-right">
<button className="btn btn-xs btn-link-danger" onClick={this.handleKillQuery} data-toggle="modal" data-query-id={q.id} data-target="#killModal">
Kill
</button>
</td>
</tr>
);
})}
</tbody>
</table>
</div>
</div>
</div>
</div>
</div>
<div className="modal fade" id="killModal" tabIndex="-1" role="dialog" aria-labelledby="myModalLabel">
<div className="modal-dialog" role="document">
<div className="modal-content">
<div className="modal-header">
<button type="button" className="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">×</span>
</button>
<h4 className="modal-title" id="myModalLabel">Are you sure you want to kill this query?</h4>
</div>
<div className="modal-footer">
<button type="button" className="btn btn-default" data-dismiss="modal">No</button>
<button type="button" className="btn btn-danger" data-dismiss="modal" onClick={this.handleConfirmKillQuery}>Yes, kill it!</button>
</div>
</div>
</div>
</div>
</div>
);
},
handleKillQuery(e) {
e.stopPropagation();
const id = e.target.dataset.queryId;
this.setState({
queryIDToKill: id,
});
},
handleConfirmKillQuery() {
const {queryIDToKill} = this.state;
if (queryIDToKill === null) {
return;
}
// optimistic update
const {queries} = this.state;
this.setState({
queries: reject(queries, (q) => +q.id === +queryIDToKill),
});
// kill the query over http
const {dataNodes, params} = this.props;
killQuery(dataNodes, queryIDToKill, params.clusterID).then(() => {
this.setState({
queryIDToKill: null,
});
});
},
});
export default QueriesPage;

View File

@ -1,2 +0,0 @@
import QueriesPage from './containers/QueriesPage';
export default QueriesPage;

View File

@ -1,70 +0,0 @@
import React, {PropTypes} from 'react';
export default React.createClass({
propTypes: {
onCreate: PropTypes.func.isRequired,
dataNodes: PropTypes.arrayOf(PropTypes.string).isRequired,
},
render() {
return (
<div className="modal fade" id="rpModal" tabIndex={-1} role="dialog" aria-labelledby="myModalLabel">
<div className="modal-dialog" role="document">
<div className="modal-content">
<div className="modal-header">
<button type="button" className="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">×</span>
</button>
<h4 className="modal-title" id="myModalLabel">Create Retention Policy</h4>
</div>
<form onSubmit={this.handleSubmit}>
<div className="modal-body">
<div className="form-group col-md-12">
<label htmlFor="rpName">Name Retention Pollicy</label>
<input ref={(r) => this.rpName = r} type="text" className="form-control" id="rpName" placeholder="Name" required={true}/>
</div>
<div className="form-group col-md-6">
<label htmlFor="durationSelect">Select Duration</label>
<select ref={(r) => this.duration = r} className="form-control" id="durationSelect">
<option value="1d">1 Day</option>
<option value="7d">7 Days</option>
<option value="30d">30 Days</option>
<option value="365d">365 Days</option>
</select>
</div>
<div className="form-group col-md-6">
<label htmlFor="replicationFactor">Replication Factor</label>
<select ref={(r) => this.replicationFactor = r} className="form-control" id="replicationFactor">
{
this.props.dataNodes.map((node, i) => <option key={node}>{i + 1}</option>)
}
</select>
</div>
</div>
<div className="modal-footer">
<button type="button" className="btn btn-default" data-dismiss="modal">Cancel</button>
<button ref="submitButton" type="submit" className="btn btn-success">Create</button>
</div>
</form>
</div>
</div>
</div>
);
},
handleSubmit(e) {
e.preventDefault();
const rpName = this.rpName.value;
const duration = this.duration.value;
const replicationFactor = this.replicationFactor.value;
// Not using data-dimiss="modal" becuase it doesn't play well with HTML5 validations.
$('#rpModal').modal('hide'); // eslint-disable-line no-undef
this.props.onCreate({
rpName,
duration,
replicationFactor,
});
},
});

View File

@ -1,74 +0,0 @@
import React, {PropTypes} from 'react';
const DropShardModal = React.createClass({
propTypes: {
onConfirm: PropTypes.func.isRequired,
},
getInitialState() {
return {error: null, text: ''};
},
componentDidMount() {
// Using this unfortunate hack because this modal is still using bootstrap,
// and this component is never removed once being mounted -- meaning it doesn't
// start with a new initial state when it gets closed/reopened. A better
// long term solution is just to handle modals in ReactLand.
$('#dropShardModal').on('hide.bs.modal', () => { // eslint-disable-line no-undef
this.setState({error: null, text: ''});
});
},
handleConfirmationTextChange(e) {
this.setState({text: e.target.value});
},
render() {
return (
<div className="modal fade" id="dropShardModal" tabIndex={-1} role="dialog" aria-labelledby="myModalLabel">
<div className="modal-dialog" role="document">
<div className="modal-content">
<div className="modal-header">
<button type="button" className="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">×</span>
</button>
<h4 className="modal-title" id="myModalLabel">Are you sure?</h4>
</div>
<form onSubmit={this.handleSubmit}>
<div className="modal-body">
{this.state.error ?
<div className="alert alert-danger" role="alert">{this.state.error}</div>
: null}
<div className="form-group col-md-12">
<label htmlFor="confirmation">All of the data on this shard will be removed permanently. Please Type 'delete' to confirm.</label>
<input onChange={this.handleConfirmationTextChange} value={this.state.text} type="text" className="form-control" id="confirmation" />
</div>
</div>
<div className="modal-footer">
<button type="button" className="btn btn-default" data-dismiss="modal">Cancel</button>
<button ref="submitButton" type="submit" className="btn btn-danger">Delete</button>
</div>
</form>
</div>
</div>
</div>
);
},
handleSubmit(e) {
e.preventDefault();
if (this.state.text.toLowerCase() !== 'delete') {
this.setState({error: "Please confirm by typing 'delete'"});
return;
}
// Hiding the modal directly because we have an extra confirmation step,
// bootstrap will close the modal immediately after clicking 'Delete'.
$('#dropShardModal').modal('hide'); // eslint-disable-line no-undef
this.props.onConfirm();
},
});
export default DropShardModal;

View File

@ -1,34 +0,0 @@
import React, {PropTypes} from 'react';
export default React.createClass({
propTypes: {
databases: PropTypes.array.isRequired, // eslint-disable-line react/forbid-prop-types
selectedDatabase: PropTypes.string.isRequired,
onChooseDatabase: PropTypes.func.isRequired,
},
render() {
return (
<div className="enterprise-header">
<div className="enterprise-header__container">
<div className="enterprise-header__left">
<div className="dropdown minimal-dropdown">
<button className="dropdown-toggle" type="button" id="dropdownMenu1" data-toggle="dropdown" aria-haspopup="true" aria-expanded="true">
{this.props.selectedDatabase}
<span className="caret" />
</button>
<ul className="dropdown-menu" aria-labelledby="dropdownMenu1">
{this.props.databases.map((d) => {
return <li key={d} onClick={() => this.props.onChooseDatabase(d)}><a href="#">{d}</a></li>;
})}
</ul>
</div>
</div>
<div className="enterprise-header__right">
<button className="btn btn-sm btn-primary" data-toggle="modal" data-target="#rpModal">Create Retention Policy</button>
</div>
</div>
</div>
);
},
});

View File

@ -1,63 +0,0 @@
import React, {PropTypes} from 'react';
import RetentionPolicyCard from './RetentionPolicyCard';
const {string, arrayOf, shape, func} = PropTypes;
export default React.createClass({
propTypes: {
retentionPolicies: arrayOf(shape()).isRequired,
shardDiskUsage: shape(),
shards: shape().isRequired,
selectedDatabase: string.isRequired,
onDropShard: func.isRequired,
},
render() {
const {shardDiskUsage, retentionPolicies, onDropShard, shards, selectedDatabase} = this.props;
return (
<div className="row">
<div className="col-md-12">
<h3 className="deluxe fake-panel-title">Retention Policies</h3>
<div className="panel-group retention-policies" id="accordion" role="tablist" aria-multiselectable="true">
{retentionPolicies.map((rp, i) => {
const ss = shards[`${selectedDatabase}..${rp.name}`] || [];
/**
* We use the `/show-shards` endpoint as 'source of truth' for active shards in the cluster.
* Disk usage has to be fetched directly from InfluxDB, which means we'll have stale shard
* data (the results will often include disk usage for shards that have been removed). This
* ensures we only use active shards when we calculate disk usage.
*/
const newDiskUsage = {};
ss.forEach((shard) => {
(shardDiskUsage[shard.shardId] || []).forEach((d) => {
if (!shard.owners.map((o) => o.tcpAddr).includes(d.nodeID)) {
return;
}
if (newDiskUsage[shard.shardId]) {
newDiskUsage[shard.shardId].push(d);
} else {
newDiskUsage[shard.shardId] = [d];
}
});
});
return (
<RetentionPolicyCard
key={rp.name}
onDelete={() => {}}
rp={rp}
shards={ss}
index={i}
shardDiskUsage={newDiskUsage}
onDropShard={onDropShard}
/>
);
})}
</div>
</div>
</div>
);
},
});

View File

@ -1,140 +0,0 @@
import React, {PropTypes} from 'react';
import classNames from 'classnames';
import moment from 'moment';
import DropShardModal from './DropShardModal';
import {formatBytes, formatRPDuration} from 'utils/formatting';
/* eslint-disable no-magic-numbers */
const {func, string, shape, number, bool, arrayOf, objectOf} = PropTypes;
export default React.createClass({
propTypes: {
onDropShard: func.isRequired,
rp: shape({
name: string.isRequired,
duration: string.isRequired,
isDefault: bool.isRequired,
replication: number,
shardGroupDuration: string,
}).isRequired,
shards: arrayOf(shape({
database: string.isRequired,
startTime: string.isRequired,
endTime: string.isRequired,
retentionPolicy: string.isRequired,
shardId: string.isRequired,
shardGroup: string.isRequired,
})),
shardDiskUsage: objectOf(
arrayOf(
shape({
diskUsage: number.isRequired,
nodeID: string.isRequired,
}),
),
),
index: number, // Required to make bootstrap JS work.
},
formatTimestamp(timestamp) {
return moment(timestamp).format('YYYY-MM-DD:H');
},
render() {
const {index, rp, shards, shardDiskUsage} = this.props;
const diskUsage = shards.reduce((sum, shard) => {
// Check if we don't have any disk usage for a shard. This happens most often
// with a new cluster before any disk usage has a chance to be recorded.
if (!shardDiskUsage[shard.shardId]) {
return sum;
}
return sum + shardDiskUsage[shard.shardId].reduce((shardSum, shardInfo) => {
return shardSum + shardInfo.diskUsage;
}, 0);
}, 0);
return (
<div className="panel panel-default">
<div className="panel-heading" role="tab" id={`heading${index}`}>
<h4 className="panel-title js-rp-card-header u-flex u-ai-center u-jc-space-between">
<a className={index === 0 ? "" : "collapsed"} role="button" data-toggle="collapse" data-parent="#accordion" href={`#collapse${index}`} aria-expanded="true" aria-controls={`collapse${index}`}>
<span className="caret" /> {rp.name}
</a>
<span>
<p className="rp-duration">{formatRPDuration(rp.duration)} {rp.isDefault ? '(Default)' : null}</p>
<p className="rp-disk-usage">{formatBytes(diskUsage)}</p>
</span>
</h4>
</div>
<div id={`collapse${index}`} className={classNames("panel-collapse collapse", {'in': index === 0})} role="tabpanel" aria-labelledby={`heading${index}`}>
<div className="panel-body">
{this.renderShardTable()}
</div>
</div>
<DropShardModal onConfirm={this.handleDropShard} />
</div>
);
},
renderShardTable() {
const {shards, shardDiskUsage} = this.props;
if (!shards.length) {
return <div>No shards.</div>;
}
return (
<table className="table shard-table">
<thead>
<tr>
<th>Shard ID</th>
<th>Time Range</th>
<th>Disk Usage</th>
<th>Nodes</th>
<th />
</tr>
</thead>
<tbody>
{shards.map((shard, index) => {
const diskUsages = shardDiskUsage[shard.shardId] || [];
return (
<tr key={index}>
<td>{shard.shardId}</td>
<td>{this.formatTimestamp(shard.startTime)} {this.formatTimestamp(shard.endTime)}</td>
<td>
{diskUsages.length ? diskUsages.map((s) => {
const diskUsageForShard = formatBytes(s.diskUsage) || 'n/a';
return <p key={s.nodeID}>{diskUsageForShard}</p>;
})
: 'n/a'}
</td>
<td>
{diskUsages.length ? diskUsages.map((s) => <p key={s.nodeID}>{s.nodeID}</p>) : 'n/a'}
</td>
<td className="text-right">
<button data-toggle="modal" data-target="#dropShardModal" onClick={() => this.openConfirmationModal(shard)} className="btn btn-danger btn-sm" title="Drop Shard"><span className="icon trash js-drop-shard" /></button>
</td>
</tr>
);
})}
</tbody>
</table>
);
},
openConfirmationModal(shard) {
this.setState({shardIdToDelete: shard.shardId});
},
handleDropShard() {
const shard = this.props.shards.filter((s) => s.shardId === this.state.shardIdToDelete)[0];
this.props.onDropShard(shard);
this.setState({shardIdToDelete: null});
},
});
/* eslint-enable no-magic-numbers */

View File

@ -1,212 +0,0 @@
import React, {PropTypes} from 'react';
import _ from 'lodash';
import RetentionPoliciesHeader from '../components/RetentionPoliciesHeader';
import RetentionPoliciesList from '../components/RetentionPoliciesList';
import CreateRetentionPolicyModal from '../components/CreateRetentionPolicyModal';
import {
showDatabases,
showRetentionPolicies,
showShards,
createRetentionPolicy,
dropShard,
} from 'shared/apis/metaQuery';
import {fetchShardDiskBytesForDatabase} from 'shared/apis/stats';
import parseShowDatabases from 'shared/parsing/showDatabases';
import parseShowRetentionPolicies from 'shared/parsing/showRetentionPolicies';
import parseShowShards from 'shared/parsing/showShards';
import {diskBytesFromShardForDatabase} from 'shared/parsing/diskBytes';
const RetentionPoliciesApp = React.createClass({
propTypes: {
dataNodes: PropTypes.arrayOf(PropTypes.string.isRequired).isRequired,
params: PropTypes.shape({
clusterID: PropTypes.string.isRequired,
}).isRequired,
addFlashMessage: PropTypes.func,
},
getInitialState() {
return {
// Simple list of databases
databases: [],
// A list of retention policy objects for the currently selected database
retentionPolicies: [],
/**
* Disk usage/node locations for all shards across a database, keyed by shard ID.
* e.g. if shard 10 was replicated across two data nodes:
* {
* 10: [
* {nodeID: 'localhost:8088', diskUsage: 12312414},
* {nodeID: 'localhost:8188', diskUsage: 12312414},
* ],
* ...
* }
*/
shardDiskUsage: {},
// All shards across all databases, keyed by database and retention policy. e.g.:
// 'telegraf..default': [
// <shard>,
// <shard>
// ]
shards: {},
selectedDatabase: null,
isFetching: true,
};
},
componentDidMount() {
showDatabases(this.props.dataNodes, this.props.params.clusterID).then((resp) => {
const result = parseShowDatabases(resp.data);
if (!result.databases.length) {
this.props.addFlashMessage({
text: 'No databases found',
type: 'error',
});
return;
}
const selectedDatabase = result.databases[0];
this.setState({
databases: result.databases,
selectedDatabase,
});
this.fetchInfoForDatabase(selectedDatabase);
}).catch((err) => {
console.error(err); // eslint-disable-line no-console
this.addGenericErrorMessage(err.toString());
});
},
fetchInfoForDatabase(database) {
this.setState({isFetching: true});
Promise.all([
this.fetchRetentionPoliciesAndShards(database),
this.fetchDiskUsage(database),
]).then(([rps, shardDiskUsage]) => {
const {retentionPolicies, shards} = rps;
this.setState({
shardDiskUsage,
retentionPolicies,
shards,
});
}).catch((err) => {
console.error(err); // eslint-disable-line no-console
this.addGenericErrorMessage(err.toString());
}).then(() => {
this.setState({isFetching: false});
});
},
addGenericErrorMessage(errMessage) {
const defaultMsg = 'Something went wrong! Try refreshing your browser and email support@influxdata.com if the problem persists.';
this.props.addFlashMessage({
text: errMessage || defaultMsg,
type: 'error',
});
},
fetchRetentionPoliciesAndShards(database) {
const shared = {};
return showRetentionPolicies(this.props.dataNodes, database, this.props.params.clusterID).then((resp) => {
shared.retentionPolicies = resp.data.results.map(parseShowRetentionPolicies);
return showShards(this.props.params.clusterID);
}).then((resp) => {
const shards = parseShowShards(resp.data);
return {shards, retentionPolicies: shared.retentionPolicies[0].retentionPolicies};
});
},
fetchDiskUsage(database) {
const {dataNodes, params: {clusterID}} = this.props;
return fetchShardDiskBytesForDatabase(dataNodes, database, clusterID).then((resp) => {
return diskBytesFromShardForDatabase(resp.data).shardData;
});
},
handleChooseDatabase(database) {
this.setState({selectedDatabase: database, retentionPolicies: []});
this.fetchInfoForDatabase(database);
},
handleCreateRetentionPolicy({rpName, duration, replicationFactor}) {
const params = {
database: this.state.selectedDatabase,
host: this.props.dataNodes,
rpName,
duration,
replicationFactor,
clusterID: this.props.params.clusterID,
};
createRetentionPolicy(params).then(() => {
this.props.addFlashMessage({
text: 'Retention policy created successfully!',
type: 'success',
});
this.fetchInfoForDatabase(this.state.selectedDatabase);
}).catch((err) => {
this.addGenericErrorMessage(err.toString());
});
},
render() {
if (this.state.isFetching) {
return <div className="page-spinner" />;
}
const {selectedDatabase, shards, shardDiskUsage} = this.state;
return (
<div className="page-wrapper retention-policies">
<RetentionPoliciesHeader
databases={this.state.databases}
selectedDatabase={selectedDatabase}
onChooseDatabase={this.handleChooseDatabase}
/>
<div className="container-fluid">
<RetentionPoliciesList
retentionPolicies={this.state.retentionPolicies}
selectedDatabase={selectedDatabase}
shards={shards}
shardDiskUsage={shardDiskUsage}
onDropShard={this.handleDropShard}
/>
</div>
<CreateRetentionPolicyModal onCreate={this.handleCreateRetentionPolicy} dataNodes={this.props.dataNodes} />
</div>
);
},
handleDropShard(shard) {
const {dataNodes, params} = this.props;
dropShard(dataNodes, shard, params.clusterID).then(() => {
const key = `${this.state.selectedDatabase}..${shard.retentionPolicy}`;
const shardsForRP = this.state.shards[key];
const nextShards = _.reject(shardsForRP, (s) => s.shardId === shard.shardId);
const shards = Object.assign({}, this.state.shards);
shards[key] = nextShards;
this.props.addFlashMessage({
text: `Dropped shard ${shard.shardId}`,
type: 'success',
});
this.setState({shards});
}).catch(() => {
this.addGenericErrorMessage();
});
},
});
export default RetentionPoliciesApp;

View File

@ -1,2 +0,0 @@
import RetentionPoliciesApp from './containers/RetentionPoliciesApp';
export default RetentionPoliciesApp;

View File

@ -1,79 +0,0 @@
import React, {PropTypes} from 'react';
const CreateClusterAdmin = React.createClass({
propTypes: {
onCreateClusterAdmin: PropTypes.func.isRequired,
},
getInitialState() {
return {
passwordsMatch: true,
};
},
handleSubmit(e) {
e.preventDefault();
const username = this.username.value;
const password = this.password.value;
const confirmation = this.confirmation.value;
if (password !== confirmation) {
return this.setState({
passwordsMatch: false,
});
}
this.props.onCreateClusterAdmin(username, password);
},
render() {
const {passwordsMatch} = this.state;
return (
<div id="signup-page">
<div className="container">
<div className="row">
<div className="col-md-8 col-md-offset-2">
<div className="panel panel-summer">
<div className="panel-heading text-center">
<div className="signup-progress-circle step2of3">2/3</div>
<h2 className="deluxe">Welcome to InfluxEnterprise</h2>
</div>
<div className="panel-body">
{passwordsMatch ? null : this.renderValidationError()}
<h4>Create a Cluster Administrator account.</h4>
<p>Users assigned to the Cluster Administrator account have all cluster permissions.</p>
<form onSubmit={this.handleSubmit}>
<div className="form-group col-sm-12">
<label htmlFor="username">Account Name</label>
<input ref={(username) => this.username = username} className="form-control input-lg" type="text" id="username" required={true} placeholder="Ex. ClusterAdmin"/>
</div>
<div className="form-group col-sm-6">
<label htmlFor="password">Password</label>
<input ref={(pass) => this.password = pass} className="form-control input-lg" type="password" id="password" required={true}/>
</div>
<div className="form-group col-sm-6">
<label htmlFor="confirmation">Confirm Password</label>
<input ref={(conf) => this.confirmation = conf} className="form-control input-lg" type="password" id="confirmation" required={true} />
</div>
<div className="form-group col-sm-6 col-sm-offset-3">
<button className="btn btn-lg btn-success btn-block" type="submit">Next</button>
</div>
</form>
</div>
</div>
</div>
</div>
</div>
</div>
);
},
renderValidationError() {
return <div>Your passwords don't match! Please make sure they match.</div>;
},
});
export default CreateClusterAdmin;

View File

@ -1,125 +0,0 @@
import React, {PropTypes} from 'react';
import ClusterAccounts from 'shared/components/AddClusterAccounts';
import {getClusters} from 'shared/apis';
const CreateWebAdmin = React.createClass({
propTypes: {
onCreateWebAdmin: PropTypes.func.isRequired,
},
getInitialState() {
return {
clusters: [],
clusterLinks: {},
passwordsMatch: true,
};
},
componentDidMount() {
getClusters().then(({data}) => {
this.setState({clusters: data});
});
},
handleSubmit(e) {
e.preventDefault();
const firstName = this.firstName.value;
const lastName = this.lastName.value;
const email = this.email.value;
const password = this.password.value;
const confirmation = this.confirmation.value;
if (password !== confirmation) {
return this.setState({passwordsMatch: false});
}
this.props.onCreateWebAdmin(firstName, lastName, email, password, confirmation, this.getClusterLinks());
},
handleSelectClusterAccount({clusterID, accountName}) {
const clusterLinks = Object.assign({}, this.state.clusterLinks, {
[clusterID]: accountName,
});
this.setState({
clusterLinks,
});
},
getClusterLinks() {
return Object.keys(this.state.clusterLinks).map((clusterID) => {
return {
cluster_id: clusterID,
cluster_user: this.state.clusterLinks[clusterID],
};
});
},
render() {
const {clusters, passwordsMatch, clusterLinks} = this.state;
return (
<div id="signup-page">
<div className="container">
<div className="row">
<div className="col-md-8 col-md-offset-2">
<div className="panel panel-summer">
<div className="panel-heading text-center">
<div className="signup-progress-circle step3of3">3/3</div>
<h2 className="deluxe">Welcome to InfluxEnterprise</h2>
</div>
<div className="panel-body">
{passwordsMatch ? null : this.renderValidationError()}
<h4>Create a Web Administrator user.</h4>
<h5>A Web Administrator has all web console permissions.</h5>
<p>
After filling out the form with your name, email, and password, assign yourself to the Cluster Administrator account that you
created in the previous step. This ensures that you have all web console permissions and all cluster permissions.
</p>
<form onSubmit={this.handleSubmit}>
<div className="row">
<div className="form-group col-sm-6">
<label htmlFor="first-name">First Name</label>
<input ref={(firstName) => this.firstName = firstName} className="form-control input-lg" type="text" id="first-name" required={true} />
</div>
<div className="form-group col-sm-6">
<label htmlFor="last-name">Last Name</label>
<input ref={(lastName) => this.lastName = lastName} className="form-control input-lg" type="text" id="last-name" required={true} />
</div>
</div>
<div className="row">
<div className="form-group col-sm-12">
<label htmlFor="email">Email</label>
<input ref={(email) => this.email = email} className="form-control input-lg" type="text" id="email" required={true} />
</div>
</div>
<div className="row">
<div className="form-group col-sm-6">
<label htmlFor="password">Password</label>
<input ref={(password) => this.password = password} className="form-control input-lg" type="password" id="password" required={true} />
</div>
<div className="form-group col-sm-6">
<label htmlFor="confirmation">Confirm Password</label>
<input ref={(confirmation) => this.confirmation = confirmation} className="form-control input-lg" type="password" id="confirmation" required={true} />
</div>
</div>
{clusters.length ? <ClusterAccounts clusters={clusters} onSelectClusterAccount={this.handleSelectClusterAccount} /> : null}
<div className="form-group col-sm-6 col-sm-offset-3">
<button disabled={!Object.keys(clusterLinks).length} className="btn btn-lg btn-success btn-block" type="submit">Enter App</button>
</div>
</form>
</div>
</div>
</div>
</div>
</div>
</div>
);
},
renderValidationError() {
return <div>Your passwords don't match!</div>;
},
});
export default CreateWebAdmin;

View File

@ -1,48 +0,0 @@
import React, {PropTypes} from 'react';
const NameCluster = React.createClass({
propTypes: {
onNameCluster: PropTypes.func.isRequired,
},
handleSubmit(e) {
e.preventDefault();
this.props.onNameCluster(this.clusterName.value);
},
render() {
return (
<div id="signup-page">
<div className="container">
<div className="row">
<div className="col-md-8 col-md-offset-2">
<div className="panel panel-summer">
<div className="panel-heading text-center">
<div className="signup-progress-circle step1of3">1/3</div>
<h2 className="deluxe">Welcome to InfluxEnterprise</h2>
<p>
</p>
</div>
<div className="panel-body">
<form onSubmit={this.handleSubmit}>
<div className="form-group col-sm-12">
<h4>What do you want to call your cluster?</h4>
<label htmlFor="cluster-name">Cluster Name (you can change this later)</label>
<input ref={(name) => this.clusterName = name} className="form-control input-lg" type="text" id="cluster-name" placeholder="Ex. MyCluster"/>
</div>
<div className="form-group col-sm-6 col-sm-offset-3">
<button className="btn btn-lg btn-block btn-success" type="submit">Next</button>
</div>
</form>
</div>
</div>
</div>
</div>
</div>
</div>
);
},
});
export default NameCluster;

View File

@ -1,33 +0,0 @@
import React from 'react';
const NoCluster = React.createClass({
handleSubmit() {
window.location.reload();
},
render() {
return (
<div id="signup-page">
<div className="container">
<div className="row">
<div className="col-md-8 col-md-offset-2">
<div className="panel panel-summer">
<div className="panel-heading text-center">
<h2 className="deluxe">Welcome to Enterprise</h2>
<p>
Looks like you don't have your cluster set up.
</p>
</div>
<div className="panel-body">
<button className="btn btn-lg btn-success btn-block" onClick={this.handleSubmit}>Try Again</button>
</div>
</div>
</div>
</div>
</div>
</div>
);
},
});
export default NoCluster;

View File

@ -1,97 +0,0 @@
import React, {PropTypes} from 'react';
import CreateClusterAdmin from './components/CreateClusterAdmin';
import CreateWebAdmin from './components/CreateWebAdmin';
import NameCluster from './components/NameCluster';
import NoCluster from './components/NoCluster';
import {withRouter} from 'react-router';
import {
createWebAdmin,
getClusters,
createClusterUserAtSetup,
updateClusterAtSetup,
} from 'shared/apis';
const SignUpApp = React.createClass({
propTypes: {
params: PropTypes.shape({
step: PropTypes.string.isRequired,
}).isRequired,
router: PropTypes.shape({
push: PropTypes.func.isRequired,
replace: PropTypes.func.isRequired,
}).isRequired,
},
getInitialState() {
return {
clusterDisplayName: null,
clusterIDs: null,
activeClusterID: null,
clusterUser: '',
};
},
componentDidMount() {
getClusters().then(({data: clusters}) => {
const clusterIDs = clusters.map((c) => c.cluster_id); // TODO: handle when the first cluster is down...
this.setState({
clusterIDs,
activeClusterID: clusterIDs[0],
});
});
},
handleNameCluster(clusterDisplayName) {
this.setState({clusterDisplayName}, () => {
this.props.router.replace('/signup/admin/2');
});
},
handleCreateClusterAdmin(username, password) {
const {activeClusterID, clusterDisplayName} = this.state;
createClusterUserAtSetup(activeClusterID, username, password).then(() => {
updateClusterAtSetup(activeClusterID, clusterDisplayName).then(() => {
this.setState({clusterUser: username}, () => {
this.props.router.replace('/signup/admin/3');
});
});
});
},
handleCreateWebAdmin(firstName, lastName, email, password, confirmation, clusterLinks) {
createWebAdmin({firstName, lastName, email, password, confirmation, clusterLinks}).then(() => {
window.location.replace('/');
});
},
render() {
const {params: {step}, router} = this.props;
const {clusterDisplayName, clusterIDs} = this.state;
if (!['1', '2', '3'].includes(step)) {
router.replace('/signup/admin/1');
}
if (clusterIDs === null) {
return null; // spinner?
}
if (!clusterIDs.length) {
return <NoCluster />;
}
if (step === '1' || !clusterDisplayName) {
return <NameCluster onNameCluster={this.handleNameCluster} />;
}
if (step === '2') {
return <CreateClusterAdmin onCreateClusterAdmin={this.handleCreateClusterAdmin} />;
}
if (step === '3') {
return <CreateWebAdmin onCreateWebAdmin={this.handleCreateWebAdmin} />;
}
},
});
export default withRouter(SignUpApp);

View File

@ -11,8 +11,10 @@ import (
// Ensure AlertsStore implements chronograf.AlertRulesStore.
var _ chronograf.AlertRulesStore = &AlertsStore{}
// AlertsBucket is the name of the bucket alert configuration is stored in
var AlertsBucket = []byte("Alerts")
// AlertsStore represents the bolt implementation of a store for alerts
type AlertsStore struct {
client *Client
}

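The hunk above shows the storage pattern this package repeats for every entity: a package-level bucket name plus a thin store struct that wraps the shared client. Below is a minimal, standalone sketch of that bucket pattern against the boltdb API; the bucket name, file path, and payload are illustrative stand-ins, not Chronograf's.

```go
package main

import (
	"fmt"
	"time"

	"github.com/boltdb/bolt"
)

// ExampleBucket is an illustrative bucket name, analogous to AlertsBucket above.
var ExampleBucket = []byte("Example")

func main() {
	// Open a bolt file the same way Client.Open does: path, mode, and a timeout.
	db, err := bolt.Open("example.db", 0600, &bolt.Options{Timeout: 1 * time.Second})
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Create the bucket if it does not exist, then store a value under a key.
	if err := db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists(ExampleBucket)
		if err != nil {
			return err
		}
		return b.Put([]byte("one"), []byte(`{"id":"one"}`))
	}); err != nil {
		panic(err)
	}

	// Read it back inside a read-only transaction.
	if err := db.View(func(tx *bolt.Tx) error {
		v := tx.Bucket(ExampleBucket).Get([]byte("one"))
		fmt.Printf("stored: %s\n", v)
		return nil
	}); err != nil {
		panic(err)
	}
}
```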
108
bolt/alerts_test.go Normal file
View File

@ -0,0 +1,108 @@
package bolt_test
import (
"context"
"reflect"
"testing"
"github.com/influxdata/chronograf"
)
func setupTestClient() (*TestClient, error) {
if c, err := NewTestClient(); err != nil {
return nil, err
} else if err := c.Open(context.TODO()); err != nil {
return nil, err
} else {
return c, nil
}
}
// Ensure alert rules can be added to an AlertRuleStore.
func TestAlertRuleStoreAdd(t *testing.T) {
c, err := setupTestClient()
if err != nil {
t.Fatal(err)
}
defer c.Close()
s := c.AlertsStore
alerts := []chronograf.AlertRule{
chronograf.AlertRule{
ID: "one",
},
chronograf.AlertRule{
ID: "two",
Details: "howdy",
},
}
// Add new alert.
ctx := context.Background()
for i, a := range alerts {
// Adding should return an identical copy
actual, err := s.Add(ctx, 0, 0, a)
if err != nil {
t.Errorf("error adding alert to store: %v", err)
}
if !reflect.DeepEqual(actual, alerts[i]) {
t.Fatalf("alert returned is different than alert saved; actual: %v, expected %v", actual, alerts[i])
}
}
}
func setupWithRule(ctx context.Context, alert chronograf.AlertRule) (*TestClient, error) {
c, err := setupTestClient()
if err != nil {
return nil, err
}
// Add test alert
if _, err := c.AlertsStore.Add(ctx, 0, 0, alert); err != nil {
return nil, err
}
return c, nil
}
// Ensure an alert rule can be loaded from an AlertRuleStore.
func TestAlertRuleStoreGet(t *testing.T) {
ctx := context.Background()
alert := chronograf.AlertRule{
ID: "one",
}
c, err := setupWithRule(ctx, alert)
if err != nil {
t.Fatalf("Error adding test alert to store: %v", err)
}
defer c.Close()
actual, err := c.AlertsStore.Get(ctx, 0, 0, "one")
if err != nil {
t.Fatalf("Error loading rule from store: %v", err)
}
if !reflect.DeepEqual(actual, alert) {
t.Fatalf("alert returned is different than alert saved; actual: %v, expected %v", actual, alert)
}
}
// Ensure an alert rule with details can be loaded.
func TestAlertRuleStoreGetDetail(t *testing.T) {
ctx := context.Background()
alert := chronograf.AlertRule{
ID: "one",
Details: "my details",
}
c, err := setupWithRule(ctx, alert)
if err != nil {
t.Fatalf("Error adding test alert to store: %v", err)
}
defer c.Close()
actual, err := c.AlertsStore.Get(ctx, 0, 0, "one")
if err != nil {
t.Fatalf("Error loading rule from store: %v", err)
}
if !reflect.DeepEqual(actual, alert) {
t.Fatalf("alert returned is different than alert saved; actual: %v, expected %v", actual, alert)
}
}

View File

@ -1,6 +1,7 @@
package bolt
import (
"context"
"time"
"github.com/boltdb/bolt"
@ -15,18 +16,17 @@ type Client struct {
Now func() time.Time
LayoutIDs chronograf.ID
ExplorationStore *ExplorationStore
SourcesStore *SourcesStore
ServersStore *ServersStore
LayoutStore *LayoutStore
UsersStore *UsersStore
AlertsStore *AlertsStore
DashboardsStore *DashboardsStore
SourcesStore *SourcesStore
ServersStore *ServersStore
LayoutStore *LayoutStore
UsersStore *UsersStore
AlertsStore *AlertsStore
DashboardsStore *DashboardsStore
}
// NewClient initializes all stores
func NewClient() *Client {
c := &Client{Now: time.Now}
c.ExplorationStore = &ExplorationStore{client: c}
c.SourcesStore = &SourcesStore{client: c}
c.ServersStore = &ServersStore{client: c}
c.AlertsStore = &AlertsStore{client: c}
@ -35,12 +35,15 @@ func NewClient() *Client {
client: c,
IDs: &uuid.V4{},
}
c.DashboardsStore = &DashboardsStore{client: c}
c.DashboardsStore = &DashboardsStore{
client: c,
IDs: &uuid.V4{},
}
return c
}
// Open and initialize boltDB. Initial buckets are created if they do not exist.
func (c *Client) Open() error {
func (c *Client) Open(ctx context.Context) error {
// Open database file.
db, err := bolt.Open(c.Path, 0600, &bolt.Options{Timeout: 1 * time.Second})
if err != nil {
@ -49,10 +52,6 @@ func (c *Client) Open() error {
c.db = db
if err := c.db.Update(func(tx *bolt.Tx) error {
// Always create explorations bucket.
if _, err := tx.CreateBucketIfNotExists(ExplorationBucket); err != nil {
return err
}
// Always create Sources bucket.
if _, err := tx.CreateBucketIfNotExists(SourcesBucket); err != nil {
return err
@ -82,9 +81,11 @@ func (c *Client) Open() error {
return err
}
return nil
// Runtime migrations
return c.DashboardsStore.Migrate(ctx)
}
// Close the connection to the bolt database
func (c *Client) Close() error {
if c.db != nil {
return c.db.Close()

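Open now accepts a context and, after creating the buckets, finishes by running the dashboard migration. A usage sketch based only on the fields and signatures visible in this hunk; the database path is a placeholder.

```go
package main

import (
	"context"
	"log"

	"github.com/influxdata/chronograf/bolt"
)

func main() {
	// NewClient wires up every store against the shared bolt client.
	c := bolt.NewClient()
	c.Path = "/tmp/chronograf-v1.db" // placeholder path

	// Open now takes a context; it creates the buckets and then runs
	// DashboardsStore.Migrate before returning.
	ctx := context.Background()
	if err := c.Open(ctx); err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	// The stores hang off the client, e.g. c.DashboardsStore, c.SourcesStore.
	_ = c.DashboardsStore
}
```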
View File

@ -12,11 +12,49 @@ import (
// Ensure DashboardsStore implements chronograf.DashboardsStore.
var _ chronograf.DashboardsStore = &DashboardsStore{}
// DashboardBucket is the bolt bucket dashboards are stored in
var DashboardBucket = []byte("Dashoard")
// DashboardsStore is the bolt implementation of storing dashboards
type DashboardsStore struct {
client *Client
IDs chronograf.DashboardID
IDs chronograf.ID
}
// AddIDs is a migration function that adds ID information to existing dashboards
func (d *DashboardsStore) AddIDs(ctx context.Context, boards []chronograf.Dashboard) error {
for _, board := range boards {
update := false
for i, cell := range board.Cells {
// If there is no id set, we generate one and update the dashboard
if cell.ID == "" {
id, err := d.IDs.Generate()
if err != nil {
return err
}
cell.ID = id
board.Cells[i] = cell
update = true
}
}
if !update {
continue
}
if err := d.Update(ctx, board); err != nil {
return err
}
}
return nil
}
// Migrate updates the dashboards at runtime
func (d *DashboardsStore) Migrate(ctx context.Context) error {
// 1. Add UUIDs to cells without one
boards, err := d.All(ctx)
if err != nil {
return err
}
return d.AddIDs(ctx, boards)
}
// All returns all known dashboards
@ -49,6 +87,14 @@ func (d *DashboardsStore) Add(ctx context.Context, src chronograf.Dashboard) (ch
src.ID = chronograf.DashboardID(id)
strID := strconv.Itoa(int(id))
for i, cell := range src.Cells {
cid, err := d.IDs.Generate()
if err != nil {
return err
}
cell.ID = cid
src.Cells[i] = cell
}
if v, err := internal.MarshalDashboard(src); err != nil {
return err
} else if err := b.Put([]byte(strID), v); err != nil {
@ -81,9 +127,10 @@ func (d *DashboardsStore) Get(ctx context.Context, id chronograf.DashboardID) (c
}
// Delete the dashboard from DashboardsStore
func (s *DashboardsStore) Delete(ctx context.Context, d chronograf.Dashboard) error {
if err := s.client.db.Update(func(tx *bolt.Tx) error {
if err := tx.Bucket(DashboardBucket).Delete(itob(int(d.ID))); err != nil {
func (d *DashboardsStore) Delete(ctx context.Context, dash chronograf.Dashboard) error {
if err := d.client.db.Update(func(tx *bolt.Tx) error {
strID := strconv.Itoa(int(dash.ID))
if err := tx.Bucket(DashboardBucket).Delete([]byte(strID)); err != nil {
return err
}
return nil
@ -95,16 +142,27 @@ func (s *DashboardsStore) Delete(ctx context.Context, d chronograf.Dashboard) er
}
// Update the dashboard in DashboardsStore
func (s *DashboardsStore) Update(ctx context.Context, d chronograf.Dashboard) error {
if err := s.client.db.Update(func(tx *bolt.Tx) error {
func (d *DashboardsStore) Update(ctx context.Context, dash chronograf.Dashboard) error {
if err := d.client.db.Update(func(tx *bolt.Tx) error {
// Get an existing dashboard with the same ID.
b := tx.Bucket(DashboardBucket)
strID := strconv.Itoa(int(d.ID))
strID := strconv.Itoa(int(dash.ID))
if v := b.Get([]byte(strID)); v == nil {
return chronograf.ErrDashboardNotFound
}
if v, err := internal.MarshalDashboard(d); err != nil {
for i, cell := range dash.Cells {
if cell.ID != "" {
continue
}
cid, err := d.IDs.Generate()
if err != nil {
return err
}
cell.ID = cid
dash.Cells[i] = cell
}
if v, err := internal.MarshalDashboard(dash); err != nil {
return err
} else if err := b.Put([]byte(strID), v); err != nil {
return err

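AddIDs and Migrate backfill a generated ID onto any cell that predates the field and persist only the dashboards that actually changed. The following is a self-contained sketch of that backfill step, using stand-in types and a stub ID generator rather than the chronograf interfaces.

```go
package main

import (
	"fmt"
	"strconv"
)

// Stand-ins for chronograf.DashboardCell and chronograf.Dashboard.
type cell struct{ ID string }
type dashboard struct {
	Name  string
	Cells []cell
}

// nextID is a stub for the uuid.V4 generator the store uses.
var counter int

func nextID() string {
	counter++
	return "cell-" + strconv.Itoa(counter)
}

// addIDs assigns an ID to every cell that lacks one and reports whether
// the dashboard was modified and therefore needs to be written back.
func addIDs(d *dashboard) bool {
	updated := false
	for i, c := range d.Cells {
		if c.ID != "" {
			continue
		}
		c.ID = nextID()
		d.Cells[i] = c
		updated = true
	}
	return updated
}

func main() {
	boards := []dashboard{
		{Name: "old", Cells: []cell{{ID: ""}, {ID: "existing"}}},
		{Name: "new", Cells: []cell{{ID: "a"}, {ID: "b"}}},
	}
	for i := range boards {
		if addIDs(&boards[i]) {
			// In the store this is where Update(ctx, board) would run.
			fmt.Printf("would persist %q: %+v\n", boards[i].Name, boards[i].Cells)
		}
	}
}
```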
View File

@ -1,128 +0,0 @@
package bolt
import (
"context"
"github.com/boltdb/bolt"
"github.com/influxdata/chronograf"
"github.com/influxdata/chronograf/bolt/internal"
)
// Ensure ExplorationStore implements chronograf.ExplorationStore.
var _ chronograf.ExplorationStore = &ExplorationStore{}
var ExplorationBucket = []byte("Explorations")
type ExplorationStore struct {
client *Client
}
// Search the ExplorationStore for all explorations owned by userID.
func (s *ExplorationStore) Query(ctx context.Context, uid chronograf.UserID) ([]*chronograf.Exploration, error) {
var explorations []*chronograf.Exploration
if err := s.client.db.View(func(tx *bolt.Tx) error {
if err := tx.Bucket(ExplorationBucket).ForEach(func(k, v []byte) error {
var e chronograf.Exploration
if err := internal.UnmarshalExploration(v, &e); err != nil {
return err
} else if e.UserID != uid {
return nil
}
explorations = append(explorations, &e)
return nil
}); err != nil {
return err
}
return nil
}); err != nil {
return nil, err
}
return explorations, nil
}
// Create a new Exploration in the ExplorationStore.
func (s *ExplorationStore) Add(ctx context.Context, e *chronograf.Exploration) (*chronograf.Exploration, error) {
if err := s.client.db.Update(func(tx *bolt.Tx) error {
b := tx.Bucket(ExplorationBucket)
seq, err := b.NextSequence()
if err != nil {
return err
}
e.ID = chronograf.ExplorationID(seq)
e.CreatedAt = s.client.Now().UTC()
e.UpdatedAt = e.CreatedAt
if v, err := internal.MarshalExploration(e); err != nil {
return err
} else if err := b.Put(itob(int(e.ID)), v); err != nil {
return err
}
return nil
}); err != nil {
return nil, err
}
return e, nil
}
// Delete the exploration from the ExplorationStore
func (s *ExplorationStore) Delete(ctx context.Context, e *chronograf.Exploration) error {
if err := s.client.db.Update(func(tx *bolt.Tx) error {
if err := tx.Bucket(ExplorationBucket).Delete(itob(int(e.ID))); err != nil {
return err
}
return nil
}); err != nil {
return err
}
return nil
}
// Retrieve an exploration by id, if it exists.
func (s *ExplorationStore) Get(ctx context.Context, id chronograf.ExplorationID) (*chronograf.Exploration, error) {
var e chronograf.Exploration
if err := s.client.db.View(func(tx *bolt.Tx) error {
if v := tx.Bucket(ExplorationBucket).Get(itob(int(id))); v == nil {
return chronograf.ErrExplorationNotFound
} else if err := internal.UnmarshalExploration(v, &e); err != nil {
return err
}
return nil
}); err != nil {
return nil, err
}
return &e, nil
}
// Update an exploration; will also update the `UpdatedAt` time.
func (s *ExplorationStore) Update(ctx context.Context, e *chronograf.Exploration) error {
if err := s.client.db.Update(func(tx *bolt.Tx) error {
// Retrieve an existing exploration with the same exploration ID.
var ee chronograf.Exploration
b := tx.Bucket(ExplorationBucket)
if v := b.Get(itob(int(e.ID))); v == nil {
return chronograf.ErrExplorationNotFound
} else if err := internal.UnmarshalExploration(v, &ee); err != nil {
return err
}
ee.Name = e.Name
ee.UserID = e.UserID
ee.Data = e.Data
ee.UpdatedAt = s.client.Now().UTC()
if v, err := internal.MarshalExploration(&ee); err != nil {
return err
} else if err := b.Put(itob(int(ee.ID)), v); err != nil {
return err
}
return nil
}); err != nil {
return err
}
return nil
}

View File

@ -1,142 +0,0 @@
package bolt_test
import (
"context"
"testing"
"github.com/influxdata/chronograf"
)
// Ensure an ExplorationStore can store, retrieve, update, and delete explorations.
func TestExplorationStore_CRUD(t *testing.T) {
c, err := NewTestClient()
if err != nil {
t.Fatal(err)
}
if err := c.Open(); err != nil {
t.Fatal(err)
}
defer c.Close()
s := c.ExplorationStore
explorations := []*chronograf.Exploration{
&chronograf.Exploration{
Name: "Ferdinand Magellan",
UserID: 2,
Data: "{\"panels\":{\"123\":{\"id\":\"123\",\"queryIds\":[\"456\"]}},\"queryConfigs\":{\"456\":{\"id\":\"456\",\"database\":null,\"measurement\":null,\"retentionPolicy\":null,\"fields\":[],\"tags\":{},\"groupBy\":{\"time\":null,\"tags\":[]},\"areTagsAccepted\":true,\"rawText\":null}}}",
},
&chronograf.Exploration{
Name: "Marco Polo",
UserID: 3,
Data: "{\"panels\":{\"123\":{\"id\":\"123\",\"queryIds\":[\"456\"]}},\"queryConfigs\":{\"456\":{\"id\":\"456\",\"database\":null,\"measurement\":null,\"retentionPolicy\":null,\"fields\":[],\"tags\":{},\"groupBy\":{\"time\":null,\"tags\":[]},\"areTagsAccepted\":true,\"rawText\":null}}}",
},
&chronograf.Exploration{
Name: "Leif Ericson",
UserID: 3,
Data: "{\"panels\":{\"123\":{\"id\":\"123\",\"queryIds\":[\"456\"]}},\"queryConfigs\":{\"456\":{\"id\":\"456\",\"database\":null,\"measurement\":null,\"retentionPolicy\":null,\"fields\":[],\"tags\":{},\"groupBy\":{\"time\":null,\"tags\":[]},\"areTagsAccepted\":true,\"rawText\":null}}}",
},
}
ctx := context.Background()
// Add new explorations.
for i := range explorations {
if _, err := s.Add(ctx, explorations[i]); err != nil {
t.Fatal(err)
}
}
// Confirm first exploration in the store is the same as the original.
if e, err := s.Get(ctx, explorations[0].ID); err != nil {
t.Fatal(err)
} else if e.ID != explorations[0].ID {
t.Fatalf("exploration ID error: got %v, expected %v", e.ID, explorations[1].ID)
} else if e.Name != explorations[0].Name {
t.Fatalf("exploration Name error: got %v, expected %v", e.Name, explorations[1].Name)
} else if e.UserID != explorations[0].UserID {
t.Fatalf("exploration UserID error: got %v, expected %v", e.UserID, explorations[1].UserID)
} else if e.Data != explorations[0].Data {
t.Fatalf("exploration Data error: got %v, expected %v", e.Data, explorations[1].Data)
}
// Update explorations.
explorations[1].Name = "Francis Drake"
explorations[2].UserID = 4
if err := s.Update(ctx, explorations[1]); err != nil {
t.Fatal(err)
} else if err := s.Update(ctx, explorations[2]); err != nil {
t.Fatal(err)
}
// Confirm explorations are updated.
if e, err := s.Get(ctx, explorations[1].ID); err != nil {
t.Fatal(err)
} else if e.Name != "Francis Drake" {
t.Fatalf("exploration 1 update error: got %v, expected %v", e.Name, "Francis Drake")
}
if e, err := s.Get(ctx, explorations[2].ID); err != nil {
t.Fatal(err)
} else if e.UserID != 4 {
t.Fatalf("exploration 2 update error: got %v, expected %v", e.UserID, 4)
}
// Delete an exploration.
if err := s.Delete(ctx, explorations[2]); err != nil {
t.Fatal(err)
}
// Confirm exploration has been deleted.
if e, err := s.Get(ctx, explorations[2].ID); err != chronograf.ErrExplorationNotFound {
t.Fatalf("exploration delete error: got %v, expected %v", e, chronograf.ErrExplorationNotFound)
}
}
// Ensure Explorations can be queried by UserID.
func TestExplorationStore_Query(t *testing.T) {
c, err := NewTestClient()
if err != nil {
t.Fatal(err)
}
if err := c.Open(); err != nil {
t.Fatal(err)
}
defer c.Close()
s := c.ExplorationStore
explorations := []*chronograf.Exploration{
&chronograf.Exploration{
Name: "Ferdinand Magellan",
UserID: 2,
Data: "{\"panels\":{\"123\":{\"id\":\"123\",\"queryIds\":[\"456\"]}},\"queryConfigs\":{\"456\":{\"id\":\"456\",\"database\":null,\"measurement\":null,\"retentionPolicy\":null,\"fields\":[],\"tags\":{},\"groupBy\":{\"time\":null,\"tags\":[]},\"areTagsAccepted\":true,\"rawText\":null}}}",
},
&chronograf.Exploration{
Name: "Marco Polo",
UserID: 3,
Data: "{\"panels\":{\"123\":{\"id\":\"123\",\"queryIds\":[\"456\"]}},\"queryConfigs\":{\"456\":{\"id\":\"456\",\"database\":null,\"measurement\":null,\"retentionPolicy\":null,\"fields\":[],\"tags\":{},\"groupBy\":{\"time\":null,\"tags\":[]},\"areTagsAccepted\":true,\"rawText\":null}}}",
},
&chronograf.Exploration{
Name: "Leif Ericson",
UserID: 3,
Data: "{\"panels\":{\"123\":{\"id\":\"123\",\"queryIds\":[\"456\"]}},\"queryConfigs\":{\"456\":{\"id\":\"456\",\"database\":null,\"measurement\":null,\"retentionPolicy\":null,\"fields\":[],\"tags\":{},\"groupBy\":{\"time\":null,\"tags\":[]},\"areTagsAccepted\":true,\"rawText\":null}}}",
},
}
ctx := context.Background()
// Add new explorations.
for i := range explorations {
if _, err := s.Add(ctx, explorations[i]); err != nil {
t.Fatal(err)
}
}
// Query for explorations.
if e, err := s.Query(ctx, 3); err != nil {
t.Fatal(err)
} else if len(e) != 2 {
t.Fatalf("exploration query length error: got %v, expected %v", len(explorations), len(e))
} else if e[0].Name != explorations[1].Name {
t.Fatalf("exploration query error: got %v, expected %v", explorations[0].Name, "Marco Polo")
} else if e[1].Name != explorations[2].Name {
t.Fatalf("exploration query error: got %v, expected %v", explorations[1].Name, "Leif Ericson")
}
}

View File

@ -2,7 +2,6 @@ package internal
import (
"encoding/json"
"time"
"github.com/gogo/protobuf/proto"
"github.com/influxdata/chronograf"
@ -10,37 +9,6 @@ import (
//go:generate protoc --gogo_out=. internal.proto
// MarshalExploration encodes an exploration to binary protobuf format.
func MarshalExploration(e *chronograf.Exploration) ([]byte, error) {
return proto.Marshal(&Exploration{
ID: int64(e.ID),
Name: e.Name,
UserID: int64(e.UserID),
Data: e.Data,
CreatedAt: e.CreatedAt.UnixNano(),
UpdatedAt: e.UpdatedAt.UnixNano(),
Default: e.Default,
})
}
// UnmarshalExploration decodes an exploration from binary protobuf data.
func UnmarshalExploration(data []byte, e *chronograf.Exploration) error {
var pb Exploration
if err := proto.Unmarshal(data, &pb); err != nil {
return err
}
e.ID = chronograf.ExplorationID(pb.ID)
e.Name = pb.Name
e.UserID = chronograf.UserID(pb.UserID)
e.Data = pb.Data
e.CreatedAt = time.Unix(0, pb.CreatedAt).UTC()
e.UpdatedAt = time.Unix(0, pb.UpdatedAt).UTC()
e.Default = pb.Default
return nil
}
// MarshalSource encodes a source to binary protobuf format.
func MarshalSource(s chronograf.Source) ([]byte, error) {
return proto.Marshal(&Source{
@ -50,6 +18,7 @@ func MarshalSource(s chronograf.Source) ([]byte, error) {
Username: s.Username,
Password: s.Password,
URL: s.URL,
MetaURL: s.MetaURL,
InsecureSkipVerify: s.InsecureSkipVerify,
Default: s.Default,
Telegraf: s.Telegraf,
@ -69,6 +38,7 @@ func UnmarshalSource(data []byte, s *chronograf.Source) error {
s.Username = pb.Username
s.Password = pb.Password
s.URL = pb.URL
s.MetaURL = pb.MetaURL
s.InsecureSkipVerify = pb.InsecureSkipVerify
s.Default = pb.Default
s.Telegraf = pb.Telegraf
@ -201,17 +171,14 @@ func MarshalDashboard(d chronograf.Dashboard) ([]byte, error) {
r.Upper, r.Lower = q.Range.Upper, q.Range.Lower
}
queries[j] = &Query{
Command: q.Command,
DB: q.DB,
RP: q.RP,
GroupBys: q.GroupBys,
Wheres: q.Wheres,
Label: q.Label,
Range: r,
Command: q.Command,
Label: q.Label,
Range: r,
}
}
cells[i] = &DashboardCell{
ID: c.ID,
X: c.X,
Y: c.Y,
W: c.W,
@ -238,15 +205,11 @@ func UnmarshalDashboard(data []byte, d *chronograf.Dashboard) error {
cells := make([]chronograf.DashboardCell, len(pb.Cells))
for i, c := range pb.Cells {
queries := make([]chronograf.Query, len(c.Queries))
queries := make([]chronograf.DashboardQuery, len(c.Queries))
for j, q := range c.Queries {
queries[j] = chronograf.Query{
Command: q.Command,
DB: q.DB,
RP: q.RP,
GroupBys: q.GroupBys,
Wheres: q.Wheres,
Label: q.Label,
queries[j] = chronograf.DashboardQuery{
Command: q.Command,
Label: q.Label,
}
if q.Range.Upper != q.Range.Lower {
queries[j].Range = &chronograf.Range{
@ -257,6 +220,7 @@ func UnmarshalDashboard(data []byte, d *chronograf.Dashboard) error {
}
cells[i] = chronograf.DashboardCell{
ID: c.ID,
X: c.X,
Y: c.Y,
W: c.W,
@ -310,21 +274,35 @@ func UnmarshalAlertRule(data []byte, r *ScopedAlert) error {
}
// MarshalUser encodes a user to binary protobuf format.
// We are ignoring the password for now.
func MarshalUser(u *chronograf.User) ([]byte, error) {
return proto.Marshal(&User{
ID: uint64(u.ID),
Email: u.Email,
return MarshalUserPB(&User{
Name: u.Name,
})
}
// MarshalUserPB encodes a user to binary protobuf format.
// We are ignoring the password for now.
func MarshalUserPB(u *User) ([]byte, error) {
return proto.Marshal(u)
}
// UnmarshalUser decodes a user from binary protobuf data.
// We are ignoring the password for now.
func UnmarshalUser(data []byte, u *chronograf.User) error {
var pb User
if err := proto.Unmarshal(data, &pb); err != nil {
if err := UnmarshalUserPB(data, &pb); err != nil {
return err
}
u.Name = pb.Name
return nil
}
// UnmarshalUserPB decodes a user from binary protobuf data.
// We are ignoring the password for now.
func UnmarshalUserPB(data []byte, u *User) error {
if err := proto.Unmarshal(data, u); err != nil {
return err
}
u.ID = chronograf.UserID(pb.ID)
u.Email = pb.Email
return nil
}

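With the Exploration codec removed and the User message trimmed, a user now round-trips through protobuf carrying only its Name, while MarshalUserPB and UnmarshalUserPB operate on the raw protobuf type. A roundtrip sketch in the style of internal_test.go, based on the signatures in this hunk; the test name is mine, and the file would have to live in the bolt/internal package directory since internal packages cannot be imported from outside the bolt tree.

```go
package internal_test

import (
	"testing"

	"github.com/influxdata/chronograf"
	"github.com/influxdata/chronograf/bolt/internal"
)

func TestMarshalUserRoundTrip(t *testing.T) {
	// Only the login name survives the trip; the password is deliberately ignored.
	u := chronograf.User{Name: "docbrown"}

	buf, err := internal.MarshalUser(&u)
	if err != nil {
		t.Fatal(err)
	}

	var out chronograf.User
	if err := internal.UnmarshalUser(buf, &out); err != nil {
		t.Fatal(err)
	}
	if out.Name != "docbrown" {
		t.Fatalf("user name did not survive roundtrip: got %q", out.Name)
	}
}
```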
View File

@ -9,7 +9,6 @@ It is generated from these files:
internal.proto
It has these top-level messages:
Exploration
Source
Dashboard
DashboardCell
@ -38,21 +37,6 @@ var _ = math.Inf
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
type Exploration struct {
ID int64 `protobuf:"varint,1,opt,name=ID,proto3" json:"ID,omitempty"`
Name string `protobuf:"bytes,2,opt,name=Name,proto3" json:"Name,omitempty"`
UserID int64 `protobuf:"varint,3,opt,name=UserID,proto3" json:"UserID,omitempty"`
Data string `protobuf:"bytes,4,opt,name=Data,proto3" json:"Data,omitempty"`
CreatedAt int64 `protobuf:"varint,5,opt,name=CreatedAt,proto3" json:"CreatedAt,omitempty"`
UpdatedAt int64 `protobuf:"varint,6,opt,name=UpdatedAt,proto3" json:"UpdatedAt,omitempty"`
Default bool `protobuf:"varint,7,opt,name=Default,proto3" json:"Default,omitempty"`
}
func (m *Exploration) Reset() { *m = Exploration{} }
func (m *Exploration) String() string { return proto.CompactTextString(m) }
func (*Exploration) ProtoMessage() {}
func (*Exploration) Descriptor() ([]byte, []int) { return fileDescriptorInternal, []int{0} }
type Source struct {
ID int64 `protobuf:"varint,1,opt,name=ID,proto3" json:"ID,omitempty"`
Name string `protobuf:"bytes,2,opt,name=Name,proto3" json:"Name,omitempty"`
@ -63,12 +47,13 @@ type Source struct {
Default bool `protobuf:"varint,7,opt,name=Default,proto3" json:"Default,omitempty"`
Telegraf string `protobuf:"bytes,8,opt,name=Telegraf,proto3" json:"Telegraf,omitempty"`
InsecureSkipVerify bool `protobuf:"varint,9,opt,name=InsecureSkipVerify,proto3" json:"InsecureSkipVerify,omitempty"`
MetaURL string `protobuf:"bytes,10,opt,name=MetaURL,proto3" json:"MetaURL,omitempty"`
}
func (m *Source) Reset() { *m = Source{} }
func (m *Source) String() string { return proto.CompactTextString(m) }
func (*Source) ProtoMessage() {}
func (*Source) Descriptor() ([]byte, []int) { return fileDescriptorInternal, []int{1} }
func (*Source) Descriptor() ([]byte, []int) { return fileDescriptorInternal, []int{0} }
type Dashboard struct {
ID int64 `protobuf:"varint,1,opt,name=ID,proto3" json:"ID,omitempty"`
@ -79,7 +64,7 @@ type Dashboard struct {
func (m *Dashboard) Reset() { *m = Dashboard{} }
func (m *Dashboard) String() string { return proto.CompactTextString(m) }
func (*Dashboard) ProtoMessage() {}
func (*Dashboard) Descriptor() ([]byte, []int) { return fileDescriptorInternal, []int{2} }
func (*Dashboard) Descriptor() ([]byte, []int) { return fileDescriptorInternal, []int{1} }
func (m *Dashboard) GetCells() []*DashboardCell {
if m != nil {
@ -96,12 +81,13 @@ type DashboardCell struct {
Queries []*Query `protobuf:"bytes,5,rep,name=queries" json:"queries,omitempty"`
Name string `protobuf:"bytes,6,opt,name=name,proto3" json:"name,omitempty"`
Type string `protobuf:"bytes,7,opt,name=type,proto3" json:"type,omitempty"`
ID string `protobuf:"bytes,8,opt,name=ID,proto3" json:"ID,omitempty"`
}
func (m *DashboardCell) Reset() { *m = DashboardCell{} }
func (m *DashboardCell) String() string { return proto.CompactTextString(m) }
func (*DashboardCell) ProtoMessage() {}
func (*DashboardCell) Descriptor() ([]byte, []int) { return fileDescriptorInternal, []int{3} }
func (*DashboardCell) Descriptor() ([]byte, []int) { return fileDescriptorInternal, []int{2} }
func (m *DashboardCell) GetQueries() []*Query {
if m != nil {
@ -122,7 +108,7 @@ type Server struct {
func (m *Server) Reset() { *m = Server{} }
func (m *Server) String() string { return proto.CompactTextString(m) }
func (*Server) ProtoMessage() {}
func (*Server) Descriptor() ([]byte, []int) { return fileDescriptorInternal, []int{4} }
func (*Server) Descriptor() ([]byte, []int) { return fileDescriptorInternal, []int{3} }
type Layout struct {
ID string `protobuf:"bytes,1,opt,name=ID,proto3" json:"ID,omitempty"`
@ -135,7 +121,7 @@ type Layout struct {
func (m *Layout) Reset() { *m = Layout{} }
func (m *Layout) String() string { return proto.CompactTextString(m) }
func (*Layout) ProtoMessage() {}
func (*Layout) Descriptor() ([]byte, []int) { return fileDescriptorInternal, []int{5} }
func (*Layout) Descriptor() ([]byte, []int) { return fileDescriptorInternal, []int{4} }
func (m *Layout) GetCells() []*Cell {
if m != nil {
@ -160,7 +146,7 @@ type Cell struct {
func (m *Cell) Reset() { *m = Cell{} }
func (m *Cell) String() string { return proto.CompactTextString(m) }
func (*Cell) ProtoMessage() {}
func (*Cell) Descriptor() ([]byte, []int) { return fileDescriptorInternal, []int{6} }
func (*Cell) Descriptor() ([]byte, []int) { return fileDescriptorInternal, []int{5} }
func (m *Cell) GetQueries() []*Query {
if m != nil {
@ -182,7 +168,7 @@ type Query struct {
func (m *Query) Reset() { *m = Query{} }
func (m *Query) String() string { return proto.CompactTextString(m) }
func (*Query) ProtoMessage() {}
func (*Query) Descriptor() ([]byte, []int) { return fileDescriptorInternal, []int{7} }
func (*Query) Descriptor() ([]byte, []int) { return fileDescriptorInternal, []int{6} }
func (m *Query) GetRange() *Range {
if m != nil {
@ -199,7 +185,7 @@ type Range struct {
func (m *Range) Reset() { *m = Range{} }
func (m *Range) String() string { return proto.CompactTextString(m) }
func (*Range) ProtoMessage() {}
func (*Range) Descriptor() ([]byte, []int) { return fileDescriptorInternal, []int{8} }
func (*Range) Descriptor() ([]byte, []int) { return fileDescriptorInternal, []int{7} }
type AlertRule struct {
ID string `protobuf:"bytes,1,opt,name=ID,proto3" json:"ID,omitempty"`
@ -211,20 +197,19 @@ type AlertRule struct {
func (m *AlertRule) Reset() { *m = AlertRule{} }
func (m *AlertRule) String() string { return proto.CompactTextString(m) }
func (*AlertRule) ProtoMessage() {}
func (*AlertRule) Descriptor() ([]byte, []int) { return fileDescriptorInternal, []int{9} }
func (*AlertRule) Descriptor() ([]byte, []int) { return fileDescriptorInternal, []int{8} }
type User struct {
ID uint64 `protobuf:"varint,1,opt,name=ID,proto3" json:"ID,omitempty"`
Email string `protobuf:"bytes,2,opt,name=Email,proto3" json:"Email,omitempty"`
ID uint64 `protobuf:"varint,1,opt,name=ID,proto3" json:"ID,omitempty"`
Name string `protobuf:"bytes,2,opt,name=Name,proto3" json:"Name,omitempty"`
}
func (m *User) Reset() { *m = User{} }
func (m *User) String() string { return proto.CompactTextString(m) }
func (*User) ProtoMessage() {}
func (*User) Descriptor() ([]byte, []int) { return fileDescriptorInternal, []int{10} }
func (*User) Descriptor() ([]byte, []int) { return fileDescriptorInternal, []int{9} }
func init() {
proto.RegisterType((*Exploration)(nil), "internal.Exploration")
proto.RegisterType((*Source)(nil), "internal.Source")
proto.RegisterType((*Dashboard)(nil), "internal.Dashboard")
proto.RegisterType((*DashboardCell)(nil), "internal.DashboardCell")
@ -240,50 +225,47 @@ func init() {
func init() { proto.RegisterFile("internal.proto", fileDescriptorInternal) }
var fileDescriptorInternal = []byte{
// 712 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xbc, 0x55, 0xd1, 0x6e, 0xd3, 0x4a,
0x10, 0xd5, 0xc6, 0x76, 0x12, 0x4f, 0x7b, 0x7b, 0xaf, 0x56, 0xd5, 0xc5, 0x42, 0x3c, 0x44, 0x16,
0x48, 0x41, 0x82, 0x3e, 0xd0, 0x2f, 0x48, 0xe3, 0x0a, 0x05, 0x4a, 0x29, 0x9b, 0x06, 0x9e, 0x40,
0xda, 0x26, 0x9b, 0xc6, 0xc2, 0xb1, 0xcd, 0xda, 0x26, 0xf5, 0x3f, 0xf0, 0x05, 0x3c, 0xf0, 0x11,
0xf0, 0x29, 0xfc, 0x08, 0x9f, 0x80, 0x66, 0xbc, 0x76, 0x5c, 0x51, 0x50, 0x9f, 0x78, 0x9b, 0x33,
0x33, 0x9d, 0x3d, 0x73, 0xce, 0xb8, 0x81, 0xbd, 0x30, 0xce, 0x95, 0x8e, 0x65, 0x74, 0x90, 0xea,
0x24, 0x4f, 0x78, 0xbf, 0xc6, 0xfe, 0x37, 0x06, 0x3b, 0xc7, 0x57, 0x69, 0x94, 0x68, 0x99, 0x87,
0x49, 0xcc, 0xf7, 0xa0, 0x33, 0x09, 0x3c, 0x36, 0x60, 0x43, 0x4b, 0x74, 0x26, 0x01, 0xe7, 0x60,
0x9f, 0xca, 0xb5, 0xf2, 0x3a, 0x03, 0x36, 0x74, 0x05, 0xc5, 0xfc, 0x7f, 0xe8, 0xce, 0x32, 0xa5,
0x27, 0x81, 0x67, 0x51, 0x9f, 0x41, 0xd8, 0x1b, 0xc8, 0x5c, 0x7a, 0x76, 0xd5, 0x8b, 0x31, 0xbf,
0x07, 0xee, 0x58, 0x2b, 0x99, 0xab, 0xc5, 0x28, 0xf7, 0x1c, 0x6a, 0xdf, 0x26, 0xb0, 0x3a, 0x4b,
0x17, 0xa6, 0xda, 0xad, 0xaa, 0x4d, 0x82, 0x7b, 0xd0, 0x0b, 0xd4, 0x52, 0x16, 0x51, 0xee, 0xf5,
0x06, 0x6c, 0xd8, 0x17, 0x35, 0xf4, 0x7f, 0x30, 0xe8, 0x4e, 0x93, 0x42, 0xcf, 0xd5, 0xad, 0x08,
0x73, 0xb0, 0xcf, 0xcb, 0x54, 0x11, 0x5d, 0x57, 0x50, 0xcc, 0xef, 0x42, 0x1f, 0x69, 0xc7, 0xd8,
0x5b, 0x11, 0x6e, 0x30, 0xd6, 0xce, 0x64, 0x96, 0x6d, 0x12, 0xbd, 0x20, 0xce, 0xae, 0x68, 0x30,
0xff, 0x0f, 0xac, 0x99, 0x38, 0x21, 0xb2, 0xae, 0xc0, 0xf0, 0xf7, 0x34, 0x71, 0xce, 0xb9, 0x8a,
0xd4, 0xa5, 0x96, 0x4b, 0xaf, 0x5f, 0xcd, 0xa9, 0x31, 0x3f, 0x00, 0x3e, 0x89, 0x33, 0x35, 0x2f,
0xb4, 0x9a, 0xbe, 0x0f, 0xd3, 0xd7, 0x4a, 0x87, 0xcb, 0xd2, 0x73, 0x69, 0xc0, 0x0d, 0x15, 0xff,
0x1d, 0xb8, 0x81, 0xcc, 0x56, 0x17, 0x89, 0xd4, 0x8b, 0x5b, 0x2d, 0xfd, 0x18, 0x9c, 0xb9, 0x8a,
0xa2, 0xcc, 0xb3, 0x06, 0xd6, 0x70, 0xe7, 0xc9, 0x9d, 0x83, 0xe6, 0x06, 0x9a, 0x39, 0x63, 0x15,
0x45, 0xa2, 0xea, 0xf2, 0x3f, 0x33, 0xf8, 0xe7, 0x5a, 0x81, 0xef, 0x02, 0xbb, 0xa2, 0x37, 0x1c,
0xc1, 0xae, 0x10, 0x95, 0x34, 0xdf, 0x11, 0xac, 0x44, 0xb4, 0x21, 0x39, 0x1d, 0xc1, 0x36, 0x88,
0x56, 0x24, 0xa2, 0x23, 0xd8, 0x8a, 0x3f, 0x84, 0xde, 0x87, 0x42, 0xe9, 0x50, 0x65, 0x9e, 0x43,
0x4f, 0xff, 0xbb, 0x7d, 0xfa, 0x55, 0xa1, 0x74, 0x29, 0xea, 0x3a, 0xf2, 0x26, 0x03, 0x2a, 0x35,
0x29, 0xc6, 0x5c, 0x8e, 0x66, 0xf5, 0xaa, 0x1c, 0xc6, 0xfe, 0x27, 0xf4, 0x5b, 0xe9, 0x8f, 0x4a,
0xdf, 0x6a, 0xf5, 0xb6, 0xb7, 0xd6, 0x1f, 0xbc, 0xb5, 0x6f, 0xf6, 0xd6, 0xd9, 0x7a, 0xbb, 0x0f,
0xce, 0x54, 0xcf, 0x27, 0x81, 0x39, 0xce, 0x0a, 0xf8, 0x5f, 0x18, 0x74, 0x4f, 0x64, 0x99, 0x14,
0x79, 0x8b, 0x8e, 0x4b, 0x74, 0x06, 0xb0, 0x33, 0x4a, 0xd3, 0x28, 0x9c, 0xd3, 0xe7, 0x64, 0x58,
0xb5, 0x53, 0xd8, 0xf1, 0x42, 0xc9, 0xac, 0xd0, 0x6a, 0xad, 0xe2, 0xdc, 0xf0, 0x6b, 0xa7, 0xf8,
0x7d, 0x70, 0xc6, 0xe4, 0x9c, 0x4d, 0xf2, 0xed, 0x6d, 0xe5, 0xab, 0x0c, 0xa3, 0x22, 0x2e, 0x32,
0x2a, 0xf2, 0x64, 0x19, 0x25, 0x1b, 0x62, 0xdc, 0x17, 0x0d, 0xf6, 0xbf, 0x33, 0xb0, 0xff, 0x96,
0x87, 0xbb, 0xc0, 0x42, 0x63, 0x20, 0x0b, 0x1b, 0x47, 0x7b, 0x2d, 0x47, 0x3d, 0xe8, 0x95, 0x5a,
0xc6, 0x97, 0x2a, 0xf3, 0xfa, 0x03, 0x6b, 0x68, 0x89, 0x1a, 0x52, 0x25, 0x92, 0x17, 0x2a, 0xca,
0x3c, 0x77, 0x60, 0x0d, 0x5d, 0x51, 0xc3, 0xe6, 0x0a, 0xa0, 0x75, 0x05, 0x5f, 0x19, 0x38, 0xf4,
0x38, 0xfe, 0xdd, 0x38, 0x59, 0xaf, 0x65, 0xbc, 0x30, 0xd2, 0xd7, 0x10, 0xfd, 0x08, 0x8e, 0x8c,
0xec, 0x9d, 0xe0, 0x08, 0xb1, 0x38, 0x33, 0x22, 0x77, 0xc4, 0x19, 0xaa, 0xf6, 0x54, 0x27, 0x45,
0x7a, 0x54, 0x56, 0xf2, 0xba, 0xa2, 0xc1, 0xf8, 0x7f, 0xed, 0xcd, 0x4a, 0x69, 0xb3, 0xb3, 0x2b,
0x0c, 0xc2, 0x23, 0x38, 0x41, 0x56, 0x66, 0xcb, 0x0a, 0xf0, 0x07, 0xe0, 0x08, 0xdc, 0x82, 0x56,
0xbd, 0x26, 0x10, 0xa5, 0x45, 0x55, 0xf5, 0x0f, 0x4d, 0x1b, 0x4e, 0x99, 0xa5, 0xa9, 0xd2, 0xe6,
0x76, 0x2b, 0x40, 0xb3, 0x93, 0x8d, 0xd2, 0x44, 0xd9, 0x12, 0x15, 0xf0, 0xdf, 0x82, 0x3b, 0x8a,
0x94, 0xce, 0x45, 0x11, 0xa9, 0x5f, 0x4e, 0x8c, 0x83, 0xfd, 0x6c, 0xfa, 0xf2, 0xb4, 0xbe, 0x78,
0x8c, 0xb7, 0x77, 0x6a, 0xb5, 0xee, 0x14, 0x17, 0x7a, 0x2e, 0x53, 0x39, 0x09, 0xc8, 0x58, 0x4b,
0x18, 0xe4, 0x3f, 0x02, 0x1b, 0xbf, 0x87, 0xd6, 0x64, 0x9b, 0x26, 0xef, 0x83, 0x73, 0xbc, 0x96,
0x61, 0x64, 0x46, 0x57, 0xe0, 0xa2, 0x4b, 0xbf, 0x19, 0x87, 0x3f, 0x03, 0x00, 0x00, 0xff, 0xff,
0x6d, 0xf2, 0xe7, 0x54, 0x45, 0x06, 0x00, 0x00,
// 660 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xbc, 0x54, 0xdd, 0x6e, 0xd3, 0x4a,
0x10, 0xd6, 0xc6, 0x76, 0x7e, 0xa6, 0x3d, 0x3d, 0x47, 0xab, 0x23, 0x58, 0x71, 0x15, 0x59, 0x20,
0x05, 0x24, 0x7a, 0x41, 0x9f, 0xa0, 0xad, 0x25, 0x14, 0x68, 0x4b, 0xd9, 0xb4, 0x70, 0x05, 0xd2,
0x36, 0x9d, 0x34, 0x16, 0x8e, 0x6d, 0xd6, 0x36, 0xa9, 0x5f, 0x01, 0xf1, 0x0c, 0x3c, 0x00, 0x97,
0xbc, 0x0a, 0x2f, 0x84, 0x66, 0x77, 0xed, 0xb8, 0xa2, 0xa0, 0x5e, 0x71, 0x37, 0xdf, 0xcc, 0x66,
0x7e, 0xbe, 0xef, 0x73, 0x60, 0x27, 0x4e, 0x4b, 0xd4, 0xa9, 0x4a, 0x76, 0x73, 0x9d, 0x95, 0x19,
0x1f, 0x36, 0x38, 0xfc, 0xdc, 0x83, 0xfe, 0x2c, 0xab, 0xf4, 0x1c, 0xf9, 0x0e, 0xf4, 0xa6, 0x91,
0x60, 0x63, 0x36, 0xf1, 0x64, 0x6f, 0x1a, 0x71, 0x0e, 0xfe, 0x89, 0x5a, 0xa1, 0xe8, 0x8d, 0xd9,
0x64, 0x24, 0x4d, 0x4c, 0xb9, 0xb3, 0x3a, 0x47, 0xe1, 0xd9, 0x1c, 0xc5, 0xfc, 0x01, 0x0c, 0xcf,
0x0b, 0xea, 0xb6, 0x42, 0xe1, 0x9b, 0x7c, 0x8b, 0xa9, 0x76, 0xaa, 0x8a, 0x62, 0x9d, 0xe9, 0x4b,
0x11, 0xd8, 0x5a, 0x83, 0xf9, 0x7f, 0xe0, 0x9d, 0xcb, 0x23, 0xd1, 0x37, 0x69, 0x0a, 0xb9, 0x80,
0x41, 0x84, 0x0b, 0x55, 0x25, 0xa5, 0x18, 0x8c, 0xd9, 0x64, 0x28, 0x1b, 0x48, 0x7d, 0xce, 0x30,
0xc1, 0x2b, 0xad, 0x16, 0x62, 0x68, 0xfb, 0x34, 0x98, 0xef, 0x02, 0x9f, 0xa6, 0x05, 0xce, 0x2b,
0x8d, 0xb3, 0x0f, 0x71, 0xfe, 0x06, 0x75, 0xbc, 0xa8, 0xc5, 0xc8, 0x34, 0xb8, 0xa5, 0x42, 0x53,
0x8e, 0xb1, 0x54, 0x34, 0x1b, 0x4c, 0xab, 0x06, 0x86, 0xef, 0x61, 0x14, 0xa9, 0x62, 0x79, 0x91,
0x29, 0x7d, 0x79, 0x27, 0x3a, 0x9e, 0x42, 0x30, 0xc7, 0x24, 0x29, 0x84, 0x37, 0xf6, 0x26, 0x5b,
0xcf, 0xee, 0xef, 0xb6, 0x3c, 0xb7, 0x7d, 0x0e, 0x31, 0x49, 0xa4, 0x7d, 0x15, 0x7e, 0x63, 0xf0,
0xcf, 0x8d, 0x02, 0xdf, 0x06, 0x76, 0x6d, 0x66, 0x04, 0x92, 0x5d, 0x13, 0xaa, 0x4d, 0xff, 0x40,
0xb2, 0x9a, 0xd0, 0xda, 0x10, 0x1d, 0x48, 0xb6, 0x26, 0xb4, 0x34, 0xf4, 0x06, 0x92, 0x2d, 0xf9,
0x63, 0x18, 0x7c, 0xac, 0x50, 0xc7, 0x58, 0x88, 0xc0, 0x8c, 0xfe, 0x77, 0x33, 0xfa, 0x75, 0x85,
0xba, 0x96, 0x4d, 0x9d, 0xf6, 0x36, 0xd2, 0x58, 0x9e, 0x4d, 0x4c, 0xb9, 0x92, 0x64, 0x1c, 0xd8,
0x1c, 0xc5, 0xee, 0x5e, 0x4b, 0x6e, 0x6f, 0x1a, 0x85, 0x5f, 0x18, 0xf4, 0x67, 0xa8, 0x3f, 0xa1,
0xbe, 0x13, 0x15, 0x5d, 0x17, 0x78, 0x7f, 0x70, 0x81, 0x7f, 0xbb, 0x0b, 0x82, 0x8d, 0x0b, 0xfe,
0x87, 0x60, 0xa6, 0xe7, 0xd3, 0xc8, 0x6c, 0xec, 0x49, 0x0b, 0xc2, 0xaf, 0x0c, 0xfa, 0x47, 0xaa,
0xce, 0xaa, 0xb2, 0xb3, 0x8e, 0xd9, 0x94, 0x8f, 0x61, 0x6b, 0x3f, 0xcf, 0x93, 0x78, 0xae, 0xca,
0x38, 0x4b, 0xdd, 0x56, 0xdd, 0x14, 0xbd, 0x38, 0x46, 0x55, 0x54, 0x1a, 0x57, 0x98, 0x96, 0x6e,
0xbf, 0x6e, 0x8a, 0x3f, 0x84, 0xe0, 0xd0, 0x28, 0xe9, 0x1b, 0x3a, 0x77, 0x36, 0x74, 0x5a, 0x01,
0x4d, 0x91, 0x0e, 0xd9, 0xaf, 0xca, 0x6c, 0x91, 0x64, 0x6b, 0xb3, 0xf1, 0x50, 0xb6, 0x38, 0xfc,
0xc1, 0xc0, 0xff, 0x5b, 0x9a, 0x6e, 0x03, 0x8b, 0x9d, 0xa0, 0x2c, 0x6e, 0x15, 0x1e, 0x74, 0x14,
0x16, 0x30, 0xa8, 0xb5, 0x4a, 0xaf, 0xb0, 0x10, 0xc3, 0xb1, 0x37, 0xf1, 0x64, 0x03, 0x4d, 0x25,
0x51, 0x17, 0x98, 0x14, 0x62, 0x34, 0xf6, 0xc8, 0xfe, 0x0e, 0xb6, 0xae, 0x80, 0x8d, 0x2b, 0xc2,
0xef, 0x0c, 0x02, 0x33, 0x9c, 0x7e, 0x77, 0x98, 0xad, 0x56, 0x2a, 0xbd, 0x74, 0xd4, 0x37, 0x90,
0xf4, 0x88, 0x0e, 0x1c, 0xed, 0xbd, 0xe8, 0x80, 0xb0, 0x3c, 0x75, 0x24, 0xf7, 0xe4, 0x29, 0xb1,
0xf6, 0x5c, 0x67, 0x55, 0x7e, 0x50, 0x5b, 0x7a, 0x47, 0xb2, 0xc5, 0xfc, 0x1e, 0xf4, 0xdf, 0x2e,
0x51, 0xbb, 0x9b, 0x47, 0xd2, 0x21, 0x32, 0xc1, 0x11, 0x6d, 0xe5, 0xae, 0xb4, 0x80, 0x3f, 0x82,
0x40, 0xd2, 0x15, 0xe6, 0xd4, 0x1b, 0x04, 0x99, 0xb4, 0xb4, 0xd5, 0x70, 0xcf, 0x3d, 0xa3, 0x2e,
0xe7, 0x79, 0x8e, 0xda, 0x79, 0xd7, 0x02, 0xd3, 0x3b, 0x5b, 0xa3, 0x36, 0x2b, 0x7b, 0xd2, 0x82,
0xf0, 0x1d, 0x8c, 0xf6, 0x13, 0xd4, 0xa5, 0xac, 0x12, 0xfc, 0xc5, 0x62, 0x1c, 0xfc, 0x17, 0xb3,
0x57, 0x27, 0x8d, 0xe3, 0x29, 0xde, 0xf8, 0xd4, 0xeb, 0xf8, 0x94, 0x0e, 0x7a, 0xa9, 0x72, 0x35,
0x8d, 0x8c, 0xb0, 0x9e, 0x74, 0x28, 0x7c, 0x02, 0x3e, 0x7d, 0x0f, 0x9d, 0xce, 0xfe, 0xef, 0xbe,
0xa5, 0x8b, 0xbe, 0xf9, 0x97, 0xde, 0xfb, 0x19, 0x00, 0x00, 0xff, 0xff, 0x93, 0x68, 0x0f, 0xcf,
0xb7, 0x05, 0x00, 0x00,
}

View File

@ -1,16 +1,6 @@
syntax = "proto3";
package internal;
message Exploration {
int64 ID = 1; // ExplorationID is a unique ID for an Exploration.
string Name = 2; // User provided name of the Exploration.
int64 UserID = 3; // UserID is the owner of this Exploration.
string Data = 4; // Opaque blob of JSON data.
int64 CreatedAt = 5; // Time the exploration was first created.
int64 UpdatedAt = 6; // Latest time the exploration was updated.
bool Default = 7; // Flags an exploration as the default.
}
message Source {
int64 ID = 1; // ID is the unique ID of the source
string Name = 2; // Name is the user-defined name for the source
@ -18,9 +8,10 @@ message Source {
string Username = 4; // Username is the username to connect to the source
string Password = 5;
string URL = 6; // URL are the connections to the source
bool Default = 7; // Flags an exploration as the default.
bool Default = 7; // Flags a source as the default.
string Telegraf = 8; // Telegraf is the db telegraf is written to. By default it is "telegraf"
bool InsecureSkipVerify = 9; // InsecureSkipVerify accepts any certificate from the influx server
string MetaURL = 10; // MetaURL is the connection URL for the meta node.
}
message Dashboard {
@ -34,18 +25,19 @@ message DashboardCell {
int32 y = 2; // Y-coordinate of Cell in the Dashboard
int32 w = 3; // Width of Cell in the Dashboard
int32 h = 4; // Height of Cell in the Dashboard
repeated Query queries = 5; // Time-series data queries for Dashboard
repeated Query queries = 5; // Time-series data queries for Dashboard
string name = 6; // User-facing name for this Dashboard
string type = 7; // Dashboard visualization type
string ID = 8; // ID is the unique id of the dashboard cell. MIGRATED FIELD added in 1.2.0-beta6
}
message Server {
int64 ID = 1; // ID is the unique ID of the server
string Name = 2; // Name is the user-defined name for the server
string Username = 3; // Username is the username to connect to the server
string Password = 4;
string URL = 5; // URL is the path to the server
int64 SrcID = 6; // SrcID is the ID of the data source
int64 ID = 1; // ID is the unique ID of the server
string Name = 2; // Name is the user-defined name for the server
string Username = 3; // Username is the username to connect to the server
string Password = 4;
string URL = 5; // URL is the path to the server
int64 SrcID = 6; // SrcID is the ID of the data source
}
message Layout {
@ -92,6 +84,6 @@ message AlertRule {
}
message User {
uint64 ID = 1; // ID is the unique ID of this user
string Email = 2; // Email byte representation of the user
uint64 ID = 1; // ID is the unique ID of this user
string Name = 2; // Name is the user's login name
}

View File

@ -3,33 +3,11 @@ package internal_test
import (
"reflect"
"testing"
"time"
"github.com/influxdata/chronograf"
"github.com/influxdata/chronograf/bolt/internal"
)
// Ensure an exploration can be marshaled and unmarshaled.
func TestMarshalExploration(t *testing.T) {
v := chronograf.Exploration{
ID: 12,
Name: "Some Exploration",
UserID: 34,
Data: "{\"data\":\"something\"}",
CreatedAt: time.Now().UTC(),
UpdatedAt: time.Now().UTC(),
}
var vv chronograf.Exploration
if buf, err := internal.MarshalExploration(&v); err != nil {
t.Fatal(err)
} else if err := internal.UnmarshalExploration(buf, &vv); err != nil {
t.Fatal(err)
} else if !reflect.DeepEqual(v, vv) {
t.Fatalf("exploration protobuf copy error: got %#v, expected %#v", vv, v)
}
}
func TestMarshalSource(t *testing.T) {
v := chronograf.Source{
ID: 12,
@ -38,6 +16,7 @@ func TestMarshalSource(t *testing.T) {
Username: "docbrown",
Password: "1 point twenty-one g1g@w@tts",
URL: "http://twin-pines.mall.io:8086",
MetaURL: "http://twin-pines.meta.io:8086",
Default: true,
Telegraf: "telegraf",
}

View File

@ -11,8 +11,10 @@ import (
// Ensure LayoutStore implements chronograf.LayoutStore.
var _ chronograf.LayoutStore = &LayoutStore{}
// LayoutBucket is the bolt bucket layouts are stored in
var LayoutBucket = []byte("Layout")
// LayoutStore is the bolt implementation to store layouts
type LayoutStore struct {
client *Client
IDs chronograf.ID

View File

@ -11,8 +11,11 @@ import (
// Ensure ServersStore implements chronograf.ServersStore.
var _ chronograf.ServersStore = &ServersStore{}
// ServersBucket is the bolt bucket to store lists of servers
var ServersBucket = []byte("Servers")
// ServersStore is the bolt implementation to store servers in a store.
// Used to store servers that are associated in some way with a source
type ServersStore struct {
client *Client
}

View File

@ -14,7 +14,7 @@ func TestServerStore(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if err := c.Open(); err != nil {
if err := c.Open(context.TODO()); err != nil {
t.Fatal(err)
}
defer c.Close()

View File

@ -11,8 +11,10 @@ import (
// Ensure SourcesStore implements chronograf.SourcesStore.
var _ chronograf.SourcesStore = &SourcesStore{}
// SourcesBucket is the bolt bucket used to store source information
var SourcesBucket = []byte("Sources")
// SourcesStore is a bolt implementation to store time-series source information.
type SourcesStore struct {
client *Client
}
@ -202,23 +204,23 @@ func (s *SourcesStore) setRandomDefault(ctx context.Context, src chronograf.Sour
return err
} else if target.Default {
// Locate another source to be the new default
if srcs, err := s.all(ctx, tx); err != nil {
srcs, err := s.all(ctx, tx)
if err != nil {
return err
} else {
var other *chronograf.Source
for idx, _ := range srcs {
other = &srcs[idx]
// avoid selecting the source we're about to delete as the new default
if other.ID != target.ID {
break
}
}
var other *chronograf.Source
for idx := range srcs {
other = &srcs[idx]
// avoid selecting the source we're about to delete as the new default
if other.ID != target.ID {
break
}
}
// set the other to be the default
other.Default = true
if err := s.update(ctx, *other, tx); err != nil {
return err
}
// set the other to be the default
other.Default = true
if err := s.update(ctx, *other, tx); err != nil {
return err
}
}
return nil


@ -15,7 +15,7 @@ func TestSourceStore(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if err := c.Open(); err != nil {
if err := c.Open(context.TODO()); err != nil {
t.Fatal(err)
}
defer c.Close()


@ -11,31 +11,36 @@ import (
// Ensure UsersStore implements chronograf.UsersStore.
var _ chronograf.UsersStore = &UsersStore{}
var UsersBucket = []byte("Users")
// UsersBucket is used to store users local to chronograf
var UsersBucket = []byte("UsersV1")
// UsersStore uses bolt to store and retrieve users
type UsersStore struct {
client *Client
}
// FindByEmail searches the UsersStore for all users owned with the email
func (s *UsersStore) FindByEmail(ctx context.Context, email string) (*chronograf.User, error) {
var user chronograf.User
// get searches the UsersStore for user with name and returns the bolt representation
func (s *UsersStore) get(ctx context.Context, name string) (*internal.User, error) {
found := false
var user internal.User
err := s.client.db.View(func(tx *bolt.Tx) error {
err := tx.Bucket(UsersBucket).ForEach(func(k, v []byte) error {
var u chronograf.User
if err := internal.UnmarshalUser(v, &u); err != nil {
return err
} else if u.Email != email {
} else if u.Name != name {
return nil
}
user.Email = u.Email
user.ID = u.ID
found = true
if err := internal.UnmarshalUserPB(v, &user); err != nil {
return err
}
return nil
})
if err != nil {
return err
}
if user.ID == 0 {
if !found {
return chronograf.ErrUserNotFound
}
return nil
@ -47,7 +52,18 @@ func (s *UsersStore) FindByEmail(ctx context.Context, email string) (*chronograf
return &user, nil
}
// Create a new Users in the UsersStore.
// Get searches the UsersStore for user with name
func (s *UsersStore) Get(ctx context.Context, name string) (*chronograf.User, error) {
u, err := s.get(ctx, name)
if err != nil {
return nil, err
}
return &chronograf.User{
Name: u.Name,
}, nil
}
// Add a new User to the UsersStore.
func (s *UsersStore) Add(ctx context.Context, u *chronograf.User) (*chronograf.User, error) {
if err := s.client.db.Update(func(tx *bolt.Tx) error {
b := tx.Bucket(UsersBucket)
@ -55,11 +71,9 @@ func (s *UsersStore) Add(ctx context.Context, u *chronograf.User) (*chronograf.U
if err != nil {
return err
}
u.ID = chronograf.UserID(seq)
if v, err := internal.MarshalUser(u); err != nil {
return err
} else if err := b.Put(itob(int(u.ID)), v); err != nil {
} else if err := b.Put(u64tob(seq), v); err != nil {
return err
}
return nil
@ -71,9 +85,13 @@ func (s *UsersStore) Add(ctx context.Context, u *chronograf.User) (*chronograf.U
}
// Delete the users from the UsersStore
func (s *UsersStore) Delete(ctx context.Context, u *chronograf.User) error {
func (s *UsersStore) Delete(ctx context.Context, user *chronograf.User) error {
u, err := s.get(ctx, user.Name)
if err != nil {
return err
}
if err := s.client.db.Update(func(tx *bolt.Tx) error {
if err := tx.Bucket(UsersBucket).Delete(itob(int(u.ID))); err != nil {
if err := tx.Bucket(UsersBucket).Delete(u64tob(u.ID)); err != nil {
return err
}
return nil
@ -84,13 +102,39 @@ func (s *UsersStore) Delete(ctx context.Context, u *chronograf.User) error {
return nil
}
// Get retrieves a user by id.
func (s *UsersStore) Get(ctx context.Context, id chronograf.UserID) (*chronograf.User, error) {
var u chronograf.User
// Update a user
func (s *UsersStore) Update(ctx context.Context, usr *chronograf.User) error {
u, err := s.get(ctx, usr.Name)
if err != nil {
return err
}
if err := s.client.db.Update(func(tx *bolt.Tx) error {
u.Name = usr.Name
if v, err := internal.MarshalUserPB(u); err != nil {
return err
} else if err := tx.Bucket(UsersBucket).Put(u64tob(u.ID), v); err != nil {
return err
}
return nil
}); err != nil {
return err
}
return nil
}
// All returns all users
func (s *UsersStore) All(ctx context.Context) ([]chronograf.User, error) {
var users []chronograf.User
if err := s.client.db.View(func(tx *bolt.Tx) error {
if v := tx.Bucket(UsersBucket).Get(itob(int(id))); v == nil {
return chronograf.ErrUserNotFound
} else if err := internal.UnmarshalUser(v, &u); err != nil {
if err := tx.Bucket(UsersBucket).ForEach(func(k, v []byte) error {
var user chronograf.User
if err := internal.UnmarshalUser(v, &user); err != nil {
return err
}
users = append(users, user)
return nil
}); err != nil {
return err
}
return nil
@ -98,32 +142,5 @@ func (s *UsersStore) Get(ctx context.Context, id chronograf.UserID) (*chronograf
return nil, err
}
return &u, nil
}
// Update a user
func (s *UsersStore) Update(ctx context.Context, usr *chronograf.User) error {
if err := s.client.db.Update(func(tx *bolt.Tx) error {
// Retrieve an existing user with the same ID.
var u chronograf.User
b := tx.Bucket(UsersBucket)
if v := b.Get(itob(int(usr.ID))); v == nil {
return chronograf.ErrUserNotFound
} else if err := internal.UnmarshalUser(v, &u); err != nil {
return err
}
u.Email = usr.Email
if v, err := internal.MarshalUser(&u); err != nil {
return err
} else if err := b.Put(itob(int(u.ID)), v); err != nil {
return err
}
return nil
}); err != nil {
return err
}
return nil
return users, nil
}

bolt/users_test.go

@ -0,0 +1,257 @@
package bolt_test
import (
"context"
"reflect"
"testing"
"github.com/influxdata/chronograf"
)
func TestUsersStore_Get(t *testing.T) {
type args struct {
ctx context.Context
name string
}
tests := []struct {
name string
args args
want *chronograf.User
wantErr bool
}{
{
name: "User not found",
args: args{
ctx: context.Background(),
name: "unknown",
},
wantErr: true,
},
}
for _, tt := range tests {
client, err := NewTestClient()
if err != nil {
t.Fatal(err)
}
if err := client.Open(context.TODO()); err != nil {
t.Fatal(err)
}
defer client.Close()
s := client.UsersStore
got, err := s.Get(tt.args.ctx, tt.args.name)
if (err != nil) != tt.wantErr {
t.Errorf("%q. UsersStore.Get() error = %v, wantErr %v", tt.name, err, tt.wantErr)
continue
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("%q. UsersStore.Get() = %v, want %v", tt.name, got, tt.want)
}
}
}
func TestUsersStore_Add(t *testing.T) {
type args struct {
ctx context.Context
u *chronograf.User
}
tests := []struct {
name string
args args
want *chronograf.User
wantErr bool
}{
{
name: "Add new user",
args: args{
ctx: context.Background(),
u: &chronograf.User{
Name: "docbrown",
},
},
want: &chronograf.User{
Name: "docbrown",
},
},
}
for _, tt := range tests {
client, err := NewTestClient()
if err != nil {
t.Fatal(err)
}
if err := client.Open(context.TODO()); err != nil {
t.Fatal(err)
}
defer client.Close()
s := client.UsersStore
got, err := s.Add(tt.args.ctx, tt.args.u)
if (err != nil) != tt.wantErr {
t.Errorf("%q. UsersStore.Add() error = %v, wantErr %v", tt.name, err, tt.wantErr)
continue
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("%q. UsersStore.Add() = %v, want %v", tt.name, got, tt.want)
}
got, _ = s.Get(tt.args.ctx, got.Name)
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("%q. UsersStore.Add() = %v, want %v", tt.name, got, tt.want)
}
}
}
func TestUsersStore_Delete(t *testing.T) {
type args struct {
ctx context.Context
user *chronograf.User
}
tests := []struct {
name string
args args
addFirst bool
wantErr bool
}{
{
name: "No such user",
args: args{
ctx: context.Background(),
user: &chronograf.User{
Name: "noone",
},
},
wantErr: true,
},
{
name: "Delete new user",
args: args{
ctx: context.Background(),
user: &chronograf.User{
Name: "noone",
},
},
addFirst: true,
},
}
for _, tt := range tests {
client, err := NewTestClient()
if err != nil {
t.Fatal(err)
}
if err := client.Open(context.TODO()); err != nil {
t.Fatal(err)
}
defer client.Close()
s := client.UsersStore
if tt.addFirst {
s.Add(tt.args.ctx, tt.args.user)
}
if err := s.Delete(tt.args.ctx, tt.args.user); (err != nil) != tt.wantErr {
t.Errorf("%q. UsersStore.Delete() error = %v, wantErr %v", tt.name, err, tt.wantErr)
}
}
}
func TestUsersStore_Update(t *testing.T) {
type args struct {
ctx context.Context
usr *chronograf.User
}
tests := []struct {
name string
args args
addFirst bool
wantErr bool
}{
{
name: "No such user",
args: args{
ctx: context.Background(),
usr: &chronograf.User{
Name: "noone",
},
},
wantErr: true,
},
{
name: "Update new user",
args: args{
ctx: context.Background(),
usr: &chronograf.User{
Name: "noone",
},
},
addFirst: true,
},
}
for _, tt := range tests {
client, err := NewTestClient()
if err != nil {
t.Fatal(err)
}
if err := client.Open(context.TODO()); err != nil {
t.Fatal(err)
}
defer client.Close()
s := client.UsersStore
if tt.addFirst {
s.Add(tt.args.ctx, tt.args.usr)
}
if err := s.Update(tt.args.ctx, tt.args.usr); (err != nil) != tt.wantErr {
t.Errorf("%q. UsersStore.Update() error = %v, wantErr %v", tt.name, err, tt.wantErr)
}
}
}
func TestUsersStore_All(t *testing.T) {
tests := []struct {
name string
ctx context.Context
want []chronograf.User
addFirst bool
wantErr bool
}{
{
name: "No users",
},
{
name: "Update new user",
want: []chronograf.User{
{
Name: "howdy",
},
{
Name: "doody",
},
},
addFirst: true,
},
}
for _, tt := range tests {
client, err := NewTestClient()
if err != nil {
t.Fatal(err)
}
if err := client.Open(context.TODO()); err != nil {
t.Fatal(err)
}
defer client.Close()
s := client.UsersStore
if tt.addFirst {
for _, u := range tt.want {
s.Add(tt.ctx, &u)
}
}
got, err := s.All(tt.ctx)
if (err != nil) != tt.wantErr {
t.Errorf("%q. UsersStore.All() error = %v, wantErr %v", tt.name, err, tt.wantErr)
continue
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("%q. UsersStore.All() = %v, want %v", tt.name, got, tt.want)
}
}
}


@ -10,3 +10,10 @@ func itob(v int) []byte {
binary.BigEndian.PutUint64(b, uint64(v))
return b
}
// u64tob returns an 8-byte big endian representation of v.
func u64tob(v uint64) []byte {
b := make([]byte, 8)
binary.BigEndian.PutUint64(b, v)
return b
}
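// Example (illustrative, values invented): u64tob pairs naturally with bolt's
// Bucket.NextSequence, which returns a uint64, so a store can key records as
//
//	seq, _ := b.NextSequence()   // e.g. 42
//	key := u64tob(seq)           // []byte{0, 0, 0, 0, 0, 0, 0, 42}
//	err := b.Put(key, value)     // fixed-width, byte-sortable key
//
// which is how the UsersStore above keys the records it adds.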


@ -14,6 +14,7 @@
"queries": [
{
"query": "SELECT non_negative_derivative(max(\"BytesPerSec\")) AS \"bytes_per_sec\" FROM apache",
"label": "bytes/s",
"groupbys": [
"\"server\""
],
@ -31,6 +32,7 @@
"queries": [
{
"query": "SELECT non_negative_derivative(max(\"ReqPerSec\")) AS \"req_per_sec\" FROM apache",
"label": "requests/s",
"groupbys": [
"\"server\""
],
@ -48,6 +50,7 @@
"queries": [
{
"query": "SELECT non_negative_derivative(max(\"TotalAccesses\")) AS \"tot_access\" FROM apache",
"label": "accesses/s",
"groupbys": [
"\"server\""
],


@ -11,6 +11,7 @@ import (
"github.com/influxdata/chronograf"
)
// AppExt is the file extension searched for in the directory for layout files
const AppExt = ".json"
// Apps are canned JSON layouts. Implements LayoutStore.
@ -25,6 +26,7 @@ type Apps struct {
Logger chronograf.Logger
}
// NewApps constructs a layout store wrapping a file system directory
func NewApps(dir string, ids chronograf.ID, logger chronograf.Logger) chronograf.LayoutStore {
return &Apps{
Dir: dir,
@ -63,14 +65,14 @@ func createLayout(file string, layout chronograf.Layout) error {
defer h.Close()
if octets, err := json.MarshalIndent(layout, " ", " "); err != nil {
return chronograf.ErrLayoutInvalid
} else {
if _, err := h.Write(octets); err != nil {
return err
}
} else if _, err := h.Write(octets); err != nil {
return err
}
return nil
}
// All returns all layouts from the directory
func (a *Apps) All(ctx context.Context) ([]chronograf.Layout, error) {
files, err := a.ReadDir(a.Dir)
if err != nil {
@ -91,6 +93,7 @@ func (a *Apps) All(ctx context.Context) ([]chronograf.Layout, error) {
return layouts, nil
}
// Add creates a new layout within the directory
func (a *Apps) Add(ctx context.Context, layout chronograf.Layout) (chronograf.Layout, error) {
var err error
layout.ID, err = a.IDs.Generate()
@ -118,6 +121,7 @@ func (a *Apps) Add(ctx context.Context, layout chronograf.Layout) (chronograf.La
return layout, nil
}
// Delete removes a layout file from the directory
func (a *Apps) Delete(ctx context.Context, layout chronograf.Layout) error {
_, file, err := a.idToFile(layout.ID)
if err != nil {
@ -134,6 +138,7 @@ func (a *Apps) Delete(ctx context.Context, layout chronograf.Layout) error {
return nil
}
// Get returns an app file from the layout directory
func (a *Apps) Get(ctx context.Context, ID string) (chronograf.Layout, error) {
l, file, err := a.idToFile(ID)
if err != nil {
@ -157,6 +162,7 @@ func (a *Apps) Get(ctx context.Context, ID string) (chronograf.Layout, error) {
return l, nil
}
// Update replaces a layout from the file system directory
func (a *Apps) Update(ctx context.Context, layout chronograf.Layout) error {
l, _, err := a.idToFile(layout.ID)
if err != nil {


@ -10,6 +10,7 @@ import (
//go:generate go-bindata -o bin_gen.go -ignore README|apps|.sh|go -pkg canned .
// BinLayoutStore represents a layout store using data generated by go-bindata
type BinLayoutStore struct {
Logger chronograf.Logger
}


@ -14,6 +14,7 @@
"queries": [
{
"query": "SELECT count(\"check_id\") as \"Number Critical\" FROM consul_health_checks",
"label": "count",
"groupbys": [
"\"service_name\""
],
@ -33,6 +34,7 @@
"queries": [
{
"query": "SELECT count(\"check_id\") as \"Number Warning\" FROM consul_health_checks",
"label": "count",
"groupbys": [
"\"service_name\""
],

canned/consul_agent.json

@ -0,0 +1,59 @@
{
"id": "f3bec493-0bc1-49d5-a40a-a09bd5cfb700",
"measurement": "consul_consul_fsm_register",
"app": "consul_telemetry",
"autoflow": true,
"cells": [
{
"x": 0,
"y": 0,
"w": 4,
"h": 4,
"i": "9e14639d-b8d9-4245-8c45-862ed4383701",
"name": "Consul Agent Number of Go Routines",
"queries": [
{
"query": "SELECT max(\"value\") AS \"Go Routines\" FROM \"consul_ip-172-31-6-247_runtime_num_goroutines\"",
"groupbys": [
],
"wheres": [
]
}
]
},
{
"x": 0,
"y": 0,
"w": 4,
"h": 4,
"i": "9e14639d-b8d9-4245-8c45-862ed4383702",
"name": "Consul Agent Runtime Alloc Bytes",
"queries": [
{
"query": "SELECT max(\"value\") AS \"Runtime Alloc Bytes\" FROM \"consul_ip-172-31-6-247_runtime_alloc_bytes\"",
"groupbys": [
],
"wheres": [
]
}
]
},
{
"x": 0,
"y": 0,
"w": 4,
"h": 4,
"i": "9e14639d-b8d9-4245-8c45-862ed4383703",
"name": "Consul Agent Heap Objects",
"queries": [
{
"query": "SELECT max(\"value\") AS \"Heap Objects\" FROM \"consul_ip-172-31-6-247_runtime_heap_objects\"",
"groupbys": [
],
"wheres": [
]
}
]
}
]
}


@ -0,0 +1,24 @@
{
"id": "350b780c-7d32-4b29-ac49-0d4e2c092943",
"measurement": "consul_memberlist_msg_alive",
"app": "consul_telemetry",
"autoflow": true,
"cells": [
{
"x": 0,
"y": 0,
"w": 4,
"h": 4,
"i": "bd62186a-f475-478b-bf02-8c4ab07eccd1",
"name": "Consul Number of Agents",
"queries": [
{
"query": "SELECT min(\"value\") AS \"num_agents\" FROM \"consul_memberlist_msg_alive\"",
"label": "count",
"groupbys": [],
"wheres": []
}
]
}
]
}


@ -0,0 +1,24 @@
{
"id": "b15aaf24-701a-4d9b-920c-9a407e91da71",
"measurement": "consul_raft_state_candidate",
"app": "consul_telemetry",
"autoflow": true,
"cells": [
{
"x": 0,
"y": 0,
"w": 4,
"h": 4,
"i": "5b2bddce-badb-4594-91fb-0486f62266e5",
"name": "Consul Leadership Election",
"queries": [
{
"query": "SELECT max(\"value\") AS \"max_value\" FROM \"consul_raft_state_candidate\"",
"label": "count",
"groupbys": [],
"wheres": []
}
]
}
]
}

canned/consul_http.json

@ -0,0 +1,24 @@
{
"id": "26809869-8df3-49ad-b2f0-b1e1c72f67b0",
"measurement": "consul_consul_http_GET_v1_health_state__",
"app": "consul_telemetry",
"autoflow": true,
"cells": [
{
"x": 0,
"y": 0,
"w": 4,
"h": 4,
"i": "dfb4c50f-547e-484a-944b-d6374ba2b4c0",
"name": "Consul HTTP Request Time (ms)",
"queries": [
{
"query": "SELECT max(\"upper\") AS \"GET_health_state\" FROM \"consul_consul_http_GET_v1_health_state__\"",
"label": "ms",
"groupbys": [],
"wheres": []
}
]
}
]
}


@ -0,0 +1,24 @@
{
"id": "34611ae0-7c3e-4697-8db0-371b16bef345",
"measurement": "consul_raft_state_leader",
"app": "consul_telemetry",
"autoflow": true,
"cells": [
{
"x": 0,
"y": 0,
"w": 4,
"h": 4,
"i": "ef8eeeb5-b408-46d6-8cfc-20c00c9d7239",
"name": "Consul Leadership Change",
"queries": [
{
"query": "SELECT max(\"value\") as \"change\" FROM \"consul_raft_state_leader\"",
"label": "count",
"groupbys": [],
"wheres": []
}
]
}
]
}


@ -0,0 +1,24 @@
{
"id": "ef4b596c-77de-41c5-bb5b-d5c9a69fa633",
"measurement": "consul_serf_events",
"app": "consul_telemetry",
"autoflow": true,
"cells": [
{
"x": 0,
"y": 0,
"w": 4,
"h": 4,
"i": "59df3d73-5fac-48cb-84f1-dbe9a1bb886c",
"name": "Consul Number of serf events",
"queries": [
{
"query": "SELECT max(\"value\") AS \"serf_events\" FROM \"consul_serf_events\"",
"label": "count",
"groupbys": [],
"wheres": []
}
]
}
]
}


@ -14,6 +14,7 @@
"queries": [
{
"query": "SELECT mean(\"usage_user\") AS \"usage_user\" FROM \"cpu\"",
"label": "% CPU time",
"groupbys": [],
"wheres": []
}


@ -14,6 +14,7 @@
"queries": [
{
"query": "SELECT mean(\"used_percent\") AS \"used_percent\" FROM disk",
"label": "% used",
"groupbys": [
"\"path\""
],


@ -1,7 +1,7 @@
{
"id": "0e980b97-c162-487b-a815-3f955df6243f",
"measurement": "docker",
"app": "docker",
"measurement": "docker",
"autoflow": true,
"cells": [
{
@ -10,16 +10,17 @@
"w": 4,
"h": 4,
"i": "4c79cefb-5152-410c-9b88-74f9bff7ef22",
"name": "Docker - Container CPU",
"name": "Docker - Container CPU %",
"queries": [
{
"query": "SELECT mean(\"usage_percent\") AS \"usage_percent\" FROM \"docker_container_cpu\"",
"label": "% CPU time",
"groupbys": [
"\"container_name\""
],
"wheres": []
]
}
]
],
"type": "line-stacked"
},
{
"x": 0,
@ -27,16 +28,82 @@
"w": 4,
"h": 4,
"i": "4c79cefb-5152-410c-9b88-74f9bff7ef00",
"name": "Docker - Container Memory",
"name": "Docker - Container Memory (MB)",
"queries": [
{
"query": "SELECT mean(\"usage\") AS \"usage\" FROM \"docker_container_mem\"",
"query": "SELECT mean(\"usage\") / 1048576 AS \"usage\" FROM \"docker_container_mem\"",
"label": "MB",
"groupbys": [
"\"container_name\""
],
"wheres": []
]
}
]
],
"type": "line-stepplot"
},
{
"x": 0,
"y": 0,
"w": 4,
"h": 4,
"i": "4c79cefb-5152-410c-9b88-74f9bff7ef01",
"name": "Docker - Containers",
"queries": [
{
"query": "SELECT max(\"n_containers\") AS \"max_n_containers\" FROM \"docker\"",
"label": "count",
"groupbys": [
"\"host\""
]
}
],
"type": "single-stat"
},
{
"x": 0,
"y": 0,
"w": 4,
"h": 4,
"i": "4c79cefb-5152-410c-9b88-74f9bff7ef02",
"name": "Docker - Images",
"queries": [
{
"query": "SELECT max(\"n_images\") AS \"max_n_images\" FROM \"docker\"",
"groupbys": [
"\"host\""
]
}
],
"type": "single-stat"
},
{
"x": 0,
"y": 0,
"w": 4,
"h": 4,
"i": "4c79cefb-5152-410c-9b88-74f9bff7ef03",
"name": "Docker - Container State",
"queries": [
{
"query": "SELECT max(\"n_containers_running\") AS \"max_n_containers_running\" FROM \"docker\"",
"label": "count",
"groupbys": [
"\"host\""
]
},
{
"query": "SELECT max(\"n_containers_stopped\") AS \"max_n_containers_stopped\" FROM \"docker\"",
"groupbys": [
"\"host\""
]
},
{
"query": "SELECT max(\"n_containers_paused\") AS \"max_n_containers_paused\" FROM \"docker\"",
"groupbys": [
"\"host\""
]
}
],
"type": ""
}
]
}

canned/docker_blkio.json

@ -0,0 +1,46 @@
{
"id": "0e980b97-c162-487b-a815-3f955df62440",
"measurement": "docker_container_blkio",
"app": "docker",
"autoflow": true,
"cells": [
{
"x": 0,
"y": 0,
"w": 4,
"h": 4,
"i": "4c79cefb-5152-410c-9b88-74f9bff7ef50",
"name": "Docker - Container Block IO",
"queries": [
{
"query": "SELECT max(\"io_serviced_recursive_read\") AS \"max_io_read\" FROM \"docker_container_blkio\"",
"groupbys": [
"\"container_name\""
],
"wheres": []
},
{
"query": "SELECT max(\"io_serviced_recursive_sync\") AS \"max_io_sync\" FROM \"docker_container_blkio\"",
"groupbys": [
"\"container_name\""
],
"wheres": []
},
{
"query": "SELECT max(\"io_serviced_recursive_write\") AS \"max_io_write\" FROM \"docker_container_blkio\"",
"groupbys": [
"\"container_name\""
],
"wheres": []
},
{
"query": "SELECT max(\"io_serviced_recursive_total\") AS \"max_io_total\" FROM \"docker_container_blkio\"",
"groupbys": [
"\"container_name\""
],
"wheres": []
}
]
}
]
}

canned/docker_net.json

@ -0,0 +1,32 @@
{
"id": "0e980b97-c162-487b-a815-3f955df62430",
"measurement": "docker_container_net",
"app": "docker",
"autoflow": true,
"cells": [
{
"x": 0,
"y": 0,
"w": 4,
"h": 4,
"i": "4c79cefb-5152-410c-9b88-74f9bff7ef23",
"name": "Docker - Container Network",
"queries": [
{
"query": "SELECT derivative(mean(\"tx_bytes\"), 10s) AS \"net_tx_bytes\" FROM \"docker_container_net\"",
"groupbys": [
"\"container_name\""
],
"wheres": []
},
{
"query": "SELECT derivative(mean(\"rx_bytes\"), 10s) AS \"net_rx_bytes\" FROM \"docker_container_net\"",
"groupbys": [
"\"container_name\""
],
"wheres": []
}
]
}
]
}


@ -13,7 +13,8 @@
"name": "InfluxDB - Write HTTP Requests",
"queries": [
{
"query": "SELECT non_negative_derivative(max(\"writeReq\"), 1s) AS \"http_requests\" FROM \"influxdb_httpd\"",
"query": "SELECT non_negative_derivative(max(\"writeReq\")) AS \"http_requests\" FROM \"influxdb_httpd\"",
"label": "count/s",
"groupbys": [],
"wheres": []
}
@ -28,13 +29,15 @@
"name": "InfluxDB - Query Requests",
"queries": [
{
"query": "SELECT non_negative_derivative(max(\"queryReq\"), 1s) AS \"query_requests\" FROM \"influxdb_httpd\"",
"query": "SELECT non_negative_derivative(max(\"queryReq\")) AS \"query_requests\" FROM \"influxdb_httpd\"",
"label": "count/s",
"groupbys": [],
"wheres": []
}
]
},
{
"type": "line-stepplot",
"x": 0,
"y": 0,
"w": 4,
@ -43,7 +46,8 @@
"name": "InfluxDB - Client Failures",
"queries": [
{
"query": "SELECT non_negative_derivative(max(\"clientError\"), 1s) AS \"client_errors\" FROM \"influxdb_httpd\"",
"query": "SELECT non_negative_derivative(max(\"clientError\")) AS \"client_errors\" FROM \"influxdb_httpd\"",
"label": "count/s",
"groupbys": [],
"wheres": []
},


@ -13,7 +13,8 @@
"name": "InfluxDB - Write Points",
"queries": [
{
"query": "SELECT non_negative_derivative(max(\"pointReq\"), 1s) AS \"points_written\" FROM \"influxdb_write\"",
"query": "SELECT non_negative_derivative(max(\"pointReq\")) AS \"points_written\" FROM \"influxdb_write\"",
"label": "points/s",
"groupbys": [],
"wheres": []
}
@ -28,12 +29,13 @@
"name": "InfluxDB - Write Errors",
"queries": [
{
"query": "SELECT non_negative_derivative(max(\"writeError\"), 1s) AS \"shard_write_error\" FROM \"influxdb_write\"",
"query": "SELECT non_negative_derivative(max(\"writeError\")) AS \"shard_write_error\" FROM \"influxdb_write\"",
"label": "errors/s",
"groupbys": [],
"wheres": []
},
{
"query": "SELECT non_negative_derivative(max(\"serveError\"), 1s) AS \"http_error\" FROM \"influxdb_httpd\"",
"query": "SELECT non_negative_derivative(max(\"serveError\")) AS \"http_error\" FROM \"influxdb_httpd\"",
"groupbys": [],
"wheres": []
}


@ -10,10 +10,11 @@
"w": 4,
"h": 4,
"i": "e6e5063c-43d5-409b-a0ab-68da51ed3f28",
"name": "System - Memory Bytes Used",
"name": "System - Memory Gigabytes Used",
"queries": [
{
"query": "SELECT mean(\"used\") AS \"used\", mean(\"available\") AS \"available\" FROM \"mem\"",
"query": "SELECT mean(\"used\") / 1073741824 AS \"used\", mean(\"available\") / 1073741824 AS \"available\" FROM \"mem\"",
"label": "GB",
"groupbys": [],
"wheres": []
}


@ -14,6 +14,7 @@
"queries": [
{
"query": "SELECT max(\"curr_connections\") AS \"current_connections\" FROM memcached",
"label": "count",
"groupbys": [],
"wheres": []
}
@ -28,7 +29,8 @@
"name": "Memcached - Get Hits/Second",
"queries": [
{
"query": "SELECT non_negative_derivative(max(\"get_hits\"), 1s) AS \"get_hits\" FROM memcached",
"query": "SELECT non_negative_derivative(max(\"get_hits\")) AS \"get_hits\" FROM memcached",
"label": "hits/s",
"groupbys": [],
"wheres": []
}
@ -43,7 +45,8 @@
"name": "Memcached - Get Misses/Second",
"queries": [
{
"query": "SELECT non_negative_derivative(max(\"get_misses\"), 1s) AS \"get_misses\" FROM memcached",
"query": "SELECT non_negative_derivative(max(\"get_misses\")) AS \"get_misses\" FROM memcached",
"label": "misses/s",
"groupbys": [],
"wheres": []
}
@ -58,7 +61,8 @@
"name": "Memcached - Delete Hits/Second",
"queries": [
{
"query": "SELECT non_negative_derivative(max(\"delete_hits\"), 1s) AS \"delete_hits\" FROM memcached",
"query": "SELECT non_negative_derivative(max(\"delete_hits\")) AS \"delete_hits\" FROM memcached",
"label": "deletes/s",
"groupbys": [],
"wheres": []
}
@ -73,7 +77,8 @@
"name": "Memcached - Delete Misses/Second",
"queries": [
{
"query": "SELECT non_negative_derivative(max(\"delete_misses\"), 1s) AS \"delete_misses\" FROM memcached",
"query": "SELECT non_negative_derivative(max(\"delete_misses\")) AS \"delete_misses\" FROM memcached",
"label": "delete misses/s",
"groupbys": [],
"wheres": []
}
@ -88,7 +93,8 @@
"name": "Memcached - Incr Hits/Second",
"queries": [
{
"query": "SELECT non_negative_derivative(max(\"incr_hits\"), 1s) AS \"incr_hits\" FROM memcached",
"query": "SELECT non_negative_derivative(max(\"incr_hits\")) AS \"incr_hits\" FROM memcached",
"label": "incr hits/s",
"groupbys": [],
"wheres": []
}
@ -103,7 +109,8 @@
"name": "Memcached - Incr Misses/Second",
"queries": [
{
"query": "SELECT non_negative_derivative(max(\"incr_misses\"), 1s) AS \"incr_misses\" FROM memcached",
"query": "SELECT non_negative_derivative(max(\"incr_misses\")) AS \"incr_misses\" FROM memcached",
"label": "incr misses/s",
"groupbys": [],
"wheres": []
}
@ -119,6 +126,7 @@
"queries": [
{
"query": "SELECT max(\"curr_items\") AS \"current_items\" FROM memcached",
"label": "count",
"groupbys": [],
"wheres": []
}
@ -134,6 +142,7 @@
"queries": [
{
"query": "SELECT max(\"total_items\") AS \"total_items\" FROM memcached",
"label": "count",
"groupbys": [],
"wheres": []
}
@ -149,6 +158,7 @@
"queries": [
{
"query": "SELECT max(\"bytes\") AS \"bytes\" FROM memcached",
"label": "bytes",
"groupbys": [],
"wheres": []
}
@ -163,7 +173,8 @@
"name": "Memcached - Bytes Read/Sec",
"queries": [
{
"query": "SELECT non_negative_derivative(max(\"bytes_read\"), 1s) AS \"bytes_read\" FROM memcached",
"query": "SELECT non_negative_derivative(max(\"bytes_read\")) AS \"bytes_read\" FROM memcached",
"label": "bytes/s",
"groupbys": [],
"wheres": []
}
@ -178,7 +189,8 @@
"name": "Memcached - Bytes Written/Sec",
"queries": [
{
"query": "SELECT non_negative_derivative(max(\"bytes_written\"), 1s) AS \"bytes_written\" FROM memcached",
"query": "SELECT non_negative_derivative(max(\"bytes_written\")) AS \"bytes_written\" FROM memcached",
"label": "bytes/s",
"groupbys": [],
"wheres": []
}
@ -194,6 +206,7 @@
"queries": [
{
"query": "SELECT non_negative_derivative(max(\"evictions\"), 10s) AS \"evictions\" FROM memcached",
"label": "evictions / 10s",
"groupbys": [],
"wheres": []
}

canned/mesos.json

@ -0,0 +1,131 @@
{
"id": "0fa47984-825b-46f1-9ca5-0366e3220000",
"measurement": "mesos",
"app": "mesos",
"autoflow": true,
"cells": [
{
"x": 0,
"y": 0,
"w": 4,
"h": 4,
"i": "0fa47984-825b-46f1-9ca5-0366e3220007",
"name": "Mesos Active Slaves",
"queries": [
{
"query": "SELECT max(\"master/slaves_active\") AS \"Active Slaves\" FROM \"mesos\"",
"label": "count",
"groupbys": [],
"wheres": []
}
]
},
{
"x": 0,
"y": 0,
"w": 4,
"h": 4,
"i": "0fa47984-825b-46f1-9ca5-0366e3220001",
"name": "Mesos Tasks Active",
"queries": [
{
"query": "SELECT max(\"master/tasks_running\") AS \"num tasks\" FROM \"mesos\"",
"label": "count",
"groupbys": [],
"wheres": []
}
]
},
{
"x": 0,
"y": 0,
"w": 4,
"h": 4,
"i": "0fa47984-825b-46f1-9ca5-0366e3220004",
"name": "Mesos Tasks",
"queries": [
{
"query": "SELECT non_negative_derivative(max(\"master/tasks_finished\"), 60s) AS \"tasks finished\" FROM \"mesos\"",
"label": "count",
"groupbys": [],
"wheres": []
},
{
"query": "SELECT non_negative_derivative(max(\"master/tasks_failed\"), 60s) AS \"tasks failed\" FROM \"mesos\"",
"groupbys": [],
"wheres": []
},
{
"query": "SELECT non_negative_derivative(max(\"master/tasks_killed\"), 60s) AS \"tasks killed\" FROM \"mesos\"",
"groupbys": [],
"wheres": []
}
]
},
{
"x": 0,
"y": 0,
"w": 4,
"h": 4,
"i": "0fa47984-825b-46f1-9ca5-0366e3220005",
"name": "Mesos Outstanding offers",
"queries": [
{
"query": "SELECT max(\"master/outstanding_offers\") AS \"Outstanding Offers\" FROM \"mesos\"",
"label": "count",
"groupbys": [],
"wheres": []
}
]
},
{
"x": 0,
"y": 0,
"w": 4,
"h": 4,
"i": "0fa47984-825b-46f1-9ca5-0366e3220002",
"name": "Mesos Available/Used CPUs",
"queries": [
{
"query": "SELECT max(\"master/cpus_total\") AS \"cpu total\", max(\"master/cpus_used\") AS \"cpu used\" FROM \"mesos\"",
"label": "count",
"groupbys": [],
"wheres": []
}
]
},
{
"x": 0,
"y": 0,
"w": 4,
"h": 4,
"i": "0fa47984-825b-46f1-9ca5-0366e3220003",
"name": "Mesos Available/Used Memory",
"queries": [
{
"query": "SELECT max(\"master/mem_total\") AS \"memory total\", max(\"master/mem_used\") AS \"memory used\" FROM \"mesos\"",
"label": "MB",
"groupbys": [],
"wheres": []
}
]
},
{
"x": 0,
"y": 0,
"w": 4,
"h": 4,
"i": "0fa47984-825b-46f1-9ca5-0366e3220008",
"name": "Mesos Master Uptime",
"type": "single-stat",
"queries": [
{
"query": "SELECT max(\"master/uptime_secs\") AS \"uptime\" FROM \"mesos\"",
"label": "Seconds",
"groupbys": [],
"wheres": []
}
]
}
]
}


@ -14,6 +14,7 @@
"queries": [
{
"query": "SELECT mean(queries_per_sec) AS queries_per_second, mean(getmores_per_sec) AS getmores_per_second FROM mongodb",
"label": "reads/s",
"groupbys": [],
"wheres": []
}
@ -29,6 +30,7 @@
"queries": [
{
"query": "SELECT mean(inserts_per_sec) AS inserts_per_second, mean(updates_per_sec) AS updates_per_second, mean(deletes_per_sec) AS deletes_per_second FROM mongodb",
"label": "writes/s",
"groupbys": [],
"wheres": []
}
@ -44,6 +46,7 @@
"queries": [
{
"query": "SELECT mean(open_connections) AS open_connections FROM mongodb",
"label": "count",
"groupbys": [],
"wheres": []
}
@ -59,6 +62,7 @@
"queries": [
{
"query": "SELECT max(queued_reads) AS queued_reads, max(queued_writes) as queued_writes FROM mongodb",
"label": "count",
"groupbys": [],
"wheres": []
}
@ -74,6 +78,7 @@
"queries": [
{
"query": "SELECT mean(net_in_bytes) AS net_in_bytes, mean(net_out_bytes) as net_out_bytes FROM mongodb",
"label": "bytes/s",
"groupbys": [],
"wheres": []
}
@ -89,6 +94,7 @@
"queries": [
{
"query": "SELECT mean(page_faults_per_sec) AS page_faults_per_second FROM mongodb",
"label": "faults/s",
"groupbys": [],
"wheres": []
}
@ -104,6 +110,7 @@
"queries": [
{
"query": "SELECT mean(vsize_megabytes) AS virtual_memory_megabytes, mean(resident_megabytes) as resident_memory_megabytes FROM mongodb",
"label": "MB",
"groupbys": [],
"wheres": []
}


@ -2,22 +2,22 @@ package chronograf
import (
"context"
"io"
"net/http"
"time"
)
// General errors.
const (
ErrUpstreamTimeout = Error("request to backend timed out")
ErrExplorationNotFound = Error("exploration not found")
ErrSourceNotFound = Error("source not found")
ErrServerNotFound = Error("server not found")
ErrLayoutNotFound = Error("layout not found")
ErrDashboardNotFound = Error("dashboard not found")
ErrUserNotFound = Error("user not found")
ErrLayoutInvalid = Error("layout is invalid")
ErrAlertNotFound = Error("alert not found")
ErrAuthentication = Error("user not authenticated")
ErrUpstreamTimeout = Error("request to backend timed out")
ErrSourceNotFound = Error("source not found")
ErrServerNotFound = Error("server not found")
ErrLayoutNotFound = Error("layout not found")
ErrDashboardNotFound = Error("dashboard not found")
ErrUserNotFound = Error("user not found")
ErrLayoutInvalid = Error("layout is invalid")
ErrAlertNotFound = Error("alert not found")
ErrAuthentication = Error("user not authenticated")
ErrUninitialized = Error("client uninitialized. Call Open() method")
)
// Error is a domain error encountered while processing chronograf requests
@ -33,12 +33,26 @@ func (e Error) Error() string {
type Logger interface {
Debug(...interface{})
Info(...interface{})
Warn(...interface{})
Error(...interface{})
Fatal(...interface{})
Panic(...interface{})
WithField(string, interface{}) Logger
// Logger can be transformed into an io.Writer.
// That writer is the end of an io.Pipe and it is your responsibility to close it.
Writer() *io.PipeWriter
}
// Router is an abstracted Router based on the API provided by the
// julienschmidt/httprouter package.
type Router interface {
http.Handler
GET(string, http.HandlerFunc)
PATCH(string, http.HandlerFunc)
POST(string, http.HandlerFunc)
DELETE(string, http.HandlerFunc)
PUT(string, http.HandlerFunc)
Handler(string, string, http.Handler)
}
// Assets returns a handler to serve the website.
@ -46,12 +60,61 @@ type Assets interface {
Handler() http.Handler
}
// Supported time-series databases
const (
// InfluxDB is the open-source time-series database
InfluxDB = "influx"
// InfluxEnterprise is the clustered HA time-series database
InfluxEnterprise = "influx-enterprise"
// InfluxRelay is the basic HA layer over InfluxDB
InfluxRelay = "influx-relay"
)
// TSDBStatus represents the current status of a time series database
type TSDBStatus interface {
// Connect will connect to the time series using the information in `Source`.
Connect(ctx context.Context, src *Source) error
// Ping returns version and TSDB type of time series database if reachable.
Ping(context.Context) error
// Version returns the version of the TSDB database
Version(context.Context) (string, error)
// Type returns the type of the TSDB database
Type(context.Context) (string, error)
}
// TimeSeries represents a queryable time series database.
type TimeSeries interface {
// Query retrieves time series data from the database.
Query(context.Context, Query) (Response, error)
// Connect will connect to the time series using the information in `Source`.
Connect(context.Context, *Source) error
// UsersStore represents the user accounts within the TimeSeries database
Users(context.Context) UsersStore
// Permissions returns all valid permission names in this database
Permissions(context.Context) Permissions
// Roles represents the roles associated with this TimeSeriesDatabase
Roles(context.Context) (RolesStore, error)
}
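// A minimal call-pattern sketch (illustrative only, not part of the interface):
//
//	var ts TimeSeries // e.g. the influx client implementation
//	if err := ts.Connect(ctx, &src); err != nil {
//		return err
//	}
//	resp, err := ts.Query(ctx, q) // q is a chronograf.Query
//
// Users, Permissions and Roles then expose the source's own account management.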
// Role is a restricted set of permissions assigned to a set of users.
type Role struct {
Name string `json:"name"`
Permissions Permissions `json:"permissions,omitempty"`
Users []User `json:"users,omitempty"`
}
// RolesStore is the Storage and retrieval of authentication information
type RolesStore interface {
// All lists all roles from the RolesStore
All(context.Context) ([]Role, error)
// Create a new Role in the RolesStore
Add(context.Context, *Role) (*Role, error)
// Delete the Role from the RolesStore
Delete(context.Context, *Role) error
// Get retrieves a role if name exists.
Get(ctx context.Context, name string) (*Role, error)
// Update the roles' users or permissions
Update(context.Context, *Role) error
}
// Range represents an upper and lower bound for data
@ -71,6 +134,15 @@ type Query struct {
Range *Range `json:"range,omitempty"` // Range is the default Y-Axis range for the data
}
// DashboardQuery includes state for the query builder. This is a transition
// struct while we move to the full InfluxQL AST
type DashboardQuery struct {
Command string `json:"query"` // Command is the query itself
Label string `json:"label,omitempty"` // Label is the Y-Axis label for the data
Range *Range `json:"range,omitempty"` // Range is the default Y-Axis range for the data
QueryConfig QueryConfig `json:"queryConfig,omitempty"` // QueryConfig represents the query state that is understood by the data explorer
}
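// As a rough illustration of the JSON implied by the tags above, a dashboard
// cell query might serialize to (values invented for the example):
//
//	{"query": "SELECT mean(\"usage_user\") FROM \"cpu\"", "label": "% CPU"}
//
// plus an optional "range" and the "queryConfig" state used by the Data Explorer.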
// Response is the result of a query against a TimeSeries
type Response interface {
MarshalJSON() ([]byte, error)
@ -78,12 +150,13 @@ type Response interface {
// Source is connection information to a time-series data store.
type Source struct {
ID int `json:"id,omitempty,string"` // ID is the unique ID of the source
ID int `json:"id,string"` // ID is the unique ID of the source
Name string `json:"name"` // Name is the user-defined name for the source
Type string `json:"type,omitempty"` // Type specifies which kinds of source (enterprise vs oss)
Username string `json:"username,omitempty"` // Username is the username to connect to the source
Password string `json:"password,omitempty"` // Password is in CLEARTEXT
URL string `json:"url"` // URL are the connections to the source
MetaURL string `json:"metaUrl,omitempty"` // MetaURL is the url for the meta node
InsecureSkipVerify bool `json:"insecureSkipVerify,omitempty"` // InsecureSkipVerify as true means any certificate presented by the source is accepted.
Default bool `json:"default"` // Default specifies the default source for the application
Telegraf string `json:"telegraf"` // Telegraf is the db telegraf is written to. By default it is "telegraf"
@ -105,14 +178,16 @@ type SourcesStore interface {
// AlertRule represents rules for building a tickscript alerting task
type AlertRule struct {
ID string `json:"id,omitempty"` // ID is the unique ID of the alert
Query QueryConfig `json:"query"` // Query is the filter of data for the alert.
Every string `json:"every"` // Every how often to check for the alerting criteria
Alerts []string `json:"alerts"` // AlertServices name all the services to notify (e.g. pagerduty)
Message string `json:"message"` // Message included with alert
Trigger string `json:"trigger"` // Trigger is a type that defines when to trigger the alert
TriggerValues TriggerValues `json:"values"` // Defines the values that cause the alert to trigger
Name string `json:"name"` // Name is the user-defined name for the alert
ID string `json:"id,omitempty"` // ID is the unique ID of the alert
Query QueryConfig `json:"query"` // Query is the filter of data for the alert.
Every string `json:"every"` // Every how often to check for the alerting criteria
Alerts []string `json:"alerts"` // Alerts name all the services to notify (e.g. pagerduty)
AlertNodes []KapacitorNode `json:"alertNodes,omitempty"` // AlertNodes define additional arguments to alerts
Message string `json:"message"` // Message included with alert
Details string `json:"details"` // Details is generally used for the Email alert. If empty will not be added.
Trigger string `json:"trigger"` // Trigger is a type that defines when to trigger the alert
TriggerValues TriggerValues `json:"values"` // Defines the values that cause the alert to trigger
Name string `json:"name"` // Name is the user-defined name for the alert
}
// AlertRulesStore stores rules for building tickscript alerting tasks
@ -173,6 +248,20 @@ type QueryConfig struct {
RawText string `json:"rawText,omitempty"`
}
// KapacitorNode adds arguments and properties to an alert
type KapacitorNode struct {
Name string `json:"name"`
Args []string `json:"args"`
Properties []KapacitorProperty `json:"properties"`
// In the future we could add chaining methods here.
}
// KapacitorProperty modifies the node they are called on
type KapacitorProperty struct {
Name string `json:"name"`
Args []string `json:"args"`
}
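// For example (hypothetical values), a Slack alert with a channel property
// could be expressed as:
//
//	KapacitorNode{
//		Name: "slack",
//		Properties: []KapacitorProperty{
//			{Name: "channel", Args: []string{"#alerts"}},
//		},
//	}
//
// i.e. the node name plus any chained property calls and their arguments.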
// Server represents a proxy connection to an HTTP server
type Server struct {
ID int // ID is the unique ID of the server
@ -203,27 +292,80 @@ type ID interface {
Generate() (string, error)
}
// UserID is a unique ID for a source user.
type UserID int
const (
// AllScope grants permission for all databases.
AllScope Scope = "all"
// DBScope grants permissions for a specific database
DBScope Scope = "database"
)
// Permission is a specific allowance for User or Role bound to a
// scope of the data source
type Permission struct {
Scope Scope `json:"scope"`
Name string `json:"name,omitempty"`
Allowed Allowances `json:"allowed"`
}
// Permissions represent the entire set of permissions a User or Role may have
type Permissions []Permission
// Allowances defines what actions a user can have on a scoped permission
type Allowances []string
// Scope defines the location of access of a permission
type Scope string
// User represents an authenticated user.
type User struct {
ID UserID `json:"id"`
Email string `json:"email"`
Name string `json:"name"`
Passwd string `json:"password"`
Permissions Permissions `json:"permissions,omitempty"`
Roles []Role `json:"roles,omitempty"`
}
// UsersStore is the Storage and retrieval of authentication information
type UsersStore interface {
// All lists all users from the UsersStore
All(context.Context) ([]User, error)
// Create a new User in the UsersStore
Add(context.Context, *User) (*User, error)
// Delete the User from the UsersStore
Delete(context.Context, *User) error
// Get retrieves a user if `ID` exists.
Get(ctx context.Context, ID UserID) (*User, error)
// Get retrieves a user if name exists.
Get(ctx context.Context, name string) (*User, error)
// Update the user's permissions or roles
Update(context.Context, *User) error
// FindByEmail will retrieve a user by email address.
FindByEmail(ctx context.Context, Email string) (*User, error)
}
// Database represents a database in a time series source
type Database struct {
Name string `json:"name"` // a unique string identifier for the database
Duration string `json:"duration,omitempty"` // the duration (when creating a default retention policy)
Replication int32 `json:"replication,omitempty"` // the replication factor (when creating a default retention policy)
ShardDuration string `json:"shardDuration,omitempty"` // the shard duration (when creating a default retention policy)
}
// RetentionPolicy represents a retention policy in a time series source
type RetentionPolicy struct {
Name string `json:"name"` // a unique string identifier for the retention policy
Duration string `json:"duration,omitempty"` // the duration
Replication int32 `json:"replication,omitempty"` // the replication factor
ShardDuration string `json:"shardDuration,omitempty"` // the shard duration
Default bool `json:"isDefault,omitempty"` // whether the RP should be the default
}
// Databases represents the databases in a time series source
type Databases interface {
// All lists all databases
AllDB(context.Context) ([]Database, error)
Connect(context.Context, *Source) error
CreateDB(context.Context, *Database) (*Database, error)
DropDB(context.Context, string) error
AllRP(context.Context, string) ([]RetentionPolicy, error)
CreateRP(context.Context, string, *RetentionPolicy) (*RetentionPolicy, error)
UpdateRP(context.Context, string, string, *RetentionPolicy) (*RetentionPolicy, error)
DropRP(context.Context, string, string) error
}
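// A minimal usage sketch (illustrative only): a handler holding a Databases
// implementation could list the retention policies of a database with
//
//	if err := db.Connect(ctx, src); err != nil {
//		return err
//	}
//	rps, err := db.AllRP(ctx, "telegraf")
//
// while AllDB, CreateDB and DropDB cover the database-level operations.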
// DashboardID is the dashboard ID
@ -238,13 +380,14 @@ type Dashboard struct {
// DashboardCell holds visual and query information for a cell
type DashboardCell struct {
X int32 `json:"x"`
Y int32 `json:"y"`
W int32 `json:"w"`
H int32 `json:"h"`
Name string `json:"name"`
Queries []Query `json:"queries"`
Type string `json:"type"`
ID string `json:"i"`
X int32 `json:"x"`
Y int32 `json:"y"`
W int32 `json:"w"`
H int32 `json:"h"`
Name string `json:"name"`
Queries []DashboardQuery `json:"queries"`
Type string `json:"type"`
}
// DashboardsStore is the storage and retrieval of dashboards
@ -261,34 +404,6 @@ type DashboardsStore interface {
Update(context.Context, Dashboard) error
}
// ExplorationID is a unique ID for an Exploration.
type ExplorationID int
// Exploration is a serialization of front-end Data Explorer.
type Exploration struct {
ID ExplorationID
Name string // User provided name of the Exploration.
UserID UserID // UserID is the owner of this Exploration.
Data string // Opaque blob of JSON data.
CreatedAt time.Time // Time the exploration was first created.
UpdatedAt time.Time // Latest time the exploration was updated.
Default bool // Flags an exploration as the default.
}
// ExplorationStore stores front-end serializations of data explorer sessions.
type ExplorationStore interface {
// Search the ExplorationStore for each Exploration owned by `UserID`.
Query(ctx context.Context, userID UserID) ([]*Exploration, error)
// Create a new Exploration in the ExplorationStore.
Add(context.Context, *Exploration) (*Exploration, error)
// Delete the Exploration from the ExplorationStore.
Delete(context.Context, *Exploration) error
// Retrieve an Exploration if `ID` exists.
Get(ctx context.Context, ID ExplorationID) (*Exploration, error)
// Update the Exploration; will also update the `UpdatedAt` time.
Update(context.Context, *Exploration) error
}
// Cell is a rectangle and multiple time series queries to visualize.
type Cell struct {
X int32 `json:"x"`
@ -323,25 +438,3 @@ type LayoutStore interface {
// Update the dashboard in the store.
Update(context.Context, Layout) error
}
// Principal is any entity that can be authenticated
type Principal string
// PrincipalKey is used to pass principal
// via context.Context to request-scoped
// functions.
const PrincipalKey Principal = "principal"
// Authenticator represents a service for authenticating users.
type Authenticator interface {
// Authenticate returns User associated with token if successful.
Authenticate(ctx context.Context, token string) (Principal, error)
// Token generates a valid token for Principal lasting a duration
Token(context.Context, Principal, time.Duration) (string, error)
}
// TokenExtractor extracts tokens from http requests
type TokenExtractor interface {
// Extract will return the token or an error.
Extract(r *http.Request) (string, error)
}


@ -3,7 +3,7 @@ machine:
services:
- docker
environment:
DOCKER_TAG: chronograf-20170127
DOCKER_TAG: chronograf-20170208
dependencies:
override:
@ -13,6 +13,7 @@ test:
override:
- >
./etc/scripts/docker/run.sh
--debug
--test
--no-build
@ -22,6 +23,7 @@ deployment:
commands:
- >
./etc/scripts/docker/run.sh
--debug
--clean
--package
--platform all
@ -42,6 +44,7 @@ deployment:
- >
./etc/scripts/docker/run.sh
--clean
--debug
--release
--package
--platform all
@ -64,6 +67,7 @@ deployment:
- >
./etc/scripts/docker/run.sh
--clean
--debug
--release
--package
--platform all


@ -1,6 +1,7 @@
package main
import (
"context"
"log"
"os"
@ -41,7 +42,8 @@ func main() {
os.Exit(0)
}
if err := srv.Serve(); err != nil {
ctx := context.Background()
if err := srv.Serve(ctx); err != nil {
log.Fatalln(err)
}
}

dist/dir.go

@ -11,6 +11,7 @@ type Dir struct {
dir http.Dir
}
// NewDir constructs a Dir with a default file
func NewDir(dir, def string) Dir {
return Dir{
Default: def,

dist/dist.go

@ -3,6 +3,7 @@ package dist
//go:generate go-bindata -o dist_gen.go -ignore 'map|go' -pkg dist ../ui/build/...
import (
"fmt"
"net/http"
"github.com/elazarl/go-bindata-assetfs"
@ -32,6 +33,21 @@ func (b *BindataAssets) Handler() http.Handler {
return b
}
// addCacheHeaders requests an hour of Cache-Control and sets an ETag based on file size and modtime
func (b *BindataAssets) addCacheHeaders(filename string, w http.ResponseWriter) error {
w.Header().Add("Cache-Control", "public, max-age=3600")
fi, err := AssetInfo(filename)
if err != nil {
return err
}
hour, minute, second := fi.ModTime().Clock()
etag := fmt.Sprintf(`"%d%d%d%d%d"`, fi.Size(), fi.ModTime().Day(), hour, minute, second)
w.Header().Set("ETag", etag)
return nil
}
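// Illustrative example (values invented): for an asset of 3187 bytes last
// modified on the 21st at 09:04:05, the response would carry
//
//	Cache-Control: public, max-age=3600
//	ETag: "318721945"
//
// i.e. the ETag concatenates size, day of month, hour, minute and second.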
// ServeHTTP wraps http.FileServer by returning a default asset if the asset
// doesn't exist. This supports single-page react-apps with their own
// built-in router. Additionally, we override the content-type if the
@ -52,8 +68,14 @@ func (b *BindataAssets) ServeHTTP(w http.ResponseWriter, r *http.Request) {
// Additionally, because we know we are returning the default asset,
// we need to set the default asset's content-type.
w.Header().Set("Content-Type", b.DefaultContentType)
if err := b.addCacheHeaders(b.Default, w); err != nil {
return nil, err
}
return Asset(b.Default)
}
if err := b.addCacheHeaders(name, w); err != nil {
return nil, err
}
return octets, nil
}
var dir http.FileSystem = &assetfs.AssetFS{


@ -12,18 +12,19 @@ It lists every host that is sending [Telegraf](https://github.com/influxdata/tel
![Host List](https://github.com/influxdata/chronograf/blob/master/docs/images/host-list-gs.png)
The Chronograf instance shown above is connected to two hosts (`telegraf-neverland` and `telegraf-narnia`).
The first host is using 0.23% of its total CPU and has a load of 0.00.
The Chronograf instance shown above is connected to two hosts (`telegraf-narnia` and `telegraf-neverland`).
The first host is using 0.35% of its total CPU and has a load of 0.00.
It has one configured app: `system`.
Apps are Telegraf [input plugins](https://github.com/influxdata/telegraf#input-plugins) that have dashboard templates in Chronograf.
Click on the app on the `HOST LIST` page to access its dashboard template.
The dashboard offers [pre-canned](https://github.com/influxdata/chronograf/tree/master/canned) graphs of the input's data that are currently in InfluxDB.
The dashboard offers [pre-created](https://github.com/influxdata/chronograf/tree/master/canned) graphs of the input's data that are currently in InfluxDB.
Here's the dashboard template for Telegraf's [system stats](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/system) input plugin:
![System Graph Layout](https://github.com/influxdata/chronograf/blob/master/docs/images/system-layout-gs.gif)
Hover over the graphs to get additional information about the data, and select alternative time ranges for the graphs by using the time selector in the top right corner.
Hover over the graphs to get additional information about the data.
In addition, select alternative refresh intervals, alternative time ranges, and enter presentation mode with the icons in the top right corner.
See the [README](https://github.com/influxdata/chronograf#dashboard-templates) for a complete list of the apps supported by Chronograf.
@ -44,7 +45,7 @@ Paste an existing [InfluxQL](https://docs.influxdata.com/influxdb/latest/query_l
![Raw Editor](https://github.com/influxdata/chronograf/blob/master/docs/images/raw-editor-gs.gif)
### Other Features
View query results in tabular format (1), easily alter the query's time range with the time range selector (2), and save your graphs in individual exploration sessions (3):
Select an alternative refresh interval (1), an alternative time range (2), and view query results in tabular format (3):
![Data Exploration Extras](https://github.com/influxdata/chronograf/blob/master/docs/images/data-exploration-extras-gs.png)
@ -71,7 +72,7 @@ It supports three rule types:
* Relative Rule - alert if the data change relative to the data in a different time range
* Deadman Rule - alert if no data are received for the specified time range
The example above creates a simple threshold rule that sends an alert when `usage_idle` values are less than 86% within the past minute.
The example above creates a simple threshold rule that sends an alert when `usage_idle` values are less than 96%.
Notice that the graph provides a preview of the target data and the configured rule boundary.
Lastly, the `Alert Message` section allows you to personalize the alert message and select an alert endpoint.
@ -89,3 +90,37 @@ See all active alerts on the `ALERTING` page, and filter them by `Name`,
`Level`, and `Host`:
![Alert View](https://github.com/influxdata/chronograf/blob/master/docs/images/alert-view-gs.png)
### Alerta TICKscript Parser
Chronograf offers a parser for TICKscripts that use the [Alerta](https://docs.influxdata.com/kapacitor/latest/nodes/alert_node/#alerta) output.
This is a new feature in version 1.2.0-beta2.
To use the TICKscript parser:
* Select Alerta as the output when creating or editing an alert rule
* Paste your existing TICKscript in the text input (spacing doesn't matter!)
* Save your rule
You're good to go! The system automatically parses your TICKscript and creates a
Chronograf-friendly alert rule.
> **Notes:**
>
* Currently, the Alerta TICKscript parser requires users to **paste** their existing TICKscript in the text input. The parser does not support manually entering or editing a TICKscript.
* The parser requires users to whitespace delimit any services listed in the TICKscript's [`.services()` attribute](https://docs.influxdata.com/kapacitor/latest/nodes/alert_node/#alerta-services).
## Manage Users and Queries
The `ADMIN` section of Chronograf supports managing InfluxDB users and queries.
### User Management
Create, assign permissions to, and delete [InfluxDB users](https://docs.influxdata.com/influxdb/latest/query_language/authentication_and_authorization/#user-types-and-privileges).
In version 1.2.0-beta5, Chronograf only supports assigning `ALL` permissions to users; that is, read and write permissions to every database in the InfluxDB instance.
### Query Management
View currently-running queries and stop expensive queries from running on the InfluxDB instance:
![Alert View](https://github.com/influxdata/chronograf/blob/master/docs/images/admin-gs.png)


@ -20,8 +20,8 @@ Check out the [downloads](https://www.influxdata.com/downloads/) page for links
#### 1. Download and Install InfluxDB
```
wget https://dl.influxdata.com/influxdb/releases/influxdb_1.2.0_amd64.deb
sudo dpkg -i influxdb_1.2.0_amd64.deb
wget https://dl.influxdata.com/influxdb/releases/influxdb_1.2.2_amd64.deb
sudo dpkg -i influxdb_1.2.2_amd64.deb
```
#### 2. Start InfluxDB
@ -88,8 +88,8 @@ This is a known issue.
#### 1. Download and Install Telegraf
```
wget https://dl.influxdata.com/telegraf/releases/telegraf_1.2.0_amd64.deb
sudo dpkg -i telegraf_1.2.0_amd64.deb
wget https://dl.influxdata.com/telegraf/releases/telegraf_1.2.1_amd64.deb
sudo dpkg -i telegraf_1.2.1_amd64.deb
```
#### 2. Start Telegraf
@ -195,8 +195,8 @@ Now that we are collecting data with Telegraf and storing data with InfluxDB, it
#### 1. Download and Install Chronograf
```
wget https://dl.influxdata.com/chronograf/nightlies/chronograf_nightly_amd64.deb
sudo dpkg -i chronograf_nightly_amd64.deb
wget https://dl.influxdata.com/chronograf/releases/chronograf_1.2.0~beta7_amd64.deb
sudo dpkg -i chronograf_1.2.0~beta7_amd64.deb
```
#### 2. Start Chronograf
@ -242,7 +242,7 @@ There's no need to enter any information for the `Username` and `Password` input
Finally, click `Connect Kapacitor`.
If Kapacitor successfully connects you'll see an
[Alert Endpoints](https://docs.influxdata.com/kapacitor/v1.0/nodes/alert_node/)
Configure [Alert Endpoints](https://docs.influxdata.com/kapacitor/v1.0/nodes/alert_node/)
section below the `Connection Details` section:
![Alert Endpoints](https://github.com/influxdata/chronograf/blob/master/docs/images/alert-endpoints.png)

Binary file not shown.


@ -2,11 +2,33 @@
OAuth 2.0 Style Authentication
### TL;DR
#### Github
```sh
export AUTH_DURATION=1h # force login every hour
export TOKEN_SECRET=supersupersecret # Signing secret
export GH_CLIENT_ID=b339dd4fddd95abec9aa # Github client id
export GH_CLIENT_SECRET=260041897d3252c146ece6b46ba39bc1e54416dc # Github client secret
export GH_ORGS=biffs-gang # Restrict to GH orgs
```
### Configuration
To use authentication in Chronograf, both Github OAuth and JWT signature need to be configured.
To use authentication in Chronograf, both the OAuth provider and JWT signature
need to be configured.
#### Configuring JWT signature
Set a [JWT](https://tools.ietf.org/html/rfc7519) signature to a random string. This is needed for all OAuth2 providers that you choose to configure. *Keep this random string around!*
You'll need it each time you start a chronograf server because it is used to verify user authorization. If you are running multiple chronograf servers in an HA configuration, set the same `TOKEN_SECRET` on each so that users stay logged in. If you want to log all users out every time the server restarts, change the value of `TOKEN_SECRET` to a different value on each restart.
```sh
export TOKEN_SECRET=supersupersecret
```
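To make the role of `TOKEN_SECRET` concrete, below is a minimal sketch of signing and verifying such a token in Go. It assumes an HMAC-signed JWT and the `github.com/dgrijalva/jwt-go` package purely for illustration; Chronograf's own implementation lives in its server code and may differ in detail.

```go
package main

import (
	"fmt"
	"os"
	"time"

	jwt "github.com/dgrijalva/jwt-go"
)

func main() {
	secret := []byte(os.Getenv("TOKEN_SECRET"))

	// Sign a token whose "sub" claim identifies the user.
	signed, err := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
		"sub": "marty.mcfly@example.com",
		"exp": time.Now().Add(time.Hour).Unix(),
	}).SignedString(secret)
	if err != nil {
		panic(err)
	}

	// On a later request, the cookie's JWT is verified against the same secret.
	parsed, err := jwt.Parse(signed, func(t *jwt.Token) (interface{}, error) {
		return secret, nil
	})
	fmt.Println(err == nil && parsed.Valid) // true while the token is unexpired
}
```

If `TOKEN_SECRET` differs between restarts (or between servers in an HA setup), this verification fails and users are effectively logged out, which is exactly the behavior described above.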
### Github
#### Creating Github OAuth Application
To create a Github OAuth Application follow the [Register your app](https://developer.github.com/guides/basics-of-authentication/#registering-your-app) instructions.
@ -14,13 +36,13 @@ Essentially, you'll register your application [here](https://github.com/settings
The `Homepage URL` should be Chronograf's full server name and port. If you are running it locally for example, make it `http://localhost:8888`
The `Authorization callback URL` must be the location of the `Homepage URL` plus `/oauth/github/callback`. For example, if `Homepage URL` was
`http://localhost:8888` then the `Authorization callback URL` should be `http://localhost:8888/oauth/github/callback`.
Github will provide a `Client ID` and `Client Secret`. To register these values with chronograf set the following environment variables:
* `GH_CLIENT_ID`
* `GH_CLIENT_SECRET`
For example:
@ -29,18 +51,6 @@ export GH_CLIENT_ID=b339dd4fddd95abec9aa
export GH_CLIENT_SECRET=260041897d3252c146ece6b46ba39bc1e54416dc
```
#### Configuring JWT signature
Set a [JWT](https://tools.ietf.org/html/rfc7519) signature to a random string.
*Keep this random string around!*
You'll need it each time you start a chronograf server because it is used to verify
user authorization. If you are running multiple chronograf servers in an HA configuration set the `TOKEN_SECRET` on each to allow users to stay logged in.
```sh
export TOKEN_SECRET=supersupersecret
```
#### Optional Github Organizations
To require an organization membership for a user, set the `GH_ORGS` environment variable
@ -56,72 +66,113 @@ To support multiple organizations, use a comma-delimited list like so:
export GH_ORGS=hill-valley-preservation-society,the-pinheads
```
### Design
The Chronograf authentication scheme is a standard [web application](https://developer.github.com/v3/oauth/#web-application-flow) OAuth flow.
![oauth 2.0 flow](./OauthStyleAuthentication.png)
The browser receives a cookie from Chronograf, authorizing it. The cookie contains a JWT whose "sub" claim is the user's primary Github email address.
On each request to Chronograf, the JWT contained in the cookie will be validated against the `TOKEN_SECRET` signature and checked for expiration.
The JWT's "sub" becomes the [principal](https://en.wikipedia.org/wiki/Principal_(computer_security)) used for authorization to resources.
The API provides three endpoints: `/oauth`, `/oauth/logout` and `/oauth/github/callback`.
#### /oauth
The `/oauth` endpoint redirects to Github for OAuth. Chronograf sets the OAuth `state` request parameter to a JWT with a random "sub". Using `$TOKEN_SECRET`, `/oauth/github/callback`
can validate the `state` parameter without needing `state` to be saved.
#### /oauth/github/callback
The `/oauth/github/callback` endpoint receives the OAuth `authorization code` and `state`.
First, it will validate the `state` JWT from the `/oauth` endpoint. `JWT` validation
only requires access to the signature token. Therefore, there is no need for `state`
to be saved. Additionally, multiple Chronograf servers will not need to share third
party storage to synchronize `state`. If this validation fails, the request
will be redirected to `/login`.
Secondly, the endpoint will use the `authorization code` to retrieve a valid OAuth token
with the `user:email` scope. If unable to get a token from Github, the request will
be redirected to `/login`.
Finally, the endpoint will attempt to get the primary email address of the Github user.
Again, if not successful, the request will redirect to `/login`.
The email address is used as the subject claim for a new JWT. This JWT becomes the
value of the cookie sent back to the browser. The cookie is valid for thirty days.
Next, the request is redirected to `/`.
For all API calls to `/chronograf/v1`, the server checks for the existence and validity
of the JWT within the cookie value.
If the request did not have a valid JWT, the API returns `HTTP/1.1 401 Unauthorized`.
#### /oauth/logout
Simply expires the session cookie and redirects to `/`.
### Authorization
After successful validation of the JWT, each API endpoint of `/chronograf/v1` receives the
JWT subject within the `http.Request` as a `context.Context` value.
Within the Go API code all interfaces take `context.Context`. This means that each
interface can use the value as a principal. The design allows authorization to happen
at the layer most closely related to the problem.
An example usage in Go would be:
```go
func ShallIPass(ctx context.Context) (string, error) {
	principal := ctx.Value(chronograf.PrincipalKey).(chronograf.Principal)
	if principal != "gandolf@moria.misty.mt" {
		return "you shall not pass", chronograf.ErrAuthentication
	}
	return "run you fools", nil
}
```
### Google
#### Creating Google OAuth Application
You will need to obtain a client ID and an application secret by following the steps under "Basic Steps" [here](https://developers.google.com/identity/protocols/OAuth2). Chronograf will also need to be publicly accessible via a fully qualified domain name so that Google properly redirects users back to the application.
This information should be set in the following ENVs:
* `GOOGLE_CLIENT_ID`
* `GOOGLE_CLIENT_SECRET`
* `PUBLIC_URL`
Alternatively, this can also be set using the command line switches:
* `--google-client-id`
* `--google-client-secret`
* `--public-url`
#### Optional Google Domains
Similar to Github's organization restriction, Google authentication can be restricted to permit access to Chronograf from only specific domains. These are configured using the `GOOGLE_DOMAINS` ENV or the `--google-domains` switch. Multiple domains are separated with a comma. For example, if we wanted to permit access only from biffspleasurepalace.com and savetheclocktower.com the ENV would be set as follows:
```sh
export GOOGLE_DOMAINS=biffspleasurepalace.com,savetheclocktower.com
```
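Putting the Google settings together, a minimal sketch with placeholder values (the client ID, secret, and URLs below are not real):
```sh
# Placeholder values for illustration only
export GOOGLE_CLIENT_ID=example-id.apps.googleusercontent.com
export GOOGLE_CLIENT_SECRET=replace-with-google-issued-secret
export PUBLIC_URL=https://chronograf.example.com
export GOOGLE_DOMAINS=biffspleasurepalace.com,savetheclocktower.com
```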
### Heroku
#### Creating Heroku Application
To obtain a client ID and application secret for Heroku, you will need to follow the guide posted [here](https://devcenter.heroku.com/articles/oauth#register-client). Once your application has been created, insert those two values into the following ENVs (an example export block follows the lists below):
* `HEROKU_CLIENT_ID`
* `HEROKU_SECRET`
The equivalent command line switches are:
* `--heroku-client-id`
* `--heroku-secret`
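As a minimal sketch, with placeholder credentials (the values below are not real):
```sh
# Placeholder values for illustration only
export HEROKU_CLIENT_ID=01234567-89ab-cdef-0123-456789abcdef
export HEROKU_SECRET=replace-with-your-heroku-oauth-secret
```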
#### Optional Heroku Organizations
Like the other OAuth2 providers, access to Chronograf via Heroku can be restricted to members of specific Heroku organizations. This is controlled using the `HEROKU_ORGS` ENV or the `--heroku-organizations` switch and is comma-separated. If we wanted to permit access from the `hill-valley-preservation-society` organization and `the-pinheads` organization, we would use the following ENV:
```sh
export HEROKU_ORGS=hill-valley-preservation-society,the-pinheads
```
### Generic OAuth2 Provider
#### Creating OAuth Application using your own provider
The generic OAuth2 provider is very similar to the Github provider, but
you are able to set your own authentication, token and API URLs.
The callback URL path will be `/oauth/generic/callback`. So, if your chronograf
is hosted at `https://localhost:8888`, then the full callback URL would be
`https://localhost:8888/oauth/generic/callback`.
The generic OAuth2 provider has several required settings; an example export block follows the list below.
* `GENERIC_CLIENT_ID` : this application's client [identifier](https://tools.ietf.org/html/rfc6749#section-2.2) issued by the provider
* `GENERIC_CLIENT_SECRET` : this application's [secret](https://tools.ietf.org/html/rfc6749#section-2.3.1) issued by the provider
* `GENERIC_AUTH_URL` : OAuth 2.0 provider's authorization [endpoint](https://tools.ietf.org/html/rfc6749#section-3.1) URL
* `GENERIC_TOKEN_URL` : OAuth 2.0 provider's token [endpoint](https://tools.ietf.org/html/rfc6749#section-3.2) URL, used by the client to obtain an access token
* `TOKEN_SECRET` : Used to validate OAuth [state](https://tools.ietf.org/html/rfc6749#section-4.1.1) response. (see above)
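For example, a minimal sketch assuming a hypothetical provider at `auth.example.com` (all values are placeholders):
```sh
# Hypothetical provider endpoints and credentials for illustration only
export GENERIC_CLIENT_ID=chronograf
export GENERIC_CLIENT_SECRET=replace-with-provider-issued-secret
export GENERIC_AUTH_URL=https://auth.example.com/oauth/authorize
export GENERIC_TOKEN_URL=https://auth.example.com/oauth/token
export TOKEN_SECRET=supersupersecret
```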
#### Optional Scopes
By default chronograf will ask for the `user:email`
[scope](https://tools.ietf.org/html/rfc6749#section-3.3). If your
provider scopes email access under a different scope or scopes, provide them as
comma-separated values in the `GENERIC_SCOPES` environment variable.
```sh
export GENERIC_SCOPES="openid,email" # Requests access to openid and email scopes
```
#### Optional Email domains
The generic OAuth2 provider also has a few optional parameters; an example follows the list below.
* `GENERIC_API_URL` : URL that returns an [OpenID UserInfo JWT](https://connect2id.com/products/server/docs/api/userinfo) (specifically, the user's email address)
* `GENERIC_DOMAINS` : Email domains that the user's email address must belong to
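For example, continuing the hypothetical provider above:
```sh
# Hypothetical values for illustration only
export GENERIC_API_URL=https://auth.example.com/oauth/userinfo
export GENERIC_DOMAINS=savetheclocktower.com
```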
#### Configuring the look of the login page
To configure the login page button text, set `GENERIC_NAME`.
For example, with
```sh
export GENERIC_NAME="Hill Valley Preservation Society"
```
the button text will be `Login with Hill Valley Preservation Society`.
### Optional: Configuring Authentication Duration
By default, auth will remain valid for 30 days via a cookie stored in the browser. This duration can be changed with the environment variable `AUTH_DURATION`. For example, to change it to 1 hour, use:
```sh
export AUTH_DURATION=1h
```
The duration uses the golang [time duration format](https://golang.org/pkg/time/#ParseDuration), so the largest time unit is `h` (hours). To change it to 45 days, use:
```sh
export AUTH_DURATION=1080h
```
Additionally, for greater security, if you want to require re-authentication every time the browser is closed, set `AUTH_DURATION` to `0`. This will make the cookie transient (aka "in-memory").
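For example:
```sh
export AUTH_DURATION=0  # transient session cookie; users must re-authenticate after closing the browser
```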


@ -3,7 +3,7 @@ The dashboard API will support collections of resizable InfluxQL visualizations.
### TL; DR
Here are the objects we are thinking about; dashboards contain layouts which
contain explorations.
contain queries.
#### Dashboard

Some files were not shown because too many files have changed in this diff.