Re-mounting should only happen if the --prefix-routes option is set.
If this happens, the result will be a no-op as intended, since the
--basepath will be "": MountableRouter and http.StripPrefix are both
no-ops with the prefix set to "".
http.StripPrefix is a standard library handler which is designed to do
exactly what the inline http.HandlerFunc did (with almost the same
implementation).
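To illustrate the no-op behavior, here is a minimal, runnable sketch;
the handler and port are made up for the example:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	next := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "path seen by handler: %s\n", r.URL.Path)
	})

	// With an empty prefix, http.StripPrefix simply returns `next`
	// unchanged, so mounting through it leaves paths exactly as they
	// arrived.
	http.Handle("/", http.StripPrefix("", next))
	http.ListenAndServe(":8888", nil)
}
```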
In certain situations, the http.ResponseWriter passed to the URLPrefixer
may not be an http.Flusher. A simple case where this can occur is when
the Prefixer has been wrapped by another middleware that replaces the
ResponseWriter with one that doesn't implement the Flush method.
Previously, the Prefixer would error, which would cause the request to
fail with a 500. Instead, the condition is logged and the request is
passed unmodified to the next middleware in the chain. This effectively
disables prefixing for requests where the ResponseWriter is not an
http.Flusher.
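A sketch of the guard described above, assuming a simplified
URLPrefixer; the real type carries more fields (a logger, for one), but
the shape of the check is the point:

```go
package server

import (
	"log"
	"net/http"
)

// URLPrefixer here is a simplified stand-in for the real middleware.
type URLPrefixer struct {
	Prefix string
	Attrs  [][]byte
	Next   http.Handler
}

func (up *URLPrefixer) ServeHTTP(rw http.ResponseWriter, r *http.Request) {
	// Prefixing rewrites the body as a chunked stream, which requires
	// the downstream writer to support Flush.
	if _, ok := rw.(http.Flusher); !ok {
		log.Println("ResponseWriter is not an http.Flusher; not prefixing this request")
		up.Next.ServeHTTP(rw, r)
		return
	}

	// ... otherwise wrap rw, scan the response for the Attrs patterns,
	// and insert up.Prefix after each match (omitted in this sketch) ...
	up.Next.ServeHTTP(rw, r)
}
```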
Misc. Changes
=============
- Some tests for "builders" were moved to server/builders_test.go to
follow convention. We've been naming files after the different things
under test and leaving the file matching the package name for support
objects; in this case, a mock logger was added to that file.
Some load balancers will strip prefixes from requests on their way to
the Chronograf backend; others won't. The "--prefix-routes" parameter
forces all requests to the backend to have the prefix specified in
"--basepath". Omitting it causes routes to be rewritten only in rendered
templates and assumes that the load balancer will remove the prefix.
Use with Caddy
==============
An easy way to test this out is to use the free Caddy HTTP server from
http://caddyserver.com.
This Caddyfile will work with the options `--basepath /chronograf
--prefix-routes` set:
```
localhost:2020 {
    proxy /chronograf localhost:8888
    log stdout
}
```
This Caddyfile will work with only the option `--basepath /chronograf`
set:
```
localhost:2020 {
    proxy /chronograf localhost:8888 {
        without /chronograf
    }
    log stdout
}
```
This breaks compatibility with the old behavior of --basepath, so this
requires that proxies be configured to not modify routes forwarded to
backends. The old behavior will be supported in a subsequent commit.
The httprouter used in Chronograf did not support prefixing every route
with some basepath. This caused problems for those using the --basepath
parameter in combination with a load balancer that did not strip the
basepath prefix from requests it forwarded to Chronograf.
To support this, MountableRouter prefixes all routes at definition time
with the supplied prefix.
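Roughly, the idea looks like the sketch below; the MountableRouter shape
is illustrative rather than the exact Chronograf type, and only a couple
of the router methods are shown:

```go
package server

import (
	"net/http"

	"github.com/julienschmidt/httprouter"
)

// MountableRouter prepends Prefix to every route as it is registered,
// so the underlying httprouter matches the full, prefixed path. With an
// empty prefix it degenerates into the plain router, which is the no-op
// behavior described earlier.
type MountableRouter struct {
	Prefix   string
	Delegate *httprouter.Router
}

func (mr *MountableRouter) GET(path string, handle httprouter.Handle) {
	mr.Delegate.GET(mr.Prefix+path, handle)
}

func (mr *MountableRouter) POST(path string, handle httprouter.Handle) {
	mr.Delegate.POST(mr.Prefix+path, handle)
}

func (mr *MountableRouter) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	mr.Delegate.ServeHTTP(w, r)
}
```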
* Introduce Kapacitor and InfluxDB as command line options
If omitted, their values will be null at runtime. If supplied, e.g.:

    chronograf
      --kapacitor https://path.to.my:1/kapacitor/instance
      --influxdb https://path.to.my:1/influxdb/instance

their values will be accessible via Server.Kapacitor and
Server.InfluxDB.
* MultiSourcesStore will hold Bolt and config’d sources
* Delegate to db.SourcesStore for now
* Add Username/Password tags for InfluxDB and Kapacitor
* Builders for MultiSourceStore and MultiLayoutStore
* Store Kapacitor and InfluxDB configs in memory
* Typo
* Update CHANGELOG
* Move StoreBuilders to server/builders.go
* Correct these assertions by reversing them
* Kapacitor -> KapacitorURL; InfluxDB -> InfluxDBURL
* Redirect to default source on invalid source ID
When supplied with an invalid source ID, the CheckSources component
would redirect the user to a "Create Source" page. This caused
surprising behavior when a source was deleted because that source ID
would become invalid. The effect was that deleting a source brought
users immediately to the create source page, rather than back to the
sources list.
This instead redirects users to the default source when provided an
invalid source id. The backend automatically re-assigns the "default"
source, so this will always succeed, since sources are fetched again
from the backend.
The regex used is slightly dependent on URL structure, but that
structure has been stable over the lifetime of this project. It does,
however, rely on URL structure more than the previous redirecting
implementation did.
* Force sources to reload after deletion
Deleting a source invalidates the state held by the client because of
automatic re-assignment of the default source by the backend. Without
duplicating backend logic, it is impossible for the frontend to discover
the new source without reloading sources.
The ManageSources page now uses an async action creator that deletes the
requested source and reloads all sources. The source action creators
have also been refactored to use implicit returns, like other action
creators.
* Remove Dead removeSource action
removeSource is no longer used because the API invalidates its
assumptions. For more information, see 04bf3ca.
* Update Changelog with source deletion redirect fix
Users are no longer unexpectedly redirected to the "create source" page
whenever they delete a source that they are connected to.
* Return 404 when deleting non-existent source
When deleting a source, a new default is assigned automatically. If a
non-existent source ID was provided, this previously resulted in a 500,
which is a violation of the Swagger docs. The solution is to examine the
error and, if it is an ErrSourceNotFound, invoke the notFound handler.
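The pattern amounts to something like the following sketch; the store
interface, error value, and response helpers are illustrative rather
than the actual Chronograf handler code:

```go
package server

import (
	"context"
	"errors"
	"net/http"
)

// Illustrative error value and store; not the real chronograf types.
var ErrSourceNotFound = errors.New("source not found")

type SourcesStore interface {
	Delete(ctx context.Context, id int) error
}

func removeSource(store SourcesStore, w http.ResponseWriter, r *http.Request, id int) {
	if err := store.Delete(r.Context(), id); err != nil {
		if err == ErrSourceNotFound {
			// A missing source is the client's error, per the Swagger docs.
			http.Error(w, err.Error(), http.StatusNotFound)
			return
		}
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusNoContent)
}
```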
* Add Error handling to source deletion
There are two kinds of errors that can be encountered when deleting a
source: a 404 and a 500 (from either the delete or the subsequent
fetch).
The 404 is a precondition failure of the action creator. The source.id
requested can be non-existent for two reasons: 1) The action creator was
passed garbage by the caller. 2) A concurrent write occurred which
silently invalidated this session's state. For the first case, we can
ensure that the caller is sane by having an assertion check that the
requested source is among some set of sources. This could be
circumvented by a caller, but chances are good that both the full set of
sources and the desired source are available to callers of this action
creator. The second case is not an error. In this case, we should
proceed with reloading sources, since the deletion that was requested
has already been performed by someone else.
Finally, 500s can only occur if there is something broken with the API.
In this situation, we provide a notification that tells the user to
check the API logs for more information.
* Remove duplicate CHANGELOG entries
These were introduced due to a naive merge conflict resolution.
* Remove assertion
This was decided to be confusing and unnecessary.
* Remove remnants of removed assertion
These were needed for an assertion that has been removed. It's no longer
necessary to pass `sources` to the action creator.
* Relax query validation for cell endpoint
* Dashboards can now add a cell; Rebase over 950-overlay_technologies-edit
* Server now returns empty queries array when creating a new dashboard cell
* Use async/await pattern for addDashboardCell, add basic error handling
* Update names of methods and actions for editing and updating cells to match those for adding
Factor out newDefaultCell to dashboard constants
* Update CHANGELOG
* Fix bug where Overlay wouldn’t display for query-less cells
* We removed these validations
* Correct documentation for dashboards
* Exclude .git and use 'make run-dev' in 'make continuous'
* Fix dashboard deletion bug where id serialization was wrong
* Commence creation of overlay technology, add autoRefresh props to DashboardPage
* Enhance overlay magnitude of overlay technology
* Add confirm buttons to overlay technology
* Refactor ResizeContainer to accommodate arbitrary containers
* Refactor ResizeContainer to require explicit ResizeTop and ResizeBottom for clarity
* Add markup and styles for OverlayControls
* CellEditorOverlay needs a larger minimum bottom height to accommodate more things
* Revert Visualization to not use ResizeTop or flex-box
* Remove TODO and move to issue
* Refactor CellEditorOverlay to allow selection of graph type
* Style Overlay controls, move confirm buttons to own stylesheet
* Fix toggle buttons in overlay so active is actually active
* Block user-select on a few UI items
* Update cell query shape to support Visualization and LayoutRenderer
* Code cleanup
* Repair fixture schema; update props for affected components
* Wired up selectedGraphType and activeQueryID in CellEditorOverlay
* Wire up chooseMeasurements in QueryBuilder
Pass queryActions into QueryBuilder so that DataExplorer can provide
actionCreators and CellEditorOverlay can provide functions that
modify its component state
* Semicolon cleanup
* Bind all queryModifier actions to component state with a stateReducer
* Overlay Technologies™ can add and delete a query from a cell
* Semicolon cleanup
* Add conversion of InfluxQL to QueryConfig for dashboards
* Update go deps to add influxdb at af72d9b0e4ebe95be30e89b160f43eabaf0529ed
* Updated docs for dashboard query config
* Update CHANGELOG to mention InfluxQL to QueryConfig
* Make reducer’s name more specific for clarity
* Remove 'table' as graphType
* Make graph renaming prettier
* Remove duplicate DashboardQuery in swagger.json
* Fix swagger to include name and links for Cell
* Refactor CellEditorOverlay to enable graph type selection
* Add link.self to all Dashboard cells; add bolt migrations
* Make dash graph names only hover on contents
* Consolidate timeRange format patterns, clean up
* Add cell endpoints to dashboards
* Include Line + Stat in Visualization Type list
* Add cell link to dashboards
* Enable step plot and stacked graph in Visualization
* Overlay Technologies are summonable and dismissable
* OverlayTechnologies saves changes to a cell
* Convert NameableGraph to createClass for state
This was converted from a pure function to encapsulate the state of the
buttons. An attempt was made previously to store this state in Redux,
but it proved too convoluted with the current state of the reducers for
cells and dashboards. Another effort must take place to separate a cell
reducer to manage the state of an individual cell in Redux in order for
this state to be sanely kept in Redux as well.
For the time being, this state is being kept in the component for the
sake of expeditiousness, since this is needed for Dashboards to be
released. A refactor of this will occur later.
* Cells should contain a links key in server response
* Clean up console logs
* Use live data instead of a cellQuery fixture
* Update docs for dashboard creation
* DB and RP are already present in the Command field
* Fix LayoutRenderer’s understanding of query schema
* Return a new object, rather than mutating in place
* Visualization doesn’t use activeQueryID
* Selected is an object, not a string
* QueryBuilder refactored to use query index instead of query id
* CellEditorOverlay refactored to use query index instead of query id
* ConfirmButtons doesn’t need to act on an item
* Rename functions to follow convention
* Queries are no longer guaranteed to have ids
* Omit WHERE and GROUP BY clauses when saving query
* Select new query on add in OverlayTechnologies
* Add click outside to dash graph menu, style menu also
* Change context menu from ... to a caret
More consistent with the rest of the UI, better affordance
* Hide graph context menu in presentation mode
Don’t want people editing a dashboard from presentation mode
* Move graph refreshing spinner so it does not overlap with context menu
* Wire up Cell Menu to Overlay Technologies
* Correct empty dashboard type
* Refactor dashboard spec fixtures
* Test syncDashboardCell reducer
* Remove Delete button from graph dropdown menu (for now)
* Update changelog
This uses a provide() function in server/server.go to push the necessary
oauth2.Provider and oauth2.Mux into the scope of the server.Mux. This
allows the server.Mux to configure its routes without caring which
Providers are enabled, which switches/ENVs are set, etc. It configures
its routes optimistically and leaves the higher-order logic to decide
whether to actually invoke the logic used by the mux to configure routes
for that provider.
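A sketch of the pattern, with simplified stand-ins for the oauth2
interfaces (the real provide() signature may differ):

```go
package server

import "net/http"

// Provider and Mux approximate the oauth2 package interfaces.
type Provider interface {
	Name() string
}

type Mux interface {
	Login() http.Handler
	Logout() http.Handler
	Callback() http.Handler
}

// provide hands the provider/mux pair to fn only when the provider is
// actually configured, so the route-registering code can call it
// unconditionally ("optimistically") without inspecting flags or ENVs.
func provide(p Provider, m Mux, enabled bool) func(func(Provider, Mux)) {
	return func(fn func(Provider, Mux)) {
		if enabled {
			fn(p, m)
		}
	}
}
```

The mux registers its login/logout/callback routes inside the closure
it passes to the returned function, and that closure simply never runs
for providers that weren't configured.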
This allows operators to permit access to Chronograf only to users belonging
to a set of specific Heroku organizations. This is controlled using the
HEROKU_ORGS env or the --heroku-organizations switch.
JWTMux was a disingenuous name because while JWTs are a very good choice
for a cookie encoding, they were not strictly required for use with this
mux. To better indicate the responsibilities of this mux, it's been
renamed "CookieMux," since its responsibilities end with persisting the
oauth2.Authenticator's encoded state in the browser. It is up to the
oauth2.Authenticator to choose the encoding.
If a --token-secret, --heroku-client-id, and --heroku-secret are
provided to Chronograf, it will add Heroku as an OAuth2 provider. These
tokens can be obtained (as of this writing) by visiting your "manage
account" page, navigating to "Applications," and then clicking "Register
New API Client" under the "API Clients" section.
Created an oauth2 package which encapsulates all oauth2 providers,
utility functions, types, and interfaces. Previously, some methods of
the Github provider were used as http.HandlerFuncs. These have now been
pulled into a concrete type called JWTMux so that other OAuth2 providers
can be implemented.
JWTMux has all of the functionality required to take a token from any
provider and store it as a JWT in a browser, and that is the extent of
its responsibilities. It implements the oauth2.Mux interface which would
potentially allow other strategies of oauth2 credential storage.
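As a sketch of that responsibility, with illustrative field and method
names rather than the real definitions:

```go
package oauth2

import (
	"net/http"
	"time"
)

// JWTMux, simplified: its only job is to persist the provider's token
// in the browser.
type JWTMux struct {
	CookieName string
	Lifespan   time.Duration
}

// persist stores an already-encoded JWT in a cookie; a Callback handler
// would call something like this after the provider's token exchange.
// A different Mux implementation could store credentials another way,
// which is all the interface asks of it.
func (j *JWTMux) persist(w http.ResponseWriter, signedJWT string) {
	http.SetCookie(w, &http.Cookie{
		Name:     j.CookieName,
		Value:    signedJWT,
		Path:     "/",
		Expires:  time.Now().Add(j.Lifespan),
		HttpOnly: true,
	})
}
```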
Since this is a flag that is being accepted by the application, it makes
sense to group it with the other flags. Also, the `json` struct tag was
a remnant from an earlier attempt at implementing this feature, and is
no longer necessary.
URLPrefixer had nothing to do with assets, so it actually belongs up in
the mux, where we're assembling handlers together across the
application.
Also, the setup was painful to look at, and others will probably use the
same `Attrs`, so a `NewDefaultURLPrefixer` was added to spawn a prefixer
with only a prefix and a next handler.
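A sketch of what that constructor looks like, with a simplified
URLPrefixer and the attribute patterns described further down assumed as
the defaults:

```go
package server

import "net/http"

// A simplified URLPrefixer; the real type has a few more fields.
type URLPrefixer struct {
	Prefix string
	Attrs  [][]byte
	Next   http.Handler
}

// NewDefaultURLPrefixer spawns a prefixer from just a prefix and a next
// handler, filling in the attribute patterns most callers will want.
func NewDefaultURLPrefixer(prefix string, next http.Handler) *URLPrefixer {
	return &URLPrefixer{
		Prefix: prefix,
		Next:   next,
		Attrs: [][]byte{
			[]byte(`src="`),
			[]byte(`href="`),
			[]byte(`url(`),
		},
	}
}
```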
React-router and the HTTP client that we use in the frontend need to be
informed of how to access the Chronograf backend when it's being hosted
on a route other than /. To accomplish this, a data attribute is written
into the `<div>` that serves as our React root. We then make the React
router aware of this prefix if it's set, and also pass it to axios (our
front-end HTTP client) by way of window.
Originally, it was desired to have the basepath accessible via an API,
but this proved to be impossible because, to access that API, the front
end would already need to know the basepath. The technique we went with
was arrived at independently, but it is also used by Jupyter notebooks,
which encountered the same problem.
It is not enough for the prefixer to replace only `src="` attributes, as
it currently does, because that is not the only place a relative URL can
appear. It also needs to prefix URLs found in CSS, which can likewise
come from the downstream http.ResponseWriter.
This adds support for an arbitrary list of patterns that will cause the
prefixer to insert its configured prefix. This is currently set to look
for `src`, `href`, and `url()` attributes.
Also, because we are modifying the stream, we need to suppress the
Content-Length generated by any downstream http.Handlers and instead
enable Transfer-Encoding: chunked so that we can stream the modified
response (we don't know apriori how many times we'll perform a
prefixing, so we can't calculate a final Content-Length). This is
accomplished by duplicating the Headers in the wrapResponseWriter that
is handed to the `Next` handler. We also handle the chunking and
Flushing that needs to happen as a result of using chunked transfer
encoding.
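A sketch of the header-duplication part of that wrapper; the type name
follows the description above, but the details are simplified:

```go
package server

import "net/http"

// wrapResponseWriter presents a separate Header map to the Next handler
// so that any Content-Length it sets never reaches the client. With no
// declared length, net/http falls back to chunked transfer encoding,
// which is what the prefixer needs since it changes the body length.
type wrapResponseWriter struct {
	http.ResponseWriter
	dupHeader http.Header
}

func newWrapResponseWriter(rw http.ResponseWriter) *wrapResponseWriter {
	return &wrapResponseWriter{ResponseWriter: rw, dupHeader: http.Header{}}
}

func (wrw *wrapResponseWriter) Header() http.Header {
	return wrw.dupHeader
}

func (wrw *wrapResponseWriter) WriteHeader(code int) {
	// Copy everything except Content-Length through to the real writer.
	wrw.dupHeader.Del("Content-Length")
	for k, vs := range wrw.dupHeader {
		for _, v := range vs {
			wrw.ResponseWriter.Header().Add(k, v)
		}
	}
	wrw.ResponseWriter.WriteHeader(code)
}
```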
In order to support hosting chronograf under an arbitrary path[1], we
need to be able to rewrite all the URLs that are served in HTML and CSS.
Take, for example, the scenario where Chronograf is to be hosted under
`/chronograf` using Caddy and this example Caddyfile:
```
localhost:2020
gzip
proxy /chronograf localhost:8888 {
    without /chronograf
}
```
Chronograf will not load properly when visiting
`http://localhost:2020/chronograf` because requests for JavaScript, CSS,
and fonts will go to URLs like
`http://localhost:2020/app-somegianthash.js` when they should go to
`http://localhost:2020/chronograf/app-somegianthash.js`.
This is the essence of issue #721.
To solve this, we add a URLPrefixer http.Handler that acts as a
middleware. It inserts itself between any upstream handlers and the
handler passed to it as its `Next` parameter, and searches for `src="`
attributes. Upon discovering one of these attributes, it writes the
detected attribute followed by the configured prefix. It then continues
writing the stream to the upstream http.ResponseWriter until it
encounters another attribute or reaches EOF.
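A deliberately simplified sketch of that rewrite step; it ignores
attributes split across chunk boundaries and the additional patterns
added later, but shows the write-attribute-then-prefix idea:

```go
package server

import (
	"bytes"
	"net/http"
)

// prefixingWriter inserts the prefix after every `src="` it sees in a
// chunk, then passes the chunk on to the real ResponseWriter.
type prefixingWriter struct {
	rw     http.ResponseWriter
	prefix string
}

func (pw *prefixingWriter) Write(p []byte) (int, error) {
	out := bytes.Replace(p, []byte(`src="`), []byte(`src="`+pw.prefix), -1)
	if _, err := pw.rw.Write(out); err != nil {
		return 0, err
	}
	// Report the caller's byte count so upstream writers don't treat
	// the length change as a short write.
	return len(p), nil
}
```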
Using my existing layout chaining, I added layouts wrapped in
go-bindata as the last option for loading layouts. This means
that the data store is preferred over file system over bindata.
With this functionality, we can simply distribute the single-file
binary.
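A sketch of the chaining idea under simplified store shapes; these are
not the exact chronograf interfaces:

```go
package server

import (
	"context"
	"errors"
)

// Layout and LayoutsStore stand in for the chronograf interfaces.
type Layout struct {
	ID          string
	Application string
}

type LayoutsStore interface {
	Get(ctx context.Context, id string) (Layout, error)
}

// MultiLayoutsStore consults its stores in order, so a layout found in
// the database shadows one on the filesystem, which in turn shadows the
// copy bundled into the binary with go-bindata.
type MultiLayoutsStore struct {
	Stores []LayoutsStore // e.g. {boltStore, fileStore, bindataStore}
}

func (m *MultiLayoutsStore) Get(ctx context.Context, id string) (Layout, error) {
	lastErr := errors.New("layout not found")
	for _, store := range m.Stores {
		layout, err := store.Get(ctx, id)
		if err == nil {
			return layout, nil
		}
		lastErr = err
	}
	return Layout{}, lastErr
}
```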