Role members are fully explicated for CRUD operations.
Also adds validation for Roles on requests.
Also returns an empty array in JSON when a User has no roles.
Move server/roles.go and server/roles_test.go into server/sources.go and
server/sources_test.go respectively.
Signed-off-by: Michael de Sa <mjdesa@gmail.com>
Move source Users code into source_users and source_users_test files.
Use the UsersStore for both InfluxDB and Chronograf users.
Signed-off-by: Michael de Sa <mjdesa@gmail.com>
The pattern of using a switch with a list of options and a default that
returns an error isn't bad for a one-off validation:

    switch myProp {
    case "validOption1", "validOption2":
        // no-op
    default:
        panic("invalid!")
    }
However, we're doing this multiple times in this method, so it makes
sense to pull this out into a new method to make it clearer what's
happening.
This adds a `oneOf` function that takes some property and a variadic
list of valid options and reports whether or not that property is among
that list.
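A sketch of such a helper (the actual signature may differ slightly):
```
// oneOf reports whether prop is among the valid options.
func oneOf(prop string, validOpts ...string) bool {
	for _, valid := range validOpts {
		if prop == valid {
			return true
		}
	}
	return false
}
```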
The Base and Scale options on axes may each take only one of two values.
We weren't validating that this was the case. This patch ensures that
Base can only ever be "10" or "2", and that Scale must be either
"linear" or "log".
Associated test coverage was also added.
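As a sketch, the validation might then read (the Axis field names are
assumptions):
```
// validAxis is a hypothetical validator built on the oneOf helper.
func validAxis(a chronograf.Axis) error {
	if !oneOf(a.Base, "10", "2") {
		return fmt.Errorf("invalid axis base %q", a.Base)
	}
	if !oneOf(a.Scale, "linear", "log") {
		return fmt.Errorf("invalid axis scale %q", a.Scale)
	}
	return nil
}
```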
New options were introduced to control things like scale, base, etc. on
axes and these were previously not documented. This adds documentation
of the newly supported parameters by the API.
This logic was originally left in place to help future test writers, but
its presence was vexing because it was not exercised by existing test
cases. It has been commented out in case future tests need to leverage
it.
Kapacitor responses are paginated, and sometimes users have more than
the default 100 tasks that are returned from Kapacitor. This replaces
the previous Kapa client with one that automatically follows paginated
responses from Kapacitor's ListTasks endpoint and returns the full
response.
Tests for the KapacitorRulesGet endpoint had to be updated because they
did not account for "limit" and "offset", and so led to an infinite
loop with the paginated client. A correct Kapacitor backend will not
exhibit this behavior.
The kapacitor client used in the kapacitor endpoints fetches at most as
many tasks as the limit you provide. If you provide no limit, it
defaults to a limit of 100. We currently rely on this default behavior.
Some users have more than 100 tasks, so we need a client that's capable
of continually fetching tasks from Kapacitor until there are none left,
and returning the full response to the frontend.
This introduces a PaginatingKapacitorClient which does exactly that.
Also, test coverage was added around the KapacitorRulesGet endpoint,
since it was previously untested.
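A sketch of what such a client could look like, assuming the upstream
Kapacitor client's ListTasks signature (the interface and constant
names here are illustrative, not necessarily the shipped ones):
```
// Sketch only; "client" refers to the Kapacitor client package, and
// the KapaClient interface is an assumption made for illustration.
type KapaClient interface {
	ListTasks(opt *client.ListTasksOptions) ([]client.Task, error)
}

type PaginatingKapacitorClient struct {
	KapaClient
}

// ListTasks fetches pages of tasks until a short page signals the end,
// then returns the accumulated result.
func (p *PaginatingKapacitorClient) ListTasks(opt *client.ListTasksOptions) ([]client.Task, error) {
	const pageSize = 100
	var all []client.Task
	for opt.Offset = 0; ; opt.Offset += pageSize {
		opt.Limit = pageSize
		page, err := p.KapaClient.ListTasks(opt)
		if err != nil {
			return nil, err
		}
		all = append(all, page...)
		if len(page) < pageSize {
			return all, nil
		}
	}
}
```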
Because we are now creating new instances of dashboards when we create a
response, it's critical to copy every element of Dashboards from the
previous to the new instance.
We were not previously copying the Type field of cells, so it was
defaulting to the empty-string zero value. This patch adds "Type" to the
tests and ensures that it's properly copied.
In anticipation of adding Axes to cells, I wanted some test coverage to
be in place before I made the change.
This covers the happy path case as well as tests focusing on individual
applications. Still to come are a test focusing on a measurement and a
test for when the store is unavailable.
Also removed LegacyBounds marshaling since it was no longer necessary
Conflicts resolved:
bolt/internal/internal.go
bolt/internal/internal.pb.go
bolt/internal/internal.proto
bolt/internal/internal_test.go
chronograf.go
server/cells_test.go
server/dashboards_test.go
server/swagger.json
When creating new dashboards to set defaults, not all properties of the
dashboard were being copied. This ensures that they are so that zero
values are not used for things like the ID and Name.
Dashboard responses had data races because multiple goroutines were
reading and modifying dashboards before sending them out on the wire.
This patch introduces immutability in the construction of the response,
so that each goroutine is working with its own set of dashboardResponse
structs.
The contract with the frontend states that bounds should come back as an
empty array instead of null when there are no bounds present. We must
explicitly specify []string{} for this to happen.
Certain aspects of the frontend require the presence of these three
axes, so part of the contract established is that the backend will
always provide them. Since we centralize creation of
dashboardCellResponses, this is where these axes are added to all cell
responses.
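A sketch of how the centralized constructor can satisfy both contracts
above (the function, type, and field names here are assumptions):
```
// Hypothetical sketch; type and field names are assumptions.
func newCellResponse(cell chronograf.DashboardCell) chronograf.DashboardCell {
	if cell.Axes == nil {
		cell.Axes = make(map[string]chronograf.Axis)
	}
	// The frontend contract requires "x", "y", and "y2" to always be
	// present, each with a non-nil Bounds so JSON marshals to []
	// rather than null.
	for _, name := range []string{"x", "y", "y2"} {
		axis := cell.Axes[name]
		if axis.Bounds == nil {
			axis.Bounds = []string{}
		}
		cell.Axes[name] = axis
	}
	return cell
}
```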
Additionally, because there was previously no coverage over the
dashboard cells endpoints, a test has been added to cover the
DashboardCells method of Service.
Due to various limitations with the previous implementation of Bounds as
a [2]int64{}, we've decided to change this to a []string{}. This will
allow clients to store arbitrary data specifying a bound and interpret
it as they wish.
For the foreseeable future, we will only be using the "x", "y", and "y2"
axes, even though the underlying serialization can support arbitrary
axes (for the future).
This ensures that only "x", "y", and "y2" axes are present and updates
the Swagger docs to reflect that fact.
Cells now have axes which represent their visualization's viewport. This
updates the Swagger documentation to reflect this.
Things to be aware of
=====================
The form of "axes" is that of a map<string,object>, which is represented
in Swagger by an "additionalProperties" key (search for "string to model
mapping" here: https://swagger.io/specification/).
It's useful for operators to classify users into separate groups which
we have termed "organizations". For other OAuth providers, the notion of
an organization typically fell along company lines. For example,
MegaCorp might have a "MegaCorp" GitHub organization, and all email
addresses would have the domain "megacorp.com".
Auth0 is slightly different in that MegaCorp would likely run their own
Auth0 provider for their internal services, so "organizations" in Auth0
are no longer synonymous with "large organizations" (or companies).
Instead, Auth0 organizations could be used to restrict access to
Chronograf instances based on team membership within an organization.
To make use of Auth0 organizations, operators should modify users'
app_metadata to include the key "organization". Its value should be the
organization which that user belongs to. This can be done automatically
through arbitrary rules using Auth0 Rules.
Auth0 is an OpenID Connect compliant OAuth2 provider, so we're able to
re-use the generic OAuth2 provider to implement it. The routes required
by Auth0 have been hardcoded for user convenience.
Also, Auth0 requires users to register a subdomain of auth0.com when
signing up. This must be provided to chronograf through the
`--auth0-domain` parameter (or `AUTH0_DOMAIN` ENV). This is **distinct**
from the `PUBLIC_URL`. For example, for a Chronograf hosted at
`http://www.example.com`, and an Auth0 domain of
`http://oceanic-airlines.auth0.com`, a client-id of `notpennysboat` and a
client-secret of `4-8-15-16-23-42`, the command line options would look
like:
```
chronograf \
  --auth0-domain=http://oceanic-airlines.auth0.com \
  --auth0-client-id=notpennysboat \
  --auth0-secret=4-8-15-16-23-42 \
  --public-url=http://www.example.com \
  -t `uuidgen`
```
When using a basepath of /chronograf, the app would present a
never-ending spinner when visiting the root route. This was because the
prefixingRedirector middleware which is responsible for appending the
basepath to redirects from downstream http.Handlers thought that the
prefix was already appended since it saw `/chronograf/v1`. In reality,
it should have produced a location like `/chronograf/chronograf/v1`.
The solution was to look beyond the first instance of a prefix and check
for the presence of another prefix to detect if a prefix was already
applied by a downstream handler.
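A sketch of that check (the helper name is hypothetical):
```
// alreadyPrefixed reports whether a downstream handler already applied
// the prefix: the prefix must appear twice in sequence, e.g. a Location
// of /chronograf/chronograf/v1 for a prefix of /chronograf.
func alreadyPrefixed(location, prefix string) bool {
	rest := strings.TrimPrefix(location, prefix)
	return rest != location && strings.HasPrefix(rest, prefix)
}
```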
The Basepath option should be applied in anything that will be consumed
by the React application. This is because from its perspective, the
proxy sitting between it and the backend wants those prefixes regardless
of what it does with them before handing the request back to the
Chronograf backend. Consequently, there are situations in the backend
where we need either the `opts.Basepath` or the `basepath` that we
alter when `opts.PrefixRoutes` is set. The `basepath` is strictly for
altering routing decisions made by the backend.
There are subtle places where routes are supplied to the frontend that
need to always have `opts.Basepath` set as well. Another commit
addressed the "Location" header of redirects, for example.
The router that we use has a feature that will automatically redirect
routes in certain situations where it feels a trailing slash would be
appropriate. Because the underlying router is totally unaware of
upstream prefixing activity, the "Location" that it sends clients to is
incorrect because it doesn't have the prefix.
This introduces a middleware that catches any downstream 3XX class
responses and replaces the Location header with the prefixed version of
it, plus a trailing slash. It does this only when the prefix has not
been applied already by some downstream middleware.
This adds the status code to the response log message to make it easier
to diagnose issues. It also replaces the placeholder "Success" message
with the decoded value of the HTTP Status, resulting in messages like:
INFO[0041] Response: Temporary Redirect code=307
...and so on. Both easily consumable by humans and machines.
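The reason phrase comes straight from the standard library; a minimal
sketch of the logging call (the structured-logger API shown is an
assumption):
```
// http.StatusText maps a status code to its standard reason phrase,
// e.g. 307 -> "Temporary Redirect".
code := http.StatusTemporaryRedirect
logger.WithField("code", code).Info("Response: " + http.StatusText(code))
```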
Basepath was previously not working here because the strings constructed
via concatenation had a trailing slash at the end:
Before:
rootPath => "/someprefix/chronograf/v1/"
After:
rootPath => "/someprefix/chronograf/v1"
The julienschmidt/httprouter that the bouk/httprouter is based on has
support for ignoring trailing slashes, which is behavior that we want.
However, routing decisions involving this rootPath string were being
made by a `strings.HasPrefix` function. This conditional seeks to
apply the token middleware only in cases where routes _under_
`/chronograf/v1` are accessed (e.g. `/chronograf/v1/sources`). In cases
where the paths were effectively equal, this conditional accidentally
worked because the string `/chronograf/v1` does not have the prefix
`/chronograf/v1/`. When this was corrected to use `path.Join`, this case
became true and caused the token middleware to be applied.
`path.Join` is the correct way to construct paths, since this prevents
issues where a fragment like `/foo/` is concatenated with a fragment
like `/bar/quux/` to yield the string `/foo//bar/quux/`.
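For illustration:
```
package main

import (
	"fmt"
	"path"
)

func main() {
	// Naive concatenation leaves a doubled slash and a trailing slash.
	fmt.Println("/foo/" + "/bar/quux/") // /foo//bar/quux/
	// path.Join cleans both.
	fmt.Println(path.Join("/foo/", "/bar/quux/")) // /foo/bar/quux
}
```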
Given that continuing to use concatenation is no longer an option, the
solution is to compare the lengths of the strings to ensure that the
path under comparison is longer than the prefix it's being tested
against. This guarantees that the subject path is a route underneath the
`/chronograf/v1` route.
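A sketch of the resulting conditional (variable names assumed):
```
// Apply the token middleware only to routes strictly underneath
// rootPath, e.g. /chronograf/v1/sources but not /chronograf/v1 itself.
if strings.HasPrefix(r.URL.Path, rootPath) && len(r.URL.Path) > len(rootPath) {
	// token middleware applies here
}
```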
Updated the logout link in the UI to use a link provided by the
/chronograf/v1/ endpoint. We also replaced many instances of string
concatenation of URL paths with path.Join, which better handles cases
where prefixed and suffixed "/" characters may be present in provided
basepaths. We also refactored how Basepath was being prefixed when using
Auth. Documentation was also updated to warn users that basepaths should
be applied to the OAuth callback link when configuring OAuth with their
provider.
JWTs will only live five minutes into the future. Any time
the server receives an authenticated request, the JWT's expiration
will be extended further into the future.
* User can now set oauth cookie session duration via the CLI to any duration or to expire on browser close
* Refactor GET 'me' into heartbeat at constant interval
* Add ping route to all routes
* Add /chronograf/v1/ping endpoint for server status
* Refactor cookie generation to use an interface
* WIP adding refreshable tokens
* Add reminder to review index.js Login error handling
* Refactor Authenticator interface to accommodate cookie duration and logout delay
* Update make run-dev to be more TICKStack compliant
* Remove heartbeat/logout duration from authentication
* WIP Refactor tests to accommodate cookie and auth refactor
* Update oauth2 tests to newly refactored design
* Update oauth provider tests
* Remove unused oauth2/consts.go
* Move authentication middleware to server package
* Fix authentication comment
* Update authentication documentation to mention AUTH_DURATION
* Update /chronograf/v1/ping to simply return 204
* Fix Makefile run-dev target
* Remove spurious ping route
* Update auth docs to clarify authentication duration
* Revert "Refactor GET 'me' into heartbeat at constant interval"
This reverts commit 298a8c47e1431720d9bd97a9cb853744f04501a3.
Conflicts:
ui/src/index.js
* Add auth test for JWT signing method
* Add comments for why coverage isn't written for some areas of jwt code
* Update auth docs to explicitly mention how to require re-auth for all users on server restart
* Add Duration to Validation interface for Tokens
* Make auth duration of zero yield an everlasting token
* Revert "Revert "Refactor GET 'me' into heartbeat at constant interval""
This reverts commit b4773c15afe4fcd227ad88aa9d5686beb6b0a6cd.
* Rename http status constants and add FORBIDDEN
* Heartbeat only when logged in, notify user if heartbeat fails
* Update changelog
* Fix minor word semantics
* Update oauth2 tests to be in the oauth2_test package
* Add check at compile time that JWT implements Tokenizer
* Rename CookieMux to AuthMux for consistency with earlier refactor
* Fix logout middleware
* Fix logout button not showing due to obsolete data shape expectations
* Update changelog
* Fix proptypes for logout button data shape in SideNav
Re-mounting should only happen if the --prefix-routes option is set. If
this happens, the result will be a no-op as intended since the
--basepath will be "". MountableRouter and http.StripPrefix are both
no-ops with prefix set to ""
http.StripPrefix is a standard library handler which is designed to do
exactly what the inline http.HandlerFunc did (with almost the same
implementation).
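This works because the standard library special-cases the empty prefix;
for example:
```
// With prefix "", http.StripPrefix returns the next handler unchanged,
// so re-mounting with --basepath "" is a no-op.
handler := http.StripPrefix(opts.Basepath, router)
```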
In certain situations, the http.ResponseWriter passed to the URLPrefixer
may not be an http.Flusher. A simple case where this may occur is if the
Prefixer has been wrapped up in a middleware where the above middleware
wraps the ResponseWriter in a ResponseWriter that doesn't implement the
Flush method.
Previously, the Prefixer would error, which would cause the request to
fail with a 500. Instead, the condition is logged and the request is
passed unmodified to the next middleware in the chain. This effectively
disables prefixing for requests where the above ResponseWriter is not an
http.Flusher.
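A sketch of the guard (the field names on the prefixer are assumptions):
```
// If the wrapped ResponseWriter cannot flush, log and skip prefixing
// instead of failing the request with a 500.
flusher, ok := w.(http.Flusher)
if !ok {
	up.Logger.Info("unable to flush; skipping prefixing for this request")
	up.Next.ServeHTTP(w, r)
	return
}
_ = flusher // used below to flush each chunk of the modified stream
```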
Misc. Changes
=============
- Some tests for "builders" were moved to server/builders_test.go to
follow convention. We've been naming files after the things under test
and leaving the file matching the package name for support objects; in
this case, a mock logger was added to that file.
Some load balancers will strip prefixes on their way to the chronograf
backend, others won't. The "--prefix-routes" parameter forces all
requests to the backend to have the prefix specified in "--basepath".
Omitting it will only cause routes to be rewritten in rendered
templates and assumes that the load balancer will remove the prefix.
Use with Caddy
==============
An easy way to test this out is using the free Caddy http server at
http://caddyserver.com.
This Caddyfile will work with the options `--basepath /chronograf
--prefix-routes` set:
```
localhost:2020 {
    proxy /chronograf localhost:8888
    log stdout
}
```
This Caddyfile will work with only the option `--basepath /chronograf`
set:
```
localhost:2020 {
    proxy /chronograf localhost:8888 {
        except /chronograf
    }
    log stdout
}
```
This breaks compatibility with the old behavior of --basepath, so this
requires that proxies be configured to not modify routes forwarded to
backends. The old behavior will be supported in a subsequent commit.
The httprouter used in Chronograf did not support prefixing every route
with some basepath. This caused problems for those using the --basepath
parameter in combination with a load balancer that did not strip the
basepath prefix from requests that it forwarded onto Chronograf.
To support this, MountableRouter prefixes all routes at definition time
with the supplied prefix.
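A sketch of the idea (the shipped MountableRouter API may differ):
```
// Sketch only: the real router interface in Chronograf may differ.
type Router interface {
	Handler(method, path string, handler http.Handler)
}

// MountableRouter prepends Prefix to every route as it is defined.
type MountableRouter struct {
	Prefix   string
	Delegate Router
}

func (mr *MountableRouter) Handler(method, path string, h http.Handler) {
	mr.Delegate.Handler(method, mr.Prefix+path, h)
}
```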
* Introduce Kapacitor and InfluxDB as command line options
If omitted, their values will be null at runtime. If supplied, e.g.:
chronograf \
  --kapacitor https://path.to.my:1/kapacitor/instance \
  --influxdb https://path.to.my:1/influxdb/instance
Their values will be accessible via
Server.Kapacitor
Server.InfluxDB
* MultiSourcesStore will hold Bolt and config’d sources
* Delegate to db.SourcesStore for now
* Add Username/Password tags for InfluxDB and Kapacitor
* Builders for MultiSourceStore and MultiLayoutStore
* Store Kapacitor and InfluxDB configs in memory
* Typo
* Update CHANGELOG
* Move StoreBuilders to server/builders.go
* Correct these assertions by reversing them
* Kapacitor -> KapacitorURL; InfluxDB -> InfluxDBURL
* Redirect to default source on invalid source ID
When supplied with an invalid source ID, the CheckSources component
would redirect the user to a "Create Source" page. This caused
surprising behavior when a source was deleted because that source ID
would become invalid. The effect being that deleting a source brought
users immediately to the create source page, rather than back to the
sources list.
This instead redirects users to the default source when provided an
invalid source id. The backend automatically re-assigns the "default"
source, so this will always succeed, since sources are fetched again
from the backend.
The regex used depends slightly on URL structure that has been stable
over the lifetime of this project, though it relies on that structure
more heavily than the previous redirecting implementation did.
* Force sources to reload after deletion
Deleting a source invalidates the state held by the client because of
automatic re-assignment of the default source by the backend. Without
duplicating backend logic, it is impossible for the frontend to discover
the new source without reloading sources.
The ManageSources page now uses an async-action creator which deletes
the requested source and reloads all sources. The source action creators
have also been refactored to use implicit returns like other action
creators.
* Remove Dead removeSource action
removeSource is no longer used because the API invalidates its
assumptions. For more information, see 04bf3ca.
* Update Changelog with source deletion redirect fix
Users are no longer unexpectedly redirected to the "create source" page
whenever they delete a source that they are connected to.
* Return 404 when deleting non-existent source
When deleting a source, a new default is assigned automatically. If a
non-existent source ID was provided, previously this would result in a
500. This is a violation of the Swagger docs. The solution is to examine
the error and if it was an ErrSourceNotFound, invoke the notFound
handler.
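A sketch of the described handling (store and helper names are
assumptions):
```
// Sketch: examine the deletion error and map ErrSourceNotFound to 404.
if err := s.SourcesStore.Delete(ctx, src); err != nil {
	if err == chronograf.ErrSourceNotFound {
		notFound(w, id, s.Logger) // 404, per the Swagger docs
	} else {
		unknownErrorWithMessage(w, err, s.Logger) // 500
	}
	return
}
```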
* Add Error handling to source deletion
There are two kinds of errors that can be encountered when deleting a
source: a 404 and a 500 (from either the delete or the subsequent
fetch).
The 404 is a precondition failure of the action creator. The source.id
requested can be non-existent for two reasons: 1) The action creator was
passed garbage by the caller. 2) A concurrent write occurred which
silently invalidated this session's state. For the first case, we can
ensure that the caller is sane by having an assertion check that the
requested source is among some set of sources. This could be
circumvented by a caller, but chances are good that both the full set of
sources and the desired source are both available to callers of this
action creator. The second case is not an error. In this case, we should
proceed reloading sources, since the deletion that was requested has
already been performed by someone else.
Finally, 500s can only occur if there is something broken with the API.
In this situation, we provide a notification that tells the user to
check the API logs for more information.
* Remove duplicate CHANGELOG entries
These were introduced due to a naive merge conflict resolution.
* Remove assertion
This was decided to be confusing and unnecessary.
* Remove remnants of removed assertion
These were needed for an assertion that has been removed. It's no longer
necessary to pass `sources` to the action creator.
* Relax query validation for cell endpoint
* Dashboards can now add a cell; Rebase over 950-overlay_technologies-edit
* Server now returns empty queries array when creating a new dashboard cell
* Use async/await pattern for addDashboardCell, add basic error handling
* Update names of methods and actions for editing and updating cells to match those for adding
Factor out newDefaultCell to dashboard constants
* Update CHANGELOG
* Fix bug where Overlay wouldn’t display for query-less cells
* We removed these validations
* Correct documentation for dashboards
* Exclude .git and use 'make run-dev' in 'make continuous'
* Fix dashboard deletion bug where id serialization was wrong
* Commence creation of overlay technology, add autoRefresh props to DashboardPage
* Enhance overlay magnitude of overlay technology
* Add confirm buttons to overlay technology
* Refactor ResizeContainer to accommodate arbitrary containers
* Refactor ResizeContainer to require explicit ResizeTop and ResizeBottom for clarity
* Add markup and styles for OverlayControls
* CellEditorOverlay needs a larger minimum bottom height to accommodate more things
* Revert Visualization to not use ResizeTop or flex-box
* Remove TODO and move to issue
* Refactor CellEditorOverlay to allow selection of graph type
* Style Overlay controls, move confirm buttons to own stylesheet
* Fix toggle buttons in overlay so active is actually active
* Block user-select on a few UI items
* Update cell query shape to support Visualization and LayoutRenderer
* Code cleanup
* Repair fixture schema; update props for affected components
* Wired up selectedGraphType and activeQueryID in CellEditorOverlay
* Wire up chooseMeasurements in QueryBuilder
Pass queryActions into QueryBuilder so that DataExplorer can provide
actionCreators and CellEditorOverlay can provide functions that
modify its component state
* semicolon cleanup
* Bind all queryModifier actions to component state with a stateReducer
* Overlay Technologies™ can add and delete a query from a cell
* Semicolon cleanup
* Add conversion of InfluxQL to QueryConfig for dashboards
* Update go deps to add influxdb at af72d9b0e4
* Updated docs for dashboard query config
* Update CHANGELOG to mention InfluxQL to QueryConfig
* Make reducer’s name more specific for clarity
* Remove 'table' as graphType
* Make graph renaming prettier
* Remove duplicate DashboardQuery in swagger.json
* Fix swagger to include name and links for Cell
* Refactor CellEditorOverlay to enable graph type selection
* Add link.self to all Dashboard cells; add bolt migrations
* Make dash graph names only hover on contents
* Consolidate timeRange format patterns, clean up
* Add cell endpoints to dashboards
* Include Line + Stat in Visualization Type list
* Add cell link to dashboards
* Enable step plot and stacked graph in Visualization
* Overlay Technologies are summonable and dismissable
* OverlayTechnologies saves changes to a cell
* Convert NameableGraph to createClass for state
This was converted from a pure function to encapsulate the state of the
buttons. An attempt was made previously to store this state in Redux,
but it proved too convoluted with the current state of the reducers for
cells and dashboards. Another effort must take place to separate a cell
reducer to manage the state of an individual cell in Redux in order for
this state to be sanely kept in Redux as well.
For the time being, this state is being kept in the component for the
sake of expeditiousness, since this is needed for Dashboards to be
released. A refactor of this will occur later.
* Cells should contain a links key in server response
* Clean up console logs
* Use live data instead of a cellQuery fixture
* Update docs for dashboard creation
* DB and RP are already present in the Command field
* Fix LayoutRenderer’s understanding of query schema
* Return a new object, rather than mutate in place
* Visualization doesn’t use activeQueryID
* Selected is an object, not a string
* QueryBuilder refactored to use query index instead of query id
* CellEditorOverlay refactored to use query index instead of query id
* ConfirmButtons doesn’t need to act on an item
* Rename functions to follow convention
* Queries are no longer guaranteed to have ids
* Omit WHERE and GROUP BY clauses when saving query
* Select new query on add in OverlayTechnologies
* Add click outside to dash graph menu, style menu also
* Change context menu from ... to a caret
More consistent with the rest of the UI, better affordance
* Hide graph context menu in presentation mode
Don’t want people editing a dashboard from presentation mode
* Move graph refreshing spinner so it does not overlap with context menu
* Wire up Cell Menu to Overlay Technologies
* Correct empty dashboard type
* Refactor dashboard spec fixtures
* Test syncDashboardCell reducer
* Remove Delete button from graph dropdown menu (for now)
* Update changelog
This uses a provide() function in server/server.go, to push the
necessary oauth2.Provider and oauth2.Mux into the scope of the
server.Mux. This allows the server.Mux to configure its routes without
caring which Providers are enabled, which switches/ENVs are set etc. It
configures its routes optimistically and leaves the higher-order logic
to decide whether to actually invoke the logic used by the mux to
configure routes for that provider.
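A sketch of the pattern (the actual signature in server/server.go may
differ):
```
// provide only invokes the route-configuration callback when the
// provider is actually enabled; callers register optimistically.
func provide(p oauth2.Provider, m oauth2.Mux, enabled bool) func(func(oauth2.Provider, oauth2.Mux)) {
	return func(configure func(oauth2.Provider, oauth2.Mux)) {
		if enabled {
			configure(p, m)
		}
	}
}
```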
This allows operators to permit access to Chronograf only to users belonging
to a set of specific Heroku organizations. This is controlled using the
HEROKU_ORGS env or the --heroku-organizations switch.
JWTMux was a disingenuous name because while JWTs are a very good choice
for a cookie encoding, they were not strictly required for use with this
mux. To better indicate the responsibilities of this mux, it's been
renamed "CookieMux," since its responsibilities end with persisting the
oauth2.Authenticator's encoded state in the browser. It is up to the
oauth2.Authenticator to choose the encoding.
If a --token-secret, --heroku-client-id, and --heroku-secret are
provided to Chronograf, it will add Heroku as an OAuth2 provider. These
tokens can be obtained (as of this writing) by visiting your "manage
account" page, navigating to "Applications," and then clicking "Register
New API Client" under the "API Clients" section.
Created an oauth2 package which encapsulates all oauth2 providers,
utility functions, types, and interfaces. Previously some methods of the
Github provider were used as http.HandlerFuncs. These have now been
pulled into a concrete type called a JWTMux to implement other OAuth2
providers.
JWTMux has all of the functionality required to take a token from any
provider and store it as a JWT in a browser, and that is the extent of
its responsibilities. It implements the oauth2.Mux interface which would
potentially allow other strategies of oauth2 credential storage.
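The interface plausibly looks something like this (a sketch, not the
verbatim definition):
```
// Mux persists an oauth2.Authenticator's encoded state in the browser.
type Mux interface {
	Login() http.Handler    // redirect the user to the provider
	Logout() http.Handler   // expire the credential stored in the browser
	Callback() http.Handler // exchange the code and persist the token
}
```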
Since this is a flag that is being accepted by the application, it makes
sense to group it with the other flags. Also, the `json` struct tag was
a remnant from an earlier attempt at implementing this feature, and is
no longer necessary.
URLPrefixer had nothing to do with assets, so it actually belongs up in
the mux, where we're assembling handlers together across the
application.
Also, the setup was painful to look at, and others will probably use the
same `Attrs`, so a `NewDefaultURLPrefixer` was added to spawn a prefixer
with only a prefix and a next handler.
React-router and also the client that we use in the frontend need to be
informed on how to access the Chronograf backend when it's being hosted
on a route other than /. To accomplish this, a data attribute is written
into the `<div>` which serves as our React root. We then make the React
router aware of this if it's set and also pass the prefix to axios (our
front end HTTP client) by way of window.
Originally, it was desired to have the basepath accessible via an API,
but this proved to be impossible because to access that API, the front
end would already need to know the basepath. The technique we went with
was arrived at independently, but is also used by Jupyter notebooks,
which encountered the same problem.
The prefixer needs to replace more than just `src="` attributes as it
currently does, because those are not the only places a relative URL
can appear. It also needs to prefix URLs found in CSS, which can
likewise come from the downstream http.ResponseWriter.
This adds support for an arbitrary list of patterns that will cause the
prefixer to insert its configured prefix. This is currently set to look
for `src`, `href`, and `url()` attributes.
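As a sketch, the configured patterns might be expressed as (names
assumed):
```
// Assumed default patterns matching the description above; each match
// causes the configured prefix to be inserted after the pattern.
var defaultAttrs = [][]byte{
	[]byte(`src="`),
	[]byte(`href="`),
	[]byte(`url(`),
}
```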
Also, because we are modifying the stream, we need to suppress the
Content-Length generated by any downstream http.Handlers and instead
enable Transfer-Encoding: chunked so that we can stream the modified
response (we don't know a priori how many times we'll perform a
prefixing, so we can't calculate a final Content-Length). This is
accomplished by duplicating the Headers in the wrapResponseWriter that
is handed to the `Next` handler. We also handle the chunking and
Flushing that needs to happen as a result of using chunked transfer
encoding.
In order to support hosting chronograf under an arbitrary path[1], we
need to be able to rewrite all the URLs that are served in HTML and CSS.
Take, for example, the scenario where Chronograf is to be hosted under
`/chronograf` using Caddy and this example Caddyfile:
```
localhost:2020
gzip
proxy /chronograf localhost:8888 {
    without /chronograf
}
```
Chronograf will not load properly when visiting
`http://localhost:2020/chronograf` because requests for scripts, CSS,
and fonts will go to `http://localhost:2020/app-somegianthash.js` when
they should go to `http://localhost:2020/chronograf/app-somegianthash.js`.
This is the essence of issue #721.
To solve this, we add a URLPrefixer http.Handler that acts as a
middleware. It inserts itself between any upstream handlers and the
handler that was passed to it as its `Next` parameter, and searches for
`src="` attributes. Upon discovering one of these attributes, it writes
the detected attribute and then the configured prefix. It then continues
writing the stream to the upstream http.ResponseWriter, repeating this
for each attribute it encounters until EOF.