Latest commit: 7ace10f7e6 by Jason Wilder, "Load shards from filesystem when tsdb.Store is opened" (2015-05-26 15:43:46 -06:00)
.hooks commit hook was inaccuratly reporting vet errors due to how we ran vet 2015-02-12 11:17:41 -07:00
admin Fix data race w/ stopping admin server 2015-04-09 20:51:46 -06:00
client Wait for quorum write before returning from Log.Apply(). 2015-05-13 16:05:26 -06:00
cluster Fix breakage from rename of data to cluster package 2015-05-26 15:42:44 -06:00
cmd Move data.Point to tsdb.Point 2015-05-22 15:00:51 -06:00
collectd Convert Point.Fields to Point.Fields() 2015-05-22 15:22:03 -06:00
etc Default Raft election timeout to 5 seconds 2015-05-12 13:57:28 -07:00
graphite Convert Point to interface 2015-05-22 15:39:55 -06:00
httpd Wait for quorum write before returning from Log.Apply(). 2015-05-13 16:05:26 -06:00
influxql more derivative tests 2015-05-19 16:25:23 -06:00
messaging Re-add raft/messaging 2015-05-18 12:48:10 -06:00
meta Implement meta.Store and meta.Data. 2015-05-25 16:28:58 -06:00
opentsdb Move data.Point to tsdb.Point 2015-05-22 15:00:51 -06:00
raft Re-add raft/messaging 2015-05-18 12:48:10 -06:00
scripts init.sh: double start considers success 2015-05-11 20:27:38 +03:00
shared/admin Update admin assets. 2015-02-23 22:30:38 -08:00
statik Update admin assets. 2015-02-23 22:30:38 -08:00
tcp Fix breakage from rename of data to cluster package 2015-05-26 15:42:44 -06:00
test Fix rebase issues. 2015-05-22 16:15:36 -04:00
tests add in non-distinct data to distinct seed 2015-05-19 12:29:52 -06:00
tsdb Load shards from filesystem when tsdb.Store is opened 2015-05-26 15:43:46 -06:00
udp Move data.Point to tsdb.Point 2015-05-22 15:00:51 -06:00
uuid Replace code.google.com/p/go-uuid with TimeUUID from gocql 2015-03-30 09:02:11 -07:00
.gitignore Write path interfaces 2015-05-18 12:35:34 -06:00
CHANGELOG.md update changelog 2015-05-19 12:29:52 -06:00
CONTRIBUTING.md Update CONTRIBUTING.md with protobuf setup 2015-05-21 11:29:34 -06:00
LICENSE update the year in the LICENSE file 2015-01-05 21:35:28 -08:00
QUERIES.md Update QUERIES.md 2015-01-29 16:25:12 -05:00
README.md change timestamp to time 2015-05-11 12:28:47 -05:00
balancer.go Load balance distributed queries across data nodes 2015-04-17 11:28:47 -06:00
balancer_test.go Load balance distributed queries across data nodes 2015-04-17 11:28:47 -06:00
broker.go Don't set client to nil when closing broker 2015-04-21 15:18:07 -06:00
broker_test.go Make top-level handler less brittle 2015-04-13 15:38:42 -06:00
circle-test.sh Re-enable test parallelism 2015-05-18 09:44:43 -07:00
circle.yml Install from binary package. 2015-04-20 15:11:32 -07:00
commands.go change timestamp to time 2015-05-11 12:28:47 -05:00
continuous_queries.md Add stuff on architecture 2015-02-17 20:59:57 -05:00
database.go Remove timeBetweenInclusive 2015-05-19 11:20:18 -06:00
diagnostics.go Start move to 64-bit ints 2015-04-07 12:58:44 -07:00
influxdb.go Move data.Point to tsdb.Point 2015-05-22 15:00:51 -06:00
influxdb_test.go Move data.Point to tsdb.Point 2015-05-22 15:00:51 -06:00
internal_test.go Remove timeBetweenInclusive 2015-05-19 11:20:18 -06:00
metastore.go Add single-node raft back metastore. 2015-05-20 16:49:03 -06:00
package.sh package.sh: restore lost newline 2015-05-14 22:11:35 +03:00
remote_mapper.go Add failover to other data nodes for distributed queries 2015-04-17 11:28:47 -06:00
server.go Load shards from filesystem when tsdb.Store is opened 2015-05-26 15:43:46 -06:00
server_test.go Move data.Point to tsdb.Point 2015-05-22 15:00:51 -06:00
shard.go Remove shard Read/Write interface funcs 2015-05-20 10:01:29 -06:00
snapshot.go track more stats and report errors for shards 2015-04-22 15:37:59 -06:00
snapshot_test.go Add incremental backups. 2015-03-24 15:57:03 -06:00
stats.go Fix data races in stats 2015-04-27 22:57:51 -07:00
stats_test.go Add race test 2015-04-25 10:00:17 -07:00
tx.go refactor validate type in aggregate transaction 2015-05-19 12:29:39 -06:00

README.md

InfluxDB [Circle CI build status]

An Open-Source, Distributed, Time Series Database

InfluxDB v0.9.0 is now in the alpha phase. Builds are currently tagged as RCs, but they are still alpha-stage software. We will update this document when the first stable RC is ready. The API in the current builds should not change significantly between now and the final 0.9.0 release; most of the work we're doing now is focused on features and stability for clustering. So please develop against the current 0.9.0 RCs only for new projects that won't go into production for a little while.

InfluxDB is an open source distributed time series database with no external dependencies. It's useful for recording metrics and events, and for performing analytics.

Features

  • Built-in HTTP API (http://influxdb.com/docs/v0.9/concepts/reading_and_writing_data.html), so you don't have to write any server-side code to get up and running.
  • Clustering supported out of the box, so you can scale horizontally to handle your data.
  • Simple to install and manage, and fast to get data in and out.
  • Designed to answer queries in real time: every data point is indexed as it comes in and is immediately available to queries, which should return in under 100 ms.

Getting Started

The following directions apply only to the 0.9.0 release or to building from source on master.

Building

You don't need to build the project to use it: you can install InfluxDB from any of our pre-built packages, and that's the recommended way to get it running. However, if you want to contribute to the core of InfluxDB, you'll need to build it from source. For those adventurous enough, you can follow along on our docs.
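For orientation, here is a rough sketch of a source build, assuming a standard Go workspace with $GOPATH set and the repository living at github.com/influxdb/influxdb; the docs and CONTRIBUTING.md (which covers protobuf setup) remain the authoritative reference:

# Fetch the source into your Go workspace (repository path is an assumption for this era of the project)
go get -d github.com/influxdb/influxdb
cd $GOPATH/src/github.com/influxdb/influxdb
# Build and install the influxd binary into $GOPATH/bin
go install ./cmd/influxd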

Starting InfluxDB

  • service influxdb start if you have installed InfluxDB using an official Debian or RPM package.
  • $GOPATH/bin/influxd if you have built InfluxDB from source (a quick start-and-check sketch follows this list).
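As a minimal smoke test from a source build, the following sketch starts the daemon and checks that the HTTP API is answering; the /ping endpoint and its empty 204 response are assumptions about this build:

# Start the daemon in the background
$GOPATH/bin/influxd &
# Give it a moment to bind its ports, then poke the HTTP API
# (a 204 No Content response from /ping is assumed to mean the server is up)
sleep 2
curl -i http://localhost:8086/ping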

Creating your first database

curl -G 'http://localhost:8086/query' --data-urlencode "q=CREATE DATABASE mydb"
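You can confirm the database exists by listing databases with SHOW DATABASES; the exact shape of the returned JSON may vary between builds:

curl -G 'http://localhost:8086/query' --data-urlencode "q=SHOW DATABASES"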

Insert some data

curl -H "Content-Type: application/json" http://localhost:8086/write -d '
{
    "database": "mydb",
    "retentionPolicy": "default",
    "points": [
        {
            "time": "2014-11-10T23:00:00Z",
            "name": "cpu",
             "tags": {
                 "region":"uswest",
                 "host": "server01"
            },
             "fields":{
                 "value": 100
            }
         }
      ]
}'
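Because points is an array, a single request can carry several points. A sketch batching two measurements into one write (the mem measurement and its values are purely illustrative):

curl -H "Content-Type: application/json" http://localhost:8086/write -d '
{
    "database": "mydb",
    "retentionPolicy": "default",
    "points": [
        {
            "time": "2014-11-10T23:01:00Z",
            "name": "cpu",
            "tags": {"region": "uswest", "host": "server01"},
            "fields": {"value": 95}
        },
        {
            "time": "2014-11-10T23:01:00Z",
            "name": "mem",
            "tags": {"region": "uswest", "host": "server01"},
            "fields": {"value": 2048}
        }
    ]
}'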

Query for the data

curl -G 'http://localhost:8086/query?pretty=true' \
--data-urlencode "db=mydb" --data-urlencode "q=SELECT * FROM cpu"