chronosdb

Scalable datastore for metrics, events, and real-time analytics

Requirements

  • horizontally scalable
  • HTTP interface
  • UDP interface (low priority)
  • persistent
  • metadata for time series
  • perform aggregate functions quickly (count, unique, sum, etc.)
  • group by time intervals (e.g. count ticks every 5 minutes)
  • join multiple time series to generate new time series
  • schema-less
  • SQL-like query language (see the hypothetical sketch after this list)
  • support multiple databases with read/write API keys
  • a single time series should scale horizontally (no hot spots)
  • dynamic cluster changes and data balancing
  • pub/sub layer
  • continuous queries (keep the connection open and return new points as they arrive)
  • delete ranges of points from any number of time series (which should be reflected in disk space usage)
  • querying should support one or more time series (possibly with a regex to match on)
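
For a flavor of the SQL-like language and regex matching, here is a purely hypothetical sketch; none of this syntax is committed anywhere in the repo:

    select count(*) from events group by time(5m)
    select * from /cpu\..*/ where time > now() - 1h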

New Requirements

  • Easy to back up and restore
  • Large time range queries with one column?
  • Optimize for HDD access?
  • What are the common use cases we should optimize for?

Modules

       +--------------------+   +--------------------+
       |                    |   |                    |
       |  WebConsole/docs   |   |      Http API      |
       |                    |   |                    |
       +------------------+-+   +-+------------------+
                          |       |
                          |       |
                    +-----+-------+-----------+
                    |                         |
                    |  Lang. Bindings         |
                    |                         |
                    +-----------------+       |
                    |                 |       |
                    |   Query Engine  |       |
                    |                 |       |
                    +-----------------+-------+
                    |                         |
               +----+ Coordinator (consensus) +-----+
               |    |                         |     |
               |    +-------------------------+     |
               |                                    |
               |                                    |
      +--------+-----------+                +-------+------------+
      |                    |                |                    |
      |   Storage Engine   |                |   Storage Engine   |
      |                    |                |                    |
      +--------+-----------+                +-------+------------+
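
As a rough sketch, the module boundaries above might map to Go types like the following. Every name here is an assumption for illustration, not code from src/:

    package chronosdb

    // Point is one entry in a time series. This shape is a guess for
    // illustration, not the project's actual data model.
    type Point struct {
        SeriesName     string
        Time           int64
        SequenceNumber uint64
        Values         map[string]interface{}
    }

    // Coordinator sits between the query engine and the storage engines:
    // it uses the consensus layer to decide which machines own a write or
    // query and routes requests accordingly.
    type Coordinator interface {
        WritePoints(db string, points []Point) error
        RunQuery(db, query string) ([]Point, error)
    }

    // StorageEngine is the per-node persistence layer.
    type StorageEngine interface {
        Write(points []Point) error
        Read(seriesName string, startTime, endTime int64) ([]Point, error)
    }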

Replication & Consensus Notes

A single Raft cluster tracks which machines are in the cluster and which machines own which hash ring locations.

  1. When a write comes into a server, it figures out which machine owns the data and proxies the write to it (see the sketch after this list).
  2. The owning machine assigns the write a sequence number.
  3. Every 10 seconds, each machine in the cluster asks the other machines that own its hash ring locations what their latest sequence number is (this is read repair).
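
A minimal sketch of steps 1 and 2, reusing the hypothetical types from the Modules section; the ring and server shapes are assumptions, not the actual implementation:

    package chronosdb

    import "math/rand"

    type ServerID string

    // Server is a hypothetical handle on another cluster member.
    type Server interface {
        ID() ServerID
        ProxyWrite(p Point) error // forward a write over the cluster protocol
    }

    // Ring maps a series to the servers that own its hash ring location.
    type Ring interface {
        OwnersFor(seriesName string) []Server // e.g. [B, C] for location #2
    }

    // lastSeq is a stand-in for the real per-log sequence counter
    // (not concurrency-safe; illustration only).
    var lastSeq uint64

    func handleWrite(ring Ring, local ServerID, store StorageEngine, p Point) error {
        owners := ring.OwnersFor(p.SeriesName)
        target := owners[rand.Intn(len(owners))] // pick an owner at random
        if target.ID() == local {
            // this server owns the data: assign the next sequence
            // number (step 2) and hand the point to the storage engine
            lastSeq++
            p.SequenceNumber = lastSeq
            return store.Write([]Point{p})
        }
        // otherwise proxy to the owner, which assigns the sequence number
        return target.ProxyWrite(p)
    }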

For example, take machines A, B, and C, and say B and C own ring location #2. If a write comes into A, it will look up the configuration and pick B or C at random to proxy the write to. Say it goes to B. B assigns the write sequence number 1 and keeps a log of the writes for B2. It also keeps a log of C2's writes. It then tries to write #1 to C.

If the write is marked as a quorum write, B won't return success to A until the data has been written to both B and C. Every so often, B and C will ask each other what their latest writes are.
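
A sketch of the quorum path under the same hypothetical types as above: the owner that received the write (B) writes locally, then pushes to every other owner of the location (C) before acknowledging:

    func quorumWrite(store StorageEngine, otherOwners []Server, p Point) error {
        if err := store.Write([]Point{p}); err != nil {
            return err // local write failed; A sees no success
        }
        for _, owner := range otherOwners {
            if err := owner.ProxyWrite(p); err != nil {
                return err // a replica is missing the point; don't ack yet
            }
        }
        return nil // written everywhere; safe to return success to A
    }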

Taking the example further, suppose we had a server D that also owned ring location #2. B would ask C for writes to C2; if C is down, it will ask D for writes to C2 instead. This ensures that no data is lost if C fails.
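
And a sketch of the read-repair exchange from step 3, again with hypothetical names: a machine walks the owners of one replica log (e.g. C2), skips any that are down (falling back from C to D), and backfills whatever writes it is missing:

    // Peer exposes the two questions a machine asks during read repair.
    type Peer interface {
        LatestSequence(logName string) (uint64, error)
        WritesSince(logName string, seq uint64) ([]Point, error)
    }

    func readRepair(logName string, localSeq uint64, owners []Peer, store StorageEngine) error {
        for _, peer := range owners {
            theirSeq, err := peer.LatestSequence(logName)
            if err != nil {
                continue // this owner (C) is down; fall back to the next (D)
            }
            if theirSeq <= localSeq {
                return nil // already up to date with a healthy owner
            }
            points, err := peer.WritesSince(logName, localSeq)
            if err != nil {
                continue
            }
            return store.Write(points) // backfill the missing writes
        }
        return nil // no owner reachable this round; try again in 10 seconds
    }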

Coding Style

  1. Public functions should be at the top of the file, followed by a comment reading // private functions and then all the private functions (see the example below).
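
For example, a file following this convention would be laid out as (all names are placeholders):

    package coordinator // hypothetical package name

    func PublicFunction() {
        helper()
    }

    // private functions

    func helper() {
        // ...
    }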