ArcadeDB TimeSeries supports multiple ingestion methods and integrates with Grafana for visualization. Before ingesting data, create a TimeSeries type that defines the timestamp column, tags (dimensions for filtering), and fields (numeric measurements).

1. Create a TimeSeries Type (SQL)

Define the schema before ingesting. Tags are indexed dimensions for fast filtering; fields are the numeric measurements.

CREATE TIMESERIES TYPE stocks
  TIMESTAMP ts
  TAGS (symbol STRING)
  FIELDS (open DOUBLE, close DOUBLE, high DOUBLE, low DOUBLE, volume LONG)
  SHARDS 4

Optional parameters: RETENTION for automatic data expiration (given in milliseconds or as a duration, e.g. 30 DAYS), COMPACTION_INTERVAL <ms> for time-bucketed compaction, and IF NOT EXISTS to avoid an error if the type already exists.

2. InfluxDB Line Protocol (HTTP API) — Recommended for Bulk Ingestion

The fastest way to ingest large volumes of data. Send one or more lines in InfluxDB Line Protocol format. Each line is: measurement,tag1=val1 field1=value1,field2=value2 timestamp

Endpoint
POST /api/v1/ts/{database}/write?precision=ns|us|ms|s
Line Protocol Format
# measurement,tag1=val1,tag2=val2 field1=value1,field2=value2 timestamp
stocks,symbol=TSLA open=250.64,close=252.10,high=253.50,low=249.80,volume=125000i 1700000000000000000
stocks,symbol=AAPL open=195.20,close=196.50,high=197.00,low=194.80,volume=89000i  1700000000000000000

Data type hints: integers require an i suffix (e.g. volume=125000i), floats are bare numbers, strings are quoted ("value"). Timestamp precision defaults to nanoseconds; use ?precision=ms for milliseconds.
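The formatting rules above can be sketched as a small helper. `format_line` is a hypothetical name for illustration, not part of any ArcadeDB client library:

```python
def format_line(measurement, tags, fields, timestamp):
    """Build one InfluxDB Line Protocol line.

    Integers get an 'i' suffix, floats stay bare, strings are quoted.
    """
    def fmt(v):
        if isinstance(v, bool):          # bool is a subclass of int; check it first
            return "true" if v else "false"
        if isinstance(v, int):
            return f"{v}i"
        if isinstance(v, float):
            return repr(v)
        return f'"{v}"'

    tag_part = "".join(f",{k}={v}" for k, v in tags.items())
    field_part = ",".join(f"{k}={fmt(v)}" for k, v in fields.items())
    return f"{measurement}{tag_part} {field_part} {timestamp}"

line = format_line("stocks", {"symbol": "TSLA"},
                   {"open": 250.64, "volume": 125000}, 1700000000000)
# → stocks,symbol=TSLA open=250.64,volume=125000i 1700000000000
```

Note this sketch does not escape commas or spaces inside tag values; real line-protocol producers must handle that.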

curl Example — Single Point
curl -u root:password -X POST \
  "http://localhost:2480/api/v1/ts/mydb/write?precision=ms" \
  -H "Content-Type: text/plain" \
  --data-binary 'stocks,symbol=TSLA open=250.64,close=252.10,high=253.50,low=249.80,volume=125000i 1700000000000'
curl + Python Example — Generate 5,000 Sample Points

Generates realistic stock data for 5 symbols, 1,000 points each at 1-minute intervals:

curl -s -u root:password -X POST \
  "http://localhost:2480/api/v1/ts/mydb/write" \
  -H "Content-Type: text/plain" \
  --data-binary "$(python3 -c "
import random, time

symbols = ['TSLA', 'AAPL', 'GOOGL', 'MSFT', 'AMZN']
bases = {'TSLA': 250, 'AAPL': 195, 'GOOGL': 175, 'MSFT': 420, 'AMZN': 185}

now_ns = int(time.time() * 1e9)
interval = 60 * 1_000_000_000  # 1 min in ns

lines = []
for sym in symbols:
    price = bases[sym]
    for i in range(1000):
        ts = now_ns - (1000 - i) * interval
        o = round(price + random.uniform(-2, 2), 2)
        c = round(o + random.uniform(-3, 3), 2)
        h = round(max(o, c) + random.uniform(0, 2), 2)
        l = round(min(o, c) - random.uniform(0, 2), 2)
        v = random.randint(10000, 500000)
        lines.append(f'stocks,symbol={sym} open={o},close={c},high={h},low={l},volume={v}i {ts}')
        price = c
print('\n'.join(lines))
")"

Returns 204 No Content on success. Unknown measurement names (no matching TimeSeries type) are silently skipped.

3. SQL INSERT (via Command API)

Use standard SQL INSERT statements through the generic command endpoint. Useful for small batches or when integrating with existing SQL-based workflows.

Endpoint
POST /api/v1/command/{database}
SQL Syntax
INSERT INTO stocks (ts, symbol, open, close, high, low, volume)
  VALUES (1700000000000, 'TSLA', 250.64, 252.10, 253.50, 249.80, 125000)
curl Example
curl -u root:password -X POST \
  "http://localhost:2480/api/v1/command/mydb" \
  -H "Content-Type: application/json" \
  -d '{
    "language": "sql",
    "command": "INSERT INTO stocks (ts, symbol, open, close, high, low, volume) VALUES (1700000000000, '\''TSLA'\'', 250.64, 252.10, 253.50, 249.80, 125000)"
  }'

Timestamps are in milliseconds (epoch). Each INSERT runs inside a transaction. For bulk inserts, prefer the Line Protocol endpoint.
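Building the JSON body programmatically avoids the shell-quoting gymnastics of the curl example. A minimal sketch, assuming the payload shape shown above ({"language": "sql", "command": ...}); `build_insert_command` is a hypothetical helper name:

```python
import json

def build_insert_command(row):
    """Build the JSON body for POST /api/v1/command/{database}.

    SQL string literals are escaped by doubling single quotes.
    """
    sym = str(row["symbol"]).replace("'", "''")
    sql = (
        "INSERT INTO stocks (ts, symbol, open, close, high, low, volume) "
        f"VALUES ({row['ts']}, '{sym}', {row['open']}, {row['close']}, "
        f"{row['high']}, {row['low']}, {row['volume']})"
    )
    return json.dumps({"language": "sql", "command": sql})

body = build_insert_command({
    "ts": 1700000000000, "symbol": "TSLA",
    "open": 250.64, "close": 252.10, "high": 253.50,
    "low": 249.80, "volume": 125000,
})
```

The resulting string can be sent as the request body with any HTTP client, using Basic Auth as in the curl examples.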

4. Java Embedded API

For applications embedding ArcadeDB directly, use the TimeSeriesEngine API for maximum performance with zero network overhead.

// Get the TimeSeries type and engine
LocalTimeSeriesType tsType = (LocalTimeSeriesType) db.getSchema().getType("stocks");
TimeSeriesEngine engine = tsType.getEngine();

// Prepare sample data (batch of N samples)
long[] timestamps = new long[] { System.currentTimeMillis(), System.currentTimeMillis() + 1000 };
Object[][] columns = new Object[][] {
    { "TSLA", "TSLA" },        // symbol (tag)
    { 250.64, 251.30 },        // open
    { 252.10, 253.00 },        // close
    { 253.50, 254.00 },        // high
    { 249.80, 250.50 },        // low
    { 125000L, 130000L }       // volume
};

db.begin();
engine.appendSamples(timestamps, columns);
db.commit();

The columns array must match the order of non-timestamp columns as defined in the type schema. Use tsType.getTsColumns() to inspect the column order.

5. Analytical SQL Functions

Use these aggregate functions in SQL queries via the Query tab for advanced time series analysis.

Function                             Description
ts.rate(value, ts)                   Per-second rate of change
ts.rate(value, ts, true)             Rate with counter reset detection (Prometheus-style)
ts.delta(value, ts)                  Difference between first and last values
ts.percentile(value, 0.95)           Approximate percentile (p50, p95, p99, etc.)
ts.movingAvg(value, 10)              Moving average with configurable window size
ts.interpolate(value, 'linear', ts)  Gap filling with linear interpolation
ts.interpolate(value, 'prev')        Gap filling with previous value
ts.first(value, ts)                  First value ordered by timestamp
ts.last(value, ts)                   Last value ordered by timestamp
ts.correlate(a, b)                   Pearson correlation coefficient between two series
ts.timeBucket(60000, ts)             Time bucketing for GROUP BY (interval in ms)
Example: p95 Latency per Minute
SELECT ts.timeBucket(60000, ts) AS bucket, ts.percentile(latency, 0.95) AS p95
  FROM metrics
  WHERE ts BETWEEN 1700000000000 AND 1700086400000
  GROUP BY bucket ORDER BY bucket
Example: Rate with Counter Reset Detection
SELECT ts.timeBucket(300000, ts) AS bucket, ts.rate(requests_total, ts, true) AS req_per_sec
  FROM counters GROUP BY bucket ORDER BY bucket
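To make the counter-reset semantics concrete, here is a sketch of the Prometheus-style calculation in Python. This illustrates the general technique only; ArcadeDB's actual ts.rate implementation may differ in details:

```python
def rate_with_resets(values, timestamps_ms):
    """Per-second rate over a window, treating any drop in a
    monotonically increasing counter as a reset (Prometheus-style):
    after a reset the counter restarted from zero, so the current
    value itself is the increase since the reset.
    """
    increase = 0.0
    for prev, cur in zip(values, values[1:]):
        increase += cur - prev if cur >= prev else cur
    elapsed_s = (timestamps_ms[-1] - timestamps_ms[0]) / 1000.0
    return increase / elapsed_s if elapsed_s > 0 else 0.0

# Counter resets between 300 and 50; total increase = 100+100+50+150 = 400 over 40 s
r = rate_with_resets([100, 200, 300, 50, 200],
                     [0, 10_000, 20_000, 30_000, 40_000])
# → 10.0 per second
```

A naive delta over the same window would report a negative rate at the reset; the reset-aware version keeps the rate meaningful across process restarts.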
6. Downsampling & Retention Policies

Configure automatic data lifecycle management. Retention removes old data; downsampling reduces granularity for historical data. Both policies are enforced automatically by a background scheduler (every 60 seconds).

Set Retention on Create
CREATE TIMESERIES TYPE metrics TIMESTAMP ts
  TAGS (host STRING) FIELDS (cpu DOUBLE, mem DOUBLE)
  RETENTION 30 DAYS
Add Downsampling Policy
ALTER TIMESERIES TYPE metrics ADD DOWNSAMPLING POLICY
  AFTER 7 DAYS GRANULARITY 1 HOURS
  AFTER 30 DAYS GRANULARITY 1 DAYS

You can also manage downsampling policies from the Schema tab using the visual controls.
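The tiered policy above can be read as an age-to-granularity lookup. The sketch below is my reading of the ALTER statement's semantics, not ArcadeDB internals; the scheduler's actual resolution logic may differ:

```python
HOUR_MS = 60 * 60 * 1000
DAY_MS = 24 * HOUR_MS

# (age threshold, target granularity) pairs mirroring the ALTER statement,
# oldest tier first so the coarsest applicable bucket wins
POLICIES = [(30 * DAY_MS, DAY_MS),     # after 30 days → 1-day buckets
            (7 * DAY_MS, HOUR_MS)]     # after 7 days  → 1-hour buckets

def granularity_for(age_ms):
    """Return the bucket size a sample of the given age is downsampled to,
    or None if it is still stored at full resolution."""
    for threshold, bucket in POLICIES:
        if age_ms >= threshold:
            return bucket
    return None

g = granularity_for(10 * DAY_MS)   # 10-day-old data → 1-hour buckets
```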

Method Comparison
Method         Best For                                      Throughput  Batching
Line Protocol  Bulk ingestion, IoT, metrics collection       Highest     Native (multi-line)
SQL INSERT     Small batches, ad-hoc inserts, SQL workflows  Medium      One row per statement
Java API       Embedded applications, maximum control        Highest     Native (array-based)

Grafana Integration

ArcadeDB exposes Grafana DataFrame-compatible endpoints so you can visualize TimeSeries data in Grafana without a custom plugin. Use the Grafana Infinity datasource plugin to connect.

1. Install the Infinity Datasource Plugin

The Infinity plugin is a generic JSON/CSV/XML datasource maintained by the Grafana community. Install it from the Grafana plugin catalog or via CLI:

grafana cli plugins install yesoreyeram-infinity-datasource
# then restart Grafana
2. Configure the Datasource

In Grafana, go to Connections → Data Sources → Add data source, select Infinity, and configure:

Setting          Value
URL              http://<arcadedb-host>:2480
Authentication   Basic Auth
User / Password  Your ArcadeDB credentials

Test the connection using the Health Check URL:

GET /api/v1/ts/{database}/grafana/health

# Response: { "status": "ok", "database": "mydb" }
3. Available Endpoints
Method  Endpoint                          Purpose
GET     /api/v1/ts/{db}/grafana/health    Datasource health check
GET     /api/v1/ts/{db}/grafana/metadata  Discover types, fields, tags, aggregation types
POST    /api/v1/ts/{db}/grafana/query     Query → Grafana DataFrame JSON
4. Discover Available Metrics (Metadata)

Use the metadata endpoint to discover TimeSeries types and their fields before configuring panels:

curl -u root:password \
  "http://localhost:2480/api/v1/ts/mydb/grafana/metadata"
Response
{
  "types": [
    {
      "name": "weather",
      "fields": [{ "name": "temperature", "dataType": "DOUBLE" }],
      "tags": [{ "name": "location", "dataType": "STRING" }]
    }
  ],
  "aggregationTypes": ["SUM", "AVG", "MIN", "MAX", "COUNT"]
}
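A small sketch of walking that response to map each TimeSeries type to its queryable fields (e.g. to populate a dashboard dropdown). The response shape is taken verbatim from the example above:

```python
# Example metadata response, as documented above
metadata = {
    "types": [
        {"name": "weather",
         "fields": [{"name": "temperature", "dataType": "DOUBLE"}],
         "tags": [{"name": "location", "dataType": "STRING"}]}
    ],
    "aggregationTypes": ["SUM", "AVG", "MIN", "MAX", "COUNT"],
}

# Map each TimeSeries type to its numeric field names
fields_by_type = {t["name"]: [f["name"] for f in t["fields"]]
                  for t in metadata["types"]}
# → {"weather": ["temperature"]}
```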
5. Build a Panel Query

Configure the Infinity plugin to POST JSON to the query endpoint. Each target maps to a Grafana panel query (refId A, B, C...). The response uses the columnar DataFrame format Grafana expects.

Request
{
  "from": 1700000000000,
  "to": 1700086400000,
  "maxDataPoints": 1000,
  "targets": [
    {
      "refId": "A",
      "type": "weather",
      "fields": ["temperature"],
      "tags": { "location": "us-east" },
      "aggregation": {
        "bucketInterval": 60000,
        "requests": [
          { "field": "temperature", "type": "AVG", "alias": "avg_temp" }
        ]
      }
    }
  ]
}
Response (DataFrame format)
{
  "results": {
    "A": {
      "frames": [{
        "schema": {
          "fields": [
            { "name": "time", "type": "time" },
            { "name": "avg_temp", "type": "number" }
          ]
        },
        "data": {
          "values": [
            [1700000000000, 1700000060000],
            [23.5, 24.1]
          ]
        }
      }]
    }
  }
}
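The columnar values arrays zip together into rows, one list per field in schema order. A minimal sketch of decoding the frame above:

```python
# One frame from the DataFrame response documented above
frame = {
    "schema": {"fields": [{"name": "time", "type": "time"},
                          {"name": "avg_temp", "type": "number"}]},
    "data": {"values": [[1700000000000, 1700000060000], [23.5, 24.1]]},
}

# Column names come from the schema; zip(*values) transposes columns to rows
names = [f["name"] for f in frame["schema"]["fields"]]
rows = [dict(zip(names, row)) for row in zip(*frame["data"]["values"])]
# → [{"time": 1700000000000, "avg_temp": 23.5},
#    {"time": 1700000060000, "avg_temp": 24.1}]
```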

Auto bucket interval: If aggregation is present but bucketInterval is omitted, it is automatically calculated as (to - from) / maxDataPoints. Omit aggregation entirely to get raw data points.
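The auto-calculated interval for the example request above works out as follows (integer division is an assumption here; whether the server rounds or truncates is not specified):

```python
# bucketInterval omitted → derived from the query window and point budget
frm, to, max_points = 1700000000000, 1700086400000, 1000
bucket_ms = (to - frm) // max_points
# one day (86,400,000 ms) / 1000 points → 86,400 ms buckets
```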

6. curl Example
curl -u root:password -X POST \
  "http://localhost:2480/api/v1/ts/mydb/grafana/query" \
  -H "Content-Type: application/json" \
  -d '{
    "from": 1700000000000,
    "to": 1700086400000,
    "maxDataPoints": 500,
    "targets": [{
      "refId": "A",
      "type": "weather",
      "aggregation": {
        "requests": [
          { "field": "temperature", "type": "AVG", "alias": "avg_temp" }
        ]
      }
    }]
  }'