QuestDB Skill


How to Use This Skill

IMPORTANT — MINIMIZE ROUND-TRIPS:
  • Do NOT explore library source code (cryptofeed, questdb, etc.)
  • Do NOT check library versions or verify callback signatures
  • Do NOT read installed package files to "understand the API"
  • Do NOT verify infrastructure (Docker containers, Grafana health) is running — trust the user's prompt
  • Do NOT start `02_ingest.py` separately — `03_dashboard.py` launches it and verifies data automatically
  • Do NOT read extra reference files for topics already covered in this skill file
  • DO read reference files when their topic applies (e.g. enterprise.md for auth, grafana-advanced.md for complex panels)
  • Do NOT use task tracking (TaskCreate/TaskUpdate) for straightforward builds
  • Do NOT add `sleep` commands to wait for data or check background processes (the deploy script handles this)
  • Do NOT Ctrl+C, restart, or re-launch the ingestion process once `03_dashboard.py` has started it
  • Do NOT put VWAP, Bollinger, or RSI in separate timeseries panels — they are refIDs on the OHLC candlestick panel
  • Do NOT omit or empty `fieldConfig.overrides` — they put RSI on a right Y-axis (0-100%) and spread on a right axis. Without them, different scales crush the chart flat.
  • Do NOT set dashboard refresh to `"5s"` — the default is `"250ms"`
  • When opening the dashboard URL in the browser, ALWAYS append `?refresh=250ms` to the URL. Without this, Grafana ignores the JSON refresh setting.
  • All API details for cryptofeed, QuestDB ingestion, and Grafana are below — use them as-is
  • Known Python environment issues are already solved in the templates below:
    • uvloop: When creating a NEW venv, pip may pull a uvloop version that crashes on Python 3.10+ (`RuntimeError: no current event loop`). Uninstall it after pip install in fresh venvs only. Never uninstall uvloop from a user's existing venv — their setup already works.
    • macOS SSL: Homebrew Python lacks system CA certificates. Any outbound HTTPS/WSS connection (exchange WebSockets, API calls) fails without certifi. Always set `SSL_CERT_FILE` via certifi at the top of scripts that make network connections.
    • These fixes are baked into the pip commands and Python templates below — copy them exactly.
This skill contains ready-to-use SQL, schemas, ingestion code, and Grafana queries. Write the files and run them. A typical pipeline is 3 files (schema setup, ingestion script, dashboard deploy) — write them, execute them, done.
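The macOS SSL fix described above can be sketched as a small header for any script that opens HTTPS/WSS connections (assuming certifi was installed by the pip commands in the scenarios below):

```python
import os

# macOS/Homebrew Python lacks system CA certificates; point the SSL stack at
# certifi's CA bundle BEFORE any HTTPS/WSS connection is opened.
try:
    import certifi
    os.environ["SSL_CERT_FILE"] = certifi.where()  # path to certifi's cacert.pem
except ImportError:
    pass  # platforms whose Python already finds the system CA store still work
```

Place this at the very top of the script, before importing cryptofeed or opening any WebSocket.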

Execution Scenarios

Pick the scenario that matches the user's request. Run the exact commands shown. Each bash code block below is one Bash tool call. Execute immediately - do not deliberate over shell syntax. Blocks at the same step can run as parallel tool calls.
Scenario A — Everything from scratch (Docker + venv + pipeline): Use separate parallel Bash tool calls (one command per call) to parallelize:
```bash
docker run -d --name questdb -p 9000:9000 -p 9009:9009 -p 8812:8812 questdb/questdb:latest
```
```bash
docker run -d --name grafana -p 3000:3000 -e GF_INSTALL_PLUGINS=questdb-questdb-datasource -e GF_SECURITY_ADMIN_PASSWORD=admin -e GF_DASHBOARDS_MIN_REFRESH_INTERVAL=250ms grafana/grafana:latest
```
```bash
python3 -m venv .venv
```
Then sequentially:
```bash
.venv/bin/pip install -q cryptofeed questdb 'psycopg[binary]' requests numpy certifi
```
```bash
.venv/bin/pip uninstall uvloop -y 2>/dev/null
```
Then wait for both services (each in its own Bash call):
```bash
curl -sf --retry 30 --retry-delay 1 --retry-all-errors -o /dev/null "http://localhost:9000/exec?query=SELECT+1"
```
```bash
curl -sf --retry 30 --retry-delay 1 --retry-all-errors -o /dev/null http://localhost:3000/api/health
```
Then configure the datasource:
```bash
curl -s -X POST http://localhost:3000/api/datasources -u admin:admin -H "Content-Type: application/json" -d '{"name":"QuestDB","type":"questdb-questdb-datasource","access":"proxy","jsonData":{"server":"host.docker.internal","port":8812,"username":"admin","tlsMode":"disable","timeout":"120","queryTimeout":"60"},"secureJsonData":{"password":"quest"}}'
```
Datasource fields (the QuestDB Grafana plugin uses `jsonData`, NOT the standard `url` field):
  • `server`: hostname only, no port, no protocol (e.g. `host.docker.internal`)
  • `port`: integer, separate from server (e.g. `8812`)
  • `tlsMode`: must be `"disable"` for local Docker — omitting it defaults to TLS enabled, which breaks the connection
  • `username` / `password`: QuestDB defaults are `admin` / `quest`
Then write 3 files (schema, ingestion, dashboard) and run them.
Scenario B — Containers running, need venv:
```bash
python3 -m venv .venv
```
```bash
.venv/bin/pip install -q cryptofeed questdb 'psycopg[binary]' requests numpy certifi
```
```bash
.venv/bin/pip uninstall uvloop -y 2>/dev/null
```
Then write 3 files and run them.
Scenario C — User provides existing venv path: Trust the user's venv as-is. Do NOT run pip install or uninstall anything. Write 3 files and run them. Use the user's venv path for all `python` commands.
Scenario D — Everything already running, just need pipeline scripts: Write 3 files and run them. No infrastructure setup needed.
Additional references in the `references/` directory — only read when the user's request goes beyond what this file covers:
  • `common-mistakes.md` — Wrong patterns → correct QuestDB equivalents (read when writing novel SQL not already templated below)
  • `grafana-advanced.md` — Read only for Plotly order book depth charts or advanced features not in the dashboard template below
  • `indicators.md` — Read when the user asks for indicators beyond OHLC/VWAP/Bollinger/RSI (MACD, ATR, Stochastic, OBV, Drawdown, Keltner, Donchian, etc.)
  • `cookbook.md` — Index of official QuestDB cookbook recipes: finance (slippage, markout, IS, ECN scorecard, last-look, indicators, volume/order-flow, risk) and time-series patterns (FILL strategies, session windows, latest-N-per-partition, sparse sensor joins). Read when the user asks for an execution-quality/TCA metric, a named indicator, or a time-series pattern you don't already have inline.
  • `sql-grammar.md` — Complete index of QuestDB keywords, functions (by category with signatures), data types, and operators. Read when you need to know "does QuestDB have function X?" or want the correct signature for a function. Includes native finance builtins (`vwap()`, `twap()`, `spread()`, `mid()`, `wmid()`, `l2price()`), statistical functions, array/matrix operations, and more.
  • `enterprise.md` — Read when QuestDB uses authentication, HTTPS, tokens, or ACLs (skip for open source)

Critical Rule

QuestDB is NOT PostgreSQL. It supports the PostgreSQL wire protocol for querying, but has its own SQL extensions for time-series operations. Standard PostgreSQL patterns like `time_bucket()`, `DISTINCT ON`, `HAVING`, and `generate_series()` do not exist.
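A quick mapping sketch of the replacements covered in detail below, using the `trades` table from the templates in this file:

```sql
-- PostgreSQL habit                    →  QuestDB equivalent
-- time_bucket('1h', ts) ... GROUP BY  →  SAMPLE BY 1h
-- DISTINCT ON (symbol) ... ORDER BY   →  LATEST ON ts PARTITION BY symbol
-- GROUP BY ... HAVING vol > 1000      →  wrap in a subquery, filter with WHERE
SELECT * FROM (
    SELECT ts, symbol, sum(amount) AS vol
    FROM trades SAMPLE BY 1h
) WHERE vol > 1000;
```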

Live Documentation Access

Every QuestDB docs page has a plain-markdown twin at the same URL with `.md` appended. Fetch these with `curl` for fast, clean, LLM-friendly reads — no HTML to strip, no scraping required. Prefer an authoritative doc fetch over guessing when you need a keyword, syntax detail, or edge case not covered inline here.

Any doc page — just append .md to the URL

```bash
curl -sH "Accept: text/markdown" "https://questdb.com/docs/query/sql/horizon-join.md"
curl -sH "Accept: text/markdown" "https://questdb.com/docs/cookbook/sql/finance.md"
```

Full documentation index for LLMs — lists every page with its .md URL


Use `llms.txt` to discover a page when you don't know the exact path. It is the authoritative directory of every doc and cookbook recipe.

---


SQL Reference

CREATE TABLE

```sql
CREATE TABLE IF NOT EXISTS trades (
    ts TIMESTAMP,
    symbol SYMBOL,
    side SYMBOL,
    price DOUBLE,
    amount DOUBLE
) TIMESTAMP(ts) PARTITION BY DAY WAL
DEDUP UPSERT KEYS(ts, symbol);
```
Key rules:
  • `TIMESTAMP(col)` designates the time column — required for SAMPLE BY, LATEST ON, ASOF JOIN
  • `SYMBOL` type for any repeated string (tickers, categories, status codes) — much faster than VARCHAR
  • `PARTITION BY DAY|MONTH|YEAR|HOUR` — use the most common query granularity
  • `WAL` enables concurrent writes (required for ILP ingestion)
  • `DEDUP UPSERT KEYS(ts, symbol)` deduplicates on (timestamp, symbol) — idempotent ingestion
Column types: BOOLEAN, BYTE, SHORT, INT, LONG, FLOAT, DOUBLE, CHAR, VARCHAR, SYMBOL, TIMESTAMP, DATE, LONG256, GEOHASH, UUID, IPv4, DOUBLE[], FLOAT[], INT[], LONG[], SHORT[], UUID[] — plus 2D arrays like `DOUBLE[][]`
ALTER TABLE:
```sql
ALTER TABLE trades ADD COLUMN exchange SYMBOL;
ALTER TABLE trades DROP COLUMN exchange;
ALTER TABLE trades RENAME COLUMN amount TO qty;
ALTER TABLE trades ALTER COLUMN exchange SET TYPE SYMBOL;
```

SAMPLE BY (Time-Series Aggregation)

SAMPLE BY is QuestDB's time-bucketing. It replaces `GROUP BY time_bucket()`. Requires a designated timestamp.
```sql
SELECT ts, symbol,
    first(price) AS open, max(price) AS high,
    min(price) AS low, last(price) AS close,
    sum(amount) AS volume
FROM trades
WHERE ts > '2025-01-01'
SAMPLE BY 1h;
```
Key rules:
  • Valid intervals: `1s`, `5s`, `1m`, `15m`, `1h`, `1d`, `1M` (month)
  • `ALIGN TO CALENDAR` aligns buckets to clock boundaries (this is the default, so it can be omitted)
  • `FILL(PREV | NULL | LINEAR | value)` fills gaps — goes AFTER SAMPLE BY
  • In Grafana, default to `SAMPLE BY 5s` unless the user specifies a different bar size
  • `first()`, `last()` return first/last values within each time bucket
  • `count_distinct(col)` instead of `COUNT(DISTINCT col)`
  • No `HAVING` — use a subquery: `SELECT * FROM (... SAMPLE BY ...) WHERE volume > 1000`
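For example, a gap-free 1m close series with gaps carried forward — a sketch against the `trades` template table:

```sql
-- FILL(PREV) repeats the last bucket's value into empty buckets
SELECT ts, symbol, last(price) AS close
FROM trades
WHERE ts IN '$today'
SAMPLE BY 1m FILL(PREV);
```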

LATEST ON (Last Value Per Group)

Returns the most recent row per group. Replaces `DISTINCT ON` / `ROW_NUMBER()`.
```sql
SELECT * FROM trades
WHERE ts > dateadd('h', -1, now())
LATEST ON ts PARTITION BY symbol;
```
Rules:
  • `LATEST ON ts` must reference the designated timestamp
  • `PARTITION BY` is required
  • Add a `WHERE` time filter for performance

Joins

QuestDB has a family of specialised time-series joins. Pick the one that matches the question — reaching for ASOF JOIN by default when HORIZON or WINDOW would be idiomatic is a common LLM mistake.
| Pattern | Use |
| --- | --- |
| Point-in-time match: "value at trade time" | ASOF JOIN (or HORIZON JOIN with `LIST (0)`) |
| Match strictly before trade time | LT JOIN |
| Interleave two time series chronologically | SPLICE JOIN |
| Price at trade time AND at offsets ±N (markout, IS decomposition, adverse selection) | HORIZON JOIN |
| Aggregate quotes within a ±N window around each trade (rolling stats, quote density, pre-/post-trade averages) | WINDOW JOIN |
| Per-outer-row subquery: top-N per group, dynamic filters, per-row SAMPLE BY / LATEST ON / ASOF | LATERAL JOIN |
```sql
-- ASOF JOIN: nearest right row with ts ≤ left.ts
SELECT * FROM trades ASOF JOIN quotes ON (symbol);

-- LT JOIN: strictly ts < left.ts
SELECT * FROM trades LT JOIN quotes ON (symbol);

-- SPLICE JOIN: merge two time series interleaved
SELECT * FROM trades SPLICE JOIN quotes ON (symbol);

-- HORIZON JOIN: ASOF at a grid of time offsets, one parallel pass.
-- For each trade, computes metrics at -1m, -30s, 0, 30s, 1m, ...
-- Supports multiple right-hand tables (only the last carries RANGE/LIST/AS).
SELECT h.offset / 1_000_000_000 AS horizon_sec, t.symbol,
       avg((m.best_bid + m.best_ask) / 2) AS avg_mid
FROM fx_trades t
HORIZON JOIN market_data m ON (symbol)
RANGE FROM -1m TO 5m STEP 30s AS h
WHERE t.timestamp IN '$yesterday'
ORDER BY t.symbol, horizon_sec;

-- Or with explicit offsets for implementation shortfall:
--   LIST (0, 1800s) AS h  →  at_fill and at_30m columns (via PIVOT)

-- WINDOW JOIN: aggregate right-table rows within a ±time window around each left row.
-- EXCLUDE PREVAILING omits the last-known value before the window start.
SELECT t.symbol, t.timestamp,
       avg(c.bid_price) AS avg_bid_pm5s,
       count()          AS num_quotes
FROM fx_trades t
WINDOW JOIN core_price c ON (t.symbol = c.symbol)
    RANGE BETWEEN 5 seconds PRECEDING AND 5 seconds FOLLOWING
    EXCLUDE PREVAILING
WHERE t.symbol = 'EURUSD' AND t.timestamp IN '$now-1h..$now';

-- LATERAL JOIN: subquery evaluated per outer row, can reference outer columns.
-- Classic top-N per group — two largest fills for each order:
SELECT o.id, o.desk, t.qty
FROM orders o
JOIN LATERAL (
    SELECT qty FROM fills
    WHERE order_id = o.id
    ORDER BY qty DESC LIMIT 2
) t
ORDER BY o.id, t.qty DESC;
```
Join rules:
  • All time-series joins (ASOF, LT, SPLICE, HORIZON, WINDOW) require a designated timestamp on both tables. Timestamps are matched automatically.
  • `ON (symbol)` matches the key column; both tables must use the same column name. Use `ON (t.sym = q.sym)` when the names differ.
  • Standard INNER JOIN and LEFT JOIN also work for non-time-series joins.
  • HORIZON JOIN: no other join types in the same level, no SAMPLE BY, no window functions inside, WHERE filters the left table only. Wrap in a subquery for anything else.
  • WINDOW JOIN: cannot be combined with GROUP BY, window functions, or other join types. Wrap the join in a CTE then aggregate/join in the outer query.
  • LATERAL JOIN: only INNER, LEFT, or CROSS variants supported. Requires a parenthesised subquery. Inside the subquery you can reference outer columns and use SAMPLE BY, LATEST ON, ASOF JOIN, window functions, UNION ALL, etc.
For the full syntax of HORIZON / WINDOW / LATERAL (including mixed-precision timestamps, dynamic window bounds, and multi-table HORIZON patterns), fetch:
  • `curl -sH "Accept: text/markdown" "https://questdb.com/docs/query/sql/horizon-join.md"`
  • `curl -sH "Accept: text/markdown" "https://questdb.com/docs/query/sql/window-join.md"`
  • `curl -sH "Accept: text/markdown" "https://questdb.com/docs/query/sql/lateral-join.md"`
For ready-made execution analytics queries (slippage, markout, implementation shortfall, last-look, ECN scorecard) — all built on HORIZON JOIN — see `references/cookbook.md` or https://questdb.com/docs/cookbook/sql/finance.md.

UNNEST (Arrays and JSON Arrays to Rows)

QuestDB supports `UNNEST` for expanding arrays — or JSON arrays stored as VARCHAR — into rows. It appears in the `FROM` clause and behaves like a table.
```sql
-- Native array: expand the sizes at level 2 of each order book row
SELECT t.symbol, u.vol
FROM market_data t, UNNEST(t.asks[2]) u(vol)
WHERE t.timestamp IN '$now-1m..$now' AND t.symbol = 'EURUSD';

-- WITH ORDINALITY gives a 1-based level index (resets per input row)
SELECT m.symbol, u.vol, u.level
FROM market_data m, UNNEST(m.asks[2]) WITH ORDINALITY u(vol, level)
WHERE m.timestamp IN '$now-1m..$now';

-- JSON array (VARCHAR) with typed columns — great for ingesting API payloads
SELECT u.trade_id, u.price, u.size, u.side, u.time
FROM UNNEST(
    payload_varchar::VARCHAR
    COLUMNS(trade_id LONG, price DOUBLE, size DOUBLE, side VARCHAR, time TIMESTAMP)
) u;
```
Key rules:
  • `UNNEST` must be in `FROM` (or after `CROSS JOIN` / comma), not in `SELECT`
  • Native arrays: currently `DOUBLE[]` only. `UNNEST` of a `DOUBLE[][]` yields `DOUBLE[]` rows — chain a second `UNNEST` to fully flatten.
  • JSON: `COLUMNS(name TYPE, ...)` extracts typed fields. Supported types: BOOLEAN, SHORT, INT, LONG, DOUBLE, VARCHAR, TIMESTAMP. Invalid JSON / NULL / empty string all produce 0 rows (no error).
  • `ordinality` is reserved — alias it (`u(val, pos)`) or quote it.
Full reference: `curl -sH "Accept: text/markdown" "https://questdb.com/docs/query/sql/unnest.md"`

Window Functions

```sql
-- ROWS frame (count-based)
AVG(close) OVER (ORDER BY ts ROWS BETWEEN 19 PRECEDING AND CURRENT ROW) AS sma20

-- CUMULATIVE shorthand (QuestDB extension, equivalent to ROWS UNBOUNDED PRECEDING)
SUM(volume) OVER (ORDER BY ts CUMULATIVE) AS running_total

-- PARTITION BY
LAG(close) OVER (PARTITION BY symbol ORDER BY ts) AS prev_close

-- EMA (exponential moving average — QuestDB extension)
avg(price, 'period', 14) OVER (ORDER BY ts) AS ema14
```
Supported: `ROW_NUMBER()`, `RANK()`, `DENSE_RANK()`, `LAG()`, `LEAD()`, `FIRST_VALUE()`, `AVG()`, `SUM()`, `MIN()`, `MAX()`, `COUNT()`
Important: `stddev_samp()` may not work in window frames. For standard deviation, compute manually: `sqrt(avg(x*x) - avg(x)^2)` — see Bollinger Bands query below.
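The manual formula is the population standard deviation identity; for the window sizes used in Bollinger Bands the difference from the sample stddev is negligible. A quick sanity check in plain Python with hypothetical prices:

```python
import math
from statistics import pstdev  # population standard deviation

prices = [101.2, 100.8, 101.5, 100.9, 101.1]  # hypothetical closing prices
n = len(prices)
mean = sum(prices) / n
mean_sq = sum(p * p for p in prices) / n
manual = math.sqrt(mean_sq - mean ** 2)  # sqrt(avg(x*x) - avg(x)^2)

# The identity matches pstdev() to floating-point precision
assert abs(manual - pstdev(prices)) < 1e-9
```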

Materialized Views

Auto-updated, incremental aggregations triggered by new data:
```sql
CREATE MATERIALIZED VIEW IF NOT EXISTS candles_5s AS (
    SELECT ts, symbol,
        first(price) AS open, max(price) AS high,
        min(price) AS low, last(price) AS close,
        sum(amount) AS volume
    FROM trades SAMPLE BY 5s
) PARTITION BY DAY;
```
Rules:
  • Source query MUST use SAMPLE BY
  • Only aggregation functions allowed (no WHERE, no JOIN, no window functions)
  • Use `IF NOT EXISTS` to avoid errors on re-run
  • Cascade views for multi-resolution: `trades → 5s → 1m → 1h` (see Schema Design below)
  • Invalidate/rebuild: `ALTER MATERIALIZED VIEW candles_5s INVALIDATE`
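The cascade step might look like this — a sketch where `candles_1m` rolls up the `candles_5s` view defined above (the 1m name is illustrative):

```sql
-- Second level of the cascade: 1m candles built from the 5s view.
-- Note first(open)/last(close), not first(price): the source is already OHLC.
CREATE MATERIALIZED VIEW IF NOT EXISTS candles_1m AS (
    SELECT ts, symbol,
        first(open) AS open, max(high) AS high,
        min(low) AS low, last(close) AS close,
        sum(volume) AS volume
    FROM candles_5s SAMPLE BY 1m
) PARTITION BY DAY;
```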

Views (non-materialized)

QuestDB also supports regular views - virtual tables defined by a SELECT statement, evaluated on every query. Use them for reusable query abstractions that don't need incremental precomputation.
```sql
CREATE VIEW IF NOT EXISTS latest_prices AS (
    SELECT * FROM trades
    LATEST ON ts PARTITION BY symbol
);

-- Query it like a table
SELECT * FROM latest_prices WHERE symbol = 'BTC-USDT';

-- Modify or drop
ALTER VIEW latest_prices AS (SELECT ...);   -- redefine
COMPILE VIEW latest_prices;                  -- recompile after schema changes
DROP VIEW latest_prices;
```
Use a materialized view when you need incremental SAMPLE BY rollups. Use a regular view for everything else (latest-value lookups, filtered subsets, join abstractions).

Timestamp Filtering — TICK Syntax (Preferred)

Always prefer TICK over `dateadd()` / `BETWEEN` for time filters. TICK (Temporal Interval Calendar Kit) is QuestDB's declarative syntax for time intervals. It is more readable, generates optimized interval scans, handles timezones and business days, and expresses complex multi-interval patterns that would otherwise require UNION ALL or application-side logic.
Syntax order: `date [T time] @timezone #dayFilter ;duration`
sql
-- Date variables: resolve at query time
WHERE ts IN '$today'                     -- full day (midnight to midnight)
WHERE ts IN '$yesterday'                 -- previous full day
WHERE ts IN '$tomorrow'                  -- next full day

-- Ranges with $now (point-in-time, microsecond precision)
WHERE ts IN '$now - 1h..$now'            -- last hour
WHERE ts IN '$now - 30m..$now'           -- last 30 minutes
WHERE ts IN '$now - 5bd..$now'           -- last 5 business days (skips weekends)

-- Duration suffix: extends forward from a point
WHERE ts IN '$now;1h'                    -- 1 hour starting now (forward)
WHERE ts IN '2025-01-15T09:30;6h30m'     -- NYSE trading session

-- Imprecise dates: month-level and year-level (expand to all days)
WHERE ts IN '[2025-01]'                  -- all of January (no need for [01..31])
WHERE ts IN '[2025]T09:30'              -- all of 2025 at 09:30
WHERE ts IN '2025-02T09:30'             -- bare imprecise (brackets optional alone)

-- Bracket expansion: generates multiple intervals
WHERE ts IN '2025-01-[10..15]'           -- days 10 through 15
WHERE ts IN '2025-01-[5,10..12,20]'      -- specific days + ranges
WHERE ts IN '2025-[01,06]-[10,15]'       -- Cartesian: Jan+Jun x 10th+15th
WHERE ts IN '2025-[01..03]T09:30'        -- month-level range: all days Jan-Mar

-- Day-of-week filters
WHERE ts IN '[2025-01]#workday'          -- weekdays in January
WHERE ts IN '[2025-01]#weekend'          -- weekends in January
WHERE ts IN '[2025-01]#Mon,Wed,Fri'      -- specific days in January

-- Timezone-aware (handles DST)
WHERE ts IN '2025-01-15T09:30@America/New_York;6h30m'

-- Combined: workdays at 09:30 New York time for all of January
WHERE ts IN '[2025-01]T09:30@America/New_York#workday;6h30m'

-- Mixed-precision date lists
WHERE ts IN '[2025-01-15, 2025-02]T09:30'  -- one day + full month

-- Time lists: multiple intraday windows
WHERE ts IN '2025-01-15T[09:00,14:30];1h'  -- two 1h windows on the same day

-- Date lists (brackets required for lists)
WHERE ts IN '[$today, $yesterday, 2025-01-15]'

-- ISO week dates
WHERE ts IN '2025-W01-[1..5]T09:00;8h'  -- Mon-Fri of week 1
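To make the bracket expansion concrete, here is a small illustrative Python sketch (ours, not QuestDB's actual parser) that expands a day-list spec like the `'2025-01-[5,10..12,20]'` example above into the individual days the engine scans:

```python
def expand_days(spec: str) -> list[str]:
    """Expand a TICK-style day list such as '5,10..12,20' into dates.

    Illustration only: QuestDB does this internally when it parses
    '2025-01-[5,10..12,20]'. The '2025-01' prefix is hardcoded here.
    """
    days = []
    for part in spec.split(","):
        if ".." in part:                       # range item, e.g. 10..12
            lo, hi = map(int, part.split(".."))
            days.extend(range(lo, hi + 1))
        else:                                  # single day, e.g. 5
            days.append(int(part))
    return [f"2025-01-{d:02d}" for d in days]

# expand_days("5,10..12,20")
# -> ['2025-01-05', '2025-01-10', '2025-01-11', '2025-01-12', '2025-01-20']
```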
Key rules:
  • Date variables are case-insensitive ($TODAY = $today)
  • $today / $yesterday / $tomorrow produce full-day intervals; $now produces a point-in-time (add a range or duration to make it useful)
  • Arithmetic units: y M w d bd h m s T (ms) u (us) n (ns). bd (business days) is valid in arithmetic only, not in durations. Case-sensitive: M = months, m = minutes, T = milliseconds.
  • Imprecise dates: YYYY-MM expands to all days in that month, YYYY to all days in that year. '[2025-01]T09:30#workday;6h30m' replaces the verbose '2025-01-[01..31]T09:30#workday;6h30m'. Brackets are optional when used alone ('2025-02T09:30' = '[2025-02]T09:30'), required in lists.
  • Brackets required for: lists ([$today, $yesterday]) and ranges with suffixes ([$now - 2h..$now]@America/New_York). Optional for standalone variables, bare ranges ($now - 2h..$now), and bare imprecise dates (2025-01T09:30).
  • Overlapping intervals from bracket expansion are auto-merged.
  • Exchange calendars (Enterprise): use #XNYS, #XLON, etc. (ISO 10383 MIC codes) instead of #workday to filter by real exchange trading schedules: holidays, early closes, and lunch breaks are handled automatically. WHERE ts IN '[2025-01]#XNYS' gives only NYSE trading sessions. See:
    curl -sH "Accept: text/markdown" "https://questdb.com/docs/query/operators/exchange-calendars.md"
Full reference:
curl -sH "Accept: text/markdown" "https://questdb.com/docs/query/operators/tick.md"
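Since the TICK clause is just a string following the documented order `date [T time] @timezone #dayFilter ;duration`, it can be composed programmatically. A throwaway Python helper (ours, not a QuestDB API) that builds the combined example above:

```python
def tick_interval(date, time=None, tz=None, day_filter=None, duration=None):
    """Compose a TICK interval string in the documented component order:
    date [T time] @timezone #dayFilter ;duration."""
    s = date
    if time:
        s += f"T{time}"
    if tz:
        s += f"@{tz}"
    if day_filter:
        s += f"#{day_filter}"
    if duration:
        s += f";{duration}"
    return s

# Workdays in January at 09:30 New York time, 6.5h sessions:
clause = tick_interval('[2025-01]', '09:30', 'America/New_York', 'workday', '6h30m')
# -> '[2025-01]T09:30@America/New_York#workday;6h30m'
```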
始终优先使用TICK而非
dateadd()
/
BETWEEN
进行时间过滤
。TICK (Temporal Interval Calendar Kit)是QuestDB的声明式时间间隔语法。它更具可读性,生成优化的间隔扫描,处理时区和工作日,还能表达复杂的多间隔模式,否则需要UNION ALL或应用端逻辑。
语法顺序:
date [T time] @timezone #dayFilter ;duration
sql
-- 日期变量:查询时解析
WHERE ts IN '$today'                     -- 全天(午夜到午夜)
WHERE ts IN '$yesterday'                 -- 前一天
WHERE ts IN '$tomorrow'                  -- 后一天

-- 使用$now的范围(时间点,微秒精度)
WHERE ts IN '$now - 1h..$now'            -- 过去1小时
WHERE ts IN '$now - 30m..$now'           -- 过去30分钟
WHERE ts IN '$now - 5bd..$now'           -- 过去5个工作日(跳过周末)

-- 持续时间后缀:从某个时间点向前延伸
WHERE ts IN '$now;1h'                    -- 从现在开始的1小时(向前)
WHERE ts IN '2025-01-15T09:30;6h30m'     -- 纽约证券交易所交易时段

-- 不精确日期:月级和年级(扩展到所有天数)
WHERE ts IN '[2025-01]'                  -- 整个1月(不需要[01..31])
WHERE ts IN '[2025]T09:30'              -- 2025年所有时间的09:30
WHERE ts IN '2025-02T09:30'             -- 单独的不精确日期(括号可选)

-- 括号扩展:生成多个间隔
WHERE ts IN '2025-01-[10..15]'           -- 10日至15日
WHERE ts IN '2025-01-[5,10..12,20]'      -- 指定日期+范围
WHERE ts IN '2025-[01,06]-[10,15]'       -- 笛卡尔积:1月+6月 × 10日+15日
WHERE ts IN '2025-[01..03]T09:30'        -- 月级范围:1月至3月的所有天数

-- 星期几过滤器
WHERE ts IN '[2025-01]#workday'          -- 1月的工作日
WHERE ts IN '[2025-01]#weekend'          -- 1月的周末
WHERE ts IN '[2025-01]#Mon,Wed,Fri'      -- 1月的指定星期几

-- 时区感知(处理夏令时)
WHERE ts IN '2025-01-15T09:30@America/New_York;6h30m'

-- 组合:1月所有工作日的纽约时间09:30开始的6.5小时
WHERE ts IN '[2025-01]T09:30@America/New_York#workday;6h30m'

-- 混合精度日期列表
WHERE ts IN '[2025-01-15, 2025-02]T09:30'  -- 一天+整月

-- 时间列表:多个日内窗口
WHERE ts IN '2025-01-15T[09:00,14:30];1h'  -- 同一天的两个1小时窗口

-- 日期列表(列表需要括号)
WHERE ts IN '[$today, $yesterday, 2025-01-15]'

-- ISO周日期
WHERE ts IN '2025-W01-[1..5]T09:00;8h'  -- 第1周的周一至周五
关键规则:
  • 日期变量不区分大小写($TODAY = $today)
  • $today / $yesterday / $tomorrow 生成全天间隔;$now 生成时间点(添加范围或持续时间使其有用)
  • 算术单位:y M w d bd h m s T(毫秒)u(微秒)n(纳秒)。bd(工作日)仅在算术运算中有效,不适用于持续时间。区分大小写:M = 月,m = 分钟,T = 毫秒。
  • 不精确日期:YYYY-MM 扩展到该月的所有天数,YYYY 扩展到该年的所有天数。'[2025-01]T09:30#workday;6h30m' 替代冗长的 '2025-01-[01..31]T09:30#workday;6h30m'。单独使用时括号可选('2025-02T09:30' = '[2025-02]T09:30'),列表中必需。
  • 以下情况需要括号:列表([$today, $yesterday])、带后缀的范围([$now - 2h..$now]@America/New_York)。单独变量、纯范围($now - 2h..$now)和纯不精确日期(2025-01T09:30)可选。
  • 括号扩展产生的重叠间隔会自动合并。
  • 交易所日历(企业版):使用 #XNYS、#XLON 等(ISO 10383 MIC代码)替代 #workday,按真实交易所交易时段过滤:自动处理节假日、提前收盘和午休。WHERE ts IN '[2025-01]#XNYS' 仅返回纽约证券交易所的交易时段。参见:
    curl -sH "Accept: text/markdown" "https://questdb.com/docs/query/operators/exchange-calendars.md"

DECLARE (Variables)

DECLARE(变量)

sql
DECLARE @start := '2025-01-01T00:00:00Z', @end := now();
SELECT * FROM trades WHERE ts BETWEEN @start AND @end;
sql
DECLARE @start := '2025-01-01T00:00:00Z', @end := now();
SELECT * FROM trades WHERE ts BETWEEN @start AND @end;

SQL Execution Order

SQL执行顺序

FROM → WHERE → SAMPLE BY → SELECT → LIMIT
(No GROUP BY needed with SAMPLE BY. No HAVING at all.)
FROM → WHERE → SAMPLE BY → SELECT → LIMIT
(使用SAMPLE BY不需要GROUP BY。完全没有HAVING。)

Key Functions

关键函数

Timestamp: now(), systimestamp(), dateadd('unit', n, ts), datediff('unit', ts1, ts2), to_timestamp('str', 'fmt'), timestamp_floor('unit', ts), timestamp_ceil('unit', ts), hour(ts), day_of_week(ts), week_of_year(ts), year(ts), month(ts)
String: concat('a', 'b'), left(s, n), right(s, n), length(s), starts_with(s, prefix), lcase(s), ucase(s), replace(s, old, new), split_part(s, delim, idx), regexp_replace(s, pattern, replacement)
Aggregation: first(x), last(x), count_distinct(x), sum(x), avg(x), min(x), max(x), haversine_dist_deg(lat1, lon1, lat2, lon2)
Array: array_cum_sum(arr) - cumulative sum of array elements

时间戳:now()、systimestamp()、dateadd('unit', n, ts)、datediff('unit', ts1, ts2)、to_timestamp('str', 'fmt')、timestamp_floor('unit', ts)、timestamp_ceil('unit', ts)、hour(ts)、day_of_week(ts)、week_of_year(ts)、year(ts)、month(ts)
字符串:concat('a', 'b')、left(s, n)、right(s, n)、length(s)、starts_with(s, prefix)、lcase(s)、ucase(s)、replace(s, old, new)、split_part(s, delim, idx)、regexp_replace(s, pattern, replacement)
聚合:first(x)、last(x)、count_distinct(x)、sum(x)、avg(x)、min(x)、max(x)、haversine_dist_deg(lat1, lon1, lat2, lon2)
数组:array_cum_sum(arr) - 数组元素的累积和

Data Ingestion

数据摄入

ILP (InfluxDB Line Protocol) — Primary Method

ILP(InfluxDB行协议)——主要方法

Use ILP for all high-throughput ingestion. Never use INSERT INTO for streaming data.
python
from questdb.ingress import Sender, TimestampNanos
import numpy as np
所有高吞吐量摄入都使用ILP。永远不要对流数据使用INSERT INTO
python
from questdb.ingress import Sender, TimestampNanos
import numpy as np

Open Source (no auth). For Enterprise: read references/enterprise.md Quick Start

开源版本(无认证)。企业版:查看references/enterprise.md快速入门

(admin creates service account + token via REST → ingestion script uses token)

(管理员通过REST创建服务账户+令牌→摄入脚本使用令牌)

conf = "tcp::addr=localhost:9009;protocol_version=2;"
with Sender.from_conf(conf) as sender:
    sender.row(
        'trades',
        symbols={'symbol': 'BTC-USDT', 'side': 'buy'},
        columns={'price': 42000.50, 'amount': 1.5},
        at=TimestampNanos.now()
    )
    sender.flush()

Key patterns:
- `Sender.from_conf()` does NOT connect — use `with` context manager or call `sender.establish()`
- `symbols={}` for SYMBOL columns, `columns={}` for everything else
- Arrays MUST be `np.float64` numpy arrays, not Python lists
- TCP config requires `protocol_version=2` for array support
- HTTP transport: `http::addr=localhost:9000;`
- TCP transport: `tcp::addr=localhost:9009;`

**2D array ingestion (order books):**
```python
sender.row(
    'order_book',
    symbols={'symbol': 'BTC-USDT'},
    columns={
        'bids': [
            np.array([41999.0, 41998.5, 41998.0], dtype=np.float64),  # prices [1]
            np.array([2.1, 5.3, 10.0], dtype=np.float64),            # sizes  [2]
        ],
        'asks': [
            np.array([42001.0, 42001.5, 42002.0], dtype=np.float64),
            np.array([1.8, 4.2, 8.5], dtype=np.float64),
        ],
    },
    at=TimestampNanos.now()
)
Other client libraries: Go, Java, Rust, Node.js, C/C++, .NET — fetch
ingestion/clients/{language}
conf = "tcp::addr=localhost:9009;protocol_version=2;"
with Sender.from_conf(conf) as sender:
    sender.row(
        'trades',
        symbols={'symbol': 'BTC-USDT', 'side': 'buy'},
        columns={'price': 42000.50, 'amount': 1.5},
        at=TimestampNanos.now()
    )
    sender.flush()

关键模式:
- `Sender.from_conf()`不会连接——使用`with`上下文管理器或调用`sender.establish()`
- `symbols={}`用于SYMBOL列,`columns={}`用于其他所有列
- 数组必须是`np.float64` numpy数组,不能是Python列表
- TCP配置需要`protocol_version=2`以支持数组
- HTTP传输:`http::addr=localhost:9000;`
- TCP传输:`tcp::addr=localhost:9009;`

**二维数组摄入(订单簿):**
```python
sender.row(
    'order_book',
    symbols={'symbol': 'BTC-USDT'},
    columns={
        'bids': [
            np.array([41999.0, 41998.5, 41998.0], dtype=np.float64),  # 价格 [1]
            np.array([2.1, 5.3, 10.0], dtype=np.float64),            # 规模  [2]
        ],
        'asks': [
            np.array([42001.0, 42001.5, 42002.0], dtype=np.float64),
            np.array([1.8, 4.2, 8.5], dtype=np.float64),
        ],
    },
    at=TimestampNanos.now()
)
其他客户端库: Go、Java、Rust、Node.js、C/C++、.NET——获取
ingestion/clients/{language}

INSERT INTO — For Low Volume Only

INSERT INTO — 仅用于低吞吐量

sql
INSERT INTO trades (ts, symbol, side, price, amount)
VALUES ('2025-02-09T10:00:00.000000Z', 'BTC-USDT', 'buy', 42000.50, 1.5);
sql
INSERT INTO trades (ts, symbol, side, price, amount)
VALUES ('2025-02-09T10:00:00.000000Z', 'BTC-USDT', 'buy', 42000.50, 1.5);

Querying via PG Wire

通过PG Wire查询

python
import psycopg as pg
conn = pg.connect("user=admin password=quest host=localhost port=8812 dbname=qdb")
python
import psycopg as pg
conn = pg.connect("user=admin password=quest host=localhost port=8812 dbname=qdb")

HTTP REST API

HTTP REST API

  • Query: GET http://localhost:9000/exec?query=URL_ENCODED_SQL
  • URL-encode ALL special characters including parentheses: ( = %28, ) = %29. Unencoded () in curl commands triggers a security prompt. Example: count_distinct%28symbol%29, not count_distinct(symbol)
  • Enterprise: see Quick Start in references/enterprise.md
  • POST is not supported for the exec endpoint - use GET only
  • Returns JSON: { "columns": [...], "dataset": [...] }
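The encoding rule above is satisfied by the standard library; a minimal sketch using the endpoint and table names from this file (the actual GET is left as a comment since it needs a running server):

```python
import urllib.parse

# Build the GET URL for /exec, encoding everything including parentheses.
# safe="" forces quote() to percent-encode every reserved character.
sql = "SELECT count_distinct(symbol) FROM trades"
url = "http://localhost:9000/exec?query=" + urllib.parse.quote(sql, safe="")

# The encoded query now contains %28/%29 instead of raw parentheses.
# Fetch with e.g. requests.get(url).json() -> {"columns": [...], "dataset": [...]}
```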

  • 查询:GET http://localhost:9000/exec?query=URL_ENCODED_SQL
  • 对所有特殊字符进行URL编码,包括括号:( = %28,) = %29。curl命令中未编码的 () 会触发安全提示。示例:count_distinct%28symbol%29 而非 count_distinct(symbol)
  • 企业版:查看 references/enterprise.md 中的快速入门
  • exec端点不支持POST,仅使用GET
  • 返回JSON:{ "columns": [...], "dataset": [...] }

Schema Design

模式设计

Key principles:
  • Every time-series table needs a designated timestamp
  • Use SYMBOL for any repeated string (tickers, categories, status codes)
  • Partition by the most common query granularity
  • WAL tables for concurrent write workloads
  • DEDUP for idempotent ingestion
关键原则:
  • 每个时序表需要一个指定时间戳
  • 任何重复字符串(代码、类别、状态码)使用SYMBOL类型
  • 按最常见的查询粒度分区
  • WAL表用于并发写入工作负载
  • DEDUP用于幂等摄入

Financial Market Data Pipeline Schema (Ready to Use)

金融市场数据流程模式(现成可用)

sql
CREATE TABLE IF NOT EXISTS trades (
    ts TIMESTAMP,
    symbol SYMBOL,
    side SYMBOL,
    price DOUBLE,
    amount DOUBLE
) TIMESTAMP(ts) PARTITION BY DAY WAL
DEDUP UPSERT KEYS(ts, symbol);

CREATE TABLE IF NOT EXISTS orderbook (
    ts TIMESTAMP,
    symbol SYMBOL,
    bid_prices DOUBLE[],
    bid_sizes DOUBLE[],
    ask_prices DOUBLE[],
    ask_sizes DOUBLE[]
) TIMESTAMP(ts) PARTITION BY DAY WAL
DEDUP UPSERT KEYS(ts, symbol);

-- Cascading materialized views: trades → 5s → 1m → 1h
CREATE MATERIALIZED VIEW IF NOT EXISTS candles_5s AS (
    SELECT ts, symbol,
        first(price) AS open, max(price) AS high,
        min(price) AS low, last(price) AS close,
        sum(amount) AS volume
    FROM trades SAMPLE BY 5s
) PARTITION BY DAY;

CREATE MATERIALIZED VIEW IF NOT EXISTS candles_1m AS (
    SELECT ts, symbol,
        first(open) AS open, max(high) AS high,
        min(low) AS low, last(close) AS close,
        sum(volume) AS volume
    FROM candles_5s SAMPLE BY 1m
) PARTITION BY DAY;

CREATE MATERIALIZED VIEW IF NOT EXISTS candles_1h AS (
    SELECT ts, symbol,
        first(open) AS open, max(high) AS high,
        min(low) AS low, last(close) AS close,
        sum(volume) AS volume
    FROM candles_1m SAMPLE BY 1h
) PARTITION BY MONTH;
The Grafana queries below work with these exact table/view names.

sql
CREATE TABLE IF NOT EXISTS trades (
    ts TIMESTAMP,
    symbol SYMBOL,
    side SYMBOL,
    price DOUBLE,
    amount DOUBLE
) TIMESTAMP(ts) PARTITION BY DAY WAL
DEDUP UPSERT KEYS(ts, symbol);

CREATE TABLE IF NOT EXISTS orderbook (
    ts TIMESTAMP,
    symbol SYMBOL,
    bid_prices DOUBLE[],
    bid_sizes DOUBLE[],
    ask_prices DOUBLE[],
    ask_sizes DOUBLE[]
) TIMESTAMP(ts) PARTITION BY DAY WAL
DEDUP UPSERT KEYS(ts, symbol);

-- 级联物化视图:trades → 5s → 1m → 1h
CREATE MATERIALIZED VIEW IF NOT EXISTS candles_5s AS (
    SELECT ts, symbol,
        first(price) AS open, max(price) AS high,
        min(price) AS low, last(price) AS close,
        sum(amount) AS volume
    FROM trades SAMPLE BY 5s
) PARTITION BY DAY;

CREATE MATERIALIZED VIEW IF NOT EXISTS candles_1m AS (
    SELECT ts, symbol,
        first(open) AS open, max(high) AS high,
        min(low) AS low, last(close) AS close,
        sum(volume) AS volume
    FROM candles_5s SAMPLE BY 1m
) PARTITION BY DAY;

CREATE MATERIALIZED VIEW IF NOT EXISTS candles_1h AS (
    SELECT ts, symbol,
        first(open) AS open, max(high) AS high,
        min(low) AS low, last(close) AS close,
        sum(volume) AS volume
    FROM candles_1m SAMPLE BY 1h
) PARTITION BY MONTH;
下面的Grafana查询与这些精确的表/视图名称配合使用。

Demo & Sample Data

演示与示例数据

Infrastructure setup commands are in the Execution Scenarios section above. Pick the scenario that matches, run the exact commands, then proceed here.
基础设施设置命令在上面的执行场景部分。 选择匹配的场景,运行精确的命令,然后继续此处。

Real-Time Crypto Feed (cryptofeed + QuestDB)

实时加密货币Feed(cryptofeed + QuestDB)

Do NOT explore cryptofeed source code or check its version. Everything you need is right here. Copy this ingestion script verbatim. The certifi/SSL fix and all imports are required — do not omit any lines:
python
import os
import certifi
os.environ['SSL_CERT_FILE'] = certifi.where()  # Required: macOS lacks system CA certs for HTTPS/WSS

import numpy as np
from cryptofeed import FeedHandler
from cryptofeed.exchanges import OKX
from cryptofeed.defines import TRADES, L2_BOOK
from questdb.ingress import Sender, TimestampNanos

conf = "tcp::addr=localhost:9009;protocol_version=2;"  # Enterprise: see references/enterprise.md

async def trade_cb(t, receipt_timestamp):
    with Sender.from_conf(conf) as sender:
        sender.row(
            'trades',
            symbols={'symbol': t.symbol, 'side': t.side},
            columns={'price': float(t.price), 'amount': float(t.amount)},
            at=TimestampNanos(int(t.timestamp * 1e9))
        )
        sender.flush()

async def book_cb(book, receipt_timestamp):
    bids = book.book['bid']
    asks = book.book['ask']
    # Sort: bids descending by price, asks ascending
    bid_prices = sorted(bids.keys(), reverse=True)[:25]
    ask_prices = sorted(asks.keys())[:25]
    with Sender.from_conf(conf) as sender:
        sender.row(
            'orderbook',
            symbols={'symbol': book.symbol},
            columns={
                'bid_prices': np.array([float(p) for p in bid_prices], dtype=np.float64),
                'bid_sizes':  np.array([float(bids[p]) for p in bid_prices], dtype=np.float64),
                'ask_prices': np.array([float(p) for p in ask_prices], dtype=np.float64),
                'ask_sizes':  np.array([float(asks[p]) for p in ask_prices], dtype=np.float64),
            },
            at=TimestampNanos(int(book.timestamp * 1e9))
        )
        sender.flush()

SYMBOLS = ['BTC-USDT', 'ETH-USDT', 'SOL-USDT']  # add more as needed

f = FeedHandler()
f.add_feed(OKX(
    symbols=SYMBOLS,
    channels=[TRADES, L2_BOOK],
    callbacks={TRADES: trade_cb, L2_BOOK: book_cb}
))
f.run()
cryptofeed API reference (complete; tested with v2.4.x; do NOT read source code to verify):
  • t.symbol, t.side ('buy'/'sell'), t.price, t.amount, t.timestamp (float epoch seconds)
  • t.price and t.amount are Decimal types - cast with float() before passing to QuestDB Sender
  • book.book['bid'] and book.book['ask'] are dicts: {Decimal(price): Decimal(size), ...}
  • book.symbol, book.timestamp (float epoch seconds)
  • Exchanges: OKX, Coinbase, Binance, Kraken, Bybit, etc.
  • Channels: TRADES, L2_BOOK, L3_BOOK, TICKER, CANDLES, OPEN_INTEREST, FUNDING, LIQUIDATIONS
  • Symbol format is exchange-native: 'BTC-USDT' for OKX, 'BTC-USD' for Coinbase
  • Python compatibility: avoid X | None type hints (requires 3.10+). Use Optional[X] or plain assignment.
  • Dependencies (fresh venv): pip install cryptofeed questdb 'psycopg[binary]' requests numpy certifi, then pip uninstall uvloop -y (separate Bash calls)
  • macOS SSL: always set SSL_CERT_FILE via certifi before any outbound HTTPS/WSS connections (Homebrew Python lacks system CA certs). Set it INSIDE Python code (os.environ['SSL_CERT_FILE'] = certifi.where()), NEVER as a shell-level SSL_CERT_FILE=$(...) prefix - that triggers a security prompt
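The float-epoch-seconds timestamps above map to TimestampNanos the same way in every callback; a minimal sketch of just the conversion, without the questdb dependency:

```python
def to_nanos(epoch_seconds: float) -> int:
    """Mirror of the callback pattern TimestampNanos(int(ts * 1e9)).

    Float epoch seconds carry roughly microsecond precision, which is
    enough for exchange feed timestamps.
    """
    return int(epoch_seconds * 1e9)

# to_nanos(1.5) -> 1500000000
```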
Performance note: The example above opens a Sender per callback for clarity. For production, use a shared Sender with periodic flush:
python
sender = Sender.from_conf(conf)
sender.establish()

async def trade_cb(t, receipt_timestamp):
    sender.row('trades', symbols={...}, columns={...}, at=...)
    # flush periodically or use auto_flush_interval in conf string
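For the shared-Sender pattern, batching can also be pushed into the conf string. auto_flush_rows and auto_flush_interval (milliseconds) are questdb Python client settings per its docs; verify against your installed client version:

```python
# Shared-Sender conf with client-side batching instead of manual flush().
conf = (
    "tcp::addr=localhost:9009;"
    "protocol_version=2;"
    "auto_flush_rows=1000;"      # flush once 1000 rows are buffered...
    "auto_flush_interval=1000;"  # ...or after 1000 ms, whichever comes first
)
```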
Execution order (mandatory - do not reorder):
  1. Run 01_schema.py - create tables and views
  2. Run 03_dashboard.py - this script starts ingestion automatically, waits for data, then deploys the dashboard and opens the browser. Do not start 02_ingest.py separately.
  • Do not Ctrl+C, restart, or otherwise touch the ingestion process once it is started.
  • Definition of done: the dashboard is open in the browser AND the ingestion process is still running.
Operational notes:
  • cryptofeed logs to stderr, not stdout. An empty stdout does not mean failure.
  • End the dashboard deploy script with open (macOS) or xdg-open (Linux) to launch the browser automatically. Include ?refresh=250ms in the URL so the dashboard opens with the correct refresh rate:
    python
    url = f"{GRAFANA_URL}{resp.json()['url']}?refresh=250ms&from=now-5m&to=now"
    subprocess.run(["open" if sys.platform == "darwin" else "xdg-open", url])
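The sorting logic in book_cb can be checked in isolation. A self-contained sketch with invented Decimal levels shaped like cryptofeed's book dicts (prices and sizes here are made up):

```python
from decimal import Decimal

# Invented snapshot shaped like book.book['bid'] / book.book['ask'].
bids = {Decimal("41999.0"): Decimal("2.1"), Decimal("41998.5"): Decimal("5.3")}
asks = {Decimal("42001.5"): Decimal("4.2"), Decimal("42001.0"): Decimal("1.8")}

# Best bid = highest price, best ask = lowest price - hence the sort directions.
bid_prices = sorted(bids, reverse=True)[:25]
ask_prices = sorted(asks)[:25]

# Cast Decimals to float before handing them to the QuestDB Sender.
bid_sizes = [float(bids[p]) for p in bid_prices]
ask_sizes = [float(asks[p]) for p in ask_prices]
```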
不要探索cryptofeed源代码或检查其版本。你需要的所有内容 都在这里。 完全复制此摄入脚本。certifi/SSL修复和所有 导入都是必需的——不要省略任何行:
python
import os
import certifi
os.environ['SSL_CERT_FILE'] = certifi.where()  # 必需:macOS缺少HTTPS/WSS的系统CA证书

import numpy as np
from cryptofeed import FeedHandler
from cryptofeed.exchanges import OKX
from cryptofeed.defines import TRADES, L2_BOOK
from questdb.ingress import Sender, TimestampNanos

conf = "tcp::addr=localhost:9009;protocol_version=2;"  # 企业版:查看references/enterprise.md

async def trade_cb(t, receipt_timestamp):
    with Sender.from_conf(conf) as sender:
        sender.row(
            'trades',
            symbols={'symbol': t.symbol, 'side': t.side},
            columns={'price': float(t.price), 'amount': float(t.amount)},
            at=TimestampNanos(int(t.timestamp * 1e9))
        )
        sender.flush()

async def book_cb(book, receipt_timestamp):
    bids = book.book['bid']
    asks = book.book['ask']
    # 排序:卖一按价格降序,买一按价格升序
    bid_prices = sorted(bids.keys(), reverse=True)[:25]
    ask_prices = sorted(asks.keys())[:25]
    with Sender.from_conf(conf) as sender:
        sender.row(
            'orderbook',
            symbols={'symbol': book.symbol},
            columns={
                'bid_prices': np.array([float(p) for p in bid_prices], dtype=np.float64),
                'bid_sizes':  np.array([float(bids[p]) for p in bid_prices], dtype=np.float64),
                'ask_prices': np.array([float(p) for p in ask_prices], dtype=np.float64),
                'ask_sizes':  np.array([float(asks[p]) for p in ask_prices], dtype=np.float64),
            },
            at=TimestampNanos(int(book.timestamp * 1e9))
        )
        sender.flush()

SYMBOLS = ['BTC-USDT', 'ETH-USDT', 'SOL-USDT']  # 根据需要添加更多

f = FeedHandler()
f.add_feed(OKX(
    symbols=SYMBOLS,
    channels=[TRADES, L2_BOOK],
    callbacks={TRADES: trade_cb, L2_BOOK: book_cb}
))
f.run()
cryptofeed API参考(完整,已在v2.4.x测试,不要读取源代码验证):
  • t.symbol、t.side('buy'/'sell')、t.price、t.amount、t.timestamp(浮点型纪元秒)
  • t.price 和 t.amount 是 Decimal 类型,传递给QuestDB Sender前用 float() 转换
  • book.book['bid'] 和 book.book['ask'] 是字典:{Decimal(price): Decimal(size), ...}
  • book.symbol、book.timestamp(浮点型纪元秒)
  • 交易所:OKX、Coinbase、Binance、Kraken、Bybit 等
  • 通道:TRADES、L2_BOOK、L3_BOOK、TICKER、CANDLES、OPEN_INTEREST、FUNDING、LIQUIDATIONS
  • 代码格式是交易所原生的:OKX用 'BTC-USDT',Coinbase用 'BTC-USD'
  • Python兼容性:避免 X | None 类型提示(需要3.10+)。使用 Optional[X] 或直接赋值。
  • 依赖项(全新虚拟环境):pip install cryptofeed questdb 'psycopg[binary]' requests numpy certifi,然后 pip uninstall uvloop -y(单独的Bash调用)
  • macOS SSL:始终在任何出站HTTPS/WSS连接之前通过certifi设置 SSL_CERT_FILE(Homebrew Python缺少系统CA证书)。在Python代码内部设置(os.environ['SSL_CERT_FILE'] = certifi.where()),永远不要作为shell级别的 SSL_CERT_FILE=$(...) 前缀,这会触发安全提示
性能说明: 上面的示例为了清晰,每个回调打开一个Sender。 生产环境中,使用共享Sender并定期刷新:
python
sender = Sender.from_conf(conf)
sender.establish()

async def trade_cb(t, receipt_timestamp):
    sender.row('trades', symbols={...}, columns={...}, at=...)
    # 定期刷新或在conf字符串中使用auto_flush_interval
执行顺序(强制,不要重新排序):
  1. 运行 01_schema.py,创建表和视图
  2. 运行 03_dashboard.py,此脚本自动启动摄入,等待数据,然后部署仪表板并打开浏览器。不要单独启动 02_ingest.py。
  • 一旦摄入进程启动,不要按Ctrl+C、重启或以其他方式操作它。
  • 完成定义:仪表板在浏览器中打开,并且摄入进程仍在运行。
操作说明:
  • cryptofeed记录到stderr,而非stdout。空stdout不代表失败。
  • 仪表板部署脚本最后使用 open(macOS)或 xdg-open(Linux)自动启动浏览器。URL中务必包含 ?refresh=250ms,以便仪表板以正确的刷新速率打开:
    python
    url = f"{GRAFANA_URL}{resp.json()['url']}?refresh=250ms&from=now-5m&to=now"
    subprocess.run(["open" if sys.platform == "darwin" else "xdg-open", url])

QuestDB Demo Instance

QuestDB演示实例

QuestDB's live demo at
demo.questdb.io
has FX and crypto datasets. Fetch the schema reference:
curl -sH "Accept: text/markdown" "https://questdb.com/docs/cookbook/demo-data-schema/"

QuestDB的实时演示地址
demo.questdb.io
有外汇和加密货币数据集。 获取模式参考:
curl -sH "Accept: text/markdown" "https://questdb.com/docs/cookbook/demo-data-schema/"

Grafana Integration

Grafana集成

QuestDB has a dedicated Grafana datasource plugin (questdb-questdb-datasource). It connects via PG wire on port 8812.
Datasource API config (the QuestDB plugin uses jsonData, NOT the standard url field):
  • jsonData.server: hostname only - no port, no protocol (e.g. host.docker.internal)
  • jsonData.port: integer, separate from server (e.g. 8812)
  • jsonData.tlsMode: "disable" for local Docker - omitting it defaults to TLS enabled, which breaks the connection
  • jsonData.username + secureJsonData.password: QuestDB defaults admin/quest
  • Do NOT use the url field - the QuestDB plugin ignores it
The dashboard deploy template below includes create-or-find datasource logic. Use it.
Key macros:
  • $__timeFilter(ts) - time range from Grafana's time picker
  • Default SAMPLE BY interval: 5s. Only change it if the user specifies a different bar size.
Symbol dropdown variable: SELECT DISTINCT symbol FROM trades
For advanced Grafana patterns (multi-query panels, axis overrides, repeating panels, order book depth charts), see references/grafana-advanced.md.
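A minimal sketch of the payload those rules produce for Grafana's POST /api/datasources endpoint. Values are the defaults named above; "access": "proxy" is the usual Grafana server-side access mode and an assumption here:

```python
import json

# Datasource creation payload following the jsonData rules above.
payload = {
    "name": "QuestDB",
    "type": "questdb-questdb-datasource",
    "access": "proxy",                    # standard Grafana server-side access
    "jsonData": {
        "server": "host.docker.internal", # hostname only: no port, no protocol
        "port": 8812,                     # integer, separate from server
        "username": "admin",
        "tlsMode": "disable",             # required for local Docker
    },
    "secureJsonData": {"password": "quest"},
}
body = json.dumps(payload)
# POST body to {GRAFANA_URL}/api/datasources with admin credentials.
```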
QuestDB有专门的Grafana数据源插件(questdb-questdb-datasource)。通过端口8812的PG Wire连接。
数据源API配置(QuestDB插件使用jsonData,而非标准url字段):
  • jsonData.server:仅主机名,不含端口和协议(例如 host.docker.internal)
  • jsonData.port:整数,与server分开(例如 8812)
  • jsonData.tlsMode:本地Docker设置为 "disable",省略此参数默认启用TLS,会导致连接失败
  • jsonData.username + secureJsonData.password:QuestDB默认值 admin/quest
  • 不要使用 url 字段,QuestDB插件会忽略它
下面的仪表板部署模板包含创建或查找数据源的逻辑。使用它即可。
关键宏:
  • $__timeFilter(ts):来自Grafana时间选择器的时间范围
  • 默认SAMPLE BY间隔:5s。仅当用户指定不同的K线大小时才更改。
代码下拉变量:SELECT DISTINCT symbol FROM trades
如需高级Grafana模式(多查询面板、轴覆盖、重复面板、订单簿深度图表),请查看 references/grafana-advanced.md。

Ready-to-Use Grafana Queries

现成可用的Grafana查询

Complete, tested SQL for common financial panels. Use directly.
Grafana query pattern rule: Grafana needs a time column, so the final SELECT aliases ts AS time. Never put OVER() clauses in the same SELECT that aliases ts AS time: put all window functions in CTEs where ts is still ts, then alias only in the final SELECT. All queries below follow this pattern.
OHLC Candlestick (from raw trades):
sql
SELECT ts AS time,
    first(price) AS open, max(price) AS high,
    min(price) AS low, last(price) AS close,
    sum(amount) AS volume
FROM trades
WHERE $__timeFilter(ts) AND symbol = '$symbol'
SAMPLE BY 5s;
OHLC Candlestick (from materialized view — faster for longer ranges):
sql
SELECT ts AS time,
    first(open) AS open, max(high) AS high,
    min(low) AS low, last(close) AS close,
    sum(volume) AS volume
FROM candles_1m
WHERE $__timeFilter(ts) AND symbol = '$symbol'
SAMPLE BY 5m;
VWAP (cumulative volume-weighted average price):
sql
WITH ohlc AS (
    SELECT ts, symbol,
        first(price) AS open, max(price) AS high,
        min(price) AS low, last(price) AS close,
        sum(amount) AS volume
    FROM trades
    WHERE $__timeFilter(ts) AND symbol = '$symbol'
    SAMPLE BY 5s
),
vwap AS (
    SELECT ts, close,
        sum((high + low + close) / 3 * volume) OVER (ORDER BY ts CUMULATIVE)
        / sum(volume) OVER (ORDER BY ts CUMULATIVE) AS vwap
    FROM ohlc
)
SELECT ts AS time, close, vwap FROM vwap;
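To make the cumulative arithmetic in the vwap CTE concrete, here is a pure-Python mirror of it (the sample bars are invented):

```python
# Each bar is (high, low, close, volume), matching the ohlc CTE columns.
bars = [
    (11.0,  9.0, 10.0, 2.0),
    (21.0, 19.0, 20.0, 2.0),
]

cum_pv = 0.0   # cumulative sum of typical_price * volume
cum_v = 0.0    # cumulative volume
vwap = []
for high, low, close, volume in bars:
    typical = (high + low + close) / 3   # typical price, as in the SQL
    cum_pv += typical * volume
    cum_v += volume
    vwap.append(cum_pv / cum_v)

# vwap -> [10.0, 15.0]
```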
Bollinger Bands (20-period SMA ± 2σ): Uses manual variance - more compatible than stddev_samp in window frames:
sql
WITH ohlc AS (
    SELECT ts, symbol,
        first(price) AS open, max(price) AS high,
        min(price) AS low, last(price) AS close,
        sum(amount) AS volume
    FROM trades
    WHERE $__timeFilter(ts) AND symbol = '$symbol'
    SAMPLE BY 5s
),
stats AS (
    SELECT ts, close,
        AVG(close) OVER (
            ORDER BY ts ROWS BETWEEN 19 PRECEDING AND CURRENT ROW
        ) AS sma20,
        AVG(close * close) OVER (
            ORDER BY ts ROWS BETWEEN 19 PRECEDING AND CURRENT ROW
        ) AS avg_close_sq
    FROM ohlc
)
SELECT ts AS time, close, sma20,
    sma20 + 2 * sqrt(avg_close_sq - (sma20 * sma20)) AS upper_band,
    sma20 - 2 * sqrt(avg_close_sq - (sma20 * sma20)) AS lower_band
FROM stats;
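The manual-variance trick above is just Var(X) = E[X²] - E[X]², computed over the 20-bar window. A pure-Python mirror on invented closes (the window is a plain list slice here):

```python
from math import sqrt

closes = [10.0, 12.0, 11.0, 13.0, 14.0]   # invented sample closes
window = closes[-20:]                     # last 20 (or fewer) closes

sma = sum(window) / len(window)                      # AVG(close)
avg_sq = sum(c * c for c in window) / len(window)    # AVG(close * close)
std = sqrt(avg_sq - sma * sma)                       # manual variance -> stddev
upper, lower = sma + 2 * std, sma - 2 * std
```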
RSI (14-period, SMA-smoothed): This Grafana version uses SMA via ROWS BETWEEN, proven in production. For standalone EMA-smoothed RSI, see references/indicators.md.
sql
WITH ohlc AS (
    SELECT ts, symbol,
        first(price) AS open, max(price) AS high,
        min(price) AS low, last(price) AS close,
        sum(amount) AS volume
    FROM trades
    WHERE $__timeFilter(ts) AND symbol = '$symbol'
    SAMPLE BY 5s
),
changes AS (
    SELECT ts, close,
        close - LAG(close) OVER (ORDER BY ts) AS change
    FROM ohlc
),
gains_losses AS (
    SELECT ts, close,
        CASE WHEN change > 0 THEN change ELSE 0 END AS gain,
        CASE WHEN change < 0 THEN ABS(change) ELSE 0 END AS loss
    FROM changes
),
avg_gl AS (
    SELECT ts, close,
        AVG(gain) OVER (ORDER BY ts
            ROWS BETWEEN 13 PRECEDING AND CURRENT ROW) AS avg_gain,
        AVG(loss) OVER (ORDER BY ts
            ROWS BETWEEN 13 PRECEDING AND CURRENT ROW) AS avg_loss
    FROM gains_losses
)
SELECT ts AS time,
    CASE WHEN avg_loss = 0 THEN 100
         ELSE 100 - (100 / (1 + avg_gain / NULLIF(avg_loss, 0)))
    END AS rsi
FROM avg_gl;
Note: For EMA-smoothed RSI (standard), replace the AVG...ROWS BETWEEN with avg(gain, 'period', 14) OVER (ORDER BY ts) (QuestDB native EMA). For Wilder's smoothing (α=1/N), use avg(gain, 'period', 27).
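The CTE chain above (changes → gains/losses → windowed averages → RSI) can be mirrored in a few lines of Python, on invented closes; an all-gains input must yield RSI = 100, which makes the sketch easy to sanity-check:

```python
def rsi(closes, period=14):
    """SMA-smoothed RSI over the trailing window, mirroring the SQL CTEs."""
    changes = [b - a for a, b in zip(closes, closes[1:])]
    gains = [c if c > 0 else 0.0 for c in changes]
    losses = [-c if c < 0 else 0.0 for c in changes]
    window_g = gains[-period:]            # AVG(gain) ... ROWS BETWEEN
    window_l = losses[-period:]           # AVG(loss) ... ROWS BETWEEN
    avg_gain = sum(window_g) / len(window_g)
    avg_loss = sum(window_l) / len(window_l)
    if avg_loss == 0:                     # CASE WHEN avg_loss = 0 THEN 100
        return 100.0
    return 100 - 100 / (1 + avg_gain / avg_loss)

# rsi([1, 2, 3, 4, 5]) -> 100.0 (only gains, so avg_loss is zero)
```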
Combined VWAP + Bollinger (single panel, multiple series):
sql
WITH ohlc AS (
    SELECT ts, symbol,
        first(price) AS open, max(price) AS high,
        min(price) AS low, last(price) AS close,
        sum(amount) AS volume
    FROM trades
    WHERE $__timeFilter(ts) AND symbol = '$symbol'
    SAMPLE BY 5s
),
indicators AS (
    SELECT ts, close,
        AVG(close) OVER (
            ORDER BY ts ROWS BETWEEN 19 PRECEDING AND CURRENT ROW
        ) AS sma20,
        AVG(close * close) OVER (
            ORDER BY ts ROWS BETWEEN 19 PRECEDING AND CURRENT ROW
        ) AS avg_close_sq,
        sum((high + low + close) / 3 * volume) OVER (ORDER BY ts CUMULATIVE)
        / sum(volume) OVER (ORDER BY ts CUMULATIVE) AS vwap
    FROM ohlc
)
SELECT ts AS time, close, sma20,
    sma20 + 2 * sqrt(avg_close_sq - (sma20 * sma20)) AS upper_band,
    sma20 - 2 * sqrt(avg_close_sq - (sma20 * sma20)) AS lower_band,
    vwap
FROM indicators;
常见金融面板的完整、经过测试的SQL。直接使用即可。
Grafana查询模式规则:Grafana需要 time 列,因此最终SELECT将 ts 别名为 time。永远不要在将 ts 别名为 time 的同一个SELECT中使用OVER()子句:将所有窗口函数放在CTE中(其中 ts 仍然是 ts),然后仅在最终SELECT中别名。下面的所有查询都遵循此模式。
OHLC蜡烛图(来自原始交易数据):
sql
SELECT ts AS time,
    first(price) AS open, max(price) AS high,
    min(price) AS low, last(price) AS close,
    sum(amount) AS volume
FROM trades
WHERE $__timeFilter(ts) AND symbol = '$symbol'
SAMPLE BY 5s;
OHLC蜡烛图(来自物化视图——长范围查询更快):
sql
SELECT ts AS time,
    first(open) AS open, max(high) AS high,
    min(low) AS low, last(close) AS close,
    sum(volume) AS volume
FROM candles_1m
WHERE $__timeFilter(ts) AND symbol = '$symbol'
SAMPLE BY 5m;
VWAP(累积成交量加权平均价格):
sql
WITH ohlc AS (
    SELECT ts, symbol,
        first(price) AS open, max(price) AS high,
        min(price) AS low, last(price) AS close,
        sum(amount) AS volume
    FROM trades
    WHERE $__timeFilter(ts) AND symbol = '$symbol'
    SAMPLE BY 5s
),
vwap AS (
    SELECT ts, close,
        sum((high + low + close) / 3 * volume) OVER (ORDER BY ts CUMULATIVE)
        / sum(volume) OVER (ORDER BY ts CUMULATIVE) AS vwap
    FROM ohlc
)
SELECT ts AS time, close, vwap FROM vwap;
布林带(20期SMA ± 2σ):使用手动方差,比窗口框架中的 stddev_samp 兼容性更好:
sql
WITH ohlc AS (
    SELECT ts, symbol,
        first(price) AS open, max(price) AS high,
        min(price) AS low, last(price) AS close,
        sum(amount) AS volume
    FROM trades
    WHERE $__timeFilter(ts) AND symbol = '$symbol'
    SAMPLE BY 5s
),
stats AS (
    SELECT ts, close,
        AVG(close) OVER (
            ORDER BY ts ROWS BETWEEN 19 PRECEDING AND CURRENT ROW
        ) AS sma20,
        AVG(close * close) OVER (
            ORDER BY ts ROWS BETWEEN 19 PRECEDING AND CURRENT ROW
        ) AS avg_close_sq
    FROM ohlc
)
SELECT ts AS time, close, sma20,
    sma20 + 2 * sqrt(avg_close_sq - (sma20 * sma20)) AS upper_band,
    sma20 - 2 * sqrt(avg_close_sq - (sma20 * sma20)) AS lower_band
FROM stats;
RSI(14期,SMA平滑):此Grafana版本使用ROWS BETWEEN实现SMA,已在生产环境中验证。如需独立的EMA平滑RSI,请查看 references/indicators.md。
sql
WITH ohlc AS (
    SELECT ts, symbol,
        first(price) AS open, max(price) AS high,
        min(price) AS low, last(price) AS close,
        sum(amount) AS volume
    FROM trades
    WHERE $__timeFilter(ts) AND symbol = '$symbol'
    SAMPLE BY 5s
),
changes AS (
    SELECT ts, close,
        close - LAG(close) OVER (ORDER BY ts) AS change
    FROM ohlc
),
gains_losses AS (
    SELECT ts, close,
        CASE WHEN change > 0 THEN change ELSE 0 END AS gain,
        CASE WHEN change < 0 THEN ABS(change) ELSE 0 END AS loss
    FROM changes
),
avg_gl AS (
    SELECT ts, close,
        AVG(gain) OVER (ORDER BY ts
            ROWS BETWEEN 13 PRECEDING AND CURRENT ROW) AS avg_gain,
        AVG(loss) OVER (ORDER BY ts
            ROWS BETWEEN 13 PRECEDING AND CURRENT ROW) AS avg_loss
    FROM gains_losses
)
SELECT ts AS time,
    CASE WHEN avg_loss = 0 THEN 100
         ELSE 100 - (100 / (1 + avg_gain / NULLIF(avg_loss, 0)))
    END AS rsi
FROM avg_gl;
Note: for the standard EMA-smoothed RSI, replace the AVG ... ROWS BETWEEN with `avg(gain, 'period', 14) OVER (ORDER BY ts)` (QuestDB's native EMA). For Wilder smoothing (α = 1/N), use `avg(gain, 'period', 27)`.
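The SMA-smoothed RSI computed by the query can be reproduced offline as a reference. A minimal sketch with illustrative prices; it mirrors the SQL's change / gain-loss / rolling-average pipeline and is not a validated trading implementation:

```python
def rsi_sma(closes, period=14):
    # Mirrors the SQL: per-bar change, split into gain/loss, rolling SMA of
    # each, then RSI = 100 - 100 / (1 + avg_gain / avg_loss).
    changes = [closes[i] - closes[i - 1] for i in range(1, len(closes))]
    gains = [max(c, 0.0) for c in changes]
    losses = [abs(min(c, 0.0)) for c in changes]
    out = []
    for i in range(len(changes)):
        lo = max(0, i - period + 1)
        avg_gain = sum(gains[lo:i + 1]) / (i + 1 - lo)
        avg_loss = sum(losses[lo:i + 1]) / (i + 1 - lo)
        if avg_loss == 0:
            out.append(100.0)  # matches the CASE WHEN avg_loss = 0 branch
        else:
            out.append(100 - 100 / (1 + avg_gain / avg_loss))
    return out

print(rsi_sma([100, 101, 102, 101.5, 102.5], period=14))
```

As in the SQL, a window with no losses yields RSI 100, and early bars use a shorter window until `period` changes have accumulated.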
Combined VWAP + Bollinger Bands (single panel, multiple series):
```sql
WITH ohlc AS (
    SELECT ts, symbol,
        first(price) AS open, max(price) AS high,
        min(price) AS low, last(price) AS close,
        sum(amount) AS volume
    FROM trades
    WHERE $__timeFilter(ts) AND symbol = '$symbol'
    SAMPLE BY 5s
),
indicators AS (
    SELECT ts, close,
        AVG(close) OVER (
            ORDER BY ts ROWS BETWEEN 19 PRECEDING AND CURRENT ROW
        ) AS sma20,
        AVG(close * close) OVER (
            ORDER BY ts ROWS BETWEEN 19 PRECEDING AND CURRENT ROW
        ) AS avg_close_sq,
        sum((high + low + close) / 3 * volume) OVER (ORDER BY ts CUMULATIVE)
        / sum(volume) OVER (ORDER BY ts CUMULATIVE) AS vwap
    FROM ohlc
)
SELECT ts AS time, close, sma20,
    sma20 + 2 * sqrt(avg_close_sq - (sma20 * sma20)) AS upper_band,
    sma20 - 2 * sqrt(avg_close_sq - (sma20 * sma20)) AS lower_band,
    vwap
FROM indicators;
```

Dashboard Deployment via API

Complete working deployment script. This dashboard JSON is tested and working — copy the structure exactly for all panels. Do not split or reorganize panels.

Panel layout rule: VWAP, Bollinger Bands, and RSI are ALWAYS overlaid on an OHLC candlestick panel as additional refIDs (B, C, D) with `includeAllFields: true`. The panel type MUST be `candlestick`, not `timeseries`. Creating separate timeseries panels for these indicators is wrong — they go on the candlestick panel.

Query pattern: refId A returns the OHLC columns (open, high, low, close, volume). refIds B, C, D each return `ts AS time` plus their indicator column(s) ONLY — not OHLC. Grafana overlays them using the shared time column. If an indicator query also returns OHLC columns, the candlestick rendering breaks.

Overrides are mandatory, not cosmetic. RSI (0-100) on the same Y-axis as price (~97,000) makes the candlesticks invisible. Spread (~0.01) on the same axis as bid/ask (~97,000) makes the spread line invisible. Copy the `fieldConfig.overrides` arrays from the template exactly — especially RSI's right axis (0-100%) and spread's right axis.
Dashboard defaults (copy exactly):
  • `"refresh": "250ms"` — NOT `"5s"`. The 250ms refresh is intentional for real-time data.
  • `"liveNow": false`
  • `"time": {"from": "now-5m", "to": "now"}`
  • `"timepicker": {"refresh_intervals": ["250ms", "500ms", "1s", "5s", "10s", "30s", "1m", "5m", "15m", "30m", "1h", "2h", "1d"]}`
  • Open the URL with `?refresh=250ms&from=now-5m&to=now` appended.
CRITICAL: Grafana's `min_refresh_interval` defaults to `5s`. Sub-second refresh intervals (250ms, 500ms, 1s) are blocked server-side unless you set `GF_DASHBOARDS_MIN_REFRESH_INTERVAL=250ms` when starting Grafana. This is already set in the Docker run command in the Execution Scenarios above. Without it, the dashboard JSON `"refresh": "250ms"` and the URL `?refresh=250ms` are silently ignored, and the dropdown won't show sub-5s options.
Target structure for every panel query:
```json
{
    "refId": "A",
    "datasource": {"uid": "QUESTDB_UID", "type": "questdb-questdb-datasource"},
    "format": 1,
    "rawSql": "SELECT ts AS time, ... WHERE $__timeFilter(ts) ..."
}
```
CRITICAL: `"format"` MUST be the integer `1`, not the string `"table"`. The QuestDB Grafana plugin uses a Go integer enum (`sqlutil.FormatQueryOption`). String values cause `json: cannot unmarshal string into Go struct field Query.format`. Grafana's JSON export shows `"table"` (string), but the API POST requires `1` (integer).
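Because Grafana's JSON export shows the string form, this bug is easy to reintroduce when copying panels. A small pre-flight check, sketched here with a helper name of our choosing, can be run against the dashboard payload before POSTing:

```python
def check_target_formats(dashboard_payload):
    """Raise if any panel target's "format" is not the integer 1, which the
    QuestDB Grafana plugin's API requires (not the string "table")."""
    bad = []
    for panel in dashboard_payload.get("dashboard", {}).get("panels", []):
        for target in panel.get("targets", []):
            if target.get("format") != 1:
                bad.append((panel.get("title"), target.get("refId"), target.get("format")))
    if bad:
        raise ValueError(f"targets with wrong 'format' (must be integer 1): {bad}")

# Example: the string form exported by Grafana fails the check
payload = {"dashboard": {"panels": [
    {"title": "OHLC", "targets": [{"refId": "A", "format": "table"}]},
]}}
try:
    check_target_formats(payload)
except ValueError as e:
    print("caught:", e)
```

Run it on the `dashboard` dict in the script below; a clean pass means every target will unmarshal on the Go side.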
```python
import json, subprocess, sys, os, requests, time

QUESTDB_URL = "http://localhost:9000"
GRAFANA_URL = "http://localhost:3000"
GRAFANA_AUTH = ("admin", "admin")
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))

# --- Start ingestion if not already running ---
already_running = False
try:
    resp = requests.get(f"{QUESTDB_URL}/exec",
                        params={"query": "SELECT count() FROM trades WHERE ts > dateadd('s', -10, now())"})
    already_running = resp.json()["dataset"][0][0] > 0
except Exception:
    pass

ingest_proc = None  # stays None when ingestion was already running
if already_running:
    print("Ingestion already running — skipping launch")
else:
    ingest_log = open(os.path.join(SCRIPT_DIR, "ingest.log"), "w")
    ingest_proc = subprocess.Popen(
        [sys.executable, "-u", os.path.join(SCRIPT_DIR, "02_ingest.py")],
        stdout=ingest_log, stderr=ingest_log,
        start_new_session=True,  # detach from parent — survives Ctrl+C and script exit
    )
    print(f"Ingestion started: PID {ingest_proc.pid}")

# --- Gate: wait for data before deploying dashboard ---
for i in range(15):  # up to 15s
    time.sleep(1)
    if ingest_proc is not None and ingest_proc.poll() is not None:
        ingest_log.close()
        print("ERROR: Ingestion process died. Log:")
        print(open(os.path.join(SCRIPT_DIR, "ingest.log")).read())
        sys.exit(1)
    try:
        resp = requests.get(f"{QUESTDB_URL}/exec", params={"query": "SELECT count() FROM trades"})
        count = resp.json()["dataset"][0][0]
        if count > 0:
            print(f"Data gate passed: {count} rows in trades (waited {i+1}s)")
            break
    except Exception:
        pass
else:
    print("ERROR: No data after 15s. Check ingest.log")
    sys.exit(1)

# --- Create or find QuestDB datasource ---
# QuestDB plugin uses jsonData fields, NOT the standard url field:
# server = hostname only (no port, no protocol), port = integer, tlsMode = "disable"
ds_list = requests.get(f"{GRAFANA_URL}/api/datasources", auth=GRAFANA_AUTH).json()
existing = [d for d in ds_list if d["type"] == "questdb-questdb-datasource"]
if existing:
    questdb_uid = existing[0]["uid"]
else:
    resp = requests.post(f"{GRAFANA_URL}/api/datasources", auth=GRAFANA_AUTH, json={
        "name": "QuestDB",
        "type": "questdb-questdb-datasource",
        "access": "proxy",
        "jsonData": {
            "server": "host.docker.internal",
            "port": 8812,
            "username": "admin",
            "tlsMode": "disable",
            "timeout": "120",
            "queryTimeout": "60",
        },
        "secureJsonData": {"password": "quest"},
    })
    questdb_uid = resp.json()["datasource"]["uid"]

DS_REF = {"uid": questdb_uid, "type": "questdb-questdb-datasource"}

dashboard = {
    "dashboard": {
        "title": "Crypto Real-Time Market Data",
        "uid": "crypto-realtime",
        "timezone": "browser",
        "refresh": "250ms",  # Default refresh rate — do NOT change to 5s
        "liveNow": False,
        "schemaVersion": 38,
        "time": {"from": "now-5m", "to": "now"},
        "timepicker": {
            "refresh_intervals": ["250ms", "500ms", "1s", "5s", "10s", "30s",
                                  "1m", "5m", "15m", "30m", "1h", "2h", "1d"],
        },
        "tags": ["crypto", "questdb", "realtime"],
        "templating": {"list": [{
            "name": "symbol", "type": "query", "label": "Symbol",
            "query": "SELECT DISTINCT symbol FROM trades ORDER BY symbol;",
            "datasource": DS_REF, "refresh": 1, "sort": 1,
            "current": {"text": "BTC-USDT", "value": "BTC-USDT"},
        }]},
        "panels": [
            {
                "id": 1, "type": "candlestick", "title": "OHLC - $symbol",
                "gridPos": {"h": 10, "w": 24, "x": 0, "y": 0},
                "datasource": DS_REF,
                "fieldConfig": {
                    "defaults": {"custom": {"axisBorderShow": False, "axisPlacement": "auto"}},
                    "overrides": [{"matcher": {"id": "byName", "options": "volume"},
                                   "properties": [{"id": "custom.axisPlacement", "value": "hidden"}]}],
                },
                "options": {
                    "mode": "candles+volume", "includeAllFields": False,
                    "candleStyle": "candles", "colorStrategy": "open-close",
                    "colors": {"up": "green", "down": "red"},
                    "fields": {"open": "open", "high": "high", "low": "low",
                               "close": "close", "volume": "volume"},
                },
                "targets": [{
                    "refId": "A", "datasource": DS_REF, "format": 1,
                    "rawSql": "SELECT ts AS time, first(price) AS open, max(price) AS high, min(price) AS low, last(price) AS close, sum(amount) AS volume FROM trades WHERE $__timeFilter(ts) AND symbol = '$symbol' SAMPLE BY 5s;",
                }],
            },
            {
                "id": 2, "type": "candlestick", "title": "OHLC + Indicators - $symbol",
                "gridPos": {"h": 10, "w": 24, "x": 0, "y": 10},
                "datasource": DS_REF,
                "fieldConfig": {
                    "defaults": {"custom": {"axisBorderShow": False, "axisPlacement": "auto"}},
                    # CRITICAL: these overrides are NOT cosmetic — without them:
                    # - RSI (0-100) shares Y-axis with price (~97000) → candlesticks appear flat
                    # - Volume distorts the price axis
                    # - Bollinger bands have no visual distinction from price lines
                    # Copy ALL overrides below exactly. NEVER use "overrides": []
                    "overrides": [
                        {"matcher": {"id": "byName", "options": "volume"},
                         "properties": [{"id": "custom.axisPlacement", "value": "hidden"}]},
                        {"matcher": {"id": "byName", "options": "vwap"},
                         "properties": [{"id": "color", "value": {"fixedColor": "orange", "mode": "fixed"}},
                                        {"id": "custom.lineWidth", "value": 2}]},
                        {"matcher": {"id": "byName", "options": "sma20"},
                         "properties": [{"id": "color", "value": {"fixedColor": "yellow", "mode": "fixed"}},
                                        {"id": "custom.lineWidth", "value": 2}]},
                        {"matcher": {"id": "byName", "options": "upper_band"},
                         "properties": [{"id": "color", "value": {"fixedColor": "light-blue", "mode": "fixed"}},
                                        {"id": "custom.lineStyle", "value": {"fill": "dash", "dash": [10, 10]}},
                                        {"id": "custom.fillBelowTo", "value": "lower_band"},
                                        {"id": "custom.fillOpacity", "value": 8}]},
                        {"matcher": {"id": "byName", "options": "lower_band"},
                         "properties": [{"id": "color", "value": {"fixedColor": "light-blue", "mode": "fixed"}},
                                        {"id": "custom.lineStyle", "value": {"fill": "dash", "dash": [10, 10]}}]},
                        {"matcher": {"id": "byFrameRefID", "options": "D"},
                         "properties": [{"id": "color", "value": {"fixedColor": "purple", "mode": "fixed"}},
                                        {"id": "custom.axisPlacement", "value": "right"},
                                        {"id": "min", "value": 0}, {"id": "max", "value": 100},
                                        {"id": "unit", "value": "percent"}]},
                    ],
                },
                "options": {
                    "mode": "candles+volume", "includeAllFields": True,
                    "candleStyle": "candles", "colorStrategy": "open-close",
                    "colors": {"up": "green", "down": "red"},
                    "fields": {"open": "open", "high": "high", "low": "low",
                               "close": "close", "volume": "volume"},
                },
                # --- Overlay pattern: refId A = OHLC columns, B/C/D = one indicator each ---
                # A MUST return: open, high, low, close, volume (for the candlestick)
                # B/C/D MUST return: ts AS time + indicator columns ONLY (not OHLC)
                # Grafana overlays B/C/D on A using the shared time column
                "targets": [
                    {   # A: OHLC + volume — the candlestick data
                        "refId": "A", "datasource": DS_REF, "format": 1,
                        "rawSql": "SELECT ts AS time, first(price) AS open, max(price) AS high, min(price) AS low, last(price) AS close, sum(amount) AS volume FROM trades WHERE $__timeFilter(ts) AND symbol = '$symbol' SAMPLE BY 5s;",
                    },
                    {   # B: VWAP only — overlays on candlestick
                        "refId": "B", "datasource": DS_REF, "format": 1,
                        "rawSql": "WITH ohlc AS (SELECT ts, symbol, first(price) AS open, max(price) AS high, min(price) AS low, last(price) AS close, sum(amount) AS volume FROM trades WHERE $__timeFilter(ts) AND symbol = '$symbol' SAMPLE BY 5s), vwap AS (SELECT ts, sum((high + low + close) / 3 * volume) OVER (ORDER BY ts CUMULATIVE) / sum(volume) OVER (ORDER BY ts CUMULATIVE) AS vwap FROM ohlc) SELECT ts AS time, vwap FROM vwap;",
                    },
                    {   # C: Bollinger Bands only (sma20, upper_band, lower_band)
                        "refId": "C", "datasource": DS_REF, "format": 1,
                        "rawSql": "WITH ohlc AS (SELECT ts, symbol, first(price) AS open, max(price) AS high, min(price) AS low, last(price) AS close, sum(amount) AS volume FROM trades WHERE $__timeFilter(ts) AND symbol = '$symbol' SAMPLE BY 5s), stats AS (SELECT ts, close, AVG(close) OVER (ORDER BY ts ROWS BETWEEN 19 PRECEDING AND CURRENT ROW) AS sma20, AVG(close * close) OVER (ORDER BY ts ROWS BETWEEN 19 PRECEDING AND CURRENT ROW) AS avg_close_sq FROM ohlc) SELECT ts AS time, sma20, sma20 + 2 * sqrt(avg_close_sq - (sma20 * sma20)) AS upper_band, sma20 - 2 * sqrt(avg_close_sq - (sma20 * sma20)) AS lower_band FROM stats;",
                    },
                    {   # D: RSI only — right Y-axis 0-100%
                        "refId": "D", "datasource": DS_REF, "format": 1,
                        "rawSql": "WITH ohlc AS (SELECT ts, symbol, first(price) AS open, max(price) AS high, min(price) AS low, last(price) AS close, sum(amount) AS volume FROM trades WHERE $__timeFilter(ts) AND symbol = '$symbol' SAMPLE BY 5s), changes AS (SELECT ts, close, close - LAG(close) OVER (ORDER BY ts) AS change FROM ohlc), gains_losses AS (SELECT ts, close, CASE WHEN change > 0 THEN change ELSE 0 END AS gain, CASE WHEN change < 0 THEN ABS(change) ELSE 0 END AS loss FROM changes), avg_gl AS (SELECT ts, close, AVG(gain) OVER (ORDER BY ts ROWS BETWEEN 13 PRECEDING AND CURRENT ROW) AS avg_gain, AVG(loss) OVER (ORDER BY ts ROWS BETWEEN 13 PRECEDING AND CURRENT ROW) AS avg_loss FROM gains_losses) SELECT ts AS time, CASE WHEN avg_loss = 0 THEN 100 ELSE 100 - (100 / (1 + avg_gain / NULLIF(avg_loss, 0))) END AS rsi FROM avg_gl;",
                    },
                ],
            },
            {
                "id": 3, "type": "timeseries", "title": "Bid-Ask Spread - $symbol",
                "gridPos": {"h": 6, "w": 24, "x": 0, "y": 20},
                "datasource": DS_REF,
                # CRITICAL: overrides REQUIRED — without them spread (~0.01) is invisible
                # next to bid/ask (~97000). Spread MUST use right Y-axis.
                "fieldConfig": {
                    "defaults": {"custom": {"lineWidth": 1, "fillOpacity": 15,
                                            "spanNulls": True, "pointSize": 1}},
                    "overrides": [
                        {"matcher": {"id": "byName", "options": "spread"},
                         "properties": [{"id": "color", "value": {"fixedColor": "red", "mode": "fixed"}},
                                        {"id": "custom.axisPlacement", "value": "right"}]},
                        {"matcher": {"id": "byName", "options": "best_bid"},
                         "properties": [{"id": "color", "value": {"fixedColor": "green", "mode": "fixed"}}]},
                        {"matcher": {"id": "byName", "options": "best_ask"},
                         "properties": [{"id": "color", "value": {"fixedColor": "orange", "mode": "fixed"}}]},
                    ],
                },
                "targets": [{
                    "refId": "A", "datasource": DS_REF, "format": 1,
                    "rawSql": "SELECT ts AS time, avg(ask_prices[1] - bid_prices[1]) AS spread, avg(bid_prices[1]) AS best_bid, avg(ask_prices[1]) AS best_ask FROM orderbook WHERE $__timeFilter(ts) AND symbol = '$symbol' SAMPLE BY 5s;",
                }],
            },
        ],
    },
    "overwrite": True,
}

resp = requests.post(f"{GRAFANA_URL}/api/dashboards/db", auth=GRAFANA_AUTH,
                     headers={"Content-Type": "application/json"}, json=dashboard)
url = f"{GRAFANA_URL}{resp.json().get('url', '')}?refresh=250ms&from=now-5m&to=now"
print(f"Dashboard: {resp.status_code} - {url}")

# Open in browser — the data gate above already confirmed rows are flowing
subprocess.run(["open" if sys.platform == "darwin" else "xdg-open", url])
```

Always reference the datasource by UID and type, never by display name.
Do NOT add extra `sleep` commands — the data gate loop handles all waiting.