dflow-kalshi-market-data


DFlow Kalshi Market Data

Pull data about a known Kalshi market (or set of markets) — orderbook, trades, prices, candles, forecasts, in-game live data — as a snapshot, a historical range, or a live stream.

Prerequisites

  • DFlow docs MCP (https://pond.dflow.net/mcp) — install per the repo README. This skill is the recipe; the MCP is the reference. Query params, pagination, exact payload schemas, WS snapshot-vs-diff semantics, and the category-specific `live_data.details` shapes all live there — don't guess.

Surface

All data endpoints in this skill run against the Metadata API (https://pond.dflow.net/build/metadata-api) — REST for snapshots and history, WebSockets for live streams. Call it from anywhere: a `curl` from the command line, a Node/Python script, a cron job, a backend, or a Next.js proxy fronting a browser UI.
If the user says "run this from my terminal", don't reach for the `dflow` CLI — it has no market-data subcommands. Write a short HTTP/WS script against the Metadata API instead.

Pick the shape first

Three intents, three shapes. Match the user's phrasing, then pick the endpoint:
  • Snapshot ("right now", "current") → REST, one call.
  • History ("last hour", "between T1 and T2", "last N trades") → REST with time / limit params.
  • Stream ("live", "as it happens", "alert me when") → WebSocket subscription.

Data → endpoint map

For each dataset below, the one-liner covers all three shapes. Field-level details (exact params, pagination tokens, payload schemas) → docs MCP.

Orderbook

  • Snapshot: `GET /api/v1/orderbook/{ticker}` or `/api/v1/orderbook/by-mint/{mint}` (includes `sequence`).
  • Stream: `orderbook` channel (`yes_bids` + `no_bids` maps per update; no `sequence` on the stream payload).

Trades — two endpoints, overlapping but different scopes

  • `GET /api/v1/trades` (and `/trades/by-mint/{mint}`) — the complete market print tape. All trades that hit Kalshi's orderbook, which includes DFlow onchain fills (those hit Kalshi's book too; see the "Do onchain trades show up on Kalshi's trade websocket?" FAQ). This is the default for "show me trades on this market." Stream equivalent: `trades` channel.
  • `GET /api/v1/onchain-trades` (and `/onchain-trades/by-market/{ticker}`, `/onchain-trades/by-event/{eventTicker}`) — DFlow onchain fills only, with onchain-specific fields that `/trades` doesn't carry: `wallet`, `transactionSignature`, `id`, `inputAmount`, `outputAmount`, `createdAt`. Subset of what's on `/trades`, but richer per-row. No WS stream.
  • Decision: complete tape → `/trades`. Wallet-scoped activity feed, DFlow-execution analytics, tx-signature lookups → `/onchain-trades`. Real-time fill detection for a specific user order → parse program events directly (see `/build/prediction-markets/onchain-trade-parsing`).

Top-of-book prices

  • Snapshot: read `yesBid` / `yesAsk` / `noBid` / `noAsk` directly from the market object (`GET /api/v1/market/{ticker}` — singular) — no separate endpoint.
  • Stream: `prices` channel.

Candlesticks (OHLCV)

  • Market-level: `GET /api/v1/market/{ticker}/candlesticks` or `/api/v1/market/by-mint/{mint}/candlesticks`.
  • Event-level: `GET /api/v1/event/{ticker}/candlesticks`.
  • 5,000-candle cap per request (see Gotchas).
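With `periodInterval` in minutes and a hard 5,000-candle cap per request (see Gotchas), a wide range has to be paged client-side. A sketch of the window math, assuming `startTs`/`endTs` are Unix seconds:

```python
def candle_windows(start_ts: int, end_ts: int, period_interval_min: int, cap: int = 5000):
    """Yield (start, end) second-ranges that each fit inside the
    per-request candle cap, covering [start_ts, end_ts) in order."""
    # Seconds spanned by one maximally full request.
    window = period_interval_min * 60 * cap
    t = start_ts
    while t < end_ts:
        yield t, min(t + window, end_ts)
        t += window
```

One day of 1-minute candles (1,440) fits in a single request; a month of them does not, so the helper splits it.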

Forecast percentile history

  • Event-level: `GET /api/v1/event/{seriesTicker}/{eventId}/forecast_percentile_history` (plus `/api/v1/event/by-mint/{mint}/forecast_percentile_history`). Kalshi's historical forecast distribution for an event.

Live data (Kalshi passthrough)

  • `GET /api/v1/live_data`, `/live_data/by-event/{ticker}`, `/live_data/by-mint/{mint}`.
  • Response includes a `details` object whose fields depend on the milestone type — football, soccer, tennis, golf, MMA, baseball, cricket, racing each have their own known-field sets. See `live-data-details` in the docs MCP before touching `details`.

Streaming lifecycle

Connect → subscribe → handle → reconnect. In a sentence each:
  • Connect: dev is `wss://dev-prediction-markets-api.dflow.net/api/v1/ws` (no auth). Prod is `wss://prediction-markets-api.dflow.net/api/v1/ws` with `x-api-key` on the WS upgrade headers. REST equivalents: `https://dev-prediction-markets-api.dflow.net` and `https://prediction-markets-api.dflow.net`.
  • Subscribe: send `{ type: "subscribe", channel: "prices" | "trades" | "orderbook", all: true | tickers: [...] }`. Each channel holds its own subscription state.
  • Handle: parse each message by `channel`, process asynchronously — don't block the read loop.
  • Reconnect: exponential backoff on disconnect, and re-send every subscription after reconnect. The server doesn't remember you.
Exact message schemas (prices, trades, orderbook), heartbeat/ping behavior, and incremental-vs-snapshot semantics on the orderbook channel → docs MCP.
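The lifecycle can be sketched with the third-party `websockets` package against the dev endpoint (no auth needed there). The subscribe payload and channel names come from the section above; the backoff parameters and the example ticker are arbitrary assumptions:

```python
import asyncio
import json

DEV_WS = "wss://dev-prediction-markets-api.dflow.net/api/v1/ws"

def backoff_delays(base: float = 1.0, cap: float = 60.0):
    """Deterministic doubling backoff, capped."""
    delay = base
    while True:
        yield delay
        delay = min(delay * 2, cap)

async def stream(tickers: list[str]) -> None:
    import websockets  # third-party: pip install websockets

    subs = [{"type": "subscribe", "channel": ch, "tickers": tickers}
            for ch in ("prices", "trades")]
    delays = backoff_delays()
    while True:
        try:
            async with websockets.connect(DEV_WS) as ws:
                # The server forgets subscriptions on disconnect:
                # re-send every one after each (re)connect.
                for sub in subs:
                    await ws.send(json.dumps(sub))
                delays = backoff_delays()  # healthy connection: reset backoff
                async for raw in ws:
                    msg = json.loads(raw)
                    # Dispatch by channel; keep this fast, or hand off to
                    # a queue so the read loop never blocks.
                    print(msg.get("channel"), msg)
        except Exception:
            await asyncio.sleep(next(delays))

# asyncio.run(stream(["SOME-TICKER"]))  # hypothetical ticker
```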

What to ASK the user (and what NOT to ask)

Query shape — infer if unambiguous, confirm if not:
  1. Which market — ticker or outcome mint.
  2. Which dataset — orderbook, trades (Kalshi vs onchain), prices, candles, forecasts, or live data.
  3. Snapshot / history / stream — infer from phrasing, confirm if ambiguous.
  4. History bounds / interval — time range (`startTs`, `endTs`) and `periodInterval` for candles; limit for trades.
Infra — always ask, never infer:
  1. DFlow API key. Ask with a clean, neutral question: "Do you have a DFlow API key?" Don't presuppose where the key lives — phrasings like "do you have it in env?" or "is `DFLOW_API_KEY` set?" nudge the user toward env-var defaults they didn't ask for. Don't assume the user has one just because they mention the `dflow` CLI is configured. Surface the choice; don't silently fall back to env or to dev. It's one key for everything DFlow — the same `x-api-key` unlocks the Trade API and the Metadata API, REST and WebSocket. If yes → prod host (`https://prediction-markets-api.dflow.net` REST, `wss://prediction-markets-api.dflow.net/api/v1/ws` WS) with `x-api-key` on every request (REST and the WS upgrade). If no → dev host (`https://dev-prediction-markets-api.dflow.net`, `wss://dev-prediction-markets-api.dflow.net/api/v1/ws`), rate-limited; point them at https://pond.dflow.net/build/api-key for a prod key. When you generate a script, log the resolved host + key-presence at startup so the user can see which rails they're on.
Do NOT ask about:
  • RPC, wallet, signing — this skill is read-only public data.
  • Settlement mint / slippage / fees — trade-side concerns; if the user pivots to placing an order off something they see here, hand off to `dflow-kalshi-trading`.

Gotchas (the docs MCP won't volunteer these)

  • Two trade endpoints, overlapping scopes. `/api/v1/trades` is the complete market tape (Kalshi-offchain order flow plus DFlow onchain fills — DFlow fills hit Kalshi's book). `/api/v1/onchain-trades` is the DFlow-onchain subset, enriched with `wallet` / `transactionSignature` / input-output amounts. When a user says "show trades on this market" they want `/trades`; when they say "show this wallet's DFlow activity" they want `/onchain-trades?wallet=...`.
  • Orderbook returns only bid ladders (`yes_bids`, `no_bids`). Best YES ask is derived: `1 - max(no_bids keys)` (a NO bid at `p` is a YES offer at `1-p`). Same on REST and the WS channel.
  • Two price scales. Probability strings (`"0.4200"`) on orderbook + prices channels. Integer 0–10000 on `/trades` + `trades` channel, with `yes_price_dollars` / `no_price_dollars` string companions. Normalize before you compute.
  • 5,000-candle cap per request, hard 400. If the range × interval would produce more than 5,000 candles, the endpoint returns a 400 with no partial result — it's Kalshi's upstream cap forwarded through DFlow. Narrow the range, widen the interval, or page yourself.
  • `periodInterval` is in minutes, not seconds. Kalshi convention: `1` = 1-minute candles, `60` = hourly, `1440` = daily. Easy to blow past the 5,000-candle cap by assuming seconds.
  • `live_data.details` is categorical, not generic. Fields differ per milestone type. Don't hardcode cross-category field access; branch on `type` and pull the known fields for that category from the MCP's `live-data-details` reference.
  • WebSocket `all: true` is a firehose. Especially on `prices` and `orderbook`. Use a ticker list unless the monitor truly needs universe-wide coverage.
  • WS subscriptions don't survive reconnects. After every reconnect, resend every `subscribe` message you had before the drop.
  • Streams can go quiet in the maintenance window — Thursdays 3:00–5:00 AM ET, Kalshi is offline; expect sparse or missing WS traffic and stale REST fields.
  • The CLI's stored key doesn't flow into your script's HTTP client. `dflow setup` stores the key for the `dflow` binary's own use. The Metadata API calls your script makes directly are separate — they need the key plumbed in (env, `.env`, flag). It's one DFlow key, but two plumbing sites any time you mix CLI invocations with direct HTTP/WS calls in the same codebase.

When something doesn't fit

For anything not covered above — full parameter lists, pagination tokens, exact WS message shapes (snapshot-vs-diff on orderbook, heartbeat cadence), candlestick interval enums, category-specific `live_data.details` fields, forecast-percentile response shape — query the docs MCP (`search_d_flow`, `query_docs_filesystem_d_flow`). Don't guess.

Sibling skills

  • `dflow-kalshi-market-scanner` — find markets matching a criterion across the universe (uses these primitives, shapes them into named scans).
  • `dflow-kalshi-trading` — place buy / sell / redeem orders on a market you're watching here.
  • `dflow-kalshi-portfolio` — view the user's own positions and P&L.