Alva

What is Alva

Alva is an agentic finance platform. It provides unified access to 250+ financial data sources spanning crypto, equities, ETFs, macroeconomic indicators, on-chain analytics, and social sentiment -- including spot and futures OHLCV, funding rates, company fundamentals, price targets, insider and senator trades, earnings estimates, CPI, GDP, Treasury rates, exchange flows, DeFi metrics, news feeds, social media and more!

What the Alva Skill Enables


The Alva skill connects any AI agent or IDE to the full Alva platform. With it you can:
  • Access financial data -- query any of Alva's 250+ data SDKs programmatically, or bring your own data via HTTP API or direct upload.
  • Run cloud-side analytics -- write JavaScript that executes on Alva Cloud in a secure runtime. No local compute, no dependencies, no infrastructure to manage.
  • Build agentic playbooks -- create data pipelines, trading strategies, and scheduled automations that run continuously on Alva Cloud.
  • Deploy trading strategies -- backtest with the Altra trading engine and run continuous live paper trading.
  • Release and share -- turn your work into a hosted playbook web app at https://yourusername.playbook.alva.ai/playbook-name/version/index.html, and share it with the world.
In short: turn your ideas into a forever-running finance agent that gets things done for you.

Capabilities & Common Workflows


1. ALFS (Alva FileSystem)


The foundation of the platform. ALFS is a globally shared filesystem with built-in authorization. Every user has a home directory; permissions control who can read and write each path. Scripts, data feeds, playbook assets, and shared libraries all live on ALFS.
Key operations: read, write, mkdir, stat, readdir, remove, rename, copy, symlink, chmod, grant, revoke.

2. JS Runtime


Run JavaScript on Alva Cloud in a secure V8 isolate. The runtime has access to ALFS, all 250+ SDKs, HTTP networking, LLM access, and the Feed SDK. Everything executes server-side -- nothing runs on your local machine.

3. SDKHub


250+ built-in financial data SDKs. To find the right SDK for a task, use the two-step retrieval flow:
  1. Pick a partition from the index below.
  2. Call GET /api/v1/sdk/partitions/:partition/summary to see module summaries, then load the full doc for the chosen module via GET /api/v1/sdk/doc?name=....

SDK Partition Index


Partition -- Description
spot_market_price_and_volume -- Spot OHLCV for crypto and equities. Price bars, volume, historical candles.
crypto_futures_data -- Perpetual futures: OHLCV, funding rates, open interest, long/short ratio.
crypto_technical_metrics -- Crypto technical & on-chain indicators: MA, EMA, RSI, MACD, Bollinger, MVRV, SOPR, NUPL, whale ratio, market cap, FDV, etc. (20 modules)
crypto_exchange_flow -- Exchange inflow/outflow data for crypto assets.
crypto_fundamentals -- Crypto market fundamentals: circulating supply, max supply, market dominance.
crypto_screener -- Screen crypto assets by technical metrics over custom time ranges.
company_crypto_holdings -- Public companies' crypto token holdings (e.g. MicroStrategy BTC).
equity_fundamentals -- Stock fundamentals: income statements, balance sheets, cash flow, margins, PE, PB, ROE, ROA, EPS, market cap, dividend yield, enterprise value, etc. (31 modules)
equity_estimates_and_targets -- Analyst price targets, consensus estimates, earnings guidance.
equity_events_calendar -- Dividend calendar, stock split calendar.
equity_ownership_and_flow -- Institutional holdings, insider trades, senator trading activity.
stock_screener -- Screen stocks by sector, industry, country, exchange, IPO date, earnings date, financial & technical metrics. (9 modules)
stock_technical_metrics -- Stock technical indicators: beta, volatility, Bollinger, EMA, MA, MACD, RSI-14, VWAP, avg daily dollar volume.
etf_fundamentals -- ETF holdings breakdown.
macro_and_economics_data -- CPI, GDP, unemployment, federal funds rate, Treasury rates, PPI, consumer sentiment, VIX, TIPS, nonfarm payroll, retail sales, recession probability, etc. (20 modules)
technical_indicator_calculation_helpers -- 50+ pure calculation helpers: RSI, MACD, Bollinger Bands, ATR, VWAP, Ichimoku, Parabolic SAR, KDJ, OBV, etc. Input your own price arrays.
feed_widgets -- Social & news data feeds: news, Twitter/X, YouTube, Reddit, podcasts, web search (Brave, Grok).
ask -- General news and market articles.
You can also bring your own data by uploading files to ALFS or fetching from external HTTP APIs within the runtime.
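The technical_indicator_calculation_helpers partition takes plain price arrays as input. As a rough illustration of that calling style, here is a minimal standalone EMA; the function name and signature are hypothetical, not the SDK's actual API (look those up via GET /api/v1/sdk/doc?name=...):

```javascript
// Illustrative EMA over a plain close-price array (hypothetical helper,
// not the SDK's real signature). alpha = 2 / (period + 1), seeded with
// the first close.
function ema(closes, period) {
  const alpha = 2 / (period + 1);
  const out = [];
  let prev;
  for (const close of closes) {
    prev = prev === undefined ? close : alpha * close + (1 - alpha) * prev;
    out.push(prev);
  }
  return out;
}
```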

4. Altra (Alva Trading Engine)


A feed-based event-driven backtesting engine for quantitative trading strategies. A trading strategy IS a feed: all output data (targets, portfolio, orders, equity, metrics) lives under a single feed's ALFS path. Altra supports historical backtesting and continuous live paper trading, with custom indicators, portfolio simulation, and performance analytics.

5. Deploy on Alva Cloud


Once your data analytics scripts and feeds are ready, deploy them as scheduled cronjobs on Alva Cloud. They run continuously on your chosen schedule (e.g. every hour, every day). Grant public access so anyone -- or any playbook page -- can read the data.

6. Build the Playbook Web App


After your data pipelines are deployed and producing data, build the playbook's web interface. Create HTML5 pages that read from Alva's data gateway and visualize the results. Follow the Alva Design System for styling, layout, and component guidelines.

7. Release


Three phases:
  1. Write HTML to ALFS: POST /api/v1/fs/write the playbook HTML to ~/playbooks/{name}/index.html.
  2. Call the release API: POST /api/v1/release/playbook -- creates DB records and uploads the HTML to the CDN. Returns a numeric playbook_id.
  3. Write ALFS files: using the returned numeric playbook_id, write the release files, draft files, and playbook.json to ALFS. See api-reference.md for details.
The playbook.json must include a type field ("dashboard" or "strategy") and a draft object. Omitting type causes wrong frontend routing; omitting draft causes the dashboard iframe to never load.
Once released, the playbook is accessible at https://yourusername.playbook.alva.ai/playbook-name/version/index.html -- ready to share with the world.

Detailed sub-documents (read these for in-depth reference):
api-reference.md -- Full REST API reference (filesystem, run, deploy, user info, time series paths)
jagent-runtime.md -- Writing jagent scripts: module system, built-in modules, async model, constraints
feed-sdk.md -- Feed SDK guide: creating data feeds, time series, upstreams, state management
altra-trading.md -- Altra backtesting engine: strategies, features, signals, testing, debugging
deployment.md -- Deploying scripts as cronjobs for scheduled execution
design-system.md -- Alva Design System: design tokens, colors, typography, font rules
design-widgets.md -- Widget design: chart cards, KPI cards, table cards, feed cards, layout grid
design-components.md -- Base component templates: dropdown, button, switch, modal, select, markdown
design-playbook-trading-strategy.md -- Trading strategy playbook guideline
adk.md -- Agent Development Kit: adk.agent() API, tool calling, ReAct loop, examples


Setup


All configuration is done via environment variables.
ALVA_API_KEY (required) -- Your API key (create and manage at alva.ai)
ALVA_ENDPOINT (optional) -- Alva API base URL; defaults to https://api-llm.prd.alva.ai if not set
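For example, in a shell profile (the key value shown here is a placeholder):

```shell
# Required: your Alva API key (placeholder value -- substitute your own)
export ALVA_API_KEY="your-api-key-here"
# Optional: override the API base URL (shown with its documented default)
export ALVA_ENDPOINT="https://api-llm.prd.alva.ai"
```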

Making API Requests


All API examples in this skill use HTTP notation (METHOD /path). Every request requires the X-Alva-Api-Key header unless marked (public, no auth).
Curl templates for reference:

Authenticated


curl -s -H "X-Alva-Api-Key: $ALVA_API_KEY" "$ALVA_ENDPOINT{path}"

Authenticated + JSON body


curl -s -H "X-Alva-Api-Key: $ALVA_API_KEY" -H "Content-Type: application/json" "$ALVA_ENDPOINT{path}" -d '{body}'

Public read (no API key, absolute path)


curl -s "$ALVA_ENDPOINT{path}"

Discovering User Info


Retrieve your user_id and username:
GET /api/v1/me
→ {"id":1,"username":"alice"}


Quick API Reference


See api-reference.md for full details.

Filesystem (/api/v1/fs/)

GET /api/v1/fs/read?path={path} -- Read file content (raw bytes) or time series data
POST /api/v1/fs/write -- Write file (raw body or JSON with a data field)
GET /api/v1/fs/stat?path={path} -- Get file/directory metadata
GET /api/v1/fs/readdir?path={path} -- List directory entries
POST /api/v1/fs/mkdir -- Create directory (recursive)
DELETE /api/v1/fs/remove?path={path} -- Remove file or directory
POST /api/v1/fs/rename -- Rename / move
POST /api/v1/fs/copy -- Copy file
POST /api/v1/fs/symlink -- Create symlink
GET /api/v1/fs/readlink?path={path} -- Read symlink target
POST /api/v1/fs/chmod -- Change permissions
POST /api/v1/fs/grant -- Grant read/write access to a path
POST /api/v1/fs/revoke -- Revoke access
Paths: ~/data/file.json (home-relative) or /alva/home/<username>/... (absolute). Public reads use absolute paths without an API key.
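The two path forms map onto each other mechanically; a sketch of the expansion (the helper name is hypothetical, and the API itself accepts both forms on authenticated calls):

```javascript
// Expand a home-relative ALFS path (~/...) to its absolute form for a given
// user. Illustrative client-side helper, not part of the Alva API.
function toAbsolutePath(path, username) {
  if (path.startsWith("~/")) {
    return `/alva/home/${username}/${path.slice(2)}`;
  }
  return path; // already absolute
}
```

For example, toAbsolutePath("~/data/file.json", "alice") yields "/alva/home/alice/data/file.json", the form required for public reads.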

Run (/api/v1/run)

POST /api/v1/run -- Execute JavaScript (inline code or entry_path to a script on the filesystem)
Deploy (/api/v1/deploy/)

POST /api/v1/deploy/cronjob -- Create a cronjob
GET /api/v1/deploy/cronjobs -- List cronjobs (paginated)
GET /api/v1/deploy/cronjob/:id -- Get cronjob details
PATCH /api/v1/deploy/cronjob/:id -- Update cronjob (name, cron, args)
DELETE /api/v1/deploy/cronjob/:id -- Delete cronjob
POST /api/v1/deploy/cronjob/:id/pause -- Pause cronjob
POST /api/v1/deploy/cronjob/:id/resume -- Resume cronjob

Release (/api/v1/release/)

POST /api/v1/release/feed -- Register a feed (DB record + link to cronjob task). Call after deploying the cronjob.
POST /api/v1/release/playbook -- Release a playbook for public hosting. Call after writing the playbook HTML.
Name uniqueness: the name in both releaseFeed and releasePlaybook must be unique within your user space. Use GET /api/v1/fs/readdir?path=~/feeds or GET /api/v1/fs/readdir?path=~/playbooks to check existing names before releasing.

SDK Documentation (/api/v1/sdk/)

GET /api/v1/sdk/doc?name={module_name} -- Get the full doc for a specific SDK module
GET /api/v1/sdk/partitions -- List all SDK partitions
GET /api/v1/sdk/partitions/:partition/summary -- Get one-line summaries of all modules in a partition
SDK retrieval flow: pick a partition from the index above → call /partitions/:partition/summary to see module summaries → call /sdk/doc?name=... to load the full doc for the chosen module.
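The retrieval flow can be sketched as two URL builders (helper names are illustrative; only the endpoint paths come from the table above):

```javascript
// Build the SDK documentation endpoints from the table above.
// Helper names are hypothetical; the paths are the documented ones.
function partitionSummaryPath(partition) {
  return `/api/v1/sdk/partitions/${encodeURIComponent(partition)}/summary`;
}

function sdkDocPath(moduleName) {
  return `/api/v1/sdk/doc?name=${encodeURIComponent(moduleName)}`;
}
```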

Trading Pair Search (/api/v1/trading-pairs/)

GET /api/v1/trading-pairs/search?q={q} -- Search trading pairs by base asset (fuzzy match)
Search before writing code to check which symbols/exchanges Alva supports. Supports exact match plus prefix fuzzy search by base asset or alias; use comma-separated queries for multiple searches.
GET /api/v1/trading-pairs/search?q=BTC,ETH
→ {"trading_pairs":[{"base":"BTC","quote":"USDT","symbol":"BINANCE_PERP_BTC_USDT","exchange":"binance","type":"crypto-perp","fee_rate":0.001,...},...]}
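A comma-separated multi-asset query can be assembled like this (the helper is hypothetical; the endpoint and q format are from the example above):

```javascript
// Build a multi-asset trading-pair search path from base-asset symbols.
// Each base is URL-encoded individually so the commas stay literal,
// matching the documented q=BTC,ETH form. Illustrative helper only.
function tradingPairSearchPath(bases) {
  return `/api/v1/trading-pairs/search?q=${bases.map(encodeURIComponent).join(",")}`;
}
```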

User Info


GET /api/v1/me -- Get the authenticated user's id and username


Runtime Modules Quick Reference


Scripts executed via /api/v1/run run in a V8 isolate. See jagent-runtime.md for full details.
alfs -- require("alfs") -- Filesystem (uses absolute paths /alva/home/<username>/...)
env -- require("env") -- userId, username, args from the request
net/http -- require("net/http") -- fetch(url, init) for async HTTP requests
@alva/algorithm -- require("@alva/algorithm") -- Statistics
@alva/feed -- require("@alva/feed") -- Feed SDK for persistent data pipelines + FeedAltra trading engine
@alva/adk -- require("@alva/adk") -- Agent SDK for LLM requests: agent() for LLM agents with tool calling
@test/suite -- require("@test/suite") -- Jest-style test framework (describe, it, expect, runTests)
SDKHub: 250+ data modules are available via require("@arrays/crypto/ohlcv:v1.0.0") etc. The version suffix is optional (defaults to v1.0.0). To discover function signatures and response shapes, use the SDK doc API (GET /api/v1/sdk/doc?name=...).
Key constraints: no top-level await (wrap the script in (async () => { ... })();); no Node.js builtins (fs, path, http); module exports are frozen.
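These constraints give every script the same shape; a minimal skeleton follows. Pure-JS stand-ins replace the Alva modules here so the sketch runs anywhere; on Alva Cloud the awaited value would come from an SDK call instead:

```javascript
// Minimal jagent-style script skeleton. Pure helpers may sit at top level;
// every await must live inside the async IIFE, since top-level await is
// rejected by the runtime.
function toCloses(bars) {
  return bars.map((b) => b.close);
}

(async () => {
  // On Alva Cloud this would be an SDK call via require("@arrays/...");
  // a resolved promise stands in so the sketch is self-contained.
  const bars = await Promise.resolve([{ close: 101 }, { close: 102 }]);
  const closes = toCloses(bars);
  // ...indicator math, feed appends, etc. would follow here...
})();
```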


Feed SDK Quick Reference


See feed-sdk.md for full details.
Feeds are persistent data pipelines that store time series data, readable via filesystem paths.
```javascript
const { Feed, feedPath, makeDoc, num } = require("@alva/feed");
const { getCryptoKline } = require("@arrays/crypto/ohlcv:v1.0.0");
const { indicators } = require("@alva/algorithm");

const feed = new Feed({ path: feedPath("btc-ema") });

feed.def("metrics", {
  prices: makeDoc("BTC Prices", "Close + EMA10", [num("close"), num("ema10")]),
});

(async () => {
  await feed.run(async (ctx) => {
    const raw = await ctx.kv.load("lastDate");
    const lastDateMs = raw ? Number(raw) : 0;

    const now = Math.floor(Date.now() / 1000);
    const start =
      lastDateMs > 0 ? Math.floor(lastDateMs / 1000) : now - 30 * 86400;

    const bars = getCryptoKline({
      symbol: "BTCUSDT",
      start_time: start,
      end_time: now,
      interval: "1h",
    })
      .response.data.slice()
      .reverse();
    const closes = bars.map((b) => b.close);
    const ema10 = indicators.ema(closes, { period: 10 });

    const records = bars
      .map((b, i) => ({
        date: b.date,
        close: b.close,
        ema10: ema10[i] || null,
      }))
      .filter((r) => r.date > lastDateMs);

    if (records.length > 0) {
      await ctx.self.ts("metrics", "prices").append(records);
      await ctx.kv.put("lastDate", String(records[records.length - 1].date));
    }
  });
})();
```
Feed output is readable at ~/feeds/btc-ema/v1/data/metrics/prices/@last/100.
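That read path follows a fixed layout, which can be sketched as a small builder (the helper name is hypothetical; the layout is the one shown above):

```javascript
// Build the time-series read path for a feed output. Illustrative helper;
// layout: ~/feeds/<name>/v<major>/data/<group>/<doc>/@last/<n>
function feedReadPath(name, major, group, doc, lastN) {
  return `~/feeds/${name}/v${major}/data/${group}/${doc}/@last/${lastN}`;
}
```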


Data Modeling Patterns


All data produced by a feed should use feed.def() + ctx.self.ts().append(). Do not use alfs.writeFile() for feed output data.
Pattern A -- Snapshot (latest-wins): For data that represents current state (company detail, ratings, price target consensus). Use start-of-day as the date so re-runs overwrite.
```javascript
const today = new Date();
today.setHours(0, 0, 0, 0);
await ctx.self
  .ts("info", "company")
  .append([
    { date: today.getTime(), name: company.name, sector: company.sector },
  ]);
```
Read @last/1 for the current snapshot, @last/30 for 30-day history.
Pattern B -- Event log: For timestamped events (insider trades, news, senator trades). Each event uses its natural date. Same-date records are auto-grouped.
```javascript
const records = trades.map((t) => ({
  date: new Date(t.transactionDate).getTime(),
  name: t.name,
  type: t.type,
  shares: t.shares,
}));
await ctx.self.ts("activity", "insiderTrades").append(records);
```
Pattern C -- Tabular (versioned batch): For data where the whole set refreshes each run (top holders, EPS estimates). Stamp all records with the same run timestamp; same-date grouping stores them as a batch.
```javascript
const now = Date.now();
const records = holdings.map((h, i) => ({
  date: now,
  rank: i + 1,
  name: h.name,
  marketValue: h.value,
}));
await ctx.self.ts("research", "institutions").append(records);
```
Data Type | Pattern | Date Strategy | Read Query
OHLCV, indicators | Time series (standard) | Bar timestamp | @last/252
Company detail, ratings | Snapshot (A) | Start of day | @last/1
Insider trades, news | Event log (B) | Event timestamp | @last/50
Holdings, estimates | Tabular (C) | Run timestamp | @last/N
See feed-sdk.md for detailed data modeling examples and deduplication behavior.
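The three date strategies reduce to small pure functions. A sketch (illustrative only; note it truncates to UTC midnight for determinism, whereas the Pattern A example above uses local time via setHours):

```javascript
// Pattern A: snapshot date -- truncate to start of (UTC) day, so a re-run
// on the same day produces the same date and overwrites the prior record.
function snapshotDate(ms) {
  return ms - (ms % 86400000); // 86400000 ms per day
}

// Pattern B: event date -- the event's own timestamp, unchanged.
function eventDate(ms) {
  return ms;
}

// Pattern C: batch dates -- one shared run timestamp for the whole
// refreshed set, so same-date grouping stores it as a single batch.
function batchDates(count, runMs) {
  return Array.from({ length: count }, () => runMs);
}
```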


Deploying Feeds


Every feed follows a 6-step lifecycle:
  1. Write -- define the schema + incremental logic with ctx.kv
  2. Upload -- write the script to ~/feeds/<name>/v1/src/index.js
  3. Test -- POST /api/v1/run with entry_path to verify output
  4. Grant -- make the feed public via POST /api/v1/fs/grant
  5. Deploy -- POST /api/v1/deploy/cronjob for scheduled execution
  6. Release -- POST /api/v1/release/feed to register the feed in the database (requires the task_id from the deploy step)
Data Type | Recommended Schedule | Rationale
Stock OHLCV + technicals | 0 */4 * * * (every 4h) | Markets update during trading hours
Company detail, price targets | 0 8 * * * (daily 8am) | Changes infrequently
Insider/senator trades | 0 8 * * * (daily 8am) | SEC filings are daily
Earnings estimates | 0 8 * * * (daily 8am) | Updated periodically
See deployment.md for the full deployment guide and API reference.
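The schedules above are standard 5-field cron expressions (minute, hour, day-of-month, month, day-of-week). A small parser sketch, just to make the fields explicit (no validation beyond arity):

```javascript
// Split a 5-field cron expression into named fields.
// Illustrative only -- checks arity, not field syntax.
function parseCron(expr) {
  const parts = expr.trim().split(/\s+/);
  if (parts.length !== 5) throw new Error("expected 5 cron fields");
  const [minute, hour, dayOfMonth, month, dayOfWeek] = parts;
  return { minute, hour, dayOfMonth, month, dayOfWeek };
}
```

For example, "0 */4 * * *" parses to minute "0" and hour "*/4": on the hour, every 4 hours.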


Debugging Feeds


Resetting Feed Data (development only)


During development, use the REST API to clear stale or incorrect data. Do not use this in production.

Clear a specific time series output


DELETE /api/v1/fs/remove?path=~/feeds/my-feed/v1/data/market/ohlcv&recursive=true

Clear an entire group (all outputs under "market")


DELETE /api/v1/fs/remove?path=~/feeds/my-feed/v1/data/market&recursive=true

Full reset: clear ALL data + KV state (removes the data mount, re-created on next run)


DELETE /api/v1/fs/remove?path=~/feeds/my-feed/v1/data&recursive=true

Inline Debug Snippets


Test SDK shapes before building a full feed:
POST /api/v1/run
{"code":"const { getCryptoKline } = require(\"@arrays/crypto/ohlcv:v1.0.0\"); JSON.stringify(Object.keys(getCryptoKline({ symbol: \"BTCUSDT\", start_time: 0, end_time: 0, interval: \"1h\" })));"}


Altra Trading Engine Quick Reference


See altra-trading.md for full details.
Altra is a feed-based event-driven backtesting engine. A trading strategy IS a feed: all output data lives under a single ALFS path. Decisions execute at bar CLOSE.
```javascript
const { createOHLCVProvider } = require("@arrays/data/ohlcv-provider:v1.0.0");
const { FeedAltraModule } = require("@alva/feed");
const { FeedAltra, e, Amount } = FeedAltraModule;

const altra = new FeedAltra(
  {
    path: "~/feeds/my-strategy/v1",
    startDate: Date.parse("2025-01-01T00:00:00Z"),
    portfolioOptions: { initialCash: 1_000_000 },
    simOptions: { simTick: "1min", feeRate: 0.001 },
    perfOptions: { timezone: "UTC", marketType: "crypto" },
  },
  createOHLCVProvider(),
);

const dg = altra.getDataGraph();
dg.registerOhlcv("BINANCE_SPOT_BTC_USDT", "1d");
dg.registerFeature({ name: "rsi" /* ... */ });

altra.setStrategy(strategyFn, {
  trigger: { type: "events", expr: e.ohlcv("BINANCE_SPOT_BTC_USDT", "1d") },
  inputConfig: {
    ohlcvs: [{ id: { pair: "BINANCE_SPOT_BTC_USDT", interval: "1d" } }],
    features: [{ id: "rsi" }],
  },
  initialState: {},
});

(async () => {
  await altra.run(Date.now());
})();
```


Deployment Quick Reference


See deployment.md for full details.
Deploy feed scripts or tasks as cronjobs for scheduled execution:
POST /api/v1/deploy/cronjob
{"path":"~/feeds/btc-ema/v1/src/index.js","cron_expression":"0 */4 * * *","name":"BTC EMA Update"}
Cronjobs execute the script via the same jagent runtime as /api/v1/run. Max 20 cronjobs per user; minimum interval: 1 minute.
After deploying a cronjob, register the feed and release the playbook for public hosting. The playbook HTML must already be written to ALFS at ~/playbooks/{name}/index.html via fs/write before releasing.
Important: feed names and playbook names must be unique within your user space. Before creating a new feed or playbook, use GET /api/v1/fs/readdir?path=~/feeds or GET /api/v1/fs/readdir?path=~/playbooks to check for existing names and avoid conflicts.
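Given readdir results, the uniqueness check is a plain membership test. A sketch, assuming you have already extracted the entry names from the readdir response into an array (the helper name is hypothetical):

```javascript
// Check a candidate feed/playbook name against existing directory entries.
// Assumes existingNames is an array of entry names taken from a readdir
// response; illustrative helper only.
function isNameAvailable(existingNames, candidate) {
  return !existingNames.includes(candidate);
}
```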

1. Release feed (register in DB, link to cronjob)


POST /api/v1/release/feed {"name":"btc-ema","version":"1.0.0","task_id":42} → {"feed_id":100,"name":"btc-ema","feed_major":1}

2. Release playbook (uploads HTML to CDN, returns numeric playbook_id)


POST /api/v1/release/playbook {"name":"btc-dashboard","version":"v1.0.0","description":"BTC market dashboard with price and technicals","feeds":[{"feed_id":100}]} → {"playbook_id":99,"version":"v1.0.0"}

3. Write release layout.html (CDN URL, using numeric playbook_id from step 2)


POST /api/v1/fs/write?path=~/playbooks/99/releases/v1.0.0/layout.html&mkdir_parents=true Content-Type: application/octet-stream Body: https://alice.playbook.alva.ai/btc-dashboard/v1.0.0/index.html

4. Write draft layout.html (required for frontend dashboard iframe rendering)


POST /api/v1/fs/write?path=~/playbooks/99/draft/layout.html&mkdir_parents=true Content-Type: application/octet-stream Body: https://alice.playbook.alva.ai/btc-dashboard/v1.0.0/index.html

5. Write playbook.json (must include "type" and "draft" fields)


POST /api/v1/fs/write Content-Type: application/json {"path":"~/playbooks/99/playbook.json","data":"{\"playbook_id\":99,\"owner_uid\":\"1\",\"type\":\"dashboard\",\"name\":\"btc-dashboard\",\"created_at\":\"2026-03-12T00:00:00Z\",\"updated_at\":\"2026-03-12T00:00:00Z\",\"draft\":{\"playbook_version_id\":0,\"updated_at\":\"2026-03-12T00:00:00Z\",\"layout_path\":\"./draft/layout.html\",\"feeds_dir\":\"./draft/feeds/\",\"feeds\":[{\"feed_id\":100,\"feed_major\":1}]},\"releases\":[{\"version\":\"v1.0.0\",\"playbook_version_id\":0,\"created_at\":\"2026-03-12T00:00:00Z\",\"layout_path\":\"./releases/v1.0.0/layout.html\",\"feeds_dir\":\"./releases/v1.0.0/feeds/\",\"feeds\":[{\"feed_id\":100,\"feed_major\":1}]}],\"latest_release\":{\"version\":\"v1.0.0\",\"playbook_version_id\":0,\"created_at\":\"2026-03-12T00:00:00Z\",\"layout_path\":\"./releases/v1.0.0/layout.html\",\"feeds_dir\":\"./releases/v1.0.0/feeds/\",\"feeds\":[{\"feed_id\":100,\"feed_major\":1}]}}","mkdir_parents":true}

Note that `data` carries the playbook.json content as a string, so its inner quotes must be escaped for the request body to be valid JSON.

The playbook will be accessible at `https://alice.playbook.alva.ai/btc-dashboard/v1.0.0/index.html`.
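Step 5's playbook.json is easier to get right when generated rather than hand-written, since the required `type` and `draft` fields cannot be forgotten. An illustrative builder — the helper name is hypothetical; the field names mirror the example request above:

```javascript
// Illustrative: assemble a minimal playbook.json object with the
// mandatory "type" and "draft" fields. Not part of the Alva API.
function buildPlaybookJson({ playbookId, ownerUid, name, version, feeds, now }) {
  const release = {
    version,
    playbook_version_id: 0,
    created_at: now,
    layout_path: `./releases/${version}/layout.html`,
    feeds_dir: `./releases/${version}/feeds/`,
    feeds,
  };
  return {
    playbook_id: playbookId,
    owner_uid: ownerUid,
    type: "dashboard", // required: omitting it defaults to "strategy"
    name,
    created_at: now,
    updated_at: now,
    draft: {           // required for dashboard iframe rendering
      playbook_version_id: 0,
      updated_at: now,
      layout_path: "./draft/layout.html",
      feeds_dir: "./draft/feeds/",
      feeds,
    },
    releases: [release],
    latest_release: release,
  };
}

const pb = buildPlaybookJson({
  playbookId: 99,
  ownerUid: "1",
  name: "btc-dashboard",
  version: "v1.0.0",
  feeds: [{ feed_id: 100, feed_major: 1 }],
  now: "2026-03-12T00:00:00Z",
});
// JSON.stringify(pb) is the value of the "data" field in the fs/write call.
```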

---

Alva Design System


All Alva playbook pages, dashboards, and widgets must follow the Alva Design System. The system defines design tokens (colors, spacing, shadows), typography rules, and component/widget templates.
Key rules:
  • Font: Delight (Regular 400, Medium 500). No Semibold/Bold. Font files: Delight-Regular.ttf, Delight-Medium.ttf
  • Page background: `--b0-page` (#ffffff)
  • Semantic colors: `--main-m3` (bullish/green), `--main-m4` (bearish/red), `--main-m1` (Alva theme/teal)
  • Charts: Use ECharts. Select colors from the chart palette in design-system.md. Grey only when >= 3 series.
  • Widgets: No borders on widget cards. Chart cards use dotted background; table card has no background; other cards use `--grey-g01`.
  • Grid: 8-column grid (web), 4-column grid (mobile). Column spans must sum to 8 per row.

Reference documents (read for detailed specs when building playbook web apps):

| When | Read |
| --- | --- |
| Design tokens, typography, font rules, general guidelines | design-system.md |
| Widget types, chart/KPI/table/feed cards, grid layout | design-widgets.md |
| Component templates (button, dropdown, modal, select, switch, markdown) | design-components.md |
| Trading strategy playbook layout, sections, and content guidelines | design-playbook-trading-strategy.md |
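The grid rule (column spans summing to 8 per row on web) can be checked mechanically. A sketch — the validator is illustrative, not part of any Alva tooling:

```javascript
// Illustrative: check that the column spans of the widgets in each row
// sum to exactly the grid width (8 on web, 4 on mobile). Each row is
// an array of span numbers.
function validateGridRows(rows, columns = 8) {
  return rows.every(
    (row) => row.reduce((sum, span) => sum + span, 0) === columns
  );
}

validateGridRows([[4, 4], [2, 3, 3], [8]]); // → true
validateGridRows([[4, 3]]);                 // → false: spans sum to 7
validateGridRows([[2, 2], [4]], 4);         // → true (mobile grid)
```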


Filesystem Layout Convention


| Path | Purpose |
| --- | --- |
| `~/tasks/<name>/src/` | Task source code |
| `~/feeds/<name>/v1/src/` | Feed script source code |
| `~/feeds/<name>/v1/data/` | Feed synth mount (auto-created by Feed SDK) |
| `~/playbooks/<name>/` | Playbook web app assets |
| `~/data/` | General data storage |
| `~/library/` | Shared code modules |

Prefer using the Feed SDK for all data organization, including point-in-time snapshots. Store snapshots as single-record time series rather than raw JSON files via `alfs.writeFile()`. This keeps all data queryable through a single consistent read pattern (`@last`, `@range`, etc.).
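The snapshot recommendation can be illustrated with an in-memory stand-in for a feed group. The `MockFeed` class below is purely illustrative, not the real Feed SDK; it only models why single-record time series keep the `@last` / `@range` read pattern uniform:

```javascript
// Illustrative stand-in for a feed group: each snapshot is appended as
// a single timestamped record, so the same time-series reads work for
// snapshots and for regular series alike.
class MockFeed {
  constructor() { this.records = []; }
  append(record) { this.records.push(record); }  // one snapshot = one record
  last(n) { return this.records.slice(-n); }     // models @last(n)
  range(from, to) {                              // models @range(from, to)
    return this.records.filter((r) => r.date >= from && r.date <= to);
  }
}

const snapshots = new MockFeed();
snapshots.append({ date: "2026-03-10", tvl: 118.2 });
snapshots.append({ date: "2026-03-11", tvl: 121.7 });
snapshots.append({ date: "2026-03-12", tvl: 119.9 });

snapshots.last(1);                           // → [{ date: "2026-03-12", tvl: 119.9 }]
snapshots.range("2026-03-10", "2026-03-11"); // → the two oldest records
```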


Common Pitfalls


  • `@last` returns chronological (oldest-first) order, consistent with `@first` and `@range`. No manual sorting needed.
  • Time series reads return flat JSON records. Paths with `@last`, `@range`, etc. return JSON arrays of flat records like `[{"date":...,"close":...,"ema10":...}]`. Regular paths return file content with `Content-Type: application/octet-stream`.
  • `last(N)` limits unique timestamps, not records. When multiple records share a timestamp (grouped via `append()`), auto-flatten may return more than N individual records.
  • The `data/` in feed paths is the synth mount. `feedPath("my-feed")` gives `~/feeds/my-feed/v1`, and the Feed SDK mounts storage at `<feedPath>/data/`. Don't name your group `"data"` or you'll get `data/data/...`.
  • Public reads require absolute paths. Unauthenticated reads must use `/alva/home/<username>/...` (not `~/...`). Discover your username via `GET /api/v1/me`.
  • Top-level `await` is not supported. Wrap async code in `(async () => { ... })();`.
  • `require("alfs")` uses absolute paths. Inside the V8 runtime, `alfs.readFile()` needs full paths like `/alva/home/alice/...`. Get your username from `require("env").username`.
  • No Node.js builtins. `require("fs")`, `require("path")`, `require("http")` do not exist. Use `require("alfs")` for files, `require("net/http")` for HTTP.
  • Altra `run()` is async. `FeedAltra.run()` returns a `Promise<RunResult>`. Always `await` it: `const result = await altra.run(endDate);`
  • Altra decisions happen at bar CLOSE. Feature timestamps must use `bar.endTime`, not `bar.date`. Using `bar.date` introduces look-ahead bias.
  • Altra lookback: feature vs strategy. Feature lookback controls how many bars the feature computation sees. Strategy lookback controls how many feature outputs the strategy function sees. They are independent.
  • Cronjob path must point to an existing script. The deploy API validates that `entry_path` exists via filesystem stat before creating the cronjob.
  • `playbook.json` must include `type` and `draft`; draft ALFS files are required. Omitting `type` defaults to "strategy" (wrong routing for dashboards). Omitting `draft` or the `draft/layout.html` file causes the dashboard iframe to never load.
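The `last(N)` pitfall is easy to reproduce. A self-contained sketch of the documented behavior — the `lastN` function below models the semantics and is not the SDK itself:

```javascript
// Illustrative model of the documented last(N) semantics: N limits
// unique timestamps, and records sharing a timestamp (grouped via
// append()) are flattened, so more than N records can come back.
function lastN(records, n) {
  const timestamps = [...new Set(records.map((r) => r.ts))].sort((a, b) => a - b);
  const kept = new Set(timestamps.slice(-n)); // last N unique timestamps
  return records.filter((r) => kept.has(r.ts));
}

const records = [
  { ts: 1, symbol: "BTC" },
  { ts: 2, symbol: "BTC" },
  { ts: 2, symbol: "ETH" }, // same timestamp as the record above
];
lastN(records, 1); // → 2 records, not 1: both share timestamp 2
```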


Resource Limits


| Resource | Limit |
| --- | --- |
| Write payload | 10 MB max per request |
| HTTP response body | 128 MB max |
| Max cronjobs per user | 20 |
| Min cron interval | 1 minute |


Error Responses


All errors return:
`{"error":{"code":"...","message":"..."}}`

| HTTP Status | Code | Meaning |
| --- | --- | --- |
| 400 | INVALID_ARGUMENT | Bad request or invalid path |
| 401 | UNAUTHENTICATED | Missing or invalid API key |
| 403 | PERMISSION_DENIED | Access denied |
| 404 | NOT_FOUND | File/directory not found |
| 429 | RATE_LIMITED | Rate limit / runner pool exhausted |
| 500 | INTERNAL | Server error |
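Because every error carries a structured `code`, clients can branch on it rather than on the HTTP status alone. An illustrative classifier — the retry policy here is an assumption, not something the API prescribes:

```javascript
// Illustrative: decide whether a failed request is worth retrying,
// based on the documented error codes. Treating only RATE_LIMITED and
// INTERNAL as transient is an assumed policy, not an API guarantee.
function isRetryable(errorBody) {
  const code = errorBody?.error?.code;
  return code === "RATE_LIMITED" || code === "INTERNAL";
}

isRetryable({ error: { code: "RATE_LIMITED", message: "runner pool exhausted" } }); // → true
isRetryable({ error: { code: "NOT_FOUND", message: "file not found" } });           // → false
```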