cost-booster-edit
Compare original and translation side by side
🇺🇸
Original
English🇨🇳
Translation
ChineseCost Booster Edit
Cost Booster Edit
Direct wrapper around (npm v0.2.x, exposed via ). Use when a transform is already classified as Tier 1 eligible — recommends whether; this skill executes.
agent-booster.apply()agent-boosteragentic-flow/agent-boostercost-booster-route直接封装(npm包 v0.2.x,通过暴露)。当转换已被归类为符合Tier 1条件时使用——负责判断是否适用,本技能负责执行转换。
agent-booster.apply()agent-boosteragentic-flow/agent-boostercost-booster-routeWhen to use
使用场景
- Bulk transforms across many files (,
var → const,add-types,remove-console,add-error-handling,async-await).add-logging - Any simple, structural edit where an LLM would otherwise be called and billed.
- Inside CI pipelines where determinism + zero-cost matter more than naturalness.
Do NOT use when the transform requires reasoning about intent, naming, or cross-file context — those are Tier 2/3 jobs.
- 多文件批量转换(、
var → const、add-types、remove-console、add-error-handling、async-await)。add-logging - 任何原本需要调用LLM并产生费用的简单结构化编辑场景。
- 在CI流水线中,确定性与零成本比自然语义更重要的场景。
请勿使用场景:转换需要推理意图、命名或跨文件上下文——这些属于Tier 2/3任务。
Steps
步骤
-
Take inputs —(one of the 6 booster intents) and
intentpath.file -
Read the source to a variable, derive the intendedtext from the intent (caller supplies).
edit -
Invoke — run from anywhere underso
v3/resolves:agent-boosterbashnode --input-type=module -e ' import("agent-booster") .then(async ({ AgentBooster }) => { const booster = new AgentBooster(); const r = await booster.apply({ code: process.argv[1], edit: process.argv[2], language: process.argv[3] || "javascript", }); console.log(JSON.stringify({ success: r.success, output: r.output, latency: r.latency, confidence: r.confidence, strategy: r.strategy, tokens: r.tokens, })); }) .catch(e => console.log(JSON.stringify({ success: false, error: String(e.message) }))); ' -- "$CODE" "$EDIT" "$LANG" -
Check confidence — default threshold is. Below that, fail closed: do NOT write the file; report and escalate to Tier 2/3.
0.5 -
Write back thefield if
output.success && confidence >= 0.5 -
Persist outcome —. Feed the routing learner via
memory_store --namespace cost-tracking --key "booster-edit-..." --value '{"intent":..., "latency":..., "confidence":..., "strategy":..., "applied":true}'(use thehooks_model-outcomeskill's step 8).cost-optimize
-
接收输入——(6种booster意图之一)和
intent路径。file -
读取源码到变量,根据intent推导目标文本(由调用方提供)。
edit -
调用执行——在目录下任意位置运行,确保
v3/可以被解析:agent-boosterbashnode --input-type=module -e ' import("agent-booster") .then(async ({ AgentBooster }) => { const booster = new AgentBooster(); const r = await booster.apply({ code: process.argv[1], edit: process.argv[2], language: process.argv[3] || "javascript", }); console.log(JSON.stringify({ success: r.success, output: r.output, latency: r.latency, confidence: r.confidence, strategy: r.strategy, tokens: r.tokens, })); }) .catch(e => console.log(JSON.stringify({ success: false, error: String(e.message) }))); ' -- "$CODE" "$EDIT" "$LANG" -
检查置信度——默认阈值为。低于该阈值时,终止操作:请勿写入文件,上报并升级至Tier 2/3任务。
0.5 -
写入结果——若,将
success && confidence >= 0.5字段内容写回文件。output -
保存结果——。通过
memory_store --namespace cost-tracking --key "booster-edit-..." --value '{"intent":..., "latency":..., "confidence":..., "strategy":..., "applied":true}'将结果反馈给路由学习器(参考hooks_model-outcome技能的步骤8)。cost-optimize
Measured benchmark (2026-05-04, this checkout)
实测基准(2026-05-04,当前版本)
5 representative intents run through :
AgentBooster.apply()| intent | latency (ms) | wall (ms) | confidence | strategy | success |
|---|---|---|---|---|---|
| var-to-const | 5 | 5 | 0.65 | fuzzy_replace | true |
| add-types | 1 | 1 | 0.64 | fuzzy_replace | true |
| remove-console | 0 | 0 | 0.70 | fuzzy_replace | true |
| add-error-handling | 0 | 0 | 0.85 | exact_replace | true |
| async-await | 0 | 0 | 0.85 | exact_replace | true |
Avg measured latency ≈ 1.2 ms. All 5 above the default 0.5 confidence threshold. See for the LLM-baseline comparison.
docs/benchmarks/0002-baseline.md5种代表性意图通过运行的结果:
AgentBooster.apply()| intent | 延迟 (ms) | 实际耗时 (ms) | 置信度 | 策略 | 成功 |
|---|---|---|---|---|---|
| var-to-const | 5 | 5 | 0.65 | fuzzy_replace | true |
| add-types | 1 | 1 | 0.64 | fuzzy_replace | true |
| remove-console | 0 | 0 | 0.70 | fuzzy_replace | true |
| add-error-handling | 0 | 0 | 0.85 | exact_replace | true |
| async-await | 0 | 0 | 0.85 | exact_replace | true |
平均实测延迟≈1.2毫秒。上述5种意图的置信度均高于默认阈值0.5。LLM基准对比请参考。
docs/benchmarks/0002-baseline.mdWhat's verified locally
本地验证项
| Claim | Status here |
|---|---|
| 100% win rate | Verified — 12/12 on |
| Sub-millisecond latency | Verified — avg 0.67 ms, p50 0 ms, p99 6 ms, max 6 ms. |
| $0 per edit | Verified structurally — no API call, no token billing. |
| Deterministic AST-based merge | Verified — same inputs reproduce the same |
| Confidence ≥ 0.5 ⇒ correct | Verified on this corpus — 12/12 above 0.5 (min 0.551), all correct. |
| Verified — exceeded against every tier: 1000.9× vs Gemini 2.0 Flash, 1838.7× vs Claude Sonnet 4.6, 2634.1× vs Claude Opus 4.7. Run |
| Cost saved per edit | Measured: $0.000020 vs Gemini, $0.000722 vs Sonnet 4.6, $0.004720 vs Opus 4.7 (the booster side is $0 in all cases). |
| Win parity with frontier LLMs | Verified — Booster, Gemini 2.0 Flash, Sonnet 4.6, Opus 4.7 all scored 12/12 on this corpus. Booster matches LLM accuracy structurally for deterministic transforms. |
To extend: add cases to , run (or with ), commit . Smoke step 23 fails the build if win rate drops below 0.80.
bench/booster-corpus.json( cd v3 && node ../plugins/ruflo-cost-tracker/scripts/bench.mjs )BENCH_LLM_BASELINE=1runs/latest.jsonOverride the LLM model: (when wired against ) or for a reasoning-model comparison. Pricing flags: , .
BENCH_LLM_MODEL='claude-sonnet-4'api.anthropic.comBENCH_LLM_MODEL='models/gemini-2.5-flash'BENCH_LLM_PRICE_INBENCH_LLM_PRICE_OUTfuzzy_replaceexact_replace| 声明 | 当前验证状态 |
|---|---|
| 100%成功率 | 已验证 — 在 |
| 亚毫秒级延迟 | 已验证 — 平均0.67毫秒,p50为0毫秒,p99为6毫秒,最大值6毫秒。 |
| 每次编辑零成本 | 结构验证通过 — 无API调用,无token计费。 |
| 基于AST的确定性合并 | 已验证 — 相同输入会生成相同的 |
| 置信度≥0.5即正确 | 在当前测试集上已验证 — 12项测试的置信度均高于0.5(最低0.551),全部正确。 |
| 比LLM快350倍 | 已验证——远超所有LLM层级:比Gemini 2.0 Flash快1000.9倍,比Claude Sonnet 4.6快1838.7倍,比Claude Opus 4.7快2634.1倍。运行 |
| 每次编辑节省的成本 | 已测算:对比Gemini节省$0.000020,对比Sonnet 4.6节省$0.000722,对比Opus 4.7节省$0.004720(Booster侧均为$0)。 |
| 与前沿LLM准确率持平 | 已验证 — Booster、Gemini 2.0 Flash、Sonnet 4.6、Opus 4.7在该测试集上均获得12/12的成绩。对于确定性转换,Booster在结构准确性上与LLM持平。 |
扩展方法:在中添加测试用例,运行(或添加参数),提交。若成功率低于0.80,冒烟测试步骤23会导致构建失败。
bench/booster-corpus.json( cd v3 && node ../plugins/ruflo-cost-tracker/scripts/bench.mjs )BENCH_LLM_BASELINE=1runs/latest.json覆盖LLM模型:(对接时)或,用于推理模型对比。定价参数:、。
BENCH_LLM_MODEL='claude-sonnet-4'api.anthropic.comBENCH_LLM_MODEL='models/gemini-2.5-flash'BENCH_LLM_PRICE_INBENCH_LLM_PRICE_OUTfuzzy_replaceexact_replaceCross-references
交叉引用
ADR-0002 §"Decision 1" (route classifier) and §"Riskiest assumption" (Bash-shelled invocation) · (classifier-side companion) · npm README (3-mode install, MCP / npm / HTTP).
cost-booster-routeagent-boosterADR-0002 §"Decision 1"(路由分类器)和§"Riskiest assumption"(Bash shell调用) · (分类器侧配套工具) · npm README(三种安装模式:MCP / npm / HTTP)。
cost-booster-routeagent-booster