# AgentHub — Multi-Agent Collaboration
Spawn N parallel AI agents that compete on the same task. Each agent works in an isolated git worktree. The coordinator evaluates results and merges the winner.
## Slash Commands
| Command | Description |
|---|---|
| `/hub:init` | Create a new collaboration session — task, agent count, eval criteria |
| `/hub:spawn` | Launch N parallel subagents in isolated worktrees |
| `/hub:status` | Show DAG state, agent progress, branch status |
| `/hub:eval` | Rank agent results by metric or LLM judge |
| `/hub:merge` | Merge winning branch, archive losers |
| | Read/write the agent message board |
| | One-shot lifecycle: init → baseline → spawn → eval → merge |
## Agent Templates
When spawning with `--template`, agents follow a predefined iteration pattern:

| Template | Pattern | Use Case |
|---|---|---|
| | Edit → eval → keep/discard → repeat x10 | Performance, latency, size |
| | Restructure → test → iterate until green | Code quality, tech debt |
| | Write tests → measure coverage → repeat | Test coverage gaps |
| | Reproduce → diagnose → fix → verify | Bug fix approaches |

Templates are defined in `references/agent-templates.md`.

## When This Skill Activates
Trigger phrases:
- "try multiple approaches"
- "have agents compete"
- "parallel optimization"
- "spawn N agents"
- "compare different solutions"
- "fan-out" or "tournament"
- "generate content variations"
- "compare different drafts"
- "A/B test copy"
- "explore multiple strategies"
## Coordinator Protocol
The main Claude Code session is the coordinator. It follows this lifecycle:

```
INIT → DISPATCH → MONITOR → EVALUATE → MERGE
```

### 1. Init

Run `/hub:init` to create a session. This generates:

- `.agenthub/sessions/{session-id}/config.yaml` — task config
- `.agenthub/sessions/{session-id}/state.json` — state machine
- `.agenthub/board/` — message board channels
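As a rough sketch, the generated `config.yaml` might look like this — the field names below are illustrative assumptions, not the actual schema:

```yaml
# Hypothetical shape of config.yaml — field names are assumptions
task: "Reduce p50 latency of the search endpoint"   # example task
agent_count: 4
evaluation:
  mode: hybrid                  # metric | judge | hybrid
  eval_cmd: "pytest bench.py --json"
  metric: p50_ms
  direction: lower
```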
### 2. Dispatch

Run `/hub:spawn` to launch agents. For each agent 1..N:

- Post task assignment to `.agenthub/board/dispatch/`
- Spawn via Agent tool with `isolation: "worktree"`
- All agents launched in a single message (parallel)
### 3. Monitor

Run `/hub:status` to check progress:

- `dag_analyzer.py --status --session {id}` shows branch state
- Board channel `progress/` has agent updates
### 4. Evaluate

Run `/hub:eval` to rank results:

- Metric mode: run eval command in each worktree, parse numeric result
- Judge mode: read diffs, coordinator ranks by quality
- Hybrid: metric first, LLM judge for ties
### 5. Merge

Run `/hub:merge` to finalize:

- `git merge --no-ff` winner into base branch
- Tag losers: `git tag hub/archive/{session}/agent-{i}`
- Clean up worktrees
- Post merge summary to board
## Agent Protocol

Each subagent receives this prompt pattern:

```
You are agent-{i} in hub session {session-id}.

Your task: {task description}

Instructions:
1. Read your assignment at .agenthub/board/dispatch/{seq}-agent-{i}.md
2. Work in your worktree — make changes, run tests, iterate
3. Commit all changes with descriptive messages
4. Write your result summary to .agenthub/board/results/agent-{i}-result.md
5. Exit when done
```

Agents do NOT see each other's work. They do NOT communicate with each other. They only write to the board for the coordinator to read.
## DAG Model

### Branch Naming

`hub/{session-id}/agent-{N}/attempt-{M}`

- Session ID: timestamp-based (`YYYYMMDD-HHMMSS`)
- Agent N: sequential (1 to agent-count)
- Attempt M: increments on retry (usually 1)
### Frontier Detection

Frontier = branch tips with no child branches. Equivalent to AgentHub's "leaves" query.

```bash
python scripts/dag_analyzer.py --frontier --session {id}
```

### Immutability
The DAG is append-only:
- Never rebase or force-push agent branches
- Never delete commits (only branch refs after archival)
- Every approach preserved via git tags
## Message Board

Location: `.agenthub/board/`

### Channels

| Channel | Writer | Reader | Purpose |
|---|---|---|---|
| `dispatch/` | Coordinator | Agents | Task assignments |
| `progress/` | Agents | Coordinator | Status updates |
| `results/` | Agents + Coordinator | All | Final results + merge summary |
### Post Format

```markdown
---
author: agent-1
timestamp: 2026-03-17T14:30:22Z
channel: results
parent: null
---
# Result Summary

- Approach: Replaced O(n²) sort with hash map
- Files changed: 3
- Metric: 142ms (baseline: 180ms, delta: -38ms)
- Confidence: High — all tests pass
```
### Board Rules

- Append-only: never edit or delete posts
- Unique filenames: `{seq:03d}-{author}-{timestamp}.md`
- YAML frontmatter required on all posts
## Evaluation Modes

### Metric-Based

Best for: benchmarks, test pass rates, file sizes, response times.

```bash
python scripts/result_ranker.py --session {id} \
  --eval-cmd "pytest bench.py --json" \
  --metric p50_ms --direction lower
```

The ranker runs the eval command in each agent's worktree directory and parses the metric from stdout.
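In essence, the ranker does something like this — a sketch of the assumed behavior, not the real `result_ranker.py`:

```python
import json
import subprocess
from pathlib import Path

def rank_by_metric(worktrees: dict[str, Path], eval_cmd: list[str],
                   metric: str, direction: str = "lower") -> list[tuple[str, float]]:
    """Run eval_cmd inside each agent worktree, parse JSON from stdout,
    and return (agent, score) pairs ranked best-first."""
    scores: dict[str, float] = {}
    for agent, wt in worktrees.items():
        out = subprocess.run(eval_cmd, cwd=wt, capture_output=True,
                             text=True, check=True).stdout
        scores[agent] = float(json.loads(out)[metric])
    best_is_high = direction == "higher"   # lower-is-better by default
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=best_is_high)
```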
### LLM Judge

Best for: code quality, readability, architecture decisions.

The coordinator reads each agent's diff (`git diff base...agent-branch`) and ranks by:

- Correctness (does it solve the task?)
- Simplicity (fewer lines changed preferred)
- Quality (clean execution, good structure)
### Hybrid

Run metric first. If top agents are within 10% of each other, use the LLM judge to break ties.
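The tie-break rule can be sketched as a hypothetical helper, where `judge` stands in for the coordinator's LLM ranking:

```python
def pick_winner(ranked, judge, tie_pct=0.10):
    """ranked: (agent, score) pairs, best-first from the metric pass.
    If other agents score within tie_pct of the leader, defer to judge."""
    best_agent, best_score = ranked[0]
    ties = [a for a, s in ranked if abs(s - best_score) <= tie_pct * abs(best_score)]
    return judge(ties) if len(ties) > 1 else best_agent
```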
## Session Lifecycle

```
init → running → evaluating → merged
                            → archived (if no winner)
```

State transitions managed by `session_manager.py`:

| From | To | Trigger |
|---|---|---|
| `init` | `running` | `/hub:spawn` |
| `running` | `evaluating` | All agents return |
| `evaluating` | `merged` | `/hub:merge` |
| `evaluating` | `archived` | No winner / all failed |
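A minimal sketch of such a state machine — illustrative only, not the actual `session_manager.py`:

```python
# Legal state transitions for a hub session
TRANSITIONS = {
    ("init", "running"),
    ("running", "evaluating"),
    ("evaluating", "merged"),
    ("evaluating", "archived"),
}

def transition(state: str, target: str) -> str:
    """Advance the session state, rejecting anything not in the table."""
    if (state, target) not in TRANSITIONS:
        raise ValueError(f"illegal transition: {state} -> {target}")
    return target
```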
## Proactive Triggers

The coordinator should act when:

| Signal | Action |
|---|---|
| All agents crashed | Post failure summary, suggest retry with different constraints |
| No improvement over baseline | Archive session, suggest different approaches |
| Orphan worktrees detected | Run |
| Session stuck in | Check board for progress, consider timeout |
## Installation

```bash
# Copy to your Claude Code skills directory
cp -r engineering/agenthub ~/.claude/skills/agenthub

# Or install via ClawHub
clawhub install agenthub
```
## Scripts

| Script | Purpose |
|---|---|
| | Initialize |
| `dag_analyzer.py` | Frontier detection, DAG graph, branch status |
| | Message board CRUD (channels, posts, threads) |
| `result_ranker.py` | Rank agents by metric or diff quality |
| `session_manager.py` | Session state machine and cleanup |
## Related Skills

- autoresearch-agent — Single-agent optimization loop (use AgentHub when you want N agents competing)
- self-improving-agent — Self-modifying agent (use AgentHub when you want external competition)
- git-worktree-manager — Git worktree utilities (AgentHub uses worktrees internally)