AgentHub — Multi-Agent Collaboration

Spawn N parallel AI agents that compete on the same task. Each agent works in an isolated git worktree. The coordinator evaluates results and merges the winner.

Slash Commands

| Command | Description |
| --- | --- |
| `/hub:init` | Create a new collaboration session — task, agent count, eval criteria |
| `/hub:spawn` | Launch N parallel subagents in isolated worktrees |
| `/hub:status` | Show DAG state, agent progress, branch status |
| `/hub:eval` | Rank agent results by metric or LLM judge |
| `/hub:merge` | Merge winning branch, archive losers |
| `/hub:board` | Read/write the agent message board |
| `/hub:run` | One-shot lifecycle: init → baseline → spawn → eval → merge |

Agent Templates

When spawning with `--template`, agents follow a predefined iteration pattern:

| Template | Pattern | Use Case |
| --- | --- | --- |
| `optimizer` | Edit → eval → keep/discard → repeat x10 | Performance, latency, size |
| `refactorer` | Restructure → test → iterate until green | Code quality, tech debt |
| `test-writer` | Write tests → measure coverage → repeat | Test coverage gaps |
| `bug-fixer` | Reproduce → diagnose → fix → verify | Bug fix approaches |

Templates are defined in `references/agent-templates.md`.

When This Skill Activates

Trigger phrases:
  • "try multiple approaches"
  • "have agents compete"
  • "parallel optimization"
  • "spawn N agents"
  • "compare different solutions"
  • "fan-out" or "tournament"
  • "generate content variations"
  • "compare different drafts"
  • "A/B test copy"
  • "explore multiple strategies"

Coordinator Protocol

The main Claude Code session is the coordinator. It follows this lifecycle:
INIT → DISPATCH → MONITOR → EVALUATE → MERGE

1. Init

Run `/hub:init` to create a session. This generates:
  • `.agenthub/sessions/{session-id}/config.yaml` — task config
  • `.agenthub/sessions/{session-id}/state.json` — state machine
  • `.agenthub/board/` — message board channels

2. Dispatch

2. 分发任务

Run `/hub:spawn` to launch agents. For each agent 1..N:
  • Post task assignment to `.agenthub/board/dispatch/`
  • Spawn via Agent tool with `isolation: "worktree"`
  • All agents launched in a single message (parallel)
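The board write behind the first dispatch step can be sketched as a small helper. The function name and assignment body below are illustrative, not AgentHub's actual API (`board_manager.py` owns this in practice); the only grounded details are the `dispatch/` channel, the YAML frontmatter rule, and the `{seq}-agent-{i}.md` path the agent prompt references (zero-padding of `seq` is assumed):

```python
from datetime import datetime, timezone
from pathlib import Path

def post_assignment(board: Path, session_id: str, agent_i: int, seq: int, task: str) -> Path:
    """Write agent-{i}'s task assignment to the dispatch channel.

    `seq` is a board-wide sequence number; how it is allocated is left to
    the board manager and is not shown here.
    """
    channel = board / "dispatch"
    channel.mkdir(parents=True, exist_ok=True)
    # Dispatch files are named {seq}-agent-{i}.md, matching the path each
    # subagent is told to read in its prompt.
    path = channel / f"{seq:03d}-agent-{agent_i}.md"
    path.write_text(
        "---\n"
        "author: coordinator\n"
        f"timestamp: {datetime.now(timezone.utc).isoformat()}\n"
        "channel: dispatch\n"
        "parent: null\n"
        "---\n\n"
        f"You are agent-{agent_i} in hub session {session_id}.\n"
        f"Your task: {task}\n"
    )
    return path
```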

3. Monitor

Run `/hub:status` to check progress:
  • `dag_analyzer.py --status --session {id}` shows branch state
  • Board `progress/` channel has agent updates

4. Evaluate

Run `/hub:eval` to rank results:
  • Metric mode: run eval command in each worktree, parse numeric result
  • Judge mode: read diffs, coordinator ranks by quality
  • Hybrid: metric first, LLM-judge for ties

5. Merge

Run `/hub:merge` to finalize:
  • `git merge --no-ff` winner into base branch
  • Tag losers: `git tag hub/archive/{session}/agent-{i}`
  • Clean up worktrees
  • Post merge summary to board

Agent Protocol

Each subagent receives this prompt pattern:
You are agent-{i} in hub session {session-id}.
Your task: {task description}

Instructions:
1. Read your assignment at .agenthub/board/dispatch/{seq}-agent-{i}.md
2. Work in your worktree — make changes, run tests, iterate
3. Commit all changes with descriptive messages
4. Write your result summary to .agenthub/board/results/agent-{i}-result.md
5. Exit when done
Agents do NOT see each other's work. They do NOT communicate with each other. They only write to the board for the coordinator to read.

DAG Model

Branch Naming

`hub/{session-id}/agent-{N}/attempt-{M}`
  • Session ID: timestamp-based (`YYYYMMDD-HHMMSS`)
  • Agent N: sequential (1 to agent-count)
  • Attempt M: increments on retry (usually 1)

Frontier Detection

前沿检测

Frontier = branch tips with no child branches. Equivalent to AgentHub's "leaves" query.
```bash
python scripts/dag_analyzer.py --frontier --session {id}
```

Immutability

不可变性

The DAG is append-only:
  • Never rebase or force-push agent branches
  • Never delete commits (only branch refs after archival)
  • Every approach preserved via git tags

Message Board

留言板

Location: `.agenthub/board/`

Channels

频道

| Channel | Writer | Reader | Purpose |
| --- | --- | --- | --- |
| `dispatch/` | Coordinator | Agents | Task assignments |
| `progress/` | Agents | Coordinator | Status updates |
| `results/` | Agents + Coordinator | All | Final results + merge summary |

Post Format

发布格式

```markdown
---
author: agent-1
timestamp: 2026-03-17T14:30:22Z
channel: results
parent: null
---
```
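Because the frontmatter is a simple `key: value` block between `---` fences, splitting a post into metadata and body needs no YAML library. A sketch, assuming well-formed posts that start with a `---` fence (the function name is illustrative):

```python
def parse_post(text: str) -> tuple[dict[str, str], str]:
    """Split a board post into (frontmatter dict, body).

    Assumes the post opens with a ---\\n fence and that frontmatter is
    flat `key: value` lines, as in the format above.
    """
    # Split on the first two "---\n" fences: before, frontmatter, body.
    _, fm, body = text.split("---\n", 2)
    meta = {}
    for line in fm.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.lstrip("\n")
```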

Result Summary

结果摘要

  • Approach: Replaced O(n²) sort with hash map
  • Files changed: 3
  • Metric: 142ms (baseline: 180ms, delta: -38ms)
  • Confidence: High — all tests pass

Board Rules

留言板规则

  • Append-only: never edit or delete posts
  • Unique filenames: `{seq:03d}-{author}-{timestamp}.md`
  • YAML frontmatter required on all posts

Evaluation Modes

评估模式

Metric-Based

基于指标的评估

Best for: benchmarks, test pass rates, file sizes, response times.
```bash
python scripts/result_ranker.py --session {id} \
  --eval-cmd "pytest bench.py --json" \
  --metric p50_ms --direction lower
```
The ranker runs the eval command in each agent's worktree directory and parses the metric from stdout.
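The parse-and-rank step reduces to: capture each worktree's eval stdout, pull one numeric field out of the JSON, and sort in the metric's winning direction. A sketch of that core (function and argument names are illustrative, not `result_ranker.py`'s internals):

```python
import json

def rank_agents(outputs: dict[str, str], metric: str, direction: str = "lower") -> list[tuple[str, float]]:
    """Rank agents by a numeric metric parsed from each eval command's stdout.

    `outputs` maps agent name -> captured JSON stdout.
    `direction` is "lower" or "higher": which end of the metric wins.
    """
    scores = {agent: float(json.loads(out)[metric]) for agent, out in outputs.items()}
    # Best score first: ascending for "lower", descending for "higher".
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=(direction == "higher"))
```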

LLM Judge

LLM评判

Best for: code quality, readability, architecture decisions.
The coordinator reads each agent's diff (`git diff base...agent-branch`) and ranks by:
  1. Correctness (does it solve the task?)
  2. Simplicity (fewer lines changed preferred)
  3. Quality (clean execution, good structure)

Hybrid

混合模式

Run metric first. If top agents are within 10% of each other, use LLM judge to break ties.
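The 10% rule can be stated precisely: compare the runner-up's metric to the winner's, relative to the winner. A sketch (the helper name is illustrative, and the threshold is the 10% figure above):

```python
def needs_judge(ranked: list[tuple[str, float]], tolerance: float = 0.10) -> bool:
    """True when the top two metric scores are within `tolerance` of each
    other, meaning the LLM judge should break the tie."""
    if len(ranked) < 2:
        return False
    best, runner_up = ranked[0][1], ranked[1][1]
    if best == 0:
        return runner_up == 0  # avoid dividing by a zero-valued winner
    return abs(runner_up - best) / abs(best) <= tolerance
```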

Session Lifecycle

会话生命周期

init → running → evaluating → merged
                            → archived (if no winner)
State transitions managed by `session_manager.py`:

| From | To | Trigger |
| --- | --- | --- |
| `init` | `running` | `/hub:spawn` completes |
| `running` | `evaluating` | All agents return |
| `evaluating` | `merged` | `/hub:merge` completes |
| `evaluating` | `archived` | No winner / all failed |
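The transition table is small enough to encode directly. Trigger names here are illustrative (session_manager.py's real event names may differ); the states and edges match the table:

```python
# (current state, trigger) -> next state; anything else is illegal.
TRANSITIONS = {
    ("init", "spawn_complete"): "running",
    ("running", "all_agents_returned"): "evaluating",
    ("evaluating", "merge_complete"): "merged",
    ("evaluating", "no_winner"): "archived",
}

def transition(state: str, trigger: str) -> str:
    """Apply a trigger to the session state, rejecting moves not in the table."""
    try:
        return TRANSITIONS[(state, trigger)]
    except KeyError:
        raise ValueError(f"illegal transition: {state} on {trigger}") from None
```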

Proactive Triggers

主动触发机制

The coordinator should act when:
| Signal | Action |
| --- | --- |
| All agents crashed | Post failure summary, suggest retry with different constraints |
| No improvement over baseline | Archive session, suggest different approaches |
| Orphan worktrees detected | Run `session_manager.py --cleanup {id}` |
| Session stuck in `running` | Check board for progress, consider timeout |

Installation

Copy to your Claude Code skills directory:

```bash
cp -r engineering/agenthub ~/.claude/skills/agenthub
```

Or install via ClawHub:

```bash
clawhub install agenthub
```

Scripts

脚本说明

| Script | Purpose |
| --- | --- |
| `hub_init.py` | Initialize `.agenthub/` structure and session |
| `dag_analyzer.py` | Frontier detection, DAG graph, branch status |
| `board_manager.py` | Message board CRUD (channels, posts, threads) |
| `result_ranker.py` | Rank agents by metric or diff quality |
| `session_manager.py` | Session state machine and cleanup |

Related Skills

相关技能

  • autoresearch-agent — Single-agent optimization loop (use AgentHub when you want N agents competing)
  • self-improving-agent — Self-modifying agent (use AgentHub when you want external competition)
  • git-worktree-manager — Git worktree utilities (AgentHub uses worktrees internally)