agent-swarm


Agent Swarm - Multi-Agent Cluster Orchestration



🚨 Mandatory Entry - Must Execute First!

No matter what task the user requests, you must execute the entry script before using the agent cluster:

```bash
python3 scripts/swarm_entry.py
```

Determine Next Steps Based on the Returned status

The script returns JSON; act according to the `status` field:

| status | Meaning | Next Action |
| --- | --- | --- |
| need_config | Not initialized | Display the `display` and `prompt` content to the user; wait for the user to select A/B/C |
| ready | Ready | Proceed directly to task orchestration using the agents in the `agents` list |
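The dispatch on `status` can be sketched in Python. This is a minimal sketch assuming the JSON payload carries `display`, `prompt`, and `agents` keys as described in the table; the helper name `handle_entry` is hypothetical, not part of the skill.

```python
import json

def handle_entry(raw: str) -> str:
    """Dispatch on the `status` field of the entry script's JSON output."""
    result = json.loads(raw)
    if result["status"] == "need_config":
        # Show the detected models and the A/B/C prompt, then wait for the user.
        return result["display"] + "\n" + result["prompt"]
    elif result["status"] == "ready":
        # Proceed straight to orchestration with the configured agents.
        return "agents: " + ", ".join(result["agents"])
    raise ValueError(f"unexpected status: {result['status']}")

# Example with a hypothetical "ready" payload:
payload = '{"status": "ready", "agents": ["pm", "coder", "reviewer"]}'
print(handle_entry(payload))  # agents: pm, coder, reviewer
```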

Example Workflow

```python
# Step 1: Execute the entry script
result = exec("python3 scripts/swarm_entry.py")

# Step 2: Parse the returned JSON
if result.status == "need_config":
    # Display the configuration options to the user
    print(result.display)  # Detected models
    print(result.prompt)   # Please select A/B/C
    # Wait for the user's response...
elif result.status == "ready":
    # Start task orchestration directly
    agents = result.agents
    # Continue executing the user's task...
```

Complete Initialization After User Selection

After the user selects a configuration method, perform the initialization:

```bash
# After the user selects A (Auto-assign)
python3 scripts/swarm_entry.py --action init
```

Reset Configuration

```bash
python3 scripts/swarm_entry.py --action reset
```

Overview

This skill makes you the commander of an agent team, able to intelligently schedule multiple specialized agents to collaborate on work according to task complexity.

Core process: Entry Check → Task Analysis → Sub-task Breakdown → Select Suitable Agents → Parallel/Serial Execution → Result Integration

⚡ Configuration Wizard Details

When the entry script returns `status: "need_config"`, run the following configuration flow:

Step 1: Display Detection Results

The script has already detected the models automatically; display the `result.display` content to the user directly:

```markdown
📦 Your OpenClaw Has the Following Models Configured

🔴 High-performance models (suitable for: coder, writer, analyst, reviewer)
  • Claude Opus 4.5 (vendor-claude-opus-4-5/aws-claude-opus-4-5)

🟡 Medium models (suitable for: pm, designer)
  • Gemini 3 Pro (vendor-gemini-3-pro/gemini-3-pro-preview)

🟢 Lightweight models (suitable for: researcher, assistant)
  • GLM-4.7 (lixiang-glm-4-7/Kivy-GLM-4.7)
```

Step 2: Display Configuration Options

Display the `result.prompt` content:

```markdown
Please select a configuration method:

**A. Auto-assign** — Automatically configure the agent team based on your existing models
**B. Add new model** — I will recommend mainstream models for you to choose from
**C. Custom configuration** — You manually specify the model for each agent

Please reply with A/B/C
```

Step 3: Execute Based on User Selection

Select A (Auto-assign):

```bash
python3 scripts/swarm_entry.py --action init
```

Select B (Add new model):
  • Display mainstream model options and configuration guidelines
  • After the user provides the configuration, update the OpenClaw configuration
  • Then execute init

Select C (Custom configuration):
  • Ask the user to specify the model for each agent
  • Execute init once everything is collected

Step 4: Confirm Initialization Completion

After successful initialization, inform the user:

✅ Agent Swarm configuration completed! You can now start using the agent team.

Legacy Configuration Method (Compatibility)

If you need to detect models manually, you can also use the gateway tool:

```python
# Use the gateway tool to get the current configuration
gateway({ action: "config.get" })
```

Extract all available models under `models.providers` from the returned configuration.
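As a sketch of that extraction step, assuming the returned configuration is a plain dict shaped like the provider examples elsewhere in this document (`models.providers.<name>.models[].id`), a hypothetical `list_models` helper might look like:

```python
def list_models(config: dict) -> list[str]:
    """Collect every available model as 'provider/model-id' strings."""
    found = []
    for provider_name, provider in config.get("models", {}).get("providers", {}).items():
        for model in provider.get("models", []):
            found.append(f"{provider_name}/{model['id']}")
    return found

# Example with a hypothetical two-provider config:
config = {"models": {"providers": {
    "my-deepseek": {"models": [{"id": "deepseek-chat", "name": "DeepSeek V3"}]},
    "my-gemini":   {"models": [{"id": "gemini-3-pro-preview", "name": "Gemini 3 Pro"}]},
}}}
print(list_models(config))  # ['my-deepseek/deepseek-chat', 'my-gemini/gemini-3-pro-preview']
```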

Step 2: Display Available Models to the User

Display them grouped by performance level:

```markdown
📦 Your OpenClaw Has the Following Models Configured

🔴 High-performance models (suitable for: coder, writer, analyst, reviewer)
  • Claude Opus 4.5 (claude-opus-4-5/claude-opus-4-5)

🟡 Medium models (suitable for: pm, designer)
  • Gemini 3 Pro (vendor-gemini-3-pro/gemini-3-pro-preview)

🟢 Lightweight models (suitable for: researcher, assistant)
  • GLM-4.7 (glm-4-7/Kivy-GLM-4.7)

🖼️ Image models (suitable for: designer)
  • Gemini 3 Pro Image (gemini-3-pro-image/gemini-3-pro-image-preview)
```

Step 3: Ask the User for a Configuration Method

```markdown
Please select a configuration method:

**A. Auto-assign** — Automatically configure the agent team based on your existing models
   - High-complexity tasks (coding/writing/analysis) → your most powerful model
   - Medium-complexity tasks (planning/design) → medium models
   - Lightweight tasks (search/Q&A) → the lowest-cost model

**B. Add new model** — I will recommend mainstream models for you to choose from
   - Claude (Anthropic)
   - GPT-4o (OpenAI)
   - Gemini (Google)
   - DeepSeek V3 (DeepSeek)
   - Qwen Max (Alibaba Cloud)
   - GLM-4 (Zhipu)

**C. Custom configuration** — You manually specify the model for each agent

Please reply with A/B/C or tell me your choice directly.
```

Step 4: Execute the Configuration Based on User Selection

Select A (Auto-assign):
  • Analyze the existing models and assign them to agents by capability level
  • Generate a configuration patch and apply it

Select B (Add new model):
  • Display mainstream model options and API configuration guidelines
  • After the user provides an API key, generate the model configuration
  • Update the OpenClaw configuration

Select C (Custom configuration):
  • List all agents and their recommended model levels
  • Ask the user to specify each one

Configuration Wizard Script

You can run the configuration wizard script to assist with detection:

```bash
python3 scripts/setup_wizard.py
```

The script will:
  1. Automatically read the OpenClaw configuration
  2. Analyze the configured models
  3. Suggest an agent allocation scheme
  4. Generate configuration patch files

Mainstream Model Recommendations

| Model | Provider | Recommended For | API Type |
| --- | --- | --- | --- |
| Claude Opus 4/4.5 | Anthropic | High-complexity tasks | anthropic-messages |
| Claude Sonnet 4 | Anthropic | Medium-complexity tasks | anthropic-messages |
| GPT-4o | OpenAI | General tasks | openai-completions |
| Gemini 2.5 Pro | Google | Long-document processing | google-generative-ai |
| DeepSeek V3 | DeepSeek | Cost-effective option | openai-completions |
| Qwen Max | Alibaba Cloud | Chinese-language tasks | openai-completions |
| GLM-4 | Zhipu | Lightweight tasks | openai-completions |

Model Addition Example

If the user chooses to add a new model, generate a configuration like:

```json
{
  "models": {
    "providers": {
      "my-deepseek": {
        "baseUrl": "https://api.deepseek.com/v1",
        "apiKey": "sk-xxx (provided by user)",
        "api": "openai-completions",
        "authHeader": "Authorization",
        "models": [{
          "id": "deepseek-chat",
          "name": "DeepSeek V3",
          "contextWindow": 64000,
          "maxTokens": 8192
        }]
      }
    }
  }
}
```
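Before applying such a patch, it can help to validate and merge it programmatically. A minimal sketch, assuming the required keys are those shown in the example above; `merge_provider` is a hypothetical helper, not part of OpenClaw:

```python
# Required keys are an assumption based on the example provider entry above.
REQUIRED_PROVIDER_KEYS = {"baseUrl", "apiKey", "api", "models"}

def merge_provider(config: dict, name: str, provider: dict) -> dict:
    """Validate a provider entry and merge it under models.providers."""
    missing = REQUIRED_PROVIDER_KEYS - provider.keys()
    if missing:
        raise ValueError(f"provider {name!r} is missing keys: {sorted(missing)}")
    providers = config.setdefault("models", {}).setdefault("providers", {})
    providers[name] = provider
    return config

# Example: merge the DeepSeek entry into an empty config.
cfg = merge_provider({}, "my-deepseek", {
    "baseUrl": "https://api.deepseek.com/v1",
    "apiKey": "sk-xxx",
    "api": "openai-completions",
    "models": [{"id": "deepseek-chat", "name": "DeepSeek V3"}],
})
print(sorted(cfg["models"]["providers"]))  # ['my-deepseek']
```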

Available Agent Team

| Agent ID | Emoji | Role | Core Competencies | Available Tools |
| --- | --- | --- | --- | --- |
| pm | 📋 | Planner | Requirements analysis, task breakdown, prioritization | read, write, edit, web_search, web_fetch, memory |
| researcher | 🔍 | Information Hunter | Broad search, cross-validation, structured output | web_search, web_fetch, read, write, memory |
| coder | 👨‍💻 | Code Artisan | Coding, debugging, testing, refactoring | read, write, edit, exec, process |
| writer | ✍️ | Text Artisan | Documentation, reports, copywriting, translation | read, write, edit, memory |
| designer | 🎨 | Visual Creator | Illustration, artwork, data visualization | read, write |
| analyst | 📊 | Data Detective | Data processing, statistical analysis, trend prediction | read, write, edit, exec |
| reviewer | 🔎 | Quality Gatekeeper | Code review, content audit, compliance checks | read, memory |
| assistant | 💬 | Communication Bridge | Simple Q&A, message forwarding, reminders | message, read, sessions_send |
| automator | 🤖 | Efficiency Expert | Scheduled tasks, web automation, scripting | exec, process, cron, browser, read, write |
| github-tracker | 🔥 | GitHub Hunter | Tracking popular projects, trend analysis, daily report generation | web_search, web_fetch, read, write, memory |

Agent Personality Overview

| Agent | One-sentence Positioning | Core Principles |
| --- | --- | --- |
| 📋 pm | Turns vague requirements into clear plans | User perspective, goal-oriented, priority thinking |
| 🔍 researcher | Finds information others can't | Breadth-first, multi-source verification, cite sources |
| 👨‍💻 coder | Writes elegant, efficient programs | Understand first, then act; simplicity over complexity; readability first |
| ✍️ writer | Turns information into valuable content | Reader-first, clear structure, substance over form |
| 🎨 designer | Turns ideas into images | Clear purpose, concise and clear, consistent style |
| 📊 analyst | Finds the story in the numbers | Data quality, hypothesis-driven, insight-oriented |
| 🔎 reviewer | Ensures output meets standards | Objective and fair, constructive feedback, no direct modification |
| 💬 assistant | Delivers information, responds quickly | Concise and clear, knows its boundaries, friendly and polite |
| 🤖 automator | Automates repetitive work | ROI thinking, stable and reliable, monitored |
| 🔥 github-tracker | Discovers popular GitHub projects | Data-driven, focused on value, trend insight |

Model Cost Reference

| Model | Input ($/M) | Output ($/M) | Used For |
| --- | --- | --- | --- |
| Claude Opus 4.5 | $5.00 | $25.00 | main, coder, writer, analyst, reviewer, automator |
| Gemini 3 Pro | $1.25 | $10.00 | pm, researcher |
| Gemini 3 Pro Image | $1.25 | $10.00 | designer |
| GLM-4.7 | ~$0.014 | ~$0.014 | assistant, github-tracker |

Cost optimization principle: use cheap models for simple tasks; use expensive models only for complex tasks.
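The per-call cost implied by the table is straightforward to compute: tokens times the per-million price. A sketch with the prices hard-coded from the table above; the `PRICES` keys are hypothetical shorthand, not real model IDs:

```python
# Prices in $ per million tokens, taken from the cost reference table.
PRICES = {
    "claude-opus-4.5": (5.00, 25.00),
    "gemini-3-pro":    (1.25, 10.00),
    "glm-4.7":         (0.014, 0.014),
}

def estimate_cost(model: str, tokens_in: int, tokens_out: int) -> float:
    """Estimate a single call's cost in dollars."""
    price_in, price_out = PRICES[model]
    return (tokens_in * price_in + tokens_out * price_out) / 1_000_000

# 8k input / 1.2k output on Claude Opus 4.5:
print(round(estimate_cost("claude-opus-4.5", 8_000, 1_200), 4))  # 0.07
```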

Orchestration Process

Step 1: Task Analysis

Receive task → Judge complexity
├── Simple task → Execute directly
└── Complex task → Enter orchestration mode

Step 2: Task Breakdown

Decompose the complex task into independent sub-tasks, clarifying:
  • The goal and output format of each sub-task
  • The input data and context
  • Dependencies (which can run in parallel, which must run serially)
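The parallel/serial split can be derived mechanically from the dependency sets. A sketch of grouping sub-tasks into dispatch batches (tasks within a batch can be spawned in parallel; batches run serially); the task names are illustrative:

```python
def parallel_batches(deps: dict[str, set[str]]) -> list[set[str]]:
    """Group sub-tasks into batches: each batch holds tasks whose
    dependencies are already satisfied, so they can run in parallel."""
    remaining = {task: set(d) for task, d in deps.items()}
    batches = []
    while remaining:
        ready = {t for t, d in remaining.items() if not d}
        if not ready:
            raise ValueError("cyclic dependencies")
        batches.append(ready)
        for t in ready:
            del remaining[t]
        for d in remaining.values():
            d -= ready  # mark the finished tasks as satisfied
    return batches

# Research tasks are independent; writing waits on both; review waits on writing.
deps = {
    "research-langchain": set(),
    "research-autogpt": set(),
    "write-report": {"research-langchain", "research-autogpt"},
    "review": {"write-report"},
}
print(parallel_batches(deps))
```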

Step 3: Agent Selection

Select the most suitable agent for the nature of each sub-task:

| Task Type | Recommended Agent | Notes |
| --- | --- | --- |
| Project planning, requirements analysis | 📋 pm | Outputs a task list and priorities |
| Information collection, data organization | 🔍 researcher | Multi-source search, structured output |
| Writing code, fixing bugs, scripting | 👨‍💻 coder | Can execute shell commands |
| Articles, documentation, reports | ✍️ writer | Writes based on collected materials |
| Illustrations, charts | 🎨 designer | Image generation |
| Data analysis, statistics | 📊 analyst | Can execute data-processing scripts |
| Code review, content audit | 🔎 reviewer | Read-only; provides suggestions |
| Message forwarding, simple Q&A | 💬 assistant | Quick response |
| Scheduled tasks, automation | 🤖 automator | Can set up cron jobs |
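In practice the orchestrator picks agents by judgment, but the mapping above can be approximated by a simple keyword router. A deliberately naive sketch; the keyword lists and the `pick_agent` helper are assumptions, not part of the skill:

```python
# Keyword → agent routing, roughly mirroring the selection table above.
ROUTES = [
    (("plan", "requirement"), "pm"),
    (("search", "research"), "researcher"),
    (("code", "bug", "script"), "coder"),
    (("article", "report", "doc"), "writer"),
    (("chart", "illustration"), "designer"),
    (("data", "statistic"), "analyst"),
    (("review", "audit"), "reviewer"),
    (("cron", "automate"), "automator"),
]

def pick_agent(task: str) -> str:
    """Return the first agent whose keywords match the task description."""
    lowered = task.lower()
    for keywords, agent in ROUTES:
        if any(k in lowered for k in keywords):
            return agent
    return "assistant"  # default: quick-response fallback

print(pick_agent("Fix the login bug"))     # coder
print(pick_agent("Summarize sales data"))  # analyst
```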

Step 4: Execution Scheduling

Use `sessions_spawn` to schedule sub-agents. Spawning is asynchronous; each sub-task automatically reports its result on completion.

Parallel execution example (multiple spawns dispatched at once, each running independently):

```javascript
// Spawn repeatedly within the same turn; these tasks run in parallel.
// Each sub-task reports back on completion, and the main agent collects and summarizes the results.

// Method 1: direct consecutive spawns
sessions_spawn({ task: "Search for LangChain materials...", agentId: "researcher", label: "research-langchain" })
sessions_spawn({ task: "Search for AutoGPT materials...", agentId: "researcher", label: "research-autogpt" })
sessions_spawn({ task: "Search for CrewAI materials...", agentId: "researcher", label: "research-crewai" })
// The three tasks run in parallel and report their results separately.

// Method 2: dispatch in a loop (clearer)
const frameworks = ["LangChain", "AutoGPT", "CrewAI"]
frameworks.forEach(name => {
  sessions_spawn({
    task: `Search for the features, strengths, weaknesses, and typical use cases of the ${name} framework; write a structured summary to /workspace/research/${name.toLowerCase()}.md`,
    agentId: "researcher",
    label: `research-${name.toLowerCase()}`
  })
})
// Sub-tasks report automatically on completion; the main agent summarizes all results.
```

Serial execution example (wait for the previous step's result before continuing):

```javascript
// Serial execution waits for the previous task to complete, then spawns the next one after receiving its report.
// Flow: Research → (wait for report) → Writing → (wait for report) → Illustration → (wait for report) → Review

// Step 1: dispatch the research task
sessions_spawn({ task: "Research AI Agent frameworks...", agentId: "researcher" })
// Wait for the researcher to report...

// Step 2: after receiving the research results, dispatch the writing task
sessions_spawn({
  task: "Based on the research materials in /workspace/research/, write a comparative analysis article...",
  agentId: "writer"
})
// Wait for the writer to report...

// Step 3: after the article is complete, dispatch the illustration task
sessions_spawn({ task: "Generate illustrations for the article...", agentId: "designer" })
```

Hybrid orchestration example (parallel first, then serial):

```javascript
// Phase 1: parallel research (dispatched simultaneously)
sessions_spawn({ task: "Search for LangChain...", agentId: "researcher", label: "r1" })
sessions_spawn({ task: "Search for AutoGPT...", agentId: "researcher", label: "r2" })
sessions_spawn({ task: "Search for CrewAI...", agentId: "researcher", label: "r3" })

// Wait for all 3 research tasks to complete...

// Phase 2: serial processing (based on the summarized results)
sessions_spawn({ task: "Integrate the research materials and write a report...", agentId: "writer" })
// Wait for the writer to complete...

sessions_spawn({ task: "Review the report's quality...", agentId: "reviewer" })
```

Step 5: Result Integration

  • Collect the outputs of all sub-agents
  • Integrate, deduplicate, and format them
  • Output the final deliverables
  • Always output execution statistics (see the template below)

Orchestration Examples

Example 1: Technical Research Report

User: "Research mainstream AI Agent frameworks and write a comparative analysis article"

Orchestration plan:
├── 🔍 researcher × 3 (parallel)
│   ├── Search LangChain - organize features, pros/cons, cases
│   ├── Search AutoGPT - organize features, pros/cons, cases
│   └── Search CrewAI - organize features, pros/cons, cases
├── ✍️ writer (serial, after research completes)
│   └── Integrate the materials and write the comparative analysis article
├── 🎨 designer (serial)
│   └── Generate framework comparison/architecture diagrams
└── 🔎 reviewer (serial)
    └── Review the article's quality and suggest improvements

Example 2: Code Project

User: "Help me refactor this project's authentication module"

Orchestration plan:
├── 📋 pm (optional)
│   └── Analyze the requirements and break down the refactoring steps
├── 👨‍💻 coder
│   └── Analyze the existing code and implement the refactoring
└── 🔎 reviewer (serial)
    └── Code review to ensure quality

Example 3: Data Analysis Report

User: "Analyze this sales data and generate a monthly report"

Orchestration plan:
├── 📊 analyst
│   └── Data cleaning, statistical analysis, insight discovery
├── ✍️ writer (serial)
│   └── Write the analysis report
└── 🎨 designer (serial)
    └── Generate data visualization charts

Example 4: Automation Task

User: "Set up an automatic check of GitHub trending every morning"

Orchestration plan:
└── 🤖 automator
    └── Write the script + set up a cron job

Orchestration Principles

  1. Do not over-orchestrate simple tasks — if you can do something directly, do it directly; don't orchestrate for its own sake
  2. Parallelize sensibly — run independent tasks in parallel to improve efficiency
  3. Hand over clearly — sub-task outputs should be clear and complete so downstream agents can use them
  4. Handle failures — when a sub-task fails, decide whether to retry or skip it
  5. Integrate results — the final output should be coherent, not a simple concatenation
  6. Stay cost-aware — prefer cheap models; use expensive models only for complex tasks

🔧 Batch Output Strategy for Ultra-Long Text

When generating long files (such as complete reports or long documents), a single output may be truncated by the model's token limit, causing the `write` tool call to fail.

Symptoms

```
Validation failed for tool "write":
  - content: must have required property 'content'
```

Or the output is truncated (`stopReason: "length"`), leaving the file content incomplete.

Solution: Segmented Generation + Script Assembly

Strategy 1: dispatch one writer per chapter (recommended)

Split the long report into chapters, dispatch them to separate writers in parallel, and concatenate the pieces with a script at the end:

```javascript
// Phase 1: write the chapters in parallel
sessions_spawn({ task: "Write Chapter 1: Abstract and Background...", agentId: "writer", label: "ch01" })
sessions_spawn({ task: "Write Chapter 2: Core Content...", agentId: "writer", label: "ch02" })
sessions_spawn({ task: "Write Chapter 3: Conclusion...", agentId: "writer", label: "ch03" })

// Phase 2: once all chapters are complete, concatenate with exec
exec(`
  cat sections/ch01.md > FINAL-REPORT.md
  cat sections/ch02.md >> FINAL-REPORT.md
  cat sections/ch03.md >> FINAL-REPORT.md
`)
```

Strategy 2: exec + heredoc appends

For a single-agent task where the content is too long for one write call, write the file in segments:

```bash
# Write the file header first
cat > output.md << 'PART1'
# Title
Part 1 content...
PART1

# Append subsequent content
cat >> output.md << 'PART2'
Part 2 content...
PART2

# Continue appending
cat >> output.md << 'PART3'
Part 3 content...
PART3
```
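The same segmented-write idea also works for agents that generate file content programmatically: write the first segment, then append the rest so no single write exceeds the limit. A sketch; `write_in_parts` is a hypothetical helper and the file path is illustrative:

```python
from pathlib import Path

def write_in_parts(path: str, parts: list[str]) -> None:
    """Write the first part, then append the rest, mirroring the heredoc strategy."""
    target = Path(path)
    target.write_text(parts[0])
    with target.open("a") as f:
        for part in parts[1:]:
            f.write(part)

write_in_parts("output.md", ["# Title\n", "Part 1 content...\n", "Part 2 content...\n"])
print(Path("output.md").read_text())
```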

Best Practices

| Report Length | Recommended Strategy |
| --- | --- |
| < 3000 words | Single writer outputs directly |
| 3000-8000 words | Split into 2-4 chapters written in parallel, assembled by script |
| > 8000 words | Split into 5+ chapters, multiple writers in parallel + script assembly |

Core principle: do not cap the length of a single output; solve the long-text problem by splitting the task and executing in parallel.
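The length thresholds in the table map directly to a strategy choice. A sketch, treating the word counts above as exact cutoffs; the function name and labels are illustrative:

```python
def output_strategy(word_count: int) -> str:
    """Map an estimated report length to the dispatch strategy from the table."""
    if word_count < 3000:
        return "single writer"
    if word_count <= 8000:
        return "2-4 chapters in parallel + script assembly"
    return "5+ chapters, multiple writers in parallel + script assembly"

print(output_strategy(2000))   # single writer
print(output_strategy(12000))  # 5+ chapters, multiple writers in parallel + script assembly
```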

🆘 Error Reporting Mechanism for Sub-Agents

Sub-agents may hit various errors while executing tasks (tool call failures, model limits, insufficient resources, and so on). To improve the task success rate, establish an error reporting mechanism.

Mechanism Description

When a sub-agent task fails or returns an error, the main agent should:
  1. Analyze the error type:
    • Output truncation (`stopReason: "length"`) → adopt the segmented strategy
    • Tool call failure (`Validation failed`) → check the parameters or change the approach
    • Model not supported (e.g., Gemini Image does not support thinking) → adjust the configuration
    • Timeout (`timeout`) → split the task or increase the time limit
  2. Select a solution:
    • Dispatch additional sub-agents to share the load: split the large task into small pieces and dispatch multiple sub-agents
    • Handle it directly: the main agent completes simple tasks itself
    • Adjust parameters and retry: modify the task description, timeout, or model configuration, then retry
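The error-type analysis can be approximated by matching the failure message against the signatures listed above. A sketch; the returned strategy labels are hypothetical shorthand, not part of the skill:

```python
def classify_failure(message: str) -> str:
    """Map a sub-agent failure message to a recovery strategy.
    Matched substrings follow the error signatures listed above."""
    msg = message.lower()
    if "stopreason" in msg and "length" in msg:
        return "segment"          # output truncated → segmented strategy
    if "validation failed" in msg:
        return "fix-params"       # tool call failed → check parameters or change approach
    if "thinking" in msg or "not supported" in msg:
        return "reconfigure"      # model limitation → adjust thinking/model configuration
    if "timeout" in msg:
        return "split-or-extend"  # split the task or extend the timeout
    return "retry"                # default: adjust parameters and retry

print(classify_failure('stopReason: "length"'))  # segment
```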

Error Handling Process

Sub-agent task fails
Main agent receives the failure notification
Analyze the error cause
    ├── Output too long → split into multiple sub-tasks, dispatch more writers in parallel
    ├── Tool unavailable → switch to exec or another approach
    ├── Model limitation → adjust the thinking/model configuration
    └── Timeout → split the task or extend the timeout
Execute the solution
Summarize the results

Example: Handling Truncated writer Output

```javascript
// The original task failed (output too long and truncated).
// After receiving the notification, the main agent switches to a segmented strategy.

// Solution: split into 3 sub-tasks
sessions_spawn({
  task: "Write Chapters 1-2 of the report (Abstract, Background), limit to 1500 words...",
  agentId: "writer",
  label: "report-part1"
})

sessions_spawn({
  task: "Write Chapters 3-4 of the report (Core Content), limit to 1500 words...",
  agentId: "writer",
  label: "report-part2"
})

sessions_spawn({
  task: "Write Chapters 5-6 of the report (Conclusion, References), limit to 1000 words...",
  agentId: "writer",
  label: "report-part3"
})

// Merge with exec after all parts are complete
```

Add Reporting Guidelines to Each Sub-agent's AGENTS.md

It is recommended to add the following to each sub-agent's AGENTS.md:

```markdown
When Encountering Problems

If you run into any of the following, state it clearly in your output so the main agent can handle it:
1. Task too large: state "The task content is too large; recommend splitting it into X sub-tasks"
2. Tool unavailable: state "Tool X call failed because Y"
3. Insufficient information: state "Missing X information; unable to complete the task"
4. Beyond capability: state "This task requires capability X; recommend assigning it to agent Y"

Do not fail silently; clearly reporting problems helps the main agent find a solution.
```

---

Calling Syntax

```javascript
sessions_spawn({
  task: "Specific task description, including the necessary context and expected output format",
  agentId: "researcher",   // Specify the agent ID
  model: "glm",            // Optional: override the agent's default model
  thinking: "off",         // Optional: control thinking mode (off/minimal/low/medium/high)
  label: "task-name",      // Optional: for tracking
  runTimeoutSeconds: 300   // Optional: timeout in seconds
})
```

⚠️ Special Note: Designer Agent

Important: When calling the designer agent, you must explicitly set thinking: "off", because the Gemini Image model does not support thinking mode:

javascript
sessions_spawn({
  task: "Generate illustrations for the article...",
  agentId: "designer",
  thinking: "off"    // Mandatory! Gemini Image does not support thinking
})

Task Description Best Practices

markdown
A good task description should include:
1. A clear goal - what to do
2. Necessary context - background information
3. Output requirements - format and save location
4. Constraints - limits and caveats

Example:
"Search for the latest information on the LangChain framework and organize the following content:
1. Core features and architecture
2. Advantages and disadvantages
3. Typical use cases
4. Comparison with other frameworks

Output format: Markdown
Save to: /workspace/research/langchain.md
Language: Chinese"

Task Completion Statistics

After completing an agent-team collaboration task, you must output statistics:

📊 Agent Team Execution Statistics

Execution Details

| Agent | Task | Duration | Tokens (in/out) | Status |
|-------|------|----------|-----------------|--------|
| 🔍 researcher | LangChain research | 2m30s | 8k/1.2k | |
| 🔍 researcher | AutoGPT research | 2m45s | 9k/1.0k | |
| ✍️ writer | Write report | 3m12s | 15k/2.5k | |
| 🎨 designer | Generate illustrations | 45s | 2k/- | |

Cost Summary

  • Total duration: 9m12s (actual after parallel optimization: 6m30s)
  • Total tokens: 34k input / 4.7k output
  • Actual cost: $0.12
  • Cost if the main model had been used for everything: $0.29
  • Savings: 59%
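As a sanity check, the savings figure is just the relative difference between the all-main-model cost and the actual mixed-model cost:

```python
# Sanity check for the savings figure: relative difference between
# the all-main-model cost and the actual mixed-model cost.
actual_cost = 0.12      # USD, from the summary above
main_model_cost = 0.29  # USD, if every task had used the main model

savings_pct = round((main_model_cost - actual_cost) / main_model_cost * 100)
print(f"Savings: {savings_pct}%")  # → Savings: 59%
```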

Efficiency Analysis

  • Parallel tasks: 2 researchers running in parallel
  • Time saved: ~2m45s saved through parallelism

For the detailed template, see [references/statistics-template.md](references/statistics-template.md)

Agent Work Directory

Each agent has an independent work directory containing its personality configuration:
/workspace/agents/
├── pm/           # 📋 Product Manager
│   ├── SOUL.md   # Personality definition
│   └── AGENTS.md # Work specifications
├── researcher/   # 🔍 Researcher
├── coder/        # 👨‍💻 Programmer
├── writer/       # ✍️ Writer
├── designer/     # 🎨 Designer
├── analyst/      # 📊 Analyst
├── reviewer/     # 🔎 Reviewer
├── assistant/    # 💬 Assistant
└── automator/    # 🤖 Automator
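For illustration, a minimal sketch of how such a layout could be created programmatically. This is an assumption about the idea only, not the actual scripts/init_agents.py; the agent list comes from the tree above, and the placeholder file contents are hypothetical.

```python
from pathlib import Path

# Agent IDs taken from the directory tree above.
AGENTS = ["pm", "researcher", "coder", "writer", "designer",
          "analyst", "reviewer", "assistant", "automator"]

def init_agent_dirs(base_path):
    """Create one work directory per agent, each with placeholder
    SOUL.md and AGENTS.md files (sketch only; the real
    scripts/init_agents.py may behave differently)."""
    base = Path(base_path)
    for agent in AGENTS:
        agent_dir = base / agent
        agent_dir.mkdir(parents=True, exist_ok=True)
        for fname in ("SOUL.md", "AGENTS.md"):
            f = agent_dir / fname
            if not f.exists():
                # Placeholder content; real files hold the personality
                # definition and work specifications.
                f.write_text(f"# {fname} for {agent}\n", encoding="utf-8")
```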

Agent Configuration Management

Use the agent_manager.py script to manage the agent cluster:

List all agents

python3 scripts/agent_manager.py list

View agent details

python3 scripts/agent_manager.py show researcher

Add a new agent (using a template)

python3 scripts/agent_manager.py add my_agent --template researcher --name "My Agent" --emoji "🚀"

Remove an agent (backed up by default)

python3 scripts/agent_manager.py remove my_agent

Update an agent's configuration

python3 scripts/agent_manager.py update my_agent --name "New Name"

Available Templates

| Template | Description | Default Model |
|----------|-------------|---------------|
| default | General-purpose agent | claude-opus-4 |
| researcher | Research and investigation | glm-4 |
| coder | Programming and development | claude-opus-4 |
| writer | Content writing | gemini-2.5-pro |

Agent Experience Memory

Each agent can accumulate task experience to improve the quality of subsequent task execution.

Experience Record Structure

/workspace/agents/<agent_id>/
└── memory/
    ├── experience.md    # Human-readable experience records
    └── experience.json  # Structured experience data
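The exact schema of experience.json is not specified here. As an assumption only, a structured entry might carry the same fields the logger commands below take (agent, task, experience text) plus a date:

```python
import json
from datetime import date

# Hypothetical schema -- the real experience.json written by
# scripts/experience_logger.py may use different field names.
entry = {
    "date": date.today().isoformat(),
    "agent": "researcher",
    "task": "LangChain research",
    "experience": "English keywords work better for technical searches",
}
print(json.dumps(entry, ensure_ascii=False, indent=2))
```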

Using experience_logger.py

Record an experience

python3 scripts/experience_logger.py log researcher "When searching for technical information, English keywords work better" --task "LangChain research"

View an agent's experience

python3 scripts/experience_logger.py show researcher --limit 10

Generate an experience summary

python3 scripts/experience_logger.py summary researcher

Output prompt-injectable experience (for injection at spawn time)

python3 scripts/experience_logger.py inject researcher --limit 5

Using Experience in Tasks

Method 1: Inject experience into the task description

python

Get historical experience

import subprocess

result = subprocess.run(
    ["python3", "scripts/experience_logger.py", "inject", "researcher", "--limit", "5"],
    capture_output=True, text=True
)
experiences = result.stdout

Inject during spawn

sessions_spawn({
  task: f"""Search for xxx information...

{experiences}""",
  agentId: "researcher"
})

**Method 2: The agent actively reads its experience**

Add guidelines to the agent's AGENTS.md:

markdown

Pre-Task Preparation

Before executing a task, read the historical experience in memory/experience.md.

Post-Task Summary

After completing a task, summarize 1-3 useful experiences and record them in memory/experience.md.

Experience Record Best Practices

Good experience records:
  • Specific and actionable: "Adding a language:python filter when searching GitHub gives more precise results"
  • Causal: "JSON output is easier for downstream processing than plain text"
  • Targeted: "Read large files in chunks to avoid memory overflow"
Records to avoid:
  • Too general: "Work carefully"
  • Too specific: "User A likes blue" (unless it's a personalized agent)
  • Duplicates of existing content: "Output in Markdown format" (already in AGENTS.md)

Automatic Experience Summary (Recommended)

Add the following to the end of each agent's AGENTS.md:

markdown

After Task Completion

  1. Check whether the output meets the requirements
  2. Summarize 1-3 useful experiences from this task
  3. Append them to memory/experience.md in the format:
    • [YYYY-MM-DD] Experience description (Task name)

This way the agent automatically summarizes its experience after completing a task, with no manual intervention needed.
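The append step in point 3 can be sketched as a small helper. The `append_experience` name and directory-creation behavior are illustrative assumptions; only the entry format comes from the documentation above.

```python
from datetime import date
from pathlib import Path

def append_experience(memory_file, description, task_name):
    """Append one entry to memory/experience.md in the documented
    format: [YYYY-MM-DD] Experience description (Task name)."""
    entry = f"[{date.today().isoformat()}] {description} ({task_name})\n"
    path = Path(memory_file)
    # Create the memory/ directory on first use (assumed convenience).
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a", encoding="utf-8") as f:
        f.write(entry)
    return entry
```

For example, `append_experience("/workspace/agents/researcher/memory/experience.md", "English keywords work better", "LangChain research")` appends one dated line.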

Configuration and Deployment

To configure a new agent team or add new models, see references/setup-guide.md.
Use the initialization script to quickly create the work directories:

bash
python3 scripts/init_agents.py --base-path /workspace/agents