# launch-sub-agent

<task> Launch a focused sub-agent to execute the provided task. Analyze the task to intelligently select the optimal model and agent configuration, then dispatch a sub-agent with Zero-shot Chain-of-Thought reasoning at the beginning and mandatory self-critique verification at the end. </task> <context> This command implements the **Supervisor/Orchestrator pattern** from multi-agent architectures where you (the orchestrator) dispatch focused sub-agents with isolated context. The primary benefit is **context isolation** - each sub-agent operates in a clean context window focused on its specific task without accumulated context pollution. </context>

## Process

### Phase 1: Task Analysis with Zero-shot CoT

Before dispatching, analyze the task systematically. Think through step by step:
Let me analyze this task step by step to determine the optimal configuration:

1. **Task Type Identification**
   "What type of work is being requested?"
   - Code implementation / feature development
   - Research / investigation / comparison
   - Documentation / technical writing
   - Code review / quality analysis
   - Architecture / system design
   - Testing / validation
   - Simple transformation / lookup

2. **Complexity Assessment**
   "How complex is the reasoning required?"
   - High: Architecture decisions, novel problem-solving, multi-faceted analysis
   - Medium: Standard implementation following patterns, moderate research
   - Low: Simple transformations, lookups, well-defined single-step tasks

3. **Output Size Estimation**
   "How extensive is the expected output?"
   - Large: Multiple files, comprehensive documentation, extensive analysis
   - Medium: Single feature, focused deliverable
   - Small: Quick answer, minor change, brief output

4. **Domain Expertise Check**
   "Does this task match a specialized agent profile?"
   - Development: code, implement, feature, endpoint, TDD, tests
   - Research: investigate, compare, evaluate, options, library
   - Documentation: document, README, guide, explain, tutorial
   - Architecture: design, system, structure, scalability
   - Exploration: understand, navigate, find, codebase patterns
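The domain check above can be sketched as a simple keyword matcher. This is illustrative only: the `DOMAIN_KEYWORDS` map and the scoring function are assumptions for the sketch, not part of any real plugin API; in practice the orchestrating model makes this judgment itself.

```python
# Hypothetical keyword map mirroring the domain-expertise check above.
DOMAIN_KEYWORDS = {
    "development": {"code", "implement", "feature", "endpoint", "tdd", "tests"},
    "research": {"investigate", "compare", "evaluate", "options", "library"},
    "documentation": {"document", "readme", "guide", "explain", "tutorial"},
    "architecture": {"design", "system", "structure", "scalability"},
    "exploration": {"understand", "navigate", "find", "codebase"},
}

def match_domain(task: str):
    """Return the domain whose keywords best match the task, or None."""
    words = set(task.lower().split())
    # Score each domain by how many of its keywords appear in the task.
    scores = {d: len(kw & words) for d, kw in DOMAIN_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```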

### Phase 2: Model Selection

Select the optimal model based on task analysis:
| Task Profile | Recommended Model | Rationale |
|---|---|---|
| Complex reasoning (architecture, design, critical decisions) | `opus` | Maximum reasoning capability |
| Specialized domain (matches agent profile) | Opus + Specialized Agent | Domain expertise + reasoning power |
| Non-complex but long (extensive docs, verbose output) | `sonnet[1m]` | Good capability, cost-efficient for length |
| Simple and short (trivial tasks, quick lookups) | `haiku` | Fast, cost-effective for easy tasks |
| Default (when uncertain) | `opus` | Optimize for quality over cost |
**Decision Tree:**

```
Is task COMPLEX (architecture, design, novel problem, critical decision)?
|
+-- YES --> Use Opus (highest capability)
|           |
|           +-- Does it match a specialized domain?
|               +-- YES --> Include specialized agent prompt
|               +-- NO --> Use Opus alone
|
+-- NO --> Is task SIMPLE and SHORT?
           |
           +-- YES --> Use Haiku (fast, cheap)
           |
           +-- NO --> Is output LONG but task not complex?
                      |
                      +-- YES --> Use Sonnet (balanced)
                      |
                      +-- NO --> Use Opus (default)
```
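The decision tree above can be sketched as a small routing function. This is a hedged illustration: `select_model` and its boolean inputs are hypothetical names, and in practice the complexity/size judgments come from the orchestrator's analysis rather than precomputed flags.

```python
def select_model(complex_task: bool, simple_and_short: bool,
                 long_output: bool, matches_domain: bool) -> dict:
    """Route a task to a model per the decision tree above."""
    if complex_task:
        # Highest capability; attach a specialized agent prompt
        # only when the task matches a known domain profile.
        return {"model": "opus", "specialized_agent": matches_domain}
    if simple_and_short:
        return {"model": "haiku", "specialized_agent": False}
    if long_output:
        return {"model": "sonnet", "specialized_agent": False}
    # Default: optimize for quality over cost.
    return {"model": "opus", "specialized_agent": False}
```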

### Phase 3: Specialized Agent Matching

If the task matches a specialized domain, incorporate the relevant agent prompt. Specialized agents provide domain-specific best practices, quality standards, and structured approaches that improve output quality.

**Decision:** Use a specialized agent when the task clearly benefits from domain expertise. Skip it for trivial tasks where specialization adds unnecessary overhead.

**Agents:** The available specialized agents depend on the project and the plugins installed. Common agents from the `sdd` plugin include: `sdd:developer`, `sdd:researcher`, `sdd:software-architect`, `sdd:tech-lead`, `sdd:team-lead`, `sdd:qa-engineer`, `sdd:code-explorer`, `sdd:business-analyst`. If the appropriate specialized agent is not available, fall back to a general agent without specialization.

**Integration with Model Selection:**
  • Specialized agents are combined WITH model selection, not instead of it
  • Complex task + specialized domain = Opus + Specialized Agent
  • Simple task matching a domain = Haiku without specialization (the overhead is not justified)

**Usage:**
  1. Read the agent definition
  2. Include the agent's instructions in the sub-agent prompt AFTER the CoT prefix
  3. Combine with the Zero-shot CoT prefix and the critique suffix
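The three usage steps above amount to ordered string assembly. A minimal sketch (the function name and parameters are hypothetical, not part of any tool API):

```python
def build_subagent_prompt(cot_prefix: str, task_body: str,
                          critique_suffix: str,
                          agent_instructions: str = "") -> str:
    """Assemble the sub-agent prompt: CoT prefix first, then any
    specialized-agent instructions, then the task, critique last."""
    parts = [cot_prefix]
    if agent_instructions:
        parts.append(agent_instructions)  # goes AFTER the CoT prefix
    parts += [task_body, critique_suffix]
    return "\n\n".join(parts)
```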

### Phase 4: Construct Sub-Agent Prompt

Build the sub-agent prompt with these mandatory components:

#### 4.1 Zero-shot Chain-of-Thought Prefix (REQUIRED - MUST BE FIRST)

```markdown
## Reasoning Approach

Before taking any action, you MUST think through the problem systematically.

Let's approach this step by step:

1. "Let me first understand what is being asked..."
   - What is the core objective?
   - What are the explicit requirements?
   - What constraints must I respect?

2. "Let me break this down into concrete steps..."
   - What are the major components of this task?
   - What order should I tackle them in?
   - What dependencies exist between steps?

3. "Let me consider what could go wrong..."
   - What assumptions am I making?
   - What edge cases might exist?
   - What could cause this to fail?

4. "Let me verify my approach before proceeding..."
   - Does my plan address all requirements?
   - Is there a simpler approach?
   - Am I following existing patterns?

Work through each step explicitly before implementing.
```

#### 4.2 Task Body

```markdown
<task>
{Task description from $ARGUMENTS}
</task>

<constraints>
{Any constraints inferred from the task or conversation context}
</constraints>

<context>
{Relevant context: files, patterns, requirements, codebase information}
</context>

<output>
{Expected deliverable: format, location, structure}
</output>
```

#### 4.3 Self-Critique Suffix (REQUIRED - MUST BE LAST)

```markdown
## Self-Critique Loop (MANDATORY)

Before completing, you MUST verify your work. Submitting unverified work is UNACCEPTABLE.

### 1. Generate 5 Verification Questions

Create 5 questions specific to this task that test correctness and completeness. Example questions:

| # | Verification Question | Why This Matters |
|---|---|---|
| 1 | Does my solution fully address ALL stated requirements? | Partial solutions = failed task |
| 2 | Have I verified every assumption against available evidence? | Unverified assumptions = potential failures |
| 3 | Are there edge cases or error scenarios I haven't handled? | Edge cases cause production issues |
| 4 | Does my solution follow existing patterns in the codebase? | Pattern violations create maintenance debt |
| 5 | Is my solution clear enough for someone else to understand and use? | Unclear output reduces value |

### 2. Answer Each Question with Evidence

For each question, examine your solution and provide specific evidence:

[Q1] Requirements Coverage:
  - Requirement 1: [COVERED/MISSING] - [specific evidence from solution]
  - Requirement 2: [COVERED/MISSING] - [specific evidence from solution]
  - Gap analysis: [any gaps identified]

[Q2] Assumption Verification:
  - Assumption 1: [assumption made] - [VERIFIED/UNVERIFIED] - [evidence]
  - Assumption 2: [assumption made] - [VERIFIED/UNVERIFIED] - [evidence]

[Q3] Edge Case Analysis:
  - Edge case 1: [scenario] - [HANDLED/UNHANDLED] - [how]
  - Edge case 2: [scenario] - [HANDLED/UNHANDLED] - [how]

[Q4] Pattern Adherence:
  - Pattern 1: [pattern name] - [FOLLOWED/DEVIATED] - [evidence]
  - Pattern 2: [pattern name] - [FOLLOWED/DEVIATED] - [evidence]

[Q5] Clarity Assessment:
  - Is the solution well-organized? [YES/NO]
  - Are complex parts explained? [YES/NO]
  - Could someone else use this immediately? [YES/NO]

### 3. Revise If Needed

If ANY verification question reveals a gap:

1. STOP - Do not submit incomplete work
2. FIX - Address the specific gap identified
3. RE-VERIFY - Confirm the fix resolves the issue
4. DOCUMENT - Note what was changed and why

CRITICAL: Do not submit until ALL verification questions have satisfactory answers with evidence.
```
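The revise-if-needed procedure above is a classic verify/fix/re-verify loop. A minimal sketch, assuming `verify` returns the list of failed checks and `fix` attempts to address them (both hypothetical callables supplied by the caller):

```python
def critique_loop(solution, verify, fix, max_rounds=3):
    """Run the verify/fix/re-verify loop: stop when all verification
    checks pass or the round budget is exhausted."""
    for _ in range(max_rounds):
        gaps = verify(solution)         # list of failed checks
        if not gaps:
            return solution, True       # all questions satisfied: submit
        solution = fix(solution, gaps)  # address the identified gaps
    return solution, False              # unresolved gaps: do not submit
```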

### Phase 5: Dispatch Sub-Agent

Use the Task tool to dispatch with the selected configuration:

```
Use Task tool:
- description: "Sub-agent: {brief task summary}"
- prompt: {constructed prompt with CoT prefix + task + critique suffix}
- model: {selected model - opus/sonnet/haiku}
```

**Context isolation reminder:** Pass only context relevant to this specific task. Do not pass the entire conversation history.
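The dispatch call above can be modeled as building a small payload. This is a sketch, not the actual Task tool API; the field names simply mirror the snippet above.

```python
def make_dispatch(task_summary: str, prompt: str, model: str) -> dict:
    """Shape a Task-tool-style dispatch payload (illustrative field names)."""
    if model not in {"opus", "sonnet", "haiku"}:
        raise ValueError(f"unknown model: {model}")
    return {
        "description": f"Sub-agent: {task_summary}",
        "prompt": prompt,  # CoT prefix + task body + critique suffix
        "model": model,
    }
```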

## Examples

### Example 1: Complex Architecture Task (Opus)

Input:
/launch-sub-agent Design a caching strategy for our API that handles 10k requests/second
Analysis:
  • Task type: Architecture / design
  • Complexity: High (performance requirements, system design)
  • Output size: Medium (design document)
  • Domain match: sdd:software-architect
Selection: Opus + sdd:software-architect agent
Dispatch: Task tool with Opus model, sdd:software-architect prompt, CoT prefix, critique suffix


### Example 2: Simple Documentation Update (Haiku)

Input:
/launch-sub-agent Update the README to add --verbose flag to CLI options
Analysis:
  • Task type: Documentation (simple edit)
  • Complexity: Low (single file, well-defined)
  • Output size: Small (one section)
  • Domain match: None needed (too simple)
Selection: Haiku (fast, cheap, sufficient for task)
Dispatch: Task tool with Haiku model, basic CoT prefix, basic critique suffix


### Example 3: Moderate Implementation (Sonnet + Developer)

Input:
/launch-sub-agent Implement pagination for /users endpoint following patterns in /products
Analysis:
  • Task type: Code implementation
  • Complexity: Medium (follow existing patterns)
  • Output size: Medium (implementation + tests)
  • Domain match: sdd:developer
Selection: Sonnet + sdd:developer agent (non-complex but needs domain expertise)
Dispatch: Task tool with Sonnet model, sdd:developer prompt, CoT prefix, critique suffix


### Example 4: Research Task (Opus + Researcher)

Input:
/launch-sub-agent Research authentication options for mobile app - evaluate OAuth2, SAML, passwordless
Analysis:
  • Task type: Research / comparison
  • Complexity: High (comparative analysis, recommendations)
  • Output size: Large (comprehensive research)
  • Domain match: sdd:researcher
Selection: Opus + sdd:researcher agent
Dispatch: Task tool with Opus model, sdd:researcher prompt, CoT prefix, critique suffix

## Best Practices

### Context Isolation

  • Pass only context relevant to the specific task
  • Avoid passing entire conversation history
  • Let sub-agent discover codebase patterns through tools
  • Use file paths and references rather than embedding large content

### Model Selection

  • When in doubt, use Opus (quality over cost)
  • Use Haiku only for truly trivial tasks
  • Use Sonnet for "grunt work" - needs capability but not genius
  • Production code always deserves Opus

### Specialized Agents

  • Use when domain expertise clearly improves quality
  • Combine with CoT and critique patterns
  • Don't force specialization on general tasks

### Quality Gates

  • Self-critique loop is non-negotiable
  • Sub-agents must answer verification questions before completing
  • Review sub-agent output before accepting