team-arch-opt


Auto Mode

When `--yes` or `-y` is passed: auto-confirm task decomposition, skip interactive validation, use defaults.

Team Architecture Optimization


Usage


```bash
$team-arch-opt "Refactor the auth module to reduce coupling and eliminate circular dependencies"
$team-arch-opt -c 4 "Analyze and fix God Classes across the service layer"
$team-arch-opt -y "Remove dead code and clean up barrel exports in src/utils"
$team-arch-opt --continue "tao-refactor-auth-20260308"
```

Flags:
  • `-y, --yes`: Skip all confirmations (auto mode)
  • `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
  • `--continue`: Resume existing session

Output Directory: `.workflow/.csv-wave/{session-id}/`

Core Output: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)


Overview


Orchestrate multi-agent architecture optimization: analyze codebase structure, design refactoring plan, implement changes, validate improvements, review code quality. The pipeline has five domain roles (analyzer, designer, refactorer, validator, reviewer) mapped to CSV wave stages with an interactive review-fix cycle.
Execution Model: Hybrid -- CSV wave pipeline (primary) + individual agent spawn (secondary)
+-------------------------------------------------------------------+
|           TEAM ARCHITECTURE OPTIMIZATION WORKFLOW                   |
+-------------------------------------------------------------------+
|                                                                     |
|  Phase 0: Pre-Wave Interactive (Requirement Clarification)          |
|     +- Parse user task description                                  |
|     +- Detect scope: targeted module vs full architecture           |
|     +- Clarify ambiguous requirements (AskUserQuestion)             |
|     +- Output: refined requirements for decomposition               |
|                                                                     |
|  Phase 1: Requirement -> CSV + Classification                       |
|     +- Identify architecture issues to target                       |
|     +- Build 5-stage pipeline (analyze->design->refactor->validate  |
|     |  +review)                                                     |
|     +- Classify tasks: csv-wave | interactive (exec_mode)           |
|     +- Compute dependency waves (topological sort)                  |
|     +- Generate tasks.csv with wave + exec_mode columns             |
|     +- User validates task breakdown (skip if -y)                   |
|                                                                     |
|  Phase 2: Wave Execution Engine (Extended)                          |
|     +- For each wave (1..N):                                        |
|     |   +- Execute pre-wave interactive tasks (if any)              |
|     |   +- Build wave CSV (filter csv-wave tasks for this wave)     |
|     |   +- Inject previous findings into prev_context column        |
|     |   +- spawn_agents_on_csv(wave CSV)                            |
|     |   +- Execute post-wave interactive tasks (if any)             |
|     |   +- Merge all results into master tasks.csv                  |
|     |   +- Check: any failed? -> skip dependents                    |
|     +- discoveries.ndjson shared across all modes (append-only)     |
|     +- Review-fix cycle: max 3 iterations per branch               |
|                                                                     |
|  Phase 3: Post-Wave Interactive (Completion Action)                 |
|     +- Pipeline completion report with improvement metrics          |
|     +- Interactive completion choice (Archive/Keep/Export)           |
|     +- Final aggregation / report                                   |
|                                                                     |
|  Phase 4: Results Aggregation                                       |
|     +- Export final results.csv                                     |
|     +- Generate context.md with all findings                        |
|     +- Display summary: completed/failed/skipped per wave           |
|     +- Offer: view results | retry failed | done                    |
|                                                                     |
+-------------------------------------------------------------------+


Pipeline Definition


Stage 1           Stage 2           Stage 3           Stage 4
ANALYZE-001  -->  DESIGN-001  -->  REFACTOR-001 --> VALIDATE-001
[analyzer]        [designer]       [refactorer]     [validator]
                                       ^                |
                                       +<-- FIX-001 ----+
                                       |           REVIEW-001
                                       +<-------->  [reviewer]
                                              (max 3 iterations)


Task Classification Rules


Each task is classified by `exec_mode`:

| exec_mode | Mechanism | Criteria |
|---|---|---|
| `csv-wave` | `spawn_agents_on_csv` | One-shot, structured I/O, no multi-round interaction |
| `interactive` | `spawn_agent` / `wait` / `send_input` / `close_agent` | Multi-round, revision cycles, user checkpoints |

Classification Decision:

| Task Property | Classification |
|---|---|
| Architecture analysis (single-pass scan) | `csv-wave` |
| Refactoring plan design (single-pass) | `csv-wave` |
| Code refactoring implementation | `csv-wave` |
| Validation (build, test, metrics) | `csv-wave` |
| Code review (single-pass) | `csv-wave` |
| Review-fix cycle (iterative revision) | `interactive` |
| User checkpoint (plan approval) | `interactive` |
| Discussion round (DISCUSS-REFACTOR, DISCUSS-REVIEW) | `interactive` |
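As a sketch, the classification above can be keyed off the task ID prefix. The `classifyExecMode` helper name and the exact prefix list are assumptions for illustration, not part of this document's API:

```javascript
// Hypothetical helper mirroring the classification table: pipeline prefixes
// (ANALYZE/DESIGN/REFACTOR/VALIDATE/REVIEW) run as csv-wave; FIX, user
// CHECKPOINT, and DISCUSS-* rounds run interactively.
const INTERACTIVE_PREFIXES = ['FIX', 'CHECKPOINT', 'DISCUSS'];

function classifyExecMode(taskId) {
  const prefix = taskId.split('-')[0];
  return INTERACTIVE_PREFIXES.includes(prefix) ? 'interactive' : 'csv-wave';
}
```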


CSV Schema


tasks.csv (Master State)


```csv
id,title,description,role,issue_type,priority,target_files,deps,context_from,exec_mode,wave,status,findings,verdict,artifacts_produced,error
"ANALYZE-001","Analyze architecture","Analyze codebase architecture to identify structural issues: cycles, coupling, cohesion, God Classes, dead code, API bloat. Produce baseline metrics and ranked report.","analyzer","","","","","","csv-wave","1","pending","","","",""
"DESIGN-001","Design refactoring plan","Analyze architecture report to design prioritized refactoring plan with strategies, expected improvements, and risk assessments.","designer","","","","ANALYZE-001","ANALYZE-001","csv-wave","2","pending","","","",""
"REFACTOR-001","Implement refactorings","Implement architecture refactoring changes following design plan in priority order (P0 first).","refactorer","","","","DESIGN-001","DESIGN-001","csv-wave","3","pending","","","",""
"VALIDATE-001","Validate changes","Validate refactoring: build checks, test suite, dependency metrics, API compatibility.","validator","","","","REFACTOR-001","REFACTOR-001","csv-wave","4","pending","","PASS","",""
"REVIEW-001","Review refactoring code","Review refactoring changes for correctness, patterns, completeness, migration safety, best practices.","reviewer","","","","REFACTOR-001","REFACTOR-001","csv-wave","4","pending","","APPROVE","",""
```

Columns:

| Column | Phase | Description |
|---|---|---|
| `id` | Input | Unique task identifier (PREFIX-NNN format) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description (self-contained) |
| `role` | Input | Worker role: analyzer, designer, refactorer, validator, reviewer |
| `issue_type` | Input | Architecture issue category: CYCLE, COUPLING, COHESION, GOD_CLASS, DUPLICATION, LAYER_VIOLATION, DEAD_CODE, API_BLOAT |
| `priority` | Input | P0 (Critical), P1 (High), P2 (Medium), P3 (Low) |
| `target_files` | Input | Semicolon-separated file paths to focus on |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` -> `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `verdict` | Output | Validation/review verdict: PASS, WARN, FAIL, APPROVE, REVISE, REJECT |
| `artifacts_produced` | Output | Semicolon-separated paths of produced artifacts |
| `error` | Output | Error message if failed (empty if success) |
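The execution code later in this document calls `parseCsv` and `toCsv` helpers that it never defines. A minimal sketch that round-trips the quoted-field format shown above (the helper names and behavior are assumptions; multi-line quoted fields are not handled):

```javascript
// Split one CSV line, honoring double-quoted fields with commas and "" escapes.
function splitCsvLine(line) {
  const fields = [];
  let cur = '', inQuotes = false;
  for (let i = 0; i < line.length; i++) {
    const ch = line[i];
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') { cur += '"'; i++; }
      else if (ch === '"') inQuotes = false;
      else cur += ch;
    } else if (ch === '"') inQuotes = true;
    else if (ch === ',') { fields.push(cur); cur = ''; }
    else cur += ch;
  }
  fields.push(cur);
  return fields;
}

// Parse CSV text into row objects keyed by the header line.
function parseCsv(text) {
  const lines = text.trim().split('\n');
  const header = splitCsvLine(lines[0]);
  return lines.slice(1).map(line => {
    const fields = splitCsvLine(line);
    return Object.fromEntries(header.map((h, i) => [h, fields[i] ?? '']));
  });
}

// Serialize row objects back to CSV: unquoted header, quoted data fields,
// matching the sample above.
function toCsv(rows, header = Object.keys(rows[0] ?? {})) {
  const quote = v => `"${String(v ?? '').replace(/"/g, '""')}"`;
  return [header.join(','), ...rows.map(r => header.map(h => quote(r[h])).join(','))].join('\n');
}
```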

Per-Wave CSV (Temporary)


Each wave generates a temporary `wave-{N}.csv` with an extra `prev_context` column (csv-wave tasks only).
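The `prev_context` value comes from the findings of the tasks listed in `context_from`. A sketch of the `buildPrevContext` helper referenced by the Phase 2 engine; the exact output format (one `ID: findings` line per source task) is an assumption:

```javascript
// Assumed helper: gather findings from the tasks named in this task's
// context_from column (semicolon-separated IDs), skipping empty findings.
function buildPrevContext(task, allTasks) {
  const sourceIds = (task.context_from || '').split(';').filter(Boolean);
  return sourceIds
    .map(id => allTasks.find(t => t.id === id))
    .filter(t => t && t.findings)
    .map(t => `${t.id}: ${t.findings}`)
    .join('\n');
}
```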


Agent Registry (Interactive Agents)


| Agent | Role File | Pattern | Responsibility | Position |
|---|---|---|---|---|
| Plan Reviewer | agents/plan-reviewer.md | 2.3 (send_input cycle) | Review architecture report or refactoring plan at user checkpoint | pre-wave |
| Fix Cycle Handler | agents/fix-cycle-handler.md | 2.3 (send_input cycle) | Manage review-fix iteration cycle (max 3 rounds) | post-wave |
| Completion Handler | agents/completion-handler.md | 2.3 (send_input cycle) | Handle pipeline completion action (Archive/Keep/Export) | standalone |

COMPACT PROTECTION: Agent files are execution documents. When context compression occurs, you MUST immediately `Read` the corresponding agent.md to reload it.


Output Artifacts


| File | Purpose | Lifecycle |
|---|---|---|
| `tasks.csv` | Master state -- all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 4 |
| `task-analysis.json` | Phase 1 output: scope, issues, pipeline config | Created in Phase 1 |
| `artifacts/architecture-baseline.json` | Analyzer: pre-refactoring metrics | Created by analyzer |
| `artifacts/architecture-report.md` | Analyzer: ranked structural issue findings | Created by analyzer |
| `artifacts/refactoring-plan.md` | Designer: prioritized refactoring plan | Created by designer |
| `artifacts/validation-results.json` | Validator: post-refactoring validation | Created by validator |
| `artifacts/review-report.md` | Reviewer: code review findings | Created by reviewer |
| `interactive/{id}-result.json` | Results from interactive tasks | Created per interactive task |


Session Structure


.workflow/.csv-wave/{session-id}/
+-- tasks.csv                  # Master state (all tasks, both modes)
+-- results.csv                # Final results export
+-- discoveries.ndjson         # Shared discovery board (all agents)
+-- context.md                 # Human-readable report
+-- task-analysis.json         # Phase 1 analysis output
+-- wave-{N}.csv               # Temporary per-wave input (csv-wave only)
+-- artifacts/
|   +-- architecture-baseline.json   # Analyzer output
|   +-- architecture-report.md       # Analyzer output
|   +-- refactoring-plan.md          # Designer output
|   +-- validation-results.json      # Validator output
|   +-- review-report.md             # Reviewer output
+-- interactive/               # Interactive task artifacts
|   +-- {id}-result.json
+-- wisdom/
    +-- patterns.md            # Discovered patterns and conventions


Implementation


Session Initialization


```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
// Capture the optional session id passed with --continue (see Usage).
const continueMatch = $ARGUMENTS.match(/--continue(?:\s+"?([^"\s]+)"?)?/)
const continueMode = continueMatch !== null
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1], 10) : 3

// Strip all flags (including the --continue session id) from the requirement text.
const requirement = $ARGUMENTS
  .replace(/--yes|-y|--continue(?:\s+"?[^"\s]*"?)?|(?:--concurrency|-c)\s+\d+/g, '')
  .trim()

const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
// In continue mode, reuse the session id from the command line instead of deriving a new one.
const sessionId = (continueMode && continueMatch[1]) ? continueMatch[1] : `tao-${slug}-${dateStr}`
const sessionFolder = `.workflow/.csv-wave/${sessionId}`

Bash(`mkdir -p ${sessionFolder}/artifacts ${sessionFolder}/interactive ${sessionFolder}/wisdom`)

// Initialize discoveries.ndjson (append-only shared board)
Write(`${sessionFolder}/discoveries.ndjson`, '')

// Initialize wisdom
Write(`${sessionFolder}/wisdom/patterns.md`, '# Patterns & Conventions\n')
```


Phase 0: Pre-Wave Interactive (Requirement Clarification)


Objective: Parse the user task, detect architecture scope, clarify ambiguities, prepare for decomposition.

Workflow:
  1. Parse user task description from $ARGUMENTS
  2. Check for existing sessions (continue mode):
    • Scan `.workflow/.csv-wave/tao-*/tasks.csv` for sessions with pending tasks
    • If `--continue`: resume the specified or most recent session, skip to Phase 2
    • If an active session is found: ask the user whether to resume or start new
  3. Identify the architecture optimization target:

| Signal | Target |
|---|---|
| Specific file/module mentioned | Scoped refactoring |
| "coupling", "dependency", "structure", generic | Full architecture analysis |
| Specific issue (cycles, God Class, duplication) | Targeted issue resolution |

  4. Clarify if ambiguous (skip if AUTO_YES):

    ```javascript
    AskUserQuestion({
      questions: [{
        question: "Please confirm the architecture optimization scope:",
        header: "Architecture Scope",
        multiSelect: false,
        options: [
          { label: "Proceed as described", description: "Scope is clear" },
          { label: "Narrow scope", description: "Specify modules/files to focus on" },
          { label: "Add constraints", description: "Exclude areas, set priorities" }
        ]
      }]
    })
    ```

  5. Output: Refined requirement string for Phase 1

Success Criteria:
  • Refined requirements available for Phase 1 decomposition
  • Existing session detected and handled if applicable
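The scope-detection signals above could be sketched as a simple keyword check. The `detectScope` name, keyword lists, and return values are assumptions for illustration; a real implementation would use richer heuristics:

```javascript
// Hypothetical scope detector mirroring the signal table above.
// Keyword lists and labels are illustrative only.
function detectScope(requirement) {
  const text = requirement.toLowerCase();
  const issueKeywords = ['cycle', 'circular', 'god class', 'duplication', 'dead code'];
  const genericKeywords = ['coupling', 'dependency', 'structure', 'architecture'];
  if (issueKeywords.some(k => text.includes(k))) return 'targeted-issue';
  // A path-like token or the word "module" suggests a scoped target.
  if (/\b[\w-]+\/[\w.\/-]+|\bmodule\b/.test(text)) return 'scoped-refactoring';
  if (genericKeywords.some(k => text.includes(k))) return 'full-architecture';
  return 'full-architecture'; // default to the broadest analysis
}
```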


Phase 1: Requirement -> CSV + Classification


Objective: Decompose the architecture optimization task into the 5-stage pipeline tasks, assign waves, generate tasks.csv.

Decomposition Rules:
  1. Stage mapping -- architecture optimization always follows this pipeline:

| Stage | Role | Task Prefix | Wave | Description |
|---|---|---|---|---|
| 1 | analyzer | ANALYZE | 1 | Scan codebase, identify structural issues, produce baseline metrics |
| 2 | designer | DESIGN | 2 | Design refactoring plan from architecture report |
| 3 | refactorer | REFACTOR | 3 | Implement refactorings per plan priority |
| 4a | validator | VALIDATE | 4 | Validate build, tests, metrics, API compatibility |
| 4b | reviewer | REVIEW | 4 | Review refactoring code for correctness and patterns |

  2. Single-pipeline decomposition: Generate one task per stage with sequential dependencies:
    • ANALYZE-001 (wave 1, no deps)
    • DESIGN-001 (wave 2, deps: ANALYZE-001)
    • REFACTOR-001 (wave 3, deps: DESIGN-001)
    • VALIDATE-001 (wave 4, deps: REFACTOR-001)
    • REVIEW-001 (wave 4, deps: REFACTOR-001)
  3. Description enrichment: Each task description must be self-contained, with:
    • Clear goal statement
    • Input artifacts to read
    • Output artifacts to produce
    • Success criteria
    • Session folder path

Classification Rules:

| Task Property | exec_mode |
|---|---|
| ANALYZE, DESIGN, REFACTOR, VALIDATE, REVIEW (initial pass) | `csv-wave` |
| FIX tasks (review-fix cycle) | `interactive` (handled by fix-cycle-handler agent) |
Wave Computation: Kahn's BFS topological sort with depth tracking (csv-wave tasks only).
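A sketch of that wave computation: each task's wave is the 1-based depth of its longest dependency chain, found level by level with Kahn's BFS (the `computeWaves` name is an assumption; the dependency format matches the `deps` column above):

```javascript
// Assign 1-based waves via Kahn's BFS with depth tracking: a task's wave is
// one more than the max wave of its dependencies. Throws if the deps graph
// has a cycle (or references a missing task id), since no task becomes ready.
function computeWaves(tasks) {
  const deps = new Map(tasks.map(t => [t.id, (t.deps || '').split(';').filter(Boolean)]));
  const wave = new Map();
  const remaining = new Set(deps.keys());
  while (remaining.size > 0) {
    const ready = [...remaining].filter(id => deps.get(id).every(d => wave.has(d)));
    if (ready.length === 0) throw new Error('Circular dependency detected');
    for (const id of ready) {
      const depWaves = deps.get(id).map(d => wave.get(d));
      wave.set(id, depWaves.length ? Math.max(...depWaves) + 1 : 1);
      remaining.delete(id);
    }
  }
  return wave; // Map: taskId -> wave number
}
```

Applied to the single-pipeline decomposition above, this yields waves 1, 2, 3, 4, 4 and puts VALIDATE-001 and REVIEW-001 in the same wave, so they run concurrently.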
User Validation: Display task breakdown with wave + exec_mode assignment (skip if AUTO_YES).
Success Criteria:
  • tasks.csv created with valid schema, wave, and exec_mode assignments
  • task-analysis.json written with scope and pipeline config
  • No circular dependencies
  • User approved (or AUTO_YES)


Phase 2: Wave Execution Engine (Extended)


Objective: Execute tasks wave-by-wave with hybrid mechanism support and cross-wave context propagation.
```javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
let tasks = parseCsv(masterCsv)
// CSV values parse as strings -- coerce wave to a number before comparing.
for (const t of tasks) t.wave = Number(t.wave)
let maxWave = Math.max(...tasks.map(t => t.wave))

for (let wave = 1; wave <= maxWave; wave++) {
  console.log(`\nWave ${wave}/${maxWave}`)

  // 1. Separate tasks by exec_mode
  const waveTasks = tasks.filter(t => t.wave === wave && t.status === 'pending')
  const csvTasks = waveTasks.filter(t => t.exec_mode === 'csv-wave')
  const interactiveTasks = waveTasks.filter(t => t.exec_mode === 'interactive')

  // 2. Check dependencies -- skip tasks whose deps failed
  for (const task of waveTasks) {
    const depIds = (task.deps || '').split(';').filter(Boolean)
    const depStatuses = depIds.map(id => tasks.find(t => t.id === id)?.status)
    if (depStatuses.some(s => s === 'failed' || s === 'skipped')) {
      task.status = 'skipped'
      task.error = `Dependency failed: ${depIds.filter((id, i) =>
        ['failed','skipped'].includes(depStatuses[i])).join(', ')}`
    }
  }

  // 3. Execute pre-wave interactive tasks (if any)
  for (const task of interactiveTasks.filter(t => t.status === 'pending')) {
    // Determine agent file based on task type
    const agentFile = task.id.startsWith('FIX') ? 'agents/fix-cycle-handler.md' : 'agents/plan-reviewer.md'
    Read(agentFile)

    const agent = spawn_agent({
      message: `## TASK ASSIGNMENT\n\n### MANDATORY FIRST STEPS\n1. Read: ${agentFile}\n2. Read: ${sessionFolder}/discoveries.ndjson\n3. Read: .workflow/project-tech.json (if exists)\n\n---\n\nGoal: ${task.description}\nScope: ${task.title}\nSession: ${sessionFolder}\n\n### Previous Context\n${buildPrevContext(task, tasks)}`
    })
    const result = wait({ ids: [agent], timeout_ms: 600000 })
    if (result.timed_out) {
      send_input({ id: agent, message: "Please finalize and output current findings." })
      wait({ ids: [agent], timeout_ms: 120000 })
    }
    Write(`${sessionFolder}/interactive/${task.id}-result.json`, JSON.stringify({
      task_id: task.id, status: "completed", findings: parseFindings(result),
      timestamp: getUtc8ISOString()
    }))
    close_agent({ id: agent })
    task.status = 'completed'
    task.findings = parseFindings(result)
  }

  // 4. Build prev_context for csv-wave tasks
  const pendingCsvTasks = csvTasks.filter(t => t.status === 'pending')
  for (const task of pendingCsvTasks) {
    task.prev_context = buildPrevContext(task, tasks)
  }

  if (pendingCsvTasks.length > 0) {
    // 5. Write wave CSV
    Write(`${sessionFolder}/wave-${wave}.csv`, toCsv(pendingCsvTasks))

    // 6. Determine instruction -- read from instructions/agent-instruction.md
    Read('instructions/agent-instruction.md')

    // 7. Execute wave via spawn_agents_on_csv
    spawn_agents_on_csv({
      csv_path: `${sessionFolder}/wave-${wave}.csv`,
      id_column: "id",
      instruction: archOptInstruction,  // from instructions/agent-instruction.md
      max_concurrency: maxConcurrency,
      max_runtime_seconds: 900,
      output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
      output_schema: {
        type: "object",
        properties: {
          id: { type: "string" },
          status: { type: "string", enum: ["completed", "failed"] },
          findings: { type: "string" },
          verdict: { type: "string" },
          artifacts_produced: { type: "string" },
          error: { type: "string" }
        }
      }
    })

    // 8. Merge results into master CSV
    const results = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
    for (const r of results) {
      const t = tasks.find(t => t.id === r.id)
      if (t) Object.assign(t, r)
    }
  }

  // 9. Update master CSV
  Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))

  // 10. Cleanup temp files
  Bash(`rm -f ${sessionFolder}/wave-${wave}.csv ${sessionFolder}/wave-${wave}-results.csv`)

  // 11. Post-wave: check for review-fix cycle
  const validateTask = tasks.find(t => t.id.startsWith('VALIDATE') && t.wave === wave)
  const reviewTask = tasks.find(t => t.id.startsWith('REVIEW') && t.wave === wave)

  if ((validateTask?.verdict === 'FAIL' || reviewTask?.verdict === 'REVISE' || reviewTask?.verdict === 'REJECT')) {
    const fixCycleCount = tasks.filter(t => t.id.startsWith('FIX')).length
    if (fixCycleCount < 3) {
      // Create FIX task, add to tasks, re-run refactor -> validate+review cycle
      const fixId = `FIX-${String(fixCycleCount + 1).padStart(3, '0')}`
      const feedback = [validateTask?.error, reviewTask?.findings].filter(Boolean).join('\n')
      tasks.push({
        id: fixId, title: `Fix issues from review/validation cycle ${fixCycleCount + 1}`,
        description: `Fix issues found:\n${feedback}`,
        role: 'refactorer', issue_type: '', priority: 'P0', target_files: '',
        deps: '', context_from: '', exec_mode: 'interactive',
        wave: wave + 1, status: 'pending', findings: '', verdict: '',
        artifacts_produced: '', error: ''
      })
      // Extend the loop bound so the newly added fix wave actually executes.
      maxWave = Math.max(maxWave, wave + 1)
    }
  }

  // 12. Display wave summary
  const completed = waveTasks.filter(t => t.status === 'completed').length
  const failed = waveTasks.filter(t => t.status === 'failed').length
  const skipped = waveTasks.filter(t => t.status === 'skipped').length
  console.log(`Wave ${wave} Complete: ${completed} completed, ${failed} failed, ${skipped} skipped`)
}
```
Success Criteria:
  • All waves executed in order
  • Both csv-wave and interactive tasks handled per wave
  • Each wave's results merged into master CSV before next wave starts
  • Dependent tasks skipped when predecessor failed
  • Review-fix cycle handled with max 3 iterations
  • discoveries.ndjson accumulated across all waves and mechanisms
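The buildPrevContext helper is invoked above but never defined in this section. A minimal sketch, assuming each task row carries a context_from field (semicolon-separated predecessor ids) and that predecessors' findings and verdicts live in the master task list:

```javascript
// Hypothetical sketch of buildPrevContext -- assumes task rows expose
// `context_from` (semicolon-separated predecessor ids) and that the master
// list holds each predecessor's `findings`, `role`, and optional `verdict`.
function buildPrevContext(task, tasks) {
  const sourceIds = (task.context_from || '').split(';').filter(Boolean)
  const sections = sourceIds.map(id => {
    const src = tasks.find(t => t.id === id)
    if (!src || !src.findings) return null  // predecessor missing or silent
    const verdict = src.verdict ? ` (verdict: ${src.verdict})` : ''
    return `From ${src.id} [${src.role}]${verdict}:\n${src.findings}`
  }).filter(Boolean)
  return sections.length ? sections.join('\n\n') : '(no prior context)'
}
```

Because the context is rebuilt from the master CSV on every wave, it stays correct even after a resume via --continue (Core Rule 5).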

目标:逐wave执行任务,支持混合执行机制和跨wave上下文传递。
javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
let tasks = parseCsv(masterCsv)
let maxWave = Math.max(...tasks.map(t => t.wave))

for (let wave = 1; wave <= maxWave; wave++) {
  // 每轮重新计算上限,确保追加到wave+1的FIX任务也会被执行
  maxWave = Math.max(...tasks.map(t => t.wave))
  console.log(`\nWave ${wave}/${maxWave}`)

  // 1. 按exec_mode拆分任务
  const waveTasks = tasks.filter(t => t.wave === wave && t.status === 'pending')
  const csvTasks = waveTasks.filter(t => t.exec_mode === 'csv-wave')
  const interactiveTasks = waveTasks.filter(t => t.exec_mode === 'interactive')

  // 2. 检查依赖 -- 依赖失败的任务直接跳过
  for (const task of waveTasks) {
    const depIds = (task.deps || '').split(';').filter(Boolean)
    const depStatuses = depIds.map(id => tasks.find(t => t.id === id)?.status)
    if (depStatuses.some(s => s === 'failed' || s === 'skipped')) {
      task.status = 'skipped'
      task.error = `Dependency failed: ${depIds.filter((id, i) =>
        ['failed','skipped'].includes(depStatuses[i])).join(', ')}`
    }
  }

  // 3. 执行wave前交互式任务(如果存在)
  for (const task of interactiveTasks.filter(t => t.status === 'pending')) {
    // 根据任务类型确定Agent文件
    const agentFile = task.id.startsWith('FIX') ? 'agents/fix-cycle-handler.md' : 'agents/plan-reviewer.md'
    Read(agentFile)

    const agent = spawn_agent({
      message: `## TASK ASSIGNMENT\n\n### MANDATORY FIRST STEPS\n1. Read: ${agentFile}\n2. Read: ${sessionFolder}/discoveries.ndjson\n3. Read: .workflow/project-tech.json (if exists)\n\n---\n\nGoal: ${task.description}\nScope: ${task.title}\nSession: ${sessionFolder}\n\n### Previous Context\n${buildPrevContext(task, tasks)}`
    })
    let result = wait({ ids: [agent], timeout_ms: 600000 })
    if (result.timed_out) {
      send_input({ id: agent, message: "Please finalize and output current findings." })
      result = wait({ ids: [agent], timeout_ms: 120000 })  // 捕获最终输出,避免沿用超时前的旧结果
    }
    Write(`${sessionFolder}/interactive/${task.id}-result.json`, JSON.stringify({
      task_id: task.id, status: "completed", findings: parseFindings(result),
      timestamp: getUtc8ISOString()
    }))
    close_agent({ id: agent })
    task.status = 'completed'
    task.findings = parseFindings(result)
  }

  // 4. 为csv-wave任务构建prev_context
  const pendingCsvTasks = csvTasks.filter(t => t.status === 'pending')
  for (const task of pendingCsvTasks) {
    task.prev_context = buildPrevContext(task, tasks)
  }

  if (pendingCsvTasks.length > 0) {
    // 5. 写入wave CSV
    Write(`${sessionFolder}/wave-${wave}.csv`, toCsv(pendingCsvTasks))

    // 6. 读取执行指令 -- 从instructions/agent-instruction.md读取
    Read('instructions/agent-instruction.md')

    // 7. 通过spawn_agents_on_csv执行wave
    spawn_agents_on_csv({
      csv_path: `${sessionFolder}/wave-${wave}.csv`,
      id_column: "id",
      instruction: archOptInstruction,  // from instructions/agent-instruction.md
      max_concurrency: maxConcurrency,
      max_runtime_seconds: 900,
      output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
      output_schema: {
        type: "object",
        properties: {
          id: { type: "string" },
          status: { type: "string", enum: ["completed", "failed"] },
          findings: { type: "string" },
          verdict: { type: "string" },
          artifacts_produced: { type: "string" },
          error: { type: "string" }
        }
      }
    })

    // 8. 合并结果到主CSV
    const results = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
    for (const r of results) {
      const t = tasks.find(t => t.id === r.id)
      if (t) Object.assign(t, r)
    }
  }

  // 9. 更新主CSV
  Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))

  // 10. 清理临时文件
  Bash(`rm -f ${sessionFolder}/wave-${wave}.csv ${sessionFolder}/wave-${wave}-results.csv`)

  // 11. Wave后:检查是否需要审查-修复周期
  const validateTask = tasks.find(t => t.id.startsWith('VALIDATE') && t.wave === wave)
  const reviewTask = tasks.find(t => t.id.startsWith('REVIEW') && t.wave === wave)

  if (validateTask?.verdict === 'FAIL' || reviewTask?.verdict === 'REVISE' || reviewTask?.verdict === 'REJECT') {
    const fixCycleCount = tasks.filter(t => t.id.startsWith('FIX')).length
    if (fixCycleCount < 3) {
      // 创建FIX任务,添加到任务列表,重新运行重构->验证+审查周期
      const fixId = `FIX-${String(fixCycleCount + 1).padStart(3, '0')}`
      const feedback = [validateTask?.error, reviewTask?.findings].filter(Boolean).join('\n')
      tasks.push({
        id: fixId, title: `Fix issues from review/validation cycle ${fixCycleCount + 1}`,
        description: `Fix issues found:\n${feedback}`,
        role: 'refactorer', issue_type: '', priority: 'P0', target_files: '',
        deps: '', context_from: '', exec_mode: 'interactive',
        wave: wave + 1, status: 'pending', findings: '', verdict: '',
        artifacts_produced: '', error: ''
      })
    }
  }

  // 12. 展示wave执行摘要
  const completed = waveTasks.filter(t => t.status === 'completed').length
  const failed = waveTasks.filter(t => t.status === 'failed').length
  const skipped = waveTasks.filter(t => t.status === 'skipped').length
  console.log(`Wave ${wave} Complete: ${completed} completed, ${failed} failed, ${skipped} skipped`)
}
成功标准:
  • 所有wave按顺序执行完成
  • 每个wave的csv-wave和交互式任务都已处理
  • 每个wave的结果在进入下一个wave前已合并到主CSV
  • 前置任务失败时,依赖任务已被跳过
  • 审查-修复周期最多执行3次迭代
  • discoveries.ndjson已累积所有wave和执行模式的发现内容

Phase 3: Post-Wave Interactive (Completion Action)

阶段3:Wave后交互(完成动作)

Objective: Pipeline completion report with architecture improvement metrics and interactive completion choice.
javascript
// 1. Generate pipeline summary
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
const completed = tasks.filter(t => t.status === 'completed')
const failed = tasks.filter(t => t.status === 'failed')

// 2. Load improvement metrics from validation results
let improvements = ''
try {
  const validation = JSON.parse(Read(`${sessionFolder}/artifacts/validation-results.json`))
  improvements = `Architecture Improvements:\n${validation.dimensions.map(d =>
    `  ${d.name}: ${d.baseline} -> ${d.current} (${d.improvement})`).join('\n')}`
} catch {}

console.log(`
============================================
ARCHITECTURE OPTIMIZATION COMPLETE

Deliverables:
  - Architecture Baseline: artifacts/architecture-baseline.json
  - Architecture Report: artifacts/architecture-report.md
  - Refactoring Plan: artifacts/refactoring-plan.md
  - Validation Results: artifacts/validation-results.json
  - Review Report: artifacts/review-report.md

${improvements}

Pipeline: ${completed.length}/${tasks.length} tasks
Session: ${sessionFolder}
============================================
`)

// 3. Completion action
if (!AUTO_YES) {
  AskUserQuestion({
    questions: [{
      question: "Architecture optimization complete. What would you like to do?",
      header: "Completion",
      multiSelect: false,
      options: [
        { label: "Archive & Clean (Recommended)", description: "Archive session, output final summary" },
        { label: "Keep Active", description: "Keep session for follow-up work" },
        { label: "Retry Failed", description: "Re-run failed tasks" }
      ]
    }]
  })
}
Success Criteria:
  • Post-wave interactive processing complete
  • User informed of results and improvement metrics
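The "Retry Failed" completion option is not spelled out above. One plausible implementation, assuming retrying means resetting failed tasks (and their skipped dependents) back to pending before re-entering the wave loop -- resetFailedTasks is an illustrative name, not part of the skill:

```javascript
// Illustrative sketch: reset failed tasks and their skipped dependents back
// to 'pending' so the wave loop can pick them up again on a retry pass.
// Returns the number of tasks now eligible for re-execution.
function resetFailedTasks(tasks) {
  for (const t of tasks) {
    if (t.status === 'failed' || t.status === 'skipped') {
      t.status = 'pending'
      t.error = ''  // clear stale error so the retry starts clean
    }
  }
  return tasks.filter(t => t.status === 'pending').length
}
```

After the reset, the master tasks.csv would be rewritten and the Phase 2 loop re-run from the lowest wave that still has pending tasks.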

目标:输出包含架构改进指标的流水线完成报告,提供交互式完成选项。
javascript
// 1. 生成流水线摘要
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
const completed = tasks.filter(t => t.status === 'completed')
const failed = tasks.filter(t => t.status === 'failed')

// 2. 从验证结果加载改进指标
let improvements = ''
try {
  const validation = JSON.parse(Read(`${sessionFolder}/artifacts/validation-results.json`))
  improvements = `Architecture Improvements:\n${validation.dimensions.map(d =>
    `  ${d.name}: ${d.baseline} -> ${d.current} (${d.improvement})`).join('\n')}`
} catch {}

console.log(`
============================================
ARCHITECTURE OPTIMIZATION COMPLETE

Deliverables:
  - Architecture Baseline: artifacts/architecture-baseline.json
  - Architecture Report: artifacts/architecture-report.md
  - Refactoring Plan: artifacts/refactoring-plan.md
  - Validation Results: artifacts/validation-results.json
  - Review Report: artifacts/review-report.md

${improvements}

Pipeline: ${completed.length}/${tasks.length} tasks
Session: ${sessionFolder}
============================================
`)

// 3. 完成动作选择
if (!AUTO_YES) {
  AskUserQuestion({
    questions: [{
      question: "Architecture optimization complete. What would you like to do?",
      header: "Completion",
      multiSelect: false,
      options: [
        { label: "Archive & Clean (Recommended)", description: "Archive session, output final summary" },
        { label: "Keep Active", description: "Keep session for follow-up work" },
        { label: "Retry Failed", description: "Re-run failed tasks" }
      ]
    }]
  })
}
成功标准:
  • Wave后交互处理完成
  • 用户已获知执行结果和改进指标

Phase 4: Results Aggregation

阶段4:结果聚合

Objective: Generate final results and human-readable report.
javascript
// 1. Export results.csv
Bash(`cp ${sessionFolder}/tasks.csv ${sessionFolder}/results.csv`)

// 2. Generate context.md
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
let contextMd = `# Architecture Optimization Report\n\n`
contextMd += `**Session**: ${sessionId}\n`
contextMd += `**Date**: ${getUtc8ISOString().substring(0, 10)}\n\n`

contextMd += `## Summary\n`
contextMd += `| Status | Count |\n|--------|-------|\n`
contextMd += `| Completed | ${tasks.filter(t => t.status === 'completed').length} |\n`
contextMd += `| Failed | ${tasks.filter(t => t.status === 'failed').length} |\n`
contextMd += `| Skipped | ${tasks.filter(t => t.status === 'skipped').length} |\n\n`

contextMd += `## Deliverables\n\n`
contextMd += `| Artifact | Path |\n|----------|------|\n`
contextMd += `| Architecture Baseline | artifacts/architecture-baseline.json |\n`
contextMd += `| Architecture Report | artifacts/architecture-report.md |\n`
contextMd += `| Refactoring Plan | artifacts/refactoring-plan.md |\n`
contextMd += `| Validation Results | artifacts/validation-results.json |\n`
contextMd += `| Review Report | artifacts/review-report.md |\n\n`

const maxWave = Math.max(...tasks.map(t => t.wave))
contextMd += `## Wave Execution\n\n`
for (let w = 1; w <= maxWave; w++) {
  const waveTasks = tasks.filter(t => t.wave === w)
  contextMd += `### Wave ${w}\n\n`
  for (const t of waveTasks) {
    const icon = t.status === 'completed' ? '[DONE]' : t.status === 'failed' ? '[FAIL]' : '[SKIP]'
    contextMd += `${icon} **${t.title}** [${t.role}] ${t.verdict ? `(${t.verdict})` : ''} ${t.findings || ''}\n\n`
  }
}

Write(`${sessionFolder}/context.md`, contextMd)

console.log(`Results exported to: ${sessionFolder}/results.csv`)
console.log(`Report generated at: ${sessionFolder}/context.md`)
Success Criteria:
  • results.csv exported (all tasks, both modes)
  • context.md generated with deliverables list
  • Summary displayed to user
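parseCsv and toCsv are used throughout but never shown. A minimal sketch sufficient for the flat, comma-separated layout above: it handles double-quoted fields with "" escapes on a single line, while real task rows with embedded newlines (e.g. multi-line descriptions) would need full RFC 4180 handling:

```javascript
// Minimal CSV helpers -- a sketch, not a full RFC 4180 parser.
// Supports quoted fields with "" escapes; newlines inside fields are not supported.
function splitLine(line) {
  const out = []
  let cur = '', inQuotes = false
  for (let i = 0; i < line.length; i++) {
    const ch = line[i]
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') { cur += '"'; i++ }  // escaped quote
      else if (ch === '"') inQuotes = false
      else cur += ch
    } else if (ch === '"') inQuotes = true
    else if (ch === ',') { out.push(cur); cur = '' }
    else cur += ch
  }
  out.push(cur)
  return out
}

function parseCsv(text) {
  const [head, ...rows] = text.trim().split('\n')
  const cols = splitLine(head)
  return rows.filter(Boolean).map(row => {
    const cells = splitLine(row)
    return Object.fromEntries(cols.map((c, i) => [c, cells[i] ?? '']))
  })
}

function toCsv(objs) {
  const cols = Object.keys(objs[0] || {})
  const esc = v => /[",\n]/.test(String(v)) ? `"${String(v).replace(/"/g, '""')}"` : String(v)
  return [cols.join(','), ...objs.map(o => cols.map(c => esc(o[c] ?? '')).join(','))].join('\n')
}
```

Keeping these helpers symmetric (parseCsv(toCsv(rows)) round-trips) is what makes the "CSV is Source of Truth" rule safe across waves.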

目标:生成最终结果和可读性报告。
javascript
// 1. 导出results.csv
Bash(`cp ${sessionFolder}/tasks.csv ${sessionFolder}/results.csv`)

// 2. 生成context.md
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
let contextMd = `# Architecture Optimization Report\n\n`
contextMd += `**Session**: ${sessionId}\n`
contextMd += `**Date**: ${getUtc8ISOString().substring(0, 10)}\n\n`

contextMd += `## Summary\n`
contextMd += `| Status | Count |\n|--------|-------|\n`
contextMd += `| Completed | ${tasks.filter(t => t.status === 'completed').length} |\n`
contextMd += `| Failed | ${tasks.filter(t => t.status === 'failed').length} |\n`
contextMd += `| Skipped | ${tasks.filter(t => t.status === 'skipped').length} |\n\n`

contextMd += `## Deliverables\n\n`
contextMd += `| Artifact | Path |\n|----------|------|\n`
contextMd += `| Architecture Baseline | artifacts/architecture-baseline.json |\n`
contextMd += `| Architecture Report | artifacts/architecture-report.md |\n`
contextMd += `| Refactoring Plan | artifacts/refactoring-plan.md |\n`
contextMd += `| Validation Results | artifacts/validation-results.json |\n`
contextMd += `| Review Report | artifacts/review-report.md |\n\n`

const maxWave = Math.max(...tasks.map(t => t.wave))
contextMd += `## Wave Execution\n\n`
for (let w = 1; w <= maxWave; w++) {
  const waveTasks = tasks.filter(t => t.wave === w)
  contextMd += `### Wave ${w}\n\n`
  for (const t of waveTasks) {
    const icon = t.status === 'completed' ? '[DONE]' : t.status === 'failed' ? '[FAIL]' : '[SKIP]'
    contextMd += `${icon} **${t.title}** [${t.role}] ${t.verdict ? `(${t.verdict})` : ''} ${t.findings || ''}\n\n`
  }
}

Write(`${sessionFolder}/context.md`, contextMd)

console.log(`Results exported to: ${sessionFolder}/results.csv`)
console.log(`Report generated at: ${sessionFolder}/context.md`)
成功标准:
  • results.csv已导出(包含所有模式的所有任务)
  • context.md已生成,包含产物列表
  • 摘要已展示给用户

Shared Discovery Board Protocol

共享探索看板协议

All agents (csv-wave and interactive) share a single discoveries.ndjson file for cross-task knowledge exchange.
Format: One JSON object per line (NDJSON):
jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"ANALYZE-001","type":"cycle_found","data":{"modules":["auth","user"],"depth":2,"description":"Circular dependency between auth and user modules"}}
{"ts":"2026-03-08T10:05:00Z","worker":"REFACTOR-001","type":"file_modified","data":{"file":"src/auth/index.ts","change":"Extracted interface to break cycle","lines_added":15}}
Discovery Types:

| Type | Data Schema | Description |
|------|-------------|-------------|
| cycle_found | {modules, depth, description} | Circular dependency detected |
| god_class_found | {file, loc, methods, description} | God Class/Module identified |
| coupling_issue | {module, fan_in, fan_out, description} | High coupling detected |
| dead_code_found | {file, type, description} | Dead code or dead export found |
| file_modified | {file, change, lines_added} | File change recorded |
| pattern_found | {pattern_name, location, description} | Code pattern identified |
| metric_measured | {metric, value, unit, module} | Architecture metric measured |
| artifact_produced | {name, path, producer, type} | Deliverable created |
Protocol:
  1. Agents MUST read discoveries.ndjson at start of execution
  2. Agents MUST append relevant discoveries during execution
  3. Agents MUST NOT modify or delete existing entries
  4. Deduplication by {type, data.file} or {type, data.modules} key

所有Agent(csv-wave和交互式)共享单一discoveries.ndjson文件,用于跨任务知识交换。
格式:每行一个JSON对象(NDJSON):
jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"ANALYZE-001","type":"cycle_found","data":{"modules":["auth","user"],"depth":2,"description":"Circular dependency between auth and user modules"}}
{"ts":"2026-03-08T10:05:00Z","worker":"REFACTOR-001","type":"file_modified","data":{"file":"src/auth/index.ts","change":"Extracted interface to break cycle","lines_added":15}}
发现类型:

| 类型 | 数据结构 | 描述 |
|------|----------|------|
| cycle_found | {modules, depth, description} | 检测到循环依赖 |
| god_class_found | {file, loc, methods, description} | 识别到God Class/模块 |
| coupling_issue | {module, fan_in, fan_out, description} | 检测到高耦合 |
| dead_code_found | {file, type, description} | 发现死代码或无效导出 |
| file_modified | {file, change, lines_added} | 记录文件变更 |
| pattern_found | {pattern_name, location, description} | 识别到代码模式 |
| metric_measured | {metric, value, unit, module} | 完成架构指标测量 |
| artifact_produced | {name, path, producer, type} | 生成交付产物 |
协议规则:
  1. Agent在执行开始时必须读取discoveries.ndjson
  2. Agent在执行过程中必须追加相关的发现内容
  3. Agent不得修改或删除已有条目
  4. 通过{type, data.file}或{type, data.modules}键进行去重

Error Handling

错误处理

| Error | Resolution |
|-------|------------|
| Circular dependency in tasks | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input, then close if still timed out |
| Interactive agent failed | Mark as failed, skip dependents |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| Review-fix cycle exceeds 3 iterations | Escalate to user with summary of remaining issues |
| Validation fails on build | Create FIX task with compilation error details |
| Architecture baseline unavailable | Fall back to static analysis estimates |
| Continue mode: no session found | List available sessions, prompt user to select |
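The first row assumes cycles are caught while computing waves. A sketch of that computation as a Kahn-style fixpoint over the deps column -- computeWaves is an illustrative name; it assigns each task the earliest feasible wave and throws on a cycle:

```javascript
// Sketch of wave computation with cycle detection (Kahn-style fixpoint).
// Assumes tasks.csv rows expose `id` and `deps` (semicolon-separated ids).
function computeWaves(tasks) {
  const wave = new Map()
  let assigned = 0
  while (assigned < tasks.length) {
    const before = assigned
    for (const t of tasks) {
      if (wave.has(t.id)) continue
      const deps = (t.deps || '').split(';').filter(Boolean)
      if (deps.every(d => wave.has(d))) {
        // Earliest wave = one past the latest dependency (1 for roots).
        wave.set(t.id, 1 + Math.max(0, ...deps.map(d => wave.get(d))))
        assigned++
      }
    }
    if (assigned === before) {  // no progress => cycle among remaining tasks
      const stuck = tasks.filter(t => !wave.has(t.id)).map(t => t.id)
      throw new Error(`Circular dependency in tasks: ${stuck.join(', ')}`)
    }
  }
  for (const t of tasks) t.wave = wave.get(t.id)
  return tasks
}
```

Throwing before any wave runs is what lets the coordinator abort cleanly, per the table above, instead of discovering the cycle mid-pipeline.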

| 错误 | 解决方案 |
|------|----------|
| 任务间循环依赖 | 在wave计算阶段检测,输出错误信息并终止 |
| CSV Agent超时 | 在结果中标记为失败,继续执行当前wave |
| CSV Agent执行失败 | 标记为失败,跳过后续wave的依赖任务 |
| 交互式Agent超时 | 通过send_input催促收敛,仍然超时则关闭 |
| 交互式Agent执行失败 | 标记为失败,跳过依赖任务 |
| wave中所有Agent都执行失败 | 记录错误,提供重试或终止选项 |
| CSV解析错误 | 执行前验证CSV格式,展示出错行号 |
| discoveries.ndjson损坏 | 忽略格式错误的行,继续使用有效条目 |
| 审查-修复周期超过3次迭代 | 向用户上报剩余问题摘要,请求人工处理 |
| 构建验证失败 | 创建FIX任务,附带编译错误详情 |
| 架构基线不可用 | 降级使用静态分析估算值 |
| 继续模式下未找到会话 | 列出可用会话,提示用户选择 |

Core Rules

核心规则

  1. Start Immediately: First action is session initialization, then Phase 0/1
  2. Wave Order is Sacred: Never execute wave N before wave N-1 completes and results are merged
  3. CSV is Source of Truth: Master tasks.csv holds all state (both csv-wave and interactive)
  4. CSV First: Default to csv-wave for tasks; only use interactive when interaction pattern requires it
  5. Context Propagation: prev_context built from master CSV, not from memory
  6. Discovery Board is Append-Only: Never clear, modify, or recreate discoveries.ndjson -- both mechanisms share it
  7. Skip on Failure: If a dependency failed, skip the dependent task (regardless of mechanism)
  8. Max 3 Fix Cycles: Review-fix cycle capped at 3 iterations; escalate to user after
  9. Cleanup Temp Files: Remove wave-{N}.csv after results are merged
  10. DO NOT STOP: Continuous execution until all waves complete or all remaining tasks are skipped

  1. 立即启动:首个动作是会话初始化,然后进入阶段0/1
  2. Wave顺序不可更改:在wave N-1执行完成并合并结果前,绝对不能执行wave N
  3. CSV是唯一可信源:主tasks.csv存储所有状态(包含csv-wave和交互式任务)
  4. CSV优先:任务默认使用csv-wave模式,仅当交互模式要求时才使用交互式
  5. 上下文传递:prev_context从主CSV构建,而非内存数据
  6. 探索看板仅可追加:永远不要清空、修改或重建discoveries.ndjson -- 所有执行模式共享该文件
  7. 失败即跳过:如果依赖任务失败,跳过当前依赖任务(无论执行模式)
  8. 最多3次修复周期:审查-修复周期最多执行3次迭代,之后上报用户
  9. 清理临时文件:合并结果后删除wave-{N}.csv临时文件
  10. 不要停止执行:持续执行直到所有wave完成或所有剩余任务都被跳过

Coordinator Role Constraints (Main Agent)

协调者角色约束(主Agent)

CRITICAL: The coordinator (main agent executing this skill) is responsible for orchestration only, NOT implementation.
  1. Coordinator Does NOT Execute Code: The main agent MUST NOT write, modify, or implement any code directly. All implementation work is delegated to spawned team agents. The coordinator only:
    • Spawns agents with task assignments
    • Waits for agent callbacks
    • Merges results and coordinates workflow
    • Manages workflow transitions between phases
  2. Patient Waiting is Mandatory: Agent execution takes significant time (typically 10-30 minutes per phase, sometimes longer). The coordinator MUST:
    • Wait patiently for wait() calls to complete
    • NOT skip workflow steps due to perceived delays
    • NOT assume agents have failed just because they're taking time
    • Trust the timeout mechanisms defined in the skill
  3. Use send_input for Clarification: When agents need guidance or appear stuck, the coordinator MUST:
    • Use send_input() to ask questions or provide clarification
    • NOT skip the agent or move to next phase prematurely
    • Give agents opportunity to respond before escalating
    • Example: send_input({ id: agent_id, message: "Please provide status update or clarify blockers" })
  4. No Workflow Shortcuts: The coordinator MUST NOT:
    • Skip phases or stages defined in the workflow
    • Bypass required approval or review steps
    • Execute dependent tasks before prerequisites complete
    • Assume task completion without explicit agent callback
    • Make up or fabricate agent results
  5. Respect Long-Running Processes: This is a complex multi-agent workflow that requires patience:
    • Total execution time may range from 30-90 minutes or longer
    • Each phase may take 10-30 minutes depending on complexity
    • The coordinator must remain active and attentive throughout the entire process
    • Do not terminate or skip steps due to time concerns
重要:执行本技能的协调者(主Agent)仅负责流程编排,不负责具体实现。
  1. 协调者不执行代码:主Agent绝对不能直接编写、修改或实现任何代码。所有实现工作都委托给启动的团队Agent。协调者仅负责:
    • 为Agent分配任务并启动
    • 等待Agent回调
    • 合并结果并协调工作流
    • 管理阶段间的工作流跳转
  2. 必须耐心等待:Agent执行需要较长时间(通常每个阶段10-30分钟,有时更长)。协调者必须:
    • 耐心等待wait()调用完成
    • 不要因为感知到延迟就跳过工作流步骤
    • 不要仅仅因为Agent执行时间长就判定其失败
    • 信任技能中定义的超时机制
  3. 使用send_input进行澄清:当Agent需要指导或看起来卡住时,协调者必须:
    • 使用send_input()提问或提供澄清信息
    • 不要提前跳过Agent或进入下一阶段
    • 在升级处理前给Agent响应的机会
    • 示例:send_input({ id: agent_id, message: "Please provide status update or clarify blockers" })
  4. 不允许工作流捷径:协调者绝对不能:
    • 跳过工作流中定义的阶段或步骤
    • 绕过要求的审批或审查步骤
    • 在前置条件完成前执行依赖任务
    • 没有明确的Agent回调就假设任务完成
    • 编造或伪造Agent结果
  5. 尊重长运行流程:这是复杂的多Agent工作流,需要耐心:
    • 总执行时间可能在30-90分钟甚至更长
    • 根据复杂度不同,每个阶段可能需要10-30分钟
    • 协调者必须在整个执行过程中保持活跃和关注
    • 不要因为时间问题终止或跳过步骤