Workflow Req-Plan


Usage


bash
$workflow-req-plan "Implement user authentication system with OAuth and 2FA"

With mode selection


$workflow-req-plan -m progressive "Build real-time notification system"   # Layered MVP→iterations
$workflow-req-plan -m direct "Refactor payment module"                    # Topologically-sorted task sequence
$workflow-req-plan -m auto "Add data export feature"                      # Auto-select strategy

Continue existing session


$workflow-req-plan --continue "user authentication system"

Auto mode (skip all confirmations)


$workflow-req-plan -y "Implement caching layer"

Flags


-y, --yes                               Skip all confirmations (auto mode)
-c, --continue                          Continue existing session
-m, --mode <progressive|direct|auto>    Decomposition strategy (default: auto)

**Context Source**: cli-explore-agent (optional) + requirement analysis
**Output Directory**: `.workflow/.req-plan/{session-id}/`
**Core Innovation**: Requirement decomposition → issue creation via `ccw issue create`. Issues stored in `.workflow/issues/issues.jsonl` (single source of truth); wave and dependency info embedded in issue tags and `extended_context.notes`. team-planex consumes issues directly by ID or tag query.

Overview


Requirement-level layered roadmap planning. Decomposes a requirement into convergent layers or task sequences and creates issues via `ccw issue create`. Issues are the single source of truth in `.workflow/issues/issues.jsonl`; wave and dependency info is embedded in issue tags and `extended_context.notes`.
Dual Modes:
  • Progressive: Layered MVP→iterations, suitable for high-uncertainty requirements (validate first, then refine)
  • Direct: Topologically-sorted task sequence, suitable for low-uncertainty requirements (clear tasks, directly ordered)
  • Auto: Automatically selects based on uncertainty level
Core Workflow: Requirement Understanding → Strategy Selection → Context Collection (optional) → Decomposition + Issue Creation → Validation → team-planex Handoff

Execution Process


Phase 0: Initialization
   ├─ Parse arguments (--yes, --continue, --mode)
   ├─ Generate session ID (RPLAN-{slug}-{date})
   └─ Create session folder

Phase 1: Requirement Understanding & Strategy Selection
   ├─ Parse requirement: goal / constraints / stakeholders
   ├─ Assess uncertainty level
   │   ├─ High uncertainty → recommend progressive
   │   └─ Low uncertainty  → recommend direct
   ├─ ASK_USER: Confirm strategy (-m skips, -y auto-selects)
   └─ Write strategy-assessment.json + roadmap.md skeleton

Phase 2: Context Collection (Optional, Subagent)
   ├─ Detect codebase: package.json / go.mod / src / ...
   ├─ Has codebase → spawn_agent cli-explore-agent
   │   ├─ Explore relevant modules and patterns
   │   ├─ wait for completion
   │   └─ close_agent
   └─ No codebase → skip, pure requirement decomposition

Phase 3: Decomposition & Issue Creation (Inlined Agent)
   ├─ Step 3.1: CLI-Assisted Decomposition
   │   ├─ Construct CLI prompt with requirement + context + mode
   │   ├─ Execute Gemini (fallback: Qwen → manual decomposition)
   │   └─ Parse CLI output into structured records
   ├─ Step 3.2: Record Enhancement & Validation
   │   ├─ Validate each record against schema
   │   ├─ Enhance convergence criteria quality
   │   ├─ Validate dependency graph (no cycles)
   │   └─ Progressive: verify scope; Direct: verify inputs/outputs
   ├─ Step 3.3: Issue Creation & Output Generation
   │   ├─ Internal records → issue data mapping
   │   ├─ ccw issue create for each item (get ISS-xxx IDs)
   │   └─ Generate roadmap.md with issue ID references
   └─ Step 3.4: Decomposition Quality Check (MANDATORY)
       ├─ Execute CLI quality check (Gemini, Qwen fallback)
       └─ Decision: PASS / AUTO_FIX / NEEDS_REVIEW

Phase 4: Validation & team-planex Handoff
   ├─ Display decomposition results (tabular + convergence criteria)
   ├─ ASK_USER: Feedback loop (up to 5 rounds)
   └─ ASK_USER: Next steps (team-planex / wave-by-wave / view / done)

Output


.workflow/.req-plan/RPLAN-{slug}-{YYYY-MM-DD}/
├── roadmap.md                    # Human-readable roadmap with issue ID references
├── strategy-assessment.json      # Strategy assessment result
└── exploration-codebase.json     # Codebase context (optional)
| File | Phase | Description |
|------|-------|-------------|
| strategy-assessment.json | 1 | Uncertainty analysis + mode recommendation + extracted goal/constraints/stakeholders |
| roadmap.md (skeleton) | 1 | Initial skeleton with placeholders, finalized in Phase 3 |
| exploration-codebase.json | 2 | Codebase context: relevant modules, patterns, integration points (only when codebase exists) |
| roadmap.md (final) | 3 | Human-readable roadmap with issue ID references, convergence details, team-planex execution guide |

Subagent API Reference


spawn_agent


Create a new subagent with task assignment.
javascript
const agentId = spawn_agent({
  message: `
TASK ASSIGNMENT

MANDATORY FIRST STEPS (Agent Execute)
1. Read role definition: ~/.codex/agents/{agent-type}.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json

TASK CONTEXT
${taskContext}

DELIVERABLES
${deliverables}
`
})

wait


Get results from subagent (only way to retrieve results).
javascript
const result = wait({
  ids: [agentId],
  timeout_ms: 600000  // 10 minutes
})

if (result.timed_out) {
  // Handle timeout - can send_input to prompt completion
}

close_agent


Clean up subagent resources (irreversible).
javascript
close_agent({ id: agentId })
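
Taken together, the three calls form a spawn → wait → close lifecycle. A condensed sketch of the pattern used in Phase 2 below; `taskMessage` stands in for the TASK ASSIGNMENT template above:
javascript
// Lifecycle sketch: spawn → wait (with one retry prompt) → close
const agentId = spawn_agent({ message: taskMessage })
try {
  let result = wait({ ids: [agentId], timeout_ms: 600000 })
  if (result.timed_out) {
    // Nudge the agent once, then give it a shorter second window
    send_input({ id: agentId, message: 'Complete now and write your output file.' })
    result = wait({ ids: [agentId], timeout_ms: 300000 })
    if (result.timed_out) throw new Error('Agent timeout')
  }
} finally {
  close_agent({ id: agentId })  // irreversible; always clean up
}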


Implementation


Phase 0: Initialization


Step 0: Determine Project Root
bash
PROJECT_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

// Parse arguments
const args = "$ARGUMENTS"
const AUTO_YES = args.includes('--yes') || args.includes('-y')
const continueMode = args.includes('--continue') || args.includes('-c')
const modeMatch = args.match(/(?:--mode|-m)\s+(progressive|direct|auto)/)
const requestedMode = modeMatch ? modeMatch[1] : 'auto'

// Clean requirement text (remove flags)
const requirement = args
  .replace(/--yes|-y|--continue|-c|--mode\s+\w+|-m\s+\w+/g, '')
  .trim()

const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `RPLAN-${slug}-${dateStr}`
const sessionFolder = `${projectRoot}/.workflow/.req-plan/${sessionId}`

bash(`mkdir -p ${sessionFolder}`)

// Utility functions
function fileExists(p) {
  try { return bash(`test -f "${p}" && echo "yes"`).includes('yes') } catch { return false }
}
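
With the sample requirement from Usage and a hypothetical run date, the derived identifiers would look roughly like this:
javascript
// Illustrative values only — the date comes from getUtc8ISOString() at run time
// requirement:   "Implement user authentication system with OAuth and 2FA"
// slug (40 ch):   "implement-user-authentication-system-wit"
// sessionId:      "RPLAN-implement-user-authentication-system-wit-2025-01-15"
// sessionFolder:  "<projectRoot>/.workflow/.req-plan/RPLAN-implement-user-authentication-system-wit-2025-01-15"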


Phase 1: Requirement Understanding & Strategy Selection


Objective: Parse requirement, assess uncertainty, select decomposition strategy.
javascript
// 1. Parse Requirement
// - Extract core goal (what to achieve)
// - Identify constraints (tech stack, timeline, compatibility, etc.)
// - Identify stakeholders (users, admins, developers, etc.)
// - Identify keywords to determine domain

// 2. Assess Uncertainty Level
const uncertaintyFactors = {
  scope_clarity: 'low|medium|high',
  technical_risk: 'low|medium|high',
  dependency_unknown: 'low|medium|high',
  domain_familiarity: 'low|medium|high',
  requirement_stability: 'low|medium|high'
}
// high uncertainty (>=3 high) → progressive
// low uncertainty (>=3 low)  → direct
// otherwise → ask user preference
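
// A minimal sketch (an assumption, not part of the original flow) of turning the factor
// ratings above into uncertaintyLevel / recommendedMode using the thresholds just listed:
function assessUncertainty(factors) {
  const ratings = Object.values(factors)
  const highs = ratings.filter(r => r === 'high').length
  const lows = ratings.filter(r => r === 'low').length
  if (highs >= 3) return { uncertaintyLevel: 'high', recommendedMode: 'progressive' }
  if (lows >= 3) return { uncertaintyLevel: 'low', recommendedMode: 'direct' }
  return { uncertaintyLevel: 'medium', recommendedMode: 'progressive' }  // medium: ASK_USER confirms below
}
// const { uncertaintyLevel, recommendedMode } = assessUncertainty(uncertaintyFactors)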

// 3. Strategy Selection
let selectedMode
if (requestedMode !== 'auto') {
  selectedMode = requestedMode
} else if (AUTO_YES) {
  selectedMode = recommendedMode  // use defaults
} else {
  const strategyAnswer = ASK_USER([
    {
      id: "strategy", type: "select",
      prompt: `Decomposition strategy selection:\n\nUncertainty: ${uncertaintyLevel}\nRecommended: ${recommendedMode}\n\nSelect:`,
      options: [
        { label: recommendedMode === 'progressive' ? "Progressive (Recommended)" : "Progressive",
          description: "Layered MVP→iterations, validate core first" },
        { label: recommendedMode === 'direct' ? "Direct (Recommended)" : "Direct",
          description: "Topologically-sorted task sequence" }
      ]
    }
  ])  // BLOCKS (wait for user response)
  selectedMode = strategyAnswer.strategy.toLowerCase().includes('progressive') ? 'progressive' : 'direct'
}

// 4. Generate strategy-assessment.json
const strategyAssessment = {
  session_id: sessionId,
  requirement: requirement,
  timestamp: getUtc8ISOString(),
  uncertainty_factors: uncertaintyFactors,
  uncertainty_level: uncertaintyLevel,
  recommended_mode: recommendedMode,
  selected_mode: selectedMode,
  goal: extractedGoal,
  constraints: extractedConstraints,
  stakeholders: extractedStakeholders,
  domain_keywords: extractedKeywords
}
Write(`${sessionFolder}/strategy-assessment.json`, JSON.stringify(strategyAssessment, null, 2))

// 5. Initialize roadmap.md skeleton
const roadmapMdSkeleton = `# Requirement Roadmap

**Session**: ${sessionId}
**Requirement**: ${requirement}
**Strategy**: ${selectedMode}
**Status**: Planning
**Created**: ${getUtc8ISOString()}

## Strategy Assessment

- Uncertainty level: ${uncertaintyLevel}
- Decomposition mode: ${selectedMode}

## Roadmap

To be populated after Phase 3 decomposition

## Convergence Criteria Details

To be populated after Phase 3 decomposition

## Risk Items

To be populated after Phase 3 decomposition

## Next Steps

To be populated after Phase 4 validation
`

Write(`${sessionFolder}/roadmap.md`, roadmapMdSkeleton)

**Success Criteria**:
- Requirement goal, constraints, stakeholders identified
- Uncertainty level assessed
- Strategy selected (progressive or direct)
- strategy-assessment.json generated
- roadmap.md skeleton initialized

---

Phase 2: Context Collection (Optional, Subagent)


Objective: If a codebase exists, collect relevant context to enhance decomposition quality.
javascript
// 1. Detect Codebase
const hasCodebase = bash(`
  test -f package.json && echo "nodejs" ||
  test -f go.mod && echo "golang" ||
  test -f Cargo.toml && echo "rust" ||
  test -f pyproject.toml && echo "python" ||
  test -f pom.xml && echo "java" ||
  test -d src && echo "generic" ||
  echo "none"
`).trim()

// 2. Codebase Exploration (only when hasCodebase !== 'none')
let exploreAgent = null

if (hasCodebase !== 'none') {
  try {
    exploreAgent = spawn_agent({
      message: `
TASK ASSIGNMENT

MANDATORY FIRST STEPS (Agent Execute)
1. Read role definition: ~/.codex/agents/cli-explore-agent.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json (if exists)
3. Read: ${projectRoot}/.workflow/project-guidelines.json (if exists)

Task Objective
Explore codebase for requirement decomposition context.

Exploration Context
Requirement: ${requirement}
Strategy: ${selectedMode}
Project Type: ${hasCodebase}
Session: ${sessionFolder}

MANDATORY FIRST STEPS
1. Run: ccw tool exec get_modules_by_depth '{}'
2. Execute relevant searches based on requirement keywords

Exploration Focus
- Identify modules/components related to the requirement
- Find existing patterns that should be followed
- Locate integration points for new functionality
- Assess current architecture constraints

Output
Write findings to: ${sessionFolder}/exploration-codebase.json
Schema: {
  project_type: "${hasCodebase}",
  relevant_modules: [{name, path, relevance}],
  existing_patterns: [{pattern, files, description}],
  integration_points: [{location, description, risk}],
  architecture_constraints: [string],
  tech_stack: {languages, frameworks, tools},
  _metadata: {timestamp, exploration_scope}
}
`
    })

    // Wait with timeout handling
    let result = wait({ ids: [exploreAgent], timeout_ms: 600000 })

    if (result.timed_out) {
      send_input({ id: exploreAgent, message: 'Complete now and write exploration-codebase.json.' })
      result = wait({ ids: [exploreAgent], timeout_ms: 300000 })
      if (result.timed_out) throw new Error('Agent timeout')
    }
  } finally {
    if (exploreAgent) close_agent({ id: exploreAgent })
  }
}
// No codebase → skip, proceed directly to Phase 3
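
For reference, a populated exploration-codebase.json might look like the sketch below; every module, path, and pattern name is purely illustrative:
javascript
// Hypothetical example matching the schema above — not real project data
const exampleExploration = {
  project_type: "nodejs",
  relevant_modules: [{ name: "auth", path: "src/auth", relevance: "owns login and session handling" }],
  existing_patterns: [{ pattern: "controller-service-repository", files: ["src/users/user.service.ts"], description: "layered request handling" }],
  integration_points: [{ location: "src/routes/index.ts", description: "new routes are registered here", risk: "low" }],
  architecture_constraints: ["REST only", "single PostgreSQL datastore"],
  tech_stack: { languages: ["TypeScript"], frameworks: ["Express"], tools: ["Jest"] },
  _metadata: { timestamp: "2025-01-15T10:00:00+08:00", exploration_scope: "auth-related modules" }
}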

---

Phase 3: Decomposition & Issue Creation (Inlined Agent)


Objective: Execute requirement decomposition, create issues, generate execution-plan.json + issues.jsonl + roadmap.md.
CRITICAL: After creating issues, MUST execute Decomposition Quality Check (Step 3.4) using CLI analysis before proceeding to Phase 4.

Prepare Context


javascript
const strategy = JSON.parse(Read(`${sessionFolder}/strategy-assessment.json`))
let explorationContext = null
if (fileExists(`${sessionFolder}/exploration-codebase.json`)) {
  explorationContext = JSON.parse(Read(`${sessionFolder}/exploration-codebase.json`))
}

Internal Record Schemas (CLI Parsing)


These schemas are used internally for parsing CLI decomposition output. They are converted to issues in Step 3.3.
Progressive Mode - Layer Record:
javascript
{
  id: "L{n}",               // L0, L1, L2, L3
  name: string,              // Layer name: MVP / 可用 / 完善 / 优化
  goal: string,              // Layer goal (one sentence)
  scope: [string],           // Features included
  excludes: [string],        // Features explicitly excluded
  convergence: {
    criteria: [string],         // Testable conditions
    verification: string,       // How to verify
    definition_of_done: string  // Business-language completion definition
  },
  risks: [{description, probability, impact, mitigation}],
  effort: "small" | "medium" | "large",
  depends_on: ["L{n}"]
}
Direct Mode - Task Record:
javascript
{
  id: "T{n}",                // T1, T2, T3, ...
  title: string,
  type: "infrastructure" | "feature" | "enhancement" | "testing",
  scope: string,
  inputs: [string],
  outputs: [string],
  convergence: { criteria, verification, definition_of_done },
  depends_on: ["T{n}"],
  parallel_group: number      // Same group = parallelizable
}

Convergence Quality Requirements


| Field | Requirement | Bad Example | Good Example |
|-------|-------------|-------------|--------------|
| criteria[] | Testable | "The system works correctly" | "API returns 200 and the response body contains a user_id field" |
| verification | Executable | "Check it" | "jest --testPathPattern=auth && curl -s localhost:3000/health" |
| definition_of_done | Business language | "The code compiles" | "A new user can complete the full register → login → core-operation flow" |


Step 3.1: CLI-Assisted Decomposition


Progressive Mode CLI Template
bash
ccw cli -p "
PURPOSE: Decompose requirement into progressive layers (MVP→iterations) with convergence criteria
Success: 2-4 self-contained layers, each with testable convergence, no scope overlap

REQUIREMENT:
${requirement}

STRATEGY CONTEXT:
- Uncertainty: ${strategy.uncertainty_level}
- Goal: ${strategy.goal}
- Constraints: ${strategy.constraints.join(', ')}
- Stakeholders: ${strategy.stakeholders.join(', ')}

${explorationContext ? `CODEBASE CONTEXT:
- Relevant modules: ${explorationContext.relevant_modules.map(m => m.name).join(', ')}
- Existing patterns: ${explorationContext.existing_patterns.map(p => p.pattern).join(', ')}
- Architecture constraints: ${explorationContext.architecture_constraints.join(', ')}
- Tech stack: ${JSON.stringify(explorationContext.tech_stack)}` : 'NO CODEBASE (pure requirement decomposition)'}

TASK:
• Define 2-4 progressive layers from MVP to full implementation
• L0 (MVP): Minimum viable closed loop - core path works end-to-end
• L1 (Usable): Critical user paths, basic error handling
• L2 (Complete): Edge cases, performance, security hardening
• L3 (Optimized): Advanced features, observability, operations support
• Each layer: explicit scope (included) and excludes (not included)
• Each layer: convergence with testable criteria, executable verification, business-language DoD
• Risk items per layer

MODE: analysis
CONTEXT: @**/*
EXPECTED:
For each layer output:

## L{n}: {Name}
**Goal**: {one sentence}
**Scope**: {comma-separated features}
**Excludes**: {comma-separated excluded features}
**Convergence**:
- Criteria: {bullet list of testable conditions}
- Verification: {executable command or steps}
- Definition of Done: {business language sentence}
**Risk Items**: {bullet list}
**Effort**: {small|medium|large}
**Depends On**: {layer IDs or none}

CONSTRAINTS:
- Each feature belongs to exactly ONE layer (no overlap)
- Criteria must be testable (can write assertions)
- Verification must be executable (commands or explicit steps)
- Definition of Done must be understandable by non-technical stakeholders
- L0 must be a complete closed loop (end-to-end path works)
" --tool gemini --mode analysis
Direct Mode CLI Template
bash
ccw cli -p "
PURPOSE: Decompose requirement into topologically-sorted task sequence with convergence criteria
Success: Self-contained tasks with clear inputs/outputs, testable convergence, correct dependency order

REQUIREMENT:
${requirement}

STRATEGY CONTEXT:
- Goal: ${strategy.goal}
- Constraints: ${strategy.constraints.join(', ')}

${explorationContext ? `CODEBASE CONTEXT:
- Relevant modules: ${explorationContext.relevant_modules.map(m => m.name).join(', ')}
- Existing patterns: ${explorationContext.existing_patterns.map(p => p.pattern).join(', ')}
- Tech stack: ${JSON.stringify(explorationContext.tech_stack)}` : 'NO CODEBASE (pure requirement decomposition)'}

TASK:
• Decompose into vertical slices with clear boundaries
• Each task: type (infrastructure|feature|enhancement|testing)
• Each task: explicit inputs (what it needs) and outputs (what it produces)
• Each task: convergence with testable criteria, executable verification, business-language DoD
• Topological sort: respect dependency order
• Assign parallel_group numbers (same group = can run in parallel)

MODE: analysis
CONTEXT: @**/*
EXPECTED:
For each task output:

## T{n}: {Title}
**Type**: {infrastructure|feature|enhancement|testing}
**Scope**: {description}
**Inputs**: {comma-separated files/modules or 'none'}
**Outputs**: {comma-separated files/modules}
**Convergence**:
- Criteria: {bullet list of testable conditions}
- Verification: {executable command or steps}
- Definition of Done: {business language sentence}
**Depends On**: {task IDs or none}
**Parallel Group**: {number}

CONSTRAINTS:
- Inputs must come from preceding task outputs or existing resources
- No circular dependencies
- Criteria must be testable
- Verification must be executable
- Tasks in same parallel_group must be truly independent
" --tool gemini --mode analysis
CLI Fallback Chain
javascript
// Fallback chain: Gemini → Qwen → manual decomposition
try {
  cliOutput = executeCLI('gemini', prompt)
} catch (error) {
  try {
    cliOutput = executeCLI('qwen', prompt)
  } catch {
    // Manual fallback (see Fallback Decomposition below)
    records = selectedMode === 'progressive'
      ? manualProgressiveDecomposition(requirement, explorationContext)
      : manualDirectDecomposition(requirement, explorationContext)
  }
}
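
`executeCLI` is not defined in this document; a minimal sketch, assuming `bash()` throws on a non-zero exit (so the catch branches above fire) and reusing the `ccw cli` invocation shown in the templates:
javascript
// Sketch only: shell out to ccw cli with the decomposition prompt
function executeCLI(tool, prompt) {
  // JSON.stringify is a simple (assumed) way to quote the prompt for the shell
  return bash(`ccw cli -p ${JSON.stringify(prompt)} --tool ${tool} --mode analysis`)
}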
CLI Output Parsing
javascript
// Parse progressive layers from CLI output
function parseProgressiveLayers(cliOutput) {
  const layers = []
  const layerBlocks = cliOutput.split(/## L(\d+):/).slice(1)

  for (let i = 0; i < layerBlocks.length; i += 2) {
    const layerId = `L${layerBlocks[i].trim()}`
    const text = layerBlocks[i + 1]

    const nameMatch = /^(.+?)(?=\n)/.exec(text)
    const goalMatch = /\*\*Goal\*\*:\s*(.+?)(?=\n)/.exec(text)
    const scopeMatch = /\*\*Scope\*\*:\s*(.+?)(?=\n)/.exec(text)
    const excludesMatch = /\*\*Excludes\*\*:\s*(.+?)(?=\n)/.exec(text)
    const effortMatch = /\*\*Effort\*\*:\s*(.+?)(?=\n)/.exec(text)
    const dependsMatch = /\*\*Depends On\*\*:\s*(.+?)(?=\n|$)/.exec(text)
    const riskMatch = /\*\*Risk Items\*\*:\n((?:- .+?\n)*)/.exec(text)

    const convergence = parseConvergence(text)

    layers.push({
      id: layerId,
      name: nameMatch?.[1].trim() || `Layer ${layerId}`,
      goal: goalMatch?.[1].trim() || "",
      scope: scopeMatch?.[1].split(/[,,]/).map(s => s.trim()).filter(Boolean) || [],
      excludes: excludesMatch?.[1].split(/[,,]/).map(s => s.trim()).filter(Boolean) || [],
      convergence,
      risks: riskMatch
        ? riskMatch[1].split('\n').map(s => s.replace(/^- /, '').trim()).filter(Boolean)
            .map(desc => ({description: desc, probability: "Medium", impact: "Medium", mitigation: "N/A"}))
        : [],
      effort: normalizeEffort(effortMatch?.[1].trim()),
      depends_on: parseDependsOn(dependsMatch?.[1], 'L')
    })
  }

  return layers
}

// Parse direct tasks from CLI output
function parseDirectTasks(cliOutput) {
  const tasks = []
  const taskBlocks = cliOutput.split(/## T(\d+):/).slice(1)

  for (let i = 0; i < taskBlocks.length; i += 2) {
    const taskId = `T${taskBlocks[i].trim()}`
    const text = taskBlocks[i + 1]

    const titleMatch = /^(.+?)(?=\n)/.exec(text)
    const typeMatch = /\*\*Type\*\*:\s*(.+?)(?=\n)/.exec(text)
    const scopeMatch = /\*\*Scope\*\*:\s*(.+?)(?=\n)/.exec(text)
    const inputsMatch = /\*\*Inputs\*\*:\s*(.+?)(?=\n)/.exec(text)
    const outputsMatch = /\*\*Outputs\*\*:\s*(.+?)(?=\n)/.exec(text)
    const dependsMatch = /\*\*Depends On\*\*:\s*(.+?)(?=\n|$)/.exec(text)
    const groupMatch = /\*\*Parallel Group\*\*:\s*(\d+)/.exec(text)

    const convergence = parseConvergence(text)

    tasks.push({
      id: taskId,
      title: titleMatch?.[1].trim() || `Task ${taskId}`,
      type: normalizeType(typeMatch?.[1].trim()),
      scope: scopeMatch?.[1].trim() || "",
      inputs: parseList(inputsMatch?.[1]),
      outputs: parseList(outputsMatch?.[1]),
      convergence,
      depends_on: parseDependsOn(dependsMatch?.[1], 'T'),
      parallel_group: parseInt(groupMatch?.[1]) || 1
    })
  }

  return tasks
}

// Parse convergence section from a record block
function parseConvergence(text) {
  const criteriaMatch = /- Criteria:\s*((?:.+\n?)+?)(?=- Verification:)/.exec(text)
  const verificationMatch = /- Verification:\s*(.+?)(?=\n- Definition)/.exec(text)
  const dodMatch = /- Definition of Done:\s*(.+?)(?=\n\*\*|$)/.exec(text)

  const criteria = criteriaMatch
    ? criteriaMatch[1].split('\n')
        .map(s => s.replace(/^\s*[-•]\s*/, '').trim())
        .filter(s => s && !s.startsWith('Verification') && !s.startsWith('Definition'))
    : []

  return {
    criteria: criteria.length > 0 ? criteria : ["Task completed successfully"],
    verification: verificationMatch?.[1].trim() || "Manual verification",
    definition_of_done: dodMatch?.[1].trim() || "Feature works as expected"
  }
}

// Helpers
function normalizeEffort(effort) {
  if (!effort) return "medium"
  const lower = effort.toLowerCase()
  if (lower.includes('small') || lower.includes('low')) return "small"
  if (lower.includes('large') || lower.includes('high')) return "large"
  return "medium"
}

function normalizeType(type) {
  if (!type) return "feature"
  const lower = type.toLowerCase()
  if (lower.includes('infra')) return "infrastructure"
  if (lower.includes('enhance')) return "enhancement"
  if (lower.includes('test')) return "testing"
  return "feature"
}

function parseList(text) {
  if (!text || text.toLowerCase() === 'none') return []
  return text.split(/[,,]/).map(s => s.trim()).filter(Boolean)
}

function parseDependsOn(text, prefix) {
  if (!text || text.toLowerCase() === 'none' || text === '[]') return []
  const pattern = new RegExp(`${prefix}\\d+`, 'g')
  return (text.match(pattern) || [])
}
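
As a concrete illustration, a minimal CLI answer in the format these regexes expect would parse as follows (sample text and values are hypothetical):
javascript
// Hypothetical CLI output fragment in the expected format
const sample = `## L0: MVP
**Goal**: Minimum viable closed loop
**Scope**: login, session
**Excludes**: 2FA
**Convergence**:
- Criteria:
  - API returns 200 and the response body contains a user_id field
- Verification: jest --testPathPattern=auth
- Definition of Done: A new user can register and log in end to end
**Risk Items**:
- OAuth provider quota limits
**Effort**: small
**Depends On**: none
`
// parseProgressiveLayers(sample) →
// [{ id: 'L0', name: 'MVP', scope: ['login', 'session'], excludes: ['2FA'],
//    effort: 'small', depends_on: [], convergence: { criteria: [...], ... }, ... }]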
Fallback Decomposition
javascript
// Manual decomposition when CLI fails
function manualProgressiveDecomposition(requirement, context) {
  return [
    {
      id: "L0", name: "MVP", goal: "最小可用闭环",
      scope: ["核心功能"], excludes: ["高级功能", "优化"],
      convergence: {
        criteria: ["核心路径端到端可跑通"],
        verification: "手动测试核心流程",
        definition_of_done: "用户可完成一次核心操作的完整流程"
      },
      risks: [{description: "技术选型待验证", probability: "Medium", impact: "Medium", mitigation: "待评估"}],
      effort: "medium", depends_on: []
    },
    {
      id: "L1", name: "可用", goal: "关键用户路径完善",
      scope: ["错误处理", "输入校验"], excludes: ["性能优化", "监控"],
      convergence: {
        criteria: ["所有用户输入有校验", "错误场景有提示"],
        verification: "单元测试 + 手动测试错误场景",
        definition_of_done: "用户遇到问题时有清晰的引导和恢复路径"
      },
      risks: [], effort: "medium", depends_on: ["L0"]
    }
  ]
}

function manualDirectDecomposition(requirement, context) {
  return [
    {
      id: "T1", title: "基础设施搭建", type: "infrastructure",
      scope: "项目骨架和基础配置",
      inputs: [], outputs: ["project-structure"],
      convergence: {
        criteria: ["项目可构建无报错", "基础配置完成"],
        verification: "npm run build (或对应构建命令)",
        definition_of_done: "项目基础框架就绪,可开始功能开发"
      },
      depends_on: [], parallel_group: 1
    },
    {
      id: "T2", title: "核心功能实现", type: "feature",
      scope: "核心业务逻辑",
      inputs: ["project-structure"], outputs: ["core-module"],
      convergence: {
        criteria: ["核心 API/功能可调用", "返回预期结果"],
        verification: "运行核心功能测试",
        definition_of_done: "核心业务功能可正常使用"
      },
      depends_on: ["T1"], parallel_group: 2
    }
  ]
}

Step 3.2: Record Enhancement & Validation


javascript
// Validate progressive layers
function validateProgressiveLayers(layers) {
  const errors = []

  // Check scope overlap
  const allScopes = new Map()
  layers.forEach(layer => {
    layer.scope.forEach(feature => {
      if (allScopes.has(feature)) {
        errors.push(`Scope overlap: "${feature}" in both ${allScopes.get(feature)} and ${layer.id}`)
      }
      allScopes.set(feature, layer.id)
    })
  })

  // Check circular dependencies
  const cycleErrors = detectCycles(layers, 'L')
  errors.push(...cycleErrors)

  // Check convergence quality
  layers.forEach(layer => {
    errors.push(...validateConvergence(layer.id, layer.convergence))
  })

  // Check L0 is self-contained (no depends_on)
  const l0 = layers.find(l => l.id === 'L0')
  if (l0 && l0.depends_on.length > 0) {
    errors.push("L0 (MVP) should not have dependencies")
  }

  return errors
}

// Validate direct tasks
function validateDirectTasks(tasks) {
  const errors = []

  // Check inputs/outputs chain
  const availableOutputs = new Set()
  const sortedTasks = topologicalSort(tasks)

  sortedTasks.forEach(task => {
    task.inputs.forEach(input => {
      if (!availableOutputs.has(input)) {
        // Existing files are valid inputs - only warn
      }
    })
    task.outputs.forEach(output => availableOutputs.add(output))
  })

  // Check circular dependencies
  errors.push(...detectCycles(tasks, 'T'))

  // Check convergence quality
  tasks.forEach(task => {
    errors.push(...validateConvergence(task.id, task.convergence))
  })

  // Check parallel_group consistency
  const groups = new Map()
  tasks.forEach(task => {
    if (!groups.has(task.parallel_group)) groups.set(task.parallel_group, [])
    groups.get(task.parallel_group).push(task)
  })
  groups.forEach((groupTasks, groupId) => {
    if (groupTasks.length > 1) {
      const ids = new Set(groupTasks.map(t => t.id))
      groupTasks.forEach(task => {
        task.depends_on.forEach(dep => {
          if (ids.has(dep)) {
            errors.push(`Parallel group ${groupId}: ${task.id} depends on ${dep} but both in same group`)
          }
        })
      })
    }
  })

  return errors
}

// Validate convergence quality
function validateConvergence(recordId, convergence) {
  const errors = []

  const vaguePatterns = /正常|正确|可以|没问题|works|fine|good|correct/i
  convergence.criteria.forEach((criterion, i) => {
    if (vaguePatterns.test(criterion) && criterion.length < 15) {
      errors.push(`${recordId} criteria[${i}]: Too vague - "${criterion}"`)
    }
  })

  if (convergence.verification.length < 10) {
    errors.push(`${recordId} verification: Too short, needs executable steps`)
  }

  const technicalPatterns = /compile|build|lint|npm|npx|jest|tsc|eslint/i
  if (technicalPatterns.test(convergence.definition_of_done)) {
    errors.push(`${recordId} definition_of_done: Should be business language, not technical commands`)
  }

  return errors
}

// Detect circular dependencies
function detectCycles(records, prefix) {
  const errors = []
  const graph = new Map(records.map(r => [r.id, r.depends_on]))
  const visited = new Set()
  const inStack = new Set()

  function dfs(node, path) {
    if (inStack.has(node)) {
      errors.push(`Circular dependency detected: ${[...path, node].join(' → ')}`)
      return
    }
    if (visited.has(node)) return

    visited.add(node)
    inStack.add(node)
    ;(graph.get(node) || []).forEach(dep => dfs(dep, [...path, node]))
    inStack.delete(node)
  }

  records.forEach(r => {
    if (!visited.has(r.id)) dfs(r.id, [])
  })

  return errors
}

// Topological sort
function topologicalSort(tasks) {
  const result = []
  const visited = new Set()
  const taskMap = new Map(tasks.map(t => [t.id, t]))

  function visit(taskId) {
    if (visited.has(taskId)) return
    visited.add(taskId)
    const task = taskMap.get(taskId)
    if (task) {
      task.depends_on.forEach(dep => visit(dep))
      result.push(task)
    }
  }

  tasks.forEach(t => visit(t.id))
  return result
}
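
For example, a record with vague or technical wording is flagged like this (values are illustrative):
javascript
// Illustrative call showing the kinds of errors the checks above report
validateConvergence('T1', {
  criteria: ['works fine'],                    // vague and short → flagged
  verification: 'check',                       // under 10 characters → flagged
  definition_of_done: 'npm run build passes'   // technical command, not business language → flagged
})
// → [ 'T1 criteria[0]: Too vague - "works fine"',
//     'T1 verification: Too short, needs executable steps',
//     'T1 definition_of_done: Should be business language, not technical commands' ]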


Step 3.3: Issue Creation & Output Generation


3.3a: Internal Records → Issue Data Mapping
javascript
// Progressive mode: layer → issue data (issues-jsonl-schema)
function layerToIssue(layer, sessionId, timestamp) {
  const context = `## Goal\n${layer.goal}\n\n` +
    `## Scope\n${layer.scope.map(s => `- ${s}`).join('\n')}\n\n` +
    `## Excludes\n${layer.excludes.map(s => `- ${s}`).join('\n') || 'None'}\n\n` +
    `## Convergence Criteria\n${layer.convergence.criteria.map(c => `- ${c}`).join('\n')}\n\n` +
    `## Verification\n${layer.convergence.verification}\n\n` +
    `## Definition of Done\n${layer.convergence.definition_of_done}\n\n` +
    (layer.risks.length ? `## Risks\n${layer.risks.map(r => `- ${r.description} (P:${r.probability} I:${r.impact})`).join('\n')}` : '')

  const effortToPriority = { small: 4, medium: 3, large: 2 }

  return {
    title: `[${layer.name}] ${layer.goal}`,
    context: context,
    priority: effortToPriority[layer.effort] || 3,
    source: "text",
    tags: ["req-plan", "progressive", layer.name.toLowerCase(), `wave-${getWaveNum(layer)}`],
    affected_components: [],
    extended_context: {
      notes: JSON.stringify({
        session: sessionId,
        strategy: "progressive",
        layer: layer.id,
        wave: getWaveNum(layer),
        effort: layer.effort,
        depends_on_issues: [],    // Backfilled after all issues created
        original_id: layer.id
      })
    },
    lifecycle_requirements: {
      test_strategy: "integration",
      regression_scope: "affected",
      acceptance_type: "automated",
      commit_strategy: "per-task"
    }
  }
}

function getWaveNum(layer) {
  const match = layer.id.match(/L(\d+)/)
  return match ? parseInt(match[1]) + 1 : 1
}

// Direct mode: task → issue data (issues-jsonl-schema)
function taskToIssue(task, sessionId, timestamp) {
  const context = `## Scope\n${task.scope}\n\n` +
    `## Inputs\n${task.inputs.length ? task.inputs.map(i => `- ${i}`).join('\n') : 'None (starting task)'}\n\n` +
    `## Outputs\n${task.outputs.map(o => `- ${o}`).join('\n')}\n\n` +
    `## Convergence Criteria\n${task.convergence.criteria.map(c => `- ${c}`).join('\n')}\n\n` +
    `## Verification\n${task.convergence.verification}\n\n` +
    `## Definition of Done\n${task.convergence.definition_of_done}`

  return {
    title: `[${task.type}] ${task.title}`,
    context: context,
    priority: 3,
    source: "text",
    tags: ["req-plan", "direct", task.type, `wave-${task.parallel_group}`],
    affected_components: task.outputs,
    extended_context: {
      notes: JSON.stringify({
        session: sessionId,
        strategy: "direct",
        task_id: task.id,
        wave: task.parallel_group,
        parallel_group: task.parallel_group,
        depends_on_issues: [],    // Backfilled after all issues created
        original_id: task.id
      })
    },
    lifecycle_requirements: {
      test_strategy: task.type === 'testing' ? 'unit' : 'integration',
      regression_scope: "affected",
      acceptance_type: "automated",
      commit_strategy: "per-task"
    }
  }
}
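
As a concrete illustration, a hypothetical L0 layer (every field value below is invented) maps to issue data roughly as follows:

```javascript
// Hypothetical input layer; the shape mirrors what the decomposition step produces.
const sampleLayer = {
  id: 'L0',
  name: 'MVP',
  goal: 'Password login with session cookie',
  scope: ['Login form', 'Credential check', 'Session cookie'],
  excludes: ['OAuth', '2FA'],
  convergence: {
    criteria: ['User with valid credentials reaches the dashboard'],
    verification: 'Manual walkthrough of login → dashboard on a seeded account',
    definition_of_done: 'A registered user can sign in and stay signed in across page reloads'
  },
  risks: [],
  effort: 'small'
}

const issueData = layerToIssue(sampleLayer, 'RPLAN-auth-2025-01-01', '2025-01-01T00:00:00+08:00')
// issueData.title    → "[MVP] Password login with session cookie"
// issueData.priority → 4 (small → 4)
// issueData.tags     → ["req-plan", "progressive", "mvp", "wave-1"]
// issueData.extended_context.notes is a JSON string carrying session, layer, wave and effort.
```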
3.3b: Create Issues via ccw issue create
javascript
// Create issues sequentially (get formal ISS-xxx IDs)
const issueIdMap = {}   // originalId → ISS-xxx
const issueDataMap = {} // originalId → issue data (kept so notes can be backfilled later)

for (const record of records) {
  const issueData = selectedMode === 'progressive'
    ? layerToIssue(record, sessionId, getUtc8ISOString())
    : taskToIssue(record, sessionId, getUtc8ISOString())
  issueDataMap[record.id] = issueData

  // Create issue via ccw issue create
  // NOTE: the payload is passed inline in single quotes; this assumes it contains no single quotes.
  try {
    const createResult = bash(`ccw issue create --data '${JSON.stringify(issueData)}' --json`)
    const created = JSON.parse(createResult.trim())
    issueIdMap[record.id] = created.id
  } catch (error) {
    // Retry once
    try {
      const retryResult = bash(`ccw issue create --data '${JSON.stringify(issueData)}' --json`)
      const created = JSON.parse(retryResult.trim())
      issueIdMap[record.id] = created.id
    } catch {
      // Log error, skip this record, continue with remaining
      console.error(`Failed to create issue for ${record.id}`)
    }
  }
}

// Backfill depends_on_issues into extended_context.notes
for (const record of records) {
  const issueId = issueIdMap[record.id]
  if (!issueId) continue
  const deps = record.depends_on.map(d => issueIdMap[d]).filter(Boolean)
  if (deps.length > 0) {
    const currentNotes = JSON.parse(issueDataMap[record.id].extended_context.notes)
    currentNotes.depends_on_issues = deps
    bash(`ccw issue update ${issueId} --notes '${JSON.stringify(currentNotes)}'`)
  }
}
3.3c: Generate execution-plan.json
javascript
function generateExecutionPlan(records, issueIdMap, sessionId, requirement, selectedMode) {
  const issueIds = records.map(r => issueIdMap[r.id]).filter(Boolean)

  let waves
  if (selectedMode === 'progressive') {
    waves = records.filter(r => issueIdMap[r.id]).map((r, i) => ({
      wave: i + 1,
      label: r.name,
      issue_ids: [issueIdMap[r.id]],
      depends_on_waves: r.depends_on.length > 0
        ? [...new Set(r.depends_on.map(d => records.findIndex(x => x.id === d) + 1))]
        : []
    }))
  } else {
    const groups = new Map()
    records.filter(r => issueIdMap[r.id]).forEach(r => {
      const g = r.parallel_group
      if (!groups.has(g)) groups.set(g, [])
      groups.get(g).push(r)
    })

    waves = [...groups.entries()]
      .sort(([a], [b]) => a - b)
      .map(([groupNum, groupRecords]) => ({
        wave: groupNum,
        label: `Group ${groupNum}`,
        issue_ids: groupRecords.map(r => issueIdMap[r.id]),
        depends_on_waves: groupNum > 1 ? [groupNum - 1] : []
      }))
  }

  const issueDependencies = {}
  records.forEach(r => {
    if (!issueIdMap[r.id]) return
    const deps = r.depends_on.map(d => issueIdMap[d]).filter(Boolean)
    if (deps.length > 0) {
      issueDependencies[issueIdMap[r.id]] = deps
    }
  })

  return {
    session_id: sessionId,
    requirement: requirement,
    strategy: selectedMode,
    created_at: new Date().toISOString(),
    issue_ids: issueIds,
    waves: waves,
    issue_dependencies: issueDependencies
  }
}

const executionPlan = generateExecutionPlan(records, issueIdMap, sessionId, requirement, selectedMode)
Write(`${sessionFolder}/execution-plan.json`, JSON.stringify(executionPlan, null, 2))
3.3d: Generate issues.jsonl Session Copy
javascript
const sessionIssues = []
for (const originalId of Object.keys(issueIdMap)) {
  const issueId = issueIdMap[originalId]
  if (!issueId) continue
  const issueJson = bash(`ccw issue status ${issueId} --json`).trim()
  sessionIssues.push(issueJson)
}
Write(`${sessionFolder}/issues.jsonl`, sessionIssues.join('\n') + '\n')
3.3e: Generate roadmap.md (with Issue ID References)
javascript
// Progressive mode roadmap
function generateProgressiveRoadmapMd(layers, issueIdMap, input) {
  const effortToPriority = { small: 4, medium: 3, large: 2 }
  return `# 需求路线图

**Session**: ${input.sessionId}
**需求**: ${input.requirement}
**策略**: progressive
**不确定性**: ${input.strategy.uncertainty_level}
**生成时间**: ${new Date().toISOString()}

## 策略评估

- 目标: ${input.strategy.goal}
- 约束: ${input.strategy.constraints.join(', ') || '无'}
- 利益方: ${input.strategy.stakeholders.join(', ') || '无'}

## 路线图概览

| 层级 | 名称 | 目标 | 工作量 | 依赖 | Issue ID |
|------|------|------|--------|------|----------|
${layers.map(l => `| ${l.id} | ${l.name} | ${l.goal} | ${l.effort} | ${l.depends_on.length ? l.depends_on.join(', ') : '-'} | ${issueIdMap[l.id]} |`).join('\n')}

## Issue 映射

| 波次 | Issue ID | 标题 | 优先级 |
|------|----------|------|--------|
${layers.map(l => `| ${getWaveNum(l)} | ${issueIdMap[l.id]} | [${l.name}] ${l.goal} | ${effortToPriority[l.effort] || 3} |`).join('\n')}

## 各层详情

${layers.map(l => `### ${l.id}: ${l.name} (${issueIdMap[l.id]})
- 目标: ${l.goal}
- 范围: ${l.scope.join('、')}
- 排除: ${l.excludes.join('、') || '无'}
- 收敛标准:
${l.convergence.criteria.map(c => `  - ${c}`).join('\n')}
- 验证方法: ${l.convergence.verification}
- 完成定义: ${l.convergence.definition_of_done}
- 风险项: ${l.risks.length ? l.risks.map(r => `\n  - ${r.description} (概率: ${r.probability}, 影响: ${r.impact}, 缓解: ${r.mitigation})`).join('') : '无'}
- 工作量: ${l.effort}`).join('\n\n---\n\n')}

## 风险汇总

${layers.flatMap(l => l.risks.map(r => `- **${l.id}** (${issueIdMap[l.id]}): ${r.description} (概率: ${r.probability}, 影响: ${r.impact})`)).join('\n') || '无已识别风险'}

## 下一步操作

### 使用 team-planex 执行全部波次

\`\`\`
$team-planex --plan ${input.sessionFolder}/execution-plan.json
\`\`\`

### 按波次逐步执行

\`\`\`
${layers.map(l => `# Wave ${getWaveNum(l)}: ${l.name}\n$team-planex ${issueIdMap[l.id]}`).join('\n')}
\`\`\`

路线图文件: \`${input.sessionFolder}/\`
- issues.jsonl (标准 issue 格式)
- execution-plan.json (波次编排)
`
}

// Direct mode roadmap
function generateDirectRoadmapMd(tasks, issueIdMap, input) {
  const groups = new Map()
  tasks.forEach(t => {
    const g = t.parallel_group
    if (!groups.has(g)) groups.set(g, [])
    groups.get(g).push(t)
  })

  return `# 需求路线图

**Session**: ${input.sessionId}
**需求**: ${input.requirement}
**策略**: direct
**生成时间**: ${new Date().toISOString()}

## 策略评估

- 目标: ${input.strategy.goal}
- 约束: ${input.strategy.constraints.join(', ') || '无'}

## 任务序列

| 组 | ID | 标题 | 类型 | 依赖 | Issue ID |
|----|----|------|------|------|----------|
${tasks.map(t => `| ${t.parallel_group} | ${t.id} | ${t.title} | ${t.type} | ${t.depends_on.length ? t.depends_on.join(', ') : '-'} | ${issueIdMap[t.id]} |`).join('\n')}

## Issue 映射

| 波次 | Issue ID | 标题 | 优先级 |
|------|----------|------|--------|
${tasks.map(t => `| ${t.parallel_group} | ${issueIdMap[t.id]} | [${t.type}] ${t.title} | 3 |`).join('\n')}

## 各任务详情

${tasks.map(t => `### ${t.id}: ${t.title} (${issueIdMap[t.id]})
- 类型: ${t.type} | 并行组: ${t.parallel_group}
- 范围: ${t.scope}
- 输入: ${t.inputs.length ? t.inputs.join(', ') : '无(起始任务)'}
- 输出: ${t.outputs.join(', ')}
- 收敛标准:
${t.convergence.criteria.map(c => `  - ${c}`).join('\n')}
- 验证方法: ${t.convergence.verification}
- 完成定义: ${t.convergence.definition_of_done}`).join('\n\n---\n\n')}

## 下一步操作

### 使用 team-planex 执行全部波次

\`\`\`
$team-planex --plan ${input.sessionFolder}/execution-plan.json
\`\`\`

### 按波次逐步执行

\`\`\`
${[...groups.entries()].sort(([a], [b]) => a - b).map(([g, ts]) => `# Wave ${g}: Group ${g}\n$team-planex ${ts.map(t => issueIdMap[t.id]).join(' ')}`).join('\n')}
\`\`\`

路线图文件: \`${input.sessionFolder}/\`
- issues.jsonl (标准 issue 格式)
- execution-plan.json (波次编排)
`
}

// Write roadmap.md
const roadmapInput = { sessionId, requirement, sessionFolder, strategy: { ...strategy } }
const roadmapMd = selectedMode === 'progressive'
  ? generateProgressiveRoadmapMd(records, issueIdMap, roadmapInput)
  : generateDirectRoadmapMd(records, issueIdMap, roadmapInput)
Write(`${sessionFolder}/roadmap.md`, roadmapMd)

---

Step 3.4: Decomposition Quality Check (MANDATORY)


After creating issues and generating output files, the CLI quality check MUST be executed before proceeding.
Quality Dimensions

| Dimension | Check Criteria | Critical? |
|-----------|----------------|-----------|
| Requirement Coverage | All aspects of the original requirement addressed in issues | Yes |
| Convergence Quality | criteria testable, verification executable, DoD business-readable | Yes |
| Scope Integrity | Progressive: no overlap/gaps; Direct: inputs/outputs chain valid | Yes |
| Dependency Correctness | No circular deps, proper ordering, issue dependencies match | Yes |
| Effort Balance | No single issue disproportionately large | No |
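
The Effort Balance dimension is the only non-critical one. As an optional local pre-check before the CLI is invoked (a sketch only, not part of the mandatory check; the context-length proxy and the 3x threshold are assumptions):

```javascript
// Optional pre-check for the non-critical Effort Balance dimension.
// Uses context length as a rough effort proxy; the threshold is an assumption.
function checkEffortBalance(issueDataList, ratioThreshold = 3) {
  const sizes = issueDataList.map(d => d.context.length)
  const avg = sizes.reduce((a, b) => a + b, 0) / sizes.length
  const warnings = []
  issueDataList.forEach((d, i) => {
    if (sizes[i] > avg * ratioThreshold) {
      warnings.push(`Effort Balance warning: "${d.title}" is ${(sizes[i] / avg).toFixed(1)}x the average scope size`)
    }
  })
  return warnings
}

// Example: run against the issue data built in Step 3.3b before the CLI check.
// checkEffortBalance(records.map(r => issueDataMap[r.id]).filter(Boolean))
```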
CLI Quality Check
bash
ccw cli -p "
PURPOSE: Validate roadmap decomposition quality
Success: All quality dimensions pass

ORIGINAL REQUIREMENT:
${requirement}

ISSUES CREATED (${selectedMode} mode):
${issuesJsonlContent}

EXECUTION PLAN:
${JSON.stringify(executionPlan, null, 2)}

TASK:
• Requirement Coverage: Does the decomposition address ALL aspects of the original requirement?
• Convergence Quality: Are criteria testable? Is verification executable? Is DoD business-readable?
• Scope Integrity: ${selectedMode === 'progressive' ? 'No scope overlap between layers, no feature gaps' : 'Inputs/outputs chain is valid, parallel groups are correct'}
• Dependency Correctness: No circular dependencies, proper ordering, issue dependencies match
• Effort Balance: No single issue disproportionately large

MODE: analysis
CONTEXT: @**/*
EXPECTED:
Quality Check Results

Requirement Coverage: PASS|FAIL
[details]

Convergence Quality: PASS|FAIL
[details and specific issues per record]

Scope Integrity: PASS|FAIL
[details]

Dependency Correctness: PASS|FAIL
[details]

Effort Balance: PASS|FAIL
[details]

Recommendation: PASS|AUTO_FIX|NEEDS_REVIEW

Fixes (if AUTO_FIX):
[specific fixes as JSON patches]
CONSTRAINTS: Read-only validation, do not modify files
" --tool gemini --mode analysis
Auto-Fix Strategy
| Issue Type | Auto-Fix Action |
|------------|-----------------|
| Vague criteria | Replace with specific, testable conditions |
| Technical DoD | Rewrite in business language |
| Missing scope items | Add to appropriate issue context |
| Effort imbalance | Suggest split (report to user) |

After fixes, update issues via `ccw issue update` and regenerate `issues.jsonl` + `roadmap.md`.
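
A minimal sketch of applying an AUTO_FIX result, assuming the quality check returns fixes as JSON patches keyed by issue ID (the patch shape and the `auto_fixes` field are assumptions; only the `ccw issue update ... --notes` call shown in Step 3.3b is taken from this workflow):

```javascript
// Sketch: apply AUTO_FIX results returned by the quality check.
// Assumed patch shape: { "ISS-xxx": { definition_of_done: "...", criteria: ["..."] } }.
// The real output format is whatever the EXPECTED section above produces.
function applyAutoFixes(fixes, issueDataMap, issueIdMap) {
  for (const [originalId, issueId] of Object.entries(issueIdMap)) {
    const patch = fixes[issueId]
    if (!patch) continue

    // Record the fix in the issue metadata via the documented --notes flag.
    const notes = JSON.parse(issueDataMap[originalId].extended_context.notes)
    notes.auto_fixes = patch   // hypothetical field, kept only for traceability
    bash(`ccw issue update ${issueId} --notes '${JSON.stringify(notes)}'`)
    // Context-level rewrites (criteria / DoD wording) would go through whatever
    // update options ccw issue update actually exposes for the context field.
  }
}
```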


Phase 4: Validation & team-planex Handoff


Objective: Display decomposition results, collect user feedback, provide team-planex execution options.
javascript
// 1. Display Decomposition Results
const executionPlan = JSON.parse(Read(`${sessionFolder}/execution-plan.json`))
const issueIds = executionPlan.issue_ids
const waves = executionPlan.waves
Progressive Mode display:
markdown

Roadmap Overview

| Wave | Issue ID | Name | Goal | Priority |
|------|----------|------|------|----------|
| 1 | ISS-xxx | MVP | ... | 2 |
| 2 | ISS-yyy | Usable | ... | 3 |

Convergence Criteria

Wave 1 - MVP (ISS-xxx):
  • Criteria: [criteria list]
  • Verification: [verification]
  • Definition of Done: [definition_of_done]

**Direct Mode** display:
markdown

Task Sequence

| Wave | Issue ID | Title | Type | Dependencies |
|------|----------|-------|------|--------------|
| 1 | ISS-xxx | ... | infrastructure | - |
| 2 | ISS-yyy | ... | feature | ISS-xxx |

User Feedback Loop (up to 5 rounds, skipped when AUTO_YES)


javascript
if (!AUTO_YES) {
  let round = 0
  let continueLoop = true

  while (continueLoop && round < 5) {
    round++
    const feedback = ASK_USER([
      {
        id: "feedback", type: "select",
        prompt: `Roadmap validation (round ${round}):\nAny feedback on the current decomposition?`,
        options: [
          { label: "Approve", description: "Decomposition is reasonable, proceed to next steps" },
          { label: "Adjust Scope", description: "Some issue scopes need adjustment" },
          { label: "Modify Convergence", description: "Convergence criteria are not specific or testable enough" },
          { label: "Re-decompose", description: "Overall strategy or layering approach needs change" }
        ]
      }
    ])  // BLOCKS (wait for user response)

    if (feedback.feedback === 'Approve') {
      continueLoop = false
    } else {
      // Handle adjustment based on feedback type
      // After adjustment, re-display and return to loop top
    }
  }
}

Post-Completion Options


javascript
if (AUTO_YES) {
  // Auto mode: display summary and end
  console.log(`路线图已生成,${issueIds.length} 个 issues 已创建。`)
  console.log(`Session: ${sessionFolder}`)
} else {
  const nextStep = ASK_USER([
    {
      id: "next", type: "select",
      prompt: `路线图已生成,${issueIds.length} 个 issues 已创建。下一步:`,
      options: [
        { label: "Execute with team-planex",
          description: `启动 team-planex 执行全部 ${issueIds.length} 个 issues(${waves.length} 个波次)` },
        { label: "Execute first wave",
          description: `仅执行 Wave 1: ${waves[0].label}` },
        { label: "View issues",
          description: "查看已创建的 issue 详情" },
        { label: "Done",
          description: "保存路线图,稍后执行" }
      ]
    }
  ])  // BLOCKS (wait for user response)
}
| Selection | Action |
|-----------|--------|
| Execute with team-planex | `$team-planex --plan ${sessionFolder}/execution-plan.json` |
| Execute first wave | `$team-planex ${waves[0].issue_ids.join(' ')}` |
| View issues | Display an issues summary table from issues.jsonl |
| Done | Display file paths, end |

Implementation note: the orchestrator invokes team-planex internally through the `Skill(skill="team-planex", args="--plan ...")` interface; the commands above are pseudocode, not literal command-line syntax.
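
A minimal sketch of that dispatch, assuming a Skill(...) helper matching the pseudocode above is available to the orchestrator (the JS object-argument form is an adaptation of that pseudocode, not a documented signature):

```javascript
// Map the user's selection onto the next action (sketch only).
switch (nextStep.next) {
  case 'Execute with team-planex':
    Skill({ skill: 'team-planex', args: `--plan ${sessionFolder}/execution-plan.json` })
    break
  case 'Execute first wave':
    Skill({ skill: 'team-planex', args: waves[0].issue_ids.join(' ') })
    break
  case 'View issues':
    console.log(Read(`${sessionFolder}/issues.jsonl`))   // summary-table rendering omitted
    break
  default:   // 'Done'
    console.log(`Roadmap saved under ${sessionFolder}`)
}
```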


JSONL Schema Design


Issue Format (issues.jsonl)


Each line follows the standard `issues-jsonl-schema.json` (see `.ccw/workflows/cli-templates/schemas/issues-jsonl-schema.json`).

| Field | Source | Description |
|-------|--------|-------------|
| id | `ccw issue create` | Formal ISS-YYYYMMDD-NNN ID |
| title | Layer/task mapping | `[LayerName] goal` or `[TaskType] title` |
| context | Convergence fields | Markdown with goal, scope, convergence criteria, verification, DoD |
| priority | Effort mapping | small→4, medium→3, large→2 |
| source | Fixed | `"text"` |
| tags | Auto-generated | `["req-plan", mode, name/type, "wave-N"]` |
| extended_context.notes | Metadata JSON | session, strategy, original_id, wave, depends_on_issues |
| lifecycle_requirements | Fixed | test_strategy, regression_scope, acceptance_type, commit_strategy |
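
For orientation, a hypothetical issues.jsonl line is shown below as the object it encodes (field values are invented; the authoritative field set is defined by issues-jsonl-schema.json, which may include additional bookkeeping fields managed by ccw):

```javascript
// One JSONL line, shown as the object it encodes. Values are illustrative only.
const exampleIssueLine = {
  id: 'ISS-20250101-001',
  title: '[MVP] Password login with session cookie',
  context: '## Goal\n...\n\n## Convergence Criteria\n- ...',
  priority: 4,                                   // small → 4
  source: 'text',
  tags: ['req-plan', 'progressive', 'mvp', 'wave-1'],
  affected_components: [],
  extended_context: {
    notes: JSON.stringify({
      session: 'RPLAN-auth-2025-01-01',
      strategy: 'progressive',
      layer: 'L0',
      wave: 1,
      effort: 'small',
      depends_on_issues: [],
      original_id: 'L0'
    })
  },
  lifecycle_requirements: {
    test_strategy: 'integration',
    regression_scope: 'affected',
    acceptance_type: 'automated',
    commit_strategy: 'per-task'
  }
}
// Serialized with JSON.stringify(exampleIssueLine) and written as a single line of issues.jsonl.
```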

Execution Plan Format (execution-plan.json)


json
{
  "session_id": "RPLAN-{slug}-{date}",
  "requirement": "Original requirement description",
  "strategy": "progressive|direct",
  "created_at": "ISO 8601",
  "issue_ids": ["ISS-xxx", "ISS-yyy"],
  "waves": [
    {
      "wave": 1,
      "label": "MVP",
      "issue_ids": ["ISS-xxx"],
      "depends_on_waves": []
    },
    {
      "wave": 2,
      "label": "Usable",
      "issue_ids": ["ISS-yyy"],
      "depends_on_waves": [1]
    }
  ],
  "issue_dependencies": {
    "ISS-yyy": ["ISS-xxx"]
  }
}
Wave mapping:
  • Progressive mode: each layer → one wave (L0→Wave 1, L1→Wave 2, ...)
  • Direct mode: each parallel_group → one wave (group 1→Wave 1, group 2→Wave 2, ...)


Session Configuration


| Flag | Default | Description |
|------|---------|-------------|
| `-y, --yes` | false | Auto-confirm all decisions |
| `-c, --continue` | false | Continue existing session |
| `-m, --mode` | auto | Decomposition strategy: progressive / direct / auto |

Session ID format: `RPLAN-{slug}-{YYYY-MM-DD}`
  • slug: lowercase, alphanumeric + CJK characters, max 40 chars
  • date: YYYY-MM-DD (UTC+8)
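
A minimal sketch of deriving a session ID under those rules (the exact normalization the workflow applies is not specified, so the regex and helper below are assumptions):

```javascript
// Derive RPLAN-{slug}-{YYYY-MM-DD}; keeps lowercase alphanumerics and CJK, max 40 chars.
function buildSessionId(requirement, now = new Date()) {
  const slug = requirement
    .toLowerCase()
    .replace(/[^a-z0-9\u4e00-\u9fff]+/g, '-')   // assumption: other CJK ranges handled similarly
    .replace(/^-+|-+$/g, '')
    .slice(0, 40)
  // Shift to UTC+8 before taking the date, matching the workflow's stated timezone.
  const utc8 = new Date(now.getTime() + 8 * 60 * 60 * 1000)
  const date = utc8.toISOString().slice(0, 10)
  return `RPLAN-${slug}-${date}`
}

// buildSessionId('Implement user authentication system')
//   → "RPLAN-implement-user-authentication-system-<current UTC+8 date>"
```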


Error Handling


| Error | Resolution |
|-------|------------|
| cli-explore-agent timeout | Retry once with send_input prompt, then skip exploration |
| cli-explore-agent failure | Skip code exploration, proceed with pure requirement decomposition |
| No codebase | Normal flow, skip Phase 2 |
| CLI decomposition failure (Gemini) | Fall back to Qwen, then manual decomposition |
| Issue creation failure | Retry once, then skip and continue with remaining |
| Circular dependency detected | Prompt user to adjust dependencies, re-decompose |
| User feedback timeout | Save current state, display `--continue` recovery command |
| Max feedback rounds reached | Use current version to generate final artifacts |
| Session folder conflict | Append timestamp suffix |
| Quality check NEEDS_REVIEW | Report critical issues to user for manual resolution |
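
For the session-folder-conflict row, a small sketch of the timestamp-suffix resolution (the folder layout and the `exists()` helper are assumptions standing in for whatever filesystem check the orchestrator uses):

```javascript
// If the session folder already exists, append a timestamp suffix instead of overwriting.
function resolveSessionFolder(baseDir, sessionId) {
  let folder = `${baseDir}/${sessionId}`
  if (exists(folder)) {          // exists() is a placeholder for the orchestrator's FS check
    folder = `${folder}-${Date.now()}`   // millisecond timestamp as the suffix
  }
  return folder
}

// resolveSessionFolder('.workflow/.req-plan', 'RPLAN-auth-2025-01-01')
//   → '.workflow/.req-plan/RPLAN-auth-2025-01-01', or '...-{timestamp}' on conflict
```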

Core Rules


  1. Explicit Lifecycle: Always close_agent after wait completes to free resources
  2. DO NOT STOP: Continuous multi-phase workflow. After completing each phase, immediately proceed to next
  3. NEVER output vague convergence: criteria must be testable, verification executable, DoD in business language
  4. NEVER skip quality check: Step 3.4 is MANDATORY before proceeding to Phase 4
  5. ALWAYS write all three output files: issues.jsonl, execution-plan.json, roadmap.md

Best Practices


  1. Clear requirement description: A detailed description → more accurate uncertainty assessment and decomposition
  2. Validate MVP first: In progressive mode, L0 should be the minimum verifiable closed loop
  3. Testable convergence: criteria must be writable as assertions or manual steps; definition_of_done should be judgeable by non-technical stakeholders
  4. Incremental validation: Use `--continue` to iterate on existing roadmaps
  5. team-planex integration: Created issues follow the standard issues-jsonl-schema and are directly consumable by `$team-planex` via execution-plan.json

Usage Recommendations


Use $workflow-req-plan when:
  • You need to decompose a large requirement into a progressively executable roadmap
  • You are unsure where to start and need an MVP strategy
  • You need to generate a trackable task sequence
  • The requirement involves multiple stages or iterations
  • You want an automatic issue creation + team-planex execution pipeline

Use $workflow-lite-plan when:
  • You have a clear single task to execute
  • The requirement is already a layer/task from the roadmap
  • No layered planning is needed

Use $team-planex directly when:
  • Issues already exist (created manually or from other workflows)
  • An execution-plan.json from a previous req-plan session is ready

Now execute req-plan workflow for: $ARGUMENTS