knowledge-build
Knowledge Builder - Parallel KB Generation Orchestrator
Parameters
Extract these parameters from the user's input:
| Parameter | Required | Default | Description |
|---|---|---|---|
| FEATURE_ID | No | - | Feature ID to incorporate learnings from an archived feature into KB |
Environment values (resolve via shell):
- RP1_ROOT: `!rp1 agent-tools rp1-root-dir` (extract `data.root` from the JSON response)
This command orchestrates parallel knowledge base generation using a map-reduce architecture.
CRITICAL: This is an ORCHESTRATOR command, not a thin wrapper. This command must handle parallel execution coordination, result aggregation, and state management.
Architecture Overview
Phase 1 (Sequential): Spatial Analyzer -> Categorized file lists
Phase 2 (Parallel): 4 Analysis Agents -> JSON outputs (concept, arch, module, pattern)
Phase 3 (Sequential): Command -> Merge JSON -> Generate index.md -> Write KB files

Key Design: The main orchestrator generates index.md directly (not via sub-agent) because:
- It has visibility into all 4 sub-agent outputs
- It can aggregate key facts into a "jump off" entry point
- index.md must contain a file manifest with accurate line counts from the generated files
Execution Instructions
DO NOT ask for user approval. Execute immediately.
Feature Learning Mode
If FEATURE_ID is provided, this is a feature learning build that captures knowledge from an archived feature. Skip Phase 0 entirely (no git commit parsing needed).

1. Locate archived feature:
   FEATURE_PATH = {{$RP1_ROOT}}/work/archives/features/{FEATURE_ID}/
   If not found, check active features:
   FEATURE_PATH = {{$RP1_ROOT}}/work/features/{FEATURE_ID}/
   If neither exists, error:
   Feature not found: {FEATURE_ID}
   Checked:
     {{$RP1_ROOT}}/work/archives/features/{FEATURE_ID}/
     {{$RP1_ROOT}}/work/features/{FEATURE_ID}/
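The lookup order above (archived location first, then active features) can be sketched as a small shell helper. The function name `resolve_feature_path` is illustrative, not part of the command:

```shell
resolve_feature_path() {
  # $1 = RP1_ROOT, $2 = FEATURE_ID; archived location wins, per the lookup order above
  for base in "$1/work/archives/features" "$1/work/features"; do
    if [ -d "$base/$2" ]; then
      echo "$base/$2"
      return 0
    fi
  done
  echo "Feature not found: $2" >&2
  return 1
}
```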
2. Read feature documentation:
   - {FEATURE_PATH}/requirements.md - What was built
   - {FEATURE_PATH}/design.md - How it was designed
   - {FEATURE_PATH}/field-notes.md - Learnings and discoveries (if exists)
   - {FEATURE_PATH}/tasks.md - Implementation details with files modified
3. Extract files modified from tasks.md: Parse implementation summaries to build the FILES_MODIFIED list.
   Look for patterns:
   - **Files**: `src/file1.ts`, `src/file2.ts`
   - **Files Modified**: ...
   Extract all file paths into the FILES_MODIFIED array.
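A minimal sketch of that extraction, assuming tasks.md uses the `**Files**:` / `**Files Modified**:` patterns above with backticked paths. The helper name is hypothetical:

```shell
extract_files_modified() {
  # $1 = tasks.md path; pull backticked paths from "**Files**:" style lines
  grep -E '\*\*Files( Modified)?\*\*:' "$1" \
    | grep -oE '`[^`]+`' \
    | tr -d '`' \
    | sort -u
}
```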
4. Extract feature context: Build a FEATURE_CONTEXT object containing:
   - Feature ID and path
   - Key requirements (summarized)
   - Architectural decisions from design.md
   - All discoveries from field-notes.md
   - Implementation patterns used
   - files_modified: the FILES_MODIFIED array
5. Jump directly to Phase 1 (Spatial Analysis):
   - Pass FILES_MODIFIED to the spatial analyzer instead of a git diff
   - Spatial analyzer categorizes these specific files
   - No git commit comparison needed
6. Spatial analyzer prompt (Feature Learning Mode):
   FEATURE_LEARNING mode. Categorize these files modified during feature implementation:
   FILES: {{stringify(FILES_MODIFIED)}}
   Rank each file 0-5, categorize by KB section (index_files, concept_files, arch_files, module_files).
   Return JSON with categorized files.
7. Sub-agent prompts include:
   FEATURE_CONTEXT: {{stringify(feature_context)}}
   MODE: FEATURE_LEARNING
   Incorporate learnings from this completed feature:
   - Update patterns.md with implementation patterns discovered
   - Update architecture.md if new architectural patterns emerged
   - Update modules.md with new components/dependencies
   - Update concept_map.md with new domain concepts
   Focus on files that were modified: {{stringify(FILES_MODIFIED)}}
Phase 0: Change Detection and Diff Analysis
NOTE: Skip this phase entirely if FEATURE_ID is provided (Feature Learning Mode).
1. Check for existing KB state:
   - Check if {{$RP1_ROOT}}/context/state.json exists
   - If it exists, read the git_commit field from state.json
2. Check current git commit:
   - Run `git rev-parse HEAD` to get the current commit hash
   - Compare with git_commit from state.json (if it exists)
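Assuming `jq` is available (it is used elsewhere in this command) and that state.json stores the last build's commit in `git_commit`, the up-to-date check can be sketched as:

```shell
kb_up_to_date() {
  # $1 = state.json path, $2 = current commit hash
  [ -f "$1" ] || return 1                            # no state -> first-time build
  [ "$(jq -r '.git_commit // empty' "$1")" = "$2" ]  # unchanged -> up to date
}
```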
3. Determine build strategy:

CASE A: No changes detected (state.json exists AND git commit unchanged):
- ACTION: Skip build entirely (no-op)
- MESSAGE: "KB is up-to-date (commit {{commit_hash}}). No regeneration needed. KB is automatically loaded by agents when needed."

CASE A-MONOREPO: No changes in this service (monorepo: git commit changed but no changes in CODEBASE_ROOT):
- ACTION: Skip build BUT update state.json with the new commit
- REASON: In a monorepo, the global commit moves even when this service is unchanged. Updating the commit reference avoids checking larger diff ranges in the future.
- Update state.json:
  - Read existing state.json
  - Update only the git_commit field to the new commit hash
  - Keep all other fields unchanged (strategy, repo_type, files_analyzed, etc.)
  - Write updated state.json
- MESSAGE: "No changes in this service since last build. Updated commit reference ({{old_commit}} -> {{new_commit}}). KB is automatically loaded by agents when needed."

CASE B: First-time build (no state.json):
- ACTION: Full analysis mode - proceed to Phase 1
- MESSAGE: "First-time KB generation with parallel analysis (10-15 min)"
- MODE: Full scan (spatial analyzer processes all files)
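The CASE A-MONOREPO commit-only update (change `git_commit`, preserve every other field) can be sketched with `jq`; the helper name is illustrative:

```shell
update_state_commit() {
  # $1 = state.json path, $2 = new commit hash; rewrites only .git_commit
  tmp_file=$(mktemp)
  jq --arg c "$2" '.git_commit = $c' "$1" > "$tmp_file" && mv "$tmp_file" "$1"
}
```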
CASE C: Incremental update (state.json exists AND commit changed AND files changed in CODEBASE_ROOT):

1. ACTION: Incremental analysis mode - get changed files with diffs

2. Read monorepo metadata from state.json AND local values from meta.json:

```bash
# Read shareable state
repo_type=$(jq -r '.repo_type // "single-project"' {{$RP1_ROOT}}/context/state.json)

# Read local values from meta.json (with fallback to state.json for backward compatibility)
if [ -f "{{$RP1_ROOT}}/context/meta.json" ]; then
  repo_root=$(jq -r '.repo_root // "."' {{$RP1_ROOT}}/context/meta.json)
  current_project_path=$(jq -r '.current_project_path // "."' {{$RP1_ROOT}}/context/meta.json)
else
  # Backward compatibility: read from state.json if meta.json doesn't exist
  repo_root=$(jq -r '.repo_root // "."' {{$RP1_ROOT}}/context/state.json)
  current_project_path=$(jq -r '.current_project_path // "."' {{$RP1_ROOT}}/context/state.json)
fi
```

3. Get changed files list:

```bash
# If monorepo, run git diff from the repo root and filter to the current project
if [ "$repo_type" = "monorepo" ]; then
  cd "$repo_root"
  # Get all changed files
  all_changes=$(git diff --name-only {{old_commit}} {{new_commit}})
  # Filter to current project (skip filtering if root project)
  if [ "$current_project_path" = "." ] || [ "$current_project_path" = "" ]; then
    # Root project - include all files
    echo "$all_changes"
  else
    # Subdirectory project - filter to project path
    echo "$all_changes" | grep "^${current_project_path}"
  fi
else
  # Single-project - get all changes
  git diff --name-only {{old_commit}} {{new_commit}}
fi
```

4. Check if any files changed in scope:
   - If NO changes found -> Go to CASE A-MONOREPO (update commit only)
   - If changes found -> Continue with incremental analysis

5. Check change set size (prevent token limit issues):

```bash
changed_file_count=$(echo "$changed_files" | wc -l)
if [ "$changed_file_count" -gt 50 ]; then
  echo "Large change set ($changed_file_count files changed). Using FULL mode for reliability."
  # Fall back to FULL mode (skip getting diffs)
  MODE="FULL"
else
  MODE="INCREMENTAL"
fi
```

6. MESSAGE:
   - If MODE=FULL: "Large change set ({{changed_file_count}} files). Full analysis (10-15 min)"
   - If MODE=INCREMENTAL: "Changes detected since last build ({{old_commit}} -> {{new_commit}}). Analyzing {{changed_file_count}} changed files (2-5 min)"

7. Get detailed diffs for each changed file (only if MODE=INCREMENTAL):

```bash
# Only if incremental mode (< 50 files)
git diff {{old_commit}} {{new_commit}} -- <filepath>
```

8. Store diffs: Create a FILE_DIFFS JSON mapping filepath -> diff content (only if MODE=INCREMENTAL)

9. Filter changed files: Apply EXCLUDE_PATTERNS, filter to relevant extensions

10. Store changed files list: Will be passed to the spatial analyzer

11. MODE: INCREMENTAL (< 50 files) or FULL (>= 50 files)
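The EXCLUDE_PATTERNS filtering step might look like the following. This assumes the patterns are expressed as a single extended regex; the actual EXCLUDE_PATTERNS format is not specified in this document:

```shell
filter_changed_files() {
  # stdin: newline-separated file paths; $1: extended-regex exclude pattern
  grep -Ev "$1" || true   # `|| true` so an all-excluded list is not treated as an error
}
```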
Phase 1: Spatial Analysis (Sequential)
1. Spawn spatial analyzer agent:

For full build (CASE B):
Use Task tool with:
  subagent_type: rp1-base:kb-spatial-analyzer
  prompt: "FULL SCAN mode. Scan all files in repository at {{CODEBASE_ROOT}}, rank files 0-5, categorize by KB section. Return JSON with index_files, concept_files, arch_files, module_files arrays."

For incremental build (CASE C):
Use Task tool with:
  subagent_type: rp1-base:kb-spatial-analyzer
  prompt: "INCREMENTAL mode. Only categorize these changed files: {{changed_files_list}}. Rank each file 0-5, categorize by KB section (index_files, concept_files, arch_files, module_files). Return JSON with categorized changed files."

2. Parse spatial analyzer output:
   - Extract JSON from the agent response
   - Validate structure: must have repo_type, monorepo_projects, total_files_scanned, index_files, concept_files, arch_files, module_files, local_meta
   - Store shareable metadata: repo_type, monorepo_projects
   - Store local metadata from local_meta: repo_root, current_project_path (will be written to meta.json)
   - For incremental: files_scanned should match changed_file_count
   - Check that at least one category has files (some categories may be empty in incremental)

3. Handle spatial analyzer failure:
   - If the agent crashes or returns invalid JSON: Log error with details
   - If categorization is completely empty: Log error
   - Provide troubleshooting guidance
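The structure validation could be sketched with `jq` as follows, checking the required top-level keys listed above; the helper name is hypothetical:

```shell
validate_spatial_output() {
  # $1 = spatial analyzer JSON file; true only if all required keys are present
  jq -e 'has("repo_type") and has("monorepo_projects") and has("total_files_scanned")
         and has("index_files") and has("concept_files") and has("arch_files")
         and has("module_files") and has("local_meta")' "$1" > /dev/null
}
```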
Phase 2: Map Phase (Parallel Execution)
1. Spawn 4 analysis agents in parallel (CRITICAL: Use a SINGLE message with 4 Task tool calls):

Agent 1 - Concept Extractor:
Use Task tool with:
  subagent_type: rp1-base:kb-concept-extractor
  prompt: "MODE={{mode}}. Extract domain concepts for concept_map.md. Repository type: {{repo_type}}. Files to analyze (JSON): {{stringify(concept_files)}}. {{if mode==INCREMENTAL}}File diffs (JSON): {{stringify(file_diffs_for_concept_files)}}{{endif}}. Return JSON with concepts, terminology, relationships."

Agent 2 - Architecture Mapper:
Use Task tool with:
  subagent_type: rp1-base:kb-architecture-mapper
  prompt: "MODE={{mode}}. Map system architecture for architecture.md. Repository type: {{repo_type}}. Files to analyze (JSON): {{stringify(arch_files)}}. {{if mode==INCREMENTAL}}File diffs (JSON): {{stringify(file_diffs_for_arch_files)}}{{endif}}. Return JSON with patterns, layers, diagram."

Agent 3 - Module Analyzer:
Use Task tool with:
  subagent_type: rp1-base:kb-module-analyzer
  prompt: "MODE={{mode}}. Analyze modules for modules.md. Repository type: {{repo_type}}. Files to analyze (JSON): {{stringify(module_files)}}. {{if mode==INCREMENTAL}}File diffs (JSON): {{stringify(file_diffs_for_module_files)}}{{endif}}. Return JSON with modules, components, dependencies."

Agent 4 - Pattern Extractor:
Use Task tool with:
  subagent_type: rp1-base:kb-pattern-extractor
  prompt: "MODE={{mode}}. Extract implementation patterns for patterns.md. Repository type: {{repo_type}}. Files to analyze (JSON): {{stringify(concept_files + module_files)}}. {{if mode==INCREMENTAL}}File diffs (JSON): {{stringify(file_diffs_for_pattern_files)}}{{endif}}. Return JSON with patterns (<=150 lines when rendered)."

2. Collect agent outputs:
   - Wait for all 4 agents to complete
   - Parse JSON from each agent response
   - Validate the JSON structure of each output

3. Handle partial failures:

If 1 agent fails:
- Continue with the remaining 3 successful agents
- Generate placeholder content for the failed section:
  - concept_map.md failed -> "# Error extracting concepts - run full rebuild"
  - architecture.md failed -> "# Error mapping architecture - see logs"
  - modules.md failed -> "# Error analyzing modules - run full rebuild"
  - patterns.md failed -> "# Error extracting patterns - run full rebuild"
- Include a warning in the final report: "Partial KB generated (1 agent failed: <agent-name>)"
- Write partial KB files (index.md always generated by orchestrator + 3 successful agent files + 1 placeholder)
- Exit with partial success (still a usable KB)

If 2+ agents fail:
- Log all errors with specific agent names and error messages
- Do NOT write a partial KB (too incomplete to be useful)
- Provide troubleshooting guidance:
  - Check file permissions
  - Verify the git repository is valid
  - Try running again (may be a transient failure)
- Exit with error message: "ERROR: KB generation failed (X agents failed)"
- Exit code: 1
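The partial-failure policy above (1 failure -> partial KB, 2+ failures -> abort) reduces to a tiny decision table; a sketch with an illustrative helper name:

```shell
failure_policy() {
  # $1 = number of failed analysis agents (out of 4)
  if [ "$1" -eq 0 ]; then echo "full"        # all agents succeeded
  elif [ "$1" -eq 1 ]; then echo "partial"   # write partial KB with one placeholder
  else echo "abort"                          # too incomplete to be useful
  fi
}
```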
Phase 3: Reduce Phase (Merge and Write)
1. Load KB templates:
   Use Skill tool with:
     skill: rp1-base:knowledge-base-templates
   - Load templates for: index.md, concept_map.md, architecture.md, modules.md, patterns.md

2. Merge agent data into templates (concept_map, architecture, modules, patterns):

concept_map.md:
- Use concept-extractor JSON data
- Fill template sections: core concepts, terminology, relationships, patterns
- Add concept boundaries

architecture.md:
- Use architecture-mapper JSON data
- Fill template sections: patterns, layers, interactions, integrations
- Insert Mermaid diagram from JSON

modules.md:
- Use module-analyzer JSON data
- Fill template sections: modules, components, dependencies, metrics
- Add responsibility matrix

patterns.md:
- Use pattern-extractor JSON data
- Fill template sections: 6 core patterns, conditional patterns (if detected)
- Verify output is <=150 lines
- Omit conditional sections if not detected

3. Validate Mermaid diagrams:
   Use Skill tool with:
     skill: rp1-base:mermaid
   - Validate the diagram from architecture.md
   - If invalid: Log warning, use a simple fallback diagram or omit

4. Generate index.md directly (orchestrator-owned, not agent):

The orchestrator generates index.md as the "jump off" entry point by aggregating data from all 4 sub-agents. Follow the index.md generation instructions in the knowledge-base-templates skill:
- See the "Index.md Generation (Orchestrator-Owned)" section in SKILL.md
- Aggregation process: extract data from each sub-agent's JSON output
- Calculate file manifest: get line counts after writing the other KB files
- Template placeholder mapping: fill the template with aggregated data

5. Write KB files:
   Use Write tool to write:
   - {{$RP1_ROOT}}/context/index.md
   - {{$RP1_ROOT}}/context/concept_map.md
   - {{$RP1_ROOT}}/context/architecture.md
   - {{$RP1_ROOT}}/context/modules.md
   - {{$RP1_ROOT}}/context/patterns.md
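The file-manifest requirement (accurate line counts, computed after the other KB files are written) can be sketched as follows; the `manifest_line` helper and the entry format are illustrative:

```shell
manifest_line() {
  # $1 = generated KB file; emits one manifest entry with an accurate line count
  printf -- '- %s (%s lines)\n' "$(basename "$1")" "$(wc -l < "$1" | tr -d ' ')"
}
```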
Phase 4: State Management
1. Aggregate metadata:
   - Combine metadata from the spatial analyzer + 4 analysis agents
   - Calculate total files analyzed
   - Extract languages and frameworks
   - Calculate metrics (module count, component count, concept count)

2. Generate state.json (shareable metadata - safe to commit/share):

```json
{
  "strategy": "parallel-map-reduce",
  "repo_type": "{{repo_type}}",
  "monorepo_projects": ["{{project1}}", "{{project2}}"],
  "generated_at": "{{ISO timestamp}}",
  "git_commit": "{{git rev-parse HEAD}}",
  "files_analyzed": {{total_files}},
  "languages": ["{{lang1}}", "{{lang2}}"],
  "metrics": {
    "modules": {{module_count}},
    "components": {{component_count}},
    "concepts": {{concept_count}}
  }
}
```

3. Generate meta.json (local values - should NOT be committed/shared):

```json
{
  "repo_root": "{{repo_root}}",
  "current_project_path": "{{current_project_path}}"
}
```

NOTE: meta.json contains local paths that may differ per team member. This file should be added to .gitignore.

4. Write state files:
   Use Write tool to write:
   - {{$RP1_ROOT}}/context/state.json
   - {{$RP1_ROOT}}/context/meta.json
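The .gitignore recommendation for meta.json can be applied idempotently; `ensure_gitignored` is a hypothetical helper, not part of this command:

```shell
ensure_gitignored() {
  # $1 = .gitignore path, $2 = entry to add (exact-line match, so repeated calls are no-ops)
  grep -qxF "$2" "$1" 2>/dev/null || echo "$2" >> "$1"
}
```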
Phase 5: Error Handling
Error Conditions:
- Spatial analyzer fails or returns invalid JSON
- 2 or more analysis agents fail
- Template loading fails
- Write operations fail repeatedly
- Git commands fail (unable to detect commit hash)
Error Handling Procedure:
- Log clear error message indicating which phase/component failed
- Provide specific details about what went wrong
- List attempted operations and their results
- Provide actionable guidance for resolution:
- Check git repository status if git commands failed
- Verify file permissions if write operations failed
- Check agent logs if spatial analyzer or analysis agents failed
- Report error to user with troubleshooting steps
Final Report:
Knowledge Base Generated Successfully
Strategy: Parallel map-reduce
Repository: {{repo_type}}
Files Analyzed: {{total_files}}
KB Files Written:
- {{$RP1_ROOT}}/context/index.md
- {{$RP1_ROOT}}/context/concept_map.md
- {{$RP1_ROOT}}/context/architecture.md
- {{$RP1_ROOT}}/context/modules.md
- {{$RP1_ROOT}}/context/patterns.md
- {{$RP1_ROOT}}/context/state.json (shareable metadata)
- {{$RP1_ROOT}}/context/meta.json (local paths - add to .gitignore)
Next steps:
- KB is automatically loaded by agents when needed (no manual /knowledge-load required)
- Subsequent runs will use same parallel approach (10-15 min)
- Incremental updates (changed files only) are faster (2-5 min)
- Add meta.json to .gitignore to prevent sharing local paths

Final Report (Feature Learning Mode):
Feature Learnings Captured
Feature: {{FEATURE_ID}}
Source: {{FEATURE_PATH}}
Learnings Incorporated:
- patterns.md: {{N}} new patterns from implementation
- architecture.md: {{N}} architectural decisions
- modules.md: {{N}} new components/dependencies
- concept_map.md: {{N}} domain concepts
KB Files Updated:
- {{$RP1_ROOT}}/context/index.md
- {{$RP1_ROOT}}/context/concept_map.md
- {{$RP1_ROOT}}/context/architecture.md
- {{$RP1_ROOT}}/context/modules.md
- {{$RP1_ROOT}}/context/patterns.md
The knowledge from feature "{{FEATURE_ID}}" has been captured into the KB.
Future agents will benefit from these learnings.

Additional Parameters
| Parameter | Default | Purpose |
|---|---|---|
| RP1_ROOT | | Root directory for KB artifacts |
| CODEBASE_ROOT | | Repository root to analyze |
| EXCLUDE_PATTERNS | | Patterns to exclude from scanning |
Critical Execution Notes
- Change detection first: Always check Phase 0 - compare git commit hash to skip if unchanged
- Do NOT iterate: Execute workflow ONCE, no refinement
- Parallel spawning: Spawn 4 agents in SINGLE message with multiple Task calls
- Index.md ownership: Orchestrator generates index.md directly (not via sub-agent)
- Error handling: Provide clear error messages with troubleshooting steps if failures occur
- No user interaction: Complete entire workflow autonomously
- Set expectations: Inform user builds take 10-15 minutes (or instant if no changes)
Output Discipline
CRITICAL - Keep Output Concise:
- Do ALL internal work in <thinking> tags (NOT visible to user)
- Do NOT output verbose phase-by-phase progress ("Now doing Phase 1...", "Spawning agents...", etc.)
- Do NOT explain internal logic or decision-making process
- Only output 3 things:
- Initial status: Build mode message (CASE A/B/C)
- High-level progress (optional): "Analyzing... (Phase X/5)" every 2-3 minutes
- Final report: Success message with KB files written (see Final Report above)
Example of CORRECT output:
First-time KB generation with parallel analysis (10-15 min)
Analyzing... (Phase 2/5)
Knowledge Base Generated Successfully
[Final Report as shown above]

Example of INCORRECT output (DO NOT DO THIS):
Checking for state.json...
state.json not found, proceeding with first-time build
Running git rev-parse HEAD to get commit...
Commit is 475b03e...
Spawning kb-spatial-analyzer agent...
Parsing spatial analyzer output...
Found 90 files in index_files category...
Now spawning 4 parallel agents...
Spawning kb-concept-extractor...
Spawning kb-architecture-mapper...
Spawning kb-module-analyzer...
etc. (too verbose!)

Expected Performance
No changes detected:
- Instant (no-op)
- Single-project: Commit unchanged -> Skip entirely
- Monorepo: Commit changed but no changes in this service -> Update state.json commit only
First-time build (no state.json - full analysis):
- 10-15 minutes
- Spatial analyzer scans all files
- 4 parallel analysis agents analyze all relevant files
- Generates complete KB
Incremental update (commit changed - changed files only):
- 2-5 minutes (much faster!)
- Git diff identifies changed files
- Spatial analyzer categorizes only changed files
- 4 parallel analysis agents load existing KB + analyze only changed files
- Updates KB with changes only
- Preserves all existing good content