# Brownfield Adoption: Adding Cavekit to Existing Codebases
Brownfield adoption layers kits on top of existing code without rewriting it. The existing codebase becomes reference material, and kits are reverse-engineered from what the code actually does. Once kits exist, all future changes flow through the Cavekit lifecycle.
**Core principle:** The existing code is not the enemy -- it is the source of truth for cavekit generation. Respect what works; cavekit what matters.
## 1. When to Use Brownfield Adoption
Brownfield adoption is the right choice when:
- You have a working codebase that you want to improve incrementally
- You want to adopt Cavekit without stopping development
- The codebase is too large or critical for a full rewrite
- You want traceability between kits and code for future changes
- You need to onboard AI agents to an existing project safely
- The team wants to start with Cavekit on a subset of the codebase
Brownfield is NOT the right choice when:
- You are migrating to a completely different framework (use a deliberate rewrite instead)
- The existing code is so broken that kits would just document bugs
- The codebase is being sunset or replaced
## 2. Brownfield vs Deliberate Rewrite
Before starting, decide which approach fits your situation:
| Dimension | Incremental Adoption | Clean-Slate Rebuild |
|---|---|---|
| Objective | Add cavekit coverage around working code | Replace the codebase with a new implementation |
| What happens to existing code | Remains in place, evolves under Cavekit governance | Archived once kits are extracted; new code replaces it |
| Risk profile | Lower -- production system stays functional throughout | Higher -- new system must achieve feature parity before cutover |
| Time to first value | Fast -- kits appear in days, improvements follow | Slow -- significant upfront investment before any return |
| Ideal scenarios | Production systems, incremental improvement, large legacy codebases | Technology stack changes, irrecoverable tech debt, greenfield-quality rebuilds |
| How kits originate | Derived by analyzing existing behavior | Written forward from product requirements |
| Handling broken behavior | Kits capture current state; bugs are fixed through normal Cavekit cycles | Kits capture intended state; fresh implementation avoids old bugs |
| Impact on ongoing work | Low -- regular development continues alongside adoption | High -- team capacity is split between old and new systems |
### Decision flowchart
```
Is the existing code fundamentally sound?
  YES -> Are you changing frameworks?
    YES -> Deliberate Rewrite (extract specs, build new)
    NO  -> Brownfield Adoption (layer specs, evolve)
  NO -> Is a rewrite feasible (time, budget, risk)?
    YES -> Deliberate Rewrite
    NO  -> Brownfield Adoption (spec the broken parts, fix incrementally)
```
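For teams that prefer an executable checklist, the flowchart collapses into a tiny helper function. This is a purely illustrative sketch; the function name and yes/no argument convention are made up here, not part of Cavekit:

```shell
# choose_approach: encode the decision flowchart as a helper.
# Arguments are yes/no answers, in order: is the code sound?
# are you changing frameworks? is a rewrite feasible?
choose_approach() {
  sound=$1; changing_framework=$2; rewrite_feasible=$3
  if [ "$sound" = yes ]; then
    if [ "$changing_framework" = yes ]; then
      echo "Deliberate Rewrite"
    else
      echo "Brownfield Adoption"
    fi
  elif [ "$rewrite_feasible" = yes ]; then
    echo "Deliberate Rewrite"
  else
    echo "Brownfield Adoption"
  fi
}
```

For example, `choose_approach yes no no` prints `Brownfield Adoption`: sound code and no framework change means you layer kits rather than rebuild.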
## 3. The 6-Step Brownfield Process
### Step 1: Set Up the Context Directory
Create the standard Cavekit context directory structure alongside your existing codebase:

```bash
mkdir -p context/{refs,kits,plans,impl,prompts}
```

Resulting structure:

```
your-project/
+-- src/              # Existing source code (untouched)
+-- tests/            # Existing tests (untouched)
+-- package.json      # Existing config (untouched)
+-- context/
    +-- refs/
    |   +-- architecture-overview.md        # High-level description of existing system
    +-- kits/
    |   +-- CLAUDE.md                       # "Kits define WHAT needs implementing"
    +-- plans/
    |   +-- CLAUDE.md                       # "Plans define HOW to implement something"
    +-- impl/
    |   +-- CLAUDE.md                       # "Impls record implementation progress"
    +-- prompts/
        +-- 000-generate-kits-from-code.md  # Bootstrap prompt (this step)
```

Create `context/refs/architecture-overview.md` with a high-level description of the existing system:

```markdown
# Architecture Overview

## System Description
{Brief description of what the application does}

## Technology Stack
- Language: {LANGUAGE}
- Framework: {FRAMEWORK}
- Build: {BUILD_COMMAND}
- Test: {TEST_COMMAND}

## Directory Structure
{Key directories and their purposes}

## Key Domains
{List the major functional areas of the application}

## External Dependencies
{APIs, databases, services the application depends on}

## Known Issues / Tech Debt
{Major known issues that specs should account for}
```
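The setup above can be scripted in one pass. A minimal sketch: the `CLAUDE.md` one-liners are taken from the tree comments, and the empty stubs are an assumption (they get filled in during later steps):

```shell
#!/usr/bin/env sh
# Scaffold the Cavekit context directory with placeholder files.
# Run from the project root.
set -eu

mkdir -p context/refs context/kits context/plans context/impl context/prompts

# One-line CLAUDE.md markers, mirroring the tree comments above
echo "Kits define WHAT needs implementing"     > context/kits/CLAUDE.md
echo "Plans define HOW to implement something" > context/plans/CLAUDE.md
echo "Impls record implementation progress"    > context/impl/CLAUDE.md

# Empty stubs for the overview and bootstrap prompt (written in the next steps)
: > context/refs/architecture-overview.md
: > context/prompts/000-generate-kits-from-code.md
```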
### Step 2: Designate the Codebase as Reference Material
The existing codebase itself becomes the reference material. Unlike greenfield projects (where refs are PRDs or language specs), brownfield refs are the living code.

In `context/refs/`, add a pointer:

```markdown
# Reference: Existing Codebase

The existing source code at `src/` is the primary reference material for spec generation.

## How to Use This Reference
- Explore the codebase structure to identify domains
- Read source files to understand current behavior
- Run existing tests to understand expected behavior
- Check git history for context on design decisions

## What the Codebase Tells Us
- Current behavior (what the code DOES)
- Implicit requirements (what the code assumes)
- Test coverage (what is validated)
- Architecture decisions (how domains interact)

## What the Codebase Does NOT Tell Us
- Why decisions were made (check git history, docs)
- What behavior is intentional vs accidental
- What requirements are missing
- What the system SHOULD do vs what it DOES
```
### Step 3: Create the Bootstrap Prompt (000)
The bootstrap prompt is numbered `000` because it runs first and only once. It reverse-engineers kits from the existing code.

```markdown
# 000: Generate Kits from Existing Code (Brownfield Bootstrap)

## Runtime Inputs
- Framework: {FRAMEWORK}
- Build command: {BUILD_COMMAND}
- Test command: {TEST_COMMAND}
- Source directory: {SRC_DIR}

## Context
This is a brownfield adoption. The existing codebase at {SRC_DIR} is the reference material.
Read context/refs/architecture-overview.md for system context.

## Task

### Phase 1: Explore and Discover
- Read the architecture overview
- Explore the source directory structure
- Identify distinct functional domains (auth, data, UI, API, etc.)
- Read key source files in each domain
- Run existing tests to understand expected behavior: {TEST_COMMAND}

### Phase 2: Generate Kits
For each identified domain:
1. Create context/kits/cavekit-{domain}.md
2. Each cavekit must include:
   - Scope: What this domain covers
   - Requirements: What the code currently does, expressed as requirements
   - Acceptance Criteria: Testable criteria derived from existing behavior
   - Dependencies: What other domains this depends on
   - Out of Scope: What this cavekit explicitly excludes
   - Cross-References: Links to related kits
3. Create context/kits/cavekit-overview.md as the index:
   - One-line summary per domain cavekit
   - Dependency graph between domains
   - Overall system architecture summary

### Phase 3: Validate
For each acceptance criterion in the generated kits:
- Verify the existing code satisfies it
- If a test exists that validates it, reference the test
- If no test exists, note it as a coverage gap

## Exit Criteria
- All major domains have corresponding cavekit files
- Every requirement has testable acceptance criteria
- cavekit-overview.md indexes all kits
- Validation report shows which criteria are covered by existing tests
- Coverage gaps are documented

## Completion Signal
<all-tasks-complete>
```
### Step 4: Run the Iteration Loop
Run the bootstrap prompt through the iteration loop:

```bash
# Run 3-5 iterations to stabilize kits
iteration-loop context/prompts/000-generate-kits-from-code.md -n 5 -t 1h
```

**What happens during iteration:**

- **Iteration 1:** Agent explores the codebase, generates initial kits (broad but shallow)
- **Iteration 2:** Agent refines kits based on git history from iteration 1, adds detail
- **Iteration 3:** Agent validates kits against code, fills coverage gaps
- **Iterations 4-5:** Convergence -- minor refinements, polishing cross-references

**Watch for convergence:** Kits should stabilize after 3-5 iterations. If they do not, the codebase may be too large for a single prompt. Split into domain-specific bootstrap prompts.
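Convergence can also be checked mechanically between runs: if the latest iteration changed no files under `context/kits/`, the kits have stabilized. A hedged sketch, assuming the iteration loop commits after each iteration (the function name is ours, not a Cavekit command):

```shell
# kits_converged: succeed when the most recent commit changed no files
# under context/kits/ -- i.e. the last iteration produced no kit churn.
# Assumes each iteration ends in its own git commit.
kits_converged() {
  changed=$(git diff --name-only HEAD~1 HEAD -- context/kits/ | wc -l)
  if [ "$changed" -eq 0 ]; then
    echo "converged: last iteration changed no kits"
  else
    echo "still churning: $changed kit file(s) changed last iteration"
    return 1
  fi
}
```

Run it after each pass; two consecutive converged results are a reasonable stopping signal.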
### Step 5: Validate Kits Match Behavior
After the bootstrap prompt converges, validate that the generated kits accurately describe the existing code.

#### 5a. Run tests against kits

```bash
# Use TDD to verify kits match behavior:
# for each domain cavekit, generate tests from its acceptance criteria,
# then verify the existing code passes them
{TEST_COMMAND}
```

#### 5b. Manual review checklist

```markdown
## Cavekit Validation Checklist
- Each domain in the codebase has a corresponding cavekit
- Acceptance criteria match actual code behavior (not aspirational)
- Dependencies between kits match actual code dependencies
- No orphan code -- every significant module is covered by a cavekit
- No phantom requirements -- kits do not describe behavior that does not exist
- Cross-references are accurate
```
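The "no orphan code" item can be partially automated. A sketch under two assumptions that may not hold for your project: one top-level directory per domain under `src/`, and the `cavekit-{domain}.md` naming from the bootstrap prompt:

```shell
#!/usr/bin/env sh
set -eu

# check_kits: flag src/ domains that have no corresponding cavekit file.
check_kits() {
  missing=0
  for dir in src/*/; do
    [ -d "$dir" ] || continue   # skip the literal glob when src/ is empty
    domain=$(basename "$dir")
    if [ ! -f "context/kits/cavekit-$domain.md" ]; then
      echo "ORPHAN: $dir has no cavekit-$domain.md"
      missing=$((missing + 1))
    fi
  done
  echo "$missing domain(s) without kits"
}

# Demo against a throwaway fixture: auth has a kit, payments does not
cd "$(mktemp -d)"
mkdir -p src/auth src/payments context/kits
touch context/kits/cavekit-auth.md
check_kits
```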
undefined5c. Handle mismatches
5c. Handle mismatches
| Mismatch Type | Action |
|---|---|
| Cavekit describes behavior that does not exist | Remove the requirement (phantom requirement) |
| Code has behavior not in any cavekit | Add a requirement (coverage gap) |
| Cavekit and code disagree on behavior | Determine which is correct; update the other |
| Code has bugs that kits documented as-is | Mark as known issue in cavekit; fix via normal Cavekit |
| Mismatch Type | Action |
|---|---|
| Cavekit describes behavior that does not exist | Remove the requirement (phantom requirement) |
| Code has behavior not in any cavekit | Add a requirement (coverage gap) |
| Cavekit and code disagree on behavior | Determine which is correct; update the other |
| Code has bugs that kits documented as-is | Mark as known issue in cavekit; fix via normal Cavekit |
### Step 6: Proceed with Normal Hunt
Once kits are validated, the project is ready for full Cavekit. All future changes flow through kits first:

```
Future change workflow:
1. Update cavekit with new/changed requirement
2. Generate/update plans from kits (prompt 002)
3. Implement from plans (prompt 003)
4. Validate: build + test + acceptance criteria
5. If issues found: revise kits
```

Create the standard pipeline prompts:

```bash
# Create greenfield-style prompts for ongoing development
# (000 was the bootstrap; 001-003 are the ongoing pipeline)
context/prompts/001-generate-kits-from-refs.md    # For new features
context/prompts/002-generate-plans-from-kits.md   # Plan generation
context/prompts/003-generate-impl-from-plans.md   # Implementation
```
---

## 4. Incremental Adoption Strategy
You do not have to cavekit the entire codebase at once. Start with the most active or highest-risk areas:
### Priority matrix for cavekit coverage
| Priority | Criteria | Example |
|---|---|---|
| P0: Cavekit immediately | Code changes frequently, high risk, many bugs | Auth system, payment processing |
| P1: Cavekit soon | Active development area, moderate complexity | Feature modules, API endpoints |
| P2: Cavekit when touched | Stable code, rarely changes | Utility libraries, config modules |
| P3: Skip until needed | Dead code, deprecated features | Legacy compatibility layers |
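One quick way to draft the P0/P1 split is to rank directories by git churn -- frequently changed code is where kits pay off first. A sketch; the two-component path depth and one-year window are arbitrary choices, and `rank_churn` is our name, not a Cavekit tool:

```shell
# rank_churn: list path prefixes (e.g. src/auth) by the number of
# commits touching them over the last year. High-churn entries are
# natural P0/P1 candidates; run from the repository root.
rank_churn() {
  git log --since="1 year ago" --format= --name-only \
    | grep -v '^$' \
    | cut -d/ -f1-2 \
    | sort | uniq -c | sort -rn \
    | head -15
}
```

Cross-check the output against risk: a low-churn payment module may still be P0 on risk grounds alone.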
### Incremental process
```
Week 1: Bootstrap kits for P0 domains
  -> Run 000 prompt scoped to P0 directories only
  -> Validate and refine

Week 2-3: Extend to P1 domains
  -> Add P1 directories to the bootstrap prompt
  -> Cross-reference with existing P0 kits

Week 4+: Cavekit-on-touch
  -> When any P2 file is modified, generate its cavekit first
  -> Gradually expand coverage
```
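The cavekit-on-touch rule can be enforced with a small pre-commit guard. A sketch, assuming the `cavekit-{domain}.md` naming from the bootstrap; the function name and hook wiring are illustrative, not part of Cavekit:

```shell
# check_touched_kits: fail when staged changes under src/<domain>/ have
# no matching context/kits/cavekit-<domain>.md. Call it from
# .git/hooks/pre-commit to block commits until the cavekit exists.
check_touched_kits() {
  status=0
  for file in $(git diff --cached --name-only -- 'src/*'); do
    domain=$(echo "$file" | cut -d/ -f2)
    kit="context/kits/cavekit-$domain.md"
    if [ ! -f "$kit" ]; then
      echo "BLOCKED: $file modified but $kit is missing -- write the cavekit first"
      status=1
    fi
  done
  return $status
}
```

Teams adopting incrementally may want to scope the guard to P2 paths only, so P0/P1 work (which already has kits) is unaffected.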
### Scoping the bootstrap prompt
For incremental adoption, modify prompt 000 to target specific directories:

```markdown
## Scope

This bootstrap targets the following domains only:
- src/auth/ -> cavekit-auth.md
- src/payments/ -> cavekit-payments.md

Do NOT generate kits for other directories at this time.
```
---

## 5. Common Challenges and Solutions
### Challenge: Codebase is too large for one context window
**Solution:** Split the bootstrap into domain-specific prompts:

```
context/prompts/
+-- 000a-generate-kits-auth.md
+-- 000b-generate-kits-data.md
+-- 000c-generate-kits-ui.md
```

Run each independently, then create a manual `cavekit-overview.md` that ties them together.
cavekit-overview.mdChallenge: No existing tests
Challenge: No existing tests
**Solution:** The bootstrap prompt generates kits from code behavior, not tests. After kits exist, use the implementation prompt to generate tests:

```bash
# After bootstrap, generate tests from kits
# (focus on test generation, not code changes)
iteration-loop context/prompts/003-generate-impl-from-plans.md -n 5 -t 1h
```
### Challenge: Code has undocumented behavior
**Solution:** Use git history to understand intent. In the bootstrap prompt, add:

```markdown
## Discovery Strategy
- Read source code for current behavior
- Read `git log --oneline -50` for recent changes
- Read `git log --follow {file}` for individual file history
- Infer requirements from both code AND history
```
### Challenge: Code has known bugs
**Solution:** Cavekit the intended behavior, not the buggy behavior. Mark known bugs as issues:

```markdown
## R3: Search Results Pagination

Description: Search results are paginated with 20 items per page

Acceptance Criteria:
- Results are paginated
- Page size is configurable (default 20)

Known Issues:
- BUG: Off-by-one error on last page (see issue #142)
```
### Challenge: Team resistance to Cavekit
**Solution:** Start small, show results:

- Pick ONE upcoming feature
- Write a cavekit before implementing it
- Show how the cavekit caught issues the team would have missed
- Gradually expand Cavekit coverage based on demonstrated value
## 6. Lightweight Cavekit for Small Projects
Even small projects benefit from minimal Cavekit. The "Cavekit floor" is:

```
your-small-project/
+-- src/
+-- context/
    +-- kits/
    |   +-- cavekit-task.md   # One cavekit for the current task
    +-- plans/
        +-- plan-task.md      # One plan for the current task
```

No prompts directory is needed. Just write a focused cavekit and plan, then run the iteration loop against the plan.

Why bother for small projects?

- The cavekit catches requirements you would have missed
- The plan sequences work so the agent does not thrash
- If the project grows, you already have the structure in place
- It is much easier to scale up from lightweight Cavekit than to retrofit full Cavekit later
### Lightweight Cavekit process
1. Write `context/kits/cavekit-task.md` (15-30 minutes)
2. Write `context/plans/plan-task.md` (10-20 minutes)
3. Run the iteration loop against the plan
4. If the project grows, add the full context directory structure
## 7. Transition Milestones
Track your brownfield adoption progress with these milestones:

```markdown
# Brownfield Adoption Progress

## Milestone 1: Foundation
- Context directory created
- Architecture overview written
- Bootstrap prompt created

## Milestone 2: Initial Specs
- P0 domains have kits
- Kits validated against existing code
- Coverage gaps documented

## Milestone 3: Pipeline Active
- Standard prompts (001-003) created
- First feature developed through Cavekit pipeline
- Revision process tested

## Milestone 4: Steady State
- All active domains have kits
- All new features go through kits first
- Revision is routine
- Iteration loop runs are predictable (convergence in 3-5 iterations)

## Milestone 5: Full Cavekit
- All domains have kits
- All changes flow through the Hunt
- Convergence monitoring active
- Team comfortable with the process
```

---

## Cross-References
- **Context architecture:** See the `ck:context-architecture` skill for the full context directory structure and progressive disclosure patterns.
- **Prompt pipeline:** See the `ck:prompt-pipeline` skill for designing the 001-003 prompts after bootstrap.
- **Cavekit writing:** See the `ck:cavekit-writing` skill for how to write high-quality kits with testable acceptance criteria.
- **Revision:** See the `ck:revision` skill for tracing bugs back to kits after brownfield adoption.
- **Convergence monitoring:** See the `ck:convergence-monitoring` skill for detecting when the bootstrap prompt has converged.