Brownfield Adoption: Adding Cavekit to Existing Codebases

Brownfield adoption layers kits on top of existing code without rewriting it. The existing codebase becomes reference material, and kits are reverse-engineered from what the code actually does. Once kits exist, all future changes flow through the Cavekit lifecycle.
Core principle: The existing code is not the enemy -- it is the source of truth for cavekit generation. Respect what works; cavekit what matters.


1. When to Use Brownfield Adoption

Brownfield adoption is the right choice when:
  • You have a working codebase that you want to improve incrementally
  • You want to adopt Cavekit without stopping development
  • The codebase is too large or critical for a full rewrite
  • You want traceability between kits and code for future changes
  • You need to onboard AI agents to an existing project safely
  • The team wants to start with Cavekit on a subset of the codebase
Brownfield is NOT the right choice when:
  • You are migrating to a completely different framework (use a deliberate rewrite instead)
  • The existing code is so broken that kits would just document bugs
  • The codebase is being sunset or replaced


2. Brownfield vs Deliberate Rewrite

Before starting, decide which approach fits your situation:
| Dimension | Incremental Adoption | Clean-Slate Rebuild |
| --- | --- | --- |
| Objective | Add cavekit coverage around working code | Replace the codebase with a new implementation |
| What happens to existing code | Remains in place, evolves under Cavekit governance | Archived once kits are extracted; new code replaces it |
| Risk profile | Lower -- production system stays functional throughout | Higher -- new system must achieve feature parity before cutover |
| Time to first value | Fast -- kits appear in days, improvements follow | Slow -- significant upfront investment before any return |
| Ideal scenarios | Production systems, incremental improvement, large legacy codebases | Technology stack changes, irrecoverable tech debt, greenfield-quality rebuilds |
| How kits originate | Derived by analyzing existing behavior | Written forward from product requirements |
| Handling broken behavior | Kits capture current state; bugs are fixed through normal Cavekit cycles | Kits capture intended state; fresh implementation avoids old bugs |
| Impact on ongoing work | Low -- regular development continues alongside adoption | High -- team capacity is split between old and new systems |

Decision flowchart

```
Is the existing code fundamentally sound?
  YES -> Are you changing frameworks?
           YES -> Deliberate Rewrite (extract kits, build new)
           NO  -> Brownfield Adoption (layer kits, evolve)
  NO  -> Is a rewrite feasible (time, budget, risk)?
           YES -> Deliberate Rewrite
           NO  -> Brownfield Adoption (cavekit the broken parts, fix incrementally)
```

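
The flowchart can be read as a small decision function. A sketch (the three yes/no predicates are questions you answer about your own codebase, not tool inputs):

```python
def choose_approach(code_is_sound: bool,
                    changing_frameworks: bool,
                    rewrite_is_feasible: bool) -> str:
    """Encode the brownfield-vs-rewrite decision flowchart."""
    if code_is_sound:
        # Sound code: only a framework change justifies a rewrite.
        return "deliberate-rewrite" if changing_frameworks else "brownfield-adoption"
    # Unsound code: rewrite only if time, budget, and risk allow it.
    return "deliberate-rewrite" if rewrite_is_feasible else "brownfield-adoption"
```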

3. The 6-Step Brownfield Process

Step 1: Set Up the Context Directory

Create the standard Cavekit context directory structure alongside your existing codebase:
```bash
mkdir -p context/{refs,kits,plans,impl,prompts}
```

Resulting structure:

```
your-project/
+-- src/                    # Existing source code (untouched)
+-- tests/                  # Existing tests (untouched)
+-- package.json            # Existing config (untouched)
+-- context/
    +-- refs/
    |   +-- architecture-overview.md   # High-level description of existing system
    +-- kits/
    |   +-- CLAUDE.md                  # "Kits define WHAT needs implementing"
    +-- plans/
    |   +-- CLAUDE.md                  # "Plans define HOW to implement something"
    +-- impl/
    |   +-- CLAUDE.md                  # "Impls record implementation progress"
    +-- prompts/
        +-- 000-generate-kits-from-code.md   # Bootstrap prompt (this step)
```

Create `context/refs/architecture-overview.md` with a high-level description of the existing system:

```markdown
# Architecture Overview

## System Description

{Brief description of what the application does}

## Technology Stack

- Language: {LANGUAGE}
- Framework: {FRAMEWORK}
- Build: {BUILD_COMMAND}
- Test: {TEST_COMMAND}

## Directory Structure

{Key directories and their purposes}

## Key Domains

{List the major functional areas of the application}

## External Dependencies

{APIs, databases, services the application depends on}

## Known Issues / Tech Debt

{Major known issues that kits should account for}
```
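
The same setup can be scripted together with the CLAUDE.md convention stubs shown in the tree. A sketch in Python (the stub wording is taken from the tree above; adjust to taste):

```python
import os

# Create the context directory skeleton next to the existing project.
for sub in ("refs", "kits", "plans", "impl", "prompts"):
    os.makedirs(os.path.join("context", sub), exist_ok=True)

# Stub the one-line CLAUDE.md convention files from the tree above.
stubs = {
    "kits": "Kits define WHAT needs implementing",
    "plans": "Plans define HOW to implement something",
    "impl": "Impls record implementation progress",
}
for sub, line in stubs.items():
    with open(os.path.join("context", sub, "CLAUDE.md"), "w", encoding="utf-8") as f:
        f.write(line + "\n")
```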

Step 2: Designate the Codebase as Reference Material

The existing codebase itself becomes the reference material. Unlike greenfield projects (where refs are PRDs or language specs), brownfield refs are the living code.
In `context/refs/`, add a pointer:

```markdown
# Reference: Existing Codebase

The existing source code at `src/` is the primary reference material for kit generation.

## How to Use This Reference

1. Explore the codebase structure to identify domains
2. Read source files to understand current behavior
3. Run existing tests to understand expected behavior
4. Check git history for context on design decisions

## What the Codebase Tells Us

- Current behavior (what the code DOES)
- Implicit requirements (what the code assumes)
- Test coverage (what is validated)
- Architecture decisions (how domains interact)

## What the Codebase Does NOT Tell Us

- Why decisions were made (check git history, docs)
- What behavior is intentional vs accidental
- What requirements are missing
- What the system SHOULD do vs what it DOES
```
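
Step 1 of the reference workflow ("explore the codebase structure to identify domains") can be bootstrapped with a crude layout heuristic. A sketch, assuming one top-level directory per domain, which will not hold for every codebase:

```python
import os

def discover_domains(src_dir: str) -> list[str]:
    """Sketch: treat each top-level directory under src/ as a domain candidate."""
    return sorted(
        name
        for name in os.listdir(src_dir)
        if os.path.isdir(os.path.join(src_dir, name))
    )
```

Candidates found this way still need human review; directories like `utils/` rarely deserve their own cavekit.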

Step 3: Create the Bootstrap Prompt (000)

The bootstrap prompt is numbered `000` because it runs first and only once. It reverse-engineers kits from the existing code.

```markdown
# 000: Generate Kits from Existing Code (Brownfield Bootstrap)

## Runtime Inputs

- Framework: {FRAMEWORK}
- Build command: {BUILD_COMMAND}
- Test command: {TEST_COMMAND}
- Source directory: {SRC_DIR}

## Context

This is a brownfield adoption. The existing codebase at {SRC_DIR} is the
reference material. Read context/refs/architecture-overview.md for system context.

## Task

### Phase 1: Explore and Discover

1. Read the architecture overview
2. Explore the source directory structure
3. Identify distinct functional domains (auth, data, UI, API, etc.)
4. Read key source files in each domain
5. Run existing tests to understand expected behavior: {TEST_COMMAND}

### Phase 2: Generate Kits

For each identified domain:

1. Create context/kits/cavekit-{domain}.md
2. Each cavekit must include:
   - Scope: What this domain covers
   - Requirements: What the code currently does, expressed as requirements
   - Acceptance Criteria: Testable criteria derived from existing behavior
   - Dependencies: What other domains this depends on
   - Out of Scope: What this cavekit explicitly excludes
   - Cross-References: Links to related kits
3. Create context/kits/cavekit-overview.md as the index:
   - One-line summary per domain cavekit
   - Dependency graph between domains
   - Overall system architecture summary

### Phase 3: Validate

For each acceptance criterion in the generated kits:

1. Verify the existing code satisfies it
2. If a test exists that validates it, reference the test
3. If no test exists, note it as a coverage gap

## Exit Criteria

- All major domains have corresponding cavekit files
- Every requirement has testable acceptance criteria
- cavekit-overview.md indexes all kits
- Validation report shows which criteria are covered by existing tests
- Coverage gaps are documented

## Completion Signal

<all-tasks-complete>
```

Step 4: Run the Iteration Loop

Run the bootstrap prompt through the iteration loop:
```bash
# Run 3-5 iterations to stabilize kits
iteration-loop context/prompts/000-generate-kits-from-code.md -n 5 -t 1h
```

**What happens during iteration:**
- **Iteration 1:** Agent explores codebase, generates initial kits (broad but shallow)
- **Iteration 2:** Agent refines kits based on git history from iteration 1, adds detail
- **Iteration 3:** Agent validates kits against code, fills coverage gaps
- **Iterations 4-5:** Convergence -- minor refinements, polishing cross-references

**Watch for convergence:** Kits should stabilize after 3-5 iterations. If they do not, the codebase may be too large for a single prompt. Split into domain-specific bootstrap prompts.
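
One way to make "watch for convergence" concrete is to fingerprint the kits directory after each iteration and stop when consecutive fingerprints match. A sketch (the fingerprint heuristic is an assumption, not part of the iteration-loop tool):

```python
import hashlib
import os

def kits_fingerprint(kits_dir: str) -> str:
    """Hash every kit file so identical iterations produce identical fingerprints."""
    digest = hashlib.sha256()
    for name in sorted(os.listdir(kits_dir)):
        path = os.path.join(kits_dir, name)
        if os.path.isfile(path):
            digest.update(name.encode())
            with open(path, "rb") as f:
                digest.update(f.read())
    return digest.hexdigest()
```

Record the fingerprint after each iteration; when it stops changing, the bootstrap has converged.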

Step 5: Validate Kits Match Behavior

After the bootstrap prompt converges, validate that the generated kits accurately describe the existing code:

5a. Run tests against kits

```bash
# Use TDD to verify kits match behavior.
# For each domain cavekit, generate tests from acceptance criteria,
# then verify existing code passes them:
{TEST_COMMAND}
```

5b. Manual review checklist

```markdown
# Cavekit Validation Checklist

- Each domain in the codebase has a corresponding cavekit
- Acceptance criteria match actual code behavior (not aspirational)
- Dependencies between kits match actual code dependencies
- No orphan code -- every significant module is covered by a cavekit
- No phantom requirements -- kits do not describe behavior that does not exist
- Cross-references are accurate
```

5c. Handle mismatches

| Mismatch Type | Action |
| --- | --- |
| Cavekit describes behavior that does not exist | Remove the requirement (phantom requirement) |
| Code has behavior not in any cavekit | Add a requirement (coverage gap) |
| Cavekit and code disagree on behavior | Determine which is correct; update the other |
| Code has bugs that kits documented as-is | Mark as known issue in cavekit; fix via normal Cavekit cycles |

Step 6: Proceed with Normal Hunt

Once kits are validated, the project is ready for full Cavekit. All future changes flow through kits first:
Future change workflow:
  1. Update cavekit with new/changed requirement
  2. Generate/update plans from kits (prompt 002)
  3. Implement from plans (prompt 003)
  4. Validate: build + test + acceptance criteria
  5. If issues found: revise kits
Create the standard pipeline prompts:

```bash
# Create greenfield-style prompts for ongoing development
# (000 was the bootstrap; 001-003 are the ongoing pipeline)
context/prompts/001-generate-kits-from-refs.md    # For new features
context/prompts/002-generate-plans-from-kits.md   # Plan generation
context/prompts/003-generate-impl-from-plans.md   # Implementation
```

---

4. Incremental Adoption Strategy

You do not have to cavekit the entire codebase at once. Start with the most active or highest-risk areas:

Priority matrix for cavekit coverage

| Priority | Criteria | Example |
| --- | --- | --- |
| P0: Cavekit immediately | Code changes frequently, high risk, many bugs | Auth system, payment processing |
| P1: Cavekit soon | Active development area, moderate complexity | Feature modules, API endpoints |
| P2: Cavekit when touched | Stable code, rarely changes | Utility libraries, config modules |
| P3: Skip until needed | Dead code, deprecated features | Legacy compatibility layers |
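
The matrix can be applied mechanically when triaging many modules at once. A sketch; the churn threshold and the three inputs are assumptions, not part of Cavekit:

```python
def cavekit_priority(changes_per_month: int, high_risk: bool, deprecated: bool) -> str:
    """Map a module's churn and risk onto the P0-P3 priority matrix."""
    if deprecated:
        return "P3"   # skip until needed
    if high_risk and changes_per_month >= 4:
        return "P0"   # cavekit immediately
    if changes_per_month >= 4:
        return "P1"   # cavekit soon
    return "P2"       # cavekit when touched
```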

Incremental process

```
Week 1: Bootstrap kits for P0 domains
  -> Run 000 prompt scoped to P0 directories only
  -> Validate and refine

Week 2-3: Extend to P1 domains
  -> Add P1 directories to the bootstrap prompt
  -> Cross-reference with existing P0 kits

Week 4+: Cavekit-on-touch
  -> When any P2 file is modified, generate its cavekit first
  -> Gradually expand coverage
```

Scoping the bootstrap prompt

For incremental adoption, modify prompt 000 to target specific directories:

```markdown
## Scope

This bootstrap targets the following domains only:

- src/auth/ -> cavekit-auth.md
- src/payments/ -> cavekit-payments.md

Do NOT generate kits for other directories at this time.
```

---

5. Common Challenges and Solutions

Challenge: Codebase is too large for one context window

Solution: Split the bootstrap into domain-specific prompts:

```
context/prompts/
+-- 000a-generate-kits-auth.md
+-- 000b-generate-kits-data.md
+-- 000c-generate-kits-ui.md
```

Run each independently, then create a manual `cavekit-overview.md` that ties them together.

Challenge: No existing tests

Solution: The bootstrap prompt generates kits from code behavior, not tests. After kits exist, use the implementation prompt to generate tests:

```bash
# After bootstrap, generate tests from kits.
# Focus on test generation, not code changes.
iteration-loop context/prompts/003-generate-impl-from-plans.md -n 5 -t 1h
```

Challenge: Code has undocumented behavior

Solution: Use git history to understand intent. In the bootstrap prompt, add:

```markdown
## Discovery Strategy

1. Read source code for current behavior
2. Read `git log --oneline -50` for recent changes
3. Read `git log --follow {file}` for individual file history
4. Infer requirements from both code AND history
```

Challenge: Code has known bugs

Solution: Cavekit the intended behavior, not the buggy behavior. Mark known bugs as issues:

```markdown
## R3: Search Results Pagination

Description: Search results are paginated with 20 items per page

Acceptance Criteria:
- Results are paginated
- Page size is configurable (default 20)

Known Issues:
- BUG: Off-by-one error on last page (see issue #142)
```
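
Acceptance criteria written this way translate directly into tests. A sketch, where `paginate` is a hypothetical helper standing in for the project's real search code:

```python
def paginate(items, page, page_size=20):
    """Hypothetical helper matching requirement R3: return one page of results."""
    start = (page - 1) * page_size
    return items[start:start + page_size]

results = list(range(45))
assert len(paginate(results, 1)) == 20                # results are paginated, default 20
assert len(paginate(results, 2, page_size=10)) == 10  # page size is configurable
assert paginate(results, 3) == list(range(40, 45))    # last page intact (regression check for #142)
```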

Challenge: Team resistance to Cavekit

Solution: Start small, show results:
  1. Pick ONE upcoming feature
  2. Write a cavekit before implementing it
  3. Show how the cavekit caught issues the team would have missed
  4. Gradually expand Cavekit coverage based on demonstrated value


6. Lightweight Cavekit for Small Projects

Even small projects benefit from minimal Cavekit. The "Cavekit floor" is:

```
your-small-project/
+-- src/
+-- context/
    +-- kits/
    |   +-- cavekit-task.md     # One cavekit for the current task
    +-- plans/
        +-- plan-task.md        # One plan for the current task
```

No prompts directory needed. Just write a focused cavekit and plan, then use the iteration loop against the plan.
Why bother for small projects?
  • The cavekit catches requirements you would have missed
  • The plan sequences work so the agent does not thrash
  • If the project grows, you already have the structure in place
  • It is much easier to scale up from lightweight Cavekit than to retrofit full Cavekit later

Lightweight Cavekit process

  1. Write context/kits/cavekit-task.md (15-30 minutes)
  2. Write context/plans/plan-task.md (10-20 minutes)
  3. Run the iteration loop against the plan
  4. If the project grows, add the full context directory structure

7. Transition Milestones

Track your brownfield adoption progress with these milestones:

```markdown
# Brownfield Adoption Progress

## Milestone 1: Foundation

- Context directory created
- Architecture overview written
- Bootstrap prompt created

## Milestone 2: Initial Kits

- P0 domains have kits
- Kits validated against existing code
- Coverage gaps documented

## Milestone 3: Pipeline Active

- Standard prompts (001-003) created
- First feature developed through Cavekit pipeline
- Revision process tested

## Milestone 4: Steady State

- All active domains have kits
- All new features go through kits first
- Revision is routine
- Iteration loop runs are predictable (convergence in 3-5 iterations)

## Milestone 5: Full Cavekit

- All domains have kits
- All changes flow through the Hunt
- Convergence monitoring active
- Team comfortable with the process
```

---

Cross-References

  • Context architecture: See the ck:context-architecture skill for the full context directory structure and progressive disclosure patterns.
  • Prompt pipeline: See the ck:prompt-pipeline skill for designing the 001-003 prompts after bootstrap.
  • Cavekit writing: See the ck:cavekit-writing skill for how to write high-quality kits with testable acceptance criteria.
  • Revision: See the ck:revision skill for tracing bugs back to kits after brownfield adoption.
  • Convergence monitoring: See the ck:convergence-monitoring skill for detecting when the bootstrap prompt has converged.