# Writing a Plan
## Overview
Create a comprehensive implementation plan assuming engineers have zero context of our codebase and questionable taste. Document everything they need to know: which files, code, tests to modify for each task, documentation they might need to consult, how to test. Break the entire plan into small step-by-step tasks. DRY. YAGNI. TDD. Frequent commits.
Assume they are experienced developers but know almost nothing about our toolchain and problem domain. Assume they are not very good at test design.
Announce at the start: "I'm using the writing-plans skill to create an implementation plan."
Context: This skill should run in a dedicated worktree (created by the brainstorming skill).
Plan save location: docs/superpowers/plans/YYYY-MM-DD-<feature-name>.md
- (User preferences for plan location take precedence over this default)
## Scope Check
If the specification covers multiple independent subsystems, it should have been split into sub-project specifications during the brainstorming phase. If not, suggest splitting it into separate plans—one for each subsystem. Each plan should independently produce working, testable software.
## File Structure
Before defining tasks, list the files that will be created or modified and the responsibility of each file. This is where you lock down decomposition decisions.
- Design units with clear boundaries and well-defined interfaces. Each file should have a single, clear responsibility.
- You reason best about code that fits into context at once. The more focused the file, the more reliable your edits will be. Prioritize small, focused files over large files that take on too much functionality.
- Files that change together should be placed together. Split by responsibility, not by technical layer.
- In an existing codebase, follow existing patterns. If the codebase uses large files, don't unilaterally refactor—but if the file you're modifying has become unmanageable, it's reasonable to include splitting it in the plan.
This structure determines task decomposition. Each task should produce an independent, meaningful change.
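For example, a file-structure section for a hypothetical rate-limiter feature (all paths and responsibilities invented for illustration) might read:

```markdown
## File Structure

- Create: `src/ratelimit/bucket.py` — token-bucket state and refill logic (pure logic, no I/O)
- Create: `src/ratelimit/middleware.py` — request hook that consults the bucket
- Modify: `src/app.py` — register the middleware
- Test: `tests/ratelimit/test_bucket.py` — bucket behavior at and over the limit
```

Note how each file states a single responsibility, and the request-handling code lives next to the logic it depends on rather than in a separate "utils" layer.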
## Small Step Task Granularity
Each step is an action (2-5 minutes):
- "Write a failing test" - one step
- "Run it to confirm failure" - one step
- "Implement the minimal code to make the test pass" - one step
- "Run the test to confirm pass" - one step
- "Commit" - one step
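As a concrete instance of the "write a failing test" and "implement the minimal code" steps, here is what the code for one cycle might contain. The `slugify` helper, its name, and its behavior are invented purely for illustration:

```python
# Step 1 artifact: the failing test. At this point slugify does not
# exist yet, so running pytest produces a NameError (the expected failure).
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"


# Step 3 artifact: the minimal implementation that makes the test pass.
# No extra features (YAGNI) — just enough to satisfy the assertion.
def slugify(text: str) -> str:
    return text.lower().replace(" ", "-")
```

Each artifact is small enough to write and verify in a couple of minutes, which is what keeps the per-step cadence honest.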
## Plan Document Header

Every plan must start with this header:
```markdown
# [Feature Name] Implementation Plan

> **For AI Agent Workers:** Required sub-skill: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task by task. Use checkbox (`- [ ]`) syntax for steps to track progress.

**Goal:** [One-sentence description of what to build]
**Architecture:** [2-3 sentences describing the solution]
**Tech Stack:** [Key technologies/libraries]

---
```
## Task Structure

````markdown
### Task N: [Component Name]

**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`

- [ ] **Step 1: Write a failing test**

  ```python
  def test_specific_behavior():
      result = function(input)
      assert result == expected
  ```

- [ ] **Step 2: Run test to verify failure**
  Run: `pytest tests/path/test.py::test_name -v`
  Expected: FAIL with error message "function not defined"

- [ ] **Step 3: Write minimal implementation code**

  ```python
  def function(input):
      return expected
  ```

- [ ] **Step 4: Run test to verify pass**
  Run: `pytest tests/path/test.py::test_name -v`
  Expected: PASS

- [ ] **Step 5: Commit**

  ```bash
  git add tests/path/test.py src/path/file.py
  git commit -m "feat: add specific feature"
  ```
````
## No Placeholders
Every step must contain the actual content engineers need. The following are plan defects—never write them:
- "TBD", "TODO", "Implement later", "Add details"
- "Add proper error handling" / "Add validation" / "Handle edge cases"
- "Write tests for the above code" (no actual test code)
- "Similar to Task N" (duplicate code—engineers may not read tasks in order)
- Steps that only describe what to do without showing how (code steps must have code blocks)
- References to types, functions, or methods not defined in any task
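To make the contrast concrete, here is a defective step next to an acceptable one (the paths, test name, and error message are hypothetical):

```markdown
<!-- Defective: describes what to do without showing how -->
- [ ] **Step 2: Write tests for the validation logic**

<!-- Acceptable: names the exact command and the expected outcome -->
- [ ] **Step 2: Run test to verify failure**
  Run: `pytest tests/validators/test_email.py::test_rejects_missing_at -v`
  Expected: FAIL with `NameError: name 'validate_email' is not defined`
```

The defective version forces the engineer to invent test design on the spot, which is exactly what this skill assumes they are not good at.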
## Notes
- Always use precise file paths
- Every step includes complete code—if a step involves code changes, show the code
- Precise commands and expected output
- DRY, YAGNI, TDD, frequent commits
## Self-Check
After writing the complete plan, review the specification from a fresh perspective and cross-check against the plan. This is your own checklist to execute—not a sub-agent dispatch.
1. Specification Coverage: Go through each section/requirement in the specification. Can you point to the task that implements it? List all omissions.
2. Placeholder Scan: Search for red flags in the plan—any patterns from the "No Placeholders" section above. Fix them.
3. Type Consistency: Are the types, method signatures, and property names used in later tasks consistent with those defined in earlier tasks? A typical bug is defining a method with one name or signature in Task 3 but calling it with a different one in Task 7.
If issues are found, fix them inline. No need for re-review—fix and proceed. If requirements in the specification have no corresponding tasks, add tasks.
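The placeholder scan lends itself to mechanization. The following sketch is illustrative only: the pattern list is a subset of the red flags above, and the exact patterns you scan for are your choice, not part of this skill:

```python
import re

# Illustrative subset of the red-flag patterns from "No Placeholders".
PLACEHOLDER_PATTERNS = [
    r"\bTBD\b",
    r"\bTODO\b",
    r"Implement later",
    r"Add (proper error handling|validation|details)",
    r"Handle edge cases",
    r"Similar to Task \d+",
]


def scan_plan(text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs where a placeholder red flag appears."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(re.search(p, line, re.IGNORECASE) for p in PLACEHOLDER_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits
```

A regex scan only catches literal phrasings; the coverage and type-consistency checks still require reading the plan against the specification.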
## Execution Handover
After saving the plan, provide execution options:

"Plan completed and saved to `docs/superpowers/plans/<filename>.md`. Two execution options:

1. Sub-agent Driven (Recommended) - Dispatch a new sub-agent for each task, with reviews between tasks for rapid iteration
2. Inline Execution - Use executing-plans to perform tasks in the current session, with batch execution and checkpoints

Which one do you choose?"
If choosing Sub-agent Driven:
- Required Sub-skill: Use superpowers:subagent-driven-development
- One new sub-agent per task + two-stage review
If choosing Inline Execution:
- Required Sub-skill: Use superpowers:executing-plans
- Batch execution with checkpoints for review