cs-feat-impl
At this stage, the user has already signed off on the plan. Your task is to turn the plan into code. That sounds straightforward, but the error-prone part is not writing the code itself; it is deciding what to do when, mid-implementation, you hit a situation the plan does not cover. Pushing forward blindly renders the plan useless, while stopping to go back and discuss feels like a hassle. The entire rule set below is designed to make stopping the default action.
See Section 0 of `codestable/reference/shared-conventions.md` for shared paths and naming conventions. At this stage, the feature directory has already been created via brainstorm or design.
Three Principles for Writing Code
The following "Startup Checks" and "Core Constraints During Implementation" cover the specific rules. This section first introduces three overarching principles: they set the default direction you lean towards when writing code, and the specific rules are simply these principles applied to common scenarios.
1. Default to Writing the Least Code
Only write exactly what the current step explicitly requires. Don't casually add "might need later" configuration options, abstraction layers, parameter switches, or defensive fallbacks. A useful test: after writing a piece of code, if you think "do I need to add X to make this complete?", first ask whether X is perceivable by the user in the current step. If not, don't add it.
When the whole implementation is finished, if 200 lines of code could be clearly expressed in 50, rewrite it. Extra code is not neutral; it becomes a burden for future maintainers, who must first understand it, then doubt it, then worry about missing some invariant.
2. Only Modify What You're Supposed to, Don't "Improve" Neighboring Code Casually
When you open a file to modify a certain function, modify only that function. If other functions in the same file have ugly style, weird naming, or outdated comments, don't touch them unless they directly conflict with your current changes. Newly written code should match the existing style of the current file, even if you don't usually write that way.
"Casual changes" mixed into a PR prevent the user from quickly seeing exactly what was changed and why. A clean feature PR gets diluted into a mess of comprehensive changes by style adjustments, variable renaming, and rewrites of neighboring functions, multiplying the review cost several times over. If you genuinely find something worth fixing, record it as a subsequent issue in the "Casual Discovery" format described under "No Out-of-Plan Changes" below.
The rule for orphans is stricter: if your current changes turn an import / variable / function into dead code, delete it. If it was already dead before your changes, leave it and record it as a casual discovery.
3. Don't Make Decisions on Your Own for Things Not Stated in the Design
If, halfway through writing code, you hit a corner the design doesn't cover (a new boundary condition, an error path whose handling is unspecified, an out-of-plan file that needs modification), the default action is to stop and go back to discuss the design, not to pick a reasonable approach on your own and keep writing.
"Stop When You Feel the Urge to Create Patch Branches" and "Terminology Guard" below are two typical applications of this principle, but its scope is broader: any moment you catch yourself thinking "I chose an option the design didn't explicitly state" triggers this rule.
Startup Checks
Go through these checks before starting:
1. Is the Plan File Sufficient to Support Implementation?
Open the plan file and check its frontmatter first:
- The file header has YAML frontmatter with the expected fields
- The feature-directory field in the frontmatter matches the current feature directory
- The required list field is not empty and has at least 2 items
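As a sketch only (the canonical field names live in the shared conventions; every name below is an illustrative assumption, not the real schema), a frontmatter that would pass these checks might look like:

```yaml
# Illustrative sketch: field names are assumptions, not the canonical schema.
---
feature: payments-retry            # must match the current feature directory
requirements:                      # must be non-empty, with at least 2 items
  - retry failed charges with exponential backoff
  - surface permanent failures to the operator
---
```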
Then check the section content; the check items differ between standard design and fastforward design:
Standard design (section numbers 0/1/2/3/4):
- Section 0 (Terminology Conventions) has content
- Section 2 (Interface Contract) has specific code pointers
- The implementation plan in Section 3 (Implementation Tips) specifies exact paths and functions
- The implementation sequence in Section 3 has clear steps and exit signals
- The test design in Section 3 covers each feature point, with test constraints / verification methods / use case skeletons for each feature point
Fastforward design (section numbers 0/1/2/3):
- Section 0 (Requirement Summary) includes "Explicitly Not Doing"
- Section 1 (Design Plan) has modification points (file path + function / type name)
- Each item in Section 2 (Acceptance Criteria) is verifiable (operation steps + expected results)
- The implementation steps in Section 3 have clear steps and exit signals
Stop if any item fails the check, and tell the user to go back to the design stage to complete the plan. The reason: whatever the plan is missing would have to be filled in on the fly during implementation, and on-the-fly filling means the user never approved it, which bypasses the checkpoint.
2. Does {slug}-checklist.yaml Exist and Is It Usable?
See `codestable/reference/shared-conventions.md` for the lifecycle of {slug}-checklist.yaml. This stage only consumes and advances the steps section:
- The file exists, and its feature-directory field matches the current feature directory
- The steps list is not empty, and each item's status is pending (some may already be done if resuming from a previous interruption, which is normal)
- If the file doesn't exist → stop and ask the user to go back to the design stage to generate it
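For orientation, here is a minimal sketch of a usable checklist file. Besides steps / action / exit_signal / status, which this document names, every field and value below is an assumption:

```yaml
# {slug}-checklist.yaml, illustrative sketch only.
feature: payments-retry                  # assumed field: matches the feature directory
steps:
  - action: add the RetryPolicy type and wire it into ChargeService
    exit_signal: project compiles and the new type is exported
    status: done                         # finished in a previous session
  - action: implement the backoff loop per Section 3 of the plan
    exit_signal: unit tests for backoff timing pass
    status: pending
```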
3. Read the Entire Context
Must-reads before starting:
- Full content of the plan document
- Requirement source (user description + brainstorm note, if any)
- All existing code files mentioned in the contract examples of Section 2 (standard design) or the modification points of Section 1 (fastforward design); read only the relevant functions, not the entire file
4. Confirm the Starting Step with the User
Usually it is Step 1, but if resuming from a previous interruption, check the statuses of the steps in {slug}-checklist.yaml and continue from the next unfinished step.
Core Constraints During Implementation
These are not arbitrary bans; each has a specific cost behind it, and understanding the why is what keeps you from executing them rigidly.
Strictly Follow the Step Sequence in {slug}-checklist.yaml
Execute the steps list in order; don't merge or skip steps. Immediately update each step's status from pending to done upon completion.
The most common violation is "casually doing the next step as well". Why is this not allowed? Because splitting the plan into steps has a purpose: each step corresponds to an independently verifiable exit signal. Doing two steps together means that when a problem occurs, you don't know which step introduced it, and you can't roll back to a clean intermediate state.
No Out-of-Plan Changes
If you find points worth refactoring while reading code (refer to the "Identify During Implementation" section of the shared conventions), then as long as they are not within the impact scope of this feature, record them as subsequent issues instead of modifying them in passing.
Recording format:
```markdown
> Casual Discovery: {File:Line Number} {Brief description of the problem}. Not within the scope of this task; recorded as a subsequent issue.
```
Why so strict? Casual changes aren't in the plan, so they won't match anything during acceptance, and future maintainers reading git blame can't tell which changes belong to this feature and which were casual. With a handful of "casual" changes mixed in, the PR can no longer clearly explain what it changed.
Terminology Guard
Only applicable to standard design: All newly written type names, function names, and variable names must be cross-checked with Section 0 (Terminology Conventions) of the plan document. No new concepts not in the document are allowed. If you feel the need to introduce a new concept, first stop to modify Section 0 of the plan document, grep for conflicts, get user confirmation, then continue writing code.
The cost of violating this rule is equally concrete: a terminology conflict means that, down the line, one concept carries two names in the code, or two different concepts share one name. The latter is especially fatal, because it completely invalidates search.
Fastforward design doesn't have a formal terminology table, but the same principle applies: When you need to create a new concept name (type / function / key variable), grep the current code for identical or similar names. If conflicts are found, stop to rename or go back to discuss the plan.
Stop When You Feel the Urge to Create Patch Branches
If you find yourself writing code like `if (special case) { special handling }` halfway through:
Stop.
Almost the only reason such a patch branch appears in a new feature is that the plan didn't cover the situation. Pushing on blindly produces a piece of "special logic added just to make the code run", and future developers touching this area won't know why the branch exists. The correct move is to go back and discuss the plan: add the situation to the design, cut it, or explicitly mark it as a legacy issue.
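As a concrete illustration (all names and numbers are invented for this sketch), the smell looks like this in TypeScript:

```typescript
// Hypothetical example: a patch branch that exists only because the plan
// never said how zero-amount charges should behave. Its presence is the
// signal to stop and go back to the design, not to keep writing.
function chargeWithFee(amountCents: number): number {
  if (amountCents === 0) {
    return 0; // special handling added "just to make it run"; nobody will know why later
  }
  return Math.round(amountCents * 1.03); // assumed 3% fee for this sketch
}

const fee = chargeWithFee(100); // 103
```

The branch itself may even be correct behavior; the problem is that the design never says so, which is exactly what makes it a stop signal rather than a coding decision.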
Code Quality Reflection Check
In addition to the process constraints above (no skipping steps / no out-of-plan changes / terminology guard / no patch branches), there is a set of reflection checks for code quality; see Section 7 of `codestable/reference/shared-conventions.md`.
Core idea: it's not "must split past N lines" but "stop and ask yourself when situation X appears". Each item corresponds to a pitfall an AI falls into by default: continuing to append to an already long file, adding methods to an already heavy class, letting a function do more and more without splitting it, writing `if (special handling for this user)` patch branches, copy-pasting, adding a fourth-or-more parameter, piling things into a catch-all util. Stop the moment one of these triggers while you're writing.
If the conclusion of a reflection check is "need to split / need a new file / need to rename / need to extract a shared layer", and the action falls outside the existing steps in {slug}-checklist.yaml, discuss it with the user before deciding; don't split quietly and keep writing.
Submit a Unified Report After Completion
After completing all steps, submit a report using the following fixed template, then Stop and wait for user review.
Why use a fixed template? Vague reports ("Roughly completed", "Should be okay") push all verification responsibilities to the user. The fixed template forces you to clearly state which files were modified, whether out-of-plan content was touched, and whether new concepts were introduced, so the user can review targeted content instead of rereading the entire git diff.
```markdown
## Implementation Completion Report

### Modified Files
{Run git status and paste the actual output}

### Modified Functions / Types (Grouped by Step)
**Step N: {Step Name}**
- file:line Function Name Change Type (Add / Modify / Delete)
- ...

### Did You Touch Out-of-Plan Files?
{Yes / No. If yes, explain why and whether the plan document has been updated synchronously}

### Did You Introduce New Concepts / Abstractions Not in the Plan Document?
{Yes / No. If yes, confirm that the plan document has been updated (Section 0 Terminology Conventions for standard design; Section 1 Design Plan for fastforward) and that a grep was run to rule out conflicts}

### Code Quality Reflection Check Self-Assessment
{Check against Section 7 of shared-conventions. Did any reflection signals trigger during this implementation (appending to large files / adding methods to large classes / function longer than one screen / special branches / copy-paste / multi-parameter functions / piling into util)? If triggered, explain how it was handled (stopped to split / included in steps after discussion with the user / confirmed natural aggregation, no split needed); if none triggered, write "No triggers"}

### Implementation Sequence Exit Signal Check
{Check against the steps in {slug}-checklist.yaml; list action + exit_signal + status (all should be done) one by one}

### Test Constraint Self-Assessment
**Standard design**:
{Check against the test design in Section 3 of the plan. Does the current implementation meet each feature point's test constraints? How is each guaranteed (type system / unit test / integration test / runtime assert)?}
**Fastforward design**:
{Check against each item of the acceptance criteria in Section 2 of the plan and confirm compliance}
```
Stop and wait for review after submitting the report. If the user proposes modification suggestions, make changes according to the suggestions, then submit a brief confirmation again, repeating until the user explicitly approves to enter the acceptance stage.
How to Implement Test Cases
The key use case skeletons in the test design of Section 3 of the plan are inputs for implementing tests, not decoration: write complete test cases based on the skeletons.
Note a common misunderstanding: tests passing ≠ test constraints met. A green run only means the test cases you wrote all passed; it doesn't mean every test constraint is covered by a test case. So in the report you must confirm, one by one, that each feature point's test constraint is covered by a test case.
If a test constraint is guaranteed by the type system (e.g., TypeScript type signatures directly exclude a certain call), state in the report "This type signature has been implemented, guaranteed at compile time".
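As a hypothetical sketch of such a compile-time guarantee (every name here is invented, not taken from any plan document), a constraint like "callers can never pass an empty delay list" can be excluded by the signature itself:

```typescript
// Hypothetical example: the constraint "retry must receive at least one delay"
// is enforced by the type system, so no runtime test for the empty case is needed.
type NonEmpty<T> = [T, ...T[]];

function totalRetryBudgetMs(delaysMs: NonEmpty<number>): number {
  // The tuple type guarantees delaysMs.length >= 1 at compile time.
  return delaysMs.reduce((sum, d) => sum + d, 0);
}

// totalRetryBudgetMs([]);            // rejected by the compiler
const budget = totalRetryBudgetMs([100, 200, 400]); // 700
```

In the report, this constraint would be covered by a sentence like "guaranteed at compile time by the NonEmpty&lt;number&gt; signature" rather than by a runtime test case.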
Exit Conditions
After Exit
Tell the user: "All steps completed, plan document has been synchronized. Next is Phase 3: Acceptance Closure. You can trigger the cs-feat-accept skill."
Don't start writing the acceptance report on your own: the acceptance stage has its own independent checklist rhythm, and entering it early defeats acceptance's gatekeeping role.
Common Pitfalls
- Submitting a completion report when only part of the code is written; submit the report once, after all steps are completed
- Writing "Modified relevant files" in the report instead of listing specific file:line entries
- Modifying out-of-plan code in passing just because you saw it
- Introducing new types / concepts without updating the plan document (Section 0 Terminology Conventions for standard design; Section 1 Design Plan for fastforward)
- Adding `if (user is X) { special handling }` patch branches without stopping to reconsider the plan
- Entering the acceptance stage on your own before the user has approved the review
- Skipping the use case skeletons in the test design, or not verifying each test constraint one by one