cs-feat-design
Phase 1 of the feature workflow — Draft a design document for the new feature, serving as the sole input for subsequent implementation and acceptance. First gather evidence (read architecture docs, review relevant code, grep to prevent term conflicts, check archives), then write a complete first draft in one go (including YAML frontmatter + three-tier structure + test design), submit it to the user for overall review, and iterate until approval. After approval, extract {slug}-checklist.yaml from {slug}-design.md for use in the next two phases. Trigger scenarios: "Start designing the solution", "Write design doc", "Prepare to implement XX", with the prerequisite that you already know what to do, who it's for, and how to define success.
NPX Install: `npx skill4agent add liuzhengdongfortest/codestable cs-feat-design`
SKILL.md Content

cs-feat-design
The output of this phase is a design document `{slug}-design.md`, plus an action checklist `{slug}-checklist.yaml` extracted from it. These two artifacts will be used by the next two phases — the implement phase follows the design for development, the acceptance phase uses it for verification. So any mistakes or omissions here will lead to errors downstream.
See `codestable/reference/shared-conventions.md` for shared paths and naming conventions. Generally, the feature directory has already been created by the brainstorm phase; if not, create it in this step.
There are three entry points for this phase:
- Formal drafting: The user can clearly explain the requirements (or has already filled out `{slug}-intent.md`); directly proceed to the "Workflow" section for complete drafting.
- Initialization mode: The user says "Start a new requirement / Create a draft / Add a new feature", but wants to write a semi-finished design themselves instead of dictating. Proceed to the next section "Initialization mode", create the directory and an empty `{slug}-intent.md`, then end this round. Wait for the user to fill it out before returning.
- Start from roadmap item: The user says "Start working on {sub-feature slug} in the roadmap" or "Move forward with the next item in {roadmap}". The slug is taken from the roadmap's items.yaml; do not create a new one. Before drafting, read the roadmap main document and items.yaml to understand the context and dependency status. When finalizing, add the `roadmap` / `roadmap_item` fields to the frontmatter, and update items.yaml to change the corresponding item's `status` to `in-progress` and fill `feature` with the feature directory name. See "Start from roadmap item" below for details.
Initialization mode: Help user create directory and intent draft
Trigger: The user wants to write a semi-finished design (`{slug}-intent.md`) as input for subsequent design, but doesn't want to create the directory manually.
Actions:
- Quickly align two things with the user — a one-sentence requirement summary + the finalized slug (lowercase letters, numbers, hyphens; e.g., `user-auth`, `export-csv`). Use the current date (`currentDate`) in the frontmatter. The feature directory is named `YYYY-MM-DD-{slug}`.
- Create the directory `codestable/features/{YYYY-MM-DD}-{slug}/`.
- Write an empty `{slug}-intent.md` as a draft skeleton, with the following content:

  ```markdown
  ---
  doc_type: feature-intent
  feature: {YYYY-MM-DD}-{slug}
  status: draft
  summary: {One-sentence requirement, filled by AI based on alignment with user}
  ---

  # {slug} intent

  ## Background / Why do this
  (One sentence is enough)

  ## Rough implementation plan
  (Approximately 100 words describing the idea, including key steps / data flow)

  ## Related data structures / types
  (Paste related types, interface signatures, or point to code locations)

  ## Known scope exclusions / pending items
  (Optional: Clear boundaries or areas you haven't figured out yet)
  ```

- Inform the user "The skeleton has been created. Come back to me after filling it out, and I'll write the formal design based on the intent", then end this round and do not proceed with the design workflow.
Why stop here? The value of intent is to let the user think offline and put their thoughts on paper. If the AI keeps asking questions, intent mode degrades into brainstorm and loses its meaning.
Start from roadmap item
Trigger: The user says "Start working on {sub-feature slug} in the roadmap", "Move forward with the next item in {roadmap-slug}", or points to a `planned` item in `codestable/roadmap/{roadmap-slug}/{roadmap-slug}-items.yaml`.
Actions:
- Read roadmap context — Open `{roadmap-slug}-roadmap.md` and `{roadmap-slug}-items.yaml`:
  - The target item's `status` must be `planned` — if it's `in-progress`, the design is already underway (continue the existing work); if it's `done`/`dropped`, stop and ask the user.
  - All prerequisite items in the target item's `depends_on` must be `done` — if any are `planned`/`in-progress`, the order is wrong. Stop and tell the user "Prerequisite {X} is not completed yet. It is recommended to finish it first, or confirm whether to adjust the roadmap order".
  - Read the "Notes" of this item in the main document and the overall "Scheduling ideas" to understand its position in the large requirement.
- Take the slug from the roadmap — The feature directory is named `YYYY-MM-DD-{slug of the roadmap item}`, using the current date. Do not create a new slug, otherwise items.yaml and the feature will not match.
- Create the feature directory and proceed to the "Workflow" section as usual.
- Add two additional fields to the design frontmatter:

  ```yaml
  roadmap: {roadmap-slug}
  roadmap_item: {sub-feature slug}
  ```

- When finalizing the design with `status: approved`, update items.yaml:
  - Find the item with `slug: {roadmap_item}`.
  - Change `status: planned` → `status: in-progress`.
  - Change `feature: null` → `feature: YYYY-MM-DD-{slug}` (the feature directory name).
  - Validate with `python codestable/tools/validate-yaml.py --file {path} --yaml-only`.
- Report: Inform the user that the roadmap has been updated, and the next step is the implement phase.
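As a concrete illustration of the finalization update above, an items.yaml entry might look like this after approval. The field names `slug` / `status` / `feature` / `depends_on` come from this workflow; the slugs, date, and comments are hypothetical:

```yaml
# Hypothetical items.yaml entry after the design reaches status: approved
- slug: export-csv
  depends_on: [user-auth]          # prerequisites must already be done
  status: in-progress              # was: planned
  feature: 2024-06-01-export-csv   # was: null — now the feature directory name
```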
See Section 2.5 of `codestable/reference/shared-conventions.md` for the complete handover protocol.
What to include and exclude in the design
The design only focuses on one thing: Lock down the parts of this feature that would be very expensive to fix if decided incorrectly. The project code may have 10,000 lines, but the logic layer (orchestration + entities) is only about 100 lines — the design covers this 100-line side, leaving the remaining implementation details to the implement phase.
Specifically, include three types of content:
- Nouns — New/changed entities, data structures, external contracts, type definitions. This is the shared language for implement and acceptance phases; omitting them will lead to misalignment.
- Verb skeleton — Key orchestration, main workflow, progress steps, critical branches. Not pseudocode, but "what order this thing happens in, which steps it goes through".
- Cross-layer disciplines — Constraints that seem like implementation details but can only be found through manual review if decided incorrectly: error semantics (rollback or retry on failure, what to return externally), idempotency, concurrency/order, extension point locations, observability points. See the third item below "Every feature must be uninstallable" for the specific implementation of this category at the feature level.
Exclude: How to write loops, how to split helper functions, exception-catching code, log formats, indentation styles, non-critical library selections. These are decided by the implement phase — if decided incorrectly, acceptance tests can catch them, so they don't need to take up space in the design. Library selections involving long-term architectural constraints or external contracts are still critical decisions and should be included in Section 1.
Criteria: If a piece of content can be caught by acceptance tests if implemented incorrectly, it does not need to be included in the design; if it can only be found through manual review if implemented incorrectly, it must be included in the design.
The design document is for scanning, not reading
The entire writing style of the design revolves around this principle. Readers open `{slug}-design.md` to grasp the key points within 5 minutes, and know where to look for details when needed — not to read every word carefully. This principle leads to several specific practices:
- Cut or split any section that exceeds one screen. If it can't fit on one screen, readers will lose their sense of orientation.
- Lock down terms first. Run a grep for all new terms before drafting, covering code, architecture central directory, and design documents of all features. The cost of term conflicts is that others will look in the wrong place when checking code — the cost of prevention is far lower than the cost of sorting it out afterwards.
- Examples take precedence over definitions. First use specific examples for interface behavior (input→output for APIs, Props→rendering/Events examples for components), then supplement with formal types if complex. Readers can build a model faster by seeing specific input and output than by reading an abstract description.
- Progress by "feature visibility", not by code file order. First build the minimal closed loop (an end-to-end runnable path), then add details. This way, each step can be independently verified, and if a deviation is found along the way, only one step is lost.
- New logic is placed in new files by default. New cohesive logic units are placed in independent files by default, instead of appending to existing files. Each item in the change plan is marked as "Create new file" or "Append to existing file (reason)". The reason is that larger files make it harder to distinguish responsibilities, and adding things to old files will make future developers read irrelevant changes when checking git blame.
- The same information appears only once in the most natural location. Repetitive statements will make readers repeatedly confirm "Are these two the same thing?", which is more annoying than missing one.
- Workflow first, template second. Complete the workflow below first, then fill in the content according to the template. Don't switch workflows while writing.
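To make the "Create new file / Append to existing file" marking from the change-plan practice above concrete, entries might be written like this — the file paths and exact format here are illustrative only; the real change-plan format lives in `reference.md`:

```yaml
# Hypothetical change-plan entries; paths are examples only
change_plan:
  - file: src/export/csv-writer.ts
    action: create new file          # new cohesive logic unit → its own file
  - file: src/routes/index.ts
    action: append to existing file
    reason: route registration must live in the existing router table
```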
Two judgment disciplines during drafting
In addition to writing style, there are two more fundamental disciplines that determine whether this design can truly support implement/acceptance — rather than merely looking like a design while actually being full of vague commitments.
1. Don't make decisions for the user; explicitly state uncertainties
When encountering "unclear corners that the user didn't explain" while writing the design, the default action is to stop and ask, not to pick one and fill it in yourself. Specifically, this applies to several things:
- State assumptions: Every judgment that is not directly stated by the user (input boundaries, error handling, edge behaviors) is written as "Assumption: ...", allowing the user to refute it precisely.
- Provide options instead of choosing yourself: If there are 2-3 reasonable approaches for a point, first list all candidates and then explain your preference, allowing the user to switch during review.
- Mention simpler alternatives: When the user's direction seems to achieve the same goal in a lighter way, explicitly say "There is a simpler approach: ..., should we rule it out first?". Don't stay silent just because the user has already stated the direction.
- Stop if you don't understand: If you are not sure you understand the requirement correctly, directly say "I'm not sure about this section", don't guess and continue writing.
The cost is concrete: Decisions made secretly will become "special logic introduced by AI on its own" during implementation, and will not match the acceptance criteria during the acceptance phase. Design is the last checkpoint to bring all "things everyone thinks they understand" to the surface.
2. Write both goals and constraints in verifiable terms
The output of the design will be followed by the implement phase and checked by the acceptance phase — so every goal and constraint in the design must be independently verifiable:
- Don't use weak standards like "Make it work". Phrases like "Complete X function" / "Handle errors properly" / "Smooth user experience" essentially push the verification responsibility downstream. Rewrite them as "Return B when input is A" / "Display prompt Y when error X occurs".
- Every progress step has an exit signal. Just writing "Implement XX" is not enough; write "After completion, it can pass {specific test / operation steps}".
- "Explicitly not doing" must also be verifiable. "Not doing XX" must be specific enough to be checked reversely via grep or tests, not empty phrases like "Avoid over-design".
The check items "Each step is independently verifiable" and "Test design is organized by feature points" in the exit conditions are the implementation of this discipline — if these sections are written vaguely, it means this discipline has not been implemented.
3. Every feature must be uninstallable
From the first day a feature is added to the project, it must answer one question: If we want to remove it later, which parts need to be removed? If this cannot be clearly answered, it means the boundary between it and existing code has not been thought through — once such a feature goes online, it becomes an "immovable fact" that can only be kept even if no one uses it anymore.
In terms of design, this is a specific task: List a mount point checklist in Section 1 "Decisions and Constraints", listing where this feature is mounted into the project — new/modified routes, module imports, configuration items, database fields and tables, scheduled tasks, event subscriptions, public UI injection points, feature flags, etc. Each item must be specific to a file or configuration key, with granularity sufficient for the acceptance phase to verify complete removal reversely according to this checklist.
This checklist also serves two additional purposes: First, it helps you discover during the design phase that you have accidentally inserted too many stakes (more mount points mean more scattered coupling, which is a signal); second, each item in the change plan during the implement phase can be mapped to a mount point, so you know if you have missed something.
Visible mount points do not mean dynamic switching must be supported — most features do not need feature flags, but every feature must be able to be manually and orderly removed, which is the bottom line.
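A mount point checklist for Section 1 might be sketched like this. The kinds are taken from the list above; every path and key shown is hypothetical:

```yaml
# Hypothetical mount point checklist — each entry names a concrete file or config key
mount_points:
  - kind: route
    where: src/routes/index.ts              # adds GET /export/csv
  - kind: config
    where: config/app.yaml                  # key export.csv_enabled
  - kind: db
    where: migrations/20240601_add_export_jobs.sql
  - kind: event subscription
    where: src/events/bus.ts                # subscribes to order.completed
```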
Workflow: What to do when
1. Startup check
Go through these before drafting. The structure is Pre-gate + 4 mandatory items + 4 signal-triggered items — separating "things to do every time" and "things to do only when needed" to avoid the startup phase becoming a 12-step ritual.
Pre-gate: Is the requirement input clear?
Confirm at least these four items are present (source can be intent / brainstorm / conversation summary): User goals, core behaviors, success criteria, explicit scope exclusions. If any are missing, supplement them first; if the user can't explain clearly, roll back to brainstorm phase.
4 mandatory items
- Continuation check — Glob `codestable/features/{feature}/{slug}-design.md` / `{slug}-intent.md` / `{slug}-brainstorm.md`:
  - If `{slug}-intent.md` exists: Treat it as the user's design input, do not repeat questions about already clarified parts, only follow up on uncovered areas.
  - If `{slug}-brainstorm.md` exists: Treat it as conversation input.
  - If the design file does not exist, or only has an empty template / frontmatter → Treat as new creation.
  - If design `status=draft` and each section is basically complete → Last time it was written but not reviewed, jump to "5. Overall review" in this workflow.
  - If some sections of the design are missing → Only supplement the missing sections, report "Last time we wrote up to Section X, we'll complete the rest and send you a unified review".
  - If design `status=approved` → Do not overwrite silently, ask the user whether to continue modifying or create a new slug.
- Read architecture — Architecture main entry `codestable/architecture/DESIGN.md` + index + subsystem architecture docs related to the requirement. Focus on existing nouns (can they be reused / will there be conflicts) and cross-layer disciplines (what this feature must comply with). Writing without reading will most likely result in a design that is disconnected from reality.
- Align requirement — Glob + grep in `codestable/requirements/`:
  - If a corresponding req exists: Record the slug in the `requirement` field of the design frontmatter; read the "User story" and "Boundaries" sections of the req before drafting, the design must not conflict.
  - If no corresponding req exists, but this feature adds/changes user-perceivable capabilities: Stop, prompt the user to trigger `cs-req` to draft or update the req.
  - For pure internal refactoring / technical debt / toolchain: Leave the `requirement` field in frontmatter empty, write in Section 1 "This feature does not add new capabilities, no corresponding requirement".
- Read existing code related to the requirement — Specific files to read are determined by requirement clues. This is the prerequisite for the design to connect with existing code.
4 signal-triggered items
The following 4 things are usually skipped due to laziness, and are not necessary every time. Do them only when there is a signal; skip if no signal — forcing them into mandatory items will turn the startup phase of small features into a ritual.
- Grep terms to prevent conflicts
  - Trigger: The key concept name to be introduced by this feature does not seem to exist in code / architecture / historical features.
  - Action: Grep covers code + `codestable/architecture/` + design documents of all features. If there is a conflict, change the name, or explicitly state in Section 0 "In this document, X refers to Y, which is not the same as X' in the code".
- Align complexity tiers
  - Trigger: Signals in the requirement that may deviate from the default tier — "External SDK" (readability rises from team to public), "High concurrency / low latency" (performance rises from reasonable to budgeted), "Pure exploration script / one-time tool" (robustness drops from L2 to L1).
  - Action: Open `codestable/reference/code-dimensions.md`, match the default combination according to the scenario, list the deviation points and reasons for user confirmation. After confirmation, write into the "Complexity tiers" subsection of Section 1, only recording dimensions that deviate from the default.
  - No signal: Write one sentence in Section 1 — "This feature follows the default tier for {scenario}, no deviations" — no need to open the tier table.
- Grep for "similar modules with different names"
  - Trigger: Intuition that this feature "may have been done before but with a different name" — common in general tools, abstract capabilities, cross-module functions.
  - Action: Grep several synonyms to find candidate modules, confirm whether to extend the existing implementation instead of creating a new one.
- Archive retrieval
  - Trigger: Keywords clearly look like something that has been documented before (a decision, a pitfall, an exploration conclusion, a historical feature).
  - Action: First run `python codestable/tools/search-yaml.py --dir codestable/compound --query "{keyword}"`, then filter by `doc_type=decision/trick/learning/explore` and read carefully; do the same for historical features — `python codestable/tools/search-yaml.py --dir codestable/features --filter doc_type=feature-design --query "{keyword}"`.
  - If hit, prioritize reuse and record the reference source in the design document.
See Section 5 of `codestable/reference/shared-conventions.md` for detailed rules.
2. Figure out where this feature should be placed
Before writing the change plan, first think about a more fundamental question: Where does the thing we are adding belong in the overall project structure?
Specifically, ask several questions:
- Is this something that an existing module should be responsible for? If yes, extend that module, don't create a new one outside.
- Does this span multiple modules — should we extract a common layer and place it in the middle, or let one party lead and others depend on it?
- Does this not fit well with any existing module? Then we may need to create a new independent module / subsystem — we need to figure out in advance where to place the new module, what to expose externally, and how to interact with others.
- Is there already a module doing similar things but with a different name that you didn't notice? Grep the project, don't reinvent the wheel just because of different naming.
The cost of wrong placement is concrete: Putting it in a module that shouldn't be responsible for it will make that module gradually become a "basket for everything", with increasingly vague responsibilities; creating parallel implementations every time will result in multiple versions of the same thing coexisting in the project, making future maintenance require guessing which one to use.
Write the conclusion into Section 1 "Decisions and Constraints" of the design document — at least clearly state "This feature is placed in {module/layer}, because {brief reason}". When involving new modules or cross-module interfaces, simultaneously write into Section 4 "Relationship with project-level architecture documents", and prompt to add a link to the architecture main entry `DESIGN.md`.
The default mistake AI will make in this step is adding the feature to the most convenient file at hand without thinking — skipping this step and jumping straight to Step 3 leads to exactly this mistake.
3. Check the current state of the files to be modified
After figuring out which module to place it in Step 2, before writing the change plan, go one level deeper to see the current state of this file (or class) — can it cleanly accept the new code?
Look at several dimensions:
- How long is this file now? How many responsibilities does it take on? Is the new content an extension of existing responsibilities, or the N+1th thing?
- How many methods does this class have? Is the new method a natural extension of the same responsibility, or does it push this class towards "can do everything"?
- For frontend-related content, also check if the component tree hierarchy is too deep, and if the state ownership is clear (local state / props passing / global store).
Branch according to severity:
| Situation | Handling |
|---|---|
| Healthy state, can add directly | Proceed normally, no additional actions |
| Should clean up first (split a too-long file into several, extract a too-heavy function) | Include it as Step 1 in the "Progress order" subsection of Section 3 "Implementation hints", lock the scope to "Only move without changing behavior", the exit signal is "Existing functions remain unchanged after moving" |
| Structural issues (responsibilities need to be redefined, modules need to be split/merged, interfaces need to be redesigned) | Record it as a prerequisite dependency in Section 1, suggest splitting into an independent feature to solve first; the current feature is suspended or marked "Proceed after prerequisites are completed" |
Why do this step? Forcing features into already messy files will result in even messier files, making the next change even harder. Putting "Should we clean up first" on the table in advance allows the user to make the decision, instead of AI secretly including it in the PR.
Write the conclusion at the beginning of the "Change plan" subsection in Section 3 "Implementation hints" of the design document (see `reference.md` in the same directory for the specific format). No need to write anything for "Healthy, add directly" — only record when there are actions.
4. Complete the remaining sections, submit the full draft for review at once
Steps 2 and 3 have already written the key conclusions of Section 1 "Decisions and Constraints" and Section 3 "Implementation hints" into the document. In this step, complete the remaining sections (Sections 0 / 2 / 4, and the parts of Sections 1 / 3 not covered in Steps 2/3) according to the template below. Submit the full draft to the user only after it is complete; do not let the user review semi-finished products in batches. Set `status` to `draft` in the YAML frontmatter of the first draft.
Note that "at once" refers to the number of review rounds for the user, not the number of times the file is written — the file itself can be written in several rounds, but is only sent out after the full draft is complete.
Why not review in batches? The problem with batches is that the user only sees parts each time, and cannot find cross-section issues like "The scope in Section 1 does not match the progress steps in Section 3". Only when the complete first draft is presented can the user scan for global consistency.
5. Overall review
Send one overall review prompt to the user. If the user provides modification suggestions for any part, revise according to the suggestions and confirm again, repeating until the user explicitly says "The design is okay". After the user approves, change the `status` in the frontmatter from `draft` to `approved`.
6. Generate {slug}-checklist.yaml
After the design is confirmed, extract the action checklist from `{slug}-design.md` and save it as `{slug}-checklist.yaml` in the same directory. See `codestable/reference/shared-conventions.md` for the lifecycle of this checklist: this phase is responsible for generating it, the implement phase only progresses `steps`, the acceptance phase only checks `checks`. Each of the three phases manages its own part, without crossing boundaries — this way, each phase can see its work progress from the yaml.
The complete templates, frontmatter examples, section anchors, and extraction formats for `{slug}-design.md` and `{slug}-checklist.yaml` are in `reference.md` in the same directory. This skill only retains the extraction principles:
- `steps`: Extract step by step from the "Progress order" subsection of Section 3 "Implementation hints", one step per entry.
- `checks`: Extract comprehensively from these places —
  - Each item in Section 1 "Explicit scope exclusions" → Scope guard check items.
  - Each item in Section 1 "Mount point checklist" → Uninstallability check items (`source: mount point`; the acceptance phase uses this to verify reversely "Can it be completely removed according to the checklist").
  - Key interface contracts in Section 2 → Interface consistency check items.
  - Each test constraint in the "Test design" subsection of Section 3 "Implementation hints" → Test verification check items.
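Putting the extraction principles together, a `{slug}-checklist.yaml` might look roughly like this. The authoritative template and status semantics are in `reference.md`; everything below beyond the `steps` / `checks` / `source` keys named in this skill — the other field names, the status values, the example texts — is an assumption:

```yaml
# Hypothetical checklist sketch; see reference.md for the real template
doc_type: feature-checklist
feature: 2024-06-01-export-csv
steps:          # from Section 3 "Progress order" — implement phase progresses these
  - desc: Split csv-writer out of report.ts (move only, no behavior change)
    status: todo
checks:         # acceptance phase checks these
  - desc: No module outside src/export/ imports csv-writer (scope guard)
    status: todo
  - desc: All mount points in Section 1 removable per checklist
    source: mount point
    status: todo
```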
After saving, validate the syntax with `validate-yaml.py --file {path to slug-checklist.yaml} --yaml-only`.
7. Exit
After checking against the exit condition list below, guide the user to enter Phase 2 (implementation).
Templates and formats
The complete references for `{slug}-design.md` / `{slug}-checklist.yaml` are split into `reference.md` in the same directory:
- YAML frontmatter examples
- Top-level section anchor requirements
- Complete format and status semantics of `{slug}-checklist.yaml`
- What each of sections 0-4 should include
This skill only retains workflow-level constraints: Draft the complete first draft at once according to that reference, do not output semi-finished products in batches.
The prompt for the overall review is also in `reference.md`. The rule remains: send only one overall review, do not confirm section by section.
Exit conditions
The user has approved the overall review, and all of the following are satisfied:
- Term grep for conflict prevention has been done and results recorded
- Complexity tiers have been aligned: Deviated dimensions have been recorded in the "Complexity tiers" subsection of Section 1, or it has been explicitly confirmed that all follow the default
- Requirements have been aligned: Either the `requirement` field in frontmatter is filled with the corresponding slug, or the design explicitly states "This feature does not add new capabilities, no corresponding requirement"
- YAML frontmatter exists, and `doc_type` / `feature` / `status` / `summary` / `tags` are all filled
- The requirement summary includes "what not to do", and there is no secret scope expansion later
- The mount point checklist is complete: Each item is specific to a file or configuration key, with sufficient granularity for the acceptance phase to verify uninstallability reversely
- Key decisions and rejected solutions have been recorded
- Each key interface has specific examples (API: input→output; component: Props→rendering/Events), covering normal paths and main error paths
- Examples are marked with source locations (file path + function/component name) via comments
- Progress steps are 4-8 steps, each can be independently verified
- Test design is organized by feature points, each feature point has test constraints / verification methods / key use case skeleton
- High-risk implementation constraints have been recorded
- After user confirmation, the `status` in frontmatter has been changed to `approved`
- `{slug}-checklist.yaml` has been extracted from `{slug}-design.md` and validated with `validate-yaml.py`
- The number of `steps` entries in `{slug}-checklist.yaml` matches the "Progress order" subsection of Section 3 "Implementation hints"
- If this feature starts from a roadmap item: Frontmatter includes the `roadmap` / `roadmap_item` fields; the corresponding item in `codestable/roadmap/{roadmap}/{roadmap}-items.yaml` has been updated to `status: in-progress`, `feature` filled with the feature directory name, and the yaml has been validated with `validate-yaml.py`
File path: The design document is under `codestable/features/{feature}/`; if the feature directory does not exist, create it in this step. See Section 0 of `codestable/reference/shared-conventions.md` for naming conventions.
Common pitfalls
The following are recurring anti-patterns from the past. When encountering them, stop and ask yourself if you have fallen into them again:
- Starting to write without reading relevant architecture documents — the resulting design will most likely not match existing code
- Not doing term conflict prevention checks — it will take ten times longer to find the reason via git blame after conflicts occur
- Describing interface behavior with prose without providing specific examples — readers cannot build a model and cannot judge during review
- Writing the contract layer as a full-field encyclopedia — do not copy existing and unchanged interfaces repeatedly
- Forcing diagrams in — if there are ≤2 modules and the calls are linear, a diagram blurs the focus instead
- Splitting progress steps too finely (>8 steps) — so fine that each step has no independent value
- Only providing half the document for user review first — the user cannot see global consistency
- Secretly expanding the scope in the requirement summary or change plan — will not match during later acceptance