# cs-learn
Every time you work on a feature or fix an issue, you leave behind spec files. However, spec documents record "what was done" and "how it was done"; they do not record "what pitfalls were encountered" or "what better practices were discovered". Teams without knowledge reuse keep solving the same problems repeatedly: a problem takes real research to solve the first time, but only a few minutes once it has been documented. cs-learn is designed to add a "learning card" for each non-trivial engineering practice.
Two tracks:
- Pitfall track: Record the problems encountered, root causes, and solutions to avoid falling into the same traps next time
- Knowledge track: Record discovered best practices, workflow improvements, and reusable patterns
Both are stored in the `codestable/compound` directory (shared with other knowledge accumulation sub-skills; refer to Section 1 "Archived Documents" in `codestable/reference/shared-conventions.md` for classification rules), with a unified format that both future AI and humans can retrieve. Documents generated by this skill carry `doc_type: learning` in their frontmatter and are named `YYYY-MM-DD-learning-{slug}.md` (starting with the date, with the fixed type segment `learning`), which serves as this skill's identifier in the shared directory.
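As a quick sanity check, the naming rule above can be sketched in shell; the slug value here is a hypothetical example chosen at archive time:

```shell
# Sketch: derive the archive filename for a new learning card.
# "retry-backoff" is a hypothetical slug; substitute the real one.
slug="retry-backoff"
archive_date="$(date +%Y-%m-%d)"            # archive date, not the incident date
filename="${archive_date}-learning-${slug}.md"
echo "codestable/compound/${filename}"
```

The date prefix keeps the shared directory sorted chronologically, and the fixed `learning` segment lets this skill find its own documents among other doc types.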
## When to Trigger
Trigger when any of the following conditions are met:
| Scenario | Description |
|---|---|
| Completing a feature workflow | Proactively ask "Would you like to record the learnings from this work?" in accordance with codestable/reference/shared-conventions.md |
| Completing an issue workflow | Proactively ask "Would you like to record this pitfall?" in accordance with codestable/reference/shared-conventions.md |
| User-initiated trigger | Phrases like "record this", "document knowledge", "learning", "document learnings", etc. |
| Solving a one-time difficult problem | Engineering problems that took significant time to solve but are not part of a feature / issue workflow |
When proactively recommending, use a single, casual sentence. If the user says "no thanks", skip immediately and do not bring it up again — repeated prompts may make the user feel the AI is overstepping.
## What to Write for Each Track
Pitfall track applies to: Debugged bugs, bypassed configuration traps, environment issues, integration failures... all experiences where "things should have worked but didn't".
Knowledge track applies to: Discovered best practices, workflow improvements, architectural insights, reusable design patterns... all learnings that "should be the default approach going forward".
The frontmatter, body templates, and complete examples for both tracks have been moved to a template file in the same directory. This skill's documentation retains only the judgment and process rules.
## Workflow Phases
### Phase 1: Identify Source (Automatic)
Extract from the current conversation context:
- Source type: Feature workflow / Issue workflow / Independent problem
- Associated artifacts: Feature directory path / Issue directory path (if available, for reference in the document's "source" field)
- Track classification: Pitfall or knowledge. Judgment criteria — "fixed something that was broken" = pitfall; "discovered a better approach" = knowledge. If both are present, write two separate documents.
If the source is unclear, ask the user one clarifying question; do not guess.
### Phase 1.5: Check for Duplicates and Intent Routing (Mandatory)
Execute in accordance with Sections 5 and 6 of `codestable/reference/shared-conventions.md`:
- If the user's message includes wording like "modify / update / supplement" a specific learning, or clearly refers to an old document → follow the update-existing-entry path directly
- Otherwise, use the "Search Tool" below to search for the current topic / component. If similar old documents are found, present the candidates to the user and let them choose: update / supersede / genuinely different topic
Update path: read the old document → align with the user on which sections to modify (common cases: adding newly encountered pitfalls, filling in a root cause that was unidentified at the time) → draft the diff → write back to the original file and update its frontmatter; do not create a new file.
### Phase 2: Refine Key Points (Dialogue with User)
Ask one question at a time, do not give the user a large form to fill out.
For pitfall track:
- "What was the initial phenomenon you observed?"
- "Which solutions did you try that didn't work?" (Encourage users to write this even if they think it's "nothing" — failed attempts are the most valuable information for future team members; knowing which paths don't work can save a lot of time)
- "How did you finally identify the real cause?"
- "Can this be detected earlier next time? How?"
For knowledge track:
- "In what scenarios is this pattern you discovered most valuable?"
- "What problems would arise if this approach is not followed?"
- "Are there any counterexamples where this does not apply?"
If the user says "nothing" or "skip" to a question, skip it — it's better to have fewer sections than to fill the document with empty phrases.
### Phase 3: Confirm Content (AI Drafts, User Reviews)
- AI drafts the complete learning document (including YAML frontmatter + all body sections)
- Present the full draft to the user for review at once
- Write to the file after user confirmation; adjust according to user feedback if there are modifications
### Phase 4: Archive
- New document path: write to `codestable/compound`, name the file `YYYY-MM-DD-learning-{slug}.md` (use the archive date, not the date the problem occurred), and include `doc_type: learning` at the top of the frontmatter
- Update path: write back to the original file located in Phase 1.5 and update its frontmatter accordingly
- Supersede path: handle the old and new files in accordance with Section 5 of `codestable/reference/shared-conventions.md`
- Report the complete file path after writing
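A minimal sketch of the new-document path, assuming the shared directory is `codestable/compound`; only `doc_type: learning` is prescribed by this skill, and the other frontmatter fields shown are illustrative:

```shell
# Sketch: archive a new learning card with minimal frontmatter.
# Only doc_type: learning is required; track and the title are illustrative.
dir="codestable/compound"
file="${dir}/$(date +%Y-%m-%d)-learning-example-slug.md"
mkdir -p "$dir"
cat > "$file" <<'EOF'
---
doc_type: learning
track: pitfall
---
# Example learning card
EOF
echo "wrote ${file}"
```

The final `echo` corresponds to the "report the complete file path after writing" step above.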
### Phase 5: Discoverability Check
After writing, check whether an entry file such as AGENTS.md includes instructions directing AI to the `codestable/compound` knowledge accumulation directory. If not, ask the user whether to add a line. Do not modify the file without permission; only prompt, and let the user decide. Changes to entry files like AGENTS.md affect how AI is guided for the entire team, so the user should make the final call.
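The check itself can be as simple as a grep. AGENTS.md here stands in for whatever entry file the project actually uses, and this sketch only reports; it never edits:

```shell
# Sketch: report whether an entry file already points AI at the
# knowledge directory. Prompt-only: this never modifies the file.
entry="AGENTS.md"   # substitute the project's real entry file
if [ -f "$entry" ] && grep -q "codestable/compound" "$entry"; then
  echo "discoverability OK: $entry mentions codestable/compound"
else
  echo "suggestion: add a pointer to codestable/compound in $entry (user decides)"
fi
```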
## Search Tool
Complete syntax and examples can be found in `codestable/reference/tools.md`. This section lists only the typical queries specific to learning documents.
```bash
# Filter high-severity pitfalls by track
python codestable/tools/search-yaml.py --dir codestable/compound --filter doc_type=learning --filter track=pitfall --filter severity=high

# Search relevant learnings by component
python codestable/tools/search-yaml.py --dir codestable/compound --filter doc_type=learning --filter "component~={component name}"

# Check for duplicates after archiving
python codestable/tools/search-yaml.py --dir codestable/compound --filter doc_type=learning --filter "tags~={main tag}" --json
```
## Guard Rules
Shared guard rules for archiving workflows (add-only, quality over quantity, do not write on behalf of users, discoverability, check for duplicates after archiving) can be found in Section 6 of `codestable/reference/shared-conventions.md`. Rules specific to this skill:
- Do not mix with spec: learning documents are not spec documents and must not be placed in spec directories; likewise, spec documents must not be placed in `codestable/compound`
- Only recognize its own doc_type: read and write only documents with `doc_type: learning`; do not perceive documents with other doc_type values in the directory