# cs-trick
cs-trick is a problem-oriented, prescriptive reference library that answers one question: what is the verified, correct approach to doing X? No trigger event is required; write an entry whenever you find a pattern or usage worth archiving.
Typical content:
- Standard implementation of a design pattern in this project
- Core API usage + known pitfalls of a certain library/framework
- Command recipes for certain operations (debugging, deployment, data processing...)
Refer to `codestable/reference/shared-conventions.md` for shared paths and naming conventions. The output of this skill is written to the archive directory defined there, with file names of the form `YYYY-MM-DD-trick-{slug}.md` and the frontmatter fields it specifies.
## Three Document Types
Each trick document belongs to one of the following three categories, marked in the `type` field of the frontmatter:

| Type | Applicable Scenarios | Examples |
|---|---|---|
| | Design patterns, architectural patterns, programming idioms | "Use the Repository pattern to isolate the data access layer", "Use the Builder pattern to construct complex configuration objects" |
| `library` | Usage, configuration, and common pitfalls of a specific library/framework | "Correct implementation of Prisma transactions", "Error handling for Pinia store actions" |
| | Specific operation skills, tool usage, command recipes | "Extract nested fields from JSON using jq", "Locate bug-introducing commits with git bisect" |
Each type serves a different purpose at query time: "how to organize code" questions map to the first type, "how to use a certain API of this library/framework" to the second, and "how to perform this kind of operation" to the third. Choose the closest one if unsure; the choice does not affect search usability.
## Document Format
The frontmatter schema, body template, and long examples for trick documents have been split into a shared template in the same directory. This skill retains only judgment and process rules:
- The type field allows only the three values listed above
- Prefer real code or commands from the project for examples
- Sections such as "When not applicable", "Known pitfalls", and "Related documents" are optional; omit them if the user says "nothing"
## Workflow Phases
### Phase 1: Identify the Type (Dialogue with the User)
Confirm the core information with at most two questions:
- "Is this about patterns/structures, usage of a certain library/framework, or operation skills/commands?" → determines the type
- "In one sentence: when would this be used?" → determines the applicable scenario
If the user's description is already clear enough, skip the questions and proceed directly to Phase 1.5.
### Phase 1.5: Check for Duplicates and Intent Diversion (Mandatory)
Execute per Section 6, Items 5/6 of `codestable/reference/shared-conventions.md`:
- If the user's wording includes "modify/update/revise/supplement a certain trick" or clearly points to an existing document → take the update-existing-entry path directly instead of the creation process; search only to locate the entry in question
- Otherwise, use the queries under "Search Tools" below to search the archive; when you hit semantically similar existing documents, present the candidates and let the user choose (update / supersede / confirm it is a different topic) before proceeding to Phase 2

Process for updating an existing entry: read the old document directly → align with the user on which sections to modify → skip the full code investigation of Phase 2 (but re-read the code involved in the modified sections to confirm it is still valid) → draft a diff for user review → write back to the original file and update its frontmatter accordingly.
### Phase 2: Code Investigation (Mandatory, Cannot Be Skipped)
Skills are reflected in code: the user not attaching code does not mean there is no code to check. The AI must actively investigate the repository to find the skill's actual implementation.
Why is this mandatory? A "skill" written without checking code stays abstract; the next time someone looks for code based on it, they will find no real examples, which reduces confidence instead of building it.
- Search the code repository based on topic and type:
  - Grep for keywords (function names, class names, library imports, pattern signatures)
  - Search related files (by file name, directory structure, import path)
  - Supplement with semantic search if necessary
- Read the key files:
  - Locate where the skill is actually used or implemented, and read the surrounding context
  - For library tricks: find the library's import statements and call sites
  - For pattern tricks: find the structural code of the pattern (interface definitions, class inheritance, composition relationships)
  - For operation tricks: find the scripts or configurations corresponding to the operation steps
- Output:
  - Record the file paths and key code snippets found as the factual basis for drafting
  - If no relevant code exists in the repository (purely experiential skills, external tool usage), state "No in-project code examples available for this skill" when drafting in Phase 3
Additional cases:
- The user attaches a file → still search the repository to confirm whether there are other usage sites or related code
- The search comes up empty → proceed (some skills are genuinely "for future use"), but note it in the document
- The found code contradicts the user's description → raise it with the user proactively; do not write blindly
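The search-and-read sequence above can be sketched as runnable shell commands. The library name (`@prisma/client`, borrowed from the examples earlier in this document) and all paths are placeholders; a scratch directory stands in for the real repository so the commands run as-is:

```shell
# Scratch "repository" standing in for the real project.
repo=$(mktemp -d)
mkdir -p "$repo/src"
cat > "$repo/src/db.ts" <<'EOF'
import { PrismaClient } from '@prisma/client'
export const prisma = new PrismaClient()
EOF

# Step 1: grep keywords. Import statements reveal where the library is used.
grep -rn "@prisma/client" "$repo/src"

# Step 2: search related files by name and directory structure.
find "$repo/src" -name '*.ts'
```

In a real investigation, the hits from these commands are the files to read in full; their paths and snippets become the factual basis recorded in the Output step.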
### Phase 3: Refine Key Points (One Question at a Time)
Ask in the following order; the user can say "nothing" to skip any question. Use the code found in Phase 2 to inform and supplement the questions, and do not ask about things already visible in the code:
- "What is the standard approach?" (or "What is the core API / what are the steps?"); if the implementation was already found during code investigation, present your understanding for the user to confirm
- "Why does this work? Is there a principle behind it?"
- "Are there counterexamples, i.e. situations where it should not be used?" (optional)
- "Have you hit any pitfalls, or is there anything to watch out for?" (optional; focus on this for library tricks)
- "Are there code snippets or command examples?" (skip this question if actual code was found in Phase 2; use the real code as the basis for examples)
If the user says "nothing" or "skip" for a question, omit that section rather than padding it with filler.
### Phase 4: Confirm Content (AI Drafts, User Reviews)
- The AI drafts the complete document (YAML frontmatter plus all body sections) from the dialogue and the Phase 2 investigation results
- Base example code on the real project code found in Phase 2 (simplified if needed); do not write examples from scratch
- Present the complete draft for review in one pass; do not show it section by section and confirm piecemeal
- Write the file after user confirmation; if the user requests changes, adjust accordingly
### Phase 5: Archiving
- New-entry path: write the file to the archive directory, name it `YYYY-MM-DD-trick-{slug}.md`, and add the required fields at the top of the frontmatter
- Update path: write back to the original file located in Phase 1.5 and update its frontmatter
- Supersede path: handle the old and new files per Section 6, Item 5 of `codestable/reference/shared-conventions.md`
- Report the complete file path after writing
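A minimal sketch of the new-entry path. The slug is hypothetical, the target directory here is `/tmp` rather than the real archive, and the frontmatter stub shows only the `doc_type`/`status` fields that appear in the search filters elsewhere in this document; the full schema lives in the shared template:

```shell
# Hypothetical slug agreed on with the user.
slug="prisma-transactions"

# The YYYY-MM-DD-trick-{slug}.md naming convention from this document.
fname="$(date +%F)-trick-${slug}.md"

# Minimal frontmatter stub; field names match the search filters used in the
# "Search Tools" section. Written to /tmp here instead of the real archive.
cat > "/tmp/${fname}" <<'EOF'
---
doc_type: trick
status: active
---
EOF

echo "wrote /tmp/${fname}"
```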
### Phase 6: Discoverability Check
After writing, check whether the project's top-level instruction files contain guidance directing the AI to the archive directory. If not, ask the user whether to add a line; do not modify the file without permission, only prompt and let the user decide.
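A hedged sketch of this check, run in a scratch directory so it is harmless as written. `CLAUDE.md` is an assumed name for the instruction file, and `codestable/compound` is the archive directory used in the search examples below; substitute whatever your project actually uses:

```shell
# Scratch directory with a stand-in instruction file that has no pointer yet.
demo=$(mktemp -d)
cd "$demo"
printf '# Project notes\n' > CLAUDE.md

# Does the instruction file already direct agents to the archive directory?
if grep -qs "codestable/compound" CLAUDE.md; then
  echo "pointer present, nothing to do"
else
  echo "no pointer found: prompt the user; do not edit the file unprompted"
fi
```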
## Search Tools
See `codestable/reference/tools.md` for complete syntax and examples. This section lists only typical queries specific to tricks.
```bash
# Filter by type + framework
python codestable/tools/search-yaml.py --dir codestable/compound --filter doc_type=trick --filter type=library --filter framework~={library-name}

# Browse by tech stack
python codestable/tools/search-yaml.py --dir codestable/compound --filter doc_type=trick --filter language=typescript --filter status=active

# Check for duplicates after archiving
python codestable/tools/search-yaml.py --dir codestable/compound --filter doc_type=trick --query "{keywords}" --json
```
## Guard Rules
The shared guard rules for archiving workflows (add-only, quality over quantity, do not write on the user's behalf, discoverability, post-archive duplicate checks) are in Section 6 of `codestable/reference/shared-conventions.md`. Rules specific to, or refined for, this skill:
- Archive only verified approaches: "maybe we should do this" is not archived; document content must be confirmed effective by the user or the AI
- Always investigate the code repository: the user not attaching code does not exempt you from checking, and the Phase 2 investigation cannot be skipped. Prefer real project code for examples; do not write them from scratch
- Do not write principles on the user's behalf: if the user cannot explain why it works, write "Principle to be supplemented" rather than letting the AI fabricate a plausible-sounding explanation
- Examples take precedence over descriptions: explain with code whenever possible, not with prose alone
- Recognize only this skill's doc_type: read only documents with `doc_type: trick`; do not touch other documents in the directory