cs-issue-report
This stage has two objectives: convert the user's problem into a structured record, and determine whether the issue should follow the standard path or the fast track.
There is one core principle for writing the report: record only phenomena, not root causes. When the user says "I think it's a problem with the XX component", note "User suspects XX component" as a clue, but do not discuss the root cause further. Root causes must be confirmed by actually reading the code in Stage 2, not guessed. A report mixed with root-cause speculation will mislead the analysis direction in Stage 2, causing analysts to waste time on incorrect leads.
Refer to Section 0 of codestable/reference/shared-conventions.md and the "Where to place files" section for shared paths and naming conventions.
Startup Checks
1. Confirm it's a bug, not a new feature request
If the user describes "wanting to add feature X", inform them to follow the corresponding workflow instead.
2. Check for existing issue directories
Glob the subdirectories under the issues root to see whether similar issues have already been recorded. If so, first confirm with the user whether to create a new report or update the existing one.
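A minimal sketch of this check, assuming a hypothetical `docs/issues` root (the real path comes from shared-conventions.md) and a made-up "submit" keyword taken from the user's description:

```python
from pathlib import Path

# Hypothetical root: the real location is defined in shared-conventions.md.
issues_root = Path("docs/issues")

# Collect existing issue directories whose slug mentions a keyword from the
# user's description ("submit" here is purely illustrative).
existing = sorted(d.name for d in issues_root.glob("*submit*") if d.is_dir())
```

If `existing` is non-empty, show the matches to the user before creating anything new.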
3. Fast Track Judgment (This is the only official decision point)
Based on the clues from the user's description, first review the relevant code (locate using Grep / Read) to determine if the root cause can be identified at a glance:
- Yes — the root cause is clear and can be pinpointed, the fix is small (1-2 places), and there is no cross-module impact risk → inform the user: "I have identified the issue. We can take the fast track: I will state the root cause and fix plan directly; after you confirm, I will fix it immediately, you verify the fix, and only a brief closing record then needs to be written." Trigger fast track mode after the user agrees.
- No — multiple candidate root causes / uncertain / more reproduction information needed → proceed with the standard path below and create a complete issue report. Once on the standard path, do not re-evaluate for the fast track by default.
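The fast-track gate above can be sketched as a checklist. The function name and parameters below are illustrative, not part of any real API:

```python
def fast_track_eligible(root_cause_clear: bool,
                        fix_sites: int,
                        cross_module_risk: bool) -> bool:
    """All three fast-track conditions must hold: a clear root cause,
    a small fix (at most 2 places), and no cross-module impact risk."""
    return root_cause_clear and fix_sites <= 2 and not cross_module_risk
```

A clear one-line fix qualifies; any uncertainty about the root cause, a sprawling fix, or cross-module risk sends the issue down the standard path.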
4. Determine the issue directory name
Agree on the slug with the user, using today's date as the prefix (retrieve the date from environment information). Create the directory if it does not exist. An issue directory must also be created on the fast track, and its deliverables should be placed there.
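The naming rule can be sketched as follows; `docs/issues` and the slug are placeholders, since the real root is defined in shared-conventions.md and the slug is agreed with the user:

```python
from datetime import date
from pathlib import Path

issues_root = Path("docs/issues")     # placeholder for the real issues root
slug = "submit-blank-popup"           # hypothetical slug agreed with the user

# Date-prefixed directory, e.g. docs/issues/2025-06-01-submit-blank-popup
issue_dir = issues_root / f"{date.today():%Y-%m-%d}-{slug}"
issue_dir.mkdir(parents=True, exist_ok=True)  # idempotent: safe to re-run
```

`exist_ok=True` makes the step safe to repeat if the directory was already created in an earlier conversation turn.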
The Five Mandatory Questions
Ask one question at a time, in order; do not throw all five at once — if multiple questions are asked together, the user will most likely answer only the easiest ones, and deeper information will be missed. Run an ambiguity check on each answer; if it fails, keep following up.
1. What is the issue? Can you describe the phenomenon you observed?
Expect specific abnormal behaviors: "A blank pop-up appears after clicking the submit button" is a hundred times more useful than "There's a problem with the submission function".
Ambiguous signals: "Sometimes it errors", "It feels wrong" → follow up with "When exactly does this happen?" and "What exactly is wrong?".
Red line: do not let the user describe the root cause. If the user says "It should be because of XXX" or "Maybe YYY caused it" — record the phenomenon, leave the root cause to Stage 2.
2. How to reproduce it?
Expect a set of minimal reproduction steps. For example:
- Go to the XX page
- Enter YY content
- Click the ZZ button
- Observe the issue phenomenon
Ambiguous signals: "Unstable reproduction", "Sometimes it works, sometimes it doesn't" → follow up on reproduction frequency and conditional differences ("Under what conditions can it be reproduced?"). If reproduction is genuinely unstable, record the known trigger conditions and the reproduction rate.
"Cannot reproduce" is also a valid answer — write "Currently cannot stably reproduce; only observed once under condition X". Do not fabricate steps.
3. Expected Behavior vs Actual Behavior
Expect two sentences:
- Expected: I thought after doing A, B should happen
- Actual: But what actually happened was C
Do not combine into one sentence. Explicitly separate expected and actual behaviors so that reviewers can quickly judge the issue boundary — if combined into "The button isn't working properly", analysts won't know what you consider "proper".
4. Environment Information
At minimum, collect: which module/functional area the issue was found in, and the relevant files or functions (if the user knows).
Optional but valuable: Operating system, browser version, runtime environment (dev / prod), whether relevant code has been modified recently.
If the user says "I don't know which file" — it's okay, write "To be determined", it will be checked during Stage 2 analysis.
5. Severity and Priority
Reference standards (for the user to choose):
- P0 - Blocking: Core function completely fails, affecting all users, must be fixed immediately
- P1 - Critical: Core function is impaired, there is a workaround, needs to be fixed as soon as possible
- P2 - Medium: Non-core function is impaired or affects a small number of users, fixed within the plan
- P3 - Minor: UI flaws, edge cases, better implementation exists, fixed when available
If the user is unsure, recommend one based on the descriptions from previous questions, but let the user make the final decision.
Issue Report Template
After getting all answers, write the report file (see the "Where to place files" section for the path):
```markdown
---
doc_type: issue-report
issue: {issue directory name}
status: draft
severity: P0 | P1 | P2 | P3
summary: {one-sentence issue phenomenon}
tags: []
---

# {Brief Issue Description} Issue Report

## 1. Issue Phenomenon

{Specific abnormal behavior described by the user, pure phenomenon description without root cause speculation}

## 2. Reproduction Steps

1. {Step 1}
2. {Step 2}
3. {Step 3}
4. Observed: {issue phenomenon}

Reproduction frequency: {Stable reproduction / Probabilistic reproduction (approx. X%) / Currently unable to stably reproduce}

## 3. Expected vs Actual

**Expected Behavior**: {After doing A, B should happen}

**Actual Behavior**: {But what actually happened was C}

## 4. Environment Information

- Involved module/function: {Module name or functional description}
- Relevant files/functions: {Known file:line, or "To be determined"}
- Runtime environment: {dev / staging / prod / Uncertain}
- Other context: {OS, browser, recent changes, etc. Write "None" if none}

## 5. Severity

**{P0 / P1 / P2 / P3}** — {One-sentence reason}

## Notes

{Optional: Additional context provided by the user, screenshot descriptions, log snippets, etc.}
```
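As a mechanical sanity check before exit, the frontmatter can be validated against the template. A minimal sketch assuming the field names shown above; the helper function and sample report text are hypothetical:

```python
import re

REQUIRED_FIELDS = {"doc_type", "issue", "status", "severity", "summary", "tags"}
VALID_SEVERITIES = {"P0", "P1", "P2", "P3"}

def check_frontmatter(text: str) -> list[str]:
    """Return a list of problems found in the report's YAML frontmatter."""
    problems: list[str] = []
    m = re.match(r"---\n(.*?)\n---", text, re.DOTALL)
    if not m:
        return ["missing frontmatter block"]
    fields = {}
    for line in m.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    for name in REQUIRED_FIELDS - fields.keys():
        problems.append(f"missing field: {name}")
    if fields.get("severity") not in VALID_SEVERITIES:
        problems.append("severity must be one of P0-P3")
    return problems

# Hypothetical report for illustration.
report = """---
doc_type: issue-report
issue: 2025-06-01-submit-blank-popup
status: draft
severity: P1
summary: Blank pop-up appears after clicking the submit button
tags: []
---
# Submit Pop-up Issue Report
"""
```

An empty problem list means the frontmatter carries every required field with a valid severity; anything else is worth fixing before declaring Stage 1 done.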
Exit Criteria
After writing the file, confirm with the user that Stage 1 can be exited.
After Exit
Inform the user: "The issue report is ready. The next step is Stage 2: Root Cause Analysis. You can trigger the corresponding skill to start the analysis."
Do not start analyzing the root cause on your own — manual checkpoints between stages are hard constraints of the workflow.
Common Pitfalls
- The user says "Maybe it's a problem with the XX component" and you start discussing the root cause — wrong, that is Stage 2's job; for now, ask only about the observed phenomenon
- Accepting overly vague reproduction steps (e.g., "Operate on the user interface") — push for executable steps
- Mixing expected and actual behaviors in one paragraph — Must be explicitly separated
- Leaving severity blank — Provide a default value or write "None", but do not leave it empty
- Throwing all 5 questions as a list to the user at once — Ask one question at a time in conversation, otherwise deep information will be missed