# Clarify
Unified requirement clarification to prevent downstream implementation churn by resolving ambiguity early. Default: research-first with autonomous decision-making and persistent questioning. --light: direct iterative Q&A. Triggers: "cwf:clarify", "clarify this", "refine requirements"
Source: corca-ai/claude-plugins
Install: `npx skill4agent add corca-ai/claude-plugins clarify`
Resolve ambiguity before planning so implementation starts from explicit decisions, not assumptions.
## Quick Start
```text
cwf:clarify <requirement>          # Research-first (default)
cwf:clarify <requirement> --light  # Direct Q&A, no sub-agents
```

## Mode Selection

Before entering Default or --light mode, assess clarify depth based on input specificity:
- If `next-session.md` exists in the active session directory or current workspace, read it as optional prior-session context.
- Always parse the user's task description (required input).
- Check if the combined input provides: target file paths, expected changes per file, and BDD-style success criteria.
- Apply this heuristic:
| Input specificity | Clarify depth | Rationale |
|---|---|---|
| All 3 present (files + changes + criteria) | AskUserQuestion only — ask 2-3 binary/choice questions for remaining ambiguities, skip Phases 2-2.5 | Prior session retro effectively served as clarify |
| 1-2 present | --light mode — iterative Q&A without sub-agents | Partial clarity, direct questions suffice |
| None present (vague requirement) | Default mode — full research + expert analysis | Scope is open, exploration needed |
This heuristic can be overridden by an explicit flag (such as `--light`) or user instruction.
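The table above can be sketched as a small decision function. This is a minimal illustration only: the function name and the yes/no signal flags are assumptions, not part of the skill's actual interface.

```shell
# Sketch of the clarify-depth heuristic: count how many of the three
# specificity signals (file paths, per-file changes, BDD criteria) are present.
clarify_depth() {
  has_files="$1"; has_changes="$2"; has_criteria="$3"   # each "yes" or "no"
  present=0
  for signal in "$has_files" "$has_changes" "$has_criteria"; do
    if [ "$signal" = "yes" ]; then present=$((present + 1)); fi
  done
  case "$present" in
    3) echo "askuserquestion-only" ;;   # all 3 present: skip Phases 2-2.5
    1|2) echo "light" ;;                # partial clarity: direct Q&A suffices
    0) echo "default" ;;                # vague: full research + expert analysis
  esac
}
```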
## Default Mode
### Phase 0: Update Live State

Use the live-state helper for scalar fields, then edit list fields in the resolved live-state file:

```bash
bash {CWF_PLUGIN_DIR}/scripts/cwf-live-state.sh set . \
  phase="clarify" \
  task="{requirement summary}"
live_state_file=$(bash {CWF_PLUGIN_DIR}/scripts/cwf-live-state.sh resolve)
```

Set `live.key_files` in `{live_state_file}` to files relevant to the requirement.

### Phase 0.5: Resolve Session Directory
Resolve the effective live-state file, then read `live.dir` before any `{session_dir}` placeholder use:

```bash
live_state_file=$(bash {CWF_PLUGIN_DIR}/scripts/cwf-live-state.sh resolve)
session_dir=$(bash {CWF_PLUGIN_DIR}/scripts/cwf-live-state.sh get . dir)
```

```yaml
session_dir: "{live.dir value from resolved live-state file}"
```

### Phase 1: Capture & Decompose
- Record the original requirement verbatim
- Decompose into concrete decision points — specific questions that need answers before implementation can begin
- Frame as questions, not categories ("Which auth library?" not "Authentication")
- Focus on decisions that affect implementation
- Present the decision points to the user before proceeding
```markdown
## Original Requirement

"{user's original request verbatim}"

## Decision Points

1. {specific question}
2. {specific question}
...
```

### Phase 2: Research
**Context recovery (before launching)**

Apply the context recovery protocol to these files:

- {session_dir}/clarify-codebase-research.md
- {session_dir}/clarify-web-research.md

Launch sub-agents simultaneously using the Task tool (only for missing or invalid results).

- Shared output persistence contract: agent-patterns.md § Sub-agent Output Persistence Contract.
#### Sub-agent A: Codebase Researcher

```yaml
Task tool:
  subagent_type: Explore
  max_turns: 20
  prompt: |
    Read `{SKILL_DIR}/references/research-guide.md` and follow
    **Section 1: Codebase Research** exactly.
    Report evidence only — do not make decisions.

    Decision points:
    {list from Phase 1}

    Write your complete findings to: {session_dir}/clarify-codebase-research.md
    The file MUST exist when you finish. End your output file with the exact
    line `<!-- AGENT_COMPLETE -->` as the last line.
```

#### Sub-agent B: Web Researcher
```yaml
Task tool:
  subagent_type: general-purpose
  max_turns: 20
  prompt: |
    Read `{SKILL_DIR}/references/research-guide.md` and follow
    **Section 2: Web / Best Practice Research** exactly.
    Also follow the "Web Research Protocol" section in
    `{CWF_PLUGIN_DIR}/references/agent-patterns.md` for tool order
    and fallback behavior.
    Report findings only — do not make decisions.

    Decision points:
    {list from Phase 1}

    Write your complete findings to: {session_dir}/clarify-web-research.md
    The file MUST exist when you finish. End your output file with the exact
    line `<!-- AGENT_COMPLETE -->` as the last line.
```

Both sub-agents run in parallel. Wait for both to complete. Read the output files from the session dir (not the in-memory Task return values).
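The output persistence contract can be validated mechanically: the file must exist and its last line must be the exact sentinel. A minimal sketch, with an illustrative function name that is not part of the skill:

```shell
# Return success only if the sub-agent output file exists and ends with the
# exact completion sentinel required by the contract above.
agent_output_valid() {
  file="$1"
  [ -f "$file" ] && [ "$(tail -n 1 "$file")" = "<!-- AGENT_COMPLETE -->" ]
}
```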
### Phase 2.5: Expert Analysis

**Context recovery (before launching)**

Apply the context recovery protocol to these files:

- {session_dir}/clarify-expert-alpha.md
- {session_dir}/clarify-expert-beta.md

Launch two domain expert sub-agents simultaneously using the Task tool (only for missing or invalid results).

Expert selection:

- Read `expert_roster` from `cwf-state.yaml`
- Analyze decision points for domain keywords; match against each roster entry's `domain` field
- Select 2 experts with contrasting frameworks — different analytical lenses on the same problem
- If the roster has < 2 domain matches, fill remaining slots via independent selection (prioritize well-known figures with contrasting methodological approaches)
#### Expert α

```yaml
Task tool:
  subagent_type: general-purpose
  max_turns: 12
  prompt: |
    Read {CWF_PLUGIN_DIR}/references/expert-advisor-guide.md.
    You are Expert α, operating in **clarify mode**.

    Your identity: {selected expert name}
    Your framework: {expert's domain from roster or independent selection}

    Decision points:
    {list from Phase 1}

    Research findings summary:
    {summarized outputs from Phase 2 codebase + web research}

    Analyze which decisions are most critical through your published framework.
    Use web search to verify your expert identity and cite published work.
    Output your analysis in the clarify mode format from the guide.

    Write your complete findings to: {session_dir}/clarify-expert-alpha.md
    The file MUST exist when you finish. End your output file with the exact
    line `<!-- AGENT_COMPLETE -->` as the last line.
```

#### Expert β
```yaml
Task tool:
  subagent_type: general-purpose
  max_turns: 12
  prompt: |
    Read {CWF_PLUGIN_DIR}/references/expert-advisor-guide.md.
    You are Expert β, operating in **clarify mode**.

    Your identity: {selected expert name — contrasting framework from Expert α}
    Your framework: {expert's domain from roster or independent selection}

    Decision points:
    {list from Phase 1}

    Research findings summary:
    {summarized outputs from Phase 2 codebase + web research}

    Analyze which decisions are most critical through your published framework.
    Use web search to verify your expert identity and cite published work.
    Output your analysis in the clarify mode format from the guide.

    Write your complete findings to: {session_dir}/clarify-expert-beta.md
    The file MUST exist when you finish. End your output file with the exact
    line `<!-- AGENT_COMPLETE -->` as the last line.
```

Both expert sub-agents run in parallel. Wait for both to complete. Read the output files from the session dir (not the in-memory Task return values).

--light mode: Phase 2.5 is skipped (consistent with --light skipping all sub-agents).
### Phase 3: Classify & Decide

Read `{SKILL_DIR}/references/aggregation-guide.md` for full classification rules.

For each decision point, classify using three evidence sources: codebase research (Phase 2), web research (Phase 2), and expert analysis (Phase 2.5).
- T1 (Codebase-resolved) — codebase has clear evidence → decide autonomously, cite files
- T2 (Best-practice-resolved) — best practice consensus → decide autonomously, cite sources
- T3 (Requires human) — evidence conflicts, all silent, or subjective → queue
Constructive tension: When sources conflict, classify as T3. Expert analysis provides additional signal but does not override direct codebase or best-practice evidence.
Present the classification:

```markdown
## Agent Decisions (T1 & T2)

| # | Decision Point | Tier | Decision | Evidence |
|---|---------------|------|----------|----------|
| 1 | ... | T1 | ... | file paths |
| 2 | ... | T2 | ... | sources |

## Requires Human Decision (T3)

| # | Decision Point | Reason |
|---|---------------|--------|
| 3 | ... | conflict / no evidence / subjective |
```

If zero T3 items: Skip Phases 3.5 and 4 entirely. Go to Phase 5.
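The tier rules reduce to a small decision function. The sketch below is illustrative only and assumes each evidence source has already been summarized into one of three states; those state names are not part of the skill:

```shell
# Classify a decision point from two evidence summaries:
#   codebase: "clear", "conflicting", or "silent"
#   practice: "consensus", "conflicting", or "silent"
classify_tier() {
  codebase="$1"; practice="$2"
  if [ "$codebase" = "conflicting" ] || [ "$practice" = "conflicting" ]; then
    echo "T3"   # conflicting sources always go to the human
  elif [ "$codebase" = "clear" ]; then
    echo "T1"   # codebase-resolved: decide autonomously, cite files
  elif [ "$practice" = "consensus" ]; then
    echo "T2"   # best-practice-resolved: decide autonomously, cite sources
  else
    echo "T3"   # all silent (or subjective): queue for the human
  fi
}
```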
### Phase 3.5: Advisory (T3 only)

**Context recovery (before launching)**

Apply the context recovery protocol to these files:

- {session_dir}/clarify-advisor-alpha.md
- {session_dir}/clarify-advisor-beta.md

Launch two advisory sub-agents simultaneously (only for missing or invalid results):
#### Advisor α

```yaml
Task tool:
  subagent_type: general-purpose
  model: haiku
  max_turns: 12
  prompt: |
    Read `{SKILL_DIR}/references/advisory-guide.md`. You are Advisor α.

    Tier 3 decision points:
    {list of T3 items}

    Research context:
    {codebase findings for these items}
    {web research findings for these items}

    Argue for the first perspective per the guide's side-assignment rules.

    Write your complete findings to: {session_dir}/clarify-advisor-alpha.md
    The file MUST exist when you finish. End your output file with the exact
    line `<!-- AGENT_COMPLETE -->` as the last line.
```

#### Advisor β
```yaml
Task tool:
  subagent_type: general-purpose
  model: haiku
  max_turns: 12
  prompt: |
    Read `{SKILL_DIR}/references/advisory-guide.md`. You are Advisor β.

    Tier 3 decision points:
    {list of T3 items}

    Research context:
    {codebase findings for these items}
    {web research findings for these items}

    Argue for the opposing perspective per the guide's side-assignment rules.

    Write your complete findings to: {session_dir}/clarify-advisor-beta.md
    The file MUST exist when you finish. End your output file with the exact
    line `<!-- AGENT_COMPLETE -->` as the last line.
```

Both advisors run in parallel. Wait for both to complete. Read the output files from the session dir (not the in-memory Task return values).
### Phase 4: Persistent Questioning (T3 only)

Read `{SKILL_DIR}/references/questioning-guide.md` for the full methodology.

For each T3 item, use `AskUserQuestion` with:

- Research context: What codebase and web research found
- Advisor α's position: Their argument (brief)
- Advisor β's position: Their argument (brief)
- The question: With 2-4 concrete options from the advisory perspectives
After each answer:
- Why-dig 2-3 times on surface-level answers (see questioning-guide.md)
- Detect tensions between this answer and previous answers
- Check for new ambiguities revealed by the answer → classify → repeat if T3
### Phase 5: Output

```markdown
## Requirement Clarification Summary

### Before (Original)

"{original request verbatim}"

### After (Clarified)

**Goal**: {precise description}
**Scope**: {what is included and excluded}
**Constraints**: {limitations and requirements}

### Expert Analysis

(Only in default mode, omitted in --light)

| Expert | Framework | Key Insight |
|--------|-----------|-------------|
| {Expert α name} | {framework} | {1-line summary of analysis} |
| {Expert β name} | {framework} | {1-line summary of analysis} |

### All Decisions

| # | Decision Point | Decision | Decided By | Evidence |
|---|---------------|----------|------------|----------|
| 1 | ... | ... | Agent (T1) | file paths |
| 2 | ... | ... | Agent (T2) | sources |
| 3 | ... | ... | Human | advisory context |
```

Then ask: "Save this clarified requirement to a file?" If yes: save to a project-appropriate location with a descriptive filename.
Completion tracking (after saving the summary file):

- Run: `bash {CWF_PLUGIN_DIR}/scripts/cwf-live-state.sh set . clarify_completed_at="{ISO 8601 UTC timestamp}"`
- Run: `bash {CWF_PLUGIN_DIR}/scripts/cwf-live-state.sh set . clarify_result_file="{saved summary file path}"`

This state is what `cwf:impl` Phase 1.0 checks as a pre-condition.

Follow-up suggestions (when CWF plugin is loaded):

- `cwf:review --mode clarify` — Multi-perspective review of the clarified requirement before implementation
- `cwf:handoff --phase` — Generate a phase handoff document that captures HOW context (protocols, rules, must-read references, constraints) for the implementation phase. Recommended when context will be cleared before implementation, as `plan.md` carries WHAT but not HOW.

Present both suggestions. If context is getting large or the user is about to clear context, emphasize the phase handoff suggestion.
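The pre-condition reduces to "is `clarify_completed_at` non-empty?". A hypothetical helper, shown only to make the check concrete; pass in the value read via the documented `get` subcommand (e.g. `$(bash {CWF_PLUGIN_DIR}/scripts/cwf-live-state.sh get . clarify_completed_at)`):

```shell
# Succeed only when a clarify completion timestamp has been recorded.
clarify_done() {
  [ -n "$1" ]
}
```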
## --light Mode
Fast, direct clarification without sub-agents. The original clarify behavior with added persistence.
### Phase 1: Capture
- Record the requirement verbatim
- Identify ambiguities using the categories in questioning-guide.md
### Phase 2: Iterative Q&A

Read `{SKILL_DIR}/references/questioning-guide.md` for the methodology.

Loop using `AskUserQuestion`:

```text
while ambiguities remain:
  pick most critical ambiguity
  ask with 2-4 concrete options
  why-dig on surface-level answers (2-3 levels)
  detect tensions with prior answers
  check for new ambiguities
```

### Phase 3: Output
```markdown
## Requirement Clarification Summary

### Before (Original)

"{original request verbatim}"

### After (Clarified)

**Goal**: {precise description}
**Reason**: {the ultimate purpose or jobs-to-be-done}
**Scope**: {what is included and excluded}
**Constraints**: {limitations, requirements, preferences}
**Success Criteria**: {how to verify correctness}

### Decisions Made

| Question | Decision |
|----------|----------|
| ... | ... |
```

Then offer to save.
Completion tracking (after saving the summary file):

- Run: `bash {CWF_PLUGIN_DIR}/scripts/cwf-live-state.sh set . clarify_completed_at="{ISO 8601 UTC timestamp}"`
- Run: `bash {CWF_PLUGIN_DIR}/scripts/cwf-live-state.sh set . clarify_result_file="{saved summary file path}"`

This state is what `cwf:impl` Phase 1.0 checks as a pre-condition.

Expert Roster Update (when experts were used in Phase 2.5): Follow the Roster Maintenance procedure in `{CWF_PLUGIN_DIR}/references/expert-advisor-guide.md`.

Follow-up (when CWF plugin is loaded): Suggest `cwf:handoff --phase` if the user plans to clear context before implementation.

## Rules
- Research first, ask later (default mode): Exhaust research before asking
- Cite evidence: Every autonomous decision must include specific evidence
- Respect the tiers: Do not ask about T1/T2; do not auto-decide T3
- Constructive tension: Conflicts between sources are signals, not problems
- Persistent but not annoying: Why-dig on vague answers, accept clear ones
- Preserve intent: Refine the requirement, don't redirect it
- Grounded experts: Best practice research must cite real published work
- Honest advisors: Advisory opinions argue in good faith, not strawman
- Expert evidence is supplementary: Expert analysis enriches classification but does not override direct codebase or best-practice evidence
## References
- references/research-guide.md — Default sub-agent research methodology
- references/aggregation-guide.md — Tier classification rules
- references/advisory-guide.md — Advisor α/β methodology
- references/questioning-guide.md — Persistent questioning
- expert-advisor-guide.md — Expert sub-agent identity, grounding, and analysis format