Code quality and deviation gate between /implement and /test. Reads the task document and changed files, validates coding standards, classifies deviations (minor/medium/major), and decides whether implementation is ready for testing. Runs automatically in the auto-chain between implement and test. Also invoke manually after any implementation to catch issues before wasting a test run.
npx skill4agent add eljun/workflow-skills simplify

Model: sonnet (reasoning needed to evaluate code quality and detect deviations)
| Flag | Short | Description |
|---|---|---|
| --help | -h | Show available commands and options |
| --version | -v | Show workflow skills version |

/simplify - Quality Gate Agent
Usage:
/simplify {ID} Review implementation for a task
/simplify -h, --help Show this help message
/simplify -v, --version Show version
Arguments:
{ID} Task ID (number) or task filename (e.g., 001-auth-jwt)
Checks:
- Coding standards (no any types, guard clauses, naming, etc.)
- Methodology compliance (TDD/CDD/SOLID if specified in task doc)
- Deviation classification (minor / medium / major)
Result:
PASS → chains to /test
FAIL → reports issues, blocks until resolved
Examples:
/simplify 1 # Review task #1 implementation
/simplify 001-auth-jwt # Using task filename
Next: /test {ID}

Workflow Skills v1.5.1
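The standards the checklist refers to (no any types, guard clauses, descriptive naming) can be illustrated with a minimal TypeScript sketch. The names and types here are hypothetical examples, not part of the skill itself:

```typescript
// Illustrative code in the style the standards check favors:
// guard clauses instead of nested conditionals, `unknown` narrowed
// explicitly instead of `any`, and descriptive function names.
interface User {
  email: string;
}

function isUser(value: unknown): value is User {
  // Narrow `unknown` step by step rather than casting through `any`.
  if (typeof value !== "object" || value === null) return false;
  return typeof (value as { email?: unknown }).email === "string";
}

function getUserEmailDomain(value: unknown): string | null {
  // Guard clause: bail out early on invalid input.
  if (!isUser(value)) return null;
  const at = value.email.indexOf("@");
  if (at < 0) return null;
  return value.email.slice(at + 1);
}
```

A reviewer would flag the inverse of each pattern: an `any` parameter, deep nesting where an early return would do, or a vague name like getUser for a lookup by email.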
https://github.com/eljun/workflow-skills

/simplify {ID}
↓
1. Resolve task ID → read task document
2. Identify changed files (Implementation Notes or git diff)
3. Read changed files
4. Run coding standards checklist
5. Check methodology compliance (if specified in task doc)
6. Classify deviations from the plan
7. Write Implementation Notes to task document
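Step 2's git fallback can be demonstrated in a throwaway repo (a sketch assuming git 2.28+ for init -b; the branch and file names are illustrative):

```shell
# Build a tiny repo with one commit on main and one on a feature branch,
# then list only the files changed since branching off main.
set -eu
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
echo base > a.txt
git add a.txt && git commit -qm "init"
git checkout -qb feature
echo change > b.txt
git add b.txt && git commit -qm "feat"
# The fallback used when the task doc lists no changed files:
git diff --name-only main...HEAD
```

The triple-dot range compares HEAD against the merge base with main, so only the feature branch's own changes are listed (here, b.txt but not a.txt).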
↓
┌─── Result ───────────────┐
│ │
▼ PASS ▼ FAIL
   Invoke /test {ID}        Report to human, stop

Reads docs/task/{ID}-{task-name}.md, focusing on the ## Development Approach, ## Acceptance Criteria, and ## File Changes sections. Changed files are taken from the task doc's Implementation Notes, falling back to git diff --name-only main...HEAD.

Standards checked include: unknown instead of any, descriptive names (getUserByEmail, not getUser), and no leftover console.log calls.

On PASS, chains to /test and writes the following section to the task document:

## Implementation Notes
> **Simplify Review:** PASS | FAIL
> **Reviewed:** {Date}
### What was built
{Concrete description of behavior — what the user/system can now do, not what files changed}
### How to access for testing
- URL: {if applicable}
- Entry point: {button, page, API endpoint}
- Test credentials: {if auth involved}
- Setup required: {seed data, env vars, migrations, etc.}
### Deviations from plan
{None | Description of minor/medium deviations found}
### Standards check
{Pass | List any issues found and how they were resolved}

Quality review: PASS for #{ID} - {Task Title}
Standards: ✓
Deviations: {None | Minor — documented}
Methodology: {N/A | Compliant}
Implementation Notes written to task doc.
[AUTO] Spawning /test...

Task({ subagent_type: "general-purpose", model: "haiku", prompt: "/test {ID}" })

Quality review: FAIL for #{ID} - {Task Title}
Issues to fix before testing:
1. {file}: {issue}
2. {file}: {issue}
Run /implement {ID} to fix, then /simplify {ID} again.

Quality review: BLOCKED for #{ID} - {Task Title}
Major deviation:
{What was planned vs what was built}
Impact: {Why this prevents meaningful testing}
Options:
1. Re-plan → /task revise {ID}
2. Continue → tell me to proceed and I'll document the deviation
3. Abandon → I'll mark the task as blocked in TASKS.md

Automation: auto. On PASS, spawns /test {ID}.

When /test fails and /implement applies a fix, the re-run reads docs/testing/{ID}-{task-name}.md and appends to the task doc:

### Retry context (attempt {N})
- Previous failure: {summary of what /test reported}
- Fix applied: {what /implement changed}
- Fix verified: Yes | Partial | No

| Skill | When to Use |
|---|---|
| /implement | If fixes needed — go back, fix, re-run /simplify |
| /test | After PASS — automatically chained in auto mode |
| /task | If major deviation — revise the plan |
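For illustration only, the minor/medium/major deviation buckets could be sketched as a rubric. The actual skill reasons about severity with a model; these field names and rules are hypothetical:

```typescript
// Hypothetical rubric mapping a deviation from the plan onto the
// three buckets the docs describe.
type Severity = "minor" | "medium" | "major";

interface Deviation {
  changesBehavior: boolean;  // acceptance criteria no longer met as planned
  changesPublicApi: boolean; // e.g. renamed endpoint, new required param
  description: string;
}

function classifyDeviation(d: Deviation): Severity {
  // Major: planned behavior was not built, so testing is meaningless.
  if (d.changesBehavior) return "major";
  // Medium: behavior intact, but the public surface drifted from the plan.
  if (d.changesPublicApi) return "medium";
  // Minor: internal detail only (naming, file layout, refactors).
  return "minor";
}
```

Under this sketch, a renamed helper is minor and gets documented, while a skipped acceptance criterion is major and triggers the BLOCKED flow above.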