Verifies that the implementation complies with the specs, design, and task plan. Produces `verify-report.md`.

**Trigger:** `/sdd-verify <change-name>`, verify implementation, quality gate, validate change.

**Install:** `npx skill4agent add fearovex/claude-config sdd-verify`

**Shared conventions:** `skills/_shared/sdd-phase-common.md`

**Resolution order:**

1. `.claude/skills/sdd-verify/SKILL.md` (project-local — highest priority)
2. `~/.claude/skills/sdd-verify/SKILL.md` (global catalog — fallback)

See `docs/SKILL-RESOLUTION.md`.

**Load context from memory:**

- `mem_search(query: "sdd/{change-name}/tasks")` → `mem_get_observation(id)`
- `mem_search(query: "sdd/{change-name}/spec")` → `mem_get_observation(id)`
- `mem_search(query: "sdd/{change-name}/design")` → `mem_get_observation(id)`

### Completeness
| Metric | Value |
| -------------------- | ----- |
| Total tasks | [N] |
| Completed tasks [x] | [M] |
| Incomplete tasks [ ] | [K] |
Incomplete tasks:
- [ ] [number and description of each one]

### Correctness (Specs)
| Requirement | Status | Notes |
| ----------- | ------------------ | ------------------------------------- |
| [Req 1] | ✅ Implemented | |
| [Req 2] | ⚠️ Partial | Missing 401 error scenario |
| [Req 3] | ❌ Not implemented | Endpoint /auth/refresh does not exist |
### Scenario Coverage
| Scenario | Status |
| ---------------------------------- | ------------------------------------ |
| Successful login | ✅ Covered |
| Failed login — incorrect password | ✅ Covered |
| Failed login — user does not exist | ⚠️ Partial — implemented but no test |
| Expired token                      | ❌ Not covered                        |

### Coherence (Design)
| Decision | Followed? | Notes |
| ------------------- | ------------ | ------------------------------------------- |
| Validation with Zod | ✅ Yes | |
| JWT with RS256 | ⚠️ Deviation | HS256 was used. Dev documented it in tasks. |
| Repository pattern  | ✅ Yes       |                                             |

### Testing
| Area | Tests Exist | Scenarios Covered |
| ------------------- | ----------- | ----------------- |
| AuthService.login() | ✅ Yes | 3/4 scenarios |
| AuthController | ✅ Yes | Happy paths only |
| JWT Middleware      | ❌ No       | —                 |

### Test Execution

Test commands are resolved in three levels, highest priority first.

**Level 1: `verify_commands` in `config.yaml` (at the project root)**

if config.yaml (at project root) exists and has key verify_commands:
→ use the listed commands in order
→ do NOT check level 2 or run auto-detection
→ for each command:
run the command via Bash tool
capture exit code + stdout/stderr
record in ## Tool Execution section with source label "verify_commands (config level 1)"
→ skip levels 2 and 3 entirely
else:
   → proceed to level 2 check

**Level 2: `verify.test_commands`** (consulted only when top-level `verify_commands` is absent)

if config.yaml (at project root) exists and has key verify.test_commands:
if verify.test_commands is not a list:
→ emit WARNING: "verify.test_commands is not a list — treating as absent"
→ proceed to level 3 (auto-detection)
else if verify.test_commands is an empty list []:
→ treat as absent (empty list falls through — prevents silent zero-command success)
→ proceed to level 3 (auto-detection)
else:
→ use the listed commands in order
→ do NOT run auto-detection
→ for each command:
run the command via Bash tool
capture exit code + stdout/stderr
record in ## Tool Execution section with source label "verify.test_commands (config level 2)"
→ skip level 3 entirely
else:
   → proceed to level 3 (auto-detection)

**Level 3: auto-detection** (used only when neither `verify_commands` nor `verify.test_commands` yields commands)

| Priority | File to check | Condition | Command |
|---|---|---|---|
| 1 | | | |
| 2 | | pytest indicators present | |
| 3 | | | |
| 4 | | file exists | |
| 5 | | file exists | |
| — | none of the above | — | Skip with WARNING |
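The three-level resolution above can be sketched as a small function. This is illustrative only (the skill itself runs inside the agent, not as Python), and it assumes `config.yaml` has already been parsed into a dict:

```python
# Illustrative sketch of the three-level test-command resolution above.
# Assumes config.yaml has already been parsed into a dict (e.g. via PyYAML).

def resolve_test_commands(config: dict) -> tuple[str, list[str]]:
    """Return (source_label, commands); an empty list means 'run auto-detection'."""
    # Level 1: verify_commands wins outright; levels 2 and 3 are skipped entirely.
    if "verify_commands" in config:
        return ("verify_commands (config level 1)", list(config["verify_commands"]))

    # Level 2: verify.test_commands, guarded against wrong types and empty lists.
    cmds = (config.get("verify") or {}).get("test_commands")
    if cmds is not None and not isinstance(cmds, list):
        print("WARNING: verify.test_commands is not a list — treating as absent")
        cmds = None
    if not cmds:
        # None, a non-list (cleared above), or an empty list [] all fall through:
        # this prevents silent zero-command success.
        return ("auto-detection (level 3)", [])
    return ("verify.test_commands (config level 2)", cmds)
```

Note how the empty-list and wrong-type guards apply only at level 2, mirroring the pseudocode: level 1 is used verbatim whenever the key exists.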
### Build and Type Check

Config keys `verify.build_command` and `verify.type_check_command` override auto-detection.

if config.yaml (at project root) exists and has key verify.build_command:
if verify.build_command is not a string:
→ emit WARNING: "verify.build_command is not a string — treating as absent"
→ proceed to auto-detection for build command
else:
→ use verify.build_command as the build/type-check command
→ skip the auto-detection table below for the build/type-check command
if config.yaml (at project root) exists and has key verify.type_check_command:
if verify.type_check_command is not a string:
→ emit WARNING: "verify.type_check_command is not a string — treating as absent"
→ proceed to auto-detection for type check command
else:
→ use verify.type_check_command as the type-check command
   → skip auto-detection for type check command

**Auto-detection for the build command** (used only when `verify.build_command` is absent)

| Priority | File to check | Condition | Command |
|---|---|---|---|
| 1 | | | |
| 2 | | | |
| 3 | | file exists + TypeScript in devDependencies | |
| 4 | | | |
| 5 | | file exists | |
| 6 | | file exists | |
| — | none of the above | — | Skip with INFO |
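Putting the keys together, a hypothetical `config.yaml` might look like the fragment below. Only the key names come from the rules above; the command strings are placeholders for an imagined Node/TypeScript project, not part of the skill:

```yaml
# Hypothetical example: key names are the ones this skill reads,
# command values are illustrative placeholders.

# Level 1 (optional): if present, used verbatim; levels 2 and 3 are skipped.
# verify_commands:
#   - npm test

verify:
  test_commands:                        # level 2: must be a non-empty list
    - npm test
  build_command: npm run build          # overrides build auto-detection
  type_check_command: npx tsc --noEmit  # overrides type-check auto-detection

coverage:
  threshold: 80                         # coverage threshold (see below)
```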
### Coverage

The coverage threshold is read from `config.yaml` (at the project root) under `coverage.threshold`, e.g. `coverage: { threshold: 80 }`.

### Compliance Statuses

| Status | Meaning | Criteria |
|---|---|---|
| COMPLIANT | Fully implemented and verified | Code implements the scenario + test passes (or code inspection confirms correctness when no test runner exists) |
| FAILING | Implemented but test fails | Code implements the scenario + corresponding test fails |
| UNTESTED | Implemented but no test coverage | Code implements the scenario + no test covers this scenario (only when a test runner exists but no test covers it) |
| PARTIAL | Partially implemented | Code covers some but not all THEN/AND clauses of the scenario |
## Spec Compliance Matrix
| Spec Domain | Requirement | Scenario | Status | Evidence |
| ----------- | ------------------ | --------------- | --------- | --------------------------------------------- |
| [domain] | [requirement name] | [scenario name] | COMPLIANT | [evidence description] |
| [domain] | [requirement name] | [scenario name] | FAILING | [failing test name or output] |
| [domain] | [requirement name] | [scenario name] | UNTESTED | No test coverage found |
| [domain]    | [requirement name] | [scenario name] | PARTIAL   | [which clauses are covered and which are not] |

Write `verify-report.md` (including the `## Tool Execution` section with every command run), then save it to memory with `mem_save`:

- topic_key: `sdd/{change-name}/verify-report`
- type: `architecture`
- project: `{project}`

---

# Verification Report: [change-name]
Date: [YYYY-MM-DD]
Verdict: PASS / PASS WITH WARNINGS / FAIL
## Summary
| Dimension | Status |
|---|---|
| Completeness | OK / WARNING / CRITICAL |
| Correctness | OK / WARNING / CRITICAL |
| Coherence | OK / WARNING / CRITICAL |
| Testing | OK / WARNING / CRITICAL |
| Test Execution | OK / WARNING / CRITICAL / SKIPPED |
| Build | OK / WARNING / SKIPPED |
## Tool Execution
| Command | Exit Code | Result |
|---|---|---|
| [command] | [code] | [PASS/FAIL/SKIPPED] |
## Issues
### CRITICAL
- [issue description]
[or: "None."]
### WARNINGS
- [issue description]
[or: "None."]
---
## Verdict Criteria
| Verdict | Condition |
| ---------------------- | ----------------------- |
| **PASS** | 0 critical, 0 warnings |
| **PASS WITH WARNINGS** | 0 critical, 1+ warnings |
| **FAIL** | 1+ critical |
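The verdict table reduces to a few lines. A minimal sketch (SKIPPED and INFO statuses never feed either count):

```python
# Minimal sketch of the verdict rule above:
# any CRITICAL => FAIL; otherwise warnings decide PASS vs PASS WITH WARNINGS.
# SKIPPED and INFO statuses are excluded from both counts.

def verdict(critical: int, warnings: int) -> str:
    if critical > 0:
        return "FAIL"
    if warnings > 0:
        return "PASS WITH WARNINGS"
    return "PASS"
```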
---
## Severities
| Severity | Description | Blocks archiving |
| -------------- | ----------------------------------------------------------------------------------------------------------------- | ---------------- |
| **CRITICAL** | Requirement not implemented, main scenario not covered, core task incomplete | Yes |
| **WARNING** | Edge case scenario without test, design deviation, pending cleanup task, test execution failure | No |
| **SUGGESTION** | Optional quality improvement | No |
| **SKIPPED** | Step preconditions not met (no test runner, no build command, no coverage config) — does NOT count toward verdict | No |
| **INFO** | Informational note (e.g., no build command detected) — does NOT count toward verdict | No |
**Verdict calculation note:** Only the original four dimensions (Completeness, Correctness, Coherence, Testing) plus Test Execution and Spec Compliance contribute CRITICAL/WARNING statuses. SKIPPED and INFO statuses from any dimension do NOT count as WARNING or CRITICAL for the verdict. This preserves identical verdict behavior for projects without test infrastructure.
---
## Output to Orchestrator
```json
{
"status": "ok|warning|failed",
"summary": "Verification [change-name]: [verdict]. [N] critical, [M] warnings.",
"artifacts": ["engram:sdd/{change-name}/verify-report"],
"test_execution": {
"runner": "[detected runner or null]",
"command": "[command or null]",
"exit_code": "[0/1/N or null]",
"result": "PASS|FAILING|ERROR|SKIPPED"
},
"build_check": {
"command": "[command or null]",
"exit_code": "[0/1/N or null]",
"result": "PASS|FAILING|ERROR|SKIPPED"
},
"compliance_matrix": {
"total_scenarios": "[N]",
"compliant": "[N]",
"failing": "[N]",
"untested": "[N]",
"partial": "[N]"
},
"next_recommended": ["sdd-archive (if PASS or PASS WITH WARNINGS)"],
"risks": ["CRITICAL: [description if any]"]
}
```

If the verdict is PASS or PASS WITH WARNINGS, the recommended next step is `/sdd-archive <slug>`.