Found 51 Skills
Claude Skills meta-skill: extract domain material (docs/APIs/code/specs) into a reusable Skill (SKILL.md + references/scripts/assets), and refactor existing Skills for clarity, activation reliability, and quality gates.
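A minimal sketch of the Skill layout this meta-skill produces, assuming the conventional SKILL.md plus references/, scripts/, and assets/ directories. The scaffold function and the frontmatter fields are illustrative assumptions, not the skill's actual implementation.

```python
from pathlib import Path

# Hypothetical scaffold for the SKILL.md + references/scripts/assets layout.
# The frontmatter fields (name, description) follow the common Skill
# convention; everything else here is an illustrative assumption.
def scaffold_skill(root: str, name: str, description: str) -> Path:
    skill = Path(root) / name
    for sub in ("references", "scripts", "assets"):
        (skill / sub).mkdir(parents=True, exist_ok=True)
    (skill / "SKILL.md").write_text(
        f"---\nname: {name}\ndescription: {description}\n---\n\n# {name}\n"
    )
    return skill

if __name__ == "__main__":
    scaffold_skill("skills", "pdf-extractor", "Extract tables from PDFs.")
```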
Worker that runs existing tests to catch regressions. Auto-detects framework, reports pass/fail. No status changes or task creation.
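A rough sketch of how that framework auto-detection might work, keying off marker files. The marker table and run commands are assumptions; the real worker's detection is presumably richer.

```python
import subprocess
from pathlib import Path

# Assumed marker-file -> test-command table; real detection is likely richer
# (e.g. pyproject.toml does not guarantee pytest).
MARKERS = {
    "pytest.ini": ["pytest", "-q"],
    "pyproject.toml": ["pytest", "-q"],
    "package.json": ["npx", "jest"],
    "go.mod": ["go", "test", "./..."],
    "Cargo.toml": ["cargo", "test"],
}

def run_existing_tests(root: str = ".") -> bool:
    """Detect the framework, run the suite, and report pass/fail only."""
    for marker, cmd in MARKERS.items():
        if (Path(root) / marker).exists():
            result = subprocess.run(cmd, cwd=root)
            return result.returncode == 0
    raise RuntimeError("no recognized test framework")
```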
Orchestrates Story tasks. Prioritizes To Review -> To Rework -> Todo, delegates to ln-401/ln-402/ln-403/ln-404, and hands off Story quality review to ln-500. Loads metadata only up front.
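The priority order reads as a simple bucketed queue; a sketch, using the status names from the description. The task shape and the rest of the delegation plumbing are assumptions.

```python
# Sketch of the To Review -> To Rework -> Todo priority queue.
# The task dict shape is an illustrative assumption.
PRIORITY = ["To Review", "To Rework", "Todo"]

def next_task(tasks: list[dict]) -> dict | None:
    for status in PRIORITY:                      # earlier status wins
        bucket = [t for t in tasks if t["status"] == status]
        if bucket:
            return bucket[0]
    return None

tasks = [
    {"id": 7, "status": "Todo"},
    {"id": 3, "status": "To Review"},
]
assert next_task(tasks)["id"] == 3               # review beats new work
```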
Create an AI Evals Pack (eval PRD, test set, rubric, judge plan, results + iteration loop). Use for LLM evaluation, benchmarks, rubrics, error analysis/open coding, and ship/no-ship quality gates for AI features.
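A toy sketch of the rubric-plus-judge loop such a pack describes, assuming a judge callable that returns an integer score per criterion. The rubric contents, threshold, and judge signature are placeholders, not the pack's defined interface.

```python
from statistics import mean
from typing import Callable

# Placeholder rubric; a real pack defines criteria and anchors in the eval PRD.
RUBRIC = ["faithfulness", "completeness", "tone"]
SHIP_THRESHOLD = 4.0  # assumed ship/no-ship gate on the mean 1-5 judge score

def evaluate(cases: list[dict], judge: Callable[[str, str, str], int]) -> bool:
    """Score every (case, criterion) pair and apply the ship gate."""
    scores = [
        judge(case["input"], case["output"], criterion)
        for case in cases
        for criterion in RUBRIC
    ]
    return mean(scores) >= SHIP_THRESHOLD
```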
Generate or remediate documentation with human-quality writing and style adherence. Use when creating new documentation, rewriting AI-generated content, or applying style profiles. Do not use for slop detection only (use slop-detector) or learning styles (use style-learner).
Measure quality effectively with actionable metrics. Use when establishing quality dashboards, defining KPIs, or evaluating test effectiveness.
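One way to make "actionable metrics" concrete: defect escape rate, a common test-effectiveness KPI. The formula is standard; the field names are assumptions.

```python
# Defect escape rate: share of defects found after release rather than by
# tests. A common test-effectiveness KPI; lower is better.
def defect_escape_rate(found_in_test: int, found_in_prod: int) -> float:
    total = found_in_test + found_in_prod
    return found_in_prod / total if total else 0.0

assert defect_escape_rate(found_in_test=45, found_in_prod=5) == 0.1
```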
BAZDMEG Method workflow checkpoint system for AI-assisted development. Enforce quality gates at three phases: pre-code, post-code, and pre-PR. Use when: (1) starting a new feature or bug fix, (2) finishing AI-generated code before review, (3) preparing a pull request, (4) running a planning interview, (5) auditing automation readiness, (6) preventing AI slop, (7) session bootstrap, (8) source rank, (9) domain gates, (10) bugbook. Triggers: 'bazdmeg', 'pre-code checklist', 'post-code checklist', 'pre-PR checklist', 'planning interview', 'quality gates', 'session bootstrap', 'source rank', 'domain gates', 'bugbook'.
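A sketch of the three-phase checkpoint idea. Only the pre-code / post-code / pre-PR phases come from the description; the checks inside each gate are invented examples.

```python
# Three-phase gate runner; the checks listed per phase are invented examples.
GATES = {
    "pre-code": ["spec agreed", "tests planned"],
    "post-code": ["tests pass", "no orphaned code"],
    "pre-pr": ["diff reviewed", "docs updated"],
}

def run_gate(phase: str, passed: set[str]) -> bool:
    missing = [check for check in GATES[phase] if check not in passed]
    if missing:
        print(f"{phase} blocked: {missing}")
    return not missing

run_gate("pre-pr", {"diff reviewed"})  # -> blocked: ['docs updated']
```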
Final code review and quality gate — run tests, check coverage, audit security, verify acceptance criteria from spec, and generate ship-ready report. Use when user says "review code", "quality check", "is it ready to ship", "final review", or after /deploy completes. Do NOT use for planning (use /plan) or building (use /build).
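A minimal sketch of one such gate, chaining the test run and a coverage floor via pytest-cov. The 90% threshold and the coverage invocation are configuration assumptions, not this skill's fixed behavior.

```python
import subprocess
import sys

# Assumed gate: suite must pass and line coverage must clear a floor.
# --cov-fail-under is pytest-cov's built-in threshold flag.
def quality_gate(min_coverage: int = 90) -> bool:
    result = subprocess.run(
        ["pytest", "-q", "--cov", f"--cov-fail-under={min_coverage}"]
    )
    return result.returncode == 0

if __name__ == "__main__":
    sys.exit(0 if quality_gate() else 1)
```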
Evidence-based test debugging enforcing systematic root cause analysis. Use when tests are failing, pytest errors occur, test suite not passing, debugging test failures, or fixing broken tests. Prevents assumption-based fixes by enforcing proper diagnostic sequence. Works with Python (.py), JavaScript/TypeScript (.js/.ts), Go, Rust test files. Supports pytest, jest, vitest, mocha, go test, cargo test, and other frameworks.
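A sketch of what enforcing a diagnostic sequence could look like for pytest: reproduce the single failing test in isolation and capture the full traceback before any fix is attempted. The flags are real pytest options; the evidence-first policy around them is an assumption about this skill's workflow.

```python
import subprocess

# Evidence-first step: reproduce one failing test in isolation and keep the
# full traceback (-x: stop at first failure, -vv: verbose, --tb=long: full
# trace) before proposing any change.
def reproduce(test_id: str) -> str:
    result = subprocess.run(
        ["pytest", test_id, "-x", "-vv", "--tb=long"],
        capture_output=True, text=True,
    )
    evidence = result.stdout + result.stderr
    if result.returncode == 0:
        raise RuntimeError("could not reproduce; do not 'fix' without evidence")
    return evidence  # attach to the diagnosis, then hypothesize

# Usage: reproduce("tests/test_auth.py::test_login_expired_token")
```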
Detects orphaned code (files/functions that exist but are never imported or called in production), preventing "created but not integrated" failures. Use before marking features complete, before moving ADRs to completed, during code reviews, or as part of quality gates. Triggers on "detect orphaned code", "find dead code", "check for unused modules", "verify integration", or proactively before completion. Works with Python modules, functions, classes, and LangGraph nodes. Catches the ADR-013 failure pattern where code exists and tests pass but is never integrated.
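The core check is cheap to sketch: collect every module that exists and every module that is imported, and flag the difference. This AST version handles flat import forms only; the real skill presumably also resolves function calls, classes, and LangGraph node wiring.

```python
import ast
from pathlib import Path

# Minimal orphan check: a module no other module imports is suspect.
# Only flat `import x` / `from x import y` forms are handled in this sketch;
# entry-point modules will be flagged too and need an allowlist in practice.
def orphaned_modules(root: str) -> set[str]:
    files = list(Path(root).rglob("*.py"))
    existing = {f.stem for f in files if f.stem != "__init__"}
    imported: set[str] = set()
    for f in files:
        tree = ast.parse(f.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                imported.update(a.name.split(".")[0] for a in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                imported.add(node.module.split(".")[0])
    return existing - imported

# Usage: orphaned_modules("src")  # -> e.g. {"legacy_report"}, never imported
```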
Code quality gatekeeper and auditor. Enforces strict quality gates, resolves the AI verification gap, and evaluates codebases across 12 critical dimensions with evidence-based scoring. Use when auditing code quality, reviewing AI-generated code, scoring codebases against industry standards, or enforcing pre-commit quality gates. Use for quality audit, code review, codebase evaluation, security assessment, technical debt analysis.
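A sketch of evidence-based, weighted dimension scoring. Only the "score each dimension, keep the evidence" shape comes from the description; the dimension names, weights, and scale are invented placeholders.

```python
# Weighted audit score; dimensions and weights are invented placeholders
# (3 of the 12 dimensions shown). Each finding carries evidence so the
# score stays auditable.
WEIGHTS = {"security": 3, "tests": 2, "docs": 1}

def audit_score(findings: dict[str, dict]) -> float:
    total = sum(WEIGHTS.values())
    weighted = sum(WEIGHTS[dim] * findings[dim]["score"] for dim in WEIGHTS)
    return weighted / total  # 0-10 scale if each dimension scores 0-10

findings = {
    "security": {"score": 6, "evidence": "hardcoded secret in config.py:12"},
    "tests":    {"score": 8, "evidence": "87% line coverage, CI green"},
    "docs":     {"score": 4, "evidence": "README stale since v2 API change"},
}
print(round(audit_score(findings), 2))  # -> 6.33
```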