Run a formal, multi-dimensional code review of a pull request. Reads the PR diff, classifies change types, dispatches parallel reviewers by dimension (correctness, consistency, docs-sync, plus conditional security/edge-cases/UX/performance/structure/maintainability), and synthesizes findings into an actionable punch list. Use when the user asks to review a PR, run /deep-review, or mark a PR as ready for review, or requests a formal, thorough code review.
Detects framework-specific anti-patterns, convention violations, and idiom misuse across PHP/Laravel, React/Next.js, and Python/Django/FastAPI codebases. Loads framework-specific reference guides and checks against framework conventions. Generates severity-scored findings with copy-pasteable fix prompts. Trigger phrases: "framework review", "framework check", "laravel best practices", "react best practices", "framework audit", "framework-specific review".
Detects code smells and anti-patterns — long methods, large classes, feature envy, data clumps, primitive obsession, dead code, magic numbers, deep nesting, and more. Uses configurable thresholds from .codeprobe-config.json when available. Trigger phrases: "code smells", "smell check", "anti-patterns", "clean code review".
Analyzes code architecture and structure — layer violations, circular dependencies, god objects, anemic domain models, missing boundaries, directory structure issues, and configuration problems. Generates severity-scored findings with fix prompts. Trigger phrases: "architecture review", "structure check", "layer analysis", "god class".
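Circular-dependency detection of the kind described above is typically a depth-first search over the module import graph; a back edge to a node still on the search stack marks a cycle. A minimal sketch (an assumption about the approach, not this skill's code):

```python
def find_cycles(graph: dict[str, list[str]]) -> list[list[str]]:
    """Return dependency cycles in an import graph, found via DFS.

    graph maps a module name to the modules it imports.
    """
    cycles: list[list[str]] = []
    state: dict[str, int] = {}   # absent=unvisited, 1=on stack, 2=finished
    path: list[str] = []

    def visit(node: str) -> None:
        state[node] = 1
        path.append(node)
        for dep in graph.get(node, []):
            if state.get(dep) == 1:                    # back edge: a cycle
                cycles.append(path[path.index(dep):] + [dep])
            elif dep not in state:
                visit(dep)
        path.pop()
        state[node] = 2

    for node in graph:
        if node not in state:
            visit(node)
    return cycles
```

For example, `find_cycles({"a": ["b"], "b": ["c"], "c": ["a"]})` reports the cycle `a -> b -> c -> a`.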
Audits code for SOLID principle violations — Single Responsibility, Open/Closed, Liskov Substitution, Interface Segregation, and Dependency Inversion. Identifies classes and methods that violate these principles and generates fix prompts. Trigger phrases: "SOLID check", "solid review", "SRP violation", "dependency inversion".
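As a hedged illustration of the Single Responsibility findings such an audit targets (class and method names here are hypothetical, not from the skill):

```python
# Before: Report both formats data AND writes files -- two reasons to change,
# which is the classic Single Responsibility violation.
class Report:
    def __init__(self, rows):
        self.rows = rows

    def to_csv(self) -> str:
        return "\n".join(",".join(map(str, r)) for r in self.rows)

    def save(self, path: str) -> None:   # persistence concern mixed in
        with open(path, "w") as f:
            f.write(self.to_csv())


# After: formatting and persistence are separated; the writer depends on a
# formatter it is handed, which also eases Dependency Inversion.
class CsvFormatter:
    def format(self, rows) -> str:
        return "\n".join(",".join(map(str, r)) for r in rows)


class ReportWriter:
    def __init__(self, formatter):
        self.formatter = formatter       # injected collaborator, easy to swap

    def render(self, rows) -> str:
        return self.formatter.format(rows)
```

The "fix prompt" such a tool generates would describe exactly this kind of extraction: name the second responsibility, pull it into its own class, and inject it.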
Audits AI-generated code slop: hallucinated APIs, over-abstraction, duplicate code, test theater, noisy comments. Trigger phrases: "slop", "AI-generated code", "cleanup", "overengineered". Not for prose (use anti-ai-prose).
Codebase intelligence for JavaScript and TypeScript. Free static layer finds unused code (files, exports, types, dependencies), code duplication, circular dependencies, complexity hotspots, architecture boundary violations, and feature flag patterns. Optional paid runtime layer (Fallow Runtime) merges production execution data into the same health report for hot-path review, cold-path deletion confidence, and stale-flag evidence. 90 framework plugins, zero configuration, sub-second static analysis. Use when asked to analyze code health, find unused code, detect duplicates, check circular dependencies, audit complexity, check architecture boundaries, detect feature flags, clean up the codebase, auto-fix issues, merge production coverage, or run fallow.
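Code-duplication detection in tools like this is often a sliding window of normalized lines hashed across files; any window seen in more than one place is a duplicate candidate. A minimal sketch of that idea (the `window` size and function name are assumptions, and real tools work on tokens or ASTs rather than raw lines):

```python
from collections import defaultdict

def find_duplicates(files: dict[str, str], window: int = 3):
    """Report identical `window`-line runs that appear in more than one place.

    files maps a filename to its source text; locations are 1-based line numbers.
    """
    seen = defaultdict(list)
    for name, text in files.items():
        lines = [line.strip() for line in text.splitlines()]  # normalize whitespace
        for i in range(len(lines) - window + 1):
            chunk = tuple(lines[i:i + window])
            if any(chunk):                       # skip windows of blank lines
                seen[chunk].append((name, i + 1))
    return [(chunk, locs) for chunk, locs in seen.items() if len(locs) > 1]
```

Each reported chunk with its locations is exactly the raw material a "clean up the codebase" pass needs: extract the shared run into one place and point both call sites at it.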
Runs a code review using the reviewer agent.
Analyzes the repository and suggests improvements.
Identify refactoring opportunities by surfacing architectural friction. Apply the deletion test, deep-modules vocabulary, and seams analysis. Each opportunity becomes its own evanflow-writing-plans cycle. Use when reviewing code for refactoring, when a file has grown too large, or when architecture concerns surface during feature work.
Iterative self-review loop after implementing a plan. Re-read changed code with fresh eyes, fix issues found, re-run quality checks, repeat until clean. For UI work, includes visual verification (view the rendered page). Use after evanflow-executing-plans completes; on success, report and stop — the user decides what's next.
Guidance for receiving and responding to code review feedback. Use when addressing PR review comments, incorporating reviewer suggestions, or managing review discussions.