Found 1,183 Skills
Code quality gatekeeper and auditor. Enforces strict quality gates, closes the verification gap left by AI-generated code, and evaluates codebases across 12 critical dimensions with evidence-based scoring. Use when auditing code quality, reviewing AI-generated code, scoring codebases against industry standards, or enforcing pre-commit quality gates. Use for quality audit, code review, codebase evaluation, security assessment, technical debt analysis.
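To make "evidence-based scoring" concrete, here is a minimal sketch of a dimension-weighted quality gate in Python. The dimension names, weights, and threshold are hypothetical placeholders (the entry's 12 dimensions are not listed here), not this skill's actual rubric.

```python
# Hypothetical quality-gate scoring: dimension names, weights, and the
# passing threshold are illustrative, not the skill's actual rubric.
DIMENSIONS = {
    "correctness": 0.25,
    "security": 0.20,
    "test_coverage": 0.15,
    "readability": 0.15,
    "documentation": 0.10,
    "performance": 0.15,
}

def gate(scores: dict[str, float], threshold: float = 0.8) -> bool:
    """Weighted average of per-dimension scores in [0, 1]; missing
    evidence counts as zero, so the gate fails closed."""
    total = sum(DIMENSIONS[d] * scores.get(d, 0.0) for d in DIMENSIONS)
    return total >= threshold

if __name__ == "__main__":
    audit = {"correctness": 0.9, "security": 0.7, "test_coverage": 0.8,
             "readability": 0.85, "documentation": 0.6, "performance": 0.9}
    print("PASS" if gate(audit) else "FAIL")
```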
Real DCF (Discounted Cash Flow) model creation for equity valuation. Retrieves financial data from SEC filings and analyst reports, builds comprehensive cash flow projections with proper WACC calculations, performs sensitivity analysis, and outputs professional Excel models with executive summaries. Use when users need to value a company using DCF methodology, request intrinsic value analysis, or ask for detailed financial modeling with growth projections and terminal value calculations.
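The core DCF mechanics this entry names (discount projected free cash flows at WACC, add a Gordon-growth terminal value) fit in a few lines. A toy sketch with made-up inputs, not real financial data or this skill's model:

```python
# Toy DCF: discount projected free cash flows at WACC and add a
# Gordon-growth terminal value. All inputs below are made up.
def dcf_value(fcf: list[float], wacc: float, terminal_growth: float) -> float:
    # Present value of each explicitly projected year's cash flow.
    pv_fcf = sum(cf / (1 + wacc) ** t for t, cf in enumerate(fcf, start=1))
    # Terminal value from the final year's FCF, then discounted back.
    terminal = fcf[-1] * (1 + terminal_growth) / (wacc - terminal_growth)
    pv_terminal = terminal / (1 + wacc) ** len(fcf)
    return pv_fcf + pv_terminal

projected_fcf = [100.0, 110.0, 121.0, 133.0, 146.0]  # 5-year projection
print(round(dcf_value(projected_fcf, wacc=0.09, terminal_growth=0.025), 1))
```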
Learn how to extend Dart's functionality to implement JavaScript-style "truthy" checks for easier conditional logic and value evaluations.
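The entry targets Dart; as a language-neutral sketch of the same idea, here is a JS-style truthy helper in Python. JavaScript and Python disagree on empty containers, which is exactly why such a helper is worth writing:

```python
# JS-style truthiness helper: only false, 0, -0, NaN, "" and null/None
# are falsy in JavaScript; empty lists and dicts are truthy, unlike Python.
import math

def js_truthy(value) -> bool:
    if value is None or value is False:
        return False
    if isinstance(value, (int, float)) and not isinstance(value, bool):
        return value != 0 and not (isinstance(value, float) and math.isnan(value))
    if isinstance(value, str):
        return value != ""
    return True

assert js_truthy([]) and js_truthy({})   # truthy in JS, falsy in Python
assert not js_truthy(0) and not js_truthy("") and not js_truthy(float("nan"))
```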
Evaluate creative work against explicit taste preferences. Use when drafting to align with project aesthetics, when reviewing to surface preference conflicts, or when generating voting options to reflect diverse tastes.
AI situational awareness — internal threat detection for hallucination risk, scope creep, and context degradation. Maps Cooper color codes to reasoning states and OODA loop to real-time decisions. Use during any task where reasoning quality matters, when operating in unfamiliar territory, after detecting early warning signs such as an uncertain fact or suspicious tool result, or before high-stakes output like irreversible changes or architectural decisions.
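Cooper's color codes (White, Yellow, Orange, Red) come from awareness training; how this skill maps them to reasoning states is not spelled out above, so the sketch below is one hypothetical reading of the description:

```python
# Hypothetical mapping of Cooper color codes to AI reasoning states;
# the skill's actual mapping is not given in the description above.
COOPER_STATES = {
    "white":  "unaware: routine generation, no self-monitoring",
    "yellow": "relaxed alert: watching for uncertain facts and scope creep",
    "orange": "specific alert: a suspicious tool result or shaky claim flagged",
    "red":    "engaged: stop, verify, or escalate before high-stakes output",
}

def escalate(current: str) -> str:
    """Move one step up the awareness ladder on a warning sign."""
    order = list(COOPER_STATES)
    return order[min(order.index(current) + 1, len(order) - 1)]

print(escalate("yellow"))  # -> orange
```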
Technical research methodology with YAGNI/KISS/DRY principles. Phases: scope definition, information gathering, analysis, synthesis, recommendation. Capabilities: technology evaluation, architecture analysis, best practices research, trade-off assessment, solution design. Actions: research, analyze, evaluate, compare, recommend technical solutions. Keywords: research, technology evaluation, best practices, architecture analysis, trade-offs, scalability, security, maintainability, YAGNI, KISS, DRY, technical analysis, solution design, competitive analysis, feasibility study. Use when: researching technologies, evaluating architectures, analyzing best practices, comparing solutions, assessing technical trade-offs, planning scalable/secure systems.
Evaluate and validate Claude Code rules in .claude/rules/ directories. Use when auditing rule file quality, validating frontmatter and glob patterns, or checking rules organization before deployment. Do not use when writing new rules from scratch; use rule authoring guides instead. Do not use when evaluating skills or hooks; use skills-eval or hooks-eval instead.
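A rough sketch of the kind of check this entry describes: parse the YAML frontmatter of a rule file and sanity-check its glob patterns. The assumed frontmatter fields (`description`, `globs`) are a guess at the convention, not a documented schema:

```python
# Rough rule-file validator sketch; the frontmatter fields assumed here
# ("description", "globs") are guesses, not a documented schema.
import sys
import yaml  # pip install pyyaml

def validate_rule(path: str) -> list[str]:
    errors = []
    text = open(path, encoding="utf-8").read()
    if not text.startswith("---"):
        return [f"{path}: missing YAML frontmatter"]
    try:
        front = yaml.safe_load(text.split("---", 2)[1]) or {}
    except yaml.YAMLError as e:
        return [f"{path}: unparseable frontmatter ({e})"]
    if not front.get("description"):
        errors.append(f"{path}: empty or missing description")
    for pattern in front.get("globs") or []:
        if not pattern or pattern.startswith("/"):
            errors.append(f"{path}: suspicious glob {pattern!r}")
    return errors

if __name__ == "__main__":
    for problem in validate_rule(sys.argv[1]):
        print(problem)
```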
Specialized business logic evaluator for the Evaluate-Loop. Use this for evaluating tracks that implement core product logic — pipelines, dependency resolution, state machines, pricing/tier enforcement, packaging. Checks feature correctness against product rules, edge cases, state transitions, data flow, and user journey completeness. Dispatched by loop-execution-evaluator when track type is 'business-logic', 'generator', or 'core-feature'. Triggered by: 'evaluate logic', 'test business rules', 'verify business rules', 'check feature'.
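Tier enforcement is the easiest of the listed checks to picture, because business rules tend to break at boundary values. A toy check with made-up tiers:

```python
# Toy tier-enforcement check with made-up tier boundaries; the point is
# exercising the boundary cases, where business rules usually break.
TIERS = [("free", 0, 100), ("pro", 101, 10_000), ("enterprise", 10_001, None)]

def tier_for(usage: int) -> str:
    for name, low, high in TIERS:
        if usage >= low and (high is None or usage <= high):
            return name
    raise ValueError(f"no tier covers usage={usage}")

# Edge cases at every boundary, not just the happy path:
for usage, expected in [(0, "free"), (100, "free"), (101, "pro"),
                        (10_000, "pro"), (10_001, "enterprise")]:
    assert tier_for(usage) == expected, (usage, expected)
print("tier boundaries hold")
```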
Use this skill whenever the user asks for a security analysis, vulnerability assessment, security audit, or any form of Security Assessment Report (SAR) over a codebase, infrastructure, API, database, or system. Triggers include: "audit my code", "find security issues", "run a security check", "generate a SAR", "check for vulnerabilities", "is this code secure", or any request that involves evaluating the security posture of a project. Also triggers when the user uploads or references source code, config files, environment variables, or architecture diagrams and asks for a security opinion. Do NOT use for generic coding tasks, code reviews focused on quality rather than security, or performance optimization unless a security angle is explicitly present.
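One concrete slice of such an assessment is scanning source and config files for hardcoded credentials. The patterns below are a small illustrative subset, nowhere near a full secret scanner or this skill's actual checks:

```python
# Tiny hardcoded-secret scan: the regexes are an illustrative subset,
# not a complete or production-grade secret scanner.
import re
import sys

PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "password assignment": re.compile(r"(?i)password\s*[:=]\s*['\"][^'\"]{4,}"),
}

def scan(path: str) -> None:
    for lineno, line in enumerate(open(path, encoding="utf-8", errors="ignore"), 1):
        for label, pattern in PATTERNS.items():
            if pattern.search(line):
                print(f"{path}:{lineno}: possible {label}")

if __name__ == "__main__":
    for target in sys.argv[1:]:
        scan(target)
```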
INVOKE THIS SKILL when creating, running, or analyzing Arize experiments. Covers experiment CRUD, exporting runs, comparing results, and evaluation workflows using the ax CLI.
Generate objective reference check reports about the user from real AI collaboration data: session history, git logs, GitHub profile, and memory files. Like a colleague writing a professional reference, but grounded in actual shared work. Use whenever the user asks to be evaluated as a developer, wants a reference letter, a work-style analysis, "introduced by my agents" content, interview prep from collaboration history, or blog topics from past discussions. Triggers on: write a reference, analyze my work patterns, what do you think of me, 나에 대한 레퍼런스 써줘 ("write a reference about me"), 내 작업 스타일 분석해줘 ("analyze my work style"). Not for general code review, architecture docs, cover letters, or codebase-only analysis.
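Of the data sources listed, git history is the most mechanical to mine. A sketch that counts one author's commits per month; the author string is a placeholder and the script assumes it runs inside a repository:

```python
# Sketch: commit counts per month for one author via `git log`.
# The author string is a placeholder; run inside a real repository.
import subprocess
from collections import Counter

def commits_per_month(author: str) -> Counter:
    out = subprocess.run(
        ["git", "log", f"--author={author}", "--date=format:%Y-%m",
         "--pretty=%ad"],
        capture_output=True, text=True, check=True,
    ).stdout
    # One "YYYY-MM" token per commit; Counter aggregates them by month.
    return Counter(out.split())

for month, n in sorted(commits_per_month("you@example.com").items()):
    print(month, n)
```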
Meta-prompting framework for critiquing responses, analyzing solution trajectories, and evaluating AI-generated content quality.
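Meta-prompting here typically means wrapping the content under review in a second prompt with explicit criteria. A minimal template sketch; the criteria and verdict labels are placeholders, not this framework's actual rubric:

```python
# Minimal critique meta-prompt template; the criteria and verdict labels
# are placeholders, not this framework's actual rubric.
CRITIQUE_TEMPLATE = """You are reviewing another model's answer.

Criteria: {criteria}

Answer under review:
{answer}

For each criterion, give a 1-5 score with one sentence of evidence,
then an overall verdict: ACCEPT, REVISE, or REJECT."""

def build_critique_prompt(answer: str, criteria: list[str]) -> str:
    return CRITIQUE_TEMPLATE.format(criteria="; ".join(criteria), answer=answer)

print(build_critique_prompt("The capital of Australia is Sydney.",
                            ["factual accuracy", "completeness"]))
```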