Found 1,161 Skills
Use when comparing multiple named alternatives across several criteria, need transparent trade-off analysis, making group decisions requiring alignment, choosing between vendors/tools/strategies, stakeholders need to see decision rationale, balancing competing priorities (cost vs quality vs speed), user mentions "which option should we choose", "compare alternatives", "evaluate vendors", "trade-offs", or when decision needs to be defensible and data-driven.
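A minimal sketch of the weighted-scoring approach this entry describes; the criteria, weights, and vendor scores below are hypothetical examples, not part of the skill itself.

```python
# Weighted decision matrix: score each alternative on each criterion,
# multiply by the criterion's weight, and sum. Higher totals win.
criteria = {"cost": 0.5, "quality": 0.3, "speed": 0.2}  # weights sum to 1

# Scores on a 1-5 scale for each named alternative (hypothetical data).
alternatives = {
    "Vendor A": {"cost": 4, "quality": 3, "speed": 5},
    "Vendor B": {"cost": 2, "quality": 5, "speed": 3},
}

totals = {
    name: sum(scores[c] * w for c, w in criteria.items())
    for name, scores in alternatives.items()
}
best = max(totals, key=totals.get)  # "Vendor A" with these numbers
```

Keeping the weights explicit is what makes the trade-off analysis transparent and defensible: stakeholders can dispute a weight without re-arguing the whole decision.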
Use when "Polars", "fast dataframe", "lazy evaluation", "Arrow backend", or asking about "pandas alternative", "parallel dataframe", "large CSV processing", "ETL pipeline", "expression API"
Paper reviewer that evaluates machine learning research projects following official ICML reviewer guidelines. Provides comprehensive reviews with actionable feedback across all key dimensions: claims/evidence, relation to prior work, originality, significance, clarity, and reproducibility. Also provides formative feedback on incomplete drafts, proposals, and research code repositories. MANDATORY TRIGGERS: review paper, ICML review, paper review, evaluate paper, research paper feedback, ML paper review, conference review, academic review, paper critique, NeurIPS review, ICLR review, project proposal, research proposal, paper draft, early feedback, incomplete paper, work in progress, WIP review, review repo, review codebase, research project review
Production-ready financial analyst skill with ratio analysis, DCF valuation, budget variance analysis, and rolling forecast construction. 4 Python tools (all stdlib-only). Works with Claude Code, Codex CLI, and OpenClaw.
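A stdlib-only sketch of the DCF calculation this entry mentions; the function name and figures are hypothetical, not the skill's actual tools.

```python
# Discounted cash flow: present value of projected cash flows plus a
# Gordon-growth terminal value, discounted back from the final year.
def dcf_value(cash_flows, discount_rate, terminal_growth):
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    terminal = (cash_flows[-1] * (1 + terminal_growth)
                / (discount_rate - terminal_growth))
    pv += terminal / (1 + discount_rate) ** len(cash_flows)
    return pv
```

With a single 100 cash flow, a 10% discount rate, and zero terminal growth, the value works out to 1000 (100/1.1 plus a 1000 terminal value discounted one year).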
Evaluate research rigor. Assess methodology, experimental design, statistical validity, bias, confounding, and evidence quality (GRADE, Cochrane risk-of-bias) for critical analysis of scientific claims.
Comprehensive quality auditing and evaluation of tools, frameworks, and systems against industry best practices with detailed scoring across 12 critical dimensions
Implement feature flags using the Vercel Flags SDK with server-side evaluation, environment-based toggles, and Vercel Toolbar integration.
Audit npm, pip, and Go dependencies that OpenClaw skills try to install. Check for known vulnerabilities, typosquatting, and malicious packages.
Expert in designing effective prompts for LLM-powered applications. Masters prompt structure, context management, output formatting, and prompt evaluation. Use when "prompt engineering, system prompt, few-shot, chain of thought, prompt design, LLM prompt, instruction tuning, prompt template, output format, prompts, llm, gpt, claude, evaluation" mentioned.
Review and improve documentation with parallel evaluation and iterative improvement loop.
Evaluate UX/UI using Don Norman's 7 fundamental design principles from The Design of Everyday Things. Audit discoverability, affordances, signifiers, feedback, mapping, constraints and conceptual models.
Create structured handoff for session continuation. Triggers: handoff, pause, save context, end session, pick up later, continue later.