Found 6 Skills
Paper reviewer that evaluates machine learning research projects following official ICML reviewer guidelines. Provides comprehensive reviews with actionable feedback across all key dimensions: claims/evidence, relation to prior work, originality, significance, clarity, and reproducibility. Also provides formative feedback on incomplete drafts, proposals, and research code repositories. MANDATORY TRIGGERS: review paper, ICML review, paper review, evaluate paper, research paper feedback, ML paper review, conference review, academic review, paper critique, NeurIPS review, ICLR review, project proposal, research proposal, paper draft, early feedback, incomplete paper, work in progress, WIP review, review repo, review codebase, research project review
Evaluate how well a codebase supports autonomous AI development. Analyzes repositories across eight technical pillars (Style & Validation, Build System, Testing, Documentation, Dev Environment, Debugging & Observability, Security, Task Discovery) and five maturity levels. Use when users request `/readiness-report` or want to assess agent readiness, codebase maturity, or identify gaps preventing effective AI-assisted development.
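A minimal sketch of how the five maturity levels might be derived from per-pillar scores, purely as an illustration: the pillar names are taken from the description above, while the 0-4 scale, the thresholds, and the level labels are assumptions, not the skill's actual rubric.

```python
# Illustrative only: pillar names come from the skill description above;
# the 0-4 scale, weakest-pillar rule, and level labels are assumptions.

PILLARS = [
    "Style & Validation", "Build System", "Testing", "Documentation",
    "Dev Environment", "Debugging & Observability", "Security", "Task Discovery",
]

# Hypothetical maturity levels, ordered from least to most agent-ready.
LEVELS = ["Ad hoc", "Basic", "Structured", "Automated", "Autonomous-ready"]

def maturity_level(scores: dict[str, int]) -> str:
    """Map per-pillar scores (assumed 0-4) to one of five maturity levels.

    Uses the weakest pillar, on the idea that a single missing pillar blocks
    autonomous work; a real rubric may weight pillars differently.
    """
    weakest = min(scores.get(p, 0) for p in PILLARS)
    return LEVELS[max(0, min(weakest, len(LEVELS) - 1))]

if __name__ == "__main__":
    example = dict.fromkeys(PILLARS, 3)
    example["Testing"] = 1  # one weak pillar drags the whole level down
    print(maturity_level(example))  # -> "Basic"
```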
Guide for setting up LaunchDarkly projects in your codebase. Helps you assess your stack, choose the right approach, and integrate project management in a way that makes sense for your architecture.
Analyze a GitHub codebase to create comprehensive architecture documentation including ASCII diagrams, component relationships, data flow, hosting infrastructure, and file structure assessment.
Analyzes repositories for AI agent development efficiency. Scores 8 aspects (documentation, architecture, testing, type safety, agent instructions, file structure, context optimization, security) with ASCII dashboards. Use when evaluating AI-readiness, preparing codebases for Claude Code, or improving repository structure for AI-assisted development.
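As a rough illustration of the "scores 8 aspects with ASCII dashboards" idea, here is a minimal rendering sketch: the aspect names come from the description above, while the 0-10 scale, the `render_dashboard` helper, and the bar format are assumptions rather than the skill's actual output.

```python
# Hypothetical dashboard rendering; aspect names are from the description
# above, the 0-10 scale and bar format are illustrative assumptions.

ASPECTS = [
    "documentation", "architecture", "testing", "type safety",
    "agent instructions", "file structure", "context optimization", "security",
]

def render_dashboard(scores: dict[str, int], width: int = 20) -> str:
    """Render one ASCII bar per aspect, plus an overall average."""
    lines = []
    for aspect in ASPECTS:
        score = scores.get(aspect, 0)
        filled = round(score / 10 * width)
        bar = "#" * filled + "-" * (width - filled)
        lines.append(f"{aspect:<22} [{bar}] {score:>2}/10")
    overall = sum(scores.get(a, 0) for a in ASPECTS) / len(ASPECTS)
    lines.append(f"overall: {overall:.1f}/10")
    return "\n".join(lines)

if __name__ == "__main__":
    print(render_dashboard({a: s for a, s in zip(ASPECTS, [7, 6, 4, 8, 3, 7, 5, 6])}))
```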
Audit and assess a codebase for programmatic SEO readiness at 1000+ page scale. Use when starting a pSEO project, evaluating an existing codebase for pSEO gaps, or when the user asks to audit, assess, or review their site for programmatic SEO scalability.