Found 5 Skills
Analyze user research data to uncover insights, identify patterns, and inform design decisions. Synthesize qualitative and quantitative research into actionable recommendations.
Research-driven code review and validation at multiple levels of abstraction. Two modes: (1) Session review — after making changes, review and verify work using parallel reviewers that research-validate every assumption; (2) Full codebase audit — deep end-to-end evaluation using parallel teams of subagent-spawning reviewers. Use when reviewing changes, verifying work quality, auditing a codebase, validating correctness, checking assumptions, finding defects, or reducing complexity. NOT for writing new code, explaining code, or benchmarking.
Verify research idea novelty against recent literature. Use when the user says "查新" ("novelty check"), "novelty check", "有没有人做过" ("has anyone done this before"), "check novelty", or wants to verify that a research idea is novel before implementing it.
Audit whether an ML or AI paper's experimental baselines are necessary, fair, current, and reviewer-proof. Use this skill whenever the user is planning experiments, comparing methods, choosing baselines, worried about missing SOTA or unfair comparisons, preparing a reviewer-proof experiment section, or converting a literature review into must-have, should-have, optional, and not-comparable baselines.
Conduct simulated user research with AI personas. Triggers when the user says 'do user research', 'run user research', 'simulate user interviews', or '/user-research'. Three phases: free growth → pain extraction → product collision, with four quality validation checkpoints. Supports single or multi-concept testing.