Found 89 Skills
Conduct deep research on any topic — get comprehensive reports with citations, key findings, and actionable insights in minutes. Use when user wants to "deep research", "research this", "investigate", "analysis report", "深度研究", "调研", "リサーチ", "심층 연구".
Trigger native web search. Use when you need quick internet research with concise summaries and full source URLs.
Simulate target-conference reviewers for an ML/AI paper before submission. Use this skill whenever the user wants a reviewer-style critique, predicted scores, likely reject reasons, rebuttal risks, area-chair style meta-review, adversarial Reviewer 2 feedback, or venue-specific pre-review for conferences such as NeurIPS, ICML, ICLR, CVPR, ACL, EMNLP, or similar venues. This skill should dynamically inspect reviewer guidelines, example reviews, accepted papers, and project evidence when available.
Decompose research ideas into atomic, self-contained concepts with bidirectional math-code mapping. For each concept, extract the math formula from papers and find code implementations. Use for complex system papers requiring formal grounding.
Look up and read Hugging Face paper pages in markdown, and use the papers API for structured metadata such as authors, linked models/datasets/spaces, GitHub repo, and project page. Use when the user shares a Hugging Face paper page URL, an arXiv URL or ID, or asks to summarize, explain, or analyze an AI research paper.
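A minimal sketch of the metadata lookup this skill describes, assuming the Hugging Face papers endpoint lives at `https://huggingface.co/api/papers/<arxiv_id>`; the exact response fields are an assumption, not a documented contract:

```python
import json
import re
from urllib.request import urlopen


def arxiv_id_from_url(url: str) -> str:
    """Extract a modern-style arXiv ID (e.g. 2310.06825) from an
    arXiv or Hugging Face paper URL."""
    m = re.search(r"(\d{4}\.\d{4,5})", url)
    if not m:
        raise ValueError(f"no arXiv ID found in {url!r}")
    return m.group(1)


def fetch_paper_metadata(arxiv_id: str) -> dict:
    # Assumed endpoint shape; returns JSON with fields such as
    # title, authors, and summary if the assumption holds.
    with urlopen(f"https://huggingface.co/api/papers/{arxiv_id}") as resp:
        return json.load(resp)


# Usage (performs a network call):
# meta = fetch_paper_metadata(
#     arxiv_id_from_url("https://huggingface.co/papers/2310.06825"))
# print(meta.get("title"))
```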
Workflow 1: Full idea discovery pipeline. Orchestrates research-lit → idea-creator → novelty-check → research-review to go from a broad research direction to validated, pilot-tested ideas. Use when user says "找idea全流程", "idea discovery pipeline", "从零开始找方向", or wants the complete idea exploration workflow.
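The four-stage orchestration above (research-lit → idea-creator → novelty-check → research-review) can be sketched as a sequential pipeline over shared state; the stage bodies and state keys here are hypothetical placeholders, not the skills' actual interfaces:

```python
from typing import Any, Callable

# Hypothetical stand-ins for the four skills; each stage takes the
# accumulated research state and returns an updated copy.
def research_lit(state: dict) -> dict:
    return {**state, "paper_map": ["ranked papers for " + state["direction"]]}

def idea_creator(state: dict) -> dict:
    return {**state, "ideas": ["candidate idea from paper map"]}

def novelty_check(state: dict) -> dict:
    # Keep only ideas that pass a (stubbed) novelty filter.
    return {**state, "novel_ideas": list(state["ideas"])}

def research_review(state: dict) -> dict:
    return {**state, "validated": bool(state["novel_ideas"])}

PIPELINE: list[Callable[[dict], dict]] = [
    research_lit, idea_creator, novelty_check, research_review,
]

def run_pipeline(direction: str) -> dict[str, Any]:
    """Run the full idea-discovery workflow from a broad direction."""
    state: dict[str, Any] = {"direction": direction}
    for stage in PIPELINE:
        state = stage(state)
    return state
```

The design point is that each stage only reads and extends a shared state dict, so stages can be rerun or swapped independently.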
Help a CS or AI PhD student turn a rough research idea into a validated next-step decision using the handbook's FIVE+C framework. Use this skill whenever the user says they have a research idea, wants to know whether an idea is worth pursuing, needs help choosing between project directions, is preparing to pitch an idea to an advisor or senior student, or feels unsure whether a project is too incremental, too ambitious, already solved, hard to evaluate, or missing resources.
Audit a CS or AI research project for reproducibility across environment, data, code, configuration, logging, and documentation. Use this skill whenever the user wants to make experiments reproducible, prepare code for collaborators, debug environment drift, write a README, package a project for paper release, or ensure they can rerun results months later.
Guide a focused CS or AI literature review sprint that turns a topic, idea, claim, or project direction into a ranked paper map, closest-work risk assessment, method taxonomy, novelty implications, baseline implications, and next actions. Use this skill whenever the user needs to survey a topic, check novelty, map related work, prepare a project, find canonical or recent papers, decide read/skim/ignore priority, or turn papers into a research direction.
Finalize an accepted ML or AI paper for camera-ready submission after reviews, rebuttal, and acceptance. Use this skill whenever the user has an accepted paper, camera-ready deadline, final revision, acceptance email, meta-review, rebuttal promises, author-response commitments, de-anonymization tasks, supplement updates, code links, acknowledgements, final LaTeX checks, or needs to ensure the accepted paper's claims, figures, references, and artifacts are consistent before final submission.
Design hypothesis-driven ML/AI experiments before running them. Use this skill whenever the user wants to plan experiments, ablations, baselines, metrics, controls, seeds, logging, stop conditions, reviewer-proof evidence, or an experiment matrix for a paper claim before using run-experiment or writing results.
Diagnose surprising, negative, unstable, or ambiguous ML/AI experiment results and decide whether to debug implementation, rerun experiments, change metrics or baselines, revise the algorithm, narrow the paper claim, park, or kill a direction. Use this skill whenever results do not match expectations, a method fails, metrics conflict, seeds vary, baselines beat the method, plots look suspicious, or the user asks what to do next after experimental results.