Found 107 Skills
Fine-tunes and evaluates OpenVLA-OFT and OpenVLA-OFT+ policies for robot action generation with continuous action heads, LoRA adaptation, and FiLM conditioning on LIBERO simulation and ALOHA real-world setups. Use when reproducing OpenVLA-OFT paper results, training custom VLA action heads (L1 or diffusion), deploying server-client inference for ALOHA, or debugging normalization, LoRA merge, and cross-GPU issues.
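For the fine-tuning side, a minimal sketch of attaching LoRA adapters to an OpenVLA backbone with Hugging Face peft is shown below; the checkpoint id, rank, and target-module choice are illustrative assumptions rather than the OpenVLA-OFT defaults, and the repo's own training scripts remain the reference for paper-faithful runs.

```python
# Minimal sketch: attach LoRA adapters to an OpenVLA backbone with Hugging Face peft.
# Checkpoint id, rank, and target modules are illustrative assumptions, not the
# OpenVLA-OFT defaults; use the repo's fine-tuning scripts to reproduce paper results.
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor
from peft import LoraConfig, get_peft_model

MODEL_ID = "openvla/openvla-7b"  # assumed base checkpoint

processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

lora_cfg = LoraConfig(
    r=32,                         # adapter rank (assumption)
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules="all-linear",  # adapt every linear layer in the backbone
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # sanity-check how few weights LoRA trains
```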
Fine-tune and serve Physical Intelligence OpenPI models (pi0, pi0-fast, pi0.5) using JAX or PyTorch backends for robot policy inference across ALOHA, DROID, and LIBERO environments. Use when adapting pi0 models to custom datasets, converting JAX checkpoints to PyTorch, running policy inference servers, or debugging norm stats and GPU memory issues.
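A client-side sketch of querying a running policy server follows; the class and method names mirror the openpi_client examples but should be verified against the repo, and the observation keys are placeholders rather than the real schema.

```python
# Minimal client-side sketch for querying a running openpi policy server.
# Class/method names follow the openpi_client examples but should be verified
# against the repo; the observation keys below are placeholders, not the real schema.
import numpy as np
from openpi_client import websocket_client_policy

policy = websocket_client_policy.WebsocketClientPolicy(host="localhost", port=8000)

observation = {
    "image": np.zeros((224, 224, 3), dtype=np.uint8),  # placeholder camera frame
    "state": np.zeros(8, dtype=np.float32),            # placeholder proprioception
    "prompt": "pick up the cube",
}

result = policy.infer(observation)
action_chunk = result["actions"]  # expected: (horizon, action_dim) array
print(action_chunk.shape)
```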
Evaluates NVIDIA Cosmos Policy on LIBERO and RoboCasa simulation environments. Use when setting up cosmos-policy for robot manipulation evaluation, running headless GPU evaluations with EGL rendering, or profiling inference latency on cluster or local GPU machines.
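A hedged sketch of the headless setup and latency profiling this entails: MUJOCO_GL and PYOPENGL_PLATFORM are standard MuJoCo settings for offscreen EGL rendering, while the `policy` callable and observation are placeholders for the cosmos-policy interfaces.

```python
# Sketch: force offscreen EGL rendering before any sim import, then time policy calls.
# MUJOCO_GL / PYOPENGL_PLATFORM are standard MuJoCo settings for headless GPUs;
# `policy` and `obs` are placeholders for the cosmos-policy interfaces.
import os
os.environ["MUJOCO_GL"] = "egl"            # render offscreen on the GPU, no X server
os.environ["PYOPENGL_PLATFORM"] = "egl"

import time
import numpy as np

def profile_inference(policy, obs, warmup=5, iters=50):
    """Report mean and p95 latency in milliseconds for repeated policy calls."""
    for _ in range(warmup):
        policy(obs)
    times = []
    for _ in range(iters):
        t0 = time.perf_counter()
        policy(obs)
        times.append((time.perf_counter() - t0) * 1e3)
    times = np.asarray(times)
    print(f"mean {times.mean():.1f} ms, p95 {np.percentile(times, 95):.1f} ms")
```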
Discover scientific equations from data using LLM-guided evolutionary search (LLM-SR). Uses a multi-island algorithm with softmax-based cluster sampling, island resets, and LLM-proposed equation mutations. Use for symbolic regression and equation discovery.
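As a minimal sketch of the sampling step named above, assuming each island holds equation clusters scored by their best loss (the data layout and temperature value are illustrative):

```python
# Softmax-based cluster sampling inside one island of an LLM-SR-style search:
# clusters with lower best-loss are sampled more often, with a temperature
# controlling exploration. Data layout and temperature are assumptions.
import numpy as np

def sample_cluster(cluster_scores, temperature=0.1, rng=None):
    """cluster_scores: best (lowest) loss per cluster; returns a cluster index."""
    rng = rng or np.random.default_rng()
    scores = np.asarray(cluster_scores, dtype=float)
    logits = -scores / temperature        # lower loss -> higher logit
    logits -= logits.max()                # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return rng.choice(len(scores), p=probs)

# Example: three equation clusters in an island with best losses 0.8, 0.3, 0.31.
idx = sample_cluster([0.8, 0.3, 0.31])
```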
Formal mathematical reasoning for research papers — derive equations, write proofs, formalize problem settings, select statistical tests, and generate LaTeX math notation. Use when the user needs mathematical derivations, theorem proofs, notation tables, or statistical analysis formalization.
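A tiny LaTeX fragment of the kind of output this targets, assuming booktabs and an amsthm-style proposition environment are already set up; the symbols and the statement are placeholders, not drawn from any paper.

```latex
% Illustration only: assumes \usepackage{booktabs,amsmath,amsthm} and a
% \newtheorem{proposition}{Proposition} declaration in the preamble.
% Symbols and the statement are placeholders, not taken from any paper.
\begin{tabular}{ll}
  \toprule
  Symbol & Meaning \\
  \midrule
  $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{n}$ & observed dataset of $n$ i.i.d.\ pairs \\
  $\hat{\theta}_n$                         & maximum-likelihood estimate \\
  $\alpha$                                 & significance level of the test \\
  \bottomrule
\end{tabular}

\begin{proposition}
Under i.i.d.\ sampling from a correctly specified model,
$\hat{\theta}_n \xrightarrow{\;p\;} \theta^{*}$ as $n \to \infty$.
\end{proposition}
```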
Provides guidance for automatically evolving and optimizing AI agents across any domain using LLM-driven evolution algorithms. Use when building self-improving agents, optimizing agent prompts and skills against benchmarks, or implementing automated agent evaluation loops.
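A minimal evaluate, mutate, select loop of the sort such a run follows is sketched below; `llm_rewrite` and `benchmark_score` are hypothetical stand-ins for the LLM call and the benchmark harness, and the population sizes are arbitrary.

```python
# Minimal evaluate -> mutate -> select loop for evolving an agent's prompt/skill text.
# `llm_rewrite` and `benchmark_score` are hypothetical stand-ins for the LLM call
# and the benchmark harness; population size and generation count are arbitrary.
import random

def evolve(seed_prompt, llm_rewrite, benchmark_score, generations=10, pop_size=8):
    population = [seed_prompt]
    for _ in range(generations):
        # Mutate: ask the LLM to rewrite sampled survivors.
        parents = random.choices(population, k=pop_size)
        children = [llm_rewrite(p) for p in parents]
        # Evaluate: score every candidate on the benchmark.
        scored = sorted(population + children, key=benchmark_score, reverse=True)
        # Select: keep the top performers for the next generation.
        population = scored[: max(2, pop_size // 2)]
    return population[0]
```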
Use for web search, research, RAG, grounding, browsing, lookups, fact-checking, documentation, and agentic AI. All-in-one and optimized for AI agents: pre-extracted, token-budgeted web content, deep research, news, images, videos, places, and custom ranking.
Use this skill for "review this paper", "review this manuscript", "peer review", "review my paper", "critique this manuscript", "review this submission", "give me feedback on my paper", "check my methods", "review my statistics", "review as a peer reviewer", "evaluate this manuscript", "review this PDF", or when the user mentions manuscript review, peer review, paper critique, or methodological review.
Compiles any research input — PDF papers, GitHub repositories, experiment logs, code directories, or raw notes — into a complete Agent-Native Research Artifact (ARA) with cognitive layer (claims, concepts, heuristics), physical layer (configs, code stubs), exploration graph, and grounded evidence. Use when ingesting a paper or codebase into a structured, machine-executable knowledge package, building an ARA from scratch, or converting research outputs into a falsifiable, agent-traversable form.
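One way the two layers, graph, and evidence could be laid out is sketched below with plain dataclasses; the field names mirror the description but are assumptions, not the actual ARA schema.

```python
# Illustrative layout of an Agent-Native Research Artifact as plain dataclasses.
# Field names mirror the description (cognitive layer, physical layer, exploration
# graph, evidence) but are assumptions, not the actual ARA schema.
from dataclasses import dataclass, field

@dataclass
class CognitiveLayer:
    claims: list[str] = field(default_factory=list)         # falsifiable statements
    concepts: dict[str, str] = field(default_factory=dict)  # term -> definition
    heuristics: list[str] = field(default_factory=list)     # rules of thumb

@dataclass
class PhysicalLayer:
    configs: dict[str, dict] = field(default_factory=dict)     # name -> config dict
    code_stubs: dict[str, str] = field(default_factory=dict)   # path -> stub source

@dataclass
class ResearchArtifact:
    cognitive: CognitiveLayer
    physical: PhysicalLayer
    exploration_graph: list[tuple[str, str]] = field(default_factory=list)  # edges
    evidence: dict[str, str] = field(default_factory=dict)  # claim -> source quote
```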
This skill should be used when executing the epic-dev workflow, creating epic branches, managing sprint phases, working with git worktrees for phased feature development, or when the user mentions "epic workflow", "sprint phases", "phased development", or "git worktree workflow".
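A hedged sketch of the worktree step of such a phased workflow, using plain git via subprocess; the branch-naming convention is an assumption, not the epic-dev standard.

```python
# Sketch of the worktree step in a phased workflow: one worktree per sprint phase,
# each on its own branch off the epic branch. The naming convention is an assumption.
import subprocess

def add_phase_worktree(epic_branch: str, phase: int, root: str = "../worktrees"):
    branch = f"{epic_branch}-phase-{phase}"
    path = f"{root}/{branch}"
    # Create a new branch for the phase and check it out in a separate worktree.
    subprocess.run(
        ["git", "worktree", "add", "-b", branch, path, epic_branch],
        check=True,
    )
    return path

# Example: ../worktrees/epic-search-phase-1 on a new branch cut from epic-search.
add_phase_worktree("epic-search", 1)
```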
Decide what an ML or AI paper should strategically sell before detailed writing or venue-specific polishing. Use this skill whenever the user has an idea, literature map, experiment results, figures, reviewer risks, or a draft and needs to choose the paper's primary contribution, claim scope, paper archetype, target audience, novelty framing, related-work boundary, title/abstract/main-figure story, or claims to avoid before using conference-writing-adapter.
Sync verified experiment results from the code repo or a code worktree into the paper's daily experiments log and project memory. Use when results in code/docs/results, code/docs/reports, code/docs/runs, worktree docs, logs, or user-confirmed metrics should be promoted into paper-facing evidence.
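A possible shape of that promotion step, assuming flat metric JSON files under the directories named above and a markdown experiments log at an assumed path:

```python
# Sketch of the promotion step: gather metric JSON files from the result directories
# named above and append a dated entry to the paper's experiments log. The file
# layout, flat-dict metric format, and log path are assumptions about the project.
import json
from datetime import date
from pathlib import Path

RESULT_DIRS = ["code/docs/results", "code/docs/reports", "code/docs/runs"]
LOG_PATH = Path("paper/experiments_log.md")  # assumed location of the daily log

entries = []
for d in RESULT_DIRS:
    for f in sorted(Path(d).glob("**/*.json")):
        metrics = json.loads(f.read_text())  # assumed: flat dict of metric -> value
        entries.append(f"- {f}: " + ", ".join(f"{k}={v}" for k, v in metrics.items()))

if entries:
    with LOG_PATH.open("a") as log:
        log.write(f"\n## {date.today().isoformat()}\n" + "\n".join(entries) + "\n")
```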