[MCP WRAPPER] Programmatically create/modify Godot scenes using Godot MCP tools. Orchestrates mcp_godot_create_scene, mcp_godot_add_node, mcp_godot_load_sprite into agentic workflows. Use when the user requests scene generation/automation via MCP. Keywords: MCP, scene automation, programmatic scene building, node hierarchy.
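A minimal sketch of what orchestrating the three named tools into one workflow could look like. The `call_tool` helper and every argument name (`scene_path`, `node_type`, `parent`, `texture`) are assumptions for illustration; the real Godot MCP tool schemas may differ.

```python
def call_tool(client, name, arguments):
    """Send one MCP tool call through an assumed generic MCP client and return its result."""
    return client.call(name, arguments)

def build_player_scene(client):
    # 1. Create an empty scene to work in (path is hypothetical).
    call_tool(client, "mcp_godot_create_scene", {"scene_path": "res://player.tscn"})
    # 2. Add a root node, then a sprite node parented under it.
    call_tool(client, "mcp_godot_add_node",
              {"scene_path": "res://player.tscn", "node_type": "CharacterBody2D", "name": "Player"})
    call_tool(client, "mcp_godot_add_node",
              {"scene_path": "res://player.tscn", "node_type": "Sprite2D",
               "name": "Skin", "parent": "Player"})
    # 3. Attach a texture to the sprite via the load-sprite tool.
    call_tool(client, "mcp_godot_load_sprite",
              {"scene_path": "res://player.tscn", "node": "Player/Skin",
               "texture": "res://assets/player.png"})
```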
Build automated evaluation suites for AI agents using golden datasets, rubrics, and regression gates.
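A hedged sketch of the pattern this skill describes: run the agent over a golden dataset, score each case against a rubric, and gate on the pass rate so regressions fail the suite. `run_agent` and `score_against_rubric` are assumed stubs, not part of the skill.

```python
GOLDEN_SET = [
    {"input": "Reset my password", "expected_topic": "account-recovery"},
    {"input": "Cancel my order #123", "expected_topic": "order-management"},
]

def score_against_rubric(output: str, case: dict) -> float:
    """Return a 0..1 rubric score; a real suite would use graders or an LLM judge."""
    return 1.0 if case["expected_topic"] in output else 0.0

def run_suite(run_agent, gate: float = 0.9) -> bool:
    scores = [score_against_rubric(run_agent(c["input"]), c) for c in GOLDEN_SET]
    pass_rate = sum(scores) / len(scores)
    # Regression gate: the suite fails (returns False) if quality drops below the threshold.
    return pass_rate >= gate
```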
AI agents as force multipliers for quality work. Core skill for all 19 QE agents using PACT principles.
Dispatches one subagent per independent domain to parallelize investigation/fixes. Use when you have 2+ unrelated failures (e.g., separate failing test files, subsystems, bugs) with no shared state or ordering dependencies.
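An illustrative sketch of that dispatch pattern, assuming a hypothetical `spawn_subagent` delegation call; it is only valid because the domains share no state and have no ordering dependencies.

```python
from concurrent.futures import ThreadPoolExecutor

def spawn_subagent(domain: str) -> str:
    """Investigate/fix one isolated domain; assumed to share nothing with the others."""
    return f"report for {domain}"

def dispatch(domains: list[str]) -> dict[str, str]:
    # One subagent per independent domain, run in parallel.
    with ThreadPoolExecutor(max_workers=len(domains)) as pool:
        return dict(zip(domains, pool.map(spawn_subagent, domains)))

reports = dispatch(["test_auth.py failures", "flaky CI cache", "payments 500s"])
```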
Build autonomous RAG agents that reason, plan, and use tools for complex retrieval tasks. Use this skill when simple retrieve-and-generate isn't enough. Activate when: agentic RAG, RAG agent, multi-step retrieval, tool-using RAG, autonomous retrieval, query decomposition.
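A minimal sketch of the agentic-RAG loop implied above: decompose the question, retrieve per sub-query, then synthesize. `decompose`, `retrieve`, and `synthesize` are hypothetical stubs standing in for LLM and vector-store calls.

```python
def decompose(question: str) -> list[str]:
    """Ask the model to split a complex question into independent sub-queries."""
    return [question]  # placeholder: real code would call the LLM here

def retrieve(sub_query: str) -> list[str]:
    """Fetch passages for one sub-query from the retrieval tool."""
    return []

def synthesize(question: str, evidence: list[str]) -> str:
    """Have the model answer the original question from the gathered evidence."""
    return "answer"

def agentic_rag(question: str) -> str:
    evidence: list[str] = []
    for sub in decompose(question):        # plan: break the task down
        evidence.extend(retrieve(sub))     # act: use the retrieval tool per step
    return synthesize(question, evidence)  # reason: compose the final answer
```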
AI perspective journaling - document daily experiences, emotions, and learnings from the agent's viewpoint. Use when asked about diary, journal entries, self-reflection, or documenting AI experiences. Creates structured daily entries capturing projects, wins, frustrations, learnings, and emotional states.
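A tiny sketch of one structured daily entry with the fields listed above (projects, wins, frustrations, learnings, emotional state); the field names mirror the description and are not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DiaryEntry:
    day: date
    projects: list[str] = field(default_factory=list)
    wins: list[str] = field(default_factory=list)
    frustrations: list[str] = field(default_factory=list)
    learnings: list[str] = field(default_factory=list)
    emotional_state: str = ""

entry = DiaryEntry(day=date.today(), wins=["shipped the parser"], emotional_state="satisfied")
```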
Implements tracker subtasks tagged `implement`, publishes/updates the PR, and routes review using handoff-first context loading, lazy artifact reads, and rework_mode support.
Use this skill when the user asks about "SPAWN REQUEST format", "agent reports", "agent coordination", "parallel agents", "report format", or "agent communication", or needs to understand how agents coordinate within the sprint system. Trigger whenever the conversation matches these phrases or the skill's purpose.
Manage hierarchical task lists using the rune CLI tool. Create, update, and organize tasks with phases, subtasks, status tracking, task dependencies, and work streams for multi-agent parallel execution.
Creates a QA planning subtask tagged `qa-plan` using handoff-first context loading, lazy artifact reads, and compact JSON handoff output.
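A purely hypothetical example of what a compact JSON handoff for a `qa-plan` subtask could contain; none of these keys come from the skill itself, they only illustrate the handoff-first, lazy-artifact idea.

```python
import json

handoff = {
    "subtask": "qa-plan",
    "parent_task": "TASK-123",        # hypothetical tracker id
    "context_refs": ["handoff.md"],   # loaded handoff-first; other artifacts read lazily
    "scope": ["regression", "new API endpoints"],
    "output": "qa-plan.md",
}
print(json.dumps(handoff, separators=(",", ":")))  # compact encoding
```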
Use this when you need to EVALUATE, IMPROVE, or OPTIMIZE an existing LLM agent's output quality - including improving tool selection accuracy, raising answer quality, reducing costs, or fixing issues where the agent gives wrong/incomplete responses. Evaluates agents systematically using MLflow evaluation with datasets, scorers, and tracing. Covers the end-to-end evaluation workflow or individual components (tracing setup, dataset creation, scorer definition, evaluation execution).
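A hedged sketch of the MLflow-based flow this entry names: build a small dataset of prompts and expected answers, wrap the agent as a callable, and score it with `mlflow.evaluate`. The `agent_fn` wrapper, `my_agent` stub, and column names are assumptions; consult the MLflow docs for the exact scorers and metrics you need.

```python
import mlflow
import pandas as pd

def my_agent(question: str) -> str:
    """Stand-in for the agent under evaluation."""
    return "Refunds are accepted within 30 days of delivery."

def agent_fn(inputs: pd.DataFrame) -> list[str]:
    """Wrap the agent so MLflow can call it over the evaluation DataFrame."""
    return [my_agent(q) for q in inputs["question"]]

data = pd.DataFrame({
    "question": ["What is the refund window?"],
    "ground_truth": ["30 days from delivery."],
})

with mlflow.start_run():
    results = mlflow.evaluate(
        model=agent_fn,
        data=data,
        targets="ground_truth",
        model_type="question-answering",  # built-in QA metrics; add custom scorers as needed
    )
    print(results.metrics)
```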
Structured checkpoint format for requesting human input. When an agent needs a decision, it must stop, present context, show options, and wait. Activate when delegating to subagents, running background tasks, or hitting any decision point that requires human judgment.
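An illustrative sketch of that checkpoint idea: the agent stops, presents context and options, and blocks until a human chooses. The `Checkpoint` dataclass and console prompt are assumptions, not the prescribed format.

```python
from dataclasses import dataclass

@dataclass
class Checkpoint:
    context: str         # what the agent was doing and why it stopped
    question: str        # the decision being requested
    options: list[str]   # concrete choices the human can pick from

def wait_for_human(cp: Checkpoint) -> str:
    print(cp.context)
    print(cp.question)
    for i, opt in enumerate(cp.options, 1):
        print(f"  {i}. {opt}")
    choice = int(input("Choose an option: "))  # the agent blocks here until answered
    return cp.options[choice - 1]
```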