Evaluates and optimizes agent skills using a DSPy-powered GEPA (Genetic-Pareto) loop. Loads scenario YAML files as DSPy datasets, scores outputs with pattern-matching metrics, and optimizes prompts via the BootstrapFewShot or MIPROv2 teleprompters. Also generates new scenario YAML files from skill descriptions.
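A minimal sketch of that pipeline, assuming scenario YAML rows with `input` and `expected_pattern` keys (those field names are hypothetical):

```python
import re
import yaml
import dspy
from dspy.teleprompt import BootstrapFewShot

def load_scenarios(path):
    # Turn each YAML row into a dspy.Example with "question" as the input field.
    with open(path) as f:
        rows = yaml.safe_load(f)
    return [
        dspy.Example(question=r["input"], expected_pattern=r["expected_pattern"])
        .with_inputs("question")
        for r in rows
    ]

def pattern_metric(example, pred, trace=None):
    # Pattern-matching metric: pass if the output matches the expected regex.
    return re.search(example.expected_pattern, pred.answer) is not None

program = dspy.Predict("question -> answer")
optimizer = BootstrapFewShot(metric=pattern_metric, max_bootstrapped_demos=4)
compiled = optimizer.compile(program, trainset=load_scenarios("scenarios.yaml"))
```

A regex metric keeps scoring deterministic, so the demos BootstrapFewShot selects are reproducible across runs.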
Core DSPy framework guidance — signatures, modules, programs, compilation, and testing. Use when creating DSPy signatures, building modules, compiling programs, or learning DSPy fundamentals.
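A minimal sketch of that core pattern (a signature declares the I/O contract, a module composes predictors, the program is callable), assuming DSPy 2.5+ typed fields; the model name is only illustrative:

```python
import dspy

class Summarize(dspy.Signature):
    """Summarize a document in one sentence."""
    document: str = dspy.InputField()
    summary: str = dspy.OutputField()

class SummaryProgram(dspy.Module):
    def __init__(self):
        super().__init__()
        # ChainOfThought adds a reasoning step before producing the summary.
        self.summarize = dspy.ChainOfThought(Summarize)

    def forward(self, document: str):
        return self.summarize(document=document)

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # illustrative model choice
print(SummaryProgram()(document="DSPy separates program logic from prompt text.").summary)
```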
DSPy optimization workflows — teleprompters, metrics, evaluation, and compilation strategies. Use when optimizing DSPy programs with BootstrapFewShot, MIPROv2, or custom metrics.
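A minimal sketch of an evaluate-then-optimize workflow; the inline dataset is a placeholder, and exact MIPROv2 keyword arguments vary by DSPy version:

```python
import dspy
from dspy.evaluate import Evaluate
from dspy.teleprompt import MIPROv2

# Tiny placeholder dataset; use a real train/dev split in practice.
trainset = [
    dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question"),
    dspy.Example(question="Capital of France?", answer="Paris").with_inputs("question"),
]
devset = trainset

def metric(example, pred, trace=None):
    # Loose containment check; swap in whatever correctness test fits the task.
    return example.answer.lower() in pred.answer.lower()

program = dspy.ChainOfThought("question -> answer")  # assumes dspy.configure(lm=...) was called

# Score the unoptimized program first so the optimized score has a baseline.
evaluate = Evaluate(devset=devset, metric=metric, num_threads=4, display_progress=True)
baseline = evaluate(program)

optimizer = MIPROv2(metric=metric, auto="light")  # auto picks conservative search settings
optimized = optimizer.compile(program, trainset=trainset)
```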
Universal text artifact optimizer using GEPA's optimize_anything API for code, prompts, agent architectures, configs, and more.
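A conceptual stand-in for that loop, not the gepa package's actual optimize_anything signature (take that from its docs): propose a rewrite of the artifact, score it with any scalar evaluator, keep the best candidate. Every name below is hypothetical.

```python
import random

def propose(artifact: str) -> str:
    # Placeholder proposer; a real loop would ask an LLM to rewrite the
    # artifact, reflecting on past failures.
    return artifact + random.choice([" Be concise.", " Cite sources."])

def score(artifact: str) -> float:
    # Placeholder metric; any scalar evaluator works (tests, judges, regexes).
    words = artifact.split()
    return len(set(words)) / max(len(words), 1)

best = "Answer the user's question."
best_score = score(best)
for _ in range(20):
    candidate = propose(best)
    s = score(candidate)
    if s > best_score:
        best, best_score = candidate, s
print(best_score, best)
```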
Break a failing complex AI task into reliable subtasks. Use when your AI works on simple inputs but fails on complex ones, extraction misses items in long documents, accuracy degrades as input grows, AI conflates multiple things at once, results are inconsistent across input types, you need to chunk long text for processing, or you want to split one unreliable AI step into multiple reliable ones.
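A minimal sketch of that decomposition in DSPy, assuming a fixed-size character chunker (class and field names are illustrative): one extraction call per chunk stays reliable where a single call over the whole document misses items.

```python
import dspy

class ExtractFromChunk(dspy.Signature):
    """Extract action items from one chunk of a document."""
    chunk: str = dspy.InputField()
    items: list[str] = dspy.OutputField()

class ExtractActionItems(dspy.Module):
    """Chunk a long document, extract per chunk, then merge the results."""
    def __init__(self, chunk_chars: int = 2000):
        super().__init__()
        self.chunk_chars = chunk_chars
        self.extract = dspy.Predict(ExtractFromChunk)

    def forward(self, document: str):
        chunks = [document[i:i + self.chunk_chars]
                  for i in range(0, len(document), self.chunk_chars)]
        items: list[str] = []
        for chunk in chunks:
            # Each call sees a short, bounded input, so accuracy no longer
            # degrades with total document length.
            items.extend(self.extract(chunk=chunk).items)
        return dspy.Prediction(items=items)
```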
Score, grade, or evaluate things using AI against a rubric. Use when grading essays, scoring code reviews, rating candidate responses, auditing support quality, evaluating compliance, building a quality rubric, running QA checks against criteria, assessing performance, rating content quality, or any task where you need numeric scores with justifications — not just categories.
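A minimal sketch of a rubric judge as a DSPy signature; the rubric text and response are placeholders:

```python
import dspy

class RubricScore(dspy.Signature):
    """Score a response against a rubric, with a written justification."""
    rubric: str = dspy.InputField()
    response: str = dspy.InputField()
    justification: str = dspy.OutputField()
    score: int = dspy.OutputField(desc="integer from 1 (worst) to 5 (best)")

judge = dspy.ChainOfThought(RubricScore)
result = judge(
    rubric="5 = every claim cites a source; 1 = unsupported claims throughout",
    response="Revenue grew 12% in Q3 (quarterly report, p. 4).",
)
print(result.score, "-", result.justification)
```

Declaring `justification` before `score` nudges the judge to explain its reasoning before committing to a number.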
Stop your AI from making things up. Use when your AI hallucinates, fabricates facts, isn't grounded in real data, doesn't cite sources, makes unsupported claims, or you need to verify AI responses against source material. Covers citation enforcement, faithfulness verification, grounding via retrieval, and confidence thresholds.
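A minimal sketch of a faithfulness verifier in DSPy, assuming the answer and its retrieved context are both available (the example strings are placeholders):

```python
import dspy

class Faithfulness(dspy.Signature):
    """Flag answer claims that are not supported by the given context."""
    context: str = dspy.InputField()
    answer: str = dspy.InputField()
    unsupported_claims: list[str] = dspy.OutputField()
    faithful: bool = dspy.OutputField()

verify = dspy.ChainOfThought(Faithfulness)
check = verify(
    context="The Q3 report states revenue grew 12%.",
    answer="Revenue grew 20% in Q3.",
)
if not check.faithful:
    # Gate the response: retry generation, retrieve more context, or abstain.
    print("Unsupported:", check.unsupported_claims)
```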
Track which optimization experiment was best. Use when you've run multiple optimization passes, need to compare experiments, want to reproduce past results, need to pick the best prompt configuration, track experiment costs, manage optimization artifacts, decide which optimized program to deploy, or justify your choice to stakeholders. Covers experiment logging, comparison, and promotion to production.
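A minimal sketch of the logging-and-promotion idea using a plain JSONL file; the record fields are assumptions, not a prescribed schema:

```python
import json
import time
from pathlib import Path

LOG = Path("experiments.jsonl")

def log_run(name: str, score: float, cost_usd: float, artifact_path: str):
    # Append one record per optimization pass so runs stay comparable.
    record = {"name": name, "score": score, "cost_usd": cost_usd,
              "artifact": artifact_path, "ts": time.time()}
    with LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")

def best_run() -> dict:
    # Pick the highest-scoring pass; its artifact is the one to promote.
    runs = [json.loads(line) for line in LOG.read_text().splitlines()]
    return max(runs, key=lambda r: r["score"])
```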
Auto-sort, categorize, or label content using AI. Use when sorting tickets into categories, auto-tagging content, labeling emails, detecting sentiment, routing messages to the right team, triaging support requests, building a spam filter, intent detection, topic classification, or any task where text goes in and a category comes out.
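A minimal sketch of a ticket router as a DSPy classifier; the team labels are illustrative:

```python
from typing import Literal

import dspy

class RouteTicket(dspy.Signature):
    """Route a support ticket to the right team."""
    ticket: str = dspy.InputField()
    team: Literal["billing", "bug", "account", "other"] = dspy.OutputField()

router = dspy.Predict(RouteTicket)
print(router(ticket="I was charged twice this month.").team)  # expect "billing"
```

Constraining the output with `Literal` keeps predictions inside the label set instead of free-form text.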
Automatically intercepts and optimizes prompts using the prompt-learning MCP server. Learns from performance over time via an embedding-indexed history. Uses APE, OPRO, and DSPy patterns. Activates on "optimize prompt", "improve this prompt", "prompt engineering", or ANY complex task request. Requires the prompt-learning MCP server. NOT for simple questions (just answer them), NOT for direct commands (just execute them), NOT for conversational responses (no optimization needed).
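A minimal sketch of the embedding-indexed history idea, using a toy bag-of-words embedding as a stand-in; the real skill delegates storage and retrieval to the MCP server, and every name below is hypothetical:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding; a real index would use a sentence-embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

history: list[tuple[Counter, str, float]] = []  # (embedding, optimized_prompt, score)

def record(prompt: str, optimized: str, score: float):
    history.append((embed(prompt), optimized, score))

def best_precedent(prompt: str):
    # Reuse the optimization of the most similar past prompt, breaking
    # ties by its recorded performance score.
    if not history:
        return None
    return max(history, key=lambda h: (cosine(embed(prompt), h[0]), h[2]))[1]
```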
Use this skill when you need to QA-audit and fix a plugin skill file. Provides a methodology for verifying skill content against official documentation, fixing issues in place, and producing verification reports.
Look up error codes, status codes, and system messages. Use when the user asks about error codes, status meanings, or needs to decode system messages.
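A minimal sketch of the lookup pattern, using illustrative HTTP status codes rather than the skill's actual dataset:

```python
# Hypothetical local table; a real skill would load its reference data.
ERROR_CODES = {
    404: "Not Found: the requested resource does not exist.",
    429: "Too Many Requests: the client is being rate limited.",
    503: "Service Unavailable: the server cannot handle the request right now.",
}

def decode(code: int) -> str:
    # Fall back to a pointer at vendor docs rather than guessing a meaning.
    return ERROR_CODES.get(code, f"Unknown code {code}; check the vendor documentation.")

print(decode(429))
```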