Found 30 Skills
Attach judges to AI Config variations for automatic LLM-as-a-judge evaluation. Create custom judges, configure sampling rates, and monitor quality scores.
Configure AI Config targeting rules to control which variations are served to different users. Enable percentage rollouts, attribute-based rules, segment targeting, and guarded rollouts.
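Targeting rules live in the LaunchDarkly dashboard, but the evaluation call shows where they bite. A minimal TypeScript sketch, assuming the @launchdarkly/server-sdk-ai Node package; the config key 'support-assistant' and the context attributes are made-up examples:

```typescript
import { init } from '@launchdarkly/node-server-sdk';
import { initAi } from '@launchdarkly/server-sdk-ai';

const ldClient = init(process.env.LAUNCHDARKLY_SDK_KEY!);
await ldClient.waitForInitialization({ timeout: 10 });
const aiClient = initAi(ldClient);

// Attribute-based rules, segments, and percentage rollouts all key off
// this context: two users with different attributes can receive
// different variations from the same call.
const context = {
  kind: 'user',
  key: 'user-42',
  plan: 'enterprise',
  country: 'DE',
};

const config = await aiClient.config(
  'support-assistant',  // AI Config key (hypothetical)
  context,
  { enabled: false },   // fallback if no rule matches or the SDK is offline
);

if (config.enabled) {
  console.log(config.model?.name); // e.g. the model a guarded rollout is serving
}
```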
Instrument an existing codebase with LaunchDarkly AI Config tracking, walking the four-tier ladder (managed runner → provider package → custom extractor + trackMetricsOf → raw manual) and picking the lowest-ceremony option that still captures duration, tokens, and success/error.
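For reference, a minimal sketch of the bottom "raw manual" rung, reusing the aiClient and context from the sketch above; the config key 'chat-config' and the OpenAI call are illustrative:

```typescript
import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

const { tracker, messages, model, enabled } = await aiClient.config(
  'chat-config',
  context,
  { enabled: false },
);

if (enabled) {
  const start = Date.now();
  try {
    const res = await openai.chat.completions.create({
      model: model?.name ?? 'gpt-4o-mini',
      messages: (messages ?? []) as OpenAI.ChatCompletionMessageParam[],
    });
    tracker.trackDuration(Date.now() - start); // latency in ms
    if (res.usage) {
      tracker.trackTokens({
        total: res.usage.total_tokens,
        input: res.usage.prompt_tokens,
        output: res.usage.completion_tokens,
      });
    }
    tracker.trackSuccess();
  } catch (err) {
    tracker.trackError();
    throw err;
  }
}
```

The higher rungs wrap the provider call instead (e.g. tracker.trackOpenAIMetrics in this SDK) and collapse most of this ceremony into one line.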
Migrate an application with hardcoded LLM prompts to a full LaunchDarkly AI Configs implementation in five stages: extract prompts, wrap in the AI SDK, add tools, add tracking, add evals/judges. Use when the user wants to externalize model/prompt configuration, move from direct provider calls (OpenAI, Anthropic, Bedrock, Gemini) to a managed AI Config, or stage a full hardcoded-to-LaunchDarkly migration.
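A compressed sketch of what stages one and two look like at a call site, reusing the openai and aiClient instances from the sketches above; the 'assistant' key, customerName variable, and the question/user bindings are placeholders:

```typescript
// Stage 0 — hardcoded: prompt and model pinned at the call site.
const before = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are a helpful support assistant.' },
    { role: 'user', content: question },
  ],
});

// Stages 1–2 — prompt and model now live in the AI Config; the code
// only supplies runtime variables for template interpolation.
const { messages, model } = await aiClient.config(
  'assistant',
  context,
  { enabled: false },
  { customerName: user.name }, // interpolated into the managed prompt
);
const after = await openai.chat.completions.create({
  model: model?.name ?? 'gpt-4o',
  messages: (messages ?? []) as OpenAI.ChatCompletionMessageParam[],
});
```

The later stages layer tool definitions, tracker calls, and judge attachments on top without reverting the prompt to the call site.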
Sets up or repairs the AGENTS.md source-of-truth pattern for any project. Creates a well-structured AGENTS.md with real stack info auto-detected from the project, then wires all AI config satellites (.claude/CLAUDE.md, .github/copilot-instructions.md, .agents/rules/, MEMORY.md) to point to it. Eliminates duplication. Always runs in plan mode — asks before acting. Use this skill whenever the user mentions AGENTS.md, agent config, source of truth for AI rules, setting up Claude/Copilot/Cursor for a project, fixing duplicate AI instructions, or consolidating AI configuration files. Trigger even if the user just says "set up agents" or "fix my AI config".
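The wiring described reduces every satellite to a one-line pointer; an illustrative layout (file names from the description above, contents hypothetical):

```
AGENTS.md                        # single source of truth: stack, rules, conventions
.claude/CLAUDE.md                # "See AGENTS.md for all project rules."
.github/copilot-instructions.md  # same one-line pointer
.agents/rules/                   # same
MEMORY.md                        # same
```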
Configure secret stores in Spice (environment variables, Kubernetes, AWS Secrets Manager, keyring). Use when asked to "configure secrets", "add API keys", "set up credentials", "manage passwords", "use environment variables", or "configure .env file".
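Spice reads secret-store configuration from the spicepod; a minimal sketch assuming the documented env store, with the Postgres dataset and parameter names as illustrative placeholders:

```yaml
# spicepod.yaml — hedged sketch; the env store reads from environment
# variables (and a .env file, if present).
version: v1
kind: Spicepod
name: my_app

secrets:
  - from: env
    name: env

datasets:
  - from: postgres:orders
    name: orders
    params:
      pg_pass: ${env:PG_PASS} # resolved from the store named "env"
```

Swapping `from: env` for the kubernetes, keyring, or aws_secrets_manager stores changes where `${...}` references resolve, not how datasets consume them.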