Found 12 Skills
Master advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability in production. Use when optimizing prompts, improving LLM outputs, or designing production prompt templates.
Use when designing prompts for LLMs, optimizing model performance, building evaluation frameworks, or implementing advanced prompting techniques like chain-of-thought, few-shot learning, or structured outputs.
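A minimal, provider-agnostic sketch of the few-shot chain-of-thought pattern this skill covers; the task, example, and delimiters are illustrative, not part of the skill itself:

```python
# Few-shot chain-of-thought prompt template (illustrative example).
# The model is shown one worked Q/Reasoning/Answer triple, then asked
# to continue the same pattern for a new question.
FEW_SHOT_COT = """You are a careful math assistant. Think step by step,
then give the final answer on its own line prefixed with "Answer:".

Q: A train travels 60 km in 1.5 hours. What is its average speed?
Reasoning: Speed = distance / time = 60 km / 1.5 h = 40 km/h.
Answer: 40 km/h

Q: {question}
Reasoning:"""

def build_prompt(question: str) -> str:
    """Fill the template; the model continues the reasoning pattern."""
    return FEW_SHOT_COT.format(question=question)

print(build_prompt("A car covers 150 km in 2.5 hours. Average speed?"))
```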
Build autonomous AI agents with the Claude Agent SDK. Structured outputs guarantee JSON schema validation, and a plugin system plus hooks support event-driven workflows. Prevents 14 documented errors. Use when: building coding agents, SRE systems, or security auditors, or troubleshooting "CLI not found" errors, structured-output validation failures, session-forking errors, MCP config issues, or subagent cleanup.
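A hedged sketch of the agent loop this SDK exposes; the names `query` and `ClaudeAgentOptions` follow recent SDK docs, but verify them against the version you install:

```python
# Sketch of driving an agent with the Claude Agent SDK (assumed API;
# check your installed version's documentation for exact names).
import asyncio
from claude_agent_sdk import query, ClaudeAgentOptions

async def main() -> None:
    options = ClaudeAgentOptions(
        system_prompt="You are a cautious code-review agent.",
        max_turns=3,  # keep the agent loop bounded
    )
    # `query` yields messages as the agent works; print each one.
    async for message in query(prompt="List the TODOs in main.py", options=options):
        print(message)

asyncio.run(main())
```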
Build with the Claude Messages API using structured outputs for guaranteed JSON schema validation. Covers prompt caching (up to 90% cost savings), streaming SSE, tool use, and model deprecations. Prevents 16 documented errors. Use when: building chatbots/agents, or troubleshooting rate_limit_error, prompt caching issues, streaming SSE parsing errors, MCP timeout issues, or structured-output hallucinations.
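A minimal sketch of a Messages API call with prompt caching, following Anthropic's documented `cache_control` block syntax; the model ID is an assumption, so substitute a current one:

```python
# Messages API call with an ephemeral cache point on the system prompt,
# so repeated calls reuse the long context at the cached-read rate.
import anthropic

LONG_CONTEXT = "...large reference document pasted here..."

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumption; check the current model list
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": LONG_CONTEXT,
            "cache_control": {"type": "ephemeral"},  # marks the cache boundary
        }
    ],
    messages=[{"role": "user", "content": "Summarize the document."}],
)
print(response.content[0].text)
```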
Create PydanticAI agents with type-safe dependencies, structured outputs, and proper configuration. Use when building AI agents, creating chat systems, or integrating LLMs with Pydantic validation.
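A minimal PydanticAI sketch of a typed agent with a validated structured output; note that current releases use `output_type` and `result.output`, while older ones used `result_type` and `result.data`, so check your installed version:

```python
# A PydanticAI agent whose responses are validated against a Pydantic model.
from pydantic import BaseModel
from pydantic_ai import Agent

class CityInfo(BaseModel):
    city: str
    country: str

agent = Agent(
    "openai:gpt-4o",       # any supported model string works here
    output_type=CityInfo,  # responses are parsed and validated as CityInfo
)

result = agent.run_sync("Where were the 2012 Olympics held?")
print(result.output)  # e.g. CityInfo(city='London', country='United Kingdom')
```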
Prompt design for LLMs: system prompts, few-shot examples, chain-of-thought, RAG, structured outputs.
AI content generation with OpenAI and Claude, callAIWithPrompt usage, prompt storage in app_settings, structured outputs, response format validation, multi-criteria scoring, rate limiting, JSON schema, and AI API best practices. Use when generating content, creating prompts, scoring articles, or working with OpenAI/Claude APIs.
Integrate Parallel's Monitor API. Use when building applications with the Parallel Monitor API.
Engineer effective LLM prompts using zero-shot, few-shot, chain-of-thought, and structured output techniques. Use when building LLM applications requiring reliable outputs, implementing RAG systems, creating AI agents, or optimizing prompt quality and cost. Covers OpenAI, Anthropic, and open-source models with multi-language examples (Python/TypeScript).
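A short few-shot classification sketch with the OpenAI v1 Python client, one of the providers this skill covers; the model name and examples are illustrative assumptions:

```python
# Few-shot prompting: in-context examples steer the label set and format.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system", "content": "Classify sentiment as positive or negative."},
    # Few-shot examples demonstrating the expected output.
    {"role": "user", "content": "The battery lasts forever."},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "The screen cracked on day one."},
    {"role": "assistant", "content": "negative"},
    # The actual input to classify.
    {"role": "user", "content": "Shipping was quick and painless."},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)  # expected: "positive"
```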
Integrate Perplexity API for web-grounded AI responses and search. Covers Sonar models, Search API, SDK usage (Python/TypeScript), streaming, structured outputs, filters, media attachments, Pro Search, and prompting. Keywords: Perplexity, Sonar, sonar-pro, sonar-reasoning-pro, sonar-deep-research, web search API, grounded LLM, chat completions, perplexityai SDK, image attachments, PDF analysis.
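A hedged sketch of a web-grounded Sonar call via Perplexity's OpenAI-compatible endpoint; the base URL and `sonar-pro` model name follow Perplexity's docs at the time of writing, so verify both:

```python
# Perplexity chat completion through the OpenAI-compatible client.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["PERPLEXITY_API_KEY"],  # set this in your environment
    base_url="https://api.perplexity.ai",
)

response = client.chat.completions.create(
    model="sonar-pro",  # web-grounded chat model
    messages=[{"role": "user", "content": "What changed in Python 3.13?"}],
)
print(response.choices[0].message.content)
```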
Advanced Gemini 3 Pro features including function calling, built-in tools (Google Search, Code Execution, File Search, URL Context), structured outputs, thought signatures, context caching, batch processing, and framework integration. Use when implementing tools, function calling, structured JSON output, context caching, batch API, LangChain, Vercel AI, or production features.
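A hedged sketch of structured JSON output with the google-genai SDK; the model ID below is a placeholder, so substitute the Gemini 3 Pro ID your account exposes, and set GEMINI_API_KEY (or GOOGLE_API_KEY) in the environment:

```python
# Structured output: the SDK constrains and parses the response
# against a Pydantic schema.
from pydantic import BaseModel
from google import genai

class Recipe(BaseModel):
    name: str
    minutes: int

client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.5-pro",  # placeholder; use your Gemini 3 Pro model ID
    contents="Give me one quick pasta recipe.",
    config={
        "response_mime_type": "application/json",
        "response_schema": Recipe,  # parse/validate against this schema
    },
)
print(response.parsed)  # a Recipe instance when parsing succeeds
```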
Expert guidance for building production-grade AI agents and workflows using Pydantic AI (the `pydantic_ai` Python library). Use this skill whenever the user is: writing, debugging, or reviewing any Pydantic AI code; asking how to build AI agents in Python with Pydantic; asking about Agent, RunContext, tools, dependencies, structured outputs, streaming, multi-agent patterns, MCP integration, or testing with Pydantic AI; or migrating from LangChain/LlamaIndex to Pydantic AI. Trigger even for vague requests like "help me build an AI agent in Python" or "how do I add tools to my LLM app" — Pydantic AI is very likely what they need.
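A sketch of the tools-plus-dependencies pattern this skill covers: `@agent.tool` receives a `RunContext` carrying the declared deps. The deps type and tool here are illustrative, and result accessors may differ slightly across PydanticAI versions:

```python
# Registering a typed tool on a PydanticAI agent.
from dataclasses import dataclass
from pydantic_ai import Agent, RunContext

@dataclass
class Deps:
    user_name: str  # anything your tools need: DB handles, clients, ...

agent = Agent("openai:gpt-4o", deps_type=Deps)

@agent.tool
def greet(ctx: RunContext[Deps]) -> str:
    """Return a personalized greeting for the current user."""
    return f"Hello, {ctx.deps.user_name}!"

result = agent.run_sync("Greet me.", deps=Deps(user_name="Ada"))
print(result.output)
```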