Found 10,111 Skills
This skill should be used when the user asks to "build an AI agent with Claude", "use the Claude Agent SDK", "integrate claude-agent-sdk into a project", "set up an autonomous agent with tools", or needs guidance on the Anthropic Claude Agent SDK best practices for Python and TypeScript.
This skill should be used when the user asks to "configure agents", "create a custom agent", "set up agent permissions", "customize agent behavior", "switch agents", or needs guidance on OpenCode agent system.
Expert guidance for building production-grade AI agents and workflows using Pydantic AI (the `pydantic_ai` Python library). Use this skill whenever the user is: writing, debugging, or reviewing any Pydantic AI code; asking how to build AI agents in Python with Pydantic; asking about Agent, RunContext, tools, dependencies, structured outputs, streaming, multi-agent patterns, MCP integration, or testing with Pydantic AI; or migrating from LangChain/LlamaIndex to Pydantic AI. Trigger even for vague requests like "help me build an AI agent in Python" or "how do I add tools to my LLM app" — Pydantic AI is very likely what they need.
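A minimal sketch of what that looks like, assuming a recent pydantic_ai release where Agent takes output_type and results expose .output (older releases used result_type and .data); the model string, dependency type, and tool are placeholders:

```python
from dataclasses import dataclass

from pydantic import BaseModel
from pydantic_ai import Agent, RunContext

@dataclass
class Deps:
    user_name: str  # injected per run, available to every tool via RunContext

class Greeting(BaseModel):
    message: str
    language: str

agent = Agent(
    "openai:gpt-4o",       # placeholder model identifier
    deps_type=Deps,
    output_type=Greeting,  # the response is validated into this model
)

@agent.tool
def get_user_name(ctx: RunContext[Deps]) -> str:
    """Return the current user's name from the run dependencies."""
    return ctx.deps.user_name

result = agent.run_sync("Greet the user by name.", deps=Deps(user_name="Ada"))
print(result.output)  # a validated Greeting instance
```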
Meta-prompting, context engineering, and spec-driven development system for autonomous long-running coding agents
Starts a voice conversation with the user via the agent-voice CLI. Use when the user invokes /voice. The user is not looking at the screen — they are listening and speaking. All agent output and input goes through voice until the conversation ends.
Symphony turns project work into isolated, autonomous implementation runs, allowing teams to manage work instead of supervising coding agents.
Orchestrate teams of parallel Claude Code sessions working on the same codebase. Handles task decomposition, agent coordination, context isolation, and merge strategies. Builds on worktree-manager for infrastructure.
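The context-isolation piece typically rests on one git worktree per session. A rough sketch of that infrastructure step in Python; the task slugs and branch-naming scheme are made up, while the git worktree command itself is standard git:

```python
import subprocess
from pathlib import Path

tasks = ["auth-refactor", "api-docs", "flaky-tests"]  # illustrative task slugs

for task in tasks:
    path = Path(".worktrees") / task
    # One isolated checkout and branch per session, so parallel agents
    # never touch each other's working tree.
    subprocess.run(
        ["git", "worktree", "add", "-b", f"agent/{task}", str(path)],
        check=True,
    )
    # Each Claude Code session would then be launched with `path` as its cwd.
```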
Headless browser automation CLI optimized for AI agents with accessibility tree snapshots and ref-based element selection
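A hypothetical driver loop for a CLI like this; the agent-browser subcommand names, the --json flag, and the @ref syntax are assumptions about the interface, not documented behavior:

```python
import json
import subprocess

def browser(*args: str) -> str:
    # Hypothetical invocation; the subcommands used below are assumed, not documented.
    result = subprocess.run(
        ["agent-browser", *args], capture_output=True, text=True, check=True
    )
    return result.stdout

browser("open", "https://example.com")
# An accessibility-tree snapshot gives the agent stable element refs
# instead of brittle CSS selectors.
snapshot = json.loads(browser("snapshot", "--json"))
print(snapshot)
# browser("click", "@ref:submit")  # ref-based selection (assumed syntax)
```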
Encodes a continuous improvement loop for goal-seeking agents: EVAL, ANALYZE, RESEARCH (hypothesis + evidence + counter-arguments), IMPROVE, RE-EVAL, DECIDE. Auto-commits improvements that score at least +2% net with no single regression over 5%, and reverts failures. Works with all 4 SDK implementations. Auto-activates on "improve agent", "self-improving loop", "agent eval loop", "benchmark agents", "run improvement cycle".
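A sketch of the DECIDE rule in that loop, with the eval, improve, commit, and revert steps passed in as hypothetical callables; the +2% and 5% thresholds come straight from the description above:

```python
def improvement_cycle(evaluate, improve, commit, revert) -> None:
    """One EVAL -> IMPROVE -> RE-EVAL -> DECIDE pass; all four helpers are hypothetical."""
    baseline = evaluate()  # EVAL: per-benchmark scores, e.g. {"taskA": 0.70, "taskB": 0.55}
    improve()              # ANALYZE, RESEARCH, IMPROVE happen inside this step
    after = evaluate()     # RE-EVAL on the same benchmarks

    base_total = sum(baseline.values())
    net_pct = (sum(after.values()) - base_total) / max(base_total, 1e-9)
    worst_regression = max(
        (baseline[k] - after.get(k, 0.0)) / max(baseline[k], 1e-9) for k in baseline
    )

    # DECIDE: keep only changes that are at least +2% net with no regression over 5%.
    if net_pct >= 0.02 and worst_regression <= 0.05:
        commit()
    else:
        revert()
```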
Search Zhihu (知乎) using agent-browser with proper authentication handling. Use when the user asks to "search zhihu", "知乎搜索" ("search Zhihu"), or "在知乎上找" ("find it on Zhihu"), or makes any Zhihu-related search request. Handles login requirements, session persistence, and common error cases such as 40362 restrictions.
Evaluate agents and skills for quality, completeness, and standards compliance using a 6-step rubric: Identify, Structural, Content, Code, Integration, Report. Use when auditing agents/skills, checking quality after creation or update, or reviewing collection health. Triggers: "evaluate", "audit", "check quality", "review agent", "score skill". Do NOT use for creating or modifying agents/skills — only for read-only assessment and scoring.
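One way to picture the rubric: a fixed sequence of scored checks rolled up into a report. A sketch; the six step names come from the description, while the 0-5 scale is an assumption:

```python
RUBRIC_STEPS = ["Identify", "Structural", "Content", "Code", "Integration", "Report"]

def audit_report(scores: dict[str, int]) -> str:
    """Render a read-only audit report; assumes each step is scored 0-5."""
    lines = [f"{step}: {scores.get(step, 0)}/5" for step in RUBRIC_STEPS]
    total = sum(scores.get(step, 0) for step in RUBRIC_STEPS)
    lines.append(f"Total: {total}/{5 * len(RUBRIC_STEPS)}")
    return "\n".join(lines)

print(audit_report({"Identify": 5, "Structural": 4, "Content": 3,
                    "Code": 4, "Integration": 2, "Report": 5}))
```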
Fresh-subagent-per-task execution with two-stage review (ADR compliance + code quality). Use when an implementation plan exists with mostly independent tasks and you want quality gates between each. Use for "execute plan", "subagent", "dispatch tasks", or multi-task implementation runs. Do NOT use for single simple tasks, tightly coupled work needing shared context, or when the user wants manual review after each task.
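The dispatch shape this implies, sketched with hypothetical spawn_subagent and reviewer callables; each task gets a fresh subagent, and the two review gates run before the next task starts:

```python
def execute_plan(tasks, spawn_subagent, review_adr, review_quality) -> None:
    """Run each task in a fresh subagent behind two review gates; helpers are hypothetical."""
    for task in tasks:
        result = spawn_subagent(task)  # fresh context per task, no carried-over state
        if not review_adr(result):     # gate 1: ADR compliance
            raise RuntimeError(f"ADR review failed for task: {task}")
        if not review_quality(result): # gate 2: code quality
            raise RuntimeError(f"Quality review failed for task: {task}")
        # Only after both gates pass does the next task get a new subagent.
```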