This skill should be used when the user asks to "create a ReAct agent", "build an agent with tools", "implement tool-calling agent", "use dspy.ReAct", mentions "agent with tools", "reasoning and acting", "multi-step agent", "agent optimization with GEPA", or needs to build production agents that use tools to solve complex tasks.
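dspy.ReAct itself is a Python API, so the following is only a language-neutral sketch of the reasoning-and-acting loop this skill builds: the model alternates Thought, Action, and Observation until it emits a final answer. The `llm` function, the prompt conventions, and the `Action: tool[input]` format are illustrative assumptions, not dspy's implementation.

```typescript
// ReAct-style loop sketch: the model thinks, optionally calls a tool,
// sees the observation, and repeats until it produces "Answer: ...".
type Tool = (input: string) => Promise<string>;

async function reactAgent(
  question: string,
  tools: Record<string, Tool>,
  llm: (prompt: string) => Promise<string>, // hypothetical completion function
  maxSteps = 5,
): Promise<string> {
  let transcript = `Question: ${question}\n`;
  for (let step = 0; step < maxSteps; step++) {
    const output = await llm(transcript + "Thought:");
    transcript += `Thought:${output}\n`;
    const answer = output.match(/Answer:\s*(.*)/s);
    if (answer) return answer[1].trim();
    // Convention assumed here: "Action: toolName[input]"
    const action = output.match(/Action:\s*(\w+)\[(.*)\]/);
    if (action && tools[action[1]]) {
      const observation = await tools[action[1]](action[2]);
      transcript += `Observation: ${observation}\n`; // feed the result back
    }
  }
  return "No answer within step budget.";
}
```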
Build AI agents with the Subconscious platform. Use when the user wants to: build an agent, create an AI agent, use Subconscious, build with TIM, create agent with tools, research agent, search agent, tool-calling agent, subconscious.dev, TIMRUN, tim, tim-edge, timini, tim-gpt, tim-gpt-heavy. Do NOT use for generic OpenAI/Anthropic/LLM tasks without Subconscious.
Use when generating or reasoning over text with Alibaba Cloud Model Studio Qwen flagship text models (`qwen3-max`, `qwen3.5-plus`, `qwen3.5-flash`, snapshots, and compatible open-source variants). Use when building chat, agent, tool-calling, or long-context text generation workflows on Model Studio.
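Model Studio exposes an OpenAI-compatible endpoint, so the flagship models above can be called with the standard `openai` client. A minimal sketch, assuming the documented compatible-mode base URL and a `DASHSCOPE_API_KEY` environment variable; swap in whichever model your account has access to.

```typescript
import OpenAI from "openai";

// Model Studio speaks the OpenAI chat-completions protocol; only the
// base URL and API key differ from a stock OpenAI setup.
const client = new OpenAI({
  apiKey: process.env.DASHSCOPE_API_KEY,
  baseURL: "https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
});

const completion = await client.chat.completions.create({
  model: "qwen3-max", // one of the flagship models listed above
  messages: [{ role: "user", content: "Summarize the ReAct pattern in two sentences." }],
});
console.log(completion.choices[0].message.content);
```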
Core patterns for AI coding agents, based on analysis of Claude Code, Codex, Cline, Aider, and OpenCode. Triggers when: building an AI coding agent or assistant, implementing tool-calling loops, managing context windows for LLMs, setting up agent memory or skill systems, or designing a multi-provider LLM abstraction. Capabilities: core agent loop with while(true) and tool execution; context management with pruning, compression, and repo maps; tool safety with sandboxing, approval flows, and doom loop detection; multi-provider abstraction with a unified API for different LLMs; memory systems with project rules, auto-memory, and skill loading; session persistence with SQLite vs JSONL patterns.
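One of the safety patterns listed, doom loop detection, can be as simple as hashing recent tool calls and halting when the agent keeps repeating itself. A minimal sketch under that assumption; the window size, threshold, and hashing scheme are illustrative and not taken from any of the analyzed agents.

```typescript
import { createHash } from "node:crypto";

// Doom-loop guard: if the agent issues the same tool call (name + args)
// too many times within a sliding window, abort instead of burning tokens.
class DoomLoopGuard {
  private recent: string[] = [];
  constructor(private window = 10, private maxRepeats = 3) {}

  check(toolName: string, args: unknown): void {
    const key = createHash("sha256")
      .update(toolName + JSON.stringify(args))
      .digest("hex");
    this.recent.push(key);
    if (this.recent.length > this.window) this.recent.shift();
    const repeats = this.recent.filter((k) => k === key).length;
    if (repeats >= this.maxRepeats) {
      throw new Error(`Doom loop: ${toolName} repeated ${repeats}x with identical arguments`);
    }
  }
}
```

Call `guard.check(name, args)` before each tool execution inside the while(true) loop; the thrown error becomes the loop's exit condition.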
Use whenever the user wants to build or modify a chat, agent, or tool-calling UI in a React 19 + Tailwind v4 project — especially if the code imports from `@/components/agent-elements/*` or the project has that folder on disk. Triggers: "agent chat", "tool call UI", "streaming chat", "plan approval", "AgentChat", "InputBar", "tool renderer", mentions of Agent Elements, or requests to add a new agent surface with shadcn. Do NOT use for plain chat UIs that don't need tool/plan/approval cards, or for projects already committed to a different agent UI kit.
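The component names below come from the trigger list above; the import path follows the `@/components/agent-elements/*` convention, and the props are guesses for illustration only, not the kit's documented API.

```tsx
// Hypothetical usage sketch: mounting an agent surface on a page.
import { AgentChat } from "@/components/agent-elements/agent-chat";

export default function SupportAgentPage() {
  return (
    <main className="mx-auto h-dvh max-w-2xl p-4">
      <AgentChat api="/api/agent" /> {/* endpoint prop is an assumption */}
    </main>
  );
}
```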
Build production-ready AI workflows using Firebase Genkit. Use when creating flows, tool-calling agents, RAG pipelines, multi-agent systems, or deploying AI to Firebase/Cloud Run. Supports TypeScript, Go, and Python with Gemini, OpenAI, Anthropic, Ollama, and Vertex AI plugins.
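A minimal TypeScript flow, assuming Genkit 1.x with the Google AI plugin; the model identifier is illustrative.

```typescript
import { genkit, z } from "genkit";
import { googleAI } from "@genkit-ai/googleai";

// Configure Genkit with one provider plugin; flows are typed, observable
// functions you can deploy to Firebase or Cloud Run.
const ai = genkit({ plugins: [googleAI()] });

export const summarizeFlow = ai.defineFlow(
  {
    name: "summarizeFlow",
    inputSchema: z.string(),
    outputSchema: z.string(),
  },
  async (text) => {
    const { text: summary } = await ai.generate({
      model: googleAI.model("gemini-2.5-flash"), // illustrative model id
      prompt: `Summarize in one paragraph:\n\n${text}`,
    });
    return summary;
  },
);
```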
This skill provides production-ready AI chat UI components built on shadcn/ui for conversational AI interfaces. Use when building ChatGPT-style chat interfaces with streaming responses, tool/function call displays, reasoning visualization, or source citations. Provides 30+ components including Message, Conversation, Response, CodeBlock, Reasoning, Tool, Actions, Sources optimized for Vercel AI SDK v5. Prevents common setup errors with Next.js App Router, Tailwind v4, shadcn/ui integration, AI SDK v5 migration, component composition patterns, voice input browser compatibility, responsive design issues, and streaming optimization. Keywords: ai-elements, vercel-ai-sdk, shadcn, chatbot, conversational-ai, streaming-ui, chat-interface, ai-chat, message-components, conversation-ui, tool-calling, reasoning-display, source-citations, markdown-streaming, function-calling, ai-responses, prompt-input, code-highlighting, web-preview, branch-navigation, thinking-display, perplexity-style, claude-artifacts
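A minimal pairing of these components with AI SDK v5's `useChat`, where streamed messages arrive as typed `parts`. Import paths assume the default shadcn install location; the component names follow the list above.

```tsx
"use client";
import { useChat } from "@ai-sdk/react";
import { Conversation, ConversationContent } from "@/components/ai-elements/conversation";
import { Message, MessageContent } from "@/components/ai-elements/message";
import { Response } from "@/components/ai-elements/response";

// Response renders streaming markdown; Message handles role-based layout.
export default function Chat() {
  const { messages } = useChat();
  return (
    <Conversation>
      <ConversationContent>
        {messages.map((m) => (
          <Message from={m.role} key={m.id}>
            <MessageContent>
              {m.parts.map((part, i) =>
                part.type === "text" ? <Response key={i}>{part.text}</Response> : null,
              )}
            </MessageContent>
          </Message>
        ))}
      </ConversationContent>
    </Conversation>
  );
}
```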
Use when integrating Foundation Models framework, implementing on-device AI with Apple Intelligence, building tool-calling AI features, working with guided generation schemas, converting models with Core ML and coremltools, or running open-source LLMs on Apple Silicon. Covers Foundation Models (LanguageModelSession, @Generable, @Guide, SystemLanguageModel, structured output, tool calling), Core ML (coremltools, model conversion, quantization, palettization, pruning, Neural Engine, MLTensor), MLX Swift (transformer inference, unified memory), and llama.cpp (GGUF, cross-platform LLM).
Complete Russian-language reference for the Ollama Web Search and Web Fetch APIs: internet search, fetching page content, Python/JS SDKs, the MCP server, and OpenClaw integration. Use this skill for any question about Ollama web search: how to set up an API key, run a search, fetch a page's content, wire up the SDK, configure the MCP server, or integrate with agents. Also use it when writing code for Ollama Search: bash scripts, Python asyncio, JS/TS clients, tool-calling agents, OpenClaw configuration. Triggers on: ollama search, ollama web search, ollama_search, ollama fetch, web_search ollama, ollama api key, ollama MCP, поиск через ollama.
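A minimal sketch of a direct call to the hosted search endpoint, assuming `https://ollama.com/api/web_search` with an `OLLAMA_API_KEY` bearer token as described in Ollama's web search docs; the response shape in the comment is an assumption.

```typescript
// Direct REST call to Ollama's hosted web search; the same endpoint
// backs the Python/JS SDKs and the MCP server mentioned above.
const res = await fetch("https://ollama.com/api/web_search", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OLLAMA_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ query: "latest ollama release notes" }),
});
if (!res.ok) throw new Error(`Search failed: ${res.status}`);
const { results } = await res.json(); // assumed: [{ title, url, content }, ...]
for (const r of results) console.log(r.title, "-", r.url);
```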
Answer questions about the AI SDK and help build AI-powered features. Use when developers: (1) Ask about AI SDK functions like generateText, streamText, ToolLoopAgent, embed, or tools, (2) Want to build AI agents, chatbots, RAG systems, or text generation features, (3) Have questions about AI providers (OpenAI, Anthropic, Google, etc.), streaming, tool calling, structured output, or embeddings, (4) Use React hooks like useChat or useCompletion. Triggers on: "AI SDK", "Vercel AI SDK", "generateText", "streamText", "add AI to my app", "build an agent", "tool calling", "structured output", "useChat".
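The core entry point is `generateText`; here is a tool-calling sketch, assuming AI SDK v5 (where `inputSchema` replaced v4's `parameters`) and the OpenAI provider. The weather tool is a stub for illustration.

```typescript
import { generateText, tool, stepCountIs } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// generateText runs a tool-calling loop: the model may call `weather`,
// the SDK executes it, and generation continues with the result.
const { text } = await generateText({
  model: openai("gpt-4o"),
  tools: {
    weather: tool({
      description: "Get the current weather for a city",
      inputSchema: z.object({ city: z.string() }),
      execute: async ({ city }) => ({ city, tempC: 18 }), // stub implementation
    }),
  },
  stopWhen: stepCountIs(5), // allow up to 5 model/tool steps
  prompt: "What is the weather in Amsterdam?",
});
console.log(text);
```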
Expert in designing and building autonomous AI agents. Masters tool use, memory systems, planning strategies, and multi-agent orchestration. Use when: build agent, AI agent, autonomous agent, tool use, function calling.
Build voice agents with the Cartesia Line SDK. Supports 100+ LLM providers via LiteLLM with tool calling, multi-agent handoffs, and real-time interruption handling.