Found 218 Skills
Building applications with Large Language Models - prompt engineering, RAG patterns, and LLM integration. Use for AI-powered features, chatbots, or LLM-based automation.
This skill should be used when users need to generate detailed, structured prompts for creating UI/UX prototypes. Trigger when users request help with "create a prototype prompt", "design a mobile app", "generate UI specifications", or need comprehensive design documentation for web/mobile applications. Works with multiple design systems including WeChat Work, iOS Native, Material Design, and Ant Design Mobile.
Expert in using v0.dev for AI-powered UI generation. Covers prompting strategies, component iteration, shadcn/ui integration, export workflows, and customization. Knows how to get the best results from v0 and integrate generated components into production codebases. Use when keywords such as "v0", "v0.dev", "generate UI", "shadcn component", "AI component", "generate component", "ui-generation", "component-generation", "ai-ui", or "Vercel" are mentioned.
Production-ready patterns for building LLM applications. Covers RAG pipelines, agent architectures, prompt IDEs, and LLMOps monitoring. Use when designing AI applications, implementing RAG, building agents, or setting up LLM observability.
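As an aside on the retrieve-then-generate pattern this entry covers, here is a minimal self-contained sketch; the toy embed() helper, the in-memory document list, and the final prompt string are stand-ins for a real embedding model, vector store, and LLM call, not this skill's own implementation.

```python
# Minimal retrieve-then-generate sketch. embed() is a toy bag-of-words
# stand-in for a real embedding model; the final LLM call is omitted
# because provider SDKs vary.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: lowercase bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Invoices are billed on the first of each month.",
    "Support is available via chat from 9am to 5pm.",
    "Refunds are processed within 5 business days.",
]
# The resulting prompt would then be sent to the LLM of your choice.
print(build_prompt("When do refunds arrive?", docs))
```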
This skill should be used when generating and editing images using the Gemini API (Nano Banana Pro). It applies when creating images from text prompts, editing existing images, applying style transfers, generating logos with text, creating stickers, product mockups, or any image generation/manipulation task. Supports text-to-image, image editing, multi-turn refinement, and composition from multiple reference images.
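As a rough illustration of the text-to-image path this entry describes, the sketch below assumes the google-genai Python SDK; the model ID, output filename, and placeholder API key are assumptions, so check the current Gemini docs before relying on it.

```python
# Rough sketch of text-to-image with the Gemini API via the google-genai SDK.
# The model ID and API key below are assumptions; adjust to your environment.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # or configure via environment

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # assumed "Nano Banana" image model ID
    contents="A flat-design sticker of a smiling banana wearing sunglasses",
)

# Image bytes come back as inline-data parts; save the first one found.
for part in response.candidates[0].content.parts:
    if getattr(part, "inline_data", None):
        with open("sticker.png", "wb") as f:
            f.write(part.inline_data.data)
        break
```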
Build applications where agents are first-class citizens. Use this skill when designing autonomous agents, creating MCP tools, implementing self-modifying systems, or building apps where features are outcomes achieved by agents operating in a loop.
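For the loop-based agent pattern this entry describes, a minimal sketch follows; call_llm(), the tool registry, and the stop condition are hypothetical stand-ins, since the real decision step would be a model call choosing the next tool.

```python
# Minimal sketch of an agent operating in a loop over a set of tools.
# call_llm() is a placeholder for whatever model the app uses; the tool
# set and stop condition are hypothetical.
from typing import Callable

def call_llm(state: str) -> dict:
    # Placeholder: a real implementation would ask the model to pick the
    # next tool and its arguments given the current state.
    if "result" in state:
        return {"tool": "done", "args": {}}
    return {"tool": "search", "args": {"q": state}}

TOOLS: dict[str, Callable[..., str]] = {
    "search": lambda q: f"result for {q!r}",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    state = goal
    for _ in range(max_steps):
        decision = call_llm(state)        # the model chooses the next action
        if decision["tool"] == "done":    # the model signals the outcome is reached
            return state
        output = TOOLS[decision["tool"]](**decision["args"])
        state = f"{state}\n{output}"      # feed the observation back into the loop
    return state

print(run_agent("find the latest release notes"))
```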
Iterative UI/UX polishing workflow for web applications. The exact prompt and methodology for achieving Stripe-level visual polish through multiple passes.
Create and refine OpenCode agents via guided Q&A. Use proactively for agent creation, performance improvement, or configuration design. Examples:
- user: "Create an agent for code reviews" → ask about scope, permissions, tools, and model preferences, then generate AGENTS.md frontmatter
- user: "My agent ignores context" → analyze description clarity, allowed-tools, and permissions, and suggest improvements
- user: "Add a database expert agent" → gather requirements, set convex-database-expert in subagent_type, configure permissions
- user: "Make my agent faster" → suggest smaller models, reduce allowed-tools, tighten permissions
Generates professional AI images using Google Gemini. ALWAYS invoke this skill when building websites, landing pages, slide decks, presentations, or any task needing visual content. Invoke IMMEDIATELY when you detect image needs - don't wait for the user to ask. This skill handles prompt optimization and aspect ratio selection.
Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integration patterns, RAG architecture, prompt engineering that scales, AI UX that users trust, and cost optimization that doesn't bankrupt you. Use when the skill's keywords, file_patterns, or code_patterns match the task at hand.
Generate optimized prompts for Gemini 2.5 Flash Image (Nano Banana). Use for image generation, crafting photo prompts, art styles, or multi-turn editing workflows with best practices.
Create, review, and update prompts, agents, and workflows. Covers five workflow patterns, agent delegation, handoffs, and context engineering. Use for any .agent.md file work or multi-agent system design. Triggers on 'agent workflow', 'create agent', or 'ワークフロー設計' (workflow design).