Analyzes and improves prompts using evidence-based techniques. Applies proven techniques such as few-shot, CoT, XML structuring, and Context Engineering to raise prompt quality. Use when the user requests prompt improvement, prompt review, prompt optimization, or better prompting.
A skill for improving prompts by applying general LLM/agent best practices. When the user provides a prompt, this skill outputs an improved version, identifies missing information, and provides specific improvement points. Use when the user asks to "improve this prompt", "review this prompt", or "make this prompt better".
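To make the structuring techniques named above concrete, here is a minimal sketch of a few-shot, XML-tagged prompt template in Python. The tag names and the sentiment-classification task are illustrative assumptions, not output from either skill.

```python
# A minimal sketch of a few-shot, XML-structured prompt template.
# Tag names and the sentiment task are illustrative assumptions.

def build_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a prompt with explicit XML sections and few-shot examples."""
    shots = "\n".join(
        f"<example>\n  <input>{inp}</input>\n  <output>{out}</output>\n</example>"
        for inp, out in examples
    )
    return (
        f"<task>\n{task}\n</task>\n\n"
        f"<examples>\n{shots}\n</examples>\n\n"
        f"<input>\n{query}\n</input>"
    )

prompt = build_prompt(
    task="Classify the sentiment of the input as positive or negative.",
    examples=[("I loved it", "positive"), ("Waste of money", "negative")],
    query="The battery died after a week.",
)
print(prompt)
```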
Optimize and restructure user prompts for better AI responses. Use when the user writes in a language other than English (Chinese, Japanese, Korean, etc.), when the request is vague or unclear, or when the user asks to improve their prompt. Triggers on: '帮我', '请帮忙', 'お願い', any non-English complex request. Translates, restructures, and shows the optimized prompt before proceeding.
Get a second opinion from leading AI models on code, architecture, strategy, prompting, or anything else. Queries models via OpenRouter, Gemini, or OpenAI APIs. Supports single opinion, multi-model consensus, and devil's advocate patterns. Trigger with 'brains trust', 'second opinion', 'ask gemini', 'ask gpt', 'peer review', 'consult', 'challenge this', or 'devil's advocate'.
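As a rough illustration of the multi-model consensus pattern, here is a sketch against OpenRouter's OpenAI-compatible chat completions endpoint. The model IDs are placeholders; substitute whatever models your OpenRouter account can access.

```python
# A sketch of multi-model consensus: ask several models the same question
# via OpenRouter's OpenAI-compatible endpoint and collect the answers.
import os
import requests

def second_opinion(question: str, models: list[str]) -> dict[str, str]:
    """Query each model with the same question; return answers keyed by model."""
    answers = {}
    for model in models:
        resp = requests.post(
            "https://openrouter.ai/api/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
            json={
                "model": model,
                "messages": [{"role": "user", "content": question}],
            },
            timeout=60,
        )
        resp.raise_for_status()
        answers[model] = resp.json()["choices"][0]["message"]["content"]
    return answers

opinions = second_opinion(
    "Is a message queue overkill for a 100-requests/day webhook?",
    models=["openai/gpt-4o", "google/gemini-2.0-flash-001"],  # placeholder IDs
)
```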
Systematic LLM prompt engineering: analyzes existing prompts for failure modes, generates structured variants (direct, few-shot, chain-of-thought), designs evaluation rubrics with weighted criteria, and produces test case suites for comparing prompt performance. Triggers on: "prompt engineering", "prompt lab", "generate prompt variants", "A/B test prompts", "evaluate prompt", "optimize prompt", "write a better prompt", "prompt design", "prompt iteration", "few-shot examples", "chain-of-thought prompt", "prompt failure modes", "improve this prompt". Use this skill when designing, improving, or evaluating LLM prompts specifically. NOT for evaluating Claude Code skills or SKILL.md files — use skill-evaluator instead.
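The weighted-criteria rubric this skill describes reduces to a simple weighted sum. Below is a sketch of scoring prompt variants against such a rubric; the criteria, weights, and scores are invented for illustration.

```python
# A sketch of scoring prompt variants against a weighted evaluation rubric.
# Criteria, weights, and per-variant scores are invented for illustration.

RUBRIC = {          # criterion -> weight (weights sum to 1.0)
    "accuracy": 0.5,
    "format_compliance": 0.3,
    "conciseness": 0.2,
}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine per-criterion scores (0-10) into one weighted total."""
    return sum(RUBRIC[c] * scores[c] for c in RUBRIC)

variants = {
    "direct":           {"accuracy": 7, "format_compliance": 9, "conciseness": 9},
    "few_shot":         {"accuracy": 9, "format_compliance": 8, "conciseness": 6},
    "chain_of_thought": {"accuracy": 9, "format_compliance": 6, "conciseness": 4},
}

for name, scores in sorted(variants.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.1f}")
```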
Generate AI images from text prompts. Triggers on: "生成图片", "画一张", "AI图", "generate image", "配图", "create picture", "draw", "visualize", "generate an image".
Expert skill for Token-Oriented Object Notation (TOON) — compact, schema-aware JSON encoding for LLM prompts that reduces tokens by ~40%.
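The token savings come largely from TOON's tabular form for arrays of uniform objects: the schema is declared once, then each record is one compact row. The sketch below only illustrates that idea using the array form documented by the TOON project (`key[count]{fields}:` followed by indented value rows); consult the TOON spec for the full grammar, quoting rules, and nesting.

```python
# A sketch of TOON's tabular idea: declare the field schema once, then
# emit one comma-separated row per record instead of repeating keys in
# every JSON object. Not a full TOON encoder.
import json

records = [
    {"id": 1, "name": "Alice", "role": "admin"},
    {"id": 2, "name": "Bob", "role": "viewer"},
]

def toon_table(key: str, rows: list[dict]) -> str:
    fields = list(rows[0])
    header = f"{key}[{len(rows)}]{{{','.join(fields)}}}:"
    body = "\n".join("  " + ",".join(str(r[f]) for f in fields) for r in rows)
    return f"{header}\n{body}"

print(json.dumps(records))        # keys repeated in every object
print(toon_table("users", records))
# users[2]{id,name,role}:
#   1,Alice,admin
#   2,Bob,viewer
```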
Use when users provide vague, underspecified, or unclear requests where they need help defining WHAT they actually want - across ANY domain (writing, analysis, code, documentation, proposals, reports, presentations, creative work). Trigger aggressively when users express VAGUE GOALS ("make this better", "improve our X", "figure out what to include", "I don't know where to start", "kinda lost on what to do", "not sure what this means"), UNDEFINED SUCCESS ("should look professional", "explain this clearly", "make it convincing", "whatever works best", missing constraints/audience/format), COMMUNICATION UNCLEAR ("how do I explain/communicate this", "my team gets confused when I describe it", "help me figure out what to ask about X"), AMBIGUOUS REQUIREMENTS ("analyze the data" without saying what to look for, "improve documentation" without saying how, "make it more robust" without defining robustness, any request with multiple valid interpretations), or META-PROMPTING ("optimize this prompt", "improve my prompt", "make this clearer", "review my instructions", learning about prompt frameworks like CO-STAR/RISEN/RODES, understanding what makes prompts effective). Trigger for non-technical users and ANY situation where the request needs refinement, structure, or clarification before execution can begin. When in doubt about whether a request is clear enough - trigger.
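Of the prompt frameworks this entry names, CO-STAR (Context, Objective, Style, Tone, Audience, Response format) is the most structured. Here is a sketch of a CO-STAR-framed prompt; the example content is invented.

```python
# A sketch of the CO-STAR framing: Context, Objective, Style, Tone,
# Audience, Response format. The example content is invented.
CO_STAR = """\
# CONTEXT
We are launching a budgeting app for first-time freelancers.

# OBJECTIVE
Draft a 3-email onboarding sequence that gets users to link a bank account.

# STYLE
Plain and practical, no marketing jargon.

# TONE
Encouraging and direct.

# AUDIENCE
Freelancers with little financial background.

# RESPONSE FORMAT
Numbered list; one subject line and a ~100-word body per email.
"""
print(CO_STAR)
```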
When the user wants to improve their app's star rating, increase ratings volume, optimize when and how they prompt users for a review, or recover from a bad rating period. Use when the user mentions "app rating", "star rating", "review prompt", "SKStoreReviewRequest", "In-App Review API", "ask for review", "low rating", "rating drop", "get more reviews", or "recover from 1-star". For responding to reviews, see review-management. For overall ASO health, see aso-audit.
Write, review, and improve prompts for any LLM — Claude, GPT, Gemini, Llama, DeepSeek, Mistral, Cohere, Qwen, Grok, Nova, and more. Use when the user asks to "write a system prompt", "improve this prompt", "review my prompt", "make a prompt for", "optimize my prompt", "fix my prompt", "why isn't my prompt working", or wants help writing better prompts for any AI model. Also use when building agents, chatbots, or AI assistants that need system-level instructions, or when the user has a bad prompt they want rewritten. Covers system prompts, task prompts, tool descriptions, and general prompt improvement across all major model families.
Use when the user needs prompt design, optimization, few-shot examples, chain-of-thought patterns, structured output, evaluation metrics, or prompt versioning. Triggers: new prompt creation, prompt optimization, few-shot example design, structured output specification, A/B testing prompts, evaluation framework setup.
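For the structured-output piece, a common pattern is to state the schema in the prompt and validate the reply on the way back. A minimal sketch follows; the schema and field names are invented for illustration.

```python
# A sketch of structured-output specification plus validation. The schema
# hint goes into the prompt; parse_reply fails loudly on schema drift.
import json

SCHEMA_HINT = (
    "Reply with JSON only, matching: "
    '{"summary": string, "risk": "low"|"medium"|"high", "actions": [string]}'
)

def parse_reply(text: str) -> dict:
    """Reject replies that drift from the declared schema."""
    data = json.loads(text)
    if set(data) != {"summary", "risk", "actions"}:
        raise ValueError(f"unexpected keys: {sorted(data)}")
    if data["risk"] not in {"low", "medium", "high"}:
        raise ValueError(f"invalid risk level: {data['risk']}")
    return data

reply = '{"summary": "Patch is safe.", "risk": "low", "actions": ["merge"]}'
print(parse_reply(reply))
```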
This skill should be used when crafting prompts for Nano Banana Pro (Gemini image generation). Use when users want help writing image generation prompts, need guidance on prompt structure, or want to optimize their prompts for better results.
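As one example of the prompt structure such a skill might recommend, here is a sketch that builds an image prompt from separate subject, style, composition, and lighting fields. This ordering is a general image-prompting convention, not Nano Banana Pro's official template.

```python
# A sketch of a structured image-generation prompt: distinct fields for
# subject, style, composition, and lighting, joined into one string.
parts = {
    "subject": "a lighthouse on a rocky coast at dusk",
    "style": "watercolor illustration, muted palette",
    "composition": "wide shot, lighthouse on the right third of the frame",
    "lighting": "warm lamp glow against a cool blue-hour sky",
}
prompt = ", ".join(parts.values())
print(prompt)
```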