Found 30 Skills
UMPF Structured Prompt Master - Guides users through in-depth, multi-round conversations to polish cinematic scene designs. Supports 7 visual engines (Photography/Print/3D/Oil Painting/Pixel/Chinese Ink/Claymation). Trigger conditions: (1) the user says 'generate prompt', 'draw a...', or 'help me design a scene'; (2) a Nano Banana prompt is needed; (3) the user wants professional art guidance for image generation; (4) the user mentions UMPF or moonkite-maliang.
This skill should be used when the user asks to "run a tracking cycle", "measure AI visibility", "check share of voice", "run Morphiq Track", "track citations", "check GEO score", "generate prompts", "run content creation workflow", or mentions monitoring LLM mentions, running content creation workflows, measuring brand visibility, or generating query fanout content. Queries multiple LLM providers, produces delta reports, and maintains MORPHIQ-TRACKER.md as the persistent state file for the entire pipeline.
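The tracking cycle above (query several LLM providers, diff against the previous run, persist state to MORPHIQ-TRACKER.md) can be sketched roughly as follows. This is a minimal illustration under stated assumptions: the provider names, the mention-count interface, and the report layout are all hypothetical, not the skill's real API.

```python
from datetime import date

def run_tracking_cycle(providers, previous_counts):
    """Collect brand-mention counts per provider and compute deltas
    against the previous cycle. `providers` maps a provider name to a
    callable that returns a mention count (a stand-in for a real LLM
    query)."""
    current = {name: query() for name, query in providers.items()}
    deltas = {name: current[name] - previous_counts.get(name, 0)
              for name in current}
    return current, deltas

def render_state_file(current, deltas):
    """Render a minimal MORPHIQ-TRACKER.md-style delta report block.
    The layout here is illustrative, not the skill's actual format."""
    lines = [f"# Tracking cycle {date.today().isoformat()}"]
    for name in sorted(current):
        sign = "+" if deltas[name] >= 0 else ""
        lines.append(f"- {name}: {current[name]} ({sign}{deltas[name]})")
    return "\n".join(lines)

# Stub providers standing in for real LLM API calls.
providers = {"provider_a": lambda: 12, "provider_b": lambda: 7}
```

In a real run the stub callables would be replaced by API clients for each tracked LLM, and the rendered block would be appended to (or merged into) the persistent MORPHIQ-TRACKER.md file.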
Generate optimized prompts for AI image and video generation. Triggers on "generate a prompt for", "write me a prompt", "create an image prompt", "create a video prompt", "optimize this prompt".
Generate Ralph-compatible prompts for multiple related tasks. Creates phased prompts with sequential milestones, cumulative progress tracking, and phase-based completion promises. Use when creating prompts for CRUD implementations, multi-step features, staged migrations, or any work requiring multiple distinct but related tasks.
Core package for defining schemas, catalogs, and AI prompt generation for json-render. Use when working with @json-render/core, defining schemas, creating catalogs, or building JSON specs for UI/video generation.
Convert structured UX specs and product context into a sequenced prompts.md file for Claude Code. Use when a user has completed upstream design thinking (problem framing, PRD, UX spec) and needs to translate that into step-by-step prompts that coding agents can execute incrementally. This skill bridges design artifacts to code generation.
Analyze a reference design image and extract its visual DNA — layout, style, color palette, texture, typography, copy tone, spacing, etc. — into a structured, reusable replication prompt that can be applied to new scenarios. Trigger when the user provides a reference image and asks to "extract style", "replicate this", "clone this design", "analyze this visual", "generate a replication prompt", "提取设计要素" (extract design elements), "复刻这个风格" (replicate this style), "分析这张图" (analyze this image), or "视觉克隆" (visual clone).
Turn notes into structured LLM prompts or improve existing prompts. Triggers: 'write a prompt', 'system prompt', 'prompt template', 'prompt engineering', 'rewrite this prompt'. Not for skills or routines.
Novel Visual Prompt Skill. Generates reusable Chinese or English AI drawing prompts for characters, scenes, covers, chapter illustrations, storyboards, props, maps, and atmosphere diagrams. Activated when users request illustration prompts, character standees, cover images, scene images, visual settings, consistent art styles, or scene extraction from novel text.
Use when the user wants to turn a feature idea, change request, or rough requirement into a precise feature-development prompt for one or more codebase projects.
KERNEL-based prompt engineering — transforms vague requests into structured, high-performance prompts optimized for first-try success.
This skill should be used when users need to generate detailed, structured prompts for creating UI/UX prototypes. Trigger when users request help with "create a prototype prompt", "design a mobile app", "generate UI specifications", or need comprehensive design documentation for web/mobile applications. Works with multiple design systems including WeChat Work, iOS Native, Material Design, and Ant Design Mobile.
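Most entries above are selected by matching quoted trigger phrases against the user's request. One simple way such a router could work is substring matching over a trigger table, sketched below. The skill names, the trigger table, and the first-match routing rule are illustrative assumptions, not how any particular skill runner actually dispatches.

```python
# Hypothetical trigger table: skill name -> lowercase trigger phrases.
TRIGGERS = {
    "prototype-prompt": [
        "create a prototype prompt",
        "design a mobile app",
        "generate ui specifications",
    ],
    "image-prompt": [
        "generate a prompt for",
        "write me a prompt",
        "create an image prompt",
    ],
}

def match_skill(user_text):
    """Return the first skill whose trigger phrase appears in the
    user's request (case-insensitive), or None if nothing matches."""
    text = user_text.lower()
    for skill, phrases in TRIGGERS.items():
        if any(phrase in text for phrase in phrases):
            return skill
    return None
```

Real skill selection is typically fuzzier than substring matching (the descriptions above also list non-phrase conditions like "mentions UMPF"), but the table-of-triggers shape is the common thread across these entries.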