Found 220 Skills
Generate anime-style video prompts for Seedance 2.0 (Higgsfield). Use this when users want anime, Japanese animation style, shonen manga action, seinen manga drama, magical girl, mecha, isekai, slice-of-life anime, or any Japanese animation aesthetics. Trigger conditions: anime, Japanese animation, shonen manga, seinen manga, manga-style video, anime fight, anime opening, anime ending, cherry blossoms, chibi, cute style, mecha, isekai, or any anime-style request, including phrases like "make it look like anime" or "Japanese cartoon style".
Convert a completed text storyboard into Seedance 2.0 video prompts, one shot at a time. Call this when the text storyboard is finished and needs to be turned into executable video prompts.
Shape conversation context (or a fresh task description) into a 5-part brief — Context / Task / Constraints / Verification / Output format — ready to hand off to an agent. Use when the user is ready to execute a task and wants it structured first. Composes naturally with /grill-me upstream, but works standalone too. Triggers: "/create-brief", "draft a brief", "shape this into a brief", "turn this into a task spec", "write a brief for this".
Turn approved storyboard logic, beat sheets, or prompt plans into provider-ready short-form video requests. Use this when the segment structure is already known and you need a model-agnostic request architecture that can later map cleanly into Seedance or other video generators.
An image generation/editing Skill for GPT Image 2. It runs in three environments: (A) Garden Local Mode: generates and saves images directly via OpenAI-compatible APIs; (B) Host-Native Mode: treats this Skill as a prompt-engineering guide and passes the rendered prompt to the host Agent's built-in image tool; (C) Advisor Mode: falls back to acting as a high-quality prompt consultant when the host has no image tools. It covers 18 major categories and over 80 structured templates, including posters, UI, products, infographics, academic figures, technical architecture diagrams, comics, avatars, process boards, storyboards, IP merchandise, and editing workflows.
Representation expert that counters systemic AI biases to generate culturally accurate, affirming, and non-stereotypical images and video.
Use when the user wants a full feature-development chain: clarify a rough feature idea into a prompt, review it with the user, then hand it off to grill-with-docs, to-prd, to-issues, and tdd.
Plan and run commercial image or video production with genmedia. Use this for product photography, ads, e-commerce batches, product reveals, lifestyle commercials, background replacement, social formats, and brand-safe prompt work.
Choose GPT-Image2 / gpt-image-2 visual styles and industrial prompt templates from the awesome-gpt-image-2 style library. Use when an agent needs to create, rewrite, classify, or improve image-generation prompts with repository-backed templates, categories, style tags, scene tags, pitfalls, and example cases.
Systematic improvement of existing agents through performance analysis, prompt engineering, and continuous iteration.
Vision and multimodal capabilities for Claude including image analysis, PDF processing, and document understanding. Activate for image input, base64 encoding, multiple images, and visual analysis.
Build full-stack web applications powered by Google Gemini's Nano Banana & Nano Banana Pro image generation APIs. Use when creating Next.js image generation apps, text-to-image tools, or iterative image editors.