Found 36 Skills
Best practices for HeyGen, the AI avatar video creation API. Use when creating AI avatar videos, generating talking-head videos, or integrating HeyGen with Remotion.
Generate images and videos using x402-protected AI models at StableStudio. Use for generating images from text prompts, generating videos from text or images, editing images with AI, and creating visual content. Triggers: "generate image", "create image", "make a picture", "generate video", "create video", "make a video", "edit image", "modify image", "stablestudio", "nano-banana", "sora", "veo". ALWAYS use `npx agentcash fetch` or `npx agentcash fetch-auth` for stablestudio.dev endpoints.
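A minimal sketch of driving that CLI from a Node script. Only the `npx agentcash fetch <url>` invocation comes from the description above; the endpoint path is a hypothetical placeholder, and any extra flags the CLI may need are not shown.

```ts
// Sketch: wrap the agentcash CLI from Node to reach an x402-protected
// StableStudio endpoint. The URL below is a hypothetical placeholder,
// not a documented route.
import { execFileSync } from "node:child_process";

const url = "https://stablestudio.dev/api/..."; // hypothetical endpoint path

// `npx agentcash fetch` handles the x402 payment handshake before fetching.
const out = execFileSync("npx", ["agentcash", "fetch", url], { encoding: "utf8" });
console.log(out);
```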
Minimal video generation smoke test for Model Studio Wan Video.
Generate videos using ComfyUI with Wan 2.2, FramePack, or AnimateDiff. Handles image-to-video, text-to-video, talking heads, and motion-controlled animation. Use when creating any video content from character images or text descriptions.
Expert guidance for Google Veo 3.1 video generation. Use when the user wants to (1) create text-to-video or image-to-video prompts, (2) optimize for cinematic quality and native audio syncing, (3) maintain character consistency via reference images, (4) structure multi-shot sequences with timestamp prompting, (5) use First/Last Frame interpolation, (6) select between standard and fast generation modes, or (7) troubleshoot physics, motion, or audio issues in generated video.
Create production-grade motion graphics and videos using Remotion (React). Use whenever the user wants branded video content, product demos, data-driven video generation, or motion graphics with audio sync, web fonts, TailwindCSS styling, or media embedding. Covers: marketing videos, product launches, data visualizations, social media content, personalized video at scale, explainer videos with voiceover, animated charts, 3D scenes via Three.js. Requires Node.js and Claude Code environment. Trigger on: "create a Remotion video", "React video", "motion graphics", "branded video", "product demo video", "remotion", "video with audio", "TailwindCSS video", "data-driven video generation", "personalized video at scale", "video with voiceover". For mathematical animations, algorithm visualizations, or headless container rendering, use concept-to-video (Manim) instead.
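As a point of reference, a minimal single-file Remotion composition looks like the sketch below. The Remotion APIs (`AbsoluteFill`, `useCurrentFrame`, `interpolate`, `Composition`, `registerRoot`) are real; the `Headline` component, its props, and the timing values are illustrative only.

```tsx
// Minimal Remotion sketch: a title that fades in over the first 30 frames.
import React from "react";
import {
  AbsoluteFill,
  Composition,
  interpolate,
  registerRoot,
  useCurrentFrame,
} from "remotion";

const Headline: React.FC<{ title: string }> = ({ title }) => {
  const frame = useCurrentFrame();
  // Map frames 0..30 to opacity 0..1, clamping after frame 30.
  const opacity = interpolate(frame, [0, 30], [0, 1], { extrapolateRight: "clamp" });
  return (
    <AbsoluteFill style={{ justifyContent: "center", alignItems: "center", opacity }}>
      <h1>{title}</h1>
    </AbsoluteFill>
  );
};

const Root: React.FC = () => (
  <Composition
    id="Headline"
    component={Headline}
    durationInFrames={90}
    fps={30}
    width={1920}
    height={1080}
    defaultProps={{ title: "Launch day" }}
  />
);

registerRoot(Root);
```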
Novita AI: LLM, Image Generation & Editing, Video Generation, Audio (TTS/ASR), and GPU Cloud. Use this skill whenever the user wants to call Novita AI APIs — chat with LLMs (DeepSeek, Llama, Qwen), generate images (FLUX, Stable Diffusion, Seedream, Hunyuan Image), edit images (remove background, upscale, inpainting, img2img, outpainting, reimagine, merge face, replace background, remove text), generate videos (Kling, Wan, Hunyuan, Minimax Hailuo, Vidu, PixVerse, Seedance), do text-to-speech or speech-to-text (MiniMax TTS, GLM TTS, Fish Audio, ASR, voice cloning), run OpenAI-compatible batch jobs, manage GPU cloud instances and serverless endpoints, or check account balance and billing. Also trigger when the user mentions novita.ai, Novita AI, Novita API key, or wants to use any Novita platform service — even if they just say "generate an image" or "run an LLM" and Novita is available as a provider.
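For the LLM side, Novita exposes an OpenAI-compatible chat endpoint; a hedged sketch follows. The base URL and model slug are assumptions to be checked against the Novita docs, and the API key is read from an environment variable.

```ts
// Hedged sketch of a chat call against Novita's OpenAI-compatible API.
const NOVITA_BASE = "https://api.novita.ai/v3/openai"; // assumed base URL

async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${NOVITA_BASE}/chat/completions`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.NOVITA_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "meta-llama/llama-3.1-8b-instruct", // assumed model slug
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`Novita request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

chat("Summarize the Novita video models in one sentence.").then(console.log);
```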
Generate talking, singing, or presentation videos from a single character image and an audio track with the Alibaba Cloud Model Studio digital-human model `wan2.2-s2v`. Use when creating narrated avatar videos, singing portraits, or broadcast-style talking-head clips.
Guide for using Sirv AI Studio (www.sirv.studio), an AI-powered image and video processing platform. Use when working with product images, background removal, image upscaling, AI generation, video creation, batch processing, or e-commerce image workflows. Triggers on mentions of Sirv AI Studio, product photography, background removal, image upscaling, AI image generation, batch image processing, or marketplace optimization. IMPORTANT: If sirv-ai MCP tools are available (sirv_remove_background, sirv_upscale, sirv_generate, etc.), USE THEM directly for image processing tasks instead of telling the user to visit the website.
SuperImg programmatic video generation framework. Create HTML/CSS video templates with defineTemplate(), animate with ctx.std (tween, math, color), and render to MP4. Use when working with superimg templates or video rendering.
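A speculative sketch of what such a template might look like. The import path, the `defineTemplate()` options, and the `ctx.std.tween` signature are assumptions inferred only from the names in the description above, not from SuperImg documentation.

```ts
// Speculative sketch: shape and signatures below are assumptions, not
// the documented SuperImg API.
import { defineTemplate } from "superimg"; // assumed module entry point

export default defineTemplate({
  width: 1080,
  height: 1080,
  durationMs: 3000,
  render(ctx) {
    // Assumed helper: tween a value from 0 to 1 across the clip's timeline.
    const t = ctx.std.tween(0, 1, ctx.time);
    // HTML/CSS markup is the template's drawing surface per the description.
    return `
      <div style="opacity:${t}; font-family:sans-serif; font-size:64px">
        Hello, SuperImg
      </div>
    `;
  },
});
```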
Use the Chanjing Avatar API for lip-syncing video generation.
This skill provides comprehensive guidance for adapting Wan-series video generation models (Wan2.1/Wan2.2) from NVIDIA CUDA to Huawei Ascend NPU. It should be used when performing NPU migration of DiT-based video diffusion models, including device layer adaptation, operator replacement, distributed parallelism refactoring, attention optimization, VAE parallelization, and model quantization. This skill covers 9 major adaptation domains derived from real-world Wan2.2 CUDA-to-Ascend porting experience.