Generate videos using Seedance models. Invoke when the user wants to create videos from text prompts, images, or reference materials.
Generate images and videos with Kling O3 — Kling's most powerful model family. Text-to-image, text-to-video, image-to-video, and video-to-video editing. Use when the user requests "Kling", "Kling O3", "Best quality video", "Kling image", "Kling video editing".
Craft professional video prompts for Google Veo 3.1 using cinematic techniques, audio direction, and timestamp choreography. Use when generating AI videos, creating video prompts, or working with Veo 3.
Generate prompts for 360° product turntables, multi-angle displays, and product reveal videos for Seedance 2.0 (Higgsfield). Use this when users want product rotation videos, turntable displays, product reveals, 360-degree views, multi-angle product showcases, product beauty shots, hero product videos, or unboxing reveals. Trigger conditions: product 360, turntable, product rotation, multi-angle, product reveal, product showcase, hero shot, beauty shot, unboxing, or any request to showcase physical products from multiple angles. Also trigger on phrases like "show my product from all sides" or "make a product video".
Generate cinematic film-style video prompts for Seedance 2.0 (Higgsfield). Use this skill when users want AI videos with a cinematic, film-like, movie-quality, Hollywood-style, dramatic, or professional film look. Trigger words: cinematic, film-like, movie scene, dramatic lighting, depth of field, lens flare, anamorphic, letterbox, film noir, epic, stabilized camera, dolly shot, crane shot, or any cinematic video generation request. Use this skill even if users don't explicitly say "cinematic" but describe film aesthetics.
Python video composition with moviepy 2.x — overlaying deterministic text on AI-generated video (LTX-2, SadTalker), compositing clips, single-file build.py video projects. Use when adding labels/captions/lower-thirds to LTX-2 or SadTalker outputs, building short ad-style spots in pure Python without Remotion, or doing programmatic video composition. Triggers include text overlay on video, label LTX-2 clip, caption SadTalker output, lower third, build.py video, moviepy, Python video composition, sub-30s ad spot.
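A minimal sketch of the text-overlay pattern this skill describes, using the moviepy 2.x API (which replaced the 1.x `set_*` methods with `with_*`). The input file name, font path, and label text are placeholders, not part of the skill itself:

```python
# Overlay a deterministic lower-third label on an AI-generated clip.
# "ltx2_output.mp4" and the font path are hypothetical placeholders.
from moviepy import VideoFileClip, TextClip, CompositeVideoClip

base = VideoFileClip("ltx2_output.mp4")

# moviepy 2.x uses with_* methods; TextClip takes the font as a file path.
label = (
    TextClip(
        font="DejaVuSans.ttf",        # any .ttf available on the system
        text="Lower third caption",
        font_size=48,
        color="white",
    )
    .with_duration(base.duration)
    .with_position(("center", 0.85), relative=True)  # lower-third placement
)

CompositeVideoClip([base, label]).write_videofile("labeled.mp4")
```

The same composite pattern extends to multiple timed clips (e.g. `.with_start(t)`) for building a sub-30s ad spot in a single build.py.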
Senior Specialist in Remotion v4.0+, React 19, and Next.js 16. Expert in programmatic video generation, sub-frame animation precision, and AI-driven video workflows for 2026.
PixVerse CLI — generate AI videos and images from the command line. Supports PixVerse, Veo, Sora, Kling, Hailuo, Wan, and more video models; Nano Banana (Gemini), Seedream, Qwen image models; and PixVerse's rich effect template library. Start here.
Use when generating videos from images with the DashScope Wan 2.7 image-to-video model (wan2.7-i2v): first-frame video generation, first+last frame interpolation, video continuation, or audio-driven video synthesis via the video-synthesis async API.
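A sketch of the submit-then-poll flow the async video-synthesis API uses. The endpoint, the `X-DashScope-Async` header, and the task-status polling pattern follow DashScope's documented convention for Wan image-to-video; the exact request schema for wan2.7-i2v, the prompt, and the image URL here are assumptions:

```python
# Submit an image-to-video task, then poll until the video URL is ready.
import os
import time
import requests

API = "https://dashscope.aliyuncs.com/api/v1"
headers = {
    "Authorization": f"Bearer {os.environ['DASHSCOPE_API_KEY']}",
    "X-DashScope-Async": "enable",  # video synthesis is async-only
}

# 1. Submit: first frame image + text prompt (both placeholders).
submit = requests.post(
    f"{API}/services/aigc/video-generation/video-synthesis",
    headers=headers,
    json={
        "model": "wan2.7-i2v",
        "input": {
            "prompt": "a cat slowly turns toward the camera",
            "img_url": "https://example.com/first_frame.png",
        },
    },
)
task_id = submit.json()["output"]["task_id"]

# 2. Poll the tasks endpoint until the job finishes.
while True:
    task = requests.get(f"{API}/tasks/{task_id}", headers=headers).json()
    if task["output"]["task_status"] in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(5)

print(task["output"].get("video_url"))
```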
Expand text storyboards into Seedance 2.0 video prompts one by one. Call this when the text storyboard is complete and needs to be converted into executable video prompts.
Use Alibaba Cloud DashScope API and LingMou to generate AI video and speech. Seven capabilities — (1) LivePortrait talking-head (image + audio → video, two-step), (2) EMO talking-head, (3) AA/AnimateAnyone full-body animation (three-step), (4) T2I text-to-image (Wan 2.x, default wan2.2-t2i-flash), (5) I2V image-to-video (Wan 2.x, default wan2.7-i2v-flash, supports T2I→I2V pipeline), (6) Qwen TTS (auto model/voice by scene, default qwen3-tts-vd-realtime-2026-01-15), (7) LingMou digital-human template video with random template, public-template copy, and script confirmation. Trigger when the user needs talking-head, portrait, full-body animation, text-to-image, text-to-video, or speech synthesis.
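A sketch of the T2I→I2V pipeline mentioned in capabilities (4) and (5), using the dashscope Python SDK. The model names come from the skill text; the `ImageSynthesis`/`VideoSynthesis` helper shapes follow DashScope's documented SDK and should be verified against the installed version:

```python
# Chain text-to-image into image-to-video: generate a frame, then animate it.
import dashscope
from dashscope import ImageSynthesis, VideoSynthesis

dashscope.api_key = "sk-..."  # placeholder; usually read from the environment

# Step 1: text-to-image with the default T2I model named in the skill.
img_rsp = ImageSynthesis.call(
    model="wan2.2-t2i-flash",
    prompt="a lighthouse at dusk, cinematic wide shot",
    n=1,
)
img_url = img_rsp.output.results[0].url

# Step 2: feed the generated frame into the default I2V model.
vid_rsp = VideoSynthesis.call(
    model="wan2.7-i2v-flash",
    prompt="camera slowly pushes in as waves crash",
    img_url=img_url,
)
print(vid_rsp.output.video_url)
```

`VideoSynthesis.call` blocks until the task completes; the SDK also exposes `async_call` plus `wait` for the non-blocking variant of the same flow.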
Adjust video speed using each::sense AI. Create slow motion, time-lapse, hyperlapse, speed ramps, reverse effects, and cinematic slow-mo with frame interpolation for smooth playback.