Found 168 Skills
Generate AI videos from text prompts using the HeyGen API. Use when: (1) Generating videos from text descriptions, (2) Creating AI-generated video clips for content production, (3) Image-to-video generation with a reference image, (4) Choosing between video generation providers (VEO, Kling, Sora, Runway, Seedance), (5) Working with HeyGen's /v1/workflows/executions endpoint for video generation.
Agent-IM Conversation Skill - Create sessions, send messages (such as image or video generation requests) via the OpenAPI interface, and query session progress. Activated when users need to generate images or videos or to check current session messages.
Expert prompt engineering for Google Veo 3.2 (Artemis engine). Use when the user wants to generate a video with Veo 3.2, needs help crafting cinematic prompts, or mentions Veo, Google video generation, or Artemis engine.
Generate images or videos using Jimeng Dreamina CLI. Invoke when user needs to generate images or videos using Jimeng (Dreamina).
Render a Claude-style prompt typing animation video by calling the Remotion CLI against the remote site https://www.laosunwendao.com. Use when the user asks for "make a Claude prompt typing animation" (做一个 claude 的提示词打字机动画), "make a Claude typing animation" (做 Claude 打字动画), "create a prompt animation" (创建提示词动画), or similar requests that convert a text prompt into a typing-animation video.
Generate videos with the Model Studio DashScope SDK using Wan video generation models (wan2.6-t2v, wan2.6-i2v-flash, wan2.6-i2v, and regional variants). Use when implementing or documenting video.generate requests/responses, mapping the prompt/negative_prompt/duration/fps/size/seed/reference_image/motion_strength parameters, or integrating video generation into the video-agent pipeline.
End-to-end workflow for automatically generating complete whiteboard animation videos from SRT subtitle files. It completes three phases in sequence: storyboard parsing, image generation, and video generation. It is triggered when the user provides an SRT file and requests to generate a whiteboard animation video, or says "generate whiteboard video from subtitles" or "whiteboard video workflow".
Generate whiteboard hand-drawn animation videos from images. Converts any color image into a two-stage animation (line-art drawing, then coloring) with a hand-overlay effect, and outputs an H.264 MP4 video. Supports both single and batch modes. Triggered when users say "turn the picture into a whiteboard animation", "whiteboard animation", or "batch whiteboard animation".
Run browser or terminal automation scenarios with video recording from natural language flow descriptions. Use when the user wants to generate demo videos, capture HAR files, record CLI tool demos, or run acceptance tests.
Prompting techniques for AI video generation models on Replicate. Use when writing prompts for video models or building video generation features.
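One common prompting pattern for video models is composing the prompt from labeled parts (subject, action, camera move, style). The helper below is an illustrative convention, not an official Replicate prompt format; all names are hypothetical.

```python
# Illustrative pattern for structuring a video-model prompt from parts:
# subject + action first, then camera direction, then style keywords.
# This is a generic convention, not a Replicate-specified format.
def compose_video_prompt(subject: str, action: str,
                         camera: str = "", style: str = "") -> str:
    parts = [p for p in (f"{subject} {action}".strip(), camera, style) if p]
    return ", ".join(parts)


prompt = compose_video_prompt(
    "a red vintage car", "driving along a coastal road at sunset",
    camera="slow aerial tracking shot",
    style="cinematic, 35mm film grain",
)
```

Keeping the pieces separate makes it easy to vary one axis (say, the camera move) while holding subject and style fixed across generations.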
Short-form video generation skill — 3-10 second clips for product reveals, motion teasers, ambient loops. Defaults to Seedance 2 but works the same with Kling 3 / 4, Veo 3 or Sora 2. Output is one MP4 saved to the project folder. When the workspace also ships an interactive-video / hyperframes skill, prefer composing several short shots into a single timeline rather than one long monolithic clip.
Orchestrates end-to-end video generation through sequential workflow steps (audio, direction, assets, design, coding). Activates when the user requests video creation from a script, wants to resume video generation, mentions "create video", "generate video", or "video workflow", requests a specific step (audio, direction, assets, design, or coding), or asks to "create audio", "generate direction", "create assets", "generate design", or "code video components". Manages workflow state tracking and parallel scene generation.