Found 76 Skills
Generate prompts for 360° product turntables, multi-angle displays, and product reveal videos for Seedance 2.0 (Higgsfield). Use this when users want product rotation videos, turntable displays, product reveals, 360-degree views, multi-angle product showcases, product beauty shots, hero product videos, or unboxing reveals. Trigger conditions: product 360, turntable, product rotation, multi-angle, product reveal, product showcase, hero shot, beauty shot, unboxing, or any request to showcase physical products from multiple angles. Also trigger on phrases like "show my product from all sides" or "make a product video".
Generate and manage provider-backed video renders for short-form production. Use this when approved upstream assets or prompt plans already exist and you need local render manifests, downloaded video files, and replaceable routes for talking-head or Seedance generation without losing continuity across concepts and personas.
Creates motion graphics and video content using AI video generation models (Veo, Runway). Supports product animations, social media videos, explainer content, and cinematic sequences for content workflows.
Create realistic long-form AI talking head/UGC videos that don't look AI-generated. Use when user wants to make "realistic AI video", "UGC video", "talking head video", "AI spokesperson", "AI ad content", "video that looks real", "human-looking AI video". Orchestrates Nano Banana for base images and Kling AI for video generation with natural pacing.
AI video generation through the 兔子API. Supports Veo, Sora, Kling, Seedance, and other models, in both single-video and long-video (multi-segment composition) modes. Use when the user asks to generate a video, create a video, or needs a video-generation backend.
(project - Skill) Generate AI videos using Volcengine Jimeng Video 3.0 Pro API. Use when users request video generation from text prompts or images, including text-to-video, image-to-video, or any AI-powered video creation. Triggers include "generate video", "create video", "AI video", "Jimeng video", "text to video", "image to video", or any request involving AI-powered video generation from descriptions.
PixVerse CLI — generate AI videos and images from the command line. Supports PixVerse, Veo, Sora, Kling, Hailuo, Wan, and more video models; Nano Banana (Gemini), Seedream, Qwen image models; and PixVerse's rich effect template library. Start here.
Create professional videos autonomously using claude-code-video-toolkit — AI voiceovers, image generation, music, talking heads, and Remotion rendering.
Generate or edit videos using the Doubao Seedance video model from ByteDance's Volcengine Ark.
How to use the Seedance 2.0 and Seedance 2.0 fast video generation API (Volcengine Ark platform). Use this skill whenever the user wants to generate videos with Seedance, call the Seedance API, create video generation tasks, poll for video results, write code that uses Seedance/doubao-seedance models, or build anything involving AI video generation with the Ark API. Also trigger when the user mentions "seedance", "video generation API", "doubao-seedance", "ark video", "text to video API", or "image to video API".
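The Seedance entry above describes a create-then-poll task flow. A minimal sketch of that flow follows; the endpoint URL, model name, and payload field names here are assumptions based on common Ark usage, not verified against the official docs, and the status fetcher is injected so the polling logic stands alone:

```python
import time

# Assumed Ark endpoint for video generation tasks; confirm against the
# Volcengine Ark API reference before use.
ARK_TASKS_URL = "https://ark.cn-beijing.volces.com/api/v3/contents/generations/tasks"

def build_task_payload(prompt: str, model: str = "doubao-seedance-2-0") -> dict:
    """Build the JSON body for a text-to-video task (field names assumed)."""
    return {"model": model, "content": [{"type": "text", "text": prompt}]}

def poll_task(fetch_status, task_id: str, interval: float = 5.0,
              max_tries: int = 60) -> dict:
    """Call fetch_status(task_id) until the task reaches a terminal status."""
    for _ in range(max_tries):
        result = fetch_status(task_id)
        if result.get("status") in ("succeeded", "failed"):
            return result
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} still running after {max_tries} polls")
```

Injecting `fetch_status` keeps the retry loop testable without network access; in real use it would wrap an authenticated GET on the task's status URL.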
Comprehensive creation via Xiaoyunque's AI capabilities, supporting generation and editing of images and videos. Covered scenarios: generation (text-to-image, text-to-video, image-to-video, animation creation, "draw xxx", "create xxx clip"); editing and revision (replace xxx with yyy, remove xxx, add xxx, change to xxx, adjust xxx, local modification, lens adjustment); style transfer (style migration, repainting, style change); video continuation; video/TVC/promotional video replication; short drama and short comic drama generation; music MV creation; product advertisement and demo video production; storyboard design; and educational video or short video production. Also trigger when users mention Xiaoyunque, xyq, uploading reference images/videos, or checking generation progress. Key judgment: trigger this skill whenever the request involves AI video creation, generation, editing, or revision, regardless of wording (e.g., "draw a cat", "make a poster", "create a video", "help me revise this video", "help me replicate this video", "make an MV with this song", "generate a short drama from one sentence").
Use when generating videos from images with DashScope Wan 2.7 image-to-video model (wan2.7-i2v). Use when implementing first-frame video generation, first+last frame interpolation, video continuation, or audio-driven video synthesis via the video-synthesis async API.
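The DashScope entry above refers to the async video-synthesis flow, which submits a task and polls separately. A sketch of the submission request is below; the header name follows DashScope's documented async pattern, but the exact payload fields (`img_url`, `prompt`) are assumptions to verify against the current wan2.7-i2v reference:

```python
# Assumed DashScope video-synthesis endpoint; check the official API
# reference before relying on this URL or the body shape.
DASHSCOPE_URL = ("https://dashscope.aliyuncs.com/api/v1/services/"
                 "aigc/video-generation/video-synthesis")

def build_i2v_request(api_key: str, img_url: str, prompt: str) -> tuple[dict, dict]:
    """Return (headers, body) for an async image-to-video submission."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        "X-DashScope-Async": "enable",  # opt in to the async task flow
    }
    body = {
        "model": "wan2.7-i2v",  # first-frame image-to-video model
        "input": {"img_url": img_url, "prompt": prompt},
    }
    return headers, body
```

The response to this POST would carry a task ID, which the caller then polls via DashScope's task-query endpoint until the rendered video URL is available.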