Found 2 Skills
Generate videos from text prompts or animate static images using ModelsLab's v7 Video Fusion API. Supports text-to-video, image-to-video, video-to-video, lip-sync, and motion control with 40+ models including Seedance, Wan, Veo, Sora, Kling, and Hailuo.
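A text-to-video call against this kind of API generally means POSTing a JSON payload with a model id and prompt, then polling for the finished video. The sketch below illustrates that shape only; the endpoint path, payload field names, and polling flow are assumptions for illustration, not the documented ModelsLab v7 Video Fusion API.

```python
# Hypothetical text-to-video request; endpoint and field names are
# assumptions, not the documented ModelsLab v7 API.
import json
import urllib.request

API_KEY = "your-api-key"  # placeholder credential
ENDPOINT = "https://modelslab.com/api/v7/video-fusion/text2video"  # assumed path

payload = {
    "key": API_KEY,
    "model_id": "seedance",  # one of the 40+ supported models
    "prompt": "a red fox running through fresh snow at dawn",
    "negative_prompt": "low quality, blurry",
}

req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# Such APIs typically return a job id immediately; the client then polls
# a status endpoint until the rendered video URL is ready. Uncomment to send:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

Consult the provider's API reference for the real endpoint, authentication scheme, and per-model parameters before use.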
Automatically generate AI videos with the Seedance 2.0 model in Jianying (also known as Xiaoyunque). Supports three modes: Text to Video (T2V), Image to Video (I2V), and Reference Video to Video (V2V). Use this skill when users need to generate AI videos, create short films with the Seedance model, or perform style conversion based on reference images or videos. Requires a pre-configured cookies.json login credential.
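Since this skill authenticates via an exported cookies.json file and dispatches on one of three modes, the setup can be sketched as below. The file layout (a list of name/value cookie objects, as common browser exporters produce) and the mode keys are assumptions drawn from the description above, not a documented format.

```python
# Sketch of loading the pre-configured cookies.json credential and
# validating a generation mode; file layout and mode names are assumptions.
import json
from pathlib import Path

MODES = {
    "t2v": "Text to Video",
    "i2v": "Image to Video",
    "v2v": "Reference Video to Video",
}

def load_cookies(path="cookies.json"):
    """Read exported browser cookies into a name -> value dict.

    Assumes the common export layout: a JSON list of objects,
    each with "name" and "value" keys.
    """
    raw = json.loads(Path(path).read_text(encoding="utf-8"))
    return {c["name"]: c["value"] for c in raw}

def build_request(mode, prompt, reference=None):
    """Assemble a generation request for the chosen mode."""
    if mode not in MODES:
        raise ValueError(f"mode must be one of {sorted(MODES)}")
    request = {"mode": mode, "prompt": prompt}
    if mode in ("i2v", "v2v"):
        # I2V takes a reference image; V2V takes a reference video.
        request["reference"] = reference
    return request
```

The guard on `mode` mirrors the three modes the skill advertises; T2V requests carry only a prompt, while I2V and V2V additionally carry the reference asset.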