Generate cinematic film-style video prompts for Seedance 2.0 (Higgsfield). Use this skill when users want AI videos with a cinematic, film-like, movie-quality, Hollywood-style, dramatic, or professional film-grade look. Trigger words: cinematic, film-like, movie scene, dramatic lighting, depth of field, lens flare, anamorphic, letterbox, film noir, epic, stabilized camera, dolly shot, crane shot, or any cinematic video generation request. Use this skill even if users don't explicitly say "cinematic" but describe film aesthetics.
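A minimal sketch of how a skill like this might assemble a film-style prompt from the trigger vocabulary above. The field names and default values are illustrative assumptions, not the actual Seedance 2.0 / Higgsfield prompt schema:

```python
# Illustrative only: field names and vocabulary are assumptions,
# not the real Seedance 2.0 / Higgsfield prompt schema.
CINEMATIC_DEFAULTS = {
    "lens": "35mm anamorphic",
    "lighting": "low-key dramatic, strong key light, soft rim",
    "grade": "teal-and-orange filmic color grade",
    "framing": "2.39:1 letterbox, shallow depth of field",
}

def build_cinematic_prompt(subject: str, camera_move: str = "slow dolly-in") -> str:
    """Compose a film-style prompt from a subject and a camera move."""
    d = CINEMATIC_DEFAULTS
    return (
        f"{subject}, {camera_move}, shot on {d['lens']}, "
        f"{d['lighting']}, {d['grade']}, {d['framing']}, "
        "subtle lens flare, stabilized cinematic motion"
    )

print(build_cinematic_prompt("a detective walking through rain-soaked neon streets"))
```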
Generate and manage provider-backed video renders for short-form production. Use this when approved upstream assets or prompt plans already exist and you need local render manifests, downloaded video files, and replaceable routes for talking-head or Seedance generation without losing continuity across concepts and personas.
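One way such a local render manifest could look; the schema below is an assumption for illustration, not this skill's actual format:

```python
import json
from pathlib import Path

# Hypothetical manifest schema: all field names are assumptions.
manifest = {
    "concept": "spring-launch-v2",
    "persona": "casual-reviewer",
    "provider": "seedance",
    "segments": [
        {"route": "hook", "status": "rendered", "local_path": "renders/hook.mp4"},
        {"route": "body", "status": "pending", "local_path": None},
    ],
}

# Persisting segment state locally keeps individual routes replaceable
# across re-renders without losing concept/persona continuity.
Path("renders").mkdir(exist_ok=True)
Path("renders/manifest.json").write_text(json.dumps(manifest, indent=2))
```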
Build explicit learn/do-not-copy contracts for image and video generation references. Use this when a prompt uses benchmark videos, contact sheets, frames, or product images and you need to state exactly what the model should learn, what identity elements must change, and which references should be excluded from the first test.
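A sketch of what such a contract might look like as data, plus a helper that flattens it into prompt-ready text. The keys and example values are illustrative assumptions, not a format this skill prescribes:

```python
# Hypothetical contract structure: keys and values are illustrative.
reference_contract = {
    "references": ["benchmark_video.mp4", "contact_sheet.png"],
    "learn": [
        "pacing and shot rhythm",
        "lighting direction and color grade",
    ],
    "do_not_copy": [
        "actor identity, face, and voice",
        "brand marks visible in the benchmark",
    ],
    "exclude_from_first_test": ["contact_sheet.png"],
}

def render_contract(c: dict) -> str:
    """Flatten the contract into explicit instructions for the model."""
    return "\n".join([
        "LEARN: " + "; ".join(c["learn"]),
        "DO NOT COPY: " + "; ".join(c["do_not_copy"]),
        "EXCLUDED THIS PASS: " + ", ".join(c["exclude_from_first_test"]),
    ])
```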
Use when preparing, submitting, polling, or debugging Seedance 2.0 video generation jobs from product images, storyboard images, UGC scripts, voiceover copy, or promptPlan request JSON. Use for splitting scripts into render segments, uploading references, creating request JSON, submitting jobs through the hosted capability, polling predictions, and handing off local render paths.
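The submit-then-poll lifecycle this describes might look like the sketch below. The endpoint paths, payload fields, and status values are assumptions standing in for the hosted capability's real interface:

```python
import time
import requests

# Endpoint paths, payload fields, and status strings are assumptions;
# substitute the hosted capability's actual interface.
BASE = "https://api.example.com/seedance"

def submit_and_poll(request_json: dict, timeout_s: int = 600) -> str:
    job = requests.post(f"{BASE}/predictions", json=request_json, timeout=30).json()
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        pred = requests.get(f"{BASE}/predictions/{job['id']}", timeout=30).json()
        if pred["status"] == "succeeded":
            return pred["output_url"]   # hand off for local download
        if pred["status"] == "failed":
            raise RuntimeError(pred.get("error", "render failed"))
        time.sleep(10)                  # poll interval
    raise TimeoutError("prediction did not finish in time")
```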
Senior Specialist in Remotion v4.0+, React 19, and Next.js 16. Expert in programmatic video generation, sub-frame animation precision, and AI-driven video workflows for 2026.
Process multiple video generation requests efficiently with Kling AI. Use when generating multiple videos or building content pipelines. Trigger with phrases like 'klingai batch', 'kling ai bulk', 'multiple videos klingai', 'klingai parallel generation'.
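A minimal concurrency sketch for this kind of batch processing. `generate_one` is a hypothetical stand-in for a single Kling AI render; check the provider's real rate limits before choosing `max_workers`:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def generate_one(prompt: str) -> str:
    ...  # hypothetical: submit one Kling job, poll, return local path

def generate_batch(prompts: list[str], max_workers: int = 4) -> dict[str, str]:
    """Run several renders concurrently, keyed by prompt."""
    results: dict[str, str] = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(generate_one, p): p for p in prompts}
        for fut in as_completed(futures):
            results[futures[fut]] = fut.result()
    return results
```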
Create realistic long-form AI talking head/UGC videos that don't look AI-generated. Use when user wants to make "realistic AI video", "UGC video", "talking head video", "AI spokesperson", "AI ad content", "video that looks real", "human-looking AI video". Orchestrates Nano Banana for base images and Kling AI for video generation with natural pacing.
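The orchestration might be shaped like the pipeline below. Every helper is a hypothetical stand-in for the real Nano Banana and Kling calls, not an actual SDK surface:

```python
# Pipeline sketch: all helpers are hypothetical stand-ins.
def nano_banana_image(persona_prompt: str) -> str:
    ...  # generate the base talking-head still, return local path

def kling_image_to_video(image_path: str, line: str) -> str:
    ...  # animate the still for one script line, return clip path

def concat_clips(clips: list[str]) -> str:
    ...  # stitch clips (e.g. with ffmpeg), return final path

def make_talking_head(script: str, persona_prompt: str) -> str:
    base = nano_banana_image(persona_prompt)
    # Splitting on sentences gives natural pause points; perfectly
    # uniform cuts are a common tell of AI-generated footage.
    lines = [s.strip() for s in script.split(".") if s.strip()]
    return concat_clips([kling_image_to_video(base, ln) for ln in lines])
```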
Generate videos using ByteDance's Seedance model. It supports text-to-video and image-to-video generation and calls the API through the volcengine-ark SDK. Use this skill when users need to generate videos, create video content, or produce videos from text or images.
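A hedged sketch of a text-to-video call through the Ark SDK's content-generation tasks interface. The model ID and the result fields are from memory and should be treated as assumptions; verify them against the current volcengine-ark documentation:

```python
import os
import time
from volcenginesdkarkruntime import Ark

client = Ark(api_key=os.environ["ARK_API_KEY"])

# Model ID and result field names below are assumptions from memory;
# verify against the current volcengine-ark docs before relying on them.
task = client.content_generation.tasks.create(
    model="doubao-seedance-1-0-pro-250528",  # assumed model ID
    content=[{"type": "text", "text": "a koi pond at dawn, gentle ripples"}],
)

while True:
    result = client.content_generation.tasks.get(task_id=task.id)
    if result.status == "succeeded":
        print(result.content.video_url)       # assumed result field
        break
    if result.status == "failed":
        raise RuntimeError(result.error)
    time.sleep(10)
```

For image-to-video, the `content` list would additionally carry an image reference alongside the text prompt.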
Generate AI videos through the 兔子API (Tuzi API). Supports models such as Veo, Sora, Kling, and Seedance, in both single-video and long-video (multi-segment composition) modes. Use when the user asks to generate a video, create a video, or needs a video generation backend.
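The long-video mode presumably renders segments and stitches them. Below, `render_segment` is a hypothetical stand-in for one 兔子API render call; the ffmpeg concat step is standard and assumes all segments share the same codec and resolution:

```python
import subprocess
from pathlib import Path

def render_segment(prompt: str, index: int) -> Path:
    ...  # hypothetical: submit one segment, poll, download, return path

def compose_long_video(prompts: list[str], out: str = "long.mp4") -> str:
    paths = [render_segment(p, i) for i, p in enumerate(prompts)]
    listing = Path("segments.txt")
    listing.write_text("".join(f"file '{p}'\n" for p in paths))
    # Stream-copy concat: fast, but requires uniform codec/resolution.
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
         "-i", str(listing), "-c", "copy", out],
        check=True,
    )
    return out
```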
Call the RawUGC API to generate AI videos/images/music, manage content (personas, products, styles, characters), schedule social media posts, research TikTok content, and analyze viral videos. Use when the user wants to interact with any RawUGC API endpoint.
(project - Skill) Generate AI videos using the Volcengine Jimeng Video 3.0 Pro API. Use when users request video generation from text prompts or images, including text-to-video, image-to-video, or any AI-powered video creation. Triggers include "generate video", "create video", "AI video", "Jimeng video", "text to video", "image to video", or any request involving AI-powered video generation from descriptions.
Create explainer videos with narration and AI-generated visuals. Triggers on: "解说视频", "explainer video", "explain this as a video", "tutorial video", "introduce X (video)", "解释一下XX(视频形式)".
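One plausible shape for this pipeline: narrate each script line, generate a matching visual, and mux the pair into a clip. The TTS and image helpers are hypothetical stand-ins; the ffmpeg invocation (loop a still for the duration of its audio) is standard:

```python
import subprocess

def synthesize_narration(text: str, out: str) -> str:
    ...  # hypothetical: TTS provider call, returns audio path

def generate_visual(prompt: str, out: str) -> str:
    ...  # hypothetical: image model call, returns image path

def build_explainer(script_lines: list[str]) -> list[str]:
    clips = []
    for i, line in enumerate(script_lines):
        audio = synthesize_narration(line, f"nar_{i}.mp3")
        image = generate_visual(line, f"vis_{i}.png")
        clip = f"clip_{i}.mp4"
        # Loop the still image for exactly the narration's duration.
        subprocess.run(
            ["ffmpeg", "-y", "-loop", "1", "-i", image, "-i", audio,
             "-shortest", "-c:v", "libx264", "-pix_fmt", "yuv420p", clip],
            check=True,
        )
        clips.append(clip)
    return clips  # concatenate afterwards, e.g. via ffmpeg's concat demuxer
```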