Found 82 Skills
Professional AI Video Storyboard Designer. Use this skill when users want to create videos, write storyboard scripts, generate AI video prompts, or plan video content structure. It covers all video types: short videos, commercials, educational content, brand videos, vlogs, micro-films, and more. Trigger this skill even if users only say "Help me make a video" or "I want to create content on the theme of X". It outputs professional storyboard designs plus prompts that can be used directly in mainstream AI video tools such as Seedance 2.0 (Jimeng), Sora, Kling, Runway, and Veo; for Seedance 2.0 it additionally supports specialized output using the multimodal @ reference syntax.
Animate a single image into a video using fal.ai Veo 3.1. Use when the user wants to create a video from a still image, animate a photo, or bring an image to life. Supports up to 8 seconds of video with optional audio.
UGC video format templates for mobile app brands. Contains 15 TikTok-native, lo-fi video formats with shot-by-shot structure and AI video generation prompts. Focuses on the person and their physical relationship with their phone — reactions, rituals, and real moments — never screen content or app UI. Use when the user wants to create mobile app UGC content.
UGC lifestyle b-roll video templates for brands. Contains 20 TikTok-native, lo-fi b-roll formats with shot-by-shot structure and AI video generation prompts. Use when the user wants to create lifestyle b-roll content — aesthetic scene-setting, product-in-context shots, mood pieces, or ambient footage for ads and social.
Generate AI videos using the Pollo AI API. Supports 13 leading models (Kling, Sora, Runway, Veo, Pixverse, Hailuo, Vidu, Luma, Pika, Wan, Seedance, Hunyuan, Pollo) with 50+ versions, plus task polling, credit cost estimation, and credit balance checks. Use this skill whenever the user wants to generate an AI video from text or an image, use any AI video model, check Pollo credits, or mentions Pollo AI, pollo.ai, or any of the supported model names. Use it even if the user just says "generate a video" or "make me a short clip" without mentioning Pollo.
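The task-polling workflow mentioned above can be sketched as a generic helper. This is a minimal sketch for illustration only: the status values ("processing", "succeeded", "failed") and the response shape are assumptions, not the documented Pollo AI API.

```python
import time


def poll_until_done(get_status, interval=2.0, timeout=300.0, sleep=time.sleep):
    """Repeatedly call `get_status` until the task finishes or times out.

    `get_status` is any callable returning a dict such as
    {"status": "processing"} or {"status": "succeeded", "video_url": ...}.
    The field names here are hypothetical, not Pollo's documented schema.
    """
    waited = 0.0
    while waited <= timeout:
        task = get_status()
        status = task.get("status")
        if status == "succeeded":
            return task                       # finished: caller reads video_url etc.
        if status == "failed":
            raise RuntimeError(f"video task failed: {task}")
        sleep(interval)                       # still processing: wait and retry
        waited += interval
    raise TimeoutError("video task did not finish within the timeout")
```

In practice `get_status` would wrap an HTTP request to the task-status endpoint; injecting it as a callable keeps the polling loop testable without network access.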
Add or remove watermarks from videos using each::sense AI. Add logo watermarks, text overlays, transparent watermarks, animated watermarks, and remove unwanted watermarks from TikTok, stock footage, and other sources.
AI-powered green screen keyer that unmixes foreground colors and generates clean linear alpha channels using neural networks.
This skill applies when OpenStoryline is installed and the user needs to start local MCP/Web services, create or continue a session, send editing instructions, perform multi-round re-editing, verify rendered video outputs, or makes Chinese requests such as "启动 OpenStoryline" ("start OpenStoryline"), "把 OpenStoryline 跑起来" ("get OpenStoryline running"), or "用 OpenStoryline 剪视频" ("edit a video with OpenStoryline").
Extract highlights, best moments, and key clips from long videos using each::sense AI. Perfect for gaming highlights, sports clips, podcast moments, webinar summaries, meeting recaps, and auto-trailer generation.
Use when Alibaba Cloud Model Studio Wan video editing models are needed for style transfer, keyframe-controlled editing, or animation remix workflows.
Use when generating dance or motion-transfer videos with Alibaba Cloud Model Studio AnimateAnyone (`animate-anyone-gen2`) from a detected character image and an action template, or when cloning motion from a dance/action video onto a target character image.
Creates professional AI image/video prompts with a photographer's and cinematographer's eye. Specializes in composition, lighting, color grading, and storytelling. Use when generating AI images/videos with artistic vision, working with models like Nano Banana Pro, Qwen, Sora2, Wan 2.2. For graphic design work (thumbnails, banners, layouts), use /graphic-designer instead.