Found 76 Skills
Generate audio visualization videos using each::sense AI. Create waveforms, spectrum analyzers, particle effects, 3D visualizations, and beat-synced animations from audio files.
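The each::sense API itself is not documented in this entry; as a rough illustration of the raw data a waveform visualizer consumes, here is a minimal pure-Python sketch that reduces audio samples to one peak amplitude per video frame (the synthetic 440 Hz tone stands in for a real audio file, and the frame rate is an assumed parameter):

```python
import math

def waveform_peaks(samples, sample_rate, fps=30):
    """Peak amplitude per video frame -- the values a bar/waveform renderer draws."""
    frame_len = sample_rate // fps
    n_frames = len(samples) // frame_len
    return [
        max(abs(s) for s in samples[i * frame_len:(i + 1) * frame_len])
        for i in range(n_frames)
    ]

# Synthetic 1-second 440 Hz tone at 0.5 amplitude, in place of decoded audio.
sr = 44_100
tone = [0.5 * math.sin(2 * math.pi * 440 * i / sr) for i in range(sr)]
peaks = waveform_peaks(tone, sr, fps=30)
print(len(peaks))  # one peak value per frame of a 30 fps, 1-second video
```

A real pipeline would decode the audio file first and map each peak value to bar heights or particle intensities per frame.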
Generate AI videos using the Pollo AI API. Supports 13 leading models (Kling, Sora, Runway, Veo, Pixverse, Hailuo, Vidu, Luma, Pika, Wan, Seedance, Hunyuan, Pollo) with 50+ versions. It also supports task polling, credit cost estimation, and credit balance checks. Use this skill whenever the user wants to generate an AI video from text or image, use any AI video model, check Pollo credits, or mentions Pollo AI, pollo.ai, or any of the supported model names. Even if the user just says "generate a video" or "make me a short clip" without mentioning Pollo, this skill should be used.
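Pollo AI's actual endpoints and response schema are not reproduced in this entry; as a hedged sketch of the task-polling pattern the description mentions, here is a generic poll-until-done loop where `fetch_status` stands in for a real status request and the `"processing"`/`"succeeded"`/`"failed"` states are hypothetical:

```python
import time

def poll_task(fetch_status, task_id, interval=2.0, timeout=600.0):
    """Call fetch_status(task_id) until it reports a terminal state
    ('succeeded' or 'failed') or the timeout elapses.
    fetch_status stands in for a real API call, e.g. a GET on a task endpoint."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status(task_id)
        if status["state"] in ("succeeded", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")

# Demo with a stubbed status function that succeeds on the third poll.
calls = {"n": 0}
def fake_status(task_id):
    calls["n"] += 1
    if calls["n"] < 3:
        return {"state": "processing"}
    return {"state": "succeeded", "video_url": "https://example.com/out.mp4"}

result = poll_task(fake_status, "task-123", interval=0.01)
print(result["state"])
```

The same loop works for any of the listed models, since video generation is typically asynchronous: submit a job, receive a task ID, then poll until the rendered clip's URL is available.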
End-to-end AI video generation - create videos from text prompts using image generation, video synthesis, voice-over, and editing. Supports OpenAI DALL-E, Replicate models, LumaAI, Runway, and FFmpeg editing.
Use when the user wants to generate a video, create a short film, or view the available video styles. Triggers: short film, make video, shoot a short, AI video, generate video from story, short drama, narration video, cinematic video, available video styles.
AI video generation with LTX-2.3 22B — generate text-to-video and image-to-video clips for video production. Use when generating video clips, animating images, or creating b-roll, animated backgrounds, or motion content. Triggers include video generation, animate image, b-roll, motion, video clip, text-to-video, image-to-video.
Use when generating template-driven emoji videos with Alibaba Cloud Model Studio Emoji (`emoji-v1`) from a detected portrait image. Use when producing fixed-style meme or emoji motion clips from a single face image and a selected template ID.
Use when generating lightweight talking-head portrait videos with Alibaba Cloud Model Studio LivePortrait (`liveportrait`) from a detected portrait image and speech audio. Use when you need long-form or simple broadcast-style portrait animation beyond what typical short expressive models provide.
Use when generating dance or motion-transfer videos with Alibaba Cloud Model Studio AnimateAnyone (`animate-anyone-gen2`) using a detected character image and an action template. Use when cloning motion from a dance/action video into a target character image.
Professional Specifications for Storyboard Decomposition
Generate AI videos using the varg SDK React engine. Use when creating videos, animations, talking characters, slideshows, or social media content.
Provides comprehensive guidance for Runway ML including AI video generation, image editing, and creative AI tools. Use when the user asks about Runway ML, needs to generate AI videos, edit images with AI, or work with creative AI tools.
Generate AI videos using Kling video generation models. Use when you need to: (1) create videos from text prompts, (2) animate images into videos, (3) transform existing videos with AI, or (4) create AI avatar videos with speech.