Found 51 Skills
Use jimeng-mcp-server for AI image and video generation. Use this skill when users request to generate images from text, synthesize multiple images, create videos from text descriptions, or add animations to static images. Supports four core capabilities: text-to-image, image synthesis, text-to-video, and image-to-video. Requires jimeng-mcp-server running locally or reachable via SSE/HTTP.
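Since jimeng-mcp-server speaks the MCP protocol, a client invokes its capabilities through JSON-RPC 2.0 `tools/call` requests. A minimal sketch of building such a request — the tool name `text_to_image` and the `prompt` argument are assumptions for illustration; check the server's actual tool listing for the real schema:

```python
import json

def build_mcp_call(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 tools/call request as used by the MCP protocol.

    The tool and argument names passed in are caller assumptions; list the
    server's tools first to discover the real names and schemas.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical text-to-image call; "text_to_image" and "prompt" are assumed names.
payload = build_mcp_call("text_to_image", {"prompt": "a watercolor fox at dawn"})
```

Over SSE/HTTP transports the same JSON body is sent as the request payload; only the framing differs.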
Manga-style video generator, designed to produce animated videos in manga aesthetics such as Japanese healing style, Chinese ink-wash style, and American cartoon style. Ships with 8 built-in manga style templates, supports image-to-video generation, and creates high-quality manga animations in one click. Use this skill when you need to generate videos in manga, anime, or hand-drawn styles.
Generate videos using ComfyUI with Wan 2.2, FramePack, or AnimateDiff. Handles image-to-video, text-to-video, talking heads, and motion-controlled animation. Use when creating any video content from character images or text descriptions.
Expert Cinema Director skill for Seedance 2.0 (ByteDance) — high-fidelity video generation using technical camera grammar and multimodal references. Supports text-to-video, image-to-video, and video extension.
Generate videos from text prompts or animate static images using ModelsLab's v7 Video Fusion API. Supports text-to-video, image-to-video, video-to-video, lip-sync, and motion control with 40+ models including Seedance, Wan, Veo, Sora, Kling, and Hailuo.
Generate images and videos via Higgsfield AI through 30+ models including Nano Banana 2, Soul V2, Veo 3.1, Kling 3.0, Seedance 2.0, Flux 2, GPT Image 2, plus Marketing Studio for branded ad video/image with curated avatars and imported products. Use when: "generate an image", "make a picture", "create artwork", "make a video", "animate this photo", "image-to-video", "img2vid", "edit this image with AI", "stylize a photo", "remix this image", "produce a clip", "render a scene", "create an ad", "make a UGC video", "generate marketing video", "make a product demo", "create unboxing", "TV spot", "virtual try-on", "product showcase", "brand video", "presenter video for product", "import product from URL", "create avatar for ad". Supports text-to-image, image-to-image, image-to-video, reference-based generation, and Marketing Studio (avatars + products + ad modes). Auto-detects whether passed IDs are uploads or previous jobs. Chain with higgsfield-soul-id when the user wants their face in the output. NOT for: training Soul Character (use higgsfield-soul-id), professional product photoshoots with mode-specific prompt enhancement (use higgsfield-product-photoshoot), text-only / chat / TTS tasks.
Generate and edit videos with Alibaba HappyHorse 1.0 models via inference.sh CLI. Models: HappyHorse T2V, I2V, R2V, Video Edit. Capabilities: text-to-video, image-to-video, reference-to-video, video editing with natural language, character preservation, 720P/1080P, up to 15 seconds. Use for: physically realistic video, video editing, character-consistent content, product demos, social media. Triggers: happyhorse, happy horse, alibaba video, happyhorse 1.0, dashscope video, alibaba happyhorse, video editing ai, ai video editor
Create and edit videos using Google's Veo 2 and Veo 3 models. Supports Text-to-Video, Image-to-Video, Reference-to-Video, Inpainting, and Video Extension. Available parameters: prompt, image, mask, mode, duration, aspect-ratio. Always confirm parameters with the user or explicitly state defaults before running.
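The listed parameters (prompt, image, mask, mode, duration, aspect-ratio) plus the rule about stating defaults suggest a resolve-then-confirm step before any job runs. A minimal sketch, assuming illustrative default values that are not taken from the Veo documentation:

```python
# Assumed defaults for illustration only; surface these to the user
# (or confirm their overrides) before actually running a generation job.
VEO_DEFAULTS = {"mode": "text-to-video", "duration": 8, "aspect-ratio": "16:9"}

def resolve_params(user_params: dict) -> dict:
    """Merge user-supplied Veo parameters over explicit defaults.

    Raises if neither a prompt nor an input image is given, since every
    listed mode needs at least one of the two.
    """
    params = {**VEO_DEFAULTS, **user_params}
    if "prompt" not in params and "image" not in params:
        raise ValueError("Veo needs at least a prompt or an input image")
    return params

job = resolve_params({"prompt": "timelapse of a glacier calving", "duration": 5})
```

The merged dict makes every effective value explicit, which is exactly what the skill asks you to report back before running.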
Help users integrate Runway video generation APIs (text-to-video, image-to-video, video-to-video)
Generate multiple coherent images and videos in batches from a user's creative or story idea, and present them as a professional storyboard. Supports advanced features such as regenerating a single shot, image-to-video conversion, and generating video from first and last frames. Suited to short-video scripts, animation storyboards, advertising concepts, and story visualization.
Use this skill for AI video generation. Triggers include: "generate video", "create video", "make video", "animate", "text to video", "video from image", "video of", "animate image", "bring to life", "make it move", "add motion", "video with audio", "video with dialogue". Supports text-to-video, image-to-video, and video with dialogue/audio using Google Veo 3.1 (default) or OpenAI Sora.
Create and edit videos using Google's Veo 2 and Veo 3 models. Supports Text-to-Video, Image-to-Video, Inpainting, and Advanced Controls.