modelslab-video-generation
ModelsLab Video Generation
Generate AI videos from text descriptions, animate static images, or transform existing videos using state-of-the-art video generation models.
When to Use This Skill
- Generate videos from text descriptions
- Animate static images
- Transform existing videos (video-to-video)
- Lip-sync audio to video
- Apply motion control from reference videos
- Create short-form content
- Build video marketing materials
Available APIs (v7)
Video Fusion Endpoints
- Text to Video: POST https://modelslab.com/api/v7/video-fusion/text-to-video
- Image to Video: POST https://modelslab.com/api/v7/video-fusion/image-to-video
- Video to Video: POST https://modelslab.com/api/v7/video-fusion/video-to-video
- Lip Sync: POST https://modelslab.com/api/v7/video-fusion/lip-sync
- Motion Control: POST https://modelslab.com/api/v7/video-fusion/motion-control
- Fetch Result: POST https://modelslab.com/api/v7/video-fusion/fetch/{id}
Note: v6 endpoints (e.g. /api/v6/video/text2video) still work, but v7 is the current version.
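All of these endpoints share a common contract: a JSON body containing your API key and a model_id, POSTed to a task-specific path. As a minimal sketch, a small helper can build the URL for each task; the task labels below are local to this helper, not API parameters.

```python
BASE_URL = "https://modelslab.com/api/v7/video-fusion"

# Task label -> v7 Video Fusion path segment (labels are local to this helper)
ENDPOINTS = {
    "text-to-video": "text-to-video",
    "image-to-video": "image-to-video",
    "video-to-video": "video-to-video",
    "lip-sync": "lip-sync",
    "motion-control": "motion-control",
}


def endpoint_for(task):
    """Return the full v7 endpoint URL for a task label."""
    if task not in ENDPOINTS:
        raise ValueError(f"Unknown task: {task}")
    return f"{BASE_URL}/{ENDPOINTS[task]}"


def fetch_url(request_id):
    """Return the fetch endpoint used to poll a queued request."""
    return f"{BASE_URL}/fetch/{request_id}"
```

Every generation function in this skill boils down to POSTing a payload to `endpoint_for(task)` and, if the response is still processing, polling `fetch_url(request_id)`.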
Discovering Video Models
```bash
# Search all video models
modelslab models search --feature video_fusion

# Search by name
modelslab models search --search "seedance"
modelslab models search --search "wan"
modelslab models search --search "veo"

# Get model details
modelslab models detail --id seedance-t2v
```

Text to Video
```python
import requests
import time


def generate_video(prompt, api_key, model_id="seedance-t2v"):
    """Generate a video from a text prompt.

    Args:
        prompt: Text description of the video
        api_key: Your ModelsLab API key
        model_id: Video model to use
    """
    response = requests.post(
        "https://modelslab.com/api/v7/video-fusion/text-to-video",
        json={
            "key": api_key,
            "model_id": model_id,
            "prompt": prompt,
            "negative_prompt": "low quality, blurry, static, distorted"
        }
    )
    data = response.json()
    if data["status"] == "error":
        raise Exception(f"Error: {data['message']}")
    if data["status"] == "success":
        return data["output"][0]
    # Video generation is async - poll for results
    request_id = data["id"]
    print(f"Video processing... Request ID: {request_id}")
    print(f"Estimated time: {data.get('eta', 'unknown')} seconds")
    return poll_video_result(request_id, api_key)


def poll_video_result(request_id, api_key, timeout=600):
    """Poll for video generation results."""
    start_time = time.time()
    while time.time() - start_time < timeout:
        fetch = requests.post(
            f"https://modelslab.com/api/v7/video-fusion/fetch/{request_id}",
            json={"key": api_key}
        )
        result = fetch.json()
        if result["status"] == "success":
            return result["output"][0]
        elif result["status"] == "failed":
            raise Exception(result.get("message", "Generation failed"))
        print(f"Status: processing... ({int(time.time() - start_time)}s elapsed)")
        time.sleep(10)
    raise Exception("Timeout waiting for video generation")
```
Usage
```python
video_url = generate_video(
    "A spaceship flying through an asteroid field, cinematic, 4K",
    "your_api_key",
    model_id="seedance-t2v"
)
print(f"Video ready: {video_url}")
```
Image to Video (Animate Images)
```python
def animate_image(image_url, prompt, api_key, model_id="seedance-i2v"):
    """Animate a static image based on a motion prompt.

    Args:
        image_url: URL of the image to animate
        prompt: Description of desired motion/animation
        model_id: Video model for image-to-video
    """
    response = requests.post(
        "https://modelslab.com/api/v7/video-fusion/image-to-video",
        json={
            "key": api_key,
            "model_id": model_id,
            "init_image": [image_url],  # v7 expects an array
            "prompt": prompt,
            "negative_prompt": "static, still, low quality, blurry"
        }
    )
    data = response.json()
    if data["status"] == "success":
        return data["output"][0]
    elif data["status"] == "processing":
        return poll_video_result(data["id"], api_key)
    else:
        raise Exception(data.get("message", "Unknown error"))
```
```python
# Animate a landscape
video = animate_image(
    "https://example.com/landscape.jpg",
    "The clouds moving slowly across the sky, birds flying in the distance",
    "your_api_key",
    model_id="seedance-i2v"
)
print(f"Animated video: {video}")
```
Video to Video
```python
def transform_video(video_url, prompt, api_key, model_id="wan2.1"):
    """Transform an existing video with a new style or content.

    Args:
        video_url: URL of the source video
        prompt: Description of desired transformation
    """
    response = requests.post(
        "https://modelslab.com/api/v7/video-fusion/video-to-video",
        json={
            "key": api_key,
            "model_id": model_id,
            "init_video": [video_url],  # v7 expects an array
            "prompt": prompt
        }
    )
    data = response.json()
    if data["status"] == "processing":
        return poll_video_result(data["id"], api_key)
    elif data["status"] == "success":
        return data["output"][0]
    else:
        raise Exception(data.get("message", "Unknown error"))
```
Lip Sync
```python
def lip_sync(video_url, audio_url, api_key, model_id="lipsync-2"):
    """Sync lip movements to audio.

    Args:
        video_url: URL of the video with a face
        audio_url: URL of the audio to sync to
    """
    response = requests.post(
        "https://modelslab.com/api/v7/video-fusion/lip-sync",
        json={
            "key": api_key,
            "model_id": model_id,
            "init_video": video_url,
            "init_audio": audio_url
        }
    )
    data = response.json()
    if data["status"] == "processing":
        return poll_video_result(data["id"], api_key)
    elif data["status"] == "success":
        return data["output"][0]
    else:
        raise Exception(data.get("message", "Unknown error"))
```
Popular Video Model IDs
Text to Video
- seedance-t2v - Seedance text-to-video (BytePlus)
- seedance-1.0-pro-fast-t2v - Seedance Pro Fast
- wan2.6-t2v - Wan 2.6 text-to-video (Alibaba)
- wan2.1 - Wan 2.1 (ModelsLab in-house)
- veo2 - Google Veo 2
- veo3 - Google Veo 3
- sora-2 - OpenAI Sora 2
- Hailuo-2.3-t2v - Hailuo 2.3 (MiniMax)
- kling-v2-5-turbo-t2v - Kling V2.5 Turbo
Image to Video
- seedance-i2v - Seedance image-to-video
- seedance-1.0-pro-i2v - Seedance Pro
- wan2.6-i2v - Wan 2.6 image-to-video
- Hailuo-2.3-i2v - Hailuo 2.3
- kling-v2-1-i2v - Kling V2.1
Lip Sync
- lipsync-2 - Sync Labs Lipsync 2
Motion Control
- kling-motion-control - Kling Motion Control
- omni-human - OmniHuman (BytePlus)

Browse all models: https://modelslab.com/models
Key Parameters
| Parameter | Description | Recommended Values |
|---|---|---|
| model_id | Video generation model (required) | See model tables above |
| prompt | Text description of video content | Be specific about motion and scene |
| negative_prompt | What to avoid | "static, low quality, blurry" |
| init_image | Source image for i2v (array) | |
| init_video | Source video for v2v (array) | |
| init_audio | Audio for lip-sync video | URL string |
| width / height | Video dimensions (512-1024) | 512, 768, 1024 |
| duration | Video length in seconds | 4-30 |
| aspect_ratio | Aspect ratio | "16:9", "9:16", "1:1" |
| webhook | Async notification URL | URL string |
| track_id | Custom tracking identifier | Any string |
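Putting the parameters together, a full text-to-video request body might look like the sketch below. Note that `width`, `height`, `duration`, and `aspect_ratio` are assumed parameter names inferred from the table; verify them against the API documentation for your chosen model.

```python
# Sketch of a complete text-to-video payload. width/height, duration, and
# aspect_ratio are assumed parameter names -- check the API docs before use.
payload = {
    "key": "your_api_key",
    "model_id": "seedance-t2v",
    "prompt": "A drone shot over a coastline at sunset, waves rolling in",
    "negative_prompt": "static, low quality, blurry",
    "width": 1024,
    "height": 576,
    "duration": 8,            # seconds, typically 4-30
    "aspect_ratio": "16:9",
    "webhook": "https://yourserver.com/webhook/video",
    "track_id": "video_001",
}
```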
Best Practices
1. Write Motion-Focused Prompts
Bad: "A cat"
Good: "A cat walking through a garden, looking around curiously, sunlight filtering through trees"

Include: Action, movement, camera motion, atmosphere

2. Set Realistic Expectations
- Videos are 4-30 seconds typically
- Generation takes 30 seconds to several minutes depending on model
- Best for short clips, not full productions
3. Handle Async Operations
```python
# Video generation is ALWAYS async
# Always implement polling or use webhooks
if data["status"] == "processing":
    video = poll_video_result(data["id"], api_key)
```

4. Use Webhooks
```python
payload = {
    "key": api_key,
    "model_id": "seedance-t2v",
    "prompt": "...",
    "webhook": "https://yourserver.com/webhook/video",
    "track_id": "video_001"
}
```
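On the receiving side, your server gets a POST when the job finishes. Assuming the callback body mirrors the fetch response shape (status, output, and your track_id; this is an assumption worth verifying against the modelslab-webhooks skill), a handler only needs to check the status and pull the first output URL:

```python
def handle_video_webhook(body):
    """Process a webhook callback body; return the video URL if ready.

    Assumes the callback mirrors the fetch response shape:
    {"status": ..., "output": [...], "track_id": ...} -- verify against
    the webhooks documentation.
    """
    if body.get("status") == "success" and body.get("output"):
        return body["output"][0]
    if body.get("status") == "failed":
        raise RuntimeError(body.get("message", "Generation failed"))
    return None  # still processing, or an unrecognized status


# Example callback body (shape assumed, for illustration only):
example_body = {
    "status": "success",
    "output": ["https://cdn.example.com/video_001.mp4"],
    "track_id": "video_001",
}
```

Use `track_id` to correlate the callback with the request you queued, since callbacks can arrive in any order.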
Error Handling
```python
try:
    video = generate_video(prompt, api_key, model_id="seedance-t2v")
    print(f"Video generated: {video}")
except Exception as e:
    print(f"Video generation failed: {e}")
```
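Transient failures (rate limits, queue hiccups) are common with long-running generation jobs, so it can also help to wrap the call in a simple retry with exponential backoff. This is a generic sketch, not part of the ModelsLab API; the `sleep` parameter is injectable so the delays can be controlled or skipped in tests.

```python
import time


def with_retries(fn, attempts=3, base_delay=5.0, sleep=time.sleep):
    """Call fn(), retrying with exponential backoff on failure.

    Delays are base_delay * 2**attempt (5s, 10s, 20s, ... by default).
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; propagate the last error
            sleep(base_delay * (2 ** attempt))


# Usage (generate_video as defined above):
# video = with_retries(lambda: generate_video(prompt, api_key))
```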
Resources
- API Documentation: https://docs.modelslab.com/video-api/overview
- Model Browser: https://modelslab.com/models
- Model Selection Guide: https://docs.modelslab.com/guides/model-selection
- Get API Key: https://modelslab.com/dashboard
- Webhooks Guide: See the modelslab-webhooks skill
Related Skills
- modelslab-model-discovery - Find and filter models
- modelslab-image-generation - Generate images for img2video
- modelslab-audio-generation - Generate audio for lip-sync
- modelslab-chat-generation - Chat with LLM models
- modelslab-webhooks - Handle async operations efficiently