integrate-video
Integrate Video Generation
PREREQUISITE: Run `+check-compatibility` first. Run `+fetch-api-reference` to load the latest API reference before integrating. Requires `+setup-api-key` for API credentials. Requires `+integrate-uploads` when the user has local files to use as input.
Help users add Runway video generation to their server-side code.
Available Models
| Model | Best For | Input | Cost | Speed |
|---|---|---|---|---|
| `gen4.5` | Highest quality, general purpose | Text and/or Image | 12 credits/sec | Standard |
| `gen4_turbo` | Fast, image-driven | Image required | 5 credits/sec | Fast |
| `gen4_aleph` | Video editing/transformation | Video + Text/Image | 15 credits/sec | Standard |
| `veo3` | Premium Google model | Text/Image | 40 credits/sec | Standard |
| `veo3.1` | High quality Google model | Text/Image | 20-40 credits/sec | Standard |
| `veo3.1_fast` | Fast Google model | Text/Image | 10-15 credits/sec | Fast |
Model selection guidance:
- Default recommendation: `gen4.5` — best balance of quality and cost
- Budget-conscious: `gen4_turbo` (requires image) or `veo3.1_fast`
- Highest quality: `veo3` (most expensive)
- Video-to-video editing: `gen4_aleph` (only option)
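As a rough sketch, the guidance above could be encoded in a small helper. This function is hypothetical (not part of the Runway SDK); adapt the heuristics to your own needs:

```javascript
// Hypothetical helper encoding the model-selection guidance above.
// Not part of the Runway SDK; adjust the heuristics as needed.
function chooseModel({ editingVideo = false, hasImage = false, budget = false, maxQuality = false } = {}) {
  if (editingVideo) return 'gen4_aleph';                       // only video-to-video model
  if (maxQuality) return 'veo3';                               // most expensive
  if (budget) return hasImage ? 'gen4_turbo' : 'veo3.1_fast';  // gen4_turbo requires an image
  return 'gen4.5';                                             // default: best quality/cost balance
}
```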
Endpoints
Text-to-Video: POST /v1/text_to_video
Generate video from a text prompt only.

Compatible models: `gen4.5`, `veo3`, `veo3.1`, `veo3.1_fast`

```javascript
// Node.js SDK
import RunwayML from '@runwayml/sdk';

const client = new RunwayML();

const task = await client.textToVideo.create({
  model: 'gen4.5',
  promptText: 'A golden retriever running through a field of wildflowers at sunset',
  ratio: '1280:720',
  duration: 5
}).waitForTaskOutput();

// task.output is an array of signed URLs
const videoUrl = task.output[0];
```
Python SDK
```python
from runwayml import RunwayML

client = RunwayML()

task = client.text_to_video.create(
    model='gen4.5',
    prompt_text='A golden retriever running through a field of wildflowers at sunset',
    ratio='1280:720',
    duration=5
).wait_for_task_output()

video_url = task.output[0]
```
Image-to-Video: POST /v1/image_to_video

Animate a still image into a video.

Compatible models: `gen4.5`, `gen4_turbo`, `veo3`, `veo3.1`, `veo3.1_fast`

```javascript
// Node.js SDK
const task = await client.imageToVideo.create({
  model: 'gen4.5',
  promptImage: 'https://example.com/landscape.jpg',
  promptText: 'Camera slowly pans right revealing a mountain range',
  ratio: '1280:720',
  duration: 5
}).waitForTaskOutput();
```
Python SDK
```python
task = client.image_to_video.create(
    model='gen4.5',
    prompt_image='https://example.com/landscape.jpg',
    prompt_text='Camera slowly pans right revealing a mountain range',
    ratio='1280:720',
    duration=5
).wait_for_task_output()
```
**If the user has a local image file**, use `+integrate-uploads` first to upload it:
```javascript
// Upload local file first
import fs from 'fs';

const upload = await client.uploads.createEphemeral(
  fs.createReadStream('/path/to/image.jpg')
);

const task = await client.imageToVideo.create({
  model: 'gen4.5',
  promptImage: upload.runwayUri, // Use the runway:// URI
  promptText: 'The scene comes to life with gentle wind',
  ratio: '1280:720',
  duration: 5
}).waitForTaskOutput();
```
Video-to-Video: POST /v1/video_to_video

Transform an existing video with a text prompt and/or reference image.

Compatible models: `gen4_aleph`

```javascript
// Node.js SDK
const task = await client.videoToVideo.create({
  model: 'gen4_aleph',
  promptVideo: 'https://example.com/source.mp4',
  promptText: 'Transform into an animated cartoon style',
  ratio: '1280:720',
  duration: 5
}).waitForTaskOutput();
```
Character Performance: POST /v1/character_performance

Animate a character with facial/body performance.

Compatible models: `act_two`

```javascript
const task = await client.characterPerformance.create({
  model: 'act_two',
  promptImage: 'https://example.com/character.jpg',
  promptPerformance: 'https://example.com/performance.mp4',
  ratio: '1280:720',
  duration: 5
}).waitForTaskOutput();
```
Common Parameters
| Parameter | Type | Description |
|---|---|---|
| `model` | string | Model ID (required) |
| `promptText` | string | Text prompt describing the video |
| `promptImage` | string | URL, data URI, or `runway://` URI of the input image |
| `ratio` | string | Aspect ratio, e.g. `1280:720` |
| `duration` | number | Video length in seconds (2-10) |
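Before calling the API, a handler can reject obviously invalid input. A minimal sketch, checking only the constraints listed above (the API performs its own server-side validation, and this simple `W:H` pattern is looser than the actual set of supported ratios):

```javascript
// Minimal pre-flight validation of the common parameters.
// Returns an array of error messages (empty when the request looks valid).
function validateVideoRequest({ model, promptText, ratio, duration } = {}) {
  const errors = [];
  if (!model) errors.push('model is required');
  if (duration !== undefined && (duration < 2 || duration > 10)) {
    errors.push('duration must be between 2 and 10 seconds');
  }
  if (ratio !== undefined && !/^\d+:\d+$/.test(ratio)) {
    errors.push("ratio must look like '1280:720'");
  }
  return errors;
}
```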
Integration Pattern
When helping the user integrate, follow this pattern:
- Determine the use case — what type of video generation? (text-to-video, image-to-video, etc.)
- Check for local files — if the user has local images/videos, use `+integrate-uploads` first
- Select the model — recommend based on quality/cost/speed needs
- Write the server-side handler — create an API route or server function
- Handle the output — download and store the video, don't serve signed URLs to clients
- Add error handling — wrap in try/catch, handle `TaskFailedError`
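The error-handling step can be sketched as follows. `TaskFailedError` is the exception named above; a stand-in class is defined here so the example is self-contained, but in a real integration you would import it from the SDK:

```javascript
// Stand-in for the SDK's TaskFailedError, so this sketch runs on its own.
class TaskFailedError extends Error {}

// Wrap a generation call and distinguish "the task itself failed"
// from transport/auth errors, which are re-thrown.
async function generateOrReport(createTask) {
  try {
    const task = await createTask();
    return { ok: true, videoUrl: task.output[0] };
  } catch (err) {
    if (err instanceof TaskFailedError) {
      // The generation failed (e.g. invalid input); report it to the caller.
      return { ok: false, reason: 'task_failed', message: err.message };
    }
    throw err; // network/auth errors: let them propagate
  }
}
```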
Example: Express.js API Route
```javascript
import RunwayML from '@runwayml/sdk';
import express from 'express';

const client = new RunwayML();
const app = express();
app.use(express.json());

app.post('/api/generate-video', async (req, res) => {
  try {
    const { prompt, imageUrl, model = 'gen4.5', duration = 5 } = req.body;

    const params = {
      model,
      promptText: prompt,
      ratio: '1280:720',
      duration
    };

    let task;
    if (imageUrl) {
      task = await client.imageToVideo.create({
        ...params,
        promptImage: imageUrl
      }).waitForTaskOutput();
    } else {
      task = await client.textToVideo.create(params).waitForTaskOutput();
    }

    res.json({ videoUrl: task.output[0] });
  } catch (error) {
    console.error('Video generation failed:', error);
    res.status(500).json({ error: error.message });
  }
});
```
Example: Next.js API Route
```typescript
// app/api/generate-video/route.ts
import RunwayML from '@runwayml/sdk';
import { NextRequest, NextResponse } from 'next/server';

const client = new RunwayML();

export async function POST(request: NextRequest) {
  const { prompt, imageUrl } = await request.json();

  try {
    const task = imageUrl
      ? await client.imageToVideo.create({
          model: 'gen4.5',
          promptImage: imageUrl,
          promptText: prompt,
          ratio: '1280:720',
          duration: 5
        }).waitForTaskOutput()
      : await client.textToVideo.create({
          model: 'gen4.5',
          promptText: prompt,
          ratio: '1280:720',
          duration: 5
        }).waitForTaskOutput();

    return NextResponse.json({ videoUrl: task.output[0] });
  } catch (error) {
    return NextResponse.json(
      { error: error instanceof Error ? error.message : 'Generation failed' },
      { status: 500 }
    );
  }
}
```
Example: FastAPI Route
```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from runwayml import RunwayML

app = FastAPI()
client = RunwayML()

class VideoRequest(BaseModel):
    prompt: str
    image_url: str | None = None
    model: str = "gen4.5"
    duration: int = 5

@app.post("/api/generate-video")
async def generate_video(req: VideoRequest):
    try:
        if req.image_url:
            task = client.image_to_video.create(
                model=req.model,
                prompt_image=req.image_url,
                prompt_text=req.prompt,
                ratio="1280:720",
                duration=req.duration
            ).wait_for_task_output()
        else:
            task = client.text_to_video.create(
                model=req.model,
                prompt_text=req.prompt,
                ratio="1280:720",
                duration=req.duration
            ).wait_for_task_output()

        return {"video_url": task.output[0]}
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
```
Tips
- Output URLs expire in 24-48 hours. Download videos to your own storage (S3, GCS, local filesystem) immediately after generation.
- `gen4_turbo` requires an image — it cannot do text-only generation.
- `gen4_aleph` is the only video-to-video model — use it for editing/transforming existing videos.
- Duration range is 2-10 seconds. Longer videos require chaining multiple generations.
- `waitForTaskOutput()` has a default 10-minute timeout. For long-running generations, you may want to implement your own polling loop or increase the timeout.
- For local files, always use `+integrate-uploads` to upload first, then pass the `runway://` URI.
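A manual polling loop (as suggested in the timeout tip above) might look like this generic sketch. `getTask` stands in for however you re-fetch the task from the API, and the `SUCCEEDED`/`FAILED` status strings are assumptions to verify against the API reference:

```javascript
// Generic polling loop with its own deadline. `getTask` is any async function
// returning { status, output }, e.g. a thin wrapper around the SDK's
// task-retrieval call (check the SDK docs for the exact method).
async function pollTask(getTask, { intervalMs = 5000, timeoutMs = 30 * 60 * 1000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const task = await getTask();
    if (task.status === 'SUCCEEDED') return task.output;
    if (task.status === 'FAILED') throw new Error('Task failed');
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Timed out waiting for task');
}
```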