integrate-video

Integrate Video Generation


PREREQUISITE: Run +check-compatibility first. Run +fetch-api-reference to load the latest API reference before integrating. Requires +setup-api-key for API credentials. Requires +integrate-uploads when the user has local files to use as input.

Help users add Runway video generation to their server-side code.

Available Models


| Model | Best For | Input | Cost | Speed |
| --- | --- | --- | --- | --- |
| gen4.5 | Highest quality, general purpose | Text and/or Image | 12 credits/sec | Standard |
| gen4_turbo | Fast, image-driven | Image required | 5 credits/sec | Fast |
| gen4_aleph | Video editing/transformation | Video + Text/Image | 15 credits/sec | Standard |
| veo3 | Premium Google model | Text/Image | 40 credits/sec | Standard |
| veo3.1 | High quality Google model | Text/Image | 20-40 credits/sec | Standard |
| veo3.1_fast | Fast Google model | Text/Image | 10-15 credits/sec | Fast |
Model selection guidance:
  • Default recommendation: gen4.5 — best balance of quality and cost
  • Budget-conscious: gen4_turbo (requires image) or veo3.1_fast
  • Highest quality: veo3 (most expensive)
  • Video-to-video editing: gen4_aleph (only option)

Endpoints


Text-to-Video:
POST /v1/text_to_video


Generate video from a text prompt only.
Compatible models: gen4.5, veo3, veo3.1, veo3.1_fast

```javascript
// Node.js SDK
import RunwayML from '@runwayml/sdk';

const client = new RunwayML();

const task = await client.textToVideo.create({
  model: 'gen4.5',
  promptText: 'A golden retriever running through a field of wildflowers at sunset',
  ratio: '1280:720',
  duration: 5
}).waitForTaskOutput();

// task.output is an array of signed URLs
const videoUrl = task.output[0];
```

Python SDK

```python
from runwayml import RunwayML

client = RunwayML()

task = client.text_to_video.create(
    model='gen4.5',
    prompt_text='A golden retriever running through a field of wildflowers at sunset',
    ratio='1280:720',
    duration=5
).wait_for_task_output()

video_url = task.output[0]
```

Image-to-Video:
POST /v1/image_to_video


Animate a still image into a video.
Compatible models: gen4.5, gen4_turbo, veo3, veo3.1, veo3.1_fast

```javascript
// Node.js SDK
const task = await client.imageToVideo.create({
  model: 'gen4.5',
  promptImage: 'https://example.com/landscape.jpg',
  promptText: 'Camera slowly pans right revealing a mountain range',
  ratio: '1280:720',
  duration: 5
}).waitForTaskOutput();
```

Python SDK

```python
task = client.image_to_video.create(
    model='gen4.5',
    prompt_image='https://example.com/landscape.jpg',
    prompt_text='Camera slowly pans right revealing a mountain range',
    ratio='1280:720',
    duration=5
).wait_for_task_output()
```

**If the user has a local image file**, use `+integrate-uploads` first to upload it:

```javascript
// Upload local file first
import fs from 'fs';

const upload = await client.uploads.createEphemeral(
  fs.createReadStream('/path/to/image.jpg')
);

const task = await client.imageToVideo.create({
  model: 'gen4.5',
  promptImage: upload.runwayUri,  // Use the runway:// URI
  promptText: 'The scene comes to life with gentle wind',
  ratio: '1280:720',
  duration: 5
}).waitForTaskOutput();
```

Video-to-Video:
POST /v1/video_to_video


Transform an existing video with a text prompt and/or reference image.
Compatible models: gen4_aleph

```javascript
// Node.js SDK
const task = await client.videoToVideo.create({
  model: 'gen4_aleph',
  promptVideo: 'https://example.com/source.mp4',
  promptText: 'Transform into an animated cartoon style',
  ratio: '1280:720',
  duration: 5
}).waitForTaskOutput();
```
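No Python snippet accompanies this endpoint. Assuming the Python SDK exposes `client.video_to_video.create()` following the same snake_case pattern as `text_to_video` and `image_to_video` (an assumption to verify against the reference loaded by +fetch-api-reference), a sketch:

```python
# Hypothetical Python equivalent of the video-to-video call above.
# Assumes client.video_to_video.create() mirrors the snake_case
# pattern of text_to_video / image_to_video -- verify against the
# fetched API reference before relying on it.
def transform_video(client, source_url, prompt,
                    model='gen4_aleph', ratio='1280:720', duration=5):
    task = client.video_to_video.create(
        model=model,
        prompt_video=source_url,
        prompt_text=prompt,
        ratio=ratio,
        duration=duration,
    ).wait_for_task_output()
    # task.output is an array of signed URLs
    return task.output[0]
```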

Character Performance:
POST /v1/character_performance


Animate a character with facial/body performance.
Compatible models: act_two

```javascript
const task = await client.characterPerformance.create({
  model: 'act_two',
  promptImage: 'https://example.com/character.jpg',
  promptPerformance: 'https://example.com/performance.mp4',
  ratio: '1280:720',
  duration: 5
}).waitForTaskOutput();
```
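Likewise for Python: assuming `client.character_performance.create()` exists with snake_case parameters mirroring the Node.js call (verify against the fetched API reference), the equivalent would be:

```python
# Hypothetical Python equivalent of the Character Performance call.
# Assumes client.character_performance.create() mirrors the Node.js
# example with snake_case parameters -- verify against the fetched
# API reference.
def animate_character(client, character_image_url, performance_video_url):
    task = client.character_performance.create(
        model='act_two',
        prompt_image=character_image_url,
        prompt_performance=performance_video_url,
        ratio='1280:720',
        duration=5,
    ).wait_for_task_output()
    return task.output[0]
```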

Common Parameters


| Parameter | Type | Description |
| --- | --- | --- |
| model | string | Model ID (required) |
| promptText | string | Text prompt describing the video |
| promptImage | string | URL, data URI, or runway:// URI of input image |
| ratio | string | Aspect ratio, e.g. '1280:720', '720:1280' |
| duration | number | Video length in seconds (2-10) |

Integration Pattern


When helping the user integrate, follow this pattern:
  1. Determine the use case — What type of video generation? (text-to-video, image-to-video, etc.)
  2. Check for local files — If the user has local images/videos, use +integrate-uploads first
  3. Select the model — Recommend based on quality/cost/speed needs
  4. Write the server-side handler — Create an API route or server function
  5. Handle the output — Download and store the video, don't serve signed URLs to clients
  6. Add error handling — Wrap in try/catch, handle TaskFailedError

Example: Express.js API Route


```javascript
import RunwayML from '@runwayml/sdk';
import express from 'express';

const client = new RunwayML();
const app = express();
app.use(express.json());

app.post('/api/generate-video', async (req, res) => {
  try {
    const { prompt, imageUrl, model = 'gen4.5', duration = 5 } = req.body;

    const params = {
      model,
      promptText: prompt,
      ratio: '1280:720',
      duration
    };

    let task;
    if (imageUrl) {
      task = await client.imageToVideo.create({
        ...params,
        promptImage: imageUrl
      }).waitForTaskOutput();
    } else {
      task = await client.textToVideo.create(params).waitForTaskOutput();
    }

    res.json({ videoUrl: task.output[0] });
  } catch (error) {
    console.error('Video generation failed:', error);
    res.status(500).json({ error: error.message });
  }
});
```

Example: Next.js API Route


```typescript
// app/api/generate-video/route.ts
import RunwayML from '@runwayml/sdk';
import { NextRequest, NextResponse } from 'next/server';

const client = new RunwayML();

export async function POST(request: NextRequest) {
  const { prompt, imageUrl } = await request.json();

  try {
    const task = imageUrl
      ? await client.imageToVideo.create({
          model: 'gen4.5',
          promptImage: imageUrl,
          promptText: prompt,
          ratio: '1280:720',
          duration: 5
        }).waitForTaskOutput()
      : await client.textToVideo.create({
          model: 'gen4.5',
          promptText: prompt,
          ratio: '1280:720',
          duration: 5
        }).waitForTaskOutput();

    return NextResponse.json({ videoUrl: task.output[0] });
  } catch (error) {
    return NextResponse.json(
      { error: error instanceof Error ? error.message : 'Generation failed' },
      { status: 500 }
    );
  }
}
```

Example: FastAPI Route


```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from runwayml import RunwayML

app = FastAPI()
client = RunwayML()

class VideoRequest(BaseModel):
    prompt: str
    image_url: str | None = None
    model: str = "gen4.5"
    duration: int = 5

@app.post("/api/generate-video")
async def generate_video(req: VideoRequest):
    try:
        if req.image_url:
            task = client.image_to_video.create(
                model=req.model,
                prompt_image=req.image_url,
                prompt_text=req.prompt,
                ratio="1280:720",
                duration=req.duration
            ).wait_for_task_output()
        else:
            task = client.text_to_video.create(
                model=req.model,
                prompt_text=req.prompt,
                ratio="1280:720",
                duration=req.duration
            ).wait_for_task_output()

        return {"video_url": task.output[0]}
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
```

Tips


  • Output URLs expire in 24-48 hours. Download videos to your own storage (S3, GCS, local filesystem) immediately after generation.
  • gen4_turbo requires an image — it cannot do text-only generation.
  • gen4_aleph is the only video-to-video model — use it for editing/transforming existing videos.
  • Duration range is 2-10 seconds. Longer videos require chaining multiple generations.
  • waitForTaskOutput() has a default 10-minute timeout. For long-running generations, you may want to implement your own polling loop or increase the timeout.
  • For local files, always use +integrate-uploads to upload first, then pass the runway:// URI.
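A custom polling loop for long-running generations might look like the following. It assumes the SDK exposes a `client.tasks.retrieve()` method mirroring GET /v1/tasks/{id} and that terminal statuses are 'SUCCEEDED' / 'FAILED'; verify both against the API reference loaded by +fetch-api-reference.

```python
import time

def poll_task(client, task_id, interval=5, timeout=1800):
    """Poll a generation task until it reaches a terminal state.

    Assumes client.tasks.retrieve() mirrors GET /v1/tasks/{id} and
    that terminal statuses are 'SUCCEEDED' / 'FAILED' -- check the
    fetched API reference before relying on either assumption.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = client.tasks.retrieve(task_id)
        if task.status == 'SUCCEEDED':
            return task.output
        if task.status == 'FAILED':
            raise RuntimeError(f'Task {task_id} failed')
        time.sleep(interval)
    raise TimeoutError(f'Task {task_id} did not finish within {timeout}s')
```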