Integrate Audio Generation
PREREQUISITE: Run +check-compatibility first. Run +fetch-api-reference to load the latest API reference before integrating. Requires +setup-api-key for API credentials. Requires +integrate-uploads for local audio/video files.
Help users add Runway audio generation to their server-side code.
Available Models
| Model | Endpoint | Use Case | Cost |
|---|---|---|---|
| `eleven_multilingual_v2` | `text_to_speech` | Text to speech | 1 credit/50 chars |
| `eleven_text_to_sound_v2` | `sound_effect` | Sound effect generation | 1-2 credits |
| `eleven_voice_isolation` | `voice_isolation` | Isolate voice from audio | 1 credit/6 sec |
| `eleven_voice_dubbing` | `voice_dubbing` | Dub audio to other languages | 1 credit/2 sec |
| `eleven_multilingual_sts_v2` | `speech_to_speech` | Voice conversion | 1 credit/3 sec |
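To make the pricing concrete, here is a minimal sketch of a credit estimator based on the rates above, using the model identifiers that appear in the code examples. It is a hypothetical helper: it assumes partial units round up to whole credits (the docs don't specify rounding), and reports the flat 1-2 credit sound-effect range as its upper bound. Actual billing is determined by Runway.

```python
import math

def estimate_credits(model: str, text_chars: int = 0, audio_seconds: float = 0) -> int:
    """Rough per-request credit estimate from the pricing table (assumes round-up)."""
    if model == 'eleven_multilingual_v2':      # 1 credit / 50 chars
        return math.ceil(text_chars / 50)
    if model == 'eleven_text_to_sound_v2':     # flat 1-2 credits; upper bound
        return 2
    if model == 'eleven_voice_isolation':      # 1 credit / 6 sec
        return math.ceil(audio_seconds / 6)
    if model == 'eleven_voice_dubbing':        # 1 credit / 2 sec
        return math.ceil(audio_seconds / 2)
    if model == 'eleven_multilingual_sts_v2':  # 1 credit / 3 sec
        return math.ceil(audio_seconds / 3)
    raise ValueError(f'unknown model: {model}')
```

For example, a 120-character text-to-speech request would estimate to 3 credits, and dubbing 10 seconds of audio to 5 credits.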
Text-to-Speech
Generate speech from text using the ElevenLabs multilingual model.
Node.js SDK
```javascript
import RunwayML from '@runwayml/sdk';

const client = new RunwayML();

const task = await client.textToSpeech.create({
  model: 'eleven_multilingual_v2',
  text: 'Hello, welcome to our application!',
  voiceId: 'voice_id_here' // See voice listing endpoint
}).waitForTaskOutput();

const audioUrl = task.output[0];
```

Python SDK
```python
from runwayml import RunwayML

client = RunwayML()

task = client.text_to_speech.create(
    model='eleven_multilingual_v2',
    text='Hello, welcome to our application!',
    voice_id='voice_id_here'  # See voice listing endpoint
).wait_for_task_output()

audio_url = task.output[0]
```

Sound Effects
Generate sound effects from text descriptions.
```javascript
const task = await client.soundEffect.create({
  model: 'eleven_text_to_sound_v2',
  promptText: 'Thunder rolling across a stormy sky'
}).waitForTaskOutput();
```

```python
task = client.sound_effect.create(
    model='eleven_text_to_sound_v2',
    prompt_text='Thunder rolling across a stormy sky'
).wait_for_task_output()
```

Voice Isolation
Extract voice from audio with background noise.
```javascript
import fs from 'node:fs';

// If using a local file, upload first
const upload = await client.uploads.createEphemeral(
  fs.createReadStream('/path/to/noisy-audio.mp3')
);

const task = await client.voiceIsolation.create({
  model: 'eleven_voice_isolation',
  audio: upload.runwayUri
}).waitForTaskOutput();
```

Voice Dubbing
Dub audio/video into other languages.
```javascript
const task = await client.voiceDubbing.create({
  model: 'eleven_voice_dubbing',
  audio: 'https://example.com/speech.mp3',
  targetLanguage: 'es' // Spanish
}).waitForTaskOutput();
```

Speech-to-Speech
Convert one voice to another.
```javascript
const task = await client.speechToSpeech.create({
  model: 'eleven_multilingual_sts_v2',
  audio: 'https://example.com/original-speech.mp3',
  voiceId: 'target_voice_id'
}).waitForTaskOutput();
```

Integration Pattern
Express.js — Text-to-Speech Endpoint
```javascript
import RunwayML from '@runwayml/sdk';
import express from 'express';

const client = new RunwayML();
const app = express();
app.use(express.json());

app.post('/api/text-to-speech', async (req, res) => {
  try {
    const { text, voiceId } = req.body;
    const task = await client.textToSpeech.create({
      model: 'eleven_multilingual_v2',
      text,
      voiceId
    }).waitForTaskOutput();
    res.json({ audioUrl: task.output[0] });
  } catch (error) {
    console.error('TTS failed:', error);
    res.status(500).json({ error: error.message });
  }
});
```

FastAPI — Sound Effects
```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from runwayml import RunwayML

app = FastAPI()
client = RunwayML()

class SoundRequest(BaseModel):
    prompt: str

@app.post("/api/sound-effect")
async def generate_sound(req: SoundRequest):
    try:
        task = client.sound_effect.create(
            model='eleven_text_to_sound_v2',
            prompt_text=req.prompt
        ).wait_for_task_output()
        return {"audio_url": task.output[0]}
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
```

Tips
- Output URLs expire in 24-48 hours. Download audio files to your own storage.
- For local audio files (voice isolation, dubbing, speech-to-speech), upload via +integrate-uploads first.
- Voice IDs can be listed via the voices endpoint — see +api-reference for details.
- Text-to-speech cost scales with text length: 1 credit per 50 characters.
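Because output URLs expire within 24-48 hours, a finished task's audio should be persisted promptly. Here is a minimal sketch of such a helper; it is hypothetical (`persist_audio` is not part of the SDK), `output_url` would come from `task.output[0]`, and production code would more likely stream to object storage such as S3 than to local disk.

```python
import urllib.request
from pathlib import Path

def persist_audio(output_url: str, dest_dir: str, file_name: str) -> Path:
    """Download a task output URL to local storage before it expires."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    target = dest / file_name
    # Fetch the generated audio and write it to disk in one pass
    with urllib.request.urlopen(output_url) as resp:
        target.write_bytes(resp.read())
    return target
```

Typical usage right after a task completes: `persist_audio(task.output[0], 'audio-cache', 'greeting.mp3')`.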