ai-sdk-core


AI SDK Core


Production-ready backend AI with Vercel AI SDK v5.

Quick Start (5 Minutes)


Installation


bash
# Core package
npm install ai

# Provider packages (install what you need)
npm install @ai-sdk/openai       # OpenAI (GPT-5, GPT-4, GPT-3.5)
npm install @ai-sdk/anthropic    # Anthropic (Claude Sonnet 4.5, Opus 4, Haiku 4)
npm install @ai-sdk/google       # Google (Gemini 2.5 Pro/Flash/Lite)
npm install workers-ai-provider  # Cloudflare Workers AI

# Schema validation
npm install zod

Environment Variables


bash
# .env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_GENERATIVE_AI_API_KEY=...

First Example: Generate Text


typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4-turbo'),
  prompt: 'What is TypeScript?',
});

console.log(result.text);

First Example: Streaming Chat


typescript
import { streamText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const stream = streamText({
  model: anthropic('claude-sonnet-4-5-20250929'),
  messages: [
    { role: 'user', content: 'Tell me a story' },
  ],
});

for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}

First Example: Structured Output


typescript
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateObject({
  model: openai('gpt-4'),
  schema: z.object({
    name: z.string(),
    age: z.number(),
    skills: z.array(z.string()),
  }),
  prompt: 'Generate a person profile for a software engineer',
});

console.log(result.object);
// { name: "Alice", age: 28, skills: ["TypeScript", "React"] }


Core Functions


generateText()


Generate text completion with optional tools and multi-step execution.
Signature:
typescript
async function generateText(options: {
  model: LanguageModel;
  prompt?: string;
  messages?: Array<ModelMessage>;
  system?: string;
  tools?: Record<string, Tool>;
  maxOutputTokens?: number;
  temperature?: number;
  stopWhen?: StopCondition;
  // ... other options
}): Promise<GenerateTextResult>
Basic Usage:
typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4-turbo'),
  prompt: 'Explain quantum computing',
  maxOutputTokens: 500,
  temperature: 0.7,
});

console.log(result.text);
console.log(`Tokens: ${result.usage.totalTokens}`);
With Messages (Chat Format):
typescript
const result = await generateText({
  model: openai('gpt-4-turbo'),
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is the weather?' },
    { role: 'assistant', content: 'I need your location.' },
    { role: 'user', content: 'San Francisco' },
  ],
});
With Tools:
typescript
import { tool } from 'ai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4'),
  tools: {
    weather: tool({
      description: 'Get the weather for a location',
      inputSchema: z.object({
        location: z.string(),
      }),
      execute: async ({ location }) => {
        // API call here
        return { temperature: 72, condition: 'sunny' };
      },
    }),
  },
  prompt: 'What is the weather in Tokyo?',
});
When to Use:
  • Need final response (not streaming)
  • Want to wait for tool executions to complete
  • Simpler code when streaming not needed
  • Building batch/scheduled tasks
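The `stopWhen` option in the signature bounds multi-step tool loops. A minimal sketch using the `stepCountIs` helper from the SDK; the `lookupOrder` tool and its stubbed data are illustrative:

```typescript
import { generateText, tool, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4-turbo'),
  tools: {
    lookupOrder: tool({
      description: 'Look up an order by its id',
      inputSchema: z.object({ orderId: z.string() }),
      // Stubbed lookup; replace with a real data source
      execute: async ({ orderId }) => ({ orderId, status: 'shipped' }),
    }),
  },
  stopWhen: stepCountIs(3), // at most 3 steps: tool call(s) + final answer
  prompt: 'What is the status of order 42?',
});

console.log(result.text);
console.log(result.steps.length); // how many steps actually ran
```

Without `stopWhen`, the model answers in a single step after any tool results; bounding the step count guards against runaway tool loops.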
Error Handling:
typescript
import { APICallError, NoContentGeneratedError } from 'ai';

try {
  const result = await generateText({
    model: openai('gpt-4-turbo'),
    prompt: 'Hello',
  });
  console.log(result.text);
} catch (error) {
  if (APICallError.isInstance(error)) {
    console.error('API call failed:', error.message);
    // Check rate limits, API key, network
  } else if (NoContentGeneratedError.isInstance(error)) {
    console.error('No content generated');
    // Prompt may have been filtered
  } else {
    console.error('Unknown error:', error);
  }
}


streamText()


Stream text completion with real-time chunks.
Signature:
typescript
function streamText(options: {
  model: LanguageModel;
  prompt?: string;
  messages?: Array<ModelMessage>;
  system?: string;
  tools?: Record<string, Tool>;
  maxOutputTokens?: number;
  temperature?: number;
  stopWhen?: StopCondition;
  // ... other options
}): StreamTextResult
Basic Streaming:
typescript
import { streamText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const stream = streamText({
  model: anthropic('claude-sonnet-4-5-20250929'),
  prompt: 'Write a poem about AI',
});

// Stream to console
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}

// Or await the final text once the stream completes
const text = await stream.text;
console.log(text);
Streaming with Tools:
typescript
const stream = streamText({
  model: openai('gpt-4'),
  tools: {
    // ... tools definition
  },
  prompt: 'What is the weather?',
});

// Stream text chunks
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}
Handling the Stream:
typescript
const stream = streamText({
  model: openai('gpt-4-turbo'),
  prompt: 'Explain AI',
});

// Option 1: Text stream
for await (const text of stream.textStream) {
  console.log(text);
}

// Option 2: Full stream (includes metadata)
for await (const part of stream.fullStream) {
  if (part.type === 'text-delta') {
    console.log(part.textDelta);
  } else if (part.type === 'tool-call') {
    console.log('Tool called:', part.toolName);
  }
}

// Option 3: Await final values (resolved when the stream finishes)
const [text, usage] = await Promise.all([stream.text, stream.usage]);
console.log(text, usage);
When to Use:
  • Real-time user-facing responses
  • Long-form content generation
  • Want to show progress
  • Better perceived performance
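Streams can also be cancelled mid-flight; `abortSignal` is a standard `streamText`/`generateText` option, so a timeout can be wired in with an `AbortController`:

```typescript
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Abort the generation if it runs longer than 10 seconds
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 10_000);

const stream = streamText({
  model: openai('gpt-4-turbo'),
  prompt: 'Write a long essay about distributed systems',
  abortSignal: controller.signal,
});

try {
  for await (const chunk of stream.textStream) {
    process.stdout.write(chunk);
  }
} finally {
  clearTimeout(timer);
}
```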
Production Pattern:
typescript
// Next.js API Route
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(request: Request) {
  const { messages } = await request.json();

  const stream = streamText({
    model: openai('gpt-4-turbo'),
    messages,
  });

  // Return stream to client
  return stream.toDataStreamResponse();
}
Error Handling:
typescript
// Recommended: Use onError callback (added in v4.1.22)
const stream = streamText({
  model: openai('gpt-4-turbo'),
  prompt: 'Hello',
  onError({ error }) {
    console.error('Stream error:', error);
    // Custom error handling
  },
});

for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}

// Alternative: Manual try-catch
try {
  const stream = streamText({
    model: openai('gpt-4-turbo'),
    prompt: 'Hello',
  });

  for await (const chunk of stream.textStream) {
    process.stdout.write(chunk);
  }
} catch (error) {
  console.error('Stream error:', error);
}


generateObject()


Generate structured output validated by Zod schema.
Signature:
typescript
async function generateObject<T>(options: {
  model: LanguageModel;
  schema: z.Schema<T>;
  prompt?: string;
  messages?: Array<ModelMessage>;
  system?: string;
  mode?: 'auto' | 'json' | 'tool';
  // ... other options
}): Promise<GenerateObjectResult<T>>
Basic Usage:
typescript
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateObject({
  model: openai('gpt-4'),
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(z.object({
        name: z.string(),
        amount: z.string(),
      })),
      instructions: z.array(z.string()),
    }),
  }),
  prompt: 'Generate a recipe for chocolate chip cookies',
});

console.log(result.object.recipe);
Nested Schemas:
typescript
const PersonSchema = z.object({
  name: z.string(),
  age: z.number(),
  address: z.object({
    street: z.string(),
    city: z.string(),
    country: z.string(),
  }),
  hobbies: z.array(z.string()),
});

const result = await generateObject({
  model: openai('gpt-4'),
  schema: PersonSchema,
  prompt: 'Generate a person profile',
});
Arrays and Unions:
typescript
// Array of objects
const result = await generateObject({
  model: openai('gpt-4'),
  schema: z.object({
    people: z.array(z.object({
      name: z.string(),
      role: z.enum(['engineer', 'designer', 'manager']),
    })),
  }),
  prompt: 'Generate a team of 5 people',
});

// Union types
const result = await generateObject({
  model: openai('gpt-4'),
  schema: z.discriminatedUnion('type', [
    z.object({ type: z.literal('text'), content: z.string() }),
    z.object({ type: z.literal('image'), url: z.string() }),
  ]),
  prompt: 'Generate content',
});
When to Use:
  • Need structured data (JSON, forms, etc.)
  • Validation is critical
  • Extracting data from unstructured input
  • Building AI workflows that consume JSON
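The extraction use case looks like this in practice; a sketch in which the email text is made up:

```typescript
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const email = `Hi team, the deploy is scheduled for Friday at 3pm PST.
Please have the release notes ready by Thursday. Thanks, Dana`;

const result = await generateObject({
  model: openai('gpt-4'),
  schema: z.object({
    tasks: z.array(z.object({
      description: z.string(),
      deadline: z.string().describe('Deadline as stated in the text'),
    })),
  }),
  system: 'Extract action items from the email.',
  prompt: email,
});

console.log(result.object.tasks);
```

The `.describe()` hints are passed to the model and often matter as much as the field names for extraction quality.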
Error Handling:
typescript
import { NoObjectGeneratedError, TypeValidationError } from 'ai';

try {
  const result = await generateObject({
    model: openai('gpt-4'),
    schema: z.object({ name: z.string() }),
    prompt: 'Generate a person',
  });
} catch (error) {
  if (NoObjectGeneratedError.isInstance(error)) {
    console.error('Model did not generate valid object');
    // Try simplifying schema or adding examples
  } else if (TypeValidationError.isInstance(error)) {
    console.error('Zod validation failed:', error.message);
    // Schema doesn't match output
  }
}


streamObject()


Stream structured output with partial updates.
Signature:
typescript
function streamObject<T>(options: {
  model: LanguageModel;
  schema: z.Schema<T>;
  prompt?: string;
  messages?: Array<ModelMessage>;
  mode?: 'auto' | 'json' | 'tool';
  // ... other options
}): StreamObjectResult<T>
Basic Usage:
typescript
import { streamObject } from 'ai';
import { google } from '@ai-sdk/google';
import { z } from 'zod';

const stream = streamObject({
  model: google('gemini-2.5-pro'),
  schema: z.object({
    characters: z.array(z.object({
      name: z.string(),
      class: z.string(),
      stats: z.object({
        hp: z.number(),
        mana: z.number(),
      }),
    })),
  }),
  prompt: 'Generate 3 RPG characters',
});

// Stream partial updates
for await (const partialObject of stream.partialObjectStream) {
  console.log(partialObject);
  // { characters: [{ name: "Aria" }] }
  // { characters: [{ name: "Aria", class: "Mage" }] }
  // { characters: [{ name: "Aria", class: "Mage", stats: { hp: 100 } }] }
  // ...
}
UI Integration Pattern:
typescript
// Server endpoint
export async function POST(request: Request) {
  const { prompt } = await request.json();

  const stream = streamObject({
    model: openai('gpt-4'),
    schema: z.object({
      summary: z.string(),
      keyPoints: z.array(z.string()),
    }),
    prompt,
  });

  return stream.toTextStreamResponse();
}

// Client (with useObject hook from ai-sdk-ui)
const { object, isLoading } = useObject({
  api: '/api/analyze',
  schema: /* same schema */,
});

// Render partial object as it streams
{object?.summary && <p>{object.summary}</p>}
{object?.keyPoints?.map(point => <li key={point}>{point}</li>)}
When to Use:
  • Real-time structured data (forms, dashboards)
  • Show progressive completion
  • Large structured outputs
  • Better UX for slow generations
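When the output is a list, `streamObject` also supports `output: 'array'`, which yields each completed, validated element via `elementStream` instead of repeating the growing object; a sketch:

```typescript
import { streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const stream = streamObject({
  model: openai('gpt-4'),
  output: 'array',
  // With output: 'array', the schema describes a single element
  schema: z.object({
    name: z.string(),
    price: z.number(),
  }),
  prompt: 'Generate 5 sample products for a hardware store',
});

for await (const product of stream.elementStream) {
  console.log(product); // each element arrives complete and validated
}
```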


Provider Setup & Configuration


OpenAI


typescript
import { openai, createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

// API key from environment (recommended)
// OPENAI_API_KEY=sk-...
const model = openai('gpt-4-turbo');

// Or create a provider instance with an explicit API key
const openaiWithKey = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
const keyedModel = openaiWithKey('gpt-4');

// Available models
const gpt5 = openai('gpt-5');           // Latest (released August 2025)
const gpt4 = openai('gpt-4-turbo');
const gpt35 = openai('gpt-3.5-turbo');

const result = await generateText({
  model: gpt4,
  prompt: 'Hello',
});
Common Errors:
  • AI_LoadAPIKeyError: Check the OPENAI_API_KEY environment variable
  • 429 Rate Limit: Implement exponential backoff, upgrade tier
  • 401 Unauthorized: Invalid API key format
Rate Limiting: OpenAI enforces RPM (requests per minute) and TPM (tokens per minute) limits. Implement retry logic:
typescript
const result = await generateText({
  model: openai('gpt-4'),
  prompt: 'Hello',
  maxRetries: 3,  // Built-in retry
});


Anthropic


typescript
import { anthropic } from '@ai-sdk/anthropic';

// ANTHROPIC_API_KEY=sk-ant-...
const claude = anthropic('claude-sonnet-4-5-20250929');

// Available models (Claude 4.x family, released 2025)
const sonnet45 = anthropic('claude-sonnet-4-5-20250929');  // Latest, recommended
const sonnet4 = anthropic('claude-sonnet-4-20250522');     // Released May 2025
const opus4 = anthropic('claude-opus-4-20250522');         // Highest quality

// Legacy models (Claude 3.x, deprecated)
// const sonnet35 = anthropic('claude-3-5-sonnet-20241022');  // Use Claude 4.x instead
// const opus3 = anthropic('claude-3-opus-20240229');
// const haiku3 = anthropic('claude-3-haiku-20240307');

const result = await generateText({
  model: sonnet45,
  prompt: 'Explain quantum entanglement',
});
Common Errors:
  • AI_LoadAPIKeyError: Check the ANTHROPIC_API_KEY environment variable
  • overloaded_error: Retry with exponential backoff
  • rate_limit_error: Wait and retry
Best Practices:
  • Claude excels at long-context tasks (200K+ tokens)
  • Claude 4.x recommended: Anthropic deprecated Claude 3.x in 2025
  • Use Sonnet 4.5 for balance of speed/quality (latest model)
  • Use Sonnet 4 for production stability (if avoiding latest)
  • Use Opus 4 for highest quality reasoning and complex tasks


Google


typescript
import { google } from '@ai-sdk/google';

// GOOGLE_GENERATIVE_AI_API_KEY=...
const gemini = google('gemini-2.5-pro');

// Available models (all GA since June-July 2025)
const pro = google('gemini-2.5-pro');
const flash = google('gemini-2.5-flash');
const lite = google('gemini-2.5-flash-lite');

const result = await generateText({
  model: pro,
  prompt: 'Analyze this data',
});
Common Errors:
  • AI_LoadAPIKeyError: Check GOOGLE_GENERATIVE_AI_API_KEY
  • SAFETY: Content filtered by safety settings
  • QUOTA_EXCEEDED: Rate limit hit
Best Practices:
  • Gemini Pro: Best for reasoning and analysis
  • Gemini Flash: Fast, cost-effective for most tasks
  • Free tier has generous limits
  • Good for multimodal tasks (combine with image inputs)
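The multimodal point above can be sketched with an image content part in a user message; the image URL here is a placeholder:

```typescript
import { generateText } from 'ai';
import { google } from '@ai-sdk/google';

const result = await generateText({
  model: google('gemini-2.5-flash'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe this chart in two sentences.' },
        // Image parts accept a URL, Buffer, or base64-encoded data
        { type: 'image', image: new URL('https://example.com/chart.png') },
      ],
    },
  ],
});

console.log(result.text);
```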


Cloudflare Workers AI


typescript
import { Hono } from 'hono';
import { generateText } from 'ai';
import { createWorkersAI } from 'workers-ai-provider';

interface Env {
  AI: Ai;
}

const app = new Hono<{ Bindings: Env }>();

app.post('/chat', async (c) => {
  // Create provider inside handler (avoid startup overhead)
  const workersai = createWorkersAI({ binding: c.env.AI });

  const result = await generateText({
    model: workersai('@cf/meta/llama-3.1-8b-instruct'),
    prompt: 'What is Cloudflare?',
  });

  return c.json({ response: result.text });
});

export default app;
wrangler.jsonc:
jsonc
{
  "name": "ai-sdk-worker",
  "compatibility_date": "2025-10-21",
  "ai": {
    "binding": "AI"
  }
}
Important Notes:
Startup Optimization: AI SDK v5 + Zod can cause >270ms startup time in Workers. Solutions:
  1. Move imports inside handler:
typescript
// BAD (startup overhead)
import { createWorkersAI } from 'workers-ai-provider';
const workersai = createWorkersAI({ binding: env.AI });

// GOOD (lazy init)
app.post('/chat', async (c) => {
  const { createWorkersAI } = await import('workers-ai-provider');
  const workersai = createWorkersAI({ binding: c.env.AI });
  // ...
});
  2. Minimize top-level Zod schemas:
typescript
// Move complex schemas into route handlers
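Continuing the Hono app above, a sketch of keeping both imports and schema construction out of the startup path; the `/extract` route and `ContactSchema` are illustrative, and structured output support depends on the chosen model:

```typescript
app.post('/extract', async (c) => {
  // Lazy imports + handler-local schema keep Worker startup fast
  const { z } = await import('zod');
  const { generateObject } = await import('ai');
  const { createWorkersAI } = await import('workers-ai-provider');

  const ContactSchema = z.object({
    name: z.string(),
    email: z.string(),
  });

  const workersai = createWorkersAI({ binding: c.env.AI });
  const { text } = await c.req.json<{ text: string }>();

  const result = await generateObject({
    model: workersai('@cf/meta/llama-3.1-8b-instruct'),
    schema: ContactSchema,
    prompt: `Extract the contact details from: ${text}`,
  });

  return c.json(result.object);
});
```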
When to Use workers-ai-provider:
  • Multi-provider scenarios (OpenAI + Workers AI)
  • Using AI SDK UI hooks with Workers AI
  • Need consistent API across providers
When to Use Native Binding: For Cloudflare-only deployments that don't need multi-provider support, use the cloudflare-workers-ai skill instead for maximum performance.


Tool Calling & Agents


Basic Tool Definition


typescript
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4'),
  tools: {
    weather: tool({
      description: 'Get the weather for a location',
      inputSchema: z.object({
        location: z.string().describe('The city and country, e.g. "Paris, France"'),
        unit: z.enum(['celsius', 'fahrenheit']).optional(),
      }),
      execute: async ({ location, unit = 'celsius' }) => {
        // Call a real weather API here; stubbed response for the example
        return { temperature: 72, condition: 'sunny', unit };
      },
    }),
    convertTemperature: tool({
      description: 'Convert temperature between units',
      inputSchema: z.object({
        value: z.number(),
        from: z.enum(['celsius', 'fahrenheit']),
        to: z.enum(['celsius', 'fahrenheit']),
      }),
      execute: async ({ value, from, to }) => {
        if (from === to) return { value };
        if (from === 'celsius' && to === 'fahrenheit') {
          return { value: (value * 9/5) + 32 };
        }
        return { value: (value - 32) * 5/9 };
      },
    }),
  },
  prompt: 'What is the weather in Tokyo in Fahrenheit?',
});

console.log(result.text);
// Model will call weather tool, potentially convertTemperature, then answer
v5 Tool Changes:
  • parameters → inputSchema (Zod schema)
  • Tool properties: args → input, result → output
  • ToolExecutionError removed (now tool-error content parts)

typescript
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4'),
  tools: {
    weather: tool({
      description: '获取指定地点的天气',
      inputSchema: z.object({
        location: z.string().describe('城市和国家,例如 "Paris, France"'),
        unit: z.enum(['celsius', 'fahrenheit']).optional(),
      }),
      execute: async ({ location, unit = 'celsius' }) => {
        // 模拟API调用
        const data = await fetch(`https://api.weather.com/${location}`);
        return { temperature: 72, condition: 'sunny', unit };
      },
    }),
    convertTemperature: tool({
      description: '在单位之间转换温度',
      inputSchema: z.object({
        value: z.number(),
        from: z.enum(['celsius', 'fahrenheit']),
        to: z.enum(['celsius', 'fahrenheit']),
      }),
      execute: async ({ value, from, to }) => {
        if (from === to) return { value };
        if (from === 'celsius' && to === 'fahrenheit') {
          return { value: (value * 9/5) + 32 };
        }
        return { value: (value - 32) * 5/9 };
      },
    }),
  },
  prompt: '东京的华氏温度是多少?',
});

console.log(result.text);
// 模型将调用weather工具,可能调用convertTemperature,然后给出答案
v5工具变更:
  • parameters → inputSchema(Zod Schema)
  • 工具属性:args → input,result → output
  • 移除ToolExecutionError(现在为tool-error内容部分)

Agent Class

Agent类

The Agent class simplifies multi-step execution with tools.
typescript
import { Agent, tool } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import { z } from 'zod';

const weatherAgent = new Agent({
  model: anthropic('claude-sonnet-4-5-20250929'),
  system: 'You are a weather assistant. Always convert temperatures to the user\'s preferred unit.',
  tools: {
    getWeather: tool({
      description: 'Get current weather for a location',
      inputSchema: z.object({
        location: z.string(),
      }),
      execute: async ({ location }) => {
        return { temp: 72, condition: 'sunny', unit: 'fahrenheit' };
      },
    }),
    convertTemp: tool({
      description: 'Convert temperature between units',
      inputSchema: z.object({
        fahrenheit: z.number(),
      }),
      execute: async ({ fahrenheit }) => {
        return { celsius: (fahrenheit - 32) * 5/9 };
      },
    }),
  },
});

const result = await weatherAgent.run({
  messages: [
    { role: 'user', content: 'What is the weather in SF in Celsius?' },
  ],
});

console.log(result.text);
// Agent will call getWeather, then convertTemp, then respond
When to Use Agent vs Raw generateText:
  • Use Agent when: Multiple tools, complex workflows, multi-step reasoning
  • Use generateText when: Simple single-step, one or two tools, full control needed

Agent类简化了结合工具的多步骤执行。
typescript
import { Agent, tool } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import { z } from 'zod';

const weatherAgent = new Agent({
  model: anthropic('claude-sonnet-4-5-20250929'),
  system: '你是一个天气助手。始终将温度转换为用户偏好的单位。',
  tools: {
    getWeather: tool({
      description: '获取指定地点的当前天气',
      inputSchema: z.object({
        location: z.string(),
      }),
      execute: async ({ location }) => {
        return { temp: 72, condition: 'sunny', unit: 'fahrenheit' };
      },
    }),
    convertTemp: tool({
      description: '在单位之间转换温度',
      inputSchema: z.object({
        fahrenheit: z.number(),
      }),
      execute: async ({ fahrenheit }) => {
        return { celsius: (fahrenheit - 32) * 5/9 };
      },
    }),
  },
});

const result = await weatherAgent.run({
  messages: [
    { role: 'user', content: '旧金山的摄氏温度是多少?' },
  ],
});

console.log(result.text);
// Agent将调用getWeather,然后调用convertTemp,最后给出响应
何时使用Agent vs 原生generateText:
  • 使用Agent的场景: 多工具、复杂工作流、多步骤推理
  • 使用generateText的场景: 简单单步骤、1-2个工具、需要完全控制

Multi-Step Execution

多步骤执行

Control when multi-step execution stops with
stopWhen
conditions.
typescript
import { generateText, tool, stepCountIs, hasToolCall } from 'ai';
import { openai } from '@ai-sdk/openai';

// Stop after specific number of steps
const result = await generateText({
  model: openai('gpt-4'),
  tools: { /* ... */ },
  prompt: 'Research TypeScript and create a summary',
  stopWhen: stepCountIs(5),  // Max 5 steps (tool calls + responses)
});

// Stop when specific tool is called
const result = await generateText({
  model: openai('gpt-4'),
  tools: {
    research: tool({ /* ... */ }),
    finalize: tool({ /* ... */ }),
  },
  prompt: 'Research and finalize a report',
  stopWhen: hasToolCall('finalize'),  // Stop when finalize is called
});

// Combine conditions
const result = await generateText({
  model: openai('gpt-4'),
  tools: { /* ... */ },
  prompt: 'Complex task',
  stopWhen: [stepCountIs(10), hasToolCall('finish')],  // Stops when either condition is met
});
v5 Change: the maxSteps parameter was removed. Use stopWhen: stepCountIs(n) instead.

使用
stopWhen
条件控制多步骤执行的停止时机。
typescript
import { generateText, tool, stepCountIs, hasToolCall } from 'ai';
import { openai } from '@ai-sdk/openai';

// 达到特定步骤数后停止
const result = await generateText({
  model: openai('gpt-4'),
  tools: { /* ... */ },
  prompt: '研究TypeScript并创建摘要',
  stopWhen: stepCountIs(5),  // 最多5步(工具调用 + 响应)
});

// 调用特定工具后停止
const result = await generateText({
  model: openai('gpt-4'),
  tools: {
    research: tool({ /* ... */ }),
    finalize: tool({ /* ... */ }),
  },
  prompt: '研究并完成一份报告',
  stopWhen: hasToolCall('finalize'),  // 调用finalize后停止
});

// 组合条件
const result = await generateText({
  model: openai('gpt-4'),
  tools: { /* ... */ },
  prompt: '复杂任务',
  stopWhen: [stepCountIs(10), hasToolCall('finish')],  // 满足任一条件即停止
});
v5变更: 移除maxSteps参数。改用stopWhen: stepCountIs(n)

Dynamic Tools (v5 New Feature)

动态工具(v5新功能)

Add tools at runtime based on context:
typescript
const result = await generateText({
  model: openai('gpt-4'),
  tools: (context) => {
    // Context includes messages, step count, etc.
    const baseTools = {
      search: tool({ /* ... */ }),
    };

    // Add tools based on context
    if (context.messages.some(m => m.content.includes('weather'))) {
      baseTools.weather = tool({ /* ... */ });
    }

    return baseTools;
  },
  prompt: 'Help me with my task',
});

根据上下文在运行时添加工具:
typescript
const result = await generateText({
  model: openai('gpt-4'),
  tools: (context) => {
    // 上下文包含消息、步骤数等
    const baseTools = {
      search: tool({ /* ... */ }),
    };

    // 根据上下文添加工具
    if (context.messages.some(m => m.content.includes('weather'))) {
      baseTools.weather = tool({ /* ... */ });
    }

    return baseTools;
  },
  prompt: '帮我完成任务',
});

Critical v4→v5 Migration

重要v4→v5迁移指南

AI SDK v5 introduced extensive breaking changes. If migrating from v4, follow this guide.
AI SDK v5引入了大量破坏性变更。如果从v4迁移,请遵循本指南。

Breaking Changes Overview

破坏性变更概述

  1. Parameter Renames
    • maxTokens → maxOutputTokens
    • providerMetadata → providerOptions
  2. Tool Definitions
    • parameters → inputSchema
    • Tool properties: args → input, result → output
  3. Message Types
    • CoreMessage → ModelMessage
    • Message → UIMessage
    • convertToCoreMessages → convertToModelMessages
  4. Tool Error Handling
    • ToolExecutionError class removed
    • Now tool-error content parts
    • Enables automated retry
  5. Multi-Step Execution
    • maxSteps → stopWhen
    • Use stepCountIs() or hasToolCall()
  6. Message Structure
    • Simple content string → parts array
    • Parts: text, file, reasoning, tool-call, tool-result
  7. Streaming Architecture
    • Single chunk → start/delta/end lifecycle
    • Unique IDs for concurrent streams
  8. Tool Streaming
    • Enabled by default
    • toolCallStreaming option removed
  9. Package Reorganization
    • ai/rsc → @ai-sdk/rsc
    • ai/react → @ai-sdk/react
    • LangChainAdapter → @ai-sdk/langchain
  1. 参数重命名
    • maxTokens → maxOutputTokens
    • providerMetadata → providerOptions
  2. 工具定义
    • parameters → inputSchema
    • 工具属性:args → input,result → output
  3. 消息类型
    • CoreMessage → ModelMessage
    • Message → UIMessage
    • convertToCoreMessages → convertToModelMessages
  4. 工具错误处理
    • 移除ToolExecutionError类
    • 现在为tool-error内容部分
    • 支持自动重试
  5. 多步骤执行
    • maxSteps → stopWhen
    • 使用stepCountIs()或hasToolCall()
  6. 消息结构
    • 简单content字符串 → parts数组
    • 部分类型:text、file、reasoning、tool-call、tool-result
  7. 流式架构
    • 单一分块 → start/delta/end生命周期
    • 并发流使用唯一ID
  8. 工具流式传输
    • 默认启用
    • 移除toolCallStreaming选项
  9. 包重组
    • ai/rsc → @ai-sdk/rsc
    • ai/react → @ai-sdk/react
    • LangChainAdapter → @ai-sdk/langchain

Migration Examples

迁移示例

Before (v4):
typescript
import { generateText } from 'ai';

const result = await generateText({
  model: openai.chat('gpt-4'),
  maxTokens: 500,
  providerMetadata: { openai: { user: 'user-123' } },
  tools: {
    weather: {
      description: 'Get weather',
      parameters: z.object({ location: z.string() }),
      execute: async (args) => { /* args.location */ },
    },
  },
  maxSteps: 5,
});
After (v5):
typescript
import { generateText, tool, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4'),
  maxOutputTokens: 500,
  providerOptions: { openai: { user: 'user-123' } },
  tools: {
    weather: tool({
      description: 'Get weather',
      inputSchema: z.object({ location: z.string() }),
      execute: async ({ location }) => { /* input.location */ },
    }),
  },
  stopWhen: stepCountIs(5),
});
之前(v4):
typescript
import { generateText } from 'ai';

const result = await generateText({
  model: openai.chat('gpt-4'),
  maxTokens: 500,
  providerMetadata: { openai: { user: 'user-123' } },
  tools: {
    weather: {
      description: '获取天气',
      parameters: z.object({ location: z.string() }),
      execute: async (args) => { /* args.location */ },
    },
  },
  maxSteps: 5,
});
之后(v5):
typescript
import { generateText, tool, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4'),
  maxOutputTokens: 500,
  providerOptions: { openai: { user: 'user-123' } },
  tools: {
    weather: tool({
      description: '获取天气',
      inputSchema: z.object({ location: z.string() }),
      execute: async ({ location }) => { /* input.location */ },
    }),
  },
  stopWhen: stepCountIs(5),
});

Migration Checklist

迁移检查清单

  • Update all maxTokens to maxOutputTokens
  • Update providerMetadata to providerOptions
  • Convert tool parameters to inputSchema
  • Update tool execute functions: args → input
  • Replace maxSteps with stopWhen: stepCountIs(n)
  • Update message types: CoreMessage → ModelMessage
  • Remove ToolExecutionError handling
  • Update package imports (ai/rsc → @ai-sdk/rsc)
  • Test streaming behavior (architecture changed)
  • Update TypeScript types
  • 将所有maxTokens更新为maxOutputTokens
  • 将providerMetadata更新为providerOptions
  • 将工具parameters转换为inputSchema
  • 更新工具执行函数:args → input
  • 用stopWhen: stepCountIs(n)替换maxSteps
  • 更新消息类型:CoreMessage → ModelMessage
  • 移除ToolExecutionError处理逻辑
  • 更新包导入(ai/rsc → @ai-sdk/rsc)
  • 测试流式传输行为(架构已变更)
  • 更新TypeScript类型

Automated Migration

自动迁移

AI SDK provides a migration tool:
bash
npx @ai-sdk/codemod upgrade
This will update most breaking changes automatically. Review changes carefully.

AI SDK提供迁移工具:
bash
npx @ai-sdk/codemod upgrade
这将自动更新大多数破坏性变更。请仔细审查变更内容。

Top 12 Errors & Solutions

12大常见错误与解决方案

1. AI_APICallError

1. AI_APICallError

Cause: API request failed (network, auth, rate limit).
Solution:
typescript
import { generateText, APICallError } from 'ai';
import { openai } from '@ai-sdk/openai';

try {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: 'Hello',
  });
} catch (error) {
  if (error instanceof APICallError) {
    console.error('API call failed:', error.message);
    console.error('Status code:', error.statusCode);
    console.error('Response:', error.responseBody);

    // Check common causes
    if (error.statusCode === 401) {
      // Invalid API key
    } else if (error.statusCode === 429) {
      // Rate limit - implement backoff
    } else if (error.statusCode >= 500) {
      // Provider issue - retry
    }
  }
}
Prevention:
  • Validate API keys at startup
  • Implement retry logic with exponential backoff
  • Monitor rate limits
  • Handle network errors gracefully

原因: API请求失败(网络、认证、速率限制)。
解决方案:
typescript
import { generateText, APICallError } from 'ai';
import { openai } from '@ai-sdk/openai';

try {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: '你好',
  });
} catch (error) {
  if (error instanceof APICallError) {
    console.error('API调用失败:', error.message);
    console.error('状态码:', error.statusCode);
    console.error('响应:', error.responseBody);

    // 检查常见原因
    if (error.statusCode === 401) {
      // API密钥无效
    } else if (error.statusCode === 429) {
      // 速率限制 - 实现退避
    } else if (error.statusCode >= 500) {
      // 供应商问题 - 重试
    }
  }
}
预防措施:
  • 在启动时验证API密钥
  • 实现带指数退避的重试逻辑
  • 监控速率限制
  • 优雅处理网络错误

2. AI_NoObjectGeneratedError

2. AI_NoObjectGeneratedError

Cause: Model didn't generate valid object matching schema.
Solution:
typescript
import { generateObject, NoObjectGeneratedError } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

try {
  const result = await generateObject({
    model: openai('gpt-4'),
    schema: z.object({ /* complex schema */ }),
    prompt: 'Generate data',
  });
} catch (error) {
  if (error instanceof NoObjectGeneratedError) {
    console.error('No valid object generated');

    // Solutions:
    // 1. Simplify schema
    // 2. Add more context to prompt
    // 3. Provide examples in prompt
    // 4. Try different model (gpt-4 better than gpt-3.5 for complex objects)
  }
}
Prevention:
  • Start with simple schemas, add complexity incrementally
  • Include examples in prompt: "Generate a person like: { name: 'Alice', age: 30 }"
  • Use GPT-4 for complex structured output
  • Test schemas with sample data first

原因: 模型未生成匹配Schema的有效对象。
解决方案:
typescript
import { generateObject, NoObjectGeneratedError } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

try {
  const result = await generateObject({
    model: openai('gpt-4'),
    schema: z.object({ /* 复杂Schema */ }),
    prompt: '生成数据',
  });
} catch (error) {
  if (error instanceof NoObjectGeneratedError) {
    console.error('未生成有效对象');

    // 解决方案:
    // 1. 简化Schema
    // 2. 为提示词添加更多上下文
    // 3. 在提示词中提供示例
    // 4. 尝试不同模型(gpt-4比gpt-3.5更适合复杂对象)
  }
}
预防措施:
  • 从简单Schema开始,逐步增加复杂度
  • 在提示词中包含示例:"生成如下格式的人物:{ name: 'Alice', age: 30 }"
  • 使用GPT-4处理复杂结构化输出
  • 先使用样本数据测试Schema

3. Worker Startup Limit (270ms+)

3. Worker启动限制(270ms+)

Cause: AI SDK v5 + Zod initialization overhead in Cloudflare Workers exceeds startup limits.
Solution:
typescript
// BAD: Top-level imports cause startup overhead
import { createWorkersAI } from 'workers-ai-provider';
import { complexSchema } from './schemas';

const workersai = createWorkersAI({ binding: env.AI });

// GOOD: Lazy initialization inside handler
export default {
  async fetch(request, env) {
    const { createWorkersAI } = await import('workers-ai-provider');
    const workersai = createWorkersAI({ binding: env.AI });

    // Use workersai here
  }
}
Prevention:
  • Move AI SDK imports inside route handlers
  • Minimize top-level Zod schemas
  • Monitor Worker startup time (must be <400ms)
  • Use Wrangler's startup time reporting
GitHub Issue: Search for "Workers startup limit" in Vercel AI SDK issues

原因: 在Cloudflare Workers中,AI SDK v5 + Zod初始化开销超过启动限制。
解决方案:
typescript
// 错误示例:顶层导入导致启动开销
import { createWorkersAI } from 'workers-ai-provider';
import { complexSchema } from './schemas';

const workersai = createWorkersAI({ binding: env.AI });

// 正确示例:在处理程序内部延迟初始化
export default {
  async fetch(request, env) {
    const { createWorkersAI } = await import('workers-ai-provider');
    const workersai = createWorkersAI({ binding: env.AI });

    // 在此处使用workersai
  }
}
预防措施:
  • 将AI SDK导入移至路由处理程序内部
  • 最小化顶层Zod Schema
  • 监控Worker启动时间(必须<400ms)
  • 使用Wrangler的启动时间报告
GitHub Issue: 在Vercel AI SDK issues中搜索"Workers startup limit"

4. streamText Fails Silently

4. streamText静默失败

Cause: Stream errors can be swallowed by
createDataStreamResponse
.
Status: RESOLVED - Fixed in ai@4.1.22 (February 2025)
Solution (Recommended):
typescript
// Use the onError callback (added in v4.1.22)
const stream = streamText({
  model: openai('gpt-4'),
  prompt: 'Hello',
  onError({ error }) {
    console.error('Stream error:', error);
    // Custom error logging and handling
  },
});

// Stream safely
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}
Alternative (Manual try-catch):
typescript
// Fallback if not using onError callback
try {
  const stream = streamText({
    model: openai('gpt-4'),
    prompt: 'Hello',
  });

  for await (const chunk of stream.textStream) {
    process.stdout.write(chunk);
  }
} catch (error) {
  console.error('Stream error:', error);
}
Prevention:
  • Use the onError callback for proper error capture (recommended)
  • Implement server-side error monitoring
  • Test stream error handling explicitly
  • Always log on server side in production
GitHub Issue: #4726 (RESOLVED)

原因:
createDataStreamResponse
可能会吞掉流错误。
状态:已解决 - 在ai@4.1.22中修复(2025年2月)
解决方案(推荐):
typescript
// 使用onError回调(v4.1.22新增)
const stream = streamText({
  model: openai('gpt-4'),
  prompt: '你好',
  onError({ error }) {
    console.error('流错误:', error);
    // 自定义错误处理
  },
});

// 安全流式传输
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}
替代方案(手动try-catch):
typescript
// 如果不使用onError回调的回退方案
try {
  const stream = streamText({
    model: openai('gpt-4'),
    prompt: '你好',
  });

  for await (const chunk of stream.textStream) {
    process.stdout.write(chunk);
  }
} catch (error) {
  console.error('流错误:', error);
}
预防措施:
  • 使用onError回调正确捕获错误(推荐)
  • 实现服务端错误监控
  • 显式测试流错误处理
  • 生产环境中始终在服务端记录日志
GitHub Issue: #4726(已解决)

5. AI_LoadAPIKeyError

5. AI_LoadAPIKeyError

Cause: Missing or invalid API key.
Solution:
typescript
import { generateText, LoadAPIKeyError } from 'ai';
import { openai } from '@ai-sdk/openai';

try {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: 'Hello',
  });
} catch (error) {
  if (error instanceof LoadAPIKeyError) {
    console.error('API key error:', error.message);

    // Check:
    // 1. .env file exists and loaded
    // 2. Correct env variable name (OPENAI_API_KEY)
    // 3. Key format is valid (starts with sk-)
  }
}
Prevention:
  • Validate API keys at application startup
  • Use environment variable validation (e.g., zod)
  • Provide clear error messages in development
  • Document required environment variables
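Startup validation from the list above needs no extra dependencies; a minimal sketch in plain TypeScript (the `requireEnv` helper and the `sk-` prefix check are illustrative, not part of the AI SDK):

```typescript
// Illustrative helper: fail fast at startup instead of mid-request.
function requireEnv(name: string, prefix?: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  if (prefix && !value.startsWith(prefix)) {
    throw new Error(`${name} looks malformed (expected prefix "${prefix}")`);
  }
  return value;
}

// Call once during application startup, e.g.:
// const openaiKey = requireEnv('OPENAI_API_KEY', 'sk-');
// const anthropicKey = requireEnv('ANTHROPIC_API_KEY', 'sk-ant-');
```

A thrown error at boot is far easier to diagnose than an AI_LoadAPIKeyError deep inside a request handler.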

原因: API密钥缺失或无效。
解决方案:
typescript
import { generateText, LoadAPIKeyError } from 'ai';
import { openai } from '@ai-sdk/openai';

try {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: '你好',
  });
} catch (error) {
  if (error instanceof LoadAPIKeyError) {
    console.error('API密钥错误:', error.message);

    // 检查:
    // 1. .env文件存在且已加载
    // 2. 环境变量名称正确(OPENAI_API_KEY)
    // 3. 密钥格式有效(以sk-开头)
  }
}
预防措施:
  • 在应用启动时验证API密钥
  • 使用环境变量验证(如zod)
  • 在开发环境中提供清晰的错误消息
  • 记录所需的环境变量

6. AI_InvalidArgumentError

6. AI_InvalidArgumentError

Cause: Invalid parameters passed to function.
Solution:
typescript
import { generateText, InvalidArgumentError } from 'ai';
import { openai } from '@ai-sdk/openai';

try {
  const result = await generateText({
    model: openai('gpt-4'),
    maxOutputTokens: -1,  // Invalid!
    prompt: 'Hello',
  });
} catch (error) {
  if (error instanceof InvalidArgumentError) {
    console.error('Invalid argument:', error.message);
    // Check parameter types and values
  }
}
Prevention:
  • Use TypeScript for type checking
  • Validate inputs before calling AI SDK functions
  • Read function signatures carefully
  • Check official docs for parameter constraints

原因: 向函数传递了无效参数。
解决方案:
typescript
import { generateText, InvalidArgumentError } from 'ai';
import { openai } from '@ai-sdk/openai';

try {
  const result = await generateText({
    model: openai('gpt-4'),
    maxOutputTokens: -1,  // 无效!
    prompt: '你好',
  });
} catch (error) {
  if (error instanceof InvalidArgumentError) {
    console.error('无效参数:', error.message);
    // 检查参数类型和值
  }
}
预防措施:
  • 使用TypeScript进行类型检查
  • 调用AI SDK函数前验证输入
  • 仔细阅读函数签名
  • 查看官方文档了解参数约束

7. AI_NoContentGeneratedError

7. AI_NoContentGeneratedError

Cause: Model generated no content (safety filters, etc.).
Solution:
typescript
import { generateText, NoContentGeneratedError } from 'ai';
import { openai } from '@ai-sdk/openai';

try {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: 'Some prompt',
  });
} catch (error) {
  if (error instanceof NoContentGeneratedError) {
    console.error('No content generated');

    // Possible causes:
    // 1. Safety filters blocked output
    // 2. Prompt triggered content policy
    // 3. Model configuration issue

    // Handle gracefully:
    return { text: 'Unable to generate response. Please try different input.' };
  }
}
Prevention:
  • Sanitize user inputs
  • Avoid prompts that may trigger safety filters
  • Have fallback messaging
  • Log occurrences for analysis
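"Sanitize user inputs" from the list above can be sketched as a small helper (the name `sanitizePrompt` and the default limit are illustrative; adapt the rules to your content policy):

```typescript
// Illustrative sanitizer: trim, strip control characters (keeping
// tab/newline), and cap length before sending user text to a model.
function sanitizePrompt(input: string, maxLength = 4000): string {
  const cleaned = input
    .replace(/[\u0000-\u0008\u000B\u000C\u000E-\u001F\u007F]/g, '')
    .trim();
  return cleaned.slice(0, maxLength);
}

// const prompt = sanitizePrompt(userInput);
```

This does not replace provider-side safety filters; it just reduces the chance of malformed or oversized input triggering them.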

原因: 模型未生成任何内容(安全过滤等)。
解决方案:
typescript
import { generateText, NoContentGeneratedError } from 'ai';
import { openai } from '@ai-sdk/openai';

try {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: '某个提示词',
  });
} catch (error) {
  if (error instanceof NoContentGeneratedError) {
    console.error('未生成任何内容');

    // 可能原因:
    // 1. 安全过滤阻止了输出
    // 2. 提示词触发了内容策略
    // 3. 模型配置问题

    // 优雅处理:
    return { text: '无法生成响应,请尝试其他输入。' };
  }
}
预防措施:
  • 清理用户输入
  • 避免可能触发安全过滤的提示词
  • 提供回退消息
  • 记录事件以便分析

8. AI_TypeValidationError

8. AI_TypeValidationError

Cause: Zod schema validation failed on generated output.
Solution:
typescript
import { generateObject, TypeValidationError } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

try {
  const result = await generateObject({
    model: openai('gpt-4'),
    schema: z.object({
      age: z.number().min(0).max(120),  // Strict validation
    }),
    prompt: 'Generate person',
  });
} catch (error) {
  if (error instanceof TypeValidationError) {
    console.error('Validation failed:', error.message);

    // Solutions:
    // 1. Relax schema constraints
    // 2. Add more guidance in prompt
    // 3. Use .optional() for unreliable fields
  }
}
Prevention:
  • Start with lenient schemas, tighten gradually
  • Use
    .optional()
    for fields that may not always be present
  • Add validation hints in field descriptions
  • Test with various prompts

原因: 生成的输出未通过Zod Schema验证。
解决方案:
typescript
import { generateObject, TypeValidationError } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

try {
  const result = await generateObject({
    model: openai('gpt-4'),
    schema: z.object({
      age: z.number().min(0).max(120),  // 严格验证
    }),
    prompt: '生成人物',
  });
} catch (error) {
  if (error instanceof TypeValidationError) {
    console.error('验证失败:', error.message);

    // 解决方案:
    // 1. 放宽Schema约束
    // 2. 在提示词中添加更多指导
    // 3. 对不可靠字段使用.optional()
  }
}
预防措施:
  • 从宽松的Schema开始,逐步收紧
  • 对可能不总是存在的字段使用
    .optional()
  • 在字段描述中添加验证提示
  • 使用各种提示词进行测试

9. AI_RetryError

9. AI_RetryError

Cause: All retry attempts failed.
Solution:
typescript
import { generateText, RetryError } from 'ai';
import { openai } from '@ai-sdk/openai';

try {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: 'Hello',
    maxRetries: 3,  // Default is 2
  });
} catch (error) {
  if (error instanceof RetryError) {
    console.error('All retries failed');
    console.error('Last error:', error.lastError);

    // Check root cause:
    // - Persistent network issue
    // - Provider outage
    // - Invalid configuration
  }
}
Prevention:
  • Investigate root cause of failures
  • Adjust retry configuration if needed
  • Implement circuit breaker pattern for provider outages
  • Have fallback providers
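The circuit breaker pattern mentioned above is plain state tracking, independent of the AI SDK; a minimal sketch (class name and thresholds are illustrative):

```typescript
// Illustrative circuit breaker: after `threshold` consecutive failures
// the circuit opens and calls are rejected until `cooldownMs` elapses,
// at which point a trial ("half-open") request is allowed.
class CircuitBreaker {
  private failures = 0;
  private openedAt: number | null = null;

  constructor(private threshold = 3, private cooldownMs = 30_000) {}

  canRequest(now = Date.now()): boolean {
    if (this.openedAt === null) return true;
    return now - this.openedAt >= this.cooldownMs;
  }

  recordSuccess(): void {
    this.failures = 0;
    this.openedAt = null;
  }

  recordFailure(now = Date.now()): void {
    this.failures += 1;
    if (this.failures >= this.threshold) this.openedAt = now;
  }
}
```

Usage sketch: guard each provider call with `canRequest()`, call `recordSuccess()`/`recordFailure()` after it, and route to a fallback provider while the circuit is open.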

原因: 所有重试尝试均失败。
解决方案:
typescript
import { generateText, RetryError } from 'ai';
import { openai } from '@ai-sdk/openai';

try {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: '你好',
    maxRetries: 3,  // 默认是2
  });
} catch (error) {
  if (error instanceof RetryError) {
    console.error('所有重试均失败');
    console.error('最后错误:', error.lastError);

    // 检查根本原因:
    // - 持续网络问题
    // - 供应商故障
    // - 无效配置
  }
}
预防措施:
  • 调查失败的根本原因
  • 如有需要调整重试配置
  • 为供应商故障实现断路器模式
  • 准备备用供应商

10. Rate Limiting Errors

10. 速率限制错误

Cause: Exceeded provider rate limits (RPM/TPM).
Solution:
typescript
import { generateText, APICallError } from 'ai';
import { openai } from '@ai-sdk/openai';

// Implement exponential backoff on 429 responses
async function generateWithBackoff(prompt: string, retries = 3) {
  for (let i = 0; i < retries; i++) {
    try {
      return await generateText({
        model: openai('gpt-4'),
        prompt,
      });
    } catch (error) {
      if (error instanceof APICallError && error.statusCode === 429) {
        const delay = Math.pow(2, i) * 1000;  // Exponential backoff
        console.log(`Rate limited, waiting ${delay}ms`);
        await new Promise(resolve => setTimeout(resolve, delay));
      } else {
        throw error;
      }
    }
  }
  throw new Error('Rate limit retries exhausted');
}
Prevention:
  • Monitor rate limit headers
  • Queue requests to stay under limits
  • Upgrade provider tier if needed
  • Implement request throttling
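Request throttling from the list above can be implemented as a token bucket; a minimal sketch in plain TypeScript (the class name and parameters are illustrative, not an AI SDK API):

```typescript
// Illustrative token bucket: at most `capacity` requests, with one
// token refilled every `refillMs` milliseconds.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(private capacity: number, private refillMs: number, now = 0) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  tryAcquire(now: number): boolean {
    const refilled = Math.floor((now - this.lastRefill) / this.refillMs);
    if (refilled > 0) {
      this.tokens = Math.min(this.capacity, this.tokens + refilled);
      this.lastRefill += refilled * this.refillMs;
    }
    if (this.tokens === 0) return false;
    this.tokens -= 1;
    return true;
  }
}
```

Before each AI request, call `tryAcquire(Date.now())`; when it returns `false`, queue or delay the request instead of hitting the provider's 429 limit.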

原因: 超出供应商速率限制(RPM/TPM)。
解决方案:
typescript
import { generateText, APICallError } from 'ai';
import { openai } from '@ai-sdk/openai';

// 对429响应实现指数退避
async function generateWithBackoff(prompt: string, retries = 3) {
  for (let i = 0; i < retries; i++) {
    try {
      return await generateText({
        model: openai('gpt-4'),
        prompt,
      });
    } catch (error) {
      if (error instanceof APICallError && error.statusCode === 429) {
        const delay = Math.pow(2, i) * 1000;  // 指数退避
        console.log(`超出速率限制,等待${delay}ms`);
        await new Promise(resolve => setTimeout(resolve, delay));
      } else {
        throw error;
      }
    }
  }
  throw new Error('速率限制重试次数耗尽');
}
预防措施:
  • 监控速率限制头
  • 对请求进行排队以保持在限制内
  • 如有需要升级供应商套餐
  • 实现请求限流

11. TypeScript Performance with Zod

11. TypeScript与Zod的性能问题

Cause: Complex Zod schemas slow down TypeScript type checking.
Solution:
typescript
// Instead of deeply nested schemas at top level:
// const complexSchema = z.object({ /* 100+ fields */ });

// Define inside functions or use type assertions:
function generateData() {
  const schema = z.object({ /* complex schema */ });
  return generateObject({ model: openai('gpt-4'), schema, prompt: '...' });
}

// Or use z.lazy() for recursive schemas:
type Category = { name: string; subcategories?: Category[] };
const CategorySchema: z.ZodType<Category> = z.lazy(() =>
  z.object({
    name: z.string(),
    subcategories: z.array(CategorySchema).optional(),
  })
);
Prevention:
  • Avoid top-level complex schemas
  • Use
    z.lazy()
    for recursive types
  • Split large schemas into smaller ones
  • Use type assertions where appropriate

原因: 复杂Zod Schema会减慢TypeScript类型检查。
解决方案:
typescript
// 不要在顶层定义深度嵌套的Schema:
// const complexSchema = z.object({ /* 100+字段 */ });

// 在函数内部定义或使用类型断言:
function generateData() {
  const schema = z.object({ /* 复杂Schema */ });
  return generateObject({ model: openai('gpt-4'), schema, prompt: '...' });
}

// 或对递归Schema使用z.lazy():
type Category = { name: string; subcategories?: Category[] };
const CategorySchema: z.ZodType<Category> = z.lazy(() =>
  z.object({
    name: z.string(),
    subcategories: z.array(CategorySchema).optional(),
  })
);
预防措施:
  • 避免在顶层定义复杂Schema
  • 对递归类型使用
    z.lazy()
  • 将大型Schema拆分为较小的Schema
  • 适当使用类型断言

12. Invalid JSON Response (Provider-Specific)

12. 无效JSON响应(供应商特定)

Cause: Some models occasionally return invalid JSON.
Solution:
typescript
// Use built-in retry and mode selection
const result = await generateObject({
  model: openai('gpt-4'),
  schema: mySchema,
  prompt: 'Generate data',
  mode: 'json',  // Force JSON mode (supported by GPT-4)
  maxRetries: 3,  // Retry on invalid JSON
});

// Or catch and retry manually:
try {
  const result = await generateObject({
    model: openai('gpt-4'),
    schema: mySchema,
    prompt: 'Generate data',
  });
} catch (error) {
  // Retry with different model
  const result = await generateObject({
    model: openai('gpt-4-turbo'),
    schema: mySchema,
    prompt: 'Generate data',
  });
}
Prevention:
  • Use
    mode: 'json'
    when available
  • Prefer GPT-4 for structured output
  • Implement retry logic
  • Validate responses
GitHub Issue: #4302 (Imagen 3.0 Invalid JSON)

For More Errors: See complete error reference at https://ai-sdk.dev/docs/reference/ai-sdk-errors

原因: 某些模型偶尔会返回无效JSON。
解决方案:
typescript
// 使用内置重试和模式选择
const result = await generateObject({
  model: openai('gpt-4'),
  schema: mySchema,
  prompt: '生成数据',
  mode: 'json',  // 强制JSON模式(GPT-4支持)
  maxRetries: 3,  // 无效JSON时重试
});

// 或手动捕获并重试:
try {
  const result = await generateObject({
    model: openai('gpt-4'),
    schema: mySchema,
    prompt: '生成数据',
  });
} catch (error) {
  // 使用不同模型重试
  const result = await generateObject({
    model: openai('gpt-4-turbo'),
    schema: mySchema,
    prompt: '生成数据',
  });
}
预防措施:
  • 可用时使用
    mode: 'json'
  • 优先使用GPT-4处理结构化输出
  • 实现重试逻辑
  • 验证响应
GitHub Issue: #4302(Imagen 3.0无效JSON)

更多错误: 请查看完整错误参考:https://ai-sdk.dev/docs/reference/ai-sdk-errors

Production Best Practices

生产最佳实践

Performance

性能

1. Always use streaming for long-form content:
typescript
// User-facing: Use streamText
const stream = streamText({ model: openai('gpt-4'), prompt: 'Long essay' });
return stream.toUIMessageStreamResponse();

// Background tasks: Use generateText
const result = await generateText({ model: openai('gpt-4'), prompt: 'Analyze data' });
2. Set appropriate maxOutputTokens:
typescript
const result = await generateText({
  model: openai('gpt-4'),
  prompt: 'Short answer',
  maxOutputTokens: 100,  // Limit tokens to save cost
});
3. Cache provider instances:
typescript
// Good: Reuse provider instances
const gpt4 = openai('gpt-4-turbo');
const result1 = await generateText({ model: gpt4, prompt: 'Hello' });
const result2 = await generateText({ model: gpt4, prompt: 'World' });
4. Optimize Zod schemas:
typescript
// Avoid complex nested schemas at top level in Workers
// Move into route handlers to prevent startup overhead
1. 长篇内容始终使用流式传输:
typescript
// 面向用户:使用streamText
const stream = streamText({ model: openai('gpt-4'), prompt: '长文' });
return stream.toUIMessageStreamResponse();

// 后台任务:使用generateText
const result = await generateText({ model: openai('gpt-4'), prompt: '分析数据' });
2. 设置合适的maxOutputTokens:
typescript
const result = await generateText({
  model: openai('gpt-4'),
  prompt: '简短回答',
  maxOutputTokens: 100,  // 限制令牌数以节省成本
});
3. 缓存供应商实例:
typescript
// 正确:重用供应商实例
const gpt4 = openai('gpt-4-turbo');
const result1 = await generateText({ model: gpt4, prompt: '你好' });
const result2 = await generateText({ model: gpt4, prompt: '世界' });
4. 优化Zod Schema:
typescript
// 在Workers中避免顶层复杂Schema
// 移至路由处理程序以避免启动开销

Error Handling

错误处理

1. Wrap all AI calls in try-catch:
typescript
try {
  const result = await generateText({ /* ... */ });
} catch (error) {
  // Handle specific errors
  if (error instanceof AI_APICallError) { /* ... */ }
  else if (error instanceof AI_NoContentGeneratedError) { /* ... */ }
  else { /* ... */ }
}
2. Implement retry logic:
typescript
const result = await generateText({
  model: openai('gpt-4'),
  prompt: 'Hello',
  maxRetries: 3,
});
3. Log errors properly:
typescript
console.error('AI SDK Error:', {
  type: error.constructor.name,
  message: error.message,
  statusCode: error.statusCode,
  timestamp: new Date().toISOString(),
});
1. 所有AI调用都用try-catch包裹:
typescript
try {
  const result = await generateText({ /* ... */ });
} catch (error) {
  // 处理特定错误
  if (error instanceof AI_APICallError) { /* ... */ }
  else if (error instanceof AI_NoContentGeneratedError) { /* ... */ }
  else { /* ... */ }
}
2. 实现重试逻辑:
typescript
const result = await generateText({
  model: openai('gpt-4'),
  prompt: '你好',
  maxRetries: 3,
});
3. 正确记录错误:
typescript
console.error('AI SDK错误:', {
  type: error.constructor.name,
  message: error.message,
  statusCode: error.statusCode,
  timestamp: new Date().toISOString(),
});

Cost Optimization

成本优化

1. Choose appropriate models:
typescript
// Simple tasks: Use cheaper models
const simple = await generateText({ model: openai('gpt-3.5-turbo'), prompt: 'Hello' });

// Complex reasoning: Use GPT-4
const complex = await generateText({ model: openai('gpt-4'), prompt: 'Analyze...' });
2. Set maxOutputTokens appropriately:
typescript
const result = await generateText({
  model: openai('gpt-4'),
  prompt: 'Summarize in 2 sentences',
  maxOutputTokens: 100,  // Prevent over-generation
});
3. Cache results when possible:
typescript
const cache = new Map();

async function getCachedResponse(prompt: string) {
  if (cache.has(prompt)) return cache.get(prompt);

  const result = await generateText({ model: openai('gpt-4'), prompt });
  cache.set(prompt, result.text);
  return result.text;
}
1. 选择合适的模型:
typescript
// 简单任务:使用更便宜的模型
const simple = await generateText({ model: openai('gpt-3.5-turbo'), prompt: '你好' });

// 复杂推理:使用GPT-4
const complex = await generateText({ model: openai('gpt-4'), prompt: '分析...' });
2. 适当设置maxOutputTokens:
typescript
const result = await generateText({
  model: openai('gpt-4'),
  prompt: '用2句话总结',
  maxOutputTokens: 100,  // 防止过度生成
});
3. 可能时缓存结果:
typescript
const cache = new Map();

async function getCachedResponse(prompt: string) {
  if (cache.has(prompt)) return cache.get(prompt);

  const result = await generateText({ model: openai('gpt-4'), prompt });
  cache.set(prompt, result.text);
  return result.text;
}
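
The in-memory Map cache above grows without bound and never invalidates. A minimal TTL variant (an illustrative helper, not an SDK API) expires entries after a fixed window; the fetcher is injected so it works with any async call, e.g. `() => generateText({ model: openai('gpt-4'), prompt })`:

```typescript
// Hypothetical TTL cache: entries expire ttlMs milliseconds after being set.
type Entry = { value: string; expiresAt: number };

function createTtlCache(ttlMs: number) {
  const store = new Map<string, Entry>();

  return async function getCached(
    key: string,
    fetcher: () => Promise<string>,
  ): Promise<string> {
    const hit = store.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.value; // fresh hit

    const value = await fetcher(); // miss or expired: refetch
    store.set(key, { value, expiresAt: Date.now() + ttlMs });
    return value;
  };
}
```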

Cloudflare Workers Specific

Cloudflare Workers特定

1. Move imports inside handlers:
typescript
// Avoid startup overhead
export default {
  async fetch(request, env) {
    const { generateText } = await import('ai');
    const { openai } = await import('@ai-sdk/openai');
    // Use here
  }
}
2. Monitor startup time:
bash
# Wrangler reports startup time
wrangler deploy
# Check output for startup duration (must be <400ms)
1. 将导入移至处理程序内部:
typescript
// 避免启动开销
export default {
  async fetch(request, env) {
    const { generateText } = await import('ai');
    const { openai } = await import('@ai-sdk/openai');
    // 在此处使用
  }
}
2. 监控启动时间:
bash
# Wrangler会报告启动时间
wrangler deploy
# 检查输出中的启动时长(必须<400ms)

3. Handle streaming properly:
typescript
// Return a streaming Response; toTextStreamResponse() sets the
// text/plain; charset=utf-8 headers for you
const stream = streamText({ model: openai('gpt-4'), prompt: 'Hello' });
return stream.toTextStreamResponse();
3. 正确处理流式传输:
typescript
// 返回流式Response;toTextStreamResponse()会自动设置
// text/plain; charset=utf-8响应头
const stream = streamText({ model: openai('gpt-4'), prompt: '你好' });
return stream.toTextStreamResponse();

Next.js / Vercel Specific

Next.js / Vercel特定

1. Use Server Actions for mutations:
typescript
'use server';

export async function generateContent(input: string) {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: input,
  });
  return result.text;
}
2. Use Server Components for initial loads:
typescript
// app/page.tsx
export default async function Page() {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: 'Welcome message',
  });

  return <div>{result.text}</div>;
}
3. Implement loading states:
typescript
'use client';

import { useState } from 'react';
import { generateContent } from './actions';

export default function Form() {
  const [loading, setLoading] = useState(false);

  async function handleSubmit(formData: FormData) {
    setLoading(true);
    try {
      const result = await generateContent(formData.get('input') as string);
    } finally {
      setLoading(false);  // reset even if generation throws
    }
  }

  return (
    <form action={handleSubmit}>
      <input name="input" />
      <button disabled={loading}>
        {loading ? 'Generating...' : 'Submit'}
      </button>
    </form>
  );
}
4. For deployment: See Vercel's official deployment documentation: https://vercel.com/docs/functions

1. 使用Server Actions处理变更:
typescript
'use server';

export async function generateContent(input: string) {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: input,
  });
  return result.text;
}
2. 使用Server Components处理初始加载:
typescript
// app/page.tsx
export default async function Page() {
  const result = await generateText({
    model: openai('gpt-4'),
    prompt: '欢迎消息',
  });

  return <div>{result.text}</div>;
}
3. 实现加载状态:
typescript
'use client';

import { useState } from 'react';
import { generateContent } from './actions';

export default function Form() {
  const [loading, setLoading] = useState(false);

  async function handleSubmit(formData: FormData) {
    setLoading(true);
    try {
      const result = await generateContent(formData.get('input') as string);
    } finally {
      setLoading(false);  // 即使生成失败也要重置加载状态
    }
  }

  return (
    <form action={handleSubmit}>
      <input name="input" />
      <button disabled={loading}>
        {loading ? '生成中...' : '提交'}
      </button>
    </form>
  );
}
4. 部署: 请查看Vercel官方部署文档:https://vercel.com/docs/functions

When to Use This Skill

何时使用本技能

Use ai-sdk-core when:

使用ai-sdk-core的场景:

  • Building backend AI features (server-side text generation)
  • Implementing server-side text generation (Node.js, Workers, Next.js)
  • Creating structured AI outputs (JSON, forms, data extraction)
  • Building AI agents with tools (multi-step workflows)
  • Integrating multiple AI providers (OpenAI, Anthropic, Google, Cloudflare)
  • Migrating from AI SDK v4 to v5
  • Encountering AI SDK errors (AI_APICallError, AI_NoObjectGeneratedError, etc.)
  • Using AI in Cloudflare Workers (with workers-ai-provider)
  • Using AI in Next.js Server Components/Actions
  • Need consistent API across different LLM providers
  • 构建后端AI功能(服务端文本生成)
  • 实现服务端文本生成(Node.js、Workers、Next.js)
  • 创建结构化AI输出(JSON、表单、数据提取)
  • 构建带工具的AI代理(多步骤工作流)
  • 集成多个AI供应商(OpenAI、Anthropic、Google、Cloudflare)
  • 从AI SDK v4迁移到v5
  • 遇到AI SDK错误(AI_APICallError、AI_NoObjectGeneratedError等)
  • 在Cloudflare Workers中使用AI(结合workers-ai-provider)
  • 在Next.js Server Components/Actions中使用AI
  • 需要跨不同LLM供应商的一致API

Don't use this skill when:

不使用本技能的场景:

  • Building React chat UIs (use ai-sdk-ui skill instead)
  • Need frontend hooks like useChat (use ai-sdk-ui skill instead)
  • Need advanced topics like embeddings or image generation (check official docs)
  • Building native Cloudflare Workers AI apps without multi-provider (use cloudflare-workers-ai skill instead)
  • Need Generative UI / RSC (see https://ai-sdk.dev/docs/ai-sdk-rsc)

  • 构建React聊天UI(改用ai-sdk-ui技能)
  • 需要useChat等前端钩子(改用ai-sdk-ui技能)
  • 需要嵌入或图像生成等高级主题(查看官方文档)
  • 构建无多供应商支持的原生Cloudflare Workers AI应用(改用cloudflare-workers-ai技能)
  • 需要生成式UI / RSC(查看https://ai-sdk.dev/docs/ai-sdk-rsc)

Dependencies & Versions

依赖与版本

json
{
  "dependencies": {
    "ai": "^5.0.81",
    "@ai-sdk/openai": "^2.0.56",
    "@ai-sdk/anthropic": "^2.0.38",
    "@ai-sdk/google": "^2.0.24",
    "workers-ai-provider": "^2.0.0",
    "zod": "^3.23.8"
  },
  "devDependencies": {
    "@types/node": "^20.11.0",
    "typescript": "^5.3.3"
  }
}
Version Notes:
  • AI SDK v5.0.81+ (stable, latest as of October 2025)
  • v6 is in beta - not covered in this skill
  • Zod compatibility: This skill uses Zod 3.x, but AI SDK 5 officially supports both Zod 3.x and Zod 4.x (4.1.12 latest)
    • Zod 4 recommended for new projects (released August 2025)
    • Zod 4 has breaking changes: error APIs, .default() behavior, ZodError.errors removed
    • Some peer dependency warnings may occur with zod-to-json-schema when using Zod 4
    • See https://zod.dev/v4/changelog for migration guide
  • Provider packages at 2.0+ for v5 compatibility
Check Latest Versions:
bash
npm view ai version
npm view @ai-sdk/openai version
npm view @ai-sdk/anthropic version
npm view @ai-sdk/google version
npm view workers-ai-provider version
npm view zod version  # Check for Zod 4.x updates

json
{
  "dependencies": {
    "ai": "^5.0.81",
    "@ai-sdk/openai": "^2.0.56",
    "@ai-sdk/anthropic": "^2.0.38",
    "@ai-sdk/google": "^2.0.24",
    "workers-ai-provider": "^2.0.0",
    "zod": "^3.23.8"
  },
  "devDependencies": {
    "@types/node": "^20.11.0",
    "typescript": "^5.3.3"
  }
}
版本说明:
  • AI SDK v5.0.81+(稳定版,截至2025年10月为最新版)
  • v6处于测试阶段 - 本技能未覆盖
  • Zod兼容性:本技能使用Zod 3.x,但AI SDK 5官方支持Zod 3.x和Zod 4.x(最新为4.1.12)
    • 新项目推荐使用Zod 4(2025年8月发布)
    • Zod 4有破坏性变更:错误API、.default()行为、移除ZodError.errors
    • 使用Zod 4时,zod-to-json-schema可能会出现一些对等依赖警告
    • 查看https://zod.dev/v4/changelog获取迁移指南
  • 供应商包版本为2.0+以兼容v5
检查最新版本:
bash
npm view ai version
npm view @ai-sdk/openai version
npm view @ai-sdk/anthropic version
npm view @ai-sdk/google version
npm view workers-ai-provider version
npm view zod version  # 检查Zod 4.x更新

Links to Official Documentation

官方文档链接

Core Documentation

核心文档

Advanced Topics (Not Replicated in This Skill)

高级主题(本技能未覆盖)

Migration & Troubleshooting

迁移与故障排除

Provider Documentation

供应商文档

Cloudflare Integration

Cloudflare集成

Vercel / Next.js Integration

Vercel / Next.js集成

GitHub & Community

GitHub与社区

Templates & References

模板与参考

This skill includes:
  • 13 Templates: Ready-to-use code examples in templates/
  • 5 Reference Docs: Detailed guides in references/
  • 1 Script: Version checker in scripts/
All files are optimized for copy-paste into your project.

Last Updated: 2025-10-29 Skill Version: 1.1.0 AI SDK Version: 5.0.81+
本技能包含:
  • 13个模板:templates/目录下的即用型代码示例
  • 5个参考文档:references/目录下的详细指南
  • 1个脚本:scripts/目录下的版本检查器
所有文件均已优化,可直接复制粘贴到你的项目中。

最后更新: 2025-10-29 技能版本: 1.1.0 AI SDK版本: 5.0.81+