AI UI Patterns


Building AI-powered interfaces – from chatbots to intelligent assistants – requires careful integration of backend AI services with reactive UI components. In this chapter, we explore design patterns in React for such interfaces, focusing on two implementations: a plain React app (using Vite) and a Next.js app. We'll use OpenAI's API (via the Vercel AI SDK) as our AI engine, and TailwindCSS for styling. Key topics include prompt management, streaming responses, input debouncing, error handling, and how these patterns differ between Vite and Next.js. We also highlight reusable component patterns and Vercel's AI UI components (AI Elements) for building polished chat UIs.

When to Use


  • Use this when building conversational AI interfaces that stream responses from LLMs
  • This is helpful for integrating OpenAI, Anthropic, or other AI providers into React applications
  • Use this when you need patterns for prompt management, streaming, error handling, and AI-specific UI

Instructions


  • Use the Vercel AI SDK's `useChat` hook for managing conversation state and streaming responses
  • Keep API keys on the server: use Next.js API routes or a separate backend for AI calls
  • Enable streaming (`stream: true`) for responsive real-time output in chat interfaces
  • Debounce input for autocomplete features; disable input during response streaming for chat
  • Build reusable components (ChatMessage, InputBox) decoupled from data-fetching logic

Details


Note: While this article uses OpenAI as an example, the Vercel AI SDK supports multiple model providers including Gemini, OpenAI, and Anthropic. You can easily swap between providers through the SDK's unified interface – we're just choosing one option for demonstration purposes.

Introduction: AI Interfaces in React


AI-driven user interfaces (UIs) have become popular with the rise of LLMs like ChatGPT. Unlike traditional UIs, AI interfaces often involve conversational interactions, dynamic content streaming, and asynchronous backend calls. This introduces unique challenges and patterns for React developers. A typical AI chat interface consists of a frontend (for user input and displaying responses) and a backend (to call the AI model). The backend is essential to keep API keys and heavy processing off the client for security and performance. Tools like Vercel's AI SDK make it easier to connect to providers (OpenAI, HuggingFace, etc.) and stream responses in real-time. We'll explore how to set up both a Next.js app and a Vite (React) app to handle these concerns, and discuss best practices that apply to both.
Key patterns covered:
  • Structuring AI prompt data and managing conversation state
  • Streaming AI responses to the UI for real-time feedback
  • Debouncing user input to avoid spamming the API
  • Error handling and fallbacks in the UX
  • Reusable UI components for messages, inputs, and more (with TailwindCSS)
  • Architectural differences: Next.js route handlers vs. Vite with a Node backend
By the end, you'll be equipped to build a responsive, robust AI-powered UI in React, whether you prefer Next.js or a Vite toolchain.

Project Setup and Tools


Before diving into code, ensure you have the necessary packages and configurations:
  • React & Vite: Initialize a Vite + React project (e.g. `npm create vite@latest my-ai-app -- --template react`). For Next.js, you can use `npx create-next-app` or the Next 13 App Router templates. Both will work; we'll highlight differences as we go.
  • TailwindCSS: Set up Tailwind in your project for quick styling.
  • OpenAI API & Vercel AI SDK: Install OpenAI's library or the Vercel AI SDK. We will use Vercel's AI SDK (`npm i ai`), which provides helpful React hooks (`useChat`, `useCompletion`) and server utilities. This SDK is framework-agnostic, working with Next.js, vanilla React, Svelte, and more. It simplifies streaming and state management, and is free/open-source.
  • API Keys: Get your OpenAI API key from the OpenAI dashboard and store it safely. In Next.js, put it in `.env.local` (e.g. `OPENAI_API_KEY=sk-...`) and never commit it. In a Vite app, do not expose the key in client code; instead, use a backend proxy or environment variable on the server.

Setting Up AI Endpoints (Next.js vs. Vite)


Next.js Implementation: Next.js allows us to create route handlers as serverless functions. We can define an API route that the React front-end will call for AI responses:
```typescript
// app/api/chat/route.ts (Next.js)
import { Configuration, OpenAIApi } from 'openai-edge';
import { OpenAIStream, StreamingTextResponse } from 'ai';

export const runtime = 'edge';

const config = new Configuration({ apiKey: process.env.OPENAI_API_KEY });
const openai = new OpenAIApi(config);

export async function POST(req: Request) {
  const { messages } = await req.json();
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages: messages.map((m: any) => ({ role: m.role, content: m.content }))
  });
  const stream = OpenAIStream(response);
  return new StreamingTextResponse(stream);
}
```
In this handler, we receive a JSON body containing an array of messages (chat history). We call OpenAI's chat completion with `stream: true` to get a streaming response. We then wrap the response in a `StreamingTextResponse` provided by the AI SDK to pipe it back to the client in chunks. The Next.js API route keeps our API key on the server and streams data efficiently.
Vite (React) Implementation: In a Vite app, there's no built-in server, so we need to create our own backend for the OpenAI calls. This can be a simple Node/Express server:
```javascript
// backend/server.js (Node/Express for Vite app)
import express from 'express';
import { Configuration, OpenAIApi } from 'openai';

const app = express();
app.use(express.json());

const config = new Configuration({ apiKey: process.env.OPENAI_API_KEY });
const openai = new OpenAIApi(config);

app.post('/api/chat', async (req, res) => {
  try {
    const { messages = [] } = req.body;
    const systemMsg = { role: 'system', content: 'You are a helpful assistant.' };
    const inputMessages = [systemMsg, ...messages];
    const response = await openai.createChatCompletion({
      model: 'gpt-3.5-turbo',
      stream: false,
      messages: inputMessages
    });
    const content = response.data.choices[0].message?.content;
    res.json({ content });
  } catch (err) {
    console.error(err);
    res.status(500).json({ error: 'Internal Server Error' });
  }
});

app.listen(6000, () => console.log('API server listening on http://localhost:6000'));
```
During development, you can configure the Vite dev server to proxy `/api` calls to this backend (e.g. in `vite.config.js`, set `server.proxy['/api'] = 'http://localhost:6000'`). The key is that the React app calls a relative `/api/chat` endpoint, which the proxy/hosting will route to your server code. This keeps the OpenAI key hidden.
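Wired into a config file, that proxy rule might look like the sketch below (assuming the Express backend above on `localhost:6000` and the `@vitejs/plugin-react` plugin; adjust both to your setup):

```javascript
// vite.config.js -- minimal dev-proxy sketch; the port must match your backend.
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
  plugins: [react()],
  server: {
    proxy: {
      // Forward /api/* requests to the Node backend during development
      '/api': 'http://localhost:6000',
    },
  },
});
```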
Enabling Streaming in Node: The above Express example returns the full response after completion (`stream: false` for simplicity). To stream in Node, you can use OpenAI's HTTP stream: set `stream: true` and handle the response as a stream of data. This involves reading the `response.data` stream and flushing chunks to the client with `res.write()`. If you choose to stick with full responses (no streaming), the UI patterns still largely apply, but streaming greatly improves UX.
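The chunk-relay part can be sketched independently of the OpenAI client. The `relayStream` helper below is hypothetical: it assumes the model response has already been adapted into an async iterable of text chunks (the exact shape depends on your OpenAI client version):

```javascript
// Hypothetical helper: flush text chunks to an Express response as they
// arrive, then close the response when the source is exhausted.
async function relayStream(chunks, res) {
  res.setHeader('Content-Type', 'text/plain; charset=utf-8');
  for await (const chunk of chunks) {
    res.write(chunk); // each chunk reaches the client immediately
  }
  res.end();
}
```

Because the helper only depends on `write`/`end`, it is easy to unit-test with a fake response object before wiring it to a real model stream.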

Prompt Handling and Conversation State


At the heart of any AI interface is prompt management – assembling user input (and context) into a prompt or message sequence for the AI model. In a chat scenario, we maintain a list of messages, each with a role and content. OpenAI's Chat API expects messages in the format `{ role: 'user' | 'assistant' | 'system', content: string }`. We typically start with a system message (to set the assistant's behavior or context), followed by alternating user and assistant messages as the conversation progresses.
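As a sketch, assembling that message array with an optional system prompt can be done with a small pure helper (`buildMessages` is a hypothetical name, not an SDK function):

```javascript
// Hypothetical helper: build the message array in the shape OpenAI's Chat API
// expects -- optional system message first, then prior turns, then the new
// user message.
function buildMessages(history, userText, systemPrompt) {
  const messages = systemPrompt
    ? [{ role: 'system', content: systemPrompt }]
    : [];
  return [...messages, ...history, { role: 'user', content: userText }];
}
```

Keeping this logic pure makes it trivial to test and reuse on either the client or the server.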
State management in React: We can store the conversation in component state. Using the Vercel SDK's React hook:
```jsx
import { useChat } from 'ai/react';

function ChatInterface() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();
  // ...
}
```
The `useChat` hook handles a lot for us: it manages the `messages` state (an array of message objects), an `input` state for the current text input, and provides `handleInputChange` and `handleSubmit` helpers. By default, `useChat()` will POST to `/api/chat` when you submit.
Manual state handling: If you aren't using `useChat`, you can manage state with `useState` or context. On form submit, call your API and then update the messages array by appending the user query and the assistant's response.
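A minimal sketch of that flow, assuming the non-streaming Express endpoint from earlier (the `sendChat` helper and its injectable `fetchImpl` parameter are illustrative, not SDK APIs):

```javascript
// Hypothetical helper: append the user's message, call the backend, and
// return the updated message list including the assistant's reply.
async function sendChat(messages, userText, fetchImpl = fetch) {
  const next = [...messages, { role: 'user', content: userText }];
  const res = await fetchImpl('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages: next }),
  });
  if (!res.ok) throw new Error(`Chat request failed: ${res.status}`);
  const { content } = await res.json();
  return [...next, { role: 'assistant', content }];
}
```

In a component you would call this from the submit handler and pass the result to `setMessages`.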
System prompts and context: A common pattern is including an initial system message describing the assistant's role or knowledge base. For example, if building a docs helper, system content might be "You are a documentation assistant. Answer with examples from the docs."
Single-turn vs multi-turn: If your interface answers single questions (no conversation memory), you could use the `useCompletion` hook from the Vercel SDK instead. For chatbots and multi-turn dialogs, `useChat` is the go-to pattern, since it retains and sends the message history on each request.

Streaming AI Responses to the UI


One hallmark of modern AI UI is streaming output: as the AI generates tokens, the user sees the reply appearing in real-time. This is crucial for better UX because model-generated answers can be lengthy or slow. Instead of waiting many seconds in silence, streaming lets us display partial results immediately.
How streaming works: When we enabled `stream: true` on the OpenAI API, the response is sent as a sequence of chunks (data events) rather than one JSON blob. The Vercel AI SDK simplifies consumption of these chunks. On the server, we turned the response into a text stream (`StreamingTextResponse`). On the client side, the `useChat` hook handles reading this stream and updating the messages state incrementally as new text arrives.
If you implement streaming manually in React (without the SDK), you would do something like:
```javascript
const res = await fetch('/api/chat', { method: 'POST', body: JSON.stringify({ messages }) });
const reader = res.body.getReader();
const decoder = new TextDecoder();
let partial = "";
while (true) {
  const { value, done } = await reader.read();
  if (done) break;
  // { stream: true } keeps multi-byte characters intact across chunk boundaries
  partial += decoder.decode(value, { stream: true });
  setAssistantMessage(partial);
}
```
Auto-scrolling: One UX detail when streaming is ensuring the latest message is visible. A pattern to handle this is auto-scrolling the message container on update with a `useEffect` watching the messages array length.
Partial rendering and completion: Show a visual indicator during streaming – for example, a blinking cursor or "AI is typing…" message. Once the stream finishes, finalize the message display.

Input Handling and Debouncing


For chat interactions, you usually send the query when the user submits the form. In some AI applications, however, you might want to react to input continuously – for example, autocomplete suggestions or real-time validation by AI. In such cases, debouncing is important.
Why debounce? Calling the OpenAI API on every keystroke would be extremely inefficient and costly. Debouncing delays the API call until the user has stopped typing for a short period.
```jsx
const [draft, setDraft] = useState("");

useEffect(() => {
  if (!draft) return;
  const timeout = setTimeout(() => {
    getSuggestion(draft);
  }, 500);
  return () => clearTimeout(timeout);
}, [draft]);
```
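The timeout juggling can also be factored into a standalone utility. In this sketch the timer functions are injectable purely so the behavior is easy to test without real delays:

```javascript
// Minimal debounce: delays `fn` until `wait` ms pass without a new call.
function debounce(fn, wait, { setTimer = setTimeout, clearTimer = clearTimeout } = {}) {
  let handle = null;
  return (...args) => {
    if (handle !== null) clearTimer(handle); // a new call cancels the pending one
    handle = setTimer(() => {
      handle = null;
      fn(...args);
    }, wait);
  };
}
```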
For a simple chatbot with explicit "send" action, debouncing is usually not needed – you send when the user hits Enter. However, it's still useful to disable the input or prevent multiple submissions while an AI response is in progress.

Error Handling and Resilience


Robust error handling is vital in AI applications:
  • Try/Catch around API calls: On the server, wrap the OpenAI call in try/catch. Return a proper error response if something fails.
  • Client-side error state: Handle cases where the response indicates an error.
```javascript
try {
  await sendMessage({ text: input });
} catch (error) {
  console.error("Failed to send message:", error);
}
```
  • User feedback: Always inform the user when something goes wrong. Display the error inline in the chat – e.g., as a special "system" message saying "Sorry, something went wrong. Please try again."
  • Retry mechanism: Consider allowing the user to retry with a "Try again" button.
  • Validation errors: Validate on the client before calling the API. Disable send on empty input, or truncate inputs that exceed some length.
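A retry wrapper along those lines might look like the sketch below (`withRetry` is a hypothetical helper, not part of the AI SDK; the attempt count and delay are arbitrary, and the sleep function is injectable for testing):

```javascript
// Hypothetical helper: retry a failing async call up to `attempts` times,
// waiting `delayMs` between tries, and rethrow the last error on exhaustion.
async function withRetry(fn, attempts = 3, delayMs = 500,
                         sleep = (ms) => new Promise((r) => setTimeout(r, ms))) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (i < attempts - 1) await sleep(delayMs);
    }
  }
  throw lastErr;
}
```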

Building the UI: Components and Styling Patterns


Chat message components: Create a `ChatMessage` component that renders a single message bubble. Based on role, style it differently:
```jsx
function ChatMessage({ role, content }) {
  const isUser = role === 'user';
  return (
    <div className={`flex ${isUser ? 'justify-end' : 'justify-start'} mb-2`}>
      <div className={`max-w-xl px-4 py-2 rounded-lg ${
        isUser ? 'bg-blue-500 text-white' : 'bg-gray-200 text-gray-900'
      }`}>
        {content}
      </div>
    </div>
  );
}
```
Input component:
```jsx
function InputBox({ value, onChange, onSubmit, disabled }) {
  return (
    <form onSubmit={onSubmit} className="flex gap-2">
      <input
        type="text"
        value={value}
        onChange={onChange}
        disabled={disabled}
        className="flex-1 border rounded px-3 py-2"
        placeholder="Type your message..."
      />
      <button type="submit" disabled={disabled} className="bg-blue-500 text-white px-4 py-2 rounded">
        Send
      </button>
    </form>
  );
}
```
Composition:
```jsx
function ChatInterface() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat();

  return (
    <div className="flex flex-col h-screen max-w-2xl mx-auto p-4">
      <div className="flex-1 overflow-y-auto">
        {messages.map((msg, i) => (
          <ChatMessage key={i} role={msg.role} content={msg.content} />
        ))}
      </div>
      <InputBox
        value={input}
        onChange={handleInputChange}
        onSubmit={handleSubmit}
        disabled={isLoading}
      />
    </div>
  );
}
```
This separation of concerns makes it easy to test and swap UI parts. The logic (`useChat`) is decoupled from the display components.

Vercel AI Elements (Pre-Built Chat UI Components)


Vercel's AI Elements library offers a set of ready-made React components specifically designed for AI chat interfaces:
  • Conversation: A container that renders a list of messages with auto-scrolling.
  • Prompt: An input component optimized for chat prompts.
  • TypingIndicator: Shows when the AI is "thinking" or streaming a response.
  • ErrorBoundary/ErrorMessage: Handle and display errors gracefully.
```jsx
import { Conversation, Prompt, TypingIndicator } from '@vercel/ai-elements';

function ChatApp() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat();

  return (
    <div className="h-screen flex flex-col">
      <Conversation messages={messages} />
      {isLoading && <TypingIndicator />}
      <Prompt
        value={input}
        onChange={handleInputChange}
        onSubmit={handleSubmit}
      />
    </div>
  );
}
```

Putting It All Together


  1. Backend API Route: Whether using Next.js route handlers or a separate Express server, create an endpoint that receives messages, calls the AI model, and streams the response back.
  2. State Management: Use the Vercel AI SDK's `useChat` hook (or roll your own with `useState`) to manage the conversation state.
  3. Streaming: Enable streaming on both server and client for responsive UX.
  4. Debouncing & Rate Limiting: For features like autocomplete, debounce API calls. For chat, disable input during response streaming.
  5. Error Handling: Wrap API calls in try/catch, provide user feedback on errors, and consider retry mechanisms.
  6. Reusable Components: Build presentational components (`ChatMessage`, `InputBox`) that are decoupled from data-fetching logic. Consider using AI Elements for production-ready components.
  7. Styling: Use TailwindCSS (or your preferred styling solution) to create a clean, responsive chat interface.

Architectural Comparison: Next.js vs. Vite


| Aspect      | Next.js                               | Vite + Node Backend                           |
| ----------- | ------------------------------------- | --------------------------------------------- |
| API Routes  | Built-in (`pages/api/` or `app/api/`) | Separate Express/Node server required         |
| Streaming   | Native support with Edge Runtime      | Manual implementation with `res.write()`      |
| Deployment  | Vercel (optimized) or self-host       | Deploy frontend (static) + backend separately |
| Complexity  | Lower (all-in-one)                    | Higher (two codebases)                        |
| Flexibility | Framework conventions                 | Full control                                  |
For most AI chat applications, Next.js provides a simpler developer experience with its integrated API routes and streaming support. However, if you have an existing Vite/React app or prefer more control, the patterns described here work well with a separate backend.
