create-agent-with-sanity-context

Build an Agent with Sanity Context

Give AI agents intelligent access to your Sanity content. Unlike embedding-only approaches, Context MCP is schema-aware—agents can reason over your content structure, query with real field values, follow references, and combine structural filters with semantic search.
What this enables:
  • Agents understand the relationships between your content types
  • Queries use actual schema fields, not just text similarity
  • Results respect your content model (categories, tags, references)
  • Semantic search is available when needed, layered on structure
Note: Context MCP understands your schema structure but not your domain. You'll provide domain context (what your content is for, how to use it) through the agent's system prompt.

What You'll Need

Before starting, gather these credentials:
| Credential | Where to get it |
| --- | --- |
| Sanity Project ID | Your `sanity.config.ts` or sanity.io/manage |
| Dataset name | Usually `production` — check your `sanity.config.ts` |
| Sanity API read token | Create at sanity.io/manage → Project → API → Tokens. See HTTP Auth docs |
| LLM API key | From your LLM provider (Anthropic, OpenAI, etc.) — any provider works |
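A typical setup reads these values from environment variables. A minimal sketch of an env file, assuming common variable names; Sanity does not mandate these, so match whatever your code actually reads:

```bash
# Example .env.local; the variable names here are a common convention, not required
SANITY_PROJECT_ID=your-project-id
SANITY_DATASET=production
SANITY_API_READ_TOKEN=your-read-token   # from sanity.io/manage -> API -> Tokens
ANTHROPIC_API_KEY=your-llm-api-key      # or another provider's key
```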

How Context MCP Works

Context MCP is an MCP server that gives AI agents structured access to your Sanity content. The core integration pattern:
  1. MCP Connection: HTTP transport to the Context MCP URL
  2. Authentication: Bearer token using Sanity API read token
  3. Tool Discovery: Get available tools from MCP client, pass to LLM
  4. System Prompt: Domain-specific instructions that shape agent behavior
MCP URL formats:
  • https://api.sanity.io/:apiVersion/agent-context/:projectId/:dataset
    — Access all content in the dataset
  • https://api.sanity.io/:apiVersion/agent-context/:projectId/:dataset/:slug
    — Access filtered content (requires agent context document with that slug)
The slug-based URL uses the GROQ filter defined in your agent context document to scope what content the agent can access. Use this for production agents that should only see specific content types.
The integration is simple: Connect to the MCP URL, get tools, use them. The reference implementation shows one way to do this—adapt to your stack and LLM provider.
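The two URL formats above can be captured in a small helper. A sketch in TypeScript; `buildContextMcpUrl` is a hypothetical name, and the version, project, and dataset values are placeholders:

```typescript
// Hypothetical helper that assembles a Context MCP endpoint URL.
// Pass a slug to get the filtered (agent-context-scoped) endpoint.
function buildContextMcpUrl(
  apiVersion: string,
  projectId: string,
  dataset: string,
  slug?: string,
): string {
  const base = `https://api.sanity.io/${apiVersion}/agent-context/${projectId}/${dataset}`;
  return slug ? `${base}/${slug}` : base;
}

// Full-dataset access (development) vs. slug-scoped access (production):
const devUrl = buildContextMcpUrl("vX", "yourProjectId", "production");
const scopedUrl = buildContextMcpUrl("vX", "yourProjectId", "production", "your-agent-slug");
```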

Available MCP Tools

| Tool | Purpose |
| --- | --- |
| `initial_context` | Get compressed schema overview (types, fields, document counts) |
| `groq_query` | Execute GROQ queries with optional semantic search |
| `schema_explorer` | Get detailed schema for a specific document type |
For development and debugging: The general Sanity MCP provides broader access to your Sanity project (schema deployment, document management, etc.). Useful during development but not intended for customer-facing applications.

Before You Start: Understand the User's Situation

A complete integration has three distinct components that may live in different places:
| Component | What it is | Examples |
| --- | --- | --- |
| 1. Studio Setup | Configure the context plugin and create agent context documents | Sanity Studio (separate repo or embedded) |
| 2. Agent Implementation | Code that connects to Context MCP and handles LLM interactions | Next.js API route, Express server, Python service, or any MCP-compatible client |
| 3. Frontend (Optional) | UI for users to interact with the agent | Chat widget, search interface, CLI — or none for backend services |
Studio setup and agent implementation are required. Frontend is optional—many agents run as backend services or integrate into existing UIs.
Ask the user which part they need help with:
  • Components in different repos (most common): You may only have access to one component. Complete what you can, then tell the user what steps remain for the other repos.
  • Co-located components: All three in the same project—work through them one at a time (Studio → Agent → Frontend).
  • Already on step 2 or 3: If you can't find a Studio in the codebase, ask the user if Studio setup is complete.
Also understand:
  1. Their stack: What framework/runtime? (Next.js, Remix, Node server, Python, etc.)
  2. Their AI library: Vercel AI SDK, LangChain, direct API calls, etc.
  3. Their domain: What will the agent help with? (Shopping, docs, support, search, etc.)
The reference patterns use Next.js + Vercel AI SDK, but adapt to whatever the user is working with.

Workflow

Quick Validation (Optional)

Before building an agent, you can validate MCP access directly using the base URL (no slug required):
```bash
curl -X POST https://api.sanity.io/YOUR_API_VERSION/agent-context/YOUR_PROJECT_ID/YOUR_DATASET \
  -H "Authorization: Bearer $SANITY_API_READ_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "method": "tools/list", "id": 1}'
```
This confirms your token works and the MCP endpoint is reachable. The base URL gives access to all content—useful for testing before setting up content filters via agent context documents.
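The same check can be made from code. A sketch in TypeScript; `buildToolsListRequest` is a hypothetical helper, and the JSON-RPC body mirrors the curl example above:

```typescript
// Hypothetical helper that prepares the JSON-RPC tools/list request for fetch.
function buildToolsListRequest(token: string) {
  return {
    method: "POST" as const,
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ jsonrpc: "2.0", method: "tools/list", id: 1 }),
  };
}

// Usage (not run here: requires a real token and network access):
// const res = await fetch(mcpUrl, buildToolsListRequest(process.env.SANITY_API_READ_TOKEN!));
// const rpc = await res.json(); // a JSON-RPC response describing the available tools
```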

Step 1: Set up Sanity Studio

Configure the context plugin and create agent context documents to scope what content the agent can access.
See references/studio-setup.md

Step 2: Build the Agent (Adapt to user's stack)

Already have an agent or MCP client? You just need to connect it to your Context MCP URL with a Bearer token. The tools will appear automatically.
Building from scratch? The reference implementation uses Next.js + Vercel AI SDK with Anthropic, but the pattern works with any LLM provider (OpenAI, local models, etc.). It's comprehensive—covering everything from basic chat to advanced patterns. Start with the basics and add advanced patterns as needed.
See references/nextjs-agent.md
The reference covers:
  • Core setup (required): MCP connection, authentication, basic chat route
  • System prompts (required): Domain-specific instructions for your agent
  • Frontend (optional): React chat component
  • Advanced patterns (optional): Client-side tools, auto-continuation, custom rendering
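As a concrete illustration of the system-prompt step, a small helper that assembles domain context into a prompt string; the structure, field names, and wording here are illustrative, not a required format:

```typescript
// Illustrative sketch: assemble a domain-specific system prompt for the agent.
// The sections and rules are examples; shape them to your own domain.
interface PromptConfig {
  domain: string;         // e.g. "a documentation site"
  contentTypes: string[]; // schema types the agent should query
  rules: string[];        // explicit formatting and forbidden-behavior rules
}

function buildSystemPrompt({ domain, contentTypes, rules }: PromptConfig): string {
  return [
    `You are an assistant for ${domain}.`,
    `Use the Sanity MCP tools to answer questions from real content.`,
    `Relevant document types: ${contentTypes.join(", ")}.`,
    `Always include _id in GROQ projections so results can be referenced.`,
    `Rules:`,
    ...rules.map((rule) => `- ${rule}`),
  ].join("\n");
}

const prompt = buildSystemPrompt({
  domain: "a documentation site",
  contentTypes: ["article", "category"],
  rules: ["Never invent content that is not in the dataset.", "Answer concisely."],
});
```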

GROQ with Semantic Search

Context MCP supports `text::embedding()` for semantic ranking:

```groq
*[_type == "article" && category == "guides"]
  | score(text::embedding("getting started tutorial"))
  | order(_score desc)
  { _id, title, summary }[0...10]
```

Always use `order(_score desc)` when using `score()` to get the best matches first.
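When the search phrase comes from user input, the query string can be built programmatically. A sketch; `scoredGroqQuery` is a hypothetical helper, and the `article`/`guides` filter and projection are placeholders for your schema:

```typescript
// Hypothetical helper that builds the scored GROQ query shown above.
// The filter and projection are placeholders; adapt them to your schema.
function scoredGroqQuery(searchPhrase: string, limit: number = 10): string {
  // Escape double quotes so the phrase embeds safely in the GROQ string literal.
  const phrase = searchPhrase.replace(/"/g, '\\"');
  return [
    `*[_type == "article" && category == "guides"]`,
    `  | score(text::embedding("${phrase}"))`,
    `  | order(_score desc)`,
    `  { _id, title, summary }[0...${limit}]`,
  ].join("\n");
}
```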

Adapting to Different Stacks

The MCP connection pattern is framework and LLM-agnostic. Whether Next.js, Remix, Express, or Python FastAPI—the HTTP transport works the same. Any LLM provider that supports tool calling will work.
See references/nextjs-agent.md for:
  • Framework-specific route patterns (Express, Remix, Python)
  • AI library integrations (LangChain, direct API calls)
  • System prompt examples for different domains (e-commerce, docs, support)

Best Practices

  • Start simple: Build the basic integration first, then add advanced patterns as needed
  • Schema design: Use descriptive field names—agents rely on schema understanding
  • GROQ queries: Always include `_id` in projections so agents can reference documents
  • Content filters: Start broad, then narrow based on what the agent actually needs
  • System prompts: Be explicit about forbidden behaviors and formatting rules
  • Package versions: NEVER guess package versions. Always check the reference `package.json` files or use `npm info <package> version`. AI SDK and Sanity packages update frequently — outdated versions will cause errors.

Troubleshooting

Context MCP returns errors or no schema

Context MCP requires your schema to be available server-side. This happens automatically when your Studio runs, but if it's not working:
  1. Check Studio version: Ensure you're on Sanity Studio v5.1.0 or later
  2. Open your Studio: Simply opening the Studio in a browser triggers schema deployment
  3. Verify deployment: After opening Studio, retry the MCP connection

Escape hatch: Deploy schema via Sanity MCP

If you're on a cloud-only platform (Lovable, v0, Replit) without a local Studio, or if local Studio schema deployment isn't working, you can deploy schemas using the Sanity MCP server's `deploy_schema` tool.
To install the Sanity MCP (if you don't have it already):

```bash
npx sanity@latest mcp configure
```

This configures the MCP for your AI editor (Claude Code, Cursor, VS Code, etc.). Once connected, ask your AI assistant to use the `deploy_schema` tool to deploy your content types.
Recommended approach: If you have a local Sanity Studio, deploying via the Studio is preferred:
  • Local schema files (in `schemaTypes/`) are the source of truth
  • Using `deploy_schema` directly can create drift between your code and the deployed schema
  • Edit your local schema files and run `npx sanity schema deploy` instead
Use this escape hatch when local deployment isn't an option or isn't working.

Other common issues

See references/nextjs-agent.md for:
  • Token authentication errors
  • Empty results / no documents found
  • Tools not appearing