Claude API

Build applications with the Anthropic Claude API and SDKs.

When to Activate

  • Building applications that call the Claude API
  • Code imports anthropic (Python) or @anthropic-ai/sdk (TypeScript)
  • User asks about Claude API patterns, tool use, streaming, or vision
  • Implementing agent workflows with Claude Agent SDK
  • Optimizing API costs, token usage, or latency

Model Selection

Model     | ID                      | Best For
----------|-------------------------|--------------------------------------------
Opus 4.1  | claude-opus-4-1         | Complex reasoning, architecture, research
Sonnet 4  | claude-sonnet-4-0       | Balanced coding, most development tasks
Haiku 3.5 | claude-3-5-haiku-latest | Fast responses, high-volume, cost-sensitive
Default to Sonnet 4 unless the task requires deep reasoning (Opus) or speed/cost optimization (Haiku). For production, prefer pinned snapshot IDs over aliases.
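The routing guidance above can be captured in a small helper. This is a minimal sketch: the tier names are invented for illustration, and the IDs are the aliases from the table (swap in pinned snapshot IDs for production):

```python
# Model aliases from the table above. For production, replace these with
# pinned snapshot IDs from the models documentation.
MODELS = {
    "deep_reasoning": "claude-opus-4-1",      # Opus 4.1
    "general": "claude-sonnet-4-0",           # Sonnet 4
    "fast_cheap": "claude-3-5-haiku-latest",  # Haiku 3.5
}

def pick_model(tier: str) -> str:
    """Return the model for a task tier, defaulting to Sonnet 4."""
    return MODELS.get(tier, MODELS["general"])
```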

Python SDK

Installation

bash
pip install anthropic

Basic Message

python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from env

message = client.messages.create(
    model="claude-sonnet-4-0",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Explain async/await in Python"}
    ]
)
print(message.content[0].text)

Streaming

python
with client.messages.stream(
    model="claude-sonnet-4-0",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a haiku about coding"}]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)

System Prompt

python
message = client.messages.create(
    model="claude-sonnet-4-0",
    max_tokens=1024,
    system="You are a senior Python developer. Be concise.",
    messages=[{"role": "user", "content": "Review this function"}]
)

TypeScript SDK

Installation

bash
npm install @anthropic-ai/sdk

Basic Message

typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from env

const message = await client.messages.create({
  model: "claude-sonnet-4-0",
  max_tokens: 1024,
  messages: [
    { role: "user", content: "Explain async/await in TypeScript" }
  ],
});
console.log(message.content[0].text);

Streaming

typescript
const stream = client.messages.stream({
  model: "claude-sonnet-4-0",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Write a haiku" }],
});

for await (const event of stream) {
  if (event.type === "content_block_delta" && event.delta.type === "text_delta") {
    process.stdout.write(event.delta.text);
  }
}

Tool Use

Define tools and let Claude call them:
python
tools = [
    {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "input_schema": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location"]
        }
    }
]

message = client.messages.create(
    model="claude-sonnet-4-0",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in SF?"}]
)

Handle tool use response

python
for block in message.content:
    if block.type == "tool_use":
        # Execute the tool with block.input
        result = get_weather(**block.input)
        # Send the result back to Claude
        follow_up = client.messages.create(
            model="claude-sonnet-4-0",
            max_tokens=1024,
            tools=tools,
            messages=[
                {"role": "user", "content": "What's the weather in SF?"},
                {"role": "assistant", "content": message.content},
                {"role": "user", "content": [
                    {"type": "tool_result", "tool_use_id": block.id, "content": str(result)}
                ]}
            ]
        )

Vision

Send images for analysis:
python
import base64

with open("diagram.png", "rb") as f:
    image_data = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-sonnet-4-0",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image", "source": {"type": "base64", "media_type": "image/png", "data": image_data}},
            {"type": "text", "text": "Describe this diagram"}
        ]
    }]
)
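The media_type must match the actual image format. A small helper can derive it from the file extension; this is a sketch assuming the formats the API accepts at the time of writing (JPEG, PNG, GIF, WebP):

```python
import base64
from pathlib import Path

# Extension -> media_type for the accepted image formats.
MEDIA_TYPES = {
    ".jpg": "image/jpeg",
    ".jpeg": "image/jpeg",
    ".png": "image/png",
    ".gif": "image/gif",
    ".webp": "image/webp",
}

def encode_image(path: str) -> tuple[str, str]:
    """Return (media_type, base64-encoded data) for an image file."""
    media_type = MEDIA_TYPES[Path(path).suffix.lower()]
    data = base64.standard_b64encode(Path(path).read_bytes()).decode("utf-8")
    return media_type, data
```

The returned pair slots into the "source" object shown above.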

Extended Thinking

For complex reasoning tasks:
python
message = client.messages.create(
    model="claude-sonnet-4-0",
    max_tokens=16000,
    thinking={
        "type": "enabled",
        "budget_tokens": 10000
    },
    messages=[{"role": "user", "content": "Solve this math problem step by step..."}]
)

for block in message.content:
    if block.type == "thinking":
        print(f"Thinking: {block.thinking}")
    elif block.type == "text":
        print(f"Answer: {block.text}")

Prompt Caching

Cache large system prompts or context to reduce costs:
python
message = client.messages.create(
    model="claude-sonnet-4-0",
    max_tokens=1024,
    system=[
        {"type": "text", "text": large_system_prompt, "cache_control": {"type": "ephemeral"}}
    ],
    messages=[{"role": "user", "content": "Question about the cached context"}]
)

Check cache usage

python
print(f"Cache read: {message.usage.cache_read_input_tokens}")
print(f"Cache creation: {message.usage.cache_creation_input_tokens}")

Batches API

Process large volumes asynchronously at a 50% cost reduction:
python
import time

batch = client.messages.batches.create(
    requests=[
        {
            "custom_id": f"request-{i}",
            "params": {
                "model": "claude-sonnet-4-0",
                "max_tokens": 1024,
                "messages": [{"role": "user", "content": prompt}]
            }
        }
        for i, prompt in enumerate(prompts)
    ]
)

Poll for completion

python
while True:
    status = client.messages.batches.retrieve(batch.id)
    if status.processing_status == "ended":
        break
    time.sleep(30)

Get results

python
for result in client.messages.batches.results(batch.id):
    print(result.result.message.content[0].text)
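Batch results are not guaranteed to arrive in request order, so match them back to their inputs via custom_id. A minimal sketch:

```python
def results_by_id(results):
    """Index an iterable of batch results by their custom_id."""
    return {r.custom_id: r for r in results}
```

Usage: `by_id = results_by_id(client.messages.batches.results(batch.id))`.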

Claude Agent SDK

Build multi-step agents:

Note: Agent SDK API surface may change — check official docs

python
import anthropic

# Define tools as functions
tools = [{
    "name": "search_codebase",
    "description": "Search the codebase for relevant code",
    "input_schema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"]
    }
}]

# Run an agentic loop with tool use
client = anthropic.Anthropic()
messages = [{"role": "user", "content": "Review the auth module for security issues"}]

while True:
    response = client.messages.create(
        model="claude-sonnet-4-0",
        max_tokens=4096,
        tools=tools,
        messages=messages,
    )
    if response.stop_reason == "end_turn":
        break
    # Handle tool calls and continue the loop
    messages.append({"role": "assistant", "content": response.content})
    # ... execute tools and append tool_result messages

Cost Optimization

Strategy                | Savings                    | When to Use
------------------------|----------------------------|------------------------------------------
Prompt caching          | Up to 90% on cached tokens | Repeated system prompts or context
Batches API             | 50%                        | Non-time-sensitive bulk processing
Haiku instead of Sonnet | ~75%                       | Simple tasks, classification, extraction
Shorter max_tokens      | Variable                   | When you know output will be short
Streaming               | None (same cost)           | Better UX, same price

Error Handling

python
import time

from anthropic import APIError, RateLimitError, APIConnectionError

try:
    message = client.messages.create(...)
except RateLimitError:
    # Back off and retry
    time.sleep(60)
except APIConnectionError:
    # Network issue, retry with backoff
    pass
except APIError as e:
    print(f"API error {e.status_code}: {e.message}")
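For transient failures such as rate limits and network errors, retrying with exponential backoff and jitter usually beats a fixed sleep. A sketch of the pattern; the delays and retry count are illustrative, not prescribed by the SDK:

```python
import random
import time

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Exponential backoff with full jitter, capped at `cap` seconds."""
    return random.uniform(0, min(cap, base * 2 ** attempt))

def call_with_retry(call, retryable, max_retries: int = 5):
    """Invoke call(), retrying on the exception types in `retryable`."""
    for attempt in range(max_retries):
        try:
            return call()
        except retryable:
            if attempt == max_retries - 1:
                raise
            time.sleep(backoff_delay(attempt))
```

Usage: `call_with_retry(lambda: client.messages.create(...), (RateLimitError, APIConnectionError))`.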

Environment Setup

bash
# Required
export ANTHROPIC_API_KEY="your-api-key-here"

# Optional: set default model
export ANTHROPIC_MODEL="claude-sonnet-4-0"

Never hardcode API keys. Always use environment variables.