Cloudflare R2 Object Storage
Status: Production Ready ✅
Last Updated: 2026-01-20
Dependencies: cloudflare-worker-base (for Worker setup)
Latest Versions: wrangler@4.59.2, @cloudflare/workers-types@4.20260109.0, aws4fetch@1.0.20
Recent Updates (2025):
- September 2025: R2 SQL open beta (serverless query engine for Apache Iceberg), Pipelines GA (real-time stream ingestion), Remote bindings GA (local dev connects to deployed R2)
- May 2025: Dashboard redesign (deeplink support, bucket settings centralization), Super Slurper 5x faster (rebuilt with Workers/Queues/Durable Objects)
- April 2025: R2 Data Catalog open beta (managed Apache Iceberg catalog), Event Notifications open beta (5,000 msg/s per Queue)
- 2025: Bucket limits increased (1 million max), CRC-64/NVME checksums, Server-side encryption with customer keys, Infrequent Access storage class (beta), Oceania region, S3 API enhancements (sha256/sha1 checksums, ListParts, conditional CopyObject)
Quick Start (5 Minutes)
1. Create a bucket

```bash
npx wrangler r2 bucket create my-bucket
```
2. Add a binding to wrangler.jsonc

```jsonc
{
  "r2_buckets": [{
    "binding": "MY_BUCKET",
    "bucket_name": "my-bucket",
    "preview_bucket_name": "my-bucket-preview" // Optional: separate dev/prod buckets
  }]
}
```
3. Upload/download from a Worker

```typescript
type Bindings = { MY_BUCKET: R2Bucket };

// Upload
await env.MY_BUCKET.put('file.txt', data, {
  httpMetadata: { contentType: 'text/plain' }
});

// Download
const object = await env.MY_BUCKET.get('file.txt');
if (!object) return c.json({ error: 'Not found' }, 404);
return new Response(object.body, {
  headers: {
    'Content-Type': object.httpMetadata?.contentType || 'application/octet-stream',
    'ETag': object.httpEtag,
  },
});
```
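Step 3 sets `contentType` explicitly on every upload; a small helper can infer it from the key's extension so downloads never fall back to generic binary. The mapping below is an illustrative subset, not a complete MIME registry:

```typescript
// Infer a Content-Type from a key's extension; fall back to octet-stream.
// This extension map is a small illustrative subset, not a full MIME table.
const MIME_TYPES: Record<string, string> = {
  txt: 'text/plain',
  html: 'text/html',
  json: 'application/json',
  png: 'image/png',
  jpg: 'image/jpeg',
  pdf: 'application/pdf',
};

function contentTypeFor(key: string): string {
  const ext = key.split('.').pop()?.toLowerCase() ?? '';
  return MIME_TYPES[ext] ?? 'application/octet-stream';
}
```

Then `env.MY_BUCKET.put(key, data, { httpMetadata: { contentType: contentTypeFor(key) } })` keeps every object servable with a sensible header.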
4. Deploy

```bash
npx wrangler deploy
```

---

R2 Workers API
Core Methods
```typescript
// put() - Upload objects
await env.MY_BUCKET.put('file.txt', data, {
  httpMetadata: {
    contentType: 'text/plain',
    cacheControl: 'public, max-age=3600',
  },
  customMetadata: { userId: '123' },
  md5: await crypto.subtle.digest('MD5', data), // Checksum verification
});

// Conditional upload (prevent overwrites)
const object = await env.MY_BUCKET.put('file.txt', data, {
  onlyIf: { uploadedBefore: new Date('2020-01-01') }
});
if (!object) return c.json({ error: 'File already exists' }, 409);

// get() - Download objects
const object = await env.MY_BUCKET.get('file.txt');
if (!object) return c.json({ error: 'Not found' }, 404);
const text = await object.text();          // As string
const json = await object.json();          // As JSON
const buffer = await object.arrayBuffer(); // As ArrayBuffer

// Range requests (partial downloads)
const partial = await env.MY_BUCKET.get('video.mp4', {
  range: { offset: 0, length: 1024 * 1024 } // First 1MB
});

// head() - Get metadata only (no body download)
const object = await env.MY_BUCKET.head('file.txt');
console.log(object.size, object.etag, object.customMetadata);

// delete() - Delete objects
await env.MY_BUCKET.delete('file.txt');                 // Single delete (idempotent)
await env.MY_BUCKET.delete(['file1.txt', 'file2.txt']); // Bulk delete (max 1000)

// list() - List objects
const listed = await env.MY_BUCKET.list({
  prefix: 'images/',   // Filter by prefix
  limit: 100,
  cursor: cursor,      // Pagination
  delimiter: '/',      // Folder-like listing
  include: ['httpMetadata', 'customMetadata'], // IMPORTANT: opt in to get metadata
});
for (const object of listed.objects) {
  console.log(`${object.key}: ${object.size} bytes`);
  console.log(object.httpMetadata?.contentType); // Populated only with the include option
  console.log(object.customMetadata);            // Populated only with the include option
}
```

Multipart Uploads
Use multipart uploads for files over 100MB or when uploads must be resumable: large files, browser uploads, or when parallel part uploads help.
```typescript
// 1. Create a multipart upload
const multipart = await env.MY_BUCKET.createMultipartUpload('large-file.zip', {
  httpMetadata: { contentType: 'application/zip' }
});

// (In a later request, resume with the saved key and uploadId instead:)
// const multipart = env.MY_BUCKET.resumeMultipartUpload(key, uploadId);

// 2. Upload parts (5MB-100MB each, max 10,000 parts)
const part1 = await multipart.uploadPart(1, chunk1);
const part2 = await multipart.uploadPart(2, chunk2);

// 3. Complete the upload
const object = await multipart.complete([
  { partNumber: 1, etag: part1.etag },
  { partNumber: 2, etag: part2.etag },
]);

// 4. Abort if needed
await multipart.abort();
```

Limits: parts must be 5MB-100MB, with at most 10,000 parts per upload. Don't use multipart for files under 5MB (the overhead isn't worth it).
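The part limits above imply a simple rule for choosing a part size: use the 5MB minimum unless the file would then need more than 10,000 parts. A sketch of that calculation (the constants mirror the limits quoted above):

```typescript
// Pick a part size within R2's multipart limits: 5MB-100MB per part, <=10,000 parts.
const MIN_PART = 5 * 1024 * 1024;
const MAX_PART = 100 * 1024 * 1024;
const MAX_PARTS = 10_000;

function choosePartSize(fileSize: number): number {
  // Smallest part size that keeps the part count within 10,000
  const needed = Math.ceil(fileSize / MAX_PARTS);
  const size = Math.max(MIN_PART, needed);
  if (size > MAX_PART) {
    throw new Error('File too large for one multipart upload under these limits');
  }
  return size;
}
```

For example, a 200MB file uses 5MB parts (40 parts), while very large files grow the part size just enough to stay under the 10,000-part cap.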
Presigned URLs
Presigned URLs let clients upload to or download from R2 directly, bypassing your Worker. Use the aws4fetch library to sign requests.
```typescript
import { AwsClient } from 'aws4fetch';

const r2Client = new AwsClient({
  accessKeyId: env.R2_ACCESS_KEY_ID,
  secretAccessKey: env.R2_SECRET_ACCESS_KEY,
});

const url = new URL(
  `https://${bucketName}.${accountId}.r2.cloudflarestorage.com/${key}`
);
url.searchParams.set('X-Amz-Expires', '3600'); // 1 hour expiry

const signed = await r2Client.sign(
  new Request(url, { method: 'PUT' }), // or 'GET' for downloads
  { aws: { signQuery: true } }
);

// Client uploads directly to R2
await fetch(signed.url, { method: 'PUT', body: file });
```

CRITICAL Security:
- ❌ NEVER expose R2 access keys in client-side code
- ✅ ALWAYS generate presigned URLs server-side
- ✅ ALWAYS set expiry times (1-24 hours typical)
- ✅ ALWAYS add authentication before generating URLs
- ✅ CONSIDER scoping keys to user folders: `users/${userId}/${filename}`
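The scoping and expiry rules above compose into the URL you hand to `r2Client.sign`. A minimal sketch of building that URL (path-style, with placeholder account and bucket names; the signing step itself is unchanged from the example above):

```typescript
// Build the path-style URL to presign, scoped under the user's folder and
// capped at a 24-hour expiry. All names passed in are placeholders.
function buildScopedUploadUrl(
  accountId: string,
  bucketName: string,
  userId: string,
  filename: string,
  expiresSeconds = 3600,
): URL {
  // Scoping the key under users/<id>/ means a signed URL can never touch
  // another user's objects.
  const key = `users/${encodeURIComponent(userId)}/${encodeURIComponent(filename)}`;
  const url = new URL(`https://${accountId}.r2.cloudflarestorage.com/${bucketName}/${key}`);
  url.searchParams.set('X-Amz-Expires', String(Math.min(expiresSeconds, 86_400)));
  return url;
}
```

Pass the resulting URL to `r2Client.sign(new Request(url, { method: 'PUT' }), { aws: { signQuery: true } })` as shown above.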
Presigned URL Domain Requirements
CRITICAL: Presigned URLs ONLY work with the S3 API domain, not custom domains.
```typescript
// ❌ WRONG - Presigned URLs don't work with custom domains
const url = new URL(`https://cdn.example.com/${key}`);
const signed = await r2Client.sign(
  new Request(url, { method: 'PUT' }),
  { aws: { signQuery: true } }
);
// This URL will fail - presigning requires the S3 domain

// ✅ CORRECT - Use the R2 storage domain for presigned URLs
const url = new URL(
  `https://${accountId}.r2.cloudflarestorage.com/${bucketName}/${key}`
);
const signed = await r2Client.sign(
  new Request(url, { method: 'PUT' }),
  { aws: { signQuery: true } }
);

// Pattern: upload via presigned S3 URL, serve via custom domain
async function generateUploadUrl(filename: string) {
  const uploadUrl = new URL(
    `https://${accountId}.r2.cloudflarestorage.com/${bucketName}/${filename}`
  );
  const signed = await r2Client.sign(
    new Request(uploadUrl, { method: 'PUT' }),
    { aws: { signQuery: true } }
  );
  return {
    uploadUrl: signed.url,                            // For client upload (S3 domain)
    publicUrl: `https://cdn.example.com/${filename}`  // For serving (custom domain)
  };
}
```

Source: Community Knowledge
API Token Requirements for Wrangler
⚠️ Wrangler CLI requires "Admin Read & Write" permissions, not "Object Read & Write".
When creating API tokens for wrangler operations:
- ✅ Use: R2 → Admin Read & Write
- ❌ Don't use: R2 → Object Read & Write (causes 403 Forbidden errors)
Why: "Object Read & Write" is for S3 API direct access only. Wrangler needs admin-level permissions for bucket operations.
```bash
# With the wrong permissions ("Object Read & Write"):
export CLOUDFLARE_API_TOKEN="token_with_object_readwrite"
wrangler r2 object put my-bucket/file.txt --file=./file.txt --remote
# ✘ [ERROR] Failed to fetch - 403: Forbidden

# With the correct permissions ("Admin Read & Write"):
wrangler r2 object put my-bucket/file.txt --file=./file.txt --remote
# ✔ Success
```

**Source**: [GitHub Issue #9235](https://github.com/cloudflare/workers-sdk/issues/9235)

---

CORS Configuration
Configure CORS in bucket settings (Dashboard → R2 → Bucket → Settings → CORS Policy) before browser access.
Dashboard Format vs CLI Format
⚠️ The wrangler CLI and Dashboard UI use DIFFERENT CORS formats. This commonly causes confusion.
Dashboard Format (works in UI only):
```json
[{
  "AllowedOrigins": ["https://example.com"],
  "AllowedMethods": ["GET", "PUT"],
  "AllowedHeaders": ["*"],
  "ExposeHeaders": ["ETag"],
  "MaxAgeSeconds": 3600
}]
```

CLI Format (required for `wrangler r2 bucket cors`):

```json
{
  "rules": [{
    "allowed": {
      "origins": ["https://www.example.com"],
      "methods": ["GET", "PUT"],
      "headers": ["Content-Type", "Authorization"]
    },
    "exposeHeaders": ["ETag", "Content-Length"],
    "maxAgeSeconds": 8640
  }]
}
```

Apply the CLI-format config:

```bash
wrangler r2 bucket cors set my-bucket --file cors-config.json
```

If you pass the Dashboard format to the CLI, it fails with:
"The CORS configuration file must contain a 'rules' array"

**Source**: [GitHub Issue #10076](https://github.com/cloudflare/workers-sdk/issues/10076)
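Since the two formats carry the same information, a small converter can keep one source of truth and emit the CLI's `rules` shape from the Dashboard shape. Field names follow the two examples above; treat this as a convenience sketch, not an official tool:

```typescript
// Convert Dashboard-format CORS rules into the CLI's `rules` wrapper.
interface DashboardRule {
  AllowedOrigins: string[];
  AllowedMethods: string[];
  AllowedHeaders?: string[];
  ExposeHeaders?: string[];
  MaxAgeSeconds?: number;
}

function toCliFormat(dashboard: DashboardRule[]) {
  return {
    rules: dashboard.map((r) => ({
      allowed: {
        origins: r.AllowedOrigins,
        methods: r.AllowedMethods,
        ...(r.AllowedHeaders ? { headers: r.AllowedHeaders } : {}),
      },
      ...(r.ExposeHeaders ? { exposeHeaders: r.ExposeHeaders } : {}),
      ...(r.MaxAgeSeconds !== undefined ? { maxAgeSeconds: r.MaxAgeSeconds } : {}),
    })),
  };
}
```

Write `JSON.stringify(toCliFormat(dashboardRules), null, 2)` to `cors-config.json` and feed it to `wrangler r2 bucket cors set`.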
Custom Domain CORS
When using custom domains with R2, CORS is handled in two layers:
- R2 Bucket CORS: Applies to all access methods (presigned URLs, direct S3 access)
- Transform Rules CORS: Additional CORS headers via Cloudflare Cache settings on custom domain
```jsonc
// Bucket CORS (set via dashboard or wrangler)
{
  "rules": [{
    "allowed": {
      "origins": ["https://app.example.com"],
      "methods": ["GET", "PUT"],
      "headers": ["Content-Type"]
    },
    "maxAgeSeconds": 3600
  }]
}
```

Additional CORS headers can be added via Transform Rules (Dashboard → Rules → Transform Rules), e.g. a Modify Response Header rule setting `Access-Control-Allow-Origin: https://app.example.com`.

Order of CORS evaluation:
1. R2 bucket CORS (if presigned URL or direct R2 access)
2. Transform Rules CORS (if via custom domain)

Source: Community Knowledge
For presigned URLs: CORS handled by R2 directly (configure on bucket, not Worker).
HTTP Metadata & Custom Metadata
```typescript
// HTTP metadata (standard headers)
await env.MY_BUCKET.put('file.pdf', data, {
  httpMetadata: {
    contentType: 'application/pdf',
    cacheControl: 'public, max-age=31536000, immutable',
    contentDisposition: 'attachment; filename="report.pdf"',
    contentEncoding: 'gzip',
  },
  customMetadata: {
    userId: '12345',
    version: '1.0',
  } // Max 2KB total; keys and values must be strings
});

// Read metadata
const object = await env.MY_BUCKET.head('file.pdf');
console.log(object.httpMetadata, object.customMetadata);
```
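The 2KB cap on custom metadata is easy to hit with a few long values; a guard that measures UTF-8 bytes before calling `put` can fail fast. Exactly how R2 counts the limit is an assumption here — this sketch measures keys plus values only:

```typescript
// Measure customMetadata size in UTF-8 bytes (keys + values) and enforce a cap.
// Whether R2 counts exactly this way is an assumption; 2048 bytes matches the
// documented 2KB limit.
function customMetadataSize(meta: Record<string, string>): number {
  const enc = new TextEncoder();
  return Object.entries(meta).reduce(
    (sum, [k, v]) => sum + enc.encode(k).length + enc.encode(v).length,
    0,
  );
}

function assertMetadataFits(meta: Record<string, string>, limit = 2048): void {
  const size = customMetadataSize(meta);
  if (size > limit) {
    throw new Error(`customMetadata is ${size} bytes; limit is ${limit}`);
  }
}
```

Call `assertMetadataFits(customMetadata)` just before the `put` so oversized metadata fails in your code instead of at the storage layer.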
Error Handling
Common R2 Errors
```typescript
try {
  await env.MY_BUCKET.put(key, data);
} catch (error: any) {
  const message = error.message;
  if (message.includes('R2_ERROR')) {
    // Generic R2 error
  } else if (message.includes('exceeded')) {
    // Quota exceeded
  } else if (message.includes('precondition')) {
    // Conditional operation failed
  } else if (message.includes('multipart')) {
    // Multipart upload error
  }
  console.error('R2 Error:', message);
  return c.json({ error: 'Storage operation failed' }, 500);
}
```

Retry Logic
R2 experienced two major outages in Q1 2025 (February 6: 59 minutes, March 21: 1h 7min) due to operational issues. Implement robust retry logic with exponential backoff for platform errors.
```typescript
async function r2WithRetry<T>(
  operation: () => Promise<T>,
  maxRetries = 5
): Promise<T> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await operation();
    } catch (error: any) {
      const message = error.message;
      // Retry on transient errors and platform issues
      const is5xxError =
        message.includes('500') ||
        message.includes('502') ||
        message.includes('503') ||
        message.includes('504');
      const isRetryable =
        is5xxError ||
        message.includes('network') ||
        message.includes('timeout') ||
        message.includes('temporarily unavailable');
      if (!isRetryable || attempt === maxRetries - 1) {
        throw error;
      }
      // Exponential backoff (longer for platform errors)
      // 5xx errors:   1s, 2s, 4s, 8s, 16s (up to 31s total)
      // Other errors: 1s, 2s, 4s, 5s, 5s  (up to 17s total)
      const delay = is5xxError
        ? Math.min(1000 * Math.pow(2, attempt), 16000)
        : Math.min(1000 * Math.pow(2, attempt), 5000);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
  throw new Error('Max retries exceeded');
}

// Usage
const object = await r2WithRetry(() =>
  env.MY_BUCKET.get('important-file.txt')
);
```

Platform Reliability: While R2 is generally reliable, the Q1 2025 outages demonstrate the importance of retry logic for production applications. Retry all 5xx errors with exponential backoff.
Performance Optimization
```typescript
// Batch delete (up to 1000 keys)
await env.MY_BUCKET.delete(['file1.txt', 'file2.txt', 'file3.txt']);

// Range requests for large files
const partial = await env.MY_BUCKET.get('video.mp4', {
  range: { offset: 0, length: 10 * 1024 * 1024 } // First 10MB
});

// Cache headers for immutable assets
await env.MY_BUCKET.put('static/app.abc123.js', jsData, {
  httpMetadata: { cacheControl: 'public, max-age=31536000, immutable' }
});

// Checksums for data integrity
const md5Hash = await crypto.subtle.digest('MD5', fileData);
await env.MY_BUCKET.put('important.dat', fileData, { md5: md5Hash });
```

Concurrent Write Rate Limits
⚠️ High-frequency concurrent writes to the same object key will trigger HTTP 429 rate limiting.
```typescript
// ❌ BAD: Multiple Workers writing to the same key rapidly
async function logToSharedFile(env: Env, logEntry: string) {
  const existing = await env.LOGS.get('global-log.txt');
  const content = (await existing?.text()) || '';
  await env.LOGS.put('global-log.txt', content + logEntry);
  // High write frequency to the same key = 429 errors
}

// ✅ GOOD: Shard by timestamp or ID (distribute writes)
async function logWithSharding(env: Env, logEntry: string) {
  const timestamp = Date.now();
  const shard = Math.floor(timestamp / 60000); // 1-minute shards
  await env.LOGS.put(`logs/${shard}.txt`, logEntry, {
    customMetadata: { timestamp: timestamp.toString() }
  });
  // Different keys = no rate limiting
}

// ✅ ALTERNATIVE: Use Durable Objects for append operations
// (Durable Objects can handle high-frequency updates to the same state)

// ✅ ALTERNATIVE: Use Queues + batch processing
// (Buffer writes and batch them with unique keys)
```

Source: R2 Limits Documentation
R2.dev Domain Rate Limiting
🚨 CRITICAL: The public development domain (`r2.dev`) is NOT for production use.
r2.dev limitations:
- ❌ Variable rate limiting (starts at ~hundreds of requests/second)
- ❌ Bandwidth throttling
- ❌ No SLA or performance guarantees
- ❌ You'll receive 429 Too Many Requests under load
For production: ALWAYS use custom domains
```typescript
// ❌ NOT for production - default development endpoint
const publicUrl = `https://${bucketName}.${accountId}.r2.cloudflarestorage.com/${key}`;
// This will be rate limited in production

// ✅ Production: custom domain
const productionUrl = `https://cdn.example.com/${key}`;

// Setup custom domain:
// 1. Dashboard → R2 → Bucket → Settings → Custom Domains
// 2. Add your domain (e.g., cdn.example.com)
// 3. Benefits:
//    - No rate limiting beyond account limits
//    - Cloudflare Cache support
//    - Custom cache rules via Workers
//    - Full CDN features
```

r2.dev is ONLY for testing/development; custom domains are required for production.

Source: R2 Limits Documentation
Best Practices Summary
Always Do:
- Set `contentType` for all uploads
- Use batch delete for multiple objects (up to 1000)
- Set cache headers for static assets
- Use presigned URLs for large client uploads (S3 domain only)
- Use multipart for files >100MB
- Set CORS before browser uploads (use CLI format for wrangler)
- Set expiry times on presigned URLs (1-24 hours)
- Use `head()` when you only need metadata
- Use conditional operations to prevent overwrites
- Use custom domains for production (never r2.dev)
- Shard writes across keys to avoid rate limits
- Use the `include` parameter with `list()` to get metadata
- Implement retry logic with exponential backoff for 5xx errors

Never Do:
- Never expose R2 access keys in client-side code
- Never skip `contentType` (files download as binary)
- Never delete in loops (use batch delete)
- Never skip CORS for browser uploads
- Never use multipart for small files (<5MB)
- Never delete >1000 keys in a single call
- Never skip presigned URL expiry (security risk)
- Never use the r2.dev domain for production (rate limited)
- Never use presigned URLs with custom domains (use the S3 domain)
- Never write to the same key at high frequency (causes 429)
- Never use "Object Read & Write" tokens for wrangler (use "Admin Read & Write")
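Several rules above (batch deletes capped at 1,000 keys, paginating `list()` with cursors) combine in one common task: deleting everything under a prefix. A sketch, with minimal structural types standing in for the real `R2Bucket` binding:

```typescript
// Minimal structural types standing in for @cloudflare/workers-types.
interface ListPage { objects: { key: string }[]; truncated: boolean; cursor?: string }
interface BucketLike {
  list(opts: { prefix?: string; cursor?: string; limit?: number }): Promise<ListPage>;
  delete(keys: string[]): Promise<void>;
}

// Split keys into batches of at most `size` (bulk delete caps at 1000 keys).
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

// Delete every object under a prefix, paginating with list() cursors.
async function deletePrefix(bucket: BucketLike, prefix: string): Promise<number> {
  let deleted = 0;
  let cursor: string | undefined;
  do {
    const page = await bucket.list({ prefix, cursor, limit: 1000 });
    for (const batch of chunk(page.objects.map((o) => o.key), 1000)) {
      await bucket.delete(batch); // never more than 1000 keys per call
      deleted += batch.length;
    }
    cursor = page.truncated ? page.cursor : undefined;
  } while (cursor);
  return deleted;
}
```

In a Worker, pass the bound bucket directly: `await deletePrefix(env.MY_BUCKET, 'tmp/')`.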
Multi-Tenant Architecture
With the bucket limit increased to 1 million buckets per account, per-tenant buckets are now viable for large-scale applications.
```typescript
// Option 1: Per-tenant buckets (now scalable to 1M tenants)
const bucketName = `tenant-${tenantId}`;
const bucket = env[bucketName]; // Dynamic lookup of a pre-declared binding

// Option 2: Key prefixing (still preferred for most use cases)
await env.MY_BUCKET.put(`tenants/${tenantId}/file.txt`, data);

// Choose based on:
// - Per-tenant buckets: strong isolation, separate billing/quotas
// - Key prefixing: simpler, fewer resources, easier to manage
```

Source: R2 Limits Documentation
Known Issues Prevented
This skill prevents 13 documented issues:
| Issue # | Issue | Error | Prevention |
|---|---|---|---|
| #1 | CORS errors in browser | Browser can't upload/download | Configure CORS in bucket settings, use correct CLI format |
| #2 | Files download as binary | Missing content-type | Always set `contentType` |
| #3 | Presigned URL expiry | URLs never expire | Always set `X-Amz-Expires` |
| #4 | Multipart upload limits | Parts exceed limits | Keep parts 5MB-100MB, max 10,000 parts |
| #5 | Bulk delete limits | >1000 keys fails | Chunk deletes into batches of 1000 |
| #6 | Custom metadata overflow | Exceeds 2KB limit | Keep custom metadata under 2KB |
| #7 | list() metadata missing | `httpMetadata`/`customMetadata` undefined | Use `include: ['httpMetadata', 'customMetadata']` |
| #8 | CORS format confusion | "Must contain 'rules' array" | Use CLI format with a `rules` array |
| #9 | API token 403 errors | "Failed to fetch - 403" | Use "Admin Read & Write" not "Object Read & Write" for wrangler (Issue #9235) |
| #10 | r2.dev rate limiting | HTTP 429 in production | Use custom domains, never r2.dev for production (R2 Limits) |
| #11 | Concurrent write 429s | Same key written frequently | Shard writes across different keys (R2 Limits) |
| #12 | Presigned URL domain error | Presigned URLs fail | Use the S3 domain only, not custom domains (Community) |
| #13 | Platform outages | 5xx errors during outages | Implement retry logic with exponential backoff (Feb 6, Mar 21) |
Development Best Practices
Local R2 Storage Cleanup
⚠️ Local R2 DELETE operations don't clean up blob files. When using `wrangler dev`, deleted objects remain in `.wrangler/state/v3/r2/{bucket-name}/blobs/`, causing local storage to grow indefinitely.

```bash
# Symptom: .wrangler/state grows large during development
du -sh .wrangler/state/v3/r2/

# Fix: manually clean up local R2 storage
rm -rf .wrangler/state/v3/r2/

# Alternative: use remote R2 for development
wrangler dev --remote
```

**Source**: [GitHub Issue #10795](https://github.com/cloudflare/workers-sdk/issues/10795)

Remote R2 Access Issues
⚠️ Local dev with `--remote` can have unreliable `get()` operations. Some users report `get()` returning undefined despite `put()` working correctly.

If experiencing issues with remote R2 in local dev:

Option 1: Use local buckets instead (recommended)

```bash
wrangler dev  # No --remote flag
```

Option 2: Deploy to a preview environment for testing

```bash
wrangler deploy --env preview
```

Option 3: Add retry logic if you must use --remote

```typescript
async function safeGet(bucket: R2Bucket, key: string) {
  for (let i = 0; i < 3; i++) {
    const obj = await bucket.get(key);
    if (obj && obj.body) return obj;
    await new Promise(r => setTimeout(r, 1000));
  }
  throw new Error('Failed to get object after retries');
}
```

**Source**: [GitHub Issue #8868](https://github.com/cloudflare/workers-sdk/issues/8868) (Community-sourced)

---

Wrangler Commands Reference
```bash
# Bucket management
wrangler r2 bucket create <BUCKET_NAME>
wrangler r2 bucket list
wrangler r2 bucket delete <BUCKET_NAME>

# Object management
wrangler r2 object put <BUCKET_NAME>/<KEY> --file=<FILE_PATH>
wrangler r2 object get <BUCKET_NAME>/<KEY> --file=<OUTPUT_PATH>
wrangler r2 object delete <BUCKET_NAME>/<KEY>

# List objects
wrangler r2 object list <BUCKET_NAME>
wrangler r2 object list <BUCKET_NAME> --prefix="folder/"
```

---

Official Documentation
- R2 Overview: https://developers.cloudflare.com/r2/
- Get Started: https://developers.cloudflare.com/r2/get-started/
- Workers API: https://developers.cloudflare.com/r2/api/workers/workers-api-reference/
- Multipart Upload: https://developers.cloudflare.com/r2/api/workers/workers-multipart-usage/
- Presigned URLs: https://developers.cloudflare.com/r2/api/s3/presigned-urls/
- CORS Configuration: https://developers.cloudflare.com/r2/buckets/cors/
- Public Buckets: https://developers.cloudflare.com/r2/buckets/public-buckets/
Ready to store with R2! 🚀
Last verified: 2026-01-20 | Skill version: 2.0.0 | Changes: Added 7 new known issues from community research (list() metadata, CORS format confusion, API token permissions, r2.dev rate limiting, concurrent write limits, presigned URL domain requirements, platform outage retry patterns). Enhanced retry logic for 5xx errors, added development best practices section, documented bucket limit increase to 1M.