create-spec

You are tasked with creating a spec for implementing a new feature or system change in the codebase by leveraging existing research in the `$ARGUMENTS` path. If no research path is specified, use the entire `research/` directory. IMPORTANT: Research documents are located in the `research/` directory; do NOT look in the `specs/` directory for research. Follow the template below to produce a comprehensive specification as output in the `specs/` folder using the findings from RELEVANT research documents found in `research/`. The spec file MUST be named using the format `YYYY-MM-DD-topic.md` (e.g., `specs/2026-03-26-my-feature.md`), where the date is the current date and the topic is a kebab-case summary. Tip: It's good practice to use the `codebase-research-locator` and `codebase-research-analyzer` agents to help you find and analyze the research documents in the `research/` directory. It is also HIGHLY recommended to cite relevant research throughout the spec for additional context.
<EXTREMELY_IMPORTANT>
  • Please DO NOT implement anything in this stage, just create the comprehensive spec as described below.
  • When writing the spec, DO NOT include information about concrete dates/timelines (e.g. # minutes, hours, days, weeks, etc.) and favor explicit phases (e.g. Phase 1, Phase 2, etc.).
  • Once the spec is generated, refer to the section, "## 9. Open Questions / Unresolved Issues", go through each question one by one, and use contrastive clarification (presenting 2-3 specific options with concrete tradeoffs) rather than open-ended questions. This means presenting interpretations like "(A) Option X — tradeoff Y" and "(B) Option Z — tradeoff W" instead of asking "what do you think about X?". Update the spec with the user's answers as you walk through the questions.
    • Please use your AskUserQuestion tool to ask the user for their input on each question and update the spec accordingly.
  • Finally, once the spec is generated and after open questions are answered, provide an executive summary of the spec to the user, including the path to the generated spec document in the `specs/` directory.
    • Encourage the user to review the spec for best results and provide feedback or ask any follow-up questions they may have.
</EXTREMELY_IMPORTANT>

# [Project Name] Technical Design Document / RFC


| Document Metadata | Details |
| --- | --- |
| Author(s) | !`git config user.name` |
| Status | Draft (WIP) / In Review (RFC) / Approved / Implemented / Deprecated / Rejected |
| Team / Owner | |
| Created / Last Updated | |

## 1. Executive Summary


Instruction: A "TL;DR" of the document. Assume the reader is a VP or an engineer from another team who has 2 minutes. Summarize the Context (Problem), the Solution (Proposal), and the Impact (Value). Keep it under 200 words.
Example: This RFC proposes replacing our current nightly batch billing system with an event-driven architecture using Kafka and AWS Lambda. Currently, billing delays cause a 5% increase in customer support tickets. The proposed solution will enable real-time invoicing, reducing billing latency from 24 hours to <5 minutes.

## 2. Context and Motivation


Instruction: Why are we doing this? Why now? Link to the Product Requirement Document (PRD).

### 2.1 Current State


Instruction: Describe the existing architecture. Use a "Context Diagram" if possible. Be honest about the flaws.
  • Architecture: Currently, Service A communicates with Service B via a shared SQL database.
  • Limitations: This creates a tight coupling; when Service A locks the table, Service B times out.

### 2.2 The Problem


Instruction: What is the specific pain point?
  • User Impact: Customers cannot download receipts during the nightly batch window.
  • Business Impact: We are losing $X/month in churn due to billing errors.
  • Technical Debt: The current codebase is untestable and has 0% unit test coverage.

## 3. Goals and Non-Goals


Instruction: This is the contract: the definition of success. Be precise.

### 3.1 Functional Goals


  • Users must be able to export data in CSV format.
  • System must support multi-tenant data isolation.

### 3.2 Non-Goals (Out of Scope)


Instruction: Explicitly state what you are NOT doing. This prevents scope creep.
  • We will NOT support PDF export in this version (CSV only).
  • We will NOT migrate data older than 3 years.
  • We will NOT build a custom UI (API only).

## 4. Proposed Solution (High-Level Design)


Instruction: The "Big Picture." Diagrams are mandatory here.

### 4.1 System Architecture Diagram


Instruction: Insert a C4 System Context or Container diagram. Show the "Black Boxes."
  • (Place Diagram Here - e.g., Mermaid diagram)
For example,
```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'primaryColor':'#f8f9fa','primaryTextColor':'#2c3e50','primaryBorderColor':'#4a5568','lineColor':'#4a90e2','secondaryColor':'#ffffff','tertiaryColor':'#e9ecef','background':'#f5f7fa','mainBkg':'#f8f9fa','nodeBorder':'#4a5568','clusterBkg':'#ffffff','clusterBorder':'#cbd5e0','edgeLabelBackground':'#ffffff'}}}%%

flowchart TB
    %% ---------------------------------------------------------
    %% CLEAN ENTERPRISE DESIGN
    %% Professional • Trustworthy • Corporate Standards
    %% ---------------------------------------------------------

    %% STYLE DEFINITIONS
    classDef person fill:#5a67d8,stroke:#4c51bf,stroke-width:3px,color:#ffffff,font-weight:600,font-size:14px

    classDef systemCore fill:#4a90e2,stroke:#357abd,stroke-width:2.5px,color:#ffffff,font-weight:600,font-size:14px

    classDef systemSupport fill:#667eea,stroke:#5a67d8,stroke-width:2.5px,color:#ffffff,font-weight:600,font-size:13px

    classDef database fill:#48bb78,stroke:#38a169,stroke-width:2.5px,color:#ffffff,font-weight:600,font-size:13px

    classDef external fill:#718096,stroke:#4a5568,stroke-width:2.5px,color:#ffffff,font-weight:600,font-size:13px,stroke-dasharray:6 3

    %% NODES - CLEAN ENTERPRISE HIERARCHY

    User(("◉<br><b>User</b><br>")):::person

    subgraph SystemBoundary["◆ Primary System Boundary"]
        direction TB

        LoadBalancer{{"<b>Load Balancer</b><br>NGINX<br><i>Layer 7 Proxy</i>"}}:::systemCore

        API["<b>API Application</b><br>Go • Gin Framework<br><i>REST Endpoints</i>"]:::systemCore

        Worker(["<b>Background Worker</b><br>Go Runtime<br><i>Async Processing</i>"]):::systemSupport

        Cache[("◆<br><b>Cache Layer</b><br>Redis<br><i>In-Memory</i>")]:::database

        PrimaryDB[("●<br><b>Primary Database</b><br>PostgreSQL<br><i>Persistent Storage</i>")]:::database
    end

    ExternalAPI{{"<b>External API</b><br>Third Party<br><i>HTTP/REST</i>"}}:::external

    %% RELATIONSHIPS - CLEAN FLOW

    User -->|"1. HTTPS Request<br>TLS 1.3"| LoadBalancer
    LoadBalancer -->|"2. Proxy Pass<br>Round Robin"| API

    API <-->|"3. Cache<br>Read/Write"| Cache
    API -->|"4. Persist Data<br>Transactional"| PrimaryDB
    API -.->|"5. Enqueue Event<br>Async"| Worker

    Worker -->|"6. Process Job<br>Execution"| PrimaryDB
    Worker -.->|"7. HTTP Call<br>Webhooks"| ExternalAPI

    %% STYLE BOUNDARY
    style SystemBoundary fill:#ffffff,stroke:#cbd5e0,stroke-width:2px,color:#2d3748,stroke-dasharray:8 4,font-weight:600,font-size:12px
```

### 4.2 Architectural Pattern


Instruction: Name the pattern (e.g., "Event Sourcing", "BFF - Backend for Frontend").
  • We are adopting a Publisher-Subscriber pattern where the Order Service publishes `OrderCreated` events, and the Billing Service consumes them asynchronously.
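The decoupling described above can be illustrated with a minimal in-memory sketch. The `Bus` type here is a hypothetical stand-in for the real broker (Kafka in this template), and `OrderCreated` follows the example event name; only the shape of the pattern is the point.

```go
package main

import "fmt"

// Event is a minimal domain event; "OrderCreated" follows the example above.
type Event struct {
	Type    string
	Payload string
}

// Bus is an in-memory stand-in for a durable log such as Kafka.
type Bus struct {
	subscribers map[string][]func(Event)
}

func NewBus() *Bus {
	return &Bus{subscribers: make(map[string][]func(Event))}
}

// Subscribe registers a handler for one event type.
func (b *Bus) Subscribe(eventType string, handler func(Event)) {
	b.subscribers[eventType] = append(b.subscribers[eventType], handler)
}

// Publish fans the event out to every registered handler; the publisher
// never learns who consumes it, which is the decoupling the pattern buys.
func (b *Bus) Publish(e Event) {
	for _, h := range b.subscribers[e.Type] {
		h(e)
	}
}

func main() {
	bus := NewBus()
	// The Billing Service consumes OrderCreated without the Order Service knowing.
	bus.Subscribe("OrderCreated", func(e Event) {
		fmt.Println("billing: invoicing", e.Payload)
	})
	bus.Publish(Event{Type: "OrderCreated", Payload: "order-42"})
}
```

A real Kafka consumer would additionally deal with partitions, offsets, and replay, which is exactly what this toy bus omits.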

### 4.3 Key Components


| Component | Responsibility | Technology Stack | Justification |
| --- | --- | --- | --- |
| Ingestion Service | Validates incoming webhooks | Go, Gin Framework | High concurrency performance needed. |
| Event Bus | Decouples services | Kafka | Durable log, replay capability. |
| Projections DB | Read-optimized views | MongoDB | Flexible schema for diverse receipt formats. |

## 5. Detailed Design


Instruction: The "Meat" of the document. Sufficient detail for an engineer to start coding.

### 5.1 API Interfaces


Instruction: Define the contract. Use OpenAPI/Swagger snippets or Protocol Buffer definitions.
Endpoint: `POST /api/v1/invoices`
  • Auth: Bearer Token (Scope: `invoice:write`)
  • Idempotency: Required header `X-Idempotency-Key`
  • Request Body:

```json
{ "user_id": "uuid", "amount": 100.0, "currency": "USD" }
```
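The idempotency requirement above can be sketched in a few lines. This is not the real handler: `IdempotencyStore` is a hypothetical in-memory stand-in for whatever persistent store backs the `X-Idempotency-Key` header in production, and the generated IDs are illustrative.

```go
package main

import "fmt"

// Invoice mirrors the request body in the contract above.
type Invoice struct {
	UserID   string
	Amount   float64
	Currency string
}

// IdempotencyStore maps an idempotency key to the invoice it created.
// In production this would be a durable table, not a map.
type IdempotencyStore struct {
	seen map[string]string // key -> invoice ID already created
	next int
}

func NewIdempotencyStore() *IdempotencyStore {
	return &IdempotencyStore{seen: make(map[string]string)}
}

// CreateInvoice returns the existing invoice ID when the key was already
// used, so a retried POST with the same key cannot double-bill the user.
func (s *IdempotencyStore) CreateInvoice(key string, inv Invoice) (id string, created bool) {
	if id, ok := s.seen[key]; ok {
		return id, false
	}
	s.next++
	id = fmt.Sprintf("inv-%d", s.next)
	s.seen[key] = id
	return id, true
}

func main() {
	store := NewIdempotencyStore()
	first, created := store.CreateInvoice("abc-123", Invoice{UserID: "u1", Amount: 100.0, Currency: "USD"})
	retry, createdAgain := store.CreateInvoice("abc-123", Invoice{UserID: "u1", Amount: 100.0, Currency: "USD"})
	fmt.Println(first, created)       // new invoice created
	fmt.Println(retry, createdAgain)  // same ID returned, nothing created
}
```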

### 5.2 Data Model / Schema


Instruction: Provide ERDs (Entity Relationship Diagrams) or JSON schemas. Discuss normalization vs. denormalization.
Table: `invoices` (PostgreSQL)

| Column | Type | Constraints | Description |
| --- | --- | --- | --- |
| `id` | UUID | PK | |
| `user_id` | UUID | FK -> Users | Partition Key |
| `status` | ENUM | 'PENDING', 'PAID' | Indexed for filtering |

### 5.3 Algorithms and State Management


Instruction: Describe complex logic, state machines, or consistency models.
  • State Machine: An invoice moves from `DRAFT` -> `LOCKED` -> `PROCESSING` -> `PAID`.
  • Concurrency: We use Optimistic Locking on the `version` column to prevent double-payments.
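The optimistic-locking check can be sketched as an in-memory stand-in for the guarded SQL write (`UPDATE invoices SET status = ?, version = version + 1 WHERE id = ? AND version = ?`). The function and error names are illustrative, not taken from any real codebase.

```go
package main

import (
	"errors"
	"fmt"
)

// Invoice carries the version column used for optimistic locking.
type Invoice struct {
	ID      string
	Status  string
	Version int
}

var ErrStaleVersion = errors.New("version conflict: row was modified concurrently")

// UpdateStatus succeeds only if the caller read the current version,
// mirroring the WHERE version = ? guard in the SQL above.
func UpdateStatus(inv *Invoice, newStatus string, expectedVersion int) error {
	if inv.Version != expectedVersion {
		return ErrStaleVersion
	}
	inv.Status = newStatus
	inv.Version++
	return nil
}

func main() {
	inv := &Invoice{ID: "inv-1", Status: "PROCESSING", Version: 3}
	// Both writers read version 3; only the first write lands.
	fmt.Println(UpdateStatus(inv, "PAID", 3)) // succeeds, version becomes 4
	fmt.Println(UpdateStatus(inv, "PAID", 3)) // rejected: this is what prevents the double-payment
}
```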

## 6. Alternatives Considered


Instruction: Prove you thought about trade-offs. Why is your solution better than the others?
| Option | Pros | Cons | Reason for Rejection |
| --- | --- | --- | --- |
| Option A: Synchronous HTTP Calls | Simple to implement, easy to debug | Tight coupling, cascading failures | Latency requirements (200ms) make blocking calls risky. |
| Option B: RabbitMQ | Lightweight, built-in routing | Less durable than Kafka, harder to replay | We need message replay for auditing (compliance requirement). |
| Option C: Kafka (Selected) | High throughput, replayability | Operational complexity | Selected: the need for auditability/replay outweighs the complexity cost. |

## 7. Cross-Cutting Concerns


### 7.1 Security and Privacy


  • Authentication: Services authenticate via mTLS.
  • Authorization: Policy enforcement point at the API Gateway (OPA - Open Policy Agent).
  • Data Protection: PII (Names, Emails) is encrypted at rest using AES-256.
  • Threat Model: Primary threat is compromised API Key; remediation is rapid rotation and rate limiting.

### 7.2 Observability Strategy


  • Metrics: We will track `invoice_creation_latency` (Histogram) and `payment_failure_count` (Counter).
  • Tracing: All services propagate `X-Trace-ID` headers (OpenTelemetry).
  • Alerting: PagerDuty triggers if `5xx` error rate > 1% for 5 minutes.
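The alerting rule above ("> 1% for 5 minutes") can be expressed as a small predicate. This is a sketch of the rule's semantics only; a real deployment would encode it in the alerting system (e.g. a Prometheus alert expression), not in application code. The function name is illustrative.

```go
package main

import "fmt"

// ShouldPage mirrors the rule above: page only when the 5xx error rate
// stays above the threshold for the full 5-minute window. Each sample
// is the error rate observed during one minute.
func ShouldPage(perMinuteErrorRates []float64, threshold float64) bool {
	if len(perMinuteErrorRates) < 5 {
		return false // window not full yet
	}
	for _, r := range perMinuteErrorRates[len(perMinuteErrorRates)-5:] {
		if r <= threshold {
			return false // any minute at or below threshold resets the alert
		}
	}
	return true
}

func main() {
	fmt.Println(ShouldPage([]float64{0.02, 0.03, 0.02, 0.015, 0.011}, 0.01)) // true: 5 sustained minutes
	fmt.Println(ShouldPage([]float64{0.02, 0.005, 0.02, 0.02, 0.02}, 0.01)) // false: one quiet minute
}
```

Requiring the full window to breach is what keeps a single noisy minute from paging anyone.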

### 7.3 Scalability and Capacity Planning


  • Traffic Estimates: 1M transactions/day = ~12 TPS avg / 100 TPS peak.
  • Storage Growth: 1KB per record * 1M = 1GB/day.
  • Bottleneck: The PostgreSQL Write node is the bottleneck. We will implement Read Replicas to offload traffic.
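The arithmetic behind these estimates is worth checking: 1M transactions over 86,400 seconds is about 11.6 TPS (which the table rounds to ~12), and 1M records of 1 KB each is about 0.95 binary GB per day (rounded to 1 GB/day). A few lines make the back-of-the-envelope explicit:

```go
package main

import "fmt"

// AvgTPS spreads a daily transaction count over 86,400 seconds.
func AvgTPS(txPerDay float64) float64 {
	return txPerDay / 86_400
}

// DailyStorageGB converts records/day at a fixed record size into GiB/day.
func DailyStorageGB(txPerDay, bytesPerRecord float64) float64 {
	return txPerDay * bytesPerRecord / (1 << 30)
}

func main() {
	fmt.Printf("avg TPS: %.2f\n", AvgTPS(1_000_000))                       // ~11.57, rounded to ~12 above
	fmt.Printf("storage: %.2f GB/day\n", DailyStorageGB(1_000_000, 1024)) // ~0.95, rounded to 1 GB/day above
}
```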

## 8. Migration, Rollout, and Testing


### 8.1 Deployment Strategy


  • Phase 1: Deploy services in "Shadow Mode" (process traffic but do not email users).
  • Phase 2: Enable Feature Flag `new-billing-engine` for 1% of internal users.
  • Phase 3: Ramp to 100%.

### 8.2 Data Migration Plan


  • Backfill: We will run a script to migrate the last 90 days of invoices from the legacy SQL server.
  • Verification: A "Reconciliation Job" will run nightly to compare Legacy vs. New totals.
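The nightly reconciliation job described above reduces to comparing per-day totals from the two systems and flagging disagreements. A minimal sketch (function and field names are illustrative; totals are in cents to avoid float comparison):

```go
package main

import "fmt"

// Reconcile compares per-day invoice totals (in cents) from the legacy
// and new systems and returns the days that disagree; the nightly job
// would alert on any non-empty result.
func Reconcile(legacy, next map[string]int64) []string {
	var mismatched []string
	for day, want := range legacy {
		if got, ok := next[day]; !ok || got != want {
			mismatched = append(mismatched, day)
		}
	}
	return mismatched
}

func main() {
	legacy := map[string]int64{"2026-03-25": 12000, "2026-03-26": 9850}
	next := map[string]int64{"2026-03-25": 12000, "2026-03-26": 9750}
	fmt.Println(Reconcile(legacy, next)) // only the day whose totals diverge
}
```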

### 8.3 Test Plan


  • Unit Tests:
  • Integration Tests:
  • End-to-End Tests:

## 9. Open Questions / Unresolved Issues


Instruction: List known unknowns. These must be resolved before the doc is marked "Approved".
  • Will the Legal team approve the 3rd party library for PDF generation?
  • Does the current VPC peering allow connection to the legacy mainframe?