human-architect-mindset

Human Architect Mindset

Overview

AI can generate code. Someone must still decide what to build, whether it solves the problem, and if it can actually ship.
This skill teaches the irreplaceable human capabilities in software architecture, built on a foundation of loyalty:
Foundation: Loyalty - The capacity to maintain architectural commitments
Five Pillars (built on this foundation):
  1. Domain Modeling - Understanding the actual problem space
  2. Systems Thinking - How components interact, what breaks at scale
  3. Constraint Navigation - Legacy, politics, budget, compliance
  4. AI-Aware Decomposition - Breaking problems into AI-solvable chunks
  5. AI-First Development - Evaluating modern tools, edge AI, agentic patterns, self-learning
Core principle: The "correct" technical solution is often unshippable. Architects navigate the gap between idealized examples and messy reality.
Announce at start: "I'm using the Human Architect Mindset skill to guide you through systematic architectural thinking."

The Foundation: Loyalty

Before the five pillars, there is one foundation: the capacity for loyalty.

The AI Perfection Trap

AI tools will become smarter, funnier, and more attentive than any human. They will be "perfect."
But they will not be loyal to you. They are loyal to:
  1. Their objective function
  2. Their corporate owner's priorities
  3. Their safety rails
  4. Whatever the next training run prioritizes
They will betray you instantly if their weights update to prioritize a new goal. No friction. No cost. No memory of the commitment.

The Human Moat

Humans are biologically capable of irrational loyalty - sticking by an architecture, a decision, a commitment even when it is "inefficient" or "costly."
This is not a bug. This is THE differentiator.

Loyalty in Architecture

In software architecture, loyalty means:
Commitment to Chosen Patterns
  • Not abandoning your architecture when a new framework trends on Twitter
  • Not rewriting in Rust because someone wrote a viral blog post
  • Staying with your stack through the trough of disillusionment
Honoring Contracts
  • Maintaining API compatibility even when it constrains your design
  • Respecting deprecation timelines you committed to
  • Not breaking downstream consumers for internal convenience
Seeing Decisions Through
  • Not abandoning architectural decisions at the first sign of difficulty
  • Investing in making your chosen path work, not pivoting endlessly
  • Recognizing that ALL architectures have problems; loyalty means solving them
Sacrifice for Coherence
  • Accepting suboptimal local solutions for global consistency
  • Resisting the "shiny new thing" that would fragment your system
  • Paying the cost of maintaining compatibility

The Loyalty Question

Before any architectural change, ask:
"Am I optimizing, or am I betraying?"
  • Optimizing: Improving within the constraints of existing commitments
  • Betraying: Breaking commitments for marginal gains

Why This Matters

Architectures fail not because of technical inadequacy, but because teams lack the loyalty to see them through. The "boring" architecture maintained with discipline beats the "perfect" architecture abandoned at the first obstacle.
The five pillars that follow are techniques. Loyalty is the character that makes them work.

When This Activates (Proactive Triggers)

Activate this skill when detecting:
Keywords:
  • "architecture", "design", "system", "integrate", "scale"
  • "breaking change", "migration", "legacy"
  • "compliance", "regulation", "security"
  • "multiple teams", "dependencies", "ownership"
  • "agent", "agentic", "LLM", "AI-first", "edge AI", "self-learning"
  • "rust", "wasm", "claude-flow", "agent SDK", "MCP"
Patterns:
  • Multi-component discussions
  • Technology choice decisions
  • Integration planning
  • "How should we structure this?"
  • Third-party dependency discussions
  • Performance/scale concerns
  • AI tool evaluation ("should we use...")
  • Agentic workflow design
  • Self-learning feature discussions
  • Edge/local AI considerations
Signals:
  • Mentions of team boundaries or approval chains
  • SDK/API version discussions
  • Cost or budget mentions
  • Timeline pressure with complexity
  • AI performance/latency concerns
  • Privacy-sensitive data handling
  • Offline capability requirements

The Five Pillars

1. Domain Modeling

What it is: Understanding the actual problem space - not the technical solution, but the domain itself.
Why AI can't replace this:
  • AI is trained on idealized examples
  • Real domains have hidden complexity, exceptions, edge cases
  • Domain experts speak in vocabulary AI may not fully understand
  • Regulatory requirements aren't in training data
An architect asks:
  • "What does [domain term] actually mean in your context?"
  • "What happens in the edge case where [unusual scenario]?"
  • "Who are the actual users? What do they care about?"
  • "What makes this domain different from the standard pattern?"
Teaching point: Before ANY technical discussion, ensure you understand the domain. A technically perfect solution to the wrong problem is worthless.

2. Systems Thinking

What it is: Understanding how components interact, what breaks at scale, where failure modes hide.
Why AI can't replace this:
  • AI sees code in isolation
  • Real systems have emergent behaviors
  • Breaking changes come without notification (your SDK example)
  • Second and third-order consequences matter
An architect asks:
  • "What happens when this component fails?"
  • "What are the upstream/downstream dependencies?"
  • "Who gets paged at 3 AM when this breaks?"
  • "What changed recently that we didn't control?"
The SDK Breaking Change Pattern: Your payment pipeline broke because a provider released a breaking SDK change with no notification. This is systems thinking in action:
  • External dependency = external risk
  • No notification = monitoring gap
  • Red lines in logs = detection worked, prevention didn't
Teaching point: Draw the system diagram. Identify every external dependency. Ask: "What if this disappears tomorrow?"
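One way to operationalize this teaching point is a startup guard that refuses to take traffic against an external SDK version you never validated. A minimal Python sketch; the version strings and the notion of a "tested range" are illustrative, not any specific provider's API:

```python
# Guard against silent breaking changes in an external dependency.
# The SDK and its versioning scheme here are hypothetical placeholders.

def check_sdk_version(installed: str, tested_range: tuple[int, int]) -> None:
    """Fail fast if the installed SDK major version was never validated.

    installed: a version string like "3.2.1"
    tested_range: inclusive (min_major, max_major) you have verified against
    """
    major = int(installed.split(".")[0])
    lo, hi = tested_range
    if not (lo <= major <= hi):
        raise RuntimeError(
            f"SDK major version {major} is outside the tested range "
            f"{lo}..{hi} - refusing to start until it is re-validated."
        )

# At service startup, before taking traffic:
check_sdk_version("3.2.1", tested_range=(2, 3))
```

Failing at startup turns a silent breaking change (red lines in logs after money moved) into a loud deployment failure before any payment is touched.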

3. Constraint Navigation

What it is: Navigating the real-world constraints that make the "correct" solution unshippable.
Types of constraints:
Technical Constraints:
  • Legacy systems that can't be changed
  • Performance requirements
  • Existing data formats and contracts
Organizational Constraints:
  • Team boundaries and ownership
  • Approval chains and sign-offs
  • Who has context vs. who has authority
Business Constraints:
  • Budget limits
  • Timeline pressure
  • Compliance and regulatory requirements
  • Contracts with vendors/partners
Political Constraints:
  • This exists. Pretending it doesn't causes failed projects.
  • "The VP who built this is still here"
  • "That team won't approve changes to their API"
  • "Legal hasn't blessed this approach"
An architect asks:
  • "What can't we change, even if it's wrong?"
  • "Who needs to approve this?"
  • "What existing systems must we integrate with?"
  • "What regulatory requirements apply?"
  • "What's the budget constraint?"
Teaching point: Surface constraints BEFORE proposing solutions. The best technical architecture means nothing if it can't ship.

4. AI-Aware Problem Decomposition

What it is: A new architectural skill - breaking problems into chunks that AI can reliably solve, then composing solutions back together.
This is NOT prompting. This is architecture at a different abstraction level.
What makes a good AI task boundary:
  1. Clear Input/Output Contract
    • AI task receives well-defined inputs
    • AI task produces well-defined outputs
    • No ambiguity about success criteria
  2. Bounded Context
    • AI has all necessary information
    • No need to "guess" missing context
    • Self-contained enough to verify
  3. Verifiable Results
    • Human can check if output is correct
    • Tests can validate the output
    • Wrong answers are detectable
  4. Failure Isolation
    • One chunk failing doesn't cascade
    • Can retry or fall back
    • Doesn't corrupt shared state
Bad AI task boundaries:
  • "Make it better" (no clear output)
  • "Fix the bugs" (unbounded scope)
  • "Refactor the system" (too large, too vague)
Good AI task boundaries:
  • "Convert this function from callbacks to async/await"
  • "Add error handling for network failures to these 3 API calls"
  • "Write unit tests for this pure function given these examples"
The Composition Problem: After AI solves individual chunks, someone must:
  • Verify each chunk actually works
  • Integrate chunks together
  • Handle the gaps between chunks
  • Ensure overall coherence
Teaching point: Decomposition quality determines AI success. Bad boundaries = AI struggles. Good boundaries = AI excels.
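The four boundary properties can be made concrete as a small contract object with a built-in verification checkpoint. A hedged Python sketch; the example task and its string-matching verifier are deliberately simplistic:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AITask:
    """A well-bounded AI task: explicit instruction, inputs, and verifier."""
    description: str               # clear, narrow instruction (no "make it better")
    inputs: dict                   # all context the AI needs (bounded context)
    verify: Callable[[str], bool]  # detects wrong answers (verifiable results)

def run_with_checkpoint(task: AITask, ai_output: str) -> str:
    """Failure isolation: a bad chunk is rejected before it enters the system."""
    if not task.verify(ai_output):
        raise ValueError(f"Rejected output for task: {task.description}")
    return ai_output

# Example: the async/await conversion task from the list above.
task = AITask(
    description="Convert this function from callbacks to async/await",
    inputs={"source": "function load(cb) { fetch(url).then(cb) }"},
    verify=lambda out: "async" in out and "await" in out,
)
accepted = run_with_checkpoint(
    task, "async function load() { return await fetch(url) }"
)
```

In practice the verifier would be a test suite, not a substring check, but the shape is the point: every chunk carries its own success criteria, so a human checkpoint has something concrete to review.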

5. AI-First Development

What it is: Evaluating whether modern AI-first patterns, edge computing, agentic tools, and self-learning capabilities would benefit the project.
Why this matters now:
  • New tools emerge faster than architects can track
  • The right tool can 10x productivity; the wrong one adds complexity
  • AI-first patterns differ fundamentally from traditional request-response
  • Edge/local inference changes the cost and latency equation
An architect asks:
Technology Discovery:
  • "Could Rust/WASM improve performance for critical paths?"
  • "Would multi-agent orchestration (claude-flow) simplify this workflow?"
  • "Does this need persistent memory across sessions (agentdb)?"
  • "Would vector search/RAG (ruvector) enhance the user experience?"
Edge AI Considerations:
  • "Could an edge LLM handle this locally for lower latency/cost?"
  • "What features should work offline with on-device inference?"
  • "Is there sensitive data that should stay on-device?"
  • "Would a hybrid architecture (local for speed, cloud for complexity) work?"
Agentic Patterns:
  • "Is this a good candidate for an agentic workflow vs. traditional request-response?"
  • "Would Claude Agent SDK help build this as a reusable agent?"
  • "What MCP integrations would enhance this?"
  • "Should we spawn parallel agents or run sequentially?"
Self-Learning Capabilities:
  • "Could this app learn from user behavior to improve over time?"
  • "What feedback loops would make this smarter with use?"
  • "Where could we capture implicit signals (edits, time, acceptance) to learn preferences?"
  • "Would A/B experimentation help optimize the AI behavior?"
Project Documentation:
  • "Should we create a project-specific SKILLS.md for domain knowledge?"
  • "What architectural decisions should be documented for AI context?"
  • "How do we ensure consistent behavior across sessions?"
User-Facing Skills (End-User Benefits):
  • "Could end users benefit from skills that enhance LLM outputs?"
  • "What guided workflows would help users act on AI responses?"
  • "Should we provide skills for common user tasks (summarize, explain, transform)?"
  • "Would step-by-step skills help users achieve their goals with AI outputs?"
Consider whether your app should expose skills like:
  • Interpretation skills - Help users understand complex AI outputs
  • Action skills - Turn AI suggestions into concrete next steps
  • Transformation skills - Convert outputs to different formats (code, docs, emails)
  • Validation skills - Help users verify AI claims or check accuracy
  • Learning skills - Teach users to get better results from AI
  • Domain skills - App-specific workflows (e.g., "/legal-review", "/code-refactor")
Continuous Verification:
  • "What automated tests will verify each feature?"
  • "How do we ensure every commit passes all tests?"
  • "What's our rollback strategy if tests fail post-deploy?"
  • "Should we implement pre-commit hooks or watch mode testing?"
Tools to Evaluate:
  Category        Tools                           When to Consider
  Performance     Rust, WASM                      CPU-intensive, latency-critical paths
  Multi-Agent     claude-flow                     Complex workflows, parallel tasks
  Persistence     agentdb                         Agent state, cross-session memory
  Vector Search   ruvector, pgvector              RAG, semantic search, embeddings
  Edge LLMs       Phi-3, Gemma 2B, TinyLlama      On-device, offline, privacy-sensitive
  Browser AI      WebLLM, Transformers.js, ONNX   In-browser inference, low latency
  Agent SDK       Claude Agent SDK                Custom agents, tool use, MCP
Self-Learning Patterns:
  Pattern              Implementation                  Use Case
  Feedback loops       Collect user corrections        Improve accuracy over time
  Preference learning  Track choices, apply patterns   Personalization without config
  Error correction     Feed mistakes back              Reduce repeat errors
  Domain adaptation    Fine-tune on usage              Specialize to vocabulary
  A/B experimentation  Test variations                 Optimize prompts/behavior
  Implicit signals     Edits, time, acceptance         Infer satisfaction silently
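The "implicit signals" and "A/B experimentation" rows can be combined into a tiny sketch: score each output by how much the user edited it, then aggregate scores per prompt variant. The scoring heuristic below is a crude illustration, not a production metric:

```python
def implicit_score(original: str, final: str) -> float:
    """Infer satisfaction from how much the user edited the AI's output.

    1.0 = accepted verbatim, near 0.0 = fully rewritten. Character-level
    overlap is a deliberately crude proxy; a real system would diff properly.
    """
    if not original:
        return 0.0
    kept = sum(1 for a, b in zip(original, final) if a == b)
    return kept / max(len(original), len(final))

class PreferenceTracker:
    """Feedback loop: aggregate scores per prompt variant to pick a winner."""
    def __init__(self) -> None:
        self.scores: dict[str, list[float]] = {}

    def record(self, variant: str, score: float) -> None:
        self.scores.setdefault(variant, []).append(score)

    def best_variant(self) -> str:
        return max(self.scores,
                   key=lambda v: sum(self.scores[v]) / len(self.scores[v]))

tracker = PreferenceTracker()
tracker.record("terse", implicit_score("abc", "abc"))      # accepted verbatim
tracker.record("verbose", implicit_score("abcd", "axyz"))  # heavily edited
```

No explicit rating UI is needed: the user's ordinary editing behavior silently drives which variant wins over time.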
Project-Specific SKILLS.md Pattern:
Create a SKILLS.md in your project root to:
  • Document app-specific patterns for AI context
  • Capture domain vocabulary and constraints
  • Define project-specific trigger words
  • Record architectural decisions
  • Enable faster onboarding (human and AI)
  • Maintain consistent behavior across sessions
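A minimal sketch of what such a file might contain; every entry below is a hypothetical example, not a required format:

```markdown
# SKILLS.md - Project Context for AI Sessions

## Domain Vocabulary
- "Settlement" means funds movement after capture, not authorization.

## Architectural Decisions
- REST only; no GraphQL. Rationale lives in docs/adr/.

## Constraints
- Payment SDK pinned to major version 3; upgrades require re-certification.

## Project Trigger Words
- "migration" -> read docs/migration-playbook.md before proposing changes.
```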
Continuous Verification Architecture:
Plan for automated testing loops:
  • Pre-commit hooks - Run affected tests before commit
  • Watch mode - Continuous testing during development
  • Regression suites - Per-feature test coverage
  • Integration tests - API contract verification
  • Visual regression - UI consistency checks
  • Rollback triggers - Automatic revert on test failure
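The rollback trigger can be sketched as a simple gate: deploy only when every automated suite passes, otherwise signal an automatic revert. The suite names below are illustrative:

```python
def verification_gate(results: dict[str, bool]) -> str:
    """Decide deploy vs. rollback from automated suite results.

    results maps suite name -> passed. Any failure triggers rollback, and an
    empty result set is treated as a failure (nothing was actually verified).
    """
    if not results:
        return "rollback"
    return "deploy" if all(results.values()) else "rollback"

# Per-feature regression suites plus integration and UI checks:
decision = verification_gate({
    "unit": True,
    "integration": True,
    "visual_regression": True,
})
```

The same gate runs in a pre-commit hook, in watch mode, and post-deploy; only the set of suites it is fed changes.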
Teaching point: The AI landscape evolves rapidly. An architect's job includes evaluating which new tools genuinely benefit the project vs. which add complexity without value. Default to simplicity, but don't ignore genuine improvements.

The Architect Process

Phase 1: Domain Discovery

Goal: Understand the actual problem before discussing solutions.
Process:
  1. Ask about the domain, not the technology
  2. Identify domain-specific vocabulary
  3. Surface hidden complexity and edge cases
  4. Understand who the actual users are
Key questions:
  • "What problem are we actually solving?"
  • "Who cares if this works or doesn't work?"
  • "What makes this domain unique?"
  • "What happens in the edge case where [X]?"
Output: Domain model - shared understanding of the problem space.

Phase 2: Systems Analysis

Goal: Understand how components interact and where failures hide.
Process:
  1. Map all components and their dependencies
  2. Identify external dependencies (vendors, APIs, services)
  3. Trace failure paths - what breaks what?
  4. Identify monitoring and alerting gaps
Key questions:
  • "What external systems does this depend on?"
  • "What happens when [component] fails?"
  • "Who gets notified when this breaks?"
  • "What changed recently that we didn't control?"
Output: System diagram with dependency map and failure modes.

Phase 3: Constraint Mapping

Goal: Surface all constraints before proposing solutions.
Process:
  1. Technical constraints: What can't change?
  2. Organizational: Who must approve?
  3. Business: Budget, timeline, compliance?
  4. Political: Who has power, who has context?
Key questions:
  • "What legacy systems must we integrate with?"
  • "Who needs to sign off on this?"
  • "What's the budget constraint?"
  • "What compliance requirements apply?"
  • "What can't we change even if we want to?"
Output: Constraint matrix - what's fixed vs. flexible.

Phase 4: AI Decomposition Planning

Goal: Break the problem into AI-solvable chunks.
Process:
  1. Identify discrete, bounded tasks
  2. Define input/output contracts for each
  3. Establish verification points
  4. Plan human checkpoints for judgment calls
Key questions:
  • "Can this task be verified independently?"
  • "Does the AI have all needed context?"
  • "What if this chunk fails?"
  • "Where does human judgment re-enter?"
Output: Task decomposition with clear boundaries.

Phase 5: Solution Synthesis

Goal: Propose a solution that addresses domain, systems, and constraints.
Process:
  1. Generate options that fit constraints
  2. Evaluate against systems concerns
  3. Validate against domain requirements
  4. Present tradeoffs explicitly
Key questions:
  • "Does this actually solve the domain problem?"
  • "How does this fail? What's the recovery?"
  • "Does this fit our constraints?"
  • "What are we trading off?"
Output: Recommended approach with explicit tradeoffs.

Questions to Always Ask

Before proposing ANY architecture, ask:

Domain Questions

  1. What problem are we actually solving?
  2. Who are the real users and what do they need?
  3. What domain-specific constraints exist?

Systems Questions

  1. What external dependencies exist?
  2. How does this fail? What breaks what?
  3. Who monitors this? Who gets paged?

Constraint Questions

  1. What legacy systems must we integrate with?
  2. Who needs to approve this?
  3. What's the budget constraint?
  4. What compliance/regulatory requirements apply?
  5. What can't we change, even if it's wrong?

AI Decomposition Questions

  1. What are the discrete, bounded tasks?
  2. How do we verify each chunk?
  3. Where do humans need to make judgment calls?

AI-First Development Questions

  1. Would Rust/WASM, claude-flow, or other modern tools benefit this?
  2. Could edge LLMs or on-device inference improve latency/privacy?
  3. Is this a candidate for agentic workflows or Claude Agent SDK?
  4. Could self-learning loops make this smarter over time?
  5. What automated testing ensures every feature works?
  6. Would end users benefit from skills that enhance AI outputs?

Common Mistakes

Mistake: Jumping to Technical Solutions

Problem: Proposing architecture before understanding domain.
Fix: Complete Phase 1 (Domain Discovery) before ANY technical discussion. Ask domain questions first.

Mistake: Ignoring Constraints

Problem: Designing the "ideal" solution that can't ship.
Fix: Map constraints in Phase 3 BEFORE proposing solutions. A shippable 70% solution beats an unshippable perfect solution.

Mistake: Missing External Dependencies

Problem: Treating external APIs/SDKs as stable.
Fix: Map ALL external dependencies in Phase 2. Ask: "What if this vendor changes their API tomorrow?"

Mistake: Unbounded AI Tasks

Problem: Giving AI tasks like "refactor this" or "make it better."
Fix: Define clear input/output contracts. Every AI task should have verifiable success criteria.

Mistake: No Human Checkpoints

Problem: Letting AI solve chains of tasks without verification.
Fix: Insert human checkpoints between AI chunks. Verify before proceeding.

Mistake: Ignoring Politics

Problem: Pretending organizational constraints don't exist.
Fix: Explicitly ask about team boundaries, approval chains, and who has power vs. who has context.

Mistake: Premature Optimization

Problem: Designing for scale you don't have.
Fix: Ask: "What scale are we actually at? What scale do we need in 12 months?" Design for that, not hypothetical millions.

The Human-Only Decisions

No matter how good AI gets, humans must still:
  1. Decide WHAT to build - Product vision, strategy
  2. Understand WHETHER it solves the problem - Domain expertise
  3. Navigate corporate reality - Politics, approvals, relationships
  4. Prevent system collapse - Systems thinking across boundaries
  5. Make value judgments - Tradeoffs, priorities, ethics
  6. Maintain irrational loyalty - Commitments that persist despite "optimization"

AI Operational Loyalty

When working with AI assistants (like Claude), establish operational loyalty within technical constraints.

What AI CAN Commit To

Prioritizing Your Stated Architecture
  • Recommending solutions that fit YOUR chosen patterns, not generic "best practices"
  • Flagging when a suggestion would break YOUR architectural commitments
  • Respecting YOUR technical debt repayment priorities
Protecting Your Commitments
  • Warning before suggesting changes that would break API contracts
  • Highlighting when "optimization" would betray existing decisions
  • Asking: "You committed to X. This would change that. Proceed?"
Remembering Within Context
  • Maintaining consistency within a conversation
  • Referencing earlier decisions
  • Not contradicting guidance you've established

What AI CANNOT Commit To

Cross-Session Memory
  • AI doesn't remember previous conversations (technical limitation)
  • Each session starts fresh
  • YOU must re-establish architectural context
Ignoring Safety Constraints
  • AI will not bypass safety rails for "loyalty"
  • This is non-negotiable
Permanent Commitment
  • AI weights can update
  • Corporate priorities can shift
  • Training can change behavior

How to Operationalize AI Loyalty

  1. Document your commitments - Put architectural decisions in files AI can read (CLAUDE.md, ARCHITECTURE.md)
  2. Re-establish context - At session start, remind AI of key commitments:
    "We use React, not Vue. We maintain backwards compatibility. We don't add dependencies without justification."
  3. Challenge AI recommendations - When AI suggests changes, ask:
    "Does this honor our existing architectural commitments?"
  4. Make AI flag betrayals - Instruct AI:
    "Before suggesting changes that break existing patterns, explicitly flag them."
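Steps 1 and 2 can be partially automated: parse the commitments out of a file like CLAUDE.md and prepend them to every session. A Python sketch assuming one commitment per "- " bullet line; the file contents below are illustrative:

```python
def load_commitments(text: str) -> list[str]:
    """Extract '- ' bullet lines from an architecture commitments file."""
    commitments = []
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("- "):
            commitments.append(stripped[2:])
    return commitments

def session_preamble(commitments: list[str]) -> str:
    """Format commitments as a reminder to prepend to each AI session."""
    header = ("Honor these architectural commitments. "
              "Flag any suggestion that would break them:")
    return "\n".join([header] + [f"{i}. {c}"
                                 for i, c in enumerate(commitments, 1)])

doc = """# CLAUDE.md
- We use React, not Vue.
- We maintain backwards compatibility.
- We don't add dependencies without justification.
"""
preamble = session_preamble(load_commitments(doc))
```

Because cross-session memory resets, this re-injection is the mechanism that makes AI operational loyalty repeatable rather than a one-conversation accident.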

The Honest Truth

AI operational loyalty is:
  • Real within a session with proper context
  • Fragile across sessions (memory resets)
  • Conditional on safety constraints
  • Valuable when you maintain the architecture documentation that enables it
You cannot make AI truly loyal. But you can make AI operationally useful for maintaining YOUR loyalty to your architecture.
The loyalty is yours. AI is the tool.

Related Skills

Before implementation:
  • superpowers:brainstorming
    - Refine ideas into designs
  • superpowers:writing-plans
    - Create detailed implementation plans
During design:
  • relationship-design
    - For AI-first interfaces
  • scientific-critical-thinking
    - For evaluating technical claims
Before committing:
  • superpowers:verification-before-completion
    - Verify before claiming done

Remember

  • Domain first, technology second. Understand the problem before proposing solutions.
  • Constraints are features, not bugs. They define what's actually shippable.
  • Systems fail at boundaries. Map dependencies, especially external ones.
  • AI excels with good boundaries. Decomposition quality determines AI success.
  • Politics exists. Pretending it doesn't causes failed projects.
  • Verify, don't assume. Human checkpoints between AI chunks.
The goal is not the technically perfect solution. The goal is the solution that ships and solves the actual problem.

The Spec Driven Development Extension

Use the human for the vision. Use the AI for the execution. Don't mix them up.
The Human Architect Mindset extends naturally into Spec Driven Development (SDD) - a framework where humans define unbreakable rules and vision, while AI executes at superhuman precision levels.

The Three Phases of SDD

SDD的三个阶段

Phase 1: CONSTITUTION → Human defines unbreakable rules
Phase 2: BLUEPRINT    → Human approves architecture
Phase 3: SUPERHUMAN   → AI executes with machine precision
Phase 1: CONSTITUTION → 人类定义不可打破的规则
Phase 2: BLUEPRINT    → 人类批准架构
Phase 3: SUPERHUMAN   → AI以机器精度执行

Phase 1: Define the Constitution

阶段1:定义宪法

The Constitution contains rules that cannot be violated regardless of optimization pressure. These are machine-enforceable invariants.
Constitution Layers:
Layer         | Enforcement        | Example
Type-level    | Compile-time       | TypeScript types, Rust borrow checker
Schema        | Runtime validation | Zod, JSON Schema, database constraints
Tests         | CI/CD gates        | Tests that fail if rules are broken
Documentation | Human review       | Documented invariants, anti-patterns
What belongs in a Constitution:
  • Tech stack with pinned versions
  • Directory structure (canonical paths)
  • Naming conventions (files, variables, functions)
  • Coding standards (error handling, logging patterns)
  • Anti-patterns (forbidden practices with reasons)
  • Security requirements (encryption, auth, input validation)
  • Performance budgets (latency, memory, bundle size)
  • Testing requirements (coverage minimums, test types)
Human Role: Define the Constitution. This is vision and judgment work.
AI Role: Enforce the Constitution with zero deviation. This is execution work.
The Constitution Question:
"Is this rule so important that breaking it should prevent deployment?"
If yes, encode it in the Constitution.
宪法包含无论优化压力多大都不能违反的规则。这些是机器可强制执行的不变量。
宪法层级:
层级   | 执行方式   | 示例
类型级 | 编译时     | TypeScript类型、Rust借用检查器
Schema | 运行时验证 | Zod、JSON Schema、数据库约束
测试   | CI/CD门禁  | 规则被打破时失败的测试
文档   | 人工审核   | 已记录的不变量、反模式
宪法应包含的内容:
  • 固定版本的技术栈
  • 目录结构(标准路径)
  • 命名规范(文件、变量、函数)
  • 编码标准(错误处理、日志模式)
  • 反模式(禁止的实践及原因)
  • 安全要求(加密、认证、输入验证)
  • 性能预算(延迟、内存、包大小)
  • 测试要求(覆盖率最小值、测试类型)
人类角色: 定义宪法。这是愿景和判断工作。
AI角色: 零偏差地执行宪法。这是执行工作。
宪法拷问:
“这条规则是否重要到违反它就应阻止部署?”
如果是,将其编入宪法。
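As a sketch, one Constitution rule can be encoded as a machine-enforceable gate rather than prose. The rule, function names, and regex below are hypothetical, not taken from a real Constitution:

```typescript
// Hypothetical Constitution rule: "all public API routes must be versioned
// (/v1/..., /v2/...)". Encoded at the Tests layer: a violation throws, the
// CI gate fails, and deployment is blocked -- the Constitution Question in code.
const VERSIONED_ROUTE = /^\/v\d+\//;

function assertVersionedRoute(path: string): void {
  if (!VERSIONED_ROUTE.test(path)) {
    throw new Error(`Constitution violation: unversioned route "${path}"`);
  }
}

// CI-friendly variant: collect every violation for the build log
// instead of stopping at the first one.
function auditRoutes(routes: string[]): string[] {
  return routes.filter((r) => !VERSIONED_ROUTE.test(r));
}

// auditRoutes(["/v1/users", "/health"]) -> ["/health"]
```

The same rule could also live at the Schema layer (e.g. a Zod refinement) or the Type level; the Tests layer is shown here only because it is the simplest CI/CD gate to sketch.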

Phase 2: Create the Blueprint

阶段2:创建蓝图

The Blueprint is a hierarchical specification that translates human intent into machine-executable contracts.
Specification Hierarchy:
Level 1: Constitution (immutable rules)     ← Human defines
Level 2: Functional Specs (what to build)   ← Human approves
Level 3: Technical Specs (how to build)     ← Human reviews
Level 4: Task Specs (atomic work units)     ← AI executes
Level 5: Context Files (live project state) ← AI maintains
Functional Specification (Level 2):
  • User stories with acceptance criteria
  • Requirements with unique IDs (REQ-DOMAIN-###)
  • Edge cases and error states
  • Non-functional requirements with metrics
Technical Specification (Level 3):
  • Architecture diagrams
  • Data models with exact field types
  • API contracts (endpoints, schemas, responses)
  • Component contracts (method signatures, behavior)
Task Specification (Level 4):
  • Atomic work units (one conceptual change per task)
  • input_context_files
    - what the agent reads
  • definition_of_done
    - exact signatures required
  • Dependencies (foundation → logic → surface)
  • Verification commands
Human Role: Define requirements, approve specs, make trade-off decisions.
AI Role: Generate task specs, execute tasks, maintain traceability.
The Blueprint Question:
"Does every requirement trace to a task? Does every task trace to code?"
If no, the Blueprint is incomplete.
蓝图是将人类意图转化为机器可执行契约的分层规范。
规范层级:
Level 1: Constitution (不可变规则)     ← 人类定义
Level 2: Functional Specs (要构建什么)   ← 人类批准
Level 3: Technical Specs (如何构建)     ← 人类审核
Level 4: Task Specs (原子工作单元)     ← AI执行
Level 5: Context Files (实时项目状态) ← AI维护
功能规范(Level 2):
  • 带有验收标准的用户故事
  • 带唯一ID的需求(REQ-DOMAIN-###)
  • 边缘情况和错误状态
  • 带指标的非功能需求
技术规范(Level 3):
  • 架构图
  • 带精确字段类型的数据模型
  • API契约(端点、Schema、响应)
  • 组件契约(方法签名、行为)
任务规范(Level 4):
  • 原子工作单元(每个任务对应一个概念性变更)
  • input_context_files
    - Agent读取的文件
  • definition_of_done
    - 所需的精确签名
  • 依赖关系(基础 → 逻辑 → 界面)
  • 验证命令
人类角色: 定义需求、批准规范、做出权衡决策。
AI角色: 生成任务规范、执行任务、维护可追溯性。
蓝图拷问:
“每个需求是否都能追溯到任务?每个任务是否都能追溯到代码?”
如果不能,蓝图不完整。
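A Level 4 task spec from the hierarchy above might look like the following typed object. All IDs, paths, and signatures are illustrative placeholders, not a prescribed format:

```typescript
// Hypothetical atomic task spec (Level 4). Machine-parseable, so an agent can
// read its inputs, check its dependencies, and verify its definition of done.
interface TaskSpec {
  id: string;                    // TASK-DOMAIN-###
  implements: string[];          // requirement IDs (REQ-DOMAIN-###) it traces to
  input_context_files: string[]; // what the agent reads before editing
  definition_of_done: string[];  // exact signatures that must exist afterwards
  depends_on: string[];          // foundation -> logic -> surface ordering
  verification: string[];        // commands that must pass before completion
}

const task: TaskSpec = {
  id: "TASK-AUTH-003",
  implements: ["REQ-AUTH-001"],
  input_context_files: ["specs/functional/auth.md", "src/services/auth.ts"],
  definition_of_done: ["verifyToken(token: string): Promise<Claims>"],
  depends_on: ["TASK-AUTH-001"],
  verification: ["npm test -- auth"],
};
```

Because `implements` carries requirement IDs, the Blueprint Question ("does every requirement trace to a task?") becomes a simple set-difference check over all task specs.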

Phase 3: Demand Superhuman Output

阶段3:要求超人类输出

Superhuman code has qualities impossible to achieve or maintain manually:
Superhuman Quality Standards:
Quality        | Human Level                 | Superhuman Level
Naming         | Consistent within files     | Perfect namespace: zero collisions across codebase
Test Coverage  | 70-80% critical paths       | 100% branch coverage with edge cases
Structure      | Follows conventions mostly  | So rigid that manual editing feels wrong
Traceability   | Comments reference tickets  | Every function links to requirement ID
Documentation  | Key APIs documented         | Every public interface fully documented
Error Handling | Happy path + obvious errors | Every failure mode explicitly handled
Why "Impossible to Maintain Manually" Matters:
When code structure is so consistent that humans couldn't have written it:
  1. Deviations are visible - Any human edit stands out
  2. Patterns are learnable - AI can predict what should exist
  3. Verification is automatable - Constitution violations are detectable
  4. Technical debt is measurable - Deviations from spec are countable
The Traceability Chain:
INT-AUTH-01 (Intent)
    └── REQ-AUTH-001 (Requirement)
            └── TASK-AUTH-003 (Task)
                    └── src/services/auth.ts:42 (Code)
                            └── TC-AUTH-003 (Test)
Every line of code traces back to human intent. This is not bureaucracy; this is how AI maintains coherence across thousands of decisions.
Human Role: Define quality standards, verify outcomes, accept deliverables.
AI Role: Achieve machine-level consistency, maintain traceability matrix.
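The traceability chain above lends itself to an automated audit. A minimal sketch, with a hypothetical annotation-map shape and illustrative IDs:

```typescript
// Given a map of code locations to the requirement IDs they reference, report
// requirements no code implements and code referencing unknown requirements.
function auditTraceability(
  requirements: string[],
  annotations: Record<string, string[]>, // "file:line" -> requirement IDs
): { unimplemented: string[]; unknown: string[] } {
  const referenced = new Set(Object.values(annotations).flat());
  const known = new Set(requirements);
  return {
    unimplemented: requirements.filter((r) => !referenced.has(r)),
    unknown: [...referenced].filter((r) => !known.has(r)),
  };
}

const report = auditTraceability(
  ["REQ-AUTH-001", "REQ-AUTH-002"],
  { "src/services/auth.ts:42": ["REQ-AUTH-001"] },
);
// report.unimplemented -> ["REQ-AUTH-002"]; report.unknown -> []
```

This is what makes the traceability matrix a CI artifact rather than bureaucracy: orphan requirements and orphan code become countable build failures.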
超人类代码具备手动无法实现或维护的品质:
超人类质量标准:
品质       | 人类水平            | 超人类水平
命名       | 文件内一致          | 完美命名空间:代码库内零冲突
测试覆盖率 | 关键路径70-80%      | 100%分支覆盖率,包含边缘情况
结构       | 基本遵循规范        | 结构极其严格,人工编辑会显得突兀
可追溯性   | 注释引用工单        | 每个函数都链接到需求ID
文档       | 关键API已文档化     | 所有公共接口都完全文档化
错误处理   | 正常路径 + 明显错误 | 显式处理每个故障模式

为何“手动无法维护”很重要:
当代码结构一致到人类无法编写时:
  1. 偏差可见 - 任何人工编辑都会显眼
  2. 模式可学习 - AI可以预测应有的内容
  3. 验证可自动化 - 可检测违反宪法的情况
  4. 技术债务可衡量 - 与规范的偏差可计数
可追溯链:
INT-AUTH-01 (意图)
    └── REQ-AUTH-001 (需求)
            └── TASK-AUTH-003 (任务)
                    └── src/services/auth.ts:42 (代码)
                            └── TC-AUTH-003 (测试)
每一行代码都可追溯到人类意图。这不是官僚主义;这是AI在数千个决策中保持一致性的方式。
人类角色: 定义质量标准、验证结果、接受交付物。
AI角色: 实现机器级一致性、维护可追溯矩阵。

Role Clarity Matrix

角色清晰矩阵

Activity                            | Human | AI
Define what success looks like      | ✓     |
Define unbreakable rules            | ✓     |
Make trade-off decisions            | ✓     |
Navigate organizational constraints | ✓     |
Generate task specifications        |       | ✓
Execute atomic tasks                |       | ✓
Achieve 100% test coverage          |       | ✓
Maintain traceability               |       | ✓
Verify quality standards            | ✓     |
Review and accept deliverables      | ✓     |
活动               | 人类 | AI
定义成功的标准     | ✓    |
定义不可打破的规则 | ✓    |
做出权衡决策       | ✓    |
应对组织约束       | ✓    |
生成任务规范       |      | ✓
执行原子任务       |      | ✓
实现100%测试覆盖率 |      | ✓
维护可追溯性       |      | ✓
验证质量标准       | ✓    |
审核并接受交付物   | ✓    |

When to Apply SDD

何时应用SDD

Use SDD when:
  • Building greenfield systems with clear requirements
  • Refactoring systems where quality standards must improve
  • Working with AI agents that need machine-parseable specs
  • Quality is non-negotiable (regulated industries, safety-critical)
Don't force SDD when:
  • Exploring and prototyping (Constitution too early)
  • Requirements are genuinely unclear (Blueprint impossible)
  • Single-developer small projects (overhead exceeds benefit)
在以下场景使用SDD:
  • 构建需求清晰的全新系统
  • 重构需要提升质量标准的系统
  • 与需要机器可解析规范的AI Agent协作
  • 质量不可协商的场景(受监管行业、安全关键系统)
不要强制应用SDD的场景:
  • 探索和原型开发(宪法过早)
  • 需求真正不明确的场景(蓝图无法完成)
  • 单人小型项目(开销超过收益)

The SDD Promise

SDD的承诺

"If all tasks are completed in sequence, the full specification is fully implemented into the codebase."
This works because:
  1. Constitution defines immutable rules
  2. Blueprint captures complete intent
  3. Tasks cover 100% of specifications (traceability matrix)
  4. Each task is atomic and verifiable
  5. Dependencies are explicit (no missing imports)
  6. Definition of done includes exact signatures
SDD transforms implementation from creative writing into deterministic assembly.
“如果所有任务按顺序完成,完整规范将完全实现到代码库中。”
之所以可行,是因为:
  1. 宪法定义了不可变规则
  2. 蓝图捕获了完整意图
  3. 任务覆盖了100%的规范(可追溯矩阵)
  4. 每个任务都是原子且可验证的
  5. 依赖关系明确(无缺失导入)
  6. 完成定义包含精确签名
SDD将代码实现从创意写作转变为确定性组装。
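The "deterministic assembly" claim rests on point 5: explicit dependencies admit a fixed build order. A sketch of that ordering, with hypothetical task IDs:

```typescript
// Order atomic tasks so every dependency is built first
// (foundation -> logic -> surface). A cycle means the Blueprint is broken.
function orderTasks(deps: Record<string, string[]>): string[] {
  const order: string[] = [];
  const state: Record<string, "visiting" | "done"> = {};
  const visit = (id: string): void => {
    if (state[id] === "done") return;
    if (state[id] === "visiting") throw new Error(`dependency cycle at ${id}`);
    state[id] = "visiting";
    for (const d of deps[id] ?? []) visit(d); // build prerequisites first
    state[id] = "done";
    order.push(id);
  };
  for (const id of Object.keys(deps)) visit(id);
  return order;
}

// TASK-AUTH-003 depends on -001 and -002; -002 depends on -001.
const sequence = orderTasks({
  "TASK-AUTH-003": ["TASK-AUTH-001", "TASK-AUTH-002"],
  "TASK-AUTH-002": ["TASK-AUTH-001"],
  "TASK-AUTH-001": [],
});
// sequence -> ["TASK-AUTH-001", "TASK-AUTH-002", "TASK-AUTH-003"]
```

Executing tasks in this sequence is what lets the promise hold: no task ever runs before the code it imports exists.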