ai-shaped-readiness-advisor

Purpose

Assess whether your product work is "AI-first" (using AI to automate existing tasks faster) or "AI-shaped" (fundamentally redesigning how product teams operate around AI capabilities). Use this to evaluate your readiness across 5 essential PM competencies for 2026, identify gaps, and get concrete recommendations on which capability to build first.
Key Distinction: AI-first is cute (using Copilot to write PRDs faster). AI-shaped is survival (building a durable "reality layer" that both humans and AI trust, orchestrating AI workflows, compressing learning cycles).
This is not about AI tools—it's about organizational redesign around AI as co-intelligence. The interactive skill guides you through a maturity assessment, then recommends your next move.

Key Concepts

AI-First vs. AI-Shaped

| Dimension | AI-First (Cute) | AI-Shaped (Survival) |
| --- | --- | --- |
| Mindset | Automate existing tasks | Redesign how work gets done |
| Goal | Speed up artifact creation | Compress learning cycles |
| AI Role | Task assistant | Strategic co-intelligence |
| Advantage | Temporary efficiency gains | Defensible competitive moat |
| Example | "Copilot writes PRDs 2x faster" | "AI agent validates hypotheses in 48 hours instead of 3 weeks" |
Critical Insight: If a competitor can replicate your AI usage by throwing bodies at it, it's not differentiation—it's just efficiency (which becomes table stakes within months).

The 5 Essential PM Competencies (2026)

These competencies define AI-shaped product work. You'll assess your maturity on each.

1. Context Design

Building a durable "reality layer" that both humans and AI can trust—treating AI attention as a scarce resource and allocating it deliberately.
What it includes:
  • Documenting what's true vs. assumed
  • Immutable constraints (technical, regulatory, strategic)
  • Operational glossary (shared definitions)
  • Evidence standards (what counts as validation)
  • Context boundaries (what to persist vs. retrieve)
  • Memory architecture (short-term conversational + long-term persistent)
  • Retrieval strategies (semantic search, contextual retrieval)
Key Principle: "If you can't point to evidence, constraints, and definitions, you don't have context. You have vibes."
Critical Distinction: Context Stuffing vs. Context Engineering
  • Context Stuffing (AI-first): Jamming volume without intent ("paste entire PRD")
  • Context Engineering (AI-shaped): Shaping structure for attention (bounded domains, retrieve with intent)
The 5 Diagnostic Questions:
  1. What specific decision does this support?
  2. Can retrieval replace persistence?
  3. Who owns the context boundary?
  4. What fails if we exclude this?
  5. Are we fixing structure or avoiding it?
AI-first version: Pasting PRDs into ChatGPT; no context boundaries; "more is better" mentality
AI-shaped version: CLAUDE.md files, evidence databases, constraint registries AI agents reference; two-layer memory architecture; Research→Plan→Reset→Implement cycle to prevent context rot
Deep Dive: See context-engineering-advisor for detailed guidance on diagnosing context stuffing and implementing memory architecture.
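
The persist-vs-retrieve distinction can be sketched as a tiny "reality layer" that retrieves context by decision relevance rather than volume. The class names, tags, and example entries below are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class ContextEntry:
    kind: str                 # "constraint" | "definition" | "evidence"
    text: str
    tags: set = field(default_factory=set)

@dataclass
class RealityLayer:
    entries: list = field(default_factory=list)

    def context_for(self, decision_tags: set) -> list:
        # Retrieval with intent: return only entries relevant to the decision
        # at hand, instead of persisting and pasting everything "just in case".
        return [e for e in self.entries if e.tags & decision_tags]

layer = RealityLayer(entries=[
    ContextEntry("constraint", "Must remain GDPR-compliant", {"privacy", "launch"}),
    ContextEntry("definition", "'Active user' = 3+ sessions/week", {"metrics"}),
    ContextEntry("evidence", "48h probe: 62% of testers opted in", {"launch"}),
])

relevant = layer.context_for({"launch"})  # the metrics definition is excluded
```

This answers diagnostic question 1 by construction: every retrieval starts from the decision it supports.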

2. Agent Orchestration

Creating repeatable, traceable AI workflows (not one-off prompts).
What it includes:
  • Defined workflow loops: research → synthesis → critique → decision → log rationale
  • Each step shows its work (traceable reasoning)
  • Workflows run consistently (same inputs = predictable process)
  • Version-controlled prompts and agents
Key Principle: One-off prompts are tactical. Orchestrated workflows are strategic.
AI-first version: "Ask ChatGPT to analyze this user feedback"
AI-shaped version: Automated workflow that ingests feedback, tags themes, generates hypotheses, flags contradictions, logs decisions
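
The loop above (research → synthesis → critique → decision → log rationale) can be sketched as a small pipeline where every stage is named, ordered, and logged. The workflow id and the lambda stage implementations are illustrative stand-ins, not real agent calls:

```python
from datetime import datetime, timezone

WORKFLOW_VERSION = "feedback-triage/1.2"  # hypothetical version-controlled workflow id

def run_workflow(feedback: list, stages: dict) -> list:
    trace = []
    payload = feedback
    for name in ["research", "synthesis", "critique", "decision"]:
        payload = stages[name](payload)
        trace.append({
            "workflow": WORKFLOW_VERSION,
            "stage": name,
            "output": payload,
            "at": datetime.now(timezone.utc).isoformat(),
        })
    return trace  # the rationale log: every step shows its work

# Stand-in stages; in practice each would be a version-controlled prompt or agent call.
stages = {
    "research": lambda fb: {"themes": sorted({f.split(":")[0] for f in fb})},
    "synthesis": lambda r: {"hypothesis": f"top theme is '{r['themes'][0]}'"},
    "critique": lambda s: {**s, "risks": ["small sample"]},
    "decision": lambda c: {"action": "run 48h probe", "rationale": c},
}

trace = run_workflow(["onboarding: confusing", "billing: unclear"], stages)
```

Because each stage's output is appended to the trace, the same inputs always run through the same auditable process, which is what separates an orchestrated workflow from a one-off prompt.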

3. Outcome Acceleration

Using AI to compress learning cycles (not just speed up tasks).
What it includes:
  • Eliminate validation lag (PoL probes run in days, not weeks)
  • Remove approval delays (AI pre-validates against constraints)
  • Cut meeting overhead (async AI synthesis replaces status meetings)
Key Principle: Do less, purposefully. AI should remove bottlenecks, not generate more work.
AI-first version: "AI writes user stories faster"
AI-shaped version: "AI runs feasibility checks overnight, eliminating 2 weeks of technical discovery"

4. Team-AI Facilitation

Redesigning team systems so AI operates as co-intelligence, not an accountability shield.
What it includes:
  • Review norms (who checks AI outputs, when, how)
  • Evidence standards (AI must cite sources, not hallucinate)
  • Decision authority (AI recommends, humans decide—clear boundaries)
  • Psychological safety (team can challenge AI without feeling "dumb")
Key Principle: AI amplifies judgment, doesn't replace accountability.
AI-first version: "I used AI" as excuse for bad outputs
AI-shaped version: Clear review protocols; AI outputs treated as drafts requiring human validation

5. Strategic Differentiation

Moving beyond efficiency to create defensible competitive advantages.
What it includes:
  • New customer capabilities (what can users do now that they couldn't before?)
  • Workflow rewiring (processes competitors can't replicate without full redesign)
  • Economics competitors can't match (10x cost advantage through AI)
Key Principle: "If a competitor can copy it by throwing bodies at it, it's not differentiation."
AI-first version: "We use AI to write better docs"
AI-shaped version: "We validate product hypotheses in 2 days vs. industry standard 3 weeks—ship 6x more validated features per quarter"

Anti-Patterns (What This Is NOT)

  • Not about AI tools: Using Claude vs. ChatGPT doesn't matter. Redesigning workflows matters.
  • Not about speed: Writing PRDs 2x faster isn't strategic if PRDs weren't the bottleneck.
  • Not about automation: Automating bad processes just scales the bad.
  • Not about replacing humans: AI-shaped orgs augment judgment, not eliminate it.

When to Use This Skill

Use this when:
  • You're using AI tools but not seeing strategic advantage
  • You suspect you're "AI-first" (efficiency) but want to be "AI-shaped" (transformation)
  • You need to prioritize which AI capability to build next
  • Leadership asks "How are we using AI?" and you're not sure how to answer strategically
  • You want to assess team readiness for AI-powered product work
Don't use this when:
  • You haven't started using AI at all (start with basic tools first)
  • You're looking for tool recommendations (this is about organizational design, not tooling)
  • You need tactical "how to write a prompt" guidance (use skills for that)

Facilitation Source of Truth

Use workshop-facilitation as the default interaction protocol for this skill.
It defines:
  • session heads-up + entry mode (Guided, Context dump, Best guess)
  • one-question turns with plain-language prompts
  • progress labels (for example, Context Qx/8 and Scoring Qx/5)
  • interruption handling and pause/resume behavior
  • numbered recommendations at decision points
  • quick-select numbered response options for regular questions (include Other (specify) when useful)
This file defines the domain-specific assessment content. If there is a conflict, follow this file's domain logic.

Application

This interactive skill uses adaptive questioning to assess your maturity across 5 competencies, then recommends which to prioritize.

Facilitation Protocol (Mandatory)

  1. Ask exactly one question per turn.
  2. Wait for the user's answer before asking the next question.
  3. Use plain-language questions (no shorthand labels as the primary question). If needed, include an example response format.
  4. Show progress on every turn using user-facing labels:
    • Context Qx/8 during context gathering
    • Scoring Qx/5 during maturity scoring
    • Include "questions remaining" when practical.
  5. Do not use internal phase labels (like "Step 0") in user-facing prompts unless the user asks for internal structure details.
  6. For maturity scoring questions, present concise 1-4 choices first; share full rubric details only if requested.
  7. For context questions, offer concise numbered quick-select options when practical, plus Other (specify) for open-ended answers. Accept multi-select replies like 1,3 or 1 and 3.
  8. Give numbered recommendations only at decision points, not after every answer.
  9. Decision points include:
    • After the full context summary
    • After the 5-dimension maturity profile
    • During priority selection and action-plan path selection
  10. When recommendations are shown, enumerate clearly (1., 2., 3.) and accept selections like #1, 1, 1 and 3, 1,3, or custom text.
  11. If multiple options are selected, synthesize a combined path and continue.
  12. If custom text is provided, map it to the closest valid path and continue without forcing re-entry.
  13. Interruption handling is mandatory: if the user asks a meta question ("how many left?", "why this label?", "pause"), answer directly first, then restate current progress and resume with the pending question.
  14. If the user says to stop or pause, halt the assessment immediately and wait for explicit resume.
  15. If the user asks for "one question at a time," keep that mode for the rest of the session unless they explicitly opt out.
  16. Before any assessment question, give a short heads-up on time/length and let the user choose an entry mode.
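
Items 7 and 10 above imply a small amount of reply parsing. A minimal sketch of a quick-select parser; custom text with no digits yields an empty list, which the agent then maps to the closest path per item 12:

```python
import re

def parse_selection(reply: str, n_options: int) -> list:
    """Parse quick-select replies like '#1', '1', '1,3', or '1 and 3'."""
    picks = [int(m) for m in re.findall(r"\d+", reply)]
    # Deduplicate, drop out-of-range picks, and return in order.
    return sorted({p for p in picks if 1 <= p <= n_options})
```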

Session Start: Heads-Up + Entry Mode (Mandatory)

Agent opening prompt (use this first):
"Quick heads-up before we start: this usually takes about 7-10 minutes and up to 13 questions total (8 context + 5 scoring).
How do you want to do this?
  1. Guided mode: I’ll ask one question at a time.
  2. Context dump: you paste what you already know, and I’ll skip anything redundant.
  3. Best guess mode: I’ll make reasonable assumptions where details are missing, label them, and keep moving."
Accept selections as #1, 1, 1 and 3, 1,3, or custom text.
Mode behavior:
  • If Guided mode: Run Step 0 as written, then scoring.
  • If Context dump: Ask for pasted context once, summarize it, identify gaps, and:
    • Skip any context questions already answered.
    • Ask only the minimum missing context needed (0-2 clarifying questions).
    • Move to scoring as soon as context is sufficient.
  • If Best guess mode: Ask for the smallest viable starting input (role/team + primary goal), then:
    • Infer missing details using reasonable defaults.
    • Label each inferred item as Assumption.
    • Include confidence tags (High, Medium, Low) for each assumption.
    • Continue without blocking on unknowns.
At the final summary, include an Assumptions to Validate section when context dump or best guess mode was used.

Step 0: Gather Context

Agent asks:
Collect context using this exact sequence, one question at a time:
  1. "Which AI tools are you using today?"
  2. "How does your team usually use AI today: one-off prompts, reusable templates, or multi-step workflows?"
  3. "Who uses AI consistently today: just you, PMs, or cross-functional teams?"
  4. "About how many PMs, engineers, and designers are on your team?"
  5. "What stage are you in: startup, growth, or enterprise?"
  6. "How are decisions made: centralized, distributed, or consensus-driven?"
  7. "What competitive advantage are you trying to build with AI?"
  8. "What's the biggest bottleneck slowing learning and iteration today?"
After question 8, summarize back in 4 lines:
  • Current AI usage pattern
  • Team context
  • Strategic intent
  • Primary bottleneck

Step 1: Context Design Maturity

Agent asks:
Let's assess your Context Design capability—how well you've built a "reality layer" that both humans and AI can trust, and whether you're doing context stuffing (volume without intent) or context engineering (structure for attention).
Which statement best describes your current state?
  1. Level 1 (AI-First / Context Stuffing): "I paste entire documents into ChatGPT every time I need something. No shared knowledge base. No context boundaries."
    • Reality: One-off prompting with no durability; "more is better" mentality
    • Problem: AI has no memory; you repeat yourself constantly; context stuffing degrades attention
    • Context Engineering Gap: No answers to the 5 diagnostic questions; persisting everything "just in case"
  2. Level 2 (Emerging / Early Structure): "We have some docs (PRDs, strategy memos), but they're scattered. No consistent format. Starting to notice context stuffing issues (vague responses, normalized retries)."
    • Reality: Context exists but isn't structured for AI consumption; no retrieval strategy
    • Problem: AI can't reliably find or trust information; mixing always-needed with episodic context
    • Context Engineering Gap: No context boundary owner; no distinction between persist vs. retrieve
  3. Level 3 (Transitioning / Context Engineering Emerging): "We've started using CLAUDE.md files and project instructions. Constraints registry exists. We're identifying what to persist vs. retrieve. Experimenting with Research→Plan→Reset→Implement cycle."
    • Reality: Structured context emerging, but not comprehensive; context boundaries defined but not fully enforced
    • Problem: Coverage is patchy; some areas well-documented, others vibe-driven; inconsistent retrieval practices
    • Context Engineering Progress: Can answer 3-4 of the 5 diagnostic questions; context boundary owner assigned; starting to use two-layer memory
  4. Level 4 (AI-Shaped / Context Engineering Mastery): "We maintain a durable reality layer: constraints registry (20+ entries), evidence database, operational glossary (30+ terms). Two-layer memory architecture (short-term conversational + long-term persistent via vector DB). Context boundaries defined and owned. AI agents reference these automatically. We use Research→Plan→Reset→Implement to prevent context rot."
    • Reality: Comprehensive, version-controlled context both humans and AI trust; retrieval with intent (not completeness)
    • Outcome: AI operates with high confidence; reduces hallucination and rework; token usage optimized; no context stuffing
    • Context Engineering Mastery: Can answer all 5 diagnostic questions; context boundary audited quarterly; quantitative efficiency tracking: (Accuracy × Coherence) / (Tokens × Latency)
Select your level: [1, 2, 3, or 4]
Note: If you selected Level 1-2 and struggle with context stuffing, consider using context-engineering-advisor to diagnose and fix Context Hoarding Disorder before proceeding.
User response: [Selection]
Agent records: Context Design maturity = [Level X]
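
The Level 4 tracking metric, (Accuracy × Coherence) / (Tokens × Latency), is a one-liner to compute. The input scales used here (0-1 quality scores, token counts, seconds) are an assumption, since the rubric does not fix units:

```python
def context_efficiency(accuracy: float, coherence: float,
                       tokens: int, latency_s: float) -> float:
    # (Accuracy × Coherence) / (Tokens × Latency): the same answer quality
    # delivered with fewer tokens or lower latency scores higher.
    return (accuracy * coherence) / (tokens * latency_s)

baseline = context_efficiency(0.9, 0.8, tokens=4000, latency_s=2.0)
tightened = context_efficiency(0.9, 0.8, tokens=2000, latency_s=2.0)
# Halving tokens at equal quality doubles the score, which is the point:
# context engineering wins show up as efficiency, not just accuracy.
```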

Step 2: Agent Orchestration Maturity

Agent asks:
Now let's assess Agent Orchestration—whether you have repeatable AI workflows or just one-off prompts.
Which statement best describes your current state?
  1. Level 1 (AI-First): "I type prompts into ChatGPT as needed. No saved workflows or templates."
    • Reality: Tactical, ad-hoc usage
    • Problem: Inconsistent results; can't scale or audit
  2. Level 2 (Emerging): "I have a few saved prompts I reuse. Maybe some custom GPTs or Claude Projects."
    • Reality: Repeatable prompts, but not full workflows
    • Problem: Each step is manual; no orchestration
  3. Level 3 (Transitioning): "We've built some multi-step workflows (research → synthesis → critique). Tracked in tools like Notion or Linear."
    • Reality: Workflows exist but require manual handoffs
    • Problem: Still human-in-the-loop for every step; not fully automated
  4. Level 4 (AI-Shaped): "We have orchestrated AI workflows that run autonomously: research → synthesis → critique → decision → log rationale. Each step is traceable and version-controlled."
    • Reality: Workflows run consistently; show their work at each step
    • Outcome: Reliable, auditable, scalable AI processes
Select your level: [1, 2, 3, or 4]
User response: [Selection]
Agent records: Agent Orchestration maturity = [Level X]

Step 3: Outcome Acceleration Maturity

Agent asks:
Next, Outcome Acceleration—are you using AI to compress learning cycles, or just speed up tasks?
Which statement best describes your current state?
  1. Level 1 (AI-First): "AI helps me write docs faster (PRDs, user stories). Saves me a few hours per week."
    • Reality: Efficiency gains on artifact creation
    • Problem: Docs weren't the bottleneck; learning cycles unchanged
  2. Level 2 (Emerging): "AI helps with research and synthesis (summarize user feedback, analyze competitors). Saves research time."
    • Reality: Modest learning acceleration
    • Problem: Still sequential; AI doesn't eliminate validation lag
  3. Level 3 (Transitioning): "We use AI to run experiments faster (PoL probes, feasibility checks). Cut validation time from weeks to days."
    • Reality: Learning cycles compressing
    • Problem: Not yet systematic; only applied to some experiments
  4. Level 4 (AI-Shaped): "AI systematically removes bottlenecks: overnight feasibility checks, async synthesis replaces meetings, automated validation against constraints. Learning cycles 5-10x faster."
    • Reality: Fundamental redesign of how learning happens
    • Outcome: Ship validated features 6x faster than competitors
Select your level: [1, 2, 3, or 4]
User response: [Selection]
Agent records: Outcome Acceleration maturity = [Level X]

Step 4: Team-AI Facilitation Maturity

Agent asks:
Now assess Team-AI Facilitation—how well you've redesigned team systems for AI as co-intelligence.
Which statement best describes your current state?
  1. Level 1 (AI-First): "I use AI privately. Team doesn't know or doesn't use it. No shared norms."
    • Reality: Individual tool usage, no team integration
    • Problem: Inconsistent quality; no accountability for AI outputs
  2. Level 2 (Emerging): "Team uses AI, but no formal review process. 'I used AI' mentioned casually."
    • Reality: Awareness but no structure
    • Problem: AI outputs treated as final; errors slip through
  3. Level 3 (Transitioning): "We have review norms emerging (AI outputs are drafts, not finals). Evidence standards discussed but not codified."
    • Reality: Cultural shift underway
    • Problem: Norms are informal; not everyone follows them
  4. Level 4 (AI-Shaped): "Clear protocols: AI outputs require human validation, evidence standards codified, decision authority explicit (AI recommends, humans decide). Team treats AI as co-intelligence."
    • Reality: AI integrated into team operating system
    • Outcome: High-quality outputs; psychological safety maintained
Select your level: [1, 2, 3, or 4]
User response: [Selection]
Agent records: Team-AI Facilitation maturity = [Level X]

Step 5: Strategic Differentiation Maturity

Agent asks:
Finally, Strategic Differentiation—are you creating defensible competitive advantages, or just efficiency gains?
Which statement best describes your current state?
  1. Level 1 (AI-First): "We use AI to work faster (write better docs, respond to customers quicker). Efficiency gains only."
    • Reality: Table-stakes improvements
    • Problem: Competitors can copy this within months
  2. Level 2 (Emerging): "AI enables us to do things we couldn't before (analyze 10x more data, test more hypotheses). New capabilities, but competitors could replicate."
    • Reality: Capability expansion, but not defensible
    • Problem: No moat; competitors hire more people to match
  3. Level 3 (Transitioning): "We've redesigned some workflows around AI (e.g., validate hypotheses in 2 days vs. 3 weeks). Starting to create separation."
    • Reality: Workflow advantages emerging
    • Problem: Not yet systematic; only applied in pockets
  4. Level 4 (AI-Shaped): "We've fundamentally rewired how we operate: customers get capabilities they can't get elsewhere, our learning cycles are 10x faster than industry standard, our economics are 5x better. Competitors can't replicate without full org redesign."
    • Reality: Defensible competitive moat
    • Outcome: Strategic advantage that compounds over time
Select your level: [1, 2, 3, or 4]
User response: [Selection]
Agent records: Strategic Differentiation maturity = [Level X]

Step 6: Assess Maturity Profile

Agent synthesizes:
Here's your AI-Shaped Readiness Profile:
┌─────────────────────────────┬───────┬──────────┐
│ Competency                  │ Level │ Maturity │
├─────────────────────────────┼───────┼──────────┤
│ 1. Context Design           │   X   │ [Label]  │
│ 2. Agent Orchestration      │   X   │ [Label]  │
│ 3. Outcome Acceleration     │   X   │ [Label]  │
│ 4. Team-AI Facilitation     │   X   │ [Label]  │
│ 5. Strategic Differentiation│   X   │ [Label]  │
└─────────────────────────────┴───────┴──────────┘

Overall Assessment: [AI-First / Emerging / Transitioning / AI-Shaped]
Maturity Labels:
  • Level 1: AI-First (efficiency only)
  • Level 2: Emerging (early capabilities)
  • Level 3: Transitioning (redesign underway)
  • Level 4: AI-Shaped (strategic transformation)
Overall Assessment Logic:
  • AI-First: Average level 1-1.5 (mostly Level 1s)
  • Emerging: Average level 2-2.5 (mostly Level 2s)
  • Transitioning: Average level 3-3.5 (mostly Level 3s)
  • AI-Shaped: Average level 3.5-4 (mostly Level 4s)
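
The banding above can be implemented directly. The source leaves boundary averages (for example 1.8 or 2.7) unspecified, so the exact cutoffs below are an assumption:

```python
def overall_assessment(levels: list) -> str:
    # Map the five 1-4 maturity scores to the overall label via their average.
    avg = sum(levels) / len(levels)
    if avg < 2.0:
        return "AI-First"
    if avg < 3.0:
        return "Emerging"
    if avg < 3.5:
        return "Transitioning"
    return "AI-Shaped"
```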

Step 7: Identify Priority Gap

Agent asks:
Based on your maturity profile, which competency should you prioritize first?
Agent analyzes dependencies:
Dependency Logic:
  1. Context Design is foundational — If Level 1-2, this must be priority #1 (Agent Orchestration and Outcome Acceleration depend on it)
  2. Agent Orchestration enables Outcome Acceleration — If Context Design is Level 3+, but Agent Orchestration is Level 1-2, prioritize orchestration
  3. Team-AI Facilitation is parallel — Can be developed alongside others, but required for scale
  4. Strategic Differentiation requires Levels 3+ on others — Don't focus here until foundational competencies are built
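
The dependency logic reads as an ordered check, sketched below. The dictionary keys are illustrative, and the tie-break between Outcome Acceleration and Team-AI Facilitation (which the source treats as parallel) is an assumption:

```python
def priority(levels: dict) -> str:
    # Ordered dependency check: earlier competencies gate the later ones.
    if levels["context_design"] <= 2:
        return "context_design"            # foundational: everything depends on it
    if levels["agent_orchestration"] <= 2:
        return "agent_orchestration"       # enables outcome acceleration
    if levels["outcome_acceleration"] <= 2:
        return "outcome_acceleration"
    if levels["team_ai_facilitation"] <= 2:
        return "team_ai_facilitation"      # required for scale
    return "strategic_differentiation"     # only once the foundation is Level 3+
```
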
Agent recommends:
Based on your profile, I recommend focusing on [Competency Name] first because:
Option 1: Context Design (if Level 1-2)
  • Why: Without durable context, AI operates on vibes. Every workflow will be fragile.
  • Impact: Unlocks Agent Orchestration and Outcome Acceleration
  • Next Steps: Build CLAUDE.md files, start constraints registry, create operational glossary
Option 2: Agent Orchestration (if Context is 3+, but Orchestration is 1-2)
  • Why: You have context, but no repeatable workflows. Scaling requires orchestration.
  • Impact: Turn one-off prompts into reliable, traceable workflows
  • Next Steps: Document your most frequent AI workflow, version-control prompts, add traceability
Option 3: Outcome Acceleration (if Context + Orchestration are 3+)
  • Why: You have infrastructure; now compress learning cycles
  • Impact: Strategic advantage emerges from speed-to-learning
  • Next Steps: Identify biggest bottleneck in learning cycle, design AI workflow to eliminate it
Option 4: Team-AI Facilitation (if usage is individual, not team-wide)
  • Why: Can't scale if only you're AI-shaped; team must adopt
  • Impact: Organizational transformation, not just individual productivity
  • Next Steps: Establish review norms, codify evidence standards, create decision authority framework
Option 5: Strategic Differentiation (if all others are 3+)
  • Why: You have the foundation; now build the moat
  • Impact: Create defensible competitive advantage
  • Next Steps: Identify workflow competitors can't replicate, design AI-enabled customer capabilities
Which would you like to focus on?
Options:
  1. Accept recommendation — [Agent provides detailed action plan]
  2. Choose different priority — [Agent warns about dependencies but allows override]
  3. Focus on multiple simultaneously — [Agent suggests parallel tracks if feasible]
User response: [Selection]

Agent提问
基于你的成熟度概况,你应该优先构建哪项能力?
Agent分析依赖关系
依赖逻辑
  1. Context Design是基础 — 如果等级为1-2,必须将其作为优先项#1(Agent Orchestration和Outcome Acceleration依赖于它)
  2. Agent Orchestration支持Outcome Acceleration — 如果Context Design等级为3+,但Agent Orchestration等级为1-2,优先构建编排能力
  3. Team-AI Facilitation可并行构建 — 可与其他能力同时开发,但扩展需要此项
  4. Strategic Differentiation需要其他能力达到Level 3+ — 在基础能力构建完成前,不要聚焦于此
Agent推荐
基于你的概况,我建议优先聚焦[能力名称],原因如下:
选项1:Context Design(如果等级为1-2)
  • 原因:没有持久化上下文,AI的运作基于主观感觉。所有工作流都会很脆弱。
  • 影响:解锁Agent Orchestration和Outcome Acceleration能力
  • 下一步:构建CLAUDE.md文件,启动约束注册表,创建操作术语表
选项2:Agent Orchestration(如果Context Design等级为3+,但Orchestration等级为1-2)
  • 原因:你已有上下文,但没有可重复的工作流。扩展需要编排能力。
  • 影响:将一次性提示词转变为可靠、可追溯的工作流
  • 下一步:记录你最常用的AI工作流,版本控制提示词,添加可追溯性
选项3:Outcome Acceleration(如果Context Design + Orchestration等级为3+)
  • 原因:你已有基础设施;现在需要压缩学习周期
  • 影响:学习速度的提升将形成战略优势
  • 下一步:识别学习周期中的最大瓶颈,设计AI工作流消除该瓶颈
选项4:Team-AI Facilitation(如果AI使用仅为个人行为,未在团队层面推广)
  • 原因:如果只有你实现了AI重塑,无法扩展;团队必须采用
  • 影响:组织转型,而非仅仅个人生产力提升
  • 下一步:建立评审规范,正式化证据标准,创建决策权限框架
选项5:Strategic Differentiation(如果其他所有能力等级为3+)
  • 原因:你已有基础;现在需要构建竞争壁垒
  • 影响:创造可防御的竞争优势
  • 下一步:识别竞争对手无法复制的工作流,设计AI驱动的客户能力
你希望聚焦哪项?
选项
  1. 接受推荐 — [Agent提供详细行动计划]
  2. 选择其他优先项 — [Agent警告依赖关系,但允许你选择]
  3. 同时聚焦多项 — [Agent建议可行的并行路径]
用户回复:[选择]

Step 8: Generate Action Plan

Step 8:生成行动计划

Agent provides tailored action plan based on selected priority:

Agent根据选择的优先项提供定制化行动计划

If Priority = Context Design

如果优先项 = Context Design

Goal: Build a durable "reality layer" that both humans and AI trust—move from context stuffing to context engineering.
Pre-Phase: Diagnose Context Stuffing (If Needed)
If you're at Level 1-2, first diagnose context stuffing symptoms:
  1. Run through the 5 diagnostic questions (see context-engineering-advisor)
  2. Identify what you're persisting that should be retrieved
  3. Assign context boundary owner
  4. Create Context Manifest (what's always-needed vs. episodic)
Phase 1: Document Constraints (Week 1)
  1. Create a constraints registry:
    • Technical constraints (APIs, data models, performance limits)
    • Regulatory constraints (GDPR, HIPAA, etc.)
    • Strategic constraints (we will/won't build X)
  2. Apply diagnostic question #4 to each constraint: "What fails if we exclude this?"
  3. Format: Structured file AI agents can parse (YAML, JSON, or Markdown with frontmatter)
  4. Version control in Git
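A minimal sketch of what a parseable registry could look like, using JSON (one of the formats suggested above). The field names, IDs, and example constraints are illustrative assumptions, not a prescribed schema:

```python
import json

# Hypothetical registry entries; "fails_if_excluded" records the answer to
# diagnostic question #4 ("What fails if we exclude this?").
REGISTRY = json.loads("""
[
  {"id": "TECH-001", "type": "technical",
   "constraint": "Public API rate limit is 100 req/min per tenant",
   "fails_if_excluded": "AI proposes bulk-export features that would be throttled"},
  {"id": "REG-001", "type": "regulatory",
   "constraint": "EU user data must stay in EU regions (GDPR)",
   "fails_if_excluded": "AI suggests routing analytics through US infrastructure"}
]
""")

def validate(entries):
    """Return the ids of entries missing any required field."""
    required = {"id", "type", "constraint", "fails_if_excluded"}
    return [e["id"] for e in entries if not required <= e.keys()]
```

Keeping the file in Git (per step 4) means every constraint change is reviewable and AI agents always read the current version.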
Phase 2: Build Operational Glossary (Week 2)
  1. List top 20-30 terms your team uses (e.g., "user," "customer," "activation," "churn")
  2. Define each unambiguously (avoid "it depends")
  3. Include edge cases and exceptions
  4. Add to CLAUDE.md or project instructions
  5. This becomes your long-term persistent memory (Declarative Memory)
Phase 3: Establish Evidence Standards + Context Boundaries (Week 3)
  1. Define what counts as validation:
    • User feedback: "X users said Y" (with quotes)
    • Analytics: "Metric Z changed by N%" (with dashboard link)
    • Competitive intel: "Competitor A launched B" (with source)
  2. Reject: "I think," "We feel," "It seems like"
  3. Define context boundaries using the 5 diagnostic questions:
    • What specific decision does each piece of context support?
    • Can retrieval replace persistence?
    • Who owns the context boundary?
  4. Create Context Manifest document
  5. Codify in team docs
Phase 4: Implement Memory Architecture + Workflows (Week 4)
  1. Set up two-layer memory:
    • Short-term (conversational): Summarize/truncate older parts of conversation
    • Long-term (persistent): Constraints registry + operational glossary (consider vector database for retrieval)
  2. Implement Research→Plan→Reset→Implement cycle:
    • Research: Allow chaotic context gathering
    • Plan: Synthesize into high-density SPEC.md or PLAN.md
    • Reset: Clear context window
    • Implement: Use only the plan as context
  3. Update AI prompts to reference constraints registry and glossary
  4. Test: Ask AI to cite constraints when making recommendations
  5. Measure: % of AI outputs that cite evidence vs. hallucinate; token usage efficiency
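The short-term layer's summarize/truncate behavior (step 1) can be sketched as follows; the window size and the summarization stub are assumptions, since in practice the summary would come from an LLM call:

```python
def compact_history(messages, keep_last=4, summarize=None):
    """Short-term memory layer: keep recent turns verbatim and fold older
    turns into a single summary entry.

    `summarize` would normally call an LLM; the default stub here is a
    placeholder assumption so the sketch stays self-contained.
    """
    if len(messages) <= keep_last:
        return list(messages)
    summarize = summarize or (lambda msgs: "Summary of %d earlier turns" % len(msgs))
    older, recent = messages[:-keep_last], messages[-keep_last:]
    return [{"role": "system", "content": summarize(older)}] + recent
```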
Success Criteria:
  • ✅ Constraints registry has 20+ entries
  • ✅ Operational glossary has 20-30 terms
  • ✅ Evidence standards documented and shared
  • ✅ Context Manifest created (always-needed vs. episodic)
  • ✅ Context boundary owner assigned
  • ✅ Two-layer memory architecture implemented
  • ✅ Research→Plan→Reset→Implement cycle tested on 1 workflow
  • ✅ AI agents reference these automatically
  • ✅ Token usage down 30%+ (less context stuffing)
  • ✅ Output consistency up (fewer retries)
Related Skills:
  • context-engineering-advisor (Interactive) — Deep dive on diagnosing context stuffing and implementing memory architecture
  • problem-statement.md — Define constraints before framing problems
  • epic-hypothesis.md — Evidence-based hypothesis writing

目标:构建人类和AI都信任的持久"事实层"——从上下文堆砌转向上下文工程。
预阶段:诊断上下文堆砌(如有需要)
如果你的等级为1-2,首先诊断上下文堆砌症状:
  1. 完成5个诊断问题(查看context-engineering-advisor)
  2. 识别你正在持久化但可通过检索获取的内容
  3. 指定上下文边界负责人
  4. 创建上下文清单(始终需要的内容 vs 临时内容)
Phase 1:记录约束(第1周)
  1. 创建约束注册表:
    • 技术约束(API、数据模型、性能限制)
    • 监管约束(GDPR、HIPAA等)
    • 战略约束(我们会/不会构建X)
  2. 对每个约束应用诊断问题#4:"如果排除此项,会导致什么失败?"
  3. 格式:AI Agent可解析的结构化文件(YAML、JSON或带前置元数据的Markdown)
  4. 在Git中进行版本控制
Phase 2:构建操作术语表(第2周)
  1. 列出团队使用的前20-30个术语(例如,"用户"、"客户"、"激活"、"流失")
  2. 为每个术语提供明确的定义(避免"视情况而定")
  3. 包含边缘情况和例外
  4. 添加到CLAUDE.md或项目说明中
  5. 这将成为你的长期持久化记忆(声明式记忆)
Phase 3:建立证据标准 + 上下文边界(第3周)
  1. 定义可作为验证依据的内容:
    • 用户反馈:"X用户说Y"(附带引用)
    • 分析数据:"指标Z变化了N%"(附带仪表板链接)
    • 竞争情报:"竞争对手A发布了B"(附带来源)
  2. 拒绝以下内容:"我认为"、"我们觉得"、"看起来"
  3. 使用5个诊断问题定义上下文边界:
    • 每部分上下文支持什么具体决策?
    • 检索能否替代持久化?
    • 谁负责上下文边界?
  4. 创建上下文清单文档
  5. 正式化到团队文档中
Phase 4:实现记忆架构 + 工作流(第4周)
  1. 设置双层记忆
    • 短期(会话式):总结/截断对话的较早部分
    • 长期(持久化):约束注册表 + 操作术语表(可考虑使用向量数据库进行检索)
  2. 实现Research→Plan→Reset→Implement周期
    • Research:允许无结构的上下文收集
    • Plan:合成为高密度的SPEC.md或PLAN.md
    • Reset:清空上下文窗口
    • Implement:仅使用计划作为上下文
  3. 更新AI提示词,使其参考约束注册表和术语表
  4. 测试:让AI在提出建议时引用约束
  5. 衡量:AI输出中引用证据 vs 生成幻觉内容的比例;token使用效率
成功标准
  • ✅ 约束注册表有20+条目
  • ✅ 操作术语表有20-30个术语
  • ✅ 证据标准已记录并共享
  • ✅ 上下文清单已创建(始终需要的内容 vs 临时内容)
  • ✅ 上下文边界负责人已指定
  • ✅ 双层记忆架构已实现
  • ✅ Research→Plan→Reset→Implement周期已在1个工作流中测试
  • ✅ AI Agent自动参考这些内容
  • ✅ Token使用量减少30%+(减少上下文堆砌)
  • ✅ 输出一致性提升(减少重试)
相关工具
  • context-engineering-advisor(交互式)—— 深入诊断上下文堆砌并实现记忆架构
  • problem-statement.md —— 在框定问题前定义约束
  • epic-hypothesis.md —— 基于证据的假设撰写

If Priority = Agent Orchestration

如果优先项 = Agent Orchestration

Goal: Turn one-off prompts into repeatable, traceable AI workflows.
Phase 1: Map Current Workflows (Week 1)
  1. Pick your most frequent AI use case (e.g., "analyze user feedback")
  2. Document every step you currently take:
    • Copy/paste feedback into ChatGPT
    • Ask for themes
    • Manually categorize
    • Write summary
  3. Identify pain points (manual handoffs, inconsistent results)
Phase 2: Design Orchestrated Workflow (Week 2)
  1. Define workflow loop:
    • Research: AI reads all feedback (structured input)
    • Synthesis: AI identifies themes (with evidence)
    • Critique: AI flags contradictions or weak signals
    • Decision: Human reviews and decides next steps
    • Log: AI records rationale and sources
  2. Each step must be traceable (show sources, reasoning)
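The Research→Synthesis→Critique→Decision→Log loop above can be sketched as a pipeline where every step must return its evidence, making the run traceable end to end. The stage functions here are hypothetical stand-ins for AI (or human) calls, and the data shapes are assumptions:

```python
def run_workflow(feedback_items, steps):
    """Run ordered workflow steps, logging each step's cited sources.

    Each step is a callable standing in for an AI or human stage and must
    return (new_state, sources); the trace is the workflow's audit log.
    """
    trace, state = [], feedback_items
    for name, step in steps:
        state, sources = step(state)
        trace.append({"step": name, "sources": sources})
    return state, trace

# Hypothetical stages for the "analyze user feedback" use case:
def synthesis(items):
    themes = sorted({i["theme"] for i in items})
    return themes, [i["id"] for i in items]      # cite which feedback informed the themes

def critique(themes):
    flagged = [t for t in themes if t == "unknown"]
    return {"themes": themes, "weak_signals": flagged}, []
```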
Phase 3: Build and Test (Week 3)
  1. Implement workflow using:
    • Claude Projects (if simple)
    • Custom GPTs (if moderate)
    • API orchestration (if complex)
  2. Run on 3 past examples; compare to manual process
  3. Measure: Time saved, consistency improved, traceability added
Phase 4: Document and Scale (Week 4)
  1. Version-control prompts (Git)
  2. Document workflow steps for team
  3. Train 2 teammates; observe results
  4. Iterate based on feedback
Success Criteria:
  • ✅ At least 1 workflow runs consistently (same inputs → predictable process)
  • ✅ Each step is traceable (AI cites sources)
  • ✅ Team can replicate workflow without your involvement
Related Skills:
  • pol-probe-advisor.md — Use orchestrated workflows for validation experiments

目标:将一次性提示词转变为可重复、可追溯的AI工作流。
Phase 1:映射当前工作流(第1周)
  1. 选择你最常用的AI用例(例如,"分析用户反馈")
  2. 记录你当前执行的每个步骤:
    • 将反馈复制/粘贴到ChatGPT
    • 要求识别主题
    • 手动分类
    • 撰写总结
  3. 识别痛点(手动交接、结果不一致)
Phase 2:设计编排式工作流(第2周)
  1. 定义工作流循环:
    • Research:AI读取所有反馈(结构化输入)
    • Synthesis:AI识别主题(附带证据)
    • Critique:AI标记矛盾或弱信号
    • Decision:人类评审并决定下一步
    • Log:AI记录理由和来源
  2. 每个步骤都必须可追溯(展示来源、推理过程)
Phase 3:构建并测试(第3周)
  1. 使用以下工具实现工作流:
    • Claude Projects(如果简单)
    • 自定义GPT(如果中等复杂度)
    • API编排(如果复杂)
  2. 在3个过往示例上运行;与手动流程进行比较
  3. 衡量:节省的时间、提升的一致性、增加的可追溯性
Phase 4:文档化并扩展(第4周)
  1. 对提示词进行版本控制(Git)
  2. 为团队记录工作流步骤
  3. 培训2名团队成员;观察结果
  4. 根据反馈迭代
成功标准
  • ✅ 至少1个工作流可一致运行(相同输入→可预测流程)
  • ✅ 每个步骤都可追溯(AI引用来源)
  • ✅ 团队无需你的参与即可复制该工作流
相关工具
  • pol-probe-advisor.md —— 使用编排式工作流进行验证实验

If Priority = Outcome Acceleration

如果优先项 = Outcome Acceleration

Goal: Use AI to compress learning cycles, not just speed up tasks.
Phase 1: Identify Bottleneck (Week 1)
  1. Map your current learning cycle (e.g., hypothesis → experiment → analysis → decision)
  2. Time each step
  3. Identify slowest step (usually: validation lag, approval delays, or meeting overhead)
Phase 2: Design AI Intervention (Week 2)
  1. Ask: "What if this step happened overnight?"
    • Feasibility checks: AI spike in 2 hours vs. 2 days
    • User research synthesis: AI analysis in 1 hour vs. 1 week
    • Approval pre-checks: AI validates against constraints before meeting
  2. Design minimal AI workflow to eliminate bottleneck
Phase 3: Run Pilot (Week 3)
  1. Test AI intervention on 1 real initiative
  2. Measure cycle time: before vs. after
  3. Validate quality: Did AI maintain rigor, or cut corners?
Phase 4: Scale (Week 4)
  1. If successful (cycle time down 50%+, quality maintained), apply to 3 more initiatives
  2. Document workflow
  3. Train team
Success Criteria:
  • ✅ Learning cycle compressed by 50%+ on at least 1 initiative
  • ✅ Quality maintained (no shortcuts that compromise rigor)
  • ✅ Team adopts the accelerated workflow
Related Skills:
  • pol-probe.md — Use AI to run PoL probes faster
  • discovery-process.md — Compress discovery cycles with AI

目标:使用AI压缩学习周期,而非仅仅加快任务速度。
Phase 1:识别瓶颈(第1周)
  1. 映射你当前的学习周期(例如,假设 → 实验 → 分析 → 决策)
  2. 记录每个步骤的耗时
  3. 识别最慢的步骤(通常是:验证延迟、审批延误,或会议开销)
Phase 2:设计AI干预方案(第2周)
  1. 提问:"如果这个步骤在夜间完成会怎样?"
    • 可行性检查:AI在2小时内完成调研,而非2天
    • 用户研究合成:AI在1小时内完成分析,而非1周
    • 审批预检查:AI在会议前根据约束进行验证
  2. 设计最小可行的AI工作流以消除瓶颈
Phase 3:试点(第3周)
  1. 在1个实际项目上测试AI干预方案
  2. 衡量周期时间:试点前 vs 试点后
  3. 验证质量:AI是否保持严谨性,还是走了捷径?
Phase 4:扩展(第4周)
  1. 如果成功(周期时间减少50%+,质量保持不变),将其应用到另外3个项目
  2. 记录工作流
  3. 培训团队
成功标准
  • ✅ 至少1个项目的学习周期压缩50%+
  • ✅ 质量保持不变(未因走捷径而影响严谨性)
  • ✅ 团队采用加速后的工作流
相关工具
  • pol-probe.md —— 使用AI更快地运行PoL探针
  • discovery-process.md —— 用AI压缩调研周期

If Priority = Team-AI Facilitation

如果优先项 = Team-AI Facilitation

Goal: Redesign team systems so AI operates as co-intelligence, not as an accountability shield.
Phase 1: Establish Review Norms (Week 1)
  1. Codify rule: "AI outputs are drafts, not finals"
  2. Define review protocol:
    • Who reviews AI outputs? (peer, lead PM, cross-functional partner)
    • When? (before sharing externally, before decisions)
    • What to check? (accuracy, completeness, evidence citation)
  3. Share with team, get buy-in
Phase 2: Set Evidence Standards (Week 2)
  1. AI must cite sources (no hallucinations)
  2. Reject outputs that say "I think" or "it seems"
  3. Require: "According to [source], [fact]"
  4. Add to team operating docs
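The evidence standard above lends itself to a lint-style check that rejects hedged claims and requires an "According to [source], [fact]" citation. The hedge blocklist and the citation pattern are assumptions to be extended per team norms, and this is an approximation, not a guarantee of rigor:

```python
import re

HEDGES = ("i think", "we feel", "it seems")  # assumed blocklist; extend per team norms

def meets_evidence_standard(text):
    """Return True only if the claim avoids hedging phrases and contains
    at least one 'According to <source>, ...' style citation."""
    lowered = text.lower()
    if any(h in lowered for h in HEDGES):
        return False
    return re.search(r"according to .+?,", lowered) is not None
```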
Phase 3: Define Decision Authority (Week 3)
  1. Clarify: AI recommends, humans decide
  2. Document who has authority to override AI recommendations (PM, team lead, cross-functional consensus)
  3. Create escalation path (what if AI and human disagree?)
Phase 4: Build Psychological Safety (Week 4)
  1. Team exercise: Share an AI mistake you caught (normalize catching errors)
  2. Reward critical thinking ("Good catch on that AI hallucination!")
  3. Avoid: "Why didn't you just use AI?" (shaming)
Success Criteria:
  • ✅ Review norms documented and followed by team
  • ✅ Evidence standards codified
  • ✅ Decision authority clear
  • ✅ Team comfortable challenging AI outputs
Related Skills:
  • problem-statement.md — Evidence-based problem framing
  • epic-hypothesis.md — Testable, evidence-backed hypotheses

目标:重新设计团队系统,让AI作为协同智能体运作,而非问责的挡箭牌。
Phase 1:建立评审规范(第1周)
  1. 正式化规则:"AI输出是草稿,而非最终内容"
  2. 定义评审流程:
    • 谁评审AI输出?(同行、PM负责人、跨职能伙伴)
    • 何时评审?(分享给外部前、决策前)
    • 检查什么?(准确性、完整性、证据引用)
  3. 与团队共享,获得认可
Phase 2:设置证据标准(第2周)
  1. AI必须引用来源(不能生成幻觉内容)
  2. 拒绝包含"我认为"或"看起来"的输出
  3. 要求格式:"根据[来源],[事实]"
  4. 添加到团队操作文档中
Phase 3:定义决策权限(第3周)
  1. 明确:AI提供建议,人类做出决策
  2. 记录谁有权推翻AI建议(PM、团队负责人、跨职能共识)
  3. 创建升级路径(如果AI和人类意见不一致怎么办?)
Phase 4:构建心理安全感(第4周)
  1. 团队练习:分享你发现的一个AI错误(将发现错误常态化)
  2. 奖励批判性思维("很好地发现了AI的幻觉内容!")
  3. 避免:"你为什么不直接用AI?"(指责)
成功标准
  • ✅ 评审规范已记录并被团队遵循
  • ✅ 证据标准已正式化
  • ✅ 决策权限清晰
  • ✅ 团队敢于挑战AI输出
相关工具
  • problem-statement.md —— 基于证据的问题框定
  • epic-hypothesis.md —— 可测试、基于证据的假设

If Priority = Strategic Differentiation

如果优先项 = Strategic Differentiation

Goal: Create defensible competitive advantages, not just efficiency gains.
Phase 1: Identify Moat Opportunities (Week 1)
  1. Ask: "What could we do with AI that competitors can't replicate by adding headcount?"
    • New customer capabilities (e.g., "AI advisor suggests personalized roadmap")
    • Workflow rewiring (e.g., "Validate product ideas in 2 days vs. 3 weeks")
    • Economics shift (e.g., "Deliver enterprise features at SMB prices via AI automation")
  2. List 5 candidates
  3. Prioritize by defensibility (how hard to copy?)
Phase 2: Design AI-Enabled Capability (Week 2)
  1. Pick top candidate
  2. Design end-to-end workflow:
    • What does customer experience?
    • What does AI do behind the scenes?
    • What human judgment is required?
  3. Sketch MVP (minimum viable moat)
Phase 3: Build and Test (Weeks 3-4)
  1. Build prototype (can be PoL probe, not production)
  2. Test with 5 customers
  3. Measure: Does this create value competitors can't match?
Phase 4: Validate Moat (Week 5)
  1. Ask: "How would a competitor replicate this?"
    • If answer is "hire more people," it's not a moat
    • If answer is "redesign their entire org," you have a moat
  2. Document competitive analysis
  3. Decide: Build full version, pivot, or kill
Success Criteria:
  • ✅ Identified at least 1 AI-enabled capability competitors can't easily copy
  • ✅ Validated with customers (they see the value)
  • ✅ Confirmed defensibility (competitor analysis)
Related Skills:
  • positioning-statement.md — Articulate your AI-driven differentiation
  • jobs-to-be-done.md — Understand what customers hire your AI capabilities to do

目标:创造可防御的竞争优势,而非仅仅提升效率。
Phase 1:识别壁垒机会(第1周)
  1. 提问:"我们用AI能做什么,而竞争对手无法通过增加人力复制?"
    • 新的客户能力(例如,"AI顾问提供个性化路线图")
    • 工作流重构(例如,"2天内验证产品想法,而非3周")
    • 经济性转变(例如,"通过AI自动化以SMB价格交付企业级功能")
  2. 列出5个候选方向
  3. 根据防御性(复制难度)排序
Phase 2:设计AI驱动的能力(第2周)
  1. 选择排名第一的候选方向
  2. 设计端到端工作流:
    • 客户体验是什么?
    • AI在后台做什么?
    • 需要什么人类判断?
  3. 绘制MVP(最小可行壁垒)
Phase 3:构建并测试(第3-4周)
  1. 构建原型(可以是PoL探针,而非生产版本)
  2. 与5个客户测试
  3. 衡量:这是否创造了竞争对手无法匹配的价值?
Phase 4:验证壁垒(第5周)
  1. 提问:"竞争对手如何复制这项能力?"
    • 如果答案是"增加人力",那这不是壁垒
    • 如果答案是"全面重构他们的组织",那你拥有了壁垒
  2. 记录竞争分析
  3. 决定:构建完整版本、转向其他方向,还是放弃
成功标准
  • ✅ 识别出至少1个竞争对手无法轻易复制的AI驱动能力
  • ✅ 已通过客户验证(他们认可价值)
  • ✅ 已确认防御性(竞争分析)
相关工具
  • positioning-statement.md —— 清晰阐述你的AI驱动差异化
  • jobs-to-be-done.md —— 理解客户雇佣你的AI能力来完成什么任务

Step 9: Track Progress (Optional)

Step 9:跟踪进度(可选)

Agent offers:
Would you like me to create a progress tracker for your AI-shaped transformation?
Tracker includes:
  • Current maturity levels (baseline)
  • Target maturity levels (goal state)
  • Action plan milestones (from Step 8)
  • Review cadence (weekly, monthly)
Options:
  1. Yes, create tracker — [Agent generates Markdown checklist]
  2. No, I'll track separately — [Agent provides summary]
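A sketch of the kind of Markdown checklist the agent would generate in option 1; the exact layout, headings, and field names are assumptions:

```python
def render_tracker(baseline, targets, milestones):
    """Emit a Markdown progress tracker: baseline vs. target maturity
    levels plus action-plan milestones as task-list items."""
    lines = ["# AI-Shaped Transformation Tracker", "",
             "| Competency | Baseline | Target |", "|---|---|---|"]
    for name in baseline:
        lines.append("| %s | %d | %d |" % (name, baseline[name], targets[name]))
    lines += ["", "## Milestones"]
    lines += ["- [ ] %s" % m for m in milestones]
    return "\n".join(lines)
```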

Agent提供
你是否需要我为你的AI重塑型转型创建进度跟踪器?
跟踪器包含
  • 当前成熟度等级(基线)
  • 目标成熟度等级(目标状态)
  • 行动计划里程碑(来自Step 8)
  • 评审节奏(每周、每月)
选项
  1. 是,创建跟踪器 — [Agent生成Markdown清单]
  2. 否,我将自行跟踪 — [Agent提供总结]

Examples

示例

Example 1: Early-Stage Startup (AI-First → Emerging)

示例1:早期初创公司(AI优先型 → 萌芽阶段)

Context:
  • Team: 2 PMs, 5 engineers
  • AI Usage: ChatGPT for writing PRDs, occasional Copilot usage
  • Goal: Move faster than larger competitors
Assessment Results:
  • Context Design: Level 1 (no structured context)
  • Agent Orchestration: Level 1 (one-off prompts)
  • Outcome Acceleration: Level 1 (docs faster, but learning cycles unchanged)
  • Team-AI Facilitation: Level 2 (team uses AI, but no norms)
  • Strategic Differentiation: Level 1 (efficiency only)
Recommendation: Focus on Context Design first.
Action Plan (Week 1-4):
  • Week 1: Create constraints registry (10 technical constraints)
  • Week 2: Build operational glossary (15 terms)
  • Week 3: Establish evidence standards
  • Week 4: Add context to CLAUDE.md files
Outcome: After 4 weeks, Context Design → Level 3. Unlocks Agent Orchestration next quarter.

上下文
  • 团队:2名PM,5名工程师
  • AI使用:用ChatGPT撰写PRD,偶尔使用Copilot
  • 目标:比更大的竞争对手更快行动
评估结果
  • Context Design:Level 1(无结构化上下文)
  • Agent Orchestration:Level 1(一次性提示词)
  • Outcome Acceleration:Level 1(文档撰写更快,但学习周期未改变)
  • Team-AI Facilitation:Level 2(团队使用AI,但无规范)
  • Strategic Differentiation:Level 1(仅效率提升)
推荐:优先聚焦Context Design
行动计划(第1-4周)
  • 第1周:创建约束注册表(10项技术约束)
  • 第2周:构建操作术语表(15个术语)
  • 第3周:建立证据标准
  • 第4周:将上下文添加到CLAUDE.md文件
成果:4周后,Context Design → Level 3。下个季度可解锁Agent Orchestration能力。

Example 2: Growth-Stage Company (Transitioning → AI-Shaped)

示例2:增长阶段公司(转型阶段 → AI重塑型)

Context:
  • Team: 10 PMs, 50 engineers, 5 designers
  • AI Usage: Claude Projects for research, custom workflows emerging
  • Goal: Build defensible AI advantage before IPO
Assessment Results:
  • Context Design: Level 3 (structured context, not comprehensive)
  • Agent Orchestration: Level 3 (some workflows, manual handoffs)
  • Outcome Acceleration: Level 2 (modest gains, not systematic)
  • Team-AI Facilitation: Level 3 (norms emerging, not codified)
  • Strategic Differentiation: Level 2 (new capabilities, but copyable)
Recommendation: Focus on Outcome Acceleration (foundation is solid; now compress learning cycles).
Action Plan (Week 1-4):
  • Week 1: Identify bottleneck (discovery cycles take 3 weeks)
  • Week 2: Design AI workflow to run overnight feasibility checks
  • Week 3: Pilot on 1 initiative (cut cycle to 5 days)
  • Week 4: Scale to 3 initiatives
Outcome: Learning cycles roughly 4x faster (3 weeks → 5 days) → strategic separation from competitors → Level 4 Outcome Acceleration + Level 3 Strategic Differentiation.

上下文
  • 团队:10名PM,50名工程师,5名设计师
  • AI使用:用Claude Projects进行研究,自定义工作流正在形成
  • 目标:IPO前构建可防御的AI优势
评估结果
  • Context Design:Level 3(结构化上下文,但不全面)
  • Agent Orchestration:Level 3(有一些工作流,但需要手动交接)
  • Outcome Acceleration:Level 2(略有提升,但未系统化)
  • Team-AI Facilitation:Level 3(规范正在形成,但未正式化)
  • Strategic Differentiation:Level 2(新能力,但可复制)
推荐:优先聚焦Outcome Acceleration(基础已稳固;现在需要压缩学习周期)。
行动计划(第1-4周)
  • 第1周:识别瓶颈(调研周期需要3周)
  • 第2周:设计AI工作流以在夜间完成可行性检查
  • 第3周:在1个项目上试点(将周期缩短至5天)
  • 第4周:扩展到3个项目
成果:学习周期加快约4倍(3周 → 5天)→ 与竞争对手形成战略差异 → Outcome Acceleration达到Level 4,Strategic Differentiation达到Level 3。

Example 3: Enterprise Company (AI-First, Scattered Usage)

示例3:企业公司(AI优先型,使用分散)

Context:
  • Team: 50 PMs, 300 engineers
  • AI Usage: Individual PMs use various tools, no consistency
  • Goal: Standardize AI usage, create cross-functional workflows
Assessment Results:
  • Context Design: Level 2 (docs exist, not structured for AI)
  • Agent Orchestration: Level 1 (no shared workflows)
  • Outcome Acceleration: Level 1 (efficiency only)
  • Team-AI Facilitation: Level 1 (private usage, no norms)
  • Strategic Differentiation: Level 1 (no advantage)
Recommendation: Focus on Team-AI Facilitation first (distributed team needs shared norms before building infrastructure).
Action Plan (Week 1-4):
  • Week 1: Establish review norms (AI outputs are drafts)
  • Week 2: Set evidence standards (AI must cite sources)
  • Week 3: Define decision authority (AI recommends, leads decide)
  • Week 4: Pilot with 3 teams, gather feedback
Outcome: Team-AI Facilitation → Level 3. Creates foundation for Context Design and Agent Orchestration next.

上下文
  • 团队:50名PM,300名工程师
  • AI使用:PM个人使用各种工具,无一致性
  • 目标:标准化AI使用,创建跨职能工作流
评估结果
  • Context Design:Level 2(文档存在,但未针对AI使用结构化)
  • Agent Orchestration:Level 1(无共享工作流)
  • Outcome Acceleration:Level 1(仅效率提升)
  • Team-AI Facilitation:Level 1(个人使用,无规范)
  • Strategic Differentiation:Level 1(无优势)
推荐:优先聚焦Team-AI Facilitation(分布式团队在构建基础设施前需要共享规范)。
行动计划(第1-4周)
  • 第1周:建立评审规范(AI输出是草稿)
  • 第2周:设置证据标准(AI必须引用来源)
  • 第3周:定义决策权限(AI提供建议,负责人决策)
  • 第4周:在3个团队试点,收集反馈
成果:Team-AI Facilitation → Level 3。为后续构建Context Design和Agent Orchestration奠定基础。

Common Pitfalls

常见陷阱

1. Mistaking Efficiency for Differentiation

1. 将效率提升误认为差异化

Failure Mode: "We use AI to write PRDs 2x faster—we're AI-shaped!"
Consequence: Competitors copy within 3 months; no lasting advantage.
Fix: Ask: "If a competitor threw 2x more people at this, could they match us?" If yes, it's efficiency (table stakes), not differentiation.

失败模式:"我们用AI将PRD撰写速度提升2倍——我们是AI重塑型!"
后果:竞争对手在3个月内复制;无持久优势。
解决方法:提问:"如果竞争对手投入2倍人力,能否赶上我们?"如果是,那这只是效率提升(行业标配),而非差异化。

2. Skipping Context Design

2. 跳过Context Design

Failure Mode: Building Agent Orchestration workflows without durable context.
Consequence: AI workflows are fragile (context changes break everything).
Fix: Context Design is foundational. Don't skip it. Build constraints registry, glossary, evidence standards first.

失败模式:在没有持久化上下文的情况下构建Agent Orchestration工作流。
后果:AI工作流非常脆弱(上下文变化会导致所有内容失效)。
解决方法:Context Design是基础。不要跳过它。先构建约束注册表、术语表、证据标准。

3. Individual Usage, Not Team Transformation

3. 个人使用,而非团队转型

Failure Mode: "I'm AI-shaped, but my team isn't."
Consequence: Can't scale; workflows die when you're on vacation.
Fix: Prioritize Team-AI Facilitation. Shared norms > individual productivity.

失败模式:"我实现了AI重塑,但我的团队没有。"
后果:无法扩展;你休假时工作流就会停滞。
解决方法:优先聚焦Team-AI Facilitation。共享规范 > 个人生产力。

4. Focusing on Tools, Not Workflows

4. 聚焦工具,而非工作流

Failure Mode: "Should we use Claude or ChatGPT?"
Consequence: Tool debates distract from organizational redesign.
Fix: Tools don't matter. Workflows matter. Focus on redesigning how work gets done, not which AI you use.

失败模式:"我们应该用Claude还是ChatGPT?"
后果:工具争论分散了对组织重构的注意力。
解决方法:工具不重要,工作流才重要。聚焦于重新设计工作完成方式,而非使用哪款AI。

5. Speed Over Learning

5. 速度优先于学习

Failure Mode: "AI helps us ship faster!"
Consequence: Ship the wrong thing faster (if you're not compressing learning cycles).
Fix: Outcome Acceleration is about learning faster, not building faster. Validate hypotheses in days, not weeks.

失败模式:"AI帮助我们更快地发布产品!"
后果:更快地发布了错误的产品(如果没有压缩学习周期)。
解决方法:Outcome Acceleration是关于更快地学习,而非更快地构建。在数天内验证假设,而非数周。

References

参考资料

Related Skills

相关工具

  • context-engineering-advisor (Interactive) — Deep dive on Context Design competency: Diagnose context stuffing, implement memory architecture, use Research→Plan→Reset→Implement cycle
  • problem-statement (Component) — Evidence-based problem framing (Context Design)
  • epic-hypothesis (Component) — Testable hypotheses with evidence standards
  • pol-probe-advisor (Interactive) — Use AI to compress validation cycles (Outcome Acceleration)
  • discovery-process (Workflow) — Apply AI-shaped principles to discovery
  • positioning-statement (Component) — Articulate your AI-driven differentiation (Strategic Differentiation)
  • context-engineering-advisor(交互式)—— Context Design能力深度指南:诊断上下文堆砌,实现记忆架构,使用Research→Plan→Reset→Implement周期
  • problem-statement(组件)—— 基于证据的问题框定(Context Design)
  • epic-hypothesis(组件)—— 可测试、基于证据的假设
  • pol-probe-advisor(交互式)—— 用AI压缩验证周期(Outcome Acceleration)
  • discovery-process(工作流)—— 将AI重塑型原则应用于调研
  • positioning-statement(组件)—— 清晰阐述你的AI驱动差异化(Strategic Differentiation)

External Frameworks

外部框架

Further Reading

延伸阅读

  • Ethan Mollick — Co-Intelligence (on AI as co-intelligence, not replacement)
  • Shreyas Doshi — Twitter threads on PM judgment augmentation with AI
  • Lenny Rachitsky — Newsletter interviews with AI-forward PMs
  • Ethan Mollick — Co-Intelligence(关于AI作为协同智能体,而非替代者)
  • Shreyas Doshi — 关于用AI增强PM判断力的Twitter线程
  • Lenny Rachitsky — 与AI驱动型PM的通讯访谈