
Deep Research (8-Step In-Depth Research Process)

Transform vague topics proposed by users into high-quality, deliverable research reports through a systematic method.

Core Principles

  • Conclusions come from mechanism comparison, not "it feels like"
  • Nail down the facts first, then derive
  • Authoritative sources take priority: L1 > L2 > L3 > L4
  • Intermediate results must be saved for easy traceback and reuse

Working Directory and Intermediate Product Management

Working Directory Structure
At the start of research, you must create a working directory named after the topic under `~/Downloads/research/`:

~/Downloads/research/<topic>/
├── 00_Problem_Decomposition.md   # Output of Step 0-1
├── 01_Source_Materials.md        # Output of Step 2: links to all consulted materials
├── 02_Fact_Cards.md              # Output of Step 3: extracted facts
├── 03_Comparison_Framework.md    # Output of Step 4: selected framework and content
├── 04_Derivation_Process.md      # Output of Step 6: derivation from facts to conclusions
├── 05_Verification_Records.md    # Output of Step 7: use-case verification results
├── FINAL_Research_Report.md      # Output of Step 8: final deliverable
└── raw/                          # Archive of original materials (optional)
    ├── source_1.md
    └── source_2.md
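As a sketch, the directory skeleton can be bootstrapped programmatically; `init_research_dir` is an illustrative helper, not part of the skill:

```python
from pathlib import Path

def init_research_dir(topic: str, base: str = "~/Downloads/research") -> Path:
    """Create the topic working directory, the raw/ archive, and empty
    placeholders for the standard intermediate files."""
    root = Path(base).expanduser() / topic
    (root / "raw").mkdir(parents=True, exist_ok=True)
    for name in [
        "00_Problem_Decomposition.md",
        "01_Source_Materials.md",
        "02_Fact_Cards.md",
        "03_Comparison_Framework.md",
        "04_Derivation_Process.md",
        "05_Verification_Records.md",
    ]:
        (root / name).touch()
    return root
```

`FINAL_Research_Report.md` is intentionally not pre-created here, since it is produced only at Step 8.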

Saving Timing and Content
| Step | Save Immediately After Completion | File Name |
|---|---|---|
| Step 0-1 | Problem type judgment + sub-question list | 00_Problem_Decomposition.md |
| Step 2 | Link, level, and summary of each consulted material | 01_Source_Materials.md |
| Step 3 | Each fact card (statement + source + confidence level) | 02_Fact_Cards.md |
| Step 4 | Selected comparison framework + initial content | 03_Comparison_Framework.md |
| Step 6 | Derivation process for each dimension | 04_Derivation_Process.md |
| Step 7 | Verification scenario + result + review checklist | 05_Verification_Records.md |
| Step 8 | Complete research report | FINAL_Research_Report.md |

Saving Principles
  1. Save immediately: Write to the corresponding file as soon as each step completes; do not wait until the end
  2. Update incrementally: The same file can be updated multiple times; append or replace content as needed
  3. Preserve the process: Keep intermediate files even if their content is later integrated into the final report
  4. Enable recovery: If research is interrupted, progress can be restored from the intermediate files
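The "save immediately, update incrementally" principles can be sketched as a small append-only helper (`append_section` is an illustrative name, not part of the skill):

```python
from datetime import date
from pathlib import Path

def append_section(file: Path, heading: str, body: str) -> None:
    """Append a dated section to an intermediate file instead of
    rewriting it, so earlier results survive for traceback."""
    file.parent.mkdir(parents=True, exist_ok=True)
    with file.open("a", encoding="utf-8") as f:
        f.write(f"\n## {heading} ({date.today():%Y-%m-%d})\n\n{body}\n")
```

Appending rather than rewriting also means an interrupted run can resume by reading back the last section written.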

Trigger Conditions

When users want to:
  • Gain an in-depth understanding of a concept/technology/phenomenon
  • Compare similarities and differences between two or more things
  • Collect information and evidence for decision-making
  • Write research reports or analysis documents
Keywords:
  • "Deep Research", "In-Depth Study", "In-Depth Analysis"
  • "Help me research", "Look into this", "Study this"
  • "Comparative Analysis", "Concept Comparison", "Technology Comparison"
  • "Write a research report", "Produce a research report"
Differentiation from other Skills:
  • Need visual diagrams → use research-to-diagram
  • Need writing output (articles/tutorials) → use wsy-writer
  • Need material organization → use material-to-markdown
  • Need a pure research report → use this Skill

Workflow (8-Step Process)

Step 0: Problem Type Judgment

First, judge which type the research question belongs to and select the corresponding strategy:

| Problem Type | Core Task | Focus Dimensions |
|---|---|---|
| Concept comparison | Establish a comparison framework | Mechanism differences, applicability boundaries |
| Decision support | Trade-off analysis | Cost, risk, benefits |
| Trend analysis | Trace the evolution | History, driving factors, predictions |
| Problem diagnosis | Root cause analysis | Symptoms, causes, evidence chain |
| Knowledge organization | Systematic organization | Definitions, classification, relationships |

Step 0.5: Timeliness Sensitivity Judgment (BLOCKING)

Before starting research, you must judge the timeliness sensitivity of the question; this determines the material screening strategy.

Timeliness Sensitivity Classification

| Sensitivity Level | Typical Fields | Material Time Window | Notes |
|---|---|---|---|
| 🔴 Extremely high | AI/large models, blockchain, cryptocurrency | 3-6 months | Technology iterates extremely fast; information from a few months ago may be completely outdated |
| 🟠 High | Cloud services, frontend frameworks, APIs | 6-12 months | Version updates are frequent; confirm the current version |
| 🟡 Medium | Programming languages, databases, operating systems | 1-2 years | Relatively stable but still evolving |
| 🟢 Low | Algorithm principles, design patterns, theoretical concepts | No restriction | Fundamental principles change slowly |
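As a sketch, the table can be mapped to search parameters; the level keys and the `search_constraints` helper are illustrative, and `time_range`/`start_date` follow the parameter names used later in this document:

```python
from datetime import date, timedelta

# Time windows taken from the table above; keys are illustrative labels.
WINDOWS = {
    "extremely_high": ("month", 3),   # 🔴 last month, start_date within 3 months
    "high":           ("year", 12),   # 🟠 last year
    "medium":         (None, 24),     # 🟡 default search, ~2-year window
    "low":            (None, None),   # 🟢 no restriction
}

def search_constraints(level: str) -> dict:
    """Translate a sensitivity level into time-bounded search parameters."""
    time_range, months_back = WINDOWS[level]
    params = {}
    if time_range:
        params["time_range"] = time_range
    if months_back:
        start = date.today() - timedelta(days=30 * months_back)
        params["start_date"] = start.isoformat()
    return params
```

A 🟢-low topic yields an empty dict, i.e. an unconstrained default search.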

Special Rules for 🔴 Extremely High Sensitivity Fields

When the research topic involves the following fields, you must apply these special rules:
Trigger word recognition:
  • AI-related: large models, LLM, GPT, Claude, Gemini, AI Agent, RAG, vector databases, prompt engineering
  • Cloud native: new Kubernetes versions, Serverless, container runtimes
  • Cutting-edge technologies: Web3, quantum computing, AR/VR
Mandatory rules:
  1. Search with time constraints:
    • Use `time_range: "month"` or `time_range: "week"` to limit search results
    • Prefer setting `start_date: "YYYY-MM-DD"` to within the last 3 months
  2. Elevated priority for official sources:
    • Consult official documentation, official blogs, and official changelogs first
    • GitHub Release Notes, official X/Twitter announcements
    • Academic papers (preprint platforms such as arXiv)
  3. Mandatory version labeling:
    • Any technical description must state the current version number
    • Example: "Claude 3.5 Sonnet (claude-3-5-sonnet-20241022) supports..."
    • Vague expressions like "the latest version supports..." are prohibited
  4. Handling outdated information:
    • Technical blogs/tutorials older than 6 months → use only as historical reference, never as factual basis
    • Version inconsistency found → verify the current version before use
    • Obviously stale descriptions (e.g., "will support in the future" when it is already supported) → discard
  5. Cross-verification:
    • High-sensitivity information must be confirmed by at least 2 independent sources
    • Priority: official documentation > official blogs > authoritative technical media > personal blogs
  6. Direct verification of official download/release pages (BLOCKING):
    • Access official download pages directly to verify platform support (do not rely on search engine cache)
    • Use `mcp__tavily-mcp__tavily-extract` or `WebFetch` to extract page content directly
    • Examples: `https://product.com/download` or `https://github.com/xxx/releases`
    • "Coming soon" descriptions in search results may be outdated; verify in real time
    • Platform support changes frequently and cannot be inferred from old materials
  7. Search for product-specific protocol/feature names (BLOCKING):
    • Besides the product name, also search for the names of the protocols/standards the product supports
    • Common protocols/standards to search for:
      • AI tools: MCP, ACP (Agent Client Protocol), LSP, DAP
      • Cloud services: OAuth, OIDC, SAML
      • Data exchange: GraphQL, gRPC, REST
    • Search format: `"<Product Name> <Protocol Name> support"` or `"<Product Name> <Protocol Name> integration"`
    • These protocol integrations are often differentiating features, easily omitted from the main docs but covered on dedicated pages
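Rule 7's query expansion can be sketched as a tiny helper; the protocol list and the query format mirror the examples above, and the function name is illustrative:

```python
# Default protocols from the "AI tools" example above; pass your own list
# for cloud-service or data-exchange topics.
PROTOCOLS = ["MCP", "ACP", "LSP", "DAP"]

def protocol_queries(product: str, protocols=PROTOCOLS) -> list[str]:
    """Expand a product name into protocol-support search queries."""
    queries = []
    for p in protocols:
        queries.append(f'"{product}" "{p}" support')
        queries.append(f'"{product}" "{p}" integration')
    return queries
```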

Timeliness Judgment Output Template

```markdown
## Timeliness Sensitivity Assessment

- Research topic: [topic]
- Sensitivity level: 🔴 Extremely high / 🟠 High / 🟡 Medium / 🟢 Low
- Judgment basis: [why this level]
- Material time window: [X months/years]
- Official sources to consult first:
  1. [official source 1]
  2. [official source 2]
- Key version information to verify:
  - [product/technology 1]: current version ____
  - [product/technology 2]: current version ____
```

**📁 Save Action**: Append the timeliness assessment to the end of `00_Problem_Decomposition.md`

---

Step 1: Problem Decomposition and Boundary Definition

Break the vague topic down into 2-4 researchable sub-questions:
  • Sub-question A: "What is X and how does it work?" (definition and mechanism)
  • Sub-question B: "In which dimensions do X and Y relate/differ?" (comparative analysis)
  • Sub-question C: "In what scenarios is X applicable/not applicable?" (boundary conditions)
  • Sub-question D: "What are X's development trends/best practices?" (extended analysis)
⚠️ Research Object Definition (BLOCKING - must be explicit):
When decomposing the problem, you must clearly define the boundaries of the research object:

| Dimension | Boundary to Define | Example |
|---|---|---|
| Population | Which group does the research target? | College students vs middle school students vs vocational school students vs all students |
| Region | Which region does the research target? | Chinese universities vs American universities vs global |
| Time | Which period does the research target? | After 2020 vs full history |
| Level | Which level does the research target? | Undergraduate vs postgraduate vs vocational education |

Typical mistake: the user asks about "college students' classroom problems", but the research includes policies targeting "middle school students". A mismatched target population can invalidate the entire research.
📁 Save Action:
  1. Create the working directory `~/Downloads/research/<topic>/`
  2. Write `00_Problem_Decomposition.md`, including:
    • The original question
    • The judged problem type and the reasoning
    • The research object boundary definition (population, region, time, level)
    • The list of decomposed sub-questions
  3. Write to TodoWrite to track progress

Step 2: Material Stratification and Authority Locking

Classify materials by authority and consume first-hand materials first:

| Level | Material Type | Usage | Credibility |
|---|---|---|---|
| L1 | Official documentation, papers, specifications, RFCs | Definitions, mechanisms, verifiable facts | ✅ High |
| L2 | Official blogs, technical talks, white papers | Design intent, architectural ideas | ✅ High |
| L3 | Authoritative media, expert interpretations, tutorials | Supplementary intuition, cases | ⚠️ Medium |
| L4 | Community discussions, personal blogs, forums | Discover blind spots, verify understanding | ❓ Low |

L4 community sources in detail (mandatory for product comparison research):

| Source Type | How to Access | Value |
|---|---|---|
| GitHub Issues | Visit `github.com/<org>/<repo>/issues` | Real user pain points, feature requests, bug feedback |
| GitHub Discussions | Visit `github.com/<org>/<repo>/discussions` | Feature discussions, usage experience, community consensus |
| Reddit | Search `site:reddit.com "<Product Name>"` | Real user reviews, comparison discussions |
| Hacker News | Search `site:news.ycombinator.com "<Product Name>"` | In-depth technical community discussions |
| Discord/Telegram | Official product communities | Active user feedback (mark as [source restricted]) |

Principles:
  • Conclusions must be traceable to L1/L2
  • L3/L4 are for assistance and verification only
  • L4 community discussions reveal "what users really care about"
  • Record all information sources
⏰ Timeliness screening rules (applied per the Step 0.5 sensitivity level):

| Sensitivity Level | Screening Rule | Suggested Search Parameters |
|---|---|---|
| 🔴 Extremely high | Only accept materials from the last 6 months as factual basis | `time_range: "month"`, or `start_date` set to the last 3 months |
| 🟠 High | Prefer materials from the last year; mark anything older | `time_range: "year"` |
| 🟡 Medium | Use materials from the last 2 years normally; verify validity beyond that | Default search |
| 🟢 Low | No time restriction | Default search |

Search strategy for high-sensitivity fields:
1. First round: targeted search of official sources
   - Use include_domains to restrict to official domains
   - Example: include_domains: ["anthropic.com", "openai.com", "docs.xxx.com"]

2. Second round: direct verification of official download/release pages (BLOCKING)
   - Access official download pages directly; do not rely on search cache
   - Use tavily-extract or WebFetch to extract page content
   - Verify: platform support, current version number, release date
   - This step is mandatory; search engines may cache outdated "Coming soon" information

3. Third round: search for product-specific protocols/features (BLOCKING)
   - Search for protocol names the product supports (MCP, ACP, LSP, etc.)
   - Format: `"<Product Name> <Protocol Name>" site:<official domain>`
   - These integrations are often absent from the homepage but covered in dedicated docs

4. Fourth round: time-limited broad search
   - time_range: "month", or start_date set to a recent period
   - Exclude obviously outdated sources

5. Fifth round: version verification
   - Cross-verify version numbers found in search results
   - Check the official changelog immediately if inconsistencies are found

6. Sixth round: community voice mining (BLOCKING - mandatory for product comparison research)
   - Visit the product's GitHub Issues page and review popular/pinned issues
   - Search issues for key feature words (e.g., "MCP", "plugin", "integration")
   - Review discussion trends over the last 3-6 months
   - Identify the features and differentiators users care about most
   - Value of this step: official docs rarely emphasize "features others lack", but community discussions do
Community voice mining in practice:
GitHub Issues mining steps:
1. Visit github.com/<org>/<repo>/issues
2. Sort by "Most commented" to view popular discussions
3. Search keywords:
   - Feature-related: feature request, enhancement, MCP, plugin, API
   - Comparison-related: vs, compared to, alternative, migrate from
4. Check issue labels: enhancement, feature, discussion
5. Record frequently raised feature demands and user pain points

Value conversion:
- Frequently discussed features → potential differentiating highlights
- User complaints/requests → potential product shortcomings
- Comparison discussions → user-perspective difference analysis, directly
Material timeliness labeling template (append to each source record):
```markdown
- **Release Date**: [YYYY-MM-DD]
- **Timeliness Status**: ✅ Currently valid / ⚠️ Needs verification / ❌ Outdated
- **Version Information**: [if applicable, note the version numbers involved]
```
Tool usage:
  • Prefer `mcp__plugin_context7_context7__query-docs` for technical documentation
  • Use `WebSearch` or `mcp__tavily-mcp__tavily-search` for broad searches
  • Use `mcp__tavily-mcp__tavily-extract` to extract specific page content
⚠️ Target Population Verification (BLOCKING - check before inclusion):
Before including any material, verify that its target population matches the research boundary:

| Material Type | Target to Verify | How to Verify |
|---|---|---|
| Policies/regulations | Who is it for? (middle school students / college students / everyone) | Check the document title and scope clauses |
| Academic research | Who are the samples? (vocational students / undergraduates / postgraduates) | Check the methods/sample description sections |
| Statistical data | Which group was counted? | Check the data source description |
| Case reports | What kind of institution is involved? | Confirm the institution type (university / middle school / vocational school) |

Handling mismatched materials:
  • Target completely mismatched → do not include
  • Partial overlap (e.g., "students" includes college students) → include, but mark the applicable scope
  • Usable by analogy (e.g., a middle school policy as a trend reference) → include, but explicitly mark "for reference only"
📁 Save Action: Immediately after consulting each material, append to `01_Source_Materials.md`:

Material #[Serial Number]
  • Title: [material title]
  • Link: [URL]
  • Level: L1/L2/L3/L4
  • Release date: [YYYY-MM-DD]
  • Timeliness status: ✅ Currently valid / ⚠️ Needs verification / ❌ Outdated (reference only)
  • Version information: [must be noted if specific versions are involved]
  • Target population: [clearly note the group/region/level the material targets]
  • Match with research boundary: ✅ Fully matched / ⚠️ Partially overlapping / 📎 For reference only
  • Summary: [1-2 sentences of key content]
  • Related sub-question: [which sub-question it addresses]
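The community-voice-mining round can be fed from the public GitHub REST API, whose issues endpoint also lists pull requests (they carry a `pull_request` key). A minimal sketch; the helper names are illustrative and no authentication or rate-limit handling is shown:

```python
import json

def issues_url(org: str, repo: str) -> str:
    """GitHub REST endpoint for a repo's issues, most-commented first."""
    return (f"https://api.github.com/repos/{org}/{repo}/issues"
            "?sort=comments&direction=desc&state=all&per_page=20")

def hot_issue_titles(issues_json: str, min_comments: int = 10) -> list[str]:
    """From an issues JSON payload, keep real issues (not PRs) with
    enough discussion to be worth reading."""
    issues = json.loads(issues_json)
    return [i["title"] for i in issues
            if "pull_request" not in i and i.get("comments", 0) >= min_comments]
```

The titles that survive the filter are candidates for the "user pain points / feature requests" column above.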

Step 3: Fact Extraction and Evidence Cards

Convert materials into verifiable fact cards:

Fact Cards

Fact 1

  • Statement: [specific fact description]
  • Source: [link/document section]
  • Confidence: High/Medium/Low

Fact 2

...

**Key discipline**:
- Nail down the facts first, then derive
- Distinguish "what the official source says" from "what I inferred"
- When information conflicts, mark it and retain both sides
- Mark confidence:
  - ✅ High: explicitly stated in official documentation
  - ⚠️ Medium: mentioned in an official blog but not formally documented
  - ❓ Low: inferred, or from a non-official source

**📁 Save Action**:
Immediately after extracting each fact, append it to `02_Fact_Cards.md`:

Fact #[serial number]

  • Statement: [specific fact description]
  • Source: [Material #serial number] [link]
  • Target population: [the group this fact applies to, inherited from the material or further refined]
  • Confidence: ✅/⚠️/❓
  • Related dimension: [the comparison dimension it feeds]

**⚠️ Target population in fact statements**:
- If a fact comes from a "partially overlapping" or "reference only" material, its statement **must explicitly note the applicable scope**
- Wrong: "The Ministry of Education prohibits mobile phones in classrooms" (does not say for whom)
- Right: "The Ministry of Education prohibits primary and middle school students from bringing mobile phones into classrooms (does not apply to college students)"
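As an illustration, a fact card can be modeled as a small record rendered to the fields above; the `FactCard` class and its field names are hypothetical, not part of the skill:

```python
from dataclasses import dataclass

@dataclass
class FactCard:
    serial: int
    statement: str   # must carry the target population when scope is partial
    source: str      # "Material #N" plus a link
    target: str      # population the fact applies to
    confidence: str  # "✅", "⚠️", or "❓"
    dimension: str   # comparison dimension it feeds

    def to_markdown(self) -> str:
        """Render the card in the template shape used by 02_Fact_Cards.md."""
        return "\n".join([
            f"## Fact #{self.serial}",
            f"- Statement: {self.statement}",
            f"- Source: {self.source}",
            f"- Target population: {self.target}",
            f"- Confidence: {self.confidence}",
            f"- Related dimension: {self.dimension}",
        ])
```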

Step 4: Establish the Comparison/Analysis Framework

Based on the problem type, select a fixed set of analysis dimensions:
General dimensions (select as needed):
  1. Goal / problem solved
  2. Working mechanism / process
  3. Input / output / boundaries
  4. Advantages / disadvantages / trade-offs
  5. Applicable scenarios / boundary conditions
  6. Cost / benefit / risk
  7. Historical evolution / future trends
  8. Security / permissions / controllability
Dimensions specific to concept comparison:
  1. Definition and essence
  2. Trigger/invocation method
  3. Execution subject
  4. Input/output and type constraints
  5. Determinism and repeatability
  6. Resource and context management
  7. Composition and reuse
  8. Security boundaries and permission control
Dimensions specific to decision support:
  1. Solution overview
  2. Implementation cost
  3. Maintenance cost
  4. Risk assessment
  5. Expected benefits
  6. Applicable scenarios
  7. Team capability requirements
  8. Migration difficulty
📁 Save Action: Write to `03_Comparison_Framework.md`:

Comparison Framework

Selected framework type

[Concept comparison / decision support / ...]

Selected dimensions

  1. [Dimension 1]
  2. [Dimension 2] ...

Initial content

| Dimension | X | Y | Factual Basis |
|---|---|---|---|
| [Dimension 1] | [description] | [description] | Fact #1, #3 |
| ... | | | |

Step 5: Reference Object Alignment

Ensure every party in the comparison has a clear, consistent definition:
Checklist:
  • Is the reference object's definition stable and widely recognized?
  • Does it need verification, or is domain common knowledge sufficient?
  • Will readers understand the reference object the same way I do?
  • Are there ambiguities to clarify first?

Step 6: Derivation Chain from Facts to Conclusions

Explicitly write out the "fact → comparison → conclusion" derivation:

Derivation Process

About [dimension name]

  1. Fact confirmation: according to [source], X's mechanism is...
  2. Compare with the reference object: while Y's mechanism is...
  3. Conclusion: therefore, the difference between X and Y in this dimension is...

**Key discipline**:
- Conclusions come from mechanism comparison, not "it feels like"
- Every conclusion must be traceable to specific facts
- Mark uncertain conclusions

**📁 Save Action**:
Write to `04_Derivation_Process.md`:

Derivation Process

Dimension 1: [dimension name]

Fact confirmation

According to [Fact #X], X's mechanism is...

Compare with the reference object

While Y's mechanism is... (source: [Fact #Y])

Conclusion

Therefore, the difference between X and Y in this dimension is...

Confidence

✅/⚠️/❓ + reasons

Dimension 2: [dimension name]

...

Step 7: Use Case Verification (Sanity Check)

Verify the conclusions against a typical scenario:
Verification questions:
  • According to my conclusions, how should this scenario be handled?
  • Is that actually the case?
  • Are there counterexamples that need explaining?
Review checklist:
  • Is the draft conclusion consistent with the Step 3 fact cards?
  • Are any important dimensions missing?
  • Is there any over-inference?
  • Are the conclusions actionable/verifiable?
📁 Save Action: Write to `05_Verification_Records.md`:

Verification Records

Verification scenario

[scenario description]

Expected per the conclusions

If using X: [expected behavior] If using Y: [expected behavior]

Actual verification result

[actual situation]

Counterexamples

[yes/no; describe if yes]

Review checklist

  • Draft conclusion consistent with the fact cards
  • No important dimensions missing
  • No over-inference
  • Issues found: [if any]

Conclusions needing correction

[if any]

Step 8: Deliverable Processing

Make the report something the boss can read, retell, and trace:
The three deliverables:
  1. One-sentence summary: can be repeated verbatim in a meeting
  2. Structured sections: subheadings that carve up the derivation chain
  3. Traceable evidence: key facts carry source links
📁 Save Action: Integrate all intermediate products and write `FINAL_Research_Report.md`:
  • Extract the background from `00_Problem_Decomposition.md`
  • Cite key facts from `02_Fact_Cards.md`
  • Organize conclusions from `04_Derivation_Process.md`
  • Generate references from `01_Source_Materials.md`
  • Supplement use cases from `05_Verification_Records.md`

Report Output Structure

[调研主题] 调研报告

Research Report on [Research Topic]

摘要

Abstract

[一句话总结核心结论]
[One-sentence summary of core conclusions]

1. 概念对齐

1. Concept Alignment

1.1 X 是什么

1.1 What is X

[定义 + 为什么存在]
[Definition + Why it exists]

1.2 Y 是什么(参照物)

1.2 What is Y (Reference Object)

[作为对比基准]
[As comparison benchmark]

2. 工作机制

2. Working Mechanism

[X 怎么运行,这是核心差异点]
[How X operates; this is where the core differences lie]

3. 联系

3. Relationships

[共同解决的问题,3-4 点]
[Common problems solved, 3-4 points]

4. 区别

4. Differences

[按维度逐项对比,突出决定性差异]
[Compare dimension by dimension, highlighting the decisive differences]

5. 用例演示

5. Use Case Demonstration

[把抽象落地到具体场景]
[Ground the abstractions in concrete scenarios]

6. 总结与建议

6. Summary and Recommendations

[可复述的结论 + 可操作的建议]
[Restatable conclusions + actionable recommendations]

参考资料

References

[所有引用的来源链接]
[Links to all cited sources]

利益相关者视角

Stakeholder Perspective

根据受众调整内容深度:
| 受众 | 侧重 | 详略 |
| --- | --- | --- |
| 决策者 | 结论、风险、建议 | 简洁,强调可操作性 |
| 执行者 | 具体机制、操作方法 | 详细,强调如何做 |
| 技术专家 | 细节、边界条件、限制 | 深入,强调准确性 |
Adjust content depth according to the audience:
| Audience | Focus | Detail Level |
| --- | --- | --- |
| Decision-makers | Conclusions, risks, recommendations | Concise; emphasize actionability |
| Implementers | Specific mechanisms, how-to methods | Detailed; emphasize execution |
| Technical experts | Details, boundary conditions, limitations | In-depth; emphasize accuracy |

输出文件

Output Files

默认保存位置:
~/Downloads/research/<topic>/
必须生成的文件(按流程自动产生):
| 文件 | 内容 | 产生时机 |
| --- | --- | --- |
| 00_问题拆解.md | 问题类型、子问题列表 | Step 0-1 完成后 |
| 01_资料来源.md | 所有资料链接和摘要 | Step 2 过程中持续更新 |
| 02_事实卡片.md | 抽取的事实及出处 | Step 3 过程中持续更新 |
| 03_对比框架.md | 选定的框架和填充 | Step 4 完成后 |
| 04_推导过程.md | 事实→结论的推导 | Step 6 完成后 |
| 05_验证记录.md | 用例验证和回查 | Step 7 完成后 |
| FINAL_调研报告.md | 完整交付报告 | Step 8 完成后 |
可选文件
  • raw/*.md
    - 原始资料存档(内容较长时保存)
Default save location:
~/Downloads/research/<topic>/
Mandatory Generated Files (Automatically generated according to process):
| File | Content | When Generated |
| --- | --- | --- |
| 00_Problem_Decomposition.md | Problem type, sub-question list | After Step 0-1 |
| 01_Source_Materials.md | All source links and summaries | Updated continuously during Step 2 |
| 02_Fact_Cards.md | Extracted facts with sources | Updated continuously during Step 3 |
| 03_Comparison_Framework.md | Selected framework, filled in | After Step 4 |
| 04_Derivation_Process.md | Fact-to-conclusion derivation | After Step 6 |
| 05_Verification_Records.md | Use-case verification and review | After Step 7 |
| FINAL_Research_Report.md | Complete deliverable report | After Step 8 |
Optional Files:
  • raw/*.md
    - Archive of original materials (save when content is long)
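The mandatory file set above can be scaffolded up front so no step forgets its save action. A minimal sketch, assuming the English file names and `/tmp` in place of `~/Downloads/research` (both are illustrative choices):

```python
from pathlib import Path

# Sketch: create the workspace skeleton for a hypothetical topic.
# File names follow the table above.
FILES = [
    "00_Problem_Decomposition.md",
    "01_Source_Materials.md",
    "02_Fact_Cards.md",
    "03_Comparison_Framework.md",
    "04_Derivation_Process.md",
    "05_Verification_Records.md",
    "FINAL_Research_Report.md",
]

def init_workspace(root: str, topic: str) -> Path:
    ws = Path(root) / topic
    (ws / "raw").mkdir(parents=True, exist_ok=True)  # optional raw archive
    for name in FILES:
        (ws / name).touch()  # empty placeholder, filled in by each step
    return ws

ws = init_workspace("/tmp/research", "demo_topic")
```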

方法论速查卡

Methodology Quick Reference Card

┌─────────────────────────────────────────────────────────────┐
│                    深度调研 8 步法                           │
├─────────────────────────────────────────────────────────────┤
│ 0. 判断问题类型 → 选择对应框架模板                            │
│ 1. 问题拆解 → 2-4 个可调研子问题                             │
│ 2. 资料分层 → L1官方 > L2博客 > L3媒体 > L4社区              │
│ 3. 事实抽取 → 每条带出处、标置信度                            │
│ 4. 建立框架 → 固定维度,结构化对比                            │
│ 5. 参照物对齐 → 确保定义统一                                  │
│ 6. 推导链 → 事实→对照→结论,显式写出                          │
│ 7. 用例验证 → Sanity check,防止纸上谈兵                     │
│ 8. 交付化 → 一句话总结 + 结构化章节 + 证据可追溯              │
├─────────────────────────────────────────────────────────────┤
│ 报告结构:定义→机制→联系→区别→用例→总结                       │
│ 关键纪律:结论来自机制对比,不是"我感觉像"                     │
└─────────────────────────────────────────────────────────────┘
┌────────────────────────────────────────────────────────────────────────┐
│                  8-Step In-Depth Research Methodology                  │
├────────────────────────────────────────────────────────────────────────┤
│ 0. Judge problem type → select matching framework template             │
│ 1. Decompose problem → 2-4 researchable sub-questions                  │
│ 2. Stratify sources → L1 official > L2 blog > L3 media > L4 community  │
│ 3. Extract facts → each with source and confidence level               │
│ 4. Build framework → fixed dimensions, structured comparison           │
│ 5. Align reference object → ensure unified definitions                 │
│ 6. Deduction chain → fact → comparison → conclusion, explicit          │
│ 7. Use-case verification → sanity check, not armchair theorizing       │
│ 8. Deliverable → one-line summary + structure + traceable evidence     │
├────────────────────────────────────────────────────────────────────────┤
│ Report structure: definition → mechanism → relationships →             │
│   differences → use cases → summary                                    │
│ Key discipline: conclusions come from mechanism comparison,            │
│   not "I feel like it"                                                 │
└────────────────────────────────────────────────────────────────────────┘

使用示例

Usage Examples

示例 1:技术概念对比

Example 1: Technical Concept Comparison

用户:帮我深度调研 REST API 和 GraphQL 的区别
执行流程:
  1. 判断类型:概念对比型
  2. 拆解问题:定义、机制、适用场景、优劣势
  3. 查阅官方规范(REST论文、GraphQL官方文档)
  4. 抽取事实卡片
  5. 用8维度对比框架分析
  6. 用实际项目场景验证
  7. 输出结构化报告
User: Help me conduct in-depth research on the differences between REST API and GraphQL
Execution Process:
  1. Judge type: Concept Comparison Type
  2. Decompose problems: Definition, mechanism, application scenarios, advantages and disadvantages
  3. Consult official specifications (REST papers, GraphQL official documents)
  4. Extract fact cards
  5. Analyze with 8-dimension comparison framework
  6. Verify with actual project scenarios
  7. Output structured report

示例 2:技术决策支持

Example 2: Technical Decision Support

用户:我们应该选 PostgreSQL 还是 MongoDB?帮我调研一下
执行流程:
  1. 判断类型:决策支持型
  2. 补充问题:用户的业务场景、数据特点、团队经验
  3. 查阅官方文档和性能基准
  4. 用决策维度框架分析
  5. 给出场景化建议
  6. 标注风险和前提条件
User: Should we choose PostgreSQL or MongoDB? Help me research it
Execution Process:
  1. Judge type: Decision Support Type
  2. Supplement questions: User's business scenarios, data characteristics, team experience
  3. Consult official documents and performance benchmarks
  4. Analyze with decision dimension framework
  5. Provide scenario-based recommendations
  6. Mark risks and preconditions

示例 3:趋势分析

Example 3: Trend Analysis

用户:AI Agent 的发展趋势是什么?深入分析一下
执行流程:
  1. 判断类型:趋势分析型
  2. 梳理历史演进脉络
  3. 收集一手资料(论文、官方公告)
  4. 识别驱动因素
  5. 分析当前格局
  6. 审慎预测趋势(标注不确定性)
User: What are the development trends of AI Agent? Conduct an in-depth analysis
Execution Process:
  1. Judge type: Trend Analysis Type
  2. Sort out historical evolution context
  3. Collect first-hand materials (papers, official announcements)
  4. Identify driving factors
  5. Analyze current landscape
  6. Prudent trend prediction (mark uncertainties)

来源周全性要求(Source Verifiability)

Source Verifiability Requirements

核心原则:报告中引用的每一条外部信息,用户都必须能够直接验证。
强制执行的规则
  1. URL 可访问性
    • 所有引用链接必须是公开可访问的(无需登录/付费墙)
    • 如引用需要登录的内容,必须标注
      [需登录]
    • 如引用学术论文,优先提供 arXiv/DOI 等公开版本
  2. 引用精确定位
    • 对于长文档,必须指明具体章节/页码/时间戳
    • 示例:
      [来源: OpenAI Blog, 2024-03-15, "GPT-4 Technical Report", §3.2 Safety]
    • 视频/音频引用需标注时间戳
  3. 内容对应性
    • 引用的事实必须能在原文中找到对应陈述
    • 禁止对原文进行过度推断后当作"引用"
    • 如有解读/推断,必须明确标注"基于 [来源] 推断"
  4. 时效性标注
    • 标注资料的发布/更新日期
    • 对于技术文档,标注版本号
    • 超过 2 年的资料需评估是否仍然有效
  5. 不可验证信息的处理
    • 如信息来源无法公开验证(如私人通信、付费报告摘要),必须在置信度中标注
      [来源受限]
    • 不可验证的信息不能作为核心结论的唯一支撑
Core Principle: Every piece of external information cited in the report must be directly verifiable by users.
Mandatory Rules:
  1. URL Accessibility:
    • All cited links must be publicly accessible (no login/paywall required)
    • If citing content that requires login, must mark
      [Login Required]
    • For academic papers, prioritize providing public versions such as arXiv/DOI
  2. Precise Citation Positioning:
    • For long documents, must specify the specific section/page number/timestamp
    • Example:
      [Source: OpenAI Blog, 2024-03-15, "GPT-4 Technical Report", §3.2 Safety]
    • For video/audio citations, mark the timestamp
  3. Content Correspondence:
    • The cited fact must have a corresponding statement in the original text
    • Prohibit using over-inferred content as "citation"
    • If there is interpretation/inference, must explicitly mark "Inferred based on [Source]"
  4. Timeliness Labeling:
    • Mark the release/update date of materials
    • For technical documents, mark the version number
    • Evaluate validity if materials are older than 2 years
  5. Handling of Unverifiable Information:
    • If the information source cannot be publicly verified (e.g., private communication, paid report abstracts), must mark
      [Source Restricted]
      in the confidence level
    • Unverifiable information cannot be the only support for core conclusions

质量检查清单

Quality Checklist

完成报告前,检查以下项目:
  • 所有核心结论都有 L1/L2 级别的事实支撑
  • 没有使用"可能"、"大概"等模糊词而不标注不确定性
  • 对比维度完整,没有遗漏关键差异
  • 有至少一个实际用例验证结论
  • 参考资料完整,链接可访问
  • 每个引用都可被用户直接验证(来源周全性)
  • 一句话总结清晰可复述
  • 结构层次清晰,老板能快速定位
Before completing the report, check the following items:
  • All core conclusions are supported by L1/L2-level facts
  • No vague words like "may" or "probably" without marking uncertainty
  • Comparison dimensions are complete, no key differences are missing
  • At least one actual use case verifies the conclusion
  • References are complete, links are accessible
  • Every citation can be directly verified by users (Source Verifiability)
  • One-sentence summary is clear and easy to restate
  • Structure is layered clearly, so a boss can quickly locate what they need

⏰ 时效性检查(高敏感领域 BLOCKING)

⏰ Timeliness Check (BLOCKING for High-Sensitivity Fields)

当调研主题属于 🔴极高 或 🟠高 敏感级别时,必须完成以下检查
  • 已完成时效敏感性评估
    00_问题拆解.md
    中包含时效性评估章节
  • 资料时效性已标注:每条资料都有发布日期、时效状态、版本信息
  • 无过时资料作为事实依据
    • 🔴极高:核心事实来源均在 6 个月内
    • 🟠高:核心事实来源均在 1 年内
  • 版本号已明确标注
    • 技术产品/API/SDK 相关描述均标注了具体版本号
    • 没有使用「最新版本」「目前」等模糊时间表述
  • 官方源已优先使用:核心结论有来自官方文档/博客的支撑
  • 交叉验证已完成:关键技术信息从至少 2 个独立来源确认
  • 下载页面已直接验证:平台支持信息来自官方下载页面实时提取,非搜索缓存
  • 协议/功能名称已搜索:已搜索产品支持的协议名称(MCP、ACP 等)
  • GitHub Issues 已挖掘:查看了产品的 GitHub Issues 热门讨论
  • 社区热点已识别:识别并记录了用户最关心的功能点
典型社区声音遗漏错误案例
❌ 错误:仅依赖官方文档,报告中 MCP 只作为普通功能点一笔带过 ✅ 正确:通过 GitHub Issues 发现 MCP 是社区最热议的功能,在报告中重点展开分析其价值
❌ 错误:「Alma 和 Cherry Studio 都支持 MCP」(没有分析差异) ✅ 正确:通过社区讨论发现「Alma 的 MCP 实现与 Claude Code 高度一致,这是其核心竞争力」
典型平台支持/协议遗漏错误案例
❌ 错误:「Alma 仅支持 macOS」(基于搜索引擎缓存的 "Coming soon" 信息) ✅ 正确:直接访问 alma.now/download 页面,核验当前实际支持的平台
❌ 错误:「Alma 支持 MCP」(只搜索了 MCP,遗漏了 ACP) ✅ 正确:同时搜索 "Alma MCP" 和 "Alma ACP",发现 Alma 还支持 ACP 协议集成 CLI 工具
典型时效性错误案例
❌ 错误:「Claude 支持 function calling」(未标注版本,且可能指的是旧版本能力) ✅ 正确:「Claude 3.5 Sonnet (claude-3-5-sonnet-20241022) 通过 Tool Use API 支持函数调用,最大支持 8192 tokens 的工具定义」
❌ 错误:「根据 2023 年的博客,GPT-4 的上下文长度是 8K」 ✅ 正确:「截至 2024 年 1 月,GPT-4 Turbo 支持 128K 上下文(来源:OpenAI 官方文档,2024-01-25 更新)」
When the research topic belongs to 🔴Extremely High or 🟠High sensitivity level, you must complete the following checks:
  • Timeliness sensitivity assessment completed:
    00_Problem_Decomposition.md
    includes the timeliness assessment section
  • Material timeliness labeled: Each material has release date, timeliness status, and version information
  • No outdated materials as factual basis:
    • 🔴Extremely High: Core fact sources are all within 6 months
    • 🟠High: Core fact sources are all within 1 year
  • Version numbers clearly marked:
    • Descriptions related to technical products/APIs/SDKs all mark specific version numbers
    • No vague time expressions like "latest version" or "currently"
  • Official sources prioritized: Core conclusions are supported by official documents/blogs
  • Cross-verification completed: Key technical information confirmed from at least 2 independent sources
  • Download pages directly verified: Platform support information comes from real-time extraction of official download pages, not search cache
  • Protocol/function names searched: Have searched for protocol names supported by the product (MCP, ACP, etc.)
  • GitHub Issues mined: Have viewed popular discussions in the product's GitHub Issues
  • Community hotspots identified: Have identified and recorded the function points that users care about most
Typical Community Voice Omission Error Case:
❌ Wrong: Relying only on official documentation, so the report mentions MCP in passing as an ordinary feature ✅ Correct: Discovering via GitHub Issues that MCP is the community's most discussed feature, and analyzing its value in depth in the report
❌ Wrong: "Both Alma and Cherry Studio support MCP" (no analysis of differences) ✅ Correct: Discover through community discussions that "Alma's MCP implementation is highly consistent with Claude Code, which is its core competitiveness"
Typical Platform Support/Protocol Omission Error Case:
❌ Wrong: "Alma only supports macOS" (based on search engine cached "Coming soon" information) ✅ Correct: Directly access alma.now/download page to verify the currently supported platforms
❌ Wrong: "Alma supports MCP" (only searched MCP, missed ACP) ✅ Correct: Search both "Alma MCP" and "Alma ACP", and discover that Alma also supports ACP protocol integration CLI tools
Typical Timeliness Error Case:
❌ Wrong: "Claude supports function calling" (no version marked, may refer to old version capabilities) ✅ Correct: "Claude 3.5 Sonnet (claude-3-5-sonnet-20241022) supports function calling through the Tool Use API, with a maximum of 8192 tokens for tool definitions"
❌ Wrong: "According to a 2023 blog, the context length of GPT-4 is 8K" ✅ Correct: "As of January 2024, GPT-4 Turbo supports 128K context (Source: OpenAI Official Document, updated on 2024-01-25)"

⚠️ 适用对象一致性检查(BLOCKING)

⚠️ Application Object Consistency Check (BLOCKING)

这是最容易被忽略、也是最致命的检查项:
  • 研究边界已明确定义:在
    00_问题拆解.md
    中有清晰的人群/地域/时间/层级界定
  • 每条资料都标注了适用对象
    01_资料来源.md
    中每条资料都有「适用对象」和「与研究边界匹配」字段
  • 不匹配的资料已正确处理
    • 完全不匹配的资料未被收录
    • 部分重叠的资料已标注适用范围
    • 仅作参考的资料已明确标注
  • 事实卡片中无对象混淆
    02_事实卡片.md
    中每条事实的适用对象与研究边界一致
  • 报告中无对象混淆:最终报告中引用的政策/研究/数据,其适用对象与研究主题一致
典型错误案例
研究主题:「大学生课堂不抬头问题」 错误引用:「2025年10月教育部发文禁止手机进课堂」 问题:该政策针对的是中小学生,不是大学生 后果:读者误以为教育部禁止大学生带手机,严重误导
This is the most easily overlooked, and also the most damaging, check item:
  • Research boundaries clearly defined:
    00_Problem_Decomposition.md
    has clear population/region/time/level definitions
  • Each material marked with application object:
    01_Source_Materials.md
    has "Application Object" and "Match with Research Boundary" fields for each material
  • Mismatched materials handled correctly:
    • Completely mismatched materials not included
    • Partially overlapping materials marked with application scope
    • Reference-only materials explicitly marked
  • No object confusion in fact cards: Application objects of each fact in
    02_Fact_Cards.md
    are consistent with research boundaries
  • No object confusion in report: Policies/research/data cited in the final report have application objects consistent with the research topic
Typical Error Case:
Research Topic: "College students' inattentiveness in class" Wrong Citation: "In October 2025, the Ministry of Education issued a document banning mobile phones from classrooms" Problem: That policy targets primary and secondary school students, not college students Consequence: Readers mistakenly conclude that the Ministry of Education bans college students from bringing phones, which is seriously misleading
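The boundary check above can be sketched as a tiny include/reference/exclude filter over each source's stated population, using the college-student example. The labels and the containment set are illustrative assumptions, not a real taxonomy:

```python
# Sketch of the application-object filter described above.
TARGET = "college students"
BROADER = {"all students", "higher-education students"}  # contain the target

def classify(population: str) -> str:
    """Decide how a source with the given stated population is handled."""
    if population == TARGET:
        return "include"
    if population in BROADER:
        return "reference-only"  # partial overlap: mark the applicable scope
    return "exclude"             # e.g. the mis-cited school phone-ban policy

verdict = classify("primary and secondary school students")
```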

打包输出(BLOCKING)

Package Output (BLOCKING)

调研完成后,将工作目录打包:
bash
tar -czvf ~/outcome.tar.gz -C <parent_dir> <workspace_name>
  • ~/outcome.tar.gz
    已存在,直接覆盖
  • 告知用户打包完成及文件位置
After completing the research, package the working directory:
bash
tar -czvf ~/outcome.tar.gz -C <parent_dir> <workspace_name>
  • If
    ~/outcome.tar.gz
    already exists, overwrite it directly
  • Inform the user that packaging is complete and where the archive is located

最终回复规范

Final Response Specification

调研完成后,向用户回复时:
✅ 应该包含
  • 一句话核心结论
  • 关键发现摘要(3-5 点)
  • 打包文件位置(
    ~/outcome.tar.gz
  • 如有重大不确定性,标注需要进一步验证的点
❌ 禁止包含
  • 过程文件列表(如
    00_问题拆解.md
    01_资料来源.md
    等)
  • 详细的调研步骤说明
  • 工作目录结构展示
原因:过程文件是给后续回溯用的,不是给用户看的。用户关心的是结论,不是过程。
After completing the research, when replying to the user:
✅ Should Include:
  • One-sentence core conclusion
  • Key findings summary (3-5 points)
  • Package file location (
    ~/outcome.tar.gz
    )
  • If there are major uncertainties, mark points that need further verification
❌ Prohibited to Include:
  • List of process files (e.g.,
    00_Problem_Decomposition.md
    ,
    01_Source_Materials.md
    , etc.)
  • Detailed research step descriptions
  • Working directory structure display
Reason: the process files exist for later traceback and reuse, not for the user to read. Users care about conclusions, not process.

版本历史

Version History

  • v1.6 (2026-01-12): 新增社区声音挖掘机制
    • 新增「L4 社区来源的具体化」表格,明确 GitHub Issues/Discussions/Reddit/HN 等来源
    • 搜索策略从 5 轮扩展到 6 轮,新增「社区声音挖掘」轮次
    • 新增「社区声音挖掘的具体操作」指南
    • 质量检查清单新增:GitHub Issues 已挖掘、社区热点已识别
    • 新增典型社区声音遗漏错误案例
    • 来源:Alma vs Cherry Studio 调研中 MCP 功能重要性被低估的教训——官方文档不会强调"别人没有而我有",但社区讨论会
  • v1.5 (2026-01-12): 增强高敏感领域调研准确性
    • 新增规则 6「官方下载/发布页面直接验证」- 必须实时访问下载页面,不依赖搜索缓存
    • 新增规则 7「产品特定协议/功能名称搜索」- 必须搜索 MCP、ACP 等协议名称
    • 搜索策略从 3 轮扩展到 5 轮,新增下载页面验证和协议搜索轮次
    • 新增「最终回复规范」章节 - 禁止在回复中列出过程文件
    • 质量检查清单新增:下载页面验证、协议搜索检查项
    • 新增典型错误案例:平台支持遗漏(Alma Windows)、协议遗漏(Alma ACP)
    • 来源:Alma vs Cherry Studio 调研中遗漏 Windows 支持和 ACP 功能的教训
  • v1.4 (2026-01-12): 添加时效敏感性判断机制
    • 新增 Step 0.5「时效敏感性判断」,将问题分为 4 个敏感级别(极高/高/中/低)
    • 🔴极高敏感领域(AI/大模型等)强制执行:6 个月时间窗口、官方源优先、版本号强制标注
    • Step 2 新增「时效性筛选规则」和「高敏感领域搜索策略」
    • 资料来源模板新增:发布日期、时效状态、版本信息字段
    • 质量检查清单新增「时效性检查」章节
    • 来源:用户反馈科技类调研容易引用过时信息导致误导
  • v1.3 (2026-01-11): 添加来源周全性要求(Source Verifiability)
    • 新增「来源周全性要求」章节,包含 5 条强制执行规则
    • URL 可访问性、引用精确定位、内容对应性、时效性标注、不可验证信息处理
    • 质量检查清单新增「每个引用都可被用户直接验证」条目
    • 来源:用户要求确保外部引用可直接验证
  • v1.2 (2026-01-11): 增加适用对象核查机制
    • Step 1 新增「研究对象界定」,要求明确人群/地域/时间/层级边界
    • Step 2 新增「适用对象核查」,收录资料前必须验证适用对象匹配
    • Step 3 事实卡片模板新增「适用对象」字段
    • 质量检查清单新增「适用对象一致性检查」章节
    • 来源:课堂抬头率调研中误引中小学政策的教训
  • v1.1 (2025-01-11): 增强中间产物管理
    • 新增「工作目录与中间产物管理」章节
    • 每个步骤明确保存动作(📁 标记)
    • 中间文件从"可选"改为"必须"
    • 标准化文件命名和目录结构
  • v1.0 (2025-01-11): 初始版本
    • 基于 Claude Skills vs 函数 调研案例提炼
    • 8 步法完整流程
    • 5 种问题类型框架
    • 多维度对比模板
  • v1.6 (2026-01-12): Added community voice mining mechanism
    • Added the "Specification of L4 Community Sources" table, clarifying sources like GitHub Issues/Discussions/Reddit/HN
    • Expanded search strategy from 5 rounds to 6 rounds, added "Community Voice Mining" round
    • Added "Specific Operations for Community Voice Mining" guide
    • Added to quality checklist: GitHub Issues mined, community hotspots identified
    • Added typical community voice omission error cases
    • Source: Lesson from underestimating the importance of MCP function in Alma vs Cherry Studio research—official documents won't emphasize "features others don't have but I do", but community discussions will
  • v1.5 (2026-01-12): Enhanced accuracy of high-sensitivity field research
    • Added Rule 6 "Direct Verification of Official Download/Release Pages" - Must access download pages in real-time, do not rely on search cache
    • Added Rule 7 "Search for Product-Specific Protocol/Function Names" - Must search for protocol names like MCP and ACP
    • Expanded search strategy from 3 rounds to 5 rounds, added download page verification and protocol search rounds
    • Added "Final Response Specification" chapter - Prohibit listing process files in replies
    • Added to quality checklist: Download page verification, protocol search check items
    • Added typical error cases: Platform support omission (Alma Windows), protocol omission (Alma ACP)
    • Source: Lesson from missing Windows support and ACP function in Alma vs Cherry Studio research
  • v1.4 (2026-01-12): Added timeliness sensitivity judgment mechanism
    • Added Step 0.5 "Timeliness Sensitivity Judgment", dividing questions into 4 sensitivity levels (Extremely High/High/Medium/Low)
    • Mandatory implementation for 🔴Extremely High sensitivity fields (AI/Large Models, etc.): 6-month time window, priority to official sources, mandatory version number labeling
    • Added "Timeliness Screening Rules" and "Search Strategy for High-Sensitivity Fields" in Step 2
    • Added to source material template: Release date, timeliness status, version information fields
    • Added "Timeliness Check" chapter to quality checklist
    • Source: User feedback that technology-related research easily cites outdated information leading to misunderstanding
  • v1.3 (2026-01-11): Added Source Verifiability Requirements
    • Added "Source Verifiability Requirements" chapter, including 5 mandatory rules
    • URL accessibility, precise citation positioning, content correspondence, timeliness labeling, handling of unverifiable information
    • Added to quality checklist: "Every citation can be directly verified by users" item
    • Source: User requirement to ensure external citations are directly verifiable
  • v1.2 (2026-01-11): Added application object verification mechanism
    • Added "Research Object Definition" in Step 1, requiring clear population/region/time/level boundaries
    • Added "Application Object Verification" in Step 2, must verify application object match before including materials
    • Added "Application Object" field to fact card template in Step 3
    • Added "Application Object Consistency Check" chapter to quality checklist
    • Source: Lesson from mistakenly citing primary and secondary school policies in the classroom attention rate research
  • v1.1 (2025-01-11): Enhanced intermediate product management
    • Added "Working Directory and Intermediate Product Management" chapter
    • Clear save actions for each step (marked with 📁)
    • Changed intermediate files from "optional" to "mandatory"
    • Standardized file naming and directory structure
  • v1.0 (2025-01-11): Initial version
    • Refined from the case study of Claude Skills vs Functions research
    • Complete 8-step process
    • 5 problem type frameworks
    • Multi-dimensional comparison template