# Research

Persistent project-scoped store for deep research findings. You activated this skill because the user asked a substantive research question, or invoked it explicitly with `/research <topic>`.

If invoked with a topic argument (e.g. `/research tailwind-v5`), use it as the seed for Retrieval - start by looking up that topic in `INDEX.md`. Don't research blindly; the lookup may answer immediately.

## When to use

- "what's the latest npm package that does X"
- "compare A vs B vs C for 2026"
- "which engines / frameworks / libraries can clone X fast"
- "research how Y works under the hood"
- "deep dive on Z"
- User pastes a long markdown research dump and asks you to save it

## When NOT to use

- Plan-stage notes
- Small facts or one-line preferences
- Code-level decisions tied to one file
- Casual lookups answerable from a single source with no synthesis
- Recording personal ideas or musings
- As a substitute for a single WebSearch or WebFetch

If a single WebSearch + 1-2 sentences answers the question, you don't need this skill.

## Setup (first use only)

On first activation in a project, do this once:

1. Resolve the project root:
   ```bash
   git rev-parse --show-toplevel 2>/dev/null || pwd
   ```
2. Create `<root>/.research/` if missing.
3. Create `<root>/.research/INDEX.md` with this exact content:
   ```markdown
   # Research index

   | Topic | Path | Last verified | One-liner |
   |---|---|---|---|
   ```
4. Add `.research/` to `<root>/.gitignore`. If `.gitignore` doesn't exist, create it. Research data may contain proprietary insights, default private.

The data lives at `<root>/.research/` (sibling of `.claude/`, not nested inside it). It is a top-level project directory chosen so research data is colocated with the project, gitignored by default, and easy to find by name. Auditing remains intact: every read and write goes through the host's normal permission system.
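The four setup steps above can be collapsed into one idempotent shell pass. A minimal sketch, safe to re-run (adapt as needed - the exact seeding is the step-3 content):

```shell
# One-time bootstrap: resolve root, create the store, seed INDEX.md, gitignore it.
root=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
mkdir -p "$root/.research"

# Seed INDEX.md only if it doesn't exist yet (never clobber an existing index).
if [ ! -f "$root/.research/INDEX.md" ]; then
  printf '# Research index\n\n| Topic | Path | Last verified | One-liner |\n|---|---|---|---|\n' \
    > "$root/.research/INDEX.md"
fi

# Append the ignore rule only if it isn't already there.
touch "$root/.gitignore"
grep -qxF '.research/' "$root/.gitignore" || echo '.research/' >> "$root/.gitignore"
```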

## Workflow

### 1. Retrieval (the read side - this is how the skill saves your context)

The whole point of this system is progressive disclosure: don't load what you don't need. `INDEX.md` is your dispatcher - it lets you decide which entries to load without paying to load them. Walk the hierarchy from cheapest to most expensive; only escalate when the previous tier doesn't answer the question.

#### Loading hierarchy (cheapest → most expensive)

| Tier | Load | Approx tokens | When |
|---|---|---|---|
| 1 | `INDEX.md` (always) | ~100-500 | Every retrieval - your routing table |
| 2 | Entry's `## Summary` only | ~50-200 | When the index shows a topic match |
| 3 | Full `FINDINGS.md` body | ~500-3000 | When Summary doesn't cover the question |
| 4 | Specific `raw/<file>` document | varies (often heavy) | When a finding cites it and you need to verify a claim |
| 5 | Cross-referenced entry (`related:`) | repeats tiers 2-3 | When the question spans entries |

#### Lookup procedure

1. **Read `INDEX.md` first** (tier 1). Scan the one-liner summary column against the user's question. This is the dispatcher - same role as `RESOLVER.md` in GBrain.
2. **Match decision:**
   - Strong match - one entry's one-liner clearly covers the topic → go to step 3 with that entry.
   - Multiple plausible matches - load the `## Summary` of each (still cheap at tier 2). Pick the one(s) that actually answer.
   - Weak / no match → fall through to Investigation. A new entry will be added.
3. **Read only the matched entry's `## Summary`** (tier 2):
   ```bash
   sed -n '/^## Summary/,/^## /p' <root>/.research/<slug>/FINDINGS.md
   ```
   Usually enough.
4. **Escalate one tier only when needed:**
   - Question needs claims-level detail beyond the Summary → load the full `FINDINGS.md` body (tier 3).
   - Question is "have we tried X before / what was discarded?" → `sed` just that section: `sed -n '/^## Discarded approaches/,/^## /p' <root>/.research/<slug>/FINDINGS.md`. Don't load the rest.
   - Question references a paste-cited claim → open that specific file under `raw/` (tier 4).
   - Question spans topics covered by separate entries → follow `related:`, repeat tiers 2-3 on each.
5. **Fall through to Investigation.** Pick the mode:
   - No entry exists in `INDEX.md` → Investigation in new entry mode.
   - Existing entry doesn't actually resolve the question (problem still unsolved) → Investigation in merge mode (pass existing entry content to the subagent).
   - Existing entry is stale on a fast-moving topic → Investigation in merge mode (refresh, don't quote).
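The tiered reads in steps 3-4 are plain `sed` section pulls. A self-contained demo against a throwaway fixture (the `/tmp` path, slug, and contents are illustrative only - real entries live under `<root>/.research/`):

```shell
# Build a tiny fixture FINDINGS.md so the sed behavior is visible in isolation.
mkdir -p /tmp/research-demo/tailwind-v5
cat > /tmp/research-demo/tailwind-v5/FINDINGS.md <<'EOF'
## Summary
Demo summary line.

## Findings
Heavy body that a tier-2 read should skip.
EOF

# Tier 2: print from "## Summary" up to (and including) the next "## " heading,
# leaving the heavy body unread.
sed -n '/^## Summary/,/^## /p' /tmp/research-demo/tailwind-v5/FINDINGS.md
```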

#### What NEVER to do

- Don't load everything. The schema exists so you can be selective.
- Don't load the full body when Summary suffices. If 3 lines answer it, don't pull 300.
- Don't load raw documents speculatively. They're heavy; most questions don't need them.
- Don't re-read an entry you already loaded this session - unless it was updated since.

#### `INDEX.md` as dispatcher

`INDEX.md` exists only so you can decide which entries to load without loading them. The one-liner column is the entire signal you have before paying for an entry read - write it specifically when storing.

Keep `INDEX.md` tight: under ~100 rows. If it grows beyond that, prune or archive. The whole token-saving design collapses if `INDEX.md` itself becomes a bloat source.
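The ~100-row budget can be checked mechanically before appending. A self-contained sketch against a throwaway fixture (the paths and both index rows are hypothetical):

```shell
# Fixture index: header row plus two entry rows (real file: <root>/.research/INDEX.md).
cat > /tmp/index-demo.md <<'EOF'
# Research index

| Topic | Path | Last verified | One-liner |
|---|---|---|---|
| tailwind-v5 | tailwind-v5/FINDINGS.md | 2026-04-25 | v5 status and migration surface |
| orm-comparison-2026 | orm-comparison-2026/FINDINGS.md | 2026-04-20 | Drizzle vs Prisma vs Kysely |
EOF

# Data rows = lines starting with "| " minus the header row (the |---| divider never matches).
rows=$(( $(grep -c '^| ' /tmp/index-demo.md) - 1 ))
if [ "$rows" -gt 100 ]; then
  echo "INDEX.md over budget ($rows rows) - prune or archive"
fi
echo "$rows"   # → 2
```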

### 2. Investigation (when fresh research is needed)

Spawn a `general-purpose` subagent with `model: "opus"` and `run_in_background: true`. The Investigation phase needs a strong model: the contrarian pass and synthesis steps depend on reasoning depth that smaller models won't deliver. Background mode keeps the conversation interactive: the user can keep working while research runs. Storage runs asynchronously when the agent's completion notification arrives.

The subagent does research and returns its synthesis as structured text. It does NOT write any files. You (main agent) handle all file writes in Storage. This split keeps responsibility clean: the subagent has zero context and doesn't need to know your schema or `INDEX.md` layout.

**Naming convention.** Set the Agent tool's `description` parameter to `Research investigation: <topic>` (3 to 5 words). This makes research-skill spawns identifiable in the harness UI.

**On completion notification:** parse the agent's return, apply Storage immediately, surface a brief notice to the user (e.g. *"research on `<topic>` saved to `<path>`"*). Do not dump the full findings into chat unless asked.

Subagents have zero prior context. They don't see this skill, CLAUDE.md, or our conversation. Brief them completely. There is no continuation in this harness - the `SendMessage` tool to resume an agent is not available. One-shot only. If gaps remain, re-spawn with a refined brief.

The mode (new entry vs merge) was decided in Retrieval phase 5. Brief the subagent accordingly:

- New entry mode - standard brief, no existing context to feed.
- Merge mode - paste the existing entry's `## Summary` and any relevant `## Findings` sections into the brief, marked clearly as *"current state of the entry - verify, update, or supersede"*. Tell the subagent to flag claims that are now wrong.

#### Brief checklist

Every Investigation brief MUST include:

- A "You have zero prior context" preamble
- Today's actual date (run `date +%Y-%m-%d` first; pass the literal string)
- The year-pinning rule for WebSearch queries (don't trust the subagent's model prior on what year it is)
- At least 2 independent sources per non-trivial claim. Sources are gathered into a single `## Sources` block at the end of the return; do NOT cite inline.
- The cognitive phases below as explicit numbered steps
- The subagent's required output format (below)
- The strict no-`[n]` / no-inline-URL rule for the Findings body (see Required subagent output format)
- Merge mode only: the existing entry content the subagent should verify / supersede
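One way the date-pinned preamble might be assembled before spawning. A sketch only - the wording below is illustrative, not a canonical brief:

```shell
today=$(date +%Y-%m-%d)   # never trust the model's prior on the year
year=${today%%-*}

preamble=$(cat <<EOF
You have zero prior context: you cannot see the parent conversation or any project files.
Today's actual date is $today. Pin the year in every WebSearch query (e.g. "<topic> $year").
Every non-trivial claim needs at least 2 independent sources. Put ALL sources in one
## Sources block at the end of your return. No [n] citations and no inline URLs in Findings.
EOF
)
echo "$preamble"
```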

#### Cognitive phases (include verbatim in the subagent brief)

The subagent walks these as discrete phases. Phase 4 is load-bearing:

1. **Decompose** - list sub-claims that would resolve the question; identify what evidence settles each.
2. **Gather** - for each sub-claim, find ≥2 independent sources (year-pinned WebSearch → WebFetch on top results). Quote verbatim. Don't synthesize yet.
3. **Validate** - re-derive numbers, benchmarks, version claims. Flag anything that fails.
4. **Contrarian pass** - actively search for "why is this wrong / scam / criticized / deprecated / known-bad". State the strongest objection found. Skipping this is the most common subagent failure mode. Call it out explicitly in the brief.
5. **Synthesize** - verdict + citations + residual disagreements listed explicitly. No silent picks.

#### Required subagent output format

The brief MUST include explicit citation rules. Subagents trained on academic-style writing default to `[1]`, `[2]` inline citations; without explicit instructions they will produce noisy output. State the rules in plain language. Recommended verbatim block to paste into the brief:

**Citation rules. Read carefully and follow exactly:**

- Do NOT use `[n]` numbered citations. No `[1]`, `[2]`, or any bracketed numbers in the Findings body.
- Do NOT put URLs in the Findings body.
- Do NOT add inline footnote markers, anchors, or any per-claim citation tags of any kind.
- Write Findings as plain prose paragraphs.
- When a claim's interpretation depends on which source said it, name the source as prose, no brackets ("per the README", "according to littlemight.com", "the HN-simulator commenter argues..."). No URL, no `[n]`.
- Put ALL sources in a single `## Sources` block at the END of the return, one bullet per source: `- url - fetched YYYY-MM-DD`. The main agent lifts this block to FINDINGS.md frontmatter.
- Source-count discipline is preserved: at least 2 independent sources per non-trivial claim. The discipline lives in source count, not in inline tagging.

Required output shape (what the subagent returns):

```markdown
## Summary
3 to 6 lines TL;DR.

## Findings
Plain prose. No [n] markers. No inline URLs. Inline source-naming as prose
only when load-bearing for interpretation.

## Strongest objection (from contrarian pass)
1 to 2 sentences, or "none found".

## Sources
- url - fetched YYYY-MM-DD
- url - fetched YYYY-MM-DD

## (Merge mode only) Supersedes
- claim from existing entry that is now wrong + reason
```

The subagent does not write any files. You parse this return and apply Storage rules.

#### Gap handling

If the subagent's return has gaps:

- **Small gap** (one missing fact, one specific angle) → fill it yourself with a focused WebSearch / WebFetch. Cheaper than re-spawn.
- **Large gap** (whole sections shallow, contrarian pass clearly skipped) → re-spawn with a refined brief that names the specific gap. The previous return is discarded (no file was written yet).

### 3. Storage (the write side - main agent owns ALL file writes)

#### Review before storing

The subagent's structured format does not validate substance. The format only signals "I followed the template"; it does not confirm the content is correct, well-sourced, or relevant to what was asked. Before applying Storage, run this 4-point check:

1. **Relevance:** does the Summary actually answer what was asked? If the agent disambiguated an ambiguous topic (picked one interpretation of several), confirm it matches the user's intent. If wrong, re-spawn with a tighter brief; do not store.
2. **Source quality:** count primary URLs vs aggregated WebSearch snippets in the `## Sources` block. If most sources are search-result summaries without specific fetched URLs, the entry is weaker than it looks. Either fill primaries yourself with focused WebFetch, or store but flag the weakness explicitly in `## Open questions`.
3. **Contrarian pass evidence:** "none found" is rare on any non-trivial topic. If you got "none found", be skeptical: either the topic is genuinely uncontroversial (rare), or the subagent skipped phase 4 (common). If skipped, fill in yourself with focused contrarian searches, or re-spawn.
4. **Citation cleanup:** if the return contains `[n]` markers in the Findings body despite the brief's no-`[n]` rule, strip them before writing FINDINGS.md. This is a known failure mode (subagents fall back to academic citation habits). Do not push the noise downstream. Same for inline URLs in the Findings body: strip them. Sources belong in frontmatter.

If the review surfaces fixable gaps, fill them yourself with a focused WebSearch / WebFetch (cheaper than re-spawn). If gaps are systemic, re-spawn with a refined brief; do not write a half-formed entry.
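For the citation cleanup in point 4, a rough mechanical first pass might look like this. Regexes over prose are lossy - review the output by eye before writing FINDINGS.md (the draft content below is a throwaway fixture):

```shell
# Throwaway draft standing in for a subagent return that ignored the no-[n] rule.
printf 'Benchmarks favor A.[1] Details at https://example.com/bench [2].\n' > /tmp/findings-draft.md

# Strip [n] markers and bare URLs, then collapse the double spaces left behind.
sed -E -e 's/\[[0-9]+\]//g' \
       -e 's|https?://[^[:space:]]+||g' \
       -e 's/  +/ /g' /tmp/findings-draft.md
```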

#### Apply Storage

After Investigation returns its synthesis (or the user pastes findings), you (main agent, never the subagent) finalize the data layer. Two paths, picked based on the mode chosen in Retrieval phase 5:

**New entry path:**

1. Create `<root>/.research/<topic-slug>/`.
2. Write `FINDINGS.md` using the schema in File schemas. Frontmatter: `created` and `last_verified` = today; `status: active`; `sources` from the subagent return; `raw:` omitted (no raw yet); `related: []` unless cross-links apply.
3. Read `INDEX.md`, append a row: topic, path, today's date, a specific one-liner.

**Merge path:**

1. Read the existing `FINDINGS.md`.
2. Update frontmatter: `last_verified` = today; append new sources.
3. Apply the subagent's `## Supersedes` list: move each named claim from `## Findings` to `## Discarded approaches` with date + reason. See Conflict handling.
4. Update / extend `## Findings` with new claims.
5. Append a `## Timeline` entry summarizing the change.
6. Read `INDEX.md`, update the row's `Last verified` column. Update the one-liner if the picture has changed.

Use kebab-case slugs that match how the user is likely to ask again - e.g. `tailwind-v5`, `2d-engines-clonable`, `orm-comparison-2026`. The slug should disambiguate.
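The index append in new-entry step 3 is a one-line write. A sketch, with a hypothetical slug and one-liner:

```shell
root=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
slug="orm-comparison-2026"                         # hypothetical slug
today=$(date +%Y-%m-%d)
oneliner="Drizzle vs Prisma vs Kysely for 2026 (hypothetical one-liner)"

mkdir -p "$root/.research/$slug"
printf '| %s | %s/FINDINGS.md | %s | %s |\n' "$slug" "$slug" "$today" "$oneliner" \
  >> "$root/.research/INDEX.md"
```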

### 4. Pasted content from the user

If the user pastes a long document and asks you to save it:

1. **Decide path first.** Read `INDEX.md`. Does this paste extend an existing topic (merge mode), or is it a new topic (new entry mode)? Same decision as Retrieval phase 5.
2. Save the raw document verbatim to `<root>/.research/<topic-slug>/raw/<YYYY-MM-DD>-paste.<ext>` (preserve the original extension - `.md`, `.pdf`, `.txt`, `.html`, etc.). If the user pasted text directly with no original file, default to `.md`.
3. Synthesize the content into the same shape the subagent would return (Summary / Findings / Sources). Citations to the raw file: `[Source: raw/<filename>]`.
4. Apply Storage (new entry path or merge path from Section 3) using the synthesized content. When writing the entry, include this raw in the `raw:` frontmatter list (path, note, added date).
5. Offer to delete the original file: "Save this as research and remove the original at `<path>`?". Always ask. Never auto-delete.

If the user provides only synthesized findings (no raw file worth keeping), skip step 2 and the `raw:` frontmatter entry - just synthesize and apply Storage. Only create the `raw/` subfolder when there's actually something to save in it.

## Best practices

These are cross-cutting rules. Apply them throughout the workflow.

### Date and freshness

- **Always run `date +%Y-%m-%d` first.** Pin the actual current year in WebSearch queries ("X 2026", "X latest 2026"). Don't trust your model's prior on what year it is.
- Prefer official release notes and changelogs over blog posts.
- When a source is older than 30 days on a fast-moving topic (npm packages, framework releases, AI tooling), treat it as a hint, not canon. Cross-check against newer sources.
- If an existing entry's `last_verified` is older than 30 days on a fast-moving topic, refresh before answering - don't quote a stale entry as current truth.
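The 30-day staleness check can be mechanical, since ISO dates compare correctly as plain strings. A sketch against a throwaway fixture (`date -d` is GNU; the `-v-30d` form is a BSD/macOS fallback):

```shell
entry=/tmp/findings-freshness-demo.md   # fixture; real path: <root>/.research/<slug>/FINDINGS.md
printf -- '---\nlast_verified: 2020-01-01\n---\n' > "$entry"

cutoff=$(date -d '30 days ago' +%Y-%m-%d 2>/dev/null || date -v-30d +%Y-%m-%d)

# YYYY-MM-DD sorts lexically, so awk's string "<" is a valid date comparison here.
awk -v cutoff="$cutoff" '/^last_verified:/ && $2 < cutoff {
  print "stale: last_verified " $2 " - refresh before answering"
}' "$entry"
```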

### Version preference

When recommending a version of a library, framework, or tool:

- **Default = latest stable production release.** That's what users should run unless they ask for something else.
- **LTS** when the project ships one and the user is on a long-lived stack (Node, Postgres, etc.).
- **Nightly / pre-release / alpha / beta builds:** only when the user explicitly asks ("what's coming in the next release", "any unreleased features that solve X", "give me the bleeding edge"). Don't recommend nightly as a default - it's unstable and changes daily.
- Always state the version number you're recommending (e.g., "Drizzle ORM 1.4.2", not just "Drizzle ORM").

### Source preference

Higher → lower authority for technical claims:

1. Official documentation / release notes / changelog
2. Maintainer blog posts and conference talks
3. GitHub issues, discussions, and pull request descriptions
4. Recent third-party benchmarks and reviews
5. Stack Overflow / Reddit / random blog posts

Always record `fetched: YYYY-MM-DD` next to each source URL.
### Citation discipline

- Every concrete claim is backed by a source URL recorded in the `sources:` frontmatter (the bibliography). No inline `[n]` markers - see Required subagent output format.
- If you don't have a source, write "no source - open question" and add it to `## Open questions`. Never invent a URL or quote.
- When sources disagree, cite both and note the disagreement explicitly. Don't silently pick one.

### Conflict handling

When new evidence contradicts an existing entry:

1. Move the old claim to `## Discarded approaches` with a one-line reason and date. Never delete silently.
2. If the same approach has failed twice or more, flag it loudly in `## Findings`: "approach X has been tried and discarded N times - current working answer is Y". The point is to prevent re-trying refuted approaches in future sessions.
3. Update `last_verified` and append a `## Timeline` entry summarizing the change.

## File schemas

### INDEX.md

```markdown
# Research index

| Topic | Path | Last verified | One-liner |
|---|---|---|---|
| <topic-slug> | <topic-slug>/FINDINGS.md | YYYY-MM-DD | <one-line summary that disambiguates> |
```

The one-liner is what future-you scans to decide whether to load the entry. Make it specific.

### FINDINGS.md

```markdown
---
topic: <slug>
created: YYYY-MM-DD
last_verified: YYYY-MM-DD
status: active                 # active | superseded
related: []                    # other entry slugs for cross-reference
sources:
  - url: https://...
    fetched: YYYY-MM-DD
raw:                           # omit if no raws were saved
  - path: raw/2026-04-25-paste.pdf
    note: user pasted vendor whitepaper
    added: 2026-04-25
---

# <Topic name>

## Summary
3-6 lines. The TL;DR. Loads first on lookup; should answer the common question alone.

## Findings
Plain prose. No [n] markers. No inline URLs. When a claim's interpretation depends
on which source said it, name the source as prose ("per the README", "according to
littlemight.com", "the HN-simulator commenter notes"). For raw documents, refer
descriptively ("the pasted whitepaper"); the raw: frontmatter has the file path.
Frontmatter sources: is the bibliography.

## Discarded approaches
| Approach | Why dropped | Date |
|---|---|---|

## Open questions
- ...

## Timeline
- YYYY-MM-DD - initial entry
```

Notes on the schema:

- `raw:` is a list - one entry can accumulate multiple raw documents over time (e.g., user pastes a whitepaper, then later a different report on the same topic). Add new items, don't overwrite.
- Omit the `raw:` key entirely when there are no raw documents - don't leave an empty list.
- `related:` cross-links to other entry slugs. Use this when entries touch overlapping projects but answer different questions (e.g., `knowledge-graphs-comparison` and `mempalace-legitimacy` both mention mempalace but have different scopes - link them, don't merge them).

## Anti-patterns

- **Entry spam:** one topic = one entry. Don't create separate entries for sub-aspects; nest them as sections.
- **Researching the skill itself:** don't write meta-entries about how research-memory works.
- **Hallucinated sources:** never invent URLs or quotes. If WebFetch failed, say so.
- **Auto-delete in the paste handler:** always offer, never act.
- **Silent supersede:** any change to a prior conclusion goes through `## Discarded approaches` + `## Timeline`. Never overwrite.