/distill

Surface patterns in accumulated knowledge. Propose what to keep, promote, compact, or dismiss. The human decides.
Distill is the complement to /salvage: salvage extracts learning from a single session; distill curates the corpus those extractions build over time. Without distill, metis accumulates but never compounds.

When to Use

Invoke /distill when:
  • After multiple sessions - The corpus has grown and hasn't been reviewed
  • Before a new phase - You want to know what's actually settled before moving forward
  • Search results feel noisy - oh_search_context is returning too much loosely related content
  • Similar learnings keep appearing - /salvage keeps extracting the same insights (the meta-signal)
  • End of a successful session - Even good sessions produce learnings worth capturing before context is lost
Use distill (not /salvage) when the session went well. /salvage is for stopping because things went wrong. Distill is for pausing because things went right — or simply finished — and learnings are worth capturing before context is lost.
Do not use when: You're in the middle of execution. Distill is a pause point, not a mid-flight activity.

The Human-Led Curation Principle

Distill leverages LLMs for what they're good at — pattern recognition, clustering, surfacing similar entries — while keeping judgment with the human.
LLMs do: find recurring themes, group similar entries, surface candidates. Humans do: decide what matters, what's worth promoting, what's stale or context-specific.
Auto-promotion is never correct. A theme proposal is not a guardrail until a human writes it.

The Process

Step 1: Establish Scope

Decide what corpus to work with:
  • Session scope (default, no RNA needed): learnings from this conversation
  • Corpus scope (RNA available): accumulated metis across all sessions, optionally filtered by outcome, phase, or tag

Step 2: Surface Candidates

Session scope: Review the conversation. What was learned? What assumptions were validated or invalidated? What constraints were discovered? What would be useful to know at the start of the next session?
Corpus scope (RNA): Call oh_search_context broadly. Cluster by semantic similarity. Identify:
  • Entries that appear together repeatedly (candidates for compaction)
  • Patterns across entries (candidates for guardrail promotion)
  • Entries that contradict each other (candidates for resolution)
  • Entries that are stale or overly context-specific (candidates for dismissal)
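The corpus-scope clustering above can be sketched in code. This is a minimal illustration under stated assumptions, not the actual implementation: `cluster_candidates`, the `(id, vector)` corpus format, and the 0.85 similarity threshold are all hypothetical, and the toy vectors stand in for real embeddings. It also honors the sparse-corpus guardrail (fewer than 5 entries: surface nothing rather than manufacture patterns).

```python
# Hedged sketch of Step 2: group metis entries whose embeddings are
# close enough to be compaction/promotion candidates. Entry format,
# threshold, and embeddings are illustrative assumptions.
from itertools import combinations
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def cluster_candidates(entries, threshold=0.85):
    """Greedy single-link grouping; entries is a list of (id, vector).
    Returns only groups of 2+ entries (singletons are not candidates)."""
    if len(entries) < 5:  # sparse corpus: don't cluster noise
        return []
    parent = {eid: eid for eid, _ in entries}
    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (id_a, va), (id_b, vb) in combinations(entries, 2):
        if cosine(va, vb) >= threshold:
            parent[find(id_a)] = find(id_b)
    groups = {}
    for eid, _ in entries:
        groups.setdefault(find(eid), []).append(eid)
    return [g for g in groups.values() if len(g) > 1]
```

The groups this returns are only candidates for the Step 3 review; nothing is merged or promoted automatically.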

Step 3: Present for Human Review

Present each candidate group with four possible actions:
**Theme: [theme name]**
Entries: [list with source IDs and one-line summaries]
Suggested action: [Keep / Promote / Compact / Dismiss]
Reason: [why this action fits]
→ Your call:
Keep — leave as individual metis entries, no change.
Promote — pattern is recurring and stable enough to warrant a guardrail. Distill drafts a stub; the human approves the content (editing as needed), then an agent writes the file:
```markdown
---
id: [slug]
outcome: [outcome-id]
severity: soft
title: [one-line constraint]
---
[drafted body — human refines, agent writes to .oh/guardrails/]
```
Compact — multiple entries say the same thing. The human approves a merged version; originals are archived or deleted.
Dismiss — stale, superseded, or so context-specific it misleads more than it helps.

Step 4: Write Results

Only write what the human approved. No auto-promotion, no auto-deletion.
  • New metis entries → .oh/metis/<slug>.md
  • Guardrail candidates → draft for human to write to .oh/guardrails/<slug>.md
  • Compactions → new merged entry + note which originals can be removed
  • Session file compaction → offer to remove stale planning artifacts, keep settled decisions as brief anchors
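The write step can be sketched as follows. The frontmatter schema is an assumption (the actual `.oh/metis/` entry format is not specified in this document, and `write_metis_entry` is a hypothetical helper); the one part that is not negotiable is the approval gate.

```python
# Minimal sketch of Step 4, assuming a simple frontmatter format for
# metis entries. Only human-approved entries are ever written.
from pathlib import Path

def write_metis_entry(root, slug, title, body, approved):
    """Write one approved metis entry under .oh/metis/<slug>.md;
    refuse anything that a human has not signed off on."""
    if not approved:  # no auto-promotion, no auto-write
        raise ValueError("entry was not approved by a human")
    path = Path(root) / ".oh" / "metis" / f"{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(f"---\nid: {slug}\ntitle: {title}\n---\n{body}\n")
    return path
```

The same gate applies to compactions and dismissals: the agent prepares, the human decides, and only then does anything touch disk.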

Output Format


Distill Summary

Scope: [session | corpus — filtered by: outcome/phase/tag]
Entries reviewed: [N]

Proposals

[Theme or entry title]
  • Source(s): [file paths or conversation reference]
  • Suggested: [Keep / Promote to guardrail / Compact / Dismiss]
  • Reason: [one sentence]
  • → Decision: [human fills this in]
[repeat for each proposal]

Results Written

  • [what was actually written, with file paths]

Guardrails

  • Never auto-promote. A theme proposal is not a guardrail until a human writes it.
  • Never auto-delete. Dismissal proposals require human confirmation.
  • Preserve provenance. Every proposal links to source metis IDs or conversation context.
  • Phase-aware. Corpus-mode clustering must surface phase tags — cross-phase metis often misleads. A solution-space learning is not automatically relevant in problem-space.
  • Graceful with sparse corpus. With fewer than 5 entries, surface what exists without manufacturing false patterns. Don't cluster noise.
  • Metis is contextual, not universal. What worked in one context doesn't carry everywhere. The human selects what applies; distill surfaces candidates.

Adaptive Enhancement

Base Skill (prompt only)

Reads the current conversation. Surfaces what was learned this session. Proposes metis entries for human review and approval. Works after any completed session — the positive complement to /salvage (which is for drift and failure).
Output: approved entries as markdown, ready to write to .oh/metis/.

With .oh/ session file

  • Reads .oh/<session>.md for session context and prior metis
  • Writes approved metis entries directly to .oh/metis/
  • After distill, offers to compact the session file itself: remove stale planning content, keep settled decisions as brief anchors
  • The compacted session file seeds future sessions more cleanly

With RNA MCP (repo-native-alignment)

Corpus mode: run oh_search_context across all accumulated metis (or a filtered subset), cluster by semantic similarity, and surface candidate groups for human review.
Output is PR-able: distill produces a set of proposed file writes (new guardrail stubs, compacted metis, removal notes) formatted as a markdown summary. The human creates a branch, writes the approved files, and opens a PR — or uses /oh-plan to create issues for each promotion. Curation becomes a collaborative act, not a solo one.
Filtering options: pass an outcome ID, phase tag, or recency window to oh_search_context to narrow the corpus. Useful when the full corpus is large but only a domain slice needs curation.
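The filtering idea can be sketched as a plain post-filter over already-fetched entries. Everything here is an illustrative assumption: `filter_corpus`, the dict keys (`outcome`, `phase`, `created`), and the fixed reference time; the actual oh_search_context parameters may differ.

```python
# Hedged sketch of corpus narrowing: outcome, phase, and recency
# filters applied to a list of metis entries (as dicts). Field names
# are assumptions, not the real oh_search_context interface.
from datetime import datetime, timedelta

def filter_corpus(entries, outcome=None, phase=None, days=None):
    """Keep only entries matching the given outcome/phase/recency slice."""
    now = datetime(2025, 1, 1)  # fixed reference time for the sketch
    out = []
    for e in entries:
        if outcome and e.get("outcome") != outcome:
            continue
        if phase and e.get("phase") != phase:
            continue
        if days and now - e["created"] > timedelta(days=days):
            continue
        out.append(e)
    return out
```

Phase filtering matters most: as the Guardrails section notes, cross-phase metis often misleads, so curating one phase slice at a time is usually safer than clustering the whole corpus.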

Position in Framework

Comes after: Multiple sessions of /salvage or /execute that have accumulated metis. Or: the end of any session where learnings are worth capturing before context is lost.
Leads to: Cleaner oh_search_context results. Guardrail candidates for human authoring. A compacted session file that seeds the next session.
Relationship to /salvage: Salvage is per-session extraction (especially from failure). Distill is corpus-level curation (independent of any single session outcome). They're complementary — salvage feeds the corpus; distill keeps it from becoming noise.

Leads To

After distill, typically:
  • Write approved guardrail files to .oh/guardrails/
  • Open a PR with .oh/ changes for team review (corpus mode)
  • Return to /aim or /problem-space with a cleaner, more settled context

Remember: Distill is not synthesis. The LLM finds patterns; you decide what matters. A corpus that accumulates without curation becomes noise. A corpus that's periodically distilled becomes situated judgment — the kind that actually improves future sessions.