/update — Update Knowledge Base

Solidify the work from this session into documents and reusable knowledge. Three phases run in sequence: document update, quality review, and pattern extraction.
ECC Resource Pre-check: Confirm that the doc-updater agent and the learn-eval skill are available. If either has been deferred, prompt the user to restore it first or adjust the workflow.
Division of labor with /pr: /update handles knowledge solidification, while /pr handles git output. Combine them when needed: run /update first, then /pr.

Step 1: doc-updater — Update Documents

Use the doc-updater agent to scan the changes from this session and update the relevant documents.
Agent(subagent_type="everything-claude-code:doc-updater")
Scan Scope:
  1. Run git diff --name-only HEAD to get the list of files changed in this session
  2. If there are no uncommitted changes, use git diff --name-only HEAD~3..HEAD to get the files involved in recent commits
  3. If there are still no changes, use AskUserQuestion to ask:
    • Specify Files — manually specify the documents to check
    • Full Scan — scan all documents for outdated content
    • Run learn-eval Only — skip Steps 1-2 and go directly to Step 3 to extract patterns
    • End — terminate the /update workflow
  4. Update according to the changes:
    • Relevant documents under the docs/ directory
    • Architecture diagrams under the docs/CODEMAPS/ directory
    • README.md (if functionality or usage has changed)
    • Other affected document files
HITL Confirmation (Required): Before updating any documents, first list the planned files and use AskUserQuestion to request user confirmation:
The following files will be updated; please confirm they are correct:
1. path/to/file1.md — Reason: xxx
2. path/to/file2.md — Reason: xxx

[Confirm and continue] / [Adjust list] / [Skip]
Document Library / Knowledge Base Ambiguity Handling: If the context mentions a "document library" or "knowledge base", first clarify which directory it refers to; do not assume:
  • It may be docs/, research/, or README.md (project documents)
  • It may be ~/.claude/projects/.../memory/ (Claude memory)
  • It may be ~/.claude/skills/ (skill library)
  • If it cannot be determined from context, use AskUserQuestion to confirm before proceeding
Handover Information: Record which document files were updated, with a summary of the changes, as input for code-reviewer in the next step.
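The scan fallback in items 1-3 above can be sketched in shell. The function name and the NO_CHANGES sentinel are illustrative, standing in for the point where the command would invoke AskUserQuestion:

```shell
# Hypothetical sketch of the Step 1 scan fallback; assumes a git working tree.
collect_changed_files() {
  # First try uncommitted changes against HEAD
  changed=$(git diff --name-only HEAD 2>/dev/null || true)
  if [ -z "$changed" ]; then
    # No uncommitted changes: fall back to the last three commits
    changed=$(git diff --name-only HEAD~3..HEAD 2>/dev/null || true)
  fi
  if [ -z "$changed" ]; then
    # Still nothing: this is where the command would ask via AskUserQuestion
    echo "NO_CHANGES"
  else
    printf '%s\n' "$changed"
  fi
}
```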

Step 2: code-reviewer — Cross-check + Document Quality Review

Use the code-reviewer agent to review the documents updated in Step 1, and cross-check that the changes are complete and correct.
Agent(subagent_type="everything-claude-code:code-reviewer")
Review Focus:
  • Whether the document content accurately reflects the code changes
  • Whether there are outdated or inconsistent descriptions
  • Whether the format and structure follow project conventions
  • Whether any important information is missing
Cross-check:
  1. Compare against the actual changes from git diff, confirming one by one that each updated document covers all important changes
  2. Confirm there are no files that should have been updated but were missed
  3. Confirm there are no files that should not have been updated but were modified by mistake
  4. If omissions or errors are found, flag them in the output report
Output Format:

| Severity | Item | Description |
|----------|------|-------------|
| CRITICAL | Content error | The document description does not match the actual code behavior |
| HIGH | Important omission | Missing description of a key feature or API |
| MEDIUM | Format issue | Inconsistent structure or terminology |

If there are CRITICAL or HIGH issues, use AskUserQuestion to ask the user:
  • Fix, then continue: fix the issues first, then proceed to Step 3
  • Skip and continue: ignore the issues and proceed to learn-eval
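Cross-check items 2 and 3 amount to a set comparison between the files git reports as changed and the documents actually touched. A minimal sketch, assuming both lists are available as sorted one-per-line files; note it treats "covered" as filename overlap, which simplifies the semantic check the agent performs (function name and inputs are hypothetical):

```shell
# Hypothetical helper: report files changed but not covered by any doc update,
# and docs updated that git never reported as changed. Inputs must be sorted.
report_cross_check() {
  changed_list=$1   # files git reports as changed
  updated_list=$2   # docs the doc-updater actually touched
  echo "Changed but not covered by a doc update:"
  comm -23 "$changed_list" "$updated_list"
  echo "Updated but never reported as changed:"
  comm -13 "$changed_list" "$updated_list"
}
```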

Step 3: learn-eval — Extract Reusable Patterns

Use learn-eval to extract reusable patterns from this session, evaluate their quality, and write them to the knowledge base.
Skill(skill="everything-claude-code:learn-eval")
Extraction Scope:
  • Debugging patterns
  • Troubleshooting insights
  • Architectural decisions
  • Project-specific patterns
  • Tool usage patterns
  • Industry standard adoptions
  • Standardized solution selections
Quality Evaluation: learn-eval automatically scores each pattern on five dimensions (specificity, actionability, scope fit, non-redundancy, coverage); only patterns scoring at least 3 are saved.
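The save gate can be illustrated as follows. The five dimensions and the minimum score of 3 come from the description above, but how learn-eval aggregates the dimension scores is not specified here; the integer average in this sketch is an assumption:

```shell
# Hypothetical sketch of the learn-eval save gate. The five dimensions and the
# >= 3 threshold are from the doc; averaging them is an assumption of this sketch.
should_save_pattern() {
  # args: specificity actionability scope_fit non_redundancy coverage (1-5 each)
  total=$(( $1 + $2 + $3 + $4 + $5 ))
  # Keep the pattern only if the (integer) average reaches the threshold
  [ $(( total / 5 )) -ge 3 ]
}
```

For example, a pattern scored 4 4 3 3 4 averages 3 and is kept, while one scored 1 2 2 1 2 is discarded.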

Step 4: Knowledge Base Cross-check (HITL Confirmation)

After learn-eval completes, perform a final cross-check to confirm that all knowledge bases have been updated correctly.
Inventory the knowledge-base locations involved in this session:

| Knowledge Base | Path | Description |
|----------------|------|-------------|
| Claude memory library | ~/.claude/projects/<project-hash>/memory/MEMORY.md | Project-level memory |
| Global skill library | ~/.claude/skills/learned/ | Globally reusable patterns |
| Global memory | ~/.claude/MEMORY.md | Cross-project memory (if any) |
| Project documents | docs/, research/, README.md | Project documentation |

Cross-check Steps:
  1. Read the actual content of each of the knowledge bases above
  2. Compare against the work done in this session, confirming one by one:
    • Decisions or patterns that were implemented but not recorded
    • Whether the recorded content matches what actually happened (no incorrect descriptions)
    • Whether the industry standards or academic references cited this session have been recorded in the appropriate knowledge base
    • Whether there are inconsistencies across knowledge bases (e.g. a contradiction between MEMORY.md and a learned skill)
  3. If problems are found, list the specific differences
HITL Confirmation: After the comparison is complete, present the results with AskUserQuestion and request user confirmation before ending:
Knowledge base cross-check results:
✅ MEMORY.md — xxx correctly recorded
✅ learned/yyy.md — content matches the implementation
⚠️ Line 12 of MEMORY.md does not match the actual behavior; suggested fix: ...

[Confirm, end] / [Fix, then end]
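The inventory in step 1 can be sketched as an existence check over the candidate paths from the table above. The function name is illustrative, and the per-project memory path is left out of the usage example because <project-hash> is project-specific:

```shell
# Hypothetical inventory helper: report which knowledge-base locations exist
# before attempting to read them. Pass candidate paths as arguments.
inventory_knowledge_bases() {
  for kb in "$@"; do
    if [ -e "$kb" ]; then
      echo "FOUND   $kb"
    else
      echo "MISSING $kb"
    fi
  done
}
```

For example: inventory_knowledge_bases docs/ research/ README.md "$HOME/.claude/MEMORY.md"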

Step 5: Summary Report

After all steps are completed, output the final report:

```markdown
# /update Execution Result

## Document Update
- List all updated document files and change summaries

## Quality Review
- List the code-reviewer findings
- Mark fixed / unfixed items

## Knowledge Extraction
- List the patterns extracted by learn-eval
- Mark the storage location (global / project-level)

## Knowledge Base Cross-check
- List the comparison results (✅ correct / ⚠️ difference / ❌ error)
- Mark fixed / pending items

## Suggested Next Steps
- If there are unfixed issues, suggest how to handle them
- If opening a PR is appropriate, suggest running /pr
```