memory-systems

Memory System Design


Memory provides the persistence layer that allows agents to maintain continuity across sessions and reason over accumulated knowledge. Simple agents rely entirely on context for memory, losing all state when sessions end. Sophisticated agents implement layered memory architectures that balance immediate context needs with long-term knowledge retention. The evolution from vector stores to knowledge graphs to temporal knowledge graphs represents increasing investment in structured memory for improved retrieval and reasoning.

When to Activate


Activate this skill when:
  • Building agents that must persist knowledge across sessions
  • Choosing between memory frameworks (Mem0, Zep/Graphiti, Letta, LangMem, Cognee)
  • Needing to maintain entity consistency across conversations
  • Implementing reasoning over accumulated knowledge
  • Designing memory architectures that scale in production
  • Evaluating memory systems against benchmarks (LoCoMo, LongMemEval, DMR)
  • Building dynamic memory with automatic entity/relationship extraction and self-improving memory (Cognee)

Core Concepts


Think of memory as a spectrum from volatile context window to persistent storage. Default to the simplest layer that meets retrieval needs, because benchmark evidence shows tool complexity matters less than reliable retrieval — Letta's filesystem agents scored 74% on LoCoMo using basic file operations, beating Mem0's specialized tools at 68.5%. Add structure (graphs, temporal validity) only when retrieval quality degrades or the agent needs multi-hop reasoning, relationship traversal, or time-travel queries.

Detailed Topics


Production Framework Landscape


Select a framework based on the dominant retrieval pattern the agent requires. Use this table to narrow the shortlist, then validate with the benchmark data below.
| Framework | Architecture | Best For | Trade-off |
| --- | --- | --- | --- |
| Mem0 | Vector store + graph memory, pluggable backends | Multi-tenant systems, broad integrations | Less specialized for multi-agent |
| Zep/Graphiti | Temporal knowledge graph, bi-temporal model | Enterprise requiring relationship modeling + temporal reasoning | Advanced features cloud-locked |
| Letta | Self-editing memory with tiered storage (in-context/core/archival) | Full agent introspection, stateful services | Complexity for simple use cases |
| Cognee | Multi-layer semantic graph via ECL pipeline with customizable Tasks | Evolving agent memory that adapts and learns; multi-hop reasoning | Heavier ingest-time processing |
| LangMem | Memory tools for LangGraph workflows | Teams already on LangGraph | Tightly coupled to LangGraph |
| File-system | Plain files with naming conventions | Simple agents, prototyping | No semantic search, no relationships |
Choose Zep/Graphiti when the agent needs bi-temporal modeling (tracking both when events occurred and when they were ingested) because its three-tier knowledge graph (episode, semantic entity, community subgraphs) excels at temporal queries. Choose Mem0 when the priority is fast time-to-production with managed infrastructure. Choose Letta when the agent needs deep self-introspection through its Agent Development Environment. Choose Cognee when the agent must build dense multi-layer semantic graphs — it layers text chunks and entity types as nodes with detailed relationship edges, and every core piece (ingestion, entity extraction, post-processing, retrieval) is customizable.
Benchmark Performance Comparison
Consult these benchmarks to set expectations, but treat them as signals for specific retrieval dimensions rather than absolute rankings. No single benchmark is definitive.
| System | DMR Accuracy | LoCoMo | HotPotQA (multi-hop) | Latency |
| --- | --- | --- | --- | --- |
| Cognee | - | - | Highest on EM, F1, Correctness | Variable |
| Zep (Temporal KG) | 94.8% | - | Mid-range across metrics | 2.58s |
| Letta (filesystem) | - | 74.0% | - | - |
| Mem0 | - | 68.5% | Lowest across metrics | - |
| MemGPT | 93.4% | - | - | Variable |
| GraphRAG | - | - | ~75-85% | Variable |
| Vector RAG baseline | - | - | ~60-70% | Fast |
Key takeaways: Zep achieves up to 18.5% accuracy improvement on LongMemEval while reducing latency by 90%. Cognee outperformed Mem0, Graphiti, and LightRAG on HotPotQA multi-hop reasoning benchmarks across Exact Match, F1, and human-like correctness metrics. Letta's filesystem-based agents achieved 74% on LoCoMo using basic file operations, confirming that reliable retrieval beats tool sophistication.

Memory Layers (Decision Points)


Pick the shallowest memory layer that satisfies the persistence requirement. Each deeper layer adds infrastructure cost and operational complexity, so only escalate when the shallower layer cannot meet the retrieval or durability need.
| Layer | Persistence | Implementation | When to Use |
| --- | --- | --- | --- |
| Working | Context window only | Scratchpad in system prompt | Always — optimize with attention-favored positions |
| Short-term | Session-scoped | File-system, in-memory cache | Intermediate tool results, conversation state |
| Long-term | Cross-session | Key-value store → graph DB | User preferences, domain knowledge, entity registries |
| Entity | Cross-session | Entity registry + properties | Maintaining identity ("John Doe" = same person across conversations) |
| Temporal KG | Cross-session + history | Graph with validity intervals | Facts that change over time, time-travel queries, preventing context clash |
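As a concrete sketch of the entity layer, the registry below maps aliases to one canonical id so an agent can keep "John Doe" stable across conversations. `EntityRegistry` and its method names are illustrative, not any framework's API:

```python
from dataclasses import dataclass, field

@dataclass
class EntityRegistry:
    """Illustrative entity layer: aliases resolve to one canonical id."""
    _entities: dict = field(default_factory=dict)  # canonical_id -> properties
    _aliases: dict = field(default_factory=dict)   # lowercased alias -> canonical_id

    def register(self, canonical_id: str, aliases: list[str], **props) -> None:
        self._entities.setdefault(canonical_id, {}).update(props)
        for alias in [canonical_id, *aliases]:
            self._aliases[alias.lower()] = canonical_id

    def resolve(self, mention: str):
        return self._aliases.get(mention.lower())

    def properties(self, mention: str) -> dict:
        return self._entities.get(self.resolve(mention), {})

registry = EntityRegistry()
registry.register("user:john-doe", aliases=["John Doe", "John", "J. Doe"],
                  role="customer", plan="pro")
# "John", "john doe", and "J. Doe" all resolve to the same entity
```

Production registries add fuzzy matching and disambiguation; the point here is the single canonical id per entity.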

Retrieval Strategies


Match the retrieval strategy to the query shape. Semantic search handles direct factual lookups well but degrades on multi-hop reasoning; entity-based traversal handles "everything about X" queries but requires graph structure; temporal filtering handles changing facts but requires validity metadata. When accuracy is paramount and infrastructure budget allows, combine strategies into hybrid retrieval.
| Strategy | Use When | Limitation |
| --- | --- | --- |
| Semantic (embedding similarity) | Direct factual queries | Degrades on multi-hop reasoning |
| Entity-based (graph traversal) | "Tell me everything about X" | Requires graph structure |
| Temporal (validity filter) | Facts change over time | Requires validity metadata |
| Hybrid (semantic + keyword + graph) | Best overall accuracy | Most infrastructure |
Zep's hybrid approach achieves 90% latency reduction (2.58s vs 28.9s) by retrieving only relevant subgraphs. Cognee implements hybrid retrieval through its 14 search modes — each mode combines different strategies from its three-store architecture (graph, vector, relational), letting agents select the retrieval strategy that fits the query type rather than using a one-size-fits-all approach.
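One common way to combine strategies into hybrid retrieval is reciprocal rank fusion (RRF), which merges the ranked lists produced by each retriever without needing comparable scores. The sketch below is a generic RRF implementation, not Zep's or Cognee's internals, and the memory ids are hypothetical:

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked id lists from several retrievers into one ordering.
    Standard RRF: score(d) = sum over lists of 1 / (k + rank)."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical memory ids ranked by three strategies
semantic = ["m7", "m2", "m9"]   # embedding similarity
keyword = ["m2", "m4", "m7"]    # keyword/BM25 match
graph = ["m2", "m9"]            # entity traversal

fused = reciprocal_rank_fusion([semantic, keyword, graph])
# "m2" ranks first: it appears near the top of all three lists
```

RRF's appeal is that each retriever only has to produce an ordering; no score normalization across strategies is required.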

Memory Consolidation


Run consolidation periodically to prevent unbounded growth, because unchecked memory accumulation degrades retrieval quality over time. Invalidate but do not discard — preserving history matters for temporal queries that need to reconstruct past states. Trigger consolidation on memory count thresholds, degraded retrieval quality, or scheduled intervals. See Implementation Reference for working consolidation code.
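A minimal consolidation pass might look like the sketch below: superseded facts get a `valid_until` stamp but stay in the store, and a count threshold decides when to run. The field names and threshold are assumptions, not a specific framework's schema:

```python
MEMORY_COUNT_THRESHOLD = 1000  # assumed trigger point; tune per deployment

def should_consolidate(memories: list[dict]) -> bool:
    # Count threshold shown here; production systems also trigger on
    # degraded retrieval quality or scheduled intervals
    return len(memories) >= MEMORY_COUNT_THRESHOLD

def consolidate(memories: list[dict]) -> list[dict]:
    """Invalidate superseded facts instead of deleting them, so temporal
    queries can still reconstruct past states."""
    latest: dict = {}
    for mem in sorted(memories, key=lambda m: m["valid_from"]):
        key = (mem["subject"], mem["predicate"])
        if key in latest:
            # Close the older fact's validity interval; do not discard it
            latest[key]["valid_until"] = mem["valid_from"]
        latest[key] = mem
    return memories  # full history retained

memories = [
    {"subject": "alice", "predicate": "theme", "object": "dark",
     "valid_from": "2024-01-01", "valid_until": None},
    {"subject": "alice", "predicate": "theme", "object": "light",
     "valid_from": "2024-06-01", "valid_until": None},
]
consolidate(memories)
# The older "dark" fact now carries valid_until="2024-06-01"
```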

Practical Guidance


Choosing a Memory Architecture


Start with the simplest viable layer and add complexity only when retrieval quality degrades. Most agents do not need a temporal knowledge graph on day one. Follow this escalation path:
  1. Prototype: Use file-system memory. Store facts as structured JSON with timestamps. This validates agent behavior before committing to infrastructure.
  2. Scale: Move to Mem0 or a vector store with metadata when the agent needs semantic search and multi-tenant isolation, because file-based lookup cannot handle similarity queries.
  3. Complex reasoning: Add Zep/Graphiti when the agent needs relationship traversal, temporal validity, or cross-session synthesis. Graphiti uses structured ties with generic relations, keeping graphs simple and easy to reason about; Cognee builds denser multi-layer semantic graphs with detailed relationship edges — choose based on whether the agent needs bi-temporal modeling (Graphiti) or richer interconnected knowledge structures (Cognee).
  4. Full control: Use Letta or Cognee when the agent must self-manage its own memory with deep introspection, because these frameworks expose memory operations as first-class agent actions.
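Step 1 of this path can be as small as the sketch below: facts appended as timestamped JSON lines, one file per subject. `MEMORY_DIR` and the naming convention are illustrative choices:

```python
import json
import time
from pathlib import Path

MEMORY_DIR = Path("memory")  # illustrative location and naming convention

def remember(subject: str, fact: str) -> None:
    # Append facts as timestamped JSON lines, one file per subject
    MEMORY_DIR.mkdir(exist_ok=True)
    entry = {"fact": fact, "ts": time.time()}
    with (MEMORY_DIR / f"{subject}.jsonl").open("a") as f:
        f.write(json.dumps(entry) + "\n")

def recall(subject: str) -> list[dict]:
    path = MEMORY_DIR / f"{subject}.jsonl"
    if not path.exists():
        return []
    return [json.loads(line) for line in path.read_text().splitlines()]

remember("alice", "prefers dark mode and Python 3.12")
remember("alice", "switched to light mode")
# recall("alice") returns both entries in write order
```

This validates agent behavior cheaply; the timestamps also make the later migration to a store with validity intervals straightforward.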

Integration with Context


Load memories just-in-time rather than preloading everything, because large context payloads are expensive and degrade attention quality. Place retrieved memories in attention-favored positions (beginning or end of context) to maximize their influence on generation.
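Assuming a plain-text prompt assembly, a sketch of attention-favored placement might put a memory block right after the system prompt and repeat the top memory at the end, leaving the middle for history; the function and format here are illustrative:

```python
def assemble_context(system_prompt: str, memories: list[str], history: list[str]) -> str:
    """Sketch of attention-favored placement: memories right after the
    system prompt and a short recap at the end, history in the middle."""
    parts = [system_prompt]
    if memories:
        parts.append("Relevant memories:\n" + "\n".join(f"- {m}" for m in memories))
    parts.extend(history)
    if memories:
        parts.append("Reminder: " + memories[0])  # repeat the top memory last
    return "\n\n".join(parts)

ctx = assemble_context(
    "You are a helpful assistant.",
    ["Alice prefers dark mode"],
    ["User: set up my editor", "Agent: done"],
)
# The memory appears near the start and again at the very end
```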

Error Recovery


Handle retrieval failures gracefully because memory systems are inherently noisy. Apply these recovery strategies in order:
  • Empty retrieval: Fall back to broader search (remove entity filter, widen time range). If still empty, prompt user for clarification.
  • Stale results: Check `valid_until` timestamps. If most results are expired, trigger consolidation before retrying.
  • Conflicting facts: Prefer the fact with the most recent `valid_from`. Surface the conflict to the user if confidence is low.
  • Storage failure: Queue writes for retry. Never block the agent's response on a memory write.
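The conflicting-facts rule above can be sketched directly: pick the fact with the latest `valid_from` and flag it for user clarification when confidence is low. The field names and confidence floor are assumptions:

```python
def resolve_conflict(facts: list[dict], confidence_floor: float = 0.5):
    """Prefer the fact with the most recent valid_from; ask the user
    to clarify when the winning fact's confidence is low."""
    if not facts:
        return None, False
    winner = max(facts, key=lambda f: f["valid_from"])
    ask_user = winner.get("confidence", 1.0) < confidence_floor
    return winner, ask_user

facts = [
    {"object": "dark mode", "valid_from": "2024-01-01", "confidence": 0.9},
    {"object": "light mode", "valid_from": "2024-06-01", "confidence": 0.8},
]
winner, ask_user = resolve_conflict(facts)
# winner is the light-mode fact; confidence 0.8 clears the floor
```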

Examples


**Example 1: Mem0 Integration**

```python
from mem0 import Memory

m = Memory()
m.add("User prefers dark mode and Python 3.12", user_id="alice")
m.add("User switched to light mode", user_id="alice")

# Retrieves current preference (light mode), not outdated one
results = m.search("What theme does the user prefer?", user_id="alice")
```

**Example 2: Temporal Query**

```python
from datetime import datetime

# `graph` is an assumed temporal-graph client exposing these operations

# Track entity with validity periods
graph.create_temporal_relationship(
    source_id=user_node,
    rel_type="LIVES_AT",
    target_id=address_node,
    valid_from=datetime(2024, 1, 15),
    valid_until=datetime(2024, 9, 1),  # moved out
)

# Query: Where did user live on March 1, 2024?
results = graph.query_at_time(
    {"type": "LIVES_AT", "source_label": "User"},
    query_time=datetime(2024, 3, 1),
)
```

**Example 3: Cognee Memory Ingestion and Search**

```python
import cognee
from cognee.modules.search.types import SearchType

# Ingest and build knowledge graph
await cognee.add("./docs/")
await cognee.add("any data")
await cognee.cognify()

# Enrich memory
await cognee.memify()

# Agent retrieves relationship-aware context
results = await cognee.search(
    query_text="Any query for your memory",
    query_type=SearchType.GRAPH_COMPLETION,
)
```

Guidelines


  1. Start with file-system memory; add complexity only when retrieval quality demands it
  2. Track temporal validity for any fact that can change over time
  3. Use hybrid retrieval (semantic + keyword + graph) for best accuracy
  4. Consolidate memories periodically — invalidate but don't discard
  5. Design for retrieval failure: always have a fallback when memory lookup returns nothing
  6. Consider privacy implications of persistent memory (retention policies, deletion rights)
  7. Benchmark your memory system against LoCoMo or LongMemEval before and after changes
  8. Monitor memory growth and retrieval latency in production

Gotchas


  1. Stuffing everything into context: Loading all available memories into the prompt is expensive and degrades attention quality. Use just-in-time retrieval with relevance filtering instead.
  2. Ignoring temporal validity: Facts go stale. Without validity tracking, outdated information poisons the context and the agent acts on wrong assumptions.
  3. Over-engineering early: A filesystem agent can outperform complex memory tooling (Letta scored 74% vs Mem0's 68.5% on LoCoMo). Add sophistication only when simple approaches demonstrably fail.
  4. No consolidation strategy: Unbounded memory growth degrades retrieval quality over time. Set memory count thresholds or scheduled intervals to trigger consolidation.
  5. Embedding model mismatch: Writing memories with one embedding model and reading with another produces poor retrieval because vector spaces are not interchangeable. Pin a single embedding model for each memory store and re-embed all entries if the model changes.
  6. Graph schema rigidity: Over-structured graph schemas (rigid node types, fixed relationship labels) break when the domain evolves. Prefer generic relation types and flexible property bags so new entity kinds do not require schema migrations.
  7. Stale memory poisoning: Old memories that contradict the current state corrupt agent behavior silently. Implement expiry policies or confidence decay so the agent deprioritizes aged facts, and surface contradictions explicitly when detected.
  8. Memory-context mismatch: Retrieving memories that are topically related but contextually wrong (e.g., a memory about "Python" the snake when the agent is discussing Python the language). Mitigate by including session or domain metadata in memory entries and filtering on it during retrieval.
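Gotcha 8's mitigation can be sketched as a domain-filtered search: filter on a `domain` metadata field before matching terms. The schema and the naive term matching are illustrative:

```python
def search_memories(memories: list[dict], query_terms: set[str], domain: str) -> list[dict]:
    # Filter on domain metadata first, then match query terms, so a memory
    # about Python the snake never surfaces in a programming session
    in_domain = [m for m in memories if m.get("domain") == domain]
    return [m for m in in_domain
            if query_terms & set(m["text"].lower().split())]

memories = [
    {"text": "Python 3.12 added improved error messages", "domain": "programming"},
    {"text": "A python can grow over five metres long", "domain": "wildlife"},
]
hits = search_memories(memories, {"python"}, domain="programming")
# Only the programming memory matches
```

The same pre-filter applies unchanged to vector search: attach the metadata at write time and pass it as a filter at query time.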

Integration


This skill builds on context-fundamentals. It connects to:
  • multi-agent-patterns - Shared memory across agents
  • context-optimization - Memory-based context loading
  • evaluation - Evaluating memory quality

References


Internal references:
  • Implementation Reference - Read when: implementing vector stores, property graphs, temporal queries, or memory consolidation logic from scratch
Related skills in this collection:
  • context-fundamentals - Read when: designing the context layer that memory feeds into
  • multi-agent-patterns - Read when: multiple agents need to share or coordinate memory state
External resources:
  • Zep temporal knowledge graph paper (arXiv:2501.13956) - Read when: evaluating bi-temporal modeling or Graphiti's architecture
  • Mem0 production architecture paper (arXiv:2504.19413) - Read when: assessing managed memory infrastructure trade-offs
  • Cognee optimized knowledge graph + LLM reasoning paper (arXiv:2505.24478) - Read when: comparing multi-layer semantic graph approaches
  • LoCoMo benchmark (Snap Research) - Read when: evaluating long-conversation memory retention
  • MemBench evaluation framework (ACL 2025) - Read when: designing memory evaluation suites
  • Graphiti open-source temporal KG engine (github.com/getzep/graphiti) - Read when: implementing temporal knowledge graphs
  • Cognee open-source knowledge graph memory (github.com/topoteretes/cognee) - Read when: building customizable ECL pipelines for memory
  • Cognee comparison: Form vs Function - Read when: comparing graph structures across Mem0, Graphiti, LightRAG, Cognee


Skill Metadata


Created: 2025-12-20 Last Updated: 2026-03-17 Author: Agent Skills for Context Engineering Contributors Version: 4.0.0