multi-agent-patterns

Multi-Agent Architecture Patterns

Multi-agent architectures distribute work across multiple language model instances, each with its own context window. When designed well, this distribution enables capabilities beyond single-agent limits. When designed poorly, it introduces coordination overhead that negates benefits. The critical insight is that sub-agents exist primarily to isolate context, not to anthropomorphize role division.

When to Activate

Activate this skill when:
  • Single-agent context limits constrain task complexity
  • Tasks decompose naturally into parallel subtasks
  • Different subtasks require different tool sets or system prompts
  • Building systems that must handle multiple domains simultaneously
  • Scaling agent capabilities beyond single-context limits
  • Designing production agent systems with multiple specialized components

Core Concepts

Use multi-agent patterns when a single agent's context window cannot hold all task-relevant information. Context isolation is the primary benefit — each agent operates in a clean context without accumulated noise from other subtasks, preventing the telephone game problem where information degrades through repeated summarization.
Choose among three dominant patterns based on coordination needs, not organizational metaphor:
  • Supervisor/orchestrator — Use for centralized control when tasks have clear decomposition and human oversight matters. A single coordinator delegates to specialists and synthesizes results.
  • Peer-to-peer/swarm — Use for flexible exploration when rigid planning is counterproductive. Any agent can transfer control to any other through explicit handoff mechanisms.
  • Hierarchical — Use for large-scale projects with layered abstraction (strategy, planning, execution). Each layer operates at a different level of detail with its own context structure.
Design every multi-agent system around explicit coordination protocols, consensus mechanisms that resist sycophancy, and failure handling that prevents error propagation cascades.

Detailed Topics

Why Multi-Agent Architectures

The Context Bottleneck. Reach for multi-agent architectures when a single agent's context fills with accumulated history, retrieved documents, and tool outputs to the point where performance degrades. Recognize three degradation signals: the lost-in-middle effect (attention weakens for mid-context content), attention scarcity (too many competing items), and context poisoning (irrelevant content displaces useful content).
Partition work across multiple context windows so each agent operates in a clean context focused on its subtask. Aggregate results at a coordination layer without any single context bearing the full burden.
The Token Economics Reality. Budget for substantially higher token costs. Production data shows multi-agent systems run at approximately 15x the token cost of a single-agent chat:
Architecture              Token Multiplier   Use Case
Single agent chat         1x baseline        Simple queries
Single agent with tools   ~4x baseline       Tool-using tasks
Multi-agent system        ~15x baseline      Complex research/coordination
Research on the BrowseComp evaluation found that three factors explain 95% of performance variance: token usage (80% of variance), number of tool calls, and model choice. This validates distributing work across agents with separate context windows to add capacity for parallel reasoning.
Prioritize model selection alongside architecture design — upgrading to better models often provides larger performance gains than doubling token budgets. BrowseComp data shows that model quality improvements frequently outperform raw token increases. Treat model selection and multi-agent architecture as complementary strategies.
The Parallelization Argument. Assign parallelizable subtasks to dedicated agents with fresh contexts rather than processing them sequentially in a single agent. A research task requiring searches across multiple independent sources, analysis of different documents, or comparison of competing approaches benefits from parallel execution. Total real-world time approaches the duration of the longest subtask rather than the sum of all subtasks.
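As a rough illustration, the sketch below fans independent subtasks out to concurrent sub-agent calls with asyncio; run_subagent is a hypothetical stand-in for whatever model invocation your framework provides.
python
import asyncio

async def run_subagent(subtask: str) -> str:
    # Hypothetical sub-agent call; each subtask runs in its own fresh context.
    await asyncio.sleep(1)  # stands in for model and tool latency
    return f"findings for: {subtask}"

async def parallel_research(subtasks: list[str]) -> list[str]:
    # Wall-clock time approaches the longest subtask, not the sum of all of them.
    return await asyncio.gather(*(run_subagent(s) for s in subtasks))

results = asyncio.run(parallel_research([
    "search source A",
    "analyze document B",
    "compare approaches C and D",
]))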
The Specialization Argument. Configure each agent with only the system prompt, tools, and context it needs for its specific subtask. A general-purpose agent must carry all possible configurations in context, diluting attention. Specialized agents carry only what they need, operating with lean context optimized for their domain. Route from a coordinator to specialized agents to achieve specialization without combinatorial explosion.
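A minimal sketch of specialization without any particular framework: each specialist configuration carries only its own system prompt and tools, and a simple route function picks one per subtask. The names and registry here are illustrative, not from any library.
python
from dataclasses import dataclass, field

@dataclass
class SpecialistConfig:
    """Lean per-agent configuration: only what this subtask needs."""
    system_prompt: str
    tools: list[str] = field(default_factory=list)

SPECIALISTS = {
    "search":   SpecialistConfig("You find and cite sources.", ["web_search"]),
    "analysis": SpecialistConfig("You analyze data and report statistics.", ["run_sql"]),
    "writing":  SpecialistConfig("You draft the final report.", []),
}

def route(subtask_type: str) -> SpecialistConfig:
    # The coordinator selects a specialist instead of one agent carrying every tool.
    return SPECIALISTS[subtask_type]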

Architectural Patterns

Pattern 1: Supervisor/Orchestrator. Deploy a central agent that maintains global state and trajectory, decomposes user objectives into subtasks, and routes to appropriate workers.
User Query -> Supervisor -> [Specialist, Specialist, Specialist] -> Aggregation -> Final Output
Choose this pattern when: tasks have clear decomposition, coordination across domains is needed, or human oversight is important.
Expect these trade-offs: strict workflow control and easier human-in-the-loop interventions, but the supervisor context becomes a bottleneck, supervisor failures cascade to all workers, and the "telephone game" problem emerges where supervisors paraphrase sub-agent responses incorrectly.
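To make the delegation loop concrete, here is a minimal sketch with hypothetical decompose and call_specialist stubs standing in for model calls; the key point is that only distilled summaries flow back into the supervisor's context.
python
def decompose(query: str) -> list[dict]:
    # Hypothetical planner step; a real supervisor would use the model here.
    return [{"domain": "research", "goal": query},
            {"domain": "writing", "goal": "draft the report"}]

def call_specialist(domain: str, task: dict) -> dict:
    # Hypothetical worker call; each specialist runs in its own fresh context.
    return {"summary": f"[{domain}] finished: {task['goal']}"}

def run_supervisor(query: str) -> str:
    """Decompose, delegate, and aggregate; workers return only distilled summaries."""
    summaries = [call_specialist(t["domain"], t)["summary"] for t in decompose(query)]
    return "\n".join(summaries)  # stands in for a final synthesis call

print(run_supervisor("Compare supervisor and swarm architectures"))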
The Telephone Game Problem and Solution. Anticipate that supervisor architectures initially perform approximately 50% worse than optimized versions due to the telephone game problem (LangGraph benchmarks). Supervisors paraphrase sub-agent responses, losing fidelity with each pass.
Fix this by implementing a forward_message tool that allows sub-agents to pass responses directly to users:
python
def forward_message(message: str, to_user: bool = True):
    """
    Forward sub-agent response directly to user without supervisor synthesis.

    Use when:
    - Sub-agent response is final and complete
    - Supervisor synthesis would lose important details
    - Response format must be preserved exactly
    """
    if to_user:
        return {"type": "direct_response", "content": message}
    return {"type": "supervisor_input", "content": message}
Prefer swarm architectures over supervisors when sub-agents can respond directly to users, as this eliminates translation errors entirely.
Pattern 2: Peer-to-Peer/Swarm. Remove central control and allow agents to communicate directly based on predefined protocols. Any agent transfers control to any other through explicit handoff mechanisms.
python
# Swarm-style handoff: Agent and agent_b are assumed to come from the surrounding
# framework; returning another agent from a tool function transfers control to it.
def transfer_to_agent_b():
    return agent_b  # Handoff via function return

agent_a = Agent(
    name="Agent A",
    functions=[transfer_to_agent_b]
)
Choose this pattern when: tasks require flexible exploration, rigid planning is counterproductive, or requirements emerge dynamically and defy upfront decomposition.
Expect these trade-offs: no single point of failure and effective breadth-first scaling, but coordination complexity increases with agent count, divergence risk rises without a central state keeper, and robust convergence constraints become essential.
Define explicit handoff protocols with state passing. Ensure agents communicate their context needs to receiving agents.
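A sketch of an explicit handoff that carries state, assuming nothing beyond the standard library: the Handoff object (a name invented for this example) says who receives control and what context travels with it, so the receiver never has to reconstruct it.
python
from dataclasses import dataclass, field

@dataclass
class Handoff:
    """Explicit control transfer: who receives it and what state they need."""
    target_agent: str
    reason: str
    state: dict = field(default_factory=dict)

def triage_agent(message: str) -> Handoff:
    # Returning a Handoff transfers control; the state travels with it.
    if "refund" in message.lower():
        return Handoff("billing", "refund request", {"original_message": message})
    return Handoff("general", "unclassified", {"original_message": message})

handoff = triage_agent("I need a refund for last month")
print(handoff.target_agent, handoff.state)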
Pattern 3: Hierarchical. Organize agents into layers of abstraction: strategy (goal definition), planning (task decomposition), and execution (atomic tasks).
Strategy Layer (Goal Definition) -> Planning Layer (Task Decomposition) -> Execution Layer (Atomic Tasks)
Choose this pattern when: projects have clear hierarchical structure, workflows involve management layers, or tasks require both high-level planning and detailed execution.
Expect these trade-offs: clear separation of concerns and support for different context structures at different levels, but coordination overhead between layers, potential strategy-execution misalignment, and complex error propagation paths.
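A compact sketch of the three layers, with plain functions standing in for real agents, purely to show how each layer narrows the level of detail the next one sees.
python
def strategy_layer(goal: str) -> list[str]:
    # Goal definition: outputs milestones, not implementation detail.
    return [f"milestone: research {goal}", f"milestone: report on {goal}"]

def planning_layer(milestone: str) -> list[str]:
    # Task decomposition: turns one milestone into atomic tasks.
    return [f"{milestone} / step {i}" for i in range(1, 3)]

def execution_layer(task: str) -> str:
    # Atomic execution in its own narrow context.
    return f"done: {task}"

results = [execution_layer(t)
           for m in strategy_layer("multi-agent patterns")
           for t in planning_layer(m)]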

Context Isolation as Design Principle

Treat context isolation as the primary purpose of multi-agent architectures. Each sub-agent should operate in a clean context window focused on its subtask without carrying accumulated context from other subtasks.
Isolation Mechanisms. Select the right isolation mechanism for each subtask:
  • Full context delegation — Share the planner's entire context with the sub-agent. Use for complex tasks where the sub-agent needs complete understanding. The sub-agent has its own tools and instructions but receives full context for its decisions. Note: this partially defeats the purpose of context isolation.
  • Instruction passing — Create instructions via function call; the sub-agent receives only what it needs. Use for simple, well-defined subtasks. Maintains isolation but limits sub-agent flexibility.
  • File system memory — Agents read and write to persistent storage. Use for complex tasks requiring shared state. The file system serves as the coordination mechanism, avoiding context bloat from shared state passing. Introduces latency and consistency challenges but scales better than message-passing.
Choose based on task complexity, coordination needs, and acceptable latency. Default to instruction passing and escalate to file system memory when shared state is needed. Avoid full context delegation unless the subtask genuinely requires it.
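The sketch below contrasts instruction passing with file-system memory using only the standard library; names such as write_scratchpad and the scratchpad directory are illustrative, not from any framework.
python
import json
from pathlib import Path

SCRATCH = Path("scratchpad")  # shared persistent storage for file-system coordination
SCRATCH.mkdir(exist_ok=True)

def delegate_by_instruction(subtask: str) -> dict:
    # Instruction passing: the sub-agent sees only this payload, nothing else.
    return {"instruction": subtask, "constraints": ["return a short summary"]}

def write_scratchpad(agent: str, payload: dict) -> Path:
    # File-system memory: agents coordinate through shared files instead of long messages.
    path = SCRATCH / f"{agent}.json"
    path.write_text(json.dumps(payload))
    return path

def read_scratchpad(agent: str) -> dict:
    return json.loads((SCRATCH / f"{agent}.json").read_text())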

Consensus and Coordination

The Voting Problem. Avoid simple majority voting — it treats hallucinations from weak models as equal to reasoning from strong models. Without intervention, multi-agent discussions devolve into consensus on false premises due to inherent bias toward agreement.
Weighted Voting. Weight agent votes by confidence or expertise. Agents with higher confidence or domain expertise should carry more weight in final decisions.
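A small sketch of confidence-weighted voting, assuming each agent reports an answer plus a self-assessed weight; in practice the weights could come from calibration data or domain expertise scores instead.
python
from collections import defaultdict

def weighted_vote(votes: list[tuple[str, float]]) -> str:
    """Each vote is (answer, weight); the answer with the largest total weight wins."""
    totals: dict[str, float] = defaultdict(float)
    for answer, weight in votes:
        totals[answer] += weight
    return max(totals, key=totals.get)

# Two confident specialists outweigh three low-confidence agents that merely agree.
print(weighted_vote([
    ("A", 0.9), ("A", 0.8),
    ("B", 0.4), ("B", 0.4), ("B", 0.3),
]))  # -> "A"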
Debate Protocols. Structure agents to critique each other's outputs over multiple rounds. Adversarial critique often yields higher accuracy on complex reasoning than collaborative consensus. Guard against sycophantic convergence where agents agree to be agreeable rather than correct.
Trigger-Based Intervention. Monitor multi-agent interactions for behavioral markers. Activate stall triggers when discussions make no progress. Detect sycophancy triggers when agents mimic each other's answers without unique reasoning.
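A rough sketch of how such triggers might be checked, assuming each round records every agent's answer: the stall trigger fires when answers stop changing between rounds, and the sycophancy trigger fires when all agents agree before any critique has happened.
python
def stall_trigger(rounds: list[dict[str, str]]) -> bool:
    """Fires when the last two rounds produced identical answers from every agent."""
    return len(rounds) >= 2 and rounds[-1] == rounds[-2]

def sycophancy_trigger(rounds: list[dict[str, str]]) -> bool:
    """Fires when all agents already agree in the first round, before any critique."""
    return len(set(rounds[0].values())) == 1

rounds = [
    {"agent_a": "X", "agent_b": "X", "agent_c": "X"},  # immediate agreement
]
print(sycophancy_trigger(rounds))  # True: no unique reasoning surfaced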

Framework Considerations

Different frameworks implement these patterns with different philosophies. LangGraph uses graph-based state machines with explicit nodes and edges. AutoGen uses conversational/event-driven patterns with GroupChat. CrewAI uses role-based process flows with hierarchical crew structures.

Practical Guidance

Failure Modes and Mitigations

Failure: Supervisor Bottleneck. The supervisor accumulates context from all workers, becoming susceptible to saturation and degradation.
Mitigate by constraining worker output schemas so workers return only distilled summaries. Use checkpointing to persist supervisor state without carrying full history in context.
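One way to enforce a constrained output schema, sketched with a plain dataclass rather than any particular validation library; the character budget is an illustrative number.
python
from dataclasses import dataclass

MAX_SUMMARY_CHARS = 800  # illustrative per-worker budget

@dataclass
class WorkerResult:
    """Distilled worker output: the only thing the supervisor keeps in context."""
    task_id: str
    summary: str
    sources: list[str]

    def __post_init__(self):
        if len(self.summary) > MAX_SUMMARY_CHARS:
            raise ValueError("summary exceeds the supervisor's per-worker budget")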
Failure: Coordination Overhead. Agent communication consumes tokens and introduces latency. Complex coordination can negate parallelization benefits.
Mitigate by minimizing communication through clear handoff protocols. Batch results where possible. Use asynchronous communication patterns. Measure whether multi-agent coordination actually saves time versus a single agent with a longer context.
Failure: Divergence. Agents pursuing different goals without central coordination drift from intended objectives.
Mitigate by defining clear objective boundaries for each agent. Implement convergence checks that verify progress toward shared goals. Set time-to-live limits on agent execution to prevent unbounded exploration.
Failure: Error Propagation. Errors in one agent's output propagate to downstream agents that consume that output, compounding into increasingly wrong results.
Mitigate by validating agent outputs before passing to consumers. Implement retry logic with circuit breakers. Use idempotent operations where possible. Consider adding a verification agent that cross-checks critical outputs before they enter the pipeline.
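A sketch of retry logic with a simple circuit breaker, using only the standard library; call_agent and validate are hypothetical stand-ins for your own agent invocation and output checks.
python
class CircuitOpen(Exception):
    """Raised once an agent has failed validation too many times in a row."""

def call_with_breaker(call_agent, validate, task, max_failures: int = 3):
    """Retry a failing agent call, but stop feeding bad output downstream."""
    failures = 0
    while failures < max_failures:
        output = call_agent(task)
        if validate(output):          # never trust upstream output without verification
            return output
        failures += 1
    raise CircuitOpen(f"agent failed validation {max_failures} times on {task!r}")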

Examples

Example 1: Research Team Architecture
text
Supervisor
├── Researcher (web search, document retrieval)
├── Analyzer (data analysis, statistics)
├── Fact-checker (verification, validation)
└── Writer (report generation, formatting)
Example 2: Handoff Protocol
python
def handle_customer_request(request):
    # Route by request type; transfer_to and the specialist agents are assumed to be
    # provided by the surrounding framework, which treats the return value as a handoff.
    if request.type == "billing":
        return transfer_to(billing_agent)
    elif request.type == "technical":
        return transfer_to(technical_agent)
    elif request.type == "sales":
        return transfer_to(sales_agent)
    else:
        return handle_general(request)

Guidelines

  1. Design for context isolation as the primary benefit of multi-agent systems
  2. Choose architecture pattern based on coordination needs, not organizational metaphor
  3. Implement explicit handoff protocols with state passing
  4. Use weighted voting or debate protocols for consensus
  5. Monitor for supervisor bottlenecks and implement checkpointing
  6. Validate outputs before passing between agents
  7. Set time-to-live limits to prevent infinite loops
  8. Test failure scenarios explicitly

Gotchas

  1. Supervisor bottleneck scaling — Supervisor context pressure grows non-linearly with worker count. At 5+ workers, the supervisor spends more tokens processing summaries than workers spend on actual tasks. Set a hard cap on workers per supervisor (3-5) and add a second supervisor tier rather than overloading one.
  2. Token cost underestimation — Multi-agent runs cost approximately 15x baseline. Teams consistently underbudget because they estimate per-agent costs without accounting for coordination overhead, retries, and consensus rounds. Budget for 15x and treat anything less as a bonus.
  3. Sycophantic consensus — Agents in debate patterns tend to converge on agreeable answers, not correct ones. LLMs have an inherent bias toward agreement. Counter this by assigning explicit adversarial roles and requiring agents to state disagreements before convergence is allowed.
  4. Agent sprawl — Adding more agents past 3-5 shows diminishing returns and increases coordination overhead. The number of communication channels grows quadratically with agent count. Start with the minimum viable number of agents and add only when a clear context isolation benefit exists.
  5. Telephone game in message-passing — Information degrades through repeated summarization as it passes between agents. Each agent paraphrases and loses nuance. Use filesystem coordination instead of message-passing for state that multiple agents need to access faithfully.
  6. Error propagation cascades — One agent's hallucination becomes another agent's "fact." Downstream agents have no way to distinguish upstream hallucinations from genuine information. Add validation checkpoints between agents and never trust upstream output without verification.
  7. Over-decomposition — Splitting tasks too finely creates more coordination overhead than the task itself. A 10-step pipeline with 10 agents spends more tokens on handoffs than on actual work. Decompose only when subtasks genuinely benefit from separate contexts.
  8. Missing shared state — Agents operating without a shared filesystem or state store duplicate work, produce inconsistent outputs, and lose track of what has already been accomplished. Establish shared persistent storage before building multi-agent workflows.

Integration

This skill builds on context-fundamentals and context-degradation. It connects to:
  • memory-systems - Shared state management across agents
  • tool-design - Tool specialization per agent
  • context-optimization - Context partitioning strategies

References

Internal reference:
  • Frameworks Reference - Read when: implementing a specific multi-agent pattern in LangGraph, AutoGen, or CrewAI and needing framework-specific code examples
Related skills in this collection:
  • context-fundamentals - Read when: needing to understand context window mechanics before designing agent partitioning
  • memory-systems - Read when: agents need to share state across context boundaries or persist information between runs
  • context-optimization - Read when: individual agent contexts are too large and need partitioning or compression strategies
External resources:

Skill Metadata

Created: 2025-12-20
Last Updated: 2026-03-17
Author: Agent Skills for Context Engineering Contributors
Version: 2.0.0