Cognitive Scaffolding

Description

Cognitive Scaffolding structures an agent's context window using principles from cognitive science — primacy effects, recency bias, chunking, and attention allocation. Language models, like human working memory, are not uniform processors. Information placed at the beginning and end of the context receives disproportionate attention (primacy and recency effects), while content in the middle can be effectively invisible. Cognitive Scaffolding exploits these properties to ensure the most critical information receives maximum model attention.
This skill was developed through extensive experimentation on the Buddy™ agent at googleadsagent.ai™, where analysis accuracy improved by measurable margins simply by restructuring how information was arranged in the context window. Campaign performance data placed at strategic positions within the prompt produced significantly better recommendations than the same data placed arbitrarily. The same principle applies to code context, documentation, and any other information an agent must reason over.
The cognitive scaffolding framework organizes context into four zones: the anchor zone (first 5% of context — highest attention, used for identity and immutable rules), the foreground zone (last 20% — high attention, used for the current task and recent context), the structured middle (60% — moderate attention, organized into clearly delimited chunks), and the background zone (15% — lowest attention, used for reference material and fallbacks). Each zone has specific content strategies that maximize the model's ability to utilize the information placed there.
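The four-zone split above can be sketched as a simple budget allocator. The 5/60/15/20 shares come from the text; `zone_budgets` and its rounding policy are illustrative, not part of any API:

```python
# Zone shares as described above; a minimal sketch, not a tuned policy.
ZONE_SHARES = {"anchor": 0.05, "structured_middle": 0.60,
               "background": 0.15, "foreground": 0.20}

def zone_budgets(total_tokens: int) -> dict[str, int]:
    """Split a total context budget into per-zone token allowances.

    round() keeps each zone close to its share; the sum may drift from
    the total by a token or two for budgets that don't divide evenly.
    """
    return {zone: round(total_tokens * share) for zone, share in ZONE_SHARES.items()}
```

For a 100k-token window this yields a 5k anchor, 60k structured middle, 15k background, and 20k foreground.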

Use When

  • Agent accuracy varies from run to run despite the same information being available
  • Long context windows degrade performance compared to shorter interactions
  • Critical instructions or constraints are occasionally ignored by the agent
  • You need to present large amounts of reference data without overwhelming the agent
  • Multi-document reasoning requires the agent to attend to specific sections
  • You are optimizing agent behavior for specific model architectures

How It Works

```mermaid
graph LR
    subgraph "Context Window"
        A[Anchor Zone<br/>5% - Highest Attention<br/>Identity, Rules] --> B[Structured Middle<br/>60% - Chunked Data<br/>Delimited Sections]
        B --> C[Background Zone<br/>15% - Reference<br/>Fallback Material]
        C --> D[Foreground Zone<br/>20% - Current Task<br/>Recent Messages]
    end

    E[Primacy Effect] -.-> A
    F[Chunking Theory] -.-> B
    G[Low Attention Region] -.-> C
    H[Recency Effect] -.-> D
```
The scaffolding exploits well-documented attention patterns in transformer architectures. The anchor zone leverages primacy — the model attends strongly to the earliest tokens, making this the ideal location for identity statements and non-negotiable rules. The foreground zone leverages recency — the most recent tokens receive high attention, making this the best place for the current task and recent conversation. The structured middle uses explicit delimiters and headers to create navigable chunks that compensate for the "lost in the middle" effect. The background zone stores material the model can reference but doesn't need to actively attend to.
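The assembly order this implies — anchor first (primacy), delimited middle chunks, current task last (recency) — can be made concrete with a toy assembler. `assemble_demo` and its delimiter names are illustrative, not part of any API:

```python
def assemble_demo(rules: str, chunks: list[str], task: str) -> str:
    """Toy assembly in the zone order described above."""
    parts = [rules]  # anchor zone: identity and immutable rules
    for i, chunk in enumerate(chunks):  # structured middle: delimited chunks
        parts.append(f'<section name="chunk_{i}">\n{chunk}\n</section>')
    parts.append(f"<current_task>\n{task}\n</current_task>")  # foreground
    return "\n\n".join(parts)
```

The explicit `<section>` delimiters give the model navigable boundaries inside the otherwise low-attention middle region.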

Implementation

Cognitive Zone Builder:
```typescript
interface CognitiveZone {
  name: string;
  position: "anchor" | "middle" | "background" | "foreground";
  budgetPercent: number;
  content: string;
  delimiter: string;
}

class CognitiveScaffold {
  // Budget fields are bookkeeping for callers; assemble() does not truncate.
  private totalBudget: number;
  private zones: Map<string, CognitiveZone> = new Map();

  constructor(totalTokenBudget: number) {
    this.totalBudget = totalTokenBudget;
  }

  setAnchor(content: string): void {
    this.zones.set("anchor", {
      name: "Identity & Rules",
      position: "anchor",
      budgetPercent: 5,
      content,
      delimiter: "",
    });
  }

  addMiddleChunk(name: string, content: string): void {
    const key = `middle_${this.zones.size}`;
    this.zones.set(key, {
      name,
      position: "middle",
      budgetPercent: 0,
      content,
      delimiter: `\n<section name="${name}">\n`,
    });
  }

  // Background zones hold reference material the model can consult but need
  // not actively attend to; assemble() below expects them to exist.
  addBackground(name: string, content: string): void {
    const key = `background_${this.zones.size}`;
    this.zones.set(key, {
      name,
      position: "background",
      budgetPercent: 15,
      content,
      delimiter: "",
    });
  }

  setForeground(content: string): void {
    this.zones.set("foreground", {
      name: "Current Task",
      position: "foreground",
      budgetPercent: 20,
      content,
      delimiter: "\n<current_task>\n",
    });
  }

  assemble(): string {
    const sections: string[] = [];
    const anchor = this.zones.get("anchor");
    if (anchor) sections.push(anchor.content);

    const middle = [...this.zones.entries()]
      .filter(([_, z]) => z.position === "middle")
      .map(([_, z]) => `${z.delimiter}${z.content}\n</section>`);
    sections.push(...middle);

    const bg = [...this.zones.entries()]
      .filter(([_, z]) => z.position === "background");
    for (const [_, zone] of bg) {
      sections.push(`<reference name="${zone.name}">\n${zone.content}\n</reference>`);
    }

    const fg = this.zones.get("foreground");
    if (fg) sections.push(`${fg.delimiter}${fg.content}\n</current_task>`);

    return sections.join("\n\n");
  }
}
```
Attention-Aware Content Placement:
```python
class AttentionOptimizer:
    """Place content based on importance and model attention patterns."""

    ATTENTION_CURVE = {
        "anchor": 0.95,
        "early_middle": 0.60,
        "deep_middle": 0.40,
        "late_middle": 0.55,
        "foreground": 0.90,
    }

    def optimize_placement(self, items: list[dict]) -> list[dict]:
        """Sort items into optimal positions based on importance score."""
        sorted_items = sorted(items, key=lambda x: x["importance"], reverse=True)
        zones = {zone: [] for zone in self.ATTENTION_CURVE}
        zone_order = sorted(self.ATTENTION_CURVE.keys(), key=lambda z: self.ATTENTION_CURVE[z], reverse=True)

        for item in sorted_items:
            best_zone = min(zone_order, key=lambda z: abs(self.ATTENTION_CURVE[z] - item["importance"]))
            zones[best_zone].append(item)

        placement = []
        for zone in ["anchor", "early_middle", "deep_middle", "late_middle", "foreground"]:
            for item in zones[zone]:
                placement.append({**item, "zone": zone, "expected_attention": self.ATTENTION_CURVE[zone]})
        return placement
```
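The core of `optimize_placement` is a nearest-score match between an item's importance and each zone's expected attention. Isolated, with the same illustrative attention values:

```python
# Illustrative attention values, mirroring ATTENTION_CURVE above.
ATTENTION = {"anchor": 0.95, "early_middle": 0.60, "deep_middle": 0.40,
             "late_middle": 0.55, "foreground": 0.90}

def nearest_zone(importance: float) -> str:
    """Pick the zone whose expected attention is closest to the importance score."""
    return min(ATTENTION, key=lambda z: abs(ATTENTION[z] - importance))
```

So a 0.97-importance rule lands in the anchor, while a 0.45-importance reference table is relegated to the deep middle.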
Chunking Strategy for Structured Data:
```python
def format_item(item: dict) -> str:
    """Assumed helper: render one record as a single line."""
    return ", ".join(f"{k}={v}" for k, v in item.items())


def extract_key_values(batch: list[dict]) -> str:
    """Assumed helper: surface a few salient values as a retrieval cue."""
    return "; ".join(format_item(item) for item in batch[:2])


def chunk_for_middle_zone(data: list[dict], chunk_size: int = 5) -> list[str]:
    """Break data into cognitively manageable chunks with clear boundaries."""
    chunks = []
    total = (len(data) + chunk_size - 1) // chunk_size
    for i in range(0, len(data), chunk_size):
        batch = data[i:i + chunk_size]
        header = f"--- Chunk {i // chunk_size + 1} of {total} ---"
        body = "\n".join(format_item(item) for item in batch)
        summary = f"Summary: {len(batch)} items, key values: {extract_key_values(batch)}"
        chunks.append(f"{header}\n{body}\n{summary}")
    return chunks


def build_scaffolded_prompt(task, data, rules):
    # Assumes a Python port of the CognitiveScaffold class above,
    # with snake_case method names.
    scaffold = CognitiveScaffold(total_token_budget=150000)

    rules_text = "\n".join(f"- {r}" for r in rules)
    scaffold.set_anchor(f"""You are Buddy™, a Google Ads analysis agent.
RULES (always enforced):
{rules_text}""")

    for i, chunk in enumerate(chunk_for_middle_zone(data)):
        scaffold.add_middle_chunk(f"data_chunk_{i}", chunk)

    scaffold.set_foreground(f"""CURRENT TASK:
{task}

Analyze the data in the sections above and provide your recommendation.""")

    return scaffold.assemble()
```

Best Practices

  1. Place non-negotiable rules in the anchor zone — system identity, safety constraints, and output format rules belong in the first 5% of context where primacy effect is strongest.
  2. Keep the current task in the foreground — the user's actual request and most recent messages should be the last content the model sees before generating.
  3. Chunk middle content with explicit delimiters — use XML tags, markdown headers, or section boundaries to create navigable structure in the "lost in the middle" zone.
  4. Add per-chunk summaries — a one-line summary at the end of each middle chunk gives the model a retrieval cue without requiring it to re-read the full chunk.
  5. Measure attention empirically — test the same question with data in different positions to quantify your specific model's attention curve.
  6. Avoid critical-only-in-middle placement — if information is essential to the task, place it in anchor or foreground, not solely in the middle zone.
  7. Adapt chunking to content type — code files chunk by function/class, data tables chunk by row groups, documents chunk by section; one-size-fits-all chunking is suboptimal.
  8. Reinforce instructions via repetition — for very long contexts, repeat critical instructions at both the anchor and foreground boundaries.
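Practice 5 amounts to moving the same fact through a list of filler passages and asking the same question each time. A sketch of the probe builder, with the model call itself left out; `position_probe` is a hypothetical helper:

```python
def position_probe(question: str, fact: str, filler: list[str], position: int) -> str:
    """Insert `fact` at `position` among filler passages, with the question last.

    Run the resulting prompts through your model and compare answer accuracy
    by position to estimate that model's attention curve empirically.
    """
    parts = filler[:position] + [fact] + filler[position:]
    return "\n\n".join(parts + [question])
```

Sweeping `position` from 0 to `len(filler)` while holding everything else fixed isolates placement as the only variable.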

Platform Compatibility

| Feature | Claude Code | Cursor | Codex | Gemini CLI |
|---|---|---|---|---|
| Context structuring | ✅ Full | ✅ Full | ✅ Full | ✅ Full |
| XML delimiters | ✅ Preferred | ✅ Supported | ✅ Supported | ✅ Supported |
| Token budget control | ✅ Full | ✅ Full | ✅ Full | ✅ Full |
| Attention optimization | ✅ Claude-tuned | ✅ Model-dependent | ✅ Model-dependent | ✅ Gemini-tuned |
| Zone-based assembly | ✅ Full | ✅ Full | ✅ Full | ✅ Full |

Related Skills

  • Context Engineering - Token budget enforcement and compression that works within the cognitive scaffold zones
  • Prompt Architecture - Three-layer prompt design that maps directly to cognitive scaffold anchor and foreground zones
  • Session Archaeology - Mining past sessions to empirically calibrate attention curves and zone effectiveness

Keywords

cognitive-scaffolding, primacy-effect, recency-bias, chunking, attention-allocation, working-memory, context-structure, lost-in-the-middle, information-placement, agent-skills

© 2026 googleadsagent.ai™ | Agent Skills™ | MIT License