deno-debugger


Deno Debugger Skill

Debug Deno/TypeScript applications using the V8 Inspector Protocol with pre-written TypeScript helper scripts.

When to Use This Skill

  • User reports memory leaks in their Deno application
  • API endpoints are slow and need profiling
  • Async operations complete in the wrong order (race conditions)
  • Application crashes or throws unexpected exceptions
  • User wants to understand memory usage or CPU hotspots

⚠️ CRITICAL: Use Pre-written Scripts

DO NOT write your own CDP client, heap analyzer, or profiler code. All infrastructure is already implemented in `./scripts/`:
  • cdp_client.ts - Complete CDP WebSocket client
  • heap_analyzer.ts - Heap snapshot parsing and analysis
  • cpu_profiler.ts - CPU profiling and hot path detection
  • breadcrumbs.ts - Investigation state tracking (use sparingly, see below)
  • report_gen.ts - Markdown report generation
Your job is to use these scripts to investigate, not rewrite them.

Breadcrumb Usage Guidelines

Purpose of Breadcrumbs:
Breadcrumbs create a timeline of your investigative reasoning, not just your actions. They answer:
  • "What did I think was wrong, and why?"
  • "What evidence changed my thinking?"
  • "Why did I focus on X instead of Y?"
  • "How did I arrive at this conclusion?"
This is valuable because:
  1. Review and learning - Later, you or others can understand the investigation process
  2. Debugging the debugging - If the conclusion was wrong, see where reasoning went off track
  3. Knowledge transfer - Team members can learn investigation techniques
  4. Complex investigations - When exploring multiple hypotheses, breadcrumbs prevent getting lost
Use breadcrumbs to track your investigation state, NOT as a log of every action.
Use breadcrumbs for:
  • ✅ Initial hypothesis about the problem
  • ✅ Major decision points (e.g., "focusing on heap analysis vs CPU profiling")
  • ✅ Key findings that change your understanding
  • ✅ Final conclusion
Do NOT use breadcrumbs for:
  • ❌ Every file read or code inspection
  • ❌ Routine actions like "connecting to inspector"
  • ❌ Small intermediate steps
  • ❌ Things already visible in the final report
Example of good breadcrumb use:
```typescript
const bc = new Breadcrumbs();

// High-level hypothesis
bc.addHypothesis(
  "Memory leak caused by retained event listeners",
  "User reports memory grows when users navigate between pages"
);

// Major finding that changes direction
bc.addFinding(
  "Found 500+ DOM nodes retained after page navigation",
  { node_count: 523, size_mb: 12.4 },
  "critical"
);

// Final decision
bc.addDecision(
  "Root cause: event listeners not cleaned up in destroy()",
  "Heap snapshot shows references from global event bus"
);
```
The breadcrumb timeline is for YOU to track your thinking, not a transcript of every action.

Prerequisites

The user must start their Deno app with inspector enabled:
```bash
deno run --inspect=127.0.0.1:9229 --allow-net --allow-read app.ts
```
Or to pause at startup:
```bash
deno run --inspect-brk=127.0.0.1:9229 --allow-net app.ts
```

Workflow

Make a todo list for all tasks in this workflow and work through them one at a time.

1. Setup and Connect

Import the pre-written helper scripts:
```typescript
import { CDPClient } from "./scripts/cdp_client.ts";
import { Breadcrumbs } from "./scripts/breadcrumbs.ts";

async function investigate() {
  // Initialize investigation tracking (optional for complex cases)
  const bc = new Breadcrumbs();

  // Connect to Deno inspector
  const client = new CDPClient("127.0.0.1", 9229);
  await client.connect();

  // Enable debugging
  await client.enableDebugger();

  // Your investigation continues...
}
```
DO NOT write a custom CDP client. Use the CDPClient class.

2. Form Hypothesis

Form a clear hypothesis about what's causing the problem. You can optionally record it:
```typescript
// Optional: Track your initial hypothesis
bc.addHypothesis(
  "Memory leak in upload handler due to retained buffers",
  "User reports memory grows after each file upload"
);
```
Note: Only use breadcrumbs if the investigation is complex enough to warrant tracking your thought process. For simple investigations, skip breadcrumbs entirely.

3. Choose Investigation Pattern

Based on the problem type, follow one of these patterns:

Pattern A: Memory Leak

IMPORTANT: For large heaps (>100MB), use the FAST comparison mode to avoid 3+ hour waits!
```typescript
import { compareSnapshotsFast } from "./scripts/heap_analyzer.ts";
import type { CDPClient } from "./scripts/cdp_client.ts";

// 1. Capture baseline
console.log("Capturing baseline snapshot...");
await client.takeHeapSnapshot("investigation_output/baseline.heapsnapshot");
const baseline_size = (await Deno.stat("investigation_output/baseline.heapsnapshot")).size / (1024 * 1024);
console.log(`Baseline: ${baseline_size.toFixed(2)} MB`);

// 2. Trigger the leak (ask user or trigger programmatically)
console.log("\nTrigger the leak now...");
// User triggers leak or you make HTTP request, etc.
await new Promise(resolve => setTimeout(resolve, 5000)); // Wait

// 3. Capture comparison
console.log("Capturing comparison snapshot...");
await client.takeHeapSnapshot("investigation_output/after.heapsnapshot");
const after_size = (await Deno.stat("investigation_output/after.heapsnapshot")).size / (1024 * 1024);

// 4. Analyze growth
const growth_mb = after_size - baseline_size;
console.log(`After: ${after_size.toFixed(2)} MB (grew ${growth_mb.toFixed(2)} MB)`);

// 5. FAST: Compare snapshots using summary-only mode
// This skips edges and retention paths (10-50x faster for large heaps)
const comparison = await compareSnapshotsFast(
  "investigation_output/baseline.heapsnapshot",
  "investigation_output/after.heapsnapshot"
);

console.log("\nTop 10 growing objects:");
console.table(comparison.slice(0, 10).map(row => ({
  Type: row.nodeType,
  Name: row.name.substring(0, 40),
  "Count Δ": row.countDelta,
  "Size Δ (MB)": (row.sizeDelta / (1024 * 1024)).toFixed(2),
})));

// 6. If you need retaining paths for specific objects, load with full mode:
// (Only do this if compareSnapshotsFast wasn't enough)
/*
import { loadSnapshot } from "./scripts/heap_analyzer.ts";

const afterSnapshot = await loadSnapshot("investigation_output/after.heapsnapshot");
const suspiciousNode = afterSnapshot.nodes.find(n => n.name === "LeakyObject");
if (suspiciousNode) {
  const path = afterSnapshot.findRetainingPath(suspiciousNode.id);
  console.log("Why is this object retained?", path);
}
*/

// 7. Examine code to find the cause
const sourceCode = await Deno.readTextFile("path/to/app.ts");
// [Your code inspection here]
```
Performance Guide:

| Heap Size | compareSnapshotsFast() | loadSnapshot() + compareSnapshots() |
|-----------|------------------------|-------------------------------------|
| <10 MB    | ~2 seconds             | ~5 seconds                          |
| 100 MB    | ~8 seconds             | ~2 minutes                          |
| 900 MB    | ~20 seconds            | ~3 hours ❌                          |

When to use full mode:
  • ✅ Always run compareSnapshotsFast() FIRST
  • ✅ Only load full snapshots if you need retaining paths
  • ✅ Narrow down to specific objects before loading full snapshots

Pattern B: Performance Bottleneck

Key Challenge: Large codebases make it hard to find O(n²) or other algorithmic issues.
Strategy: Use CPU profiling with automatic complexity analysis and flamegraph visualization.
```typescript
import {
  startProfiling,
  stopProfiling,
  analyzeProfile,
  analyzeComplexity,
  printComplexityAnalysis,
  saveFlamegraphHTML
} from "./scripts/cpu_profiler.ts";

// 1. Start profiling
await startProfiling(client);
console.log("Profiling started");

// 2. Trigger slow operation
console.log("Triggering slow operation (e.g., processing 100 items)...");
await fetch("http://localhost:8080/process", {
  method: "POST",
  body: JSON.stringify({ items: Array(100).fill({}) })
});

// 3. Stop and collect profile
const profile = await stopProfiling(client, "profile.cpuprofile");

// 4. Analyze for hot functions
const analysis = analyzeProfile(profile);
console.log("\nTop 5 Hot Functions:");
for (const func of analysis.hotFunctions.slice(0, 5)) {
  const totalPct = (func.totalTime / analysis.totalDuration * 100).toFixed(1);
  const selfPct = (func.selfTime / analysis.totalDuration * 100).toFixed(1);
  console.log(`  ${func.functionName}`);
  console.log(`    Total: ${totalPct}% | Self: ${selfPct}%`);
}

// 5. NEW: Automatic O(n²) Detection
console.log("\n🔍 Algorithmic Complexity Analysis:");
const complexityIssues = analyzeComplexity(profile);
printComplexityAnalysis(complexityIssues);

// This will automatically flag:
// - Functions with >50% self time (likely O(n²) or worse)
// - Nested loops, checksums, comparisons
// - Common O(n²) patterns

// 6. NEW: Generate Flamegraph Visualization
await saveFlamegraphHTML(profile, "flamegraph.html");
console.log("\n📊 Flamegraph saved to flamegraph.html");
console.log("   Open in browser or upload to https://speedscope.app");
console.log("   Look for: Wide bars = high total time, Tall stacks = deep calls");

// 7. Examine identified bottleneck
// Based on complexity analysis, check the flagged function
const criticalIssues = complexityIssues.filter(i => i.severity === "critical");
if (criticalIssues.length > 0) {
  console.log(`\n🎯 Investigate: ${criticalIssues[0].functionName}`);
  console.log(`   Evidence: ${criticalIssues[0].evidence}`);
  console.log(`   Suspected: ${criticalIssues[0].suspectedComplexity}`);
}
```
Understanding Self Time vs Total Time:
  • Total Time: time spent in the function plus all functions it calls
    • High total time → the function is on the critical path
    • Example: processImages() calling processOne() 100 times
  • Self Time: time spent in the function's own code only
    • High self time → the function itself is slow (not just calling slow code)
    • Example: nested loops, expensive calculations
  • O(n²) indicator: high self time (>50%) often indicates O(n²) or worse
    • High total time but low self time → the function is calling slow code
    • High self time → the function's own logic is the problem
When to Use Each Tool:

| Tool                  | Use When             | Finds                        |
|-----------------------|----------------------|------------------------------|
| analyzeProfile()      | Always first         | Hot functions, call patterns |
| analyzeComplexity()   | Suspected O(n²)      | Algorithmic bottlenecks      |
| saveFlamegraphHTML()  | Complex call trees   | Visual patterns, deep stacks |
| Hot paths analysis    | Multiple bottlenecks | Critical execution paths     |
Common O(n²) Patterns Detected:
```typescript
// Pattern 1: Nested loops (CRITICAL)
for (const item of items) {          // O(n)
  for (const other of items) {       // O(n) ← flags this!
    if (compare(item, other)) { }
  }
}

// Pattern 2: Repeated linear searches (CRITICAL)
for (const item of items) {                // O(n)
  const found = items.find(x => x.id === item.ref);  // O(n) ← flags this!
}

// Pattern 3: Checksums in loops (WARNING)
for (const item of items) {          // O(n)
  calculateChecksum(item.data);      // If checksum is O(n) → O(n²) total
}
```
Fix Strategy:
  1. Run analyzeComplexity() to find critical issues
  2. Check the flamegraph for visual confirmation (wide bars)
  3. Examine the flagged function's self time:
    • >50% self time → definitely the bottleneck
    • <10% self time → just calling slow code
  4. Common fixes:
    • Use Map/Set instead of Array.find() → O(n) to O(1)
    • Move invariant calculations outside loops
    • Cache expensive computations
    • Use streaming/chunking for large datasets
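The Map fix from step 4 can be sketched as follows; `Item`, `id`, and `ref` are illustrative names, not part of the helper scripts:

```typescript
interface Item { id: string; ref: string }

// O(n²): a linear search runs inside the loop.
function linkSlow(items: Item[]): number {
  let linked = 0;
  for (const item of items) {
    if (items.find((x) => x.id === item.ref)) linked++;  // O(n) per iteration
  }
  return linked;
}

// O(n): build a Map once, then every lookup is O(1).
function linkFast(items: Item[]): number {
  const byId = new Map<string, Item>(items.map((x) => [x.id, x] as [string, Item]));
  let linked = 0;
  for (const item of items) {
    if (byId.has(item.ref)) linked++;
  }
  return linked;
}

const items: Item[] = [
  { id: "a", ref: "b" },
  { id: "b", ref: "a" },
  { id: "c", ref: "z" },  // dangling reference
];
// Both functions return 2; only the cost differs.
```

Both versions produce identical results, so this refactor is safe to verify with existing tests before and after.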

Pattern C: Race Condition / Concurrency Bug

Key Challenge: Race conditions are timing-dependent and hard to reproduce consistently.
Strategy: Use conditional breakpoints to catch the race only when it occurs.
```typescript
// 1. Set CONDITIONAL breakpoints to catch specific states
// Break only when lock is already claimed (race condition!)
await client.setBreakpointByUrl(
  "file:///app.ts",
  130,  // Line where we check lock state
  0,
  "lock.state !== 'available'"  // ← CONDITION: Only break if lock not available
);

// Break when version increments unexpectedly (indicates concurrent modification)
await client.setBreakpointByUrl(
  "file:///app.ts",
  167,
  0,
  "lock.version > expectedVersion"  // ← CONDITION: Version jumped
);

console.log("✓ Conditional breakpoints set for race detection");

// 2. Set pause on exceptions (catches errors from race)
await client.setPauseOnExceptions("all");

// 3. Generate concurrent requests to trigger the race
// Need many concurrent attempts to hit the timing window
console.log("Generating 100 concurrent requests to trigger race...");

const requests = [];
for (let i = 0; i < 100; i++) {
  requests.push(
    fetch("http://localhost:8081/acquire", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        lockId: "test-lock",
        clientId: `client-${i}`,
      }),
    })
  );
}

// Fire all requests concurrently
const responses = await Promise.all(requests);

// 4. If race occurs, breakpoint will trigger
// When paused, inspect the state
const frames = client.getCallFrames();
if (frames.length > 0) {
  const variables = await client.getScopeVariables(frames[0].callFrameId);
  console.log(`🔴 Breakpoint hit!`);
  console.log(`Location: ${frames[0].functionName} line ${frames[0].location.lineNumber}`);
  console.log(`Variables:`, variables);

  // Evaluate lock state
  const lockState = await client.evaluate("lock.state");
  const lockOwner = await client.evaluate("lock.owner");
  const lockVersion = await client.evaluate("lock.version");

  console.log(`Lock state: ${lockState}`);
  console.log(`Lock owner: ${lockOwner}`);
  console.log(`Lock version: ${lockVersion}`);
}

// 5. Check results for race condition evidence
const successes = responses.filter(r => r.ok);
const results = await Promise.all(successes.map(r => r.json()));
const acquiredCount = results.filter(r => r.success).length;

console.log(`\n📊 Results:`);
console.log(`  Total requests: ${responses.length}`);
console.log(`  Successful acquires: ${acquiredCount}`);
console.log(`  Expected: 1`);
console.log(`  Race detected: ${acquiredCount > 1 ? '❌ YES' : '✅ NO'}`);

// 6. Examine code to understand the race window
const sourceCode = await Deno.readTextFile("path/to/async_file.ts");
// Look for:
// - Check-then-act patterns (TOCTOU)
// - Async gaps between read and write
// - Missing atomic operations
```
Race Condition Debugging Tips:
  1. Conditional breakpoints are essential - don't waste time on non-race executions
  2. Run many concurrent requests - races have low probability (1-5%)
  3. Watch for version/state changes - they indicate concurrent modification
  4. Look for async gaps - the time between check and update is the race window
  5. Check timing - use Date.now() to measure gaps between operations
Common Race Patterns:
```typescript
// BAD: Check-then-act with async gap
if (lock.state === "available") {  // ← Check
  await someAsyncOperation();      // ← GAP (race window!)
  lock.state = "acquired";         // ← Act
}

// GOOD: Atomic check-and-act
const wasAvailable = lock.state === "available";
lock.state = wasAvailable ? "acquired" : lock.state;
if (!wasAvailable) throw new Error("Lock unavailable");
```
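To see why the async gap matters, here is a self-contained simulation (no CDP or target app needed; the `lock` shape is illustrative): two concurrent callers both pass the check before either one writes, so both "acquire" the lock.

```typescript
// Illustrative lock object, not part of the helper scripts.
const lock: { state: "available" | "acquired" } = { state: "available" };

// BAD check-then-act: the await between check and write is the race window.
async function racyAcquire(): Promise<boolean> {
  if (lock.state === "available") {               // 1. check
    await new Promise((r) => setTimeout(r, 0));   // 2. async gap (race window!)
    lock.state = "acquired";                      // 3. act
    return true;
  }
  return false;
}

const results = await Promise.all([racyAcquire(), racyAcquire()]);
const acquired = results.filter(Boolean).length;
// Both callers run their check before either timeout fires, so this logs 2.
console.log(`Successful acquires: ${acquired} (a correct lock would allow exactly 1)`);
```

Because both calls reach the `await` synchronously before either continuation runs, the double-acquire here is deterministic; in a real server the window is narrow, which is why you need many concurrent requests to hit it.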

4. Examine Code

Read the relevant source files to understand the bug:
```typescript
// Read the problematic file
const code = await Deno.readTextFile("path/to/app.ts");
const lines = code.split("\n");

// Find the problematic pattern
for (let i = 0; i < lines.length; i++) {
  if (lines[i].includes("problematic_pattern")) {
    bc.addFinding(
      `Found issue at line ${i + 1}`,
      { line: i + 1, code: lines[i].trim() },
      "critical"
    );
  }
}
```

5. Analyze and Conclude

Based on your investigation data, determine the root cause. You can optionally record your conclusion:
```typescript
// Optional: Record your conclusion if using breadcrumbs
bc.addDecision(
  "Root cause identified",
  "Heap snapshot shows ArrayBuffer retention, code shows missing cleanup"
);
```
Most importantly: Understand the problem well enough to explain it clearly to the user.

6. Save Artifacts

```typescript
import { MarkdownReport } from "./scripts/report_gen.ts";

// Create output directory
await Deno.mkdir("investigation_output", { recursive: true });

// Generate comprehensive markdown report
const report = new MarkdownReport("Memory Leak Investigation", bc);

// Add summary
report.addSummary(
  "Upload handler retains ArrayBuffer objects in global array without cleanup."
);

// Add problem description
report.addProblem(
  "Memory usage grows continuously with each file upload and never stabilizes."
);

// Add findings
report.addFinding({
  description: "ArrayBuffer objects not being released",
  severity: "critical",
  details: `Heap grew ${growth_mb.toFixed(2)} MB after single upload. ` +
           `At this rate, production would hit OOM after ~${Math.floor(1024 / growth_mb)} uploads.`,
  evidence: [
    "Heap snapshot shows 500+ retained ArrayBuffers",
    `Global array 'leakedBuffers' grows by ~${(growth_mb * 1024).toFixed(0)} KB per upload`,
    "No cleanup code in success or error paths"
  ]
});

// Add code snippet showing the bug
report.addCodeSnippet(
  "typescript",
  `// Line 22-23 in app.ts:
const leakedBuffers: ArrayBuffer[] = [];  // Global array
leakedBuffers.push(buffer);  // Never cleared`,
  "Problematic code",
  "app.ts:22"
);

// Add root cause explanation
report.addRootCause(
  "Upload buffers retained in global leakedBuffers array",
  "The handleUpload() function pushes buffers to leakedBuffers[] for tracking, " +
  "but never removes them. Each upload adds ~45KB that persists for the app lifetime. " +
  "This is a 'retain-and-forget' anti-pattern."
);

// Add fix with code
report.addFix(
  "Remove the global array entirely. Process buffers immediately and discard them.",
  {
    language: "typescript",
    code: `// Remove the global array entirely
async function handleUpload(fileSize: number): Promise<string> {
  const buffer = new ArrayBuffer(fileSize);
  const result = await processBuffer(buffer);
  // Buffer goes out of scope here - eligible for GC
  return result;
}`,
    caption: "Recommended fix"
  }
);

// Add data table
report.addDataTable("Investigation Metrics", [
  { Metric: "Baseline heap", Value: `${baseline_size.toFixed(2)} MB` },
  { Metric: "After operation", Value: `${after_size.toFixed(2)} MB` },
  { Metric: "Growth", Value: `${growth_mb.toFixed(2)} MB` },
  { Metric: "Growth per upload", Value: `~${(growth_mb * 1024).toFixed(0)} KB` },
  { Metric: "Projected OOM", Value: `After ~${Math.floor(1024 / growth_mb)} uploads` }
]);

// Save report
await report.save("investigation_output/REPORT.md");

// Optionally save breadcrumbs if used
if (bc && bc.breadcrumbs.length > 0) {
  await bc.save("investigation_output/investigation.json");
}

// Close connection
await client.close();
```
typescript
import { MarkdownReport } from "./scripts/report_gen.ts";

// 创建输出目录
await Deno.mkdir("investigation_output", { recursive: true });

// 生成全面的Markdown报告
const report = new MarkdownReport("Memory Leak Investigation", bc);

// 添加摘要
report.addSummary(
  "Upload handler retains ArrayBuffer objects in global array without cleanup."
);

// 添加问题描述
report.addProblem(
  "Memory usage grows continuously with each file upload and never stabilizes."
);

// 添加发现
report.addFinding({
  description: "ArrayBuffer objects not being released",
  severity: "critical",
  details: `Heap grew ${growth_mb.toFixed(2)} MB after single upload. ` +
           `At this rate, production would hit OOM after ~${Math.floor(1024 / growth_mb)} uploads.`,
  evidence: [
    "Heap snapshot shows 500+ retained ArrayBuffers",
    `Global array 'leakedBuffers' grows by ~${(growth_mb * 1024).toFixed(0)} KB per upload`,
    "No cleanup code in success or error paths"
  ]
});

// 添加问题代码片段
report.addCodeSnippet(
  "typescript",
  `// Line 22-23 in app.ts:
const leakedBuffers: ArrayBuffer[] = [];  // Global array
leakedBuffers.push(buffer);  // Never cleared`,
  "Problematic code",
  "app.ts:22"
);

// 添加根因解释
report.addRootCause(
  "Event listeners not cleaned up in destroy()",
  "The handleUpload() function pushes buffers to leakedBuffers[] for tracking, " +
  "but never removes them. Each upload adds ~45KB that persists for the app lifetime. " +
  "This is a 'retain-and-forget' anti-pattern."
);

// 添加修复方案代码
report.addFix(
  "Remove the global array entirely. Process buffers immediately and discard them.",
  {
    language: "typescript",
    code: `// Remove the global array entirely
async function handleUpload(fileSize: number): Promise<string> {
  const buffer = new ArrayBuffer(fileSize);
  const result = await processBuffer(buffer);
  // Buffer goes out of scope here - eligible for GC
  return result;
}`,
    caption: "Recommended fix"
  }
);

// 添加数据表
report.addDataTable("Investigation Metrics", [
  { Metric: "Baseline heap", Value: `${baseline_size.toFixed(2)} MB` },
  { Metric: "After operation", Value: `${after_size.toFixed(2)} MB` },
  { Metric: "Growth", Value: `${growth_mb.toFixed(2)} MB` },
  { Metric: "Growth per upload", Value: `~${(growth_mb * 1024).toFixed(0)} KB` },
  { Metric: "Projected OOM", Value: `After ~${Math.floor(1024 / growth_mb)} uploads` }
]);

// 保存报告
await report.save("investigation_output/REPORT.md");

// 可选:如果使用了Breadcrumbs,保存它
if (bc && bc.breadcrumbs.length > 0) {
  await bc.save("investigation_output/investigation.json");
}

// 关闭连接
await client.close();
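The "Projected OOM" figure above divides an assumed heap budget of 1024 MB by the measured per-upload growth; the budget is an assumption (roughly V8's typical default old-space limit), not something the snapshot tells you. A minimal standalone sketch of that arithmetic:

```typescript
// Hypothetical helper: project how many uploads fit before OOM, assuming a
// fixed heap budget. 1024 MB mirrors a common V8 default old-space limit;
// adjust it to match your deployment's actual limit.
function projectedOomUploads(growthMbPerUpload: number, heapBudgetMb = 1024): number {
  if (growthMbPerUpload <= 0) return Infinity; // no measurable growth, no projection
  return Math.floor(heapBudgetMb / growthMbPerUpload);
}

console.log(projectedOomUploads(0.05)); // 20480 uploads at ~50 KB per upload
```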

7. Present Findings

7. 呈现排查结果

When investigation is complete, present your findings to the user as a clear, conversational summary:
Example:
I found the memory leak! 🎯

The issue is in `app.ts` at line 22. The `handleUpload()` function creates
ArrayBuffer objects but never releases them. Each upload adds ~45KB to a global
`leakedBuffers` array that never gets cleared.

Fix:
Remove the global array entirely and process buffers immediately:

```typescript
async function handleUpload(fileSize: number): Promise<string> {
  const buffer = new ArrayBuffer(fileSize);
  const result = await processBuffer(buffer);
  return result; // Buffer becomes eligible for GC
}
```

I've saved the investigation to investigation_output/:
  • REPORT.md - Full investigation report
  • baseline.heapsnapshot - Before state (8.8 MB)
  • after.heapsnapshot - After state (8.9 MB)
  • investigation.json - Investigation timeline

**Guidelines for presenting findings:**
- Be conversational and clear
- Lead with the root cause
- Explain WHY it's happening, not just WHAT
- Provide a specific, actionable fix
- Reference where artifacts are saved

**IMPORTANT**: Always save artifacts before presenting findings.
排查完成后,以清晰、口语化的总结向用户呈现结果:
示例:
I found the memory leak! 🎯

The issue is in `app.ts` at line 22. The `handleUpload()` function creates
ArrayBuffer objects but never releases them. Each upload adds ~45KB to a global
`leakedBuffers` array that never gets cleared.

Fix:
Remove the global array entirely and process buffers immediately:

```typescript
async function handleUpload(fileSize: number): Promise<string> {
  const buffer = new ArrayBuffer(fileSize);
  const result = await processBuffer(buffer);
  return result; // Buffer becomes eligible for GC
}
```

I've saved the investigation to investigation_output/:
  • REPORT.md - Full investigation report
  • baseline.heapsnapshot - Before state (8.8 MB)
  • after.heapsnapshot - After state (8.9 MB)
  • investigation.json - Investigation timeline

**结果呈现指南:**
- 口语化且清晰
- 先讲根因
- 解释问题发生的原因,而非仅描述现象
- 提供具体、可执行的修复方案
- 说明artifacts的保存位置

**重要提示**:在呈现结果前务必保存所有artifacts。

Complete Example: Memory Leak Investigation

完整示例:内存泄漏排查

Here's a complete end-to-end investigation you can use as a template:
```typescript
import { CDPClient } from "./scripts/cdp_client.ts";
import { captureSnapshot, compareSnapshots } from "./scripts/heap_analyzer.ts";
import { MarkdownReport } from "./scripts/report_gen.ts";
import { Breadcrumbs } from "./scripts/breadcrumbs.ts";

async function investigateMemoryLeak() {
  console.log("Starting memory leak investigation...");

  // Optional: Track investigation reasoning
  const bc = new Breadcrumbs("memory_leak_investigation");
  bc.addHypothesis(
    "Upload handler retains file buffers",
    "User reports memory grows with each upload"
  );

  // Connect
  const client = new CDPClient("127.0.0.1", 9229);
  await client.connect();
  await client.enableDebugger();
  console.log("Connected to Deno inspector");

  // Create output directory
  await Deno.mkdir("investigation_output", { recursive: true });

  // Baseline snapshot
  console.log("\nCapturing baseline...");
  const snapshot1 = await captureSnapshot(
    client,
    "investigation_output/baseline.heapsnapshot"
  );
  const baseline_size = (await Deno.stat("investigation_output/baseline.heapsnapshot")).size / (1024 * 1024);
  console.log(`Baseline: ${baseline_size.toFixed(2)} MB`);

  // Trigger leak
  console.log("\nTrigger the leak now (waiting 5 seconds)...");
  await new Promise(resolve => setTimeout(resolve, 5000));

  // Comparison snapshot
  console.log("Capturing comparison snapshot...");
  const snapshot2 = await captureSnapshot(
    client,
    "investigation_output/after.heapsnapshot"
  );
  const after_size = (await Deno.stat("investigation_output/after.heapsnapshot")).size / (1024 * 1024);

  // Analyze
  const growth_mb = after_size - baseline_size;
  console.log(`After: ${after_size.toFixed(2)} MB (grew ${growth_mb.toFixed(2)} MB)`);

  // Record finding
  bc.addFinding(
    "Heap grew significantly after upload",
    { growth_mb, baseline_size, after_size },
    "critical"
  );

  // Compare snapshots
  const comparison = compareSnapshots(snapshot1, snapshot2);
  console.log("\nTop growing objects:");
  console.table(comparison.slice(0, 10));

  // Examine source code
  console.log("\nExamining source code...");
  const appCode = await Deno.readTextFile("path/to/app.ts");
  // [Code inspection logic would go here]

  bc.addDecision(
    "Root cause: global array retains buffers",
    "Code shows leakedBuffers[] array with no cleanup"
  );

  // Generate comprehensive report
  const report = new MarkdownReport("Memory Leak Investigation", bc);

  report.addSummary(
    "Upload handler retains ArrayBuffer objects in global array without cleanup."
  );

  report.addProblem(
    "Memory grows continuously with each file upload and never stabilizes. " +
    "Production would hit OOM after ~20,000 uploads."
  );

  report.addFinding({
    description: "ArrayBuffer objects not being released",
    severity: "critical",
    details: `Heap grew ${growth_mb.toFixed(2)} MB after single upload.`,
    evidence: [
      "Heap snapshot shows retained ArrayBuffers",
      `Global array grows by ~${(growth_mb * 1024).toFixed(0)} KB per upload`,
      "No cleanup in error or success paths"
    ]
  });

  report.addCodeSnippet(
    "typescript",
    `const leakedBuffers: ArrayBuffer[] = [];
async function handleUpload(fileSize: number) {
  const buffer = new ArrayBuffer(fileSize);
  leakedBuffers.push(buffer);  // BUG: Never cleared!
  await processBuffer(buffer);
}`,
    "Problematic code",
    "app.ts:22"
  );

  report.addRootCause(
    "Global array retains all buffers indefinitely",
    "The handleUpload() function pushes buffers to leakedBuffers[] but never " +
    "removes them. This is a 'retain-and-forget' anti-pattern."
  );

  report.addFix(
    "Remove the global array entirely. Process buffers immediately and discard.",
    {
      language: "typescript",
      code: `async function handleUpload(fileSize: number): Promise<string> {
  const buffer = new ArrayBuffer(fileSize);
  const result = await processBuffer(buffer);
  return result; // Buffer becomes eligible for GC
}`,
      caption: "Recommended fix"
    }
  );

  report.addDataTable("Metrics", [
    { Metric: "Baseline heap", Value: `${baseline_size.toFixed(2)} MB` },
    { Metric: "After operation", Value: `${after_size.toFixed(2)} MB` },
    { Metric: "Growth", Value: `${growth_mb.toFixed(2)} MB` },
    { Metric: "Projected OOM", Value: `~${Math.floor(1024 / growth_mb)} uploads` }
  ]);

  await report.save("investigation_output/REPORT.md");
  await bc.save("investigation_output/investigation.json");
  await client.close();

  console.log("\n✓ Investigation complete! See investigation_output/REPORT.md");
}

// Run it
await investigateMemoryLeak();
```
以下是完整的端到端排查示例,可作为模板使用:
```typescript
import { CDPClient } from "./scripts/cdp_client.ts";
import { captureSnapshot, compareSnapshots } from "./scripts/heap_analyzer.ts";
import { MarkdownReport } from "./scripts/report_gen.ts";
import { Breadcrumbs } from "./scripts/breadcrumbs.ts";

async function investigateMemoryLeak() {
  console.log("Starting memory leak investigation...");

  // 可选:追踪排查推理过程
  const bc = new Breadcrumbs("memory_leak_investigation");
  bc.addHypothesis(
    "Upload handler retains file buffers",
    "User reports memory grows with each upload"
  );

  // 连接
  const client = new CDPClient("127.0.0.1", 9229);
  await client.connect();
  await client.enableDebugger();
  console.log("Connected to Deno inspector");

  // 创建输出目录
  await Deno.mkdir("investigation_output", { recursive: true });

  // 基准快照
  console.log("\nCapturing baseline...");
  const snapshot1 = await captureSnapshot(
    client,
    "investigation_output/baseline.heapsnapshot"
  );
  const baseline_size = (await Deno.stat("investigation_output/baseline.heapsnapshot")).size / (1024 * 1024);
  console.log(`Baseline: ${baseline_size.toFixed(2)} MB`);

  // 触发泄漏
  console.log("\nTrigger the leak now (waiting 5 seconds)...");
  await new Promise(resolve => setTimeout(resolve, 5000));

  // 对比快照
  console.log("Capturing comparison snapshot...");
  const snapshot2 = await captureSnapshot(
    client,
    "investigation_output/after.heapsnapshot"
  );
  const after_size = (await Deno.stat("investigation_output/after.heapsnapshot")).size / (1024 * 1024);

  // 分析
  const growth_mb = after_size - baseline_size;
  console.log(`After: ${after_size.toFixed(2)} MB (grew ${growth_mb.toFixed(2)} MB)`);

  // 记录发现
  bc.addFinding(
    "Heap grew significantly after upload",
    { growth_mb, baseline_size, after_size },
    "critical"
  );

  // 对比快照
  const comparison = compareSnapshots(snapshot1, snapshot2);
  console.log("\nTop growing objects:");
  console.table(comparison.slice(0, 10));

  // 检查源码
  console.log("\nExamining source code...");
  const appCode = await Deno.readTextFile("path/to/app.ts");
  // [代码检查逻辑]

  bc.addDecision(
    "Root cause: global array retains buffers",
    "Code shows leakedBuffers[] array with no cleanup"
  );

  // 生成全面报告
  const report = new MarkdownReport("Memory Leak Investigation", bc);

  report.addSummary(
    "Upload handler retains ArrayBuffer objects in global array without cleanup."
  );

  report.addProblem(
    "Memory grows continuously with each file upload and never stabilizes. " +
    "Production would hit OOM after ~20,000 uploads."
  );

  report.addFinding({
    description: "ArrayBuffer objects not being released",
    severity: "critical",
    details: `Heap grew ${growth_mb.toFixed(2)} MB after single upload.`,
    evidence: [
      "Heap snapshot shows retained ArrayBuffers",
      `Global array grows by ~${(growth_mb * 1024).toFixed(0)} KB per upload`,
      "No cleanup in error or success paths"
    ]
  });

  report.addCodeSnippet(
    "typescript",
    `const leakedBuffers: ArrayBuffer[] = [];
async function handleUpload(fileSize: number) {
  const buffer = new ArrayBuffer(fileSize);
  leakedBuffers.push(buffer);  // BUG: Never cleared!
  await processBuffer(buffer);
}`,
    "Problematic code",
    "app.ts:22"
  );

  report.addRootCause(
    "Global array retains all buffers indefinitely",
    "The handleUpload() function pushes buffers to leakedBuffers[] but never " +
    "removes them. This is a 'retain-and-forget' anti-pattern."
  );

  report.addFix(
    "Remove the global array entirely. Process buffers immediately and discard.",
    {
      language: "typescript",
      code: `async function handleUpload(fileSize: number): Promise<string> {
  const buffer = new ArrayBuffer(fileSize);
  const result = await processBuffer(buffer);
  return result; // Buffer becomes eligible for GC
}`,
      caption: "Recommended fix"
    }
  );

  report.addDataTable("Metrics", [
    { Metric: "Baseline heap", Value: `${baseline_size.toFixed(2)} MB` },
    { Metric: "After operation", Value: `${after_size.toFixed(2)} MB` },
    { Metric: "Growth", Value: `${growth_mb.toFixed(2)} MB` },
    { Metric: "Projected OOM", Value: `~${Math.floor(1024 / growth_mb)} uploads` }
  ]);

  await report.save("investigation_output/REPORT.md");
  await bc.save("investigation_output/investigation.json");
  await client.close();

  console.log("\n✓ Investigation complete! See investigation_output/REPORT.md");
}

// 运行
await investigateMemoryLeak();
```

API Reference

API参考

CDPClient Methods

CDPClient方法

```typescript
const client = new CDPClient("127.0.0.1", 9229);
await client.connect();

// Debugging
await client.enableDebugger();
await client.setBreakpointByUrl("file:///app.ts", 42);
await client.resume();
await client.stepOver();

// Inspection
const frames = client.getCallFrames();
const variables = await client.getScopeVariables(frameId);
const result = await client.evaluate("expression");

// Profiling
const snapshotJson = await client.takeHeapSnapshot();
await client.startProfiling();
const profileData = await client.stopProfiling();

await client.close();
```
```typescript
const client = new CDPClient("127.0.0.1", 9229);
await client.connect();

// 调试
await client.enableDebugger();
await client.setBreakpointByUrl("file:///app.ts", 42);
await client.resume();
await client.stepOver();

// 检查
const frames = client.getCallFrames();
const variables = await client.getScopeVariables(frameId);
const result = await client.evaluate("expression");

// 性能分析
const snapshotJson = await client.takeHeapSnapshot();
await client.startProfiling();
const profileData = await client.stopProfiling();

await client.close();
```
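Connecting on 127.0.0.1:9229 assumes a Deno process is already listening there. One way to start the target app with the inspector enabled (`app.ts` stands in for your entry point):

```shell
# Start the target app with the V8 inspector on the default host:port.
# --inspect-brk pauses execution until a client attaches, which guarantees
# the baseline snapshot is captured before any leak-triggering work runs;
# use --inspect instead if the app should run immediately.
deno run --allow-all --inspect-brk=127.0.0.1:9229 app.ts
```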

Breadcrumbs Methods (Optional)

Breadcrumbs方法(可选)

Only use for complex investigations where tracking your thought process adds value.
```typescript
const bc = new Breadcrumbs();

// Track major milestones only
bc.addHypothesis(description, rationale);
bc.addFinding(description, data, severity); // severity: "info" | "warning" | "critical"
bc.addDecision(description, rationale);

// Save for later review
await bc.save("investigation.json");
```
仅在复杂排查场景中使用,此时追踪思考过程能带来价值。
```typescript
const bc = new Breadcrumbs();

// 仅追踪关键里程碑
bc.addHypothesis(description, rationale);
bc.addFinding(description, data, severity); // severity: "info" | "warning" | "critical"
bc.addDecision(description, rationale);

// 保存以便后续回顾
await bc.save("investigation.json");
```

HeapSnapshot Methods

HeapSnapshot方法

```typescript
import { loadSnapshot, compareSnapshots, findLargestObjects } from "./scripts/heap_analyzer.ts";

const snapshot = await loadSnapshot("heap.heapsnapshot");
const summary = snapshot.getNodeSizeSummary();
const nodes = snapshot.getNodesByType("Array");
const path = snapshot.findRetainingPath(nodeId);

// Compare two snapshots
const comparison = compareSnapshots(before, after);

// Find largest objects
const largest = findLargestObjects(snapshot);
```
```typescript
import { loadSnapshot, compareSnapshots, findLargestObjects } from "./scripts/heap_analyzer.ts";

const snapshot = await loadSnapshot("heap.heapsnapshot");
const summary = snapshot.getNodeSizeSummary();
const nodes = snapshot.getNodesByType("Array");
const path = snapshot.findRetainingPath(nodeId);

// 对比两个快照
const comparison = compareSnapshots(before, after);

// 找到最大的对象
const largest = findLargestObjects(snapshot);
```
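To make the comparison step concrete, here is a standalone sketch of the *kind* of diff `compareSnapshots` produces; this is an illustrative reimplementation over plain per-type size maps, not the library code, and the type names and byte counts are made up:

```typescript
// Hypothetical sketch: given per-type retained sizes (bytes) before and
// after an operation, rank object types by how much they grew.
type SizeByType = Record<string, number>;

function diffBySize(before: SizeByType, after: SizeByType) {
  const types = new Set([...Object.keys(before), ...Object.keys(after)]);
  return [...types]
    .map((type) => ({ type, growth: (after[type] ?? 0) - (before[type] ?? 0) }))
    .filter((entry) => entry.growth > 0) // keep only types that grew
    .sort((a, b) => b.growth - a.growth); // biggest growth first
}

const top = diffBySize(
  { ArrayBuffer: 1_000, Array: 500 },
  { ArrayBuffer: 46_000, Array: 600, String: 100 },
);
console.log(top[0]); // { type: "ArrayBuffer", growth: 45000 }
```

In a leak investigation, the top entry of such a diff is usually the first thing to chase a retaining path for.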

CPUProfile Methods

CPUProfile方法

```typescript
import { loadProfile, analyzeHotPaths, detectAsyncIssues } from "./scripts/cpu_profiler.ts";

const profile = await loadProfile("profile.cpuprofile");
const hot = profile.getHotFunctions(); // Array of hot functions
const issues = detectAsyncIssues(profile);
const paths = analyzeHotPaths(profile);
```
```typescript
import { loadProfile, analyzeHotPaths, detectAsyncIssues } from "./scripts/cpu_profiler.ts";

const profile = await loadProfile("profile.cpuprofile");
const hot = profile.getHotFunctions(); // 热点函数数组
const issues = detectAsyncIssues(profile);
const paths = analyzeHotPaths(profile);
```
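For intuition on what "hot functions" means here: a `.cpuprofile` records sample hit counts per call-tree node, and self time is roughly hits times the sampling interval. The sketch below is a simplified standalone illustration (the real format nests `functionName` inside `callFrame`, and the node shape and interval are assumptions):

```typescript
// Hypothetical sketch of hot-function ranking from sampled profile data.
// Self time per function ≈ hitCount * sampling interval (µs), shown in ms.
interface ProfileNode {
  id: number;
  functionName: string;
  hitCount: number;
}

function hotFunctions(nodes: ProfileNode[], intervalUs = 1000) {
  return nodes
    .map((n) => ({
      fn: n.functionName || "(anonymous)",
      selfMs: (n.hitCount * intervalUs) / 1000,
    }))
    .filter((entry) => entry.selfMs > 0) // drop nodes never sampled
    .sort((a, b) => b.selfMs - a.selfMs); // hottest first
}

const ranked = hotFunctions([
  { id: 1, functionName: "parseBody", hitCount: 420 },
  { id: 2, functionName: "hashPassword", hitCount: 3100 },
  { id: 3, functionName: "", hitCount: 0 },
]);
console.log(ranked[0]); // { fn: "hashPassword", selfMs: 3100 }
```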

Key Principles

核心原则

  1. Always use pre-written scripts - Never write your own CDP client
  2. Use breadcrumbs sparingly - Track major milestones, not every action
  3. Save artifacts - Snapshots, profiles, investigation timeline
  4. Communicate clearly - Explain what you're doing and why
  5. Be methodical - Form hypothesis → test → analyze → conclude
  1. 始终使用预编写脚本 - 永远不要自行编写CDP客户端
  2. 谨慎使用Breadcrumbs - 仅追踪关键里程碑,而非每一个操作
  3. 保存artifacts - 快照、性能分析数据、排查时间线
  4. 清晰沟通 - 解释你正在做什么以及原因
  5. 系统化排查 - 形成假设 → 测试 → 分析 → 得出结论

Common Mistakes to Avoid

需避免的常见错误

❌ DON'T write a new CDP WebSocket client
❌ DON'T parse heap snapshots manually
❌ DON'T write custom profiling code
❌ DON'T use breadcrumbs for every small action
❌ DON'T forget to save artifacts

✅ DO use CDPClient from cdp_client.ts
✅ DO use HeapSnapshot from heap_analyzer.ts
✅ DO use CPUProfile from cpu_profiler.ts
✅ DO use breadcrumbs only for major milestones
✅ DO save snapshots and investigation timeline

Remember: All the infrastructure is already built. Your job is to use these tools to investigate methodically, track your findings, and present clear results to the user.
❌ 请勿编写新的CDP WebSocket客户端
❌ 请勿手动解析堆快照
❌ 请勿编写自定义性能分析代码
❌ 请勿使用Breadcrumbs记录每一个小操作
❌ 请勿忘记保存artifacts

✅ 使用cdp_client.ts中的CDPClient类
✅ 使用heap_analyzer.ts中的HeapSnapshot类
✅ 使用cpu_profiler.ts中的CPUProfile类
✅ 仅在关键里程碑使用Breadcrumbs
✅ 保存快照和排查时间线

记住:所有基础功能已构建完成。你的工作是系统化地使用这些工具进行排查,追踪发现,并向用户呈现清晰的结果。