Session Replay Skill


Purpose


This skill analyzes claude-trace JSONL files to provide insights into Claude Code session health, token usage patterns, error frequencies, and agent effectiveness. It complements the `/transcripts` command by focusing on API-level trace data rather than conversation transcripts.

When to Use This Skill


  • Session debugging: Diagnose why a session was slow or failed
  • Token analysis: Understand token consumption patterns
  • Error patterns: Identify recurring failures across sessions
  • Performance optimization: Find bottlenecks in tool usage
  • Agent effectiveness: Measure which agents/tools are most productive

Quick Start


Analyze Latest Session


User: Analyze my latest session health
I'll analyze the most recent trace file:

```python
from pathlib import Path

# Read latest trace file from .claude-trace/
trace_dir = Path(".claude-trace")
trace_files = sorted(trace_dir.glob("*.jsonl"), key=lambda f: f.stat().st_mtime)
latest = trace_files[-1] if trace_files else None

# Parse and analyze
if latest:
    analysis = analyze_trace_file(latest)
    print(format_session_report(analysis))
```

Compare Multiple Sessions


User: Compare token usage across my last 5 sessions
I'll aggregate metrics across sessions:

```python
from pathlib import Path

# Filenames embed timestamps, so lexicographic order is chronological
trace_files = sorted(Path(".claude-trace").glob("*.jsonl"))[-5:]
comparison = compare_sessions(trace_files)
print(format_comparison_table(comparison))
```

Actions


Action: `health`


Analyze session health metrics from a trace file.
What to do:
  1. Read the trace file (JSONL format)
  2. Extract API requests and responses
  3. Calculate metrics:
    • Total tokens (input/output)
    • Request count and timing
    • Error rate
    • Tool usage distribution
  4. Generate health report
Metrics to extract:

From each JSONL line containing a request/response pair:

```json
{
  "timestamp": "...",
  "request": {
    "method": "POST",
    "url": "https://api.anthropic.com/v1/messages",
    "body": {
      "model": "claude-...",
      "messages": [...],
      "tools": [...]
    }
  },
  "response": {
    "usage": {"input_tokens": N, "output_tokens": N},
    "content": [...],
    "stop_reason": "..."
  }
}
```

Output format:

Session Health Report
=====================

File: log-2025-11-23-19-32-36.jsonl
Duration: 45 minutes
Token Usage:
  • Input: 125,432 tokens
  • Output: 34,521 tokens
  • Total: 159,953 tokens
  • Efficiency: 27.5% output ratio
Request Stats:
  • Total requests: 23
  • Average latency: 2.3s
  • Errors: 2 (8.7%)
Tool Usage:
  • Read: 45 calls
  • Edit: 12 calls
  • Bash: 8 calls
  • Grep: 15 calls
Health Score: 82/100 (Good)
  • Minor issue: 2 errors detected

Action: `errors`


Identify error patterns across sessions.
What to do:
  1. Scan trace files for error responses
  2. Categorize errors by type
  3. Identify recurring patterns
  4. Suggest fixes
Error categories to detect:
  • Rate limit errors (429)
  • Token limit exceeded
  • Tool execution failures
  • Timeout errors
  • API errors
Output format:
Error Analysis
==============
Sessions analyzed: 5
Total errors: 12

Error Categories:
1. Rate limit (429): 5 occurrences
   - Recommendation: Add delays between requests

2. Token limit: 3 occurrences
   - Recommendation: Use context management skill

3. Tool failures: 4 occurrences
   - Bash timeout: 2
   - File not found: 2
   - Recommendation: Check paths before operations
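The categorization step could be sketched as follows. The error shapes checked here (a `status_code` field, an `error` payload with a `message`) are assumptions about claude-trace output, not a documented schema:

```python
from typing import Optional

def categorize_error(entry: dict) -> Optional[str]:
    """Bucket a trace entry's error into the categories above."""
    response = entry.get("response", {})
    error = response.get("error")
    if not error:
        return None
    # Error may be a dict with a message, or a bare string
    message = (error.get("message", "") if isinstance(error, dict) else str(error)).lower()
    if response.get("status_code") == 429 or "rate limit" in message:
        return "rate_limit"
    if "token" in message and ("limit" in message or "maximum" in message):
        return "token_limit"
    if "timeout" in message or "timed out" in message:
        return "timeout"
    if "tool" in message:
        return "tool_failure"
    return "api_error"
```

Counting the categories across all entries of all scanned sessions then yields the occurrence totals in the report above.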

Action: `compare`


Compare metrics across multiple sessions.
What to do:
  1. Load multiple trace files
  2. Extract comparable metrics
  3. Calculate trends
  4. Identify anomalies
Output format:
Session Comparison
==================
                    Session 1   Session 2   Session 3   Trend
Tokens (total)      150K        180K        120K        -17%
Requests            25          30          18          -28%
Errors              2           0           1           stable
Duration (min)      45          60          30          -33%
Efficiency          0.27        0.32        0.35        +7%
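The skill text doesn't pin down the trend formula; one minimal sketch computes the percent change from the first session to the last, labelling small moves stable:

```python
def percent_trend(values: list) -> str:
    """First-to-last percent change for one comparison row.

    Illustrative assumption: changes under 5% count as "stable".
    """
    if len(values) < 2 or values[0] == 0:
        return "n/a"
    change = round(100 * (values[-1] - values[0]) / values[0])
    if abs(change) < 5:
        return "stable"
    return f"{change:+d}%"
```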

Action: `tools`


Analyze tool usage patterns.
What to do:
  1. Extract tool calls from traces
  2. Calculate frequency and timing
  3. Identify inefficient patterns
  4. Suggest optimizations
Patterns to detect:
  • Sequential calls that could be parallel
  • Repeated reads of same file
  • Excessive grep/glob calls
  • Unused tool results
Output format:
Tool Usage Analysis
===================
Tool          Calls   Avg Time   Success Rate
Read          45      0.1s       100%
Edit          12      0.3s       92%
Bash          8       1.2s       75%
Grep          15      0.2s       100%
Task          3       45s        100%

Optimization Opportunities:
1. 5 Read calls to same file within 2 minutes
   - Consider caching strategy

2. 3 sequential Bash calls could be parallelized
   - Use multiple Bash calls in single message
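The repeated-reads detection could be sketched as below. The `(timestamp, tool_name, file_path)` tuple shape is an assumed intermediate representation extracted from the trace, not part of the trace format itself:

```python
from collections import defaultdict
from datetime import datetime, timedelta
from typing import List, Tuple

def find_repeated_reads(
    calls: List[Tuple[datetime, str, str]],
    window_minutes: int = 2,
    min_repeats: int = 5,
) -> List[Tuple[str, int]]:
    """Flag files Read at least `min_repeats` times within one window."""
    reads = defaultdict(list)
    for ts, tool, path in calls:
        if tool == "Read":
            reads[path].append(ts)

    window = timedelta(minutes=window_minutes)
    flagged = []
    for path, stamps in reads.items():
        stamps.sort()
        # Slide a time window anchored at each read; take the densest window
        best = max(
            (sum(1 for t in stamps if start <= t <= start + window)
             for start in stamps),
            default=0,
        )
        if best >= min_repeats:
            flagged.append((path, best))
    return flagged
```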

Implementation Notes


Parsing JSONL Traces


Claude-trace files are JSONL format with request/response pairs:
```python
import json
from pathlib import Path
from typing import Dict, List, Any

def parse_trace_file(path: Path) -> List[Dict[str, Any]]:
    """Parse a claude-trace JSONL file."""
    entries = []
    with open(path) as f:
        for line in f:
            if line.strip():
                try:
                    entry = json.loads(line)
                    entries.append(entry)
                except json.JSONDecodeError:
                    continue
    return entries

def extract_metrics(entries: List[Dict]) -> Dict[str, Any]:
    """Extract session metrics from trace entries."""
    metrics = {
        "total_input_tokens": 0,
        "total_output_tokens": 0,
        "request_count": 0,
        "error_count": 0,
        "tool_usage": {},
        "timestamps": [],
    }

    for entry in entries:
        if "request" in entry:
            metrics["request_count"] += 1
            metrics["timestamps"].append(entry.get("timestamp", 0))

        if "response" in entry:
            usage = entry["response"].get("usage", {})
            metrics["total_input_tokens"] += usage.get("input_tokens", 0)
            metrics["total_output_tokens"] += usage.get("output_tokens", 0)

            # Check for errors
            if entry["response"].get("error"):
                metrics["error_count"] += 1

            # Count tool invocations from tool_use blocks in the response
            # (the request body's "tools" list only declares available tools,
            # so counting it would measure availability, not usage)
            for block in entry["response"].get("content", []) or []:
                if isinstance(block, dict) and block.get("type") == "tool_use":
                    name = block.get("name", "unknown")
                    metrics["tool_usage"][name] = metrics["tool_usage"].get(name, 0) + 1

    return metrics
```

Locating Trace Files


```python
from pathlib import Path
from typing import List

def find_trace_files(trace_dir: str = ".claude-trace") -> List[Path]:
    """Find all trace files, sorted by modification time."""
    trace_path = Path(trace_dir)
    if not trace_path.exists():
        return []
    return sorted(
        trace_path.glob("*.jsonl"),
        key=lambda f: f.stat().st_mtime,
        reverse=True,  # Most recent first
    )
```

Error Handling


Handle common error scenarios gracefully:
```python
import json
from pathlib import Path
from typing import Dict, List, Tuple

def safe_parse_trace_file(path: Path) -> Tuple[List[Dict], List[str]]:
    """Parse trace file with error collection for malformed lines.

    Returns:
        Tuple of (valid_entries, error_messages)
    """
    entries = []
    errors = []

    if not path.exists():
        return [], [f"Trace file not found: {path}"]

    try:
        with open(path) as f:
            for line_num, line in enumerate(f, 1):
                if not line.strip():
                    continue
                try:
                    entry = json.loads(line)
                    entries.append(entry)
                except json.JSONDecodeError as e:
                    errors.append(f"Line {line_num}: Invalid JSON - {e}")
    except PermissionError:
        return [], [f"Permission denied: {path}"]
    except UnicodeDecodeError:
        return [], [f"Encoding error: {path} (expected UTF-8)"]

    return entries, errors


def format_error_report(errors: List[str], path: Path) -> str:
    """Format error report for user display."""
    if not errors:
        return ""

    report = f"""
Trace File Issues
=================
File: {path.name}
Issues found: {len(errors)}

"""
    for error in errors[:10]:  # Limit to first 10
        report += f"- {error}\n"

    if len(errors) > 10:
        report += f"\n... and {len(errors) - 10} more issues"

    return report
```
Common error scenarios:

| Scenario | Cause | Handling |
|---|---|---|
| Empty file | Session had no API calls | Report "No data to analyze" |
| Malformed JSON | Corrupted trace or interrupted write | Skip line, count in error report |
| Missing fields | Older trace format | Use `.get()` with defaults |
| Permission denied | File locked by another process | Clear error message, suggest retry |
| Encoding error | Non-UTF-8 characters | Report encoding issue |

Integration with Existing Tools


Tool Selection Matrix


| Need | Use This | Why |
|---|---|---|
| "Why was my session slow?" | session-replay | API latency and token metrics |
| "What did I discuss last session?" | /transcripts | Conversation content |
| "Extract learnings from sessions" | CodexTranscriptsBuilder | Knowledge extraction |
| "Reduce my token usage" | session-replay + context_management | Metrics + optimization |
| "Resume interrupted work" | /transcripts | Context restoration |

vs. /transcripts Command


/transcripts (conversation management):
  • Focuses on conversation content
  • Restores session context
  • Used for context preservation
  • Trigger: "restore session", "continue work", "what was I doing"
session-replay skill (API-level analysis):
  • Focuses on API metrics
  • Analyzes performance and errors
  • Used for debugging and optimization
  • Trigger: "session health", "token usage", "why slow", "debug session"

vs. CodexTranscriptsBuilder


CodexTranscriptsBuilder (knowledge extraction):
  • Extracts patterns from conversations
  • Builds learning corpus
  • Knowledge-focused
  • Trigger: "extract patterns", "build knowledge base", "learn from sessions"
session-replay skill (metrics analysis):
  • Extracts performance metrics
  • Identifies technical issues
  • Operations-focused
  • Trigger: "performance metrics", "error patterns", "tool efficiency"

Combined Workflows


Workflow 1: Diagnose and Fix Token Issues
1. session-replay: Analyze token usage patterns (health action)
2. Identify high-token operations
3. context_management skill: Apply proactive trimming
4. session-replay: Compare before/after sessions (compare action)
Workflow 2: Post-Incident Analysis
1. session-replay: Identify error patterns (errors action)
2. /transcripts: Review conversation context around errors
3. session-replay: Check tool usage around failures (tools action)
4. Document findings in DISCOVERIES.md
Workflow 3: Performance Baseline
1. session-replay: Analyze 5-10 recent sessions (compare action)
2. Establish baseline metrics (tokens, latency, errors)
3. Track deviations from baseline over time
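Step 3 of the baseline workflow can be sketched as a simple deviation check. The 2-sigma threshold and the report wording are illustrative choices, not part of the skill:

```python
from statistics import mean, pstdev

def deviation_report(baseline: list, current: float, label: str) -> str:
    """Flag when a current metric deviates more than 2 sigma from baseline sessions."""
    mu = mean(baseline)
    sigma = pstdev(baseline)
    if sigma and abs(current - mu) > 2 * sigma:
        return f"{label}: {current} deviates from baseline {mu:.1f} (±{sigma:.1f})"
    return f"{label}: within baseline"
```

For example, feed it per-session token totals from the `compare` action and check each new session against the established range.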

Storage Locations


  • Trace files: `.claude-trace/*.jsonl`
  • Session logs: `~/.amplihack/.claude/runtime/logs/<session_id>/`
  • Generated reports: Output directly (no persistent storage needed)

Philosophy Alignment


Ruthless Simplicity


  • Single-purpose: Analyze trace files only - no session management, no transcript editing
  • No external dependencies: Uses only Python standard library (json, pathlib, datetime)
  • Direct file parsing: No ORM, no database, no complex abstractions
  • Present-moment focus: Analyzes what exists now, no future-proofing

Zero-BS Implementation


  • All functions work completely: Every code example in this skill runs without modification
  • Real parsing, real metrics: No mocked data, no placeholder calculations
  • No stubs or placeholders: If a feature is documented, it works
  • Fail fast on errors: Clear error messages, no silent failures

Brick Philosophy


  • Self-contained analysis: All functionality in this single skill
  • Clear inputs (trace files) and outputs (reports): No hidden state or side effects
  • Regeneratable from this specification: This SKILL.md is the complete source of truth
  • Isolated responsibility: Session analysis only - doesn't modify files or trigger actions

Limitations


This skill CANNOT:
  • Modify trace files: Read-only analysis, no editing or deletion
  • Generate traces: Use the `claude-trace` npm package to create trace files
  • Restore sessions: Use the `/transcripts` command for session restoration
  • Monitor in real time: Analyzes completed sessions, not live tracking
  • Analyze across projects: Analyzes traces in the current project only
  • Parse non-JSONL formats: Only the claude-trace JSONL format is supported
  • Access remote traces: Local filesystem only, no cloud storage

Tips for Effective Analysis


  1. Start with a health check: Run the `health` action first
  2. Look for patterns: Use `errors` to find recurring issues
  3. Optimize hot spots: Use `tools` to find inefficiencies
  4. Track trends: Use `compare` across sessions
  5. Combine with transcripts: Use `/transcripts` for context

Common Patterns


Pattern 1: Debug Slow Session


User: My last session was really slow, analyze it

1. Run health action on latest trace
2. Check request latencies
3. Identify tool bottlenecks
4. Report findings with recommendations

Pattern 2: Reduce Token Usage


User: I'm hitting token limits, help me understand usage

1. Compare token usage across sessions
2. Identify high-token operations
3. Suggest context management strategies
4. Recommend workflow optimizations

Pattern 3: Fix Recurring Errors


User: I keep getting errors, find the pattern

1. Run errors action across last 10 sessions
2. Categorize and count error types
3. Identify root causes
4. Provide targeted fixes

Resources


  • Trace directory: `.claude-trace/`
  • Transcripts command: `/transcripts`
  • Context management skill: `context-management`
  • Philosophy: `~/.amplihack/.claude/context/PHILOSOPHY.md`

Troubleshooting


No trace files found


Symptom: "No trace files in .claude-trace/"
Causes and fixes:
  1. claude-trace not enabled: Set `AMPLIHACK_USE_TRACE=1` before starting the session
  2. Wrong directory: Check that you're in the project root containing `.claude-trace/`
  3. Fresh project: Run a session with tracing enabled first

Incomplete metrics


Symptom: Missing token counts or zero values
Causes and fixes:
  1. Interrupted session: Trace may be incomplete if session crashed
  2. Streaming responses: Some streaming modes don't capture full metrics
  3. Older trace format: Upgrade claude-trace to latest version

Health score seems wrong


Symptom: Score doesn't match session experience
Understanding the score:
  • 90-100: Excellent - low errors, good efficiency
  • 70-89: Good - minor issues detected
  • 50-69: Fair - significant issues worth investigating
  • Below 50: Poor - likely errors or inefficiencies
Factors in health score:
  • Error rate (40% weight)
  • Token efficiency ratio (30% weight)
  • Request success rate (20% weight)
  • Tool success rate (10% weight)
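A minimal scoring sketch under these weights follows. The sub-score normalizations (a 5x error-rate penalty, a 0.3 target output ratio) are illustrative assumptions; only the weights come from the skill:

```python
def compute_health_score(metrics: dict) -> int:
    """Combine the weighted factors above into a 0-100 score."""
    requests = max(metrics.get("request_count", 0), 1)
    error_rate = metrics.get("error_count", 0) / requests
    total = metrics.get("total_input_tokens", 0) + metrics.get("total_output_tokens", 0)
    efficiency = metrics.get("total_output_tokens", 0) / max(total, 1)

    error_score = max(0.0, 1.0 - 5 * error_rate)        # error rate, 40%
    efficiency_score = min(efficiency / 0.3, 1.0)       # token efficiency, 30%
    request_score = 1.0 - error_rate                    # request success, 20%
    tool_score = metrics.get("tool_success_rate", 1.0)  # tool success, 10%

    return round(100 * (0.4 * error_score + 0.3 * efficiency_score
                        + 0.2 * request_score + 0.1 * tool_score))
```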

Large trace files


Symptom: Analysis is slow or memory-intensive
Solutions:
  1. Analyze a specific time range instead of the full file
  2. Use the `tools` action for targeted analysis
  3. Archive old traces: `mv .claude-trace/old-*.jsonl .claude-trace/archive/`
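For the time-range approach, the file can be streamed line by line instead of loaded whole. This sketch assumes ISO-8601 timestamps in each entry and skips malformed lines so a partial analysis still completes:

```python
import json
from datetime import datetime
from pathlib import Path
from typing import Iterator

def iter_entries_in_range(path: Path, start: datetime, end: datetime) -> Iterator[dict]:
    """Yield entries whose timestamp falls in [start, end), one line at a time."""
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            try:
                entry = json.loads(line)
                raw = str(entry.get("timestamp", "")).replace("Z", "+00:00")
                ts = datetime.fromisoformat(raw)
            except (json.JSONDecodeError, ValueError):
                continue  # Skip corrupted or untimestamped lines
            if start <= ts < end:
                yield entry
```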

Remember


This skill provides session-level debugging and optimization insights. It complements transcript management with API-level visibility. Use it to diagnose issues, optimize workflows, and understand Claude Code behavior patterns.
Key Takeaway: Trace files contain the raw truth about session performance. This skill extracts actionable insights from that data.