Skill Auditor
Portfolio-level skill routing analysis and optimization. Analyzes real session
transcripts to find routing errors, attention competition, and coverage gaps,
then generates an interactive HTML report.
Prerequisites
- `pip install tiktoken` (optional — falls back to character-based estimation)
- No external API keys required. Analysis uses Claude sub-agents.
Workflow
Run all steps sequentially. The coordinator (you) manages data flow between
scripts and sub-agents.
Step 0: Initial Questions
Before starting, ask the user two questions using AskUserQuestion:
- Report language: "レポートの言語は? (e.g. 日本語, English, 中文, ...)" — Free text input. Default to the user's conversation language if not specified.
- Scope: "分析範囲はどうしますか?" — Cross-project (all projects) / Current project only
Store these choices. Pass the language choice to all sub-agents as an
instruction prefix: "Write all output text (health_assessment, detail, reason,
suggested_fix, etc.) in [chosen language]."
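Building that prefix is a one-liner; the exact wording below is illustrative, not the skill's canonical phrasing:

```python
def language_prefix(language):
    """Instruction prefix prepended to every sub-agent prompt (wording illustrative)."""
    return (f"Write all output text (health_assessment, detail, reason, "
            f"suggested_fix, etc.) in {language}.")
```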
For cross-project mode, use `"all"` as the project_path argument in Step 3.
For current-project mode, use `--cwd "$(pwd)"`.
Step 1: Detect Project
If cross-project mode was selected:
```bash
python3 scripts/collect_transcripts.py all --days 14 \
  --output <workspace>/transcripts.json --verbose
```
If current-project mode:
```bash
python3 scripts/collect_transcripts.py --cwd "$(pwd)" --days 14 \
  --output <workspace>/transcripts.json --verbose
```
If auto-detection fails, show the list and ask the user which project to audit.
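Session transcripts live in per-project directories whose names encode the project path (the batching code later relies on this dash-encoding). A minimal sketch for listing them, assuming the conventional `~/.claude/projects` location:

```python
import pathlib

def list_project_dirs(base="~/.claude/projects"):
    """List encoded project directory names (base path is an assumption)."""
    root = pathlib.Path(base).expanduser()
    if not root.is_dir():
        return []
    return sorted(p.name for p in root.iterdir() if p.is_dir())
```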
For cross-project mode, base dir: `~/.claude/skill-report/`.
For current-project mode, base dir: `<project>/.claude/skill-report/`.
Step 2: Set Up Workspace
Each run gets a timestamped subdirectory so multiple runs never collide:
```bash
RUN_ID=$(date +%Y-%m-%dT%H-%M-%S)
WORKSPACE=<base_dir>/${RUN_ID}
mkdir -p ${WORKSPACE}
```
Use `${WORKSPACE}` as `<workspace>` in all subsequent steps.
`health-history.json` stays at `<base_dir>/health-history.json` (shared
across runs — see Step 8).
Step 3: Collect Data
Run both scripts. They produce the input files for analysis.
```bash
# Transcripts already collected in Step 1
python3 scripts/collect_skills.py \
  --output <workspace>/skill-manifest.json --verbose
```
Report the collection summary to the user:
"N sessions, M user turns, K skills found. Attention budget: T tokens total."
Step 4: Routing Audit (Sub-agents)
Spawn one or more routing-analyst sub-agents. Each sub-agent:
- Reads `agents/routing-analyst.md` for its analysis rubric
- Reads a filtered skill manifest (only skills visible to that batch)
- Reads a batch of transcripts
- Writes analysis to a batch JSON file
IMPORTANT — Project-aware batching: Projects with local skills must be
batched separately. Projects with only global skills can be pooled together
(they see the same skill set). When many projects have unique local skills,
batches are capped at `MAX_BATCHES` (default 12). Excess groups are merged
by greedy similarity — the group with the fewest extra skills is merged into
the most similar existing batch. This adds a few extra skills to
`visible_skill_names` but keeps sub-agent count bounded.
```python
import json
from collections import defaultdict

data = json.load(open("<workspace>/transcripts.json"))
manifest = json.load(open("<workspace>/skill-manifest.json"))
sessions = data["sessions"]

# Identify global skills and project-local skills
global_skills = [s for s in manifest["skills"] if s["scope"] == "global"]
global_names = [s["name"] for s in global_skills]
project_local = defaultdict(list)  # project_path -> [skill dicts]
for s in manifest["skills"]:
    if s["scope"] == "project-local" and s.get("project_path"):
        project_local[s["project_path"]].append(s)

# Helper: does this encoded project_dir match a project_path with locals?
def find_local_skills(project_dir):
    for pp, skills in project_local.items():
        encoded = pp.replace("/", "-").replace(".", "-")
        if encoded.lstrip("-") in project_dir.lstrip("-"):
            return skills
    return []

# Separate sessions: projects with local skills vs global-only
global_only_indices = []  # can be pooled
local_project_groups = defaultdict(list)  # project_dir -> indices
for i, s in enumerate(sessions):
    pdir = s.get("project_dir", "unknown")
    local_skills = find_local_skills(pdir)
    if local_skills:
        local_project_groups[pdir].append(i)
    else:
        global_only_indices.append(i)

# Build batches
batch_size = 60
MAX_BATCHES = 12  # Cap total sub-agents to keep cost/time bounded
batches = []

# 1) Pool all global-only sessions together
for chunk_start in range(0, len(global_only_indices), batch_size):
    chunk = global_only_indices[chunk_start:chunk_start + batch_size]
    batches.append({
        "session_indices": chunk,
        "label": "global-only (mixed projects)",
        "visible_skill_names": global_names,
    })

# 2) Group projects with same local skill set, then batch together
by_skill_set = defaultdict(list)  # tuple of local names -> indices
for pdir, indices in local_project_groups.items():
    local_names = tuple(sorted(s["name"] for s in find_local_skills(pdir)))
    by_skill_set[local_names].extend(indices)

local_batches = []
for local_names, indices in by_skill_set.items():
    visible = global_names + list(local_names)
    for chunk_start in range(0, len(indices), batch_size):
        chunk = indices[chunk_start:chunk_start + batch_size]
        local_batches.append({
            "session_indices": chunk,
            "label": f"local skills: {', '.join(local_names[:3])}{'...' if len(local_names) > 3 else ''}",
            "visible_skill_names": visible,
            "_local_set": set(local_names),
        })

# 3) Merge if too many batches — greedily merge smallest into most similar
remaining_budget = MAX_BATCHES - len(batches)
while len(local_batches) > remaining_budget and len(local_batches) > 1:
    # Find the smallest batch
    smallest_idx = min(range(len(local_batches)), key=lambda i: len(local_batches[i]["session_indices"]))
    smallest = local_batches.pop(smallest_idx)
    # Find the most similar batch (fewest extra skills added)
    best_idx, best_extra = 0, float("inf")
    for j, b in enumerate(local_batches):
        extra = len(smallest["_local_set"] - b["_local_set"]) + len(b["_local_set"] - smallest["_local_set"])
        if extra < best_extra:
            best_idx, best_extra = j, extra
    # Merge into best match
    target = local_batches[best_idx]
    target["session_indices"].extend(smallest["session_indices"])
    target["_local_set"] = target["_local_set"] | smallest["_local_set"]
    merged_local = sorted(target["_local_set"])
    target["visible_skill_names"] = global_names + merged_local
    target["label"] = f"merged local skills: {', '.join(merged_local[:3])}{'...' if len(merged_local) > 3 else ''}"

# Clean up internal field and add to batches
for b in local_batches:
    b.pop("_local_set", None)
    batches.append(b)

for i, b in enumerate(batches):
    print(f"Batch {i}: {len(b['session_indices'])} sessions, "
          f"{len(b['visible_skill_names'])} skills — {b['label']}")
```
Before spawning, build a DMI list per batch from the manifest:
```python
dmi_skills = {s["name"] for s in manifest["skills"] if s.get("disable_model_invocation")}
for b in batches:
    b["dmi_skill_names"] = sorted(set(b["visible_skill_names"]) & dmi_skills)
```
Spawn sub-agents in parallel — one per batch:
For each batch i:
Agent tool (general-purpose):
"Read agents/routing-analyst.md from the skill-auditor skill directory for
your analysis instructions.
Read <workspace>/skill-manifest.json for skill definitions.
Read <workspace>/transcripts.json for session data.
Only analyze sessions with these indices: [list from batch].
Only evaluate against these skills: [visible_skill_names from batch].
Ignore skills not in this list — they are not available in this
project context.
These skills have disable-model-invocation: true and NEVER auto-fire:
[dmi_skill_names from batch]. Do NOT flag them as false_negative.
Write your analysis as JSON to <workspace>/batch-audit-<i>.json
following the exact schema in schemas/schemas.md (audit-report.json section)."
After all sub-agents complete, merge batch results:
- Union all `skill_reports` (combine incidents, recalculate stats per skill)
- Union all `competition_pairs` and `coverage_gaps`
- Recalculate `meta` totals (sum sessions_analyzed, turns_analyzed, etc.)
Write merged result to `<workspace>/audit-report.json`.
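The merge can be sketched as a pure function over the parsed batch files. Field names such as `fire_count` and `incidents` are illustrative assumptions here; the authoritative schema lives in schemas/schemas.md:

```python
def merge_batches(batch_reports):
    """Merge per-batch audit dicts into one report (field names illustrative)."""
    merged = {
        "meta": {"sessions_analyzed": 0, "turns_analyzed": 0},
        "skill_reports": {},
        "competition_pairs": [],
        "coverage_gaps": [],
    }
    for batch in batch_reports:
        # Sum the meta totals across batches
        for key in merged["meta"]:
            merged["meta"][key] += batch.get("meta", {}).get(key, 0)
        # Union skill reports: combine incidents and recalculate counts
        for name, rep in batch.get("skill_reports", {}).items():
            agg = merged["skill_reports"].setdefault(
                name, {"fire_count": 0, "incidents": []})
            agg["fire_count"] += rep.get("fire_count", 0)
            agg["incidents"].extend(rep.get("incidents", []))
        # Union competition pairs and coverage gaps
        merged["competition_pairs"].extend(batch.get("competition_pairs", []))
        merged["coverage_gaps"].extend(batch.get("coverage_gaps", []))
    return merged
```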
Step 5: Portfolio Analysis (Sub-agent)
Spawn a portfolio-analyst sub-agent:
Agent tool (general-purpose):
"Read agents/portfolio-analyst.md from the skill-auditor skill directory.
Read <workspace>/skill-manifest.json for skill definitions and attention budget.
Read <workspace>/audit-report.json for the routing audit results.
Write your portfolio analysis as JSON to <workspace>/portfolio-analysis.json."
Step 6: Improvement Plan (Sub-agent)
Spawn an improvement-planner sub-agent:
Agent tool (general-purpose):
"Read agents/improvement-planner.md from the skill-auditor skill directory.
Read <workspace>/audit-report.json for routing audit results.
Read <workspace>/portfolio-analysis.json for portfolio analysis.
Read <workspace>/skill-manifest.json for current skill definitions.
IMPORTANT: Write ALL output text in [chosen language] — this includes
fixes_issues, changes_made, cascade_risk, expected_impact, rationale,
suggested_description, and every other human-readable string field.
Write your improvement proposals as JSON to <workspace>/improvement-proposals.json.
Also write individual patch files to <workspace>/patches/ directory."
Step 7: Generate HTML Report
```bash
python3 scripts/generate_report.py \
  --workspace <workspace>
```
Output: `<workspace>/skill-audit-report.html`.
Open the report in the browser:
```bash
open <workspace>/skill-audit-report.html
```
Step 8: Update Health History
Read `<base_dir>/health-history.json` (create if it doesn't exist — start with
an empty array `[]`). Append a new entry with the current run's summary:
```json
{
  "timestamp": "<ISO 8601>",
  "sessions_analyzed": <N>,
  "turns_analyzed": <N>,
  "portfolio_health": "<score>",
  "routing_accuracy_avg": <0.0-1.0>,
  "total_description_tokens": <N>,
  "competition_conflicts": <N>,
  "coverage_gaps": <N>,
  "skills_audited": <N>,
  "patches_proposed": <N>
}
```
If there's a previous entry, show the delta: "Accuracy changed from X to Y."
Step 9: Apply Patches (User Approval)
Show the user a summary from the HTML report. For each patch, show the
before/after diff and cascade risk. Let the user approve or reject each.
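The before/after diff can be rendered with difflib; treating each patch as an old/new description pair is an illustrative assumption, since the actual patch file schema lives in schemas/schemas.md:

```python
import difflib

def render_patch_diff(old_desc, new_desc, skill_name):
    """Unified diff of a skill description before/after a patch (sketch)."""
    diff = difflib.unified_diff(
        old_desc.splitlines(), new_desc.splitlines(),
        fromfile=f"{skill_name} (before)", tofile=f"{skill_name} (after)",
        lineterm="")
    return "\n".join(diff)
```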
For approved patches:
```bash
python3 scripts/apply_patches.py \
  --patches <workspace>/patches/ --confirm \
  --output <workspace>/changelog.md
```
Step 10: Summary
Report what was done:
- How many sessions analyzed
- How many routing issues found
- Portfolio health score
- Patches proposed / approved / applied
- New skills suggested
- Link to the HTML report
Analysis Capabilities
Routing Accuracy
Per-skill fire count, accuracy, false positives/negatives, specific incidents
with root cause analysis. See `agents/routing-analyst.md` for the rubric.
Attention Budget
Total description tokens across all skills. Per-skill token cost and efficiency
rating. Identifies bloated descriptions that waste attention budget.
See `agents/portfolio-analyst.md`.
Competition Matrix
Classifies skill-pair relationships: orthogonal / adjacent / overlapping / nested.
Based on real transcript evidence, not just keyword overlap.
Portfolio-Aware Optimization
Patches consider the full skill set. Cascade checking is mandatory — each patch
states what it fixes, what it might break, and the token budget impact.
See `agents/improvement-planner.md`.
Error Taxonomy
| Verdict | Description |
|---|---|
| correct | Right skill loaded for the intent |
| false_negative | Skill should have loaded but didn't. High bar: task must be meaningfully worse without it |
| false_positive | Skill loaded but was irrelevant |
| confused | Wrong skill loaded instead of the correct one |
| no_skill_needed | No skill was needed for this turn (most common) |
| explicit_invocation | User explicitly invoked the skill |
| coverage_gap | User intent not covered by any existing skill |
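Per-skill accuracy can be derived by counting these verdicts. The formula below is an illustrative sketch, not the rubric from agents/routing-analyst.md:

```python
from collections import Counter

def routing_accuracy(verdicts):
    """Fraction of routing-relevant turns judged correct (illustrative formula).
    Turns verdicted no_skill_needed or explicit_invocation are excluded."""
    counts = Counter(verdicts)
    relevant = sum(counts[v] for v in
                   ("correct", "false_negative", "false_positive", "confused"))
    if relevant == 0:
        return None  # no routing-relevant turns observed
    return counts["correct"] / relevant
```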
Note on `disable-model-invocation: true`: Skills with this flag never
auto-fire by design. They are excluded from false_negative analysis and
listed separately in the report as "explicit-only" skills.
Workspace Structure
```
<base_dir>/                    # e.g. ~/.claude/skill-report/
├── health-history.json        # shared across runs (append-only)
├── 2026-03-04T18-45-23/       # run 1
│   ├── transcripts.json
│   ├── skill-manifest.json
│   ├── batch-audit-*.json
│   ├── audit-report.json
│   ├── portfolio-analysis.json
│   ├── improvement-proposals.json
│   ├── patches/*.patch.json
│   ├── skill-audit-report.html
│   └── changelog.md
└── 2026-03-04T20-12-07/       # run 2
    └── ...
```
Troubleshooting
- "No project found": Run with `--cwd` pointing to the project root, or use `--list` to see available projects.
- tiktoken not installed: Token counts will use character-based approximation. Install with `pip install tiktoken` for accuracy.
- Large project (100+ sessions): Sessions are batched automatically. Multiple sub-agents run in parallel.
- Sub-agent produces invalid JSON: Re-run the specific sub-agent step. The rubric in agents/ includes exact schema specifications.
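The tiktoken fallback mentioned above can be sketched as follows; the 4-characters-per-token ratio and the cl100k_base encoding choice are assumptions, not values taken from the scripts:

```python
def count_tokens(text):
    """Token count via tiktoken when available, else ~4 chars per token (sketch)."""
    try:
        import tiktoken
        enc = tiktoken.get_encoding("cl100k_base")  # encoding choice is an assumption
        return len(enc.encode(text))
    except ImportError:
        return max(1, len(text) // 4)  # rough character-based approximation
```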