codebase-analyzer


# Codebase Analyzer Skill

## Operator Context

This skill operates as an operator for statistical codebase analysis, configuring Claude's behavior for measurement-based rule discovery from Go codebases. It implements a "Measure, Don't Read" methodology: Python scripts count patterns so that LLM training bias cannot override local conventions, and the resulting statistics are then interpreted to derive confidence-scored rules.

## Hardcoded Behaviors (Always Apply)

  • CLAUDE.md Compliance: Read and follow repository CLAUDE.md files before execution. Project instructions override default behaviors.
  • Over-Engineering Prevention: Scripts perform pure statistical measurement only. No feature additions beyond counting patterns. No speculative metrics or flexibility that was not requested.
  • Measurement-Only Analysis: Scripts count and measure; NEVER interpret or judge code quality during data collection phase. The LLM is a calculator, not a judge.
  • No Training Bias: Analysis MUST avoid LLM interpretation of "good" vs "bad" patterns during measurement. What IS in the code is the local standard.
  • Confidence Gating: Only derive rules from patterns with >70% consistency. Below that threshold, report statistics without creating rules.
  • Separate Measurement from Interpretation: Run scripts first (mechanical), then interpret statistics second (analytical). Never combine these steps.

## Default Behaviors (ON unless disabled)

  • Communication Style: Report facts without self-congratulation. Show complete statistics rather than describing them. Be concise but informative.
  • Temporary File Cleanup: Analysis scripts do not create temporary files (single-pass processing). Any debug outputs or iteration files should be removed at completion.
  • Verbose Output: Display summary statistics to stderr, full JSON to stdout or file.
  • Confidence Thresholds: HIGH (>85%), MEDIUM (70-85%), below 70% not extracted as rule.
  • Vendor Filtering: Automatically skip vendor/, testdata/, and generated code to avoid polluting statistics with external patterns.
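The vendor-filtering default above can be sketched as a small Python helper. This is an illustrative sketch, not the skill's actual script; the names `go_files`, `SKIP_DIRS`, and `GENERATED_MARKER` are assumptions.

```python
import os

# Directories and markers to exclude before any pattern counting.
# "Code generated by" is the conventional Go generated-file marker.
SKIP_DIRS = {"vendor", "testdata"}
GENERATED_MARKER = "Code generated by"

def go_files(root):
    """Yield paths of hand-written .go files under root."""
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune skipped directories in place so os.walk never descends
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        for name in filenames:
            if not name.endswith(".go"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                first_line = f.readline()
            if GENERATED_MARKER in first_line:
                continue  # exclude generated code
            yield path
```

Pruning `dirnames` in place is what keeps `os.walk` from descending into vendor/ at all, rather than filtering files after the fact.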

## Optional Behaviors (OFF unless enabled)

  • Cross-Repository Analysis: Compare patterns across multiple repos (requires explicit request).
  • Historical Tracking: Re-analyze same repo over time to track pattern evolution (requires explicit request).
  • Custom Metric Addition: Add new measurement categories beyond the 100 standard metrics (requires explicit request).

## What This Skill CAN Do

  • Extract implicit coding rules through statistical analysis of Go codebases
  • Measure 100 metrics across 25 categories using Python scripts
  • Derive confidence-scored rules from pattern frequency data
  • Produce a 10-dimensional Style Vector quality fingerprint (0-100 scores)
  • Discover shadow constitution rules (linter suppressions teams accept)
  • Compare patterns across multiple repositories for team-wide standards

## What This Skill CANNOT Do

  • Judge code quality subjectively (measures patterns, not "good" vs "bad")
  • Analyze non-Go codebases (scripts are Go-specific)
  • Derive rules from codebases with fewer than 50 Go files (insufficient sample)
  • Replace code review or linting (produces rules, not enforcement)
  • Skip measurement and rely on LLM "reading" the code


## Instructions

### Phase 1: CONFIGURE (Do NOT proceed without validated target)

Goal: Validate target and select analyzer variant.
Step 1: Validate the target
  • Confirm path points to a Go repository root with .go files
  • Check for standard structure (cmd/, internal/, pkg/)
  • Verify sufficient file count (50+ files for meaningful rules, 100+ ideal)
Step 2: Select cartographer variant

| Variant | Script | Metrics | Use When |
|---|---|---|---|
| Omni (recommended) | `cartographer_omni.py` (not yet implemented) | 100 across 25 categories | Full codebase profiling |
| Basic | `cartographer.py` (not yet implemented) | ~15 categories | Quick pattern overview |
| Ultimate | `cartographer_ultimate.py` | 6 focused categories | Performance pattern detection |
Step 3: Verify environment
  • Python 3.7+ available
  • No external dependencies needed (uses only Python standard library)
  • Output directories exist or can be created
```
===============================================================
 PHASE 1: CONFIGURE
===============================================================

 Target Repository:
   - Path: [/path/to/repo]
   - Go Files: [N files found]
   - Structure: [cmd/ | internal/ | pkg/ | flat]

 Variant Selected: [Omni | Basic | Ultimate]
 Reason: [why this variant]

 Validation:
   - [ ] Path exists and contains .go files
   - [ ] File count >= 50 (actual: N)
   - [ ] Python 3.7+ available
   - [ ] Output directory writable

 CONFIGURE complete. Proceeding to MEASURE...
===============================================================
```
**Gate**: Target directory exists, contains 50+ Go files, variant selected. Proceed only when gate passes.
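The Phase 1 gate can be sketched as a quick Python check. The helper name `validate_target` is illustrative; the thresholds mirror the skill's stated defaults.

```python
from pathlib import Path

def validate_target(repo, min_files=50):
    """Phase 1 gate check: path exists, enough Go files, layout noted."""
    root = Path(repo)
    if not root.is_dir():
        return {"ok": False, "reason": f"{repo} is not a directory"}
    # Count hand-written Go files, skipping vendored code and test fixtures
    go_count = sum(
        1 for p in root.rglob("*.go")
        if "vendor" not in p.parts and "testdata" not in p.parts
    )
    # Note which standard layout directories are present
    structure = [d for d in ("cmd", "internal", "pkg") if (root / d).is_dir()] or ["flat"]
    return {"ok": go_count >= min_files, "go_files": go_count, "structure": structure}
```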

### Phase 2: MEASURE (Do NOT interpret during this phase)

Goal: Run statistical analysis scripts. Pure measurement -- no interpretation yet.

**Step 1: Execute the cartographer**

```bash
# TODO: scripts/cartographer_omni.py not yet implemented.
# Manual alternative: use grep/find to count patterns across Go files.

# Example: count error wrapping patterns
grep -rn 'fmt.Errorf.*%w' ~/repos/my-project --include="*.go" | wc -l

# Example: count constructor patterns
grep -rn 'func New' ~/repos/my-project --include="*.go" | wc -l
```
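Since the cartographer script is not yet implemented, the manual counting above can also be wrapped in a minimal Python sketch. The pattern names and regexes here are illustrative stand-ins, not the skill's real 100-metric catalog.

```python
import json
import re
import sys
from pathlib import Path

# Illustrative patterns; the real catalog would define many more.
PATTERNS = {
    "error_wrapping": re.compile(r'fmt\.Errorf\([^)]*%w'),
    "constructors": re.compile(r'\bfunc New[A-Z]\w*'),
    "bare_return_err": re.compile(r'\breturn err\b'),
}

def measure(repo):
    """Count pattern occurrences across non-vendored .go files."""
    counts = {name: 0 for name in PATTERNS}
    files = 0
    for path in Path(repo).rglob("*.go"):
        if "vendor" in path.parts or "testdata" in path.parts:
            continue
        files += 1
        text = path.read_text(encoding="utf-8", errors="ignore")
        for name, pattern in PATTERNS.items():
            counts[name] += len(pattern.findall(text))
    return {"files_analyzed": files, "counts": counts}

if __name__ == "__main__" and len(sys.argv) > 1:
    # Summary to stderr, full JSON to stdout, per the default behaviors
    result = measure(sys.argv[1])
    print(f"analyzed {result['files_analyzed']} files", file=sys.stderr)
    print(json.dumps(result, indent=2))
```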

**Step 2: Verify output integrity**
- Confirm JSON output is valid and complete
- Check file count matches expectations (no vendor pollution)
- Verify all three lenses produced data
- Confirm derived_rules section exists in output

**Step 3: Check for data quality issues**
- File count suspiciously high? Vendor code may be included
- File count suspiciously low? Subdirectories may be missed
- All percentages near 50%? May indicate mixed codebase or insufficient data

```
===============================================================
 PHASE 2: MEASURE
===============================================================

 Script Executed: [cartographer_omni.py (not yet implemented -- use manual pattern counting)]
 Target: [/path/to/repo]

 Results:
   - Files analyzed: [N]
   - Total lines: [N]
   - Categories measured: [N of 25]
   - Derived rules: [N auto-extracted]

 Data Quality:
   - JSON output valid
   - File count reasonable (no vendor pollution)
   - All three lenses have data
   - No unexpected zeros in major categories

 Output saved to: [path/to/output.json]

 MEASURE complete. Proceeding to INTERPRET...
===============================================================
```


**Gate**: Script completed without errors, JSON output is valid, file count is reasonable. Proceed only when gate passes.

### Phase 3: INTERPRET (Now the LLM analyzes)

Goal: Derive rules from statistics. This is where LLM interpretation happens -- AFTER measurement is complete.

Step 1: Review the three lenses

| Lens | Question | Measures |
|---|---|---|
| Consistency (Frequency) | "How often do they use X?" | Imports, test frameworks, logging, modern features |
| Signature (Structure) | "How do they name/structure things?" | Constructors, receivers, parameter order, variables |
| Idiom (Implementation) | "How do they implement patterns?" | Error handling, control flow, context usage, defer |

For detailed lens explanations, see `references/three-lenses.md`.
Step 2: Extract rules by confidence

| Confidence | Threshold | Action | Example |
|---|---|---|---|
| HIGH | >85% consistency | Extract as enforceable rule | "96% use `err` not `e`" -> MUST use `err` |
| MEDIUM | 70-85% consistency | Extract as recommendation | "78% guard clauses" -> SHOULD prefer guards |
| (none) | Below 70% | Not extracted; report as observation only | "55% single-letter receivers" -> No rule |
Step 3: Review Style Vector (Omni only)
  • 10 composite scores (0-100): Consistency, Modernization, Safety, Idiomaticity, Documentation, Testing Maturity, Architecture, Performance, Observability, Production Readiness
  • Identify strengths (scores >75) and gaps (scores <50)
  • Note shadow constitution entries (accepted linter suppressions)
Step 4: Cross-reference lenses
  • Pattern confirmed across multiple lenses = higher confidence
  • Pattern in one lens only = standard confidence
  • Contradictions between lenses = investigate further
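The confidence gates and observation fallback described above can be sketched as a small classifier. This is a hypothetical helper, not part of the skill's scripts.

```python
def classify(pattern, pct, occurrences):
    """Apply the gates: >85% HIGH/MUST, 70-85% MEDIUM/SHOULD,
    below 70% reported as an observation with no rule."""
    if pct > 85:
        level, action = "HIGH", "MUST"
    elif pct >= 70:
        level, action = "MEDIUM", "SHOULD"
    else:
        # Below threshold: report statistics without creating a rule
        return {"pattern": pattern,
                "observation": f"{pct}% consistency across {occurrences} occurrences"}
    return {"pattern": pattern,
            "confidence": level,
            "action": action,
            "evidence": f"{pct}% consistency across {occurrences} occurrences"}
```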
**Gate**: Rules extracted with evidence and confidence levels. Style Vector reviewed. Proceed only when gate passes.

### Phase 4: DELIVER (Do NOT mark complete without artifacts)

Goal: Produce actionable output artifacts.

**Step 1: Save statistical report**

`cartography_data/{repo_name}_cartography.json`

**Step 2: Generate derived rules document**

`derived_rules/{repo_name}_rules.md`

Format each rule as:

```markdown
## Rule: [Statement]

Confidence: HIGH/MEDIUM
Evidence: [X% consistency across N occurrences]
Category: [error_handling | naming | control_flow | architecture | ...]
Lens: [Consistency | Signature | Idiom | Multiple]
```

**Step 3: Summarize Style Vector** (Omni only)

```markdown
## Style Vector Summary

| Dimension | Score | Assessment |
|---|---|---|
| Consistency | [0-100] | [Strength/Gap/Neutral] |
| Modernization | [0-100] | [Strength/Gap/Neutral] |
| ... | ... | ... |
```

**Step 4: Recommend next steps**
- Compare with pr-miner data if available (explicit vs implicit rules)
- Suggest CLAUDE.md updates for high-confidence rules
- Identify golangci-lint rules that could enforce discovered patterns
- Suggest quarterly re-analysis schedule
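A minimal sketch of Step 2's rules-document generation, assuming rule dicts shaped like the Phase 3 output; the function name and field names are illustrative, not the skill's actual schema.

```python
from pathlib import Path

def write_rules_doc(repo_name, rules, out_dir="derived_rules"):
    """Render rules in the per-rule format above and save the document."""
    lines = []
    for rule in rules:
        lines += [
            f"## Rule: {rule['statement']}",
            "",
            f"Confidence: {rule['confidence']}",
            f"Evidence: {rule['evidence']}",
            f"Category: {rule['category']}",
            f"Lens: {rule['lens']}",
            "",
        ]
    out = Path(out_dir) / f"{repo_name}_rules.md"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text("\n".join(lines), encoding="utf-8")
    return out
```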

```
===============================================================
 PHASE 4: DELIVER
===============================================================

 Artifacts:
   - JSON report: [path]
   - Rules document: [path]
   - Style Vector summary: [included in rules doc]

 Results Summary:
   - HIGH confidence rules: [N]
   - MEDIUM confidence rules: [N]
   - Observations (below threshold): [N]
   - Style Vector overall: [strong/mixed/weak]

 Next Steps:
   1. [Specific recommendation]
   2. [Specific recommendation]
   3. [Specific recommendation]

 DELIVER complete. Analysis finished.
===============================================================
```


**Gate**: JSON report saved, rules document generated, next steps documented. Analysis complete.

---

## Complementary Skills

| Skill | Extracts | Combined Value |
|---|---|---|
| pr-miner | Explicit rules (what people argue about in reviews) | Agreement = HIGH confidence; Silence + consistency = implicit rule |
| codebase-analyzer | Implicit rules (what they actually do) | pr-miner says X but code does Y = rule not followed |

## Reconciliation Matrix

| pr-miner | codebase-analyzer | Conclusion |
|---|---|---|
| Says X | Shows X at >85% | Confirmed rule (both explicit and practiced) |
| Silent | Shows X at >85% | Implicit rule (nobody argues because everyone agrees) |
| Says X | Shows Y at >85% | Rule stated but not followed (needs enforcement or is outdated) |
| Mixed signals | Inconsistent | No standard yet (opportunity to establish one) |

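The matrix above maps to a simple decision function. The signature and labels here are illustrative; `stated` is the pattern pr-miner says the team requires (None if reviews are silent), `practiced` is the pattern the analyzer measured, and `pct` is its consistency.

```python
def reconcile(stated, practiced, pct):
    """Map the reconciliation matrix to a conclusion string."""
    if pct <= 85:
        return "no standard yet"          # mixed or inconsistent practice
    if stated is None:
        return "implicit rule"            # nobody argues; everyone agrees
    if stated == practiced:
        return "confirmed rule"           # explicit and practiced
    return "stated but not followed"      # needs enforcement or is outdated
```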

## Examples

### Example 1: Single Repository Analysis

User says: "What conventions does this repo follow?"

Actions:
  1. Validate target has 100+ Go files (CONFIGURE)
  2. Run pattern counting against the repo (MEASURE)
  3. Extract rules from statistics: error wrapping 89%, guard clauses 5.2x, New{Type} 94% (INTERPRET)
  4. Save JSON report and rules document (DELIVER)

Result: 30+ rules extracted with confidence levels, Style Vector produced

### Example 2: Team-Wide Standards Discovery

User says: "Find our team's coding patterns across all services"

Actions:
  1. Validate all target repos, confirm 50+ files each (CONFIGURE)
  2. Run cartographer on each repo separately (MEASURE)
  3. Cross-reference patterns: error wrapping 87-91% across all repos = team standard (INTERPRET)
  4. Produce team-wide rules document with per-repo breakdowns (DELIVER)

Result: Team-wide standards with cross-repo evidence

### Example 3: Onboarding New Developer

User says: "I just joined the team, what coding patterns should I follow?"

Actions:
  1. Identify main team repos, validate Go file counts (CONFIGURE)
  2. Run omni-cartographer on primary service (MEASURE)
  3. Extract top 10 HIGH confidence rules as onboarding checklist (INTERPRET)
  4. Produce concise rules doc focusing on error handling, naming, and control flow (DELIVER)

Result: Evidence-based onboarding guide with concrete examples from the actual codebase


## Error Handling

### Error: "No Go files found"

Cause: Path does not point to a Go repository root, or .go files are in subdirectories not being scanned.

Solution:
  1. Verify the path points to the repository root with `ls *.go` or `find . -name "*.go" | head`
  2. If Go files are nested, point to the parent directory
  3. Confirm vendor/ is not the only directory containing Go files

### Error: "No rules derived"

Cause: Codebase too small (<50 files) or patterns genuinely inconsistent.

Solution:
  1. Check file count -- if <50, combine analysis across multiple repos from the same team
  2. If >50 files but no rules, the team genuinely lacks consistent patterns
  3. Lower the threshold to 60% to find emerging patterns (note the reduced confidence)

### Error: "Statistics dominated by vendor/generated code"

Cause: Vendor directory or generated files not filtered, polluting pattern data.

Solution:
  1. Verify scripts are filtering vendor/, testdata/, and _test files for core patterns
  2. If the structure is non-standard, analyze specific directories manually
  3. Check for generated-code markers (`Code generated by...`) and exclude those files


## Anti-Patterns

### Anti-Pattern 1: LLM Reading Instead of Script Measuring

What: Using Claude to "read the codebase and find patterns" instead of running cartographer scripts.

Why wrong: The LLM applies training bias -- it reports what "should be" instead of what IS. When the LLM sees `return err`, it reports "not wrapping errors properly" even if that IS the local standard.

Do instead: Run the cartographer script first (measurement), then interpret the statistics (analysis). Two separate steps, never combined.

### Anti-Pattern 2: Rules from Low-Confidence Patterns

What: Creating enforceable rules from patterns below 70% consistency (e.g., "45% use fmt.Errorf with %w" becomes "All errors must use fmt.Errorf").

Why wrong: Forces consistency where the team has not achieved it organically. Causes false positives in reviews. The team may be transitioning between patterns.

Do instead: Only derive rules from HIGH confidence (>85%). For 70-85%, suggest "consider standardizing." Below 70%, report as observation only.

### Anti-Pattern 3: Analyzing Insufficient Sample Size

What: Running analysis on a repo with <50 Go files and treating results as definitive patterns.

Why wrong: Small sample size produces high variance. Patterns that appear consistent at 20 files may be coincidence. Cannot distinguish signal from noise.

Do instead: Require 50+ files minimum. For small repos, combine analysis across multiple team repos. For monorepos, analyze the full tree.

### Anti-Pattern 4: One-Time Analysis Without Follow-Up

What: Analyzing once, extracting rules, never re-running as the codebase evolves.

Why wrong: Coding patterns evolve with team growth and new Go versions. A one-time snapshot becomes stale within months. Cannot measure the impact of standardization efforts.

Do instead: Re-analyze quarterly. Compare Style Vector scores over time. Track pattern adoption (e.g., "Did the Modernization score improve after Go 1.21 adoption?").

### Anti-Pattern 5: Mixing Measurement and Interpretation

What: Having the LLM "read" code files and count patterns manually instead of running the deterministic Python scripts.

Why wrong: LLM counting is unreliable at scale -- it misses files, double-counts, and applies inconsistent criteria. Python scripts produce deterministic, reproducible results across runs.

Do instead: ALWAYS run the cartographer script for measurement (Phase 2). The LLM's role begins at interpretation (Phase 3), working from the script's JSON output.


## References

This skill uses these shared patterns:
  • Anti-Rationalization - Prevents shortcut rationalizations
  • Verification Checklist - Pre-completion checks

## Domain-Specific Anti-Rationalization

| Rationalization | Why It's Wrong | Required Action |
|---|---|---|
| "I can read the code and find patterns" | Reading applies training bias; measures what "should be", not what IS | Run cartographer scripts for measurement |
| "Small repo is fine for analysis" | <50 files produces unreliable statistics | Combine repos or accept limited confidence |
| "This 55% pattern should be a rule" | Below 70% is noise, not signal | Only extract rules above the confidence threshold |
| "Analysis was done last year, still valid" | Patterns evolve with team and language | Re-analyze quarterly |

## Reference Files

  • `${CLAUDE_SKILL_DIR}/references/three-lenses.md`: Detailed explanation of the three analysis lenses
  • `${CLAUDE_SKILL_DIR}/references/examples.md`: Real-world analysis examples and workflows
  • `${CLAUDE_SKILL_DIR}/references/metrics-catalog.md`: Complete 100-metric catalog across 25 categories

## Prerequisites

  • Python 3.7+
  • Go codebase to analyze (50+ files recommended)
  • No external dependencies (uses only Python standard library)