# Dual Axis Skill Reviewer


Run the dual-axis reviewer script and save reports to `reports/`. The script supports:
  • Random or fixed skill selection
  • Auto-axis scoring with optional test execution
  • LLM prompt generation
  • LLM JSON review merge with weighted final score
  • Cross-project review via `--project-root`

## When to Use


  • Need reproducible scoring for one skill in `skills/*/SKILL.md`.
  • Need improvement items when the final score is below 90.
  • Need both deterministic checks and qualitative LLM code/content review.
  • Need to review skills in a different project from the command line.

## Prerequisites


  • Python 3.9+
  • `uv` (recommended; auto-resolves the `pyyaml` dependency via inline metadata)
  • For tests: `uv sync --extra dev` or the equivalent in the target project
  • For LLM-axis merge: a JSON file that follows the LLM review schema (see Resources)

## Workflow


Determine the correct script path based on your context:
  • Same project: `skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py`
  • Global install: `~/.claude/skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py`
The examples below use `REVIEWER` as a placeholder. Set it once:

If reviewing from the same project:


```bash
REVIEWER=skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py
```

If reviewing another project (global install):


```bash
REVIEWER=~/.claude/skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py
```
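Before the first run, it can help to confirm that the placeholder resolves to a real file. A minimal sketch, using the global-install path from above:

```shell
# Sanity check: does $REVIEWER point at an existing script?
REVIEWER=~/.claude/skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py
if [ -f "$REVIEWER" ]; then
  echo "reviewer found: $REVIEWER"
else
  echo "reviewer missing: $REVIEWER"
fi
```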

### Step 1: Run Auto Axis + Generate LLM Prompt


```bash
uv run "$REVIEWER" \
  --project-root . \
  --emit-llm-prompt \
  --output-dir reports/
```
When reviewing a different project, point `--project-root` at it:
```bash
uv run "$REVIEWER" \
  --project-root /path/to/other/project \
  --emit-llm-prompt \
  --output-dir reports/
```

### Step 2: Run LLM Review


  • Use the generated prompt file at `reports/skill_review_prompt_<skill>_<timestamp>.md`.
  • Ask the LLM to return strict JSON output.
  • When running inside Claude Code, let Claude act as orchestrator: read the generated prompt, produce the LLM review JSON, and save it for the merge step.
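A sketch of the save-then-validate step before merging. The field names below are hypothetical placeholders, not the real schema; the authoritative field list is in `references/llm_review_schema.md`:

```shell
# Save the LLM's JSON reply and confirm it parses before the merge step.
# Field names here are HYPOTHETICAL examples only; see llm_review_schema.md.
mkdir -p reports
cat > reports/llm_review.json <<'EOF'
{
  "skill": "example-skill",
  "llm_score": 86,
  "findings": ["placeholder finding"]
}
EOF
python3 -m json.tool reports/llm_review.json > /dev/null && echo "valid JSON"
```

Validating up front keeps a malformed reply from surfacing later as a confusing merge error.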

### Step 3: Merge Auto + LLM Axes


```bash
uv run "$REVIEWER" \
  --project-root . \
  --skill <skill-name> \
  --llm-review-json <path-to-llm-review.json> \
  --auto-weight 0.5 \
  --llm-weight 0.5 \
  --output-dir reports/
```
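With equal weights the merge is a plain weighted average: an auto-axis score of 92 and an LLM-axis score of 84 combine to 88.0. A quick arithmetic check in the shell:

```shell
# final = auto * auto_weight + llm * llm_weight
auto=92
llm=84
awk -v a="$auto" -v l="$llm" 'BEGIN { printf "%.1f\n", a * 0.5 + l * 0.5 }'
# → 88.0
```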

### Step 4: Optional Controls


  • Fix selection for reproducibility: `--skill <name>` or `--seed <int>`
  • Review all skills at once: `--all`
  • Skip tests for quick triage: `--skip-tests`
  • Change the report location: `--output-dir <dir>`
  • Increase `--auto-weight` for stricter deterministic gating.
  • Increase `--llm-weight` when qualitative/code-review depth is prioritized.
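Whether the script requires the two weights to sum to 1 is an assumption here (check the script's `--help`); if it does, a quick pre-check avoids surprises when adjusting them:

```shell
# Verify custom weights sum to 1, with a small float tolerance
aw=0.7
lw=0.3
awk -v a="$aw" -v l="$lw" 'BEGIN {
  s = a + l
  if (s > 0.999 && s < 1.001) print "ok"
  else printf "weights sum to %.2f, not 1\n", s
}'
# → ok
```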

## Output


  • `reports/skill_review_<skill>_<timestamp>.json`
  • `reports/skill_review_<skill>_<timestamp>.md`
  • `reports/skill_review_prompt_<skill>_<timestamp>.md` (when `--emit-llm-prompt` is enabled)
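A report path for a given skill can be built from the naming pattern above; the exact timestamp format is an assumption in this sketch, only the `<skill>_<timestamp>` shape comes from the list:

```shell
# Build the expected report path for a skill (timestamp format assumed)
skill="my-skill"
ts=$(date +%Y%m%d_%H%M%S)
echo "reports/skill_review_${skill}_${ts}.md"
```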

## Installation (Global)


To use this skill from any project, symlink it into `~/.claude/skills/`:
```bash
ln -sfn /path/to/claude-trading-skills/skills/dual-axis-skill-reviewer \
  ~/.claude/skills/dual-axis-skill-reviewer
```
After this, Claude Code will discover the skill in all projects, and the script is accessible at `~/.claude/skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py`.
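The `-sfn` flags make the link symbolic, force-replace any existing link, and avoid dereferencing an existing symlink target. A self-contained demonstration in a scratch directory, so nothing under `~/.claude` is touched:

```shell
# Demonstrate ln -sfn semantics safely in a temp directory
tmp=$(mktemp -d)
mkdir "$tmp/dual-axis-skill-reviewer"
ln -sfn "$tmp/dual-axis-skill-reviewer" "$tmp/link"
readlink "$tmp/link"    # prints the target path
rm -r "$tmp"
```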

## Resources


  • The auto axis scores metadata, workflow coverage, execution safety, artifact presence, and test health.
  • The auto axis detects `knowledge_only` skills and adjusts script/test expectations to avoid unfair penalties.
  • The LLM axis scores deep content quality (correctness, risk, missing logic, maintainability).
  • The final score is a weighted average of the two axes.
  • If the final score is below 90, improvement items are required and listed in the markdown report.
  • Script: `skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py`
  • LLM schema: `skills/dual-axis-skill-reviewer/references/llm_review_schema.md`
  • Rubric detail: `skills/dual-axis-skill-reviewer/references/scoring_rubric.md`