code-reviewer

Compare original and translation side by side

🇺🇸

Original

English
🇨🇳

Translation

Chinese

code-reviewer

Reviews code in the current working tree through a chosen persona. Reference docs in
references/
are loaded on demand — load only the persona and mode refs the user picks.

Claude Code extensions (其他Agent会忽略) --- argument-hint: '[<file|dir|diff|staged>]' user-invocable: true model-invocable: true

When to use

code-reviewer

Verbatim trigger phrases:
  • "review this code"
  • "audit this diff"
  • "find issues in this"
  • "second opinion on this"
  • "harsh review of"
  • "adversarial review"
  • "security review of"
通过选定的角色对当前工作树中的代码进行审查。会按需加载
references/
中的参考文档——仅加载用户选择的角色和模式对应的参考内容。

When NOT to use

适用场景

  • Formatting / style fixes → use a linter (
    oxlint
    ,
    eslint
    ,
    prettier
    , etc.)
  • Refactoring patterns → use
    ts-best-practices
    or
    ts-best-practices-functional
  • Reviewing pull requests from external repos / contributors → out of scope; the skill only reviews local files. If you need PR review, run it inside a trusted CI environment after vetting the source.
  • Writing new code from scratch → this skill reviews existing code
明确触发短语:
  • "review this code"
  • "audit this diff"
  • "find issues in this"
  • "second opinion on this"
  • "harsh review of"
  • "adversarial review"
  • "security review of"

Inputs

不适用场景

$ARGUMENTS
— one of:
  • File or directory path (
    src/foo.ts
    ,
    src/
    )
  • diff
    — review
    git diff
  • staged
    — review
    git diff --staged
  • Empty — ask the user what to review
  • 格式/样式修复 → 使用linter工具(
    oxlint
    eslint
    prettier
    等)
  • 重构模式建议 → 使用
    ts-best-practices
    ts-best-practices-functional
  • 审查外部仓库/贡献者的Pull Request → 超出范围;本Skill仅审查本地文件。若需要PR审查,请在验证来源后于可信CI环境中执行。
  • 从零开始编写新代码 → 本Skill仅用于审查已有代码

Workflow

输入参数

1. Determine scope

Resolve
$ARGUMENTS
to a concrete set of files + diff:
InputAction
File/dir pathRead the file(s) directly
diff
git diff
staged
git diff --staged
emptyAsk: "What should I review? (file path /
diff
/
staged
)"
$ARGUMENTS
— 可选值包括:
  • 文件或目录路径(
    src/foo.ts
    src/
  • diff
    — 审查
    git diff
    内容
  • staged
    — 审查
    git diff --staged
    内容
  • 空值 — 询问用户需要审查的内容

2. Choose persona(s) — load on demand

工作流程

1. 确定审查范围

Ask the user which review angle. In Claude Code use
AskUserQuestion
with
multiSelect: true
; in other agents use the equivalent multi-select prompt. Always present these exact 5 options (4 personas + an "All" convenience option):
  1. Adversarial — devil's advocate, harsh, looks for everything wrong → load
    references/personas/adversarial.md
  2. Security — threat modeling, OWASP, input handling → load
    references/personas/security.md
  3. Architecture — design, abstractions, coupling, modularity → load
    references/personas/architecture.md
  4. Performance — algorithmic complexity, memory, I/O patterns → load
    references/personas/performance.md
  5. All — run every persona; loads all four refs and locks step 3 to a parallel mode
If the user picks "All", treat it as
[adversarial, security, architecture, performance]
. Don't omit the "All" option even when the user already named a persona in their request — they may want to broaden it.
Only load the references the user actually picked. Don't pre-load all four unless they chose "All" — that's the whole point of progressive disclosure.
If the user picked more than one persona, prefer a parallel mode in step 3 (multi-bg-agent or agent-team). Sequential in-process across multiple personas is slow and pollutes context.
$ARGUMENTS
解析为具体的文件集+diff内容:
输入内容操作说明
文件/目录路径直接读取对应文件(或目录下所有文件)
diff
执行
git diff
获取内容
staged
执行
git diff --staged
获取内容
空值询问用户:“需要我审查什么?(文件路径 /
diff
/
staged
)”

3. Choose mode

2. 选择评审角色——按需加载

Four modes, ordered by review quality (best → worst). The reviewing agent should default to the highest-quality option that's actually available on the host:
ModeQualityBest forReference
Cross-model handoff★★★ RecommendedAny review where bias / blind spots matter. A different model on the same machine catches what the current one missed — especially for adversarial / security / architecture lenses. Pair with
secret-shield
.
references/cross-model-handoff.md
Agent team★★ Baseline (multi)Multi-persona parallel review when the host has a native team primitive (Claude Code Teams). Same model, but each persona gets a fresh context.
references/parallel-execution.md
Multi-bg-agent★★ Baseline (multi)Multi-persona parallel review on agents without a team primitive. Same model, fresh contexts.
references/parallel-execution.md
In-process★ Fallback onlyLast resort — same agent, same context, same model. Most biased option. Only use if no other CLI is available and parallel spawn isn't supported.(no extra ref)
Why cross-model is the recommendation: code review is an independence problem. A second model — even a smaller one — has different training data, different priors, and different blind spots, so it surfaces issues the first model rationalizes past. Same-model parallel agents (bg-agent / team) reduce context-pollution but share the model's biases. In-process review compounds both problems: same biases, same context.
First, detect available CLIs — this drives the recommendation:
bash
node scripts/detect-clis.mjs --available-only --pretty
The script outputs JSON of available AI CLIs on
$PATH
(codex, gemini, aider, etc.).
Recommendation logic, applied in order:
  1. At least one non-current-agent CLI available → recommend cross-model handoff. Strong default — propose it even when the user didn't explicitly say "second opinion". For multi-persona, you can either pick one CLI and run all personas through it sequentially, or (if multiple CLIs are available) split personas across CLIs.
  2. No other CLI AND ≥2 personas → recommend a parallel same-model mode. Use agent team on hosts with a native team primitive (Claude Code), otherwise multi-bg-agent.
  3. No other CLI AND 1 persona → fall back to in-process. Tell the user explicitly: "no second model on this machine — running same-model in-process; quality is lower".
  4. Never silently default to in-process. If you reach in-process, it's because cross-model wasn't available and parallel modes don't apply. Say so.
Ask the user (
AskUserQuestion
or equivalent) to confirm the recommended mode before running. Load only the reference(s) for the chosen mode.
询问用户选择哪种审查视角。在Claude Code中使用
AskUserQuestion
并设置
multiSelect: true
;在其他Agent中使用等效的多选提示。必须始终提供以下5个选项(4种角色+“全部”便捷选项):
  1. Adversarial(对抗性) — 唱反调、严苛审查,寻找所有潜在问题 → 加载
    references/personas/adversarial.md
  2. Security(安全) — 威胁建模、OWASP规范、输入处理检查 → 加载
    references/personas/security.md
  3. Architecture(架构) — 设计、抽象、耦合性、模块化检查 → 加载
    references/personas/architecture.md
  4. Performance(性能) — 算法复杂度、内存、I/O模式检查 → 加载
    references/personas/performance.md
  5. All(全部) — 运行所有角色;加载全部4个参考文档,并将步骤3锁定为并行模式
若用户选择“全部”,则视为选择
[adversarial, security, architecture, performance]
。即使用户在请求中已指定某个角色,也不要省略“全部”选项——用户可能希望扩大审查范围。
仅加载用户实际选择的参考文档。除非用户选择“全部”,否则不要预加载所有4个文档——这是渐进式披露设计的核心。
若用户选择多个角色,步骤3优先选择并行模式(multi-bg-agent或agent-team)。同一进程中依次运行多个角色速度慢且会污染上下文。

4. Run the review

3. 选择运行模式

In-process: apply the loaded persona reference as the guiding lens. Read the code in scope, produce findings. If multiple personas were selected and the user accepted in-process anyway, run them one at a time and merge findings using the rules in
references/parallel-execution.md
§"Step 4 — Merge".
Cross-model: see
references/cross-model-handoff.md
— uses the bundled
invoke-cli.mjs
with secret-shield preflight + prompt-shield wrap. Capture the child CLI's stdout verbatim as the review.
Multi-bg-agent / Agent team: see
references/parallel-execution.md
— fan out one persona per agent with shared scope, collect each agent's findings, merge into one three-tier report with persona tags.
四种模式按审查质量排序(最佳→最差)。评审Agent应默认选择主机上实际可用的最高质量选项
模式名称质量等级适用场景参考文档
Cross-model handoff★★★ 推荐使用任何需要避免偏见/盲点的审查场景。同一机器上的不同模型能发现当前模型遗漏的问题——尤其适用于对抗性/安全/架构视角的审查。搭配
secret-shield
使用。
references/cross-model-handoff.md
Agent team★★ 基线水平(多角色)当主机支持原生团队功能(Claude Code Teams)时,用于多角色并行审查。使用同一模型,但每个角色拥有独立的上下文。
references/parallel-execution.md
Multi-bg-agent★★ 基线水平(多角色)不支持团队功能的Agent上的多角色并行审查。使用同一模型,每个角色拥有独立上下文。
references/parallel-execution.md
In-process★ 仅作为 fallback最后选择——同一Agent、同一上下文、同一模型。偏见风险最高。仅在无其他CLI可用且不支持并行启动时使用。(无额外参考文档)
**为何推荐cross-model模式:**代码审查是一个独立性问题。第二个模型——即使是更小的模型——拥有不同的训练数据、不同的先验知识和不同的盲点,因此能发现第一个模型忽略的问题。同模型并行Agent(bg-agent/team)能减少上下文污染,但仍共享模型偏见。进程内审查则同时存在这两个问题:相同的偏见和相同的上下文。
首先检测可用CLI——这是模式推荐的依据:
bash
node scripts/detect-clis.mjs --available-only --pretty
该脚本输出
$PATH
中可用的AI CLI(codex、gemini、aider等)的JSON信息。
推荐逻辑按以下顺序执行:
  1. 至少有一个非当前Agent的CLI可用→推荐cross-model handoff。这是强默认选项——即使用户未明确要求“第二意见”也应推荐。对于多角色审查,可以选择一个CLI依次运行所有角色,或(若有多个CLI可用)将角色分配到不同CLI。
  2. 无其他CLI且选择≥2个角色→推荐同模型并行模式。若主机支持原生团队功能(Claude Code)则使用agent team,否则使用multi-bg-agent
  3. 无其他CLI且仅选择1个角色→ fallback到in-process。需明确告知用户:“本机无其他模型——将运行同模型进程内审查;审查质量较低”。
  4. 切勿默认使用in-process而不告知用户。若最终使用in-process,是因为cross-model不可用且并行模式不适用,必须向用户说明。
在运行前需询问用户(使用
AskUserQuestion
或等效方式)确认推荐的模式。仅加载所选模式对应的参考文档。

5. Format the output

4. 执行审查

Apply
references/review-output-format.md
:
  • Group findings by severity (
    error
    /
    warn
    /
    info
    — matches our skill-tools tiers)
  • Use
    file:line
    references so the user can click to source
  • Include a one-line summary at the top
  • Reference, don't reproduce — describe the issue and point to
    file:line
    ; do not paste code or diff lines into output (the user already has the file open)
**In-process模式:**将加载的角色参考文档作为审查指导准则。读取范围内的代码,生成审查结果。若用户选择多个角色且仍接受in-process模式,则依次运行每个角色,并按照
references/parallel-execution.md
中“步骤4——合并”的规则合并结果。
**Cross-model模式:**参考
references/cross-model-handoff.md
——使用内置的
invoke-cli.mjs
,搭配secret-shield预检和prompt-shield包装。将子CLI的标准输出直接作为审查结果。
**Multi-bg-agent / Agent team模式:**参考
references/parallel-execution.md
——为每个角色分配一个Agent并行处理,收集每个Agent的审查结果,合并为带有角色标签的三级报告。

Examples

5. 格式化输出

<example> <input>"review the auth refactor in src/auth/" with adversarial persona</input> <output> 1. Scope: read every file under `src/auth/`. 2. Persona: present 5 options (Adversarial, Security, Architecture, Performance, All); user picks Adversarial. Load `references/personas/adversarial.md`. 3. Mode: run `detect-clis.mjs --available-only` — `codex` is on `$PATH`. Recommend **cross-model via codex** (less biased than running adversarial in the same model). User accepts; load `references/cross-model-handoff.md`. 4. Invoke: `git diff src/auth/ | node scripts/invoke-cli.mjs codex --instructions references/personas/adversarial.md --untrusted-content - --secret-mode redact --timeout 120`. Capture stdout verbatim. 5. Present the review with a header noting the model used (`# Code review (via OpenAI Codex 0.128.0)`). </output> </example> <example> <input>"all reviews on src/auth/" — multi-persona, no other CLI on this machine</input> <output> 1. Scope: read every file under `src/auth/`. 2. Personas: user picked "All" (option 5) — expand to `[adversarial, security, architecture, performance]`, load all four persona refs. 3. Mode: run `detect-clis.mjs --available-only` — empty. No cross-model option. Host is Claude Code, so recommend **agent team** (same-model parallel; baseline quality). User accepts; load `references/parallel-execution.md`. 4. Resolve scope once, then spawn 4 agents (one per persona) in a single message with `team_name` set. Each agent gets the persona body + scope + output-format spec inlined. Wait for all four. Merge findings: dedupe by `(file, line, normalized-message)`, regroup by severity, tag with persona. 5. Output one combined three-tier report headed `# Code review — agent-team (4 personas)`. </output> </example> <example> <input>"security review of the staged changes via codex" — cross-model second opinion</input> <output> 1. Scope: `git diff --staged`. 2. Persona: load `references/personas/security.md`. 3. Mode: cross-model (user asked for codex). Run `node scripts/detect-clis.mjs --available-only`. Pick `codex`. 4. Invoke: pipe the diff via stdin and point `--instructions` at the persona ref already on disk. `git diff --staged | node scripts/invoke-cli.mjs codex --instructions references/personas/security.md --untrusted-content - --secret-mode redact --timeout 120`. Capture stdout. 5. Present the verbatim output with a header noting the model used. </output> </example> <example> <good> Loaded only `references/personas/adversarial.md` because the user picked "adversarial". Other persona refs stayed on disk. After CLI detection landed on cross-model, loaded `cross-model-handoff.md` only; `parallel-execution.md` stayed unloaded. </good> <bad> Pre-loaded all four persona references, `cross-model-handoff.md`, and `parallel-execution.md` "just in case". Wastes context budget on every dispatch. </bad> <bad> Defaulted to in-process for a single-persona review without first running `detect-clis.mjs`. Cross-model is the recommended mode whenever another CLI is available — silently picking in-process gives a more biased review. </bad>
The bad examples violate the skill's design: progressive disclosure (load only what's needed) and independence-by-default (prefer a different model when one is on the machine).
遵循
references/review-output-format.md
  • 按严重程度分组(
    error
    /
    warn
    /
    info
    ——与skill-tools的等级匹配)
  • 使用
    file:line
    引用,方便用户点击跳转到源代码
  • 在顶部添加一行摘要
  • 仅引用不复制——描述问题并指向
    file:line
    ;不要将代码或diff行粘贴到输出中(用户已打开对应文件)

Security

示例

Cross-model mode forwards local file / diff content to a third-party AI CLI on the same machine. That crosses a trust boundary even though the source content is local: code under review can contain embedded secrets (API keys, tokens) and prompt-injection-style markers (intentionally or accidentally). Two complementary mitigations are built into
scripts/invoke-cli.mjs
and fire when the agent uses the
--instructions <file> --untrusted-content <file>
form.
Credential exfiltration: the script runs a regex-based secret scan on the content before composing the prompt. Default
--secret-mode scan
refuses to forward when any known secret format (AWS, GitHub, OpenAI, Anthropic, Slack, Stripe, Google API, JWT, PEM private keys) is detected.
--secret-mode redact
substitutes
[REDACTED-{type}-{n}]
placeholders.
--secret-mode allow
skips the check (use only when you've already audited the content). Source:
scripts/secret-shield/
.
Prompt injection: the script generates a fresh 12-hex salt per invocation and wraps the content in
<untrusted-{{salt}}>...</untrusted-{{salt}}>
with an anti-injection preamble before piping to the child CLI. Attacker-embedded closing tags (whether intentional or accidental) can't escape the wrap because they can't predict the salt. Source:
scripts/prompt-shield/
.
In-process review, multi-bg-agent, and agent-team do not cross trust boundaries — they all run within the same agent harness on the local machine, no external sink, no preflight needed. Cross-model is the only mode where preflight applies. Background and threat model:
contributing/prompt-injection.md
.
<example> <input>"审查src/auth/中的认证重构代码",使用对抗性角色</input> <output> 1. 范围:读取`src/auth/`下的所有文件。 2. 角色:提供5个选项(对抗性、安全、架构、性能、全部);用户选择对抗性。加载`references/personas/adversarial.md`。 3. 模式:运行`detect-clis.mjs --available-only`——`codex`在`$PATH`中。推荐**通过codex进行cross-model审查**(比同模型运行对抗性审查的偏见更少)。用户接受;加载`references/cross-model-handoff.md`。 4. 执行:`git diff src/auth/ | node scripts/invoke-cli.mjs codex --instructions references/personas/adversarial.md --untrusted-content - --secret-mode redact --timeout 120`。直接捕获标准输出。 5. 展示审查结果,顶部添加说明使用的模型(`# 代码审查(通过OpenAI Codex 0.128.0)`)。 </output> </example> <example> <input>"对src/auth/进行全角色审查"——多角色,本机无其他CLI</input> <output> 1. 范围:读取`src/auth/`下的所有文件。 2. 角色:用户选择“全部”(选项5)——扩展为`[adversarial, security, architecture, performance]`,加载全部4个角色参考文档。 3. 模式:运行`detect-clis.mjs --available-only`——无结果。无cross-model选项。主机为Claude Code,因此推荐**agent team**(同模型并行;基线质量)。用户接受;加载`references/parallel-execution.md`。 4. 确定范围后,生成4个Agent(每个角色对应一个),设置`team_name`。每个Agent获取角色文档内容+审查范围+输出格式规范。等待所有Agent完成。合并结果:按`(file, line, normalized-message)`去重,按严重程度重新分组,添加角色标签。 5. 输出一份合并后的三级报告,标题为`# 代码审查 — agent-team(4个角色)`。 </output> </example> <example> <input>"通过codex对暂存的变更进行安全审查"——cross-model第二意见</input> <output> 1. 范围:`git diff --staged`。 2. 角色:加载`references/personas/security.md`。 3. 模式:cross-model(用户指定使用codex)。运行`node scripts/detect-clis.mjs --available-only`。选择`codex`。 4. 执行:将diff内容通过标准输入传递,并将`--instructions`指向本地的角色参考文档。`git diff --staged | node scripts/invoke-cli.mjs codex --instructions references/personas/security.md --untrusted-content - --secret-mode redact --timeout 120`。捕获标准输出。 5. 展示原始输出,顶部添加说明使用的模型。 </output> </example> <example> <good> 仅加载`references/personas/adversarial.md`,因为用户选择了“对抗性”角色。其他角色参考文档未加载。检测到可用CLI后确定使用cross-model模式,仅加载`cross-model-handoff.md`;`parallel-execution.md`未加载。 </good> <bad> 预加载了全部4个角色参考文档、`cross-model-handoff.md`和`parallel-execution.md`“以防万一”。每次调用都会浪费上下文预算。 </bad> <bad> 未先运行`detect-clis.mjs`就默认使用in-process模式进行单角色审查。只要有其他CLI可用,cross-model就是推荐模式——默认选择in-process会导致审查偏见更严重。 </bad>
错误示例违反了Skill的设计原则:渐进式披露(仅加载所需内容)和默认独立性(当本机有其他模型时优先选择不同模型)。

References

安全说明

  • references/personas/adversarial.md
    — harsh, devil's-advocate persona
  • references/personas/security.md
    — security-focused review lens
  • references/personas/architecture.md
    — design / coupling / modularity
  • references/personas/performance.md
    — complexity, memory, I/O
  • references/cross-model-handoff.md
    — how to invoke detected CLIs
  • references/parallel-execution.md
    — multi-bg-agent + agent-team fan-out and merge
  • references/review-output-format.md
    — three-tier output spec
  • scripts/detect-clis.mjs
    — node script, ships with the skill, probes for ~20 AI CLIs and emits JSON
Cross-model模式会将本地文件/diff内容转发到同一机器上的第三方AI CLI。即使源内容是本地的,这也跨越了信任边界:待审查的代码可能包含嵌入的密钥(API密钥、令牌)和提示注入式标记(有意或无意)。
scripts/invoke-cli.mjs
内置了两种互补的缓解措施,当Agent使用
--instructions <file> --untrusted-content <file>
形式时会触发。
**凭证泄露防护:**脚本在构建提示前会对内容进行基于正则表达式的密钥扫描。默认
--secret-mode scan
会在检测到任何已知密钥格式(AWS、GitHub、OpenAI、Anthropic、Slack、Stripe、Google API、JWT、PEM私钥)时拒绝转发。
--secret-mode redact
会替换为
[REDACTED-{type}-{n}]
占位符。
--secret-mode allow
会跳过检查(仅在已审核内容后使用)。来源:
scripts/secret-shield/
**提示注入防护:**脚本每次调用都会生成一个12位十六进制的盐值,并将内容包裹在
<untrusted-{{salt}}>...</untrusted-{{salt}}>
中,同时添加防注入前缀,再传递给子CLI。攻击者嵌入的闭合标签(无论有意或无意)无法突破包装,因为无法预测盐值。来源:
scripts/prompt-shield/
In-process审查、multi-bg-agent和agent-team模式不会跨越信任边界——它们都在本地机器的同一Agent框架内运行,无外部输出,无需预检。只有cross-model模式需要预检。背景和威胁模型:
contributing/prompt-injection.md

参考文档

  • references/personas/adversarial.md
    — 严苛的唱反调角色
  • references/personas/security.md
    — 安全导向的审查视角
  • references/personas/architecture.md
    — 设计/耦合性/模块化审查
  • references/personas/performance.md
    — 复杂度、内存、I/O审查
  • references/cross-model-handoff.md
    — 如何调用检测到的CLI
  • references/parallel-execution.md
    — multi-bg-agent + agent-team的分发和合并逻辑
  • references/review-output-format.md
    — 三级输出规范
  • scripts/detect-clis.mjs
    — Node脚本随Skill发布,探测约20种AI CLI并输出JSON结果