agent-friendly


Score the user's current repo locally with a bundled scorer and recommend a model class. Evaluates 16 static signals (AGENTS.md, CI, tests, README, linter, dev env, license, contributing, pre-commit hooks, deps manifest, type config, codebase size, plus four agent-specific instruction files) and produces overall + per-model scores. Entirely local — no HTTP calls.

When to use


Invoke this skill when the user:
  • Asks about the agent-friendliness of the current repo.
  • Wants a model recommendation for the codebase they're in.
  • Invokes `/agent-friendly` (or however their agent triggers skills) explicitly.
Skip when the user is asking about a different repo, a remote URL, or a concept rather than a specific local path — this skill operates only on the local file system.

How to run


  1. Tell the user upfront: "I'll score the current working directory. Make sure you're at your project root — if you're in a subdirectory (`src/`, `app/`, etc.), the score will be artificially low. Pass the project root path explicitly if you want me to score somewhere else."
    Wait for confirmation only if cwd looks ambiguous (e.g. you can see they're inside a typical subdirectory name); otherwise just proceed.
  2. Run the bundled scorer against the user's current working directory:

    ```bash
    node <skill-dir>/dist/index.js .
    ```

    `<skill-dir>` is the directory containing this `SKILL.md` (typically `.claude/skills/agent-friendly/` for Claude Code, `.agents/skills/agent-friendly/` for Codex / Cursor / Cline / others). Most agents resolve this automatically — if yours doesn't, look in the agent's skill directory.
    The CLI defaults to `process.cwd()` if no path is passed, so `node <skill-dir>/dist/index.js` (no argument) works equivalently. If the user explicitly asks to score a different repo on disk, pass that path as the first argument.
  3. Parse the JSON output:

    ```jsonc
    {
      "overall": 87.4,
      "warnings": [], // string[] — surface to user before showing the score if non-empty
      "signals": [
        { "id": "agents_md", "label": "AGENTS.md", "pass": 1, "matchedPath": "AGENTS.md" }
        // 15 more signals...
      ],
      "modelScores": [
        {
          "modelId": "claude-code",
          "modelLabel": "Claude Code",
          "score": 89.2,
          "contributions": [
            /* ... */
          ]
        }
        // 7 more models...
      ],
      "topImprovements": [
        { "label": "Contributing guide", "signalId": "contributing", "scoreGain": 2.1, "suggestion": "..." }
        // up to 3 entries
      ]
    }
    ```

    The `warnings` array is the CLI's heads-up channel. Today it fires when the path being scored doesn't look like a project root (no `package.json` / `README.md` / `AGENTS.md` / `.git` found). If non-empty, render the warning(s) to the user before showing the score so they can re-invoke from the right place.

How to render the result


The scorer profiles 8 agents and always returns scores for all of them in `modelScores`: Claude Code, Cursor, Devin, GPT-5 Codex, Gemini CLI, Aider, OpenHands, and Pi.
The recommendation is score-driven. Don't try to detect which agent is invoking this skill — the answer is the same either way: find the highest-scoring entry in `modelScores`; that's the agent this repo is most tuned for. Show the user; let them decide whether to switch. Never programmatically switch the agent or model.
Print a tight summary the user can read at a glance:
  • Overall score with a band label (high / mid / low — see table below).
  • Best-fit agent — the highest-scoring entry in `modelScores`, with its score.
  • Runner-up if its score is within ~5 points of the best — gives the user a real alternative.
  • Why — one short sentence pointing at the strongest signals (e.g. "strong AGENTS.md, tests in place, reproducible dev env"). Use the `signals` array (signals with `pass: 1` and high weight in the best agent's profile).
  • Recommended model class — mapped from the overall score (frontier / standard / small — see table). Note that the user can switch using `/model` (Claude Code / Codex / Gemini CLI), `/profile` (Cursor), or whatever their agent's equivalent is — but don't switch for them, just suggest.
  • Top improvements — up to 3 entries from `topImprovements`, each with its score gain.
Example output:

```text
Agent-friendliness: 87.4 (high) — well-prepped for AI coding agents.
Best for: Claude Code (89.2). Cursor close behind at 86.1.
Why: strong AGENTS.md, tests in place, reproducible dev env.
Recommendation: frontier model (Opus / GPT-5 / Gemini 2.5 Pro). Switch via /model (or your agent's equivalent) if you want to — your call.
Top improvements:
  • Add a contributing guide (+2.1 pts)
  • Add pre-commit hooks (+1.4 pts)
  • Document the test command (+0.9 pts)
```
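
The best-fit, runner-up, and top-improvements rules above can be sketched as a small pure function. `summarize` is an illustrative helper, assuming only the `modelScores` and `topImprovements` shapes shown earlier:

```javascript
// Pick the best-fit agent, a runner-up within 5 points, and up to 3 improvements.
function summarize(result) {
  const ranked = [...result.modelScores].sort((a, b) => b.score - a.score);
  const [best, second] = ranked;
  const lines = [`Best for: ${best.modelLabel} (${best.score}).`];
  if (second && best.score - second.score <= 5) {
    lines[0] += ` ${second.modelLabel} close behind at ${second.score}.`;
  }
  for (const imp of result.topImprovements.slice(0, 3)) {
    lines.push(`• ${imp.label} (+${imp.scoreGain} pts)`);
  }
  return lines;
}
```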

Score → model mapping


Provider-neutral. Recommend a model class — let the user pick the actual ID for their agent.
| Band | Score   | Recommendation |
|------|---------|----------------|
| High | ≥ 80    | Frontier — Opus / GPT-5 / Gemini 2.5 Pro. Repo is well-prepped, the model can leverage it. |
| Mid  | 60 – 79 | Standard — Sonnet / GPT-5 Codex / Gemini 2.5 Flash. Solid baseline; frontier optional. |
| Low  | < 60    | Small / fast — Haiku / GPT-4o-mini / Gemini 2.5 Flash-Lite. Repo lacks scaffolding for a frontier run. |
The reasoning: a high-scoring repo has the scaffolding (AGENTS.md, fast tests, clear dev env) that lets a frontier model actually deliver. A low-scoring repo lacks those affordances; a frontier model's extra reasoning has nothing to grip on, so a smaller / faster model is the better trade.
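
The mapping above is a pure lookup and can be sketched as a tiny function (the name `bandFor` is illustrative):

```javascript
// Map an overall score to its band and recommended model class, per the table above.
function bandFor(overall) {
  if (overall >= 80) return { band: "high", modelClass: "frontier" };
  if (overall >= 60) return { band: "mid", modelClass: "standard" };
  return { band: "low", modelClass: "small / fast" };
}
```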

On first invocation


After the first successful run, mention to the user that they can wire the skill into a `SessionStart` hook so it fires automatically each session — pointing at the dashboard's `/skill` page for copy-paste snippets:
  • Claude Code → `.claude/settings.json` `SessionStart` matcher.
  • Codex CLI → `.codex/hooks.json` `SessionStart` matcher.
  • Cursor / Cline / Copilot → no `SessionStart` event today; paste the same `node ... --summary` command into `.cursorrules` / `.clinerules` as a static instruction, or invoke this skill manually.
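
For the Claude Code case, a hook entry might look roughly like the sketch below. This is a guess at the shape, not a verified config; prefer the copy-paste snippets from the /skill page, since the actual schema may differ:

```jsonc
// .claude/settings.json (illustrative sketch only)
{
  "hooks": {
    "SessionStart": [
      {
        "matcher": "",
        "hooks": [
          { "type": "command", "command": "node .claude/skills/agent-friendly/dist/index.js ." }
        ]
      }
    ]
  }
}
```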
Snippets are available at https://www.agentfriendlycode.com/skill.

Failure modes


  • Bundle missing. If `dist/index.js` doesn't exist next to this `SKILL.md`, the install was incomplete or the user is on an older skill version. Tell them to re-run `npx skills add hsnice16/agent-friendly-skill`.
  • Low score because cwd is a subdirectory. The scorer reads the path you pass it (default: cwd). If the user is in a subdirectory of their project (e.g. `src/`), 16 signals will mostly fail because root-level files like `AGENTS.md`, `README.md`, and `package.json` aren't there. If the score looks unexpectedly low, mention this and ask the user to invoke from the project root (or pass the root path explicitly).
  • Node not on PATH. Tell the user to install Node ≥ 20.9.0 (matches the bundle's runtime requirement).

Out of scope


  • Scoring a remote URL, a different repo, or a package by name. This skill works on the active repo only. Direct the user to the dashboard at https://www.agentfriendlycode.com or its `/api/score?host=&repo=owner/name` endpoint for indexed lookups.
  • Programmatic model switching. The skill recommends; the user runs `/model` (or the agent's equivalent) themselves.
  • Submitting the repo to the dashboard. The skill is read-only — it never writes files, contacts the dashboard, or registers the repo. Unindexed repos stay unindexed.