Super Mega Ultra Bestest Skill Builder

Build skills by running 3 competing approaches in parallel, then merging the best of each.

Args

The args string is the skill specification. It should contain:
  • The skill name (required)
  • What the skill does, its modes/subcommands, and any context needed
Example:
/super-mega-ultra-bestest-skill-builder audit-permissions — wraps a TypeScript analyzer, default mode runs report, reset mode archives logs
If args are vague, ask one clarifying question before proceeding. Don't over-interview.

Process

1. Parse the spec

Extract from args:
  • Skill name (first word or hyphenated phrase before any separator)
  • Skill purpose (everything else)
  • Scope: user-level (~/.claude/skills/) or project-level (.claude/skills/); default to user-level unless the spec mentions a specific project
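The extraction rules above can be sketched as a small parser. This is a minimal illustration, not part of the skill itself; the separator handling (em dash, double hyphen, or colon) and the `parse_spec` name are assumptions:

```python
import re

def parse_spec(args: str) -> dict:
    """Split a skill spec string into name, purpose, and scope.

    Assumes the name is the first word or hyphenated phrase, optionally
    followed by an em dash, double hyphen, or colon separator.
    """
    match = re.match(r"\s*([\w-]+)\s*(?:[—:]|--)?\s*(.*)", args, re.DOTALL)
    if not match:
        raise ValueError("empty spec")
    name, purpose = match.group(1), match.group(2).strip()
    # Default to user-level unless the spec mentions a specific project
    scope = ".claude/skills/" if "project" in purpose.lower() else "~/.claude/skills/"
    return {"name": name, "purpose": purpose, "scope": scope}
```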

2. Launch 3 parallel agents

Create /tmp/skill-compare/{native,superpowers,manual}/ directories, then launch 3 background agents simultaneously via the Agent tool. Each gets the SAME spec but a DIFFERENT approach:
Agent 1 — "native-builder": Invoke skill-creator:skill-creator via the Skill tool, then follow its process. Write to /tmp/skill-compare/native/SKILL.md.
Agent 2 — "superpowers-builder": Invoke superpowers:writing-skills via the Skill tool, then follow its structural guidance (skip live subagent pressure testing but follow CSO, token efficiency, frontmatter, and checklist). Write to /tmp/skill-compare/superpowers/SKILL.md.
Agent 3 — "manual-builder": No skill-building guide. Write the SKILL.md using general best practices and intuition only. Write to /tmp/skill-compare/manual/SKILL.md.
All agents must be told:
  • Write ONLY to their /tmp/skill-compare/<approach>/SKILL.md path
  • Do NOT write to ~/.claude/skills/ or .claude/skills/
  • The skill spec (passed through verbatim from args)
  • Brief context on what a Claude Code skill is (YAML frontmatter with name + description, markdown body with instructions)
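The "brief context" in the last bullet amounts to a shape like the following. This is an illustrative sketch only; the skill name and wording are hypothetical:

```markdown
---
name: audit-permissions
description: Use when auditing permissions in a TypeScript project
---

# audit-permissions

1. Default mode: run the analyzer and produce a report.
2. Reset mode: archive the logs instead.
```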
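The directory setup and the three per-approach prompts can be assembled mechanically. A sketch under stated assumptions: the `build_prompts` helper and the exact prompt wording are illustrative, not a prescribed implementation:

```python
import pathlib

# One instruction per approach; all three get the same spec verbatim
APPROACHES = {
    "native": "Invoke skill-creator:skill-creator via the Skill tool and follow its process.",
    "superpowers": "Invoke superpowers:writing-skills via the Skill tool and follow its structural guidance.",
    "manual": "Use no skill-building guide; rely on general best practices and intuition only.",
}

def build_prompts(spec: str) -> dict:
    """Create one sandbox directory and one agent prompt per approach."""
    prompts = {}
    for approach, instruction in APPROACHES.items():
        out_dir = pathlib.Path("/tmp/skill-compare") / approach
        out_dir.mkdir(parents=True, exist_ok=True)
        prompts[approach] = (
            f"{instruction}\n"
            f"Skill spec (verbatim): {spec}\n"
            f"Write ONLY to {out_dir / 'SKILL.md'}; "
            "do NOT write to ~/.claude/skills/ or .claude/skills/.\n"
            "A Claude Code skill is YAML frontmatter (name + description) "
            "plus a markdown body of instructions."
        )
    return prompts
```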

3. Compare results

Once all 3 complete, read all 3 files and compare on these dimensions:
  • Discoverability: does the description help Claude find it? Trigger-only (good) vs workflow summary (bad per CSO)?
  • Clarity: can Claude follow the instructions unambiguously? Are steps numbered?
  • Completeness: all modes covered? Edge cases? Troubleshooting?
  • Token efficiency: word count vs information density. Target: <500 words for non-startup skills
  • Actionability: concrete actions vs vague guidance? Explicit guardrails for failure modes?
Present the comparison as a table showing word counts, token costs, and per-dimension winners.
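The word-count and token-cost columns of that table can be computed mechanically. A minimal sketch: the 4-characters-per-token estimate is a rough heuristic, not a real tokenizer, and `compare_drafts` is an illustrative name:

```python
import pathlib

def compare_drafts(base: str = "/tmp/skill-compare") -> dict:
    """Collect word counts and rough token estimates for each draft SKILL.md."""
    stats = {}
    for approach in ("native", "superpowers", "manual"):
        text = (pathlib.Path(base) / approach / "SKILL.md").read_text()
        words = len(text.split())
        stats[approach] = {
            "words": words,
            "tokens_est": len(text) // 4,  # crude heuristic: ~4 chars per token
            "under_500": words < 500,      # target for non-startup skills
        }
    return stats
```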

4. Synthesize

Cherry-pick the best elements from each approach into a final skill. For each element kept, note which approach it came from and why. Write the final skill to the target location:
  • User-level: ~/.claude/skills/<name>/SKILL.md
  • Project-level: .claude/skills/<name>/SKILL.md

5. Verify

  • Confirm the skill appears in the available skills list (check system reminder)
  • Report final word count
  • Show what was taken from each approach

Known Patterns from Prior Runs

These patterns consistently emerge — use them to inform the merge:
  • Superpowers excels at: merge guardrails, CSO-compliant descriptions, cross-surface compatibility, explicit failure-mode prevention
  • Manual excels at: unique safety guardrails humans think of, natural "done" summary steps, concise structure
  • Native (skill-creator) excels at: comprehensive coverage, but tends to over-explain internals Claude doesn't need — trim aggressively
  • Your own judgment matters most for: token efficiency and cutting bloat