Skill Audit
Audit a skill before calling it ready. Favor Tessl output, repo conventions, and the skill's actual file shape over taste.
Tessl is the skill-evaluation CLI this repo uses to review skills, score their quality, and suggest improvements. See tessl.io and the CLI docs. If `tessl` or `npx` is unavailable, install or initialize Tessl before running the audit loop.
Principles
- Evidence beats hunches
- Discovery matters: score `name` and `description` before polishing the body
- Keep `SKILL.md` lean; move depth into `references/` or scripts only when they earn their keep
- Prefer the smallest change set that improves activation, clarity, or verification
- Audit only the requested scope; flag adjacent issues separately
Handoffs
- Need to update AGENTS, README, or other repo docs beyond the skill surface -> hand off to the repo's docs-editing skill
- Need to prove a product or code change works on real surfaces -> hand off to a verification or testing skill
- Need to review general code or a PR instead of a skill package -> hand off to the code-review skill
Before You Start
- Define scope: one skill folder or the whole skills repo
- Load the target repo's guidance files, such as `AGENTS.md`, `README.md`, or repo rules, when present
- Read the target `SKILL.md` first, then nearby references, scripts, and docs only as needed
- Pick the right Tessl loop:
  - single skill: `npx tessl skill review --json skills/<name>`
  - full repo batch: use a repo wrapper such as `./scripts/review-skills.sh` if one exists; otherwise run direct Tessl reviews per skill
  - optimizer only when explicitly requested: `npx tessl skill review --optimize --yes --max-iterations 1 skills/<name>`
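The full-repo batch loop can be sketched as below. The fixture setup and the stubbed `review` function exist only to make the loop runnable here; in a real repo, replace `review` with `npx tessl skill review --json` and drop the fixture.

```shell
set -eu
# Fixture: a throwaway repo layout standing in for a real skills/ tree.
demo=$(mktemp -d)
mkdir -p "$demo/skills/example-skill" "$demo/reviews"
printf 'name: example-skill\n' > "$demo/skills/example-skill/SKILL.md"
cd "$demo"

# Stub for `npx tessl skill review --json`; swap in the real CLI call.
review() { printf '{"skill": "%s", "score": 0}\n' "$1"; }

# Review every skill folder and collect one JSON result per skill.
for dir in skills/*/; do
  name=$(basename "$dir")
  review "$name" > "reviews/$name.json"
done
ls reviews
```

One result file per skill keeps failures isolated and makes reruns cheap to diff.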
Workflow
1. Run Tessl first
Capture the score, summary, and concrete suggestions before proposing edits. Prefer per-skill `--json` runs when you need a narrow audit loop or structured output. If Tessl is missing, install it first or follow the official docs before continuing.
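A hedged sketch of capturing structured output: the JSON shape below (a `score` field and a `suggestions` array) is an assumption about what a `--json` review might emit, not a documented Tessl contract, so adapt the field names to the real output.

```shell
set -eu
# Stand-in for a captured `npx tessl skill review --json` result.
out=$(mktemp)
cat > "$out" <<'EOF'
{"skill": "skill-audit", "score": 82, "suggestions": ["tighten description"]}
EOF

# Pull the score and the first concrete suggestion out of the capture.
python3 - "$out" <<'PY'
import json, sys
review = json.load(open(sys.argv[1]))
print(f"score={review['score']} first_suggestion={review['suggestions'][0]}")
PY
```

Recording the score before editing gives the rerun in step 6 a baseline to compare against.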
2. Audit discovery
Use references/scorecard.md to check:
- whether `name` is specific and memorable
- whether `description` states what the skill does, when to use it, and its main boundary
- whether likely user phrasing would activate the skill without extra prompting
Quick example:
- weak `description`: "Helps with skills"
- stronger `description`: "Audits existing skills with Tessl scoring, metadata checks, and repo conventions"
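A quick discovery probe along these lines: read `description` out of a skill's frontmatter and flag obviously generic phrasing. The inline fixture and the "Helps with" pattern are illustrative assumptions, not a real rubric.

```shell
set -eu
# Fixture SKILL.md with YAML frontmatter, standing in for the real file.
skill=$(mktemp -d)
cat > "$skill/SKILL.md" <<'EOF'
---
name: skill-audit
description: Audits existing skills with Tessl scoring, metadata checks, and repo conventions
---
EOF

# Extract the description line and flag empty or generic phrasing.
desc=$(sed -n 's/^description: //p' "$skill/SKILL.md")
case "$desc" in
  ""|Helps\ with*) echo "weak description" ;;
  *) echo "specific description: $desc" ;;
esac
```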
3. Audit workflow shape
Check that the skill tells the agent how to start, what evidence to gather, what not to change, and what "done" looks like.
Concrete failure signs:
- vague verbs like "help" without a workflow
- missing output expectations
- commands or paths that cannot be run as written
- a fragile task described with high-level prose instead of tighter guardrails
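The "paths that cannot be run as written" check can be mechanized as below; the fixture folder stands in for a real skill package, and the `references/` pattern is just one example of a repo-relative path worth verifying.

```shell
set -eu
# Fixture: a skill folder whose SKILL.md cites one real and one stale path.
work=$(mktemp -d)
mkdir -p "$work/references"
cd "$work"
printf 'Use references/scorecard.md and references/missing.md\n' > SKILL.md
touch references/scorecard.md

# Report every references/ path mentioned that does not exist on disk.
grep -o 'references/[A-Za-z.-]*' SKILL.md | sort -u | while read -r path; do
  if [ -e "$path" ]; then echo "ok: $path"; else echo "stale: $path"; fi
done
```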
4. Audit progressive disclosure
Check whether detail belongs in `SKILL.md`, `references/`, or executable scripts:
- keep the core workflow in `SKILL.md`
- move dense doctrine, examples, or score rubrics into `references/`
- use scripts for repeated deterministic work instead of asking the model to recreate them
Use references/best-practices.md when the skill feels bloated, under-specified, or hard to trigger.
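One rough bloat heuristic is a line-count budget; the 150-line threshold below is an assumption made for illustration, not a rule from this repo or from Tessl.

```shell
set -eu
# Fixture: a 40-line SKILL.md standing in for the file under audit.
doc=$(mktemp)
seq 1 40 | sed 's/^/- step /' > "$doc"

# Compare the line count against an assumed budget.
lines=$(wc -l < "$doc")
if [ "$lines" -gt 150 ]; then
  echo "over budget ($lines lines): move detail into references/ or scripts"
else
  echo "within budget ($lines lines)"
fi
```

A hard number matters less than the trend: a SKILL.md that keeps growing is usually hiding reference material.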
5. Audit repo fit
Check for repo-relative links, stale paths, duplicated guidance, and conflicts with the source repo's conventions.
6. Synthesize the smallest useful change set
Separate blockers from polish. If edits are requested, fix the highest-leverage issues first, rerun Tessl, and report what improved.
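The rerun report can be as small as a before/after score diff. As above, the `score` field name is an assumed JSON shape for a captured Tessl review, so adjust the extraction to the real output.

```shell
set -eu
# Fixtures: scores captured before and after the edits.
before_file=$(mktemp); after_file=$(mktemp)
printf '{"score": 74}\n' > "$before_file"
printf '{"score": 82}\n' > "$after_file"

# Pull the numeric score out of each capture and report the delta.
get_score() { sed -n 's/.*"score": \([0-9]*\).*/\1/p' "$1"; }
before=$(get_score "$before_file")
after=$(get_score "$after_file")
echo "score: $before -> $after"
```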
Output
After an audit, report:
- scope audited
- Tessl command and score
- strongest parts worth keeping
- prioritized findings with file references
- smallest recommended changes
- rerun status if edits were made
References
- references/scorecard.md — audit dimensions, severity, and a compact review template
- references/best-practices.md — distilled skill-authoring guidance from common repo conventions and Claude's skill best-practices guide