Evaluate the reproducibility of technical articles. Dispatch a subagent to simulate a first-time reader reproducing the work locally and list missing information. Use as the final check on a draft before publication.
Install:

`npx skill4agent add mizchi/skills tech-article-reproducibility`

## Scoring rubric

| # | Axis | 0 (NG) | 1 (partial) | 2 (OK) |
|---|---|---|---|---|
| 1 | Environment prerequisites stated | No OS / version / required tools listed | Partially listed | Everything listed (OS, lang version, CLI tools) |
| 2 | Code completeness | Fragments only, imports/setup omitted | Only the main part | Full, copy-pasteable form that runs |
| 3 | Command accuracy | Placeholders left as-is | Some placeholders remain | Runnable as-is |
| 4 | Version dependency stated | No mention | Partial | Explicit, e.g. "works on v3.x", "v2 or earlier behaves as X" |
| 5 | Full config files included | Excerpts only | Main keys only | Full minimal working config |
| 6 | Expected output shown | None | Explained in prose | Actual output / screenshot |
| 7 | Handling of errors | Not mentioned | One case touched on | Several major errors + how to handle them |
| 8 | Project prerequisites stated | Author-environment assumptions are implicit | Partially stated | Paths / repo structure / existing config all stated |
| 9 | Link health | Links broken or require auth | Some require auth | All accessible publicly |
| 10 | Author-specific knowledge stated | Helpers / dotfiles assumed implicitly | Partially stated | Fully stated or not required |
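The ten axes above combine into the X/20 score reported at the end of the evaluation. As a minimal sketch of the tallying step (not part of the skill itself; the example scores below are hypothetical):

```python
# Tally a 10-axis reproducibility rubric where each axis scores 0-2.
# The axis names mirror the table above; the scores are hypothetical.
AXES = [
    "Environment prerequisites stated",
    "Code completeness",
    "Command accuracy",
    "Version dependency stated",
    "Full config files included",
    "Expected output shown",
    "Handling of errors",
    "Project prerequisites stated",
    "Link health",
    "Author-specific knowledge stated",
]

def tally(scores: dict[str, int]) -> str:
    # Every axis must be scored exactly once, with a value of 0, 1, or 2.
    assert set(scores) == set(AXES), "score every axis exactly once"
    assert all(s in (0, 1, 2) for s in scores.values()), "each axis is 0, 1, or 2"
    total = sum(scores.values())
    return f"Reproducibility score: {total}/20"

# Example: a draft that fully passes 7 axes and partially passes 3.
example = {name: 2 for name in AXES}
for name in AXES[:3]:
    example[name] = 1
print(tally(example))  # → Reproducibility score: 17/20
```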
## Subagent prompt

You are a reader interested in <the article's subject area> but new to <the tech stack>.
You are going to read this article and try to reproduce the same thing in your local environment.
## Target article
<path to the article file>
## Evaluation axes (10 reproducibility axes)
Score each axis 0–2. Refer to the rubric in the `tech-article-reproducibility` skill:
/Users/mz/.claude/skills/tech-article-reproducibility/SKILL.md
1. Environment prerequisites stated
2. Code completeness
3. Command accuracy
4. Version dependency stated
5. Full config files included
6. Expected output shown
7. Handling of errors
8. Project prerequisites stated
9. Link health (actually verify with WebFetch)
10. Author-specific knowledge stated
## Tasks
1. While reading the article, imagine "where would I get stuck if I reproduced this on my own machine?"
2. Score each axis 0–2 with quoted evidence
3. List the top 5 sticking points with line numbers
## Report structure
- Reproducibility score: X/20 (breakdown table)
- Top 5 sticking points: <line number> <quote> → <why it sticks>
- Missing information: list of things that should be added to the article
- Overall verdict: a subjective estimate, as a percentage, of your chance of reproducing this after reading the article
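Axis 9 tells the subagent to verify links with WebFetch. Outside an agent session, the same check can be sketched in plain Python; the regex, example URLs, and pass criterion here are assumptions for illustration, not part of the skill:

```python
# Sketch of axis 9 (link health): extract markdown links from a draft and
# check that each resolves without credentials. The draft text and URLs
# below are hypothetical.
import re
import urllib.request

MD_LINK = re.compile(r"\[[^\]]*\]\((https?://[^)\s]+)\)")

def extract_links(markdown: str) -> list[str]:
    """Return every http(s) URL used in a [text](url) markdown link."""
    return MD_LINK.findall(markdown)

def check(url: str, timeout: float = 10.0) -> bool:
    """A link 'passes' if a plain HEAD request returns a 2xx status."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except Exception:
        return False

draft = "See the [docs](https://example.com/docs) and [repo](https://example.com/repo)."
print(extract_links(draft))
# → ['https://example.com/docs', 'https://example.com/repo']
```

Links that fail `check` (broken, or gated behind auth and thus returning 3xx/4xx) score 0 on axis 9; links that work but require login score 1.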