Scholar Deep Research
End-to-end academic research workflow that turns a question into a cited, structured report. Built for depth: multi-source federation, transparent ranking, citation chasing, and a mandatory self-critique pass before the report ships.
When to use
Explicit triggers: "literature review", "research report", "state of the art", "survey the field", "what's known about X", "deep research on Y", "systematic review", "scoping review", "compare papers on Z".
Proactive triggers (use without being asked):
- User asks a factual question whose honest answer is "it depends on the literature"
- User frames a research plan and needs the background section
- User is drafting a paper intro/related-work and hasn't yet scoped prior work
- User proposes a method and asks whether it's novel
Do not use when: a single known paper answers the question, the user wants a tutorial (not a survey), or they're debugging code.
Guiding principles
- Scripts over vibes. Every search, dedupe, rank, and export step runs through a script in scripts/. The same input should produce the same output. Do not improvise ranking or counting by eye.
- Sources are federated, not singular. OpenAlex is the primary backbone (free, 240M+ works, no key). arXiv (CS/ML/physics preprints), Crossref (DOI metadata), PubMed (biomedical), DBLP (CS conferences/journals), bioRxiv (life-sci preprints via Europe PMC), and Exa (open-web, requires EXA_API_KEY) fill gaps. Semantic Scholar is also script-driven — build_citation_graph.py --source s2|both is the spine path for Phase 4, with better CS / arXiv / cross-disciplinary coverage than OpenAlex; the two graphs disagree more than you'd expect. The asta MCP tools and Brave Search are skin — used opportunistically for relevance ranking or non-academic context, never on the critical path. If MCP times out, research continues.
- State is persistent. Everything goes through the state file: queries ran, papers seen, decisions made, phase progress. Research becomes resumable and auditable.
- Citations are anchors, not decorations. Every non-trivial claim in the draft carries a [^id] anchor, where the id matches a paper in state. Unanchored claims are treated as hallucinations and fail the gate.
- Saturation, not exhaustion, is the stop signal. A phase ends when a new round of search adds <20% novel papers AND no new paper has >100 citations.
- Self-critique is a phase, not a checkbox. Phase 6 reads the draft with adversarial intent. Its output goes into the report appendix.
The 8-phase workflow (Phase 0..7)
Phase 0: Scope → decompose question, pick archetype, init state
Phase 1: Discovery → multi-source search, dedupe
Phase 2: Triage → rank, select top-N for deep read
Phase 3: Deep read → extract evidence per paper
Phase 4: Chasing → citation graph (forward + backward)
Phase 5: Synthesis → cluster by theme, map tensions
Phase 6: Self-critique → adversarial review, gap finding
Phase 7: Report → render archetype template, export bibliography
Each phase writes its results to the state file before advancing. If the user pauses or a session crashes, the next run reads the state and picks up from the last completed phase.
Phase 0 — Scope
Before searching anything, decompose the question.
- Restate the question in one sentence. Surface ambiguities.
- PICO-style decomposition (or equivalent for non-biomedical fields):
- Population / Problem — what system, species, setting, or phenomenon?
- Intervention / Independent var — what method, factor, or manipulation?
- Comparison — against what baseline or alternative?
- Outcome — what is being measured or claimed?
- Pick an archetype that matches user intent (see references/report_templates.md):
  - Literature review — what is known about X (default)
  - Systematic review — rigorous PRISMA-lite, comparison of many studies on one narrow question
  - Scoping review — what has been studied and how (breadth over depth)
  - Comparative analysis — X vs Y, head-to-head
  - Proposal background — narrative background + gap for a proposal
- Draft keyword clusters — 3-5 Boolean clusters covering synonyms, acronyms, and variant spellings. Include a "negative" cluster (terms to exclude); a sketch of what these clusters can look like follows at the end of this phase.
- Initialize state:
```bash
python scripts/research_state.py --state research_state.json init \
    --question "<restated question>" \
    --archetype literature_review
```
(--state is top-level and applies to every subcommand; init itself takes --question and --archetype, plus one optional argument.)
When in doubt about archetype, ask the user. The choice shapes everything downstream.
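A minimal sketch of what the decomposition and keyword clusters can look like as working notes. Everything here is illustrative (a hypothetical question, made-up cluster strings); the search scripts only ever consume the plain query strings:

```python
# Illustrative only: a hand-drafted PICO decomposition plus Boolean keyword
# clusters for a hypothetical question. Nothing here is a schema the scripts
# require; the search scripts just take the query strings.
decomposition = {
    "population": "adults with type 2 diabetes",
    "intervention": "continuous glucose monitoring",
    "comparison": "fingerstick self-monitoring",
    "outcome": "HbA1c reduction",
}

keyword_clusters = [
    '("continuous glucose monitoring" OR CGM) AND ("type 2 diabetes" OR T2DM)',
    '(CGM OR "flash glucose monitoring") AND (HbA1c OR "glycated hemoglobin")',
    '("self-monitoring of blood glucose" OR SMBG) AND (outcomes OR adherence)',
]

# The "negative" cluster: terms whose presence usually signals an off-topic hit.
negative_cluster = ["type 1 diabetes", "gestational diabetes", "animal model"]

for query in keyword_clusters:
    print(query)
```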
Phase 1 — Discovery
Run searches across all available sources, in parallel where the source can take it. OpenAlex is primary; the others fill gaps.
Where parallelism actually pays off. The right place to fan out is Phase 3 (one agent per paper to read PDFs concurrently — see references/agent_prompts/phase3_deep_read.md). At Phase 1 the bottleneck is the upstream API, not local compute, and parallel fan-out across the same source mostly buys 429s and sticky cooldowns. The skill's bias should be: parallel between different sources, serial within one source. Concretely:
- Parallel-friendly: OpenAlex (polite-pool, very tolerant), Crossref (polite-pool), Exa (paid quota), bioRxiv (Europe PMC).
- Self-serialised (file-locked, automatic): arXiv (≥3s/req), PubMed (≥0.34s/req without NCBI_API_KEY, ≥0.10s with), DBLP (1s buffer to avoid SSL EOF flakes).
The serialised sources use a per-source file lock under ${SCHOLAR_CACHE_DIR:-.scholar_cache}/rate/<source>.lock, so even N parallel script invocations from the same agent queue automatically and sleep the right gap — no agent-side coordination required. Parallel calls don't speed these sources up, but they won't error either.
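For intuition, a rough sketch of that lock-and-sleep pattern. The lock path mirrors the layout above and the gaps mirror the figures just quoted, but this is not the scripts' actual implementation:

```python
import fcntl
import os
import time
from pathlib import Path

# Minimum gap between requests per source, mirroring the figures quoted above.
MIN_GAP = {"arxiv": 3.0, "pubmed": 0.34, "dblp": 1.0}

def rate_limited_call(source: str, do_request):
    """Serialise calls to one source across processes using a per-source file lock."""
    cache_dir = Path(os.environ.get("SCHOLAR_CACHE_DIR", ".scholar_cache"))
    lock_path = cache_dir / "rate" / f"{source}.lock"
    lock_path.parent.mkdir(parents=True, exist_ok=True)
    with open(lock_path, "a+") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)       # parallel invocations queue here
        lock.seek(0)
        last = float(lock.read() or 0)         # timestamp of the previous request
        wait = MIN_GAP.get(source, 0.0) - (time.time() - last)
        if wait > 0:
            time.sleep(wait)                   # sleep the remaining gap
        result = do_request()
        lock.seek(0)
        lock.truncate()
        lock.write(str(time.time()))           # record when this request went out
        return result                          # lock released when the file closes
```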
```bash
# Primary (no API key, always available)
python scripts/search_openalex.py --query "<cluster 1>" --limit 50 --state research_state.json
python scripts/search_openalex.py --query "<cluster 2>" --limit 50 --state research_state.json

# Domain-specific (use when relevant)
python scripts/search_arxiv.py --query "<cluster>" --limit 50 --state research_state.json    # CS/ML/physics preprints
python scripts/search_dblp.py --query "<cluster>" --limit 50 --state research_state.json     # CS gold-standard bibliography (no abstracts)
python scripts/search_pubmed.py --query "<cluster>" --limit 50 --state research_state.json   # biomedical (PubMed)
python scripts/search_biorxiv.py --query "<cluster>" --limit 50 --state research_state.json  # life-sci preprints (bioRxiv + medRxiv via Europe PMC)
python scripts/search_crossref.py --query "<cluster>" --limit 50 --state research_state.json # DOI-backed metadata

# Open-web coverage (optional, requires EXA_API_KEY) — finds material the
# scholarly APIs miss: lab sites, institutional PDFs, conference mirrors,
# preprints parked outside arXiv, NGO/government reports.
python scripts/search_exa.py --query "<cluster>" --limit 50 --state research_state.json

# Dedupe across sources (DOI-first, title-similarity fallback)
python scripts/dedupe_papers.py --state research_state.json
```
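The DOI-first, title-similarity-fallback idea is simple enough to sketch. The real dedupe_papers.py will differ in normalisation and thresholds; this shows only the shape of the algorithm:

```python
from difflib import SequenceMatcher

def normalise_title(title: str) -> str:
    return " ".join(title.lower().split())

def dedupe(papers, title_threshold=0.93):
    """DOI-first merge with a fuzzy-title fallback for records lacking a DOI."""
    seen_dois = set()
    kept = []
    for paper in papers:
        doi = (paper.get("doi") or "").lower().strip()
        if doi:
            if doi in seen_dois:
                continue                          # exact DOI duplicate
            seen_dois.add(doi)
        else:
            title = normalise_title(paper.get("title", ""))
            if any(SequenceMatcher(None, title,
                                   normalise_title(k.get("title", ""))).ratio()
                   >= title_threshold for k in kept):
                continue                          # near-identical title, no DOI
        kept.append(paper)
    return kept

corpus = [
    {"doi": "10.1000/xyz123", "title": "A Study of X"},
    {"doi": "10.1000/XYZ123", "title": "A study of X"},   # same DOI, different case
    {"doi": None, "title": "A Study of X "},              # no DOI, near-identical title
]
print(len(dedupe(corpus)))   # -> 1
```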
MCP enrichment (optional, run if available): call mcp__asta__search_papers_by_relevance and mcp__asta__snippet_search and feed results via scripts/research_state.py ingest. If the MCP call errors or times out, do not retry — move on.
Iterate. Read the state file. Are there keyword gaps? Are there authors appearing 3+ times whose other work you haven't pulled? Run another round. Stop when saturation hits — every source, not just the last one queried:
```bash
python scripts/research_state.py saturation --state research_state.json
# Returns { "per_source": {...}, "overall_saturated": true/false, ... }
```
overall_saturated is true only when every queried source has run at least the minimum number of rounds (default 2) AND each is individually below the new-paper percentage and new-citation thresholds. A source that has been queried only once cannot be declared saturated, which rules out the failure mode where a single quiet source falsely ends discovery. Use the per_source breakdown to check one source in isolation.
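In code terms, the per-source predicate amounts to roughly the following. The field names and round bookkeeping are illustrative; the authoritative output is the envelope shown above:

```python
def source_saturated(rounds, new_pct_threshold=20.0, max_new_citations=100,
                     min_rounds=2):
    """rounds: one dict per search round for a single source, e.g.
    {"new_papers_pct": 12.5, "max_new_paper_citations": 40} (illustrative keys)."""
    if len(rounds) < min_rounds:
        return False                      # one quiet round never counts as saturated
    latest = rounds[-1]
    return (latest["new_papers_pct"] < new_pct_threshold
            and latest["max_new_paper_citations"] <= max_new_citations)

def overall_saturated(rounds_by_source):
    return bool(rounds_by_source) and all(
        source_saturated(r) for r in rounds_by_source.values())
```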
Budget caps and broad-topic escape hatches. Phase 1 has two hard caps to prevent runaway agents: SCHOLAR_PHASE1_MAX_ROUNDS (default 10 rounds per source) and SCHOLAR_PHASE1_MAX_REQUESTS_PER_SOURCE (default 20 ingests per source). Hitting either cap returns a failure envelope with a hint naming the cap. For genuinely broad topics that cross subfields (e.g. CS-ML topics with multiple keyword clusters), the saturation thresholds can also fail to converge under the defaults — relax them with SCHOLAR_SATURATION_NEW_PCT (default 20.0), SCHOLAR_SATURATION_MAX_CITATIONS (default 100), and SCHOLAR_SATURATION_NEW_AUTHORS_PCT / SCHOLAR_SATURATION_NEW_VENUES_PCT. These env vars are honored both by python scripts/research_state.py saturation and by the G2 gate, so raising them lets the gate accept "good enough" coverage on topics where the default is unreachable.
Phase 2 — Triage
Rank the deduplicated corpus and pick the top-N for deep reading.
```bash
python scripts/rank_papers.py \
    --state research_state.json \
    --question "<phase 0 question>" \
    --alpha 0.4 --beta 0.3 --gamma 0.2 --delta 0.1 \
    --top 20
```
The formula is transparent — the script prints it and writes the components to state so the report can cite its own methodology:
score = α·relevance + β·log10(citations+1)/3 + γ·recency_decay(half-life=5yr) + δ·venue_prior
Defaults target a literature review. For a scoping review, prefer a higher α (relevance) and a lower β (citations). For a systematic review of a narrow question, move the weights in the opposite direction.
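The printed formula is easy to reproduce when sanity-checking a score by hand. This sketch assumes illustrative field names (relevance, citations, year, venue_prior); the exact normalisation lives in rank_papers.py:

```python
import math
from datetime import date

def recency_decay(pub_year: int, half_life_years: float = 5.0) -> float:
    age = max(0, date.today().year - pub_year)
    return 0.5 ** (age / half_life_years)     # 1.0 this year, 0.5 at five years old

def score(paper, alpha=0.4, beta=0.3, gamma=0.2, delta=0.1):
    """score = alpha*relevance + beta*log10(citations+1)/3 + gamma*recency + delta*venue_prior"""
    return (alpha * paper["relevance"]                        # 0..1 match to the question
            + beta * math.log10(paper["citations"] + 1) / 3   # ~1.0 at ~1000 citations
            + gamma * recency_decay(paper["year"])
            + delta * paper.get("venue_prior", 0.5))          # 0..1 venue weight

print(round(score({"relevance": 0.8, "citations": 250, "year": 2021}), 3))
```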
Write the top-N selection to state:
```bash
python scripts/research_state.py select --state research_state.json --top 20
```
Triage the selection into deep / skim / defer tiers before advancing. Phase 3 fan-out is the most expensive stage of the workflow; not every selected paper deserves a full agent dispatch:
```bash
python scripts/skim_papers.py --state research_state.json \
    --deep-ratio 0.5 --skim-ratio 0.5
```
Defaults split the top-N evenly: top half → deep (agent dispatch in Phase 3), bottom half → skim (abstract-derived evidence stub auto-filled). For tighter budgets, use --deep-ratio 0.3 --skim-ratio 0.5 — the remaining 20% gets defer and is removed from the selection (still queryable as candidates for citation chase).
The script emits a summary listing the deep-tier papers by triage_score. Show this to the user before advancing so they can hand-override before agents fan out (re-run with different ratios, or manually re-rank in state). Triage is required before G3 passes — the gate's triage_complete check rejects the advance otherwise.
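The ratio split itself is trivial; the value of skim_papers.py is in the deterministic signals it folds into triage_score. A sketch of just the split, with assumed inputs:

```python
def tier_split(ranked_ids, deep_ratio=0.5, skim_ratio=0.5):
    """Split an already-ranked selection into deep / skim / defer tiers."""
    n = len(ranked_ids)
    n_deep = round(n * deep_ratio)
    n_skim = round(n * skim_ratio)
    return {
        "deep": ranked_ids[:n_deep],                    # full agent read in Phase 3
        "skim": ranked_ids[n_deep:n_deep + n_skim],     # abstract-derived stub only
        "defer": ranked_ids[n_deep + n_skim:],          # dropped from the selection
    }

tiers = tier_split([f"p{i:02d}" for i in range(20)], deep_ratio=0.3, skim_ratio=0.5)
print({tier: len(ids) for tier, ids in tiers.items()})  # {'deep': 6, 'skim': 10, 'defer': 4}
```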
Optional but recommended — prefetch deep-tier PDFs before agent fan-out:
```bash
python scripts/prefetch_pdfs.py --state research_state.json \
    --tier deep --concurrency 4
```
Fetches every deep-tier paper's PDF into ${SCHOLAR_CACHE_DIR:-.scholar_cache}/pdfs/<id-hash>/ via the paper-fetch skill (with Unpaywall fallback), in parallel waves, and records the local path and fetch outcome per paper. Phase 3 agents then read the local file directly instead of each running its own download — agent context stays focused on reading and reasoning, not on retrying paywalls.
Failures land as per-paper failure records with a reason code (paywall, exhausted OA chain, and so on); papers without a DOI are marked as such. Phase 3 agents check the prefetched path first and only fall back to fetching via extract_pdf.py --doi if the prefetched path is missing. Re-running prefetch is cheap: papers with an existing PDF on disk are skipped.
Human-in-loop for paywalled PDFs. When automatic fetch fails (paywall, OA chain exhausted, no DOI), surface a hand-fetch list to the user via --emit-manifest (read-only):
```bash
python scripts/prefetch_pdfs.py --state research_state.json --emit-manifest
# Returns { needs_user_download: [{id, doi, title, drop_at, alt_urls}, ...] }
```
The user downloads each PDF (institutional VPN, ResearchGate, etc.) and drops it at the listed drop_at path (any PDF filename in that subdir works). On the next normal prefetch run, dropped files are auto-absorbed as pdf_source='user_provided' without re-fetching.
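A small sketch of consuming that manifest to print a hand-fetch list for the user. The needs_user_download key follows the example output above; treating the stdout JSON as a flat object is an assumption:

```python
import json
import subprocess

# Assumption: the --emit-manifest output is a flat JSON object on stdout with the
# needs_user_download list shown above.
proc = subprocess.run(
    ["python", "scripts/prefetch_pdfs.py", "--state", "research_state.json",
     "--emit-manifest"],
    capture_output=True, text=True, check=True)
manifest = json.loads(proc.stdout)

for item in manifest.get("needs_user_download", []):
    print(f'{item.get("title", "?")[:60]:60}  doi={item.get("doi") or "-"}')
    print(f'  drop the PDF at: {item.get("drop_at")}')
```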
Skip prefetch entirely when paper-fetch is not installed AND you don't want Unpaywall traffic — Phase 3 agents will then download per-paper inside their own contexts (slower, noisier, but functionally identical).
Phase 3 — Deep read (parallel agent fan-out)
Phase 3 splits by tier:
- Skim tier — skim_papers.py already wrote an abstract-derived evidence stub; no further action needed.
- Deep tier — dispatch one agent per paper, in parallel waves of 8–10. Each agent reads the PDF, writes structured evidence back to state, and returns one JSON line. The host's main context never sees the full PDF text.
The agent prompt template lives at references/agent_prompts/phase3_deep_read.md. Load it once, instantiate per paper, and dispatch all N tool_use calls in a single message so they fan out concurrently. Per-agent contract:
- Input: the paper's identifiers and metadata from state (id, DOI, title, and so on), plus the research question
- Action: run extract_pdf.py --doi <doi> --output <tmp> → read the extracted text → write evidence back to state via research_state.py evidence
- Output: one line
{"paper_id": "...", "status": "ok"|"evidence_unavailable", ...}
The state CLI is exclusive-locked, so N agents issuing concurrent evidence writes are serialized automatically — no coordination needed.
```bash
# After all wave(s) complete, verify deep-tier coverage:
python scripts/research_state.py advance --state research_state.json \
    --to 4 --check-only
```
If the deep-tier coverage check is failing, dispatch a follow-up wave for the missing ids only. If a paper's full text is genuinely unreachable (paywall, exhausted OA chain), the agent should write an evidence_unavailable record whose reason starts with one of the documented escape-hatch prefixes, per the prompt's failure-mode section — that record satisfies the gate without inflating the deep-tier coverage count.
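One way to tally a wave before re-running the gate check: collect each agent's one-line JSON result and diff it against the deep-tier ids. The example ids are illustrative, and any field beyond paper_id/status is an assumption:

```python
import json

# One JSON line per dispatched agent, per the output contract above (ids illustrative).
agent_lines = [
    '{"paper_id": "W2741809807", "status": "ok"}',
    '{"paper_id": "W2100837269", "status": "evidence_unavailable"}',
]
deep_tier_ids = {"W2741809807", "W2100837269", "W2913049369"}

reported = {json.loads(line)["paper_id"] for line in agent_lines}
missing = sorted(deep_tier_ids - reported)
print("dispatch a follow-up wave for:", missing)   # papers with no record at all
```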
Manual fallback (no agents available). Hosts that cannot dispatch parallel agents (some non-CC platforms) can run Phase 3 sequentially in the main session: for each deep-tier paper, run extract_pdf.py --doi <doi> then research_state.py evidence --id <pid> --depth full .... Slower and burns more context, but the gate logic is identical.
Phase 4 — Citation chasing
Take the top 5-10 highest-ranked papers and expand the graph.
```bash
# Preview the request count first — this is the most expensive command
python scripts/build_citation_graph.py \
    --state research_state.json \
    --seed-top 8 --direction both --depth 1 --dry-run

# Run with an idempotency key so a retry after a network blip is free
python scripts/build_citation_graph.py \
    --state research_state.json \
    --seed-top 8 --direction both --depth 1 \
    --idempotency-key "chase-$(date -u +%Y%m%dT%H%M)"
```
The script pulls backward references (what did this paper cite?) and forward citations (who cited this paper?), deduplicates against existing state, and writes new candidate papers with discovered_via: citation_chase. Run rank + deep read again on any new high-scoring additions.
Dual backend. --source openalex|s2|both selects the graph source. OpenAlex covers most fields well; Semantic Scholar (S2) has better CS / arXiv / cross-disciplinary coverage. The two graphs disagree more than you'd expect — running both then deduping by id surfaces real coverage gaps. S2 needs a DOI / arXiv id / PMID on each seed (it doesn't accept OpenAlex ids); seeds without one skip the S2 backend. The S2_API_KEY env var raises the S2 quota; without it the public quota of ~1 req/s applies.
Idempotency. When --idempotency-key is set, the first successful run writes the result to .scholar_cache/<hash>.json. A retried run with the same key replays the cached response without re-hitting OpenAlex or re-mutating state. Reusing the same key with different arguments returns an error rather than silently serving stale data. Cache directory: the SCHOLAR_CACHE_DIR env var, default .scholar_cache.
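The caching behaviour described above boils down to a hash-keyed replay with a conflict check. A sketch under the assumption of one JSON file per key; the real cache layout and stored fields may differ:

```python
import hashlib
import json
import os
from pathlib import Path

def idempotent_run(key: str, args: dict, run):
    cache_dir = Path(os.environ.get("SCHOLAR_CACHE_DIR", ".scholar_cache"))
    cache_dir.mkdir(parents=True, exist_ok=True)
    path = cache_dir / (hashlib.sha256(key.encode()).hexdigest() + ".json")
    if path.exists():
        cached = json.loads(path.read_text())
        if cached["args"] != args:
            raise ValueError("idempotency key reused with different arguments")
        return cached["result"]            # replay: no network call, no state mutation
    result = run(args)                     # first successful run does the real work
    path.write_text(json.dumps({"args": args, "result": result}))
    return result
```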
Special case — a highly cited paper has never been challenged. If rank says a paper is top-3 by citations but no critiques appear in the corpus, search explicitly for "<first author> <year>" critique OR limitations OR reanalysis OR failed replication. This is the confirmation-bias backstop.
Phase 5 — Synthesis
No scripts here — this is where the agent earns its keep. Cluster and structure:
- Thematic clustering. Group the top-N into 3-6 themes that map onto the report outline. Themes should be orthogonal: a paper can be primary to one, secondary to at most one other.
- Tension map. Where do papers disagree? For each disagreement, note: which papers, on what, and whether the disagreement is empirical (different data), methodological (different tools), or theoretical (different framings).
- Timeline. When relevant, a chronological arc: seminal paper → consolidation → refinement → current frontier.
- Venn / gap. What has been studied well, partially, and not at all? The gap is the pivot for Phase 7.
Phase 6 — Self-critique
This is not optional. Load
assets/prompts/self_critique.md
and run the full checklist against your draft (still unpublished). The checklist covers:
- Single-source claims (any claim backed by only one paper?)
- Citation/recency skew (is the latest-2-years window covered?)
- Venue bias (is the corpus dominated by one journal/venue?)
- Author bias (does one lab dominate the citations?)
- Untested high-citation papers (anyone cite a paper without reading a critique?)
- Contradictions buried (any tension in Phase 5 that got glossed over?)
- Archetype fit (does the structure match the chosen archetype?)
- Unanchored claims (any statement without a [^id] anchor?)
Write findings to the state file under self_critique and fix blockers before Phase 7. Findings go into the report appendix verbatim — the reader deserves to see what the research process doubted itself about.
Phase 7 — Report
Render an archetype scaffold from state, then fill the agent-prose
slots and validate anchors:
```bash
# Generate the scaffold — fills header, themes, tensions, methodology
# appendix, self-critique appendix, and bibliography anchor index from
# state. Leaves `<!-- AGENT: ... -->` placeholders for prose.
python scripts/render_report.py --state research_state.json
# → reports/<slug>_<YYYYMMDD>.md by default; pass --output PATH to override.

# After filling in the prose, lint every [^id] anchor against
# state.papers. Catches typo'd anchors before the report ships.
python scripts/render_report.py --state research_state.json \
    --lint reports/<slug>_<YYYYMMDD>.md

# Export bibliography in the user's preferred format
python scripts/export_bibtex.py --state research_state.json --format bibtex --output refs.bib
python scripts/export_bibtex.py --state research_state.json --format csl-json --output refs.json
```
The scaffold's body uses [^id] anchors (the paper id from state). The bibliography section at the bottom carries one footnote definition per selected paper. The lint mode flags unknown anchors (typos) and undefined anchors (anchors with no footnote definition); both are blockers. Uncited definitions are a soft signal — selected papers that ended up not cited inline.
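The lint is essentially set arithmetic over three collections: anchors used inline, footnote definitions, and paper ids in state. A sketch of the idea (the real --lint output names its findings differently; the example ids are made up):

```python
import re

def lint_anchors(report_md: str, state_paper_ids: set) -> dict:
    used = set(re.findall(r"\[\^([^\]\s]+)\](?!:)", report_md))        # inline anchors
    defined = set(re.findall(r"^\[\^([^\]\s]+)\]:", report_md, re.M))  # footnote definitions
    return {
        "unknown": sorted(used - state_paper_ids),   # typo'd ids: blocker
        "undefined": sorted(used - defined),         # anchor with no definition: blocker
        "uncited": sorted(defined - used),           # soft signal: defined but never cited
    }

draft = "Editing corrects point mutations[^W123].\n\n[^W123]: Example et al. 2020.\n[^W999]: Unused.\n"
print(lint_anchors(draft, {"W123", "W999"}))
```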
Save path convention: reports/<slug>_<YYYYMMDD>.md. The skill does not write outside the working directory unless the user specifies a path.
Report archetype selection
| Archetype | When to use | Primary output shape |
|---|---|---|
| Literature review | User wants to know what's established about a topic | Thematic sections + synthesis + gap |
| Systematic review | Narrow question, many studies, need rigorous comparison | PRISMA-lite flow + extraction table + pooled findings |
| Scoping review | Broad topic, "what has been studied?" | Coverage map + methods inventory + research gap |
| Comparative analysis | "A vs B" — methods, models, approaches | Axes of comparison + per-axis verdict + recommendation |
| Proposal background | Narrative for a proposal introduction | Problem significance + what's known + what's missing + why our approach |
Templates live in assets/templates/<archetype>.md. Load only the one you need.
Scripts reference
| Script | Purpose |
|---|---|
| research_state.py | Init, read, write, query the state file. Central to every phase. |
| search_openalex.py | Primary search (no key, 240M works, citation counts). |
| search_arxiv.py | arXiv API — preprints and CS/ML/physics. |
| search_crossref.py | Crossref REST — authoritative DOI metadata. |
| search_pubmed.py | NCBI E-utilities — biomedical corpus with MeSH. |
| search_exa.py | Exa neural web search (optional, key-gated) — open-web coverage the scholarly APIs miss. |
| dedupe_papers.py | DOI normalization + title similarity merging across sources. |
| rank_papers.py | Transparent scoring formula. Prints the formula and per-paper components. |
| skim_papers.py | Phase-3 triage. Splits selected papers into deep / skim / defer tiers on cheap deterministic signals, refines the selection, auto-fills evidence stubs for the skim tier. Runs at the close of Phase 2 before G3. |
| prefetch_pdfs.py | Optional. Pulls deep-tier PDFs into a stable cache via paper-fetch (with Unpaywall fallback) before Phase 3 agent fan-out. Concurrent (--concurrency), idempotent on re-run, fail-soft per paper. Records the local PDF path and source per paper so agents read a local file instead of re-downloading. |
| build_citation_graph.py | Forward/backward snowballing via OpenAlex or Semantic Scholar (see --source). |
| extract_pdf.py | Full-text extraction (pypdf). Accepts a DOI, URL, or local file. DOI mode resolves via the paper-fetch skill if installed, falls back to Unpaywall. Safe on scanned PDFs (skips, emits warning). |
| export_bibtex.py | BibTeX / CSL-JSON / RIS export from state. |
| render_report.py | Phase 7 — render an archetype scaffold (themes, tensions, methodology, self-critique, bibliography) from state, with slots for agent prose. --lint validates every [^id] anchor against state.papers. |
All scripts accept --state (plus a shared set of common flags), emit a structured JSON envelope on stdout, and use the state file as the single source of truth. Every script is idempotent on the state file (network-layer idempotency is P1 work).
CLI contract, env vars, and state schema
Three details that agents discover by running scripts and reading the JSON envelopes — kept out of the body to save context. Load on demand:
- references/cli_contract.md — the success/failure envelope shape, exit codes, introspection, and idempotency cache semantics.
- The environment-variable reference — the trust-boundary env vars (SCHOLAR_*, NCBI_API_KEY, EXA_API_KEY, S2_API_KEY, PAPER_FETCH_SCRIPT). Agents should never set these — surface to the user when a script reports a missing one.
- references/state_schema.md — the state-file shape. Prefer python scripts/research_state.py --schema for the live, machine-readable version.
Completion gates
Each phase transition has a gate (G1..G7). Advance ONLY via:
```bash
python scripts/research_state.py --state <path> advance               # advance by 1
python scripts/research_state.py --state <path> advance --check-only  # preview only
```
The gate predicates are enforced in research_state.py advance. Directly writing the phase is rejected — the phase field is no longer settable. If the gate fails, the envelope lists the failing checks by name so you know exactly what's missing.
| Target | Gate (enforced) |
|---|---|
| G1 (→ 1) | Question set, archetype valid, state initialized; the remaining check is host-side. |
| G2 (→ 2) | overall_saturated == true across all queried sources AND ≥3 distinct sources queried. |
| G3 (→ 3) | Selection recorded and non-empty; every selected paper has a triage tier; state.triage_complete=true (run skim_papers.py). |
| G4 (→ 4) | Every selected paper has evidence: either (a) full-depth evidence, or (b) an evidence_unavailable record whose reason starts with one of two documented escape-hatch prefixes (PDF unreachable — paywall, exhausted OA chain, scanned; or PDF read fully but off-topic — a Phase 2 ranking false-positive). Skim-tier is exempt by design and does not block. |
| G5 (→ 5) | ≥1 query whose source tag marks a citation chase (any backend layout — openalex, s2, or the default dual openalex_s2_citation_chase). |
| G6 (→ 6) | Themes recorded AND (tensions recorded OR a critique finding mentioning "no tensions"). |
| G7 (→ 7) | state.self_critique.appendix non-empty; len(resolved) ≥ len(findings). |
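A hedged sketch of driving the gate preview programmatically. The command line matches the advance --check-only call shown above; the key that carries the failing check names is an assumption (the authoritative shape is in references/cli_contract.md):

```python
import json
import subprocess

proc = subprocess.run(
    ["python", "scripts/research_state.py", "--state", "research_state.json",
     "advance", "--check-only"],
    capture_output=True, text=True)
envelope = json.loads(proc.stdout)

# Assumption: the envelope lists failing gate checks under a key like this.
for check in envelope.get("failing_checks", []):
    print("blocked by:", check)
```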
Enrichment with MCP tools
Semantic Scholar coverage is not part of this enrichment layer — it is reached through the script path (build_citation_graph.py --source s2|both) and is a first-class Phase 4 backend, not enrichment. The MCP tools below are the genuine skin layer: they may time out, get renamed, or be absent entirely, and no phase output depends on them.
If the session has asta or Brave Search MCP tools available, use them as enrichment:
- mcp__asta__search_papers_by_relevance — good for dense relevance ranking on top of the script searches
- The asta single-paper lookup — lighter weight than a full citation-graph run for spot-checking a single seed paper
- mcp__asta__snippet_search — grep-like search across abstracts
- Brave Search — non-academic sources (blog posts, press releases, preprint discussion)
Treat MCP tools as unreliable by design — they may time out or be unavailable. Never place a phase-critical step behind an MCP call. Scripts are the spine; MCP is the skin.
Pitfalls (short list; see the long-form pitfalls reference for detail)
- Treating the first page of search results as "the literature" — run multiple keyword clusters and chase citations.
- Unanchored claims — every non-trivial statement in the report needs a [^id] anchor pointing to a paper in state.
- Confirmation bias — actively search for critiques of top-cited papers; see Phase 4 special case.
- Preprint conflation — arXiv/bioRxiv are preprints; tag them as such in the report and weight evidence accordingly. Lint-safe convention: place the anchor and the preprint marker separately — [^id] (preprint), not [^id, preprint] — commas inside footnote brackets break Markdown parsing and the lint check.
- Venue monoculture — if >60% of top-N come from one journal/venue, broaden sources.
- Author monoculture — same for a single lab or author.
- Recency collapse — the last 2 years matter for "state of the art" framings; check explicit coverage.
- Stale MCP tool names — MCP servers rename tools; always list available tools before assuming names. Script paths are stable; MCP names are not.
- Single-shot search — budget for ≥3 search rounds per cluster, not one.
- Skipping self-critique — the temptation to ship a clean draft is exactly when Phase 6 catches the most.
Example interaction
A complete walk-through (CRISPR base editing for DMD — Phase 0 question restate through Phase 7 report and bibliography) lives in references/example_run.md. Read it once when you want to see what a healthy run looks like end-to-end; it's not load-bearing for routine sessions.
References
Modular documentation, loaded only when needed:
- references/search_strategies.md — Boolean clusters, PICO, snowballing, saturation math
- references/source_selection.md — which database for which question
- references/quality_assessment.md — CRAAP, journal tier, retraction check, preprint handling
- references/report_templates.md — the 5 archetypes with section-by-section guidance
- The long-form pitfalls reference — the pitfalls list above, expanded with examples
- references/cli_contract.md — JSON envelope shape, exit codes, introspection, idempotency cache
- The environment-variable reference — trust-boundary configuration (SCHOLAR_*, NCBI_API_KEY, EXA_API_KEY, S2_API_KEY, PAPER_FETCH_SCRIPT)
- references/state_schema.md — state-file shape and ID-normalization rules
- references/example_run.md — full end-to-end example (CRISPR base editing for DMD)
- references/agent_prompts/phase3_deep_read.md — per-paper prompt for parallel agent fan-out in Phase 3