scholar-deep-research
Scholar Deep Research
End-to-end academic research workflow that turns a question into a cited, structured report. Built for depth: multi-source federation, transparent ranking, citation chasing, and a mandatory self-critique pass before the report ships.
When to use
Explicit triggers: "literature review", "research report", "state of the art", "survey the field", "what's known about X", "deep research on Y", "systematic review", "scoping review", "compare papers on Z".
Proactive triggers (use without being asked):
- User asks a factual question whose honest answer is "it depends on the literature"
- User frames a research plan and needs the background section
- User is drafting a paper intro/related-work and hasn't yet scoped prior work
- User proposes a method and asks whether it's novel
Do not use when: a single known paper answers the question, the user wants a tutorial (not a survey), or they're debugging code.
Guiding principles
- Scripts over vibes. Every search, dedupe, rank, and export step runs through a script in `scripts/`. The same input should produce the same output. Do not improvise ranking or counting by eye.
- Sources are federated, not singular. OpenAlex is the primary backbone (free, 240M+ works, no key). arXiv (CS/ML/physics preprints), Crossref (DOI metadata), PubMed (biomedical), DBLP (CS conferences/journals), bioRxiv (life-sci preprints via Europe PMC), and Exa (open-web, requires `EXA_API_KEY`) fill gaps. Semantic Scholar is also script-driven (`build_citation_graph.py --source s2|both`) — it is the spine path for Phase 4, with better CS / arXiv / cross-disciplinary coverage than OpenAlex; the two graphs disagree more than you'd expect. The asta MCP tools (`mcp__asta__*`) and Brave Search are skin — used opportunistically for relevance ranking or non-academic context, never on the critical path. If MCP times out, research continues.
- State is persistent. Everything goes through `research_state.json`. Queries ran, papers seen, decisions made, phase progress. Research becomes resumable and auditable.
- Citations are anchors, not decorations. Every non-trivial claim in the draft carries `[^id]` where `id` matches a paper in state. Unanchored claims are treated as hallucinations and fail the gate.
- Saturation, not exhaustion, is the stop signal. A phase ends when a new round of search adds <20% novel papers AND no new paper has >100 citations.
- Self-critique is a phase, not a checkbox. Phase 6 reads the draft with adversarial intent. Its output goes into the report appendix.
The 8-phase workflow (Phase 0..7)
Phase 0: Scope → decompose question, pick archetype, init state
Phase 1: Discovery → multi-source search, dedupe
Phase 2: Triage → rank, select top-N for deep read
Phase 3: Deep read → extract evidence per paper
Phase 4: Chasing → citation graph (forward + backward)
Phase 5: Synthesis → cluster by theme, map tensions
Phase 6: Self-critique → adversarial review, gap finding
Phase 7: Report → render archetype template, export bibliography

Each phase writes to `research_state.json` before advancing. If the user pauses or a session crashes, the next run reads the state and picks up from the last completed phase.
Phase 0 — Scope
Before searching anything, decompose the question.
- Restate the question in one sentence. Surface ambiguities.
- PICO-style decomposition (or equivalent for non-biomedical fields):
- Population / Problem — what system, species, setting, or phenomenon?
- Intervention / Independent var — what method, factor, or manipulation?
- Comparison — against what baseline or alternative?
- Outcome — what is being measured or claimed?
- Pick an archetype that matches user intent (see `references/report_templates.md`):
  - `literature_review` — what is known about X (default)
  - `systematic_review` — rigorous PRISMA-lite, comparison of many studies on one narrow question
  - `scoping_review` — what has been studied and how (breadth over depth)
  - `comparative_analysis` — X vs Y, head-to-head
  - `grant_background` — narrative background + gap for a proposal
- Draft keyword clusters — 3-5 Boolean clusters covering synonyms, acronyms, and variant spellings. Include a "negative" cluster (terms to exclude).
- Initialize state:

```bash
python scripts/research_state.py --state research_state.json init \
  --question "<restated question>" \
  --archetype literature_review
```

(`--state` is top-level and applies to every subcommand; `init` itself takes `--question`, optional `--archetype`, and `--force`.)
When in doubt about archetype, ask the user. The choice shapes everything downstream.
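The Phase 0 outputs are small enough to sketch concretely. A minimal illustration of one possible decomposition before it goes into state — the question, field names, and cluster strings here are invented for illustration and are not the actual `research_state.json` schema:

```python
# Hypothetical Phase 0 scope object; field names are illustrative only.
scope = {
    "question": "Does intermittent fasting improve insulin sensitivity in adults?",
    "pico": {
        "population": "adults with impaired glucose tolerance",
        "intervention": "intermittent fasting",
        "comparison": "continuous caloric restriction",
        "outcome": "insulin sensitivity",
    },
    "archetype": "literature_review",
    "keyword_clusters": [
        '("intermittent fasting" OR "time-restricted eating" OR TRE)',
        '("insulin sensitivity" OR "insulin resistance" OR HOMA-IR)',
        # the "negative" cluster: terms to exclude
        '(adult OR adults) NOT (mice OR rodent)',
    ],
}
```

Each cluster string is a Boolean query in the style the search scripts accept as `--query` input.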
Phase 1 — Discovery
Run searches across all available sources, in parallel where the source can take it. OpenAlex is primary; the others fill gaps.
**Where parallelism actually pays off.** The right place to fan out is Phase 3 (one agent per paper to read PDFs concurrently — see `references/agent_prompts/phase3_deep_read.md`). At Phase 1 the bottleneck is the upstream API, not local compute, and parallel fan-out across the same source mostly buys 429s and sticky cooldowns. The skill's bias should be: parallel between different sources, serial within one source. Concretely:

- Parallel-friendly: OpenAlex (polite-pool, very tolerant), Crossref (polite-pool), Exa (paid quota), bioRxiv (Europe PMC).
- Self-serialised (file-locked, automatic): arXiv (≥3s/req), PubMed (≥0.34s/req without `NCBI_API_KEY`, ≥0.10s with), DBLP (1s buffer to avoid SSL EOF flakes).

The serialised sources use a per-source file lock under `${SCHOLAR_CACHE_DIR:-.scholar_cache}/rate/<source>.lock`, so even N parallel invocations from the same agent will queue automatically and sleep the right gap — no agent-side coordination required. Parallel calls don't speed those sources up, they just don't error.
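The lock-and-sleep mechanics can be sketched in a few lines. This is an assumed reimplementation for illustration, not the scripts' actual code; only the lock-path convention is taken from the text:

```python
import fcntl
import os
import time

def rate_limited(source: str, min_gap: float) -> None:
    """Block until at least min_gap seconds since the last request to source.

    Sketch of the per-source file-lock scheme described above (assumed
    mechanics). The lock file doubles as storage for the last-request time.
    """
    cache = os.environ.get("SCHOLAR_CACHE_DIR", ".scholar_cache")
    os.makedirs(os.path.join(cache, "rate"), exist_ok=True)
    lock_path = os.path.join(cache, "rate", f"{source}.lock")
    with open(lock_path, "a+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)       # concurrent invocations queue here
        f.seek(0)
        last = float(f.read() or 0)
        gap = time.time() - last
        if gap < min_gap:
            time.sleep(min_gap - gap)       # sleep only the remaining gap
        f.seek(0)
        f.truncate()
        f.write(str(time.time()))           # record this request's timestamp
        fcntl.flock(f, fcntl.LOCK_UN)

rate_limited("arxiv", min_gap=3.0)  # arXiv: >=3s between requests
```

Because the lock is advisory and per-source, OpenAlex calls never wait on the arXiv lock, matching the "parallel between sources, serial within one" bias.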
Primary (no API key, always available):

```bash
python scripts/search_openalex.py --query "<cluster 1>" --limit 50 --state research_state.json
python scripts/search_openalex.py --query "<cluster 2>" --limit 50 --state research_state.json
```
Domain-specific (use when relevant):

```bash
python scripts/search_arxiv.py --query "<cluster>" --limit 50 --state research_state.json    # CS/ML/physics preprints
python scripts/search_dblp.py --query "<cluster>" --limit 50 --state research_state.json     # CS gold-standard bibliography (no abstracts)
python scripts/search_pubmed.py --query "<cluster>" --limit 50 --state research_state.json   # biomedical (PubMed)
python scripts/search_biorxiv.py --query "<cluster>" --limit 50 --state research_state.json  # life-sci preprints (bioRxiv + medRxiv via Europe PMC)
python scripts/search_crossref.py --query "<cluster>" --limit 50 --state research_state.json # DOI-backed metadata
```
Open-web coverage (optional, requires `EXA_API_KEY`) — finds material the scholarly APIs miss: lab sites, institutional PDFs, conference mirrors, preprints parked outside arXiv, NGO/government reports.

```bash
python scripts/search_exa.py --query "<cluster>" --limit 50 --state research_state.json
```

Dedupe across sources (DOI-first, title-similarity fallback):

```bash
python scripts/dedupe_papers.py --state research_state.json
```
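A minimal sketch of the DOI-first, title-similarity-fallback strategy — assumed logic for illustration; `dedupe_papers.py` is the authoritative implementation, and its thresholds may differ:

```python
from difflib import SequenceMatcher

def dedupe(papers: list[dict]) -> list[dict]:
    """Keep the first occurrence of each paper.

    DOI match wins outright; papers without a usable DOI fall back to
    near-exact title similarity against everything already kept.
    """
    seen_dois: set[str] = set()
    kept: list[dict] = []
    for p in papers:
        # Normalize: lowercase, strip the resolver prefix
        doi = (p.get("doi") or "").lower().removeprefix("https://doi.org/")
        if doi and doi in seen_dois:
            continue                      # exact DOI duplicate
        title = " ".join(p["title"].lower().split())
        if any(SequenceMatcher(None, title,
                               " ".join(k["title"].lower().split())).ratio() > 0.95
               for k in kept):
            continue                      # near-identical title
        if doi:
            seen_dois.add(doi)
        kept.append(p)
    return kept

papers = [
    {"title": "Attention Is All You Need", "doi": "10.5555/xyz"},
    {"title": "Attention is all you need", "doi": "https://doi.org/10.5555/xyz"},
    {"title": "A Different Paper", "doi": None},
]
```

DOI normalization matters more than it looks: the same work arrives as `10.5555/xyz` from Crossref and `https://doi.org/10.5555/xyz` from OpenAlex.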
**MCP enrichment (optional, run if available):** call `mcp__asta__search_papers_by_relevance` and `mcp__asta__snippet_search` and feed results via `scripts/research_state.py ingest`. If the MCP call errors or times out, do not retry — move on.
**Iterate.** Read the state file. Are there keyword gaps? Are there authors appearing 3+ times whose other work you haven't pulled? Run another round. Stop when saturation hits — **every source, not just the last one queried:**
```bash
python scripts/research_state.py saturation --state research_state.json
```

Returns `{ "per_source": {...}, "overall_saturated": true/false, ... }`.
`overall_saturated` is true only when every queried source has run at least `--min-rounds` (default 2) rounds AND each is individually below the new-paper percentage and new-citation thresholds. A source that has been queried only once cannot be declared saturated, which rules out the failure mode where a single quiet source falsely ends discovery. Use `--source openalex` to check one source in isolation.
**Budget caps and broad-topic escape hatches.** Phase 1 has two hard caps to prevent runaway agents: `SCHOLAR_PHASE1_MAX_ROUNDS` (default 10 rounds per source) and `SCHOLAR_PHASE1_MAX_REQUESTS_PER_SOURCE` (default 20 ingests per source). Hitting either returns `phase1_budget_exhausted` with a `next:` hint. For genuinely broad topics that cross subfields (e.g. CS-ML topics with multiple keyword clusters), the saturation thresholds can also fail to converge under the defaults — relax them with `SCHOLAR_SATURATION_NEW_PCT` (default 20.0), `SCHOLAR_SATURATION_MAX_CITATIONS` (default 100), and `SCHOLAR_SATURATION_NEW_AUTHORS_PCT` / `SCHOLAR_SATURATION_NEW_VENUES_PCT`. These env vars are honored both by `python scripts/research_state.py saturation` *and* by the G2 gate, so raising them lets the gate accept "good enough" coverage on topics where the default is unreachable.
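The per-source rule above can be sketched as follows, reading the documented env vars with their documented defaults. The internals of `research_state.py` may differ; this only mirrors the stated semantics:

```python
import os

def source_saturated(rounds: list[tuple[float, int]]) -> bool:
    """One source's saturation check.

    rounds: per-round (new_paper_pct, max_citations_among_new_papers).
    Saturated only after min-rounds, when the latest round is below the
    new-paper percentage AND no new paper exceeds the citation threshold.
    """
    new_pct_max = float(os.environ.get("SCHOLAR_SATURATION_NEW_PCT", "20.0"))
    cite_max = int(os.environ.get("SCHOLAR_SATURATION_MAX_CITATIONS", "100"))
    min_rounds = 2
    if len(rounds) < min_rounds:
        return False          # a single quiet round can never end discovery
    new_pct, max_cites = rounds[-1]
    return new_pct < new_pct_max and max_cites <= cite_max
```

`overall_saturated` is then simply the conjunction of this check across every queried source, which is why one quiet source cannot falsely end Phase 1.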
Phase 2 — Triage
Rank the deduplicated corpus and pick the top-N for deep reading.
```bash
python scripts/rank_papers.py \
  --state research_state.json \
  --question "<phase 0 question>" \
  --alpha 0.4 --beta 0.3 --gamma 0.2 --delta 0.1 \
  --top 20
```

The formula is transparent — the script prints it and writes the components to state so the report can cite its own methodology:

```
score = α·relevance + β·log10(citations+1)/3 + γ·recency_decay(half-life=5yr) + δ·venue_prior
```

Defaults target a literature review. For a scoping review prefer higher α (relevance) and lower β (citations). For a systematic review of a narrow question, lower α and higher β.
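The formula translates directly into code. A sketch of the documented scoring, not the actual `rank_papers.py` source:

```python
import datetime
import math

def score(relevance: float, citations: int, year: int, venue_prior: float,
          alpha: float = 0.4, beta: float = 0.3,
          gamma: float = 0.2, delta: float = 0.1,
          half_life: float = 5.0) -> float:
    """score = α·relevance + β·log10(citations+1)/3
             + γ·recency_decay(half-life=5yr) + δ·venue_prior"""
    age = max(0, datetime.date.today().year - year)
    recency = 0.5 ** (age / half_life)   # halves every `half_life` years
    return (alpha * relevance
            + beta * math.log10(citations + 1) / 3
            + gamma * recency
            + delta * venue_prior)
```

The log-compressed citation term is why a 10,000-citation classic cannot drown out a highly relevant recent paper: β contributes at most ~0.4 even at extreme citation counts.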
Write the top-N selection to state:
```bash
python scripts/research_state.py select --state research_state.json --top 20
```

Triage the selection into deep / skim / defer tiers before advancing. Phase 3 fan-out is the most expensive stage of the workflow; not every selected paper deserves a full agent dispatch:
```bash
python scripts/skim_papers.py --state research_state.json \
  --deep-ratio 0.5 --skim-ratio 0.5
```

Defaults split the top-N evenly: top half → `deep` (agent dispatch in Phase 3), bottom half → `skim` (abstract-derived evidence stub auto-filled, `depth=shallow`). For tighter budgets, use `--deep-ratio 0.3 --skim-ratio 0.5` — the remaining 20% gets `tier=defer` and is removed from `selected_ids` (still queryable as candidates for citation chase).

The script emits `data.deep_tier_preview` listing the deep-tier papers by triage_score. Show this to the user before advancing so they can hand-override before agents fan out (re-run with different ratios, or manually re-rank in state). Triage is required before G3 passes — the gate's `triage_applied` check rejects the advance otherwise.

Optional but recommended — prefetch deep-tier PDFs before agent fan-out:
```bash
python scripts/prefetch_pdfs.py --state research_state.json \
  --tier deep --concurrency 4
```

Fetches every deep-tier paper's PDF into `${SCHOLAR_CACHE_DIR:-.scholar_cache}/pdfs/<id-hash>/` via `paper-fetch` (with Unpaywall fallback), in parallel waves, and writes `pdf_path` / `pdf_status` / `pdf_source` / `pdf_bytes` per paper. Phase 3 agents then read the local file directly instead of each running its own download — agent context stays focused on reading + reasoning, not on retrying paywalls.

Failures land as `pdf_status='failed'` with a `pdf_failure_code` (`paper_fetch_error`, `no_open_access_pdf`, `pdf_download_failed`, …); papers without a DOI get `pdf_status='no_doi'`. Phase 3 agents check `pdf_path` first and only fall back to `extract_pdf.py --doi` if the prefetched path is missing. Re-running prefetch is cheap: papers with an existing `pdf_path` on disk are skipped (`pdf_status='cached'`).

**Human-in-loop for paywalled PDFs.** When automatic fetch fails (paywall, OA chain exhausted, no DOI), surface a hand-fetch list to the user via `--emit-manifest` (read-only):
```bash
python scripts/prefetch_pdfs.py --state research_state.json --emit-manifest
```
Returns `{ needs_user_download: [{id, doi, title, drop_at, alt_urls}, ...] }`.
The user downloads each PDF (institutional VPN, ResearchGate, etc.) and drops it at the listed `drop_at` path (any `*.pdf` filename in that subdir works). On the next normal `prefetch_pdfs.py` run, dropped files are auto-absorbed as `pdf_source='user_provided'` without re-fetching.
Skip prefetch entirely when paper-fetch is not installed AND you don't want Unpaywall traffic — Phase 3 agents will then download per-paper inside their own contexts (slower, noisier, but functionally identical).
Phase 3 — Deep read (parallel agent fan-out)
Phase 3 splits by tier:
- `tier=skim` — `apply_triage()` already wrote an abstract-derived evidence stub with `depth=shallow`. No further action needed.
- `tier=deep` — dispatch one agent per paper, in parallel waves of 8–10. Each agent reads the PDF, writes structured evidence back to state, and returns one JSON line. The host's main context never sees the full PDF text.

The agent prompt template lives at `references/agent_prompts/phase3_deep_read.md`. Load it once, instantiate per paper, and dispatch all N tool_use calls in a single message so they fan out concurrently. Per-agent contract:

- Input: `paper_id`, `doi`, `pdf_url`, `abstract`, `question`, `state_path`
- Action: `extract_pdf.py --doi <doi> --output <tmp>` → read text → write `evidence --depth full`
- Output: one line `{"paper_id": "...", "status": "ok"|"evidence_unavailable", ...}`

The state CLI is exclusive-locked, so N agents writing concurrent `evidence` calls are serialized automatically — no coordination needed.
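Tallying a completed wave against the one-line JSON contract might look like this — an illustrative helper, not part of the skill's scripts:

```python
import json

def collect_wave(lines: list[str], deep_ids: list[str]) -> dict:
    """Parse per-agent result lines and sort deep-tier ids into
    done / unavailable / needs-a-follow-up-wave buckets."""
    done: set[str] = set()
    unavailable: set[str] = set()
    for line in lines:
        r = json.loads(line)
        target = done if r["status"] == "ok" else unavailable
        target.add(r["paper_id"])
    missing = set(deep_ids) - done - unavailable
    return {"ok": sorted(done),
            "unavailable": sorted(unavailable),
            "retry": sorted(missing)}

# Example wave: W1 succeeded, W2's full text was unreachable, W3 never reported.
wave = [
    '{"paper_id": "W1", "status": "ok"}',
    '{"paper_id": "W2", "status": "evidence_unavailable"}',
]
```

The `retry` bucket is what drives the follow-up wave described below; `unavailable` papers instead get the `depth=shallow` fallback record.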
After all wave(s) complete, verify deep-tier coverage:
```bash
python scripts/research_state.py advance --state research_state.json \
  --to 4 --check-only
```
If `deep_tier_full_evidence` is failing, dispatch a follow-up wave for the missing ids only. If a paper's full text is genuinely unreachable (paywall, exhausted OA chain), the agent should write a `depth=shallow` record with `method` starting `evidence_unavailable:` per the prompt's failure-mode section — that record satisfies `depth_marks_valid` without inflating the deep-tier coverage count.
**Manual fallback (no agents available).** Hosts that cannot dispatch parallel agents (some non-CC platforms) can run Phase 3 sequentially in the main session: for each `tier=deep` paper, `extract_pdf.py --doi <doi>` then `research_state.py evidence --id <pid> --depth full ...`. Slower and burns more context, but the gate logic is identical.

Phase 4 — Citation chasing
Take the top 5-10 highest-ranked papers and expand the graph.
```bash
# Preview the request count first — this is the most expensive command
python scripts/build_citation_graph.py \
  --state research_state.json \
  --seed-top 8 --direction both --depth 1 --dry-run

# Run with an idempotency key so a retry after a network blip is free
python scripts/build_citation_graph.py \
  --state research_state.json \
  --seed-top 8 --direction both --depth 1 \
  --idempotency-key "chase-$(date -u +%Y%m%dT%H%M)"
```
The script pulls backward references (what did this paper cite?) and forward citations (who cited this paper?), deduplicates against existing state, and writes new candidate papers with `discovered_via: citation_chase`. Run rank + deep read again on any new high-scoring additions.
**Dual backend.** `--source openalex|s2|both` (default `both`). OpenAlex covers most fields well; Semantic Scholar (S2) has better CS / arXiv / cross-disciplinary coverage. The two graphs disagree more than you'd expect — running both then deduping by id surfaces real coverage gaps. S2 needs a DOI / arXiv id / PMID on each seed (it doesn't accept OpenAlex ids); seeds without one skip the S2 backend. `S2_API_KEY` env var raises the S2 quota; without it the public quota of ~1 req/s applies.
**Idempotency.** When `--idempotency-key <k>` is set, the first successful run writes `{response, signature}` to `.scholar_cache/<hash>.json`. A retried run with the same key replays the cached response without re-hitting OpenAlex or re-mutating state. Reusing the same key with different arguments returns `idempotency_key_mismatch` rather than silently serving stale data. Cache directory: `SCHOLAR_CACHE_DIR` env var, default `.scholar_cache/`.
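The replay semantics can be sketched as follows. These are assumed internals; only the documented behavior (replay on same key + same args, `idempotency_key_mismatch` on same key + different args, cache under `SCHOLAR_CACHE_DIR`) comes from the text:

```python
import hashlib
import json
import os

CACHE = os.environ.get("SCHOLAR_CACHE_DIR", ".scholar_cache")

def with_idempotency(key: str, args: dict, fetch) -> dict:
    """Run fetch(args) once per key; replay the cached response on retry."""
    sig = json.dumps(args, sort_keys=True)          # canonical arg signature
    name = hashlib.sha256(key.encode()).hexdigest() + ".json"
    path = os.path.join(CACHE, name)
    if os.path.exists(path):
        with open(path) as f:
            cached = json.load(f)
        if cached["signature"] != sig:
            return {"error": "idempotency_key_mismatch"}  # never serve stale data
        return cached["response"]                   # free replay, no network
    response = fetch(args)
    os.makedirs(CACHE, exist_ok=True)
    with open(path, "w") as f:
        json.dump({"response": response, "signature": sig}, f)
    return response
```

The signature check is the important part: a reused key with changed arguments fails loudly instead of silently replaying the wrong expansion.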
**Special case — a highly cited paper has never been challenged.** If rank says a paper is top-3 by citations but no critiques appear in the corpus, search explicitly for `"<first author> <year>" critique OR limitations OR reanalysis OR failed replication`. This is the confirmation-bias backstop.

Phase 5 — Synthesis
No scripts here — this is where the agent earns its keep. Cluster and structure:
- Thematic clustering. Group the top-N into 3-6 themes that map onto the report outline. Themes should be orthogonal: a paper can be primary to one, secondary to at most one other.
- Tension map. Where do papers disagree? For each disagreement, note: which papers, on what, and whether the disagreement is empirical (different data), methodological (different tools), or theoretical (different framings).
- Timeline. When relevant, a chronological arc: seminal paper → consolidation → refinement → current frontier.
- Venn / gap. What has been studied well, partially, and not at all? The gap is the pivot for Phase 7.
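One possible shape for recording a single entry in the tension map — the field names are hypothetical, since Phase 5 has no prescribed schema or script:

```python
# Illustrative tension-map entry; "W42" etc. stand in for paper ids from state.
tension = {
    "claim": "Pretraining data quality matters more than scale",
    "sides": {"pro": ["W42", "W57"], "contra": ["W13"]},
    # empirical (different data) | methodological (different tools)
    # | theoretical (different framings)
    "kind": "empirical",
}
```

Keeping the paper ids rather than titles here means each tension can later be rendered with `[^id]` anchors in the Phase 7 report without a lookup step.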
Phase 6 — Self-critique
This is not optional. Load `assets/prompts/self_critique.md` and run the full checklist against your draft (still unpublished). The checklist covers:

- Single-source claims (any claim backed by only one paper?)
- Citation/recency skew (is the latest-2-years window covered?)
- Venue bias (is the corpus dominated by one journal/venue?)
- Author bias (does one lab dominate the citations?)
- Untested high-citation papers (anyone cite a paper without reading a critique?)
- Contradictions buried (any tension in Phase 5 that got glossed over?)
- Archetype fit (does the structure match the chosen archetype?)
- Unanchored claims (any statement without a `[^id]` anchor?)
Write findings to `research_state.json` under `self_critique` and fix blockers before Phase 7. Findings go into the report appendix verbatim — the reader deserves to see what the research process doubted itself about.
Phase 7 — Report
Render an archetype scaffold from state, then fill the agent-prose slots and validate anchors:

```bash
# Generate the scaffold — fills header, themes, tensions, methodology
# appendix, self-critique appendix, and bibliography anchor index from
# state. Leaves <!-- AGENT: ... --> placeholders for prose.
python scripts/render_report.py --state research_state.json
```

Writes `reports/<slug>_<YYYYMMDD>.md` by default; pass `--output PATH` to override.
After filling in the prose, lint every `[^id]` anchor against `state.papers`. Catches typo'd anchors before the report ships:

```bash
python scripts/render_report.py --state research_state.json \
  --lint reports/<slug>_<YYYYMMDD>.md
```
Export bibliography in the user's preferred format:

```bash
python scripts/export_bibtex.py --state research_state.json --format bibtex --output refs.bib
python scripts/export_bibtex.py --state research_state.json --format csl-json --output refs.json
```
The scaffold's body uses `[^id]` anchors (the paper id from state). The
bibliography section at the bottom carries one definition per selected
paper. The lint mode flags `unknown_anchors_used` (typos) and
`undefined_in_text` (anchors with no footnote definition); both are
blockers. `unused_definitions` is a soft signal — selected papers that
ended up not cited inline.
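The three lint signals can be sketched against the anchor syntax. A simplified illustration (it ignores footnote-definition lines in the body); `render_report.py --lint` is the authoritative implementation:

```python
import re

def lint_anchors(body: str, bibliography_ids: list[str],
                 state_ids: list[str]) -> dict:
    """Classify every [^id] anchor used in the report body."""
    used = set(re.findall(r"\[\^([\w-]+)\]", body))
    return {
        # anchor ids that match no paper in state — typos, blocker
        "unknown_anchors_used": sorted(used - set(state_ids)),
        # anchors used inline with no footnote definition — blocker
        "undefined_in_text": sorted(used - set(bibliography_ids)),
        # defined papers never cited inline — soft signal only
        "unused_definitions": sorted(set(bibliography_ids) - used),
    }

report = "Transformers dominate[^W1]; RNNs persist in niches[^W9]."
```

With `W1`/`W2` in state and in the bibliography, the stray `[^W9]` above shows up as both an unknown anchor and an undefined one, while `W2` is flagged as unused.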
**Save path convention:** `reports/<slug>_<YYYYMMDD>.md`. The skill does not write outside the working directory unless the user specifies a path.

Report archetype selection
| Archetype | When to use | Primary output shape |
|---|---|---|
| `literature_review` | User wants to know what's established about a topic | Thematic sections + synthesis + gap |
| `systematic_review` | Narrow question, many studies, need rigorous comparison | PRISMA-lite flow + extraction table + pooled findings |
| `scoping_review` | Broad topic, "what has been studied?" | Coverage map + methods inventory + research gap |
| `comparative_analysis` | "A vs B" — methods, models, approaches | Axes of comparison + per-axis verdict + recommendation |
| `grant_background` | Narrative for a proposal introduction | Problem significance + what's known + what's missing + why our approach |

Templates live in `assets/templates/<archetype>.md`. Load only the one you need.
Scripts reference
| Script | Purpose |
|---|---|
| Init, read, write, query the state file. Central to every phase. |
| Primary search (no key, 240M works, citation counts). |
| arXiv API — preprints and CS/ML/physics. |
| Crossref REST — authoritative DOI metadata. |
| NCBI E-utilities — biomedical corpus with MeSH. |
| Exa neural web search (optional, key-gated) — open-web coverage the scholarly APIs miss. |
| DOI normalization + title similarity merging across sources. |
| Transparent scoring formula. Prints the formula and per-paper components. |
| Phase-3 triage. Splits selected papers into |
| Optional. Pulls deep-tier PDFs into a stable cache via paper-fetch (with Unpaywall fallback) before Phase 3 agent fan-out. Concurrent ( |
| Forward/backward snowballing via OpenAlex. |
| Full-text extraction (pypdf). Accepts |
| BibTeX / CSL-JSON / RIS export from state. |
| Phase 7 — render an archetype scaffold from |
All scripts accept `--help` and `--schema`, emit a structured JSON envelope on stdout, and use `research_state.json` as the single source of truth. Every script is idempotent on the state file (network-layer idempotency is P1 work).
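The exact envelope shape is documented in `references/cli_contract.md`; as a hedged sketch of the pattern each script follows on stdout (the field names `ok`, `data`, `error` are assumptions, not the documented schema):

```python
import json

def envelope(ok: bool, data=None, error=None) -> str:
    """Serialize a success/failure result envelope.

    Callers print this to stdout and exit 0 on success, non-zero on failure,
    so downstream agents can parse one JSON object per invocation.
    """
    return json.dumps({"ok": ok, "data": data, "error": error})
```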
CLI contract, env vars, and state schema
Three details that agents discover by running scripts and reading the JSON envelopes — kept out of the body to save context. Load on demand:
- `references/cli_contract.md` — the success/failure envelope shape, exit codes, `--schema` introspection, and idempotency cache semantics.
- `references/env_vars.md` — the trust-boundary env vars (`SCHOLAR_*`, `NCBI_API_KEY`, `EXA_API_KEY`, `S2_API_KEY`, `PAPER_FETCH_SCRIPT`). Agents should never set these — surface to the user when a script reports a missing one.
- `references/state_schema.md` — the `research_state.json` shape. Prefer `python scripts/research_state.py --schema` for the live, machine-readable version.
Completion gates
Each phase transition has a gate (G1..G7). Advance ONLY via:

```bash
python scripts/research_state.py --state <path> advance              # advance by 1
python scripts/research_state.py --state <path> advance --check-only # preview only
```

The gate predicates are enforced in `scripts/_gates.py`. Direct `set --field phase` is rejected — the `phase` field is no longer settable. If the gate fails, the envelope lists the failing checks by name so you know exactly what's missing.

| Target | Gate (enforced) |
|---|---|
| G1 (→ 1) | Question set, archetype valid, state initialized. |
| G2 (→ 2) | |
| G3 (→ 3) | |
| G4 (→ 4) | All selected papers have |
| G5 (→ 5) | ≥1 query whose |
| G6 (→ 6) | |
| G7 (→ 7) | |
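A gate check of this kind reduces to named boolean predicates evaluated against the state, with failures reported by name. The structure below illustrates that pattern; it is not the real `scripts/_gates.py`, and the predicate and archetype names are assumptions:

```python
def check_gate(gate: str, state: dict) -> list[str]:
    """Return the names of failing checks for a gate; empty list means pass."""
    # Hypothetical predicates — the real conditions live in scripts/_gates.py.
    predicates = {
        "G1": {
            "question_set": lambda s: bool(s.get("question")),
            "archetype_valid": lambda s: s.get("archetype") in {
                "literature_review", "systematic_lite", "scoping",
                "comparative", "proposal_background",
            },
        },
    }
    return [name for name, pred in predicates.get(gate, {}).items()
            if not pred(state)]
```

Reporting failing checks by name (rather than a bare pass/fail) is what makes the `--check-only` preview actionable.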
Enrichment with MCP tools
Semantic Scholar coverage is not one of these — it is reached through the script path (`build_citation_graph.py --source s2|both`) and is a first-class Phase 4 backend, not enrichment. The MCP tools below are the genuine skin layer: they may time out, get renamed, or be absent entirely, and no phase output depends on them.

If the session has asta or Brave Search MCP tools available, use them as enrichment:
- `mcp__asta__search_papers_by_relevance` — good for dense relevance ranking on top of the script searches
- `mcp__asta__get_citations` — lighter weight than `build_citation_graph.py` for spot-checking a single seed paper
- `mcp__asta__snippet_search` — grep-like search across abstracts
- Brave Search — non-academic sources (blog posts, press releases, preprint discussion)
Treat MCP tools as unreliable by design — they may time out or be unavailable. Never place a phase-critical step behind an MCP call. Scripts are the spine; MCP is the skin.
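The spine/skin rule can be enforced mechanically: wrap every MCP call so a timeout, rename, or missing tool degrades to the script result instead of failing the phase. A sketch under stated assumptions — `call_tool` and its signature are a hypothetical MCP client, not a real API:

```python
def with_mcp_enrichment(script_result, call_tool, tool_name, args,
                        timeout_s=10.0):
    """Try an MCP tool as enrichment; on any failure, keep the script result."""
    try:
        # call_tool is a hypothetical client; real MCP invocation varies.
        enriched = call_tool(tool_name, args, timeout=timeout_s)
    except Exception:
        # MCP is skin: timeouts and renames never block the phase.
        return script_result
    return enriched if enriched is not None else script_result
```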
Pitfalls (short list; see `references/pitfalls.md` for detail)
- Treating the first page of search results as "the literature" — run multiple keyword clusters and chase citations.
- Unanchored claims — every non-trivial statement in the report needs a `[^id]` pointing to a paper in state.
- Confirmation bias — actively search for critiques of top-cited papers; see Phase 4 special case.
- Preprint conflation — arXiv/bioRxiv are preprints; tag them as such in the report and weight evidence accordingly. Lint-safe convention: place the anchor and marker separately — `[^id] *(preprint)*`, not `[^id, preprint]` (commas inside footnote brackets break Markdown parsing and the `render_report.py --lint` check).
- Venue monoculture — if >60% of top-N come from one journal/venue, broaden sources.
- Author monoculture — same for a single lab or author.
- Recency collapse — the last 2 years matter for "state of the art" framings; check explicit coverage.
- Stale MCP tool names — MCP servers rename tools; always list available tools before assuming names. Script paths are stable; MCP names are not.
- Single-shot search — budget for ≥3 search rounds per cluster, not one.
- Skipping self-critique — the temptation to ship a clean draft is exactly when Phase 6 catches the most.
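The venue-monoculture pitfall above is easy to check mechanically. A sketch — the 60% threshold is from the bullet, but the `venue` field name is an assumption about the state schema:

```python
from collections import Counter

def venue_concentration(papers):
    """Return (most common venue, its share of papers that have a venue)."""
    venues = [p.get("venue") for p in papers if p.get("venue")]
    if not venues:
        return None, 0.0
    venue, count = Counter(venues).most_common(1)[0]
    return venue, count / len(venues)

# If the share exceeds 0.60, broaden sources before advancing.
```

The same pattern applies to author monoculture: swap `venue` for a last-author or lab field.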
Example interaction
A complete walk-through (CRISPR base editing for DMD — Phase 0 question restate through Phase 7 report and bibliography) lives in `references/example_run.md`. Read it once when you want to see what a healthy run looks like end-to-end; it's not load-bearing for routine sessions.

References
Modular documentation, loaded only when needed:
- `references/search_strategies.md` — Boolean clusters, PICO, snowballing, saturation math
- `references/source_selection.md` — which database for which question
- `references/quality_assessment.md` — CRAAP, journal tier, retraction check, preprint handling
- `references/report_templates.md` — the 5 archetypes with section-by-section guidance
- `references/pitfalls.md` — long-form version of the pitfalls list with examples
- `references/cli_contract.md` — JSON envelope shape, exit codes, `--schema` introspection, idempotency cache
- `references/env_vars.md` — trust-boundary configuration (SCHOLAR_*, NCBI_API_KEY, EXA_API_KEY, S2_API_KEY, PAPER_FETCH_SCRIPT)
- `references/state_schema.md` — `research_state.json` shape and ID-normalization rules
- `references/example_run.md` — full end-to-end example (CRISPR base editing for DMD)
- `references/agent_prompts/phase3_deep_read.md` — per-paper prompt for parallel agent fan-out in Phase 3