novelty-check

Novelty Check Skill

Check whether a proposed method/idea has already been done in the literature: $ARGUMENTS

Constants

  • REVIEWER_MODEL = gpt-5.4 — Model used via Codex MCP. Must be an OpenAI model (e.g., gpt-5.4, o3, gpt-4o)

Instructions

Given a method description, systematically verify its novelty:

Phase A: Extract Key Claims

  1. Read the user's method description
  2. Identify 3-5 core technical claims that would need to be novel:
    • What is the method?
    • What problem does it solve?
    • What is the mechanism?
    • What makes it different from obvious baselines?
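The claims extracted in Phase A can be kept in a small structure before searching. This is an illustrative sketch only: the `CoreClaim` class, its field names, and the example claim text are hypothetical, not part of the skill.

```python
# Hypothetical structure for Phase A output: each core claim paired with
# the search terms that Phase B will expand into queries.
from dataclasses import dataclass, field

@dataclass
class CoreClaim:
    claim: str                                  # one technical claim that must be novel
    search_terms: list[str] = field(default_factory=list)

claims = [
    CoreClaim(
        claim="Contrastive replay buffer reduces forgetting in continual RL",
        search_terms=[
            "contrastive replay continual reinforcement learning",
            "catastrophic forgetting contrastive buffer",
        ],
    ),
]
# The skill asks for 3-5 such claims; one is shown here for brevity.
```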

Phase B: Multi-Source Literature Search

For EACH core claim, search using ALL available sources:
  1. Web Search (via WebSearch):
    • Search arXiv, Google Scholar, Semantic Scholar
    • Use specific technical terms from the claim
    • Try at least 3 different query formulations per claim
    • Include year filters for 2024-2026
  2. Known paper databases: Check against:
    • ICLR 2025/2026, NeurIPS 2025, ICML 2025/2026
    • Recent arXiv preprints (2025-2026)
  3. Read abstracts: For each potentially overlapping paper, WebFetch its abstract and related work section
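Phase B's "at least 3 different query formulations per claim" step, with the 2024-2026 year filter, can be sketched as a small helper. The function name and the exact query patterns are assumptions for illustration, not prescribed by the skill.

```python
# Hypothetical expansion of one claim's key terms into several query
# formulations, each carrying the year filter Phase B asks for.
def query_variants(terms: str, start: int = 2024, end: int = 2026) -> list[str]:
    span = f"{start}..{end}"
    return [
        f"{terms} {span}",                      # plain technical-term query
        f'"{terms}" site:arxiv.org {span}',     # exact phrase, arXiv only
        f"{terms} survey OR benchmark {span}",  # broaden to surveys/benchmarks
    ]

queries = query_variants("contrastive replay continual reinforcement learning")
```

Each variant would be issued through WebSearch and the hits cross-checked against the venue lists above.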

Phase C: Cross-Model Verification

Call REVIEWER_MODEL via Codex MCP (mcp__codex__codex) with xhigh reasoning:
config: {"model_reasoning_effort": "xhigh"}
Prompt should include:
  • The proposed method description
  • All papers found in Phase B
  • Ask: "Is this method novel? What is the closest prior work? What is the delta?"
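The Phase C call above might be assembled as follows. The tool name (mcp__codex__codex), the model (gpt-5.4), and the model_reasoning_effort config come from the skill; the request dictionary shape, the helper name, and the placeholder paper entry are assumptions for illustration.

```python
# Hypothetical request builder for the Phase C cross-model review.
def build_review_request(method: str, papers: list[str]) -> dict:
    paper_list = "\n".join(f"- {p}" for p in papers)
    prompt = (
        f"Proposed method:\n{method}\n\n"
        f"Papers found in the literature search:\n{paper_list}\n\n"
        "Is this method novel? What is the closest prior work? What is the delta?"
    )
    return {
        "tool": "mcp__codex__codex",
        "model": "gpt-5.4",                             # REVIEWER_MODEL
        "config": {"model_reasoning_effort": "xhigh"},  # xhigh reasoning
        "prompt": prompt,
    }

request = build_review_request(
    "Contrastive replay buffer for continual RL",
    ["[Paper A, arXiv 2025 — placeholder]"],
)
```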

Phase D: Novelty Report

Output a structured report:

```markdown
# Novelty Check Report

## Proposed Method

[1-2 sentence description]

## Core Claims

1. [Claim 1] — Novelty: HIGH/MEDIUM/LOW — Closest: [paper]
2. [Claim 2] — Novelty: HIGH/MEDIUM/LOW — Closest: [paper]
...

## Closest Prior Work

| Paper | Year | Venue | Overlap | Key Difference |
|-------|------|-------|---------|----------------|

## Overall Novelty Assessment

- Score: X/10
- Recommendation: PROCEED / PROCEED WITH CAUTION / ABANDON
- Key differentiator: [what makes this unique, if anything]
- Risk: [what a reviewer would cite as prior work]

## Suggested Positioning

[How to frame the contribution to maximize novelty perception]
```

Important Rules

  • Be BRUTALLY honest — false novelty claims waste months of research time
  • "Applying X to Y" is NOT novel unless the application reveals surprising insights
  • Check both the method AND the experimental setting for novelty
  • If the method is not novel but the FINDING would be, say so explicitly
  • Always check the most recent 6 months of arXiv — the field moves fast