nsfc-justification-writer


Write/refactor the LaTeX content for the "Project Justification" section of research grant proposals. Based on a minimal information form, output the value and necessity, current limitations, scientific questions/hypotheses, and project entry points, while preserving the template structure. Suitable for writing the project justification section of NSFC and various research grant proposals.


NPX Install

npx skill4agent add huangwb8/chineseresearchlatex nsfc-justification-writer

SKILL.md Content (translated from Chinese)


Research Project Justification Writer

Target Output (Contract)

  • Only write to: extraTex/1.1.立项依据.tex
  • Prohibited modifications: main.tex, extraTex/@config.tex, and any .cls/.sty files
  • Writing goal: clearly explain "why this project needs to be done" and lay the groundwork for "(II) Research Content" with scientific questions/hypotheses and entry points.
  • AI dependency: by default, uses the native intelligence of Claude Code / Codex provided by the runtime environment (no external API key needed; automatically falls back to hard-coded capabilities if AI is unavailable).
  • Theoretical innovation orientation (default): prioritize falsifiable scientific questions/hypotheses, clear theoretical contributions, and complete verification dimensions (see theoretical_innovation_guidelines.md).
  • Configurable writing orientation: switch via style.mode=theoretical|mixed|engineering in skills/nsfc-justification-writer/config.yaml (default: theoretical).

Required Input (Minimal Information Form)

  • If not provided by the user, first collect/supplement: references/info_form.md
  • Recommendation: use the script to generate the information form quickly (with interactive filling): skills/nsfc-justification-writer/scripts/run.py init

Workflow (Execute in Order)

  1. Locate project and target file: confirm project_root, then read and edit only extraTex/1.1.立项依据.tex.
  2. Extract the existing skeleton: if the file already has subheadings like \subsubsection, retain the skeleton and replace only the body paragraphs (unless the user requests restructuring). Exact title matching is not enforced by default (strict_title_match=false); focus on whether the content dimensions are covered.
  3. Progressive writing guidance (recommended): skeleton first → paragraphs → revision → polishing → acceptance, avoiding the pressure of one-step completion.
    • Use scripts/run.py coach --stage auto to judge the current stage automatically and get "only three tasks this round + questions you need to answer + copyable prompts".
    • Modify the body of only one \subsubsection per round; use apply-section for safe writing with automatic backup.
  4. Generate the main narrative of "Project Justification" (recommended 4-paragraph closed loop; the AI checks content-dimension coverage rather than rigidly matching titles):
    • Value and necessity: pain points → scope of impact/cost → why it must be done now.
    • Current status and limitations: mainstream approaches/representative works → 2–4 clear limitations (quantifiable/verifiable where possible).
    • Scientific questions/core hypotheses: one hypothesis + 1–3 key scientific questions (breakpoint style), oriented toward verifiability.
    • Project entry point and contributions: this project's differentiated entry point relative to existing work, plus one transition sentence into the research content.
  5. Verifiability and citation protection:
    • The AI semantically flags "expressions that may put off reviewers" (absolute claims / "filling a gap" / unsubstantiated exaggeration / self-praise) and suggests rewrites; the hard-coded high-risk word list is for prompting only, not mechanical blocking.
    • Do not write unprovable claims such as "internationally leading" or "first in China"; when citing external works, first ask the user for a DOI/link or call nsfc-bib-manager to verify before writing \cite{...}.
  6. Cross-section consistency check: verify that terms/abbreviations/metric definitions align with 2.1 Research Content; if necessary, list 3–5 key terms and metrics for user confirmation.
  7. Target word count analysis: prioritize the user's explicit intent ("word count" / "± range" / interval descriptions) in the information form; fall back to configuration defaults only when no explicit instruction exists.
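Step 7's target-word-count parsing can be sketched as follows. This is a minimal illustration of the idea only; the function name, regexes, and return format are assumptions, not the actual run.py implementation:

```python
import re

def parse_target_words(text):
    """Parse expressions like '3000字', '3000±500', or '2500-3500'
    into a (low, high) word-count range; return None if nothing matches.
    Illustrative sketch, not the skill's real parser."""
    # "3000±500" style: center plus tolerance
    m = re.search(r"(\d+)\s*±\s*(\d+)", text)
    if m:
        center, tol = int(m.group(1)), int(m.group(2))
        return (center - tol, center + tol)
    # "2500-3500" style: explicit interval
    m = re.search(r"(\d+)\s*[-~–]\s*(\d+)", text)
    if m:
        return (int(m.group(1)), int(m.group(2)))
    # bare number: treat as an exact target
    m = re.search(r"(\d+)", text)
    if m:
        n = int(m.group(1))
        return (n, n)
    return None
```

Only when a call like this returns None would the configuration default apply.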

Configuration Validation and Large File Support (Optional)

  • Configuration validation: python skills/nsfc-justification-writer/scripts/run.py validate-config
  • Large file Tier2: diagnose/review --tier2 --chunk-size 12000 --max-chunks 20 (supports .cache/ai caching; ultra-large files use streaming chunking to reduce peak memory usage; use --fresh to force recalculation)
  • Note: the script layer of this repository does not connect to external large models by default; whether AI capabilities are available depends on whether the runtime environment injects a responder (falls back to hard-coded capabilities if unavailable)
  • Related design documents:
    • Content dimension coverage check: skills/nsfc-justification-writer/references/dimension_coverage_design.md
    • Identifying and rewriting "expressions that may put off reviewers": skills/nsfc-justification-writer/references/boastful_expression_guidelines.md
    • Theoretical innovation orientation writing guidelines: skills/nsfc-justification-writer/references/theoretical_innovation_guidelines.md (includes warnings on misused methodological terms)
    • Comparison examples of misused methodological terms: skills/nsfc-justification-writer/references/methodology_term_examples.md (newly added)
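The streaming chunking behavior behind --chunk-size / --max-chunks could look roughly like the sketch below. This is a hedged illustration of the idea under assumed semantics, not the repository's actual Tier2 implementation:

```python
def iter_chunks(path, chunk_size=12000, max_chunks=20, encoding="utf-8"):
    """Yield at most max_chunks chunks of up to chunk_size characters,
    reading the file incrementally so peak memory stays bounded.
    Illustrative sketch only; the skill's real Tier2 logic may differ."""
    with open(path, "r", encoding=encoding) as f:
        for _ in range(max_chunks):
            chunk = f.read(chunk_size)
            if not chunk:  # end of file reached before the chunk cap
                break
            yield chunk
```

Each yielded chunk would then be diagnosed independently, with results merged afterward.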

Configurable Prompt Templates (Optional)

prompts.* in config.yaml / preset.yaml / override.yaml supports two forms:
  • File path: e.g., prompts/tier2_diagnostic.txt
  • Inline multi-line prompt: write multi-line text in YAML using | (useful for adjusting the focus for different fields)
Preset variants can also provide overrides: for example, when --preset medical is used, prompts.tier2_diagnostic_medical can be supplied.

Recommended \subsubsection Title and Content Mapping

Note: the template and config.yaml recommend 4 \subsubsection titles by default (structure.recommended_subsubsections), while the "4-paragraph closed loop" is the content narrative logic. To avoid confusion, map the writing according to the following table:

| \subsubsection Title | Corresponding Narrative Paragraph | Core Writing Elements (Theoretical Innovation Orientation by Default) |
| --- | --- | --- |
| Research Background | Value and Necessity | Theoretical gaps/cognitive deficiencies → why it must be done now (theory-driven) |
| Domestic and International Research Status | Current Limitations | Mainstream approaches → theoretical limitations (overly strong assumptions / missing unified framework / causality gaps / loose boundaries) |
| Limitations of Existing Research | Scientific Questions/Core Hypotheses | Falsifiable hypothesis → key scientific questions → verification dimensions (theoretical proof/theorem/numerical verification) |
| Research Entry Point | Project Entry Point and Contributions | Theoretical differentiated entry point (new representation/methodology/unified framework) → transition to 2.1 Research Content |
If the user really needs different subheadings: keep the 4-paragraph structure and unify the title skeleton first (see templates/structure_template.tex); the structure check no longer mechanically matches titles, but still requires at least 4 sections.
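A skeleton consistent with the recommended four titles could look like the sketch below. This is illustrative only: the authoritative skeleton is templates/structure_template.tex, and the Chinese titles here are assumed renderings of the mapping table, not verbatim template content.

```latex
% Illustrative skeleton only; the actual template is templates/structure_template.tex.
% Titles are assumptions based on the mapping table, not verbatim template content.
\subsubsection{研究背景}
% Value and necessity: theoretical gap -> why it must be done now
\subsubsection{国内外研究现状}
% Current limitations: mainstream approaches -> 2-4 clear limitations
\subsubsection{现有研究的局限性}
% Scientific questions / core hypotheses: falsifiable hypothesis -> key questions
\subsubsection{研究切入点}
% Entry point and contributions -> transition to 2.1 Research Content
```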

Key Capabilities

These capabilities support the closed loop of "diagnose first → generate → safe writing → acceptance":

AI Function List (Optional Enhancement)

| Function | AI Required? | Fallback Behavior |
| --- | --- | --- |
| Tier1 Diagnosis (structure/citations/word count/high-risk examples/dangerous commands) | No | N/A |
| Content Dimension Coverage Check | Optional | Heuristic keyword detection |
| Boastful Expression Recognition (semantic) | Optional | No blocking; only Tier1 high-risk example prompts |
| Term Consistency (semantic) | Optional | Outputs the hard-coded matrix only (terminology.dimensions) |
| AI Example Recommendation (with reasons) | Optional | Keyword/category heuristic matching |
| AI Stage Judgment (coach --stage auto) | Optional | Hard-coded threshold rules |
| Tier2 In-Depth Diagnosis (diagnose --tier2) | Optional | Skipped (Tier1 output only) |

Whether AI is available depends on whether the runtime environment injects a responder; run skills/nsfc-justification-writer/scripts/run.py check-ai to self-test.
  • Tier1 hard-coded diagnosis: structure (≥4 \subsubsection) / whether citation keys exist in .bib / missing-DOI and format-anomaly prompts / word count statistics / high-risk expression prompts and dangerous command scanning
  • Content dimension coverage check (AI): independent of title wording; checks whether "value and necessity / current limitations / scientific questions / entry points" are covered
  • Boastful expression recognition (AI): identifies absolute claims / "filling a gap" / unsubstantiated exaggeration / self-praise, and outputs rewriting suggestions
  • Cross-section consistency matrix: provides cross-section consistency prompts based on terminology.dimensions (research objects/metrics/terms) in config.yaml
  • AI term consistency (optional): when AI is available and terminology.mode=auto/ai, adds semantic checks and fix suggestions for synonym/abbreviation misuse (outputs the matrix only if unavailable)
  • Safe writing tool: precisely locates and replaces body text by \subsubsection{...}; writes only to whitelisted files, with backups (artifacts stored in skills/nsfc-justification-writer/runs/)
  • Pre-writing quality gate (optional): apply-section --strict-quality scans only the newly added body text for high-risk words/dangerous commands; if AI is available, adds semantic blocking of boastful expressions, avoiding blockage by historical content
  • Reviewer suggestion generator: based on DoD + diagnosis results, outputs "what reviewers will ask + how to fix it" (scripts/run.py review)
  • Visual HTML diagnosis report: quickly locate issues (scripts/run.py diagnose --html-report auto)
  • Version diff/rollback: view differences and roll back in one step from runs backups (scripts/run.py diff/rollback)
  • Example recommendation: reads *.metadata.yaml keywords from examples/ to match reference skeletons by topic (scripts/run.py coach --topic ... / scripts/run.py examples)
  • AI example recommendation (optional): when AI is available, prefers semantic matching and provides recommendation reasons (falls back to keyword/category heuristics if unavailable)
  • AI stage judgment (optional): with coach --stage auto, AI infers "skeleton/draft/revise/polish/final" from word count/structure/quality status; falls back to hard-coded thresholds if AI is unavailable
  • Configuration override and presets: supports --preset medical/engineering and ~/.config/nsfc-justification-writer/override.yaml to override parameters such as term dimensions (disable with --no-user-override if needed)
Script entry: skills/nsfc-justification-writer/scripts/run.py (see skills/nsfc-justification-writer/scripts/README.md for usage).

systematic-literature-review Integration (Optional)

This skill supports read-only access to literature review directories generated by systematic-literature-review, making it easy to cite existing research-status content.

Identification Criteria

A directory is automatically identified as systematic-literature-review output if it meets any of the following conditions:
  1. It contains a hidden folder .systematic-literature-review that includes {topic}_review.tex and {topic}_参考文献.bib / references.bib (a running pipeline)
  2. It contains the typical file combination {topic}_review.tex + {topic}_参考文献.bib / references.bib + {topic}_工作条件.md (a completed output directory)
  3. It contains same-prefix files {topic}_review.tex and {topic}_参考文献.bib (matched by filename prefix)
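The three conditions above can be sketched as a heuristic like the following. The function name matches the one documented under Core Functions, but this body is an illustrative assumption; the real implementation lives in core/review_integration.py and may differ:

```python
from pathlib import Path

def detect_slr_directory(path):
    """Heuristic check for a systematic-literature-review directory,
    following the three documented conditions. Illustrative sketch only."""
    root = Path(path)
    hidden = root / ".systematic-literature-review"

    def review_topic(d):
        # Find {topic}_review.tex with a matching {topic}_参考文献.bib or references.bib
        for tex in d.glob("*_review.tex"):
            topic = tex.name[: -len("_review.tex")]
            if (d / f"{topic}_参考文献.bib").exists() or (d / "references.bib").exists():
                return topic
        return None

    # Condition 1: running pipeline (hidden folder holding the file pair)
    if hidden.is_dir() and review_topic(hidden):
        return True
    topic = review_topic(root)
    # Condition 2: completed output directory (pair plus {topic}_工作条件.md)
    if topic and (root / f"{topic}_工作条件.md").exists():
        return True
    # Condition 3: same-prefix .tex/.bib pair alone
    return topic is not None
```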

Read-Only Access Constraints

For directories generated by systematic-literature-review:
  • Read-only mode: only the content of .tex and .bib files is read
  • No writes: no file in the directory is ever modified
  • Citation verification: citations in .tex are automatically checked for consistency with the definitions in .bib
Usage Scenarios

  • The user requests to cite existing literature review content
  • Need to extract research status information from systematic reviews
  • Want to ensure citation consistency

Core Functions

  • Directory detection: detect_slr_directory(path) judges whether a path is a systematic-literature-review directory
  • Directory analysis: analyze_review_directory(path) returns directory structure information
  • Citation verification: validate_citation_consistency(tex_path, bib_path) checks citation consistency
  • Content extraction: extracts key information from .tex and .bib files
Implementation: core/review_integration.py
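The citation-consistency check can be sketched as a set comparison between \cite keys and BibTeX entry keys. The signature follows the documented validate_citation_consistency(tex_path, bib_path), but the body and return format are assumptions, not the code in core/review_integration.py:

```python
import re

def validate_citation_consistency(tex_path, bib_path):
    """Compare \\cite keys in a .tex file against entry keys in a .bib file.
    Returns (missing_in_bib, unused_in_tex), each sorted.
    Illustrative sketch only; the real checker may differ."""
    with open(tex_path, encoding="utf-8") as f:
        tex = f.read()
    with open(bib_path, encoding="utf-8") as f:
        bib = f.read()
    cited = set()
    # Handle \cite, \citet, \citep (optionally starred), with comma-separated keys
    for group in re.findall(r"\\cite[tp]?\*?\{([^}]*)\}", tex):
        cited.update(k.strip() for k in group.split(",") if k.strip())
    # Entry keys: "@article{key," etc.
    defined = set(re.findall(r"@\w+\s*\{\s*([^,\s]+)\s*,", bib))
    return sorted(cited - defined), sorted(defined - cited)
```

A non-empty first list is what should block writing \cite{...} until the user supplies a DOI/link or nsfc-bib-manager verifies the entry.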

Acceptance Criteria (Definition of Done)

  • See: references/dod_checklist.md

Change Log

  • This skill does not maintain a change history in this document; changes are recorded in the root-level CHANGELOG.md.
  • Version numbers are maintained only in skills/nsfc-justification-writer/config.yaml (skill_info.version) and this file's frontmatter, to avoid inconsistency.