nsfc-justification-writer
Write/refactor the LaTeX content for the "Project Justification" section of research grant proposals. Based on a minimal information form, output the value and necessity, current limitations, scientific questions/hypotheses, and project entry points, while preserving the template structure. Suitable for writing the project justification section of NSFC and various research grant proposals.
NPX install: `npx skill4agent add huangwb8/chineseresearchlatex nsfc-justification-writer`
Research Project Justification Writer
Target Output (Contract)
- Only write to: `extraTex/1.1.立项依据.tex`
- Prohibited modifications: `main.tex`, any `extraTex/@config.tex`, and `.cls`/`.sty` files
- Writing goal: clearly explain "why this project needs to be done" and lay the groundwork for "(II) Research Content" with scientific questions/hypotheses and entry points.
- AI dependency: by default, uses the native intelligence of Claude Code / Codex provided by the runtime environment (no external API key needs to be configured; automatically falls back to hard-coded capabilities if AI is unavailable).
- Theoretical innovation orientation (default): Prioritize the falsifiability of scientific questions/hypotheses, clarity of theoretical contributions, and completeness of verification dimensions (see theoretical_innovation_guidelines.md)
- Configurable writing orientation: switch with `style.mode=theoretical|mixed|engineering` in `skills/nsfc-justification-writer/config.yaml` (default is `theoretical`).
Required Input (Minimal Information Form)
- If not provided by the user, first collect/supplement it via `references/info_form.md`.
- Recommendation: use `skills/nsfc-justification-writer/scripts/run.py init` to quickly generate the information form (with interactive filling).
Workflow (Execute in Order)
- Locate the project and target file: confirm `project_root`, then read and edit only `extraTex/1.1.立项依据.tex`.
- Extract the existing skeleton: if the file already has `\subsubsection` subheadings, retain the skeleton and replace only the body paragraphs (unless the user requests restructuring). By default, exact title matching is not enforced (`strict_title_match=false`); the focus is on whether the content dimensions are covered.
- Progressive writing guidance (recommended): skeleton first → then paragraphs → then revision → then polishing → then acceptance (avoids the pressure of one-step completion).
- Use `scripts/run.py coach --stage auto` to automatically judge the current stage and provide "only three tasks for this round + questions you need to supplement + copyable prompts".
- Modify the body of only one `\subsubsection` per round; use `apply-section` for safe writing with automatic backup.
- Generate the main narrative of "Project Justification" (recommended 4-paragraph closed loop; AI will check content dimension coverage instead of rigidly adhering to titles):
  - Value and necessity: pain points → scope of impact/cost → why it must be done now.
  - Current status and limitations: mainstream approaches/representative works → 2–4 clear limitations (quantifiable/verifiable where possible).
  - Scientific questions/core hypotheses: one hypothesis + 1–3 key scientific questions (breakpoint style), oriented toward verifiability.
  - Project entry point and contributions: the differentiated entry point of this project relative to existing work, with one transition sentence into the research content.
- Verifiability and Citation Protection:
  - AI semantically identifies "expressions that may cause reviewer discomfort" (absolute statements / gap-filling claims / unsubstantiated exaggeration / self-promotion) and suggests rewrites; the hard-coded high-risk word list is used only for prompts, not mechanical blocking.
  - Do not write unprovable statements such as "internationally leading" or "first in China"; when citing external works, first ask the user to provide a DOI/link, or call `nsfc-bib-manager` for verification, before writing `\cite{...}`.
- Cross-section consistency check: check whether terms/abbreviations/indicator definitions align with "2.1 Research Content"; if necessary, list 3–5 key nouns and indicators for user confirmation.
- Target word count analysis: prioritize the user's explicit intent in the information form ("word count" / "± range" / interval descriptions); use the configuration defaults only when there is no explicit instruction.
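The target-length parsing described above can be sketched as follows. This is a hypothetical helper for illustration only: the function name, the accepted formats, and the ±10% default are assumptions, not the skill's actual parser.

```python
import re

def parse_target_length(spec: str, default: int = 8000) -> tuple[int, int]:
    """Parse a word-count instruction into a (low, high) range.

    Handles three illustrative shapes: an explicit interval such as
    "6000-8000", a tolerance such as "8000±10%", and a bare number.
    Falls back to `default` ± 10% when nothing matches.
    """
    spec = spec.replace(",", "").replace("字", "")  # strip separators and the character-count unit
    m = re.search(r"(\d+)\s*[-–~]\s*(\d+)", spec)   # interval: "6000-8000"
    if m:
        return int(m.group(1)), int(m.group(2))
    m = re.search(r"(\d+)\s*±\s*(\d+)\s*%", spec)   # tolerance: "8000±10%"
    if m:
        base, pct = int(m.group(1)), int(m.group(2))
        return base * (100 - pct) // 100, base * (100 + pct) // 100
    m = re.search(r"\d+", spec)                     # bare number: "约8000字"
    if m:
        n = int(m.group(0))
        return n * 9 // 10, n * 11 // 10
    return default * 9 // 10, default * 11 // 10
```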
Configuration Validation and Large File Support (Optional)
- Configuration validation: `python skills/nsfc-justification-writer/scripts/run.py validate-config`
- Large-file Tier2: `diagnose/review --tier2 --chunk-size 12000 --max-chunks 20` (supports `.cache/ai` caching; ultra-large files prefer streaming chunking to reduce peak memory usage; use `--fresh` to force recalculation).
- Note: the script layer of this repository does not connect to external large models by default; whether AI capabilities are available depends on whether the runtime environment injects a responder (falls back to hard-coded capabilities if unavailable).
- Related design documents:
  - Content dimension coverage check: `skills/nsfc-justification-writer/references/dimension_coverage_design.md`
  - Identifying and rewriting "expressions that may cause reviewer discomfort": `skills/nsfc-justification-writer/references/boastful_expression_guidelines.md`
  - Theoretical-innovation-oriented writing guidelines (including warnings on misused methodological terms): `skills/nsfc-justification-writer/references/theoretical_innovation_guidelines.md`
  - Comparison examples of misused methodological terms (newly added): `skills/nsfc-justification-writer/references/methodology_term_examples.md`
Configurable Prompt Templates (Optional)
All `prompts.*` keys can be overridden in `config.yaml`, `preset.yaml`, or `override.yaml`, in either of two forms:
- File path: e.g., `prompts/tier2_diagnostic.txt`
- Direct multi-line prompt: write multi-line text in YAML using `|` (suitable for adjusting focus for different fields)
Preset-variant overrides are also supported: for example, when `--preset medical` is used, `prompts.tier2_diagnostic_medical` can be provided.
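As an illustration, a hypothetical `override.yaml` combining the forms above might look like this (all values are placeholders, not shipped defaults):

```yaml
# override.yaml — illustrative prompt overrides (placeholder values)
prompts:
  # Form 1: file path
  tier2_diagnostic: prompts/tier2_diagnostic.txt
  # Form 2: direct multi-line prompt via a YAML block scalar
  coach_stage: |
    You are reviewing a draft NSFC justification section.
    Focus on the falsifiability of the stated hypotheses.
  # Preset variant: consulted when --preset medical is active
  tier2_diagnostic_medical: prompts/tier2_diagnostic_medical.txt
```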
Recommended `\subsubsection` Title and Content Mapping
Note: the template and `config.yaml` recommend 4 `\subsubsection` titles by default (`structure.recommended_subsubsections`), while the "4-paragraph closed loop" is the content narrative logic. To avoid confusion, map the writing according to the following table:
| Recommended `\subsubsection` Title | Corresponding Narrative Paragraph | Core Writing Elements (Theoretical Innovation Orientation by Default) |
|---|---|---|
| Research Background | Value and Necessity | Theoretical gaps/cognitive deficiencies → Why it must be done now (theory-driven) |
| Domestic and International Research Status | Current Limitations | Mainstream approaches → Theoretical Limitations (overly strong assumptions/unified framework missing/causality gaps/loose boundaries) |
| Limitations of Existing Research | Scientific Questions/Core Hypotheses | Falsifiable hypothesis → Key scientific questions → Verification dimensions (theoretical proof/theorem/numerical verification) |
| Research Entry Point | Project Entry Point and Contributions | Theoretical Differentiated Entry Point (new representation/methodology/unified framework) → Transition to 2.1 Research Content |
If the user really needs to modify the subheadings: it is recommended to keep the 4-paragraph structure and unify the title skeleton first (see `templates/structure_template.tex`); the structure check no longer matches titles mechanically, but still requires at least 4 sections.
Key Capabilities
The following capabilities serve the closed loop of "diagnose first → then generate → then safe writing → then acceptance":
AI Function List (Optional Enhancement)
| Function | AI Required? | Fallback Behavior |
|---|---|---|
| Tier1 Diagnosis (structure/citation/word count/high-risk examples/dangerous commands) | ❌ | N/A |
| Content Dimension Coverage Check | ✅ | Heuristic keyword detection (fallback) |
| Boastful Expression Recognition (semantic) | ✅ | No blocking; only rely on Tier1 high-risk example prompts |
| Term Consistency (semantic) | ✅ | Outputs only the hard-coded matrix |
| AI Example Recommendation (with reasons) | ✅ | Keyword/category heuristic matching |
| AI Stage Judgment (coach --stage auto) | ✅ | Hard-coded threshold rules |
| Tier2 In-Depth Diagnosis (diagnose --tier2) | ✅ | Skip (only output Tier1) |
Whether AI is available depends on whether the runtime environment injects a responder; use `skills/nsfc-justification-writer/scripts/run.py check-ai` for a self-test.
- Tier1 hard-coded diagnosis: structure (≥4 `\subsubsection`s) / whether citation keys exist in the `.bib` / DOI-missing and format-abnormality prompts / word count statistics / high-risk expression prompts and dangerous-command scanning.
- Content dimension coverage check (AI): does not depend on title wording; checks whether "value and necessity / current limitations / scientific questions / entry points" are covered.
- Boastful expression recognition (AI): identifies absolute statements / gap-filling claims / unsubstantiated exaggeration / self-promotion, and outputs rewriting suggestions.
- Cross-section consistency matrix: provides cross-section consistency prompts based on `terminology.dimensions` (research objects/indicators/terms) in `config.yaml`.
- AI term consistency (optional): when AI is available and `terminology.mode=auto/ai`, additionally provides semantic checks and revision suggestions for synonym/abbreviation misuse (outputs only the matrix if unavailable).
- Safe writing tool: precisely locates and replaces body text by `\subsubsection{...}`, writes only to whitelisted files with backup (products are stored in `skills/nsfc-justification-writer/runs/`).
- Pre-writing quality gate (optional): `apply-section --strict-quality` scans only the newly added body text for high-risk words/dangerous commands; if AI is available, it adds semantic blocking of boastful expressions, avoiding getting stuck on historical content.
- Reviewer suggestion generator: based on the DoD + diagnosis results, outputs "what reviewers will ask + how to revise" (`scripts/run.py review`).
- Visual HTML diagnosis report: quickly locate issues (`scripts/run.py diagnose --html-report auto`).
- Version diff/rollback: view differences and roll back in one step from the runs backups (`scripts/run.py diff` / `rollback`).
- Example recommendation: reads keywords from `examples/*.metadata.yaml` to match reference skeletons by topic (`scripts/run.py examples` / `scripts/run.py coach --topic ...`).
- AI example recommendation (optional): when AI is available, prioritizes semantic matching and provides recommendation reasons (falls back to keyword/category heuristics if unavailable).
- AI stage judgment (optional): when `--stage auto` is used with coach, AI infers "skeleton/draft/revise/polish/final" from word count/structure/quality status; falls back to hard-coded threshold rules if AI is unavailable.
- Configuration override and presets: supports `~/.config/nsfc-justification-writer/override.yaml` and `--preset medical/engineering` to override parameters such as term dimensions (can be disabled with `--no-user-override` if needed).
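The locate-and-replace step behind the safe writing tool can be sketched as follows. This is a minimal illustration, not the actual implementation: the real `apply-section` additionally enforces the file whitelist, backups, and the quality gate.

```python
import re

def replace_subsubsection_body(tex: str, title: str, new_body: str) -> str:
    """Replace the body text under \\subsubsection{title}, up to the next
    \\subsubsection or end of file, leaving the heading line untouched."""
    pattern = re.compile(
        r"(\\subsubsection\{" + re.escape(title) + r"\}\n)"  # the heading line
        r"(.*?)"                                             # current body (lazy)
        r"(?=\\subsubsection\{|\Z)",                         # stop at next heading/EOF
        re.DOTALL,
    )
    if not pattern.search(tex):
        raise ValueError(f"\\subsubsection{{{title}}} not found")
    return pattern.sub(lambda m: m.group(1) + new_body + "\n", tex, count=1)
```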
Script entry: `skills/nsfc-justification-writer/scripts/run.py` (for usage, see `skills/nsfc-justification-writer/scripts/README.md`).
systematic-literature-review Integration (Optional)
This skill supports read-only access to literature review directories generated by `systematic-literature-review`, facilitating citation of existing research-status content.
Identification Criteria
A directory is automatically identified as a `systematic-literature-review` generated directory if it meets any of the following conditions:
- It contains a hidden folder `.systematic-literature-review` that includes `{topic}_review.tex`/`{topic}_参考文献.bib` and `references.bib` files (running pipeline)
- It contains the typical file combination `{topic}_review.tex`/`{topic}_参考文献.bib` + `references.bib` + `{topic}_工作条件.md` (completed output directory)
- It contains same-prefix files `{topic}_review.tex` and `{topic}_参考文献.bib` (based on filename-prefix matching)
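The three criteria above can be sketched as follows — an illustrative reimplementation for clarity; the authoritative logic lives in `core/review_integration.py` and may differ in detail:

```python
from pathlib import Path

def detect_slr_directory(path: str) -> bool:
    """Return True if `path` matches any of the three identification
    criteria for a systematic-literature-review directory."""
    root = Path(path)
    # Criterion 1: hidden pipeline folder with review .tex and references.bib
    hidden = root / ".systematic-literature-review"
    if hidden.is_dir():
        if any(hidden.glob("*_review.tex")) and (hidden / "references.bib").exists():
            return True
    # Criterion 2: completed output combination including the work-conditions note
    if (any(root.glob("*_review.tex"))
            and (root / "references.bib").exists()
            and any(root.glob("*_工作条件.md"))):
        return True
    # Criterion 3: same-prefix review .tex and reference .bib pair
    for tex in root.glob("*_review.tex"):
        prefix = tex.name[: -len("_review.tex")]
        if (root / f"{prefix}_参考文献.bib").exists():
            return True
    return False
```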
Read-Only Access Constraints
For directories generated by `systematic-literature-review`:
- Read-only mode: only the contents of `.tex` and `.bib` files are read
- No writing: no files in the directory will be modified
- Citation verification: citations in the `.tex` are automatically verified against definitions in the `.bib`
Usage Scenarios
- The user requests to cite existing literature review content
- Need to extract research status information from systematic reviews
- Want to ensure citation consistency
Core Functions
- Directory detection: `detect_slr_directory(path)` judges whether a path is a `systematic-literature-review` directory
- Directory analysis: `analyze_review_directory(path)` returns directory structure information
- Citation verification: `validate_citation_consistency(tex_path, bib_path)` checks citation consistency
- Content extraction: extracts key information from `.tex` and `.bib` files
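The check behind citation verification can be sketched on plain strings — illustrative only; the real `validate_citation_consistency` takes file paths, and its report format may differ:

```python
import re

def find_undefined_citations(tex_source: str, bib_source: str) -> set[str]:
    """Return the set of cited keys with no matching @entry in the .bib."""
    cited: set[str] = set()
    # \cite{a, b}, \citet{...}, \citep{...}, optionally with [..] arguments
    for group in re.findall(r"\\cite[tp]?\*?(?:\[[^\]]*\])?\{([^}]+)\}", tex_source):
        cited.update(k.strip() for k in group.split(","))
    # @article{key, ... / @book{key, ...
    defined = set(re.findall(r"@\w+\{\s*([^,\s]+)\s*,", bib_source))
    return cited - defined
```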
Implementation: `core/review_integration.py`
Acceptance Criteria (Definition of Done)
- See: references/dod_checklist.md
Change Log
- This skill does not maintain a change history in this document; changes are recorded uniformly in the root-level `CHANGELOG.md`.
- Version numbers are maintained only in `skills/nsfc-justification-writer/config.yaml` (`skill_info.version`) and in this file's frontmatter, to avoid inconsistency.