> paper-write
Draft LaTeX paper section by section from an outline. Use when user says "写论文", "write paper", "draft LaTeX", "开始写", or wants to generate LaTeX content from a paper plan.
Paper Write: Section-by-Section LaTeX Generation
Draft a LaTeX paper based on: $ARGUMENTS
Constants
- REVIEWER_MODEL = `gpt-5.4` — Model used via Codex MCP for section review. Must be an OpenAI model.
- TARGET_VENUE = `ICLR` — Default venue. Supported: `ICLR`, `NeurIPS`, `ICML`. Determines style file and formatting.
- ANONYMOUS = true — If true, use anonymous author block. Set to false for camera-ready.
- MAX_PAGES = 9 — Main body page limit. Counts from first page to end of Conclusion section. References and appendix are NOT counted.
- DBLP_BIBTEX = true — Fetch real BibTeX from DBLP/CrossRef instead of LLM-generated entries. Eliminates hallucinated citations. Zero install required. Set to false for legacy behavior (LLM search + `[VERIFY]` markers).
Inputs
- PAPER_PLAN.md — outline with claims-evidence matrix, section plan, figure plan (from `/paper-plan`)
- NARRATIVE_REPORT.md — the research narrative (primary source of content)
- Generated figures — PDF/PNG files in `figures/` (from `/paper-figure`)
- LaTeX includes — `figures/latex_includes.tex` (from `/paper-figure`)
- Bibliography — existing `.bib` file, or one will be created
If no PAPER_PLAN.md exists, ask the user to run /paper-plan first or provide a brief outline.
Templates
Venue-Specific Setup
The skill includes conference templates in `templates/`. Select based on TARGET_VENUE:

ICLR:

```latex
\documentclass{article}
\usepackage{iclr2026_conference,times}
% \iclrfinalcopy % Uncomment for camera-ready
```

NeurIPS:

```latex
\documentclass{article}
\usepackage[preprint]{neurips_2025}
% \usepackage[final]{neurips_2025} % Camera-ready
```

ICML:

```latex
\documentclass{icml2025}
% \documentclass[accepted]{icml2025} % Use [accepted] for camera-ready
```
Project Structure
Generate this file structure:
```
paper/
├── main.tex                  # master file (includes sections)
├── iclr2026_conference.sty   # or neurips_2025.sty / icml2025.sty
├── math_commands.tex         # shared math macros
├── references.bib            # bibliography (filtered — only cited entries)
├── sections/
│   ├── 0_abstract.tex
│   ├── 1_introduction.tex
│   ├── 2_related_work.tex
│   ├── 3_method.tex          # or preliminaries, setup, etc.
│   ├── 4_experiments.tex
│   ├── 5_conclusion.tex
│   └── A_appendix.tex        # proof details, extra experiments
└── figures/                  # symlink or copy from project figures/
```
Section files are FLEXIBLE: If the paper plan has 6-8 sections, create corresponding files (e.g., 4_theory.tex, 5_experiments.tex, 6_analysis.tex, 7_conclusion.tex).
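To make the structure concrete, a minimal `main.tex` sketch for the default ICLR setup might look like the following. This is a sketch under the default 5-section plan, not the shipped template itself — the actual template in `templates/` already handles packages and theorem environments:

```latex
% main.tex — minimal sketch (ICLR, anonymous mode)
\documentclass{article}
\usepackage{iclr2026_conference,times}
\input{math_commands}

\title{Specific, Informative Title}
\author{Anonymous Authors}

\begin{document}
\maketitle
\begin{abstract}
\input{sections/0_abstract}
\end{abstract}
\input{sections/1_introduction}
\input{sections/2_related_work}
\input{sections/3_method}
\input{sections/4_experiments}
\input{sections/5_conclusion}

\bibliography{references}
\bibliographystyle{iclr2026_conference}

\appendix
\input{sections/A_appendix}
\end{document}
```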
Workflow
Step 0: Backup and Clean
If paper/ already exists, back up to paper-backup-{timestamp}/ before overwriting. Never silently destroy existing work.
CRITICAL: Clean stale files. When changing section structure (e.g., 5 sections → 7 sections), delete section files that are no longer referenced by main.tex. Stale files (e.g., old 5_conclusion.tex left behind when conclusion moved to 7_conclusion.tex) cause confusion and waste space.
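The backup step can be sketched in shell. The `mkdir -p paper` line is demo setup only (in real use `paper/` already exists); paths assume the default layout:

```shell
# Sketch of the backup-before-overwrite step
mkdir -p paper                                    # demo setup only
if [ -d paper ]; then
  backup="paper-backup-$(date +%Y%m%d-%H%M%S)"
  cp -r paper "$backup"
  echo "backed up to $backup"
fi
```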
Step 1: Initialize Project
- Create `paper/` directory
- Copy venue template from `templates/` — the template already includes:
  - All standard packages (amsmath, hyperref, cleveref, booktabs, etc.)
  - Theorem environments with the `\crefname{assumption}` fix
  - Anonymous author block
- Generate `math_commands.tex` with paper-specific notation
- Create section files matching the PAPER_PLAN structure
Author block (anonymous mode):

```latex
\author{Anonymous Authors}
```
Step 2: Generate math_commands.tex
Create shared math macros based on the paper's notation:
```latex
% math_commands.tex — shared notation
\newcommand{\R}{\mathbb{R}}
\newcommand{\E}{\mathbb{E}}
\DeclareMathOperator*{\argmin}{arg\,min}
\DeclareMathOperator*{\argmax}{arg\,max}
% Add paper-specific notation here
```
Step 3: Write Each Section
Process sections in order. For each section:
- Read the plan — what claims, evidence, citations belong here
- Read NARRATIVE_REPORT.md — extract relevant content, findings, and quantitative results
- Draft content — write complete LaTeX (not placeholders)
- Insert figures/tables — use snippets from
figures/latex_includes.tex - Add citations — use
\citep{}/\citet{}(all three venues usenatbib)
Section-Specific Guidelines
§0 Abstract:
- Must be self-contained (understandable without reading the paper)
- Structure: problem → approach → key result → implication
- Include one concrete quantitative result
- 150-250 words (check venue limit)
- No citations, no undefined acronyms
- No `\begin{abstract}` — that's in main.tex
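A quick word-count guard for the 150-250 word target can be sketched as follows. The bounds are this skill's defaults, not an official venue rule — check the venue's own limit:

```python
def abstract_in_range(text: str, lo: int = 150, hi: int = 250) -> bool:
    """Check that the abstract's word count sits inside the target range."""
    n = len(text.split())
    return lo <= n <= hi

print(abstract_in_range("word " * 200))  # True
print(abstract_in_range("Too short."))   # False
```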
§1 Introduction:
- Open with a compelling hook (1-2 sentences, problem motivation)
- State the gap clearly ("However, ...")
- List contributions as a numbered or bulleted list
- End with a brief roadmap ("The rest of this paper is organized as...")
- Include the main result figure if space allows
- Target: 1.5 pages
§2 Related Work:
- MINIMUM 1 full page (3-4 substantive paragraphs). Short related work sections are a common reviewer complaint.
- Organize by category using `\paragraph{Category Name.}`
- Each category: 1 paragraph summarizing the line of work + 1-2 sentences positioning this paper
- Do NOT just list papers — synthesize and compare
- End each paragraph with how this paper relates/differs
§3 Method / Preliminaries / Setup:
- Define notation early (reference math_commands.tex)
- Use `\begin{definition}` and `\begin{theorem}` environments for formal statements
- For theory papers: include proof sketches of key results in the main body, full proofs in the appendix
- For theory papers: include a comparison table of prior bounds vs. this paper
- Include algorithm pseudocode if applicable (`algorithm2e` or `algorithmic`)
- Target: 1.5-2 pages
§4 Experiments:
- Start with experimental setup (datasets, baselines, metrics, implementation details)
- Main results table/figure first
- Then ablations and analysis
- Every claim from the introduction must have supporting evidence here
- Target: 2.5-3 pages
§5 Conclusion:
- Summarize contributions (NOT copy-paste from intro — rephrase)
- Limitations (be honest — reviewers appreciate this)
- Future work (1-2 concrete directions)
- Ethics statement and reproducibility statement (if venue requires)
- Target: 0.5 pages
Appendix:
- Proof details (full proofs of main-body theorems)
- Additional experiments, ablations
- Implementation details, hyperparameter tables
- Additional visualizations
Step 4: Build Bibliography
CRITICAL: Only include entries that are actually cited in the paper.
- Scan all `\citep{}` and `\citet{}` references in the drafted sections
- Build a citation key list
- For each citation key:
  - Check existing `.bib` files in the project/narrative docs
  - If not found and DBLP_BIBTEX = true, use the verified fetch chain below
  - If not found and DBLP_BIBTEX = false, search arXiv/Scholar for correct BibTeX
  - NEVER fabricate BibTeX entries — mark unknown ones with a `[VERIFY]` comment
- Write `references.bib` containing ONLY cited entries (no bloat)
Verified BibTeX Fetch (when DBLP_BIBTEX = true)
Three-step fallback chain — zero install, zero auth, all real BibTeX:
Step A: DBLP (best quality — full venue, pages, editors)
```shell
# 1. Search by title + first author
curl -s "https://dblp.org/search/publ/api?q=TITLE+AUTHOR&format=json&h=3"
# 2. Extract the DBLP key from the result (e.g., conf/nips/VaswaniSPUJGKP17)
# 3. Fetch real BibTeX
curl -s "https://dblp.org/rec/{key}.bib"
```
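Step 2 (extracting the key) can be sketched in Python. The nested `result.hits.hit[].info.key` shape below mirrors DBLP's JSON search response, but treat the exact field names as an assumption to verify against a live response:

```python
import json

def dblp_key(response_text: str) -> str:
    """Return the DBLP key of the top search hit (assumed response shape)."""
    hits = json.loads(response_text)["result"]["hits"]["hit"]
    return hits[0]["info"]["key"]

# Hypothetical sample mirroring the assumed response shape
sample = json.dumps({"result": {"hits": {"hit": [
    {"info": {"title": "Attention Is All You Need", "key": "conf/nips/VaswaniSPUJGKP17"}}
]}}})
print(dblp_key(sample))  # conf/nips/VaswaniSPUJGKP17
```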
Step B: CrossRef DOI (fallback — works for arXiv preprints)
```shell
# If the paper has a DOI or arXiv ID (arXiv DOI = 10.48550/arXiv.{id})
curl -sLH "Accept: application/x-bibtex" "https://doi.org/{doi}"
```
Step C: Mark [VERIFY] (last resort)
If both DBLP and CrossRef return nothing, mark the entry with a `% [VERIFY]` comment. Do NOT fabricate.
Why this matters: LLM-generated BibTeX frequently hallucinates venue names, page numbers, or even co-authors. DBLP and CrossRef return publisher-verified metadata. Upstream skills (/research-lit, /novelty-check) may mention papers from LLM memory — this fetch chain is the gate that prevents hallucinated citations from entering the final .bib.
Automated bib cleaning — use this Python pattern to extract only cited entries:
```python
import re
from pathlib import Path

# Paths assume the default paper/ layout generated by this skill.
# 1-2. Grep \citep{...}/\citet{...} keys from all .tex files (handles multi-cite \citep{a,b,c})
tex = "".join(p.read_text() for p in Path("paper/sections").glob("*.tex"))
cited = {k.strip() for grp in re.findall(r"\\cite[pt]\*?\{([^}]+)\}", tex) for k in grp.split(",")}
# 3. Parse the full .bib file; keep only entries whose key is in the cited set
entries = re.split(r"\n(?=@)", Path("paper/references.bib").read_text())
kept = [e for e in entries if (m := re.match(r"@\w+\{\s*([^,\s]+),", e)) and m.group(1) in cited]
# 4. Write the filtered bib
Path("paper/references.bib").write_text("\n".join(kept) + "\n")
```
This prevents bib bloat (e.g., 948 lines → 215 lines in testing).
Citation verification rules (from claude-scholar + Imbad0202):
- Every BibTeX entry must have: author, title, year, venue/journal
- Prefer published venue versions over arXiv preprints (if published)
- Use a consistent key format: `{firstauthor}{year}{keyword}` (e.g., `ho2020denoising`)
- Double-check year and venue for every entry
- Remove duplicate entries (same paper with different keys)
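The key convention can be mechanized with a small helper. This is a sketch; the lowercase/strip-non-letters normalization is an assumption, not a rule from the upstream skills:

```python
import re

def bib_key(first_author_last: str, year: int, keyword: str) -> str:
    """Build a {firstauthor}{year}{keyword} citation key, e.g. ho2020denoising."""
    def clean(s: str) -> str:
        return re.sub(r"[^a-z]", "", s.lower())  # crude: drops hyphens/accents
    return f"{clean(first_author_last)}{year}{clean(keyword)}"

print(bib_key("Ho", 2020, "denoising"))  # ho2020denoising
```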
Step 5: De-AI Polish (from kgraph57/paper-writer-skill)
After drafting all sections, scan for common AI writing patterns and fix them:
Content patterns to fix:
- Significance inflation ("groundbreaking", "revolutionary" → use measured language)
- Formulaic transitions ("In this section, we..." → remove or vary)
- Generic conclusions ("This work opens exciting new avenues" → be specific)
Language patterns to fix (watch words):
- Replace: delve, pivotal, landscape, tapestry, underscore, noteworthy, intriguingly
- Remove filler: "It is worth noting that", "Importantly,", "Notably,"
- Avoid rule-of-three lists ("X, Y, and Z" appearing repeatedly)
- Don't start consecutive sentences with "This" or "We"
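A first-pass scanner for these patterns might look like the following (the word lists are taken from the bullets above; extend them as needed — flagged hits still need manual rewriting):

```python
import re

WATCH_WORDS = ["delve", "pivotal", "landscape", "tapestry", "underscore",
               "noteworthy", "intriguingly"]
FILLER_PHRASES = ["It is worth noting that", "Importantly,", "Notably,"]

def flag_ai_patterns(text: str) -> list[str]:
    """Flag AI watch words and filler phrases for manual rewriting."""
    flags = [f"watch word: {w}" for w in WATCH_WORDS
             if re.search(rf"\b{w}\b", text, re.IGNORECASE)]
    flags += [f"filler: {p}" for p in FILLER_PHRASES if p in text]
    return flags

print(flag_ai_patterns("Notably, we delve into a pivotal landscape."))
```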
Step 6: Cross-Review with REVIEWER_MODEL
Send the complete draft to GPT-5.4 at xhigh reasoning effort:

```yaml
mcp__codex__codex:
  model: gpt-5.4
  config: {"model_reasoning_effort": "xhigh"}
  prompt: |
    Review this [VENUE] paper draft (main body, excluding appendix).
    Focus on:
    1. Does each claim from the intro have supporting evidence?
    2. Is the writing clear, concise, and free of AI-isms?
    3. Any logical gaps or unclear explanations?
    4. Does it fit within [MAX_PAGES] pages (to end of Conclusion)?
    5. Is related work sufficiently comprehensive (≥1 page)?
    6. For theory papers: are proof sketches adequate?
    7. Are figures/tables clearly described and properly referenced?
    For each issue, specify: severity (CRITICAL/MAJOR/MINOR), location, and fix.

    [paste full draft text]
```
Apply CRITICAL and MAJOR fixes. Document MINOR issues for the user.
Step 7: Reverse Outline Test (from Research-Paper-Writing-Skills)
After drafting all sections:
- Extract topic sentences — pull the first sentence of every paragraph
- Read them in sequence — they should form a coherent narrative on their own
- Check claim coverage — every claim from the Claims-Evidence Matrix must appear
- Check evidence mapping — every experiment/figure must support a stated claim
- Fix gaps — if a topic sentence doesn't advance the story, rewrite the paragraph
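Topic-sentence extraction can be automated as a starting point. This is a rough sketch under two assumptions: paragraphs are blank-line-separated blocks, and blocks starting with a LaTeX command are pure markup to skip:

```python
import re

def topic_sentences(tex: str) -> list[str]:
    """First sentence of each paragraph (blank-line separated) in a .tex body."""
    tex = re.sub(r"(?<!\\)%.*", "", tex)                  # strip LaTeX comments
    paras = [p.strip() for p in re.split(r"\n\s*\n", tex) if p.strip()]
    paras = [p for p in paras if not p.startswith("\\")]  # skip pure-markup blocks
    return [re.split(r"(?<=[.!?])\s", p, maxsplit=1)[0] for p in paras]

sample = "We study X.\nIt matters because Y.\n\nOur method does Z. It uses W."
print(topic_sentences(sample))  # ['We study X.', 'Our method does Z.']
```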
Step 8: Final Checks
Before declaring done:
- All `\ref{}` and `\label{}` match (no undefined references)
- All `\citep{}`/`\citet{}` have corresponding BibTeX entries
- No author information in anonymous mode
- Figure/table numbering is correct
- Page count within MAX_PAGES (main body to Conclusion end)
- No TODO/FIXME/XXX markers left in the text
- No `[VERIFY]` markers left unchecked
- Abstract is self-contained (understandable without reading the paper)
- Title is specific and informative (not generic)
- Related work is ≥1 full page
- references.bib contains ONLY cited entries (no bloat)
- No stale section files — every .tex in `sections/` is `\input`ed by `main.tex`
- Section files match main.tex — file numbering and `\input` paths are consistent
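The undefined-reference check can be approximated without compiling (a sketch — `latexmk` warnings remain the ground truth, and the regex ignores exotic cross-referencing packages):

```python
import re

def dangling_refs(tex: str) -> set[str]:
    """Return \\ref/\\cref/\\eqref keys with no matching \\label."""
    labels = set(re.findall(r"\\label\{([^}]+)\}", tex))
    refs = {k.strip()
            for grp in re.findall(r"\\(?:[Cc]ref|ref|eqref)\{([^}]+)\}", tex)
            for k in grp.split(",")}
    return refs - labels

sample = r"\label{sec:intro} See \cref{sec:intro,fig:main}."
print(dangling_refs(sample))  # {'fig:main'}
```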
Key Rules
- Large file handling: If the Write tool fails due to file size, immediately retry using Bash (`cat << 'EOF' > file`) to write in chunks. Do NOT ask the user for permission — just do it silently.
- Do NOT generate author names, emails, or affiliations — use anonymous block or placeholder
- Write complete sections, not outlines — the output should be compilable LaTeX
- One file per section — modular structure for easy editing
- Every claim must cite evidence — cross-reference the Claims-Evidence Matrix
- Compile-ready — the output should compile with `latexmk` without errors (modulo missing figures)
- No over-claiming — use hedging language ("suggests", "indicates") for weak evidence
- Venue style matters — all three venues (ICLR/NeurIPS/ICML) use `natbib` (`\citep`/`\citet`)
- Page limit = main body to Conclusion — references and appendix do NOT count
- Clean bib — references.bib must only contain entries that are actually `\cite`d
- Section count is flexible — match the PAPER_PLAN structure, don't force it into 5 sections
- Backup before overwrite — never destroy an existing `paper/` directory without backing up
Writing Quality Reference
Principles from Research-Paper-Writing-Skills:
- One message per paragraph — each paragraph makes exactly one point
- Topic sentence first — the first sentence states the paragraph's message
- Explicit transitions — connect paragraphs with logical connectors
- Reverse outline test — extract topic sentences; they should form a coherent narrative
De-AI patterns from kgraph57/paper-writer-skill:
- No AI watch words — delve, pivotal, landscape, tapestry, underscore
- No significance inflation — groundbreaking, revolutionary, paradigm shift
- No formulaic structures — vary sentence openings and transitions
Acknowledgements
Writing methodology adapted from Research-Paper-Writing-Skills (CCF award-winning methodology). Citation verification from claude-scholar and Imbad0202/academic-research-skills. De-AI polish from kgraph57/paper-writer-skill. Backup mechanism from baoyu-skills.