> novelty-check
Verify research idea novelty against recent literature. Use when user says "查新", "novelty check", "有没有人做过", "check novelty", or wants to verify a research idea is novel before implementing.
# Novelty Check Skill
Check whether a proposed method/idea has already been done in the literature: $ARGUMENTS
## Constants
- REVIEWER_MODEL = `gpt-5.4` — Model used via Codex MCP. Must be an OpenAI model (e.g., `gpt-5.4`, `o3`, `gpt-4o`)
## Instructions
Given a method description, systematically verify its novelty:
### Phase A: Extract Key Claims
- Read the user's method description
- Identify 3-5 core technical claims that would need to be novel:
  - What is the method?
  - What problem does it solve?
  - What is the mechanism?
  - What makes it different from obvious baselines?
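The output of Phase A can be kept as structured records so that Phase B can search per claim. A minimal sketch, assuming a simple dataclass holds each claim; the field names and the example content below are illustrative, not from any real paper:

```python
from dataclasses import dataclass


@dataclass
class Claim:
    statement: str   # the core technical claim
    problem: str     # what problem it solves
    mechanism: str   # how it works
    delta: str       # what makes it different from obvious baselines


# Hypothetical extracted claim (all content illustrative)
claims = [
    Claim(
        statement="Token-level uncertainty gating for retrieval",
        problem="Reduces unnecessary retrieval calls",
        mechanism="Entropy threshold on the next-token distribution",
        delta="Gates per token rather than per query",
    ),
]
```

Keeping 3-5 such records makes it easy to run the Phase B searches claim by claim and attach the closest paper to each one.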
### Phase B: Multi-Source Literature Search
For EACH core claim, search using ALL available sources:
1. **Web Search** (via `WebSearch`):
   - Search arXiv, Google Scholar, Semantic Scholar
   - Use specific technical terms from the claim
   - Try at least 3 different query formulations per claim
   - Include year filters for 2024-2026
2. **Known paper databases**: check against:
   - ICLR 2025/2026, NeurIPS 2025, ICML 2025/2026
   - Recent arXiv preprints (2025-2026)
3. **Read abstracts**: for each potentially overlapping paper, WebFetch its abstract and related work section
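The "3 different query formulations per claim" step can be sketched as a small helper. This is an illustrative shape only — the exact phrasings and the `2024..2026` year-filter syntax are assumptions, not a fixed API:

```python
def query_variants(claim: str, start: int = 2024, end: int = 2026) -> list[str]:
    """Produce three query formulations for one claim (phrasings are illustrative)."""
    span = f"{start}..{end}"
    return [
        f"{claim} {span}",                     # plain keywords plus a year filter
        f'"{claim}"',                          # exact-phrase match
        f"{claim} survey OR benchmark arxiv",  # broaden to surveys and benchmarks
    ]


queries = query_variants("uncertainty-gated retrieval")
```

Each variant trades precision for recall differently, which is the point of trying several formulations before concluding that no prior work exists.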
### Phase C: Cross-Model Verification
Call REVIEWER_MODEL via Codex MCP (`mcp__codex__codex`) with `xhigh` reasoning:

`config: {"model_reasoning_effort": "xhigh"}`
Prompt should include:
- The proposed method description
- All papers found in Phase B
- Ask: "Is this method novel? What is the closest prior work? What is the delta?"
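Assembling that prompt can be sketched as follows. The paper dict fields (`title`, `year`, `summary`) and the example paper are hypothetical placeholders, not a defined schema:

```python
def build_reviewer_prompt(method: str, papers: list[dict]) -> str:
    """Combine the method description, Phase B findings, and the novelty question."""
    found = "\n".join(
        f"- {p['title']} ({p['year']}): {p['summary']}" for p in papers
    ) or "- (no closely related papers found)"
    return (
        f"Proposed method:\n{method}\n\n"
        f"Papers found in the literature search:\n{found}\n\n"
        "Is this method novel? What is the closest prior work? What is the delta?"
    )


prompt = build_reviewer_prompt(
    "Entropy-gated retrieval (hypothetical)",
    [{"title": "Paper A (illustrative)", "year": 2024,
      "summary": "Query-level retrieval gating."}],
)
```

Passing the Phase B findings verbatim matters: the reviewer model should judge novelty against the actual papers found, not against its own recall of the literature.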
### Phase D: Novelty Report
Output a structured report:
```markdown
## Novelty Check Report

### Proposed Method
[1-2 sentence description]

### Core Claims
1. [Claim 1] — Novelty: HIGH/MEDIUM/LOW — Closest: [paper]
2. [Claim 2] — Novelty: HIGH/MEDIUM/LOW — Closest: [paper]
...

### Closest Prior Work
| Paper | Year | Venue | Overlap | Key Difference |
|-------|------|-------|---------|----------------|

### Overall Novelty Assessment
- Score: X/10
- Recommendation: PROCEED / PROCEED WITH CAUTION / ABANDON
- Key differentiator: [what makes this unique, if anything]
- Risk: [what a reviewer would cite as prior work]

### Suggested Positioning
[How to frame the contribution to maximize novelty perception]
```
## Important Rules
- Be BRUTALLY honest — false novelty claims waste months of research time
- "Applying X to Y" is NOT novel unless the application reveals surprising insights
- Check both the method AND the experimental setting for novelty
- If the method is not novel but the FINDING would be, say so explicitly
- Always check the most recent 6 months of arXiv — the field moves fast