ICLR 2026: Submissions, LLM Disclosures, and the Peer Review Shuffle
Every September, the ML world braces itself for the tidal wave of ICLR submissions. This year (2026) has been nothing short of wild: submission IDs (according to a data point from a reviewer) are already creeping toward 27,000, more than double last year’s ~11k. If you thought NeurIPS was chaotic, welcome to conference roulette season.
In this post, I’ll walk through the new rules, the LLM disclosure chaos, the bidding drama, and the community uproar. Buckle up — this one’s juicy.
The Submission Checklist Panic
Area Chairs have been circulating “final checklists” like survival guides in a zombie apocalypse. Some highlights:
- New mandatory sections:
  - Ethics statement
  - Reproducibility statement
  Both must be tucked right before the References section (around page 9+).
- Brand-new rule: every paper needs a "Use of LLMs" statement in the Appendix. Skip it → risk an automatic desk reject. Yes, even if your LLM usage was just "fixing a few grammar errors."
- Formatting nitpicks:
  - Exactly nine pages of main text (no sneaky tenth page).
  - Watch out for margins, especially top/bottom spacing around figures.
  - Supplementary material must come after References.
- Dummy placeholders must die: no more "TODO," "XXXX," or "???" hiding in your text. And definitely don't forget to replace those "dummy numbers" with actual experimental results. (Apparently, some people submitted with random placeholders before…)
- Author verification on OpenReview: all co-authors must confirm their status. Forgetting this has nuked submissions in the past, especially when senior profs are too busy to click through.
The LLM Disclosure Controversy
Perhaps the spiciest update:
ICLR now requires authors to declare their use of Large Language Models. From the official wording:
“If LLMs played a significant role in research ideation and/or writing to the extent they could be regarded as a contributor, then authors should describe the precise role of the LLM in the main body of the paper in a separate section… Not disclosing significant LLM usage can lead to desk rejection.”
This rule has caused mass confusion:
- Some ask: "Can I just put it at the end of the Conclusion? Or rename it 'LLM Usage Disclosure'?"
  → Answer: Nope. It must live in the Appendix. Otherwise it counts as main text.
- Others: "What if my answer is simply NO?"
  → Then you can omit the section entirely. But beware, some reviewers are paranoid; better safe than sorry.
- Gray-zone cases: "We only used GPT to polish English grammar. Does that count as 'significant'?"
  → The community is split. Some argue it's trivial; others fear even polishing might require disclosure. (One joked that GPT deserves to be "second author.")
- WeChat/Reddit/Weibo groups are buzzing with templates like:
  "We used ChatGPT for language polishing only. No ideas, methods, or experiments were generated by LLMs."
Basically: Disclosure is the new Acknowledgement section.
Peer Review: From “Choose What You Like” to “Reject What You Hate”
Now onto the review process drama.
In past years, reviewers could bid positively on papers they wanted to review. This year? Nope. From the bidding console screenshots shared by the community:
- Reviewers only get to mark which papers they are unwilling to review.
- Everything else is assumed “fine,” and the system will assign up to 5 papers automatically.
- Translation: Your expertise? Optional. Your luck? Essential.
This has triggered an avalanche of complaints:
- “WTF, are they running a social experiment on randomness?”
- “How bad will review quality get if people can’t self-select their expertise?”
- “Is this how we discover AGI? By making grad students review temporal graph papers outside their field?”
🪓 The New “Review-Quota or Desk-Reject” Rule
Even bigger bombshell:
Every ICLR 2026 submission must have at least one author registered as a reviewer. If you submit ≥3 papers, you must review ≥6 papers. Fail to comply → desk rejection. This is being interpreted as:
- A necessary evil (review load has exploded, someone has to carry the burden).
- Or, a blunt weapon: “Review or perish.”
- Some speculate this will reduce “paper spam,” but others fear it punishes newcomers who lack reviewer credentials.
Fun fact: being an Area Chair or Senior Area Chair exempts you. (So yes, professors have their escape hatch… again.)
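The quota rule above can be sketched as a tiny check. This is purely illustrative; the function name and the handling of the under-3-submissions case are my assumptions, not official ICLR tooling (the stated rule only requires that each paper have at least one registered reviewer among its authors).

```python
def min_reviews_owed(num_submissions: int, is_ac_or_sac: bool = False) -> int:
    """Rough sketch of the ICLR 2026 reviewer quota as circulated.

    Assumptions (not official): ACs/SACs are fully exempt, and the
    explicit numeric quota only kicks in at 3+ submissions.
    """
    if is_ac_or_sac:
        # Area Chairs and Senior Area Chairs are exempt from the quota
        return 0
    if num_submissions >= 3:
        # Submitting 3 or more papers triggers the 6-review quota
        return 6
    # Below 3 submissions, no fixed review count is stated; the rule
    # only requires one author per paper to be a registered reviewer
    return 0
```

So a lab submitting five papers owes six reviews, while a single-paper author just needs someone on the author list registered as a reviewer.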
Community Reactions
- Overload concerns: With ~27k submissions, even 3+ reviews per paper means ~80k+ reviews are needed. Yikes.
- Fairness worries: First-timers may be exempted, but mid-career researchers get slammed.
- LLM paranoia: Some joke that half the pool is already “LLM-assisted,” so maybe the real reviewers are GPTs in disguise.
- CVPR overlap: ICLR reviews drop on Nov 11, just two days before CVPR 2026 full submission deadline (Nov 13). Expect mass migration of rejected ICLR papers → CVPR.
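The overload math is back-of-envelope but easy to check. The numbers here are the rough community estimates quoted above, not official figures:

```python
# Back-of-envelope review load for ICLR 2026
# (rumored submission count and a typical 3-reviewer minimum, not official data)
submissions = 27_000
reviews_per_paper = 3
total_reviews = submissions * reviews_per_paper
print(total_reviews)  # 81000, i.e. the "~80k+" figure above
```

And that is before counting emergency reviews, AC meta-reviews, or the rebuttal round.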
Final Thoughts
ICLR 2026 is shaping up to be the most chaotic in recent memory. Between:
- Mandatory LLM disclosures
- New Ethics/Reproducibility checkboxes
- Desk rejections for missing reviewers
- Bidding turning into "negative only" filtering
- A record-breaking submission flood
…it feels less like a research conference and more like an endurance sport.
Maybe that’s the point: science isn’t just about ideas, it’s about surviving the meta-game of peer review.
What do you think — are these changes necessary evolution, or a dangerous social experiment with review quality?
Drop your thoughts below. And I hope you've replaced your "XXXX" placeholders before submission.