How an ICLR Reviewer Reads Your Paper — Practical Tips to Lift Your Acceptance Odds

Tags: iclr2026, review, peer review, novelty, depth, presentation, reproducibility, habit, pitfall

    TL;DR

    An active ICLR reviewer lays out a straightforward reading workflow and a pragmatic scoring lens: novelty × depth × presentation × reproducibility ≈ Weak Accept or above. Fairness of comparisons, solid ablations, and crystal-clear method writing matter more than squeezing out an extra 0.1% on a leaderboard. Keep theory correct (even if simple) and make your contribution easy to reproduce.

    [Image: illustration generated by Nano Banana]

    The Reviewer’s Reading Workflow (What Gets Attention First)

    1. Intro: deep read. The reviewer “locks onto” the Introduction to grasp motivation, what the work does, and how it does it. If this part is murky, the rest of the paper struggles.

    2. Related Work / Conclusion / Appendix: quick scan. Unless there are red flags, these sections are only skimmed, so over-selling in the Conclusion doesn't help. Equations and code get a light check and pass if there are no obvious errors.

    3. Method & Pipeline: close read. The heart of the review: ablations and fair comparisons. SOTA claims are judged by fairness and rigor of comparisons, not numbers alone.

    Author takeaway: If you have limited time, invest it in: (i) sharpening the Intro; (ii) making the Method section surgical and reproducible; (iii) running honest, well-designed ablations.


    The Scoring Lens: A Multiplicative Heuristic

    The reviewer’s personal rule of thumb:

    novelty (innovation) × depth × presentation (clarity) × reproducibility ≈ Weak Accept or better.
    A zero in any factor sinks the product; solid but unflashy papers can still land well if the logic is airtight.
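
    A minimal sketch of that multiplication in Python (my own illustration; the factor values and the function are assumptions, not the reviewer's actual rubric). The point is the failure mode: one zero sinks everything.

        # Hypothetical factor scores in [0, 1], purely for illustration.
        def heuristic_score(novelty: float, depth: float,
                            presentation: float, reproducibility: float) -> float:
            """Multiply the four factors; a zero anywhere sinks the product."""
            return novelty * depth * presentation * reproducibility

        # Solid but unflashy: every factor decent, none zero.
        print(heuristic_score(0.7, 0.8, 0.8, 0.9))   # 0.4032
        # Flashy but irreproducible: a single zero kills the product.
        print(heuristic_score(0.95, 0.9, 0.9, 0.0))  # 0.0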

    What that means for authors

    • Logical coherence beats "blind SOTA." Clear motivation, a reasonable method, trustworthy experiments, and crisp conclusions win. Even an "A+B" style idea can score well if the chain of reasoning is tight and defensible.

    • Clarity over flourish. Write precise, unembellished English; avoid undefined acronyms, logic jumps, and hype.

    • Reproducibility with fair baselines. Report sensible baselines, reliable metrics, and fair setups; include ablations that truly test your claims.

    • Theory: keep it correct (even if simple). No symbol errors or contradictory assumptions; ensure consistency between theory and implementation.

    • Add a “small spark.” An insightful analysis, interpretability angle, or deployment-oriented observation can tip borderline papers upward.


    Timeline Notes & Process Frictions (ICLR 2026)

    • Reviewer deadline: Nov 1, 19:59 (Anywhere on Earth). Historically, last-minute portal issues can happen — reviewers are advised to submit slightly early.

    • Score release: ~Nov 12, early morning; rebuttal slated to begin around Dec 4. Authors should plan experiments and response writing accordingly.

    • Load & pressure: This cycle's paper-reviewer assignment ("sharding") feels accurate, but reviewers report a compressed window (e.g., five papers in roughly two weeks) and competing end-of-year deadlines (CVPR is close), which raises the premium on clear, efficiently reviewable papers.


    A Reviewer-Aligned Author Checklist

    Use this one-page pre-submission pass aligned to the reviewer’s habits:

    1) Introduction (most scrutinized)

    • Problem & gap are explicit in the first 2–3 paragraphs.
    • Contribution bullets are specific, testable, and map to experiments.

    2) Method (deep dive)

    • A minimal, self-contained description that lets others re-implement.
    • Ablations isolate each design choice; fair SOTA comparisons (same data, budgets, metrics).
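
    One way to read "isolate each design choice" is a leave-one-out ablation: start from the full configuration and switch off one component at a time, holding data, budget, and metrics fixed. A minimal sketch in Python (the component names and the runner are hypothetical, not from the post):

        # Hypothetical design choices, purely for illustration.
        full = {"new_loss": True, "extra_aug": True, "attention_block": True}

        # One run for the full model, plus one run per disabled component.
        runs = [dict(full)] + [{**full, name: False} for name in full]

        for config in runs:
            # score = train_and_eval(config)  # hypothetical runner: same data,
            #                                 # budget, and metrics for every run
            print(config)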

    3) Reproducibility

    • Datasets, hyperparams, seeds, and training schedules summarized; any code or equations checked for obvious inconsistencies.
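
    For the seeds item above, a minimal sketch of pinning the usual randomness sources, assuming a PyTorch/NumPy setup (the helper name is my own, not from the post):

        import random

        import numpy as np
        import torch

        def set_seed(seed: int = 42) -> None:
            """Pin the common randomness sources so a run can be replayed."""
            random.seed(seed)
            np.random.seed(seed)
            torch.manual_seed(seed)
            torch.cuda.manual_seed_all(seed)  # safe no-op without a GPU

        set_seed(42)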

    4) Writing quality

    • Trim hype; define acronyms on first use; remove logic jumps; ensure figures directly support claims.

    5) Theory sanity

    • No symbol misuse; assumptions are stated; math aligns with implementation.

    6) “Small spark”

    • Provide one insightful analysis (e.g., failure modes, sensitivity, or interpretability) that teaches the reader something beyond the main metric.

    Common Pitfalls (and Fixes)

    • Over-indexed on SOTA, under-indexed on fairness.
      Fix: Match training budgets, report variance across seeds, and justify metric choices; show when your method doesn't win and why (a variance-reporting sketch follows this list).

    • Beautiful equations, shaky implementation.
      Fix: Cross-check symbols, units, and pseudo-code; verify that the method implemented is the method claimed.

    • Hype-driven writing.
      Fix: Replace superlatives with evidence; foreground ablations and clear limitations.
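
    On "report variance": a minimal sketch of the usual mean-and-standard-deviation summary over seeds (the accuracy numbers are made up purely for illustration):

        import statistics

        # Hypothetical test accuracies: identical config, different seeds.
        scores = [81.2, 80.7, 81.5, 80.9, 81.1]
        mean = statistics.mean(scores)
        std = statistics.stdev(scores)  # sample standard deviation (n - 1)
        print(f"{mean:.1f} +/- {std:.1f} over {len(scores)} seeds")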


    Remember:

    Make it easy to believe you.

    If the motivation is lucid, the method is careful, the comparisons are fair, and the results are reproducible, you don’t need a flashy SOTA to clear the bar. That’s not just “peer review hygiene”; it’s how good ideas travel.
