Open Reviewing in Machine Learning: A New Community Survey for ICLR 2025

Category: Artificial intelligence & Machine Learning
Tags: iclr, survey, 2013, 2025, neurips, peer review, openreview, icml, reviewer anonymity, closed reviewing

Posted by cqsyf (Super Users)
    The way we review scientific papers is once again under the spotlight. Justin Payan, Vishisht Rao, and Nihar B. Shah of Carnegie Mellon University, together with Andrew McCallum (OpenReview), have launched a new survey on open vs. closed reviewing practices, in collaboration with ICLR 2025. This effort revisits a landmark 2013 survey that coincided with the very first ICLR, which also marked the debut of OpenReview.net as a live experiment in fully open peer review.

    The survey is open until September 5, 2025:

    [Image: 2013-2025 ICLR survey banner]

    Why This Matters

    Peer review is the backbone of scientific communication, yet in computer science, where conferences like NeurIPS, ICML, and ICLR serve as top-tier publication venues, the process is constantly strained by explosive growth in submissions, reviewer overload, and concerns about fairness, bias, and review quality.

    The rise of open reviewing, where submissions, reviews, and sometimes even comments are made public, has transformed expectations. ICLR is perhaps the boldest example: all submissions are visible immediately, all reviews are released publicly, and community commentary is allowed. While many praise the increased transparency, others worry about unintended consequences such as resubmission bias, reputational risks, or less candid feedback.

    This new 2025 survey aims to gather large-scale empirical insights from authors, reviewers, area chairs, and organizers across ML conferences to inform the future of peer review policies.


    What the Survey Asks

    The survey is structured around roles (author, reviewer, area chair, etc.) and adapts its questions accordingly. Here are some highlights from the question set:

    • General perception questions

      • Are publicly available reviews useful when you are not an author or reviewer?
      • How comfortable are you posting public comments on others’ submissions?
      • Do open reviewing policies make the decision process more fair compared to closed review?
    • Reviewer-focused questions

      • Does the possibility of your review being public make you fear de-anonymization?
      • Do you become more or less critical under open review? Why?
      • Does openness incentivize higher-quality reviewing?
      • Have public reviews helped you learn to write better reviews?
    • Author-focused questions

      • Did comments from non-reviewers help you improve your paper?
      • Do you feel pressured to add unnecessary experiments under open review?
      • Have you ever refrained from submitting to fully open venues because rejected submissions and reviews would be public?
    • Broader policy implications

      • Should rejected submissions be visible? (Always, opt-in, or never)
      • Should anonymized reviews be public for all papers, only accepted ones, or hidden?
      • Should public comments be allowed (and when)?
      • Should official LLM-generated reviews supplement human reviews, and if so, who should see them?
    • Impact assessment: Participants rank potential outcomes of open reviewing, such as:

      • Reviewers becoming less comfortable expressing criticism.
      • Reviewers writing higher-quality reports.
      • Authors feeling deterred from submitting.
      • Fairer overall decision-making.

    The survey also collects demographic information such as roles in past conferences, primary affiliation (academia vs. industry), and career stage to allow for nuanced analysis.


    Background: From ICLR 2013 to ICLR 2025

    The inspiration for this survey is rooted in a pivotal moment: ICLR 2013, the first conference to fully adopt open review on OpenReview.net. That experiment was bold and controversial, but surveys at the time showed that most participants found the process fairer, the reviews more constructive, and the discussions more engaging. Yet concerns about anonymity, reviewer candidness, and reputational risks also emerged.

    Since then:

    • NeurIPS adopted a partially open model: reviews are released with accepted papers, and authors can opt in to release rejected ones.
    • ICML remained closed until 2024, prioritizing confidentiality and candid feedback.
    • ICLR stayed committed to being fully open, shaping an entire generation’s perception of scientific publishing.

    This diversity of policies across venues highlights the need for data-driven input on what works, what doesn’t, and how different stakeholders perceive the trade-offs.


    Scholarly Context

    The survey is anchored in a body of research on peer review led by Nihar B. Shah, who has published extensively on biases, calibration, reviewer assignment, incentives, and more. His 2024 extended survey article (An Overview of Challenges, Experiments, and Computational Solutions in Peer Review) is a comprehensive catalog of peer review’s pitfalls and algorithmic remedies — from mismatched expertise and collusion rings to miscalibration and hypercriticality.

    Equally relevant is the 2013 Open Scholarship and Peer Review paper by Soergel, Saunders, and McCallum, which introduced the OpenReview system. They argued that the key innovation is decoupling dissemination from evaluation: making papers public early, while experimenting with diverse peer review models layered on top.


    Community Impact and Next Steps

    This survey is not just an academic exercise. Its results will:

    • Inform future ICLR policy decisions, though ultimately the program chairs decide.
    • Provide data for the broader ML community to reflect on trade-offs in peer review.
    • Contribute to computational social science research on distributed evaluation systems.

    😳 Fun fact: Nihar B. Shah has been giving valuable input to CSPaper Review, a tool designed to streamline and improve conference reviewing workflows.


    Further Reading

    • Shah, Nihar B. (2024). An Overview of Challenges, Experiments, and Computational Solutions in Peer Review (Extended Version). https://www.cs.cmu.edu/~nihars/preprints/SurveyPeerReview.pdf
    • Soergel, D., Saunders, A., & McCallum, A. (2013). Open Scholarship and Peer Review: A Time for Experimentation. https://openreview.net/pdf?id=xf0zSBd2iufMg

    Final Thought:
    As one survey respondent in 2013 put it, “The best part of open reviewing is that all papers are immediately visible so ideas spread faster.” The worst? “The jury is still out.”

    Now, more than a decade later, we as a community have another chance to weigh in. Don’t miss it.
