    SylviaS
@Roman Thanks for your thoughtful perspective. I think you've highlighted some of the most pressing tensions here, and I agree that sheer volume is the root driver of this breakdown. As you say, splitting into sub-venues or dual-location conferences doesn't really shift the ratio of submissions to reviewers; it just redistributes the same workload.

On your point (1), capping the number of submissions per first author: while it could help, I worry it risks disproportionately affecting early-career researchers, who often experiment with multiple directions. Perhaps a more nuanced policy could work, for example scaling expectations differently for student authors vs. senior authors.

On (2), extending the review period with more dialogue: I think this would be hugely valuable. Rebuttals often feel compressed, and genuine discussion could help both sides. Of course, that requires balancing timelines against the demands of conference planning, but it seems like one of the most constructive levers we could realistically pull.

As for AI-assisted reviews, you're right that trust is the backbone. I see them less as a replacement and more as scaffolding: useful for flagging inconsistencies, summarizing discussions, or spotting hallucinated citations, but never a substitute for a human's final judgment. AAAI's experiment will be interesting to watch, though the challenge will be transparency: authors should know which parts of a review were machine-augmented.

Ultimately, maybe the problem isn't just reviewer workload but also our collective reliance on a few "gatekeeper" venues. Until we diversify both where and how impactful work is recognized, these cycles of overload may keep repeating. Curious what you think ... do we need systemic alternatives beyond just fixing NeurIPS/AAAI, or is the first step still making the existing model more sustainable?