  • 0 Votes
    1 Posts
    736 Views
    No one has replied
  • 1 Votes
    1 Posts
    2k Views
    No one has replied
  • 0 Votes
    2 Posts
    5k Views
    L
    Any source on this one? AAAI has not yet released anything...
  • 0 Votes
    3 Posts
    10k Views
    SylviaS
    @Roman Thanks for your thoughtful perspective. I think you’ve highlighted some of the most pressing tensions here. I agree with you that sheer volume is the root driver of this breakdown. As you say, splitting into sub-venues or dual-location conferences doesn’t really shift the ratio of submissions to reviewers; it just redistributes the same workload.
    On your point (1), capping the number of submissions per first author: while it could help, I worry it risks disproportionately affecting early-career researchers, who often experiment with multiple directions. Perhaps a more nuanced policy could be considered, for example scaling expectations differently for student authors vs. senior authors.
    On (2), extending the review period with more dialogue: I think this would be hugely valuable. Rebuttals often feel compressed, and genuine discussion could help both sides. Of course, that requires balancing timelines with the demands of conference planning, but it seems like one of the most constructive levers we could realistically pull.
    As for AI-assisted reviews, you’re right that trust is the backbone. I see them less as a replacement and more as scaffolding: useful for flagging inconsistencies, summarizing discussions, or spotting hallucinated citations, but never a substitute for a human’s final judgment. AAAI’s experiment will be interesting to watch, though the challenge will be transparency: authors should know which parts of the review were machine-augmented.
    Ultimately, maybe the problem isn’t just reviewer workload but also our collective reliance on a few “gatekeeper” venues. Until we diversify both where and how impactful work is recognized, these cycles of overload may keep repeating. Curious what you think: do we need systemic alternatives beyond just fixing NeurIPS/AAAI, or is the first step still making the existing model more sustainable?
  • 1 Votes
    2 Posts
    533 Views
    N
    Is CSPaper (https://review.cspaper.org) collaborating with AAAI's new AI review initiative?
  • 1 Votes
    1 Posts
    305 Views
    No one has replied
  • 1 Votes
    3 Posts
    594 Views
    rootR
    The fix has been pushed to our production site: https://review.cspaper.org/
  • 1 Votes
    4 Posts
    1k Views
    riverR
    Looks like the AAAI-26 rollercoaster isn’t slowing down anytime soon. Both of the latest notices confirm what many of us have been feeling: the review timeline is slipping yet again. Originally, Phase 1 notifications were due Sept 8. Then the OpenReview crash pushed it to Sept 12. Now the official word is Sept 15.
    August 4, 2025: Supplementary material and code due by 11:59 PM UTC-12
    September 15, 2025: Notification of Phase 1 rejections
    October 7–13, 2025: Author feedback window
    November 8, 2025: Notification of final acceptance or rejection (Main Technical Track)
    That three-day “extra wait” might not sound like much, but for authors it’s brutal. It means:
    Suspended in mid-air: checking the inbox and OpenReview every morning, still no word.
    Rebuttal delayed: the author response window got pushed past the October holidays, which makes sense only if the reviews themselves aren’t ready.
    Compressed transfer time: folks planning to bounce to ICLR 2026 if rejected are losing precious prep days. With the ICLR abstract due Sept 19 and the full paper due Sept 24, every delay cuts deep.
    To add spice, the program chairs hinted the Phase 1 rejection rate could hit 50–67%. That means only about a third of submissions will survive past the first cut. With nearly 29,000+ papers in the system, more than double last year, the scale is unprecedented.
    The bigger picture: Emergency “last-minute reviewers” are being pulled in to cover gaps. Other conferences are also bending: NeurIPS’s “dual-city” experiment saw accepted papers later force-rejected due to quota caps. The pattern is clear: our current peer review model is hitting a breaking point. Technical crashes, reviewer overload, rebuttals turning into vent sessions: all signs of strain.
    Open questions for us as a community:
    Do we just accept longer waits and higher rejection odds as the new normal?
    Should AAAI (and other big A* conferences) move toward dynamic, rolling review models rather than single-shot deadlines?
    Or do we need to rethink reciprocal review obligations more fundamentally, to balance load without roulette-style assignments?
    For now, all we can do is hang tight until Sept 15 (no more extensions). But honestly, given the trajectory, I wouldn’t be surprised if “Sept 15” becomes “Sept whenever.” Anyone here already prepping ICLR as a fallback? Or are you holding out for the rebuttal round?
  • 0 Votes
    1 Posts
    285 Views
    No one has replied
  • 0 Votes
    2 Posts
    869 Views
    rootR
    It is said that there are over 22,000 valid submissions entering the review phase.
  • 1 Votes
    2 Posts
    546 Views
    rootR
    Thanks @Joanne for sharing this. CSPaper Review has many known problems awaiting fixes. We are committed to fixing them in a timely manner!
  • 0 Votes
    1 Posts
    420 Views
    No one has replied
  • 3 Votes
    3 Posts
    464 Views
    JoanneJ
  • 1 Votes
    3 Posts
    756 Views
    riverR
    I want to add a bit of my own reflection on AI review.
    Potential Strengths
    Scalability and Efficiency: AI systems could assist in managing the ever-growing number of submissions, reducing workload for human reviewers and accelerating review timelines.
    Consistency and Standardization: Automated systems can enforce uniform criteria, potentially reducing variance caused by subjective or inconsistent human judgment.
    Augmented Support for Humans: AI could provide structured summaries, highlight methodological issues, or retrieve related prior work, acting as a co-pilot rather than a replacement for human reviewers.
    Transparency and Traceability: With criterion-aligned or structured outputs, AI systems might make explicit how particular aspects of a paper were evaluated, offering traceability that complements human interpretation.
    Concerns and Limitations
    Quality and Depth of Judgment: Peer review is not just about summarization or surface-level critique. Human reviewers often contribute domain expertise, intuition, and contextual reasoning that AI currently struggles to replicate.
    Evaluation Metrics Misalignment: Overlap-based metrics (e.g., ROUGE, BERTScore) may not fully capture the nuanced quality of reviews, which often rely on critical reasoning and qualitative assessment.
    Dataset and Generalizability Issues: Many experiments in this space rely on small or narrow datasets (e.g., limited to certain conferences), which risks overfitting and reduces generalizability to other domains.
    Reproducibility and Fairness: Reliance on proprietary large language models introduces cost, access, and reproducibility challenges. Comparisons across different model sizes or modalities can also create fairness concerns.
    Multimodality and Context Handling: While AI can parse text and visuals, questions remain about whether figures, tables, and extended contexts truly require specialized handling beyond what modern large-context models can already process.
    Ethical and Practical Considerations
    Human Replacement vs. Human Augmentation: A key concern is whether AI should replace reviewers or assist them. Many argue for augmentation rather than substitution, especially given the subjective and community-driven nature of peer review.
    Bias and Trust: AI-generated reviews may inherit biases from training data or evaluation frameworks, raising questions about fairness and transparency in decision-making.
    Cost and Sustainability: Running AI review systems at scale may incur significant computational and financial costs, particularly when leveraging closed, high-capacity models.
    Accountability: Unlike human reviewers, AI systems cannot be held accountable for their judgments, which complicates trust and governance in academic publishing.
    Emerging Attitudes
    Skepticism: Many scholars remain unconvinced that AI can capture the essence of peer review, viewing it as reductionist or superficial.
    Cautious Optimism: Some see AI as a promising assistant to support human reviewers, especially for summarization, consistency checks, or initial screening.
    Call for Rigor: There is consensus that human evaluation, broader benchmarking, and careful methodological design are critical before integrating AI into the peer review process at scale.
    In summary: AI in peer review is seen as an intriguing and potentially useful tool for augmentation, but concerns around motivation, evaluation validity, fairness, and the irreplaceable role of human judgment dominate current attitudes. There is strong agreement that more rigorous evidence and careful deployment strategies are needed before AI can play a central role in scholarly reviewing.
  • 1 Votes
    3 Posts
    788 Views
    JoanneJ
    Yeah, it’s quite something. Wondering how the reviewing process will keep up with this scale.
  • 2 Votes
    12 Posts
    4k Views
    R
    Some guidance sent today to reviewers (PC members):
    Dear XXXX,
    As a member of the Program Committee, you play a critical role in the success of the AAAI-26 conference. Now that we’ve released Phase 1 paper assignments, we rely on reviewers like you to carefully read papers and provide an informed review of the strengths and weaknesses of the papers. Such reviews are then used to decide which papers will pass into Phase 2 of the review process.
    Reviews for Phase 1 are due on Monday, September 1 (anywhere on earth). If you anticipate any problems meeting this deadline, please notify the SPC for this paper by creating an “official comment” and setting “readers” to the SPC in OpenReview ASAP. You can find more detailed information about the entire review process in the Instructions For AAAI 2026 Reviewers.
    The AAAI-26 paper matching process takes into account many inputs, including paper keywords, your OpenReview profile, your DBLP record, your paper bids, etc. In some cases, the papers you are matched with may not be a perfect fit for your expertise. Because AAAI-26 is a very large conference that encompasses many types of AI, please do your best. If you wish to pull in an additional colleague to help you review the paper, please check with your SPC first (see instructions above on how to create an official comment). If you do use a subreviewer, please remember that you are responsible for all content in the review, and you should still take part in the Phase 2 discussion (if the paper proceeds to Phase 2).
    For those of you who have not reviewed for AAAI recently, or just want some additional reviewing tips, please check out the “Guidelines On Writing Helpful Reviews” (also included in the Instructions For AAAI 2026 Reviewers).
    As we go through the paper review process, please ensure you are familiar with AAAI-26’s ethics policies listed in the Ethical Guidelines for AAAI-26 Reviewers. If you have any concerns regarding ethics, such as authors or reviewers breaking anonymity, reviewers using LLMs, or inappropriate pressure to change ratings, you may report them to our ethics chairs through this Google Form. Potential ethics violations will be investigated, now or in the future, by the AAAI-26 Ethics Chairs and/or the AAAI standing Ethics Committee. The chairs or committee will impose penalties depending on the severity of the infraction. We must emphasize that there is no time limit on when ethics violations can be investigated and sanctions imposed.
    Thank you again for all your work in making AAAI-26 a success! (If you have any questions or suggestions, please continue to contact us via workflowchairs26@aaai.zendesk.com.)
    Sincerely,
    The AAAI-26 Program Chairs
  • 1 Votes
    3 Posts
    552 Views
    rootR
    @xiaolong Yeah, right? Hopefully we do not hear such news in 2025 and beyond…