Rethinking the Peer Review Burden: Reflections on Eric Xing’s CVPR 2026 Post
---
In a recent LinkedIn post, Eric Xing — President of MBZUAI, Professor at Carnegie Mellon University, and Chief Scientist at GenBio — raised a pressing concern about evolving peer review expectations at major AI and computer vision conferences. His post centers on a new clause in the CVPR 2026 Author Enrollment Form, which requires all qualified authors to commit to serving as reviewers and warns that failure to do so could affect the review process for their own submissions.
This move, presumably designed to manage the explosive growth of submissions to top-tier conferences like CVPR, has stirred significant debate. Xing describes the approach as “a new tactic of imposing demands,” arguing that it risks excluding legitimate contributors who, for valid reasons, cannot serve as reviewers: senior academics and leaders who already shoulder heavy administrative and mentoring duties, non-CV collaborators such as domain experts or data providers, and industry sponsors who contribute meaningfully but lack technical reviewing expertise.
The implications of Xing’s post extend beyond CVPR. It highlights a sustainability crisis in academic peer review — a system strained by exponential submission growth, a limited pool of qualified reviewers, and rising expectations for review quality. While mandatory reviewing may be intended to distribute the workload more evenly, it blurs the line between authorship and reviewer obligation, potentially discouraging interdisciplinary collaboration and senior involvement.
Xing calls for a more thoughtful, long-term solution that ensures fairness, inclusivity, and sustainability in scientific publishing. Rather than relying on coercive mechanisms, the community might consider systemic innovations — from better incentive structures to leveraging technology that supports reviewer efficiency.
One emerging direction is AI-assisted review systems, such as CSPaper.org, which aim to streamline and partially automate aspects of peer review. Platforms like these could help relieve the mounting pressure from the growing volume of submissions — a challenge that will only intensify as AI research continues its exponential trajectory.