For AI, by AI: A New Scientific Paradigm
By Prof. Yang Yu, Nanjing University
Original article (Chinese)
Yang Yu’s Perspective
Over the past two years, large language models (LLMs) have increasingly been used to assist in writing, sparking ongoing debates about how to prevent their misuse in academic paper writing. Much of the focus has been on detection and restriction. However, with technological advancements, the trend may be shifting in another direction.
The Association for the Advancement of Artificial Intelligence (AAAI) has announced that its 2026 conference will introduce AI-assisted peer review. In this model, AI is positioned as a participant in the academic process—essentially, as a "peer."
This initiative is motivated by the challenges facing today’s peer review system: an overwhelming number of submissions coupled with a shortage of reviewers, resulting in declining efficiency and quality. By delegating part of the initial review to AI—such as checking experimental completeness and logical rigor—human experts can concentrate on assessing research novelty. This could substantially raise both the efficiency and the quality ceiling of peer review.
Early outcomes are encouraging: AI reviewers are surpassing expectations, providing constructive feedback and catching fine-grained errors that most human reviewers miss.
Toward an AI-Reviewed Publication Model
Yang Yu further envisions extending this model into scientific publishing.
arXiv has become the de facto publishing platform of record in AI and many other fields, thanks to its speed and accessibility. Yet the absence of peer review leads to uneven quality, placing a heavy burden on researchers to filter information.
What if we could establish a publication platform fully reviewed by AI?
Such a platform could merge the efficiency of arXiv with the quality control of peer review. AI could rapidly evaluate novelty, rigor, and contribution, significantly raising the average quality of published works and easing the reading burden for human researchers. This would not merely refine existing models but potentially usher in a new academic publishing paradigm.
From Human Readers to AI Readers
Looking further ahead, Yang Yu argues that the primary audience of scientific papers may no longer be human at all. Human cognitive bandwidth is limited, but future AI systems could:
- Digest the entire body of literature in a field.
- Identify research gaps.
- Generate hypotheses.
- Design and conduct experiments.
- Write papers to share findings.
These outputs would primarily be read by other AIs, enabling an ever-learning, self-improving AI research network that accelerates the pace of discovery.
This is the essence of a “For AI, by AI” scientific model.
In such a paradigm, human scholars would shift roles—focusing less on executing research directly, and more on posing fundamental questions, setting research directions, and making value and ethical judgments.
Of course, realizing this vision requires addressing major challenges: AI reviewer bias, hallucinations, adversarial vulnerabilities, and the overall security of research processes. Still, Yang Yu suggests this may well be the future trajectory of science.
Reader Comments and Reactions
The article generated active discussion among readers, reflecting both enthusiasm and caution.
- sicer (Singapore) asked Prof. Yu about his personal experience with LLM-assisted writing, referencing his recent solo paper that acknowledged AI tools.
  - Yang Yu’s reply: “LLMs can only refine human writing. Don’t expect machines to master what humans themselves do not understand.”
- ArthurMoon (UK) suggested a hybrid model: “Perhaps one AI plus half a human reviewer, escalating to senior reviewers when there is a large disagreement. This could cut one-third of the workload.”
- 瀑风 (Beijing) simply expressed agreement with a “[赞同]” (“support”).
- 洪权 (Shanghai) noted: “If this becomes reality, many will be exposed. Superficial work could be easily filtered out.”
- lafmdp praised the constructiveness of the vision: “Very valuable ideas [thumbs up]. The bottleneck remains that general-purpose LLMs still lack critical thinking, but this is a capability that can certainly be trained if needed.”
Closing Thoughts
Prof. Yang Yu’s vision pushes beyond today’s conversations on AI detection and regulation in scientific writing. Instead, it paints a future where AI becomes both the producer and the consumer of scientific knowledge, leaving humans to shape the grand questions and oversee ethical boundaries.
Whether welcomed or resisted, the “For AI, by AI” model may mark the beginning of a profound transformation in how science itself is conducted.