ORCID, Peer Review, and the Metadata Crisis in Computer Science Research
-
In a recent Nature article titled “ORCID launched more than a decade ago, but has yet to fulfil its potential”, journalist Jackson Ryan re-ignites a conversation that has long simmered beneath the surface of academic life: how do we reliably track who did what in the sprawling, often messy world of scholarly publishing?
As computer science researchers immersed in an ecosystem that relies heavily on conference-based peer review, preprints, and an ever-evolving array of identity systems (Google Scholar, DBLP, arXiv, OpenReview, Semantic Scholar, etc.), it's worth asking: What is ORCID’s role in our domain, and should we care more?
The ORCID Promise vs. Practice
ORCID (Open Researcher and Contributor ID) was launched in 2012 with a straightforward but ambitious goal: disambiguate researchers by assigning them a unique, lifelong digital identifier. Whether your name is J. Müller or Wei Zhang, your work, grants, affiliations, and contributions could, in theory, be uniquely and reliably tracked across systems.
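The identifier itself is self-checking: the final character of a 16-digit ORCID iD is an ISO 7064 MOD 11-2 check character computed over the first fifteen digits, so malformed iDs can be rejected before any lookup. A minimal validator, sketched in Python:

```python
def orcid_check_digit(base_digits: str) -> str:
    """Compute the ISO 7064 MOD 11-2 check character
    for the first 15 digits of an ORCID iD."""
    total = 0
    for ch in base_digits:
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid: str) -> bool:
    """Validate a formatted ORCID iD such as '0000-0002-1825-0097'."""
    digits = orcid.replace("-", "")
    if len(digits) != 16:
        return False
    return orcid_check_digit(digits[:15]) == digits[15]

# '0000-0002-1825-0097' is ORCID's own documented example iD.
print(is_valid_orcid("0000-0002-1825-0097"))  # True
```

This catches transcription errors in submission forms, though of course it says nothing about whether the profile behind the iD is maintained.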
Fast forward to 2025, and the data paints a mixed picture:
- ~10 million active ORCID users globally
- Only 50% of those profiles contain publication data
- Only 35% contain both publication and affiliation data
- Many accounts are essentially empty placeholders, created out of submission requirements and then abandoned
The crux? Compliance is high, but engagement is low.
Why This Matters for Computer Science
Computer science differs from many other fields in two key ways:
- Conference-first culture: Most of our top-tier work is published in conference proceedings (e.g., NeurIPS, ICML, ACL, SIGCOMM) rather than traditional journals.
- High author churn and collaboration: With massive multi-author papers, rotating contributors, and a growing reliance on student/postdoc labor, attribution clarity becomes harder and more important.
Now layer on the anonymity of double-blind review and the growing use of automated reviewer matching systems. Reviewer-paper assignment tools depend heavily on accurate metadata (e.g., author identity, expertise, affiliations). If ORCID could provide standardized, up-to-date metadata across authors, this would directly benefit the peer review process.
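To make the dependency on metadata concrete, here is a toy reviewer-ranking heuristic based on keyword overlap. This is purely illustrative, not how OpenReview or TPMS actually score matches, and the names and keyword sets are invented; the point is that the output is only as good as the expertise metadata fed in:

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between two keyword sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_reviewers(paper_keywords: set[str], reviewer_profiles: dict) -> list:
    """Rank reviewers by keyword overlap with a submission.
    reviewer_profiles maps reviewer name -> set of expertise keywords,
    e.g. as might be derived from an up-to-date ORCID/DBLP record."""
    scores = {name: jaccard(paper_keywords, kws)
              for name, kws in reviewer_profiles.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical submission and reviewer pool.
paper = {"peer review", "metadata", "nlp"}
reviewers = {
    "alice": {"nlp", "machine translation", "metadata"},
    "bob": {"networking", "sigcomm"},
}
print(rank_reviewers(paper, reviewers))  # ['alice', 'bob']
```

If "alice" last updated her profile five years ago, the same code happily ranks her against expertise she no longer has — garbage in, garbage out.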
Yet, most computer science researchers do not keep their ORCID profiles up-to-date, and major conference platforms (OpenReview, HotCRP) only partially integrate ORCID, if at all.
What If ORCID Actually Worked?
Imagine a scenario where:
- You submit a paper to ACL 2026.
- Your ORCID profile, already linked to DBLP and Google Scholar, automatically syncs your recent publications and affiliations.
- The reviewer bidding system uses this metadata to assign reviewers based on real, accurate expertise.
- Once the paper is accepted, your ORCID is used to automatically update your academic record—no manual BibTeX wrangling required.
This is not sci-fi. Countries like Portugal have already integrated ORCID into national research CV systems (e.g., CIÊNCIAVITAE), enabling automatic syncing between funding, publication, and institutional databases.
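The read side of such syncing would rest on ORCID's public API, which serves any researcher's record as JSON from `pub.orcid.org`. A minimal sketch (the endpoint shown is ORCID's real public API base; authentication-free public reads are supported, and parsing of the returned record is omitted):

```python
import json
import urllib.request

PUB_API = "https://pub.orcid.org/v3.0"

def record_url(orcid_id: str) -> str:
    """Build the public-API URL for a researcher's full record."""
    return f"{PUB_API}/{orcid_id}/record"

def fetch_record(orcid_id: str) -> dict:
    """Fetch an ORCID record as JSON (requires network access)."""
    req = urllib.request.Request(
        record_url(orcid_id),
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

print(record_url("0000-0002-1825-0097"))
# https://pub.orcid.org/v3.0/0000-0002-1825-0097/record
```

The plumbing, in other words, already exists; what is missing is the push side — conference platforms and researchers actually writing fresh data into these records.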
For the rest of us, however, the reality is more fragmented.
Discussion: Whose Responsibility Is This?
This leads to a set of open questions that we should discuss as a community:
- Should CS conferences require and enforce meaningful ORCID integration for all authors and reviewers?
- Can publication platforms (e.g., OpenReview) be incentivized to auto-sync with ORCID?
- Could institutions and funding agencies mandate ORCID updates as part of yearly reviews or grant reporting?
- Is the problem cultural (low incentive for self-maintenance) or technical (poor system interoperability)?
To Summarize
For all its limitations, ORCID still represents one of the few global infrastructures attempting to bring order to the chaos of academic identity. In a field like computer science, where fast-paced publishing meets large-scale collaboration, the stakes for getting attribution and expertise right are only growing.
Let’s not let ORCID become just another checkbox we tick during paper submission. If we believe in the integrity of our peer review systems — and the fairness of credit assignment — we owe it to ourselves to take this seriously.
Questions for you:
How do you use ORCID in your work? Have you had experiences (positive or negative) with how identity systems affect peer review? What would make you more likely to keep your ORCID profile updated?